# Friday Gemini AI
Ruby gem for integrating with Google's Gemini AI models.
The full API of this library can be found in docs/reference/api.md.
## Installation
```bash
gem install friday_gemini_ai
```

Set your API key in `.env`:

```bash
GEMINI_API_KEY=your_api_key
```
> **Note**
> Ensure your API key is kept secure and not committed to version control.
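For example, you can keep the key out of Git by ignoring the `.env` file:

```bash
echo ".env" >> .gitignore
```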
## HarperBot Integration
HarperBot provides automated PR code reviews using Google's Gemini AI. It supports two deployment modes:
### Webhook Mode (Recommended)
- Self-hosted on Vercel for centralized, scalable analysis
- Install the HarperBot GitHub App once for all repositories
- No per-repository setup required
- Note: For self-hosting outside Vercel, use a production WSGI server like Gunicorn instead of Flask's development server for security and performance.
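As a hedged illustration only (the actual WSGI entry point for HarperBot is not documented here, so the module path below is an assumption):

```bash
# Run the webhook app under Gunicorn instead of the Flask dev server
gunicorn --bind 0.0.0.0:8080 --workers 2 app:app
```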
### Workflow Mode (Legacy)
- Repository-specific GitHub Actions workflow
- Requires secrets setup per repository
- Automated setup:
  ```bash
  curl -fsSL https://raw.githubusercontent.com/bniladridas/friday_gemini_ai/main/bin/setup-harperbot | bash
  ```

  (use `--update` to update, `--dry-run` to preview)
- Note: This is legacy mode for existing users. New installations should use Webhook Mode for better scalability and centralized management.
For detailed setup instructions, see harperbot/HarperBot.md.
## Usage
### Basic Setup
> **Security note for the automated setup:** the recommended `curl | bash` method downloads and executes code from the internet. For security, review the script at https://github.com/bniladridas/friday_gemini_ai/blob/main/bin/setup-harperbot before running. Alternatively, download it first with `curl -O https://raw.githubusercontent.com/bniladridas/friday_gemini_ai/main/bin/setup-harperbot`, inspect it, then run `bash setup-harperbot`.
```ruby
require 'friday_gemini_ai'

GeminiAI.load_env
client = GeminiAI::Client.new                      # Default: gemini-2.5-pro
fast_client = GeminiAI::Client.new(model: :flash)
```

### Model Reference
| Key | ID | Use case |
|---|---|---|
| `:pro` | `gemini-2.5-pro` | Most capable, complex reasoning |
| `:flash` | `gemini-2.5-flash` | Fast, general-purpose |
| `:flash_2_0` | `gemini-2.0-flash` | Legacy support |
| `:flash_lite` | `gemini-2.0-flash-lite` | Lightweight legacy |
## Capabilities
- Text: content generation, summaries, documentation
- Chat: multi-turn Q&A and assistants
- Image: image-to-text analysis
- CLI: for quick prototyping and automation
## Features
- Multiple Model Support: Gemini 2.5 + 2.0 families with automatic fallback
- Text Generation: configurable parameters, safety settings
- Image Analysis: base64 image input, detailed descriptions
- Chat: context retention, system instructions
- Security: API key masking, retries, and rate limits (1s default, 3s CI)
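Several of these features map to client options documented later in this README. A minimal sketch that combines them (whether they can all be passed to one constructor call is an assumption, not confirmed by the docs):

```ruby
client = GeminiAI::Client.new(
  model: :flash,     # model key (see the Model Reference table)
  max_retries: 2,    # retry behavior (see "Retries")
  timeout: 60,       # request timeout in seconds (see "Timeouts")
  log_level: 'warn'  # logging verbosity (see "Logging")
)
```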
## Handling errors
When the library is unable to connect to the API, or if the API returns a non-success status code (i.e., a 4xx or 5xx response), a subclass of `GeminiAI::APIError` will be raised:
```ruby
begin
  response = client.generate_text('Hello')
rescue GeminiAI::APIError => err
  puts err.status  # 400
  puts err.name    # BadRequestError
  puts err.headers # {server: 'nginx', ...}
end
```

Error codes are as follows:
| Status Code | Error Type |
|---|---|
| 400 | BadRequestError |
| 401 | AuthenticationError |
| 403 | PermissionDeniedError |
| 404 | NotFoundError |
| 422 | UnprocessableEntityError |
| 429 | RateLimitError |
| >=500 | InternalServerError |
| N/A | APIConnectionError |
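The table lists error class names; assuming they are namespaced under `GeminiAI::` (an assumption based on `GeminiAI::APIError` above), you can rescue a specific class, for example to back off on rate limits:

```ruby
attempts = 0
begin
  client.generate_text('Hello')
rescue GeminiAI::RateLimitError
  # 429: back off briefly, then retry a limited number of times
  attempts += 1
  if attempts <= 3
    sleep(2**attempts)
    retry
  end
  raise
rescue GeminiAI::APIError => err
  puts "API error #{err.status}: #{err.name}"
end
```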
### Retries
Certain errors will be automatically retried 2 times by default, with a short exponential backoff. Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict, 429 Rate Limit, and >=500 Internal errors will all be retried by default.
You can use the max_retries option to configure or disable this:
```ruby
# Configure the default for all requests:
client = GeminiAI::Client.new(max_retries: 0) # default is 2

# Or, configure per-request:
client.generate_text('Hello', max_retries: 5)
```
### Timeouts
Requests time out after 60 seconds by default. You can configure this with a `timeout` option:
```ruby
# Configure the default for all requests:
client = GeminiAI::Client.new(timeout: 20) # 20 seconds (default is 60)
# Override per-request:
client.generate_text('Hello', timeout: 5)
```

On timeout, an `APIConnectionTimeoutError` is raised.
Note that requests which time out will be retried twice by default.
## Advanced Usage
### Logging
> **Important**
> All log messages are intended for debugging only. The format and content of log messages may change between releases.
#### Log levels
The log level can be configured via the `GEMINI_LOG_LEVEL` environment variable or the `log_level` client option.
Available log levels, from most to least verbose:
- `'debug'` - Show debug messages, info, warnings, and errors
- `'info'` - Show info messages, warnings, and errors
- `'warn'` - Show warnings and errors (default)
- `'error'` - Show only errors
- `'off'` - Disable all logging
```ruby
require 'friday_gemini_ai'

client = GeminiAI::Client.new(log_level: 'debug') # Show all log messages
```
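The same level can be set via the environment variable instead of the client option:

```bash
export GEMINI_LOG_LEVEL=debug
```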
## Semantic versioning
This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:
1. Changes that only affect static types, without breaking runtime behavior.
2. Changes to library internals which are technically public but not intended or documented for external use. _(Please open a GitHub issue to let us know if you are relying on such internals.)_
3. Changes that we do not expect to impact the vast majority of users in practice.
We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.
We are keen for your feedback; please open an [issue](https://github.com/bniladridas/friday_gemini_ai/issues) with questions, bugs, or suggestions.
## Requirements
Ruby 3.0 or later is supported.
The following runtimes are supported:
- Ruby 3.0+
- JRuby (compatible versions)
- TruffleRuby (compatible versions)
Note that Windows support is limited; Linux and macOS are recommended.
## Migration Guide
Gemini 1.5 models have been deprecated.
Use:
* `:pro` → `gemini-2.5-pro`
* `:flash` → `gemini-2.5-flash`
Legacy options (`:flash_2_0`, `:flash_lite`) remain supported for backward compatibility.
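For example, constructing clients with the current keys (using the `model:` option shown under Basic Setup):

```ruby
pro_client   = GeminiAI::Client.new(model: :pro)    # gemini-2.5-pro
flash_client = GeminiAI::Client.new(model: :flash)  # gemini-2.5-flash
```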
## Environment Variables
```bash
# Required
GEMINI_API_KEY=your_api_key_here
# Optional
GEMINI_LOG_LEVEL=debug # debug | info | warn | error
```

## CLI Shortcuts
```bash
./bin/gemini test
./bin/gemini generate "Your prompt"
./bin/gemini chat
```

## GitHub Actions Integration
Friday Gemini AI includes a built-in GitHub Actions workflow for automated PR reviews via HarperBot, powered by Gemini AI.
> 💡 Install the HarperBot GitHub App for automated PR reviews across repositories.
### HarperBot – Automated PR Analysis
HarperBot provides AI-driven code review and analysis directly in pull requests.
Key Capabilities:
- Configurable focus: `all`, `security`, `performance`, `quality`
- Code quality, documentation, and test coverage analysis
- Security & performance issue detection
- Inline review comments with actionable suggestions
- Clean, minimal, and structured feedback output
### Setup
#### Workflow Mode (default)
- Add repository secrets:
  - `GEMINI_API_KEY`
  - `GITHUB_TOKEN` (auto-provided by GitHub)
- Configure `.github/workflows/harperbot.yml`
- Optional: tune behavior via `harperbot/config.yaml` (see the illustrative sketch below)
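The `harperbot/config.yaml` schema is documented in harperbot/HarperBot.md; as a purely hypothetical sketch (the key name below is an assumption, only the focus values come from the capabilities list above):

```yaml
# Hypothetical example - see harperbot/HarperBot.md for the real schema
focus: security   # one of: all, security, performance, quality
```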
#### Webhook Mode (Recommended)

- Deploy the `webhook-vercel` branch to Vercel
- Create a GitHub App and set environment variables:
  - `GEMINI_API_KEY`: Your Google Gemini API key
  - `HARPER_BOT_APP_ID`: App ID from your GitHub App settings (found under the "About" section)
  - `HARPER_BOT_PRIVATE_KEY`: Private key content (paste the entire .pem file)
  - `WEBHOOK_SECRET`: Random secret string for webhook verification
- Install the GitHub App on your repositories
- Webhooks will handle PR events automatically
- Preferred for scalability and centralized management
### Workflow Highlights
- Pull Requests: triggered on open, update, or reopen
- Push to main: runs Gemini CLI verification
- Concurrency control: cancels redundant runs for efficiency
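A hedged sketch of how these triggers and the concurrency control might be expressed in a GitHub Actions workflow (illustrative syntax, not the shipped `harperbot.yml`):

```yaml
on:
  pull_request:
    types: [opened, synchronize, reopened]
  push:
    branches: [main]

concurrency:
  group: harperbot-${{ github.ref }}
  cancel-in-progress: true
```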
Required permissions:
```yaml
permissions:
  contents: read
  pull-requests: write
  issues: write
  statuses: write
```

## Local Development & Testing
```bash
bundle exec rake test     # Run tests
bundle exec rake rubocop  # Optional lint check
gem build *.gemspec       # Verify build
```

### Test Workflows Locally
Using [`act`](https://github.com/nektos/act):
```bash
brew install act
act -j test --container-architecture linux/amd64
```

## Examples
### Text Generation
```ruby
client = GeminiAI::Client.new
puts client.generate_text('Write a haiku about Ruby')
```

### Image Analysis
```ruby
require 'base64'

image_data = Base64.strict_encode64(File.binread('path/to/image.jpg'))
puts client.generate_image_text(image_data, 'Describe this image')
```

### Chat
```ruby
messages = [
  { role: 'user', content: 'Hello!' },
  { role: 'model', content: 'Hi there!' },
  { role: 'user', content: 'Tell me about Ruby.' }
]

puts client.chat(messages, system_instruction: 'Be helpful and concise.')
```

## Conventional Commits
Consistent commit messages are enforced via a local Git hook.
```bash
cp scripts/commit-msg .git/hooks/
chmod +x .git/hooks/commit-msg
```

Types: `feat`, `fix`, `docs`, `style`, `refactor`, `test`, `chore`

Example:

```bash
git commit -m "feat: add user authentication"
```

## Documentation

The full API reference lives in docs/reference/api.md; HarperBot setup details are in harperbot/HarperBot.md.
## Contributing
Fork → Branch → Commit → Pull Request.
## License
MIT – see LICENSE.