Rack::AI
A next-generation RubyGem that extends Rack with AI-powered capabilities for request classification, content moderation, security analysis, and intelligent caching.
Support My Open Source Work
Love this project? Buy me a coffee and help me continue building innovative security solutions!
Your support makes a difference! Every contribution helps me dedicate more time to open source development.
Hey, This is Ahmet KAHRAMAN
Mobile Developer & Cyber Security Expert
Living in Ankara, Turkey
10+ years of experience in Public Sector IT
Mantra: "Security first, innovation always"
Connect With Me
| Platform | Link | Description |
|---|---|---|
| Portfolio | ahmetxhero.web.app | My home on the net |
| LinkedIn | linkedin.com/in/ahmetxhero | Professional network |
| YouTube | @ahmetxhero | Haven't you subscribed yet? |
| Medium | ahmetxhero.medium.com | Technical articles |
| Twitter | @ahmetxhero | Tech thoughts & updates |
| Instagram | instagram.com/ahmetxhero | Behind the scenes |
| Twitch | twitch.tv/ahmetxhero | Live coding sessions |
| Dribbble | dribbble.com/ahmetxhero | Design portfolio |
| Figma | figma.com/@ahmetxhero | UI/UX designs |
| Email | ahmetxhero@gmail.com | You can reach me here |
Tech Stack & Expertise
Mobile Development
- iOS: Swift, Objective-C, UIKit, SwiftUI
- Android: Java, Kotlin, Android SDK
- Cross-Platform: Flutter, React Native
- Frameworks: Xamarin, Ionic
Cybersecurity & Forensics
- Digital Forensics: EnCase, FTK, Autopsy, Volatility
- Penetration Testing: Metasploit, Burp Suite, Nmap
- Security Tools: Wireshark, OWASP ZAP, Nessus
- Programming: Python, PowerShell, Bash
System Administration
- Operating Systems: Windows Server, Linux (Ubuntu, CentOS)
- Virtualization: VMware, Hyper-V, VirtualBox
- Cloud Platforms: AWS, Azure, Google Cloud
- Networking: TCP/IP, DNS, DHCP, VPN
Professional Experience
Current Role
Mobile Developer / Mobile System Operator
Gendarmerie General Command | July 2024 - Present
Ankara, Turkey
Previous Experience
- IT Expert | Gendarmerie General Command (2015-2024)
- Founder | Sadık Internet Cafe (2012-2014)
Education
- Master's Degree in Forensic Informatics | Gazi University (2021-2023)
- Bachelor's Degree in Health Management | Anadolu University (2017-2021)
- Associate's Degree in Computer Programming | Atatürk University (2020-2023)
- Associate's Degree in Occupational Health & Safety | Atatürk University (2016-2019)
Certifications & Achievements
- Microsoft Certified Professional
- Certified Ethical Hacker (CEH)
- Digital Forensics Expert
- iOS & Swift Development
- Flutter Development
- Android Development
Current Focus
- Cybersecurity: Developing secure mobile applications
- Mobile Development: Creating innovative iOS and Android apps
- Digital Forensics: Advancing forensic investigation techniques
- Knowledge Sharing: Contributing to the tech community
- Open Source: Building tools for the developer community
Speaking & Content Creation
- I have my own YouTube channel - @ahmetxhero
- I enjoy speaking at tech events. Interested in having me speak at your event?
- I'm passionate about mobile development and cybersecurity solutions
"Building secure, innovative solutions for a better digital future"
Features
- Request Classification: Automatically classify requests as human, bot, spam, or suspicious
- Security Analysis: Detect SQL injection, XSS, prompt injection, and other security threats
- Content Moderation: Real-time toxicity and policy violation detection
- Smart Caching: AI-powered predictive caching and prefetching
- Intelligent Routing: Route requests based on AI analysis results (see the sketch after this list)
- Enhanced Logging: AI-generated insights and traffic pattern analysis
- Content Enhancement: Automatic SEO, readability, and accessibility improvements
- Multiple AI Providers: Support for OpenAI, HuggingFace, and local AI models
- Production Ready: Fail-safe mode, async processing, and comprehensive monitoring
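The middleware publishes its analysis on the Rack env under the rack.ai key (see Quick Start below). As a minimal sketch of the routing idea, assuming the result shape shown later in this README, an app could branch on the classification result like this; AIRouter and the downstream apps are hypothetical, not part of the gem:

class AIRouter
  def initialize(human_app, bot_app)
    @human_app = human_app
    @bot_app = bot_app
  end

  def call(env)
    results = env['rack.ai'] && env['rack.ai'][:results]
    classification = results && results.dig(:classification, :classification)
    case classification
    when :bot, :spam
      @bot_app.call(env)   # lightweight or rate-limited path for automated traffic
    else
      @human_app.call(env) # default path, also taken when AI results are absent
    end
  end
end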
Installation
Add this line to your application's Gemfile:
gem 'rack-ai'
And then execute:
$ bundle install
Or install it yourself as:
$ gem install rack-ai
Quick Start
Basic Usage
require 'rack/ai'
# Configure Rack::AI
Rack::AI.configure do |config|
config.provider = :openai
config.api_key = ENV['OPENAI_API_KEY']
config.features = [:classification, :security, :moderation]
end
# Add to your Rack application
use Rack::AI::Middleware
Rails Integration
# config/application.rb
class Application < Rails::Application
config.middleware.use Rack::AI::Middleware,
provider: :openai,
api_key: ENV['OPENAI_API_KEY'],
features: [:classification, :moderation, :security]
end
Sinatra Integration
require 'sinatra'
require 'rack/ai'
class MyApp < Sinatra::Base
use Rack::AI::Middleware,
provider: :openai,
api_key: ENV['OPENAI_API_KEY'],
features: [:classification, :security]
get '/' do
ai_results = request.env['rack.ai'][:results]
"Classification: #{ai_results[:classification][:classification]}"
end
end
Configuration
Global Configuration
Rack::AI.configure do |config|
# Provider settings
config.provider = :openai # :openai, :huggingface, :local
config.api_key = ENV['OPENAI_API_KEY']
config.timeout = 30
config.retries = 3
# Feature toggles
config.features = [:classification, :moderation, :security, :caching]
config.fail_safe = true
config.async_processing = true
# Security settings
config.sanitize_logs = true
config.allowed_data_types = [:headers, :query_params]
# Feature-specific configuration
config.classification.confidence_threshold = 0.8
config.moderation.toxicity_threshold = 0.7
config.caching.redis_url = ENV['REDIS_URL']
end
Environment-Specific Configuration
# config/environments/production.rb
Rack::AI.configure do |config|
config.provider = :openai
config.timeout = 10 # Shorter timeout in production
config.explain_decisions = false
config.sanitize_logs = true
end
# config/environments/development.rb
Rack::AI.configure do |config|
config.provider = :local
config.api_url = 'http://localhost:8080'
config.explain_decisions = true
config.log_level = :debug
end
AI Providers
OpenAI
Rack::AI.configure do |config|
config.provider = :openai
config.api_key = ENV['OPENAI_API_KEY']
# Optional: config.api_url = 'https://api.openai.com/v1'
end
HuggingFace
Rack::AI.configure do |config|
config.provider = :huggingface
config.api_key = ENV['HUGGINGFACE_API_KEY']
end
Local AI Model
Rack::AI.configure do |config|
config.provider = :local
config.api_url = 'http://localhost:8080'
end
Features in Detail
Request Classification
Automatically classifies incoming requests:
# Access classification results
ai_results = request.env['rack.ai'][:results]
classification = ai_results[:classification]
case classification[:classification]
when :human
# Handle human user
when :bot
# Handle bot request
when :spam
# Block or handle spam
when :suspicious
# Apply additional security measures
end
Security Analysis
Detects various security threats:
security = ai_results[:security]
if security[:threat_level] == :high
# Block request immediately
halt 403, 'Security threat detected'
elsif security[:threat_level] == :medium
# Apply additional verification
require_captcha
end
# Check specific threats
if security[:injection_detection][:threats].include?('sql_injection')
log_security_incident(request)
end
Content Moderation
Real-time content analysis:
moderation = ai_results[:moderation]
if moderation[:flagged]
categories = moderation[:categories]
if categories['hate']
reject_content('Hate speech detected')
elsif categories['violence']
flag_for_review(content)
end
end
Smart Caching
AI-powered caching decisions:
caching = ai_results[:caching]
if caching[:should_prefetch]
# Schedule prefetching for predicted requests
PrefetchJob.perform_later(caching[:pattern_analysis])
end
if caching[:cache_hit]
# Use cached AI analysis
cached_result = caching[:cached_result]
end
Advanced Usage
Custom AI Processing
class CustomAIMiddleware
def initialize(app)
@app = app
end
def call(env)
# Add custom context for AI processing
env['rack.ai.custom'] = {
user_tier: extract_user_tier(env),
api_version: extract_api_version(env)
}
status, headers, body = @app.call(env)
# Process AI results
ai_results = env['rack.ai'][:results]
handle_ai_insights(ai_results)
[status, headers, body]
end
end
Conditional AI Processing
use Rack::AI::Middleware,
provider: :openai,
api_key: ENV['OPENAI_API_KEY'],
features: [:classification, :security],
condition: ->(env) {
# Only process API requests
env['PATH_INFO'].start_with?('/api/')
}
Multiple AI Providers
# Primary provider with fallback
primary_config = {
provider: :openai,
api_key: ENV['OPENAI_API_KEY']
}
fallback_config = {
provider: :local,
api_url: 'http://localhost:8080'
}
use Rack::AI::Middleware, primary_config
use Rack::AI::FallbackMiddleware, fallback_config
Monitoring and Metrics
Built-in Metrics
# Access metrics
metrics = Rack::AI::Utils::Metrics
# Request processing metrics
processing_time = metrics.get_histogram_stats('rack_ai.processing_time')
request_count = metrics.get_counter('rack_ai.requests_processed')
# Feature-specific metrics
classification_accuracy = metrics.get_histogram_stats('rack_ai.feature.classification.confidence')
security_threats = metrics.get_counter('rack_ai.feature.security.threats_detected')
Prometheus Integration
# Export metrics in Prometheus format
get '/metrics' do
content_type 'text/plain'
Rack::AI::Utils::Metrics.export_prometheus_format
end
Custom Logging
# Enhanced logging with AI context
class AILogger < Rack::AI::Utils::Logger
def self.log_request(env, ai_results)
info('AI-enhanced request processed', {
path: env['PATH_INFO'],
classification: ai_results[:classification][:classification],
threat_level: ai_results[:security][:threat_level],
processing_time: ai_results[:processing_time]
})
end
end
Performance Considerations
Async Processing
# Enable async processing for better performance
Rack::AI.configure do |config|
config.async_processing = true
config.timeout = 5 # Shorter timeout for async
end
Feature Selection
# Enable only necessary features
Rack::AI.configure do |config|
# Lightweight configuration
config.features = [:classification, :security]
# Heavy configuration (includes caching, enhancement)
# config.features = [:classification, :moderation, :security, :caching, :enhancement]
end
Caching AI Results
# Cache AI analysis results
Rack::AI.configure do |config|
config.cache_enabled = true
config.cache_ttl = 3600 # 1 hour
config.caching.redis_url = ENV['REDIS_URL']
end
Error Handling
Fail-Safe Mode
# Graceful degradation when AI services are unavailable
Rack::AI.configure do |config|
config.fail_safe = true # Continue processing even if AI fails
config.timeout = 10 # Reasonable timeout
config.retries = 2 # Limited retries
end
Custom Error Handling
class AIErrorHandler
def initialize(app)
@app = app
end
def call(env)
@app.call(env)
rescue Rack::AI::ProviderError => e
# Handle AI provider errors
[503, {}, ['AI service temporarily unavailable']]
rescue Rack::AI::ConfigurationError => e
# Handle configuration errors
[500, {}, ['AI configuration error']]
end
end
Security Best Practices
Data Sanitization
Rack::AI.configure do |config|
# Sanitize sensitive data before sending to AI
config.sanitize_logs = true
config.allowed_data_types = [:headers, :query_params]
config.blocked_data_types = [:body, :cookies]
end
API Key Management
# Use environment variables for API keys
Rack::AI.configure do |config|
config.api_key = ENV['OPENAI_API_KEY']
# Never hardcode API keys in your application
end
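A small fail-fast guard at boot (plain Ruby, not part of Rack::AI) makes a missing key obvious before the first request:

# Raise at boot if the key is missing, rather than failing per request
raise 'OPENAI_API_KEY is not set' if ENV['OPENAI_API_KEY'].to_s.empty?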
Rate Limiting
Rack::AI.configure do |config|
config.rate_limit = 1000 # Requests per minute
config.security.rate_limit = 100 # Lower limit for security analysis
end
Testing
Test Configuration
# spec/spec_helper.rb
RSpec.configure do |config|
config.before(:each) do
    Rack::AI.configure do |ai_config| # renamed to avoid shadowing RSpec's config
      ai_config.provider = :local
      ai_config.api_url = 'http://localhost:8080'
      ai_config.features = [] # Disable AI in tests
      ai_config.fail_safe = true
    end
end
end
Mocking AI Responses
# Mock AI provider responses
RSpec.describe 'AI Integration' do
before do
allow_any_instance_of(Rack::AI::Providers::OpenAI)
.to receive(:classify_request)
.and_return({
classification: :human,
confidence: 0.9,
provider: :openai
})
end
it 'processes requests with AI' do
get '/api/test'
expect(last_response.headers['X-AI-Classification']).to eq('human')
end
end
Deployment
Production Checklist
- Set appropriate API keys via environment variables
- Configure fail-safe mode and timeouts
- Enable metrics and monitoring
- Set up Redis for caching (if using caching features)
- Configure log sanitization
- Test AI provider connectivity (see the sketch after this list)
- Set up alerting for AI service failures
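For the connectivity item above, a minimal boot-time check might look like the following (plain Ruby, not part of Rack::AI; the RACK_AI_PROVIDER_URL variable and default URL are assumptions):

require 'net/http'

# Ping the configured AI endpoint once at boot and warn loudly on failure.
def verify_ai_provider!(url)
  response = Net::HTTP.get_response(URI(url))
  warn "AI provider returned #{response.code}" unless response.is_a?(Net::HTTPSuccess)
rescue StandardError => e
  warn "AI provider check failed: #{e.message}"
end

verify_ai_provider!(ENV.fetch('RACK_AI_PROVIDER_URL', 'http://localhost:8080'))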
Docker Configuration
# Dockerfile
FROM ruby:3.2
# Install dependencies
COPY Gemfile* ./
RUN bundle install
# Copy application
COPY . .
# Do not bake secrets into the image: ENV OPENAI_API_KEY=${OPENAI_API_KEY} would
# persist an (often empty) build-time value. Pass secrets at runtime instead,
# e.g. docker run -e OPENAI_API_KEY=... -e REDIS_URL=...
ENV RACK_AI_LOG_LEVEL=info
CMD ["bundle", "exec", "rackup"]
Kubernetes Deployment
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: rack-ai-app
spec:
replicas: 3
selector:
matchLabels:
app: rack-ai-app
template:
metadata:
labels:
app: rack-ai-app
spec:
containers:
- name: app
image: your-app:latest
env:
- name: OPENAI_API_KEY
valueFrom:
secretKeyRef:
name: ai-secrets
key: openai-api-key
- name: REDIS_URL
value: "redis://redis-service:6379"
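The ai-secrets Secret referenced above can be created ahead of time, for example:

kubectl create secret generic ai-secrets \
  --from-literal=openai-api-key="$OPENAI_API_KEY"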
Contributing
- Fork the repository
- Create your feature branch (git checkout -b feature/amazing-feature)
- Commit your changes (git commit -am 'Add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
Development Setup
git clone https://github.com/rack-ai/rack-ai.git
cd rack-ai
bundle install
bundle exec rspec
bundle exec rubocop
Running Benchmarks
bundle exec ruby benchmarks/performance_benchmark.rb
License
This gem is available as open source under the terms of the MIT License.
Support
- Documentation
- Issue Tracker
- Discussions
- Email Support
Roadmap
See our roadmap for planned features and improvements.