# PgReports

A comprehensive PostgreSQL monitoring and analysis library for Rails applications. Get insights into query performance, index usage, table statistics, connection health, and more. Includes a beautiful web dashboard and Telegram integration for notifications.
## Features
- Query Analysis - Identify slow, heavy, and expensive queries using `pg_stat_statements`
- Index Analysis - Find unused, duplicate, invalid, and missing indexes
- Table Statistics - Monitor table sizes, bloat, vacuum needs, and cache hit ratios
- Connection Monitoring - Track active connections, locks, and blocking queries
- System Overview - Database sizes, PostgreSQL settings, installed extensions
- Web Dashboard - Beautiful dark-themed UI with sortable tables and expandable rows
- Telegram Integration - Send reports directly to Telegram
- Grafana / Prometheus Exporter - Expose selected reports at `/metrics` with severity derived from configured thresholds
- Export - Download reports in TXT, CSV, or JSON format
- IDE Integration - Open source locations in VS Code, Cursor, RubyMine, or IntelliJ (with WSL support)
- Comparison Mode - Save records to compare before/after optimization
- EXPLAIN ANALYZE - Advanced query plan analyzer with problem detection and recommendations
- SQL Query Monitoring - Real-time monitoring of all executed SQL queries with source location tracking
- Connection Pool Analytics - Monitor pool usage, wait times, saturation warnings, and connection churn
- AI Prompt Export - Copy a ready-to-paste prompt for Claude Code, Cursor, or Codex with problem context and report data
- Migration Generator - Generate Rails migrations to drop unused indexes
## Installation

```ruby
# Gemfile
gem "pg_reports"
gem "telegram-bot-ruby" # optional, for Telegram delivery
```

Run `bundle install`, then mount the dashboard:
```ruby
# config/routes.rb
Rails.application.routes.draw do
  if Rails.env.development?
    mount PgReports::Engine, at: "/pg_reports"
  end

  # Or with authentication:
  # authenticate :user, ->(u) { u.admin? } do
  #   mount PgReports::Engine, at: "/pg_reports"
  # end
end
```

Visit http://localhost:3000/pg_reports.
For query analysis, also enable `pg_stat_statements` (see setup below).
## Usage

```ruby
# In console or code
PgReports.slow_queries.display
PgReports.unused_indexes.each { |row| puts row["index_name"] }

# Export
report = PgReports.expensive_queries
report.to_text
report.to_csv
report.to_a

# Telegram
PgReports.slow_queries.send_to_telegram
```

## Configuration
```ruby
# config/initializers/pg_reports.rb
PgReports.configure do |config|
  # Telegram (optional)
  config.telegram_bot_token = ENV["PG_REPORTS_TELEGRAM_TOKEN"]
  config.telegram_chat_id = ENV["PG_REPORTS_TELEGRAM_CHAT_ID"]

  # Thresholds
  config.slow_query_threshold_ms = 100
  config.heavy_query_threshold_calls = 1000
  config.expensive_query_threshold_ms = 10_000
  config.unused_index_threshold_scans = 50
  config.bloat_threshold_percent = 20
  config.dead_rows_threshold = 10_000

  # Output
  config.max_query_length = 200

  # Auth (optional)
  config.dashboard_auth = -> {
    authenticate_or_request_with_http_basic do |user, pass|
      user == ENV["PG_REPORTS_USER"] && pass == ENV["PG_REPORTS_PASSWORD"]
    end
  }

  # Google Fonts (default: false, no external requests)
  config.load_external_fonts = false
end
```

PgReports follows your application's `I18n.locale`. Set it the way you set it for the rest of the app; there is no PgReports-specific knob. The dashboard supports `en`, `ru`, and `uk` out of the box.
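For example, a standard Rails locale setting is all that's needed (nothing PgReports-specific; `:ru` here is just an illustration):

```ruby
# config/application.rb
config.i18n.available_locales = [:en, :ru, :uk]
config.i18n.default_locale = :ru # dashboard follows this automatically
```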
⚠️ Disabled by default. The dashboard's "Execute Query" and "EXPLAIN ANALYZE" buttons require this opt-in.
```ruby
PgReports.configure do |config|
  config.allow_raw_query_execution = Rails.env.development? || Rails.env.staging?
end
```

PgReports parses query annotations to show where queries originated. On Rails 7.0+, use the built-in `ActiveRecord::QueryLogs` (no extra gem needed). On older Rails, install Marginalia; PgReports auto-detects both formats.
Minimal setup (adds controller/action):

```ruby
# config/application.rb
config.active_record.query_log_tags_enabled = true
config.active_record.query_log_tags = [:controller, :action]
```

To also surface file path and line number (so source links jump to the actual call site, not just the controller), add a custom `source_location` lambda that walks `caller_locations` and skips gem/framework frames:
```ruby
# config/application.rb
config.active_record.query_log_tags_enabled = true
config.active_record.query_log_tags = [
  :controller,
  :action,
  :job,
  {
    source_location: -> {
      ignore = %r{/(gems|active_record|active_support|active_model|railties|
                    action_controller|action_view|action_pack|action_dispatch|
                    rack|core_ext|relation|associations|scoping|connection_adapters)/}x
      loc = caller_locations.find { |l| !l.path.match?(ignore) }
      "#{loc.path}:#{loc.lineno}" if loc
    }
  }
]
```

PgReports recognizes the `source_location` tag and splits it into file and line for the source column.
## pg_stat_statements setup

1. Edit `postgresql.conf`:

   ```
   shared_preload_libraries = 'pg_stat_statements'
   pg_stat_statements.track = all
   ```

2. Restart PostgreSQL: `sudo systemctl restart postgresql`
3. Create the extension (via the dashboard button or `PgReports.enable_pg_stat_statements!`).

PgReports does not require the `pg_read_all_settings` role; extension availability is detected directly. Works with CloudNativePG, managed databases, and other restricted environments.
## Report object

Every method returns a `PgReports::Report`:

```ruby
report = PgReports.slow_queries
report.title         # "Slow Queries (mean time >= 100ms)"
report.data          # Array of hashes
report.columns       # Column names
report.size          # Row count
report.empty?        # Boolean
report.generated_at  # Timestamp

# Output formats
report.to_text      # Plain text table
report.to_markdown  # Markdown table
report.to_html      # HTML table
report.to_csv       # CSV
report.to_a         # Raw data

# Actions
report.display                   # Print to STDOUT
report.send_to_telegram          # Send as message
report.send_to_telegram_as_file  # Send as file attachment

# Enumerable
report.each { |row| puts row }
report.map { |row| row["query"] }
report.select { |row| row["calls"] > 100 }
```

## Dashboard features
The dashboard provides one-click execution, sortable columns, expandable rows, filter parameters, multi-format export, Telegram delivery, and pg_stat_statements management.
Expand a row with a query and click EXPLAIN ANALYZE. Shows:

- Status indicator (🟢/🟡/🔴) - overall query health
- Key metrics - planning/execution time, cost, rows
- Detected problems - sequential scans on large tables, high-cost operations, sorts spilling to disk, slow sorts (>1s), inaccurate row estimates (>10× off), slow execution
- Recommendations for each issue
- Color-coded plan - node types tinted by performance impact (green: efficient, blue: normal, yellow: potential issue)
- Line annotations highlighting problems on specific plan lines
Queries from pg_stat_statements with parameter placeholders (`$1`, `$2`) prompt for parameter values before analysis.

Requires `config.allow_raw_query_execution = true`.
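The placeholder handling could work roughly like this (a hypothetical sketch, not the gem's actual code): substitute the collected values for `$1`, `$2`, ... before running EXPLAIN ANALYZE.

```ruby
# Hypothetical sketch: fill pg_stat_statements placeholders ($1, $2, ...)
# with user-supplied values. The quoting here is deliberately naive;
# real code must use proper database escaping.
def fill_placeholders(sql, params)
  sql.gsub(/\$(\d+)/) do
    value = params[Regexp.last_match(1).to_i - 1]
    value.is_a?(Numeric) ? value.to_s : "'#{value.to_s.gsub("'", "''")}'"
  end
end

fill_placeholders("SELECT * FROM users WHERE id = $1 AND name = $2", [42, "O'Brien"])
# => "SELECT * FROM users WHERE id = 42 AND name = 'O''Brien'"
```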
Live capture of all SQL executed by your Rails app. Click ▶ Start Monitoring, run any operation, and watch the queries appear with:

- SQL with syntax highlighting
- Duration (color-coded: 🟢 <10ms, 🟡 <100ms, 🔴 >100ms)
- Source location with click-to-IDE
- Timestamp
Built on `ActiveSupport::Notifications` (`sql.active_record`). Filters internal queries (SCHEMA / CACHE / pg_reports' own). Logged to `log/pg_reports.log` (JSON Lines). Configurable buffer size and backtrace filter:
```ruby
PgReports.configure do |config|
  config.query_monitor_log_file = Rails.root.join("log", "custom_monitor.log")
  config.query_monitor_max_queries = 200
  config.query_monitor_backtrace_filter = ->(loc) { !loc.path.match?(%r{/(gems|ruby|railties)/}) }
end
```

Use cases: debugging N+1 queries, identifying slow queries during feature development, tracking down unexpected queries, teaching ActiveRecord behavior.
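Since the monitor log is JSON Lines, it can also be post-processed outside the dashboard. A minimal sketch of the N+1 use case (assuming each line has a `sql` field; the normalization regexes are illustrative, not the gem's):

```ruby
require "json"

# Hypothetical sketch: group captured queries by their normalized shape
# and flag shapes that repeat, a typical N+1 signature.
def n_plus_one_suspects(jsonl, threshold: 3)
  shapes = jsonl.each_line.map do |line|
    JSON.parse(line)["sql"]
      .gsub(/\d+/, "?")     # numeric literals -> ?
      .gsub(/'[^']*'/, "?") # string literals  -> ?
  end
  shapes.tally.select { |_, count| count >= threshold }
end

log = <<~JSONL
  {"sql":"SELECT * FROM posts WHERE user_id = 1"}
  {"sql":"SELECT * FROM posts WHERE user_id = 2"}
  {"sql":"SELECT * FROM posts WHERE user_id = 3"}
  {"sql":"SELECT * FROM users"}
JSONL
n_plus_one_suspects(log)
# => {"SELECT * FROM posts WHERE user_id = ?"=>3}
```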
Four specialized reports under the Connections category:

- Pool Usage - total/active/idle per database, utilization %, idle-in-transaction count, available capacity
- Wait Times - queries waiting on locks/IO/network with wait event types and severity
- Pool Saturation - auto-classified (Normal / Elevated / Warning / Critical) with context-aware recommendations
- Connection Churn - age distribution by application, short-lived (<10s) detection, churn-rate calculation, missing-pooling diagnosis
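The churn-rate idea behind the Connection Churn report can be expressed simply (hypothetical sketch; the real report reads connection ages from pg_stat_activity):

```ruby
# Hypothetical sketch: share of connections younger than 10 seconds.
# A high ratio suggests connections are opened per request
# instead of being reused from a pool.
def churn_rate(connection_ages_seconds, short_lived: 10)
  return 0.0 if connection_ages_seconds.empty?

  short = connection_ages_seconds.count { |age| age < short_lived }
  (short.to_f / connection_ages_seconds.size).round(2)
end

churn_rate([2, 4, 8, 120, 300]) # => 0.6
```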
```ruby
PgReports.pool_usage.display
PgReports.pool_saturation.display
PgReports.connection_churn.display
```

Click any source location (file:line) in a report to open it in your IDE. Supported: VS Code, VS Code (WSL), RubyMine, IntelliJ IDEA, Cursor, Cursor (WSL). Use the ⚙️ button to set your default and skip the menu.
For unused or invalid indexes, the dashboard generates a Rails migration: expand the row → Generate Migration → copy the code or create the file directly (opens in your default IDE).
When optimizing queries, click Save for Comparison on any expanded row. Saved records persist in browser localStorage per report type and appear above the results table for before/after comparison.
The Export dropdown includes Copy Prompt (visible on actionable reports). It assembles a ready-to-paste prompt with problem description, fix instructions, and the actual report data, formatted for Claude Code, Cursor, Codex, or any code-aware AI assistant.
Expose selected reports at `<mount_point>/metrics` in Prometheus exposition format. The default mount is `/pg_reports`, so the endpoint is typically `/pg_reports/metrics`, but it follows whatever path you used in `mount PgReports::Engine, at: "..."`. Severity (ok / warning / critical) is derived automatically from the thresholds defined in `Dashboard::ReportsRegistry::REPORT_CONFIG`.
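The threshold-to-severity mapping can be pictured like this (an illustrative sketch; the actual thresholds and names live in `Dashboard::ReportsRegistry::REPORT_CONFIG`):

```ruby
# Illustrative sketch: derive ok/warning/critical from a metric value
# and a pair of thresholds, as the exporter does per report.
def severity(value, warning:, critical:)
  if value >= critical
    :critical
  elsif value >= warning
    :warning
  else
    :ok
  end
end

severity(5,  warning: 10, critical: 50) # => :ok
severity(60, warning: 10, critical: 50) # => :critical
```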
```ruby
PgReports.configure do |config|
  config.grafana_favorites = [
    :slow_queries,
    :unused_indexes,
    :bloated_tables,
    :missing_validations,
    :polymorphic_without_index
  ]
  config.grafana_metrics_token = ENV["PG_REPORTS_METRICS_TOKEN"] # optional bearer token
  config.grafana_cache_ttl = 60 # seconds
end
```

Scrape with Prometheus:
```yaml
scrape_configs:
  - job_name: pg_reports
    metrics_path: /pg_reports/metrics # adjust to your Engine mount point
    scrape_interval: 60s
    authorization: { credentials: "${PG_REPORTS_METRICS_TOKEN}" }
    static_configs:
      - targets: ["app.internal:3000"]
```

> [!WARNING]
> Reports are cached via `Rails.cache` for `grafana_cache_ttl` so frequent scrapes don't hammer the database. Without it, Prometheus' default 15s scrape interval against heavy reports like `missing_validations` will DDoS your own DB. Always set a TTL ≥ scrape interval, and consider a longer per-report TTL for expensive reports.
The exporter also emits a `pg_reports_row` series per report row (each column becomes a Prometheus label), so the auto-generated dashboard can show a table panel with the actual rows that need fixing, not just an aggregate count.
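Label construction for such per-row series might look like this (a sketch; apart from the `pg_reports_row` name, the shape is an assumption, and label values must be escaped per the exposition format):

```ruby
# Sketch: render one report row as a Prometheus exposition line,
# turning each column into a label. Backslashes, quotes, and newlines
# in label values are escaped as the exposition format requires.
def row_metric(report, row)
  labels = row.map do |key, value|
    escaped = value.to_s
      .gsub("\\") { "\\\\" }
      .gsub("\n") { "\\n" }
      .gsub("\"") { "\\\"" }
    %(#{key}="#{escaped}")
  end
  %(pg_reports_row{report="#{report}",#{labels.join(",")}} 1)
end

row_metric("unused_indexes", "index_name" => "idx_users_email", "scans" => 0)
# => 'pg_reports_row{report="unused_indexes",index_name="idx_users_email",scans="0"} 1'
```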
Generate a matching Grafana dashboard from the same favorites:

```shell
bundle exec rake pg_reports:grafana:dashboard
# writes pg_reports.json in pwd; then Dashboards → Import in Grafana
```

Full Grafana integration guide → · Local Prometheus + Grafana without Docker →
Get a bot token from @BotFather and your chat ID from @userinfobot, then:

```ruby
PgReports.configure do |config|
  config.telegram_bot_token = "123456:ABC-DEF..."
  config.telegram_chat_id = "-1001234567890"
end

PgReports.slow_queries.send_to_telegram
PgReports.health_report.send_to_telegram_as_file
```

Reports under ~50 rows go as a message; larger ones are sent as a file attachment.
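The size-based routing boils down to a single check (hypothetical sketch; the ~50-row cutoff is the only number taken from the gem's description):

```ruby
# Hypothetical sketch of the Telegram delivery decision:
# small reports go inline as a message, large ones as a file attachment.
def telegram_delivery(row_count, limit: 50)
  row_count < limit ? :message : :file
end

telegram_delivery(10)  # => :message
telegram_delivery(500) # => :file
```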
## Development

```shell
git clone https://github.com/yourusername/pg_reports
cd pg_reports
bundle install
bundle exec rspec
bundle exec rubocop
```

## Contributing
1. Fork it
2. Create your feature branch (`git checkout -b feature/my-feature`)
3. Commit your changes
4. Push to the branch
5. Create a Pull Request
## License

MIT. See LICENSE.txt.
## Acknowledgments

Inspired by rails-pg-extras. UI built with Claude by Anthropic.
