Berater

All systems have limits, beyond which they tend to fail. So we should strive to understand a system's limits and work within them. Better to cause a few excessive requests to fail than bring down the whole server and deal with a chaotic, systemic failure. Berater makes working within limits easy.

require 'berater'
require 'redis'

Berater.configure do |c|
  c.redis = Redis.new
end

Berater(:key, 3) do
  # allow only three simultaneous requests at a time, with a concurrency limiter
end

Berater(:key, 2, interval: :second) do
  # or do work twice per second with a rate limiter
end

Berater

Berater(key, capacity, **opts, &block)
  • key - name of limiter
  • capacity - maximum number of requests permitted
  • opts
    • redis - a redis instance
    • interval - how often the capacity limit resets, if it does (either a number of seconds or a symbol: :second, :minute, :hour)
  • block - optional block to call immediately via .limit

Berater::Limiter

The base class for all limiters.

limiter = Berater(*)
limiter.limit(**opts) do
  # limited work
end

lock = limiter.limit
# do work inline
lock.release

limiter.limit(cost: 2) do
  # do extra expensive work
end

limiter.limit(capacity: 3) do
  # do work within a new capacity limit
end

.limit - acquire a lock. Raises a Berater::Overloaded error if the limit has been exceeded. When passed a block, it executes the block and releases the lock afterward. Without a block, it returns the lock, which should be released once the work is complete.

  • capacity - override the limiter's capacity for this call
  • cost - the relative cost of this piece of work, default is 1

.utilization - a Float representing how much capacity is being used, as a fraction of the limit. Values >= 1 indicate that the limiter is overloaded and calls to .limit will fail.
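For intuition, utilization is contention divided by capacity. A minimal sketch of the arithmetic (illustrative only, not Berater's actual implementation):

```ruby
# Illustrative only: utilization as the fraction of capacity in use.
def utilization(contention, capacity)
  contention.to_f / capacity
end

utilization(3, 10)  # 0.3 - plenty of headroom
utilization(12, 10) # 1.2 - overloaded, so .limit calls would fail
```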

Berater::Lock

Created when a call to .limit is successful.

Berater(*) do |lock|
  lock.contention
end

# or inline
lock = Berater(*).limit
  • .capacity - capacity limitation
  • .contention - capacity being utilized
  • .locked? - whether the lock is currently being held
  • .release - release capacity being held
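To make the lifecycle concrete, here is an in-memory sketch of these four methods (illustrative only, not Berater's implementation):

```ruby
# Illustrative stand-in for Berater::Lock.
class TinyLock
  attr_reader :capacity, :contention

  def initialize(capacity, contention)
    @capacity = capacity     # the limit in effect when the lock was granted
    @contention = contention # capacity in use, including this lock
    @locked = true
  end

  def locked?
    @locked
  end

  def release
    @locked = false
  end
end

lock = TinyLock.new(3, 1) # first of three slots
lock.locked?  # true
lock.release
lock.locked?  # false
```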

Berater::RateLimiter

A leaky bucket rate limiter. Useful when you want to limit usage within a given time window, e.g. twice per second.

Berater::RateLimiter.new(key, capacity, interval, **opts)
  • key - name of limiter
  • capacity - maximum number of requests permitted
  • interval - how often the capacity limit resets. Either a number of seconds or a symbol: :second, :minute, :hour
  • opts
    • redis - a redis instance

e.g.

limiter = Berater::RateLimiter.new(:key, 2, :second, redis: redis)
limiter.limit do
  # do work, twice per second
end

# or, more conveniently
Berater(:key, 2, interval: :second) do
  ...
end

Berater::ConcurrencyLimiter

Useful for limiting the amount of work done concurrently, i.e. simultaneously. For example, no more than 3 connections at once.

Berater::ConcurrencyLimiter.new(key, capacity, **opts)
  • key - name of limiter
  • capacity - maximum number of simultaneous requests
  • opts
    • timeout - maximum seconds a lock may be held (optional, but recommended)
    • redis - a redis instance

e.g.

limiter = Berater::ConcurrencyLimiter.new(:key, 3, redis: redis, timeout: 30)
limiter.limit do
  # allow only three simultaneous requests at a time, for no more than 30 seconds each
end

# or, more conveniently
Berater(:key, 3) do
  ...
end

Install

gem install berater

Configure a default redis connection.

Berater.configure do |c|
  c.redis = Redis.new
end

Integrations

Rails

Convert limit errors into an HTTP status code

class ApplicationController < ActionController::Base
  rescue_from Berater::Overloaded do
    head :too_many_requests
  end
end

Sidekiq

Sidekiq Ent provides a framework for applying limitations to Sidekiq jobs. It offers several built-in limiter types, and automatically reschedules workers to be retried later when limits are exceeded. Berater limiters can be used seamlessly within this framework by registering Berater's error class.

# config/initializers/sidekiq.rb

Sidekiq::Limiter.errors << Berater::Overloaded

Testing

Berater has a few tools to make testing easier. And it plays nicely with Timecop.

test_mode

Force all .limit calls to either pass or fail, without hitting Redis.

require 'berater/test_mode'

describe 'MyTest' do
  let(:limiter) { Berater.new(:key, 1, :second) }
  
  context 'with test_mode = :pass' do
    before { Berater.test_mode = :pass }

    it 'always works' do
      10.times { limiter.limit { ... } }
    end
  end

  context 'with test_mode = :fail' do
    before { Berater.test_mode = :fail }

    it 'always raises an exception' do
      expect { limiter.limit }.to raise_error(Berater::Overloaded)
    end
  end
end

rspec

Provides rspec matchers and automatically flushes Redis between examples.

require 'berater/rspec'

describe 'MyTest' do
  let(:limiter) { Berater.new(:key, 1, :second) }

  it 'rate limits' do
    limiter.limit
    
    expect { limiter.limit }.to be_overloaded
  end
end

Unlimiter

A limiter which always succeeds.

limiter = Berater::Unlimiter.new

Inhibitor

A limiter which always fails.

limiter = Berater::Inhibitor.new

Misc

A riddle!

What's the difference between a rate limiter and a concurrency limiter?  Can you build one with the other?

Both enforce limits, but differ with respect to time and memory. A rate limiter can be implemented using a concurrency limiter, by allowing every lock to time out rather than be released. It wastes memory, but is functionally equivalent. A concurrency limiter can nearly be implemented using a rate limiter, by decrementing the used capacity when a lock is released. The order of locks, however, is lost, so timeouts will not function properly.
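A minimal in-memory sketch of the first direction (illustrative only; Berater itself uses Redis): a concurrency limiter whose locks are never released, only timed out, behaves like a rate limiter.

```ruby
# Toy concurrency limiter with lock timeouts, tracked by timestamp.
class TinyConcurrencyLimiter
  def initialize(capacity, timeout:)
    @capacity = capacity
    @timeout = timeout
    @locks = [] # acquisition times of held locks
  end

  def acquire(now)
    @locks.reject! { |t| now - t >= @timeout } # expire stale locks
    raise 'Overloaded' if @locks.size >= @capacity
    @locks << now
    true
  end
end

# Used as "2 per second" by never releasing and relying on a 1s timeout.
limiter = TinyConcurrencyLimiter.new(2, timeout: 1)
limiter.acquire(0.0) # ok
limiter.acquire(0.1) # ok
# limiter.acquire(0.2) would raise 'Overloaded'
limiter.acquire(1.1) # ok again: the lock from t=0.0 has expired
```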

An example is worth a thousand words :)

Load Shedding

If work has different priorities, then preemptively shedding load will facilitate more graceful failures. Low priority work should yield to higher priority work. Here's a simple yet effective approach:

limiter = Berater(*)

capacity = if priority == :low
  (limiter.capacity * 0.8).to_i
end

limiter.limit(capacity: capacity) do
  # work
end
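The same idea as a hypothetical helper (the 0.8 factor and the :low label are assumptions carried over from the snippet above, not part of Berater's API):

```ruby
# Hypothetical helper: low priority work only sees 80% of the limit,
# leaving headroom for high priority requests.
def effective_capacity(capacity, priority)
  priority == :low ? (capacity * 0.8).to_i : capacity
end

effective_capacity(10, :low)   # 8
effective_capacity(10, :high)  # 10
```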

DSL

Experimental...

using Berater::DSL

Berater(:key) { 1.per second } do
  ...
end

Berater(:key) { 3.at_once } do
  ...
end

Contributing

Yes please :)

  1. Fork it
  2. Create your feature branch (git checkout -b my-feature)
  3. Ensure the tests pass (bundle exec rspec)
  4. Commit your changes (git commit -am 'awesome new feature')
  5. Push your branch (git push origin my-feature)
  6. Create a Pull Request

Inspired by

https://stripe.com/blog/rate-limiters

https://github.blog/2021-04-05-how-we-scaled-github-api-sharded-replicated-rate-limiter-redis

@ptarjan

