Project

autoscaler 0.11

No commit activity in the last 3 years; no release in over 3 years.

Currently provides a Sidekiq middleware that does 0/1 scaling of Heroku processes.

 Project Readme

Sidekiq Heroku Autoscaler

Sidekiq performs background jobs. While its threading model allows it to scale more easily than worker-per-process background systems, people running test or lightly loaded systems on Heroku still want to scale down to zero to avoid racking up charges.

Requirements

Tested on Ruby 2.1.7 and the Heroku Cedar stack.

Installation

gem install autoscaler
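
If you use Bundler, you can instead add it to your Gemfile:

gem 'autoscaler'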

Getting Started

This gem uses the Heroku platform-api gem, which requires an OAuth token from Heroku. It also needs the Heroku app name. By default, these are specified through environment variables. You can also pass them to HerokuPlatformScaler explicitly.

AUTOSCALER_HEROKU_ACCESS_TOKEN=.....
AUTOSCALER_HEROKU_APP=....
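
To pass them explicitly instead, a minimal sketch; the argument order shown (process type, OAuth token, app name) is an assumption, so check HerokuPlatformScaler#initialize for the exact signature:

require 'autoscaler/heroku_platform_scaler'

# Hypothetical values for illustration
scaler = Autoscaler::HerokuPlatformScaler.new(
  'worker',                  # Heroku process type to scale
  'your-heroku-oauth-token', # otherwise read from AUTOSCALER_HEROKU_ACCESS_TOKEN
  'your-heroku-app-name')    # otherwise read from AUTOSCALER_HEROKU_APP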

Install the middleware in your Sidekiq.configure_client and Sidekiq.configure_server blocks:

require 'autoscaler/sidekiq'
require 'autoscaler/heroku_platform_scaler'

Sidekiq.configure_client do |config|
  config.client_middleware do |chain|
    chain.add Autoscaler::Sidekiq::Client, 'default' => Autoscaler::HerokuPlatformScaler.new
  end
end

Sidekiq.configure_server do |config|
  config.server_middleware do |chain|
    chain.add(Autoscaler::Sidekiq::Server, Autoscaler::HerokuPlatformScaler.new, 60) # 60 second timeout
  end
end

Limits and Challenges

  • HerokuPlatformScaler includes an attempt at a current-worker cache that may be an overcomplication, and it doesn't work very well on the server.
  • Multiple scale-down loops may be started, particularly if there are multiple jobs queued when the server comes up. Heroku seems to handle multiple scale-down commands well.
  • The scale-down monitor is triggered on job completion (and server middleware is only run around jobs), so if the server never processes any jobs, it won't turn off.
  • The retry and schedule lists are considered - if you schedule a long-running task, the process will not scale down.
  • If background jobs trigger jobs in other scaled processes, note that you'll need config.client_middleware in your Sidekiq.configure_server block in order to scale up (see the sketch after this list).
  • Exceptions while calling the Heroku API are caught and printed by default. See HerokuPlatformScaler#exception_handler to override this.
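
For example, mirroring the Getting Started setup, a server configuration that also registers the client middleware might look like this (a sketch; adjust the scaler and queue names to your app):

Sidekiq.configure_server do |config|
  config.server_middleware do |chain|
    chain.add(Autoscaler::Sidekiq::Server, Autoscaler::HerokuPlatformScaler.new, 60) # 60 second timeout
  end
  # Needed so that jobs enqueued from within other jobs can scale up their target processes
  config.client_middleware do |chain|
    chain.add Autoscaler::Sidekiq::Client, 'default' => Autoscaler::HerokuPlatformScaler.new
  end
end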

Experimental

Strategies

You can pass a scaling strategy object instead of the timeout to the server middleware. The object (or lambda) should respond to #call(system, idle_time) and return the desired number of workers. See lib/autoscaler/binary_scaling_strategy.rb for an example.
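
For instance, a minimal sketch of a lambda strategy, assuming the system object exposes the counts used by BinaryScalingStrategy (queued, scheduled, retrying, and workers):

# Keep one worker while any work is queued, scheduled, retrying, or in progress;
# otherwise scale to zero. Assumes the system methods named above.
strategy = lambda do |system, idle_time|
  busy = system.queued + system.scheduled + system.retrying + system.workers
  busy > 0 ? 1 : 0
end

Sidekiq.configure_server do |config|
  config.server_middleware do |chain|
    chain.add(Autoscaler::Sidekiq::Server, Autoscaler::HerokuPlatformScaler.new, strategy)
  end
end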

Initial Workers

Call Client#set_initial_workers to start workers on main process startup; typically:

Autoscaler::Sidekiq::Client.add_to_chain(chain, 'default' => heroku).set_initial_workers
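
In context, this typically lives in the client middleware configuration, where chain is yielded by client_middleware and heroku stands in for whatever scaler (or hash of scalers) you built in Getting Started:

heroku = Autoscaler::HerokuPlatformScaler.new

Sidekiq.configure_client do |config|
  config.client_middleware do |chain|
    # add_to_chain returns the Client, so set_initial_workers can be chained onto it
    Autoscaler::Sidekiq::Client.add_to_chain(chain, 'default' => heroku).set_initial_workers
  end
end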

Worker caching

The scaler's view of the current worker count can be cached, for example in Redis:

scaler.counter_cache = Autoscaler::CounterCacheRedis.new(Sidekiq.method(:redis))

Tests

The project is set up to run RSpec with Guard. It expects a Redis instance on a custom port, which is started by the Guardfile.

The HerokuPlatformScaler is not tested by default because it makes live API requests. Specify AUTOSCALER_HEROKU_APP and AUTOSCALER_HEROKU_ACCESS_TOKEN on the command line, and then watch your app's logs:

AUTOSCALER_HEROKU_APP=... AUTOSCALER_HEROKU_ACCESS_TOKEN=... guard
heroku logs --app ...

Authors

Justin Love, @wondible, https://github.com/JustinLove

Contributors

Licence

Released under the MIT license.