Store large objects in memcache or other caches by slicing them into pieces; a rough sketch of the idea follows the list below.

  • uses read_multi for fast access
  • returns nil if any slice is missing
  • low performance overhead: only a single read/write is used when the data is below 1MB
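
A minimal sketch of the slicing idea, assuming a hypothetical key scheme ("a_0", "a_1", ...) and hand-rolled sliced_write/sliced_read helpers; the gem's real internal layout may differ:

require "active_support"
require "active_support/cache"

SLICE_SIZE = 1_000_000 # roughly 1MB per slice

def sliced_write(cache, key, value)
  # cut the value into <=1MB pieces and store each piece under its own key
  slices = (0...value.bytesize).step(SLICE_SIZE).map { |o| value.byteslice(o, SLICE_SIZE) }
  slices.each_with_index { |slice, i| cache.write("#{key}_#{i}", slice) }
  cache.write(key, slices.size) # remember how many slices make up the value
end

def sliced_read(cache, key)
  count = cache.read(key)
  return nil unless count
  keys = (0...count).map { |i| "#{key}_#{i}" }
  parts = cache.read_multi(*keys)    # one round trip for all slices
  return nil if parts.size < count   # a missing slice makes the whole value nil
  keys.map { |k| parts[k] }.join
end

cache = ActiveSupport::Cache::MemoryStore.new
sliced_write(cache, "a", "a" * 3_500_000)
sliced_read(cache, "a").size # => 3_500_000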

Install

gem install large_object_store
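
Or, if you use Bundler, add it to your Gemfile:

gem "large_object_store"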

Usage

Rails.cache.write("a", "a"*10_000_000) # => false -> oops too large

store = LargeObjectStore.wrap(Rails.cache)
store.write("a", "a"*10_000_000)  # => true -> always!
store.read("a").size              # => 10_000_000 using multi_get
store.read("b")                   # => nil
store.fetch("a"){ "something" }   # => "something" executes block on miss
store.write("a" * 10_000_000, compress: true)                # compress when greater than 16k
store.write("a" * 1000, compress: true, compress_limit: 100) # compress when greater than 100
store.write("a" * 1000, raw: true)                           # store as string to avoid marshaling overhead

zstd

zstd compression, a modern improvement over the venerable zlib compression algorithm, is supported by passing the zstd flag when writing items:

store.write("a" * 10_000_000, compress: true, zstd: true)

For backwards compatibility and to allow a safe roll-out in existing systems, the zstd flag defaults to false.

zstd decompression is used when the zstd magic number is detected at the beginning of compressed data, so zstd: true does not need to be passed when reading/fetching items.
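
For example, in a mixed roll-out where older entries were written with zlib and newer ones with zstd (the key names are only for illustration):

store = LargeObjectStore.wrap(Rails.cache)

store.write("new", "a"*10_000_000, compress: true, zstd: true) # compressed with zstd
store.write("old", "b"*10_000_000, compress: true)             # compressed with zlib

# reads need no zstd flag; the format is detected per entry
store.read("new").size # => 10_000_000
store.read("old").size # => 10_000_000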

Author

Ana Martinez
acemacu@gmail.com
Michael Grosser
michael@grosser.it
License: MIT