Lifter offers a built-in caching mechanism that can be used to store query results and retrieve them later.

This is an efficient way to reduce your application's I/O, avoid hitting rate limits, reduce exposure to network latency, etc.

Caching is configured on store creation, via the following API:

from lifter import caches
from lifter import models
from lifter.backend import http

class MyModel(models.Model):
    class Meta:
        app_name = 'my_app'
        name = 'my_model'

cache = caches.DummyCache()
store = http.RESTStore(identifier='my_store', cache=cache)
manager = store.query(MyModel)

You can use the same Cache instance across multiple stores if you want; this won't lead to cache collisions.

How does it work?

When a cache is configured for a given store and the store executes a query, the following happens:

  1. The store identifier, the model app, the model name and the query are hashed together to form a cache key
  2. The cache is then queried using that key
  3. If a result is found with that key, it’s returned directly without sending the query to the underlying backend
  4. If no result is found, the query is processed normally, but the result is stored in the cache for later use
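The key-derivation step above can be sketched as follows. This is an illustrative sketch, not lifter's actual hashing scheme; the function name and the use of SHA-256 are assumptions:

```python
import hashlib

def make_cache_key(store_identifier, app_name, model_name, query_repr):
    """Hash the store identifier, model app, model name and query
    together to form a cache key (hypothetical scheme; lifter's real
    implementation may combine and hash these differently)."""
    raw = '|'.join([store_identifier, app_name, model_name, query_repr])
    return hashlib.sha256(raw.encode('utf-8')).hexdigest()

# Two stores sharing one cache cannot collide, because the store
# identifier is part of the key:
key_a = make_cache_key('store_a', 'my_app', 'my_model', 'all()')
key_b = make_cache_key('store_b', 'my_app', 'my_model', 'all()')
assert key_a != key_b
```

Because every component of the query's context participates in the key, identical queries against the same store hit the same cache entry, while any difference in store, model or query produces a distinct key.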

Once a cache is configured for a store, it is automatically used:

# This will execute the query and store the results in the cache
# (the exact query below is hypothetical; any manager call behaves the same)
results = manager.all()

# For this one, the query won't hit the backend, since the value is
# already present in the cache
results = manager.all()

For the previous example, the cache key is a hash built from the store identifier ('my_store'), the model's app name ('my_app'), the model's name ('my_model') and the executed query.

Cache options

The following arguments are available to all cache instances; all are optional.

default_timeout

The default timeout, in seconds, used for cached values. Defaults to None, meaning values never expire.



enabled

Whether the cache is enabled by default.

You can override this behaviour using the Cache.disable() and Cache.enable() context managers.

Cache methods

class lifter.caches.Cache(default_timeout=None, enabled=True)[source]

disable()[source]

Returns a context manager to bypass the cache:

with cache.disable():
    # Queries executed here will ignore the cache
    ...

enable()[source]

Returns a context manager to force-enable the cache if it is disabled:

with cache.enable():
    # Queries executed here will use the cache, even if it is
    # disabled by default
    ...

get(key, default=None, reraise=False)[source]

Get the given key from the cache, if present. A default value may be provided to return when the requested key is missing; otherwise, None is returned.

  • key (str) – the key to query
  • default – the value to return if the key does not exist in the cache
  • reraise (bool) – whether an exception should be raised if no value is found; defaults to False.

Example usage:

cache.set('my_key', 'my_value')

cache.get('my_key')
>>> 'my_value'

cache.get('not_present', 'default_value')
>>> 'default_value'

cache.get('not_present', reraise=True)
>>> raise lifter.exceptions.NotInCache
set(key, value, timeout=NotSet)[source]

Set the given key to the given value in the cache. A timeout may be provided; otherwise, Cache.default_timeout is used.

  • key (str) – the key to which the value will be bound
  • value – the value to store in the cache
  • timeout (int or None) – the expiration delay, in seconds, for the value. None means it never expires.

Example usage:

# this cached value will expire after half an hour
cache.set('my_key', 'value', 1800)
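The NotSet default in the set() signature lets the cache distinguish an omitted timeout from an explicit None. Here is a hedged sketch of that resolution logic; the helper name is hypothetical and not part of lifter's API:

```python
class NotSet:
    """Sentinel distinguishing "no timeout given" from an explicit None."""

def resolve_timeout(timeout, default_timeout):
    # An explicit value, including None (meaning "never expire"),
    # wins over the cache's default_timeout.
    if timeout is NotSet:
        return default_timeout
    return timeout

assert resolve_timeout(NotSet, 3600) == 3600  # fall back to the default
assert resolve_timeout(1800, 3600) == 1800    # explicit timeout wins
assert resolve_timeout(None, 3600) is None    # explicit "never expire"
```

A plain `timeout=None` default could not express this: passing None explicitly would be indistinguishable from not passing a timeout at all, so "never expire" could never override a configured default_timeout.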

Available cache backends

At the moment, the only available cache backend is DummyCache, which stores values in a Python dictionary.

You can use its code as a starting point to implement your own backends, using Redis or Memcached, for example.
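As a starting point, a dictionary-backed cache with lazy expiration might look like the sketch below. This is illustrative only: the class name and method bodies are assumptions, and lifter's actual base class may expose different hooks:

```python
import time

class DictCache:
    """Minimal dictionary-backed cache, in the spirit of DummyCache.
    Illustrative sketch only; not lifter's actual backend API."""

    def __init__(self, default_timeout=None):
        self.default_timeout = default_timeout
        # Maps key -> (value, expires_at); expires_at is None for
        # values that never expire.
        self._data = {}

    def set(self, key, value, timeout=None):
        # Fall back to the instance-wide default when no timeout is given
        # (lifter uses a NotSet sentinel here instead; simplified away).
        if timeout is None:
            timeout = self.default_timeout
        expires_at = time.monotonic() + timeout if timeout is not None else None
        self._data[key] = (value, expires_at)

    def get(self, key, default=None):
        try:
            value, expires_at = self._data[key]
        except KeyError:
            return default
        if expires_at is not None and time.monotonic() >= expires_at:
            # Lazily evict expired entries on access
            del self._data[key]
            return default
        return value

cache = DictCache()
cache.set('my_key', 'my_value')
assert cache.get('my_key') == 'my_value'
```

A Redis or Memcached backend would follow the same shape, delegating set/get to the client library and letting the server handle expiration natively.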