# Caching

## Overview
The Caching module provides a powerful, decorator-based system for caching the results of asynchronous functions. It is designed to be highly resilient and performant, implementing several advanced caching strategies out-of-the-box.
## Key Features

- `@cache` decorator: The primary interface for caching function results.
- `@invalidate_cache` decorator: For declaratively invalidating cache keys.
- Resilient Fallback: Can be configured with a fallback cache (e.g., in-memory) that is used if the primary cache (e.g., Redis) is unavailable.
- Single-Flight Caching: Uses a distributed lock to prevent the "thundering herd" problem, where multiple concurrent requests for a missed key all trigger the expensive computation.
- Refresh-Ahead (Stale-While-Revalidate): Can serve stale data while a background task refreshes the cache, minimizing latency for users.
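The single-flight idea above can be illustrated with a minimal, self-contained sketch. This is not the module's implementation (which uses a *distributed* lock via the `use_lock` option); it uses an in-process `asyncio.Lock`, and all names here (`get_single_flight`, `expensive`, the dict-backed store) are hypothetical:

```python
import asyncio

# Minimal single-flight sketch: concurrent misses for the same key share
# one computation instead of each triggering it (illustrative only).
_cache: dict = {}
_locks: dict = {}
calls = 0  # counts how many times the expensive function actually runs


async def expensive(key: str) -> str:
    global calls
    calls += 1
    await asyncio.sleep(0.01)  # simulate a slow database call
    return f"value-for-{key}"


async def get_single_flight(key: str) -> str:
    if key in _cache:
        return _cache[key]
    lock = _locks.setdefault(key, asyncio.Lock())
    async with lock:
        # Re-check after acquiring the lock: another waiter may have
        # filled the cache while we were blocked.
        if key not in _cache:
            _cache[key] = await expensive(key)
    return _cache[key]


async def main() -> tuple:
    # Ten concurrent requests for the same missing key.
    results = await asyncio.gather(*(get_single_flight("p1") for _ in range(10)))
    return results, calls
```

Without the lock, all ten concurrent misses would call `expensive` (the thundering herd); with it, nine callers simply wait for the first result.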
## Usage

```python
from nala.athomic.performance import cache, invalidate_cache

class ProductService:
    @cache(ttl=300, key_prefix="products")  # Cache results for 5 minutes
    async def get_product_details(self, product_id: str) -> dict:
        # Expensive database call
        return await db.fetch_product(product_id)

    @invalidate_cache(key_prefix="products", key_resolver=lambda result, **kwargs: f"products:{kwargs['product_id']}")
    async def update_product_details(self, product_id: str, data: dict):
        # Update the product in the database.
        # The cache for this product will be automatically invalidated.
        return await db.update_product(product_id, data)
```
For more details on the resilient provider, see the Fallback documentation.
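The Refresh-Ahead (stale-while-revalidate) behavior enabled by `refresh_ahead` and `refresh_threshold` can be sketched in a self-contained way. This is an assumption-laden illustration, not the `CacheHandler` internals: entries carry a timestamp, and a read past `refresh_threshold * ttl` returns the stale value immediately while a background task recomputes it.

```python
import asyncio

# Illustrative stale-while-revalidate sketch (all names hypothetical).
TTL = 10.0
REFRESH_THRESHOLD = 0.8
_store: dict = {}  # key -> (value, stored_at)


async def recompute(key: str) -> str:
    await asyncio.sleep(0)  # stand-in for the expensive call
    return f"fresh-{key}"


async def get_refresh_ahead(key: str, now: float) -> str:
    value, stored_at = _store[key]
    age = now - stored_at
    if age >= REFRESH_THRESHOLD * TTL:
        # Serve stale data now; refresh in the background so the
        # caller never waits on the recomputation.
        async def _refresh():
            _store[key] = (await recompute(key), now)
        asyncio.create_task(_refresh())
    return value


async def demo() -> tuple:
    _store["p"] = ("stale-p", 0.0)
    served = await get_refresh_ahead("p", now=9.0)  # age 9.0 >= 0.8 * 10
    await asyncio.sleep(0.01)  # give the background refresh time to finish
    return served, _store["p"][0]
```

The caller gets the stale value with no added latency; the next read sees the refreshed entry.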
## API Reference

### `nala.athomic.performance.cache.decorators.cache(ttl=60, key_prefix=None, key_resolver=None, use_jitter=False, use_lock=False, lock_timeout=30, refresh_ahead=False, refresh_threshold=None, provider=None, ttl_key=None)`
Decorator to cache the result of an asynchronous function using the Cache-Aside, Single-Flight, and Refresh-Ahead strategies.
It collects all configuration parameters and delegates the execution logic to the `CacheHandler`.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `ttl` | `int` | Time to live (in seconds) for the cached item. Can be overridden by `ttl_key`. | `60` |
| `key_prefix` | `Optional[str]` | Static prefix for the cache key (e.g., `'user_service:'`). | `None` |
| `key_resolver` | `Optional[ContextualKeyResolverType]` | Custom function, string, or list used to generate contextual keys. | `None` |
| `use_jitter` | `Optional[bool]` | If True, adds random variance to the TTL to prevent cache stampedes. | `False` |
| `use_lock` | `Optional[bool]` | If True, enables distributed locking (Single-Flight Caching). | `False` |
| `lock_timeout` | `Optional[int]` | Timeout in seconds for acquiring the distributed lock. | `30` |
| `refresh_ahead` | `Optional[bool]` | If True, enables the background refresh strategy (stale hit). | `False` |
| `refresh_threshold` | `Optional[float]` | Fraction of the TTL (0.0 to 1.0) after which the item is considered stale and a background refresh is triggered. | `None` |
| `provider` | `Optional[CacheProtocol]` | Optional explicit `CacheProtocol` instance (for testing). Defaults to `CacheFallbackFactory.create()`. | `None` |
| `ttl_key` | `Optional[str]` | Key name in Live Config to dynamically override the TTL. | `None` |
Returns:

| Name | Type | Description |
|---|---|---|
| Callable | `Callable[..., Any]` | The decorator function. |
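The rationale behind `use_jitter` can be shown with a short sketch. The exact variance the module applies is not documented here; `jittered_ttl` and the ±10% spread below are assumptions:

```python
import random

# Hypothetical TTL-jitter sketch: varying each entry's TTL by a small
# random amount spreads expirations out, so a burst of writes does not
# expire (and stampede the backend) all at once.
def jittered_ttl(ttl: float, spread: float = 0.1) -> float:
    # Vary the TTL by up to +/- spread (10% by default).
    return ttl * (1 + random.uniform(-spread, spread))
```

For example, entries cached with `ttl=60` would expire somewhere between 54 and 66 seconds rather than at the same instant.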
### `nala.athomic.performance.cache.decorators.invalidate_cache(key_prefix=None, key_resolver=None, provider=None)`

Decorator to invalidate cache entries after the decorated function is called.

This decorator calls the `invalidate` method of the provided cache provider with the specified key prefix and context.

Args:

- `key_prefix`: An optional prefix to use for the cache keys.
- `key_resolver`: An optional function to generate cache keys based on the function's context.
- `provider`: An optional cache provider to use for invalidation.
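To make the key contract concrete, here is a self-contained sketch of how a resolver like the one in the Usage section maps a call's context to the key being evicted. The `invalidate` function and the dict-backed store are hypothetical stand-ins, not the module's API:

```python
# The resolver shape from the Usage section: it receives the function's
# result plus its keyword arguments and returns the full cache key.
key_resolver = lambda result, **kwargs: f"products:{kwargs['product_id']}"

# Hypothetical eviction step against a plain dict acting as the cache.
cache_store = {"products:42": {"name": "Widget"}}


def invalidate(store: dict, resolver, result=None, **kwargs) -> bool:
    # Resolve the key from the call context and evict it if present.
    key = resolver(result, **kwargs)
    return store.pop(key, None) is not None
```

Because `@cache` and `@invalidate_cache` in the Usage example resolve to the same key (`products:<product_id>`), an update to a product evicts exactly the entry its reader populated.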