imcache

package module
v0.1.0
Published: Feb 28, 2026 License: MIT Imports: 5 Imported by: 0

README

imcache

A tiny, zero-dependency, generic, sharded, thread-safe in-memory cache for Go 1.26+.

Zero external dependencies. Only the Go standard library. No go.sum file. Nothing to audit, nothing to update.


Features

  • Lightweight, zero dependencies -- built entirely on the Go standard library. No transitive dependency tree, no supply chain risk, no version conflicts.
  • Generics -- fully type-safe Cache[V any]; no interface{} casts at call sites.
  • Sharded locking -- 256 independent RWMutex shards (configurable), so reads and writes on different keys never block each other.
  • TTL + LRU eviction -- per-item TTL with lazy expiry on Get, periodic janitor sweeps, and optional per-shard LRU capacity limits with O(1) eviction.
  • Atomic operations -- GetOrSet, SetIfAbsent, and Peek (read without updating LRU order).
  • Range iterators -- All() and Keys() via iter.Seq2/iter.Seq for lazy, allocation-free iteration.
  • Built-in stats -- lock-free atomic hit/miss/eviction counters with Stats() and ResetStats().
  • Eviction callbacks -- get notified on TTL expiry, LRU eviction, or explicit deletes.
  • Lowest memory footprint -- uses less heap memory than sync.Map, go-cache, and golang-lru for the same dataset (see BENCHMARKS.md).
Why imcache over patrickmn/go-cache?

                          go-cache                     imcache
Type safety               interface{} + manual casts   Generics (Cache[V any])
Concurrency               Single global RWMutex        256 independent shard locks
Eviction policy           TTL only                     TTL + LRU capacity eviction
Expiry on read            Janitor only                 Lazy delete on Get + janitor
GetOrSet / SetIfAbsent    No                           Yes
Peek (no LRU touch)       No                           Yes
Range iterators           No                           All(), Keys() via iter.Seq2
Hit/miss/eviction stats   No                           Atomic counters
Dependencies              0                            0

Installation

go get github.com/psdhajare/imcache

Requires Go 1.26+.


Quick start

package main

import (
    "fmt"
    "time"

    "github.com/psdhajare/imcache"
)

func main() {
    // defaultTTL=5m, janitor runs every 10m
    c := imcache.New[string](5*time.Minute, 10*time.Minute)
    defer c.Close()

    // Set with explicit TTL
    c.Set("session:abc", "user-42", 30*time.Minute)

    // Set using the default TTL
    c.Set("config:theme", "dark", imcache.DefaultExpiration)

    // Set with no expiry
    c.Set("static:logo", "/img/logo.png", imcache.NoExpiration)

    if val, ok := c.Get("session:abc"); ok {
        fmt.Println("session:", val)
    }

    // Atomic get-or-set
    val, loaded := c.GetOrSet("once", "computed-value", time.Hour)
    fmt.Println(val, loaded) // "computed-value", false

    // Lazy iteration (no allocation)
    for key, value := range c.All() {
        fmt.Println(key, value)
    }

    // Stats
    s := c.Stats()
    fmt.Printf("hits=%d misses=%d evictions=%d hitRate=%.2f\n",
        s.Hits, s.Misses, s.Evictions, s.HitRate)
}

API reference

Creating a cache
// Basic – string values, 5-minute default TTL, 10-minute janitor sweep.
c := imcache.New[string](5*time.Minute, 10*time.Minute)

// With options
c := imcache.New[MyStruct](
    imcache.NoExpiration,     // items never expire by default
    0,                        // no automatic janitor
    imcache.WithNumShards(512),          // more shards for ultra-high concurrency
    imcache.WithMaxItemsPerShard(1024),  // LRU cap; total ~ 512 x 1024 items
    imcache.WithOnEvict(func(key string, val MyStruct) {
        log.Printf("evicted %s", key)
    }),
)
defer c.Close()
Writing
c.Set("k", value, ttl)                           // insert or update
c.Set("k", value, imcache.DefaultExpiration)     // use cache default TTL
c.Set("k", value, imcache.NoExpiration)          // never expires

actual, loaded := c.SetIfAbsent("k", value, ttl) // set only if absent/expired
Reading
val, ok := c.Get("k")                  // updates LRU order; records stats
val, ok := c.Peek("k")                 // does NOT update LRU; does NOT record stats
val, loaded := c.GetOrSet("k", v, ttl) // atomic get-or-set
Iterating
// Lazy iteration over all live entries (no map allocation).
for key, value := range c.All() {
    fmt.Println(key, value)
}

// Iterate over keys only.
for key := range c.Keys() {
    fmt.Println(key)
}

// Snapshot (allocates a map copy) — prefer All() for large caches.
items := c.Items()
Deleting
c.Delete("k")          // explicit delete; fires eviction callback
c.DeleteExpired()      // manual sweep of all expired items
c.Flush()              // remove everything (callbacks NOT fired)
Inspection
n := c.Count()                    // number of items (may include expired)
items := c.Items()                // snapshot of all live items
s := c.Stats()                    // Stats{Hits, Misses, Evictions, HitRate}
c.ResetStats()                    // zero all counters
Eviction callback
c := imcache.New[MyStruct](ttl, cleanup,
    imcache.WithOnEvict(func(key string, val MyStruct) {
        log.Printf("evicted %s", key)
    }),
)

Fired on TTL expiry (lazily on Get, or in bulk by DeleteExpired and the janitor), on LRU capacity eviction (including evictions triggered by SetIfAbsent and GetOrSet), and on explicit Delete.

Options
Option                    Default         Description
WithNumShards(n)          256             Number of shards (rounded up to next power of 2)
WithMaxItemsPerShard(n)   0 (unbounded)   Per-shard LRU capacity limit
WithOnEvict(fn)           nil             Eviction callback, set at construction time

Architecture

Sharded locking

The cache maintains N independent shards (default 256, always a power of 2). Each shard owns its own sync.RWMutex. A key is assigned to a shard via an inline zero-allocation FNV-1a hash:

shard = fnv32a(key) & (numShards - 1)   // bitmasking, no division

Reads and writes on different shards never block each other, giving near-linear throughput scaling as goroutine count grows.

Read paths

Without LRU (WithMaxItemsPerShard not set): Get acquires a shared RLock and copies the value while holding it, allowing unlimited parallel readers on the same shard. Expired items are lazily deleted under a write lock only when detected.

With LRU: Get must promote the entry to the MRU head of a container/list, which requires an exclusive lock. Throughput is still much better than a single global lock because contention is spread across 256 shards.

Expiry

Items store their deadline as a Unix nanosecond timestamp (int64). expired() is a single integer compare — no time.Time allocation on the hot path.

Expiry happens in two ways:

  1. Lazy — detected and cleaned up on the first Get after expiry.
  2. Periodic — a background janitor goroutine calls DeleteExpired at the configured interval. The janitor stops cleanly on Close().

Important: Always call Close() when the cache is no longer needed to stop the background janitor goroutine and prevent goroutine leaks.

LRU eviction

When WithMaxItemsPerShard(n) is set, each shard maintains a container/list (the doubly-linked list from the standard library). Insertion and promotion are O(1). When a shard reaches capacity, the tail (LRU) entry is removed before the new entry is inserted.


Performance

On an Apple M1 Max (10 cores, Go 1.26), imcache is 2x faster than go-cache and 3-6x faster than golang-lru under concurrency, with zero allocations per operation and the lowest memory footprint among all tested libraries.

Benchmark (parallel, 10 goroutines)     ns/op   allocs/op
BenchmarkGet (pure reads)               ~66     0
BenchmarkSet (pure writes)              ~37     0
BenchmarkGetMixed (1,000 keys)          ~61     0
BenchmarkLRUSet (bounded, 256 shards)   ~53     0

For a detailed comparison against sync.Map, go-cache, golang-lru, bigcache, and freecache, see BENCHMARKS.md.


Running tests

# Unit tests
go test ./...

# With race detector (recommended before release)
go test -race -count=3 ./...

# Benchmarks
go test -bench=. -benchmem ./...

Contributing

PRs and issues are welcome. Please run go test -race ./... before submitting.


License

MIT — see LICENSE.

Documentation

Overview

Package imcache is a lightweight, zero-dependency, generic, sharded, thread-safe in-memory cache for Go 1.26+.

It is built entirely on the Go standard library with no external dependencies.

Features:

  • Generics: fully type-safe Cache[V any]; no interface{} casts at call sites.
  • Sharded locking: 256 independent RWMutex shards (configurable) so reads and writes on different keys never block each other.
  • LRU eviction: optional per-shard capacity limit with O(1) eviction via container/list.
  • Lazy + periodic expiry: expired items are removed on access and during periodic janitor sweeps; the janitor stops cleanly on Close().
  • Atomic stats: lock-free hit/miss/eviction counters.
  • Range iterators: All() and Keys() return iter.Seq2/iter.Seq for lazy, allocation-free iteration.

Constants

const (
	// NoExpiration indicates that an item should never expire.
	NoExpiration time.Duration = -1

	// DefaultExpiration uses the Cache's default TTL passed to New.
	DefaultExpiration time.Duration = 0
)

Variables

This section is empty.

Functions

This section is empty.

Types

type Cache

type Cache[V any] struct {
	// contains filtered or unexported fields
}

Cache is a generic, sharded, thread-safe in-memory key-value cache.

V is the value type; keys are always strings. Create one with New. A Cache must not be copied after first use.

func New

func New[V any](defaultTTL, cleanupInterval time.Duration, opts ...Option) *Cache[V]

New creates a new Cache.

Remember to call Cache.Close when the cache is no longer needed to stop the background goroutine (if cleanupInterval > 0).

Example
package main

import (
	"fmt"
	"time"

	"github.com/psdhajare/imcache"
)

func main() {
	// Create a cache with 5-minute default TTL and 10-minute janitor sweep.
	c := imcache.New[string](5*time.Minute, 10*time.Minute)
	defer c.Close()

	c.Set("greeting", "hello", imcache.DefaultExpiration)
	if val, ok := c.Get("greeting"); ok {
		fmt.Println(val)
	}
}
Output:

hello
Example (WithLRU)
package main

import (
	"fmt"

	"github.com/psdhajare/imcache"
)

func main() {
	// Create a bounded cache with LRU eviction: 4 shards x 100 items each.
	c := imcache.New[int](imcache.NoExpiration, 0,
		imcache.WithNumShards(4),
		imcache.WithMaxItemsPerShard(100),
	)
	defer c.Close()

	c.Set("answer", 42, imcache.NoExpiration)
	if val, ok := c.Get("answer"); ok {
		fmt.Println(val)
	}
}
Output:

42

func (*Cache[V]) All

func (c *Cache[V]) All() iter.Seq2[string, V]

All returns an iterator over all non-expired entries across all shards. Entries are yielded lazily under per-shard RLocks — no map allocation. The iteration order is non-deterministic.

Example
package main

import (
	"fmt"

	"github.com/psdhajare/imcache"
)

func main() {
	c := imcache.New[int](imcache.NoExpiration, 0)
	defer c.Close()

	c.Set("a", 1, imcache.NoExpiration)
	c.Set("b", 2, imcache.NoExpiration)

	sum := 0
	for _, v := range c.All() {
		sum += v
	}
	fmt.Println("sum:", sum)
}
Output:

sum: 3

func (*Cache[V]) Close

func (c *Cache[V]) Close()

Close stops the background janitor goroutine started by New. It is safe to call Close more than once and from multiple goroutines.

func (*Cache[V]) Count

func (c *Cache[V]) Count() int

Count returns the number of items currently held (including expired but not-yet-deleted items). For an exact live count, call DeleteExpired first.

func (*Cache[V]) Delete

func (c *Cache[V]) Delete(key string)

Delete removes key from the cache. The eviction callback is invoked if set. It is a no-op if the key does not exist.

func (*Cache[V]) DeleteExpired

func (c *Cache[V]) DeleteExpired()

DeleteExpired scans all shards and removes items that have passed their TTL. This is called automatically by the background janitor; you only need to call it directly if you disabled automatic cleanup.

func (*Cache[V]) Flush

func (c *Cache[V]) Flush()

Flush removes all items from every shard. The eviction callback is NOT invoked for flushed items for performance reasons; if you need per-item cleanup, iterate Items() before calling Flush.

func (*Cache[V]) Get

func (c *Cache[V]) Get(key string) (V, bool)

Get returns the value associated with key and true, or the zero value and false if the key is absent or expired. Accessing a key updates its LRU position when WithMaxItemsPerShard is set.

func (*Cache[V]) GetOrSet

func (c *Cache[V]) GetOrSet(key string, value V, ttl time.Duration) (actual V, loaded bool)

GetOrSet returns the existing value for key if it is present and not expired. Otherwise it stores value under key and returns value. The boolean reports whether an existing value was returned.

Example
package main

import (
	"fmt"
	"time"

	"github.com/psdhajare/imcache"
)

func main() {
	c := imcache.New[string](time.Hour, 0)
	defer c.Close()

	// First call stores and returns the value.
	val, loaded := c.GetOrSet("key", "computed-value", time.Hour)
	fmt.Println(val, loaded)

	// Second call returns the existing value.
	val, loaded = c.GetOrSet("key", "other-value", time.Hour)
	fmt.Println(val, loaded)
}
Output:

computed-value false
computed-value true

func (*Cache[V]) Items

func (c *Cache[V]) Items() map[string]Item[V]

Items returns a point-in-time snapshot of all non-expired items across all shards. The returned map is a copy; mutations do not affect the cache.

For large caches, prefer Cache.All which iterates lazily without allocating.

func (*Cache[V]) Keys

func (c *Cache[V]) Keys() iter.Seq[string]

Keys returns an iterator over all non-expired keys across all shards. The iteration order is non-deterministic. Each shard is held under an RLock while its keys are yielded.

func (*Cache[V]) Peek

func (c *Cache[V]) Peek(key string) (V, bool)

Peek returns the value for key without updating LRU order or recording stats. Returns zero value and false when the key is absent or expired.

func (*Cache[V]) ResetStats

func (c *Cache[V]) ResetStats()

ResetStats zeroes all performance counters.

func (*Cache[V]) Set

func (c *Cache[V]) Set(key string, value V, ttl time.Duration)

Set adds or replaces an item in the cache.

  • ttl == DefaultExpiration: use the cache's default TTL.
  • ttl == NoExpiration: item never expires.
  • ttl > 0: item expires after the given duration.

func (*Cache[V]) SetIfAbsent

func (c *Cache[V]) SetIfAbsent(key string, value V, ttl time.Duration) (actual V, loaded bool)

SetIfAbsent sets key only if it does not already exist (or has expired).

If the key already holds a live value, it returns that existing value and loaded=true. Otherwise it stores value, returns it, and reports loaded=false.

func (*Cache[V]) Stats

func (c *Cache[V]) Stats() Stats

Stats returns a point-in-time snapshot of hit/miss/eviction counters. Counters are updated atomically and can be read while the cache is in use.

type EvictCallback

type EvictCallback[V any] func(key string, value V)

EvictCallback is invoked whenever an item is removed from the cache, whether by TTL expiry, LRU capacity eviction, or explicit Cache.Delete.

The callback runs synchronously in the goroutine that triggered the eviction, but outside any shard lock. Slow callbacks will block the calling goroutine without affecting other cache operations.

Register a callback with WithOnEvict.

type Item

type Item[V any] struct {
	// Value is the cached value.
	Value V
	// ExpiresAt is the absolute time when this item expires.
	// A zero value means the item has no expiry.
	ExpiresAt time.Time
}

Item represents a point-in-time snapshot of a cached entry, returned by Cache.Items.

type Option

type Option func(*config)

Option is a functional option for configuring a Cache via New.

Available options: WithNumShards, WithMaxItemsPerShard, WithOnEvict.

func WithMaxItemsPerShard

func WithMaxItemsPerShard(n int) Option

WithMaxItemsPerShard sets the maximum number of items allowed per shard. When a shard reaches capacity, the least-recently-used (LRU) item is evicted before the new item is inserted.

Total cache capacity ≈ numShards × maxItemsPerShard. Default is 0 (unbounded).

func WithNumShards

func WithNumShards(n int) Option

WithNumShards sets the number of internal shards. Must be a positive integer; it will be rounded up to the next power of 2. Default is 256. More shards reduce lock contention under high write concurrency at the cost of slightly higher memory overhead.

func WithOnEvict

func WithOnEvict[V any](fn EvictCallback[V]) Option

WithOnEvict registers fn as the eviction callback. It is called synchronously (in the goroutine that triggered the eviction) for every evicted item. Pass nil to disable callbacks.

Example
package main

import (
	"fmt"

	"github.com/psdhajare/imcache"
)

func main() {
	c := imcache.New[string](imcache.NoExpiration, 0,
		imcache.WithNumShards(1),
		imcache.WithMaxItemsPerShard(1),
		imcache.WithOnEvict(func(key string, val string) {
			fmt.Printf("evicted %s=%s\n", key, val)
		}),
	)
	defer c.Close()

	c.Set("first", "a", imcache.NoExpiration)
	c.Set("second", "b", imcache.NoExpiration) // evicts "first"
}
Output:

evicted first=a

type Stats

type Stats struct {
	// Hits is the number of [Cache.Get] calls that found a live entry.
	Hits int64
	// Misses is the number of [Cache.Get] calls that found no live entry,
	// including lookups where the key existed but had expired.
	Misses int64
	// Evictions is the total number of items removed by TTL expiry,
	// LRU capacity eviction, or explicit [Cache.Delete].
	Evictions int64
	// HitRate is Hits / (Hits + Misses). It is 0 when no requests have
	// been made yet.
	HitRate float64
}

Stats holds cache performance counters. Obtain a snapshot with Cache.Stats and reset with Cache.ResetStats.
