Documentation ¶
Index ¶
- func ToPlainMap[K comparable, V any](m *Map[K, V]) map[K]V
- func WithGrowOnly() func(*MapConfig)
- func WithPresize(sizeHint int) func(*MapConfig)
- func WithSerialResize() func(*MapConfig) deprecated
- type ComputeOp
- type Counter
- type MPMCQueue
- type MPMCQueueOf deprecated
- type Map
- func (m *Map[K, V]) Clear()
- func (m *Map[K, V]) Compute(key K, valueFn func(oldValue V, loaded bool) (newValue V, op ComputeOp)) (actual V, ok bool)
- func (m *Map[K, V]) Delete(key K)
- func (m *Map[K, V]) Load(key K) (value V, ok bool)
- func (m *Map[K, V]) LoadAndDelete(key K) (value V, loaded bool)
- func (m *Map[K, V]) LoadAndStore(key K, value V) (actual V, loaded bool)
- func (m *Map[K, V]) LoadOrCompute(key K, valueFn func() (newValue V, cancel bool)) (value V, loaded bool)
- func (m *Map[K, V]) LoadOrStore(key K, value V) (actual V, loaded bool)
- func (m *Map[K, V]) Range(f func(key K, value V) bool)
- func (m *Map[K, V]) Size() int
- func (m *Map[K, V]) Stats() MapStats
- func (m *Map[K, V]) Store(key K, value V)
- type MapConfig
- type MapOf deprecated
- type MapStats
- type RBMutex
- type RToken
- type SPSCQueue
- type SPSCQueueOf deprecated
- type UMPSCQueue
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func ToPlainMap ¶
func ToPlainMap[K comparable, V any](m *Map[K, V]) map[K]V
ToPlainMap returns a native map with a copy of xsync Map's contents. The copied xsync Map should not be modified while this call is made. If the copied Map is modified, the copying behavior is the same as in the Range method.
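For illustration, here is a minimal sketch of snapshotting a Map into a native map; the github.com/puzpuzpuz/xsync/v4 import path is assumed and not stated in this excerpt.

package main

import (
	"fmt"

	"github.com/puzpuzpuz/xsync/v4"
)

func main() {
	m := xsync.NewMap[string, int]()
	m.Store("a", 1)
	m.Store("b", 2)

	// No goroutine is modifying m here, so the result is an exact snapshot.
	plain := xsync.ToPlainMap(m)
	fmt.Println(len(plain), plain["a"], plain["b"]) // 2 1 2
}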
func WithGrowOnly ¶
func WithGrowOnly() func(*MapConfig)
WithGrowOnly configures a new Map instance to be grow-only. This means that the underlying hash table grows in capacity when new keys are added, but does not shrink when keys are deleted. The only exception to this rule is the Clear method, which shrinks the hash table back to the initial capacity.
func WithPresize ¶
func WithPresize(sizeHint int) func(*MapConfig)
WithPresize configures a new Map instance with capacity sufficient to hold sizeHint entries. The capacity is treated as the minimal capacity, meaning that the underlying hash table will never shrink to a smaller capacity. If sizeHint is zero or negative, the value is ignored.
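A hedged sketch of combining the two options (same assumed v4 import path as above):

package main

import "github.com/puzpuzpuz/xsync/v4"

func main() {
	// Reserve room for roughly one million entries up front and keep the
	// hash table from shrinking when entries are later deleted.
	m := xsync.NewMap[uint64, string](
		xsync.WithPresize(1_000_000),
		xsync.WithGrowOnly(),
	)
	m.Store(42, "answer")
}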
func WithSerialResize
deprecated
func WithSerialResize() func(*MapConfig)
Deprecated: map resizing now happens cooperatively, without starting any additional goroutines.
Types ¶
type ComputeOp ¶
type ComputeOp int
const (
	// CancelOp signals to Compute to not do anything as a result
	// of executing the lambda. If the entry was not present in
	// the map, nothing happens, and if it was present, the
	// returned value is ignored.
	CancelOp ComputeOp = iota
	// UpdateOp signals to Compute to update the entry to the
	// value returned by the lambda, creating it if necessary.
	UpdateOp
	// DeleteOp signals to Compute to always delete the entry
	// from the map.
	DeleteOp
)
type Counter ¶
type Counter struct {
// contains filtered or unexported fields
}
A Counter is a striped int64 counter.
Should be preferred over a single atomically updated int64 counter in high contention scenarios.
A Counter must not be copied after first use.
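As a sketch only: the NewCounter constructor and the Inc/Value methods used below are assumed from the Counter API and are not listed in this excerpt.

package main

import (
	"fmt"
	"sync"

	"github.com/puzpuzpuz/xsync/v4"
)

func main() {
	c := xsync.NewCounter() // constructor assumed; not shown in this excerpt
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				c.Inc() // striped increment; scales under contention
			}
		}()
	}
	wg.Wait()
	fmt.Println(c.Value()) // assumed accessor; prints 8000
}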
type MPMCQueue ¶
type MPMCQueue[I any] struct {
	// contains filtered or unexported fields
}
An MPMCQueue is a bounded multi-producer multi-consumer concurrent queue.
MPMCQueue instances must be created with the NewMPMCQueue function. An MPMCQueue must not be copied after first use.
Based on the data structure from the following C++ library: https://github.com/rigtorp/MPMCQueue
func NewMPMCQueue ¶
NewMPMCQueue creates a new MPMCQueue instance with the given capacity.
func NewMPMCQueueOf
deprecated
Deprecated: use NewMPMCQueue.
func (*MPMCQueue[I]) TryDequeue ¶
TryDequeue retrieves and removes the item from the head of the queue. Does not block and returns immediately. The ok result indicates that the queue isn't empty and an item was retrieved.
func (*MPMCQueue[I]) TryEnqueue ¶
TryEnqueue inserts the given item into the queue. Does not block and returns immediately. The result indicates that the queue isn't full and the item was inserted.
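A non-blocking producer/consumer sketch; the NewMPMCQueue capacity argument and the exact Try* signatures are inferred from the descriptions above, and the v4 import path is assumed.

package main

import (
	"fmt"

	"github.com/puzpuzpuz/xsync/v4"
)

func main() {
	q := xsync.NewMPMCQueue[int](128) // bounded queue with capacity 128

	// Producer side: TryEnqueue fails instead of blocking when the queue is full.
	if !q.TryEnqueue(42) {
		fmt.Println("queue full, item dropped")
	}

	// Consumer side: TryDequeue fails instead of blocking when the queue is empty.
	if v, ok := q.TryDequeue(); ok {
		fmt.Println(v) // 42
	}
}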
type MPMCQueueOf
deprecated
type Map ¶
type Map[K comparable, V any] struct {
	// contains filtered or unexported fields
}
Map is like a Go map[K]V but is safe for concurrent use by multiple goroutines without additional locking or coordination. It follows the interface of sync.Map with a number of valuable extensions like Compute or Size.
A Map must not be copied after first use.
Map uses a modified version of Cache-Line Hash Table (CLHT) data structure: https://github.com/LPD-EPFL/CLHT
CLHT is built around the idea of organizing the hash table into cache-line-sized buckets, so that on all modern CPUs update operations complete with at most one cache-line transfer. Also, Get operations involve no writes to memory, as well as no mutexes or any other sort of locks. Due to this design, in all considered scenarios Map outperforms sync.Map.
Map also borrows ideas from Java's j.u.c.ConcurrentHashMap (immutable K/V pair structs instead of atomic snapshots) and C++'s absl::flat_hash_map (meta memory and SWAR-based lookups).
func NewMap ¶
func NewMap[K comparable, V any](options ...func(*MapConfig)) *Map[K, V]
NewMap creates a new Map instance configured with the given options.
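A minimal usage sketch (v4 import path assumed):

package main

import (
	"fmt"

	"github.com/puzpuzpuz/xsync/v4"
)

func main() {
	m := xsync.NewMap[string, int]()
	m.Store("answer", 42)

	if v, ok := m.Load("answer"); ok {
		fmt.Println(v) // 42
	}

	m.Delete("answer")
	fmt.Println(m.Size()) // 0
}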
func (*Map[K, V]) Clear ¶
func (m *Map[K, V]) Clear()
Clear deletes all keys and values currently stored in the map.
func (*Map[K, V]) Compute ¶
func (m *Map[K, V]) Compute(
	key K,
	valueFn func(oldValue V, loaded bool) (newValue V, op ComputeOp),
) (actual V, ok bool)
Compute either sets the computed new value for the key, deletes the value for the key, or does nothing, based on the returned ComputeOp. When the op returned by valueFn is UpdateOp, the value is updated to the new value. If it is DeleteOp, the entry is removed from the map altogether. Finally, if the op is CancelOp, the entry is left as-is: if it did not already exist, it is not created, and if it did exist, it is not updated. This is useful when you need to synchronously execute some operation on the value without incurring the cost of updating the map every time. The ok result indicates whether the entry is present in the map after the compute operation. The actual result contains the value stored in the map if a corresponding entry is present, or the zero value otherwise. See the example for a few use cases.
This call locks a hash table bucket while the compute function is executed. It means that modifications on other entries in the bucket will be blocked until the valueFn executes. Consider this when the function includes long-running operations.
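A sketch of the three ComputeOp outcomes, based on the semantics described above (v4 import path assumed):

package main

import (
	"fmt"

	"github.com/puzpuzpuz/xsync/v4"
)

func main() {
	m := xsync.NewMap[string, int]()
	m.Store("hits", 1)

	// UpdateOp: increment the existing value (or create it when loaded is false).
	v, ok := m.Compute("hits", func(old int, loaded bool) (int, xsync.ComputeOp) {
		return old + 1, xsync.UpdateOp
	})
	fmt.Println(v, ok) // 2 true

	// DeleteOp: remove the entry regardless of its current value.
	m.Compute("hits", func(old int, loaded bool) (int, xsync.ComputeOp) {
		return 0, xsync.DeleteOp
	})

	// CancelOp: inspect the entry without creating or updating it.
	v, ok = m.Compute("hits", func(old int, loaded bool) (int, xsync.ComputeOp) {
		return 0, xsync.CancelOp
	})
	fmt.Println(v, ok) // 0 false
}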
func (*Map[K, V]) Load ¶
Load returns the value stored in the map for a key, or zero value of type V if no value is present. The ok result indicates whether value was found in the map.
func (*Map[K, V]) LoadAndDelete ¶
LoadAndDelete deletes the value for a key, returning the previous value if any. The loaded result reports whether the key was present.
func (*Map[K, V]) LoadAndStore ¶
LoadAndStore returns the existing value for the key if present, while setting the new value for the key. The loaded result is true if the existing value was loaded, false otherwise.
func (*Map[K, V]) LoadOrCompute ¶
func (m *Map[K, V]) LoadOrCompute(
	key K,
	valueFn func() (newValue V, cancel bool),
) (value V, loaded bool)
LoadOrCompute returns the existing value for the key if present. Otherwise, it tries to compute the value using the provided function and, if successful, stores and returns the computed value. The loaded result is true if the value was loaded, or false if computed. If valueFn returns true as the cancel value, the computation is cancelled and the zero value for type V is returned.
This call locks a hash table bucket while the compute function is executed. It means that modifications on other entries in the bucket will be blocked until the valueFn executes. Consider this when the function includes long-running operations.
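A sketch of lazily computing a value once, with cancellation on failure; the config.json path is purely illustrative and the v4 import path is assumed.

package main

import (
	"fmt"
	"os"

	"github.com/puzpuzpuz/xsync/v4"
)

func main() {
	m := xsync.NewMap[string, []byte]()

	// The file is read only if the key is absent. Returning cancel=true
	// stores nothing and yields the zero value (a nil []byte).
	v, loaded := m.LoadOrCompute("config", func() ([]byte, bool) {
		data, err := os.ReadFile("config.json") // illustrative path
		if err != nil {
			return nil, true // cancel the computation
		}
		return data, false
	})
	fmt.Println(loaded, len(v))
}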
func (*Map[K, V]) LoadOrStore ¶
LoadOrStore returns the existing value for the key if present. Otherwise, it stores and returns the given value. The loaded result is true if the value was loaded, false if stored.
func (*Map[K, V]) Range ¶
Range calls f sequentially for each key and value present in the map. If f returns false, range stops the iteration.
Range does not necessarily correspond to any consistent snapshot of the Map's contents: no key will be visited more than once, but if the value for any key is stored or deleted concurrently, Range may reflect any mapping for that key from any point during the Range call.
It is safe to modify the map while iterating it, including entry creation, modification, and deletion. However, the concurrent modification rules apply, i.e. the changes may not be reflected in subsequently iterated entries.
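A sketch of iterating over all entries (v4 import path assumed):

package main

import (
	"fmt"

	"github.com/puzpuzpuz/xsync/v4"
)

func main() {
	m := xsync.NewMap[string, int]()
	m.Store("a", 1)
	m.Store("b", 2)

	sum := 0
	m.Range(func(key string, value int) bool {
		sum += value
		return true // return false to stop iterating early
	})
	fmt.Println(sum) // 3
}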
type MapConfig ¶
type MapConfig struct {
// contains filtered or unexported fields
}
MapConfig defines configurable Map options.
type MapOf
deprecated
type MapOf[K comparable, V any] = Map[K, V]
Deprecated: use Map.
type MapStats ¶
type MapStats struct {
// RootBuckets is the number of root buckets in the hash table.
// Each bucket holds a few entries.
RootBuckets int
// TotalBuckets is the total number of buckets in the hash table,
// including root and their chained buckets. Each bucket holds
// a few entries.
TotalBuckets int
// EmptyBuckets is the number of buckets that hold no entries.
EmptyBuckets int
// Capacity is the Map capacity, i.e. the total number of
// entries that all buckets can physically hold. This number
// does not consider the load factor.
Capacity int
// Size is the exact number of entries stored in the map.
Size int
// Counter is the number of entries stored in the map according
// to the internal atomic counter. In case of concurrent map
// modifications this number may be different from Size.
Counter int
// CounterLen is the number of internal atomic counter stripes.
// This number may grow with the map capacity to improve
// multithreaded scalability.
CounterLen int
// MinEntries is the minimum number of entries per chain of
// buckets, i.e. a root bucket and its chained buckets.
MinEntries int
// MaxEntries is the maximum number of entries per chain of
// buckets, i.e. a root bucket and its chained buckets.
MaxEntries int
// TotalGrowths is the number of times the hash table grew.
TotalGrowths int64
// TotalShrinks is the number of times the hash table shrank.
TotalShrinks int64
}
MapStats is Map statistics.
Warning: map statistics are intended to be used for diagnostic purposes, not for production code. This means that breaking changes may be introduced into this struct even between minor releases.
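A diagnostic sketch that prints a few of the fields above; the exact values depend on the build and the load (v4 import path assumed).

package main

import (
	"fmt"

	"github.com/puzpuzpuz/xsync/v4"
)

func main() {
	m := xsync.NewMap[int, int](xsync.WithPresize(1024))
	for i := 0; i < 100; i++ {
		m.Store(i, i)
	}

	s := m.Stats()
	fmt.Printf("size=%d capacity=%d rootBuckets=%d growths=%d shrinks=%d\n",
		s.Size, s.Capacity, s.RootBuckets, s.TotalGrowths, s.TotalShrinks)
}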
type RBMutex ¶
type RBMutex struct {
// contains filtered or unexported fields
}
An RBMutex is a reader-biased reader/writer mutual exclusion lock. The lock can be held by an arbitrary number of readers or a single writer. The zero value for an RBMutex is an unlocked mutex.
An RBMutex must not be copied after first use.
RBMutex is based on a modified version of BRAVO (Biased Locking for Reader-Writer Locks) algorithm: https://arxiv.org/pdf/1810.01553.pdf
RBMutex is a specialized mutex for scenarios, such as caches, where the vast majority of locks are acquired by readers and write lock acquire attempts are infrequent. In such scenarios, RBMutex performs better than sync.RWMutex on large multicore machines.
RBMutex extends sync.RWMutex internally and uses it as the "reader bias disabled" fallback, so the same semantics apply. The only noticeable difference is the reader tokens returned by RLock and required by RUnlock.
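A sketch of the token-based read locking; the exact RLock/RUnlock signatures are inferred from the method descriptions below, and the v4 import path is assumed.

package main

import (
	"fmt"

	"github.com/puzpuzpuz/xsync/v4"
)

var (
	mu    xsync.RBMutex // zero value is an unlocked mutex
	cache = map[string]string{}
)

func read(key string) string {
	// RLock hands out a reader token that must be passed back to RUnlock.
	t := mu.RLock()
	defer mu.RUnlock(t)
	return cache[key]
}

func write(key, value string) {
	// The writer side behaves exactly like sync.RWMutex.
	mu.Lock()
	defer mu.Unlock()
	cache[key] = value
}

func main() {
	write("k", "v")
	fmt.Println(read("k")) // v
}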
func (*RBMutex) Lock ¶
func (mu *RBMutex) Lock()
Lock locks m for writing. If the lock is already locked for reading or writing, Lock blocks until the lock is available.
func (*RBMutex) RLock ¶
RLock locks m for reading and returns a reader token. The token must be used in the later RUnlock call.
Should not be used for recursive read locking; a blocked Lock call excludes new readers from acquiring the lock.
func (*RBMutex) RUnlock ¶
RUnlock undoes a single RLock call. A reader token obtained from the RLock call must be provided. RUnlock does not affect other simultaneous readers. A panic is raised if m is not locked for reading on entry to RUnlock.
func (*RBMutex) TryRLock ¶
TryRLock tries to lock m for reading without blocking. When TryRLock succeeds, it returns true and a reader token. In case of failure, false is returned.
func (*RBMutex) Unlock ¶
func (mu *RBMutex) Unlock()
Unlock unlocks m for writing. A panic is raised if m is not locked for writing on entry to Unlock.
As with RWMutex, a locked RBMutex is not associated with a particular goroutine. One goroutine may RLock (Lock) a RBMutex and then arrange for another goroutine to RUnlock (Unlock) it.
type RToken ¶
type RToken struct {
// contains filtered or unexported fields
}
RToken is a reader lock token.
type SPSCQueue ¶
type SPSCQueue[I any] struct {
	// contains filtered or unexported fields
}
An SPSCQueue is a bounded single-producer single-consumer concurrent queue. This means that at most one goroutine may publish items to the queue, and at most one goroutine may consume those items.
SPSCQueue instances must be created with the NewSPSCQueue function. An SPSCQueue must not be copied after first use.
Based on the data structure from the following article: https://rigtorp.se/ringbuffer/
func NewSPSCQueue ¶
NewSPSCQueue creates a new SPSCQueue instance with the given capacity.
func NewSPSCQueueOf
deprecated
Deprecated: use NewSPSCQueue.
func (*SPSCQueue[I]) TryDequeue ¶
TryDequeue retrieves and removes the item from the head of the queue. Does not block and returns immediately. The ok result indicates that the queue isn't empty and an item was retrieved.
func (*SPSCQueue[I]) TryEnqueue ¶
TryEnqueue inserts the given item into the queue. Does not block and returns immediately. The result indicates that the queue isn't full and the item was inserted.
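A sketch with exactly one producer goroutine and one consumer; the NewSPSCQueue capacity argument and Try* signatures are inferred from the descriptions above, and the v4 import path is assumed.

package main

import (
	"fmt"
	"runtime"

	"github.com/puzpuzpuz/xsync/v4"
)

func main() {
	q := xsync.NewSPSCQueue[int](64)

	// The single producer: spin-retry with a yield when the queue is full.
	go func() {
		for i := 0; i < 10; i++ {
			for !q.TryEnqueue(i) {
				runtime.Gosched()
			}
		}
	}()

	// The single consumer: spin-retry with a yield when the queue is empty.
	sum := 0
	for received := 0; received < 10; {
		if v, ok := q.TryDequeue(); ok {
			sum += v
			received++
		} else {
			runtime.Gosched()
		}
	}
	fmt.Println(sum) // 45
}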
type SPSCQueueOf
deprecated
type UMPSCQueue ¶
type UMPSCQueue[T any] struct {
	// contains filtered or unexported fields
}
A UMPSCQueue is an unbounded multi-producer single-consumer concurrent queue. It is meant to serve as a replacement for a channel. However, crucially, it has infinite capacity. In many cases this is a very bad idea, as it means the queue never exhibits backpressure: if nothing is consuming elements from the queue, it will eventually consume all available memory and crash the process. However, there are also cases where this is the desired behavior, as it means the queue will dynamically allocate more memory to absorb temporary bursts, allowing producers to never block while the consumer catches up.
Note however that because no locks are acquired, it is unsafe for multiple goroutines to consume from the queue. Consumers must explicitly synchronize between themselves.
func NewUMPSCQueue ¶
func NewUMPSCQueue[T any]() *UMPSCQueue[T]
NewUMPSCQueue creates a new UMPSCQueue instance.
func (*UMPSCQueue[T]) Dequeue ¶
func (q *UMPSCQueue[T]) Dequeue() T
Dequeue returns the next value in the queue, blocking if it is empty. It is not safe to invoke Dequeue from multiple goroutines.
func (*UMPSCQueue[T]) Enqueue ¶
func (q *UMPSCQueue[T]) Enqueue(value T)
Enqueue writes the given value to the queue. It never blocks and is safe to be called by multiple goroutines concurrently.
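A sketch with several producers and a single consumer, matching the constraints above (v4 import path assumed):

package main

import (
	"fmt"

	"github.com/puzpuzpuz/xsync/v4"
)

func main() {
	q := xsync.NewUMPSCQueue[int]()

	// Any number of producers may call Enqueue concurrently; it never blocks.
	for i := 0; i < 4; i++ {
		go func(v int) {
			q.Enqueue(v)
		}(i)
	}

	// Exactly one consumer: Dequeue blocks until an item is available.
	sum := 0
	for i := 0; i < 4; i++ {
		sum += q.Dequeue()
	}
	fmt.Println(sum) // 0+1+2+3 = 6
}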