About moka
Moka is a high-performance caching library for Rust. It provides concurrent, hash-map-based cache implementations with LRU-style size eviction and support for time-based expiration (TTL, time-to-live).
Some of moka's features:
- Multiple cache types: moka provides both a synchronous cache (moka::sync::Cache) and an asynchronous one (moka::future::Cache), each supporting size-based eviction and time-based expiration. You can pick whichever fits your use case.
- Thread safety: moka's caches are thread-safe and can be shared across threads without any extra synchronization.
- High performance: one of moka's design goals is a high-performance cache implementation; it is optimized to handle cache operations quickly under heavy concurrency.
- Configurability: moka lets you configure the cache as needed, e.g. a capacity limit and a maximum time-to-live or time-to-idle for entries.
moka's GitHub repository: moka.
Usage examples for moka
- Event notification:
moka can invoke a callback whenever an entry is removed, whether it expired, was explicitly removed by the caller, or was evicted because the cache reached its size limit.
use moka::{notification::RemovalCause, sync::Cache};
use std::time::{Duration, Instant};

fn main() {
    // A closure that listens for entry-removal events.
    let now = Instant::now();
    let listener = move |k, v: String, cause| {
        // RemovalCause covers four cases: Expired (the entry expired),
        // Explicit (the caller removed it), Replaced (the entry was updated
        // or replaced), and Size (the cache hit its capacity limit and
        // evicted it).
        println!(
            "== An entry has been evicted. time:{} k: {:?}, v: {:?},cause:{:?}",
            now.elapsed().as_secs(),
            k,
            v,
            cause
        );
        // Handle each cause separately if needed:
        // match cause {
        //     RemovalCause::Expired => {}
        //     RemovalCause::Explicit => {}
        //     RemovalCause::Replaced => {}
        //     RemovalCause::Size => {}
        // }
    };
    // Idle timeout: an entry expires 10s after its last access.
    let tti = Duration::from_secs(10);
    // Build a cache with idle-based expiration and an eviction listener.
    let cache: Cache<String, String> = Cache::builder()
        .time_to_idle(tti)
        .eviction_listener(listener)
        .build();
    // Insert some entries.
    cache.insert("key1".to_string(), "value1".to_string());
    cache.insert("key2".to_string(), "value2".to_string());
    cache.insert("key3".to_string(), "value3".to_string());
    // After 5s, access key1 (this resets its idle timer).
    std::thread::sleep(Duration::from_secs(5));
    if let Some(value) = cache.get("key1") {
        println!("5s: Value of key1: {}", value);
    }
    // Explicitly remove key3.
    cache.invalidate("key3");
    println!("5s: remove key3");
    // Wait another 6s so key2 (untouched for 11s) expires.
    std::thread::sleep(Duration::from_secs(6));
    // Try to read "key1".
    if let Some(value) = cache.get("key1") {
        println!("11s: Value of key1: {}", value);
    } else {
        println!("Key1 has expired.");
    }
    // Try to read "key2".
    if let Some(value) = cache.get("key2") {
        println!("11s: Value of key2: {}", value);
    } else {
        println!("Key2 has expired.");
    }
    // Try to read "key3".
    if let Some(value) = cache.get("key3") {
        println!("11s: Value of key3: {}", value);
    } else {
        println!("Key3 has been removed.");
    }
    // Leave the cache untouched for 11 more seconds so key1 also goes idle
    // past the 10s limit.
    std::thread::sleep(Duration::from_secs(11));
    // Try to read "key1" again.
    if let Some(value) = cache.get("key1") {
        println!("22s: Value of key1: {}", value);
    } else {
        println!("Key1 has expired.");
    }
}
Output:
5s: Value of key1: value1
== An entry has been evicted. time:5 k: "key3", v: "value3",cause:Explicit
5s: remove key3
== An entry has been evicted. time:10 k: "key2", v: "value2",cause:Expired
11s: Value of key1: value1
Key2 has expired.
Key3 has been removed.
== An entry has been evicted. time:21 k: "key1", v: "value1",cause:Expired
Key1 has expired.
- Synchronous concurrent access:
use moka::sync::Cache;
use std::thread;

fn value(n: usize) -> String {
    format!("value {}", n)
}

fn main() {
    const NUM_THREADS: usize = 3;
    const NUM_KEYS_PER_THREAD: usize = 2;
    // Create a cache that can store up to 6 entries.
    let cache = Cache::new(6);
    // Spawn threads that read and update the cache simultaneously.
    let threads: Vec<_> = (0..NUM_THREADS)
        .map(|i| {
            // To share the same cache across the threads, clone it.
            // This is a cheap operation.
            let my_cache = cache.clone();
            let start = i * NUM_KEYS_PER_THREAD;
            let end = (i + 1) * NUM_KEYS_PER_THREAD;
            thread::spawn(move || {
                // Insert 2 entries. (NUM_KEYS_PER_THREAD = 2)
                for key in start..end {
                    my_cache.insert(key, value(key));
                    println!("{}", my_cache.get(&key).unwrap());
                }
                // Invalidate every 2nd inserted entry (the even keys).
                for key in (start..end).step_by(2) {
                    my_cache.invalidate(&key);
                }
            })
        })
        .collect();
    // Wait for all threads to complete.
    threads.into_iter().for_each(|t| t.join().expect("Failed"));
    // Verify the result.
    for key in 0..(NUM_THREADS * NUM_KEYS_PER_THREAD) {
        if key % 2 == 0 {
            assert_eq!(cache.get(&key), None);
        } else {
            assert_eq!(cache.get(&key), Some(value(key)));
        }
    }
}
Output:
value 2
value 3
value 0
value 4
value 1
value 5
Concurrent reads and writes to the cache work without errors; the order of the printed lines varies from run to run because the threads race.
- Below is the asynchronous example provided in the moka library's examples:
use moka::future::Cache;

#[tokio::main]
async fn main() {
    const NUM_TASKS: usize = 16;
    const NUM_KEYS_PER_TASK: usize = 64;

    fn value(n: usize) -> String {
        format!("value {}", n)
    }

    // Create a cache that can store up to 10,000 entries.
    let cache = Cache::new(10_000);
    // Spawn async tasks and write to and read from the cache.
    let tasks: Vec<_> = (0..NUM_TASKS)
        .map(|i| {
            // To share the same cache across the async tasks, clone it.
            // This is a cheap operation.
            let my_cache = cache.clone();
            let start = i * NUM_KEYS_PER_TASK;
            let end = (i + 1) * NUM_KEYS_PER_TASK;
            tokio::spawn(async move {
                // Insert 64 entries. (NUM_KEYS_PER_TASK = 64)
                for key in start..end {
                    // insert() is an async method, so await it.
                    my_cache.insert(key, value(key)).await;
                    // get() returns Option<String>, a clone of the stored value.
                    assert_eq!(my_cache.get(&key), Some(value(key)));
                }
                // Invalidate every 4th inserted entry.
                for key in (start..end).step_by(4) {
                    // invalidate() is an async method, so await it.
                    my_cache.invalidate(&key).await;
                }
            })
        })
        .collect();
    // Wait for all tasks to complete.
    futures_util::future::join_all(tasks).await;
    // Verify the result.
    for key in 0..(NUM_TASKS * NUM_KEYS_PER_TASK) {
        if key % 4 == 0 {
            assert_eq!(cache.get(&key), None);
        } else {
            assert_eq!(cache.get(&key), Some(value(key)));
        }
    }
}