A Redis distributed lock pattern enhanced by client-side caching.
```go
package main

import (
	"context"

	"github.com/rueian/rueidis"
	"github.com/rueian/rueidis/rueidislock"
)

func main() {
	locker, err := rueidislock.NewLocker(rueidislock.LockerOption{
		ClientOption: rueidis.ClientOption{InitAddress: []string{"node1:6379", "node2:6380", "node3:6379"}},
		KeyMajority:  2,
	})
	if err != nil {
		panic(err)
	}
	defer locker.Close()

	// acquire the lock "my_lock"
	ctx, cancel, err := locker.WithContext(context.Background(), "my_lock")
	if err != nil {
		panic(err)
	}

	// "my_lock" is acquired. use the ctx as normal.
	doSomething(ctx)

	// invoke cancel() to release the lock.
	cancel()
}
```
- The returned `ctx` will be canceled automatically if the `KeyMajority` is not held anymore.
- The waiting `Locker.WithContext` will retry once the lock is released by its current holder.
When `Locker.WithContext` is invoked, it will:

- Try acquiring 3 keys (given that `KeyMajority` is 2), namely `rueidislock:0:my_lock`, `rueidislock:1:my_lock` and `rueidislock:2:my_lock`, by sending the Redis command `SET NX PXAT`.
- If the `KeyMajority` is satisfied within the `KeyValidity` duration, the invocation succeeds and a `ctx` is returned.
- If the invocation fails, it waits for a client-side caching notification before retrying.
- If the invocation succeeds, the `Locker` extends the `ctx` validity periodically and also watches client-side caching notifications, canceling the `ctx` if the `KeyMajority` is no longer held.
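The key layout described above can be sketched as follows. This assumes the total key count is `KeyMajority*2 - 1`, which is consistent with the 3 keys listed for a majority of 2; `lockKeys` is a hypothetical helper for illustration, not part of the `rueidislock` API:

```go
package main

import "fmt"

// lockKeys builds the per-lock key names described above:
// rueidislock:<i>:<name> for i in [0, majority*2-1).
// (Illustrative only; assumes key count = majority*2 - 1.)
func lockKeys(name string, majority int) []string {
	n := majority*2 - 1
	keys := make([]string, 0, n)
	for i := 0; i < n; i++ {
		keys = append(keys, fmt.Sprintf("rueidislock:%d:%s", i, name))
	}
	return keys
}

func main() {
	for _, k := range lockKeys("my_lock", 2) {
		fmt.Println(k) // the 3 keys SET NX PXAT is attempted on
	}
}
```

Holding a majority of these keys (2 of 3 here) is what lets the lock tolerate the loss of a minority of key holders, while the client-side caching notifications on those keys drive both the retry and the cancellation paths without polling.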