Initial commit: Open sourcing all of the Maple Open Technologies code.

Bartlomiej Mika 2025-12-02 14:33:08 -05:00
commit 755d54a99d
2010 changed files with 448675 additions and 0 deletions

@@ -0,0 +1,237 @@
# Distributed Mutex
A Redis-based distributed mutex implementation for coordinating access to shared resources across multiple application instances.
## Overview
This package provides a distributed locking mechanism using Redis as the coordination backend. It's built on top of the `redislock` library and provides a simple interface for acquiring and releasing locks across distributed systems.
## Features
- **Distributed Locking**: Coordinate access to shared resources across multiple application instances
- **Automatic Retry**: Built-in retry logic with configurable backoff strategy
- **Thread-Safe**: Safe for concurrent use within a single application
- **Formatted Keys**: Support for formatted lock keys using `Acquiref` and `Releasef`
- **Logging**: Integrated zap logging for debugging and monitoring
## Installation
The package is already included in the project. The required dependency (`github.com/bsm/redislock`) is automatically installed.
## Interface
```go
type Adapter interface {
    Acquire(ctx context.Context, key string) error
    Acquiref(ctx context.Context, format string, a ...any) error
    Release(ctx context.Context, key string) error
    Releasef(ctx context.Context, format string, a ...any) error
}
```
## Usage
### Basic Example
```go
import (
    "context"

    "github.com/redis/go-redis/v9"
    "go.uber.org/zap"

    "codeberg.org/mapleopentech/monorepo/cloud/maplepress-backend/pkg/distributedmutex"
)

// Create Redis client
redisClient := redis.NewClient(&redis.Options{
    Addr: "localhost:6379",
})

// Create logger
logger, _ := zap.NewProduction()

// Create distributed mutex adapter
mutex := distributedmutex.NewAdapter(logger, redisClient)

// Acquire a lock
ctx := context.Background()
if err := mutex.Acquire(ctx, "my-resource-key"); err != nil {
    return err
}

// ... perform operations on the protected resource ...

// Release the lock
if err := mutex.Release(ctx, "my-resource-key"); err != nil {
    return err
}
```
### Formatted Keys Example
```go
// Acquire lock with formatted key
tenantID := "tenant-123"
resourceID := "resource-456"
if err := mutex.Acquiref(ctx, "tenant:%s:resource:%s", tenantID, resourceID); err != nil {
    return err
}
defer mutex.Releasef(ctx, "tenant:%s:resource:%s", tenantID, resourceID)

// ... perform operations ...
```
### Integration with Dependency Injection (Wire)
```go
// In your Wire provider set
wire.NewSet(
    distributedmutex.ProvideDistributedMutexAdapter,
    // ... other providers
)

// Use in your application
func NewMyService(mutex distributedmutex.Adapter) *MyService {
    return &MyService{
        mutex: mutex,
    }
}
```
## Configuration
### Lock Duration
The default lock duration is **1 minute**. Locks are automatically released after this time to prevent deadlocks.
### Retry Strategy
- **Retry Interval**: 250ms
- **Max Retries**: 20 attempts
- **Total Max Wait Time**: ~5 seconds (20 × 250ms)
If a lock cannot be obtained after all retries, an error is logged and returned to the caller, so `Acquire` never blocks indefinitely.
## Best Practices
1. **Always Release Locks**: Ensure locks are released even in error cases using `defer`
```go
if err := mutex.Acquire(ctx, "my-key"); err != nil {
    return err
}
defer mutex.Release(ctx, "my-key")
```
2. **Use Descriptive Keys**: Use clear, hierarchical key names
```go
// Good
mutex.Acquire(ctx, "tenant:123:user:456:update")
// Not ideal
mutex.Acquire(ctx, "lock1")
```
3. **Keep Critical Sections Short**: Minimize the time locks are held to improve concurrency
4. **Handle Timeouts**: Use context with timeout for critical operations
```go
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := mutex.Acquire(ctx, "my-key"); err != nil {
    return err
}
```
5. **Avoid Nested Locks**: Be careful with acquiring multiple locks to avoid deadlocks
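The hierarchical keys recommended in practice 2 can be built with plain `fmt.Sprintf`, which is also what `Acquiref` does internally with its format arguments. The `lockKey` helper below is illustrative and not part of the package:

```go
package main

import "fmt"

// lockKey builds a hierarchical lock key of the "good" form shown
// above. Illustrative helper only; Acquiref applies the same
// fmt.Sprintf formatting internally.
func lockKey(tenantID, userID, action string) string {
	return fmt.Sprintf("tenant:%s:user:%s:%s", tenantID, userID, action)
}

func main() {
	fmt.Println(lockKey("123", "456", "update")) // tenant:123:user:456:update
}
```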
## Logging
The adapter logs the following events:
- **Debug**: Lock acquisition and release operations
- **Error**: Failed lock acquisitions, timeout errors, and release failures
- **Warn**: Attempts to release non-existent locks
## Thread Safety
The adapter is safe for concurrent use within a single application instance. It uses an internal mutex to protect the lock instances map from concurrent access by multiple goroutines.
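The mutex-guarded map described above can be sketched in isolation. This is a simplified model of the adapter's internal state, not its actual code; the real map stores `*redislock.Lock` values rather than the empty struct used here:

```go
package main

import (
	"fmt"
	"sync"
)

// lockRegistry models the adapter's internal state: a plain Go map is
// not safe for concurrent use, so every access is wrapped in a mutex.
type lockRegistry struct {
	mu    sync.Mutex
	locks map[string]struct{} // stands in for map[string]*redislock.Lock
}

// store records a held lock under its key, as Acquire does on success.
func (r *lockRegistry) store(key string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.locks[key] = struct{}{}
}

// remove deletes the key and reports whether it was present, mirroring
// the lookup-and-delete step in Release.
func (r *lockRegistry) remove(key string) bool {
	r.mu.Lock()
	defer r.mu.Unlock()
	_, ok := r.locks[key]
	delete(r.locks, key)
	return ok
}

func main() {
	r := &lockRegistry{locks: make(map[string]struct{})}
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ { // concurrent goroutines, no data race
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			key := fmt.Sprintf("key-%d", i)
			r.store(key)
			r.remove(key)
		}(i)
	}
	wg.Wait()
	fmt.Println(len(r.locks)) // prints 0
}
```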
## Error Handling
All adapter methods return errors in addition to logging them. Keep the following in mind when using the adapter:
- `Acquire` returns an error when the lock cannot be obtained after all retries, or when Redis is unreachable
- `Release` returns an error when the underlying Redis release fails; releasing a key that is not held only logs a warning and returns `nil`
- Lock failures never panic; the application keeps running, so decide at each call site whether a failed lock is fatal
- Check logs for lock-related issues in production
## Limitations
1. **Lock Duration**: Locks automatically expire after 1 minute
2. **No Lock Extension**: Currently doesn't support extending lock duration
3. **No Deadlock Detection**: Manual deadlock prevention is required
4. **Redis Dependency**: Requires a running Redis instance
## Example Use Cases
### Preventing Duplicate Processing
```go
func ProcessJob(ctx context.Context, jobID string, mutex distributedmutex.Adapter) error {
    lockKey := fmt.Sprintf("job:processing:%s", jobID)
    if err := mutex.Acquire(ctx, lockKey); err != nil {
        return err // another instance may already hold the lock
    }
    defer mutex.Release(ctx, lockKey)

    // Process job - while the lock is held, only one instance processes this job
    // ...
    return nil
}
```
### Coordinating Resource Updates
```go
func UpdateTenantSettings(ctx context.Context, tenantID string, mutex distributedmutex.Adapter) error {
    if err := mutex.Acquiref(ctx, "tenant:%s:settings:update", tenantID); err != nil {
        return err
    }
    defer mutex.Releasef(ctx, "tenant:%s:settings:update", tenantID)

    // Safe to update tenant settings
    // ...
    return nil
}
```
### Rate Limiting Operations
Note that a mutex serializes operations (at most one at a time per key) rather than rate limiting them in the token-bucket sense:
```go
func RateLimitedOperation(ctx context.Context, userID string, mutex distributedmutex.Adapter) error {
    lockKey := fmt.Sprintf("ratelimit:user:%s", userID)
    if err := mutex.Acquire(ctx, lockKey); err != nil {
        return err
    }
    defer mutex.Release(ctx, lockKey)

    // Perform the serialized operation
    // ...
    return nil
}
```
## Troubleshooting
### Lock Not Acquired
**Problem**: Locks are not being acquired (error in logs)
**Solutions**:
- Verify Redis is running and accessible
- Check network connectivity to Redis
- Ensure Redis has sufficient memory
- Check for Redis errors in logs
### Lock Contention
**Problem**: Frequent lock acquisition failures due to contention
**Solutions**:
- Reduce critical section duration
- Use more specific lock keys to reduce contention
- Consider increasing retry limits if appropriate
- Review application architecture for excessive locking
### Memory Leaks
**Problem**: Lock instances accumulating in memory
**Solutions**:
- Ensure all `Acquire` calls have corresponding `Release` calls
- Use `defer` to guarantee lock release
- Monitor lock instance map size in production

@@ -0,0 +1,138 @@
package distributedmutex

import (
    "context"
    "fmt"
    "sync"
    "time"

    "github.com/bsm/redislock"
    "github.com/redis/go-redis/v9"
    "go.uber.org/zap"
)

// Adapter provides interface for abstracting distributed mutex operations.
// CWE-755: Methods now return errors to properly handle exceptional conditions
type Adapter interface {
    Acquire(ctx context.Context, key string) error
    Acquiref(ctx context.Context, format string, a ...any) error
    Release(ctx context.Context, key string) error
    Releasef(ctx context.Context, format string, a ...any) error
}

type distributedMutexAdapter struct {
    logger        *zap.Logger
    redis         redis.UniversalClient
    locker        *redislock.Client
    lockInstances map[string]*redislock.Lock
    mutex         *sync.Mutex // Mutex for synchronization with goroutines
}

// NewAdapter constructor that returns the default distributed mutex adapter.
func NewAdapter(logger *zap.Logger, redisClient redis.UniversalClient) Adapter {
    logger = logger.Named("distributed-mutex")

    // Create a new lock client
    locker := redislock.New(redisClient)

    logger.Info("✓ Distributed mutex initialized (Redis-backed)")

    return &distributedMutexAdapter{
        logger:        logger,
        redis:         redisClient,
        locker:        locker,
        lockInstances: make(map[string]*redislock.Lock),
        mutex:         &sync.Mutex{}, // Initialize the mutex
    }
}

// Acquire function blocks the current thread if the lock key is currently locked.
// CWE-755: Now returns error instead of silently failing
func (a *distributedMutexAdapter) Acquire(ctx context.Context, key string) error {
    startDT := time.Now()
    a.logger.Debug("acquiring lock", zap.String("key", key))

    // Retry every 250ms, for up to 20x
    backoff := redislock.LimitRetry(redislock.LinearBackoff(250*time.Millisecond), 20)

    // Obtain lock with retry
    lock, err := a.locker.Obtain(ctx, key, time.Minute, &redislock.Options{
        RetryStrategy: backoff,
    })
    if err == redislock.ErrNotObtained {
        nowDT := time.Now()
        diff := nowDT.Sub(startDT)
        a.logger.Error("could not obtain lock after retries",
            zap.String("key", key),
            zap.Time("start_dt", startDT),
            zap.Time("now_dt", nowDT),
            zap.Duration("duration", diff),
            zap.Int("max_retries", 20))
        return fmt.Errorf("could not obtain lock after 20 retries (waited %s): %w", diff, err)
    } else if err != nil {
        a.logger.Error("failed obtaining lock",
            zap.String("key", key),
            zap.Error(err))
        return fmt.Errorf("failed to obtain lock: %w", err)
    }

    // DEVELOPERS NOTE:
    // The `map` datastructure in Golang is not concurrently safe, therefore we
    // need to use mutex to coordinate access of our `lockInstances` map
    // resource between all the goroutines.
    a.mutex.Lock()
    defer a.mutex.Unlock()
    if a.lockInstances != nil { // Defensive code
        a.lockInstances[key] = lock
    }

    a.logger.Debug("lock acquired", zap.String("key", key))
    return nil // Success
}

// Acquiref function blocks the current thread if the lock key is currently locked.
// CWE-755: Now returns error from Acquire
func (a *distributedMutexAdapter) Acquiref(ctx context.Context, format string, args ...any) error {
    key := fmt.Sprintf(format, args...)
    return a.Acquire(ctx, key)
}

// Release function releases the lock for the given key.
// CWE-755: Now returns error instead of silently failing
func (a *distributedMutexAdapter) Release(ctx context.Context, key string) error {
    a.logger.Debug("releasing lock", zap.String("key", key))

    // DEVELOPERS NOTE:
    // The `map` datastructure in Golang is not concurrently safe, therefore we
    // need to use mutex to coordinate access of our `lockInstances` map
    // resource between all the goroutines.
    a.mutex.Lock()
    lockInstance, ok := a.lockInstances[key]
    if ok {
        delete(a.lockInstances, key)
    }
    a.mutex.Unlock()

    if ok {
        if err := lockInstance.Release(ctx); err != nil {
            a.logger.Error("failed to release lock",
                zap.String("key", key),
                zap.Error(err))
            return fmt.Errorf("failed to release lock: %w", err)
        }
        a.logger.Debug("lock released", zap.String("key", key))
        return nil // Success
    }

    // Lock not found - this is a warning but not an error (may have already been released)
    a.logger.Warn("lock not found for release", zap.String("key", key))
    return nil // Not an error, just not found
}

// Releasef function releases the lock for a formatted key.
// CWE-755: Now returns error from Release
func (a *distributedMutexAdapter) Releasef(ctx context.Context, format string, args ...any) error {
    key := fmt.Sprintf(format, args...)
    return a.Release(ctx, key)
}

@@ -0,0 +1,70 @@
package distributedmutex

import (
    "context"
    "testing"
    "time"

    "github.com/redis/go-redis/v9"
    "go.uber.org/zap"
)

// mockRedisClient implements minimal required methods for testing
type mockRedisClient struct {
    redis.UniversalClient
}

func (m *mockRedisClient) Get(ctx context.Context, key string) *redis.StringCmd {
    return redis.NewStringCmd(ctx)
}

func (m *mockRedisClient) Set(ctx context.Context, key string, value any, expiration time.Duration) *redis.StatusCmd {
    return redis.NewStatusCmd(ctx)
}

func (m *mockRedisClient) Eval(ctx context.Context, script string, keys []string, args ...any) *redis.Cmd {
    return redis.NewCmd(ctx)
}

func (m *mockRedisClient) EvalSha(ctx context.Context, sha string, keys []string, args ...any) *redis.Cmd {
    return redis.NewCmd(ctx)
}

func (m *mockRedisClient) ScriptExists(ctx context.Context, scripts ...string) *redis.BoolSliceCmd {
    return redis.NewBoolSliceCmd(ctx)
}

func (m *mockRedisClient) ScriptLoad(ctx context.Context, script string) *redis.StringCmd {
    return redis.NewStringCmd(ctx)
}

func TestNewAdapter(t *testing.T) {
    logger, _ := zap.NewDevelopment()
    adapter := NewAdapter(logger, &mockRedisClient{})
    if adapter == nil {
        t.Fatal("expected non-nil adapter")
    }
}

func TestAcquireAndRelease(t *testing.T) {
    ctx := context.Background()
    logger, _ := zap.NewDevelopment()
    adapter := NewAdapter(logger, &mockRedisClient{})

    // Test basic acquire/release
    adapter.Acquire(ctx, "test-key")
    adapter.Release(ctx, "test-key")

    // Test formatted acquire/release
    adapter.Acquiref(ctx, "test-key-%d", 1)
    adapter.Releasef(ctx, "test-key-%d", 1)
}

func TestReleaseNonExistentLock(t *testing.T) {
    ctx := context.Background()
    logger, _ := zap.NewDevelopment()
    adapter := NewAdapter(logger, &mockRedisClient{})

    // This should not panic, just log a warning
    adapter.Release(ctx, "non-existent-key")
}

@@ -0,0 +1,13 @@
package distributedmutex

import (
    "github.com/redis/go-redis/v9"
    "go.uber.org/zap"
)

// ProvideDistributedMutexAdapter creates a new distributed mutex adapter instance.
// Accepts *redis.Client which implements redis.UniversalClient interface
func ProvideDistributedMutexAdapter(logger *zap.Logger, redisClient *redis.Client) Adapter {
    // redis.Client implements redis.UniversalClient, so we can pass it directly
    return NewAdapter(logger, redisClient)
}