Meilisearch Setup (Single Instance)
Prerequisites: Complete 01_init_docker_swarm.md first
Time to Complete: 25-30 minutes
What You'll Build:
- Single Meilisearch instance on new worker-5
- Master key protected with Docker secrets
- Private network communication only (maple-private-prod overlay)
- Persistent data with volume storage
- Ready for Go application search integration
Table of Contents
- Overview
- Create Worker-5 Droplet
- Configure Worker-5 for Swarm
- Label Worker Node
- Create Meilisearch Master Key Secret
- Deploy Meilisearch
- Verify Meilisearch Health
- Connect from Application
- Meilisearch Management
- Troubleshooting
Overview
Architecture
Docker Swarm Cluster:
mapleopentech-swarm-manager-1-prod (10.116.0.2)
Orchestrates cluster
mapleopentech-swarm-worker-1-prod (10.116.0.3)
Redis (single instance)
mapleopentech-swarm-worker-2,3,4-prod
Cassandra Cluster (3 nodes)
mapleopentech-swarm-worker-5-prod (NEW)
Meilisearch (search engine)
Network: maple-private-prod (overlay, shared)
Port: 7700 (private only)
Auth: Master key (Docker secret)
Data: Persistent volume
Shared Network (maple-private-prod):
All services can communicate
Service discovery by name (meilisearch, redis, cassandra-1, etc.)
No public internet access
Future Application:
mapleopentech-swarm-worker-X-prod
Go Backend → connects to meilisearch:7700 on maple-private-prod
Meilisearch Configuration
- Version: Meilisearch v1.5
- Memory: 768MB reserved, 1GB max
- Persistence: Volume-backed at /meili_data
- Network: Private overlay network only
- Authentication: Master key via Docker secret
- Environment: Production mode with analytics disabled
Why Worker-5?
- Dedicated droplet for search workload
- Isolates indexing from database operations
- Allows independent scaling of search capacity
- 2GB RAM sufficient for moderate indexing
Create Worker-5 Droplet
Step 1: Create Droplet on DigitalOcean
Login to DigitalOcean:
- Go to https://cloud.digitalocean.com
- Click Create → Droplets
Configure Droplet:
Name: mapleopentech-swarm-worker-5-prod
Region: Toronto 1 (TOR1) - SAME region as manager
Image: Ubuntu 24.04 LTS x64
Size: Basic - Regular - 2 GB RAM / 1 vCPU / 50 GB SSD ($12/mo)
VPC Network: maple-prod-vpc-tor1 (SAME VPC as manager)
Authentication: SSH Key (use existing mapleopentech-prod-key)
IMPORTANT:
- Must be in same region as manager (Toronto 1)
- Must be in same VPC (maple-prod-vpc-tor1)
- Use same SSH key as other nodes
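If you prefer the CLI over the web console, the same droplet can be created with doctl. A sketch under stated assumptions: the size slug s-1vcpu-2gb matches the 2 GB / 1 vCPU plan, and the VPC UUID and SSH key fingerprint are placeholders you must fill in from your own account.

```
# Create worker-5 via doctl (fill in the <...> placeholders first)
doctl compute droplet create mapleopentech-swarm-worker-5-prod \
  --region tor1 \
  --image ubuntu-24-04-x64 \
  --size s-1vcpu-2gb \
  --vpc-uuid <maple-prod-vpc-uuid> \
  --ssh-keys <your-ssh-key-fingerprint> \
  --wait
```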
Step 2: Note Worker-5 IP Addresses
After creation, note both IPs (you'll need these):
On your local machine, update .env:
# Add to cloud/infrastructure/production/.env
WORKER_5_PUBLIC_IP=<worker-5-public-ip>
WORKER_5_PRIVATE_IP=<worker-5-private-ip> # Should be 10.116.0.X
Example:
WORKER_5_PUBLIC_IP=147.182.xxx.xxx
WORKER_5_PRIVATE_IP=10.116.0.7
Configure Worker-5 for Swarm
Step 1: SSH to Worker-5
# SSH using your local SSH key
ssh root@<worker-5-public-ip>
Step 2: System Updates and Create Admin User
# Update and upgrade system
apt update && apt upgrade -y
# Install essential packages
apt install -y curl wget apt-transport-https ca-certificates gnupg lsb-release
# Create dedicated Docker admin user
adduser dockeradmin
# Enter a strong password when prompted
# Press Enter for other prompts (or fill them in)
# Add to sudo group
usermod -aG sudo dockeradmin
# Copy SSH keys to new user
rsync --archive --chown=dockeradmin:dockeradmin ~/.ssh /home/dockeradmin
✅ Checkpoint - Update your .env file:
# On your local machine, add:
DOCKERADMIN_PASSWORD=your_strong_password_here # The password you just created
Step 3: Install Docker
Still as root on worker-5:
# Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
# Set up Docker repository
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install Docker Engine
apt-get update
apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
# Add dockeradmin to docker group
usermod -aG docker dockeradmin
# Verify installation
docker --version
# Should show: Docker version 27.x.x or higher
# Test dockeradmin has docker access
su - dockeradmin
docker ps
# Should show empty list (not permission error)
exit
Step 4: Configure Firewall
# Still as root on worker-5
# Enable UFW
ufw --force enable
# Allow SSH (critical - do this first!)
ufw allow 22/tcp
# Allow Docker Swarm ports (from VPC only)
ufw allow from 10.116.0.0/16 to any port 2377 proto tcp # Swarm management
ufw allow from 10.116.0.0/16 to any port 7946 # Container network discovery
ufw allow from 10.116.0.0/16 to any port 4789 proto udp # Overlay network traffic
# Allow Meilisearch port (from VPC only)
ufw allow from 10.116.0.0/16 to any port 7700 proto tcp # Meilisearch API
# Verify rules
ufw status verbose
Step 5: Join Swarm as Worker
On manager node, get the join token:
# SSH to manager
ssh dockeradmin@<manager-public-ip>
# Get worker join token
docker swarm join-token worker
# Copy the entire command (docker swarm join --token ...)
Back on worker-5, join the swarm:
# SSH to worker-5 as dockeradmin
ssh dockeradmin@<worker-5-public-ip>
# Paste and run the join command from manager
docker swarm join --token SWMTKN-1-xxxxx... <manager-private-ip>:2377
# Expected output:
# This node joined a swarm as a worker.
Verify on manager:
# SSH to manager
ssh dockeradmin@<manager-public-ip>
# List nodes
docker node ls
# Should show mapleopentech-swarm-worker-5-prod with status Ready
Label Worker Node
We'll use Docker node labels to ensure Meilisearch always deploys to worker-5.
On your manager node:
# SSH to manager
ssh dockeradmin@<manager-public-ip>
# Label worker-5 for Meilisearch placement
docker node update --label-add meilisearch=true mapleopentech-swarm-worker-5-prod
# Verify label
docker node inspect mapleopentech-swarm-worker-5-prod --format '{{.Spec.Labels}}'
# Should show: map[meilisearch:true]
Create Meilisearch Master Key Secret
Meilisearch will use Docker secrets for master key authentication.
Step 1: Generate Master Key
On your manager node:
# Generate a random 32-character master key
MEILISEARCH_MASTER_KEY=$(openssl rand -base64 32 | tr -d "=+/" | cut -c1-32)
# Display it (SAVE THIS IN YOUR PASSWORD MANAGER!)
echo $MEILISEARCH_MASTER_KEY
# Example output: a8K9mP2nQ7rT4vW5xY6zB3cD1eF0gH8i
⚠️ IMPORTANT: Save this master key in your password manager now! You'll need it for:
- Application configuration
- Manual API requests
- Administrative operations
- Troubleshooting
Step 2: Create Docker Secret
# Create secret from the master key
echo $MEILISEARCH_MASTER_KEY | docker secret create meilisearch_master_key -
# Verify secret was created
docker secret ls
# Should show:
# ID NAME CREATED
# xyz789... meilisearch_master_key About a minute ago
Step 3: Update .env File
On your local machine, update your .env file:
# Add to cloud/infrastructure/production/.env
MEILISEARCH_HOST=meilisearch
MEILISEARCH_PORT=7700
MEILISEARCH_MASTER_KEY=<paste-the-master-key-here>
MEILISEARCH_URL=http://meilisearch:7700
Deploy Meilisearch
Step 1: Create Meilisearch Stack File
On your manager node:
# Create directory for stack files (if not exists)
mkdir -p ~/stacks
cd ~/stacks
# Create Meilisearch stack file
vi meilisearch-stack.yml
Copy and paste the following:
version: '3.8'

networks:
  maple-private-prod:
    external: true

volumes:
  meilisearch-data:

secrets:
  meilisearch_master_key:
    external: true

services:
  meilisearch:
    image: getmeili/meilisearch:v1.5
    hostname: meilisearch
    networks:
      - maple-private-prod
    volumes:
      - meilisearch-data:/meili_data
    secrets:
      - meilisearch_master_key
    entrypoint: ["/bin/sh", "-c"]
    command:
      - |
        export MEILI_MASTER_KEY=$$(cat /run/secrets/meilisearch_master_key)
        exec meilisearch
    environment:
      - MEILI_ENV=production
      - MEILI_NO_ANALYTICS=true
      - MEILI_DB_PATH=/meili_data
      - MEILI_HTTP_ADDR=0.0.0.0:7700
      - MEILI_LOG_LEVEL=INFO
      - MEILI_MAX_INDEXING_MEMORY=512mb
      - MEILI_MAX_INDEXING_THREADS=2
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.labels.meilisearch == true
      restart_policy:
        condition: on-failure
        delay: 10s
        max_attempts: 3
      resources:
        limits:
          memory: 1G
        reservations:
          memory: 768M
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:7700/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 30s
Save and exit (:wq in vi).
Step 2: Verify Shared Overlay Network
Check if the maple-private-prod network exists:
docker network ls | grep maple-private-prod
You should see:
abc123... maple-private-prod overlay swarm
If you completed 02_cassandra.md (Step 4), the network already exists and you're good to go!
If the network doesn't exist, create it now:
# Create the shared maple-private-prod network
docker network create \
--driver overlay \
--attachable \
maple-private-prod
# Verify it was created
docker network ls | grep maple-private-prod
What is this network?
- Shared by all Maple services (Cassandra, Redis, Meilisearch, your Go backend)
- Enables private communication between services
- Service names act as hostnames (e.g., meilisearch, redis, cassandra-1)
- No public exposure - the overlay network is internal only
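If you want to confirm service discovery yourself, a throwaway container attached to the same overlay network can resolve service names by DNS. A minimal sketch (resolve_on_overlay is a hypothetical helper; run it on any swarm node after a service has been deployed):

```shell
# resolve_on_overlay: start a one-off Alpine container on the shared
# overlay network and resolve a service name via the swarm's DNS.
resolve_on_overlay() {
  docker run --rm --network maple-private-prod alpine nslookup "$1"
}
# Example (after deploying Meilisearch below): resolve_on_overlay meilisearch
```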
Step 3: Deploy Meilisearch Stack
# Deploy Meilisearch
docker stack deploy -c meilisearch-stack.yml meilisearch
# Expected output:
# Creating service meilisearch_meilisearch
Step 4: Verify Deployment
# Check service status
docker service ls
# Should show:
# ID NAME REPLICAS IMAGE
# xyz... meilisearch_meilisearch 1/1 getmeili/meilisearch:v1.5
# Check which node it's running on
docker service ps meilisearch_meilisearch
# Should show mapleopentech-swarm-worker-5-prod
# Watch logs
docker service logs -f meilisearch_meilisearch
# Should see: "Meilisearch is running and waiting for new commands"
# Press Ctrl+C when done
Meilisearch should be up and running in ~20-30 seconds.
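Rather than eyeballing docker service ls by hand, you can poll until the service converges. A small sketch (wait_for_service is a hypothetical helper; run it on the manager node):

```shell
# wait_for_service: poll `docker service ls` until the named service
# reports 1/1 replicas, or give up after N attempts (default 30).
wait_for_service() {
  local name="$1" tries="${2:-30}"
  for i in $(seq 1 "$tries"); do
    replicas=$(docker service ls --filter "name=$name" --format '{{.Replicas}}')
    if [ "$replicas" = "1/1" ]; then
      echo "service $name is up (attempt $i)"
      return 0
    fi
    sleep 2
  done
  echo "service $name did not converge" >&2
  return 1
}
# Usage: wait_for_service meilisearch_meilisearch
```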
Verify Meilisearch Health
Step 1: Test Meilisearch Health Endpoint
SSH to worker-5:
# Get worker-5's public IP from your .env
ssh dockeradmin@<worker-5-public-ip>
# Get Meilisearch container ID
MEILI_CONTAINER=$(docker ps -q --filter "name=meilisearch_meilisearch")
# Test health endpoint
# (Note: the image's BusyBox wget doesn't support --no-verbose/--tries, so use curl here)
docker exec -it $MEILI_CONTAINER curl -fsS http://localhost:7700/health
# Should return: {"status":"available"}
Step 2: Test with Master Key
# Get master key from secret
MASTER_KEY=$(docker exec $MEILI_CONTAINER cat /run/secrets/meilisearch_master_key)
# Test version endpoint with authentication
docker exec -it $MEILI_CONTAINER wget -qO- \
--header="Authorization: Bearer $MASTER_KEY" \
http://localhost:7700/version
# Should return JSON with version info:
# {"commitSha":"...","commitDate":"...","pkgVersion":"v1.5.0"}
Step 3: Create Test Index
# Create a test index
docker exec -it $MEILI_CONTAINER wget -qO- \
--header="Authorization: Bearer $MASTER_KEY" \
--header="Content-Type: application/json" \
--post-data='{"uid":"test_index","primaryKey":"id"}' \
http://localhost:7700/indexes
# Should return JSON with task info
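Index creation is asynchronous: the response above only enqueues a task. To block until the task finishes, you can poll GET /tasks/<uid>. A sketch assuming MEILI_CONTAINER and MASTER_KEY are set as in the previous steps (task_status and wait_for_task are hypothetical helpers):

```shell
# task_status: fetch one task from the Meilisearch tasks API and extract
# its "status" field (enqueued, processing, succeeded, failed, canceled).
task_status() {
  docker exec "$MEILI_CONTAINER" wget -qO- \
    --header="Authorization: Bearer $MASTER_KEY" \
    "http://localhost:7700/tasks/$1" |
    grep -o '"status":"[a-z]*"' | head -n1 | cut -d'"' -f4
}

# wait_for_task: poll until the task reaches a terminal state.
wait_for_task() {
  local uid="$1"
  for _ in $(seq 1 30); do
    status=$(task_status "$uid")
    case "$status" in
      succeeded|failed|canceled) echo "$status"; return 0 ;;
    esac
    sleep 1
  done
  echo "timed out waiting for task $uid" >&2
  return 1
}
# Usage: wait_for_task 0
```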
Connect from Application
Go Application Integration
Example Go code for connecting to Meilisearch:
package main
import (
	"os"
	"time"

	"github.com/meilisearch/meilisearch-go"
)

func NewMeilisearchClient() *meilisearch.Client {
	client := meilisearch.NewClient(meilisearch.ClientConfig{
		Host:    os.Getenv("MEILISEARCH_URL"), // http://meilisearch:7700
		APIKey:  os.Getenv("MEILISEARCH_MASTER_KEY"),
		Timeout: 10 * time.Second, // ClientConfig.Timeout is a time.Duration, not a second count
	})
	return client
}

// Example: Create an index
func CreateIndex(client *meilisearch.Client, indexName string) error {
	_, err := client.CreateIndex(&meilisearch.IndexConfig{
		Uid:        indexName,
		PrimaryKey: "id",
	})
	return err
}

// Example: Add documents
func IndexDocuments(client *meilisearch.Client, indexName string, docs []map[string]interface{}) error {
	index := client.Index(indexName)
	_, err := index.AddDocuments(docs)
	return err
}

// Example: Search
func Search(client *meilisearch.Client, indexName, query string) (*meilisearch.SearchResponse, error) {
	index := client.Index(indexName)
	return index.Search(query, &meilisearch.SearchRequest{
		Limit: 20,
	})
}
Environment variables in your backend .env:
MEILISEARCH_URL=http://meilisearch:7700
MEILISEARCH_MASTER_KEY=<your-master-key>
Backend stack file (when deploying backend):
version: '3.8'

services:
  backend:
    image: your-backend:latest
    networks:
      - maple-private-prod  # SAME network as Meilisearch
    environment:
      - MEILISEARCH_URL=http://meilisearch:7700
      - MEILISEARCH_MASTER_KEY_FILE=/run/secrets/meilisearch_master_key
    secrets:
      - meilisearch_master_key

networks:
  maple-private-prod:
    external: true

secrets:
  meilisearch_master_key:
    external: true
Meilisearch Management
Restarting Meilisearch
# On manager node
docker service update --force meilisearch_meilisearch
# Wait for restart (20-30 seconds)
docker service ps meilisearch_meilisearch
Stopping Meilisearch
# Remove Meilisearch stack (data persists in volume)
docker stack rm meilisearch
# Verify it's stopped
docker service ls | grep meilisearch
# Should show nothing
Starting Meilisearch After Stop
# Redeploy the stack
cd ~/stacks
docker stack deploy -c meilisearch-stack.yml meilisearch
# Data is intact from previous volume
Viewing Logs
# Recent logs
docker service logs meilisearch_meilisearch --tail 50
# Follow logs in real-time
docker service logs -f meilisearch_meilisearch
Backing Up Meilisearch Data
# SSH to worker-5
ssh dockeradmin@<worker-5-public-ip>
# Get container ID
MEILI_CONTAINER=$(docker ps -q --filter "name=meilisearch_meilisearch")
# Create dump (Meilisearch's native backup format)
MASTER_KEY=$(docker exec $MEILI_CONTAINER cat /run/secrets/meilisearch_master_key)
docker exec $MEILI_CONTAINER wget -qO- \
--header="Authorization: Bearer $MASTER_KEY" \
--post-data='' \
http://localhost:7700/dumps
# Wait for dump to complete (check task status)
# Dumps are created in /meili_data/dumps/
# Copy dump to host
docker cp $MEILI_CONTAINER:/meili_data/dumps ~/meilisearch-backup-$(date +%Y%m%d)
# Download to local machine (from your local terminal)
scp -r dockeradmin@<worker-5-public-ip>:~/meilisearch-backup-* ./
Monitoring Index Status
# SSH to worker-5
MEILI_CONTAINER=$(docker ps -q --filter "name=meilisearch_meilisearch")
MASTER_KEY=$(docker exec $MEILI_CONTAINER cat /run/secrets/meilisearch_master_key)
# List all indexes
docker exec $MEILI_CONTAINER wget -qO- \
--header="Authorization: Bearer $MASTER_KEY" \
http://localhost:7700/indexes
# Get specific index stats
docker exec $MEILI_CONTAINER wget -qO- \
--header="Authorization: Bearer $MASTER_KEY" \
http://localhost:7700/indexes/YOUR_INDEX_NAME/stats
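If you'd rather not hand the application the master key at all, Meilisearch v1 can mint scoped API keys via the /keys endpoint. A sketch (create_search_key is a hypothetical helper; assumes MEILI_CONTAINER and MASTER_KEY are set as in the steps above):

```shell
# create_search_key: POST /keys to create a search-only API key so the
# app does not need the master key. The returned JSON contains "key".
create_search_key() {
  docker exec "$MEILI_CONTAINER" wget -qO- \
    --header="Authorization: Bearer $MASTER_KEY" \
    --header="Content-Type: application/json" \
    --post-data='{"description":"search-only key","actions":["search"],"indexes":["*"],"expiresAt":null}' \
    http://localhost:7700/keys
}
# Usage: create_search_key
```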
Troubleshooting
Problem: Network Not Found During Deployment
Symptom: network "maple-private-prod" is declared as external, but could not be found
Solution:
Create the shared maple-private-prod network first:
# Create the network
docker network create \
--driver overlay \
--attachable \
maple-private-prod
# Verify it exists
docker network ls | grep maple-private-prod
# Should show: maple-private-prod overlay swarm
# Then deploy Meilisearch
docker stack deploy -c meilisearch-stack.yml meilisearch
Why this happens:
- You haven't completed Step 2 (verify network)
- The network was deleted
- First time deploying any Maple service
Note: This network is shared by all services (Cassandra, Redis, Meilisearch, backend). You only need to create it once, before deploying your first service.
Problem: Service Won't Start
Symptom: docker service ls shows 0/1 replicas
Solutions:
- Check logs:
docker service logs meilisearch_meilisearch --tail 50
- Verify secret exists:
docker secret ls | grep meilisearch_master_key
# Must show the secret
- Check node label:
docker node inspect mapleopentech-swarm-worker-5-prod --format '{{.Spec.Labels}}'
# Must show: map[meilisearch:true]
- Verify maple-private-prod network exists:
docker network ls | grep maple-private-prod
# Should show: maple-private-prod overlay swarm
Problem: Can't Connect (Authentication Failed)
Symptom: 401 Unauthorized or Invalid API key
Solutions:
- Verify you're using the correct master key:
# View the secret metadata (from manager node)
docker secret inspect meilisearch_master_key
# Compare the ID with what you saved
- Test with the master key from the secret file:
# SSH to worker-5
MEILI_CONTAINER=$(docker ps -q --filter "name=meilisearch_meilisearch")
MASTER_KEY=$(docker exec $MEILI_CONTAINER cat /run/secrets/meilisearch_master_key)
docker exec $MEILI_CONTAINER wget -qO- \
--header="Authorization: Bearer $MASTER_KEY" \
http://localhost:7700/version
# Should return version JSON
Problem: Container Keeps Restarting
Symptom: docker service ps meilisearch_meilisearch shows multiple restarts
Solutions:
- Check memory:
# On worker-5
free -h
# Should have at least 1GB free
- Check logs for errors:
docker service logs meilisearch_meilisearch
# Look for "Out of memory" or permission errors
- Verify volume permissions:
# On worker-5
docker volume inspect meilisearch_meilisearch-data
# Check the mountpoint permissions
Problem: Can't Connect from Application
Symptom: Application can't reach Meilisearch on port 7700
Solutions:
- Verify both services are on the same network:
# Check your app is on the maple-private-prod network
docker service inspect your_app --format '{{.Spec.TaskTemplate.Networks}}'
# Should show maple-private-prod
- Test DNS resolution:
# From your app container
nslookup meilisearch
# Should resolve to the Meilisearch container IP
- Test connectivity:
# From your app container (install curl/wget first)
curl -H "Authorization: Bearer YOUR_MASTER_KEY" http://meilisearch:7700/health
Problem: Slow Indexing Performance
Symptom: Indexing takes a long time or times out
Solutions:
- Check indexing tasks:
MEILI_CONTAINER=$(docker ps -q --filter "name=meilisearch_meilisearch")
MASTER_KEY=$(docker exec $MEILI_CONTAINER cat /run/secrets/meilisearch_master_key)
docker exec $MEILI_CONTAINER wget -qO- \
--header="Authorization: Bearer $MASTER_KEY" \
http://localhost:7700/tasks
# Look for failed or enqueued tasks
- Check memory usage:
docker stats $(docker ps -q --filter "name=meilisearch_meilisearch")
# Monitor memory and CPU usage
- Increase indexing resources (edit meilisearch-stack.yml):
environment:
  - MEILI_MAX_INDEXING_MEMORY=768mb  # Increase from 512mb
  - MEILI_MAX_INDEXING_THREADS=4     # Increase from 2 (if CPU available)
Problem: Data Lost After Restart
Symptom: Indexes disappear when container restarts
Verification:
# On worker-5, check if volume exists
docker volume ls | grep meilisearch
# Should show: meilisearch_meilisearch-data
# Check volume is mounted
docker inspect $(docker ps -q --filter "name=meilisearch_meilisearch") --format '{{.Mounts}}'
# Should show /meili_data mounted to volume
This shouldn't happen if volume is properly configured. If it does:
- Check the data directory:
docker exec <container> ls -lh /meili_data/
- Check the Meilisearch config:
docker exec <container> env | grep MEILI_DB_PATH
Next Steps
You now have:
- Meilisearch instance running on worker-5
- Master key protected access
- Persistent data storage
- Private network connectivity
- Ready for application integration
Next guides:
- 05_app_backend.md - Deploy your Go backend application
- Connect backend to Meilisearch, Redis, and Cassandra
- Set up NGINX reverse proxy for public access
Performance Notes
Current Setup (2GB RAM Worker)
Capacity:
- 768MB reserved, 1GB max memory
- Suitable for: ~100k-500k documents (depending on document size)
- Indexing speed: ~1,000-5,000 docs/sec
- Search latency: <50ms for most queries
Limitations:
- Single instance (no high availability)
- Limited to 1GB memory
- 2 indexing threads (limited CPU)
Upgrade Path
For Production with High Load:
-
Increase memory (resize worker-5 to 4GB):
- Update MEILI_MAX_INDEXING_MEMORY to 2GB
- Better for larger datasets
-
Add CPU cores (resize to 4GB/2vCPU):
- Increase MEILI_MAX_INDEXING_THREADS to 4
- Faster indexing performance
-
Multiple instances (for high availability):
- Deploy read replicas on additional workers
- Use NGINX for load balancing
- Note: Meilisearch doesn't natively support clustering
-
Dedicated SSD storage:
- Use DigitalOcean volumes for better I/O
- Especially important for large indexes
For most applications starting out, single instance with 1GB memory is sufficient.
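If you eventually deploy the read replicas mentioned above, the NGINX load-balancing piece might look like this minimal sketch (meilisearch-1 and meilisearch-2 are hypothetical replica service names on maple-private-prod; adapt to your actual topology):

```nginx
# Hypothetical upstream balancing two Meilisearch read replicas.
upstream meilisearch_read {
    least_conn;
    server meilisearch-1:7700;
    server meilisearch-2:7700;
}

server {
    listen 7700;
    location / {
        proxy_pass http://meilisearch_read;
        proxy_set_header Host $host;
    }
}
```

Since Meilisearch has no native replication, writes and indexing should still target a single primary; a balanced upstream like this is only safe for search traffic.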
Last Updated: November 3, 2025
Maintained By: Infrastructure Team