When you run thousands of background jobs through Sidekiq, Redis becomes the bottleneck. Every job enqueue adds Redis writes, network round-trips, and memory pressure. This post covers a real-world optimization we applied and a broader toolkit for keeping Sidekiq lean.
## The Problem: One Job Per Item
Imagine sending weekly emails to 10,000 users. The naive approach:
```ruby
# ❌ Bad: 10,000 Redis writes, 10,000 scheduled entries
user_ids.each do |id|
  WeeklyEmailWorker.perform_async(id)
end
```
Each `perform_async` call does:
- A Redis `LPUSH` (or `ZADD` for scheduled jobs)
- Serialization of the job payload
- A network round-trip
At 10,000 users, that’s 10,000 Redis operations and 10,000 scheduled entries. At 1M users, that’s 1M scheduled jobs in Redis. That’s expensive and slow.
## The Fix: Batch + Staggered Scheduling
Instead of one job per user, we batch users and schedule each batch with a small delay:
```ruby
# ✅ Good: 100 Redis writes, 100 scheduled entries
BATCH_SIZE  = 100
BATCH_DELAY = 0.2 # seconds

pending_user_ids.each_slice(BATCH_SIZE).with_index do |batch_ids, batch_index|
  delay_seconds = batch_index * BATCH_DELAY
  WeeklyEmailByWorker.perform_in(delay_seconds, batch_ids)
end
```
What this achieves:
| Metric | Before (1 per user) | After (batched) |
|---|---|---|
| Redis ops | 10,000 | 100 |
| Scheduled jobs | 10,000 | 100 |
| Scheduled jobs at 1M users | 1,000,000 | 10,000 |
Each worker still processes one user at a time internally, but we only enqueue one job per batch. Redis overhead drops by roughly 100x.
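The batch worker itself can stay simple: it just loops over the ids it was handed. A minimal sketch (the mailer call is a placeholder, and `include Sidekiq::Worker` is left as a comment so the sketch stands alone):

```ruby
class WeeklyEmailByWorker
  # include Sidekiq::Worker  # present in the real worker

  # Sidekiq passes the whole batch of ids as the job argument.
  def perform(user_ids)
    user_ids.each { |id| deliver_weekly_email(id) }
  end

  private

  # Placeholder for the real delivery, e.g. UserMailer.weekly(user_id).deliver_now
  def deliver_weekly_email(user_id)
    user_id
  end
end
```

If a single user's delivery fails, note that a retry re-runs the whole batch, so delivery should be idempotent.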
## Why `perform_in` instead of chaining?
- **`perform_in(delay, batch_ids)`**: all jobs are scheduled immediately with their future timestamps. Sidekiq moves them into the ready queue at the right time, regardless of other queue traffic.
- **Chaining** (each job enqueues the next): the next batch only enters the queue after the current one finishes. If other jobs are busy, your email chain sits behind them and can be delayed significantly.
For time-sensitive jobs like “send at 8:46 AM local time,” upfront scheduling is the right choice.
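A quick sanity check on how wide the staggered window actually gets, using the constants from the snippet above:

```ruby
BATCH_SIZE  = 100
BATCH_DELAY = 0.2 # seconds between batches
user_count  = 10_000

batches = (user_count.to_f / BATCH_SIZE).ceil
spread  = (batches - 1) * BATCH_DELAY

batches # => 100
spread  # ≈ 19.8 seconds: every batch is enqueued within ~20s of the target time
```

Even at 1M users the spread is about 33 minutes with these constants, so tune `BATCH_DELAY` to the window you can tolerate.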
## Other Sidekiq Optimization Strategies
### 1. Bulk Enqueue with `push_bulk`
`Sidekiq::Client.push_bulk` (available in open-source Sidekiq) pushes many jobs in one Redis call:
```ruby
# Single Redis call instead of N
Sidekiq::Client.push_bulk(
  'class' => WeeklyEmailWorker,
  'args'  => user_ids.map { |id| [id] }
)
```
Useful when you don’t need per-job delays and want to minimize Redis round-trips.
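For very large sets it is common to cap how many jobs go into each `push_bulk` call so no single Redis payload gets huge. A sketch, where the 1,000 chunk size is an assumption rather than a Sidekiq requirement, and the actual push is commented out so the snippet runs standalone:

```ruby
CHUNK_SIZE = 1_000
user_ids   = (1..10_000).to_a

redis_calls = 0
user_ids.each_slice(CHUNK_SIZE) do |slice|
  args = slice.map { |id| [id] } # one args array per job
  # Sidekiq::Client.push_bulk('class' => WeeklyEmailWorker, 'args' => args)
  redis_calls += 1
end

redis_calls # => 10 Redis round-trips instead of 10,000
```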
### 2. Adjust Concurrency
Default is 10 threads per process. More threads = more concurrency but more memory:
```yaml
# config/sidekiq.yml
concurrency: 25 # Tune based on CPU/memory
```
Higher concurrency helps if jobs are I/O-bound (HTTP, DB, email). For CPU-bound jobs, lower concurrency is usually better.
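The I/O-bound case can be demonstrated with plain Ruby threads, simulating slow calls with `sleep` (numbers here are illustrative, not Sidekiq internals):

```ruby
require 'benchmark'

# Threads overlap the waiting, so more "concurrency" finishes
# the same I/O-bound workload sooner.
def run_jobs(job_count, thread_count)
  queue = Queue.new
  job_count.times { |i| queue << i }
  threads = thread_count.times.map do
    Thread.new do
      loop do
        begin
          queue.pop(true) # non-blocking; raises ThreadError when empty
        rescue ThreadError
          break
        end
        sleep 0.05 # stand-in for an HTTP/DB/email call
      end
    end
  end
  threads.each(&:join)
end

one_thread  = Benchmark.realtime { run_jobs(20, 1) }  # ~1.0s
ten_threads = Benchmark.realtime { run_jobs(20, 10) } # much faster
```

For CPU-bound work the GVL prevents this overlap, which is why lower concurrency (or more processes) wins there.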
### 3. Use Dedicated Queues
Separate heavy jobs from light ones:
```yaml
# config/sidekiq.yml
queues:
  - [critical, 3] # 3x weight
  - [default, 2]
  - [low, 1]
```
Critical jobs get more CPU time. Low-priority jobs don’t block the rest.
### 4. Rate Limiting
Throttle jobs that hit external APIs:
```ruby
# `throttle` options in this form come from the sidekiq-throttler gem;
# Sidekiq Enterprise's built-in rate limiting uses Sidekiq::Limiter instead.
class EmailWorker
  include Sidekiq::Worker
  sidekiq_options throttle: { threshold: 100, period: 1.minute }
end
```
Prevents hitting rate limits and keeps Redis usage predictable.
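The core idea is a windowed counter. A plain in-process sketch of a fixed-window limiter (illustrative only; the gems implement this in Redis so the limit holds across processes):

```ruby
class FixedWindowLimiter
  def initialize(threshold:, period:)
    @threshold    = threshold
    @period       = period # seconds
    @window_start = Time.now
    @count        = 0
  end

  # Returns true if the call fits in the current window's budget.
  def allow?(now = Time.now)
    if now - @window_start >= @period
      @window_start = now # start a fresh window
      @count = 0
    end
    return false if @count >= @threshold
    @count += 1
    true
  end
end

limiter = FixedWindowLimiter.new(threshold: 100, period: 60)
limiter.allow? # => true while under 100 calls in the window
```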
### 5. Unique Jobs (sidekiq-unique-jobs)
Avoid duplicate jobs for the same work:
```ruby
sidekiq_options lock: :until_executed, on_conflict: :log
```
Reduces redundant work and Redis load when jobs are retried or triggered multiple times.
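The `until_executed` semantics can be sketched with an in-memory lock table (the gem itself keeps these locks in Redis; this toy version only shows the behavior):

```ruby
# A second enqueue with the same args is skipped until the first finishes.
class UniqueEnqueuer
  def initialize
    @locks = {}
  end

  def enqueue(args)
    key = args.inspect
    return :skipped if @locks[key] # conflict: duplicate job detected
    @locks[key] = true
    :enqueued
  end

  # Called when the job finishes, releasing the lock.
  def executed(args)
    @locks.delete(args.inspect)
  end
end
```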
### 6. Dead Job Cleanup
Dead jobs accumulate in Redis. Set retention and cleanup:
```ruby
# config/initializers/sidekiq.rb
Sidekiq.configure_server do |config|
  config.death_handlers << ->(job, ex) {
    # Log, alert, or move to a DLQ
  }
end
```
Use `dead_max_jobs` and periodic cleanup so Redis doesn’t grow unbounded.
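A sketch of capping the dead set (the option names are real Sidekiq settings; the values and the Sidekiq 7-style `config[...]` access are assumptions about your setup):

```ruby
# config/initializers/sidekiq.rb
Sidekiq.configure_server do |config|
  config[:dead_max_jobs] = 10_000                # cap the dead set size
  config[:dead_timeout_in_seconds] = 14 * 86_400 # prune dead jobs after 2 weeks
end
```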
### 7. Job Size Limits
Large payloads increase Redis memory and serialization cost:
```ruby
# Keep payloads small; pass IDs, not full objects
WeeklyEmailWorker.perform_async(user_id)      # ✅
WeeklyEmailWorker.perform_async(user.to_json) # ❌
```
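Job args are serialized to JSON before hitting Redis, so payload size maps directly to Redis memory and network cost. A rough comparison (the user hash is made up for illustration):

```ruby
require 'json'

user = { id: 42, name: "Ada Lovelace", email: "ada@example.com",
         preferences: { theme: "dark", locale: "en" } }

id_payload   = JSON.generate([user[:id]]) # args when passing just the id
full_payload = JSON.generate([user])      # args when passing the whole object

id_payload.bytesize # => 4 bytes ("[42]"); the full object is many times larger,
                    #    on every job, on every retry
```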
### 8. Connection Pooling
Ensure each worker process has a bounded Redis connection pool:
```ruby
# config/initializers/sidekiq.rb
Sidekiq.configure_server do |config|
  config.redis = { url: ENV['REDIS_URL'], size: 25 }
end
```
Prevents connection exhaustion under load.
### 9. Scheduled Job Limits
Scheduled jobs live in Redis. If you schedule millions of jobs, you may need to cap or paginate:
```ruby
# Avoid scheduling 1M individual jobs at once.
# Use batch + perform_in with reasonable batch sizes.
```
### 10. Redis Memory and Eviction
Configure Redis for Sidekiq:
```conf
maxmemory 2gb
maxmemory-policy noeviction # or volatile-lru for cache-only keys
```
Monitor memory and eviction to avoid unexpected data loss.
## Summary
| Strategy | When to Use |
|---|---|
| Batch + `perform_in` | Many similar jobs at a specific time; reduces Redis ops by ~100x |
| `push_bulk` | Large batches of jobs without per-job delays |
| Dedicated queues | Different priority levels for job types |
| Rate limiting | External APIs or rate-limited services |
| Unique jobs | Idempotent or duplicate-prone jobs |
| Small payloads | Always; pass IDs instead of full objects |
| Connection pooling | High concurrency or many processes |
The batch + `perform_in` pattern is especially effective for time-sensitive jobs that must run in a narrow window while keeping Redis overhead low.
Happy Coding with Sidekiq!