Sidekiq & Redis Optimization: Reducing Overhead and Scaling Worker Jobs

When you run thousands of background jobs through Sidekiq, Redis becomes the bottleneck. Every job enqueue adds Redis writes, network round-trips, and memory pressure. This post covers a real-world optimization we applied and a broader toolkit for keeping Sidekiq lean.


The Problem: One Job Per Item

Imagine sending weekly emails to 10,000 users. The naive approach:

# ❌ Bad: 10,000 Redis writes, 10,000 queue entries
user_ids.each do |id|
  WeeklyEmailWorker.perform_async(id)
end

Each perform_async does:

  • A Redis LPUSH (or ZADD for scheduled jobs)
  • Serialization of the job payload
  • A network round-trip

At 10,000 users, that’s 10,000 Redis operations and 10,000 job entries sitting in Redis. At 1M users, that’s 1M entries. That’s expensive and slow.


The Fix: Batch + Staggered Scheduling

Instead of one job per user, we batch users and schedule each batch with a small delay:

# ✅ Good: 100 Redis writes, 100 scheduled entries
BATCH_SIZE = 100
BATCH_DELAY = 0.2 # seconds
pending_user_ids.each_slice(BATCH_SIZE).with_index do |batch_ids, batch_index|
  delay_seconds = batch_index * BATCH_DELAY
  WeeklyEmailByWorker.perform_in(delay_seconds, batch_ids)
end

What this achieves:

Metric                     | Before (1 per user) | After (batched)
Redis ops                  | 10,000              | 100
Jobs in Redis              | 10,000              | 100
Jobs in Redis at 1M users  | 1,000,000           | 10,000

Each worker still processes one user at a time internally, but we only enqueue one job per batch. Redis overhead drops by roughly 100x.
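
For context, here’s a minimal sketch of what that batch worker could look like. The User query and WeeklyMailer are assumptions for illustration, not part of the original snippet:

class WeeklyEmailByWorker
  include Sidekiq::Worker

  # One enqueued job per batch; each user still gets an individual email.
  def perform(user_ids)
    User.where(id: user_ids).find_each do |user|
      WeeklyMailer.digest(user).deliver_now # WeeklyMailer is hypothetical
    end
  end
end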

Why perform_in instead of chaining?

  • perform_in(delay, batch_ids) — all jobs are scheduled immediately with their future timestamps. Sidekiq moves them into the ready queue at the right time regardless of other queue traffic.
  • Chaining (each job enqueues the next) — the next batch only enters the queue after the current one finishes. If the queue is backed up with other work, your email chain sits behind it and can be delayed significantly.

For time-sensitive jobs like “send at 8:46 AM local time,” upfront scheduling is the right choice.


Other Sidekiq Optimization Strategies

1. Bulk Enqueue with push_bulk

Sidekiq::Client.push_bulk (available in open-source Sidekiq, not just Pro/Enterprise) pushes many jobs in one Redis round-trip:

# Single Redis call instead of N
Sidekiq::Client.push_bulk(
  'class' => WeeklyEmailWorker,
  'args' => user_ids.map { |id| [id] }
)

Useful when you don’t need per-job delays and want to minimize Redis round-trips.
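
Since Sidekiq 6.3, the worker class itself also offers perform_bulk, a thin wrapper around push_bulk that slices the arguments into Redis round-trips (1,000 jobs per call by default):

# Same effect as the push_bulk call above
WeeklyEmailWorker.perform_bulk(user_ids.map { |id| [id] })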

2. Adjust Concurrency

The default is 10 threads per process (lowered to 5 in Sidekiq 7). More threads = more concurrency, but also more memory and more database connections:

# config/sidekiq.yml
:concurrency: 25 # Tune based on CPU/memory

Higher concurrency helps if jobs are I/O-bound (HTTP, DB, email). For CPU-bound jobs, lower concurrency is usually better.

3. Use Dedicated Queues

Separate heavy jobs from light ones:

# config/sidekiq.yml
:queues:
  - [critical, 3] # 3x weight
  - [default, 2]
  - [low, 1]

With these weights, Sidekiq polls critical roughly three times as often as low, so urgent jobs are picked up first while low-priority jobs still make progress instead of blocking the rest.
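
You can also pin a dedicated process to a single queue so heavy jobs never share threads with critical ones. A sketch, using the queue names from the config above:

# One process for critical work only, another for everything else
bundle exec sidekiq -q critical
bundle exec sidekiq -q default -q low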

4. Rate Limiting (Sidekiq Enterprise)

Throttle jobs that hit external APIs with Sidekiq Enterprise’s Sidekiq::Limiter:

class EmailWorker
  include Sidekiq::Worker
  # Sidekiq Enterprise window limiter: at most 100 runs per minute
  LIMITER = Sidekiq::Limiter.window('email-api', 100, :minute)
  def perform(user_id)
    LIMITER.within_limit { deliver_email(user_id) } # deliver_email is app code
  end
end

Prevents hitting rate limits and keeps Redis usage predictable.

5. Unique Jobs (sidekiq-unique-jobs)

Avoid duplicate jobs for the same work:

sidekiq_options lock: :until_executed, on_conflict: :log

Reduces redundant work and Redis load when jobs are retried or triggered multiple times.
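
In context, that option sits in the worker class alongside the sidekiq-unique-jobs gem. A sketch; the worker name and body are illustrative:

class SyncInvoiceWorker
  include Sidekiq::Worker
  # Only one job with the same class + args may exist until it finishes;
  # conflicting enqueues are logged and dropped.
  sidekiq_options lock: :until_executed, on_conflict: :log

  def perform(invoice_id)
    # idempotent work here
  end
end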

6. Dead Job Cleanup

Dead jobs accumulate in Redis. Set retention and cleanup:

# config/initializers/sidekiq.rb
Sidekiq.configure_server do |config|
  config.death_handlers << ->(job, ex) {
    # Log, alert, or move to a DLQ when a job exhausts its retries
  }
end

Use the dead_max_jobs and dead_timeout_in_seconds options so the dead set doesn’t grow unbounded, as sketched below.
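
A minimal sketch of those caps. The option names are standard Sidekiq settings; the hash-style config access assumes Sidekiq 7 (older versions set them via Sidekiq.options):

# config/initializers/sidekiq.rb
Sidekiq.configure_server do |config|
  config[:dead_max_jobs] = 10_000                    # cap entries in the dead set
  config[:dead_timeout_in_seconds] = 30 * 24 * 3600  # prune after ~30 days
end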

7. Job Size Limits

Large payloads increase Redis memory and serialization cost:

# Keep payloads small; pass IDs, not full objects
WeeklyEmailWorker.perform_async(user_id) # ✅
WeeklyEmailWorker.perform_async(user.to_json) # ❌

8. Connection Pooling

Ensure each worker process has a bounded Redis connection pool:

# config/initializers/sidekiq.rb
Sidekiq.configure_server do |config|
  config.redis = { url: ENV['REDIS_URL'], size: 25 }
end

Prevents connection exhaustion under load.

9. Scheduled Job Limits

Scheduled jobs live in Redis. If you schedule millions of jobs, you may need to cap or paginate:

# Avoid scheduling 1M jobs at once
# Use batch + perform_in with reasonable batch sizes
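
A sketch of that pattern with ActiveRecord, streaming IDs from the database instead of loading a million of them into memory at once (the User scope is an assumption):

delay = 0
User.where(weekly_email_opt_in: true).in_batches(of: 100) do |batch|
  WeeklyEmailByWorker.perform_in(delay, batch.ids)
  delay += 0.2 # stagger batches, matching BATCH_DELAY above
end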

10. Redis Memory and Eviction

Configure Redis for Sidekiq:

maxmemory 2gb
# Sidekiq data must not be evicted; volatile-lru is only safe
# on a separate, cache-only Redis instance.
maxmemory-policy noeviction

Monitor memory and eviction to avoid unexpected data loss.


Summary

Strategy            | When to Use
Batch + perform_in  | Many similar jobs at a specific time; reduces Redis ops by ~100x
push_bulk           | Large batches of jobs without per-job delays
Dedicated queues    | Different priority levels for job types
Rate limiting       | External APIs or rate-limited services
Unique jobs         | Idempotent or duplicate-prone jobs
Small payloads      | Always; pass IDs instead of full objects
Connection pooling  | High concurrency or many processes

The batch + perform_in pattern is especially effective for time-sensitive jobs that must run in a narrow window while keeping Redis overhead low.

Happy Coding with Sidekiq!



Author: Abhilash

Hi, I’m Abhilash! A seasoned web developer with 15 years of experience specializing in Ruby and Ruby on Rails. Since 2010, I’ve built scalable, robust web applications and worked with frameworks like Angular, Sinatra, Laravel, Node.js, Vue and React. Passionate about clean, maintainable code and continuous learning, I share insights, tutorials, and experiences here. Let’s explore the ever-evolving world of web development together!
