Sidekiq & Redis Optimization: Reducing Overhead and Scaling Worker Jobs

When you run thousands of background jobs through Sidekiq, Redis can become the bottleneck: every enqueue costs a Redis write, a network round-trip, and a little more memory. This post covers a real-world optimization we applied and a broader toolkit for keeping Sidekiq lean.


The Problem: One Job Per Item

Imagine sending weekly emails to 10,000 users. The naive approach:

# ❌ Bad: 10,000 Redis writes, 10,000 scheduled entries
user_ids.each do |id|
  WeeklyEmailWorker.perform_async(id)
end

Each perform_async does:

  • A Redis LPUSH (or ZADD for scheduled jobs)
  • Serialization of job payload
  • Network round-trip

At 10,000 users that means 10,000 Redis operations and 10,000 scheduled entries; at 1M users, a million scheduled jobs sitting in Redis. That’s expensive and slow.


The Fix: Batch + Staggered Scheduling

Instead of one job per user, we batch users and schedule each batch with a small delay:

# ✅ Good: 100 Redis writes, 100 scheduled entries
BATCH_SIZE = 100
BATCH_DELAY = 0.2 # seconds

pending_user_ids.each_slice(BATCH_SIZE).with_index do |batch_ids, batch_index|
  delay_seconds = batch_index * BATCH_DELAY
  WeeklyEmailBatchWorker.perform_in(delay_seconds, batch_ids)
end

What this achieves:

| Metric                     | Before (1 per user) | After (batched) |
|----------------------------|---------------------|-----------------|
| Redis ops                  | 10,000              | 100             |
| Scheduled jobs             | 10,000              | 100             |
| Scheduled jobs at 1M users | 1,000,000           | 10,000          |

Each worker still processes one user at a time internally, but we only enqueue one job per batch. Redis overhead drops by roughly 100x.
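As a sanity check, the batching math can be verified in plain Ruby without touching Sidekiq or Redis (batch size and delay as above):

```ruby
# Sketch: verify the batching arithmetic on its own.
user_ids    = (1..10_000).to_a
batch_size  = 100
batch_delay = 0.2 # seconds between batches

batches = user_ids.each_slice(batch_size).to_a
batches.size                       # => 100 scheduled jobs instead of 10,000
(batches.size - 1) * batch_delay   # ~19.8: the last batch fires about 20s after the first
```

The whole run is also spread over ~20 seconds, which smooths the load spike on both Redis and the mail provider.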

Why perform_in instead of chaining?

  • perform_in(delay, batch_ids) — all jobs are scheduled immediately with their future timestamps. Sidekiq moves them into the ready queue at the right time regardless of other queue traffic.
  • Chaining (each job enqueues the next) — the next batch only enters the queue after the current one finishes. If other jobs are busy, your email chain sits behind them and can be delayed significantly.

For time-sensitive jobs like “send at 8:46 AM local time,” upfront scheduling is the right choice.
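A toy timeline makes the difference concrete. Assuming a hypothetical 1-second queue backlog before each chained batch can run, upfront scheduling keeps every fire time fixed while chaining accumulates the delays:

```ruby
# Sketch: fire times for 5 batches under each strategy (illustrative numbers).
batch_delay = 0.2   # upfront gap between scheduled batches
backlog     = 1.0   # assumed queue wait before each chained batch runs

upfront = Array.new(5) { |i| i * batch_delay }  # fixed: 0.0, 0.2, 0.4, 0.6, 0.8
chained = Array.new(5) { |i| i * backlog }      # drifting: 0.0, 1.0, 2.0, 3.0, 4.0

upfront.last  # => 0.8 -- all batches fire within a second of the target
chained.last  # => 4.0 -- and this grows with whatever backlog the queue has
```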


Other Sidekiq Optimization Strategies

1. Bulk Enqueue with push_bulk

Sidekiq::Client.push_bulk pushes many jobs in one Redis call:

# Single Redis call instead of N
Sidekiq::Client.push_bulk(
  'class' => WeeklyEmailWorker,
  'args'  => user_ids.map { |id| [id] }
)

Useful when you don’t need per-job delays and want to minimize Redis round-trips.
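One practical detail: a single giant push_bulk becomes one giant Redis command, so inputs are commonly sliced first (roughly 1,000 jobs per call is a widely used ceiling). The args shaping is plain Ruby; the worker name is the one from the example above:

```ruby
# Sketch: shape args for push_bulk and slice into chunks of 1,000.
chunks = (1..5_000).to_a.each_slice(1_000).map do |slice|
  { 'class' => 'WeeklyEmailWorker', 'args' => slice.map { |id| [id] } }
end

chunks.size                 # => 5 push_bulk calls instead of 5,000 pushes
chunks.first['args'].first  # => [1] -- each job's positional args array
```

Each chunk would then be handed to `Sidekiq::Client.push_bulk(chunk)`.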

2. Adjust Concurrency

The default is 10 threads per process (Sidekiq 7 lowered it to 5). More threads = more concurrency but more memory:

# config/sidekiq.yml
:concurrency: 25 # Tune based on CPU/memory

Higher concurrency helps if jobs are I/O-bound (HTTP, DB, email). For CPU-bound jobs, lower concurrency is usually better.
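The I/O-bound intuition is easy to demonstrate with plain Ruby threads; sleep stands in for a slow HTTP/DB call and the numbers are purely illustrative:

```ruby
require 'benchmark'

# Sketch: 40 fake I/O jobs (10ms wait each) drained by N worker threads.
def drain(jobs, thread_count)
  queue = Queue.new
  jobs.times { |i| queue << i }
  thread_count.times.map do
    Thread.new do
      loop do
        queue.pop(true)   # non-blocking pop; raises ThreadError when empty
        sleep 0.01        # simulated I/O wait
      rescue ThreadError
        break
      end
    end
  end.each(&:join)
end

serial     = Benchmark.realtime { drain(40, 1) }   # ~0.4s: waits happen one after another
concurrent = Benchmark.realtime { drain(40, 10) }  # far less: threads overlap the waits
```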

3. Use Dedicated Queues

Separate heavy jobs from light ones:

# config/sidekiq.yml
:queues:
  - [critical, 3] # 3x weight
  - [default, 2]
  - [low, 1]

Critical jobs get more CPU time. Low-priority jobs don’t block the rest.
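These weights are probabilistic, not strict priorities: on each fetch Sidekiq orders the queues with weighted odds, so with 3/2/1 weights critical is checked first roughly half the time. A quick simulation of that weighting in plain Ruby:

```ruby
# Sketch: approximate weighted queue ordering by sampling from a weighted pool.
weights = { 'critical' => 3, 'default' => 2, 'low' => 1 }
pool    = weights.flat_map { |queue, w| [queue] * w }   # 6 entries total

trials        = 60_000
first_checked = Hash.new(0)
trials.times { first_checked[pool.sample] += 1 }

critical_share = first_checked['critical'] / trials.to_f  # ~0.5 (3 of 6 weight units)
```

This is why weighted queues avoid starvation: low still gets checked first about 1/6 of the time, unlike a strictly ordered queue list.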

4. Rate Limiting (Sidekiq Enterprise)

Throttle jobs that hit external APIs:

class EmailWorker
  include Sidekiq::Worker
  # Sidekiq Enterprise window limiter: at most 100 calls per minute
  LIMITER = Sidekiq::Limiter.window('email_api', 100, :minute)
  def perform(user_id)
    LIMITER.within_limit { send_email(user_id) } # rate-limited work goes here
  end
end

Prevents hitting rate limits and keeps Redis usage predictable.

5. Unique Jobs (sidekiq-unique-jobs)

Avoid duplicate jobs for the same work:

sidekiq_options lock: :until_executed, on_conflict: :log

Reduces redundant work and Redis load when jobs are retried or triggered multiple times.
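Under the hood, such gems reduce each job to a digest of its class and arguments and hold a Redis lock on that digest until the job runs. A minimal in-memory sketch of the idea (a Set standing in for Redis, helper names hypothetical):

```ruby
require 'digest'
require 'json'
require 'set'

# Sketch: dedupe enqueues by a (class, args) digest, like an :until_executed lock.
def lock_digest(worker_class, args)
  Digest::MD5.hexdigest("#{worker_class}:#{args.to_json}")
end

held_locks = Set.new   # stands in for Redis lock keys
enqueued   = 0

3.times do
  # Set#add? returns nil when the digest is already held, so duplicates are dropped
  enqueued += 1 if held_locks.add?(lock_digest('WeeklyEmailWorker', [42]))
end
held_locks.delete(lock_digest('WeeklyEmailWorker', [42]))  # released after execution

enqueued  # => 1: only the first enqueue got through while the lock was held
```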

6. Dead Job Cleanup

Dead jobs accumulate in Redis. Set retention and cleanup:

# config/initializers/sidekiq.rb
Sidekiq.configure_server do |config|
  config.death_handlers << ->(job, ex) {
    # Log, alert, or move to a dead-letter queue
  }
end

Use dead_max_jobs and periodic cleanup so Redis doesn’t grow unbounded.

7. Job Size Limits

Large payloads increase Redis memory and serialization cost:

# Keep payloads small; pass IDs, not full objects
WeeklyEmailWorker.perform_async(user_id) # ✅
WeeklyEmailWorker.perform_async(user.to_json) # ❌
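The gap is easy to measure. With an illustrative user hash, compare what each call would store in Redis:

```ruby
require 'json'

# Sketch: payload size of an ID vs. a serialized record (fields are illustrative).
user = { id: 42, name: 'Ada Lovelace', email: 'ada@example.com',
         preferences: { locale: 'en', digest: 'weekly', topics: %w[ruby redis] } }

id_payload   = [user[:id]].to_json    # => "[42]" -- 4 bytes of args
full_payload = [user.to_json].to_json # the args perform_async(user.to_json) would store
```

Beyond size, passing IDs also means the worker loads fresh data at run time instead of acting on a stale snapshot serialized at enqueue time.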

8. Connection Pooling

Ensure each worker process has a bounded Redis connection pool:

# config/initializers/sidekiq.rb
Sidekiq.configure_server do |config|
  config.redis = { url: ENV['REDIS_URL'], size: 25 }
end

Prevents connection exhaustion under load.

9. Scheduled Job Limits

Scheduled jobs live in Redis. If you schedule millions of jobs, you may need to cap or paginate:

# Avoid scheduling 1M jobs at once
# Use batch + perform_in with reasonable batch sizes

10. Redis Memory and Eviction

Configure Redis for Sidekiq:

maxmemory 2gb
# noeviction for the Sidekiq instance; use volatile-lru only on a separate, cache-only instance
maxmemory-policy noeviction

Monitor memory and eviction to avoid unexpected data loss.


Summary

| Strategy           | When to Use                                                     |
|--------------------|-----------------------------------------------------------------|
| Batch + perform_in | Many similar jobs at a specific time; reduces Redis ops by ~100x |
| push_bulk          | Large batches of jobs without per-job delays                    |
| Dedicated queues   | Different priority levels for job types                         |
| Rate limiting      | External APIs or rate-limited services                          |
| Unique jobs        | Idempotent or duplicate-prone jobs                              |
| Small payloads     | Always; pass IDs instead of full objects                        |
| Connection pooling | High concurrency or many processes                              |

The batch + perform_in pattern is especially effective for time-sensitive jobs that must run in a narrow window while keeping Redis overhead low.

Happy Coding with Sidekiq!


🚀 Optimizing Third-Party Script Loading in a Rails + Vue Hybrid Architecture

Part 1: The Problem – When Legacy Meets Modern Frontend

Our Architecture: A Common Evolution Story

Many web applications today follow a similar evolutionary path. What started as a traditional Rails monolith gradually transforms into a modern hybrid architecture. Our application, let’s call it “MealCorp,” followed this exact journey:

Phase 1: Traditional Rails Monolith

# Traditional Rails serving HTML + embedded JavaScript
class HomeController < ApplicationController
  def index
    # Rails renders ERB templates with inline scripts
    render 'home/index'
  end
end

Phase 2: Hybrid Rails + Vue Architecture (Current State)

# Modern hybrid: Rails API + Vue frontend
class AppController < ApplicationController
  INDEX_PATH = Rails.root.join('public', 'app.html')
  INDEX_CONTENT = File.exist?(INDEX_PATH) && File.open(INDEX_PATH, &:read).html_safe

  def index
    if Rails.env.development?
      redirect_to request.url.gsub(':3000', ':5173') # Vite dev server
    else
      render html: INDEX_CONTENT # Serve built Vue app
    end
  end
end

The routes configuration looked like this:

# config/routes.rb
Rails.application.routes.draw do
  root 'home#index'
  get '/dashboard' => 'app#index'
  get '/settings' => 'app#index'
  get '/profile' => 'app#index'
  # Most routes serve the Vue SPA
end

The Hidden Performance Killer

While our frontend was modern and fast, we discovered a critical performance issue that’s common in hybrid architectures. Our Google PageSpeed scores were suffering, showing this alarming breakdown:

JavaScript Execution Time Analysis:

Reduce JavaScript execution time: 1.7s
┌─────────────────────────────────────────────────────────────┐
│ Script                           │ Total │ Evaluation │ Parse │
├─────────────────────────────────────────────────────────────┤
│ Google Tag Manager              │ 615ms │    431ms   │ 171ms │
│ Rollbar Error Tracking          │ 258ms │    218ms   │  40ms │
│ Facebook SDK                    │ 226ms │    155ms   │  71ms │
│ Main Application Bundle         │ 190ms │    138ms   │  52ms │
└─────────────────────────────────────────────────────────────┘

The smoking gun? Third-party monitoring scripts were consuming more execution time than our actual application!

Investigating the Mystery

The puzzle deepened when we compared our source files:

Vue Frontend Source (index.html):

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <title>MealCorp Dashboard</title>
    <!-- Clean, minimal head section -->
    <script src="https://js.stripe.com/v3"></script>
    <script src="https://kit.fontawesome.com/abc123.js"></script>
  </head>
  <body>
    <div id="app"></div>
    <script type="module" src="/src/main.ts"></script>
  </body>
</html>

Built Static File (public/app.html):

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <title>MealCorp Dashboard</title>
    <!-- Same clean content, no third-party scripts -->
    <script type="module" crossorigin src="/assets/index-xyz123.js"></script>
    <link rel="stylesheet" crossorigin href="/assets/index-abc456.css">
  </head>
  <body>
    <div id="app"></div>
  </body>
</html>

But Browser “View Source” Showed:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <title>MealCorp Dashboard</title>

    <!-- Mystery scripts appearing from nowhere! -->
    <script>var _rollbarConfig = {"accessToken":"token123...","captureUncaught":true...}</script>
    <script>!function(r){var e={}; /* Minified Rollbar library */ }</script>

    <script>(function(w,d,s,l,i){w[l]=w[l]||[]; /* GTM script */ })(window,document,'script','dataLayer','GTM-ABC123');</script>

    <!-- Our clean application code -->
    <script type="module" crossorigin src="/assets/index-xyz123.js"></script>
  </head>
  <body>
    <div id="app"></div>
  </body>
</html>

The Root Cause Discovery

After investigation, we discovered that Rails was automatically injecting third-party scripts at runtime, despite serving static files!

Here’s what was happening in our Rails configuration:

Google Tag Manager Configuration:

# config/initializers/analytics.rb (Old problematic approach)
# This was loading synchronously in the Rails asset pipeline

Rollbar Configuration:

# config/initializers/rollbar.rb
Rollbar.configure do |config|
  config.access_token = 'server_side_token_123'

  # The culprit: Automatic JavaScript injection!
  config.js_enabled = true  # ❌ This caused the performance issues
  config.js_options = {
    accessToken: Rails.application.credentials[Rails.env.to_sym][:rollbar_client_token],
    captureUncaught: true,
    payload: { environment: Rails.env },
    hostSafeList: ['example.com', 'staging.example.com']
  }
end

The Request Flow That Caused Our Performance Issues:

  1. Browser requests /dashboard
  2. Rails routes to AppController#index
  3. Rails renders static public/app.html content
  4. Rollbar gem automatically injects JavaScript into the HTML response
  5. GTM configuration adds synchronous tracking scripts
  6. Browser receives HTML with blocking third-party scripts
  7. Performance suffers due to synchronous execution

Part 2: The Solution – Modern Deferred Loading

Understanding the Performance Impact

The core issue was synchronous script execution during page load. Each third-party service was blocking the main thread:

<!-- What was happening (blocking): -->
<script>
  var _rollbarConfig = {...}; // Immediate execution - blocks rendering
</script>
<script>
  (function(w,d,s,l,i){ // GTM immediate execution - blocks rendering
    // Heavy synchronous operations
  })(window,document,'script','dataLayer','GTM-ABC123');
</script>

The Modern Solution: Deferred Loading Architecture

We implemented a Vue-based deferred loading system that maintains identical functionality while dramatically improving performance.

Step 1: Disable Rails Auto-Injection

# config/initializers/rollbar.rb
Rollbar.configure do |config|
  config.access_token = 'server_side_token_123'

  # Disable automatic JavaScript injection for better performance
  config.js_enabled = false  # ✅ Stop Rails from injecting scripts

  # Server-side error tracking remains unchanged
  config.person_method = "current_user"
  # ... other server-side config
end

Step 2: Implement Vue-Based Deferred Loading

// src/App.vue
<script setup lang="ts">
import { onMounted } from 'vue';

// Load third-party scripts after Vue app mounts for better performance  
onMounted(() => {
  loadGoogleTagManager();
  loadRollbarDeferred();
});

function loadGoogleTagManager() {
  const script = document.createElement('script');
  script.async = true;
  script.src = `https://www.googletagmanager.com/gtm.js?id=${import.meta.env.VITE_GTM_ID}`;

  // Track initial pageview once GTM loads
  script.onload = () => {
    trackEvent({
      event: 'page_view',
      page_title: document.title,
      page_location: window.location.href,
      page_path: window.location.pathname
    });
  };

  document.head.appendChild(script);
}

function loadRollbarDeferred() {
  const rollbarToken = import.meta.env.VITE_ROLLBAR_CLIENT_TOKEN;
  if (!rollbarToken) return;

  // Load after all other resources are complete
  window.addEventListener('load', () => {
    // Initialize Rollbar configuration
    (window as any)._rollbarConfig = {
      accessToken: rollbarToken,
      captureUncaught: true,
      payload: {
        environment: import.meta.env.MODE // 'production', 'staging', etc.
      },
      hostSafeList: ['example.com', 'staging.example.com']
    };

    // Load Rollbar script asynchronously
    const rollbarScript = document.createElement('script');
    rollbarScript.async = true;
    rollbarScript.src = 'https://cdn.rollbar.com/rollbarjs/refs/tags/v2.26.1/rollbar.min.js';
    document.head.appendChild(rollbarScript);
  });
}
</script>

Step 3: TypeScript Support

// src/types/global.d.ts
declare global {
  interface Window {
    _rollbarConfig?: {
      accessToken: string;
      captureUncaught: boolean;
      payload: {
        environment: string;
      };
      hostSafeList: string[];
    };
    dataLayer?: any[];
  }
}

export {};

Environment Configuration

# .env.production
VITE_GTM_ID=GTM-PROD123
VITE_ROLLBAR_CLIENT_TOKEN=client_token_prod_456

# .env.staging  
VITE_GTM_ID=GTM-STAGING789
VITE_ROLLBAR_CLIENT_TOKEN=client_token_staging_789

Testing the Implementation

Comprehensive Testing Script:

// Browser console testing function
function testTrackingImplementation() {
  console.log('=== TRACKING SYSTEM TEST ===');

  // Test 1: GTM Integration
  console.log('GTM dataLayer exists:', !!window.dataLayer);
  console.log('GTM script loaded:', !!document.querySelector('script[src*="googletagmanager.com"]'));
  console.log('Recent GTM events:', window.dataLayer?.slice(-3));

  // Test 2: Rollbar Integration  
  console.log('Rollbar loaded:', typeof Rollbar !== 'undefined');
  console.log('Rollbar config:', window._rollbarConfig);
  console.log('Rollbar script loaded:', !!document.querySelector('script[src*="rollbar"]'));

  // Test 3: Send Test Events
  // GTM Test Event
  window.dataLayer?.push({
    event: 'test_tracking',
    test_source: 'manual_verification',
    timestamp: new Date().toISOString()
  });

  // Rollbar Test Error
  if (typeof Rollbar !== 'undefined') {
    Rollbar.error('Test error for verification - please ignore', {
      test_context: 'performance_optimization_verification'
    });
  }

  console.log('✅ Test events sent - check dashboards in 1-2 minutes');
}

// Run the test
testTrackingImplementation();

Expected Console Output:

=== TRACKING SYSTEM TEST ===
GTM dataLayer exists: true
GTM script loaded: true
Recent GTM events: [
  {event: "page_view", page_title: "Dashboard", ...},
  {event: "gtm.dom", ...}, 
  {event: "gtm.load", ...}
]
Rollbar loaded: true
Rollbar config: {accessToken: "...", captureUncaught: true, ...}
Rollbar script loaded: true
✅ Test events sent - check dashboards in 1-2 minutes

Performance Results

Before Optimization:

JavaScript Execution Time: 1.7s
├── Google Tag Manager: 615ms (synchronous)
├── Rollbar: 258ms (synchronous)  
├── Facebook SDK: 226ms (synchronous)
└── Application Code: 190ms

After Optimization:

JavaScript Execution Time: 0.4s
├── Application Code: 190ms (immediate)
├── Deferred Scripts: ~300ms (non-blocking, post-load)
└── Performance Improvement: ~1.3s (76% reduction)

Key Benefits Achieved

  1. Performance Gains:
  • 76% reduction in blocking JavaScript execution time
  • Improved Core Web Vitals scores
  • Better user experience with faster perceived load times
  2. Maintained Functionality:
  • Identical error tracking capabilities
  • Same analytics data collection
  • All monitoring dashboards continue working
  3. Better Architecture:
  • Modern Vue-based script management
  • Environment-specific configuration
  • TypeScript support for better maintainability
  4. Security Improvements:
  • Proper separation of server vs. client tokens
  • Environment variable management
  • No sensitive data in version control

Common Pitfalls and Solutions

Issue 1: Token Confusion

Error: post_client_item scope required but token has post_server_item

Solution: Use separate client-side tokens for browser JavaScript.

Issue 2: Missing Initial Pageviews
Solution: Implement manual pageview tracking in script.onload callback.

Issue 3: TypeScript Errors

// Fix: Add proper type declarations
(window as any)._rollbarConfig = { ... }; // Type assertion approach
// OR declare global types for better type safety

This hybrid architecture optimization demonstrates how modern frontend practices can be retroactively applied to existing applications, achieving significant performance improvements while maintaining full functionality. The key is identifying where legacy server-side patterns conflict with modern client-side performance optimization and implementing targeted solutions.


Happy Optimization! 🚀