🌀 Event Loops Demystified: The Secret Behind High-Performance Applications

Ever wondered how a single Redis server can handle thousands of simultaneous connections without breaking a sweat? Or how Node.js can serve millions of requests with just one thread? The magic lies in the event loop – a deceptively simple concept that powers everything from your web browser to Netflix’s streaming infrastructure.

🎯 What is an Event Loop?

An event loop is like a super-efficient restaurant manager who never stops moving:

👨‍💼 Event Loop Manager:
"Order ready at table 3!" → Handle it
"New customer at door!" → Seat them  
"Payment needed at table 7!" → Process it
"Kitchen needs supplies!" → Order them

Traditional approach (blocking):

# One waiter per table (one thread per request)
waiter1.take_order_from_table_1  # Waiter1 waits here...
waiter2.take_order_from_table_2  # Waiter2 waits here...  
waiter3.take_order_from_table_3  # Waiter3 waits here...

Event loop approach (non-blocking):

# One super-waiter handling everything
loop do
  event = get_next_event()
  case event.type
  when :order_ready then serve_food(event.table)
  when :new_customer then seat_customer(event.customer)
  when :payment then process_payment(event.table)
  end
end

🏗️ The Anatomy of an Event Loop

🔄 The Core Loop

// Simplified event loop (Node.js style)
while (true) {
  // 1. Check for completed I/O operations
  let events = pollForEvents();

  // 2. Execute callbacks for completed operations
  events.forEach(event => {
    event.callback();
  });

  // 3. Execute any scheduled timers
  runTimers();

  // 4. If no more work, sleep until next event
  if (noMoreWork()) {
    sleep();
  }
}

📋 Event Queue System

┌─ Timer Queue ──────┐    ┌─ I/O Queue ────────┐    ┌─ Check Queue ──────┐
│ setTimeout()       │    │ File operations    │    │ setImmediate()     │
│ setInterval()      │    │ Network requests   │    │ process.nextTick() │
└────────────────────┘    │ Database queries   │    └────────────────────┘
                          └────────────────────┘
                                     ↓
                          ┌─────────────────────┐
                          │   Event Loop Core   │
                          │  "What's next?"     │
                          └─────────────────────┘
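To make the loop concrete, here is a minimal, runnable sketch of the same poll → dispatch → sleep cycle in Ruby: a tiny echo server in which IO.select plays the role of epoll/kqueue. This is an illustration, not production code; the port and buffer size are arbitrary.

# Minimal event loop sketch: one thread, many clients
require 'socket'

server = TCPServer.new(9000)
readables = [server]
timers = []   # [due_time, callback] pairs, kept sorted by due_time

loop do
  # Sleep until a socket is ready or the next timer is due
  timeout = timers.empty? ? nil : [timers.first[0] - Time.now, 0].max
  ready, = IO.select(readables, nil, nil, timeout)

  (ready || []).each do |io|
    if io == server
      readables << server.accept            # new client connected
    else
      data = io.read_nonblock(1024, exception: false)
      next if data == :wait_readable        # spurious wakeup, skip
      if data.nil?
        readables.delete(io)                # client disconnected
        io.close
      else
        io.write(data)                      # echo without blocking anyone else
      end
    end
  end

  # Fire any timers that are due
  timers.shift[1].call while timers.any? && timers.first[0] <= Time.now
end

Run it and connect with nc localhost 9000 from a few terminals: one thread, many simultaneous clients.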

⚡ Why Event Loops Are Lightning Fast

🚫 No Thread Context Switching

# Traditional multi-threading overhead
Thread 1: [████████] Context Switch [████████] Context Switch
Thread 2:           [████████] Context Switch [████████]
Thread 3:                     [████████] Context Switch [████████]
# CPU wastes time switching between threads!

# Event loop efficiency  
Single Thread: [████████████████████████████████████████████████]
# No context switching = pure work!

🎯 Perfect for I/O-Heavy Applications

# What most web applications do:
def handle_request
  user = database.find_user(id)     # Wait 10ms
  posts = api.fetch_posts(user)     # Wait 50ms  
  cache.store(posts)                # Wait 5ms
  render_response(posts)            # Work 1ms
end
# Total: 66ms (65ms waiting, 1ms working!)

Event loop transforms this:

# Non-blocking version
def handle_request
  database.find_user(id) do |user|           # Queue callback
    api.fetch_posts(user) do |posts|         # Queue callback  
      cache.store(posts) do                  # Queue callback
        render_response(posts)               # Finally execute
      end
    end
  end
  # Returns immediately! Event loop handles the rest
end
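The nested callbacks work, but they read inside-out. With a fiber-based scheduler like the Async gem (shown later in this post), the same flow can be written top-to-bottom while still yielding to the event loop at each wait. A sketch only, assuming async-aware drivers; find_user, fetch_posts, cache_store, and render_response are hypothetical stand-ins for the calls above:

require 'async'

Async do
  # Each call parks this fiber while waiting; the event loop
  # serves other requests in the meantime (assumes async-aware drivers)
  user  = find_user(id)
  posts = fetch_posts(user)
  cache_store(posts)
  render_response(posts)
end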

☁️ Netflix: Cloud Architecture

Netflix's streaming infrastructure (mentioned at the top of this post) is a large-scale example of event-driven design in production, spread across many microservices on AWS. For the details, including the high-level system design, the microservices architecture, the billing migration to AWS, and application data caching using SSDs, read System Design Netflix | A Complete Architecture on GeeksForGeeks.

Reference: Netflix Tech Blog, GeeksForGeeks Blog

🔧 Event Loop in Action: Redis Case Study

📡 How Redis Handles 10,000 Connections

// Simplified Redis event loop (adapted from ae.c)
void aeMain(aeEventLoop *eventLoop) {
    while (!eventLoop->stop) {
        // 1. Wait for file descriptor events (epoll/kqueue under the hood);
        //    tvp is the optional timeout for the poll call
        int numEvents = aeApiPoll(eventLoop, tvp);

        // 2. Process each ready event
        for (int i = 0; i < numEvents; i++) {
            aeFileEvent *fe = &eventLoop->events[eventLoop->fired[i].fd];
            int fd = eventLoop->fired[i].fd;

            if (eventLoop->fired[i].mask & AE_READABLE) {
                // Socket has data to read (e.g., an incoming command)
                fe->rfileProc(eventLoop, fd, fe->clientData, eventLoop->fired[i].mask);
            }
            if (eventLoop->fired[i].mask & AE_WRITABLE) {
                // Socket is ready to accept a reply
                fe->wfileProc(eventLoop, fd, fe->clientData, eventLoop->fired[i].mask);
            }
        }
    }
}

🎭 The BRPOP Magic Revealed

# When you call redis.brpop()
redis.brpop("queue:default", timeout: 30)

# Redis internally does:
# 1. Check if list has items → Return immediately if yes
# 2. If empty → Register client as "blocked"  
# 3. Event loop continues serving other clients
# 4. When item added → Wake up blocked client
# 5. Return the item

# Your Ruby thread blocks, but Redis keeps serving others!
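Here is what a minimal worker loop built on that behavior might look like, assuming the redis-rb gem; brpop returns a [list, item] pair, or nil when the timeout expires:

require 'redis'

redis = Redis.new

# Each brpop blocks *this* Ruby thread for up to 30 seconds,
# while the Redis server's event loop keeps serving other clients
loop do
  list, item = redis.brpop("queue:default", timeout: 30)
  next if item.nil?              # timed out on an empty list, poll again
  puts "Processing job from #{list}: #{item}"
end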

🌍 Event Loops in the Wild

🟢 Node.js: JavaScript Everywhere

// Single thread handling thousands of requests
const http = require('http');

// Single thread handling thousands of requests
const server = http.createServer((req, res) => {
  // This doesn't block other requests!
  database.query('SELECT * FROM users', (err, result) => {
    res.end(JSON.stringify(result));
  });
});

server.listen(3000);

🐍 Python: AsyncIO

import asyncio

async def handle_request(user_id):
    # Non-blocking database call
    user = await database.fetch_user(user_id)
    # Event loop serves other requests while waiting
    posts = await api.fetch_posts(user.id)
    return render_template('profile.html', user=user, posts=posts)

async def handle_many_requests():
    # Run many requests concurrently on a single thread
    await asyncio.gather(*(handle_request(i) for i in range(100)))

asyncio.run(handle_many_requests())

💎 Ruby: EventMachine & Async

# EventMachine (older)
require 'eventmachine'
require 'em-http-request'

EventMachine.run do
  EventMachine::HttpRequest.new('http://api.example.com').get.callback do |http|
    puts "Got response: #{http.response}"
  end
end

# Modern Ruby (Async gem)
require 'async'
require 'net/http'

Async do |task|
  response = task.async { Net::HTTP.get('example.com', '/api/data') }
  puts response.wait
end

Read More (Premium content): RailsDrop: Rubys-async-revolution

Message from the post: "💪 Ruby isn't holding back while JavaScript spreads across the world! 💪 Rails is NOT dying in 2025, it's EVOLVING! 💪 The Ruby/Rails community is on fire, with a big future ahead! 🕺"


⚙️ The Dark Side: Event Loop Limitations

🐌 CPU-Intensive Tasks Kill Performance

// This blocks EVERYTHING!
function badCpuTask() {
  let result = 0;
  for (let i = 0; i < 10000000000; i++) {  // 10 billion iterations
    result += i;
  }
  return result;
}

// While this runs, NO other requests get served!

🩹 The Solution: Worker Threads

// Offload heavy work to a separate thread
const { Worker } = require('worker_threads');

function goodCpuTask() {
  return new Promise((resolve) => {
    // cpu-intensive-task.js receives the message via parentPort,
    // does the heavy loop, and posts the result back
    const worker = new Worker('./cpu-intensive-task.js');
    worker.postMessage({ data: 'process this' });
    worker.on('message', resolve);
  });
}
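Ruby has a comparable escape hatch. As a sketch, assuming Ruby 3+, a Ractor can run the CPU-bound work on another core while the main thread stays responsive:

# Offload the heavy sum to a Ractor (true parallelism on CRuby 3+)
ractor = Ractor.new do
  total = 0
  100_000_000.times { |i| total += i }
  total
end

# ...the main thread keeps handling events here...

puts ractor.take   # collect the result once it is ready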

🎯 When to Use Event Loops

Perfect For:

  • Web servers (lots of I/O, little CPU)
  • API gateways (routing requests)
  • Real-time applications (chat, games)
  • Database proxies (Redis, MongoDB)
  • File processing (reading/writing files)

Avoid For:

  • Heavy calculations (image processing)
  • Machine learning (training models)
  • Cryptographic operations (Bitcoin mining)
  • Scientific computing (physics simulations)

🚀 Performance Comparison

Handling 10,000 Concurrent Connections (illustrative, order-of-magnitude numbers):

Traditional Threading:
├── Memory: ~2GB (200KB per thread)
├── Context switches: ~100,000/second  
├── CPU overhead: ~30%
└── Max connections: ~5,000

Event Loop:
├── Memory: ~50MB (single thread + buffers)
├── Context switches: 0
├── CPU overhead: ~5%  
└── Max connections: ~100,000+

🔮 The Future: Event Loops Everywhere

Modern frameworks are embracing event-driven architecture:

  • Rails 7+: Hotwire + ActionCable
  • Django: ASGI + async views
  • Go: Goroutines (multiplexed over an event-driven netpoller)
  • Rust: Tokio async runtime
  • Java: Project Loom virtual threads

💡 Key Takeaways

  1. Event loops excel at I/O-heavy workloads – perfect for web applications
  2. They use a single thread – no context switching overhead
  3. Non-blocking operations – one slow request doesn’t block others
  4. CPU-intensive tasks are kryptonite – offload to worker threads
  5. Modern web development is moving toward async patterns – learn them now!

The next time you see redis.brpop() blocking your Ruby thread while Redis happily serves thousands of other clients, you’ll know exactly what’s happening under the hood! 🎉


Our Challenges with Microservices on AWS ECS

At our startup, our predecessors chose to use micro-services for our new website because it was the trending technology at the time.

This decision has many benefits, such as:

  • Scaling a website becomes much easier when using micro-services, as each service can be scaled independently based on its individual needs.
  • The loosely coupled nature of micro-services also allows for easier development and maintenance, as changes to one service do not affect the functionality of other services.
  • Additionally, deployment can be focused on each individual service, making the overall process more efficient.
  • Micro-services also allow for the use of different technologies for each service, providing greater flexibility and the ability to choose the best tools for each task.
  • Finally, testing can be concentrated on one service at a time, allowing for more thorough and effective testing, which can result in higher quality code and a better user experience.

When designing the application around micro-services, we tried to anticipate the problems we might face in the future, and to weigh whether those problems would be significant enough to outweigh the benefits.

One factor to keep in mind is that our website currently has low traffic and we are acquiring clients gradually. For a site of that size, we need to ask whether the benefits of micro-services really outweigh the drawbacks.

In general, micro-services bring increased complexity and overhead in development, potential performance costs where services integrate, and the operational challenge of keeping many services communicating reliably.

Despite the benefits of micro-services, we have faced some issues in implementing them. One significant challenge is the increased complexity of deployment and maintenance that comes with having multiple services. This can require more time and resources to manage and can potentially increase the likelihood of errors.

Additionally, the cost of hosting all of the micro-services on AWS ECS can be higher than other hosting solutions for a low-traffic website. This is something to consider when weighing the benefits and drawbacks of micro-services for our specific needs.

Another challenge we have faced is managing dependencies between services, which are difficult to avoid entirely. When one service goes offline, it can break the services that depend on it, surfacing as a "No Service" error on the website.

Finally, going back to a monolithic application is very difficult, even just merging 3-4 services together, as they may use different software stacks or versions. This makes sweeping changes or updates to the application as a whole hard to pull off.

Carefully consider whether a micro-service architecture fits your business and current situation. If your website sees little traffic or you are just starting out, micro-services may be neither necessary nor cost-effective. Keep in mind that hosting multiple micro-services carries a real minimum cost, so budget for it if you go this route.

Ultimately, the decision to use micro-services should rest on a thorough assessment of your business needs and available resources, rather than on a trend or industry hype.

Setup:

  • Used AWS ECS (EC2 launch type) with services and task definitions defined
  • 11 micro-services, 11 containers running
  • Cost: Rs. 12k (~$160) per month

Workarounds:

  • Consider the AWS Fargate launch type, though we are not sure it would resolve these issues
  • Deploy all the services on one EC2 instance, without ECS

Install Learning Locker (Node.js) on macOS

Visit the following link to install LL on CentOS, Fedora, Ubuntu, and Debian:
http://docs.learninglocker.net/guides-installing/

macOS is not covered by the installation guide above, but you can install Learning Locker on macOS with a custom installation.

Steps to install LL on macOS:

1. Clone the Learning Locker git repo:

$ git clone https://github.com/LearningLocker/learninglocker.git

Enter the directory and install the requirements:

$ cd learninglocker
$ yarn install

Five distinct services can be built from this codebase.
2. Install the services

If you want to run all services on the same machine, you can build everything with a single command:
$ yarn build-all

Or install the services separately if you want to run each one on a different server:

Install the UI Server
$ yarn build-ui-server

Install the UI Client
$ yarn build-ui-client

Install the API Server
$ yarn build-api-server

Install the Worker
$ yarn build-worker-server

Install CLI
$ yarn build-cli-server

Note: Copy the .env.example into a new .env file and edit as required

RUNNING THE SERVICES VIA PM2

Install pm2, if you have not installed it yet:
$ npm i -g pm2

To start API, Scheduler, UI, Worker services, navigate to the LL working directory and run:

$ pm2 start pm2/all.json

INSTALLING THE XAPI SERVICE

Step 1: Clone the repo
$ git clone https://github.com/LearningLocker/xapi-service.git

Step 2: Enter the directory, then install the requirements and build:

$ cd xapi-service
$ yarn install
$ yarn build

Note: Copy the .env.example into a new .env file and edit as required

To start the xAPI service, navigate to the xAPI Service working directory and run:

$ pm2 start pm2/xapi.json

You can check the service status:

$ pm2 status

$ pm2 restart all # restart all services

$ pm2 logs # view logs

Launch the UI at http://localhost:3310/login. Note that I changed the UI port in .env to 3310, since the other services are running on their default ports.


Now you have to create a user to log in. Let's create an admin user with the following command.

$ node cli/dist/server createSiteAdmin [email] [organisation] [password]

To use this command, you must have built the CLI service for LL, as mentioned in the installation steps above.

An admin example:
$ node cli/dist/server createSiteAdmin "MyEmailId" "CoMakeIT" "myPassword"

You can run the migrations with the following command, if any pending migrations exist:

$ node cli/dist/server migrateMongo


Now try to log in with the credentials.

Ooops… there is an issue, and we can't log in.

After doing some research I found the issue: we need to add a secret key to our application for JWT tokens to work.
Open the .env file and find:

# Unique string used for hashing
# Recommended length, 256 bits
APP_SECRET=

Create a 256-bit key and set it as the value. As I am a Rails developer, it is easy for me to create one by going into any of my Rails projects and typing:

$ rake secret

I get one! 🙂

Or you can use an online generator like https://randomkeygen.com/
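Alternatively, if you have OpenSSL available (it ships with macOS), you can generate a 256-bit key straight from the terminal:

$ openssl rand -hex 32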

Then, from the LL project folder, run:
$ pm2 restart all


Try to sign in again

Wohh, it's done! Great. Now you can try out some of the dashboard items, like the graphs. Go on.