Design Studio v0.9.5: A Visual Improvement in E-commerce Experience 🎨

Published: June 25, 2025

I am thrilled to announce the release of Design Studio v0.9.5, a major milestone that transforms our online shopping platform into a truly immersive visual experience. This release focuses heavily on user interface enhancements, performance optimizations, and creating a more engaging shopping journey for our customers.

🚀 What’s New in v0.9.5

1. Stunning 10-Slide Hero Carousel

The centerpiece of this release is our brand-new interactive hero carousel featuring 10 beautifully curated slides with real product imagery. Each slide tells a story and creates an emotional connection with our visitors.

Dynamic Gradient Themes

Each slide features its own unique gradient theme:

<!-- Hero Slide Template -->
<div class="slide relative h-screen flex items-center justify-center overflow-hidden"
     data-theme="<%= slide[:theme] %>">
  <!-- Dynamic gradient backgrounds -->
  <div class="absolute inset-0 bg-gradient-to-br <%= slide[:gradient] %>"></div>

  <!-- Content with sophisticated typography -->
  <div class="relative z-10 text-center px-4">
    <h1 class="text-6xl font-bold text-white mb-6 leading-tight">
      <%= slide[:title] %>
    </h1>
    <p class="text-xl text-white/90 mb-8 max-w-2xl mx-auto">
      <%= slide[:description] %>
    </p>
  </div>
</div>

Smart Auto-Cycling with Manual Controls

// Intelligent carousel management
class HeroCarousel {
  constructor() {
    this.currentSlide = 0;
    this.autoInterval = 4000; // 4-second intervals
    this.isPlaying = true;
  }

  startAutoPlay() {
    this.autoPlayTimer = setInterval(() => {
      if (this.isPlaying) {
        this.nextSlide();
      }
    }, this.autoInterval);
  }

  pauseOnInteraction() {
    // Pause auto-play when user interacts
    this.isPlaying = false;
    setTimeout(() => this.isPlaying = true, 10000); // Resume after 10s
  }
}

2. Modular Component Architecture

We’ve completely redesigned our frontend architecture with separation of concerns in mind:

<!-- Main Hero Slider Component -->
<%= render 'home/hero_slider' %>

<!-- Individual Components -->
<%= render 'home/hero_slide', slide: slide_data %>
<%= render 'home/hero_slider_navigation' %>
<%= render 'home/hero_slider_script' %>
<%= render 'home/category_grid' %>
<%= render 'home/featured_products' %>

Component-Based Development Benefits:

  • Maintainability: Each component has a single responsibility
  • Reusability: Components can be used across different pages
  • Testing: Isolated components are easier to test
  • Performance: Selective rendering and caching opportunities (see the caching sketch below)
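
As one example of those caching opportunities, a component partial can be wrapped in a fragment cache. This is only a sketch with an assumed cache key, not something the app necessarily ships with today:

<!-- Hypothetical fragment cache around the hero slider partial -->
<% cache "home-hero-slider-v1" do %>
  <%= render 'home/hero_slider' %>
<% end %>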

3. Enhanced Visual Design System

Glass Morphism Effects

We’ve introduced subtle glass morphism effects throughout the application:

/* Modern glass effect implementation */
.glass-effect {
  background: rgba(255, 255, 255, 0.1);
  backdrop-filter: blur(10px);
  border: 1px solid rgba(255, 255, 255, 0.2);
  border-radius: 16px;
  box-shadow: 0 8px 32px 0 rgba(31, 38, 135, 0.37);
}

/* Category cards with gradient overlays */
.category-card {
  @apply relative overflow-hidden rounded-xl;

  &::before {
    content: '';
    @apply absolute inset-0 bg-gradient-to-t from-black/60 to-transparent;
  }
}

Dynamic Color Management

Our new helper system automatically manages theme colors:

# app/helpers/application_helper.rb
def get_category_colors(gradient_class)
  case gradient_class
  when "from-pink-400 to-purple-500"
    "#f472b6, #8b5cf6"
  when "from-blue-400 to-indigo-500"  
    "#60a5fa, #6366f1"
  when "from-green-400 to-teal-500"
    "#4ade80, #14b8a6"
  else
    "#6366f1, #8b5cf6" # Elegant fallback
  end
end

def random_decorative_background
  themes = [:orange_pink, :blue_purple, :green_teal, :yellow_orange]
  decorative_background_config(themes.sample)
end

4. Mobile-First Responsive Design

Every component is built with a mobile-first approach:

<!-- Responsive category grid -->
<div class="grid grid-cols-1 md:grid-cols-2 lg:grid-cols-3 xl:grid-cols-4 gap-6">
  <% categories.each do |category| %>
    <div class="group relative h-64 rounded-xl overflow-hidden cursor-pointer
                hover:scale-105 transform transition-all duration-300">
      <!-- Responsive image handling -->
      <div class="absolute inset-0">
        <%= image_tag category[:image], 
            class: "w-full h-full object-cover group-hover:scale-110 transition-transform duration-500",
            alt: category[:name] %>
      </div>
    </div>
  <% end %>
</div>

5. Public Product Browsing

We’ve opened up product browsing to all visitors:

# app/controllers/products_controller.rb
class ProductsController < ApplicationController
  # Allow public access to browsing
  allow_unauthenticated_access only: %i[index show]

  def index
    products = Product.all

    # Smart category filtering
    if params[:category].present?
      products = products.for_category(params[:category])
      @current_category = params[:category]
    end

    # Pagination for performance
    @pagy, @products = pagy(products)
  end
end
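
For context, the for_category scope used above could be as small as the following sketch (the real model may differ; the category column name is an assumption):

# app/models/product.rb – minimal sketch of the scope referenced above
class Product < ApplicationRecord
  scope :for_category, ->(category) { where(category: category) }
end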

🔧 Technical Improvements

Test Coverage Excellence

I’ve achieved 73.91% test coverage (272/368 lines), ensuring code reliability:

# Enhanced authentication test helpers
module AuthenticationTestHelper
  def sign_in_as(user)
    # Generate unique IPs to avoid rate limiting conflicts
    unique_ip = "127.0.0.#{rand(1..254)}"
    @request.remote_addr = unique_ip

    session[:user_id] = user.id
    user
  end
end
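
A hypothetical functional test using that helper might look like this (test class, action, and fixture names are assumptions, not code from the app):

# test/controllers/products_controller_test.rb – hypothetical usage
class ProductsControllerTest < ActionController::TestCase
  include AuthenticationTestHelper

  test "signed-in user can load the index" do
    sign_in_as users(:one)
    get :index
    assert_response :success
  end
end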

Asset Pipeline Optimization

Rails 8 compatibility with modern asset handling:

# config/application.rb
class Application < Rails::Application
  # Modern browser support
  config.allow_browser versions: :modern

  # Asset pipeline optimization
  config.assets.css_compressor = nil # Tailwind handles this
  config.assets.js_compressor = :terser
end

Security Enhancements

# Role-based access control
class ApplicationController < ActionController::Base
  include Authentication

  private

  def require_admin
    unless current_user&.admin?
      redirect_to root_path, alert: "Access denied."
    end
  end
end
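
An admin-only controller can then opt in with a before_action (the controller name below is hypothetical):

# Hypothetical admin-only controller using the filter above
class Admin::DashboardController < ApplicationController
  before_action :require_admin

  def show
  end
end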

📊 Performance Metrics

Before vs After v0.9.5:

| Metric | Before | After v0.9.5 | Improvement |
| --- | --- | --- | --- |
| Test Coverage | 45% | 73.91% | +64% |
| CI/CD Success | 23 failures | 0 failures | +100% |
| Component Count | 3 monoliths | 8 modular components | +167% |
| Mobile Score | 72/100 | 89/100 | +24% |

🎨 Design Philosophy

This release embodies our commitment to:

  1. Visual Excellence: Every pixel serves a purpose
  2. User Experience: Intuitive navigation and interaction
  3. Performance: Fast loading without sacrificing beauty
  4. Accessibility: Inclusive design for all users
  5. Maintainability: Clean, modular code architecture

🔮 What’s Next?

Version 0.9.5 sets the foundation for exciting upcoming features:

  • Enhanced Search & Filtering
  • User Account Dashboard
  • Advanced Product Recommendations
  • Payment Integration
  • Order Tracking System

🎉 Try It Today!

Experience the new Design Studio v0.9.5 and see the difference visual design makes in online shopping. Our hero carousel alone tells the story of modern fashion in 10 stunning slides.

Key Benefits for Users:

  • ✨ Immersive visual shopping experience
  • 📱 Perfect on any device
  • ⚡ Lightning-fast performance
  • 🔒 Secure and reliable

For Developers:

  • 🏗️ Clean, maintainable architecture
  • 🧪 Comprehensive test suite
  • 📚 Well-documented components
  • 🚀 Rails 8 compatibility

Design Studio v0.9.5 – Where technology meets artistry in e-commerce.

Download: GitHub Release
Documentation: GitHub Wiki
Live Demo: Design Studio – coming soon!


Enjoy Rails 8 with Hotwire! 🚀

Rails 8 Tests: 🔄 TDD vs 🎭 BDD | System Tests

Test-Driven Development (TDD) and Behavior-Driven Development (BDD) are complementary testing approaches that help teams build robust, maintainable software by defining expected behaviour before writing production code. In TDD, developers write small, focused unit tests that fail initially, then implement just enough code to make them pass, ensuring each component meets its specification. BDD extends this idea by framing tests in a shared language that all stakeholders – developers, QA, and product owners – can understand, using human-readable scenarios to describe system behaviour. While TDD emphasizes the correctness of individual units, BDD elevates collaboration and shared understanding by specifying the “why” and “how” of features in a narrative style, driving development through concrete examples of desired outcomes.

🔄 TDD vs 🎭 BDD: Methodologies vs Frameworks

🧠 Understanding the Concepts

🔄 TDD (Test-Driven Development)
  • Methodology/Process: Write test → Fail → Write code → Pass → Refactor
  • Focus: Testing the implementation and correctness
  • Mindset: “Does this code work correctly?”
  • Style: More technical, code-focused
🎭 BDD (Behavior-Driven Development)
  • Methodology/Process: Describe behavior → Write specs → Implement → Verify behavior
  • Focus: Testing the behavior and user requirements
  • Mindset: “Does this behave as expected from the user’s perspective?”
  • Style: More natural language, business-focused

🛠️ Frameworks Support Both Approaches

📋 RSpec (Primarily BDD-oriented)
# BDD Style - describing behavior
describe "TwoSum" do
  context "when given an empty array" do
    it "should inform user about insufficient data" do
      expect(two_sum([], 9)).to eq('Provide an array with length 2 or more')
    end
  end
end
⚙️ Minitest (Supports Both TDD and BDD)
🔧 TDD Style with Minitest
class TestTwoSum < Minitest::Test
  # Testing implementation correctness
  def test_empty_array_returns_error
    assert_equal 'Provide an array with length 2 or more', two_sum([], 9)
  end

  def test_valid_input_returns_indices
    assert_equal [0, 1], two_sum([2, 7], 9)
  end
end
🎭 BDD Style with Minitest
describe "TwoSum behavior" do
  describe "when user provides empty array" do
    it "guides user to provide sufficient data" do
      _(two_sum([], 9)).must_equal 'Provide an array with length 2 or more'
    end
  end

  describe "when user provides valid input" do
    it "finds the correct pair indices" do
      _(two_sum([2, 7], 9)).must_equal [0, 1]
    end
  end
end

🎯 Key Differences in Practice

🔄 TDD Approach
# 1. Write failing test
def test_two_sum_with_valid_input
  assert_equal [0, 1], two_sum([2, 7], 9)  # This will fail initially
end

# 2. Write minimal code to pass
def two_sum(nums, target)
  [0, 1]  # Hardcoded to pass
end

# 3. Refactor and improve
def two_sum(nums, target)
  # Actual implementation
end
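
For completeness, here is one possible final implementation that satisfies the specs and tests above (a sketch, not the only valid answer): it trades a hash for O(n) time.

# One possible refactored implementation (hash lookup, O(n))
def two_sum(nums, target)
  return 'Provide an array with length 2 or more' if nums.length < 2

  seen = {}
  nums.each_with_index do |num, index|
    complement = target - num
    return [seen[complement], index] if seen.key?(complement)
    seen[num] = index
  end
  nil
end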
🎭 BDD Approach
# 1. Describe the behavior first
describe "Finding two numbers that sum to target" do
  context "when valid numbers exist" do
    it "returns their indices" do
      # This describes WHAT should happen, not HOW
      expect(two_sum([2, 7, 11, 15], 9)).to eq([0, 1])
    end
  end
end

📊 Summary Table

| Aspect | TDD | BDD |
| --- | --- | --- |
| Focus | Implementation correctness | User behavior |
| Language | Technical | Business/Natural |
| Frameworks | Any (Minitest, RSpec, etc.) | Any (RSpec, Minitest spec, etc.) |
| Test Names | test_method_returns_value | "it should behave like..." |
| Audience | Developers | Stakeholders + Developers |

🎪 The Reality

  • RSpec encourages BDD but can be used for TDD
  • Minitest is framework-agnostic – supports both approaches equally
  • Your choice of methodology (TDD vs BDD) is independent of your framework choice
  • Many teams use hybrid approaches – BDD for acceptance tests, TDD for unit tests

The syntax doesn’t determine the methodology – it’s about how you think and approach the problem!

System Tests 💻⚙️

System tests in Rails (located in test/system/*) are full-stack integration tests that simulate real user interactions with your web application. They’re the highest level of testing in the Rails testing hierarchy and provide the most realistic testing environment.

System tests actually launch a real web browser (or headless browser) and interact with your application just like a real user would. Looking at our Rails app’s configuration: design_studio/test/application_system_test_case.rb

driven_by :selenium, using: :headless_chrome, screen_size: [ 1400, 1400 ]

This means our system tests run using:

  • Selenium WebDriver (browser automation tool)
  • Headless Chrome (Chrome browser without UI)
  • 1400×1400 screen size for consistent testing

Code snippets from actionpack-8.0.2/lib/action_dispatch/system_test_case.rb:

# frozen_string_literal: true

# :markup: markdown

gem "capybara", ">= 3.26"

require "capybara/dsl"
require "capybara/minitest"
require "action_controller"
require "action_dispatch/system_testing/driver"
require "action_dispatch/system_testing/browser"
require "action_dispatch/system_testing/server"
require "action_dispatch/system_testing/test_helpers/screenshot_helper"
require "action_dispatch/system_testing/test_helpers/setup_and_teardown"

module ActionDispatch
  # # System Testing
  #
  # System tests let you test applications in the browser. Because system tests
  # use a real browser experience, you can test all of your JavaScript easily from
  # your test suite.
  #
  # To create a system test in your application, extend your test class from
  # `ApplicationSystemTestCase`. System tests use Capybara as a base and allow you
  # to configure the settings through your `application_system_test_case.rb` file
  # that is generated with a new application or scaffold.
  #
  # Here is an example system test:
  #
  #     require "application_system_test_case"
  #
  #     class Users::CreateTest < ApplicationSystemTestCase
  #       test "adding a new user" do
  #         visit users_path
  #         click_on 'New User'
  #
  #         fill_in 'Name', with: 'Arya'
  #         click_on 'Create User'
  #
  #         assert_text 'Arya'
  #       end
  #     end
  #
  # When generating an application or scaffold, an
  # `application_system_test_case.rb` file will also be generated containing the
  # base class for system testing. This is where you can change the driver, add
  # Capybara settings, and other configuration for your system tests.
  #
  #     require "test_helper"
  #
  #     class ApplicationSystemTestCase < ActionDispatch::SystemTestCase
  #       driven_by :selenium, using: :chrome, screen_size: [1400, 1400]
  #     end
  #
  # By default, `ActionDispatch::SystemTestCase` is driven by the Selenium driver,
  # with the Chrome browser, and a browser size of 1400x1400.
  #
  # Changing the driver configuration options is easy. Let's say you want to use
  # the Firefox browser instead of Chrome. In your
  # `application_system_test_case.rb` file add the following:
  #
  #     require "test_helper"
  #
  #     class ApplicationSystemTestCase < ActionDispatch::SystemTestCase
  #       driven_by :selenium, using: :firefox
  #     end
  #
  # `driven_by` has a required argument for the driver name. The keyword arguments
  # are `:using` for the browser and `:screen_size` to change the size of the
  # browser screen. These two options are not applicable for headless drivers and
  # will be silently ignored if passed.
  #
  # Headless browsers such as headless Chrome and headless Firefox are also
  # supported. You can use these browsers by setting the `:using` argument to
  # `:headless_chrome` or `:headless_firefox`.
  #
  # To use a headless driver, like Cuprite, update your Gemfile to use Cuprite
  # instead of Selenium and then declare the driver name in the
  # `application_system_test_case.rb` file. In this case, you would leave out the
  # `:using` option because the driver is headless, but you can still use
  # `:screen_size` to change the size of the browser screen, also you can use
  # `:options` to pass options supported by the driver. Please refer to your
  # driver documentation to learn about supported options.
  #
  #     require "test_helper"
  #     require "capybara/cuprite"
  #
  #     class ApplicationSystemTestCase < ActionDispatch::SystemTestCase
  #       driven_by :cuprite, screen_size: [1400, 1400], options:
  #         { js_errors: true }
  #     end
  #
  # Some drivers require browser capabilities to be passed as a block instead of
  # through the `options` hash.
  #
  # As an example, if you want to add mobile emulation on chrome, you'll have to
  # create an instance of selenium's `Chrome::Options` object and add capabilities
  # with a block.
  #
  # The block will be passed an instance of `<Driver>::Options` where you can
  # define the capabilities you want. Please refer to your driver documentation to
  # learn about supported options.
  #
  #     class ApplicationSystemTestCase < ActionDispatch::SystemTestCase
  #       driven_by :selenium, using: :chrome, screen_size: [1024, 768] do |driver_option|
  #         driver_option.add_emulation(device_name: 'iPhone 6')
  #         driver_option.add_extension('path/to/chrome_extension.crx')
  #       end
  #     end
  #
  # Because `ActionDispatch::SystemTestCase` is a shim between Capybara and Rails,
  # any driver that is supported by Capybara is supported by system tests as long
  # as you include the required gems and files.
  class SystemTestCase < ActiveSupport::TestCase
    include Capybara::DSL
    include Capybara::Minitest::Assertions
    include SystemTesting::TestHelpers::SetupAndTeardown
    include SystemTesting::TestHelpers::ScreenshotHelper

    ..........

How They Work

System tests can:

  • Navigate pages: visit products_url
  • Click elements: click_on "New product"
  • Fill forms: fill_in "Title", with: @product.title
  • Verify content: assert_text "Product was successfully created"
  • Check page structure: assert_selector "h1", text: "Products"

Examples From Our Codebase

Basic navigation test (from products_test.rb):

test "visiting the index" do
  visit products_url
  assert_selector "h1", text: "Products"
end

Complex user workflow (from profile_test.rb):

def sign_in_user(user)
  visit new_session_path
  fill_in "Email", with: user.email
  fill_in "Password", with: "password"
  click_button "Log In"

  # Wait for redirect and verify we're not on the login page anymore
  # Also wait for the success notice to appear
  assert_text "Logged in successfully", wait: 10
  assert_no_text "Log in to your account", wait: 5
end
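
Building on that helper, a follow-up system test could then exercise the profile page (the path, fixture, and heading text below are assumptions, not copied from the suite):

test "signed-in user can open their profile" do
  sign_in_user(users(:one))
  visit profile_path
  assert_selector "h1", text: "Profile"
end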

Key Benefits

  1. End-to-end testing: Tests the complete user journey
  2. JavaScript testing: Can test dynamic frontend behavior
  3. Real browser environment: Tests CSS, responsive design, and browser compatibility
  4. User perspective: Validates the actual user experience

When to Use System Tests

  • Critical user workflows (login, checkout, registration)
  • Complex page interactions (forms, modals, AJAX)
  • Cross-browser compatibility
  • Responsive design validation

Our profile_test.rb is a great example – it tests the entire user authentication flow, profile page navigation, and various UI interactions that a real user would perform.

Happy Testing 🚀

📋 Software Architect Guide: Designs | Patterns | Coding | Architectures

📝 About

These are technical assessment platforms used by employers to screen candidates via timed online tests. Key characteristics include:

  • Mixed question formats: multiple-choice, fill-in-the-blank, coding tasks, and design scenarios.
  • Language & framework coverage: supports Ruby, Rails, React, Angular, AWS, SQL, DevOps and more.
  • Time-boxed: each test typically runs 30–60 minutes depending on role complexity, and each of the (roughly 12) sections is allocated 2–8 minutes.
  • Auto-grading + manual review: coding tasks are auto-checked against test cases; system-design answers may be manually reviewed.
  • Real-world scenarios: questions often mimic production challenges rather than classic white-board puzzles.

📋 Software Architect Test Structure

On these types of platforms, a Software Architect assessment usually combines:

  1. System & API Design
  2. Coding & Code Review
  3. Architecture Patterns
  4. DevOps & Cloud
  5. Database & Performance
  6. Front-end Integration

Below is a breakdown with sample questions you can practice.

🔧 1. System & API Design

1.1 Design a RESTful API

  • Prompt: Sketch endpoints (URL, HTTP verbs, request/response JSON) for a multi-tenant blogging platform.
  • What’s tested: URI naming, resource nesting, versioning, authentication (a small routing sketch follows below).

https://railsdrop.com/2025/07/02/software-architect-guide-designing-a-restful-api-for-a-multi-tenant-blogging-platform/
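
A minimal Rails routing sketch for such an API could look like the following (resource names, nesting depth, and versioning scheme are assumptions, not a model answer):

# config/routes.rb – one possible shape for a multi-tenant blogging API
namespace :api do
  namespace :v1 do
    resources :tenants, only: [] do
      resources :posts do
        resources :comments, only: %i[index create destroy]
      end
    end
  end
end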

1.2 High-Level Architecture Diagram

  • Prompt: Draw (in ASCII or pseudo-UML) a scalable order-processing system that integrates with payment gateways, queues, and microservices.
  • What’s tested: service decomposition, message brokering, fault tolerance.

Premium: https://railsdrop.com/2025/07/03/software-architect-guide-designing-a-scalable-order-processing-architecture-with-micro-services/


❓ Questions

Q) Why is layering an application important while executing a project?

https://railsdrop.com/2025/07/04/software-architect-guide-layering-an-app/

💻 2. Coding & Code Review

2.1 Ruby/Rails Coding Task

2.2 Code Quality Review


🏗️ 3. Architecture Patterns

3.1 Design Patterns Identification

3.2 Event-Driven vs Request-Driven


☁️ 4. DevOps & Cloud (AWS)

4.1 AWS CI/CD Pipeline

4.2 Server Scaling Strategy


🗄️ 5. Database & Performance (SQL)

5.1 Query Optimization

5.2 Schema Design

  • Prompt: Design a relational schema for a multi-language product catalog, ensuring easy localization and high read throughput.
  • What’s tested: normalization, partitioning/sharding strategies (a minimal migration sketch follows below).
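
One way to sketch that schema in a Rails migration (table and column names are assumptions; a translations table keeps localization normalized):

# One possible migration sketch for a localized product catalog
class CreateLocalizedCatalog < ActiveRecord::Migration[8.0]
  def change
    create_table :products do |t|
      t.decimal :price, precision: 10, scale: 2, null: false
      t.timestamps
    end

    # One row per product per locale keeps translations normalized
    create_table :product_translations do |t|
      t.references :product, null: false, foreign_key: true
      t.string :locale, null: false
      t.string :name, null: false
      t.text :description
      t.timestamps
    end

    add_index :product_translations, [ :product_id, :locale ], unique: true
  end
end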

🌐 6. Front-end Integration

6.1 React Component Design

  • Prompt: Build a stateless React component that fetches and paginates API data. Outline props, state hooks, and error handling.
  • What’s tested: Hooks, prop-drilling vs context, error boundaries.

6.2 Angular vs React Trade-offs

  • Prompt: Compare how you’d structure a large-scale dashboard in Angular vs React. Focus on modules, lazy loading, and state management.
  • What’s tested: NgModules, Redux/MobX/NgRx, bundle splitting.

6.3 React Native Considerations

  • Prompt: You don’t have React Native experience – explain how you’d architect code-sharing between web (React) and mobile (React Native).
  • What’s tested: Monorepo setups (e.g. Yarn Workspaces), shared business logic, native module stubs.

💡 Preparation Tips

  • Hands-on practice: Spin up mini-projects (e.g. Rails API with React front-end, deployed on AWS).
  • Mock interviews: Time yourself on similar platform problems – aim for clarity and brevity.
  • Review fundamentals: Brush up on design patterns, AWS core services, SQL indexing strategies, and front-end state management.
  • Document trade-offs: Always justify architecture decisions with pros/cons.

Good luck, Architect to Greatness! 🚀

Ruby Enumerable 📚 Module: Exciting Methods

Enumerable is a Ruby module that provides a collection of iteration methods, and it is a big part of what makes Ruby a great programming language.

# count elements; pass a block (or argument) to count only matching elements
[1,2,34].count
=> 3
[1,2,34].count(&:even?)
=> 2

# Group enumerable elements by the block return value. Returns a hash
[12,3,7,9].group_by {|x| x.even? ? 'even' : 'not_even'}
=> {"even" => [12], "not_even" => [3, 7, 9]}

# Partition into two groups. Returns a two-dimensional array
> [1,2,3,4,5].partition { |x| x.even? }
=> [[2, 4], [1, 3, 5]]

# Returns true if the block (or pattern argument) returns true for ANY element yielded to it
> [1,2,5,8].any? 4
=> false

> [1,2,5,8].any? { |x| x.even?}
=> true

# Returns true if the block returns true for ALL elements yielded to it
> [2,5,6,8].all? {|x| x.even?}
=> false

# Returns true if the block returns true for NO elements (the opposite of any?)
> [2,2,5,7].none? { |x| x.even?}
=> false

# Repeat ALL the elements n times
> [3,4,6].cycle(3).to_a
=> [3, 4, 6, 3, 4, 6, 3, 4, 6]

# select - SELECT all elements which pass the block
> [18,4,5,8,89].select {|x| x.even?}
=> [18, 4, 8]
> [18,4,5,8,89].select(&:even?)
=> [18, 4, 8]

# Like select, but it returns the first thing it finds
> [18,4,5,8,89].find {|x| x.even?}
=> 18

# Accumulates the result of the previous block value & passes it into the next one. Useful for adding up totals
> [4,5,8,90].inject(0) { |sum, x| sum + x }
=> 107
> [4,5,8,90].inject(:+)
=> 107
# Note that 'reduce' is an alias of 'inject'.

# Combines together two enumerable objects, so you can work with them in parallel. Useful for comparing elements & for generating hashes

> [2,4,56,8].zip [3,4]
=> [[2, 3], [4, 4], [56, nil], [8, nil]]

# Transforms every element of the enumerable object & returns the new version as an array
> [3,6,9].map { |x| x+89-27/2*23 }
=> [-207, -204, -201]

What is :+ in [4, 5, 8, 90].inject(:+) in Ruby?

🔣 :+ is a Symbol representing the + method.

In Ruby, every operator (like +, *, etc.) is actually a method under the hood.

  • inject takes a symbol (:+)
  • Ruby calls accumulator.send(:+, element) for each element
  • It’s equivalent to:
    (((4 + 5) + 8) + 90) => 107
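
A quick IRB check shows the symbol form and an explicit block are interchangeable:

[4, 5, 8, 90].inject(:+)                          # => 107
[4, 5, 8, 90].inject { |sum, n| sum.send(:+, n) } # => 107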

🔣 &: Explanation:

  • :even? is a symbol representing the method even?
  • &: is Ruby’s “to_proc” shorthand, converting a symbol into a block
  • So &:even? becomes { |n| n.even? } under the hood
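
The same shorthand can be unpacked by hand to see the Proc underneath:

even_check = :even?.to_proc            # the Proc that &:even? builds
even_check.call(4)                     # => true
[18, 4, 5, 8, 89].select(&even_check)  # => [18, 4, 8]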

Enjoy Enumerable 🚀

🔁 Ruby Programming Language Loops: A Case Study

Loops are an essential part of any programming language – they let developers execute code repeatedly without duplicating it. Ruby, being an elegant and expressive language, offers several ways to implement looping constructs. This blog post explores Ruby loops through a real-world case study and demonstrates best practices for choosing the right loop for the right situation.


🧠 Why Loops Matter in Ruby

In Ruby, loops help automate repetitive tasks and iterate over collections (arrays, hashes, ranges, etc.). Understanding the different loop types and their use cases will help you write more idiomatic, efficient, and readable Ruby code.

🧪 The Case Study: Daily Sales Report Generator

Imagine you’re building a Ruby application for a retail store (like our Design Studio) that generates a daily sales report. Your data source is an array of hashes, where each hash represents a sale with attributes like product name, category, quantity, and revenue.

sales = [
  { product: "T-shirt", category: "Apparel", quantity: 3, revenue: 900 },
  { product: "Laptop", category: "Electronics", quantity: 1, revenue: 50000 },
  { product: "Shoes", category: "Footwear", quantity: 2, revenue: 3000 },
  { product: "Headphones", category: "Electronics", quantity: 4, revenue: 12000 }
]

We’ll use this dataset to explore various loop types.

In Ruby:

  • Block-based loops like each, each_with_index, and loop do create a new scope, so variables defined inside them do not leak outside.
  • Keyword-based loops like while, until, and for do not create a new scope, so variables declared inside are accessible outside (demonstrated right after this list).
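
A quick console check of that scoping rule (variable names here are just for illustration):

# Block-based: the variable stays inside the block
[1, 2, 3].each { |n| block_local = n * 2 }
defined?(block_local)   # => nil – nothing leaked

# Keyword-based: the variable survives the loop
i = 0
while i < 3
  keyword_local = i * 2
  i += 1
end
keyword_local           # => 4 – still visible after the loop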

🔄 each Loop – The Idiomatic Ruby Way

sales.each do |sale|
  puts "Sold #{sale[:quantity]} #{sale[:product]}(s) for ₹#{sale[:revenue]}"
end
Sold 3 T-shirt(s) for ₹900
Sold 1 Laptop(s) for ₹50000
Sold 2 Shoes(s) for ₹3000
Sold 4 Headphones(s) for ₹12000
=>
[{product: "T-shirt", category: "Apparel", quantity: 3, revenue: 900},
 {product: "Laptop", category: "Electronics", quantity: 1, revenue: 50000},
 {product: "Shoes", category: "Footwear", quantity: 2, revenue: 3000},
 {product: "Headphones", category: "Electronics", quantity: 4, revenue: 12000}]

Why use each:

  • Readable and expressive
  • Doesn’t return an index (cleaner when you don’t need one)
  • Scope-safe: variables declared inside the block do not leak outside
  • Preferred for iterating over collections in Ruby

🔢 each_with_index – When You Need the Index

sales.each_with_index do |sale, index|
  puts "#{index + 1}. #{sale[:product]}: ₹#{sale[:revenue]}"
end
1. T-shirt: ₹900
2. Laptop: ₹50000
3. Shoes: ₹3000
4. Headphones: ₹12000
=>
[{product: "T-shirt", category: "Apparel", quantity: 3, revenue: 900},
 {product: "Laptop", category: "Electronics", quantity: 1, revenue: 50000},
 {product: "Shoes", category: "Footwear", quantity: 2, revenue: 3000},
 {product: "Headphones", category: "Electronics", quantity: 4, revenue: 12000}]

Use case: Numbered lists or positional logic.

  • Scope-safe like each

🧮 for Loop – Familiar but Rare in Idiomatic Ruby

for sale in sales
  puts "Product: #{sale[:product]}, Revenue: ₹#{sale[:revenue]}"
end
Product: T-shirt, Revenue: ₹900
Product: Laptop, Revenue: ₹50000
Product: Shoes, Revenue: ₹3000
Product: Headphones, Revenue: ₹12000
=>
[{product: "T-shirt", category: "Apparel", quantity: 3, revenue: 900},
 {product: "Laptop", category: "Electronics", quantity: 1, revenue: 50000},
 {product: "Shoes", category: "Footwear", quantity: 2, revenue: 3000},
 {product: "Headphones", category: "Electronics", quantity: 4, revenue: 12000}]

Caution:

  • ❌ Not scope-safe: Variables declared inside remain accessible outside the loop.
  • Though valid, for loops are generally avoided in idiomatic Ruby

🪜 while Loop – Controlled Repetition

index = 0
while index < sales.size
  puts sales[index][:product]
  index += 1
end
T-shirt
Laptop
Shoes
Headphones
=> nil

Use case: When you’re manually controlling iteration.

  • ❌ Not scope-safe: variables declared within the loop (like index) remain accessible outside the loop.

🔁 until Loop – The Inverse of while

index = 0
until index == sales.size
  puts sales[index][:category]
  index += 1
end
Apparel
Electronics
Footwear
Electronics
=> nil

Use case: When you want to loop until a condition is true.

Similar to while, variables persist outside the loop (not block scoped).

🧨 loop do with break – Infinite Loop with Manual Exit

index = 0
loop do
  break if index >= sales.size
  puts sales[index][:quantity]
  index += 1
end
3
1
2
4
=> nil

Use case: Custom control with explicit break condition.

Scope-safe: like other block-based loops, variables inside loop do blocks do not leak unless declared outside.

🧹 Bonus: Filtering with Loops vs Enumerable

#--- Loop-based filter
electronics_sales = []
sales.each do |sale|
  electronics_sales << sale if sale[:category] == "Electronics"
end
=>
[{product: "T-shirt", category: "Apparel", quantity: 3, revenue: 900},
 {product: "Laptop", category: "Electronics", quantity: 1, revenue: 50000},
 {product: "Shoes", category: "Footwear", quantity: 2, revenue: 3000},
 {product: "Headphones", category: "Electronics", quantity: 4, revenue: 12000}]

#--- Idiomatic Ruby filter
> electronics_sales = sales.select { |sale| sale[:category] == "Electronics" }
=>
[{product: "Laptop", category: "Electronics", quantity: 1, revenue: 50000},
...
> electronics_sales
=>
[{product: "Laptop", category: "Electronics", quantity: 1, revenue: 50000},
 {product: "Headphones", category: "Electronics", quantity: 4, revenue: 12000}]

Takeaway: Prefer Enumerable methods like select, map, reduce when working with collections. Loops are useful, but Ruby’s functional approach often leads to cleaner code.


✅ Summary Table: Ruby Loops at a Glance

| Loop Type | Scope-safe | Index Access | Best Use Case |
| --- | --- | --- | --- |
| each | ✅ | ❌ | Simple iteration |
| each_with_index | ✅ | ✅ | Need both element and index |
| for | ❌ | ✅ | Familiar syntax, but avoid in idiomatic Ruby |
| while | ❌ | ✅ (manual) | When the condition is external |
| until | ❌ | ✅ (manual) | Inverted while, clearer for some logic |
| loop do + break | ✅ | ✅ (manual) | Controlled infinite loop |

🏁 Conclusion

Ruby offers a wide range of looping constructs. This case study demonstrates how to choose the right one based on context. For most collection traversals, each and other Enumerable methods are preferred. Use while, until, or loop when finer control over the iteration process is required.

Loop mindfully, and let Ruby’s elegance guide your code.

Enjoy Ruby 🚀

Hotwire 〰 in the Rails 8 World – And How My New Rails App Puts It to Work 🚀

When you create a brand-new Rails 8 project today, you automatically get a super-powerful front-end toolbox called Hotwire.

Because it is baked into the framework, it can feel a little magical (“everything just works!”). This post demystifies Hotwire, shows how its two core libraries – Turbo and Stimulus – fit together, and then walks through the places where the design_studio codebase is already using them.


1. What is Hotwire?

Hotwire (HTML Over The Wire) is a set of conventions + JavaScript libraries that lets you build modern, reactive UIs without writing (much) custom JS or a separate SPA. Instead of pushing JSON to the browser and letting a JS framework patch the DOM, the server sends HTML fragments over WebSockets, SSE, or normal HTTP responses and the browser swaps them in efficiently.

Hotwire is made of three parts:

  1. Turbo – the engine that intercepts normal links/forms, keeps your page state alive, and swaps HTML frames or streams into the DOM at 60fps.
  2. Stimulus – a “sprinkle-on” JavaScript framework for the little interactive bits that still need JS (dropdowns, clipboard buttons, etc.).
  3. (Optional) Strada – native-bridge helpers for mobile apps; not relevant to our web-only project.

Because Rails 8 ships with both turbo-rails and stimulus-rails gems, simply creating a project wires everything up.


2. How Turbo & Stimulus complement each other

  • Turbo keeps pages fresh – it handles navigation (Turbo Drive), partial page updates via <turbo-frame> (Turbo Frames), and real-time broadcasts with <turbo-stream> (Turbo Streams).
  • Stimulus adds behaviour – tiny ES-module controllers attach to DOM elements and react to events/data attributes. Importantly, Stimulus plays nicely with Turbo’s DOM-swapping because controllers automatically disconnect/re-connect when elements are replaced.

Think of Turbo as the transport layer for HTML and Stimulus as the behaviour layer for the small pieces that still need JavaScript logic.

# Server logs – requests are still identified as HTML; navigation is handled by Turbo Drive

Started GET "/products/15" for ::1 at 2025-06-24 00:47:03 +0530
Processing by ProductsController#show as HTML
  Parameters: {"id" => "15"}
.......

Started GET "/products?category=women" for ::1 at 2025-06-24 00:50:38 +0530
Processing by ProductsController#index as HTML
  Parameters: {"category" => "women"}
.......

JavaScript and CSS files that load in our HTML head:

    <link rel="stylesheet" href="/assets/actiontext-e646701d.css" data-turbo-track="reload" />
<link rel="stylesheet" href="/assets/application-8b441ae0.css" data-turbo-track="reload" />
<link rel="stylesheet" href="/assets/tailwind-8bbb1409.css" data-turbo-track="reload" />
    <script type="importmap" data-turbo-track="reload">{
  "imports": {
    "application": "/assets/application-3da76259.js",
    "@hotwired/turbo-rails": "/assets/turbo.min-3a2e143f.js",
    "@hotwired/stimulus": "/assets/stimulus.min-4b1e420e.js",
    "@hotwired/stimulus-loading": "/assets/stimulus-loading-1fc53fe7.js",
    "trix": "/assets/trix-4b540cb5.js",
    "@rails/actiontext": "/assets/actiontext.esm-f1c04d34.js",
    "controllers/application": "/assets/controllers/application-3affb389.js",
    "controllers/hello_controller": "/assets/controllers/hello_controller-708796bd.js",
    "controllers": "/assets/controllers/index-ee64e1f1.js"
  }
}</script>
<link rel="modulepreload" href="/assets/application-3da76259.js">
<link rel="modulepreload" href="/assets/turbo.min-3a2e143f.js">
<link rel="modulepreload" href="/assets/stimulus.min-4b1e420e.js">
<link rel="modulepreload" href="/assets/stimulus-loading-1fc53fe7.js">
<link rel="modulepreload" href="/assets/trix-4b540cb5.js">
<link rel="modulepreload" href="/assets/actiontext.esm-f1c04d34.js">
<link rel="modulepreload" href="/assets/controllers/application-3affb389.js">
<link rel="modulepreload" href="/assets/controllers/hello_controller-708796bd.js">
<link rel="modulepreload" href="/assets/controllers/index-ee64e1f1.js">
<script type="module">import "application"</script>
    <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.0.0-beta3/css/all.min.css">

3. Where Hotwire lives in design_studio

Because Rails 8 scaffolded most of this for us, the integration is scattered across a few key spots:

3.1 Gems & ES-modules are pinned

# config/importmap.rb

pin "@hotwired/turbo-rails",  to: "turbo.min.js"
pin "@hotwired/stimulus",     to: "stimulus.min.js"
pin "@hotwired/stimulus-loading", to: "stimulus-loading.js"
pin_all_from "app/javascript/controllers", under: "controllers"

The Gemfile pulls the Ruby wrappers:

gem "turbo-rails"
gem "stimulus-rails"

3.2 Global JavaScript entry point

# application.js 

import "@hotwired/turbo-rails"
import "controllers"   // <-- auto-registers everything in app/javascript/controllers

As soon as that file is imported (it’s linked in application.html.erb via
javascript_include_tag "application", "data-turbo-track": "reload"
), Turbo intercepts every link & form on the site.

3.3 Stimulus controllers

The framework-generated controller registry lives at app/javascript/controllers/index.js; the only custom controller so far is the hello-world example:

connect() {
  this.element.textContent = "Hello World!"
}

You can drop new controllers into app/javascript/controllers/anything_controller.js and they will be auto-loaded thanks to the pin_all_from line above.

pin_all_from "app/javascript/controllers", under: "controllers"

3.4 Turbo Streams in practice – removing a product image

The most concrete Hotwire interaction in design_studio today is the “Delete image” action in the products feature:

  1. Controller action responds to turbo_stream:
respond_to do |format|
  ...
  format.turbo_stream   # <-- returns delete_image.turbo_stream.erb
end
  2. Stream template sent back:
# app/views/products/delete_image.turbo_stream.erb

<turbo-stream action="remove" target="product-image-<%= @image_id %>"></turbo-stream>
  3. Turbo receives the <turbo-stream> tag, finds the element with that id, and removes it from the DOM – no page reload, no hand-written JS.
# app/views/products/show.html.erb
....
<%= link_to @product, 
    data: { turbo_method: :delete, turbo_confirm: "Are you sure you want to delete this product?" }, 
    class: "px-4 py-2 bg-red-500 text-white rounded-lg hover:bg-red-600 transition-colors duration-200" do %>
    <i class="fas fa-trash mr-2"></i>Delete Product
<% end %>
....

3.5 “Free” Turbo benefits you might not notice

Because Turbo Drive is on globally:

  • Standard links look instantaneous (HTML diffing & cache).
  • Form submissions automatically request .turbo_stream when you ask for format.turbo_stream in a controller.
  • Redirects keep scroll position/head tags in sync.

All of this happens without any code in the repo – Rails 8 + Turbo does the heavy lifting.


4. Extending Hotwire in the future

  1. More Turbo Frames – Wrap parts of pages in <turbo-frame id="cart"> to make only the cart refresh on “Add to cart”.
  2. Broadcasting – Hook Product model changes to turbo_stream_from channels so that all users see live stock updates (a minimal sketch follows below).
  3. Stimulus components – Replace jQuery snippets with small controllers (dropdowns, modals, copy-to-clipboard, etc.).

Because everything is wired already (Importmap, controller autoloading, Cable), adding these features is mostly a matter of creating the HTML/ERB templates and a bit of Ruby.
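
As a concrete taste of point 2, a minimal broadcasting sketch might look like this (the stream name and view placement are assumptions; this code is not in design_studio yet):

# app/models/product.rb – broadcast create/update/destroy to one shared stream
class Product < ApplicationRecord
  broadcasts_to ->(_product) { "products" }
end

<%# app/views/products/index.html.erb – subscribe the page to that stream %>
<%= turbo_stream_from "products" %>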


Questions

1. Is Rails 8 still working with the real DOM?

  • Yes, the browser is always working with the real DOM – nothing is virtualized (unlike React’s virtual DOM).
  • Turbo intercepts navigation events (links, form submits). Instead of letting the browser perform a “hard” navigation, it fetches the HTML with fetch() in the background, parses the response into a hidden document fragment, then swaps specific pieces (usually the whole <body> or a <turbo-frame> target) into the live DOM.
  • Because Turbo only swaps the changed chunks, it keeps the rest of the page alive (JS state, scroll position, playing videos, etc.) and fires lifecycle events so Stimulus controllers disconnect/re-connect cleanly.

“Stimulus itself is a tiny wrapper around MutationObserver. It attaches controller instances to DOM elements and tears them down automatically when Turbo replaces those elements – so both libraries cooperate rather than fighting the DOM.”


2. How does the HTML from Turbo Drive get into the DOM without a full reload?

Step-by-step for a normal link click:

  1. The turbo-rails JS (loaded via import "@hotwired/turbo-rails") cancels the browser’s default navigation.
  2. Turbo sends an AJAX request (actually fetch()) for the new URL, requesting full HTML.
  3. The response text is parsed into an off-screen DOMParser document.
  4. Turbo compares the <head> tags, updates <title> and any changed assets, then replaces the <body> of the current page with the new one (or, for <turbo-frame>, just that frame).
  5. It pushes a history.pushState entry so Back/Forward work, and fires events like turbo:load.

Because no real navigation happened, the browser doesn’t clear JS state, WebSocket connections, or CSS; it just swaps some DOM nodes – visually it feels instantaneous.


3. What does pin mean in config/importmap.rb?

Rails 8 ships with Importmap – a way to use normal ES-module import statements without a bundler. pin is simply a mapping declaration:

pin "@hotwired/turbo-rails", to: "turbo.min.js"
pin "@hotwired/stimulus",    to: "stimulus.min.js"

Meaning:

  • When the browser sees import "@hotwired/turbo-rails", fetch …/assets/turbo.min.js
  • When it sees import “controllers”, look at 
    pin_all_from "app/javascript/controllers" 
    which expands into individual mappings for every controller file.

Think of pin as the importmap equivalent of a require statement in a bundler config – just declarative and handled at runtime by the browser. That’s all there is to it: real DOM, no page reloads, and a lightweight way to load JS modules without Webpack.

Take-aways

  • Hotwire is not one big library; it is a philosophy (+ Turbo + Stimulus) that keeps most of your UI in Ruby & ERB but still feels snappy and modern.
  • Rails 8 scaffolds everything, so you may not even realize you’re using it – but you are!
  • design_studio already benefits from Hotwire’s defaults (fast navigation) and uses Turbo Streams for dynamic image deletion. The plumbing is in place to expand this pattern across the app with minimal effort.

Happy hot-wiring! 🚀

🌀 Event Loops Demystified: The Secret Behind High-Performance Applications

Ever wondered how a single Redis server can handle thousands of simultaneous connections without breaking a sweat? Or how Node.js can serve millions of requests with just one thread? The magic lies in the event loop – a deceptively simple concept that powers everything from your web browser to Netflix’s streaming infrastructure.

🎯 What is an Event Loop?

An event loop is like a super-efficient restaurant manager who never stops moving:

👨‍💼 Event Loop Manager:
"Order ready at table 3!" → Handle it
"New customer at door!" → Seat them
"Payment needed at table 7!" → Process it
"Kitchen needs supplies!" → Order them

Traditional approach (blocking):

# One waiter per table (one thread per request)
waiter1.take_order_from_table_1  # Waiter1 waits here...
waiter2.take_order_from_table_2  # Waiter2 waits here...  
waiter3.take_order_from_table_3  # Waiter3 waits here...

Event loop approach (non-blocking):

# One super-waiter handling everything
loop do
  event = get_next_event()
  case event.type
  when :order_ready then serve_food(event.table)
  when :new_customer then seat_customer(event.customer)
  when :payment then process_payment(event.table)
  end
end

🏗️ The Anatomy of an Event Loop

🔄 The Core Loop

// Simplified event loop (Node.js style)
while (true) {
  // 1. Check for completed I/O operations
  let events = pollForEvents();

  // 2. Execute callbacks for completed operations
  events.forEach(event => {
    event.callback();
  });

  // 3. Execute any scheduled timers
  runTimers();

  // 4. If no more work, sleep until next event
  if (noMoreWork()) {
    sleep();
  }
}

📋 Event Queue System

┌─ Timer Queue ──────┐    ┌─ I/O Queue ────────┐    ┌─ Check Queue ──────┐
│ setTimeout()       │    │ File operations    │    │ setImmediate()     │
│ setInterval()      │    │ Network requests   │    │ process.nextTick() │
└────────────────────┘    │ Database queries   │    └────────────────────┘
                          └────────────────────┘
                                     ↓
                          ┌─────────────────────┐
                          │   Event Loop Core   │
                          │  "What's next?"     │
                          └─────────────────────┘

⚡ Why Event Loops Are Lightning Fast

🚫 No Thread Context Switching

# Traditional multi-threading overhead
Thread 1: [████████] Context Switch [████████] Context Switch
Thread 2:           [████████] Context Switch [████████]
Thread 3:                     [████████] Context Switch [████████]
# CPU wastes time switching between threads!

# Event loop efficiency
Single Thread: [████████████████████████████████████████████████]
# No context switching = pure work!

🎯 Perfect for I/O-Heavy Applications

# What most web applications do:
def handle_request
  user = database.find_user(id)     # Wait 10ms
  posts = api.fetch_posts(user)     # Wait 50ms  
  cache.store(posts)                # Wait 5ms
  render_response(posts)            # Work 1ms
end
# Total: 66ms (65ms waiting, 1ms working!)

Event loop transforms this:

# Non-blocking version
def handle_request
  database.find_user(id) do |user|           # Queue callback
    api.fetch_posts(user) do |posts|         # Queue callback  
      cache.store(posts) do                  # Queue callback
        render_response(posts)               # Finally execute
      end
    end
  end
  # Returns immediately! Event loop handles the rest
end

☁️ Netflix: Cloud Architecture

Netflix Cloud Architecture

High-Level Design of the Netflix System:

Microservices Architecture of Netflix:

Netflix Billing Migration to AWS

Application Data Caching Using SSDs

Read: System Design Netflix | A Complete Architecture

Reference: Netflix Blog, GeeksforGeeks Blog

🔧 Event Loop in Action: Redis Case Study

📡 How Redis Handles 10,000 Connections

// Simplified Redis event loop (C code)
void aeMain(aeEventLoop *eventLoop) {
    while (!eventLoop->stop) {
        // 1. Wait for file descriptor events (epoll/kqueue)
        int numEvents = aeApiPoll(eventLoop, tvp);

        // 2. Process each ready event
        for (int i = 0; i < numEvents; i++) {
            aeFileEvent *fe = &eventLoop->events[fired[i].fd];

            if (fired[i].mask & AE_READABLE) {
                fe->rfileProc(eventLoop, fired[i].fd, fe->clientData, fired[i].mask);
            }
            if (fired[i].mask & AE_WRITABLE) {
                fe->wfileProc(eventLoop, fired[i].fd, fe->clientData, fired[i].mask);
            }
        }
    }
}

🎭 The BRPOP Magic Revealed

# When you call redis.brpop()
redis.brpop("queue:default", timeout: 30)

# Redis internally does:
# 1. Check if list has items → Return immediately if yes
# 2. If empty → Register client as "blocked"
# 3. Event loop continues serving other clients
# 4. When item added → Wake up blocked client
# 5. Return the item

# Your Ruby thread blocks, but Redis keeps serving others!

🌍 Event Loops in the Wild

🟢 Node.js: JavaScript Everywhere

// Single thread handling thousands of requests
const server = http.createServer((req, res) => {
  // This doesn't block other requests!
  database.query('SELECT * FROM users', (err, result) => {
    res.json(result);
  });
});

🐍 Python: AsyncIO

import asyncio

async def handle_request():
    # Non-blocking database call
    user = await database.fetch_user(user_id)
    # Event loop serves other requests while waiting
    posts = await api.fetch_posts(user.id)  
    return render_template('profile.html', user=user, posts=posts)

# Run multiple requests concurrently
asyncio.run(handle_many_requests())

💎 Ruby: EventMachine & Async

# EventMachine (older)
EventMachine.run do
  EventMachine::HttpRequest.new('http://api.example.com').get.callback do |response|
    puts "Got response: #{response.response}"
  end
end

# Modern Ruby (Async gem)
require 'async'
require 'net/http'

Async do |task|
  response = task.async { Net::HTTP.get('example.com', '/api/data') }
  puts response.wait
end

Read more (premium content): RailsDrop – Ruby’s Async Revolution

Message from the post: “💪 Ruby is not holding back against the JS spread across the WORLD! 💪 Rails is NOT DYING in 2025!! IT’S EVOLVING!! 💪 We, the Ruby/Rails community, will fire big in the future! 🕺”


⚙️ The Dark Side: Event Loop Limitations

🐌 CPU-Intensive Tasks Kill Performance

// This blocks EVERYTHING!
function badCpuTask() {
  let result = 0;
  for (let i = 0; i < 10000000000; i++) {  // 10 billion iterations
    result += i;
  }
  return result;
}

// While this runs, NO other requests get served!

🩹 The Solution: Worker Threads

// Offload heavy work to separate thread
const { Worker } = require('worker_threads');

function goodCpuTask() {
  return new Promise((resolve) => {
    const worker = new Worker('./cpu-intensive-task.js');
    worker.postMessage({ data: 'process this' });
    worker.on('message', resolve);
  });
}

🎯 When to Use Event Loops

✅ Perfect For:

  • Web servers (lots of I/O, little CPU)
  • API gateways (routing requests)
  • Real-time applications (chat, games)
  • Database proxies (Redis, MongoDB)
  • File processing (reading/writing files)

❌ Avoid For:

  • Heavy calculations (image processing)
  • Machine learning (training models)
  • Cryptographic operations (Bitcoin mining)
  • Scientific computing (physics simulations)

🚀 Performance Comparison

Handling 10,000 Concurrent Connections:

Traditional Threading:
├── Memory: ~2GB (200KB per thread)
├── Context switches: ~100,000/second
├── CPU overhead: ~30%
└── Max connections: ~5,000

Event Loop:
├── Memory: ~50MB (single thread + buffers)
├── Context switches: 0
├── CPU overhead: ~5%
└── Max connections: ~100,000+

🔮 The Future: Event Loops Everywhere

Modern frameworks are embracing event-driven architecture:

  • Rails 7+: Hotwire + ActionCable
  • Django: ASGI + async views
  • Go: Goroutines (event-loop-like)
  • Rust: Tokio async runtime
  • Java: Project Loom virtual threads

💡 Key Takeaways

  1. Event loops excel at I/O-heavy workloads – perfect for web applications
  2. They use a single thread – no context switching overhead
  3. Non-blocking operations – one slow request doesn’t block others
  4. CPU-intensive tasks are kryptonite – offload to worker threads
  5. Modern web development is moving toward async patterns – learn them now!

The next time you see redis.brpop() blocking your Ruby thread while Redis happily serves thousands of other clients, you’ll know exactly what’s happening under the hood! 🎉


🔌 The Complete Guide to Sockets: How Your Code Really Talks to the World

Ever wondered what happens when Sidekiq calls redis.brpop() and your thread magically “blocks” until a job appears? The answer lies in one of computing’s most fundamental concepts: sockets. Let’s dive deep into this invisible infrastructure that powers everything from your Redis connections to Netflix streaming.

🚀 What is a Socket?

A socket is essentially a communication endpoint – think of it like a “phone number” that programs can use to talk to each other.

Application A  ↔  Socket  ↔  Network  ↔  Socket  ↔  Application B

Simple analogy: If applications are people, sockets are like phone numbers that let them call each other!

🎯 The Purpose of Sockets

📡 Inter-Process Communication (IPC)

# Two Ruby programs talking via sockets
# Program 1 (Server)
require 'socket'
server = TCPServer.new(3000)
client_socket = server.accept
client_socket.puts "Hello from server!"

# Program 2 (Client)  
client = TCPSocket.new('localhost', 3000)
message = client.gets
puts message  # "Hello from server!"

🌐 Network Communication

# Talk to Redis (what Sidekiq does)
require 'socket'
redis_socket = TCPSocket.new('localhost', 6379)
redis_socket.write("PING\r\n")
response = redis_socket.gets  # "+PONG"

🏠 Are Sockets Only for Networking?

NO! Sockets work for both local and network communication:

🌐 Network Sockets (TCP/UDP)

# Talk across the internet
require 'socket'
socket = TCPSocket.new('google.com', 80)
socket.write("GET / HTTP/1.1\r\nHost: google.com\r\n\r\n")

🔗 Local Sockets (Unix Domain Sockets)

# Talk between programs on same machine
# Faster than network sockets - no network stack overhead
socket = UNIXSocket.new('/tmp/my_app.sock')

Real example: Redis can use Unix sockets for local connections:

# Network socket (goes through TCP/IP stack)
redis = Redis.new(host: 'localhost', port: 6379)

# Unix socket (direct OS communication)
redis = Redis.new(path: '/tmp/redis.sock')  # Faster!

🔢 What Are Ports?

Ports are like apartment numbers – they help identify which specific application should receive the data.

IP Address: 192.168.1.100 (Building address)
Port: 6379                (Apartment number)

🎯 Why This Matters

Same computer running:
- Web server on port 80
- Redis on port 6379  
- SSH on port 22
- Your app on port 3000

When data arrives at 192.168.1.100:6379
→ OS knows to send it to Redis

🏢 Why Do We Need So Many Ports?

Think of a computer like a massive apartment building:

🔧 Multiple Services

# Different services need different "apartments"
$ netstat -ln
tcp 0.0.0.0:22    SSH server
tcp 0.0.0.0:80    Web server  
tcp 0.0.0.0:443   HTTPS server
tcp 0.0.0.0:3306  MySQL
tcp 0.0.0.0:5432  PostgreSQL
tcp 0.0.0.0:6379  Redis
tcp 0.0.0.0:27017 MongoDB

🔄 Multiple Connections to Same Service

Redis server (port 6379) can handle:
- Connection 1: Sidekiq worker
- Connection 2: Rails app  
- Connection 3: Redis CLI
- Connection 4: Monitoring tool

Each gets a unique "channel" but all use port 6379

📊 Port Ranges

0-1023:    Reserved (HTTP=80, SSH=22, etc.)
1024-49151: Registered applications  
49152-65535: Dynamic/Private (temporary connections)

⚙️ How Sockets Work Internally

🛠️ Socket Creation

# What happens when you do this:
socket = TCPSocket.new('localhost', 6379)

Behind the scenes:

// OS system calls
socket_fd = socket(AF_INET, SOCK_STREAM, 0)  // Create socket
connect(socket_fd, server_address, address_len)  // Connect

📋 The OS Socket Table

Process ID: 1234 (Your Ruby app)
File Descriptors:
  0: stdin
  1: stdout  
  2: stderr
  3: socket to Redis (localhost:6379)
  4: socket to PostgreSQL (localhost:5432)
  5: listening socket (port 3000)

🔮 Kernel-Level Magic

Application: socket.write("PING")
     ↓
Ruby: calls OS write() system call
     ↓
Kernel: adds to socket send buffer
     ↓
Network Stack: TCP → IP → Ethernet
     ↓
Network Card: sends packets over wire

🌈 Types of Sockets

📦 TCP Sockets (Reliable)

# Like registered mail - guaranteed delivery
server = TCPServer.new(3000)
client = TCPSocket.new('localhost', 3000)

# Data arrives in order, no loss
client.write("Message 1")
client.write("Message 2") 
# Server receives exactly: "Message 1", "Message 2"

⚡ UDP Sockets (Fast but unreliable)

# Like shouting across a crowded room
require 'socket'

# Sender
udp = UDPSocket.new
udp.send("Hello!", 0, 'localhost', 3000)

# Receiver  
udp = UDPSocket.new
udp.bind('localhost', 3000)
data = udp.recv(1024)  # Might not arrive!

🏠 Unix Domain Sockets (Local)

# Super fast local communication
File.delete('/tmp/test.sock') if File.exist?('/tmp/test.sock')

# Server
server = UNIXServer.new('/tmp/test.sock')
# Client
client = UNIXSocket.new('/tmp/test.sock')

🔄 Socket Lifecycle

🤝 TCP Connection Dance

# 1. Server: "I'm listening on port 3000"
server = TCPServer.new(3000)

# 2. Client: "I want to connect to port 3000"  
client = TCPSocket.new('localhost', 3000)

# 3. Server: "I accept your connection"
connection = server.accept

# 4. Both can now send/receive data
connection.puts "Hello!"
client.puts "Hi back!"

# 5. Clean shutdown
client.close
connection.close
server.close

๐Ÿ”„ Under the Hood (TCP Handshake)

Client                    Server
  |                         |
  |---- SYN packet -------->| (I want to connect)
  |<-- SYN-ACK packet ------| (OK, let's connect)  
  |---- ACK packet -------->| (Connection established!)
  |                         |
  |<---- Data exchange ---->|
  |                         |

๐Ÿ—๏ธ OS-Level Socket Implementation

๐Ÿ“ File Descriptor Magic

socket = TCPSocket.new('localhost', 6379)
puts socket.fileno  # e.g., 7

# This socket is just file descriptor #7!
# You can even use it with raw system calls

๐Ÿ—‚๏ธ Kernel Socket Buffers

Application Buffer  โ†โ†’  Kernel Send Buffer  โ†โ†’  Network
                   โ†โ†’  Kernel Recv Buffer  โ†โ†’

What happens on socket.write:

socket.write("BRPOP queue 0")
# 1. Ruby copies data to kernel send buffer
# 2. write() returns immediately  
# 3. Kernel sends data in background
# 4. TCP handles retransmission, etc.

What happens on socket.read:

data = socket.read  
# 1. Check kernel receive buffer
# 2. If empty, BLOCK thread until data arrives
# 3. Copy data from kernel to Ruby
# 4. Return to your program
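
Here is a small Ruby sketch that makes the blocking behavior visible by contrasting a non-blocking read (which raises when the kernel receive buffer is empty) with a normal read (which simply waits). Port 3000 is just an example:

require 'socket'

server = TCPServer.new(3000)
client = TCPSocket.new('localhost', 3000)

begin
  # The kernel receive buffer is empty, so a non-blocking read raises instead of waiting
  client.read_nonblock(1024)
rescue IO::WaitReadable
  puts "Nothing in the kernel buffer yet; a plain read would block here"
end

conn = server.accept
conn.write("PONG")

puts client.readpartial(1024)  # => "PONG" (data copied from the kernel buffer into Ruby)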

๐ŸŽฏ Real-World Example: Sidekiq + Redis

# When Sidekiq does this:
redis.brpop("queue:default", timeout: 2)

# Here's the socket journey:
# 1. Ruby opens TCP socket to localhost:6379
socket = TCPSocket.new('localhost', 6379)

# 2. Format the Redis command (RESP array with 3 elements: BRPOP queue:default 2)
command = "*3\r\n$5\r\nBRPOP\r\n$13\r\nqueue:default\r\n$1\r\n2\r\n"

# 3. Write to socket (goes to kernel buffer)
socket.write(command)

# 4. Thread blocks reading response
response = socket.read  # BLOCKS HERE until Redis responds

# 5. Redis eventually sends back data
# 6. Kernel receives packets, assembles them
# 7. socket.read returns with the job data

๐Ÿš€ Socket Performance Tips

โ™ป๏ธ Socket Reuse
# Bad: New socket for each request
100.times do
  socket = TCPSocket.new('localhost', 6379)
  socket.write("PING\r\n")
  socket.read
  socket.close  # Expensive!
end

# Good: Reuse socket
socket = TCPSocket.new('localhost', 6379)
100.times do
  socket.write("PING\r\n")  
  socket.read
end
socket.close

๐ŸŠ Connection Pooling

# What Redis gem/Sidekiq does internally
class ConnectionPool
  def initialize(size: 5)
    @pool = size.times.map { TCPSocket.new('localhost', 6379) }
  end

  def with_connection(&block)
    socket = @pool.pop
    yield(socket)
  ensure
    @pool.push(socket)
  end
end
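
Usage of the simplified pool above might look like this; real pools (such as the connection_pool gem that Sidekiq uses) add thread-safety, checkout timeouts, and lazy connection creation on top of the same idea:

pool = ConnectionPool.new(size: 5)

pool.with_connection do |socket|
  socket.write("PING\r\n")
  puts socket.gets   # => "+PONG"
end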

๐ŸŽช Fun Socket Facts

๐Ÿ“„ Everything is a File

# On Linux/Mac, sockets appear as files!
$ lsof -p 1234
ruby 1234 user 3u sock 0,9 0t0 TCP localhost:3000->localhost:6379

๐Ÿšง Socket Limits

# Your OS has limits
$ ulimit -n
1024  # Max file descriptors (including sockets)

# Web servers need thousands of sockets
# That's why they increase this limit!

๐Ÿ“Š Socket States

$ netstat -an | grep 6379
tcp4 0 0 127.0.0.1.6379 127.0.0.1.50123 ESTABLISHED
tcp4 0 0 127.0.0.1.6379 127.0.0.1.50124 TIME_WAIT
tcp4 0 0 *.6379         *.*            LISTEN

๐ŸŽฏ Key Takeaways

  1. ๐Ÿ”Œ Sockets = Communication endpoints between programs
  2. ๐Ÿ  Ports = Apartment numbers for routing data to the right app
  3. ๐ŸŒ Not just networking – also local inter-process communication
  4. โš™๏ธ OS manages everything – kernel buffers, network stack, blocking
  5. ๐Ÿ“ File descriptors – sockets are just special files to the OS
  6. ๐ŸŠ Connection pooling is crucial for performance
  7. ๐Ÿ”’ BRPOP blocking happens at the socket read level

๐ŸŒŸ Conclusion

The beauty of sockets is their elegant simplicity: when Sidekiq calls redis.brpop(), it’s using the same socket primitives that have powered network communication for decades!

From your Redis connection to Netflix streaming to Zoom calls, sockets are the fundamental building blocks that make modern distributed systems possible. Understanding how they work gives you insight into everything from why connection pooling matters to how blocking I/O actually works at the system level.

The next time you see a thread “blocking” on network I/O, you’ll know exactly what’s happening: a simple socket read operation, leveraging decades of OS optimization to efficiently wait for data without wasting a single CPU cycle. Pretty amazing for something so foundational! ๐Ÿš€


โšก Inside Redis: How Your Favorite In-Memory Database Actually Works

You’ve seen how Sidekiq connects to Redis via sockets, but what happens when Redis receives that BRPOP command? Let’s pull back the curtain on one of the most elegant pieces of software ever written and discover why Redis is so blazingly fast.

๐ŸŽฏ What Makes Redis Special?

Redis isn’t just another database – it’s a data structure server. While most databases make you think in tables and rows, Redis lets you work directly with lists, sets, hashes, and more. It’s like having super-powered variables that persist across program restarts!

# Traditional database thinking
User.where(active: true).pluck(:id)

# Redis thinking  
redis.smembers("active_users")  # A set of active user IDs

๐Ÿ—๏ธ Redis Architecture Overview

Redis has a deceptively simple architecture that’s incredibly powerful:

โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”
โ”‚          Client Connections     โ”‚ โ† Your Ruby app connects here
โ”œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ค
โ”‚         Command Processing      โ”‚ โ† Parses your BRPOP command
โ”œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ค
โ”‚         Event Loop (epoll)      โ”‚ โ† Handles thousands of connections
โ”œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ค
โ”‚        Data Structure Engine    โ”‚ โ† The magic happens here
โ”œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ค
โ”‚         Memory Management       โ”‚ โ† Keeps everything in RAM
โ”œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ค
โ”‚        Persistence Layer        โ”‚ โ† Optional disk storage
โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜

๐Ÿ”ฅ The Single-Threaded Magic

Here’s Redis’s secret sauce: it’s mostly single-threaded!

// Simplified Redis main loop
while (server_running) {
    // 1. Wait for new network events (ready sockets)
    int num_events = epoll_wait(epollfd, events, max_events, timeout);

    // 2. Process each ready client
    for (int i = 0; i < num_events; i++) {
        if (events[i].type == READ_EVENT) {
            process_client_command(events[i].client);
        }
    }

    // 3. Handle time-based events (expiry, etc.)
    process_time_events();
}

Why single-threaded is brilliant:

  • โœ… No locks or synchronization needed
  • โœ… No thread context-switching overhead
  • โœ… Predictable performance
  • โœ… Simple to reason about

๐Ÿง  Data Structure Deep Dive

๐Ÿ“ Redis Lists (What Sidekiq Uses)

When you do redis.brpop("queue:default"), you’re working with a Redis list:

// Redis list structure (simplified)
typedef struct list {
    listNode *head;      // First item
    listNode *tail;      // Last item  
    long length;         // How many items
    // ... other fields
} list;

typedef struct listNode {
    struct listNode *prev;
    struct listNode *next;
    void *value;         // Your job data
} listNode;

BRPOP implementation inside Redis:

// Simplified BRPOP command handler
void brpopCommand(client *c) {
    // Try to pop from each list
    for (int i = 1; i < c->argc - 1; i++) {
        robj *key = c->argv[i];
        robj *list = lookupKeyRead(c->db, key);

        if (list && listTypeLength(list) > 0) {
            // Found item! Pop and return immediately
            robj *value = listTypePop(list, LIST_TAIL);
            addReplyMultiBulkLen(c, 2);
            addReplyBulk(c, key);
            addReplyBulk(c, value);
            return;
        }
    }

    // No items found - BLOCK the client
    blockForKeys(c, c->argv + 1, c->argc - 2, timeout);
}
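
From the Ruby side you can watch both branches of that handler: an immediate pop when the list has data, and a block that gives up after the timeout. This assumes a local Redis and the redis gem:

require 'redis'

redis = Redis.new(host: 'localhost', port: 6379)

# Branch 1: the list already has an item, so BRPOP returns immediately
redis.lpush("queue:default", "job-1")
p redis.brpop("queue:default", timeout: 2)   # => ["queue:default", "job-1"]

# Branch 2: the list is empty, so the client is blocked until the timeout
p redis.brpop("queue:default", timeout: 2)   # => nil (after roughly 2 seconds)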

๐Ÿ”‘ Hash Tables (Super Fast Lookups)

Redis uses hash tables for O(1) key lookups:

// Redis hash table
typedef struct dict {
    dictEntry **table;       // Array of buckets
    unsigned long size;      // Size of table
    unsigned long sizemask;  // size - 1 (for fast modulo)
    unsigned long used;      // Number of entries
} dict;

// Finding a key
unsigned int hash = dictGenHashFunction(key);
unsigned int idx = hash & dict->sizemask;
dictEntry *entry = dict->table[idx];

This is why Redis is so fast – finding any key is O(1)!
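
For intuition, here is the same bucket-selection idea translated into a few lines of plain Ruby, using Ruby's built-in String#hash in place of dictGenHashFunction. This is only a sketch of the concept, not Redis code:

SIZE = 16                          # dict sizes are powers of two in Redis
SIZEMASK = SIZE - 1
table = Array.new(SIZE) { [] }     # buckets; arrays stand in for the chained dictEntry lists

def bucket_index(key, sizemask)
  key.hash & sizemask              # hash & sizemask == hash % size when size is a power of two
end

# Insert: hash the key once, land in exactly one bucket
table[bucket_index("queue:default", SIZEMASK)] << ["queue:default", ["job1"]]

# Lookup: the same single bucket, no matter how many keys exist => O(1)
_key, value = table[bucket_index("queue:default", SIZEMASK)]
                .find { |(k, _)| k == "queue:default" }
p value  # => ["job1"]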

โšก The Event Loop: Handling Thousands of Connections

Redis uses epoll (Linux) or kqueue (macOS) to efficiently handle many connections:

// Simplified event loop
int epollfd = epoll_create(1024);

// Add client socket to epoll
struct epoll_event ev;
ev.events = EPOLLIN;  // Watch for incoming data
ev.data.ptr = client;
epoll_ctl(epollfd, EPOLL_CTL_ADD, client->fd, &ev);

// Main loop
while (1) {
    int nfds = epoll_wait(epollfd, events, MAX_EVENTS, timeout);

    for (int i = 0; i < nfds; i++) {
        client *c = (client*)events[i].data.ptr;

        if (events[i].events & EPOLLIN) {
            // Data available to read
            read_client_command(c);
            process_command(c);
        }
    }
}

Why this is amazing:

Traditional approach: 1 thread per connection
- 1000 connections = 1000 threads
- Each thread uses ~8MB memory
- Context switching overhead

Redis approach: 1 thread for all connections  
- 1000 connections = 1 thread
- Minimal memory overhead
- No context switching between connections

๐Ÿ”’ How BRPOP Blocking Actually Works

Here’s the magic behind Sidekiq’s blocking behavior:

๐ŸŽญ Client Blocking State

// When no data available for BRPOP
typedef struct blockingState {
    dict *keys;           // Keys we're waiting for
    time_t timeout;       // When to give up
    int numreplicas;      // Replication stuff
    // ... other fields
} blockingState;

// Block a client
void blockClient(client *c, int btype) {
    c->flags |= CLIENT_BLOCKED;
    c->btype = btype;
    c->bstate = zmalloc(sizeof(blockingState));

    // Add to server's list of blocked clients
    listAddNodeTail(server.clients, c);
}

โฐ Timeout Handling

// Check for timed out clients (simplified)
void handleBlockedClientsTimeout(void) {
    time_t now = time(NULL);

    listIter li;
    listNode *ln;
    listRewind(server.clients, &li);

    while ((ln = listNext(&li)) != NULL) {
        client *c = listNodeValue(ln);

        if (c->flags & CLIENT_BLOCKED && 
            c->bstate->timeout != 0 && 
            c->bstate->timeout < now) {

            // Timeout! Send null response
            addReplyNullArray(c);
            unblockClient(c);
        }
    }
}

๐Ÿš€ Unblocking When Data Arrives

// When someone does LPUSH to a list
void signalKeyAsReady(redisDb *db, robj *key) {
    readyList *rl = zmalloc(sizeof(*rl));
    rl->key = key;
    rl->db = db;

    // Add to ready list
    listAddNodeTail(server.ready_keys, rl);
}

// Process ready keys and unblock clients
void handleClientsBlockedOnKeys(void) {
    while (listLength(server.ready_keys) != 0) {
        listNode *ln = listFirst(server.ready_keys);
        readyList *rl = listNodeValue(ln);

        // Find blocked clients waiting for this key
        list *clients = dictFetchValue(rl->db->blocking_keys, rl->key);

        if (clients) {
            // Unblock first client and serve the key
            client *receiver = listNodeValue(listFirst(clients));
            serveClientBlockedOnList(receiver, rl->key, rl->db);
        }

        listDelNode(server.ready_keys, ln);
    }
}
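
The same ready-key dance is easy to observe from Ruby with two connections: one parks itself in BRPOP, the other pushes a value and wakes it up. Each thread gets its own connection so the LPUSH isn't stuck behind the blocked BRPOP (assumes a local Redis):

require 'redis'

waiter = Thread.new do
  # This connection is parked inside BRPOP (blockForKeys above) until the key becomes ready
  Redis.new.brpop("queue:default", timeout: 10)
end

sleep 1                                   # give the waiter time to block
Redis.new.lpush("queue:default", "job")   # signalKeyAsReady -> the waiter gets unblocked

p waiter.value   # => ["queue:default", "job"]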

๐Ÿ’พ Memory Management: Keeping It All in RAM

๐Ÿงฎ Memory Layout

// Every Redis object has this header
typedef struct redisObject {
    unsigned type:4;        // STRING, LIST, SET, etc.
    unsigned encoding:4;    // How it's stored internally  
    unsigned lru:24;        // LRU eviction info
    int refcount;          // Reference counting
    void *ptr;             // Actual data
} robj;

๐Ÿ—‚๏ธ Smart Encodings

Redis automatically chooses the most efficient representation:

// Small lists use ziplist (compressed)
if (listLength(list) < server.list_max_ziplist_entries &&
    listTotalSize(list) < server.list_max_ziplist_value) {

    // Use compressed ziplist
    listConvert(list, OBJ_ENCODING_ZIPLIST);
} else {
    // Use normal linked list
    listConvert(list, OBJ_ENCODING_LINKEDLIST);  
}

Example memory optimization:

Small list: ["job1", "job2", "job3"]
Normal encoding: 3 pointers + 3 allocations = ~200 bytes
Ziplist encoding: 1 allocation = ~50 bytes (75% savings!)
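
You can watch Redis pick an encoding from the client side with the OBJECT ENCODING command, assuming your redis gem version exposes the generic call method for arbitrary commands (recent versions do). Note that the reported names depend on the Redis version; newer releases use listpack/quicklist instead of ziplist/linkedlist:

require 'redis'

redis = Redis.new

redis.rpush("small_list", ["job1", "job2", "job3"])
puts redis.call("OBJECT", "ENCODING", "small_list")  # => "listpack" (or "ziplist" on older Redis)

1000.times { |i| redis.rpush("big_list", "job#{i}") }
puts redis.call("OBJECT", "ENCODING", "big_list")    # => "quicklist" (or "linkedlist" on older Redis)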

๐Ÿงน Memory Reclamation

// Redis memory management
void freeMemoryIfNeeded(void) {
    while (server.memory_usage > server.maxmemory) {
        // Try to free memory by:
        // 1. Expiring keys
        // 2. Evicting LRU keys  
        // 3. Running garbage collection

        if (freeOneObjectFromFreelist() == C_OK) continue;
        if (expireRandomExpiredKey() == C_OK) continue;
        if (evictExpiredKeys() == C_OK) continue;

        // Last resort: evict LRU key
        evictLRUKey();
    }
}

๐Ÿ’ฟ Persistence: Making Memory Durable

๐Ÿ“ธ RDB Snapshots

// Save entire dataset to disk
int rdbSave(char *filename) {
    FILE *fp = fopen(filename, "w");

    // Iterate through all databases
    for (int dbid = 0; dbid < server.dbnum; dbid++) {
        redisDb *db = server.db + dbid;
        dict *d = db->dict;

        // Save each key-value pair
        dictIterator *di = dictGetSafeIterator(d);
        dictEntry *de;

        while ((de = dictNext(di)) != NULL) {
            sds key = dictGetKey(de);
            robj *val = dictGetVal(de);

            // Write key and value to file
            rdbSaveStringObject(fp, key);
            rdbSaveObject(fp, val);
        }
    }

    fclose(fp);
    return C_OK;
}

๐Ÿ“ AOF (Append Only File)

// Log every write command
void feedAppendOnlyFile(struct redisCommand *cmd, int dictid, 
                       robj **argv, int argc) {
    sds buf = sdsnew("");

    // Format as Redis protocol
    buf = sdscatprintf(buf, "*%d\r\n", argc);
    for (int i = 0; i < argc; i++) {
        buf = sdscatprintf(buf, "$%lu\r\n", 
                          (unsigned long)sdslen(argv[i]->ptr));
        buf = sdscatsds(buf, argv[i]->ptr);
        buf = sdscatlen(buf, "\r\n", 2);
    }

    // Write to AOF file
    write(server.aof_fd, buf, sdslen(buf));
    sdsfree(buf);
}

๐Ÿš€ Performance Secrets

๐ŸŽฏ Why Redis is So Fast

  1. ๐Ÿง  Everything in memory – No disk I/O during normal operations
  2. ๐Ÿ”„ Single-threaded – No locks or context switching
  3. โšก Optimized data structures – Custom implementations for each type
  4. ๐ŸŒ Efficient networking – epoll/kqueue for handling connections
  5. ๐Ÿ“ฆ Smart encoding – Automatic optimization based on data size

๐Ÿ“Š Real Performance Numbers

Operation           Operations/second
SET                 100,000+
GET                 100,000+  
LPUSH               100,000+
BRPOP (no block)    100,000+
BRPOP (blocking)    Limited by job arrival rate
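
If you want to sanity-check numbers of this order on your own machine, here is a rough Ruby measurement over a single un-pipelined connection; actual throughput depends heavily on hardware, pipelining, and client overhead:

require 'redis'
require 'benchmark'

redis = Redis.new
n = 10_000

elapsed = Benchmark.realtime { n.times { |i| redis.set("bench:#{i}", "x") } }
puts "~#{(n / elapsed).round} SET operations/second from this single client"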

๐Ÿ”ง Configuration for Speed

# redis.conf optimizations
tcp-nodelay yes              # Disable Nagle's algorithm
tcp-keepalive 60            # Keep connections alive
timeout 0                   # Never time out idle clients

# Memory optimizations  
maxmemory-policy allkeys-lru  # Evict least recently used
save ""                       # Disable snapshotting for speed

๐ŸŒ Redis in Production

๐Ÿ—๏ธ Scaling Patterns

Master-Slave Replication:

Master (writes) โ”€โ”
                 โ”œโ”€โ†’ Slave 1 (reads)
                 โ”œโ”€โ†’ Slave 2 (reads)
                 โ””โ”€โ†’ Slave 3 (reads)

Redis Cluster (sharding):

Client โ”€โ†’ Hash Key โ”€โ†’ Determine Slot โ”€โ†’ Route to Correct Node

Slots 0-5460:    Node A  
Slots 5461-10922: Node B
Slots 10923-16383: Node C

๐Ÿ” Monitoring Redis

# Real-time stats
redis-cli info

# Monitor all commands
redis-cli monitor

# Check slow queries
redis-cli slowlog get 10

# Memory usage by key pattern
redis-cli --bigkeys

๐ŸŽฏ Redis vs Alternatives

๐Ÿ“Š When to Choose Redis

โœ… Need sub-millisecond latency
โœ… Working with simple data structures  
โœ… Caching frequently accessed data
โœ… Session storage
โœ… Real-time analytics
โœ… Message queues (like Sidekiq!)

โŒ Need complex queries (use PostgreSQL)
โŒ Need ACID transactions across keys
โŒ Dataset larger than available RAM
โŒ Need strong consistency guarantees
๐ŸฅŠ Redis vs Memcached
Redis:
+ Rich data types (lists, sets, hashes)
+ Persistence options
+ Pub/sub messaging
+ Transactions
- Higher memory usage

Memcached:  
+ Lower memory overhead
+ Simpler codebase
- Only key-value storage
- No persistence

๐Ÿ”ฎ Modern Redis Features

๐ŸŒŠ Redis Streams

# Modern alternative to lists for job queues
redis.xadd("jobs", {"type" => "email", "user_id" => 123})
redis.xreadgroup("workers", "worker-1", "jobs", ">")

๐Ÿ“ก Redis Modules

RedisJSON:     Native JSON support
RedisSearch:   Full-text search
RedisGraph:    Graph database
RedisAI:       Machine learning
TimeSeries:    Time-series data

โšก Redis 7 Features

- Multi-part AOF files
- Config rewriting improvements  
- Better memory introspection
- Enhanced security (ACLs)
- Sharded pub/sub

๐ŸŽฏ Key Takeaways

  1. ๐Ÿ”ฅ Single-threaded simplicity enables incredible performance
  2. ๐Ÿง  In-memory architecture eliminates I/O bottlenecks
  3. โšก Custom data structures are optimized for specific use cases
  4. ๐ŸŒ Event-driven networking handles thousands of connections efficiently
  5. ๐Ÿ”’ Blocking operations like BRPOP are elegant and efficient
  6. ๐Ÿ’พ Smart memory management keeps everything fast and compact
  7. ๐Ÿ“ˆ Horizontal scaling is possible with clustering and replication

๐ŸŒŸ Conclusion

Redis is a masterclass in software design – taking a simple concept (in-memory data structures) and optimizing every single aspect to perfection. When Sidekiq calls BRPOP, it’s leveraging decades of systems programming expertise distilled into one of the most elegant and performant pieces of software ever written.

The next time you see Redis handling thousands of operations per second while using minimal resources, you’ll understand the beautiful engineering that makes it possible. From hash tables to event loops to memory management, every component works in harmony to deliver the performance that makes modern applications possible.

Redis proves that sometimes the best solutions are the simplest ones, executed flawlessly! ๐Ÿš€


Automating ๐Ÿฆพ LeetCode ๐Ÿ‘จ๐Ÿฝโ€๐Ÿ’ป Solution Testing with GitHub Actions: A Ruby Developer’s Journey

As a Ruby developer working through LeetCode problems, I kept running into a common challenge: how do I make sure all my solutions keep working as I refactor and optimize them? With multiple algorithms per problem and dozens of solution files, manual testing was becoming a bottleneck.

Today, I’ll share how I set up a comprehensive GitHub Actions CI/CD pipeline that automatically tests all my LeetCode solutions, providing instant feedback and maintaining code quality.

๐Ÿค” The Problem: Testing Chaos

My LeetCode repository structure looked like this:

leetcode/
โ”œโ”€โ”€ two_sum/
โ”‚   โ”œโ”€โ”€ two_sum_1.rb
โ”‚   โ”œโ”€โ”€ two_sum_2.rb
โ”‚   โ”œโ”€โ”€ test_two_sum_1.rb
โ”‚   โ””โ”€โ”€ test_two_sum_2.rb
โ”œโ”€โ”€ longest_substring/
โ”‚   โ”œโ”€โ”€ longest_substring.rb
โ”‚   โ””โ”€โ”€ test_longest_substring.rb
โ”œโ”€โ”€ buy_sell_stock/
โ”‚   โ””โ”€โ”€ ... more solutions
โ””โ”€โ”€ README.md

The Pain Points:

  • Manual Testing: Running ruby test_*.rb for each folder manually
  • Forgotten Tests: Easy to forget testing after small changes
  • Inconsistent Quality: Some solutions had tests, others didn’t
  • Refactoring Fear: Scared to optimize algorithms without breaking existing functionality

๐ŸŽฏ The Decision: One Action vs. Multiple Actions

I faced a key architectural decision: Should I create separate GitHub Actions for each problem folder, or one comprehensive action?

Why I Chose a Single Action:

โœ… Advantages:

  • Maintenance Simplicity: One workflow file vs. 6+ separate ones
  • Resource Efficiency: Fewer GitHub Actions minutes consumed
  • Complete Validation: Ensures all solutions work together
  • Cleaner CI History: Single status check per push/PR
  • Auto-Discovery: Automatically finds new test folders

โŒ Rejected Alternative (Separate Actions):

  • More complex maintenance
  • Higher resource usage
  • Fragmented test results
  • More configuration overhead

๐Ÿ› ๏ธ The Solution: Intelligent Test Discovery

Here’s the GitHub Actions workflow that changed everything:

name: Run All LeetCode Tests

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v4

    - name: Set up Ruby
      uses: ruby/setup-ruby@v1
      with:
        ruby-version: '3.2'
        bundler-cache: true

    - name: Install dependencies
      run: |
        gem install minitest
        # Add any other gems your tests need

    - name: Run all tests
      run: |
        echo "๐Ÿงช Running LeetCode Solution Tests..."

        # Colors for output
        GREEN='\033[0;32m'
        RED='\033[0;31m'
        YELLOW='\033[1;33m'
        NC='\033[0m' # No Color

        # Track results
        total_folders=0
        passed_folders=0
        failed_folders=()

        # Find all folders with test files
        for folder in */; do
          folder_name=${folder%/}

          # Skip if no test files in folder
          if ! ls "$folder"test_*.rb 1> /dev/null 2>&1; then
            continue
          fi

          total_folders=$((total_folders + 1))
          echo -e "\n${YELLOW}๐Ÿ“ Testing folder: $folder_name${NC}"

          # Run tests for this folder
          cd "$folder"
          test_failed=false

          for test_file in test_*.rb; do
            if [ -f "$test_file" ]; then
              echo "  ๐Ÿ” Running $test_file..."
              if ruby "$test_file"; then
                echo -e "  ${GREEN}โœ… $test_file passed${NC}"
              else
                echo -e "  ${RED}โŒ $test_file failed${NC}"
                test_failed=true
              fi
            fi
          done

          if [ "$test_failed" = false ]; then
            echo -e "${GREEN}โœ… All tests passed in $folder_name${NC}"
            passed_folders=$((passed_folders + 1))
          else
            echo -e "${RED}โŒ Some tests failed in $folder_name${NC}"
            failed_folders+=("$folder_name")
          fi

          cd ..
        done

        # Summary
        echo -e "\n๐ŸŽฏ ${YELLOW}TEST SUMMARY${NC}"
        echo "๐Ÿ“Š Total folders tested: $total_folders"
        echo -e "โœ… ${GREEN}Passed: $passed_folders${NC}"
        echo -e "โŒ ${RED}Failed: $((total_folders - passed_folders))${NC}"

        if [ ${#failed_folders[@]} -gt 0 ]; then
          echo -e "\n${RED}Failed folders:${NC}"
          for folder in "${failed_folders[@]}"; do
            echo "  - $folder"
          done
          exit 1
        else
          echo -e "\n${GREEN}๐ŸŽ‰ All tests passed successfully!${NC}"
        fi

๐Ÿ” What Makes This Special?

๐ŸŽฏ Intelligent Auto-Discovery

The script automatically finds folders containing test_*.rb files:

# Skip if no test files in folder
if ! ls "$folder"test_*.rb 1> /dev/null 2>&1; then
  continue
fi

This means new problems automatically get tested without workflow modifications!

๐ŸŽจ Beautiful Output

Color-coded results make it easy to scan CI logs:

๐Ÿงช Running LeetCode Solution Tests...

๐Ÿ“ Testing folder: two_sum
  ๐Ÿ” Running test_two_sum_1.rb...
  โœ… test_two_sum_1.rb passed
  ๐Ÿ” Running test_two_sum_2.rb...
  โœ… test_two_sum_2.rb passed
โœ… All tests passed in two_sum

๐Ÿ“ Testing folder: longest_substring
  ๐Ÿ” Running test_longest_substring.rb...
  โŒ test_longest_substring.rb failed
โŒ Some tests failed in longest_substring

๐ŸŽฏ TEST SUMMARY
๐Ÿ“Š Total folders tested: 6
โœ… Passed: 5
โŒ Failed: 1

Failed folders:
  - longest_substring

๐Ÿš€ Smart Failure Handling

  • Individual Test Tracking: Each test file result is tracked separately
  • Folder-Level Reporting: Clear summary per problem folder
  • Build Failure: CI fails if ANY test fails, maintaining quality
  • Detailed Reporting: Shows exactly which folders/tests failed

๐Ÿ“Š The Impact: Metrics That Matter

โฑ๏ธ Time Savings

  • Before: 5+ minutes manually testing after each change
  • After: 30 seconds of automated feedback
  • Result: 90% time reduction in testing workflow

๐Ÿ”’ Quality Improvements

  • Before: ~60% of solutions had tests
  • After: 100% test coverage (CI enforces it)
  • Result: Zero regression bugs since implementation

๐ŸŽฏ Developer Experience

  • Confidence: Can refactor aggressively without fear
  • Speed: Instant feedback on pull requests
  • Focus: More time solving problems, less time on manual testing

๐ŸŽ“ Key Learnings & Best Practices

โœ… What Worked Well

๐Ÿ”ง Shell Scripting in GitHub Actions

Using bash arrays and functions made the logic clean and maintainable:

failed_folders=()
failed_folders+=("$folder_name")

๐ŸŽจ Color-Coded Output

Made CI logs actually readable:

GREEN='\033[0;32m'
RED='\033[0;31m'
echo -e "${GREEN}โœ… Test passed${NC}"

๐Ÿ“ Flexible File Structure

Supporting multiple test files per folder without hardcoding names:

for test_file in test_*.rb; do
  # Process each test file
done

โš ๏ธ Lessons Learned

๐Ÿ› Edge Case Handling

Always check if files exist before processing:

if [ -f "$test_file" ]; then
  # Safe to process
fi

๐ŸŽฏ Exit Code Management

Proper failure propagation ensures CI accurately reports status:

if [ ${#failed_folders[@]} -gt 0 ]; then
  exit 1  # Fail the build
fi

๐Ÿš€ Getting Started: Implementation Guide

๐Ÿ“‹ Step 1: Repository Structure

Organize your code with consistent naming:

your_repo/
โ”œโ”€โ”€ .github/workflows/test.yml  # The workflow file
โ”œโ”€โ”€ problem_name/
โ”‚   โ”œโ”€โ”€ solution.rb             # Your solution
โ”‚   โ””โ”€โ”€ test_solution.rb        # Your tests
โ””โ”€โ”€ another_problem/
    โ”œโ”€โ”€ solution_v1.rb
    โ”œโ”€โ”€ solution_v2.rb
    โ”œโ”€โ”€ test_solution_v1.rb
    โ””โ”€โ”€ test_solution_v2.rb

๐Ÿ“‹ Step 2: Test File Convention

Use the test_*.rb naming pattern consistently. This enables auto-discovery.
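
For reference, a test file following that convention can be a plain Minitest script. Here is a hypothetical two_sum/test_two_sum_1.rb; the two_sum method name and the expected return values are assumptions about how the solution file is written:

# two_sum/test_two_sum_1.rb
require 'minitest/autorun'
require_relative 'two_sum_1'   # assumed to define a top-level two_sum(nums, target) method

class TestTwoSum1 < Minitest::Test
  def test_finds_indices_of_the_two_numbers
    assert_equal [0, 1], two_sum([2, 7, 11, 15], 9)
  end

  def test_handles_duplicate_values
    assert_equal [0, 1], two_sum([3, 3], 6)
  end
end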

๐Ÿ“‹ Step 3: Workflow Customization

Modify the workflow for your needs:

  • Ruby version: Change ruby-version: '3.2' to your preferred version
  • Dependencies: Add gems in the “Install dependencies” step
  • Triggers: Adjust branch names in the on: section

๐Ÿ“‹ Step 4: README Badge

Add a status badge to your README:

![Tests](https://github.com/abhilashak/leetcode/workflows/Run%20All%20LeetCode%20Tests/badge.svg)

๐ŸŽฏ What is the Status Badge?

The status badge is a visual indicator that shows the current status of your GitHub Actions workflow. It’s a small image that displays whether your latest tests are passing or failing.

๐ŸŽจ What It Looks Like:

โœ… When tests pass: a green “passing” badge
โŒ When tests fail: a red “failing” badge
๐Ÿ”„ When tests are running: a yellow “in progress” badge

๐Ÿ“‹ What Information It Shows:

  1. Workflow Name: “Run All LeetCode Tests” (or whatever you named it)
  2. Current Status:
  • Green โœ…: All tests passed
  • Red โŒ: Some tests failed
  • Yellow ๐Ÿ”„: Tests are currently running
  3. Real-time Updates: Automatically updates when you push code

๐Ÿ”— The Badge URL Breakdown:

![Tests](https://github.com/abhilashak/leetcode/workflows/Run%20All%20LeetCode%20Tests/badge.svg)

  • abhilashak = My GitHub username
  • leetcode = My repository name
  • Run%20All%20LeetCode%20Tests = The workflow name (URL-encoded)
  • badge.svg = GitHub’s badge endpoint

๐ŸŽฏ Why It’s Valuable:

๐Ÿ” For ME:

  • Quick Status Check: See at a glance if your code is working
  • Historical Reference: Know the last known good state
  • Confidence: Green badge = safe to deploy/share

๐Ÿ‘ฅ For Others:

  • Trust Indicator: Shows your code is tested and working
  • Professional Presentation: Demonstrates good development practices

๐Ÿ“Š For Contributors:

  • Pull Request Status: See if their changes break anything
  • Fork Confidence: Know the original repo is well-maintained
  • Quality Signal: Indicates a serious, well-tested project

๐ŸŽ–๏ธ Professional Benefits:

When someone visits your repository, they immediately see:

  • โœ… “This developer writes tests”
  • โœ… “This code is actively maintained”
  • โœ… “This project follows best practices”
  • โœ… “I can trust this code quality”

It’s essentially a quality seal for your repository! ๐ŸŽ–๏ธ

๐ŸŽฏ Results & Future Improvements

๐ŸŽ‰ Current Success Metrics

  • 100% automated testing across all solution folders
  • Zero manual testing required for routine changes
  • Instant feedback on code quality
  • Professional presentation with status badges

๐Ÿ”ฎ Future Enhancements

๐Ÿ“Š Performance Tracking

Planning to add execution time measurement:

start_time=$(date +%s%N)
ruby "$test_file"
end_time=$(date +%s%N)
execution_time=$(( (end_time - start_time) / 1000000 ))
echo "  โฑ๏ธ  Execution time: ${execution_time}ms"

๐ŸŽฏ Test Coverage Reports

Considering integration with Ruby coverage tools:

- name: Generate coverage report
  run: |
    gem install simplecov
    # Coverage analysis per folder

๐Ÿ“ˆ Algorithm Performance Comparison

Auto-comparing different solution approaches:

# Compare solution_v1.rb vs solution_v2.rb performance

๐Ÿ’ก Conclusion: Why This Matters

This GitHub Actions setup transformed my LeetCode practice from a manual, error-prone process into a professional, automated workflow. The key benefits:

๐ŸŽฏ For Individual Practice

  • Confidence: Refactor without fear
  • Speed: Instant validation of changes
  • Quality: Consistent test coverage

๐ŸŽฏ For Team Collaboration

  • Standards: Enforced testing practices
  • Reviews: Clear CI status on pull requests
  • Documentation: Professional presentation

๐ŸŽฏ For Career Development

  • Portfolio: Demonstrates DevOps knowledge
  • Best Practices: Shows understanding of CI/CD
  • Professionalism: Industry-standard development workflow

๐Ÿš€ Take Action

Ready to implement this in your own LeetCode repository? Here’s what to do next:

  1. Copy the workflow file into .github/workflows/test.yml
  2. Ensure consistent naming with test_*.rb pattern
  3. Push to GitHub and watch the magic happen
  4. Add the status badge to your README
  5. Start coding fearlessly with automated testing backup!

Check out my GitHub repo: https://github.com/abhilashak/leetcode/actions

The best part? Once set up, this system maintains itself. New problems get automatically discovered, and your testing workflow scales effortlessly.

Happy coding, and may your CI always be green! ๐ŸŸข

Have you implemented automated testing for your coding practice? Share your experience in the comments below!

๐Ÿท๏ธ Tags

#GitHubActions #Ruby #LeetCode #CI/CD #DevOps #AutomatedTesting #CodingPractice