๐Ÿ”Œ The Complete Guide to Sockets: How Your Code Really Talks to the World

Ever wondered what happens when Sidekiq calls redis.brpop() and your thread magically “blocks” until a job appears? The answer lies in one of computing’s most fundamental concepts: sockets. Let’s dive deep into this invisible infrastructure that powers everything from your Redis connections to Netflix streaming.

๐Ÿš€ What is a Socket?

A socket is essentially a communication endpoint – think of it like a “phone number” that programs can use to talk to each other.

Application A  ←→  Socket  ←→  Network  ←→  Socket  ←→  Application B

Simple analogy: If applications are people, sockets are like phone numbers that let them call each other!

๐ŸŽฏ The Purpose of Sockets

๐Ÿ“ก Inter-Process Communication (IPC)

# Two Ruby programs talking via sockets
# Program 1 (Server)
require 'socket'
server = TCPServer.new(3000)
client_socket = server.accept
client_socket.puts "Hello from server!"

# Program 2 (Client)  
client = TCPSocket.new('localhost', 3000)
message = client.gets
puts message  # "Hello from server!"

๐ŸŒ Network Communication

# Talk to Redis (what Sidekiq does)
require 'socket'
redis_socket = TCPSocket.new('localhost', 6379)
redis_socket.write("PING\r\n")
response = redis_socket.readpartial(1024)  # => "+PONG\r\n"

๐Ÿ  Are Sockets Only for Networking?

NO! Sockets work for both local and network communication:

๐ŸŒ Network Sockets (TCP/UDP)

# Talk across the internet
require 'socket'
socket = TCPSocket.new('google.com', 80)
socket.write("GET / HTTP/1.1\r\nHost: google.com\r\n\r\n")

๐Ÿ”— Local Sockets (Unix Domain Sockets)

# Talk between programs on same machine
# Faster than network sockets - no network stack overhead
socket = UNIXSocket.new('/tmp/my_app.sock')

Real example: Redis can use Unix sockets for local connections:

# Network socket (goes through TCP/IP stack)
redis = Redis.new(host: 'localhost', port: 6379)

# Unix socket (direct OS communication)
redis = Redis.new(path: '/tmp/redis.sock')  # Faster!

๐Ÿ”ข What Are Ports?

Ports are like apartment numbers – they help identify which specific application should receive the data.

IP Address: 192.168.1.100 (Building address)
Port: 6379                (Apartment number)

๐ŸŽฏ Why This Matters

Same computer running:
- Web server on port 80
- Redis on port 6379  
- SSH on port 22
- Your app on port 3000

When data arrives at 192.168.1.100:6379
→ the OS knows to send it to Redis

๐Ÿข Why Do We Need So Many Ports?

Think of a computer like a massive apartment building:

๐Ÿ”ง Multiple Services

# Different services need different "apartments"
$ netstat -ln
tcp 0.0.0.0:22    SSH server
tcp 0.0.0.0:80    Web server  
tcp 0.0.0.0:443   HTTPS server
tcp 0.0.0.0:3306  MySQL
tcp 0.0.0.0:5432  PostgreSQL
tcp 0.0.0.0:6379  Redis
tcp 0.0.0.0:27017 MongoDB

๐Ÿ”„ Multiple Connections to Same Service

Redis server (port 6379) can handle:
- Connection 1: Sidekiq worker
- Connection 2: Rails app  
- Connection 3: Redis CLI
- Connection 4: Monitoring tool

Each connection is a separate socket, identified by its unique 4-tuple (client IP, client port, server IP, server port), yet they all share server port 6379

๐Ÿ“Š Port Ranges

0-1023:    Reserved (HTTP=80, SSH=22, etc.)
1024-49151: Registered applications  
49152-65535: Dynamic/Private (temporary connections)
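
A quick way to see this from Ruby (a small sketch, assuming a local Redis is listening on 6379): the server sits on its registered port, while your client is assigned a temporary port from the dynamic range.

require 'socket'

socket = TCPSocket.new('localhost', 6379)   # assumes a local Redis
socket.remote_address.ip_port               # => 6379 (registered server port)
socket.local_address.ip_port                # => e.g. 50123 (ephemeral client port)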

โš™๏ธ How Sockets Work Internally

๐Ÿ› ๏ธ Socket Creation

# What happens when you do this:
socket = TCPSocket.new('localhost', 6379)

Behind the scenes:

// OS system calls
socket_fd = socket(AF_INET, SOCK_STREAM, 0)  // Create socket
connect(socket_fd, server_address, address_len)  // Connect
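
Ruby exposes roughly the same two system calls through its lower-level Socket class. Here is a minimal sketch of what TCPSocket.new does for you, assuming a local Redis on 6379:

require 'socket'

sock = Socket.new(Socket::AF_INET, Socket::SOCK_STREAM, 0)   # socket() system call
sock.connect(Socket.sockaddr_in(6379, '127.0.0.1'))          # connect() system call
sock.write("PING\r\n")
puts sock.readpartial(64)                                    # => "+PONG\r\n"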

๐Ÿ“‹ The OS Socket Table

Process ID: 1234 (Your Ruby app)
File Descriptors:
  0: stdin
  1: stdout  
  2: stderr
  3: socket to Redis (localhost:6379)
  4: socket to PostgreSQL (localhost:5432)
  5: listening socket (port 3000)

๐Ÿ”ฎ Kernel-Level Magic

Application: socket.write("PING")
     ↓
Ruby: calls OS write() system call
     ↓
Kernel: adds to socket send buffer
     ↓
Network Stack: TCP → IP → Ethernet
     ↓
Network Card: sends packets over wire

๐ŸŒˆ Types of Sockets

๐Ÿ“ฆ TCP Sockets (Reliable)

# Like registered mail - guaranteed delivery
server = TCPServer.new(3000)
client = TCPSocket.new('localhost', 3000)

# Data arrives in order, with no loss
client.write("Message 1")
client.write("Message 2")
# Server reads back "Message 1" then "Message 2" – TCP is an ordered
# byte stream, so nothing is lost (the two writes may arrive coalesced)

โšก UDP Sockets (Fast but unreliable)

# Like shouting across a crowded room
require 'socket'

# Sender
udp = UDPSocket.new
udp.send("Hello!", 0, 'localhost', 3000)

# Receiver  
udp = UDPSocket.new
udp.bind('localhost', 3000)
data = udp.recv(1024)  # Might not arrive!

๐Ÿ  Unix Domain Sockets (Local)

# Super fast local communication
File.delete('/tmp/test.sock') if File.exist?('/tmp/test.sock')

# Server
server = UNIXServer.new('/tmp/test.sock')
# Client
client = UNIXSocket.new('/tmp/test.sock')

๐Ÿ”„ Socket Lifecycle

๐Ÿค TCP Connection Dance

# 1. Server: "I'm listening on port 3000"
server = TCPServer.new(3000)

# 2. Client: "I want to connect to port 3000"  
client = TCPSocket.new('localhost', 3000)

# 3. Server: "I accept your connection"
connection = server.accept

# 4. Both can now send/receive data
connection.puts "Hello!"
client.puts "Hi back!"

# 5. Clean shutdown
client.close
connection.close
server.close

๐Ÿ”„ Under the Hood (TCP Handshake)

Client                    Server
  |                         |
  |---- SYN packet -------->| (I want to connect)
  |<-- SYN-ACK packet ------| (OK, let's connect)  
  |---- ACK packet -------->| (Connection established!)
  |                         |
  |<---- Data exchange ---->|
  |                         |

๐Ÿ—๏ธ OS-Level Socket Implementation

๐Ÿ“ File Descriptor Magic

socket = TCPSocket.new('localhost', 6379)
puts socket.fileno  # e.g., 7

# This socket is just file descriptor #7!
# You can even use it with raw system calls
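
For example, you can wrap that same descriptor in a brand-new IO object and talk to Redis through it (a sketch, assuming a local Redis; both Ruby objects now share one kernel socket):

require 'socket'

socket = TCPSocket.new('localhost', 6379)
io = IO.for_fd(socket.fileno, "r+", autoclose: false)  # same kernel socket, new Ruby wrapper
io.syswrite("PING\r\n")
puts io.sysread(64)                                    # => "+PONG\r\n"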

๐Ÿ—‚๏ธ Kernel Socket Buffers

Application Buffer  ←→  Kernel Send Buffer  ←→  Network
                    ←→  Kernel Recv Buffer  ←→

What happens on socket.write:

socket.write("BRPOP queue 0")
# 1. Ruby copies data to kernel send buffer
# 2. write() returns immediately  
# 3. Kernel sends data in background
# 4. TCP handles retransmission, etc.

What happens on socket.read:

data = socket.read  
# 1. Check kernel receive buffer
# 2. If empty, BLOCK thread until data arrives
# 3. Copy data from kernel to Ruby
# 4. Return to your program
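
You can poke at this behavior yourself with the non-blocking variant. A small sketch (same socket as above) that checks the kernel receive buffer and only sleeps when it is empty:

begin
  data = socket.read_nonblock(1024)   # returns immediately if the kernel buffer has bytes
rescue IO::WaitReadable
  IO.select([socket])                 # park the thread until the kernel marks the fd readable
  retry
end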

๐ŸŽฏ Real-World Example: Sidekiq + Redis

# When Sidekiq does this:
redis.brpop("queue:default", timeout: 2)

# Here's the socket journey:
# 1. Ruby opens TCP socket to localhost:6379
socket = TCPSocket.new('localhost', 6379)

# 2. Format Redis command
command = "*4\r\n$5\r\nBRPOP\r\n$13\r\nqueue:default\r\n$1\r\n2\r\n"

# 3. Write to socket (goes to kernel buffer)
socket.write(command)

# 4. Thread blocks reading response
response = socket.readpartial(1024)  # BLOCKS HERE until Redis responds

# 5. Redis eventually sends back data
# 6. Kernel receives packets, assembles them
# 7. socket.read returns with the job data
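
That framing is just the Redis protocol (RESP): an array header followed by length-prefixed bulk strings. A tiny hypothetical helper (resp_command is our own name, not part of any gem) shows how any command can be built the same way:

def resp_command(*parts)
  parts = parts.map(&:to_s)
  "*#{parts.size}\r\n" + parts.map { |p| "$#{p.bytesize}\r\n#{p}\r\n" }.join
end

resp_command("BRPOP", "queue:default", "2")
# => "*3\r\n$5\r\nBRPOP\r\n$13\r\nqueue:default\r\n$1\r\n2\r\n"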

๐Ÿš€ Socket Performance Tips

โ™ป๏ธ Socket Reuse
# Bad: New socket for each request
100.times do
  socket = TCPSocket.new('localhost', 6379)
  socket.write("PING\r\n")
  socket.read
  socket.close  # Expensive!
end

# Good: Reuse socket
socket = TCPSocket.new('localhost', 6379)
100.times do
  socket.write("PING\r\n")  
  socket.read
end
socket.close
๐ŸŠ Connection Pooling
# What Redis gem/Sidekiq does internally
class ConnectionPool
  def initialize(size: 5)
    # Queue is thread-safe, and #pop blocks when the pool is empty
    @pool = Queue.new
    size.times { @pool.push(TCPSocket.new('localhost', 6379)) }
  end

  def with_connection
    socket = @pool.pop
    yield(socket)
  ensure
    @pool.push(socket) if socket
  end
end
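
Usage looks like this; the checked-out socket goes back to the pool even if the block raises:

pool = ConnectionPool.new(size: 5)

pool.with_connection do |socket|
  socket.write("PING\r\n")
  socket.readpartial(64)   # => "+PONG\r\n"
end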

๐ŸŽช Fun Socket Facts

๐Ÿ“„ Everything is a File
# On Linux/Mac, sockets appear as files!
$ lsof -p <your-ruby-pid>
ruby 1234 user 3u sock 0,9 0t0 TCP localhost:3000->localhost:6379
๐Ÿšง Socket Limits
# Your OS has limits
$ ulimit -n
1024  # Max file descriptors (including sockets)

# Web servers need thousands of sockets
# That's why they increase this limit!
๐Ÿ“Š Socket States
$ netstat -an | grep 6379
tcp4 0 0 127.0.0.1.6379 127.0.0.1.50123 ESTABLISHED
tcp4 0 0 127.0.0.1.6379 127.0.0.1.50124 TIME_WAIT
tcp4 0 0 *.6379         *.*            LISTEN

๐ŸŽฏ Key Takeaways

  1. Sockets = Communication endpoints between programs
  2. Ports = Apartment numbers for routing data to the right app
  3. Not just networking – also local inter-process communication
  4. OS manages everything – kernel buffers, network stack, blocking
  5. File descriptors – sockets are just special files to the OS
  6. Connection pooling is crucial for performance
  7. BRPOP blocking happens at the socket read level

๐ŸŒŸ Conclusion

The beauty of sockets is their elegant simplicity: when Sidekiq calls redis.brpop(), it’s using the same socket primitives that have powered network communication for decades!

From your Redis connection to Netflix streaming to Zoom calls, sockets are the fundamental building blocks that make modern distributed systems possible. Understanding how they work gives you insight into everything from why connection pooling matters to how blocking I/O actually works at the system level.

The next time you see a thread “blocking” on network I/O, you’ll know exactly what’s happening: a simple socket read operation, leveraging decades of OS optimization to efficiently wait for data without wasting a single CPU cycle. Pretty amazing for something so foundational! ๐Ÿš€


โšก Inside Redis: How Your Favorite In-Memory Database Actually Works

You’ve seen how Sidekiq connects to Redis via sockets, but what happens when Redis receives that BRPOP command? Let’s pull back the curtain on one of the most elegant pieces of software ever written and discover why Redis is so blazingly fast.

๐ŸŽฏ What Makes Redis Special?

Redis isn’t just another database – it’s a data structure server. While most databases make you think in tables and rows, Redis lets you work directly with lists, sets, hashes, and more. It’s like having super-powered variables that persist across program restarts!

# Traditional database thinking
User.where(active: true).pluck(:id)

# Redis thinking  
redis.smembers("active_users")  # A set of active user IDs

๐Ÿ—๏ธ Redis Architecture Overview

Redis has a deceptively simple architecture that’s incredibly powerful:

┌─────────────────────────────────┐
│       Client Connections        │ ← Your Ruby app connects here
├─────────────────────────────────┤
│       Command Processing        │ ← Parses your BRPOP command
├─────────────────────────────────┤
│       Event Loop (epoll)        │ ← Handles thousands of connections
├─────────────────────────────────┤
│      Data Structure Engine      │ ← The magic happens here
├─────────────────────────────────┤
│        Memory Management        │ ← Keeps everything in RAM
├─────────────────────────────────┤
│        Persistence Layer        │ ← Optional disk storage
└─────────────────────────────────┘

๐Ÿ”ฅ The Single-Threaded Magic

Here’s Redis’s secret sauce: it’s mostly single-threaded!

// Simplified Redis main loop
while (server_running) {
    // 1. Check for new network events
    events = epoll_wait(eventfd, events, max_events, timeout);

    // 2. Process each event
    for (int i = 0; i < events; i++) {
        if (events[i].type == READ_EVENT) {
            process_client_command(events[i].client);
        }
    }

    // 3. Handle time-based events (expiry, etc.)
    process_time_events();
}

Why single-threaded is brilliant:

  • ✅ No locks or synchronization needed
  • ✅ Incredibly fast context switching
  • ✅ Predictable performance
  • ✅ Simple to reason about

๐Ÿง  Data Structure Deep Dive

๐Ÿ“ Redis Lists (What Sidekiq Uses)

When you do redis.brpop("queue:default"), you’re working with a Redis list:

// Redis list structure (simplified)
typedef struct list {
    listNode *head;      // First item
    listNode *tail;      // Last item  
    long length;         // How many items
    // ... other fields
} list;

typedef struct listNode {
    struct listNode *prev;
    struct listNode *next;
    void *value;         // Your job data
} listNode;

BRPOP implementation inside Redis:

// Simplified BRPOP command handler
void brpopCommand(client *c) {
    // Try to pop from each list
    for (int i = 1; i < c->argc - 1; i++) {
        robj *key = c->argv[i];
        robj *list = lookupKeyRead(c->db, key);

        if (list && listTypeLength(list) > 0) {
            // Found item! Pop and return immediately
            robj *value = listTypePop(list, LIST_TAIL);
            addReplyMultiBulkLen(c, 2);
            addReplyBulk(c, key);
            addReplyBulk(c, value);
            return;
        }
    }

    // No items found - BLOCK the client
    blockForKeys(c, c->argv + 1, c->argc - 2, timeout);
}

๐Ÿ”‘ Hash Tables (Super Fast Lookups)

Redis uses hash tables for O(1) key lookups:

// Redis hash table
typedef struct dict {
    dictEntry **table;       // Array of buckets
    unsigned long size;      // Size of table
    unsigned long sizemask;  // size - 1 (for fast modulo)
    unsigned long used;      // Number of entries
} dict;

// Finding a key
unsigned int hash = dictGenHashFunction(key);
unsigned int idx = hash & dict->sizemask;
dictEntry *entry = dict->table[idx];

This is why Redis is so fast – finding any key is O(1)!
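
The masking trick works in any language when the table size is a power of two. A toy Ruby version of the lookup above:

table_size = 16                               # power of two, like Redis dict sizes
sizemask   = table_size - 1
bucket     = "queue:default".hash & sizemask  # same as hash % table_size, but cheaper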

โšก The Event Loop: Handling Thousands of Connections

Redis uses epoll (Linux) or kqueue (macOS) to efficiently handle many connections:

// Simplified event loop
int epollfd = epoll_create(1024);

// Add client socket to epoll
struct epoll_event ev;
ev.events = EPOLLIN;  // Watch for incoming data
ev.data.ptr = client;
epoll_ctl(epollfd, EPOLL_CTL_ADD, client->fd, &ev);

// Main loop
while (1) {
    int nfds = epoll_wait(epollfd, events, MAX_EVENTS, timeout);

    for (int i = 0; i < nfds; i++) {
        client *c = (client*)events[i].data.ptr;

        if (events[i].events & EPOLLIN) {
            // Data available to read
            read_client_command(c);
            process_command(c);
        }
    }
}

Why this is amazing:

Traditional approach: 1 thread per connection
- 1000 connections = 1000 threads
- Each thread uses ~8MB memory
- Context switching overhead

Redis approach: 1 thread for all connections  
- 1000 connections = 1 thread
- Minimal memory overhead
- No context switching between connections
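
The same idea is easy to sketch in Ruby with IO.select standing in for epoll. This is a minimal single-threaded echo server that multiplexes every client on one thread (illustrative only, not how Redis itself is written):

require 'socket'

server  = TCPServer.new(3000)
clients = []

loop do
  readable, = IO.select([server] + clients)  # one thread waits on every socket at once
  readable.each do |io|
    if io == server
      clients << server.accept               # new connection: just remember its socket
    else
      data = io.read_nonblock(1024, exception: false)
      if data == :wait_readable              # nothing to read after all
        next
      elsif data.nil?                        # client closed the connection
        clients.delete(io)
        io.close
      else
        io.write(data)                       # "process the command" (here: echo it back)
      end
    end
  end
end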

๐Ÿ”’ How BRPOP Blocking Actually Works

Here’s the magic behind Sidekiq’s blocking behavior:

๐ŸŽญ Client Blocking State

// When no data available for BRPOP
typedef struct blockingState {
    dict *keys;           // Keys we're waiting for
    time_t timeout;       // When to give up
    int numreplicas;      // Replication stuff
    // ... other fields
} blockingState;

// Block a client
void blockClient(client *c, int btype) {
    c->flags |= CLIENT_BLOCKED;
    c->btype = btype;
    c->bstate = zmalloc(sizeof(blockingState));

    // Add to server's list of blocked clients
    listAddNodeTail(server.clients, c);
}

โฐ Timeout Handling

// Check for timed out clients
void handleClientsBlockedOnKeys(void) {
    time_t now = time(NULL);

    listIter li;
    listNode *ln;
    listRewind(server.clients, &li);

    while ((ln = listNext(&li)) != NULL) {
        client *c = listNodeValue(ln);

        if (c->flags & CLIENT_BLOCKED &&
            c->bstate->timeout != 0 &&
            c->bstate->timeout < now) {

            // Timeout! Send null response
            addReplyNullArray(c);
            unblockClient(c);
        }
    }
}

๐Ÿš€ Unblocking When Data Arrives

// When someone does LPUSH to a list
void signalKeyAsReady(redisDb *db, robj *key) {
    readyList *rl = zmalloc(sizeof(*rl));
    rl->key = key;
    rl->db = db;

    // Add to ready list
    listAddNodeTail(server.ready_keys, rl);
}

// Process ready keys and unblock clients
void handleClientsBlockedOnKeys(void) {
    while (listLength(server.ready_keys) != 0) {
        listNode *ln = listFirst(server.ready_keys);
        readyList *rl = listNodeValue(ln);

        // Find blocked clients waiting for this key
        list *clients = dictFetchValue(rl->db->blocking_keys, rl->key);

        if (clients) {
            // Unblock first client and serve the key
            client *receiver = listNodeValue(listFirst(clients));
            serveClientBlockedOnList(receiver, rl->key, rl->db);
        }

        listDelNode(server.ready_keys, ln);
    }
}
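
You can watch this handoff from the client side with the redis gem (a hedged sketch, assuming the gem is installed and a local Redis is running): one connection blocks inside Redis, and a push on a second connection unblocks it.

require 'redis'

consumer = Thread.new do
  Redis.new.brpop("queue:default", timeout: 5)  # blocks inside Redis, not busy-waiting
end

sleep 1                                         # give the consumer time to block
Redis.new.lpush("queue:default", "job-42")      # signalKeyAsReady fires for this key

p consumer.value                                # => ["queue:default", "job-42"]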

๐Ÿ’พ Memory Management: Keeping It All in RAM

๐Ÿงฎ Memory Layout

// Every Redis object has this header
typedef struct redisObject {
    unsigned type:4;        // STRING, LIST, SET, etc.
    unsigned encoding:4;    // How it's stored internally  
    unsigned lru:24;        // LRU eviction info
    int refcount;          // Reference counting
    void *ptr;             // Actual data
} robj;

๐Ÿ—‚๏ธ Smart Encodings

Redis automatically chooses the most efficient representation:

// Small lists use ziplist (compressed)
if (listLength(list) < server.list_max_ziplist_entries &&
    listTotalSize(list) < server.list_max_ziplist_value) {

    // Use compressed ziplist
    listConvert(list, OBJ_ENCODING_ZIPLIST);
} else {
    // Use normal linked list
    listConvert(list, OBJ_ENCODING_LINKEDLIST);  
}

Example memory optimization:

Small list: ["job1", "job2", "job3"]
Normal encoding: 3 pointers + 3 allocations = ~200 bytes
Ziplist encoding: 1 allocation = ~50 bytes (75% savings!)

๐Ÿงน Memory Reclamation

// Redis memory management
void freeMemoryIfNeeded(void) {
    while (server.memory_usage > server.maxmemory) {
        // Try to free memory by:
        // 1. Expiring keys
        // 2. Evicting LRU keys  
        // 3. Running garbage collection

        if (freeOneObjectFromFreelist() == C_OK) continue;
        if (expireRandomExpiredKey() == C_OK) continue;
        if (evictExpiredKeys() == C_OK) continue;

        // Last resort: evict LRU key
        evictLRUKey();
    }
}

๐Ÿ’ฟ Persistence: Making Memory Durable

๐Ÿ“ธ RDB Snapshots

// Save entire dataset to disk
int rdbSave(char *filename) {
    FILE *fp = fopen(filename, "w");

    // Iterate through all databases
    for (int dbid = 0; dbid < server.dbnum; dbid++) {
        redisDb *db = server.db + dbid;
        dict *d = db->dict;

        // Save each key-value pair
        dictIterator *di = dictGetSafeIterator(d);
        dictEntry *de;

        while ((de = dictNext(di)) != NULL) {
            sds key = dictGetKey(de);
            robj *val = dictGetVal(de);

            // Write key and value to file
            rdbSaveStringObject(fp, key);
            rdbSaveObject(fp, val);
        }
    }

    fclose(fp);
}

๐Ÿ“ AOF (Append Only File)

// Log every write command
void feedAppendOnlyFile(struct redisCommand *cmd, int dictid, 
                       robj **argv, int argc) {
    sds buf = sdsnew("");

    // Format as Redis protocol
    buf = sdscatprintf(buf, "*%d\r\n", argc);
    for (int i = 0; i < argc; i++) {
        buf = sdscatprintf(buf, "$%lu\r\n", 
                          (unsigned long)sdslen(argv[i]->ptr));
        buf = sdscatsds(buf, argv[i]->ptr);
        buf = sdscatlen(buf, "\r\n", 2);
    }

    // Write to AOF file
    write(server.aof_fd, buf, sdslen(buf));
    sdsfree(buf);
}

๐Ÿš€ Performance Secrets

๐ŸŽฏ Why Redis is So Fast

  1. Everything in memory – No disk I/O during normal operations
  2. Single-threaded – No locks or context switching
  3. Optimized data structures – Custom implementations for each type
  4. Efficient networking – epoll/kqueue for handling connections
  5. Smart encoding – Automatic optimization based on data size

๐Ÿ“Š Real Performance Numbers

Operation           Operations/second
SET                 100,000+
GET                 100,000+  
LPUSH               100,000+
BRPOP (no block)    100,000+
BRPOP (blocking)    Limited by job arrival rate

๐Ÿ”ง Configuration for Speed

# redis.conf optimizations
tcp-nodelay yes              # Disable Nagle's algorithm
tcp-keepalive 60            # Keep connections alive
timeout 0                   # Never timeout idle clients

# Memory optimizations  
maxmemory-policy allkeys-lru  # Evict least recently used
save ""                       # Disable snapshotting for speed

๐ŸŒ Redis in Production

๐Ÿ—๏ธ Scaling Patterns

Master-Slave Replication:

Master (writes) ─┐
                 ├─→ Slave 1 (reads)
                 ├─→ Slave 2 (reads)
                 └─→ Slave 3 (reads)

Redis Cluster (sharding):

Client → Hash Key → Determine Slot → Route to Correct Node

Slots 0-5460:    Node A  
Slots 5461-10922: Node B
Slots 10923-16383: Node C
๐Ÿ” Monitoring Redis
# Real-time stats
redis-cli info

# Monitor all commands
redis-cli monitor

# Check slow queries
redis-cli slowlog get 10

# Memory usage by key pattern
redis-cli --bigkeys

🎯 Redis vs Alternatives

📊 When to Choose Redis

✅ Need sub-millisecond latency
✅ Working with simple data structures
✅ Caching frequently accessed data
✅ Session storage
✅ Real-time analytics
✅ Message queues (like Sidekiq!)

❌ Need complex queries (use PostgreSQL)
❌ Need ACID transactions across keys
❌ Dataset larger than available RAM
❌ Need strong consistency guarantees

🥊 Redis vs Memcached

Redis:
+ Rich data types (lists, sets, hashes)
+ Persistence options
+ Pub/sub messaging
+ Transactions
- Higher memory usage

Memcached:  
+ Lower memory overhead
+ Simpler codebase
- Only key-value storage
- No persistence

๐Ÿ”ฎ Modern Redis Features

๐ŸŒŠ Redis Streams
# Modern alternative to lists for job queues
redis.xadd("jobs", {"type" => "email", "user_id" => 123})
# (the consumer group must exist first: XGROUP CREATE jobs workers $ MKSTREAM)
redis.xreadgroup("workers", "worker-1", "jobs", ">")
๐Ÿ“ก Redis Modules
RedisJSON:     Native JSON support
RedisSearch:   Full-text search
RedisGraph:    Graph database
RedisAI:       Machine learning
TimeSeries:    Time-series data
โšก Redis 7 Features
- Multi-part AOF files
- Config rewriting improvements  
- Better memory introspection
- Enhanced security (ACLs)
- Sharded pub/sub

๐ŸŽฏ Key Takeaways

  1. Single-threaded simplicity enables incredible performance
  2. In-memory architecture eliminates I/O bottlenecks
  3. Custom data structures are optimized for specific use cases
  4. Event-driven networking handles thousands of connections efficiently
  5. Blocking operations like BRPOP are elegant and efficient
  6. Smart memory management keeps everything fast and compact
  7. Horizontal scaling is possible with clustering and replication

๐ŸŒŸ Conclusion

Redis is a masterclass in software design – taking a simple concept (in-memory data structures) and optimizing every single aspect to perfection. When Sidekiq calls BRPOP, it’s leveraging decades of systems programming expertise distilled into one of the most elegant and performant pieces of software ever written.

The next time you see Redis handling thousands of operations per second while using minimal resources, you’ll understand the beautiful engineering that makes it possible. From hash tables to event loops to memory management, every component works in harmony to deliver the performance that makes modern applications possible.

Redis proves that sometimes the best solutions are the simplest ones, executed flawlessly! ๐Ÿš€


Setup 🛠 Rails 8 App – Part 10: PostgreSQL Into The Action

For a Ruby on Rails 8 application, the choice of database depends on your specific needs, but here’s a breakdown of the best options and when to use each:

PostgreSQL (Highly Recommended)

Best overall choice for most Rails apps.

Why:

  • First-class support in Rails.
  • Advanced features like full-text search, JSONB support, CTEs, window functions.
  • Strong consistency and reliability.
  • Scales well vertically and horizontally (with tools like Citus).
  • Used by: GitHub, Discourse, Basecamp, Shopify.

Use if:

  • Youโ€™re building a standard Rails web app or API.
  • You need advanced query features or are handling complex data types (e.g., JSON).

SQLite (For development/testing only)

  • Lightweight, file-based.
  • Fast and easy to set up.
  • But not recommended for production.

Use if:

  • Youโ€™re building a quick prototype or local dev/testing app.
  • NOT for multi-user production environments.

MySQL / MariaDB

  • Also supported by Rails.
  • Can work fine for simpler applications.
  • Lacks some advanced features (like robust JSON support or full Postgres-style indexing).
  • Not the default in many modern Rails setups.

Use if:

  • Your team already has MySQL infrastructure or legacy systems.
  • You need horizontal scaling with Galera Cluster or similar setups.

Others (NoSQL like MongoDB, Redis, etc.)

  • Use Redis for caching and background job data (not as primary DB).
  • Use MongoDB or other NoSQL only if your data model really demands it (e.g., unstructured documents, event sourcing).

Recommendation Summary:

Use Case                             Recommended DB
Production web/API app               PostgreSQL
Dev/prototyping/local testing        SQLite
Legacy systems/MySQL infrastructure  MySQL/MariaDB
Background jobs/caching              Redis
Special needs (e.g., documents)      MongoDB (with caution)

If you’re starting fresh or building something scalable and modern with Rails 8, go with PostgreSQL.

Let’s break that down:

๐Ÿ’ฌ What does “robust JSON support” mean?

PostgreSQL supports special column types, json and jsonb, which let you store structured JSON data directly in your database – like hashes or objects.

Why it matters:

  • You can store dynamic data without needing to change your schema.
  • You can query inside the JSON using SQL (->, ->>, @>, etc.).
  • You can index parts of the JSON – for speed.

๐Ÿ”ง Example:

You have a products table with a specs column that holds tech specs in JSON:

specs = {
  "color": "black",
  "brand": "Libas",
  "dimensions": {"chest": "34", "waist": "30", "shoulder": "13.5"}
}

You can query like:

SELECT * FROM products WHERE specs->>'color' = 'black';

Or check if the JSON contains a value:

SELECT * FROM products WHERE specs @> '{"brand": "Libas"}';

You can even index specs->>'color' to make these queries fast.


๐Ÿ’ฌ What does “full Postgres-style indexing” mean?

PostgreSQL supports a wide variety of powerful indexing options, which improve query performance and flexibility.

โš™๏ธ Types of Indexes PostgreSQL supports:

Index Type                        Use Case
B-Tree                            Default; used for most equality and range searches
GIN (Generalized Inverted Index)  Fast indexing for JSON, arrays, full-text search
Partial Indexes                   Index only part of the data (e.g., WHERE active = true)
Expression Indexes                Index a function or expression (e.g., LOWER(email))
Covering Indexes (INCLUDE)        Fetch data directly from the index, avoiding table reads
  • B-Tree Indexes: B-tree indexes are more suitable for single-value columns.
  • When to Use GIN Indexes: When you frequently search for specific elements within arrays, JSON documents, or other composite data types.
  • Example for GIN Indexes: Imagine you have a table with a JSONB column containing document metadata. A GIN index on this column would allow you to quickly find all documents that have a specific author or belong to a particular category. 

Why does this matter for our shopping app?

  • We can store and filter products with dynamic specs (e.g., kurtas, shorts, pants) without new columns.
  • Full-text search on product names/descriptions.
  • Fast filters: color = 'red' AND brand = 'Libas' even if those are stored in JSON.
  • Index custom expressions like LOWER(email) for case-insensitive login.
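
To back those filters with indexes, here is a minimal migration sketch (assuming a products table with a jsonb specs column and a users table with an email column; class and index names are illustrative):

class AddShoppingAppIndexes < ActiveRecord::Migration[8.0]
  def change
    # GIN index: speeds up @> containment queries on the jsonb specs column
    add_index :products, :specs, using: :gin

    # Expression index: case-insensitive login lookups via LOWER(email)
    add_index :users, "LOWER(email)", name: "index_users_on_lower_email"
  end
end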

๐Ÿ’ฌ What are Common Table Expressions (CTEs)?

CTEs are temporary result sets you can reference within a SQL query – like defining a mini subquery that makes complex SQL easier to read and write.

WITH recent_orders AS (
  SELECT * FROM orders WHERE created_at > NOW() - INTERVAL '7 days'
)
SELECT * FROM recent_orders WHERE total > 100;

  • Breaking complex queries into readable parts.
  • Re-using result sets without repeating subqueries.
In Rails, the .with query method is available natively since Rails 7.1 (earlier versions needed a third-party gem):

Order
  .with(recent_orders: Order.where('created_at > ?', 7.days.ago))
  .from('recent_orders')
  .where('total > ?', 100)

๐Ÿ’ฌ What are Window Functions?

Window functions perform calculations across rows related to the current row – unlike aggregate functions, they don’t group results into one row.

🔧 Example: Rank users by their score within each team:

SELECT
  user_id,
  team_id,
  score,
  RANK() OVER (PARTITION BY team_id ORDER BY score DESC) AS rank
FROM users;

Use cases:

  • Ranking rows (like leaderboards).
  • Running totals or moving averages.
  • Calculating differences between rows (e.g. “How much did this order increase from the last?”).

🛤 In Rails:

Window functions are available through raw SQL or Arel. Here’s a basic example:

User
  .select("user_id, team_id, score, RANK() OVER (PARTITION BY team_id ORDER BY score DESC) AS rank")

CTEs and Window functions are fully supported in PostgreSQL, making it the go-to DB for any Rails 8 app that needs advanced querying.

JSONB Support

JSONB stands for “JSON Binary” and is a binary representation of JSON data that allows for efficient storage and retrieval of complex data structures.

This can be useful when you have data that doesn’t fit neatly into traditional relational database tables, such as nested or variable-length data structures.

Absolutely – storing JSON in a relational database (like PostgreSQL) can be super powerful when used wisely. It gives you schema flexibility without abandoning the structure and power of SQL.

Here are real-world use cases for using JSON columns in relational databases:

๐Ÿ”ง 1. Flexible Metadata / Extra Attributes

Let users store arbitrary attributes that don’t require schema changes every time.

Use case: Product variants, custom fields

t.jsonb :metadata

{
  "color": "red",
  "size": "XL",
  "material": "cotton"
}

=> Good when:

  • You can’t predict all the attributes users will need.
  • You donโ€™t want to create dozens of nullable columns.

๐ŸŽ›๏ธ 2. Storing Settings or Preferences

User or app settings that vary a lot.

Use case: Notification preferences, UI layout, feature toggles

{
  "email": true,
  "sms": false,
  "theme": "dark"
}

=> Easy to store and retrieve as a blob without complex joins.

๐ŸŒ 3. API Response Caching

Store external API responses for caching or auditing.

Use case: Storing Stripe, GitHub, or weather API responses.

t.jsonb :api_response

=> Avoids having to map every response field into a column.

๐Ÿ“ฆ 4. Storing Logs or Events

Use case: Audit trails, system logs, user events

{
  "action": "login",
  "timestamp": "2025-04-18T10:15:00Z",
  "ip": "123.45.67.89"
}

=> Great for capturing varied data over time without a rigid schema.

📊 5. Embedded Mini-Structures

Use case: A form builder app storing user-created forms and fields.

{
  "fields": [
    { "type": "text", "label": "Name", "required": true },
    { "type": "email", "label": "Email", "required": false }
  ]
}

=> When each row can have nested, structured data – almost like a mini-document.

🕹️ 6. Device or Browser Info (User Agents)

Use case: Analytics, device fingerprinting

{
  "browser": "Safari",
  "os": "macOS",
  "version": "17.3"
}

=> You don’t need to normalize or query this often – perfect for JSON.


JSON vs JSONB in PostgreSQL

Use jsonb over json unless you need to preserve order or whitespace.

  • jsonb is binary format → faster and indexable
  • You can do fancy stuff like:
SELECT * FROM users WHERE preferences ->> 'theme' = 'dark';

Or in Rails:

User.where("preferences ->> 'theme' = ?", 'dark')

store and store_accessor

They let you treat JSON or text-based hash columns like structured data, so you can access fields as if they were real database columns.

๐Ÿ”น store

  • Used to declare a serialized store (usually a jsonb, json, or text column) on your model.
  • Works best with key/value stores.

๐Ÿ‘‰ Example:

Let’s say your users table has a settings column of type jsonb:

# migration
add_column :users, :settings, :jsonb, default: {}

Now in your model:

class User < ApplicationRecord
  store :settings, accessors: [:theme, :notifications], coder: JSON
end

You can now do this:

user.theme = "dark"
user.notifications = true
user.save

user.settings
# => { "theme" => "dark", "notifications" => true }

๐Ÿ”น store_accessor

A lightweight version that only declares attribute accessors for keys inside a JSON column. Doesn’t include serialization logic – so you usually use it with a json/jsonb/text column that already works as a Hash.

๐Ÿ‘‰ Example:

class User < ApplicationRecord
  store_accessor :settings, :theme, :notifications
end

This gives you:

  • user.theme, user.theme=
  • user.notifications, user.notifications=
🤔 When to Use Each?

Feature          When to Use
store            When you need both serialization and accessors
store_accessor   When your column is already serialized (jsonb, etc.)

If you're using PostgreSQL with jsonb columns – it's more common to just use store_accessor.

Querying JSON Fields
User.where("settings ->> 'theme' = ?", "dark")

Or if you’re using store_accessor:

User.where(theme: "dark")

💡 But remember: you’ll only be able to query these fields efficiently if you’re using jsonb + proper indexes.
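
For example, a small migration sketch (hypothetical class and index names) that makes the theme filter above index-backed:

class IndexUserSettingsTheme < ActiveRecord::Migration[8.0]
  def change
    # Expression index on the one jsonb key we filter by
    add_index :users, "(settings ->> 'theme')", name: "index_users_on_settings_theme"
  end
end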


๐Ÿ”ฅ Conclusion:

  • PostgreSQL can store, search, and index inside JSON fields natively.
  • This lets you keep your schema flexible and your queries fast.
  • Combined with its advanced indexing, it’s ideal for a modern e-commerce app with dynamic product attributes, filtering, and searching.

To install and set up PostgreSQL on macOS, you have a few options. The most common and cleanest method is using Homebrew. Here’s a step-by-step guide:

Setting Up โš™๏ธ SSH in your system

SSH (Secure Shell) is used to establish secure remote connections over an unsecured network, enabling secure access, management, and data transfer on remote systems, including running commands, transferring files, and managing applications.

Setup SSH keys:

To create an SSH key and add it to your GitHub account, follow these steps:

1. Generate an SSH Key

ssh-keygen -t ed25519 -C "your-email@example.com"
  • Replace "your-email@example.com" with your GitHub email.
  • If prompted, press Enter to save the key in the default location (~/.ssh/id_ed25519).
  • Set a passphrase (optional for security).

2. Start the SSH Agent

eval "$(ssh-agent -s)"

3. Add the SSH Key to the Agent

ssh-add ~/.ssh/id_ed25519

4. Copy the SSH Key to Clipboard

cat ~/.ssh/id_ed25519.pub | pbcopy   # macOS
cat ~/.ssh/id_ed25519.pub | xclip -selection clipboard   # Linux
clip < ~/.ssh/id_ed25519.pub   # Windows (Git Bash)

(If xclip is not installed, use sudo apt install xclip on Linux)


5. Add the SSH Key to GitHub

  • Go to GitHub → Settings → SSH and GPG keys (GitHub SSH Keys).
  • Click New SSH Key.
  • Paste the copied key into the field and give it a title.
  • Click Add SSH Key.

6. Test the SSH Connection

ssh -T git@github.com

You should see a message like:

Hi username! You've successfully authenticated, but GitHub does not provide shell access.

Now you can clone, push, and pull repositories without entering your GitHub password!

You may be wondering: what is ed25519?

ed25519 is a modern cryptographic algorithm used for generating SSH keys. It is an alternative to the older RSA algorithm and is considered more secure and faster.

Why Use ed25519 Instead of RSA?

  1. Stronger Security – ed25519 provides 128-bit security, while RSA requires a 4096-bit key for similar security.
  2. Smaller Key Size – The generated keys are much shorter than RSA keys, making them faster to use.
  3. Faster Performance – ed25519 is optimized for speed, especially on modern hardware.
  4. Resistant to Certain Attacks – Unlike RSA, ed25519 is resistant to side-channel attacks.

Why GitHub Recommends ed25519?

  • Since 2021, GitHub suggests using ed25519 over RSA because of better security and efficiency.
  • Older RSA keys (e.g., 1024-bit) are now considered weak.

When Should You Use ed25519?

  • Always, unless you’re working with old systems that do not support it.
  • If you need maximum security, speed, and smaller key sizes.

Example: Creating an ed25519 SSH Key

ssh-keygen -t ed25519 -C "your-email@example.com"

This creates a strong and secure SSH key for GitHub authentication.

What is the SSH Agent?

The SSH agent is a background process that securely stores your SSH private keys and manages authentication.

Instead of entering your private key passphrase every time you use SSH (e.g., for git push), the agent remembers your key after you add it.


Why Do We Need the SSH Agent?

  1. Avoid Entering Your Passphrase Repeatedly
     • If your SSH key has a passphrase, you would normally need to enter it every time you use git push or ssh.
     • The agent caches the key in memory so you don’t need to enter the passphrase every time.
  2. Automatic Authentication
     • Once the agent has your key, it can sign SSH requests for authentication automatically.
  3. Keeps Your Private Key Secure
     • Your private key stays in memory and is not exposed on disk after adding it.

What Do These Commands Do?

1๏ธโƒฃ Start the SSH Agent

eval "$(ssh-agent -s)"
  • This starts the SSH agent in the background.
  • The eval "$(ssh-agent -s)" command initializes the agent and prints its process ID (PID), confirming it’s running.

2๏ธโƒฃ Add the SSH Key to the Agent

ssh-add ~/.ssh/id_ed25519
  • This adds your private key (~/.ssh/id_ed25519) to the agent.
  • Now, the agent will remember the key and use it when making SSH connections.

How to Check If Your Key is Added?

Run:

ssh-add -l

This lists all keys currently added to the agent. If your key is listed, it means the agent is working correctly.


Example: How This Helps?

Without the agent:

git push origin main
# Every time, you need to enter your SSH passphrase

With the agent:

ssh-add ~/.ssh/id_ed25519
git push origin main
# No need to enter your passphrase again!

Conclusion
  • The SSH agent stores your private key securely in memory.
  • It automates authentication, so you donโ€™t need to enter your passphrase for every SSH request.
  • Adding the key with ssh-add lets Git and SSH use it without prompting.

Now, you can git push and git pull seamlessly!

But the question is: can this allow unauthorized persons to run these commands without the SSH passphrase?


Does using ssh-agent make your system less secure? Let’s break it down.

Can Someone Else Use My SSH Key Without a Passphrase?

✅ No, If Your System is Secure

  • The SSH agent only runs under your user session.
  • Other users on the same machine cannot access your agent.
  • Your private key is never written to disk in an unprotected way.
  • The agent only allows authentication, not direct access to your key.

⚠️ Yes, If Someone Gets Access to Your Session

  • If an attacker gains access to your system (e.g., via malware, or if you leave your computer unlocked), they might use your active SSH agent to authenticate without needing your passphrase.
  • However, they cannot extract your private key from the agent.

How to Improve Security?

If you want extra security, here are a few things you can do:

1๏ธโƒฃ Remove Keys from Agent When Not Needed

After using your SSH key, you can remove it from the agent:

ssh-add -D

This removes all stored keys. Next time you push, you’ll need to re-enter your passphrase.


2๏ธโƒฃ Use -t (Timeout) for Auto Removal

To automatically remove the key after a set time:

ssh-add -t 3600 ~/.ssh/id_ed25519  # Removes the key after 1 hour


3๏ธโƒฃ Lock Your Screen When Away

If someone gets access to your logged-in session, they could use your agent to authenticate without needing the passphrase.

Always lock your screen (Ctrl + L or Win + L on Windows/Linux, Cmd + Ctrl + Q on Mac) when stepping away.


4๏ธโƒฃ Disable Agent Forwarding (Extra Security)

By default, SSH agent forwarding (ssh -A) can expose your keys to remote servers. If you don’t need it, disable it by editing:

nano ~/.ssh/config

And adding:

Host *
    ForwardAgent no

Summary
  1. The SSH agent only runs in your session, so no one else can access it unless they get control of your user session.
  2. Attackers cannot steal your private key from the agent, but if they have access to your session, they could use it.
  3. To be safe, remove keys when not needed (ssh-add -D), use timeouts (-t), and always lock your computer.

You’re now both secure and productive with SSH! ๐Ÿš€

Setup 🛠 Rails 8 App – Part 3: Git setup, modify gitignore, git config

So now let's push the code to the GitHub repository. Before that, a few final checks need to be done on our end.

Check .gitignore file and update

✅ Files/Folders to Include in .gitignore

Here’s a breakdown of which files/folders should be added to .gitignore in your Rails project:

These files are user-specific or environment-specific and should not be committed to Git.

1️⃣ .dockerignore → ❌ Don’t add to .gitignore

  • Keep this file if you're using Docker.
  • It’s like .gitignore but for Docker, helping to reduce Docker image size.
  • Do not add it to .gitignore if you need it.

2️⃣ .github/ → ✅ Add to .gitignore (if personal CI/CD configs)

  • If this contains GitHub Actions or issue templates, keep it in the repo.
  • If it’s just for personal workflows, add it to .gitignore.

3️⃣ .kamal/ → ✅ Add to .gitignore

  • This contains deployment secrets and configuration files for Kamal (deployment tool).
  • It’s usually auto-generated and should not be committed.

4️⃣ .vscode/ → ✅ Add to .gitignore

  • User-specific VSCode settings, should not be committed.
  • Different developers use different editors.

Keep These Files in Git (Don’t Add to .gitignore)

These files are important for project configuration.

1️⃣ .gitattributes → Keep in Git (❌ don’t add to .gitignore)

  • Defines how Git handles line endings and binary files.
  • Helps avoid conflicts on Windows/Linux/Mac.

2️⃣ .gitignore → Keep in Git (❌ don’t add to .gitignore)

  • Defines ignored files, obviously should not be ignored itself.

3️⃣ .rubocop.yml → Keep in Git (❌ don’t add to .gitignore)

  • This is for Rubocop linting rules, which helps maintain coding style.
  • All developers should follow the same rules.

4️⃣ .ruby-version → Keep in Git (❌ don’t add to .gitignore)

  • Defines the Ruby version for the project.
  • Ensures all team members use the same Ruby version.

Final .gitignore Entries Based on Your Files

# Ignore log files, temp files, and dependencies
/log/
/tmp/
.bundle/
/node_modules/

# Manually added
# Ignore editor & environment-specific configs
.vscode/

# Ignore deployment configs
.kamal/

# Ignore personal GitHub configs (if applicable)
.github/

Final Summary

Folder     Include in Git?                      Why?
log/       ❌ Ignore                            Dynamically generated logs
public/    ✅ Keep (except public/assets/)      Static files like favicon, error pages
script/    ✅ Keep                              Old Rails script files (if used)
storage/   ❌ Ignore                            ActiveStorage uploads (except seed/)
test/      ✅ Keep                              Contains important test cases
tmp/       ❌ Ignore                            Temporary runtime files
vendor/    ❌ Ignore (except custom libraries)  Third-party libraries

First time git setup

# You can view all of your settings and where they are coming from using

git config --list --show-origin

# Your Identity: The first thing you should do is to set your user name and email address.

git config --global user.name "Abhilash"
git config --global user.email abhilash@example.com

# configure the default text editor that will be used when Git needs you to type in a message

git config --global core.editor emacs
git config --global core.editor "code --wait"     # vs code
git config --global -e      # verify editor

# command to list all the settings Git can find at that point

git config --list
git config user.name

Add your ssh key to your github. Check the post: https://railsdrop.com/2025/03/30/setting-up-ssh-in-your-system/

Initial commit: Execute the Git commands

Run the following commands in your rails app folder:

git add .
git commit -m "first commit"
git branch -M main
git remote add origin git@github.com:abhilashak/design_studio.git
git remote -v     # check the remote endpoints
git push -u origin main

git log      # check commit details

What It Does:

git branch -M main
  1. Renames the current branch to main.
  2. The -M flag forcefully renames the branch (overwrites if needed).

Common Use Case:

  • If your branch is named master and you want to rename it to main (which is now the default in many repositories).
  • If you created a branch with a different name and want to standardise it as main.

Example Usage:

git branch -M main
git push -u origin main

This renames the current branch to main and then pushes it to the remote repository.

Use the GitHub SSH option: it uses the SSH key that you set up in your GitHub account for that machine.
The HTTPS option asks for your GitHub credentials to log in.

The -u option in the command:

git push -u origin main

What Does -u Do?

The -u flag stands for --set-upstream. It sets the upstream branch for main, which means:

  • It links your local branch (main) to the remote branch (main on origin).
  • After running this command once, you can simply use:
  git push

instead of git push origin main, because Git now knows where to push.

Example Use Case:

If you just created a new branch (main in this case) and are pushing it for the first time:

git push -u origin main

This ensures that main always pushes to origin/main without needing to specify it every time.

After Running This Command:

โœ… Next time, you can simply use:

git push
git pull

without needing to specify origin main again.

For better git work flow

Check the post: https://railsdrop.com/2025/03/29/git-workflow-best-practices-for-your-development-process/

Want to See Your Upstream Branch?

Run:

git branch -vv

This shows which remote branch your local branches are tracking.

to be continued.. ๐Ÿš€


Installing ⚙️ and Setting Up 🔧 Ruby 3.4, Rails 8.0 and IDE on macOS in 2025

Ruby on Rails is a powerful framework for building web applications. If you're setting up your development environment on macOS in 2025, this guide will walk you through installing Ruby 3.4, Rails 8, and a good IDE for development.

1. Installing Ruby and Rails

“While macOS comes with Ruby pre-installed, it’s often outdated and can’t be upgraded easily. Using a version manager like Mise allows you to install the latest Ruby version, switch between versions, and upgrade as needed.” – Rails guides

Install Dependencies

Run the following command to install essential dependencies (takes time):

brew install openssl@3 libyaml gmp rust

โ€ฆ..
==> Installing rust dependency: libssh2, readline, sqlite, python@3.13, pkgconf
==> Installing rust

zsh completions have been installed to:
/opt/homebrew/share/zsh/site-functions
==> Summary
๐Ÿบ /opt/homebrew/Cellar/rust/1.84.1: 3,566 files, 321.3MB
==> Running brew cleanup rustโ€ฆ
==> openssl@3
A CA file has been bootstrapped using certificates from the system
keychain. To add additional certificates, place .pem files in
/opt/homebrew/etc/openssl@3/certs

and run
/opt/homebrew/opt/openssl@3/bin/c_rehash
==> rust
zsh completions have been installed to:
/opt/homebrew/share/zsh/site-functions

Install Mise Version Manager

curl https://mise.run | sh
echo 'eval "$(~/.local/bin/mise activate zsh)"' >> ~/.zshrc
source ~/.zshrc

Install Ruby and Rails

mise use -g ruby@3
mise ruby@3.4.1 โœ“ installed
mise ~/.config/mise/config.toml tools: ruby@3.4.1

ruby --version   # output Ruby 3.4.1

gem install rails

# reload terminal and check
rails --version  # output Rails 8.0.1

For additional guidance, refer to these resources:


2. Installing an IDE for Ruby on Rails Development

Choosing the right Integrated Development Environment (IDE) is crucial for productivity. Here are some popular options:

RubyMine

  • Feature-rich and specifically designed for Ruby on Rails.
  • Includes debugging tools, database integration, and smart code assistance.
  • Paid software that can be resource-intensive.

Sublime Text

  • Lightweight and highly customizable.
  • Requires plugins for additional functionality.

Visual Studio Code (VS Code) (Recommended)

  • Free and open-source.
  • Excellent plugin support.

Install VS Code

Follow the official installation guide.

Enable GitHub Copilot for AI-assisted coding:

  1. Open VS Code.
  2. Sign in with your GitHub account.
  3. Enable Copilot from the extensions panel.

To use VS Code from the terminal, ensure code is added to your $PATH:

  1. Open Command Palette (Cmd+Shift+P).
  2. Search for Shell Command: Install 'code' command in PATH.
  3. Restart your terminal and try: code .

3. Your 15 Essential VS Code Extensions for Ruby on Rails

To enhance your development workflow, install the following VS Code extensions:

  1. GitHub Copilot – AI-assisted coding (already installed).
  2. vscode-icons – Better file and folder icons.
  3. Tabnine AI – AI autocompletion for JavaScript and other languages.
  4. Ruby & Ruby LSP – Language support and linting.
  5. ERB Formatter/Beautify – Formats .erb files (requires htmlbeautifier gem): gem install htmlbeautifier
  6. ERB Helper Tags – Autocomplete for ERB tags.
  7. GitLens – Advanced Git integration.
  8. Ruby Solargraph – Provides code completion and inline documentation (requires solargraph gem): gem install solargraph
  9. Rails DB Schema – Auto-completion for Rails database schema.
  10. ruby-rubocop – Ruby linting and auto-formatting (requires rubocop gem): gem install rubocop
  11. endwise – Auto-adds end keyword in Ruby.
  12. Output Colorizer – Enhances syntax highlighting in log files.
  13. Auto Rename Tag – Automatically renames paired HTML/Ruby tags.
  14. Highlight Matching Tag – Highlights matching tags for better visibility.
  15. Bracket Pair Colorizer 2 – Improved bracket highlighting.

Conclusion

By following this guide, you’ve successfully set up a robust Ruby on Rails development environment on macOS. With Mise for version management, Rails installed, and VS Code configured with essential extensions, you’re ready to start building Ruby on Rails applications.

Part 2: https://railsdrop.com/2025/03/22/setup-rails-8-app-rubocop-actiontext-image-processing-part-2

Happy Rails setup! ๐Ÿš€

Setting Up Terminal ๐Ÿ–ฅ๏ธ for Development on MacOS (Updated 2025)

If you’re setting up your MacBook for development, having a well-configured terminal is essential. This guide will walk you through installing and configuring a powerful terminal setup using Homebrew, iTerm2, and Oh My Zsh, along with useful plugins.

1. Install Homebrew

Homebrew is a package manager that simplifies installing software on macOS.

Open the Terminal and run:

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

After installation, add Homebrew to your PATH by running the following commands:

echo >> ~/.zprofile
echo 'eval "$(/opt/homebrew/bin/brew shellenv)"' >> ~/.zprofile
eval "$(/opt/homebrew/bin/brew shellenv)"

Verify installation:

brew --version

Check here.

2. Install iTerm2

The default macOS Terminal is functional but lacks advanced features. iTerm2 is a powerful alternative.

Install it using Homebrew:

brew install --cask iterm2

Open iTerm2 from your Applications folder after installation.

Check and Install Git

Ensure Git is installed:

git --version

If not installed, install it using Homebrew:

brew install git

3. Install Oh My Zsh

Oh My Zsh enhances the Zsh shell with themes and plugins. Install it with:

sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"

Check here.

Configure .zshrc

Edit your .zshrc file:

vim ~/.zshrc

Add useful plugins:

plugins=(git rails ruby)

The default theme is robbyrussell. You can explore other themes here.

Customize iTerm2 Color Scheme

Find and import themes from iTerm2 Color Schemes.

4. Add Zsh Plugins

Enhance your terminal experience with useful plugins.

a. Install zsh-autosuggestions

This plugin provides command suggestions as you type.

Install via Oh My Zsh:

git clone https://github.com/zsh-users/zsh-autosuggestions ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-autosuggestions

Or install via Homebrew:

brew install zsh-autosuggestions

Add to ~/.zshrc:

plugins=(git rails ruby zsh-autosuggestions)

If installed via Homebrew, add:

source /opt/homebrew/share/zsh-autosuggestions/zsh-autosuggestions.zsh

to the bottom of ~/.zshrc.

Restart iTerm2:

exec zsh

b. Install zsh-syntax-highlighting

This plugin highlights commands to distinguish valid syntax from errors.

Install via Oh My Zsh:

git clone https://github.com/zsh-users/zsh-syntax-highlighting.git ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-syntax-highlighting

Add to .zshrc:

plugins=(git rails ruby zsh-autosuggestions zsh-syntax-highlighting)

Restart iTerm2:

exec zsh

Wrapping Up

Your terminal is now set up for an optimized development experience! With Homebrew, iTerm2, Oh My Zsh, and useful plugins, your workflow will be faster and more efficient.

to be continued …

Rails 4.2: How to create a full URL with given host and port

Basically, you can generate URLs based on the current URL during a request, or you can create your own URLs by using Rails' ActionDispatch::Integration::Session class.

Rails creates an object called app (available in the Rails console). It is an ActionDispatch::Integration::Session object.

You can make use of that object for creating your own URLs like:

> app.root_url(:port => 20)  => "http://www.example.com:20/"

> app.root_url(:port => 20, :host => 'www.bing.com')
 => "http://www.bing.com:20/"

During a request you can use the URL helpers the same way:

 > root_url(:port => 20)

Create bootable usb drive of OSX from Mac OS

You can use mac’s createinstallmedia command. The format to create a bootable USB is given below.

$ sudo /Applications/Install\ macOS\ Sierra.app/Contents/Resources/createinstallmedia --volume |YOUR-USB-DRIVE-PATH-HERE| --applicationpath |DOT-APP-FILE-MACOS|

In My system the command will be as follows:

$ sudo /Applications/Install\ macOS\ Sierra.app/Contents/Resources/createinstallmedia --volume /Volumes/ABHI\'S/ --applicationpath /Applications/Install\ macOS\ Sierra.app/

You can easily find the corresponding installer path on your system if you have a different macOS version than the one mentioned above.

For more details, visit:
https://support.apple.com/en-us/HT201372

MongoDB: how to export / import on Linux/Mac

To export (dump) a MongoDB database, use the following command:

$ mongodump --db database_name

This will dump the JSON/BSON files into the dump/db_name folder.
Or specify a directory with -o option

$ mongodump --db database_name -o path_to_folder

By specifying username and password

$ mongodump --db database_name -o /path/to/folder/ --username=my_user --password="my_password"

To import (restore) a MongoDB database, use the following command:

$ mongorestore --db database_name path_to_the_json_bson_files

path_to_the_json_bson_files => the dump folder we created above.

Dump and restore a single collection

$ mongodump --db=db_name --collection=collection_name --out=path_to_folder_to_import
$ mongorestore --db=new_db_name --collection=collection_name path_to_folder_to_import/db_name/collection_name.bson