Rails 8 App: Setup Test DB in PostgreSQL | Query Performance Using EXPLAIN ANALYZE

Now we’ll go full-on query performance pro mode using EXPLAIN ANALYZE and real plans. We’ll learn how PostgreSQL makes decisions, how to catch slow queries, and how your indexes make them 10x faster.

💎 Part 1: What is EXPLAIN ANALYZE?

EXPLAIN shows how PostgreSQL plans to execute your query.

ANALYZE runs the query and adds actual time, rows, loops, etc.

Syntax:

EXPLAIN ANALYZE
SELECT * FROM users WHERE username = 'bob';

✏️ Example 1: Without Index

SELECT * FROM users WHERE username = 'bob';

If username has no index, plan shows:

Seq Scan on users
  Filter: (username = 'bob')
  Rows Removed by Filter: 9999

❌ PostgreSQL scans all rows = Sequential Scan = slow!

➕ Add Index:

CREATE INDEX idx_users_username ON users (username);

Now rerun:

EXPLAIN ANALYZE SELECT * FROM users WHERE username = 'bob';

You’ll see:

Index Scan using idx_users_username on users
  Index Cond: (username = 'bob')

✅ PostgreSQL uses B-tree index
🚀 Massive speed-up!

🔥 Want even faster?

SELECT username FROM users WHERE username = 'bob';

If PostgreSQL shows:

Index Only Scan using idx_users_username on users
  Index Cond: (username = 'bob')

🎉 Index Only Scan! = covering index success!
No heap fetch = lightning-fast.

⚠️ Note: Index-only scan only works if:

  • Index covers all selected columns
  • Table is vacuumed (PostgreSQL uses visibility map)

If you still get Seq scan output like:

test_db=# EXPLAIN ANALYSE SELECT * FROM users where username = 'aman_chetri';
                                           QUERY PLAN
-------------------------------------------------------------------------------------------------
 Seq Scan on users  (cost=0.00..1.11 rows=1 width=838) (actual time=0.031..0.034 rows=1 loops=1)
   Filter: ((username)::text = 'aman_chetri'::text)
   Rows Removed by Filter: 2
 Planning Time: 0.242 ms
 Execution Time: 0.077 ms
(5 rows)

even after adding an index, because PostgreSQL is saying:

  • 🤔 “The table is so small (cost = 1.11), scanning the whole thing is cheaper than using the index.”
  • Also: if your query selects only username, it could be eligible for an Index Only Scan, but a heap fetch might still be needed because of the visibility map.

🔧 Step-by-step Fix:

✅ 1. Add Data for Bigger Table

If the table is small (few rows), PostgreSQL will prefer Seq Scan no matter what.

Try adding ~10,000 rows:

INSERT INTO users (username, email, phone_number)
SELECT 'user_' || i, 'user_' || i || '@mail.com', '1234567890'
FROM generate_series(1, 10000) i;

Then run VACUUM ANALYZE users; again and retry the EXPLAIN.

✅ 2. Confirm Index Exists

First, check your index exists and is recognized:

\d users

You should see something like:

Indexes:
    "idx_users_username" btree (username)

If not, add:

CREATE INDEX idx_users_username ON users(username);

✅ 3. Run ANALYZE (Update Stats)
ANALYZE users;

This updates statistics — PostgreSQL might not be using the index if it thinks only one row matches or the table is tiny.
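If you’re curious what the planner now believes about the column, the pg_stats view exposes the gathered statistics (a quick sanity check, not part of the fix):

-- Inspect the statistics ANALYZE collected for users.username
SELECT n_distinct, null_frac, most_common_vals
FROM pg_stats
WHERE tablename = 'users' AND attname = 'username';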

✅ 4. Vacuum for Index-Only Scan

Index-only scans require the visibility map to be set.

Run:

VACUUM ANALYZE users;

This marks pages in the table as “all-visible,” enabling PostgreSQL to avoid reading the heap.

✅ 5. Force PostgreSQL to Consider Index

You can turn off sequential scan temporarily (for testing):

SET enable_seqscan = OFF;

EXPLAIN SELECT username FROM users WHERE username = 'bob';

You should now see:

Index Scan using idx_users_username on users ...

⚠️ Use this only for testing/debugging, never in production, and restore the default afterwards with RESET enable_seqscan;

💡 Extra Tip (optional): Use EXPLAIN (ANALYZE, BUFFERS)
EXPLAIN (ANALYZE, BUFFERS)
SELECT username FROM users WHERE username = 'bob';

This will show:

  • Whether heap was accessed
  • Buffer hits
  • Actual rows
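The extra detail appears as additional plan lines; illustrative output (your numbers will differ):

Index Only Scan using idx_users_username on users  (cost=0.29..4.30 rows=1 width=9) (actual time=0.015..0.016 rows=1 loops=1)
  Index Cond: (username = 'bob'::text)
  Heap Fetches: 0
  Buffers: shared hit=3

Heap Fetches: 0 is the confirmation that the heap was never touched.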
📋 Summary

Step                      | Command
--------------------------+---------------------------
Check index               | \d users
Analyze table             | ANALYZE users;
Vacuum for visibility     | VACUUM ANALYZE users;
Disable seq scan for test | SET enable_seqscan = OFF;
Add more rows (optional)  | INSERT INTO ...

🚨 How to catch bad index usage?

Always look for:

  • “Seq Scan” instead of “Index Scan” ➔ missing index
  • “Heap Fetch” ➔ not a covering index
  • “Rows Removed by Filter” ➔ inefficient filtering
  • “Loops: 1000+” ➔ possible N+1 issue

Common Pattern Optimizations

Pattern                     | Fix
----------------------------+-------------------------------------------------------------------
WHERE column = ?            | B-tree index on column
WHERE column LIKE 'prefix%' | B-tree works (with text_pattern_ops)
SELECT col1 WHERE col2 = ?  | Covering index: (col2, col1) or (col2) INCLUDE (col1)
WHERE col BETWEEN ? AND ?   | Composite index with the range column second: (status, created_at)
WHERE col IN (?, ?, ?)      | Index still helps
ORDER BY col LIMIT 10       | Index on col helps sort fast

⚡ Tip: Use pg_stat_statements to Find Slow Queries

Enable it in postgresql.conf:

shared_preload_libraries = 'pg_stat_statements'
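Restart PostgreSQL after changing this setting, then create the extension inside your database (the view doesn’t exist until you do):

CREATE EXTENSION IF NOT EXISTS pg_stat_statements;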

Then run:

SELECT query, total_exec_time, calls
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 5;

🎯 Find your worst queries & optimize them with new indexes!

🧪 Try It Yourself

Want a little lab setup to practice?

CREATE TABLE users (
  user_id serial PRIMARY KEY,
  username VARCHAR(220),
  email VARCHAR(150),
  phone_number VARCHAR(20)
);

-- Insert 100K fake rows
INSERT INTO users (username, email, phone_number)
SELECT
  'user_' || i,
  'user_' || i || '@example.com',
  '999-000-' || i
FROM generate_series(1, 100000) i;

Then test:

  1. EXPLAIN ANALYZE SELECT * FROM users WHERE username = 'user_5000';
  2. Add an index: CREATE INDEX idx_users_username ON users (username);
  3. Re-run, compare speed!

🎯 Extra Pro Tools for Query Performance

  • EXPLAIN ANALYZE → Always first tool
  • pg_stat_statements → Find slow queries in real apps
  • auto_explain → Log slow plans automatically (see the sketch below)
  • pgBadger or pgHero → Visual query monitoring
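To try auto_explain without touching postgresql.conf, you can load it per session (a sketch; the 250ms threshold is an arbitrary choice, and LOAD requires superuser):

-- Log the plan of any statement in this session slower than 250 ms
LOAD 'auto_explain';
SET auto_explain.log_min_duration = '250ms';
SET auto_explain.log_analyze = on;

For always-on logging, add auto_explain to shared_preload_libraries instead.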
💥 Now We Know:

✅ How to read query plans
✅ When you’re doing full scans vs index scans
✅ How to achieve index-only scans
✅ How to catch bad performance early
✅ How to test and fix in real world

Happy Performance Fixing.. 🚀

Rails 8 App: Setup Test DB in PostgreSQL | Covering Index | BRIN Indexes | Hash Indexes | Create super fast indexes

Let’s look into some of the features of SQL data indexing. This will be super helpful while developing our Rails 8 Application.

💎 Part 1: What is a Covering Index?

Normally when you query:

SELECT * FROM users WHERE username = 'bob';

  • Database searches username index (secondary).
  • Finds a pointer (TID or PK).
  • Then fetches full row from table (heap or clustered B-tree).

Problem:

  • Heap fetch = extra disk read.
  • Clustered B-tree fetch = extra traversal.

📜 Covering Index idea:

✅ If the index already contains all the columns you need,
✅ Then the database does not need to fetch the full row!

It can answer the query purely by scanning the index! ⚡

Boom — one disk read, no extra hop!

✏️ Example in PostgreSQL:

Suppose your query is:

SELECT username FROM users WHERE username = 'bob';

  • You only need username.
  • But by default, PostgreSQL indexes only store the index column (here, username) + TID.

✅ So in this case — already covering!

No heap fetch needed!

✏️ Example in MySQL InnoDB:

Suppose your query is:

SELECT username FROM users WHERE username = 'bob';

  • Secondary index (username) contains:
    • username (indexed column)
    • user_id (because secondary indexes in InnoDB always store PK)

♦️ So again, already covering!
No need to jump to the clustered index!

🎯 Key point:

If your query only asks for columns already inside the index,
then only the index is touched ➔ no second lookup ➔ super fast!

💎 Part 2: Real SQL Examples

✨ PostgreSQL

Create a covering index for common query:

CREATE INDEX idx_users_username_email ON users (username, email);

Now if you run:

SELECT email FROM users WHERE username = 'bob';

Postgres can:

  • Search index on username
  • Already have email in index
  • ✅ No heap fetch!

(And Postgres is smart: it checks index-only scan automatically.)

✨ MySQL InnoDB

Create a covering index:

CREATE INDEX idx_users_username_email ON users (username, email);

✅ Now query:

SELECT email FROM users WHERE username = 'bob';

Same behavior:

  • Only secondary index read.
  • No need to touch primary clustered B-tree.

💎 Part 3: Tips to design smart Covering Indexes

✅ If your query uses WHERE on col1 and SELECT col2,
✅ Best to create index: (col1, col2).

✅ Keep indexes small — don’t add 10 columns unless needed.
✅ Avoid huge TEXT or BLOB columns in covering indexes — they make indexes heavy.

Composite indexes are powerful:

CREATE INDEX idx_users_username_email ON users (username, email);

→ Can be used for:

  • WHERE username = ?
  • WHERE username = ? AND email = ?
  • etc.

✅ Monitor index usage:

  • PostgreSQL: EXPLAIN ANALYZE
  • MySQL: EXPLAIN

✅ Always check if Index Only Scan or Using Index appears in EXPLAIN plan!

📚 Quick Summary Table

Database     | Normal Query                                  | With Covering Index
-------------+-----------------------------------------------+-----------------------
PostgreSQL   | B-tree ➔ Heap fetch (unless TID optimization) | B-tree scan only
MySQL InnoDB | Secondary B-tree ➔ Primary B-tree             | Secondary B-tree only
Result       | 2 steps                                       | 1 step
Speed        | Slower                                        | Faster

🏆 Great! — Now We Know:

🧊 How heap fetch works!
🧊 How block lookup is O(1)!
🧊 How covering indexes skip heap fetch!
🧊 How to create super fast indexes for PostgreSQL and MySQL!


🦾 Advanced Indexing Tricks (Real Production Tips)

Now it’s time to look into some powerful features Postgres supports for making our SQL data search/fetch fast and efficient.

1. 🎯 Partial Indexes (PostgreSQL ONLY)

✅ Instead of indexing the whole table,
✅ You can index only the rows you care about!

Example:

Suppose 95% of users have status = 'inactive', but you only search active users:

SELECT * FROM users WHERE status = 'active' AND email = 'bob@example.com';

👉 Instead of indexing the whole table:

CREATE INDEX idx_active_users_email ON users (email) WHERE status = 'active';

♦️ PostgreSQL will only store rows with status = 'active' in this index!

Advantages:

  • Smaller index = Faster scans
  • Less space on disk
  • Faster index maintenance (fewer index entries to update on writes)

Important:

  • MySQL (InnoDB) does NOT support partial indexes 😔 — only PostgreSQL has this superpower.

2. 🎯 INCLUDE Indexes (PostgreSQL 11+)

✅ Normally, a composite index uses all columns for sorting/searching.
✅ With INCLUDE, extra columns are just stored in index, not used for ordering.

Example:

CREATE INDEX idx_username_include_email ON users (username) INCLUDE (email);

Meaning:

  • username is indexed and ordered.
  • email is only stored alongside.

Now query:

SELECT email FROM users WHERE username = 'bob';

Index-only scan — no heap fetch.

Advantages:

  • Smaller & faster than normal composite indexes.
  • Helps to create very efficient covering indexes.

Important:

  • MySQL has no direct equivalent of INCLUDE; the usual workaround is a wider composite index.

3. 🎯 Composite Index Optimization

✅ Always order columns inside index smartly based on query pattern.

Golden Rules:

⚜️ Equality columns first (WHERE col = ?)
⚜️ Range columns second (WHERE col BETWEEN ? AND ?)
⚜️ SELECT columns last (for covering)

Example:

If query is:

SELECT email FROM users WHERE status = 'active' AND created_at > '2024-01-01';

Best index:

CREATE INDEX idx_users_status_created_at ON users (status, created_at) INCLUDE (email);

♦️ status first (equality match)
♦️ created_at second (range)
♦️ email included (covering)

Bad Index: (wrong order)

CREATE INDEX idx_created_at_status ON users (created_at, status);

→ Will not be efficient!

4. 🎯 BRIN Indexes (PostgreSQL ONLY, super special!)

✅ When your table is very huge (millions/billions of rows),
✅ And rows are naturally ordered (like timestamp, id increasing),
✅ You can create a BRIN (Block Range Index).

Example:

CREATE INDEX idx_users_created_at_brin ON users USING BRIN (created_at);

♦️ BRIN stores summaries of large ranges of pages (e.g., min/max timestamp per 128 pages).

♦️ Ultra small index size.

♦️ Very fast for large range queries like:

SELECT * FROM users WHERE created_at BETWEEN '2024-01-01' AND '2024-04-01';

Important:

  • BRIN ≠ B-tree
  • BRIN is approximate, B-tree is precise.
  • Only useful if data is naturally correlated with physical storage order.
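The “per 128 pages” granularity mentioned above is the default and can be tuned with the pages_per_range storage parameter (64 below is an arbitrary choice; smaller ranges mean a bigger index but finer pruning):

-- Trade index size for pruning precision
CREATE INDEX idx_users_created_at_brin ON users
  USING BRIN (created_at) WITH (pages_per_range = 64);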

MySQL?

  • MySQL does not have BRIN natively. PostgreSQL has a big advantage here.

5. 🎯 Hash Indexes (special case)

✅ If your query is always exact equality (not range),
✅ You can use hash indexes.

Example:

CREATE INDEX idx_users_username_hash ON users USING HASH (username);

Useful for:

  • Simple WHERE username = 'bob'
  • Never ranges (BETWEEN, LIKE, etc.)

⚠️ Warning:

  • Hash indexes were not crash-safe (not WAL-logged) before Postgres 10.
  • Now they are safe, but usually B-tree is still better unless you have very heavy point lookups.

😎 PRO-TIP: Which Index Type to Use?

Use case                                                           | Index type
-------------------------------------------------------------------+---------------------------
Search small ranges or equality                                    | B-tree
Search on huge tables with natural order (timestamps, IDs)         | BRIN
Only exact match, super heavy lookup                               | Hash
Search only small part of table (active users, special conditions) | Partial index
Need to skip heap fetch                                            | INCLUDE / Covering Index

🗺️ Quick Visual Mindmap:

Your Query
│
├── Need Equality + Range? ➔ B-tree
│
├── Need Huge Time Range Query? ➔ BRIN
│
├── Exact equality only? ➔ Hash
│
├── Want Smaller Index (filtered)? ➔ Partial Index
│
├── Want to avoid Heap Fetch? ➔ INCLUDE columns (Postgres) or Covering Index

🏆 Now we Know:

🧊 Partial Indexes
🧊 INCLUDE Indexes
🧊 Composite Index order tricks
🧊 BRIN Indexes
🧊 Hash Indexes
🧊 How to choose best Index

This is serious pro-level database knowledge.


Enjoy SQL! 🚀

Rails 8 App: Setup Test DB | Comprehensive Guide 📖 for PostgreSQL, MySQL Indexing – PostgreSQL Heap ⛰ vs MySQL InnoDB B-Tree 🌿

Enter into psql terminal:

✗ psql postgres
psql (14.17 (Homebrew))
Type "help" for help.

postgres=# \l
                                     List of databases
           Name            |  Owner   | Encoding | Collate | Ctype |   Access privileges
---------------------------+----------+----------+---------+-------+-----------------------
 studio_development | postgres | UTF8     | C       | C     |
  • Create a new test database
  • Create a users Table
  • Check the db and table details
postgres=# create database test_db;
CREATE DATABASE
postgres=# \c test_db
You are now connected to database "test_db" as user "abhilash".

test_db=# CREATE TABLE users (
user_id INT,
username VARCHAR(220),
email VARCHAR(150),
phone_number VARCHAR(20)
);
CREATE TABLE

test_db=# \dt
List of relations
 Schema | Name  | Type  |  Owner
--------+-------+-------+----------
 public | users | table | abhilash
(1 row)

test_db=# \d users;
                          Table "public.users"
    Column    |          Type          | Collation | Nullable | Default
--------------+------------------------+-----------+----------+---------
 user_id      | integer                |           |          |
 username     | character varying(220) |           |          |
 email        | character varying(150) |           |          |
 phone_number | character varying(20)  |           |          |

Add a Primary key to users and check the user table.

test_db=# ALTER TABLE users ADD PRIMARY KEY (user_id);
ALTER TABLE

test_db=# \d users;
                          Table "public.users"
    Column    |          Type          | Collation | Nullable | Default
--------------+------------------------+-----------+----------+---------
 user_id      | integer                |           | not null |
 username     | character varying(220) |           |          |
 email        | character varying(150) |           |          |
 phone_number | character varying(20)  |           |          |
Indexes:
    "users_pkey" PRIMARY KEY, btree (user_id)

# OR add primary key when creating the table:
CREATE TABLE users (
  user_id INT PRIMARY KEY,
  username VARCHAR(220),
  email VARCHAR(150),
  phone_number VARCHAR(20)
);

You can see that a unique constraint and an index are added when adding a primary key.

Why does adding a primary key also add an index?

  • A primary key must guarantee that each value is unique and fast to find.
  • Without an index, the database would have to scan the whole table every time you look up a primary key, which would be very slow.
  • So PostgreSQL automatically creates a unique index on the primary key to make lookups efficient and to enforce uniqueness at the database level.

👉 It needs the index for speed and to enforce the “no duplicates” rule of primary keys.

What is btree?

  • btree stands for Balanced Tree (specifically, a “B-tree” data structure).
  • It’s the default index type in PostgreSQL.
  • B-tree indexes organize the data in a tree structure, so that searches, inserts, updates, and deletes are all very efficient — about O(log n) time.
  • It’s great for looking up exact matches (like WHERE user_id = 123) or range queries (like WHERE user_id BETWEEN 100 AND 200).

👉 So when you see btree, it just means it’s using a very efficient tree structure for your primary key index.

Summary in one line:
Adding a primary key automatically adds a btree index to enforce uniqueness and make lookups super fast.


In MySQL (specifically InnoDB engine, which is default now):

  • Primary keys always create an index automatically.
  • The index is a clustered index — this is different from Postgres!
  • The index uses a B-tree structure too, just like Postgres.

👉 So yes, MySQL also adds an index and uses a B-tree under the hood for primary keys.

But here’s a big difference:

  • In InnoDB, the table data itself is stored inside the primary key’s B-tree.
    • That’s called a clustered index.
    • It means the physical storage of the table rows follows the order of the primary key.
  • In PostgreSQL, the index and the table are stored separately (non-clustered by default).

Example: If you have a table like this in MySQL:

CREATE TABLE users (
  user_id INT PRIMARY KEY,
  username VARCHAR(220),
  email VARCHAR(150)
);
  • user_id will have a B-tree clustered index.
  • The rows themselves will be stored sorted by user_id.

Short version:

Database       | Primary Key Behavior                                | B-tree? | Clustered?
---------------+-----------------------------------------------------+---------+--------------------------
PostgreSQL     | Separate index created for PK                       | Yes     | No (separate by default)
MySQL (InnoDB) | PK index + table rows stored inside the PK’s B-tree | Yes     | Yes (always clustered)

Why Indexing on Unique Columns (like email) Improves Lookup 🔍

Use Case

You frequently run queries like:

SELECT * FROM students WHERE email = 'john@example.com';

Without an index, this results in a full table scan — checking each row one-by-one.

With an index, the database can jump directly to the row using a sorted structure, significantly reducing lookup time — especially in large tables.


🌲 How SQL Stores Indexes Internally (PostgreSQL)

📚 PostgreSQL uses B-Tree indexes by default.

When you run:

CREATE UNIQUE INDEX idx_students_on_email ON students(email);

PostgreSQL creates a balanced B-tree like this:

          m@example.com
         /              \
  d@example.com     t@example.com
  /        \           /         \
...      ...        ...         ...

  • ✅ Keys (email values) are sorted lexicographically.
  • ✅ Each leaf node contains a pointer to the actual row in the students table (called a tuple pointer or TID).
  • ✅ Lookup uses binary search, giving O(log n) performance.

⚙️ Unique Index = Even Faster

Because all email values are unique, the database:

  • Can stop searching immediately once a match is found.
  • Doesn’t need to scan multiple leaf entries (no duplicates).

🧠 Summary

Feature             | Value
--------------------+-----------------------------------------------------------------
Index Type          | B-tree (default in PostgreSQL)
Lookup Time         | O(log n) vs O(n) without index
Optimized for       | Equality search (WHERE email = ...), sorting, joins
Email is unique?    | ✅ Yes – index helps even more (no need to check multiple rows)
Table scan avoided? | ✅ Yes – PostgreSQL jumps directly via B-tree lookup

What Exactly is a Clustered Index in MySQL (InnoDB)?

🔹 In MySQL InnoDB, the primary key IS the table.

🔹 A Clustered Index means:

  • The table’s data rows are physically organized in the order of the primary key.
  • No separate storage for the table – it’s merged into the primary key’s B-tree structure.

In simple words:
👉 “The table itself lives inside the primary key B-tree.”

That’s why:

  • Every secondary index must store the primary key value (not a row pointer).
  • InnoDB can only have one clustered index (because you can’t physically order a table in two different ways).
📈 Visual for MySQL Clustered Index

Suppose you have:

CREATE TABLE users (
  user_id INT PRIMARY KEY,
  username VARCHAR(255),
  email VARCHAR(255)
);

The storage looks like:

B-tree by user_id (Clustered)

user_id  | username | email
----------------------------
101      | Alice    | a@x.com
102      | Bob      | b@x.com
103      | Carol    | c@x.com

👉 Table rows stored directly inside the B-tree nodes by user_id!


🔵 PostgreSQL (Primary Key Index = Separate)

Imagine you have a users table:

users table (physical table):

row_id | user_id | username | email
-------------------------------------
  1    |   101   | Alice    | a@example.com
  2    |   102   | Bob      | b@example.com
  3    |   103   | Carol    | c@example.com

And the Primary Key Index looks like:

Primary Key B-Tree (separate structure):

user_id -> row pointer
 101    -> row_id 1
 102    -> row_id 2
 103    -> row_id 3

👉 When you query WHERE user_id = 102, PostgreSQL goes:

  • Find user_id 102 in the B-tree index,
  • Then jump to row_id 2 in the actual table.

🔸 Index and Table are separate.
🔸 Extra step: index lookup ➔ then fetch row.

🟠 MySQL InnoDB (Primary Key Index = Clustered)

Same users table, but stored like this:

Primary Key Clustered B-Tree (index + data together):

user_id | username | email
---------------------------------
  101   | Alice    | a@example.com
  102   | Bob      | b@example.com
  103   | Carol    | c@example.com

👉 When you query WHERE user_id = 102, MySQL:

  • Goes straight to user_id 102 in the B-tree,
  • Data is already there, no extra lookup.

🔸 Index and Table are merged.
🔸 One step: direct access!

📈 Quick Visual:

PostgreSQL
(Index)    ➔    (Table Row)
    |
    ➔ extra lookup needed

MySQL InnoDB
(Index + Row Together)
    |
    ➔ data found immediately

Summary:

  • PostgreSQL: primary key index is separate ➔ needs 2 steps (index ➔ table).
  • MySQL InnoDB: primary key index is clustered ➔ 1 step (index = table).

📚 How Secondary Indexes Work

Secondary Index = an index on a column that is not the primary key.

Example:

CREATE INDEX idx_username ON users(username);

Now you have an index on username.

🔵 PostgreSQL Secondary Index Behavior

  • Secondary indexes are separate structures from the table (just like the primary key index).
  • When you query by username, PostgreSQL:
    1. Finds the matching row_id using the secondary B-tree index.
    2. Then fetches the full row from the table by row_id.
  • This is called an Index Scan + Heap Fetch.

📜 Example:

Secondary Index (username -> row_id):

username -> row_id
------------------
Alice    -> 1
Bob      -> 2
Carol    -> 3

(users table is separate)

👉 Flexible, but needs 2 steps: index (row_id) ➔ table.

🟠 MySQL InnoDB Secondary Index Behavior

  • In InnoDB, secondary indexes don’t store row pointers.
  • Instead, they store the primary key value!

So:

  1. Find the matching primary key using the secondary index.
  2. Use the primary key to find the actual row inside the clustered primary key B-tree.

📜 Example:

Secondary Index (username -> user_id):

username -> user_id
--------------------
Alice    -> 101
Bob      -> 102
Carol    -> 103

(Then find user_id inside Clustered B-Tree)

✅ Needs 2 steps too: secondary index (primary key) ➔ clustered table.

📈 Quick Visual:

Feature         | PostgreSQL                      | MySQL InnoDB
----------------+---------------------------------+------------------------------------------------
Secondary Index | username ➔ row pointer (row_id) | username ➔ primary key (user_id)
Fetch Full Row  | Use row_id to get table row     | Use primary key to find row in clustered index
Steps to Fetch  | Index ➔ Table                   | Index ➔ Primary Key ➔ Table (clustered)

Action                 | PostgreSQL                     | MySQL InnoDB
-----------------------+--------------------------------+------------------------------------------
Primary Key Lookup     | Index ➔ Row (2 steps)          | Clustered Index (1 step)
Secondary Index Lookup | Index (row_id) ➔ Row (2 steps) | Secondary Index (PK) ➔ Row (2 steps)
Storage Model          | Separate index and table       | Primary key and table merged (clustered)

🌐 Now, let’s do some Real SQL Query ⛁ Examples!

1. Simple SELECT * FROM users WHERE user_id = 102;
  • PostgreSQL:
    Look into PK btree ➔ find row pointer ➔ fetch row separately.
  • MySQL InnoDB:
    Directly find the row inside the PK B-tree (no extra lookup).

MySQL is a little faster here because it needs only 1 step!

2. SELECT username FROM users WHERE user_id = 102; (Only 1 Column)
  • PostgreSQL:
    Might do an Index Only Scan if all needed data is in the index (very fast).
  • MySQL:
    Clustered index contains all columns already, no special optimization needed.

Both can be very fast, but PostgreSQL shines if the index is “covering” (i.e., contains all needed columns), because the index is smaller than MySQL’s clustered index, so fewer pages have to be read.

3. SELECT * FROM users WHERE username = 'Bob'; (Secondary Index Search)
  • PostgreSQL:
    Secondary index on username ➔ row pointer ➔ fetch table row.
  • MySQL:
    Secondary index on username ➔ get primary key ➔ clustered index lookup ➔ fetch data.

Both are 2 steps, but MySQL needs 2 different B-trees: secondary ➔ primary clustered.

Consider the below situation:

SELECT username FROM users WHERE user_id = 102;
  • user_id is the Primary Key.
  • You only want username, not full row.

Now:

🔵 PostgreSQL Behavior

👉 In PostgreSQL, by default:

  • It uses the primary key btree to find the row pointer.
  • Then fetches the full row from the table (heap fetch).

👉 But PostgreSQL has an optimization called Index-Only Scan.

  • If all requested columns are already present in the index,
  • And if the table visibility map says the row is still valid (no deleted/updated row needing visibility check),
  • Then Postgres does not fetch the heap.

👉 So in this case:

  • If the primary key index also stores username internally (or if an extra index is created covering username), Postgres can satisfy the query just from the index.

✅ Result: No table lookup needed ➔ Very fast (almost as fast as InnoDB clustered lookup).

📢 Postgres primary key indexes usually don’t store extra columns, unless you specifically create an index that includes them (INCLUDE (username) syntax in modern Postgres 11+).

🟠 MySQL InnoDB Behavior
  • In InnoDB:
    Since the primary key B-tree already holds all columns (user_id, username, email),
    It directly finds the row from the clustered index.
  • So when you query by PK, even if you only need one column, it has everything inside the same page/block.

One fast lookup.

🔥 Why sometimes Postgres can still be faster?
  • If PostgreSQL uses Index-Only Scan, and the page is already cached, and no extra visibility check is needed,
    Then Postgres may avoid touching the table at all and only scan the tiny index pages.
  • In this case, for very narrow queries (e.g., only 1 small field), Postgres can outperform even MySQL clustered fetch.

💡 Because fetching from a small index page (~8KB) is faster than reading bigger table pages.

🎯 Conclusion:

✅ MySQL clustered index is always fast for PK lookups.
✅ PostgreSQL can be even faster for small/narrow queries if Index-Only Scan is triggered.

👉 Quick Tip:

  • In PostgreSQL, you can make an index store extra columns by using:

CREATE INDEX idx_user_id_username ON users (user_id) INCLUDE (username);

Then index-only scans become more common and predictable! 🚀

Isn’t PostgreSQL also doing 2 B-tree scans? One for secondary index and one for table (row_id)?

When you query with a secondary index, like:

SELECT * FROM users WHERE username = 'Bob';
  • In MySQL InnoDB, I said:
    1. Find in secondary index (username ➔ user_id)
    2. Then go to primary clustered index (user_id ➔ full row)
Let’s look at PostgreSQL first:

♦️ Step 1: Search Secondary Index B-tree on username.

  • It finds the matching TID (tuple ID) or row pointer.
    • TID is a pair (block_number, row_offset).
    • Not a B-tree! Just a physical pointer.

♦️ Step 2: Use the TID to directly jump into the heap (the table).

  • The heap (table) is not a B-tree — it’s just a collection of unordered pages (blocks of rows).
  • PostgreSQL goes directly to the block and offset — like jumping straight into a file.

🔔 Important:

  • Secondary index ➔ TID ➔ heap fetch.
  • No second B-tree traversal for the table!
🟠 Meanwhile in MySQL InnoDB:

♦️ Step 1: Search Secondary Index B-tree on username.

  • It finds the Primary Key value (user_id).

♦️ Step 2: Now, search the Primary Key Clustered B-tree to find the full row.

  • Need another B-tree traversal based on user_id.

🔔 Important:

  • Secondary index ➔ Primary Key B-tree ➔ data fetch.
  • Two full B-tree traversals!
Real-world Summary:

♦️ PostgreSQL

  • Secondary index gives a direct shortcut to the heap.
  • One B-tree scan (secondary) ➔ Direct heap fetch.

♦️ MySQL

  • Secondary index gives PK.
  • Then another B-tree scan (primary clustered) to find full row.

PostgreSQL does not scan a second B-tree when fetching from the table — just a direct page lookup using TID.

MySQL does scan a second B-tree (primary clustered index) when fetching full row after secondary lookup.

Is heap fetch a searching technique? Why is it faster than B-tree?

📚 Let’s start from the basics:

When PostgreSQL finds a match in a secondary index, what it gets is a TID.

♦️ A TID (Tuple ID) is a physical address made of:

  • Block Number (page number)
  • Offset Number (row slot inside the page)

Example:

TID = (block_number = 1583, offset = 7)

🔵 How PostgreSQL uses TID?

  1. It directly calculates the location of the block (disk page) using block_number.
  2. It reads that block (if not already in memory).
  3. Inside that block, it finds the row at offset 7.

♦️ No search, no btree, no extra traversal — just:

  • Find the page (via simple number addressing)
  • Find the row slot

📈 Visual Example

Secondary index (username ➔ TID):

username | TID
---------+------------
Alice    | (1583, 7)
Bob      | (1592, 3)
Carol    | (1601, 12)

♦️ When you search for “Bob”:

  • Find (1592, 3) from secondary index B-tree.
  • Jump directly to Block 1592, Offset 3.
  • Done ✅!

Answer:

  • Heap fetch is NOT a search.
  • It’s a direct address lookup (fixed number).
  • Heap = unordered collection of pages.
  • Pages = fixed-size blocks (usually 8 KB each).
  • TID gives an exact GPS location inside heap — no searching required.

That’s why heap fetch is faster than another B-tree search:

  • No binary search, no B-tree traversal needed.
  • Only a simple disk/memory read + row offset jump.

🌿 B-tree vs 📁 Heap Fetch

Action          | B-tree                                  | Heap Fetch
----------------+-----------------------------------------+-------------------------------
What it does    | Binary search inside sorted tree nodes  | Direct jump to block and slot
Steps needed    | Traverse nodes (root ➔ internal ➔ leaf) | Directly read page and slot
Time complexity | O(log n)                                | O(1)
Speed           | Slower (needs comparisons)              | Very fast (direct)

🎯 Final and short answer:

♦️ In PostgreSQL, after finding the TID in the secondary index, the heap fetch is a direct, constant-time (O(1)) accessno B-tree needed!
♦️ This is faster than scanning another B-tree like in MySQL InnoDB.


🧩 Our exact question:

When we say:

Jump directly to Block 1592, Offset 3.

We are thinking:

  • There are thousands of blocks.
  • How can we directly jump to block 1592?
  • Shouldn’t that be O(n) (linear time)?
  • Shouldn’t there be some traversal?

🔵 Here’s the real truth:

  • No traversal needed.
  • No O(n) work.
  • Accessing Block 1592 is O(1) — constant time.

📚 Why?

Because of how files, pages, and memory work inside a database.

When PostgreSQL stores a table (the “heap”), it saves it in a file on disk.
The file is just a long array of fixed-size pages.

  • Each page = 8KB (default in Postgres).
  • Each block = 1 page = fixed 8KB chunk.
  • Block 0 is the first 8KB.
  • Block 1 is next 8KB.
  • Block 2 is next 8KB.
  • Block 1592 = (1592 × 8 KB) offset from the beginning.

✅ So block 1592 is simply located at 1592 × 8192 bytes = 13,041,664 bytes (about 12.4 MB) from the start of the file.

✅ Operating systems (and PostgreSQL’s Buffer Manager) know exactly how to seek to that byte position without reading everything before it.

📈 Diagram (imagine the table file):
+-----------+-----------+-----------+-----------+-----------+------+
| Block 0   | Block 1   | Block 2   | Block 3   | Block 4   |  ... |
+-----------+-----------+-----------+-----------+-----------+------+
  (8KB)       (8KB)       (8KB)       (8KB)       (8KB)

Finding Block 1592 ➔
Seek directly to offset 1592 * 8192 bytes ➔
Read 8KB ➔
Find row at Offset 3 inside it.

🤔 What happens technically?

If in memory (shared buffers / page cache):
  • PostgreSQL checks its buffer pool (shared memory).
  • “Do I already have block 1592 cached?”
    • ✅ Yes: immediately access memory address.
    • ❌ No: Load block 1592 from disk into memory.
If from disk (rare if cached):
  • File systems (ext4, xfs, etc) know how to seek to a byte offset in a file without reading previous parts.
  • Seek to (block_number × 8192) bytes.
  • Read exactly 8KB into memory.
  • No need to scan the whole file linearly.

📊 Final Step: Inside the Block

Once the block is loaded:

  • The block internally is structured like an array of tuples.
  • Each tuple is placed into an offset slot.
  • Offset 3 ➔ third tuple inside the block.

♦️ Again, this is just array lookup — no traversal, no O(n).

⚡ So to summarize:
Question                                    | Answer
--------------------------------------------+---------------------------------------------------------------------
How does PostgreSQL jump directly to block? | Using the block number × page size calculation (fixed offset math)
Is it O(n)?                                 | ❌ No, it’s O(1) constant time
Is there any traversal?                     | ❌ No traversal. Just a seek + memory read.
How fast?                                   | Extremely fast if cached, still fast with a disk seek.

🔥 Key concept:

PostgreSQL heap access is O(1) because the heap file is a flat sequence of fixed-size pages, and the TID gives exact coordinates.

🎯 Simple Real World Example:

Imagine you have a giant book (the table file).
Each page of the book is numbered (block number).

If someone says:

👉 “Go to page 1592.”

♦️ You don’t need to read pages 1 to 1591 first.
♦️ You just flip directly to page 1592.

📗 Same idea: no linear traversal, just positional lookup.

🧠 Deep thought:

Because blocks are fixed size and TID is known,
heap fetch is almost as fast as reading a small array.

(Actually faster than searching B-tree because B-tree needs multiple comparisons at each node.)

Enjoy SQL! 🚀

Setup 🛠 Rails 8 App – Part 13: Composite keys & Candidate keys in Rails DB

🔑 What Is a Composite Key?

A composite key is a primary key made up of two or more columns that together uniquely identify a row in a table.

Use a composite key when no single column is unique on its own, but the combination is.

👉 Example: Composite Key in Action

Let’s say we’re building a table to track which students are enrolled in which courses.

Without Composite Key:
-- This table might allow duplicates
CREATE TABLE Enrollments (
  student_id INT,
  course_id INT
);

Nothing stops the same student from enrolling in the same course multiple times!

With Composite Key:
CREATE TABLE Enrollments (
  student_id INT,
  course_id INT,
  PRIMARY KEY (student_id, course_id)
);

Now:

  • student_id alone is not unique
  • course_id alone is not unique
  • But together → each (student_id, course_id) pair is unique
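For example, with the composite primary key in place, a duplicate enrollment is rejected (values are hypothetical):

INSERT INTO Enrollments (student_id, course_id) VALUES (1, 101);  -- OK
INSERT INTO Enrollments (student_id, course_id) VALUES (1, 101);
-- ERROR:  duplicate key value violates unique constraint "enrollments_pkey"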

📌 Why Use Composite Keys?

When to Use                         | Why
------------------------------------+----------------------------------------
Tracking many-to-many relationships | Ensures unique pairs
Bridging/junction tables            | e.g., students-courses, authors-books
No natural single-column key        | But the combination is unique

⚠️ Things to Keep in Mind

  • Composite keys enforce uniqueness across multiple columns.
  • They can also be used as foreign keys in other tables.
  • Some developers prefer to add an auto-increment id as the primary key instead—but that’s a design choice.

🔎 What Is a Candidate Key?

A candidate key is any column (or combination of columns) in a table that can uniquely identify each row.

  • Every table can have multiple candidate keys
  • One of them is chosen to be the primary key
  • The rest are called alternate keys

🔑 Think of candidate keys as “potential primary keys”

👉 Example: Users Table

CREATE TABLE Users (
  user_id INT,
  username VARCHAR(80),
  email VARCHAR(150),
  phone_number VARCHAR(30)
);

Let’s get some hands-on experience with SQL queries by creating a TEST DB. Check https://railsdrop.com/2025/04/25/rails-8-app-part-13-2-test-sql-queries/

Assume:

  • user_id is unique
  • username is unique
  • email is unique
Candidate Keys:
  • user_id
  • username
  • email

You can choose any one of them as the primary key, depending on your design needs.

-- Choosing user_id as the primary key
PRIMARY KEY (user_id)

The rest (username, email) are alternate keys.

📌 Characteristics of Candidate Keys

Property      | Description
--------------+----------------------------------------------------------------------------------------
Uniqueness    | Must uniquely identify each row
Non-null      | Cannot contain NULL values
Minimality    | Must be the smallest set of columns that uniquely identifies a row (no extra columns)
No duplicates | No two rows have the same value(s)

👥 Candidate Key vs Composite Key

Concept       | Explanation
--------------+----------------------------------------------------
Candidate Key | Any unique identifier (single or multiple columns)
Composite Key | A candidate key that uses multiple columns

So: All composite keys are candidate keys, but not all candidate keys are composite.

💡 When Designing a Database

  • Find all possible candidate keys
  • Choose one as the primary key
  • (Optional) Define other candidate keys as unique constraints:

CREATE TABLE Users (
  user_id INT PRIMARY KEY,
  username VARCHAR UNIQUE,
  email VARCHAR UNIQUE
);


Let’s walk through a real-world example using a schema we are already working on: a shopping app that sells clothing for women, men, kids, and infants.

We’ll look at how candidate keys apply to real tables like Users, Products, Orders, etc.

🛍️ Example Schema: Shopping App

1. Users Table

CREATE TABLE Users (
  user_id SERIAL PRIMARY KEY,
  email VARCHAR UNIQUE,
  username VARCHAR UNIQUE,
  phone_number VARCHAR
);

Candidate Keys:

  • user_id
  • email
  • username

We chose user_id as the primary key, but both email and username could also uniquely identify a user — so they’re candidate keys.


2. Products Table

CREATE TABLE Products (
  product_id SERIAL PRIMARY KEY,
  sku VARCHAR UNIQUE,
  name VARCHAR,
  category VARCHAR
);

Candidate Keys:

  • product_id
  • sku ✅ (Stock Keeping Unit – a unique identifier for each product)

sku is a candidate key. We use product_id as the primary key, but you could use sku if you wanted a natural key instead.

3. Orders Table

CREATE TABLE Orders (
  order_id SERIAL PRIMARY KEY,
  user_id INT REFERENCES Users(user_id),
  order_number VARCHAR UNIQUE,
  created_at TIMESTAMP
);

Candidate Keys:

  • order_id
  • order_number

You might use order_number (e.g., "ORD-20250417-0012") for external reference and order_id internally. Both are unique identifiers = candidate keys.

4. OrderItems Table (Join Table)

This table links orders to the specific products and quantities purchased.

CREATE TABLE OrderItems (
  order_id INT,
  product_id INT,
  quantity INT,
  PRIMARY KEY (order_id, product_id),
  FOREIGN KEY (order_id) REFERENCES Orders(order_id),
  FOREIGN KEY (product_id) REFERENCES Products(product_id)
);

Candidate Key:

  • Composite key: (order_id, product_id)

Here, a combination of order_id and product_id uniquely identifies a row — i.e., what product was ordered in which order — making it a composite candidate key, and we’ve selected it as the primary key.

👀 Summary of Candidate Keys by Table

Table      | Candidate Keys           | Primary Key Used
-----------+--------------------------+------------------------
Users      | user_id, email, username | user_id
Products   | product_id, sku          | product_id
Orders     | order_id, order_number   | order_id
OrderItems | (order_id, product_id)   | (order_id, product_id)

Let’s explore how to implement candidate keys in both SQL and Rails (Active Record). Since we are working on a shopping app in Rails 8, I’ll show how to enforce uniqueness and data integrity in both layers:

🔹 1. Candidate Keys in SQL (PostgreSQL Example)

Let’s take the Users table with multiple candidate keys (email, username, and user_id).

CREATE TABLE users (
  user_id SERIAL PRIMARY KEY,
  email VARCHAR(255) NOT NULL UNIQUE,
  username VARCHAR(100) NOT NULL UNIQUE,
  phone_number VARCHAR(20)
);

  • user_id: chosen as the primary key
  • email and username: candidate keys, enforced via UNIQUE constraints

💎 Composite Key Example (OrderItems)

CREATE TABLE order_items (
  order_id INT,
  product_id INT,
  quantity INT NOT NULL,
  PRIMARY KEY (order_id, product_id),
  FOREIGN KEY (order_id) REFERENCES orders(order_id),
  FOREIGN KEY (product_id) REFERENCES products(product_id)
);

This sets (order_id, product_id) as a composite candidate key and primary key.

🔸 2. Candidate Keys in Rails (ActiveRecord)

Now let’s do the same with Rails models + migrations + validations.

users Migration (with candidate keys)

# db/migrate/xxxxxx_create_users.rb
class CreateUsers < ActiveRecord::Migration[8.0]
  def change
    create_table :users do |t|
      t.string :email, null: false
      t.string :username, null: false
      t.string :phone_number

      t.timestamps
    end

    add_index :users, :email, unique: true
    add_index :users, :username, unique: true
  end
end

User Model

class User < ApplicationRecord
  validates :email, presence: true, uniqueness: true
  validates :username, presence: true, uniqueness: true
end

✅ These are candidate keysemail and username could be primary keys, but we are using id instead.

✅ Composite Key with OrderItem (Join Table)

Active Record only gained native composite primary key support in Rails 7.1; the more common approach is still to enforce uniqueness via a multi-column index:

Migration:

class CreateOrderItems < ActiveRecord::Migration[8.0]
  def change
    create_table :order_items, id: false do |t|
      t.references :order, null: false, foreign_key: true
      t.references :product, null: false, foreign_key: true
      t.integer :quantity, null: false

      t.timestamps
    end

    add_index :order_items, [:order_id, :product_id], unique: true
  end
end

Model:

class OrderItem < ApplicationRecord
  belongs_to :order
  belongs_to :product

  validates :quantity, presence: true
  validates :order_id, uniqueness: { scope: :product_id }
end

🎯 This simulates a composite key behavior: each product can only appear once per order.

➕ Extra: Use composite_primary_keys Gem (Optional)

If you really need true composite primary keys on Rails versions before 7.1 (newer versions support them natively), use:

gem 'composite_primary_keys'

But it’s best to avoid unless your use case demands it — most Rails apps use a surrogate key (id) for simplicity.


to be continued.. 🚀

Setup 🛠 Rails 8 App – Part 12: Modify Product Schema – Apply Normalization

Right now we have the following fields in the Product table:

create_table "products", force: :cascade do |t|
    t.string "title", null: false
    t.text "description"
    t.string "category"
    t.string "color"
    t.string "size", limit: 10
    t.decimal "mrp", precision: 7, scale: 2
    t.decimal "discount", precision: 7, scale: 2
    t.decimal "rating", precision: 2, scale: 1
    t.datetime "created_at", null: false
    t.datetime "updated_at", null: false
  end

If you examine the table above, product rows will repeat: one item can come in several sizes, and each size in many colours, so we would need a separate product row for every size and colour combination.

So Let’s split the table into two.

1. Product Table

class CreateProducts < ActiveRecord::Migration[8.0]
  def change
    create_table :products do |t|
      t.string  :name
      t.text    :description
      t.string  :category   # women, men, kids, infants
      t.decimal :rating, precision: 2, scale: 1, default: 0.0

      t.timestamps
    end

    add_index :products, :category
  end
end

2. Product Variant Table

class CreateProductVariants < ActiveRecord::Migration[8.0]
  def change
    create_table :product_variants do |t|
      t.references :product, null: false, foreign_key: true
      t.string  :sku, null: false
      t.decimal :price, precision: 10, scale: 2
      t.string  :size
      t.string  :color
      t.integer :stock_quantity, default: 0
      t.jsonb   :specs, default: {}, null: false

      t.timestamps
    end

    # GIN index for fast JSONB attribute searching
    add_index :product_variants, :specs, using: :gin
    add_index :product_variants, [ :product_id, :size, :color ], unique: true
    add_index :product_variants, :sku, unique: true
  end
end
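To pair with these two tables, the models might look like this (a sketch; the validations are assumptions, mirroring the unique indexes above):

class Product < ApplicationRecord
  has_many :product_variants, dependent: :destroy
end

class ProductVariant < ApplicationRecord
  belongs_to :product

  # Mirrors the unique indexes on sku and (product_id, size, color)
  validates :sku, presence: true, uniqueness: true
  validates :size, uniqueness: { scope: [ :product_id, :color ] }
end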

Data normalization is a core concept in database design that helps organize data efficiently, eliminate redundancy, and ensure data integrity.


🔍 What Is Data Normalization?

Normalization is the process of structuring a relational database in a way that:

  • Reduces data redundancy (no repeated data)
  • Prevents anomalies in insert, update, or delete operations
  • Improves data integrity

It breaks down large, complex tables into smaller, related tables and defines relationships using foreign keys.

🧐 Why Normalize?

Problem Without Normalization | How Normalization Helps
------------------------------+----------------------------------------------
Duplicate data everywhere     | Moves repeated data into separate tables
Inconsistent values           | Enforces rules and relationships
Hard to update data           | Isolates each concept so it’s updated once
Wasted storage                | Reduces data repetition

📚 Normal Forms (NF)

Each Normal Form (NF) represents a level of database normalization. The most common are:

🔸 1NF – First Normal Form

  • Eliminate repeating groups
  • Ensure each column has atomic (indivisible) values

Bad Example:

CREATE TABLE Orders (
  order_id INT,
  customer_name VARCHAR,
  items TEXT  -- "Shirt, Pants, Hat"
);

Fixed (1NF):

CREATE TABLE OrderItems (
  order_id INT,
  item_name VARCHAR
);

🔸 2NF – Second Normal Form

  • Must be in 1NF
  • Remove partial dependencies (when a non-key column depends only on part of a composite key)

Example: If a table has a composite key (student_id, course_id), and student_name depends only on student_id, it should go into a separate table.
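A sketch of that split (names are illustrative):

-- student_name depends only on student_id, so it moves to its own table
CREATE TABLE Students (
  student_id INT PRIMARY KEY,
  student_name VARCHAR
);

CREATE TABLE Enrollments (
  student_id INT REFERENCES Students(student_id),
  course_id INT,
  PRIMARY KEY (student_id, course_id)
);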

🔸 3NF – Third Normal Form

  • Must be in 2NF
  • Remove transitive dependencies (non-key columns depending on other non-key columns)

Example:

CREATE TABLE Employees (
  emp_id INT PRIMARY KEY,
  emp_name VARCHAR,
  dept_name VARCHAR,
  dept_location VARCHAR
);

Here, dept_location depends on dept_name, not emp_id — so split it:

Normalized:

CREATE TABLE Departments (
  dept_name VARCHAR PRIMARY KEY,
  dept_location VARCHAR
);

CREATE TABLE Employees (
  emp_id INT PRIMARY KEY,
  emp_name VARCHAR,
  dept_name VARCHAR REFERENCES Departments(dept_name)
);

💡 Real-World Example

Let’s say you have this table:

Orders(order_id, customer_name, customer_email, product1, product2, product3)

Problems:

  • Repeating columns (product1, product2, …)
  • Redundant customer data in each order

Normalized Version:

  1. Customers(customer_id, name, email)
  2. Orders(order_id, customer_id)
  3. OrderItems(order_id, product_id)
  4. Products(product_id, name, price)
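As SQL, the normalized version might look like this (column types are illustrative):

CREATE TABLE Customers (
  customer_id SERIAL PRIMARY KEY,
  name VARCHAR,
  email VARCHAR UNIQUE
);

CREATE TABLE Products (
  product_id SERIAL PRIMARY KEY,
  name VARCHAR,
  price DECIMAL(10, 2)
);

CREATE TABLE Orders (
  order_id SERIAL PRIMARY KEY,
  customer_id INT REFERENCES Customers(customer_id)
);

CREATE TABLE OrderItems (
  order_id INT REFERENCES Orders(order_id),
  product_id INT REFERENCES Products(product_id),
  PRIMARY KEY (order_id, product_id)
);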

⚖️ Normalization vs. Denormalization

  • Normalization = Good for consistency, long-term maintenance
  • ⚠️ Denormalization = Good for performance in read-heavy systems (like reporting dashboards)

Use normalization as a default practice, then selectively denormalize if performance requires it.


Delete button example (Rails 7+)

<%= link_to "Delete Product",
              @product,
              data: { turbo_method: :delete, turbo_confirm: "Are you sure you want to delete this product?" },
              class: "inline-block px-4 py-2 bg-red-100 text-red-600 border border-red-300 rounded-md hover:bg-red-600 hover:text-white font-semibold transition duration-300 transform hover:scale-105" %>

💡 What’s Improved:

  • data: { turbo_confirm: ... } ensures compatibility with Turbo (Rails 7+).
  • Better button-like appearance (bg, px, py, rounded, etc.).
  • Hover effects and transitions for a smooth UI experience.

Add Brand to products table

Let’s add brand column to the product table:

✗ rails g migration add_brand_to_products brand:string:index
class AddBrandToProducts < ActiveRecord::Migration[8.0]
  def change
    # Add 'brand' column
    add_column :products, :brand, :string

    # Add index for brand
    add_index :products, :brand
  end
end

❗️Important Note:

PostgreSQL does not support BEFORE or AFTER when adding a column.

Caused by:
PG::SyntaxError: ERROR:  syntax error at or near "BEFORE" (PG::SyntaxError)
LINE 1: ...LTER TABLE products ADD COLUMN brand VARCHAR(255) BEFORE des...
  • PostgreSQL (default in Rails) does not support column order (they’re always returned in the order they were created).
  • If you’re using MySQL, you could use raw SQL for positioning as shown below.

If I used MySQL, I would like to see brand as the first column of the products table. MySQL positions columns with FIRST or AFTER (it has no BEFORE), so change the migration to:

class AddBrandToProducts < ActiveRecord::Migration[8.0]
  def up
    execute "ALTER TABLE products ADD COLUMN brand VARCHAR(255) BEFORE description;"
    add_index :products, :brand
  end

  def down
    remove_index :products, :brand
    remove_column :products, :brand
  end
end

Reverting Previous Migrations

You can use Active Record’s ability to rollback migrations using the revert method:

require_relative "20121212123456_example_migration"

class FixupExampleMigration < ActiveRecord::Migration[8.0]
  def change
    revert ExampleMigration

    create_table(:apples) do |t|
      t.string :variety
    end
  end
end

The revert method also accepts a block of instructions to reverse. This could be useful to revert selected parts of previous migrations.

Reference: https://guides.rubyonrails.org/active_record_migrations.html#reverting-previous-migrations

Product Index Page after applying NF:
Product Show Page after applying NF:
New Product Page after applying NF:

to be continued.. 🚀

Setup 🛠 Rails 8 App – Part 11: Convert 🔄 Rails App from SQLite to PostgreSQL

If you’ve already built a Rails 8 app using the default SQLite setup and now want to switch to PostgreSQL, here’s a clean step-by-step guide to make the transition smooth:

1. 🔧 Setup PostgreSQL on macOS

🔷 Step 1: Install PostgreSQL via Homebrew

Run the following:

brew install postgresql

This created a default database cluster for me (check the output below), so you can skip Step 3.

==> Summary
🍺  /opt/homebrew/Cellar/postgresql@14/14.17_1: 3,330 files, 45.9MB

==> Running `brew cleanup postgresql@14`...
==> postgresql@14
This formula has created a default database cluster with:
  initdb --locale=C -E UTF-8 /opt/homebrew/var/postgresql@14

To start postgresql@14 now and restart at login:
  brew services start postgresql@14

Or, if you don't want/need a background service you can just run:
  /opt/homebrew/opt/postgresql@14/bin/postgres -D /opt/homebrew/var/postgresql@14

After installation, check the version:

psql --version
> psql (PostgreSQL) 14.17 (Homebrew)

🔷 Step 2: Start PostgreSQL Service

To start PostgreSQL now and have it start automatically at login:

brew services start postgresql
==> Successfully started `postgresql@14` (label: homebrew.mxcl.postgresql@14)

If you just want to run it in the background without autostart:

# pg_ctl — initialize, start, stop, or control a PostgreSQL server
pg_ctl -D /opt/homebrew/var/postgresql@14 start

https://www.postgresql.org/docs/current/app-pg-ctl.html

You can find the installed version using:

brew list | grep postgres

🔷 Step 3: Initialize the Database (if needed)

Sometimes Homebrew does this automatically. If not:

initdb /opt/homebrew/var/postgresql@<version>

Or a more general version:

initdb /usr/local/var/postgres

Key functions of initdb: it creates a new database cluster and initializes the cluster’s default locale and character-set encoding.

In essence, initdb prepares the environment for a PostgreSQL database to be used and provides a foundation for creating and managing databases within that cluster.

🔷 Step 4: Create a User and Database

PostgreSQL uses role-based access control. Create a user with superuser privileges:

# createuser creates a new Postgres user
createuser -s postgres

createuser is a wrapper around the SQL command CREATE ROLE, so there is nothing special about creating users via this utility rather than via other methods.

Then switch to psql:

psql postgres

You can also create a database:

createdb <db_name>

🔷 Step 5: Connect and Use psql

psql -d <db_name>

Inside the psql shell, try:

\l    -- list databases
\dt   -- list tables
\q    -- quit

🔷 Step 6: Use a GUI (Optional)

For a friendly UI, install one of the following:

pgAdmin

Postico

TablePlus

2. Update Gemfile

Replace SQLite gem with PostgreSQL:

# Remove or comment this:
# gem "sqlite3", "~> 1.4"

# Add this:
gem "pg", "~> 1.4"

Then run:

bundle install


3. Update config/database.yml

Replace the entire contents of config/database.yml with the following:

default: &default
  adapter: postgresql
  encoding: unicode
  username: postgres
  password:
  host: localhost
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>

development:
  <<: *default
  database: your_app_name_development

test:
  <<: *default
  database: your_app_name_test

production:
  primary: &primary_production
    <<: *default
    database: your_app_name_production
    username: your_production_username
    password: <%= ENV['YOUR_APP_DATABASE_PASSWORD'] %>
  cache:
    <<: *primary_production
    database: your_app_name_production_cache
    migrations_paths: db/cache_migrate
  queue:
    <<: *primary_production
    database: your_app_name_production_queue
    migrations_paths: db/queue_migrate
  cable:
    <<: *primary_production
    database: your_app_name_production_cable
    migrations_paths: db/cable_migrate

Replace your_app_name with your actual Rails app name.

4. Drop SQLite Database (Optional)

rm storage/development.sqlite3
rm storage/test.sqlite3

5. Create and Setup PostgreSQL Database

rails db:create
rails db:migrate

If you had seed data:

rails db:seed

6. Test It Works

Boot up your server:

bin/dev

Then go to http://localhost:3000 and confirm everything works.

7. Check psql manually (Optional)

psql -d your_app_name_development

Then run:

\dt     -- view tables
\q      -- quit

8. Update .gitignore

Note: skip this step if /storage/* is already ignored.

Make sure SQLite DBs are not accidentally committed:

/storage/*.sqlite3
/storage/*.sqlite3-journal


After moving into PostgreSQL

I was getting an issue with a Postgres column, where I had the following in the migration:

# migration
t.decimal :rating, precision: 1, scale: 1

# log
ActiveRecord::RangeError (PG::NumericValueOutOfRange: ERROR:  numeric field overflow
12:44:36 web.1  | DETAIL:  A field with precision 1, scale 1 must round to an absolute value less than 1.
12:44:36 web.1  | )

The value passed was 4.3. I was not getting this issue in the SQLite DB, because SQLite does not enforce precision/scale.

What does precision: 1, scale: 1 mean?

  • precision: Total number of digits (both left and right of the decimal).
  • scale: Number of digits after the decimal point

So precision: 1, scale: 1 can only hold values strictly between -1 and 1 (its single digit sits after the decimal point), which is why 4.3 overflows. If you want to store ratings like 4.3, 4.5, etc., a good setup is:

t.decimal :rating, precision: 2, scale: 1
# revert and migrate for products table

✗ rails db:migrate:down VERSION=2025031XXXXX -t
✗ rails db:migrate:up VERSION=2025031XXXXXX -t

Then go to http://localhost:3000 and confirm everything works.

to be continued.. 🚀

Setup 🛠 Rails 8 App – Part 9: Setup ⚙️ CI/CD with GitHub Actions | Run Test Cases via VS Code Co-pilot

Switching to a feature-branch workflow with pull requests is a great move for team collaboration, code review, and better CI/CD practices. Here’s how you can transition our Rails 8 app to a proper CI/CD pipeline using GitHub and GitHub Actions.

🔄 Workflow Change: Feature Branch + Pull Request

1. Create a new branch for each feature/task:
git checkout -b feature/feature-name
2. Push it to GitHub:
git push origin feature/feature-name
3. Open a Pull Request on GitHub from feature/feature-name to main.
4. Enable branch protection (optional but recommended):

Note: You can set up branch protection rules in GitHub for free only on public repositories.

About protected branches

You can protect important branches by setting branch protection rules, which define whether collaborators can delete or force push to the branch and set requirements for any pushes to the branch, such as passing status checks or a linear commit history.

You can create a branch protection rule in a repository for a specific branch, all branches, or any branch that matches a name pattern you specify with fnmatch syntax. For example, to protect any branches containing the word release, you can create a branch rule for *release*

  • Go to your repo → Settings → Branches → Protect main.
  • Require pull request reviews before merging.
  • Require status checks to pass before merging (CI tests).

Check the link in your github account: https://github.com/<user-name>/<repo-name>/settings/branch_protection_rules/new

For creating branch protection rules on a private repository, you need a paid GitHub (business) account OR to move your work into an organization (https://github.com/account/organizations/new).

GitHub Actions

GitHub Actions lets us run jobs (for example, testing the code) whenever an event such as a commit or push occurs, usually scoped to a branch.

Our goal: when we push to a feature branch, run the tests before the code is merged into the main branch, so we can be sure nothing broken goes live.
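
To make that goal concrete, here is a minimal workflow sketch; the file path, job name, and test command are illustrative, not the exact pipeline from the next part, and it assumes a database is available in CI:

# .github/workflows/ci.yml (illustrative sketch)
name: CI
on:
  pull_request:
    branches: [ main ]   # run whenever a PR targets main
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: ruby/setup-ruby@v1
        with:
          bundler-cache: true   # runs bundle install and caches gems
      - run: bin/rails db:test:prepare test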

You can try the GitHub Actions extension for VS Code to help with writing workflow files (it is great for autocompleting the fields you need and can auto-populate data such as env variables from your GitHub account):

Sign in using your GitHub account and grant access to the public repositories.

If you try to push directly to the main branch, you will see the following error:

remote: error: GH006: Protected branch update failed for refs/heads/main.
remote:
remote: - Changes must be made through a pull request.
remote:
remote: - Cannot change this locked branch
To github.com:<username>/<project>.git
 ! [remote rejected] main -> main (protected branch hook declined)

We will finish the database and all remaining setup for our web application before starting the CI/CD setup.

For the next part of CI/CD configuration check the post: https://railsdrop.com/2025/05/06/rails-8-ci-cd-setup-with-github-actions/

Let’s Start to Use VS Code Copilot For Test Creation/Execution

Test cases are important for a CI/CD setup. Our main focus will be running the Rails test cases when integrating CI.

  • Generate Tests using Copilot from a Controller
  • Copilot Creates Tests
  • Copilot Runs Tests
  • Use Copilot to Fix Test Failures
  • Test Results: Pending Migrations
  • Test Success: After Migration
  • VS Code: Check the Ruby Lint Symbol for Details
  • VS Code Tries to Run Tests: RuboCop Path Issue
  • Fixed RuboCop Issue: All Tests Pass

Profiling 📊 Ruby on Rails 8 Applications: Essential Tools and Techniques

Introduction

Performance optimization is critical for delivering fast, responsive Rails applications. This comprehensive guide covers the most important profiling tools you should implement in your Rails 8 application, complete with setup instructions and practical examples.

Why Profiling Matters

Before diving into tools, let’s understand why profiling is essential:

  1. Identify bottlenecks: Pinpoint exactly which parts of your application are slowing things down
  2. Optimize resource usage: Reduce memory consumption and CPU usage
  3. Improve user experience: Faster response times lead to happier users
  4. Reduce infrastructure costs: Efficient applications require fewer server resources

Essential Profiling Tools for Rails 8

1. Rack MiniProfiler

What it does: Provides real-time profiling of your application’s performance directly in your browser.

Why it’s important: It’s the quickest way to see performance metrics without leaving your development environment.

Installation:

# Gemfile
gem 'rack-mini-profiler', group: :development

Usage example:
After installation, it automatically appears in your browser’s corner showing:

  • SQL query times
  • Ruby execution time
  • Memory allocation
  • Flamegraphs (with additional setup)
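
The flamegraph view mentioned above is reached through rack-mini-profiler’s query-string switches (stackprof must be in your Gemfile for flamegraphs); the path below is just an example:

http://localhost:3000/products?pp=flamegraph   # render a flamegraph for this request
http://localhost:3000/products?pp=help         # list all available pp= switches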

Advantages:

  • No configuration needed for basic setup
  • Shows N+1 query warnings
  • Integrates with Rails out of the box

GitHub: https://github.com/MiniProfiler/rack-mini-profiler
Documentation: https://miniprofiler.com/

2. Bullet

What it does: Detects N+1 queries, unused eager loading, and missing counter caches.

Why it’s important: N+1 queries are among the most common performance issues in Rails applications.

Installation:

# Gemfile
gem 'bullet', group: :development

Configuration:

# config/environments/development.rb
config.after_initialize do
  Bullet.enable = true
  Bullet.alert = true
  Bullet.bullet_logger = true
  Bullet.console = true
  Bullet.rails_logger = true
end
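
If you also want your test suite to fail on these issues, Bullet can raise instead of just logging; a minimal sketch for the test environment:

# config/environments/test.rb
config.after_initialize do
  Bullet.enable = true
  Bullet.raise  = true # raise an error when an N+1 query is detected
end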

Example output:

GET /posts
USE eager loading detected
  Post => [:comments]
  Add to your query: Post.includes([:comments])

Advantages:

  • Catches common ORM performance issues early
  • Provides specific recommendations for fixes
  • Works across all environments

GitHub: https://github.com/flyerhzm/bullet
Documentation: https://github.com/flyerhzm/bullet/blob/master/README.md

3. Ruby Prof (and StackProf)

What it does: Low-level Ruby code profiler that shows exactly where time is being spent.

Why it’s important: When you need deep insight into method-level performance characteristics.

Installation:

# Gemfile
gem 'ruby-prof', group: :development
gem 'stackprof', group: :development

Usage example:

# In your controller or service object
result = RubyProf.profile do
  # Code you want to profile
end

printer = RubyProf::GraphPrinter.new(result)
printer.print(STDOUT, {})

For StackProf:

StackProf.run(mode: :cpu, out: 'tmp/stackprof.dump') do
  # Code to profile
end
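
Once you have a dump file, stackprof’s bundled CLI can summarize it from the terminal (the method name below is hypothetical):

bundle exec stackprof tmp/stackprof.dump --text                          # top methods by sample count
bundle exec stackprof tmp/stackprof.dump --method 'ProductsController#index'  # drill into one method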

Advantages:

  • Method-level granularity
  • Multiple output formats (call graphs, flamegraphs)
  • StackProf is sampling-based so has lower overhead

GitHub: https://github.com/ruby-prof/ruby-prof
Documentation: https://github.com/ruby-prof/ruby-prof/blob/master/README.md

StackProf Alternative:
GitHub: https://github.com/tmm1/stackprof
Documentation: https://github.com/tmm1/stackprof/blob/master/README.md

4. Memory Profiler

What it does: Tracks memory allocations and helps identify memory bloat.

Why it’s important: Memory issues can lead to slow performance and even crashes.

Installation:

# Gemfile
gem 'memory_profiler', group: :development

Usage example:

report = MemoryProfiler.report do
  # Code to profile
end

report.pretty_print(to_file: 'memory_report.txt')
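
To cut the noise from gems, the report can be scoped to files matching your app; a small sketch (the allow_files value and the profiled code are placeholders):

# Only track allocations from files whose path matches "your_app_name"
report = MemoryProfiler.report(allow_files: 'your_app_name') do
  Product.includes(:comments).to_a # placeholder hot path
end

report.pretty_print(to_file: 'memory_report.txt', scale_bytes: true) # human-readable sizes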

Advantages:

  • Shows allocated objects by class and location
  • Tracks retained memory after GC
  • Helps find memory leaks

GitHub: https://github.com/SamSaffron/memory_profiler
Documentation: https://github.com/SamSaffron/memory_profiler/blob/master/README.md

5. Skylight

What it does: Production-grade application performance monitoring (APM).

Why it’s important: Understanding real-world performance characteristics is different from development profiling.

Installation:

# Gemfile
gem 'skylight'

Configuration:

# config/skylight.yml
production:
  authentication: [YOUR_AUTH_TOKEN]
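
If you’d rather not commit the token, Skylight can also read it from the SKYLIGHT_AUTHENTICATION environment variable instead; for example:

# Set in your deployment environment instead of config/skylight.yml
export SKYLIGHT_AUTHENTICATION=your_auth_token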

Advantages:

  • Low-overhead production profiling
  • Endpoint-level performance breakdowns
  • Database query analysis
  • Exception tracking

Website: https://www.skylight.io
Documentation: https://docs.skylight.io
GitHub: https://github.com/skylightio/skylight-ruby

6. AppSignal

What it does: Full-stack performance monitoring and error tracking.

Why it’s important: Provides comprehensive insights across your entire application stack.

Installation:

# Gemfile
gem 'appsignal'

Then run:

bundle exec appsignal install YOUR_PUSH_API_KEY

Advantages:

  • Error tracking alongside performance
  • Host metrics integration
  • Background job monitoring
  • Magic Dashboard for quick insights

Website: https://appsignal.com
Documentation: https://docs.appsignal.com/ruby
GitHub: https://github.com/appsignal/appsignal-ruby

7. Derailed Benchmarks

What it does: Suite of benchmarks and performance tests for your application.

Why it’s important: Helps catch performance regressions before they hit production.

Installation:

# Gemfile
group :development, :test do
  gem 'derailed_benchmarks'
end

Usage examples:

# Memory usage at boot
bundle exec derailed bundle:mem

# Performance of a single endpoint (PATH_TO_HIT defaults to "/")
PATH_TO_HIT=/products bundle exec derailed exec perf:test

Advantages:

  • CI-friendly performance testing
  • Memory usage analysis
  • Route-based performance testing

GitHub: https://github.com/schneems/derailed_benchmarks
Documentation: https://github.com/schneems/derailed_benchmarks/blob/master/README.md

8. Flamegraph Generation

What it does: Visual representation of where time is being spent in your application.

Why it’s important: Provides an intuitive way to understand call stacks and hot paths.

Installation:

# Gemfile
gem 'flamegraph'
gem 'stackprof' # if not already installed

Usage example:

Flamegraph.generate('flamegraph.html') do
  # Code to profile
end

Advantages:

  • Visual representation of performance
  • Easy to spot hot paths
  • Interactive exploration

GitHubhttps://github.com/SamSaffron/flamegraph
Documentationhttp://samsaffron.github.io/flamegraph/rails-startup.html

Additional Helpful Tools 🔧

9. Benchmark-ips

Benchmark-ips (iterations per second) is a superior benchmarking tool compared to Ruby’s standard Benchmark library. It provides:

  1. Iterations-per-second measurement – More intuitive than raw time measurements
  2. Statistical analysis – Shows standard deviation between runs
  3. Comparison mode – Easily compare different implementations
  4. Warmup phase – Accounts for JIT and cache warming effects

Benchmark-ips is particularly valuable for:

  • Comparing algorithm implementations
  • Testing performance optimizations
  • Benchmarking gem alternatives
  • Validating performance-critical code

GitHub: https://github.com/evanphx/benchmark-ips
Documentation: https://github.com/evanphx/benchmark-ips/blob/master/README.md

Installation:

# Gemfile
gem 'benchmark-ips', group: :development

Basic Usage:

require 'benchmark/ips'

Benchmark.ips do |x|
  x.report("addition") { 1 + 2 }
  x.report("addition with to_s") { (1 + 2).to_s }
  x.compare!
end

Advanced Features:

Benchmark.ips do |x|
  x.time = 5   # run each benchmark for 5 seconds
  x.warmup = 2 # warmup time of 2 seconds

  x.report("Array#each") { [1,2,3].each { |i| i * i } }
  x.report("Array#map") { [1,2,3].map { |i| i * i } }

  # Add custom statistics
  x.config(stats: :bootstrap, confidence: 95)

  x.compare!
end

# Memory measurement (requires the benchmark-memory gem)
require 'benchmark/memory'

Benchmark.memory do |x|
  x.report("method1") { ... }
  x.report("method2") { ... }
  x.compare!
end

# Disable GC for more consistent results
# (GCSuite is a small custom suite class shown in the benchmark-ips README)
Benchmark.ips do |x|
  x.config(time: 5, warmup: 2, suite: GCSuite.new)
end

Sample Output:
Warming up --------------------------------------
            addition    281.899k i/100ms
  addition with to_s    261.831k i/100ms
Calculating -------------------------------------
            addition      8.614M (± 1.2%) i/s -     43.214M in   5.015800s
  addition with to_s      7.017M (± 1.8%) i/s -     35.347M in   5.038446s

Comparison:
            addition:  8613594.0 i/s
  addition with to_s:  7016953.3 i/s - 1.23x slower

Key Advantages

  1. Accurate comparisons with statistical significance
  2. Warmup phase eliminates JIT/caching distortions
  3. Memory measurements available through extensions
  4. Customizable reporting with various statistics options

10. Rails Performance (Dashboard)

What is Rails Performance?

Rails Performance is a self-hosted alternative to New Relic/Skylight that provides:

  1. Request performance tracking
  2. Background job monitoring
  3. Slowest endpoints identification
  4. Error tracking
  5. Custom event monitoring

Why It’s Important

For teams that:

  • Can’t use commercial SaaS solutions
  • Need to keep performance data in-house
  • Want historical performance tracking
  • Need simple setup without complex infrastructure

GitHub: https://github.com/igorkasyanchuk/rails_performance
Documentation: https://github.com/igorkasyanchuk/rails_performance/blob/master/README.md

Installation:

# Gemfile
gem 'rails_performance', group: :development

Then run:

rails g rails_performance:install
rake db:migrate

Configuration:

# config/initializers/rails_performance.rb
RailsPerformance.setup do |config|
  config.redis = Redis.new # optional, will use Rails.cache otherwise
  config.duration = 4.hours # store requests for 4 hours
  config.enabled = Rails.env.production?
  config.http_basic_authentication_enabled = true
  config.http_basic_authentication_user_name = 'admin'
  config.http_basic_authentication_password = 'password'
end

Accessing the Dashboard:

After installation, access the dashboard at:

http://localhost:3000/rails/performance

Custom Tracking:

# Track custom events
RailsPerformance.trace("custom_event", tags: { type: "import" }) do
  # Your code here
end

# Track background jobs
class MyJob < ApplicationJob
  around_perform do |job, block|
    RailsPerformance.trace(job.class.name, tags: job.arguments) do
      block.call
    end
  end
end
# Add custom fields to requests
RailsPerformance.attach_extra_payload do |payload|
  payload[:user_id] = current_user.id if current_user
end

# Track slow queries
ActiveSupport::Notifications.subscribe("sql.active_record") do |*args|
  event = ActiveSupport::Notifications::Event.new(*args)
  if event.duration > 100 # ms
    RailsPerformance.trace("slow_query", payload: {
      sql: event.payload[:sql],
      duration: event.duration
    })
  end
end

Sample Dashboard Views:
  1. Requests Overview:
    • Average response time
    • Requests per minute
    • Slowest actions
  2. Detailed Request View:
    • SQL queries breakdown
    • View rendering time
    • Memory allocation
  3. Background Jobs:
    • Job execution time
    • Failures
    • Queue times

Key Advantages

  1. Self-hosted solution – No data leaves your infrastructure
  2. Simple setup – No complex dependencies
  3. Historical data – Track performance over time
  4. Custom events – Track any application events
  5. Background jobs – Full visibility into async processes

Implementing a Complete Profiling Strategy

For a comprehensive approach, combine these tools at different stages:

  1. Development:
    • Rack MiniProfiler (always on)
    • Bullet (catch N+1s early)
    • RubyProf/StackProf (for deep dives)
  2. CI Pipeline:
    • Derailed Benchmarks
    • Memory tests
  3. Production:
    • Skylight or AppSignal
    • Error tracking with performance context

Sample Rails 8 Configuration

Here’s how to set up a complete profiling environment in a new Rails 8 app:

# Gemfile

# Development profiling
group :development do
  # Basic profiling
  gem 'rack-mini-profiler'
  gem 'bullet'
  
  # Deep profiling
  gem 'ruby-prof'
  gem 'stackprof'
  gem 'memory_profiler'
  gem 'flamegraph'
  
  # Benchmarking
  gem 'derailed_benchmarks', require: false
  gem 'benchmark-ips'
  
  # Dashboard
  gem 'rails_performance'
end

# Production monitoring (choose one)
group :production do
  gem 'skylight'
  # or
  gem 'appsignal'
  # or
  gem 'newrelic_rpm' # Alternative option
end

Then create an initializer for development profiling:

# config/initializers/profiling.rb
if Rails.env.development?
  require 'rack-mini-profiler'
  Rack::MiniProfilerRails.initialize!(Rails.application)

  Rails.application.config.after_initialize do
    Bullet.enable = true
    Bullet.alert = true
    Bullet.bullet_logger = true
    Bullet.rails_logger = true
  end
end

Conclusion

Profiling your Rails 8 application shouldn’t be an afterthought. By implementing these tools throughout your development lifecycle, you’ll catch performance issues early, maintain a fast application, and provide better user experiences.

Remember:

  • Use development tools like MiniProfiler and Bullet daily
  • Run deeper profiles with RubyProf before optimization work
  • Monitor production with Skylight or AppSignal
  • Establish performance benchmarks with Derailed

With this toolkit, you’ll be well-equipped to build and maintain high-performance Rails 8 applications.

Enjoy Rails! 🚀

Setup 🛠 Rails 8 App – Part 8: Debugbar – Apply performance 📈 optimization

1. Integrate pagy for pagination

Why it’s a great choice:

  • Super fast and lightweight (pagy’s own benchmarks show it dramatically outperforming Kaminari and WillPaginate).
  • No dependencies on Active Record or view helpers.
  • Very customizable and modular (can do Bootstrap/Tailwind/semantic UI integrations).
  • Supports metadata, responsive pagination, overflow handling, infinite scrolling, and JSON API pagination.

# Gemfile
# The Best Pagination Ruby Gem [https://ddnexus.github.io/pagy/]
gem "pagy", "~> 9.3" # omit patch digit

bundle install

Example Usage in Controller:

include Pagy::Backend

def index
  @pagy, @products = pagy(Product.all)
end

In the Product Helper / Application Helper:

include Pagy::Frontend

In the View (ERB or HAML):

<%= pagy_nav(@pagy) %>
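
For the JSON API pagination mentioned above, pagy ships a metadata extra; a minimal sketch (the controller path and action are illustrative):

# config/initializers/pagy.rb
require "pagy/extras/metadata"

# app/controllers/api/products_controller.rb (hypothetical)
def index
  @pagy, products = pagy(Product.all)
  render json: { data: products, pagination: pagy_metadata(@pagy) }
end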
Add an initializer file

Download the file from: https://ddnexus.github.io/pagy/quick-start/

https://ddnexus.github.io/pagy/gem/config/pagy.rb

and save it into the config/initializers directory. Uncomment limit and size options.

Tailwind Support:

# In an initializer (e.g., config/initializers/pagy.rb)
Pagy::DEFAULT[:limit]       = 20                    # default
Pagy::DEFAULT[:size]        = 7                     # default
# Better user experience handled automatically
require "pagy/extras/overflow"
Pagy::DEFAULT[:overflow] = :last_page

I got a load error when I tried to apply Tailwind CSS styling to the pagination views:

LoadError: cannot load such file -- pagy/extras/tailwind (LoadError)

Ah, pagy does not ship a Tailwind extra, and there is no tailwind file in the gem either!

Hmm.. 😟 Check below:

We can try to include the css manually, check: https://ddnexus.github.io/pagy/docs/api/stylesheets/#pagy-tailwind-css

Create a file pagy.tailwind.css and add the following:

.pagy {
    @apply flex space-x-1 font-semibold text-sm text-gray-500;
    a:not(.gap) {
      @apply block rounded-lg px-3 py-1 bg-gray-200;
      &:hover {
        @apply bg-gray-300;
      }
      &:not([href]) { /* disabled links */
        @apply text-gray-300 bg-gray-100 cursor-default;
      }
      &.current {
        @apply text-white bg-gray-400;
      }
    }
    label {
      @apply inline-block whitespace-nowrap bg-gray-200 rounded-lg px-3 py-0.5;
      input {
        @apply bg-gray-100 border-none rounded-md;
      }
    }
  }

Modify app/assets/tailwind/application.css:

@import "tailwindcss";
@import "./pagy.tailwind.css";

Restart your server and you’ve got it!

Testing performance

In the Queries tab in Debugbar, you can see that the SELECT * FROM products query has been replaced with a LIMIT query. Pagination alone won’t help, though, when a request still has to walk through many thousands of products, for example when searching. For those situations we can reach for view caching and SQL indexing.

to be continued.. 🚀

Setup 🛠 Rails 8 App – Part 7: Mastering Debugbar 👾 for Rails Performance Optimization

As Rails developers, we’ve all been there – your application starts slowing down as data grows, pages take longer to load, and memory usage spikes. Before you blame Rails itself or consider rewriting your entire application, you should profile your app to understand what’s really happening behind the scenes.

Most of the time, the issue lies in how the app is written: unnecessary SQL queries, excessive object allocations, or inefficient code patterns. Before you think about rewriting your app or switching frameworks, profile it.

That’s where Rails Debugbar shines: it helps you identify bottlenecks like slow database queries, excessive object allocations, and memory leaks, all from a convenient toolbar at the bottom of your development environment.


🤔 What is Rails Debugbar?

Rails Debugbar is a browser-integrated dev tool that adds a neat, powerful panel at the bottom of your app in development. It helps you answer questions like:

  • How long is a request taking?
  • How many SQL queries are being executed?
  • How many Ruby objects are being allocated?
  • Which parts of my code are slow?

It’s like a surgeon’s X-ray for your app, giving you visibility into internals without digging through logs or guessing. You get a better understanding of your application’s performance and behavior (SQL queries, jobs, cache, routes, logs, etc.).


⚙️ Installation & Setup (Rails 8)

Prerequisites

  • Ruby on Rails 5.2+ (works perfectly with Rails 8)
  • A Ruby version supported by your Rails version

1. Add it to your Gemfile:

group :development do
  gem 'debugbar'
end

Then run:

bundle install

2. Add the Debugbar layout helpers in your application layout:

In app/views/layouts/application.html.erb, just before the closing </head> and </body> tags:

<%= debugbar_head if defined?(Debugbar) %>
...
<%= debugbar_body if defined?(Debugbar) %>

That’s it! When you restart your server, you’ll see a sleek Debugbar docked at the bottom of the screen.

You can see ActionCable interacting with debugbar_channel in logs:

[ActionCable] Broadcasting to debugbar_channel: [{id: "xxxx-xxxx-xxxx-xxxx", meta: {controller: "ProductsController", action: "show", params: {"controller" => "products", "action" => "show", "id" => "3"}, format: :html, method: "GET", path: "/products/3", status: 200, view_runtime: 10.606000004219823, db_runtime: 0.44599999819...

23:47:17 web.1  | Debugbar::DebugbarChannel transmitting [{"id" => "xxxx-xxxx-xxxx-xxxx", "meta" => {"controller" => "ProductsController", "action" => "show", "params" => {"controller" => "products", "action" => "show", "id" => "3"}, "format" => "html", "method" => "GET", "path" => "/products/3", "status" => 200, "view_runtime" => 10.6... (via streamed from debugbar_channel)

23:47:17 web.1  | Debugbar::DebugbarChannel#receive({"ids" => ["xxxx-xxxx-xxxx-xxxx"]})
23:47:17 web.1  | [ActionCable] Broadcasting to debugbar_channel: []

23:47:17 web.1  | Debugbar::DebugbarChannel transmitting [] (via streamed from debugbar_channel)

📚 Official links for reference:

GitHub: https://github.com/julienbourdeau/debugbar
Documentation: https://debugbar.dev


🔍 Exploring the Debugbar Tabs

Rails Debugbar includes several tabs. Let’s go through the most useful ones—with real-world examples of how to interpret and improve performance using the data.

1. Queries Tab

This tab shows all SQL queries executed during the request, including their duration in milliseconds.

Example:

You see this in the Queries tab:

SELECT * FROM users WHERE email = 'test@example.com'  (15ms)
SELECT * FROM products WHERE user_id = 1                 (20ms)
SELECT * FROM comments WHERE product_id IN (...)         (150ms)

You realize:

  • The third query is taking 10x more time.
  • You’re not using eager loading, and it’s triggering N+1 queries.

How to Fix:

Update your controller:

@products = Product.includes(:comments).where(user_id: 1)

This loads the comments in a single query, reducing load time and object allocation.
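
Rails can also guard against this class of bug on its own: strict_loading makes lazily-loaded associations raise instead of silently firing extra queries. A small sketch:

# Opt a single query in...
@products = Product.strict_loading.where(user_id: 1)

# ...or enforce it per association in the model
class Product < ApplicationRecord
  has_many :comments, strict_loading: true
end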


2. Timeline Tab

Gives you a timeline breakdown of how long each part of the request takes—view rendering, database, middleware, etc.

Example:

You notice that rendering a partial takes 120ms, way more than expected.

<%= render 'shared/sidebar' %>

How to Fix:

Check the partial for:

  • Heavy loops or database calls
  • Uncached helper methods

Move the partial to use a fragment cache:

<% cache('sidebar') do %>
  <%= render 'shared/sidebar' %>
<% end %>

Another Example Problem:
You notice that view rendering takes 800ms for a simple page.

Solution:
Investigate partials being rendered. You might be:

  • Rendering unnecessary partials
  • Using complex helpers in views
  • Missing caching

# Before
<%= render @products %> # Renders _product.html.erb for each

# After (with caching)
<% @products.each do |product| %>
  <% cache product do %>
    <%= render product %>
  <% end %>
<% end %>

3. Memory Tab

Tracks memory usage and object allocations per request.

Example:

You load a dashboard page and see 25,000+ objects allocated. Yikes.

Dig into the view and see:

<% User.all.each do |user| %>
  ...
<% end %>

That’s loading all users into memory.

How to Fix:

Use pagination or lazy loading:

@users = User.page(params[:page]).per(20) # Kaminari-style; with pagy: @pagy, @users = pagy(User.all, limit: 20)

Now the object count drops dramatically.
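
When you genuinely need to touch every record (imports, mailers, reports), batching keeps the allocation count flat without pagination; for example:

# Loads users 1,000 at a time instead of all at once
User.find_each(batch_size: 1000) do |user|
  # process one user at a time
end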


4. Environment & Request Info

See request parameters, environment variables, session data, and headers.

Example:

You’re debugging an API endpoint and want to confirm the incoming headers or params—Debugbar shows them neatly in this tab.

It can help identify:

  • Wrong content-type headers
  • CSRF issues
  • Auth headers or missing cookies

💡 Debugbar Best Practices

  • Use it early: Don’t wait until your app is slow—profile as you build.
  • Watch out for hidden N+1 in associations, partials, or background jobs.
  • Keep an eye on object counts to reduce memory pressure in production.
  • Use fragment and Russian doll caching where needed, based on render timelines.
  • Regularly review slow pages with Debugbar open—it’s a development-time lifesaver.

💭 Final Thoughts

Rails Debugbar offers an easy, visual way to profile and optimize your Rails 8 app. Whether you’re debugging a slow page, inspecting a query storm, or chasing down memory leaks, this tool gives you insight without friction.

So before you overhaul your architecture or blame Rails, fire up Debugbar—and fix the real issues.

to be continued.. 🚀