Rails 8 App: Setup Test DB in PostgreSQL | Query Performance Using EXPLAIN ANALYZE

Now we’ll go full-on query performance pro mode using EXPLAIN ANALYZE and real plans. We’ll learn how PostgreSQL decides on a plan, how to catch slow queries, and how the right indexes can make them 10x faster.

💎 Part 1: What is EXPLAIN ANALYZE?

EXPLAIN shows how PostgreSQL plans to execute your query.

ANALYZE runs the query and adds actual time, rows, loops, etc.

Syntax:

EXPLAIN ANALYZE
SELECT * FROM users WHERE username = 'bob';

โœ๏ธ Example 1: Without Index

SELECT * FROM users WHERE username = 'bob';

If username has no index, plan shows:

Seq Scan on users
  Filter: (username = 'bob')
  Rows Removed by Filter: 9999

โŒ PostgreSQL scans all rows = Sequential Scan = slow!

โž• Add Index:

CREATE INDEX idx_users_username ON users (username);

Now rerun:

EXPLAIN ANALYZE SELECT * FROM users WHERE username = 'bob';

You’ll see:

Index Scan using idx_users_username on users
  Index Cond: (username = 'bob')

✅ PostgreSQL uses B-tree index
🚀 Massive speed-up!

🔥 Want even faster?

SELECT username FROM users WHERE username = 'bob';

If PostgreSQL shows:

Index Only Scan using idx_users_username on users
  Index Cond: (username = 'bob')

🎉 Index Only Scan! = covering index success!
No heap fetch = lightning-fast.

⚠️ Note: Index-only scan only works if:

  • Index covers all selected columns
  • Table is vacuumed (PostgreSQL uses visibility map)

If you still get Seq Scan output like:

test_db=# EXPLAIN ANALYSE SELECT * FROM users where username = 'aman_chetri';
                                           QUERY PLAN
-------------------------------------------------------------------------------------------------
 Seq Scan on users  (cost=0.00..1.11 rows=1 width=838) (actual time=0.031..0.034 rows=1 loops=1)
   Filter: ((username)::text = 'aman_chetri'::text)
   Rows Removed by Filter: 2
 Planning Time: 0.242 ms
 Execution Time: 0.077 ms
(5 rows)

even after adding an index, it’s because PostgreSQL is saying:

  • 🤔 “The table is so small (cost = 1.11), scanning the whole thing is cheaper than using the index.”
  • Also: a query that selects only username could be eligible for an Index Only Scan, but a heap fetch might still be needed due to the visibility map.

🔧 Step-by-step Fix:

✅ 1. Add Data for Bigger Table

If the table is small (few rows), PostgreSQL will prefer Seq Scan no matter what.

Try adding ~10,000 rows:

INSERT INTO users (username, email, phone_number)
SELECT 'user_' || i, 'user_' || i || '@mail.com', '1234567890'
FROM generate_series(1, 10000) i;

Then run VACUUM ANALYZE users; again and retry the EXPLAIN.

✅ 2. Confirm Index Exists

First, check your index exists and is recognized:

\d users

You should see something like:

Indexes:
    "idx_users_username" btree (username)

If not, add:

CREATE INDEX idx_users_username ON users(username);

✅ 3. Run ANALYZE (Update Stats)

ANALYZE users;

This updates statistics — PostgreSQL might not be using the index if it thinks only one row matches or the table is tiny.

✅ 4. Vacuum for Index-Only Scan

Index-only scans require the visibility map to be set.

Run:

VACUUM ANALYZE users;

This marks pages in the table as “all-visible,” enabling PostgreSQL to avoid reading the heap.

✅ 5. Force PostgreSQL to Consider Index

You can turn off sequential scan temporarily (for testing):

SET enable_seqscan = OFF;

EXPLAIN SELECT username FROM users WHERE username = 'bob';

You should now see:

Index Scan using idx_users_username on users ...

โš ๏ธ Use this only for testing/debugging โ€” not in production.

💡 Extra Tip (optional): Use EXPLAIN (ANALYZE, BUFFERS)

EXPLAIN (ANALYZE, BUFFERS)
SELECT username FROM users WHERE username = 'bob';

This will show:

  • Whether heap was accessed
  • Buffer hits
  • Actual rows
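
With BUFFERS, the plan output looks roughly like this (the numbers here are illustrative, not from a real run):

 Index Only Scan using idx_users_username on users  (cost=0.29..4.30 rows=1 width=9) (actual time=0.020..0.021 rows=1 loops=1)
   Index Cond: (username = 'bob'::text)
   Heap Fetches: 0
   Buffers: shared hit=3
 Planning Time: 0.080 ms
 Execution Time: 0.045 ms
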
📋 Summary

Step                      | Command
--------------------------+---------------------------
Check index               | \d users
Analyze table             | ANALYZE users;
Vacuum for visibility     | VACUUM ANALYZE users;
Disable seq scan for test | SET enable_seqscan = OFF;
Add more rows (optional)  | INSERT INTO ...

🚨 How to catch bad index usage?

Always look for:

  • “Seq Scan” instead of “Index Scan” ➔ missing index
  • “Heap Fetch” ➔ not a covering index
  • “Rows Removed by Filter” ➔ inefficient filtering
  • “Loops: 1000+” ➔ possible N+1 issue

Common Pattern Optimizations

Pattern                     | Fix
----------------------------+------------------------------------------------------------------
WHERE column = ?            | B-tree index on column
WHERE column LIKE 'prefix%' | B-tree works (with text_pattern_ops)
SELECT col1 WHERE col2 = ?  | Covering index: (col2, col1) or (col2) INCLUDE (col1)
WHERE col BETWEEN ? AND ?   | Composite index with the range column second: (status, created_at)
WHERE col IN (?, ?, ?)      | Index still helps
ORDER BY col LIMIT 10       | Index on col helps sort fast
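
As a sketch, here’s how a few of those fixes translate into DDL (table and column names are just examples):

-- Equality lookup: plain B-tree index
CREATE INDEX idx_users_username ON users (username);

-- Prefix LIKE: the pattern-ops operator class lets the B-tree serve LIKE 'prefix%'
CREATE INDEX idx_users_username_pattern ON users (username text_pattern_ops);

-- Covering index for: SELECT email FROM users WHERE username = ?
CREATE INDEX idx_users_username_incl_email ON users (username) INCLUDE (email);

-- Equality + range: WHERE status = ? AND created_at BETWEEN ? AND ?
CREATE INDEX idx_users_status_created_at ON users (status, created_at);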

⚡ Tip: Use pg_stat_statements to Find Slow Queries

Enable it in postgresql.conf:

shared_preload_libraries = 'pg_stat_statements'

Then run:

SELECT query, total_exec_time, calls
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 5;

🎯 Find your worst queries & optimize them with new indexes!
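
Note that on most setups you also have to create the extension once per database (and restart PostgreSQL after changing shared_preload_libraries):

CREATE EXTENSION IF NOT EXISTS pg_stat_statements;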

🧪 Try It Yourself

Want a little lab setup to practice?

CREATE TABLE users (
  user_id serial PRIMARY KEY,
  username VARCHAR(220),
  email VARCHAR(150),
  phone_number VARCHAR(20)
);

-- Insert 100K fake rows
INSERT INTO users (username, email, phone_number)
SELECT
  'user_' || i,
  'user_' || i || '@example.com',
  '999-000-' || i
FROM generate_series(1, 100000) i;

Then test:

  1. EXPLAIN ANALYZE SELECT * FROM users WHERE username = 'user_5000';
  2. Add an index on username
  3. Re-run and compare the speed (see the sketch below)!
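
A minimal sketch of that experiment (the index name is arbitrary):

EXPLAIN ANALYZE SELECT * FROM users WHERE username = 'user_5000';  -- expect: Seq Scan

CREATE INDEX idx_users_username ON users (username);
ANALYZE users;

EXPLAIN ANALYZE SELECT * FROM users WHERE username = 'user_5000';  -- expect: Index Scan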

🎯 Extra Pro Tools for Query Performance

  • EXPLAIN ANALYZE → always the first tool
  • pg_stat_statements → find slow queries in real apps
  • auto_explain → log slow plans automatically (see the config sketch below)
  • pgBadger or pgHero → visual query monitoring
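
For auto_explain, a minimal postgresql.conf sketch (the threshold is illustrative):

# Log the plan of any query slower than 200 ms (illustrative values)
shared_preload_libraries = 'auto_explain,pg_stat_statements'
auto_explain.log_min_duration = '200ms'
auto_explain.log_analyze = on
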
💥 Now We Know:

✅ How to read query plans
✅ When you’re doing full scans vs index scans
✅ How to achieve index-only scans
✅ How to catch bad performance early
✅ How to test and fix in the real world

Happy Performance Fixing.. 🚀

Rails 8 App: Setup Test DB in PostgreSQL | Covering Index | BRIN Indexes | Hash Indexes | Create super fast indexes

Let’s look into some of the features of SQL data indexing. This will be super helpful while developing our Rails 8 application.

💎 Part 1: What is a Covering Index?

Normally when you query:

SELECT * FROM users WHERE username = 'bob';

  • Database searches username index (secondary).
  • Finds a pointer (TID or PK).
  • Then fetches full row from table (heap or clustered B-tree).

Problem:

  • Heap fetch = extra disk read.
  • Clustered B-tree fetch = extra traversal.

📜 Covering Index idea:

✅ If the index already contains all the columns you need,
✅ then the database does not need to fetch the full row!

It can answer the query purely by scanning the index! ⚡

Boom — one disk read, no extra hop!

โœ๏ธ Example in PostgreSQL:

Suppose your query is:

SELECT username FROM users WHERE username = 'bob';

  • You only need username.
  • But by default, PostgreSQL indexes only store the index column (here, username) + TID.

✅ So in this case — already covering!

No heap fetch needed!

โœ๏ธ Example in MySQL InnoDB:

Suppose your query is:

SELECT username FROM users WHERE username = 'bob';

  • Secondary index (username) contains:
    • username (indexed column)
    • user_id (because secondary indexes in InnoDB always store PK)

โ™ฆ๏ธ So again, already covering!
No need to jump to the clustered index!

🎯 Key point:

If your query only asks for columns already inside the index,
then only the index is touched ➔ no second lookup ➔ super fast!

💎 Part 2: Real SQL Examples

✨ PostgreSQL

Create a covering index for a common query:

CREATE INDEX idx_users_username_email ON users (username, email);

Now if you run:

SELECT email FROM users WHERE username = 'bob';

Postgres can:

  • Search index on username
  • Already have email in index
  • ✅ No heap fetch!

(And Postgres is smart: it checks index-only scan automatically.)
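
You can verify this with EXPLAIN: if the plan shows an Index Only Scan with zero heap fetches, the table itself was never touched. A sketch:

EXPLAIN (ANALYZE)
SELECT email FROM users WHERE username = 'bob';
-- Look for: Index Only Scan using idx_users_username_email on users
--           Heap Fetches: 0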

✨ MySQL InnoDB

Create a covering index:

CREATE INDEX idx_users_username_email ON users (username, email);

✅ Now query:

SELECT email FROM users WHERE username = 'bob';

Same behavior:

  • Only secondary index read.
  • No need to touch primary clustered B-tree.

💎 Part 3: Tips to design smart Covering Indexes

✅ If your query uses WHERE on col1 and SELECT col2,
✅ best to create the index: (col1, col2).

✅ Keep indexes small — don’t add 10 columns unless needed.
✅ Avoid huge TEXT or BLOB columns in covering indexes — they make indexes heavy.

✅ Composite indexes are powerful:

CREATE INDEX idx_users_username_email ON users (username, email);

→ Can be used for:

  • WHERE username = ?
  • WHERE username = ? AND email = ?
  • etc.

✅ Monitor index usage:

  • PostgreSQL: EXPLAIN ANALYZE
  • MySQL: EXPLAIN

✅ Always check whether Index Only Scan (PostgreSQL) or Using index (MySQL) appears in the EXPLAIN plan!

📚 Quick Summary Table

Database     | Normal Query                                  | With Covering Index
-------------+-----------------------------------------------+-----------------------
PostgreSQL   | B-tree ➔ heap fetch (unless index-only scan)  | B-tree scan only
MySQL InnoDB | Secondary B-tree ➔ primary B-tree             | Secondary B-tree only
Result       | 2 steps                                       | 1 step
Speed        | Slower                                        | Faster

🏆 Great! — Now We Know:

🧊 How heap fetch works!
🧊 How block lookup is O(1)!
🧊 How covering indexes skip heap fetch!
🧊 How to create super fast indexes for PostgreSQL and MySQL!


🦾 Advanced Indexing Tricks (Real Production Tips)

Now it’s time to look into the heavyweight features Postgres supports for making SQL data search/fetch super fast and efficient.

1. 🎯 Partial Indexes (PostgreSQL ONLY)

✅ Instead of indexing the whole table,
✅ you can index only the rows you care about!

Example:

Suppose 95% of users have status = 'inactive', but you only search active users:

SELECT * FROM users WHERE status = 'active' AND email = 'bob@example.com';

👉 Instead of indexing the whole table:

CREATE INDEX idx_active_users_email ON users (email) WHERE status = 'active';

โ™ฆ๏ธ PostgreSQL will only store rows with status = 'active' in this index!

Advantages:

  • Smaller index = Faster scans
  • Less space on disk
  • Faster index maintenance (fewer updates/inserts)

Important:

  • MySQL (InnoDB) does NOT support partial indexes 😔 — only PostgreSQL has this superpower.
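
One way to see the size win is to compare a full index with the partial one (pg_relation_size and pg_size_pretty are standard Postgres functions; the full index below exists only for the comparison):

-- Hypothetical comparison: full index vs. the partial index from above
CREATE INDEX idx_users_email_full ON users (email);

SELECT pg_size_pretty(pg_relation_size('idx_users_email_full'))   AS full_index,
       pg_size_pretty(pg_relation_size('idx_active_users_email')) AS partial_index;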

2. 🎯 INCLUDE Indexes (PostgreSQL 11+)

✅ Normally, a composite index uses all columns for sorting/searching.
✅ With INCLUDE, extra columns are just stored in the index, not used for ordering.

Example:

CREATE INDEX idx_username_include_email ON users (username) INCLUDE (email);

Meaning:

  • username is indexed and ordered.
  • email is only stored alongside.

Now query:

SELECT email FROM users WHERE username = 'bob';

➔ Index-only scan — no heap fetch.

Advantages:

  • Smaller & faster than normal composite indexes.
  • Helps to create very efficient covering indexes.

Important:

  • MySQL has no direct equivalent of INCLUDE — you get covering behavior by putting the extra columns into a wider composite index instead.

3. 🎯 Composite Index Optimization

✅ Always order columns inside the index smartly based on the query pattern.

Golden Rules:

โšœ๏ธ Equality columns first (WHERE col = ?)
โšœ๏ธ Range columns second (WHERE col BETWEEN ?)
โšœ๏ธ SELECT columns last (for covering)

Example:

If query is:

SELECT email FROM users WHERE status = 'active' AND created_at > '2024-01-01';

Best index:

CREATE INDEX idx_users_status_created_at ON users (status, created_at) INCLUDE (email);

โ™ฆ๏ธ status first (equality match)
โ™ฆ๏ธ created_at second (range)
โ™ฆ๏ธ email included (covering)

Bad Index: (wrong order)

CREATE INDEX idx_created_at_status ON users (created_at, status);

→ Will not be efficient!

4. 🎯 BRIN Indexes (PostgreSQL ONLY, super special!)

✅ When your table is very huge (millions/billions of rows),
✅ and rows are naturally ordered (like an increasing timestamp or id),
✅ you can create a BRIN (Block Range Index).

Example:

CREATE INDEX idx_users_created_at_brin ON users USING BRIN (created_at);

โ™ฆ๏ธ BRIN stores summaries of large ranges of pages (e.g., min/max timestamp per 128 pages).

โ™ฆ๏ธ Ultra small index size.

โ™ฆ๏ธ Very fast for large range queries like:

SELECT * FROM users WHERE created_at BETWEEN '2024-01-01' AND '2024-04-01';

Important:

  • BRIN ≠ B-tree
  • BRIN is approximate, B-tree is precise.
  • Only useful if data is naturally correlated with physical storage order.

MySQL?

  • MySQL does not have BRIN natively. PostgreSQL has a big advantage here.
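
BRIN granularity is tunable via the pages_per_range storage parameter (128 is the default; smaller values give a bigger but more precise index). A sketch:

CREATE INDEX idx_users_created_at_brin ON users USING BRIN (created_at)
  WITH (pages_per_range = 32);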

5. 🎯 Hash Indexes (special case)

✅ If your query is always exact equality (not range),
✅ you can use hash indexes.

Example:

CREATE INDEX idx_users_username_hash ON users USING HASH (username);

Useful for:

  • Simple WHERE username = 'bob'
  • Never ranges (BETWEEN, LIKE, etc.)

โš ๏ธ Warning:

  • Hash indexes used to be “lossy” before Postgres 10.
  • Now they are safe, but usually B-tree is still better unless you have very heavy point lookups.

😎 PRO-TIP: Which Index Type to Use?

Use case                                                            | Index type
--------------------------------------------------------------------+---------------------------
Search small ranges or equality                                     | B-tree
Search on huge tables with natural order (timestamps, IDs)          | BRIN
Only exact match, super heavy lookup                                | Hash
Search only small part of table (active users, special conditions)  | Partial index
Need to skip heap fetch                                             | INCLUDE / Covering Index

๐Ÿ—บ๏ธ Quick Visual Mindmap:

Your Query
โ”‚
โ”œโ”€โ”€ Need Equality + Range? โž” B-tree
โ”‚
โ”œโ”€โ”€ Need Huge Time Range Query? โž” BRIN
โ”‚
โ”œโ”€โ”€ Exact equality only? โž” Hash
โ”‚
โ”œโ”€โ”€ Want Smaller Index (filtered)? โž” Partial Index
โ”‚
โ”œโ”€โ”€ Want to avoid Heap Fetch? โž” INCLUDE columns (Postgres) or Covering Index

๐Ÿ† Now we Know:

๐ŸงŠ Partial Indexes
๐ŸงŠ INCLUDE Indexes
๐ŸงŠ Composite Index order tricks
๐ŸงŠ BRIN Indexes
๐ŸงŠ Hash Indexes
๐ŸงŠ How to choose best Index

โœ… This is serious pro-level database knowledge.


Enjoy SQL! 🚀

Rails 8 App: Setup Test DB | Comprehensive Guide 📖 for PostgreSQL, MySQL Indexing – PostgreSQL Heap ⛰ vs MySQL InnoDB B-Tree 🌿

Enter into psql terminal:

✗ psql postgres
psql (14.17 (Homebrew))
Type "help" for help.

postgres=# \l
                                     List of databases
           Name            |  Owner   | Encoding | Collate | Ctype |   Access privileges
---------------------------+----------+----------+---------+-------+-----------------------
 studio_development | postgres | UTF8     | C       | C     |
  • Create a new test database
  • Create a users Table
  • Check the db and table details
postgres=# create database test_db;
CREATE DATABASE

test_db=# CREATE TABLE users (
user_id INT,
username VARCHAR(220),
email VARCHAR(150),
phone_number VARCHAR(20)
);
CREATE TABLE

test_db=# \dt
List of relations
 Schema | Name  | Type  |  Owner
--------+-------+-------+----------
 public | users | table | abhilash
(1 row)

test_db=# \d users;
                          Table "public.users"
    Column    |          Type          | Collation | Nullable | Default
--------------+------------------------+-----------+----------+---------
 user_id      | integer                |           |          |
 username     | character varying(220) |           |          |
 email        | character varying(150) |           |          |
 phone_number | character varying(20)  |           |          |

Add a Primary key to users and check the user table.

test_db=# ALTER TABLE users ADD PRIMARY KEY (user_id);
ALTER TABLE

test_db=# \d users;
                          Table "public.users"
    Column    |          Type          | Collation | Nullable | Default
--------------+------------------------+-----------+----------+---------
 user_id      | integer                |           | not null |
 username     | character varying(220) |           |          |
 email        | character varying(150) |           |          |
 phone_number | character varying(20)  |           |          |
Indexes:
    "users_pkey" PRIMARY KEY, btree (user_id)

# OR add primary key when creating the table:
CREATE TABLE users (
  user_id INT PRIMARY KEY,
  username VARCHAR(220),
  email VARCHAR(150),
  phone_number VARCHAR(20)
);

You can see that a unique constraint and an index are added when adding a primary key.

Why does adding a primary key also add an index?

  • A primary key must guarantee that each value is unique and fast to find.
  • Without an index, the database would have to scan the whole table every time you look up a primary key, which would be very slow.
  • So PostgreSQL automatically creates a unique index on the primary key to make lookups efficient and to enforce uniqueness at the database level.

👉 It needs the index for speed and to enforce the “no duplicates” rule of primary keys.

What is btree?

  • btree stands for Balanced Tree (specifically, a “B-tree” data structure).
  • It’s the default index type in PostgreSQL.
  • B-tree indexes organize the data in a tree structure, so that searches, inserts, updates, and deletes are all very efficient — about O(log n) time.
  • It’s great for looking up exact matches (like WHERE user_id = 123) or range queries (like WHERE user_id BETWEEN 100 AND 200).

👉 So when you see btree, it just means it’s using a very efficient tree structure for your primary key index.

Summary in one line:
Adding a primary key automatically adds a btree index to enforce uniqueness and make lookups super fast.


In MySQL (specifically InnoDB engine, which is default now):

  • Primary keys always create an index automatically.
  • The index is a clustered index โ€” this is different from Postgres!
  • The index uses a B-tree structure too, just like Postgres.

👉 So yes, MySQL also adds an index and uses a B-tree under the hood for primary keys.

But here’s a big difference:

  • In InnoDB, the table data itself is stored inside the primary key’s B-tree.
    • That’s called a clustered index.
    • It means the physical storage of the table rows follows the order of the primary key.
  • In PostgreSQL, the index and the table are stored separately (non-clustered by default).

Example: If you have a table like this in MySQL:

CREATE TABLE users (
  user_id INT PRIMARY KEY,
  username VARCHAR(220),
  email VARCHAR(150)
);
  • user_id will have a B-tree clustered index.
  • The rows themselves will be stored sorted by user_id.

Short version:

Database       | Primary Key Behavior                                | B-tree? | Clustered?
---------------+-----------------------------------------------------+---------+--------------------------
PostgreSQL     | Separate index created for the PK                   | Yes     | No (separate by default)
MySQL (InnoDB) | PK index + table rows stored inside the PK's B-tree | Yes     | Yes (always clustered)

Why Indexing on Unique Columns (like email) Improves Lookup 🔍

Use Case

You frequently run queries like:

SELECT * FROM students WHERE email = 'john@example.com';

Without an index, this results in a full table scan — checking each row one-by-one.

With an index, the database can jump directly to the row using a sorted structure, significantly reducing lookup time โ€” especially in large tables.


🌲 How SQL Stores Indexes Internally (PostgreSQL)

📚 PostgreSQL uses B-Tree indexes by default.

When you run:

CREATE UNIQUE INDEX idx_students_on_email ON students(email);

PostgreSQL creates a balanced B-tree like this:

          m@example.com
         /              \
  d@example.com     t@example.com
  /        \           /         \
...      ...        ...         ...

  • ✅ Keys (email values) are sorted lexicographically.
  • ✅ Each leaf node contains a pointer to the actual row in the students table (called a tuple pointer or TID).
  • ✅ Lookup uses binary search, giving O(log n) performance.

โš™๏ธ Unique Index = Even Faster

Because all email values are unique, the database:

  • Can stop searching immediately once a match is found.
  • Doesn’t need to scan multiple leaf entries (no duplicates).

🧠 Summary

Feature             | Value
--------------------+----------------------------------------------------------------
Index type          | B-tree (default in PostgreSQL)
Lookup time         | O(log n) vs O(n) without an index
Optimized for       | Equality search (WHERE email = ...), sorting, joins
Email is unique?    | ✅ Yes – the index helps even more (no duplicate leaf entries)
Table scan avoided? | ✅ Yes – PostgreSQL jumps directly via B-tree lookup

What Exactly is a Clustered Index in MySQL (InnoDB)?

🔹 In MySQL InnoDB, the primary key IS the table.

🔹 A Clustered Index means:

  • The table’s data rows are physically organized in the order of the primary key.
  • No separate storage for the table – it’s merged into the primary key’s B-tree structure.

In simple words:
👉 “The table itself lives inside the primary key B-tree.”

That’s why:

  • Every secondary index must store the primary key value (not a row pointer).
  • InnoDB can only have one clustered index (because you can’t physically order a table in two different ways).
📈 Visual for MySQL Clustered Index

Suppose you have:

CREATE TABLE users (
  user_id INT PRIMARY KEY,
  username VARCHAR(255),
  email VARCHAR(255)
);

The storage looks like:

B-tree by user_id (Clustered)

user_id  | username | email
----------------------------
101      | Alice    | a@x.com
102      | Bob      | b@x.com
103      | Carol    | c@x.com

👉 Table rows stored directly inside the B-tree nodes by user_id!


🔵 PostgreSQL (Primary Key Index = Separate)

Imagine you have a users table:

users table (physical table):

row_id | user_id | username | email
-------------------------------------
  1    |   101   | Alice    | a@example.com
  2    |   102   | Bob      | b@example.com
  3    |   103   | Carol    | c@example.com

And the Primary Key Index looks like:

Primary Key B-Tree (separate structure):

user_id -> row pointer
 101    -> row_id 1
 102    -> row_id 2
 103    -> row_id 3

👉 When you query WHERE user_id = 102, PostgreSQL goes:

  • Find user_id 102 in the B-tree index,
  • Then jump to row_id 2 in the actual table.

🔸 Index and Table are separate.
🔸 Extra step: index lookup ➔ then fetch row.

🟠 MySQL InnoDB (Primary Key Index = Clustered)

Same users table, but stored like this:

Primary Key Clustered B-Tree (index + data together):

user_id | username | email
---------------------------------
  101   | Alice    | a@example.com
  102   | Bob      | b@example.com
  103   | Carol    | c@example.com

👉 When you query WHERE user_id = 102, MySQL:

  • Goes straight to user_id 102 in the B-tree,
  • Data is already there, no extra lookup.

🔸 Index and Table are merged.
🔸 One step: direct access!

📈 Quick Visual:

PostgreSQL
(Index)    ➔    (Table Row)
    |
    ➔ extra lookup needed

MySQL InnoDB
(Index + Row Together)
    |
    ➔ data found immediately

Summary:

  • PostgreSQL: primary key index is separate ➔ needs 2 steps (index ➔ table).
  • MySQL InnoDB: primary key index is clustered ➔ 1 step (index = table).

📚 How Secondary Indexes Work

Secondary Index = an index on a column that is not the primary key.

Example:

CREATE INDEX idx_username ON users(username);

Now you have an index on username.

๐Ÿ”ต PostgreSQL Secondary Index Behavior

  • Secondary indexes are separate structures from the table (just like the primary key index).
  • When you query by username, PostgreSQL:
    1. Finds the matching row_id using the secondary B-tree index.
    2. Then fetches the full row from the table by row_id.
  • This is called an Index Scan + Heap Fetch.

📜 Example:

Secondary Index (username -> row_id):

username -> row_id
------------------
Alice    -> 1
Bob      -> 2
Carol    -> 3

(users table is separate)

👉 Flexible, but needs 2 steps: index (row_id) ➔ table.

🟠 MySQL InnoDB Secondary Index Behavior

  • In InnoDB, secondary indexes don’t store row pointers.
  • Instead, they store the primary key value!

So:

  1. Find the matching primary key using the secondary index.
  2. Use the primary key to find the actual row inside the clustered primary key B-tree.

📜 Example:

Secondary Index (username -> user_id):

username -> user_id
--------------------
Alice    -> 101
Bob      -> 102
Carol    -> 103

(Then find user_id inside Clustered B-Tree)

✅ Needs 2 steps too: secondary index (primary key) ➔ clustered table.

📈 Quick Visual:

Feature                | PostgreSQL                       | MySQL InnoDB
-----------------------+----------------------------------+-------------------------------------------------------------
Secondary index        | username ➔ row pointer (row_id)  | username ➔ primary key (user_id)
Fetch full row         | Use row_id to get the table row  | Use the primary key to find the row in the clustered index
Steps to fetch         | Index ➔ Table                    | Index ➔ Primary Key ➔ Table (clustered)

Action                 | PostgreSQL                       | MySQL InnoDB
-----------------------+----------------------------------+-------------------------------------------------------------
Primary key lookup     | Index ➔ Row (2 steps)            | Clustered index (1 step)
Secondary index lookup | Index (row_id) ➔ Row (2 steps)   | Secondary index (PK) ➔ Row (2 steps)
Storage model          | Separate index and table         | Primary key and table merged (clustered)

๐ŸŒ Now, let’s do some Real SQL Query โ› Examples!

1. Simple SELECT * FROM users WHERE user_id = 102;
  • PostgreSQL:
    Look into PK btree ➔ find row pointer ➔ fetch row separately.
  • MySQL InnoDB:
    Directly find the row inside the PK B-tree (no extra lookup).

✅ MySQL is a little faster here because it needs only 1 step!

2. SELECT username FROM users WHERE user_id = 102; (Only 1 Column)
  • PostgreSQL:
    Might do an Index Only Scan if all needed data is in the index (very fast).
  • MySQL:
    Clustered index contains all columns already, no special optimization needed.

✅ Both can be very fast, but PostgreSQL shines if the index is “covering” (i.e., contains all needed columns), because the index is smaller than MySQL’s clustered index, so fewer pages have to be read.

3. SELECT * FROM users WHERE username = 'Bob'; (Secondary Index Search)
  • PostgreSQL:
    Secondary index on username ➔ row pointer ➔ fetch table row.
  • MySQL:
    Secondary index on username ➔ get primary key ➔ clustered index lookup ➔ fetch data.

✅ Both are 2 steps, but MySQL needs 2 different B-trees: secondary ➔ primary clustered.

Consider the below situation:

SELECT username FROM users WHERE user_id = 102;
  • user_id is the Primary Key.
  • You only want username, not full row.

Now:

🔵 PostgreSQL Behavior

👉 In PostgreSQL, by default:

  • It uses the primary key btree to find the row pointer.
  • Then fetches the full row from the table (heap fetch).

👉 But PostgreSQL has an optimization called Index-Only Scan.

  • If all requested columns are already present in the index,
  • And if the table visibility map says the row is still valid (no deleted/updated row needing visibility check),
  • Then Postgres does not fetch the heap.

👉 So in this case:

  • If the primary key index also stores username internally (or if an extra index is created covering username), Postgres can satisfy the query just from the index.

✅ Result: No table lookup needed ➔ very fast (almost as fast as an InnoDB clustered lookup).

📢 Postgres primary key indexes usually don’t store extra columns, unless you specifically create an index that includes them (the INCLUDE (username) syntax in Postgres 11+).

🟠 MySQL InnoDB Behavior
  • In InnoDB:
    Since the primary key B-tree already holds all columns (user_id, username, email),
    It directly finds the row from the clustered index.
  • So when you query by PK, even if you only need one column, it has everything inside the same page/block.

✅ One fast lookup.

🔥 Why sometimes Postgres can still be faster?
  • If PostgreSQL uses Index-Only Scan, and the page is already cached, and no extra visibility check is needed,
    Then Postgres may avoid touching the table at all and only scan the tiny index pages.
  • In this case, for very narrow queries (e.g., only 1 small field), Postgres can outperform even MySQL clustered fetch.

💡 Because fetching from a small index page (~8KB) is faster than reading bigger table pages.

🎯 Conclusion:

✅ MySQL clustered index is always fast for PK lookups.
✅ PostgreSQL can be even faster for small/narrow queries if an Index-Only Scan is triggered.

👉 Quick Tip:

  • In PostgreSQL, you can force an index to include extra columns by using: CREATE INDEX idx_user_id_username ON users(user_id) INCLUDE (username); Then index-only scans become more common and predictable! 🚀

Isn’t PostgreSQL also doing 2 B-tree scans? One for secondary index and one for table (row_id)?

When you query with a secondary index, like:

SELECT * FROM users WHERE username = 'Bob';
  • In MySQL InnoDB, I said:
    1. Find in secondary index (username โž” user_id)
    2. Then go to primary clustered index (user_id โž” full row)
Let’s look at PostgreSQL first:

โ™ฆ๏ธ Step 1: Search Secondary Index B-tree on username.

  • It finds the matching TID (tuple ID) or row pointer.
    • TID is a pair (block_number, row_offset).
    • Not a B-tree! Just a physical pointer.

โ™ฆ๏ธ Step 2: Use the TID to directly jump into the heap (the table).

  • The heap (table) is not a B-tree — it’s just a collection of unordered pages (blocks of rows).
  • PostgreSQL goes directly to the block and offset — like jumping straight into a file.

🔔 Important:

  • Secondary index ➔ TID ➔ heap fetch.
  • No second B-tree traversal for the table!
🟠 Meanwhile in MySQL InnoDB:

โ™ฆ๏ธ Step 1: Search Secondary Index B-tree on username.

  • It finds the Primary Key value (user_id).

โ™ฆ๏ธ Step 2: Now, search the Primary Key Clustered B-tree to find the full row.

  • Need another B-tree traversal based on user_id.

🔔 Important:

  • Secondary index ➔ Primary Key B-tree ➔ data fetch.
  • Two full B-tree traversals!

Real-world Summary:

โ™ฆ๏ธ PostgreSQL

  • Secondary index gives a direct shortcut to the heap.
  • One B-tree scan (secondary) โž” Direct heap fetch.

โ™ฆ๏ธ MySQL

  • Secondary index gives PK.
  • Then another B-tree scan (primary clustered) to find full row.

✅ PostgreSQL does not scan a second B-tree when fetching from the table — just a direct page lookup using the TID.

✅ MySQL does scan a second B-tree (the primary clustered index) when fetching the full row after a secondary lookup.

Is heap fetch a searching technique? Why is it faster than a B-tree?

📚 Let’s start from the basics:

When PostgreSQL finds a match in a secondary index, what it gets is a TID.

โ™ฆ๏ธ A TID (Tuple ID) is a physical address made of:

  • Block Number (page number)
  • Offset Number (row slot inside the page)

Example:

TID = (block_number = 1583, offset = 7)
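
You can see these TIDs yourself via PostgreSQL’s hidden ctid system column. A quick psql experiment:

SELECT ctid, username FROM users LIMIT 3;
--  ctid  | username
-- -------+----------
--  (0,1) | Alice      -- (block 0, offset 1); actual values depend on your data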

🔵 How does PostgreSQL use the TID?

  1. It directly calculates the location of the block (disk page) using block_number.
  2. It reads that block (if not already in memory).
  3. Inside that block, it finds the row at offset 7.

โ™ฆ๏ธ No search, no btree, no extra traversal โ€” just:

  • Find the page (via simple number addressing)
  • Find the row slot

📈 Visual Example

Secondary index (username ➔ TID):

username | TID
---------+------------
Alice    | (1583, 7)
Bob      | (1592, 3)
Carol    | (1601, 12)

โ™ฆ๏ธ When you search for “Bob”:

  • Find (1592, 3) from secondary index B-tree.
  • Jump directly to Block 1592, Offset 3.
  • Done ✅!

Answer:

  • Heap fetch is NOT a search.
  • It’s a direct address lookup (fixed number).
  • Heap = unordered collection of pages.
  • Pages = fixed-size blocks (usually 8 KB each).
  • TID gives an exact GPS location inside the heap — no searching required.

That’s why heap fetch is faster than another B-tree search:

  • No binary search, no B-tree traversal needed.
  • Only a simple disk/memory read + row offset jump.

🌿 B-tree vs 📍 Heap Fetch

Action          | B-tree                                   | Heap Fetch
----------------+------------------------------------------+---------------------------------
What it does    | Binary search inside sorted tree nodes   | Direct jump to block and slot
Steps needed    | Traverse nodes (root ➔ internal ➔ leaf)  | Directly read page and slot
Time complexity | O(log n)                                 | O(1)
Speed           | Slower (needs comparisons)               | Very fast (direct)

🎯 Final and short answer:

♦️ In PostgreSQL, after finding the TID in the secondary index, the heap fetch is a direct, constant-time (O(1)) access — no B-tree needed!
♦️ This is faster than scanning another B-tree like in MySQL InnoDB.


🧩 Our exact question:

When we say:

Jump directly to Block 1592, Offset 3.

We are thinking:

  • There are thousands of blocks.
  • How can we directly jump to block 1592?
  • Shouldn’t that be O(n) (linear time)?
  • Shouldn’t there be some traversal?

🔵 Here’s the real truth:

  • No traversal needed.
  • No O(n) work.
  • Accessing Block 1592 is O(1) — constant time.

📚 Why?

Because of how files, pages, and memory work inside a database.

When PostgreSQL stores a table (the “heap”), it saves it in a file on disk.
The file is just a long array of fixed-size pages.

  • Each page = 8KB (default in Postgres).
  • Each block = 1 page = fixed 8KB chunk.
  • Block 0 is the first 8KB.
  • Block 1 is next 8KB.
  • Block 2 is next 8KB.
  • Block 1592 = (1592 × 8 KB) offset from the beginning.

✅ So block 1592 is simply located at 1592 × 8192 bytes offset from the start of the file.

✅ Operating systems (and PostgreSQL’s Buffer Manager) know exactly how to seek to that byte position without reading everything before it.

📈 Diagram (imagine the table file):
+-----------+-----------+-----------+-----------+-----------+------+
| Block 0   | Block 1   | Block 2   | Block 3   | Block 4   |  ... |
+-----------+-----------+-----------+-----------+-----------+------+
  (8KB)       (8KB)       (8KB)       (8KB)       (8KB)

Finding Block 1592 ➔
Seek directly to offset 1592 * 8192 bytes ➔
Read 8KB ➔
Find row at Offset 3 inside it.

🤔 What happens technically?

If in memory (shared buffers / page cache):
  • PostgreSQL checks its buffer pool (shared memory).
  • “Do I already have block 1592 cached?”
    • ✅ Yes: immediately access memory address.
    • ❌ No: Load block 1592 from disk into memory.
If from disk (rare if cached):
  • File systems (ext4, xfs, etc) know how to seek to a byte offset in a file without reading previous parts.
  • Seek to (block_number ร— 8192) bytes.
  • Read exactly 8KB into memory.
  • No need to scan the whole file linearly.

📊 Final Step: Inside the Block

Once the block is loaded:

  • The block internally is structured like an array of tuples.
  • Each tuple is placed into an offset slot.
  • Offset 3 ➔ third tuple inside the block.

โ™ฆ๏ธ Again, this is just array lookup โ€” no traversal, no O(n).

⚡ So to summarize:

Question                                      | Answer
----------------------------------------------+---------------------------------------------------------------
How does PostgreSQL jump directly to a block? | Using the block number × page size calculation (fixed offset math).
Is it O(n)?                                   | ❌ No, it's O(1) constant time.
Is there any traversal?                       | ❌ No traversal. Just a seek + memory read.
How fast?                                     | Extremely fast if cached, still fast if disk seeks.

🔥 Key concept:

PostgreSQL heap access is O(1) because the heap file is a flat sequence of fixed-size pages, and the TID gives exact coordinates.

🎯 Simple Real World Example:

Imagine you have a giant book (the table file).
Each page of the book is numbered (block number).

If someone says:

👉 “Go to page 1592.”

โ™ฆ๏ธ You don’t need to read pages 1 to 1591 first.
โ™ฆ๏ธ You just flip directly to page 1592.

๐Ÿ“— Same idea: no linear traversal, just positional lookup.

🧠 Deep thought:

Because blocks are fixed size and TID is known,
heap fetch is almost as fast as reading a small array.

(Actually faster than searching B-tree because B-tree needs multiple comparisons at each node.)

Enjoy SQL! 🚀

Setup 🛠 Rails 8 App – Part 13: Composite keys & Candidate keys in Rails DB

🔑 What Is a Composite Key?

A composite key is a primary key made up of two or more columns that together uniquely identify a row in a table.

Use a composite key when no single column is unique on its own, but the combination is.

👉 Example: Composite Key in Action

Let’s say we’re building a table to track which students are enrolled in which courses.

Without Composite Key:
-- This table might allow duplicates
CREATE TABLE Enrollments (
  student_id INT,
  course_id INT
);

Nothing stops the same student from enrolling in the same course multiple times!

With Composite Key:
CREATE TABLE Enrollments (
  student_id INT,
  course_id INT,
  PRIMARY KEY (student_id, course_id)
);

Now:

  • student_id alone is not unique
  • course_id alone is not unique
  • But together → each (student_id, course_id) pair is unique

📌 Why Use Composite Keys?

When to Use                          | Why
-------------------------------------+----------------------------------------
Tracking many-to-many relationships  | Ensures unique pairs
Bridging/junction tables             | e.g., students-courses, authors-books
No natural single-column key         | But the combination is unique

โš ๏ธ Things to Keep in Mind

  • Composite keys enforce uniqueness across multiple columns.
  • They can also be used as foreign keys in other tables.
  • Some developers prefer to add an auto-increment id as the primary key instead — but that’s a design choice.

🔎 What Is a Candidate Key?

A candidate key is any column (or combination of columns) in a table that can uniquely identify each row.

  • Every table can have multiple candidate keys
  • One of them is chosen to be the primary key
  • The rest are called alternate keys

🔑 Think of candidate keys as “potential primary keys”

👉 Example: Users Table

CREATE TABLE Users (
  user_id INT,
  username VARCHAR(80),
  email VARCHAR(150),
  phone_number VARCHAR(30)
);

Let’s get some hands-on experience with SQL queries by creating a TEST DB. Check https://railsdrop.com/2025/04/25/rails-8-app-part-13-2-test-sql-queries/

Assume:

  • user_id is unique
  • username is unique
  • email is unique

Candidate Keys:

  • user_id
  • username
  • email

You can choose any one of them as the primary key, depending on your design needs.

-- Choosing user_id as the primary key
PRIMARY KEY (user_id)

The rest (username, email) are alternate keys.

📌 Characteristics of Candidate Keys

Property      | Description
--------------+---------------------------------------------------------------------------------------
Uniqueness    | Must uniquely identify each row
Non-null      | Cannot contain NULL values
Minimality    | Must be the smallest set of columns that uniquely identifies a row (no extra columns)
No duplicates | No two rows have the same value(s)

👥 Candidate Key vs Composite Key

Concept       | Explanation
--------------+----------------------------------------------------
Candidate Key | Any unique identifier (single or multiple columns)
Composite Key | A candidate key that uses multiple columns
So: All composite keys are candidate keys, but not all candidate keys are composite.

💡 When Designing a Database

  • Find all possible candidate keys
  • Choose one as the primary key
  • (Optional) Define other candidate keys as unique constraints

CREATE TABLE Users (
  user_id INT PRIMARY KEY,
  username VARCHAR UNIQUE,
  email VARCHAR UNIQUE
);


Let’s walk through a real-world example using a schema we are already working on: a shopping app that sells clothing for women, men, kids, and infants.

We’ll look at how candidate keys apply to real tables like Users, Products, Orders, etc.

🛍️ Example Schema: Shopping App

1. Users Table

CREATE TABLE Users (
  user_id SERIAL PRIMARY KEY,
  email VARCHAR UNIQUE,
  username VARCHAR UNIQUE,
  phone_number VARCHAR
);

Candidate Keys:

  • user_id ✅
  • email ✅
  • username ✅

We chose user_id as the primary key, but both email and username could also uniquely identify a user — so they’re candidate keys.


2. Products Table

CREATE TABLE Products (
  product_id SERIAL PRIMARY KEY,
  sku VARCHAR UNIQUE,
  name VARCHAR,
  category VARCHAR
);

Candidate Keys:

  • product_id ✅
  • sku ✅ (Stock Keeping Unit – a unique identifier for each product)

sku is a candidate key. We use product_id as the primary key, but you could use sku if you wanted a natural key instead.

3. Orders Table

CREATE TABLE Orders (
  order_id SERIAL PRIMARY KEY,
  user_id INT REFERENCES Users(user_id),
  order_number VARCHAR UNIQUE,
  created_at TIMESTAMP
);

Candidate Keys:

  • order_id ✅
  • order_number ✅

You might use order_number (e.g., "ORD-20250417-0012") for external reference and order_id internally. Both are unique identifiers = candidate keys.

4. OrderItems Table (Join Table)

This table links orders to the specific products and quantities purchased.

CREATE TABLE OrderItems (
  order_id INT,
  product_id INT,
  quantity INT,
  PRIMARY KEY (order_id, product_id),
  FOREIGN KEY (order_id) REFERENCES Orders(order_id),
  FOREIGN KEY (product_id) REFERENCES Products(product_id)
);

Candidate Key:

  • Composite key: (order_id, product_id) ✅

Here, a combination of order_id and product_id uniquely identifies a row — i.e., what product was ordered in which order — making it a composite candidate key, and we’ve selected it as the primary key.

👀 Summary of Candidate Keys by Table

Table      | Candidate Keys            | Primary Key Used
-----------+---------------------------+------------------------
Users      | user_id, email, username  | user_id
Products   | product_id, sku           | product_id
Orders     | order_id, order_number    | order_id
OrderItems | (order_id, product_id)    | (order_id, product_id)

Let’s explore how to implement candidate keys in both SQL and Rails (Active Record). Since we are working on a shopping app in Rails 8, I’ll show how to enforce uniqueness and data integrity in both layers:

🔹 1. Candidate Keys in SQL (PostgreSQL Example)

Let’s take the Users table with multiple candidate keys (email, username, and user_id).

CREATE TABLE users (
  user_id SERIAL PRIMARY KEY,
  email VARCHAR(255) NOT NULL UNIQUE,
  username VARCHAR(100) NOT NULL UNIQUE,
  phone_number VARCHAR(20)
);

  • user_id: chosen as the primary key
  • email and username: candidate keys, enforced via UNIQUE constraints

💎 Composite Key Example (OrderItems)

CREATE TABLE order_items (
  order_id INT,
  product_id INT,
  quantity INT NOT NULL,
  PRIMARY KEY (order_id, product_id),
  FOREIGN KEY (order_id) REFERENCES orders(order_id),
  FOREIGN KEY (product_id) REFERENCES products(product_id)
);

This sets (order_id, product_id) as a composite candidate key and primary key.

🔸 2. Candidate Keys in Rails (ActiveRecord)

Now let’s do the same with Rails models + migrations + validations.

✅ users Migration (with candidate keys)

# db/migrate/xxxxxx_create_users.rb
class CreateUsers < ActiveRecord::Migration[8.0]
  def change
    create_table :users do |t|
      t.string :email, null: false
      t.string :username, null: false
      t.string :phone_number

      t.timestamps
    end

    add_index :users, :email, unique: true
    add_index :users, :username, unique: true
  end
end

✅ User Model

class User < ApplicationRecord
  validates :email, presence: true, uniqueness: true
  validates :username, presence: true, uniqueness: true
end

✅ These are candidate keys — email and username could be primary keys, but we are using id instead.

✅ Composite Key with OrderItem (Join Table)

ActiveRecord didn’t support composite primary keys natively until Rails 7.1; even with that support available, the simplest approach for a join table is to enforce uniqueness via a multi-column index:

Migration:

class CreateOrderItems < ActiveRecord::Migration[8.0]
  def change
    create_table :order_items, id: false do |t|
      t.references :order, null: false, foreign_key: true
      t.references :product, null: false, foreign_key: true
      t.integer :quantity, null: false

      t.timestamps
    end

    add_index :order_items, [:order_id, :product_id], unique: true
  end
end

Model:

class OrderItem < ApplicationRecord
  belongs_to :order
  belongs_to :product

  validates :quantity, presence: true
  validates :order_id, uniqueness: { scope: :product_id }
end

🎯 This simulates a composite key behavior: each product can only appear once per order.

➕ Extra: Use composite_primary_keys Gem (Optional)

If you really need true composite primary keys, use:

gem 'composite_primary_keys'

But it’s best to avoid it unless your use case demands it — most Rails apps use a surrogate key (id) for simplicity.


to be continued.. 🚀

Setup 🛠 Rails 8 App – Part 11: Convert 🔄 Rails App from SQLite to PostgreSQL

If you’ve already built a Rails 8 app using the default SQLite setup and now want to switch to PostgreSQL, here’s a clean step-by-step guide to make the transition smooth:

1. 🔧 Setup PostgreSQL on macOS

🔷 Step 1: Install PostgreSQL via Homebrew

Run the following:

brew install postgresql

This created a default database cluster for me (check the output below), so you can skip Step 3.

==> Summary
๐Ÿบ  /opt/homebrew/Cellar/postgresql@14/14.17_1: 3,330 files, 45.9MB

==> Running `brew cleanup postgresql@14`...
==> postgresql@14
This formula has created a default database cluster with:
  initdb --locale=C -E UTF-8 /opt/homebrew/var/postgresql@14

To start postgresql@14 now and restart at login:
  brew services start postgresql@14

Or, if you don't want/need a background service you can just run:
  /opt/homebrew/opt/postgresql@14/bin/postgres -D /opt/homebrew/var/postgresql@14

After installation, check the version:

psql --version
> psql (PostgreSQL) 14.17 (Homebrew)

🔷 Step 2: Start PostgreSQL Service

To start PostgreSQL now and have it start automatically at login:

brew services start postgresql
==> Successfully started `postgresql@14` (label: homebrew.mxcl.postgresql@14)

If you just want to run it in the background without autostart:

# pg_ctl — initialize, start, stop, or control a PostgreSQL server
pg_ctl -D /opt/homebrew/var/postgresql@14 start

https://www.postgresql.org/docs/current/app-pg-ctl.html

You can find the installed version using:

brew list | grep postgres

🔷 Step 3: Initialize the Database (if needed)

Sometimes Homebrew does this automatically. If not:

initdb /opt/homebrew/var/postgresql@<version>

Or a more general version:

initdb /usr/local/var/postgres

Key functions of initdb: it creates a new database cluster and initializes the cluster’s default locale and character-set encoding.

In essence, initdb prepares the environment for a PostgreSQL database to be used and provides a foundation for creating and managing databases within that cluster.

🔷 Step 4: Create a User and Database

PostgreSQL uses role-based access control. Create a user with superuser privileges:

# createuser creates a new Postgres user
createuser -s postgres

createuser is a command-line wrapper around the SQL command CREATE ROLE, which you could also run yourself in the Postgres interactive terminal psql. Thus, there is nothing special about creating users via this method versus others.

Then switch to psql:

psql postgres

You can also create a database:

createdb <db_name>

🔷 Step 5: Connect and Use psql

psql -d <db_name>

Inside the psql shell, try:

\l    -- list databases
\dt   -- list tables
\q    -- quit

🔷 Step 6: Use a GUI (Optional)

For a friendly UI, install one of the following:

  • pgAdmin
  • Postico
  • TablePlus

2. Update Gemfile

Replace SQLite gem with PostgreSQL:

# Remove or comment this:
# gem "sqlite3", "~> 1.4"

# Add this:
gem "pg", "~> 1.4"

Then run:

bundle install


3. Update config/database.yml

Replace the entire contents of config/database.yml with the following:

default: &default
  adapter: postgresql
  encoding: unicode
  username: postgres
  password:
  host: localhost
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>

development:
  <<: *default
  database: your_app_name_development

test:
  <<: *default
  database: your_app_name_test

production:
  primary: &primary_production
    <<: *default
    database: your_app_name_production
    username: your_production_username
    password: <%= ENV['YOUR_APP_DATABASE_PASSWORD'] %>
  cache:
    <<: *primary_production
    database: your_app_name_production_cache
    migrations_paths: db/cache_migrate
  queue:
    <<: *primary_production
    database: your_app_name_production_queue
    migrations_paths: db/queue_migrate
  cable:
    <<: *primary_production
    database: your_app_name_production_cable
    migrations_paths: db/cable_migrate

Replace your_app_name with your actual Rails app name.

4. Drop SQLite Database (Optional)

rm storage/development.sqlite3
rm storage/test.sqlite3

5. Create and Setup PostgreSQL Database

rails db:create
rails db:migrate

If you had seed data:

rails db:seed

6. Test It Works

Boot up your server:

bin/dev

Then go to http://localhost:3000 and confirm everything works.

7. Check psql manually (Optional)

psql -d your_app_name_development

Then run:

\dt     -- view tables
\q      -- quit

8. Update .gitignore

Note: add these if /storage/* is not already ignored.

Make sure SQLite DBs are not accidentally committed:

/storage/*.sqlite3
/storage/*.sqlite3-journal


After moving to PostgreSQL

I was getting an issue with postgres column, where I have the following data in the migration:

# migration
t.decimal :rating, precision: 1, scale: 1

# log
ActiveRecord::RangeError (PG::NumericValueOutOfRange: ERROR:  numeric field overflow
12:44:36 web.1  | DETAIL:  A field with precision 1, scale 1 must round to an absolute value less than 1.
12:44:36 web.1  | )

The value passed is 4.3. I was not getting this issue in the SQLite DB.

What does precision: 1, scale: 1 mean?

  • precision: Total number of digits (both left and right of the decimal).
  • scale: Number of digits after the decimal point

If you want to store ratings like 4.3, 4.5, etc., a good setup is:

t.decimal :rating, precision: 2, scale: 1
# revert and migrate for products table

✗ rails db:migrate:down VERSION=2025031XXXXX -t
✗ rails db:migrate:up VERSION=2025031XXXXXX -t

Then go to http://localhost:3000 and confirm everything works.

to be continued.. 🚀

Setup 🛠 Rails 8 App – Part 10: PostgreSQL Into The Action

For a Ruby on Rails 8 application, the choice of database depends on your specific needs, but here’s a breakdown of the best options and when to use each:

PostgreSQL (Highly Recommended)

Best overall choice for most Rails apps.

Why:

  • First-class support in Rails.
  • Advanced features like full-text search, JSONB support, CTEs, window functions.
  • Strong consistency and reliability.
  • Scales well vertically and horizontally (with tools like Citus).
  • Used by: GitHub, Discourse, Basecamp, Shopify.

Use if:

  • You’re building a standard Rails web app or API.
  • You need advanced query features or are handling complex data types (e.g., JSON).

SQLite (For development/testing only)

  • Lightweight, file-based.
  • Fast and easy to set up.
  • But not recommended for production.

Use if:

  • You’re building a quick prototype or local dev/testing app.
  • NOT for multi-user production environments.

MySQL / MariaDB

  • Also supported by Rails.
  • Can work fine for simpler applications.
  • Lacks some advanced features (like robust JSON support or full Postgres-style indexing).
  • Not the default in many modern Rails setups.

Use if:

  • Your team already has MySQL infrastructure or legacy systems.
  • You need horizontal scaling with Galera Cluster or similar setups.

Others (NoSQL like MongoDB, Redis, etc.)

  • Use Redis for caching and background job data (not as primary DB).
  • Use MongoDB or other NoSQL only if your data model really demands it (e.g., unstructured documents, event sourcing).

Recommendation Summary:

Use Case                            | Recommended DB
------------------------------------+-------------------------
Production web/API app              | PostgreSQL
Dev/prototyping/local testing       | SQLite
Legacy systems/MySQL infrastructure | MySQL/MariaDB
Background jobs/caching             | Redis
Special needs (e.g., documents)     | MongoDB (with caution)

If you’re starting fresh or building something scalable and modern with Rails 8, go with PostgreSQL.

Let’s break that down:

💬 What does “robust JSON support” mean?

PostgreSQL supports special column types, json and jsonb, which let you store structured JSON data directly in your database — like hashes or objects.

Why it matters:

  • You can store dynamic data without needing to change your schema.
  • You can query inside the JSON using SQL (->, ->>, @>, etc.).
  • You can index parts of the JSON — for speed.

🔧 Example:

You have a products table with a specs column that holds tech specs in JSON:

specs = {
  "color": "black",
  "brand": "Libas",
  "dimensions": {"chest": "34", "waist": "30", "shoulder": "13.5"}
}

You can query like:

SELECT * FROM products WHERE specs->>'color' = 'black';

Or check if the JSON contains a value:

SELECT * FROM products WHERE specs @> '{"brand": "Libas"}';

You can even index specs->>'color' to make these queries fast.
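
For instance, an expression index (note the extra parentheses PostgreSQL requires around the expression):

CREATE INDEX idx_products_specs_color ON products ((specs->>'color'));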


💬 What does “full Postgres-style indexing” mean?

PostgreSQL supports a wide variety of powerful indexing options, which improve query performance and flexibility.

โš™๏ธ Types of Indexes PostgreSQL supports:

Index TypeUse Case
B-TreeDefault; used for most equality and range searches
GIN (Generalized Inverted Index)Fast indexing for JSON, arrays, full-text search
Partial IndexesIndex only part of the data (e.g., WHERE active = true)
Expression IndexesIndex a function or expression (e.g., LOWER(email))
Covering Indexes (INCLUDE)Fetch data directly from the index, avoiding table reads
  • B-Tree Indexes: B-tree indexes are more suitable for single-value columns.
  • When to Use GIN Indexes: When you frequently search for specific elements within arrays, JSON documents, or other composite data types.
  • Example for GIN Indexes: Imagine you have a table with a JSONB column containing document metadata. A GIN index on this column would allow you to quickly find all documents that have a specific author or belong to a particular category (see the sketch below).
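
A sketch of the two indexing options for a jsonb specs column (the index names are illustrative):

-- GIN index: speeds up containment queries like  specs @> '{"brand": "Libas"}'
CREATE INDEX idx_products_specs_gin ON products USING GIN (specs);

-- B-tree expression index: speeds up equality on one extracted key
CREATE INDEX idx_products_specs_brand ON products ((specs->>'brand'));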

Why does this matter for our shopping app?

  • We can store and filter products with dynamic specs (e.g., kurtas, shorts, pants) without new columns.
  • Full-text search on product names/descriptions.
  • Fast filters: color = 'red' AND brand = 'Libas' even if those are stored in JSON.
  • Index custom expressions like LOWER(email) for case-insensitive login.

💬 What are Common Table Expressions (CTEs)?

CTEs are temporary result sets you can reference within a SQL query — like defining a mini subquery that makes complex SQL easier to read and write.

WITH recent_orders AS (
  SELECT * FROM orders WHERE created_at > NOW() - INTERVAL '7 days'
)
SELECT * FROM recent_orders WHERE total > 100;

  • Breaking complex queries into readable parts.
  • Re-using result sets without repeating subqueries.

In Rails, Active Record 7.1+ supports .with natively (older apps used gems like scenic or with_cte):

Order
  .with(recent_orders: Order.where('created_at > ?', 7.days.ago))
  .from('recent_orders')
  .where('total > ?', 100)

💬 What are Window Functions?

Window functions perform calculations across rows related to the current row — unlike aggregate functions, they don’t group results into one row.

🔧 Example: Rank users by their score within each team:

SELECT
  user_id,
  team_id,
  score,
  RANK() OVER (PARTITION BY team_id ORDER BY score DESC) AS rank
FROM users;

Use cases:

  • Ranking rows (like leaderboards).
  • Running totals or moving averages.
  • Calculating differences between rows (e.g. “How much did this order increase from the last?”).

🛤 In Rails:

Window functions are available through raw SQL or Arel. Here’s a basic example:

User
  .select("user_id, team_id, score, RANK() OVER (PARTITION BY team_id ORDER BY score DESC) AS rank")

CTEs and Window functions are fully supported in PostgreSQL, making it the go-to DB for any Rails 8 app that needs advanced querying.

JSONB Support

JSONB stands for “JSON Binary” and is a binary representation of JSON data that allows for efficient storage and retrieval of complex data structures.

This can be useful when you have data that doesn’t fit neatly into traditional relational database tables, such as nested or variable-length data structures.

Absolutely — storing JSON in a relational database (like PostgreSQL) can be super powerful when used wisely. It gives you schema flexibility without abandoning the structure and power of SQL.

Here are real-world use cases for using JSON columns in relational databases:

🔧 1. Flexible Metadata / Extra Attributes

Let users store arbitrary attributes that don’t require schema changes every time.

Use case: Product variants, custom fields

t.jsonb :metadata

{
  "color": "red",
  "size": "XL",
  "material": "cotton"
}

=> Good when:

  • You can’t predict all the attributes users will need.
  • You don’t want to create dozens of nullable columns.

๐ŸŽ›๏ธ 2. Storing Settings or Preferences

User or app settings that vary a lot.

Use case: Notification preferences, UI layout, feature toggles

{
  "email": true,
  "sms": false,
  "theme": "dark"
}

=> Easy to store and retrieve as a blob without complex joins.

๐ŸŒ 3. API Response Caching

Store external API responses for caching or auditing.

Use case: Storing Stripe, GitHub, or weather API responses.

t.jsonb :api_response

=> Avoids having to map every response field into a column.

📦 4. Storing Logs or Events

Use case: Audit trails, system logs, user events

{
  "action": "login",
  "timestamp": "2025-04-18T10:15:00Z",
  "ip": "123.45.67.89"
}

=> Great for capturing varied data over time without a rigid schema.

📊 5. Embedded Mini-Structures

Use case: A form builder app storing user-created forms and fields.

{
  "fields": [
    { "type": "text", "label": "Name", "required": true },
    { "type": "email", "label": "Email", "required": false }
  ]
}

=> When each row can have nested, structured data — almost like a mini-document.

๐Ÿ•น๏ธ 7. Device or Browser Info (User Agents)

Use case: Analytics, device fingerprinting

{
  "browser": "Safari",
  "os": "macOS",
  "version": "17.3"
}

=> You don’t need to normalize or query this often; perfect for JSON.


JSON vs JSONB in PostgreSQL

Use jsonb over json unless you need to preserve the exact input text (key order, duplicate keys, whitespace).

  • jsonb is a binary format → faster and indexable
  • You can do fancy stuff like:
SELECT * FROM users WHERE preferences ->> 'theme' = 'dark';

Or in Rails:

User.where("preferences ->> 'theme' = ?", 'dark')

store and store_accessor

These helpers let you treat JSON or text-based hash columns like structured data, so you can access keys as if they were real database columns.

๐Ÿ”น store

  • Used to declare a serialized store (typically on a plain text column; native json/jsonb columns don’t need a coder).
  • Works best with key/value stores.

๐Ÿ‘‰ Example:

Let’s say your users table has a settings column of type text (for native json/jsonb columns, skip ahead to store_accessor below):

# migration
add_column :users, :settings, :text

Now in your model:

class User < ApplicationRecord
  store :settings, accessors: [:theme, :notifications], coder: JSON
end

You can now do this:

user.theme = "dark"
user.notifications = true
user.save

user.settings
# => { "theme" => "dark", "notifications" => true }

๐Ÿ”น store_accessor

A lightweight version that only declares attribute accessors for keys inside a JSON column. It doesn’t include serialization logic, so you use it with a json/jsonb column that already behaves like a Hash.

๐Ÿ‘‰ Example:

class User < ApplicationRecord
  store_accessor :settings, :theme, :notifications
end

This gives you:

  • user.theme, user.theme=
  • user.notifications, user.notifications=
🤔 When to Use Each?

Feature        | When to Use
---------------+-------------------------------------------------------
store          | When you need both serialization and accessors
store_accessor | When your column is already structured (jsonb, etc.)

If you’re using PostgreSQL with jsonb columns, it’s more common to just use store_accessor.

Querying JSON Fields
User.where("settings ->> 'theme' = ?", "dark")

Note that store_accessor only adds Ruby-level accessors; it does not make those keys queryable as columns, so User.where(theme: "dark") will raise an error. Use the jsonb operators shown above for querying.

💡 But remember: you’ll only be able to query these fields efficiently if you’re using jsonb + proper indexes.
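
For example, a couple of index options for the settings column used above (index names assumed; a sketch, not a prescription):

-- expression index: speeds up equality checks on a single key
CREATE INDEX idx_users_settings_theme ON users ((settings ->> 'theme'));

-- GIN index: speeds up containment (@>) and key-existence (?) queries
CREATE INDEX idx_users_settings ON users USING GIN (settings);

-- each query below can now use one of the indexes above:
SELECT * FROM users WHERE settings ->> 'theme' = 'dark';
SELECT * FROM users WHERE settings @> '{"theme": "dark"}';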


๐Ÿ”ฅ Conclusion:

  • PostgreSQL can store, search, and index inside JSON fields natively.
  • This lets you keep your schema flexible and your queries fast.
  • Combined with its advanced indexing, it’s ideal for a modern e-commerce app with dynamic product attributes, filtering, and searching.

๐Ÿงฌ Extracting and Joining on Ancestry Values in PostgreSQL: A Complete Guide

I was working on a project where we ran into issues with ancestry path data in a PostgreSQL DB. Working with hierarchical data in PostgreSQL often involves ancestry paths stored as delimited strings. This guide explores how to extract specific values from ancestry columns and use them effectively in join operations, complete with practical examples, troubleshooting tips, and the fixes I applied.

๐ŸŽฏ Introduction

PostgreSQL’s robust string manipulation capabilities make it ideal for handling complex hierarchical data structures. When working with ancestry values stored in text columns, you often need to extract specific parts of the hierarchy for data analysis, reporting, or joining operations.

This article demonstrates how to:

  • โœจ Extract values from ancestry strings using regular expressions
  • ๐Ÿ”— Perform efficient joins on extracted ancestry data
  • ๐Ÿ›ก๏ธ Handle edge cases and avoid common pitfalls
  • โšก Optimize queries for better performance

โ“ Problem Statement

๐Ÿ“Š Scenario

Consider a projects table with an ancestry column containing hierarchical paths like:

-- Sample ancestry values
"6/4/5/3"     -- Parent chain: 6 โ†’ 4 โ†’ 5 โ†’ 3
"1/2"         -- Parent chain: 1 โ†’ 2
"9"           -- Single parent: 9
NULL          -- Root level project

๐ŸŽฏ Goal

We need to:

  1. Extract the last integer value from the ancestry path
  2. Use this value in a JOIN operation to fetch parent project data
  3. Handle edge cases like NULL values and malformed strings

๐Ÿ—๏ธ Understanding the Data Structure

๐Ÿ“ Table Structure

CREATE TABLE projects (
    id BIGINT PRIMARY KEY,
    name VARCHAR(255) NOT NULL,
    ancestry TEXT,  -- Stores parent hierarchy as "id1/id2/id3"
    created_at TIMESTAMP DEFAULT NOW()
);

-- Sample data
INSERT INTO projects (id, name, ancestry) VALUES
    (1, 'Root Project', NULL),
    (2, 'Department A', '1'),
    (3, 'Team Alpha', '1/2'),
    (4, 'Task 1', '1/2/3'),
    (5, 'Subtask 1A', '1/2/3/4');

๐Ÿงญ Ancestry Path Breakdown

Project ID | Name         | Ancestry | Immediate Parent
-----------+--------------+----------+-----------------
1          | Root Project | NULL     | None (root)
2          | Department A | 1        | 1
3          | Team Alpha   | 1/2      | 2
4          | Task 1       | 1/2/3    | 3
5          | Subtask 1A   | 1/2/3/4  | 4

๐Ÿ”ง Solution Overview

๐ŸŽฏ Core Approach

  1. ๐Ÿ” Pattern Matching: Use regex to identify the last number in the ancestry string
  2. โœ‚๏ธ Value Extraction: Extract the matched value using regexp_replace()
  3. ๐Ÿ”„ Type Conversion: Cast the extracted string to the appropriate numeric type
  4. ๐Ÿ”— Join Operation: Use the converted value in JOIN conditions

๐Ÿ“ Basic Query Structure

SELECT projects.*
FROM projects
LEFT OUTER JOIN projects AS parent_project 
    ON CAST(
        regexp_replace(projects.ancestry, '.*\/(\d+)$', '\1')
        AS BIGINT
    ) = parent_project.id
WHERE projects.ancestry IS NOT NULL;

๐Ÿ“ Regular Expression Deep Dive

๐ŸŽฏ Pattern Breakdown: .*\/(\d+)$

Let’s dissect this regex pattern:

.*      -- Match any characters (greedy)
\/      -- Match literal forward slash
(\d+)   -- Capture group: one or more digits
$       -- End of string anchor

๐Ÿ“Š Pattern Matching Examples

Ancestry String | Regex Match | Captured Group | Result
----------------+-------------+----------------+-------------------
"6/4/5/3"       | 5/3         | 3              | ✅ 3
"1/2"           | 1/2         | 2              | ✅ 2
"9"             | No match    | (none)         | ❌ Original string
"abc/def"       | No match    | (none)         | ❌ Original string

๐Ÿ”ง Alternative Regex Patterns

-- For single-level ancestry (no slashes)
regexp_replace(ancestry, '^(\d+)$', '\1')

-- For extracting first parent instead of last
regexp_replace(ancestry, '^(\d+)\/.*', '\1')

-- For handling mixed delimiters (/ or -)
regexp_replace(ancestry, '.*[\/\-](\d+)$', '\1')
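
You can sanity-check any of these patterns straight from psql before wiring them into a join; a quick sketch:

SELECT regexp_replace('6/4/5/3', '.*\/(\d+)$', '\1');  -- => 3
SELECT regexp_replace('1/2',     '.*\/(\d+)$', '\1');  -- => 2
SELECT regexp_replace('9',       '.*\/(\d+)$', '\1');  -- => 9 (no match, string returned unchanged)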

๐Ÿ’ป Implementation Examples

๐Ÿ”ง Example 1: Basic Parent Lookup

-- Find each project with its immediate parent information
SELECT 
    p.id,
    p.name AS project_name,
    p.ancestry,
    parent.id AS parent_id,
    parent.name AS parent_name
FROM projects p
LEFT OUTER JOIN projects parent 
    ON CAST(
        regexp_replace(p.ancestry, '.*\/(\d+)$', '\1')
        AS BIGINT
    ) = parent.id
WHERE p.ancestry IS NOT NULL
ORDER BY p.id;

Expected Output:

 id | project_name | ancestry | parent_id | parent_name
----+--------------+----------+-----------+-------------
  2 | Department A | 1        |         1 | Root Project
  3 | Team Alpha   | 1/2      |         2 | Department A
  4 | Task 1       | 1/2/3    |         3 | Team Alpha
  5 | Subtask 1A   | 1/2/3/4  |         4 | Task 1

๐ŸŽฏ Example 2: Handling Edge Cases

-- Robust query that handles all edge cases
SELECT 
    p.id,
    p.name AS project_name,
    p.ancestry,
    CASE 
        WHEN p.ancestry IS NULL THEN 'Root Level'
        WHEN p.ancestry !~ '.*\/(\d+)$' THEN 'Single Parent'
        ELSE 'Multi-level'
    END AS hierarchy_type,
    parent.name AS parent_name
FROM projects p
LEFT OUTER JOIN projects parent ON 
    CASE 
        -- Handle multi-level ancestry
        WHEN p.ancestry ~ '.*\/(\d+)$' THEN
            CAST(regexp_replace(p.ancestry, '.*\/(\d+)$', '\1') AS BIGINT)
        -- Handle single-level ancestry
        WHEN p.ancestry ~ '^\d+$' THEN
            CAST(p.ancestry AS BIGINT)
        ELSE NULL
    END = parent.id
ORDER BY p.id;

๐Ÿ“ˆ Example 3: Aggregating Child Counts

-- Count children for each project
WITH parent_child_mapping AS (
    SELECT 
        p.id AS child_id,
        CASE 
            WHEN p.ancestry ~ '.*\/(\d+)$' THEN
                CAST(regexp_replace(p.ancestry, '.*\/(\d+)$', '\1') AS BIGINT)
            WHEN p.ancestry ~ '^\d+$' THEN
                CAST(p.ancestry AS BIGINT)
            ELSE NULL
        END AS parent_id
    FROM projects p
    WHERE p.ancestry IS NOT NULL
)
SELECT 
    p.id,
    p.name,
    COUNT(pcm.child_id) AS direct_children_count
FROM projects p
LEFT JOIN parent_child_mapping pcm ON p.id = pcm.parent_id
GROUP BY p.id, p.name
ORDER BY direct_children_count DESC;

๐Ÿšจ Common Errors and Solutions

โŒ Error 1: “invalid input syntax for type bigint”

Problem:

-- โŒ Incorrect: Casting entire ancestry string
CAST(projects.ancestry AS BIGINT) = parent.id

Solution:

-- ✅ Correct: Cast only the extracted value
CAST(
    regexp_replace(projects.ancestry, '.*\/(\d+)$', '\1') 
    AS BIGINT
) = parent.id

โŒ Error 2: Unexpected Results with Single-Level Ancestry

Problem: Single values like "9" don’t match the pattern .*\/(\d+)$

Solution:

-- ✅ Handle both multi-level and single-level ancestry
CASE 
    WHEN ancestry ~ '.*\/(\d+)$' THEN
        CAST(regexp_replace(ancestry, '.*\/(\d+)$', '\1') AS BIGINT)
    WHEN ancestry ~ '^\d+$' THEN
        CAST(ancestry AS BIGINT)
    ELSE NULL
END

โŒ Error 3: NULL Ancestry Values Causing Issues

Problem: NULL values can cause unexpected behaviour in joins

Solution:

-- ✅ Explicitly handle NULL values
WHERE ancestry IS NOT NULL 
AND ancestry != ''

๐Ÿ›ก๏ธ Complete Error-Resistant Query

SELECT 
    p.id,
    p.name AS project_name,
    p.ancestry,
    parent.id AS parent_id,
    parent.name AS parent_name
FROM projects p
LEFT OUTER JOIN projects parent ON 
    CASE 
        WHEN p.ancestry IS NULL OR p.ancestry = '' THEN NULL
        WHEN p.ancestry ~ '.*\/(\d+)$' THEN
            CAST(regexp_replace(p.ancestry, '.*\/(\d+)$', '\1') AS BIGINT)
        WHEN p.ancestry ~ '^\d+$' THEN
            CAST(p.ancestry AS BIGINT)
        ELSE NULL
    END = parent.id
ORDER BY p.id;

โšก Performance Considerations

๐Ÿ“Š Indexing Strategies

-- Create index on ancestry for faster pattern matching
CREATE INDEX idx_projects_ancestry ON projects (ancestry);

-- Create partial index for non-null ancestry values
CREATE INDEX idx_projects_ancestry_not_null 
ON projects (ancestry) 
WHERE ancestry IS NOT NULL;

-- Create functional index for extracted parent IDs
CREATE INDEX idx_projects_parent_id ON projects (
    CASE 
        WHEN ancestry ~ '.*\/(\d+)$' THEN
            CAST(regexp_replace(ancestry, '.*\/(\d+)$', '\1') AS BIGINT)
        WHEN ancestry ~ '^\d+$' THEN
            CAST(ancestry AS BIGINT)
        ELSE NULL
    END
) WHERE ancestry IS NOT NULL;

๐Ÿ”„ Query Optimization Tips

  1. 🎯 Use CTEs for Complex Logic
WITH parent_lookup AS (
    SELECT 
        id,
        CASE 
            WHEN ancestry ~ '.*\/(\d+)$' THEN
                CAST(regexp_replace(ancestry, '.*\/(\d+)$', '\1') AS BIGINT)
            WHEN ancestry ~ '^\d+$' THEN
                CAST(ancestry AS BIGINT)
        END AS parent_id
    FROM projects
    WHERE ancestry IS NOT NULL
)
SELECT p.*, parent.name AS parent_name
FROM parent_lookup p
JOIN projects parent ON p.parent_id = parent.id;
  2. ⚡ Consider Materialized Views for Frequent Queries
CREATE MATERIALIZED VIEW project_hierarchy AS
SELECT 
    p.id,
    p.name,
    p.ancestry,
    CASE 
        WHEN p.ancestry ~ '.*\/(\d+)$' THEN
            CAST(regexp_replace(p.ancestry, '.*\/(\d+)$', '\1') AS BIGINT)
        WHEN p.ancestry ~ '^\d+$' THEN
            CAST(p.ancestry AS BIGINT)
    END AS parent_id
FROM projects p;

-- Refresh when data changes
REFRESH MATERIALIZED VIEW project_hierarchy;

๐Ÿ› ๏ธ Advanced Techniques

๐Ÿ” Extracting Multiple Ancestry Levels

-- Extract all ancestry levels as an array
SELECT 
    id,
    name,
    ancestry,
    string_to_array(ancestry, '/') AS ancestry_array,
    -- Get specific levels
    split_part(ancestry, '/', 1) AS level_1,
    split_part(ancestry, '/', 2) AS level_2,
    split_part(ancestry, '/', -1) AS last_level  -- negative index requires PostgreSQL 14+
FROM projects
WHERE ancestry IS NOT NULL;

๐Ÿงฎ Calculating Hierarchy Depth

-- Calculate the depth of each project in the hierarchy
SELECT 
    id,
    name,
    ancestry,
    CASE 
        WHEN ancestry IS NULL THEN 0
        ELSE array_length(string_to_array(ancestry, '/'), 1)
    END AS hierarchy_depth
FROM projects
ORDER BY hierarchy_depth, id;

๐ŸŒณ Building Complete Hierarchy Paths

-- Recursive CTE to build full hierarchy paths
WITH RECURSIVE hierarchy_path AS (
    -- Base case: root projects
    SELECT 
        id,
        name,
        ancestry,
        name AS full_path,
        0 AS level
    FROM projects 
    WHERE ancestry IS NULL

    UNION ALL

    -- Recursive case: child projects
    SELECT 
        p.id,
        p.name,
        p.ancestry,
        hp.full_path || ' → ' || p.name AS full_path,
        hp.level + 1 AS level
    FROM projects p
    JOIN hierarchy_path hp ON 
        CASE 
            WHEN p.ancestry ~ '.*\/(\d+)$' THEN
                CAST(regexp_replace(p.ancestry, '.*\/(\d+)$', '\1') AS BIGINT)
            WHEN p.ancestry ~ '^\d+$' THEN
                CAST(p.ancestry AS BIGINT)
        END = hp.id
)
SELECT * FROM hierarchy_path
ORDER BY level, id;

โœ… Best Practices

๐ŸŽฏ Data Validation

  1. ✅ Validate Ancestry Format on Insert/Update
-- Add constraint to ensure valid ancestry format
ALTER TABLE projects 
ADD CONSTRAINT check_ancestry_format 
CHECK (
    ancestry IS NULL 
    OR ancestry ~ '^(\d+)(\/\d+)*$'
);
  1. ๐Ÿ” Regular Data Integrity Checks
-- Find orphaned projects (ancestry points to non-existent parent)
SELECT p.id, p.name, p.ancestry
FROM projects p
WHERE p.ancestry IS NOT NULL
AND NOT EXISTS (
    SELECT 1 FROM projects parent
    WHERE parent.id = CASE 
        WHEN p.ancestry ~ '.*\/(\d+)$' THEN
            CAST(regexp_replace(p.ancestry, '.*\/(\d+)$', '\1') AS BIGINT)
        WHEN p.ancestry ~ '^\d+$' THEN
            CAST(p.ancestry AS BIGINT)
    END
);
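
With the CHECK constraint from step 1 in place, malformed ancestry values are rejected at write time. For example (a hypothetical row):

INSERT INTO projects (id, name, ancestry) VALUES (99, 'Bad Project', 'abc/def');
-- ERROR:  new row for relation "projects" violates check constraint "check_ancestry_format"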

๐Ÿ›ก๏ธ Error Handling

-- Function to safely extract parent ID
CREATE OR REPLACE FUNCTION extract_parent_id(ancestry_text TEXT)
RETURNS BIGINT AS $$
BEGIN
    IF ancestry_text IS NULL OR ancestry_text = '' THEN
        RETURN NULL;
    END IF;

    IF ancestry_text ~ '.*\/(\d+)$' THEN
        RETURN CAST(regexp_replace(ancestry_text, '.*\/(\d+)$', '\1') AS BIGINT);
    ELSIF ancestry_text ~ '^\d+$' THEN
        RETURN CAST(ancestry_text AS BIGINT);
    ELSE
        RETURN NULL;
    END IF;
EXCEPTION 
    WHEN OTHERS THEN
        RETURN NULL;
END;
$$ LANGUAGE plpgsql IMMUTABLE;

-- Usage
SELECT p.*, parent.name AS parent_name
FROM projects p
LEFT JOIN projects parent ON extract_parent_id(p.ancestry) = parent.id;

๐Ÿ“Š Monitoring and Maintenance

-- Query to analyze ancestry data quality
SELECT 
    'Total Projects' AS metric,
    COUNT(*) AS count
FROM projects

UNION ALL

SELECT 
    'Projects with Ancestry' AS metric,
    COUNT(*) AS count
FROM projects 
WHERE ancestry IS NOT NULL

UNION ALL

SELECT 
    'Valid Ancestry Format' AS metric,
    COUNT(*) AS count
FROM projects 
WHERE ancestry ~ '^(\d+)(\/\d+)*$'

UNION ALL

SELECT 
    'Orphaned Projects' AS metric,
    COUNT(*) AS count
FROM projects p
WHERE p.ancestry IS NOT NULL
AND extract_parent_id(p.ancestry) NOT IN (SELECT id FROM projects);

๐Ÿ“ Conclusion

Working with ancestry data in PostgreSQL requires careful handling of string manipulation, type conversion, and edge cases. By following the techniques outlined in this guide, you can:

๐ŸŽฏ Key Takeaways

  1. ๐Ÿ” Use robust regex patterns to handle different ancestry formats
  2. ๐Ÿ›ก๏ธ Always handle edge cases like NULL values and malformed strings
  3. โšก Consider performance implications and use appropriate indexing
  4. โœ… Implement data validation to maintain ancestry integrity
  5. ๐Ÿ”ง Create reusable functions for complex extraction logic

๐Ÿ’ก Final Recommendations

  • ๐ŸŽฏ Test thoroughly with various ancestry formats
  • ๐Ÿ“Š Monitor query performance and optimize as needed
  • ๐Ÿ”„ Consider alternative approaches like ltree for complex hierarchies
  • ๐Ÿ“š Document your ancestry format for team members
  • ๐Ÿ› ๏ธ Implement proper error handling in production code

The techniques demonstrated here provide a solid foundation for working with hierarchical data in PostgreSQL. Whether you’re building organizational charts, category trees, or project hierarchies, these patterns will help you extract and manipulate ancestry data effectively and reliably! ๐Ÿš€


๐Ÿ“– Additional Resources

Understanding the Array Aggregation Function in PostgreSQL (array_agg)

PostgreSQL, also known as Postgres, is a powerful and feature-rich relational database management system. One of its notable features is the array aggregation function, array_agg, which allows you to aggregate values from multiple rows into a single array. In this blog post, we’ll explore how array_agg works, its applications, and considerations for performance.

How Does array_agg Work?

The array_agg function takes an expression as an argument and returns an array containing the values of that expression for all the rows that match the query. Let’s illustrate this with an example.

Consider a table called employees with columns id, name, and department. Suppose we want to aggregate all the names of employees belonging to the “Sales” department into an array. We can achieve this using the following query:

SELECT array_agg(name) AS sales_employees
FROM employees
WHERE department = 'Sales';

The result of this query will be a single row with a column named sales_employees, which contains an array of all the names of employees in the “Sales” department.
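
array_agg also accepts DISTINCT and an internal ORDER BY, which is handy when you want the array de-duplicated and sorted; a quick sketch:

SELECT array_agg(DISTINCT name ORDER BY name) AS sales_employees
FROM employees
WHERE department = 'Sales';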

Usage of array_agg with Subqueries

The ability to get an array as the output opens up various possibilities, especially when used in subqueries. You can leverage this feature to aggregate data from related tables or filter results based on complex conditions.

For instance, imagine you have two tables, orders and order_items, where each order can have multiple items. You want to retrieve a list of orders along with an array of item names for each order. The following query achieves this:

SELECT o.order_id, (
  SELECT array_agg(oi.item_name)
  FROM order_items oi
  WHERE oi.order_id = o.order_id
) AS item_names
FROM orders o;

In this example, the subquery within the main query’s select list utilizes array_agg to aggregate item names from the order_items table, specific to each order.

Complex Query Example Using array_agg

To demonstrate a more complex scenario, let’s consider a database that stores books and their authors. We have three tables: books, authors, and book_authors (a join table that associates books with their respective authors).

Suppose we want to retrieve a list of books along with an array of author names for each book, sorted alphabetically. We can achieve this using a query that involves joins and array_agg:

SELECT b.title, array_agg(a.author_name ORDER BY a.author_name ASC) AS authors
FROM books b
JOIN book_authors ba ON b.book_id = ba.book_id
JOIN authors a ON ba.author_id = a.author_id
GROUP BY b.book_id;

In this query, we join the tables based on their relationships and use array_agg to aggregate author names into an array for each book. The GROUP BY clause ensures each book’s author names are grouped correctly; grouping by b.book_id alone is allowed because it is the primary key, so b.title is functionally dependent on it.

Performance Considerations

While array_agg is a powerful function, it’s essential to consider its performance implications, especially when working with large datasets. Aggregating values into arrays can be computationally intensive, and the resulting array can consume significant memory.

If you anticipate working with large result sets or complex queries involving array_agg, it’s worth optimizing your database schema, indexing relevant columns, and analyzing query performance using PostgreSQL’s built-in tools.

Additionally, consider whether array_agg is the most efficient solution for your specific use case. Sometimes, alternative approaches, such as using temporary tables or custom aggregate functions, might offer better performance.
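
For instance, the correlated subquery from the orders example can be rewritten as a LATERAL join, which often gives the planner more room to optimize; a sketch, assuming the same orders and order_items tables:

SELECT o.order_id, items.item_names
FROM orders o
LEFT JOIN LATERAL (
    SELECT array_agg(oi.item_name) AS item_names
    FROM order_items oi
    WHERE oi.order_id = o.order_id
) items ON true;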

Conclusion

The array_agg function in PostgreSQL provides a powerful mechanism for aggregating values into arrays. It offers flexibility and opens up opportunities for various applications, including subqueries and complex data manipulations. However, when working with large datasets, it’s crucial to be mindful of potential performance implications and explore optimization strategies accordingly.

PostgreSQL commands to remember

A list of commands to remember when using the Postgres DB management system.

Login, Create user and password

# login to psql client
psql postgres # OR
psql -U postgres
create database mydb; # create db
create user abhilash with SUPERUSER CREATEDB CREATEROLE encrypted password 'abhilashPass!'; 
grant all privileges on database mydb to abhilash; # add privileges

Connect to DB, List tables and users, functions, views, schema

\l # lists all the databases
\c dbname # connect to db
\dt # show tables
\d table_name # Describe a table
\dn # List available schema
\df #  List available functions
\dS # List tables including system tables (use \d table_name to see a table's triggers)
\dv # List available views
\du # lists all user accounts and roles 
\du+ # is the extended version which shows even more information.

Show history, save to file, edit using editor, execution time, help

SELECT version(); # PostgreSQL server version
\g  # Execute the previous command
\s # Command history
\s filename # save Command history to a file
\i filename # Execute psql commands from a file
\? # help on psql commands
\h ALTER TABLE # To get help on specific PostgreSQL statement
\timing #  Turn on/off query execution time
\e # Edit command in your own editor
\e [function_name] # It is more useful when you edit a function in the editor. Do \df for functions
\o [file_name] # send all subsequent query results to a file
    \o out.txt
    \dt
    \o # stop redirecting; results go back to the terminal
    \dt

Change output, Quit psql

# Switch output options
\a # toggle between aligned and non-aligned column output
\H # toggle HTML output format
\q # quit psql

Reference: https://www.postgresqltutorial.com/postgresql-administration/psql-commands/

PostgreSQL Cheat Sheet

CREATE DATABASE

CREATE DATABASE dbName;

CREATE TABLE (with auto numbering integer id)

CREATE TABLE tableName (
 id serial PRIMARY KEY,
 name varchar(50) UNIQUE NOT NULL,
 dateCreated timestamp DEFAULT current_timestamp
);

Add a primary key

ALTER TABLE tableName ADD PRIMARY KEY (id);

Create an INDEX

CREATE UNIQUE INDEX indexName ON tableName (columnNames);

Backup a database (command line)

pg_dump dbName > dbName.sql

Backup all databases (command line)

pg_dumpall > pgbackup.sql

Run a SQL script (command line)

psql -f script.sql databaseName

Search using a regular expression

SELECT column FROM table WHERE column ~ 'foo.*';

The first N records

SELECT columns FROM table LIMIT 10;

Pagination

SELECT cols FROM table LIMIT 10 OFFSET 30;

Prepared Statements

PREPARE preparedInsert (int, varchar) AS
  INSERT INTO tableName (intColumn, charColumn) VALUES ($1, $2);
EXECUTE preparedInsert (1,'a');
EXECUTE preparedInsert (2,'b');
DEALLOCATE preparedInsert;

Create a Function

CREATE OR REPLACE FUNCTION month (timestamp) RETURNS integer 
 AS 'SELECT date_part(''month'', $1)::integer;'
LANGUAGE 'sql';

Table Maintenance

VACUUM ANALYZE table;

Reindex a database, table or index

REINDEX DATABASE dbName;

Show query plan

EXPLAIN SELECT * FROM table;

Import from a file

COPY destTable FROM '/tmp/somefile';

Show all runtime parameters

SHOW ALL;

Grant all permissions to a user

GRANT ALL PRIVILEGES ON table TO username;

Perform a transaction

BEGIN;
UPDATE accounts SET balance = balance + 50 WHERE id = 1;
COMMIT;

Basic SQL

Get all columns and rows from a table

SELECT * FROM table;

Add a new row

INSERT INTO table (column1,column2)
VALUES (1, 'one');

Update a row

UPDATE table SET foo = 'bar' WHERE id = 1;

Delete a row

DELETE FROM table WHERE id = 1;

From: https://www.petefreitag.com/cheatsheets/postgresql/


PostgreSQL 9.3 : Installation on ubuntu 14.04

Hi guys, I just started installing Postgres on my Ubuntu VM. I referred to some docs and followed this one: https://www.digitalocean.com/community/tutorials/how-to-install-and-use-postgresql-on-ubuntu-14-04

It’s pretty well explained on that page, but I’ll cover the important points here.

You can install Postgres using Ubuntu’s own apt packaging system. First, update the local apt repository:

$ sudo apt-get update
$ sudo apt-get install postgresql postgresql-contrib

Postgres uses role-based access tied to Unix users. After the installation, a default role called ‘postgres’ is created. You can log in to the postgres account and start using Postgres or creating new roles.

Sign in as the postgres user

$ sudo -i -u postgres

Access the Postgres console with:

$ psql

But I couldn’t get into the console; I got the following error:

postgres@8930a29k5d05:/home/rails/my_project$ psql
psql: could not connect to server: No such file or directory
        Is the server running locally and accepting
        connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?

What could be the reason for this error?

I went through the Postgres docs (http://www.postgresql.org/docs/9.3/static/server-start.html); the same error appears under section 17.3.2, “Client Connection Problems”, but the solution is not mentioned.

Original reason: the PostgreSQL server was not running after the installation.

I tried rebooting the system, expecting the init script to start the server automatically, but the server still wasn’t running. Something was preventing Postgres from starting it. What was it?

Check whether your Postgres server is running:

$ ps aux | grep postgres
postgres@8930a29k5d05:/home/rails/my_project$ ps -aux | grep postgres
root       136  0.0  0.2  47124  3056 ?        S    06:10   0:00 sudo -u postgres -s
postgres   137  0.0  0.3  18164  3220 ?        S    06:10   0:00 /bin/bash
postgres   140  0.0  0.2  15572  2192 ?        R+   06:10   0:00 ps -aux
postgres   141  0.0  0.0   4892   336 ?        R+   06:10   0:00 grep post

The server is not running.

Run the server manually by

root@8930a29k5d05:/home/rails/my_project#  /etc/init.d/postgresql start
 * Starting PostgreSQL 9.3 database server                                        [ OK ]
root@8930a29k5d05:/home/rails/my_project# ps aux | grep postgres
postgres   158  0.1  2.0 244928 20752 ?        S    06:28   0:00 /usr/lib/postgresql/9.3/bin/postgres -D /var/lib/postgresql/9.3/main -c config_file=/etc/postgresql/9.3/main/postgresql.conf
postgres   160  0.0  0.3 244928  3272 ?        Ss   06:28   0:00 postgres: checkpointer process
postgres   161  0.0  0.4 244928  4176 ?        Ss   06:28   0:00 postgres: writer process
postgres   162  0.0  0.3 244928  3272 ?        Ss   06:28   0:00 postgres: wal writer process
postgres   163  0.0  0.5 245652  6000 ?        Ss   06:28   0:00 postgres: autovacuum launcher process
postgres   164  0.0  0.3 100604  3336 ?        Ss   06:28   0:00 postgres: stats collector process
root       178  0.0  0.0   8868   884 ?        S+   06:28   0:00 grep --color=auto post
root@8930a29k5d05:/home/rails/my_project#

Now the server is running. If it still doesn’t work, try reconfiguring your locales as mentioned here:

$ dpkg-reconfigure locales

It is strange that such a popular database doesn’t provide any information about why its own server failed to start. It should give developers some clue so they can save their precious time.

What I concluded about the cause of this failure:
1. After installation, you may have to start the server manually.
OR
2. Missing locales on the machine may have prevented Postgres from starting automatically (resetting the locales seemed to help).