Rails 8 App: Create an Academic software app using SQL without using ActiveRecord – Part 2 | students | courses | schools

Design: Our Students Table -> course -> school

We need a UNIQUE constraint on user_id because:

  • One student per user (user_id should be unique)
  • Multiple students per course (course_id can be repeated)

Check Migration Files:

Key Changes:

  1. ✅ Added UNIQUE constraint: CONSTRAINT uk_students_user_id UNIQUE (user_id)
  2. 🔧 Fixed typos:
  • TIMSTAMP → TIMESTAMP
  • stidents → students

📈 Optimized indexes: No need for user_id index since UNIQUE creates one automatically
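
Here’s a minimal sketch of such a migration (not the repo’s exact file; the timestamp column and index names are assumptions, but the UNIQUE constraint matches the one above):

class CreateStudents < ActiveRecord::Migration[8.0]
  def up
    execute <<~SQL
      CREATE TABLE students (
        id BIGSERIAL PRIMARY KEY,
        user_id BIGINT NOT NULL,
        course_id BIGINT NOT NULL,
        school_id BIGINT NOT NULL,
        created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
        CONSTRAINT uk_students_user_id UNIQUE (user_id),
        CONSTRAINT fk_students_user FOREIGN KEY (user_id) REFERENCES users(id),
        CONSTRAINT fk_students_course FOREIGN KEY (course_id) REFERENCES courses(id),
        CONSTRAINT fk_students_school FOREIGN KEY (school_id) REFERENCES schools(id)
      );

      -- user_id already gets an index from its UNIQUE constraint;
      -- course_id and school_id can repeat, so they get plain indexes.
      CREATE INDEX idx_students_course_id ON students(course_id);
      CREATE INDEX idx_students_school_id ON students(school_id);
    SQL
  end

  def down
    execute <<~SQL
      DROP TABLE IF EXISTS students;
    SQL
  end
end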

Business Logic Validation:

  • user_id: One student per user ✅
  • course_id: Multiple students per course ✅
  • school_id: Multiple students per school ✅

This ensures referential integrity and business rules are enforced at the database level!


📁 Schema Storage Options:

Rails allows you to store the schema in SQL format instead of the default Ruby format. Let me explain the options and why you’d choose each:

1. Ruby Format (Default)

# db/schema.rb
ActiveRecord::Schema[8.0].define(version: 2025_07_09_074552) do
  enable_extension "pg_catalog.plpgsql"

  create_table "users", force: :cascade do |t|
    t.string "first_name", limit: 100, null: false
    t.string "email", limit: 150, null: false
    t.datetime "created_at", null: false
    t.index ["email"], name: "idx_users_email"
  end
end

2. SQL Format

-- db/structure.sql
CREATE EXTENSION IF NOT EXISTS pg_catalog.plpgsql;

CREATE TYPE gender_enum AS ENUM ('male', 'female', 'not-specified');

CREATE TABLE users (
    id bigserial PRIMARY KEY,
    first_name varchar(100) NOT NULL,
    email varchar(150) NOT NULL,
    created_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP
);

CREATE UNIQUE INDEX idx_users_email ON users(email);

Check structure.sql File: Github: db/structure.sql

🔧 How to Switch to SQL Format:

Add this to config/application.rb:

module AcademicSqlSoftware
  class Application < Rails::Application
    # ... existing config ...

    # Use SQL format for schema
    config.active_record.schema_format = :sql
  end
end

Then regenerate the schema:

# Generate SQL schema file
rails db:schema:dump

This will create db/structure.sql instead of db/schema.rb.

Comparison Table:

Feature              | Ruby Format (schema.rb) | SQL Format (structure.sql)
---------------------+-------------------------+---------------------------
Default              | ✅ Yes                  | ❌ No
Database Agnostic    | ✅ Yes                  | ❌ No (DB-specific)
Custom SQL Features  | ❌ Limited              | ✅ Full support
Views                | ❌ No                   | ✅ Yes
Stored Procedures    | ❌ No                   | ✅ Yes
Triggers             | ❌ No                   | ✅ Yes
Custom Types/ENUMs   | ❌ Limited              | ✅ Full support
Partial Indexes      | ❌ Limited              | ✅ Yes
Check Constraints    | ❌ Limited              | ✅ Yes
Version Control      | ✅ Easy to diff         | ⚠️ Harder to diff
Readability          | ✅ Ruby-like            | ⚠️ Raw SQL

When to Use Each:

Choose SQL Format When:

  • ✅ Using PostgreSQL-specific features (ENUMs, arrays, JSONB operators)
  • ✅ Need database views, triggers, or stored procedures
  • ✅ Using complex constraints (CHECK, partial indexes)
  • ✅ Working with custom data types
  • ✅ Team is SQL-proficient

Choose Ruby Format When:

  • ✅ Want database portability (switching between PostgreSQL/MySQL/SQLite)
  • ✅ Team prefers Ruby syntax
  • ✅ Using standard Rails features only
  • ✅ Need easier version control diffs

For Our Project:

I recommend SQL format because you’re using:

  • ✅ PostgreSQL ENUMs (gender_enum, category_enum) – Rails 8 supports this
  • ✅ Custom constraints (CHECK (price > 0)) – Rails 8 supports this
  • ✅ Complex foreign key relationships
  • ✅ Raw SQL migrations

Our config/application.rb should include:

config.active_record.schema_format = :sql

Then run:

rails db:schema:dump

This will give us full control over PostgreSQL features while maintaining schema consistency! 🎯
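
As a hedged illustration of those pieces (the migration class name is made up, and it assumes the products table from Part 1 with a price column; the category values come from our seed data):

class AddCategoryEnumAndPriceCheck < ActiveRecord::Migration[8.0]
  def up
    execute <<~SQL
      -- PostgreSQL ENUM type created with raw SQL
      CREATE TYPE category_enum AS ENUM ('men', 'women', 'kids', 'infants');

      -- CHECK constraint on an existing column
      ALTER TABLE products
        ADD CONSTRAINT chk_products_price_positive CHECK (price > 0);
    SQL
  end

  def down
    execute <<~SQL
      ALTER TABLE products DROP CONSTRAINT chk_products_price_positive;
      DROP TYPE category_enum;
    SQL
  end
end

Once pieces like these live in the schema, structure.sql can reproduce them exactly, which is the whole point of switching formats.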

🌱 Seed Data

1. Let’s refactor the seed file into modular components. I’ll create separate seeder classes for each model type and organize them properly.

2. Let me create individual seeder classes for each model, add seeders for the dependent models (orders and students), refactor the main seeds.rb file to use the new modular structure, and create a documentation file for it.

3. I’ve successfully refactored our seed file into a modular, maintainable architecture. Here’s what we’ve created:

🎯 Modular Seed System Overview

📁 File Structure:

db/
├── seeds.rb                    # Main orchestrator (now 20 lines vs 246!)
└── seeds/
    ├── README.md              # Complete documentation
    ├── base_seeder.rb         # Shared functionality & config
    ├── users_seeder.rb        # User creation logic
    ├── products_seeder.rb     # Product creation logic  
    ├── courses_seeder.rb      # Course creation logic
    ├── schools_seeder.rb      # School creation logic
    ├── orders_seeder.rb       # Order creation (with dependencies)
    └── students_seeder.rb     # Student creation (with constraints)
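
Here’s a minimal sketch of the orchestrator (helper names like clear_data! and print_report, and the argument lists, are assumptions about the actual repo code):

# db/seeds.rb – hedged sketch of the orchestrator
require_relative 'seeds/base_seeder'
require_relative 'seeds/users_seeder'
require_relative 'seeds/products_seeder'
require_relative 'seeds/courses_seeder'
require_relative 'seeds/schools_seeder'
require_relative 'seeds/orders_seeder'
require_relative 'seeds/students_seeder'

# Optional wipe, driven by CLEAR_DATA=true rails db:seed
BaseSeeder.clear_data! if ENV['CLEAR_DATA'] == 'true'

# Independent models first...
users    = UsersSeeder.create_users
products = ProductsSeeder.create_products
courses  = CoursesSeeder.create_courses
schools  = SchoolsSeeder.create_schools

# ...then the dependent ones (orders need users/products, students need users/courses/schools)
OrdersSeeder.create_orders(users, products)
StudentsSeeder.create_students(users, courses, schools)

BaseSeeder.print_report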

Key Benefits:

✅ Separation of Concerns:
  • Each model has its own dedicated seeder class
  • Clear responsibility boundaries
  • Easy to modify individual model generation
✅ Maintainability:
  • Single file changes vs monolithic seed file
  • Inheritance from BaseSeeder for shared functionality
  • Comprehensive documentation
✅ Flexibility:
  • Easy to add new models
  • Environment-specific configuration
  • Configurable record counts
✅ Robustness:
  • Proper dependency handling
  • Constraint compliance (stock, unique keys)
  • Error handling and validation

📋 Usage Examples:

# Basic seeding
rails db:seed

# Clear data first
CLEAR_DATA=true rails db:seed

# Scale up for testing
# Edit USERS_COUNT in base_seeder.rb then:
rails db:seed

Check my Github Seed File: Seed File – Academic SQL App

🛠️ Commit the Refactored System:

# Stage all the new modular files
git add db/seeds.rb \
        db/seeds/

# Commit with comprehensive message
git commit -m "Refactor seed system into modular architecture" -m "

Modular Structure:
- Split monolithic seeds.rb into focused seeder classes
- Create BaseSeeder with shared functionality and configuration
- Separate concerns: each model has dedicated seeder
- Add comprehensive documentation in seeds/README.md

Benefits:
- Maintainability: Easy to modify individual model generation
- Flexibility: Simple to add new models or adjust data
- Robustness: Proper dependency handling and constraint compliance
- Testability: Individual seeders can be tested in isolation

Features:
- Environment-specific behavior (auto-clear in development)
- Smart constraint handling (stock validation, unique keys)
- Progress indicators and comprehensive reporting
- Configurable record counts via constants

Files:
- db/seeds.rb: Main orchestrator (20 lines vs 246)
- db/seeds/base_seeder.rb: Shared functionality & config
- db/seeds/*_seeder.rb: Individual model seeders
- db/seeds/README.md: Complete documentation

The system is now ready for production use and easy to extend."

🎉 What We Can Do Now:

Scale Easily:

# In base_seeder.rb - change these and run rails db:seed
USERS_COUNT = 10_000
PRODUCTS_COUNT = 1_000  
ORDERS_COUNT = 50_000

Add New Models: Create new_model_seeder.rb, inherit from BaseSeeder, add to orchestration.
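
For instance, a hypothetical new seeder might look like this (the teachers table and its columns are made up for illustration; they are not part of the current schema):

# db/seeds/teachers_seeder.rb – hypothetical example
require_relative 'base_seeder'

class TeachersSeeder < BaseSeeder
  TEACHERS_COUNT = 50

  def self.create_teachers
    conn = ActiveRecord::Base.connection
    TEACHERS_COUNT.times do |i|
      conn.execute(<<~SQL)
        INSERT INTO teachers (name, created_at, updated_at)
        VALUES (#{conn.quote("Teacher #{i + 1}")}, NOW(), NOW())
      SQL
    end
  end
end

Then require it in db/seeds.rb and call TeachersSeeder.create_teachers in the right dependency order.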

Customize Data: Edit individual seeder constants and attribute methods.

Test Components:

users = UsersSeeder.create_users
products = ProductsSeeder.create_products

Our seed system is now enterprise-ready with clean architecture, comprehensive documentation, and infinite scalability! 🚀

Possible Refactoring

Now let’s move the helper methods to a dedicated helpers folder for better organization. I’ll create a helpers directory and move the shared utility methods there.

Here’s how to refactor your seed helper methods into a new helpers folder under db/seeds:

  1. Create a directory:
    db/seeds/helpers/
  2. Move all shared/random data methods (like random_gender, random_category, etc.) into a module, e.g. db/seeds/helpers/data_generators.rb:
# db/seeds/helpers/data_generators.rb
module DataGenerators
  def random_gender
    ['male', 'female', 'not-specified'].sample
  end

  def random_category
    ['men', 'women', 'kids', 'infants'].sample
  end

  def random_date_of_birth
    Date.today - rand(18..80).years - rand(365).days
  end

  def random_phone
    "+1#{rand(100..999)}-#{rand(100..999)}-#{rand(1000..9999)}"
  end

  def random_price
    [9.99, 19.99, 29.99, 49.99, 99.99, 199.99, 299.99, 499.99].sample
  end

  def random_stock
    [0, 5, 10, 25, 50, 100, 500].sample
  end

  def random_past_date(days_ago_max = 365)
    Date.today - rand(1..days_ago_max).days
  end
end
  3. In your seeders and base seeder, require and include the helper:
# At the top of each seeder or in base_seeder.rb
require_relative 'helpers/data_generators'

class UsersSeeder < BaseSeeder
  extend DataGenerators
  # ... now you can call random_gender, random_category, etc.
end
  4. Remove the helper methods from BaseSeeder and use the module instead.

Summary:

  • Move all shared/random data methods to db/seeds/helpers/data_generators.rb as a module.
  • require_relative and extend or include the module in your seeders.
  • This keeps your seed logic clean and DRY!

Our BaseSeeder still contains all the random data and progress helper methods, which are now in db/seeds/helpers/data_generators.rb and db/seeds/helpers/ui_helpers.rb.

Here’s what we should do:

  • Remove all the methods at the bottom of BaseSeeder that are now in the helpers.
  • At the top of base_seeder.rb, require the helper modules.
  • If you want to use the helpers as class methods, use extend for each module.

Here’s the refactor plan:

  1. Remove:
  • random_gender
  • random_category
  • random_date_of_birth
  • random_phone
  • random_price
  • random_stock
  • random_past_date
  • progress_indicator
  2. Add at the top:
   require_relative 'helpers/data_generators'
   require_relative 'helpers/ui_helpers'
   extend DataGenerators
   extend UiHelpers

Changes Made:

  1. In UiHelpers:
    • Removed the hardcoded print_summary and print_verification methods.
    • Ensured print_summary_box and print_verification_stats are ready to accept data.
  2. In BaseSeeder:
    • Now dynamically creates a summary_data hash with the final record counts.
    • Creates a verification_stats hash with key metrics.
    • Calls UiHelpers.print_summary_box and UiHelpers.print_verification_stats, passing in the data.
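
A minimal sketch of that data-passing shape (the exact hash keys and output formatting in the repo may differ):

# db/seeds/helpers/ui_helpers.rb – hedged sketch
module UiHelpers
  def self.progress_indicator(label, current, total)
    print "\r#{label}: #{current}/#{total}"   # simple inline progress output
  end

  def self.print_summary_box(summary_data)
    puts "\n=== Seeding Summary ==="
    summary_data.each { |label, count| puts "#{label}: #{count}" }
  end

  def self.print_verification_stats(stats)
    puts "\n=== Verification ==="
    stats.each { |label, value| puts "#{label}: #{value}" }
  end
end

# In BaseSeeder – gather the data, then hand it to the presenter
summary_data = {
  'Users'    => ActiveRecord::Base.connection.select_value('SELECT COUNT(*) FROM users'),
  'Products' => ActiveRecord::Base.connection.select_value('SELECT COUNT(*) FROM products')
}
UiHelpers.print_summary_box(summary_data)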

Benefits of this Refactor:

  • More Flexible: You can now easily add or remove items from the summary and verification sections just by changing the hashes in BaseSeeder.
  • Decoupled: The UiHelpers module is now purely for presentation and has no knowledge of your specific models.
  • Cleaner Code: The logic is now clearly separated—BaseSeeder gathers the data, and UiHelpers displays it.

Now the BaseSeeder is clean and only contains orchestration logic and configuration. All random data and UI/progress helpers are now properly delegated to the new helper modules.

We can now use these helpers in any seeder by calling random_gender,  progress_indicator, etc., thanks to the extend statements. This makes our seed system much more modular, maintainable, and DRY!

That’s a great suggestion for further refactoring. It makes sense to keep all data-related logic, including aggregation for summaries, within the DataGenerators helper.

Let’s proceed. I’ll move the hash construction into new methods inside db/seeds/helpers/data_generators.rb and then update BaseSeeder to call them.
Now that the data generation has been moved, I’ll update the BaseSeeder to call the new methods.

Issues Fixed:

  1. UiHelpers method calls: All seeder files were calling progress_indicator directly, but since the methods in UiHelpers are defined as class methods, they need to be called as UiHelpers.progress_indicator. Fixed in:
  • UsersSeeder
  • ProductsSeeder
  • CoursesSeeder
  • SchoolsSeeder
  • OrdersSeeder
  • StudentsSeeder
  2. DataGenerators references: The BaseSeeder was trying to call DataGenerators.summary_data and DataGenerators.verification_stats, but that file was deleted. I updated it to use the local summary_data and verification_stats methods that were already defined in the BaseSeeder class.

Seeding Results:

  • 10,000 users created
  • 1,000 products created
  • 50 courses created
  • 25 schools created
  • 5,000 orders created
  • 5,000 students created

The verification stats show proper data relationships:

  • 4,004 users have orders
  • 647 products have been ordered
  • 756 products are active
  • 200 students per school (average)
  • 100 students per course (average)

Final Architecture:

  1. DataGenerators: Is now responsible for all data-related tasks, including generating random primitive data (random_phone) and creating aggregated summary data (summary_data, verification_stats).
  2. UiHelpers: Is responsible for all presentation logic, taking data as input and printing it to the console in a formatted way.
  3. Individual Seeders (UsersSeeder, etc.): Responsible for the business logic of creating a specific type of record, using helpers for data and UI.
  4. BaseSeeder: The main orchestrator. It knows the correct order to call the individual seeders and delegates all data and UI tasks to the appropriate helpers.
  5. seeds.rb: The single entry point that kicks off the entire process.

to be continued … 🚀

Rails 8 App: Create an Academic software app using SQL without using ActiveRecord – Part 1 | users | products | orders

Let’s create a Rails 8 app that uses raw SQL queries instead of ActiveRecord. We’ll use the full Rails environment with ActiveRecord for infrastructure, but bypass AR’s ORM features so we can write pure SQL. Let me guide you through this step by step:

Step 1: Create the Rails App with ActiveRecord and PostgreSQL (skipping unnecessary components)

rails new academic-sql-software --database=postgresql --skip-action-cable --skip-jbuilder --skip-solid --skip-kamal

What we’re skipping and why:

  • --skip-action-cable: No WebSocket functionality needed
  • --skip-jbuilder: No JSON API views needed for our SQL practice app
  • --skip-solid: Skips Solid Cache and Solid Queue (we don’t need caching or background jobs)
  • --skip-kamal: No deployment configuration needed

What we’re keeping:

  • ActiveRecord: For database connection management and ActiveRecord::Base.connection.execute()
  • ActionController: For creating web interfaces to display our SQL query results
  • ActionView: For creating simple HTML pages to showcase our SQL learning exercises
  • PostgreSQL: Our database for practicing advanced SQL features

Why this setup is perfect for an app built on raw SQL:

  • Minimal Rails app focused on database interactions
  • Full Rails environment for development conveniences
  • ActiveRecord infrastructure without ORM usage
  • Clean setup without unnecessary overhead

=> Open config/application.rb and comment out the following for now:

# require "active_job/railtie"
...
# require "active_storage/engine"
...
# require "action_mailer/railtie"
# require "action_mailbox/engine"
...
# require "action_cable/engine"

=> Open config/environments/development.rb, config/environments/production.rb, and config/environments/test.rb, and comment out the action_mailer lines.

🤔 Why I am using ActiveRecord (even though I don’t want the ORM):

  • Database Connection Management: ActiveRecord provides robust connection pooling, reconnection handling, and connection management
  • Rails Integration: Seamless integration with Rails console, database tasks (rails db:create, rails db:migrate), and development tools
  • Raw SQL Execution: We get ActiveRecord::Base.connection.execute() which is perfect for our raw SQL writing.
  • Migration System: Easy table creation and schema management with migrations (even though we’ll query with raw SQL)
  • Database Configuration: Rails handles database.yml configuration, environment switching, and connection setup
  • Development Tools: Access to Rails console for testing queries, database tasks, and debugging

Our Learning Strategy: We’ll use ActiveRecord’s infrastructure but completely bypass its ORM methods. Instead of Student.where(), we’ll use ActiveRecord::Base.connection.execute("SELECT * FROM students WHERE...")
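
For example, a quick sketch of that pattern from the Rails console (it assumes the students table we create later; conn.quote keeps values safely escaped):

conn = ActiveRecord::Base.connection

# ORM way (what we are avoiding):         Student.where(school_id: 42)
# Raw SQL way (what we practice instead):
rows = conn.exec_query(<<~SQL)
  SELECT id, user_id, course_id
  FROM students
  WHERE school_id = #{conn.quote(42)}
SQL

rows.each { |row| puts row["user_id"] }   # exec_query returns an ActiveRecord::Result of hashes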

Step 2: Navigate to the project directory

cd academic-sql-software

Step 3: Verify PostgreSQL setup

# Check if PostgreSQL is running
brew services list | grep postgresql
# or
pg_ctl status

Database Foundation: PostgreSQL gives us advanced SQL features:

  • Complex JOINs (INNER, LEFT, RIGHT, FULL OUTER)
  • Window functions (ROW_NUMBER, RANK, LAG, LEAD)
  • Common Table Expressions (CTEs)
  • Advanced aggregations and subqueries
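
As a hedged taste of what those features look like in practice (it assumes the users table with name and created_at columns that we create later in Step 6, so it won’t run just yet):

connection = ActiveRecord::Base.connection
result = connection.execute(<<~SQL)
  -- CTE + window function: rank recent signups by date
  WITH recent_users AS (
    SELECT id, name, created_at
    FROM users
    WHERE created_at >= NOW() - INTERVAL '30 days'
  )
  SELECT id,
         name,
         ROW_NUMBER() OVER (ORDER BY created_at DESC) AS signup_rank
  FROM recent_users
SQL
result.each { |row| puts "#{row['signup_rank']}: #{row['name']}" }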

Step 4: Install dependencies

bundle install

What this gives us:

  • pg gem: Pure PostgreSQL adapter (already included with --database=postgresql)
  • ActiveRecord: For connection management only
  • Rails infrastructure: Console, generators, rake tasks

Step 5: Create the PostgreSQL databases

✗ rails db:create
Created database 'academic_sql_software_development'
Created database 'academic_sql_software_test'

Our Development Environment:

  • Creates academic_sql_software_development and academic_sql_software_test
  • Sets up connection pooling and management
  • Enables us to use Rails console for testing queries: rails console then ActiveRecord::Base.connection.execute("SELECT 1")

Our Raw SQL Approach:

# We'll use this pattern throughout our app:
connection = ActiveRecord::Base.connection
result = connection.execute("SELECT s.name, t.subject FROM students s INNER JOIN teachers t ON s.teacher_id = t.id")

Why not pure pg gem:

  • Would require manual connection management
  • No Rails integration (no console, no rake tasks)
  • More boilerplate code for connection handling
  • Loss of Rails development conveniences

Why not pure ActiveRecord ORM:

  • We want to do SQL query writing, not ActiveRecord methods.
  • Need to understand database performance implications.
  • Want to practice complex queries that might be harder to express in ActiveRecord.

Step 6: Create Users table

mkdir -p db/migrate

class CreateUsers < ActiveRecord::Migration[8.0]
  def up
    # create users table
    execute <<~SQL
      CREATE TABLE users (
        id INT,
        username VARCHAR(200),
        email VARCHAR(150),
        phone_number VARCHAR(20)
      );
    SQL
  end

  def down
    execute <<~SQL
      DROP TABLE users;
    SQL
  end
end

class CreateOrders < ActiveRecord::Migration[8.0]
  def up
    # create table orders
    execute <<~SQL
    SQL
  end

  def down
    execute <<~SQL
    SQL
  end
end
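
The CreateOrders heredocs above are left empty on purpose; here is a hedged sketch of what could go in them (the column list is an assumption; all we know so far is that orders relate users to products):

class CreateOrders < ActiveRecord::Migration[8.0]
  def up
    execute <<~SQL
      CREATE TABLE orders (
        id BIGSERIAL PRIMARY KEY,
        user_id BIGINT NOT NULL,
        product_id BIGINT NOT NULL,
        quantity INT NOT NULL DEFAULT 1,
        created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
      );
    SQL
  end

  def down
    execute <<~SQL
      DROP TABLE IF EXISTS orders;
    SQL
  end
end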

execute <<~SQL is a Rails migration method that allows you to run raw SQL statements. Let me break it down:

Components:

  1. execute – A Rails migration method that executes raw SQL directly against the database
  2. <<~SQL – Ruby’s “squiggly heredoc” syntax for multi-line strings that automatically strips leading whitespace (read: https://www.rubyguides.com/2018/11/ruby-heredoc/)

Usage:

class SomeMigration < ActiveRecord::Migration[8.0]
  # CREATE INDEX CONCURRENTLY can't run inside a transaction block,
  # so opt out of the migration's automatic DDL transaction first
  disable_ddl_transaction!

  def change
    execute <<~SQL
      CREATE INDEX CONCURRENTLY idx_users_email_lower
      ON users (LOWER(email));
    SQL
  end
end

Why use it?

  • Database-specific features: When you need PostgreSQL-specific syntax, MySQL features, etc.
  • Complex SQL: For operations that Rails DSL doesn’t support easily
  • Performance: Direct SQL can sometimes be more efficient
  • Raw SQL requirements: When you specifically need to write SQL (as mentioned in our requirement)

Example for creating a users table with raw SQL:

class CreateUsers < ActiveRecord::Migration[8.0]
  def up
    execute <<~SQL
      CREATE TABLE users (
        id BIGSERIAL PRIMARY KEY,
        name VARCHAR(255) NOT NULL,
        email VARCHAR(255) NOT NULL,
        created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
        updated_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
      );

      CREATE UNIQUE INDEX idx_users_email ON users(email);
    SQL
  end

  def down
    execute <<~SQL
      DROP TABLE IF EXISTS users;
    SQL
  end
end

The <<~SQL heredoc ends when it reaches the closing SQL delimiter, and the ~ strips indentation so your SQL can be properly formatted within the Ruby code.

Q) 🤔 Is it best practice to repeat database constraints as model validations in Rails? DB level vs app level? Or both?

Best Practice: Use BOTH 🎯

Why both levels are recommended:

Database Level (Last Line of Defense)

-- Database constraints
price DECIMAL(10,2) NOT NULL CHECK (price > 0),
category product_category NOT NULL,
title VARCHAR(250) NOT NULL

Pros:

  • Data integrity guarantee – No bad data can enter, ever
  • Performance – Database enforces constraints efficiently
  • Bulletproof – Protects against bulk operations, direct SQL, data imports
  • Concurrency safe – Prevents race conditions
  • Multi-application protection – If other apps access same DB

Application Level (User Experience)

validates :price, presence: true, numericality: { greater_than: 0 }
validates :category, inclusion: { in: %w[men women kids infants] }

Pros:

  • User-friendly errors – “Price must be greater than 0” vs “CHECK constraint violated”
  • Better UX – Validation before form submission
  • Easier testing – Can unit test validations
  • Conditional logic – Complex business rules
  • Framework features – Callbacks, custom validators

Real-world scenarios where each matters:

Database saves you when:

# Bulk operations bypass Rails validations
Product.update_all(price: -10)  # DB constraint prevents this

# Direct SQL injection attempts
# DB constraints are your last line of defense

App validations save you when:

# User gets friendly error instead of:
# PG::CheckViolation: ERROR: new row violates check constraint
@product = Product.new(price: -5)
@product.valid? # => false
@product.errors.full_messages # => ["Price must be greater than 0"]

Practical Implementation:

class Product < ApplicationRecord
  # App-level validations for UX
  validates :title, presence: true, length: { maximum: 250 }
  validates :price, presence: true, numericality: { greater_than: 0 }
  validates :category, inclusion: { in: %w[men women kids infants] }

  # Don't duplicate precision validation if DB handles it
  # The DECIMAL(10,2) constraint is sufficient at DB level
end

-- DB-level constraints for data integrity
CREATE TABLE products (
  id BIGSERIAL PRIMARY KEY,
  title VARCHAR(250) NOT NULL,
  price DECIMAL(10,2) NOT NULL CHECK (price > 0),
  category product_category NOT NULL
  -- DB handles precision automatically with DECIMAL(10,2)
);

What NOT to duplicate:

  • Precision constraints – DECIMAL(10,2) handles this perfectly
  • Data type validation – DB enforces INTEGER, BOOLEAN, etc.
  • Complex regex patterns – Better handled in app layer

Conclusion:

Use both, but strategically:

  • Database: Core data integrity, type constraints, foreign keys
  • Application: User experience, business logic, conditional rules
  • Don’t over-duplicate simple type/precision constraints that DB handles well

This approach gives you belt and suspenders protection with optimal user experience.

to be continued … 🚀

Rails 8 App: Setup Test DB in PostgreSQL | Write SQL Queries | Operators | Joins

Here’s a list of commonly used SQL comparison operators with brief explanations and examples:

📋 Basic Comparison Operators:

Operator | Meaning                    | Example                         | Result
---------+----------------------------+---------------------------------+-------------------------------------------
=        | Equal to                   | WHERE age = 25                  | Matches rows where age is 25
<>       | Not equal to (standard)    | WHERE status <> 'active'        | Matches rows where status is not 'active'
!=       | Not equal to (alternative) | WHERE id != 10                  | Same as <>, matches if id is not 10
>        | Greater than               | WHERE salary > 50000            | Matches rows with salary above 50k
<        | Less than                  | WHERE created_at < '2024-01-01' | Matches dates before Jan 1, 2024
>=       | Greater than or equal      | WHERE age >= 18                 | Matches age 18 and above
<=       | Less than or equal         | WHERE age <= 65                 | Matches age 65 and below

📋 Other Common Operators:

Operator    | Meaning                                      | Example
------------+----------------------------------------------+--------------------------------------
BETWEEN     | Within a range                               | WHERE price BETWEEN 100 AND 200
IN          | Match any value in a list                    | WHERE country IN ('US', 'CA', 'UK')
NOT IN      | Not in a list                                | WHERE role NOT IN ('admin', 'staff')
IS NULL     | Value is null                                | WHERE deleted_at IS NULL
IS NOT NULL | Value is not null                            | WHERE updated_at IS NOT NULL
LIKE        | Pattern match (case-insensitive in some DBs) | WHERE name LIKE 'J%'
ILIKE       | Case-insensitive LIKE (PostgreSQL only)      | WHERE email ILIKE '%@gmail.com'

Now that we have our products and product_variants schemas, let’s re-explore all the major SQL JOINs using these two related tables.

Products table:

   Column    |              Type              | Collation | Nullable |               Default
-------------+--------------------------------+-----------+----------+--------------------------------------
 id          | bigint                         |           | not null | nextval('products_id_seq'::regclass)
 description | text                           |           |          |
 category    | character varying              |           |          |
 created_at  | timestamp(6) without time zone |           | not null |
 updated_at  | timestamp(6) without time zone |           | not null |
 name        | character varying              |           | not null |
 rating      | numeric(2,1)                   |           |          | 0.0
 brand       | character varying              |           |          |

Product variants table:

      Column      |              Type              | Collation | Nullable |                   Default
------------------+--------------------------------+-----------+----------+----------------------------------------------
 id               | bigint                         |           | not null | nextval('product_variants_id_seq'::regclass)
 product_id       | bigint                         |           | not null |
 sku              | character varying              |           | not null |
 mrp              | numeric(10,2)                  |           | not null |
 price            | numeric(10,2)                  |           | not null |
 discount_percent | numeric(5,2)                   |           |          |
 size             | character varying              |           |          |
 color            | character varying              |           |          |
 stock_quantity   | integer                        |           |          | 0
 specs            | jsonb                          |           | not null | '{}'::jsonb
 created_at       | timestamp(6) without time zone |           | not null |
 updated_at       | timestamp(6) without time zone |           | not null |

💎 SQL JOINS with products and product_variants

These tables are related through:

product_variants.product_id → products.id

So we can use that for all join examples.

🔸 1. INNER JOIN – Show only products with variants
SELECT 
  p.name, 
  pv.sku, 
  pv.price 
FROM products p
INNER JOIN product_variants pv ON p.id = pv.product_id;

♦️ Only returns products that have at least one variant.

🔸 2. LEFT JOIN – Show all products, with variants if available
SELECT 
  p.name, 
  pv.sku, 
  pv.price 
FROM products p
LEFT JOIN product_variants pv ON p.id = pv.product_id;

♦️ Returns all products, even those with no variants (NULLs in variant columns).

🔸 3. RIGHT JOIN – Show all variants, with product info if available

(Less common, but useful if variants might exist without a product record)

SELECT 
  pv.sku, 
  pv.price, 
  p.name 
FROM products p
RIGHT JOIN product_variants pv ON p.id = pv.product_id;

🔸 4. FULL OUTER JOIN – All records from both tables
SELECT 
  p.name AS product_name, 
  pv.sku AS variant_sku 
FROM products p
FULL OUTER JOIN product_variants pv ON p.id = pv.product_id;

♦️ Shows all products and all variants, even when there’s no match.

🔸 5. SELF JOIN Example (for product_variants comparing similar sizes or prices)

Let’s compare variants of the same product that are different sizes.

SELECT 
  pv1.product_id,
  pv1.size AS size_1,
  pv2.size AS size_2,
  pv1.sku AS sku_1,
  pv2.sku AS sku_2
FROM product_variants pv1
JOIN product_variants pv2 
  ON pv1.product_id = pv2.product_id 
  AND pv1.size <> pv2.size
WHERE pv1.product_id = 101;  -- example product

♦️ Useful to analyze size comparisons or price differences within a product.

🧬 Complex Combined JOIN Example

Show each product with its variants, and include only discounted ones (price < MRP):

SELECT 
  p.name AS product_name,
  pv.sku,
  pv.price,
  pv.mrp,
  (pv.mrp - pv.price) AS discount_value
FROM products p
INNER JOIN product_variants pv ON p.id = pv.product_id
WHERE pv.price < pv.mrp
ORDER BY discount_value DESC;

📑 JOIN Summary with These Tables

JOIN Type       | Use Case
----------------+------------------------------------------------
INNER JOIN      | Only products with variants
LEFT JOIN       | All products, even if they don’t have variants
RIGHT JOIN      | All variants, even if product is missing
FULL OUTER JOIN | Everything — useful in data audits
SELF JOIN       | Compare or relate rows within the same table

Let’s now look at JOIN queries with more realistic conditions using products and product_variants.

🦾 Advanced JOIN Queries with Conditions to practice

🔹 1. All products with variants in stock AND discounted

SELECT 
  p.name AS product_name,
  pv.sku,
  pv.size,
  pv.color,
  pv.stock_quantity,
  pv.mrp,
  pv.price,
  (pv.mrp - pv.price) AS discount_amount
FROM products p
JOIN product_variants pv ON p.id = pv.product_id
WHERE pv.stock_quantity > 0
  AND pv.price < pv.mrp
ORDER BY discount_amount DESC;

♦️ Shows available discounted variants, ordered by discount.

🔹 2. Products with high rating (4.5+) and at least one low-stock variant (< 10 items)

SELECT 
  p.name AS product_name,
  p.rating,
  pv.sku,
  pv.stock_quantity
FROM products p
JOIN product_variants pv ON p.id = pv.product_id
WHERE p.rating >= 4.5
  AND pv.stock_quantity < 10;

🔹 3. LEFT JOIN to find products with no variants or all variants out of stock

SELECT 
  p.name AS product_name,
  pv.id AS variant_id,
  pv.stock_quantity
FROM products p
LEFT JOIN product_variants pv 
  ON p.id = pv.product_id AND pv.stock_quantity > 0
WHERE pv.id IS NULL;

✅ This tells you:

  • Either the product has no variants
  • Or all variants are out of stock

🔹 4. Group and Count Variants per Product

SELECT 
  p.name AS product_name,
  COUNT(pv.id) AS variant_count
FROM products p
LEFT JOIN product_variants pv ON p.id = pv.product_id
GROUP BY p.name
ORDER BY variant_count DESC;

🔹 5. Variants with price-percentage discount more than 30%

SELECT 
  p.name AS product_name,
  pv.sku,
  pv.mrp,
  pv.price,
  ROUND(100.0 * (pv.mrp - pv.price) / pv.mrp, 2) AS discount_percent
FROM products p
JOIN product_variants pv ON p.id = pv.product_id
WHERE pv.price < pv.mrp
  AND (100.0 * (pv.mrp - pv.price) / pv.mrp) > 30;

🔹 6. Color-wise stock summary for a product category

SELECT 
  p.category,
  pv.color,
  SUM(pv.stock_quantity) AS total_stock
FROM products p
JOIN product_variants pv ON p.id = pv.product_id
WHERE p.category = 'Shoes'
GROUP BY p.category, pv.color
ORDER BY total_stock DESC;

These queries simulate real-world dashboard views: inventory tracking, product health, stock alerts, etc.


Happy SQL Query Writing! 🚀

Rails 8 App: Setup Test DB | Comprehensive Guide 📖 for PostgreSQL, MySQL Indexing – PostgreSQL Heap ⛰ vs MySQL InnoDB B-Tree 🌿

Enter into psql terminal:

✗ psql postgres
psql (14.17 (Homebrew))
Type "help" for help.

postgres=# \l
                                     List of databases
           Name            |  Owner   | Encoding | Collate | Ctype |   Access privileges
---------------------------+----------+----------+---------+-------+-----------------------
 studio_development        | postgres | UTF8     | C       | C     |

Next, let’s:

  • Create a new test database
  • Create a users table
  • Check the db and table details

postgres=# create database test_db;
CREATE DATABASE

test_db=# CREATE TABLE users (
user_id INT,
username VARCHAR(220),
email VARCHAR(150),
phone_number VARCHAR(20)
);
CREATE TABLE

test_db=# \dt
List of relations
 Schema | Name  | Type  |  Owner
--------+-------+-------+----------
 public | users | table | abhilash
(1 row)

test_db=# \d users;
                          Table "public.users"
    Column    |          Type          | Collation | Nullable | Default
--------------+------------------------+-----------+----------+---------
 user_id      | integer                |           |          |
 username     | character varying(220) |           |          |
 email        | character varying(150) |           |          |
 phone_number | character varying(20)  |           |          |

Add a Primary key to users and check the user table.

test_db=# ALTER TABLE users ADD PRIMARY KEY (user_id);
ALTER TABLE

test_db=# \d users;
                          Table "public.users"
    Column    |          Type          | Collation | Nullable | Default
--------------+------------------------+-----------+----------+---------
 user_id      | integer                |           | not null |
 username     | character varying(220) |           |          |
 email        | character varying(150) |           |          |
 phone_number | character varying(20)  |           |          |
Indexes:
    "users_pkey" PRIMARY KEY, btree (user_id)

# OR add primary key when creating the table:
CREATE TABLE users (
  user_id INT PRIMARY KEY,
  username VARCHAR(220),
  email VARCHAR(150),
  phone_number VARCHAR(20)
);

You can see that a unique constraint and an index are added when adding a primary key.

Why does adding a primary key also add an index?

  • A primary key must guarantee that each value is unique and fast to find.
  • Without an index, the database would have to scan the whole table every time you look up a primary key, which would be very slow.
  • So PostgreSQL automatically creates a unique index on the primary key to make lookups efficient and to enforce uniqueness at the database level.

👉 It needs the index for speed and to enforce the “no duplicates” rule of primary keys.

What is btree?

  • btree stands for Balanced Tree (specifically, a “B-tree” data structure).
  • It’s the default index type in PostgreSQL.
  • B-tree indexes organize the data in a tree structure, so that searches, inserts, updates, and deletes are all very efficient — about O(log n) time.
  • It’s great for looking up exact matches (like WHERE user_id = 123) or range queries (like WHERE user_id BETWEEN 100 AND 200).

👉 So when you see btree, it just means it’s using a very efficient tree structure for your primary key index.

Summary in one line:
Adding a primary key automatically adds a btree index to enforce uniqueness and make lookups super fast.


In MySQL (specifically InnoDB engine, which is default now):

  • Primary keys always create an index automatically.
  • The index is a clustered index — this is different from Postgres!
  • The index uses a B-tree structure too, just like Postgres.

👉 So yes, MySQL also adds an index and uses a B-tree under the hood for primary keys.

But here’s a big difference:

  • In InnoDB, the table data itself is stored inside the primary key’s B-tree.
    • That’s called a clustered index.
    • It means the physical storage of the table rows follows the order of the primary key.
  • In PostgreSQL, the index and the table are stored separately (non-clustered by default).

Example: If you have a table like this in MySQL:

CREATE TABLE users (
  user_id INT PRIMARY KEY,
  username VARCHAR(220),
  email VARCHAR(150)
);
  • user_id will have a B-tree clustered index.
  • The rows themselves will be stored sorted by user_id.

Short version:

Database       | Primary Key Behavior                                | B-tree? | Clustered?
---------------+-----------------------------------------------------+---------+--------------------------
PostgreSQL     | Separate index created for PK                       | Yes     | No (separate by default)
MySQL (InnoDB) | PK index + table rows stored inside the PK’s B-tree | Yes     | Yes (always clustered)

Why Indexing on Unique Columns (like email) Improves Lookup 🔍

Use Case

You frequently run queries like:

SELECT * FROM students WHERE email = 'john@example.com';

Without an index, this results in a full table scan — checking each row one-by-one.

With an index, the database can jump directly to the row using a sorted structure, significantly reducing lookup time — especially in large tables.


🌲 How SQL Stores Indexes Internally (PostgreSQL)

📚 PostgreSQL uses B-Tree indexes by default.

When you run:

CREATE UNIQUE INDEX idx_students_on_email ON students(email);

PostgreSQL creates a balanced B-tree like this:

          m@example.com
         /              \
  d@example.com     t@example.com
  /        \           /         \
...      ...        ...         ...

  • ✅ Keys (email values) are sorted lexicographically.
  • ✅ Each leaf node contains a pointer to the actual row in the students table (called a tuple pointer or TID).
  • ✅ Lookup uses binary search, giving O(log n) performance.

⚙️ Unique Index = Even Faster

Because all email values are unique, the database:

  • Can stop searching immediately once a match is found.
  • Doesn’t need to scan multiple leaf entries (no duplicates).

🧠 Summary

Feature             | Value
--------------------+-----------------------------------------------------------------
Index Type          | B-tree (default in PostgreSQL)
Lookup Time         | O(log n) vs O(n) without index
Optimized for       | Equality search (WHERE email = ...), sorting, joins
Email is unique?    | ✅ Yes – index helps even more (no need to check multiple rows)
Table scan avoided? | ✅ Yes – PostgreSQL jumps directly via B-tree lookup

What Exactly is a Clustered Index in MySQL (InnoDB)?

🔹 In MySQL InnoDB, the primary key IS the table.

🔹 A Clustered Index means:

  • The table’s data rows are physically organized in the order of the primary key.
  • No separate storage for the table – it’s merged into the primary key’s B-tree structure.

In simple words:
👉 “The table itself lives inside the primary key B-tree.”

That’s why:

  • Every secondary index must store the primary key value (not a row pointer).
  • InnoDB can only have one clustered index (because you can’t physically order a table in two different ways).

📈 Visual for MySQL Clustered Index

Suppose you have:

CREATE TABLE users (
  user_id INT PRIMARY KEY,
  username VARCHAR(255),
  email VARCHAR(255)
);

The storage looks like:

B-tree by user_id (Clustered)

user_id  | username | email
----------------------------
101      | Alice    | a@x.com
102      | Bob      | b@x.com
103      | Carol    | c@x.com

👉 Table rows stored directly inside the B-tree nodes by user_id!


🔵 PostgreSQL (Primary Key Index = Separate)

Imagine you have a users table:

users table (physical table):

row_id | user_id | username | email
-------------------------------------
  1    |   101   | Alice    | a@example.com
  2    |   102   | Bob      | b@example.com
  3    |   103   | Carol    | c@example.com

And the Primary Key Index looks like:

Primary Key B-Tree (separate structure):

user_id -> row pointer
 101    -> row_id 1
 102    -> row_id 2
 103    -> row_id 3

👉 When you query WHERE user_id = 102, PostgreSQL goes:

  • Find user_id 102 in the B-tree index,
  • Then jump to row_id 2 in the actual table.

🔸 Index and Table are separate.
🔸 Extra step: index lookup ➔ then fetch row.

🟠 MySQL InnoDB (Primary Key Index = Clustered)

Same users table, but stored like this:

Primary Key Clustered B-Tree (index + data together):

user_id | username | email
---------------------------------
  101   | Alice    | a@example.com
  102   | Bob      | b@example.com
  103   | Carol    | c@example.com

👉 When you query WHERE user_id = 102, MySQL:

  • Goes straight to user_id 102 in the B-tree,
  • Data is already there, no extra lookup.

🔸 Index and Table are merged.
🔸 One step: direct access!

📈 Quick Visual:

PostgreSQL
(Index)    ➔    (Table Row)
    |
    ➔ extra lookup needed

MySQL InnoDB
(Index + Row Together)
    |
    ➔ data found immediately

Summary:

  • PostgreSQL: primary key index is separate ➔ needs 2 steps (index ➔ table).
  • MySQL InnoDB: primary key index is clustered ➔ 1 step (index = table).

📚 How Secondary Indexes Work

Secondary Index = an index on a column that is not the primary key.

Example:

CREATE INDEX idx_username ON users(username);

Now you have an index on username.

🔵 PostgreSQL Secondary Index Behavior

  • Secondary indexes are separate structures from the table (just like the primary key index).
  • When you query by username, PostgreSQL:
    1. Finds the matching row_id using the secondary B-tree index.
    2. Then fetches the full row from the table by row_id.
  • This is called an Index Scan + Heap Fetch.

📜 Example:

Secondary Index (username -> row_id):

username -> row_id
------------------
Alice    -> 1
Bob      -> 2
Carol    -> 3

(users table is separate)

👉 Flexible, but needs 2 steps: index (row_id) ➔ table.

🟠 MySQL InnoDB Secondary Index Behavior

  • In InnoDB, secondary indexes don’t store row pointers.
  • Instead, they store the primary key value!

So:

  1. Find the matching primary key using the secondary index.
  2. Use the primary key to find the actual row inside the clustered primary key B-tree.

📜 Example:

Secondary Index (username -> user_id):

username -> user_id
--------------------
Alice    -> 101
Bob      -> 102
Carol    -> 103

(Then find user_id inside Clustered B-Tree)

✅ Needs 2 steps too: secondary index (primary key) ➔ clustered table.

📈 Quick Visual:

Feature         | PostgreSQL                      | MySQL InnoDB
----------------+---------------------------------+------------------------------------------------
Secondary Index | username ➔ row pointer (row_id) | username ➔ primary key (user_id)
Fetch Full Row  | Use row_id to get table row     | Use primary key to find row in clustered index
Steps to Fetch  | Index ➔ Table                   | Index ➔ Primary Key ➔ Table (clustered)

Action                 | PostgreSQL                     | MySQL InnoDB
-----------------------+--------------------------------+------------------------------------------
Primary Key Lookup     | Index ➔ Row (2 steps)          | Clustered Index (1 step)
Secondary Index Lookup | Index (row_id) ➔ Row (2 steps) | Secondary Index (PK) ➔ Row (2 steps)
Storage Model          | Separate index and table       | Primary key and table merged (clustered)

🌐 Now, let’s do some Real SQL Query ⛁ Examples!

1. Simple SELECT * FROM users WHERE user_id = 102;
  • PostgreSQL:
    Look into PK btree ➔ find row pointer ➔ fetch row separately.
  • MySQL InnoDB:
    Directly find the row inside the PK B-tree (no extra lookup).

MySQL is a little faster here because it needs only 1 step!

2. SELECT username FROM users WHERE user_id = 102; (Only 1 Column)
  • PostgreSQL:
    Might do an Index Only Scan if all needed data is in the index (very fast).
  • MySQL:
    Clustered index contains all columns already, no special optimization needed.

Both can be very fast, but PostgreSQL shines if the index is “covering” (i.e., contains all needed columns), because the index is much smaller than MySQL’s clustered index, which stores every column.

3. SELECT * FROM users WHERE username = 'Bob'; (Secondary Index Search)
  • PostgreSQL:
    Secondary index on username ➔ row pointer ➔ fetch table row.
  • MySQL:
    Secondary index on username ➔ get primary key ➔ clustered index lookup ➔ fetch data.

Both are 2 steps, but MySQL needs 2 different B-trees: secondary ➔ primary clustered.

Consider the below situation:

SELECT username FROM users WHERE user_id = 102;
  • user_id is the Primary Key.
  • You only want username, not full row.

Now:

🔵 PostgreSQL Behavior

👉 In PostgreSQL, by default:

  • It uses the primary key btree to find the row pointer.
  • Then fetches the full row from the table (heap fetch).

👉 But PostgreSQL has an optimization called Index-Only Scan.

  • If all requested columns are already present in the index,
  • And if the table visibility map says the row is still valid (no deleted/updated row needing visibility check),
  • Then Postgres does not fetch the heap.

👉 So in this case:

  • If the primary key index also stores username internally (or if an extra index is created covering username), Postgres can satisfy the query just from the index.

✅ Result: No table lookup needed ➔ Very fast (almost as fast as InnoDB clustered lookup).

📢 Postgres primary key indexes usually don’t store extra columns, unless you specifically create an index that includes them (INCLUDE (username) syntax in modern Postgres 11+).

🟠 MySQL InnoDB Behavior
  • In InnoDB:
    Since the primary key B-tree already holds all columns (user_id, username, email),
    It directly finds the row from the clustered index.
  • So when you query by PK, even if you only need one column, it has everything inside the same page/block.

One fast lookup.

🔥 Why sometimes Postgres can still be faster?
  • If PostgreSQL uses Index-Only Scan, and the page is already cached, and no extra visibility check is needed,
    Then Postgres may avoid touching the table at all and only scan the tiny index pages.
  • In this case, for very narrow queries (e.g., only 1 small field), Postgres can outperform even MySQL clustered fetch.

💡 Because the index is much smaller than the full table, its pages are more likely to be cached and far fewer of them need to be read than when fetching full table rows.

🎯 Conclusion:

✅ MySQL clustered index is always fast for PK lookups.
✅ PostgreSQL can be even faster for small/narrow queries if Index-Only Scan is triggered.

👉 Quick Tip:

  • In PostgreSQL, you can force an index to include extra columns by using: CREATE INDEX idx_user_id_username ON users(user_id) INCLUDE (username); Then index-only scans become more common and predictable! 🚀

Isn’t PostgreSQL also doing 2 B-tree scans? One for secondary index and one for table (row_id)?

When you query with a secondary index, like:

SELECT * FROM users WHERE username = 'Bob';
  • In MySQL InnoDB, I said:
    1. Find in secondary index (username ➔ user_id)
    2. Then go to primary clustered index (user_id ➔ full row)

Let’s look at PostgreSQL first:

♦️ Step 1: Search Secondary Index B-tree on username.

  • It finds the matching TID (tuple ID) or row pointer.
    • TID is a pair (block_number, row_offset).
    • Not a B-tree! Just a physical pointer.

♦️ Step 2: Use the TID to directly jump into the heap (the table).

  • The heap (table) is not a B-tree — it’s just a collection of unordered pages (blocks of rows).
  • PostgreSQL goes directly to the block and offset — like jumping straight into a file.

🔔 Important:

  • Secondary index ➔ TID ➔ heap fetch.
  • No second B-tree traversal for the table!

🟠 Meanwhile in MySQL InnoDB:

♦️ Step 1: Search Secondary Index B-tree on username.

  • It finds the Primary Key value (user_id).

♦️ Step 2: Now, search the Primary Key Clustered B-tree to find the full row.

  • Need another B-tree traversal based on user_id.

🔔 Important:

  • Secondary index ➔ Primary Key B-tree ➔ data fetch.
  • Two full B-tree traversals!

Real-world Summary:

♦️ PostgreSQL

  • Secondary index gives a direct shortcut to the heap.
  • One B-tree scan (secondary) ➔ Direct heap fetch.

♦️ MySQL

  • Secondary index gives PK.
  • Then another B-tree scan (primary clustered) to find full row.

PostgreSQL does not scan a second B-tree when fetching from the table — just a direct page lookup using TID.

MySQL does scan a second B-tree (primary clustered index) when fetching full row after secondary lookup.

Is heap fetch a searching technique? Why is it faster than B-tree?

📚 Let’s start from the basics:

When PostgreSQL finds a match in a secondary index, what it gets is a TID.

♦️ A TID (Tuple ID) is a physical address made of:

  • Block Number (page number)
  • Offset Number (row slot inside the page)

Example:

TID = (block_number = 1583, offset = 7)

🔵 How PostgreSQL uses TID?

  1. It directly calculates the location of the block (disk page) using block_number.
  2. It reads that block (if not already in memory).
  3. Inside that block, it finds the row at offset 7.

♦️ No search, no btree, no extra traversal — just:

  • Find the page (via simple number addressing)
  • Find the row slot

📈 Visual Example

Secondary index (username ➔ TID):

username | TID
---------+-----------
Alice    | (1583, 7)
Bob      | (1592, 3)
Carol    | (1601, 12)

♦️ When you search for “Bob”:

  • Find (1592, 3) from secondary index B-tree.
  • Jump directly to Block 1592, Offset 3.
  • Done ✅!

Answer:

  • Heap fetch is NOT a search.
  • It’s a direct address lookup (fixed number).
  • Heap = unordered collection of pages.
  • Pages = fixed-size blocks (usually 8 KB each).
  • TID gives an exact GPS location inside heap — no searching required.

That’s why heap fetch is faster than another B-tree search:

  • No binary search, no B-tree traversal needed.
  • Only a simple disk/memory read + row offset jump.

🌿 B-tree vs 📁 Heap Fetch

Action          | B-tree                                  | Heap Fetch
----------------+-----------------------------------------+-------------------------------
What it does    | Binary search inside sorted tree nodes  | Direct jump to block and slot
Steps needed    | Traverse nodes (root ➔ internal ➔ leaf) | Directly read page and slot
Time complexity | O(log n)                                | O(1)
Speed           | Slower (needs comparisons)              | Very fast (direct)

🎯 Final and short answer:

♦️ In PostgreSQL, after finding the TID in the secondary index, the heap fetch is a direct, constant-time (O(1)) accessno B-tree needed!
♦️ This is faster than scanning another B-tree like in MySQL InnoDB.


🧩 Our exact question:

When we say:

Jump directly to Block 1592, Offset 3.

We are thinking:

  • There are thousands of blocks.
  • How can we directly jump to block 1592?
  • Shouldn’t that be O(n) (linear time)?
  • Shouldn’t there be some traversal?

🔵 Here’s the real truth:

  • No traversal needed.
  • No O(n) work.
  • Accessing Block 1592 is O(1) — constant time.

📚 Why?

Because of how files, pages, and memory work inside a database.

When PostgreSQL stores a table (the “heap”), it saves it in a file on disk.
The file is just a long array of fixed-size pages.

  • Each page = 8KB (default in Postgres).
  • Each block = 1 page = fixed 8KB chunk.
  • Block 0 is the first 8KB.
  • Block 1 is next 8KB.
  • Block 2 is next 8KB.
  • Block 1592 = (1592 × 8 KB) offset from the beginning.

✅ So block 1592 is simply located at 1592 × 8192 bytes offset from the start of the file.

✅ Operating systems (and PostgreSQL’s Buffer Manager) know exactly how to seek to that byte position without reading everything before it.

📈 Diagram (imagine the table file):
+-----------+-----------+-----------+-----------+-----------+------+
| Block 0   | Block 1   | Block 2   | Block 3   | Block 4   |  ... |
+-----------+-----------+-----------+-----------+-----------+------+
  (8KB)       (8KB)       (8KB)       (8KB)       (8KB)

Finding Block 1592 ➔
Seek directly to offset 1592 * 8192 bytes ➔
Read 8KB ➔
Find row at Offset 3 inside it.

🤔 What happens technically?

If in memory (shared buffers / page cache):
  • PostgreSQL checks its buffer pool (shared memory).
  • “Do I already have block 1592 cached?”
    • ✅ Yes: immediately access memory address.
    • ❌ No: Load block 1592 from disk into memory.
If from disk (rare if cached):
  • File systems (ext4, xfs, etc) know how to seek to a byte offset in a file without reading previous parts.
  • Seek to (block_number × 8192) bytes.
  • Read exactly 8KB into memory.
  • No need to scan the whole file linearly.

📊 Final Step: Inside the Block

Once the block is loaded:

  • The block internally is structured like an array of tuples.
  • Each tuple is placed into an offset slot.
  • Offset 3 ➔ third tuple inside the block.

♦️ Again, this is just array lookup — no traversal, no O(n).

⚡ So to summarize:

Question                                      | Answer
----------------------------------------------+---------------------------------------------------------------------
How does PostgreSQL jump directly to a block? | Using the block number × page size calculation (fixed offset math).
Is it O(n)?                                   | ❌ No, it’s O(1) constant time
Is there any traversal?                       | ❌ No traversal. Just a seek + memory read.
How fast?                                     | Extremely fast if cached, still fast if disk seeks.

🔥 Key concept:

PostgreSQL heap access is O(1) because the heap file is a flat sequence of fixed-size pages, and the TID gives exact coordinates.

🎯 Simple Real World Example:

Imagine you have a giant book (the table file).
Each page of the book is numbered (block number).

If someone says:

👉 “Go to page 1592.”

♦️ You don’t need to read pages 1 to 1591 first.
♦️ You just flip directly to page 1592.

📗 Same idea: no linear traversal, just positional lookup.

🧠 Deep thought:

Because blocks are fixed size and TID is known,
heap fetch is almost as fast as reading a small array.

(Actually faster than searching B-tree because B-tree needs multiple comparisons at each node.)

Enjoy SQL! 🚀