Rails 8 App: Comprehensive Guide 📑 to Write Controller Tests | 👓 RSpec – 20 Test Cases For Reference

Testing is a crucial part of ensuring the reliability and correctness of a Ruby on Rails 8 application. Controller tests verify the behaviour of your application’s controllers, ensuring that actions handle requests properly, return correct responses, and enforce security measures.

This guide explores the best practices in writing Rails 8 controller tests, references well-known Rails projects, and provides 20 test case examples—including 5 complex ones.

Setting Up the Testing Environment with RSpec

To effectively write controller tests, we use RSpec (the most popular testing framework in the Rails community) along with key supporting gems:

Recommended Gems

Add the following gems to your Gemfile under the :test group:

group :test do
  gem 'rspec-rails'  # Main testing framework
  gem 'factory_bot_rails'  # For test data setup
  gem 'database_cleaner-active_record'  # Cleans test database
  gem 'faker'  # Generates fake data
  gem 'shoulda-matchers'  # Provides one-liner matchers for common Rails functions
end

Run:

bundle install
rails generate rspec:install

Then, configure spec_helper.rb and rails_helper.rb to include necessary test configurations.
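Beyond the generated defaults, a typical rails_helper.rb wires up the gems above. The following is a sketch of the common additions, not a drop-in file; also note that the assigns helper used in some controller specs below requires the rails-controller-testing gem in Rails 5+:

```ruby
# spec/rails_helper.rb (excerpt) - common additions, adjust for your app
require 'spec_helper'
ENV['RAILS_ENV'] ||= 'test'
require_relative '../config/environment'
require 'rspec/rails'

RSpec.configure do |config|
  # Allows create(:article) instead of FactoryBot.create(:article)
  config.include FactoryBot::Syntax::Methods

  # Keep the test database clean between examples
  config.before(:suite) { DatabaseCleaner.clean_with(:truncation) }
  config.around(:each) { |example| DatabaseCleaner.cleaning { example.run } }
end

# One-liner matchers such as `it { should validate_presence_of(:title) }`
Shoulda::Matchers.configure do |config|
  config.integrate do |with|
    with.test_framework :rspec
    with.library :rails
  end
end
```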

Types of Controller Tests

A controller test should cover various scenarios:

  1. Successful actions (index, show, create, update, destroy)
  2. Error handling (record not found, invalid params)
  3. Authentication & Authorization (user roles, access control)
  4. Redirections & Response types (HTML, JSON, Turbo Streams)
  5. Edge cases (empty parameters, SQL injection attempts)

Let’s dive into examples.

Basic Controller Tests

1. Testing Index Action

require 'rails_helper'

describe ArticlesController, type: :controller do
  describe 'GET #index' do
    it 'returns a successful response' do
      get :index
      expect(response).to have_http_status(:ok)
    end
  end
end

2. Testing Show Action with a Valid ID

describe 'GET #show' do
  let(:article) { create(:article) }
  it 'returns the requested article' do
    get :show, params: { id: article.id }
    expect(response).to have_http_status(:ok)
    expect(assigns(:article)).to eq(article)
  end
end

3. Testing Show Action with an Invalid ID

describe 'GET #show' do
  it 'returns a 404 for an invalid ID' do
    get :show, params: { id: 9999 }
    expect(response).to have_http_status(:not_found)
  end
end

4. Testing Create Action with Valid Parameters

describe 'POST #create' do
  it 'creates a new article' do
    expect {
      post :create, params: { article: attributes_for(:article) }
    }.to change(Article, :count).by(1)
  end
end

5. Testing Create Action with Invalid Parameters

describe 'POST #create' do
  it 'does not create an article with invalid parameters' do
    expect {
      post :create, params: { article: { title: '' } }
    }.not_to change(Article, :count)
  end
end

6. Testing Update Action

describe 'PATCH #update' do
  let(:article) { create(:article) }
  it 'updates an article' do
    patch :update, params: { id: article.id, article: { title: 'Updated' } }
    expect(article.reload.title).to eq('Updated')
  end
end

7. Testing Destroy Action

describe 'DELETE #destroy' do
  let!(:article) { create(:article) }
  it 'deletes an article' do
    expect {
      delete :destroy, params: { id: article.id }
    }.to change(Article, :count).by(-1)
  end
end


8. Testing Redirection After Create

describe 'POST #create' do
  it 'redirects to the article show page' do
    post :create, params: { article: attributes_for(:article) }
    expect(response).to redirect_to(assigns(:article))
  end
end

9. Testing JSON Response for Index Action

describe 'GET #index' do
  it 'returns a JSON response' do
    get :index, format: :json
    expect(response.media_type).to eq('application/json')
  end
end

10. Testing JSON Response for Show Action

describe 'GET #show' do
  let(:article) { create(:article) }
  it 'returns the article in JSON format' do
    get :show, params: { id: article.id }, format: :json
    expect(response.media_type).to eq('application/json')
    expect(response.body).to include(article.title)
  end
end

11. Testing Unauthorized Access to Update

describe 'PATCH #update' do
  let(:article) { create(:article) }
  it 'returns a 401 if user is not authorized' do
    patch :update, params: { id: article.id, article: { title: 'Updated' } }
    expect(response).to have_http_status(:unauthorized)
  end
end

12. Testing Strong Parameters Enforcement

describe 'POST #create' do
  it 'does not allow mass assignment of protected attributes' do
    expect {
      post :create, params: { article: { title: 'Valid', admin_only_field: true } }
    }.to raise_error(ActiveModel::ForbiddenAttributesError)
  end
end

13. Testing Destroy Action with Invalid ID

describe 'DELETE #destroy' do
  it 'returns a 404 when the article does not exist' do
    delete :destroy, params: { id: 9999 }
    expect(response).to have_http_status(:not_found)
  end
end

14. Testing Session Persistence

describe 'GET #dashboard' do
  before { session[:user_id] = create(:user).id }
  it 'allows access to the dashboard' do
    get :dashboard
    expect(response).to have_http_status(:ok)
  end
end

15. Testing Rate Limiting on API Requests

describe 'GET #index' do
  before do
    10.times { get :index }
  end
  it 'returns a 429 Too Many Requests when rate limit is exceeded' do
    get :index
    expect(response).to have_http_status(:too_many_requests)
  end
end
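Rails 8 ships with a built-in `rate_limit` macro for controllers (backed by the cache store). A minimal sketch of the controller side that the spec above exercises; the limit of 10 per minute is an assumption chosen to match the test:

```ruby
class ArticlesController < ApplicationController
  # Allow at most 10 requests per minute per client; further requests
  # receive 429 Too Many Requests. (Limit chosen to match the example spec.)
  rate_limit to: 10, within: 1.minute, only: :index
end
```

In the test environment, make sure the cache store is not `:null_store`, or the request counter will never increment and the limit will never trip.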

Complex Controller 🎮 Tests

16. Testing Admin Access Control

describe 'GET #admin_dashboard' do
  context 'when user is admin' do
    let(:admin) { create(:user, role: :admin) }
    before { sign_in admin }
    it 'allows access' do
      get :admin_dashboard
      expect(response).to have_http_status(:ok)
    end
  end
  context 'when user is not admin' do
    let(:user) { create(:user, role: :user) }
    before { sign_in user }
    it 'redirects to home' do
      get :admin_dashboard
      expect(response).to redirect_to(root_path)
    end
  end
end

17. Testing Turbo Stream Responses

describe 'PATCH #update' do
  let(:article) { create(:article) }
  it 'updates an article and responds with Turbo Stream' do
    patch :update, params: { id: article.id, article: { title: 'Updated' } }, format: :turbo_stream
    expect(response.media_type).to eq Mime[:turbo_stream]
  end
end


18. Testing WebSockets with ActionCable

ActionCable subscriptions are verified with channel specs (type: :channel) rather than controller specs, and the connection is stubbed instead of using sign_in:

describe ChatChannel, type: :channel do
  let(:user) { create(:user) }

  before do
    stub_connection current_user: user
  end

  it 'successfully subscribes to a channel' do
    subscribe room_id: 1
    expect(subscription).to be_confirmed
    expect(subscription).to have_stream_from("chat_1")
  end
end

Why? This test ensures that ActionCable properly subscribes users to real-time chat channels.

19. Testing Nested Resource Actions

describe 'POST #create in nested resource' do
  let(:user) { create(:user) }
  # Named blog_post so it does not shadow the `post` request helper
  let(:blog_post) { create(:post, user: user) }

  it 'creates a comment under the correct post' do
    expect {
      post :create, params: { post_id: blog_post.id, comment: { body: 'Nice post!' } }
    }.to change(blog_post.comments, :count).by(1)
  end
end

Why? This test ensures correct behavior when working with nested resources like comments under posts.

20. Testing Multi-Step Form Submission

describe 'PATCH #update (multi-step form)' do
  let(:user) { create(:user, step: 'personal_info') }

  it 'advances the user to the next step in a multi-step form' do
    patch :update, params: { id: user.id, user: { step: 'address_info' } }
    expect(user.reload.step).to eq('address_info')
  end
end

Why? This test ensures users can progress through a multi-step form properly.

📝 Conclusion

This guide provides an extensive overview of controller testing in Rails 8, ensuring robust coverage for all possible scenarios. By following these patterns, your Rails applications will have reliable, well-tested controllers that behave as expected.

Happy Testing! 🚀

Setup 🛠 Rails 8 App – Part 14: Product Controller Test cases 🔍 For GitHub Actions

In an e-commerce application built with Ruby on Rails, controller tests ensure that your APIs and web interfaces behave as expected. In this post, we’ll explore our ProductsControllerTest suite that validates product creation, editing, deletion, and error handling—including associated product variants and image uploads.

Overview

Our controller is responsible for managing Product records and their associated ProductVariant. A Product may have multiple variants, but for simplicity, we’re focusing on creating a product with a primary variant. The test suite uses ActionDispatch::IntegrationTest for full-stack request testing and some pre-seeded fixtures (products(:one) and product_variants(:one)).
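The association under test might be modeled like this; a sketch only, where accepts_nested_attributes_for is one common way to create the primary variant through the product form:

```ruby
class Product < ApplicationRecord
  has_many :product_variants, dependent: :destroy
  has_many_attached :images

  validates :name, presence: true
end

class ProductVariant < ApplicationRecord
  belongs_to :product

  validates :sku, presence: true, uniqueness: true
  validates :mrp, :price, presence: true
end
```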

Integration tests (Rails 5+)

  • Inherit from ActionDispatch::IntegrationTest.
  • Spin up the full Rails stack (routing, middleware, controllers, views).
  • You drive them with full URLs/paths (e.g. get products_url) and can even cross multiple controllers in one test.

🧪 Fixture Setup

Before diving into the tests, here’s how we set up our test data using fixtures.

Product Fixture (test/fixtures/products.yml)

one:
  name: My Product
  description: "Sample description"
  brand: BrandOne
  category: men
  rating: 4.0
  created_at: <%= Time.now %>
  updated_at: <%= Time.now %>

Product Variant Fixture (test/fixtures/product_variants.yml)

one:
  product: one
  sku: "ABC123"
  mrp: 1500.00
  price: 1300.00
  discount_percent: 10.0
  size: "M"
  color: "Red"
  stock_quantity: 10
  specs: { material: "cotton" }
  created_at: <%= Time.now %>
  updated_at: <%= Time.now %>

We also include a sample image for upload testing:

📁 test/fixtures/files/sample.jpg

🧩 Breakdown of ProductsControllerTest

Here’s what we’re testing and why each case is important:

setup do … end

Runs before each test in the class. Use it to prepare any common test data or state.

class ProductsControllerTest < ActionDispatch::IntegrationTest
  setup do
    @product = products(:one)
    @variant = product_variants(:one)
  end

  # every test below can use @product and @variant
end

test "description" do … end

Defines an individual test case. The string describes what behaviour you’re verifying.

test "should get index" do
  get products_url
  assert_response :success
end

1. GET /products (index)

test "should get index" do
    get products_url
    assert_response :success
    # check products header exists
    assert_select "h1", /Products/i
    # check new product button exists
    assert_select "main div a.btn-new[href=?]", new_product_path,
                  text: "➕ New Product", count: 1
  end

✔️ Verifies the product listing page is accessible and renders a header. New product button also rendered.

assert_response

Verifies the HTTP status code returned by your request.
Common symbols:

  • :success (200)
  • :redirect (3xx)
  • :unprocessable_entity (422)
  • :not_found (404)

get new_product_url
assert_response :success

post products_url, params: invalid_params
assert_response :unprocessable_entity

assert_select

Inspects the server‐rendered HTML using CSS selectors.
Great for making sure particular elements and text appear.

get products_url
assert_select "h1", "Products"      # exact match
assert_select "h1", /Products/i     # regex match
assert_select "form[action=?]", products_path

2. GET /products/new

test "should get new" do
    get new_product_url
    assert_response :success
    assert_select "form"
    assert_select "main div a.btn-back[href=?]", products_path,
                  text: /Back to Products/, count: 1
  end

✔️ Ensures the new product form is available and the back button is rendered (the button text is matched with a regular expression).

3. POST /products with valid product and variant

test "should create product with variant" do
  assert_difference([ "Product.count", "ProductVariant.count" ]) do
    post products_url, params: {
      product: {
        name: "New Product",
        ...
        images: [fixture_file_upload("test/fixtures/files/sample.jpg", "image/jpeg")],
        product_variant: { ... }
      }
    }
  end
   product = Product.last
   assert_redirected_to product_url(product)
   assert_equal 1, product.variants.count
   # ...
end

✔️ Tests nested attributes, image file uploads, and variant creation in one go.

assert_difference

Ensures a given expression changes by an expected amount.
Often used to test side‐effects like record creation/deletion.

assert_difference "Product.count", +1 do
  post products_url, params: valid_product_params
end

assert_difference ["Product.count", "ProductVariant.count"], +1 do
  post products_url, params: nested_variant_params
end

assert_no_difference "Product.count" do
  post products_url, params: invalid_params
end

assert_redirected_to

Confirms that the controller redirected to the correct path or URL.

post products_url, params: valid_params
assert_redirected_to product_url(Product.last)

delete product_url(@product)
assert_redirected_to products_url

4. POST /products fails when variant is invalid

test "should not create product if variant invalid (missing required mrp)" do
  assert_no_difference([ "Product.count", "ProductVariant.count" ]) do
    post products_url, params: { ... 
        product: { ...
           product_variant: {
             ...
             mrp: nil, # Invalid
             ...
           }
        }
    }
  end
  assert_response :unprocessable_entity
end

✔️ Ensures validations prevent invalid data from being saved.

5. GET /products/:id

test "should show product" do
  get product_url(@product)
  assert_response :success
  assert_select "h2", @product.brand
  assert_select "h4", @product.name
end

✔️ Validates the product detail page renders correct content.

6. GET /products/:id/edit

test "should get edit" do
  get edit_product_url(@product)
  assert_response :success
  assert_select "form"
end

✔️ Confirms the edit form is accessible.

7. PATCH /products/:id with valid update

test "should update product and variant" do
  patch product_url(@product), params: {
    product: {
      name: "Updated Product",
      rating: 4.2,
      product_variant: {
        size: "XL",
        color: "Blue"
      }
    }
  }
  ...
  assert_equal "Updated Product", @product.name
  assert_equal 4.2, @product.rating
end

✔️ Tests simultaneous updates to product and its variant.

assert_equal

Checks that two values are exactly equal.
Use it to verify model attributes, JSON responses, or any Ruby object.

patch product_url(@product), params: update_params
@product.reload
assert_equal "Updated Name", @product.name
assert_equal 4.2, @product.rating

8. PATCH /products/:id fails with invalid data

test "should not update with invalid variant data" do
  patch product_url(@product), params: {
    product: {
      product_variant: { mrp: nil }
    }
  }
  assert_response :unprocessable_entity
end

✔️ Verifies that invalid updates are rejected and return 422.

9. DELETE /products/:id

test "should destroy product" do
  assert_difference("Product.count", -1) do
    delete product_url(@product)
  end
end

✔️ Ensures products can be deleted successfully.

10. Enforce unique SKU

test "should enforce unique SKU" do
  post products_url, params: {
    product: {
      ...,
      product_variant: {
        sku: @variant.sku, # duplicate
        ...
      }
    }
  }
  assert_response :unprocessable_entity
end

✔️ Tests uniqueness validation for variant SKUs to maintain data integrity.


Putting It All Together

Each of these building blocks helps compose clear, maintainable tests:

  1. setup prepares the ground.
  2. test names and isolates scenarios.
  3. assert_response and assert_redirected_to check HTTP behavior.
  4. assert_select inspects rendered views.
  5. assert_difference validates side-effects.
  6. assert_equal verifies precise state changes.

Refer for more here: https://github.com/rails/rails-dom-testing/blob/main/test/selector_assertions_test.rb

With these tools, you can cover every happy path and edge case in your Rails controllers – ensuring confidence in your application’s behaviour!

📌 Best Practices Covered

  • 🔁 Fixture-driven tests for consistency and speed
  • 🔍 Use of assert_select to test views
  • 🧩 Testing nested models and image uploads
  • 🚫 Validation enforcement with assert_no_difference
  • 🧪 Full CRUD test coverage with edge cases

📝 Summary

A well-tested controller gives you peace of mind when iterating or refactoring. With a test suite like this, you’re not only testing basic functionality but also ensuring that validations, associations, and user-facing forms behave as expected. You can also use Rspec for Test Cases. Check the post for Rspec examples: https://railsdrop.com/2025/05/04/rails-8-write-controller-tests-20-rspec-test-cases-examples/

Stay confident.

Enjoy Testing! 🚀

Rails 8 App: Setup Test DB in PostgreSQL | Write SQL Queries | Operators | Joins

Here’s a list of commonly used SQL comparison operators with brief explanations and examples:

📋 Basic Comparison Operators:

Operator | Meaning                    | Example                         | Result
---------+----------------------------+---------------------------------+------------------------------------------
=        | Equal to                   | WHERE age = 25                  | Matches rows where age is 25
<>       | Not equal to (standard)    | WHERE status <> 'active'        | Matches rows where status is not 'active'
!=       | Not equal to (alternative) | WHERE id != 10                  | Same as <>, matches if id is not 10
>        | Greater than               | WHERE salary > 50000            | Matches rows with salary above 50k
<        | Less than                  | WHERE created_at < '2024-01-01' | Matches dates before Jan 1, 2024
>=       | Greater than or equal      | WHERE age >= 18                 | Matches age 18 and above
<=       | Less than or equal         | WHERE age <= 65                 | Matches age 65 and below

📋 Other Common Operators:

Operator    | Meaning                                       | Example
------------+-----------------------------------------------+--------------------------------------
BETWEEN     | Within a range                                | WHERE price BETWEEN 100 AND 200
IN          | Match any value in a list                     | WHERE country IN ('US', 'CA', 'UK')
NOT IN      | Not in a list                                 | WHERE role NOT IN ('admin', 'staff')
IS NULL     | Value is null                                 | WHERE deleted_at IS NULL
IS NOT NULL | Value is not null                             | WHERE updated_at IS NOT NULL
LIKE        | Pattern match (case-insensitive in some DBs)  | WHERE name LIKE 'J%'
ILIKE       | Case-insensitive LIKE (PostgreSQL only)       | WHERE email ILIKE '%@gmail.com'
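Several of these operators have direct Ruby analogues, which is handy when filtering in-memory collections the same way ActiveRecord's `where(price: 100..200)` or `where(country: %w[US CA UK])` would in SQL. A plain-Ruby sketch with made-up rows:

```ruby
rows = [
  { price: 150, country: "US", deleted_at: nil },
  { price: 250, country: "DE", deleted_at: Time.now }
]

# BETWEEN 100 AND 200
in_range = rows.select { |r| (100..200).cover?(r[:price]) }

# IN ('US', 'CA', 'UK')
in_list = rows.select { |r| %w[US CA UK].include?(r[:country]) }

# IS NULL
active = rows.select { |r| r[:deleted_at].nil? }
```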

Now that we have our products and product_variants schema, let’s re-explore all major SQL JOINs using these two related tables.

Products

   Column    |              Type              | Collation | Nullable |               Default
-------------+--------------------------------+-----------+----------+--------------------------------------
 id          | bigint                         |           | not null | nextval('products_id_seq'::regclass)
 description | text                           |           |          |
 category    | character varying              |           |          |
 created_at  | timestamp(6) without time zone |           | not null |
 updated_at  | timestamp(6) without time zone |           | not null |
 name        | character varying              |           | not null |
 rating      | numeric(2,1)                   |           |          | 0.0
 brand       | character varying              |           |          |

Product variants

      Column      |              Type              | Collation | Nullable |                   Default
------------------+--------------------------------+-----------+----------+----------------------------------------------
 id               | bigint                         |           | not null | nextval('product_variants_id_seq'::regclass)
 product_id       | bigint                         |           | not null |
 sku              | character varying              |           | not null |
 mrp              | numeric(10,2)                  |           | not null |
 price            | numeric(10,2)                  |           | not null |
 discount_percent | numeric(5,2)                   |           |          |
 size             | character varying              |           |          |
 color            | character varying              |           |          |
 stock_quantity   | integer                        |           |          | 0
 specs            | jsonb                          |           | not null | '{}'::jsonb
 created_at       | timestamp(6) without time zone |           | not null |
 updated_at       | timestamp(6) without time zone |           | not null |

💎 SQL JOINS with products and product_variants

These tables are related through:

product_variants.product_id → products.id

So we can use that for all join examples.

🔸 1. INNER JOIN – Show only products with variants
SELECT 
  p.name, 
  pv.sku, 
  pv.price 
FROM products p
INNER JOIN product_variants pv ON p.id = pv.product_id;

♦️ Only returns products that have at least one variant.

🔸 2. LEFT JOIN – Show all products, with variants if available
SELECT 
  p.name, 
  pv.sku, 
  pv.price 
FROM products p
LEFT JOIN product_variants pv ON p.id = pv.product_id;

♦️ Returns all products, even those with no variants (NULLs in variant columns).

🔸 3. RIGHT JOIN – Show all variants, with product info if available

(Less common, but useful if variants might exist without a product record)

SELECT 
  pv.sku, 
  pv.price, 
  p.name 
FROM products p
RIGHT JOIN product_variants pv ON p.id = pv.product_id;

🔸 4. FULL OUTER JOIN – All records from both tables
SELECT 
  p.name AS product_name, 
  pv.sku AS variant_sku 
FROM products p
FULL OUTER JOIN product_variants pv ON p.id = pv.product_id;

♦️ Shows all products and all variants, even when there’s no match.

🔸 5. SELF JOIN Example (for product_variants comparing similar sizes or prices)

Let’s compare variants of the same product that are different sizes.

SELECT 
  pv1.product_id,
  pv1.size AS size_1,
  pv2.size AS size_2,
  pv1.sku AS sku_1,
  pv2.sku AS sku_2
FROM product_variants pv1
JOIN product_variants pv2 
  ON pv1.product_id = pv2.product_id 
  AND pv1.size <> pv2.size
WHERE pv1.product_id = 101;  -- example product

♦️ Useful to analyze size comparisons or price differences within a product.

🧬 Complex Combined JOIN Example

Show each product with its variants, and include only discounted ones (price < MRP):

SELECT 
  p.name AS product_name,
  pv.sku,
  pv.price,
  pv.mrp,
  (pv.mrp - pv.price) AS discount_value
FROM products p
INNER JOIN product_variants pv ON p.id = pv.product_id
WHERE pv.price < pv.mrp
ORDER BY discount_value DESC;

📑 JOIN Summary with These Tables

JOIN Type       | Use Case
----------------+------------------------------------------------
INNER JOIN      | Only products with variants
LEFT JOIN       | All products, even if they don't have variants
RIGHT JOIN      | All variants, even if product is missing
FULL OUTER JOIN | Everything (useful in data audits)
SELF JOIN       | Compare or relate rows within the same table

Let’s now look at JOIN queries with more realistic conditions using products and product_variants.

🦾 Advanced JOIN Queries with Conditions to practice

🔹 1. All products with variants in stock AND discounted

SELECT 
  p.name AS product_name,
  pv.sku,
  pv.size,
  pv.color,
  pv.stock_quantity,
  pv.mrp,
  pv.price,
  (pv.mrp - pv.price) AS discount_amount
FROM products p
JOIN product_variants pv ON p.id = pv.product_id
WHERE pv.stock_quantity > 0
  AND pv.price < pv.mrp
ORDER BY discount_amount DESC;

♦️ Shows available discounted variants, ordered by discount.

🔹 2. Products with high rating (4.5+) and at least one low-stock variant (< 10 items)

SELECT 
  p.name AS product_name,
  p.rating,
  pv.sku,
  pv.stock_quantity
FROM products p
JOIN product_variants pv ON p.id = pv.product_id
WHERE p.rating >= 4.5
  AND pv.stock_quantity < 10;

🔹 3. LEFT JOIN to find products with no variants or all variants out of stock

SELECT 
  p.name AS product_name,
  pv.id AS variant_id,
  pv.stock_quantity
FROM products p
LEFT JOIN product_variants pv 
  ON p.id = pv.product_id AND pv.stock_quantity > 0
WHERE pv.id IS NULL;

✅ This tells you:

  • Either the product has no variants
  • Or all variants are out of stock

🔹 4. Group and Count Variants per Product

SELECT 
  p.name AS product_name,
  COUNT(pv.id) AS variant_count
FROM products p
LEFT JOIN product_variants pv ON p.id = pv.product_id
GROUP BY p.name
ORDER BY variant_count DESC;

🔹 5. Variants with price-percentage discount more than 30%

SELECT 
  p.name AS product_name,
  pv.sku,
  pv.mrp,
  pv.price,
  ROUND(100.0 * (pv.mrp - pv.price) / pv.mrp, 2) AS discount_percent
FROM products p
JOIN product_variants pv ON p.id = pv.product_id
WHERE pv.price < pv.mrp
  AND (100.0 * (pv.mrp - pv.price) / pv.mrp) > 30;
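The discount expression in that WHERE clause is plain arithmetic, so it is easy to sanity-check by mirroring it in Ruby (the method name is mine; the numbers match the fixture values used earlier):

```ruby
# Percentage discount, as computed by the SQL expression
# 100.0 * (mrp - price) / mrp, rounded to 2 decimal places.
def discount_percent(mrp, price)
  (100.0 * (mrp - price) / mrp).round(2)
end

discount_percent(1500.0, 1300.0)  # => 13.33, so this variant would NOT pass the > 30 filter
```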

🔹 6. Color-wise stock summary for a product category

SELECT 
  p.category,
  pv.color,
  SUM(pv.stock_quantity) AS total_stock
FROM products p
JOIN product_variants pv ON p.id = pv.product_id
WHERE p.category = 'Shoes'
GROUP BY p.category, pv.color
ORDER BY total_stock DESC;

These queries simulate real-world dashboard views: inventory tracking, product health, stock alerts, etc.


Happy SQL Query Writing! 🚀

Rails 8 App: Setup Test DB in PostgreSQL | Write SQL Queries

1. Add the test db and users table: https://railsdrop.com/2025/04/25/rails-8-app-postgresql-heap-vs-mysql-innodb-b-tree-indexing/

2. Add fake data into the table: https://railsdrop.com/2025/04/29/rails-8-app-postgresql-faker-extensions-for-rails/

Start Learning (Premium): https://railsdrop.com/sql-postgresql-queries-bitmap-seq-index-scan-db-clustering/

📌 Summary of all queries

Check: https://railsdrop.com/postgresql-queries-a-summary/


Read – Composite vs Individual indexes (Premium): https://railsdrop.com/sql-postgres-understanding-current-composite-index/

Read Create 1 million sample users: https://railsdrop.com/sql-postgresql-create-1-million-sample-users-data/

👉 1. SELECT – Basic Query

🔹 1. Select all columns:

SELECT * FROM users;

This gives you every row and every column in the users table.

🔹 2. Select specific columns:

SELECT username, email FROM users;

This limits the output to only username and email.

👉 2. ALTER – Modify Table Structure

The ALTER TABLE statement is used to add, delete, or modify columns in an existing table.

🔹 Example 1: Add a new column

Let’s add a column created_at of type timestamp:

ALTER TABLE users 
  ADD COLUMN created_at timestamp;

🔹 Example 2: Rename a column

Let’s rename phone_number to mobile:

ALTER TABLE users
  RENAME COLUMN phone_number TO mobile;

🔹 Example 3: Drop a column

Let’s say you want to remove the created_at column:

ALTER TABLE users
  DROP COLUMN created_at;

🔹 Example 4: Modify data in specific rows

UPDATE users
  SET mobile = '123456'
  WHERE mobile IS NULL;
  • Use UPDATE instead of ALTER when modifying data in a table.
  • ALTER is used for changing the structure/schema of a table (e.g., adding columns), not for updating data.
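The same distinction exists in Rails: schema changes live in migrations, while data changes go through ordinary ActiveRecord updates. A sketch (class and column names follow the examples above):

```ruby
# Schema change (ALTER TABLE) - belongs in a migration:
class AdjustUsersColumns < ActiveRecord::Migration[8.0]
  def change
    add_column :users, :created_at, :timestamp
    rename_column :users, :phone_number, :mobile
  end
end

# Data change (UPDATE) - plain ActiveRecord, no migration needed:
User.where(mobile: nil).update_all(mobile: '123456')
```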

👉 3. DISTINCT – Remove Duplicate Values

The DISTINCT keyword is used to return only unique (non-duplicate) values in a result set.

🔹 Example 1: Distinct usernames
SELECT DISTINCT username FROM users;

This returns a list of unique usernames, removing any duplicates.

🔹 Example 2: Distinct combinations of username and email
SELECT DISTINCT username, email FROM users;

This checks for uniqueness based on both username and email combined.

🔹 Example 3: Distinct with a filter (and inspecting the query plan)
SELECT DISTINCT username FROM users WHERE username LIKE '%quin%';
EXPLAIN ANALYZE SELECT DISTINCT username FROM users WHERE username LIKE '%quin%';


👉 4. WHERE – Filter Records + Major Combine Types (AND, OR, NOT)

The WHERE clause is used to filter records that meet a certain condition.

Let’s look at basic and combined conditions using our users table.

🔹 Example 1: Simple WHERE
SELECT * FROM users WHERE username = 'john_doe';

🔹 Example 2: AND – Combine multiple conditions (all must be true)
SELECT * FROM users 
WHERE username = 'quinton' AND email LIKE '%@gmail.com';

🔹 Example 3: OR – At least one condition must be true
SELECT * FROM users 
WHERE username = 'quinton' OR username = 'joaquin_hand';

🔹 Example 4: NOT – Negate a condition
SELECT * FROM users 
WHERE NOT email LIKE '%@example.com';

🔹 Example 5: Combine AND, OR, NOT (use parentheses!)
SELECT * FROM users 
WHERE (email like '%example%' OR email like '%test%') 
  AND NOT username = 'admin';


👉 5. ORDER BY – Sort the Results

And we’ll also look at combined queries afterward.

🔹 Example 1: Order by a single column (ascending)
SELECT * FROM users 
ORDER BY username;

🔹 Example 2: Order by a column (descending)
SELECT * FROM users 
ORDER BY email DESC;

🔹 Example 3: Order by multiple columns
SELECT * FROM users 
ORDER BY username ASC, email DESC;


👉 6. Combined Queries (UNION, INTERSECT, EXCEPT)

✅ These allow you to combine results from multiple SELECT statements.

⚠ Requirements:

  • Each query must return the same number of columns.
  • Data types must be compatible.
🔹 UNION – Combine results and remove duplicates
SELECT username FROM users WHERE email LIKE '%@example.com'
UNION
SELECT username FROM users WHERE username LIKE 'ton%';

🔹 UNION ALL – Combine results and keep duplicates
SELECT username FROM users WHERE email LIKE '%@gmail.com'
UNION ALL
SELECT username FROM users WHERE username LIKE 'test%';

🔹 INTERSECT – Return only common results
SELECT username FROM users 
  WHERE email LIKE '%@gmail.com'
INTERSECT
SELECT username FROM users 
  WHERE username LIKE 'test%';

SELECT username FROM users
  WHERE (email like '%example' OR email like '%test')
INTERSECT
SELECT username FROM users
  WHERE username like 'adam';
🔹 EXCEPT – Return results from the first query that are not in the second
SELECT username FROM users 
  WHERE email LIKE '%@example'
EXCEPT
SELECT username FROM users 
  WHERE (username like '%ada%' OR username like '%merlin%');


👉 7. IS NULL and IS NOT NULL – Handling Missing Data

These are used to check if a column contains a NULL value (i.e., no value).

🔹 Example 1: Users with a missing email, and users with an email

-- Find users with a missing email
SELECT * FROM users 
WHERE email IS NULL;

-- Find users who have an email
SELECT * FROM users 
WHERE email IS NOT NULL;

🔹 Example 2: Users with no email and no mobile
SELECT * FROM users 
WHERE email IS NULL AND mobile IS NULL;

🔹 Example 3: Users with either email or mobile missing
SELECT * FROM users 
WHERE email IS NULL OR mobile IS NULL;

🔹 Example 4: Users who have an email and username starts with ‘adam’
SELECT * FROM users 
WHERE email IS NOT NULL AND username LIKE 'adam%';

🔹 Example 5: Users with email missing but username is not empty
SELECT * FROM users 
WHERE email IS NULL AND username IS NOT NULL;

🔹 Example 6: Users where email or mobile is null, but not both (exclusive or)
SELECT * FROM users 
WHERE (email IS NULL AND mobile IS NOT NULL)
   OR (email IS NOT NULL AND mobile IS NULL);


👉 8. LIMIT, SELECT TOP, SELECT TOP PERCENT (PostgreSQL-style)

In PostgreSQL, we use LIMIT instead of SELECT TOP.
(PostgreSQL doesn’t support TOP directly like SQL Server.)

🔹 Example 1: Limit number of results (first 10 rows)
SELECT * FROM users 
LIMIT 10;

🔹 Example 2: Combined with ORDER BY (top 5 newest usernames)
SELECT username FROM users 
  WHERE username IS NOT NULL
ORDER BY id DESC
LIMIT 5;

🔹 Example 3: Paginate (e.g., 11th to 20th row)
SELECT * FROM users 
ORDER BY id 
OFFSET 10 LIMIT 10;
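OFFSET/LIMIT pagination follows a simple formula: page n starts at offset (n - 1) * per_page. A small plain-Ruby sketch (the helper name is mine):

```ruby
# Compute the OFFSET/LIMIT pair for a 1-indexed page number.
def page_clause(page, per_page: 10)
  { offset: (page - 1) * per_page, limit: per_page }
end

page_clause(2)  # => { offset: 10, limit: 10 }, i.e. rows 11 to 20 as above
```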

🔔 Simulating SELECT TOP and SELECT TOP PERCENT in PostgreSQL

🔹 Example 4: Simulate SELECT TOP 1
SELECT * FROM users 
ORDER BY id 
LIMIT 1;

🔹 Example 5: Simulate SELECT TOP 10 PERCENT

To get the top 10% of users by id, you can use a subquery:

SELECT * FROM users
ORDER BY id
LIMIT (SELECT CEIL(COUNT(*) * 0.10) FROM users);

🔹 Example 6: Users with Gmail or Yahoo emails, ordered by ID, limit 5
SELECT id, username, email FROM users
WHERE email LIKE '%@gmail.com' OR email LIKE '%@yahoo.com'
AND username IS NOT NULL
ORDER BY id ASC
LIMIT 5;

Note: Without parentheses, AND has higher precedence than OR.

🔹 Better version with correct logic:
SELECT id, username, email FROM users
WHERE (email LIKE '%@gmail.com' OR email LIKE '%@yahoo.com')
  AND username IS NOT NULL
ORDER BY id ASC
LIMIT 5;


👉 9. Aggregation Functions: MIN, MAX, COUNT, AVG, SUM

These functions help you perform calculations on column values.

🔹 1. COUNT – Number of rows
SELECT COUNT(*) FROM users;

✔️ Total number of users.

SELECT COUNT(email) FROM users WHERE email IS NOT NULL;

✔️ Count of users who have an email.

🔹 2. MIN and MAX – Smallest and largest values
SELECT MIN(id) AS first_user, MAX(id) AS last_user FROM users;

🔹 3. AVG – Average (only on numeric fields)

Assuming id is somewhat sequential, we can do:

SELECT AVG(id) AS avg_id FROM users;

🔹 4. SUM – Total (again, only on numeric fields)
SELECT SUM(id) AS total_ids FROM users WHERE id < 1000;

Combined Queries with Aggregates

🔹 Example 1: Count users without email and with usernames starting with ‘test’
SELECT COUNT(*) FROM users 
WHERE email IS NULL AND username LIKE 'test%';

🔹 Example 2: Get min, max, avg ID of users with Gmail addresses
SELECT 
  MIN(id) AS min_id,
  MAX(id) AS max_id,
  AVG(id) AS avg_id
FROM users 
WHERE email LIKE '%@gmail.com';

🔹 Example 3: Count how many users per email domain
SELECT 
  SPLIT_PART(email, '@', 2) AS domain,
  COUNT(*) AS total_users
FROM users
WHERE email IS NOT NULL
GROUP BY domain
ORDER BY total_users DESC
LIMIT 5;

♦️ This query splits each email at the @ to group by domain (gmail.com, yahoo.com, etc.).
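SPLIT_PART(email, '@', 2) with GROUP BY is the SQL spelling of Ruby's group_by — here against a few made-up emails:

```ruby
emails = %w[
  a@gmail.com b@gmail.com c@yahoo.com
  d@gmail.com e@hotmail.com
]

# Group by the part after '@' and count — like GROUP BY domain + COUNT(*),
# then sort descending, like ORDER BY total_users DESC.
counts = emails
  .group_by { |e| e.split("@").last }
  .transform_values(&:size)
  .sort_by { |_, n| -n }

counts.first  # => ["gmail.com", 3]
```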

GROUP BY Course

Here’s the SQL query to get the maximum mark, minimum mark, and the email (or emails) of users grouped by each course:

Option 1: Basic GROUP BY with aggregate functions (only max/min mark per course, not emails)
SELECT
  course,
  MAX(mark) AS max_mark,
  MIN(mark) AS min_mark
FROM users
GROUP BY course;

Option 2: Include emails of users who have the max or min mark per course

(PostgreSQL-specific using subqueries and JOIN)

SELECT u.course, u.email, u.mark
FROM users u
JOIN (
  SELECT
    course,
    MAX(mark) AS max_mark,
    MIN(mark) AS min_mark
  FROM users
  GROUP BY course
) stats ON u.course = stats.course AND (u.mark = stats.max_mark OR u.mark = stats.min_mark)
ORDER BY u.course, u.mark DESC;

♦️ This second query shows all users who have the highest or lowest mark in their course, including ties.

Here’s the updated query that includes:

  • Course name
  • Emails of users with the maximum or minimum marks
  • Their marks
  • Average mark per course
SELECT
  u.course,
  u.email,
  u.mark,
  stats.max_mark,
  stats.min_mark,
  stats.avg_mark
FROM users u
JOIN (
  SELECT
    course,
    MAX(mark) AS max_mark,
    MIN(mark) AS min_mark,
    ROUND(AVG(mark), 2) AS avg_mark
  FROM users
  GROUP BY course
) stats ON u.course = stats.course AND (u.mark = stats.max_mark OR u.mark = stats.min_mark)
ORDER BY u.course, u.mark DESC;

Notes:
  • ROUND(AVG(mark), 2) gives the average mark rounded to two decimal places.
  • Users with the same max or min mark are all included.

Here’s the full query including:

  • Course
  • Email
  • Mark
  • Max/Min mark
  • Average mark
  • User count per course
SELECT
  u.course,
  u.email,
  u.mark,
  stats.max_mark,
  stats.min_mark,
  stats.avg_mark,
  stats.user_count
FROM users u
JOIN (
  SELECT
    course,
    MAX(mark) AS max_mark,
    MIN(mark) AS min_mark,
    ROUND(AVG(mark), 2) AS avg_mark,
    COUNT(*) AS user_count
  FROM users
  GROUP BY course
) stats ON u.course = stats.course AND (u.mark = stats.max_mark OR u.mark = stats.min_mark)
ORDER BY u.course, u.mark DESC;

♦️ This query gives you a full breakdown of top/bottom performers per course along with stats per group.

Here’s a version that adds the rank of each user within their course based on their mark (highest mark = rank 1), along with:

  • Course
  • Email
  • Mark
  • Rank (within course)
  • Max mark, Min mark, Average mark, User count per course
WITH ranked_users AS (
  SELECT
    u.*,
    RANK() OVER (PARTITION BY course ORDER BY mark DESC) AS rank_in_course
  FROM users u
),
course_stats AS (
  SELECT
    course,
    MAX(mark) AS max_mark,
    MIN(mark) AS min_mark,
    ROUND(AVG(mark), 2) AS avg_mark,
    COUNT(*) AS user_count
  FROM users
  GROUP BY course
)
SELECT
  r.course,
  r.email,
  r.mark,
  r.rank_in_course,
  cs.max_mark,
  cs.min_mark,
  cs.avg_mark,
  cs.user_count
FROM ranked_users r
JOIN course_stats cs ON r.course = cs.course
ORDER BY r.course, r.rank_in_course;

Key features:
  • Users are ranked per course using RANK() (supports ties).
  • The output includes all users, not just those with max/min marks.

NOTE: Here we can see output like:

    course    |                   email                   | mark | rank_in_course | max_mark | min_mark | avg_mark | user_count
--------------+-------------------------------------------+------+----------------+----------+----------+----------+------------
 IT           | lisandra.schoen@borer-effertz.test        | 1000 |              1 |     1000 |      100 |   543.04 |        796
 IT           | leona@jaskolski-jaskolski.test            | 1000 |              1 |     1000 |      100 |   543.04 |        796
 IT           | angle@ankunding-sauer.example             |  999 |              3 |     1000 |      100 |   543.04 |        796
 IT           | drucilla_okeefe@monahan.test              |  999 |              3 |     1000 |      100 |   543.04 |        796
 algebra      | natashia.langosh@luettgen.test            | 1000 |              1 |     1000 |      100 |   541.52 |        779
 algebra      | tiffany.tremblay@bergnaum.example         | 1000 |              1 |     1000 |      100 |   541.52 |        779
 algebra      | kristeen.nikolaus@crist.example           |  999 |              3 |     1000 |      100 |   541.52 |        779
 algebra      | domenic@predovic-dare.example             |  999 |              3 |     1000 |      100 |   541.52 |        779
 algebra      | kit@oconner.example                       |  999 |              3 |     1000 |      100 |   541.52 |        779
 architecture | tierra_reilly@botsford-okuneva.test       |  997 |              1 |      997 |      100 |   549.24 |        776
 architecture | celestine_reilly@bayer.example            |  996 |              2 |      997 |      100 |   549.24 |        776
 architecture | carson@kulas.example                      |  995 |              3 |      997 |      100 |   549.24 |        776
 botany       | hassan@towne.test                         | 1000 |              1 |     1000 |      103 |   554.07 |        760
 botany       | shaunna@hudson.test                       | 1000 |              1 |     1000 |      103 |   554.07 |        760
 botany       | sanford_jacobs@johnston.example           |  999 |              3 |     1000 |      103 |   554.07 |        760
 botany       | arnulfo_cremin@ernser.example             |  999 |              3 |     1000 |      103 |   554.07 |        760

Notice the gaps in the ranks (1, 1, 3): RANK() skips numbers after ties. To avoid the gaps we can use DENSE_RANK().

Here’s the updated query using DENSE_RANK() instead of RANK() — this avoids gaps in rank numbering when there are ties:

WITH ranked_users AS (
  SELECT
    u.*,
    DENSE_RANK() OVER (PARTITION BY course ORDER BY mark DESC) AS rank_in_course
  FROM users u
),
course_stats AS (
  SELECT
    course,
    MAX(mark) AS max_mark,
    MIN(mark) AS min_mark,
    ROUND(AVG(mark), 2) AS avg_mark,
    COUNT(*) AS user_count
  FROM users
  GROUP BY course
)
SELECT
  r.course,
  r.email,
  r.mark,
  r.rank_in_course,
  cs.max_mark,
  cs.min_mark,
  cs.avg_mark,
  cs.user_count
FROM ranked_users r
JOIN course_stats cs ON r.course = cs.course
WHERE r.rank_in_course <= 3
ORDER BY r.course, r.rank_in_course;

DENSE_RANK difference:
  • If 2 users tie for 1st place, the next gets rank 2 (not 3 like with RANK).
  • Ensures consistent top-N output when ties are frequent.
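The difference between the two window functions can be reproduced in a few lines of Ruby (marks below are made up):

```ruby
marks  = [1000, 1000, 999, 999, 999, 998]
sorted = marks.sort.reverse   # ORDER BY mark DESC (already sorted here)
distinct = sorted.uniq

# RANK: 1-based position of the first row with the same mark; ties leave gaps.
rank       = sorted.map { |m| sorted.index(m) + 1 }
# DENSE_RANK: 1-based position among distinct marks; no gaps.
dense_rank = sorted.map { |m| distinct.index(m) + 1 }

rank        # => [1, 1, 3, 3, 3, 6]
dense_rank  # => [1, 1, 2, 2, 2, 3]
```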

🔥 Boom, Bonus: To export the query result as a CSV file in PostgreSQL, you can use the \copy command in psql (PostgreSQL’s CLI), like this:

🧾 Export Top 3 Students per Course to CSV

\copy (
  WITH ranked_users AS (
    SELECT
      u.*,
      DENSE_RANK() OVER (PARTITION BY course ORDER BY mark DESC) AS rank_in_course
    FROM users u
  ),
  course_stats AS (
    SELECT
      course,
      MAX(mark) AS max_mark,
      MIN(mark) AS min_mark,
      ROUND(AVG(mark), 2) AS avg_mark,
      COUNT(*) AS user_count
    FROM users
    GROUP BY course
  )
  SELECT
    r.course,
    r.email,
    r.mark,
    r.rank_in_course,
    cs.max_mark,
    cs.min_mark,
    cs.avg_mark,
    cs.user_count
  FROM ranked_users r
  JOIN course_stats cs ON r.course = cs.course
  WHERE r.rank_in_course <= 3
  ORDER BY r.course, r.rank_in_course
) TO 'top_students_per_course.csv' WITH CSV HEADER;

✅ Requirements:
  • Run this in the psql shell.
  • The file top_students_per_course.csv will be saved in your local working directory (where psql was started), because \copy writes the file on the client side.
  • Make sure your shell user has write permissions to that directory. (Server-side COPY ... TO, by contrast, writes on the database server and requires elevated privileges.)


👉 10. LIKE, %, _ – Pattern Matching in SQL

These are used to filter text using wildcards:

  • % = matches any sequence of characters (0 or more)
  • _ = matches exactly one character

🔹 Basic LIKE Queries

Example 1: Usernames starting with “admin”
SELECT * FROM users 
WHERE username LIKE 'admin%';

Example 2: Usernames ending with “bot”
SELECT * FROM users 
WHERE username LIKE '%bot';

Example 3: Usernames containing “test”
SELECT * FROM users 
WHERE username LIKE '%test%';

🔹 _ Single-character Wildcard

Example 4: 5-character usernames
SELECT * FROM users 
WHERE username LIKE '_____';

(Each _ stands for one character.)

Example 5: Usernames starting with any single letter + “ohn” (e.g., “john”, “kohn”)
SELECT * FROM users 
WHERE username LIKE '_ohn';
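LIKE's two wildcards translate cleanly to a Regexp — a small sketch for checking patterns outside the database (the helper name is mine, not a library function):

```ruby
# Convert a SQL LIKE pattern to an anchored Regexp:
#   %  -> .*   (any sequence of characters, including none)
#   _  -> .    (exactly one character)
def like_to_regexp(pattern)
  body = pattern.chars.map do |ch|
    case ch
    when "%" then ".*"
    when "_" then "."
    else Regexp.escape(ch)
    end
  end.join
  /\A#{body}\z/   # anchored: LIKE must match the whole string
end

like_to_regexp("_ohn") =~ "john"    # => 0 (match)
like_to_regexp("_ohn") =~ "johnny"  # => nil (pattern must cover the whole string)
```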

Combined Queries with LIKE, %, _

🔹 Example 6: Users whose username contains “test” and email ends with “gmail.com”
SELECT * FROM users 
WHERE username LIKE '%test%' AND email LIKE '%@gmail.com';

🔹 Example 7: Users with 3-character usernames and missing email
SELECT * FROM users 
WHERE username LIKE '___' AND email IS NULL;

🔹 Example 8: Users with usernames that start with “a” or end with “x” and have a phone number
SELECT * FROM users 
WHERE (username LIKE 'a%' OR username LIKE '%x') AND phone_number IS NOT NULL;


👉 11. IN, NOT IN, BETWEEN – Set & Range Filters

These are used to filter based on a list of values (IN) or a range (BETWEEN).

🔹 1. IN – Match any of the listed values

SELECT * FROM users 
WHERE username IN ('admin', 'test_user', 'john_doe');

🔹 2. NOT IN – Exclude listed values

SELECT * FROM users 
WHERE username NOT IN ('admin', 'test_user');

🔹 3. BETWEEN – Match within a range (inclusive)

SELECT * FROM users 
WHERE id BETWEEN 100 AND 200;

♦️ Equivalent to: id >= 100 AND id <= 200
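The same inclusive-range and set checks read almost identically in Ruby — Range#cover? is inclusive on both ends, just like BETWEEN:

```ruby
id = 150

# id BETWEEN 100 AND 200 — inclusive on both ends
(100..200).cover?(id)                  # => true

# username IN ('admin', 'test_user')
%w[admin test_user].include?("admin")  # => true

# username NOT IN ('admin', 'system')
!%w[admin system].include?("alice")    # => true
```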

Combined Queries

🔹 Example 1: Users with username in a list and id between 1 and 500
SELECT * FROM users 
WHERE username IN ('alice', 'bob', 'carol') 
  AND id BETWEEN 1 AND 500;

🔹 Example 2: Exclude system users and select a range of IDs
SELECT id, username FROM users 
WHERE username NOT IN ('admin', 'system') 
  AND id BETWEEN 1000 AND 2000;

🔹 Example 3: Top 5 users whose email domains are in a specific list
SELECT * FROM users 
WHERE SPLIT_PART(email, '@', 2) IN ('gmail.com', 'yahoo.com', 'hotmail.com')
ORDER BY id
LIMIT 5;


👉 12. SQL Aliases – Renaming Columns or Tables Temporarily

Aliases help improve readability, especially in joins or when using functions.

🔹 1. Column Aliases

Use AS (optional keyword) to rename a column in the result.

Example 1: Rename username to user_name
SELECT username AS user_name, email AS user_email 
FROM users;

You can also omit AS:

SELECT username user_name, email user_email 
FROM users;

🔹 2. Table Aliases

Assign a short name to a table (very useful in joins).

Example 2: Simple alias for table
SELECT u.username, u.email 
FROM users u 
WHERE u.email LIKE '%@gmail.com';

🔹 3. Alias with functions

SELECT COUNT(*) AS total_users, MAX(id) AS latest_id 
FROM users;

Combined Query with Aliases

🔹 Example 4: Count Gmail users, alias result and filter
SELECT 
  COUNT(*) AS gmail_users 
FROM users u 
WHERE u.email LIKE '%@gmail.com';

🔹 Example 5: List usernames with shortened table name and domain extracted
SELECT 
  u.username AS name, 
  SPLIT_PART(u.email, '@', 2) AS domain 
FROM users u 
WHERE u.email IS NOT NULL 
ORDER BY u.username
LIMIT 10;


Rails 8 App: Setup Test DB in PostgreSQL | Faker | Extensions for Rails app, VSCode

Let’s try to add some sample data first to our database.

Step 1: Install pgxnclient

On macOS (with Homebrew):

brew install pgxnclient

On Ubuntu/Debian:

sudo apt install pgxnclient

Step 2: Install the faker extension via PGXN

pgxn install faker

I hit an issue installing faker via pgxn:

~ pgxn install faker
INFO: best version: faker 0.5.3
ERROR: resource not found: 'https://api.pgxn.org/dist/PostgreSQL_Faker/0.5.3/META.json'

⚠️ Note: The faker extension we’re trying to install via PGXN is not available (or is improperly published) on the PGXN network. Unfortunately, the faker extension is somewhat unofficial and not actively maintained or reliably hosted.

🚨 You can SKIP steps 3, 4, and 5 and go with Option 2 instead

Step 3: Build and install the extension into PostgreSQL

cd /path/to/pg_faker  # PGXN will print this after install
make
sudo make install

Step 4: Enable it in your database

Inside psql :

CREATE EXTENSION faker;

Step 5: Insert 10,000 fake users

INSERT INTO users (user_id, username, email, phone_number)
SELECT
  gs AS user_id,
  faker_username(),
  faker_email(),
  faker_phone_number()
FROM generate_series(1, 10000) AS gs;

Option 2: Use Ruby + Faker gem (if you’re using Rails or Ruby)

If you’re building your app in Rails, use the faker gem directly:

In Ruby:
require 'faker'
require 'pg'

conn = PG.connect(dbname: 'test_db')

(1..10_000).each do |i|
  conn.exec_params(
    "INSERT INTO users (user_id, username, email, phone_number) VALUES ($1, $2, $3, $4)",
    [i, Faker::Internet.username, Faker::Internet.email, Faker::PhoneNumber.phone_number]
  )
end

In Rails (for test_db), Create the Rake Task:

Create a file at:

lib/tasks/seed_fake_users.rake
# lib/tasks/seed_fake_users.rake

namespace :db do
  desc "Seed 10,000 fake users into the users table"
  task seed_fake_users: :environment do
    require "faker"
    require "pg"

    conn = PG.connect(dbname: "test_db")

    # If user_id is a serial column and you want to reset the sequence, use:
    # conn.exec("TRUNCATE TABLE users RESTART IDENTITY")
    # Delete existing users before loading fake ones:
    conn.exec("DELETE FROM users")
    

    puts "Seeding 10,000 fake users ...."
    (1..10_000).each do |i|
      conn.exec_params(
        "INSERT INTO users (user_id, username, email, phone_number) VALUES ($1, $2, $3, $4)",
        [ i, Faker::Internet.username, Faker::Internet.email, Faker::PhoneNumber.phone_number ]
      )
    end
    puts "Seeded 10,000 fake users into the users table"
    conn.close
  end
end
# run the task
bin/rails db:seed_fake_users

For a plain Rails rake task using ActiveRecord:
# lib/tasks/seed_fake_users.rake

namespace :db do
  desc "Seed 10,000 fake users into the users table"
  task seed_fake_users: :environment do
    require 'faker'

    puts "🌱 Seeding 10,000 fake users..."

    users = []

    # delete existing users (delete_all skips callbacks — much faster than destroy_all)
    User.delete_all

    10_000.times do |i|
      users << {
        user_id: i + 1,
        username: Faker::Internet.unique.username,
        email: Faker::Internet.unique.email,
        phone_number: Faker::PhoneNumber.phone_number
      }
    end

    # Use insert_all for performance
    User.insert_all(users)

    puts "✅ Done. Inserted 10,000 users."
  end
end
# run the task
bin/rails db:seed_fake_users

Now let’s discuss PostgreSQL extensions and their usage.

PostgreSQL extensions are add-ons or plug-ins that extend the core functionality of PostgreSQL. They provide additional capabilities such as new data types, functions, operators, index types, or full features like full-text search, spatial data handling, or fake data generation.

🔧 What Extensions Can Do

Extensions can:

  • Add functions (e.g. gen_random_bytes() from pgcrypto)
  • Provide data types (e.g. hstore, uuid, jsonb)
  • Enable indexing techniques (e.g. btree_gin, pg_trgm)
  • Provide tools for testing and development (e.g. faker, pg_stat_statements)
  • Enhance performance monitoring, security, or language support

📦 Common PostgreSQL Extensions

Extension          | Purpose
-------------------+----------------------------------------------------------------
pgcrypto           | Cryptographic functions (e.g., hashing, random byte generation)
uuid-ossp          | Functions to generate UUIDs
postgis            | Spatial and geographic data support
hstore             | Key-value store in a single PostgreSQL column
pg_trgm            | Trigram-based text search and indexing
citext             | Case-insensitive text type
pg_stat_statements | SQL query statistics collection
faker              | Generates fake but realistic data (for testing)

📥 Installing and Enabling Extensions

1. Install (if not built-in)

Via package manager or PGXN (PostgreSQL Extension Network), or compile from source.

2. Enable in a database

CREATE EXTENSION extension_name;

Example:

CREATE EXTENSION pgcrypto;

Enabling an extension makes its functionality available to the current database only.

🤔 Why Use Extensions?

  • Productivity: Quickly add capabilities without writing custom code.
  • Performance: Access to advanced indexing, statistics, and optimization tools.
  • Development: Generate test data (faker), test encryption (pgcrypto), etc.
  • Modularity: PostgreSQL stays lightweight while letting you add only what you need.

Here’s a categorized list (with a simple visual-style layout) of PostgreSQL extensions that are safe and useful for Rails apps in both development and production environments.

🔌 PostgreSQL Extensions for Rails Apps

# connect psql
psql -U username -d database_name

# list all available extensions
SELECT * FROM pg_available_extensions;

# eg. to install the hstore extension run
CREATE EXTENSION hstore;

# verify the installation
SELECT * FROM pg_extension;
SELECT * FROM pg_extension WHERE extname = 'hstore';

🔐 Security & UUIDs

Extension | Use Case                           | Safe for Prod
----------+------------------------------------+--------------
pgcrypto  | Secure random bytes, hashes, UUIDs | ✅
uuid-ossp | UUID generation (v1, v4, etc.)     | ✅

💡 Tip: Use uuid-ossp or pgcrypto to generate UUID primary keys (id: :uuid) in Rails.

📘 PostgreSQL Procedures and Triggers — Explained with Importance and Examples

PostgreSQL is a powerful, open-source relational database that supports advanced features like stored procedures and triggers, which are essential for encapsulating business logic inside the database.

🔹 What are Stored Procedures in PostgreSQL?

A stored procedure is a pre-compiled set of SQL and control-flow statements stored in the database and executed by calling it explicitly.

Purpose: Encapsulate business logic, reuse complex operations, improve performance, and reduce network overhead.

✅ Benefits of Stored Procedures:
  • Faster execution (compiled and stored in DB)
  • Centralized logic
  • Reduced client-server round trips
  • Language support: SQL, PL/pgSQL, Python, etc.
🧪 Example: Create a Procedure to Add a New User
CREATE OR REPLACE PROCEDURE add_user(name TEXT, email TEXT)
LANGUAGE plpgsql
AS $$
BEGIN
    INSERT INTO users (name, email) VALUES (name, email);
END;
$$;

▶️ Call the procedure:
CALL add_user('John Doe', 'john@example.com');


🔹 What are Triggers in PostgreSQL?

A trigger is a special function that is automatically executed in response to certain events on a table (like INSERT, UPDATE, DELETE).

Purpose: Enforce rules, maintain audit logs, auto-update columns, enforce integrity, etc.

✅ Benefits of Triggers:
  • Automate tasks on data changes
  • Enforce business rules and constraints
  • Keep logs or audit trails
  • Maintain derived data or counters

🧪 Example: Trigger to Log Inserted Users

1. Create the audit table:

CREATE TABLE user_audit (
    id SERIAL PRIMARY KEY,
    user_id INTEGER,
    name TEXT,
    email TEXT,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

2. Create the trigger function:

CREATE OR REPLACE FUNCTION log_user_insert()
RETURNS TRIGGER AS $$
BEGIN
    INSERT INTO user_audit (user_id, name, email)
    VALUES (NEW.id, NEW.name, NEW.email);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

3. Create the trigger on users table:

CREATE TRIGGER after_user_insert
AFTER INSERT ON users
FOR EACH ROW
EXECUTE FUNCTION log_user_insert();

Now, every time a user is inserted, the trigger logs it in the user_audit table automatically.

📌 Difference: Procedures vs. Triggers

Feature       | Stored Procedures                   | Triggers
--------------+-------------------------------------+-------------------------------------------------
When executed | Called explicitly with CALL         | Automatically executed on events
Purpose       | Batch processing, encapsulate logic | React to data changes automatically
Control       | Full control by developer           | Fire on database events (INSERT, UPDATE, DELETE)
Returns       | No return or OUT parameters         | Must return NEW or OLD row in most cases

🎯 Why Are Procedures and Triggers Important?

✅ Use Cases for Stored Procedures:
  • Bulk processing (e.g. daily billing)
  • Data import/export
  • Account setup workflows
  • Multi-step business logic
✅ Use Cases for Triggers:
  • Auto update updated_at column
  • Enforce soft-deletes
  • Maintain counters or summaries (e.g., post comment count)
  • Audit logs / change history
  • Cascading updates or cleanups

🚀 Real-World Example: Soft Delete Trigger

Instead of deleting records, mark them as deleted = true.

CREATE OR REPLACE FUNCTION soft_delete_user()
RETURNS TRIGGER AS $$
BEGIN
  UPDATE users SET deleted = TRUE WHERE id = OLD.id;
  RETURN NULL; -- cancel the delete
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER before_user_delete
BEFORE DELETE ON users
FOR EACH ROW
EXECUTE FUNCTION soft_delete_user();

Now any DELETE FROM users WHERE id = 1; will just update the deleted column.

🛠️ Tools to Manage Procedures & Triggers

  • pgAdmin (GUI)
  • psql (CLI)
  • Code-based migrations (via tools like ActiveRecord or pg gem)

🧠 Summary

Feature     | Stored Procedure                | Trigger
------------+---------------------------------+------------------------------------------
Manual/Auto | Manual (CALL)                   | Auto (event-based)
Flexibility | Complex logic, loops, variables | Quick logic, row-based or statement-based
Languages   | PL/pgSQL, SQL, Python, etc.     | PL/pgSQL, SQL
Best for    | Multi-step workflows            | Audit, logging, validation

Use Postgres RANDOM()

PostgreSQL’s built-in RANDOM() function lets an application efficiently retrieve a random user straight from the database. Here’s why this is important:

  1. Efficiency: PostgreSQL’s RANDOM() is more efficient than loading all records into memory and selecting one randomly in Ruby. This is especially important when dealing with large datasets (like if we have 10000 users).
  2. Database-level Operation: The randomization happens at the database level rather than the application level, which:
  • Reduces memory usage (we don’t need to load unnecessary records)
  • Reduces network traffic (only one record is transferred)
  • Takes advantage of PostgreSQL’s optimized random number generation
  3. Single Query: Using RANDOM() allows us to fetch a random record in a single SQL query, typically something like:

SELECT * FROM users ORDER BY RANDOM() LIMIT 1;

This is in contrast to less efficient methods like:

  • Loading all users and using Ruby’s sample method (User.all.sample)
  • Getting a random ID and then querying for it (which would require two queries)
  • Using offset with count (which can be slow on large tables)

🔍 Full Text Search & Similarity

Extension     | Use Case                                                   | Safe for Prod
--------------+------------------------------------------------------------+----------------------
pg_trgm       | Trigram-based fuzzy search (great with ILIKE & similarity) | ✅
unaccent      | Remove accents for better search results                   | ✅
fuzzystrmatch | Soundex, Levenshtein distance                              | ✅ (heavy use = test!)

💡 Combine pg_trgm + unaccent for powerful search in Rails models using ILIKE.

📊 Performance Monitoring & Dev Insights

Extension          | Use Case                        | Safe for Prod
-------------------+---------------------------------+--------------
pg_stat_statements | Monitor slow queries, frequency | ✅
auto_explain       | Log plans for slow queries      | ✅
hypopg             | Simulate hypothetical indexes   | ✅ (dev only)

🧪 Dev Tools & Data Generation

Extension | Use Case                         | Safe for Prod
----------+----------------------------------+--------------
faker     | Fake data generation for testing | ❌ Dev only
pgfaker   | Community alternative to faker   | ❌ Dev only

📦 Storage & Structure

Extension | Use Case                      | Safe for Prod
----------+-------------------------------+--------------
hstore    | Key-value storage in a column | ✅
citext    | Case-insensitive text         | ✅

💡 citext is very handy for case-insensitive email columns in Rails.

🗺️ Geospatial (Advanced)

Extension | Use Case                 | Safe for Prod
----------+--------------------------+--------------
postgis   | GIS/spatial data support | ✅ (big apps)

🎨 Visual Summary

+-------------------+-----------------------------+-----------------+
| Category          | Extension                   | Safe for Prod?  |
+-------------------+-----------------------------+-----------------+
| Security/UUIDs    | pgcrypto, uuid-ossp         | ✅              |
| Search/Fuzziness  | pg_trgm, unaccent, fuzzystr | ✅              |
| Monitoring        | pg_stat_statements          | ✅              |
| Dev Tools         | faker, pgfaker              | ❌ (Dev only)   |
| Text/Storage      | citext, hstore              | ✅              |
| Geo               | postgis                     | ✅              |
+-------------------+-----------------------------+-----------------+

PostgreSQL Extension for VSCode

# 1. open the Command Palette (Cmd + Shift + P)
# 2. Type 'PostgreSQL: Add Connection'
# 3. Enter the hostname and database authentication details
# 4. Open Command Palette, type: 'PostgreSQL: New Query'

Enjoy PostgreSQL  🚀


Rails 8 App: Setup Test DB in PostgreSQL | Query Performance Using EXPLAIN ANALYZE

Now we’ll go full-on query performance pro mode using EXPLAIN ANALYZE and real plans. We’ll learn how PostgreSQL makes decisions, how to catch slow queries, and how your indexes make them 10x faster.

💎 Part 1: What is EXPLAIN ANALYZE?

EXPLAIN shows how PostgreSQL plans to execute your query.

ANALYZE runs the query and adds actual time, rows, loops, etc.

Syntax:

EXPLAIN ANALYZE
SELECT * FROM users WHERE username = 'bob';

✏️ Example 1: Without Index

SELECT * FROM users WHERE username = 'bob';

If username has no index, plan shows:

Seq Scan on users
  Filter: (username = 'bob')
  Rows Removed by Filter: 9999

❌ PostgreSQL scans all rows = Sequential Scan = slow!

➕ Add Index:

CREATE INDEX idx_users_username ON users (username);

Now rerun:

EXPLAIN ANALYZE SELECT * FROM users WHERE username = 'bob';

You’ll see:

Index Scan using idx_users_username on users
  Index Cond: (username = 'bob')

✅ PostgreSQL uses B-tree index
🚀 Massive speed-up!

🔥 Want even faster?

SELECT username FROM users WHERE username = 'bob';

If PostgreSQL shows:

Index Only Scan using idx_users_username on users
  Index Cond: (username = 'bob')

🎉 Index Only Scan! = covering index success!
No heap fetch = lightning-fast.

⚠️ Note: Index-only scan only works if:

  • Index covers all selected columns
  • Table is vacuumed (PostgreSQL uses visibility map)

If you still get Seq scan output like:

test_db=# EXPLAIN ANALYSE SELECT * FROM users where username = 'aman_chetri';
                                           QUERY PLAN
-------------------------------------------------------------------------------------------------
 Seq Scan on users  (cost=0.00..1.11 rows=1 width=838) (actual time=0.031..0.034 rows=1 loops=1)
   Filter: ((username)::text = 'aman_chetri'::text)
   Rows Removed by Filter: 2
 Planning Time: 0.242 ms
 Execution Time: 0.077 ms
(5 rows)

even after adding an index, because PostgreSQL is saying:

  • 🤔 “The table is so small (cost = 1.11), scanning the whole thing is cheaper than using the index.”
  • Also: if you select only indexed columns (e.g. SELECT username instead of SELECT *), the query becomes eligible for an Index Only Scan — but a heap fetch might still be needed until the visibility map is up to date.

🔧 Step-by-step Fix:

✅ 1. Add Data for Bigger Table

If the table is small (few rows), PostgreSQL will prefer Seq Scan no matter what.

Try adding ~10,000 rows:

INSERT INTO users (username, email, phone_number)
SELECT 'user_' || i, 'user_' || i || '@mail.com', '1234567890'
FROM generate_series(1, 10000) i;

Then VACUUM ANALYZE users; again and retry EXPLAIN.

✅ 2. Confirm Index Exists

First, check your index exists and is recognized:

\d users

You should see something like:

Indexes:
    "idx_users_username" btree (username)

If not, add:

CREATE INDEX idx_users_username ON users(username);

✅ 3. Run ANALYZE (Update Stats)
ANALYZE users;

This updates statistics — PostgreSQL might not be using the index if it thinks only one row matches or the table is tiny.

✅ 4. Vacuum for Index-Only Scan

Index-only scans require the visibility map to be set.

Run:

VACUUM ANALYZE users;

This marks pages in the table as “all-visible,” enabling PostgreSQL to avoid reading the heap.

✅ 5. Force PostgreSQL to Consider Index

You can turn off sequential scan temporarily (for testing):

SET enable_seqscan = OFF;

EXPLAIN SELECT username FROM users WHERE username = 'bob';

You should now see:

Index Scan using idx_users_username on users ...

⚠️ Use this only for testing/debugging — not in production.

💡 Extra Tip (optional): Use EXPLAIN (ANALYZE, BUFFERS)
EXPLAIN (ANALYZE, BUFFERS)
SELECT username FROM users WHERE username = 'bob';

This will show:

  • Whether heap was accessed
  • Buffer hits
  • Actual rows

📋 Summary

Step                      | Command
--------------------------+---------------------------
Check index               | \d users
Analyze table             | ANALYZE users;
Vacuum for visibility     | VACUUM ANALYZE users;
Disable seq scan for test | SET enable_seqscan = OFF;
Add more rows (optional)  | INSERT INTO ...

🚨 How to catch bad index usage?

Always look for:

  • “Seq Scan” instead of “Index Scan” ➔ missing index
  • “Heap Fetch” ➔ not a covering index
  • “Rows Removed by Filter” ➔ inefficient filtering
  • “Loops: 1000+” ➔ possible N+1 issue

Common Pattern Optimizations

Pattern                     | Fix
----------------------------+-----------------------------------------------------------------
WHERE column = ?            | B-tree index on column
WHERE column LIKE 'prefix%' | B-tree works (with text_ops)
SELECT col1 WHERE col2 = ?  | Covering index: (col2, col1) or (col2) INCLUDE (col1)
WHERE col BETWEEN ? AND ?   | Composite index with the range column last: (status, created_at)
WHERE col IN (?, ?, ?)      | Index still helps
ORDER BY col LIMIT 10       | Index on col helps sort fast

⚡ Tip: Use pg_stat_statements to Find Slow Queries

Enable it in postgresql.conf:

shared_preload_libraries = 'pg_stat_statements'

Then run:

SELECT query, total_exec_time, calls
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 5;

🎯 Find your worst queries & optimize them with new indexes!

🧪 Try It Yourself

Want a little lab setup to practice?

CREATE TABLE users (
  user_id serial PRIMARY KEY,
  username VARCHAR(220),
  email VARCHAR(150),
  phone_number VARCHAR(20)
);

-- Insert 100K fake rows
INSERT INTO users (username, email, phone_number)
SELECT
  'user_' || i,
  'user_' || i || '@example.com',
  '999-000-' || i
FROM generate_series(1, 100000) i;

Then test:

  1. EXPLAIN ANALYZE SELECT * FROM users WHERE username = 'user_5000';
  2. Add INDEX ON username
  3. Re-run, compare speed!

🎯 Extra Pro Tools for Query Performance

  • EXPLAIN ANALYZE → Always first tool
  • pg_stat_statements → Find slow queries in real apps
  • auto_explain → Log slow plans automatically
  • pgBadger or pgHero → Visual query monitoring
💥 Now We Know:

✅ How to read query plans
✅ When you’re doing full scans vs index scans
✅ How to achieve index-only scans
✅ How to catch bad performance early
✅ How to test and fix in real world

Happy Performance Fixing.. 🚀

Rails 8 App: Setup Test DB in PostgreSQL | Covering Index | BRIN Indexes | Hash Indexes | Create super fast indexes

Let’s look at some SQL data-indexing features. These will be super helpful while developing our Rails 8 application.

💎 Part 1: What is a Covering Index?

Normally when you query:

SELECT * FROM users WHERE username = 'bob';

  • Database searches username index (secondary).
  • Finds a pointer (TID or PK).
  • Then fetches full row from table (heap or clustered B-tree).

Problem:

  • Heap fetch = extra disk read.
  • Clustered B-tree fetch = extra traversal.

📜 Covering Index idea:

✅ If the index already contains all the columns you need,
✅ Then the database does not need to fetch the full row!

It can answer the query purely by scanning the index! ⚡

Boom — one disk read, no extra hop!

✏️ Example in PostgreSQL:

Suppose your query is:

SELECT username FROM users WHERE username = 'bob';

  • You only need username.
  • But by default, PostgreSQL indexes only store the index column (here, username) + TID.

✅ So in this case — already covering!

No heap fetch needed!

✏️ Example in MySQL InnoDB:

Suppose your query is:

SELECT username FROM users WHERE username = 'bob';

  • Secondary index (username) contains:
    • username (indexed column)
    • user_id (because secondary indexes in InnoDB always store PK)

♦️ So again, already covering!
No need to jump to the clustered index!

🎯 Key point:

If your query only asks for columns already inside the index,
then only the index is touched ➔ no second lookup ➔ super fast!

💎 Part 2: Real SQL Examples

✨ PostgreSQL

Create a covering index for common query:

CREATE INDEX idx_users_username_email ON users (username, email);

Now if you run:

SELECT email FROM users WHERE username = 'bob';

Postgres can:

  • Search index on username
  • Already have email in index
  • ✅ No heap fetch!

(And Postgres is smart: it checks index-only scan automatically.)

✨ MySQL InnoDB

Create a covering index:

CREATE INDEX idx_users_username_email ON users (username, email);

✅ Now query:

SELECT email FROM users WHERE username = 'bob';

Same behavior:

  • Only secondary index read.
  • No need to touch primary clustered B-tree.

💎 Part 3: Tips to design smart Covering Indexes

✅ If your query uses WHERE on col1 and SELECT col2,
✅ Best to create index: (col1, col2).

✅ Keep indexes small — don’t add 10 columns unless needed.
✅ Avoid huge TEXT or BLOB columns in covering indexes — they make indexes heavy.

Composite indexes are powerful:

CREATE INDEX idx_users_username_email ON users (username, email);

→ Can be used for:

  • WHERE username = ?
  • WHERE username = ? AND email = ?
  • etc.

✅ Monitor index usage:

  • PostgreSQL: EXPLAIN ANALYZE
  • MySQL: EXPLAIN

✅ Always check if Index Only Scan or Using Index appears in EXPLAIN plan!

📚 Quick Summary Table

Database     | Normal Query                                  | With Covering Index
-------------+-----------------------------------------------+-----------------------
PostgreSQL   | B-tree ➔ Heap fetch (unless TID optimization) | B-tree scan only
MySQL InnoDB | Secondary B-tree ➔ Primary B-tree             | Secondary B-tree only
Result       | 2 steps                                       | 1 step
Speed        | Slower                                        | Faster

🏆 Great! — Now We Know:

🧊 How heap fetch works!
🧊 How block lookup is O(1)!
🧊 How covering indexes skip heap fetch!
🧊 How to create super fast indexes for PostgreSQL and MySQL!


🦾 Advanced Indexing Tricks (Real Production Tips)

Now it’s time to look at some of the heavyweight features Postgres offers for making SQL data search and fetch fast and efficient.

1. 🎯 Partial Indexes (PostgreSQL ONLY)

✅ Instead of indexing the whole table,
✅ You can index only the rows you care about!

Example:

Suppose 95% of users have status = 'inactive', but you only search active users:

SELECT * FROM users WHERE status = 'active' AND email = 'bob@example.com';

👉 Instead of indexing the whole table:

CREATE INDEX idx_active_users_email ON users (email) WHERE status = 'active';

♦️ PostgreSQL will only store rows with status = 'active' in this index!

Advantages:

  • Smaller index = Faster scans
  • Less space on disk
  • Faster index maintenance (fewer index writes on updates/inserts)

Important:

  • MySQL (InnoDB) does NOT support partial indexes 😔 — only PostgreSQL has this superpower.

2. 🎯 INCLUDE Indexes (PostgreSQL 11+)

✅ Normally, a composite index uses all columns for sorting/searching.
✅ With INCLUDE, extra columns are just stored in index, not used for ordering.

Example:

CREATE INDEX idx_username_include_email ON users (username) INCLUDE (email);

Meaning:

  • username is indexed and ordered.
  • email is only stored alongside.

Now query:

SELECT email FROM users WHERE username = 'bob';

Index-only scan — no heap fetch.

Advantages:

  • Smaller & faster than normal composite indexes.
  • Helps to create very efficient covering indexes.

Important:

  • MySQL has no direct equivalent of INCLUDE — you’d use a regular composite index to get the same covering effect.

3. 🎯 Composite Index Optimization

✅ Always order columns inside index smartly based on query pattern.

Golden Rules:

⚜️ Equality columns first (WHERE col = ?)
⚜️ Range columns second (WHERE col BETWEEN ? AND ?)
⚜️ SELECT columns last (for covering)

Example:

If query is:

SELECT email FROM users WHERE status = 'active' AND created_at > '2024-01-01';

Best index:

CREATE INDEX idx_users_status_created_at ON users (status, created_at) INCLUDE (email);

♦️ status first (equality match)
♦️ created_at second (range)
♦️ email included (covering)

Bad Index: (wrong order)

CREATE INDEX idx_created_at_status ON users (created_at, status);

→ Will not be efficient!
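The ordering rule can be seen even outside a database. Here is a toy Ruby sketch (illustrative data only): with the equality column first, all matching entries sit in one contiguous, date-ordered run; with the range column first, they are scattered.

```ruby
# Toy model of index key ordering: each entry is [status, created_at].
rows = [
  ["active",   "2024-03-01"], ["inactive", "2024-02-01"],
  ["active",   "2023-12-01"], ["inactive", "2024-04-01"],
  ["active",   "2024-05-01"],
]

# Good order (status, created_at): all "active" entries form one contiguous,
# date-sorted run -- one seek plus a short range scan answers the query.
good = rows.sort
p good.map(&:first)  # prints ["active", "active", "active", "inactive", "inactive"]

# Bad order (created_at, status): "active" rows are scattered across the
# whole date range, so the index cannot narrow on status cheaply.
bad = rows.sort_by { |status, created_at| [created_at, status] }
p bad.map(&:first)   # prints ["active", "inactive", "active", "inactive", "active"]
```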

4. 🎯 BRIN Indexes (PostgreSQL ONLY, super special!)

✅ When your table is very huge (millions/billions of rows),
✅ And rows are naturally ordered (like timestamp, id increasing),
✅ You can create a BRIN (Block Range Index).

Example:

CREATE INDEX idx_users_created_at_brin ON users USING BRIN (created_at);

♦️ BRIN stores summaries of large ranges of pages (e.g., min/max timestamp per 128 pages).

♦️ Ultra small index size.

♦️ Very fast for large range queries like:

SELECT * FROM users WHERE created_at BETWEEN '2024-01-01' AND '2024-04-01';

Important:

  • BRIN ≠ B-tree
  • BRIN is approximate, B-tree is precise.
  • Only useful if data is naturally correlated with physical storage order.

MySQL?

  • MySQL does not have BRIN natively. PostgreSQL has a big advantage here.

5. 🎯 Hash Indexes (special case)

✅ If your query is always exact equality (not range),
✅ You can use hash indexes.

Example:

CREATE INDEX idx_users_username_hash ON users USING HASH (username);

Useful for:

  • Simple WHERE username = 'bob'
  • Never ranges (BETWEEN, LIKE, etc.)

⚠️ Warning:

  • Hash indexes were not crash-safe (not WAL-logged) before Postgres 10.
  • They are safe now, but a B-tree is usually still the better choice unless you have very heavy point lookups.

😎 PRO-TIP: Which Index Type to Use?

Use case                                                           | Index type
-------------------------------------------------------------------+--------------------------
Search small ranges or equality                                    | B-tree
Search on huge tables with natural order (timestamps, IDs)         | BRIN
Only exact match, super heavy lookup                               | Hash
Search only small part of table (active users, special conditions) | Partial index
Need to skip heap fetch                                            | INCLUDE / Covering Index

🗺️ Quick Visual Mindmap:

Your Query
│
├── Need Equality + Range? ➔ B-tree
│
├── Need Huge Time Range Query? ➔ BRIN
│
├── Exact equality only? ➔ Hash
│
├── Want Smaller Index (filtered)? ➔ Partial Index
│
├── Want to avoid Heap Fetch? ➔ INCLUDE columns (Postgres) or Covering Index

🏆 Now we Know:

🧊 Partial Indexes
🧊 INCLUDE Indexes
🧊 Composite Index order tricks
🧊 BRIN Indexes
🧊 Hash Indexes
🧊 How to choose best Index

This is serious pro-level database knowledge.


Enjoy SQL! 🚀

Rails 8 App: Setup Test DB | Comprehensive Guide 📖 for PostgreSQL , Mysql Indexing – PostgreSQL Heap ⛰ vs Mysql InnoDB B-Tree 🌿

Enter into psql terminal:

✗ psql postgres
psql (14.17 (Homebrew))
Type "help" for help.

postgres=# \l
                                     List of databases
           Name            |  Owner   | Encoding | Collate | Ctype |   Access privileges
---------------------------+----------+----------+---------+-------+-----------------------
 studio_development | postgres | UTF8     | C       | C     |

Next, let’s:

  • Create a new test database
  • Create a users Table
  • Check the db and table details

postgres=# create database test_db;
CREATE DATABASE

test_db=# CREATE TABLE users (
user_id INT,
username VARCHAR(220),
email VARCHAR(150),
phone_number VARCHAR(20)
);
CREATE TABLE

test_db=# \dt
List of relations
 Schema | Name  | Type  |  Owner
--------+-------+-------+----------
 public | users | table | abhilash
(1 row)

test_db=# \d users;
                          Table "public.users"
    Column    |          Type          | Collation | Nullable | Default
--------------+------------------------+-----------+----------+---------
 user_id      | integer                |           |          |
 username     | character varying(220) |           |          |
 email        | character varying(150) |           |          |
 phone_number | character varying(20)  |           |          |

Add a Primary key to users and check the user table.

test_db=# ALTER TABLE users ADD PRIMARY KEY (user_id);
ALTER TABLE

test_db=# \d users;
                          Table "public.users"
    Column    |          Type          | Collation | Nullable | Default
--------------+------------------------+-----------+----------+---------
 user_id      | integer                |           | not null |
 username     | character varying(220) |           |          |
 email        | character varying(150) |           |          |
 phone_number | character varying(20)  |           |          |
Indexes:
    "users_pkey" PRIMARY KEY, btree (user_id)

# OR add primary key when creating the table:
CREATE TABLE users (
  user_id INT PRIMARY KEY,
  username VARCHAR(220),
  email VARCHAR(150),
  phone_number VARCHAR(20)
);

You can see that a unique constraint and an index are added when you add a primary key.

Why does adding a primary key also add an index?

  • A primary key must guarantee that each value is unique and fast to find.
  • Without an index, the database would have to scan the whole table every time you look up a primary key, which would be very slow.
  • So PostgreSQL automatically creates a unique index on the primary key to make lookups efficient and to enforce uniqueness at the database level.

👉 It needs the index for speed and to enforce the “no duplicates” rule of primary keys.

What is btree?

  • btree stands for Balanced Tree (specifically, a “B-tree” data structure).
  • It’s the default index type in PostgreSQL.
  • B-tree indexes organize the data in a tree structure, so that searches, inserts, updates, and deletes are all very efficient — about O(log n) time.
  • It’s great for looking up exact matches (like WHERE user_id = 123) or range queries (like WHERE user_id BETWEEN 100 AND 200).

👉 So when you see btree, it just means it’s using a very efficient tree structure for your primary key index.
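A real B-tree has a fan-out much larger than two, but the O(log n) intuition is the same as binary search over sorted keys. A quick Ruby sketch (toy data, not a real index):

```ruby
# Sorted keys stand in for a B-tree's ordered index entries.
sorted_ids = (1..100_000).to_a
target = 76_543

# Without an index: scan row by row -- O(n) comparisons in the worst case.
linear_hit = sorted_ids.find { |id| id == target }

# With an index: binary search over sorted keys -- O(log n),
# roughly 17 comparisons for 100,000 rows instead of up to 100,000.
indexed_hit = sorted_ids.bsearch { |id| target <=> id }

p linear_hit   # prints 76543
p indexed_hit  # prints 76543
```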

Summary in one line:
Adding a primary key automatically adds a btree index to enforce uniqueness and make lookups super fast.


In MySQL (specifically InnoDB engine, which is default now):

  • Primary keys always create an index automatically.
  • The index is a clustered index — this is different from Postgres!
  • The index uses a B-tree structure too, just like Postgres.

👉 So yes, MySQL also adds an index and uses a B-tree under the hood for primary keys.

But here’s a big difference:

  • In InnoDB, the table data itself is stored inside the primary key’s B-tree.
    • That’s called a clustered index.
    • It means the physical storage of the table rows follows the order of the primary key.
  • In PostgreSQL, the index and the table are stored separately (non-clustered by default).

Example: If you have a table like this in MySQL:

CREATE TABLE users (
  user_id INT PRIMARY KEY,
  username VARCHAR(220),
  email VARCHAR(150)
);
  • user_id will have a B-tree clustered index.
  • The rows themselves will be stored sorted by user_id.

Short version:

Database       | Primary Key Behavior                                | B-tree? | Clustered?
---------------+-----------------------------------------------------+---------+--------------------------
PostgreSQL     | Separate index created for PK                       | Yes     | No (separate by default)
MySQL (InnoDB) | PK index + table rows stored inside the PK’s B-tree | Yes     | Yes (always clustered)

Why Indexing on Unique Columns (like email) Improves Lookup 🔍

Use Case

You frequently run queries like:

SELECT * FROM students WHERE email = 'john@example.com';

Without an index, this results in a full table scan — checking each row one-by-one.

With an index, the database can jump directly to the row using a sorted structure, significantly reducing lookup time — especially in large tables.
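To make the difference concrete, here is a small Ruby sketch (fake data, illustrative only) contrasting a row-by-row scan with a keyed lookup; a Hash stands in for the index structure:

```ruby
# Fake table: 50,000 rows with a unique email column.
rows = (1..50_000).map { |i| { id: i, email: "user#{i}@example.com" } }

# Full table scan: check each row one-by-one until a match (O(n)).
scan_hit = rows.find { |r| r[:email] == "user42000@example.com" }

# "Index": one keyed jump straight to the row
# (O(1) for a hash; O(log n) for a real B-tree -- either way, no full scan).
by_email = rows.each_with_object({}) { |r, h| h[r[:email]] = r }
indexed_hit = by_email["user42000@example.com"]

p scan_hit[:id]     # prints 42000
p indexed_hit[:id]  # prints 42000
```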


🌲 How SQL Stores Indexes Internally (PostgreSQL)

📚 PostgreSQL uses B-Tree indexes by default.

When you run:

CREATE UNIQUE INDEX idx_students_on_email ON students(email);

PostgreSQL creates a balanced B-tree like this:

          m@example.com
         /              \
  d@example.com     t@example.com
  /        \           /         \
...      ...        ...         ...

  • ✅ Keys (email values) are sorted lexicographically.
  • ✅ Each leaf node contains a pointer to the actual row in the students table (called a tuple pointer or TID).
  • ✅ Lookup uses binary search, giving O(log n) performance.

⚙️ Unique Index = Even Faster

Because all email values are unique, the database:

  • Can stop searching immediately once a match is found.
  • Doesn’t need to scan multiple leaf entries (no duplicates).

🧠 Summary

Feature             | Value
--------------------+----------------------------------------------------------------
Index Type          | B-tree (default in PostgreSQL)
Lookup Time         | O(log n) vs O(n) without index
Optimized for       | Equality search (WHERE email = ...), sorting, joins
Email is unique?    | ✅ Yes – index helps even more (no need to check multiple rows)
Table scan avoided? | ✅ Yes – PostgreSQL jumps directly via B-tree lookup

What Exactly is a Clustered Index in MySQL (InnoDB)?

🔹 In MySQL InnoDB, the primary key IS the table.

🔹 A Clustered Index means:

  • The table’s data rows are physically organized in the order of the primary key.
  • No separate storage for the table – it’s merged into the primary key’s B-tree structure.

In simple words:
👉 “The table itself lives inside the primary key B-tree.”

That’s why:

  • Every secondary index must store the primary key value (not a row pointer).
  • InnoDB can only have one clustered index (because you can’t physically order a table in two different ways).

📈 Visual for MySQL Clustered Index

Suppose you have:

CREATE TABLE users (
  user_id INT PRIMARY KEY,
  username VARCHAR(255),
  email VARCHAR(255)
);

The storage looks like:

B-tree by user_id (Clustered)

user_id  | username | email
----------------------------
101      | Alice    | a@x.com
102      | Bob      | b@x.com
103      | Carol    | c@x.com

👉 Table rows stored directly inside the B-tree nodes by user_id!


🔵 PostgreSQL (Primary Key Index = Separate)

Imagine you have a users table:

users table (physical table):

row_id | user_id | username | email
-------------------------------------
  1    |   101   | Alice    | a@example.com
  2    |   102   | Bob      | b@example.com
  3    |   103   | Carol    | c@example.com

And the Primary Key Index looks like:

Primary Key B-Tree (separate structure):

user_id -> row pointer
 101    -> row_id 1
 102    -> row_id 2
 103    -> row_id 3

👉 When you query WHERE user_id = 102, PostgreSQL goes:

  • Find user_id 102 in the B-tree index,
  • Then jump to row_id 2 in the actual table.

🔸 Index and Table are separate.
🔸 Extra step: index lookup ➔ then fetch row.

🟠 MySQL InnoDB (Primary Key Index = Clustered)

Same users table, but stored like this:

Primary Key Clustered B-Tree (index + data together):

user_id | username | email
---------------------------------
  101   | Alice    | a@example.com
  102   | Bob      | b@example.com
  103   | Carol    | c@example.com

👉 When you query WHERE user_id = 102, MySQL:

  • Goes straight to user_id 102 in the B-tree,
  • Data is already there, no extra lookup.

🔸 Index and Table are merged.
🔸 One step: direct access!

📈 Quick Visual:

PostgreSQL
(Index)    ➔    (Table Row)
    |
    ➔ extra lookup needed

MySQL InnoDB
(Index + Row Together)
    |
    ➔ data found immediately

Summary:

  • PostgreSQL: primary key index is separate ➔ needs 2 steps (index ➔ table).
  • MySQL InnoDB: primary key index is clustered ➔ 1 step (index = table).

📚 How Secondary Indexes Work

Secondary Index = an index on a column that is not the primary key.

Example:

CREATE INDEX idx_username ON users(username);

Now you have an index on username.

🔵 PostgreSQL Secondary Index Behavior

  • Secondary indexes are separate structures from the table (just like the primary key index).
  • When you query by username, PostgreSQL:
    1. Finds the matching row_id using the secondary B-tree index.
    2. Then fetches the full row from the table by row_id.
  • This is called an Index Scan + Heap Fetch.

📜 Example:

Secondary Index (username -> row_id):

username -> row_id
------------------
Alice    -> 1
Bob      -> 2
Carol    -> 3

(users table is separate)

👉 Flexible, but needs 2 steps: index (row_id) ➔ table.

🟠 MySQL InnoDB Secondary Index Behavior

  • In InnoDB, secondary indexes don’t store row pointers.
  • Instead, they store the primary key value!

So:

  1. Find the matching primary key using the secondary index.
  2. Use the primary key to find the actual row inside the clustered primary key B-tree.

📜 Example:

Secondary Index (username -> user_id):

username -> user_id
--------------------
Alice    -> 101
Bob      -> 102
Carol    -> 103

(Then find user_id inside Clustered B-Tree)

✅ Needs 2 steps too: secondary index (primary key) ➔ clustered table.

📈 Quick Visual:

Feature         | PostgreSQL                      | MySQL InnoDB
----------------+---------------------------------+------------------------------------------------
Secondary Index | username ➔ row pointer (row_id) | username ➔ primary key (user_id)
Fetch Full Row  | Use row_id to get table row     | Use primary key to find row in clustered index
Steps to Fetch  | Index ➔ Table                   | Index ➔ Primary Key ➔ Table (clustered)

Action                 | PostgreSQL                     | MySQL InnoDB
-----------------------+--------------------------------+--------------------------------------------
Primary Key Lookup     | Index ➔ Row (2 steps)          | Clustered Index (1 step)
Secondary Index Lookup | Index (row_id) ➔ Row (2 steps) | Secondary Index (PK) ➔ Row (2 steps)
Storage Model          | Separate index and table       | Primary key and table merged (clustered)

🌐 Now, let’s do some Real SQL Query ⛁ Examples!

1. Simple SELECT * FROM users WHERE user_id = 102;
  • PostgreSQL:
    Look into PK btree ➔ find row pointer ➔ fetch row separately.
  • MySQL InnoDB:
    Directly find the row inside the PK B-tree (no extra lookup).

MySQL is a little faster here because it needs only 1 step!

2. SELECT username FROM users WHERE user_id = 102; (Only 1 Column)
  • PostgreSQL:
    Might do an Index Only Scan if all needed data is in the index (very fast).
  • MySQL:
    Clustered index contains all columns already, no special optimization needed.

Both can be very fast, but PostgreSQL shines if the index is “covering” (i.e., contains all the needed columns), because its compact index is cheaper to scan than MySQL’s clustered index, which carries every column of the row.

3. SELECT * FROM users WHERE username = 'Bob'; (Secondary Index Search)
  • PostgreSQL:
    Secondary index on username ➔ row pointer ➔ fetch table row.
  • MySQL:
    Secondary index on username ➔ get primary key ➔ clustered index lookup ➔ fetch data.

Both are 2 steps, but MySQL needs 2 different B-trees: secondary ➔ primary clustered.

Consider the below situation:

SELECT username FROM users WHERE user_id = 102;
  • user_id is the Primary Key.
  • You only want username, not full row.

Now:

🔵 PostgreSQL Behavior

👉 In PostgreSQL, by default:

  • It uses the primary key btree to find the row pointer.
  • Then fetches the full row from the table (heap fetch).

👉 But PostgreSQL has an optimization called Index-Only Scan.

  • If all requested columns are already present in the index,
  • And if the table visibility map says the row is still valid (no deleted/updated row needing visibility check),
  • Then Postgres does not fetch the heap.

👉 So in this case:

  • If the primary key index also stores username internally (or if an extra index is created covering username), Postgres can satisfy the query just from the index.

✅ Result: No table lookup needed ➔ Very fast (almost as fast as InnoDB clustered lookup).

📢 Postgres primary key indexes usually don’t store extra columns, unless you specifically create an index that includes them (INCLUDE (username) syntax in modern Postgres 11+).

🟠 MySQL InnoDB Behavior
  • In InnoDB:
    Since the primary key B-tree already holds all columns (user_id, username, email),
    It directly finds the row from the clustered index.
  • So when you query by PK, even if you only need one column, it has everything inside the same page/block.

One fast lookup.

🔥 Why sometimes Postgres can still be faster?
  • If PostgreSQL uses Index-Only Scan, and the page is already cached, and no extra visibility check is needed,
    Then Postgres may avoid touching the table at all and only scan the tiny index pages.
  • In this case, for very narrow queries (e.g., only 1 small field), Postgres can outperform even MySQL clustered fetch.

💡 Because reading a few compact 8 KB index pages is cheaper than fetching the wider table rows.

🎯 Conclusion:

✅ MySQL clustered index is always fast for PK lookups.
✅ PostgreSQL can be even faster for small/narrow queries if Index-Only Scan is triggered.

👉 Quick Tip:

  • In PostgreSQL, you can force an index to include extra columns:

CREATE INDEX idx_user_id_username ON users(user_id) INCLUDE (username);

Then index-only scans become more common and predictable! 🚀

Isn’t PostgreSQL also doing 2 B-tree scans? One for secondary index and one for table (row_id)?

When you query with a secondary index, like:

SELECT * FROM users WHERE username = 'Bob';
  • In MySQL InnoDB, I said:
    1. Find in secondary index (username ➔ user_id)
    2. Then go to primary clustered index (user_id ➔ full row)
Let’s look at PostgreSQL first:

♦️ Step 1: Search Secondary Index B-tree on username.

  • It finds the matching TID (tuple ID) or row pointer.
    • TID is a pair (block_number, row_offset).
    • Not a B-tree! Just a physical pointer.

♦️ Step 2: Use the TID to directly jump into the heap (the table).

  • The heap (table) is not a B-tree — it’s just a collection of unordered pages (blocks of rows).
  • PostgreSQL goes directly to the block and offset — like jumping straight into a file.

🔔 Important:

  • Secondary index ➔ TID ➔ heap fetch.
  • No second B-tree traversal for the table!
🟠 Meanwhile in MySQL InnoDB:

♦️ Step 1: Search Secondary Index B-tree on username.

  • It finds the Primary Key value (user_id).

♦️ Step 2: Now, search the Primary Key Clustered B-tree to find the full row.

  • Need another B-tree traversal based on user_id.

🔔 Important:

  • Secondary index ➔ Primary Key B-tree ➔ data fetch.
  • Two full B-tree traversals!
Real-world Summary:

♦️ PostgreSQL

  • Secondary index gives a direct shortcut to the heap.
  • One B-tree scan (secondary) ➔ Direct heap fetch.

♦️ MySQL

  • Secondary index gives PK.
  • Then another B-tree scan (primary clustered) to find full row.

PostgreSQL does not scan a second B-tree when fetching from the table — just a direct page lookup using TID.

MySQL does scan a second B-tree (primary clustered index) when fetching full row after secondary lookup.

Is heap fetch a searching technique? Why is it faster than B-tree?

📚 Let’s start from the basics:

When PostgreSQL finds a match in a secondary index, what it gets is a TID.

♦️ A TID (Tuple ID) is a physical address made of:

  • Block Number (page number)
  • Offset Number (row slot inside the page)

Example:

TID = (block_number = 1583, offset = 7)

🔵 How PostgreSQL uses TID?

  1. It directly calculates the location of the block (disk page) using block_number.
  2. It reads that block (if not already in memory).
  3. Inside that block, it finds the row at offset 7.

♦️ No search, no btree, no extra traversal — just:

  • Find the page (via simple number addressing)
  • Find the row slot

📈 Visual Example

Secondary index (username ➔ TID):

username | TID
---------+-----------
Alice    | (1583, 7)
Bob      | (1592, 3)
Carol    | (1601, 12)

♦️ When you search for “Bob”:

  • Find (1592, 3) from secondary index B-tree.
  • Jump directly to Block 1592, Offset 3.
  • Done ✅!

Answer:

  • Heap fetch is NOT a search.
  • It’s a direct address lookup (fixed number).
  • Heap = unordered collection of pages.
  • Pages = fixed-size blocks (usually 8 KB each).
  • TID gives an exact GPS location inside heap — no searching required.

That’s why heap fetch is faster than another B-tree search:

  • No binary search, no B-tree traversal needed.
  • Only a simple disk/memory read + row offset jump.

🌿 B-tree vs 📁 Heap Fetch

Action          | B-tree                                  | Heap Fetch
----------------+-----------------------------------------+-------------------------------
What it does    | Binary search inside sorted tree nodes  | Direct jump to block and slot
Steps needed    | Traverse nodes (root ➔ internal ➔ leaf) | Directly read page and slot
Time complexity | O(log n)                                | O(1)
Speed           | Slower (needs comparisons)              | Very fast (direct)

🎯 Final and short answer:

♦️ In PostgreSQL, after finding the TID in the secondary index, the heap fetch is a direct, constant-time (O(1)) accessno B-tree needed!
♦️ This is faster than scanning another B-tree like in MySQL InnoDB.


🧩 Our exact question:

When we say:

Jump directly to Block 1592, Offset 3.

We are thinking:

  • There are thousands of blocks.
  • How can we directly jump to block 1592?
  • Shouldn’t that be O(n) (linear time)?
  • Shouldn’t there be some traversal?

🔵 Here’s the real truth:

  • No traversal needed.
  • No O(n) work.
  • Accessing Block 1592 is O(1) — constant time.

📚 Why?

Because of how files, pages, and memory work inside a database.

When PostgreSQL stores a table (the “heap”), it saves it in a file on disk.
The file is just a long array of fixed-size pages.

  • Each page = 8KB (default in Postgres).
  • Each block = 1 page = fixed 8KB chunk.
  • Block 0 is the first 8KB.
  • Block 1 is next 8KB.
  • Block 2 is next 8KB.
  • Block 1592 = (1592 × 8 KB) offset from the beginning.

✅ So block 1592 is simply located at 1592 × 8192 bytes offset from the start of the file.

✅ Operating systems (and PostgreSQL’s Buffer Manager) know exactly how to seek to that byte position without reading everything before it.

📈 Diagram (imagine the table file):
+-----------+-----------+-----------+-----------+-----------+------+
| Block 0   | Block 1   | Block 2   | Block 3   | Block 4   |  ... |
+-----------+-----------+-----------+-----------+-----------+------+
  (8KB)       (8KB)       (8KB)       (8KB)       (8KB)

Finding Block 1592 ➔
Seek directly to offset 1592 * 8192 bytes ➔
Read 8KB ➔
Find row at Offset 3 inside it.
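The seek arithmetic is plain multiplication. A tiny Ruby sketch of the same calculation (the File call is commented out because it needs a real relation file; the filename is hypothetical):

```ruby
# PostgreSQL's default page size is 8 KB; a TID is (block_number, offset).
PAGE_SIZE = 8192
block_number = 1592

# Byte position of block 1592 inside the heap file -- pure O(1) arithmetic.
byte_offset = block_number * PAGE_SIZE
puts byte_offset  # prints 13041664

# With a real heap file you would seek straight there, skipping blocks 0..1591:
# File.open("relation_file", "rb") do |f|
#   f.seek(byte_offset)      # jump directly -- no linear read
#   page = f.read(PAGE_SIZE) # pull exactly one 8 KB page
# end
```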

🤔 What happens technically?

If in memory (shared buffers / page cache):
  • PostgreSQL checks its buffer pool (shared memory).
  • “Do I already have block 1592 cached?”
    • ✅ Yes: immediately access memory address.
    • ❌ No: Load block 1592 from disk into memory.
If from disk (rare if cached):
  • File systems (ext4, xfs, etc) know how to seek to a byte offset in a file without reading previous parts.
  • Seek to (block_number × 8192) bytes.
  • Read exactly 8KB into memory.
  • No need to scan the whole file linearly.
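The cache-then-disk decision above can be sketched as a memoized lookup — a toy Ruby model, not PostgreSQL’s actual buffer manager:

```ruby
# Toy buffer pool: block_number => page contents.
buffer_pool = {}

read_block = lambda do |block_number|
  # Hit: the page is already in "shared memory" -- return it immediately.
  # Miss: "load" it (a real system would seek and read 8 KB from disk).
  buffer_pool[block_number] ||= "page-#{block_number}-data"
end

read_block.call(1592)        # first access: miss, page gets cached
puts buffer_pool.key?(1592)  # prints true
puts read_block.call(1592)   # prints page-1592-data (served from cache)
```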

📊 Final Step: Inside the Block

Once the block is loaded:

  • The block internally is structured like an array of tuples.
  • Each tuple is placed into an offset slot.
  • Offset 3 ➔ third tuple inside the block.

♦️ Again, this is just array lookup — no traversal, no O(n).

⚡ So to summarize:
Question                                    | Answer
--------------------------------------------+-------------------------------------------------------
How does PostgreSQL jump directly to block? | Block number × page size (fixed offset math)
Is it O(n)?                                 | ❌ No, it’s O(1) constant time
Is there any traversal?                     | ❌ No traversal — just a seek + memory read
How fast?                                   | Extremely fast if cached, still fast with a disk seek

🔥 Key concept:

PostgreSQL heap access is O(1) because the heap file is a flat sequence of fixed-size pages, and the TID gives exact coordinates.

🎯 Simple Real World Example:

Imagine you have a giant book (the table file).
Each page of the book is numbered (block number).

If someone says:

👉 “Go to page 1592.”

♦️ You don’t need to read pages 1 to 1591 first.
♦️ You just flip directly to page 1592.

📗 Same idea: no linear traversal, just positional lookup.

🧠 Deep thought:

Because blocks are fixed size and TID is known,
heap fetch is almost as fast as reading a small array.

(Actually faster than searching B-tree because B-tree needs multiple comparisons at each node.)

Enjoy SQL! 🚀

Setup 🛠 Rails 8 App – Part 13: Composite keys & Candidate keys in Rails DB

🔑 What Is a Composite Key?

A composite key is a primary key made up of two or more columns that together uniquely identify a row in a table.

Use a composite key when no single column is unique on its own, but the combination is.

👉 Example: Composite Key in Action

Let’s say we’re building a table to track which students are enrolled in which courses.

Without Composite Key:
-- This table might allow duplicates
CREATE TABLE Enrollments (
  student_id INT,
  course_id INT
);

Nothing stops the same student from enrolling in the same course multiple times!

With Composite Key:
CREATE TABLE Enrollments (
  student_id INT,
  course_id INT,
  PRIMARY KEY (student_id, course_id)
);

Now:

  • student_id alone is not unique
  • course_id alone is not unique
  • But together → each (student_id, course_id) pair is unique
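The uniqueness rule the composite key enforces can be modeled in plain Ruby with a Set of pairs (a toy sketch — the real constraint lives in the database):

```ruby
require "set"

# A Set of [student_id, course_id] pairs plays the role of
# PRIMARY KEY (student_id, course_id).
enrollments = Set.new

def enroll(enrollments, student_id, course_id)
  # Set#add? returns nil when the pair already exists -- our "constraint violation".
  enrollments.add?([student_id, course_id]) ? :inserted : :duplicate_rejected
end

p enroll(enrollments, 1, 101)  # prints :inserted
p enroll(enrollments, 1, 102)  # prints :inserted  (same student, different course)
p enroll(enrollments, 2, 101)  # prints :inserted  (same course, different student)
p enroll(enrollments, 1, 101)  # prints :duplicate_rejected (pair already exists)
```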

📌 Why Use Composite Keys?

When to Use                         | Why
------------------------------------+---------------------------------------
Tracking many-to-many relationships | Ensures unique pairs
Bridging/junction tables            | e.g., students-courses, authors-books
No natural single-column key        | But the combination is unique

⚠️ Things to Keep in Mind

  • Composite keys enforce uniqueness across multiple columns.
  • They can also be used as foreign keys in other tables.
  • Some developers prefer to add an auto-increment id as the primary key instead—but that’s a design choice.

🔎 What Is a Candidate Key?

A candidate key is any column (or combination of columns) in a table that can uniquely identify each row.

  • Every table can have multiple candidate keys
  • One of them is chosen to be the primary key
  • The rest are called alternate keys

🔑 Think of candidate keys as “potential primary keys”

👉 Example: Users Table

CREATE TABLE Users (
  user_id INT,
  username VARCHAR(80),
  email VARCHAR(150),
  phone_number VARCHAR(30)
);

Let’s get some hands-on experience with SQL queries by creating a TEST DB. Check https://railsdrop.com/2025/04/25/rails-8-app-part-13-2-test-sql-queries/

Assume:

  • user_id is unique
  • username is unique
  • email is unique

Candidate Keys:
  • user_id
  • username
  • email

You can choose any one of them as the primary key, depending on your design needs.

-- Choosing user_id as the primary key
PRIMARY KEY (user_id)

The rest (username, email) are alternate keys.

📌 Characteristics of Candidate Keys

Property      | Description
--------------+--------------------------------------------------------------------
Uniqueness    | Must uniquely identify each row
Non-null      | Cannot contain NULL values
Minimality    | Must be the smallest set of columns that uniquely identifies a row
No duplicates | No two rows have the same value(s)

👥 Candidate Key vs Composite Key

Concept       | Explanation
--------------+----------------------------------------------------
Candidate Key | Any unique identifier (single or multiple columns)
Composite Key | A candidate key that uses multiple columns

So: All composite keys are candidate keys, but not all candidate keys are composite.

💡 When Designing a Database

  • Find all possible candidate keys
  • Choose one as the primary key
  • (Optional) Define other candidate keys as unique constraints

CREATE TABLE Users (
  user_id INT PRIMARY KEY,
  username VARCHAR UNIQUE,
  email VARCHAR UNIQUE
);


Let’s walk through a real-world example using a schema we are already working on: a shopping app that sells clothing for women, men, kids, and infants.

We’ll look at how candidate keys apply to real tables like Users, Products, Orders, etc.

🛍️ Example Schema: Shopping App

1. Users Table

CREATE TABLE Users (
  user_id SERIAL PRIMARY KEY,
  email VARCHAR UNIQUE,
  username VARCHAR UNIQUE,
  phone_number VARCHAR
);

Candidate Keys:

  • user_id
  • email
  • username

We chose user_id as the primary key, but both email and username could also uniquely identify a user — so they’re candidate keys.


2. Products Table

CREATE TABLE Products (
  product_id SERIAL PRIMARY KEY,
  sku VARCHAR UNIQUE,
  name VARCHAR,
  category VARCHAR
);

Candidate Keys:

  • product_id
  • sku ✅ (Stock Keeping Unit – a unique identifier for each product)

sku is a candidate key. We use product_id as the primary key, but you could use sku if you wanted a natural key instead.

3. Orders Table

CREATE TABLE Orders (
  order_id SERIAL PRIMARY KEY,
  user_id INT REFERENCES Users(user_id),
  order_number VARCHAR UNIQUE,
  created_at TIMESTAMP
);

Candidate Keys:

  • order_id
  • order_number

You might use order_number (e.g., "ORD-20250417-0012") for external reference and order_id internally. Both are unique identifiers = candidate keys.

4. OrderItems Table (Join Table)

This table links orders to the specific products and quantities purchased.

CREATE TABLE OrderItems (
  order_id INT,
  product_id INT,
  quantity INT,
  PRIMARY KEY (order_id, product_id),
  FOREIGN KEY (order_id) REFERENCES Orders(order_id),
  FOREIGN KEY (product_id) REFERENCES Products(product_id)
);

Candidate Key:

  • Composite key: (order_id, product_id)

Here, a combination of order_id and product_id uniquely identifies a row — i.e., what product was ordered in which order — making it a composite candidate key, and we’ve selected it as the primary key.

👀 Summary of Candidate Keys by Table

  • Users: user_id, email, username → primary key: user_id
  • Products: product_id, sku → primary key: product_id
  • Orders: order_id, order_number → primary key: order_id
  • OrderItems: (order_id, product_id) → primary key: (order_id, product_id)

Let’s explore how to implement candidate keys in both SQL and Rails (Active Record). Since we are working on a shopping app in Rails 8, I’ll show how to enforce uniqueness and data integrity in both layers:

🔹 1. Candidate Keys in SQL (PostgreSQL Example)

Let’s take the Users table with multiple candidate keys (email, username, and user_id).

CREATE TABLE users (
  user_id SERIAL PRIMARY KEY,
  email VARCHAR(255) NOT NULL UNIQUE,
  username VARCHAR(100) NOT NULL UNIQUE,
  phone_number VARCHAR(20)
);

  • user_id: chosen as the primary key
  • email and username: candidate keys, enforced via UNIQUE constraints

💎 Composite Key Example (OrderItems)

CREATE TABLE order_items (
  order_id INT,
  product_id INT,
  quantity INT NOT NULL,
  PRIMARY KEY (order_id, product_id),
  FOREIGN KEY (order_id) REFERENCES orders(order_id),
  FOREIGN KEY (product_id) REFERENCES products(product_id)
);

This sets (order_id, product_id) as a composite candidate key and primary key.

🔸 2. Candidate Keys in Rails (ActiveRecord)

Now let’s do the same with Rails models + migrations + validations.

users Migration (with candidate keys)

# db/migrate/xxxxxx_create_users.rb
class CreateUsers < ActiveRecord::Migration[8.0]
  def change
    create_table :users do |t|
      t.string :email, null: false
      t.string :username, null: false
      t.string :phone_number

      t.timestamps
    end

    add_index :users, :email, unique: true
    add_index :users, :username, unique: true
  end
end

User Model

class User < ApplicationRecord
  validates :email, presence: true, uniqueness: true
  validates :username, presence: true, uniqueness: true
end

✅ These are candidate keys: email and username could each serve as the primary key, but we use the surrogate id instead.

✅ Composite Key with OrderItem (Join Table)

Active Record historically did not support composite primary keys (Rails 7.1 added native support), but a simple, portable approach is to enforce uniqueness via a multi-column index:

Migration:

class CreateOrderItems < ActiveRecord::Migration[8.0]
  def change
    create_table :order_items, id: false do |t|
      t.references :order, null: false, foreign_key: true
      t.references :product, null: false, foreign_key: true
      t.integer :quantity, null: false

      t.timestamps
    end

    add_index :order_items, [:order_id, :product_id], unique: true
  end
end

Model:

class OrderItem < ApplicationRecord
  belongs_to :order
  belongs_to :product

  validates :quantity, presence: true
  validates :order_id, uniqueness: { scope: :product_id }
end

🎯 This simulates a composite key behavior: each product can only appear once per order.
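
The same rule can be sketched in plain Ruby with a Set of (order_id, product_id) pairs, which is essentially what the unique index enforces:

```ruby
require "set"

# Plain-Ruby sketch of the composite-key rule: the pair
# (order_id, product_id) must be unique across all order items.
seen = Set.new

def add_item(seen, order_id, product_id)
  # Set#add? returns nil when the element is already present
  !seen.add?([order_id, product_id]).nil?
end

puts add_item(seen, 1, 100)  # true  – first occurrence of this pair
puts add_item(seen, 1, 101)  # true  – same order, different product
puts add_item(seen, 1, 100)  # false – duplicate pair rejected
```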

➕ Extra: Use composite_primary_keys Gem (Optional)

If you are on a Rails version older than 7.1 and need true composite primary keys, the composite_primary_keys gem provides them:

gem 'composite_primary_keys'

But it’s best to avoid unless your use case demands it — most Rails apps use a surrogate key (id) for simplicity.


to be continued.. 🚀

Setup 🛠 Rails 8 App – Part 12: Modify Product Schema – Apply Normalization

Right now we have following fields in Product Table:

create_table "products", force: :cascade do |t|
    t.string "title", null: false
    t.text "description"
    t.string "category"
    t.string "color"
    t.string "size", limit: 10
    t.decimal "mrp", precision: 7, scale: 2
    t.decimal "discount", precision: 7, scale: 2
    t.decimal "rating", precision: 2, scale: 1
    t.datetime "created_at", null: false
    t.datetime "updated_at", null: false
  end

If you examine the table above, the data quickly becomes repetitive: a single item in one size can come in many colours, so we would need a separate product row for every size/colour combination.

So let’s split the table into two.

1. Product Table

class CreateProducts < ActiveRecord::Migration[8.0]
  def change
    create_table :products do |t|
      t.string  :name
      t.text    :description
      t.string  :category   # women, men, kids, infants
      t.decimal :rating, precision: 2, scale: 1, default: 0.0

      t.timestamps
    end

    add_index :products, :category
  end
end

2. Product Variant Table

class CreateProductVariants < ActiveRecord::Migration[8.0]
  def change
    create_table :product_variants do |t|
      t.references :product, null: false, foreign_key: true
      t.string  :sku, null: false
      t.decimal :price, precision: 10, scale: 2
      t.string  :size
      t.string  :color
      t.integer :stock_quantity, default: 0
      t.jsonb   :specs, default: {}, null: false

      t.timestamps
    end

    # GIN index for fast JSONB attribute searching
    add_index :product_variants, :specs, using: :gin
    add_index :product_variants, [ :product_id, :size, :color ], unique: true
    add_index :product_variants, :sku, unique: true
  end
end

Data normalization is a core concept in database design that helps organize data efficiently, eliminate redundancy, and ensure data integrity.


🔍 What Is Data Normalization?

Normalization is the process of structuring a relational database in a way that:

  • Reduces data redundancy (no repeated data)
  • Prevents anomalies in insert, update, or delete operations
  • Improves data integrity

It breaks down large, complex tables into smaller, related tables and defines relationships using foreign keys.

🧐 Why Normalize?

  • Duplicate data everywhere → repeated data is moved into separate tables
  • Inconsistent values → rules and relationships are enforced
  • Hard to update data → each concept is isolated and updated once
  • Wasted storage → data repetition is reduced

📚 Normal Forms (NF)

Each Normal Form (NF) represents a level of database normalization. The most common are:

🔸 1NF – First Normal Form

  • Eliminate repeating groups
  • Ensure each column has atomic (indivisible) values

Bad Example:

CREATE TABLE Orders (
  order_id INT,
  customer_name VARCHAR,
  items TEXT  -- "Shirt, Pants, Hat"
);

Fixed (1NF):

CREATE TABLE OrderItems (
  order_id INT,
  item_name VARCHAR
);
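
The transformation can be sketched in plain Ruby: the comma-separated items string becomes one atomic row per item (sample data is illustrative).

```ruby
# Sketch: converting the non-1NF "items" string into atomic rows.
order = { order_id: 1, customer_name: "Priya", items: "Shirt, Pants, Hat" }

order_items = order[:items].split(", ").map do |item|
  { order_id: order[:order_id], item_name: item }
end

order_items.each { |row| p row }
# One row per item: {:order_id=>1, :item_name=>"Shirt"}, and so on
```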

🔸 2NF – Second Normal Form

  • Must be in 1NF
  • Remove partial dependencies (when a non-key column depends only on part of a composite key)

Example: If a table has a composite key (student_id, course_id), and student_name depends only on student_id, it should go into a separate table.

🔸 3NF – Third Normal Form

  • Must be in 2NF
  • Remove transitive dependencies (non-key columns depending on other non-key columns)

Example:

CREATE TABLE Employees (
  emp_id INT PRIMARY KEY,
  emp_name VARCHAR,
  dept_name VARCHAR,
  dept_location VARCHAR
);

Here, dept_location depends on dept_name, not emp_id — so split it:

Normalized:

CREATE TABLE Departments (
  dept_name VARCHAR PRIMARY KEY,
  dept_location VARCHAR
);
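
With the split, dept_location lives in exactly one row, so relocating a department is a single update. A plain-Ruby sketch with illustrative data:

```ruby
# Sketch: after 3NF, dept_location is stored once in Departments, so
# a relocation is a single update instead of one per employee row.
departments = { "Engineering" => "Berlin", "Sales" => "Pune" }
employees = [
  { emp_id: 1, emp_name: "Asha", dept_name: "Engineering" },
  { emp_id: 2, emp_name: "Ravi", dept_name: "Engineering" }
]

# Look up the location through the department (no duplication)
location_of = ->(emp) { departments[emp[:dept_name]] }
puts location_of.call(employees[0])  # "Berlin"

# Relocate the whole department with one update
departments["Engineering"] = "Munich"
puts location_of.call(employees[1])  # "Munich"
```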

💡 Real-World Example

Let’s say you have this table:

Orders(order_id, customer_name, customer_email, product1, product2, product3)

Problems:

  • Repeating columns (product1, product2, …)
  • Redundant customer data in each order

Normalized Version:

  1. Customers(customer_id, name, email)
  2. Orders(order_id, customer_id)
  3. OrderItems(order_id, product_id)
  4. Products(product_id, name, price)

⚖️ Normalization vs. Denormalization

  • Normalization = Good for consistency, long-term maintenance
  • ⚠️ Denormalization = Good for performance in read-heavy systems (like reporting dashboards)

Use normalization as a default practice, then selectively denormalize if performance requires it.


Delete button example (Rails 7+)

<%= link_to "Delete Product",
              @product,
              data: { turbo_method: :delete, turbo_confirm: "Are you sure you want to delete this product?" },
              class: "inline-block px-4 py-2 bg-red-100 text-red-600 border border-red-300 rounded-md hover:bg-red-600 hover:text-white font-semibold transition duration-300 transform hover:scale-105" %>

💡 What’s Improved:

  • data: { turbo_confirm: ... } ensures compatibility with Turbo (Rails 7+).
  • Better button-like appearance (bg, px, py, rounded, etc.).
  • Hover effects and transitions for a smooth UI experience.

Add Brand to products table

Let’s add a brand column to the products table:

rails g migration add_brand_to_products brand:string:index
class AddBrandToProducts < ActiveRecord::Migration[8.0]
  def change
    # Add 'brand' column
    add_column :products, :brand, :string

    # Add index for brand
    add_index :products, :brand
  end
end

❗️Important Note:

PostgreSQL does not support BEFORE or AFTER when adding a column.

Caused by:
PG::SyntaxError: ERROR:  syntax error at or near "BEFORE" (PG::SyntaxError)
LINE 1: ...LTER TABLE products ADD COLUMN brand VARCHAR(255) BEFORE des...
  • PostgreSQL (the Rails default) does not support column positioning; columns are always returned in the order they were created.
  • If you are using MySQL, you can use raw SQL for positioning, as shown below.

If you are using MySQL and want brand to appear as the first column of the products table, you can change the migration to:

class AddBrandToProducts < ActiveRecord::Migration[8.0]
  def up
    # MySQL supports FIRST (or AFTER another_column) for positioning
    execute "ALTER TABLE products ADD COLUMN brand VARCHAR(255) FIRST;"
    add_index :products, :brand
  end

  def down
    remove_index :products, :brand
    remove_column :products, :brand
  end
end

Reverting Previous Migrations

You can use Active Record’s ability to rollback migrations using the revert method:

require_relative "20121212123456_example_migration"

class FixupExampleMigration < ActiveRecord::Migration[8.0]
  def change
    revert ExampleMigration

    create_table(:apples) do |t|
      t.string :variety
    end
  end
end

The revert method also accepts a block of instructions to reverse. This could be useful to revert selected parts of previous migrations.

Reference: https://guides.rubyonrails.org/active_record_migrations.html#reverting-previous-migrations

Product Index Page after applying NF:
Product Show Page after applying NF:
New Product Page after applying NF:

to be continued.. 🚀