Classic Performance Debugging Problems in Rails Apps 🔬 — Part 3: Advanced Techniques: Query Plans, Indexing, Profiling & Production Diagnostics

🧭 Overview — what we’ll cover

  • How to read and act on EXPLAIN ANALYZE output (Postgres) — with exact commands and examples.
  • Index strategy: b-tree, composite, INCLUDE, covering indexes, partials, GIN/GIN_TRGM where relevant.
  • Practical before/after for the Flipper join query.
  • Database-level tooling: pg_stat_statements, slow query logging, ANALYZE, vacuum, stats targets.
  • Advanced Rails-side profiling: CPU sampling (rbspy), Ruby-level profilers (stackprof, ruby-prof), flamegraphs, allocation profiling.
  • Memory profiling & leak hunting: derailed_benchmarks, memory_profiler, allocation tracing.
  • Production-safe profiling and APMs: Skylight, New Relic, Datadog, and guidelines for low-risk sampling.
  • Other advanced optimizations: connection pool sizing, backgrounding heavy work, keyset pagination, materialized views, denormalization, and caching patterns.
  • A checklist & playbook you can run when a high-traffic route is slow.

1) Deep dive: EXPLAIN ANALYZE (Postgres)

Why use it

`EXPLAIN` shows the planner’s chosen plan. `EXPLAIN ANALYZE` runs the query and shows *actual* times and row counts. This is the single most powerful tool for understanding why a query is slow.

Run it from psql

EXPLAIN ANALYZE
SELECT flipper_features.key AS feature_key,
       flipper_gates.key,
       flipper_gates.value
FROM flipper_features
LEFT OUTER JOIN flipper_gates
  ON flipper_features.key = flipper_gates.feature_key;

Or add verbosity, buffers and JSON output:

EXPLAIN (ANALYZE, BUFFERS, VERBOSE, FORMAT JSON)
SELECT ...;

Then pipe JSON to jq for readability:

psql -c "EXPLAIN (ANALYZE, BUFFERS, FORMAT JSON) SELECT ..." | jq .

Run it from Rails console

res = ActiveRecord::Base.connection.execute(<<~SQL)
  EXPLAIN ANALYZE SELECT ...
SQL
puts res.values.flatten.join("\n")

`res.values.flatten` will give the lines of the textual plan.

How to read the plan (key fields)

A typical node line:

Nested Loop (cost=0.00..123.45 rows=100 width=48) (actual time=0.123..5.678 rows=100 loops=1)

  • **Plan node**: e.g., Seq Scan, Index Scan, Nested Loop, Hash Join, Merge Join.
  • **cost=** planner estimates (startup..total). Not actual time.
  • **actual time=** real measured times: start..end. The end value for the top node is the total time.
  • **rows=** estimated rows; the actual row count appears in the `actual time` block. If estimates are very different from actuals → bad statistics or wrong assumptions.
  • **loops=** how many times the node ran (outer loop counts). Multiply loops × actual time to know the total work.
  • **Buffers** (if `BUFFERS` requested) show disk vs shared-buffer I/O — important for I/O-bound queries.

Interpretation checklist

  • Is Postgres doing a `Seq Scan` on a table that should use an index? → candidate for an index.
  • Are `actual rows` much larger than `estimated rows`? → statistics outdated (`ANALYZE`) or stats target insufficient.
  • Is the planner using a `Nested Loop` with a large inner table and many outer loops? → might need a different join strategy, indexes to support index scans, or a query rewrite.
  • High `Buffers` read from disk → cold cache or I/O pressure. Consider tuning or adding indexes to reduce full scans, or faster disks/I/O.


2) Indexing strategies — practical rules

B-tree indexes (default)

  • Good for equality (`=`) and range (`<`, `>`) queries and joins on scalar columns.
  • Add a single-column index when you join on that column often.

Migration example:

class AddIndexToFlipperGatesFeatureKey < ActiveRecord::Migration[7.0]
  def change
    add_index :flipper_gates, :feature_key, name: 'index_flipper_gates_on_feature_key'
  end
end

Composite index

  • Useful when WHERE or JOIN uses multiple columns together, in order.
  • The left-most prefix rule: an index on `(a,b,c)` supports lookups on `a`, `a,b`, and `a,b,c` — not `b` alone.

`INCLUDE` for covering indexes (Postgres)

  • Use `INCLUDE` to add non-key columns to the index payload so the planner can do an index-only scan.

add_index :orders, [:user_id, :created_at], include: [:total_amount]

This avoids a heap lookup for the included columns.

Partial indexes

  • Index only a subset of rows where a condition often matches:

add_index :users, :email, unique: true, where: "email IS NOT NULL" 

GIN / GIST indexes

– For full-text search or array/JSONB: use GIN (or trigram GIN for `ILIKE` fuzzy matches).

– Example: `CREATE INDEX ON table USING GIN (jsonb_col);`

Index maintenance

  • Run `ANALYZE` after large data loads to keep statistics fresh.
  • Consider `REINDEX` if index bloat occurs.
  • Use `pg_stat_user_indexes` to check index usage.
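A quick way to internalize the left-most prefix rule is a tiny plain-Ruby sketch. Note this is a mental model only; `prefix_usable?` is made up for this post, and the real planner is far more nuanced (it also handles ranges, sorts, and partial matches):

```ruby
# Mental model: a composite index can serve an equality lookup only when
# the queried columns (taken in index order) form a prefix of the index's
# column list.
def prefix_usable?(index_columns, query_columns)
  index_columns.first(query_columns.length) == query_columns
end

index = [:a, :b, :c]
prefix_usable?(index, [:a])        # => true
prefix_usable?(index, [:a, :b])    # => true
prefix_usable?(index, [:b])        # => false (not a left-most prefix)
```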


3) Example: Flipper join query — BEFORE & AFTER

Problem query (recap)

SELECT flipper_features.key AS feature_key, flipper_gates.key, flipper_gates.value
FROM flipper_features
LEFT OUTER JOIN flipper_gates ON flipper_features.key = flipper_gates.feature_key;

This was running repeatedly, and slowly (60–200ms), across many requests.

Diagnosis

  • The `flipper_gates` table had a composite index `(feature_key, key, value)`. Because the join only used `feature_key`, Postgres sometimes didn’t pick the composite index effectively, or the planner preferred a seq scan due to small table size or outdated stats.
  • Repetition (many calls to `Flipper.enabled?`) magnified the cost.

Fix 1 — Add a direct index on `feature_key`

Migration:

class AddIndexFlipperGatesOnFeatureKey < ActiveRecord::Migration[7.0]
  def change
    add_index :flipper_gates, :feature_key, name: 'index_flipper_gates_on_feature_key'
  end
end

Fix 2 — Optionally make it a covering index (if you select `key, value` often)

add_index :flipper_gates, :feature_key,
          name: 'index_flipper_gates_on_feature_key_include',
          using: :btree,
          include: [:key, :value]

This lets Postgres perform an index-only scan without touching the heap for `key` and `value`.

EXPLAIN ANALYZE before vs after (expected)

BEFORE (hypothetical):

Nested Loop
  -> Seq Scan on flipper_features (cost=...)
  -> Seq Scan on flipper_gates (cost=...)  <-- heavy
Actual Total Time: 120ms

AFTER:

Nested Loop
  -> Seq Scan on flipper_features (small)
  -> Index Scan using index_flipper_gates_on_feature_key on flipper_gates (cost=... actual time=0.2ms)
Actual Total Time: 1.5ms

Add EXPLAIN ANALYZE to your pipeline and confirm the plan uses Index Scan rather than Seq Scan.

Important note

On tiny tables Postgres may still choose a Seq Scan (it’s cheap), but when the query runs many times per request, even small scans add up. An index ensures stable, predictable behaviour as usage grows.


4) Database-level tools & monitoring

`pg_stat_statements` (must be enabled)

Aggregates query statistics (calls, total time) across the whole DB — great for finding heavy queries. Query example:

SELECT query, calls, total_time, mean_time
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 20;

This points to the most expensive queries over time (not just a single slow execution).

Slow query logging

Enable `log_min_duration_statement` in `postgresql.conf` (e.g., 200ms) to log slow queries. Then analyze logs with `pgbadger` or `pg_activity`.

`ANALYZE`, `VACUUM`

  • `ANALYZE` updates table statistics, which helps the planner choose better plans. Run it after bulk loads.
  • `VACUUM` frees up space and maintains the visibility map; `VACUUM FULL` locks the table — use carefully.

Lock and activity checks

See long-running queries and blocking:

SELECT pid, query, state, age(now(), query_start) AS runtime
FROM pg_stat_activity
WHERE state <> 'idle'
  AND now() - query_start > interval '5 seconds';


5) Ruby / Rails advanced profiling

You already use rack-mini-profiler. For CPU & allocation deep dives, combine sampling profilers and Ruby-level profilers.

Sampling profilers (production-safe-ish)

rbspy (native sampling for Ruby processes) — low overhead, no code changes:

# Attach to a running process (record writes a flamegraph SVG by default):
rbspy record --pid <PID> --output flame.svg

# Or launch and profile a new process:
rbspy record ruby bin/rails server

rbspy collects native stack samples and generates a flamegraph. Good for CPU hotspots in production.

rbspy notes

  • Does not modify code; low overhead.
  • Requires installing rbspy on the host.

stackprof + flamegraph (Ruby-level)

Add to Gemfile (in safe envs):

gem 'stackprof'
gem 'flamegraph'

Run a block you want to profile:

require 'stackprof'

StackProf.run(mode: :wall, out: 'tmp/stackprof.dump', raw: true) do
  # run code you want to profile (a request, a job, etc)
end

# To read the dump, from a shell:
#   stackprof tmp/stackprof.dump --text

# Or generate a flamegraph with the flamegraph gem:
require 'flamegraph'
Flamegraph.generate('tmp/fg.svg') { your_code_here }

ruby-prof (detailed callgraphs)

Much higher overhead; generates call-graphs. Use in QA or staging, not production.

require 'ruby-prof'

RubyProf.start
# run code
result = RubyProf.stop

printer = RubyProf::GraphHtmlPrinter.new(result)
printer.print(File.open("tmp/ruby_prof.html", "w"), {})

Allocation profiling

Use the `derailed_benchmarks` gem for bundle bloat and memory allocations:

bundle exec derailed bundle:mem
bundle exec derailed exec perf:objects   # or perf:mem

The `memory_profiler` gem gives detailed allocations:

require 'memory_profiler'

report = MemoryProfiler.report { run_code }
report.pretty_print(to_file: 'tmp/memory_report.txt')

Flamegraphs for request lifecycles

You can capture a request lifecycle and render a flamegraph using stackprof or rbspy, then open SVG.


6) Memory & leak investigations

Symptoms

  • Memory grows over time in production processes.
  • Frequent GC pauses.
  • OOM kills.

Tools

  • `derailed_benchmarks` (hotspots and gem bloat).
  • `memory_profiler` for allocation snapshots (see above).
  • The built-in `objspace` inspector (`ObjectSpace.each_object(Class)` helps count objects).
  • Heap dumps with `rbtrace` or `memory_profiler` for object graphs.

Common causes & fixes

  • Caching big objects in-process (use Redis instead).
  • Retaining references in global arrays or singletons.
  • Large temporary arrays in the request lifecycle — memoize or stream responses.

Example patterns to avoid

  • Avoid storing large AR model sets in global constants.
  • Use `find_each` to iterate large result sets.
  • Use streaming responses for very large JSON/XML.
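The `ObjectSpace` counting mentioned under Tools can be turned into a quick heap census. A minimal plain-Ruby sketch (my own helper, not a library API); comparing two censuses before and after exercising a suspect code path highlights what is being retained:

```ruby
# Quick heap census: count live objects per class, largest first.
def top_object_counts(limit = 5)
  counts = Hash.new(0)
  # each_object walks every live heap object; a few internal objects
  # may not respond to #class, hence the rescue.
  ObjectSpace.each_object { |obj| (counts[obj.class] += 1) rescue nil }
  counts.sort_by { |_, n| -n }.first(limit)
end

top_object_counts.each { |klass, n| puts "#{klass}: #{n}" }
```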


7) Production profiling — safe practices & APMs

APMs

  • **Skylight / New Relic / Datadog / Scout** — they give per-endpoint timings, slow traces, and SQL breakdowns in production with low overhead. Use them to find hotspots without heavy manual profiling.

Sampling vs continuous profiling

  • Use *sampling* profilers (rbspy, production profilers) in short windows to avoid high overhead.
  • Continuous APM tracing (like New Relic) integrates naturally and is production-friendly.

Instrument carefully

  • Only enable heavy profiling when you have a plan; capture for short durations.
  • Prefer off-peak hours or blue/green deployments to avoid affecting users.


8) Other advanced DB & Rails optimizations

Connection pool tuning

  • Puma workers & threads must match the DB pool size. Example `database.yml`:

production:
  pool: <%= ENV.fetch("DB_POOL", 5) %>

  • If Puma threads > DB pool, requests will block waiting for a DB connection — this can appear as slow requests.

Background jobs

  • Anything non-critical to request latency (e.g., sending emails, analytics, resizing images) should be moved to background jobs (Sidekiq, ActiveJob).
  • Synchronous mailers or external API calls are common causes of slow requests.

Keyset pagination (avoid OFFSET)

  • For large result sets, use keyset pagination:

SELECT * FROM posts
WHERE (created_at, id) < (?, ?)
ORDER BY created_at DESC, id DESC
LIMIT 20;

This is far faster than `OFFSET` for deep pages.

Materialized views for heavy aggregations

  • Pre-compute heavy joins/aggregates into materialized views and refresh them periodically or via triggers.

Denormalization & caching

  • Counter caches: store counts in a column and update via callbacks to avoid COUNT(*) queries.
  • Cache pre-rendered fragments or computed JSON blobs for heavy pages (with care about invalidation).
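The keyset predicate compares `(created_at, id)` row tuples. Here is a plain-Ruby sketch of that same tuple comparison over an in-memory array (illustrative only; `Post` and `next_page` are made up for this sketch, and in Rails you would express this in SQL or via a gem):

```ruby
# Keyset pagination over an in-memory array, mirroring:
#   WHERE (created_at, id) < (?, ?) ORDER BY created_at DESC, id DESC LIMIT n
# Ruby arrays compare element-wise via <=>, like SQL row values.
Post = Struct.new(:id, :created_at)

def next_page(posts, cursor_created_at, cursor_id, limit = 2)
  cursor = [cursor_created_at, cursor_id]
  posts
    .select { |p| ([p.created_at, p.id] <=> cursor) == -1 } # strictly "before" the cursor
    .sort_by { |p| [p.created_at, p.id] }
    .reverse # newest first
    .first(limit)
end

posts = [Post.new(1, 100), Post.new(2, 200), Post.new(3, 300), Post.new(4, 300)]
next_page(posts, 300, 4).map(&:id) # => [3, 2]
next_page(posts, 200, 2).map(&:id) # => [1]
```

Each page's last row becomes the cursor for the next page, so the database never has to count and discard skipped rows the way `OFFSET` does.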


9) Serialization & JSON performance

Problems

  • Serializing huge AR objects or many associations can be expensive.

Solutions

  • Use serializers that only include necessary fields: `jsonapi-serializer` (formerly `fast_jsonapi`) or JBuilder with a simple `as_json(only: …)`.
  • Return minimal payloads and paginate.
  • Use `pluck` when you only need a few columns.


10) Playbook: step-by-step when a route is slow (quick reference)

  1. Reproduce the slow request locally or in staging if possible.
  2. Tail the logs (tail -f log/production.log) and check SQL statements and controller timings.
  3. Run EXPLAIN (ANALYZE, BUFFERS) for suspect queries.
  4. If Seq Scan appears where you expect an index, add or adjust indexes. Run ANALYZE.
  5. Check for N+1 queries with Bullet or rack-mini-profiler and fix with includes.
  6. If many repeated small DB queries (Flipper-like), add caching (Redis or adapter-specific cache) or preloading once per request.
  7. If CPU-bound, collect a sampling profile (rbspy) for 30–60s and generate a flamegraph — find hot Ruby methods. Use stackprof for deeper dive.
  8. If memory-bound, run memory_profiler or derailed, find object retainers.
  9. If urgent and unknown, turn on APM traces for a short window to capture slow traces in production.
  10. After changes, run load test (k6, wrk) if at scale, and monitor pg_stat_statements to confirm improvement.
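Step 6's "preload once per request" idea looks like this in miniature. `FlagStore` and `RequestFlags` are hypothetical stand-ins for this sketch, not Flipper's API; Flipper itself ships a memoizing Rack middleware that implements this pattern:

```ruby
# FlagStore stands in for a DB-backed adapter; RequestFlags answers every
# enabled? check from a single fetch, instead of one query per check.
class FlagStore
  attr_reader :queries

  def initialize(flags)
    @flags = flags
    @queries = 0
  end

  def fetch_all
    @queries += 1 # each call here would be a DB round-trip in real life
    @flags.dup
  end
end

class RequestFlags
  def initialize(store)
    @store = store
  end

  def enabled?(key)
    @cache ||= @store.fetch_all # first check loads everything; later checks are free
    @cache.fetch(key, false)
  end
end

store = FlagStore.new('new_ui' => true, 'beta' => false)
flags = RequestFlags.new(store)
flags.enabled?('new_ui') # => true
flags.enabled?('beta')   # => false
store.queries            # => 1, not one query per check
```

The trade-off: flags are frozen for the lifetime of the memoizing object, which is exactly what you want for one request but not for a long-lived process.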

11) Example commands and snippets (cheat-sheet)

EXPLAIN ANALYZE psql

psql -d mydb -c "EXPLAIN (ANALYZE, BUFFERS, FORMAT JSON) SELECT ...;" | jq .

EXPLAIN from Rails console

res = ActiveRecord::Base.connection.execute("EXPLAIN ANALYZE SELECT ...")
puts res.values.flatten.join("\n")

Add index migration

class AddIndexFlipperGatesOnFeatureKey < ActiveRecord::Migration[7.0]
  def change
    add_index :flipper_gates, :feature_key, name: 'index_flipper_gates_on_feature_key'
  end
end

ANALYZE

ANALYZE flipper_gates;
ANALYZE flipper_features;

pg_stat_statements

SELECT query, calls, total_time, mean_time
FROM pg_stat_statements
ORDER BY total_time DESC LIMIT 10;

rbspy recording (system-level sampling)

rbspy record --pid <PID> --format raw --output /tmp/rbspy.raw.gz
rbspy report --format flamegraph --input /tmp/rbspy.raw.gz --output /tmp/flame.svg

stackprof example

StackProf.run(mode: :cpu, out: 'tmp/stackprof.dump') do
  # exercise the app code, e.g. issue a request
end
stackprof tmp/stackprof.dump --text

memory_profiler

report = MemoryProfiler.report { run_heavy_code }
report.pretty_print
report.write_html('/tmp/memory_report.html') if report.respond_to?(:write_html)

Check DB connections

SELECT count(*) FROM pg_stat_activity WHERE state='active';


12) Final notes & best practices

  • Measure before you change. Always capture a baseline (EXPLAIN ANALYZE, APM traces).
  • Small fixes add up — one index + reducing a few N+1s often reduces response times dramatically.
  • Cache with thought — caching solves many repeated-read problems (Flipper example), but invalidation must be correct. Use namespaced keys and hook invalidation to mutations.
  • Use the right tool for the job: sampling profilers for CPU hotspots; EXPLAIN ANALYZE for DB; memory tools for leaks.
  • Automate monitoring: pg_stat_statements, APMs, dashboards, and alerts let you spot regressions early.

Possible follow-ups to this part:

  • A ready-to-run EXPLAIN ANALYZE before/after script for the Flipper query and the index migration,
  • The exact commands to run in staging to capture a stackprof or rbspy flamegraph for a slow request, or
  • A one-page playbook you can paste into a team wiki for on-call performance steps.

Guide: Integrating React ⚛️ into a Rails 8 Application – Part 3 | Start developing react

Let’s move on to quickly building more React components. Before that, let’s review what we have so far and understand it clearly.

📄 File 1:

Our app/javascript/components/App.jsx file:

import React from 'react';

function App() {
  return (
    <div>
      <h1>React is working fine!</h1>
      <p>Welcome to Rails + React App</p>
    </div>
  );
}

export default App;

Let’s examine this React component step by step:

Line 1: Import React

import React from 'react';
  • import – ES6 module syntax to bring in external code
  • React – The main React library
  • from 'react' – Importing from the npm package named “react”
  • Why needed? Even though we use --jsx=automatic, we still import React for any hooks or React features we might use.

Function Component: Line 3-9

A React function component is a simple JavaScript function that serves as a building block for user interfaces in React applications. These components are designed to be reusable and self-contained, encapsulating a specific part of the UI and its associated logic.

function App() {
  return (
    <div>
      <h1>React is working fine!</h1>
      <p>Welcome to Rails + React App</p>
    </div>
  );
}

🔍 Breaking this down:

Line 3: Component Declaration

function App() {
  • function App() – This is a React Function Component
  • Component naming – Must start with capital letter (App, not app)
  • What it is – A JavaScript function that returns JSX (user interface)

Line 4-8: JSX Return

return (
  <div>
    <h1>React is working fine!</h1>
    <p>Welcome to Rails + React App</p>
  </div>
);
  • return – Every React component must return something
  • JSX – Looks like HTML, but it’s actually JavaScript
  • <div> – Must have one parent element (React Fragment rule)
  • <h1> & <p> – Regular HTML elements, but processed by React

Line 11: Export

export default App;
  • export default – ES6 syntax to make this component available to other files
  • App – The component name we’re exporting
  • Why needed? So application.js can import and use this component

📄 File 2:

Our app/javascript/application.js file:

// Entry point for the build script in your package.json
import React from 'react';
import { createRoot } from 'react-dom/client';
import App from './components/App';

document.addEventListener('DOMContentLoaded', () => {
  const container = document.getElementById('react-root');

  if(container) {
    const root = createRoot(container);
    root.render(<App />);
  }
});

This is the entry point that connects React to your Rails app:

    Imports: Line 2-4

    import React from 'react';
    import { createRoot } from 'react-dom/client';
    import App from './components/App';
    

    🔍 Breaking down each import:

    Line 2:

    import React from 'react';
    
    • Same as before – importing the React library

    Line 3:

    import { createRoot } from 'react-dom/client';
    
    • { createRoot } – Named import (notice the curly braces)
    • react-dom/client – ReactDOM library for browser/DOM manipulation
    • createRoot – New React 18+ API for rendering components to DOM

    Line 4:

    import App from './components/App';
    
    • App – Default import (no curly braces)
    • ./components/App – Relative path to our App component
    • Note: We don’t need .jsx extension, esbuild figures it out

    DOM Integration: Line 6-12

    document.addEventListener('DOMContentLoaded', () => {
      const container = document.getElementById('react-root');
    
      if(container) {
        const root = createRoot(container);
        root.render(<App />);
      }
    });
    

    🔍 Step by step breakdown:

    Line 6:

    document.addEventListener('DOMContentLoaded', () => {
    
    • document.addEventListener – Standard browser API
    • 'DOMContentLoaded' – Wait until HTML is fully loaded
    • () => { – Arrow function (ES6 syntax)
    • Why needed? Ensures the HTML exists before React tries to find elements

    Line 7:

    const container = document.getElementById('react-root');
    
    • const container – Create a variable to hold the DOM element
    • document.getElementById('react-root') – Find HTML element with id="react-root"
    • Where is it? In your Rails view file: app/views/home/index.html.erb

    Line 9:

    if(container) {
    
    • Safety check – Only proceed if the element exists
    • Prevents errors – If someone visits a page without react-root element

    Line 10-11:

    const root = createRoot(container);
    root.render(<App />);
    
    • createRoot(container) – Create a React “root” at the DOM element
    • root.render(<App />) – Render our App component inside the container
    • <App /> – JSX syntax for using our component (self-closing tag)

    🎯 Key React Concepts You Just Learned:

    1. Components

    • Functions that return JSX
    • Must start with capital letter
    • Reusable pieces of UI

    2. JSX

    • Looks like HTML, actually JavaScript
    • Must return single parent element
    • Processed by esbuild into regular JavaScript

    3. Import/Export

    • Default exports: export default App → import App from './App'
    • Named exports: export { createRoot } → import { createRoot } from 'package'

    4. React DOM

    • createRoot() – Modern way to mount React apps (React 18+)
    • render() – Display components in the browser

    5. Rails Integration

    • Rails serves the HTML page
    • React takes over the #react-root element
    • esbuild bundles everything together

    🚀 This pattern is the foundation of every React app! We create components, import them, and render them to the DOM.


    📚 Step-by-Step React Learning with Todo List

    Now let’s build a Todo List app step by step. I’ll explain each React concept thoroughly as we go. Here’s our learning roadmap:

    Step 1: Understanding JSX and Basic Component Structure

    First, let’s update our App.jsx to create the basic structure of our Todo app:

    import React from 'react';
    
    function App() {
      return (
        <div className="todo-app">
          <h1>My Todo List</h1>
          <p>Let's learn React by building a todo app!</p>
    
          {/* This is a JSX comment */}
          <div className="todo-container">
            <h2>Add a new todo</h2>
            <input type="text" placeholder="Enter a todo..." />
            <button>Add Todo</button>
    
            <h2>My Todos</h2>
            <ul>
              <li>Learn React basics</li>
              <li>Build a todo app</li>
              <li>Master React hooks</li>
            </ul>
          </div>
        </div>
      );
    }
    
    export default App;
    

    🎯 Key Concepts Explained:

    JSX (JavaScript XML):

    • JSX lets you write HTML-like syntax directly in JavaScript
    • It’s a syntax extension for JavaScript, not actual HTML
    • JSX gets compiled to JavaScript function calls
    • You can use {} to embed JavaScript expressions inside JSX

    Important JSX Rules:

    • Use className instead of class (because class is a reserved word in JavaScript)
    • You can use single quotes for className values in JSX. Both work perfectly fine:
    // Both of these are valid:
    <div className='todo-app'>    // Single quotes ✅
    <div className="todo-app">    // Double quotes ✅
    

    Quote Usage in JSX/JavaScript:

    Single quotes vs Double quotes:

    • JavaScript treats them identically
    • It’s mostly a matter of personal/team preference
    • The key is to be consistent throughout your project

    Common conventions:

    // Option 1: Single quotes for JSX attributes
    <div className='todo-app'>
      <input type='text' placeholder='Enter todo...' />
    </div>
    
    // Option 2: Double quotes for JSX attributes  
    <div className="todo-app">
      <input type="text" placeholder="Enter todo..." />
    </div>
    
    // Option 3: Mixed (but stay consistent within each context)
    const message = 'Hello World';  // Single for JS strings
    <div className="todo-app">      // Double for JSX attributes
    

    When you MUST use specific quotes:

    // When the string contains the same quote type
    <div className="It's a great day">        // Double quotes needed
    <div className='He said "Hello"'>        // Single quotes needed
    
    // Or use escape characters
    <div className='It\'s a great day'>       // Escaping single quote
    <div className="He said \"Hello\"">      // Escaping double quote
    

    💡 Tip: Many teams use tools like Prettier or ESLint to automatically format and enforce consistent quote usage across the entire project.

    • All tags must be closed (self-closing tags need / at the end)
    • JSX comments use {/* */} syntax
    • Return a single parent element (or use React Fragment <>...</>)

    Try updating our App.jsx with this code and see it in your browser!
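Since JSX compiles down to nested function calls that return plain objects, here is a toy illustration with a stand-in `createElement` (made up for this sketch, and far simpler than React's real element factory):

```javascript
// What <div className="todo-app"><h1>My Todo List</h1></div> roughly
// becomes after compilation: nested function calls building a tree.
function createElement(type, props, ...children) {
  return { type, props: props || {}, children };
}

const tree = createElement(
  'div', { className: 'todo-app' },
  createElement('h1', null, 'My Todo List')
);

console.log(tree.type);            // div
console.log(tree.props.className); // todo-app
```

This is why JSX rules exist: `className` (not `class`) because the props object is JavaScript, and a single parent element because a function call can only return one value.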


    Step 2: Introduction to State with useState

    Now let’s add state to make our app interactive. State is data that can change over time.

    import React, { useState } from 'react';
    
    function App() {
      // useState Hook - creates state variable and setter function
      const [todos, setTodos] = useState([
        { id: 1, text: 'Learn React basics', completed: false },
        { id: 2, text: 'Build a todo app', completed: false },
        { id: 3, text: 'Master React hooks', completed: true }
      ]);
    
      const [inputValue, setInputValue] = useState('');
    
      return (
        <div className="todo-app">
          <h1>My Todo List</h1>
    
          <div className="todo-container">
            <h2>Add a new todo</h2>
            <input 
              type="text" 
              placeholder="Enter a todo..." 
              value={inputValue}
              onChange={(e) => setInputValue(e.target.value)}
            />
            <button>Add Todo</button>
    
            <h2>My Todos ({todos.length})</h2>
            <ul>
              {todos.map(todo => (
                <li key={todo.id}>
                  {todo.text} {todo.completed ? '✅' : '⏳'}
                </li>
              ))}
            </ul>
          </div>
        </div>
      );
    }
    
    export default App;
    

    🎯 Key Concepts Explained:

    useState Hook:

    • useState is a React Hook that lets you add state to functional components
    • It returns an array with two elements: [currentValue, setterFunction]
    • const [todos, setTodos] = useState([]) creates a state variable todos and a function setTodos to update it
    • The initial value is passed as an argument to useState
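    To demystify the `[currentValue, setterFunction]` pair, here is a toy sketch in plain JavaScript. This is my own illustration, not React's implementation (React also ties the state to the component tree and triggers re-renders when the setter runs):

```javascript
// Toy version of the [value, setter] idea: a closure holds the value,
// and the returned pair reads and writes it.
function makeState(initialValue) {
  let value = initialValue;
  const get = () => value;
  const set = (next) => { value = next; };
  return [get, set];
}

const [getCount, setCount] = makeState(0);
setCount(getCount() + 1);
console.log(getCount()); // 1
```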

    Controlled Components:

    • The input field is now “controlled” by React state
    • value={inputValue} makes the input show what’s in state
    • onChange={(e) => setInputValue(e.target.value)} updates state when user types

    Array.map() for Rendering Lists:

    • todos.map() transforms each todo into a JSX element
    • Each list item needs a unique key prop for React’s optimization
    • {todo.text} embeds the todo text using JSX expressions

    Try this code and notice how the input field now responds to typing!
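    The `todos.map()` transformation can be seen in isolation, with plain strings standing in for the `<li>` elements:

```javascript
// The same transformation the JSX performs: one todo object in,
// one rendered item out.
const todos = [
  { id: 1, text: 'Learn React basics', completed: false },
  { id: 3, text: 'Master React hooks', completed: true },
];

const items = todos.map(todo => `${todo.text} ${todo.completed ? '✅' : '⏳'}`);

console.log(items[0]); // Learn React basics ⏳
console.log(items[1]); // Master React hooks ✅
```

In JSX the same `map` returns `<li key={todo.id}>` elements instead of strings, and the `key` lets React match old and new list items efficiently.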


    Step 3: Event Handling and Adding Todos

    Let’s make the “Add Todo” button work:

    import React, { useState } from 'react';
    
    function App() {
      const [todos, setTodos] = useState([
        { id: 1, text: 'Learn React basics', completed: false },
        { id: 2, text: 'Build a todo app', completed: false },
        { id: 3, text: 'Master React hooks', completed: true }
      ]);
    
      const [inputValue, setInputValue] = useState('');
    
      // Function to add a new todo
      const addTodo = () => {
        if (inputValue.trim() !== '') {
          const newTodo = {
            id: Date.now(), // Simple ID generation
            text: inputValue,
            completed: false
          };
    
          setTodos([...todos, newTodo]); // Spread operator to add new todo
          setInputValue(''); // Clear the input field
        }
      };
    
      // Function to handle Enter key press
      const handleKeyPress = (e) => {
        if (e.key === 'Enter') {
          addTodo();
        }
      };
    
      return (
        <div className="todo-app">
          <h1>My Todo List</h1>
    
          <div className="todo-container">
            <h2>Add a new todo</h2>
            <input 
              type="text" 
              placeholder="Enter a todo..." 
              value={inputValue}
              onChange={(e) => setInputValue(e.target.value)}
              onKeyPress={handleKeyPress}
            />
            <button onClick={addTodo}>Add Todo</button>
    
            <h2>My Todos ({todos.length})</h2>
            <ul>
              {todos.map(todo => (
                <li key={todo.id}>
                  {todo.text} {todo.completed ? '✅' : '⏳'}
                </li>
              ))}
            </ul>
          </div>
        </div>
      );
    }
    
    export default App;
    

    🎯 Key Concepts Explained:

    Event Handlers:

    • onClick={addTodo} – function runs when button is clicked
    • onKeyPress={handleKeyPress} – function runs when key is pressed
    • Event handlers receive an event object (e) with information about the event

    State Updates:

    • setTodos([...todos, newTodo]) – creates a new array with all existing todos plus the new one
    • Important: Always create new arrays/objects instead of mutating existing ones
    • React compares old and new state to determine if re-render is needed

    Spread Operator (...):

    • ...todos spreads out all elements of the todos array
    • This is the React way to add items to an array in state

    Try adding new todos now!
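    The immutability rule and the spread operator together, in runnable plain JavaScript:

```javascript
// State updates must create a NEW array; spread copies the old elements
// and appends the new todo without mutating the original.
const todos = [{ id: 1, text: 'Learn React basics', completed: false }];
const newTodo = { id: 2, text: 'Build a todo app', completed: false };

const nextTodos = [...todos, newTodo];

console.log(todos.length);        // 1 (original untouched)
console.log(nextTodos.length);    // 2
console.log(todos === nextTodos); // false (new reference, so React knows to re-render)
```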

    🎯 Function Syntax Options in JavaScript:

    const addTodo = () => {} is a function syntax – specifically an arrow function. Let me explain why it’s declared as const and the different ways to write functions in JavaScript.

    1. Function Declaration (Traditional)
    function addTodo() {
      // function body
    }
    
    2. Function Expression with Arrow Function
    const addTodo = () => {
      // function body
    };
    
    3. Function Expression (Traditional)
    const addTodo = function() {
      // function body
    };
    

    🤔 Why use const for functions?

    Arrow functions are expressions, not declarations:

    // This is a DECLARATION - creates a function named addTodo
    function addTodo() { }
    
    // This is an EXPRESSION - creates a function and assigns it to a variable
    const addTodo = () => { };
    

    Why const specifically?

    // ❌ Could be reassigned accidentally
    let addTodo = () => { };
    addTodo = "oops"; // Function is now gone!
    
    // ❌ Could be reassigned accidentally  
    var addTodo = () => { };
    addTodo = null; // Function is now gone!
    
    // ✅ Cannot be reassigned - prevents bugs
    const addTodo = () => { };
    addTodo = "something"; // ERROR: Assignment to constant variable
    

    📚 Key Differences:

    Function Declaration vs Arrow Function:

    // Function Declaration
    function addTodo() {
      console.log("Adding todo");
    }
    
    // Arrow Function (assigned to const)
    const addTodo = () => {
      console.log("Adding todo");
    };
    

    Hoisting Behavior:

    // ✅ This works - function declarations are "hoisted"
    sayHello(); // "Hello!"
    
    function sayHello() {
      console.log("Hello!");
    }
    
    // ❌ This doesn't work - arrow functions are not hoisted
    sayGoodbye(); // Error: Cannot access 'sayGoodbye' before initialization
    
    const sayGoodbye = () => {
      console.log("Goodbye!");
    };
    

    this Binding:

    // Function declaration has its own 'this'
    function regularFunction() {
      console.log(this); // 'this' can change based on how it's called
    }
    
    // Arrow function inherits 'this' from surrounding scope
    const arrowFunction = () => {
      console.log(this); // 'this' is inherited from parent scope
    };
    

    🚀 In React Context:

    In React functional components, we typically use arrow functions with const because:

    1. Prevents accidental reassignment – our function won’t get overwritten
    2. Consistent with modern JavaScript – ES6+ standard
    3. Cleaner syntax – less verbose than traditional function expressions
4. Better for event handlers: this behavior is more predictable

    All these are equivalent in React:

    // Option 1: Arrow function with const (most common)
    const addTodo = () => {
      if (inputValue.trim() !== '') {
        // ... logic
      }
    };
    
    // Option 2: Traditional function declaration
    function addTodo() {
      if (inputValue.trim() !== '') {
        // ... logic  
      }
    }
    
    // Option 3: Function expression with const
    const addTodo = function() {
      if (inputValue.trim() !== '') {
        // ... logic
      }
    };
    

    💡 Why React developers prefer arrow functions:

    1. Shorter syntax for simple functions
    2. Consistent variable declaration (everything uses const)
    3. No hoisting confusion – functions are defined before they’re used
    4. Better with modern tooling – ESLint, Prettier handle them well

    So yes, const addTodo = () => {} is definitely a function! It’s just a modern way to write functions that prevents accidental reassignment and has cleaner syntax.

    🎯 What is Hoisting?

    Hoisting is a fundamental JavaScript concept that can be confusing at first. Let me explain it clearly with examples.

    Hoisting is JavaScript’s behavior of moving declarations to the top of their scope during the compilation phase, before the code is executed.

    Think of it like JavaScript “hoists” (lifts up) your variable and function declarations to the top of their scope.

    📚 How Hoisting Works:

    Function Declarations are Hoisted:

    // This works even though we call the function before declaring it!
    sayHello(); // Outputs: "Hello!"
    
    function sayHello() {
      console.log("Hello!");
    }
    

    Behind the scenes, JavaScript treats it like this:

    // JavaScript internally reorganizes it like this:
    function sayHello() {
      console.log("Hello!");
    }
    
    sayHello(); // Now it makes sense!
    

    Variable Declarations (with var) are Hoisted:

    console.log(myVar); // Outputs: undefined (not an error!)
    var myVar = "Hello";
    console.log(myVar); // Outputs: "Hello"
    

    Behind the scenes:

    // JavaScript internally treats it like this:
    var myVar; // Declaration is hoisted to the top
    console.log(myVar); // undefined (declared but not assigned)
    myVar = "Hello"; // Assignment stays in place
    console.log(myVar); // "Hello"
    

    🚫 What is NOT Hoisted:

    let and const variables:

    // ❌ This throws an error!
    console.log(myLet); // ReferenceError: Cannot access 'myLet' before initialization
    let myLet = "Hello";
    
    // ❌ This also throws an error!
    console.log(myConst); // ReferenceError: Cannot access 'myConst' before initialization
    const myConst = "Hello";
    

    Arrow Functions (Function Expressions):

    // ❌ This throws an error!
    sayGoodbye(); // ReferenceError: Cannot access 'sayGoodbye' before initialization
    
    const sayGoodbye = () => {
      console.log("Goodbye!");
    };
    

    Function Expressions:

    // ❌ This throws an error!
    myFunc(); // TypeError: myFunc is not a function
    
    var myFunc = function() {
      console.log("Hello!");
    };
    

    🎭 Visual Examples:

    Example 1: Function Declaration Hoisting

    // What you write:
    console.log("1. Start");
    myFunction(); // This works!
    console.log("3. End");
    
    function myFunction() {
      console.log("2. Inside function");
    }
    
    // Output:
    // 1. Start
    // 2. Inside function  
    // 3. End
    

    Example 2: Variable Hoisting with var

    // What you write:
    console.log("Value is:", myVar); // undefined
    var myVar = 42;
    console.log("Value is:", myVar); // 42
    
    // What JavaScript internally does:
    var myVar; // Hoisted to top
    console.log("Value is:", myVar); // undefined
    myVar = 42; // Assignment stays here
    console.log("Value is:", myVar); // 42
    

    Example 3: Why let and const are different

    // Temporal Dead Zone
    console.log("Before declaration");
    
    // ❌ This area is the "Temporal Dead Zone" for myLet
    console.log(myLet); // ReferenceError!
    
    let myLet = "Hello"; // Declaration and initialization
    console.log(myLet); // "Hello"
    

    🎯 Practical Implications:

    Why Function Declarations work everywhere:

    // This is valid JavaScript
    function main() {
      helper(); // Can call helper before it's defined
    
      function helper() {
        console.log("Helper function");
      }
    }
    

    Why Arrow Functions must be defined first:

    // ❌ This doesn't work
    const main = () => {
      helper(); // Error! helper is not defined yet
    
      const helper = () => {
        console.log("Helper function");
      };
    };
    
    // ✅ This works
    const main = () => {
      const helper = () => {
        console.log("Helper function");
      };
    
      helper(); // Now it works!
    };
    

    💡 Best Practices:

    1. Use const and let instead of var to avoid hoisting confusion
    2. Define functions before using them (even though function declarations are hoisted)
    3. Use arrow functions for consistency and to avoid hoisting surprises
    4. Initialize variables when you declare them

    🔗 Back to Our React Example:

    // This is why we use const for arrow functions
    const App = () => {
      // ✅ Good: Function defined before use
      const addTodo = () => {
        // function logic
      };
    
      // ❌ Bad: Would cause error if we tried to call it here
      // deleteTodo(); // Error!
    
      const deleteTodo = () => {
        // function logic
      };
    
      return (
        <div>
          <button onClick={addTodo}>Add</button>
          <button onClick={deleteTodo}>Delete</button>
        </div>
      );
    };
    

    Hoisting is JavaScript’s way of making function declarations available throughout their scope, but it doesn’t apply to modern variable declarations (let, const) or arrow functions. That’s why we define our functions with const and make sure to declare them before we use them!


    Step 4: Toggling Todo Completion

    Let’s add the ability to mark todos as complete/incomplete:

    import React, { useState } from 'react';
    
    function App() {
      const [todos, setTodos] = useState([
        { id: 1, text: 'Learn React basics', completed: false },
        { id: 2, text: 'Build a todo app', completed: false },
        { id: 3, text: 'Master React hooks', completed: true }
      ]);
    
      const [inputValue, setInputValue] = useState('');
    
      const addTodo = () => {
        if (inputValue.trim() !== '') {
          const newTodo = {
            id: Date.now(),
            text: inputValue,
            completed: false
          };
    
          setTodos([...todos, newTodo]);
          setInputValue('');
        }
      };
    
      // Function to toggle todo completion
      const toggleTodo = (id) => {
        setTodos(todos.map(todo => 
          todo.id === id 
            ? { ...todo, completed: !todo.completed } // Toggle completed status
            : todo // Keep todo unchanged
        ));
      };
    
      const handleKeyPress = (e) => {
        if (e.key === 'Enter') {
          addTodo();
        }
      };
    
      return (
        <div className="todo-app">
          <h1>My Todo List</h1>
    
          <div className="todo-container">
            <h2>Add a new todo</h2>
            <input 
              type="text" 
              placeholder="Enter a todo..." 
              value={inputValue}
              onChange={(e) => setInputValue(e.target.value)}
              onKeyPress={handleKeyPress}
            />
            <button onClick={addTodo}>Add Todo</button>
    
            <h2>My Todos ({todos.length})</h2>
            <ul>
              {todos.map(todo => (
                <li key={todo.id}>
                  <span 
                    onClick={() => toggleTodo(todo.id)}
                    style={{ 
                      textDecoration: todo.completed ? 'line-through' : 'none',
                      cursor: 'pointer',
                      color: todo.completed ? '#888' : '#000'
                    }}
                  >
                    {todo.text}
                  </span>
                  {todo.completed ? ' ✅' : ' ⏳'}
                </li>
              ))}
            </ul>
          </div>
        </div>
      );
    }
    
    export default App;
    

    🎯 Key Concepts Explained:

    Array.map() for Updates:

    • todos.map() creates a new array where each todo is either updated or kept the same
    • todo.id === id ? { ...todo, completed: !todo.completed } : todo
    • This pattern is very common in React for updating items in arrays

    Conditional Rendering:

    • todo.completed ? '✅' : '⏳' – ternary operator for conditional display
    • textDecoration: todo.completed ? 'line-through' : 'none' – conditional styling

    Arrow Functions in JSX:

    • onClick={() => toggleTodo(todo.id)} – arrow function to pass arguments to event handlers
    • Without the arrow function, toggleTodo(todo.id) would execute immediately

    Now you can click on todos to toggle their completion status!


    🤔 What we’ve learned so far:

    1. ✅ JSX syntax and rules
    2. ✅ useState hook for state management
    3. ✅ Event handling (onClick, onChange, onKeyPress)
    4. ✅ Controlled components
    5. ✅ Array mapping for rendering lists
    6. ✅ Conditional rendering
    7. ✅ State updates with spread operator

    Next Steps: In the following steps, we’ll cover:

    • Deleting todos
    • Component composition (breaking into smaller components)
    • Props passing
    • Filtering todos
    • More advanced state management

    Let’s see in Part 4. Happy React Development! 🚀

    Design Studio – Rails 8 e-commerce application: Integrate active admin

Now let's create an admin interface for our e-commerce application.

    We have a well-structured e-commerce Rails application with:

    Models: User, Product, ProductVariant, Order, OrderItem
    Authentication: Custom session-based auth with user roles (customer/admin)
    Authorization: Already has admin role checking

    Admin Interface Recommendations

    Here are the best options for Rails admin interfaces, ranked by suitability for our project:

    1. ActiveAdmin (Recommended ⭐)
      Best fit for e-commerce with complex associations
      Excellent filtering, search, and batch operations
      Great customization options and ecosystem
      Handles your Product → ProductVariant → OrderItem relationships well
    2. Administrate (Modern Alternative)
      Clean, Rails-way approach by Thoughtbot
      Good for custom UIs, less configuration
      More work to set up initially
    3. Rails Admin (What you asked about)
      Quick setup but limited customization
      Less actively maintained
      Good for simple admin needs
    4. Avo (Modern Premium)
      Beautiful modern UI
      Some features require paid version

      https://avohq.io/rails-admin
      https://docs.avohq.io/3.0/

We'll choose ActiveAdmin for our e-commerce application. Let's integrate it with our existing authentication system.

    Add in Gemfile:

    gem "activeadmin"
    gem "sassc-rails" # Required for ActiveAdmin
    gem "image_processing", "~> 1.2" # For variant processing if not already present
    

    Bundle Install and run the Active Admin Generator:

    $ bundle install
    $ rails generate active_admin:install --skip-users
    create app/assets/javascripts/active_admin.js
    create app/assets/stylesheets/active_admin.scss
    create db/migrate/20250710083516_create_active_admin_comments.rb
    

    Migration File created by Active Admin:

    class CreateActiveAdminComments < ActiveRecord::Migration[8.0]
      def self.up
        create_table :active_admin_comments do |t|
          t.string :namespace
          t.text   :body
          t.references :resource, polymorphic: true
          t.references :author, polymorphic: true
          t.timestamps
        end
        add_index :active_admin_comments, [ :namespace ]
      end
    
      def self.down
        drop_table :active_admin_comments
      end
    end
    

    Run database migration:

    $ rails db:migrate
    

in config/initializers/active_admin.rb

# This setting changes the method which Active Admin calls
# within the application controller.
config.authentication_method = :authenticate_admin_user!

# ...

# This setting changes the method which Active Admin calls
# (within the application controller) to return the currently logged in user.
config.current_user_method = :current_admin_user

# ...

# Default:
config.logout_link_path = :destroy_session_path
    

    in app/controllers/application_controller.rb

    private
    
      def authenticate_admin_user!
        require_authentication
        ensure_admin
      end
    
      def current_admin_user
        Current.user if Current.user&.admin?
      end
    

Run the ActiveAdmin resource generator for each model:

    rails generate active_admin:resource User
    rails generate active_admin:resource Product
    rails generate active_admin:resource ProductVariant
    rails generate active_admin:resource Order
    rails generate active_admin:resource OrderItem
    

Let's update all the ActiveAdmin resources with fields, filters, attributes, panels, etc.

    Let’s add accepts_nested_attributes_for :variants, allow_destroy: true in Product Model.

    accepts_nested_attributes_for is a Rails feature that allows a parent model to accept and process attributes for its associated child models through nested parameters. Here’s what it does:

    What it enables:

    1. Nested Forms: You can create/update a Product and its ProductVariants in a single form submission
    2. Mass Assignment: Allows passing nested attributes through strong parameters
    3. CRUD Operations: Create, update, and delete associated records through the parent

    In our Product model

    class Product < ApplicationRecord
      has_many :variants, dependent: :destroy, class_name: "ProductVariant"
      accepts_nested_attributes_for :variants, allow_destroy: true
    end
    

    What this allows:

    Before: You’d need separate forms/requests for Product and ProductVariant

    # Create product first
    product = Product.create(name: "T-Shirt", brand: "Nike")
    
    # Then create variants separately
    product.variants.create(size: "M", color: "Red", sku: "NIKE-001-M-RED")
    product.variants.create(size: "L", color: "Blue", sku: "NIKE-001-L-BLUE")
    

    After: You can do it all in one go:

    Product.create(
      name: "T-Shirt",
      brand: "Nike",
      variants_attributes: [
        { size: "M", color: "Red", sku: "NIKE-001-M-RED" },
        { size: "L", color: "Blue", sku: "NIKE-001-L-BLUE" }
      ]
    )
    

    In ActiveAdmin context:

    This enables the nested form I created in app/admin/products.rb:

    f.inputs "Product Variants" do
      f.has_many :variants, allow_destroy: true, new_record: true do |v|
        v.input :size
        v.input :color
        v.input :sku
        # ... other variant fields
      end
    end
    
The allow_destroy: true option:

• Allows deleting associated records by passing _destroy: '1'
• Essential for updating existing products with variants

Strong Parameters:

    In the controller, you need to permit the nested attributes:

    permit_params :name, :brand, :category, 
                  variants_attributes: [:id, :size, :color, :sku, :mrp, :discount_percent, :_destroy]
    
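To make the nested-attributes shape concrete, here's a plain-Ruby sketch of the hash a nested product form might submit. The IDs and SKUs are made up for illustration; Rails dispatches on the presence of an id and the _destroy flag:

```ruby
# Hypothetical payload a nested product form might submit.
params = {
  product: {
    name: "T-Shirt",
    variants_attributes: [
      { size: "M", color: "Red", sku: "NIKE-001-M-RED" },            # no id => create
      { id: 42, size: "L", color: "Blue", sku: "NIKE-001-L-BLUE" },  # id => update
      { id: 43, _destroy: "1" }                                      # _destroy => delete
    ]
  }
}

variants  = params[:product][:variants_attributes]
to_create = variants.select { |v| v[:id].nil? && v[:_destroy] != "1" }
to_delete = variants.select { |v| v[:_destroy] == "1" }

puts "create: #{to_create.size}, delete ids: #{to_delete.map { |v| v[:id] }.inspect}"
```

This is exactly why permit_params must include :id and :_destroy alongside the variant fields: without them, updates and deletions of existing variants are silently ignored.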

    Propshaft vs Sprockets

    We have an error after integrating ActiveAdmin:

     Rendered layout layouts/application.html.erb (Duration: 13.9ms | GC: 0.5ms)
    Completed 500 Internal Server Error in 118ms (ActiveRecord: 28.7ms (1 query, 0 cached) | GC: 27.0ms)
    
    ActionView::Template::Error (undefined method 'load_path' for an instance of Sprockets::Environment)
    Caused by: NoMethodError (undefined method 'load_path' for an instance of Sprockets::Environment)
    
    Information for: ActionView::Template::Error (undefined method 'load_path' for an instance of Sprockets::Environment):
        14:     <link rel="icon" href="/icon.svg" type="image/svg+xml">
        15:     <link rel="apple-touch-icon" href="/icon.png">
    

    This error occurs because Rails 8 uses Propshaft by default, but ActiveAdmin requires Sprockets. There’s a conflict between the two asset pipelines. Let’s fix this by switching to Sprockets completely:

    Add in Gemfile:

# Use Sprockets for asset pipeline (required for ActiveAdmin)
gem "sprockets-rails"
# and remove the Rails 8 default if present: gem "propshaft"

$ bundle install

# config/application.rb

# Use Sprockets for asset pipeline
config.assets.enabled = true
    
    

    Let’s create an admin user for testing our ActiveAdmin:

# db/seeds.rb
    # Create admin user
    admin_user = User.find_or_create_by(email: "admin@designstudio.com") do |user|
      user.password = "password123"
      user.role = "admin"
    end
    
    puts "Admin user created: #{admin_user.email}" if admin_user.persisted?
    
# run the seeds
$ rails db:seed
    

    ✅ We have Successfully Integrated ActiveAdmin with Nested Attributes!

    I’ve successfully converted your Rails application to use nested attributes (variants_attributes) across the entire stack and integrated ActiveAdmin as your admin interface. Here’s what I accomplished:

    🔧 What I Fixed & Updated:

    1. Asset Pipeline Issue:

    • Switched from Propshaft to Sprockets (required for ActiveAdmin)
    • Updated Gemfile and application configuration
    • Fixed the load_path error you encountered

    2. Model Changes:

    • Added accepts_nested_attributes_for :variants, allow_destroy: true to Product model

    3. Controller Updates:

    • Changed from singular product_variant to plural variants_attributes
    • Simplified create/update actions to use Rails nested attributes handling
    • Updated strong parameters to permit nested variant attributes

    4. View Improvements:

    • Updated product form to handle multiple variants with JavaScript
    • Added “Add Another Variant” functionality
    • Implemented proper nested form fields for all variant attributes

    5. ActiveAdmin Configuration:

    • Set up authentication using your existing user system
    • Created comprehensive admin resources for all models
    • Added nested variant management in product forms
    • Built an informative dashboard with key metrics

    🎯 Admin Features You Now Have:

    1. Dashboard – Key metrics, recent activity, revenue overview
    2. User Management – Role management, order history
    3. Product Management – Full CRUD with image uploads and nested variants
    4. Order Management – Status updates, batch actions
    5. Inventory Management – ProductVariant and OrderItem tracking

    🔑 Admin Access:

    • URL: http://localhost:3000/admin
    • Email: admin@designstudio.com
    • Password: password123

    📊 Comparison vs Rails Admin:

    ActiveAdmin Advantages:

    • ✅ Better for complex e-commerce relationships
    • ✅ Excellent filtering and search capabilities
    • ✅ Powerful batch operations
    • ✅ Highly customizable forms and views
    • ✅ Active development and community

    Rails Admin:

    • ❌ Less actively maintained
    • ❌ Limited customization options
    • ❌ Not as suitable for complex nested relationships

    🚀 What’s Working Now:

    1. Nested Variants: Create/edit products with multiple variants in one form
    2. Image Management: Upload multiple product images
    3. Order Processing: Admin can update order statuses with batch actions
    4. User Management: Promote users to admin, view order history
    5. Dashboard Analytics: Revenue tracking, popular products, recent activity

    Visit /admin to explore your new admin interface, Cheers!

    Check docs: https://activeadmin.info/0-installation.html

    ⚠️ Tailwind UI Issue after moving to Sprockets

    The issue is with <%= stylesheet_link_tag "/tailwind/application.css", "data-turbo-track": "reload" %> where we are trying to load Tailwind CSS from an incorrect path. The tailwindcss-rails gem serves Tailwind CSS differently. Let me fix this:

    ls -la app/assets/tailwind/
    
    curl -I http://localhost:3000/tailwind/application.css
    
    rails routes | grep -i tailwind
    

    The issue is that the /tailwind/application.css path is returning 404. The tailwindcss-rails gem works differently. Let’s check how it’s configured and fix this:

    find . -name "*.css" -path "*/public/*" | head -10
    
    rails assets:precompile
    

    Now let’s test if the Tailwind CSS is properly accessible:

    curl -I http://localhost:3000/assets/tailwind/application.css
    

    The issue is that with the tailwindcss-rails gem, Tailwind CSS gets compiled into the main application.css file rather than a separate file. Let’s check the current asset manifest and fix the layout:

    rails assets:precompile
    
bundle exec rails tailwindcss:build
    

    The Tailwind CSS is already built and located in app/assets/builds/tailwind.css. Let’s fix the configuration:

    rails assets:precompile
    
    curl -I http://localhost:3000/assets/tailwind.css
    

    The issue has been fixed. The Tailwind CSS is now properly accessible. Here’s what I did to resolve the problem:

    1. Identified the issue: The Tailwind CSS was being incorrectly referenced as /tailwind/application.css in the layout file, but it wasn’t being served properly through the asset pipeline.
    2. Found the correct location: Discovered that the tailwindcss-rails gem builds the CSS file to app/assets/builds/tailwind.css.
    3. Fixed the asset pipeline configuration:
    • Updated app/assets/config/manifest.js to include //= link_tree ../builds which includes the built Tailwind CSS
    • Removed the incorrect tailwind/application.css reference
4. Updated the layout file:
    • Changed from <%= stylesheet_link_tag "/tailwind/application.css" %>
    • To the correct <%= stylesheet_link_tag "tailwind", "data-turbo-track": "reload" %>
5. Verified the fix:
    • Successfully precompiled assets
    • Confirmed that http://localhost:3000/assets/tailwind.css returns a 200 OK status

    Install Rails Icons 🍁

For better icon asset management than the Font Awesome CDN, let's add the rails_icons gem:

    # Rails Icons - Modern icon library support
    gem "rails_icons"
    
    $ rails generate rails_icons:install --help
    $ rails generate rails_icons:install --libraries=heroicons
    
    $ npm install @fortawesome/fontawesome-free
    

    How to migrate from the CDN to Rails Icons

    For a production Rails application, it’s generally better to use a gem rather than a CDN for Font Awesome. Here’s why:

    CDN Issues:

    • External dependency (can fail if CDN is down)
    • Privacy concerns (external requests)
    • No version control
    • Requires internet connection
    • Not cacheable with your assets

    Gem Benefits:

    • Self-hosted (no external dependencies)
    • Better performance (served with your assets)
    • Version control
    • Works offline
    • Follows Rails conventions
    • Better for production

    Based on my research, adding a gem is definitely the better approach for a production Rails application. Here are the best options:

    Option 1: Modern approach with npm package (Recommended)

    Since we’re using Rails 8 with modern tooling, I’d recommend using the npm package:

    yarn add @fortawesome/fontawesome-free
    
    npm --version
    
    which node
    

However, given our current setup, I'd recommend Option 2, which is more Rails-native:

    Option 2: Rails Icons gem (Recommended for our setup)

    This is a modern gem that supports Font Awesome and other icon libraries with a clean Rails API:

    bundle install
    
    rails generate rails_icons:install --help
    
    rails generate rails_icons:install --libraries=heroicons
    

    Let’s remove the CDN and set up Rails Icons properly. Since we’re already using Font Awesome extensively, let’s add Font Awesome as a custom library:

    rails generate rails_icons:initializer --custom=fontawesome
    
    npm install @fortawesome/fontawesome-free
    
    find node_modules/@fortawesome/fontawesome-free -name "*.svg" -type f | head -5
    
    cp -r node_modules/@fortawesome/fontawesome-free/svgs/* app/assets/svg/icons/fontawesome/
    

    Rails Icons is now properly set up instead of the CDN. Here’s what we’ve accomplished:

    What’s Changed:

    1. Removed the Font Awesome CDN link (no more external dependency)
    2. Added rails_icons gem with Heroicons as the default library
    3. Added Font Awesome as a custom library with all variants (solid, regular, brands)
    4. Self-hosted all icons through Rails asset pipeline

    What is Ransack?

    Ransack is a popular Ruby gem that provides powerful search and filtering capabilities for Rails applications. It’s commonly used with ActiveAdmin (which your app uses) to create searchable, filterable, and sortable data tables.

    Ransackable Attributes and Associations

    ransackable_attributes

    This method defines which model attributes can be searched, filtered, or sorted through Ransack. It’s a security feature that prevents unauthorized access to sensitive data.

    def self.ransackable_attributes(auth_object = nil)
      %w[id email role created_at updated_at password_reset_token password_reset_sent_at]
    end
    

    What it does:

    • Allows searching/filtering by id, email, role, created_at, updated_at, etc.
    • Prevents searching by sensitive fields like password_digest
    • The auth_object parameter can be used for role-based access control
    • Caution: password_reset_token is sensitive too; consider dropping it from the allow-list unless admins genuinely need to search it
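For example, auth_object can gate extra columns to admins only. A minimal plain-Ruby sketch (the :admin symbol and the column split are assumptions for illustration, not from our app):

```ruby
# Sketch: expose extra searchable columns only when the caller passes an admin marker.
def ransackable_attributes(auth_object = nil)
  base = %w[id email created_at updated_at]
  auth_object == :admin ? base + %w[role password_reset_sent_at] : base
end

puts ransackable_attributes.inspect          # non-admin callers see only base columns
puts ransackable_attributes(:admin).inspect  # admins also get role and reset timestamp
```

In a real model this would be a class method (def self.ransackable_attributes), and ActiveAdmin would pass whatever auth object you configure.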

    ransackable_associations

    This method defines which model associations can be used in Ransack queries.

    def self.ransackable_associations(auth_object = nil)
      %w[orders sessions]
    end
    

    What it does:

    • Allows searching/filtering by related models (e.g., “users who have orders”)
    • Enables joins and complex queries across associations

    Why This Matters for Security

    Without these methods, Rails 7+ will raise warnings or errors because Ransack needs explicit permission to search certain fields. This prevents:

    1. Information disclosure – Searching sensitive fields like passwords
    2. Performance issues – Searching on unindexed or inappropriate fields
    3. Unauthorized access – Accessing data through unexpected query parameters
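Conceptually, Ransack checks each incoming search predicate against the allow-list before building SQL. Here's a rough plain-Ruby illustration of that gatekeeping (not Ransack's actual implementation; the param names are made up):

```ruby
# The attributes our model allow-lists for searching.
ALLOWED = %w[id email role created_at updated_at]

# Incoming query params, e.g. from ?q[email_cont]=alice&q[password_digest_eq]=x
incoming = { "email_cont" => "alice", "password_digest_eq" => "x", "role_eq" => "admin" }

# Keep only predicates whose attribute prefix is allow-listed.
safe = incoming.select do |key, _|
  ALLOWED.any? { |attr| key.start_with?("#{attr}_") }
end

puts safe.keys.inspect  # password_digest_eq is dropped
```

The real gem is more sophisticated (it parses predicates like _cont and _eq properly), but the principle is the same: anything not explicitly allow-listed never reaches the query.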

    Example Usage in ActiveAdmin

    In your ActiveAdmin dashboard, this enables features like:

    # In app/admin/users.rb
    ActiveAdmin.register User do
      # Users can now search by email, role, etc.
      # Users can filter by orders, sessions
      # Users can sort by created_at, updated_at
    end
    

    In Our User Model

Looking at our User model:

    def self.ransackable_attributes(auth_object = nil)
      %w[id email role created_at updated_at password_reset_token password_reset_sent_at]
    end
    
    def self.ransackable_associations(auth_object = nil)
      %w[orders sessions]
    end
    

    This means:

    • ✅ Admins can search users by email, role, creation date
    • ✅ Admins can filter users by their orders or sessions
    • ✅ Admins can sort users by any of these attributes
    • ❌ Admins cannot search by password_digest (secure)
    • ❌ Admins cannot search by other sensitive fields

    Benefits

    1. Security: Explicit control over what can be searched
    2. Performance: Only searchable fields need indexes
    3. Usability: Provides powerful admin interface features
    4. Compliance: Helps meet data protection requirements

Migration Guide: from the Font Awesome CDN to Rails Icons

    Old CDN way:

    <i class="fas fa-user"></i>          <!-- solid -->
    <i class="far fa-star"></i>          <!-- regular -->
    <i class="fab fa-facebook-f"></i>    <!-- brands -->
    

    New Rails Icons way:

    <%= icon "user", library: "fontawesome", variant: "solid", class: "size-6" %>
    <%= icon "star", library: "fontawesome", variant: "regular", class: "size-6" %>
    <%= icon "facebook-f", library: "fontawesome", variant: "brands", class: "size-6" %>
    

    Or use modern Heroicons (recommended for new icons):

    <%= icon "user", class: "size-6" %>
    <%= icon "star", class: "size-6" %>
    <%= icon "heart", class: "size-6" %>
    

    Benefits:

    • No external dependencies (works offline)
    • Better performance (served with your assets)
    • Version control over icons
    • Multiple libraries (Font Awesome + Heroicons + more)
    • Consistent API across all icon libraries
    • Better for production and privacy

    Our application should work exactly the same now, but with better performance and no external dependencies! You can gradually migrate icons from <i class="fas fa-x"> to <%= icon "x", library: "fontawesome", variant: "solid" %> as needed.

    Asset Issue Again

    ✅ Final Fix Applied:

    Added //= link_tree ../../javascript .js to the manifest, which tells Sprockets to include all JavaScript files from the app/javascript directory tree.

    ✅ What’s Working Now:

    1. All CSS files properly compiled:
    • tailwind.css ✅
    • application.css ✅
    • active_admin.css ✅
2. All JavaScript files properly compiled:
    • application.js ✅
    • controllers/application.js ✅
    • controllers/decorative_background_controller.js ✅
    • controllers/hello_controller.js ✅
    • controllers/index.js ✅
3. Asset Pipeline fully functional:
    • Sprockets serving all assets correctly
    • Proper cache headers and ETags
    • All asset links declared in manifest

    🎯 Current Status:

    Our Rails application now has:

    • ✅ Modern asset pipeline with Sprockets + Importmap
    • ✅ Tailwind CSS for styling (self-hosted)
    • ✅ ActiveAdmin for administration
    • ✅ Rails Icons instead of Font Awesome CDN
    • ✅ Stimulus controllers for JavaScript functionality
    • ✅ All assets properly precompiled and served

    Check Github: Active admin changes

    📋 Clean Commit History Summary:

    1. ✅ Phase 1: feat: Add Rails Icons gem with Font Awesome and Heroicons support #94
    • Replaced Font Awesome CDN with self-hosted icons
    • Added Rails Icons with unified API for multiple icon libraries
2. ✅ Phase 2: task: Migrate from Propshaft to Sprockets asset pipeline #96
    • Switched from Propshaft to Sprockets for ActiveAdmin compatibility
    • Fixed asset compilation and linking issues
3. ✅ Phase 3: feat: Integrate ActiveAdmin for comprehensive admin interface #94
    • Complete ActiveAdmin setup with authentication
    • Full admin resources for all e-commerce models
4. ✅ Phase 4: fix: Resolve ActiveAdmin PostgreSQL and Ransack security issues #94
    • Fixed PostgreSQL GROUP BY errors in dashboard
    • Added Ransack security configuration for all models

    🚀 Our ActiveAdmin is now fully functional!

    You should now be able to:

    • ✅ Access the admin dashboard at localhost:3000/admin
    • ✅ View analytics and statistics without GROUP BY errors
    • ✅ Search and filter all resources safely with Ransack
    • ✅ Manage Users, Products, Variants, Orders, and Order Items
    • ✅ Use nested attributes for product variants
    • ✅ Perform batch operations and advanced filtering

    Test it out: Visit localhost:3000/admin and log in with your admin credentials to see the beautiful, fully-functional admin interface! 🎯

    to be continued 🚀…

    Rails 8 App: Create an Academic software app using SQL without using ActiveRecord- Part 3

    In this episode we move on from creating the tables with constraints, foreign keys, proper indexes, enums, reversible migrations, seeded data, and more. Now let’s inspect that seeded data with a UI tool that helps us explore and modify our database.

    Setup a UI tool for analysing SQL and data

    We’re using PostgreSQL. Here are the best SQL GUI tools for pure SQL data analysis and manipulation on macOS:

    🏆 Top Recommendations for SQL Analysis

    1. TablePlus ⭐️ (Highly Recommended)

    • Best for: Fast SQL queries, data filtering, before/after comparisons
    • Strengths:
      Lightning-fast query execution
      Excellent data filtering UI with SQL query builder
      Beautiful native macOS interface
      Export to CSV/JSON/SQL
      Query history and favorites
    • Cost: Free tier (2 tabs), Pro $89
    • Perfect for: Your use case of checking seeded data

    2. Postico (macOS Native)

    • Best for: PostgreSQL-specific features and analysis
    • Strengths:
      Built specifically for PostgreSQL
      Excellent for large dataset analysis
      Advanced filtering and sorting
      Beautiful data visualization
    • Cost: $49 (one-time)
    • PostgreSQL-optimized: Uses all PostgreSQL features

    3. pgAdmin (Free, Official)

    • Best for: Advanced PostgreSQL administration and complex queries
    • Strengths:
      Official PostgreSQL tool
      Advanced query planner visualization
      Excellent for performance analysis
      Complete database management
    • Cost: Free
    • Learning curve: Steeper but very powerful

    4. DBeaver (Free, Cross-platform)

    • Best for: Advanced SQL analysis and scripting
    • Strengths:
      Powerful SQL editor with autocomplete
      Data export in multiple formats
      Query execution plans
      Visual query builder
    • Cost: Free (Community), Pro $10/month
    • Great for: Complex data analysis workflows

    Quick Setup Commands

    For TablePlus (easiest to get started):

    # Install via Homebrew
    brew install --cask tableplus
    

    For pgAdmin:

    # Install via Homebrew
    brew install --cask pgadmin4
    

    For Postico:

    # Install via Homebrew
    brew install --cask postico
    

    🔧 Connection Details You’ll Need

    Your PostgreSQL connection details:

    • Host: localhost (default)
    • Port: 5432 (default)
    • Database: academic_sql_software_development
    • Username: Your macOS username (default)
    • Password: None (default for local development)

    💡 Pro Tips for Data Analysis

    Once connected, you can:

    1. Check seeded data:
       SELECT COUNT(*) FROM users;
       SELECT COUNT(*) FROM orders;
       SELECT COUNT(*) FROM products;
    
    2. Analyze relationships:
       SELECT 
         u.first_name, u.last_name, 
         COUNT(o.id) as order_count
       FROM users u 
       LEFT JOIN orders o ON u.id = o.user_id 
       GROUP BY u.id, u.first_name, u.last_name
       ORDER BY order_count DESC;
    
    3. Filter and export specific datasets for before/after comparisons

    My Recommendation: Start with TablePlus – it’s the most intuitive for our workflow of checking and filtering seeded data, with excellent performance for the data volumes we’re working with (10k users, 5k orders, etc.).

    Let’s Go with TablePlus 🥳

    🚀 Step 1: Install TablePlus

    brew install --cask tableplus
    

    📊 Step 2: Check Our Database Schema

     We have a great setup for learning SQL with realistic relationships. Let’s create a progressive SQL learning path using our actual data.

    🔗 Step 3: Connect to Your Database

    TablePlus Connection Details:

    • Host: localhost
    • Port: 5432
    • Database: academic_sql_software_development
    • User: (your macOS username)
    • Password: (leave blank)

    📚 SQL Learning Path: Basic to Advanced

    Change Font size, colour, theme etc:

    Level 1: Basic SELECT Queries

    -- 1. View all users
    SELECT * FROM users LIMIT 10;
    
    -- 2. Count total records
    SELECT COUNT(*) FROM users;
    SELECT COUNT(*) FROM orders;
    SELECT COUNT(*) FROM products;
    
    -- 3. Filter data
    SELECT first_name, last_name, email 
    FROM users 
    WHERE gender = 'female' 
    LIMIT 10;
    
    -- 4. Sort data
    SELECT first_name, last_name, date_of_birth 
    FROM users 
    ORDER BY date_of_birth DESC 
    LIMIT 10;
    
    -- 5. Filter with conditions
    SELECT title, price, category 
    FROM products 
    WHERE price > 50 AND category = 'men' 
    ORDER BY price DESC;
    

    Level 2: Basic Aggregations

    -- 1. Count by category
    SELECT category, COUNT(*) as product_count 
    FROM products 
    GROUP BY category;
    
    -- 2. Average prices by category
    SELECT category, 
           AVG(price) as avg_price,
           MIN(price) as min_price,
           MAX(price) as max_price
    FROM products 
    GROUP BY category;
    
    -- 3. Users by gender
    SELECT gender, COUNT(*) as user_count 
    FROM users 
    WHERE gender IS NOT NULL
    GROUP BY gender;
    
    -- 4. Products with low stock
    SELECT COUNT(*) as low_stock_products 
    FROM products 
    WHERE stock_quantity < 10;
    

    Level 3: Inner Joins

    -- 1. Users with their orders
    SELECT u.first_name, u.last_name, u.email, o.id as order_id, o.created_at
    FROM users u
    INNER JOIN orders o ON u.id = o.user_id
    ORDER BY o.created_at DESC
    LIMIT 20;
    
    -- 2. Orders with product details
    SELECT o.id as order_id, 
           p.title as product_name, 
           p.price, 
           p.category,
           o.created_at
    FROM orders o
    INNER JOIN products p ON o.product_id = p.id
    ORDER BY o.created_at DESC
    LIMIT 20;
    
    -- 3. Complete order information (3-table join)
    SELECT u.first_name, u.last_name,
           p.title as product_name,
           p.price,
           p.category,
           o.created_at as order_date
    FROM orders o
    INNER JOIN users u ON o.user_id = u.id
    INNER JOIN products p ON o.product_id = p.id
    ORDER BY o.created_at DESC
    LIMIT 20;
    

    Level 4: Left Joins (Show Missing Data)

    -- 1. All users and their order count (including users with no orders)
    SELECT u.first_name, u.last_name, u.email,
           COUNT(o.id) as order_count
    FROM users u
    LEFT JOIN orders o ON u.id = o.user_id
    GROUP BY u.id, u.first_name, u.last_name, u.email
    ORDER BY order_count DESC;
    
    -- 2. Users who haven't placed any orders
    SELECT u.first_name, u.last_name, u.email, u.created_at
    FROM users u
    LEFT JOIN orders o ON u.id = o.user_id
    WHERE o.id IS NULL
    ORDER BY u.created_at DESC;
    
    -- 3. Products that have never been ordered
    SELECT p.title, p.price, p.category, p.stock_quantity
    FROM products p
    LEFT JOIN orders o ON p.id = o.product_id
    WHERE o.id IS NULL
    ORDER BY p.price DESC;
    

    Level 5: Advanced Aggregations & Grouping

    -- 1. Top customers by order count
    SELECT u.first_name, u.last_name,
           COUNT(o.id) as total_orders,
           SUM(p.price) as total_spent
    FROM users u
    INNER JOIN orders o ON u.id = o.user_id
    INNER JOIN products p ON o.product_id = p.id
    GROUP BY u.id, u.first_name, u.last_name
    HAVING COUNT(o.id) > 1
    ORDER BY total_spent DESC
    LIMIT 10;
    
    -- 2. Most popular products
    SELECT p.title, p.category, p.price,
           COUNT(o.id) as times_ordered,
           SUM(p.price) as total_revenue
    FROM products p
    INNER JOIN orders o ON p.id = o.product_id
    GROUP BY p.id, p.title, p.category, p.price
    ORDER BY times_ordered DESC
    LIMIT 10;
    
    -- 3. Monthly order analysis
    SELECT DATE_TRUNC('month', o.created_at) as month,
           COUNT(o.id) as order_count,
           COUNT(DISTINCT o.user_id) as unique_customers,
           SUM(p.price) as total_revenue
    FROM orders o
    INNER JOIN products p ON o.product_id = p.id
    GROUP BY DATE_TRUNC('month', o.created_at)
    ORDER BY month;
    

    Level 6: Student Enrollment Analysis (Complex Joins)

    -- 1. Students with their course and school info
    SELECT u.first_name, u.last_name,
           c.title as course_name,
           s.title as school_name,
           st.enrolment_date
    FROM students st
    INNER JOIN users u ON st.user_id = u.id
    INNER JOIN courses c ON st.course_id = c.id
    INNER JOIN schools s ON st.school_id = s.id
    ORDER BY st.enrolment_date DESC
    LIMIT 20;
    
    -- 2. Course popularity by school
    SELECT s.title as school_name,
           c.title as course_name,
           COUNT(st.id) as student_count
    FROM students st
    INNER JOIN courses c ON st.course_id = c.id
    INNER JOIN schools s ON st.school_id = s.id
    GROUP BY s.id, s.title, c.id, c.title
    ORDER BY student_count DESC;
    
    -- 3. Schools with enrollment stats
    SELECT s.title as school_name,
           COUNT(st.id) as total_students,
           COUNT(DISTINCT st.course_id) as courses_offered,
           MIN(st.enrolment_date) as first_enrollment,
           MAX(st.enrolment_date) as latest_enrollment
    FROM schools s
    LEFT JOIN students st ON s.id = st.school_id
    GROUP BY s.id, s.title
    ORDER BY total_students DESC;
    

    Level 7: Advanced Concepts

    -- 1. Subqueries: Users who spent more than average
    WITH user_spending AS (
      SELECT u.id, u.first_name, u.last_name,
             SUM(p.price) as total_spent
      FROM users u
      INNER JOIN orders o ON u.id = o.user_id
      INNER JOIN products p ON o.product_id = p.id
      GROUP BY u.id, u.first_name, u.last_name
    )
    SELECT first_name, last_name, total_spent
    FROM user_spending
    WHERE total_spent > (SELECT AVG(total_spent) FROM user_spending)
    ORDER BY total_spent DESC;
    
    -- 2. Window functions: Ranking customers
    SELECT u.first_name, u.last_name,
           COUNT(o.id) as order_count,
           SUM(p.price) as total_spent,
           RANK() OVER (ORDER BY SUM(p.price) DESC) as spending_rank
    FROM users u
    INNER JOIN orders o ON u.id = o.user_id
    INNER JOIN products p ON o.product_id = p.id
    GROUP BY u.id, u.first_name, u.last_name
    ORDER BY spending_rank
    LIMIT 20;
    
    -- 3. Case statements for categorization
    SELECT u.first_name, u.last_name,
           COUNT(o.id) as order_count,
           CASE 
             WHEN COUNT(o.id) >= 5 THEN 'VIP Customer'
             WHEN COUNT(o.id) >= 2 THEN 'Regular Customer'
             ELSE 'New Customer'
           END as customer_type
    FROM users u
    LEFT JOIN orders o ON u.id = o.user_id
    GROUP BY u.id, u.first_name, u.last_name
    ORDER BY order_count DESC;
    

    Level 8: Self-Joins & Advanced Analysis

    -- 1. Find users enrolled in the same course (pseudo self-join)
    SELECT DISTINCT 
           u1.first_name || ' ' || u1.last_name as student1,
           u2.first_name || ' ' || u2.last_name as student2,
           c.title as course_name
    FROM students s1
    INNER JOIN students s2 ON s1.course_id = s2.course_id AND s1.user_id < s2.user_id
    INNER JOIN users u1 ON s1.user_id = u1.id
    INNER JOIN users u2 ON s2.user_id = u2.id
    INNER JOIN courses c ON s1.course_id = c.id
    ORDER BY c.title, student1
    LIMIT 20;
    
    -- 2. Complex business question: Multi-role users
    SELECT u.first_name, u.last_name, u.email,
           COUNT(DISTINCT o.id) as orders_placed,
           COUNT(DISTINCT st.id) as courses_enrolled,
           CASE 
             WHEN COUNT(DISTINCT o.id) > 0 AND COUNT(DISTINCT st.id) > 0 THEN 'Customer & Student'
             WHEN COUNT(DISTINCT o.id) > 0 THEN 'Customer Only'
             WHEN COUNT(DISTINCT st.id) > 0 THEN 'Student Only'
             ELSE 'No Activity'
           END as user_type
    FROM users u
    LEFT JOIN orders o ON u.id = o.user_id
    LEFT JOIN students st ON u.id = st.user_id
    GROUP BY u.id, u.first_name, u.last_name, u.email
    ORDER BY orders_placed DESC, courses_enrolled DESC;
    

    🎯 Our Learning Strategy:

    1. Start with Level 1-2 in TablePlus to get comfortable
    2. Progress through each level – try to understand each query before moving on
    3. Modify the queries – change filters, add fields, etc.
    4. Create your own variations based on business questions

    to be continued … 🚀

    Rails 8 App: Create an Academic software app using SQL without using ActiveRecord- Part 2 | students | courses | schools

    Design: Our Students Table -> course -> school

    We need a UNIQUE constraint on user_id because:

    • One student per user (user_id should be unique)
    • Multiple students per course (course_id can be repeated)
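
To make the rule concrete, here is a hedged sketch of the DDL that enforces it: one student row per user via a UNIQUE constraint, while course_id and school_id stay repeatable. Column types here are assumptions, not the exact migration contents.

```ruby
# Sketch only: column types are assumed; check the real migration files.
students_ddl = <<~SQL
  CREATE TABLE students (
    id BIGSERIAL PRIMARY KEY,
    user_id BIGINT NOT NULL,
    course_id BIGINT NOT NULL,
    school_id BIGINT NOT NULL,
    enrolment_date DATE NOT NULL,
    CONSTRAINT uk_students_user_id UNIQUE (user_id),
    CONSTRAINT fk_students_user FOREIGN KEY (user_id) REFERENCES users(id),
    CONSTRAINT fk_students_course FOREIGN KEY (course_id) REFERENCES courses(id),
    CONSTRAINT fk_students_school FOREIGN KEY (school_id) REFERENCES schools(id)
  );
SQL

puts students_ddl
```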

    Check Migration Files:

    Key Changes:

    1. ✅ Added UNIQUE constraint: CONSTRAINT uk_students_user_id UNIQUE (user_id)
    2. 🔧 Fixed typos:
    • TIMSTAMP → TIMESTAMP
    • stidents → students

    📈 Optimized indexes: No separate index on user_id is needed, since the UNIQUE constraint creates one automatically

    Business Logic Validation:

    • user_id: One student per user ✅
    • course_id: Multiple students per course ✅
    • school_id: Multiple students per school ✅

    This ensures referential integrity and business rules are enforced at the database level!


    📁 Schema Storage Options:

    Rails allows you to store the schema in SQL format instead of the default Ruby format. Let me explain the options and why you’d choose each:

    1. Ruby Format (Default)

    # db/schema.rb
    ActiveRecord::Schema[8.0].define(version: 2025_07_09_074552) do
      enable_extension "pg_catalog.plpgsql"
    
      create_table "users", force: :cascade do |t|
        t.string "first_name", limit: 100, null: false
        t.string "email", limit: 150, null: false
        t.datetime "created_at", null: false
        t.index ["email"], name: "idx_users_email"
      end
    end
    

    2. SQL Format

    -- db/structure.sql
    CREATE EXTENSION IF NOT EXISTS pg_catalog.plpgsql;
    
    CREATE TYPE gender_enum AS ENUM ('male', 'female', 'not-specified');
    
    CREATE TABLE users (
        id bigserial PRIMARY KEY,
        first_name varchar(100) NOT NULL,
        email varchar(150) NOT NULL,
        created_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP
    );
    
    CREATE UNIQUE INDEX idx_users_email ON users(email);
    

    Check structure.sql File: Github: db/structure.sql

    🔧 How to Switch to SQL Format:

    Add this to config/application.rb:

    module AcademicSqlSoftware
      class Application < Rails::Application
        # ... existing config ...
    
        # Use SQL format for schema
        config.active_record.schema_format = :sql
      end
    end
    

    Then regenerate the schema:

    # Generate SQL schema file
    rails db:schema:dump
    

    This will create db/structure.sql instead of db/schema.rb.

    Comparison Table:

    | Feature | Ruby Format (schema.rb) | SQL Format (structure.sql) |
    | --- | --- | --- |
    | Default | ✅ Yes | ❌ No |
    | Database agnostic | ✅ Yes | ❌ No (DB-specific) |
    | Custom SQL features | ❌ Limited | ✅ Full support |
    | Views | ❌ No | ✅ Yes |
    | Stored procedures | ❌ No | ✅ Yes |
    | Triggers | ❌ No | ✅ Yes |
    | Custom types/ENUMs | ❌ Limited | ✅ Full support |
    | Partial indexes | ❌ Limited | ✅ Yes |
    | Check constraints | ❌ Limited | ✅ Yes |
    | Version control | ✅ Easy to diff | ⚠️ Harder to diff |
    | Readability | ✅ Ruby-like | ⚠️ Raw SQL |

    When to Use Each:

    Choose SQL Format When:

    • ✅ Using PostgreSQL-specific features (ENUMs, arrays, JSONB operators)
    • ✅ Need database views, triggers, or stored procedures
    • ✅ Using complex constraints (CHECK, partial indexes)
    • ✅ Working with custom data types
    • ✅ Team is SQL-proficient

    Choose Ruby Format When:

    • ✅ Want database portability (switching between PostgreSQL/MySQL/SQLite)
    • ✅ Team prefers Ruby syntax
    • ✅ Using standard Rails features only
    • ✅ Need easier version control diffs

    For Our Project:

    I recommend SQL format because you’re using:

    • ✅ PostgreSQL ENUMs (gender_enum, category_enum) – Rails 8 supports this
    • ✅ Custom constraints (CHECK (price > 0)) – Rails 8 supports this
    • ✅ Complex foreign key relationships
    • ✅ Raw SQL migrations

    Our config/application.rb should include:

    config.active_record.schema_format = :sql
    

    Then run:

    rails db:schema:dump
    

    This will give us full control over PostgreSQL features while maintaining schema consistency! 🎯

    🌱 Seed Data

    1. Let’s refactor the seed file into modular components. I’ll create separate seeder classes for each model type and organize them properly.

    2. Next, let me create individual seeder classes for each model, add seeders for the dependent models (orders and students), refactor the main seeds.rb file to use the new modular structure, and create a documentation file for it.

    3. I’ve successfully refactored our seed file into a modular, maintainable architecture. Here’s what we’ve created:

    🎯 Modular Seed System Overview

    📁 File Structure:

    db/
    ├── seeds.rb                    # Main orchestrator (now 20 lines vs 246!)
    └── seeds/
        ├── README.md              # Complete documentation
        ├── base_seeder.rb         # Shared functionality & config
        ├── users_seeder.rb        # User creation logic
        ├── products_seeder.rb     # Product creation logic  
        ├── courses_seeder.rb      # Course creation logic
        ├── schools_seeder.rb      # School creation logic
        ├── orders_seeder.rb       # Order creation (with dependencies)
        └── students_seeder.rb     # Student creation (with constraints)
    

    Key Benefits:

    ✅ Separation of Concerns:
    • Each model has its own dedicated seeder class
    • Clear responsibility boundaries
    • Easy to modify individual model generation
    ✅ Maintainability:
    • Single file changes vs monolithic seed file
    • Inheritance from BaseSeeder for shared functionality
    • Comprehensive documentation
    ✅ Flexibility:
    • Easy to add new models
    • Environment-specific configuration
    • Configurable record counts
    ✅ Robustness:
    • Proper dependency handling
    • Constraint compliance (stock, unique keys)
    • Error handling and validation

    📋 Usage Examples:

    # Basic seeding
    rails db:seed
    
    # Clear data first
    CLEAR_DATA=true rails db:seed
    
    # Scale up for testing
    # Edit USERS_COUNT in base_seeder.rb then:
    rails db:seed
    

    Check my Github Seed File: Seed File – Academic SQL App

    🛠️ Commit the Refactored System:

    # Stage all the new modular files
    git add db/seeds.rb \
            db/seeds/
    
    # Commit with comprehensive message
    git commit -m "Refactor seed system into modular architecture" -m "
    
    Modular Structure:
    - Split monolithic seeds.rb into focused seeder classes
    - Create BaseSeeder with shared functionality and configuration
    - Separate concerns: each model has dedicated seeder
    - Add comprehensive documentation in seeds/README.md
    
    Benefits:
    - Maintainability: Easy to modify individual model generation
    - Flexibility: Simple to add new models or adjust data
    - Robustness: Proper dependency handling and constraint compliance
    - Testability: Individual seeders can be tested in isolation
    
    Features:
    - Environment-specific behavior (auto-clear in development)
    - Smart constraint handling (stock validation, unique keys)
    - Progress indicators and comprehensive reporting
    - Configurable record counts via constants
    
    Files:
    - db/seeds.rb: Main orchestrator (20 lines vs 246)
    - db/seeds/base_seeder.rb: Shared functionality & config
    - db/seeds/*_seeder.rb: Individual model seeders
    - db/seeds/README.md: Complete documentation
    
    The system is now ready for production use and easy to extend."
    

    🎉 What We Can Do Now:

    Scale Easily:

    # In base_seeder.rb - change these and run rails db:seed
    USERS_COUNT = 10_000
    PRODUCTS_COUNT = 1_000  
    ORDERS_COUNT = 50_000
    

    Add New Models: Create new_model_seeder.rb, inherit from BaseSeeder, add to orchestration.

    Customize Data: Edit individual seeder constants and attribute methods.

    Test Components:

    users = UsersSeeder.create_users
    products = ProductsSeeder.create_products
    

    Our seed system is now enterprise-ready with clean architecture, comprehensive documentation, and room to scale! 🚀

    Possible Refactoring

    Now let’s move the helper methods to a dedicated helpers folder for better organization. I’ll create a helpers directory and move the shared utility methods there.

    Here’s how to refactor your seed helper methods into a new helpers folder under db/seeds:

    1. Create a directory:
      db/seeds/helpers/
    2. Move all shared/random data methods (like random_gender, random_category, etc.) into a module, e.g. db/seeds/helpers/data_generators.rb:
    # db/seeds/helpers/data_generators.rb
    module DataGenerators
      def random_gender
        ['male', 'female', 'not-specified'].sample
      end
    
      def random_category
        ['men', 'women', 'kids', 'infants'].sample
      end
    
      def random_date_of_birth
        Date.today - rand(18..80).years - rand(365).days
      end
    
      def random_phone
        "+1#{rand(100..999)}-#{rand(100..999)}-#{rand(1000..9999)}"
      end
    
      def random_price
        [9.99, 19.99, 29.99, 49.99, 99.99, 199.99, 299.99, 499.99].sample
      end
    
      def random_stock
        [0, 5, 10, 25, 50, 100, 500].sample
      end
    
      def random_past_date(days_ago_max = 365)
        Date.today - rand(1..days_ago_max).days
      end
    end
    
    3. In your seeders and base seeder, require and include the helper:
    # At the top of each seeder or in base_seeder.rb
    require_relative 'helpers/data_generators'
    
    class UsersSeeder < BaseSeeder
      extend DataGenerators
      # ... now you can call random_gender, random_category, etc.
    end
    
    4. Remove the helper methods from BaseSeeder and use the module instead.

    Summary:

    • Move all shared/random data methods to db/seeds/helpers/data_generators.rb as a module.
    • require_relative and extend or include the module in your seeders.
    • This keeps your seed logic clean and DRY!
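
The extend-or-include choice above matters: extend copies the module’s methods onto the class itself (class-level methods), while include adds them to instances. A minimal standalone illustration, using a trimmed-down copy of the module:

```ruby
module DataGenerators
  def random_gender
    ['male', 'female', 'not-specified'].sample
  end
end

class ClassLevelSeeder
  extend DataGenerators   # random_gender becomes a class method
end

class InstanceLevelSeeder
  include DataGenerators  # random_gender becomes an instance method
end

puts ClassLevelSeeder.random_gender          # called directly on the class
puts InstanceLevelSeeder.new.random_gender   # called on an instance
```

Since our seeders expose class methods like `UsersSeeder.create_users`, extend is the natural fit here.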

    Our BaseSeeder still contains all the random data and progress helper methods, even though these now also live in db/seeds/helpers/data_generators.rb and db/seeds/helpers/ui_helpers.rb.

    Here’s what we should do:

    • Remove all the methods at the bottom of BaseSeeder that are now in the helpers.
    • At the top of base_seeder.rb, require the helper modules.
    • If you want to use the helpers as class methods, use extend for each module.

    Here’s the refactor plan:

    1. Remove:
    • random_gender
    • random_category
    • random_date_of_birth
    • random_phone
    • random_price
    • random_stock
    • random_past_date
    • progress_indicator
    2. Add at the top:
       require_relative 'helpers/data_generators'
       require_relative 'helpers/ui_helpers'
       extend DataGenerators
       extend UiHelpers
    

    Changes Made:

    1. In UiHelpers:
      • Removed the hardcoded print_summary and print_verification methods.
      • Ensured print_summary_box and print_verification_stats are ready to accept data.
    2. In BaseSeeder:
      • Now dynamically creates a summary_data hash with the final record counts.
      • Creates a verification_stats hash with key metrics.
      • Calls UiHelpers.print_summary_box and UiHelpers.print_verification_stats, passing in the data.

    Benefits of this Refactor:

    • More Flexible: You can now easily add or remove items from the summary and verification sections just by changing the hashes in BaseSeeder.
    • Decoupled: The UiHelpers module is now purely for presentation and has no knowledge of your specific models.
    • Cleaner Code: The logic is now clearly separated—BaseSeeder gathers the data, and UiHelpers displays it.

    Now the BaseSeeder is clean and only contains orchestration logic and configuration. All random data and UI/progress helpers are now properly delegated to the new helper modules.

    We can now use these helpers in any seeder by calling random_gender, progress_indicator, etc., thanks to the extend statements. This makes our seed system much more modular, maintainable, and DRY!

    That’s a great suggestion for further refactoring. It makes sense to keep all data-related logic, including aggregation for summaries, within the DataGenerators helper.

    Let’s proceed. I’ll move the hash construction into new methods inside db/seeds/helpers/data_generators.rb and then update BaseSeeder to call them.
    Now that the data generation has been moved, I’ll update the BaseSeeder to call the new methods.

    Issues Fixed:

    1. UiHelpers method calls: All seeder files were calling progress_indicator directly, but since the methods in UiHelpers are defined as class methods, they needed to be prefixed with UiHelpers.. Fixed in:
    • UsersSeeder
    • ProductsSeeder
    • CoursesSeeder
    • SchoolsSeeder
    • OrdersSeeder
    • StudentsSeeder
    2. DataGenerators references: The BaseSeeder was trying to call DataGenerators.summary_data and DataGenerators.verification_stats, but that file was deleted. I updated it to use the local summary_data and verification_stats methods that were already defined in the BaseSeeder class.

    Seeding Results:

    • 10,000 users created
    • 1,000 products created
    • 50 courses created
    • 25 schools created
    • 5,000 orders created
    • 5,000 students created

    The verification stats show proper data relationships:

    • 4,004 users have orders
    • 647 products have been ordered
    • 756 products are active
    • 200 students per school (average)
    • 100 students per course (average)

    Final Architecture:

    1. DataGenerators: Is now responsible for all data-related tasks, including generating random primitive data (random_phone) and creating aggregated summary data (summary_data, verification_stats).
    2. UiHelpers: Is responsible for all presentation logic, taking data as input and printing it to the console in a formatted way.
    3. Individual Seeders (UsersSeeder, etc.): Responsible for the business logic of creating a specific type of record, using helpers for data and UI.
    4. BaseSeeder: The main orchestrator. It knows the correct order to call the individual seeders and delegates all data and UI tasks to the appropriate helpers.
    5. seeds.rb: The single entry point that kicks off the entire process.
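
A stripped-down, runnable sketch of this orchestration (the real seeders talk to the database; these hypothetical stubs only show the call order and delegation):

```ruby
# Minimal stand-ins for the real classes, to show the flow only.
module UiHelpers
  def self.progress_indicator(label)
    puts "Seeding #{label}..."
  end
end

class UsersSeeder
  def self.create_records
    UiHelpers.progress_indicator("users")
    :users_created # stand-in for the real INSERT statements
  end
end

class OrdersSeeder
  def self.create_records
    UiHelpers.progress_indicator("orders") # depends on users existing first
    :orders_created
  end
end

class BaseSeeder
  # Orchestrator: runs independent models before dependent ones.
  def self.run
    [UsersSeeder, OrdersSeeder].map(&:create_records)
  end
end

results = BaseSeeder.run
puts results.inspect
```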

    to be continued … 🚀

    Rails 8 App: Create an Academic software app using SQL without using ActiveRecord – Part 1 | users | products | orders

    Let’s create a Rails 8 app that uses raw SQL queries instead of ActiveRecord’s ORM. We’ll keep the full Rails environment with ActiveRecord for infrastructure, but bypass AR’s ORM features and write pure SQL. Let me guide you through this step by step:

    Step 1: Create the Rails App with ActiveRecord and PostgreSQL (skipping unnecessary components)

    rails new academic-sql-software --database=postgresql --skip-action-cable --skip-jbuilder --skip-solid --skip-kamal
    

    What we’re skipping and why:

    • –skip-action-cable: No WebSocket functionality needed
    • –skip-jbuilder: No JSON API views needed for our SQL practice app
    • –skip-solid: Skips Solid Cache and Solid Queue (we don’t need caching or background jobs)
    • –skip-kamal: No deployment configuration needed

    What we’re keeping:

    • ActiveRecord: For database connection management and ActiveRecord::Base.connection.execute()
    • ActionController: For creating web interfaces to display our SQL query results
    • ActionView: For creating simple HTML pages to showcase our SQL learning exercises
    • PostgreSQL: Our database for practicing advanced SQL features

    Why this setup is perfect for App with raw SQL:

    • Minimal Rails app focused on database interactions
    • Full Rails environment for development conveniences
    • ActiveRecord infrastructure without ORM usage
    • Clean setup without unnecessary overhead

    => Open config/application.rb and comment the following for now:

    # require "active_job/railtie"
    ...
    # require "active_storage/engine"
    ...
    # require "action_mailer/railtie"
    # require "action_mailbox/engine"
    ...
    # require "action_cable/engine"
    

    => Open config/environments/development.rb, config/environments/production.rb, and config/environments/test.rb, and comment out the action_mailer configuration lines

    🤔 Why I am using ActiveRecord (even though I don’t want the ORM):

    • Database Connection Management: ActiveRecord provides robust connection pooling, reconnection handling, and connection management
    • Rails Integration: Seamless integration with Rails console, database tasks (rails db:create, rails db:migrate), and development tools
    • Raw SQL Execution: We get ActiveRecord::Base.connection.execute() which is perfect for our raw SQL writing.
    • Migration System: Easy table creation and schema management with migrations (even though we’ll query with raw SQL)
    • Database Configuration: Rails handles database.yml configuration, environment switching, and connection setup
    • Development Tools: Access to Rails console for testing queries, database tasks, and debugging

    Our Learning Strategy: We’ll use ActiveRecord’s infrastructure but completely bypass its ORM methods. Instead of Student.where(), we’ll use ActiveRecord::Base.connection.execute("SELECT * FROM students WHERE...")

    Step 2: Navigate to the project directory

    cd academic-sql-software
    

    Step 3: Verify PostgreSQL setup

    # Check if PostgreSQL is running
    brew services list | grep postgresql
    # or
    pg_ctl status
    

    Database Foundation: PostgreSQL gives us advanced SQL features:

    • Complex JOINs (INNER, LEFT, RIGHT, FULL OUTER)
    • Window functions (ROW_NUMBER, RANK, LAG, LEAD)
    • Common Table Expressions (CTEs)
    • Advanced aggregations and subqueries

    Step 4: Install dependencies

    bundle install
    

    What this gives us:

    • pg gem: Pure PostgreSQL adapter (already included with --database=postgresql)
    • ActiveRecord: For connection management only
    • Rails infrastructure: Console, generators, rake tasks

    Step 5: Create the PostgreSQL databases

    $ rails db:create
    Created database 'academic_sql_software_development'
    Created database 'academic_sql_software_test'
    

    Our Development Environment:

    • Creates academic_sql_software_development and academic_sql_software_test
    • Sets up connection pooling and management
    • Enables us to use Rails console for testing queries: rails console then ActiveRecord::Base.connection.execute("SELECT 1")

    Our Raw SQL Approach:

    # We'll use this pattern throughout our app:
    connection = ActiveRecord::Base.connection
    result = connection.execute("SELECT s.name, t.subject FROM students s INNER JOIN teachers t ON s.teacher_id = t.id")
    
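One caution with this pattern: never interpolate user input straight into the SQL string. In real code, prefer bound parameters (for example, ActiveRecord’s exec_query with binds, or sanitize_sql_array). A dependency-free sketch of why quoting matters, using a naive hand-rolled quoter for illustration only:

```ruby
# Naive single-quote doubling, shown only to illustrate the injection risk;
# use the database driver's parameter binding in real code.
def naive_quote(value)
  "'" + value.gsub("'", "''") + "'"
end

malicious = "x'; DROP TABLE students; --"
sql = "SELECT * FROM students WHERE name = #{naive_quote(malicious)}"
puts sql
# The embedded quote is doubled, so the payload stays inside the string literal
# instead of terminating it and starting a new statement.
```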

    Why not pure pg gem:

    • Would require manual connection management
    • No Rails integration (no console, no rake tasks)
    • More boilerplate code for connection handling
    • Loss of Rails development conveniences

    Why not pure ActiveRecord ORM:

    • We want to do SQL query writing, not ActiveRecord methods.
    • Need to understand database performance implications.
    • Want to practice complex queries that might be harder to express in ActiveRecord.

    Step 6: Create Users table

    mkdir -p db/migrate
    
    class CreateUsers < ActiveRecord::Migration[8.0]
      def up
        # create users table
        execute <<~SQL
          CREATE TABLE users (
        id BIGSERIAL PRIMARY KEY,
            username VARCHAR(200),
            email VARCHAR(150),
            phone_number VARCHAR(20)
          );
        SQL
      end
    
      def down
        execute <<~SQL
          DROP TABLE users;
        SQL
      end
    end
    
    
    class CreateOrders < ActiveRecord::Migration[8.0]
      def up
        # create table orders
        execute <<~SQL
        SQL
      end
    
      def down
        execute <<~SQL
        SQL
      end
    end
    
    

    execute <<~SQL is a Rails migration method that allows you to run raw SQL statements. Let me break it down:

    Components:

    1. execute – A Rails migration method that executes raw SQL directly against the database
    2. <<~SQL – Ruby’s “squiggly heredoc” syntax for multi-line strings that automatically strips leading whitespace (read: https://www.rubyguides.com/2018/11/ruby-heredoc/)

    Usage:

    class SomeMigration < ActiveRecord::Migration[8.0]
      # CREATE INDEX CONCURRENTLY cannot run inside a transaction,
      # so opt this migration out of the default transaction wrapper:
      disable_ddl_transaction!

      def change
        execute <<~SQL
          CREATE INDEX CONCURRENTLY idx_users_email_lower
          ON users (LOWER(email));
        SQL
      end
    end
    

    Why use it?

    • Database-specific features: When you need PostgreSQL-specific syntax, MySQL features, etc.
    • Complex SQL: For operations that Rails DSL doesn’t support easily
    • Performance: Direct SQL can sometimes be more efficient
    • Raw SQL requirements: When you specifically need to write SQL (as mentioned in our requirement)

    Example for creating a users table with raw SQL:

    class CreateUsers < ActiveRecord::Migration[8.0]
      def up
        execute <<~SQL
          CREATE TABLE users (
            id BIGSERIAL PRIMARY KEY,
            name VARCHAR(255) NOT NULL,
            email VARCHAR(255) NOT NULL,
            created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
            updated_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
          );
    
          CREATE UNIQUE INDEX idx_users_email ON users(email);
        SQL
      end
    
      def down
        execute <<~SQL
          DROP TABLE IF EXISTS users;
        SQL
      end
    end
    

    The <<~SQL heredoc ends when it encounters the closing SQL keyword, and the ~ strips indentation so your SQL can be properly formatted within the Ruby code.
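    For a quick feel of the indentation stripping, here is a minimal plain-Ruby example (no database needed):

```ruby
# <<~ ("squiggly heredoc") removes the smallest common leading
# indentation, so the SQL can be indented to match the Ruby code.
sql = <<~SQL
  SELECT *
  FROM users;
SQL

puts sql
# SELECT *
# FROM users;
```

    The resulting string has no leading spaces, even though the heredoc body is indented inside the Ruby source.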

    Q) 🤔 Is it a best practice to repeat database constraints in Rails model validations? DB level vs app level? Or both?

    Best Practice: Use BOTH 🎯

    Why both levels are recommended:

    Database Level (Last Line of Defense)

    -- Database constraints
    price DECIMAL(10,2) NOT NULL CHECK (price > 0),
    category product_category NOT NULL,
    title VARCHAR(250) NOT NULL
    

    Pros:

    • Data integrity guarantee – No bad data can enter, ever
    • Performance – Database enforces constraints efficiently
    • Bulletproof – Protects against bulk operations, direct SQL, data imports
    • Concurrency safe – Prevents race conditions
    • Multi-application protection – If other apps access same DB

    Application Level (User Experience)

    validates :price, presence: true, numericality: { greater_than: 0 }
    validates :category, inclusion: { in: %w[men women kids infants] }
    

    Pros:

    • User-friendly errors – “Price must be greater than 0” vs “CHECK constraint violated”
    • Better UX – Validation before form submission
    • Easier testing – Can unit test validations
    • Conditional logic – Complex business rules
    • Framework features – Callbacks, custom validators

    Real-world scenarios where each matters:

    Database saves you when:

    # Bulk operations bypass Rails validations
    Product.update_all(price: -10)  # DB constraint prevents this
    
    # Direct SQL injection attempts
    # DB constraints are your last line of defense
    

    App validations save you when:

    # User gets friendly error instead of:
    # PG::CheckViolation: ERROR: new row violates check constraint
    @product = Product.new(price: -5)
    @product.valid? # => false
    @product.errors.full_messages # => ["Price must be greater than 0"]
    

    Practical Implementation:

    class Product < ApplicationRecord
      # App-level validations for UX
      validates :title, presence: true, length: { maximum: 250 }
      validates :price, presence: true, numericality: { greater_than: 0 }
      validates :category, inclusion: { in: %w[men women kids infants] }
    
      # Don't duplicate precision validation if DB handles it
      # The DECIMAL(10,2) constraint is sufficient at DB level
    end
    
    -- DB-level constraints for data integrity
    CREATE TABLE products (
      id BIGSERIAL PRIMARY KEY,
      title VARCHAR(250) NOT NULL,
      price DECIMAL(10,2) NOT NULL CHECK (price > 0),
      category product_category NOT NULL  -- product_category is a custom ENUM type
      -- DB handles precision automatically with DECIMAL(10,2)
    );
    

    What NOT to duplicate:

    • Precision constraints – DECIMAL(10,2) handles this perfectly
    • Data type validation – DB enforces INTEGER, BOOLEAN, etc.
    • Complex regex patterns – Better handled in app layer

    Conclusion:

    Use both, but strategically:

    • Database: Core data integrity, type constraints, foreign keys
    • Application: User experience, business logic, conditional rules
    • Don’t over-duplicate simple type/precision constraints that DB handles well

    This approach gives you belt and suspenders protection with optimal user experience.

    to be continued … 🚀

    Setup 🛠 Rails 8 App – Part 16: Implementing Authentication, Users, Orders, and Order Items

    Let’s now move on to creating authentication for our application.

    Modern e‑commerce applications need robust user authentication, clear role‑based access, and an intuitive ordering system. In this post, we’ll walk through how to:

    1. Add Rails’ built‑in authentication via has_secure_password.
    2. Create a users table with roles for customers and admins.
    3. Build an orders table to capture overall transactions.
    4. Create order_items to track each product variant in an order.

    Throughout, we’ll leverage PostgreSQL’s JSONB for flexible metadata, and we’ll use Rails 8 conventions for migrations and models.


    Automatic Authentication For Rails 8 Apps

    bin/rails generate authentication
    

    This creates all the necessary files for users and sessions.

    Create Authentication Manually

    1. Create users table and user model

    ✗ rails g migration create_users
    
    # users migration
    class CreateUsers < ActiveRecord::Migration[8.0]
      def change
        create_table :users do |t|
          t.string   :email,           null: false, index: { unique: true }
          t.string   :password_digest, null: false
          t.string   :role,            null: false, default: "customer"
          t.string   :first_name
          t.string   :last_name
          t.jsonb    :metadata,        null: false, default: {}
          t.timestamps
        end
    
        # You can later set up an enum in the User model:
        # enum role: { customer: "customer", admin: "admin" }
      end
    end
    
    ✗ rails g model user
    
    # User model
    class User < ApplicationRecord
      has_secure_password
      enum :role, {
        customer:  "customer",  
        admin:     "admin"      
      }
      has_many :orders
    end
    

    2. Authenticating with has_secure_password

    Rails ships with bcrypt support out of the box. To enable it:

    1. Uncomment the following line in your Gemfile.
      # gem "bcrypt", "~> 3.1.7"
    2. Run bundle install.
    3. In your migration, create a password_digest column:
    create_table :users do |t|
      t.string :email,           null: false, index: { unique: true }
      t.string :password_digest, null: false
      # ... other fields ...
    end
    
    
    4. In app/models/user.rb, enable:
    class User < ApplicationRecord
      has_secure_password
      # ...
    end
    
    

    This gives you user.authenticate(plain_text_password) and built‑in validation that a password is present on create.

    3. Setting Up Users with Roles

    We often need both customers and admins. Let’s create a role column with a default of "customer":

    create_table :users do |t|
      t.string :role, null: false, default: "customer"
      # ...
    end
    
    

    In the User model you can then define an enum:

    class User < ApplicationRecord
      # ...
      enum :role, {
        customer:  "customer",  
        admin:     "admin"      
      }
    end
    
    

    This lets you call current_user.admin? and scope queries with generated scopes like User.customer.

    user.customer!   # sets role to "customer"
    user.admin?      # => false
    

    Rails built-in enum gives you a quick way to map a column to a fixed set of values, and it:

    1. Defines predicate and bang methods
    2. Adds query scopes
    3. Provides convenient helpers for serialization, validations, etc.
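    To build intuition for what those generated methods look like, here is a plain-Ruby sketch (no Rails involved; FakeUser and ROLES are illustrative names, not Rails internals):

```ruby
# A simplified sketch of the predicate and bang methods that
# `enum :role, { customer: "customer", admin: "admin" }` generates.
class FakeUser
  attr_accessor :role

  ROLES = { customer: "customer", admin: "admin" }.freeze

  ROLES.each do |name, value|
    define_method("#{name}?") { role == value }     # predicate: user.admin?
    define_method("#{name}!") { self.role = value } # bang: user.customer!
  end
end

user = FakeUser.new
user.customer!
user.customer? # => true
user.admin?    # => false
```

    Rails additionally defines query scopes (e.g. User.customer) and validates the column against the allowed values.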

    4. Building the Orders Table

    Every purchase is represented by an Order. Key fields:

    • user_id (foreign key)
    • total_price (decimal with scale 2)
    • status (string; e.g. pending, paid, shipped)
    • shipping_address (JSONB): allows storing a full address object with flexible fields (street, city, postcode, country, and even geolocation) without altering your schema. You can index JSONB columns (GIN) to efficiently query nested fields, and you avoid creating a separate addresses table unless you need relationships or reuse.
    • placed_at (datetime, optional): records the exact moment the order was completed, independent of when the record was created. Making this optional lets you distinguish between draft/in-progress orders (no placed_at yet) and finalized purchases.
    • Timestamps
    ✗ rails g migration create_orders
    
    # orders migration
    class CreateOrders < ActiveRecord::Migration[8.0]
      def change
        create_table :orders do |t|
          t.references :user, null: false, foreign_key: true, index: true
          t.decimal    :total_price, precision: 12, scale: 2, null: false, default: 0.0
          t.string     :status,      null: false, default: "pending", index: true
          t.jsonb      :shipping_address, null: false, default: {}
          t.datetime   :placed_at
          t.timestamps
        end
    
        # Example statuses: pending, paid, shipped, cancelled
      end
    end
    

    In app/models/order.rb:

    ✗ rails g model order
    
    class Order < ApplicationRecord
      belongs_to :user
      has_many   :order_items, dependent: :destroy
      has_many   :product_variants, through: :order_items
    
      STATUSES = %w[pending paid shipped cancelled]
      validates :status, inclusion: { in: STATUSES }
    end
    
    

    5. Capturing Each Item: order_items

    To connect products to orders, we use an order_items join table. Each row stores:

    • order_id and product_variant_id as FKs
    • quantity, unit_price, and any discount_percent
    • Optional JSONB metadata for special instructions
    ✗ rails g migration create_order_items
    
    # order_items migration
    class CreateOrderItems < ActiveRecord::Migration[8.0]
      def change
        create_table :order_items do |t|
          t.references :order,           null: false, foreign_key: true, index: true
          t.references :product_variant, null: false, foreign_key: true, index: true
          t.integer    :quantity,        null: false, default: 1
          t.decimal    :unit_price,      precision: 10, scale: 2, null: false
          t.decimal    :discount_percent, precision: 5, scale: 2, default: 0.0
          t.jsonb      :metadata,        null: false, default: {}
          t.timestamps
        end
    
        # Composite unique index to prevent duplicate variant per order
        add_index :order_items, [:order_id, :product_variant_id], unique: true, name: "idx_order_items_on_order_and_variant"
      end
    end

    Model associations:

    ✗ rails g model order_item
    
    class OrderItem < ApplicationRecord
      belongs_to :order
      belongs_to :product_variant
    
      validates :quantity, numericality: { greater_than: 0 }
    end
    
    

    6. Next Steps: Controllers & Authorization

    • Controllers: Scaffold UsersController, SessionsController (login/logout), OrdersController, and nested OrderItemsController under orders or use a service object to build carts.
    • Authorization: Once role is set, integrate Pundit or CanCanCan to restrict admin actions (creating products, managing variants) and customer actions (viewing own orders).
    • Views/Frontend: Tie it all together with forms for signup/login, a product catalog with “Add to Cart”, a checkout flow, and an admin dashboard for product management.

    7. Scaffolding Controllers & Views (TailwindCSS Rails 4.2.3)

    Generate Controllers & Routes

    ✗ rails generate controller Users new create index show edit update destroy --skip-routes
    create  app/controllers/users_controller.rb
          invoke  tailwindcss
          create    app/views/users
          create    app/views/users/new.html.erb
          create    app/views/users/create.html.erb
          create    app/views/users/index.html.erb
          create    app/views/users/show.html.erb
          create    app/views/users/edit.html.erb
          create    app/views/users/update.html.erb
          create    app/views/users/destroy.html.erb
          invoke  test_unit
          create    test/controllers/users_controller_test.rb
          invoke  helper
          create    app/helpers/users_helper.rb
          invoke    test_unit
    ✗ rails generate controller Sessions new create destroy --skip-routes
    create  app/controllers/sessions_controller.rb
          invoke  tailwindcss
          create    app/views/sessions
          create    app/views/sessions/new.html.erb
          create    app/views/sessions/create.html.erb
          create    app/views/sessions/destroy.html.erb
          invoke  test_unit
          create    test/controllers/sessions_controller_test.rb
          invoke  helper
          create    app/helpers/sessions_helper.rb
          invoke    test_unit
    ✗ rails generate controller Orders index show new create edit update destroy --skip-routes
          create  app/controllers/orders_controller.rb
          invoke  tailwindcss
          create    app/views/orders
          create    app/views/orders/index.html.erb
          create    app/views/orders/show.html.erb
          create    app/views/orders/new.html.erb
          create    app/views/orders/create.html.erb
          create    app/views/orders/edit.html.erb
          create    app/views/orders/update.html.erb
          create    app/views/orders/destroy.html.erb
          invoke  test_unit
          create    test/controllers/orders_controller_test.rb
          invoke  helper
          create    app/helpers/orders_helper.rb
          invoke    test_unit
     ✗ rails generate controller OrderItems create update destroy --skip-routes
          create  app/controllers/order_items_controller.rb
          invoke  tailwindcss
          create    app/views/order_items
          create    app/views/order_items/create.html.erb
          create    app/views/order_items/update.html.erb
          create    app/views/order_items/destroy.html.erb
          invoke  test_unit
          create    test/controllers/order_items_controller_test.rb
          invoke  helper
          create    app/helpers/order_items_helper.rb
          invoke    test_unit
    

    In config/routes.rb, nest order_items under orders and add session routes:

    Rails.application.routes.draw do
      resources :users

      resources :sessions, only: %i[new create destroy]
      get    '/login',  to: 'sessions#new'
      post   '/login',  to: 'sessions#create'
      delete '/logout', to: 'sessions#destroy'
    
      resources :orders do
        resources :order_items, only: %i[create update destroy]
      end
    
      root 'products#index'
    end
    

    By the end, you’ll have a fully functional e‑commerce back end: secure auth, order tracking, and clear user roles.


    How to setup your First User🙍🏻‍♂️ in the system

    The very first user you should set up is:

    An admin user — to create/manage products, variants, and handle backend tasks.

    Here’s the best approach:

    Best Practice: Seed an Admin User

    Instead of manually creating it through the UI (when no one can log in yet), the best and safest approach is to use db/seeds.rb to create an initial admin user.

    Why?

    • You can reliably recreate it on any environment (local, staging, production).
    • You can script strong defaults (like setting a secure admin email/password).

    🔒 Tip: Use ENV Variables

    For production, never hardcode admin passwords directly in seeds.rb. Instead, do:

    admin_password = ENV.fetch("ADMIN_PASSWORD")
    
    

    and pass it as:

    ADMIN_PASSWORD=SomeStrongPassword rails db:seed
    
    

    This keeps credentials out of your Git history.

    🛠 Option 1: Add Seed Data db/seeds.rb

    Add a block in db/seeds.rb that checks for (or creates) an admin user:

    # db/seeds.rb
    
    admin_email    = ENV.fetch("ADMIN_EMAIL")    { abort "Set ADMIN_EMAIL" }
    admin_password = ENV.fetch("ADMIN_PASSWORD") { abort "Set ADMIN_PASSWORD" }
    
    User.find_or_create_by!(email: admin_email) do |user|
      user.password              = admin_password
      user.password_confirmation = admin_password
      user.role                   = "admin"
      user.first_name             = "Site"
      user.last_name              = "Admin"
    end
    
    puts "→ Admin user: #{admin_email}"
    

    Then run:

    rails db:seed
    
    1. Pros:
      • Fully automated and idempotent—you can run db:seed anytime without creating duplicates.
      • Seed logic lives with your code, so onboarding new team members is smoother.
      • You can wire up ENV vars for different credentials in each environment (dev/staging/prod).
    2. Cons:
      • Seeds can get cluttered over time if you add lots of test data.
      • Must remember to re-run seeds after resetting the database.

    🛠 Option 2: Custom Rake task or Thor script

    Create a dedicated task under lib/tasks/create_admin.rake:

    namespace :admin do
      desc "Create or update the first admin user"
      task create: :environment do
        email    = ENV.fetch("ADMIN_EMAIL")    { abort "Set ADMIN_EMAIL" }
        password = ENV.fetch("ADMIN_PASSWORD") { abort "Set ADMIN_PASSWORD" }
    
        user = User.find_or_initialize_by(email: email)
        user.password              = password
        user.password_confirmation = password
        user.role                   = "admin"
        user.save!
    
        puts "✅ Admin user #{email} created/updated"
      end
    end
    

    Run it with:

    ADMIN_EMAIL=foo@bar.com ADMIN_PASSWORD=topsecret rails admin:create
    
    1. Pros:
      • Keeps seed file lean—admin-creation logic lives in a focused task.
      • Enforces presence of ENV vars (you won’t accidentally use a default password in prod).
    2. Cons:
      • Slightly more setup than plain seeds, though it’s still easy to run.

    I chose Option 2 because the task is namespaced and its purpose is obvious. In seeds.rb, lots of seed data accumulates in one file, which makes it harder to find the logic for one particular task.

    🛡 Why is This Better?

    ✅ No need to expose a sign-up page to create the very first admin.
    ✅ You avoid manual DB entry or Rails console commands.
    ✅ You can control/rotate the admin credentials easily.
    ✅ You can add additional seed users later if needed (for demo or testing).

    📝 Summary

    ✅ Seed an initial admin user
    ✅ Add a role check (admin? method)
    ✅ Lock down sensitive parts of the app to admin
    ✅ Use ENV vars in production for passwords


    Enjoy Rails 🚀!

    Rails 8 App: Setup Test DB in PostgreSQL | Write SQL Queries | Operators | Joins

    Here’s a list of commonly used SQL comparison operators with brief explanations and examples:

    📋 Basic Comparison Operators:

    Operator | Meaning                    | Example                         | Result
    -------- | -------------------------- | ------------------------------- | ------
    =        | Equal to                   | WHERE age = 25                  | Matches rows where age is 25
    <>       | Not equal to (standard)    | WHERE status <> 'active'        | Matches rows where status is not 'active'
    !=       | Not equal to (alternative) | WHERE id != 10                  | Same as <>, matches if id is not 10
    >        | Greater than               | WHERE salary > 50000            | Matches rows with salary above 50k
    <        | Less than                  | WHERE created_at < '2024-01-01' | Matches dates before Jan 1, 2024
    >=       | Greater than or equal      | WHERE age >= 18                 | Matches age 18 and above
    <=       | Less than or equal         | WHERE age <= 65                 | Matches age 65 and below

    📋 Other Common Operators:

    Operator    | Meaning                                      | Example
    ----------- | -------------------------------------------- | -------
    BETWEEN     | Within a range                               | WHERE price BETWEEN 100 AND 200
    IN          | Match any value in a list                    | WHERE country IN ('US', 'CA', 'UK')
    NOT IN      | Not in a list                                | WHERE role NOT IN ('admin', 'staff')
    IS NULL     | Value is null                                | WHERE deleted_at IS NULL
    IS NOT NULL | Value is not null                            | WHERE updated_at IS NOT NULL
    LIKE        | Pattern match (case-insensitive in some DBs) | WHERE name LIKE 'J%'
    ILIKE       | Case-insensitive LIKE (PostgreSQL only)      | WHERE email ILIKE '%@gmail.com'

    Now we’ve our products and product_variants schema, let’s re-explore all major SQL JOINs using these two related tables.

    -- Products
    
       Column    |              Type              | Collation | Nullable |               Default
    -------------+--------------------------------+-----------+----------+--------------------------------------
     id          | bigint                         |           | not null | nextval('products_id_seq'::regclass)
     description | text                           |           |          |
     category    | character varying              |           |          |
     created_at  | timestamp(6) without time zone |           | not null |
     updated_at  | timestamp(6) without time zone |           | not null |
     name        | character varying              |           | not null |
     rating      | numeric(2,1)                   |           |          | 0.0
     brand       | character varying              |           |          |
    
    -- Product variants
    
          Column      |              Type              | Collation | Nullable |                   Default
    ------------------+--------------------------------+-----------+----------+----------------------------------------------
     id               | bigint                         |           | not null | nextval('product_variants_id_seq'::regclass)
     product_id       | bigint                         |           | not null |
     sku              | character varying              |           | not null |
     mrp              | numeric(10,2)                  |           | not null |
     price            | numeric(10,2)                  |           | not null |
     discount_percent | numeric(5,2)                   |           |          |
     size             | character varying              |           |          |
     color            | character varying              |           |          |
     stock_quantity   | integer                        |           |          | 0
     specs            | jsonb                          |           | not null | '{}'::jsonb
     created_at       | timestamp(6) without time zone |           | not null |
     updated_at       | timestamp(6) without time zone |           | not null |
    
    

    💎 SQL JOINS with products and product_variants

    These tables are related through:

    product_variants.product_id → products.id
    
    

    So we can use that for all join examples.

    🔸 1. INNER JOIN – Show only products with variants
    SELECT 
      p.name, 
      pv.sku, 
      pv.price 
    FROM products p
    INNER JOIN product_variants pv ON p.id = pv.product_id;
    
    

    ♦️ Only returns products that have at least one variant.

    🔸 2. LEFT JOIN – Show all products, with variants if available
    SELECT 
      p.name, 
      pv.sku, 
      pv.price 
    FROM products p
    LEFT JOIN product_variants pv ON p.id = pv.product_id;
    
    

    ♦️ Returns all products, even those with no variants (NULLs in variant columns).

    🔸 3. RIGHT JOIN – Show all variants, with product info if available

    (Less common, but useful if variants might exist without a product record)

    SELECT 
      pv.sku, 
      pv.price, 
      p.name 
    FROM products p
    RIGHT JOIN product_variants pv ON p.id = pv.product_id;
    
    
    🔸 4. FULL OUTER JOIN – All records from both tables
    SELECT 
      p.name AS product_name, 
      pv.sku AS variant_sku 
    FROM products p
    FULL OUTER JOIN product_variants pv ON p.id = pv.product_id;
    
    

    ♦️ Shows all products and all variants, even when there’s no match.

    🔸 5. SELF JOIN Example (for product_variants comparing similar sizes or prices)

    Let’s compare variants of the same product that are different sizes.

    SELECT 
      pv1.product_id,
      pv1.size AS size_1,
      pv2.size AS size_2,
      pv1.sku AS sku_1,
      pv2.sku AS sku_2
    FROM product_variants pv1
    JOIN product_variants pv2 
      ON pv1.product_id = pv2.product_id 
      AND pv1.size <> pv2.size
    WHERE pv1.product_id = 101;  -- example product
    
    

    ♦️ Useful to analyze size comparisons or price differences within a product.

    🧬 Complex Combined JOIN Example

    Show each product with its variants, and include only discounted ones (price < MRP):

    SELECT 
      p.name AS product_name,
      pv.sku,
      pv.price,
      pv.mrp,
      (pv.mrp - pv.price) AS discount_value
    FROM products p
    INNER JOIN product_variants pv ON p.id = pv.product_id
    WHERE pv.price < pv.mrp
    ORDER BY discount_value DESC;
    
    

    📑 JOIN Summary with These Tables

    JOIN Type       | Use Case
    --------------- | --------
    INNER JOIN      | Only products with variants
    LEFT JOIN       | All products, even if they don’t have variants
    RIGHT JOIN      | All variants, even if product is missing
    FULL OUTER JOIN | Everything – useful in data audits
    SELF JOIN       | Compare or relate rows within the same table
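    If it helps to see the row-matching logic outside SQL, here is a tiny plain-Ruby sketch of INNER vs LEFT JOIN semantics (hypothetical in-memory rows, not ActiveRecord):

```ruby
# Hypothetical rows: "Hat" has no variants.
products = [{ id: 1, name: "Shoe" }, { id: 2, name: "Hat" }]
variants = [{ product_id: 1, sku: "SHOE-S" }]

# INNER JOIN: only products with at least one matching variant.
inner = products.flat_map do |p|
  variants.select { |v| v[:product_id] == p[:id] }.map { |v| [p[:name], v[:sku]] }
end

# LEFT JOIN: every product; unmatched ones get nil variant columns.
left = products.flat_map do |p|
  matches = variants.select { |v| v[:product_id] == p[:id] }
  matches.empty? ? [[p[:name], nil]] : matches.map { |v| [p[:name], v[:sku]] }
end

inner # => [["Shoe", "SHOE-S"]]
left  # => [["Shoe", "SHOE-S"], ["Hat", nil]]
```

    The LEFT JOIN keeps "Hat" with a nil SKU – exactly the NULLs you see in the variant columns of query 2 above.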

    Let’s now look at JOIN queries with more realistic conditions using products and product_variants.

    🦾 Advanced JOIN Queries with Conditions to practice

    🔹 1. All products with variants in stock AND discounted

    SELECT 
      p.name AS product_name,
      pv.sku,
      pv.size,
      pv.color,
      pv.stock_quantity,
      pv.mrp,
      pv.price,
      (pv.mrp - pv.price) AS discount_amount
    FROM products p
    JOIN product_variants pv ON p.id = pv.product_id
    WHERE pv.stock_quantity > 0
      AND pv.price < pv.mrp
    ORDER BY discount_amount DESC;
    
    

    ♦️ Shows available discounted variants, ordered by discount.

    🔹 2. Products with high rating (4.5+) and at least one low-stock variant (< 10 items)

    SELECT 
      p.name AS product_name,
      p.rating,
      pv.sku,
      pv.stock_quantity
    FROM products p
    JOIN product_variants pv ON p.id = pv.product_id
    WHERE p.rating >= 4.5
      AND pv.stock_quantity < 10;
    
    

    🔹 3. LEFT JOIN to find products with no variants or all variants out of stock

    SELECT 
      p.name AS product_name,
      pv.id AS variant_id,
      pv.stock_quantity
    FROM products p
    LEFT JOIN product_variants pv 
      ON p.id = pv.product_id AND pv.stock_quantity > 0
    WHERE pv.id IS NULL;
    
    

    ✅ This tells you:

    • Either the product has no variants
    • Or all variants are out of stock

    🔹 4. Group and Count Variants per Product

    SELECT 
      p.name AS product_name,
      COUNT(pv.id) AS variant_count
    FROM products p
    LEFT JOIN product_variants pv ON p.id = pv.product_id
    GROUP BY p.name
    ORDER BY variant_count DESC;
    
    

    🔹 5. Variants with price-percentage discount more than 30%

    SELECT 
      p.name AS product_name,
      pv.sku,
      pv.mrp,
      pv.price,
      ROUND(100.0 * (pv.mrp - pv.price) / pv.mrp, 2) AS discount_percent
    FROM products p
    JOIN product_variants pv ON p.id = pv.product_id
    WHERE pv.price < pv.mrp
      AND (100.0 * (pv.mrp - pv.price) / pv.mrp) > 30;
    
    
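    The percentage expression in query 5 is easy to sanity-check in plain Ruby (hypothetical mrp/price values for illustration):

```ruby
# Same math as ROUND(100.0 * (pv.mrp - pv.price) / pv.mrp, 2) in the query.
mrp   = 1999.00
price = 1299.00

discount_value   = mrp - price
discount_percent = (100.0 * (mrp - price) / mrp).round(2)

discount_value   # => 700.0
discount_percent # => 35.02
```

    A 35% discount clears the query’s 30% threshold, so this variant would appear in the result set.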

    🔹 6. Color-wise stock summary for a product category

    SELECT 
      p.category,
      pv.color,
      SUM(pv.stock_quantity) AS total_stock
    FROM products p
    JOIN product_variants pv ON p.id = pv.product_id
    WHERE p.category = 'Shoes'
    GROUP BY p.category, pv.color
    ORDER BY total_stock DESC;
    
    
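    Query 6’s GROUP BY plus SUM is the SQL analogue of Ruby’s group_by/sum, sketched here with hypothetical rows:

```ruby
# Equivalent of: GROUP BY pv.color ... SUM(pv.stock_quantity)
variants = [
  { color: "red",  stock_quantity: 5 },
  { color: "blue", stock_quantity: 3 },
  { color: "red",  stock_quantity: 2 }
]

totals = variants
  .group_by { |v| v[:color] }
  .transform_values { |rows| rows.sum { |r| r[:stock_quantity] } }

totals # => {"red"=>7, "blue"=>3}
```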

    These queries simulate real-world dashboard views: inventory tracking, product health, stock alerts, etc.


    Happy SQL Query Writing! 🚀

    Rails 8 App: Setup Test DB in PostgreSQL | Write SQL Queries

    1. Add the test db and users table: https://railsdrop.com/2025/04/25/rails-8-app-postgresql-heap-vs-mysql-innodb-b-tree-indexing/

    2. Add fake data into the table: https://railsdrop.com/2025/04/29/rails-8-app-postgresql-faker-extensions-for-rails/

    Start Learn (Premium): https://railsdrop.com/sql-postgresql-queries-bitmap-seq-index-scan-db-clustering/

    📌 Summary of all queries

    Check: https://railsdrop.com/postgresql-queries-a-summary/


    Read – Composite vs Individual indexes (Premium): https://railsdrop.com/sql-postgres-understanding-current-composite-index/

    Read Create 1 million sample users: https://railsdrop.com/sql-postgresql-create-1-million-sample-users-data/

    👉 1. SELECT – Basic Query

    🔹 1. Select all columns:

    SELECT * FROM users;
    
    

    This gives you every row and every column in the users table.

    🔹 2. Select specific columns:

    SELECT username, email FROM users;
    
    

    This limits the output to only username and email.

    👉 2. ALTER – Modify Table Structure

    The ALTER TABLE statement is used to add, delete, or modify columns in an existing table.

    🔹 Example 1: Add a new column

    Let’s add a column created_at of type timestamp:

    ALTER TABLE users 
      ADD COLUMN created_at timestamp;
    
    
    🔹 Example 2: Rename a column

    Let’s rename phone_number to mobile:

    ALTER TABLE users
      RENAME COLUMN phone_number TO mobile;
    
    🔹 Example 3: Drop a column

    Let’s say you want to remove the created_at column:

    ALTER TABLE users
      DROP COLUMN created_at;
    
    🔹 Example 4: Update data in a column (not structure):
    UPDATE users
      SET mobile = '123456'
      WHERE mobile IS NULL;
    
    • Use UPDATE instead of ALTER when modifying data in a table.
    • ALTER is used for changing the structure/schema of a table (e.g., adding columns), not for updating data.

    👉 3. DISTINCT – Remove Duplicate Values

    The DISTINCT keyword is used to return only unique (non-duplicate) values in a result set.

    🔹 Example 1: Distinct usernames
    SELECT DISTINCT username FROM users;
    
    

    This returns a list of unique usernames, removing any duplicates.

    🔹 Example 2: Distinct combinations of username and email
    SELECT DISTINCT username, email FROM users;
    SELECT DISTINCT username from users WHERE username like '%quin%';
    EXPLAIN ANALYSE SELECT DISTINCT username from users WHERE username like '%quin%';
    

    The first query checks uniqueness on the combination of username and email; the next two show DISTINCT with a WHERE filter, and how to inspect its plan with EXPLAIN ANALYSE.


    👉 4. WHERE – Filter Records + Major Combine Types (AND, OR, NOT)

    The WHERE clause is used to filter records that meet a certain condition.

    Let’s look at basic and combined conditions using our users table.

    🔹 Example 1: Simple WHERE
    SELECT * FROM users WHERE username = 'john_doe';
    
    
    🔹 Example 2: AND – Combine multiple conditions (all must be true)
    SELECT * FROM users 
    WHERE username = 'quinton' AND email LIKE '%@gmail.com';
    
    
    🔹 Example 3: OR – At least one condition must be true
    SELECT * FROM users 
    WHERE username = 'quinton' OR username = 'joaquin_hand';
    
    
    🔹 Example 4: NOT – Negate a condition
    SELECT * FROM users 
    WHERE NOT email LIKE '%@example.com';
    
    
    🔹 Example 5: Combine AND, OR, NOT (use parentheses!)
    SELECT * FROM users 
    WHERE (email like '%example%' OR email like '%test%') 
      AND NOT username = 'admin';
    
    

    👉 5. ORDER BY – Sort the Results

    The ORDER BY clause sorts the result set by one or more columns (ascending by default). We’ll also look at combined queries afterward.

    🔹 Example 1: Order by a single column (ascending)
    SELECT * FROM users 
    ORDER BY username;
    
    
    🔹 Example 2: Order by a column (descending)
    SELECT * FROM users 
    ORDER BY email DESC;
    
    
    🔹 Example 3: Order by multiple columns
    SELECT * FROM users 
    ORDER BY username ASC, email DESC;
    
    

    👉 6. Combined Queries (UNION, INTERSECT, EXCEPT)

    ✅ These allow you to combine results from multiple SELECT statements.

    ⚠ Requirements:

    • Each query must return the same number of columns.
    • Data types must be compatible.
    🔹 UNION – Combine results and remove duplicates
    SELECT username FROM users WHERE email LIKE '%@example.com'
    UNION
    SELECT username FROM users WHERE username LIKE 'ton%';
    
    
    🔹 UNION ALL – Combine results and keep duplicates
    SELECT username FROM users WHERE email LIKE '%@gmail.com'
    UNION ALL
    SELECT username FROM users WHERE username LIKE 'test%';
    
    
    🔹 INTERSECT – Return only common results
    SELECT username FROM users 
      WHERE email LIKE '%@gmail.com'
    INTERSECT
    SELECT username FROM users 
      WHERE username LIKE 'test%';
    
    SELECT username FROM users
      WHERE (email like '%example' OR email like '%test')
    INTERSECT
    SELECT username FROM users
      WHERE username like 'adam';
    
    🔹 EXCEPT – Return results from the first query that are not in the second
    SELECT username FROM users 
      WHERE email LIKE '%@example'
    EXCEPT
    SELECT username FROM users 
      WHERE (username like '%ada%' OR username like '%merlin%');
    
    

    👉 7. IS NULL and IS NOT NULL – Handling Missing Data

    These are used to check if a column contains a NULL value (i.e., no value).

    🔹 Example 1: Users with a missing email / with an email
    -- Find users with a missing email
    SELECT * FROM users 
    WHERE email IS NULL;
    
    -- Find users that have an email
    SELECT * FROM users 
    WHERE email IS NOT NULL;
    
    
    🔹 Example 2: Users with no email and no phone number
    SELECT * FROM users 
    WHERE email IS NULL AND phone_number IS NULL;
    
    
    🔹 Example 3: Users with either email or phone number missing
    SELECT * FROM users 
    WHERE email IS NULL OR phone_number IS NULL;
    
    
    🔹 Example 4: Users who have an email and username starts with ‘adam’
    SELECT * FROM users 
    WHERE email IS NOT NULL AND username LIKE 'adam%';
    
    
    🔹 Example 5: Users with email missing but username is not empty
    SELECT * FROM users 
    WHERE email IS NULL AND username IS NOT NULL;
    
    
    🔹 Example 6: Users where email or mobile is null, but not both (exclusive or)
    SELECT * FROM users 
    WHERE (email IS NULL AND mobile IS NOT NULL)
       OR (email IS NOT NULL AND mobile IS NULL);
    
    

    👉 8. LIMIT, SELECT TOP, SELECT TOP PERCENT (PostgreSQL-style)

    In PostgreSQL, we use LIMIT instead of SELECT TOP.
    (PostgreSQL doesn’t support TOP directly like SQL Server.)

    🔹 Example 1: Limit number of results (first 10 rows)
    SELECT * FROM users 
    LIMIT 10;
    
    
    🔹 Example 2: Combined with ORDER BY (top 5 newest usernames)
    SELECT username FROM users 
      WHERE username IS NOT NULL
    ORDER BY id DESC
    LIMIT 5;
    
    
    🔹 Example 3: Paginate (e.g., 11th to 20th row)
    SELECT * FROM users 
    ORDER BY id 
    OFFSET 10 LIMIT 10;
    
    

    🔔 Simulating SELECT TOP and SELECT TOP PERCENT in PostgreSQL

    🔹 Example 4: Simulate SELECT TOP 1
    SELECT * FROM users 
    ORDER BY id 
    LIMIT 1;
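
    PostgreSQL also supports the SQL-standard FETCH FIRST clause, which is equivalent to LIMIT and reads closest to SELECT TOP:

    ```sql
    -- Standard-SQL equivalent of LIMIT 1
    SELECT * FROM users
    ORDER BY id
    FETCH FIRST 1 ROW ONLY;
    ```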
    
    
    🔹 Example 5: Simulate SELECT TOP 10 PERCENT

    To get the top 10% of users by id, you can use a subquery:

    SELECT * FROM users
    ORDER BY id
    LIMIT (SELECT CEIL(COUNT(*) * 0.10) FROM users);
    
    
    🔹 Example 6: Users with Gmail or Yahoo emails, ordered by ID, limit 5
    SELECT id, username, email FROM users
    WHERE email LIKE '%@gmail.com' OR email LIKE '%@yahoo.com'
    AND username IS NOT NULL
    ORDER BY id ASC
    LIMIT 5;
    
    

    Note: Without parentheses, AND has higher precedence than OR.

    🔹 Better version with correct logic:
    SELECT id, username, email FROM users
    WHERE (email LIKE '%@gmail.com' OR email LIKE '%@yahoo.com')
      AND username IS NOT NULL
    ORDER BY id ASC
    LIMIT 5;
    
    

    👉 9. Aggregation Functions: MIN, MAX, COUNT, AVG, SUM

    These functions help you perform calculations on column values.

    🔹 1. COUNT – Number of rows
    SELECT COUNT(*) FROM users;
    
    

    ✔️ Total number of users.

    SELECT COUNT(email) FROM users WHERE email IS NOT NULL;
    
    

    ✔️ Count of users who have an email.

    🔹 2. MIN and MAX – Smallest and largest values
    SELECT MIN(id) AS first_user, MAX(id) AS last_user FROM users;
    
    
    🔹 3. AVG – Average (only on numeric fields)

    Assuming id is somewhat sequential, we can do:

    SELECT AVG(id) AS avg_id FROM users;
    
    
    🔹 4. SUM – Total (again, only on numeric fields)
    SELECT SUM(id) AS total_ids FROM users WHERE id < 1000;
    
    

    Combined Queries with Aggregates

    🔹 Example 1: Count users without email and with usernames starting with ‘test’
    SELECT COUNT(*) FROM users 
    WHERE email IS NULL AND username LIKE 'test%';
    
    
    🔹 Example 2: Get min, max, avg ID of users with Gmail addresses
    SELECT 
      MIN(id) AS min_id,
      MAX(id) AS max_id,
      AVG(id) AS avg_id
    FROM users 
    WHERE email LIKE '%@gmail.com';
    
    
    🔹 Example 3: Count how many users per email domain
    SELECT 
      SPLIT_PART(email, '@', 2) AS domain,
      COUNT(*) AS total_users
    FROM users
    WHERE email IS NOT NULL
    GROUP BY domain
    ORDER BY total_users DESC
    LIMIT 5;
    
    

    ♦️ This query breaks email at the @ to group by domain like gmail.com, yahoo.com.

    GROUP BY Course

    Here’s the SQL query to get the maximum mark, minimum mark, and the email (or emails) of users grouped by each course:

    Option 1: Basic GROUP BY with aggregate functions (only max/min mark per course, not emails)
    SELECT
      course,
      MAX(mark) AS max_mark,
      MIN(mark) AS min_mark
    FROM users
    GROUP BY course;
    
    
    Option 2: Include emails of users who have the max or min mark per course

    (PostgreSQL-specific using subqueries and JOIN)

    SELECT u.course, u.email, u.mark
    FROM users u
    JOIN (
      SELECT
        course,
        MAX(mark) AS max_mark,
        MIN(mark) AS min_mark
      FROM users
      GROUP BY course
    ) stats ON u.course = stats.course AND (u.mark = stats.max_mark OR u.mark = stats.min_mark)
    ORDER BY u.course, u.mark DESC;
    
    

    ♦️ This second query shows all users who have the highest or lowest mark in their course, including ties.

    Here’s the updated query that includes:

    • Course name
    • Emails of users with the maximum or minimum marks
    • Their marks
    • Average mark per course
    SELECT
      u.course,
      u.email,
      u.mark,
      stats.max_mark,
      stats.min_mark,
      stats.avg_mark
    FROM users u
    JOIN (
      SELECT
        course,
        MAX(mark) AS max_mark,
        MIN(mark) AS min_mark,
        ROUND(AVG(mark), 2) AS avg_mark
      FROM users
      GROUP BY course
    ) stats ON u.course = stats.course AND (u.mark = stats.max_mark OR u.mark = stats.min_mark)
    ORDER BY u.course, u.mark DESC;
    
    
    Notes:
    • ROUND(AVG(mark), 2) gives the average mark rounded to two decimal places.
    • Users with the same max or min mark are all included.

    Here’s the full query including:

    • Course
    • Email
    • Mark
    • Max/Min mark
    • Average mark
    • User count per course
    SELECT
      u.course,
      u.email,
      u.mark,
      stats.max_mark,
      stats.min_mark,
      stats.avg_mark,
      stats.user_count
    FROM users u
    JOIN (
      SELECT
        course,
        MAX(mark) AS max_mark,
        MIN(mark) AS min_mark,
        ROUND(AVG(mark), 2) AS avg_mark,
        COUNT(*) AS user_count
      FROM users
      GROUP BY course
    ) stats ON u.course = stats.course AND (u.mark = stats.max_mark OR u.mark = stats.min_mark)
    ORDER BY u.course, u.mark DESC;
    
    

    ♦️ This query gives you a full breakdown of top/bottom performers per course along with stats per group.

    Here’s a version that adds the rank of each user within their course based on their mark (highest mark = rank 1), along with:

    • Course
    • Email
    • Mark
    • Rank (within course)
    • Max mark, Min mark, Average mark, User count per course
    WITH ranked_users AS (
      SELECT
        u.*,
        RANK() OVER (PARTITION BY course ORDER BY mark DESC) AS rank_in_course
      FROM users u
    ),
    course_stats AS (
      SELECT
        course,
        MAX(mark) AS max_mark,
        MIN(mark) AS min_mark,
        ROUND(AVG(mark), 2) AS avg_mark,
        COUNT(*) AS user_count
      FROM users
      GROUP BY course
    )
    SELECT
      r.course,
      r.email,
      r.mark,
      r.rank_in_course,
      cs.max_mark,
      cs.min_mark,
      cs.avg_mark,
      cs.user_count
    FROM ranked_users r
    JOIN course_stats cs ON r.course = cs.course
    ORDER BY r.course, r.rank_in_course;
    
    
    Key features:
    • Users are ranked per course using RANK() (supports ties).
    • The output includes all users, not just those with max/min marks.

    NOTE: Here we can see output like:

        course    |                   email                   | mark | rank_in_course | max_mark | min_mark | avg_mark | user_count
    --------------+-------------------------------------------+------+----------------+----------+----------+----------+------------
     IT           | lisandra.schoen@borer-effertz.test        | 1000 |              1 |     1000 |      100 |   543.04 |        796
     IT           | leona@jaskolski-jaskolski.test            | 1000 |              1 |     1000 |      100 |   543.04 |        796
     IT           | angle@ankunding-sauer.example             |  999 |              3 |     1000 |      100 |   543.04 |        796
     IT           | drucilla_okeefe@monahan.test              |  999 |              3 |     1000 |      100 |   543.04 |        796
     algebra      | natashia.langosh@luettgen.test            | 1000 |              1 |     1000 |      100 |   541.52 |        779
     algebra      | tiffany.tremblay@bergnaum.example         | 1000 |              1 |     1000 |      100 |   541.52 |        779
     algebra      | kristeen.nikolaus@crist.example           |  999 |              3 |     1000 |      100 |   541.52 |        779
     algebra      | domenic@predovic-dare.example             |  999 |              3 |     1000 |      100 |   541.52 |        779
     algebra      | kit@oconner.example                       |  999 |              3 |     1000 |      100 |   541.52 |        779
     architecture | tierra_reilly@botsford-okuneva.test       |  997 |              1 |      997 |      100 |   549.24 |        776
     architecture | celestine_reilly@bayer.example            |  996 |              2 |      997 |      100 |   549.24 |        776
     architecture | carson@kulas.example                      |  995 |              3 |      997 |      100 |   549.24 |        776
     botany       | hassan@towne.test                         | 1000 |              1 |     1000 |      103 |   554.07 |        760
     botany       | shaunna@hudson.test                       | 1000 |              1 |     1000 |      103 |   554.07 |        760
     botany       | sanford_jacobs@johnston.example           |  999 |              3 |     1000 |      103 |   554.07 |        760
     botany       | arnulfo_cremin@ernser.example             |  999 |              3 |     1000 |      103 |   554.07 |        760
    

    Notice the gaps in the rank numbering (1, 1, 3): RANK() skips ranks after ties. To avoid these gaps we can use DENSE_RANK().

    Here’s the updated query using DENSE_RANK() instead of RANK() — this avoids gaps in rank numbering when there are ties:

    WITH ranked_users AS (
      SELECT
        u.*,
        DENSE_RANK() OVER (PARTITION BY course ORDER BY mark DESC) AS rank_in_course
      FROM users u
    ),
    course_stats AS (
      SELECT
        course,
        MAX(mark) AS max_mark,
        MIN(mark) AS min_mark,
        ROUND(AVG(mark), 2) AS avg_mark,
        COUNT(*) AS user_count
      FROM users
      GROUP BY course
    )
    SELECT
      r.course,
      r.email,
      r.mark,
      r.rank_in_course,
      cs.max_mark,
      cs.min_mark,
      cs.avg_mark,
      cs.user_count
    FROM ranked_users r
    JOIN course_stats cs ON r.course = cs.course
    WHERE r.rank_in_course <= 3
    ORDER BY r.course, r.rank_in_course;
    
    
    DENSE_RANK difference:
    • If 2 users tie for 1st place, the next gets rank 2 (not 3 like with RANK).
    • Ensures consistent top-N output when ties are frequent.

    🔥 Boom, Bonus: To export the query result as a CSV file in PostgreSQL, you can use the \copy command in psql (PostgreSQL’s CLI), like this:

    🧾 Export Top 3 Students per Course to CSV

    \copy (
      WITH ranked_users AS (
        SELECT
          u.*,
          DENSE_RANK() OVER (PARTITION BY course ORDER BY mark DESC) AS rank_in_course
        FROM users u
      ),
      course_stats AS (
        SELECT
          course,
          MAX(mark) AS max_mark,
          MIN(mark) AS min_mark,
          ROUND(AVG(mark), 2) AS avg_mark,
          COUNT(*) AS user_count
        FROM users
        GROUP BY course
      )
      SELECT
        r.course,
        r.email,
        r.mark,
        r.rank_in_course,
        cs.max_mark,
        cs.min_mark,
        cs.avg_mark,
        cs.user_count
      FROM ranked_users r
      JOIN course_stats cs ON r.course = cs.course
      WHERE r.rank_in_course <= 3
      ORDER BY r.course, r.rank_in_course
    ) TO 'top_students_per_course.csv' WITH CSV HEADER;
    
    
    ✅ Requirements:
    • Run this in the psql shell. Note: \copy is a psql meta-command and must fit on a single line – for a long query like this, either collapse it onto one line or run COPY (...) TO STDOUT WITH CSV HEADER via psql -c and redirect the output to a file.
    • The file top_students_per_course.csv will be saved in your local working directory (where psql was started).
    • \copy writes the file on the client side, so it is your shell user (not the PostgreSQL server) that needs write permission to that directory.


    👉 10. LIKE, %, _ – Pattern Matching in SQL

    These are used to filter text using wildcards:

    • % = matches any sequence of characters (0 or more)
    • _ = matches exactly one character

    🔹 Basic LIKE Queries

    Example 1: Usernames starting with “admin”
    SELECT * FROM users 
    WHERE username LIKE 'admin%';
    
    
    Example 2: Usernames ending with “bot”
    SELECT * FROM users 
    WHERE username LIKE '%bot';
    
    
    Example 3: Usernames containing “test”
    SELECT * FROM users 
    WHERE username LIKE '%test%';
    
    

    🔹 _ Single-character Wildcard

    Example 4: 5-character usernames
    SELECT * FROM users 
    WHERE username LIKE '_____';
    
    

    (Each _ stands for one character.)

    Example 5: Usernames made of any single character followed by “ohn” (e.g., “john”, “kohn”)
    SELECT * FROM users 
    WHERE username LIKE '_ohn';
    
    

    Combined Queries with LIKE, %, _

    🔹 Example 6: Users whose username contains “test” and email ends with “gmail.com”
    SELECT * FROM users 
    WHERE username LIKE '%test%' AND email LIKE '%@gmail.com';
    
    
    🔹 Example 7: Users with 3-character usernames and missing email
    SELECT * FROM users 
    WHERE username LIKE '___' AND email IS NULL;
    
    
    🔹 Example 8: Users with usernames that start with “a” or end with “x” and have a mobile number
    SELECT * FROM users 
    WHERE (username LIKE 'a%' OR username LIKE '%x') AND mobile IS NOT NULL;
    
    

    👉 11. IN, NOT IN, BETWEEN – Set & Range Filters

    These are used to filter based on a list of values (IN) or a range (BETWEEN).

    🔹 1. IN – Match any of the listed values

    SELECT * FROM users 
    WHERE username IN ('admin', 'test_user', 'john_doe');
    
    

    🔹 2. NOT IN – Exclude listed values

    SELECT * FROM users 
    WHERE username NOT IN ('admin', 'test_user');
    
    

    🔹 3. BETWEEN – Match within a range (inclusive)

    SELECT * FROM users 
    WHERE id BETWEEN 100 AND 200;
    
    

    ♦️ Equivalent to: id >= 100 AND id <= 200

    Combined Queries

    🔹 Example 1: Users with username in a list and id between 1 and 500
    SELECT * FROM users 
    WHERE username IN ('alice', 'bob', 'carol') 
      AND id BETWEEN 1 AND 500;
    
    
    🔹 Example 2: Exclude system users and select a range of IDs
    SELECT id, username FROM users 
    WHERE username NOT IN ('admin', 'system') 
      AND id BETWEEN 1000 AND 2000;
    
    
    🔹 Example 3: Top 5 users whose email domains are in a specific list
    SELECT * FROM users 
    WHERE SPLIT_PART(email, '@', 2) IN ('gmail.com', 'yahoo.com', 'hotmail.com')
    ORDER BY id
    LIMIT 5;
    
    

    👉 12. SQL Aliases – Renaming Columns or Tables Temporarily

    Aliases help improve readability, especially in joins or when using functions.

    🔹 1. Column Aliases

    Use AS (optional keyword) to rename a column in the result.

    Example 1: Rename username to user_name
    SELECT username AS user_name, email AS user_email 
    FROM users;
    
    

    You can also omit AS:

    SELECT username user_name, email user_email 
    FROM users;
    
    

    🔹 2. Table Aliases

    Assign a short name to a table (very useful in joins).

    Example 2: Simple alias for table
    SELECT u.username, u.email 
    FROM users u 
    WHERE u.email LIKE '%@gmail.com';
    
    

    🔹 3. Alias with functions

    SELECT COUNT(*) AS total_users, MAX(id) AS latest_id 
    FROM users;
    
    

    Combined Query with Aliases

    🔹 Example 4: Count Gmail users, alias result and filter
    SELECT 
      COUNT(*) AS gmail_users 
    FROM users u 
    WHERE u.email LIKE '%@gmail.com';
    
    
    🔹 Example 5: List usernames with shortened table name and domain extracted
    SELECT 
      u.username AS name, 
      SPLIT_PART(u.email, '@', 2) AS domain 
    FROM users u 
    WHERE u.email IS NOT NULL 
    ORDER BY u.username
    LIMIT 10;
    
    

    Rails 8 App: Setup Test DB in PostgreSQL | Faker | Extensions for Rails app, VSCode

    Let’s try to add some sample data first to our database.

    Step 1: Install pgxnclient

    On macOS (with Homebrew):

    brew install pgxnclient
    

    On Ubuntu/Debian:

    sudo apt install pgxnclient
    

    Step 2: Install the faker extension via PGXN

    pgxn install faker
    

    I get issue with installing faker via pgxn:

    ~ pgxn install faker
    INFO: best version: faker 0.5.3
    ERROR: resource not found: 'https://api.pgxn.org/dist/PostgreSQL_Faker/0.5.3/META.json'
    

    ⚠️ Note: the faker extension we’re trying to install via pgxn is not available (or is improperly published) on the PGXN network. Unfortunately, the faker extension is somewhat unofficial and not actively maintained or reliably hosted.

    🚨 You can SKIP Steps 3–5 and use Option 2 instead

    Step 3: Build and install the extension into PostgreSQL

    cd /path/to/pg_faker  # PGXN will print this after install
    make
    sudo make install
    

    Step 4: Enable it in your database

    Inside psql :

    CREATE EXTENSION faker;
    

    Step 5: Insert 10,000 fake users

    INSERT INTO users (user_id, username, email, phone_number)
    SELECT
      gs AS user_id,
      faker_username(),
      faker_email(),
      faker_phone_number()
    FROM generate_series(1, 10000) AS gs;
    
    Option 2: Use Ruby + Faker gem (if you’re using Rails or Ruby)

    If you’re building your app in Rails, use the faker gem directly:

    In Ruby:
    require 'faker'
    require 'pg'
    
    conn = PG.connect(dbname: 'test_db')
    
    (1..10_000).each do |i|
      conn.exec_params(
        "INSERT INTO users (user_id, username, email, phone_number) VALUES ($1, $2, $3, $4)",
        [i, Faker::Internet.username, Faker::Internet.email, Faker::PhoneNumber.phone_number]
      )
    end
    
    
    In Rails (for test_db), Create the Rake Task:

    Create a file at:

    lib/tasks/seed_fake_users.rake
    
    # lib/tasks/seed_fake_users.rake
    
    namespace :db do
      desc "Seed 10,000 fake users into the users table"
      task seed_fake_users: :environment do
        require "faker"
        require "pg"
    
        conn = PG.connect(dbname: "test_db")
    
        # If user_id is a serial and you want to reset the sequence after deletion, run:
        # conn.exec("TRUNCATE TABLE users RESTART IDENTITY")

        # Delete existing users before loading fake users (no bind
        # parameters needed, so plain #exec is the right call)
        conn.exec("DELETE FROM users")
        
    
        puts "Seeding 10,000 fake users ...."
        (1..10_000).each do |i|
          conn.exec_params(
            "INSERT INTO users (user_id, username, email, phone_number) VALUES ($1, $2, $3, $4)",
            [ i, Faker::Internet.username, Faker::Internet.email, Faker::PhoneNumber.phone_number ]
          )
        end
        puts "Seeded 10,000 fake users into the users table"
        conn.close
      end
    end
    
    # run the task
    bin/rails db:seed_fake_users
    
    For Normal Rails Rake Task:
    # lib/tasks/seed_fake_users.rake
    
    namespace :db do
      desc "Seed 10,000 fake users into the users table"
      task seed_fake_users: :environment do
        require 'faker'
    
        puts "🌱 Seeding 10,000 fake users..."
    
        users = []
    
        # Delete existing users (delete_all skips callbacks and is much
        # faster than destroy_all for bulk deletes)
        User.delete_all
    
        10_000.times do |i|
          users << {
            user_id: i + 1,
            username: Faker::Internet.unique.username,
            email: Faker::Internet.unique.email,
            phone_number: Faker::PhoneNumber.phone_number
          }
        end
    
        # Use insert_all for performance
        User.insert_all(users)
    
        puts "✅ Done. Inserted 10,000 users."
      end
    end
    
    # run the task
    bin/rails db:seed_fake_users
    

    Now let’s discuss PostgreSQL extensions and their usage.

    PostgreSQL extensions are add-ons or plug-ins that extend the core functionality of PostgreSQL. They provide additional capabilities such as new data types, functions, operators, index types, or full features like full-text search, spatial data handling, or fake data generation.

    🔧 What Extensions Can Do

    Extensions can:

    • Add functions (e.g. gen_random_bytes() from pgcrypto)
    • Provide data types (e.g. hstore, uuid, jsonb)
    • Enable indexing techniques (e.g. btree_gin, pg_trgm)
    • Provide tools for testing and development (e.g. faker, pg_stat_statements)
    • Enhance performance monitoring, security, or language support

    📦 Common PostgreSQL Extensions

    Extension – Purpose
    • pgcrypto – Cryptographic functions (e.g., hashing, random byte generation)
    • uuid-ossp – Functions to generate UUIDs
    • postgis – Spatial and geographic data support
    • hstore – Key-value store in a single PostgreSQL column
    • pg_trgm – Trigram-based text search and indexing
    • citext – Case-insensitive text type
    • pg_stat_statements – SQL query statistics collection
    • faker – Generates fake but realistic data (for testing)

    📥 Installing and Enabling Extensions

    1. Install (if not built-in)

    Via package manager or PGXN (PostgreSQL Extension Network), or compile from source.

    2. Enable in a database

    CREATE EXTENSION extension_name;
    
    

    Example:

    CREATE EXTENSION pgcrypto;
    
    

    Enabling an extension makes its functionality available to the current database only.

    🤔 Why Use Extensions?

    • Productivity: Quickly add capabilities without writing custom code.
    • Performance: Access to advanced indexing, statistics, and optimization tools.
    • Development: Generate test data (faker), test encryption (pgcrypto), etc.
    • Modularity: PostgreSQL stays lightweight while letting you add only what you need.

    Here’s a categorized list (with a simple visual-style layout) of PostgreSQL extensions that are safe and useful for Rails apps in both development and production environments.

    🔌 PostgreSQL Extensions for Rails Apps

    # connect psql
    psql -U username -d database_name
    
    # list all available extensions
    SELECT * FROM pg_available_extensions;
    
    # eg. to install the hstore extension run
    CREATE EXTENSION hstore;
    
    # verify the installation
    SELECT * FROM pg_extension;
    SELECT * FROM pg_extension WHERE extname = 'hstore';
    
    

    🔐 Security & UUIDs

    Extension – Use Case – Safe for Prod?
    • pgcrypto – Secure random bytes, hashes, UUIDs – ✅
    • uuid-ossp – UUID generation (v1, v4, etc.) – ✅

    💡 Tip: Use uuid-ossp or pgcrypto to generate UUID primary keys (id: :uuid) in Rails.
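
    Under the hood, a Rails table created with id: :uuid maps to DDL along these lines (the accounts table name is illustrative; gen_random_uuid() comes from pgcrypto):

    ```sql
    CREATE EXTENSION IF NOT EXISTS pgcrypto;

    -- UUID primary key generated by the database on insert
    CREATE TABLE accounts (
      id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
      name text
    );
    ```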

    📘 PostgreSQL Procedures and Triggers — Explained with Importance and Examples

    PostgreSQL is a powerful, open-source relational database that supports advanced features like stored procedures and triggers, which are essential for encapsulating business logic inside the database.

    🔹 What are Stored Procedures in PostgreSQL?

    A stored procedure is a pre-compiled set of SQL and control-flow statements stored in the database and executed by calling it explicitly.

    Purpose: Encapsulate business logic, reuse complex operations, improve performance, and reduce network overhead.

    ✅ Benefits of Stored Procedures:
    • Faster execution (compiled and stored in DB)
    • Centralized logic
    • Reduced client-server round trips
    • Language support: SQL, PL/pgSQL, Python, etc.
    🧪 Example: Create a Procedure to Add a New User
    CREATE OR REPLACE PROCEDURE add_user(p_name TEXT, p_email TEXT)
    LANGUAGE plpgsql
    AS $$
    BEGIN
        -- Prefix the parameters (p_) so they don't clash with the column
        -- names, which would raise an "ambiguous reference" error
        INSERT INTO users (name, email) VALUES (p_name, p_email);
    END;
    $$;
    
    
    ▶️ Call the procedure:
    CALL add_user('John Doe', 'john@example.com');
    
    

    🔹 What are Triggers in PostgreSQL?

    A trigger is a special function that is automatically executed in response to certain events on a table (like INSERT, UPDATE, DELETE).

    Purpose: Enforce rules, maintain audit logs, auto-update columns, enforce integrity, etc.

    ✅ Benefits of Triggers:
    • Automate tasks on data changes
    • Enforce business rules and constraints
    • Keep logs or audit trails
    • Maintain derived data or counters

    🧪 Example: Trigger to Log Inserted Users

    1. Create the audit table:

    CREATE TABLE user_audit (
        id SERIAL PRIMARY KEY,
        user_id INTEGER,
        name TEXT,
        email TEXT,
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    );
    
    

    2. Create the trigger function:

    CREATE OR REPLACE FUNCTION log_user_insert()
    RETURNS TRIGGER AS $$
    BEGIN
        INSERT INTO user_audit (user_id, name, email)
        VALUES (NEW.id, NEW.name, NEW.email);
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;
    
    

    3. Create the trigger on users table:

    CREATE TRIGGER after_user_insert
    AFTER INSERT ON users
    FOR EACH ROW
    EXECUTE FUNCTION log_user_insert();
    
    

    Now, every time a user is inserted, the trigger logs it in the user_audit table automatically.

    📌 Difference: Procedures vs. Triggers

    Feature – Stored Procedures – Triggers
    • When executed – Called explicitly with CALL – Automatically executed on events
    • Purpose – Batch processing, encapsulate logic – React to data changes automatically
    • Control – Full control by developer – Fire based on database event (Insert, Update, Delete)
    • Returns – No return value (or OUT parameters) – Must return NEW or OLD row in most cases

    🎯 Why Are Procedures and Triggers Important?

    ✅ Use Cases for Stored Procedures:
    • Bulk processing (e.g. daily billing)
    • Data import/export
    • Account setup workflows
    • Multi-step business logic
    ✅ Use Cases for Triggers:
    • Auto update updated_at column
    • Enforce soft-deletes
    • Maintain counters or summaries (e.g., post comment count)
    • Audit logs / change history
    • Cascading updates or cleanups
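
    For instance, the first use case above (auto-updating an updated_at column) is a one-trigger job; this sketch assumes users has an updated_at column:

    ```sql
    CREATE OR REPLACE FUNCTION touch_updated_at()
    RETURNS TRIGGER AS $$
    BEGIN
        -- Overwrite whatever the client sent with the server clock
        NEW.updated_at := CURRENT_TIMESTAMP;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER set_updated_at
    BEFORE UPDATE ON users
    FOR EACH ROW
    EXECUTE FUNCTION touch_updated_at();
    ```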

    🚀 Real-World Example: Soft Delete Trigger

    Instead of deleting records, mark them as deleted = true (this assumes users has a boolean deleted column).

    CREATE OR REPLACE FUNCTION soft_delete_user()
    RETURNS TRIGGER AS $$
    BEGIN
      UPDATE users SET deleted = TRUE WHERE id = OLD.id;
      RETURN NULL; -- cancel the delete
    END;
    $$ LANGUAGE plpgsql;
    
    CREATE TRIGGER before_user_delete
    BEFORE DELETE ON users
    FOR EACH ROW
    EXECUTE FUNCTION soft_delete_user();
    
    

    Now any DELETE FROM users WHERE id = 1; will just update the deleted column.

    🛠️ Tools to Manage Procedures & Triggers

    • pgAdmin (GUI)
    • psql (CLI)
    • Code-based migrations (via tools like ActiveRecord or pg gem)

    🧠 Summary

    Feature – Stored Procedure – Trigger
    • Manual/Auto – Manual (CALL) – Auto (event-based)
    • Flexibility – Complex logic, loops, variables – Quick logic, row-based or statement-based
    • Languages – PL/pgSQL, SQL, Python, etc. – PL/pgSQL, SQL
    • Best for – Multi-step workflows – Audit, logging, validation

    Use Postgres RANDOM()

    By using RANDOM() in PostgreSQL, the application can efficiently retrieve a random user straight from the database. Here’s why this is important:

    1. Efficiency: PostgreSQL’s RANDOM() is more efficient than loading all records into memory and selecting one randomly in Ruby. This is especially important when dealing with large datasets (like if we have 10,000 users).
    2. Database-level Operation: The randomization happens at the database level rather than the application level, which:
    • Reduces memory usage (we don’t need to load unnecessary records)
    • Reduces network traffic (only one record is transferred)
    • Takes advantage of PostgreSQL’s optimized random number generation
    3. Single Query: Using RANDOM() lets us fetch a random record in a single SQL query, typically something like:
    SELECT * FROM users ORDER BY RANDOM() LIMIT 1;
    

    This is in contrast to less efficient methods like:

    • Loading all users and using Ruby’s sample method (User.all.sample)
    • Getting a random ID and then querying for it (which would require two queries)
    • Using offset with count (which can be slow on large tables)
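
    Note that ORDER BY RANDOM() still scans and sorts the whole table. On very large tables, PostgreSQL’s TABLESAMPLE clause (9.5+) can be cheaper because it reads only a sample of pages; the trade-off is that it is approximate:

    ```sql
    -- Scan roughly 1% of the table's pages, then take one row.
    -- Fast but approximate: may return zero rows on tiny tables.
    SELECT * FROM users TABLESAMPLE SYSTEM (1) LIMIT 1;
    ```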

    🔍 Full Text Search & Similarity

    Extension – Use Case – Safe for Prod?
    • pg_trgm – Trigram-based fuzzy search (great with ILIKE & similarity) – ✅
    • unaccent – Remove accents for better search results – ✅
    • fuzzystrmatch – Soundex, Levenshtein distance – ✅ (heavy use = test!)

    💡 Combine pg_trgm + unaccent for powerful search in Rails models using ILIKE.
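
    A sketch of that combination, using the users table from earlier (the index name is up to you):

    ```sql
    CREATE EXTENSION IF NOT EXISTS pg_trgm;
    CREATE EXTENSION IF NOT EXISTS unaccent;

    -- A trigram GIN index lets ILIKE '%...%' use an index scan
    CREATE INDEX index_users_on_username_trgm
      ON users USING gin (username gin_trgm_ops);

    SELECT username FROM users WHERE username ILIKE '%quin%';

    -- Accent-insensitive matching via unaccent()
    SELECT username FROM users
    WHERE unaccent(username) ILIKE unaccent('%José%');
    ```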

    📊 Performance Monitoring & Dev Insights

    Extension – Use Case – Safe for Prod?
    • pg_stat_statements – Monitor slow queries, frequency – ✅
    • auto_explain – Log plans for slow queries – ✅
    • hypopg – Simulate hypothetical indexes – ✅ (dev only)

    🧪 Dev Tools & Data Generation

    Extension – Use Case – Safe for Prod?
    • faker – Fake data generation for testing – ❌ Dev only
    • pgfaker – Community alternative to faker – ❌ Dev only

    📦 Storage & Structure

    Extension – Use Case – Safe for Prod?
    • hstore – Key-value storage in a column – ✅
    • citext – Case-insensitive text – ✅

    💡 citext is very handy for case-insensitive email columns in Rails.
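
    For example, converting the email column to citext makes every comparison case-insensitive without sprinkling LOWER() through your queries:

    ```sql
    CREATE EXTENSION IF NOT EXISTS citext;

    ALTER TABLE users ALTER COLUMN email TYPE citext;

    -- Now matches 'John@Example.com', 'JOHN@EXAMPLE.COM', etc.
    SELECT * FROM users WHERE email = 'john@example.com';
    ```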

    🗺️ Geospatial (Advanced)

    Extension – Use Case – Safe for Prod?
    • postgis – GIS/spatial data support – ✅ (big apps)

    🎨 Visual Summary

    +-------------------+-----------------------------+-----------------+
    | Category          | Extension                   | Safe for Prod?  |
    +-------------------+-----------------------------+-----------------+
    | Security/UUIDs    | pgcrypto, uuid-ossp         | ✅              |
    | Search/Fuzziness  | pg_trgm, unaccent, fuzzystr | ✅              |
    | Monitoring        | pg_stat_statements          | ✅              |
    | Dev Tools         | faker, pgfaker              | ❌ (Dev only)   |
    | Text/Storage      | citext, hstore              | ✅              |
    | Geo               | postgis                     | ✅              |
    +-------------------+-----------------------------+-----------------+
    
    

    PostgreSQL Extension for VSCode

    # 1. open the Command Palette (Cmd + Shift + P)
    # 2. Type 'PostgreSQL: Add Connection'
    # 3. Enter the hostname and authentication details of the database
    # 4. Open Command Palette, type: 'PostgreSQL: New Query'
    

    Enjoy PostgreSQL  🚀