How to Integrate Datadog and PagerDuty into an Enterprise Rails Application – Part 2

Stack: Ruby 3+, Rails 7+
Audience: Backend engineers building or maintaining production-grade Rails services
Goal: Add real-time observability and on-call alerting to a critical business process

Part 3: Hooking It All Together — Rake Task + Cron

3.1 Rake Task

Create lib/tasks/billing.rake:

namespace :billing do
  desc "Run billing health check: emit Datadog metrics and alert if unhealthy"
  task health_check: :environment do
    Monitoring::BillingHealthCheck.new(
      billing_week: BillingWeek.current
    ).run
  end
end

Run it manually:

bundle exec rake billing:health_check

3.2 Cron Script

Create scripts/cron/billing_health_check.sh:

#!/bin/bash
source /apps/myapp/current/scripts/env.sh
cd /apps/myapp/current # bundle exec must run from the app root (where the Gemfile lives)
bundle exec rake billing:health_check

Using Healthchecks.io (or similar) to wrap the cron gives you a second layer of alerting: if the cron doesn’t ping within the expected window, you get an alert – even if the app never starts.
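If you would rather keep the ping logic in Ruby than in the shell wrapper, a minimal sketch looks like this. The helper name and the `HEALTHCHECKS_PING_URL` variable are ours, not part of any gem; the `/start` and `/fail` URL suffixes are Healthchecks.io's documented ping convention. The `http:` parameter exists so the client can be stubbed outside production:

```ruby
require 'net/http'
require 'uri'

# Hypothetical helper: ping Healthchecks.io around a block of work.
# ping_url is your check's unique URL (e.g. ENV['HEALTHCHECKS_PING_URL']).
def with_healthcheck_ping(ping_url, http: Net::HTTP)
  ping = lambda do |suffix|
    http.get_response(URI("#{ping_url}#{suffix}"))
  rescue StandardError
    nil # a failed ping must never break the job itself
  end

  ping.call('/start') # signals "run began"
  result = yield
  ping.call('')       # plain URL signals success
  result
rescue StandardError
  ping.call('/fail')  # signals failure, then re-raise for the caller
  raise
end
```

In the rake task you would then wrap the service call: `with_healthcheck_ping(ENV['HEALTHCHECKS_PING_URL']) { Monitoring::BillingHealthCheck.new(billing_week: BillingWeek.current).run }`.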

3.3 Crontab Entry

# Run billing health check every Thursday at 5:30 AM
30 5 * * 4 /apps/myapp/current/scripts/cron/billing_health_check.sh

⚠️ Important for managed deployments: If your crontab is version-controlled but not auto-deployed (e.g., Capistrano without cron management), changes to the file in your repo do not automatically update the server. Always verify with crontab -l after deploying.


Part 4: Building the Datadog Dashboard

Once metrics are flowing, set up a dashboard for at-a-glance visibility.

4.1 Create the Dashboard

  1. Datadog → Dashboards → New Dashboard
  2. Name it: “Billing Health Monitor”
  3. Click + Add Widgets

4.2 Add Timeseries Widgets

For each metric, add a Timeseries widget:

Widget title | Metric | Visualization
Unbilled Orders | billing.unbilled_orders | Line chart
Missing Billing Records | billing.missing_billing_records | Line chart
Failed Charges | billing.failed_charges | Line chart

Widget configuration:

  • Graph: select metric → billing.unbilled_orders
  • Display as: Line
  • Timeframe: Set to “Past 1 Week” or “Past 1 Month” after data starts flowing (not “Past 1 Hour” which shows nothing between weekly runs)

4.3 Add Reference Lines (Optional but Useful)

For the unbilled orders widget, add a constant line at your alert threshold:

  • In the widget editor → Markers → Add marker at y = 10 (your BILLING_UNBILLED_THRESHOLD)
  • Color it red to make the threshold visually obvious

4.4 Where to Find Your Custom Metrics

  • Metric Volume (/metric/volume): confirms Datadog received the metric
  • Metric Explorer (/metric/explorer): lets you graph and analyze it over time


Part 5: Testing the Integration End-to-End

5.1 Test Datadog Metrics (no alerts, safe in any env)

# Rails console
require 'datadog/statsd'
host = ENV.fetch('DD_AGENT_HOST', '127.0.0.1')
statsd = Datadog::Statsd.new(host, 8125)
statsd.gauge('billing.unbilled_orders', 0)
statsd.gauge('billing.missing_billing_records', 0)
statsd.gauge('billing.failed_charges', 0)
statsd.close
puts "Sent — check /metric/explorer in Datadog in ~2-3 minutes"

5.2 Test PagerDuty (staging)

# Rails console — staging
# First, verify the key exists:
Rails.application.credentials[:pagerduty_billing_integration_key].present?
# Then trigger a test incident:
svc = Monitoring::BillingHealthCheck.new(billing_week: BillingWeek.current)
svc.send(:trigger_pagerduty, "TEST: Billing health check — staging validation #{Time.current}")
# Remember to resolve the incident in PagerDuty UI immediately after!

5.3 Test PagerDuty (production) — Preferred Method

Use PagerDuty’s built-in test instead of triggering from code:

  1. PagerDuty → Services → Billing Pipeline → Integrations
  2. Find the integration → click “Send Test Event”

This fires through the same pipeline without touching your app or risking a real alert.

5.4 Test PagerDuty (production) — via Rails Console

If you must test via code in production, use a unique dedup key so it doesn’t collide with real billing alerts, and coordinate with your on-call engineer first:

svc = Monitoring::BillingHealthCheck.new(billing_week: BillingWeek.current)
Pagerduty::Wrapper.new(
  integration_key: svc.send(:pagerduty_integration_key)
).client.incident("billing-health-test-#{Time.current.to_i}").trigger(
  summary: "TEST ONLY — please ignore — integration validation",
  source: "rails-console",
  severity: "critical"
)

5.5 Test the Full Service Class (production, after billing has run)

Once billing has completed successfully for the week, all counts will be 0 and no PagerDuty alert will fire:

result = Monitoring::BillingHealthCheck.new(billing_week: BillingWeek.current).run
puts result
# => { unbilled_orders_count: 0, missing_billing_records_count: 0, failed_charges_count: 0, ... }

Common Gotchas

1. StatsD is Fire-and-Forget

UDP has no acknowledgment. If the agent isn’t running, your statsd.gauge() calls return normally with no error. Always verify the agent is reachable by checking for your metric in the Datadog UI after sending — don’t rely on exception-free code as proof of delivery.
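You can see this for yourself with a bare socket; the send below "succeeds" whether or not an agent is listening on the port:

```ruby
require 'socket'

# Send a gauge-formatted StatsD datagram to a port where (probably) nothing listens.
payload = "billing.unbilled_orders:0|g"
sock = UDPSocket.new
bytes_sent = sock.send(payload, 0, "127.0.0.1", 8125)
sock.close

# send returns the number of bytes handed to the OS, not proof of delivery
puts bytes_sent == payload.bytesize # => true
```

No exception, a positive return value, and still zero guarantee anything received it. That is why the Datadog UI, not your logs, is the source of truth for delivery.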

2. Metric Volume vs Metric Explorer

  • Metric Volume (/metric/volume): Confirms Datadog received the metric. Good for first-time setup verification.
  • Metric Explorer (/metric/explorer): Lets you actually graph and analyze the metric over time. This is where you do your monitoring work.

3. Rescue Around Everything

Both emit_datadog_metrics and trigger_pagerduty should have rescue blocks. Your monitoring code must never crash your main business process. A job that fails to alert is bad; a job that aborts because the alert itself raised an exception is worse.

def emit_datadog_metrics(results)
  # ... emit metrics
rescue => e
  Rails.logger.error("Failed to emit Datadog metrics: #{e.message}")
  # Do NOT re-raise: monitoring failure is never a reason to abort the job
end

4. Environment Parity for the Datadog Agent

In production the agent runs as a sidecar or daemon. In local development and staging, it often doesn’t. This is fine — just make sure your code uses ENV.fetch('DD_AGENT_HOST', '127.0.0.1') so the host is configurable per environment, and don’t be alarmed when staging metrics don’t appear in Datadog.

5. PagerDuty Dedup Keys Prevent Double-Paging

If your cron job or health check can run more than once for the same underlying issue (retry logic, manual reruns), always use a stable dedup_key tied to the resource and time period — not a timestamp. A timestamp-based key creates a new PagerDuty incident on every run.


Summary

Concern | Tool | How
Custom business metrics | Datadog StatsD | Datadog::Statsd#gauge via local agent (UDP)
APM / request tracing | Datadog ddtrace | Datadog.configure initializer
Metric visualization | Datadog Dashboards | Timeseries widgets per metric
Critical alert on failure | PagerDuty Events API v2 | Pagerduty::Wrapper + dedup key
Secondary notification | Google Chat / Slack webhook | HTTP POST to webhook URL
Scheduled execution | Cron + Rake | Shell script wrapping bundle exec rake
Cron liveness monitoring | Healthchecks.io | Ping before/after cron run

Both integrations together give you a complete observability loop: your scheduled jobs run on time, emit metrics to Datadog for trending and analysis, and page the right engineer via PagerDuty the moment something goes wrong — before any customer notices.



Happy Integration!

How to Integrate Datadog and PagerDuty into an Enterprise Rails Application – Part 1

Stack: Ruby 3+, Rails 7+
Audience: Backend engineers building or maintaining production-grade Rails services
Goal: Add real-time observability and on-call alerting to a critical business process


Introduction

When you’re running an enterprise web application, two questions keep engineering teams up at night:

  1. “Is our system healthy right now?”
  2. “If something breaks at 3 AM, will we know before our customers do?”

Datadog and PagerDuty together answer both. Datadog gives you the metrics, dashboards, and visibility. PagerDuty turns critical metrics into actionable alerts that reach the right person at the right time. This post walks you through integrating both into a Rails 7+ application — from gem installation to a live production dashboard — using a real-world billing health monitor as the example.

What is Datadog?

Datadog is a cloud-based observability and monitoring platform. It collects metrics, traces, and logs from your infrastructure and applications and surfaces them in a unified UI.

Core capabilities relevant to Rails apps:

Feature | What it does
APM (Application Performance Monitoring) | Traces every Rails request; shows latency, errors, and bottlenecks
StatsD / DogStatsD | Accepts custom business metrics (gauges, counters, histograms) via UDP
Dashboards | Visualize any metric over time, from a single chart to a full ops dashboard
Monitors & Alerts | Trigger notifications when a metric crosses a threshold
Log Management | Centralized log search and correlation with traces
Infrastructure Monitoring | CPU, memory, disk: the full host/container picture

For this guide, we focus on custom business metrics via DogStatsD — the most powerful and underused feature for application teams.


What is PagerDuty?

PagerDuty is an incident management platform. When something breaks in production, PagerDuty decides who gets notified, how (phone call, SMS, push notification, Slack), and when to escalate if the alert isn’t acknowledged.

Key concepts:

Concept | Description
Service | A logical grouping of alerts (e.g., “Billing Service”)
Integration Key | The secret key your app uses to send events to a PagerDuty service
Incident | A triggered alert that requires human acknowledgment
Dedup Key | A unique string that prevents duplicate incidents for the same root cause
Escalation Policy | Defines who gets paged, and in what order, if the incident isn’t acknowledged
Severity | critical, error, warning, or info

PagerDuty integrates with Datadog (you can alert from DD monitors), but for critical business logic alerts — like a billing pipeline failing — it’s often better to trigger PagerDuty directly from your application code, giving you full control over deduplication and context.


Why These Are Must-Have Integrations for Enterprise Apps

If you’re running any of the following, you need both:

  • Scheduled jobs / cron tasks that process money, orders, or user data
  • Background workers (Sidekiq, Delayed Job) that can silently fail
  • Third-party payment or fulfillment pipelines with no built-in alerting
  • SLAs that require uptime or processing guarantees
  • On-call rotations where the right person needs to be paged — not just an email inbox

The core problem both solve: Rails applications fail silently. A rescue clause that logs an error to Rails.logger does nothing at 2 AM. A Sidekiq deadlock on your billing job won’t send you an email. Without Datadog and PagerDuty:

  • You find out about failures from customers, not dashboards
  • You can’t tell when a metric degraded or how long it’s been broken
  • There’s no escalation path — the alert that fires at 3 AM goes nowhere

With both integrated, you get: visibility (Datadog) + accountability (PagerDuty).


Architecture Overview

Rails App / Cron Job
  ├──► Datadog Agent (UDP :8125)
  │       └──► Datadog Cloud ──► Dashboard / Monitor
  └──► PagerDuty Events API (HTTPS)
          └──► On-call Engineer ──► Slack / Phone / SMS

The Datadog Agent runs as a daemon on your server or as a sidecar container. Your app sends lightweight UDP packets to it (fire-and-forget). The agent batches and forwards them to Datadog’s cloud.

PagerDuty receives events over HTTPS directly from your app — no local agent needed.


Part 1: Datadog Integration

1.1 Install the Gems

# Gemfile
gem 'datadog', '~> 2.0'        # APM tracing (the ddtrace gem was renamed to datadog as of 2.0)
gem 'dogstatsd-ruby', '~> 5.0' # Custom metrics via StatsD

bundle install

1.2 Configure the Datadog Initializer

Create config/initializers/datadog.rb:

require 'datadog/statsd'
require 'datadog'

enabled = Rails.application.credentials[:datadog_integration_enabled]
service_name = "myapp-#{Rails.env}"

Datadog.configure do |c|
  c.tracing.enabled = enabled
  c.runtime_metrics.enabled = enabled
  c.tracing.instrument :rails, service_name: service_name
  c.tracing.instrument :rake, enabled: false # avoid tracing long-running tasks

  # Consolidate HTTP client spans under one service name to reduce noise
  c.tracing.instrument :faraday, service_name: service_name
  c.tracing.instrument :httpclient, service_name: service_name
  c.tracing.instrument :http, service_name: service_name
  c.tracing.instrument :rest_client, service_name: service_name
end

Store the flag in Rails credentials:

rails credentials:edit --environment production
# config/credentials/production.yml.enc
datadog_integration_enabled: true

Important: The datadog_integration_enabled flag controls APM tracing only. Custom StatsD metrics (gauges, counters) are sent by Datadog::Statsd regardless of this flag — as long as the Datadog Agent is running.
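If you want custom metrics gated on the same flag, you have to check it yourself. A sketch of one way to do that (the `emit_gauge` helper is ours, not part of any gem; `statsd` is duck-typed, so anything responding to `#gauge` works):

```ruby
# Hypothetical guard: skip custom metrics when the integration flag is off.
# In the app, enabled: would come from however you read the flag out of credentials.
def emit_gauge(statsd, name, value, enabled:)
  return :skipped unless enabled
  statsd.gauge(name, value)
  :sent
end
```

Calling it with `enabled: false` in development means no stray packets, and no code path differences beyond the flag.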

1.3 Install and Configure the Datadog Agent

The Datadog Agent must be running on the host where your app runs. It listens for UDP packets on port 8125 and forwards them to Datadog’s cloud.

Docker Compose (recommended for containerized apps):

# docker-compose.yml
services:
  app:
    environment:
      DD_AGENT_HOST: datadog-agent
      DD_DOGSTATSD_PORT: 8125
  datadog-agent:
    image: datadog/agent:latest
    environment:
      DD_API_KEY: ${DATADOG_API_KEY}
      DD_DOGSTATSD_NON_LOCAL_TRAFFIC: "true"
    ports:
      - "8125:8125/udp"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /proc/:/host/proc/:ro
      - /sys/fs/cgroup/:/host/sys/fs/cgroup:ro

Bare metal / VM:

DD_API_KEY=your_api_key bash -c "$(curl -L https://install.datadoghq.com/scripts/install_script_agent7.sh)"

1.4 Emit Custom Business Metrics

Now the interesting part — emitting metrics from your business logic.

Create a service class for a billing health check at app/lib/monitoring/billing_health_check.rb:

# frozen_string_literal: true

class Monitoring::BillingHealthCheck
  UNBILLED_THRESHOLD = ENV.fetch('BILLING_UNBILLED_THRESHOLD', 10).to_i

  def initialize(billing_week:)
    @billing_week = billing_week
  end

  def run
    results = collect_metrics
    emit_datadog_metrics(results)
    alert_if_unhealthy(results)
    results
  end

  private

  def collect_metrics
    order_ids = Order.where(date: @billing_week).ids
    billed_ids = BillingRecord.where(date: @billing_week).pluck(:order_id)
    missing_order_ids = order_ids - billed_ids # orders with no billing record
    unbilled_count = Order.active.where(date: @billing_week, billed: false).count
    failed_charges = Order.joins(:bills)
                          .where(date: @billing_week, billed: false, bills: { success: false })
                          .distinct
                          .count
    {
      missing_order_ids: missing_order_ids,
      missing_billing_records_count: missing_order_ids.size,
      unbilled_orders_count: unbilled_count,
      failed_charges_count: failed_charges
    }
  end

  def emit_datadog_metrics(results)
    host = ENV.fetch('DD_AGENT_HOST', '127.0.0.1')
    port = ENV.fetch('DD_DOGSTATSD_PORT', 8125).to_i
    statsd = Datadog::Statsd.new(host, port)
    statsd.gauge('billing.unbilled_orders', results[:unbilled_orders_count])
    statsd.gauge('billing.missing_billing_records', results[:missing_billing_records_count])
    statsd.gauge('billing.failed_charges', results[:failed_charges_count])
    statsd.close
  rescue => e
    Rails.logger.error("Failed to emit Datadog metrics: #{e.message}")
  end

  # ... alerting covered in Part 2
end

Why Datadog::Statsd.new(host, port) instead of Datadog::Statsd.new?

The no-argument form defaults to 127.0.0.1:8125. In containerized environments, the Datadog Agent runs as a separate container/service with a different hostname. Always read the host from an environment variable so the code works in every environment without changes.

1.5 Choosing the Right Metric Type

Type | Method | Use when
Gauge | statsd.gauge('name', value) | Current snapshot value (queue depth, count at a point in time)
Counter | statsd.increment('name') | Counting occurrences (requests, errors)
Histogram | statsd.histogram('name', value) | Distribution of values (response times, batch sizes)
Timing | statsd.timing('name', ms) | Duration in milliseconds

For billing health metrics — unbilled orders, failed charges — gauge is correct because you want the current count, not a running total.
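The distinction matters because the two aggregate differently within a flush window. A plain-Ruby illustration of the semantics (no Datadog involved):

```ruby
# Counter semantics: occurrences are summed within a flush window
counter = 0
3.times { counter += 1 } # three events observed => 3

# Gauge semantics: last write wins; it is a snapshot, not a sum
gauge = nil
[14, 9, 12].each { |current_count| gauge = current_count }

puts counter # => 3
puts gauge   # => 12 (only the latest reported value matters)
```

If you used a counter for unbilled orders, weekly runs would accumulate into a meaningless total; the gauge instead answers "how many are unbilled right now?".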

1.6 Debugging: Why Aren’t My Metrics Appearing?

This is the most common issue. Because StatsD uses UDP, failures are completely silent.

Checklist:

# 1. Is the Datadog Agent reachable from your app container/host?
# Run in Rails console:
require 'socket'
UDPSocket.new.send("test:1|g", 0, ENV.fetch('DD_AGENT_HOST', '127.0.0.1'), 8125)
# 2. Send a test gauge and wait 2-3 minutes
statsd = Datadog::Statsd.new(ENV.fetch('DD_AGENT_HOST', '127.0.0.1'), 8125)
statsd.gauge('debug.connectivity_test', 1)
statsd.close
puts "Sent — check Datadog metric/explorer in 2-3 minutes"
# 3. Check if the integration flag is blocking APM (not metrics, but worth knowing)
Rails.application.credentials[:datadog_integration_enabled]

Then in the Datadog UI:

  • Go to Metrics → Explorer
  • Type your metric name (e.g., billing.) in the graph field — it should autocomplete
  • If it doesn’t autocomplete after 5 minutes, the agent is not receiving the packets

Common root causes in staging/dev environments:

Symptom | Likely cause
No metrics in any env | Agent not running or wrong host
Metrics in production only | DD_AGENT_HOST not set; defaults to 127.0.0.1, but the agent is on a different host in staging
Intermittent metrics | UDP packet loss (rare, but can happen under high load)

Part 2: PagerDuty Integration

2.1 Install the Gem

# Gemfile
gem 'pagerduty', '~> 4.0' # the Pagerduty.build / Events API v2 interface used below is in the 4.x series
bundle install

2.2 Create a PagerDuty Service

  1. Log in to PagerDuty → Services → Service Directory → + New Service
  2. Name it (e.g., “Billing Pipeline”)
  3. Under Integrations, select “Use our API directly” → choose Events API v2
  4. Copy the Integration Key — you’ll need this in credentials

2.3 Store Credentials Securely

rails credentials:edit --environment production
# config/credentials/production.yml.enc
pagerduty_billing_integration_key: your_integration_key_here
google_chat_monitoring_webhook: https://chat.googleapis.com/v1/spaces/...

2.4 Create a PagerDuty Wrapper

Create a lightweight wrapper at app/lib/pagerduty/wrapper.rb:

# frozen_string_literal: true

class Pagerduty::Wrapper
  def initialize(integration_key:, api_version: 2)
    @integration_key = integration_key
    @api_version = api_version
  end

  def client
    @client ||= Pagerduty.build(
      integration_key: @integration_key,
      api_version: @api_version
    )
  end
end

2.5 Wire Up Alerting in Your Service Class

Continuing the billing health check class:

def alert_if_unhealthy(results)
  issues = []

  if results[:missing_billing_records_count] > 0
    issues << "Missing billing records for orders: #{results[:missing_order_ids].join(', ')}"
  end

  if results[:unbilled_orders_count] > UNBILLED_THRESHOLD
    issues << "#{results[:unbilled_orders_count]} unbilled orders (threshold: #{UNBILLED_THRESHOLD})"
  end

  return if issues.empty?

  summary = build_alert_summary(results, issues)
  trigger_pagerduty(summary)
  send_google_chat_notification(summary)
end

private

def build_alert_summary(results, issues)
  [
    "Billing Health Check FAILED at #{Time.zone.now.strftime('%Y-%m-%d %H:%M:%S %Z')}",
    "Week: #{@billing_week}",
    *issues,
    "Failed charges: #{results[:failed_charges_count]}"
  ].join(" | ")
end

def trigger_pagerduty(summary)
  dedup_key = "billing-health-#{@billing_week}"
  Pagerduty::Wrapper.new(
    integration_key: pagerduty_integration_key
  ).client.incident(dedup_key).trigger(
    summary: summary,
    source: Rails.application.routes.default_url_options[:host],
    severity: "critical"
  )
rescue => e
  Rails.logger.error("Failed to trigger PagerDuty: #{e.message}")
end

def send_google_chat_notification(message)
  # Post to your team's Google Chat / Slack webhook
  HTTParty.post(
    google_chat_webhook,
    body: { text: message }.to_json,
    headers: { 'Content-Type' => 'application/json' }
  )
rescue => e
  Rails.logger.error("Failed to send Google Chat notification: #{e.message}")
end

def pagerduty_integration_key
  Rails.application.credentials[:pagerduty_billing_integration_key]
end

def google_chat_webhook
  Rails.application.credentials[:google_chat_monitoring_webhook]
end

2.6 The Dedup Key — Why It Matters

dedup_key = "billing-health-#{@billing_week}"

PagerDuty uses the dedup_key to group events about the same incident. If your billing check runs at 8:30 AM and again at 9:00 AM (e.g., after a retry), PagerDuty will update the existing incident instead of creating a second one and paging your on-call engineer twice.

Best practices for dedup keys:

  • Make them specific to the root cause, not the timestamp
  • Include the resource identifier (week date, job ID, etc.)
  • Use a format like {service}-{resource}-{date} for easy filtering in PagerDuty
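As a concrete sketch of that format (the values are illustrative):

```ruby
require 'date'

# {service}-{resource}-{date}: stable across reruns within the same billing week
billing_week = Date.new(2025, 1, 9)
dedup_key = "billing-health-#{billing_week.iso8601}"
puts dedup_key # => "billing-health-2025-01-09"

# A rerun an hour later builds the exact same key, so PagerDuty
# updates the existing incident instead of paging the on-call twice.
rerun_key = "billing-health-#{billing_week.iso8601}"
puts dedup_key == rerun_key # => true
```

Compare that with `"billing-health-#{Time.now.to_i}"`, which mints a new key, and a new incident, on every single run.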

Happy Integration!

🗄️ Browser Storage Mechanisms Explained (with Vue.js Examples)

Modern web applications often need to store data on the client side – whether it’s user preferences, form progress, or temporary UI state.
Browsers provide built-in storage mechanisms that help developers do exactly that, without hitting the backend every time.

In this post, we’ll cover:

  • What browser storage is
  • localStorage vs sessionStorage
  • Basic JavaScript examples
  • Vue.js usage patterns
  • When to use (and not use) each
  • Why developers rely on browser storage
  • Comparison with React and Angular

🌐 What Is Browser Storage?

Browser storage allows web applications to store key–value data directly in the user’s browser.

Key characteristics:

  • Data is stored client-side
  • Data is stored as strings
  • Accessible via JavaScript
  • Faster than server round-trips

The most common browser storage mechanisms are:

  • localStorage
  • sessionStorage

(Both are part of the Web Storage API)


📦 localStorage

What is localStorage?

localStorage stores data persistently in the browser.

  • Survives page reloads
  • Survives browser restarts
  • Shared across all tabs of the same origin
  • Cleared only manually or via code

Basic JavaScript Example

// Save data
localStorage.setItem('theme', 'dark')
// Read data
const theme = localStorage.getItem('theme')
// Remove data
localStorage.removeItem('theme')

Vue.js Example

export default {
  data() {
    return {
      theme: localStorage.getItem('theme') || 'light'
    }
  },
  watch: {
    theme(newValue) {
      localStorage.setItem('theme', newValue)
    }
  }
}

When to Use localStorage

  • User preferences (theme, language)
  • Remembered UI settings
  • Non-sensitive data that should persist
  • Cross-tab shared state

When NOT to Use localStorage

  • Authentication tokens (use httpOnly cookies)
  • Sensitive personal data
  • Temporary flows (signup, checkout steps)

⏳ sessionStorage

What is sessionStorage?

sessionStorage stores data only for the lifetime of a browser tab.

  • Cleared when the tab is closed
  • Not shared across tabs
  • Survives page refresh
  • Scoped per tab/window

Basic JavaScript Example

// Save data
sessionStorage.setItem('step', '2')
// Read data
const step = sessionStorage.getItem('step')
// Remove data
sessionStorage.removeItem('step')

Vue.js Example (Signup Flow)

export default {
  data() {
    return {
      postalCode: sessionStorage.getItem('postalCode') || ''
    }
  },
  methods: {
    savePostalCode() {
      sessionStorage.setItem('postalCode', this.postalCode)
    }
  }
}

When to Use sessionStorage

  • Multi-step forms
  • Signup / onboarding flows
  • Temporary UI state
  • One-time user journeys

When NOT to Use sessionStorage

  • Long-term preferences
  • Data needed across tabs
  • Anything that must survive browser close

⚖️ localStorage vs sessionStorage

Feature | localStorage | sessionStorage
Lifetime | Persistent | Until tab closes
Scope | All tabs | Single tab
Shared across tabs | ✅ Yes | ❌ No
Survives reload | ✅ Yes | ✅ Yes
Use case | Preferences | Temporary flows
Storage size | ~5–10MB | ~5MB

🤔 Why Web Developers Use Browser Storage

Developers use browser storage because it:

  • Improves performance
  • Reduces API calls
  • Remembers user intent
  • Simplifies UI state management
  • Works instantly (no async fetch)

Browser storage is often used as a support layer, not a replacement for backend storage.


⚠️ Security Considerations

Important points to remember:

  • ❌ Data is NOT encrypted
  • ❌ Accessible via JavaScript
  • ❌ Vulnerable to XSS attacks

Never store:

  • Passwords
  • JWTs
  • Payment data
  • Sensitive personal information

⚛️ Comparison with React and Angular

Browser storage usage is framework-agnostic.

Vue.js

sessionStorage.setItem('key', value)

React

useEffect(() => {
  localStorage.setItem('key', value)
}, [value])

Angular

localStorage.setItem('key', value)

All frameworks rely on the same Web Storage API; the difference lies in state management patterns, not in the storage itself.


Best Practices

  • Store only strings (use JSON.stringify on write and JSON.parse on read)
  • Always handle null (missing keys return null)
  • Clean up storage after temporary flows
  • Wrap storage logic in utilities or composables
  • Never trust browser-stored data blindly
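A minimal utility sketch tying those practices together (the `makeSafeStorage` name is ours; the backing store is injectable, so you can pass `window.localStorage` or `window.sessionStorage` in the browser and a stub in tests):

```javascript
// Minimal wrapper: JSON serialization and null handling in one place.
function makeSafeStorage(storage) {
  return {
    set(key, value) {
      // store only strings: serialize everything explicitly
      storage.setItem(key, JSON.stringify(value))
    },
    get(key, fallback = null) {
      const raw = storage.getItem(key)
      if (raw === null) return fallback // missing keys come back as null
      try {
        return JSON.parse(raw)
      } catch (e) {
        return fallback // never trust stored data blindly
      }
    },
    remove(key) {
      storage.removeItem(key)
    }
  }
}
```

In a Vue app this pairs naturally with a composable: call `makeSafeStorage(localStorage)` once and expose `get`/`set` to your components.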

Final Thoughts

  • localStorage → persistent, shared, preference-based
  • sessionStorage → temporary, per-tab, flow-based
  • Vue, React, and Angular all use the same browser API
  • Use browser storage wisely — as a helper, not a database

Used correctly, browser storage can make your frontend faster, smarter, and more user-friendly.


Happy web development!

The Evolution of Stripe’s Payment APIs: From Charges to Payment Intents

A developer’s guide to understanding Stripe’s API transformation and avoiding common migration pitfalls


The payment processing landscape has evolved dramatically over the past decade, and Stripe has been at the forefront of this transformation. One of the most significant changes in Stripe’s ecosystem was the transition from the Charges API to the Payment Intents API. This shift wasn’t just a cosmetic update – it represented a fundamental reimagining of how online payments should work in an increasingly complex global marketplace.

The Old World: Charges API (2011-2019)

The Simple Days

When Stripe first launched, online payments were relatively straightforward. The Charges API reflected this simplicity:

# The old way - direct charge creation
charge = Stripe::Charge.create({
  amount: 2000,
  currency: 'usd',
  source: 'tok_visa',  # Token from Stripe.js
  description: 'Example charge'
})

if charge.paid
  # Payment succeeded, fulfill order
  fulfill_order(charge.id)
else
  # Payment failed, show error
  handle_error(charge.failure_message)
end

This approach was beautifully simple: create a charge, check if it succeeded, done. The API returned a charge object with an ID like ch_1234567890, and that was your payment.

What Made It Work

The Charges API thrived in an era when:

  • Card payments dominated – Most transactions were simple credit/debit cards
  • 3D Secure was optional – Strong customer authentication wasn’t mandated
  • Regulations were simpler – PCI DSS was the main compliance concern
  • Payment methods were limited – Mostly cards, with PayPal as the main alternative
  • Mobile payments were nascent – Most transactions happened on desktop browsers

The Cracks Begin to Show

As the payments ecosystem evolved, the limitations of the Charges API became apparent:

Authentication Challenges: When 3D Secure authentication was required, the simple charge-and-done model broke down. Developers had to handle redirects, callbacks, and asynchronous completion manually.

Mobile Payment Integration: Apple Pay and Google Pay required more complex flows that didn’t map well to direct charge creation.

Regulatory Compliance: European PSD2 regulations introduced Strong Customer Authentication (SCA) requirements that the Charges API couldn’t elegantly handle.

Webhook Reliability: With complex payment flows, relying on synchronous responses became insufficient. Webhooks were critical, but the Charges API didn’t provide a cohesive event model.

The Catalyst: PSD2 and Strong Customer Authentication

The European Union’s Revised Payment Services Directive (PSD2), which came into effect in 2019, was the final nail in the coffin for simple payment flows. PSD2 mandated Strong Customer Authentication (SCA) for most online transactions, requiring:

  • Two-factor authentication for customers
  • Dynamic linking between payment and authentication
  • Exemption handling for low-risk transactions

The Charges API, with its synchronous create-and-complete model, simply couldn’t handle these requirements elegantly.

The New Era: Payment Intents API (2019-Present)

A Paradigm Shift

Stripe’s response was revolutionary: instead of treating payments as simple charge operations, they reconceptualized them as intents that could evolve through multiple states:

# The modern way - intent-based payments
payment_intent = Stripe::PaymentIntent.create({
  amount: 2000,
  currency: 'usd',
  payment_method: 'pm_card_visa',
  confirmation_method: 'manual',
  capture_method: 'automatic'
})

case payment_intent.status
when 'requires_confirmation'
  # Confirm the payment intent
  payment_intent.confirm
when 'requires_action'
  # Handle 3D Secure or other authentication
  handle_authentication(payment_intent.client_secret)
when 'succeeded'
  # Payment completed, fulfill order
  fulfill_order(payment_intent.id)
when 'requires_payment_method'
  # Payment failed, request new payment method
  handle_payment_failure
end

The Intent Lifecycle

Payment Intents introduced a state machine that could handle complex payment flows:

requires_payment_method → requires_confirmation → requires_action → succeeded
                       ↓                      ↓                 ↓
                   canceled              canceled          requires_capture
                                                               ↓
                                                           succeeded

This model elegantly handles scenarios that would break the Charges API:

3D Secure Authentication:

# Payment requires additional authentication
if payment_intent.status == 'requires_action'
  # Frontend handles 3D Secure challenge
  # Webhook confirms completion asynchronously
end

Delayed Capture:

# Authorize now, capture later
payment_intent = Stripe::PaymentIntent.create({
  amount: 2000,
  currency: 'usd',
  payment_method: 'pm_card_visa',
  capture_method: 'manual'  # Authorize only
})

# Later, when ready to fulfill
payment_intent.capture({ amount_to_capture: 1500 })

Key Architectural Changes

1. Separation of Concerns

Payment Intents represent the intent to collect payment and track the payment lifecycle.

Charges become implementation details—the actual movement of money that happens within a Payment Intent.

# A successful Payment Intent contains charges
payment_intent = Stripe::PaymentIntent.retrieve('pi_1234567890')
puts payment_intent.charges.data.first.id  # => "ch_0987654321"

2. Enhanced Webhook Events

Payment Intents provide richer webhook events that track the entire payment lifecycle:

# webhook_endpoints.rb
case event.type
when 'payment_intent.succeeded'
  handle_successful_payment(event.data.object)
when 'payment_intent.payment_failed'
  handle_failed_payment(event.data.object)
when 'payment_intent.requires_action'
  notify_customer_action_required(event.data.object)
end

3. Client-Side Integration

The Payment Intents API encouraged better client-side integration through Stripe Elements and mobile SDKs:

// Modern client-side payment confirmation
const {error} = await stripe.confirmCardPayment(clientSecret, {
  payment_method: {
    card: cardElement,
    billing_details: {name: 'Jenny Rosen'}
  }
});

if (error) {
  // Handle error
} else {
  // Payment succeeded, redirect to success page
}

Migration Challenges and Solutions

The ID Problem: A Real-World Example

One of the most common migration issues developers face is the ID confusion between Payment Intents and Charges. Here’s a real scenario:

# Legacy refund code expecting charge IDs
def process_refund(charge_id, amount)
  Stripe::Refund.create({
    charge: charge_id,  # Expects ch_xxx
    amount: amount
  })
end

# But Payment Intents return pi_xxx IDs
payment_intent = create_payment_intent(...)
process_refund(payment_intent.id, 500)  # ❌ Fails!

The Solution: Extract the actual charge ID from successful Payment Intents:

def get_charge_id_for_refund(payment_intent)
  if payment_intent.status == 'succeeded'
    payment_intent.charges.data.first.id  # Returns ch_xxx
  else
    raise "Cannot refund unsuccessful payment"
  end
end

# Correct usage
payment_intent = Stripe::PaymentIntent.retrieve('pi_1234567890')
charge_id = get_charge_id_for_refund(payment_intent)
process_refund(charge_id, 500)  # ✅ Works!

Database Schema Evolution

Many applications need to update their database schemas to accommodate both old and new payment types:

# Migration to support both charge and payment intent IDs
class AddPaymentIntentSupport < ActiveRecord::Migration[6.0]
  def change
    add_column :payments, :stripe_payment_intent_id, :string
    add_column :payments, :payment_type, :string, default: 'charge'

    add_index :payments, :stripe_payment_intent_id
    add_index :payments, :payment_type
  end
end

# Updated model to handle both
class Payment < ApplicationRecord
  def stripe_id
    case payment_type
    when 'payment_intent'
      stripe_payment_intent_id
    when 'charge'
      stripe_charge_id
    end
  end

  def refundable_charge_id
    if payment_type == 'payment_intent'
      # Fetch the actual charge ID from the payment intent
      pi = Stripe::PaymentIntent.retrieve(stripe_payment_intent_id)
      pi.charges.data.first.id
    else
      stripe_charge_id
    end
  end
end

Webhook Handler Updates

Webhook handling becomes more sophisticated with Payment Intents:

# Legacy charge webhook handling
def handle_charge_webhook(event)
  charge = event.data.object

  case event.type
  when 'charge.succeeded'
    mark_payment_successful(charge.id)
  when 'charge.failed'
    mark_payment_failed(charge.id)
  end
end

# Modern payment intent webhook handling
def handle_payment_intent_webhook(event)
  payment_intent = event.data.object

  case event.type
  when 'payment_intent.succeeded'
    # Payment completed successfully
    complete_order(payment_intent.id)

  when 'payment_intent.payment_failed'
    # The latest payment attempt failed (this event fires per attempt)
    cancel_order(payment_intent.id)

  when 'payment_intent.requires_action'
    # Customer needs to complete authentication
    notify_action_required(payment_intent.id, payment_intent.client_secret)

  when 'payment_intent.amount_capturable_updated'
    # Partial capture scenarios
    handle_partial_authorization(payment_intent.id)
  end
end

Best Practices for Modern Stripe Integration

1. Embrace Asynchronous Patterns

With Payment Intents, assume payments are asynchronous:

class PaymentProcessor
  def create_payment(amount, customer_id, payment_method_id)
    payment_intent = Stripe::PaymentIntent.create({
      amount: amount,
      currency: 'usd',
      customer: customer_id,
      payment_method: payment_method_id,
      confirmation_method: 'automatic',
      confirm: true,  # attempt confirmation now so the status below reflects the result
      return_url: success_url
    })

    # Don't assume immediate success
    case payment_intent.status
    when 'succeeded'
      complete_payment_immediately(payment_intent)
    when 'requires_action'
      # Send client_secret to frontend for authentication
      { status: 'requires_action', client_secret: payment_intent.client_secret }
    when 'requires_payment_method'
      { status: 'failed', error: 'Payment method declined' }
    else
      # Wait for webhook confirmation
      { status: 'processing', payment_intent_id: payment_intent.id }
    end
  end
end

2. Implement Robust Webhook Handling

Webhooks are critical for Payment Intents—implement them defensively:

class StripeWebhookController < ApplicationController
  protect_from_forgery except: :handle

  def handle
    payload = request.body.read
    sig_header = request.env['HTTP_STRIPE_SIGNATURE']

    begin
      event = Stripe::Webhook.construct_event(
        payload, sig_header, ENV['STRIPE_WEBHOOK_SECRET']
      )
    rescue JSON::ParserError, Stripe::SignatureVerificationError
      head :bad_request and return
    end

    # Handle idempotently
    return head :ok if processed_event?(event.id)

    case event.type
    when 'payment_intent.succeeded'
      PaymentSuccessJob.perform_later(event.data.object.id)
    when 'payment_intent.payment_failed'
      PaymentFailureJob.perform_later(event.data.object.id)
    end

    mark_event_processed(event.id)
    head :ok
  end

  private

  def processed_event?(event_id)
    Rails.cache.exist?("stripe_event_#{event_id}")
  end

  def mark_event_processed(event_id)
    Rails.cache.write("stripe_event_#{event_id}", true, expires_in: 24.hours)
  end
end
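One caveat with the controller above: Rails.cache deduplication is best-effort, since entries can be evicted before the 24-hour window ends and a redelivered event would slip through. The dedup logic itself is easy to isolate and test. A minimal in-memory sketch (a hypothetical `EventDeduper` class; production code would back this with a database unique index or the cache, as shown above):

```ruby
require 'set'

# Hypothetical EventDeduper: remembers which event IDs have been handled so
# a redelivered webhook becomes a no-op. The in-memory Set is for
# illustration only; real deduplication needs shared, durable storage.
class EventDeduper
  def initialize
    @seen = Set.new
  end

  # Returns :processed the first time an ID is seen, :duplicate afterwards.
  # Set#add? returns the set when the element is new and nil otherwise.
  def handle(event_id)
    @seen.add?(event_id) ? :processed : :duplicate
  end
end

deduper = EventDeduper.new
deduper.handle('evt_123')  # => :processed
deduper.handle('evt_123')  # => :duplicate
```

Whatever the backing store, the important property is the same: marking an event as seen and checking for it must behave atomically, so two workers receiving the same delivery cannot both process it.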

3. Handle Multiple Payment Methods Gracefully

Payment Intents excel at handling diverse payment methods:

def create_flexible_payment(amount, payment_method_types = ['card'])
  Stripe::PaymentIntent.create({
    amount: amount,
    currency: 'usd',
    payment_method_types: payment_method_types,
    metadata: {
      order_id: @order.id,
      customer_email: @customer.email
    }
  })
end

# Support multiple payment methods
payment_intent = create_flexible_payment(2000, ['card', 'klarna', 'afterpay_clearpay'])

4. Implement Proper Error Handling

Payment Intents provide detailed error information:

def handle_payment_error(payment_intent)
  last_payment_error = payment_intent.last_payment_error

  case last_payment_error&.code
  when 'authentication_required'
    # Redirect to 3D Secure
    redirect_to_authentication(payment_intent.client_secret)

  when 'card_declined'
    decline_code = last_payment_error.decline_code
    case decline_code
    when 'insufficient_funds'
      show_error("Insufficient funds on your card")
    when 'expired_card'
      show_error("Your card has expired")
    else
      show_error("Your card was declined")
    end

  when 'processing_error'
    show_error("A processing error occurred. Please try again.")

  else
    show_error("An unexpected error occurred")
  end
end

The Future: What’s Next?

1. Embedded Payments

Stripe continues to innovate with embedded payment solutions that make Payment Intents even more powerful:

# Embedded checkout with Payment Intents
payment_intent = Stripe::PaymentIntent.create({
  amount: 2000,
  currency: 'usd',
  automatic_payment_methods: { enabled: true },
  metadata: { integration_check: 'accept_a_payment' }
})

2. Real-Time Payments

As real-time payment networks like FedNow and Open Banking expand, Payment Intents provide the flexibility to support these new methods seamlessly.

3. Cross-Border Optimization

Payment Intents are evolving to better handle multi-currency and cross-border transactions with improved routing and local payment method support.
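One multi-currency detail worth handling early: Stripe amounts are integers in the currency's smallest unit, and zero-decimal currencies (JPY, KRW, and others) pass the amount as-is rather than in hundredths. A minimal conversion sketch (the zero-decimal list here is abbreviated for illustration; consult Stripe's currency documentation for the full set):

```ruby
# Convert a decimal amount into the integer Stripe expects.
# Abbreviated zero-decimal currency list for illustration.
ZERO_DECIMAL_CURRENCIES = %w[bif clp djf gnf jpy kmf krw mga pyg rwf vnd vuv xaf xof xpf].freeze

def to_stripe_amount(amount, currency)
  if ZERO_DECIMAL_CURRENCIES.include?(currency.to_s.downcase)
    amount.round            # JPY 2000 stays 2000
  else
    (amount * 100).round    # USD 20.00 becomes 2000
  end
end

to_stripe_amount(20.00, 'usd')  # => 2000
to_stripe_amount(2000, 'jpy')   # => 2000
```

Centralizing this conversion keeps a stray `amount * 100` from silently overcharging a yen-denominated customer a hundredfold.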

Key Takeaways for Developers

  1. Payment Intents are the future: If you’re building new payment functionality, start with Payment Intents, not Charges.
  2. Embrace asynchronous patterns: Don’t expect payments to complete immediately. Design your system around webhooks and state management.
  3. Handle the ID confusion: Remember that Payment Intents (pi_) contain Charges (ch_). Refunds and some other operations still work on charge IDs.
  4. Implement robust webhook handling: With complex payment flows, webhooks become critical infrastructure, not nice-to-have features.
  5. Test thoroughly: The increased complexity of Payment Intents requires more comprehensive testing, especially around authentication flows and edge cases.
  6. Monitor proactively: Use Stripe’s dashboard and logs extensively during development and deployment to understand payment flow behavior.
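Takeaway 3 is cheap to enforce in code: a small guard that classifies a Stripe ID by its prefix lets refund paths fail fast when handed a pi_ ID instead of silently calling the API with the wrong object. A minimal sketch (hypothetical helper, prefix checks only):

```ruby
# Classify a Stripe ID by its prefix so refund code can reject
# PaymentIntent IDs before making an API call.
def stripe_id_type(id)
  case id
  when /\Api_/ then :payment_intent
  when /\Ach_/ then :charge
  else :unknown
  end
end

stripe_id_type('pi_1234567890')  # => :payment_intent
stripe_id_type('ch_0987654321')  # => :charge
```

Dropping a check like `raise ArgumentError unless stripe_id_type(id) == :charge` at the top of a refund method turns the silent failure from the migration example into an immediate, descriptive error.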

Conclusion

The evolution from Stripe’s Charges API to Payment Intents represents more than just a technical upgrade—it’s a fundamental shift toward a more flexible, regulation-compliant, and globally-aware payment processing model. While the migration requires thoughtful planning and careful implementation, the benefits in terms of supported payment methods, authentication handling, and regulatory compliance make it essential for any serious payment processing application.

The key is to approach the migration systematically: understand the differences, plan for the ID confusion, implement robust webhook handling, and test extensively. With these foundations in place, Payment Intents unlock capabilities that simply weren’t possible with the older Charges API.

As global payment regulations continue to evolve and new payment methods emerge, Payment Intents provide the architectural flexibility to adapt and grow. The initial complexity investment pays dividends in long-term maintainability and feature capability.

For developers still using the Charges API, the writing is on the wall: it’s time to embrace the future of payment processing with Payment Intents.


Have you encountered similar challenges migrating from Charges to Payment Intents? What patterns have worked best in your applications? Share your experiences in the comments below.