Writing Effective Test Cases 🚧 for Your Ruby on Rails Model: A Guide

When it comes to building robust and maintainable applications, writing test cases is a crucial practice. In this guide, I will walk you through writing effective test cases for a Ruby on Rails model using a common model name, “Task.” The concepts discussed here are applicable to any model in your Rails application.

Why Write Test Cases?

Writing test cases is essential for several reasons:

  1. Bug Detection: Test cases help uncover and fix bugs before they impact users.
  2. Regression Prevention: Tests ensure that new code changes do not break existing functionality.
  3. Documentation: Well-written test cases serve as documentation for your codebase, making it easier for other developers to understand and modify the code.
  4. Refactoring Confidence: Tests provide the confidence to refactor code knowing that you won’t introduce defects.
  5. Collaboration: Tests facilitate collaboration within development teams by providing a common set of expectations.

Now, let’s dive into creating test cases for a Ruby on Rails model.

Model: Task

We will use a model called “Task” as an example. Tasks might represent items on a to-do list, items in a project management system, or any other entity that requires tracking and management.

Setting Up the Environment

Before writing test cases, ensure that your Ruby on Rails application is set up correctly with the testing framework of your choice. Rails ships with Minitest by default, and RSpec is a popular alternative. For this guide, we'll use Minitest.

# Gemfile
group :test do
  gem 'minitest'
  # Other testing gems...
end

After updating your Gemfile, run bundle install to install the testing gems. Ensure your test database is set up and up-to-date by running bin/rails db:test:prepare.

Writing Test Cases

Model Validation

The first set of test cases should focus on validating the model’s attributes. For our Task model, we might want to ensure that the title is present and within an acceptable length range.

# test/models/task_test.rb

require 'test_helper'

class TaskTest < ActiveSupport::TestCase
  test "should not save task without title" do
    task = Task.new
    assert_not task.save, "Saved the task without a title"
  end

  test "should save task with valid title" do
    task = Task.new(title: "A valid task title")
    assert task.save, "Could not save the task with a valid title"
  end
end
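The validation tests above assume a Task model with a presence validation on title, roughly like this (a sketch; the length limit is illustrative, not something the tests require):

```ruby
# app/models/task.rb (hypothetical)
class Task < ApplicationRecord
  validates :title, presence: true, length: { maximum: 255 }
end
```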
Testing Associations

In Rails, models often have associations with other models. For example, a Task might belong to a User. You can write test cases to ensure these associations work correctly.

# test/models/task_test.rb

class TaskTest < ActiveSupport::TestCase
  # ...

  test "task should belong to a user" do
    user = User.create(name: "John")
    task = Task.new(title: "Task", user: user)
    assert_equal user, task.user, "Task does not belong to the correct user"
  end
end
Custom Model Methods

If your model contains custom methods, ensure they behave as expected. For example, if you have a method that returns the completion status of a task, test it.

# test/models/task_test.rb

class TaskTest < ActiveSupport::TestCase
  # ...

  test "task should return completion status" do
    task = Task.new(title: "Task", completed: false)
    assert_equal "Incomplete", task.completion_status
    task.completed = true
    assert_equal "Complete", task.completion_status
  end
end
Scopes

Scopes allow you to define common queries for your models. Write test cases to ensure scopes return the expected results.

# test/models/task_test.rb

class TaskTest < ActiveSupport::TestCase
  # ...

  test "completed scope should return completed tasks" do
    Task.create(title: "Completed Task", completed: true)
    Task.create(title: "Incomplete Task", completed: false)

    completed_tasks = Task.completed
    assert_equal 1, completed_tasks.length
    assert_equal "Completed Task", completed_tasks.first.title
  end
end
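The association, custom-method, and scope tests above assume model code along these lines (a sketch of one possible implementation, not pulled from a real app):

```ruby
# app/models/task.rb (hypothetical)
class Task < ApplicationRecord
  belongs_to :user, optional: true

  validates :title, presence: true

  scope :completed, -> { where(completed: true) }

  def completion_status
    completed? ? "Complete" : "Incomplete"
  end
end
```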

Running Tests

You can run your tests with the following command:

bin/rails test

This command executes all the test cases you’ve written in your test files. To run a single file, pass its path: bin/rails test test/models/task_test.rb.

Conclusion

Writing test cases is an essential practice in building reliable and maintainable Ruby on Rails applications. In this guide, we’ve explored how to write effective test cases for a model using a common model name, “Task.” These principles can be applied to test any model in your Rails application.

By writing comprehensive test cases, you ensure that your application functions correctly, maintains quality over time, and makes collaboration within your development team more efficient.

Happy testing!

🧬 Extracting and Joining on Ancestry Values in PostgreSQL: A Complete Guide

I recently worked on a project where we ran into issues with ancestry path data in a PostgreSQL database. Working with hierarchical data in PostgreSQL often involves dealing with ancestry paths stored as delimited strings. This guide explores how to extract specific values from ancestry columns and use them effectively in join operations, complete with practical examples, troubleshooting tips, and the fixes I applied.

🎯 Introduction

PostgreSQL’s robust string manipulation capabilities make it ideal for handling complex hierarchical data structures. When working with ancestry values stored in text columns, you often need to extract specific parts of the hierarchy for data analysis, reporting, or joining operations.

This article demonstrates how to:

  • ✨ Extract values from ancestry strings using regular expressions
  • 🔗 Perform efficient joins on extracted ancestry data
  • 🛡️ Handle edge cases and avoid common pitfalls
  • ⚡ Optimize queries for better performance

❓ Problem Statement

📊 Scenario

Consider a projects table with an ancestry column containing hierarchical paths like:

-- Sample ancestry values
"6/4/5/3"     -- Parent chain: 6 → 4 → 5 → 3
"1/2"         -- Parent chain: 1 → 2
"9"           -- Single parent: 9
NULL          -- Root level project

🎯 Goal

We need to:

  1. Extract the last integer value from the ancestry path
  2. Use this value in a JOIN operation to fetch parent project data
  3. Handle edge cases like NULL values and malformed strings
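If you also need the same extraction on the application side (for example in a Rails app using the ancestry gem), it is straightforward on the raw string. This helper is hypothetical, mirroring the SQL logic including its edge cases:

```ruby
# Extract the last (immediate parent) id from an ancestry string.
def last_ancestor_id(ancestry)
  last = ancestry.to_s.split('/').last
  # Guard against malformed segments; return nil for roots and bad data.
  last =~ /\A\d+\z/ ? last.to_i : nil
end

last_ancestor_id("6/4/5/3")  # => 3
last_ancestor_id("9")        # => 9
last_ancestor_id(nil)        # => nil (root-level row)
last_ancestor_id("abc/def")  # => nil (malformed)
```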

🏗️ Understanding the Data Structure

📁 Table Structure

CREATE TABLE projects (
    id BIGINT PRIMARY KEY,
    name VARCHAR(255) NOT NULL,
    ancestry TEXT,  -- Stores parent hierarchy as "id1/id2/id3"
    created_at TIMESTAMP DEFAULT NOW()
);

-- Sample data
INSERT INTO projects (id, name, ancestry) VALUES
    (1, 'Root Project', NULL),
    (2, 'Department A', '1'),
    (3, 'Team Alpha', '1/2'),
    (4, 'Task 1', '1/2/3'),
    (5, 'Subtask 1A', '1/2/3/4');

🧭 Ancestry Path Breakdown

 Project ID | Name         | Ancestry | Immediate Parent
------------+--------------+----------+------------------
 1          | Root Project | NULL     | None (root)
 2          | Department A | 1        | 1
 3          | Team Alpha   | 1/2      | 2
 4          | Task 1       | 1/2/3    | 3
 5          | Subtask 1A   | 1/2/3/4  | 4

🔧 Solution Overview

🎯 Core Approach

  1. 🔍 Pattern Matching: Use regex to identify the last number in the ancestry string
  2. ✂️ Value Extraction: Extract the matched value using regexp_replace()
  3. 🔄 Type Conversion: Cast the extracted string to the appropriate numeric type
  4. 🔗 Join Operation: Use the converted value in JOIN conditions

📝 Basic Query Structure

SELECT projects.*
FROM projects
LEFT OUTER JOIN projects AS parent_project 
    ON CAST(
        regexp_replace(projects.ancestry, '.*\/(\d+)$', '\1')
        AS BIGINT
    ) = parent_project.id
WHERE projects.ancestry IS NOT NULL;

📝 Regular Expression Deep Dive

🎯 Pattern Breakdown: .*\/(\d+)$

Let’s dissect this regex pattern:

.*      -- Match any characters (greedy)
\/      -- Match literal forward slash
(\d+)   -- Capture group: one or more digits
$       -- End of string anchor

📊 Pattern Matching Examples

 Ancestry String | Regex Match  | Captured Group | Result
-----------------+--------------+----------------+------------------------------
 "6/4/5/3"       | whole string | 3              | ✅ 3
 "1/2"           | whole string | 2              | ✅ 2
 "9"             | no match     | —              | ❌ original string returned
 "abc/def"       | no match     | —              | ❌ original string returned

Note that because .* is greedy, the full match spans the whole string; regexp_replace keeps only the captured group.
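The pattern behaves the same way in Ruby, which is handy for a quick sanity check outside the database:

```ruby
pattern = %r{.*/(\d+)$}

"6/4/5/3"[pattern, 1]  # => "3" (capture group 1)
"1/2"[pattern, 1]      # => "2"
"9"[pattern, 1]        # => nil (no slash, so no match)
```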

🔧 Alternative Regex Patterns

-- For single-level ancestry (no slashes)
regexp_replace(ancestry, '^(\d+)$', '\1')

-- For extracting first parent instead of last
regexp_replace(ancestry, '^(\d+)\/.*', '\1')

-- For handling mixed delimiters (/ or -)
regexp_replace(ancestry, '.*[\/\-](\d+)$', '\1')

💻 Implementation Examples

🔧 Example 1: Basic Parent Lookup

-- Find each project with its immediate parent information
SELECT 
    p.id,
    p.name AS project_name,
    p.ancestry,
    parent.id AS parent_id,
    parent.name AS parent_name
FROM projects p
LEFT OUTER JOIN projects parent 
    ON CAST(
        regexp_replace(p.ancestry, '.*\/(\d+)$', '\1')
        AS BIGINT
    ) = parent.id
WHERE p.ancestry IS NOT NULL
ORDER BY p.id;

Expected Output:

 id | project_name | ancestry | parent_id | parent_name
----+--------------+----------+-----------+-------------
  2 | Department A | 1        |         1 | Root Project
  3 | Team Alpha   | 1/2      |         2 | Department A
  4 | Task 1       | 1/2/3    |         3 | Team Alpha
  5 | Subtask 1A   | 1/2/3/4  |         4 | Task 1

🎯 Example 2: Handling Edge Cases

-- Robust query that handles all edge cases
SELECT 
    p.id,
    p.name AS project_name,
    p.ancestry,
    CASE 
        WHEN p.ancestry IS NULL THEN 'Root Level'
        WHEN p.ancestry !~ '.*\/(\d+)$' THEN 'Single Parent'
        ELSE 'Multi-level'
    END AS hierarchy_type,
    parent.name AS parent_name
FROM projects p
LEFT OUTER JOIN projects parent ON 
    CASE 
        -- Handle multi-level ancestry
        WHEN p.ancestry ~ '.*\/(\d+)$' THEN
            CAST(regexp_replace(p.ancestry, '.*\/(\d+)$', '\1') AS BIGINT)
        -- Handle single-level ancestry
        WHEN p.ancestry ~ '^\d+$' THEN
            CAST(p.ancestry AS BIGINT)
        ELSE NULL
    END = parent.id
ORDER BY p.id;

📈 Example 3: Aggregating Child Counts

-- Count children for each project
WITH parent_child_mapping AS (
    SELECT 
        p.id AS child_id,
        CASE 
            WHEN p.ancestry ~ '.*\/(\d+)$' THEN
                CAST(regexp_replace(p.ancestry, '.*\/(\d+)$', '\1') AS BIGINT)
            WHEN p.ancestry ~ '^\d+$' THEN
                CAST(p.ancestry AS BIGINT)
            ELSE NULL
        END AS parent_id
    FROM projects p
    WHERE p.ancestry IS NOT NULL
)
SELECT 
    p.id,
    p.name,
    COUNT(pcm.child_id) AS direct_children_count
FROM projects p
LEFT JOIN parent_child_mapping pcm ON p.id = pcm.parent_id
GROUP BY p.id, p.name
ORDER BY direct_children_count DESC;

🚨 Common Errors and Solutions

Error 1: “invalid input syntax for type bigint”

Problem:

-- ❌ Incorrect: Casting entire ancestry string
CAST(projects.ancestry AS BIGINT) = parent.id

Solution:

-- ✅ Correct: Cast only the extracted value
CAST(
    regexp_replace(projects.ancestry, '.*\/(\d+)$', '\1') 
    AS BIGINT
) = parent.id

Error 2: Unexpected Results with Single-Level Ancestry

Problem: Single values like "9" don’t match the pattern .*\/(\d+)$

Solution:

-- ✅ Handle both multi-level and single-level ancestry
CASE 
    WHEN ancestry ~ '.*\/(\d+)$' THEN
        CAST(regexp_replace(ancestry, '.*\/(\d+)$', '\1') AS BIGINT)
    WHEN ancestry ~ '^\d+$' THEN
        CAST(ancestry AS BIGINT)
    ELSE NULL
END

Error 3: NULL Ancestry Values Causing Issues

Problem: NULL values can cause unexpected behaviour in joins

Solution:

-- ✅ Explicitly handle NULL values
WHERE ancestry IS NOT NULL 
AND ancestry != ''

🛡️ Complete Error-Resistant Query

SELECT 
    p.id,
    p.name AS project_name,
    p.ancestry,
    parent.id AS parent_id,
    parent.name AS parent_name
FROM projects p
LEFT OUTER JOIN projects parent ON 
    CASE 
        WHEN p.ancestry IS NULL OR p.ancestry = '' THEN NULL
        WHEN p.ancestry ~ '.*\/(\d+)$' THEN
            CAST(regexp_replace(p.ancestry, '.*\/(\d+)$', '\1') AS BIGINT)
        WHEN p.ancestry ~ '^\d+$' THEN
            CAST(p.ancestry AS BIGINT)
        ELSE NULL
    END = parent.id
ORDER BY p.id;

⚡ Performance Considerations

📊 Indexing Strategies

-- Create index on ancestry for faster pattern matching
CREATE INDEX idx_projects_ancestry ON projects (ancestry);

-- Create partial index for non-null ancestry values
CREATE INDEX idx_projects_ancestry_not_null 
ON projects (ancestry) 
WHERE ancestry IS NOT NULL;

-- Create functional index for extracted parent IDs
-- Note: an expression index needs an extra set of parentheses around the CASE
CREATE INDEX idx_projects_parent_id ON projects (
    (CASE 
        WHEN ancestry ~ '.*\/(\d+)$' THEN
            CAST(regexp_replace(ancestry, '.*\/(\d+)$', '\1') AS BIGINT)
        WHEN ancestry ~ '^\d+$' THEN
            CAST(ancestry AS BIGINT)
        ELSE NULL
    END)
) WHERE ancestry IS NOT NULL;

🔄 Query Optimization Tips

  1. 🎯 Use CTEs for Complex Logic
WITH parent_lookup AS (
    SELECT 
        id,
        CASE 
            WHEN ancestry ~ '.*\/(\d+)$' THEN
                CAST(regexp_replace(ancestry, '.*\/(\d+)$', '\1') AS BIGINT)
            WHEN ancestry ~ '^\d+$' THEN
                CAST(ancestry AS BIGINT)
        END AS parent_id
    FROM projects
    WHERE ancestry IS NOT NULL
)
SELECT p.*, parent.name AS parent_name
FROM parent_lookup p
JOIN projects parent ON p.parent_id = parent.id;
  2. ⚡ Consider Materialized Views for Frequent Queries
CREATE MATERIALIZED VIEW project_hierarchy AS
SELECT 
    p.id,
    p.name,
    p.ancestry,
    CASE 
        WHEN p.ancestry ~ '.*\/(\d+)$' THEN
            CAST(regexp_replace(p.ancestry, '.*\/(\d+)$', '\1') AS BIGINT)
        WHEN p.ancestry ~ '^\d+$' THEN
            CAST(p.ancestry AS BIGINT)
    END AS parent_id
FROM projects p;

-- Refresh when data changes
REFRESH MATERIALIZED VIEW project_hierarchy;

🛠️ Advanced Techniques

🔍 Extracting Multiple Ancestry Levels

-- Extract all ancestry levels as an array
SELECT 
    id,
    name,
    ancestry,
    string_to_array(ancestry, '/') AS ancestry_array,
    -- Get specific levels
    split_part(ancestry, '/', 1) AS level_1,
    split_part(ancestry, '/', 2) AS level_2,
    split_part(ancestry, '/', -1) AS last_level  -- negative index requires PostgreSQL 14+
FROM projects
WHERE ancestry IS NOT NULL;

🧮 Calculating Hierarchy Depth

-- Calculate the depth of each project in the hierarchy
SELECT 
    id,
    name,
    ancestry,
    CASE 
        WHEN ancestry IS NULL THEN 0
        ELSE array_length(string_to_array(ancestry, '/'), 1)
    END AS hierarchy_depth
FROM projects
ORDER BY hierarchy_depth, id;

🌳 Building Complete Hierarchy Paths

-- Recursive CTE to build full hierarchy paths
WITH RECURSIVE hierarchy_path AS (
    -- Base case: root projects
    SELECT 
        id,
        name,
        ancestry,
        name AS full_path,
        0 AS level
    FROM projects 
    WHERE ancestry IS NULL

    UNION ALL

    -- Recursive case: child projects
    SELECT 
        p.id,
        p.name,
        p.ancestry,
        hp.full_path || ' → ' || p.name AS full_path,
        hp.level + 1 AS level
    FROM projects p
    JOIN hierarchy_path hp ON 
        CASE 
            WHEN p.ancestry ~ '.*\/(\d+)$' THEN
                CAST(regexp_replace(p.ancestry, '.*\/(\d+)$', '\1') AS BIGINT)
            WHEN p.ancestry ~ '^\d+$' THEN
                CAST(p.ancestry AS BIGINT)
        END = hp.id
)
SELECT * FROM hierarchy_path
ORDER BY level, id;

✅ Best Practices

🎯 Data Validation

  1. ✅ Validate Ancestry Format on Insert/Update
-- Add constraint to ensure valid ancestry format
ALTER TABLE projects 
ADD CONSTRAINT check_ancestry_format 
CHECK (
    ancestry IS NULL 
    OR ancestry ~ '^(\d+)(\/\d+)*$'
);
  2. 🔍 Regular Data Integrity Checks
-- Find orphaned projects (ancestry points to non-existent parent)
SELECT p.id, p.name, p.ancestry
FROM projects p
WHERE p.ancestry IS NOT NULL
AND NOT EXISTS (
    SELECT 1 FROM projects parent
    WHERE parent.id = CASE 
        WHEN p.ancestry ~ '.*\/(\d+)$' THEN
            CAST(regexp_replace(p.ancestry, '.*\/(\d+)$', '\1') AS BIGINT)
        WHEN p.ancestry ~ '^\d+$' THEN
            CAST(p.ancestry AS BIGINT)
    END
);

🛡️ Error Handling

-- Function to safely extract parent ID
CREATE OR REPLACE FUNCTION extract_parent_id(ancestry_text TEXT)
RETURNS BIGINT AS $$
BEGIN
    IF ancestry_text IS NULL OR ancestry_text = '' THEN
        RETURN NULL;
    END IF;

    IF ancestry_text ~ '.*\/(\d+)$' THEN
        RETURN CAST(regexp_replace(ancestry_text, '.*\/(\d+)$', '\1') AS BIGINT);
    ELSIF ancestry_text ~ '^\d+$' THEN
        RETURN CAST(ancestry_text AS BIGINT);
    ELSE
        RETURN NULL;
    END IF;
EXCEPTION 
    WHEN OTHERS THEN
        RETURN NULL;
END;
$$ LANGUAGE plpgsql IMMUTABLE;

-- Usage
SELECT p.*, parent.name AS parent_name
FROM projects p
LEFT JOIN projects parent ON extract_parent_id(p.ancestry) = parent.id;

📊 Monitoring and Maintenance

-- Query to analyze ancestry data quality
SELECT 
    'Total Projects' AS metric,
    COUNT(*) AS count
FROM projects

UNION ALL

SELECT 
    'Projects with Ancestry' AS metric,
    COUNT(*) AS count
FROM projects 
WHERE ancestry IS NOT NULL

UNION ALL

SELECT 
    'Valid Ancestry Format' AS metric,
    COUNT(*) AS count
FROM projects 
WHERE ancestry ~ '^(\d+)(\/\d+)*$'

UNION ALL

SELECT 
    'Orphaned Projects' AS metric,
    COUNT(*) AS count
FROM projects p
WHERE p.ancestry IS NOT NULL
AND extract_parent_id(p.ancestry) NOT IN (SELECT id FROM projects);

📝 Conclusion

Working with ancestry data in PostgreSQL requires careful handling of string manipulation, type conversion, and edge cases. By following the techniques outlined in this guide, you can:

🎯 Key Takeaways

  1. 🔍 Use robust regex patterns to handle different ancestry formats
  2. 🛡️ Always handle edge cases like NULL values and malformed strings
  3. ⚡ Consider performance implications and use appropriate indexing
  4. ✅ Implement data validation to maintain ancestry integrity
  5. 🔧 Create reusable functions for complex extraction logic

💡 Final Recommendations

  • 🎯 Test thoroughly with various ancestry formats
  • 📊 Monitor query performance and optimize as needed
  • 🔄 Consider alternative approaches like ltree for complex hierarchies
  • 📚 Document your ancestry format for team members
  • 🛠️ Implement proper error handling in production code

The techniques demonstrated here provide a solid foundation for working with hierarchical data in PostgreSQL. Whether you’re building organizational charts, category trees, or project hierarchies, these patterns will help you extract and manipulate ancestry data effectively and reliably! 🚀


Understanding the Difference Between e.target and e.currentTarget in React

Introduction:

When working with React and handling events, it’s crucial to understand the difference between e.target and e.currentTarget. While they may seem similar at first glance, they have distinct roles and behaviors. In this blog post, we’ll delve into the details of these event properties and explore when to use each one.

  1. e.target: The Originating Element
    e.target refers to the element that triggered the event or where the event originated from. It represents the specific DOM element that the user interacted with, such as clicking a button or typing in an input field. It provides direct access to properties and attributes of that element.
  2. e.currentTarget: The Bound Element
    On the other hand, e.currentTarget refers to the element to which the event handler is attached. It remains constant, regardless of which element triggered the event. In most cases, e.currentTarget and e.target are the same element, especially when you attach the event handler directly to the target element.
  3. Differences in Markup Structure
    One important distinction arises when your markup structure involves nested elements or event delegation. If the event handler is attached to a parent element and an event occurs within one of its child elements, e.target will point to the specific child element, while e.currentTarget will reference the parent element. This behavior is particularly useful when you want to handle events for multiple elements within a container and need to determine which child element triggered the event.
  4. Practical Use Cases
    4.1 Individual Element Control:
    When you have multiple input elements, such as multiple text fields or checkboxes, and want to handle their changes individually, you should typically use e.target.value to access the specific value of the element that triggered the event. This ensures that you’re updating the correct state or performing the appropriate actions for that particular element.
    4.2 Event Delegation:
    In scenarios where you use event delegation, attaching a single event handler to a parent element to handle events from its child elements, e.currentTarget.value can be useful. It allows you to access the value of the parent element, which can be helpful when you want to track changes or perform actions based on the parent element’s state.
Conclusion:

Understanding the nuances between e.target and e.currentTarget in React is crucial for properly handling events and accessing the relevant elements and their properties. By grasping these differences, you can write more effective event handlers and ensure your application responds accurately to user interactions. Remember that e.target refers to the element where the event originated, while e.currentTarget represents the element to which the event handler is bound. Utilize this knowledge to build robust and interactive React applications.

Hopefully, this blog post clarifies the distinction between e.target and e.currentTarget in React and helps you write better event handling code.

Understanding the Array Aggregation Function in PostgreSQL (array_agg)

PostgreSQL, also known as Postgres, is a powerful and feature-rich relational database management system. One of its notable features is the array aggregation function, array_agg, which allows you to aggregate values from multiple rows into a single array. In this blog post, we’ll explore how array_agg works, its applications, and considerations for performance.

How Does array_agg Work?

The array_agg function takes an expression as an argument and returns an array containing the values of that expression for all the rows that match the query. Let’s illustrate this with an example.

Consider a table called employees with columns id, name, and department. Suppose we want to aggregate all the names of employees belonging to the “Sales” department into an array. We can achieve this using the following query:

SELECT array_agg(name) AS sales_employees
FROM employees
WHERE department = 'Sales';

The result of this query will be a single row with a column named sales_employees, which contains an array of all the names of employees in the “Sales” department.
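Two related variants are worth knowing; both are standard PostgreSQL syntax (FILTER requires 9.4 or later):

```sql
-- Deduplicate values while aggregating
SELECT department, array_agg(DISTINCT name) AS unique_names
FROM employees
GROUP BY department;

-- Aggregate only the rows matching a condition
SELECT array_agg(name) FILTER (WHERE department = 'Sales') AS sales_employees
FROM employees;
```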

Usage of array_agg with Subqueries

The ability to get an array as the output opens up various possibilities, especially when used in subqueries. You can leverage this feature to aggregate data from related tables or filter results based on complex conditions.

For instance, imagine you have two tables, orders and order_items, where each order can have multiple items. You want to retrieve a list of orders along with an array of item names for each order. The following query achieves this:

SELECT o.order_id, (
  SELECT array_agg(oi.item_name)
  FROM order_items oi
  WHERE oi.order_id = o.order_id
) AS item_names
FROM orders o;

In this example, the subquery within the main query’s select list utilizes array_agg to aggregate item names from the order_items table, specific to each order.

Complex Query Example Using array_agg

To demonstrate a more complex scenario, let’s consider a database that stores books and their authors. We have three tables: books, authors, and book_authors (a join table that associates books with their respective authors).

Suppose we want to retrieve a list of books along with an array of author names for each book by alphabetical order. We can achieve this using a query that involves joins and array_agg:

SELECT b.title, array_agg(a.author_name ORDER BY a.author_name ASC) AS authors
FROM books b
JOIN book_authors ba ON b.book_id = ba.book_id
JOIN authors a ON ba.author_id = a.author_id
GROUP BY b.book_id;

In this query, we join the tables based on their relationships and use array_agg to aggregate author names into an array for each book. The GROUP BY clause ensures that each book’s array of author names is grouped correctly.

Performance Considerations

While array_agg is a powerful function, it’s essential to consider its performance implications, especially when working with large datasets. Aggregating values into arrays can be computationally intensive, and the resulting array can consume significant memory.

If you anticipate working with large result sets or complex queries involving array_agg, it’s worth optimizing your database schema, indexing relevant columns, and analyzing query performance using PostgreSQL’s built-in tools.

Additionally, consider whether array_agg is the most efficient solution for your specific use case. Sometimes, alternative approaches, such as using temporary tables or custom aggregate functions, might offer better performance.

Conclusion

The array_agg function in PostgreSQL provides a powerful mechanism for aggregating values into arrays. It offers flexibility and opens up opportunities for various applications, including subqueries and complex data manipulations. However, when working with large datasets, it’s crucial to be mindful of potential performance implications and explore optimization strategies accordingly.

Understanding the Difference Between Date.current and Date.today in Ruby

Introduction:

In the world of Ruby programming, we often encounter scenarios where we need to work with dates. Ruby provides us with two methods, Date.current and Date.today, to retrieve the current date. Although they may appear similar at first glance, understanding their differences can help us write more accurate and reliable code. Let’s explore the reasons behind their existence, where we can use them, and the potential pitfalls we might encounter.

  1. Why are there two different methods?
    Ruby’s Date.current and Date.today methods exist to handle different time zone considerations. When developing applications using the Ruby on Rails framework, it’s crucial to account for the possibility of multiple time zones. Rails provides a simple and consistent way to handle time zone-related operations, and these two methods are part of that feature set.
  2. Where can we use them?
    a) Date.current: This method is specifically designed for Rails applications. It returns the current date in the time zone specified by the application’s configuration. It ensures that the date obtained is consistent across the entire application, regardless of the server or machine executing the code. Date.current is particularly useful when dealing with user interactions, scheduling, or any scenario where consistent time zone handling is necessary.

    b) Date.today: This method retrieves the current date based on the default time zone of the server or machine where the code is running. It is not limited to Rails applications and can be used in any Ruby program. However, when working in a Rails application, it’s generally recommended to use Date.current to maintain consistent time zone handling.
  3. Problems when using each method:
    Using these methods incorrectly or without understanding their differences can lead to unexpected results:
    a) Inconsistent time zones: If a Rails application is deployed across multiple servers or machines with different default time zones, using Date.today may produce inconsistent results. It can lead to situations where the same code yields different dates depending on the server’s time zone.

    b) Time zone misconfigurations: In Rails applications, failing to properly set the application’s time zone can result in incorrect date calculations. It’s crucial to configure the desired time zone in the application’s configuration file, ensuring that Date.current returns the expected results.
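The difference is easy to observe in a Rails console, or any script with ActiveSupport loaded (a sketch; the zone name is arbitrary, and the printed dates depend on when and where you run it):

```ruby
require "active_support/time"  # needs the activesupport gem

Time.zone = "Pacific/Auckland"

Date.today    # based on the machine's system time zone
Date.current  # based on Time.zone, i.e. Pacific/Auckland here
# Around midnight in either zone, these two can differ by a whole day.
```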

Conclusion:

Understanding the nuances between Date.current and Date.today in Ruby can greatly improve the accuracy and reliability of our code, particularly in Rails applications. By using Date.current, we ensure consistent time zone handling throughout the application, regardless of the server or machine executing the code. Carefully considering the appropriate method to use based on the specific context can prevent common pitfalls related to time zone inconsistencies.

Rails 6.1 introduces ‘compact_blank’

Before Rails 6.1, we removed blank values from an Array or Hash using other available methods.

Before:

  [...].delete_if(&:blank?)
  {...}.delete_if { |_k, v| v.blank? }

  # or

  [...].reject(&:blank?)
  {...}.reject { |_k, v| v.blank? }

From Rails 6.1 onwards, you can use the compact_blank and compact_blank! methods.

Now we can use:

[1, "", nil, 2, " ", [], {}, false, true].compact_blank
=> [1, 2, true]

['', nil, 8, [], {}].compact_blank
=> [8]

{ a: "", b: 1, c: nil, d: [], e: false, f: true }.compact_blank
=> {:b=>1, :f=>true}

compact_blank! is the destructive variant of compact_blank: it mutates the receiver in place, so handle it with care.

As a Rails developer, I am grateful for this method because there are many scenarios where we find ourselves replicating this code.
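Outside Rails, where ActiveSupport's blank? is not available, a rough plain-Ruby approximation looks like this (a sketch that treats nil, false, empty collections, and whitespace-only strings as blank, mirroring ActiveSupport's rules; the method names are made up):

```ruby
# True for nil, false, "", "  ", [], {} -- roughly ActiveSupport's blank?
def blankish?(value)
  return true if value.nil? || value == false
  return value.strip.empty? if value.is_a?(String)
  return value.empty? if value.respond_to?(:empty?)
  false
end

# compact_blank-alike for arrays and hashes
def compact_blankish(enum)
  if enum.is_a?(Hash)
    enum.reject { |_k, v| blankish?(v) }
  else
    enum.reject { |v| blankish?(v) }
  end
end

compact_blankish([1, "", nil, 2, " ", [], {}, false, true])
# => [1, 2, true]
compact_blankish(a: "", b: 1, c: nil, d: [], e: false, f: true)
# => {:b=>1, :f=>true}
```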

PostgreSQL commands to remember

A list of commands worth remembering when using the PostgreSQL database management system.

Login, Create user and password

# login to psql client
psql postgres # OR
psql -U postgres
create database mydb; # create db
create user abhilash with SUPERUSER CREATEDB CREATEROLE encrypted password 'abhilashPass!'; # create user
grant all privileges on database mydb to abhilash; # add privileges

Connect to DB, List tables and users, functions, views, schema

\l # lists all the databases
\c dbname # connect to db
\dt # show tables
\d table_name # Describe a table
\dn # List available schema
\df #  List available functions
\dS [your_table_name] # List triggers
\dv # List available views
\du # lists all user accounts and roles 
\du+ # is the extended version which shows even more information.

Show history, save to file, edit using editor, execution time, help

SELECT version(); # version of psql
\g  # Execute the previous command
\s # Command history
\s filename # save Command history to a file
\i filename # Execute psql commands from a file
\? # help on psql commands
\h ALTER TABLE # To get help on specific PostgreSQL statement
\timing #  Turn on/off query execution time
\e # Edit command in your own editor
\e [function_name] # It is more useful when you edit a function in the editor. Do \df for functions
\o [file_name] # send all next query results to file
    \o out.txt
    \dt 
    \o # switch
    \dt

Change output, Quit psql

# Switch output options
\a # switch between aligned and non-aligned column output
\H # format query output as HTML
\q # quit psql

Reference: https://www.postgresqltutorial.com/postgresql-administration/psql-commands/

PostgreSQL Cheat Sheet

CREATE DATABASE

CREATE DATABASE dbName;

CREATE TABLE (with auto numbering integer id)

CREATE TABLE tableName (
 id serial PRIMARY KEY,
 name varchar(50) UNIQUE NOT NULL,
 dateCreated timestamp DEFAULT current_timestamp
);

Add a primary key

ALTER TABLE tableName ADD PRIMARY KEY (id);

Create an INDEX

CREATE UNIQUE INDEX indexName ON tableName (columnNames);

Backup a database (command line)

pg_dump dbName > dbName.sql

Backup all databases (command line)

pg_dumpall > pgbackup.sql

Run a SQL script (command line)

psql -f script.sql databaseName

Search using a regular expression

SELECT column FROM table WHERE column ~ 'foo.*';

The first N records

SELECT columns FROM table LIMIT 10;

Pagination

SELECT cols FROM table LIMIT 10 OFFSET 30;

Prepared Statements

PREPARE preparedInsert (int, varchar) AS
  INSERT INTO tableName (intColumn, charColumn) VALUES ($1, $2);
EXECUTE preparedInsert (1,'a');
EXECUTE preparedInsert (2,'b');
DEALLOCATE preparedInsert;

Create a Function

CREATE OR REPLACE FUNCTION month (timestamp) RETURNS integer 
 AS 'SELECT date_part(''month'', $1)::integer;'
LANGUAGE 'sql';
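Once created, the function can be called like any built-in. A sketch, reusing the tableName / dateCreated columns from the CREATE TABLE example above:

```sql
-- Assumes the month() function and the tableName example table defined earlier.
SELECT name, month(dateCreated) FROM tableName;
```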

Table Maintenance

VACUUM ANALYZE table;

Reindex a database, table or index

REINDEX DATABASE dbName;

Show query plan

EXPLAIN SELECT * FROM table;

Import from a file

COPY destTable FROM '/tmp/somefile';
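By default COPY expects the server's text format; for CSV files you can pass format options. A sketch (the file path is illustrative):

```sql
-- Import a CSV file that has a header row (path and table name are examples).
COPY destTable FROM '/tmp/somefile.csv' WITH (FORMAT csv, HEADER);
```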

Show all runtime parameters

SHOW ALL;

Grant all permissions to a user

GRANT ALL PRIVILEGES ON table TO username;

Perform a transaction

BEGIN TRANSACTION;
 UPDATE accounts SET balance = balance + 50 WHERE id = 1;
COMMIT;

Basic SQL

Get all columns and rows from a table

SELECT * FROM table;

Add a new row

INSERT INTO table (column1,column2)
VALUES (1, 'one');

Update a row

UPDATE table SET foo = 'bar' WHERE id = 1;

Delete a row

DELETE FROM table WHERE id = 1;

From: https://www.petefreitag.com/cheatsheets/postgresql/


Setup Nginx, SSL , Firewall | Moving micro-services into AWS EC2 instance – Part 4

Install the Nginx proxy server. Nginx also acts as a load balancer, which helps distribute network traffic.

sudo apt-get update
sudo apt-get install nginx

Commands to stop, start, restart, check status

sudo systemctl stop nginx
sudo systemctl start nginx
sudo systemctl restart nginx

# after making configuration changes
sudo systemctl reload nginx
sudo systemctl disable nginx
sudo systemctl enable nginx

Install SSL – Letsencrypt

Install packages needed for ssl

sudo add-apt-repository ppa:certbot/certbot
sudo apt-get update
sudo apt-get install python-certbot-nginx

Install the SSL Certificate:

certbot -d '*.domain.com' -d domain.com --manual --preferred-challenges dns certonly

Your certificate and chain have been saved at:
   /etc/letsencrypt/live/domain.com/fullchain.pem

Your key file has been saved at:
   /etc/letsencrypt/live/domain.com/privkey.pem

SSL certificate auto renewal

Let’s Encrypt’s certificates are valid for 90 days. To automatically renew the certificates before they expire, the certbot package creates a cronjob which will run twice a day and will automatically renew any certificate 30 days before its expiration.

Since we are using the certbot webroot plug-in, once the certificate is renewed we also have to reload the nginx service. To do so, append --renew-hook "systemctl reload nginx" to the /etc/cron.d/certbot file so that it looks like this:

/etc/cron.d/certbot
0 */12 * * * root test -x /usr/bin/certbot -a \! -d /run/systemd/system && perl -e 'sleep int(rand(3600))' && certbot -q renew --renew-hook "systemctl reload nginx"

To test the renewal process, use the certbot --dry-run switch:

sudo certbot renew --dry-run

Renew your EXPIRED certificate this way:

sudo certbot --force-renewal -d '*.domain.com' -d domain.com --manual --preferred-challenges dns certonly

Are you OK with your IP being logged?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
(Y)es/(N)o: Y

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Please deploy a DNS TXT record under the name
_acme-challenge.<domain>.com with the following value:

O3bpxxxxxxxxxxxxxxxxxxxxxxxxxxY4TnNo

Before continuing, verify the record is deployed.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Press Enter to Continue

You need to update the DNS TXT record for _acme-challenge.<domain>.com

sudo systemctl restart nginx # restart nginx to take effect

Configure the Firewall

Next, we’ll update our firewall to allow HTTPS traffic.

Check the firewall status on the system. If it is inactive, enable the firewall.

sudo ufw status # check status

# enable firewall
sudo ufw enable
sudo ufw allow ssh
sudo ufw allow OpenSSH

Enable particular ports where your micro-services are running. Example:

sudo ufw allow 4031/tcp # Authentication service
sudo ufw allow 4131/tcp # File service
sudo ufw allow 4232/tcp # Search service

You can delete the ‘Authentication service’ firewall rule by:

sudo ufw delete allow 4031/tcp

Setup Ruby, ruby-build, rbenv-gemset | Conclusion – Moving micro-services into AWS EC2 instance – Part 3

In this post let’s setup Ruby and ruby gemsets for each project, so that your package versions are maintained.

Install ruby-build # ruby-build is a command-line utility for rbenv

git clone https://github.com/rbenv/ruby-build.git ~/.rbenv/plugins/ruby-build

# Add ruby build path

echo 'export PATH="$HOME/.rbenv/plugins/ruby-build/bin:$PATH"' >> ~/.bashrc # OR
echo 'export PATH="$HOME/.rbenv/plugins/ruby-build/bin:$PATH"' >> ~/.zshrc

# load it

source ~/.bashrc # OR
source ~/.zshrc


For macOS users


# verify rbenv
curl -fsSL https://github.com/rbenv/rbenv-installer/raw/main/bin/rbenv-doctor | bash

If you are using zsh, add the following to `~/.zshrc`:

# rbenv configuration
eval "$(rbenv init -)"
export RUBY_CONFIGURE_OPTS="--with-openssl-dir=$(brew --prefix openssl@1.1)"

Install Ruby 2.5.1 using rbenv

rbenv install 2.5.1

rbenv global 2.5.1 # to make this version as default

ruby -v # must display 2.5.1 if installed correctly

which ruby # must show the fully qualified path of the executable

echo "gem: --no-document" > ~/.gemrc # to skip documentation while installing gem

rbenv rehash # recent versions of rbenv apparently don't need this. Nevertheless, let's use it to avoid surprises.

gem env home # See related details

# If a new version of ruby was installed, ensure RubyGems is up to date.
gem update --system --no-document


Install rbenv gemset – https://github.com/jf/rbenv-gemset

git clone git://github.com/jf/rbenv-gemset.git ~/.rbenv/plugins/rbenv-gemset

If you get the following error:

fatal: remote error:
  The unauthenticated git protocol on port 9418 is no longer supported.
# Fix
 git clone https://github.com/jf/rbenv-gemset.git ~/.rbenv/plugins/rbenv-gemset

Now clone your project and cd into the project (micro-service) folder, say my-project, which contains a Gemfile, then run the following commands.

cd my-project

my-project $ rbenv gemset init # NOTE: this will create the gemset under the current ruby version.

my-project $ rbenv gemset list # list all gemsets

my-project $ rbenv gemset active # check this in project folder

my-project $ gem install bundler -v '1.6.0'

my-project $ rbenv rehash

my-project $ bundle install  # install all the gems for the project inside the gemset.

my-project $ rails s -e production # start rails server
my-project $ puma -e production -p 3002 -C config/puma.rb # OR start puma server
# OR start the server you have configured with rails. 

Do this for all the services and verify that each one runs. The steps above install all the gems inside the project's gemset, which acts like a namespace.

Our aim was to set up all the Ruby micro-services on the same machine.

  • I started 10 services together in an AWS EC2 instance (type: t3.small).
  • The database is running in a t2.small instance with 2 EBS volumes attached.
  • Redis, the background-job database, is running in a t2.micro instance.

So for 3 EC2 instances + 2 EBS volumes (~$26) + Elastic IP addresses (AWS charges some amount, ~$7.4), over 1 month it costs me around $77.8, almost 6k rupees. That means we reduced the AWS cloud cost to half of the previous cost.

Setup Zsh, NVM, Rbenv | Moving micro-services into AWS EC2 instance – Part 2

In this post let’s continue to install the other packages.

Install Oh My Zsh.

sh -c "$(curl -fsSL https://raw.github.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
sudo reboot

Make sure that ~/.zshrc contains the following lines.

# Path to your oh-my-zsh installation.
export ZSH="$HOME/.oh-my-zsh"

# See https://github.com/ohmyzsh/ohmyzsh/wiki/Themes
ZSH_THEME="robbyrussell"

# Example format: plugins=(rails git textmate ruby lighthouse)
# Add wisely, as too many plugins slow down shell startup.
plugins=(git)

source $ZSH/oh-my-zsh.sh

# Rbenv Loader
export PATH="$HOME/.rbenv/bin:$PATH"
eval "$(rbenv init -)"
export PATH="$HOME/.rbenv/plugins/ruby-build/bin:$PATH"

# NVM loader
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"  # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion"  # This loads nvm bash_completion

Install NVM

sudo apt-get update -y
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.34.0/install.sh | bash
source ~/.bashrc # OR
source ~/.zshrc

Install Rbenv

git clone https://github.com/rbenv/rbenv.git ~/.rbenv

echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bashrc # OR
echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.zshrc

echo 'eval "$(rbenv init -)"' >> ~/.bashrc # OR
echo 'eval "$(rbenv init -)"' >> ~/.zshrc

source ~/.bashrc # OR
source ~/.zshrc

type rbenv # to see if rbenv is installed correctly

In this tutorial, we installed nvm to manage Node versions, rbenv to manage Ruby versions and gemsets, and Oh My Zsh for a more informative terminal. As a result, we use the .zshrc file instead of the .bashrc file on this machine.

To load all the necessary configurations into the terminal, add the above lines of code for nvm and rbenv to the zshrc file.