Hi, I’m Abhilash, a seasoned web developer with 15 years of experience specializing in Ruby and Ruby on Rails. Since 2010, I’ve built scalable, robust web applications and worked with frameworks and runtimes such as Angular, Sinatra, Laravel, Node.js, Vue, and React.
Passionate about clean, maintainable code and continuous learning, I share insights, tutorials, and experiences here. Let’s explore the ever-evolving world of web development together!
When it comes to building robust and maintainable applications, writing test cases is a crucial practice. In this guide, I will walk you through writing effective test cases for a Ruby on Rails model using a common model name, “Task.” The concepts discussed here are applicable to any model in your Rails application.
Why Write Test Cases?
Writing test cases is essential for several reasons:
Bug Detection: Test cases help uncover and fix bugs before they impact users.
Regression Prevention: Tests ensure that new code changes do not break existing functionality.
Documentation: Well-written test cases serve as documentation for your codebase, making it easier for other developers to understand and modify the code.
Refactoring Confidence: Tests provide the confidence to refactor code knowing that you won’t introduce defects.
Collaboration: Tests facilitate collaboration within development teams by providing a common set of expectations.
Now, let’s dive into creating test cases for a Ruby on Rails model.
Model: Task
We will use a model called “Task” as an example. Tasks might represent items on a to-do list, items in a project management system, or any other entity that requires tracking and management.
Setting Up the Environment
Before writing test cases, ensure that your Ruby on Rails application is set up correctly with the testing framework of your choice. Rails ships with Minitest by default, and RSpec is a popular alternative. For this guide, we’ll use Minitest.
# Gemfile
group :test do
gem 'minitest'
# Other testing gems...
end
After updating your Gemfile, run bundle install to install the testing gems. Ensure your test database is set up and up-to-date by running bin/rails db:test:prepare.
Writing Test Cases
Model Validation
The first set of test cases should focus on validating the model’s attributes. For our Task model, we might want to ensure that the title is present and within an acceptable length range.
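These tests assume the Task model validates its title; here is a minimal sketch of what those validations might look like (the length limit is an illustrative assumption, not something the tests below require):
# app/models/task.rb
class Task < ApplicationRecord
  validates :title, presence: true, length: { maximum: 255 }
end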
# test/models/task_test.rb
require 'test_helper'
class TaskTest < ActiveSupport::TestCase
test "should not save task without title" do
task = Task.new
assert_not task.save, "Saved the task without a title"
end
test "should save task with valid title" do
task = Task.new(title: "A valid task title")
assert task.save, "Could not save the task with a valid title"
end
end
Testing Associations
In Rails, models often have associations with other models. For example, a Task might belong to a User. You can write test cases to ensure these associations work correctly.
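For this to work, the association must be declared on the models; a minimal sketch, assuming a user_id foreign key on the tasks table (optional: true keeps the earlier title-only test passing, since Rails 5+ makes belongs_to required by default):
# app/models/user.rb
class User < ApplicationRecord
  has_many :tasks
end

# app/models/task.rb
class Task < ApplicationRecord
  belongs_to :user, optional: true
end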
# test/models/task_test.rb
class TaskTest < ActiveSupport::TestCase
# ...
test "task should belong to a user" do
user = User.create(name: "John")
task = Task.new(title: "Task", user: user)
assert_equal user, task.user, "Task does not belong to the correct user"
end
end
Custom Model Methods
If your model contains custom methods, ensure they behave as expected. For example, if you have a method that returns the completion status of a task, test it.
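The test below assumes a completion_status method along these lines; a minimal sketch (the exact strings are whatever your application expects):
# app/models/task.rb
class Task < ApplicationRecord
  def completion_status
    completed? ? "Complete" : "Incomplete"
  end
end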
# test/models/task_test.rb
class TaskTest < ActiveSupport::TestCase
# ...
test "task should return completion status" do
task = Task.new(title: "Task", completed: false)
assert_equal "Incomplete", task.completion_status
task.completed = true
assert_equal "Complete", task.completion_status
end
end
Scopes
Scopes allow you to define common queries for your models. Write test cases to ensure scopes return the expected results.
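A completed scope matching the test below could be defined like this (a minimal sketch, assuming a boolean completed column):
# app/models/task.rb
class Task < ApplicationRecord
  scope :completed, -> { where(completed: true) }
end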
# test/models/task_test.rb
class TaskTest < ActiveSupport::TestCase
# ...
test "completed scope should return completed tasks" do
Task.create(title: "Completed Task", completed: true)
Task.create(title: "Incomplete Task", completed: false)
completed_tasks = Task.completed
assert_equal 1, completed_tasks.length
assert_equal "Completed Task", completed_tasks.first.title
end
end
Running Tests
You can run your tests with the following command:
bin/rails test
This command will execute all the test cases you’ve written in your test files.
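You can also narrow the run to a single file, or even to the test at a specific line, which is handy while iterating (the line number here is illustrative):
bin/rails test test/models/task_test.rb
bin/rails test test/models/task_test.rb:27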
Conclusion
Writing test cases is an essential practice in building reliable and maintainable Ruby on Rails applications. In this guide, we’ve explored how to write effective test cases for a model using a common model name, “Task.” These principles can be applied to test any model in your Rails application.
By writing comprehensive test cases, you ensure that your application functions correctly, maintains quality over time, and makes collaboration within your development team more efficient.
I recently worked on a project where we hit issues with ancestry path data in a PostgreSQL database. Working with hierarchical data in PostgreSQL often involves dealing with ancestry paths stored as delimited strings. This guide explores how to extract specific values from ancestry columns and use them effectively in join operations, complete with practical examples, troubleshooting tips, and the fixes that worked for me.
PostgreSQL’s robust string manipulation capabilities make it ideal for handling complex hierarchical data structures. When working with ancestry values stored in text columns, you often need to extract specific parts of the hierarchy for data analysis, reporting, or joining operations.
This article demonstrates how to:
✨ Extract values from ancestry strings using regular expressions
🔗 Perform efficient joins on extracted ancestry data
🛡️ Handle edge cases and avoid common pitfalls
⚡ Optimize queries for better performance
❓ Problem Statement
📊 Scenario
Consider a projects table with an ancestry column containing hierarchical paths like "1/2/3". Given such data, we need to:
Extract the last integer value from the ancestry path
Use this value in a JOIN operation to fetch parent project data
Handle edge cases like NULL values and malformed strings
🏗️ Understanding the Data Structure
📁 Table Structure
CREATE TABLE projects (
id BIGINT PRIMARY KEY,
name VARCHAR(255) NOT NULL,
ancestry TEXT, -- Stores parent hierarchy as "id1/id2/id3"
created_at TIMESTAMP DEFAULT NOW()
);
-- Sample data
INSERT INTO projects (id, name, ancestry) VALUES
(1, 'Root Project', NULL),
(2, 'Department A', '1'),
(3, 'Team Alpha', '1/2'),
(4, 'Task 1', '1/2/3'),
(5, 'Subtask 1A', '1/2/3/4');
🧭 Ancestry Path Breakdown
Project ID | Name         | Ancestry | Immediate Parent
-----------+--------------+----------+-----------------
1          | Root Project | NULL     | None (root)
2          | Department A | 1        | 1
3          | Team Alpha   | 1/2      | 2
4          | Task 1       | 1/2/3    | 3
5          | Subtask 1A   | 1/2/3/4  | 4
🔧 Solution Overview
🎯 Core Approach
🔍 Pattern Matching: Use regex to identify the last number in the ancestry string
✂️ Value Extraction: Extract the matched value using regexp_replace()
🔄 Type Conversion: Cast the extracted string to the appropriate numeric type
🔗 Join Operation: Use the converted value in JOIN conditions
📝 Basic Query Structure
SELECT projects.*
FROM projects
LEFT OUTER JOIN projects AS parent_project
ON CAST(
regexp_replace(projects.ancestry, '.*\/(\d+)$', '\1')
AS BIGINT
) = parent_project.id
WHERE projects.ancestry IS NOT NULL;
📝 Regular Expression Deep Dive
🎯 Pattern Breakdown: .*\/(\d+)$
Let’s dissect this regex pattern:
.* -- Match any characters (greedy)
\/ -- Match literal forward slash
(\d+) -- Capture group: one or more digits
$ -- End of string anchor
📊 Pattern Matching Examples
Ancestry String | Regex Match | Captured Group | Result
----------------+-------------+----------------+--------------------
"6/4/5/3"       | 6/4/5/3     | 3              | ✅ 3
"1/2"           | 1/2         | 2              | ✅ 2
"9"             | No match    | –              | ❌ Original string
"abc/def"       | No match    | –              | ❌ Original string
🔧 Alternative Regex Patterns
-- For single-level ancestry (no slashes)
regexp_replace(ancestry, '^(\d+)$', '\1')
-- For extracting first parent instead of last
regexp_replace(ancestry, '^(\d+)\/.*', '\1')
-- For handling mixed delimiters (/ or -)
regexp_replace(ancestry, '.*[\/\-](\d+)$', '\1')
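You can sanity-check any of these patterns directly in psql before wiring them into a join; for example:
-- Returns '3': the whole string matches, and the replacement is the captured group
SELECT regexp_replace('1/2/3', '.*\/(\d+)$', '\1');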
💻 Implementation Examples
🔧 Example 1: Basic Parent Lookup
-- Find each project with its immediate parent information
SELECT
p.id,
p.name AS project_name,
p.ancestry,
parent.id AS parent_id,
parent.name AS parent_name
FROM projects p
LEFT OUTER JOIN projects parent
ON CAST(
regexp_replace(p.ancestry, '.*\/(\d+)$', '\1')
AS BIGINT
) = parent.id
WHERE p.ancestry IS NOT NULL
ORDER BY p.id;
Expected Output:
id | project_name | ancestry | parent_id | parent_name
----+--------------+----------+-----------+-------------
2 | Department A | 1 | 1 | Root Project
3 | Team Alpha | 1/2 | 2 | Department A
4 | Task 1 | 1/2/3 | 3 | Team Alpha
5 | Subtask 1A | 1/2/3/4 | 4 | Task 1
🎯 Example 2: Handling Edge Cases
-- Robust query that handles all edge cases
SELECT
p.id,
p.name AS project_name,
p.ancestry,
CASE
WHEN p.ancestry IS NULL THEN 'Root Level'
WHEN p.ancestry !~ '.*\/(\d+)$' THEN 'Single Parent'
ELSE 'Multi-level'
END AS hierarchy_type,
parent.name AS parent_name
FROM projects p
LEFT OUTER JOIN projects parent ON
CASE
-- Handle multi-level ancestry
WHEN p.ancestry ~ '.*\/(\d+)$' THEN
CAST(regexp_replace(p.ancestry, '.*\/(\d+)$', '\1') AS BIGINT)
-- Handle single-level ancestry
WHEN p.ancestry ~ '^\d+$' THEN
CAST(p.ancestry AS BIGINT)
ELSE NULL
END = parent.id
ORDER BY p.id;
📈 Example 3: Aggregating Child Counts
-- Count children for each project
WITH parent_child_mapping AS (
SELECT
p.id AS child_id,
CASE
WHEN p.ancestry ~ '.*\/(\d+)$' THEN
CAST(regexp_replace(p.ancestry, '.*\/(\d+)$', '\1') AS BIGINT)
WHEN p.ancestry ~ '^\d+$' THEN
CAST(p.ancestry AS BIGINT)
ELSE NULL
END AS parent_id
FROM projects p
WHERE p.ancestry IS NOT NULL
)
SELECT
p.id,
p.name,
COUNT(pcm.child_id) AS direct_children_count
FROM projects p
LEFT JOIN parent_child_mapping pcm ON p.id = pcm.parent_id
GROUP BY p.id, p.name
ORDER BY direct_children_count DESC;
🚨 Common Errors and Solutions
❌ Error 1: Casting the Raw Ancestry String
Problem: Applying CAST directly to a multi-level value like "1/2/3" fails with "invalid input syntax for type bigint"
Solution:
-- ✅ Correct: Cast only the extracted value
CAST(
regexp_replace(projects.ancestry, '.*\/(\d+)$', '\1')
AS BIGINT
) = parent.id
❌ Error 2: Unexpected Results with Single-Level Ancestry
Problem: Single values like "9" don’t match the pattern .*\/(\d+)$
Solution:
-- ✅ Handle both multi-level and single-level ancestry
CASE
WHEN ancestry ~ '.*\/(\d+)$' THEN
CAST(regexp_replace(ancestry, '.*\/(\d+)$', '\1') AS BIGINT)
WHEN ancestry ~ '^\d+$' THEN
CAST(ancestry AS BIGINT)
ELSE NULL
END
❌ Error 3: NULL Ancestry Values Causing Issues
Problem: NULL values can cause unexpected behaviour in joins
Solution:
-- ✅ Explicitly handle NULL values
WHERE ancestry IS NOT NULL
AND ancestry != ''
🛡️ Complete Error-Resistant Query
SELECT
p.id,
p.name AS project_name,
p.ancestry,
parent.id AS parent_id,
parent.name AS parent_name
FROM projects p
LEFT OUTER JOIN projects parent ON
CASE
WHEN p.ancestry IS NULL OR p.ancestry = '' THEN NULL
WHEN p.ancestry ~ '.*\/(\d+)$' THEN
CAST(regexp_replace(p.ancestry, '.*\/(\d+)$', '\1') AS BIGINT)
WHEN p.ancestry ~ '^\d+$' THEN
CAST(p.ancestry AS BIGINT)
ELSE NULL
END = parent.id
ORDER BY p.id;
⚡ Performance Considerations
📊 Indexing Strategies
-- Create index on ancestry for faster pattern matching
CREATE INDEX idx_projects_ancestry ON projects (ancestry);
-- Create partial index for non-null ancestry values
CREATE INDEX idx_projects_ancestry_not_null
ON projects (ancestry)
WHERE ancestry IS NOT NULL;
-- Create functional index for extracted parent IDs
-- (expression indexes must be wrapped in an extra set of parentheses)
CREATE INDEX idx_projects_parent_id ON projects ((
CASE
WHEN ancestry ~ '.*\/(\d+)$' THEN
CAST(regexp_replace(ancestry, '.*\/(\d+)$', '\1') AS BIGINT)
WHEN ancestry ~ '^\d+$' THEN
CAST(ancestry AS BIGINT)
ELSE NULL
END
)) WHERE ancestry IS NOT NULL;
🔄 Query Optimization Tips
🎯 Use CTEs for Complex Logic
WITH parent_lookup AS (
SELECT
id,
CASE
WHEN ancestry ~ '.*\/(\d+)$' THEN
CAST(regexp_replace(ancestry, '.*\/(\d+)$', '\1') AS BIGINT)
WHEN ancestry ~ '^\d+$' THEN
CAST(ancestry AS BIGINT)
END AS parent_id
FROM projects
WHERE ancestry IS NOT NULL
)
SELECT p.*, parent.name AS parent_name
FROM parent_lookup p
JOIN projects parent ON p.parent_id = parent.id;
⚡ Consider Materialized Views for Frequent Queries
CREATE MATERIALIZED VIEW project_hierarchy AS
SELECT
p.id,
p.name,
p.ancestry,
CASE
WHEN p.ancestry ~ '.*\/(\d+)$' THEN
CAST(regexp_replace(p.ancestry, '.*\/(\d+)$', '\1') AS BIGINT)
WHEN p.ancestry ~ '^\d+$' THEN
CAST(p.ancestry AS BIGINT)
END AS parent_id
FROM projects p;
-- Refresh when data changes
REFRESH MATERIALIZED VIEW project_hierarchy;
🛠️ Advanced Techniques
🔍 Extracting Multiple Ancestry Levels
-- Extract all ancestry levels as an array
SELECT
id,
name,
ancestry,
string_to_array(ancestry, '/') AS ancestry_array,
-- Get specific levels
split_part(ancestry, '/', 1) AS level_1,
split_part(ancestry, '/', 2) AS level_2,
split_part(ancestry, '/', -1) AS last_level -- negative positions require PostgreSQL 14+
FROM projects
WHERE ancestry IS NOT NULL;
🧮 Calculating Hierarchy Depth
-- Calculate the depth of each project in the hierarchy
SELECT
id,
name,
ancestry,
CASE
WHEN ancestry IS NULL THEN 0
ELSE array_length(string_to_array(ancestry, '/'), 1)
END AS hierarchy_depth
FROM projects
ORDER BY hierarchy_depth, id;
🌳 Building Complete Hierarchy Paths
-- Recursive CTE to build full hierarchy paths
WITH RECURSIVE hierarchy_path AS (
-- Base case: root projects
SELECT
id,
name,
ancestry,
name AS full_path,
0 AS level
FROM projects
WHERE ancestry IS NULL
UNION ALL
-- Recursive case: child projects
SELECT
p.id,
p.name,
p.ancestry,
hp.full_path || ' → ' || p.name AS full_path,
hp.level + 1 AS level
FROM projects p
JOIN hierarchy_path hp ON
CASE
WHEN p.ancestry ~ '.*\/(\d+)$' THEN
CAST(regexp_replace(p.ancestry, '.*\/(\d+)$', '\1') AS BIGINT)
WHEN p.ancestry ~ '^\d+$' THEN
CAST(p.ancestry AS BIGINT)
END = hp.id
)
SELECT * FROM hierarchy_path
ORDER BY level, id;
✅ Best Practices
🎯 Data Validation
✅ Validate Ancestry Format on Insert/Update
-- Add constraint to ensure valid ancestry format
ALTER TABLE projects
ADD CONSTRAINT check_ancestry_format
CHECK (
ancestry IS NULL
OR ancestry ~ '^(\d+)(\/\d+)*$'
);
🔍 Regular Data Integrity Checks
-- Find orphaned projects (ancestry points to non-existent parent)
SELECT p.id, p.name, p.ancestry
FROM projects p
WHERE p.ancestry IS NOT NULL
AND NOT EXISTS (
SELECT 1 FROM projects parent
WHERE parent.id = CASE
WHEN p.ancestry ~ '.*\/(\d+)$' THEN
CAST(regexp_replace(p.ancestry, '.*\/(\d+)$', '\1') AS BIGINT)
WHEN p.ancestry ~ '^\d+$' THEN
CAST(p.ancestry AS BIGINT)
END
);
🛡️ Error Handling
-- Function to safely extract parent ID
CREATE OR REPLACE FUNCTION extract_parent_id(ancestry_text TEXT)
RETURNS BIGINT AS $$
BEGIN
IF ancestry_text IS NULL OR ancestry_text = '' THEN
RETURN NULL;
END IF;
IF ancestry_text ~ '.*\/(\d+)$' THEN
RETURN CAST(regexp_replace(ancestry_text, '.*\/(\d+)$', '\1') AS BIGINT);
ELSIF ancestry_text ~ '^\d+$' THEN
RETURN CAST(ancestry_text AS BIGINT);
ELSE
RETURN NULL;
END IF;
EXCEPTION
WHEN OTHERS THEN
RETURN NULL;
END;
$$ LANGUAGE plpgsql IMMUTABLE;
-- Usage
SELECT p.*, parent.name AS parent_name
FROM projects p
LEFT JOIN projects parent ON extract_parent_id(p.ancestry) = parent.id;
📊 Monitoring and Maintenance
-- Query to analyze ancestry data quality
SELECT
'Total Projects' AS metric,
COUNT(*) AS count
FROM projects
UNION ALL
SELECT
'Projects with Ancestry' AS metric,
COUNT(*) AS count
FROM projects
WHERE ancestry IS NOT NULL
UNION ALL
SELECT
'Valid Ancestry Format' AS metric,
COUNT(*) AS count
FROM projects
WHERE ancestry ~ '^(\d+)(\/\d+)*$'
UNION ALL
SELECT
'Orphaned Projects' AS metric,
COUNT(*) AS count
FROM projects p
WHERE p.ancestry IS NOT NULL
AND extract_parent_id(p.ancestry) NOT IN (SELECT id FROM projects);
📝 Conclusion
Working with ancestry data in PostgreSQL requires careful handling of string manipulation, type conversion, and edge cases. By following the techniques outlined in this guide, you can:
🎯 Key Takeaways
🔍 Use robust regex patterns to handle different ancestry formats
🛡️ Always handle edge cases like NULL values and malformed strings
⚡ Consider performance implications and use appropriate indexing
✅ Implement data validation to maintain ancestry integrity
🔧 Create reusable functions for complex extraction logic
💡 Final Recommendations
🎯 Test thoroughly with various ancestry formats
📊 Monitor query performance and optimize as needed
🔄 Consider alternative approaches like ltree for complex hierarchies
📚 Document your ancestry format for team members
🛠️ Implement proper error handling in production code
The techniques demonstrated here provide a solid foundation for working with hierarchical data in PostgreSQL. Whether you’re building organizational charts, category trees, or project hierarchies, these patterns will help you extract and manipulate ancestry data effectively and reliably! 🚀
When working with React and handling events, it’s crucial to understand the difference between e.target and e.currentTarget. While they may seem similar at first glance, they have distinct roles and behaviors. In this blog post, we’ll delve into the details of these event properties and explore when to use each one.
e.target: The Originating Element
e.target refers to the element that triggered the event, i.e. where the event originated. It represents the specific DOM element the user interacted with, such as a button they clicked or an input field they typed in, and it provides direct access to that element’s properties and attributes.
e.currentTarget: The Bound Element
e.currentTarget, on the other hand, refers to the element to which the event handler is attached. It remains constant regardless of which element triggered the event. In many cases, e.currentTarget and e.target are the same element, especially when you attach the event handler directly to the target element.
Differences in Markup Structure
One important distinction arises when your markup involves nested elements or event delegation. If the event handler is attached to a parent element and an event occurs within one of its child elements, e.target points to the specific child element while e.currentTarget references the parent. This behavior is particularly useful when you handle events for multiple elements within a container and need to determine which child triggered the event.
Practical Use Cases
Individual Element Control: When you have multiple input elements, such as several text fields or checkboxes, and want to handle their changes individually, use e.target.value to access the value of the element that triggered the event. This ensures you update the correct state or perform the appropriate action for that particular element.
Event Delegation: When you attach a single event handler to a parent element to handle events from its child elements, e.currentTarget is useful because it always references the container element, which helps when you want to read the container’s attributes or state regardless of which child fired the event, as the sketch below shows.
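To make the distinction concrete, here is a minimal sketch of event delegation in a React component (the component name and list items are just for illustration):
import React from "react";

// A single click handler on the parent <ul> handles clicks for every <li>.
function TaskList() {
  const handleClick = (e) => {
    // e.target: the element actually clicked, e.g. a specific <li>
    console.log("target:", e.target.textContent);
    // e.currentTarget: the element the handler is bound to, i.e. the <ul>
    console.log("currentTarget:", e.currentTarget.tagName); // "UL"
  };

  return (
    <ul onClick={handleClick}>
      <li>Buy milk</li>
      <li>Write tests</li>
    </ul>
  );
}

export default TaskList;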
Conclusion:
Understanding the nuances between e.target and e.currentTarget in React is crucial for properly handling events and accessing the relevant elements and their properties. By grasping these differences, you can write more effective event handlers and ensure your application responds accurately to user interactions. Remember that e.target refers to the element where the event originated, while e.currentTarget represents the element to which the event handler is bound. Utilize this knowledge to build robust and interactive React applications.
Hopefully, this blog post clarifies the distinction between e.target and e.currentTarget in React and helps you write better event handling code.
PostgreSQL, also known as Postgres, is a powerful and feature-rich relational database management system. One of its notable features is the array aggregation function, array_agg, which allows you to aggregate values from multiple rows into a single array. In this blog post, we’ll explore how array_agg works, its applications, and considerations for performance.
How Does array_agg Work?
The array_agg function takes an expression as an argument and returns an array containing the values of that expression for all the rows that match the query. Let’s illustrate this with an example.
Consider a table called employees with columns id, name, and department. Suppose we want to aggregate all the names of employees belonging to the “Sales” department into an array. We can achieve this using the following query:
SELECT array_agg(name) AS sales_employees
FROM employees
WHERE department = 'Sales';
The result of this query will be a single row with a column named sales_employees, which contains an array of all the names of employees in the “Sales” department.
Usage of array_agg with Subqueries
The ability to get an array as the output opens up various possibilities, especially when used in subqueries. You can leverage this feature to aggregate data from related tables or filter results based on complex conditions.
For instance, imagine you have two tables, orders and order_items, where each order can have multiple items. You want to retrieve a list of orders along with an array of item names for each order. The following query achieves this:
SELECT o.order_id, (
SELECT array_agg(oi.item_name)
FROM order_items oi
WHERE oi.order_id = o.order_id
) AS item_names
FROM orders o;
In this example, the subquery within the main query’s select list utilizes array_agg to aggregate item names from the order_items table, specific to each order.
Complex Query Example Using array_agg
To demonstrate a more complex scenario, let’s consider a database that stores books and their authors. We have three tables: books, authors, and book_authors (a join table that associates books with their respective authors).
Suppose we want to retrieve a list of books along with an array of author names for each book, sorted in alphabetical order. We can achieve this using a query that involves joins and array_agg:
SELECT b.title, array_agg(a.author_name ORDER BY a.author_name ASC) AS authors
FROM books b
JOIN book_authors ba ON b.book_id = ba.book_id
JOIN authors a ON ba.author_id = a.author_id
GROUP BY b.book_id;
In this query, we join the tables based on their relationships and use array_agg to aggregate author names into an array for each book. The GROUP BY clause ensures that each book’s array of author names is grouped correctly.
Performance Considerations
While array_agg is a powerful function, it’s essential to consider its performance implications, especially when working with large datasets. Aggregating values into arrays can be computationally intensive, and the resulting array can consume significant memory.
If you anticipate working with large result sets or complex queries involving array_agg, it’s worth optimizing your database schema, indexing relevant columns, and analyzing query performance using PostgreSQL’s built-in tools.
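For example, you can see how the planner executes an aggregation by prefixing the query with EXPLAIN ANALYZE (shown here on the earlier sales query):
EXPLAIN ANALYZE
SELECT array_agg(name) AS sales_employees
FROM employees
WHERE department = 'Sales';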
Additionally, consider whether array_agg is the most efficient solution for your specific use case. Sometimes, alternative approaches, such as using temporary tables or custom aggregate functions, might offer better performance.
Conclusion
The array_agg function in PostgreSQL provides a powerful mechanism for aggregating values into arrays. It offers flexibility and opens up opportunities for various applications, including subqueries and complex data manipulations. However, when working with large datasets, it’s crucial to be mindful of potential performance implications and explore optimization strategies accordingly.
In the world of Ruby programming, we often encounter scenarios where we need to work with dates. Ruby, together with Rails’ ActiveSupport, provides two methods to retrieve the current date: Date.current and Date.today. Although they may appear similar at first glance, understanding their differences can help us write more accurate and reliable code. Let’s explore the reasons behind their existence, where we can use them, and the potential pitfalls we might encounter.
Why are there two different methods? Ruby’s Date.current and Date.today methods exist to handle different time zone considerations. When developing applications using the Ruby on Rails framework, it’s crucial to account for the possibility of multiple time zones. Rails provides a simple and consistent way to handle time zone-related operations, and these two methods are part of that feature set.
Where can we use them? a) Date.current: This method is specifically designed for Rails applications. It returns the current date in the time zone specified by the application’s configuration. It ensures that the date obtained is consistent across the entire application, regardless of the server or machine executing the code. Date.current is particularly useful when dealing with user interactions, scheduling, or any scenario where consistent time zone handling is necessary.
b) Date.today: This method retrieves the current date based on the default time zone of the server or machine where the code is running. It is not limited to Rails applications and can be used in any Ruby program. However, when working in a Rails application, it’s generally recommended to use Date.current to maintain consistent time zone handling.
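A quick illustration of the difference (Date.current comes from ActiveSupport, so this assumes a Rails console or a prior require 'active_support/all'; the zone is arbitrary):
Time.zone = 'Hawaii'  # simulate an app configured for UTC-10

Date.today    # date according to the server's system clock
Date.current  # date in Time.zone; around midnight the two can differ by a day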
Problems when using each method: Using these methods incorrectly or without understanding their differences can lead to unexpected results: a) Inconsistent time zones: If a Rails application is deployed across multiple servers or machines with different default time zones, using Date.today may produce inconsistent results. It can lead to situations where the same code yields different dates depending on the server’s time zone.
b) Time zone misconfigurations: In Rails applications, failing to properly set the application’s time zone can result in incorrect date calculations. It’s crucial to configure the desired time zone in the application’s configuration file, ensuring that Date.current returns the expected results.
Conclusion:
Understanding the nuances between Date.current and Date.today in Ruby can greatly improve the accuracy and reliability of our code, particularly in Rails applications. By using Date.current, we ensure consistent time zone handling throughout the application, regardless of the server or machine executing the code. Carefully considering the appropriate method to use based on the specific context can prevent common pitfalls related to time zone inconsistencies.
A list of commands worth remembering when working with the PostgreSQL database management system.
Login, Create user and password
# login to psql client
psql postgres # OR
psql -U postgres
create database mydb; # create db
create user abhilash with SUPERUSER CREATEDB CREATEROLE encrypted password 'abhilashPass!';
grant all privileges on database mydb to abhilash; # add privileges
Connect to DB, List tables and users, functions, views, schema
\l # lists all the databases
\c dbname # connect to db
\dt # show tables
\d table_name # Describe a table
\dn # List available schema
\df # List available functions
\dS [your_table_name] # List triggers
\dv # List available views
\du # lists all user accounts and roles
\du+ # is the extended version which shows even more information.
Show history, save to file, edit using editor, execution time, help
SELECT version(); # PostgreSQL server version
\g # Execute the previous command
\s # Command history
\s filename # save Command history to a file
\i filename # Execute psql commands from a file
\? # help on psql commands
\h ALTER TABLE # To get help on specific PostgreSQL statement
\timing # Turn on/off query execution time
\e # Edit command in your own editor
\e [function_name] # It is more useful when you edit a function in the editor. Do \df for functions
\o [file_name] # send all next query results to file
\o out.txt
\dt
\o # stop sending results to the file (output returns to stdout)
\dt
Change output, Quit psql
# Switch output options
\a # toggle between aligned and non-aligned column output
\H # format output as HTML (toggle)
\q # quit psql
To obtain a wildcard Let’s Encrypt certificate using the manual DNS challenge, run:
certbot -d '*.domain.com' -d domain.com --manual --preferred-challenges dns certonly
Your certificate and chain have been saved at:
/etc/letsencrypt/live/domain.com/fullchain.pem
Your key file has been saved at:
/etc/letsencrypt/live/domain.com/privkey.pem
SSL certificate auto renewal
Let’s Encrypt certificates are valid for 90 days. To renew them automatically before they expire, the certbot package creates a cron job that runs twice a day and renews any certificate within 30 days of its expiration.
Since we are using the certbot webroot plug-in, once the certificate is renewed we also have to reload the nginx service. To do so, append --renew-hook "systemctl reload nginx" to the certbot command in the /etc/cron.d/certbot file, so it looks like this:
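On Debian/Ubuntu the resulting /etc/cron.d/certbot entry looks roughly like this (the exact command shipped by your distro may differ; the relevant part is the --renew-hook at the end):
0 */12 * * * root test -x /usr/bin/certbot -a \! -d /run/systemd/system && perl -e 'sleep int(rand(43200))' && certbot -q renew --renew-hook "systemctl reload nginx"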
In this post, let’s set up Ruby and Ruby gemsets for each project, so that your package versions are maintained per project.
Install ruby-build # ruby-build is a command-line utility for rbenv
git clone https://github.com/rbenv/ruby-build.git ~/.rbenv/plugins/ruby-build
# Add ruby build path
echo 'export PATH="$HOME/.rbenv/plugins/ruby-build/bin:$PATH"' >> ~/.bashrc # OR
echo 'export PATH="$HOME/.rbenv/plugins/ruby-build/bin:$PATH"' >> ~/.zshrc
# load it
source ~/.bashrc # OR
source ~/.zshrc
For Mac users (macOS)
# verify rbenv
curl -fsSL https://github.com/rbenv/rbenv-installer/raw/main/bin/rbenv-doctor | bash
If you are using zsh, add the following to `~/.zshrc`
# rbenv configuration
eval "$(rbenv init -)"
export RUBY_CONFIGURE_OPTS="--with-openssl-dir=$(brew --prefix openssl@1.1)"
Install Ruby 2.5.1 using rbenv
rbenv install 2.5.1
rbenv global 2.5.1 # to make this version as default
ruby -v # must display 2.5.1 if installed correctly
which ruby # must show the fully qualified path of the executable
echo "gem: --no-document" > ~/.gemrc # to skip documentation while installing gem
rbenv rehash # the latest versions of rbenv apparently don't need this. Nevertheless, let's run it to avoid surprises.
gem env home # See related details
# If a new version of ruby was installed, ensure RubyGems is up to date.
gem update --system --no-document
Install the rbenv-gemset plugin. If you try to clone it over the legacy git:// protocol, it fails with:
fatal: remote error:
The unauthenticated git protocol on port 9418 is no longer supported.
# Fix: clone over HTTPS instead
git clone https://github.com/jf/rbenv-gemset.git ~/.rbenv/plugins/rbenv-gemset
Now clone your project and go inside the project folder (a micro-service folder, say my-project) that has a Gemfile in it, and run the following commands.
cd my-project
my-project $ rbenv gemset init # NOTE: this will create the gemset under the current ruby version.
my-project $ rbenv gemset list # list all gemsets
my-project $ rbenv gemset active # check this in project folder
my-project $ gem install bundler -v '1.6.0'
my-project $ rbenv rehash
my-project $ bundle install # install all the gems for the project inside the gemset.
my-project $ rails s -e production # start rails server
my-project $ puma -e production -p 3002 -C config/puma.rb # OR start puma server
# OR start the server you have configured with rails.
Do this for all the services and see how this is running. The above will install all the gems inside the project gemset that acts like a namespace.
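Under the hood, the plugin reads gemset names from a .rbenv-gemsets file at the project root, which rbenv gemset init creates for you; a sketch of its contents, assuming the gemset is named after the project:
# my-project/.rbenv-gemsets
my-project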
So our aim is to set up all the Ruby micro-services on the same machine.
I started 10 services together on AWS EC2 (type: t3.small).
The database runs on a t2.small instance with 2 volumes (EBS) attached.
Redis, the background-job database, runs on a t2.micro instance.
So for 3 EC2 instances + 2 EBS volumes (about $26) plus Elastic IP addresses (AWS charges some amount, around $7.40) over one month, it cost me about $77.80, almost 6k rupees. That means we reduced the AWS cloud cost to half of the previous cost.
git clone https://github.com/rbenv/rbenv.git ~/.rbenv
echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bashrc # OR
echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.zshrc
echo 'eval "$(rbenv init -)"' >> ~/.bashrc # OR
echo 'eval "$(rbenv init -)"' >> ~/.zshrc
source ~/.bashrc # OR
source ~/.zshrc
type rbenv # to see if rbenv is installed correctly
In this tutorial, we installed nvm to manage Node versions, rbenv to manage Ruby versions and gemsets, and Oh My Zsh for a better terminal interface with more information. As a result, we use the .zshrc file instead of the .bashrc file on this machine.
To load all the necessary configurations into the terminal, add the lines of code above for nvm and rbenv to the .zshrc file.