Rails 6.1 introduces ‘compact_blank’

Before Rails 6.1, we used to remove blank values from an Array or Hash using other available methods.

Before:

  [...].delete_if(&:blank?)
  {....}.delete_if { |_k, v| v.blank? }
OR
  [...].reject(&:blank?)
  ...

From Rails 6.1 onwards, you can use Enumerable’s compact_blank method and its destructive counterpart compact_blank!.

Now we can use:

[1, "", nil, 2, " ", [], {}, false, true].compact_blank
=> [1, 2, true]

['', nil, 8, [], {}].compact_blank
=> [8]

{ a: "", b: 1, c: nil, d: [], e: false, f: true }.compact_blank
=> {:b=>1, :f=>true}

compact_blank! is the destructive variant of compact_blank (handle with care); it is defined on Array and Hash and removes the blank values in place.
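For example, compact_blank! modifies the receiver in place and returns it:

array = [1, "", nil, 2, " ", [], {}, false, true]
array.compact_blank!
=> [1, 2, true]
array
=> [1, 2, true]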

As a Rails developer, I am grateful for this method because there are many scenarios where we find ourselves replicating this code.


PostgreSQL commands to remember

A list of commands to remember when using the PostgreSQL database management system.

Login, Create user and password

# login to psql client
psql postgres # OR
psql -U postgres
create database mydb; # create db
create user abhilash with SUPERUSER CREATEDB CREATEROLE encrypted password 'abhilashPass!'; 
grant all privileges on database mydb to abhilash; # add privileges

Connect to DB, List tables and users, functions, views, schema

\l # lists all the databases
\c dbname # connect to db
\dt # show tables
\d table_name # Describe a table
\dn # List available schema
\df #  List available functions
\dS [your_table_name] # List triggers
\dv # List available views
\du # lists all user accounts and roles
\du+ # the extended version, which shows even more information

Show history, save to file, edit using editor, execution time, help

SELECT version(); # PostgreSQL server version
\g  # Execute the previous command
\s # Command history
\s filename # save Command history to a file
\i filename # Execute psql commands from a file
\? # help on psql commands
\h ALTER TABLE # To get help on specific PostgreSQL statement
\timing #  Turn on/off query execution time
\e # Edit command in your own editor
\ef [function_name] # useful when you edit a function in the editor. Do \df to list functions
\o [file_name] # send all subsequent query results to a file
    \o out.txt
    \dt
    \o # switch output back to stdout
    \dt

Change output, Quit psql

# Switch output options
\a # switches between aligned and non-aligned column output
\H # formats the output as HTML
\q # quit psql

Reference: https://www.postgresqltutorial.com/postgresql-administration/psql-commands/

Setup Nginx, SSL, Firewall | Moving micro-services into AWS EC2 instance – Part 4

Install the Nginx proxy server. Nginx also acts as a load balancer, which helps balance network traffic.

sudo apt-get update
sudo apt-get install nginx

Commands to stop, start, restart, check status

sudo systemctl stop nginx
sudo systemctl start nginx
sudo systemctl restart nginx

# after making configuration changes
sudo systemctl reload nginx
sudo systemctl disable nginx
sudo systemctl enable nginx
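Nginx is configured through server blocks. As a rough sketch (not the exact configuration of this setup; the server name and the upstream port 4031 are assumptions based on the Authentication service listed later in this post), a reverse-proxy block for one micro-service could be created like this:

sudo tee /etc/nginx/sites-available/auth.domain.com <<'EOF'
server {
    listen 80;
    server_name auth.domain.com;
    location / {
        # forward requests to the service running locally on port 4031
        proxy_pass http://127.0.0.1:4031;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
EOF
sudo ln -s /etc/nginx/sites-available/auth.domain.com /etc/nginx/sites-enabled/
sudo nginx -t # test the configuration
sudo systemctl reload nginx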

Install SSL – Letsencrypt

Install packages needed for ssl

sudo add-apt-repository ppa:certbot/certbot
sudo apt-get update
sudo apt-get install python-certbot-nginx

Install the SSL Certificate:

certbot -d '*.domain.com' -d domain.com --manual --preferred-challenges dns certonly

Your certificate and chain have been saved at:
   /etc/letsencrypt/live/domain.com/fullchain.pem

Your key file has been saved at:
   /etc/letsencrypt/live/domain.com/privkey.pem

SSL certificate auto renewal

Let’s Encrypt’s certificates are valid for 90 days. To automatically renew the certificates before they expire, the certbot package creates a cronjob which will run twice a day and will automatically renew any certificate 30 days before its expiration.

Since we are using the certbot webroot plug-in, once the certificate is renewed we also have to reload the nginx service. To do so, append --renew-hook "systemctl reload nginx" to the command in the /etc/cron.d/certbot file so that it looks like this:

/etc/cron.d/certbot
0 */12 * * * root test -x /usr/bin/certbot -a \! -d /run/systemd/system && perl -e 'sleep int(rand(3600))' && certbot -q renew --renew-hook "systemctl reload nginx"

To test the renewal process, use the certbot --dry-run switch:

sudo certbot renew --dry-run

Renew your EXPIRED certificate this way:

sudo certbot --force-renewal -d '*.domain.com' -d domain.com --manual --preferred-challenges dns certonly

Are you OK with your IP being logged?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
(Y)es/(N)o: Y

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Please deploy a DNS TXT record under the name
_acme-challenge.<domain>.com with the following value:

O3bpxxxxxxxxxxxxxxxxxxxxxxxxxxY4TnNo

Before continuing, verify the record is deployed.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Press Enter to Continue

You need to update the DNS txt record for _acme-challenge.<domain>.com

sudo systemctl restart nginx # restart nginx to take effect

Configure the Firewall

Next, we’ll update our firewall to allow HTTPS traffic.

Check the firewall status on the system. If it is inactive, enable the firewall.

sudo ufw status # check status

# enable firewall
sudo ufw enable
sudo ufw allow ssh
sudo ufw allow OpenSSH

Allow the specific ports on which your micro-services are running. For example:

sudo ufw allow 4031/tcp # Authentication service
sudo ufw allow 4131/tcp # File service
sudo ufw allow 4232/tcp # Search service

You can delete the ‘Authentication service’ firewall rule by:

sudo ufw delete allow 4031/tcp

Setup Ruby, ruby-build, rbenv-gemset | Conclusion – Moving micro-services into AWS EC2 instance – Part 3

In this post, let’s set up Ruby and Ruby gemsets for each project, so that each project’s gem versions are kept separate.

Install ruby-build # ruby-build is an rbenv plugin that provides the rbenv install command

git clone https://github.com/rbenv/ruby-build.git ~/.rbenv/plugins/ruby-build

# Add ruby build path

echo 'export PATH="$HOME/.rbenv/plugins/ruby-build/bin:$PATH"' >> ~/.bashrc # OR
echo 'export PATH="$HOME/.rbenv/plugins/ruby-build/bin:$PATH"' >> ~/.zshrc

# load it

source ~/.bashrc # OR
source ~/.zshrc


For Mac (macOS) users


# verify rbenv
curl -fsSL https://github.com/rbenv/rbenv-installer/raw/main/bin/rbenv-doctor | bash

If you are using zsh, add the following to `~/.zshrc`:

# rbenv configuration
eval "$(rbenv init -)"
export RUBY_CONFIGURE_OPTS="--with-openssl-dir=$(brew --prefix openssl@1.1)"

Install Ruby 2.5.1 using rbenv

rbenv install 2.5.1

rbenv global 2.5.1 # to make this version as default

ruby -v # must display 2.5.1 if installed correctly

which ruby # must show the fully qualified path of the executable

echo "gem: --no-document" > ~/.gemrc # to skip documentation while installing gem

rbenv rehash # the latest versions of rbenv apparently don't need this. Nevertheless, let's run it to avoid surprises.

gem env home # See related details

# If a new version of ruby was installed, ensure RubyGems is up to date.
gem update --system --no-document


Install rbenv gemset – https://github.com/jf/rbenv-gemset

git clone git://github.com/jf/rbenv-gemset.git ~/.rbenv/plugins/rbenv-gemset

If you get the following error:

fatal: remote error:
  The unauthenticated git protocol on port 9418 is no longer supported.
# Fix
 git clone https://github.com/jf/rbenv-gemset.git ~/.rbenv/plugins/rbenv-gemset

Now clone your project, go inside the project (micro-service) folder, say my-project, which contains a Gemfile, and run the following commands.

cd my-project

my-project $ rbenv gemset init # NOTE: this will create the gemset under the current ruby version.

my-project $ rbenv gemset list # list all gemsets

my-project $ rbenv gemset active # check this in project folder

my-project $ gem install bundler -v '1.6.0'

my-project $ rbenv rehash

my-project $ bundle install  # install all the gems for the project inside the gemset.

my-project $ rails s -e production # start rails server
my-project $ puma -e production -p 3002 -C config/puma.rb # OR start puma server
# OR start the server you have configured with rails. 
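A note on how rbenv-gemset picks the gemset: it reads a .rbenv-gemsets file in the project root, which rbenv gemset init should create for you. If you prefer, you can also write that file by hand (the gemset name below is just an example):

cd my-project
echo "my-project-gems" > .rbenv-gemsets # tell rbenv-gemset which gemset to use
rbenv gemset active # should now show my-project-gems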

Do this for all the services and check that they run. The steps above install all the gems inside the project's gemset, which acts like a namespace.

Our aim is to set up all the Ruby micro-services on the same machine.

  • I started 10 services together in AWS EC2 (type: t3.small).
  • The database is running in a t2.small instance with 2 EBS volumes attached.
  • The background-job store (Redis) is running in a t2.micro instance.

So, with 3 EC2 instances + 2 EBS volumes ($26) + Elastic IP addresses (AWS charges some amount, $7.4) for one month, it costs me around $77.8, almost 6k rupees. That means we reduced the AWS cost to about half of the previous cost.

Setup Zsh, NVM, Rbenv | Moving micro-services into AWS EC2 instance – Part 2

In this post, let’s continue installing the other packages.

Install Oh My Zsh.

sh -c "$(curl -fsSL https://raw.github.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
sudo reboot

Make sure that ~/.zshrc contains the following lines.

# Path to your oh-my-zsh installation.
export ZSH="$HOME/.oh-my-zsh"

# See https://github.com/ohmyzsh/ohmyzsh/wiki/Themes
ZSH_THEME="robbyrussell"

# Example format: plugins=(rails git textmate ruby lighthouse)
# Add wisely, as too many plugins slow down shell startup.
plugins=(git)

source $ZSH/oh-my-zsh.sh

# Rbenv Loader
export PATH="$HOME/.rbenv/bin:$PATH"
eval "$(rbenv init -)"
export PATH="$HOME/.rbenv/plugins/ruby-build/bin:$PATH"

# NVM loader
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"  # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion"  # This loads nvm bash_completion

Install NVM

sudo apt-get update -y
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.34.0/install.sh | bash
source ~/.bashrc # OR
source ~/.zshrc
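Once nvm is loaded, install and select a Node version (Node 10 below is just an example, matching the NodeSource setup used in Part 1):

nvm install 10 # install the latest Node 10.x release
nvm use 10 # switch the current shell to it
nvm alias default 10 # make it the default for new shells
node -v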

Install Rbenv

git clone https://github.com/rbenv/rbenv.git ~/.rbenv

echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bashrc # OR
echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.zshrc

echo 'eval "$(rbenv init -)"' >> ~/.bashrc # OR
echo 'eval "$(rbenv init -)"' >> ~/.zshrc

source ~/.bashrc # OR
source ~/.zshrc

type rbenv # to see if rbenv is installed correctly

In this tutorial, we installed nvm to manage Node versions, rbenv to manage Ruby versions and gemsets, and Oh My Zsh for a better terminal interface with more information. As a result, we use the .zshrc file instead of the .bashrc file on this machine.

To load all the necessary configurations into the terminal, add the above lines of code for nvm and rbenv to the zshrc file.

Basic Software installation | Moving micro-services into AWS EC2 instance – Part 1

As I mentioned in the previous post, I have decided to move away from AWS ECS. To achieve this, I am taking an AWS EC2 instance and configuring each micro-service on this instance. For this setup, I am using an Ubuntu 16.04 machine because my application setup is a bit old. However, if you have newer versions of Rails, Ruby, etc., you may want to choose Ubuntu 20.04.

Our setup includes Ruby on Rails (5.2.1) micro-services (5-10 in number), a NodeJS application, a Sinatra Application, and an Angular 9.1 Front-End Application.

To begin, go to the AWS EC2 home page and select an Ubuntu 16.04 machine with default configurations and SSH enabled.

https://ap-south-1.console.aws.amazon.com/ec2/v2/home

Now log in to this new instance and install all the packages we need for our setup.

Software Installation

Update the package list.

sudo apt-get update

Install Ruby dependencies.

sudo apt-get install ruby-dev
sudo apt-get install libxml2-dev
sudo apt-get install libxslt-dev
sudo apt-get install graphviz

Install NodeJS

curl -sL https://deb.nodesource.com/setup_10.x | sudo -E bash -
sudo apt-get install -y nodejs
node -v

Install yarn and other dependencies.

curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -
echo "deb https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list
sudo apt-get update
sudo apt-get install git-core zlib1g-dev build-essential libssl-dev libreadline-dev libyaml-dev libsqlite3-dev sqlite3 libxml2-dev libxslt1-dev libcurl4-openssl-dev software-properties-common libffi-dev nodejs yarn

Install MySQL 5.7 (remember, this is for Ubuntu 16.04 and 18.04)

sudo apt-get install mysql-server-5.7 mysql-client-core-5.7 libmysqlclient-dev
sudo service mysql status # or
systemctl status mysql
username: <your-username>, password: <your-password>

You can also run mysql_secure_installation if you use another MySQL version.
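After installation, you will typically create a database and an application user for each service. A minimal sketch (the database name, user and password are placeholders):

# from the MySQL console (sudo mysql -u root -p)
CREATE DATABASE myapp_production;
CREATE USER 'myapp'@'localhost' IDENTIFIED BY 'StrongPassword!';
GRANT ALL PRIVILEGES ON myapp_production.* TO 'myapp'@'localhost';
FLUSH PRIVILEGES;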

Note that if you are setting up Ubuntu 20.04, there is a significant change in MySQL, as the version of MySQL is now 8.0 instead of 5.7. If you have applications running in MySQL 5.7, it is recommended that you set up and use Ubuntu 16.04 or 18.04.

We will continue the installation process in our next post.

Our problem with micro-services using AWS ECS

As part of our startup, our predecessors chose to use micro-services for our new website as it was a trending technology.

This decision has many benefits, such as:

  • Scaling a website becomes much easier when using micro-services, as each service can be scaled independently based on its individual needs.
  • The loosely coupled nature of micro-services also allows for easier development and maintenance, as changes to one service do not affect the functionality of other services.
  • Additionally, deployment can be focused on each individual service, making the overall process more efficient.
  • Micro-services also allow for the use of different technologies for each service, providing greater flexibility and the ability to choose the best tools for each task.
  • Finally, testing can be concentrated on one service at a time, allowing for more thorough and effective testing, which can result in higher quality code and a better user experience.

In developing our application with micro-services, we considered the potential problems that we may face in the future. However, it is important to note that we also need to consider whether these problems will have a significant impact compared to the potential disadvantages of using micro-services.

One factor to keep in mind is that our website is currently experiencing low traffic and we are acquiring clients gradually. As such, we need to consider whether the benefits of micro-services outweigh any potential drawbacks for our particular situation.

Regardless, some potential issues with micro-services include increased complexity and overhead in development, as well as potential performance issues when integrating multiple services. Additionally, managing multiple services and ensuring they communicate effectively can also be a challenge.

Despite the benefits of micro-services, we have faced some issues in implementing them. One significant challenge is the increased complexity of deployment and maintenance that comes with having multiple services. This can require more time and resources to manage and can potentially increase the likelihood of errors.

Additionally, the cost of using AWS ECS for hosting all of the micro-services can be higher than using other hosting solutions for a less traffic website. This is something to consider when weighing the benefits and drawbacks of using micro-services for our specific needs.

Another challenge we have faced is managing dependencies between services, which can be difficult to avoid. When one service goes offline, it can cause issues with other services, leading to a “No Service” issue on the website.

Finally, it can be very difficult to go back to a monolithic application even if we combine 3-4 services together, as they may use different software or software versions. This can make it challenging to make changes or updates to the application as a whole.

It is important to carefully consider whether micro-service architecture is the best fit for your business and current situation. If you have a less used website or are just starting your business, it may not be necessary or cost-effective to implement micro-services.

It is important to take the time to evaluate the benefits and drawbacks of using micro-services for your specific needs and budget. Keep in mind that hosting multiple micro-services can come with additional costs, so be prepared to pay a minimum amount for hosting if you decide to go this route.

Ultimately, the decision to use micro-services should be based on a thorough assessment of your business needs and available resources, rather than simply following a trend or industry hype.

Set up:

  • Used AWS ECS (ec2 launch type) with services and task definitions defined
  • 11 Micro-services, 11 containers are spinning
  • Cost: Rs.12k ($160) per month

Workaround:

  • Consider using the AWS Fargate launch type, but it is not clear whether these issues would be resolved
  • Deploy all the services in one EC2 Instance without using ECS

AWS: How to push your docker images to ECR

Try to do the docker login from your terminal. If you face the following error:

Error: Cannot perform an interactive login from a non TTY device

To fix this, update your AWS CLI from version 1.x.x to version 2.

If you don’t have aws-cli installed, please install it.

Install AWS-CLI v2

$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"

$ unzip awscliv2.zip

$ sudo ./aws/install --update -b /home/abhilash/.local/bin

$ aws --version

NOTE: Add/update the access key and secret key with aws configure for your new IAM role that has ECR access.

Goto https://ap-south-1.console.aws.amazon.com/ecr/repositories and create a repository.

Login and push to this repository URI:

$ aws ecr get-login-password --region ap-south-1 | docker login --username AWS --password-stdin xxxxxxxx.dkr.ecr.ap-south-1.amazonaws.com

If this command does not work for some reason, you can try another way:

$ docker login -u AWS -p $(aws ecr get-login-password --region ap-south-1) xxxxxxxx.dkr.ecr.ap-south-1.amazonaws.com
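If the image does not exist locally yet, build it first (assuming a Dockerfile in the current directory and the test-api image name used in the next step):

$ docker build -t test-api:latest .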

Tag the local image:

$ docker tag test-api:latest xxxxxxxx.dkr.ecr.ap-south-1.amazonaws.com/test-api-container:latest

Push the image:

$ docker push xxxxxxxx.dkr.ecr.ap-south-1.amazonaws.com/test-api-container:latest

AWS doc url: https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-push-ecr-image.html

Setup RSpec, Factory Bot and Database Cleaner for Rails 5.2.6

To configure the best test suite in Rails using the RSpec framework and other supporting libraries, such as Factory Bot and Database Cleaner, we’ll remove the Rails native test folder and related configurations.

To begin, we’ll add the necessary gems to our Gemfile:

group :development, :test do
  # Rspec testing module and needed libs
  gem 'factory_bot_rails', '5.2.0'
  gem 'rspec-rails', '~> 4.0.0'
end

group :test do
  # db cleaner for test suite 
  gem 'database_cleaner-active_record', '~> 2.0.1'
end

Now do

bundle install # this installs all the above gems

If your Rails application already includes the built-in Rails test suite, you’ll need to remove it in order to use the RSpec module instead.

I recommend using RSpec over the Rails native test module, as RSpec provides more robust helpers and mechanisms for testing.

To disable the Rails test suite, navigate to the application.rb file and comment out the following line:

# require 'rails/test_unit/railtie'

Inside the Application class, add this line:

# Don't generate system test files.
config.generators.system_tests = nil
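Optionally, you can also point the Rails generators at RSpec and Factory Bot so that newly generated models get spec and factory files instead of the default test files. A small sketch for the same Application class (adjust to your app):

config.generators do |g|
  g.test_framework :rspec
  g.fixture_replacement :factory_bot, dir: 'spec/factories'
end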

Remove the native rails test folder:

rm -r test/

We use factories over fixtures. Remove this line from rails_helper.rb

config.fixture_path = "#{::Rails.root}/spec/fixtures"

and modify this line to:

config.use_transactional_fixtures = false # instead of true

Setting use_transactional_fixtures to false lets Database Cleaner manage the test data instead of wrapping each example in a transaction.

Database Cleaner

Now we configure Database Cleaner, which is used for managing data across our test runs.

Open the rails_helper.rb file and require the module:

require 'rspec/rails'
require 'database_cleaner'  # <= add here

Note: Use this only if you run integration tests with Capybara or deal with JavaScript code in the test suite.

“Capybara spins up an instance of our Rails app that can’t see our test data transaction so even tho we’ve created a user in our tests, signing in will fail because to the Capybara run instance of our app, there are no users.”

I experienced database credentials issues:

➜ rspec
An error occurred while loading ./spec/models/user_spec.rb.
Failure/Error: ActiveRecord::Migration.maintain_test_schema!

Mysql2::Error::ConnectionError:
  Access denied for user 'username'@'localhost' (using password: NO)

Initially, I planned to use Database Cleaner, but later I realized that an error I was experiencing was actually due to a corrupted credentials.yml.enc file. I’m not sure how it happened.

To check if your credentials are still intact, try editing the file and verifying that the necessary information is still present.

EDITOR="code --wait" bin/rails credentials:edit

Now, in the RSpec configuration block, we add the Database Cleaner configuration.

Add the following file:

spec/support/database_cleaner.rb

Inside, add the following:

# DB cleaner using database cleaner library
RSpec.configure do |config|
  # This says that before the entire test suite runs, clear 
  # the test database out completely
  config.before(:suite) do
    DatabaseCleaner.strategy = :transaction
    DatabaseCleaner.clean_with(:truncation)
  end

  # This sets the default database cleaning strategy to 
  # be transactions
  config.before(:each) do
    DatabaseCleaner.strategy = :transaction
  end

  # include this if you use Capybara integration tests
  config.before(:each, :js => true) do
    DatabaseCleaner.strategy = :truncation
  end

  # These lines hook up database_cleaner around the beginning 
  # and end of each test, telling it to execute whatever 
  # cleanup strategy we selected
  config.before(:each) do
    DatabaseCleaner.start
  end

  config.after(:each) do
    DatabaseCleaner.clean
  end
end

and be sure to require this file in rails_helper.rb

require 'rspec/rails'
require 'database_cleaner'
require_relative 'support/database_cleaner'  # <= here

Configure Factories

Note: We use factories over fixtures because factories provide features that make writing test cases easier.

Create a folder to generate the factories:

mkdir spec/factories

Rails generators will automatically generate factory files for models inside this folder.

The model generator automatically creates the following files:

spec/models/model_spec.rb
spec/factories/model.rb
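A generated factory starts out empty; you then fill in the attributes your model needs. A small sketch (the user model and its name/email attributes are hypothetical):

# spec/factories/users.rb
FactoryBot.define do
  factory :user do
    name  { 'Test User' }
    email { 'test@example.com' }
  end
end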

Now let's load the Factory Bot configuration into the RSpec test suite.

Add the following file:

spec/support/factory_bot.rb
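Per the Factory Bot configuration guide linked in the references below, this file simply mixes the Factory Bot syntax methods into RSpec so you can write create(:user) instead of FactoryBot.create(:user):

# spec/support/factory_bot.rb
RSpec.configure do |config|
  config.include FactoryBot::Syntax::Methods
end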

and be sure to require this file in rails_helper.rb

require 'rspec/rails'
require 'database_cleaner'
require_relative 'support/database_cleaner'
require_relative 'support/factory_bot'  # <= here

In rails_helper.rb you can see the following line commented out:

# Dir[Rails.root.join('spec', 'support', '**', '*.rb')].sort.each { |f| require f }

You can uncomment that line to automatically require everything under spec/support, but I don't recommend this approach as it can slow down test execution. Instead, require each support file as needed.

Here’s the final version of the rails_helper.rb file. Note that we won’t be using Capybara for integration tests, so we’re not including the database_cleaner configuration:

# This file is copied to spec/ when you run 'rails generate rspec:install'
require 'spec_helper'
ENV['RAILS_ENV'] ||= 'test'
require File.expand_path('../config/environment', __dir__)
# Prevent database truncation if the environment is production
abort('The Rails environment is running in production mode!') if Rails.env.production?
require 'rspec/rails'
require_relative 'support/factory_bot'

# Checks for pending migrations and applies them before tests are run.
# If you are not using ActiveRecord, you can remove these lines.
begin
  ActiveRecord::Migration.maintain_test_schema!
rescue ActiveRecord::PendingMigrationError => e
  puts e.to_s.strip
  exit 1
end
RSpec.configure do |config|
  # If you're not using ActiveRecord, or you'd prefer not to run each of your
  # examples within a transaction, remove the following line or assign false
  # instead of true.
  config.use_transactional_fixtures = false

  config.infer_spec_type_from_file_location!

  # Filter lines from Rails gems in backtraces.
  config.filter_rails_from_backtrace!
  # arbitrary gems may also be filtered via:
  # config.filter_gems_from_backtrace("gem name")
end

A spec directory looks something like this:

spec/
  controllers/
    user_controller_spec.rb
    product_controller_spec.rb
  factories/
    user.rb
    product.rb
  models/
    user_spec.rb
    product_spec.rb
  mailers/
    mailer_spec.rb
  services/
    service_spec.rb  
  rails_helper.rb
  spec_helper.rb

References:

https://github.com/rspec/rspec-rails
https://relishapp.com/rspec/rspec-rails/docs
https://github.com/thoughtbot/factory_bot/blob/master/GETTING_STARTED.md#configure-your-test-suite
https://github.com/DatabaseCleaner/database_cleaner

Model Specs

Let's generate a model spec. A model spec is used to test smaller parts of the system, such as classes or methods.

# RSpec also provides its own spec file generators
➜ rails generate rspec:model user
      create  spec/models/user_spec.rb
      invoke  factory_bot
      create    spec/factories/users.rb

Now run the rspec command. That's it. You can see the output from RSpec.

➜ rspec
*

Pending: (Failures listed here are expected and do not affect your suite's status)

  1) Item add some examples to (or delete) /home/.../spec/models/user_spec.rb
     # Not yet implemented
     # ./spec/models/user_spec.rb:4

Finished in 0.00455 seconds (files took 1.06 seconds to load)
1 example, 0 failures, 1 pending

Let's discuss how to write a perfect model spec in the next lesson.

AWS SNS: How to send SMS to a topic, OTP SMS, promotional or transactional SMS

You can send SMS in 2 ways using AWS SNS.

  1. Subscribe mobile numbers to a topic that is already created in AWS SNS and send the SMS to all subscribed numbers.
  2. Send SMS directly to a mobile number.

The following AWS doc is a good starting point; it describes how to create a topic, subscribe to a topic and send SMS to mobile numbers.

https://docs.aws.amazon.com/code-samples/latest/catalog/code-catalog-ruby-example_code-sns.html

For demonstration purposes, I use Ruby here.

  1. Subscribe to a topic and send SMS
require 'aws-sdk-sns'  # v2: require 'aws-sdk'

sns = Aws::SNS::Resource.new(region: 'us-west-2')

topic = sns.topic('arn:aws:sns:us-west-2:123456789:MySampleTextTopic')

topic.publish({
  message: 'Hello!'
})

This assumes you already created a topic inside your aws console:

Goto AWS Console

  1. Open AWS SNS
  2. Goto Left side -> Mobile -> text messaging sms
  3. Create a topic

You can also follow the link above to create a topic, subscribe to a topic and send SMS through the API.
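To subscribe a mobile number to the topic through the API, a minimal sketch (the phone number is a placeholder; topic is the object created above):

# subscribe a mobile number so it receives SMS published to the topic
topic.subscribe({
  protocol: 'sms',
  endpoint: '+919876543210' # placeholder number in E.164 format
})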

2. Send SMS directly to a mobile number (eg: Send OTP SMS)

You can use either the AWS console or the SNS API to send the SMS.

AWS console:

  1. Open AWS SNS
  2. Goto Left side -> Mobile -> text messaging sms
  3. We are using transactional text messages
  4. Goto publish text message
  5. Select transactional, add mobile number and publish it

In almost all cases, we use an API for sending OTP SMS. For that, follow these steps.

SNS API – Steps

gem install aws-sdk-sns 
require 'aws-sdk-sns'
otp = generate_otp
set_sns_client
set_sns_client_attrs
response = publish_sms(otp)
  • Generate an OTP
def generate_otp
  (1000..9999).to_a.sample
end
  • Set SNS client
def set_sns_client
  @sns_client = Aws::SNS::Client.new(
    region: ENV['AWS_SNS_REGION'],
    access_key_id: ENV['AWS_SNS_ACCESS_KEY'],
    secret_access_key: ENV['AWS_SNS_SECRET_KEY']
  )
end

  • Set SNS client attributes
def set_sns_client_attrs
  @sns_client.set_sms_attributes({
    attributes: {
      'DefaultSenderID' => SENDER_ID,
      'DefaultSMSType' => SMS_TYPE
    }
  })
end

  • Publish SMS
def publish_sms(otp)
  @sns_client.publish({
    phone_number: @mobile_no,
    message: "#{OTM_MSG} #{otp}"
  })
end

Here,

OTM_MSG: ‘Your OTP for login is’

SMS_TYPE: ‘Transactional’ or ‘Promotional’

If you want to send an OTP, use ‘Transactional’. If you want to send promotional SMS about your product, use ‘Promotional’.

SENDER_ID: the SMS header (sender ID) that you have already registered with TRAI.

The steps to add a sender ID in AWS are given below:

For example, suppose your sender ID is Zomato.

Follow these steps to add the sender ID (Zomato) to AWS SNS:

  1. Sign in to the Amazon SNS console – https://console.aws.amazon.com/sns/home
  2. On the navigation panel, choose Mobile, Text messaging (SMS).
  3. On the Mobile text messaging (SMS) page, in the Text messaging preferences section, choose Edit.
  4. On the Edit text messaging preferences page, in the Details section, do the following:
  5. For Default sender ID , enter the provided sender ID to be used (Zomato) as the default for all messages from your account.
  6. Choose Save changes.

If you don’t want to register a sender ID, skip the set_sns_client_attrs method and just publish the SMS. The message is then treated as ‘Promotional’ and the sender ID will be an 8-character random value. Amazon sends this type of SMS over the international route and it costs almost $0.02 (Rs. 1.5) per SMS, which is a very high rate. So I recommend registering a sender ID that resembles your product or company name via Jio TrueConnect (that is free, link given below) and using it in SNS.

If you don’t know how to register a sender ID, follow this:

For the AWS SNS service, there are two routes for sending SMS:

  1. Local route
  2. International route

For the local route, the price is Rs. 0.20 per SMS.
For the international route, the price is Rs. 1.58 per SMS, which is too high.

By default, AWS SNS uses the international route.

If you are from India, follow the TRAI registration process.
To use the local route, we have to register our use case and message templates with TRAI.

So first register here:
https://www.vilpower.in/
as an enterprise/company, with all your company details and purpose.

These registration requirements are designed to reduce the number of unsolicited messages that Indian consumers receive and to protect consumers from potentially harmful messages.

You can register with Jio for free:

The link to register:
https://trueconnect.jio.com/#/home/entity-registration
Select ‘Principal entity’ and continue.

Recently, the Indian government made DLT registration mandatory for sending SMS.

Example:
Take a message from AD-ZOMATO. Here, ZOMATO is the 6-character sender ID that we could previously just hand to the service provider and send SMS with. Now we have to register it in DLT before our service provider can use it.

After registering with DLT, we get an ENTITY ID. This entity ID needs to be attached to our OTP service provider for sending OTP messages.

If you are using the SNS service for the first time, you should increase your SMS spending quota:

AWS says:

If you're new to SMS messaging with Amazon SNS, request a monthly SMS spending threshold that meets the expected demands of your SMS use case. By default, your monthly spending threshold is $1.00 (USD). You can request to increase your spending threshold in the same support case that includes your request for a sender ID

Because Amazon SNS is a distributed system, it stops sending SMS messages within minutes of the spending quota being exceeded. During this period, if you continue to send SMS messages, you might incur costs that exceed your quota.

https://docs.aws.amazon.com/sns/latest/dg/channels-sms-awssupport-sender-id.html

Requesting increases to your monthly SMS spending quota for Amazon SNS:

https://docs.aws.amazon.com/sns/latest/dg/channels-sms-awssupport-spend-threshold.html

Currently, Amazon SNS supports SMS messaging in the following AWS Regions:

https://docs.aws.amazon.com/sns/latest/dg/sns-supported-regions-countries.html

Reference:

https://docs.aws.amazon.com/sdk-for-ruby/v2/api/Aws/SNS/Client.html