Understanding the Array Aggregation Function in PostgreSQL (array_agg)

PostgreSQL, also known as Postgres, is a powerful and feature-rich relational database management system. One of its notable features is the array aggregation function, array_agg, which allows you to aggregate values from multiple rows into a single array. In this blog post, we’ll explore how array_agg works, its applications, and considerations for performance.

How Does array_agg Work?

The array_agg function takes an expression as an argument and returns an array containing the values of that expression for all the rows that match the query. Let’s illustrate this with an example.

Consider a table called employees with columns id, name, and department. Suppose we want to aggregate all the names of employees belonging to the “Sales” department into an array. We can achieve this using the following query:

SELECT array_agg(name) AS sales_employees
FROM employees
WHERE department = 'Sales';

The result of this query will be a single row with a column named sales_employees, which contains an array of all the names of employees in the “Sales” department.
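For example, given hypothetical rows ('Alice', 'Sales'), ('Bob', 'Sales'), and ('Carol', 'Engineering') in employees, the query would return something like:

 sales_employees
-----------------
 {Alice,Bob}
(1 row)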

Usage of array_agg with Subqueries

The ability to get an array as the output opens up various possibilities, especially when used in subqueries. You can leverage this feature to aggregate data from related tables or filter results based on complex conditions.

For instance, imagine you have two tables, orders and order_items, where each order can have multiple items. You want to retrieve a list of orders along with an array of item names for each order. The following query achieves this:

SELECT o.order_id, (
  SELECT array_agg(oi.item_name)
  FROM order_items oi
  WHERE oi.order_id = o.order_id
) AS item_names
FROM orders o;

In this example, the subquery within the main query’s select list utilizes array_agg to aggregate item names from the order_items table, specific to each order.
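As a sketch of an equivalent formulation, the same result can usually be produced with a grouped join instead of a correlated subquery (with one subtle difference: with the join, an order that has no items yields {NULL} rather than NULL):

SELECT o.order_id, array_agg(oi.item_name) AS item_names
FROM orders o
LEFT JOIN order_items oi ON oi.order_id = o.order_id
GROUP BY o.order_id;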

Complex Query Example Using array_agg

To demonstrate a more complex scenario, let’s consider a database that stores books and their authors. We have three tables: books, authors, and book_authors (a join table that associates books with their respective authors).

Suppose we want to retrieve a list of books along with an array of author names for each book, in alphabetical order. We can achieve this using a query that involves joins and array_agg:

SELECT b.title, array_agg(a.author_name ORDER BY a.author_name ASC) AS authors
FROM books b
JOIN book_authors ba ON b.book_id = ba.book_id
JOIN authors a ON ba.author_id = a.author_id
GROUP BY b.book_id;

In this query, we join the tables based on their relationships and use array_agg to aggregate author names into an array for each book. Grouping by the primary key b.book_id ensures one row per book and, in PostgreSQL, lets us select b.title directly because it is functionally dependent on that key.

Performance Considerations

While array_agg is a powerful function, it’s essential to consider its performance implications, especially when working with large datasets. Aggregating values into arrays can be computationally intensive, and the resulting array can consume significant memory.

If you anticipate working with large result sets or complex queries involving array_agg, it’s worth optimizing your database schema, indexing relevant columns, and analyzing query performance using PostgreSQL’s built-in tools.
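For instance, a minimal sketch of inspecting the plan and timing of the books query above with EXPLAIN ANALYZE:

EXPLAIN ANALYZE
SELECT b.title, array_agg(a.author_name) AS authors
FROM books b
JOIN book_authors ba ON b.book_id = ba.book_id
JOIN authors a ON ba.author_id = a.author_id
GROUP BY b.book_id;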

Additionally, consider whether array_agg is the most efficient solution for your specific use case. Sometimes, alternative approaches, such as using temporary tables or custom aggregate functions, might offer better performance.

Conclusion

The array_agg function in PostgreSQL provides a powerful mechanism for aggregating values into arrays. It offers flexibility and opens up opportunities for various applications, including subqueries and complex data manipulations. However, when working with large datasets, it’s crucial to be mindful of potential performance implications and explore optimization strategies accordingly.

Understanding the Difference Between Date.current and Date.today in Ruby

Introduction:

In the world of Ruby programming, we often encounter scenarios where we need to work with dates. Ruby's standard library provides Date.today, and Rails' ActiveSupport adds Date.current; both retrieve the current date. Although they may appear similar at first glance, understanding their differences can help us write more accurate and reliable code. Let's explore the reasons behind their existence, where we can use them, and the potential pitfalls we might encounter.

  1. Why are there two different methods?
    Date.current and Date.today exist to handle different time zone considerations. When developing applications using the Ruby on Rails framework, it's crucial to account for the possibility of multiple time zones. Rails provides a simple and consistent way to handle time zone-related operations, and Date.current is part of that feature set.
  2. Where can we use them?
    a) Date.current: This method is specifically designed for Rails applications. It returns the current date in the time zone specified by the application’s configuration. It ensures that the date obtained is consistent across the entire application, regardless of the server or machine executing the code. Date.current is particularly useful when dealing with user interactions, scheduling, or any scenario where consistent time zone handling is necessary.

    b) Date.today: This method retrieves the current date based on the default time zone of the server or machine where the code is running. It is not limited to Rails applications and can be used in any Ruby program. However, when working in a Rails application, it’s generally recommended to use Date.current to maintain consistent time zone handling.
  3. Problems when using each method:
    Using these methods incorrectly or without understanding their differences can lead to unexpected results:
    a) Inconsistent time zones: If a Rails application is deployed across multiple servers or machines with different default time zones, using Date.today may produce inconsistent results. It can lead to situations where the same code yields different dates depending on the server's time zone (see the sketch after this list).

    b) Time zone misconfigurations: In Rails applications, failing to properly set the application’s time zone can result in incorrect date calculations. It’s crucial to configure the desired time zone in the application’s configuration file, ensuring that Date.current returns the expected results.
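Here is a minimal sketch of the difference, assuming the application time zone is set to 'Sydney' while the server itself runs in UTC (the dates are hypothetical):

# config/application.rb
config.time_zone = 'Sydney'

# Shortly after midnight Sydney time, it is still the previous day in UTC:
Date.current # => Tue, 02 Jan 2024 (application time zone)
Date.today   # => Mon, 01 Jan 2024 (server's system time zone)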

Conclusion:

Understanding the nuances between Date.current and Date.today in Ruby can greatly improve the accuracy and reliability of our code, particularly in Rails applications. By using Date.current, we ensure consistent time zone handling throughout the application, regardless of the server or machine executing the code. Carefully considering the appropriate method to use based on the specific context can prevent common pitfalls related to time zone inconsistencies.

Rails 6.1 introduces ‘compact_blank’

Before Rails 6.1, we used to remove blank values from an Array or Hash using other available methods.

Before:

  [...].delete_if(&:blank?)
  {....}.delete_if { |_k, v| v.blank? }
OR
  [...].reject(&:blank?)
  ...

From Rails 6.1 onwards, you can use the module Enumerable’s compact_blank and compact_blank! methods.

Now we can use:

[1, "", nil, 2, " ", [], {}, false, true].compact_blank
=> [1, 2, true]

['', nil, 8, [], {}].compact_blank
=> [8]

{ a: "", b: 1, c: nil, d: [], e: false, f: true }.compact_blank
=> {:b=>1, :f=>true}

The method compact_blank! is the destructive counterpart of compact_blank (handle with care); it mutates the receiver instead of returning a new object.
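A quick sketch of the destructive version:

items = ["a", "", nil, "b"]
items.compact_blank! # removes blank values from the receiver in place
items
=> ["a", "b"]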

As a Rails developer, I am grateful for this method because there are many scenarios where we find ourselves replicating this code.

PostgreSQL commands to remember

A list of commands to remember when using the Postgres database management system.

Login, Create user and password

# login to psql client
psql postgres # OR
psql -U postgres
create database mydb; # create db
create user abhilash with SUPERUSER CREATEDB CREATEROLE encrypted password 'abhilashPass!';
grant all privileges on database mydb to abhilash; # add privileges
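You can then connect as the new user (the host flag is shown for completeness and may not be needed locally):

psql -U abhilash -d mydb -h localhost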

Connect to DB, List tables and users, functions, views, schema

\l # lists all the databases
\c dbname # connect to db
\dt # show tables
\d table_name # Describe a table
\dn # List available schema
\df #  List available functions
\dft # list trigger functions (\d table_name also shows a table's triggers)
\dv # List available views
\du # lists all user accounts and roles 
\du+ # extended version which shows even more information

Show history, save to file, edit using editor, execution time, help

SELECT version(); # show the PostgreSQL server version
\g  # Execute the previous command
\s # Command history
\s filename # save Command history to a file
\i filename # Execute psql commands from a file
\? # help on psql commands
\h ALTER TABLE # To get help on specific PostgreSQL statement
\timing #  Turn on/off query execution time
\e # Edit command in your own editor
\ef [function_name] # edit a function in your editor (use \df to list functions)
\o [file_name] # send all subsequent query results to a file
    \o out.txt
    \dt
    \o # switch output back to stdout
    \dt

Change output, Quit psql

# Switch output options
\a # switches from aligned to non-aligned column output
\H # formats the output to HTML
\q # quit psql

Reference: https://www.postgresqltutorial.com/postgresql-administration/psql-commands/

PostgreSQL Cheat Sheet

CREATE DATABASE

CREATE DATABASE dbName;

CREATE TABLE (with auto numbering integer id)

CREATE TABLE tableName (
 id serial PRIMARY KEY,
 name varchar(50) UNIQUE NOT NULL,
 dateCreated timestamp DEFAULT current_timestamp
);

Add a primary key

ALTER TABLE tableName ADD PRIMARY KEY (id);

Create an INDEX

CREATE UNIQUE INDEX indexName ON tableName (columnNames);

Backup a database (command line)

pg_dump dbName > dbName.sql

Backup all databases (command line)

pg_dumpall > pgbackup.sql

Run a SQL script (command line)

psql -f script.sql databaseName

Search using a regular expression

SELECT column FROM table WHERE column ~ 'foo.*';

The first N records

SELECT columns FROM table LIMIT 10;

Pagination

SELECT cols FROM table LIMIT 10 OFFSET 30;

Prepared Statements

PREPARE preparedInsert (int, varchar) AS
  INSERT INTO tableName (intColumn, charColumn) VALUES ($1, $2);
EXECUTE preparedInsert (1,'a');
EXECUTE preparedInsert (2,'b');
DEALLOCATE preparedInsert;
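Note that a prepared statement lives only for the duration of the current session; DEALLOCATE simply frees it earlier.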

Create a Function

CREATE OR REPLACE FUNCTION month (timestamp) RETURNS integer 
 AS 'SELECT date_part(''month'', $1)::integer;'
LANGUAGE 'sql';

Table Maintenance

VACUUM ANALYZE table;

Reindex a database, table or index

REINDEX DATABASE dbName;

Show query plan

EXPLAIN SELECT * FROM table;

Import from a file

COPY destTable FROM '/tmp/somefile';

Show all runtime parameters

SHOW ALL;

Grant all permissions to a user

GRANT ALL PRIVILEGES ON table TO username;

Perform a transaction

BEGIN;
 UPDATE accounts SET balance = balance + 50 WHERE id = 1;
COMMIT;

Basic SQL

Get all columns and rows from a table

SELECT * FROM table;

Add a new row

INSERT INTO table (column1,column2)
VALUES (1, 'one');

Update a row

UPDATE table SET foo = 'bar' WHERE id = 1;

Delete a row

DELETE FROM table WHERE id = 1;

From: https://www.petefreitag.com/cheatsheets/postgresql/


Setup Nginx, SSL , Firewall | Moving micro-services into AWS EC2 instance – Part 4

Install the Nginx proxy server. Nginx also acts as a load balancer, which helps with balancing network traffic.

sudo apt-get update
sudo apt-get install nginx

Commands to stop, start, restart, check status

sudo systemctl stop nginx
sudo systemctl start nginx
sudo systemctl restart nginx

# after making configuration changes
sudo systemctl reload nginx
sudo systemctl disable nginx # don't start automatically at boot
sudo systemctl enable nginx # start automatically at boot

Install SSL – Letsencrypt

Install the packages needed for SSL:

sudo add-apt-repository ppa:certbot/certbot
sudo apt-get update
sudo apt-get install python-certbot-nginx

Install the SSL Certificate:

certbot -d '*.domain.com' -d domain.com --manual --preferred-challenges dns certonly

Your certificate and chain have been saved at:
   /etc/letsencrypt/live/domain.com/fullchain.pem

Your key file has been saved at:
   /etc/letsencrypt/live/domain.com/privkey.pem

SSL certificate auto renewal

Let’s Encrypt’s certificates are valid for 90 days. To automatically renew the certificates before they expire, the certbot package creates a cronjob which will run twice a day and will automatically renew any certificate 30 days before its expiration.

Once a certificate is renewed, we also have to reload the nginx service. To do so, append --renew-hook "systemctl reload nginx" to the /etc/cron.d/certbot file so that it looks like this:

/etc/cron.d/certbot
0 */12 * * * root test -x /usr/bin/certbot -a \! -d /run/systemd/system && perl -e 'sleep int(rand(3600))' && certbot -q renew --renew-hook "systemctl reload nginx"

To test the renewal process, use certbot's --dry-run switch:

sudo certbot renew --dry-run

Renew your EXPIRED certificate this way:

sudo certbot --force-renewal -d '*.domain.com' -d domain.com --manual --preferred-challenges dns certonly

Are you OK with your IP being logged?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
(Y)es/(N)o: Y

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Please deploy a DNS TXT record under the name
_acme-challenge.<domain>.com with the following value:

O3bpxxxxxxxxxxxxxxxxxxxxxxxxxxY4TnNo

Before continuing, verify the record is deployed.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Press Enter to Continue

You need to update the DNS TXT record for _acme-challenge.<domain>.com.

sudo systemctl restart nginx # restart nginx to take effect
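For reference, a minimal reverse-proxy server block wiring these certificates to one of the services might look like the sketch below (the file path, domain, and upstream port are assumptions based on this setup):

# /etc/nginx/sites-available/domain.com
server {
    listen 443 ssl;
    server_name domain.com;

    ssl_certificate     /etc/letsencrypt/live/domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/domain.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:4031; # e.g. the authentication service
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}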

Configure the Firewall

Next, we’ll update our firewall to allow HTTPS traffic.

Check the firewall status on the system. If it is inactive, enable the firewall.

sudo ufw status # check status

# enable firewall
sudo ufw enable
sudo ufw allow ssh
sudo ufw allow OpenSSH

Enable particular ports where your micro-services are running. Example:

sudo ufw allow 4031/tcp # Authentication service
sudo ufw allow 4131/tcp # File service
sudo ufw allow 4232/tcp # Search service

You can delete the ‘Authentication service’ firewall rule by:

sudo ufw delete allow 4031/tcp
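If you are unsure of the exact rule text, ufw can also list and delete rules by index (the index below is a hypothetical example):

sudo ufw status numbered # list rules with their index numbers
sudo ufw delete 3 # delete rule number 3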

Setup Ruby, ruby-build, rbenv-gemset | Conclusion – Moving micro-services into AWS EC2 instance – Part 3

In this post let’s setup Ruby and ruby gemsets for each project, so that your package versions are maintained.

Install ruby-build # ruby-build is a command-line utility for rbenv

git clone https://github.com/rbenv/ruby-build.git ~/.rbenv/plugins/ruby-build

# Add ruby build path

echo 'export PATH="$HOME/.rbenv/plugins/ruby-build/bin:$PATH"' >> ~/.bashrc # OR
echo 'export PATH="$HOME/.rbenv/plugins/ruby-build/bin:$PATH"' >> ~/.zshrc

# load it

source ~/.bashrc # OR
source ~/.zshrc


For macOS users


# verify rbenv
curl -fsSL https://github.com/rbenv/rbenv-installer/raw/main/bin/rbenv-doctor | bash

If you are using zsh add the following to `~/.zshrc`

# rbenv configuration
eval "$(rbenv init -)"
export RUBY_CONFIGURE_OPTS="--with-openssl-dir=$(brew --prefix openssl@1.1)"

Install Ruby 2.5.1 using rbenv

rbenv install 2.5.1

rbenv global 2.5.1 # to make this version as default

ruby -v # must display 2.5.1 if installed correctly

which ruby # must show the fully qualified path of the executable

echo "gem: --no-document" > ~/.gemrc # to skip documentation while installing gem

rbenv rehash # recent versions of rbenv apparently don't need this; nevertheless, let's use it to avoid surprises.

gem env home # See related details

# If a new version of ruby was installed, ensure RubyGems is up to date.
gem update --system --no-document


Install rbenv gemset – https://github.com/jf/rbenv-gemset

git clone git://github.com/jf/rbenv-gemset.git ~/.rbenv/plugins/rbenv-gemset

If you get the following issue:

fatal: remote error:
  The unauthenticated git protocol on port 9418 is no longer supported.
# Fix
 git clone https://github.com/jf/rbenv-gemset.git ~/.rbenv/plugins/rbenv-gemset

Now clone your project and go inside the project folder, i.e. the micro-service folder (say my-project), which has a Gemfile in it, and run the following commands.

cd my-project

my-project $ rbenv gemset init # NOTE: this will create the gemset under the current ruby version.
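This creates a .rbenv-gemsets file in the project root that names the gemset, one name per line; a sketch of its contents (the gemset name here is an assumption):

my-project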

my-project $ rbenv gemset list # list all gemsets

my-project $ rbenv gemset active # check this in project folder

my-project $ gem install bundler -v '1.6.0'

my-project $ rbenv rehash

my-project $ bundle install  # install all the gems for the project inside the gemset.

my-project $ rails s -e production # start rails server
my-project $ puma -e production -p 3002 -C config/puma.rb # OR start puma server
# OR start the server you have configured with rails. 

Do this for all the services and see how everything runs. The above will install all the gems inside the project gemset, which acts like a namespace.

So our aim is to setup all the ruby micro-services in the same machine.

  • I started 10 services together in AWS EC2 (type: t3.small).
  • The database is running in a t2.small instance with 2 volumes (EBS) attached.
  • For the background job DB, Redis is running in a t2.micro instance.

So for 3 EC2 instances + 2 EBS volumes ($26) + Elastic IP addresses (AWS charges some amount, $7.4) over a one-month duration, it costs me around $77.8, almost 6k rupees. That means we reduced the AWS cloud cost to half of the previous cost.

Setup Zsh, NVM, Rbenv | Moving micro-services into AWS EC2 instance – Part 2

In this post let’s continue to install the other packages.

Install Oh My Zsh.

sh -c "$(curl -fsSL https://raw.github.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
sudo reboot

Make sure that ~/.zshrc contains the following lines.

# Path to your oh-my-zsh installation.
export ZSH="$HOME/.oh-my-zsh"

# See https://github.com/ohmyzsh/ohmyzsh/wiki/Themes
ZSH_THEME="robbyrussell"

# Example format: plugins=(rails git textmate ruby lighthouse)
# Add wisely, as too many plugins slow down shell startup.
plugins=(git)

source $ZSH/oh-my-zsh.sh

# Rbenv Loader
export PATH="$HOME/.rbenv/bin:$PATH"
eval "$(rbenv init -)"
export PATH="$HOME/.rbenv/plugins/ruby-build/bin:$PATH"

# NVM loader
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"  # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion"  # This loads nvm bash_completion

Install NVM

sudo apt-get update -y
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.34.0/install.sh | bash
source ~/.bashrc # OR
source ~/.zshrc
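Verify that nvm is loaded and install a Node version (version 10 here matches the stack from Part 1):

command -v nvm # prints 'nvm' if loaded correctly
nvm install 10
nvm use 10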

Install Rbenv

git clone https://github.com/rbenv/rbenv.git ~/.rbenv

echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bashrc # OR
echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.zshrc

echo 'eval "$(rbenv init -)"' >> ~/.bashrc # OR
echo 'eval "$(rbenv init -)"' >> ~/.zshrc

source ~/.bashrc # OR
source ~/.zshrc

type rbenv # to see if rbenv is installed correctly

In this tutorial, we installed nvm to manage Node versions, rbenv to manage Ruby versions and gemsets, and Oh My Zsh for a better terminal interface with more information. As a result, we use the .zshrc file instead of the .bashrc file on this machine.

To load all the necessary configurations into the terminal, add the above lines of code for nvm and rbenv to the zshrc file.

Basic Software installation| Moving micro-services into AWS EC2 instance – Part 1

As I mentioned in the previous post, I have decided to move away from micro-services. To achieve this, I am taking an AWS EC2 instance and configuring each micro-service on this instance. For this setup, I am using an Ubuntu 16.04 machine because my application setup is a bit old. However, if you have newer versions of Rails, Ruby, etc., you may want to choose Ubuntu 20.04.

Our setup includes Ruby on Rails (5.2.1) micro-services (5-10 in number), a NodeJS application, a Sinatra Application, and an Angular 9.1 Front-End Application.

To begin, go to the AWS EC2 home page and select an Ubuntu 16.04 machine with default configurations and SSH enabled.

https://ap-south-1.console.aws.amazon.com/ec2/v2/home

Now log in to this new instance and install all the packages we need for our setup.

Software Installation

Update the package list.

sudo apt-get update

Install Ruby dependencies.

sudo apt-get install ruby-dev
sudo apt-get install libxml2-dev
sudo apt-get install libxslt-dev
sudo apt-get install graphviz

Install NodeJS

curl -sL https://deb.nodesource.com/setup_10.x | sudo -E bash -
sudo apt-get install -y nodejs
node -v

Install yarn and other dependencies.

curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -
echo "deb https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list
sudo apt-get update
sudo apt-get install git-core zlib1g-dev build-essential libssl-dev libreadline-dev libyaml-dev libsqlite3-dev sqlite3 libxml2-dev libxslt1-dev libcurl4-openssl-dev software-properties-common libffi-dev nodejs yarn

Install Mysql 5.7 (Remember this is for Ubuntu 16.04, 18.04 versions)

sudo apt-get install mysql-server-5.7 mysql-client-core-5.7 libmysqlclient-dev
sudo service mysql status # or
systemctl status mysql
# username: <your-username>, password: <your-password>

You can also try mysql_secure_installation if you use another MySQL version.
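To verify the installation, try logging in (the root user is assumed here):

mysql -u root -p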

Note that if you are setting up Ubuntu 20.04, there is a significant change in MySQL, as the version of MySQL is now 8.0 instead of 5.7. If you have applications running in MySQL 5.7, it is recommended that you set up and use Ubuntu 16.04 or 18.04.

We will continue the installation process in our next post.

Our Challenges with Microservices on AWS ECS

As part of our startup, our predecessors chose to use micro-services for our new website, as it was a trending technology.

This decision has many benefits, such as:

  • Scaling a website becomes much easier when using micro-services, as each service can be scaled independently based on its individual needs.
  • The loosely coupled nature of micro-services also allows for easier development and maintenance, as changes to one service do not affect the functionality of other services.
  • Additionally, deployment can be focused on each individual service, making the overall process more efficient.
  • Micro-services also allow for the use of different technologies for each service, providing greater flexibility and the ability to choose the best tools for each task.
  • Finally, testing can be concentrated on one service at a time, allowing for more thorough and effective testing, which can result in higher quality code and a better user experience.

In developing our application with micro-services, we considered the potential problems that we may face in the future. However, it is important to note that we also need to consider whether these problems will have a significant impact compared to the potential disadvantages of using micro-services.

One factor to keep in mind is that our website is currently experiencing low traffic and we are acquiring clients gradually. As such, we need to consider whether the benefits of micro-services outweigh any potential drawbacks for our particular situation.

Regardless, some potential issues with micro-services include increased complexity and overhead in development, as well as potential performance issues when integrating multiple services. Additionally, managing multiple services and ensuring they communicate effectively can also be a challenge.

Despite the benefits of micro-services, we have faced some issues in implementing them. One significant challenge is the increased complexity of deployment and maintenance that comes with having multiple services. This can require more time and resources to manage and can potentially increase the likelihood of errors.

Additionally, the cost of using AWS ECS for hosting all of the micro-services can be higher than using other hosting solutions for a less traffic website. This is something to consider when weighing the benefits and drawbacks of using micro-services for our specific needs.

Another challenge we have faced is managing dependencies between services, which can be difficult to avoid. When one service goes offline, it can cause issues with other services, leading to a “No Service” issue on the website.

Finally, it can be very difficult to go back to a monolithic application even if we combine 3-4 services together, as they may use different software or software versions. This can make it challenging to make changes or updates to the application as a whole.

It is important to carefully consider whether micro-service architecture is the best fit for your business and current situation. If you have a less used website or are just starting your business, it may not be necessary or cost-effective to implement micro-services.

It is important to take the time to evaluate the benefits and drawbacks of using micro-services for your specific needs and budget. Keep in mind that hosting multiple micro-services can come with additional costs, so be prepared to pay a minimum amount for hosting if you decide to go this route.

Ultimately, the decision to use micro-services should be based on a thorough assessment of your business needs and available resources, rather than simply following a trend or industry hype.

Set up:

  • Used AWS ECS (ec2 launch type) with services and task definitions defined
  • 11 Micro-services, 11 containers are spinning
  • Cost: Rs.12k ($160) per month

Workaround:

  • Consider using the AWS Fargate launch type, though it's not certain these issues would be resolved
  • Deploy all the services in one EC2 Instance without using ECS

AWS: How to push your docker images to ECR

Try to do the docker login from your terminal. If you face the following error:

Error: Cannot perform an interactive login from a non TTY device

To fix this, update your AWS CLI from version 1.x.x to version 2.

If you don't have the AWS CLI installed, please install it.

Install AWS-CLI v2

$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"

$ unzip awscliv2.zip

$ sudo ./aws/install --update -b /home/abhilash/.local/bin

$ aws --version

NOTE: Add/update your access and secret keys with aws configure for your new IAM role that has ECR access.
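The configuration prompt looks like this (the values are placeholders):

$ aws configure
AWS Access Key ID [None]: <your-access-key>
AWS Secret Access Key [None]: <your-secret-key>
Default region name [None]: ap-south-1
Default output format [None]: json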

Go to https://ap-south-1.console.aws.amazon.com/ecr/repositories and create a repository.

Login and push to this repository URI:

$ aws ecr get-login-password --region ap-south-1 | docker login --username AWS --password-stdin xxxxxxxx.dkr.ecr.ap-south-1.amazonaws.com

If this command does not work for some reason, you can try another way:

$ docker login -u AWS -p $(aws ecr get-login-password --region ap-south-1) xxxxxxxx.dkr.ecr.ap-south-1.amazonaws.com

Tag the local image:

$ docker tag test-api:latest xxxxxxxx.dkr.ecr.ap-south-1.amazonaws.com/test-api-container:latest

Push the image:

$ docker push xxxxxxxx.dkr.ecr.ap-south-1.amazonaws.com/test-api-container:latest

AWS doc url: https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-push-ecr-image.html