Our problem with micro-services using AWS ECS

At our startup, our predecessors chose micro-services for our new website because it was a trending technology.

This decision has many benefits, such as:

  • Scaling a website becomes much easier when using micro-services, as each service can be scaled independently based on its individual needs.
  • The loosely coupled nature of micro-services also allows for easier development and maintenance, as changes to one service do not affect the functionality of other services.
  • Additionally, deployment can be focused on each individual service, making the overall process more efficient.
  • Micro-services also allow for the use of different technologies for each service, providing greater flexibility and the ability to choose the best tools for each task.
  • Finally, testing can be concentrated on one service at a time, allowing for more thorough and effective testing, which can result in higher quality code and a better user experience.

In developing our application with micro-services, we considered the potential problems we might face in the future. However, we also needed to ask whether those problems would have a significant enough impact to outweigh the benefits of using micro-services.

One factor to keep in mind is that our website is currently experiencing low traffic and we are acquiring clients gradually. As such, we need to consider whether the benefits of micro-services outweigh any potential drawbacks for our particular situation.

Regardless, some potential issues with micro-services include increased complexity and overhead in development, as well as potential performance issues when integrating multiple services. Additionally, managing multiple services and ensuring they communicate effectively can also be a challenge.

Despite the benefits of micro-services, we have faced some issues in implementing them. One significant challenge is the increased complexity of deployment and maintenance that comes with having multiple services. This can require more time and resources to manage and can potentially increase the likelihood of errors.

Additionally, the cost of using AWS ECS to host all of the micro-services can be higher than other hosting solutions for a low-traffic website. This is something to consider when weighing the benefits and drawbacks of using micro-services for our specific needs.

Another challenge we have faced is managing dependencies between services, which can be difficult to avoid. When one service goes offline, it can cause issues with other services, leading to a “No Service” issue on the website.

Finally, it can be very difficult to go back to a monolithic application, even after combining 3-4 services, because they may use different software or software versions. This can make it challenging to make changes or updates to the application as a whole.

It is important to carefully consider whether micro-service architecture is the best fit for your business and current situation. If you have a low-traffic website or are just starting your business, it may not be necessary or cost-effective to implement micro-services.

Take the time to evaluate the benefits and drawbacks of using micro-services for your specific needs and budget. Keep in mind that hosting multiple micro-services comes with additional costs, so be prepared to pay a baseline hosting cost if you decide to go this route.

Ultimately, the decision to use micro-services should be based on a thorough assessment of your business needs and available resources, rather than simply following a trend or industry hype.

Set up:

  • Used AWS ECS (EC2 launch type) with services and task definitions defined
  • 11 micro-services, running as 11 containers
  • Cost: Rs. 12k ($160) per month


Alternatives we are considering:

  • Moving to the AWS Fargate launch type, though we are not sure it would resolve these issues
  • Deploying all the services on a single EC2 instance without using ECS

Read Files from Amazon S3 with Expiry

Suppose you need to download a file from Amazon S3 stored at http://s3.amazonaws.com//file.doc. If it is not accessible to the public, you will not be able to get it.

You can get an idea about authenticated reads from the following reference:
(Reference: http://www.bucketexplorer.com/documentation/amazon-s3-access-control-list-details.html)

ACL and its Workings

Amazon S3 allows users to store their objects in buckets. Every bucket and object is associated with an access control policy. An ACL (access control list) is the mechanism that decides who can access what: a set of read, write, and update permissions on an object or a bucket. Based on these ACLs, a user can upload new files or delete existing objects.

Bucket ACLs are completely independent of object ACLs: the ACLs set on a bucket can differ from the ACLs set on any object contained in that bucket.

Types of ACL provided by Amazon S3:

With reference to Bucket:

  • Read: Authorized user can list the file names, their size and last modified date from a bucket.
  • Write: Authorized users can upload new files to your bucket and delete existing files. Note that someone with write permission on a bucket can delete files even if they don’t have read permission on those files.
  • Read ACP: Authorized users can check ACL of a bucket.
  • Write ACP: Authorized user can update ACL of the bucket.

With reference to Object:

  • Read: Authorized user can download the file.
  • Write: Authorized user can replace the file or delete it.
  • Read ACP: Authorized user can list ACL of that file.
  • Write ACP: Authorized user can modify the ACL of the file.

Who can Access and How?

Amazon grants permission to four types of users:

  1. Owner (Account Holder): The person who holds the Amazon S3 account is the owner of the service. By default the owner has full permission: she can create, access and delete objects, and can also view and modify the ACLs of every bucket and its objects.
  2. Amazon S3 Users (by adding an Amazon.com email address or canonical ID)
    If the owner wants to share her bucket with another Amazon S3 user, she needs the invitee’s email address; this only works if the invitee has registered her Amazon S3 account with that email address.
  3. Authenticated Users (sharing globally with all Amazon S3 users)
    Anyone with a valid S3 account is a member of the “Authenticated Users” group. If the owner wants to share her bucket with all Amazon S3 users, she can grant this group read permission to see the objects, and write permission to update existing objects and upload new ones.
  4. Non-Authenticated Users (All Users)
    If the owner wants to make her bucket and objects public to all internet users, she needs to grant the appropriate permissions to the “All Users” group. Any user who knows the name of the bucket will then be able to access the object.

Amazon S3 Request URL without Expiry

To access private files on Amazon S3, you can construct the correct URL by signing it with your access key ID and secret access key.


Expire the Amazon S3 Request URL

If anyone gets hold of this URL, they can download the file. This is where expiring the request URL comes in: create a URL signed with the access key ID and secret access key that expires after a few seconds, say 10 seconds.

Eg: http://s3.amazonaws.com//file.doc?AWSAccessKeyId=EOKJGAKIAIHHMD3HP5OLLME5N4A&Expires=1325481379&Signature=0ipwRz3sss6xnfAbebtigAGNOysdfdf1sDpKCl0%3D 
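A URL of this shape can be built with Ruby’s standard library alone. The sketch below follows S3’s (legacy) signature version 2 query-string authentication; the method name and the bucket my-bucket are hypothetical, and modern S3 deployments use signature version 4 instead:

```ruby
require 'openssl'
require 'base64'
require 'cgi'

# Build a pre-signed (expiring) S3 URL using signature version 2
# query-string authentication. Bucket, key, and credentials are
# hypothetical placeholders.
def signed_s3_url(bucket, key, access_key_id, secret_access_key, expires_in = 10)
  expires = Time.now.to_i + expires_in
  # String to sign: HTTP verb, blank Content-MD5 and Content-Type,
  # the expiry timestamp, and the canonical resource path.
  string_to_sign = "GET\n\n\n#{expires}\n/#{bucket}/#{key}"
  digest    = OpenSSL::HMAC.digest('SHA1', secret_access_key, string_to_sign)
  signature = CGI.escape(Base64.strict_encode64(digest))
  "http://s3.amazonaws.com/#{bucket}/#{key}" \
    "?AWSAccessKeyId=#{access_key_id}&Expires=#{expires}&Signature=#{signature}"
end
```

Once the Expires timestamp passes, S3 rejects the request instead of serving the file.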

Ruby gem aws-s3 and the Class AWS::S3::Base

aws-s3 is a Ruby library for Amazon’s Simple Storage Service (S3) REST API. AWS::S3::Base is the abstract superclass of all classes that make requests against S3.

Establishing a connection with the Base class is the entry point to using the library:

  AWS::S3::Base.establish_connection!(:access_key_id => '...', :secret_access_key => '...')
The :access_key_id and :secret_access_key options are the two required connection options.
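With a connection established, the gem can generate an expiring URL like the one shown earlier via its S3Object.url_for method. A minimal sketch, assuming the gem is installed and using a hypothetical bucket my-bucket:

```ruby
require 'aws/s3'  # gem 'aws-s3'

# Credentials are placeholders; supply your own.
AWS::S3::Base.establish_connection!(
  :access_key_id     => '...',
  :secret_access_key => '...'
)

# Generate a signed URL for a private object that expires in 10 seconds.
# The signing happens locally; no request is made to S3 at this point.
url = AWS::S3::S3Object.url_for('file.doc', 'my-bucket', :expires_in => 10)
puts url
```

The resulting URL carries the AWSAccessKeyId, Expires, and Signature query parameters seen in the example above, so it can be handed to a browser or HTTP client until it expires.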