Rails MemCacheStore and dalli gem

We can use a memcached server for centralized caching in our application; that server then handles all the caching for us. Rails uses the memcache-client gem by default, but here I am using the dalli gem because it is better than memcache-client. One thing to say is that I have not measured this myself; I concluded it from reading the documentation, so just go through the dalli gem's page at https://github.com/mperham/dalli. This is a super caching mechanism for Rails in production environments. The only things we need to do are:

  1. Turn on caching in your environment configuration. If you are running in production mode with production.rb as your environment configuration file, add the following to production.rb:

       config.action_controller.perform_caching = true
  2. Install the dalli gem: add the following to your Gemfile and run bundle install:

       gem 'dalli', '2.0.2'
       $ bundle install
  3. Install the memcached server:

       $ sudo apt-get install memcached
  4. Start the memcached server if it is not already running:

       $ sudo /etc/init.d/memcached start
  5. Configure the cache store in your environments: add the following to your production.rb file:

       config.cache_store = :dalli_store, 'YOUR_PRODUCTION_PUBLIC_DNS:11211',
         { :namespace => "app-YOUR_HOST", :expires_in => 1.day, :compress => true }

    Here ‘YOUR_PRODUCTION_PUBLIC_DNS:11211’ tells dalli where your memcached server is running: the public DNS name and the port number of the memcached server.

Open the Rails console and experiment with something like the following to confirm that the MemCacheStore cache mechanism is working. If your memcached server is on localhost, then do:

ruby-1.9.2-p290 :003 > cache = Dalli::Client.new('localhost:11211')
 => #<Dalli::Client ...>
ruby-1.9.2-p290 :004 > cache.set('testing', 1234)
 => true
ruby-1.9.2-p290 :006 > cache.get('testing')
 => 1234

Now you are ready to go.
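Once the cache store is configured, you normally go through Rails.cache rather than a raw Dalli::Client. What Rails.cache.fetch does against the store is the classic cache-aside pattern: look the key up, and only compute (and store) the value on a miss. A minimal sketch of that pattern in plain Ruby, using a Hash as a stand-in for the memcached server:

```ruby
# A toy cache-aside fetch, with a plain Hash standing in for memcached.
# In a real app the store would be the Dalli-backed Rails.cache.
class ToyCache
  def initialize
    @store = {}
  end

  # Return the cached value for +key+; on a miss, run the block,
  # cache its result and return it.
  def fetch(key)
    return @store[key] if @store.key?(key)
    @store[key] = yield
  end
end

cache = ToyCache.new
calls = 0
cache.fetch('expensive') { calls += 1; 42 }  # miss: block runs
cache.fetch('expensive') { calls += 1; 42 }  # hit: block skipped
calls  # => 1
```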


Read Files From Amazon s3 with Expiry

Suppose you need to download a file from Amazon S3 stored at http://s3.amazonaws.com//file.doc. If it is not accessible to the public, you will not get it.

You can get an idea about Authenticated read by reading the following
(Reference: http://www.bucketexplorer.com/documentation/amazon-s3–access-control-list-details.html)

ACL and its Workings

Amazon S3 allows users to store their objects in buckets. All buckets and objects are associated with access control policies. An ACL is a mechanism that decides who can access what: it is the set of read, write and update permissions on an object as well as on a bucket. On the basis of these ACLs, a user can upload new files or delete existing objects.

Bucket ACLs are completely independent of object ACLs. This means that the ACLs set on a bucket can be different from the ACLs set on any object contained in the bucket.

Types of ACL provided by Amazon S3:

With reference to Bucket:

  • Read: Authorized user can list the file names, their size and last modified date from a bucket.
  • Write: Authorized users can upload new files to your bucket. Someone with write permission on a bucket can also delete files, even files they don’t have read permission on.
  • Read ACP: Authorized users can check ACL of a bucket.
  • Write ACP: Authorized user can update ACL of the bucket.

With reference to Object:

  • Read: Authorized user can download the file.
  • Write: Authorized user can replace the file or delete it.
  • Read ACP: Authorized user can list ACL of that file.
  • Write ACP: Authorized user can modify the ACL of the file.
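If you manage these permissions from Ruby, the aws-s3 gem (introduced later in this post) exposes them. A sketch, assuming a connection has already been established and using made-up object and bucket names:

```ruby
require 'aws/s3'

# Read the current ACL of a file (needs Read ACP on the object).
# 'file.doc' and 'my-bucket' are placeholder names.
policy = AWS::S3::S3Object.acl('file.doc', 'my-bucket')

# Add a grant giving everyone read access, then write the ACL
# back (needs Write ACP on the object).
policy.grants << AWS::S3::ACL::Grant.grant(:public_read)
AWS::S3::S3Object.acl('file.doc', 'my-bucket', policy)
```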

Who can Access and How?

Amazon grants permission to four types of users:

  1. Owner (Account Holder): The person who holds the Amazon S3 account is also known as the owner of the service. By default the owner has full permission: she can create, access and delete objects, and can also view and modify the ACLs of each and every bucket and its object(s).
  2. Amazon S3 Users (by adding an Amazon.com email address or canonical ID)
    If the owner wants to share her bucket with another Amazon S3 user, she should know the invitee’s email address. The email address only works if the invitee has registered her Amazon S3 account with that address.
  3. Authenticated Users (sharing globally with all Amazon S3 users)
    Anyone with a valid S3 account is a member of the “Authenticated Users” group. If the owner wants to share her bucket globally with all Amazon S3 users, she can give read permission so that authenticated users can see the objects, and write permission so that they can update existing objects and upload new ones.
  4. Non-Authenticated Users (All Users)
    If the owner wants to make her bucket and objects public to all internet users, she needs to give the appropriate permissions to ALL USERS. Then any user who knows the name of the bucket will be able to access the object.

Amazon s3 Request Url without expiry

So if you want private files from Amazon S3, you access them by constructing the correct URL using your access key ID and secret access key.


Expire the Amazon s3 Request Url

If anyone gets hold of this URL, they can get the file. So here comes the use of expiring a request URL: create a URL with the access key ID and secret access key, and expire it after some seconds, say 10 seconds.

Eg: http://s3.amazonaws.com//file.doc?AWSAccessKeyId=EOKJGAKIAIHHMD3HP5OLLME5N4A&Expires=1325481379&Signature=0ipwRz3sss6xnfAbebtigAGNOysdfdf1sDpKCl0%3D 
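The Expires and Signature parameters in that URL come from S3's query-string authentication scheme: the request details and expiry timestamp are signed with your secret key using HMAC-SHA1, Base64-encoded, and URL-escaped. A minimal sketch of that construction in plain Ruby (the bucket, file name and credentials below are made-up placeholders):

```ruby
require 'openssl'
require 'base64'
require 'cgi'

# Build a query-string-authenticated S3 URL that expires at a given
# Unix time. All names and credentials here are dummy placeholders.
def signed_s3_url(bucket, key, access_key_id, secret_access_key, expires_at)
  # The string S3 expects to be signed for a plain GET request
  string_to_sign = "GET\n\n\n#{expires_at}\n/#{bucket}/#{key}"
  hmac = OpenSSL::HMAC.digest(OpenSSL::Digest.new('sha1'),
                              secret_access_key, string_to_sign)
  signature = CGI.escape(Base64.strict_encode64(hmac))
  "http://s3.amazonaws.com/#{bucket}/#{key}" \
    "?AWSAccessKeyId=#{access_key_id}&Expires=#{expires_at}&Signature=#{signature}"
end

# Expire the URL 10 seconds from now
url = signed_s3_url('my-bucket', 'file.doc', 'AKIDEXAMPLE', 'SECRETEXAMPLE',
                    Time.now.to_i + 10)
```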

Ruby gem aws-s3 and the Class AWS::S3::Base

aws-s3 is a Ruby library for Amazon’s Simple Storage Service (S3) REST API. AWS::S3::Base is the abstract superclass of all classes that make requests against S3.

Establishing a connection with the Base class is the entry point to using the library:

  AWS::S3::Base.establish_connection!(:access_key_id => '...', :secret_access_key => '...')

The :access_key_id and :secret_access_key are the two required connection options.
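Once the connection is established, the expiring request URL from the previous section can be generated with the gem's S3Object.url_for. A sketch, with placeholder credentials and names:

```ruby
require 'aws/s3'

# Placeholder credentials -- replace with your own
AWS::S3::Base.establish_connection!(
  :access_key_id     => 'YOUR_ACCESS_KEY_ID',
  :secret_access_key => 'YOUR_SECRET_ACCESS_KEY'
)

# Signed URL for a private object that stops working after 10 seconds
url = AWS::S3::S3Object.url_for('file.doc', 'my-bucket', :expires_in => 10)
```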