Scalable EC2 consuming servers for SQS

Queues can be used to receive asynchronous requests, which are then served later. The consumers must be able to process messages at least as fast as they arrive: if the processing rate is lower than the arrival rate, the queue will grow indefinitely. One solution is to provision as many resources as the system might ever need. This works well in static cases, where the number of jobs per time unit is fixed, so capacity can be planned ahead. But what if the incoming request rate varies over time?
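To make the backlog argument concrete, here is a toy simulation (the rates and durations are made-up numbers, not from this tutorial) showing how the queue length evolves under constant arrival and processing rates:

```python
def backlog_after(seconds, arrival_rate, processing_rate, start=0):
    """Queue length after `seconds`, assuming constant rates in jobs/sec."""
    backlog = start
    for _ in range(seconds):
        # The backlog can never go below zero: idle capacity is wasted.
        backlog = max(0, backlog + arrival_rate - processing_rate)
    return backlog

# Processing keeps up (12 > 10 jobs/sec): the backlog stays at zero.
print(backlog_after(60, arrival_rate=10, processing_rate=12))  # 0
# Processing falls behind by 2 jobs/sec: 120 jobs queued after one minute.
print(backlog_after(60, arrival_rate=10, processing_rate=8))   # 120
```

This is exactly why autoscaling helps: instead of permanently paying for the peak processing rate, capacity is added only while the backlog grows.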

In this tutorial, we will create a FIFO queue with AWS SQS and serve the requests with an application deployed on EC2. The number of EC2 instances will scale up when requests pile up, and back down when the queue is free.
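As a preview of what the consuming servers will do, here is a minimal sketch of an SQS polling loop using boto3. The queue URL and the `handle_message` logic are placeholders; the tutorial's own lessons will fill in the real values:

```python
import json

# Hypothetical queue URL; replace with the one SQS returns for your queue.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/jobs.fifo"

def handle_message(body):
    """Process one request body; here we just parse a JSON job payload."""
    job = json.loads(body)
    return job["id"]

def poll_once(sqs, queue_url, handler):
    """Fetch one batch of messages, process each, and delete on success."""
    resp = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=10,  # up to 10 messages per receive call
        WaitTimeSeconds=20,      # long polling cuts down empty receives
    )
    handled = 0
    for msg in resp.get("Messages", []):
        handler(msg["Body"])
        # Delete only after successful processing, so failed jobs reappear
        # once their visibility timeout expires.
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
        handled += 1
    return handled

if __name__ == "__main__":
    import boto3  # requires the boto3 package and AWS credentials
    client = boto3.client("sqs")
    while True:
        poll_once(client, QUEUE_URL, handle_message)
```

Keeping `poll_once` free of any boto3-specific setup makes it easy to unit-test with a stub client before deploying to EC2.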

After this tutorial, you will have some hands-on experience with the following services.

  1. EC2, for creating the consuming servers and the scaling policy.
  2. SQS, for creating the queue.
  3. CloudWatch, for defining the alarms that will trigger the scaling.
  4. IAM, for creating a role that allows the EC2 instances to access SQS.
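For the IAM part, the role attached to the EC2 instances will need a policy along these lines. The region, account ID, and queue name below are placeholders, and the exact action list may vary with what your consumer does:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sqs:ReceiveMessage",
        "sqs:DeleteMessage",
        "sqs:GetQueueAttributes"
      ],
      "Resource": "arn:aws:sqs:us-east-1:123456789012:jobs.fifo"
    }
  ]
}
```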

This tutorial is somewhat long, so I decided to split it into short lessons.

  1. Create the SQS FIFO queue, and the role to access it.
  2. Create CloudWatch alarms.
  3. Create the Python consuming servers, and deploy them to EC2.
  4. Create an EC2 image, and the scaling policy.

Next lesson from here 🙂
