- Lambda function with Python and Boto3 library
- DynamoDB to store key metadata about instances and AMIs
- DynamoDB Streams to capture change events on that table
- CloudWatch Events to trigger the automation on a schedule
- Terraform to deploy the whole infrastructure (install if you don’t have it)
How many times have you lost valuable data without an AMI?
In the past, I struggled to recover the latest backup data from an EC2 instance because I kept forgetting to take AMIs.
It is often recommended to back up your instance data as often as possible in case of any failure.
Given that problem, and the need for a solution, I decided to write this automation to provide a mechanism for safely backing up your instances' state through AMI scheduling.
This automation is composed of two Lambda functions: one that takes the AMIs, and one that progressively cleans up the environment.
The first Lambda, responsible for taking AMIs, is triggered by a CloudWatch Events rule on a schedule of your choosing, say daily at 9 am.
While it executes, a new record is put into a DynamoDB table that has time-to-live (TTL) enabled.
The second Lambda, responsible for cleaning up AMIs, is invoked by DynamoDB Streams whenever an item expires.
This function creates an AMI and then records information about the instance in the DynamoDB table.
Each record carries a time to live (TTL); its expiry is what triggers the second function through DynamoDB Streams.
This is configurable through the “lasting” and “tomorrow” variables, on lines 13 and 24 respectively.
This function is triggered by DynamoDB Streams once the record has expired.
Its main responsibility is to deregister AMIs and delete their snapshots, saving the cost of keeping too many snapshots around.
Notice that on line 24 we filter for events with the name REMOVE: we want to capture only the items that are flushed from the table when their TTL expires.
How to run this?
y -- confirm to deploy the Lambda resources
This deploys: two Lambda functions, an IAM role with its policy, and a DynamoDB table.
CloudWatch Event Triggers:
1. Go to the CloudWatch console in AWS
2. Click on Rules under the Events tab
3. Then, click Create rule and follow the screens below.
Adapt the schedule to your business needs; for the sake of speed, this example triggers every 15 minutes.
4. After this is configured, click on Configure details
5. Set a Name and Description for the event.
You should be good to go 🎉🎉!
Let me know if you have any questions
I’ll leave the reference down below