EBS Volume Encryption

Ed Reinoso
Jan 5, 2021 · 5 min read


Before Reading

Before diving deeper into how this script is structured, we need to understand the fundamental steps in this process:

1) Taking a snapshot

2) Creating an encrypted volume from the snapshot

3) Removing the unencrypted volumes

These will help us look at the big picture as we progress through the article. So even though it might feel complex at some point, you have to remember these fundamental steps!
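To keep the big picture in one place, here is a sketch of those three steps as a single hypothetical boto3 helper. The waiter names are real boto3 waiters; the function itself is an illustration, not the repo’s actual code:

```python
def encrypt_volume(ec2, volume_id, kms_key_id):
    """Illustrative end-to-end flow for one volume.
    In production, ec2 would be boto3.client("ec2")."""
    # 1) Take a snapshot of the unencrypted volume
    snap = ec2.create_snapshot(VolumeId=volume_id,
                               Description="pre-encryption snapshot")
    ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

    # 2) Create an encrypted volume from that snapshot, in the same AZ
    az = ec2.describe_volumes(VolumeIds=[volume_id])["Volumes"][0]["AvailabilityZone"]
    new_vol = ec2.create_volume(SnapshotId=snap["SnapshotId"],
                                AvailabilityZone=az,
                                Encrypted=True,
                                KmsKeyId=kms_key_id)
    ec2.get_waiter("volume_available").wait(VolumeIds=[new_vol["VolumeId"]])

    # 3) Remove the unencrypted original (re-attachment is covered later)
    ec2.delete_volume(VolumeId=volume_id)
    return new_vol["VolumeId"]
```

The real automation splits these steps into separate Lambda functions, as described in the rest of the article.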

All the resources are provisioned using Terraform. If you don’t have it, don’t worry; I’ll leave some references below on how to install it. However, it will be very helpful to have it installed for the purposes of this article.

Goal and Tech Stack

This article is intended for DevOps and Cloud Engineers who want to automate security in their AWS cloud environment.

The goal of this article is to protect the data in our volumes by encrypting it. If the data in our application/system is encrypted, we can offer our users a more reliable and secure experience.

For this automation, we are going to be using the following services and technologies:

  • Lambda with Python, leveraging the Boto3 library
  • DynamoDB to keep a record of the state of the volumes
  • AWS KMS (Key Management Service) to encrypt the data
  • IAM policies and roles to grant the necessary permissions to the functions
  • CloudWatch Events to trigger our functions on a schedule
  • Terraform to provision the whole infrastructure

Logic

1. Provision Resources

Terraform will help us provision the necessary resources to start encrypting volumes. Here is the reference on how to get up and running with Terraform: https://www.terraform.io/downloads.html. I will also leave this in the reference section below.

This is the architecture that we are going to be working with.

Encrypting EBS Volumes with Lambda and Python

In order to run this, we have to go to the specific directory in the git repo (aws-devops/ebs-encrypt) and run:

terraform init
terraform apply -auto-approve

The idea behind leveraging an infrastructure-as-code tool such as Terraform is the flexibility and ease of handling these resources in the cloud. This saves us time and money, since we do not have to provision or delete resources manually.

Now that we have provisioned the infrastructure, it’s time to get into the more technical part: handling the functions with Python and Boto3.

2. Take Snapshot Function

The first function is responsible for taking snapshots of the volumes that need to be encrypted. Boto3 will help us carry out the operations needed for this.

The three main methods that are going to be used are:

  • describe_volumes(): identify which volumes are not encrypted — line 11
  • create_snapshot(): create a new snapshot for volume encryption — line 61
  • put_item(): create a record with the volume and snapshot info — line 101
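A minimal sketch of how those three methods fit together, assuming `ec2` is a boto3 EC2 client and `table` a DynamoDB Table resource (the names and the item fields here are illustrative, not the repo’s exact code):

```python
def unencrypted_attached(volumes):
    """Keep only volumes that are not encrypted and have an attachment."""
    return [v for v in volumes
            if not v.get("Encrypted") and v.get("Attachments")]

def take_snapshots(ec2, table):
    # In production: ec2 = boto3.client("ec2")
    #                table = boto3.resource("dynamodb").Table("ebs-encrypt-state")
    volumes = ec2.describe_volumes()["Volumes"]
    for vol in unencrypted_attached(volumes):
        snap = ec2.create_snapshot(VolumeId=vol["VolumeId"],
                                   Description="ebs-encrypt snapshot")
        # Record the volume/snapshot pair for the next function
        table.put_item(Item={"IVoId": vol["VolumeId"],
                             "SnapshotId": snap["SnapshotId"]})
```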

Code Breakdown (Optional!)

— line 41: this determines whether volume X has an attachment

— line 67: logic to check whether volume X is encrypted.

If the check passes, the volume is added to an array of volumes — line 105

Otherwise, the volume is not taken into consideration

— line 108: we have to tag the volumes that have already gone through the function. In other words, the volumes that already have snapshots created will no longer be needed at a later stage.

This is particularly important to do, since it will help us clean up idle volumes from the environment in the next few steps.

— line 120: the item to be put in the DynamoDB table will have the following attributes:

{
'Name': '',
'SnapshotId': '',
'RootVolume': '',
'VolumeSize': '',
'AZ': '',
'Type': '',
'InstanceId': '',
'Device': '',
'IVoId': '',
'EVoId': '',
'Date': '',
'EncryptionCreated': ''
}

These attributes are manipulated across the two functions, as they contain the data for the volumes, instances, and snapshots. What both functions share in common is the connection to the DynamoDB table.
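For illustration, a record with that shape could be assembled from one `describe_volumes()` entry like so. This is a sketch: the root-device name and the default values are assumptions, and in the real functions the fields are filled in across both steps:

```python
from datetime import datetime, timezone

def build_item(volume, snapshot_id, name=""):
    """Map one describe_volumes() entry onto the record schema above."""
    att = volume["Attachments"][0] if volume.get("Attachments") else {}
    return {
        "Name": name,
        "SnapshotId": snapshot_id,
        "RootVolume": att.get("Device") == "/dev/xvda",  # assumed root device name
        "VolumeSize": volume["Size"],
        "AZ": volume["AvailabilityZone"],
        "Type": volume["VolumeType"],
        "InstanceId": att.get("InstanceId", ""),
        "Device": att.get("Device", ""),
        "IVoId": volume["VolumeId"],   # initial (unencrypted) volume id
        "EVoId": "",                   # encrypted volume id, set by function 2
        "Date": datetime.now(timezone.utc).isoformat(),
        "EncryptionCreated": False,
    }
```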

After the snapshots have been taken and the information recorded on the DynamoDB table, we’ll jump into the third portion.

3. Create Volume Function

The second function is responsible for creating an encrypted volume from the snapshot taken in the previous step.

This function is a little more complex because it automates the attachment of the encrypted volumes to their corresponding EC2 instance.

There are two especially important methods in this function:

  • create_volume(): will create the volume according to a certain KMS key — line 30
  • update_item(): this will update the DynamoDB table with the state of the old and new volume — line 62
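A sketch of those two calls, assuming `record` is one DynamoDB item from the first function. The key and attribute names follow the schema shown earlier; the KMS key alias is a placeholder:

```python
def create_encrypted_volume(ec2, table, record, kms_key_id="alias/ebs-encrypt"):
    """Create an encrypted copy of the volume and record its id."""
    new_vol = ec2.create_volume(SnapshotId=record["SnapshotId"],
                                AvailabilityZone=record["AZ"],
                                VolumeType=record["Type"],
                                Encrypted=True,
                                KmsKeyId=kms_key_id)
    # Record the new volume id and flip the state flag
    table.update_item(
        Key={"IVoId": record["IVoId"]},
        UpdateExpression="SET EVoId = :e, EncryptionCreated = :c",
        ExpressionAttributeValues={":e": new_vol["VolumeId"], ":c": True})
    return new_vol["VolumeId"]
```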

As for the additional part dealing with EC2 instances, these are the most important methods:

  • describe_instances()
  • stop_instances(): it is recommended to stop the instances before detaching volumes — line 84
  • detach_volume(): this method performs the detachment — line 112
  • attach_volume(): this method performs the attachment of the encrypted volume — line 125
  • start_instances(): this method will start the instance back up — line 136
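The swap itself could look like this sketch, with `record` following the schema above. The waiter names are boto3’s built-in waiters:

```python
def swap_volume(ec2, record, new_volume_id):
    """Stop the instance, replace the old volume with the encrypted one,
    and start the instance back up."""
    instance_id = record["InstanceId"]

    # Stop the instance and wait until it is fully stopped
    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

    # Detach the unencrypted volume and wait for it to become available
    ec2.detach_volume(VolumeId=record["IVoId"], InstanceId=instance_id)
    ec2.get_waiter("volume_available").wait(VolumeIds=[record["IVoId"]])

    # Attach the encrypted volume at the same device name, then restart
    ec2.attach_volume(VolumeId=new_volume_id, InstanceId=instance_id,
                      Device=record["Device"])
    ec2.start_instances(InstanceIds=[instance_id])
```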

Important point about Time: (Optional!)

There is an important caveat to mention about attaching and detaching volumes.

Operations such as creating volumes, stopping instances, and detaching volumes usually take a certain amount of time. For example, it may take a few seconds for a volume to go from the ‘in-use’ state to the ‘available’ state.

If such timing considerations are not properly handled by the function, it will run into a logical error, since it cannot perform an operation until the previous step has successfully completed.
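Rather than guessing sleep() durations, boto3 waiters poll until a state transition completes; `Delay` and `MaxAttempts` are the knobs to tune for larger volumes. A small sketch (the values are illustrative):

```python
def wait_until_available(ec2, volume_id, delay=15, max_attempts=40):
    """Block until the volume reaches 'available', polling every `delay`
    seconds for at most `max_attempts` tries (10 minutes with defaults)."""
    ec2.get_waiter("volume_available").wait(
        VolumeIds=[volume_id],
        WaiterConfig={"Delay": delay, "MaxAttempts": max_attempts})
```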

I suggest experimenting with the wait time depending on how big your EBS volume is. The larger it is, the more time it will take to create an encrypted volume, for instance.

So, once the new encrypted volumes have been created and properly attached to the running instances, it’s time to move on to the next and final step of the automation.

4. Remove Unencrypted Volume Function

This final function is short and sweet! But it does have a very important economic impact with respect to billing.

Once we have migrated our data from an unencrypted to an encrypted volume, we no longer need the original volume, since it consumes space and resources.

Idle volumes still incur cost. That’s why it’s important to clean up the environment, so we are not charged for something that is not being used. In case of a rollback, we still have the snapshot that was taken by the first function.

Once again, the major methods which will be used here are:

  • describe_volumes(): it’s important to filter by a tag, since that allows us to only get the volumes that were completed by the first step — line 13
  • delete_volume(): delete only the volumes that are NOT attached to an instance — line 29
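A sketch of the cleanup, assuming the first function tagged the processed volumes (the tag key and value here are illustrative):

```python
def delete_migrated_volumes(ec2, tag_key="ebs-encrypt", tag_value="snapshotted"):
    """Delete tagged volumes that are no longer attached to any instance."""
    resp = ec2.describe_volumes(
        Filters=[{"Name": "tag:" + tag_key, "Values": [tag_value]}])
    deleted = []
    for vol in resp["Volumes"]:
        if not vol.get("Attachments"):  # only volumes NOT attached
            ec2.delete_volume(VolumeId=vol["VolumeId"])
            deleted.append(vol["VolumeId"])
    return deleted
```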

Final Notes

These three functions can work very well when they are invoked in sequence. This is why enabling CloudWatch Events to trigger these functions at a certain frequency would actually be very powerful for your AWS environment.
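In this setup the schedule comes from Terraform, but for illustration the same rule can be expressed with boto3’s CloudWatch Events client (the rule name, ARN, and expression are placeholders):

```python
def schedule_lambda(events, rule_name, lambda_arn, expression="rate(1 day)"):
    """Create a scheduled rule and point it at the Lambda function.
    In production: events = boto3.client("events"); the Lambda also needs
    an invoke permission for the rule, omitted here for brevity."""
    events.put_rule(Name=rule_name, ScheduleExpression=expression)
    events.put_targets(Rule=rule_name,
                       Targets=[{"Id": "1", "Arn": lambda_arn}])
```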

I hope you enjoyed the read, especially if you find it helpful along your way to automating AWS environments!

Don’t hesitate to reach out with questions.


Ed Reinoso

Cloud Engineer with a passion for AWS automation