
The AWS Well-Architected Framework collects best practices for cloud architects to follow, and following them helps ensure that your cloud infrastructure is stable and secure. The framework is built around five pillars: operational excellence, security, reliability, performance efficiency and cost optimization.

While the framework is a great tool for consistently building secure cloud infrastructure, many organizations struggle to translate it into real practices. Cloudrail turns the AWS Well-Architected Framework into actionable security controls that you can embed into your pipeline for policy enforcement. With hundreds of rules “baked in” to comply with these best practices, Cloudrail makes the framework easy to operationalize.

The table below summarizes what Cloudrail covers for the AWS Well-Architected Framework.

| Pillar | Area | Topic | # of rules |
| --- | --- | --- | --- |
| Operational Excellence | Prepare | Design of operations | 4 |
| Security | Identity and Access Management (IAM) | Identity Management | 17 |
| Security | Identity and Access Management (IAM) | Permission Management | 21 |
| Security | Infrastructure Protection | Protecting networks | 35 |
| Security | Infrastructure Protection | Protecting compute | 7 |
| Security | Data Protection | Protecting data at rest | 41 |
| Security | Data Protection | Protecting data in transit | 13 |
| Security | Incident Response | Prepare | 2 |
| Reliability | Foundations | Plan your network topology | 4 |
| Reliability | Failure Management | Back up data | 1 |
| Reliability | Failure Management | Design your workloads to withstand component failures | 1 |
| Cost Optimization | Cost-effective resources | Select the correct resource type, size and number | 2 |

In essence, Cloudrail plays a key role in establishing security controls that provide governance for IAM and protect infrastructure and data. Below, we delve into the various aspects of the security pillar in more detail.

AWS IAM

AWS IAM is essential for securely controlling access to AWS services and resources. With identity being the new cloud perimeter, securing identities and access should be at the center of your strategy for reducing your cloud attack surface. This needs to go beyond checking for password hygiene or the use of multi-factor authentication, and ensure that more advanced practices are followed. For example:

  1. EC2 instances within public and private subnets should not share identical AWS IAM roles

Having the same AWS IAM role for both public and private instances may be dangerous. Someone may expand the permissions for the role in order to use it in a private workload, without realizing a public workload has the same privileges.
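
A minimal Terraform sketch of this anti-pattern (the AMI variable, subnets and resource names are illustrative and assumed to be defined elsewhere, not taken from Cloudrail's own examples):

```hcl
# Both instances attach the same IAM role through one instance profile, so any
# permission added for the private worker is also granted to the
# Internet-facing web server.
resource "aws_iam_role" "app" {
  name = "shared-app-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

resource "aws_iam_instance_profile" "app" {
  name = "shared-app-profile"
  role = aws_iam_role.app.name
}

resource "aws_instance" "public_web" {
  ami                  = var.ami_id
  instance_type        = "t3.micro"
  subnet_id            = aws_subnet.public.id    # subnet routes to an Internet gateway
  iam_instance_profile = aws_iam_instance_profile.app.name
}

resource "aws_instance" "private_worker" {
  ami                  = var.ami_id
  instance_type        = "t3.micro"
  subnet_id            = aws_subnet.private_a.id # no Internet-facing route
  iam_instance_profile = aws_iam_instance_profile.app.name # same role shared with a public workload
}
```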

  2. Disallow AWS IAM permissions that can lead to privilege escalation

Privilege escalation is one of the more dangerous techniques attackers use during a cloud breach. Read this blog to learn more about this very problematic issue and how Cloudrail can prevent it from happening.
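
As a hedged illustration (the policy name is made up, not taken from Cloudrail's rule set), a policy like the following grants a well-known escalation path: being allowed to create and activate a new version of any customer-managed policy lets the holder grant themselves arbitrary permissions.

```hcl
resource "aws_iam_policy" "risky_policy_editor" {
  name = "policy-editor" # illustrative name
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["iam:CreatePolicyVersion", "iam:SetDefaultPolicyVersion"]
      Resource = "*" # unrestricted scope is what makes this escalatable
    }]
  })
}
```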

  3. Ensure AWS IAM entity policies are managed solely in infrastructure-as-code

As an organization, you may have decided to standardize on Terraform to manage your cloud infrastructure. But what if someone logs into the AWS console and grants additional privileges? The ability to identify IAM configuration drift is paramount to ensuring continuous security in your environment. Read this blog for more information.
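
A small sketch of what “managed solely in code” can look like in practice (the role and policy names are illustrative): the role's inline policy lives only in Terraform, so a console edit that widens it will surface as drift on the next plan, and is the kind of deviation this rule is meant to catch.

```hcl
resource "aws_iam_role_policy" "app_s3_read" {
  name = "app-s3-read"
  role = aws_iam_role.app.id # role from the earlier sketch
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["s3:GetObject", "s3:ListBucket"]
      Resource = "*"
    }]
  })
}
```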

Cloud Infrastructure Protection

Infrastructure protection ensures that the cloud infrastructure and services within your workload are protected against unintended and unauthorized access. Cloudrail ensures your networks and compute are protected. Here are some sample security controls to protect your resources: 

  1. Exposing resources to the Internet is generally unwise

It is good practice to block known protocols (e.g. SSH, RDP) from reaching resources such as S3, Oracle DB, Postgres, MySQL, MongoDB, Elasticsearch, Kibana and Redshift over the Internet. Simply checking for “publicly_accessible = true” is not sufficient, as this can generate a lot of noise: what if the resource is in a subnet that has no route to the Internet, and is therefore not actually publicly accessible? You can learn more about how Cloudrail is able to effectively eliminate these false positives here.
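
A hedged Terraform sketch of the false-positive case (engine, names and variables are illustrative): the flag is set, but the subnets in the group have no route to an Internet gateway, so the instance is not actually reachable from the Internet.

```hcl
resource "aws_db_subnet_group" "internal" {
  name       = "internal-db-subnets"
  subnet_ids = [aws_subnet.private_a.id, aws_subnet.private_b.id] # no IGW route
}

resource "aws_db_instance" "reporting" {
  identifier           = "reporting-db"
  engine               = "postgres"
  instance_class       = "db.t3.micro"
  allocated_storage    = 20
  username             = "app"
  password             = var.db_password # illustrative only
  db_subnet_group_name = aws_db_subnet_group.internal.name
  publicly_accessible  = true            # a naive check would flag this line alone
  skip_final_snapshot  = true
}
```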

  2. Catching resources that are indirectly exposed to the Internet

How do you ensure, for example, that an RDS database is not accessible indirectly via a publicly accessible resource? Many organizations want to protect their databases with multiple layers. For instance, they want to avoid a situation where a publicly accessible EC2 instance can directly access a database, so that the database is never just “two hops” away from the Internet. With its ability to understand the relationships among cloud resources, Cloudrail can detect this tricky scenario.
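
A hedged sketch of such an indirect path (ports, CIDRs and names are illustrative): the database's security group trusts the security group of an Internet-facing instance, so the database is reachable in two hops.

```hcl
resource "aws_security_group" "web" {
  name   = "public-web"
  vpc_id = aws_vpc.main.id

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # Internet-facing tier
  }
}

resource "aws_security_group" "db" {
  name   = "db"
  vpc_id = aws_vpc.main.id

  ingress {
    from_port       = 5432
    to_port         = 5432
    protocol        = "tcp"
    security_groups = [aws_security_group.web.id] # trusts the public tier directly
  }
}
```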

  3. Using default security groups is generally unwise

By locking default security groups down (removing all of their rules), you ensure that if anyone uses them by accident, they will notice the mistake before any security issue occurs.
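
One hedged way to do this in Terraform: adopting the default security group with no rule blocks strips all of its ingress and egress rules, so anything attached to it by accident simply cannot communicate.

```hcl
resource "aws_default_security_group" "locked_down" {
  vpc_id = aws_vpc.main.id # illustrative VPC reference
  # No ingress or egress blocks: Terraform removes every rule from the
  # default security group, leaving it safely unusable.
}
```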

  4. Avoid using the default VPC

Many AWS resources can be configured to reside in a specific VPC. If those parameters are left at their default values, AWS may place the resources in the default VPC. Using the default VPC is generally bad practice, and Cloudrail can determine whether resources are placed in it.
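
A minimal sketch of explicit placement (CIDRs, availability zone and names are illustrative): resources reference a purpose-built VPC and subnet so they never fall back to the default VPC.

```hcl
resource "aws_vpc" "main" {
  cidr_block = "10.20.0.0/16"
}

resource "aws_subnet" "private_a" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.20.1.0/24"
  availability_zone = "us-east-1a"
}

resource "aws_instance" "worker" {
  ami           = var.ami_id              # illustrative
  instance_type = "t3.micro"
  subnet_id     = aws_subnet.private_a.id # explicit placement, not the default VPC
}
```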

  5. Avoid sending data destined for AWS services through the Internet

The best practice is to enforce the use of VPC endpoints, which removes the need to send data destined for AWS services over the Internet. These services include DynamoDB, S3, SQS and others that may hold sensitive data.
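
A hedged sketch of a gateway endpoint for S3 (the region and route table are illustrative and assumed to exist elsewhere), which keeps S3 traffic on the AWS network:

```hcl
resource "aws_vpc_endpoint" "s3" {
  vpc_id            = aws_vpc.main.id
  service_name      = "com.amazonaws.us-east-1.s3" # adjust to your region
  vpc_endpoint_type = "Gateway"
  route_table_ids   = [aws_route_table.private.id]
}
```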

  6. Protect sensitive resources

For example, the EKS cluster’s management API is a sensitive endpoint to expose publicly. 
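
A hedged way to keep that endpoint private in Terraform (the cluster name, role and subnets are illustrative and assumed to exist elsewhere):

```hcl
resource "aws_eks_cluster" "main" {
  name     = "app-cluster"
  role_arn = aws_iam_role.eks_cluster.arn

  vpc_config {
    subnet_ids              = [aws_subnet.private_a.id, aws_subnet.private_b.id]
    endpoint_public_access  = false # do not expose the management API publicly
    endpoint_private_access = true  # reachable only from inside the VPC
  }
}
```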

Data Protection

Protecting data at rest and in transit are key approaches to data protection. Encrypting your database in AWS is easy if you haven’t provisioned it yet, so you should always enable encryption. However, if the database already exists and you want to turn on encryption at rest, you need to destroy it and rebuild it from a backup. This is a very costly change, so missing encryption at rest is something you want to catch before deployment; afterwards it may be too late.
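
A minimal sketch of setting this at creation time (engine, names and variables are illustrative), so the at-rest check passes before anything is deployed:

```hcl
resource "aws_db_instance" "orders" {
  identifier          = "orders-db"
  engine              = "mysql"
  instance_class      = "db.t3.micro"
  allocated_storage   = 20
  username            = "app"
  password            = var.db_password # illustrative only
  storage_encrypted   = true            # must be declared before the first deploy
  skip_final_snapshot = true
}
```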

  1. Protecting data at rest

Cloudrail ensures a variety of resources being created are set to be encrypted at rest; an S3 example follows the list below.

  • Analytics – Kinesis streams, Kinesis Firehose delivery stream, Elasticsearch domains, Athena Workgroup query results, Redshift clusters
  • Application Integration – SNS topics, SQS queues
  • Database – DynamoDB DAX clusters, Elasticache replication groups, DocDB clusters, Neptune clusters, RDS instances and clusters
  • Developer tools – X-Ray encryption config
  • End User Computing – Workspaces for root and user volumes
  • Machine Learning – Sagemaker endpoint configurations
  • Management & Governance – CloudTrail trails
  • Storage – EFS filesystems, S3 buckets, S3 bucket objects and all data stored in the S3 bucket
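
For the Storage bullet above, a hedged sketch (the bucket name is illustrative, assuming AWS provider 4.x) of enabling default server-side encryption so every object is encrypted at rest:

```hcl
resource "aws_s3_bucket" "logs" {
  bucket = "example-app-logs" # illustrative name
}

resource "aws_s3_bucket_server_side_encryption_configuration" "logs" {
  bucket = aws_s3_bucket.logs.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "aws:kms" # or "AES256" for SSE-S3
    }
  }
}
```
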
  2. Protecting data in transit
  • Ensure Elasticsearch domains being created are set to encrypt node-to-node traffic and enforce HTTPS
  • Ensure Elasticache replication groups being created are set to be encrypted in transit
  • Ensure HTTPS is used for the Application Load Balancer and its target groups, between the load balancer and the servers
  • Enforce the use of HTTPS in the S3 bucket policy (see the sketch after this list)
  • Ensure CloudFront distributions being created are set to encrypt in transit, perform field-level encryption and use a modern TLS protocol version
  • Ensure DocDB clusters are set to encrypt the connection with applications and that DocDB TLS is not disabled
  • Ensure API Gateway uses modern TLS
  • Ensure ECS task definitions being created are set to encrypt traffic to EFS volumes in transit
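
For the S3 bullet above, a hedged sketch of a bucket policy that denies any request not made over TLS (the bucket is the illustrative one from the earlier at-rest example):

```hcl
resource "aws_s3_bucket_policy" "logs_tls_only" {
  bucket = aws_s3_bucket.logs.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "DenyInsecureTransport"
      Effect    = "Deny"
      Principal = "*"
      Action    = "s3:*"
      Resource = [
        aws_s3_bucket.logs.arn,
        "${aws_s3_bucket.logs.arn}/*"
      ]
      Condition = { Bool = { "aws:SecureTransport" = "false" } }
    }]
  })
}
```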

Make the AWS Well-Architected Framework real in your environment

With Cloudrail, you can regularly evaluate your infrastructure-as-code against the AWS Well-Architected Framework for continuous compliance. Also, remember that the best security comes from involving security early in the development process: it is better to catch security issues before they reach your production environment, because exposures are harder to fix or patch after the system is deployed. Between the AWS Well-Architected Framework and the “shift-left” security approach, you can start to move toward a preventive cloud security strategy for your infrastructure.