AWS

Overview
AWS logs provide unique and crucial visibility into the activities and resources in an organization’s AWS environment.
Because cloud environments differ significantly from traditional on-prem environments, many classic security products and auditing and logging mechanisms are not available in the cloud in their familiar form. This makes the various logging mechanisms provided by AWS all the more important for defending an organization's AWS environment.
Supported Data Types
AWS CloudTrail: logs (under the right configuration) every API call made in your environment, whether by a user or by the system, via the AWS web console or programmatically. This data source is required for all detections in the AWS control plane.
AWS Config: records and snapshots (under the right configuration) the configuration of every resource in your environment. It is required for adding context to automatic investigations of threat signals detected in the control plane or data plane.
AWS VPC Flow Logs: the cloud equivalent of firewall logs; they enable detections at the network level of the virtual network in the AWS environment.
AWS WAF Logs: logs of web requests inspected by the web ACLs configured in AWS WAF.
Prerequisites
Logs and security-related data generated by AWS can be integrated into Hunters using an S3 source. To allow Hunters to access your S3 bucket, please follow this tutorial.
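For orientation only, cross-account S3 access of this kind usually boils down to an IAM role in your account whose trust policy lets Hunters assume it, plus read permissions (for example s3:GetObject and s3:ListBucket) on the log bucket. The snippet below is a generic sketch with placeholder values; the Hunters account ID, external ID, role name, and exact permissions must be taken from the tutorial above.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowHuntersAssumeRole",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::<HUNTERS_ACCOUNT_ID>:root" },
      "Action": "sts:AssumeRole",
      "Condition": { "StringEquals": { "sts:ExternalId": "<EXTERNAL_ID_FROM_TUTORIAL>" } }
    }
  ]
}

The Role ARN of this role is the value you will paste into the Add Data Flow wizard below.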
Creating a Data Flow
Log in to the Hunters Portal, go to the "Data Flows" section in the left bar, and click the "Add Data Flows" button.
In the Product box, select AWS
In the Source box, select AWS S3
Paste the Role ARN from the prerequisites section in the Hunters Add Data Flow wizard.
Choose the specific Data Type you wish to add.
For each Data Type, fill in the appropriate File Prefix, File Format and S3 Bucket Name according to the table below. After inputting the appropriate values, test the connection and click Submit.
Data Type | File Prefix | File Format |
---|---|---|
CloudTrail | | AWS Format |
Config Snapshot | | AWS Format |
VPC Flow Logs | | CSV with Header |
Guard Duty | | NDJSON |
ELB Access Logs | | CSV with Header |
WAF | | NDJSON |
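If you are unsure which File Prefix to enter for a given data type, listing the bucket can help. As a rough sketch (the bucket name and account ID are placeholders, and the exact key layout depends on how each AWS service was configured to deliver its logs), CloudTrail and Config typically deliver under an AWSLogs/<ACCOUNT_ID>/ prefix:

aws s3 ls s3://<YOUR_BUCKET_NAME>/AWSLogs/<ACCOUNT_ID>/

Typical sub-prefixes include CloudTrail/ and Config/; the value you enter in the wizard should be the prefix under which the relevant files are stored in your bucket.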
Configuring AWS Config within AWS
Config configuration guidelines and recommendations
AWS Config has several features; the only one relevant for Hunters is the logging of resource configurations.
In order to configure AWS Config in the optimal way for Hunters, two steps are required:
Enabling AWS Config
Enabling periodic configuration snapshot delivery
This needs to be done per region, for every AWS account, for maximal coverage.
Enabling AWS Config
This step enables AWS Config for the region.
Browse to the AWS Config service.
Press "Get started".
On the next window:
Under "General settings"
Leave the default "Record all resources supported in this region" selected (this is important for data comprehensiveness).
Select the "Include global resources (e.g., AWS IAM resources)" checkbox.
For AWS Config role, pick your preferred role permissions (by default, AWS will generate a role for you)
Under "Delivery method"
For Amazon S3 bucket, pick the bucket you wish to send Config logs to (either an existing bucket or a new one)
If you pick an existing bucket, you must make sure the bucket's policy allows AWS Config to write logs to it (a sketch of such a policy appears after these steps).
On the next window (Rules), press Next without selecting anything.
Press Confirm.
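If you point Config at an existing bucket, the bucket policy needs to grant the AWS Config service principal permission to check the bucket ACL and to write objects under the Config prefix. The sketch below follows the pattern AWS documents for Config delivery; the bucket name and account ID are placeholders, and you should merge it with any statements the bucket's policy already has:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AWSConfigBucketPermissionsCheck",
      "Effect": "Allow",
      "Principal": { "Service": "config.amazonaws.com" },
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::<CONFIG_BUCKET_NAME>",
      "Condition": { "StringEquals": { "AWS:SourceAccount": "<ACCOUNT_ID>" } }
    },
    {
      "Sid": "AWSConfigBucketDelivery",
      "Effect": "Allow",
      "Principal": { "Service": "config.amazonaws.com" },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::<CONFIG_BUCKET_NAME>/AWSLogs/<ACCOUNT_ID>/Config/*",
      "Condition": {
        "StringEquals": {
          "s3:x-amz-acl": "bucket-owner-full-control",
          "AWS:SourceAccount": "<ACCOUNT_ID>"
        }
      }
    }
  ]
}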
After ~1 minute, Config should be enabled for the region.
After ~20-30 minutes, ConfigHistory files should start being logged to the S3 destination you picked. However, this is not enough, as configuration snapshot delivery must also be configured (see below).
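To confirm that the files are arriving, you can list the bucket from the CLI. The exact key layout may vary with your setup, but Config typically writes under an AWSLogs/<ACCOUNT_ID>/Config/<REGION>/ prefix (all placeholders below are yours to fill in):

aws s3 ls s3://<CONFIG_BUCKET_NAME>/AWSLogs/<ACCOUNT_ID>/Config/<REGION>/ --recursive | head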
You may later enter the Settings tab in the Config service and modify the data retention period for which the Config files are stored.
The default retention period is 7 years, but this is less critical for Hunters, as we ingest data in close to real-time.
This can be modified to whichever retention period you wish (but must be at least two days).
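If you prefer to set this from the CLI rather than the Settings tab, AWS Config exposes a retention configuration API. A minimal sketch follows; the value below (2557 days) is simply the 7-year default, and AWS enforces its own minimum and maximum bounds:

aws --region <REGION> configservice put-retention-configuration --retention-period-in-days 2557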
Enabling periodic configuration snapshot delivery
This step is required in order for AWS to periodically write configuration snapshots of all resources to S3. This data is essential for adding context to automatic investigations of threat signals detected in the control plane or data plane.
Unfortunately, this cannot be configured through the AWS web console, so you need to configure it manually using the AWS CLI.
The following command shows the existing delivery channel configuration:
aws --region <REGION> configservice describe-delivery-channels
You should see something like:
{ "DeliveryChannels": [ { "name": "default", "s3BucketName": "<CONFIG_BUCKET_NAME>" } ] }
You then need to call put-delivery-channel with all the parameters that are already configured (as shown above), plus an additional parameter specifying the frequency with which AWS Config should take the resource configuration snapshots. An example command looks like:
aws --region <REGION> configservice put-delivery-channel --delivery-channel name="default",s3BucketName="<CONFIG_BUCKET_NAME>",configSnapshotDeliveryProperties={deliveryFrequency="TwentyFour_Hours"}
If your existing delivery channel configuration also contained any of the variables s3KeyPrefix, s3KmsKeyArn or snsTopicARN, you must also pass them in the put-delivery-channel command, otherwise they will be disabled.
The possible values for deliveryFrequency are:
One_Hour
Three_Hours
Six_Hours
Twelve_Hours
TwentyFour_Hours
It is up to you which frequency to pick. This represents a tradeoff between AWS costs and accuracy (and "freshness") of the resource configurations that will be fetched by Hunters’ auto-investigation for resources that appear in AWS-related leads.
Setting it to One_Hour will incur the highest AWS costs (as the pricing is per configuration recorded), but will allow the auto-investigation to fetch the most recent configuration seen for a resource. Setting it to TwentyFour_Hours, on the other hand, will incur the lowest AWS costs, but will cause the auto-investigation to fetch a configuration snapshot from up to a day back, which might be outdated.
To prevent undesired or unplanned costs, we recommend starting with TwentyFour_Hours and optionally increasing the frequency later on.
After the command successfully runs, ConfigSnapshot files will start being written to S3 periodically.
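To verify that snapshot delivery is working, you can check the delivery channel status. This is an optional sanity check; once the first snapshot has been delivered, the configSnapshotDeliveryInfo section of the output should show a recent successful delivery:

aws --region <REGION> configservice describe-delivery-channel-status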
Notice ⚠️
The above process only enables Config (and sets up ConfigSnapshot file delivery) for a specific region! This needs to be repeated across all regions for maximal coverage.
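If you manage many regions, a small shell loop can save some repetition when configuring snapshot delivery. The sketch below assumes the delivery channel is named "default" and the same bucket is used in every region, and that Config itself has already been enabled in each region via the console steps above; adjust it to your environment before running:

for region in $(aws ec2 describe-regions --query "Regions[].RegionName" --output text); do
  aws --region "$region" configservice put-delivery-channel \
    --delivery-channel name="default",s3BucketName="<CONFIG_BUCKET_NAME>",configSnapshotDeliveryProperties={deliveryFrequency="TwentyFour_Hours"}
done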
Log Samples
EC2 Log Example
{"Records": [{ "eventVersion": "1.0", "userIdentity": { "type": "IAMUser", "principalId": "EX_PRINCIPAL_ID", "arn": "arn:aws:iam::123456789012:user/Alice", "accessKeyId": "EXAMPLE_KEY_ID", "accountId": "123456789012", "userName": "Alice" }, "eventTime": "2014-03-06T21:22:54Z", "eventSource": "ec2.amazonaws.com", "eventName": "StartInstances", "awsRegion": "us-east-2", "sourceIPAddress": "205.251.233.176", "userAgent": "ec2-api-tools 1.6.12.2", "requestParameters": {"instancesSet": {"items": [{"instanceId": "i-ebeaf9e2"}]}}, "responseElements": {"instancesSet": {"items": [{ "instanceId": "i-ebeaf9e2", "currentState": { "code": 0, "name": "pending" }, "previousState": { "code": 80, "name": "stopped" } }]}} }]}
IAM Log Example
{"Records": [{ "eventVersion": "1.0", "userIdentity": { "type": "IAMUser", "principalId": "EX_PRINCIPAL_ID", "arn": "arn:aws:iam::123456789012:user/Alice", "accountId": "123456789012", "accessKeyId": "EXAMPLE_KEY_ID", "userName": "Alice" }, "eventTime": "2014-03-24T21:11:59Z", "eventSource": "iam.amazonaws.com", "eventName": "CreateUser", "awsRegion": "us-east-2", "sourceIPAddress": "127.0.0.1", "userAgent": "aws-cli/1.3.2 Python/2.7.5 Windows/7", "requestParameters": {"userName": "Bob"}, "responseElements": {"user": { "createDate": "Mar 24, 2014 9:11:59 PM", "userName": "Bob", "arn": "arn:aws:iam::123456789012:user/Bob", "path": "/", "userId": "EXAMPLEUSERID" }} }]}