Connect this data source on your own, using the Hunters platform.
Overview
Table name: cloudwatch_logs
Amazon CloudWatch is used to monitor, store, and access log files from Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS CloudTrail, Route 53, and other sources. CloudWatch enables you to centralize the logs from all of your systems, applications, and AWS services in a single, highly scalable service. You can then easily view them, search them for specific error codes or patterns, filter them based on specific fields, or archive them securely for future analysis. CloudWatch Logs enables you to see all of your logs, regardless of their source, as a single, consistent flow of events ordered by time.
Integrating CloudWatch logs with Hunters allows ingesting the data, as well as leveraging it for custom use cases, such as custom detections.
Send data to Hunters
To connect AWS CloudWatch logs:
CloudWatch lets customers manually export the current logs (from a specific chosen time range) to an S3 bucket.
To automate this, you can schedule an EventBridge rule plus a Lambda function to export the logs every hour, for example (this automation must be developed on the customer side).
How to do it?
Set an EventBridge rule to run every hour; the Lambda computes the from/to timestamps and calls CreateExportTask for the CloudWatch Logs log group, targeting your S3 bucket.
Keep in mind:
Export tasks are for historical time ranges, not streaming.
They have constraints (minimum time window, delays, throttling).
In your Lambda, track the last exported timestamp and handle overlaps, failures, etc.
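The hourly export step above can be sketched as a Lambda handler. This is a minimal sketch, not a complete implementation: the log group name, bucket name, and prefix are placeholder assumptions, and the window computation is simplified (it does not persist the last exported timestamp or handle overlaps).

```python
import os
import time


def hourly_window(now_ms=None):
    """Compute the [from, to) timestamps in milliseconds for the previous full hour."""
    if now_ms is None:
        now_ms = int(time.time() * 1000)
    hour_ms = 3600 * 1000
    to_ts = (now_ms // hour_ms) * hour_ms  # top of the current hour
    from_ts = to_ts - hour_ms              # one hour earlier
    return from_ts, to_ts


def lambda_handler(event, context):
    # boto3 ships with the Lambda runtime; it is imported here so the
    # window logic above stays testable without AWS dependencies.
    import boto3
    logs = boto3.client("logs")
    from_ts, to_ts = hourly_window()
    # Placeholder names below -- replace with your log group and bucket.
    return logs.create_export_task(
        taskName=f"hourly-export-{from_ts}",
        logGroupName=os.environ.get("LOG_GROUP", "/aws/ecs/my-service"),
        fromTime=from_ts,
        to=to_ts,
        destination=os.environ.get("DEST_BUCKET", "my-export-bucket"),
        destinationPrefix="cloudwatch-logs",
    )
```

Note that CloudWatch Logs allows only one active export task per account per region at a time, so the handler should be prepared to retry when CreateExportTask is rejected.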
Another option is to stream CloudWatch Logs to S3 via Kinesis Data Firehose.
This gives you continuous delivery of logs to S3. Firehose buffers incoming records and writes objects every X MB or Y seconds, and you can tune these buffering hints to roughly hourly delivery.
How to do it?
CloudWatch Logs log group → (subscription filter) → Kinesis Data Firehose delivery stream → (optional transform via Lambda) → S3 bucket (compressed/partitioned logs)
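The subscription-filter step of the pipeline above can be wired with boto3 as sketched below. The delivery stream ARN and IAM role ARN are placeholder assumptions; both resources must exist beforehand, and the role must allow CloudWatch Logs to write to the Firehose stream.

```python
def subscription_filter_params(log_group, firehose_arn, role_arn):
    """Build the put_subscription_filter arguments for a Firehose destination.

    An empty filterPattern forwards every log event in the group.
    """
    return {
        "logGroupName": log_group,
        "filterName": "to-firehose",
        "filterPattern": "",  # empty pattern matches all events
        "destinationArn": firehose_arn,
        "roleArn": role_arn,  # role must grant firehose:PutRecord*
    }


def attach_filter(log_group, firehose_arn, role_arn):
    # boto3 imported lazily so the parameter builder stays testable offline.
    import boto3
    logs = boto3.client("logs")
    logs.put_subscription_filter(
        **subscription_filter_params(log_group, firehose_arn, role_arn)
    )
```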
Ship the CloudWatch logs to S3 (destination bucket).
Once the export completes and the logs are available in S3, follow the steps in this section.
Expected format
Logs are expected in JSON format, one event per line. For example:
{"timestamp":1696851819528,"message":{"level":30,"time":1696851819528,"pid":25,"hostname":"1234"},"logStream":"graphql-gateway/graphql-gateway/123321123321123321","logGroup":"/aws/ecs/consumer-graphql-gateway"}
{"timestamp":1696851844444,"message":"Koko Shoko","logStream":"graphql-gateway/graphql-gateway/123321123321123321","logGroup":"/aws/ecs/consumer-graphql-gateway"}
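As the two samples show, the message field may be either a nested JSON object (structured log) or a plain string. A minimal parsing sketch that normalizes this, using the sample events above:

```python
import json

SAMPLES = [
    '{"timestamp":1696851819528,"message":{"level":30,"time":1696851819528,'
    '"pid":25,"hostname":"1234"},'
    '"logStream":"graphql-gateway/graphql-gateway/123321123321123321",'
    '"logGroup":"/aws/ecs/consumer-graphql-gateway"}',
    '{"timestamp":1696851844444,"message":"Koko Shoko",'
    '"logStream":"graphql-gateway/graphql-gateway/123321123321123321",'
    '"logGroup":"/aws/ecs/consumer-graphql-gateway"}',
]


def parse_events(lines):
    """Parse newline-delimited CloudWatch log events, adding a message_text
    field so downstream code always sees a string."""
    events = []
    for line in lines:
        event = json.loads(line)
        msg = event["message"]
        # message may be a JSON object (structured log) or a raw string
        event["message_text"] = msg if isinstance(msg, str) else json.dumps(msg)
        events.append(event)
    return events
```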