Connect this data source on your own, using the Hunters platform.
TL;DR
| Supported data types | 3rd party detection | Hunters detection | IOC search | Search | Table name | Log format | Collection method |
|---|---|---|---|---|---|---|---|
| Teleport Audit Events Logs | ✅ | ✅ | | | teleport_audit_events | NDJSON | S3 |
Overview
Teleport is an open-source tool for providing zero-trust access to servers and cloud applications using SSH, Kubernetes, and HTTPS. It can eliminate the need for VPNs by providing a single gateway to access computing infrastructure via SSH, Kubernetes clusters, and cloud applications via a built-in proxy.
Integrating your Teleport logs into the Hunters ecosystem lets you store the data in a parsed format and leverage it in various security use cases and investigations.
Supported data types
Teleport Audit Events Logs
Table name: teleport_audit_events
Teleport audit events logs record detailed information about access requests, activities performed during sessions, and any changes made to the Teleport Cloud infrastructure.
Learn more here.
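For illustration, a single audit event in NDJSON form might look like the following (a hypothetical, abbreviated `session.start` record; the exact field set varies by event type):

```json
{"ei":0,"event":"session.start","uid":"f81a2fda-4c13-4ce5-9d8a-3b1f0a6c2e91","code":"T2000I","time":"2023-01-11T06:53:39Z","cluster_name":"mytenant.teleport.sh","user":"alice@example.com","login":"root","server_hostname":"web-01","sid":"3c9b1f27-8e44-4d6a-b2f0-5a7c9e1d4f88"}
```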
Send data to Hunters
To enable Hunters' collection and ingestion of Teleport logs for your account, you need to ship the logs from Teleport Cloud through Fluentd to an S3 bucket, and then set up the connection via the Hunters portal.
Step 1: Route logs from Teleport to an S3 bucket
To route logs from Teleport to an S3 bucket:
We will use the `tctl` admin tool and the `tsh` client tool, version 11.1.3 or greater, to connect to your Teleport account. To download these tools, visit the download page.

Connect to your Teleport account:

```
$ tsh login --proxy=teleport.example.com --user=email@example.com
$ tctl status
Cluster  test.teleport.sh
Version  11.1.3
sha256:c450f9322kkkkjkj390755908d41e0b644951859cd8ba5f7014c2c34ccd61cf
```
Create a folder called event-handler to hold the configuration and plugin state:

```
$ mkdir -p event-handler
$ cd event-handler
```
Install the event handler plugin (Linux):

```
$ curl -L -O https://get.gravitational.com/teleport-event-handler-v11.1.4-linux-amd64-bin.tar.gz
$ tar -zxvf teleport-event-handler-v11.1.4-linux-amd64-bin.tar.gz
```
Run the configure command to generate a sample configuration. Replace mytenant.teleport.sh with the DNS name of your Teleport Cloud tenant:
```
$ ./teleport-event-handler configure . mytenant.teleport.sh
```
The output should look like this:

```
Teleport event handler 11.1.4

[1] Generated mTLS Fluentd certificates ca.crt, ca.key, server.crt, server.key, client.crt, client.key
[2] Generated sample teleport-event-handler role and user file teleport-event-handler-role.yaml
[3] Generated sample fluentd configuration file fluent.conf
[4] Generated plugin configuration file teleport-event-handler.toml

Follow-along with our getting started guide:

https://goteleport.com/docs/setup/guides/fluentd
```
The plugin generates several setup files:

| File Name | Purpose |
|---|---|
| ca.crt and ca.key | Self-signed CA certificate and private key for Fluentd |
| server.crt and server.key | Fluentd server certificate and key |
| client.crt and client.key | Fluentd client certificate and key, all signed by the generated CA |
| teleport-event-handler-role.yaml | User and role resource definitions for Teleport's event handler |
| fluent.conf | Fluentd plugin configuration |
The configure command generates a file called `teleport-event-handler-role.yaml` that defines a `teleport-event-handler` role and a user with read-only access to the event API:

```yaml
kind: user
metadata:
  name: teleport-event-handler
spec:
  roles: ['teleport-event-handler']
version: v2
---
kind: role
metadata:
  name: teleport-event-handler
spec:
  allow:
    rules:
      - resources: ['event']
        verbs: ['list', 'read']
version: v5
```
Use `tctl` to create the role and the user:

```
$ tctl create -f teleport-event-handler-role.yaml
role 'teleport-event-handler' has been created
user "teleport-event-handler" has been updated
```
Create a role that enables your user to impersonate the Fluentd user:
First, paste the following YAML document into a file called `teleport-event-handler-impersonator.yaml`:

```yaml
kind: role
version: v5
metadata:
  name: teleport-event-handler-impersonator
spec:
  # SSH options used for user sessions
  options:
    # max_session_ttl defines the TTL (time to live) of SSH certificates
    # issued to the users with this role.
    max_session_ttl: 10h

  # allow section declares a list of resource/verb combinations that are
  # allowed for the users of this role. By default nothing is allowed.
  allow:
    impersonate:
      users: ["teleport-event-handler"]
      roles: ["teleport-event-handler"]
```
Next, create the role:
```
$ tctl create -f teleport-event-handler-impersonator.yaml
role 'teleport-event-handler-impersonator' has been created
```
💡Why do we need this?
For the Fluentd plugin to forward events from your Teleport cluster, it needs a signed identity file from the cluster’s certificate authority. The Fluentd user cannot request this itself and requires another user to impersonate this account in order to request a certificate.
Assign the `teleport-event-handler-impersonator` role to your Teleport user by running the following commands, depending on whether you authenticate as a local Teleport user or via the `github`, `saml`, or `oidc` authentication connectors.

Retrieve your local user's configuration resources:

```
$ tctl get users/$(tsh status -f json | jq -r '.active.username') > out.yaml
```
Edit `out.yaml`, adding `teleport-event-handler-impersonator` to the list of existing roles:

```diff
  roles:
    - access
    - auditor
    - editor
+   - teleport-event-handler-impersonator
```
Apply your changes:

```
$ tctl create -f out.yaml
user "user@org.com" has been updated
```
Log out of your Teleport cluster and log in again to assume the new role.
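For example, using the same placeholder proxy and user as earlier in this guide:

```
$ tsh logout
$ tsh login --proxy=teleport.example.com --user=email@example.com
```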
Export an identity file for the Fluentd plugin user.
💡Why do we need this?
The Fluentd Teleport plugin uses the `teleport-event-handler` role and user to read events. We export an identity file for the user with the `tctl auth sign` command:

```
$ tctl auth sign --user=teleport-event-handler --out=identity
```

The above sequence should result in one PEM-encoded file, `identity`.

Start the Fluentd forwarder.
💡Why do we need this?
The Fluentd plugin will send events to your Fluentd instance using the keys generated in the previous step.
The `fluent.conf` file generated earlier configures your Fluentd instance to accept events using TLS and print them:

```
<source>
    @type http
    port 8888

    <transport tls>
        client_cert_auth true

        # We are going to run fluentd in Docker. /keys will be mounted from the host file system.
        ca_path /keys/ca.crt
        cert_path /keys/server.crt
        private_key_path /keys/server.key
        private_key_passphrase ********** # Passphrase generated along with the keys
    </transport>

    <parse>
        @type json
        json_parser oj

        # This time format is used by the plugin. This field is required.
        time_type string
        time_format %Y-%m-%dT%H:%M:%S
    </parse>
</source>

## Add the details for the output destination (such as S3)
## Current configuration is logging the audit event in stdout

# Events sent to test.log will be dumped to STDOUT.
<match test.log>
    @type stdout
</match>
```
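The stdout match above only prints events. To ship them to the S3 bucket that Hunters will read from, you can replace it with an S3 output. The following is a minimal sketch, assuming the fluent-plugin-s3 gem is installed in your Fluentd image and AWS credentials are available to the instance (via environment variables or an instance profile); the bucket name, region, and prefix are placeholders:

```
<match test.log>
  @type s3                          # requires the fluent-plugin-s3 gem
  s3_bucket my-teleport-logs        # placeholder bucket name
  s3_region us-east-1               # adjust to your bucket's region
  path teleport/audit/              # object key prefix
  <format>
    @type json                      # one JSON object per line (NDJSON)
  </format>
  <buffer time>
    @type file
    path /var/log/fluent/s3-buffer  # local buffer before upload
    timekey 300                     # flush a new object every 5 minutes
    timekey_wait 1m
  </buffer>
</match>
```

Note that fluent-plugin-s3 gzips objects by default (`store_as gzip`); keep or change this according to what your Hunters connector expects.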
📘 Note
If you encounter a certificate authority error, comment out the relevant certificate section in fluent.conf and restart the Fluentd instance.
In order to try out this Fluentd configuration, start your Fluentd instance:
```
$ docker run -u $(id -u ${USER}):$(id -g ${USER}) -p 8888:8888 \
    -v $(pwd):/keys -v $(pwd)/fluent.conf:/fluentd/etc/fluent.conf \
    fluent/fluentd:edge
```
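Once the container is up, you can optionally smoke-test the TLS listener by posting a dummy event with the generated client certificate (a sketch, assuming you run it from the same folder and the client key is not passphrase-protected; the event body is arbitrary):

```
$ curl --cacert ca.crt --cert client.crt --key client.key \
    -H "Content-Type: application/json" \
    -d '{"event":"test","time":"2023-01-11T06:53:39"}' \
    https://localhost:8888/test.log
```

If the configuration is correct, the event should be dumped to the container's stdout.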
To start the event handler, run the following command:
```
$ ./teleport-event-handler start --config teleport-event-handler.toml
```
📘Note
This example will start exporting from January 8th, 2023:

```
$ ./teleport-event-handler start --config teleport-event-handler.toml --start-time "2023-01-08T00:00:00Z"
```

The start time can be set only once, on the first run of the tool. If you want to change the time frame later, remove the plugin state directory that you specified in the `storage` field of the handler's configuration file.
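For reference, the generated teleport-event-handler.toml has roughly the following shape (a sketch; the paths and tenant address shown are placeholders, and your generated file will contain the actual values):

```toml
storage = "./storage"          # plugin state directory; remove it to reset the start time
timeout = "10s"
batch = 20

[forward.fluentd]
ca = "ca.crt"
cert = "client.crt"
key = "client.key"
url = "https://localhost:8888/test.log"

[teleport]
addr = "mytenant.teleport.sh:443"
identity = "identity"          # the identity file exported earlier
```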
The log should look like this:

```
2023-01-11 06:53:39.000000001 +0000 test.log: {"ei":0,"event":"cert.create","uid":"1b2332bc-ae5c-475d-980a-3dba85024dad","code":"TC000I","cluster_name":"sacumen.teleport.sh","cert_type":"user","identity":{"user":"deepak.baraik@sacumentech.com","roles":["access","editor","auditor","teleport-event-handler-impersonator"],"logins":["root","-teleport-internal-join"],"expires":"2023-01-10T18:47:06.409188596Z","route_to_cluster":"sacumen.teleport.sh","traits":{"kubernetes_users":null,"kubernetes_groups":null,"db_users":null,"db_names":null,"aws_role_arns":null,"windows_logins":null,"logins":["root"]},"teleport_cluster":"sacumen.teleport.sh","prev_identity_expires":"0001-01-01T00:00:00Z"}}
```
Step 2: Connect your S3 bucket to Hunters
Once the export is complete and the logs are being collected in S3, follow the steps in this section.
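Before connecting the bucket, you can optionally confirm that objects are arriving, for example with the AWS CLI (the bucket name and prefix below match the placeholders used in the Fluentd S3 sketch above):

```
$ aws s3 ls s3://my-teleport-logs/teleport/audit/ --recursive
```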