
CloudTrail


Note: logs in the web console may take ~20 minutes to appear or update.

Enumerate Trails:

aws cloudtrail describe-trails --region <AWS_REGION>
aws cloudtrail describe-trails --region <AWS_REGION> --profile <PROFILE_NAME>
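Since trails can live in any region, a loop over all enabled regions catches trails that a single-region call would miss. A sketch (assumes permission to call ec2:DescribeRegions in addition to cloudtrail:DescribeTrails):

```shell
# Enumerate trails in every enabled region
for region in $(aws ec2 describe-regions --query 'Regions[].RegionName' --output text); do
  echo "== $region =="
  aws cloudtrail describe-trails --region "$region" \
    --query 'trailList[].{Name:Name,MultiRegion:IsMultiRegionTrail,Bucket:S3BucketName}' \
    --output table
done
```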

Stop logging (it might trigger alarms):

aws cloudtrail stop-logging --name <ARN_NAME>

Re-enable logging:

aws cloudtrail start-logging --name <ARN_NAME>

Stop multi-region logging (may still trigger alarms, but is less conspicuous than stopping logging entirely):

aws cloudtrail update-trail --name <ARN_NAME> --no-is-multi-region-trail --no-include-global-service-events
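Whether any of these changes took effect can be confirmed with get-trail-status, which reports the IsLogging flag among other fields:

```shell
# Returns true while the trail is logging, false after stop-logging
aws cloudtrail get-trail-status --name <ARN_NAME> --region <AWS_REGION> \
  --query 'IsLogging'
```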

Disrupting logs:

Create a Lambda function with the following policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:DeleteObjectVersion"
      ],
      "Resource": [
        "arn:aws:s3:::*"
      ]
    }
  ]
}
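As a sketch, the policy above can be attached to an execution role and the function created from the CLI. The role and function names, policy.json, and function.zip are placeholders for values you choose:

```shell
# Create an execution role that Lambda can assume
aws iam create-role --role-name <ROLE_NAME> \
  --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"lambda.amazonaws.com"},"Action":"sts:AssumeRole"}]}'

# Attach the policy shown above (saved locally as policy.json)
aws iam put-role-policy --role-name <ROLE_NAME> \
  --policy-name <POLICY_NAME> --policy-document file://policy.json

# Create the function from a deployment package
aws lambda create-function --function-name <FUNCTION_NAME> \
  --runtime python3.9 --handler lambda_function.lambda_handler \
  --zip-file fileb://function.zip \
  --role arn:aws:iam::<ACCOUNT_ID>:role/<ROLE_NAME>
```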

Log File Validation

To prevent this type of log tampering, AWS provides a feature called "log file validation", which creates a hash of the log files CloudTrail delivers every hour. Administrators can then verify that their logs still match the hashes via the various AWS interfaces to the AWS control plane.

We can see whether log file validation is enabled by viewing the trail's configuration.
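For example, the LogFileValidationEnabled field returned by describe-trails shows whether validation is on:

```shell
# LogFileValidationEnabled is true when validation is active
aws cloudtrail describe-trails --region <AWS_REGION> \
  --query 'trailList[].{Name:Name,Validation:LogFileValidationEnabled}' \
  --output table
```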

Enable log file validation (hashes for each log file will be created every hour):

aws cloudtrail update-trail --name <ARN_NAME> --enable-log-file-validation --region <AWS_REGION>

Disable log file validation:

aws cloudtrail update-trail --name <ARN_NAME> --no-enable-log-file-validation --region <AWS_REGION>

Validate logs:

aws cloudtrail validate-logs --trail-arn <ARN_NAME> --start-time 2018-01-01T12:31:41Z --region <AWS_REGION>

Breaking the Validation

Log verification is a process which CloudTrail uses to prove the log files have not been changed or deleted.

It does this by continuously creating "digest files" which contain hashes used to verify the log files.
Every digest points to the previous digest before it in the chain, forming a chain into the past.
S3 protects digest files from deletion where a subsequent digest file is pointing to it.

If we temporarily disable log verification and then enable it again, the chain of digests is restarted as if from the beginning, effectively breaking the previous chain of digests.

The previous digest will no longer have another digest pointing to it, and hence can now be deleted from the S3 bucket. So by stopping and starting log verification we can delete the previous digests, working our way from the newest digest back up the chain to the oldest. Once the digests are deleted, modified log files will no longer fail validation.
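Once the chain is broken, the orphaned digest files can be listed and removed from the trail's bucket. A sketch, assuming the default CloudTrail digest key layout (AWSLogs/<account>/CloudTrail-Digest/...):

```shell
# List digest files under the standard CloudTrail digest prefix
aws s3 ls s3://<BUCKET_NAME>/AWSLogs/<ACCOUNT_ID>/CloudTrail-Digest/ --recursive

# Delete a digest that no newer digest points to
aws s3 rm s3://<BUCKET_NAME>/AWSLogs/<ACCOUNT_ID>/CloudTrail-Digest/<DIGEST_KEY>
```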

It is possible to create a Lambda function that automatically deletes digest files and filters out log entries containing a specified IP address.
source code

  1. Unzip the file, open the "hiddenIPs.json" file in a text editor, and add the IP addresses you want to filter.
  2. Create a new zip file from the folder you extracted, with the edited hiddenIPs.json inside (on most operating systems you can right-click the folder and choose compress).
  3. In the Lambda Management Console, upload the zip you just created to the function you created earlier.
  4. Add the trigger for this function by clicking "S3" on the left-hand side of the web console.
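The same S3 trigger can be wired up from the CLI instead of the web console. A sketch, with function, bucket, and account values as placeholders:

```shell
# Allow S3 to invoke the function
aws lambda add-permission --function-name <FUNCTION_NAME> \
  --statement-id s3-trigger --action lambda:InvokeFunction \
  --principal s3.amazonaws.com --source-arn arn:aws:s3:::<BUCKET_NAME>

# Fire the function whenever CloudTrail writes a new object to the bucket
aws s3api put-bucket-notification-configuration --bucket <BUCKET_NAME> \
  --notification-configuration '{"LambdaFunctionConfigurations":[{"LambdaFunctionArn":"arn:aws:lambda:<AWS_REGION>:<ACCOUNT_ID>:function:<FUNCTION_NAME>","Events":["s3:ObjectCreated:*"]}]}'
```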

Further information