Bedrock Blog

Breaking Codefinger: 5 Steps to Take Now

Written by Pranava Adduri | Jan 16, 2025 5:57:40 PM

Codefinger Strikes

Recently, an adversary named Codefinger showed it could hold victims ransom by encrypting files in their AWS S3 buckets using one of AWS's own built-in capabilities: server-side encryption with a customer-provided key, referred to as SSE-C. In exchange for payment, the attacker offers to return the keys so the victim can decrypt their data. While this attack begins with an initial credential compromise, it significantly lowers the complexity of executing ransomware attacks because the attacker needs no dedicated encryption infrastructure.
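
To make the mechanism concrete, below is a minimal boto3 sketch of how SSE-C behaves; the bucket and object names are hypothetical. S3 encrypts the object server-side with the caller's key and then discards the key, keeping only a salted HMAC to validate future requests, so nobody without the key can decrypt the data.

import os
import boto3

s3 = boto3.client("s3")
key = os.urandom(32)  # 256-bit customer-provided key; S3 never stores it

# Upload with SSE-C: S3 encrypts the object server-side with the caller's key
s3.put_object(
    Bucket="example-bucket",  # hypothetical bucket name
    Key="example-object",
    Body=b"hello",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=key,
)

# Reading the object back requires presenting the same key; without it,
# S3 rejects the request and the ciphertext is unrecoverable
obj = s3.get_object(
    Bucket="example-bucket",
    Key="example-object",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=key,
)

This is why the attack needs no encryption infrastructure of its own: with stolen credentials, an attacker can simply rewrite your objects under a key only they know.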

We live and breathe cloud and data, and in this blog we want to share a few immediate steps that can help mitigate or avoid the impact of Codefinger. We will also highlight general best practices that can help reduce the impact of related attacks.

Assess the Impact

If you were impacted by Codefinger, or by similar data corruption in S3, you can run the following query in AWS Athena to see a list of files the adversary may have encrypted, provided you have S3 Server Access Logs enabled.

To set up Athena, follow these instructions.


SELECT * 
FROM s3_access_logs_db.bucket_logs -- Replace with name of your table for S3 Server Access Logs
WHERE operation = 'REST.PUT.OBJECT'
  AND httpstatus = '200' -- Search for successful PUTs
  AND requester LIKE '%PRINCIPAL_NAME%' -- Replace w/ the compromised identity
  AND remoteip='IP' -- Optional, replace IP w/ attack origination IP if available
  AND parse_datetime(requestdatetime,'dd/MMM/yyyy:HH:mm:ss Z')
    BETWEEN parse_datetime('YYYY-MM-DD','yyyy-MM-dd') -- Suspected attack start time
        AND parse_datetime('YYYY-MM-DD','yyyy-MM-dd') -- Time identity was quarantined
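
If you prefer to run this query from a script rather than the Athena console, here is a minimal boto3 sketch; the database name, table, and results location are assumptions to replace with your own.

import boto3

# Paste the full Athena query from above into this string
QUERY = """
SELECT * FROM s3_access_logs_db.bucket_logs LIMIT 10
"""

athena = boto3.client("athena")
response = athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": "s3_access_logs_db"},  # your logs database
    ResultConfiguration={"OutputLocation": "s3://YOUR_ATHENA_RESULTS_BUCKET/"},
)
print(response["QueryExecutionId"])  # poll athena.get_query_execution() for status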

Top Five Mitigation Steps To Execute Now

1. Least Privilege

The privileges required for ransom via SSE-C are s3:GetObject and s3:PutObject. Audit identities (human and non-human) that can use SSE-C, and compare that list against a master list of users authorized to use SSE-C. Be sure to consider identity trust relationships for potential third parties (e.g. contractors).


A good starting point is to eliminate highly privileged identities that have not been active for over thirty days. A follow-up is to prune unneeded policies that grant SSE-C privileges from identities that do not require that access.
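
If no workload in your environment legitimately uses SSE-C, consider blocking it outright at the bucket level. The sketch below (with a hypothetical bucket name) attaches a deny statement keyed on the s3:x-amz-server-side-encryption-customer-algorithm condition key, which is present only on SSE-C requests.

import json
import boto3

# Deny any PutObject request carrying SSE-C headers:
# "Null": "false" means the SSE-C algorithm header IS present on the request
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenySSECUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::example-bucket/*",  # hypothetical bucket
            "Condition": {
                "Null": {
                    "s3:x-amz-server-side-encryption-customer-algorithm": "false"
                }
            },
        }
    ],
}

boto3.client("s3").put_bucket_policy(
    Bucket="example-bucket", Policy=json.dumps(policy)
)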

2. Segregate Permissions

An identity with SSE-C privileges should not also have privileges to disable object versioning, destroy backups, destroy existing logs, or disable logging. Monitor for the following permissions being co-mingled with SSE-C permissions (a sketch for auditing this follows the list):

Deletion of Logs

s3:DeleteBucket - Allows deletion of the bucket containing logs

s3:DeleteObject - Allows deletion of specific objects in a bucket


Deletion of Backups

s3:DeleteObjectVersion - Allows deletion of specific versions of objects

backup:DeleteRecoveryPoint - Allows deletion of AWS Backup S3 recovery points


Object Versioning

s3:PutBucketVersioning - Allows enabling or suspending versioning


Logging and Audit Configuration

s3:PutBucketLogging - Allows enabling, disabling, or altering bucket logging configurations

s3:GetBucketLogging - Provides visibility into the current logging configuration

s3:PutBucketPolicy - Allows modification of bucket policies, which could disable logging indirectly or prevent access logs from being written

s3:PutBucketAcl - Allows modification of bucket access control lists (ACLs), potentially disrupting access logging
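
As a starting point for that audit, the sketch below scans customer-managed IAM policies for statements that allow s3:PutObject alongside any of the permissions above. It is deliberately simple: it does not expand wildcards like s3:*, and it ignores inline policies, resource scoping, and deny statements.

import boto3

# Permissions from the list above that should be segregated from SSE-C access
RISKY_ACTIONS = {
    "s3:DeleteBucket", "s3:DeleteObject", "s3:DeleteObjectVersion",
    "backup:DeleteRecoveryPoint", "s3:PutBucketVersioning",
    "s3:PutBucketLogging", "s3:PutBucketPolicy", "s3:PutBucketAcl",
}

iam = boto3.client("iam")

for page in iam.get_paginator("list_policies").paginate(Scope="Local"):
    for policy in page["Policies"]:
        document = iam.get_policy_version(
            PolicyArn=policy["Arn"], VersionId=policy["DefaultVersionId"]
        )["PolicyVersion"]["Document"]

        statements = document["Statement"]
        if isinstance(statements, dict):  # a lone statement may not be in a list
            statements = [statements]

        allowed = set()
        for statement in statements:
            if statement.get("Effect") != "Allow":
                continue
            actions = statement.get("Action", [])
            allowed.update([actions] if isinstance(actions, str) else actions)

        overlap = allowed & RISKY_ACTIONS
        if "s3:PutObject" in allowed and overlap:
            print(f"{policy['PolicyName']}: s3:PutObject co-mingled with {sorted(overlap)}")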

 

3. Enable Data Event Logging

Ransomware that targets S3 is particularly challenging because AWS does not log S3 GETs and PUTs by default. As a result, a victim may find themselves holding a ransom note with no way to evaluate the full extent of what was encrypted.

There are two ways to log data events in S3: CloudTrail Data Events and S3 Server Access Logs. While CloudTrail Data Events offer greater detail, they are billed by data event volume, and costs can rise quickly in buckets with high change rates. S3 Server Access Logs, by contrast, are billed only for log storage, not log generation. This article explains their differences well. Given its accessibility, we will focus only on S3 Server Access Logs in this post.

To help you do a quick survey of which S3 buckets do not have S3 Server Access Logging enabled, you can run the following Python script:


import boto3
import json

def check_s3_bucket_logging():
    s3 = boto3.client('s3')
    buckets = s3.list_buckets()["Buckets"]

    print("bucket_name, server_access_logging_enabled, tags")
    for bucket in buckets:
        bucket_name = bucket["Name"]

        # Default values
        logging_enabled = "Unknown"
        tags_json = json.dumps({})

        # Check if logging is enabled
        try:
            logging = s3.get_bucket_logging(Bucket=bucket_name)
            logging_enabled = logging.get("LoggingEnabled") is not None
        except s3.exceptions.ClientError as e:
            if e.response['Error']['Code'] == 'AccessDenied':
                logging_enabled = "Unknown"
            else:
                raise  # Re-raise unexpected errors

        # Retrieve bucket tags
        try:
            tags = s3.get_bucket_tagging(Bucket=bucket_name)
            tags_json = json.dumps({tag["Key"]: tag["Value"] for tag in tags["TagSet"]})
        except s3.exceptions.ClientError as e:
            if e.response['Error']['Code'] == 'NoSuchTagSet':
                tags_json = json.dumps({})
            elif e.response['Error']['Code'] == 'AccessDenied':
                tags_json = json.dumps({"Error": "AccessDenied"})
            else:
                raise  # Re-raise unexpected errors

        # Print bucket details
        print(f"{bucket_name}, {logging_enabled}, {tags_json}")

if __name__ == "__main__":
    check_s3_bucket_logging()

Sample output:

bucket_name, server_access_logging_enabled, tags
prod_customer_data, False, {"department": "Engineering", "is_pii": "True"}
customer_analytics, False, {"department": "Analytics", "is_pii": "False"}

Enable Logging

If you want to enable S3 Server Access Logging for the buckets the previous script identified, use the command below.

 

aws s3api put-bucket-logging \
    --bucket NAME_OF_BUCKET_TO_MONITOR \
    --bucket-logging-status '{
        "LoggingEnabled": {
            "TargetBucket": "LOG_DESTINATION_BUCKET_NAME",
            "TargetPrefix": "logs/source_bucket_name/"
        }
    }'

Make sure your log destination bucket is a secured location. Consider enabling S3 Object Lock in compliance mode with a retention period to create immutable logs that an adversary cannot delete.
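
As a sketch of what that can look like, the call below applies a default compliance-mode retention to the log bucket; it assumes Object Lock is enabled on the bucket (versioning is a prerequisite), and the 365-day period is an example rather than a recommendation.

import boto3

s3 = boto3.client("s3")

# Compliance mode: no identity, including the root user, can delete or
# overwrite locked object versions until the retention period expires
s3.put_object_lock_configuration(
    Bucket="LOG_DESTINATION_BUCKET_NAME",  # the log bucket from the command above
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 365}},
    },
)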

4. Object Versioning

In the event of S3 data corruption, S3 Object Versioning can help you recover your files from the last known good state. As recommended earlier, we strongly suggest segregating the permissions that change object versioning from the permissions that can delete or corrupt objects.

If enabling object versioning globally is cost-prohibitive, focus on mission-critical S3 buckets first.
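
Enabling versioning is a single call per bucket; the bucket name below is hypothetical.

import boto3

# Versioning retains prior versions when objects are overwritten or deleted,
# so a corrupted object can be restored from its last known good version
boto3.client("s3").put_bucket_versioning(
    Bucket="example-critical-bucket",  # hypothetical; start with critical buckets
    VersioningConfiguration={"Status": "Enabled"},
)

Note that versioning only protects changes made after it is enabled, and each retained version adds storage cost.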

5. Take A Risk-Based Approach

The steps above can help you recover quickly from most ransomware attacks. However, given the vast amounts of data, identities, and permissions in today's cloud environments, it is critical to prioritize the assets you act on. A discovery and classification exercise across all your data (structured, semi-structured, and unstructured) can help stack-rank which data assets to harden and which identities to prune first.

Whether you are worried about your exposure in AWS or need guidance on breach impact from Codefinger, we are here to help. Contact us at protectdata@bedrock.security to connect with a data security expert.

Additional Resources