S3 Access Denied: When Lambda Can’t Read What It Triggers (S3 → Lambda → DynamoDB)

In AWS, an event can trigger successfully even when your function can’t see the data it was told about.

When your S3 bucket triggers a Lambda function, you expect smooth handoffs — the file lands, the event fires, and the function does its job.

But sometimes, the event makes it all the way through to Lambda… only for your code to crash with a nasty surprise:

botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the GetObject operation

The bucket fired the event.
Lambda received it.
And yet, it can’t even open the object it was just told about.

Welcome to one of AWS’s most baffling pain points — S3 Access Denied on triggered events.


Problem

You’ve got a clean, serverless pipeline.

Files land in S3 → Lambda reads them → DynamoDB logs the result.

It works perfectly for hours — then suddenly, you see red in the CloudWatch console.

Your logs show:

[ERROR] ClientError: An error occurred (AccessDenied) when calling the GetObject operation

You check the file — it’s there.
The Lambda runs — confirmed in CloudWatch.
IAM policies look correct.

Nothing looks broken, but the function keeps failing on GetObject.


Clarifying the Issue

This isn’t a random AWS hiccup — it’s a permissions mismatch caused by how S3 events and IAM interact.

Lambda can receive the event even when it can’t read the object.

Here’s why:

  • EventBridge or S3 notifications don’t validate object access — they just fire the event.
  • If the object is KMS-encrypted (SSE-KMS), or uploaded by another principal (like a different account or service), your Lambda’s IAM role might not have the right decryption or object permissions.
  • S3 events fire immediately, so a permission you just granted (a key policy, grant, or bucket policy change) may not have propagated by the time the function runs, particularly across accounts.

The result:

Lambda is successfully invoked but fails the moment it tries to read the file.
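
You can see why in the event itself. The notification carries only metadata about the object (bucket name, key, size in bytes), not its contents, and no guarantee that your role can fetch it. A trimmed example of the record Lambda receives:

{
  "Records": [{
    "eventSource": "aws:s3",
    "eventName": "ObjectCreated:Put",
    "s3": {
      "bucket": { "name": "access-demo-bucket" },
      "object": { "key": "testfile.txt", "size": 12 }
    }
  }]
}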


Why It Matters

This is one of those “half-success” problems — the worst kind.

You get billed for a Lambda invocation that did nothing.

Your monitoring shows “healthy invocation counts,” but your business logic never runs.

It’s not a crash in your code.

It’s a quiet permissions trap at the S3 → Lambda boundary.

Understanding — and fixing — this ensures your event-driven systems are both secure and reliable.


Key Terms

  • SSE-KMS: Server-side encryption using a KMS-managed key.
  • Resource Policy: Defines which principals can access an S3 bucket or decrypt a KMS key.
  • Execution Role: The IAM role assumed by your Lambda during runtime.
  • Cross-Account Access: When the S3 bucket and Lambda belong to different AWS accounts.
  • head_object: A lightweight S3 call that verifies accessibility before attempting to read the full object.

Steps at a Glance

  1. Create the baseline pipeline: S3 bucket, Lambda function, and DynamoDB table.
  2. Upload a test file and verify access.
  3. Introduce the failure: Enable SSE-KMS encryption or restrict bucket access.
  4. Observe and confirm AccessDenied errors.
  5. Apply the fix: Update IAM policies and KMS key grants.
  6. Re-test and confirm success.

Step 1 – Create the Baseline Pipeline

We’ll start with the same three-service build.

aws s3api create-bucket \
  --bucket access-demo-bucket \
  --region us-east-1

aws dynamodb create-table \
  --table-name access-demo-table \
  --attribute-definitions AttributeName=id,AttributeType=S \
  --key-schema AttributeName=id,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST
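
✅ Verify the table is ready (a quick status check; it should print "ACTIVE" within a few seconds):

aws dynamodb describe-table \
  --table-name access-demo-table \
  --query 'Table.TableStatus'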

✅ Verify creation:

aws s3 ls

✅ Output:

2025-11-01 21:42:07 access-demo-bucket

(✅ Shows the bucket is live and visible to the AWS CLI. If it doesn’t appear, wait a few seconds and re-run aws s3 ls.)

Create the Lambda function:

cat > lambda_handler.py <<'EOF'
import boto3, hashlib
from urllib.parse import unquote_plus

s3 = boto3.client('s3')
dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('access-demo-table')

def lambda_handler(event, context):
    record = event['Records'][0]
    bucket = record['s3']['bucket']['name']
    # S3 URL-encodes object keys in event payloads; decode before using
    key = unquote_plus(record['s3']['object']['key'])

    # Preflight: verify access before reading the full object
    s3.head_object(Bucket=bucket, Key=key)
    content = s3.get_object(Bucket=bucket, Key=key)['Body'].read().decode('utf-8')

    # Hash the key for a stable primary-key value
    table.put_item(Item={'id': hashlib.md5(key.encode()).hexdigest(), 'data': content})
    print(f"✅ Processed {key}")
EOF

✅ Package and deploy with a simple zip:

zip function.zip lambda_handler.py
aws lambda create-function \
  --function-name access-demo-func \
  --zip-file fileb://function.zip \
  --handler lambda_handler.lambda_handler \
  --runtime python3.11 \
  --role arn:aws:iam::<account-id>:role/LambdaExecutionRole
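
One piece of wiring the event flow depends on: S3 must be allowed to invoke the function, and the bucket needs a notification configuration pointing at it. A minimal sketch, assuming us-east-1 and the names above (run add-permission first, since S3 validates the destination when you set the configuration):

aws lambda add-permission \
  --function-name access-demo-func \
  --statement-id s3invoke \
  --action lambda:InvokeFunction \
  --principal s3.amazonaws.com \
  --source-arn arn:aws:s3:::access-demo-bucket

aws s3api put-bucket-notification-configuration \
  --bucket access-demo-bucket \
  --notification-configuration '{
    "LambdaFunctionConfigurations":[{
      "LambdaFunctionArn":"arn:aws:lambda:us-east-1:<account-id>:function:access-demo-func",
      "Events":["s3:ObjectCreated:*"]
    }]
  }'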

Step 2 – Upload a Test File and Verify

echo "Hello World" > testfile.txt
aws s3 cp testfile.txt s3://access-demo-bucket/
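
To watch the invocation itself before checking the table, tail the function’s log group (aws logs tail requires AWS CLI v2):

aws logs tail /aws/lambda/access-demo-func --since 5m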

✅ Confirm DynamoDB:

aws dynamodb scan --table-name access-demo-table

✅ Output:

{
  "Count": 1,
  "Items": [
    { "id": { "S": "d8e8fca2dc0f896fd7cb4cb0031ba249" }, "data": { "S": "Hello World" } }
  ]
}

Everything works perfectly. (The id shown is the MD5 hash of the object key; yours will reflect whatever key you uploaded.)


Step 3 – Introduce the Failure

Now let’s encrypt the bucket with a KMS key.

aws s3api put-bucket-encryption \
  --bucket access-demo-bucket \
  --server-side-encryption-configuration '{
    "Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"aws:kms","KMSMasterKeyID":"<kms-key-id>"}}]
  }'
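
✅ Confirm the default encryption took effect:

aws s3api get-bucket-encryption --bucket access-demo-bucket

The response should echo back aws:kms and the key ID you configured.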

Re-upload the file:

aws s3 cp testfile.txt s3://access-demo-bucket/

Wait a few seconds, then check your Lambda logs:

[ERROR] ClientError: An error occurred (AccessDenied) when calling the GetObject operation: Access Denied

Step 4 – Confirm the Access Denied State

Lambda has no permission to decrypt the object.

Let’s verify which key the object is actually using:

aws s3api head-object --bucket access-demo-bucket --key testfile.txt

The ServerSideEncryption and SSEKMSKeyId fields in the response identify the key. If the object is encrypted under a KMS key your Lambda role can’t use, Lambda can’t read it, even though the event invoked it.
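
A typical response for the encrypted object looks like this (abbreviated, with placeholders):

{
  "ContentLength": 12,
  "ETag": "...",
  "ServerSideEncryption": "aws:kms",
  "SSEKMSKeyId": "arn:aws:kms:us-east-1:<account-id>:key/<kms-key-id>"
}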


Step 5 – Apply the Fix

Grant the Lambda execution role permission to use the KMS key.

aws kms create-grant \
  --key-id <kms-key-id> \
  --grantee-principal arn:aws:iam::<account-id>:role/LambdaExecutionRole \
  --operations Decrypt,GenerateDataKey
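
You can confirm the grant landed:

aws kms list-grants --key-id <kms-key-id>

Look for the role’s ARN under GranteePrincipal, with Decrypt and GenerateDataKey in the Operations list.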

Then attach a bucket policy that explicitly grants the role access:

aws s3api put-bucket-policy \
  --bucket access-demo-bucket \
  --policy '{
    "Version":"2012-10-17",
    "Statement":[{
      "Effect":"Allow",
      "Principal":{"AWS":"arn:aws:iam::<account-id>:role/LambdaExecutionRole"},
      "Action":["s3:GetObject","s3:ListBucket"],
      "Resource":["arn:aws:s3:::access-demo-bucket","arn:aws:s3:::access-demo-bucket/*"]
    }]
  }'
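
✅ Confirm the policy is attached:

aws s3api get-bucket-policy --bucket access-demo-bucket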

✅ Re-upload the file (no redeploy is needed; only the permissions changed):

aws s3 cp testfile.txt s3://access-demo-bucket/

✅ Check DynamoDB:

aws dynamodb scan --table-name access-demo-table

✅ Output:

{
  "Count": 1,
  "Items": [
    { "id": { "S": "d8e8fca2dc0f896fd7cb4cb0031ba249" }, "data": { "S": "Hello World" } }
  ]
}

Lambda can now decrypt and process the object correctly.


Step 6 – Add a Defensive Check

As a safeguard, always preflight with head_object before attempting a full get_object.

If head_object fails, log and skip rather than crashing the function.

from botocore.exceptions import ClientError

try:
    s3.head_object(Bucket=bucket, Key=key)
except ClientError as e:
    # Log and skip instead of crashing the whole invocation
    print(f"⚠️ Skipping {key}: access denied or object not found ({e})")
    return
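
If you want to treat a vanished object differently from a permissions failure, the error code on the ClientError distinguishes the two. One caveat worth noting: HEAD responses carry no error body, so botocore typically reports the bare HTTP status ('403', '404') rather than names like AccessDenied. A sketch:

from botocore.exceptions import ClientError

try:
    s3.head_object(Bucket=bucket, Key=key)
except ClientError as e:
    code = e.response['Error']['Code']  # HEAD errors surface as bare status codes
    if code == '404':
        print(f"⚠️ {key} no longer exists; skipping")
    elif code in ('403', 'AccessDenied'):
        print(f"🚫 No permission to read {key}; check the role, bucket policy, and KMS grants")
    else:
        raise  # unexpected failure; let Lambda's retry behavior handle it
    return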

Conclusion

In AWS, an event can trigger successfully even when your function can’t see the data it was told about.

That’s the paradox of S3’s design — the event system isn’t tied to the access system.

By layering correct IAM policies, KMS grants, and preflight checks, you build a pipeline that’s both secure and resilient — where every trigger has the permission to finish what it started.

Reliability isn’t just about firing events — it’s about ensuring every component can follow through.


Aaron Rose is a software engineer and technology writer at tech-reader.blog and the author of Think Like a Genius.
