AWS Lambda Error – Request payload too large (6 MB / 256 KB limits)

 


A practical diagnostic guide for resolving Lambda invocation failures caused by oversized event payloads—whether coming through API Gateway, direct invokes, asynchronous triggers, or payload expansions you didn’t know were happening behind the scenes.





Problem

Your Lambda is receiving an event that is too large for the invocation method being used.

This error appears as:

2023-07-19T20:11:12.345Z ERROR  Invoke Error: Request payload size exceeded

Or API Gateway may surface:

413 Payload Too Large

Possible causes:

  • API Gateway request body is effectively limited to 6 MB, even though HTTP APIs accept up to 10 MB
  • Lambda direct invoke > 6 MB
  • Asynchronous event (SNS, S3, EventBridge) > 256 KB
  • Base64-encoded uploads from browsers inflating file size by ~33%
  • Images/PDFs submitted directly through API Gateway rather than via S3

Clarifying the Issue

Lambda enforces two hard limits:

  • Synchronous invoke limit: 6 MB (e.g., API Gateway → Lambda, AWS CLI direct invoke)
  • Asynchronous invoke limit: 256 KB (e.g., S3, SNS, EventBridge)

If your payload exceeds these numbers, Lambda rejects it without mercy.
Retries will not help. Re-deploying won’t help.
Nothing works until the payload shrinks or is offloaded.
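As a quick sanity check, the two ceilings can be encoded in a small helper. This is an illustrative sketch (the function name and structure are mine; the byte thresholds come from the documented AWS quotas):

```python
import json

# Documented Lambda invoke-payload quotas, in bytes.
SYNC_LIMIT = 6 * 1024 * 1024    # 6 MB: API Gateway, CLI, SDK RequestResponse
ASYNC_LIMIT = 256 * 1024        # 256 KB: S3, SNS, EventBridge

def fits_invoke_limit(event, synchronous=True):
    """Return True if the serialized event fits the relevant limit."""
    size = len(json.dumps(event).encode("utf-8"))
    return size <= (SYNC_LIMIT if synchronous else ASYNC_LIMIT)
```

Note that the same event can be fine synchronously and fatal asynchronously: a 300 KB body sails through API Gateway but is rejected by an SNS-triggered invoke.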


Why It Matters

Oversized payloads cause:

  • Silent failures (especially async)
  • Lost events
  • High retry costs
  • API Gateway 413 errors
  • Broken client uploads
  • Unpredictable behavior if Base64 encoding expands data beyond limits

Fixing the payload size restores reliability and prevents hidden data loss.


Key Terms

  • Synchronous invoke – Client waits for Lambda result (API Gateway, CLI).
  • Asynchronous invoke – Event sent to Lambda and retried automatically (SNS, S3, EventBridge).
  • Base64 inflation – Binary → Base64 conversion increases size by ~33%.
  • Claim Check Pattern – Store large data externally (e.g., S3) and send only a pointer.

Steps at a Glance

  1. Identify how the Lambda was invoked
  2. Check payload size
  3. Inspect CloudWatch (and API Gateway logs) for size-related failures
  4. Check for Base64 inflation
  5. Move payload to S3 (Claim Check Pattern)
  6. Re-test with minimal payload
  7. Apply proactive size logging

Detailed Steps

Step 1: Identify invocation style (sync vs async).

Run:

aws logs tail /aws/lambda/my-function --since 5m

Check the Trigger tab in the Lambda console:
If it shows S3, SNS, or EventBridge, it is asynchronous (256 KB max).
If invoked via API Gateway or CLI, it is synchronous (6 MB max).


Step 2: Check the payload size.

If the event came through API Gateway, tail the logs again and locate the logged request body:

aws logs tail /aws/lambda/my-function --since 5m

Confirm the body size:

echo -n "$BODY" | wc -c

Note: $BODY refers to the raw JSON string of the incoming request body.

If using HTTP APIs, note:

API Gateway HTTP APIs accept up to 10 MB, but Lambda still enforces the 6 MB synchronous limit.

If payload > limit → it will fail every time — and no retries will save it.


Step 3: Inspect CloudWatch for truncated or failed bodies.

Look for messages like:

Truncated body due to size limit

or:

RequestEntityTooLarge

Note: The "Truncated body" message often appears in API Gateway access logs rather than in Lambda logs; either location confirms the payload was too large.

If you see these → payload exceeded the limit.


Step 4: Check for Base64 expansion (hidden 33% tax).

Binary uploads (images, PDFs) sent through API Gateway are often Base64 encoded.

Check encoded size of a file:

base64 my_large_image.jpg | wc -c

4.5 MB file becomes ≈ 6 MB once Base64 encoded → immediately breaks Lambda.

If Base64 is the issue, go to Step 5.
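That arithmetic can be reproduced directly: Base64 emits four output bytes for every three input bytes, padding the last group. The helper below is illustrative:

```python
import base64
import math

def base64_size(n_bytes):
    # 3 input bytes -> 4 output characters, with padding on the last group.
    return math.ceil(n_bytes / 3) * 4

raw = int(4.5 * 1024 * 1024)     # 4,718,592 bytes on disk
encoded = base64_size(raw)       # 6,291,456 bytes: exactly the 6 MB ceiling,
                                 # before any JSON envelope overhead
```

So a 4.5 MB upload is already at the limit before the rest of the event is even counted.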


Step 5: Switch to S3 payload offloading (Claim Check Pattern).

Store the large data in S3 and send only a reference to Lambda.

Upload first:

aws s3 cp my_large_file.pdf s3://my-bucket/uploads/123.pdf

Send this as the event:

{
  "s3Bucket": "my-bucket",
  "s3Key": "uploads/123.pdf"
}

Lambda retrieves the data:

Node.js (AWS SDK v2, bundled with the Node.js 16 runtime; Node.js 18+ bundles SDK v3 instead):

const AWS = require("aws-sdk");
const s3 = new AWS.S3();

const data = await s3.getObject({
  Bucket: event.s3Bucket,
  Key: event.s3Key
}).promise();

Python:

import boto3

s3 = boto3.client("s3")
data = s3.get_object(
    Bucket=event["s3Bucket"],
    Key=event["s3Key"]
)

IAM Note: Ensure the Lambda execution role includes s3:GetObject.
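From the sender's side, the whole pattern fits in a few lines. This is a sketch under stated assumptions: the bucket, key, and function names are placeholders, and boto3 is imported lazily inside the function so the pointer helper stands alone:

```python
import json

def make_pointer(bucket, key):
    # The lightweight event that replaces the multi-megabyte payload.
    return {"s3Bucket": bucket, "s3Key": key}

def invoke_with_claim_check(path, bucket, key, function_name):
    import boto3  # deferred so make_pointer works without AWS dependencies
    s3 = boto3.client("s3")
    lam = boto3.client("lambda")
    s3.upload_file(path, bucket, key)    # 1. offload the heavy data to S3
    return lam.invoke(                   # 2. invoke with only the pointer
        FunctionName=function_name,
        Payload=json.dumps(make_pointer(bucket, key)).encode("utf-8"),
    )
```

The pointer event is a few dozen bytes, so it comfortably clears even the 256 KB asynchronous limit.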


Step 6: Re-test with a minimal payload.

Use:

aws lambda invoke --function-name my-function \
  --payload '{}' out.json

If this works → original payload was too large, and architecture changes (Step 5) are required.


Step 7: Add proactive size logging.

During dev, add:

Node.js:

console.log("EVENT SIZE:", Buffer.byteLength(JSON.stringify(event)));

Python:

import json

print("EVENT SIZE:", len(json.dumps(event).encode("utf-8")))

(Encoding to UTF-8 counts bytes, which is what the limits actually measure.)

This catches oversized events before deployment.
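The same idea can be wrapped in a small guard that warns as events approach the ceiling. The threshold, names, and logging style below are my own choices, not an AWS API:

```python
import json
import logging

SYNC_LIMIT = 6 * 1024 * 1024  # 6 MB synchronous invoke limit
logger = logging.getLogger(__name__)

def log_event_size(event, warn_ratio=0.8):
    """Log the serialized event size and warn near the 6 MB ceiling."""
    size = len(json.dumps(event).encode("utf-8"))
    logger.info("EVENT SIZE: %d bytes", size)
    if size > SYNC_LIMIT * warn_ratio:
        logger.warning("Event is at %.0f%% of the 6 MB limit",
                       100 * size / SYNC_LIMIT)
    return size
```

Calling this at the top of the handler during development surfaces payload growth long before it becomes a hard failure.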


Pro Tips

  • Prefer presigned S3 URLs for uploads
  • Never send binary blobs through API Gateway
  • Consider multipart uploads for huge files
  • Add payload-size alarms to catch regressions early

Conclusion

Lambda has two hard limits:
• 6 MB synchronous
• 256 KB asynchronous

Violating either leads to immediate failure.

The long-term fix is architectural: move large data out of the event and into S3 using the Claim Check Pattern, then pass only lightweight metadata.


Aaron Rose is a software engineer and technology writer at tech-reader.blog and the author of Think Like a Genius.
