AWS Lambda Error – Handler timed out (execution duration exceeded)
Whether it's a slow database, a heavy calculation, or a missing network route, the fix usually lies in observability (X-Ray/Logs) rather than just increasing the timer.
Problem
Your Lambda invocation ends with:
Task timed out after X.XX seconds
This means your function exceeded the configured Lambda timeout. Lambda forcibly terminates execution—no retries can save a slow function unless you fix the root cause.
Possible causes:
- CPU Starvation: Low memory settings resulting in slow processing.
- External Dependencies: API calls that hang or databases that exceed latency.
- Cold Starts: Initialization overhead in heavier runtimes or complex VPC networking.
- Code Issues: Blocking loops, missing await, or Promise hangs.
Clarifying the Issue
A timeout isn’t a crash—it’s simply the end of Lambda’s patience.
Timeout = the handler failed to finish in time.
Lambda kills the function immediately (SIGKILL) and returns a timeout error upstream.
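To see the failure mode in isolation, here is a minimal sketch of a handler that reproduces it (assuming the function is configured with a 3-second timeout; the sleep duration and return value are just placeholders):
import time

def handler(event, context):
    # Sleeps past the assumed 3-second timeout, so Lambda terminates the
    # invocation and logs "Task timed out after 3.00 seconds".
    time.sleep(10)
    return {"statusCode": 200}  # never reached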
Why It Matters
Timeouts lead to:
- Retry Storms: Clients retrying automatically can DDoS your own downstream services.
- Ghost Billing: If API Gateway times out (29s) but Lambda keeps running (30s+), you pay for a result no one receives.
- SQS Backlogs: Failed batches block queue processing.
Steps at a Glance
- Identify current timeout & memory settings.
- Inspect CloudWatch Logs & X-Ray.
- Check external dependencies (HTTP/DB).
- Detect blocking code.
- Review VPC networking.
- Optimize Memory (CPU) or Increase Timeout.
Detailed Steps
Step 1: Identify configuration
Check your timeout and memory.
aws lambda get-function-configuration \
--function-name my-fn \
--query '[Timeout, MemorySize]'
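If you'd rather check this from a script, a rough boto3 equivalent (assuming credentials and region are already configured, and reusing the same hypothetical my-fn function name) looks like this:
import boto3

lambda_client = boto3.client("lambda")
cfg = lambda_client.get_function_configuration(FunctionName="my-fn")
# Timeout is reported in seconds, MemorySize in MB
print(f"Timeout: {cfg['Timeout']}s  Memory: {cfg['MemorySize']}MB")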
Step 2: Inspect CloudWatch & X-Ray
Use the CLI to view the specific failure log:
aws logs tail /aws/lambda/my-fn --since 5m
Look for:
- Init Duration: If it's high, you have a cold start problem.
- Billed Duration: If it equals your timeout setting, the process was killed mid-stream.
Pro Tip: Enable AWS X-Ray. It visualizes exactly where the time went (e.g., 90% waiting on S3 vs. 10% compute).
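As a rough sketch of what that instrumentation can look like in Python (this assumes Active tracing is enabled on the function and the aws-xray-sdk package is bundled with your deployment; process_record is a hypothetical helper):
from aws_xray_sdk.core import patch_all, xray_recorder

# Patch supported libraries (requests, boto3, etc.) so each outbound call
# appears as its own subsegment in the trace.
patch_all()

@xray_recorder.capture("process_record")
def process_record(record):
    ...  # slow work shows up here as a named subsegment

def handler(event, context):
    for record in event.get("Records", []):
        process_record(record)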
Step 3: Check external dependencies
Wrap your calls in timing code to see whether an external API is the bottleneck.
Python:
import time, requests
t0 = time.time()
# Always set a timeout; a single value applies to both connect and read
requests.get(url, timeout=3.05)
print(f"Fetch time: {(time.time() - t0) * 1000:.0f} ms")
Go:
start := time.Now()
client := http.Client{Timeout: 2 * time.Second}
if _, err := client.Get(url); err != nil {
    fmt.Printf("Fetch error: %v\n", err)
}
fmt.Printf("Fetch time: %v\n", time.Since(start))
Step 4: Detect Promise hangs (Node.js)
If you're using Node.js, make sure every promise is awaited and that nothing keeps the event loop busy (open handles, sockets, or timers) when the handler returns.
node --trace-warnings index.js
Step 5: Check VPC networking latency
If your Lambda is in a VPC, it might be timing out trying to reach the internet via a NAT Gateway or initializing ENIs (Elastic Network Interfaces).
Sanity Check: Does this function need to be in a VPC? If it only accesses standard AWS services (S3, DynamoDB) and not private resources, remove it from the VPC to eliminate ENI cold start penalties entirely.
Inside the Lambda (or a standard EC2 in the same subnet):
time curl https://example.com
If this takes >1 second, check your NAT Gateway or Security Groups.
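If you'd rather test from inside the handler itself, a minimal sketch is a bare TCP connect with a short timeout; with no NAT route, the connect hangs until the timeout instead of returning quickly (the endpoint and the 2-second budget are arbitrary):
import socket
import time

def handler(event, context):
    t0 = time.time()
    try:
        socket.create_connection(("example.com", 443), timeout=2).close()
        print(f"Connect OK in {(time.time() - t0) * 1000:.0f} ms")
    except OSError as exc:
        # Typically a timeout when the subnet has no path to the internet
        print(f"Connect failed after {(time.time() - t0) * 1000:.0f} ms: {exc}")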
Step 6: Power-up or Extend
Option A: The "Power-up" (Recommended)
In Lambda, CPU is proportional to Memory.
If your timeout is caused by heavy processing (encryption, parsing), increase the Memory. It often costs the same or less because the function finishes so much faster.
# Example: doubling memory (e.g., 256 MB -> 512 MB) roughly doubles the CPU allocation
aws lambda update-function-configuration \
--function-name my-fn \
--memory-size 512
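To see why the power-up can be cost-neutral, here is a back-of-the-envelope sketch. The per-GB-second price below is approximate and varies by region and architecture, and the durations are made up, assuming a CPU-bound function whose runtime roughly halves when memory (and therefore CPU) doubles:
# Lambda bills GB-seconds: memory (in GB) x billed duration (in seconds)
PRICE_PER_GB_S = 0.0000166667  # approximate x86 price; check your region

def invocation_cost(memory_mb, duration_s):
    return (memory_mb / 1024) * duration_s * PRICE_PER_GB_S

print(invocation_cost(256, 8.0))  # 2.0 GB-s -> ~$0.000033
print(invocation_cost(512, 4.0))  # still 2.0 GB-s -> same cost, half the wall time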
Option B: Increase the Timeout
aws lambda update-function-configuration \
--function-name my-fn \
--timeout 30
Warning: If using API Gateway, the hard limit is 29 seconds. Setting Lambda >29s can result in "zombie" processes.
Pro Tips
- Set P99 + Buffer: Set your timeout slightly above your P99 latency, not your median.
- Provisioned Concurrency: For critical paths where cold starts cause timeouts, enable Provisioned Concurrency to keep environments warm (though this increases cost).
- Fail Fast: Configure your HTTP clients (Axios, requests, etc.) to time out before the Lambda execution limit. This lets you log a graceful error rather than crash hard (see the sketch after this list).
- CloudWatch Insights: Use this query to find your slowest invocations:
filter @type = "REPORT" | stats max(@duration) as max_dur by @requestId | sort max_dur desc
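Here is a minimal sketch of the fail-fast idea in Python, using the remaining invocation time from the Lambda context to budget the outbound call (the 500 ms safety margin and the URL are arbitrary):
import requests

def handler(event, context):
    # Leave ~500 ms to log and return a clean error before Lambda kills the run
    budget_s = max(context.get_remaining_time_in_millis() / 1000 - 0.5, 0.1)
    try:
        resp = requests.get("https://example.com/api", timeout=budget_s)
        return {"statusCode": 200, "body": resp.text}
    except requests.exceptions.Timeout:
        print("Upstream call exceeded the time budget; failing fast")
        return {"statusCode": 504, "body": "upstream timeout"}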
Conclusion
Timeouts signal that Lambda is waiting too long. Whether it's a slow database, a heavy calculation, or a missing network route, the fix usually lies in observability (X-Ray/Logs) rather than just increasing the timer.
Beware the "Ghost Bill": Simply increasing the timeout on a failing function just means you pay AWS for 30 seconds of failure instead of 3. Fix the bottleneck, don't just feed it more time.
Aaron Rose is a software engineer and technology writer at tech-reader.blog and the author of Think Like a Genius.