AWS Lambda: “The Duplicate Delivery Dilemma” — When the Same Event Hits Twice
The Duplicate Delivery Dilemma isn’t an AWS bug — it’s the cost of safety.
Problem
You see it in your logs and feel your stomach drop.
Two identical Lambda invocations, seconds apart, with the same payload.
Your metrics look fine.
Your alerts are silent.
But your database shows two new records where there should only be one.
This is the Duplicate Delivery Dilemma — when AWS Lambda processes the same event twice, and your downstream system quietly doubles your data, payments, or updates.
Clarifying the Issue
This problem originates deep inside AWS’s asynchronous event delivery model.
Services like S3, SNS, and EventBridge guarantee at-least-once delivery, not exactly-once.
That means duplicates aren’t a bug — they’re an architectural promise.
However, the most common culprit for duplicates in Lambda isn’t even the event source.
It’s SQS, when the visibility timeout expires before your Lambda finishes.
Here’s the pattern:
- Lambda receives an SQS message.
- Processing takes longer than the queue’s visibility timeout.
- The message reappears in the queue while the first Lambda is still working.
- A second Lambda grabs it — and now you’ve got two Lambdas processing the same message.
No error. No alarm. Just duplication.
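Under a simplifying model (every redelivery is picked up immediately, and the message is deleted only when the first consumer finishes), you can estimate how many times a single message will be delivered:

```python
import math

def expected_deliveries(processing_s: float, visibility_timeout_s: float) -> int:
    """Estimate deliveries of one SQS message: the message reappears
    every visibility_timeout_s until the first consumer finishes and
    deletes it. Simplified model, not an SQS guarantee."""
    return max(1, math.ceil(processing_s / visibility_timeout_s))

print(expected_deliveries(60, 30))  # 60s of work vs a 30s timeout -> 2
print(expected_deliveries(25, 30))  # finishes before the timeout -> 1
```

The exact count in production depends on polling behavior and concurrency, but the shape of the problem is clear: any processing time longer than the visibility timeout means more than one delivery.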
Why It Matters
Duplicates can silently destroy trust and data integrity across your entire system:
- Financial Operations: Payments or transactions processed twice.
- Inventory Systems: Quantities double-counted or decremented twice.
- Notifications: Customers spammed with identical alerts.
- Analytics Pipelines: Event counts inflated.
These are hard to detect because every Lambda succeeds — there’s no visible failure, only duplicated work.
Key Terms
- At-least-once Delivery: AWS guarantees each message will be delivered one or more times — not exactly once.
- Visibility Timeout: The period an SQS message stays hidden from other consumers after one Lambda receives it.
- Deduplication ID: A unique token (for FIFO queues) that allows SQS to reject identical messages within a 5-minute window.
- Idempotency: Designing your system so duplicate requests produce no harmful side effects.
- Correlation ID: A unique identifier that traces a message or transaction across retries and invocations.
Steps at a Glance
- Detect duplicates through CloudWatch metrics and logs.
- Compare Lambda duration with queue visibility timeout.
- Use FIFO queues with MessageDeduplicationId.
- Make side effects idempotent (DynamoDB conditional writes, SQL upserts).
- Add Correlation IDs to detect duplicates downstream.
- Monitor retry and receive counts to identify patterns.
Step 1: Detect duplicates in CloudWatch
Run these two CloudWatch Metrics Insights queries (one per namespace) to compare message flow and Lambda activity:
-- Lambda invocation volume
SELECT SUM(Invocations) FROM SCHEMA("AWS/Lambda", FunctionName)

-- SQS messages sent (run as a separate query)
SELECT SUM(NumberOfMessagesSent) FROM SCHEMA("AWS/SQS", QueueName)
Look for: Periods where AWS/Lambda:Invocations rises faster than AWS/SQS:NumberOfMessagesSent.
A high ratio means old messages are being reprocessed.
Sample Output:
{
  "AWS/Lambda:Invocations": 1200,
  "AWS/SQS:NumberOfMessagesSent": 800
}
If you see spikes in Invocations without a corresponding rise in SQS message count, you’re likely experiencing duplicate deliveries.
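As a sketch (the queue and function names below are placeholders), you can pull both sums with boto3 and compute the ratio; anything persistently above 1.0 deserves a closer look:

```python
from datetime import datetime, timedelta, timezone

def duplicate_ratio(invocations: int, messages_sent: int) -> float:
    """Invocations per message sent; > 1.0 hints at redeliveries."""
    return invocations / max(messages_sent, 1)

def fetch_ratio(function_name: str, queue_name: str, hours: int = 1) -> float:
    """Pull both sums from CloudWatch (requires boto3 and AWS credentials)."""
    import boto3
    cw = boto3.client("cloudwatch")
    end = datetime.now(timezone.utc)

    def metric_sum(namespace, metric, dim_name, dim_value):
        resp = cw.get_metric_statistics(
            Namespace=namespace, MetricName=metric,
            Dimensions=[{"Name": dim_name, "Value": dim_value}],
            StartTime=end - timedelta(hours=hours), EndTime=end,
            Period=3600, Statistics=["Sum"],
        )
        return sum(p["Sum"] for p in resp["Datapoints"])

    inv = metric_sum("AWS/Lambda", "Invocations", "FunctionName", function_name)
    sent = metric_sum("AWS/SQS", "NumberOfMessagesSent", "QueueName", queue_name)
    return duplicate_ratio(int(inv), int(sent))

# Using the numbers from the sample output above:
print(duplicate_ratio(1200, 800))  # -> 1.5
```

A ratio of 1.5 means roughly one redelivery for every two messages — far too high to be retry noise.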
Step 2: Check your SQS visibility timeout
Inspect your queue configuration:
aws sqs get-queue-attributes \
  --queue-url https://sqs.us-west-2.amazonaws.com/123456789012/orders \
  --attribute-names VisibilityTimeout
Sample Output:
{
  "Attributes": {
    "VisibilityTimeout": "30"
  }
}
If your Lambda takes 60 seconds to process and your queue timeout is 30 seconds, the message will reappear halfway through processing.
📏 Rule of thumb:
Set VisibilityTimeout ≥ Function Timeout + 30% safety margin. For Lambda event source mappings, AWS goes further and recommends a visibility timeout of at least six times your function timeout, to cover batching and retry overhead.
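A small helper makes the rule of thumb concrete and applies it (the queue URL is a placeholder, and the boto3 call needs credentials):

```python
import math

def recommended_visibility_timeout(function_timeout_s: int, margin_pct: int = 30) -> int:
    """Function timeout plus a percentage safety margin, rounded up."""
    return function_timeout_s + math.ceil(function_timeout_s * margin_pct / 100)

def apply_timeout(queue_url: str, function_timeout_s: int) -> None:
    """Set the queue's VisibilityTimeout accordingly (requires boto3/credentials)."""
    import boto3
    boto3.client("sqs").set_queue_attributes(
        QueueUrl=queue_url,
        Attributes={"VisibilityTimeout": str(recommended_visibility_timeout(function_timeout_s))},
    )

print(recommended_visibility_timeout(60))  # 60s function timeout -> 78s
```

For the 60-second function from the example above, the queue's 30-second timeout should rise to at least 78 seconds.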
Step 3: Use FIFO queues with deduplication IDs
Switch to a FIFO queue and assign each message a deduplication token:
import json

import boto3

sqs = boto3.client('sqs')

# 'order' is your message payload dict, keyed by a stable OrderId
sqs.send_message(
    QueueUrl='https://sqs.us-west-2.amazonaws.com/123456789012/orders.fifo',
    MessageBody=json.dumps(order),
    MessageGroupId='orders',
    MessageDeduplicationId=order['OrderId']
)
SQS rejects any new message with the same MessageDeduplicationId received within five minutes.
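If you don't have a natural key like OrderId, one common option (sketched here) is to derive the deduplication ID from the message body itself — which is essentially what SQS does for you when content-based deduplication is enabled on the FIFO queue:

```python
import hashlib
import json

def content_dedup_id(payload: dict) -> str:
    """Stable SHA-256 of the canonical JSON body (64 hex chars,
    well under SQS's 128-character MessageDeduplicationId limit)."""
    body = json.dumps(payload, sort_keys=True)  # sort keys for a stable hash
    return hashlib.sha256(body.encode("utf-8")).hexdigest()

a = content_dedup_id({"OrderId": "123", "amount": 500})
b = content_dedup_id({"amount": 500, "OrderId": "123"})  # same data, reordered
print(a == b)  # -> True
```

Identical payloads always produce the same ID, so a duplicate send within the five-minute window is silently dropped by SQS.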
Step 4: Make side effects idempotent
In DynamoDB:
try:
    table.put_item(
        Item=order,
        ConditionExpression="attribute_not_exists(OrderId)"  # assumes 'OrderId' is your partition key!
    )
except table.meta.client.exceptions.ConditionalCheckFailedException:
    pass  # duplicate delivery — the record already exists, so do nothing
In SQL (PostgreSQL syntax shown):
INSERT INTO Orders (order_id, amount)
VALUES ('123', 500)
ON CONFLICT (order_id) DO NOTHING;
This guarantees that a repeated event can’t insert a duplicate record.
Step 5: Track with Correlation IDs
Propagate a CorrelationId through your workflow — log it, tag it, and include it in responses.
This allows downstream consumers or external systems to detect duplicates even after successful processing.
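A minimal sketch, assuming the ID travels in the message body under a correlation_id key (the field name is an arbitrary choice):

```python
import uuid

def ensure_correlation_id(message: dict) -> dict:
    """Reuse the incoming correlation_id if present; mint one otherwise."""
    message.setdefault("correlation_id", str(uuid.uuid4()))
    return message

first = ensure_correlation_id({"OrderId": "123"})
retry = ensure_correlation_id(dict(first))  # a redelivery carries the same ID
print(first["correlation_id"] == retry["correlation_id"])  # -> True
```

Because the ID survives retries unchanged, any downstream store can treat a repeated correlation_id as "already handled" — even when the duplicate arrives from a completely different invocation.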
Step 6: Monitor retry and receive metrics
Use the ApproximateReceiveCount attribute that SQS attaches to each message — it arrives on every record in your Lambda's SQS event — to track how often each message has been delivered.
A high ApproximateReceiveCount is an early warning that your function duration or timeout configuration may be misaligned.
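Inside the handler, the count is a string on each record's attributes; a sketch that flags anything past the first delivery (the warning threshold is an arbitrary choice):

```python
def receive_count(record: dict) -> int:
    """SQS event records carry ApproximateReceiveCount as a string."""
    return int(record["attributes"]["ApproximateReceiveCount"])

def lambda_handler(event, context):
    for record in event["Records"]:
        count = receive_count(record)
        if count > 1:  # this message has been delivered before
            print(f"WARNING: message {record['messageId']} received {count} times")

# Shape of a real SQS event record (trimmed to the relevant fields):
sample = {"messageId": "abc-123", "attributes": {"ApproximateReceiveCount": "3"}}
print(receive_count(sample))  # -> 3
```

Log the count on every invocation and alert on values above 1 — it is the cheapest duplicate detector you can deploy.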
Pro Tip #1: Duplicate != Retry
Retries are automatic recovery attempts.
Duplicates are symptoms of unconfirmed completion.
You can survive retries; you must defend against duplicates.
Pro Tip #2: Protect at the Edges
Every Lambda function that writes data, sends messages, or calls APIs should enforce idempotency at the boundary — where the effect leaves your control.
Conclusion
The Duplicate Delivery Dilemma isn’t an AWS bug — it’s the cost of safety.
When the system can’t confirm success, it tries again.
By combining tuned visibility timeouts, deduplication IDs, and idempotent operations, you transform a fragile at-least-once pipeline into a resilient, effectively-once system.
Because in distributed systems, doing something twice is easy.
Doing it safely is engineering.
Aaron Rose is a software engineer and technology writer at tech-reader.blog and the author of Think Like a Genius.