Insight: Can You Save on Redshift Snapshot Costs? Yes You Can...And Here's How


Understanding the Challenge

When running Redshift at scale, every penny counts—especially when it comes to long-term storage like manual snapshots. Naturally, folks start wondering: can we move those snapshots to cheaper S3 storage classes like Glacier? The question came up again recently, and it’s a smart one. The logic is simple: if snapshots live in S3, shouldn’t we be able to take advantage of S3’s storage tiers?


Why It Matters

Manual snapshots in Redshift are invaluable for point-in-time recovery, regulatory compliance, and protecting against accidental data loss. But they can also rack up surprising costs over time, particularly for large datasets or frequent snapshot schedules. AWS’s default behavior is to store these snapshots in the S3 Standard class, which prioritizes quick access over affordability. If you're retaining these for months—or even years—without regular access, that’s potentially a lot of money left on the table.


Current Limits and Workarounds

The reality today is this: Redshift does not allow you to move native manual snapshots into lower-cost S3 classes like Glacier or Deep Archive. These snapshots are fully managed by AWS, and the system keeps them in the Standard tier for consistency, availability, and rapid restoration.

But that doesn’t mean you're out of options. A common workaround is to export your data manually using UNLOAD to push your tables out to S3 in formats like CSV or Parquet. Once there, you’re in the driver's seat. You can apply S3 lifecycle policies that automatically shift the files to Glacier tiers after a certain number of days. This method won't preserve snapshot metadata or point-in-time recovery, but it does retain the data itself in a usable form.

Here’s a quick example of an UNLOAD command: 

SQL
UNLOAD ('SELECT * FROM sales')           -- query whose results get exported
TO 's3://your-bucket/snapshots/sales/'   -- S3 prefix for the output files
IAM_ROLE 'arn:aws:iam::123456789012:role/YourRedshiftRole'
DELIMITER ','
ADDQUOTES
ALLOWOVERWRITE
PARALLEL OFF;  -- write a single file instead of one file per slice
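
If you'd rather keep column types and get better compression, the same export can be written as Parquet. This is a sketch using the same placeholder bucket and role; note that Parquet unloads don't accept the CSV-specific options like DELIMITER or ADDQUOTES:

SQL
-- Columnar alternative: Parquet preserves column types and compresses well
UNLOAD ('SELECT * FROM sales')
TO 's3://your-bucket/snapshots/sales/'
IAM_ROLE 'arn:aws:iam::123456789012:role/YourRedshiftRole'
FORMAT AS PARQUET
ALLOWOVERWRITE;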

And a sample S3 lifecycle policy to transition to Glacier after 30 days: 

JSON
{
  "Rules": [
    {
      "ID": "MoveToGlacier",
      "Filter": { "Prefix": "snapshots/" },
      "Status": "Enabled",
      "Transitions": [
        {
          "Days": 30,
          "StorageClass": "GLACIER"
        }
      ]
    }
  ]
}
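
To attach that policy without clicking through the console, here's a minimal sketch using boto3; the bucket name is a placeholder, and credentials are assumed to come from your usual AWS configuration:

Python
# Attach the lifecycle rule above to the bucket programmatically.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="your-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "MoveToGlacier",
                "Filter": {"Prefix": "snapshots/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            }
        ]
    },
)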


Practical Advice

If you’re maintaining long-term snapshots primarily for compliance or retention—not rapid restore—then this hybrid approach may be your best friend. Automate the export, document your table structure, and schedule those transitions to cheaper storage. It's not perfect, but it strikes a balance between cost and control.
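
As a starting point for that automation, here's a minimal sketch that unloads every table in a schema to Parquet. The connection details, bucket, and role ARN are placeholders, and it assumes psycopg2 for connectivity; the bulk-export tool linked below takes this idea further:

Python
# Minimal sketch: UNLOAD every table in a schema to Parquet files in S3.
import psycopg2

conn = psycopg2.connect(
    host="your-cluster.example.us-east-1.redshift.amazonaws.com",  # placeholder
    port=5439,
    dbname="dev",
    user="awsuser",
    password="your-password",
)
conn.autocommit = True  # run each UNLOAD outside an explicit transaction

with conn.cursor() as cur:
    # List the base tables in the target schema.
    cur.execute(
        "SELECT table_name FROM information_schema.tables "
        "WHERE table_schema = 'public' AND table_type = 'BASE TABLE'"
    )
    for (table,) in cur.fetchall():
        cur.execute(
            f"UNLOAD ('SELECT * FROM public.{table}') "
            f"TO 's3://your-bucket/snapshots/{table}/' "
            "IAM_ROLE 'arn:aws:iam::123456789012:role/YourRedshiftRole' "
            "FORMAT AS PARQUET ALLOWOVERWRITE"
        )

conn.close()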

And for teams with evolving needs, keep an eye on AWS announcements. Redshift snapshot flexibility is an area many have flagged for improvement. You wouldn’t be alone in hoping for native Glacier support in the future.


Update: For teams dealing with large-scale Redshift exports, we’ve released a Python tool that bulk-exports all tables in a schema to S3 using UNLOAD. You can find it here: Bulk Redshift Table Export to S3.


Need AWS Expertise?

We'd love to help you with your AWS projects. Feel free to reach out to us at info@pacificw.com.


Written by Aaron Rose, software engineer and technology writer at Tech-Reader.blog.
