LocalStack S3 CRUD Lab: A Hands-On Guide to Buckets and Objects
Learn AWS S3 operations locally—complete with real outputs, deliberate errors, and the satisfaction of mastering cloud storage without spending a dime.
What You'll Build
By the end of this lab, you'll have manually performed all core S3 operations in a local AWS environment: creating buckets, uploading files, reading objects, updating content, and deleting resources. More importantly, you'll understand what happens when things go wrong—and how to fix them.
Why LocalStack? Testing against real AWS costs money and risks production mishaps. LocalStack gives you a full-featured local cloud environment where you can experiment freely, break things safely, and learn S3 operations before touching live infrastructure.
Prerequisites
Before diving in, ensure you have:
- A Python virtual environment for your LocalStack project (e.g., ~/envs/localstack-lab)
- LocalStack and AWS CLI installed and configured
- Docker running in the background (LocalStack's engine)
If you haven't set these up yet, follow the installation guide:
👉 How to Install LocalStack and the AWS CLI on Your System
Once ready, open two terminal windows—one to run LocalStack, one to execute commands.
Terminal 1: Starting Your Local Cloud
In your first terminal, activate your environment and launch LocalStack:
cd ~/envs/localstack-lab
source bin/activate
localstack start
Expected Output
You'll see LocalStack's ASCII art banner followed by startup logs:
     __                     _______ __             __
    / /   ____  _________ _/ / ___// /_____ ______/ /__
   / /   / __ \/ ___/ __ `/ /\__ \/ __/ __ `/ ___/ //_/
  / /___/ /_/ / /__/ /_/ / /___/ / /_/ /_/ / /__/ ,<
 /_____/\____/\___/\__,_/_//____/\__/\__,_/\___/_/|_|
LocalStack version: 3.2.1
Docker container id: 8e3a6a7b44c7
Build date: 2025-10-12
Build git hash: 4a9bdf1f
Starting edge router (https port 4566)...
Starting mock services: s3
Waiting for all LocalStack services to be ready
2025-10-12T23:59:59.123 INFO --- Ready.
That final line—INFO --- Ready.—is your green light. Your local AWS environment is live.
Keep this terminal open. It will display real-time logs of every S3 operation you perform, giving you insight into what's happening under the hood.
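Want a quick sanity check before moving on? Recent LocalStack releases expose a health endpoint on the edge port. A minimal check, assuming the default port 4566:

# Query LocalStack's built-in health endpoint
curl -s http://localhost:4566/_localstack/health

It returns a JSON map of services and their states; look for s3 reporting available or running.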
Terminal 2: Your Command Center
In your second terminal, activate the same environment:
cd ~/envs/localstack-lab
source bin/activate
This is where you'll execute all your S3 commands. The awslocal wrapper is a convenience tool that points AWS CLI commands to LocalStack instead of real AWS.
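Under the hood, awslocal just fills in the endpoint for you. If you'd rather use the plain AWS CLI, pointing it at LocalStack's edge port is equivalent (assuming the default port 4566):

# Same request awslocal would send, spelled out by hand
aws --endpoint-url=http://localhost:4566 s3 ls

Both commands hit the same local service; awslocal simply saves you the flag.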
Step 1: Create an S3 Bucket
Every S3 object lives in a bucket. Let's create one:
awslocal s3 mb s3://demo-bucket
Output:
make_bucket: demo-bucket
Verify it exists:
awslocal s3 ls
Output:
2025-10-12 23:59:59 demo-bucket
What just happened? You created a storage container in your local S3 service. In production AWS, this would be a globally unique namespace, but LocalStack runs isolated on your machine.
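For the curious: the mb shortcut above could equally have been written with the lower-level s3api interface, which maps one-to-one onto raw S3 API calls:

# Roughly equivalent to "awslocal s3 mb s3://demo-bucket"
awslocal s3api create-bucket --bucket demo-bucket
awslocal s3api list-buckets

The s3 commands are high-level conveniences; the s3api commands are handy when you need to see the exact request and response.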
Step 2: Create and Upload Files (CREATE)
Let's generate some test files with varying extensions:
echo "Hello, JPG World" > image.jpg
echo "Sample report text" > report.pdf
echo "Draft document" > notes.docx
echo "Spreadsheet data" > data.xlsx
echo "print('Hello, LocalStack!')" > script.py
Now upload each file to your bucket:
awslocal s3 cp image.jpg s3://demo-bucket/
awslocal s3 cp report.pdf s3://demo-bucket/
awslocal s3 cp notes.docx s3://demo-bucket/
awslocal s3 cp data.xlsx s3://demo-bucket/
awslocal s3 cp script.py s3://demo-bucket/
Output:
upload: ./image.jpg to s3://demo-bucket/image.jpg
upload: ./report.pdf to s3://demo-bucket/report.pdf
upload: ./notes.docx to s3://demo-bucket/notes.docx
upload: ./data.xlsx to s3://demo-bucket/data.xlsx
upload: ./script.py to s3://demo-bucket/script.py
List your bucket contents:
awslocal s3 ls s3://demo-bucket/
Output:
2025-10-12 23:59:59 17 data.xlsx
2025-10-12 23:59:59 17 image.jpg
2025-10-12 23:59:59 15 notes.docx
2025-10-12 23:59:59 19 report.pdf
2025-10-12 23:59:59 28 script.py
Notice the file sizes (in bytes) and timestamps. S3 tracks metadata for every object automatically.
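S3 will also hand that metadata back on request. To inspect a single object's size, timestamp, ETag, and content type, use s3api's head-object call:

# Fetch an object's metadata without downloading its body
awslocal s3api head-object --bucket demo-bucket --key script.py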
Step 3: Download and Read Files (READ)
Retrieve an object from S3 and inspect its contents:
awslocal s3 cp s3://demo-bucket/script.py ./script_downloaded.py
cat script_downloaded.py
Output:
download: s3://demo-bucket/script.py to ./script_downloaded.py
print('Hello, LocalStack!')
⚠️ Important behavior: If you download to a filename that already exists locally, the AWS CLI will silently overwrite it. There's no confirmation prompt—the new version replaces the old one instantly.
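If that behavior worries you, guard the copy yourself. A minimal shell sketch, using this lab's filename as the example:

# Only download if the local file doesn't already exist
[ -e script_downloaded.py ] || awslocal s3 cp s3://demo-bucket/script.py ./script_downloaded.py

The [ -e ... ] test short-circuits the download whenever the local file is already present.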
Step 4: Update Files (UPDATE)
In S3, "updating" means uploading a new version of an object with the same key (filename). The old version is replaced.
Modify script.py locally and re-upload:
echo "print('Updated LocalStack script!')" > script.py
awslocal s3 cp script.py s3://demo-bucket/
Output:
upload: ./script.py to s3://demo-bucket/script.py
Verify the update worked:
awslocal s3 cp s3://demo-bucket/script.py ./script_check.py
cat script_check.py
Output:
print('Updated LocalStack script!')
The object in S3 now contains the new content. Without versioning enabled, the previous version is gone forever.
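If you want S3 to keep prior versions, bucket versioning is the feature to reach for. A sketch, offered as an aside to this lab's flow (LocalStack's versioning support generally mirrors AWS):

# Turn on versioning, then list every stored version of each key
awslocal s3api put-bucket-versioning --bucket demo-bucket --versioning-configuration Status=Enabled
awslocal s3api list-object-versions --bucket demo-bucket

With versioning enabled, re-uploading a key adds a new version instead of destroying the old one.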
Step 5: Delete Resources (DELETE)
Deletion in AWS follows strict rules. Let's learn them through experience.
Attempt 1: Delete the Bucket First (This Will Fail)
Try removing the bucket while it still contains files:
awslocal s3 rb s3://demo-bucket
Output:
remove_bucket failed: s3://demo-bucket An error occurred (BucketNotEmpty)
when calling the DeleteBucket operation: The bucket you tried to delete is not empty
💥 Perfect! This error is AWS protecting you from accidentally deleting data. You must empty a bucket before you can remove it.
Attempt 2: Delete Objects One by One
Remove each file individually:
awslocal s3 rm s3://demo-bucket/script.py
awslocal s3 rm s3://demo-bucket/image.jpg
awslocal s3 rm s3://demo-bucket/report.pdf
awslocal s3 rm s3://demo-bucket/notes.docx
awslocal s3 rm s3://demo-bucket/data.xlsx
Output:
delete: s3://demo-bucket/script.py
delete: s3://demo-bucket/image.jpg
delete: s3://demo-bucket/report.pdf
delete: s3://demo-bucket/notes.docx
delete: s3://demo-bucket/data.xlsx
Verify the bucket is empty:
awslocal s3 ls s3://demo-bucket/
Output:
(no output—the bucket is empty)
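Deleting objects one by one teaches the rule, but for day-to-day cleanup the CLI offers shortcuts that do the emptying for you (shown here for reference; this lab continues the manual route):

# Delete every object under the bucket, then remove it
awslocal s3 rm s3://demo-bucket/ --recursive
# Or empty and remove the bucket in a single command
awslocal s3 rb s3://demo-bucket --force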
Attempt 3: Delete the Empty Bucket (Success)
Now that the bucket is empty, deletion will succeed:
awslocal s3 rb s3://demo-bucket
Output:
remove_bucket: demo-bucket
Final verification:
awslocal s3 ls
Output:
(no output—no buckets exist)
Lab Results Summary
| Operation | Command | Result | Key Lesson |
|---|---|---|---|
| Create Bucket | awslocal s3 mb s3://demo-bucket | ✅ Success | Buckets are storage namespaces |
| Upload Files | awslocal s3 cp file s3://demo-bucket/ | ✅ Success | Objects stored with metadata |
| Download File | awslocal s3 cp s3://demo-bucket/file . | ✅ Success | Local files silently overwritten |
| Update File | Re-upload with same key | ✅ Success | Previous version replaced |
| Delete Non-Empty Bucket | awslocal s3 rb s3://demo-bucket | ❌ Failed | Must empty bucket first |
| Delete Objects | awslocal s3 rm s3://demo-bucket/file | ✅ Success | Individual deletion required |
| Delete Empty Bucket | awslocal s3 rb s3://demo-bucket | ✅ Success | Clean deletion once emptied |
What You Learned
Through hands-on practice, you've mastered:
- Bucket lifecycle management: Creating and deleting S3 storage containers
- Object operations: Uploading, downloading, and updating files in S3
- Error handling: Understanding AWS's safety mechanisms (like BucketNotEmpty)
- S3 behavior patterns: How overwrites work, metadata tracking, and deletion requirements
- LocalStack workflow: Running a local AWS environment for risk-free experimentation
Observer's Log: Terminal 1 Activity
Back in Terminal 1, LocalStack logged every operation:
2025-10-12T23:59:59.123 INFO --- Ready.
2025-10-12T23:59:59.501 INFO --- s3: create bucket demo-bucket
2025-10-12T23:59:59.905 INFO --- s3: PUT /demo-bucket/image.jpg
2025-10-12T23:59:59.906 INFO --- s3: PUT /demo-bucket/report.pdf
2025-10-12T23:59:59.907 INFO --- s3: GET /demo-bucket/script.py
2025-10-12T23:59:59.908 INFO --- s3: PUT /demo-bucket/script.py
2025-10-12T23:59:59.909 WARN --- s3: delete bucket request failed (BucketNotEmpty)
2025-10-12T23:59:59.910 INFO --- s3: DELETE /demo-bucket/script.py
2025-10-12T23:59:59.912 INFO --- s3: DELETE bucket demo-bucket
2025-10-12T23:59:59.913 INFO --- All services idle. Standing by.
These logs show the HTTP methods behind each S3 command—PUT for uploads, GET for downloads, DELETE for removals. Understanding these underlying operations helps when debugging or working with S3 APIs directly.
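You can even watch one of those raw HTTP exchanges yourself. LocalStack typically accepts unauthenticated, path-style requests on the edge port (an assumption about default local settings; real AWS would reject an unsigned request), so while a bucket and object still exist you could run:

# -i prints the response headers alongside the body
curl -i http://localhost:4566/demo-bucket/script.py

In the headers you'll see the same metadata (ETag, Content-Length) that head-object reported earlier.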
Clean Shutdown
When you're finished, shut down gracefully:
In Terminal 1, press Ctrl + C to stop LocalStack:
^C
2025-10-12T23:59:59.950 INFO --- Shutting down LocalStack
2025-10-12T23:59:59.953 INFO --- All services stopped.
Then deactivate and close:
deactivate
exit
In Terminal 2, deactivate and exit:
deactivate
exit
Next Steps
Now that you've mastered basic S3 operations, in future articles we'll explore:
- Batch operations: Use aws s3 sync to upload or download entire directories
- Programmatic access: Write Python scripts using boto3 to automate S3 tasks
- Advanced features: Experiment with S3 versioning, lifecycle policies, or bucket policies
- Other AWS services: LocalStack supports DynamoDB, Lambda, SQS, and more
The beauty of LocalStack is that you can break things, experiment wildly, and learn without consequences. Everything resets when you restart the container.
Wrapping Up
You've built, read, updated, failed gloriously, and succeeded triumphantly—all by hand, watching each operation execute in real time. Automation is powerful, but there's deep value in understanding the manual process first. Now when you write infrastructure-as-code or use SDKs, you'll know exactly what's happening behind the abstractions.
Your local cloud stood up, served faithfully, and shut down cleanly—all on your command. That's the kind of control and understanding that turns AWS from intimidating to empowering.
Happy cloud building! ☁️
Aaron Rose is a software engineer and technology writer at tech-reader.blog and the author of Think Like a Genius.