The Secret Life of AWS: S3, EC2, and SQS

The foundation of AWS storage, compute, and messaging explained

#AWS #CloudComputing #AWSSeries




Margaret is a senior software engineer. Timothy is her junior colleague. They meet in a grand Victorian library in London — and in every episode, they work through the tools, ideas, and infrastructure that power modern software. Today, Timothy meets the services that started everything.

Episode 3

Timothy arrived early, which told Margaret everything she needed to know about his mood. He had his notebook open before he sat down, and there was a particular set to his jaw that she recognized — the expression of someone who had done preparatory reading and arrived with objections.

She found it encouraging.

"You've been reading," she said.

"I have." He uncapped his pen. "And I want to say something before we start."

"Go ahead."

"S3 is a place to store files. EC2 is a virtual server. SQS is a message queue." He looked at her steadily. "I have used all three of these things before. Not on AWS — but the concepts are not new to me. I have used a file system. I have run servers. I have used queues." He paused. "So my question, before we begin, is this — what am I actually learning today? Because from where I sit, these look like things I already know, wearing different names."

Margaret regarded him for a long moment.

"That," she said, "is the most useful objection you could have walked in with. Hold onto it. We'll come back to it at the end."


SQS — The Problem of Two Systems Talking

"We'll start where AWS started," Margaret said. "Not S3, not EC2 — SQS. The queue. And I want to start there because most people skip it, which is a mistake, because it solves a problem that every distributed system eventually hits."

"Two things need to talk to each other," Timothy said.

"Two things need to talk to each other," Margaret agreed. "And the naive solution — the one everyone reaches for first — is to have them talk directly. Service A calls Service B. Service B responds. Simple, fast, obvious." She paused. "What happens when Service B is slow?"

"Service A waits."

"Service A waits. And if Service A is handling a user request while it waits?"

"The user waits."

"And if Service B goes down entirely?"

Timothy saw where it was going. "Service A fails. The whole thing fails together."

"Tightly coupled," Margaret said. "Two systems that are functionally one system, because the failure of either brings down both. This is not a theoretical problem. This is the problem that takes down applications at the worst possible moment — under load, when both services are being asked to do the most work, precisely when the queue between them would have helped the most." She folded her hands. "SQS is a buffer. Service A puts a message in the queue and moves on. Service B takes messages from the queue at whatever pace it can manage. The two systems no longer need to be available at the same time, running at the same speed, or even aware of each other beyond the queue itself."
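The shape of that buffer can be sketched in a few lines. This is a toy in-memory model of the pattern Margaret describes, not the real SQS API (which you would reach through an SDK such as boto3); the point is only that the producer finishes immediately and the consumer drains at its own pace:

```python
from collections import deque

class MessageQueue:
    """Toy stand-in for a managed queue such as SQS."""

    def __init__(self):
        self._messages = deque()

    def send(self, body):
        # Service A enqueues and returns immediately. It never waits
        # on the consumer, and never needs to know it exists.
        self._messages.append(body)

    def receive(self):
        # Service B pulls at whatever pace it can manage. An empty
        # queue means "nothing to do right now", not a failure.
        return self._messages.popleft() if self._messages else None

queue = MessageQueue()

# Service A: fast producer. It finishes instantly even if B is down.
for order_id in ("A-100", "A-101", "A-102"):
    queue.send(f"process order {order_id}")

# Service B: slow consumer. It drains the backlog whenever it is up.
while (message := queue.receive()) is not None:
    print(message)
```

The real service adds what the toy leaves out: durable storage, visibility timeouts so a crashed consumer's messages reappear, and scaling, which is exactly the operational burden Margaret returns to below.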

"Decoupling," Timothy said.

"Decoupling. One of the most valuable words in distributed systems." She paused. "Now — you said you've used queues before. And you have. The concept is not new. But here is the question I want you to sit with: when you built a queue yourself, who managed the infrastructure underneath it? Who ensured it didn't lose messages if the host went down? Who scaled it when message volume spiked? Who was responsible for it at three in the morning?"

Timothy was quiet.

"You did. Or someone on your team did." She looked at him steadily. "SQS is not a new concept. It is a managed, durable, scalable implementation of a concept you already understand — one where Amazon handles the infrastructure and you handle the logic. That distinction matters more than it sounds."

"So the concept is mine. The operational burden isn't."

"That is the value proposition of every managed AWS service, stated plainly." She nodded once. "Remember it. You'll hear it again."


S3 — Not a File System

"S3 next," Margaret said. "And here I want to be careful, because this is where the familiar concept becomes a trap."

Timothy looked up from his notes.

"You said S3 is a place to store files. That is true. But the word files carries an assumption that will mislead you." She reached for her pen. "When you think of a file system — the one on your laptop, the one on a server — what do you picture?"

"Folders," Timothy said. "A hierarchy. Files inside folders inside folders."

"A tree structure. You navigate it. You move files between folders. You think of a file's location as its address — this file lives here, in this folder, on this drive." She set the pen down. "S3 has none of that. Not really. There are no folders. There is no hierarchy. There is a bucket — think of it as a flat container — and inside it there are objects. Every object has a key, which is simply a name. The key can look like a path — photos/2024/january/sunset.jpg — but that slash is not a folder separator. It is just a character in a name."

Timothy frowned. "That feels like a meaningful difference."

"It is. And it changes how you interact with it. You do not navigate S3 the way you navigate a file system. You retrieve objects by their key. You list objects by prefix. The mental model is closer to a very large, very durable key-value store than it is to a hard drive." She paused. "Why does this matter practically? Because developers who treat S3 like a file system write code that fights the tool. They try to move files, rename folders, build hierarchies that the service was never designed to support. They get confused when their intuitions fail them."
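That mental model is easy to demonstrate: a flat dict with string keys behaves more like an S3 bucket than a directory tree does. The keys and helper below are illustrative, not the S3 API, but prefix listing is genuinely how S3's ListObjects behaves:

```python
# A bucket is a flat mapping from keys to objects. The slashes in
# these keys are just characters; there is no folder object anywhere.
bucket = {
    "photos/2024/january/sunset.jpg": b"...",
    "photos/2024/january/harbor.jpg": b"...",
    "photos/2024/february/snow.jpg":  b"...",
    "logs/2024-01-01.txt":            b"...",
}

# Retrieval is by exact key, not by navigating into folders.
obj = bucket["photos/2024/january/sunset.jpg"]

def list_by_prefix(bucket, prefix):
    """'Listing a folder' is really listing keys that share a prefix."""
    return sorted(k for k in bucket if k.startswith(prefix))

print(list_by_prefix(bucket, "photos/2024/january/"))

# Note what is missing: there is no way to "rename a folder" in place.
# You would copy every object under the old prefix to a new key and
# delete the originals, which is why the file-system instinct hurts.
```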

"So what is it actually good at?"

"Storing any amount of data — individual objects up to five terabytes, and the bucket itself effectively limitless — with extraordinary durability. S3 is designed for eleven nines of durability — 99.999999999 percent — a figure most teams could not replicate themselves. It scales without any intervention from you. It costs almost nothing at small scale, and its cost grows linearly as you grow." She looked at him. "It is also the backbone of an enormous number of AWS services. Data pipelines, machine learning training sets, application assets, backups, static websites, log archives — S3 is underneath almost everything. Learning it correctly is not optional."

"And the file system instinct —"

"Set it aside entirely," Margaret said. "S3 is not a file system. It is an object store. The sooner that distinction lives in your hands rather than just your head, the better."


EC2 — The Server That Isn't

"EC2 last," Margaret said. "And this is the one where your objection is strongest — because a virtual server really does look, from the outside, exactly like a server. You SSH into it. You install software. You run processes. It has an IP address and a file system and a processor." She paused. "So why is it different?"

"I'm listening."

"Because it is not a server. It is the idea of a server, made temporarily concrete." She let that sit for a moment. "A physical server has a fixed location, fixed hardware, and a lifecycle measured in years. An EC2 instance has a type — a specific combination of CPU, memory, and storage characteristics — and that type can be changed. It can be stopped. It can be terminated. It can be replaced with an identical copy in minutes. It exists in a region, in an availability zone, but the specific hardware underneath it is not yours and never was."

"Ephemeral," Timothy said.

"Ephemeral is the right word. And this is where the lift-and-shift instinct causes the most damage." She leaned forward slightly. "Developers who treat EC2 instances like physical servers build systems that depend on those instances being permanent. They store data on the instance itself — in local storage that disappears when the instance terminates. They configure the instance manually, by hand, so that if it fails and needs to be replaced, they cannot reproduce the configuration." She paused. "They log into it to fix things. They grow attached to it. They give it a name."

Timothy smiled. "People name their servers."

"People name their servers," Margaret agreed, with the faint tone of someone who had witnessed this personally and not forgotten it. "In cloud thinking, a server is not a pet. It is livestock. You do not name livestock. You manage the herd. If one is unhealthy, you replace it. You do not nurse it back to health at two in the morning." She paused. "You configure a template that launches a replacement automatically when one fails. The herd stays healthy without you losing sleep over any individual animal."

"Pets versus livestock," Timothy said. "I've heard that framing before."

"It is a useful framing because it is slightly uncomfortable, which means it sticks." She settled back. "The practical implication is this: anything that matters must live outside the instance. Your data in S3 or a database. Your configuration in code. Your instance should be reproducible from scratch, automatically, without human intervention. If it cannot be, you have built a pet, and pets are expensive and fragile and will wake you up at night."
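The herd idea reduces to a small sketch: a template that can stamp out identical instances, and a reconciliation loop that replaces unhealthy ones rather than repairing them. The names here are illustrative stand-ins, not the EC2 API, but the shape mirrors launch templates and auto-recovery:

```python
import itertools
from dataclasses import dataclass, field

@dataclass(frozen=True)
class LaunchTemplate:
    """Everything needed to reproduce an instance from scratch."""
    instance_type: str
    image_id: str
    user_data: str  # bootstrap script: configuration in code, not by hand

_ids = itertools.count(1)

@dataclass
class Instance:
    template: LaunchTemplate
    instance_id: str = field(default_factory=lambda: f"i-{next(_ids):04d}")
    healthy: bool = True

def reconcile(fleet, template, desired):
    """Keep the herd at size: cull the unhealthy, launch replacements.

    No instance is special. Any member can be terminated and recreated
    from the template at any time, so none is worth naming.
    """
    survivors = [i for i in fleet if i.healthy]
    while len(survivors) < desired:
        survivors.append(Instance(template))
    return survivors

template = LaunchTemplate("t3.micro", "ami-example", "#!/bin/sh\n./boot.sh")
fleet = reconcile([], template, desired=3)
fleet[0].healthy = False                      # one animal gets sick...
fleet = reconcile(fleet, template, desired=3)  # ...and is simply replaced
```

Notice that nothing valuable lives on an instance: the template carries the configuration, so a replacement is indistinguishable from the original.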


The Objection, Revisited

Timothy had been quiet for a moment, looking at his notes.

"You told me to hold onto my objection," he said. "That these are familiar concepts wearing different names."

"I did."

"I think I understand the answer now. But I want to say it myself."

Margaret waited.

"The concepts are mine," he said slowly. "Queues, object storage, virtual compute — I knew what these were before I walked in. What I didn't know was what it costs to build and operate them yourself, at scale, reliably, with redundancy and monitoring and failover." He paused. "When Amazon offers SQS, they're not selling me the idea of a queue. They're selling me the operational work I would otherwise have to do myself. The infrastructure work that isn't my product." He looked at her. "The concept is familiar. The cost of ownership is what's new."

Margaret was quiet for a moment.

"Yes," she said. "That is precisely it." She picked up her book. "And once you understand that — once you genuinely feel it rather than just repeat it — you start evaluating every AWS service through that lens. Not what does this do, but what does this spare me from doing. That question will serve you for the rest of our discussions."


Before Next Time

He gathered his things, notebook closed, pen capped.

"Pets versus livestock," he said, at the door.

"It stays with you," Margaret said, without looking up.

"Uncomfortably so." He paused. "I have named servers."

"Most people have," she said. "The ones who haven't usually started in the cloud and never had to unlearn it."

He nodded once — the particular nod of someone filing something away permanently — and left.

She sat for a moment in the quiet of the library, thinking that the hardest lessons were always the ones that arrived wearing familiar clothes.


Next episode: The AWS Console and the CLI — two ways into the same house, and why the one that feels easier will eventually cost you. Timothy logs in for the first time, and Margaret has opinions about where he clicks.


Aaron Rose is a software engineer and technology writer at tech-reader.blog
