The Secret Life of AWS: Regions and Availability Zones
Where your code actually lives — and why it matters
#AWS #CloudComputing #Regions #AvailabilityZones
Margaret is a senior software engineer. Timothy is her junior colleague. They meet in a grand Victorian library in London — and in every episode, they work through the tools, ideas, and infrastructure that power modern software. Today, Timothy discovers that the cloud has an address.
Episode 6
Timothy arrived early, notebook already open on the table, pen uncapped and waiting. He had the focused stillness of someone who had been thinking about a question for several days and had finally decided to ask it.
"I built something last week," he said. "Small — a simple API, nothing production, just practice. And I noticed something." He looked at her. "The region selector. Top right corner of the Console. I had selected US East — Northern Virginia — because it was the default. Because it was there." He paused. "Should I have chosen something else?"
Margaret regarded him with the expression she reserved for questions that were better than the person asking them knew.
"That," she said, "is one of the most important questions you can ask before you build anything on AWS. And the fact that most people never ask it — that they accept the default and move on — is the source of a remarkable number of problems." She closed her book. "Tell me — where are your users?"
Timothy blinked. "I don't have users yet. It was a practice project."
"Humor me. If this were real — where would the people using your API be located?"
"Here," he said. "London. Europe, mostly."
"And your API is running in Northern Virginia."
He saw it immediately. "That's — a long way away."
"Approximately three and a half thousand miles," Margaret said. "Every request your European user makes travels across the Atlantic, is processed in Virginia, and the response travels back. The speed of light is fast. It is not, however, instantaneous. And the laws of physics do not make exceptions for good intentions or clever code."
Latency Is Physics
"Let's be precise about what latency actually is," Margaret said. "Because it is frequently treated as a software problem when it is, at its foundation, a physics problem."
"The time it takes data to travel," Timothy said.
"The time it takes data to travel from one point to another — and back. Round trip." She folded her hands. "Light travels through fiber optic cable at roughly two thirds of its speed in a vacuum — about two hundred thousand kilometers per second. The cable route from London to Northern Virginia is roughly six and a half thousand kilometers. At that speed, the theoretical minimum round trip time is around sixty-five milliseconds. In practice, with routing, processing, and network overhead — closer to eighty or ninety."
"That doesn't sound like much."
"For a single request, it isn't. For an application that makes twelve sequential API calls to render a page — it compounds." She looked at him. "Eighty milliseconds per call, twelve calls, not parallelized — nearly a second of latency before your application has done a single thing. Before a database query. Before any computation." She paused. "Users notice one hundred milliseconds. They abandon at three seconds. Latency is not an academic concern."
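Margaret's arithmetic fits in a few lines. A back-of-the-envelope sketch in Python, using rough figures for fiber speed and cable distance rather than measurements:

```python
# Back-of-the-envelope latency model. The constants are rough
# assumptions, not measurements of any real link.
SPEED_IN_FIBER_KM_S = 200_000   # about two thirds of c in a vacuum
CABLE_DISTANCE_KM = 6_500       # approximate London <-> N. Virginia cable route

def round_trip_ms(distance_km: float,
                  speed_km_s: float = SPEED_IN_FIBER_KM_S) -> float:
    """Theoretical minimum round-trip time in milliseconds."""
    return 2 * distance_km / speed_km_s * 1000

def sequential_latency_ms(calls: int, rtt_ms: float) -> float:
    """Total network latency for `calls` API calls made one after another."""
    return calls * rtt_ms

rtt = round_trip_ms(CABLE_DISTANCE_KM)      # theoretical floor, ~65 ms
total = sequential_latency_ms(12, 80)       # 12 calls at a realistic 80 ms each
print(f"minimum RTT: {rtt:.0f} ms, 12 sequential calls: {total:.0f} ms")
```

Twelve sequential calls at eighty milliseconds each is nearly a second of pure travel time, exactly as she says. Parallelizing the calls, or moving the region closer to the users, is what brings the number down.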
"So the region should be close to the users."
"The region should be as close to the users as practicable, yes. AWS has regions on every inhabited continent. London — eu-west-2. Frankfurt — eu-central-1. Singapore, Sydney, Tokyo, São Paulo, Cape Town." She tilted her head. "The world is covered. The default is not always wrong — but it is never automatically right. The first question before you deploy anything is always: where are the people who will use this?"
What a Region Actually Is
"But a region is not a single building," Margaret said. "And this is where the architecture becomes interesting."
Timothy looked up from his notes.
"A region is a geographic area — eu-west-2 is based in London. But within that region there are multiple availability zones. In most regions, three. Sometimes more." She paused. "An availability zone is a discrete data center — or a cluster of data centers close together — with its own power, its own cooling, its own network connections. Physically separate from the other availability zones in the same region. Close enough that the network latency between them is low — single digit milliseconds. Far enough apart that a flood, a fire, a power failure, a construction crew cutting through a cable does not affect more than one of them."
"So eu-west-2 is London — but it's London spread across three separate physical locations."
"Three locations that are independent enough to fail separately," Margaret said. "That distinction is everything. If you deploy your application in a single availability zone and that zone has a problem — your application is down. If you deploy across three availability zones and one has a problem — your application continues running in the other two. Your users notice nothing."
Timothy sat with that for a moment. "That seems like an obvious thing to do."
"It is obvious in retrospect. It is frequently skipped in practice — because it requires more thought, marginally more cost, and slightly more complex configuration. And because in development and testing, it doesn't matter. The habit of single-zone deployment forms early and persists." She looked at him. "In practice, the pattern looks like this — an auto-scaling group with instances spread across availability zones, a load balancer that routes traffic to all of them, and a database cluster with a primary and replicas in separate zones. Each piece independent enough that losing one zone loses nothing visible to the user." She paused. "The question to ask yourself before you deploy anything significant is not whether you can survive an availability zone failure. It is whether you have made it possible to survive one."
One Account, Many Regions
"A practical note," Margaret said. "Your AWS account exists globally. When you create an IAM user or an IAM role, it exists across all regions — IAM is a global service. But most AWS resources are regional. An EC2 instance in eu-west-2 is not visible when you look at eu-west-1. An S3 bucket created in us-east-1 lives in us-east-1 unless you explicitly configure replication." She paused. "This trips people up regularly. They create a resource, switch regions in the Console, and cannot find it. It has not disappeared. It is simply somewhere else."
Timothy smiled in recognition. "I did that last week."
"Everyone does it last week," Margaret said. "The Console remembers which region you last used, but it does not prevent you from switching. The resource is still there — in the region where you created it. The habit to build is to always be conscious of which region you are working in before you create anything."
"Check the top right corner."
"Check the top right corner," she said. "Every time. Before every action that creates a resource. It is a small habit with an outsized return."
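The regional scoping behind that habit can be modeled in a few lines. A toy in-memory sketch, with a made-up instance ID, showing why a resource seems to vanish when you switch the region selector:

```python
# Toy model of an AWS account: most resources are keyed by region.
# The instance ID is invented for illustration.
account = {
    "eu-west-2": {"instances": ["i-0abc123"]},  # created in London
    "eu-west-1": {"instances": []},             # Ireland: nothing here
}

def list_instances(region: str) -> list[str]:
    """What the Console shows after you switch the region selector."""
    return account.get(region, {}).get("instances", [])

print(list_instances("eu-west-2"))  # ['i-0abc123']
print(list_instances("eu-west-1"))  # [] -- not gone, just elsewhere
```

The real SDKs make the same scoping explicit: a client is bound to one region, and listing resources through it shows only that region's slice of the account.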
The Day eu-west-1 Had a Very Bad Day
"Let me give you a concrete example," Margaret said. "Not hypothetical — real."
Timothy set down his pen.
"In December 2021, AWS eu-west-1 — the Ireland region — experienced a significant disruption. A power event in one availability zone triggered a cascade that affected a broader set of services than a single-zone failure should have. Applications that were deployed across multiple availability zones continued to run. Applications that were deployed in the affected zone, or that had dependencies concentrated there, went down." She paused. "It was not an apocalypse. It was a few hours. But for the applications that went down — and their users — a few hours is not nothing."
"And the ones that stayed up —"
"Had followed the architecture. Had spread their resources. Had treated availability zone failure as a possibility worth designing for rather than a risk worth ignoring." She looked at him evenly. "AWS has a strong track record. It is not a perfect track record. No infrastructure has a perfect track record. The question is never whether something will go wrong. The question is whether you have built something that can absorb it."
"Resilience," Timothy said.
"Resilience. The ability to continue functioning in the presence of failure rather than depending on the absence of it." She picked up her pen. "This is a different way of thinking about systems than most developers start with. We build things to work. Resilience requires also building things to fail gracefully."
The Third Dimension — Compliance
"Performance and resilience are engineering concerns," Margaret said. "There is a third reason that region selection matters — and it is not engineering. It is legal."
Timothy looked up.
"Data residency," she said. "The principle that certain data — personal data, financial data, health data, government data — must be stored and processed within specific geographic boundaries. Not for technical reasons. For regulatory ones."
"GDPR," Timothy said.
"GDPR is the most widely known example. The General Data Protection Regulation requires that personal data about EU residents be handled in accordance with EU law — which has implications for where that data is stored and who can access it. Storing EU personal data in us-east-1, in Northern Virginia, means that data is subject to US law as well as EU law. For some applications, in some contexts, that creates a compliance problem." She paused. "GDPR is not the only regulation with geographic implications. The UK has its own post-Brexit data framework. Australia has the Privacy Act. Canada has PIPEDA. Various countries have data localisation laws that go further — requiring data not just to be handled in accordance with local law, but physically stored within national borders."
"So choosing the wrong region isn't just a performance mistake," Timothy said slowly. "It can be a legal one."
"It can be a legal one. For consumer applications handling personal data, for applications in regulated industries — healthcare, finance, government — region selection is not a technical preference. It is a compliance requirement." She looked at him steadily. "This is not something to discover after you have built and deployed. It is something to establish at the beginning."
"Before you accept the default and move on."
"Before you accept the default and move on," Margaret agreed.
Before Next Time
He was nearly at the door when he stopped.
"Northern Virginia," he said. "us-east-1. Why is that the default? Of all the regions in the world — why that one?"
Margaret looked up. "Because it was the first. When AWS launched EC2 in 2006, there was one region. Northern Virginia. The infrastructure was already there — Amazon's own data centers, the ones that had been running amazon.com for years." She paused. "Every subsequent region was built deliberately, in response to demand and need. But us-east-1 was simply where it began. The default persisted."
Timothy nodded slowly. "So every developer who accepts the default without thinking is, in some small way, still running on the infrastructure Amazon built for itself in 2006."
"In some small way," Margaret said. "Which is either a comforting thought or an unsettling one, depending on your disposition."
He considered that for a moment, then wound his scarf against the London evening and left.
She listened to the city, thinking that the best questions were always the ones people asked about things they had already done — the retroactive curiosity that turned accidents into understanding.
Next episode: VPCs and Networking — your resources don't just live in a region, they live in a network you define. Timothy discovers that the default VPC is not the same as a well-designed one, and that networking is where cloud architecture either holds together or quietly falls apart.
Aaron Rose is a software engineer and technology writer at tech-reader.blog.