Posts

Showing posts from June, 2025

Insight: What Kind of “Radio” Is This? Understanding the New Raspberry Pi Radio Module 2

When Raspberry Pi Ltd announced the Radio Module 2—or RM2 for short—some makers got excited for all the wrong reasons. “Finally!” they thought. “A Pi-branded SDR? An FM tuner module?” Not quite. The name may conjure visions of antennae scanning shortwave bands, but that’s not what we’re dealing with here. The RM2 is a 2.4 GHz wireless communication module, built for Wi‑Fi and Bluetooth, not audio broadcasting. To be clear: this isn’t software-defined radio (SDR). It’s not a ham radio receiver. It won’t tune your local FM station or help you build a police scanner. What it will do—and quite well—is bring wireless networking and device pairing to embedded Raspberry Pi–style projects that use chips like the RP2040. If you’ve ever used a Pico W or Pico 2 W, you’ve already benefited from the same chip. Now, you can buy it on its own for just $4. Yes, Pico W and Pico 2 W Already Have This Here’s where th...

Solve: Diagnosing Aurora PostgreSQL Query Routing Issues

I. The Quiet Cost of Misrouted Queries When your Aurora PostgreSQL cluster starts slowing down, the first thing many engineers think about is scaling. Maybe your app grew. Maybe you need a bigger instance. But sometimes, the issue isn't size—it's traffic routing. Aurora is designed for performance, but only if you use its features the way AWS intended. One of the most important—but often overlooked—tools in your belt is the reader endpoint. Aurora clusters come with both a writer endpoint and a read-only endpoint, and each serves a different role: The writer handles all data changes—INSERTs, UPDATEs, DELETEs, etc. The readers are for SELECTs—reporting dashboards, analytics, background jobs, and any operation that doesn't change data. If your application sends everything to the writer—reads and writes—you’re unintentionally t...
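The writer/reader split described in this post can be sketched as a tiny routing helper. This is a minimal sketch, not the article's code: the endpoint hostnames are hypothetical placeholders, and a production app would more likely rely on a connection pool or driver-level read/write splitting.

```python
# Hypothetical Aurora endpoints -- replace with your cluster's own.
WRITER = "my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com"
READER = "my-cluster.cluster-ro-abc123.us-east-1.rds.amazonaws.com"

def pick_endpoint(sql: str) -> str:
    """Send plain SELECTs to the reader endpoint; everything else to the writer."""
    first_word = sql.lstrip().split(None, 1)[0].upper()
    # SELECT ... FOR UPDATE takes row locks, so it must stay on the writer.
    if first_word == "SELECT" and "FOR UPDATE" not in sql.upper():
        return READER
    return WRITER
```

The point of the sketch is the decision, not the dispatch: anything that changes data (or locks rows) goes to the writer, and read-only traffic is offloaded to the reader.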

Solve: ECS-CDK Deploys and the Timing Trap—Notes from the Field

This isn’t the kind of issue you catch in tutorials. It’s the kind that quietly eats your morning, just when you thought you were ready to deploy. If you’re rolling out an ECS service using CDK—and you’re creating your infrastructure and container image for the first time—you may run headfirst into a silent conflict between build timing and deploy order. CDK tries to launch the ECS service before your image exists in ECR. That leads to a 404 at the ECS level, and the whole thing rolls back. At this point, there’s a decision to be made. Option 1: Pre-Push Your Image One option is to build and push a placeholder container to ECR before your first CDK deploy. This lets CDK proceed without breaking ECS on image fetch. It’s a manual step, but it clears the dependency deadlock and lets you move forward fast. Later, you can swap in your real image. Option 2: Deploy in Two Passes The other approach is more surgical. Deploy the pip...

Solve: ECS Rollouts and Rollbacks—How to Keep Your CDK Deployments from Breaking in Production

By now, you've solved the initial ECS deploy paradox and safely updated your service to use your real container image. That means you’ve gone from “why won’t this even launch?” to “we’re deploying our own code into ECS now.” But there’s a next level—not just deploying successfully, but deploying safely. In this post, we’ll show how to strengthen your deployment pipeline by focusing on what happens after CDK runs: ECS rollout behavior, container health checks, rollback settings, and optional traffic shifting. These aren’t luxuries—they’re the difference between weekend peace and a Saturday pager alert. What Happens During an ECS Deployment (And Why It Matters) Every time you change your ECS task definition—whether it’s a new image, an updated env var, or a different port—ECS creates a new revision. When CDK deploys that change, ECS replaces your old tasks with new...
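The rollback behavior this post covers can be approximated in miniature. A minimal sketch, assuming health checks reduce to a simple pass/fail sequence; the real ECS deployment circuit breaker tracks task launch failures and health-check results on its own, so this only illustrates the decision it automates.

```python
def should_roll_back(health_results, failure_threshold=3):
    """Return True once the new tasks fail `failure_threshold` consecutive
    health checks -- the rough logic an automated rollback applies."""
    streak = 0
    for ok in health_results:
        streak = 0 if ok else streak + 1
        if streak >= failure_threshold:
            return True
    return False
```

Note the consecutive-failure streak: an occasional flaky check resets the counter instead of triggering a rollback, which is why thresholds matter more than any single failed probe.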

Solve: Safely Updating ECS Task Definitions After the First Deploy

Once you’ve gotten past the ECS-CDK bootstrap problem—by deploying with a placeholder image and letting your pipeline build the real one—you’re faced with a quieter but equally important moment: how do you safely update your ECS service to use your real container image? It’s tempting to treat this like a simple swap, but under the hood, Amazon ECS manages your service through versioned task definitions, and AWS CDK interacts with those in subtle ways. In this post, we’ll walk through what really happens during an image update and how to avoid common pitfalls during rollout. CDK and Task Definitions: What's Actually Happening? Each ECS service in your CDK stack points to a task definition, which is a versioned blueprint of how your container should run. This includes the image URI, memory limits, port mappings, environment variables, and more. What many developers miss is that every update to...
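The "versioned blueprint" idea can be shown with a toy model. This is a sketch of the concept only—the class and family name are invented for illustration—but it mirrors the ECS behavior the post describes: every change registers a new immutable revision, and the service simply repoints.

```python
class TaskDefFamily:
    """Toy model of an ECS task-definition family: registering a change
    never mutates an old revision, it appends a new numbered one."""
    def __init__(self, family):
        self.family = family
        self.revisions = []

    def register(self, image):
        # Each update creates revision family:1, family:2, ...
        self.revisions.append({"image": image})
        return f"{self.family}:{len(self.revisions)}"
```

Because old revisions remain intact, rolling back is just pointing the service at an earlier revision rather than rebuilding anything.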

Solve: Fixing the ECS-CDK First Deploy Error—The Real-World Solution

When you're deploying an ECS service with AWS CDK for the first time, and that service depends on a Docker image in ECR, you're likely to hit a frustrating wall. CloudFormation fails, ECS can’t launch, and the deploy process collapses—not because your code is broken, but because of the order in which everything is expected to exist. This problem shows up most often in projects where your CI/CD pipeline is responsible for building and pushing the image. But the pipeline won’t run until the infrastructure is up... and the infrastructure won’t come up because ECS can’t find the image. You’re stuck in a circular dependency. Let’s break it down and solve it for real. The Fix at a Glance Understand the error — ECS tries to launch with a Docker image that doesn’t exist yet. Deploy with a placeholder — Use a public container image so your CDK deploy succeeds. Let the pipeline run — Once the st...
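The placeholder step above boils down to one decision: which image reference the first deploy should use. A minimal sketch, with a public nginx image as an example placeholder and a hypothetical repo URI; the real check would query ECR (e.g. via boto3) rather than take a list of tags.

```python
# Example public placeholder image -- any image ECS can pull will do.
PLACEHOLDER = "public.ecr.aws/docker/library/nginx:latest"

def image_for_deploy(pushed_tags, repo_uri, tag="latest"):
    """Use the real image if CI has already pushed it to ECR;
    otherwise fall back to the placeholder so the deploy can succeed."""
    if tag in pushed_tags:
        return f"{repo_uri}:{tag}"
    return PLACEHOLDER
```

On the very first deploy `pushed_tags` is empty, so the service launches with the placeholder; once the pipeline pushes the real tag, the same logic resolves to your own image.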

New Article on Medium: How Cloudflare Defeated a 7.3 Tbps DDoS Monster

A new record has been set, both for the size of a cyber attack and for the impressive defense that stopped it. Our latest article dives deep into how Cloudflare successfully blocked a monumental 7.3 Terabits per second Distributed Denial of Service (DDoS) attack in mid-May 2025. Learn about the multi-vector assault, the staggering scale of data involved, and the autonomous technology that protected the internet from this unprecedented surge. Click here to read the full article on Medium * * * Written by Aaron Rose, software engineer and technology writer at Tech-Reader.blog.

Solve: How to Write Data Back When Redshift Sharing Says No

If you’ve tried to INSERT, UPDATE, or DELETE a Redshift table received via data sharing, you’ve already seen the wall: Redshift doesn’t allow writes on shared (external) tables. It’s not a permission issue. It’s not a bug. It’s the design. But what if your use case truly requires it? What if the consuming side needs to write new data that must eventually live in the producer’s cluster? You can’t write directly—but you can reroute. The Simplest Path: Stage It in S3 The most approachable workaround uses a write → stage → ingest pattern. The consumer cluster writes to its own local table or S3 export, and the producer ingests that data on a controlled schedule. Here’s a clean pattern that works: 1. Consumer writes to S3: unload ('select * from staging_table') to 's3://your-staging-bucket/data/' iam_role 'arn:aws:iam::123456789012:role/red...
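The write → stage → ingest pattern pairs an UNLOAD on the consumer with a COPY on the producer. A minimal sketch that just builds the two statements; the bucket, IAM role name, and table names are hypothetical placeholders, and each statement must be run on its respective cluster.

```python
# Hypothetical staging location and role -- substitute your own.
STAGE = "s3://your-staging-bucket/data/"
ROLE = "arn:aws:iam::123456789012:role/redshift-s3-role"

def unload_sql(table, stage=STAGE, role=ROLE):
    """Consumer side: export the local staging table to S3."""
    return (f"UNLOAD ('SELECT * FROM {table}') TO '{stage}' "
            f"IAM_ROLE '{role}' FORMAT AS PARQUET")

def copy_sql(table, stage=STAGE, role=ROLE):
    """Producer side: ingest the staged files on a controlled schedule."""
    return (f"COPY {table} FROM '{stage}' "
            f"IAM_ROLE '{role}' FORMAT AS PARQUET")
```

Parquet keeps types intact between the two clusters; the producer's COPY can run from a scheduled query or an external orchestrator, which is what makes the ingest "controlled".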

Insight: Why You Can’t INSERT into Shared Redshift Tables—And What to Do Instead

Data sharing in Amazon Redshift makes it easy to expose tables from one cluster (or namespace) to another—often across teams, accounts, or organizations. But when you try to go beyond simple reads, you may run into an unexpected wall: Operation not supported on external tables. This error typically appears when you attempt to INSERT, UPDATE, or DELETE on a Redshift table that’s been shared with your cluster. It can feel confusing at first: everything else works. You can query the data, join it, filter it—so why not write to it? Read-Only by Design This is not a permissions issue, nor a resource constraint. It’s simply how Redshift data sharing is built. When data is shared between clusters, the receiving (consumer) cluster sees those tables through an external schema. These look native, but they remain fully owned and governed by the producer. That’s why t...
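Since the shared tables are read-only, the usual consumer-side move is to materialize a writable local copy and write to that instead. A minimal sketch that builds the CTAS statement; the schema and table names are hypothetical, and this is one workaround, not the only one.

```python
def materialize_sql(shared_schema, table, local_table):
    """Copy data from a read-only shared (external) schema into a
    local table the consumer cluster is allowed to write to."""
    return (f"CREATE TABLE {local_table} AS "
            f"SELECT * FROM {shared_schema}.{table}")
```

Once the local copy exists, INSERT/UPDATE/DELETE work normally on it; keeping it in sync with the producer is then a scheduling problem rather than a permissions fight.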

New Article on Medium: What Happened to My API? Let the Report Tell You

Ever stared at CloudWatch logs during an outage, trying to make sense of retries, throttles, and spiraling latency? Yep, me too. So we built a small Python tool to help. It scans a plain .txt CloudWatch log and gives you a clean, timestamped report of: Throttling events Client retry storms Time ranges you can act on Recommendations that actually make sense No SDKs. No setup. Just a screen and a report. If your Lambda’s sluggish or your API gateway’s throwing 429s, this will show you what happened. 👉 Read it on Medium 💾 Download the tool * * * Written by Aaron Rose, software engineer and technology writer at Tech-Reader.blog.