How experienced engineers think about structure, scale, and change
April 2026 · Prepared by Babu AI for Thota
Every non-trivial system eventually faces the same enemy: the complexity of change. Code gets modified. Requirements shift. Teams grow. Systems that weren't designed for change become archaeology: codebases engineers are afraid to touch.
Architecture patterns exist because experienced engineers have seen this before. These aren't abstract theories — they're battle-tested solutions proven across decades of production systems.
Patterns exist at every scale of the system. The trick is knowing which pattern solves which problem.
SOLID, Gang of Four (Creational, Structural, Behavioral) — how to organize objects and responsibilities
Layered, Hexagonal, Onion, Clean, Microkernel — how to structure an application's internals
Microservices, Event-Driven, CQRS, Saga, Service Mesh — managing multi-service complexity
Containers, Kubernetes, Blue-Green, Canary — how the system runs in production
Circuit Breaker, Retry, Bulkhead, Rate Limiting — handling failures gracefully
Domain-Driven Design, Bounded Contexts, Aggregates — aligning code with business capabilities
Five guidelines for writing maintainable object-oriented code — a system where each principle enables the others.
A module has one reason to change. The most impactful principle, and the most violated. Warning signs: class names containing "Manager", "Handler", or "Service".
Open for extension, closed for modification. Extend behavior by adding new code, not by editing working code.
Subtypes must be substitutable for base types. A Square can't be a Rectangle — unless callers never set width and height independently.
Split interfaces by client role. Don't force a class to implement methods its clients don't use.
Depend on abstractions, not concretions. Pass dependencies in via constructors. Makes testing trivial.
Follow SRP well and the others follow naturally. SRP violations are the #1 source of code pain.
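A minimal Python sketch of DIP with constructor injection; the names (`Notifier`, `OrderService`) are illustrative, not from any particular codebase:

```python
from typing import Protocol

class Notifier(Protocol):
    """The abstraction. High-level code depends on this, never on a mailer."""
    def send(self, message: str) -> None: ...

class EmailNotifier:
    def send(self, message: str) -> None:
        print(f"email: {message}")

class OrderService:
    # Dependency passed in via the constructor (DIP in practice).
    def __init__(self, notifier: Notifier) -> None:
        self._notifier = notifier

    def place_order(self, order_id: str) -> None:
        self._notifier.send(f"order {order_id} placed")

# Testing becomes trivial: inject a fake instead of a real mailer.
class FakeNotifier:
    def __init__(self) -> None:
        self.sent: list[str] = []
    def send(self, message: str) -> None:
        self.sent.append(message)

fake = FakeNotifier()
OrderService(fake).place_order("A1")
assert fake.sent == ["order A1 placed"]
```

Swapping `FakeNotifier` for `EmailNotifier` is a one-line change at the composition root; `OrderService` itself never changes.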
Every piece of knowledge has a single authoritative representation. The test: if I change this, will I also need to change that? If yes, it's the same knowledge.
Match complexity to the actual problem. Simple should be the default. Add complexity only when necessary — and remove it as soon as it isn't.
Don't build it until you need it. Speculative complexity has an immediate cost; hypothetical future flexibility doesn't.
Only talk to immediate collaborators. The tell: train wrecks like user.getAccount().getBalance().getCurrency(). Each link is a hidden dependency.
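A sketch of the fix in Python, with hypothetical `User`/`Account` classes: instead of chaining through collaborators, the root object exposes what callers actually need ("tell, don't ask"):

```python
from dataclasses import dataclass

@dataclass
class Account:
    balance: float
    currency: str

@dataclass
class User:
    account: Account

    def display_balance(self) -> str:
        # One hop only: User talks to its own Account.
        # Callers never traverse user.account.balance.currency chains.
        return f"{self.account.balance:.2f} {self.account.currency}"

user = User(Account(balance=42.5, currency="EUR"))
assert user.display_balance() == "42.50 EUR"
```

If `Account` later stores balances in cents, only `User` changes; every caller of `display_balance()` is untouched.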
WET = Write Everything Twice
Wait for the third occurrence before abstracting. First two may have different reasons to change.
"It is hard for less experienced developers to appreciate how rarely architecting for future requirements turns out net-positive."
Published in 1994 by Gamma, Helm, Johnson, and Vlissides. Understanding them matters less for using them directly and more for recognizing when you're facing a problem they solve.
Object creation complexity.
Singleton, Factory Method, Abstract Factory, Builder, Prototype
Composing classes & objects into larger structures.
Adapter, Bridge, Composite, Decorator, Facade, Proxy, Flyweight
Responsibility between objects & algorithms.
Observer, Strategy, Command, Template Method, Iterator, State, Mediator...
Step-by-step construction of complex objects. Immutability (product exposed only after build()). In Python, a frozen @dataclass(frozen=True) assembled by a mutable builder, or evolved with dataclasses.replace(), is a modern incarnation.
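A minimal Builder sketch in Python; `HttpRequest` and its fields are illustrative. The mutable builder does the step-by-step work, and the frozen product is only exposed by `build()`:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class HttpRequest:
    """Immutable product: callers can never mutate it after build()."""
    url: str
    method: str = "GET"
    headers: tuple = ()

class HttpRequestBuilder:
    def __init__(self, url: str) -> None:
        self._url = url
        self._method = "GET"
        self._headers: list = []

    def method(self, m: str) -> "HttpRequestBuilder":
        self._method = m
        return self  # fluent chaining

    def header(self, key: str, value: str) -> "HttpRequestBuilder":
        self._headers.append((key, value))
        return self

    def build(self) -> HttpRequest:
        # The only place the product is created, already immutable.
        return HttpRequest(self._url, self._method, tuple(self._headers))

req = (HttpRequestBuilder("https://example.com")
       .method("POST")
       .header("Accept", "application/json")
       .build())
assert req.method == "POST"
assert req.headers == (("Accept", "application/json"),)
```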
Add behavior dynamically without changing the class. Java I/O streams: new BufferedInputStream(new FileInputStream(...)). Caution: chain order matters, debugging is hard.
One-to-many dependency: state changes notify all dependents. Angular change detection, RxJS streams, event systems. Pitfall: memory leaks from observers that are never unregistered.
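A minimal Observer sketch in Python. Returning an unsubscribe handle from `subscribe()` makes the cleanup obligation explicit; forgetting to call it is exactly the leak the pitfall describes:

```python
from typing import Callable

class EventEmitter:
    def __init__(self) -> None:
        self._subscribers: list[Callable] = []

    def subscribe(self, callback: Callable) -> Callable[[], None]:
        self._subscribers.append(callback)

        def unsubscribe() -> None:
            # Forgetting to call this is the classic observer memory leak:
            # the emitter keeps the callback (and everything it captures) alive.
            self._subscribers.remove(callback)
        return unsubscribe

    def emit(self, event) -> None:
        # Copy the list so unsubscribing during emit is safe.
        for callback in list(self._subscribers):
            callback(event)

received: list = []
emitter = EventEmitter()
off = emitter.subscribe(received.append)
emitter.emit("price-changed")
off()                         # clean up: no more notifications
emitter.emit("ignored")
assert received == ["price-changed"]
```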
Swap algorithms at runtime. Payment processors (card vs. PayPal vs. Bitcoin). Each strategy is independent — unlike State, where transitions are coordinated.
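A sketch of Strategy for the payment-fee case; the fee formulas are made up for illustration. Each strategy is an interchangeable callable, selected at runtime:

```python
from typing import Callable

# Each strategy maps an amount to a fee; the numbers are illustrative only.
def card_fee(amount: float) -> float:
    return amount * 0.029 + 0.30

def paypal_fee(amount: float) -> float:
    return amount * 0.034

class Checkout:
    def __init__(self, fee_strategy: Callable[[float], float]) -> None:
        self._fee = fee_strategy   # swappable at runtime, no subclassing needed

    def total(self, amount: float) -> float:
        return round(amount + self._fee(amount), 2)

assert Checkout(card_fee).total(100.0) == 103.20
assert Checkout(paypal_fee).total(100.0) == 103.40
```

Note the contrast with State: here nothing coordinates transitions between strategies; the caller simply picks one.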
Unified, simplified interface to a complex subsystem. jQuery for DOM manipulation. Good facades are cohesive; bad ones are dumping grounds.
Tree structures where clients treat individual objects and compositions uniformly. File systems (files + directories). UI component hierarchies.
Encapsulates requests as objects. Enables queuing, undo/redo, and logging. Each command = receiver + method + arguments. GUI buttons, transaction rollback.
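A minimal Command sketch in Python showing the undo/redo mechanics; `AddItem` bundles receiver (the cart), method, and arguments, and the history log is what enables undo:

```python
from dataclasses import dataclass

@dataclass
class AddItem:
    """Command object: receiver (cart) + method + arguments in one package."""
    cart: list
    item: str

    def execute(self) -> None:
        self.cart.append(self.item)

    def undo(self) -> None:
        self.cart.remove(self.item)

class CommandHistory:
    def __init__(self) -> None:
        self._done: list = []

    def run(self, command) -> None:
        command.execute()
        self._done.append(command)   # the log is what makes undo possible

    def undo_last(self) -> None:
        if self._done:
            self._done.pop().undo()

cart: list = []
history = CommandHistory()
history.run(AddItem(cart, "book"))
history.run(AddItem(cart, "pen"))
history.undo_last()
assert cart == ["book"]
```

The same log, persisted, gives queuing and audit logging for free, which is why GUIs and transaction systems reach for this pattern.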
Justified uses are rare. It's global state in disguise. Before using it: can dependency injection with a single registration work instead? Usually yes.
Once systems span multiple services, two new categories of problems emerge that single-process patterns don't address:
ACID transactions don't work across service boundaries. When Service A and Service B each own their own database, you can't atomically update both. You need new patterns: Saga, Outbox, Event Sourcing.
Services need to find each other, route requests, balance load, and handle failures across network boundaries. Service Discovery, API Gateway, Circuit Breaker become essential infrastructure.
UI → Business Logic → Data Access. The default starting point. Works when clear layer separation is needed. Fails when cross-cutting changes touch multiple layers or teams split by layer (anti-pattern).
Unix philosophy: cat | grep | sort | uniq. Each filter is independent, testable, composable. ETL pipelines, compilers, stream processing. Poor fit for branching logic or heavy cross-stage state.
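The shell pipeline above can be sketched in Python with generators as filters; each stage is independent and testable on its own:

```python
# Each filter is a generator: independent, testable, composable.
def grep(lines, pattern):
    return (line for line in lines if pattern in line)

def sort_lines(lines):
    return iter(sorted(lines))

def uniq(lines):
    previous = object()            # sentinel: matches nothing on the first line
    for line in lines:
        if line != previous:
            yield line
        previous = line

raw = ["beta", "alpha", "beta", "alpha"]
# Equivalent in spirit to: cat | grep a | sort | uniq
pipeline = uniq(sort_lines(grep(raw, "a")))
assert list(pipeline) == ["alpha", "beta"]
```

Because each stage only consumes an iterator and produces one, stages can be reordered or unit-tested in isolation, which is the whole point of the pattern.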
Minimal core + plugins for everything else. Operating systems, VS Code (every feature is a plugin). Best when extensibility is core to the product and different clients need different features.
Three patterns, one philosophy: domain logic at the center, completely isolated from infrastructure. They differ mainly in metaphor and terminology.
Driving ports (incoming: API, UI) and driven ports (outgoing: DB, external APIs) at edges. Adapters translate. Domain has zero framework dependencies. Coined by Alistair Cockburn.
Concentric circles radiating from domain core. Domain entities & services at center → Application services → Infrastructure at outer edge. Dependencies point inward only.
Uncle Bob's 4-layer ring: Entities → Use Cases → Interface Adapters → Frameworks & Drivers. The Dependency Rule is absolute. Most explicitly structured of the three.
Command Query Responsibility Segregation. Split read and write into separate models. Write side: normalized, correct. Read side: denormalized for query efficiency. Enables independent scaling. Pairs with Event Sourcing.
Store state changes as immutable events, not current state. Current state = replay all events. Complete audit trail for free. "What was account balance on Jan 15?" → replay events. Challenge: schema evolution.
Commands produce events. Events are stored. Read models are built by projecting events asynchronously. Write model is authoritative; read models are eventually consistent.
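A tiny Event Sourcing sketch: current state is never stored, only derived by folding over the event log. The event names (`Deposited`, `Withdrawn`) are illustrative:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Deposited:
    amount: int

@dataclass(frozen=True)
class Withdrawn:
    amount: int

def project_balance(events) -> int:
    # Current state = replay (fold over) the immutable event log.
    balance = 0
    for event in events:
        if isinstance(event, Deposited):
            balance += event.amount
        elif isinstance(event, Withdrawn):
            balance -= event.amount
    return balance

log = [Deposited(100), Withdrawn(30), Deposited(5)]
assert project_balance(log) == 75

# "What was the balance on Jan 15?" -> replay only events up to that point:
assert project_balance(log[:2]) == 70
```

A CQRS read model is just such a projection kept up to date asynchronously, which is why the two patterns pair so naturally.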
Sequence of local transactions, each publishing an event. If step 3 fails, compensating transactions undo steps 2 and 1. Two styles: Choreography (services self-coordinate) vs. Orchestration (central director).
Reliable event publishing without distributed transactions. Write event to outbox table in same DB transaction as business data. Separate process polls and publishes. Guarantees at-least-once delivery.
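A sketch of the Outbox mechanics using an in-memory SQLite database as a stand-in for the service's real store; table names and the `relay` poller are illustrative:

```python
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE orders (id TEXT PRIMARY KEY, total INTEGER);
    CREATE TABLE outbox (id INTEGER PRIMARY KEY AUTOINCREMENT,
                         payload TEXT, published INTEGER DEFAULT 0);
""")

def place_order(order_id: str, total: int) -> None:
    # Business row and event row commit atomically in ONE local transaction:
    # no distributed transaction needed.
    with db:
        db.execute("INSERT INTO orders VALUES (?, ?)", (order_id, total))
        db.execute("INSERT INTO outbox (payload) VALUES (?)",
                   (json.dumps({"type": "OrderPlaced", "id": order_id}),))

def relay(publish) -> None:
    # Separate poller: publish, then mark. A crash between the two steps
    # means re-publishing on restart -> at-least-once delivery.
    rows = db.execute(
        "SELECT id, payload FROM outbox WHERE published = 0").fetchall()
    for row_id, payload in rows:
        publish(json.loads(payload))
        with db:
            db.execute("UPDATE outbox SET published = 1 WHERE id = ?", (row_id,))

published: list = []
place_order("A1", 400)
relay(published.append)
relay(published.append)   # already-published rows are skipped
assert published == [{"type": "OrderPlaced", "id": "A1"}]
```

Consumers must therefore be idempotent, since at-least-once delivery means occasional duplicates are by design.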
Microservices is an architectural style built from many smaller patterns. The core idea: small, independently deployable services organized around business capabilities, each owning their own data.
Can you fully redeploy Service A without touching Service B? Can Service A fail without taking down Service B? If yes, you have a real microservice. If they share a database or A calls B synchronously on every request, you have a distributed monolith.
Single entry point for all client requests. Handles: authentication, SSL termination, request logging, rate limiting, request aggregation. Without it, clients need to know every service endpoint. Kong, Envoy, AWS API Gateway, Nginx.
Services find each other dynamically as they scale. Client-side: client queries a registry (Consul, Eureka). Server-side: load balancer queries registry. Kubernetes DNS is the standard in K8s environments.
Each client type (web, mobile, third-party) gets its own dedicated API backend. Mobile and web have different data needs — don't force a one-size-fits-nothing compromise.
Dedicated infrastructure layer for service-to-service communication. Sidecar proxies (Envoy) handle mTLS, traffic management, and observability. Istio and Linkerd are leading tools. Adds operational complexity — justified at scale.
Distributed systems fail in partial, asynchronous, time-dependent ways that monolithic systems don't. These patterns are not optional — they're survival.
Trip when downstream is failing. Closed (normal) → Open (fail fast, no calls) → Half-Open (probe recovery) → Closed. Prevents cascade failures. Hystrix, Resilience4j, Polly.
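A minimal sketch of the three-state machine in Python (a real implementation would use Resilience4j, Polly, or similar rather than hand-rolling this):

```python
import time
from typing import Optional

class CircuitBreaker:
    """Minimal three-state breaker: CLOSED -> OPEN -> HALF_OPEN -> CLOSED."""

    def __init__(self, failure_threshold: int = 3, reset_timeout: float = 30.0):
        self.failures = 0
        self.threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.opened_at: Optional[float] = None

    @property
    def state(self) -> str:
        if self.opened_at is None:
            return "CLOSED"
        if time.monotonic() - self.opened_at >= self.reset_timeout:
            return "HALF_OPEN"   # timeout elapsed: let one probe call through
        return "OPEN"

    def call(self, fn):
        if self.state == "OPEN":
            raise RuntimeError("circuit open: failing fast")   # no downstream call
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()   # trip the breaker
            raise
        self.failures = 0
        self.opened_at = None   # success (or successful probe): close again
        return result

breaker = CircuitBreaker(failure_threshold=2, reset_timeout=60.0)

def flaky():
    raise TimeoutError("downstream is down")

for _ in range(2):
    try:
        breaker.call(flaky)
    except TimeoutError:
        pass

assert breaker.state == "OPEN"   # further calls fail fast, protecting upstream
```

Failing fast is the point: while open, callers spend no threads or sockets on a downstream that is already drowning.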
Re-attempt transient failures. Exponential backoff (1s → 2s → 4s) + jitter (randomization) prevents thundering herd. Requires idempotent operations.
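A sketch of exponential backoff with full jitter in Python; `TransientError` and the delay constants are illustrative:

```python
import random
import time

class TransientError(Exception):
    """Stand-in for a retryable failure (timeout, 503, connection reset)."""

class GaveUp(Exception):
    pass

def retry(fn, attempts: int = 4, base_delay: float = 1.0):
    # Safe only if fn is idempotent: it may run more than once.
    for attempt in range(attempts):
        try:
            return fn()
        except TransientError:
            if attempt == attempts - 1:
                raise   # out of attempts: surface the failure
            # 1s -> 2s -> 4s ..., randomized ("full jitter") so a fleet of
            # clients doesn't retry in lockstep and cause a thundering herd.
            delay = base_delay * (2 ** attempt)
            time.sleep(random.uniform(0, delay))

calls = {"count": 0}

def flaky():
    calls["count"] += 1
    if calls["count"] < 3:
        raise TransientError("temporary network blip")
    return "ok"

assert retry(flaky, base_delay=0.001) == "ok"
assert calls["count"] == 3
```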
Named after ship compartments. Separate thread/connection pools per downstream service. If Service A's pool is exhausted, Service B's remains available.
Protect services from being overwhelmed. Token bucket and leaky bucket algorithms. HTTP 429 "Too Many Requests" is the standard response. Usually at API gateway.
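A minimal token bucket sketch in Python; in practice this lives in the API gateway, and the capacity/refill numbers here are illustrative:

```python
import time

class TokenBucket:
    """Allows bursts up to `capacity`; refills at `refill_rate` tokens/second."""

    def __init__(self, capacity: int, refill_rate: float) -> None:
        self.capacity = capacity
        self.refill_rate = refill_rate
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False   # caller should respond with HTTP 429 Too Many Requests

bucket = TokenBucket(capacity=2, refill_rate=1.0)
results = [bucket.allow() for _ in range(3)]
assert results == [True, True, False]   # burst of 2 allowed, third rejected
```

The leaky bucket differs in that it smooths output to a constant rate instead of permitting bursts.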
When circuit is open, return a degraded but functional response. Cached data. Static error page. Meaningful error message. Never let a cascade failure reach the user.
Services that are tightly coupled via synchronous calls and shared databases. All the operational complexity of microservices with none of the benefits.
Cloud-native isn't just "running on cloud infrastructure" — it's an approach to building systems that take full advantage of cloud computing's model: elasticity, resilience, and operational automation.
Immutable packaging. Rebuild, don't modify. Docker, multi-stage builds, minimal base images.
Container orchestration. Self-healing, scaling, service discovery. Pods, Deployments, Services, Ingress.
Automated pipelines. Test, build, deploy. Trunk-based development. Feature flags enable shipping without deploying.
The philosophical foundation for cloud-ready applications. Originally from Heroku engineers; still the clearest articulation of what makes an app portable and maintainable in cloud environments.
Store config in environment, not in code. Same codebase → dev, staging, production with different env vars. This is what makes deployment identical across environments.
Share nothing between requests. State goes to a backing service (Redis, database). This is what enables horizontal scaling — any instance can handle any request.
Databases, queues, caches are attached resources — swappable by config change. A MySQL database is indistinguishable from a managed Postgres from the app's perspective.
Strict separation of stages. Code → build artifact → config → runnable instance. Once a release is created, it cannot be modified. Rollback = switch to previous release.
Fast startup, graceful shutdown. Instances can be created or destroyed at any moment. No reliance on instance identity. This is what makes auto-scaling work.
Keep development and production as similar as possible. Same runtime, same backing services. "Works on my machine" is eliminated by design, not by discipline.
| Primitive | What it does |
|---|---|
| Pod | Atomic deployable unit — containers sharing network/storage |
| Deployment | Rolling updates, self-healing, rollback |
| Service | Stable network endpoint with load balancing |
| Ingress | HTTP/HTTPS routing with host and path rules |
| StatefulSet | Stable identities for stateful applications |
| CronJob | Scheduled workloads |
Helper container sharing the pod's lifecycle. Log collectors, service proxies, observability agents. Language-agnostic feature implementation without modifying the main app.
Out-of-process proxy for outbound communication. Envoy as ambassador for all outgoing service calls. Standardizes instrumentation across languages.
Normalize external interfaces to what the application expects. Legacy metrics → Prometheus format. Wraps heterogeneous systems into a consistent interface.
Two identical environments. Live = Blue. New = Green. Switch traffic via load balancer at cutover. Instant rollback capability. Cost: double the infrastructure.
Pros: Zero downtime, instant rollback
Cons: Double infra, DB migration complexity
Deploy new version to 5% of traffic. Monitor error rates. Gradually increase. Rollback by reducing traffic. Real production traffic is the test.
Pros: Minimal blast radius, real-world testing
Cons: Requires good observability, slower iteration
Decouple deployment from release. Code ships behind a flag; enabling it goes live instantly. Enable = new feature. Disable = instant rollback. No redeployment needed.
Pros: Instant rollback, trunk-based dev, A/B testing
Cons: Flag cleanup tech debt, complexity management
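A minimal flag-check sketch in Python. Real systems query a flag service (LaunchDarkly, Unleash, and similar); an environment variable is the simplest backing store, and the flag and function names here are illustrative:

```python
import os

def flag_enabled(name: str, default: bool = False) -> bool:
    # Deployment ships the code; flipping the flag is the release.
    value = os.environ.get(f"FLAG_{name.upper()}", str(default))
    return value.strip().lower() in {"1", "true", "yes", "on"}

def checkout_total(cents: int) -> int:
    if flag_enabled("NEW_PRICING"):
        return cents * 90 // 100   # new code path, dark until the flag flips
    return cents                    # old behavior: the instant-rollback path

os.environ["FLAG_NEW_PRICING"] = "true"
assert checkout_total(10_000) == 9_000

os.environ["FLAG_NEW_PRICING"] = "false"   # "rollback" without redeploying
assert checkout_total(10_000) == 10_000
```

The cleanup cost is visible here too: once NEW_PRICING is permanent, the dead branch and the flag itself must be deleted or they accumulate as tech debt.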
The Reactive Manifesto, published in 2014, defines four system-level properties that compose across all scales of a distributed system.
Consistent response times. Problems detected and handled quickly. Upper bounds on latency — the system makes promises it can keep.
Stays responsive in face of failure. Achieved via replication, containment, isolation, delegation. Failures are contained within components.
Stays responsive under varying load. No central bottlenecks. Sharded components. Supports predictive and reactive scaling.
Async message-passing establishes boundaries between components. Loose coupling, isolation, location transparency. Enables all the other properties.
When downstream can't keep up, signal upstream to slow down. Prevents cascade failures. The mechanism that makes elasticity practical under real load.
Pioneered by Netflix. Inject controlled failures into production to find weaknesses before they cause outages.
What does "normal" look like? Establish measurable hypotheses about system behavior.
Server terminations (Chaos Monkey), latency spikes, network partitions, resource exhaustion.
Run continuously in CI/CD. Measure blast radius. Iterate. Tools: Gremlin, LitmusChaos, AWS FIS.
Structured event records. ELK stack (Elasticsearch, Logstash, Kibana) or Loki. The raw material of debugging.
Numerical measurements over time. Prometheus (pull-based). Alert on anomalies, not thresholds.
Track a request end-to-end across service boundaries. A single request can fan out to dozens of services. Zipkin, Jaeger, OpenTelemetry.
Domain-Driven Design doesn't produce an architecture by itself; it's a set of strategic and tactical patterns for modeling complex domains. It serves as the seam language for microservices and the domain isolation layer in Clean/Hexagonal architectures.
The most important DDD concept. An explicit boundary where a particular domain model is authoritative. Outside the boundary, the same words may mean different things. "Customer" in billing ≠ "Customer" in shipping — each context owns its model. Bounded Contexts are the natural seam for microservice decomposition.
Clusters of related objects treated as a single unit for data changes. An Aggregate Root is the entry point — external code interacts only with the root, never directly with internals. Immutability of value objects within the aggregate.
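A sketch of an aggregate root in Python; `Order` and `OrderLine` are illustrative. External code goes through the root, which enforces the invariants; the value objects inside are immutable:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OrderLine:
    """Value object: immutable within the aggregate."""
    sku: str
    quantity: int

class Order:
    """Aggregate root: the only entry point for changes to the cluster."""

    def __init__(self, order_id: str) -> None:
        self.order_id = order_id
        self._lines: list[OrderLine] = []   # internals stay private

    def add_line(self, sku: str, quantity: int) -> None:
        # Invariants enforced here, never by callers poking at _lines directly.
        if quantity <= 0:
            raise ValueError("quantity must be positive")
        self._lines.append(OrderLine(sku, quantity))

    @property
    def total_items(self) -> int:
        return sum(line.quantity for line in self._lines)

order = Order("A1")
order.add_line("SKU-1", 2)
order.add_line("SKU-2", 3)
assert order.total_items == 5
```

Because all writes funnel through the root, the aggregate is also the natural unit for a single database transaction.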
Records of significant business occurrences: OrderPlaced, PaymentReceived, ShipmentDispatched. Emitted by aggregates, consumed asynchronously by other Bounded Contexts. The foundation of event-driven microservices.
Shared terminology for a Bounded Context, used consistently in code, discussions, and documentation. The language is explicit and owned by the team. Eliminates translation between domain experts and engineers.
Every pattern answers: what should depend on what, and how should changes flow? SOLID at class level. Hexagonal/Clean at architecture level. Microservices at deployment level.
Circuit breakers, bulkheads, retries, sagas, outbox, chaos engineering. Assume things will break. Design for it. Make the blast radius as small as possible.
Strangler fig, blue-green, canary, feature flags. Big bang rewrites are how teams get into trouble. These patterns contain the risk of change.
Start simple. Add complexity only when you have to. Remove it as soon as you can.