A Pragmatic Journey from Monolith to Microservices
From a simple monolith to a resilient, event-driven system, this is a step-by-step guide to evolving your architecture by solving one problem at a time.
My goal today is to share a structured, pragmatic path for that evolution, based on my recently updated course, ‘Design Microservices Architecture with .NET’.
We’ll focus on solving real-world problems at each stage, exploring key patterns, and looking at concrete code examples. All the code demonstrated here is available on my GitHub.
Architecture Design Journey
We’re going to systematically evolve our E-Shop application’s architecture.
Our path will generally follow these key stages:
We’ll start with a Monolithic Architecture — understanding its structure, benefits for simple applications, and its inherent limitations as we scale.
Then, we’ll refactor it into a Modular Monolith Architecture, introducing better internal organization while still being a single deployment.
Next is the major leap: to Microservices Architecture. This is where we’ll spend most of our time, decomposing our monolith and designing independent services. We’ll then enhance our microservices by making them truly Event-Driven, enabling greater decoupling, resilience, and responsiveness.
And finally, we’ll explore Serverless Microservices Architectures, looking at how Functions-as-a-Service can fit into our overall design for specific use cases.
From Macroservices to Nanoservices
If we look at this evolution from a different perspective, we’re essentially on a journey of breaking down the ‘size’ of our services — moving from what you might call ‘Macroservices’ towards ‘Nanoservices.’
We begin with the Monolith, which is a true Macroservice. All business functionalities are packaged and deployed together, typically sharing a single, large database.
Then we move to Microservices. These are significantly smaller than a monolith. Each microservice is designed around a specific business capability. They are developed, deployed, and managed independently, communicating with each other in a loosely coupled manner. This is where we’ll focus much of our design effort.
And as we push the boundaries further, we encounter Nanoservices or Serverless Functions. These are often even more granular than typical microservices. A nanoservice might be designed to perform just a single, highly specific task or function, exposed via a dedicated API endpoint.
Our Approach: An Iterative Learning Cycle
Architecture is all about problem-solving. We won’t jump to the most complex solution immediately. Instead, we’ll follow an iterative learning cycle for each evolutionary step of our sample E-Shop application:
- Problem: We start by identifying a clear business or technical challenge with our current architecture.
- Learn: We explore architectural patterns, principles, and best practices relevant to solving that problem.
- Design: We put on our architect hats, sketch out diagrams, and create a blueprint for the solution.
- Code: We bring the design to life with targeted .NET code examples, making the concepts tangible.
- Evaluate: We assess the new architecture’s strengths and weaknesses, which leads us to the next problem, and the cycle begins again.
This systematic approach ensures we’re adding complexity only when necessary.
The Problem: We Need to Sell Products Online
Our first and most fundamental problem is simple: we need to create an e-commerce web application.
At a high level, this means:
- Users can browse, select, and purchase products.
- The application must be highly available (e.g., 24/7).
- It must handle a good number of requests with acceptable latency.
For our initial E-Shop, aiming for 2,000 concurrent users and 500 requests per second is a reasonable goal. A monolithic architecture, where the entire application is a single, unified unit, is a perfectly sensible starting point. It aligns with principles like KISS (Keep It Simple, Stupid) and YAGNI (You Ain’t Gonna Need It).
Stage 1: The Monolith — Our Starting Point
Essentially, a monolithic architecture is a traditional approach
where an entire application is designed, developed, and deployed as a single, unified unit.
- The complete application is developed as a single unit.
- UI, business logic, and database calls all live in a single, shared codebase.
- The whole application ships as one big deployment, e.g., a single JAR/WAR file or executable.
- Most legacy apps are implemented as a monolithic architecture.
- But we can’t dismiss it as an outdated style of architecture; it is still valid for particular scenarios.
In many situations, it’s still the best and most pragmatic option.
Design: First version of Monolithic Architecture
Here you can see our architecture. It’s not very fancy, right? We’ve learned so many things, so why do we put only one box here?
Because this is our first design, made according to our patterns and principles.
And it is one of the best architectures for us, because if it handles our functional and non-functional requirements, there is no need to design a more complex one.
That means we have followed the KISS and YAGNI principles here. We will refactor our design as the requirements change, iterating step by step together.
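To make the single box concrete, here is a minimal sketch of what the first monolithic version can look like in .NET. It assumes a standard ASP.NET Core MVC project with EF Core; the entity, DbContext, and controller names are illustrative, not the actual course code. The point is that UI rendering, business logic, and data access all live in one codebase and ship as one deployment.

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore;

// One project, one deployment: entities, data access, and the UI controller together.
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; } = string.Empty;
    public decimal Price { get; set; }
}

public class EShopDbContext : DbContext
{
    public EShopDbContext(DbContextOptions<EShopDbContext> options) : base(options) { }
    public DbSet<Product> Products => Set<Product>();
}

public class ProductsController : Controller
{
    private readonly EShopDbContext _db;
    public ProductsController(EShopDbContext db) => _db = db;

    // The same app queries the database and renders the HTML page for the user.
    public async Task<IActionResult> Index()
    {
        var products = await _db.Products.AsNoTracking().ToListAsync();
        return View(products);
    }
}
```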
Problem: Tightly Coupled UI Limits Flexibility and User Experience
User Interface, Business Logic, and Data Access are all part of the same application and deployment unit. When our User Interface (UI) is tightly coupled with our backend application logic, evolving them independently becomes difficult.
This tight coupling can significantly limit our flexibility and our ability to enhance the user experience effectively.
When UI and backend logic are part of the same deployment unit, several challenges arise:
- Development Bottlenecks
- Technology Lock-in
- Scalability Issues
- Slower UI Innovation
Solution: 3-Tier Architecture
What is 3-Tier Architecture?
It’s a client-server architecture pattern where the application is broken down into three distinct physical tiers:
- Presentation Tier (or Client Tier)
- Application Tier (or Middle Tier / Logic Tier)
- Data Tier (or Database Tier)
The key is that these tiers are independent. The Presentation Tier communicates with the Application Tier (typically via API calls over HTTP/S), and the Application Tier communicates with the Data Tier.
Design — E-Shop: A 3-Tier Architecture
Let’s break down these tiers:
Presentation Tier (Client Tier):
This is now represented by a Single Page Application (SPA) running in the user’s web browser. It handles all user interface rendering and user interaction. It communicates with our backend exclusively through API calls, typically sending and receiving data in JSON format over HTTPS.
Application Tier (Middle Tier):
This is our existing E-Shop Monolithic application, but its role evolves. While it still contains the Business Logic Layer and Data Access Layer internally, its primary role is now to expose a set of RESTful APIs. It no longer directly serves HTML pages to the end-user for most interactions. It processes requests from the SPA, executes business logic, and interacts with the database.
Data Tier:
This remains our PostgreSQL Database, responsible for persistent storage. This design clearly separates the concerns into physically distinct tiers.
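As a rough illustration of the Application Tier’s new role, here is a hedged sketch of a REST endpoint the SPA would call instead of receiving server-rendered HTML. The routes are assumptions for illustration, reusing the Product entity and DbContext from the monolith sketch above.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore;

// The Application Tier now exposes JSON over HTTPS; the SPA calls GET /api/products.
[ApiController]
[Route("api/products")]
public class ProductsApiController : ControllerBase
{
    private readonly EShopDbContext _db;
    public ProductsApiController(EShopDbContext db) => _db = db;

    [HttpGet]
    public async Task<ActionResult<IEnumerable<Product>>> GetProducts() =>
        Ok(await _db.Products.AsNoTracking().ToListAsync());

    [HttpGet("{id:int}")]
    public async Task<ActionResult<Product>> GetProduct(int id)
    {
        var product = await _db.Products.FindAsync(id);
        return product is null ? NotFound() : Ok(product);
    }
}
```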
Our Current Backend API: Still a Monolith Inside
Let’s focus on our Application Tier — the ApiService.
While it now nicely separates concerns from the Presentation Tier by exposing APIs, the internal codebase of this ApiService itself is still fundamentally a monolith.
All the business logic for products, orders, payments, user accounts, etc., resides within this single deployment unit and potentially within a single large project or a set of very tightly coupled projects.
This internal monolithic structure can start to exhibit significant growing pains as our E-Shop business expands and our development team grows.
Problem: Monolith is Becoming Hard to Manage and Evolve
Our business is growing rapidly. We want to add many new features quickly to compete in the market; a new loyalty program, advanced search, personalized recommendations, different payment integrations.
Our development team is also growing. We might now have specialized sub-teams focusing on different aspects of the e-commerce domain: a ‘Product Catalog’ team, an ‘Ordering & Sales’ team, a ‘Payments’ team.
This growth is fantastic for the business, but it puts immense pressure on our current, internally undifferentiated monolithic backend API. This leads us to our new problem: Unstructured Growth — Our Monolith is Becoming Hard to Manage and Evolve.
We need a way to organize our monolithic codebase into well-defined, loosely coupled internal modules that can be developed and understood more independently, even if they are still deployed together.
Solution: Modular Monolithic Architecture
What is a Modular Monolith?
A Modular Monolith is still a monolithic application in the sense that it’s typically deployed as a single unit. However, the critical difference lies in its internal structure.
It is intentionally designed and developed as a collection of well-defined, independent, and loosely coupled modules. The code is broken up into independent modules, and each module encapsulates the features it needs.
We still build and deploy a single app, but the code is organized into independent modules, each covering a specific business domain or capability.
Design — E-Shop 3-Tier with a Modular Monolith Backend
Here is our refined architecture for the E-Shop.
We maintain our 3-Tier structure, but the internal design of our Application Tier (the backend API) now explicitly reflects a Modular Monolith.
Let’s focus on the Application Tier (E-Shop Backend API):
It’s still deployed as a single monolithic service. However, internally, it is now composed of several distinct, domain-aligned modules. For our E-Shop, these could be:
- Catalog Module: Manages product information, categories, search.
- Basket Module: Handles shopping cart creation and item management.
- Ordering Module: Orchestrates the order creation process, order history.
- Identity Module: Manages user authentication and authorization.
- Payment Module: Integrates with payment gateways and processes payments.
Each of these modules would encapsulate its own specific business logic and potentially its own data access concerns (though they might still share the same database in a simple monolith, perhaps using different schemas or clearly separated tables).
Communication between these internal modules should happen through well-defined interfaces or internal events, minimizing direct dependencies and coupling. The Presentation Tier (SPA) still communicates with this Application Tier via its public Web APIs, and the Application Tier in turn communicates with the Data Tier (PostgreSQL Database).
This design gives us a highly organized backend, even though it’s deployed as one piece.
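One common way to express this composition in .NET is to let each module expose its own registration extension, so the single host simply wires the modules together. The sketch below is hedged: the extension method names, module projects, and connection string are my assumptions, not the course’s actual API.

```csharp
using Microsoft.EntityFrameworkCore;

// Composition root of the modular monolith: one host, many self-contained modules.
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();
builder.Services.AddCatalogModule(builder.Configuration);    // products, categories, search
// builder.Services.AddBasketModule(builder.Configuration);   // shopping cart
// builder.Services.AddOrderingModule(builder.Configuration); // orders, order history
// builder.Services.AddIdentityModule(builder.Configuration); // authentication / authorization
// builder.Services.AddPaymentModule(builder.Configuration);  // payment gateways

var app = builder.Build();
app.MapControllers();   // each module contributes its own controllers or endpoints
app.Run();

// Inside the Catalog module: registration stays behind one public extension method,
// so other modules depend only on its public surface, never on its internals.
public static class CatalogModuleExtensions
{
    public static IServiceCollection AddCatalogModule(
        this IServiceCollection services, IConfiguration configuration)
    {
        services.AddDbContext<CatalogDbContext>(options =>
            options.UseNpgsql(configuration.GetConnectionString("EShopDb")));
        return services;
    }
}

public class CatalogDbContext : DbContext
{
    public CatalogDbContext(DbContextOptions<CatalogDbContext> options) : base(options) { }
}
```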
Why Separate Schemas?
In a modular monolith architecture, maintaining clear boundaries between different modules is essential. Database schema separation is a best practice that helps achieve this by isolating the data of each module within its own schema.
Each module’s data is stored in a separate schema, making it clear which data belongs to which module.
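In EF Core this can be as simple as giving each module’s DbContext its own default schema, so Catalog tables and Ordering tables never mix even though they share one PostgreSQL instance. The schema and entity names below are illustrative assumptions.

```csharp
using Microsoft.EntityFrameworkCore;

public class CatalogProduct
{
    public int Id { get; set; }
    public string Name { get; set; } = string.Empty;
}

public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

public class CatalogDbContext : DbContext
{
    public CatalogDbContext(DbContextOptions<CatalogDbContext> options) : base(options) { }
    public DbSet<CatalogProduct> Products => Set<CatalogProduct>();

    protected override void OnModelCreating(ModelBuilder modelBuilder) =>
        modelBuilder.HasDefaultSchema("catalog");   // all Catalog tables live in the "catalog" schema
}

public class OrderingDbContext : DbContext
{
    public OrderingDbContext(DbContextOptions<OrderingDbContext> options) : base(options) { }
    public DbSet<Order> Orders => Set<Order>();

    protected override void OnModelCreating(ModelBuilder modelBuilder) =>
        modelBuilder.HasDefaultSchema("ordering");  // Ordering data stays isolated in "ordering"
}
```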
Problem: Monolith Bottlenecks at Scale
Imagine our E-Shop is thriving! It’s a significant player in the market. But this success comes with intense pressure. The business demands continuous innovation and rapid feature releases to outmaneuver competitors. The marketing team wants a new recommendation engine, the sales team needs advanced promotional tools, and the payments team needs to integrate three new payment gateways — all ASAP!
Our development organization has grown. We now have specialized teams: a ‘Product Catalog Team,’ an ‘Ordering Team,’ a ‘Payment Team,’ and an ‘Identity Team.’ Each team is eager to deliver features for their specific domain quickly and independently.
We’re also observing that different parts of our ApiService experience vastly different load patterns. For example, the Product Catalog (especially with caching) might handle enormous read traffic efficiently, but perhaps the Order Processing or Inventory Management components have unique scaling needs during flash sales that are different from the rest of the application.
While our current horizontally scaled, modular ApiService (our backend monolith) handles general load well, it starts to show cracks under these advanced pressures.
Despite its internal modularity and horizontal scaling, our ApiService is still a single, large codebase that gets deployed as one unit. This monolithic nature at the deployment level becomes the root of our new problem: monolith bottlenecks at scale.
- We can’t scale and deploy parts of the system independently.
- We can’t add new features quickly.
- We can’t ship features immediately; teams have to wait for scheduled deployment dates.
The Underlying Need: True Independence for Teams, Features & Scaling
We need an architectural approach that allows us to:
- Enable our specialized teams (Product, Orders, Payments, etc.) to develop, test, deploy, and scale their specific business capabilities truly independently of each other.
- Ship new features much faster because changes to one capability don’t require retesting and redeploying the entire backend system.
- Optimize resource usage and cost by scaling individual parts of the system (like just the Product Catalog service or just the Payment service) based on their unique demands, rather than scaling the entire monolith.
This level of independence in development, deployment, and scaling is very difficult to achieve with a monolithic architecture, even a well-structured modular one, when faced with these advanced business pressures.
As a solution, we will explore Microservices Architecture!
Stage 2: What are Microservices?
Microservices are small, independent, and loosely coupled services. Each microservice is designed to do one specific thing well, focusing on a particular business capability, and to minimize its dependencies on the other services.
Each service has its own separate codebase, which can be developed, understood, and maintained by a small, dedicated development team. This autonomy is a key aspect. Crucially, microservices can be deployed independently. A team can update and deploy their service without needing to rebuild or redeploy the entire application or coordinate extensively with other teams.
They communicate with each other over a network using well-defined APIs and lightweight protocols, commonly HTTP/REST or gRPC. Services don’t need to share the same technology stack, libraries, or frameworks. One service might be in .NET, another in Java or Python,
if that’s the best choice for its specific job.
And importantly, each microservice often has its own dedicated database or persistence layer that is not directly shared with other services. This is a major shift from traditional models where a single large database serves the entire application.
Why Does the Distributed Monolith Happen?
When designing microservices, you run the risk of creating a Distributed Monolith, which is a major anti-pattern that must be avoided.
This is one of the worst outcomes. A distributed monolith occurs when you break your application into services, but these services are still very tightly coupled. They might make excessive, chatty synchronous calls to each other, or worse, share databases in a way that prevents independent deployment or evolution.
A distributed monolith gives you all the complexities of microservices (network calls, multiple deployments) with none of the benefits (like independent deployability, team autonomy, or resilience). Debugging tightly coupled distributed services is a nightmare. True microservices are about autonomy and loose coupling.
Database-per-Service Pattern and Polyglot Persistence
Each microservice owns and manages its own private database. This database is dedicated to that service and contains only the data relevant to its specific business capability.
Crucially: No other microservice is allowed to directly access another service’s database. There’s no shared database schema, no direct reads or writes across service boundaries at the database level.
The Database-per-Service pattern naturally enables a concept called Polyglot Persistence. Polyglot Persistence means using multiple different data storage technologies across your microservices. Instead of being forced to use a single, one-size-fits-all database for the entire application, each microservice team can choose the database technology that is best suited for its specific needs.
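To illustrate what polyglot persistence can look like in .NET, here is a hedged sketch of three service startup snippets, each choosing its own store. The connection string names and packages (MongoDB.Driver, the Redis distributed cache, Npgsql for PostgreSQL) are assumptions for illustration, and in a real system each snippet would live in its own service’s Program.cs.

```csharp
using Microsoft.EntityFrameworkCore;
using MongoDB.Driver;

// --- Product Catalog Service: a document database (MongoDB) as its ProductDB ---
var catalogBuilder = WebApplication.CreateBuilder(args);
catalogBuilder.Services.AddSingleton<IMongoDatabase>(_ =>
    new MongoClient(catalogBuilder.Configuration.GetConnectionString("ProductDb"))
        .GetDatabase("ProductDb"));

// --- Shopping Cart Service: Redis as its fast key-value BasketCache ---
var basketBuilder = WebApplication.CreateBuilder(args);
basketBuilder.Services.AddStackExchangeRedisCache(options =>
    options.Configuration = basketBuilder.Configuration.GetConnectionString("BasketCache"));

// --- Ordering Service: relational PostgreSQL as its transactional OrderDB ---
var orderingBuilder = WebApplication.CreateBuilder(args);
orderingBuilder.Services.AddDbContext<OrderDbContext>(options =>
    options.UseNpgsql(orderingBuilder.Configuration.GetConnectionString("OrderDb")));

public class OrderDbContext : DbContext
{
    public OrderDbContext(DbContextOptions<OrderDbContext> options) : base(options) { }
}
```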
Design — E-Shop Initial Microservice Decomposition
Here’s our initial design for the E-Shop using a Microservices Architecture:
We’re taking our backend monolith’s capabilities and decomposing them into a suite of smaller, focused services. You can see a clear separation:
We still have our Client Tier (SPA) that users interact with. This SPA will now make API calls to various backend microservices. And now, our backend consists of several independent microservices, each aligned with a specific business capability:
- Product Catalog Service: Manages all product information. It will have its own dedicated ProductDB, perhaps a NoSQL document database optimized for flexible schemas and high read throughput.
- Shopping Cart Service: Manages users’ active shopping carts. This might use a fast key-value store like Redis as its dedicated BasketCache.
- Ordering Service: Handles the order creation process, order history, and related logic. This service would have its own OrderDB, likely a relational database like PostgreSQL to manage transactional order data.
- Identity Service: Manages user accounts, authentication, and authorization. It would have its own UserDB.
We could also add a Payment Service (conceptual for now) that would handle payment processing with its own specialized data store if needed.
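Here is what one of the decomposed services listed above might look like on its own: a hedged sketch of the Product Catalog Service as a small ASP.NET Core minimal API backed by its own MongoDB ProductDB. The endpoint paths, collection name, and record shape are assumptions, not the course’s exact code.

```csharp
using MongoDB.Driver;

var builder = WebApplication.CreateBuilder(args);

// The service owns its ProductDB; no other service touches this database directly.
builder.Services.AddSingleton<IMongoCollection<Product>>(_ =>
    new MongoClient(builder.Configuration.GetConnectionString("ProductDb"))
        .GetDatabase("ProductDb")
        .GetCollection<Product>("Products"));

var app = builder.Build();

// Only catalog capabilities are exposed; carts, orders, and identity live in other services.
app.MapGet("/api/products", async (IMongoCollection<Product> products) =>
    Results.Ok(await products.Find(_ => true).ToListAsync()));

app.MapGet("/api/products/{id}", async (string id, IMongoCollection<Product> products) =>
{
    var product = await products.Find(p => p.Id == id).FirstOrDefaultAsync();
    return product is null ? Results.NotFound() : Results.Ok(product);
});

app.Run();

public record Product(string Id, string Name, decimal Price);
```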
Problem: Direct Client-to-Service Communication
When a client application, like our WebApp or a future mobile app, has to:
- Know the individual addresses of many different microservices.
- Handle different communication protocols or API styles (e.g., making some REST calls, some GraphQL queries).
- Orchestrate calls to multiple services to gather all the data needed for a single user view.
- Implement cross-cutting concerns like authentication or retry logic for each of these interactions.
The client application itself becomes a complex point of integration and a potential bottleneck.
The problems are increased client-side complexity, chatty calls from the client to the services, the difficulty of managing invocations from the client app across different protocols (HTTP, GraphQL, gRPC), duplication of effort across multiple client types, and security and other cross-cutting concerns.
The solution is a simpler, unified, and managed entry point that abstracts away this complexity and presents a consistent API, using microservices communication patterns such as API Gateway and BFF (Backend for Frontend).
What is the API Gateway Pattern?
An API Gateway is a server that acts as a single entry point for all client requests targeting your backend microservices. Instead of client applications needing to know about and communicate directly with dozens of individual microservices, they make all their requests to the API Gateway. The API Gateway then intelligently handles these requests.
The API Gateway sits between the clients and the microservices. It can:
- Route requests to the correct internal microservice(s).
- Aggregate data from multiple microservices into a single response for the client.
- Handle essential cross-cutting concerns like authentication, authorization, SSL termination, rate limiting, and logging.
You can think of it as being similar to the Facade pattern from object-oriented design. It provides a simplified, unified interface that encapsulates the more complex underlying system of microservices.
API Gateway acts as a reverse proxy, but with added intelligence and capabilities specifically tailored for a microservices environment. This is crucial for managing synchronous communication effectively.
Design — E-Shop Microservice with an API Gateway
Here’s our updated E-Shop architecture, now featuring an API Gateway.
Let’s break down how this design changes the interaction flow:
- Client Tier (SPA/WebApp): All requests from our client applications are now directed to a single, well-known endpoint: the API Gateway. The client no longer needs to know the addresses of individual microservices.
- API Gateway: This is our new “front door.” It receives all incoming client requests and performs several key functions:
- Gateway Routing: It inspects each request and routes it to the appropriate downstream microservice based on path, headers, or other criteria.
- Gateway Aggregation (Potential): the API Gateway can make multiple internal calls, aggregate the results, and return a single, consolidated response to the client.
- Gateway Offloading: The API Gateway is now the prime location to handle cross-cutting concerns. This includes:
- Authentication and Authorization: Verifying user credentials and permissions before forwarding requests.
- SSL Termination: Handling HTTPS for external traffic.
- Rate Limiting and Throttling: Protecting backend services from abuse.
- Request Logging and Monitoring: Centralized observability.
- Response Caching: For common, cacheable responses.
- Potentially Protocol Translation: For example, accepting REST from the client but communicating with some internal services via gRPC.
- Backend Microservices Tier:
Our individual microservices (Customer, Product, etc.) now primarily receive requests from the API Gateway. They can be simpler because many cross-cutting concerns are handled by the gateway. They focus on their core business logic and data management, each with its own database.
This design provides a crucial layer of abstraction and control between our clients and our backend services.
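As one concrete way to realize this in .NET, here is a hedged sketch of a gateway built with the YARP reverse proxy library. The route paths, cluster names, and downstream service addresses are assumptions; the course may use a different gateway technology or configuration style.

```csharp
using Yarp.ReverseProxy.Configuration;

var builder = WebApplication.CreateBuilder(args);

// Gateway routing rules: /catalog/** goes to the Product Catalog Service,
// /ordering/** goes to the Ordering Service.
var routes = new[]
{
    new RouteConfig { RouteId = "catalog", ClusterId = "catalog",
        Match = new RouteMatch { Path = "/catalog/{**catch-all}" } },
    new RouteConfig { RouteId = "ordering", ClusterId = "ordering",
        Match = new RouteMatch { Path = "/ordering/{**catch-all}" } },
};

var clusters = new[]
{
    new ClusterConfig { ClusterId = "catalog",
        Destinations = new Dictionary<string, DestinationConfig>
            { ["d1"] = new() { Address = "http://catalog-service/" } } },
    new ClusterConfig { ClusterId = "ordering",
        Destinations = new Dictionary<string, DestinationConfig>
            { ["d1"] = new() { Address = "http://ordering-service/" } } },
};

builder.Services.AddReverseProxy().LoadFromMemory(routes, clusters);

var app = builder.Build();

// Cross-cutting offloading (authentication, rate limiting, logging) would be
// added as middleware here, before the proxy forwards the request downstream.
app.MapReverseProxy();
app.Run();
```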
Problem: Long-Running Operations Can’t Be Handled with Synchronous Communication
If an operation requires more than a few HTTP calls chained across multiple microservices, it quickly becomes unmanageable. Each synchronous call in the chain adds network latency and processing time. The end-user (or the client application) is left waiting, blocked, until the entire chain completes. This can lead to very long response times and a frustrating user experience.
The success of the entire operation depends on every single service in the chain being available and responding quickly. If just one service in that chain (say, step 5 or 6) is slow, down, or errors out, the entire ‘Place Order’ transaction can fail. The overall availability of the feature plummets. This is known as cascading failure.
Services in the chain become tightly, temporally coupled. The Ordering Service is directly dependent on the immediate availability and responsiveness of the Payment, Inventory, Shipping, and Notification services to do its job. This makes independent evolution and deployment harder.
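To see this temporal coupling in code, consider the hedged sketch below of a ‘Place Order’ handler that must call every downstream service in sequence. The client names, routes, and payload are hypothetical (base addresses would be configured elsewhere); the point is that every await blocks the caller and every dependency must be available.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

public record OrderRequest(Guid OrderId, decimal Total);

public class PlaceOrderHandler
{
    private readonly IHttpClientFactory _clients;
    public PlaceOrderHandler(IHttpClientFactory clients) => _clients = clients;

    public async Task PlaceOrderAsync(OrderRequest order)
    {
        // Each step is a blocking network hop; latency accumulates while the user waits.
        await _clients.CreateClient("payments")
            .PostAsJsonAsync("/api/payments", order);
        await _clients.CreateClient("inventory")
            .PostAsJsonAsync("/api/inventory/reserve", order);
        await _clients.CreateClient("shipping")
            .PostAsJsonAsync("/api/shipments", order);
        await _clients.CreateClient("notifications")
            .PostAsJsonAsync("/api/emails/order-confirmation", order);
        // If the shipping call times out, the whole operation fails even though
        // payment already succeeded: tight temporal coupling and cascading failure.
    }
}
```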
Solution: Asynchronous, message-based communication working with events.
What is Event-Driven Architecture (EDA)?
EDA is a software architecture paradigm where the flow of the system is determined by events. An event is a significant occurrence, a state change, or a notification that something has happened within a service or the broader system.
For our E-Shop, examples of events could be:
- OrderPlacedEvent
- ProductPriceChangedEvent
- UserRegisteredEvent
- InventoryLowEvent
In an EDA, services don’t make direct synchronous calls to command other services. Instead:
- A service performs an action and then produces (publishes) an event to signify that this action has occurred.
- Other services that are interested in this type of event can subscribe to it.
- When an event is published, it’s typically sent to an event bus or message broker, which then delivers the event to all subscribed services.
- Each subscribed service then reacts to the event independently, performing its own logic.
This model promotes extreme loose coupling because services don’t need to know about each other directly. They only need to know about the events they produce or consume. Communication is inherently asynchronous.
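In .NET, such events are often modeled as small immutable records, named in the past tense because they describe something that has already happened. The property shapes below are assumptions for illustration.

```csharp
using System;

// Events carry just the facts other services need in order to react.
public record OrderPlacedEvent(Guid OrderId, Guid CustomerId, decimal TotalPrice, DateTime PlacedAtUtc);

public record ProductPriceChangedEvent(Guid ProductId, decimal OldPrice, decimal NewPrice);

public record UserRegisteredEvent(Guid UserId, string Email);

public record InventoryLowEvent(Guid ProductId, int RemainingQuantity);
```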
Design EShop Microservice w/ Event Bus for Asynchronous Communication
Here’s our evolved E-Shop architecture, let’s call it v3.5, now incorporating an Event Bus (implemented via a Message Broker) to facilitate asynchronous, event-driven communication for specific workflows.
Client (WebApp) still interacts with services via API Gateway/BFFs using synchronous calls (REST/GraphQL) for immediate needs.
Let’s look at how this changes interactions:
- Introducing the Message Broker / Event Bus: We now have a central Message Broker in our architecture. This is where events get published and from where they are consumed.
- Event Publishing (Example: Order Placed — The Order Fulfillment Process): When the Ordering Service successfully creates an order (perhaps after a synchronous API call from the client/BFF), it then publishes an OrderPlacedEvent to the Event Bus. The Ordering Service’s job for initiating the fulfillment is now done for that immediate request; it can respond quickly to the client.
- Event Consumption (Example: Order Fulfillment): An Inventory Service subscribes to OrderPlacedEvent to decrease stock. A Notification Service subscribes to send an order confirmation email. A Shipping Service subscribes to start the logistics process. Each of these services reacts independently and in parallel once they receive the event. They don’t need to know about each other, only about the OrderPlacedEvent.
This Publish/Subscribe model using an event bus fundamentally decouples these services. The Product Catalog Service doesn’t know or care about the Shopping Cart Service. The Ordering Service doesn’t directly call or depend on the Inventory or Notification services for the order to be initially accepted.
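Here is a hedged sketch of how that publish/subscribe flow could be wired in .NET using MassTransit over RabbitMQ. MassTransit is my assumption; the actual course code may use the RabbitMQ client directly. The Ordering Service publishes OrderPlacedEvent after saving the order, and the Inventory Service consumes it independently.

```csharp
using System;
using System.Threading.Tasks;
using MassTransit;

// Shared event contract (same shape as in the sketch above).
public record OrderPlacedEvent(Guid OrderId, Guid CustomerId, decimal TotalPrice, DateTime PlacedAtUtc);

// Ordering Service: persist the order, publish the event, respond to the client quickly.
public class CheckoutOrderHandler
{
    private readonly IPublishEndpoint _publishEndpoint;
    public CheckoutOrderHandler(IPublishEndpoint publishEndpoint) => _publishEndpoint = publishEndpoint;

    public async Task HandleAsync(Guid orderId, Guid customerId, decimal total)
    {
        // ...save the order to OrderDB synchronously...
        await _publishEndpoint.Publish(
            new OrderPlacedEvent(orderId, customerId, total, DateTime.UtcNow));
    }
}

// Inventory Service: reacts to the event on its own schedule, unaware of the publisher.
public class OrderPlacedConsumer : IConsumer<OrderPlacedEvent>
{
    public Task Consume(ConsumeContext<OrderPlacedEvent> context)
    {
        // decrease stock for the order described by context.Message
        return Task.CompletedTask;
    }
}

// Registration in each service's Program.cs (broker address is an assumption):
// builder.Services.AddMassTransit(x =>
// {
//     x.AddConsumer<OrderPlacedConsumer>();          // only in services that subscribe
//     x.UsingRabbitMq((ctx, cfg) =>
//     {
//         cfg.Host("rabbitmq://localhost");
//         cfg.ConfigureEndpoints(ctx);
//     });
// });
```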
Code for E-Shop Microservices with EDA & Publish/Subscribe
The E-Shop codebase has now been updated to include these asynchronous, event-driven interactions using RabbitMQ. You’ll find this version in our GitHub repositories. Go to the folder named 9-eshop-microservices-async-rabbitmq_IDEAL.
You can inspect the logs for BasketService to see it publishing the basket checkout event. Then check Ordering Service logs to see it consuming that event.
Step-by-Step Architecture Design with the Course
In this course, we’re going to learn how to design a microservices architecture using design patterns, principles, and best practices. We will start with a monolith and evolve it into event-driven microservices step by step, together, using the right architecture design patterns and techniques.
