Building a modular (Spring Boot) monolith


If you’ve been actively involved in software development in the past decade, there was one word you heard over and over again: microservices. All of a sudden, everyone praised and aspired to build small, fast, scalable, self-contained services instead of big, unstable and slow monoliths.

The promise: small services that hold only context-specific logic and are mostly independent of their surroundings. Especially the elimination of dependencies between teams was what made the pattern popular, and it remains one of the main reasons behind it.


However, many speakers advise against microservices as the default architecture pattern, especially for companies with few or small teams. The pattern brings a lot of hidden complexity and requirements that not everyone is aware of initially, not only on the technical level but on the infrastructural and organizational levels as well.

If you can’t build a well-structured monolith, what makes you think microservices is the answer? — Simon Brown

Don’t even consider microservices unless you have a system that’s too complex to manage as a monolith. The majority of software systems should be built as a single monolithic application. Do pay attention to good modularity within that monolith, but don’t try to separate it into separate services. — Martin Fowler

A monolithic architecture is a choice, and a valid one at that. It may not be the right choice in all circumstances, any more than microservices are - but it’s a choice nonetheless. — Sam Newman

Quite often the term modulith was thrown into the mix: a modular monolith, where each module is as self-contained as possible, harnessing the simplicity of the monolith as well as the structural integrity of microservices. This can be beneficial for project setups with a limited number of people.

A lot has been said and written about how to cut and slice your domains using DDD and other modelling approaches. Yet actually building a modular monolith with the tools at hand is still not a straightforward thing to do, neither on the technical nor, especially, on the functional side.

This post provides a possible approach for the technical side of building a modular monolith, using Gradle, Java and Spring as an example. The concept can easily be applied to Maven as well.

Beware: I do not claim this to be the best approach. It worked in projects I participated in, and I simply hope it inspires you.

The structural idea

A possible modulithic structure

The proposed structure consists of just three types of modules: the main module, API modules and implementation modules. Each functional aspect of the application should have at least an implementing module and, optionally, an API module.

Main module

This is basically your previous monolith with regards to the application frame, configuration and build artifact. It depends on all implementation modules and defines common dependencies and configuration for runtime. Depending on the framework used, it might also have to hold things like database connections.

Implementation module

As is to be expected, all the “real” stuff goes here: your web API, business logic, persistence layer, automated tests and so forth. Each module represents a functional part of the application and should be designed to be as independent as possible. However, it may use other modules via their APIs.
An implementation module may define its own specific configuration, but it should still be orchestrated by the main module.
Its last responsibility is implementing the matching API interfaces.

API module

Not every functional module needs an API, but as soon as two modules depend on each other, one should be introduced. These modules are supposed to be the lightest of them all.
Their only purpose is to provide an internal API to the matching implementation module. Hence, only interfaces and data transfer objects (DTOs) should live here: just what is necessary for other modules to send or receive data.

The nitty gritty technical part of Gradle

How does one actually build this structure? Build tools like Maven and Gradle support modular structures; Maven has done so for over 20 years.

Declaring the modules

For Gradle to recognize the modular structure, all modules have to be declared within the settings.gradle file at the root of your project. Hierarchy levels can be defined with the colon (:) symbol. For example, module:pet:api corresponds with module/pet/api as folder structure.

Gradle analyses the structure and recognises modules with matching folders, containing a build.gradle. For the example modulith above, the settings.gradle looks like the following:

```groovy
rootProject.name = 'modulith'

include 'modules:a:a-api', 'modules:a:a-impl'
include 'modules:b:b-api', 'modules:b:b-impl'
include 'modules:c:c-api', 'modules:c:c-impl'
```
The resulting folder structure

For the root to recognise the modules when building the project however, they have to be added as dependencies within the root build.gradle. The root is only depending on the implementation of the modules, as these will contain the production code and implement their respective API module:

```groovy
dependencies {
    implementation project('modules:a:a-impl')
    implementation project('modules:b:b-impl')
    implementation project('modules:c:c-impl')
}
```

Managing dependencies

One major benefit of having one project with modules is easier dependency management. Common dependencies can be defined and managed in the root and inherited or picked up by the modules. A module does not need to care about which version to use; that is defined once for all modules in the root. This makes staying up to date on the latest versions easier, as there is only one source of truth.

Root dependencies are defined within the dependencies section; inherited module dependencies can be managed via the subprojects block. See the build.gradle of the example project for details. This also allows manipulating other settings, like the artifact group id and version, as well as test management.

```groovy
subprojects {
    // this sets the group and version the same as the root
    group = rootProject.group
    version = rootProject.version

    dependencyManagement {
        imports {
            // use the same spring boot version as in the root
            mavenBom "org.springframework.boot:spring-boot-dependencies:${springBootVersion}"
        }
    }

    // define dependencies every module will inherit
    dependencies {
        implementation 'org.springframework.boot:spring-boot-starter-web'

        testImplementation 'org.springframework.boot:spring-boot-starter-test'
    }
}
```
Each module can still define its own individual dependencies in its build.gradle, if needed.
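As a sketch, such a module-specific build.gradle might look like the following; the commons-lang3 dependency is purely illustrative and not part of the original project:

```groovy
// hypothetical build.gradle of a single module, e.g. modules/c/c-impl

dependencies {
    // depend on the own API module that this module implements
    implementation project(':modules:c:c-api')

    // module-specific dependency, not shared with the rest of the modulith;
    // its version still has to be declared here since it is not managed by the root
    implementation 'org.apache.commons:commons-lang3:3.12.0'
}
```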

This is all for the overall Gradle configuration. All that is left is to fill the modules with life.

Building the actual application

To provide business logic, nothing too special applies; it’s still a monolith. Hence the main application class still has to be defined in the root module. This ensures that all annotated classes are within sub-packages and are scanned and identified by Spring.
Global configuration like database connections, security, etc. should be put into the root module. Controller and service classes go into the respective -impl modules.

```java
package de.nspiess.modulith; // notice the root package here

@SpringBootApplication
public class ModulithApplication {
    public static void main(String[] args) {
        SpringApplication.run(ModulithApplication.class, args);
    }
}
```

```java
package de.nspiess.modulith.a.impl;

@RestController
public class InfoController {
    ...
}
```

Synchronous communication

Although asynchronous communication is the preferred way, synchronous communication between modules is easily achievable. It is the equivalent of calling a different microservice via a synchronous HTTP API.

As with distributed services, you want to avoid synchronous communication as much as possible, since it creates dependencies that are hard to get rid of afterwards. Be careful not to create dependency loops either.

Instead of a web API, define a Java interface within the api module and write a matching implementation in the respective impl module. Every module depending on the interface needs to declare the dependency to the api module only.

```java
package de.nspiess.modulith.a.api;

public interface NameService {
    String getName();
}
```

```groovy
// build.gradle of module "a-impl"
dependencies {
    implementation project(':modules:a:a-api')
}
```

```java
package de.nspiess.modulith.a.impl;

import de.nspiess.modulith.a.api.NameService;

@Service
public class NameServiceImpl implements NameService {
    public String getName() { ... }
}
```

To use the interface and its implementation, just inject it as a dependency in a different module. As everything ends up in one combined runtime, Spring resolves the interface to its implementation as usual.

```java
package de.nspiess.modulith.b.impl;

import de.nspiess.modulith.a.api.NameService;

@RestController
public class WebController {

    private final NameService nameService;

    public WebController(NameService nameService) { ... }

    @GetMapping
    public String hello() {
        var greeted = nameService.getName();
        return String.format("Hello %s!", greeted);
    }
}
```

Simple data types can be used immediately in the defined methods. If you want complex DTOs, they need to be defined within the api module next to the interface.
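As a sketch of such a DTO, assume a hypothetical NameInfo record (in the real a-api module it would be a public type in its own file):

```java
// Hypothetical DTO living in the a-api module next to NameService.
// Other modules only ever see this type, never the implementation classes.
record NameInfo(String firstName, String lastName) {

    // derived convenience accessor, computed from the two record components
    public String fullName() {
        return firstName + " " + lastName;
    }
}
```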

Asynchronous communication

As with distributed services, the goal should be to strive for event- or message-based communication. Several frameworks (like Guava, MBassador, EventBus, etc.) are available for in-memory events. Spring even provides its own mechanism, as the framework itself heavily depends on internal events.
For simplicity, I’ll showcase Spring Events. Using a different framework is not much different though.

The idea is always the same: something is emitting an event of a specific type and possibly many listeners may react on it. The listeners are independent and can process the event asynchronously.
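To illustrate the principle independently of any framework, here is a minimal, hypothetical in-memory event bus in plain Java. Spring’s event mechanism follows the same publish/subscribe idea, just with annotations and proxies instead of explicit registration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Minimal, illustrative in-memory event bus (not Spring's implementation):
// publishers emit an event object, all registered listeners of a matching
// type react to it. Listeners stay independent of the publisher.
class InMemoryEventBus {

    private final List<Consumer<Object>> listeners = new ArrayList<>();

    // register a listener that only reacts to events of the given type
    <T> void subscribe(Class<T> type, Consumer<T> listener) {
        listeners.add(event -> {
            if (type.isInstance(event)) {
                listener.accept(type.cast(event));
            }
        });
    }

    // deliver the event to every listener; non-matching types are ignored
    void publish(Object event) {
        listeners.forEach(listener -> listener.accept(event));
    }
}
```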

Defining an event

Events in this case are nothing more than simple DTO classes. They can be as complex as desired, holding few or many, possibly nested, properties. As potential listeners need to be aware of them, events also have to be put within the respective api module.

```java
public record HelloEvent(String name) {}
```

Publishing an event

For event publication, Spring provides the ApplicationEventPublisher class. Spring instantiates this for internal purposes already and it can be injected into your own logic as desired. Publishing an event is as simple as calling the publishEvent(Object event) method.
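A sketch of what publishing could look like, assuming a hypothetical GreetingService bean; ApplicationEventPublisher and publishEvent are the actual Spring API:

```java
import org.springframework.context.ApplicationEventPublisher;
import org.springframework.stereotype.Service;

// GreetingService is an illustrative name, not from the example project
@Service
public class GreetingService {

    private final ApplicationEventPublisher publisher;

    // Spring injects its own publisher instance
    public GreetingService(ApplicationEventPublisher publisher) {
        this.publisher = publisher;
    }

    public void greet(String name) {
        // listeners in other modules pick this up
        // without a direct dependency on this module
        publisher.publishEvent(new HelloEvent(name));
    }
}
```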

Listening on events

Every Spring-managed bean can become a listener. I advise defining dedicated listener classes, annotated with @Component. To register a listener, Spring provides the @EventListener annotation, applicable to methods. For asynchronous handling, the @Async annotation has to be added as well.
Asynchronous behaviour also needs to be enabled on the Spring context with the @EnableAsync annotation, which can be put e.g. on the main application class. Otherwise, events are handled within the same thread, blocking the publishing code.

```java
@Async
@EventListener
public void helloEvent(HelloEvent event) { ... }
```

That’s all you need for publishing your custom events within a Spring context.

The pitfalls

As with every approach, there are things you have to be aware of and consider. In my experience, most of the pitfalls I mention would have occurred in a distributed architecture as well.

Module dependency

First of all, there’s an immediate danger of interweaving many modules with each other. The more modules depend on each other, the more you’ll have to untangle them later. Sounds familiar from the distributed monolith? That’s because the underlying modelling issues are the same.
If you run into cyclic dependencies within the Spring context, you can use @Lazy injection. It can be a smell in your architecture, or simply a limitation of having one global Spring runtime.
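As a sketch, breaking a constructor cycle between two hypothetical beans could look like this; @Lazy makes Spring inject a proxy that only resolves the target bean on first use:

```java
import org.springframework.context.annotation.Lazy;
import org.springframework.stereotype.Service;

// AService and BService are illustrative names for two beans
// that (unfortunately) depend on each other across modules
@Service
public class AService {

    private final BService bService;

    // @Lazy injects a proxy instead of the real bean, so BService is only
    // resolved on first use, which breaks the constructor-time cycle
    public AService(@Lazy BService bService) {
        this.bService = bService;
    }
}
```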

Complicated configuration

You’re still building a monolith. Some configurations can become complicated that would have been easy with separate services.
For example, if you want different database connections per module, things can get tricky. One simple approach is to ignore this for now and use completely separate tables or collections per module. If you really need separate connections, Spring allows multiple data sources.

Security and environment variables shouldn’t be hard to manage. Security should be the same for all modules anyway, and environment variables can be structured as you like; separate them as you do with the Java packages.

Error handling

One big topic I haven’t touched on in this article is error handling. How do you react to, or even notice, an asynchronous event that failed to be processed?
This is an important topic and you should think carefully about your solution. The same applies to a distributed system, which helps in case you go for distribution later on.

I’m sure I did not touch on all aspects, but I hope the approach is clear. In my opinion, this is a good way to enforce separation of concerns within a monolithic application without dealing with the distribution overhead. And if the need arises, extracting a module into a separate service should be easier than pulling it out of a regular monolith.

You can find the example project on GitHub.




Norbert Spiess
Software engineer, craftsman apprentice, coffee geek, board gamer by heart