Yesterday I gave a talk about the basics of CQS and CQRS. It was part of the Code Mastery meetup I'm co-organising.
Here are my slides. The video should be up soon as well.
We all know, and have all worked with, monolithic applications: one big codebase which, over time, looks more like a hairball than a real application. Usually at this point we want to rewrite it into a microservice architecture. But I think the first thing we can look at is how to write better monoliths.
There are a few reasons why monoliths are so common. One is that they are easy to manage and deploy. They are easy to reason about as well: there is one project, one place where things happen. When a change is coming, you know exactly where it will happen and how to test it. At the beginning of a project, especially in a startup environment, development is faster when you don't need to think about the issues that come with distributed systems.
Usually monoliths are written in a hurry. Features are done quickly and without proper planning. Everything is created in one application as if it were one big bounded context.
The dependency tree, as a result, is all over the place. If you want to use some model you just use it. Because it's in the same codebase, right?
I think this is the sole reason why monoliths go wrong.
Alright. So we have microservices and the monolith. Let's think about how to take advantage of the best parts of both architectures while keeping ourselves in a single codebase.
One of the best parts of distributed systems is that every service has only one responsibility and does it well, similar to the Single Responsibility Principle from SOLID. On the service level this means that every service has its own data and contracts and encapsulates the bounded context of what needs to be done.
We can use this in our monolith with ease. All we need to do is separate each module/component/bounded context into its own package of code. It has its own models, its own data and contracts, the same way a microservice does.
We can do the same with interface modules representing departments or groups of users of our application, with adapters for the specific modules they need to use and nothing more.
Now that we have our modules nicely separated, we need to let them talk to each other, as there are usually plenty of cases where more than one is involved.
When it comes to straightforward calls, we can use an anti-corruption layer (an adapter interface) to deal with it. All we know is some method on some service; we don't really care about the implementation. We're safe when it changes, as all we need to do is change the class implementing the interface, or create a new one.
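To make the adapter idea concrete, here is a minimal sketch in Java. All names (ordering, warehouse, `StockAvailability`) are illustrative, not from the talk: the ordering module owns a small interface, and an adapter translates calls into whatever the warehouse module exposes internally.

```java
// Port owned by the ordering module: the only thing it knows about stock.
interface StockAvailability {
    boolean isInStock(String productId);
}

// Stand-in for the warehouse module's internals.
class Warehouse {
    static int unitsAvailable(String productId) {
        return "book-1".equals(productId) ? 3 : 0;
    }
}

// Adapter: if the warehouse internals change, only this class follows.
class WarehouseStockAdapter implements StockAvailability {
    @Override
    public boolean isInStock(String productId) {
        return Warehouse.unitsAvailable(productId) > 0;
    }
}

public class AdapterDemo {
    public static void main(String[] args) {
        StockAvailability stock = new WarehouseStockAdapter();
        System.out.println(stock.isInStock("book-1")); // true
    }
}
```

The ordering module depends only on `StockAvailability`, so swapping the adapter for a network client later is a one-class change.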
When it comes to sharing data (which we'll touch on next), we can simply use events, as Event Sourcing does, just on the code level. You can even implement or find an event bus which will take care of event propagation in your system. It will be synchronous, but in a monolith everything is.
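A synchronous in-process event bus can be tiny. This is a rough sketch under my own naming (`EventBus`, `UserRegistered`), not a specific library: publishers hand an event object to the bus, and subscribed modules react to it immediately.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Minimal synchronous event bus: publish() calls every listener in turn.
class EventBus {
    private final List<Consumer<Object>> listeners = new ArrayList<>();

    void subscribe(Consumer<Object> listener) {
        listeners.add(listener);
    }

    void publish(Object event) {
        for (Consumer<Object> listener : listeners) {
            listener.accept(event); // synchronous, same thread
        }
    }
}

// Public event of the users module.
record UserRegistered(String email) {}

public class EventBusDemo {
    public static void main(String[] args) {
        EventBus bus = new EventBus();
        List<String> billingEmails = new ArrayList<>();

        // The billing module keeps its own copy of the data it needs.
        bus.subscribe(event -> {
            if (event instanceof UserRegistered u) {
                billingEmails.add(u.email());
            }
        });

        bus.publish(new UserRegistered("a@example.com"));
        System.out.println(billingEmails);
    }
}
```

A real bus would route by event type instead of broadcasting to every listener, but the shape is the same.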
In a microservices architecture every service holds its own data. With a monolith we have one database to deal with. Even if it may look interesting to split the database, it adds unnecessary complexity to our simple modular application.
A better approach is to prefix tables with the module name and let prefixed tables depend only on each other. This means that joins, for example, can be done only inside a specific prefix namespace. When we need data from a different part of the system, we make an in-code call for it.
If we really want to join, and we do this pretty often, we can use the Event Sourcing pattern of materialised views, where a module reacts to a public event of another one. Just like a microservice reacting to an event from a global event bus.
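As a sketch of such a materialised view (names like `BillingUserView` are mine, for illustration): the billing module maintains its own read model of user emails, updated from the users module's public events, so reports can "join" locally without crossing module boundaries.

```java
import java.util.HashMap;
import java.util.Map;

// Public event of the users module.
record UserEmailChanged(long userId, String newEmail) {}

// Billing's materialised view: its own copy of the data it needs,
// kept up to date by reacting to events from another module.
class BillingUserView {
    private final Map<Long, String> emailByUserId = new HashMap<>();

    // Called by the event bus whenever the users module publishes the event.
    void on(UserEmailChanged event) {
        emailByUserId.put(event.userId(), event.newEmail());
    }

    String emailOf(long userId) {
        return emailByUserId.get(userId);
    }
}

public class ViewDemo {
    public static void main(String[] args) {
        BillingUserView view = new BillingUserView();
        view.on(new UserEmailChanged(1L, "new@example.com"));
        System.out.println(view.emailOf(1L));
    }
}
```

In a real application the map would be a prefixed table in billing's namespace, but the mechanics are identical.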
Some may say it's data duplication. I'd say data depends on context. For an authorisation system you need the user's email and password, whereas for billing you need way more. Keeping only one User representation holds you back whenever a change comes to one module or another.
Keeping a microservices architecture inside a monolith may give you the best of both worlds: one codebase, one server, one database from the monolith, and the ease of change and domain relevance from distributed systems. Simply use the patterns used by microservices, replacing tools and network calls with in-code communication.
It should get you through the mono part of the project and give you a simple way to migrate to separate codebases, where you can extract package after package and change the implementation of the adapter interfaces from method calls to network calls.
In my previous article I was hyped about getting insane throughput in my first app. Today I started another app, and the development speed, using quite a difficult architecture, surprised me again.
As you could expect after the previous entry, I'm focusing on Spring Boot, as it delivers everything more or less out of the box, so I don't need to worry that much about additional tools. It's also similar-ish to Symfony2/3, so I feel close to home.
Last Thursday I visited London's Java Meetup, which was about CQRS applications, event sourcing, and how to use them in microservices. A great talk which motivated me to try to build an event-sourced app.
There is nothing better for the weekend than building an application in a language and environment you don't really know that well, using a difficult technology you've never used before. It should give me a weekend or two of fun playing with it.
Sadly, my project, Java and the tools said otherwise. Which is actually a good thing.
It's now 14:23 and I'm basically done with the fully event-sourced API part, with enough materializers to create all the views needed for the MVP. It's kind of crazy to me.
Configuring Axon Framework (a CQRS framework for Java) is very easy. A few services registered in the container... I mean, Beans registered in the Application Context, and it's done. Domain objects work as Command Handlers listening to events, with a JPA (like Doctrine) read layer based on MySQL. Everything worked on the first run, as it's supposed to.
Event sourcing plays a great part in the development speed: I don't need to care about the database at all, and I don't even think about it before I finish all the domain processes. After that, there is time to reflect on what I want to show to the user, what views I'll have in the application, and how I want to shape that data so it's easy to read.
I'd really like to find some downsides, things I won't like in Java. I know about WebSphere and the old "Application Server" ways, and that they were terrible, but that's the past. Currently I feel like Java is gaining ground back after years of being "slow" and "too corporate".
I remember doing research years ago, when I was looking for a language to learn after seeing no future in Delphi. I chose PHP as it was growing, and for being nice and fast for developing websites. Java felt "too big for the task". Today it seems to be the other way around.
CommandBus and CQ(R)S have gained a lot of popularity over the last year. Today I got a question about the benefits of the switch. As I've used it by default for the last year, I'd never thought about it. I'll try to summarize it in this article, and in the next one I'll explain how you can change your architecture in a live project in a reasonable and production-safe way. Let's get to it!
CQS stands for Command Query Separation. Simple as it is: every operation which is not just a read is encapsulated in a Command and handled by a CommandHandler. The biggest benefit of this approach is that you have each business operation in a separate class. Thanks to that, when a change has to be made it's very clear where it should be done, and when a new feature comes in, it won't interfere with what you already have.
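The whole idea fits in a few lines. A minimal, framework-free sketch (all names, like `DeactivateUser`, are illustrative): one business operation becomes one command object plus one handler class.

```java
import java.util.HashSet;
import java.util.Set;

// The command: an immutable description of one business operation.
record DeactivateUser(long userId) {}

// The handler: the one place where that operation is implemented.
class DeactivateUserHandler {
    private final Set<Long> deactivated; // stands in for a repository

    DeactivateUserHandler(Set<Long> deactivated) {
        this.deactivated = deactivated;
    }

    void handle(DeactivateUser command) {
        deactivated.add(command.userId());
    }
}

public class CqsDemo {
    public static void main(String[] args) {
        Set<Long> deactivated = new HashSet<>();
        new DeactivateUserHandler(deactivated).handle(new DeactivateUser(7L));
        System.out.println(deactivated.contains(7L)); // true
    }
}
```

When a new operation appears, you add a new command and handler pair; nothing existing is touched.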
The second argument is closely related to the first, and it's about communication between the dev team and the business. Each handler is one specific case, and its name reflects the business language rather than what the dev team thinks it is. You're dropping meaningless *Service classes which grow past a reasonable size very fast.
Stopping unreasonable growth is part of SOLID. The Single Responsibility Principle says that each class should have one specific job. Keeping separate handlers is the best way to achieve that.
Very important in all of this is the separation between input and business logic. All you can do outside the CommandBus is create and validate a command and send it through the bus. Your controllers get very thin, and you can probably automate half of the work even before that. You also stop caring where and how a command is created: you can execute the same operation from many places without any modifications to your business code.
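Here is a sketch of that routing, again with illustrative names rather than a specific library: the bus maps a command type to its handler, so the "controller" only builds the command and dispatches it, never knowing which handler runs.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Toy command bus: one handler per command class.
class CommandBus {
    private final Map<Class<?>, Consumer<Object>> handlers = new HashMap<>();

    <C> void register(Class<C> type, Consumer<C> handler) {
        handlers.put(type, command -> handler.accept(type.cast(command)));
    }

    void dispatch(Object command) {
        Consumer<Object> handler = handlers.get(command.getClass());
        if (handler == null) {
            throw new IllegalArgumentException("No handler for " + command.getClass());
        }
        handler.accept(command);
    }
}

record RegisterUser(String email) {}

public class ThinControllerDemo {
    public static void main(String[] args) {
        CommandBus bus = new CommandBus();
        List<String> registered = new ArrayList<>();
        bus.register(RegisterUser.class, c -> registered.add(c.email()));

        // The whole "controller": build the command and dispatch it.
        bus.dispatch(new RegisterUser("a@example.com"));
        System.out.println(registered);
    }
}
```

A CLI command, a queue worker, or an HTTP controller can all dispatch the same `RegisterUser` and hit the same business code.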
The cherry on top, when you're creating any API, is that you can deserialize your input, whatever it is (JSON/XML), straight into the command object. It saves a lot of time, especially with Symfony 2, where you can create a very neat ParamConverter to do the work for you and throw a 400 if validation fails.
Let's take a UserService from a thin-controller, fat-service architecture. It has two methods: registerUser and updateUser. The first creates a user and sends a confirmation email. The second just updates fields on the User.
The first thing broken is SOLID, as two operations are handled by the same class. You also need a dependency on some mailer to be able to send the email during registration. But do you need to instantiate it when updating a user? No. So we can assume you're losing time and resources on every update of the user.
There's also the question: WTF is "updateUser"? Which business process is it? When the user changes their email? When the user changes their password? When the user is promoted to a different role?
Usually all of the above processes will use this method, and you'll have a lovely stack of ifs to determine when you have to send a confirmation of the new email, encode the password, or just update the column with the role value. Or, even worse, you'll have event listeners on specific fields hidden somewhere deep in the code; the first new person will add another one, knowing nothing of the old ones, and after another month you'll be adding parameters to force the listeners to fire in the proper order.
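The refactoring falls out naturally: each of those business processes becomes its own command, so the ifs and hidden listeners disappear. A sketch with hypothetical names (the mailer here is just a list recording what would be sent):

```java
import java.util.ArrayList;
import java.util.List;

// One command per business process instead of one vague updateUser().
record ChangeUserEmail(long userId, String newEmail) {}
record ChangeUserPassword(long userId, String newPassword) {}
record PromoteUser(long userId, String role) {}

// Only this handler needs the mailer; password and role changes never
// instantiate it.
class ChangeUserEmailHandler {
    private final List<String> mailer; // stands in for a real mailer

    ChangeUserEmailHandler(List<String> mailer) {
        this.mailer = mailer;
    }

    void handle(ChangeUserEmail command) {
        // ...persist the new email, then send the confirmation:
        mailer.add("confirm:" + command.newEmail());
    }
}

public class SplitDemo {
    public static void main(String[] args) {
        List<String> sent = new ArrayList<>();
        new ChangeUserEmailHandler(sent)
                .handle(new ChangeUserEmail(1L, "new@example.com"));
        System.out.println(sent);
    }
}
```

Each handler states its one process in its name, and its dependencies tell you exactly what that process needs.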
Been there done that ;)
With each process in a different class you avoid all of that. You do your tasks with speed and precision, saving your time and the business's money.