Common Sense Driven Development

Nowadays every day or week brings a new framework or tool everyone is hyped about – there are entire sites dedicated to trolling JS people about exactly that. Development is a lot about these new and exciting technologies, but day-to-day life is not as simple as using the cutting-edge, shiny things.

The double-edged sword of Cargo Cults

For the definition I’ll fall back to good old Wikipedia:

(…) attempt to emulate more successful development houses, either by slavishly following a software development process without understanding the reasoning behind it, or by attempting to emulate a commitment-oriented development approach (in which software developers devote large amounts of time and energy toward seeing their projects succeed) by mandating the long hours and unpaid overtime, when in successful companies these are side-effects of high motivation and not requirements.

While management issues are important, I’d like to focus more on the first part of the definition.

From time to time new tools and practices are released and the world goes crazy. I’d say React.js is one of them. Others may be the Netflix Cloud tooling, or good old Docker and Kubernetes on the Dev/Ops side.

And don’t get me wrong, I like them all. But there is a difference between what you can use and what you should use to make your project successful. The context of a decision matters more than the decision itself.

Having a technology that solves your problem is great, but you may still fail because of a very steep learning curve. A tool may lose support in a few months, or a new version will be released and you’ll have nice, shiny legacy code even before your release.

What to look for

  1. Make sure you’re not trying to use the same hammer for every nail – there are a lot of technologies, and some are better at certain tasks than others. Think PHP and multithreading or long-running processes: you don’t want to do that to yourself. Maybe a better solution would be to have people learn a bit of Java or Node.js for that subsystem?
  2. Support – is the library you want to use “mainstream” enough for you to be sure it will still exist in a few years? On the other hand, ask yourself if you really need a library for some very simple functionality you can write in about 20 seconds.
  3. Learning curve – check with your team whether the new solution can be understood and implemented the correct way. As an example, take CQRS and Event Sourcing, which are quite complicated topics used mostly in enterprise environments. People often think they are a silver bullet for their problems and go all in. Often they are right, but as it takes time for people to learn about the pitfalls, it’s better to take a middle ground and start with just emitting events before switching to the ultimate solution.
  4. Look at yourself first – there are a lot of companies and a lot of ideas. None of them is a silver bullet. There are also old, “bad” ideas, like the monolith. And those bad ideas are good in some cases – like when you have a quite big application to write with a small team.
  5. Take authorities with a grain of salt – aka the Cargo Cult of the person. It happens when the opinion of one person becomes the opinion of the community. You know examples of that from global politics. And I’m not saying those people are wrong. They are just preaching one solution that they like, whether or not it’s the correct solution for you. Their acolytes will quote them in every meeting, and arguments about the actual need for and correctness of a solution will be pushed back by the argument that a well-known person holds an opinion.

There is only one correct answer – it depends

I’d assume there are as many styles of coding and tools as there are developers in the industry. Some are better than others. Some are evolving and getting better and better. Some are legacy at the idea level but still generating revenue for the company.

The bottom line is that there is no single answer to a problem. The context of the problem changes everything, and I think it’s the most important thing to look at when making technical and process decisions. Only then choose which hyped tech to use in the next project.

Modular monoliths

We all know, and have all worked with, monolithic applications. One big codebase which, over time, looks more like a hairball than a real application. Usually at this point we want to rewrite it into a microservice architecture. But I think the first thing we can look at is how to write better monoliths.


Why do I even think about building a monolith?

There are a few reasons why monoliths are so common. One is that they are easy to manage and deploy. They are easy to reason about as well: there is one project, one place where things happen. When a change is made you know exactly where it will happen and how to test it. At the beginning of a project, especially in a startup environment, it’s faster to develop when you don’t need to think about the issues that come with distributed systems.


So where did it all go wrong?

Usually monoliths are written in a hurry. Features are done quickly and without proper planning. Everything is created in one application as if it were one big bounded context.

The dependency tree, as a result, is all over the place. If you want to use some model, you just use it. Because it’s in the same codebase, right?

I think this is the main reason why monoliths go wrong.


How can we do this better?

Alright. So we have microservices and the monolith. Let’s think about how to take advantage of the best parts of both architectures while keeping ourselves in a single codebase.


Single responsibility principle

One of the best parts of distributed systems is that every service has only one responsibility and does it right. It’s similar to the Single Responsibility Principle from SOLID. What it means at the service level is that every service has its own data and contracts and encapsulates the bounded context of what needs to be done.


We can use this in our monolith with ease. All we need to do is separate each module/component/bounded context into a separate package of code. It has its own models, its own data and contracts, the same way a microservice does.

The same can be done with interface modules representing departments or groups of users of our application, with adapters for the specific modules they need to use and nothing more.



As we have our modules nicely separated, we need to let them talk to each other, as there are usually a lot of cases where more than one is involved.

When it comes to straightforward calls, we can use an anti-corruption layer (an adapter interface) to deal with it. All we know is that there is some method in some service, and we don’t really care about the implementation. We’re safe when it changes, as all we need to do is change the class implementing the interface, or create a new one.
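As a sketch of such an anti-corruption layer – all module, class and field names here are made up for illustration, not taken from any real codebase:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


# The contract the Orders module exposes to its callers.
@dataclass(frozen=True)
class OrderSummary:
    order_id: str
    total: float


class OrdersPort(ABC):
    """Anti-corruption layer: callers depend on this interface only."""

    @abstractmethod
    def summary(self, order_id: str) -> OrderSummary: ...


class InProcessOrdersAdapter(OrdersPort):
    """Adapter hiding the Orders module internals behind the port."""

    def __init__(self, orders_service):
        self._orders = orders_service

    def summary(self, order_id: str) -> OrderSummary:
        raw = self._orders.load(order_id)  # internal representation
        return OrderSummary(order_id=order_id, total=raw["amount_due"])
```

When the Orders module changes its internals – or later moves to a separate service – only the adapter class is rewritten; callers keep depending on OrdersPort.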

When it comes to sharing data (which we touch on next), we can simply use events, as Event Sourcing does, just at the code level. You can even implement or find an event bus which will take care of event propagation in your system. It will be synchronous, but in a monolith everything is.
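A minimal synchronous in-process event bus can be sketched in a few lines (the event class here is just an illustrative example):

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass(frozen=True)
class UserRegistered:  # example event; the name is illustrative
    user_id: str
    email: str


class EventBus:
    """Minimal synchronous in-process event bus."""

    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._handlers[event_type].append(handler)

    def publish(self, event):
        # Synchronous: every handler runs before publish() returns.
        for handler in self._handlers[type(event)]:
            handler(event)
```

A billing module could subscribe to UserRegistered without the users module ever knowing that billing exists.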



In a microservices architecture every service holds its own data. With a monolith we have one database to deal with. Even if it may look interesting to split the database, it adds unnecessary complexity to our simple modular application.

A better thing to do is to prefix tables with the module name and allow prefixed tables to depend only on each other. What that means is that joins, for example, can be done only inside a specific prefix namespace. When data from a different part of the system is needed, we have to make an in-code call for it.

If we really want to join – and we do this pretty often – we can use the Event Sourcing pattern of materialised views, where a module reacts to a public event of another one. Just like a microservice reacting to an event from a global event bus.

Some may say it’s data duplication. I’d say data depends on context. For an authorisation system you need the user’s email and password, where for billing you need way more. Keeping only one User representation holds you back when it comes to changing one module or the other.
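A hedged sketch of such a materialised view: the billing module keeps its own copy of the user fields it cares about, updated by reacting to public events of the users module (event shapes and names are my assumptions):

```python
class BillingUserView:
    """The billing module's own representation of a user, kept up to
    date by reacting to public events of the users module. In real
    life self._rows could be a billing_users table."""

    def __init__(self):
        self._rows = {}

    def on_user_registered(self, event: dict):
        self._rows[event["user_id"]] = {
            "email": event["email"],
            "vat_number": event.get("vat_number"),
        }

    def on_user_email_changed(self, event: dict):
        self._rows[event["user_id"]]["email"] = event["new_email"]

    def email_of(self, user_id: str) -> str:
        return self._rows[user_id]["email"]
```

The users module never joins into billing tables and vice versa; each side owns exactly the representation its context needs.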


Putting it all together

Keeping a microservices architecture inside a monolith may give you the best of both worlds. One codebase, one server, one database from the monolith, and ease of change and domain relevance from distributed systems. Simply use the patterns used by microservices, replacing tools and network calls with in-code communication.

It should get you through the mono part of the project and give you a simple way to migrate to separate codebases, where you can extract package after package and change the implementation of the adapter interfaces from method calls to network calls.

Designing REST APIs

With the current trend of microservices, REST APIs are taking the leading position as their implementation. There are always things that need to be done in a synchronous way. Public interfaces are also a thing, and REST is the most obvious way to expose your functionality to 3rd party users. In this article I want to show you the result of my research on how to design a REST API that will last and be comfortable to work with over the long term.

HTTP and this thing we call contract

Let’s start from the very beginning. HTTP is our transport layer: a simple text protocol with a request and a response.

Status codes

An awful number of times I’ve seen APIs with 200 as the response code and the real status of the operation in the response body.

One could say it has everything that’s needed on the client side to react in case of error, but it’s not that obvious. The principle of least astonishment is one of the most important terms related to this topic. There are millions of developers around the world and probably quite a few in your organisation. Everyone has some idea about how an API should work. Probably most of them know what 200, 201 Created and 400 Bad Request are. And everyone knows what 500 is ;)

Just by that, speaking the same language makes life easier and removes one thing to learn when trying to integrate. With status codes you also enable yourself to create response strategies. A successful one will contain your resource data. If validation fails, you can tell the client it sent a Bad Request and express this by returning exactly that – HTTP/1.1 400 Bad Request – with a response body optimised to show the errors. I’d also say that’s what most of the clients of your API would expect from you.
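Such a response strategy can be sketched framework-agnostically as a handler returning a status code plus a body – the handler and field names here are made up:

```python
def create_customer(payload: dict):
    """Sketch of a handler where the status code carries the outcome
    and the body only carries domain data or validation errors."""
    errors = {}
    if not payload.get("email"):
        errors["email"] = "is required"
    if errors:
        # maps to HTTP/1.1 400 Bad Request
        return 400, {"errors": errors}
    # maps to HTTP/1.1 201 Created, body contains the resource data
    return 201, {"id": 1, "email": payload["email"]}
```

The client can branch on the status code alone, without parsing the body first to learn whether the call succeeded.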


Headers

When you take a look at an HTTP response, headers are what comes next after the status code. The role of headers is to provide metadata for your request and response. What is metadata? Everything not directly related to what your API should do: authentication, current user, correlation IDs, some additional tokens. I’ve seen a lot of, and in the past sent, authentication data as _token: “value”. It has an underscore, so I know it’s not part of the contract, right? Not really. Now I need to model all the representations to have this field. What’s worse, I need to deserialise the body to get this information, when it may not even be needed because of an authorisation error right after.

One more thing that’s happening here, though not in a very visible fashion, is non-functional requirements creeping into business logic. Because it’s easier to deserialise everything, we’ll have this token in our DTO, and then we use this generic field to make decisions. And then, when we want to change the way security works, we’re screwed, because _token is all over the place in our code. OK, so let’s use headers for everything that is not related to the domain of our API. If you need something non-standard, feel free to use an X-Custom-Header for fun and profit and to get your data around.

Here, also a quick mention of the Content-Type and Accept headers: they get recognised in most cases, so there is not that much need to rant about them ;)


Body

Let’s talk now about what your API really does: the body of your response. In common language it’s called the Contract, basically because it’s the agreement you’re making with your clients on how you will communicate. I mentioned headers previously for non-business-specific data. The body should be only about the business domain of your API, and as short as possible. Sometimes it’s hard, but ask yourself whether all that data is really needed. I’d also try to keep nesting as minimal as possible, as traversing big graphs of objects is pretty annoying in the long term. It’s also important to avoid exposing internal data. But let’s talk about this more in the next chapter.


Tutorials showing you how to dump your entities as JSON straight from the database are just wrong. If you just want to save your data via HTTP and keep no logic in the API – create a repository in your code and use the database directly. Adding network overhead is pointless, and your logic will sit in the clients, so you’re not getting any centralisation of logic.

What is a Resource, then?

For me it’s related to aggregate roots from Domain Driven Design. Those are the entities of your business domain. An entity is an element of your system which holds its identity – usually an ID. And it’s that ID you’ll search for when asking for data about something.

Let’s take the example of a Customer. You’re interested in your customer and can think about many operations related to her. On the contrary, an Address usually isn’t that interesting on its own. We’re talking about it as a Value Object. Let’s take a look at how this could look.

Even if your data model stores the address in a separate table with IDs, you will never ask about it without the specific context of the entity it belongs to.

On the API level we’ll model our resources as URIs. Each resource has a unique ID, so when we combine it with its name we get a unique access point to the data about it – a GET on that URI returns the resource representation.

And as we said, Address is a subresource of Customer, so nesting it under the customer’s URI is the way to access only its data with a GET.

HTTP verbs

A resource is a noun. As REST says, we use verbs to interact with it. Those verbs are GET, POST, PUT, PATCH, DELETE, etc. GET reads data. POST creates a new resource (and returns 201 Created if it succeeded). PUT replaces a resource. PATCH makes a partial change. DELETE removes a resource.

Those are another part of the HTTP language of interactions. With them we can do all the CRUD operations on our entity. If your API and business case are simple enough, you are probably good to go.
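The verb-to-CRUD mapping can be sketched with a toy in-memory resource (the class and routes are illustrative, not tied to any framework):

```python
class CustomerResource:
    """Toy in-memory mapping of HTTP verbs to CRUD operations."""

    def __init__(self):
        self._store, self._next_id = {}, 1

    def post(self, data):                      # POST /customers
        cid, self._next_id = self._next_id, self._next_id + 1
        self._store[cid] = dict(data)
        return 201, {"id": cid, **self._store[cid]}

    def get(self, cid):                        # GET /customers/{id}
        return (200, self._store[cid]) if cid in self._store else (404, None)

    def patch(self, cid, partial):             # PATCH /customers/{id}
        self._store[cid].update(partial)
        return 200, self._store[cid]

    def delete(self, cid):                     # DELETE /customers/{id}
        self._store.pop(cid, None)
        return 204, None
```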

Business logic in REST API

I think the CRUD approach to APIs is wrong, especially with the previously mentioned exposure of data, and with business logic living on the client side instead of inside the service.

To hide behaviour inside the REST API we can do one of two things: use a resource as a business process, or use “resource functions”.

Business process as a resource

This concept comes from the world of CQS, where write and read operations are separated. This world also lacks the concept of data mutability: all you can do is add more data. So the first thing you’ll do is drop support for the PUT, PATCH and DELETE request types. This blocks 3rd parties from modifying data as they want – one of the major causes of business logic leaking to the clients. We’re left with only the GET and POST request types. With command and query separated, that’s all we need: GET requests query for data and POST dispatches commands to our system. The thing I struggled to grasp was how to model URIs in this type of API. It turns out you can treat the command as a separate resource.

This way we separate our code and the access point to it. It is also easy to see and document what the API is doing. On the other hand, think about what you could do with this endpoint when called with GET – for example, show all successful requests for reporting purposes. Pretty neat if you ask me. And everything in one place.
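A hedged sketch of a command-as-resource endpoint – the URI /customer-registrations, field names and status codes are my assumptions for illustration:

```python
class CustomerRegistrations:
    """Command-as-resource sketch: POST dispatches the command,
    GET reports on the commands handled so far."""

    def __init__(self):
        self._handled = []

    def post(self, command: dict):      # POST /customer-registrations
        if not command.get("email"):
            return 400, {"errors": {"email": "is required"}}
        self._handled.append({"email": command["email"], "status": "ok"})
        return 202, {"accepted": True}

    def get(self):                      # GET /customer-registrations
        return 200, self._handled       # e.g. for reporting purposes
```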


The other approach, which a friend of mine uses, is based on the idea that resources have functions. Those functions live as subresources of the entity they interact with. The previous example done as a function would live under the entity’s URI – something like POST /customers/{id}/register.

The good thing about it is that it keeps the context of the resource the function interacts with, so it’s quite easy to reason about. Not being fully compliant with the REST specification is a downside. As I showed before, subresources can be shown in the body of a resource. Having a function here would imply we’d show this list of operations as a field, and usually we don’t want that to be the case.

Anyway, I think both approaches are good. The main goal we want to achieve is to hide the business logic, and it can be done nicely in both styles.

Hypermedia as the Engine of Application State

It sounds complicated and has a strange acronym (HATEOAS), but after all it’s just about using another feature of the common web in our API. That feature is linking between resources and other operations. Let’s take the example of a list of customers. One of the most pointless examples in all of the tutorials is having it mapped 1 to 1 to the findAll() method of a repository. Outside of a development environment, there will never be a case where you show the list of all customers. The feature you need here is pagination.

Without hypermedia

So let’s add optional parameters to our API: page_number and num_per_page. The defaults are set to 1 and 20 respectively. So far so good. Clients implement this feature and everything works. Now let’s take a look at the interface of common pagination: we usually show links to the previous and next page. But currently we don’t have information about pages at all. The only time we’re sure there is no next page is after a call we make to the API – when we get an empty list or a pagination error, we stop paginating. All of this logic has to sit all over the client code, even though all of this data is available from the API’s perspective.

Hypermedia in API

How can we do better? We can use something like HAL – Hypertext Application Language. Instead of putting parameters in the documentation, we’ll add another key to our response – _links – which will hold the URLs used for pagination.

Now our clients need to read this field and call the endpoint presented as its value to get the next page. Easy. It seems like not that big a change, but now we can determine whether there is a previous or next page based on the API call alone. With a metadata structure we can provide even more details about endpoints, like templates for URLs to specific pages. The important thing is that clients are completely unaware of our naming convention and URL schema. We can change the URI of the next page, or rename pagination-related variables, without breaking the client.
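A sketch of building such a response – the _links shape follows the spirit of HAL, though it is a simplification rather than a full implementation of the spec, and the /customers base path is made up:

```python
def paginate(items, page_number=1, num_per_page=20, base="/customers"):
    """Build a HAL-style page: data under _embedded, navigation
    under _links. prev/next appear only when they exist."""
    start = (page_number - 1) * num_per_page
    page = items[start:start + num_per_page]
    links = {"self": {"href": f"{base}?page_number={page_number}"}}
    if page_number > 1:
        links["prev"] = {"href": f"{base}?page_number={page_number - 1}"}
    if start + num_per_page < len(items):
        links["next"] = {"href": f"{base}?page_number={page_number + 1}"}
    return {"_embedded": {"customers": page}, "_links": links}
```

A client only checks for the presence of "prev"/"next" and follows the href – it never builds URLs itself.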

Hypermedia and POST requests

A similar thing can happen for requests where we want to create new resources. Take a look at an API home page built this way: you can see all the different things the API can provide. All you need to do is follow the links to find what you’re interested in, just as you do on all the websites. Again, as with the pagination example, we’re freeing API client implementations from knowledge about the detailed structure of the API.

Here we have the example of a customers endpoint which tells us we can perform two operations: just create a Customer, or register a Customer. Let’s say the requirements for both processes are different, but for the sake of time we put registration in the same API. From the client perspective we call /customers first to get the list of available operations, and then we send our payload to the register endpoint. Everything works fine.

Introducing new API

Now we want to add a registration API, as it turns out it’s not the concern of Customer to register all the other data. Let’s think first about what happens if we have our link to the operation hardcoded in a client library (we did at least that much). After our updated library is released, we need to update all of the client applications with the new version number and redeploy them. If we miss one, we’ll have a problem. If you’re working in a bigger company, you’ll also mess with other teams’ release plans. Also, because you can’t make the change in one click, you need to keep the old endpoint working for some time until the migration finishes. And knowing life, there will be a new feature coming which needs to be implemented in both places, and one of the teams can’t release because of reasons.

Just one small change and you’ve got yourself, and half of the company, in trouble. Now let’s take a look at what happens if you use the hypermedia contract presented in the previous example.

All we changed is one value, in one place, in one deployment. Now everyone who wants to register a customer will be sent to another server, and their job will be done. You don’t need to know who is using your service to make changes. Other teams also have time to migrate to calling the new service directly. No dependencies beyond what is necessary.

On chattiness of hypermedia APIs

One argument I hear a lot against this kind of solution is that instead of one direct call you need to make many, and with quite a few of them between the end-user interaction and the finished process, it adds a lot of time to the whole thing. In general it’s true. It’s also a tradeoff between speed and flexibility. As with everything in software – there is no silver bullet. What we can do is mitigate the effect of the multiple calls at the client level. At least a bit.

What we’re doing is discovery of the URI for the resource we want to call. Most of the time it will be the same, so why not cache it locally? If the cache is hit, we go straight to the URI; on a miss, we do the few additional calls to get it. If the currently cached URL returns a 404, we do the same. As long as we keep the _links path in place, we’re guarded against any issues related to moving endpoints between applications. At some point we may even retire an API and, instead of providing any functionality, just return _links values pointing to the new APIs. The application is dead, but everything keeps working. That doesn’t happen that often :)
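The caching idea can be sketched like this – `fetch` is a stand-in for your HTTP client, and the link-relation names are assumptions:

```python
class LinkCache:
    """Caches discovered URIs; on a cache miss, or a 404 from a stale
    URI, it re-walks _links discovery from the entry point."""

    def __init__(self, fetch, entry_point):
        self._fetch = fetch
        self._entry = entry_point
        self._cache = {}

    def _discover(self, rel):
        _status, body = self._fetch(self._entry)
        uri = body["_links"][rel]["href"]
        self._cache[rel] = uri
        return uri

    def call(self, rel):
        uri = self._cache.get(rel) or self._discover(rel)
        status, body = self._fetch(uri)
        if status == 404:               # stale cache: rediscover once
            status, body = self._fetch(self._discover(rel))
        return status, body
```

When everything is warm, it costs one call; only a moved endpoint pays the extra discovery round-trip.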

Design and documentation

I’ve put those two together because they should live together. Too often documentation is the black sheep of the process, left for the last step and sometimes not even touched for several months after changes are made. Been there; done that ;)

Open API standard

Over the years many people have tried to create one standard for API specifications. Today the most widely accepted and proven one is OpenAPI. What it does is provide an easily writeable JSON markup to describe API contracts. As it’s a very wide topic I won’t go into specifics; you can check all the details of the current specification here.

What is important is to use it before you start creating your APIs. Sit down with your team and the people who will use your API and brainstorm what data and operations are needed from your new app. During this process, create a JSON document which describes what you’re discussing. Later you can put it in the repo so it’s visible to everyone (you could say it should be the first commit), and if something changes along the way everyone will see the changes made.

This way of creating APIs also enables parallel work between the API and client teams, as the contract is discussed prior to the work. A nice thing is also the possibility of generating client libraries from the JSON document you’ve just created. And last but not least – beautiful documentation.


As the Swagger website states:

Swagger is a simple yet powerful representation of your RESTful API. With the largest ecosystem of API tooling on the planet, thousands of developers are supporting Swagger in almost every modern programming language and deployment environment. With a Swagger-enabled API, you get interactive documentation, client SDK generation and discoverability.

I mentioned some of these features before. Now I want to touch on documentation and Swagger UI, which simplified the way documentation is created to a level where there is no way back ;) Swagger UI creates web documentation from your OpenAPI specification document. And not only that: the documentation website created from it is fully interactive. Besides providing all the important information, it allows you to test the API in real time, presenting everything it does in copy/paste format. Generating client libraries is a blessing, as everything can be verified by the client team before the first line of code. Also, your QA team will be very happy, as they can now focus on their work more than on typing endless JSONs into Postman. This makes the road to production way faster and makes everyone in the business happy with working systems.

Putting it into practice

Practice makes perfect. All the ideas mentioned in this article will stay ideas until you start working with them. As I’m always saying – there is no silver bullet, especially in our industry, which is changing on a daily basis. Try them, evaluate whether they are good for your scenario, and let others know what worked for you.

Event driven microservices

I got lucky and got a spot at the last GOTO conference in London. The theme was, obviously, event driven microservices and how they simplify the whole architecture of complex applications. After 2 days of listening about them, there was no other choice than to prototype.

Most intriguing for me was the idea of performing reads through the event bus as well as writes. It kind of hit me as being against CQS. I couldn’t get why and how. Being in the mindset of transactional storage wasn’t helping either :P

Building event driven microservices

The requirement was simple. One service provides data, another requests and processes it, plus some interface to interact with. No need for web, as it’s a prototype/proof of concept. A console app it is, with Symfony as the framework of choice – the simplest way to scaffold code. I was initially thinking about getting Kafka, but again, I wanted to see results without caring much about the technology.

If you want to build a system like that in real life, please don’t use PHP for it :D

You will find the code here.

So how does it work?

I was positively surprised by the effect. Everything was obviously done in under a second. Even checking at the microsecond level showed loads of time saved in comparison to HTTP.

Think of it this way – sending an order and getting an item gives you 2 HTTP calls. It’s not much, but with more services interested in this topic it will only grow. And all of it will be done during the main order request, so the user-facing system will be slowed down by every new service attached. Even if the response time is about 50ms, with only 4 of them it adds 200ms on top of all the other processing.

When your microservices communicate with each other through events, you get at least three advantages I can think of right now:

  1. No HTTP; fewer network connections mean less time wasted on the wire
  2. Events are asynchronous, so when your critical path is finished you can inform the end user about it without waiting for all the rest to finish
  3. If one of the services in the chain fails, the events stay in the infrastructure. You’re not losing any data. The rest of the system works as normal. The error is atomic to one place. And after it’s fixed, you can start processing messages from where it stopped.

The last point is illustrated in my example. When you request a non-existent item, ItemService fails. Fix the collection by adding the new index and rerun it. Boom. The order gets processed like nothing happened, just with a bit of delay.
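The fail-and-resume behaviour can be sketched with a plain in-memory queue – a stand-in for what a real broker gives you for free, with made-up event shapes:

```python
from collections import deque


def process_queue(queue: deque, handler):
    """Drain a queue of events; on handler failure the failing event
    stays at the head so processing can resume after a fix."""
    processed = []
    while queue:
        event = queue[0]
        try:
            processed.append(handler(event))
        except Exception:
            break              # leave the failing event (and the rest) queued
        queue.popleft()
    return processed
```

Nothing is lost on failure: after the fix, calling process_queue again picks up exactly where it stopped.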


If you can use this pattern, it will give you a lot of freedom. Communicating via an event bus of any sort will save you a lot of time and money. The one thing left to do is convince your team and company it’s the right thing to do. But that’s a topic for another post ;)

Programming assignment #1 – Booking tickets

This entry is part 1 of 1 in the series Programming assignments

Programming is fun. I often have a problem finding an issue I want to solve. With that in mind, I’m starting to publish the programming assignments I make for myself.

Today we will have a simple microservice to manage concurrent ticket bookings for “events”. Let’s say Iron Maiden announces a small concert for 100 people on all their social media channels, with a specific time when booking opens. Our problem is to handle the big spike of traffic at the moment booking becomes available.

Acceptance criteria:

  • The application creates an event with a pool of tickets of a specific size
  • The application allows reserving a ticket (or tickets) for 1 hour
  • The application allows transforming a reservation into a real ticket
  • After a reservation times out, the spot becomes available for another user
  • It’s impossible to reserve more tickets than are available in the pool
  • After the last reservation is transformed into a ticket, the event is closed

I’m leaving proper communication of errors out of scope, but it’s always good to have a framework which allows you to communicate the state of the system to external clients.

I’m leaving the rest up to you – how you implement a solution for this problem is your choice.
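To avoid spoiling the fun, here is only a naive single-process starting point for the acceptance criteria – the TicketPool name and its reserve/confirm operations are my assumptions, and the interesting parts (concurrency, persistence, HTTP) are deliberately left out:

```python
import time


class TicketPool:
    """Naive in-memory sketch of the acceptance criteria."""

    def __init__(self, size, hold_seconds=3600):
        self.size, self.hold = size, hold_seconds
        self.reservations = {}     # reservation id -> expiry timestamp
        self.tickets = set()
        self._next = 1

    def _purge(self, now):
        self.reservations = {
            rid: exp for rid, exp in self.reservations.items() if exp > now
        }

    def reserve(self, now=None):
        now = time.time() if now is None else now
        self._purge(now)           # timed-out holds free their spots
        if len(self.reservations) + len(self.tickets) >= self.size:
            return None            # pool exhausted
        rid, self._next = self._next, self._next + 1
        self.reservations[rid] = now + self.hold
        return rid

    def confirm(self, rid):
        if self.reservations.pop(rid, None) is None:
            return False
        self.tickets.add(rid)
        return True

    @property
    def closed(self):
        return len(self.tickets) == self.size
```

Making this safe under a real traffic spike – many processes racing for the last spots – is the actual assignment.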

As always in software, there is no one good answer, as it’s possible to handle this kind of problem in basically any language, with or without external libraries or services.

If you have any questions don’t hesitate to ask. I’ll do my best to answer as soon as possible.

Enjoy and good luck coding!