Deterministic business logic, declarative side effects
When architecting a new piece of software, I often find myself reading up on various design/architectural patterns in order to garner inspiration. However, I usually skip the source code portion, because I am less interested in the actual implementation of the design pattern than in the mental model and principles behind it. The same is true when looking at a new software framework: when reading through its tutorials and source code examples, I’m not necessarily looking for “how to use this framework”; I’m trying to uncover the underlying fundamental concepts and principles that the framework is based upon and tries to enforce. Understanding those underlying concepts often lets you find common principles in software engineering, principles driven by decades of experimenting with different approaches, often to solve the same problems.
One of those principles that I have recently been abiding by can be summarised as follows: deterministic business logic, declarative side effects. Let’s explore the premise of that principle, define it, then look at some real-world, successful applications of it in modern software.
Premise
Our job as software engineers fundamentally revolves around translating business requirements into a set of programs that fulfill those requirements. This means we have to understand the functional business requirements and map them onto the technical medium that our computers understand, so that said computers can run programs that produce correct and useful output at scale, and so that said programs can easily be adapted as business requirements change.
However, the intertwined nature of these two aspects of software engineering often leads us to produce sets of programs that:
produce incorrect output under certain scenarios
fail to meet the needed scaling requirements
are hard to understand and update as business requirements change
Let me illustrate with an example. Let’s say we want to build a program that notifies the owner of a store via email whenever a new item is sold, but only if the amount is more than 100 USD and if the customer’s location is not in Senegal: that’s our functional requirement.
Let’s imagine our sales data is stored in a MongoDB collection and our program is written in NodeJS. Here is what we get (courtesy of ChatGPT).
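Roughly, a minimal sketch of such a program might look like the following, assuming the official mongodb driver and nodemailer; the field names, connection details and addresses are illustrative assumptions.

```javascript
// Everything in one place: change-stream plumbing, mail transport and the
// business rule itself (amounts over 100 USD, customers outside Senegal).
const { MongoClient } = require('mongodb');
const nodemailer = require('nodemailer');

async function main() {
  const client = new MongoClient('mongodb://localhost:27017');
  await client.connect();
  const sales = client.db('shop').collection('sales');

  const transporter = nodemailer.createTransport({
    host: 'smtp.example.com',
    auth: { user: 'user', pass: 'pass' },
  });

  // Watch the sales collection for newly inserted documents.
  const changeStream = sales.watch([{ $match: { operationType: 'insert' } }]);

  changeStream.on('change', async (change) => {
    const sale = change.fullDocument;

    // Business rule buried inside the technical plumbing.
    if (sale.amountUsd > 100 && sale.customerCountry !== 'SN') {
      await transporter.sendMail({
        from: 'alerts@example.com',
        to: 'owner@example.com',
        subject: 'New sale',
        text: `A new sale of ${sale.amountUsd} USD was made.`,
      });
    }
  });
}

main().catch(console.error);
```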
From a technical perspective, we watch the MongoDB sales collection change stream for an insert event, and use a mailer library to send an email whenever a new document has been inserted.
However, this program presents a glaring issue: business logic and technical details are visually and conceptually intertwined in the same blob.
This is unnatural, as our way of thinking about computer programs naturally follows a sequence: discuss functional requirements with our SMEs/clients, come up with business rules, think of the logical architecture and abstract algorithms to implement those business rules, and only then code in the technical details.
Clearly our business team and SMEs don’t care about those technical implementation details. If the new sale event came from a Shopify webhook and the email went out through Sendgrid’s client library, they would not care and would not be involved in the discussion. On the flip side, if the requirements changed and we only wanted to send the email if the sale happened during waking hours, they would certainly be involved, and we would not care about the technical aspects.
When a new developer joins the team and wants to maintain this program, they have to read through both business requirements and technical implementation details at the same time, even though usually only one aspect can (and should) be discussed at a time. They will usually ask questions like “Under what conditions do we decide to send a notification email?” and “How do we send emails?” separately, so it makes sense to keep the two split.
Now that we’ve understood the problem, let’s revisit our initial principle and see how we can apply it to this scenario: deterministic business logic, declarative side effects.
Principle
Business logic should be isolated, ideally synchronous, free of external dependencies, and deterministic. For those familiar with functional programming, this will sound very similar to pure functions. In fact, in most cases the best way to implement business logic is with pure functions or declarative business rules. When business logic is isolated, free of external dependencies, synchronous and deterministic, the translation between functional requirements and business logic becomes easy to make, both when initially writing said business logic and when trying to understand it later on while reading through the code.

On a podcast with Lex Fridman, talking about making our programs simpler and more stable, Bjarne Stroustrup says: “the first step is actually to make sure that when you want to express something, you can express it directly in code rather than going through endless loops and convolutions in your head before it gets down to code. If the way you are thinking about a problem is not in the code, there is a missing piece that’s just in your head and the code you can see what it does, but you cannot see what you thought about it unless you have expressed things directly. When you express things directly, you can maintain it, it’s easier to find errors, it’s easier to make modifications, it’s actually easier to test it”.

When business logic follows these principles, it becomes easy to reason about and to recover the original train of thought behind it. Lacking external dependencies is also crucial in making your business logic easily testable, as you have nothing to mock. Coincidentally, business logic is the most important piece to test, as the errors arising from a wrong implementation of it are not statically validatable and are most likely unique to your piece of software (more on this later).
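To make this concrete with our earlier scenario, here is a minimal sketch of the notification rule expressed as a pure, deterministic function; the function and field names are assumptions for illustration.

```javascript
// No I/O, no external dependencies: a direct translation of the requirement
// “notify for sales above 100 USD from customers outside Senegal”.
function shouldNotifyOwner(sale) {
  return sale.amountUsd > 100 && sale.customerCountry !== 'SN';
}

// Testing requires nothing to mock:
console.assert(shouldNotifyOwner({ amountUsd: 150, customerCountry: 'US' }) === true);
console.assert(shouldNotifyOwner({ amountUsd: 150, customerCountry: 'SN' }) === false);
console.assert(shouldNotifyOwner({ amountUsd: 50, customerCountry: 'US' }) === false);
```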
Technical side effects, on the other hand, usually have very little to do with functional requirements and much more with the technical implementation and constraints of our software. They are usually events triggered from external systems/platforms, or the application of the output of our business logic to an external system/platform. They are necessary (if our program lived in a vacuum, it would only be a piece of algorithm), but they clutter the business logic with pieces not relevant to the discussion taking place. They introduce external dependencies into our business logic, making it harder to test (and usually resulting in less testing). They are less stable, meaning they can change as technology evolves even though our business requirements stay the same (in fact, the way a piece of software can outlive the original platform it was built on, or the original systems it was interacting with, is by continuously replacing the platform/external dependencies without modifying the core business logic). Side effects are also usually highly reusable across projects, because they interact with platforms and external systems in the same way regardless of the business logic that produced the input to those side effects. In fact, when the right abstractions are found for a given platform, the platform tends to fade from our mind as we only interact with it through abstractions, which means we can focus more on delivering value through innovative business logic. All of these aspects of side effects make them perfect for a declarative approach.
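One way to picture the declarative side of the principle, as a sketch only and building on the hypothetical shouldNotifyOwner function above: the business logic returns plain data describing what should happen, and a small, reusable interpreter applies that description to the outside world.

```javascript
// Business logic outputs a description of the side effect, not the side effect itself.
function decideEffects(sale) {
  return shouldNotifyOwner(sale)
    ? [{ type: 'SEND_EMAIL', to: 'owner@example.com', subject: 'New sale' }]
    : [];
}

// A generic interpreter: the only place that touches the mailer, reusable
// regardless of which business rules produced the effect descriptions.
async function applyEffects(effects, { mailer }) {
  for (const effect of effects) {
    if (effect.type === 'SEND_EMAIL') {
      await mailer.sendMail({ to: effect.to, subject: effect.subject, text: 'A new sale was made.' });
    }
  }
}
```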
Examples
At the beginning of this article I alluded to looking at frameworks and design patterns through their underlying principles. In the wild, I’ve found that this principle is actually used quite a lot in the most successful software frameworks and design patterns. Here are a few examples:
In React, we are encouraged to use simple, pure functional components to implement our UI widgets. In the scope of building the user interface, these components represent our business logic, and the design they stem from represents our requirements. The platform (in this case the DOM) is abstracted away, and we only interact with it by returning a declarative definition of what we want to render through JSX. As a result, React components are easy to reason about: they constitute a direct representation of the business requirements (which is why they can often be derived from a good UI component library). The output of pure React components is easy to test (all you need to do is call them as functions with the appropriate parameters and test their output). When working in such frameworks, we tend to completely forget that the DOM even exists, which has allowed the same model to be used in other UI development scenarios as well (e.g. mobile with React Native). This approach has earned React its success in the frontend development world (a small component sketch follows these examples).
With the decorator pattern, we are encouraged to implement our business logic as an isolated unit, and then wire it up to external systems and platforms through declarative metadata. Take for example a notification library in a system. Its core features (sending messages, retrying deliveries, keeping a notification history) can be implemented in a core notifier class that has no knowledge of the different notification channels it needs to interact with. Those notification channels can then be added as decorators on top of that notifier class, which simply intercept method calls and run their own pre/post wiring logic to make sure the messages the notifier class produces flow through the right channel (sketched below, after these examples). Decorators have become pervasive in most server software frameworks (Spring Boot, ASP.NET, NestJS, etc.), where they are used, for example, to wire up network requests into controllers with the MVC pattern.
In event driven architectures, our business logic is often implemented as event handlers. Events become the sole mechanism of communication with external systems. They are the inputs to our business logic and often its output as well (e.g. event sourcing). Side effects often happen as a result of those events (e.g. persisting events in a data store, building read models) but are rarely intertwined with our business logic (see the handler sketch after these examples). Event driven architectures have proven their usefulness in keeping the maintainability of the software high, especially as new requirements emerge, since we can usually leave existing event handlers intact.
Even going all the way down to the computers our software runs on, most modern digital electronics rely on logic circuits to fulfill the business rules of the electronic component, while using transducers (sensors, actuators and receivers) to communicate with the real world. Those transducers (microphones, speakers, cameras) are often commoditized and reused throughout a multitude of electronic products, while the business logic that combines the signals from those sensors, runs the “computation” and outputs to actuators is often product-specific. However, since the dawn of the microprocessor, logic circuits have also been commoditized and reused, since their specialization for a specific product now falls to us, programmers.
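To make the React example concrete, here is a small sketch of a pure functional component; the component and prop names are made up for this article, echoing the earlier sale scenario.

```jsx
// A pure functional component: same props in, same declarative output.
// No DOM access, no side effects; the renderer applies the result to the platform.
function SaleAlert({ amountUsd, customerCountry }) {
  if (amountUsd > 100 && customerCountry !== 'SN') {
    return <p>New notable sale: {amountUsd} USD</p>;
  }
  return null;
}
```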
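Similarly, a rough sketch of the notifier/decorator idea in plain NodeJS; all class, method and field names here are illustrative and not taken from any particular framework.

```javascript
// Core business logic: knows nothing about email, SMS or any other channel.
class Notifier {
  constructor() { this.history = []; }
  send(message) {
    this.history.push(message); // keep a notification history
    return message;
  }
}

// A channel decorator intercepts send() and routes the message through a
// concrete channel, without touching the core class.
class EmailNotifier {
  constructor(inner, mailer) {
    this.inner = inner;
    this.mailer = mailer;
  }
  async send(message) {
    const sent = this.inner.send(message);
    await this.mailer.sendMail({ to: message.to, subject: message.subject, text: message.body });
    return sent;
  }
}

// Usage: const notifier = new EmailNotifier(new Notifier(), transporter);
```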
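And a minimal sketch of business logic as an event handler, again reusing the sale scenario with assumed event and field names.

```javascript
// The handler is a deterministic function from an incoming event to outgoing
// events; persistence and read-model updates happen elsewhere.
function handleSaleRecorded(event) {
  if (event.amountUsd > 100 && event.customerCountry !== 'SN') {
    return [{ type: 'OwnerNotificationRequested', saleId: event.saleId }];
  }
  return [];
}
```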
All of this shows how pervasive this principle is. By understanding it and using it to guide our thinking when designing software systems, we can hopefully produce software that better stands the test of time, as well as market and technology fluctuations.
Back to our example
In the premise, I showed an example of intertwining functional business logic and technical side effects that leads to a convoluted program. Just like with any example trivial enough to fit in an article, it’s hard to show the extent to which this intertwining can be devastating. Those of you who have had to maintain big codebases can see how such an approach can be detrimental to productivity and stability at scale.
In part 2 of this article, we will see how we can use our newly understood principle to better tackle this problem.