Reusable Modularity

In the software industry, there are two popular schools of thought, among many others, when it comes to designing software.

One group of engineers leans toward stateless modularity. They build each component of their system as a pluggable, configurable, agnostic module. And to preserve each module's independence, they may tolerate some seemingly redundant pieces of code rather than take a hard dependency on one centralized component.

This approach is useful in the sense that it avoids single points of failure from a dependency standpoint.

It also guarantees that every team member involved has the opportunity to build similar components and to learn from building an entire pipeline for one feature or another, which eventually has a great positive impact on their growth and success in this industry.

It also comes with both good and bad impacts from a cost standpoint.

On one hand, writing seemingly redundant functionality for separate components allows an easier, gradual adoption of newer technologies and best practices, without having to bear the cost of upgrading everything at once to match a new technology enforced by a reusable component shared across the system.

But on the other hand, it also means an additional cost at development time, when each and every engineer has to write seemingly similar functionality, such as validations, which eventually adds up to the overall cost of the software being developed.
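
To make that trade-off concrete, here is a minimal sketch of the modularity approach, assuming two hypothetical modules (students and teachers) that each carry their own validation:

```typescript
// Hypothetical sketch: two independent modules, each owning its own
// validation routine instead of depending on a shared package.

// students/studentValidations.ts
export function validateStudentId(id: string): void {
  if (id.trim().length === 0) {
    throw new Error("Student id is invalid.");
  }
}

// teachers/teacherValidations.ts
export function validateTeacherId(id: string): void {
  // Seemingly redundant with the student validation above, but this
  // module remains independently upgradable, with no hard dependency
  // on a centralized component.
  if (id.trim().length === 0) {
    throw new Error("Teacher id is invalid.");
  }
}
```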

Another group of engineers leans more toward reusability. They generalize and package every component they deem reusable to cut the cost of having to develop and write similar functionality in several components throughout the system.

The upside of that approach, as obvious as it may seem, is that it is very effective from an economic standpoint at development time, as more components can be created rapidly without bearing the cost of what seems to be redundant functionality, such as validations.
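
A minimal sketch of the same validations under the reusability approach, assuming a hypothetical shared package named @acme/shared-validations:

```typescript
// Hypothetical sketch: one shared package that every component
// imports for the same routine.

// @acme/shared-validations/index.ts
export function validateRequiredId(id: string, entityName: string): void {
  if (id.trim().length === 0) {
    throw new Error(`${entityName} id is invalid.`);
  }
}

// Each component now pulls the package instead of owning the routine:
//   import { validateRequiredId } from "@acme/shared-validations";
//   validateRequiredId(student.id, "Student");
//   validateRequiredId(teacher.id, "Teacher");
```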

The problem with that approach, however, is that it creates bottlenecks and single points of failure across the entire system.

It also forces the tech debt to be paid in full, all at once, as soon as a new technology is adopted or a new standard is enforced, which sometimes results in engineers losing interest in maintaining the software and declaring it a “legacy app”.

So, someone might ask, what’s the best option here? Shall we modularize across the board and ensure gradual growth for multiple components, or save the initial cost of development by publishing reusable packages while planning for potentially high tech debt or full rewrites of entire systems?

The answer, as it usually is: it depends.

If the functionality in question is some external, non-business routine, then it might make more sense to package it up and turn it into a reusable component.

For instance, take communicating with a certain external resource. If we kept redundantly writing the same authN/authZ routines to communicate with the same resource, that would be a plain waste of time and effort. I would side with the reusability camp in this instance, as it doesn’t really make much sense to copy the same brokers across multiple components.
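
As an illustration, here is a sketch of what such a packaged broker might look like, assuming a hypothetical external payments API with token-based authentication:

```typescript
// Hypothetical sketch: one packaged broker owning the authN routine
// for a single external resource, so components stop re-implementing
// the same token handling. All names here are assumptions.
export class PaymentsApiBroker {
  constructor(
    private readonly baseUrl: string,
    private readonly apiToken: string, // assumed token-based authN
  ) {}

  // Every call funnels through one place that attaches credentials.
  async getInvoice(invoiceId: string): Promise<unknown> {
    const response = await fetch(`${this.baseUrl}/invoices/${invoiceId}`, {
      headers: { Authorization: `Bearer ${this.apiToken}` },
    });

    if (!response.ok) {
      throw new Error(`Payments API responded with ${response.status}.`);
    }

    return response.json();
  }
}
```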

A more material example: remapping the native exceptions that originate from interfacing with a database is much more economical when packaged and distributed for multiple target frameworks, without trying to generalize the business-specific operations brokers may use to leverage said database functionality.
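
For instance, a sketch of such a remapping, assuming a driver that reports numeric error codes:

```typescript
// Hypothetical sketch: remapping a native database error into a
// categorized exception, with no business-specific logic involved.
class DuplicateKeyStorageException extends Error {}
class FailedStorageException extends Error {}

// Assumption: the native driver exposes a numeric error `code`,
// as many drivers do (e.g. MongoDB uses 11000 for duplicate keys).
function mapStorageException(nativeError: { code?: number }): Error {
  switch (nativeError.code) {
    case 11000:
      return new DuplicateKeyStorageException("Duplicate key violation.");
    default:
      return new FailedStorageException("Storage operation failed.");
  }
}
```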

But on the business logic side of the system, enforcing a reusable package would be disastrous, as it threatens the very core of what the system is developed to be.

Especially in the current state of the world, where packages have to be referenced and pulled from their external repositories on build and deployment, the sunset of a particular artifact repository would bring the entire system to its knees and cost much more to remedy.

But the same situation with brokers would, in its worst-case scenario, simply require a re-implementation of the basic non-flow-controlling operations that brokers may have, without touching the dense part of the system, meaning the business logic.

The same concept applies to public endpoints and controllers in a RESTful system. But it should be avoided at all costs for the mid-layers, where the decisions are made about any particular incoming request or outgoing response.

Enforcing modularity on the mid-layer portion of any system allows more team members to experience in practice how particular services come to life, without having to go through poorly documented tutorials about abstractions that have no existence in any world outside of the current project.
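
A sketch of what that looks like in practice, with all names assumed for illustration: a plain service whose validation and flow live in the layer itself, rather than in an inherited generic abstraction:

```typescript
// Hypothetical sketch: a mid-layer service that owns its decisions
// in plain code a newcomer can read top to bottom.
interface Student {
  id: string;
  name: string;
}

interface StudentStorageBroker {
  insertStudent(student: Student): Promise<Student>;
}

class StudentService {
  constructor(private readonly storageBroker: StudentStorageBroker) {}

  async registerStudent(student: Student): Promise<Student> {
    this.validateStudent(student); // the decision stays in this layer
    return this.storageBroker.insertStudent(student);
  }

  private validateStudent(student: Student): void {
    if (student.id.trim().length === 0 || student.name.trim().length === 0) {
      throw new Error("Student is invalid.");
    }
  }
}
```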

When reusability of logic is enforced in a larger system, sacrifices have to be made to maintain the generic nature of a packaged routine, which shall eventually change and require either a deprecation or an inconsistency across the board.

You can very easily spot this pattern when you see a bloated shared package with many routines that each serve only one particular case in a single business logic layer, in the hope that they may be reused and needed in another flow at some point. They never are.

In conclusion, a hybrid approach of modular business components alongside reusable primitive operations seems to gather the best of the two worlds: it ensures the ongoing growth of the development team, is more adaptive to a forever growing and changing industry, and follows more economically efficient development procedures than either of the two approaches individually.