Engineering Architecture Systems for a Faster Build
Speed is one of the key factors in any successful endeavour, yet we tend to notice the importance of speed and performance only when running applications, forgetting that it matters just as much when building them.
In the era of continuous integration and continuous deployment, large applications are creating bloated build pipelines. Feedback therefore reaches developers late, limiting the ability of a business to react to events.
Companies are beginning to realize this threat and are acting on it. Unfortunately, many of the approaches we’re seeing are rather idealistic concepts that sound good on paper but don’t usually deliver the expected value.
One example is teams rewriting their entire stack into a microservices architecture, thinking that smaller components will be faster to build. However, they fail to realize that once the system grows big enough, those microservices will have shared components and interdependencies that will slow the build down.
Another instance is teams switching to experimental build tools or breaking code encapsulation with monolithic repositories. While these do provide some benefits, as with any tool, they come with disadvantages and the trade-off may not always be worth it.
There is a simpler way to keep things fast, if we can just understand the root of the problem. The whole idea of a continuous integration pipeline is that, whenever a change is made, everything impacted by that change is rebuilt in order to ensure that we are always up to date.
It follows that the real problem arises when code becomes so entangled that every single change impacts large portions of the system, meaning there is a lot to rebuild.
The solution is therefore simple (in principle!): just reshape the architecture of your code so that code changes affect a smaller portion of the overall system. In turn, only a smaller portion needs to be rebuilt, resulting in shorter build times.
For instance, if you have a library that is used by several other components, every time you modify that library you’ll have to rebuild all the dependent components. If, however, you separate that library into its API and its implementation, then you might be able to reduce its impact—when you make a change to the implementation and it doesn’t affect the overall behavior, dependents won’t be affected, so you won’t need to rebuild them.
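As a minimal sketch of that idea in Java, consider a hypothetical reporting library split into an API module and an implementation module (the module names, types, and package layout here are illustrative, not taken from the article; each top-level type would live in its own module and source file):

```java
// report-api module: the only thing dependents compile against.
public interface ReportGenerator {
    String generate(String data);
}

// report-impl module: implementation details live here.
// Changing this class does not alter the API, so modules that
// depend only on report-api do not need to be rebuilt.
public class CsvReportGenerator implements ReportGenerator {
    @Override
    public String generate(String data) {
        // Internal detail: the formatting can change freely
        // without touching the public API.
        return String.join(",", data.split("\\s+"));
    }
}

// billing module: depends on report-api only; a concrete
// implementation is supplied at runtime (for example via
// dependency injection or a service locator).
public class BillingService {
    private final ReportGenerator reports;

    public BillingService(ReportGenerator reports) {
        this.reports = reports;
    }

    public String monthlyReport(String rawFigures) {
        return reports.generate(rawFigures);
    }
}
```

In a build tool such as Maven or Gradle, the billing module would declare a dependency only on the API module, so a change confined to the implementation module leaves the billing module's inputs untouched and the pipeline can skip rebuilding it.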
Obviously, IT is a very fast-paced industry. New ideas and technologies come up every day, and we need to evaluate them in order to keep up. However, the fact that new things appear doesn't necessarily mean we always need to drop everything that came before them.
In the end, the oldest trick in the book is still one of the most effective: If you want to run things smoothly, make sure everything is appropriately tidied up as you go. Not only will you save a lot of pain, but it’s the only way to keep that all-important performance up.
Abraham Marin-Perez is presenting the session Architectural Patterns for an Efficient Delivery Pipeline at the DevOps West 2017 conference, June 4–9 in Las Vegas, NV.