7 December 2021

Announcing Ensono Digital Stacks v2: The road to micro-factories

by Simon Evans, 10 minute read.

Almost a year ago, we released Ensono Digital Stacks into the wild as free open source software for all developers to use. I wrote a blog post in January 2021 explaining why we built Ensono Digital Stacks, but in short, Ensono Digital Stacks is a software factory for building cloud native applications that accelerates delivery and reduces risk.

Building a software factory is not for the faint-hearted, especially when the scope of the factory is as broad as Ensono Digital Stacks. Being vendor agnostic is in Ensono Digital’s DNA, and so Ensono Digital Stacks is both cloud and language agnostic. This menu of “options with opinions” is a complex, multi-dimensional problem to manage. In this blog post I aim to explain how we have focused on managing this complexity, and what it means for the future of Ensono Digital Stacks.

Stacks v1 was an important milestone for the project, but it was just the beginning of the story. The Stacks roadmap includes many additional workloads, alongside broadening cloud provider support to comprehensively cover AWS and GCP. Additionally, Stacks depends on hundreds of open source frameworks and libraries, all of which have their own maintenance cycles.

Stepping back to leap forwards

Our single most important engineering concern when developing Ensono Digital Stacks is maintainability; we are maintaining multiple workloads across multiple languages in multiple clouds, which means any “tech debt” issues hit us many times over. As we have grown the scope of Stacks, we have learnt a lot about how to meet this challenge. After Ensono Digital Stacks v1 was opened to the public in January 2021, we spent some time taking stock and listening to feedback from our consultants and teams that use Stacks on their projects. In summary, we realised that we needed to address the following issues in Stacks before expanding the scope further:

  • We needed to minimise the code duplication and complexity inside our own repos, so that adding new workloads in the future remains maintainable.
  • We needed to provide a more fine-grained output from the factory so that the code generated more closely matched a developer’s needs, minimising redundant code output.
  • Where relevant, we wanted to enable Stacks to be applied to an existing code base, opening up Stacks to different engineering workflows as well as new application modernisation use cases.
  • We needed a robust solution to versioning to ensure consistency of use on projects, while allowing maintenance cycles to be managed independently of the Stacks team’s scope.
  • We needed to reduce friction in the developer experience by limiting the dependencies installed on a developer’s machine to those native to a particular stack. For example, a .NET developer should only have to install the prerequisites related to the .NET stacks, such as .NET Core.
  • We needed to more explicitly decouple our own team’s choice of CI/CD pipeline technology (Azure DevOps) from the planned implementation options (Azure DevOps, GitHub Actions and GitLab).
  • We needed a better story around release management by separating build pipelines from deployment pipelines, enabling teams to repeatedly deploy a known previous build version without running a complete build process.

Put simply, in order for us to progress the functional scope of Stacks in a sustainable way, we needed to first solve these lower-level engineering concerns; we needed to take a step backwards to leap forwards.

Decomposing the factory

The first big design decision we made was to decompose the factory into multiple smaller factories. Stacks is to the software development lifecycle what robots are to a factory production line. So rather than having one complex, do-everything robot that services multiple functions on the production line, we have chosen to have lots of simpler robots, each with a much narrower purpose. I call this new approach “micro-factories”, because the rationale is similar to the evolution of microservices from service-oriented architecture.

Ensono Digital Stacks v2 is made up of many software factory components rather than one complex monolith. Applying this micro-factories approach means each micro-factory should perform only one narrow function in the production line; in this respect a micro-factory is analogous to the single responsibility principle from the SOLID acronym of object-oriented design. By daisy-chaining several micro-factories together, we are able to achieve the same macro-level outcome, but with some key benefits. Micro-factories are inherently more reusable and therefore more useful. They explicitly do not define the production line; they only execute jobs on the production line, meaning the production line is more composable. And because each factory is narrow in responsibility, it is sometimes possible to find a third-party answer for a single micro-factory; where that is not possible, the code has a simpler purpose. Both of these points lead to a more maintainable solution.

Run-once vs run-many micro-factories

Decomposing Ensono Digital Stacks into multiple micro-factories facilitates the concept of run-many micro-factories. Some micro-factories are tasks that should only be run once per environment, whereas other micro-factories can be safely run multiple times.

An example of a run-once micro-factory in Stacks is the execution of a “Workload Template”, which is used to provision the complete stack (both infrastructure and software) required for a specific design pattern (described in Stacks as a Workload). Workload templates are run-once micro-factories because a DevOps engineer needs to provision the infrastructure for a solution once and only once per environment. For example, “Web API with CQRS” is a workload template that creates a Web API implementing a CQRS pattern.

Such a workload requires the provisioning of infrastructure for containerised compute to host the Web API and a data store to persist and read the data. As an example, I could decide to execute the Web API with CQRS template to provision infrastructure on Microsoft Azure; this would mean provisioning containers hosted on an AKS cluster for the Web API compute and a Cosmos DB data store to execute commands and queries for the CQRS pattern. In addition, there are other lower-level elements of the infrastructure that require provisioning, such as Azure Monitor for monitoring and alerting. The Workload Template handles the generation of the Terraform templates required to provision all of the infrastructure necessary for this scenario.

Other micro-factories can be safely run multiple times by an engineer. This is useful because it supports the iterative nature of software development. One simple example is using the Stacks CLI in interactive mode to generate the YAML file used for scaffolding the workload template. In interactive mode, the Stacks CLI provides input validation on the variables required to generate a valid YAML configuration file, improving the probability of getting the file right first time. But humans make mistakes, so it is likely that an engineer will need to rerun the interactive mode several times to generate the right YAML for their needs. By separating this simple micro-factory out from scaffolding a workload template, we have enabled more iterative use and reuse. Moreover, once the correct YAML file is produced, it can be maintained under source control and re-executed to reproduce the scaffolded infrastructure on additional environments as required.

Developer native tooling

Much of our focus in Ensono Digital Stacks v2 has been around improving the developer experience. We sought feedback from teams who have used Stacks on real-world projects to understand what their main issues were, with a view to taking friction out of the process.

Much of the feedback we received centred around two key objectives for reducing friction:

  • Developers wanted to use the tooling that was native to their language of choice.
  • Developers were spending too much time removing redundant generated code that wasn’t required for their given scenario.

Using tooling that is native to the developer’s ecosystem led us to a key decision: in Stacks v2, the elements of a workload template that are language-based templates are now maintained using tooling native to the language of choice. This means we are using Apache Maven archetypes for templating code in Java and the .NET CLI with NuGet packages for templating in .NET. Developers are now able to create workloads using the tools they already know, reducing friction when adopting Stacks in their projects. There is also a new Stacks CLI, which acts more as an orchestrator of other templating technologies, reducing its footprint and Ensono Digital’s maintenance overhead.

The micro-factories approach meant that we reviewed the granularity of the workload templates Stacks provides. We created more fine-grained options for creating workloads. For example, it is now possible to scaffold a Web API on its own (just compute) with no additional infrastructure concerns.

For microservice and integration development, we have settled upon three top-level workload templates in Stacks v2:

1. Web API

2. Web API with CQRS

3. Web API with CQRS and Events (new functionality to Stacks v2)

These three workload templates represent the three most-used design patterns for building and integrating microservices. Each template has a specific footprint of infrastructure that requires provisioning, depending on the technology chosen to implement the pattern. At a pattern level, the code each template generates closely matches the specific pattern a developer wants to implement.

From a code perspective, each workload template also shares some level of commonality. For example, option 3 (the Web API with CQRS and Events template) shares a similar code base for the Web API and CQRS elements as option 2; effectively, option 2 does the same thing as option 3 but without the addition of publishing events. This led us to the next challenge: how can we provide fine-grained templates without creating a maintainability problem through code duplication?

To address the code duplication challenge, we have heavily relied upon package management (NuGet in .NET and Apache Maven in Java) to centralise the majority of the common code used between each workload template. These binary level dependencies are managed inside their own individual repos, open sourced on GitHub and published as packages on nuget.org for .NET and the Maven Central Repository for Java. The diagram below illustrates the repo structure of Ensono Digital Stacks for Java in Stacks v2 and how the workload templates generate a workload complete with references to core packages:

This new approach (which is fundamentally the same for both .NET and Java) minimises the code generated by the workload templates, which minimises the code duplication that we have to manage inside the workload templates. This in turn minimises the code duplication and redundancy that needs to be managed in a typical project. For example, imagine a project that requires 10 microservices to be scaffolded with various workloads. Using the Stacks v2 template and repo structure, you would generate 10 workloads, all of which would consume the same central package managed dependencies for core boilerplate code like event publication. This approach not only minimises the duplication of code between workload templates, but it also provides a clear package version upgrade path as we maintain these packages in the future.
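
To make this concrete, here is a rough sketch, in Java with hypothetical package and class names (these are not the real Ensono Digital Stacks packages), of the shape a generated workload takes: the workload keeps only the scenario-specific code, while the shared boilerplate arrives as a binary dependency from a core package.

    // All names below are hypothetical; this sketches the dependency pattern,
    // not the actual Ensono Digital Stacks packages.

    // In the real structure, a contract like this would live in a shared core
    // package published to Maven Central (or nuget.org for .NET), not in the
    // generated workload itself.
    interface EventPublisher {
        void publish(String eventType, String payload);
    }

    // The generated workload contains only the thin, scenario-specific code and
    // consumes the shared behaviour as a package reference.
    public class CreateMenuHandler {

        private final EventPublisher eventPublisher;

        public CreateMenuHandler(EventPublisher eventPublisher) {
            this.eventPublisher = eventPublisher;
        }

        public void handle(String menuId) {
            // Workload-specific domain logic would go here...
            eventPublisher.publish("MenuCreated", menuId);
        }
    }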

We have structured the three workload templates so that additional implementations of infrastructure can be implemented without creating additional workload templates; for example in future we plan to add additional options for a CQRS data store outside of Azure Cosmos DB. This is possible without the need to add additional workload templates through the use of conditional compilation; when a workload is generated from a template, implementation arguments are passed into the respective CLI (for example the .NET CLI) so that the generated workload output provisions the correct infrastructure and implementation in code.
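
As a hedged illustration of that idea, the sketch below uses plain Java with hypothetical names to stand in for the template engine’s actual conditional syntax; the point is that the implementation argument decides which implementation of a common seam is emitted into the generated workload.

    // Hypothetical sketch: in the generated output only one of these
    // implementations would actually be emitted, selected at generation time by
    // the argument passed to the CLI rather than at runtime as written here.
    interface MenuRepository {
        void save(String menuId, String payload);
    }

    // Emitted when the template argument selects Azure Cosmos DB...
    class CosmosDbMenuRepository implements MenuRepository {
        @Override
        public void save(String menuId, String payload) {
            // Cosmos DB specific persistence code would be generated here.
        }
    }

    // ...or, once future options land, an alternative data store behind the
    // same interface, leaving the rest of the workload untouched.
    class AlternativeStoreMenuRepository implements MenuRepository {
        @Override
        public void save(String menuId, String payload) {
            // Persistence code for a different data store would be generated here.
        }
    }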

Relying on native tooling for factories has opened up some new use cases for Stacks too. For example, some of the templates can be run over the top of an existing solution (albeit with some code changes required!). This elevates Stacks to a much broader set of use cases: instead of only being a factory to de-risk greenfield solutions, Stacks can now also be applied to brownfield projects to help modernise them. Finally, the decomposition of much of the code base into discrete packages means these packages can be applied to any project, reducing development times.

A new Stacks CLI

Given our change in direction around embracing templating tools across different developer ecosystems, we needed to rewrite the Stacks CLI. The new slimline Stacks CLI is targeted at DevOps engineers as a common interface for provisioning infrastructure as code against a variety of workloads.

As was the case with the developer experience, we wanted to remove friction from the consumption of Stacks for DevOps engineers. We decided to adopt Go for the Stacks CLI rewrite. Choosing Go meant we could create a CLI that runs without any other dependencies installed, such as NPM, which was a prerequisite for running the previous CLI. It also means our DevOps tooling is aligned with the language of other technologies in the DevOps ecosystem, such as Terraform, which is written in Go.

At the time of the Stacks v2 release, the old Stacks CLI is still required to provision React and Node.js SSR and CSR websites. This will be addressed in a future version of Stacks, where we will adopt a similar micro-factory approach to front-end concerns.

A better build story

The core Ensono Digital Stacks development team runs its own CI/CD pipeline using Azure DevOps, which until now has been the only supported option for running your own CI/CD pipelines. In Stacks v2, however, we have worked to reduce the dependency on Azure DevOps, paving the way for other CI/CD options (such as GitLab) to be supported in future.

Central to enabling this is the introduction of an independent build runner, which enables a build pipeline to be run locally without any dependency on Azure DevOps. Reducing the reliance on Azure DevOps means that this runner can be plugged into other CI/CD pipeline technologies in future.

Another area of improvement from a build perspective is the decomposition of build and deployment pipelines, meaning that it is now possible to redeploy a previously successful build as many times as required without rerunning the full build pipeline. This means the release experience in Stacks v2 is transformed from v1.

New features and language parity

In amongst all of the steps we have taken to course-correct in Stacks v2, we have also extended the scope of what Ensono Digital Stacks is capable of in the creation of backend services. Stacks now supports both the publication and streaming of events in services using the new Web API with CQRS and Events workload template. On Azure, this template can implement and provision a competing consumer pattern using Azure Service Bus, or event streaming using Azure Event Hubs. The template not only includes the code required to publish or stream events from a Web API; it also includes a complete implementation of an event subscriber, either through the scaffolding of an event listener hosted on Kubernetes or as a serverless function. This additional functionality rounds out the capabilities of Ensono Digital Stacks from a backend and integration perspective, enabling developers to quickly scaffold, build and integrate multiple microservices.
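
For a flavour of the kind of code involved, here is a minimal, hedged sketch of publishing an event using the Azure Service Bus SDK for Java; it is illustrative only and not the actual Stacks implementation, and the class, topic and event names are assumptions.

    import com.azure.messaging.servicebus.ServiceBusClientBuilder;
    import com.azure.messaging.servicebus.ServiceBusMessage;
    import com.azure.messaging.servicebus.ServiceBusSenderClient;

    // Minimal, illustrative publisher: events land on a topic, and subscribers
    // on that topic's subscriptions compete for the messages.
    public class MenuEventPublisher {

        private final ServiceBusSenderClient sender;

        public MenuEventPublisher(String connectionString, String topicName) {
            this.sender = new ServiceBusClientBuilder()
                    .connectionString(connectionString)
                    .sender()
                    .topicName(topicName)
                    .buildClient();
        }

        public void publish(String eventJson) {
            // Each published event becomes a message on the topic; consumers
            // processing the subscription give the competing consumer pattern.
            sender.sendMessage(new ServiceBusMessage(eventJson));
        }
    }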

Every new backend feature we have built in Stacks v2 has been built for both Java and .NET, meaning these two languages are at parity in Stacks. It is possible to scaffold exactly the same workload in either language very quickly. Our versioning capabilities also mean that we are able to support multiple versions of the underlying framework, so when Stacks v2 releases it will be possible to scaffold a solution in either .NET Core 3.1 or the recently released .NET 6. Support for versioning in Stacks means developers who take a dependency on Stacks will not be automatically affected by future developments and changes we make as we add more functionality into Stacks.

Moving towards OpenTelemetry

Central to the codebase in Ensono Digital Stacks is building in observability by default. Stacks currently implements this on Azure using Azure Monitor.

Over coming versions of Stacks, we plan to broaden our support for other monitoring and alerting technologies including third party APM tools.

Central to this future is our adoption of OpenTelemetry (a Cloud Native Computing Foundation project), a vendor-neutral open standard for how different technologies emit telemetry. While Stacks v2 still depends on Azure Monitor, we have laid the foundations for a future migration to OpenTelemetry.
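
To illustrate what vendor neutrality means in practice, here is a hedged sketch using the OpenTelemetry API for Java (illustrative names, not the Stacks codebase): application code creates spans against the standard API, and the choice of backend, whether Azure Monitor or another APM tool, becomes purely an exporter configuration concern.

    import io.opentelemetry.api.GlobalOpenTelemetry;
    import io.opentelemetry.api.trace.Span;
    import io.opentelemetry.api.trace.Tracer;
    import io.opentelemetry.context.Scope;

    public class OrderHandler {

        // The tracer comes from the globally registered OpenTelemetry SDK; which
        // backend receives the spans is decided by the configured exporter.
        private static final Tracer TRACER =
                GlobalOpenTelemetry.getTracer("com.example.orders");

        public void handle(String orderId) {
            Span span = TRACER.spanBuilder("handle-order").startSpan();
            try (Scope ignored = span.makeCurrent()) {
                span.setAttribute("order.id", orderId);
                // ... business logic ...
            } finally {
                span.end();
            }
        }
    }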

Going forward

Hopefully this blog has helped explain the changes we have made in Ensono Digital Stacks v2 with micro-factories, and why we have taken this direction. Stacks v2 is a key foundation for the future of Stacks as we broaden cloud support to cover AWS and GCP, as well as add future functionality for front-end development and data engineering.

Stacks v2 will be released in January 2022.

For more information on building software using Ensono Digital Stacks, please visit https://stacks.amido.com.
