
DevOps Architecture – Less is More

7th November 2022 by Daniel Bennett

For some time, DevOps has been the major emerging trend in development houses throughout the world. Gartner defines DevOps as a change in IT culture: moving away from slow, laborious, waterfall-style delivery of services and towards rapid delivery through agile development practices, in order to achieve continuous and rapid integration and delivery of software solutions.[1]

 

DevOps & Microservices [2]

One of the most frequently preached pieces of advice when setting up for DevOps in the cloud is to make use of microservices. Microsoft defines these as smaller chunks of an application that perform specific business functions and so become easier to deploy and manage.[3] Splitting your architecture up into these more manageable pieces promises to save time, money, and resources.

If you Google “Microservice Benefits”, a whole plethora of sources state how many benefits – in time and productivity as well as financially – microservices provide: AppDynamics[4], Cloud Academy[5], GitLab[6], Forbes[7], IBM[8]; the list goes on. This is still true. However, as with most things in life, you can have too much of a good thing, and microservices are no exception.

I have been a Software Developer at Capacitas for over 7 years now, and I have hands-on experience of creating and tweaking well-architected DevOps solutions to provide company value. Value, in our case, means providing tooling that allows consultants to automate their tasks and complete them in a timely and efficient manner. In my current role as Development Team Lead, it is even more pivotal that our interpretation and delivery of DevOps solutions speeds up our team’s delivery and saves costs, whilst still being manageable and designed in a way that allows new team members to understand the processes. From experience, it is a tough balancing act between having enough microservices to provide cost savings and having so many that maintaining them becomes a real challenge.


Continuous Integration & Delivery

A couple of months ago, I was brought in to help with an external client engagement. The client, a government function in the health sector, was creating a dashboard application for government officials to refer to daily. Such a high-pressured job has a vital need for easily deployable and manageable dashboards that make data available and accessible. Continuous Integration and Continuous Delivery – essentially the motto of DevOps – were vital.

The architecture of their moderately simple dashboard application consisted of over 20 separate repositories and around 30 different microservices. This is not necessarily a problem! However, with the extra element of fluid team turnover within the organisation – including its DevOps teams – it became far harder and more time consuming for newer team members to:

  1. Grasp how and why the architecture is so minutely modular
  2. Get work done!


Let us simplify this a bit: imagine you are a schoolteacher with a classroom of 30 students to manage. Keeping half an eye on them so you know what they are doing is not too much of an ordeal (apologies in advance to all the IT and computing teachers reading this). However, managing each one individually and giving each the same amount of attention is not a quick and easy job – which is why Teaching Assistants are often employed. In DevOps, there are no Teaching Assistants – just the teacher.

Cartoon [9]

The Importance of Knowledge Sharing

Back to the client example. In this case, the DevOps team in question was only able to get through 15 tasks (all quite ‘mundane’, small tasks affecting tiny parts of the system) in the space of 7 weeks. That is just over 2 tasks completed per week – not an ideal turnaround time. The biggest impediment – the reason delivery was so slow – was the time spent understanding and debugging the monstrous architecture and its many pieces.

The revolving door of developers in the team, and the lack of due care and attention given to a thorough handover, made understanding the existing architecture a real challenge. There was also no knowledge-sharing culture within the team; it felt very much like an “every developer for themselves” mentality – something I have experienced in the past and worked hard to turn around.

In comparison, our in-house DevOps team at Capacitas has far fewer microservices set up as part of our DevOps architecture. In total, there are 6 different, carefully crafted microservices at play, all hosted in the same Microsoft Azure portal. Each microservice has a meaningful name (e.g. cap-prod-vm for the Capacitas production VM, cap-prod-keyvault for the Capacitas production key vault, etc.). Each one has a clear, defined purpose which easily tracks back to a part of the code. There is also ample documentation and knowledge sharing within our team about the setup of each microservice.

Bearing in mind all these points about setup and knowledge transfer: in the same 7-week timeframe as the external client, we were able to get through 32 tasks (slightly bigger tasks, often affecting multiple parts of the system), with fewer developers.


 

Less Really Can Mean More

Less really can mean more: more throughput, easier scaling of systems, saved developer time, and fewer headaches for team leads and Technical Architecture leads. There is such a thing as too many microservices. The question is: how do you set up microservices at a suitable size, in a maintainable and knowledge-transferrable manner? Here are my key recommendations:

  • Break microservices up by individual function, but not at too fine a level of granularity. For example, our in-house application has one microservice for the API, one for the front-end, and several for our databases. We do not split the API’s numerous parts into individual microservices; the API is managed as a whole
  • Evaluate the cost of creating each microservice; setting up resource groups and other components in Microsoft Azure costs money, so spinning up numerous microservices may feel necessary, but always be aware of what the extra microservices cost
  • Give resources appropriate names. I tend to follow a naming convention of “cap” (for Capacitas), “-prod” or “-test” (for whether it is a production or test instance), and “vm” or “keyvault” (for the resource function)
  • Make sure to add tags to the resources, so you can assign owners – and therefore chief knowledge holders – to each microservice
  • Document your DevOps setup as you go along
  • Make sure to talk the rest of your team through the setup, and the reasons why things are set up the way they are. If you take developers on the journey with you, it is easier for them to understand the DevOps architecture, or to ask questions when they do not. This reduces the risk of sole knowledge holders leaving with no documentation elsewhere of how things are meant to work. Knowledge sharing and collaboration are key to successful DevOps operations.
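The naming and tagging conventions above can be encoded so they are enforced rather than remembered. Here is a minimal sketch of that idea; the function names, tag keys, and owner values below are illustrative assumptions, not part of any Azure SDK:

```python
def resource_name(org: str, env: str, function: str) -> str:
    """Build a resource name such as 'cap-prod-vm' from the
    '<org>-<env>-<function>' convention described above."""
    if env not in {"prod", "test"}:
        raise ValueError(f"env must be 'prod' or 'test', got {env!r}")
    return f"{org}-{env}-{function}"


def resource_tags(owner: str, purpose: str) -> dict:
    """Tags recording an owner (the chief knowledge holder) and a
    purpose for each microservice, so accountability is visible
    directly on the resource."""
    return {"owner": owner, "purpose": purpose}


# Example: name and tag a production key vault.
name = resource_name("cap", "prod", "keyvault")
tags = resource_tags(owner="team-lead", purpose="secret storage")
print(name)  # cap-prod-keyvault
```

A helper like this can sit in your infrastructure-as-code pipeline, so every new microservice arrives consistently named and with an owner attached, rather than relying on each developer to follow the convention by hand.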

 

[1] https://www.gartner.com/smarterwithgartner/the-science-of-devops-decoded

[2] https://www.dynatrace.com/news/blog/what-is-devops/

[3] https://learn.microsoft.com/en-us/devops/deliver/what-are-microservices

[4] https://www.appdynamics.com/topics/benefits-of-microservices

[5] https://cloudacademy.com/blog/microservices-architecture-challenge-advantage-drawback/

[6] https://about.gitlab.com/blog/2022/09/29/what-are-the-benefits-of-a-microservices-architecture/

[7] https://www.linkedin.com/pulse/3-opportunities-digital-transformation-insurance-mohit-mittal/

[8] https://www.ibm.com/downloads/cas/OQG4AJAM

[9] https://www.ourkidsseries.org/blog-detail/30/too-many-kids-not-enough-space/

 

 


For more information, or to ask for more practical advice on the topics covered in this blog, please reach out to us via our website (https://www.capacitas.co.uk/), or to me directly at danielbennett@capacitas.co.uk.

 

About the Author

Dan Bennett is the Principal Developer at Capacitas. Dan plans, develops, tests, and manages the Capacitas in-house capacity toolsets, using languages and technologies such as C#, HTML, MongoDB, Angular, and Python. Dan integrates our toolsets with technologies such as CloudWatch, Splunk, Dynatrace, Microsoft Azure, and Data Lake. Dan has worked at Capacitas for almost 8 years.

 
