Microservices and [Micro]services

The week began as busy as ever. And then I learned that one more task — beyond everything else on my plate — must be accomplished. Which task? The one you are reading. Why?

For the record, at Uber, we're moving many of our microservices to what @copyconstruct calls macroservices (wells-sized services). Exactly b/c testing and maintaining thousands of microservices is not only hard - it can cause more trouble long-term than it solves the short-term.


You can find the above tweet here, and the referenced tweet here. I’ve also quoted the second below for convenience.

- Microservices are hard.
- Building reliable and testable microservices is a lot harder than most folks think
- Effectively *testing* microservices requires a ton of tooling and foresight.
- A Netflix/Uber style microservices isn't required by many (most?) orgs.
- Macroservices?

On Being Confused: Microservices

To say the least, the word Microservices became a source of confusion even more quickly than its use became hype. Since then it has moved on to trend and possibly even de facto standard for service architectures. Funny thing, this. It seems many hype, follow, promote, and hype and promote more, but without a clear definition of intended use. Why? It has a lot to do with the software “thought leaders” we listen to. Our point of view is shaped by our world view, but also by the leaders who take ownership of definitions. You will often find that different vision and thought leaders have different views and messaging, and that is very much the case with Microservices.

So then, to which leaders should we listen? I suggest that you take in a number of definitions and guidelines. Additionally, I strongly suggest working by definitions that are less radical, dogmatic, and insistent on arbitrary constraints, as if the constraints themselves prove correctness and value. I am of the strong opinion that Microservices are definitely a case where one size does not fit all. As I tweeted earlier today, I really have no problem with the word Microservice. My take is that it’s about purpose and cohesion. Microservices should not be defined by a predetermined size metric, but rather by what is necessary to achieve a purpose. Following a purpose leads to useful cohesion.

So What?

Going back five years and more, I’ve had a few different overarching thoughts about the term Microservices. The first formative thought I had was something like: “So what?” I have been working on smallish services for much of my career, a career that spans 36+ years and a lot of different business domains. Many of my experiences are around separating software by what it does, and it usually does one or a few things well. Whether with modules, plugins, or components, local or distributed, whenever I had a significant influence on architecture, every part of the software has had its place in that architecture. Herein I refer to all of these modules, plugins, and components as services for the sake of keeping the discussion on topic. Thus, “So what?” is not a rejection of the word Microservices. It’s a sentiment toward seeing a new name for things I have done for a long time.

Size?

My second thought aligns with my first, but this one is definitely critical: “Size?” What I never worried much about in developing software was the size of a given service. As I wrote in my book Implementing Domain-Driven Design, a given service is as big as it needs to be and no bigger. I also specifically wrote about microservices in my book Domain-Driven Design Distilled. Here is a blog post, Reactive Microservices, from some years back, which is a close reflection of an e-zine article I wrote the year prior. That comes from my mindset that biases toward cohesion for a reason (hey, that rhymes). Sometimes I would survey the size of my code by line count, but that was just a matter of interest. I don’t remember ever using lines of code to judge the value or worthiness of a service I had implemented.

The Problems

One problem with the contemporary implementation of services from the beginning was lines-of-code count being the end, rather than the means. That’s not only a problem. That’s a big problem. It means that if your service’s lines-of-code count strays beyond some arbitrary N, its correctness and value are considered questionable. A second problem doesn’t explicitly question lines of code, but still asserts size as a critical measurement. It’s most easily described by a name: Entity as a Service. This is a service that implements a single software model concept and no more. The modeled concept may not actually be an Entity. It may be some sort of calculator or other kind of processor, but let’s say it’s anything that you would normally otherwise think of as one object doing something on behalf of another object within the same runtime process/instance. In such a case there is no network between the two objects, because it would actually be wasteful to place a network boundary there; that is, unless the network boundary is the end rather than a means. I think we should take a lesson from another use of the word micro as the prefix in a compound word: Microprocessor.
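To make that trade-off concrete, here is a minimal sketch with hypothetical names (Invoice, TaxCalculator) of two model concepts collaborating in the same process. If the calculator were carved out as its own “Entity as a Service,” the one-line call below would become a serialized request over a network, with timeouts and partial failure to handle, for no gain.

```python
class TaxCalculator:
    """A small domain concept: computes tax for an amount."""

    def tax_for(self, amount: float, rate: float) -> float:
        return round(amount * rate, 2)


class Invoice:
    """Collaborates with TaxCalculator inside the same runtime process."""

    def __init__(self, subtotal: float, calculator: TaxCalculator):
        self._subtotal = subtotal
        self._calculator = calculator

    def total(self, tax_rate: float) -> float:
        # A plain in-process call: no serialization, no network hop,
        # no remote failure modes to compensate for.
        return self._subtotal + self._calculator.tax_for(self._subtotal, tax_rate)


invoice = Invoice(100.00, TaxCalculator())
print(invoice.total(0.08))  # 108.0
```

The point is not that these two concepts must never be separated, only that a network boundary here would be a cost without a purpose.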

On Being Schooled: Microprocessors

I think it is notable that there has been little if any argument or dissent about the definition of Microprocessor. Here is one worthy definition found on Wikipedia. I’ve used the Simple English edition.

A microprocessor is an electronic component that is used by a computer to do its work. It is a central processing unit on a single integrated circuit chip containing millions of very small components including transistors, resistors, and diodes that work together...

Microprocessors are classified by the size of their data bus or address bus. They are also grouped into CISC and RISC types.

That’s a clear definition, albeit limited. In particular I find it interesting that there is a size classification that helps determine whether or not a central processing unit (CPU) is considered a Microprocessor: the size of its data bus or address bus. Yet, as you will see, it’s not a radical or dogmatic definition in the sense of judging the value of a CPU by its power, for example. If you consider the definition found in the full edition of Wikipedia, you will find some very interesting additional facts about microprocessors. For example, they span a range of bit-width designs.
  • 4-bit designs: This was the design of the original commercially available microprocessor, the 4004, produced by Intel in 1971. Its cost was $60, and it was used in calculators. At roughly the same time, Texas Instruments developed its first microprocessor, used in a desktop terminal (a very early PC).
  • 8-bit designs: There is an interesting story around how Intel produced the 8008, the first 8-bit microprocessor. It was used by the Mark-8 microcomputer in 1974. This was followed by the 8080.
  • 12-bit designs: This design had limited use.
  • 16-bit designs: Includes the Intel 8086, the 8088 used in the first IBM PC, 80186, 80188, and the more popular 80286. The 80286 provided protected mode, which was necessary for running true multitasking and multiprocess operating systems. Protected mode prevents one process from overwriting memory (e.g. stray pointer outside the process’ address space, or a memory write with incorrect bounds) and stomping on other processes.
  • 32-bit designs: Includes 80386, 80486, and the 80586, known as the Pentium. The 32-bit microprocessors continued through the Intel Core and early Xeon.
  • 64-bit designs: Intel began its designs with Itanium and a new Pentium, and then several generations of Intel 64.
There are, of course, other microprocessors outside of Intel’s, but I decided to condense the list for a purpose. The point is to note that microprocessors have a broad range of power. Some of the progressions noted above are due to advances in silicon, for example. Yet some smaller, less powerful microprocessors still exist for specific purposes. As the Wikipedia article and other sources indicate, thousands of items that were traditionally not computer-related now include microprocessors. Many of these are consumer devices that fundamentally require powerful, low-cost microprocessors.
  • large and small household appliances
  • cars (and their accessory equipment units)
  • car keys
  • tools and test instruments
  • toys
  • light switches/dimmers and electrical circuit breakers
  • smoke alarms
  • battery packs
  • hi-fi audio/visual components
  • cellular telephones
  • DVD video system
  • HDTV broadcast systems
  • IoT devices
There are obvious advantages to using a given microprocessor in specific cases. There are speed and power considerations. A minimal microprocessor may have only an arithmetic logic unit and a control logic section. Some IoT devices use a microprocessor with a powerful CPU, but these will generally trade clock speed and memory for power conservation. We will simply not find an Intel 64 Skylake microprocessor in a child’s toy, but we will find a suitable processor that enhances the child’s experience with the toy. Still, we’d sure hope to have the 6th-generation Intel Core i7 (Skylake) in our developer computer, and even an Intel Xeon Platinum 9282 with 56 cores in our servers. The point to learn here is that there are extremely fast and powerful microprocessors, and there are very slow and low-power microprocessors. Whatever the microprocessor, each one has a purpose and a job to do. They are all microprocessors.

Purpose-Built Microservices

The size of a microservice should not be a goal. Rather, it’s a result. Similar to a microprocessor, a microservice serves a purpose within a set of runtime constraints and SLAs. Depending on the purpose, constraints, and SLAs, the word microservice might mean different things. Just like with all things DDD, when you change the context, you change the meaning of any given word.

Start Off With Bounded Context as Microservice

I’ve been giving this advice for quite a while. Using a Domain-Driven Design (DDD) Bounded Context as the initial “measurement” of a microservice is a great place to start. A Bounded Context is not arbitrary in size. Again, as I have pointed out for quite a number of years, a Bounded Context is the size of its Ubiquitous Language codified in a domain model, along with supporting components for UI and/or REST endpoints, database access, and possibly a messaging mechanism.

The key here is Ubiquitous Language. A Ubiquitous Language is developed by a team of one or more business experts and developers. You constrain your Language to solve a specific problem that provides core business advantage. Sometimes you might use DDD, or at least some strategic tools of DDD and possibly Domain Events, in microservices that support the core. Thus, the core and any supporting and even generic subdomains would generally all be different microservices. I have written a lot about what constitutes a Ubiquitous Language, so I won’t go into that in detail here. Even so, I will say that a Ubiquitous Language is typically not tiny, but also not very large. It focuses on a single core business value. The concepts of a single Ubiquitous Language are what drive the natural cohesion of components found in a single service. It is probably much like what Gergely Orosz of Uber considers “wells-sized services.”

Yet, note again that I said that Bounded Context “is a great place to start.” This means that if you deliver your Bounded Context solution as a single deployment unit, this is a great position in which to begin. You won’t have a monolith, but you also won’t have thousands of tiny-tiny microservices on your back that can feel more like thousands of crazy monkeys.
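As a rough illustration, here is a minimal sketch, with hypothetical names, of a Bounded Context packaged as one deployment unit: a small domain model codifying terms of the Ubiquitous Language, fronted by an application service that the UI or REST endpoints would call. Everything behind that seam ships together.

```python
from dataclasses import dataclass, field


@dataclass
class Product:
    """Domain model object: a term from this context's Ubiquitous Language."""
    name: str
    backlog: list = field(default_factory=list)

    def plan_backlog_item(self, summary: str) -> None:
        # Behavior expressed in the language of the domain experts.
        self.backlog.append(summary)


class ProductApplicationService:
    """Application service: the seam that UI/REST endpoints call into.
    The model, this service, and its supporting components form one
    deployment unit for the whole Bounded Context."""

    def __init__(self):
        self._products = {}

    def create_product(self, name: str) -> Product:
        product = Product(name)
        self._products[name] = product
        return product

    def plan_backlog_item(self, name: str, summary: str) -> None:
        self._products[name].plan_backlog_item(summary)


service = ProductApplicationService()
service.create_product("Agile PM")
service.plan_backlog_item("Agile PM", "Support sprint burndown charts")
```

Nothing about this sketch dictates a size; the service is as big as the Language it codifies, and no bigger.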

Even Smaller

Still, there may be good reasons that drive the need to divide a Bounded Context into separate deployment units. Note that this is not an arbitrary decision driven by radical and dogmatic opinions on lines-of-code counts or single-component services. Let’s consider a few.
  1. Rate of change: Some areas of the Bounded Context change more frequently than other areas.
  2. Scale: A specific component in the Bounded Context is measurably attracting heavier load than other components.
  3. Performance: Similar to scale, but more focused on latency and throughput, a specific component in the Bounded Context is measurably bogging down other components, and/or being bogged down by others.

Rate of Change

Most companies aren’t solving FAANG (and Uber) kinds of scale and performance problems. Therefore, I think that rate of change is the most likely reason to consider deploying a part of a Bounded Context independently. This could be the case when a small team is trying to make stepwise improvements to part of the domain model, but one or two team members are constantly challenged by the business to change business rules found in the same Bounded Context. This might be related to introducing daily sales offers or to changing logistics. The systematic changes are happening slowly and deliberately, while the rules changes are happening reactively, probably even with a bit of drama and a bit more stress. The main factor is when the slow, deliberate changes cause friction for the team members who are under pressure to deliver rapidly, and vice versa. You can well imagine that breaking the codebase of frequent changes away from the codebase of slower changes would bring relief to both sides of the small team.

Does this mean that the Ubiquitous Language changes? Possibly, but for now let’s assume not. This means that you are still having modeling language discussions with the same domain experts, seeking new breakthroughs, as you were previously. Yet dividing the codebases enables deployment at whatever rates the changes require. What if you were to decide to split the Ubiquitous Language by rate of change? That is justifiable if the business people who drive the frequent changes are different from those who share in conversations about the slower and more deliberate changes. In either case, one or a few smaller microservices as a deployment tactic has a specific purpose and a specific benefit. It is driven by the need to find practical ways to decrease the team’s pain.

Scale and Performance

While scale and performance are different problems to solve, they have much the same needs. In this case the driver is that you need to increase the computing capacity for some subset of components in a Bounded Context. That may mean scaling out by means of commodity servers, scaling up to network- and server-class machines, or even using Function/Lambda as a Service. Here you also deliver your Bounded Context as different deployment units, but it may or may not be necessary to divide the codebase. You may be able to create different deployment units by means of modules within a single codebase. Whatever you choose, clearly the reason for creating smaller deployment units is to solve a problem that is completely unrelated to the business experts you work with and the domain modeling challenges that you face.
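The modules-within-one-codebase tactic can be sketched as follows, with hypothetical component names: one codebase, two deployables selected by entry point, so only the hot component is scaled out independently while the domain model stays whole.

```python
# One Bounded Context codebase, cut into deployment units by entry point
# rather than by repository split. Both entry points import the same shared
# domain model; only "pricing" (hypothetically the hot path) runs alone.

def run_main_service(components: list[str]) -> list[str]:
    """Entry point for the main deployable: everything but the hot path."""
    return [name for name in components if name != "pricing"]


def run_pricing_service(components: list[str]) -> list[str]:
    """Entry point for the hot-path deployable: just the component that
    measurably attracts heavier load, scaled out on its own."""
    return [name for name in components if name == "pricing"]


# The context's component registry, partitioned only at deploy time.
components = ["catalog", "ordering", "pricing"]
print(run_main_service(components))     # ['catalog', 'ordering']
print(run_pricing_service(components))  # ['pricing']
```

The design choice is that the split lives in deployment configuration, not in the model, so the team keeps one Ubiquitous Language and one codebase.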

Additional Challenges

Whether it’s rate of change, scale, performance, or another reason that drives the need for different codebases and/or deployment units, there are additional challenges that you now face. Where there were previously in-process method invocations or function calls, there are now a network and additional databases. Although the network division may be obvious, the databases must be considered as well. You probably won’t be able to scale and perform better while you are tied to the same databases used by the original Bounded Context as a whole microservice. In my estimation these are very good reasons not to take these divisions too lightly. Think carefully, with eyes wide open, before creating independent deployment units. Think of it this way: for every new tiny-tiny microservice you introduce, you invite one more crazy monkey to jump around on your back.
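Here is a small sketch of what the division costs, with hypothetical names: a call that used to be an in-process method invocation now crosses a network, so the caller must budget for timeouts, retries, and outright failure.

```python
def fetch_remote_price(failures_pending: int) -> float:
    """Stands in for a network call to the split-off deployment unit.
    Simulates a timeout while failures_pending is positive."""
    if failures_pending > 0:
        raise TimeoutError("remote service did not answer in time")
    return 42.0


def price_with_retry(max_attempts: int = 3) -> float:
    """What the former one-line in-process call now has to look like."""
    failures_left = 2  # pretend the first two attempts time out
    for _ in range(max_attempts):
        try:
            return fetch_remote_price(failures_left)
        except TimeoutError:
            failures_left -= 1
            # In a real system: back off, emit metrics, maybe fall back
            # to a cached value instead of retrying blindly.
    raise RuntimeError("remote pricing unavailable after retries")


print(price_with_retry())  # 42.0
```

None of this machinery existed when the two components shared a process, which is exactly why each new boundary deserves deliberate justification.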

Conclusion

Hopefully my discussion about the purpose in using a variety of microprocessors for solving a range of computing challenges has helped illustrate that the same is true for microservices. Remember, if you start out with Bounded Context as Microservice you are starting off on a non-arbitrary path that has a clear core business purpose. As you face any number of challenges that cause pain and friction with teammates and/or customers, you’ve got purpose-driven options. Use them wisely.

