Three Implementations of vlingo/streams

If you are interested in software designs that feature concurrency, I think you will enjoy this post comparing multi-threading with component development using the Actor Model and actors. Employing the Actor Model means that your components are built using actors. Actors are objects, but ones that send and receive messages asynchronously, and that support concurrent and parallel processing when run on multi-core hardware. If you don't yet know much about this programming model, take a look at these explanations of the Actor Model and actor development.

While developing the vlingo/streams component of the vlingo/platform I purposely produced multiple implementations. I first employed a single-threaded, blocking model. Of course this had latency written all over it; that is exactly what a single-threaded request handler leads to. Besides gaining observable proof of the disadvantages, I could use this code as a basis for refactoring into a multi-threaded model. That way I would tend to introduce the complexity of multi-threading only where necessary.
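To make the latency problem concrete, here is a minimal sketch of such a single-threaded, blocking handler. The class and its `processor` function are hypothetical, not the vlingo/streams code: the point is only that each request blocks until the previous one completes, so total latency grows linearly with load.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Illustrative single-threaded handler (not the vlingo/streams implementation):
// every request is processed on the caller's thread, one after another.
class SingleThreadedHandler {
  private final Function<String, String> processor;

  SingleThreadedHandler(Function<String, String> processor) {
    this.processor = processor;
  }

  // Each call blocks until the processor finishes, so the Nth request
  // waits behind all N-1 requests before it.
  List<String> handleAll(List<String> requests) {
    List<String> responses = new ArrayList<>();
    for (String request : requests) {
      responses.add(processor.apply(request)); // blocking step
    }
    return responses;
  }
}
```

A design like this is trivially correct and easy to reason about, which is what made it a useful starting point for refactoring.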

The multi-threaded code performed much better than the single-threaded version, of course, but it had disadvantages, too. It was difficult to manage thread usage at any given point during request handling. To address this, I created queue-based message dispatchers. I could assign a certain number of threads to each dispatcher queue, attempting to prevent the creation of "too many threads" and the depletion of precious system resources.
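A queue-based dispatcher of that shape might look something like the following sketch. This is an illustrative, hand-rolled version, not the actual vlingo/streams dispatcher: a fixed number of worker threads drains a shared queue, capping total thread usage for that queue's workload.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative queue-based dispatcher (not the vlingo/streams code):
// N worker threads share one queue, bounding threads per workload.
class QueueDispatcher {
  private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
  private final Thread[] workers;
  private volatile boolean running = true;

  QueueDispatcher(int threadCount) {
    workers = new Thread[threadCount];
    for (int i = 0; i < threadCount; i++) {
      workers[i] = new Thread(() -> {
        while (running) {
          try {
            queue.take().run(); // blocks until a task arrives
          } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return;
          }
        }
      });
      workers[i].setDaemon(true);
      workers[i].start();
    }
  }

  // Enqueue work without blocking the caller.
  void dispatch(Runnable task) { queue.add(task); }

  void stop() {
    running = false;
    for (Thread w : workers) w.interrupt();
  }
}
```

With one such dispatcher per kind of work, the thread budget is explicit, but as described next, balancing those budgets against each other is where the real difficulty lies.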

Designing a balanced workflow between total thread availability and consumption was challenging, yet doable. The primary problem, however, was the lack of explicit division of work and the inability to predict the time of completion of any given task under heavy load. Admittedly some of the threading complexity could have been managed by the use of the Java Executor, but this still didn't solve the problem of assigning specific components to specific queue types, and determining how many components would best service a given part of the overall request handling process. Introducing threads and queues in places I had not initially foreseen led to unintentional complexity (not really accidental complexity, because I understood why I chose a given design). It wasn't a tangled mess, but it wasn't tasteful to me.
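For contrast, here is roughly what the Java Executor approach looks like: one bounded pool per stage of request handling. The stage names and pool sizes are purely illustrative. Note that even with explicit budgets, nothing here predicts when a given task completes under heavy load, which was the remaining problem.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch of per-stage thread budgets using java.util.concurrent executors.
// Stage names ("parsing", "processing") are hypothetical examples.
class StagedExecutors {
  final ExecutorService parsing = Executors.newFixedThreadPool(2);
  final ExecutorService processing = Executors.newFixedThreadPool(4);

  void shutdown() throws InterruptedException {
    parsing.shutdown();
    processing.shutdown();
    parsing.awaitTermination(1, TimeUnit.SECONDS);
    processing.awaitTermination(1, TimeUnit.SECONDS);
  }
}
```

Executors bound the threads, but the question of which components belong to which pool, and how many of each, still has to be answered by hand.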

Enter the Actor Model with the use of vlingo/actors. I redesigned the overall approach more intentionally because I could easily visualize where actors should be used. Even so, introducing actors at non-critical points leads to unnecessary messaging overhead. Consider using internal actor behavior rather than introducing another actor where low-latency data access is critical. Direct access of privately owned data while an actor is handling its current message achieves much better throughput than delegating to other actors when circumstances don't require it.
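The core mechanism can be sketched in a few lines. This is a deliberately tiny, hand-rolled mailbox actor for illustration only, not the vlingo/actors API: messages are enqueued asynchronously and processed one at a time, so the actor's privately owned state needs no locks.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Consumer;

// Minimal mailbox-style actor for illustration (NOT the vlingo/actors API).
// One thread drains the mailbox, so behavior runs one message at a time.
class TinyActor<T> {
  private final BlockingQueue<T> mailbox = new LinkedBlockingQueue<>();

  TinyActor(Consumer<T> behavior) {
    Thread runner = new Thread(() -> {
      try {
        while (true) behavior.accept(mailbox.take()); // serialized handling
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    });
    runner.setDaemon(true);
    runner.start();
  }

  // Fire-and-forget: the sender never blocks on delivery or handling.
  void tell(T message) { mailbox.add(message); }
}
```

Because each message is handled in full before the next, the behavior may read and write the actor's own fields directly; that is the "internal actor behavior" that avoids the messaging overhead of delegating to yet another actor.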

What problems does the vlingo/streams component solve? Stay tuned for more information on the way!

If you have yet to look at our vlingo/platform, this blog post introduces you to the architecture. The open source vlingo/platform repositories are available here.
