On Aug 27th I gave a talk at Vienna Scala User Group about this topic.
In the Scala community, most of the attention is given to Typesafe’s own stack, namely Akka and the Play Framework. While those are certainly important projects, I will introduce some of Twitter’s open source projects that are equally interesting, maybe even more so.
As a quick reminder: Twitter started to switch from Ruby to Scala around 2008/2009 at a time when Scala was not even five years old and had no significant adopters. In my opinion Twitter played a huge part in getting Scala to where it is today. They showed that the language is powerful enough to handle one of the world’s biggest systems.
That’s also one of the reasons why I find Twitter’s stack so interesting. It has proven to handle extreme load, and I’m not aware of any Akka or Play production system that is nearly as big.
At the core of many services at Twitter lies Finagle:
Finagle is an extensible RPC system for the JVM, used to construct high-concurrency servers. Finagle implements uniform client and server APIs for several protocols, and is designed for high performance and concurrency. Most of Finagle’s code is protocol agnostic, simplifying the implementation of new protocols.

Out of the box, Finagle takes care of concerns like:
- Connection pools
- Failure detection
- Failover strategies
- Load balancers
- Back-pressure techniques
This enables the developers of a service to focus on the task at hand instead of reimplementing said features over and over.
The basic composable abstractions it provides are Future, Service, and Filter. Futures are your regular futures, just Twitter’s own implementation, because back in the day the Scala standard library did not have them yet; there are efforts to migrate to the standard implementation, though. A Service takes a request and returns a Future of the response. Filters wrap a Service and can be used for things like authorization or error handling.
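To illustrate the shape of these abstractions, here is a minimal, dependency-free sketch. Real Finagle uses `com.twitter.util.Future` and its own `Service`/`Filter` classes; this version substitutes `scala.concurrent.Future` and illustrative names so it stands alone.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

object FinagleSketch {
  // A Service is essentially an asynchronous function: Req => Future[Rep].
  trait Service[Req, Rep] extends (Req => Future[Rep])

  // A Filter sits in front of a Service and can transform the request,
  // the response, or short-circuit entirely (auth, error handling, ...).
  trait Filter[Req, Rep] { self =>
    def apply(request: Req, service: Service[Req, Rep]): Future[Rep]

    // Composing a Filter with a Service yields another Service.
    def andThen(service: Service[Req, Rep]): Service[Req, Rep] =
      new Service[Req, Rep] {
        def apply(request: Req): Future[Rep] = self(request, service)
      }
  }

  // A toy service that "handles" a request by echoing it back.
  val echo: Service[String, String] = new Service[String, String] {
    def apply(request: String): Future[String] =
      Future.successful(s"echo: $request")
  }

  // A toy authorization filter: reject requests without a token.
  val auth: Filter[String, String] = new Filter[String, String] {
    def apply(request: String, service: Service[String, String]): Future[String] =
      if (request.startsWith("token ")) service(request.stripPrefix("token "))
      else Future.failed(new Exception("unauthorized"))
  }

  def main(args: Array[String]): Unit = {
    val secured = auth.andThen(echo)
    println(Await.result(secured("token hello"), 1.second)) // echo: hello
  }
}
```

The key idea is that a Filter composed with a Service is again a Service, so cross-cutting concerns stack freely in front of the actual request handler.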
Finagle provides modules for important protocols such as HTTP, MySQL, Redis, Memcached, Protobuf, and Thrift.
Even though Akka follows a completely different approach with the actor model, fundamentally both projects address the same issues a distributed system faces. Akka handles load balancing through mailboxes and failure detection and reaction through supervision, and it will soon get back-pressure through its implementation of Reactive Streams.
Once multiple services that communicate with each other are running, it becomes very helpful to see how a request spends its time throughout the distributed system. Enter Zipkin, a distributed tracing system.
Zipkin can be integrated into a Finagle service with only a few lines of code. There are also modules to trace the duration of database queries.
How does it work? A request inside the distributed system passes tracing information and an identifier along, which is collected on every host along with other metrics such as the response duration. This data is sent to Zipkin, which ties it all back together.
To keep overhead low, only a random sample of requests gets traced. Data can either be stored in Cassandra, Redis, HBase, MySQL, PostgreSQL, SQLite, or H2.
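Conceptually, the information passed along can be pictured as follows. This is an illustrative sketch, not Zipkin’s actual API: the field names mirror the identifiers a Zipkin-style tracer propagates, and the sampling decision is made once, when a request first enters the system.

```scala
import scala.util.Random

object TracingSketch {
  // The identifiers a Zipkin-style tracer passes along with each request.
  final case class TraceContext(
      traceId: Long,              // the same for every hop of one request
      spanId: Long,               // unique per hop (service call)
      parentSpanId: Option[Long], // links a hop back to its caller
      sampled: Boolean            // decided once, at the edge of the system
  )

  // Sampling decision made when a request first enters the system:
  // only a random fraction of requests is traced to keep overhead low.
  def newTrace(sampleRate: Double): TraceContext = {
    val id = Random.nextLong()
    TraceContext(id, id, None, sampled = Random.nextDouble() < sampleRate)
  }

  // Each downstream call gets a fresh span that remembers its parent,
  // which is what lets Zipkin reassemble the request tree afterwards.
  def childOf(ctx: TraceContext): TraceContext =
    ctx.copy(spanId = Random.nextLong(), parentSpanId = Some(ctx.spanId))

  def main(args: Array[String]): Unit = {
    val root  = newTrace(sampleRate = 0.01)
    val child = childOf(root)
    assert(child.traceId == root.traceId)             // trace id is preserved
    assert(child.parentSpanId.contains(root.spanId))  // parentage is recorded
    println(s"trace=${root.traceId} sampled=${root.sampled}")
  }
}
```

Because every hop carries the same trace id and each span records its parent, the collector can reconstruct the full call tree from independently reported spans.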
The results are visible through a web interface that shares similarities with Firebug. For each request it shows which service handled it at what time and how long it took to respond. That way unexpected bottlenecks can be identified quickly.
Zipkin was modelled after the Google Dapper paper.
Note that it is also possible to use Zipkin with Akka.
To properly test services before they are deployed to handle a ton of requests, Twitter built Iago. It is a load generator that is able to accurately replay production (or synthetic) traffic.

Iago supports arbitrarily high rates of traffic because it can launch jobs on other machines if the current one is not able to generate the requested number of requests.
Protocols that are supported are HTTP, Thrift, Memcached, Kestrel, and UDP.
The documentation states:
Twitter-server defines a template from which servers at Twitter are built. Twitter-server ensures that common components like an administrative HTTP server, tracing, stats, etc. are wired in correctly for production use at Twitter.
Twitter-server is basically a layer on top of Finagle that makes sure that services can easily be deployed and monitored at Twitter.
It provides a logger and also a way to measure metrics. Metrics were initially collected through Ostrich, another one of Twitter’s open source projects, which has now been obsoleted by Twitter-server.
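Under the hood, a metrics facility like this boils down to named counters and stats that the admin interface can later export. The following is a hypothetical in-memory sketch, not Twitter-server’s real API (which is based on Finagle’s `StatsReceiver`):

```scala
import java.util.concurrent.ConcurrentHashMap
import java.util.concurrent.atomic.LongAdder

object MetricsSketch {
  // A minimal in-memory metrics registry: named counters that an
  // admin HTTP endpoint could serialize to JSON on request.
  private val counters = new ConcurrentHashMap[String, LongAdder]()

  // Look up (or lazily create) a counter; LongAdder keeps increments cheap
  // under high concurrency, which matters on a hot request path.
  def counter(name: String): LongAdder =
    counters.computeIfAbsent(name, _ => new LongAdder)

  // A point-in-time snapshot, as the admin interface would render it.
  def snapshot(): Map[String, Long] = {
    import scala.jdk.CollectionConverters._
    counters.asScala.map { case (k, v) => k -> v.sum }.toMap
  }

  def main(args: Array[String]): Unit = {
    counter("requests").increment()
    counter("requests").increment()
    counter("failures").increment()
    println(snapshot()) // e.g. Map(requests -> 2, failures -> 1)
  }
}
```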
The HTTP admin interface looks really useful because it exposes not only the application’s metrics but also those of the server itself.
Finatra is Twitter’s own web framework, built on top of Twitter-server and using Mustache.java for templating. Since the Mustache.java project claims that it is used for Twitter’s website, it’s safe to assume that Finatra is in fact what powers every visit to Twitter’s main website.
This project is roughly comparable to the Play Framework. However, Play is more advanced (a more capable templating engine, an asset pipeline, …) and should probably be your choice for a bigger web application, unless your whole infrastructure is already using the rest of the Twitter stack.
One big advantage that Finatra has over the Play Framework though is inherited from Twitter-server: Metrics. Finatra provides detailed metrics right out of the box without any additional work by the developer.
Personally, I find the Twitter stack very appealing. It’s amazing that Twitter open sourced many projects in exactly the form in which they deploy them themselves. It’s obvious that these projects were built out of a clear need for scalability, and it’s good to know that they are battle tested.
If I had to choose a stack for a distributed system today, I would have a hard time deciding between Akka and the Twitter stack. Typesafe offers enterprise support for Akka (and Play) and clearly builds its business by developing and supporting these projects. Twitter builds the presented projects for themselves, and their future is unknown.
I hope this overview was interesting and showed that there is another world out there for Scala enthusiasts besides the Typesafe stack.