Threaded vs. Evented Concurrency


With the rise of cloud computing, a question has come up at our company about the well-defined backend services we expose. In the current architecture, these services spend most of their time calling other services and waiting on I/O. In common practice, most server technologies rely on a thread pool to handle that I/O, and in the business-integration area we face very high-traffic environments and processes.

Thread pools can be hard to manage: big thread pools cause a lot of overhead, and small thread pools may get exhausted by a spike in latency or traffic.

There are two general approaches: Threaded and Evented.



Most development teams use the Threaded approach: one thread is dedicated to each request, and that thread does all the processing for the request until a response is sent. Any I/O, such as a call to a remote service, is typically synchronous, which means the thread becomes blocked and sits around idle until the I/O completes.
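A minimal sketch of the thread-per-request model (the class and method names here are made up for illustration; the "remote call" is simulated with a sleep):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ThreadedSketch {
    // Hypothetical request handler: the pool thread running it blocks
    // synchronously on "I/O" and does nothing useful while waiting.
    static String handle(int requestId) throws InterruptedException {
        Thread.sleep(50); // synchronous remote call: thread sits idle here
        return "response-" + requestId;
    }

    public static void main(String[] args) throws Exception {
        // One pool thread is tied up for the entire lifetime of the request.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        Future<String> response = pool.submit(() -> handle(1));
        System.out.println(response.get()); // prints "response-1"
        pool.shutdown();
    }
}
```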

By contrast, Evented servers typically use only a single thread per CPU core. The idea is to ensure that these scarce threads are never blocked: all I/O is asynchronous, so instead of waiting, the thread can process other requests, and only come back to the current one when the response from the I/O call is ready.
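The same interaction in an evented style can be sketched with `CompletableFuture` (the `callRemoteService` helper is hypothetical): the call returns immediately, and the continuation runs only when the result is ready, so no thread blocks in between.

```java
import java.util.concurrent.CompletableFuture;

public class EventedSketch {
    // Hypothetical async remote call: returns a future right away instead
    // of blocking the caller until the response arrives.
    static CompletableFuture<String> callRemoteService(int id) {
        return CompletableFuture.supplyAsync(() -> "payload-" + id);
    }

    public static void main(String[] args) {
        // thenApply registers a callback; the calling thread is free to
        // process other requests until the I/O result is available.
        CompletableFuture<String> response =
            callRemoteService(42).thenApply(body -> "HTTP 200: " + body);
        System.out.println(response.join()); // prints "HTTP 200: payload-42"
    }
}
```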

Threaded vs. Evented performance


Threads are resource-hungry: they carry significant memory overhead (e.g. the default stack size for a single thread on a 64-bit JVM is 1 MB) and context-switching overhead (saving and restoring register state, effects on the CPU cache and pipeline, lock contention). Creating threads on the fly tends to be expensive, so most servers use a fixed thread pool.

Therefore, the crucial parameter of Threaded servers is the size of the thread pool. If there are not enough threads, it is easy for all of them to become tied up waiting for I/O, preventing any new requests from being processed even though most of your threads are just idly waiting. If the thread pool is too big, the extra memory usage and context-switching overhead become very costly. Choosing the right thread-pool size is practically impossible.
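Pool exhaustion is easy to demonstrate: a sketch (with made-up sizes) where a 2-thread pool receives 4 tasks that each "wait on I/O" for 100 ms. The last two tasks cannot even start until a thread frees up, so the total time is two waves of blocking, roughly 200 ms instead of 100 ms.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolExhaustionSketch {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2); // only 2 threads
        CountDownLatch done = new CountDownLatch(4);
        long start = System.nanoTime();
        for (int i = 0; i < 4; i++) {
            pool.submit(() -> {
                try { Thread.sleep(100); }           // simulated blocking I/O
                catch (InterruptedException e) { }
                done.countDown();
            });
        }
        done.await(); // tasks 3 and 4 queue behind the first blocking wave
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("elapsed ~" + elapsedMs + " ms"); // roughly 200 ms
        pool.shutdown();
    }
}
```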

On Evented servers, waiting for I/O is very cheap: the actors are lightweight and consume about 600 bytes of memory each. Idle requests have negligible cost, as they don't hold up an OS thread. It is therefore very tempting to use Evented servers, since they can handle far more concurrent requests than Threaded servers.

But be careful! Even a single long calculation or accidental blocking I/O call can bring an Evented server to its knees!
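This failure mode can be sketched by modeling the event loop as a single-threaded executor (an assumption for illustration; real evented runtimes differ in detail). One accidental blocking call stalls every other event queued behind it:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class BlockedEventLoopSketch {
    public static void main(String[] args) throws Exception {
        // The "event loop": a single scarce thread, as on an Evented server.
        ExecutorService loop = Executors.newSingleThreadExecutor();

        // An accidental blocking call lands on the loop thread.
        loop.submit(() -> {
            try { Thread.sleep(200); } catch (InterruptedException e) { }
        });

        // A cheap event submitted right afterwards...
        long start = System.nanoTime();
        Future<String> quick = loop.submit(() -> "quick event");
        quick.get(); // ...cannot run until the blocking call finishes
        long waitedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("quick event waited ~" + waitedMs + " ms");
        loop.shutdown();
    }
}
```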

Resources:

http://engineering.linkedin.com/play/play-framework-async-io-without-thread-pool-and-callback-hell

http://akka.io/

http://www.playframework.com




