All MessageDispatcher implementations are also an ExecutionContext, which means that they can be used to execute arbitrary code, for instance Futures. Every ActorSystem has a default dispatcher that is used in case nothing else is configured for an Actor. The default dispatcher can be configured, and is by default a Dispatcher with the specified default-executor.
If an ActorSystem is created with an ExecutionContext passed in, that ExecutionContext will be used as the default executor for all dispatchers in the ActorSystem. If no ExecutionContext is given, it will fall back to the executor specified in the akka.actor.default-dispatcher configuration.
Dispatchers implement the ExecutionContext interface and can thus be used to run Future invocations and other arbitrary code. If you want to give your Actor a different dispatcher than the default, you need to do two things, the first of which is to configure the dispatcher:
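A dispatcher configuration in application.conf might look like the following sketch; the name my-dispatcher and the parallelism and throughput values are example settings to adapt to your workload:

```hocon
my-dispatcher {
  # Dispatcher is the name of the event-based dispatcher
  type = Dispatcher
  # What kind of ExecutionService to use
  executor = "fork-join-executor"
  # Configuration for the fork join pool
  fork-join-executor {
    # Min number of threads to cap factor-based parallelism number to
    parallelism-min = 2
    # Parallelism (threads) = ceil(available processors * factor)
    parallelism-factor = 2.0
    # Max number of threads to cap factor-based parallelism number to
    parallelism-max = 10
  }
  # Throughput defines the maximum number of messages to be
  # processed per actor before the thread jumps to the next actor.
  # Set to 1 for as fair as possible.
  throughput = 100
}
```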
Note that parallelism-max does not set an upper bound on the total number of threads allocated by the ForkJoinPool. It is a setting specifically concerning the number of hot threads the pool keeps running in order to reduce the latency of handling new incoming tasks. The thread pool executor dispatcher is implemented using a java.util.concurrent.ThreadPoolExecutor. For more options, see the default-dispatcher section of the configuration.
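For comparison, a thread pool executor based dispatcher, here called my-thread-pool-dispatcher as referenced further below, might be configured like this; the pool sizes are example values:

```hocon
my-thread-pool-dispatcher {
  # Dispatcher is the name of the event-based dispatcher
  type = Dispatcher
  # What kind of ExecutionService to use
  executor = "thread-pool-executor"
  # Configuration for the thread pool
  thread-pool-executor {
    # minimum number of threads to cap factor-based core number to
    core-pool-size-min = 2
    # No of core threads = ceil(available processors * factor)
    core-pool-size-factor = 2.0
    # maximum number of threads to cap factor-based number to
    core-pool-size-max = 10
  }
  # Throughput defines the maximum number of messages to be
  # processed per actor before the thread jumps to the next actor.
  # Set to 1 for as fair as possible.
  throughput = 100
}
```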
An alternative to the deployment configuration is to define the dispatcher in code. If you define the dispatcher in the deployment configuration, that value will be used instead of the programmatically provided parameter. The dispatcher you specify in withDispatcher and the dispatcher property in the deployment configuration are in fact paths into your configuration.

There are several types of dispatchers. Dispatcher is an event-based dispatcher that binds a set of Actors to a thread pool; it is the default dispatcher used if one is not specified. PinnedDispatcher dedicates a unique thread to each actor using it; i.e. each actor gets its own thread pool containing a single thread. CallingThreadDispatcher runs invocations on the current thread only; it does not create any new threads, but it can be used from different threads concurrently for the same actor. See CallingThreadDispatcher for details and restrictions. Another example uses a thread pool sized on the number of cores, e.g. for CPU-bound tasks. A different kind of dispatcher that uses an affinity pool may increase throughput in cases where there is a relatively small number of actors that maintain some internal state.
The affinity pool tries its best to ensure that an actor is always scheduled to run on the same thread.
This actor-to-thread pinning aims to decrease CPU cache misses, which can result in significant throughput improvement. Note that thread-pool-executor configuration as per the above my-thread-pool-dispatcher example is NOT applicable. This is because every actor will have its own thread pool when using PinnedDispatcher, and that pool will have only one thread.
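A PinnedDispatcher is declared in configuration much like the other dispatcher types; a minimal sketch, with my-pinned-dispatcher as an illustrative name:

```hocon
my-pinned-dispatcher {
  executor = "thread-pool-executor"
  type = PinnedDispatcher
}
```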
To use the same thread all the time you need to add thread-pool-executor.allow-core-timeout=off to the configuration of the PinnedDispatcher.

In some cases it is unavoidable to do blocking operations, i.e. to put a thread to sleep for an indeterminate time while waiting for an external event to occur. When facing this, you may be tempted to wrap the blocking call inside a Future and work with that instead, but this strategy is too simple: you are quite likely to find bottlenecks or run out of memory or threads when the application runs under increased load.
Using context.dispatcher as the dispatcher on which the blocking Future executes can be a problem: if all of the available threads are blocked, then all the actors on the same dispatcher will starve for threads and will not be able to process incoming messages. Blocking APIs should also be avoided if possible. Try to find or build Reactive APIs, such that blocking is minimised, or moved over to dedicated dispatchers. Often when integrating with existing libraries or systems it is not possible to avoid blocking APIs. The following solution explains how to handle blocking operations properly. Note that the same hints apply to managing blocking operations anywhere in Akka, including Streams, HTTP and other reactive libraries built on top of it.
Here the app is sending messages to BlockingFutureActor and PrintActor, and large numbers of threads on the default dispatcher are handling requests. When you run the above code, you will likely see the entire application get stuck: PrintActor is considered non-blocking, however it is not able to proceed with handling the remaining messages, since all the threads are occupied and blocked by the other, blocking actor, thus leading to thread starvation. The orange portion of the thread shows that it is idle.
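This is not Akka code, but the underlying failure mode can be sketched with plain java.util.concurrent; the class and task names below are made up for illustration, with a small fixed pool standing in for the default dispatcher:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class StarvationDemo {
    public static void main(String[] args) throws Exception {
        // A deliberately small pool standing in for the default dispatcher.
        ExecutorService sharedPool = Executors.newFixedThreadPool(2);
        CountDownLatch blockingDone = new CountDownLatch(1);

        // Like BlockingFutureActor: blocking work occupies every pool thread.
        for (int i = 0; i < 2; i++) {
            sharedPool.submit(() -> {
                try {
                    blockingDone.await(); // blocks until released below
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        // Like PrintActor: cheap, non-blocking work, but it is starved
        // because every thread of the shared pool is blocked.
        Future<String> quick = sharedPool.submit(() -> "printed");
        try {
            quick.get(200, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            System.out.println("quick task starved");
        }

        blockingDone.countDown();        // release the blocked threads
        System.out.println(quick.get()); // now the queued task finally runs
        sharedPool.shutdown();
    }
}
```

The quick task times out while the blocking tasks hold every thread, and only completes once they release the pool, mirroring how PrintActor stalls behind the blocking actor.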
However, a large amount of turquoise (blocked, or sleeping as in our example) threads is very bad and leads to thread starvation. If you own a Lightbend subscription you can use the commercial Thread Starvation Detector, which will issue warning log statements if it detects any of your dispatchers suffering from starvation and other issues.
It is a helpful first step to identify that the problem is occurring in a production system, and then you can apply the proposed solutions as explained below. In the above example we put the code under load by sending hundreds of messages to the blocking actor, which causes threads of the default dispatcher to be blocked. The fork-join-pool based dispatcher in Akka then attempts to compensate for this blocking by adding more threads to the pool. This however does not help if those threads also immediately get blocked, and eventually the blocking operations will dominate the entire dispatcher.
In essence, the Thread.sleep calls have dominated all available threads. One of the most efficient methods of isolating the blocking behaviour such that it does not impact the rest of the system is to prepare and use a dedicated dispatcher for all those blocking operations, configured in application.conf. A thread-pool-executor based dispatcher allows us to set a limit on the number of threads it will host, and this way we gain tight control over how many blocked threads, at most, will be in the system.
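Such a dedicated dispatcher could look like the following in application.conf; the name my-blocking-dispatcher and the pool size of 16 are illustrative values to tune for your workload:

```hocon
my-blocking-dispatcher {
  type = Dispatcher
  executor = "thread-pool-executor"
  thread-pool-executor {
    fixed-pool-size = 16
  }
  throughput = 1
}
```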
Usually a small number, around the number of cores, is a good default to start from. Whenever blocking has to be done, use the above configured dispatcher instead of the default one. Messages sent to SeparateDispatcherFutureActor and PrintActor are handled by the default dispatcher (the green lines, which represent the actual execution). When blocking operations are run on my-blocking-dispatcher, it uses threads up to the configured limit to handle these operations.
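The effect of the isolation can again be sketched with plain java.util.concurrent rather than Akka itself; the class name and pool sizes below are made up, with one pool standing in for the default dispatcher and another for the dedicated blocking dispatcher:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class IsolationDemo {
    public static void main(String[] args) throws Exception {
        // Stand-ins for the default dispatcher and my-blocking-dispatcher.
        ExecutorService defaultPool  = Executors.newFixedThreadPool(2);
        ExecutorService blockingPool = Executors.newFixedThreadPool(2);
        CountDownLatch gate = new CountDownLatch(1);

        // All blocking work goes to the dedicated pool...
        for (int i = 0; i < 2; i++) {
            blockingPool.submit(() -> {
                try {
                    gate.await(); // simulated blocking operation
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        // ...so non-blocking work on the default pool is served promptly,
        // even while the blocking pool is fully occupied.
        Future<String> quick = defaultPool.submit(() -> "still responsive");
        System.out.println(quick.get(5, TimeUnit.SECONDS));

        gate.countDown();
        blockingPool.shutdown();
        defaultPool.shutdown();
    }
}
```

Unlike the starvation case, the quick task completes immediately, because the blocked threads live in their own bounded pool.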
The sleeping in this case is nicely isolated to just this dispatcher, and the default one remains unaffected, allowing the rest of the application to proceed as if nothing bad was happening. After a certain period of idleness, threads started by this dispatcher will be shut down. In this case, the throughput of other actors was not impacted; they were still served on the default dispatcher. The first possibility is especially well-suited for resources which are single-threaded in nature, like database handles which traditionally can only execute one outstanding query at a time and use internal synchronization to ensure this.
A common pattern is to create a router for N actors, each of which wraps a single DB connection and handles queries sent to the router. The number N must then be tuned for maximum throughput, which will vary depending on which DBMS is deployed on what hardware. Configuring thread pools is a task best delegated to Akka; configure it in application.conf.
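As a sketch, such a router of DB-wrapping actors could be declared in deployment configuration; the actor path /dbrouter and the instance count of 5 are made-up example values, and the pool-dispatcher section gives the routees their own dispatcher:

```hocon
akka.actor.deployment {
  /dbrouter {
    router = round-robin-pool
    nr-of-instances = 5
    pool-dispatcher {
      executor = "thread-pool-executor"
      thread-pool-executor {
        fixed-pool-size = 5
      }
    }
  }
}
```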