The Stealer — Fork Join Framework

Aman Agrawal
5 min read · Aug 13, 2020


Welcome back, guys! This blog is a continuation of the previous one, where we talked about Threads and the Executor Framework. If you haven't read it yet, here is the link.

In the previous blog, I mentioned a new thread pool in the Executor Framework named newWorkStealingPool, which was introduced in Java 8. If you look at its internal implementation, it uses a ForkJoinPool. So in this blog, we are going to talk about the Fork Join Framework and, of course, the ForkJoinPool. Let us get started…

Basically, the Fork Join Framework uses a recursive divide and conquer strategy where a big problem is divided into smaller units until each unit can be solved directly. Let us take a very simple example: we have 1 lakh (100,000) integers in an ArrayList and we want to calculate their sum. The Fork Join Framework can divide the task into smaller chunks, perform the addition on each of those chunks, and later combine the partial results to get the final sum.

To implement this divide and conquer strategy, Java provides 3 classes: ForkJoinTask, RecursiveAction, and RecursiveTask.

The ForkJoinTask is the parent class of the RecursiveAction class and RecursiveTask class. Unlike RecursiveAction, the RecursiveTask can return a value for each work unit.

The important thing to note here is that the above classes are used to define how one big task can be broken down into smaller chunks (for example, dividing the ArrayList into sub-lists of size 10) and how each individual chunk is executed (for example, adding the numbers present in each sub-list). The execution itself will be handled by the ForkJoinPool, which we will discuss in a moment.

Here is an example of how to implement a divide and conquer strategy.
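A minimal sketch of what this can look like, assuming we divide the ArrayList into chunks of 10 and accumulate the partial sums in a shared LongAdder (the class name SumAction, the THRESHOLD constant, and the choice of accumulator are illustrative, not the only way to do it):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;
import java.util.concurrent.atomic.LongAdder;

public class SumAction extends RecursiveAction {
    private static final int THRESHOLD = 10;   // chunk size at which we stop splitting
    private final List<Integer> numbers;
    private final LongAdder total;

    public SumAction(List<Integer> numbers, LongAdder total) {
        this.numbers = numbers;
        this.total = total;
    }

    @Override
    protected void compute() {
        if (numbers.size() <= THRESHOLD) {
            // Small enough: process the chunk directly
            numbers.forEach(total::add);
        } else {
            // Too big: split in half and let invokeAll() fork both halves and join them
            int mid = numbers.size() / 2;
            invokeAll(new SumAction(numbers.subList(0, mid), total),
                      new SumAction(numbers.subList(mid, numbers.size()), total));
        }
    }

    public static void main(String[] args) {
        List<Integer> numbers = new ArrayList<>();
        for (int i = 1; i <= 100_000; i++) {
            numbers.add(i);
        }
        LongAdder total = new LongAdder();
        ForkJoinPool.commonPool().invoke(new SumAction(numbers, total));
        System.out.println("Sum = " + total.sum());
    }
}
```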

In the above example, I have used the RecursiveAction class, which provides a compute() method where you write the implementation of how to divide the task and how to execute the smallest unit. The compute() method is the one that will be invoked by the Fork Join Framework. To recursively break down the task, invokeAll() is used, which is a shortcut to fork() all the smaller chunks and then join() them again. Now, let us talk about ForkJoinPool.

A ForkJoinPool differs from other kinds of ExecutorService mainly by virtue of work-stealing: all threads in the pool attempt to find and execute tasks submitted to the pool. This enables efficient processing when most tasks spawn other sub-tasks. If any of the threads finishes executing its own tasks, it can steal tasks that are still waiting in the queues of other threads. (Each thread has its own work queue.)

Unlike a traditional thread pool, which simply runs independent tasks concurrently, the ForkJoinPool is built to split one big task across multiple cores and run the pieces in parallel. Yes, you read that right: finally, we have parallelism. The best practice is to keep the number of threads in the ForkJoinPool equal to the number of processors. If you look at the constructors of ForkJoinPool, one of them accepts an integer that represents the level of parallelism (the number of threads). If you don't create a pool of your own and rely on the common ForkJoinPool instead, its level of parallelism defaults to the number of processors minus one.
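For instance, a pool with an explicit parallelism level can be created like this (a small sketch; the class name and the value 4 are arbitrary choices for illustration):

```java
import java.util.concurrent.ForkJoinPool;

public class CustomPoolSize {
    public static void main(String[] args) {
        // Explicitly ask for 4 worker threads
        ForkJoinPool customPool = new ForkJoinPool(4);
        System.out.println("Custom pool parallelism : " + customPool.getParallelism());
        customPool.shutdown();
    }
}
```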

The JDK provides the Runtime class, which can be used to find out the number of processors (this count includes logical processors as well).
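A quick sketch of that check (the class name is illustrative):

```java
public class ProcessorCount {
    public static void main(String[] args) {
        // availableProcessors() returns the number of logical processors visible to the JVM
        System.out.println("Available processors : "
                + Runtime.getRuntime().availableProcessors());
    }
}
```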

The ForkJoinPool class has a static commonPool() method that can be used to access the common ForkJoinPool anywhere in the entire application. Using the common pool normally reduces resource usage (its threads are slowly reclaimed during periods of non-use and reinstated upon subsequent use). If you have 2 or more available CPUs, the common pool is automatically sized to the number of available CPUs minus 1 (for example, if you have 5 available CPUs, the size of the common pool will be 4). The usual explanation for the minus one is that the thread submitting the work (for example, your main thread) also pitches in, so the total number of busy threads still matches the number of CPUs.

The ForkJoinPool class also provides status check methods (For example — getStealCount() — Returns an estimate of the total number of tasks stolen from one thread’s work queue by another) that are intended to aid in developing, tuning and monitoring fork/join applications.
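A small sketch that grabs the common pool and prints a few of these numbers (the class name is illustrative, and the values depend on what has already run in your application):

```java
import java.util.concurrent.ForkJoinPool;

public class CommonPoolStats {
    public static void main(String[] args) {
        // Same instance is returned everywhere in the application
        ForkJoinPool common = ForkJoinPool.commonPool();
        System.out.println("Active threads : " + common.getActiveThreadCount());
        System.out.println("Queued tasks   : " + common.getQueuedTaskCount());
        System.out.println("Steal count    : " + common.getStealCount());
    }
}
```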

Now, let us deep-dive into the common ForkJoinPool and how it can become a headache if you have blocking or time-consuming tasks.

So if you choose the common ForkJoinPool for blocking or time-consuming tasks, you need to consider some consequences. Let us look at an example.
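As a first step, a sketch like the following compares the machine's processor count with the size of the common pool (the class name is illustrative and the numbers depend on your hardware):

```java
import java.util.concurrent.ForkJoinPool;

public class CommonPoolVsProcessors {
    public static void main(String[] args) {
        System.out.println("Available processors    : "
                + Runtime.getRuntime().availableProcessors());
        System.out.println("Common pool parallelism : "
                + ForkJoinPool.commonPool().getParallelism());
    }
}
```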

If you see the output, the number of threads in the common ForkJoinPool is one less than the number of available processors. Now, let us look at the main part of the program.
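Here is a minimal sketch of that main part, assuming each "hold" is a plain 3-second sleep and that allOf() is used to wait for every call to finish (the class and variable names, and the exact waiting strategy, are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class BlockingCallsOnCommonPool {
    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        List<CompletableFuture<Void>> calls = new ArrayList<>();

        // 100 calls, each holding a common-pool thread for 3 seconds
        for (int i = 0; i < 100; i++) {
            calls.add(CompletableFuture.runAsync(() -> {
                try {
                    TimeUnit.SECONDS.sleep(3);   // simulate the 3-second hold (a blocking call)
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }));
        }

        // Wait until every call has finished
        CompletableFuture.allOf(calls.toArray(new CompletableFuture[0])).join();

        System.out.println("Total time : "
                + (System.currentTimeMillis() - start) / 1000.0 + " seconds");
    }
}
```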

In this implementation, I am calling the runAsync method 100 times. For each call, the thread goes on hold for 3 seconds. Let us try to imagine the whole situation: at any given time, 11 threads are executing 11 calls, and each call is on hold for 3 seconds. Once those calls are over, another 11 calls go on hold, and so on. In the end, the overall time was a little over 30 seconds. During this whole period, if some other task wants to execute using the common ForkJoinPool, it has to wait until one of the threads is free, thereby unnecessarily increasing that task's overall time.

There are 3 modes that you can get from the common thread pool, depending on its configured parallelism (see the sketch after this list):

  • parallelism ≥ 2, JDK creates (# of CPUs minus 1) threads for the common pool.
  • parallelism = 1, JDK creates a new thread for every submitted task.
  • parallelism = 0, a submitted task is executed on a caller thread.
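These modes are driven by the java.util.concurrent.ForkJoinPool.common.parallelism system property. A minimal sketch of overriding it (the class name is illustrative, and setting the property programmatically only works if nothing has touched the common pool yet; normally you would pass it as a -D flag instead):

```java
import java.util.concurrent.ForkJoinPool;

public class CommonPoolParallelismOverride {
    public static void main(String[] args) {
        // Must run before the common pool is initialised; on the command line:
        //   java -Djava.util.concurrent.ForkJoinPool.common.parallelism=1 ...
        System.setProperty("java.util.concurrent.ForkJoinPool.common.parallelism", "1");

        System.out.println("Common pool parallelism : "
                + ForkJoinPool.commonPool().getParallelism());
    }
}
```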

As the runAsync example shows, keeping all of the common pool's threads occupied with highly blocking code leads to a significantly worse result. So it is better to create a separate thread pool for blocking or time-consuming tasks.
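One way to do that (a sketch; the pool type and the size of 50 are arbitrary choices for illustration) is to pass your own executor as the second argument to runAsync, so the blocking work never touches the common pool:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SeparatePoolForBlockingTasks {
    public static void main(String[] args) throws InterruptedException {
        // Dedicated pool for blocking work, sized independently of the common pool
        ExecutorService blockingPool = Executors.newFixedThreadPool(50);

        for (int i = 0; i < 100; i++) {
            CompletableFuture.runAsync(() -> {
                try {
                    TimeUnit.SECONDS.sleep(3);   // simulate a blocking call
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }, blockingPool);   // the second argument keeps the common pool free
        }

        blockingPool.shutdown();
        blockingPool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println("Done");
    }
}
```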

I hope you guys now understand how the Fork Join Framework works. That's it for this blog.

Thank You.
