For most computing tasks, there is great advantage in splitting the workload among multiple actors, partitioning it into separate tasks for each actor to perform. Two common ways of doing this are multi-threaded programs and multi-process systems. In a multi-threaded program, multiple actors live in a shared program context; in a multi-process system, each actor lives in its own independent program context. Choosing the best approach for your program and workload requires understanding the advantages and disadvantages of multi-threaded programs.
Multi-threaded program advantages:
- Less overhead to establish and terminate vs. a process: because very little memory copying is required (just the thread stack), threads are faster to start than processes. To start a process, the parent's entire address space must be duplicated for the new process copy to start. While some operating systems defer this copying until memory is modified (copy-on-write), that optimization is not universally guaranteed.
- Faster task-switching: in many cases, it is faster for an operating system to switch between threads within a process than between different processes. The CPU caches and program context can be preserved across thread switches within a process, rather than being invalidated and reloaded when the CPU switches to a different process.
- Data sharing with other threads in a process: for tasks that require sharing large amounts of data, the fact that all threads in a process share its memory pool is very beneficial. Because there are no separate copies, different threads can read and modify a shared pool of memory easily. While data sharing is possible between separate processes through shared memory and inter-process communication, it is of an arm's-length nature and is not inherently built into the process model.
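As a minimal sketch of this last advantage (the names and the tallying task are hypothetical, chosen only for illustration), worker threads can read and update a single shared structure directly, because all threads in a process share its memory space:

```python
import threading

shared_totals = {}          # one structure, visible to every thread
lock = threading.Lock()     # guards concurrent modification

def tally(name, values):
    subtotal = sum(values)  # reading the input needs no per-thread copy
    with lock:              # writes to shared memory must be synchronized
        shared_totals[name] = subtotal

threads = [
    threading.Thread(target=tally, args=("evens", [2, 4, 6])),
    threading.Thread(target=tally, args=("odds", [1, 3, 5])),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(shared_totals)  # both threads wrote into the same dict
```

The equivalent multi-process version would need a pipe, socket, or shared-memory segment to move the results between address spaces, which is exactly the arm's-length sharing described above.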
Threads are a useful choice when your workload consists of lightweight tasks (in terms of processing effort or memory footprint), such as a web server servicing page requests: each request is small in scope and in memory usage. Threads are also useful when multi-part information is being processed – for example, separating a multi-page TIFF image into a separate TIFF file per page. There, loading the TIFF into memory once and having multiple threads access the same memory buffer yields real performance benefits.
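The multi-page pattern can be sketched as follows. This is not real TIFF parsing – the `document` bytes and the `|` page delimiter are stand-ins for a file's actual directory structure – but it shows the shape of the approach: load once, then let a thread pool read the same in-memory buffer.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a multi-page file loaded into memory once;
# a real TIFF splitter would walk the file's image directories instead.
document = b"PAGE1DATA|PAGE2DATA|PAGE3DATA"
pages = document.split(b"|")  # pretend these are page boundaries

def extract_page(index):
    # Every thread reads the same in-memory buffer; no per-task copy is made.
    return (index, pages[index])

with ThreadPoolExecutor(max_workers=3) as pool:
    results = dict(pool.map(extract_page, range(len(pages))))

print(results[0])  # the page data came straight from the shared buffer
```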
Multi-threaded program disadvantages:
- Synchronization overhead of shared data: shared data that is modified requires special handling in the form of locks, mutexes and other primitives to ensure that data is not read while it is being written, nor written by multiple threads at the same time.
- Shared process memory space: all threads in a process share the same memory space. If something goes wrong in one thread and causes data corruption or an access violation, then this affects and corrupts all the threads in that process. This is a special concern for cross-language environments where it is very easy to have subtle ABI interaction problems, such as Java-based web servers calling upon native libraries via the JNI (Java Native Interface) ABI.
- Program debugging: multi-threaded programs present difficulties in finding and resolving bugs above and beyond the normal difficulties of debugging. Synchronization issues, non-deterministic timing and accidental data corruption all conspire to make debugging multi-threaded programs an order of magnitude harder than debugging single-threaded ones.
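The synchronization requirement in the first disadvantage can be sketched with a shared counter (a deliberately simple example; the thread counts are arbitrary). Each increment is a read-modify-write sequence, so without a lock, two threads can interleave and lose updates:

```python
import threading

counter = 0
counter_lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        # Without the lock, this read-modify-write could interleave with
        # another thread's and silently lose increments.
        with counter_lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # prints 40000: every increment is preserved
```

The lock is also where the overhead lives: every increment now pays for acquiring and releasing it, which is the price of correctness under shared mutable state.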
Processes are a useful choice for parallel programming with workloads where tasks take significant computing power, memory or both. For example, rendering or printing complicated file formats (such as PDF) can take significant time – many milliseconds per page – and involve significant memory and I/O requirements. In this situation, using one single-threaded process per file allows for better throughput, thanks to increased independence and isolation between the tasks, than using one process with multiple threads.
Virtualized and cloud environments such as VMware and Amazon's AWS platform complicate this picture somewhat. In these environments, hardware is shared with other virtualized environments, and wide variance in CPU allocation and I/O times can be observed, along with higher variance in context-switching return time. As Jayasinghe et al. observe (http://www.cercs.gatech.edu/opencirrus/OCsummit11/presentations/jayasinghe.pdf) for the Amazon platform, reducing the number of threads in an application running in a cloud environment can increase performance. Designing for elastic demand at the outset is also an important factor: a multiple-process application in which each process assumes limited communication with and reliance on other processes is much easier to scale horizontally to meet demand – by instantiating new server instances – than an application that relies exclusively on multiple threads and can only scale vertically.
There are many factors to consider for your specific application and environment, and I've only provided an overview of the most important considerations. For applications that use the Adobe PDF Library, we have found that most workloads benefit from a multiple-process approach when possible. Benchmarking and profiling your application, along with usage testing, is ultimately the only reliable way to know what will work best in your specific situation. I hope the guidelines above help guide where to start in writing or parallelizing your application.