Parallel processing is a type of computer processing in which large computing tasks are broken into smaller sub-tasks that are then processed simultaneously, or in parallel, rather than sequentially. This technology is widely used in modern computing, especially for advanced problems such as those dealt with in the natural sciences. Examples of parallel processing technology within a single device include symmetric multiprocessing and multicore processing. Multiple computers can also be linked together to work in parallel through methods such as distributed computing, computer clusters and massively parallel computers.
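The basic idea of breaking a large task into sub-tasks that run simultaneously can be sketched in a few lines of Python. This is a minimal illustration, not a production pattern; the chunk count and the use of `ProcessPoolExecutor` are illustrative choices.

```python
# A sketch of parallel processing: one large summation is broken into
# smaller sub-tasks that run simultaneously on separate worker processes,
# rather than sequentially in a single process.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    """Sub-task: sum one contiguous slice of the overall range."""
    start, stop = bounds
    return sum(range(start, stop))

def parallel_sum(n, chunks=4):
    """Split sum(0..n-1) into `chunks` sub-tasks and run them in parallel."""
    step = n // chunks
    bounds = [(i * step, (i + 1) * step if i < chunks - 1 else n)
              for i in range(chunks)]
    with ProcessPoolExecutor(max_workers=chunks) as pool:
        # Combine the partial results once every sub-task has finished.
        return sum(pool.map(partial_sum, bounds))

if __name__ == "__main__":
    print(parallel_sum(1_000_000))
```

Each worker process handles one slice of the range independently, so the slices are genuinely computed in parallel on a multicore machine, and the partial results are combined at the end.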
A symmetric multiprocessor is a computer with a single common main memory and operating system instance linked to multiple, identical processors. The processors have the same capabilities and are linked to a common memory, so tasks can easily be assigned or reassigned as needed to balance the workload between them. In multicore processing, each processor contains at least two central processing units (CPUs), called cores, that are responsible for reading and executing instructions. A multicore processor is, in essence, multiple processors in a single integrated component. This allows for faster and more efficient communication between processing cores than in multiprocessor computers in which each CPU is a separate component.
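The shared-memory property described above is what makes reassigning tasks between processors cheap: every worker sees the same data. A rough sketch of that idea, using Python threads as a stand-in (they share one address space, though CPython's global interpreter lock limits true CPU parallelism; on real SMP hardware each worker would run on its own processor):

```python
# A sketch of shared-memory task assignment: all workers pull from the
# same task queue in common memory, so work goes to whichever worker is
# free next, balancing the load automatically.
import threading
import queue

def run_workers(tasks, n_workers=4):
    """Drain a shared task queue with several workers; collect results."""
    work = queue.Queue()
    results = []
    lock = threading.Lock()  # the results list is shared, so guard appends

    for t in tasks:
        work.put(t)

    def worker():
        while True:
            try:
                t = work.get_nowait()  # grab the next unassigned task
            except queue.Empty:
                return                 # no work left for this worker
            r = t * t                  # stand-in for real computation
            with lock:
                results.append(r)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return sorted(results)
```

No task is pinned to a particular worker; whichever one is idle takes the next item from the shared queue, which is the load-balancing behavior the paragraph describes.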
Multiprocessor computers are widely used in scientific and business applications. They are less common in personal computer systems, which have traditionally been uniprocessor designs, though multiprocessors have become more common in the consumer market. Computer software must be specifically designed for multiprocessor computers to take full advantage of the benefits they can provide, and such software often performs poorly on a single-processor computer as a result. Likewise, programs written with a single processor in mind usually gain only limited benefits from multiprocessing because they are not designed to take advantage of it.
Distributed parallel processing technology uses multiple, otherwise independent computers working on different parts of a problem in parallel, linked together via the Internet or an internal network so that they can communicate with each other. This type of parallel processing technology can be used with computers that are physically distant from each other, though this is not necessarily always the case. Together, the linked computers form what is called a computational grid.
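A bare-bones sketch of this arrangement: one machine acts as a worker listening on a network socket, and a coordinator ships it a sub-task and reads back the result. Real distributed systems use frameworks such as MPI or distributed task queues rather than raw sockets, and the one-number "protocol" here is purely an illustrative assumption.

```python
# A sketch of distributed parallel processing: a worker listens on a TCP
# socket, receives a sub-task over the network (here, a number to square),
# computes it, and sends the result back to the coordinator.
import socket
import threading

def worker_server(host="127.0.0.1", port=0):
    """Start a one-shot worker in the background; return its port number."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))  # port 0 lets the OS pick a free port
    srv.listen(1)

    def serve():
        conn, _ = srv.accept()
        with conn:
            task = int(conn.recv(64).decode())        # receive the sub-task
            conn.sendall(str(task * task).encode())   # send the result back
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()[1]

def submit_task(port, task, host="127.0.0.1"):
    """Coordinator side: send one sub-task to a worker and await the result."""
    with socket.create_connection((host, port)) as conn:
        conn.sendall(str(task).encode())
        return int(conn.recv(64).decode())
```

Because the link is an ordinary network connection, the worker could just as well be in another building or on another continent; only the hostname and port would change.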
Computational grids can be very large, potentially incorporating thousands of computers spread all over the world. These computers might also be working on unrelated problems at the same time, with the grid's tasks distributed among computers according to how much spare processing capacity each one has at that moment. Grid computing differs from most other modern parallel computing in that a single grid often includes a diverse array of computers of varying capabilities, rather than a group of identical units.
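The capacity-based distribution described above can be sketched as a simple greedy scheduler. The "one unit of capacity per task" accounting and the greedy strategy are illustrative assumptions; real grid middleware uses far richer scheduling policies.

```python
# A sketch of grid-style scheduling: tasks are handed out according to how
# much spare processing capacity each machine has at that moment, so a
# heterogeneous mix of fast and slow machines can share one workload.

def distribute(tasks, spare_capacity):
    """Assign tasks to machines, always picking the machine with the most
    remaining spare capacity (a simple greedy load-balancing rule)."""
    remaining = dict(spare_capacity)           # machine -> spare units
    assignment = {m: [] for m in remaining}
    for t in tasks:
        # Choose the machine with the most spare capacity right now.
        m = max(remaining, key=remaining.get)
        assignment[m].append(t)
        remaining[m] -= 1                      # each task consumes one unit
    return assignment
```

A machine with more spare capacity naturally ends up with more tasks, which mirrors how a grid keeps busy machines from being overloaded while idle ones absorb the work.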
Computer clusters are a form of parallel processing technology in which multiple linked computers, usually with identical capabilities, work closely together as a single unit. Unlike symmetric multiprocessing, in which multiple processors share a common memory and operating system, each individual unit in a cluster is a complete standalone computer. These computers are usually in the same geographic location and are connected over a local area network. Some computers are built specifically for use in clusters, but clusters also can be formed by linking computers that were originally designed to operate autonomously.
Massively parallel computers are similar to cluster computers in that they are also composed of multiple computers joined together, but they are much larger and usually contain hundreds or thousands of nodes. They also have their own specialized components linking their individual computers together, whereas the machines in a computer cluster are joined by standard, off-the-shelf hardware often referred to as commodity components. The most advanced massively parallel computers can be truly colossal, containing tens of thousands of individual computers filling thousands of square feet of space, all working together. Most of the world's advanced supercomputers, used for complex calculations in areas such as astrophysics and global climate modeling, are of this type.