Concurrency

The ability of a program to manage more than one thing at a time, while giving the illusion that they are happening at the same time, is called concurrency, and such programs are called concurrent programs. Concurrency lets you structure your program so that it performs faster when the problem can be split into multiple sub-problems. When talking about concurrency, another term, parallelism, is often thrown into the discussion, and it is important to know the difference, as the usage of these terms often overlaps. Parallelism is when tasks run simultaneously on separate CPU cores, with overlapping time periods; a concurrent program on a single core instead interleaves its tasks in non-overlapping time slices. The following diagram illustrates the difference between concurrency and parallelism:

To put it another way, concurrency is about structuring your program to manage more than one thing at a time, while parallelism is about running your program on multiple cores to increase the amount of work it does in a given period of time. With these definitions, it follows that concurrency, when done right, makes better use of the CPU, while parallelism might not in all cases: if your program runs in parallel but each core is dealing with a single dedicated task, you aren't gaining much throughput. In other words, we get the best of both worlds when a concurrent program is made to run on multiple cores.

Usually, support for concurrency is already provided at the lower levels by the operating system, and developers mostly program against the higher-level abstractions provided by programming languages. On top of this low-level support, there are different approaches to concurrency.
