Concurrency Vs Parallelism

Recently I decided to start gaining more knowledge about concurrency and parallelism in programming, and to go deeper as I progress in my study. I intend to share what I learn in these areas through a series of posts that I’ll publish each month, going deeper into the subject of asynchronous and parallel programming with each post. So if this topic interests you, make sure to check back for each new post.

Why Concurrent Programming Is Becoming More Important

The primary reason for this spark of interest is the path that microprocessor manufacturers such as Intel and AMD are taking. There was a time when Intel would shrink its CPU die size every two years; Intel called this the tick-tock model. But as the years go by, doing this has become more and more of a challenge. For example, the Cannon Lake 10 nm architecture was delayed multiple times, and instead Intel released a refresh of the previous architecture with the die size unchanged. It’s not only Intel; every microprocessor company is going to hit this problem sooner or later. This has led these companies to increase the number of cores in their processors.

For example, Intel released its first six-core Coffee Lake processor this year, and AMD released its 8-core Ryzen processors for the mass market. When it comes to servers, we have 28-core (56-thread) CPUs from Intel and 32-core (64-thread) CPUs from AMD. If that’s not enough, take a look at the 1000-core CPU built by a team at the University of California. This trend is going to continue as shrinking the die size becomes more difficult. That’s because gaining performance by increasing the frequency has its limitations, and any architectural change to a CPU proves to be very difficult. So what does this mean for us programmers? I think it means that concurrency, and especially parallel programming, will become more important in the future.

Concurrency Vs Parallelism

There’s a lot of confusion about the difference between the terms above, and we hear them a lot when reading about these subjects. It is important to define them up front so we know exactly what we’re talking about. I group the terms concurrency and asynchrony together, as they have almost the same meaning, and I also group the terms multi-threaded and parallel together. Based on what I’ve read, this is how the community defines them. Some people say concurrency is a broader term that encompasses both parallel and asynchronous programming, but they are not the majority.

Concurrent And Asynchronous

Concurrent and asynchronous programming is about how our program handles tasks. It can handle one task at a time (sequentially) or multiple tasks at the same time (concurrently). Notice that each task can be different. Also, no matter what the program does, a single entity can be responsible for doing all those tasks, although more can exist. Concurrency makes programs more usable, and it can be implemented on a single-processor machine; multiple processing units are not required.

For example, the operating system you’re currently using is concurrent, because you can open your browser, play music, scan for viruses and so on. Concurrency is not about doing things faster; it’s about using the system resources more efficiently. Most systems do this by starting multiple tasks at the same time, which works because these tasks don’t need our attention at the same time. In other words, the tasks are not processor bound but I/O and latency bound. One practical example is in ASP.NET Core applications: when we use an async method and await it, control is passed back to the caller and the thread is free to do other things until the result of the async method is ready to use.
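For illustration, here is a minimal sketch of what that can look like in an ASP.NET Core controller. The controller name and the endpoint URL are made up for the example; the point is only that the thread is not blocked while the I/O is in flight:

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

public class ForecastController : Controller
{
    // A single shared HttpClient instance for the outgoing calls.
    private static readonly HttpClient _client = new HttpClient();

    public async Task<IActionResult> Get()
    {
        // While this I/O-bound call is in flight, the request thread is
        // returned to the thread pool and can serve other requests.
        string json = await _client.GetStringAsync("https://example.com/api/forecast");
        return Ok(json);
    }
}
```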

Multithreaded And Parallel

Multi-threaded and parallel programming is about how our program handles each individual task. It can process the work serially, finishing each piece before moving on to the next. It can also do it in parallel, by splitting the task into chunks, assigning those chunks to different threads, and merging the end results. Parallel programming is more about doing things faster by distributing the workload across many cores, so we need a CPU with multiple cores. Most of the time those chunks of computation have the same structure, but that is not required. Remember: concurrency is about dealing with a lot of things at once, while parallelism is about doing a lot of things at once, as Rob Pike puts it nicely in this video.

Eric Lippert wrote a very good answer with a nice analogy about the difference between these two. I also like this answer by Jon Harrop.
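To make the idea of splitting work into chunks concrete, here is a small C# sketch (my own illustration, not part of the discussion above) using PLINQ, which handles the chunking, the assignment of chunks to threads, and the merging of the partial results for us:

```csharp
using System;
using System.Linq;

class ParallelSumDemo
{
    static void Main()
    {
        long[] numbers = Enumerable.Range(1, 10_000_000).Select(i => (long)i).ToArray();

        // Sequential: one thread walks the whole array.
        long sequentialSum = numbers.Sum();

        // Parallel: the runtime splits the array into chunks, sums each chunk
        // on a different core, and merges the partial results.
        long parallelSum = numbers.AsParallel().Sum();

        Console.WriteLine($"{sequentialSum} == {parallelSum}");
    }
}
```

On a multi-core machine the parallel version can use every core; on a single-core machine it only adds scheduling overhead.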

Different Mechanisms for Asynchronous Operation

It’s worth noting that there are three models for distributing asynchronous operations. In other words, we normally distribute chunks of data to be processed across the available workers. These workers can be different machines, CPU cores, or threads on a single core. This can affect how our application uses shared resources and how the end result is put together.

When Should You Use Asynchronous and Parallel Programming

It is important to know when we can benefit the most from using asynchronous and parallel programming. Use asynchronous programming when you have a long-running task that is not CPU intensive. One example is when you call a web service or do some kind of I/O operation; in those instances you’re simply waiting for the call to finish and have nothing else to do on that thread. Another example is in applications with a UI: if you call some service from the UI thread, you need to call it asynchronously, otherwise the UI becomes unresponsive. But there are situations in which you might want to avoid async, such as when simplicity is more important than efficiency, or when the task is too simple an operation to need asynchrony.
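As an illustration of the I/O-bound case, here is a small sketch in C# (the URLs are placeholders): two web service calls are started together and then awaited, so no thread sits blocked while the responses are in flight.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class AsyncIoDemo
{
    private static readonly HttpClient _client = new HttpClient();

    static async Task Main()
    {
        // Both calls are I/O-bound: start them together and await both,
        // instead of blocking a thread on each one in turn.
        Task<string> first = _client.GetStringAsync("https://example.com/service-a");
        Task<string> second = _client.GetStringAsync("https://example.com/service-b");

        string[] results = await Task.WhenAll(first, second);
        Console.WriteLine($"Got {results[0].Length} and {results[1].Length} characters.");
    }
}
```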

Use parallel programming for computationally intensive tasks. Note that if you use parallel programming for something that is not resource intensive, you can actually hurt performance, because the cost of managing multiple threads and running the task on multiple cores can be greater than the cost of the task itself. So parallelism is not a silver bullet: some problems are inherently suitable for parallelism (they are often called embarrassingly parallel), while others are not and should be run in a sequential manner. The main point is that the amount of performance you can gain from parallelism depends entirely on the type of problem.
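To illustrate that overhead, here is a short sketch (an illustration, not a benchmark) where the per-iteration work is so cheap that scheduling it across threads can cost more than the work itself:

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

class ParallelOverheadDemo
{
    static void Main()
    {
        var sw = Stopwatch.StartNew();

        // Trivial per-iteration work: the cost of distributing the iterations
        // across threads can easily exceed the work being done.
        int[] values = new int[1_000_000];
        Parallel.For(0, values.Length, i => values[i] = i + 1);

        Console.WriteLine($"Parallel increment took {sw.ElapsedMilliseconds} ms");
        // Time the plain sequential loop the same way and compare on your machine;
        // for work this cheap the sequential version is often faster.
    }
}
```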

Summary

This post was an introductory, high-level overview of asynchronous and parallel programming. In this post I discussed why concurrency and parallelism are going to be more important in the years to come. I also explained the difference between some of the terms in this sphere. We also saw in what situations asynchronous and parallel programming make sense.
