
Why is multithreading not used everywhere?

Software Engineering Asked on October 29, 2021

Almost all, if not all, modern CPUs have multiple cores, yet multithreading isn't really that common. Why have these cores then? To execute several sequential programs at the same time? Well, when calculations are complex (rendering, compiling), the program is certainly made to take advantage of multiple cores. But for other tasks, is a single core enough?
I understand that multi-threading is hard to implement and has drawbacks if the number of threads is less than expected. But leaving these cores idle seems so irrational.

9 Answers

An anecdotal data-point: 20+ years ago, when MPI (= Message Passing Interface) first became widely known, many people experimented with rewriting various computation-intense mathematical things for "parallel computation". (Yes, I know this is different from multi-threading at the OS level, but in a way its amplified aspects are easier to understand, since the benchmarking is in some ways easier.)

I vividly recall that one project reported that, after several months of work, their parallelized version ran half as fast as the non-parallelized version (rather than far worse!). :)

Yes, they were able to identify the bottlenecks, etc., for this, in terms of the algorithm involved. Again, in many ways such analysis is simpler than understanding what an OS is doing! :)

Answered by paul garrett on October 29, 2021

I want to emphasize a point you made that multithreading is hard to implement. Some problems are naturally broken into independent parts that are easily parallelizable ("embarrassingly parallel") so we may easily use multithreading and other parallel techniques such as vector instructions, distributed systems, etc. It may be as easy as using #pragma omp parallel for on a for loop. Maybe even the compiler will automatically vectorize your loop.

Many problems are not so easy and require great care in the interaction between different parts so that they operate in the correct order and don't accidentally break each other's functioning. This is often implemented with locks which usually block execution and may introduce deadlock, but more exotic lock-free algorithms exist. Even then, shared resource contention can lead to problems like resource starvation. See Wikipedia article on concurrency for a broad overview that applies very much to multithreading.

Multithreaded code is much harder to get right and much harder to debug. There may be race conditions that only show up once in a hundred runs of the program. It is much easier for a programmer to reason correctly about a program that (appears to) execute in order than about multiple threads executing with any number of different memory access orderings. Behind the scenes, a processor or compiler may reorder instructions in a way that is normally hidden from the programmer, but breaks once multithreading is introduced.

SEI CERT has a list of rules that should be followed when implementing concurrency. All of these must be taken into account by the programmer to have correct code, and then the programmer must also consider performance. Failing to do so can lead to severe security vulnerabilities.

Answered by qwr on October 29, 2021

Software falls into two categories: fast enough, and not fast enough. If it's fast enough, there is no point in making it run faster with multi-threading. Whether there are 15 unused cores doesn't matter if it's fast enough without using them.

If it's not fast enough, people will try to use more cores. (But be careful: if my single-purpose software runs in 8 days on a single core, and it takes me 3 days to make it use all 8 cores and run in one day, then letting the computer run for eight days is a lot cheaper than paying me for three days of work.) Some problems are "embarrassingly parallel". For example, solving the same equation with 1,000 different values for some parameter. Or compiling 1,000 source files.

Some problems are hard to improve by multi threading. They will come last.

Answered by gnasher729 on October 29, 2021

Vilx- is right, it is everywhere. But first let's separate cores and threads. Having more cores is just a technical detail that allows multi-threaded programs to run faster. Programmers do not "utilize cores", they apply multi-threading and they do not deal with cores at all. Cores are hidden from application programmers, they only deal with threads. And you can create a multi-threaded application on a single core processor just fine and it could be just as useful and effective as it would be on a multi-core processor.

There are basically two reasons to use multiple threads:

  • To get the work done faster.
  • To keep the UI (or some other task) responsive.

Your typical data entry application may have no use for multiple threads because there is just one task, and data cannot be processed before the user submits it. When they do submit, they will be interested in the results (if any), so there is nothing that can be done in parallel.

If the submit starts a lengthy search operation however, the user may in the meantime want to do other stuff or start another search, and check back for the results later. Or cancel the search. Then you want to use more than one thread.

You can rest assured that multi-threading will be applied if it makes sense to do so. You may not always be aware of it when you use an application, but you probably would notice it if multi-threading was not applied in a scenario that calls for it.

Answered by Martin Maat on October 29, 2021

Why multithreading isn't everywhere?

Frame challenge: but it is everywhere.

Let's see, let's name some platforms:

  • Desktops/laptops: one of the most common applications today is the browser. And to get a good performance modern browsers take every advantage they can get, including multithreading, GPUs, etc. At the very least every tab will get a separate thread. And many modern applications are also built in HTML with an embedded browser (for example Slack and Discord). Games, at least the bigger ones, also have embraced multithreading a long time ago.
  • Servers: This day and age most servers deal with HTTP requests; other technologies are pretty niche. And webservers scale up nicely to all the cores you have. Sure, every request will most likely run on a single thread, but multiple threads means you can process multiple requests in parallel. It's absolutely standard. The other common part - the database software - also scales well and any serious engine uses multiple threads.
  • Cell phones/tablets: browsers, again. But even without them there are still plenty of background tasks that repeatedly wake up and do a little something. Having multiple cores means that these background tasks affect your foreground app less and it seems more "snappy". Cell phone CPUs are pretty powerful already, but the low power usage requirement means that they're still slower compared to the desktops - and yet we use them perhaps even more extensively. Including for computation-heavy processes like games.

Long story short, if we went back to single-core CPUs, you'd notice it immediately. Modern operating systems have many processes working in parallel, and reducing task switching gives serious benefits. Even if some programs individually benefit little from multiple cores, the system as a whole almost always profits. That said, I suppose there is a limit on how many cores it makes sense to have for various systems. A cell phone with 64 cores will probably not be significantly faster than a cell phone with 32 cores.

Answered by Vilx- on October 29, 2021

https://en.m.wikipedia.org/wiki/Multithreading_(computer_architecture)

Disadvantages

Multiple threads can interfere with each other when sharing hardware resources such as caches or translation lookaside buffers (TLBs). As a result, execution times of a single thread are not improved and can be degraded, even when only one thread is executing, due to lower frequencies or additional pipeline stages that are necessary to accommodate thread-switching hardware.

Overall efficiency varies; Intel claims up to 30% improvement with its Hyper-Threading Technology,[1] while a synthetic program just performing a loop of non-optimized dependent floating-point operations actually gains a 100% speed improvement when run in parallel. On the other hand, hand-tuned assembly language programs using MMX or AltiVec extensions and performing data prefetches (as a good video encoder might) do not suffer from cache misses or idle computing resources. Such programs therefore do not benefit from hardware multithreading and can indeed see degraded performance due to contention for shared resources.

From the software standpoint, hardware support for multithreading is more visible to software, requiring more changes to both application programs and operating systems than multiprocessing. Hardware techniques used to support multithreading often parallel the software techniques used for computer multitasking. Thread scheduling is also a major problem in multithreading.

Answered by Robert Andrzejuk on October 29, 2021

I have experimented with multi-threading; it is not easy to gain an increase in performance, because the cost of setting up a new thread to carry out a task tends to be quite high, so high that it may not be worth paying in typical situations.

That said, for tasks which involve intensive computations, there may be gains to be had. I found that it was worthwhile to use a second thread to perform LZ77 compression when implementing RFC 1951.

I doubt there is any significant cost to having multiple cores - so there is nothing irrational about modern processors having the capability, even if it is typically under-utilised.

Answered by George Barwood on October 29, 2021

The proliferation of multi-core CPUs is predominantly driven by supply, not by demand.

You're right that many programmers don't bother decomposing their systems so that they can profit from multiple threads of execution. Therefore the customer sees a benefit mainly from OS multiprogramming rather than program multi-thread execution.

The reason CPU vendors create more and more cores is that the traditional way of increasing performance - increasing clock speed - has run into fundamental limitations of physics (both quantum effects and thermal problems). To keep producing chips that can be credibly sold as offering more compute power than last year's chips, they put more and more independent cores into them, trusting that OS multiprogramming and increasing use of multi-threading will catch up and yield actual rather than just nominal gains.

But make no mistake, both designing and exploiting multi-core CPUs is a lot harder than just running the same code faster. Both programmers and chip manufacturers would very much like to just keep increasing the speed of their chips; the trend towards parallelization is largely a matter of necessity, not preference.

Answered by Kilian Foth on October 29, 2021

Why multithreading isn't everywhere?

Because …

I understand that multi-threading is hard to implement and has drawbacks if number of threads is less than expected.

Answered by Jörg W Mittag on October 29, 2021
