Gentoopc Guru
Joined: 25 Dec 2017 Posts: 383
Posted: Sun Jan 12, 2025 8:31 am    Post subject: No need for schedulers?
|
|
Split from "CONFIG_SCHED_AUTOGROUP question". --Zucca
Guys, all of this does not work well. Scheduling is very complex: it happens both in the Linux kernel and in the CPU itself. In that case yet another scheduler is needed to coordinate the work of all the other schedulers. Maybe an intelligent scheduler could even be built on a cut-down GPT, since the processor has an NPU, but that is not all. If we had 6000 CUDA cores like a GPU has, or even 20000, there would be no need for schedulers at all. There would be no queues. A real-time kernel on the GPU would make life on the Gentoo distribution very interesting. These are unlimited possibilities which, unfortunately, we will not be given.
|
|
|
Hu Administrator
Joined: 06 Mar 2007 Posts: 23015
Posted: Sun Jan 12, 2025 3:37 pm
|
|
Yes, if we had unlimited resources so that nothing needed to wait, we would have no need to schedule anything. We do not have unlimited resources, and on most systems, processes need to wait their turn. The kernel's scheduler tries its best to do the most good with the resources available. Part of that is allocating as little time to the scheduler itself as possible, since time spent making a decision about what to run is time not spent actually running anything the user wants done. Are you really suggesting that we use a Generative Predictive Text model, which is infamous not just for its mistakes but for its extremely high resource requirements, to make decisions that need to be made accurately, cheaply, and quickly?
|
|
|
pingtoo Veteran
Joined: 10 Sep 2021 Posts: 1406 Location: Richmond Hill, Canada
Posted: Sun Jan 12, 2025 4:04 pm
|
|
Conceptually I agree with Gentoopc. It would be nice if the Linux kernel could utilise every available resource on the box (NPU/GPU etc.). Having the kernel offload tasks onto an NPU/GPU would be very nice.
However, I think current technology is limited in the sense that the time spent passing data between the CPU and the NPU/GPU would be too long, so offloading would not pay off performance-wise.
I also agree with Hu's point that using a generative predictive text model to compute the process schedule is far too expensive (resource- and time-wise), so it is not worth it.
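For illustration, a minimal CUDA timing sketch of that transfer cost (the array size and the trivial kernel are made up for the example; this is a sketch, not a benchmark), comparing the host-to-device copy time with the on-GPU compute time:

Code:
// Rough sketch: how long does just moving the data to the GPU take,
// compared with doing a trivial amount of work on it once it is there?
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

__global__ void scale(float *x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= 2.0f;               // trivial per-element work
}

int main() {
    const int n = 1 << 24;                 // ~16M floats (~64 MB), arbitrary
    std::vector<float> host(n, 1.0f);
    float *dev = nullptr;
    cudaMalloc((void **)&dev, n * sizeof(float));

    cudaEvent_t t0, t1, t2;
    cudaEventCreate(&t0); cudaEventCreate(&t1); cudaEventCreate(&t2);

    cudaEventRecord(t0);
    cudaMemcpy(dev, host.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaEventRecord(t1);                   // end of PCIe copy
    scale<<<(n + 255) / 256, 256>>>(dev, n);
    cudaEventRecord(t2);                   // end of kernel
    cudaEventSynchronize(t2);

    float copy_ms = 0.0f, kernel_ms = 0.0f;
    cudaEventElapsedTime(&copy_ms, t0, t1);
    cudaEventElapsedTime(&kernel_ms, t1, t2);
    printf("copy: %.3f ms, kernel: %.3f ms\n", copy_ms, kernel_ms);

    cudaFree(dev);
    return 0;
}

On typical consumer hardware the copy alone takes on the order of milliseconds for tens of megabytes over PCIe, while a scheduler decision is measured in microseconds, which is exactly the gap being pointed at here.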
|
|
|
Gentoopc Guru
Joined: 25 Dec 2017 Posts: 383
Posted: Mon Jan 13, 2025 3:05 am
|
|
Hu wrote: "... on most systems, processes need to wait their turn. [...] as little time to the scheduler itself as possible, since time spent making a decision about what to run is time not spent actually running anything the user wants done."
Guys, this should be changed, and there is a strong reason for it. Many processes simply cannot wait their turn. In a system that operates on critical objects, which is exactly the role the Linux kernel is given, processes cannot be killed or forced to wait, because other processes may need their results in order to complete. Now I will try to state the main point: we need to run two functions in parallel.
This is impossible on the CPU, because the code is executed line by line: first foo() is called, then foo1(). To really solve this problem we would need a new programming language and changes to the CPU itself, but nobody will do that, it is expensive, and you are right when you say that neither you nor we have that option. So we need to use what we have. You understand, I am not suggesting we invent something; I am suggesting we use what already exists. And of what is available today, the GPU is well suited to this problem. There is already a programming language for the GPU, so there is nothing to invent. You would be able to run functions in parallel, and therefore do things that were previously out of reach. For example, you could take a snapshot of an entire data structure at a single moment, whereas today the structure has to be traversed recursively or in a loop. The GPU offers a different approach. Everything can change, and the main thing is that nothing needs to be invented: it is all already there.
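For illustration only, a minimal CUDA sketch with foo and foo1 as placeholder kernels, showing how two functions can be queued on separate streams so the GPU is free to run them concurrently:

Code:
// Sketch: two placeholder kernels on separate CUDA streams. Whether they really
// overlap depends on the device and the amount of work; the point is only that
// the language for expressing this parallelism already exists.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void foo(int *out)  { out[0] = 1; }   // placeholder work
__global__ void foo1(int *out) { out[1] = 2; }   // placeholder work

int main() {
    int *out = nullptr;
    cudaMallocManaged((void **)&out, 2 * sizeof(int));  // unified memory for simplicity

    cudaStream_t s0, s1;
    cudaStreamCreate(&s0);
    cudaStreamCreate(&s1);

    foo<<<1, 1, 0, s0>>>(out);     // queued on stream 0
    foo1<<<1, 1, 0, s1>>>(out);    // queued on stream 1, independent of stream 0

    cudaDeviceSynchronize();       // wait for both streams to finish
    printf("%d %d\n", out[0], out[1]);

    cudaStreamDestroy(s0);
    cudaStreamDestroy(s1);
    cudaFree(out);
    return 0;
}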
Last edited by Gentoopc on Mon Jan 13, 2025 3:34 am; edited 2 times in total
|
|
|
Hu Administrator
Joined: 06 Mar 2007 Posts: 23015
Posted: Mon Jan 13, 2025 3:14 am
|
|
As I said in the part you did not quote, yes, it would be very convenient if everyone had such powerful systems that there was always a CPU available to run any new process that wants a timeslice, and sufficient free RAM that we never need to page out any process. Most people cannot afford the cost in money or power to maintain such a massively over-powerful system, and so get by on systems that use a scheduler that does the best it can with the resources available to it. Outside of certain safety-critical systems, requiring a process to wait its turn causes no real harm.
It's easy to find things that you wish would be different. Do you have a workable plan for how we make such a change?
|
|
|
Gentoopc Guru
Joined: 25 Dec 2017 Posts: 383
Posted: Mon Jan 13, 2025 3:41 am
|
|
Hu wrote: "Do you have a workable plan for how we make such a change?"
People need to understand. People don't want to change anything; they try to adapt to the present day what was relevant 30 years ago, but it does not work as well today as it did in the past. I just want to say that for the present time we need tools suited to the present time. Nobody even tries. They just come up with excuses that the old methods are better; they argue that an Intel Core 2 is better than a 2025 GPU with 6000 cores, and they do it well. There are many people, and many arguments, claiming the Core 2 is better.
|
|
|
Gentoopc Guru
Joined: 25 Dec 2017 Posts: 383
Posted: Mon Jan 13, 2025 3:58 am
|
|
Hu wrote: "Most people cannot afford the cost in money or power to maintain such a massively over-powerful system."
You are right. But a CPU consumes almost as much power as a GPU. Any video card with CUDA, even a used one, is affordable for almost anyone. That is why we need to move in this direction: so that it is available to everyone. NPUs are expensive right now; GPUs are everywhere and available on eBay at all sorts of prices. That is where you need to look, and that is where you need to move.
|
|
|
Hu Administrator
Joined: 06 Mar 2007 Posts: 23015
Posted: Mon Jan 13, 2025 3:00 pm
|
|
I asked for a workable plan. Your response could be described as a wish, but I don't see a plan for how you expect people to make this work. You just want them to wave a wand and it will magically be better. Yes, if we could run everything on a 4 kilo-core processing unit that can do all the things that our current 8-32 core CPUs can do, that would be wonderful. As many people have explained to you repeatedly, video card GPUs are not a drop-in replacement for a motherboard's CPU. Software must be adapted to run on them, and, again as people have explained to you repeatedly, video card GPUs are not good at some tasks that the average user expects a processing unit to handle well.
Suppose everyone agreed with you that CUDA is the future. Who is responsible for porting all the existing software to run on it? What is your plan for dealing with the tasks that GPUs do poorly, if at all?
|
|
|
NeddySeagoon Administrator
Joined: 05 Jul 2003 Posts: 54725 Location: 56N 3W
Posted: Mon Jan 13, 2025 3:18 pm
|
|
Gentoopc,
How does real time computing work then?
It still has a scheduler and things do get queued/forced to wait.
The trick is that answers are known before they are needed. That's good enough.
e.g. flight control computers in most airliners today, or display computers in the same environment.
When you have 16.6 ms to draw a new display frame (one frame at 60 Hz: 1000 ms / 60 ≈ 16.6 ms), you do it in 16.6 ms or less.
There are two approaches:
1. beef up the hardware
2. simplify what will be displayed.
_________________
Regards,
NeddySeagoon
Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
|
|
|
logrusx Advocate
Joined: 22 Feb 2018 Posts: 2622
Posted: Fri Jan 17, 2025 7:42 pm
|
|
Hu wrote: "Yes, if we had unlimited resources so that nothing needed to wait, we would have no need to schedule anything."
That's not correct. Some things still need to be executed before other things, so we're not going schedulerless anytime soon. Like never. Even if we had an unlimited amount of memory and CPU cores, we wouldn't be able to utilize them beyond a certain limit. Laws of nature.
pingtoo wrote: "Conceptually I agree with Gentoopc. It would be nice if the Linux kernel could utilise every available resource on the box (NPU/GPU etc.)."
There's no case where the kernel utilizes the GPU. It doesn't need to. The kernel doesn't do, or need, the sort of work a GPU/NPU is designed for.
pingtoo wrote: "Having the kernel offload tasks onto an NPU/GPU would be very nice."
Unless GPU development takes an unexpected turn and it becomes something other than a GPU, that's not going to happen.
Best Regards,
Georgi
|
|
|
Zucca Moderator
Joined: 14 Jun 2007 Posts: 3877 Location: Rasi, Finland
Posted: Fri Jan 17, 2025 10:26 pm
|
|
The only area where I could see the kernel possibly utilizing the GPU is networking tasks.
_________________
..: Zucca :..
My gentoo installs: init=/sbin/openrc-init -systemd -logind -elogind seatd
Quote: "I am NaN! I am a man!"
|
|
|
|
dmpogo Advocate
Joined: 02 Sep 2004 Posts: 3468 Location: Canada
Posted: Fri Jan 17, 2025 10:41 pm
|
|
pingtoo wrote: "Conceptually I agree with Gentoopc. It would be nice if the Linux kernel could utilise every available resource on the box (NPU/GPU etc.). Having the kernel offload tasks onto an NPU/GPU would be very nice."
Why, conceptually? I would think I would prefer the kernel to use as few resources as possible, leaving the rest of the resources for more productive things, no?
|
|
|
pingtoo Veteran
Joined: 10 Sep 2021 Posts: 1406 Location: Richmond Hill, Canada
Posted: Fri Jan 17, 2025 11:09 pm
|
|
dmpogo wrote: "pingtoo wrote: 'Conceptually I agree with Gentoopc. It would be nice if the Linux kernel could utilise every available resource on the box (NPU/GPU etc.). Having the kernel offload tasks onto an NPU/GPU would be very nice.' Why, conceptually? I would think I would prefer the kernel to use as few resources as possible, leaving the rest of the resources for more productive things, no?"
Because a GPU/NPU is just another calculation unit, even if it does specialized calculation. So if there are calculations in the kernel that could utilize a GPU/NPU, then why not?
Just imagining here: like in a sci-fi movie, a huge network (SkyNet) whose kernel needs to prioritize tasks, but the task count we are talking about is in the billions or even larger. Calculating that priority queue would benefit from a specialized calculation unit dedicated to the job.
Please don't turn this into a monolithic kernel vs. micro kernel debate. I am just expressing a concept; it does not mean anything real.
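Purely as an illustration of that concept, with everything invented for the example (the Task fields, the toy priority formula, the counts), a minimal CUDA sketch that recomputes a priority for a large batch of tasks in one parallel pass:

Code:
// Illustration only: recompute a made-up priority for many tasks in one parallel pass.
// The Task fields and the formula are invented for the example; nothing like this
// exists in the Linux kernel.
#include <cstdio>
#include <cuda_runtime.h>

struct Task {
    float wait_time;   // how long the task has been waiting (hypothetical)
    float weight;      // static importance (hypothetical)
    float priority;    // output
};

__global__ void recompute_priority(Task *tasks, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        tasks[i].priority = tasks[i].weight + 0.1f * tasks[i].wait_time;  // toy formula
}

int main() {
    const int n = 1 << 20;   // a million "tasks", arbitrary
    Task *tasks = nullptr;
    cudaMallocManaged((void **)&tasks, n * sizeof(Task));
    for (int i = 0; i < n; ++i)
        tasks[i] = Task{float(i % 100), float(i % 7), 0.0f};

    recompute_priority<<<(n + 255) / 256, 256>>>(tasks, n);
    cudaDeviceSynchronize();

    printf("priority[0] = %f\n", tasks[0].priority);
    cudaFree(tasks);
    return 0;
}

How the results would get back to a real scheduler fast enough is another matter; that is the transfer-cost objection raised earlier in the thread.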
|
|
|