Gentoopc Guru
Joined: 25 Dec 2017 Posts: 386
Posted: Sun Jan 12, 2025 8:31 am Post subject: No need for schedulers?
Split from "CONFIG_SCHED_AUTOGROUP question". --Zucca
Guys, none of this works well. Scheduling is very complex: it is carried out both in the Linux kernel and in the CPU itself. At that point yet another scheduler is needed, one that coordinates the work of all the other schedulers. Maybe an intelligent scheduler could even be built on a cut-down GPT, since the processor has an NPU. But that is not all. If we had 6000 CUDA cores, as on a GPU, or even 20000, there would be no need for schedulers at all. There would be no queues. A real-time kernel on the GPU would make life on the Gentoo distribution very interesting. These are unlimited possibilities which, unfortunately, we will not be given.
Hu Administrator
Joined: 06 Mar 2007 Posts: 23020
Posted: Sun Jan 12, 2025 3:37 pm Post subject:
Yes, if we had unlimited resources so that nothing needed to wait, we would have no need to schedule anything. We do not have unlimited resources, and on most systems, processes need to wait their turn. The kernel's scheduler tries its best to do the most good with the resources available. Part of that is allocating as little time to the scheduler itself as possible, since time spent making a decision about what to run is time not spent actually running anything the user wants done. Are you really suggesting that we use a Generative Predictive Text model, which is infamous not just for its mistakes but for its extremely high resource requirements, to make decisions that need to be made accurately, cheaply, and quickly?
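Hu's point that the pick-next decision itself costs time can be sketched with a toy round-robin routine (a minimal sketch for illustration only; the kernel's real scheduler uses far more elaborate policies and data structures):

```c
#include <stddef.h>

/* Index of the task that currently holds the CPU. */
static size_t current = 0;

/* Round-robin: each call hands the CPU to the next task in turn.
 * Even this trivial pick (an increment and a modulo) is time spent
 * NOT running user code, which is why real schedulers keep the
 * pick-next path as cheap as possible. */
int pick_next(const int runq[], size_t n)
{
    current = (current + 1) % n;
    return runq[current];
}
```

With a run queue of task IDs {10, 11, 12, 13}, successive calls cycle 11, 12, 13, 10, and so on; every cycle through the queue pays the pick cost once per task.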
pingtoo Veteran
Joined: 10 Sep 2021 Posts: 1408 Location: Richmond Hill, Canada
Posted: Sun Jan 12, 2025 4:04 pm Post subject:
Conceptually I agree with Gentoopc. It would be nice if the Linux kernel could utilise every available resource on the box (NPU/GPU etc.). Offloading kernel tasks onto the NPU/GPU would be very nice.
However, I think current technology is limited in the sense that the time spent passing data between the CPU and the NPU/GPU would be too long, so offloading would not benefit performance.
I also agree with Hu's point that using a generative prediction text model to calculate the process schedule is far too expensive (resource- and time-wise), so it is not worth it.
Gentoopc Guru
Joined: 25 Dec 2017 Posts: 386
Posted: Mon Jan 13, 2025 3:05 am Post subject:
Hu wrote: | ...and on most systems, processes need to wait their turn. [...] as little time to the scheduler itself as possible, since time spent making a decision about what to run is time not... |
Guys, this should be changed, because there is a strong reason for it. Many processes simply cannot wait their turn. In a system working on important tasks, which is exactly the role the Linux kernel was given, processes cannot be killed or made to wait, because their results may be needed by other processes before those can complete. Now I will try to say the main thing:
we need to run two functions in parallel.
This will be impossible on the CPU, because the code is executed line by line: first foo() is called, then foo1(). To solve this problem we would need a new programming language and changes to the CPU itself, but no one will do this; it is expensive, and you are right when you say that neither you nor we have such an opportunity. So we need to use what we have. Guys, you understand, I am not proposing to invent something; I am proposing to use what already exists. And of what is available today, the GPU is well suited to this problem. There is already a programming language for the GPU, so there is nothing to invent. You would be able to run functions in parallel, and therefore do things that were previously inaccessible. For example, you could take a snapshot of an entire structure at one moment, where today the structure has to be traversed recursively or in a loop. The GPU offers a different approach. Everything can change, and the main thing is that nothing needs to be invented; everything is already there.
Last edited by Gentoopc on Mon Jan 13, 2025 3:34 am; edited 2 times in total
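For context on the claim that two functions cannot run in parallel on a CPU: on any multi-core CPU this is already possible with standard threads. A minimal POSIX-threads sketch, where foo() and foo1() are hypothetical stand-ins for the functions named above:

```c
#include <pthread.h>

/* Hypothetical stand-ins for the foo()/foo1() mentioned above;
 * each simply records that it ran. */
static int a, b;
static void *foo(void *unused)  { a = 1; return NULL; }
static void *foo1(void *unused) { b = 2; return NULL; }

/* Start both functions before waiting on either, so on a multi-core
 * CPU they can execute at the same time.  Returns a + b, which is 3
 * once both threads have finished. */
int run_both(void)
{
    pthread_t t1, t2;
    a = b = 0;
    pthread_create(&t1, NULL, foo,  NULL);
    pthread_create(&t2, NULL, foo1, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return a + b;
}
```

No new language or CPU change is needed for this much; the hard part, as later posts explain, is what happens when the two functions touch the same data.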
Hu Administrator
Joined: 06 Mar 2007 Posts: 23020
Posted: Mon Jan 13, 2025 3:14 am Post subject:
As I said in the part you did not quote, yes, it would be very convenient if everyone had such powerful systems that there was always a CPU available to run any new process that wants a timeslice, and sufficient free RAM that we never need to page out any process. Most people cannot afford the cost in money or power to maintain such a massively over-powerful system, and so get by on systems that use a scheduler that does the best it can with the resources available to it. Outside of certain safety-critical systems, requiring a process to wait its turn causes no real harm.
It's easy to find things that you wish would be different. Do you have a workable plan for how we make such a change?
Gentoopc Guru
Joined: 25 Dec 2017 Posts: 386
Posted: Mon Jan 13, 2025 3:41 am Post subject:
Hu wrote: | Do you have a workable plan for how we make such a change? |
People need to understand. People don't want to change anything. They try to adapt what was relevant 30 years ago to the present day, but it doesn't work as well today as it did in the past. I just want to say that for the present time, we need tools suited to the present time. No one even tries; they just make excuses that the old methods are better. They argue that an Intel Core 2 is better than a 2025 GPU with 6000 cores, and they do it well. There are many people and many arguments for why the Core 2 is better.
Gentoopc Guru
Joined: 25 Dec 2017 Posts: 386
Posted: Mon Jan 13, 2025 3:58 am Post subject:
Hu wrote: | Most people cannot afford the cost in money or power to maintain such a massively over-powerful system
|
You are right. But a CPU consumes almost as much power as a GPU. Any video card with CUDA, even used, is affordable for anyone. That is why we need to move in this direction, so that it is available to everyone. NPUs are expensive now; GPUs are everywhere, available on eBay at all sorts of prices. That is where you need to look; that is where you need to move.
Hu Administrator
Joined: 06 Mar 2007 Posts: 23020
Posted: Mon Jan 13, 2025 3:00 pm Post subject:
I asked for a workable plan. Your response could be described as a wish, but I don't see a plan for how you expect people to make this work. You just want them to wave a wand and it will magically be better. Yes, if we could run everything on a 4 kilo-core processing unit that can do all the things that our current 8-32 core CPUs can do, that would be wonderful. As many people have explained to you repeatedly, video card GPUs are not a drop-in replacement for a motherboard's CPU. Software must be adapted to run on them, and, again as people have explained to you repeatedly, video card GPUs are not good at some tasks that the average user expects a processing unit to handle well.
Suppose everyone agreed with you that CUDA is the future. Who is responsible for porting all the existing software to run on it? What is your plan for dealing with the tasks that GPUs do poorly, if at all?
NeddySeagoon Administrator
Joined: 05 Jul 2003 Posts: 54737 Location: 56N 3W
Posted: Mon Jan 13, 2025 3:18 pm Post subject:
Gentoopc,
How does real-time computing work, then?
It still has a scheduler, and things still get queued/forced to wait.
The trick is that answers are known before they are needed. That's good enough.
e.g. flight control computers in most airliners today, or display computers in the same environment.
When you have 16.6ms to draw a new display frame, you do it in 16.6ms or less.
There are two approaches:
1. beef up the hardware
2. simplify what will be displayed. _________________ Regards,
NeddySeagoon
Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
logrusx Advocate
Joined: 22 Feb 2018 Posts: 2628
Posted: Fri Jan 17, 2025 7:42 pm Post subject:
Hu wrote: | Yes, if we had unlimited resources so that nothing needed to wait, we would have no need to schedule anything. |
That's not correct. Some things still need to be executed before other things, so we're not going schedulerless any time soon. Like never. Even if we had an unlimited amount of memory and CPU cores. And if we had, we wouldn't be able to utilize them beyond a certain limit. Laws of nature.
pingtoo wrote: | Conceptually I agree with Gentoopc. It would be nice that linux kernel can utilise every available resource on the box (NPU/GPU etc...). |
There's no case where the kernel utilizes the GPU. It doesn't need to. The kernel doesn't do, or need, the things a GPU/NPU is designed to do.
pingtoo wrote: | Kernel offload task on to NPU/GPU would be very nice.
|
Unless GPU development takes an unexpected turn and it becomes something other than a GPU, that's not going to happen.
Best Regards,
Georgi
Zucca Moderator
Joined: 14 Jun 2007 Posts: 3880 Location: Rasi, Finland
Posted: Fri Jan 17, 2025 10:26 pm Post subject:
The only area I could see where the kernel could possibly utilize the GPU is networking tasks. _________________ ..: Zucca :..
My gentoo installs: | init=/sbin/openrc-init
-systemd -logind -elogind seatd |
Quote: | I am NaN! I am a man! |
dmpogo Advocate
Joined: 02 Sep 2004 Posts: 3468 Location: Canada
Posted: Fri Jan 17, 2025 10:41 pm Post subject:
pingtoo wrote: | Conceptually I agree with Gentoopc. It would be nice that linux kernel can utilise every available resource on the box (NPU/GPU etc...). Kernel offload task on to NPU/GPU would be very nice.
|
Why "conceptually"? I would think I would prefer the kernel to use as few resources as possible, leaving the rest of the resources to more productive things, no?
pingtoo Veteran
Joined: 10 Sep 2021 Posts: 1408 Location: Richmond Hill, Canada
Posted: Fri Jan 17, 2025 11:09 pm Post subject:
dmpogo wrote: | pingtoo wrote: | Conceptually I agree with Gentoopc. It would be nice that linux kernel can utilise every available resource on the box (NPU/GPU etc...). Kernel offload task on to NPU/GPU would be very nice.
|
Why, conceptually ? I would think I would prefer kernel to use as least resources as possible, leaving the rest of the resources to more productive things, no ? |
Because a GPU/NPU is just another calculation unit, albeit one doing specialized calculations. So if there are calculations in the kernel that could utilize the GPU/NPU, then why not?
Just imagining here: like a sci-fi movie, a huge network (SkyNet) whose kernel needs to prioritize tasks, but where the task count is in the billions or even larger. Calculating that priority queue would benefit from a specialized calculation unit dedicated to the job.
Please don't turn this into a monolithic kernel vs. microkernel debate. I am just expressing a concept; it does not mean anything real.
Gentoopc Guru
Joined: 25 Dec 2017 Posts: 386
Posted: Sun Jan 19, 2025 3:34 pm Post subject:
Hu wrote: | As many people have explained to you repeatedly... | Guys, no offense, but you can't explain anything to me, because you yourselves have poor knowledge in this area. I've already said in previous threads that people have said for certain that the Linux kernel can be launched on a GPU, though it requires modification. As for the plan: first we need to honestly admit to ourselves and to others that our knowledge in this area is poor, and not look for reasons not to switch to the GPU. I suggest that the most important person on this site organize a fundraiser to write a real-time kernel for the GPU. Guys, this is necessary; there is currently a powerful lobbying effort to restrain development. Let's please not waste time while we still have it.
pingtoo Veteran
Joined: 10 Sep 2021 Posts: 1408 Location: Richmond Hill, Canada
Posted: Sun Jan 19, 2025 3:42 pm Post subject:
Gentoopc wrote: | I've already said in previous threads that people have said for sure that the Linux kernel can be launched on a GPU |
As far as I can tell from this thread, nobody ever said "the Linux kernel can be launched on a GPU", even with modification.
Can you find where this line came from and share it with us?
Gentoopc Guru
Joined: 25 Dec 2017 Posts: 386
Posted: Sun Jan 19, 2025 4:39 pm Post subject:
pingtoo wrote: |
As far as I can tell from this thread, nobody every say 'Linux kernel can be launched on GPU". even with modification.
Can you find where this line came from and share with us? |
I told you in other threads that the Linux kernel can be run on a GPU. I know this because the topic was raised on other resources, and people said it was possible, even without the participation of the CPU. Yes, serious modifications of the Linux kernel are needed, but it is possible. I suggest going the other way and writing a kernel for the GPU. Guys, you understand, everything will change soon; your Linus Torvalds already made that clear to you when he kicked out the developers, removing even the mention of them. This is very unfair. Let's look to the future: if we do not arrange a place for ourselves in it, we will not have a future.
Last edited by Gentoopc on Sun Jan 19, 2025 4:40 pm; edited 1 time in total
logrusx Advocate
Joined: 22 Feb 2018 Posts: 2628
Posted: Sun Jan 19, 2025 4:40 pm Post subject:
Zucca wrote: | Only one area I could see where kernel could possibly utilize GPU is networking tasks. |
A GPU is totally unfit to handle such tasks. In fact, a GPU is not a general programming unit; it's a calculation unit. The fact that it has firmware is there for the same reasons microcode exists. If you don't know how microcode came into existence, read about the microprocessor wars: it is a way to protect intellectual property. Of course you could argue that there are fixes delivered through microcode, but the original reason it came into existence was to protect Intel's IP when it lost the lawsuit against AMD.
Best Regards,
Georgi
logrusx Advocate
Joined: 22 Feb 2018 Posts: 2628
Posted: Sun Jan 19, 2025 4:42 pm Post subject:
DELETED: a post I shouldn't have posted.
Last edited by logrusx on Sun Jan 19, 2025 5:01 pm; edited 1 time in total
Gentoopc Guru
Joined: 25 Dec 2017 Posts: 386
Posted: Sun Jan 19, 2025 4:47 pm Post subject:
logrusx wrote: | Zucca wrote: | Only one area I could see where kernel could possibly utilize GPU is networking tasks. |
GPU is totally unfit to handle such tasks. In fact a GPU is not a programming unit. It's a calculation unit. The fact that it has firmware is for the same reasons there is microcode. If you don't know how microcode came into existence, read about the Microprocessor wars. That is a way to protect intellectual property. Of course you could argue there are fixes delivered through microcode, but the original reason it came into existence was to protect Intel's IP when it lost the lawsuit against AMD.
| You can tell these tales to others. Please don't tell them to me. I know that it is possible. And it will come true; it's just sad that we won't have time, because of people like you.
pingtoo Veteran
Joined: 10 Sep 2021 Posts: 1408 Location: Richmond Hill, Canada
Posted: Sun Jan 19, 2025 5:04 pm Post subject:
Gentoopc wrote: | pingtoo wrote: |
As far as I can tell from this thread, nobody every say 'Linux kernel can be launched on GPU". even with modification.
Can you find where this line came from and share with us? |
I told you in other threads that the Linux kernel can be run on a GPU. I know this because this topic was raised on other resources. and people said that it was possible. it is possible without the participation of the CPU. yes, serious modifications of the Linux kernel are needed, but it is possible. I suggest going the other way, and writing the kernel for the GPU. guys, you understand, everything will change soon, your Linux Torvalds already made it clear to you when he kicked out the developers by removing even mention of them. this is very unfair. let's look to the future, if we do not arrange a place for ourselves in it, then we will not have a future. |
You are making a claim that is not substantiated. People say many things, but that does not make them true.
Everything in this world is possible; the question is whether it is worth doing. Currently, making the Linux kernel run on a GPU/NPU is just not worth it in the near future. Offloading some kernel tasks onto the GPU/NPU is maybe possible, but it needs deep analysis.
[joke]
People told me they had seen Linux running on a toaster, and that the toaster could produce microchips that run faster than today's CPUs.
Disclaimer: the kernel and the toaster were substantially modified.
[/joke]
NeddySeagoon Administrator
Joined: 05 Jul 2003 Posts: 54737 Location: 56N 3W
Posted: Sun Jan 19, 2025 5:15 pm Post subject:
Gentoopc wrote: | Guys, no offense, but you can't explain anything to me ... |
That's why we keep going over the same ground. It reminds me of this clip from Oh, Mr Porter! Listen to the postman.
I did think about locking the topic, but it's your time. _________________ Regards,
NeddySeagoon
Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
logrusx Advocate
Joined: 22 Feb 2018 Posts: 2628
Posted: Sun Jan 19, 2025 5:17 pm Post subject:
pingtoo wrote: |
Everything in this world are possible. The question is it worthy to do it. Currently making Linux kernel running on GPU/NPU just not worthy in near future. |
It is impossible. A GPU is not a fully functional processing unit; it's actually more like a coprocessor. It has very limited programming capabilities, related only to calculations. Most of its logic is hard-wired and can't be modified. (That's why this idea is gibberish by nature.)
pingtoo wrote: |
[joke]
People told me, they seen Linux running on a toaster and the toaster can produce micro chips that will be running faster than today's CPU.
disclaimer, the kernel and toaster was substantially modified. :P
[/joke] |
It was NetBSD, if I remember correctly. The toaster was a well-disguised computer.
Best Regards,
Georgi
stefan11111 l33t
Joined: 29 Jan 2023 Posts: 949 Location: Romania
Posted: Sun Jan 19, 2025 6:00 pm Post subject:
Gentoopc wrote: |
Now I will try to say the main thing
we need to run two functions in parallel
this will be impossible on the CPU because the code is executed line by line. first foo() will be called; then foo1(); to solve this problem we need a new programming language and changes to the CPU components, but no one will do this, it is expensive and you are right when you said that you and we do not have such an opportunity. |
I made a post about this when you hit report instead of quote, but I will make another one.
What you seem to be implying is that when running on a GPU, everything can magically become parallel.
So you would want to have a language for the GPU where every instruction is by default executed in parallel.
Think of the following:
Code:
int x = 0;
x++;
x = 2;
What is the value of x after running everything in parallel?
if x++; is run first, then x = 2; x will be 2.
if x = 2; is run first, then x++; x will be 3.
This is ignoring the fact that x++ is not atomic.
Here's another problem, a classical one:
Code:
int x = 0;
int i;
int j;

for (i = 1; i <= 1000; i++) {
    x++;
}

for (j = 1; j <= 1000; j++) {
    x++;
}
This also runs into problems if you run these loops in parallel.
When running on a single thread, the value of x should be 2000 after the for loops are finished.
What if, on a particular iteration, you read x on the first thread, read x on the second thread, increment x on the first thread, then increment x on the second thread?
Then, after these two x++; are run, the value of x has only increased once, by 1.
As you see, writing parallel programs is not as simple as "take a serial program and give each instruction a thread".
Now, with the above example, you could use locks, such that only one thread modifies x at any time.
If you do that, on a CPU, you will see that it takes ages for the program to complete.
This is because the locking/unlocking introduces way more overhead than is saved by the extra thread.
Now, imagine doing this on a GPU, which has slower individual cores than a CPU.
The program would run even slower.
What I'm trying to say is that throwing threads at a program does not necessarily make it faster and has to be done with care to avoid races.
Would you get any advantage running a kernel on a GPU instead of a CPU?
Most likely not, as very few things are parallelizable.
Would you get any advantage running an irc client, a web browser, etc, on a GPU?
Again, most likely not.
But you would for sure have the disadvantage of weaker cores than a CPU.
Now, multiplying 1000000 matrices with 1000000 other matrices in parallel?
Then yes, you should do that on a GPU. The overhead of the weaker cores is massively outweighed by the number of cores you can use at once.
This is what a GPU is designed to do.
So throwing more threads at all programs and writing a language where every instruction is run in parallel would definitely not make all programs faster, and would break most of them.
Some programs are easily parallelizable, but most aren't. _________________ My overlay: https://github.com/stefan11111/stefan_overlay
INSTALL_MASK="/etc/systemd /lib/systemd /usr/lib/systemd /usr/lib/modules-load.d *udev* /usr/lib/tmpfiles.d *tmpfiles* /var/lib/dbus /usr/bin/gdbus /lib/udev"
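stefan11111's lost-update scenario can be made concrete with POSIX threads. A minimal sketch of the locked variant he describes: each thread runs a 1000-iteration x++ loop, and with the mutex held around each increment the total is always 2000. Deleting the lock/unlock pair reintroduces the race, and the final count can then land anywhere below 2000 depending on timing:

```c
#include <pthread.h>

enum { ITERS = 1000 };

static int x;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Each thread increments x ITERS times under the lock, so every
 * read-modify-write is atomic and no update can be lost. */
static void *worker(void *unused)
{
    for (int i = 0; i < ITERS; i++) {
        pthread_mutex_lock(&lock);
        x++;            /* without the lock, two threads can read the
                           same value of x and one ++ gets lost */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

/* Run both 1000-iteration loops in parallel; returns the final x. */
int run_counters(void)
{
    pthread_t t1, t2;
    x = 0;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return x;
}
```

And, as the post notes, the lock/unlock overhead on this hot path is exactly why "give every instruction a thread" makes programs slower, not faster.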
Zucca Moderator
Joined: 14 Jun 2007 Posts: 3880 Location: Rasi, Finland
Posted: Sun Jan 19, 2025 7:43 pm Post subject:
logrusx wrote: | Zucca wrote: | Only one area I could see where kernel could possibly utilize GPU is networking tasks. |
GPU is totally unfit to handle such tasks. | Well, I have to say I'm no expert in this field at all, but have a look at this.
I don't really know how much is possible to do with a GPU regarding network packets, but at least something. _________________ ..: Zucca :..
My gentoo installs: | init=/sbin/openrc-init
-systemd -logind -elogind seatd |
Quote: | I am NaN! I am a man! |