Do you still run x86 32-bit Linux in 2017/2018? |
Yes, main desktop, because too lazy to convert to 64-bit |
|
0% |
[ 0 ] |
Yes, main desktop, machine can't run 64-bit |
|
11% |
[ 5 ] |
Yes, running as PVR or other embedded solution |
|
4% |
[ 2 ] |
Yes, running as a server, router, or on a virtual machine |
|
13% |
[ 6 ] |
Yes, more than one of the above |
|
13% |
[ 6 ] |
No, but have 32-bit other architecture |
|
13% |
[ 6 ] |
No, no more 32-bit, get into the 21st century! |
|
41% |
[ 18 ] |
|
Total Votes : 43 |
|
Tony0945 Watchman
Joined: 25 Jul 2006 Posts: 5127 Location: Illinois, USA
|
Posted: Thu Dec 07, 2017 1:36 am Post subject: |
|
|
NeddySeagoon wrote: | I can also drop swap and install DOS and Win 3.1 just for old times sake
Swap is next to /boot at the front of the drives. |
My SSD Kaveri box has only a swapfile, and when I transition the Athlon II box to Ryzen on an NVMe drive, I plan to do the same, plus buy 16G of RAM, which should pretty much obviate the need for swap. Essentially, out of the old box I will keep:
1. the box itself
2. the DVD drive
3. peripherals - wireless mouse, keyboard (I've toyed with the idea of a wireless keyboard), monitor and speakers
4. the video card that I bought for future use with Zen (onboard video was fine for me)
5. MAYBE the Intel Ethernet card, but only if the new mobo's onboard NIC isn't supported.
6. the TV card (IF the new board has a legacy PCI slot, like one MSI B350 board does); otherwise I have a quad TV PCIe board handy.
7. the 2G data hard drive (maybe scrap the JFS)
8. the Gentoo software install, which will be rebuilt via emerge -e world to optimize for Ryzen.
New PSU, mobo & CPU, memory and NVMe drive. Oh, and I'll probably finally leave GRUB Legacy for rEFInd.
BTW, the K6-III doesn't run X11, but it does dual-boot NT 4.0; haven't done that for years. |
|
Zucca Moderator
Joined: 14 Jun 2007 Posts: 3701 Location: Rasi, Finland
|
Posted: Thu Dec 07, 2017 10:08 am Post subject: |
|
|
Since the 17.0 profiles are kinda imminent... I thought I'd ask: could one just disable the pie USE flag globally? Would that work? Since I'm planning to revive my old Pentium III laptop sooner or later, I'd like to disable pie there, since it seems to cause overhead.
Also, would that make it impossible to compile software for it on a pie-enabled 64-bit system (with multilib)? If it's impossible, then how about a 32-bit non-pie crossdev environment inside qemu? _________________ ..: Zucca :..
My gentoo installs: | init=/sbin/openrc-init
-systemd -logind -elogind seatd |
Quote: | I am NaN! I am a man! |
|
|
NeddySeagoon Administrator
Joined: 05 Jul 2003 Posts: 54578 Location: 56N 3W
|
Posted: Thu Dec 07, 2017 11:01 am Post subject: |
|
|
Zucca,
That works but there is an extra step as pie is forced on. _________________ Regards,
NeddySeagoon
Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail. |
|
Zucca Moderator
Joined: 14 Jun 2007 Posts: 3701 Location: Rasi, Finland
|
Posted: Thu Dec 07, 2017 2:03 pm Post subject: |
|
|
Ok. So basically, right after untarring the stage3, chrooting and editing make.conf, I'd run Code: | emerge -1 gcc && emerge -e --exclude gcc @world | ... Maybe libc and binutils too, before @world.
Is this roughly the correct way? _________________ ..: Zucca :..
My gentoo installs: | init=/sbin/openrc-init
-systemd -logind -elogind seatd |
Quote: | I am NaN! I am a man! |
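The sequence Zucca sketches could look something like this in practice (a sketch only: the make.conf line and the exact package atoms are my assumptions, not confirmed by the thread, and as NeddySeagoon notes the profile forcing pie on needs an extra override step):

```shell
# Hypothetical make.conf fragment: turn the flag off globally.
USE="${USE} -pie"

# Rebuild the toolchain first, then everything else against it.
emerge -1 sys-devel/gcc
emerge -1 sys-devel/binutils sys-libs/glibc
emerge -e --exclude "sys-devel/gcc sys-devel/binutils sys-libs/glibc" @world
```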
|
|
miket Guru
Joined: 28 Apr 2007 Posts: 497 Location: Gainesville, FL, USA
|
Posted: Fri Dec 08, 2017 3:15 pm Post subject: |
|
|
I voted for the last option because it was the most accurate: my last 32-bit machine (yes, on Gentoo) went out of service 7 or 8 years ago, so I have been exclusively 64-bit since then. Things were bumpier in the early days of 64-bit, but they are good now.
At the same time, I'm certainly not demanding that everyone "get into the 21st century!" as the poll puts it. You could instead read that as how I got into the 21st century. I'm glad that Gentoo still supports x86 since, yes indeed, many people still use it, and I could find myself wanting to use it again.
For me, the more fearsome thing is what Intel wants to push onto the world: dropping support for legacy BIOS in favor of its EFI dog-and-pony show. In that respect, I'm happily in the 16/32-bit world! |
|
eccerr0r Watchman
Joined: 01 Jul 2004 Posts: 9824 Location: almost Mile High in the USA
|
Posted: Fri Dec 08, 2017 5:42 pm Post subject: |
|
|
At least at this point it seems most people have gotten rid of their 32-bit x86 boxes, though there's still a good holdout. It's great that some people other than me still use their x86 boxes on a daily basis, so we don't have to worry too much about bit rot... I hope... _________________ Intel Core i7 2700K/Radeon R7 250/24GB DDR3/256GB SSD
What am I supposed watching? |
|
JWJones n00b
Joined: 11 Jan 2015 Posts: 19 Location: Oregon
|
Posted: Fri Dec 08, 2017 5:45 pm Post subject: |
|
|
I voted "Yes, main desktop, machine can't run 64-bit" because I am currently installing on an old Dell Dimension 3000: P4, 1GB RAM, 40GB HDD. This will be my first attempt at installing Gentoo, so I thought I'd use an old machine from our "boneyard" here at work. I come to Gentoo from the Slackware world, so it's not too much of a stretch for me, so far. I plan on starting off with a minimal window-manager desktop on this hardware, probably cwm, i3, or spectrwm, then later I'll build a nice Xfce desktop on higher-spec (64-bit) hardware.
I should probably note that I am merely a Linux/BSD hobbyist; I'm neither a developer nor an IT person. I do this only for my own use/experience. I'm really enjoying the Gentoo way so far! |
|
eccerr0r Watchman
Joined: 01 Jul 2004 Posts: 9824 Location: almost Mile High in the USA
|
Posted: Wed Jan 10, 2018 2:57 am Post subject: |
|
|
Now, due to Meltdown, 32-bit users are currently in limbo. Or has someone seen some information about getting PTI ported to x86-32? _________________ Intel Core i7 2700K/Radeon R7 250/24GB DDR3/256GB SSD
What am I supposed watching? |
|
miket Guru
Joined: 28 Apr 2007 Posts: 497 Location: Gainesville, FL, USA
|
Posted: Wed Jan 10, 2018 9:05 pm Post subject: |
|
|
eccerr0r wrote: | Now due to meltdown, 32-bit users are currently in limbo, or has someone seen some information about getting PTI ported to x86-32? |
At first blush, I thought that 32-bit machines would already be covered. The main KAISER patches were made in the new file arch/x86/kaiser.c and seem not to involve assembler source at all. That would suggest the fixes apply equally well to all x86-family processors, whether 64 or 32 bits.
So then I had to take a look. The .config key of interest is CONFIG_PAGE_TABLE_ISOLATION. I grabbed the 4.14.13 tarball from kernel.org and went into menuconfig. The sad news is that the selection "Security options -> Remove the kernel mapping in user mode" is enabled only for 64-bit kernels. If you're really living on the edge you might try to force it on anyway, but there are two dangers: it could well break something because of differences between the processors, and even if the programmers think they have everything covered, the fix is poorly tested on 32-bit machines.
A grep through the sources for CONFIG_PAGE_TABLE_ISOLATION shows 29 places where it is tested. I wasn't ready to walk through all of those.
Already GKH is telling people who are not on 4.4, 4.9, or 4.14 (or testing 4.15) that they are out of luck. I hope that's not the case for 32-bit x86. |
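The checks described above can be repeated against an unpacked kernel tree. The paths below assume a 4.14.x stable tarball (where the option lives under Security options); the location and the count will vary in other versions:

```shell
cd linux-4.14.13
# Where the Kconfig gates the option to 64-bit kernels:
grep -n "PAGE_TABLE_ISOLATION" security/Kconfig
# Roughly how many places test the option (the post counted 29 on 4.14.13):
grep -rn "CONFIG_PAGE_TABLE_ISOLATION" arch/ include/ kernel/ | wc -l
```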
|
eccerr0r Watchman
Joined: 01 Jul 2004 Posts: 9824 Location: almost Mile High in the USA
|
Posted: Wed Jan 10, 2018 9:15 pm Post subject: |
|
|
Yes, I tried setting CONFIG_PAGE_TABLE_ISOLATION to Y (actually, I removed the Kconfig requirement for X86_64).
It wouldn't even compile (4.14.12).
---
A curiosity for those who are still using 32-bit: my initial guess is that anyone using something older than a P4 or Pentium M might be safe, because reading the leaked data is not reliable there. It IS vulnerable, and a dedicated attacker will get your data, but it requires patience and 100% CPU utilization for long periods (the exploit has to check and check again, and may need a "best of 3" type of deal, or possibly more than 3, to reliably read information, and by then the data may have changed anyway).
A P4 or Pentium M, since they support SSE2 and thus come with cache-operation extensions, is screwed: it's much easier to determine whether the right cache line was hit. If you can run 64-bit, now's the time.
The non-speculative, in-order Atoms are good to go as far as I know (note: the newer Atoms do use speculation and can run things out of order, so they are affected). These older Atoms may be the only 32-bit x86 CPUs that are okay at this point. I think even Spectre won't work on them because of the lack of speculative execution... Unfortunately, these older Atoms are slow as molasses... _________________ Intel Core i7 2700K/Radeon R7 250/24GB DDR3/256GB SSD
What am I supposed watching? |
|
Marcih Apprentice
Joined: 19 Feb 2018 Posts: 213
|
Posted: Sat Feb 24, 2018 2:17 pm Post subject: |
|
|
I'll just necro this thread, if you don't mind.
I don't have many machines, but 66.6% of them are a) laptops, b) pre-2006 (Acer TravelMate 230 and 2414WLMi), therefore x86 it is. As you can imagine, compiling on single-core laptop processors is a pain (my recent emerge -e @world due to changed ${CFLAGS} took 36 hours). The remaining machine is my main one and is a recent x86_64 desktop, but I do have ABI_X86="32 64" in make.conf for compatibility. I don't understand the vehement push of everything towards the 64-bit variant of whatever architecture; probably because I don't understand how instruction sets work, or the benefits of 64-bit aside from having 64 1s and 0s as opposed to 32 of them! |
|
eccerr0r Watchman
Joined: 01 Jul 2004 Posts: 9824 Location: almost Mile High in the USA
|
Posted: Sat Feb 24, 2018 4:44 pm Post subject: |
|
|
Marcih wrote: | I don't understand the vehement push of everything towards the 64-bit variant of whatever architecture |
I think it's purely due to software developers hating fighting against the 4GB (3GB) memory limit...
Lazy software developers don't want to put their code on a diet. _________________ Intel Core i7 2700K/Radeon R7 250/24GB DDR3/256GB SSD
What am I supposed watching? |
|
Hu Administrator
Joined: 06 Mar 2007 Posts: 22657
|
Posted: Sat Feb 24, 2018 5:44 pm Post subject: |
|
|
There is also the technical appeal of reduced workload / testing. If you have the option to support the package on only one architecture (x86_64) instead of two (x86 and x86_64), that reduces your workload. If you want to support only one architecture, dropping x86 makes more sense than dropping x86_64. |
|
eccerr0r Watchman
Joined: 01 Jul 2004 Posts: 9824 Location: almost Mile High in the USA
|
Posted: Sat Feb 24, 2018 6:24 pm Post subject: |
|
|
ergo lazy software developers _________________ Intel Core i7 2700K/Radeon R7 250/24GB DDR3/256GB SSD
What am I supposed watching? |
|
miket Guru
Joined: 28 Apr 2007 Posts: 497 Location: Gainesville, FL, USA
|
Posted: Sun Feb 25, 2018 6:18 am Post subject: |
|
|
Don't forget another 64-bit advantage: the AMD64 CPUs have a lot more registers than the 32-bit ones, plus new addressing modes. One of those new modes addresses data relative to the program counter. This means AMD64 supports Position-Independent Executables with no overhead. The older 32-bit architecture lacks this mode and takes a performance hit when programs are built as position-independent executables.
PIE not only allows loading shared libraries at any location the OS wants to load them, but it is also necessary for Address Space Layout Randomization, a common and cheap security measure. |
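The codegen difference miket describes can be inspected directly, assuming gcc with 32-bit multilib support is installed (the file name is arbitrary):

```shell
echo 'int x; int get(void) { return x; }' > pie-demo.c
# 64-bit PIE: data is reached PC-relatively, e.g. "movl x(%rip), %eax".
gcc -O2 -fPIE -S -o - pie-demo.c
# 32-bit PIE: no PC-relative data mode, so gcc emits a get_pc_thunk call
# plus GOT arithmetic around each access - the overhead mentioned above.
gcc -m32 -O2 -fPIE -S -o - pie-demo.c
```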
|
Marcih Apprentice
Joined: 19 Feb 2018 Posts: 213
|
Posted: Sun Feb 25, 2018 9:38 am Post subject: |
|
|
eccerr0r wrote: | I think it's purely due to software developers hating fighting against the 4GB (3GB) memory limit... |
I don't know how kernel memory addressing works (yet) so enlighten me: is it that 32-bit systems can only have a max of 3GiB of memory mapped, and therefore a program can't ever access more than that (imagine your program being so crap that it consumes 3GiB of RAM)? Because the Linux x86 kernel can support highmem all the way up to 64GiB using some Intel technology (Physical Address Extension).
Hu wrote: | There is also the technical appeal of reduced workload / testing. If you have the option to support the package on only one architecture (x86_64) instead of two (x86 and x86_64), that reduces your workload. If you want to support only one architecture, dropping x86 makes more sense than dropping x86_64. |
Makes sense: x86 is "ancient", I can understand that (although I don't consider it a valid reason).
miket wrote: | Don't forget about another 64-bit advantage. The AMD64 CPU's have a lot more registers than the 32-bit ones plus new addressing modes. One of those new modes is for data relative to the program counter. This makes it so that AMD64 supports Position-Independent Executables with no overhead. The older 32-bit architecture lacks this mode and takes a performance hit when programs are built as position-independent executables.
PIE not only allows loading shared libraries at any location the OS wants to load them, but it is also necessary for Address Space Layout Randomization, a common and cheap security measure. |
Oh, I didn't know that. Would it then be wise to disable the pie USE flag on gcc, which has been introduced as the default in the new 17.0 profiles, and recompile world, sacrificing security for a bit more performance on my memory-and-processing-power-starved laptop?
eccerr0r wrote: | All the evil in the computer world is caused by lazy developers, no exceptions. |
The only correct answer. |
|
eccerr0r Watchman
Joined: 01 Jul 2004 Posts: 9824 Location: almost Mile High in the USA
|
Posted: Sun Feb 25, 2018 4:48 pm Post subject: |
|
|
Marcih wrote: | I don't know how kernel memory addressing works (yet ;) ) so enlighten me, is it that 32-bit systems can only have max 3GiB of memory mapped and therefore a program can't ever access more than that (imagine your program being so crap that it consumes up 3GiB of RAM :lol: )? Because the Linux x86 kernel can support himem all the way till 64GiB using some Intel technology (Physical Adress Extension). |
The amount of memory the CPU can access is different from the amount of memory a process/program can access at any one time.
To really appreciate the problem, you have to go back to the days of "expanded memory" and a lot of other architectures from way back when memory was really scarce.
I will use an even more obscure architecture: the TRS-80 Model 4. It uses a Z80 chip, which has a 64KiB address space. Of course 64KiB was not enough space for many programs, but 64KiB was the maximum you could access, because each memory access could only be formed from two 8-bit registers making a 16-bit address, and 2^16 = 64KiB.
However, the smart alecs at Tandy decided to put in 128KiB of RAM. How did they do this? Something called bank swapping. I believe they segmented the 128KiB into 32KiB chunks (minus memory-mapped IO space). The processor can choose any two of the four 32KiB chunks to fill out the 64KiB. Great, right? Except existing applications had no idea how to switch banks. And if you swapped banks, the data that got swapped out could not be accessed any further. Even software written to be aware of the banks needed to keep track of which bank a particular piece of data was stored in, and remember to switch banks before accessing it. Not to mention the application had to make sure it didn't swap out the bank it was currently running from, else it wouldn't know what it was running.
This is nothing insurmountable; a software writer can write programs that keep all of these problems in mind. But software developers are LAZY. They don't want to keep track of which bank every piece of data is stored in. They want a flat address space, able to access everything at the same time.
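The bank-swapping bookkeeping described above can be sketched in a few lines (a toy model: the window split and the API are invented for illustration, not the Model 4's actual memory map):

```python
BANK = 32 * 1024  # 32 KiB per bank, as in the post

class BankedMemory:
    """128 KiB of physical RAM seen through a 64 KiB address space."""
    def __init__(self, physical_banks=4):
        self.banks = [bytearray(BANK) for _ in range(physical_banks)]
        self.mapped = [0, 1]  # which physical bank backs each 32 KiB window

    def select(self, window, bank):
        # The extra step every bank-aware program must remember to do.
        self.mapped[window] = bank

    def read(self, addr):  # addr is a 16-bit address, 0..65535
        window, offset = divmod(addr, BANK)
        return self.banks[self.mapped[window]][offset]

    def write(self, addr, value):
        window, offset = divmod(addr, BANK)
        self.banks[self.mapped[window]][offset] = value

mem = BankedMemory()
mem.write(0x8000, 42)    # stored in bank 1, currently behind window 1
mem.select(1, 3)         # swap window 1 to bank 3...
print(mem.read(0x8000))  # -> 0: the old data is unreachable until...
mem.select(1, 1)         # ...the program remembers to swap back
print(mem.read(0x8000))  # -> 42
```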
PAE does something very similar, except it's done within the processor, and now we're talking about huge GiB-sized chunks, not 32KiB. When dealing with GiB chunks, there's reason to believe it's possible to rewrite your application so that it can cope. But developers just don't want to do that when a flat 64-bit address space is available.
The 3GB limit (I won't write 3GiB here because it's not exactly 3GiB, or even 3GB; it's more like 3.25GiB) is once again due to hardware limitations, mainly because memory-mapped IO and firmware need space in that 4GiB as well. _________________ Intel Core i7 2700K/Radeon R7 250/24GB DDR3/256GB SSD
What am I supposed watching? |
|
Hu Administrator
Joined: 06 Mar 2007 Posts: 22657
|
Posted: Sun Feb 25, 2018 5:05 pm Post subject: |
|
|
A 32-bit number can represent 2**32 unique values, by definition. If every value is one memory address, a 32-bit number can address 2**32 locations => 4GiB of address space. Exceeding that requires the bank swapping tricks that eccerr0r describes. Those tricks are used less today because you can get so much done in one bank. In practice, Linux/x86 programs have less than 4GiB of accessible address space because the kernel reserves a portion of the address space for itself. Even with a PAE aware kernel, the maximum concurrently addressable memory is 4GiB. PAE describes a standard way to do the bank swapping, so that the kernel can have more than 4GiB of RAM with useful data, although it cannot see all that data all at the same time with a simple 32-bit pointer.
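The arithmetic above, in a few lines (the 1 GiB kernel reservation shown is the common Linux/x86 default split, configurable at kernel build time):

```python
GiB = 1024 ** 3
address_space = 2 ** 32      # every value a 32-bit pointer can hold
print(address_space // GiB)  # -> 4 GiB of addressable locations

kernel_reserved = 1 * GiB    # typical 3G/1G user/kernel split
user_space = address_space - kernel_reserved
print(user_space // GiB)     # -> 3 GiB of address space left per process
```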
Also, while 3GiB-4GiB seems like a lot, remember that this is all the mapped memory for the program, not just data. Every shared library requires a portion of that address space, and due to layout and alignment issues, even tiny libraries require several pages. Every program stack (minimum one per thread, usually not more than one per thread, although some programs play weird tricks with multiple stacks) requires pages. Every data allocation requires pages, whether that is data read from a file, computed in memory for temporary use (rendering to a screen or sending to the network) or computed in memory for long term use (such as remembering which level of a game you are on, or which areas you have visited). Most non-media allocations are comparatively small, but nothing is free.
Perversely, increased memory sizes have worsened program quality. It's now somewhat common that a program's regular use will not run out of memory, so programmers are less inclined to spend the code complexity and testing time to ensure that a memory allocation failure can be handled in those exceptional cases where it happens. This burned me a while back when a 32-bit program started crashing because a supporting library became less memory efficient, causing the program to exhaust its address space and crash, even though it was not (as far as I could tell) actually leaking memory. It merely needed more than it could get. |
|
Marcih Apprentice
Joined: 19 Feb 2018 Posts: 213
|
Posted: Sun Feb 25, 2018 5:38 pm Post subject: |
|
|
eccerr0r and Hu wrote: | more than what's appropriate to quote |
Well I'm definitely saving both of your posts for future reference, thank you very much for that!
eccerr0r wrote: | They don't want to keep track of which bank every piece of data is stored at. |
Keeping track of this only applies to non-OOPL's (that apostrophe is correct, the use here is to indicate letters have been omitted, e.g. "it's" for "it is", "don't" for "do not" etc. ) though, doesn't it? With object-oriented programming you don't even think about something like memory management. I'm speaking from experience with Python, as limited as it may be, and I haven't done anything to do with memory.
Hu wrote: | Also, while 3GiB-4GiB seems like a lot, remember that this is all the mapped memory for the program, not just data. |
Correct me if I'm wrong, but I was under the impression that programs are allocated a certain amount of virtual memory, say 2GiB, but the virtual addresses are a) not the same as the physical addresses and b) need not be mapped to a physical address at all times. So is this 3-4GiB of mapped memory you talk about the virtual memory the kernel maps for the program? If so, most programs can't ever be expected to use up all the memory mapped; you'd run out of memory pretty quickly if all the threads had something written in all of their mapped memory simultaneously... (I'm reading a book about the Windows kernel in my free time, specifically NT 6.1 since the book came out in 2011; that's where I got my limited knowledge about this. And yes, it's doing my head in, thanks for asking.)
Also, sorry for hijacking the thread a bit; although we're still keeping it about 32-bit kernels so it's not that bad. |
|
Hu Administrator
Joined: 06 Mar 2007 Posts: 22657
|
Posted: Sun Feb 25, 2018 7:55 pm Post subject: |
|
|
Object-oriented versus not-OO is not really relevant here. All programs need memory to store their work. OO is a style for how to think about that memory, but you still need to store the objects. Languages like Python that take away pointers and handle object lifetime for you will free you from tracking which bank is active, but the language implementer is still obligated to handle that, else your objects won't be available when you try to use them.
Virtual memory addresses and physical memory addresses are logically distinct. In very simple environments, virtual may be mapped 1:1 to physical, but that is not done (except perhaps during early boot) in production systems. The pointer limit applies to memory addressed by the pointer. In most, but not all, cases this will be virtual memory.
Yes, it is common that portions, sometimes very large portions, of virtual memory are unmapped and do not correspond to any physical address. Each process is given a private view of virtual memory (this is part of its appeal relative to physical). Virtual memory can reference a physical page shared with other processes, which is why you can have hundreds of programs load glibc, yet not spend sizeof(glibc) * 100s to store it in physical memory. Typically, such sharing requires that the page not be mutable, else one program could corrupt the glibc of another. Special rules apply to permit lazy unsharing. |
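Two address spaces referencing one physical page can be demonstrated with an anonymous shared mapping (a sketch only: real library sharing is file-backed and read-only, but the mechanism of two virtual mappings onto the same physical page is the same; POSIX-only, since it uses fork):

```python
import mmap
import os

page = mmap.mmap(-1, 4096)  # anonymous mapping, MAP_SHARED by default on Unix
pid = os.fork()
if pid == 0:                # child: write through its own virtual address
    page[:5] = b"hello"
    os._exit(0)
os.waitpid(pid, 0)          # parent: the shared physical page was modified
print(bytes(page[:5]))      # -> b'hello'
```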
|
eccerr0r Watchman
Joined: 01 Jul 2004 Posts: 9824 Location: almost Mile High in the USA
|
Posted: Sun Feb 25, 2018 7:57 pm Post subject: |
|
|
Marcih wrote: | Keeping track of this only applies to non-OOPL's (that apostrophe is correct, the use here is to indicate letters have been omitted, e.g. "it's" for "it is", "don't" for "do not" etc. :P ) though, doesn't it? With object-oriented programming you don't even think about something like memory management. I'm speaking from experience with Python, as limited as it may be, and I haven't done anything to do with memory. |
Actually, it doesn't matter what language, object-oriented or not. Technically, if you made every access to any particular piece of memory go through a subroutine that automatically picks the right bank, you could get away with letting the macros/OS routines find which bank any particular piece of memory is in. But there are two main problems. First, this is extremely slow (hence the so-called PAE "slowdown", which some people even apply, though likely incorrectly, to object-oriented programming languages). Second, it does not work for instruction memory: you still have a 64K address-space limit, and there's no way for the Z80 to branch/JMP to a piece of memory that isn't currently mapped, no matter what you do, except by manually managing the memory. _________________ Intel Core i7 2700K/Radeon R7 250/24GB DDR3/256GB SSD
What am I supposed watching? |
|
Marcih Apprentice
Joined: 19 Feb 2018 Posts: 213
|
Posted: Mon Feb 26, 2018 5:24 pm Post subject: |
|
|
Hu wrote: | Object-oriented versus not-OO is not really relevant here. All programs need memory to store their work. OO is a style for how to think about that memory, but you still need to store the objects. Languages like Python that take away pointers and handle object lifetime for you will free you from tracking which bank is active, but the language implementer is still obligated to handle that, else your objects won't be available when you try to use them. |
eccerr0r wrote: | Actually, it doesn't matter what language, object oriented or not. |
Of course, I'm aware the actual program still needs to keep track of where stuff is stored in memory, OO or not. My point was in response to eccerr0r saying that devs are lazy and don't want to deal with the general memory management of their program. What I was trying to say is that this of course applies to all programs, ones written in OOPL's included, but those do it "behind the scenes" and don't make the programmer worry about such things while writing the actual code, so that they can focus more on the logic. Whether that's a good or a bad thing is up to the individual.
Sorry for not making that clear. |
|
gordonb3 Apprentice
Joined: 01 Jul 2015 Posts: 185
|
Posted: Tue Jun 12, 2018 7:37 pm Post subject: |
|
|
Dropping in my $0.02 on this old poll/topic.
Yes, I still run multiple x86 Gentoo machines. All VMs, actually. One is used as a crossdev machine for a 32-bit ARM architecture, which proved to be necessary because crossdev tends to copy the target directory structure from the host OS rather than the target OS. As a result, I get libraries installed in lib64 folders if I use a 64-bit host, and that obviously does not work on the 32-bit target platform. It doesn't solve all the issues with crossdev; e.g. Perl arch-dependent modules get installed under i686-linux-thread-multi even though they should be armv5tel-linux-thread-multi.
The other machines are more or less conveniently based on this crossdev machine, as I required special-purpose machines. I actually like a 32-bit OS better for this as well, because they tend to have stable memory management out of the box, whereas 64-bit systems tend to be very aggressive and cause a lot of swapping. |
|
Tony0945 Watchman
Joined: 25 Jul 2006 Posts: 5127 Location: Illinois, USA
|
Posted: Tue Jun 12, 2018 7:59 pm Post subject: |
|
|
gordonb3,
That's an interesting alternative to my 32-bit partition that builds for k6. I'll have to consider that. |
|
Aiken Apprentice
Joined: 22 Jan 2003 Posts: 239 Location: Toowoomba/Australia
|
Posted: Thu Jun 14, 2018 1:09 am Post subject: |
|
|
Have one x86 still running 32-bit user space: an old P4 HT that is waiting to be retired. None of the user space on that machine has ever hit the 32-bit memory addressing limit. When it had a 32-bit kernel, there wasn't even enough RAM to bother with PAE. The only reason it has a 64-bit kernel is that I got fed up with the OOM killer: between low mem, high mem, and swap, the OOM killer would trigger time after time with low mem full while high mem was mostly free and swap barely touched. Early on, that was the only reason a couple of machines ended up with 64-bit kernels.
Everything else has been changed from 32-bit to 64-bit kernel and user space. Doing 32-to-64-bit conversions of live headless boxes brings a bit of trepidation when typing reboot. _________________ Beware the grue. |
|