Goverp Advocate
Joined: 07 Mar 2007 Posts: 2179
|
Posted: Thu May 30, 2019 7:54 am Post subject: |
|
|
Chiitoo wrote: | Yeah, with 6 GiBs of RAM, and 8 threads, I'd imagine it's very likely that swapping will occur, which may indeed slow everything down. A lot.
... |
FWIW, I've just (inadvertently) built qtwebengine without the jumbo build on my 4-processor Phenom box (running at 1.6 GHz) with 6 GB memory. It took about 11 hours with -j4, and according to htop it was using about half of real memory, no swap. I'll reinstate the jumbo build; with it set to (AFAIR) 25, qtwebengine builds in about 3 hours, maybe less (that's the average from qlop, and that average includes at least one 10-hour build from before the jumbo build). You definitely want to keep the jumbo size below the point that causes swapping. _________________ Greybeard |
<3 Veteran
Joined: 21 Oct 2004 Posts: 1083
|
Posted: Tue Jun 04, 2019 2:29 am Post subject: |
|
|
Goverp wrote: | Chiitoo wrote: | Yeah, with 6 GiBs of RAM, and 8 threads, I'd imagine it's very likely that swapping will occur, which may indeed slow everything down. A lot.
... |
FWIW, I've just (inadvertently) built qtwebengine without the jumbo build on my 4-processor Phenom box (running at 1.6 GHz) with 6 GB memory. It took about 11 hours with -j4, and according to htop it was using about half of real memory, no swap. I'll reinstate the jumbo build; with it set to (AFAIR) 25, qtwebengine builds in about 3 hours, maybe less (that's the average from qlop, and that average includes at least one 10-hour build from before the jumbo build). You definitely want to keep the jumbo size below the point that causes swapping. |
There was a version bump of qtwebengine the other day, and as in Goverp's case it never hit swap, so that was not the issue, but I want to update you all on this. I enabled USE="jumbo-build" and my compile times were cut by more than half. I was also monitoring the CPU thermals, and all seemed fine in that respect considering it had been compiling for hours; there was no indication of CPU throttling at all.
Code: | #genlop -t qtwebengine
* dev-qt/qtwebengine
Sat Aug 11 06:03:09 2018 >>> dev-qt/qtwebengine-5.9.6-r1
merge time: 5 hours, 29 minutes and 26 seconds.
Fri Aug 17 04:07:20 2018 >>> dev-qt/qtwebengine-5.9.6-r1
merge time: 5 hours, 34 minutes and 18 seconds.
Fri Oct 19 01:14:26 2018 >>> dev-qt/qtwebengine-5.11.1
merge time: 7 hours, 2 minutes and 45 seconds.
Mon Dec 10 08:01:36 2018 >>> dev-qt/qtwebengine-5.11.1
merge time: 6 hours, 53 minutes and 35 seconds.
Tue Jan 8 12:01:05 2019 >>> dev-qt/qtwebengine-5.11.1
merge time: 7 hours and 26 seconds.
Sat Jan 12 11:32:48 2019 >>> dev-qt/qtwebengine-5.11.3
merge time: 6 hours, 58 minutes and 19 seconds.
Fri Feb 8 10:20:57 2019 >>> dev-qt/qtwebengine-5.11.3
merge time: 8 hours, 18 minutes and 2 seconds.
Tue May 28 14:28:16 2019 >>> dev-qt/qtwebengine-5.12.3
merge time: 18 hours, 7 minutes and 18 seconds.
Mon Jun 3 03:48:22 2019 >>> dev-qt/qtwebengine-5.12.3
merge time: 8 hours and 52 seconds. |
8 hours is still long, but it's manageable. Can someone please tell me why USE="jumbo-build" was disabled before? Is there a negative aspect to using it? Removing something that cuts a massive piece of software's compile time in half seems horrible, so obviously it must be too good to be true and have some drawback that I am missing. |
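For anyone else who wants to try it, re-enabling the flag for just this package is a one-liner (the file name under /etc/portage/package.use is arbitrary):
Code: |
# /etc/portage/package.use/qtwebengine
dev-qt/qtwebengine jumbo-build
|
Then re-emerge qtwebengine and compare the times with genlop -t as above.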
Hu Administrator
Joined: 06 Mar 2007 Posts: 22648
|
Posted: Tue Jun 04, 2019 3:42 am Post subject: |
|
|
Jumbo build concatenates multiple source files together, then compiles the result. This improves total compilation time, but drives up memory requirements for the compiler. If you picked a parallelism that is right at the edge of usable on your hardware with a normal compile, and then that parallelism is used for a jumbo build with its higher memory requirements, then you will begin using swap, and may run out of memory entirely. |
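A rough sketch of that trade-off, with throwaway file names (real jumbo builds generate wrapper files that #include the sources rather than literally concatenating them, but the memory effect is the same):
Code: |
# conventional build: one modest compiler process per source file
g++ -c a.cpp -o a.o
g++ -c b.cpp -o b.o
g++ -c c.cpp -o c.o

# jumbo-style build: merge several sources into one translation unit and
# compile it once -- fewer compiler invocations overall, but this single
# g++ process needs far more RAM, and that is multiplied by -jN
cat a.cpp b.cpp c.cpp > jumbo.cpp
g++ -c jumbo.cpp -o jumbo.o
|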
Chiitoo Administrator
Joined: 28 Feb 2010 Posts: 2728 Location: Here and Away Again
|
Posted: Tue Jun 04, 2019 11:28 am Post subject: |
|
|
Merged in the topic Time required to emerge package (1094558), with its 13 replies, starting from here and stopping here.
It never did come to light if USE="jumbo-build" was the culprit for the start of the discussion there, as the default state was not yet changed in 5.12.1, but it seemed relevant enough to me to fit it in here. It also had my answer to the 'why', which I kind of thought I had posted here... but obviously didn't. _________________ Kindest of regardses. |
Goverp Advocate
Joined: 07 Mar 2007 Posts: 2179
|
Posted: Thu Jun 06, 2019 8:50 pm Post subject: |
|
|
An observation that might or might not be relevant or useful:
I fell foul of jumbo-build causing swap, but not directly. jumbo-build was the last straw in a storm of memory eaters: -j4 -l4, --jobs 4, and using tmpfs for the Portage work files. tmpfs in combination with --jobs is a particularly tricky way to load the gun and wait for it to point at your feet. _________________ Greybeard |
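Spelled out, the combination looked roughly like this (sizes illustrative, not a recommendation):
Code: |
# /etc/portage/make.conf
MAKEOPTS="-j4 -l4"               # up to 4 compile jobs per package
EMERGE_DEFAULT_OPTS="--jobs=4"   # up to 4 packages building at once

# /etc/fstab -- Portage work files on tmpfs, so build trees live in RAM too
tmpfs   /var/tmp/portage   tmpfs   size=4G,noatime   0 0
|
Each of those is harmless on its own; together they can all demand memory at the same moment.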
fturco Veteran
Joined: 08 Dec 2010 Posts: 1181 Location: Italy
|
Posted: Sat Jun 08, 2019 3:12 pm Post subject: |
|
|
According to qlop on my desktop computer I emerged qtwebengine 55 times since late 2016... LOL.
The last emerge took only 5 hours and 35 minutes.
I have an Intel Core 2 Duo CPU and 8 GiB of RAM.
The jumbo-build USE flag is enabled. |
dmpogo Advocate
Joined: 02 Sep 2004 Posts: 3425 Location: Canada
|
Posted: Sat Jun 08, 2019 9:57 pm Post subject: |
|
|
fturco wrote: | According to qlop on my desktop computer I emerged qtwebengine 55 times since late 2016... LOL.
The last emerge took only 5 hours and 35 minutes.
I have an Intel Core 2 Duo CPU and 8 GiB of RAM.
The jumbo-build USE flag is enabled. |
I have to stick to bundled icu and ffmpeg to avoid rebuilding it all the time |
Spirch n00b
Joined: 08 Aug 2002 Posts: 48
|
Posted: Sun Aug 18, 2019 1:16 am Post subject: |
|
|
I had to Google to find out why that one was taking so much time.
I guess having a brand-new computer (with the 3900X) and 32 GB of RAM pays off.
-j24 -l20
Build time was 47 minutes, based on what I'm seeing.
(BTW, I'm back on Gentoo after nearly 6-7 years away.) |
skellr l33t
Joined: 18 Jun 2005 Posts: 980 Location: The Village, Portmeirion
|
Posted: Sun Aug 18, 2019 1:33 am Post subject: |
|
|
Spirch wrote: | I had to Google to find out why that one was taking so much time.
I guess having a brand-new computer (with the 3900X) and 32 GB of RAM pays off.
-j24 -l20
Build time was 47 minutes, based on what I'm seeing.
(BTW, I'm back on Gentoo after nearly 6-7 years away.) |
The OP has a Core2.... |
OldTurk n00b
Joined: 14 Oct 2019 Posts: 1
|
Posted: Mon Oct 14, 2019 9:44 pm Post subject: |
|
|
Hi,
I also had this problem. The results below are from a VM:
Code: | Fri Jun 16 20:40:03 2017 >>> dev-qt/qtwebengine-5.6.2
merge time: 2 hours, 18 minutes and 29 seconds.
Wed Oct 11 19:28:29 2017 >>> dev-qt/qtwebengine-5.7.1-r2
merge time: 2 hours, 46 minutes and 42 seconds.
Sat Nov 11 17:54:15 2017 >>> dev-qt/qtwebengine-5.7.1-r2
merge time: 2 hours, 43 minutes and 29 seconds.
Tue Feb 20 18:50:28 2018 >>> dev-qt/qtwebengine-5.7.1-r2
merge time: 3 hours, 12 minutes and 34 seconds.
Sat Jun 2 07:13:06 2018 >>> dev-qt/qtwebengine-5.9.4
merge time: 5 hours, 53 minutes and 18 seconds.
Wed Jul 18 18:53:44 2018 >>> dev-qt/qtwebengine-5.9.6-r1
merge time: 6 hours, 11 minutes and 34 seconds.
Tue Nov 20 21:44:02 2018 >>> dev-qt/qtwebengine-5.11.1
merge time: 6 hours, 3 minutes and 18 seconds.
Wed Dec 12 21:16:59 2018 >>> dev-qt/qtwebengine-5.11.1
merge time: 7 hours, 3 minutes and 25 seconds.
Sun Feb 10 03:11:57 2019 >>> dev-qt/qtwebengine-5.11.3
merge time: 8 hours, 47 minutes and 13 seconds.
Mon Oct 14 07:16:34 2019 >>> dev-qt/qtwebengine-5.12.3
merge time: 11 hours, 13 minutes and 46 seconds.
Mon Oct 14 21:28:46 2019 >>> dev-qt/qtwebengine-5.12.3
merge time: 5 hours, 8 minutes and 21 seconds. |
This VM had 2 GB allocated (and 2 cores). While it took a lot of time, until 5.12.3 I could successfully emerge qtwebengine. The first time (in June or July) I tried to emerge that last version, it failed. In theory it was making progress, but not really: it caused incessant thrashing, basically swapping non-stop, so I killed it. Yesterday I increased the allocated memory to 4 GB (although this 32-bit installation only saw 3 GB) and this time it was successful. But it took, as can be seen above, more than 11 hours. Then I found this thread and enabled the jumbo-build flag, setting it to 25. The compile time was then approximately 5 hours, basically halving the previous compilation time. Just an FYI, and thanks to all who contributed. |
Delicates n00b
Joined: 19 Jul 2005 Posts: 6
|
Posted: Wed Feb 26, 2020 9:10 am Post subject: |
|
|
Confirming that the excessive build time started with the qtwebengine-5.12 versions, due to jumbo-build no longer being enabled by default.
I'm building the current version on a hyper-threaded hex-core system with 48 GiB of RAM, inside a tmpfs ramdisk.
Interestingly, the number of parallel make jobs doesn't really seem to affect the build time.
Without jumbo-build USE flag on -j3:
Code: |
Tue Feb 25 00:26:43 2020 >>> dev-qt/qtwebengine-5.14.1
merge time: 21 hours, 3 minutes and 50 seconds.
|
With jumbo-build USE flag on -j3:
Code: |
Tue Feb 25 17:07:25 2020 >>> dev-qt/qtwebengine-5.14.1
merge time: 9 hours, 28 minutes and 36 seconds.
|
With jumbo-build USE flag on -j12:
Code: |
Wed Feb 26 04:03:55 2020 >>> dev-qt/qtwebengine-5.14.1
merge time: 9 hours, 36 minutes and 17 seconds.
|
JustAnother Apprentice
Joined: 23 Sep 2016 Posts: 191
|
Posted: Wed Mar 25, 2020 3:22 am Post subject: Re: ><)))°€ |
|
|
urcindalo wrote: | asturm wrote: | urcindalo wrote: | Maybe that's the reason. My fast box has 8 GB RAM. I wonder if 8 GB is the bare minimum for qtwebengine to compile "normally" (as other packages do). |
No, it builds fine on my boxes with only 4 GB. But when I do that, I let it build overnight or while doing nothing else on the system. |
This is what I do whenever I see qtwebengine is to be compiled:
1) I reboot the computer
2) I ssh to it from another box
3) I run a screen session to upgrade
4) I quit and, every now and then, I ssh again to look at the progress.
I noticed that, if I don't proceed this way, qtwebengine even fails to compile (emerging is interrupted).
The fact that you can do it overnight with just 4 GB RAM makes me wonder what the issue is in my case. |
I just considered the same thing today. Shut down everything and sneak in there with ssh.
I just threw retext, otter, and falkon under the bus just to get my hands on qtwebengine and give it the ozone. Now I've got my sights on qtwebkit.
The trouble with playing footsie with the USE flags is that usually some other time-wasting situation ensues.
I think we need to invent a new international unit of noxious compilation characteristics. Here is my candidate: the deciqtwebengine. It is a logarithmic unit, kind of like the dB, and as in the case where the bel is too large to be useful, one must use tenths of the new unit. So we can now quote compilation hassles in dbq. The qtwebengine is by definition equal to exactly 10 dbq. |
Fitzcarraldo Advocate
Joined: 30 Aug 2008 Posts: 2052 Location: United Kingdom
|
Posted: Wed Mar 25, 2020 12:15 pm Post subject: Re: ><)))°€ |
|
|
JustAnother wrote: | I think we need to invent a new international unit of noxious compilation characteristics. Here is my candidate: the deciqtwebengine. It is a logarithmic unit, kind of like the dB, and as in the case where the bel is too large to be useful, one must use tenths of the new unit. So we can now quote compilation hassles in dbq. The qtwebengine is by definition equal to exactly 10 dbq. |
_________________ Clevo W230SS: amd64, VIDEO_CARDS="intel modesetting nvidia".
Compal NBLB2: ~amd64, xf86-video-ati. Dual boot Win 7 Pro 64-bit.
OpenRC systemd-utils[udev] elogind KDE on both.
My blog |
flyerone n00b
Joined: 19 Nov 2019 Posts: 61 Location: 127 0 0 1
|
Posted: Sun Oct 18, 2020 5:23 pm Post subject: |
|
|
This topic still comes up as a winner on Google..
I have the i7-8700 with 64 GB RAM and I've set up 14 GB as a tmpfs for Portage. I've had jumbo-build disabled globally because of the chromium package, which had problems with it, but I can try it with qtwebengine in package.use now that I've read this. I don't feel like testing right now as I have 20 minutes left of the current build.
I've set up the tmpfs to ease the wear on the NVMe drive. When using tmpfs, does it also pay off to omit the -pipe flag?
Perhaps I'll start a pipe-vs-jumbo thread of my own after some testing and I'll win Google in a week. |
NeddySeagoon Administrator
Joined: 05 Jul 2003 Posts: 54577 Location: 56N 3W
|
Posted: Sun Oct 18, 2020 7:43 pm Post subject: |
|
|
flyerone,
-pipe is a hint to gcc that it should pass intermediate files in RAM if it can.
It's not a command.
If you omit -pipe, then normal disk-based intermediate files will be used, though I don't know where gcc would write them.
Whatever, -pipe is a good thing. _________________ Regards,
NeddySeagoon
Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail. |
flyerone n00b
Joined: 19 Nov 2019 Posts: 61 Location: 127 0 0 1
|
Posted: Sun Oct 18, 2020 8:30 pm Post subject: |
|
|
NeddySeagoon wrote: | Whatever, -pipe is a good thing. |
So the -pipe flag isn't intrinsic to each gcc call? That's good, but it means the emerge isn't faster on tmpfs than the naturally aspirated spinning rust I was used to. I've built flightgear/sim eight times now with various media and flags, and the difference stayed within three seconds.
I could just go with jumbo-build now that I have the memory. I bought the 64 GB for X-Plane.
So when I'm using tmpfs, it won't bother to fetch it any faster from the ramdisk? Removing -pipe was just a moonshot attempt to get qtwebengine below two hours on the 8700. I've agreed with myself that emerge doesn't care what the media is.
Last edited by flyerone on Sat Oct 31, 2020 5:50 pm; edited 1 time in total |
Hu Administrator
Joined: 06 Mar 2007 Posts: 22648
|
Posted: Sun Oct 18, 2020 9:39 pm Post subject: |
|
|
-pipe is passed if the caller chooses to pass it. If it is present, gcc may try to use an anonymous pipe to pass data to the next process. If successful, then even if your normal temporary directory is on a disk, the temporary files will not be written to the disk. Otherwise, it will write to a temporary file. Files can be written to a tmpfs, in which case they likely stay in memory anyway. -pipe was more likely to help when it was uncommon to use a tmpfs for build products. Even now, it is very unlikely to hurt. |
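For reference, it is just one token among the usual compiler flags in make.conf (the other flags shown here are only placeholders):
Code: |
# /etc/portage/make.conf
CFLAGS="-O2 -pipe"
CXXFLAGS="${CFLAGS}"
|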
NeddySeagoon Administrator
Joined: 05 Jul 2003 Posts: 54577 Location: 56N 3W
|
Posted: Mon Oct 19, 2020 8:03 pm Post subject: |
|
|
flyerone,
Consider the way that tmpfs works. It's the normal low-level filesystem code, but there is no permanent home for the content on disk.
In effect, that means that if you have the RAM to build in tmpfs, you don't need to, because the kernel will keep everything in the disk cache anyway.
Hence you don't see any speed improvement.
However, building on permanent storage when you don't need to incurs writes that will never be read.
As they will be done using DMA, the CPU overhead is tiny. Also, the CPU will win memory bus arbitration when it needs to access RAM while a DMA disk transfer is in progress.
Saving writes that will never be read is a good thing for SSDs. Even that doesn't matter so much these days.
The SSDs will all be retired before they wear out. _________________ Regards,
NeddySeagoon
Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail. |
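If anyone wants to see that effect for themselves, a quick (and admittedly crude) check is to watch block I/O while a build runs:
Code: |
# watch the "bo" (blocks written out) column during a build; with the
# work dir on tmpfs it stays near zero, on a real disk it does not
vmstat 5
|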
ff11 l33t
Joined: 10 Mar 2014 Posts: 664
|
Posted: Mon Oct 19, 2020 8:24 pm Post subject: |
|
|
NeddySeagoon wrote: | ...
never be read is a good thing for SSDs. Even that doesn't matter so much these days.
The SSDs will all be retired before they wear out. |
I don't believe this.
And I will quote dufeu here:
dufeu wrote: | This computer is one of my servers using a Chenbro 48-drive 4U chassis. It's actually configured with 48 drives (plus 24 drives in an external Norco expansion chassis) attached to an LSI SAS controller. There are two additional drives at the back of the Chenbro chassis where all the OS is installed. These last two drives are connected to the MB via SATA.
Turns out the SSD drive the OS was installed on started losing chunks of files including from /bin and /usr/bin. Presumably, this had the result of attempting to load binary code which included garbage bits in random places.
I've popped the SSD in my test machine and "I can't do a thing with it". Trying to use 'rsync' to copy whatever is retrievable resulted in completely different amounts of data copied each time. I was able to recover stuff like /etc/portage/make.conf and everything from /etc/conf.d etc. Also, /home resided on the second SSD so that was untouched.
I've never experienced an SSD failure before. SSDs certainly fail very differently from HDDs. It's a learning experience I'd rather not have gone through. |
_________________ | Proverbs 26:12 |
| There is more hope for a fool than for a man who is wise in his own eyes. |
* AlphaGo - The Movie - Full Documentary "I want to apologize for being so powerless" - Lee |
Anon-E-moose Watchman
Joined: 23 May 2008 Posts: 6147 Location: Dallas area
|
Posted: Mon Oct 19, 2020 8:39 pm Post subject: |
|
|
Some ssds behave strangely, especially as they near their "max writes" threshold.
I have lost the link, but several years ago some people did an endurance test on multiple SSDs from multiple vendors, etc.
Some did well (Samsung was one of the better ones); others, like Intel, didn't fade away (like a rust bucket) but just died suddenly (though even then it was at the lifetime write threshold anyway).
But that was behaviour from several years ago; SSDs are now coming into their own.
Is it possible that an SSD would just die, even today? Yep, but with each generation of newer drives/controllers/etc. that chance diminishes.
And I've had HDDs just suddenly die too.
All in all, I'd trust a relatively new SSD just as much as I'd trust spinning rust.
As far as trusting data on any storage device not to have a problem somewhere, sometime, look at Neddy's sig.
Edit to add: I don't do emerges/compiles on the SSD though; I use a tmpfs for that (plenty of memory), but emerge syncs write to the NVMe daily. _________________ UM780, 6.1 zen kernel, gcc 13, profile 17.0 (custom bare multilib), openrc, wayland |
NeddySeagoon Administrator
Joined: 05 Jul 2003 Posts: 54577 Location: 56N 3W
|
Posted: Mon Oct 19, 2020 9:28 pm Post subject: |
|
|
Anon-E-moose,
There was this small-scale experiment: The SSD Endurance Experiment.
Wearing a drive out with writes has nothing to do with random failures, which is what the MTBF tries to assess.
Backblaze has more failure data from their actual installed drives than is useful.
I'm not sure if it includes SSDs. _________________ Regards,
NeddySeagoon
Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail. |
ValerieVonck n00b
Joined: 17 Aug 2017 Posts: 47 Location: Erpe-Mere, Oostvlaanderen
|
Posted: Wed Oct 21, 2020 7:10 am Post subject: |
|
|
On my i7 NUC (4 threads), the compilation takes around 5 hours 30 minutes for version 5.14 (SSD disk).
I am not using jumbo-build.
On my VirtualBox VM (1 thread, i5 processor, 4 GB RAM, 2 GB swap), the builds take 18+ hours.
On my other VirtualBox VM (1 thread, i7 processor, 4 GB RAM, 2 GB swap), I don't even try it anymore...
I would really prefer a binary package with all flags enabled; I couldn't care less if the download is more than 1 or 2 GB.
Is this possible?
In my USE flags I am including "icu".
If I remove it, would Qt disappear? Or should I add "-qt" to the USE flags?
And then do a world update?
I could then emerge firefox-bin or the Chrome binary... which is normally against my Gentoo philosophy...
I know it's used for Falkon (which doesn't even start) and other programs, and for some other libraries.
But seriously, the compilation times are becoming ridiculous. _________________ Inter antecessum est melius |
AstroFloyd n00b
Joined: 18 Oct 2011 Posts: 59
|
Posted: Thu Apr 22, 2021 4:42 pm Post subject: |
|
|
I may be asking something very stupid, but would it be useful (and if so, is it possible) to *not* clean the build dir for a given package? If not cleaned, and the source files don't change, make could just take the existing object files and e.g. link against the new version of a dependency...? |
Fitzcarraldo Advocate
Joined: 30 Aug 2008 Posts: 2052 Location: United Kingdom
|
Posted: Thu Apr 22, 2021 6:04 pm Post subject: |
|
|
NeddySeagoon wrote: | Backblaze has more failure data than is useful from their actual installed drives.
I'm not sure if it includes SSDs. |
The Backblaze stats are for their data drives, which are all HDDs. Backblaze are still using only HDDs for data storage, but a 'little over' 1,200 of their 3,000 boot drives are SSDs:
Andy Klein, Backblaze wrote: | We always exclude boot drives from our reports as their function is very different from a data drive. While it may not seem obvious, having 3,000 boot drives is a bit of a milestone. It means we have 3,000 Backblaze Storage Pods in operation as of December 31st. All of these Storage Pods are organized into Backblaze Vaults of 20 Storage Pods each or 150 Backblaze Vaults.
Over the last year or so, we moved from using hard drives to SSDs as boot drives. We have a little over 1,200 SSDs acting as boot drives today. We are validating the SMART and failure data we are collecting on these SSD boot drives. We’ll keep you posted if we have anything worth publishing. |
Source: https://www.backblaze.com/blog/backblaze-hard-drive-stats-for-2020/
I could be wrong, but I read into that blog post that Backblaze is not ready to trust SSDs over HDDs for data storage yet. _________________ Clevo W230SS: amd64, VIDEO_CARDS="intel modesetting nvidia".
Compal NBLB2: ~amd64, xf86-video-ati. Dual boot Win 7 Pro 64-bit.
OpenRC systemd-utils[udev] elogind KDE on both.
My blog |
Hu Administrator
Joined: 06 Mar 2007 Posts: 22648
|
Posted: Thu Apr 22, 2021 6:51 pm Post subject: |
|
|
AstroFloyd wrote: | I may be asking something very stupid, but would it be useful (and if so, is it possible) to *not* clean the build dir for a given package? If not cleaned, and the source files don't change, make could just take the existing object files and e.g. link against the new version of a dependency...? |
It is possible, but it depends on several assumptions.
- Do you trust the upstream build system to do the right thing in this case? Many projects use custom-built build systems of varying quality. Some cannot even resume an interrupted build in the middle and produce the right result.
- Will this happen often enough that the extra disk space is worth it?
- Are you assuming that none of the files unpacked by Portage into the build directory are changed, and a relink is the only required step? Or do you allow the possibility that some of those files have been patched / replaced / deleted, and you still want to retain as much as you can?
ccache takes a somewhat safer approach to this, but there are packages known not to work correctly with ccache, too.
Fitzcarraldo wrote: | I could be wrong, but I read into that blog post that Backblaze is not ready to trust SSDs over HDDs for data storage yet. |
Historically, HDDs offered a better price/capacity ratio than SSDs. At the scale that Backblaze operates hardware, they might have decided to wait for the ratio to become more favorable before switching over. |
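For anyone who wants to try the ccache route mentioned above, the usual Portage hookup is only a few lines (cache location and size are illustrative); packages that misbehave with it can then be opted out per package via package.env:
Code: |
# emerge --ask dev-util/ccache

# /etc/portage/make.conf
FEATURES="ccache"
CCACHE_DIR="/var/cache/ccache"

# optional: cap the cache size
# CCACHE_DIR=/var/cache/ccache ccache -M 10G
|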