lohner n00b
Joined: 25 Mar 2006 Posts: 7 Location: Austria
Posted: Mon Jun 11, 2007 1:30 pm Post subject: Slow HDD? - Compact Flash RAID Hybrid System [+some results] |
edit: When you have finished reading the theory below, scroll down to the first results, which I have just posted.
Introduction/Motivation
I was fed up with current hard disks being sluggish and SSDs still not being available, so I figured a CF RAID would be the perfect solution for my shiny new (quite) silent system. I tried to focus on speed and stability rather than silence, but noise was also a factor. I also did not want to spend more than 300 on the storage system. Having a motherboard with 6 SATA ports (Intel G965, because of the open source graphics drivers) and reusing the old SATA HDD RAID (2 disks) from my previous system, I had already bought 4 SATA-to-CF adapters for 25 each.
Basic Considerations
The basic idea is to create a RAID0 from 4 Compact Flash cards and store read-only data there. Eventually a CF card will wear out, after roughly 100,000 write cycles. (Worn out means you can't modify your data anymore, but read access should still be possible.)
Keeping a RAID1 of conventional hard disks for data that is modified frequently avoids this shortcoming. Furthermore, the RAID1 will hold the whole CF data as a backup.
4-way RAID0 additionally spreads write access over 4 cards, lowering the probability of error of a single card by 4.
Filesystem
In practice: Code: | /var /tmp /home /usr/portage | and everything below them, and of course the swap partition(s), should go on the hard disk array, while the rest below / is copied to the CF array and mounted on boot. Other directories may be better off on the hard disk too, depending on your system.
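As a sketch, an /etc/fstab for that split could look like the fragment below. Every device name in it (md0 for the CF array, md1-md4 and sda2 for the hard disks) is a hypothetical placeholder, not taken from this thread:

```
# /etc/fstab sketch - all device names are hypothetical
# md0 = CF RAID0 (read-only root); md1-md4 = HDD RAID1 partitions
/dev/md0    /             ext3    ro,noatime    0 1
/dev/md1    /home         ext3    rw,noatime    0 2
/dev/md2    /var          ext3    rw,noatime    0 2
/dev/md3    /tmp          ext3    rw,noatime    0 2
/dev/md4    /usr/portage  ext3    rw,noatime    0 2
/dev/sda2   none          swap    sw            0 0
```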
Another consideration is choosing the right filesystem for CF cards. Since access is mostly read-only and I hardly have any experience with anything other than ext3, I'll follow the path of least resistance here.
Capacity
A Gentoo system should not take up more than 8 GB (which is very generous) for everything except /var, /tmp, /home and /usr/portage.
With current CF card sizes between 2 and 8 GB (16 GB or even 64 GB to arrive soon), times 4 for the RAID0, you should have sufficient capacity; you could even think about using only 2 cards. Note that price scales with card size - the smaller the card, the cheaper it is. Also, in theory, more cards are faster in RAID0.
Compact Flash Cards
Choosing the right Compact Flash cards is essential, and I'm still looking for appropriate benchmarks. Some cards can sustain 40 MB/s and more of r/w bandwidth (e.g. SanDisk Ultra IV), but they're very expensive.
Manufacturers' marketing data may be euphemistic, so be careful with speed ratings (e.g. 266x, where 1x is 150 kB/s: 266 x 0.150 x 0.854 = 34.1 MB/s).
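The x-rating arithmetic is easy to reproduce in a shell; the 0.854 correction factor is the one used above, so treat it as the author's assumption:

```shell
# 266x rating, 1x = 150 kB/s, times the 0.854 correction factor used above
awk 'BEGIN { printf "%.1f MB/s\n", 266 * 0.150 * 0.854 }'   # prints: 34.1 MB/s
```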
Also, be careful not to buy fake cards (see http://reviews.ebay.co.uk/FAKE-SanDisk-Ultra-Compact-Flash-Cards-Exposed_W0QQugidZ10000000001235984)
Only benchmarks will show the actual speed through CF-to-SATA adapters (it should be faster than USB 2.0, though) - but I have not found any such benchmarks to date.
You can get fast 2 GB cards for as low as 20 and 8 GB cards for 80 (optimistic prices for fast cards; slower cards are even cheaper).
RAID
Another question is the type of RAID - hardware or software. I don't know anything about the G965 yet, i.e. whether it's a real hardware solution or not.
In short, using software RAID is not the worst choice. It enables any Linux machine with software RAID support (it is not controller specific) to read the data, and the CPU load should be moderate, or even unmeasurable on dual-core systems.
The positive effects:
Fast cards could theoretically outperform any single HDD in terms of read/write bandwidth and, by far - I'd expect 100 times - in terms of random access. I suspect, having no experience at all here, that aging of CF cards won't be as dramatic as it is with conventional disks. This should boost application and system startup speed significantly and keep it up over time.
Separating your /home data from the rest of the system also is a good idea.
The downside:
Doing a system update with portage is not recommended on a daily basis, since frequent write access would wear out the cards. However, using cheap cards, and expecting prices to fall, buying a new set of cards every year is feasible, so you can still keep your system up to date at all times.
Also, slow hard disk(s) are still needed, and they produce heat and noise. Nothing on /home will be accessible any faster than before.
Conclusion:
Gaining system speed you can actually "feel" is possible by investing only about 200.
I'm looking forward to answering any questions or thinking about your suggestions (maybe someone has benchmarks or knows some hidden filesystem areas with frequent write access I didn't come across yet).
Last edited by lohner on Sat Jul 21, 2007 6:05 pm; edited 3 times in total |
NeddySeagoon Administrator
Joined: 05 Jul 2003 Posts: 54396 Location: 56N 3W
Posted: Mon Jun 11, 2007 8:10 pm Post subject: |
lohner,
A few points - when FLASH memory fails, the stored charge that represents your data leaks away more quickly than it did when the FLASH was new.
If you only lose a block of data, that may not be too bad ... if you lose a block of meta data, that will be much worse. Your data may still be there but you have lost the pointers to it.
You don't mention the choice of a filesystem - that's important for best FLASH life. You must not use a journalled filesystem, as that increases the number of writes. Your figure of 100,000 writes is fairly pessimistic: with wear leveling, you should achieve at least 10x that. Look at jffs2, which is a special journalling filesystem for FLASH devices.
Raid0 gives you larger capacity and higher speeds at the expense of reduced reliability. If you lose anything in a 4-drive raid0 set, it's all gone. _________________ Regards,
NeddySeagoon
Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail. |
devsk Advocate
Joined: 24 Oct 2003 Posts: 2998 Location: Bay Area, CA
Posted: Mon Jun 11, 2007 8:40 pm Post subject: |
My comments:
1. raid0 on flash cards is worse than raid0 on hard drives in terms of overall system reliability. Bad idea!
2. multiple flash cards in raid0 may still not reach the peak speed of a standalone SATA hard drive. The cost per GB is prohibitive and the speed loss is a letdown. So you lose both ways.
flash has a long way to go. It has come a long way of late, but still not far enough. |
blandoon Tux's lil' helper
Joined: 05 Apr 2004 Posts: 136 Location: Oregon, USA
Posted: Mon Jun 11, 2007 9:13 pm Post subject: |
Quote: | 4-way RAID0 additionally spreads write access over 4 cards, lowering the probability of error of a single card by 4. |
I think this is incorrect. If you assume that the probability of failure is a function of number of writes to a single card, and you assume that each write hits one card only, then the probability of a particular card failing over a given time might be less. However, that's not a real-world scenario, and we can't assume that any of those things are true.
If failures are a function of time, the probability of ONE card failing goes up exponentially with each card that you add. Take, for example, a twin-engine airplane: the chance of both engines failing is less, but the chance of one engine failing (also possibly a fatal event) is at least twice as high.
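The twin-engine arithmetic can be made concrete. Assuming (purely for illustration) a per-engine failure probability p over some interval, the chance that at least one of two fails is 1 - (1-p)^2 = 2p - p^2, i.e. just under twice the single-engine figure:

```shell
# p = 0.01 per engine; probability at least one of two fails: 1 - (1-p)^2
awk 'BEGIN { p = 0.01; printf "%.4f\n", 1 - (1 - p) ^ 2 }'   # prints: 0.0199
```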
Personally (in my opinion only), I don't have any data that is worth so little to me that I would put it on a RAID-0 array of any medium. If you could somehow do a more advanced RAID (5 or better) with very cheap cards, that would tolerate multiple failures, then you might have something, but then the number of writes would probably make that prohibitive. _________________ "Give a man a fire and he's warm for one night, but set fire to him and he's warm for the rest of his life..." |
devsk Advocate
Joined: 24 Oct 2003 Posts: 2998 Location: Bay Area, CA
Posted: Mon Jun 11, 2007 9:41 pm Post subject: |
blandoon wrote: | Quote: | 4-way RAID0 additionally spreads write access over 4 cards, lowering the probability of error of a single card by 4. |
I think this is incorrect. | It's definitely incorrect. raid0 distributes a 128 KB write (say that's your stripe size) across 2 drives, giving each drive a 64 KB chunk to write. This makes the probability of failure increase as you add more cards to the raid0.
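That chunk mapping can be sketched in a few lines of shell (two drives, 64 KiB chunks, as in the example above):

```shell
# Which raid0 member serves each 64 KiB chunk of a 128 KiB write?
# member = (offset / chunk_size) mod number_of_drives
awk 'BEGIN {
  chunk = 64 * 1024; drives = 2
  for (off = 0; off < 128 * 1024; off += chunk)
    printf "offset %6d -> drive %d\n", off, int(off / chunk) % drives
}'
# prints: offset      0 -> drive 0
#         offset  65536 -> drive 1
```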
Also note that the software (or hardware) has to scale with the addition of cards, i.e. you will not get 8 times one card's speed if you use 8 cards. The overhead of splitting (and gathering) the data will catch up with the gain and make the graph turn downwards after a certain number of cards. Anybody who has tried a large enough (> 4 devices) raid0 will know this. |
lohner n00b
Joined: 25 Mar 2006 Posts: 7 Location: Austria
Posted: Mon Jun 11, 2007 10:07 pm Post subject: |
Well, thank you for your responses - you've got me thinking about it again...
Quote: | You don't mention the choice of a filesystem - thats important for best FLASH life. You must not use a journalled filesystem as that increases the number of writes. Your 100,000 writes is fairly pessimistic. With wear leveling, you should achieve at least 10x that. Look at jffs2, which is a special journalled filesystem for FLASH devices. |
For writing it is bad, which is why I try to keep as much of the system read-only as possible.
Still, JFFS2 is interesting. I'd appreciate if anybody would care to share their experiences...
Quote: | 1. raid0 on flash cards is worse than raid0 on hard drives, overall system reliability wise. Bad idea! |
That's right, which is exactly why I propose the backup on the conventional disks.
Quote: | 2. multiple flash cards using raid0 may still not reach the peak speed of a standalone sata hard drive. The cost per gb is prohibitive and speed loss is a let down. So, you lose both ways. |
You may be right, but I am not convinced that flash is worse than hard drives when it comes to access time (which is important when starting applications, loading all the libs, etc.).
Quote: | If you assume that the probability of failure is a function of number of writes to a single card, and you assume that each write hits one card only, then the probability of a particular card failing over a given time might be less. |
I agree. An equivalent example: divide an 8 GB card into 2 GB pieces and distribute the access equally - it would fail just as soon as the undivided 8 GB card would.
Quote: | If failures are a function of time, the probability of ONE card failing goes up exponentially with each card that you add. |
ACK
Yep, you got me on that one: I shall never spread nonsense like that.
Quote: | Personally (in my opinion only), I don't have any data that is worth so little to me that I would put it on a RAID-0 array of any medium. |
Maybe I can change your mind: if you back up your system after every important change, why not risk the redundant original for the sake of speed?
Quote: | Also note that the software (or hardware) has to scale with addition of the cards |
That's true as well. I have already experimented with 4-way RAID0 and found it hardly faster than 2-way RAID0 (with 4 and 2 of the exact same disks, respectively),
but I still have hope that the situation is different with flash memory. |
Archangel1 Veteran
Joined: 21 Apr 2004 Posts: 1212 Location: Work
Posted: Wed Jun 13, 2007 10:53 pm Post subject: |
blandoon wrote: | If failures are a function of time, the probability of ONE card failing goes up exponentially with each card that you add. Take, for example, a twin-engine airplane: the chance of both engines failing is less, but the chance of one engine failing (also possibly a fatal event) is at least twice as high. |
It doesn't go up exponentially.
It would only behave like that if the addition of each card/engine/whatever increased the chance of the others failing. It doesn't, so if you do some quick maths you get a kind of logarithmic-like curve where the gradient reduces as you add more items.
Of course it has to be asymptotic to 1, so it obviously can't be exponential. _________________ What are you, stupid?
Last edited by Archangel1 on Wed Jun 13, 2007 10:55 pm; edited 1 time in total |
Archangel1 Veteran
Joined: 21 Apr 2004 Posts: 1212 Location: Work
Posted: Wed Jun 13, 2007 10:55 pm Post subject: |
lohner wrote: | Still, JFFS2 is interesting. I'd appreciate if anybody would care to share their experiences... |
I haven't tried it, but Wikipedia suggests that it has performance issues with 'big' drives, which yours would be.
There's also YAFFS, but I don't really know how the two compare. _________________ What are you, stupid? |
Cypr n00b
Joined: 03 Jan 2005 Posts: 57
Posted: Sun Jun 17, 2007 10:16 am Post subject: |
If the probability of $DEVICE failing within a time interval dt is p*dt, then the probability of one of N equivalent independent $DEVICEs failing in the time interval dt is N*p*dt.
Assuming the $DEVICEs behave like radioactive particles (which probably isn't a good approximation for flash cards), the cumulative probability P satisfies dP(t) = (1-P(t))*N*p*dt.
Given P(0) = 0, the solution is P(t) = 1 - exp(-N*p*t). |
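Plugging illustrative numbers into that formula shows how N scales the cumulative risk; the rate p = 0.1 failures per card-year is an arbitrary assumption, not data from this thread:

```shell
# P(t) = 1 - exp(-N*p*t) at t = 1 year, p = 0.1/year, for N = 1, 2, 4 cards
for n in 1 2 4; do
  awk -v N="$n" 'BEGIN { printf "N=%d  P=%.3f\n", N, 1 - exp(-N * 0.1 * 1) }'
done
# prints: N=1  P=0.095
#         N=2  P=0.181
#         N=4  P=0.330
```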
NeddySeagoon Administrator
Joined: 05 Jul 2003 Posts: 54396 Location: 56N 3W
Posted: Sun Jun 17, 2007 1:14 pm Post subject: |
Cypr,
Your analogy with radioactive particles is correct for the random causes of failure in all systems.
It does not take into account any systematic failure modes or use-related failures. As such, it sets a minimum failure rate (maximum useful life). In practice, other non-random failure modes exist, so real-world experience will always be worse (higher failure rates, reduced useful life) than your prediction. _________________ Regards,
NeddySeagoon
Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail. |
lohner n00b
Joined: 25 Mar 2006 Posts: 7 Location: Austria
Posted: Sat Jul 21, 2007 6:01 pm Post subject: Now Working |
Finally - I just finished the handcrafting - I managed to get the system working almost as described above.
I'll post a short description now, along with the trouble I ran into:
My system looks like this:
2x SanDisk UltraIV CF Cards 4GB each
2x Samsung SP2504C 250 GB each
partitions:
Code: |
Number Start End Size Type File system Flags
1 32,3kB 107MB 107MB primary ext2 boot
2 107MB 5569MB 5462MB primary ext3 raid
4 5569MB 250GB 244GB extended
6 5569MB 7575MB 2007MB logical linux-swap
7 7576MB 32,6GB 25,0GB logical ext3 raid
8 32,6GB 42,6GB 10,0GB logical ext3 raid
5 42,6GB 250GB 207GB logical ext3 raid
|
plus one 4 GB partition type fd on each of the CF Cards.
which gives me in raid mode:
Code: |
rootfs 7,6G /
/dev/root 7,6G /
udev 1,6G /dev
/dev/md5 381G /home
/dev/md7 46G /var
/dev/md8 19G /tmp
shm 1,6G /dev/shm
|
output of mount:
Code: |
rootfs on / type rootfs (rw)
/dev/root on / type ext2 (ro)
proc on /proc type proc (rw,nosuid,nodev,noexec)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec)
udev on /dev type tmpfs (rw,nosuid)
devpts on /dev/pts type devpts (rw,nosuid,noexec)
/dev/md5 on /home type ext3 (rw,noatime,data=ordered)
/dev/md7 on /var type ext3 (rw,noatime,data=ordered)
/dev/md8 on /tmp type ext3 (rw,noatime,data=ordered)
shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec)
usbfs on /proc/bus/usb type usbfs (rw,nosuid,noexec)
|
What I did
I copied everything from / to the flash memory except /tmp, /var, /usr/src and /usr/portage.
I linked /etc/mtab to /proc/mounts, as described in the mount man page, because on boot the system complained about not being able to write to /etc/mtab on my root, which I had mounted read-only for testing purposes.
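That fix amounts to a single symlink. A sketch, exercised in a scratch directory so it is safe to run anywhere (on the real system the link would be /etc/mtab itself):

```shell
# Hypothetical scratch directory standing in for /etc
fake_etc=$(mktemp -d)
ln -s /proc/mounts "$fake_etc/mtab"   # mtab now mirrors the kernel's mount table
readlink "$fake_etc/mtab"             # prints: /proc/mounts
```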
I copied /var contents to their own partition and deleted everything inside of /tmp.
Finally, I modified /etc/fstab and /boot/grub/grub.conf to match the new situation, and made / read-only.
Results
Output of hdparm -tT for the flash disks:
Code: |
Timing cached reads: 7312 MB in 2.00 seconds = 3659.54 MB/sec
Timing buffered disk reads: 230 MB in 3.03 seconds = 75.98 MB/sec
|
...and of the harddisk array:
Code: |
Timing cached reads: 7298 MB in 2.00 seconds = 3653.22 MB/sec
Timing buffered disk reads: 438 MB in 3.00 seconds = 145.90 MB/sec
|
By the way, without RAID, hdparm -tT looks like this:
Code: |
CF:
Timing cached reads: 7326 MB in 2.00 seconds = 3666.95 MB/sec
Timing buffered disk reads: 116 MB in 3.06 seconds = 37.85 MB/sec
HDD:
Timing cached reads: 7050 MB in 2.00 seconds = 3528.41 MB/sec
Timing buffered disk reads: 222 MB in 3.00 seconds = 73.91 MB/sec
|
According to hdparm, the flash RAID is only about half as fast as the hard disk array.
Still, starting applications, or even the whole system, "feels" faster (by up to several seconds, though I have not measured anything yet).
I am confident everything will get even faster when I add two more Compact Flash disks for a 4-way RAID.
This is scheduled for September due to an acute (in the sense of life-threatening) lack of money.
I haven't figured out what rootfs is supposed to be - I think it wasn't there in the first place. I'll have to google that if nobody is going to enlighten me here.
Now, with the read-only root filesystem, the cards should last forever, and maybe it even adds some level of security to the system.
Anyway, I am expecting quite a few problems in everyday computing and haven't done anything productive with the system yet because - as I mentioned in the first line - I just finished the setup. That's why I wouldn't encourage anybody to try it out right now. |
eccerr0r Watchman
Joined: 01 Jul 2004 Posts: 9696 Location: almost Mile High in the USA
Posted: Wed Jul 25, 2007 11:09 pm Post subject: |
The reason why the flash card system seems faster is that it is faster - MB/sec is not the only metric; seek time on the flash system is near-instantaneous, versus the HDDs, where it is measured in milliseconds.
The question I do have is whether anyone has actually tried JFFS2 on a block device - whether it's even possible, or whether you can only run jffs2 on a Memory Technology Device emulator (and what the consequences of that are on a flash-backed block device). JFFS2 takes advantage of the block size to distribute writes, but when dealing with a block device emulator (in the form of a CF/SD/USB etc. disk) and then converting that to a pseudo-memory device with the other module... oh... so confusing now... _________________ Intel Core i7 2700K/Radeon R7 250/24GB DDR3/256GB SSD
What am I supposed watching? |
NeddySeagoon Administrator
Joined: 05 Jul 2003 Posts: 54396 Location: 56N 3W
Posted: Thu Jul 26, 2007 6:00 pm Post subject: |
eccerr0r,
Err, I'm not sure I follow. FLASH memory really is blocked. You should be able to use JFFS2 on any underlying media you like. _________________ Regards,
NeddySeagoon
Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail. |
adsmith Veteran
Joined: 26 Sep 2004 Posts: 1386 Location: NC, USA
Posted: Thu Jul 26, 2007 10:25 pm Post subject: |
Unless you're soldering the flash memory to your own controller, there should really be no point in using JFFS2. Don't most of the USB/CF/SD/whatever controllers onboard normal memory cards do wear-levelling automatically? Just use ext2 (non-journalling).
This was an interesting experiment, anyway. I personally would have done RAID5 with the 4 flash drives. I don't really see the point in JBOD with that many devices, as you are bound to have a catastrophic failure (even if it may be more recoverable than on a spinning disk).
hdparm isn't the right metric, though. Use bonnie++ or one of the other disk benchmarks that include random versus sequential seek and sustained throughput information. |
eccerr0r Watchman
Joined: 01 Jul 2004 Posts: 9696 Location: almost Mile High in the USA
Posted: Fri Jul 27, 2007 3:50 pm Post subject: |
That's the hope - that the little tiny microcontroller on the USB/CF/SD card actually does wear-levelling. Considering the amount of memory JFFS2 uses, I don't think so - either that, or the JFFS2 guys are doing something really wrong. Or perhaps they're doing more than what the on-chip version does. I'll need to grab a junk flash card and experiment to see whether it really does wear leveling; I suspect a lot of them don't. Or perhaps they handle developed bad blocks poorly.
Anyway, at one point I do recall having to use blkmtd to emulate something. Maybe it's no longer needed - it was deprecated at some point, after all... _________________ Intel Core i7 2700K/Radeon R7 250/24GB DDR3/256GB SSD
What am I supposed watching? |
AdShea n00b
Joined: 10 Mar 2005 Posts: 62
Posted: Fri Aug 10, 2007 4:05 pm Post subject: |
About the failure rates of the flash: as long as you're not using the cards for logs or temp stuff, the 1,000,000 write cycles will take 9 years to hit if you limit the VFS to writing only every 5 minutes (laptop mode does something like this; there are also files in /proc and /sys you can use to tune it manually - just read the kernel docs). That would be the failure of a single block written every 5 minutes, 24 hours a day, for 9 years straight. In practice, you won't be hitting the same block all the time, so the real number will be much better. |
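That 9-year figure checks out as simple arithmetic - 1,000,000 writes at one write every 5 minutes:

```shell
# 1e6 writes * 5 min each, converted to years (365.25-day years)
awk 'BEGIN { printf "%.1f years\n", 1000000 * 5 / (60 * 24 * 365.25) }'   # prints: 9.5 years
```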
xenon Guru
Joined: 25 Dec 2002 Posts: 432 Location: Europe
Posted: Mon Dec 10, 2007 7:43 pm Post subject: |
I am about to create a CF-based system, too. Any more hints? I am thinking about leaving out /var, /tmp (RAMdisk), /usr/portage, /home. |
sf_alpha Tux's lil' helper
Joined: 19 Sep 2002 Posts: 136 Location: Bangkok, TH
Posted: Tue Dec 11, 2007 8:20 am Post subject: |
CF can outperform a hard disk array in the case of random reads...
Try IOMeter and look at its results rather than hdparm's; fast disks always outperform flash memory on sequential reads. _________________ Gentoo Mirrors in Thailand (and AP)
http://gentoo.in.th |
xenon Guru
Joined: 25 Dec 2002 Posts: 432 Location: Europe
Posted: Thu Dec 13, 2007 1:07 pm Post subject: |
And my bet is that on a system partition, random reads are more frequent and make more of a difference than sequential ones. |
likewhoa l33t
Joined: 04 Oct 2006 Posts: 778 Location: Brooklyn, New York
Posted: Sat Dec 22, 2007 11:48 pm Post subject: |
Here is a run I did on a quad-CF PCI card I recently purchased. I'm using 4x 1 GB CF drives (the cheap kind) for testing purposes; I would not recommend using crappy CF drives like these, as they are slow as hell. Results:
RAID = 4x1GB CF.
Code: |
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid0]
md7 : active raid0 hdh1[3] hdg1[2] hdf1[1] hde1[0]
4011264 blocks 64k chunks
|
START OF EXT2FS stride=16 BENCHMARKS
Code: |
# time mkfs.ext2 -E stride=16 /dev/md7
real 0m17.416s
user 0m0.010s
sys 0m0.070s
# tar xvjpf stage3*
real 0m28.436s
user 0m20.040s
sys 0m2.290s
# tar xvjpf portage-latest*
real 0m16.792s
user 0m11.080s
sys 0m3.920s
# sync;for i in 1 2 3; do time dd if=/dev/zero of=512k count=${i}000k;time sync;time rm 512k;done
1024000+0 records in
1024000+0 records out
524288000 bytes (524 MB) copied, 1.85042 s, 283 MB/s
real 0m2.105s
user 0m0.260s
sys 0m1.830s
real 1m9.081s
user 0m0.000s
sys 0m0.030s
real 0m0.101s
user 0m0.000s
sys 0m0.100s
2048000+0 records in
2048000+0 records out
1048576000 bytes (1.0 GB) copied, 18.2989 s, 57.3 MB/s
real 0m18.300s
user 0m0.400s
sys 0m3.530s
real 1m50.938s
user 0m0.000s
sys 0m0.100s
real 0m0.176s
user 0m0.000s
sys 0m0.170s
3072000+0 records in
3072000+0 records out
1572864000 bytes (1.6 GB) copied, 65.9342 s, 23.9 MB/s
real 1m5.935s
user 0m0.530s
sys 0m5.180s
real 2m18.534s
user 0m0.000s
sys 0m0.200s
real 0m0.286s
user 0m0.000s
sys 0m0.280s
# time zcav /dev/md7
#loops: 1
#block K/s time
0 26115 3.921046
100 26683 3.837600
200 26668 3.839802
300 26675 3.838661
400 26718 3.832573
500 26652 3.841995
600 26749 3.828047
700 26687 3.836991
800 26667 3.839809
900 26220 3.905413
1000 20531 4.987442
1100 26670 3.839519
1200 26649 3.842429
1300 26014 3.936285
1400 26675 3.838711
1500 26566 3.854482
1600 26678 3.838225
1700 26603 3.849168
1800 26650 3.842337
1900 26674 3.838868
2000 26645 3.843041
2100 26674 3.838860
2200 26675 3.838785
2300 26673 3.839044
2400 26657 3.841360
2500 26587 3.851463
2600 26673 3.839046
2700 26257 3.899875
2800 26398 3.878951
2900 26511 3.862429
3000 26317 3.890894
3100 26119 3.920407
3200 26253 3.900425
3300 26199 3.908527
3400 25983 3.940975
3500 26053 3.930416
3600 26188 3.910048
3700 26140 3.917268
3800 27248 3.757999
real 2m34.449s
user 0m0.020s
sys 0m6.800s
# hdparm -Tt /dev/md7
/dev/md7:
Timing cached reads: 2892 MB in 2.00 seconds = 1447.17 MB/sec
Timing buffered disk reads: 78 MB in 3.03 seconds = 25.75 MB/sec
# time rm -rf /usr/portage/*
real 0m18.374s
user 0m0.130s
sys 0m1.620s
# time emerge --sync
real 7m32.830s
user 0m12.060s
sys 0m6.000s
# time rm -rf /mnt/gentoo/*
real 0m2.998s
user 0m0.100s
sys 0m1.320s
|
START OF XFS BENCHMARKS
Code: |
# time mkfs.xfs -f /dev/md7
real 0m4.181s
user 0m0.050s
sys 0m0.020s
# time tar xvjpf stage3*
real 8m59.284s
user 0m20.600s
sys 0m4.840s
# time tar xvjpf portage-latest* -C usr/
real 24m47.368s
user 0m12.230s
sys 0m16.880s
# sync;for i in 1 2 3; do time dd if=/dev/zero of=512k count=${i}000k;time sync;time rm 512k;done
1024000+0 records in
1024000+0 records out
524288000 bytes (524 MB) copied, 34.8706 s, 15.0 MB/s
real 0m35.284s
user 0m0.200s
sys 0m2.200s
real 0m19.127s
user 0m0.000s
sys 0m0.060s
real 0m0.280s
user 0m0.000s
sys 0m0.100s
2048000+0 records in
2048000+0 records out
1048576000 bytes (1.0 GB) copied, 83.6147 s, 12.5 MB/s
real 1m23.617s
user 0m0.370s
sys 0m4.430s
real 0m25.292s
user 0m0.000s
sys 0m0.040s
real 0m0.199s
user 0m0.000s
sys 0m0.200s
3072000+0 records in
3072000+0 records out
1572864000 bytes (1.6 GB) copied, 132.375 s, 11.9 MB/s
real 2m12.380s
user 0m0.460s
sys 0m7.460s
real 0m25.934s
user 0m0.000s
sys 0m0.090s
real 0m0.280s
user 0m0.000s
sys 0m0.270s
# time zcav /dev/md7
#loops: 1
#block K/s time
0 26227 3.904259
100 25530 4.010862
200 26412 3.876952
300 26380 3.881713
400 26515 3.861962
500 26459 3.870043
600 26485 3.866332
700 26512 3.862382
800 26407 3.877730
900 26449 3.871546
1000 26384 3.881053
1100 26308 3.892339
1200 26353 3.885702
1300 26447 3.871845
1400 26462 3.869693
1500 26460 3.869899
1600 26446 3.871987
1700 26405 3.878007
1800 26446 3.872006
1900 26401 3.878559
2000 26451 3.871227
2100 26469 3.868564
2200 26447 3.871831
2300 26398 3.879018
2400 26462 3.869635
2500 26459 3.870045
2600 26449 3.871515
2700 26454 3.870859
2800 26405 3.877925
2900 26373 3.882743
3000 26338 3.887879
3100 26249 3.901004
3200 26052 3.930488
3300 26000 3.938383
3400 26172 3.912447
3500 26391 3.879983
3600 26388 3.880538
3700 26392 3.879883
3800 27649 3.703521
real 2m33.974s
user 0m0.010s
sys 0m9.840s
# hdparm -Tt /dev/md7
/dev/md7:
Timing cached reads: 2916 MB in 2.00 seconds = 1459.48 MB/sec
Timing buffered disk reads: 78 MB in 3.06 seconds = 25.50 MB/sec
# time rm -rf /usr/portage/*
real 8m8.118s
user 0m0.170s
sys 0m7.430s
# time emerge --sync -q
real 32m58.173s
user 0m12.010s
sys 0m14.330s
# time rm -rf /mnt/gentoo/*
real 10m47.375s
user 0m0.220s
sys 0m9.620s
|
As you can see, ext2 is the clear winner, but that's expected, since XFS is more focused on large files. I didn't test any journalling filesystems, as there is no point in it. What I learned from this is that you need to invest in good CF media, not cheap media like the cards I purchased - there go $35 down the drain; at least I can use them in my digital camera. If anyone is wondering why I didn't run bonnie++: the array size was only 4 GB, and bonnie++ needs to run with -s 4096, which is just not possible on a 4 GB filesystem. I hope my SATA RAID benchmarks finish on my 4x400GB array, which is giving me an average of 270 MB/s write speeds - but that's for a totally different thread altogether.
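Incidentally, the stride=16 in the mkfs.ext2 line above is consistent with the array's 64k chunks, assuming the default 4 KiB ext2 block size: stride = chunk size / block size:

```shell
# RAID chunk 64 KiB / ext2 block 4 KiB (assumed default) = stride
awk 'BEGIN { print (64 * 1024) / (4 * 1024) }'   # prints: 16
```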
Happy Holidays!
likewhoa |
bexamous2 Tux's lil' helper
Joined: 18 Nov 2005 Posts: 80
Posted: Mon Mar 10, 2008 7:21 pm Post subject: |
I'm just searching for results to compare to mine and found this thread. I just bought some CF stuff to mess around with...
266x 4GB Adata CF cards (UDMA)
CF->SATA converters
Got 4 cards in raid0 on a sil3114 PCI card... this testing could be limited by bus or controller as well.
Here are some results, on an ext2 filesystem...
ubuntu@ubuntu:~/raid$ sudo time zcav /dev/md1
#loops: 1, version: 1.03b
#block K/s time
0 73374 1.395581
100 74092 1.382062
200 73968 1.384367
300 74078 1.382324
400 74129 1.381368
500 69616 1.470917
600 74132 1.381313
700 74138 1.381200
800 74036 1.383100
900 74164 1.380708
1000 72761 1.407328
1100 71620 1.429750
1200 74124 1.381465
1300 73417 1.394771
1400 73823 1.387093
1500 73313 1.396746
1600 73646 1.390433
<snip> speed never changes, it's always ~73 MB/sec
ubuntu@ubuntu:~/raid$ bonnie++
Writing with putc()...done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.03b ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
ubuntu 2G 54699 92 57682 19 22715 9 57258 93 73721 14 2329 15
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 5195 98 +++++ +++ +++++ +++ 5469 96 +++++ +++ 18976 100
ubuntu,2G,54699,92,57682,19,22715,9,57258,93,73721,14,2329.4,15,16,5195,98,+++++,+++,+++++,+++,5469,96,+++++,+++,18976,100
ubuntu@ubuntu:~/raid$ sudo hdparm -tT /dev/md1
/dev/md1:
Timing cached reads: 962 MB in 2.00 seconds = 481.03 MB/sec
Timing buffered disk reads: 218 MB in 3.01 seconds = 72.32 MB/sec
ubuntu@ubuntu:~/raid$
READ Speed:
ubuntu@ubuntu:~/raid$ sudo time dd if=/dev/md1 of=/dev/null bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes (2.1 GB) copied, 27.703 s, 75.7 MB/s
0.02user 6.14system 0:27.80elapsed 22%CPU (0avgtext+0avgdata 0maxresident)k
4098124inputs+0outputs (2major+478minor)pagefaults 0swaps
Write speed:
ubuntu@ubuntu:~/raid$ sudo time dd if=/dev/zero of=zeros bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes (2.1 GB) copied, 35.778 s, 58.6 MB/s
0.00user 7.70system 0:35.85elapsed 21%CPU (0avgtext+0avgdata 0maxresident)k
254inputs+4100184outputs (1major+479minor)pagefaults 0swaps
ubuntu@ubuntu:~/raid$
Costs for this setup... $20 for PCI Sil3114 4 port controller... $40 per CF card .. $12 per CF->SATA converter |
sf_alpha Tux's lil' helper
Joined: 19 Sep 2002 Posts: 136 Location: Bangkok, TH
Posted: Thu Mar 13, 2008 9:28 pm Post subject: |
Please post IOZone benchmarks if possible. IOZone actually analyzes the real performance of the filesystem under many access patterns, including:
Sequential Read
Random Read
Sequential Write
Random Write
Sequential Read/Write
Random Read/Write
And you will see how CF outperforms a traditional disk for random access, and how CF loses on sequential read/write. _________________ Gentoo Mirrors in Thailand (and AP)
http://gentoo.in.th |