neuron Advocate
Joined: 28 May 2002 Posts: 2371
Posted: Fri Oct 07, 2005 3:53 pm Post subject: Compressed filesystem : need some testers.
Deprecated, see https://forums.gentoo.org/viewtopic-t-405996.html
Hi, I've been making a compressed filesystem overlay since... Monday, and it's now at a stage where I need some testers. So I figured: which group of people has a huge amount of IO reads that they can afford losing... hmm... /usr/portage
It's a FUSE module called lzofs, and it works as an overlay (i.e. it's not a filesystem, just a compressed directory on your filesystem).
To get it working:
Code: |
emerge -u subversion sys-fs/fuse lzo
svn co https://hollowtube.mine.nu/svn/lzofs/trunk/
cd trunk
make
modprobe fuse
sh runtest
|
Runtest simply runs make and then "./lzofs raw x -f".
-f = foreground; there is a ton of debug output right now
raw = the "raw" directory; files that are compressed go here.
x = the compressed directory (the mount point you actually use).
If you do echo "This is a test" > x/file, then x/file will be compressed and stored in raw. If you fusermount -u x (unmount), the cache will be flushed to disk, and raw/file will be an LZO-compressed file. If you remount and cat x/file, it'll be decompressed and served as if it had never been compressed at all.
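The write-compress / read-decompress roundtrip described above can be sketched in a few lines. This is illustrative only: lzofs does this in C through FUSE using LZO, while here Python's zlib stands in for LZO (the stdlib has no LZO bindings), and the `overlay_write`/`overlay_read` names are made up for the example.

```python
import os
import tempfile
import zlib

# Illustrative sketch of the lzofs overlay idea: writes into the mounted
# view ("x") are stored compressed in the raw directory, and reads
# decompress transparently. zlib stands in for LZO here, and
# overlay_write/overlay_read are hypothetical names for this sketch.

def overlay_write(raw_dir, name, data):
    # Compress on write, like lzofs storing the file under raw/.
    with open(os.path.join(raw_dir, name), "wb") as f:
        f.write(zlib.compress(data))

def overlay_read(raw_dir, name):
    # Decompress on read, as if the file had never been compressed.
    with open(os.path.join(raw_dir, name), "rb") as f:
        return zlib.decompress(f.read())

raw = tempfile.mkdtemp()
overlay_write(raw, "file", b"This is a test\n")
print(overlay_read(raw, "file"))  # b'This is a test\n'
```

The on-disk copy under `raw` holds compressed bytes, so reading it directly (like ls'ing or cat'ing the raw directory) does not give you the original content.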
//edited out the performance tests; the overhead is fairly small, but my tests + the debug code are a horrible way to measure it
Note: I don't see any way this can harm your filesystem. I'm using it in my home directory, which is already encrypted with encfs, without worrying about it, but... well, don't say I didn't warn you; this isn't tested much.
I'd love to see some performance tests of emerge sync and emerge searches and such. Also, if something hardlocks on you, please check top and see whether lzofs is using 99% CPU or not (in which case it's an infinite loop, not a thread lock).
Last edited by neuron on Fri Nov 25, 2005 5:18 pm; edited 4 times in total
i92guboj Bodhisattva
Joined: 30 Nov 2004 Posts: 10315 Location: Córdoba (Spain)
Posted: Fri Oct 07, 2005 5:09 pm Post subject: Re: Compressed filesystem : need some testers.
neuron wrote: |
I'd love to see some performance tests of emerge sync and emerge searches and such. Also, if something hardlocks on you, please check top and see whether lzofs is using 99% CPU or not (in which case it's an infinite loop, not a thread lock). |
That was the first thing I thought when I started reading your post.
However:
Code: |
[ ~/opt/lzfs/trunk ]-[127]: LC_ALL="en" make
gcc -g -Wall -O -D_FILE_OFFSET_BITS=64 -DFUSE_USE_VERSION=22 -D_REENTRANT -c lzocompress.c
lzocompress.c:33:18: fuse.h: No such file or directory
In file included from lzocompress.c:35:
lzocompress.h:109: warning: "struct fuse_file_info" declared inside parameter list
lzocompress.h:109: warning: its scope is only this definition or declaration, which is probably not what you want
lzocompress.h:112: warning: "struct fuse_file_info" declared inside parameter list
lzocompress.c:481: warning: "struct fuse_file_info" declared inside parameter list
lzocompress.c:482: error: conflicting types for 'compress'
lzocompress.h:112: error: previous declaration of 'compress' was here
lzocompress.c:482: error: conflicting types for 'compress'
lzocompress.h:112: error: previous declaration of 'compress' was here
lzocompress.c:723: warning: "struct fuse_file_info" declared inside parameter list
lzocompress.c:724: error: conflicting types for 'decompress'
lzocompress.h:109: error: previous declaration of 'decompress' was here
lzocompress.c:724: error: conflicting types for 'decompress'
lzocompress.h:109: error: previous declaration of 'decompress' was here
make: *** [lzocompress.o] Error 1
|
So I emerged sys-fs/fuse, and now it compiles. I will play with it a little and let you know how it's going in a couple of hours, if I have the time.
Thanks for this, and good luck with the project.
[EDIT]: You also need to "modprobe fuse" before the test, for it to work.
neuron Advocate
Joined: 28 May 2002 Posts: 2371
Posted: Fri Oct 07, 2005 5:23 pm Post subject:
Updated the original post with that info; I completely forgot about it.
i92guboj Bodhisattva
Joined: 30 Nov 2004 Posts: 10315 Location: Córdoba (Spain)
Posted: Fri Oct 07, 2005 5:43 pm Post subject:
My first experience:
First, I launched the sample script and left it running in one xterm, so I can post the output at the bottom later.
Second, in another xterm:
Code: |
[ ~/opt/lzfs/trunk ]-[0]: ls x
[ ~/opt/lzfs/trunk ]-[0]: cd x
[ ~/opt/lzfs/trunk/x ]-[0]: cp ~/opt/kde/tarballs/nuoveXT-kde-1.5.tar.gz .
[ ~/opt/lzfs/trunk/x ]-[0]: ls
nuoveXT-kde-1.5.tar.gz
[ ~/opt/lzfs/trunk/x ]-[0]: ls
nuoveXT-kde-1.5.tar.gz
[ ~/opt/lzfs/trunk/x ]-[0]: ls -l
total 16M
-rw-r--r-- 1 i92guboj users 16M oct 7 19:28 nuoveXT-kde-1.5.tar.gz
[ ~/opt/lzfs/trunk/x ]-[0]: ls -l ../raw
total 16M
-rw-r--r-- 1 i92guboj users 16M oct 7 19:28 nuoveXT-kde-1.5.tar.gz
[ ~/opt/lzfs/trunk/x ]-[0]: gunzip nuoveXT-kde-1.5.tar.gz
[ ~/opt/lzfs/trunk/x ]-[0]: ls -l
total 18M
-rw-r--r-- 1 i92guboj users 21M oct 7 19:28 nuoveXT-kde-1.5.tar
[ ~/opt/lzfs/trunk/x ]-[0]: ls -l ../raw
total 18M
-rw-r--r-- 1 i92guboj users 18M oct 7 19:28 nuoveXT-kde-1.5.tar
[ ~/opt/lzfs/trunk/x ]-[0]: du ../x -sh
18M ../x
[ ~/opt/lzfs/trunk/x ]-[0]: du ../raw -sh
18M ../raw
[ ~/opt/lzfs/trunk/x ]-[0]:
|
As you see, the only fuzzy thing is that, while the "total" that ls and du report is the compressed size, the per-file size in ls is still 21 MB. It would be better if both were the same (in my opinion the real, uncompressed, size, because that is the space you will need if you take the file to any other non-compressed medium).
For the rest it works fine so far. Just a question: is there any way to raise the compression level? As you see from the output above, even gzip has a better compression ratio in this case, while still being very light on the CPU.
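The level question boils down to a CPU-versus-ratio trade-off. lzofs currently uses LZO level 1 only, so the sketch below uses zlib levels purely as a stand-in to illustrate the general shape of that trade-off; the data and numbers are just this example's.

```python
import zlib

# lzofs hardcodes LZO level 1; this zlib stand-in only illustrates the
# general trade-off being asked about: higher levels spend more CPU
# time for a smaller result.
data = b"the quick brown fox jumps over the lazy dog\n" * 1000

fast = len(zlib.compress(data, 1))  # fastest, usually largest output
best = len(zlib.compress(data, 9))  # slowest, usually smallest output
print(fast, best)
```

On highly redundant input like a portage tree, even the fastest level already shrinks the data a lot; the higher levels mostly squeeze out the remainder at a CPU cost.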
Nothing more for now. I was going to paste the debug output (I haven't really looked at it) in case you find something interesting in it, but I noticed that my 10k-line xterm buffer (full screen) is completely full; it's so big I'm scared, so I'm not posting it. Anyway, I looked at some parts and it's uniform; nothing special seems to have happened.
I copied the file back and untarred it; all OK. The checksum also matches the original file, so no corruption at all.
I will make a backup of my portage tree and put it on this thingy, then sync (I haven't synced for a week, so there should be a lot of file operations there) and report the results here later. I don't know if I'll manage to do it today, so be patient.
Thanks again.
[EDIT]: I have a suggestion, in case you like it. Is it possible to make the files in the raw directory show up with a .lz extension (or whatever one you prefer)? That way people can take the compressed file from the raw dir if they prefer, or take the original file (which will be decompressed automatically) from the mounted directory (x in the example). It would be kind of like taking the .mp3 or .ogg files out of an audio CD using the kioslave.
i92guboj Bodhisattva
Joined: 30 Nov 2004 Posts: 10315 Location: Córdoba (Spain)
Posted: Fri Oct 07, 2005 6:52 pm Post subject:
OK, some notes. I moved /usr/portage to /home/portage, created the raw dir at /usr/portage.raw, and recreated /usr/portage to hold the mount point for the portage.raw lzo filesystem. Then I ran "lzofs /usr/portage.raw /usr/portage".
Code: |
[ /usr ]-[0]: lzofs /usr/portage.raw/ /usr/portage/
lzofs.c:475(main()) : Compressing files in /usr/portage/ to /usr/portage.raw/
|
All seems ok. Now I try to copy the files to the lzo mount point, which is /usr/portage/
Code: |
[ /home/portage ]-[0]: cp -R /home/portage/* /usr/portage/
cp: cannot create regular file `/usr/portage/app-admin/webalizer/files/2.01.10/reconfig': Too many open files
cp: cannot create regular file `/usr/portage/app-admin/webalizer/files/2.01.10/webalizer.conf': Too many open files
cp: cannot create regular file `/usr/portage/app-admin/webalizer/files/digest-webalizer-2.01.10-r8': Too many open files
cp: cannot create regular file `/usr/portage/app-admin/webalizer/files/webalizer-db4.patch': Too many open files
cp: cannot create regular file `/usr/portage/app-admin/webalizer/files/webalizer-db4-with-geoip.patch': Too many open files
cp: cannot create regular file `/usr/portage/app-admin/webalizer/files/webalizer-readability.patch': Too many open files
cp: cannot create regular file `/usr/portage/app-admin/prelude-manager/Manifest': Too many open files
cp: cannot create regular file `/usr/portage/app-admin/prelude-manager/metadata.xml': Too many open files
|
And it goes on like that; these are only a few files, but it would continue this way for all the files in portage.
Code: |
[ /home/portage ]-[130]: ls /usr/portage
ls: reading directory /usr/portage: Too many open files
[ /home/portage ]-[1]: du -sh /usr/portage
4.0K /usr/portage
[ /home/portage ]-[0]: ls /usr/portage.raw/
app-accessibility app-admin
[ /home/portage ]-[0]: du -sh /usr/portage.raw
1.3M /usr/portage.raw
|
I can't ls or cd or anything into /usr/portage. du works, but reports a bogus number. I suspect the bug doesn't have anything to do with the number of files, since the number of files copied varies between 1k and 2.5k across attempts, as does the total space occupied: in this report it is 1.3 MB, in others it is 2.5 MB or some other number.
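For what it's worth, "Too many open files" is the EMFILE errno, which is about the per-process descriptor limit rather than file sizes. A quick way to see your cap (a guess at the cause, not a confirmed diagnosis of lzofs):

```python
import resource

# "Too many open files" is errno EMFILE: the process hit its
# per-process descriptor cap (RLIMIT_NOFILE, commonly 1024). If the
# overlay keeps one handle open per cached file, copying thousands of
# small files exhausts the cap at a point independent of file sizes,
# which would match the varying failure point seen above.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft:", soft, "hard:", hard)
```

If the failure point roughly tracks that soft limit plus the descriptors the process already holds, a handle leak or an unbounded handle cache is the likely culprit.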
Well, after this I unmounted and tried another trick, just to see what would happen in this case.
Code: |
[ /home/portage ]-[0]: fusermount -u /usr/portage
[ /home/portage ]-[0]: lzofs /home/portage/ /usr/portage/
lzofs.c:475(main()) : Compressing files in /usr/portage/ to /home/portage/
[ /home/portage ]-[0]: ls /usr/portage/
ls: /usr/portage/skel.ChangeLog: Operation not permitted
ls: /usr/portage/header.txt: Operation not permitted
ls: /usr/portage/skel.ebuild: Operation not permitted
ls: /usr/portage/skel.metadata.xml: Operation not permitted
app-accessibility app-laptop dev-games dev-tex games-rpg mail-filter net-fs net-zope sec-policy www-misc
app-admin app-misc dev-haskell dev-util games-server mail-mta net-ftp perl-core sys-apps www-servers
app-antivirus app-mobilephone dev-java distfiles games-simulation media-fonts net-im profiles sys-auth x11-apps
app-arch app-office dev-lang eclass games-sports media-gfx net-irc rox-base sys-block x11-base
app-backup app-pda dev-libs games-action games-strategy media-libs net-libs rox-extra sys-boot x11-drivers
app-benchmarks app-portage dev-lisp games-arcade games-util media-plugins net-mail sci-astronomy sys-cluster x11-libs
app-cdr app-shells dev-ml games-board gnome-base media-radio net-misc sci-biology sys-devel x11-misc
app-crypt app-text dev-perl games-emulation gnome-extra media-sound net-nds sci-calculators sys-fs x11-plugins
app-dicts app-vim dev-php games-engines gnustep-apps media-tv net-news sci-chemistry sys-kernel x11-proto
app-doc app-xemacs dev-php4 games-fps gnustep-base media-video net-nntp sci-electronics sys-libs x11-terms
app-editors dev-ada dev-php5 games-kids gnustep-libs metadata net-p2p sci-geosciences sys-power x11-themes
app-emacs dev-cpp dev-python games-misc kde-base net-analyzer net-print sci-libs sys-process x11-wm
app-emulation dev-db dev-ruby games-mud kde-misc net-dialup net-proxy sci-mathematics www-apache xfce-base
app-forensics dev-dotnet dev-scheme games-puzzle licenses net-dns net-wireless sci-misc www-apps xfce-extra
app-i18n dev-embedded dev-tcltk games-roguelike mail-client net-firewall net-www scripts www-client
[ /home/portage ]-[1]: du /usr/portage -sh
du: cannot access `/usr/portage/games-fps/quake3-truecombat/Manifest': Operation not permitted
du: cannot access `/usr/portage/games-fps/quake3-truecombat/metadata.xml': Operation not permitted
du: cannot access `/usr/portage/games-fps/quake3-truecombat/quake3-truecombat-1.2.ebuild': Operation not permitted
du: cannot access `/usr/portage/games-fps/quake3-truecombat/ChangeLog': Operation not permitted
du: cannot access `/usr/portage/games-fps/quake3-truecombat/files/digest-quake3-truecombat-1.2': Operation not permitted
du: cannot access `/usr/portage/games-fps/rtcw/Manifest': Operation not permitted
du: cannot access `/usr/portage/games-fps/rtcw/metadata.xml': Operation not permitted
du: cannot access `/usr/portage/games-fps/rtcw/ChangeLog': Operation not permitted
|
And so on, for all the files in that directory. It seems the OS cannot access these files (which I understand, because they were not written there through the correct interface, that is /usr/portage, the mount point). You can also see that ls recognises directories fine, but not the files.
Any idea?
neuron Advocate
Joined: 28 May 2002 Posts: 2371
Posted: Fri Oct 07, 2005 7:35 pm Post subject:
First of all, ls -l/du on a compressed directory will show its uncompressed size. And ls'ing raw isn't accurate because of the cache; stuff might not be flushed to disk yet.
To get accurate info, unpack something into x and fusermount -u x; that'll flush it to disk, and then you can ls -l raw.
And I'm using LZO compression, level 1 for now (hardcoded, sorry), so I'd be surprised if it compressed a .gz file at all.
Quote: |
[EDIT]: I have a suggeston in case you like it. Is it possible to make the files in the raw directory to show with a .lz extension (or whatever one you preffer)? So, people can take the compresed file from the raw dir if they preffer or take the original file (that will be uncompressed automatically) from the mounted directory (x in the example). It would be kindda like taking the .mp3 or .ogg files out of a cdaudio using the kioslave.
|
Consider it on the todo list, but at fairly low priority; same with different ciphers. I plan on doing it, but there's a long list of stuff to do first.
As for the many-files problem, I'll toy with it; you could try running it in the foreground (add -f last) and see what errors you're getting there. And if you can't ls/du stuff, chances are the decompression is broken, which often means short writes and such.
i92guboj Bodhisattva
Joined: 30 Nov 2004 Posts: 10315 Location: Córdoba (Spain)
Posted: Fri Oct 07, 2005 7:49 pm Post subject:
neuron wrote: | First of all, ls -l/du on a compressed directory will show its uncompressed size. And ls'ing raw isn't accurate because of the cache; stuff might not be flushed to disk yet.
To get accurate info, unpack something into x and fusermount -u x; that'll flush it to disk, and then you can ls -l raw.
|
OK, I understand. Anyway, the compression ratio is not a priority for now.
neuron wrote: | And I'm using LZO compression, level 1 for now (hardcoded, sorry), so I'd be surprised if it compressed a .gz file at all. |
I know; I just copied a gz file and decompressed it to compare compression ratios between this implementation and the gz format. Seems my method was wrong, though.
neuron wrote: | As for the many-files problem, I'll toy with it; you could try running it in the foreground (add -f last) and see what errors you're getting there. And if you can't ls/du stuff, chances are the decompression is broken, which often means short writes and such. |
I will try tomorrow; it's too late here now. Anyway, I can anticipate that that explanation seems logical. I have experimented with some large files, copying them one by one, and all was OK. There is no pattern in how far the cp process gets when copying portage to the lzofs mount point; the errors begin at an arbitrary place after copying some thousands of files, so it seems to be something related to the cache/buffers or whatever (I'm no filesystem expert, as you can tell).
neuron Advocate
Joined: 28 May 2002 Posts: 2371
Posted: Fri Oct 07, 2005 8:05 pm Post subject:
trim_database was "funky" and hardlocked because of some thread magic; it should work now.
Oh, and if you're a bit too spammed by THREAD messages, you can open lzofs.h and comment out #define THREADDEBUG.
//edit: oh, and keep in mind rsync is hitting a huge performance bug; every time it does a rename, the cache isn't renamed, it's flushed (which means the entire file has to be read again to get the new file size). That will be fixed.
//edit 2: the debug code should be cleaned up quite a bit now; there are defines for more/less debug output in lzofs.h
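The rename bug mentioned above comes from rsync's write-to-temp-then-rename pattern. A rename-aware write cache only needs to rekey its entry rather than flush it; here is a tiny sketch of that idea (the `WriteCache` class and its method names are invented for illustration, not lzofs's actual code):

```python
# Sketch of the rename scenario: rsync writes to a temp name and then
# renames into place. If rename() flushes the write cache, the whole
# file must be re-read to learn its new size; a rename-aware cache
# just moves the entry under the new key. Names are hypothetical.

class WriteCache:
    def __init__(self):
        self.entries = {}  # path -> buffered (uncompressed) data

    def write(self, path, data):
        self.entries[path] = data

    def rename(self, old, new):
        # Cheap path: rekey the cached entry instead of flushing it.
        if old in self.entries:
            self.entries[new] = self.entries.pop(old)

cache = WriteCache()
cache.write(".file.tmp", b"ebuild contents")
cache.rename(".file.tmp", "file")
print(sorted(cache.entries))  # ['file']
```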
neuron Advocate
Joined: 28 May 2002 Posts: 2371
Posted: Fri Oct 07, 2005 9:42 pm Post subject:
Heh, the original benchmarks I posted are way off; it was the debug output + gnome-terminal slowing things down.
Doing an rsync checksum run on the files now, I get:
Code: |
time rsync -cvlaogrp --stats --progress /bin/ compressed/
real 0m0.320s
user 0m0.037s
sys 0m0.016s
|
//edit: fixed a ton of bugs today, most of them rare stuff, some... just... plain... stupid (like using one variable for every path resolve, which wasn't threadsafe... oopsie).
i92guboj Bodhisattva
Joined: 30 Nov 2004 Posts: 10315 Location: Córdoba (Spain)
Posted: Sun Oct 09, 2005 5:45 pm Post subject:
I'm giving it another shot today (Sunday evening and no will to do anything but stay at home). I just downloaded and compiled rev 91 from the svn repository, and I'm copying the portage tree right now.
The first attempt was unsuccessful. Right now (second attempt) it seems to be working (still copying, I'll tell you later). In the first attempt I got some bad-file-descriptor errors after 62 megabytes copied; I had accessed the files from another terminal with du, maybe that was why it gave me those errors... not sure.
Because of that, I will stay away from the keyboard until the copy process finishes.
neuron Advocate
Joined: 28 May 2002 Posts: 2371
Posted: Sun Oct 09, 2005 5:58 pm Post subject:
I found some more issues that I haven't quite figured out how to solve yet; still working on rsync'ing the whole portage tree over without it dying on me.
neuron Advocate
Joined: 28 May 2002 Posts: 2371
Posted: Mon Oct 10, 2005 2:09 am Post subject:
I think/hope I've solved the problem for now. rsync opened files in RDWR mode and I didn't have that implemented, so I had to restructure the code a bit. On the bright side, that restructuring was on my todo list anyway and has improved performance a lot.
Note: if you're performance testing, undefine DEBUG or you'll be slowed down a LOT by the debug output. Also, if you want the debug output, use xterm (or another fast terminal) and keep it minimized, for both rsync and lzofs, but at least for lzofs.
//edit: oh, and I've looked into how much work would be needed to support other ciphers... and... not much at all.
neuron Advocate
Joined: 28 May 2002 Posts: 2371
Posted: Mon Oct 10, 2005 2:46 am Post subject:
Woohoo, just did a full rsync of my old portage dir + a huge amount of distfiles with no errors whatsoever.
Then I did an rsync -c (checksum) of it again, and it transferred 810 files, which all happened to be 0 bytes. No idea why it chose to send those again, but fixing it should be trivial, and it's not exactly a critical problem.
The overhead is actually far smaller than I thought it would be. I'm thinking this might be worth putting on a livecd/usb root, at least to reduce reads from the disk; that's in the future though.
lazx888 Tux's lil' helper
Joined: 13 Sep 2005 Posts: 118
Posted: Mon Oct 10, 2005 3:12 am Post subject:
Found you, neuron
neuron Advocate
Joined: 28 May 2002 Posts: 2371
Posted: Mon Oct 10, 2005 4:11 am Post subject:
huh?
iphitus Apprentice
Joined: 03 Aug 2005 Posts: 226
Posted: Mon Oct 10, 2005 7:41 am Post subject:
neuron wrote: | whoho, just did a full rsync of my old portage dir + a huge amount of distfiles with no errors whatsoever.
and then did an rsync -c (checksum) of it again, and it transfered 810 files, which happened to all be 0byte, no idea why it chose to send those again, but fixing it should be trivial, and it's not exactly a critical problem
The overhead is actually far smaller than I thought it'd be, I'm thinking this might be worth putting on a livecd/usb root atleast for smaller amount of reads from the disk , that's in the future though |
Most liveCDs already use squashfs, which supports a range of different compression methods, and it isn't userspace-based, which makes it substantially easier to work with.
neuron Advocate
Joined: 28 May 2002 Posts: 2371
Posted: Mon Oct 10, 2005 1:21 pm Post subject:
iphitus wrote: | neuron wrote: | whoho, just did a full rsync of my old portage dir + a huge amount of distfiles with no errors whatsoever.
and then did an rsync -c (checksum) of it again, and it transfered 810 files, which happened to all be 0byte, no idea why it chose to send those again, but fixing it should be trivial, and it's not exactly a critical problem
The overhead is actually far smaller than I thought it'd be, I'm thinking this might be worth putting on a livecd/usb root atleast for smaller amount of reads from the disk , that's in the future though |
Most liveCDs already use squashfs which can use a range of different compression methods, and it's not userspace based which makes it substantially easier to work with. |
True, and it's also an algorithm optimized for exactly that kind of use, so it should pack considerably better. But it's slightly more tricky to set up with USB keys + an overlay fs for read/write access (I know, I've done it); that's why I started writing this in the first place.
neuron Advocate
Joined: 28 May 2002 Posts: 2371
Posted: Mon Oct 10, 2005 5:37 pm Post subject:
Small update: check out revision 99. In rev 100 I've started work on making the ciphers dynamic, and it's in very active development.
i92guboj Bodhisattva
Joined: 30 Nov 2004 Posts: 10315 Location: Córdoba (Spain)
Posted: Mon Oct 10, 2005 5:57 pm Post subject:
Hi again. I'm using rev 99 and have at last managed to copy back a functional portage tree without errors. Everything seems to operate well so far with this revision. I also synced, and I'm playing a bit with portage.
I haven't noticed any performance issue yet, but I will do some tests over the next few days (I still have the old portage copy). I have a few things to test; for example, I don't care about a speed difference as long as it is not very noticeable, but I do care about CPU activity. Although LZO compression should not be a big deal for a modern machine, I will run some tests to see whether the compressed filesystem hurts system interactivity too much while retrieving the portage metadata (the heaviest activity I can think of on this partition, which I use exclusively for portage).
The copy of distfiles succeeded flawlessly, and I did not notice a big impact on my system's responsiveness, which is a good sign. Another parameter that worries me is the impact on RAM. I will report my results in a few days.
neuron Advocate
Joined: 28 May 2002 Posts: 2371
Posted: Mon Oct 10, 2005 6:05 pm Post subject:
Memory usage should be tiny, something like 4 KB per file until it's closed, which happens almost instantly anyway.
i92guboj Bodhisattva
Joined: 30 Nov 2004 Posts: 10315 Location: Córdoba (Spain)
Posted: Mon Oct 10, 2005 6:49 pm Post subject:
Hi again
I have news. I was checking another thing (totally unrelated to lzofs), and I ran into this:
Code: |
[ /usr ]-[0]: hdparm -Tt /dev/hda
/dev/hda:
Timing cached reads: 716 MB in 2.01 seconds = 356.27 MB/sec
Timing buffered disk reads: 52 MB in 3.13 seconds = 16.59 MB/sec
|
Well, hdparm sometimes does weird things, so I repeated:
Code: |
[ /usr ]-[0]: hdparm -Tt /dev/hda
/dev/hda:
Timing cached reads: 812 MB in 2.00 seconds = 405.86 MB/sec
Timing buffered disk reads: 62 MB in 3.04 seconds = 20.40 MB/sec
[ /usr ]-[0]: hdparm -Tt /dev/hda
/dev/hda:
Timing cached reads: 816 MB in 2.01 seconds = 406.84 MB/sec
Timing buffered disk reads: 74 MB in 3.07 seconds = 24.13 MB/sec
[ /usr ]-[0]: hdparm -Tt /dev/hda
/dev/hda:
Timing cached reads: 716 MB in 2.01 seconds = 356.27 MB/sec
Timing buffered disk reads: 61 MB in 3.13 seconds = 19.59 MB/sec
|
I was wondering what was happening here, since I know that this drive is much faster than that:
Code: |
[ /usr ]-[0]: cd /usr
[ /usr ]-[0]: fusermount -u portage
[ /usr ]-[0]: hdparm -Tt /dev/hda
/dev/hda:
Timing cached reads: 816 MB in 2.00 seconds = 407.25 MB/sec
Timing buffered disk reads: 152 MB in 3.00 seconds = 50.59 MB/sec
[ /usr ]-[0]: hdparm -Tt /dev/hda
/dev/hda:
Timing cached reads: 816 MB in 2.01 seconds = 406.23 MB/sec
Timing buffered disk reads: 146 MB in 3.01 seconds = 48.46 MB/sec
|
As you see, simply by unmounting the lzo filesystem (the one that holds portage), the speed is back. Have you noticed anything similar? Is it reproducible on your box? I'm not sure this test is reliable; I can't tell if the speed has really gone down, but if it has, it's not in a noticeable manner, so I don't know what to think about the hdparm reports...
neuron Advocate
Joined: 28 May 2002 Posts: 2371
Posted: Mon Oct 10, 2005 7:03 pm Post subject:
... that is extremely weird.
Could you try to reproduce this with another fuse filesystem?
Also, this could be because I'm keeping a lot of file handles open, I suppose; looking into how many I should have open is on my todo list.
Does this happen when merely mounting, or when mounting + using it?
(Using it will fill up the cache with open filehandles.)
i92guboj Bodhisattva
Joined: 30 Nov 2004 Posts: 10315 Location: Córdoba (Spain)
Posted: Mon Oct 10, 2005 7:53 pm Post subject:
neuron wrote: | ... that is extremely weird.
Could you try to reproduce this with another fuse filesystem?
Also, this could be because I'm keeping a lot of file handles open, I suppose; looking into how many I should have open is on my todo list.
Does this happen when merely mounting, or when mounting + using it?
(Using it will fill up the cache with open filehandles.) |
I will take a closer look later to see if I can find any variables influencing this behaviour. As far as I can tell, the filesystem was completely quiet when I noticed it (I was answering another question in a forum and wanted to paste my hdparm output as an example). So the only activity on the disk (not only on that filesystem, but on all my filesystems) was the hdparm -Tt itself; nothing else was accessing the disk, and I'm sure the portage filesystem in particular was totally unaccessed during that time. I will try a few things to see if I get different results under different circumstances.
If you need any specific info, just let me know.
Note that when I unmounted the thing, hdparm again reported the normal speed for this drive. The fuse kernel module was still loaded, so I don't think it is responsible. Anyway, do you know any other fuse-based overlay I can test? Prior to lzofs I did not even know about fuse at all, so I'm kind of new to it...
I also tried mounting a new empty dir with lzofs; it makes no difference, hdparm still reports around 20 MB/sec, which is pretty low.
Rainmaker Veteran
Joined: 12 Feb 2004 Posts: 1650 Location: /home/NL/ehv/
Posted: Mon Oct 10, 2005 9:26 pm Post subject:
Looks very interesting. I would love to try this, but I can't get it to compile; I get the same error message as in the 2nd post of this thread.
I'm building against 2.6.14-rc3-nitro1, which is patched with fuse by default.
I tried emerging sys-fs/fuse and then compiling (with the module built from portage), but I'm still getting the same error. _________________ If you can't dazzle them with brilliance, baffle them with bullshit.
i92guboj Bodhisattva
Joined: 30 Nov 2004 Posts: 10315 Location: Córdoba (Spain)
Posted: Mon Oct 10, 2005 9:53 pm Post subject:
That error was due to gcc being unable to find fuse.h which, if you installed the fuse package, should reside at /usr/include/fuse.h or /usr/include/fuse/fuse.h (not sure which). If it is there, everything should work. Are you sure you emerged sys-fs/fuse? There is another package called app-emulation/fuse; maybe you emerged that instead...