Gentoo Forums
zfs-fuse-0.6.9 is out! And dedup is awesome!

devsk
Advocate


Joined: 24 Oct 2003
Posts: 2995
Location: Bay Area, CA

PostPosted: Fri Jun 04, 2010 10:17 pm    Post subject: zfs-fuse-0.6.9 is out! And dedup is awesome! Reply with quote

I had to do that...:-D

Anyway, go to http://zfs-fuse.net/releases/0.6.9 and have a look at the enhanced zfs-fuse. There are too many new features to list here.

zfs-fuse is pretty darn stable here! I have been using zfs-fuse for quite a while now, and I recently copied over 1TB of my backup using dedup. It's just amazing: backups are smaller by 28% thanks to the combined effect of compression and dedup, i.e. my 1125GB backup fits in 810GB. That's a lot of savings!

dedup performance is directly proportional to the amount of RAM you have and inversely proportional to the seek time of your disks. So, get a lot of RAM and use a small SSD as a cache device.

The performance of normal non-dedup ZFS is very good! It's able to hit 80-90% of platter speed (by which I mean the 'dd' speed of reading/writing directly to the media) during sequential read/write operations in my tests, which is plenty good for me.
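For anyone who wants to check the same numbers on their own setup, the savings are exposed as ordinary pool/dataset properties (the pool and dataset names below are just examples, not mine):

Code:
# dedup ratio is a pool-wide property (same DEDUP column as 'zpool list')
zpool list -o name,size,alloc,free,dedup backup
# compression ratio is per dataset
zfs get compressratio backup/data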

Let me know if you need the ebuild tarball.
Back to top
View user's profile Send private message
cach0rr0
Bodhisattva


Joined: 13 Nov 2008
Posts: 4123
Location: Houston, Republic of Texas

PostPosted: Fri Jun 04, 2010 10:22 pm    Post subject: Reply with quote

someone posted this on OTW - marginally relevant, not sure if you've seen it already

http://wiki.github.com/behlendorf/zfs/
_________________
Lost configuring your system?
dump lspci -n here | see Pappy's guide | Link Stash
devsk
Advocate


Joined: 24 Oct 2003
Posts: 2995
Location: Bay Area, CA

PostPosted: Fri Jun 04, 2010 10:33 pm    Post subject: Reply with quote

cach0rr0 wrote:
someone posted this on OTW - marginally relevant, not sure if you've seen it already

http://wiki.github.com/behlendorf/zfs/
Yup, saw that in the Google group posting. That's interesting work! But it's very far from usable, because the basic FS layer (ZPL - ZFS POSIX Layer) is missing. That layer ties in with the VFS and is probably the hardest piece of the puzzle. So, don't expect a native port just yet!
kernelOfTruth
Watchman


Joined: 20 Dec 2005
Posts: 6111
Location: Vienna, Austria; Germany; hello world :)

PostPosted: Fri Jun 04, 2010 10:42 pm    Post subject: Reply with quote

sorry to ask

but do you have a cheat sheet for setting up ZFS quickly?

I can't wait to test it out and see whether I'm suffering from silent data corruption :D

many thanks in advance
_________________
https://github.com/kernelOfTruth/ZFS-for-SystemRescueCD/tree/ZFS-for-SysRescCD-4.9.0
https://github.com/kernelOfTruth/pulseaudio-equalizer-ladspa

Hardcore Gentoo Linux user since 2004 :D
devsk
Advocate


Joined: 24 Oct 2003
Posts: 2995
Location: Bay Area, CA

PostPosted: Fri Jun 04, 2010 10:58 pm    Post subject: Reply with quote

kernelOfTruth wrote:
sorry to ask

but do you have a cheat sheet for setting up ZFS quickly?

I can't wait to test it out and see whether I'm suffering from silent data corruption :D

many thanks in advance
The good thing about ZFS is that there is not much to configure. There are two commands: zpool and zfs. zpool is used to create a pool of devices; zfs is used to create an FS inside that pool. Typically, you will create an FS only if you need to set different properties. For example, recordsize is a property which you want to set to 4k for the portage FS and leave at 128k normally. Records are variable-sized anyway, and file packing is super efficient (I really mean that! It blows away reiser4 at packing data). Note that all FSes are free to use all of the pool's leftover space and are not limited. So, no more "oh, I am full on /home, I need to resize the partition and resize the FS" (yes, LVM takes some of that pain away).

So, you may want to create a separate FS for portage and for /home, and set different properties (like compression, recordsize, atime, checksum, copies, quota etc.). I would not advise using zfs-fuse for the rootfs (bootstrapping issues), because I haven't done it myself. Maybe when it's native! I do boot opensolaris with ZFS on root. I even have portage on that opensolaris install, and I am amazed at how fast emerge is there.
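To put the above in command form, here is a minimal sketch (the device, pool, and dataset names are made up; adjust them to your layout):

Code:
# create a pool from a single device
zpool create tank /dev/sdb
# separate filesystems only where properties should differ
zfs create tank/portage
zfs create tank/home
# small records suit the many tiny files in the portage tree
zfs set recordsize=4k tank/portage
zfs set compression=on tank/portage
zfs set atime=off tank/portage
# /home keeps the default 128k recordsize
zfs set compression=on tank/home
zfs set quota=200G tank/home

Every filesystem still draws from the same pool's free space, so no resizing is ever needed.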

Getting Started, cheat-sheet style: http://hub.opensolaris.org/bin/view/Community+Group+zfs/intro
Main Page tonnes of info: http://hub.opensolaris.org/bin/view/Community+Group+zfs/

ZFS eats RAM. So, have lots of it!
devsk
Advocate


Joined: 24 Oct 2003
Posts: 2995
Location: Bay Area, CA

PostPosted: Fri Jun 04, 2010 10:59 pm    Post subject: Reply with quote

Ahh... I assumed you had zfs-fuse installed and configured. I will attach the zfs-fuse ebuild that I use once I get home. The ebuild will do whatever is required, and then you just run /etc/init.d/zfs-fuse start.
kernelOfTruth
Watchman


Joined: 20 Dec 2005
Posts: 6111
Location: Vienna, Austria; Germany; hello world :)

PostPosted: Fri Jun 04, 2010 11:19 pm    Post subject: Reply with quote

devsk wrote:
kernelOfTruth wrote:
sorry to ask

but do you have a cheat sheet for setting up ZFS quickly?

I can't wait to test it out and see whether I'm suffering from silent data corruption :D

many thanks in advance
The good thing about ZFS is that there is not much to configure. There are two commands: zpool and zfs. zpool is used to create a pool of devices; zfs is used to create an FS inside that pool. Typically, you will create an FS only if you need to set different properties. For example, recordsize is a property which you want to set to 4k for the portage FS and leave at 128k normally. Records are variable-sized anyway, and file packing is super efficient (I really mean that! It blows away reiser4 at packing data). Note that all FSes are free to use all of the pool's leftover space and are not limited. So, no more "oh, I am full on /home, I need to resize the partition and resize the FS" (yes, LVM takes some of that pain away).

So, you may want to create a separate FS for portage and for /home, and set different properties (like compression, recordsize, atime, checksum, copies, quota etc.). I would not advise using zfs-fuse for the rootfs (bootstrapping issues), because I haven't done it myself. Maybe when it's native! I do boot opensolaris with ZFS on root. I even have portage on that opensolaris install, and I am amazed at how fast emerge is there.

Getting Started, cheat-sheet style: http://hub.opensolaris.org/bin/view/Community+Group+zfs/intro
Main Page tonnes of info: http://hub.opensolaris.org/bin/view/Community+Group+zfs/

ZFS eats RAM. So, have lots of it!


thanks devsk !


devsk wrote:
Ahh... I assumed you had zfs-fuse installed and configured. I will attach the zfs-fuse ebuild that I use once I get home. The ebuild will do whatever is required, and then you just run /etc/init.d/zfs-fuse start.


nope, I haven't set it up / installed it yet

I'll probably follow the review / howto at: http://www.pro-linux.de/artikel/2/1181/zfs-unter-linux.html


my main concerns right now are whether it'll work with cryptsetup & how low its throughput really is (I read it was slow but achieves 80-90% throughput nowadays; I'm not sure how practical all of those statements are)


edit:

interesting:
Quote:
The bad write performance manifests itself with a redundant vdev other than mirror. Read performance matches OpenSolaris ZFS (sometimes surpassing it by a tiny margin).

http://zfs-fuse.net/issues/37
devsk
Advocate


Joined: 24 Oct 2003
Posts: 2995
Location: Bay Area, CA

PostPosted: Sat Jun 05, 2010 5:19 am    Post subject: Reply with quote

The filer of issue 37 never came back. 0.6.9 is a different beast altogether!
regomodo
Guru


Joined: 25 Mar 2008
Posts: 445

PostPosted: Sat Jun 05, 2010 8:25 am    Post subject: Reply with quote

Thought people might want to see.

zfs natively ported to Linux (No FUSE!)
kernelOfTruth
Watchman


Joined: 20 Dec 2005
Posts: 6111
Location: Vienna, Austria; Germany; hello world :)

PostPosted: Sat Jun 05, 2010 2:13 pm    Post subject: Reply with quote

care to share your ebuild ?

for me, after bumping to 0.6.9, the fix_zdb_path.patch fails

thanks !

ninja edit:

found one:
https://bugs.gentoo.org/291540
kernelOfTruth
Watchman


Joined: 20 Dec 2005
Posts: 6111
Location: Vienna, Austria; Germany; hello world :)

PostPosted: Sat Jun 05, 2010 2:44 pm    Post subject: Reply with quote

well, well, ZFS seems to reserve quite a lot of space for checksumming and such :o
Quote:
df -h
/dev/mapper/home 730G 711G 19G 98% /home
es2_data 717G 21K 717G 1% /es2_data


so it's reserving 13 GB more than my current reiserfs filesystem on a 783 GB partition :idea:

ninja edit:

strange:
Quote:
zpool list
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
es2_data 728G 144K 728G 0% 1.00x ONLINE -


^^


I've disabled access time with:

Code:
zfs set atime=off es2_data


and enabled compression with:

Code:
zfs set compression=gzip es2_data


let's see how well it does ! :D
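A quick way to confirm the settings actually took effect is to read the properties back (dataset name taken from above):

Code:
zfs get atime,compression,checksum,dedup es2_data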
kernelOfTruth
Watchman


Joined: 20 Dec 2005
Posts: 6111
Location: Vienna, Austria; Germany; hello world :)

PostPosted: Sat Jun 05, 2010 8:43 pm    Post subject: Reply with quote

are there any tweaking recommendations you can make, devsk?

http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide

e.g. switching from fletcher2 to one of the others ?

fletcher4 | sha256


is the following issue still valid:
"Thread: fletcher2/4 implementations fundamentally flawed"
General Solaris 10 Discussion - ZFS Checksum Parameter: Info to help me pick my poison


edit:

ok, well, after some testing I've come to the conclusion that it unfortunately is (still) way too slow for me when prioritizing data safety:
Quote:
zpool iostat
capacity operations bandwidth
pool alloc free read write read write
---------- ----- ----- ----- ----- ----- -----
es2_data 792M 727G 366 30 1.25M 685K

Quote:
zpool history
History for 'es2_data':
2010-06-05.22:52:49 zpool create es2_data /dev/mapper/zfs
2010-06-05.22:52:54 zfs set atime=off es2_data
2010-06-05.22:52:56 zfs set compression=gzip es2_data
2010-06-05.22:53:01 zfs set dedup=on es2_data
2010-06-05.22:54:13 zfs set dedup=off es2_data
2010-06-05.22:57:36 zfs set checksum=sha256 es2_data
2010-06-05.22:58:25 zfs set dedup=sha256,verify es2_data
2010-06-05.23:01:20 zfs set recordsize=512 es2_data
2010-06-05.23:01:53 zfs set mountpoint=/bak2 es2_data
2010-06-05.23:03:57 zfs set dedup=sha256 es2_data


both hard drives (source and destination) are able to do approx. 105 MB/s, so that is not acceptable - besides, I also don't have THAT much time :?
devsk
Advocate


Joined: 24 Oct 2003
Posts: 2995
Location: Bay Area, CA

PostPosted: Sat Jun 05, 2010 9:42 pm    Post subject: Reply with quote

that bug is fixed.

I think fletcher4 is the default and fast. The best tweaks are to have a large ARC cache (specify it in /etc/zfs/zfsrc) and to configure a small SSD or a fast USB thumbdrive (at least 200x) as a cache device (zpool add mydata cache /dev/usb4gb). The default compression is fast and efficient at reducing size, so use that instead of gzip if you are CPU limited. Also, note that ZFS is extremely parallel, i.e. if you use gzip and have multiple cores, you may see several cores busy at 100% CPU instead of just one core like in BTRFS or REISER4.

Basically, ZFS is not ashamed of using the resources you have to give you a fast, efficient FS experience.
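The cache-device step above, sketched out with the same made-up names (pool "mydata", device /dev/usb4gb):

Code:
# attach a small SSD / fast thumbdrive as a cache (L2ARC) device
zpool add mydata cache /dev/usb4gb
# the device should now be listed under a 'cache' section
zpool status mydata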
devsk
Advocate


Joined: 24 Oct 2003
Posts: 2995
Location: Bay Area, CA

PostPosted: Sat Jun 05, 2010 10:14 pm    Post subject: Reply with quote

dedup is not for general-purpose FS usage. It will pull your throughput down immensely. It's ideal for backups.

I use ZFS on opensolaris as well, and I know what this FS is really capable of.
kernelOfTruth
Watchman


Joined: 20 Dec 2005
Posts: 6111
Location: Vienna, Austria; Germany; hello world :)

PostPosted: Sat Jun 05, 2010 10:29 pm    Post subject: Reply with quote

devsk wrote:
dedup is not for general-purpose FS usage. It will pull your throughput down immensely. It's ideal for backups.

I use ZFS on opensolaris as well, and I know what this FS is really capable of.


that job is kind of a backup ;)

would you mind sharing your zfsrc with us ?

there isn't even a zfs folder in /etc for me 8O
devsk
Advocate


Joined: 24 Oct 2003
Posts: 2995
Location: Bay Area, CA

PostPosted: Sat Jun 05, 2010 10:44 pm    Post subject: Reply with quote

Yeah, I am uploading my ebuild to the bug to make sure some things are taken care of. Here is my /etc/zfs/zfsrc:

Code:

vdev-cache-size = 10
max-arc-size = 2024
# zfs-prefetch-disable
# disable-block-cache
# disable-page-cache
fuse-attr-timeout = 3600
fuse-entry-timeout = 3600
fuse-mount-options = default_permissions
# stack-size = 32
I have taken the comments out. The caches are in MB. I have a 12GB system, so tune your max-arc-size depending on what you have.
drescherjm
Advocate


Joined: 05 Jun 2004
Posts: 2790
Location: Pittsburgh, PA, USA

PostPosted: Sun Jun 06, 2010 3:05 am    Post subject: Reply with quote

Quote:
The performance of normal non-dedup ZFS is very good! Its able to hit 80-90% of platter speeds (what I really mean is 'dd' speed of read/write direct to media) during sequential read/write operations in my tests, which is plenty good for me.


Thanks for the info. Last time I tested this (2008), the performance was horribly slow (well less than half the speed of xfs at reads/writes). It's good to know that this is fixed.
_________________
John

My gentoo overlay
Instructions for overlay
kernelOfTruth
Watchman


Joined: 20 Dec 2005
Posts: 6111
Location: Vienna, Austria; Germany; hello world :)

PostPosted: Sun Jun 06, 2010 9:59 am    Post subject: Reply with quote

DHTechnologies (DHT) provides an interesting read comparing ext4, btrfs and zfs:

Linux, Btrfs, ZFS, ext4, performance

the report is approx. one year old (March 2009), but that shouldn't matter much; all three - ext4, btrfs and zfs - have progressed ...
wrc1944
Advocate


Joined: 15 Aug 2002
Posts: 3435
Location: Gainesville, Florida

PostPosted: Mon Jun 07, 2010 5:45 pm    Post subject: Reply with quote

The DHTechnologies link to the 12 page in-depth pdf article is definitely worth a read if one is interested in this stuff. The comparison charts are great.
Many thanks, kernelOfTruth. :)
_________________
Main box- AsRock x370 Gaming K4
Ryzen 7 3700x, 3.6GHz, 16GB GSkill Flare DDR4 3200mhz
Samsung SATA 1000GB, Radeon HD R7 350 2GB DDR5
OpenRC Gentoo ~amd64 plasma, glibc-2.36-r7, gcc-13.2.1_p20230304
kernel-6.8.4 USE=experimental python3_11
devsk
Advocate


Joined: 24 Oct 2003
Posts: 2995
Location: Bay Area, CA

PostPosted: Mon Jun 07, 2010 5:56 pm    Post subject: Reply with quote

wrc1944 wrote:
The DHTechnologies link to the 12 page in-depth pdf article is definitely worth a read if one is interested in this stuff. The comparison charts are great.
Many thanks, kernelOfTruth. :)
Yes, I agree. It is indeed! Very in-depth!

The major takeaway was that random access patterns are handled much better by ZFS.
kernelOfTruth
Watchman


Joined: 20 Dec 2005
Posts: 6111
Location: Vienna, Austria; Germany; hello world :)

PostPosted: Mon Jun 07, 2010 6:46 pm    Post subject: Reply with quote

wrc1944 wrote:
The DHTechnologies link to the 12 page in-depth pdf article is definitely worth a read if one is interested in this stuff. The comparison charts are great.
Many thanks, kernelOfTruth. :)


you're welcome :)


@devsk:

you've hit the nail on the head

now zfs-fuse is consuming about 2.5 GiB but that shouldn't matter :P

it now achieves at least 95 MiB/s while writing to disk (and using cryptsetup) - impressive & fast enough for my taste

strange that you have to give it permission to perform that fast - from that POV it's pretty similar to XFS' default settings

ninja edit:

also creating filesystems under an existing filesystem is pretty handy:

e.g.
Quote:
zfs create tank/files/linus
zfs create tank/files/martin
zfs create tank/files/frank
zfs create tank/files/jan

kernelOfTruth
Watchman


Joined: 20 Dec 2005
Posts: 6111
Location: Vienna, Austria; Germany; hello world :)

PostPosted: Tue Jun 22, 2010 3:19 pm    Post subject: Reply with quote

here's a nice comparison of the compression types available:
http://blogs.sun.com/dap/entry/zfs_compression

also, I've nevertheless switched to the sha256 checksum algorithm, since the overhead is negligible

I love how zfs(-fuse) runs several threads in parallel, fully utilizing CPU power

it would be best if the zfs-fuse implementation didn't dedicate its RAM usage solely to itself and instead shared that amount of RAM with the slab or other caches

perhaps the devs can learn from virtualbox? from looking at mem-usage, it's all shared memory in virtualbox and only a few MiB are used for the Virtualbox program itself ...


at times like these I wish I had 12 or 16 gigs of RAM instead of only 6 GB :roll: (so I'm limited to running virtualbox, zfs-fuse, etc. one after the other and not in parallel, since I can't stand heavy swapping [this µ-ATX board doesn't support more])
kernelOfTruth
Watchman


Joined: 20 Dec 2005
Posts: 6111
Location: Vienna, Austria; Germany; hello world :)

PostPosted: Tue Jun 22, 2010 10:57 pm    Post subject: Reply with quote

WARNING

ok, don't use zfs-fuse with 2.6.35 (yet):
Quote:
[10066.255740] BUG: unable to handle kernel NULL pointer dereference at (null)
[10066.255747] IP: [<ffffffff8173e7b6>] rwsem_down_failed_common+0x66/0x210
[10066.255756] PGD 0
[10066.255760] Oops: 0002 [#1] PREEMPT SMP
[10066.255764] last sysfs file: /sys/devices/platform/it87.2576/temp3_type
[10066.255768] CPU 0
[10066.255770] Modules linked in: fglrx(P) it87 hwmon_vid hwmon xt_owner xt_iprange i2c_i801 e1000e wmi libphy e1000 scsi_wait_scan sl811_hcd ohci_hcd ssb usb_storage ehci_hcd [last unloaded: tg3]
[10066.255790]
[10066.255794] Pid: 7897, comm: zfs-fuse Tainted: P 2.6.35-rc3_test+ #1 FMP55/ipower G3710
[10066.255798] RIP: 0010:[<ffffffff8173e7b6>] [<ffffffff8173e7b6>] rwsem_down_failed_common+0x66/0x210
[10066.255805] RSP: 0018:ffff88014b8cfe48 EFLAGS: 00010006
[10066.255808] RAX: 0000000000000000 RBX: ffff8801badd4c68 RCX: ffff8801badd4c78
[10066.255811] RDX: fffffffeffffffff RSI: ffff88014b8cfea8 RDI: ffff8801badd4c70
[10066.255815] RBP: ffff88014b8cfea8 R08: ffff88014b8ce000 R09: 0000000000000000
[10066.255818] R10: 00000000ffffffff R11: 00000000ffffffff R12: ffff88014b874740
[10066.255821] R13: fffffffeffffffff R14: ffff8801badd4c70 R15: 0000000000000030
[10066.255825] FS: 00007f2a5be9d710(0000) GS:ffff880002000000(0000) knlGS:0000000000000000
[10066.255829] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[10066.255832] CR2: 0000000000000000 CR3: 000000014b8b5000 CR4: 00000000000006f0
[10066.255836] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[10066.255839] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[10066.255843] Process zfs-fuse (pid: 7897, threadinfo ffff88014b8ce000, task ffff88014b874740)
[10066.255846] Stack:
[10066.255848] ffffffff810f1d50 ffffffff810ee0d7 ffff8801492f92a8 0000000000000000
[10066.255853] <0> 0000000000000000 ffff8801badd4c68 ffff8801badd4800 ffff8801badd4c68
[10066.255858] <0> ffffffff810f1d50 ffff88014b8cff64 0000000000000030 ffffffff8173e9b2
[10066.255864] Call Trace:
[10066.255871] [<ffffffff810f1d50>] ? sync_one_sb+0x0/0x30
[10066.255877] [<ffffffff810ee0d7>] ? bdi_sync_writeback+0x87/0x90
[10066.255883] [<ffffffff810f1d50>] ? sync_one_sb+0x0/0x30
[10066.255887] [<ffffffff8173e9b2>] ? rwsem_down_read_failed+0x22/0x2b
[10066.255894] [<ffffffff813965b4>] ? call_rwsem_down_read_failed+0x14/0x30
[10066.255899] [<ffffffff8173de4e>] ? down_read+0xe/0x10
[10066.255904] [<ffffffff810d0014>] ? iterate_supers+0x64/0xc0
[10066.255910] [<ffffffff810f1c89>] ? sync_filesystems+0x19/0x20
[10066.255915] [<ffffffff810f1dec>] ? sys_sync+0x1c/0x40
[10066.255920] [<ffffffff810026ab>] ? system_call_fastpath+0x16/0x1b
[10066.255923] Code: 48 8b 44 24 18 4c 89 f7 e8 58 02 00 00 4c 89 65 10 f0 41 ff 44 24 10 48 8b 43 18 48 8d 4b 10 48 89 6b 18 48 89 45 08 48 89 4d 00 <48> 89 28 4c 89 e8 f0 48 0f c1 03 46 8d 2c 28 4d 85 ed 74 5e 4c
[10066.255970] RIP [<ffffffff8173e7b6>] rwsem_down_failed_common+0x66/0x210
[10066.255976] RSP <ffff88014b8cfe48>
[10066.255978] CR2: 0000000000000000
[10066.255981] ---[ end trace f090cddf7865ecf9 ]---
[10066.255985] note: zfs-fuse[7897] exited with preempt_count 1

devsk
Advocate


Joined: 24 Oct 2003
Posts: 2995
Location: Bay Area, CA

PostPosted: Tue Jun 22, 2010 11:13 pm    Post subject: Reply with quote

Thank you kernelOfTruth! I was thinking of going to 2.6.35, but I will hold off.

Have you reported the issue to the kernel folks? Please post it on the zfs-fuse Google group as well.
Back to top
View user's profile Send private message
kernelOfTruth
Watchman


Joined: 20 Dec 2005
Posts: 6111
Location: Vienna, Austria; Germany; hello world :)

PostPosted: Tue Jun 22, 2010 11:38 pm    Post subject: Reply with quote

devsk wrote:
Thank you kernelOfTruth! I was thinking of going to 2.6.35. But I will hold off.

Have you reported the issue to kernel folks? Please post it on zfs-fuse google group also.


you're welcome,

no - not yet; from what I can tell, there's no data at risk here (after the reboot I ran a scrub and no problems were detected)

the problem is that the error / BUG details might not be accurate since I'm using exotic CFLAGS that change the order / structure of the code and hence the resulting report

so I'm still hesitant to post it to lkml

I'll post it to the zfs-fuse Google group & hopefully someone will be able to reproduce it - after that it could be posted to lkml


this is happening on my work box - since I need it, I won't have much time to test & trace things down (I'll go back to 2.6.34-zen1), but I hope I'll be able to test it again with a kind of "vanilla"-compiled kernel (resembling -march=native -pipe)
Page 1 of 2

 