kernelOfTruth Watchman
Joined: 20 Dec 2005 Posts: 6111 Location: Vienna, Austria; Germany; hello world :)
Posted: Mon Jun 10, 2013 11:16 pm
tried out the ROW scheduler and the results weren't very satisfying
I threw the worst-case scenario at it (btrfs with lzo compression on /home, ext4 on the / (system) partition)
stuff took pretty long to load (especially chromium: opening tabs & surfing to known, pre-opened websites)
not sure what I'm doing wrong, but I'll give it another try at a later time
btrfs has some idiosyncrasies - there are still some latency peaks when flushing from time to time - so I guess that's the best real-world benchmark:
running it on /home, combining that with high i/o & syncing stuff via rsync, etc., and using it productively
so my preliminary tweaks ended up as:
Code: | for i in /sys/block/sd*; do
/bin/echo "cfq" > $i/queue/scheduler #cfq #bfq
#### /bin/echo "8" > $i/queue/iosched/slice_idle # default: 8, 0 fixes low throughput with drives which have a buggy ncq implementation
#### /bin/echo "16384" > $i/queue/iosched/back_seek_max # default: 16384
/bin/echo "125" > $i/queue/iosched/fifo_expire_async # default: 250 # 10000 (suggested) is probably too high - evaluating # 3000 = 3 seconds
/bin/echo "180" > $i/queue/iosched/fifo_expire_sync # default: 125
/bin/echo "80" > $i/queue/iosched/slice_async # default: 40
/bin/echo "40" > $i/queue/iosched/slice_sync # default: 100
# /bin/echo "5" > $i/queue/iosched/slice_async_rq # default: 2
/bin/echo "6" > $i/queue/iosched/quantum # default: 4 # 6 # affects latency (low better) & throughput (high better)
# /bin/echo "1" > $i/queue/iosched/low_latency # default: 1
done
|
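After running such a loop it's worth reading the values back to confirm they actually took effect (sysfs silently rejects some writes, and not every elevator exposes every tunable). A minimal read-back sketch - the helper name `show_iosched` and the optional sysfs-root argument are my own additions, not from the post:

```shell
#!/bin/sh
# Print the current elevator and a few CFQ tunables for each sd* disk.
# Takes an optional sysfs root so it can be tried against a mock tree
# (defaults to the real /sys).
show_iosched() {
    sysfs=${1:-/sys}
    for dev in "$sysfs"/block/sd*; do
        [ -d "$dev" ] || continue
        printf '%s: scheduler=%s\n' "${dev##*/}" "$(cat "$dev/queue/scheduler")"
        for knob in fifo_expire_async fifo_expire_sync slice_async slice_sync quantum; do
            f=$dev/queue/iosched/$knob
            [ -r "$f" ] && printf '  %s=%s\n' "$knob" "$(cat "$f")"
        done
    done
}
show_iosched "$@"
```

Run as any user; writing the tunables still needs root, but reading them does not.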
the changes are quite noticeable (less waiting time / latency when browsing the web, where it's the most noticeable)
cfq now focuses more on async workloads - I figured that only in the rarest cases (at least for me) are things aligned & accessed synchronously / properly
and this is especially true for desktop usage, where the files being opened look "random" to the i/o scheduler
so the above preliminary settings were the result
_________________
https://github.com/kernelOfTruth/ZFS-for-SystemRescueCD/tree/ZFS-for-SysRescCD-4.9.0
https://github.com/kernelOfTruth/pulseaudio-equalizer-ladspa
Hardcore Gentoo Linux user since 2004 |
kernelOfTruth Watchman
Posted: Tue Jun 11, 2013 5:16 pm
latest revision of the tweaks for CFQ that I'm currently using (including comments & a description of how I understand them):
Code: | # http://www.linux-mag.com/id/7572/
for i in /sys/block/sd*; do
/bin/echo "cfq" > $i/queue/scheduler #cfq #bfq
#### /bin/echo "8" > $i/queue/iosched/slice_idle # default: 8, 0 fixes low throughput with drives which have a buggy ncq implementation
#### /bin/echo "16384" > $i/queue/iosched/back_seek_max # default: 16384
# fifo_expire_*: request timeout - the lower the value, the sooner queued requests expire
# default: 250 # 125
# 10000 (suggested) is probably too high - evaluating # 3000 = 3 seconds
/bin/echo "125" > $i/queue/iosched/fifo_expire_async
# default: 125 # 180
/bin/echo "180" > $i/queue/iosched/fifo_expire_sync
# slice_*: length of the time slice - the longer, the more priority that class gets
# default: 40 # 80
/bin/echo "80" > $i/queue/iosched/slice_async
# default: 100 # 40
/bin/echo "40" > $i/queue/iosched/slice_sync
# slice_async_rq: limits how many async requests may be dispatched to the device request queue during one queue's slice time
# default: 2 # 6
/bin/echo "6" > $i/queue/iosched/slice_async_rq
# quantum: requests dispatched to the device per cycle
# default: 4 # 6 # affects latency (lower is better) & throughput (higher is better)
/bin/echo "4" > $i/queue/iosched/quantum
/bin/echo "1" > $i/queue/iosched/low_latency # default: 1
# nr_requests:
# default: 128 # test: 512; 1024
/bin/echo "64" > $i/queue/nr_requests
done |
note: this is for current SATA desktop hard drives (3.5''); the behavior will likely be different on laptops
and might be totally unsuitable for SSDs
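The SSD warning can be automated: the kernel exposes a per-device `rotational` flag under `queue/`, so a script can refuse to apply these HDD-oriented tweaks to flash. A sketch - the helper name `pick_elevator` and the chosen fallbacks are my own assumptions, not from the thread:

```shell
#!/bin/sh
# Choose an elevator based on the device's rotational flag:
# 1 = spinning disk (the tweaked cfq above makes sense),
# 0 = SSD (seek-oriented tuning is pointless, use a simple elevator).
pick_elevator() {
    case "$1" in
        1) echo cfq ;;       # rotational disk
        0) echo noop ;;      # SSD
        *) echo deadline ;;  # unknown: a conservative fallback
    esac
}

# Usage against the real sysfs (requires root):
# for i in /sys/block/sd*; do
#     pick_elevator "$(cat "$i/queue/rotational")" > "$i/queue/scheduler"
# done
```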
kernelOfTruth Watchman
Posted: Wed Jun 19, 2013 7:22 pm
well, after migrating /home to btrfs (due to concerns about bit corruption, and btrfs' superiority in data integrity & space efficiency)
I came to the realization that cfq and bfq (and any derivatives; bfq less so)
are unreliable, unpredictable, slow and inefficient - at least when consistency matters - and cause filesystems and/or devices to behave erratically:
for example, any drive/partition with btrfs on it, under a heavily tweaked cfq, would run into errors and hangs of the filesystem; similar behavior could be observed with ZFS
worst example: an external drive with ZFS on it - during backups the drive would from time to time (or even right from the beginning) time out and/or shut down (Western Digital Red; I had similar experiences with Seagate Barracuda and ES.2 drives in the past)
so I went back to the roots and settled on deadline:
Code: | for i in /sys/block/sd*; do
/bin/echo "deadline" > $i/queue/scheduler
/bin/echo "1024" > $i/queue/nr_requests
/bin/echo "20" > $i/queue/iosched/writes_starved # default: 2 # suggested: 16
/bin/echo "15000" > $i/queue/iosched/write_expire # default: 5000 # suggested: 15000, 1500
/bin/echo "4" > $i/queue/iosched/fifo_batch # default: 16
/bin/echo "125" > $i/queue/iosched/read_expire # default: 500 # suggested: 250, 150, 80
/bin/echo "0" > $i/queue/iosched/front_merges # default = 1 (enabled)
done |
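To make a scheduler choice like this survive reboots and hotplug, a udev rule is a common alternative to an init-script loop; a sketch, assuming the file name `60-ioschedulers.rules` (any name under /etc/udev/rules.d works):

```
# /etc/udev/rules.d/60-ioschedulers.rules (hypothetical file name)
# Set deadline and a deep request queue for all SATA/SCSI disks as they appear.
ACTION=="add|change", KERNEL=="sd[a-z]", SUBSYSTEM=="block", \
  ATTR{queue/scheduler}="deadline", ATTR{queue/nr_requests}="1024"
```

The per-scheduler tunables under `queue/iosched/` would still need a script, since they only exist after the elevator has been switched.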
there is still some waiting time, e.g. during large backup jobs (e.g. /home on btrfs to a backup on ZFS), for a few seconds or even less
but at least it's predictable and sound so far; it seems I'll continue with these settings (no interruptions so far, but I'll have to use them longer)
I still have to observe more - but this appears & feels much better than with cfq
also the desktop (and especially chromium) is way snappier
kernelOfTruth Watchman
Posted: Sat Jun 22, 2013 3:43 pm
ok, seems like the "un-predictability" of bfq (and especially cfq) is related to issues on the x64 architecture
meanwhile I'm switching back and forth between deadline and bfq to see where things can be improved;
disabling low_latency gives predictability back, but that of course comes at a price
tweaking the cpu scheduler (CFS) seems to play an important role, too (for those who can't use BFS due to bleeding edge and/or other reasons):
here are my current CFS tweaks that e.g. allow my system to play back 2 audio streams (1 local, 1 from youtube) almost flawlessly with pulseaudio during heavy compiling
Code: | echo "6000000" > /proc/sys/kernel/sched_latency_ns
echo "750000" > /proc/sys/kernel/sched_min_granularity_ns
echo "100000" > /proc/sys/kernel/sched_wakeup_granularity_ns
echo "2000000" > /proc/sys/kernel/sched_migration_cost_ns
echo "2" > /proc/sys/kernel/sched_nr_migrate
echo "0" > /proc/sys/kernel/sched_tunable_scaling
echo "NO_GENTLE_FAIR_SLEEPERS" > /sys/kernel/debug/sched_features
echo "NO_RT_RUNTIME_SHARE" > /sys/kernel/debug/sched_features
echo "1" > /proc/sys/kernel/sched_child_runs_first |
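The /proc/sys/kernel writes above correspond one-to-one to sysctl keys, so everything except the sched_features toggles (those live in debugfs and have no sysctl equivalent) can be made persistent via /etc/sysctl.conf; a sketch mirroring the values above:

```
# /etc/sysctl.conf - persistent form of the CFS tweaks above
kernel.sched_latency_ns = 6000000
kernel.sched_min_granularity_ns = 750000
kernel.sched_wakeup_granularity_ns = 100000
kernel.sched_migration_cost_ns = 2000000
kernel.sched_nr_migrate = 2
kernel.sched_tunable_scaling = 0
kernel.sched_child_runs_first = 1
```

The two `sched_features` echoes would still have to go in an init script (e.g. /etc/local.d on Gentoo).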
ppurka Advocate
Joined: 26 Dec 2004 Posts: 3256
Posted: Sat Nov 16, 2013 9:07 am
This LWN article is very interesting. Isn't this the same problem discussed in this thread? _________________ emerge --quiet redefined | E17 vids: I, II | Now using kde5 | e is unstable :-/ |