tld Veteran
Joined: 09 Dec 2003 Posts: 1836
Posted: Wed Dec 08, 2010 10:57 pm Post subject: |
For anyone interested, I've implemented the .bashrc version of this under the current stable gentoo-sources-2.6.35-r12, and in my case the improvement for a lot of things I do is very noticeable. I do a lot of things from urxvt prompts. I also haven't used a login manager in a long time; I log in to a console and use startx. One thing I'm unclear on is whether that means that everything I run out of X that isn't via another console uses the cgroup of that original login(??).
In any case, there's a lot of confusing information out there about setting this up. Here's the setup I'm using. These are the relevant kernel settings:
Code: | grep CGROUP .config
CONFIG_CGROUPS=y
# CONFIG_CGROUP_DEBUG is not set
CONFIG_CGROUP_NS=y
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_CGROUP_MEM_RES_CTLR=y
# CONFIG_CGROUP_MEM_RES_CTLR_SWAP is not set
CONFIG_CGROUP_SCHED=y
CONFIG_BLK_CGROUP=y
# CONFIG_DEBUG_BLK_CGROUP is not set
|
One thing I found very confusing is all the instructions that specify mounting cgroup to /sys/fs/cgroup/cpu which (at least in the kernel I'm running) doesn't exist no matter what kernel options I use. I've seen other instructions specifying /dev/cgroup etc. It apparently doesn't make any difference where you mount that as long as everything involved references the correct mount point. I created a /cgroup directory and used that for everything.
This is in my .bashrc:
Code: | if [ "$PS1" ] ; then
mkdir -m 0700 /cgroup/user/$$
echo 1 > /cgroup/user/$$/notify_on_release
echo $$ > /cgroup/user/$$/tasks
fi
|
I've created this script at /usr/local/bin/rmcgroup:
Code: | #!/bin/bash
rmdir /cgroup/$1 |
...with execute permissions of course.
I've added this to /etc/conf.d/local.start so as to run on every boot:
Code: | # /etc/conf.d/local.start
# This is a good place to load any misc programs
# on startup (use &>/dev/null to hide output)
mount -t cgroup cgroup /cgroup -o cpu,release_agent=/usr/local/bin/rmcgroup
mkdir -m 0777 /cgroup/user
|
That all works great, and /usr/local/bin/rmcgroup cleans up when all tasks in a group end. Nice stuff.
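For what it's worth, the release agent can be made a bit more defensive. This is only a sketch, not the script above: the kernel invokes the agent with the released group's path relative to the mount root (e.g. /user/1234) as its first argument, so restricting rmdir to the /user subtree avoids ever removing anything else. CGROUP_ROOT is a hypothetical knob standing in for the /cgroup mount point.

```shell
#!/bin/sh
# Sketch of a hardened release agent. The kernel passes the released
# cgroup's path, relative to the cgroup mount root, as $1 (e.g. "/user/1234").
CGROUP_ROOT="${CGROUP_ROOT:-/cgroup}"

rmcgroup() {
    case "$1" in
        /user/?*) rmdir "${CGROUP_ROOT}$1" ;;   # only remove per-shell groups
        *)        : ;;                          # ignore /user itself and anything else
    esac
}

rmcgroup "$1"
```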
Tom |
.yankee Apprentice
Joined: 24 Feb 2008 Posts: 194 Location: Polska
Posted: Mon Dec 13, 2010 10:13 am Post subject: |
tld wrote: | For anyone interested, I've implemented the .bashrc version of this under the current stable gentoo-sources-2.6.35-r12, and in my case the improvement for a lot of things I do is very noticeable. I do a lot of things from urxvt prompts. I also haven't used a login manager in a long time; I log in to a console and use startx. One thing I'm unclear on is whether that means that everything I run out of X that isn't via another console uses the cgroup of that original login(??).
|
You can check that easily by looking at the list of PIDs in the tasks file of your session's cgroup (that is, /cgroup/user/[your_console_session_id]/tasks in your case) and comparing it with the list of running processes (be it the output of "ps" or "top" or any X frontend). Generally, what ends up in the cgroup of the session where you ran startx depends on how you run your .xinitrc and whether, inside your .xinitrc, you start your window manager via exec.
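The same question can also be answered from the other direction via procfs; a quick sketch (the commented-out path assumes tld's /cgroup layout):

```shell
#!/bin/sh
# Which cgroup hierarchies is this very shell attached to?
cat /proc/self/cgroup
# And, with tld's layout, which tasks share this shell's group:
# cat /cgroup/user/$$/tasks
```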
For instance, I use my own little script that, upon login, checks whether I am on a real tty and, if so, presents a dialog where I can choose either to start one of my DEs or just the console. The important part is that this script gets sourced from /etc/profile if I am both non-root and on a real tty; and since my .bashrc wouldn't get sourced when starting X this way, the script does the cgroup magic itself, just as a normal .bashrc would.
tld wrote: | One thing I found very confusing is all the instructions that specify mounting cgroup to /sys/fs/cgroup/cpu which (at least in the kernel I'm running) doesn't exist no matter what kernel options I use. I've seen other instructions specifying /dev/cgroup etc.
|
That is because you are using a <2.6.36 kernel; the /sys/fs/cgroup mountpoint was introduced later. And the "cpu" sub-directory needs to be created manually (it is optional anyway).
tld wrote: |
This is in my .bashrc:
Code: | if [ "$PS1" ] ; then
mkdir -m 0700 /cgroup/user/$$
echo 1 > /cgroup/user/$$/notify_on_release
echo $$ > /cgroup/user/$$/tasks
fi
|
|
That might cause a problem in some shells, as it did in one of my environments, where file-overwriting protection (noclobber) is enabled for non-root users. In that case, using >> instead of > solves the problem.
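That behaviour is easy to reproduce in any POSIX shell: with noclobber set, a plain > refuses to truncate an existing file (and notify_on_release always exists in a live cgroup), while >> and the explicit >| override still work. A small demonstration on an ordinary temp file:

```shell
#!/bin/sh
set -C                          # noclobber, the same as set -o noclobber
f=$(mktemp)                     # mktemp creates the file, so it already exists
echo first 2>/dev/null > "$f" || echo "plain > blocked by noclobber"
echo second >> "$f"             # appending is always allowed
echo third >| "$f"              # >| explicitly overrides noclobber
cat "$f"
```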
tld wrote: |
Code: | mount -t cgroup cgroup /cgroup -o cpu,release_agent=/usr/local/bin/rmcgroup
mkdir -m 0777 /cgroup/user
|
|
Oh, that's nice they have a "release_agent" mount option! I didn't know that and used to echo /usr/local/bin/rmcgroup > /sys/fs/cgroup/cpu/release_agent |
Shining Arcanine Veteran
Joined: 24 Sep 2009 Posts: 1110
Posted: Wed Feb 02, 2011 3:13 pm Post subject: |
I just applied this, but I did it a little differently than tld. I did a mix of his instructions and the instructions for Ubuntu users.
http://www.webupd8.org/2010/11/alternative-to-200-lines-kernel-patch.html
Since it would just be a me too post if I posted instructions, here is a script that you can run as root to do this, provided that you configured your kernel with CGROUPS support in advance (see tld's post):
Code: | #!/bin/sh --
# Make the system configure itself for automated per-tty task groups before any shells run.
# The heredoc delimiters are quoted ('END') so that $$, $PS1 and $* land in the
# generated files literally instead of being expanded by this script, and the
# error handling is braced so "exit 1" only runs on failure.
cat << 'END' >> /etc/conf.d/local.start || { echo "Failure appending to /etc/conf.d/local.start"; exit 1; }
# automated per tty task groups
mkdir -p /dev/cgroup/cpu
mount -t cgroup cgroup /dev/cgroup/cpu -o cpu,release_agent=/usr/local/sbin/cgroup_clean
mkdir -m 0777 /dev/cgroup/cpu/user
END
# Make interactive shells place themselves in a per-tty task group
cat << 'END' >> /etc/bash/bashrc || { echo "Failure appending to /etc/bash/bashrc"; exit 1; }
# automated per tty task groups
if [ "$PS1" ] ; then
mkdir -p -m 0700 /dev/cgroup/cpu/user/$$ > /dev/null 2>&1
echo $$ > /dev/cgroup/cpu/user/$$/tasks
echo "1" > /dev/cgroup/cpu/user/$$/notify_on_release
fi
END
# Create cleanup script
cat << 'END' > /usr/local/sbin/cgroup_clean || { echo "Failure creating /usr/local/sbin/cgroup_clean"; exit 1; }
#!/bin/sh
if [ "$*" != "/user" ]; then
rmdir /dev/cgroup/cpu/$*
fi
END
# Enable cleanup script to execute
chmod u+x /usr/local/sbin/cgroup_clean || { echo "Failure setting execute bit on /usr/local/sbin/cgroup_clean"; exit 1; }
# Tell the shell we are finished. It is more elegant this way. :P
exit 0;
|
Please note that I have not actually tested this, but it should work.
Anyway, I use KDE with compositing. gtkperf times are usually 3 seconds. I was trying to install libreoffice and I noticed some lag. I ran gtkperf and it took 13 seconds. I stopped the libreoffice compile, set this up, restarted KDE, opened konsole and started compiling libreoffice again. It was taking 5 seconds. Then I took some time to type this up and it is now at 16 seconds.
I am not sure if this makes the difference people claim it does. I know that people claim that gtkperf is not a good benchmark, but situations in which I notice lag seem to correspond to situations when gtkperf times are high and situations in which I do not think there is lag seem to correspond to situations when gtkperf times are low. I have PORTAGE_NICENESS=19 in /etc/make.conf, so it could be that the priorities are so low that this tweak does not have any effect. For completeness I would like to mention that I am compiling libreoffice in a tmpfs, although I have 8GB of RAM so there is no swapping being done. |
gringo Advocate
Joined: 27 Apr 2003 Posts: 3793
Posted: Wed Feb 02, 2011 3:25 pm Post subject: |
All I can say is that I tried both and I really wasn't able to see a difference in responsiveness on my slow eeepc, which is where I'm doing all this testing.
The "in-kernel based solution" made it into mainline BTW.
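For reference, the merged feature (CONFIG_SCHED_AUTOGROUP) also exposes a runtime switch, so it is easy to check whether autogrouping is actually active before attributing any responsiveness difference (or lack of one) to it. A sketch:

```shell
#!/bin/sh
# Report whether in-kernel autogrouping is built in and currently enabled.
f=/proc/sys/kernel/sched_autogroup_enabled
if [ -r "$f" ]; then
    echo "sched_autogroup_enabled = $(cat "$f")"   # 1 = active, 0 = switched off
else
    echo "this kernel was built without SCHED_AUTOGROUP"
fi
```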
cheers |
Dont Panic Guru
Joined: 20 Jun 2007 Posts: 322 Location: SouthEast U.S.A.
Posted: Wed Feb 02, 2011 3:51 pm Post subject: |
gringo wrote: | The "in-kernel based solution" made it into mainline BTW. |
Just for clarification, I see it in the 2.6.38_rc kernel, but I don't believe it's in the 2.6.37 kernel. |
gringo Advocate
Joined: 27 Apr 2003 Posts: 3793
Posted: Wed Feb 02, 2011 3:54 pm Post subject: |
Quote: | Just for clarification, I see it in the 2.6.38_rc kernel, but I don't believe it's in the 2.6.37 kernel. |
Sorry for not being clear: yes, it was pulled in for 2.6.38.
cheers |
jprobichaud Tux's lil' helper
Joined: 28 Jan 2009 Posts: 81 Location: Montreal, Qc
Posted: Thu Feb 03, 2011 5:24 pm Post subject: |
Reading all this and seeing that all these changes are tty-oriented, I'm wondering: can the cgroups approach be used to keep users who connect through ssh to a "research box" from stepping on each other's toes?
We have a group of machines where users log in by ssh (usually you get "attached" to pts/0, pts/1, ..., if I understood correctly). Sometimes a user launches a script that computes some stuff and almost kills the machine's responsiveness for all the other users. Using 'nice' and even 'ionice' helps a little, but not much.
Can "automatic" cgroups policies be defined on a per-user basis? Any examples? I found a "cspeed" tool in the ArchLinux forums, but I don't have enough knowledge to tell whether that's what I'm looking for. The official admins of these research boxes are very busy, and I would like to test these changes on systems I control before showing this to them.
Thanks! |
devsk Advocate
Joined: 24 Oct 2003 Posts: 3003 Location: Bay Area, CA
Posted: Thu Feb 03, 2011 5:48 pm Post subject: |
jprobichaud wrote: | Reading all this and seeing that all these changes are tty-oriented, I'm wondering: can the cgroups approach be used to keep users who connect through ssh to a "research box" from stepping on each other's toes?
We have a group of machines where users log in by ssh (usually you get "attached" to pts/0, pts/1, ..., if I understood correctly). Sometimes a user launches a script that computes some stuff and almost kills the machine's responsiveness for all the other users. Using 'nice' and even 'ionice' helps a little, but not much.
Can "automatic" cgroups policies be defined on a per-user basis? Any examples? I found a "cspeed" tool in the ArchLinux forums, but I don't have enough knowledge to tell whether that's what I'm looking for. The official admins of these research boxes are very busy, and I would like to test these changes on systems I control before showing this to them.
Thanks! | 2.6.38-rc3 autogrouping is based on the setsid system call: if a program calls it, it's in its own group and can't kill anybody else's performance. So all you need to do is wrap the shell invoked by sshd with a setsid wrapper (setsid is part of util-linux-ng), creating a new session for every user who logs in. This will isolate users from each other.
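A minimal way to put that into practice might be a wrapper registered as the users' login shell. Everything below is a hypothetical sketch: the wrapper path and the choice of /bin/sh as the real shell are assumptions, and setsid is the util-linux utility mentioned above.

```shell
#!/bin/sh
# Install a hypothetical login-shell wrapper so that every ssh login
# starts its own session (and thus, under 2.6.38 autogrouping, its own group).
cat > /usr/local/bin/session-shell << 'EOF'
#!/bin/sh
# setsid detaches the real shell into a fresh session before handing over.
exec setsid /bin/sh "$@"
EOF
chmod 755 /usr/local/bin/session-shell
# then, per user: chsh -s /usr/local/bin/session-shell <username>
```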
This solution is very elegant and 2.6.38 makes it possible out of the box... |
Dont Panic Guru
Joined: 20 Jun 2007 Posts: 322 Location: SouthEast U.S.A.
Posted: Thu Feb 03, 2011 7:29 pm Post subject: |
@jprobichaud
You may want to check out the link kernelOfTruth provided here: https://forums.gentoo.org/viewtopic-p-6490882.html#6490882
This O'Reilly article explains how to manually create and manage CGROUPS, and may give you some ideas for how to use CGROUPS for more finely grained control than what is provided by the automatic CGROUPS patches. |
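On jprobichaud's per-user question: the per-shell trick from earlier in the thread can be bent into a per-user shape with very little change. An untested sketch, where join_user_cgroup is a made-up name and CGROUP_ROOT stands in for wherever the cpu controller is actually mounted:

```shell
#!/bin/sh
# Untested sketch: one shared group per UID instead of one per shell.
# CGROUP_ROOT is an assumption; substitute the real cpu-controller mount point.
join_user_cgroup() {
    g="${CGROUP_ROOT:-/cgroup}/user/$(id -u)"
    mkdir -p -m 0700 "$g" 2>/dev/null        # the user's first login creates it
    echo 1 > "$g/notify_on_release" 2>/dev/null
    echo $$ > "$g/tasks"                     # every later shell just joins the group
}
# e.g. from /etc/profile: [ "$PS1" ] && join_user_cgroup
```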
kernelOfTruth Watchman
Joined: 20 Dec 2005 Posts: 6111 Location: Vienna, Austria; Germany; hello world :)
@jprobichaud
additionally you could try to use BFS with "Grouping by UID" support:
see Con's Blog (<-- link inside) for more info |
jprobichaud Tux's lil' helper
Joined: 28 Jan 2009 Posts: 81 Location: Montreal, Qc
Posted: Thu Feb 03, 2011 7:59 pm Post subject: |
kernelOfTruth wrote: | @jprobichaud
additionally you could try to use BFS with "Grouping by UID" support:
see Con's Blog (<-- link inside) for more info |
@kernelOfTruth: Thanks for these suggestions (and thanks also to "Dont Panic" for pointing me back to the O'Reilly article).
I guess that for the admins it will already be one big step to move these machines to a new kernel (right now they are at 2.6.18, without cgroup support), and I don't really think they like the idea of applying an "unsupported" kernel patch. This environment is already hard to maintain in good shape (researchers are not necessarily computer literate and sometimes do all sorts of weird/bad things...), so we don't want to add too many uncertainties.
I'll digest all that. It seems that executing some commands when someone logs in could keep everybody in his own little cage, which should greatly help the stability of these nodes. In theory nobody is supposed to run cpu-intensive tasks there, but in real life we sometimes run into this type of trouble...
Again, thanks a lot for these insights! |