tudor2k3 n00b
Joined: 26 Nov 2008 Posts: 7
Posted: Thu Dec 30, 2010 9:05 pm Post subject: [solved (by workaround)] kernel command line arguments limit
Hello,
I'm using 2.6.36-hardened sources and I need to run a script of 144k (and more in the future...). It's just full of ssh, iptables and htb commands from one PC to another. If I shrink it down to 128k or less it runs just fine, but over 128k I get the famous(?) error: "argument list too long". I read it is a kernel limit, and this article gives a solution: http://www.linuxjournal.com/article/6060?page=0,0 (method 4):
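For the record, the failing pattern boils down to something like this (the file name is made up, and the exact invocation may differ; the point is that the whole command list ends up as a single exec() argument):
Code: | # hypothetical reproduction of the problem
CMDS="$(cat generated-commands.txt)"   # ~144k of ssh/iptables/htb commands
sh -c "$CMDS"                          # over 128k: "argument list too long" |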
Quote: | /*
* MAX_ARG_PAGES defines the number of pages allocated for arguments
* and envelope for the new program. 32 should suffice, this gives
* a maximum env+arg of 128kB w/4KB pages!
*/
#define MAX_ARG_PAGES 32
In order to increase the amount of memory dedicated to the command-line arguments, you simply need to provide the MAX_ARG_PAGES value with a higher number. Once this edit is saved, simply recompile, install and reboot into the new kernel as you would do normally.
On my own test system I managed to solve all my problems by raising this value to 64. After extensive testing, I have not experienced a single problem since the switch. This is entirely expected since even with MAX_ARG_PAGES set to 64, the longest possible command line I could produce would only occupy 256KB of system memory--not very much by today's system hardware standards. |
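On a pre-2.6.23 tree, where the constant still lives in include/linux/binfmts.h, the article's method amounts to an edit-and-rebuild cycle like this sketch (a standard /usr/src/linux checkout is assumed):
Code: | cd /usr/src/linux
# reserve 64 pages instead of 32 for argv+envp (128k -> 256k)
sed -i 's/#define MAX_ARG_PAGES 32/#define MAX_ARG_PAGES 64/' include/linux/binfmts.h
make && make modules_install && make install |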
I've tried recompiling the kernel as the article says, but no luck.
Does anyone know another way to raise the limit? A 128k limit is just... obsolete.
Thanks in advance.
EDIT: If anyone is interested, it's solved by piping the commands to xargs (with the "-0" flag, so it won't interpret any special characters). While it doesn't raise the 128k limit itself, it does the job.
Also, if anyone knows how to raise that 128k limit, I'm still interested.
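A sketch of that workaround, assuming the generated commands sit one per line in a file (names are made up):
Code: | # convert newlines to NUL separators so xargs -0 won't reinterpret
# quotes, spaces or other special characters inside the commands
tr '\n' '\0' < generated-commands.txt | xargs -0 -n1 sh -c |
Each command is then handed to its own "sh -c", so no single exec() ever sees more than one command's worth of arguments.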
Last edited by tudor2k3 on Sat Jan 01, 2011 3:28 pm; edited 1 time in total
tudor2k3 n00b
Joined: 26 Nov 2008 Posts: 7
Posted: Thu Dec 30, 2010 9:36 pm Post subject:
OK, so I read that Linux kernels newer than 2.6.23 set ARG_MAX to 1/4 of the stack limit (you can get its value with the "getconf" command), so I've set the stack limit to unlimited:
Code: | ulimit -s unlimited |
Now ARG_MAX is much bigger:
Code: | sn_mon tmp # getconf ARG_MAX
4611686018427387903
|
But the effect is the same: the script can't be executed if it's over 128k.
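(The likely culprit is a second, separate limit: since 2.6.23, fs/exec.c also caps every single argument or environment string at MAX_ARG_STRLEN = 32 pages, i.e. 128k with 4k pages, no matter how large the stack is. A minimal check that reproduces this, assuming 4k pages:)
Code: | ulimit -s unlimited
getconf ARG_MAX                # huge, as above: the total argv+envp limit
# but one single string over 128k is still rejected:
/bin/true "$(head -c 131073 /dev/zero | tr '\0' x)"
# -> bash: /bin/true: Argument list too long |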
Hu Administrator
Joined: 06 Mar 2007 Posts: 23088
Posted: Fri Dec 31, 2010 3:59 am Post subject:
This should only apply if you are passing that many kilobytes of arguments on the command line. Perhaps you would be better served by uploading the script to the remote machine via scp and executing it there. You could also try running a remote shell and redirecting ssh stdin from your script.
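Both suggestions in sketch form (host and file names are hypothetical):
Code: | # 1) upload the script and execute it on the remote machine:
scp big-script.sh user@remote:/tmp/ && ssh user@remote sh /tmp/big-script.sh

# 2) run a remote shell with stdin redirected from the script;
#    nothing is passed as a command-line argument this way:
ssh user@remote sh < big-script.sh |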
tudor2k3 n00b
Joined: 26 Nov 2008 Posts: 7
Posted: Sat Jan 01, 2011 3:25 pm Post subject:
If anyone is interested, it's solved by piping the commands to xargs (with the "-0" flag, so it won't interpret any special characters).