List: busybox
Subject: Re: OT: many script or many fork?
From: Mike Frysinger <vapier () gentoo ! org>
Date: 2007-07-26 0:04:13
Message-ID: 200707252004.14409.vapier () gentoo ! org
On Wednesday 25 July 2007, Seb wrote:
> Mike Frysinger <vapier@gentoo.org> wrote:
> > > So, IMHO the most
> > > efficient must be to launch one script. After, it depends on the code
> > > you execute : if there are many system calls and few needs in memory,
> > > you probably won't see the difference.
> >
> > since each script is really just /bin/bash (or whatever), the appropriate
> > sections are shared in memory automatically. an independent script is
> > pretty much the same as backgrounding something as the shell will fork a
> > new process for each one. so you still have 3 processes.
>
> Is there no difference at all between a job and a parallel shell? It's
> an honest question... Seen from "outside", I had the feeling a job was
> more "integrated" into its parent shell in order to allow job control.
> Is it just a matter of naming (sorry, I don't have the appropriate
> terms) without any physical reality, which for example would tell a
> shell: "you can know when this process ends because it's your child,
> but for that one you'll have to ask 'ps' because it's not"?
whether you do `./somescript.sh` or `while ... done &`, `ps` will show a
separate process either way, because both are forked off.
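a quick way to see this is to compare the PID of a backgrounded loop against
the parent shell's. this is just a sketch, and it relies on bash's $BASHPID
(busybox ash may not provide it):

```shell
#!/bin/bash
# $BASHPID reports the PID of the bash process actually evaluating it,
# while $$ keeps the original shell's PID even inside subshells.
echo "parent shell PID:      $$"

# Backgrounding a loop forks the shell: the loop body sees a new PID.
while :; do
    echo "backgrounded loop PID: $BASHPID"
    break
done &
wait
```

the two PIDs printed will differ, just as they would if the loop lived in a
separate script started with `./somescript.sh`.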
> However, with 3 scripts each executing one command, you should have 6
> processes unless you use 'exec' in them (the shell processes plus those
> of the commands), while with 3 commands backgrounded in one script, you
> should have 4 processes. But what you mean is that *physically* one
> 'bash' process or a thousand is the same thing, have I understood
> correctly? :)
you'd have an equivalent number of processes if you did:
./script1.sh
./script2.sh
./script3.sh
versus
while ... done &
while ... done &
while ... done
the only savings you can really count on is the cost of starting up the
shell ... when you use &, you're still doing a fork().
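one way to see that startup cost is to time many fresh-shell invocations
against the same number of plain forks. a rough sketch (the no-op script is
created on the fly via mktemp; absolute numbers will vary with the shell and
system):

```shell
#!/bin/bash
# Create a throwaway no-op script so each invocation pays the full
# fork() + exec() + shell-init cost.
noop=$(mktemp) || exit 1
printf '#!/bin/sh\n:\n' > "$noop"
chmod +x "$noop"

echo "fork + exec + shell init (new shell per run):"
time for i in $(seq 1 100); do "$noop"; done

echo "fork only (backgrounded subshell per run):"
time for i in $(seq 1 100); do ( : ) & wait; done

rm -f "$noop"
```

the second loop should come out cheaper, since it skips exec() and the
shell's init routines entirely.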
> > about the only thing you could say is how long-running the script is.
> > if it is long running, then the difference will probably be negligible.
> > if it's something that gets executed over and over, spawning subshells
> > may be slightly cheaper than forking whole new shells for each script.
> > but really, this is all just conjecture ... the only real test is one
> > you run yourself, gathering quantitative data about resource
> > utilization.
>
> In fact, the difference I had in mind was between a script which
> combines many commands and a script which primarily uses the shell's
> built-in functions (few forks). With the second kind, the cost of the
> extra initialization should be *relatively* noticeable, but not with
> the first kind, where it should be drowned out by all the system calls,
> shouldn't it?
doing `./foo.sh` will start up a new shell and run through its init routines,
while doing `while ... done &` will fork the currently running shell and thus
bypass those routines. but in the end, you aren't looking at a different
number of forks, just the startup cost. so a long-running script should
generally plateau at the same resource utilization regardless of how it was
started. shell functions do not cause forks; things
like '&', '`...`', '$(...)', and '... | ...' do.
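to make that concrete, here's a bash sketch (again leaning on bash's
$BASHPID, which busybox ash may lack) comparing a function call, which stays
in the same process, against constructs that fork:

```shell
#!/bin/bash
# Shell functions run in the calling process: same $BASHPID, no fork.
show_pid() { echo "function PID:         $BASHPID"; }

echo         "main shell PID:       $BASHPID"
show_pid                                              # same PID: no fork
( echo       "subshell (...) PID:   $BASHPID" )       # fork
echo "cmd-subst \$() PID:    $(echo $BASHPID)"        # fork
echo         "pipeline PID:         $BASHPID" | cat   # fork (left side runs
                                                      # in a child in bash)
```

the first two lines print the same PID; the last three each print a new one,
one fork() apiece.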
-mike
_______________________________________________
busybox mailing list
busybox@busybox.net
http://busybox.net/cgi-bin/mailman/listinfo/busybox