March 30th, 2004, 04:34 AM
Why do children of a shell script refuse to die?
First of all, this is a Solaris question. I have a (bash) shell script, abc.sh, whose job is to run a perl script.
The perl script is effectively designed to hang (not by me): it tries to pull some data from a bad URL that never returns anything, and there is no timeout.
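For concreteness, abc.sh can be pictured as something like the following (the script names and URL here are hypothetical stand-ins, not the original listing):

```shell
#!/bin/bash
# abc.sh (hypothetical sketch): run the fetcher, then further work.
# The fetcher hangs forever: the URL never answers and there is no timeout.
perl fetch_data.pl 'http://bad.example.com/feed'
perl process_data.pl
```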
Now, all I want to do is to be able to kill abc.sh with a single kill -9 without any residual processes hanging about afterwards. The problem is that the perl script refuses to die upon a kill -9 issued to the process of abc.sh.
Before issuing a SIGKILL (kill -9) to the process of abc.sh, a ps -ef shows the parent of the hung perl process to be abc.sh. After issuing a SIGKILL, the parent becomes 1 (/etc/init, as far as I remember). A subsequent kill -9 to the perl process kills it without any problems.
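The reparenting described above is easy to reproduce with any shell, not just on Solaris. A small sketch (using sleep as a stand-in for the hung perl script):

```shell
#!/bin/bash
# Show that kill -9 on a parent does not touch its children.
bash -c 'sleep 60 & echo $! > /tmp/child.pid; wait' &   # wrapper plays the role of abc.sh
wrapper=$!
sleep 1                       # give the wrapper time to spawn its child
kill -9 "$wrapper"            # SIGKILL the wrapper only
sleep 1
child=$(cat /tmp/child.pid)
kill -0 "$child" && echo "child $child survived"   # still alive, now reparented to init
kill -9 "$child"              # a second, direct kill is what finally removes it
rm -f /tmp/child.pid
```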
Can anyone think of a reason why the perl script would ignore the SIGKILL, which, AFAIK, gets passed through to it? If I am wrong in thinking that it gets passed through (which is quite likely), then is there a way to pass it through nicely?
My main problem is that killing the script abc.sh is the only thing I can do (I have to use Java's Process.destroy() on it); normally I would not have the pid of any of its children (again, a consequence of using Java).
March 30th, 2004, 06:58 AM
Nothing unusual about this. This is standard UNIX behaviour.
Why not invoke the Perl script directly without going through the shell script?
March 30th, 2004, 07:33 AM
True, it is standard unix behaviour. However, I cannot invoke the perl script directly (say, I need to invoke another one after it), so I must go through a script.
Originally Posted by fpmurphy
Why not invoke the Perl script directly without going through the shell script?
Perhaps the question should be phrased as follows: is there a way to write a script in such a way that its children die when it is killed?
Maybe try using the trap command. Look at the man page for more info.
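The trap suggestion can be sketched like this, assuming abc.sh itself can be modified (the perl script names are hypothetical). One caveat: SIGKILL cannot be trapped, so this works for a plain kill (SIGTERM) but never for kill -9:

```shell
#!/bin/bash
# abc.sh with cleanup: on TERM or INT, forward the signal to every child, then exit.
# NOTE: SIGKILL cannot be trapped, so the caller must send SIGTERM (plain kill).
trap 'kill $(jobs -p) 2>/dev/null; exit 1' TERM INT

perl fetch_data.pl 'http://bad.example.com/feed' &   # hypothetical hanging fetcher
wait $!          # wait as a builtin, so the trap can fire while the child runs
perl process_data.pl
```

Java's Process.destroy() sends SIGTERM rather than SIGKILL on most Unix implementations, so a trap like this should get a chance to run.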