I have a shell script that initiates a long, resource-intensive command on several machines. To run the remote command on each machine in parallel, I put an ampersand after the ssh invocation inside the loop:
while read host; do ssh -f "$host" "/home/user/allocate.sh" & done < ~/cluster
However, as a consequence, I do not get the shell prompt back when the loop ends.
This prevents me from executing other scripts after initialize-cluster.sh has finished running allocate.sh on all of the machines.
For example, I want to be able to run a command such as:
./initialize-cluster.sh && ./execute-something-else.sh
But initialize-cluster.sh never "finishes." I'd like subsequent commands to execute only after all of the allocate.sh scripts have finished on the remote machines.
Nothing in this script should prevent execution of the next command or keep the shell prompt from returning. What can give the impression that the prompt is gone is the output of the remote scripts, which arrives after the prompt has already returned. To avoid that, redirect each remote command's stdout and stderr to a log file.
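A minimal sketch of that redirection, giving each host its own log file. Here `echo` stands in for the real `ssh -f "$host" ...` calls, and the `logs/` directory and `node1`..`node3` host names are made up for illustration:

```shell
#!/bin/sh
# Each background job writes to its own per-host log file, so no
# output lands on the terminal after the prompt has returned.
mkdir -p logs
for host in node1 node2 node3; do
    # In the real script this line would be:
    #   ssh -f "$host" "/home/user/allocate.sh" > "logs/$host.log" 2>&1 &
    echo "allocating on $host" > "logs/$host.log" 2>&1 &
done
wait
```

Afterwards you can inspect `logs/<host>.log` for each machine at your leisure.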
Since the remote commands are run in the background, the original script will finish before the allocate scripts have finished executing, so the next command can't assume that initialization is complete. To start all of the initializations in parallel and then wait for them all to finish, you could do the following:
#!/bin/sh
while read host
do
    ssh "$host" "/home/user/allocate.sh" &
done < ~/cluster
wait
And then it should be possible to:
./initialize-cluster.sh && ./execute-something-else.sh
The key here is the wait command, which waits for the shell's child processes to finish. Note that -f has been dropped from the ssh invocation: with -f, ssh forks itself into the background and exits, so the shell would have no child left to wait for.
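If you also want to know whether any of the allocations failed, a variant is to record each background job's PID and wait on them individually, since `wait "$pid"` returns that job's exit status. A rough sketch, with `true`/`false` standing in for the remote allocate.sh invocations:

```shell
#!/bin/sh
# Collect each background job's PID so its exit status can be checked.
# "true" and "false" are stand-ins for ssh "$host" "/home/user/allocate.sh".
status=0
pids=""
for job in true true false; do
    $job &
    pids="$pids $!"
done
for pid in $pids; do
    wait "$pid" || status=1
done
echo "overall status: $status"
```

With this, `./initialize-cluster.sh && ./execute-something-else.sh` would skip the second script if any allocation reported failure (provided the script ends with `exit $status`).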