I have a shell script that will be executed by multiple instances. If one instance is accessing a file and operating on it, how can I make sure the other instances do not access the same file and corrupt the data?
My question is not about controlling parallel execution, but about a file-locking or flagging mechanism.
Any suggestions on how to proceed would be appreciated.
Linux normally doesn’t do any locking (unlike Windows). This has many advantages, but if you must lock a file, you have several options. I suggest
flock: apply or remove an advisory lock on an open file.
This utility manages flock(2) locks from within shell scripts or from the command line.
For a single command (or entire script), you can use
```shell
flock --exclusive /var/lock/mylockfile -c command
```
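As a sketch of how two instances interact with this one-shot form (the lock file path, timings, and messages here are my own illustrations, not from the original answer):

```shell
#!/bin/bash
# Hypothetical demo: one instance holds the lock while a second,
# non-blocking instance finds it busy. Path and timings are assumptions.
lockfile=/tmp/demo.lock.$$

# First "instance": hold an exclusive lock for 2 seconds in the background.
flock --exclusive "$lockfile" -c 'sleep 2' &
holder=$!
sleep 0.5  # give the background holder time to actually acquire the lock

# Second "instance": with --nonblock, flock exits non-zero instead of waiting.
if flock --exclusive --nonblock "$lockfile" -c 'true'; then
  result=acquired
else
  result=busy
fi
echo "$result"

wait "$holder"
rm -f "$lockfile"
```

Because the background holder still owns the lock when the second call runs, the second call reports `busy`.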
If you want to execute more commands in your script under the lock, use
```shell
#!/bin/bash
....
(
  flock --nonblock 200 || exit 1
  # ... commands executed under lock ...
) 200>/var/lock/mylockfile
```
All operations following the flock call inside the sub-shell (...) are executed only if no other process currently holds a lock on /var/lock/mylockfile. The lock is automatically released when the sub-shell exits.
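A minimal sketch of that release behaviour, assuming a throwaway lock file path of my own: once the first sub-shell exits, a second non-blocking attempt on the same file succeeds immediately.

```shell
#!/bin/bash
# Sketch: the lock on fd 200 lives only as long as the sub-shell.
lockfile=/tmp/sublock.$$

first=$( ( flock --nonblock 200 || exit 1
           echo held ) 200>"$lockfile" )

# The first sub-shell has exited, so the lock has been released.
second=$( ( flock --nonblock 200 || exit 1
            echo held ) 200>"$lockfile" )

echo "$first/$second"
rm -f "$lockfile"
```

Both attempts print `held`, confirming the lock was dropped in between without any explicit unlock call.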
flock can also wait until the file lock has been released (that’s the default). In that case, do not use the --nonblock option, which instead makes flock fail immediately if the lock cannot be obtained.
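A sketch of that default blocking behaviour (lock file path and timings are illustrative assumptions): the second flock call below simply waits until the background holder releases the lock, then runs its command and succeeds.

```shell
#!/bin/bash
# Sketch of the default (blocking) behaviour: without --nonblock,
# flock waits for the lock instead of failing.
lockfile=/tmp/blockdemo.$$

flock "$lockfile" -c 'sleep 1' &   # background holder keeps the lock ~1s
sleep 0.2                          # make sure it has acquired the lock first

# This call blocks until the holder releases, then runs its command.
if flock "$lockfile" -c 'true'; then
  status=ok
else
  status=failed
fi
echo "$status"

wait
rm -f "$lockfile"
```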