This question was published by the Tutorial Guruji team.
I am trying to implement the following architecture, but I am having some trouble.
Basically, multiple threads access the same object in parallel. The first thread is meant to execute
Object.A(), and all the other threads are supposed to wait until that call is finished and then proceed to execute
Object.B() simultaneously, without blocking each other.
I can’t find any resources online that explain how to produce this behavior, and every “solution” that jumps to mind involving two custom boolean mutexes doesn’t seem to work.
The approach NathanOliver suggested in the comments, calling
A() on the main thread and only then spinning up the other threads, is the simplest one and the first you should consider.
But the construct you are looking for is called a “monitor” (the piece you are missing is a “condition variable”; “monitor” refers to the whole mutex + condition variable pair that you’ll need).
I’m not going to reproduce examples here since it’s pretty easy to find info (Travis Gockel just posted a nice example). Depending on your threading subsystem:
- For pthreads see condition variables.
- For the Windows API, see both Condition Variables and Event Objects; either can accomplish this. I slightly prefer the latter, when applicable, for basic boolean conditions, except that resetting the event can get tricky if you have multiple threads waiting and want to wake them all.
The general model, for a one-time run, is:
- First thread:
  - Run A().
  - When it returns, signal an “AFinished” condition variable.
- Other threads:
  - Wait on “AFinished”, then run B().
If you want to repeat the process, you’ll have to reset the condition after all the
B()s have run, and wait on that state before calling
A() again. You could use another condition variable for this, or a semaphore with some care, etc.
I know this is brief, but hopefully it at least gives you some keywords to search for.
Alternatively, take a look at thread pools with a task queue. You could queue a task that runs
A() and then, when it completes, enqueues a batch of
B() tasks. Those
B() tasks can maintain a shared semaphore, or just a basic thread-safe counter (
InterlockedDecrement on a
volatile counter makes this super simple on Windows), to monitor their progress, so the last one to finish can re-queue an
A() runner and repeat.