Limiting the number of parallel tasks with SemaphoreSlim – why does it work?

In the Microsoft documentation you can read about SemaphoreSlim: "Represents a lightweight alternative to Semaphore that limits the number of threads that can access a resource or pool of resources concurrently."
https://docs.microsoft.com/en-us/dotnet/api/system.threading.semaphoreslim?view=net-5.0

In my understanding, a Task is different from a Thread: a Task is higher-level than a Thread. Different tasks can run on the same thread, or a task can be continued on a different thread than the one it was started on.
(Compare: “server-side applications in .NET using asynchrony will use very few threads without limiting themselves to that. If everything really can be served by a single thread, it may well be – if you never have more than one thing to do in terms of physical processing, then that’s fine.” from the question “in C# how to run method async in the same thread”.)

IMO if you put this information together, the conclusion is that you can’t limit the number of Tasks running in parallel with the use of a semaphore slim, but…

  • there are other texts that give this kind of advice (How to limit the amount of concurrent async I/O operations?, see “You can definitely do this…”)
  • if I execute the code below on my machine, it seems it IS possible: with different values for _MaxDegreeOfParallelism and different ranges of numbers, _RunningTasksCount never exceeds the limit given by _MaxDegreeOfParallelism.

Can somebody provide some information to clarify this?

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Threading;
    using System.Threading.Tasks;

    public interface IRunner
    {
        void Run();
    }

    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");

            IRunner runner = new RunnerSemaphore();
            runner.Run();

            Console.WriteLine("Hit any key to close...");
            Console.ReadLine();
        }
    }
    public class RunnerSemaphore : IRunner
    {
        private readonly SemaphoreSlim _ConcurrencySemaphore;
        private List<int> _Numbers;
        private int _MaxDegreeOfParallelism = 3;
        private object _RunningTasksLock = new object();
        private int _RunningTasksCount = 0;

        public RunnerSemaphore()
        {
            _ConcurrencySemaphore = new SemaphoreSlim(_MaxDegreeOfParallelism);
            _Numbers = Enumerable.Range(1, 100).ToList();
        }

        public void Run()
        {
            RunAsync().Wait();
        }        

        private async Task RunAsync()
        {
            List<Task> allTasks = new List<Task>();            

            foreach (int number in _Numbers)
            {
                var task = Task.Run
                    (async () =>
                    {
                        await _ConcurrencySemaphore.WaitAsync();

                        bool isFast = number != 1; 
                        int delay = isFast ? 200 : 10000;

                        Console.WriteLine($"Start Work {number}\tManagedThreadId {Thread.CurrentThread.ManagedThreadId}\tRunning {IncreaseTaskCount()} tasks");
                        await Task.Delay(delay).ConfigureAwait(false);
                        Console.WriteLine($"End Work {number}\tManagedThreadId {Thread.CurrentThread.ManagedThreadId}\tRunning {DecreaseTaskCount()} tasks");
                    })
                    .ContinueWith((t) =>
                    {
                        _ConcurrencySemaphore.Release();
                    });


                allTasks.Add(task);
            }

            await Task.WhenAll(allTasks.ToArray());
        }

        private int IncreaseTaskCount()
        {
            int taskCount;
            lock (_RunningTasksLock)
            {                
                taskCount = ++_RunningTasksCount;
            }
            return taskCount;
        }

        private int DecreaseTaskCount()
        {
            int taskCount;
            lock (_RunningTasksLock)
            {
                taskCount = --_RunningTasksCount;
            }
            return taskCount;
        }        
    }

Answer

Represents a lightweight alternative to Semaphore that limits the number of threads that can access a resource or pool of resources concurrently.

Well, that was a perfectly fine description when SemaphoreSlim was first introduced – it was just a lightweight Semaphore. Since that time, it has gotten new methods (e.g., WaitAsync) that enable it to act as an asynchronous synchronization primitive.

In my understanding, a Task is different from a Thread: a Task is higher-level than a Thread. Different tasks can run on the same thread, or a task can be continued on a different thread than the one it was started on.

This is true for what I call “Delegate Tasks”. There’s also a completely different kind of Task that I call “Promise Tasks”. Promise tasks are similar to promises (or “futures”) in other languages (e.g., JavaScript), and they just represent the completion of some event. Promise tasks do not “run” anywhere; they just complete based on some future event (usually via a callback).
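A minimal sketch of the distinction (using this answer's terminology – "delegate task" and "promise task" are not official BCL terms): Task.Run produces a delegate task that wraps code running on a thread-pool thread, while TaskCompletionSource produces a promise task on which no code runs at all – it just completes when something signals it.

```csharp
using System;
using System.Threading.Tasks;

class DelegateVsPromise
{
    static async Task Main()
    {
        // Delegate task: wraps code that executes on a thread-pool thread.
        Task<int> delegateTask = Task.Run(() => 21 * 2);

        // Promise task: represents a future event; no code runs "inside" it.
        var tcs = new TaskCompletionSource<int>();
        Task<int> promiseTask = tcs.Task;

        Console.WriteLine(await delegateTask);  // 42

        // The promise task completes only when something signals it.
        tcs.SetResult(42);
        Console.WriteLine(await promiseTask);   // 42
    }
}
```

Note that Task.Delay also returns a promise task: it is completed by a timer callback, not by a thread sitting blocked for the duration.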

async methods always return promise tasks. The code in an asynchronous method is not actually run as part of the task; the task itself only represents the completion of the async method. I recommend my async intro for more information about async and how the code portions are scheduled.

if you put this information together, the conclusion is that you can’t limit the number of Tasks running in parallel with the use of a semaphore slim

This is personal preference, but I try to be very careful about terminology, precisely to avoid problems like this question. Delegate tasks may run in parallel, e.g., Parallel. Promise tasks do not “run”, and they don’t run in “parallel”, but you can have multiple concurrent promise tasks that are all in progress. And SemaphoreSlim’s WaitAsync is a perfect match for limiting that kind of concurrency.
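A minimal sketch of that throttling pattern (the names and numbers here are illustrative, not from the question): WaitAsync asynchronously waits for a free slot without blocking a thread, and the try/finally guarantees the slot is released even if the operation throws.

```csharp
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class Throttle
{
    // At most 3 operations in flight at any one time.
    private static readonly SemaphoreSlim _semaphore = new SemaphoreSlim(3);

    private static async Task DoWorkAsync(int number)
    {
        await _semaphore.WaitAsync();  // asynchronously wait for a free slot
        try
        {
            await Task.Delay(200);     // stand-in for real asynchronous I/O
            Console.WriteLine($"Done {number}");
        }
        finally
        {
            _semaphore.Release();      // always give the slot back
        }
    }

    static Task Main() =>
        Task.WhenAll(Enumerable.Range(1, 10).Select(DoWorkAsync));
}
```

What is limited here is the concurrency of operations, not threads: all ten promise tasks exist at once, but only three are past WaitAsync at any moment, and while the delays are pending no thread is consumed at all.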

You may wish to read about Stephen Toub’s AsyncSemaphore (and other articles in that series). It’s not the same implementation as SemaphoreSlim, but behaves essentially the same as far as promise tasks are concerned.
