C++17 PMR: Set the number of blocks and their size in an unsynchronized_pool_resource

Is there any rule for setting, in the most effective way, the number of blocks in a chunk (max_blocks_per_chunk) and the largest required block size (largest_required_pool_block) of an unsynchronized_pool_resource?

How to avoid unnecessary memory allocations?

For example, have a look at this demo.

How to reduce the number of allocations that take place as much as possible?


Pooled allocators operate on a trade-off between wasted memory and upstream allocator calls. Reducing one will almost always increase the other, and vice versa.

On top of that, one of the primary reasons behind their use (in my experience, at least) is to limit or outright eliminate memory fragmentation for long-running processes in memory-constrained scenarios. So it is sort of assumed that “throwing more memory at the problem” is going to be counterproductive more often than not.

Because of this, there is no universal one-size-fits-all rule here. What is preferable will invariably be dictated by the needs of your application.

Figuring out the correct values for max_blocks_per_chunk and largest_required_pool_block is ideally based on a thorough memory usage analysis so that the achieved balance benefits the application as much as possible.

However, given the wording of the question:

How to avoid unnecessary memory allocations?

How to reduce the number of allocations that take place as much as possible?

If you want to minimize upstream allocator calls as much as possible, then it’s simple:

  • Make largest_required_pool_block the largest frequent allocation size you expect the allocator to face. A larger block size means more allocations will qualify for pooled allocation.
  • Make max_blocks_per_chunk as large as you dare, up to the maximum number of concurrent allocations for any given block size. More blocks per chunk means more allocations between requests to the upstream resource.

The only limiting factor is how much memory footprint bloat you are willing to tolerate for your application.