Determinism of service times is also an issue in the area of dynamic RAM allocation. Many general-computing non-real-time operating systems offer memory allocation services from what is termed a “heap.” The famous “malloc” and “free” services known to C-language programmers work from a heap. A task can temporarily borrow some memory from the operating system’s heap by calling “malloc” and specifying the size of the memory buffer needed. When this task (or another task) is finished with the buffer, it can return it to the operating system by calling “free.” The operating system then returns the buffer to the heap, where its memory might be used again, perhaps as part of a larger buffer, or perhaps broken into several smaller buffers in the future.
Heaps suffer from a phenomenon called “external memory fragmentation” that may cause heap services to degrade. Fragmentation arises because a buffer returned to the heap may later be broken into smaller buffers to satisfy “malloc” requests for smaller sizes. After a heap undergoes many cycles of “malloc”s and “free”s, small slivers of memory may appear between the memory buffers that tasks are using. These slivers are so small that they are useless to tasks, yet they are trapped between in-use buffers, so they cannot be coagulated (“glued”) together into bigger, useful buffer sizes. Over time, a heap accumulates more and more of these slivers. Eventually a task will ask “malloc” for a buffer of a certain size and be refused by the operating system, even though the operating system has enough available memory in its heap. The problem: that memory is scattered in small slivers distributed in various separate parts of the heap. In operating system terminology, the slivers are called “fragments,” and this problem is called “external memory fragmentation.”
This fragmentation problem can be solved by so-called “garbage collection” (defragmentation) software. Unfortunately, garbage collection algorithms are often wildly non-deterministic, injecting randomly appearing delays of random duration into heap services. Such algorithms are often seen in the memory allocation services of general-computing non-real-time operating systems.
This puts the embedded system developer who wants to use a general-computing non-real-time operating system into a quandary: Should the embedded system be allowed to suffer occasional randomly appearing delays of random duration if and when “garbage collection” kicks in? Or should the embedded system instead be allowed to fragment its memory until application software “malloc” requests to the heap are refused, even though a sufficient total amount of free memory is still available? Neither alternative is acceptable for embedded systems that need to provide service continually for long periods of time.
Real-time operating systems, on the other hand, solve this quandary by avoiding both memory fragmentation and “garbage collection” altogether, along with their consequences. RTOSs offer non-fragmenting memory allocation techniques instead of heaps. They do this by limiting the variety of memory chunk sizes they make available to application software. While this approach is less flexible than the approach taken by memory heaps, it avoids external memory fragmentation and the need for defragmentation. For example, the “Pools” memory allocation mechanism allows application software to allocate chunks of memory in perhaps 4 or 8 different buffer sizes per pool. Pools totally avoid external memory fragmentation by never permitting a buffer that is returned to the pool to be broken into smaller buffers. Instead, when a buffer is returned to the pool, it is put onto a “free buffer list” of buffers of its own size that are available for future re-use at their original buffer size. This is shown in Figure 7.
Memory is allocated and de-allocated from a pool with deterministic, often constant, timing.