“Memory Fragmentation”: is it still an issue? [on hold]


I'm a little bit confused. In our OS course we were told that every OS takes care of memory fragmentation through paging or segmentation, and that there is no contiguous physical memory allocation at all: the OS uses different levels of addressing (logical/physical) to avoid contiguous allocation. Yet here there are so many discussions about it. My question is: is this problem real in C++ programming on OSes that support logical addressing (does any process actually crash just because of memory fragmentation)? If yes, why does every OS try to avoid contiguous allocation in the first place?

There are two layers: fragmentation in the virtual address space of a process and fragmentation in physical memory.

If you look at almost any long-running application, you can see how its memory usage grows over time because memory is not released back to the OS. You can attribute this to other causes, but heap fragmentation (live allocations scattered across the address space, interleaved with free gaps) is the core reason: the allocator cannot hand a page back to the OS while even a single live allocation still sits on it, so in practice it holds on to the memory.
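As a minimal C++ sketch of that pattern (the block count and block size are arbitrary, and the exact behaviour depends on which allocator your runtime uses): allocate many small blocks, then free every other one. Roughly half the heap becomes free, yet the resident memory of the process barely shrinks, because the surviving blocks keep their pages pinned.

    // Illustration only: interleave live and freed blocks so the allocator
    // cannot return whole pages to the OS even though half the heap is free.
    #include <cstddef>
    #include <cstdlib>
    #include <vector>

    int main() {
        constexpr std::size_t kBlocks = 100000;
        constexpr std::size_t kBlockSize = 4096;   // one page per block, for illustration

        std::vector<void*> blocks;
        blocks.reserve(kBlocks);
        for (std::size_t i = 0; i < kBlocks; ++i)
            blocks.push_back(std::malloc(kBlockSize));

        // Free every other block: the freed bytes are reusable inside the
        // process, but almost no page becomes completely empty.
        for (std::size_t i = 0; i < kBlocks; i += 2) {
            std::free(blocks[i]);
            blocks[i] = nullptr;
        }

        // Observing the process from outside (e.g. its RSS in top) at this
        // point usually shows little or no drop in memory usage.

        for (void* p : blocks)
            std::free(p);   // freeing nullptr is a no-op
        return 0;
    }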

If you are interested in fragmentation of physical memory, then even with memory organized in pages there is still a need to allocate physically contiguous chunks. For example, to reduce address-translation (TLB) overhead you might want to use large pages ("huge pages" in Linux terms); x86_64 supports 4 KiB, 2 MiB and 1 GiB pages. If there is no physically contiguous free memory of the required size, you won't be able to use them.
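A hedged, Linux-only sketch of what that looks like in practice: the code below asks for a single 2 MiB huge page with mmap(MAP_HUGETLB). The call fails (typically with ENOMEM) when the kernel has no physically contiguous 2 MiB region to hand out, for example because no huge pages were reserved via /proc/sys/vm/nr_hugepages or physical memory is too fragmented. The size and error handling here are illustrative only.

    // Linux-specific: MAP_HUGETLB requests a 2 MiB huge page backing.
    #include <sys/mman.h>
    #include <cerrno>
    #include <cstddef>
    #include <cstdio>
    #include <cstring>

    int main() {
        const std::size_t kHugePage = 2 * 1024 * 1024;   // 2 MiB

        void* p = mmap(nullptr, kHugePage,
                       PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB,
                       -1, 0);
        if (p == MAP_FAILED) {
            // Typically ENOMEM: no huge page reserved or available.
            std::fprintf(stderr, "huge page mmap failed: %s\n", std::strerror(errno));
            return 1;
        }

        std::memset(p, 0, kHugePage);   // touch the page so it is actually backed
        munmap(p, kHugePage);
        return 0;
    }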

If by OS you mean the kernel, then it cannot help you with fragmentation that happens inside the process address space (heap fragmentation). If the C library is considered part of the OS, then its allocator should try to avoid fragmentation as much as possible; unfortunately, it is not always able to do so. See the linked question.

A memory allocator is usually not able to release a large chunk of memory back to the OS if even a small part of it is still allocated. There is a partial solution that takes advantage of virtual memory being organized in pages: the so-called "lazy free" mechanism, represented by MADV_FREE on Linux and the BSDs and by DiscardVirtualMemory on Windows. When you have a huge chunk of memory that is only partially used, you can notify the kernel that part of it is not needed any more and that the kernel may take it back under memory pressure. The reclaim happens lazily and only under memory pressure, because giving pages back and faulting them in again is expensive. Even so, many memory allocators still do not use it for performance reasons.
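A rough Linux sketch of the lazy-free idea (the region size is arbitrary, and the older MADV_DONTNEED would reclaim eagerly instead): map a region, touch all of it, then tell the kernel that the second half is no longer needed while keeping it mapped. MADV_FREE requires Linux 4.5 or newer; on older kernels madvise reports EINVAL.

    #include <sys/mman.h>
    #include <cstddef>
    #include <cstdio>
    #include <cstring>

    int main() {
        const std::size_t kSize = 64 * 1024 * 1024;   // 64 MiB region

        void* mem = mmap(nullptr, kSize,
                         PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS,
                         -1, 0);
        if (mem == MAP_FAILED) return 1;
        char* region = static_cast<char*>(mem);

        std::memset(region, 1, kSize);   // touch every page so it becomes resident

        // Suppose only the first half is still in use. Hand the second half
        // back lazily: it stays mapped and reusable, but the kernel may drop
        // these pages under memory pressure.
        if (madvise(region + kSize / 2, kSize / 2, MADV_FREE) != 0)
            std::perror("madvise(MADV_FREE)");

        munmap(region, kSize);
        return 0;
    }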

So the answer to your question is: it depends on how much you care about the efficiency of your program. Most programs do not care, because the standard allocator simply does the job for them. Some programs might suffer when the standard allocator cannot do its job efficiently.
