Accessing memory mapped files is faster than using direct read and write operations for two reasons.
- Firstly, a system call is orders of magnitude slower than a simple change to a program’s local memory.
- Secondly, in most operating systems the mapped memory region is backed directly by the kernel’s page cache (file cache), so no copies of the data need to be made in user space.
Memory mapping implements demand paging: file contents are not read from disk immediately, and the mapping initially consumes no physical RAM at all. Instead, the actual reads from disk are performed lazily, after a specific location in the mapping is first accessed.