commit 0889eba5b3

x86_64 uses 2M page table entries to map its 1-1 kernel space. We also
implement the virtual memmap using 2M page table entries. So there is no
additional runtime overhead over FLATMEM, though initialisation is slightly
more complex. As FLATMEM still references memory to obtain the mem_map
pointer and SPARSEMEM_VMEMMAP uses a compile-time constant, SPARSEMEM_VMEMMAP
should be superior. With this, SPARSEMEM becomes the most efficient way of
handling virt_to_page, pfn_to_page and friends for UP, SMP and NUMA on x86_64.

[apw@shadowen.org: code resplit, style fixups]
[apw@shadowen.org: vmemmap x86_64: ensure end of section memmap is initialised]
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Cc: Andi Kleen <ak@suse.de>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
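As a rough illustration of why virt_to_page/pfn_to_page become cheap with a
virtual memmap, here is a minimal user-space sketch of the idea (the struct
page stand-in and helper names are simplified; VMEMMAP_START is taken from
the memory map below, not from kernel headers):

/*
 * Sketch only: with SPARSEMEM_VMEMMAP the struct page array for every pfn
 * lives in one virtually contiguous region starting at a compile-time
 * constant, so the pfn <-> page conversions reduce to pointer arithmetic
 * with no memory access and no table lookup.
 */
#include <stdint.h>

struct page { uint64_t flags; /* stand-in for the real struct page */ };

#define VMEMMAP_START 0xffffe20000000000UL  /* start of the virtual memory map below */
#define vmemmap ((struct page *)VMEMMAP_START)

static inline struct page *pfn_to_page_sketch(uint64_t pfn)
{
        return vmemmap + pfn;
}

static inline uint64_t page_to_pfn_sketch(const struct page *page)
{
        return (uint64_t)(page - vmemmap);
}
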
<previous description obsolete, deleted>

Virtual memory map with 4 level page tables:

0000000000000000 - 00007fffffffffff (=47 bits) user space, different per mm
hole caused by [48:63] sign extension
ffff800000000000 - ffff80ffffffffff (=40 bits) guard hole
ffff810000000000 - ffffc0ffffffffff (=46 bits) direct mapping of all phys. memory
ffffc10000000000 - ffffc1ffffffffff (=40 bits) hole
ffffc20000000000 - ffffe1ffffffffff (=45 bits) vmalloc/ioremap space
ffffe20000000000 - ffffe2ffffffffff (=40 bits) virtual memory map (1TB)
... unused hole ...
ffffffff80000000 - ffffffff82800000 (=40 MB) kernel text mapping, from phys 0
... unused hole ...
ffffffff88000000 - fffffffffff00000 (=1919 MB) module mapping space
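
The "(=NN bits)" annotations above are just log2 of each range's size. A
small user-space sketch (ranges copied from the table; purely illustrative)
that reproduces a few of them:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
        /* start and inclusive end taken from the memory map above */
        struct { const char *name; uint64_t start, end; } r[] = {
                { "direct mapping",     0xffff810000000000UL, 0xffffc0ffffffffffUL },
                { "vmalloc/ioremap",    0xffffc20000000000UL, 0xffffe1ffffffffffUL },
                { "virtual memory map", 0xffffe20000000000UL, 0xffffe2ffffffffffUL },
        };

        for (size_t i = 0; i < sizeof(r) / sizeof(r[0]); i++) {
                uint64_t size = r[i].end - r[i].start + 1;
                /* log2(size) is the "(=NN bits)" column */
                printf("%-20s =%d bits (%llu GiB)\n", r[i].name,
                       63 - __builtin_clzll(size),
                       (unsigned long long)(size >> 30));
        }
        return 0;
}
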
The direct mapping covers all memory in the system up to the highest
memory address (this means in some cases it can also include PCI memory
holes).

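Conceptually, converting between a physical address and its direct-mapping
virtual address is a single constant offset. A sketch, assuming the direct
mapping starts at ffff810000000000 as listed above (the real kernel helpers
are __va()/__pa()):

#define DIRECT_MAP_START 0xffff810000000000UL  /* from the map above */

/* sketch of __va(): physical address -> direct-mapping virtual address */
static inline void *phys_to_virt_sketch(unsigned long paddr)
{
        return (void *)(paddr + DIRECT_MAP_START);
}

/* sketch of __pa(): direct-mapping virtual address -> physical address */
static inline unsigned long virt_to_phys_sketch(const void *vaddr)
{
        return (unsigned long)vaddr - DIRECT_MAP_START;
}
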
vmalloc space is lazily synchronized into the different PML4 pages of
the processes using the page fault handler, with init_level4_pgt as
reference.

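A simplified sketch of that lazy synchronization, using kernel-internal
helpers (pgd_offset_k, pgd_offset, pgd_none, set_pgd); this shows the idea
behind the vmalloc fault path, not its literal code:

#include <linux/mm.h>
#include <linux/sched.h>

/*
 * Vmalloc mappings are installed only in the reference page table
 * (init_level4_pgt).  When a process first touches a vmalloc address,
 * its fault handler copies the missing top-level (PML4) entry from the
 * reference table into the process's own PML4 and retries the access.
 */
static int vmalloc_fault_sketch(struct mm_struct *mm, unsigned long address)
{
        pgd_t *pgd_ref = pgd_offset_k(address);   /* entry in init_level4_pgt */
        pgd_t *pgd     = pgd_offset(mm, address); /* entry in the faulting mm  */

        if (pgd_none(*pgd_ref))
                return -1;                        /* genuinely unmapped address */

        if (pgd_none(*pgd))
                set_pgd(pgd, *pgd_ref);           /* copy the missing entry */

        return 0;                                 /* caller retries the access */
}
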
Current X86-64 implementations only support 40 bits of physical address space,
but we support up to 46 bits. This expands into MBZ space in the page tables.

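To see what a particular CPU actually implements, CPUID leaf 0x80000008
reports the physical address width in EAX[7:0] and the virtual (linear)
width in EAX[15:8]; a user-space sketch using GCC's <cpuid.h>:

#include <stdio.h>
#include <cpuid.h>

int main(void)
{
        unsigned int eax, ebx, ecx, edx;

        if (!__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx)) {
                fprintf(stderr, "CPUID leaf 0x80000008 not supported\n");
                return 1;
        }
        printf("physical address bits: %u\n", eax & 0xff);
        printf("virtual address bits:  %u\n", (eax >> 8) & 0xff);
        return 0;
}
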
-Andi Kleen, Jul 2004