From a software perspective, R5000 and R5000A are the same thing, which is
why the symbol CPU_R5000A never got used, so finally delete it.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
On some CPUs the write to pagemask might complete before the TLB write
instruction reads from the pagemask register.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
R5000 and the Nevada CPUs (RM5230, RM5231, RM5260, RM5261, RM5270 and
RM5271) are basically the same CPU core, and all are documented to require
two instructions separating a write to c0_pagemask, c0_entryhi, c0_entrylo0,
c0_entrylo1 or c0_index from a subsequent TLBR or TLBWI.
So far we were only providing one cycle before / after a TLBR/TLBWI
for R5000, but 3 cycles before and 1 cycle after for the Nevadas.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The microassembler used in tlbex.c does not notice if a label is redefined,
resulting in relocations against such labels being silently misrelocated.
The issue has existed since commit add6eb04776db4189ea89f596cbcde31b899be9d
[Synthesize TLB exception handlers at runtime.] in 2.6.10 and went unnoticed
for so long because the relocations for the affected branches got computed
to do something *almost* sensible.
The issue affects R4000, R4400, QED/IDT RM5230, RM5231, RM5260, RM5261,
RM5270 and RM5271 processors.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
We don't have to do a separate shift to eliminate the software bits;
just rotate them into the fill and they will be ignored.
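A standalone sketch of the trick (the number of software bits is
illustrative; the real handler emits the rotate through the
micro-assembler):

    #include <stdint.h>

    #define SOFT_BITS 2   /* illustrative count of low, software-only PTE bits */

    /* Rotating right by SOFT_BITS pushes the software bits into the top of
     * the word, i.e. into EntryLo's fill field, which the hardware ignores,
     * so no separate masking shift is needed. */
    static uint64_t pte_to_entrylo(uint64_t pte)
    {
            return (pte >> SOFT_BITS) | (pte << (64 - SOFT_BITS));
    }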
Signed-off-by: David Daney <ddaney@caviumnetworks.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/4294/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
On a dual-issue processor GCC generates code that saves a couple of
clock cycles per loop if we rearrange things slightly. Checking for
p != end saves an SLTU per loop, and moving the increment to the middle
lets it dual-issue on multi-issue processors.
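A hedged sketch of the kind of rearrangement meant here (illustrative fill
loop, not the exact kernel function):

    #include <stdint.h>

    /* Before: 'p < end' needs an SLTU before the branch, and the increment
     * at the very end serializes with the compare. */
    static void fill_before(uint64_t *p, uint64_t *end, uint64_t entry)
    {
            do {
                    p[0] = entry;
                    p[1] = entry;
                    p[2] = entry;
                    p[3] = entry;
                    p += 4;
            } while (p < end);
    }

    /* After: 'p != end' maps straight onto BNE, and the increment in the
     * middle gives a dual-issue core something to pair with the stores. */
    static void fill_after(uint64_t *p, uint64_t *end, uint64_t entry)
    {
            do {
                    p[0] = entry;
                    p[1] = entry;
                    p += 4;
                    p[-2] = entry;
                    p[-1] = entry;
            } while (p != end);
    }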
Signed-off-by: David Daney <ddaney@caviumnetworks.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/4249/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
We can save an instruction in the TLB Refill path for kernel mappings
by aligning swapper_pg_dir on a 64K boundary. The address of
swapper_pg_dir can then be generated with a single LUI instead of a
LUI/{D}ADDIU pair.
The alignment of __init_end is bumped up to 64K so there are no holes
between it and swapper_pg_dir, which is placed at the very beginning
of .bss.
The alignment of invalid_pmd_table and invalid_pte_table can be
relaxed to PAGE_SIZE. We do this by using __page_aligned_bss, which
has the added benefit of eliminating alignment holes in .bss.
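A minimal sketch of the two pieces described above (exact section names and
helper macros may differ from the real patch):

    /* 64K alignment lets the refill handler form the address of
     * swapper_pg_dir with a single LUI, as the low 16 bits are zero. */
    pgd_t swapper_pg_dir[PTRS_PER_PGD] __aligned(0x10000);

    /* Page alignment is enough for the invalid tables; __page_aligned_bss
     * also keeps them from creating alignment holes in .bss. */
    pmd_t invalid_pmd_table[PTRS_PER_PMD] __page_aligned_bss;
    pte_t invalid_pte_table[PTRS_PER_PTE] __page_aligned_bss;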
Signed-off-by: David Daney <david.daney@cavium.com>
Cc: linux-mips@linux-mips.org
Cc: linux-arch@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Acked-by: Arnd Bergmann <arnd@arndb.de>
Patchwork: https://patchwork.linux-mips.org/patch/4220/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Merge patches from Andrew Morton:
"A few misc things and very nearly all of the MM tree. A tremendous
amount of stuff (again), including a significant rbtree library
rework."
* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (160 commits)
sparc64: Support transparent huge pages.
mm: thp: Use more portable PMD clearing sequenece in zap_huge_pmd().
mm: Add and use update_mmu_cache_pmd() in transparent huge page code.
sparc64: Document PGD and PMD layout.
sparc64: Eliminate PTE table memory wastage.
sparc64: Halve the size of PTE tables
sparc64: Only support 4MB huge pages and 8KB base pages.
memory-hotplug: suppress "Trying to free nonexistent resource <XXXXXXXXXXXXXXXX-YYYYYYYYYYYYYYYY>" warning
mm: memcg: clean up mm_match_cgroup() signature
mm: document PageHuge somewhat
mm: use %pK for /proc/vmallocinfo
mm, thp: fix mlock statistics
mm, thp: fix mapped pages avoiding unevictable list on mlock
memory-hotplug: update memory block's state and notify userspace
memory-hotplug: preparation to notify memory block's state at memory hot remove
mm: avoid section mismatch warning for memblock_type_name
make GFP_NOTRACK definition unconditional
cma: decrease cc.nr_migratepages after reclaiming pagelist
CMA: migrate mlocked pages
kpageflags: fix wrong KPF_THP on non-huge compound pages
...
.fault can now retry, and the retry can break the .fault state machine. In
filemap_fault, if the page is missing, ra->mmap_miss is increased. On the
second try, since the page is now in the page cache, ra->mmap_miss is
decreased. Both happen within a single fault, so we can no longer detect
random mmap file access.
Add a new flag to indicate that .fault has already been tried once; on the
second try, skip the ra->mmap_miss decrease. The filemap_fault state machine
is fine with this.
I only tested x86, not the other archs, but the change for the other archs
looks obvious. But who knows :)
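A hedged sketch of the idea (the flag name follows the description above;
the exact call sites in the arch fault handlers and in filemap_fault may
differ):

    /* arch fault handler: mark the retry so the fault path can tell it
     * apart from a fresh fault. */
    if (fault & VM_FAULT_RETRY) {
            flags &= ~FAULT_FLAG_ALLOW_RETRY;
            flags |= FAULT_FLAG_TRIED;
            goto retry;
    }

    /* filemap readahead side: don't credit a "hit" for a page that is only
     * present because the first attempt of this same fault read it in. */
    if (!(vmf->flags & FAULT_FLAG_TRIED) && ra->mmap_miss > 0)
            ra->mmap_miss--;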
Signed-off-by: Shaohua Li <shaohua.li@fusionio.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Remove usage of the 'kernel_uses_smartmips_rixi' macro from all files
and use the new 'cpu_has_rixi' macro instead.
Signed-off-by: Steven J. Hill <sjhill@mips.com>
Acked-by: David Daney <david.daney@cavium.com>
The EXT and INS instructions can be used to decrease code size and
thus speed up TLB handlers on MIPS32R2 and MIPS64R2 cores.
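A standalone C model of the kind of sequence this shortens (field position
and width are hypothetical; the real handlers are emitted through the
micro-assembler):

    #include <stdint.h>

    #define POS 12   /* hypothetical bit-field position */
    #define LEN  9   /* hypothetical bit-field width    */

    /* Pre-R2 style: isolate the field with a shift pair. */
    static uint32_t extract_shifts(uint32_t word)
    {
            return (word << (32 - (POS + LEN))) >> (32 - LEN);
    }

    /* MIPS32R2 style: the same operation maps onto a single
     * "ext rt, rs, POS, LEN" instruction. */
    static uint32_t extract_ext(uint32_t word)
    {
            return (word >> POS) & ((1u << LEN) - 1);
    }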
Signed-off-by: Steven J. Hill <sjhill@mips.com>
The architecture specification says that an EHB instruction is
needed to avoid a hazard when writing TLB entries. However, some
cores do not have this hazard, and thus the EHB instruction causes
a costly pipeline stall. Detect these cores and do not use the EHB
instruction.
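A hedged sketch of the shape of such a check (the core listed is purely
illustrative; the real patch keys the decision off the detected CPU type
when the handlers are built):

    /* Decide at handler-build time whether an EHB must follow a TLB write. */
    static int cpu_needs_ehb_after_tlbw(void)
    {
            switch (current_cpu_type()) {
            case CPU_CAVIUM_OCTEON:         /* illustrative: no hazard here */
                    return 0;
            default:
                    return 1;               /* conservative default */
            }
    }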
Signed-off-by: Steven J. Hill <sjhill@mips.com>
See commit b6999b191, which did the same modification for x86's mm/gup.
Quoting from commit b6999b191:
"If compound pages are used and the page is a
tail page, gup_huge_pmd() increases _mapcount to record tail page are
mapped while gup_huge_pud does not do that."
[ralf@linux-mips.org: fixed rejects caused by the original patch getting
linewrapped.]
Signed-off-by: Jovi Zhang <boojovi@gmail.com>
Cc: Youquan Song <youquan.song@intel.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: <stable@vger.kernel.org>
Patchwork: https://patchwork.linux-mips.org/patch/4291/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
R4K-style CPUs, which share common code for their caches and TLB, now have
this boolean defined by default. This allows us to remove some lines in
arch/mips/mm/Makefile.
Signed-off-by: Florian Fainelli <florian@openwrt.org>
Patchwork: http://patchwork.linux-mips.org/patch/3328/
Signed-off-by: John Crispin <blogic@openwrt.org>
A number of new instructions have been added to the micro assembler causing
the list to no longer be in alphabetical order. This patch fixes up the name
ordering.
Signed-off-by: Steven J. Hill <sjhill@mips.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/3789/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Remove usage of the '__attribute__((alias("...")))' hack that aliased
to integer arrays containing micro-assembled instructions. This hack
breaks when building a microMIPS kernel. Removing it also makes the code
much easier to understand.
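For context, a hedged sketch of the kind of construct being removed (names
and sizes are illustrative, not the exact kernel code):

    /* A buffer the micro-assembler fills with instructions at boot time ... */
    static u32 clear_page_array[0x120 / 4];

    /* ... and a function symbol aliased onto that buffer, so that calls to
     * clear_page() jump into the runtime-assembled code. */
    void clear_page(void *page) __attribute__((alias("clear_page_array")));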
[ralf@linux-mips.org: Added back export of the clear_page and copy_page
symbols so certain modules will work again. Also fixed build with
CONFIG_SIBYTE_DMA_PAGEOPS enabled.]
Signed-off-by: Steven J. Hill <sjhill@mips.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/3866/
Acked-by: David Daney <david.daney@cavium.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Pull DMA mapping branch from Marek Szyprowski:
"Short summary for the whole series:
A few limitations have been identified in the current dma-mapping
design and its implementations for various architectures. More than one
function exists for allocating and freeing the buffers: currently
dma_{alloc,free}_coherent, dma_{alloc,free}_writecombine and
dma_{alloc,free}_noncoherent are used.
For most of the systems these calls are almost equivalent and can be
interchanged. For others, especially the truly non-coherent ones
(like ARM), the difference can be easily noticed in overall driver
performance. Sadly not all architectures provide implementations for
all of them, so the drivers might need to be adapted and cannot be
easily shared between different architectures. The provided patches
unify all these functions and hide the differences under the already
existing dma attributes concept. The thread with more references is
available here:
http://www.spinics.net/lists/linux-sh/msg09777.html
These patches are also a prerequisite for unifying DMA-mapping
implementation on ARM architecture with the common one provided by
dma_map_ops structure and extending it with IOMMU support. More
information is available in the following thread:
http://thread.gmane.org/gmane.linux.kernel.cross-arch/12819
More work on the dma-mapping framework is planned, especially in the
area of buffer sharing and managing the shared mappings (together with
the recently introduced dma_buf interface: commit d15bd7ee44
"dma-buf: Introduce dma buffer sharing mechanism").
The patches in the current set introduce new alloc/free methods
(with support for memory attributes) in dma_map_ops structure, which
will later replace dma_alloc_coherent and dma_alloc_writecombine
functions."
People finally started piping up with support for merging this, so I'm
merging it as the last of the pending stuff from the merge window.
Looks like pohmelfs is going to wait for 3.5 and more external support
for merging.
* 'for-linus' of git://git.linaro.org/people/mszyprowski/linux-dma-mapping:
common: DMA-mapping: add NON-CONSISTENT attribute
common: DMA-mapping: add WRITE_COMBINE attribute
common: dma-mapping: introduce mmap method
common: dma-mapping: remove old alloc_coherent and free_coherent methods
Hexagon: adapt for dma_map_ops changes
Unicore32: adapt for dma_map_ops changes
Microblaze: adapt for dma_map_ops changes
SH: adapt for dma_map_ops changes
Alpha: adapt for dma_map_ops changes
SPARC: adapt for dma_map_ops changes
PowerPC: adapt for dma_map_ops changes
MIPS: adapt for dma_map_ops changes
X86 & IA64: adapt for dma_map_ops changes
common: dma-mapping: introduce generic alloc() and free() methods
This has been obsolescent for a while; time for the final push.
In adjacent context, replaced old cpus_* with cpumask_*.
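A hedged example of the adjacent-context conversion mentioned (call sites
are illustrative):

    /* old style: cpumask operated on by value */
    cpus_clear(tmp);
    cpu_set(cpu, tmp);
    if (cpus_empty(tmp))
            return;

    /* new style: cpumask operated on through a pointer */
    cpumask_clear(&tmp);
    cpumask_set_cpu(cpu, &tmp);
    if (cpumask_empty(&tmp))
            return;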
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Acked-by: David S. Miller <davem@davemloft.net> (arch/sparc)
Acked-by: Chris Metcalf <cmetcalf@tilera.com> (arch/tile)
Cc: user-mode-linux-devel@lists.sourceforge.net
Cc: Russell King <linux@arm.linux.org.uk>
Cc: linux-arm-kernel@lists.infradead.org
Cc: Richard Kuo <rkuo@codeaurora.org>
Cc: linux-hexagon@vger.kernel.org
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: Kyle McMartin <kyle@mcmartin.ca>
Cc: Helge Deller <deller@gmx.de>
Cc: sparclinux@vger.kernel.org
[swarren@nvidia.com: highmem: Fix ARM build break due to __kmap_atomic rename]
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Cong Wang <amwang@redhat.com>
Commit d065bd810b (mm: retry page fault when blocking on disk transfer)
and commit 37b23e0525 (x86,mm: make pagefault killable) introduced changes
into the x86 page fault handler to make it retryable as well as killable.
These changes reduce the mmap_sem hold time, which is crucial
during OOM killer invocation.
Port these changes to MIPS.
Without these changes, my MIPS board encounters many hang and livelock
scenarios.
After applying this patch, OOM feature performance improves according to
my testing.
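A hedged sketch of the retry pattern being ported (flag and return-value
names follow the generic mm API of that era; the surrounding MIPS handler
is abbreviated):

    unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE |
                         (write ? FAULT_FLAG_WRITE : 0);
    int fault;

    retry:
            down_read(&mm->mmap_sem);
            /* ... look up and validate the vma ... */
            fault = handle_mm_fault(mm, vma, address, flags);

            if ((fault & VM_FAULT_RETRY) && (flags & FAULT_FLAG_ALLOW_RETRY)) {
                    /* handle_mm_fault() already dropped mmap_sem for us;
                     * fatal signals are checked before retrying. */
                    flags &= ~FAULT_FLAG_ALLOW_RETRY;
                    goto retry;
            }
            up_read(&mm->mmap_sem);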
Signed-off-by: Mohd. Faris <mohdfarisq2010@gmail.com>
Signed-off-by: Kautuk Consul <consul.kautuk@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/3217/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
This was only available for R4000-style TLBs anyway, and proper ordering of
the initialization code has made this crude interface unnecessary.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
When flushing the TLB, if @vma is backed by huge pages, we can flush huge
TLB entries, since huge pages are defined quite differently from normal pages.
Signed-off-by: Hillf Danton <dhillf@gmail.com>
Acked-by: David Daney <david.daney@cavium.com>
Cc: linux-mips@linux-mips.org
Cc: "Jayachandran C." <jayachandranc@netlogicmicro.com>
Patchwork: https://patchwork.linux-mips.org/patch/2825/
Signed-off-by: David Daney <david.daney@cavium.com>
Acked-by: Hillf Danton <dhillf@gmail.com>
Patchwork: https://patchwork.linux-mips.org/patch/3114/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Add support for Netlogic's XLP MIPS SoC. This patch adds:
* XLP processor ID in cpu_probe.c and asm/cpu.h
* XLP case to asm/module.h
* CPU_XLP case to mm/tlbex.c
* minor change to r4k cache handling to ignore XLP secondary cache
* XLP cpu overrides to mach-netlogic/cpu-feature-overrides.h
Signed-off-by: Jayachandran C <jayachandranc@netlogicmicro.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/2966/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
This patch addresses a couple of related problems:
1) The kernel may reside in physical memory outside of the ranges set
by plat_mem_setup(). If this is the case, init mem cannot be
reused as it resides outside of the range of pages that the kernel
memory allocators control.
2) initrd images might be loaded in physical memory outside of the
ranges set by plat_mem_setup(). The memory likewise cannot be
reused. The patch doesn't handle this specific case, but the
infrastructure is useful for future patches that do.
The crux of the problem is that there are memory regions that need to be
marked memory_present(), but that cannot be passed to free_bootmem() at the
time of arch_mem_init(). We create a new type of memory (BOOT_MEM_INIT_RAM)
for use with add_memory_region(). Then arch_mem_init() adds the init
mem with this type if the init mem is not already covered by existing
ranges.
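A hedged sketch of that registration step (symbol arithmetic is
illustrative; the check for overlap with already-covered ranges is omitted):

    /* In arch_mem_init(): report the kernel's init section as RAM that must
     * be memory_present() but must not be handed to bootmem yet. */
    add_memory_region(__pa_symbol(&__init_begin),
                      __pa_symbol(&__init_end) - __pa_symbol(&__init_begin),
                      BOOT_MEM_INIT_RAM);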
When memory is being freed into the bootmem allocator, we skip the
BOOT_MEM_INIT_RAM ranges so they are not clobbered, but we do signal
them as memory_present(). This way, when they are later freed, the
necessary memory manager structures have already been initialized and the
sparse memory allocator is prevented from crashing.
The Octeon specific code that handled this case is removed, because
the new general purpose code handles the case.
Signed-off-by: David Daney <ddaney@caviumnetworks.com>
To: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/1988/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Gup is used in a few cases, futex for example. This work is derived from
the x86 version, with the pte and pmd operations adapted to the MIPS
definitions in a straightforward manner.
[ralf@linux-mips.org: Fixed up reject in arch/mips/mm/Makefile due to
whitespace formatting differences. Fixed build error in gup.c due to
conflicting changes elsewhere in the kernel.]
Signed-off-by: Hillf Danton <dhillf@gmail.com>
Cc: David Daney <david.daney@cavium.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/2859/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Move add_wired_entry to its own header file, from where it will always be
included. Patch up other users of add_wired_entry to also include
the header as needed.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
panic() invokes printk() and adds a \n itself, so panic arguments should
not themselves end in \n. Panic invocations in arch/mips and elsewhere are
inconsistent: some terminate in \n, some do not.
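For example (message text is made up):

    /* before: the trailing \n is duplicated by panic() itself */
    panic("Unhandled TLB exception\n");

    /* after */
    panic("Unhandled TLB exception");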
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
For Alchemy-PCI I need to add a wired entry after resuming from RAM;
remove the __init from add_wired_entry() so that this actually works.
Signed-off-by: Manuel Lauss <manuel.lauss@googlemail.com>
To: Linux-MIPS <linux-mips@linux-mips.org>
Patchwork: https://patchwork.linux-mips.org/patch/2684/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Until now flush_kernel_vmap_range() and invalidate_kernel_vmap_range() did
not exist on MIPS, resulting in heavy cache corruption on XFS filesystems.
Left for post-3.0: optimization and making this work with highmem, too.
Since the combination of highmem + cache aliases doesn't work at the moment,
this isn't a regression.
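A hedged usage sketch of the pair of calls being added, following the
generic cachetlb documentation (the buffer below is illustrative):

    void *buf = vmap(pages, nr_pages, VM_MAP, PAGE_KERNEL);

    /* Before I/O reads the underlying pages: write back any dirty lines
     * created through the vmap alias. */
    flush_kernel_vmap_range(buf, len);

    /* ... the block layer does I/O on the pages ... */

    /* Before the CPU reads the data the I/O wrote: discard stale lines
     * cached under the vmap alias. */
    invalidate_kernel_vmap_range(buf, len);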
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Patchwork: https://patchwork.linux-mips.org/patch/2505/
For the case PM_DEFAULT_MASK == 0, we were placing a branch in the
delay slot of another branch. This leads to undefined behavior.
Signed-off-by: David Daney <david.daney@cavium.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/2775/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Only some GCC versions, such as gcc 4.2, notice that the variable wr in
build_r3000_tlb_modify_handler is used uninitialized. When using one
of those GCC versions the build will fail due to -Werror. GCC 4.6 does not
warn about the uninitialized use of wr.
This issue was introduced by 7211f4d7a3dcbe57c5d396c334dca525315dceb2
[MIPS: Close races in TLB modify handlers.]
Reported-by: Ganesan Ramalingam <ganesan18@gmail.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Page table entries are made invalid by writing a zero into the PTE slot in
a page table. This creates a race condition with the TLB modify handlers
when they are updating the PTE:
    CPU0                                  CPU1

    Test for _PAGE_PRESENT
       .                                  set to not _PAGE_PRESENT (zero)
    Set to _PAGE_VALID
So now the page not present value (zero) is suddenly valid and user
space programs have access to physical page zero.
We close the race by putting the test for _PAGE_PRESENT and setting of
_PAGE_VALID into an atomic LL/SC section. This requires more registers
than just K0 and K1 in the handlers, so we need to save some registers
to a save area and then restore them when we are done.
The save area is an array of cacheline aligned structures that should
not suffer cache line bouncing as they are CPU private.
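A hedged, userspace C11 rendering of what the generated LL/SC section does
(the real handler is emitted as MIPS assembly by the micro-assembler; bit
positions are illustrative):

    #include <stdatomic.h>
    #include <stdbool.h>

    #define PAGE_PRESENT_BIT (1UL << 0)   /* illustrative positions */
    #define PAGE_VALID_BIT   (1UL << 1)

    /* Atomically set the valid bit only if the PTE is still marked present;
     * if another CPU cleared the PTE meanwhile, bail out instead of turning
     * the not-present value (zero) into a valid mapping of page zero. */
    static bool set_valid_if_present(_Atomic unsigned long *ptep)
    {
            unsigned long old = atomic_load(ptep);

            do {
                    if (!(old & PAGE_PRESENT_BIT))
                            return false;  /* lost the race; take the slow path */
            } while (!atomic_compare_exchange_weak(ptep, &old,
                                                   old | PAGE_VALID_BIT));
            return true;
    }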
[ralf@linux-mips.org: Fix !defined(CONFIG_MIPS_PGD_C0_CONTEXT) build error.]
Signed-off-by: David Daney <david.daney@cavium.com>
To: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/2577/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
fixrange_init() allocates page tables for all addresses higher than
FIXADDR_TOP. On processors that override the default FIXADDR_TOP
address of 0xfffe_0000, this can consume up to 4 pages (1 page per 4MB)
for pgd's that are never used.
Signed-off-by: Kevin Cernekee <cernekee@gmail.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/1980/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>