ARM: 7658/1: mm: fix race updating mm->context.id on ASID rollover
If a thread triggers an ASID rollover, other threads of the same process must be made to wait until the mm->context.id for the shared mm_struct has been updated to the new generation and the associated book-keeping (e.g. TLB invalidation) has been performed.

However, there is a *tiny* window where both mm->context.id and the relevant active_asids entry are updated to the new generation, but the TLB flush has not yet been performed, which could allow another thread to return to userspace with a dirty TLB, potentially leading to data corruption. In reality this will never occur, because one CPU would need to perform a context-switch in the time it takes another to do a couple of atomic test/set operations, but we should plug the race anyway.

This patch moves the active_asids update until after the potential TLB flush on context-switch.

Cc: <stable@vger.kernel.org> # 3.8
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
commit 37f47e3d62
parent d61947a164
@@ -207,11 +207,11 @@ void check_and_switch_context(struct mm_struct *mm, struct task_struct *tsk)
 	if ((mm->context.id ^ atomic64_read(&asid_generation)) >> ASID_BITS)
 		new_context(mm, cpu);
 
-	atomic64_set(&per_cpu(active_asids, cpu), mm->context.id);
-	cpumask_set_cpu(cpu, mm_cpumask(mm));
-
 	if (cpumask_test_and_clear_cpu(cpu, &tlb_flush_pending))
 		local_flush_tlb_all();
+
+	atomic64_set(&per_cpu(active_asids, cpu), mm->context.id);
+	cpumask_set_cpu(cpu, mm_cpumask(mm));
 	raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);
 
 switch_mm_fastpath:
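To make the effect of the reordering easier to follow, here is a condensed, annotated sketch of the two orderings on the locked slow path. It only repeats the lines touched by the hunk above (active_asids, tlb_flush_pending, local_flush_tlb_all(), mm_cpumask()); the comments are a reading of the commit message, not text from the kernel source, and the fragments are not a drop-in replacement for the full function.

	/*
	 * Racy ordering (before this patch): the per-cpu active_asids
	 * entry is published before the pending TLB flush has run, so
	 * another thread of the same mm can observe a current-generation
	 * ASID and return to userspace while stale TLB entries are still
	 * present.
	 */
	atomic64_set(&per_cpu(active_asids, cpu), mm->context.id);
	cpumask_set_cpu(cpu, mm_cpumask(mm));

	if (cpumask_test_and_clear_cpu(cpu, &tlb_flush_pending))
		local_flush_tlb_all();

	/*
	 * Fixed ordering (after this patch): flush first, then publish
	 * the ASID, so it only becomes visible once the local TLB is
	 * known to be clean.
	 */
	if (cpumask_test_and_clear_cpu(cpu, &tlb_flush_pending))
		local_flush_tlb_all();

	atomic64_set(&per_cpu(active_asids, cpu), mm->context.id);
	cpumask_set_cpu(cpu, mm_cpumask(mm));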