x86/ldt/64: Refresh DS and ES when modify_ldt changes an entry

On x86_32, modify_ldt() implicitly refreshes the cached DS and ES
segments because they are refreshed on return to usermode.

On x86_64, they're not refreshed on return to usermode.  To improve
determinism and match x86_32's behavior, refresh them when we update
the LDT.

This avoids a situation in which DS points to a descriptor that has been
changed but the stale cached segment state persists until the next
reschedule.
If this happens, then the user-visible state will change
nondeterministically some time after modify_ldt() returns, which is
unfortunate.
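
For illustration, a minimal userspace sketch of the sequence in question
(not part of this patch; the helper name install_ldt_entry(), the LDT slot,
and the base values are made up):

#include <asm/ldt.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Install a 32-bit data segment at LDT slot 0; values are illustrative. */
static void install_ldt_entry(unsigned int base)
{
	struct user_desc desc;

	memset(&desc, 0, sizeof(desc));
	desc.entry_number = 0;
	desc.base_addr = base;
	desc.limit = 0xfffff;
	desc.seg_32bit = 1;
	desc.limit_in_pages = 1;
	desc.useable = 1;

	/* modify_ldt() has no glibc wrapper; func 1 writes an LDT entry. */
	if (syscall(SYS_modify_ldt, 1, &desc, sizeof(desc)) != 0)
		perror("modify_ldt");
}

int main(void)
{
	unsigned short sel = (0 << 3) | 4 | 3;	/* LDT index 0, TI=1, RPL=3 */
	unsigned short cur;

	install_ldt_entry(0);

	/* Cache the LDT descriptor in DS. */
	asm volatile("movw %0, %%ds" : : "rm" (sel));

	/*
	 * Rewrite the same LDT slot.  Without the refresh, the CPU keeps
	 * the old descriptor cached in DS until the next reschedule; with
	 * it, modify_ldt() reloads DS before returning.
	 */
	install_ldt_entry(0x1000);

	asm volatile("movw %%ds, %0" : "=rm" (cur));
	printf("ds selector after modify_ldt: %#hx\n", cur);
	return 0;
}

(In 64-bit mode the DS base is ignored for ordinary addressing, so the
sketch only demonstrates the update sequence, not a data access through
the segment.)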

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bpetkov@suse.de>
Cc: Chang Seok <chang.seok.bae@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>

@@ -21,6 +21,25 @@
#include <asm/mmu_context.h>
#include <asm/syscalls.h>

static void refresh_ldt_segments(void)
{
#ifdef CONFIG_X86_64
	unsigned short sel;

	/*
	 * Make sure that the cached DS and ES descriptors match the updated
	 * LDT.
	 */
	savesegment(ds, sel);
	if ((sel & SEGMENT_TI_MASK) == SEGMENT_LDT)
		loadsegment(ds, sel);

	savesegment(es, sel);
	if ((sel & SEGMENT_TI_MASK) == SEGMENT_LDT)
		loadsegment(es, sel);
#endif
}

/* context.lock is held for us, so we don't need any locking. */
static void flush_ldt(void *__mm)
{
@@ -32,6 +51,8 @@ static void flush_ldt(void *__mm)
	pc = &mm->context;
	set_ldt(pc->ldt->entries, pc->ldt->nr_entries);

	refresh_ldt_segments();
}

/* The caller must call finalize_ldt_struct on the result. LDT starts zeroed. */
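
For reference, the (sel & SEGMENT_TI_MASK) == SEGMENT_LDT test above relies
on the x86 segment-selector layout: bits 0-1 hold the requested privilege
level, bit 2 is the Table Indicator (0 = GDT, 1 = LDT), and the remaining
bits index the table. A standalone sketch of the same check, with the
constants redefined locally for illustration rather than taken from the
kernel headers:

#include <stdbool.h>
#include <stdio.h>

/* Local stand-ins for the selector-layout constants, for illustration. */
#define SEL_TI_MASK	0x4	/* bit 2: Table Indicator */
#define SEL_TI_LDT	0x4	/* 1 = selector refers to the LDT */

static bool selector_is_ldt(unsigned short sel)
{
	return (sel & SEL_TI_MASK) == SEL_TI_LDT;
}

int main(void)
{
	unsigned short gdt_sel = (5 << 3) | 0 | 3;	/* GDT index 5, RPL 3 */
	unsigned short ldt_sel = (0 << 3) | 4 | 3;	/* LDT index 0, RPL 3 */

	printf("%#x -> %s\n", gdt_sel, selector_is_ldt(gdt_sel) ? "LDT" : "GDT");
	printf("%#x -> %s\n", ldt_sel, selector_is_ldt(ldt_sel) ? "LDT" : "GDT");
	return 0;
}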