commit 3b4b682bec

The movs instruction combines data to accelerate moving data; however, two cases need attention:

1. The movs instruction has a long startup latency, so for those cases we use general mov instructions to copy the data.
2. The movs instruction handles unaligned cases poorly; for example, when the source offset is 0x10 and the destination offset is 0x0, we avoid movs and handle the copy with general mov instructions.

Signed-off-by: Ma Ling <ling.ma@intel.com>
LKML-Reference: <1284664360-6138-1-git-send-email-ling.ma@intel.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
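The patch itself is x86 assembly in this directory; purely as an illustration of the two heuristics described in the commit message, here is a minimal C sketch with x86-only inline asm. The function name `sketch_memcpy` and the 64-byte cutoff are assumptions chosen for illustration, not values taken from the patch.

```c
/*
 * Minimal sketch of the two heuristics from the commit message.
 * Not the kernel's implementation; x86-only because of the inline asm.
 */
#include <stddef.h>
#include <stdint.h>

#define SMALL_COPY_CUTOFF 64  /* hypothetical cutoff for a "small" copy */

static void copy_with_mov(unsigned char *d, const unsigned char *s, size_t n)
{
	/* Plain load/store loop: no rep-movs startup cost, alignment-agnostic. */
	while (n--)
		*d++ = *s++;
}

void *sketch_memcpy(void *dst, const void *src, size_t n)
{
	unsigned char *d = dst;
	const unsigned char *s = src;

	/*
	 * Case 1: rep movs has a long startup latency, so small copies
	 * are cheaper with ordinary mov instructions.
	 */
	if (n < SMALL_COPY_CUTOFF) {
		copy_with_mov(d, s, n);
		return dst;
	}

	/*
	 * Case 2: rep movs performs poorly when source and destination
	 * are not mutually aligned (e.g. src offset 0x10, dst offset 0x0),
	 * so fall back to the mov loop there as well.
	 */
	if (((uintptr_t)d & 0xf) != ((uintptr_t)s & 0xf)) {
		copy_with_mov(d, s, n);
		return dst;
	}

	/* Large, mutually aligned copy: let rep movsb do the bulk move. */
	__asm__ volatile("rep movsb"
			 : "+D"(d), "+S"(s), "+c"(n)
			 :
			 : "memory");
	return dst;
}
```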
.gitignore
atomic64_32.c
atomic64_386_32.S
atomic64_cx8_32.S
cache-smp.c
checksum_32.S
clear_page_64.S
cmpxchg8b_emu.S
cmpxchg.c
copy_page_64.S
copy_user_64.S
copy_user_nocache_64.S
csum-copy_64.S
csum-partial_64.c
csum-wrappers_64.c
delay.c
getuser.S
inat.c
insn.c
iomap_copy_64.S
Makefile
memcpy_32.c
memcpy_64.S
memmove_64.c
memset_64.S
mmx_32.c
msr-reg-export.c
msr-reg.S
msr-smp.c
msr.c
putuser.S
rwlock_64.S
rwsem_64.S
semaphore_32.S
string_32.c
strstr_32.c
thunk_32.S
thunk_64.S
usercopy_32.c
usercopy_64.c
x86-opcode-map.txt