linux/arch/x86/lib
Ma Ling 3b4b682bec x86, mem: Optimize memmove for small size and unaligned cases
The movs instruction combines data accesses to accelerate copying,
but two cases need consideration:

1. movs has a long startup latency, so for small sizes we use
   general mov instructions to copy the data.
2. movs performs poorly in the unaligned case, e.g. when the src
   offset is 0x10 and the dest offset is 0x0; we avoid it and
   handle that case with general mov instructions as well.

Signed-off-by: Ma Ling <ling.ma@intel.com>
LKML-Reference: <1284664360-6138-1-git-send-email-ling.ma@intel.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2010-09-24 18:57:11 -07:00
.gitignore
Makefile x86, asm: Move cmpxchg emulation code to arch/x86/lib 2010-07-28 16:53:49 -07:00
atomic64_32.c
atomic64_386_32.S x86, asm: Use a lower case name for the end macro in atomic64_386_32.S 2010-08-12 07:04:16 -07:00
atomic64_cx8_32.S
cache-smp.c
checksum_32.S
clear_page_64.S
cmpxchg.c x86, asm: Merge cmpxchg_486_u64() and cmpxchg8b_emu() 2010-07-28 17:05:11 -07:00
cmpxchg8b_emu.S
copy_page_64.S
copy_user_64.S x86, alternatives: Fix one more open-coded 8-bit alternative number 2010-07-13 14:56:16 -07:00
copy_user_nocache_64.S
csum-copy_64.S
csum-partial_64.c
csum-wrappers_64.c
delay.c
getuser.S
inat.c
insn.c
iomap_copy_64.S
memcpy_32.c x86, mem: Optimize memmove for small size and unaligned cases 2010-09-24 18:57:11 -07:00
memcpy_64.S x86, mem: Optimize memcpy by avoiding memory false dependece 2010-08-23 14:56:41 -07:00
memmove_64.c x86, mem: Optimize memmove for small size and unaligned cases 2010-09-24 18:57:11 -07:00
memset_64.S
mmx_32.c
msr-reg-export.c
msr-reg.S
msr-smp.c
msr.c
putuser.S
rwlock_64.S
rwsem_64.S
semaphore_32.S
string_32.c
strstr_32.c
thunk_32.S
thunk_64.S
usercopy_32.c
usercopy_64.c
x86-opcode-map.txt