linux/arch/microblaze/kernel/cpu
Michal Simek 3274c5707c microblaze: Optimize CACHE_LOOP_LIMITS and CACHE_RANGE_LOOP macros
1. Remove the CACHE_ALL_LOOP2 macro because it is identical to CACHE_ALL_LOOP.
2. Change BUG_ON to WARN_ON.
3. Remove end alignment from CACHE_LOOP_LIMITS.
The C implementation does not need an aligned end address, and the ASM
code does the alignment in its own macros.
4. The ASM-optimized CACHE_RANGE_LOOP_1/2 macros need an aligned end address.
Because the end address is computed from start + size, the end address is the
first address which is excluded.

Here is the corresponding code which describes it:
+       int align = ~(line_length - 1);
+       end = ((end & align) == end) ? end - line_length : end & align;

a) end is aligned:
It is necessary to subtract the line length because we do not want to work
with the next cache line.
b) end address is not aligned:
Just align it down to be ready for the ASM code.

Signed-off-by: Michal Simek <monstr@monstr.eu>
2010-05-06 11:22:00 +02:00
cache.c microblaze: Optimize CACHE_LOOP_LIMITS and CACHE_RANGE_LOOP macros 2010-05-06 11:22:00 +02:00
cpuinfo-pvr-full.c microblaze: Checking DTS against PVR for write-back cache 2009-12-14 08:45:05 +01:00
cpuinfo-static.c microblaze: Extend cpuinfo for support write-back caches 2009-12-14 08:44:58 +01:00
cpuinfo.c include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h 2010-03-30 22:02:32 +09:00
Makefile microblaze: ftrace: add static function tracer 2009-12-14 08:40:09 +01:00
mb.c microblaze: cpuinfo shows cache line length 2010-05-06 11:21:59 +02:00
pvr.c microblaze: Add TRACE_IRQFLAGS_SUPPORT 2009-12-14 08:40:09 +01:00