While looking at code generated by gcc 4.0, I noticed some functions still
had frame pointers, even after we stopped ppc64 from defining
CONFIG_FRAME_POINTER. It turns out kernel/Makefile hardwires
-fno-omit-frame-pointer on when compiling schedule.c.
Create CONFIG_SCHED_NO_NO_OMIT_FRAME_POINTER and define it on architectures
that don't require frame pointers in sched.c code.
(akpm: blame me for the name)
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Remove the p_nodepda and p_subnodepda pointers from the pda_s structure,
then define a new per-cpu pointer to the nodepda and export it so that
it can be accessed by kernel modules.
Signed-off-by: Dean Nelson <dcn@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
- pfm_context_load(): change return value from EINVAL to EBUSY
when context is already loaded.
- pfm_check_task_state(): pass test if context state is MASKED.
It is safe to give access on PFM_CTX_MASKED because the PMU
state (PMD) is stable and saved in software state.
This helps multiplexing programs such as the example given
in libpfm-3.1.
Signed-off-by: stephane eranian <eranian@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
The pmu_active test is based on the value of PSR.up. THIS IS THE PROBLEM, as
it does not take into account the lazy restore logic, which is as follows
(simplified):
context switch out:
        save PMDs
        clear psr.up
        release ownership

context switch in:
        if (ctx->last_cpu == smp_processor_id() &&
            ctx->activation == cpu_activation) {
                set psr.up
                return
        }
        restore PMDs
        restore PMCs
        ctx->last_cpu = smp_processor_id();
        ctx->activation = ++cpu_activation;
        set psr.up
The key here is that on context switch out, we clear psr.up and on context switch in
we check if nobody else used the PMU on that processor since last time we came. In
that case, we assume the PMD/PMC are ours and we simply reactivate.
The Caliper problem is that between the moment we context switch out and the
moment we come back, nobody effectively used the PMU BUT the processor went
idle. Normally this would have no consequence, but PAL_HALT does alter the PMU
registers. In default_idle(), the test on psr.up is not strong enough to cover
this case and we go into PAL, which trashes the PMU registers. When we come
back we falsely assume that this is our state, yet it is corrupted. Very nasty
indeed.
To avoid the problem it is necessary to forbid going to PAL_HALT as soon as
perfmon installs some valid state in the PMU registers. This happens when an
application attaches a context to a thread or CPU. It is not enough to check
the psr/dcr bits. Hence I propose the attached patch. It adds a callback in
process.c to modify the condition for entering PAL on idle. Basically, entering
PAL is now conditional on pal_halt=1 AND perfmon saying it is okay.
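As a rough sketch of the resulting idle loop (a simplified rendering of the
description above; the flag name and exact structure are assumptions, not the
literal patch):

    /* Only drop into PAL_HALT when the pal_halt boot option is set AND
     * perfmon says no valid PMU state is installed on this CPU; the
     * flag is flipped by a perfmon callback. */
    static int can_do_pal_halt = 1;

    static void default_idle(void)
    {
            local_irq_enable();
            while (!need_resched()) {
                    if (can_do_pal_halt)
                            safe_halt();        /* may enter PAL_HALT */
                    else
                            cpu_relax();
            }
    }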
Signed-off-by: Tony Luck <tony.luck@intel.com>
Correct a bug where tioca_dma_mapped() is putting tioca dma map structs
on the wrong list.
Signed-off-by: Mark Maule <maule@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
When SAL calls back into the OS, the OS code is running with preempt
disabled so it cannot call sleeping functions.
Signed-off-by: Keith Owens <kaos@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Jack Steiner uncovered some opportunities for improvement in
the MCA recovery code.
1) Set bsp to save registers on the kernel stack.
2) Disable interrupts while in the MCA recovery code.
3) Change the way the user process is killed, to avoid
a panic in schedule.
Testing shows that these changes make the recovery code much
more reliable with the 2.6.12 kernel.
Signed-off-by: Russ Anderson <rja@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Attached is a patch against David's audit.17 kernel that adds checks
for the TIF_SYSCALL_AUDIT thread flag to the ia64 system call and
signal handling code paths. The patch enables auditing of system
calls set up via fsys_bubble_down, as well as ensuring that
audit_syscall_exit() is called on return from sigreturn.
Neglecting to check for TIF_SYSCALL_AUDIT at these points results in
incorrect information in audit_context, causing frequent system panics
when system call auditing is enabled on an ia64 system.
I have tested this patch and have seen no problems with it.
[Original patch from Amy Griffis ported to current kernel by David Woodhouse]
From: Amy Griffis <amy.griffis@hp.com>
From: David Woodhouse <dwmw2@infradead.org>
Signed-off-by: Chris Wright <chrisw@osdl.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Andi noted that during normal runtime cpu_idle_map is bounced around a lot,
and occasionally at a higher frequency than the timer interrupt wakeup
from which we normally exit pm_idle. So switch to a percpu variable.
I didn't move things to the slow path because it would involve adding
scheduler code to wake up the idle thread on the cpus we're waiting for.
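Roughly, the per-cpu version of the idle-loop check looks like this sketch
(the variable name mirrors the equivalent x86 code and is an assumption here):

    /* Each CPU polls and clears only its own flag, so the line no
     * longer bounces between CPUs the way a shared cpumask does. */
    static DEFINE_PER_CPU(unsigned int, cpu_idle_state);

    /* in the idle loop, instead of testing/clearing a shared cpu_idle_map: */
    if (__get_cpu_var(cpu_idle_state))
            __get_cpu_var(cpu_idle_state) = 0;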
Signed-off-by: Zwane Mwaikambo <zwane@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Tony Luck <tony.luck@intel.com>
The following patch fixes a bug in the SGI Altix sn_dma_flush code.
sn_dma_flush is broken in 2.6. The code isn't waiting for the DMA
data to be flushed out of the PIC ASIC. This patch is based on the
linux-ia64-test-2.6.12 tree.
Signed-off-by: Mike Habeck <habeck@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
This patch simplifies a couple of places where we search for _PXM
values in the ACPI namespace.
Signed-off-by: Alex Williamson <alex.williamson@hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
The following patch ensures that the correct error interrupt handling
routine is initialized. This patch is based on the 2.6.12 ia64 release tree.
Signed-off-by: Colin Ngam <cngam@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
This patch detects the existence of an uncached physical AMO address setup
by EFI's XPBOOT (SGI) and converts it to an uncached virtual AMO address.
Depends on a patch submitted on 23 March 2005 with the subject of:
[PATCH 2/3] SGI Altix cross partition functionality (2nd revision)
Signed-off-by: Dean Nelson <dcn@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
This patch contains the cross partition pseudo-ethernet driver (XPNET)
functional support module.
Signed-off-by: Dean Nelson <dcn@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
This patch contains the communication module (XPC) for cross partition
communication on a partitioned SGI Altix.
Signed-off-by: Dean Nelson <dcn@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
cg-patch couldn't apply the patch to Makefile, and my dumb script
rushed on and ran cg-commit without this change.
Signed-off-by: Tony Luck <tony.luck@intel.com>
This patch contains the shim module (XP) which interfaces between the
communication module (XPC) and the functional support modules (like XPNET).
Signed-off-by: Dean Nelson <dcn@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Another step in the effort to eliminate the SN pda structure.
This patch moves the cnodeid_to_nasid_table field out of the pda,
making it a standalone per-cpu data item, and exports it so it can
be accessed by kernel modules.
Signed-off-by: Dean Nelson <dcn@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Here is a patch to enable the SGI tiocx bus driver to distinguish between
FPGA-attached h/w and non-FPGA-attached h/w.
Signed-off-by: Bruce Losure <blosure@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
This patch against ia64-test-2.6.12 fixes a bug where the tiocx code
was inadvertently undoing some address modifications done in earlier
fixup code. This patch just removes the offending code.
Signed-off-by: Bruce Losure <blosure@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
This is a small patch to switch flush_icache_range() to use fc.i
instead of fc. This would save time on processors which can establish
i-cache coherency without flushing the cache-line out to memory (not
that any current processors do). On existing processors, fc.i behaves
like fc. The only caveat is that very old assemblers may not know
about fc.i yet.
Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Patch below fixes 3 trivial typos which are caught by the new
assembler (v2.169.90). Please apply.
[Note: fix to memcpy that was also part of this patch was separately
applied from patches by H.J. and Andreas ... so the delta here only
has the other two fixes. -Tony]
Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
The current ia64 assembler complains about mismatching .proc/.endp pairs.
(Same patch also sent by H.J. Lu)
Signed-off-by: Andreas Schwab <schwab@suse.de>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Now that we have the MC/MT detection patches in, the appended patch allows us
to configure MT scheduler optimizations. For now, we will leave this option off
by default.
There is some discussion on lkml about setting up only those sched-domains
that are absolutely needed (for example, we shouldn't set up an SMT domain for
non-MT processors). Once that patch goes in, we can enable this option by
default.
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Convert most of the current code that uses _NSIG directly to instead use
valid_signal(). This avoids gcc -W warnings and off-by-one errors.
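For illustration, a typical call-site change looks like this sketch
(the surrounding context is hypothetical):

    /* before: open-coded comparison against _NSIG */
    if (sig > _NSIG)
            return -EINVAL;

    /* after: use the helper from <linux/signal.h> */
    if (!valid_signal(sig))
            return -EINVAL;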
Signed-off-by: Jesper Juhl <juhl-lkml@dif.dk>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Attached is a patch against David's audit.17 kernel that adds checks
for the TIF_SYSCALL_AUDIT thread flag to the ia64 system call and
signal handling code paths. The patch enables auditing of system
calls set up via fsys_bubble_down, as well as ensuring that
audit_syscall_exit() is called on return from sigreturn.
Neglecting to check for TIF_SYSCALL_AUDIT at these points results in
incorrect information in audit_context, causing frequent system panics
when system call auditing is enabled on an ia64 system.
Signed-off-by: Amy Griffis <amy.griffis@hp.com>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
We were calling ptrace_notify() after auditing the syscall and arguments,
but the debugger could have _changed_ them before the syscall was actually
invoked. Reorder the calls to fix that.
While we're touching every call to audit_syscall_entry(), we also make it
take an extra argument: the architecture of the syscall which was made,
because some architectures allow more than one type of syscall.
Also add an explicit success/failure flag to audit_syscall_exit(), for
the benefit of architectures which return that in a condition register
rather than only returning a single register.
Change type of syscall return value to 'long' not 'int'.
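Schematically, the entry-path ordering becomes something like the following
sketch (local variable names are placeholders and the exact prototypes of that
era may differ):

    /* Notify the debugger first, so any syscall number/argument
     * changes it makes are what actually gets audited and invoked. */
    if (test_thread_flag(TIF_SYSCALL_TRACE))
            ptrace_notify(SIGTRAP |
                          ((current->ptrace & PT_TRACESYSGOOD) ? 0x80 : 0));

    if (unlikely(current->audit_context))
            audit_syscall_entry(current, AUDIT_ARCH_IA64,  /* arch is the new argument */
                                syscall_nr, arg0, arg1, arg2, arg3);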
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
Yanmin Zhang pointed out a sequence problem when saving the psr. David
Mosberger provided this patch (which gave up a cycle).
Signed-off-by: Tony Luck <tony.luck@intel.com>
This patch switches the srlz.i in ia64_leave_kernel() to srlz.d. As
per architecture manual, the former is needed only to ensure that the
clearing of PSR.IC is seen by the VHPT for subsequent instruction
fetches. However, since the remainder of the code (up to and
including the RFI instruction) is mapped by a pinned TLB entry, there
is no chance of an iTLB miss and we don't care whether or not the VHPT
sees PSR.IC cleared. Since srlz.d is substantially cheaper than
srlz.i, this should shave off a few cycles off the interrupt path
(unverified though; I'm not set up to measure this at the moment).
Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
This patch changes comments & formatting only. There is no code
change.
Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Improvements come from eliminating srlz.i, not scheduling AR/CR-reads
too early (while there are others still pending), scheduling the
backing-store switch as well as possible, splitting the BBB bundle
into a MIB/MBB pair.
Why is it safe to eliminate the srlz.i? Observe
that we used to clear bits ~PSR_PRESERVED_BITS in PSR.L. Since
PSR_PRESERVED_BITS==PSR.{UP,MFL,MFH,PK,DT,PP,SP,RT,IC}, we
ended up clearing PSR.{BE,AC,I,DFL,DFH,DI,DB,SI,TB}. However,
PSR.BE : already is turned off in __kernel_syscall_via_epc()
PSR.AC : don't care (kernel normally turns PSR.AC on)
PSR.I : already turned off by the time fsys_bubble_down gets invoked
PSR.DFL: always 0 (kernel never turns it on)
PSR.DFH: don't care --- kernel never touches f32-f127 on its own
initiative
PSR.DI : always 0 (kernel never turns it on)
PSR.SI : always 0 (kernel never turns it on)
PSR.DB : don't care --- kernel never enables kernel-level breakpoints
PSR.TB : must be 0 already; if it wasn't zero on entry to
__kernel_syscall_via_epc, the branch to fsys_bubble_down
will trigger a taken branch; the taken-trap-handler then
converts the syscall into a break-based system-call.
In other words: all the bits we're clearing are either 0 already or
are don't cares! Thus, we don't have to write PSR.L at all and we
don't have to do a srlz.i either.
Good for another ~20 cycle improvement for EPC-based heavy-weight
syscalls.
Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Two other very minor changes: use "mov.i" instead of "mov" for reading
ar.pfs (for clarity; doesn't affect the code at all). Also, predicate
the load of r14 for consistency.
Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Avoid some stalls, which is good for about 2 cycles when invoking a
light-weight handler. When invoking a heavy-weight handler, this
helps by about 7 cycles, with most of the improvement coming from the
improved branch-prediction achieved by splitting the BBB bundle into
two MIB bundles.
Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
This patch reorganizes break_fault() to optimistically assume that a
system-call is being performed from user-space (which is almost always
the case). If it turns out that (a) we're not being called due to a
system call or (b) we're being called from within the kernel, we fixup
the no-longer-valid assumptions in non_syscall() and .break_fixup(),
respectively.
With this approach, there are 3 major phases:
- Phase 1: Read various control & application registers, in
particular the current task pointer from AR.K6.
- Phase 2: Do all memory loads (load system-call entry,
load current_thread_info()->flags, prefetch
kernel register-backing store) and switch
to kernel register-stack.
- Phase 3: Call ia64_syscall_setup() and invoke
syscall-handler.
Good for 26-30 cycles of improvement on break-based syscall-path.
Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Reschedule code to read ar.bsp as early as possible. To enable this,
don't bother clearing some of the registers when we're returning to
kernel stacks. Also, instead of trying to support the pNonSys case
(which makes no sense), do a bugcheck instead (with break 0). Finally,
remove a clear of r14 which is a left-over from the previous patch.
Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Using stf8 seemed like a clever idea at the time, but stf8 forces
the cache-line to be invalidated in the L1D (if it happens to be
there already). This patch eliminates a guaranteed L1D cache-miss
and, by itself, is good for a 1-2 cycle improvement for heavy-weight
syscalls.
Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Why is this a good idea? Clearing b7 to 0 is guaranteed to do us no
good and writing it with __kernel_syscall_via_epc() yields a 6 cycle
improvement _if_ the application performs another EPC-based system-
call without overwriting b7, which is not all that uncommon. Well
worth the minimal cost of 1 bundle of code.
Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Decreases syscall overhead by approximately 6 cycles.
Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
This by itself is good for a 1-2 cycle speed up. Effect is bigger
when combined with the later patches.
Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
[IA64] fix ia64 Kconfig to allow CONFIG_PM on sn2
This probably should have been fixed when I fixed up the generic build for
discontig+numa machines, but oh well.
CONFIG_PM is allowable for generic builds but not for sn2 builds, which
doesn't make much sense, and in fact breaks the build if recent ACPI bits are
added to the tree. It looks like the only arch that needs to prevent
CONFIG_PM stuff is the ski simulator (though those options could probably use
some cleanup as well), so remove the big conditional and replace it with a
simple test for IA64_HP_SIM instead.
Signed-off-by: Jesse Barnes <jbarnes@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
vector sharing patch had a typo ... mismatched spin_lock() with
a spin_unlock_irq(). Fix from Kenji Kaneshige.
Signed-off-by: Tony Luck <tony.luck@intel.com>
Rohit and Suresh changed their minds about the order in which to print
things in /proc/cpuinfo, but didn't include the change in the version of
the patch they sent to me.
Signed-off-by: Tony Luck <tony.luck@intel.com>
Current ia64 linux cannot handle more than 184 interrupt sources
because of the lack of vectors. The following patch enables ia64 linux
to handle more than 184 interrupt sources by allowing the same
vector number to be shared by multiple IOSAPIC RTEs. The design of
this patch is based on the "Intel(R) Itanium(R) Processor Family Interrupt
Architecture Guide".
Even if you don't have a large I/O system, you can see the behavior of
vector sharing by changing IOSAPIC_LAST_DEVICE_VECTOR to a smaller value.
Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Version 3 - rediffed to apply on top of Ashok's hotplug cpu
patch. /proc/cpuinfo output in step with x86.
This is an updated MC/MT identification patch based on the
previous discussions on list.
Add the Multi-core and Multi-threading detection for IPF.
- Add new core and threading related fields in /proc/cpuinfo:
        Physical id
        Core id
        Thread id
        Siblings
- Set up the cpu_core_map and cpu_sibling_map appropriately
- Handle CPU hotplug
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Gordon Jin <gordon.jin@intel.com>
Signed-off-by: Rohit Seth <rohit.seth@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
memcpy_mck.S::__copy_user breaks in the prefetch code under these conditions :-
* src is unaligned and
* dst is near the end of a page and
* the page after dst is unmapped.
Signed-off-by: Keith Owens <kaos@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Thanks to Mark for tracking down this one. Users of __copy_from_user_inatomic()
will be sad if we don't handle lfetch faults for the "no_context" case.
Signed-off-by: Tony Luck <tony.luck@intel.com>
This patch against ia64-test-2.6.12 is needed for forthcoming
Altix chipsets. It renames geoid_any_t to geoid_common_t and
splits the 8bit 'slab' field into two 4bit fields for 'slab'
and 'slot'. Similar changes in the Altix SAL will retain backward
compatibility for old kernels.
Signed-off-by: Mark Goodwin <markgw@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Sadly, I goofed in this syscall-tuning patch:
ChangeSet 1.1966.1.40 2005/01/22 13:31:05 davidm@hpl.hp.com
[IA64] Improve ia64_leave_syscall() for McKinley-type cores.
Optimize ia64_leave_syscall() a bit better for McKinley-type cores.
The patch looks big, but that's mostly due to renaming r16/r17 to r2/r3.
Good for a 13 cycle improvement.
The problem is that the size of the physical stacked registers was
loaded into the wrong register (r3 instead of r17). Since r17 by
coincidence always had the value 1, this had the effect of turning
rse_clear_invalid into a no-op. That poses the risk of leaking kernel
state back to user-land and is hence not acceptable.
The fix below is simple, but unfortunately it costs us about 28 cycles
in syscall overhead. ;-(
Unfortunately, there isn't much we can do about that since those
registers have to be cleared one way or another.
--david
Signed-off-by: Tony Luck <tony.luck@intel.com>
patch 2:
Shub2 BTE recovery code will be implemented in SAL.
Define the SAL interface.
Modify bte_error to call SAL for shub2.
Signed-off-by: Russ Anderson <rja@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Patch to disable SGI TIOCA GART TLB prefetching due to hw bug.
Signed-off-by: Mark Maule <maule@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
This fixes a couple of bugs in the zx1/sx1000 sba_iommu. These all have
a pretty low likelihood of being hit. The first problem is a simple off
by one, deep in the sba_alloc_range() error path. Surrounding that was
a lock ordering problem that could have potentially deadlocked with the
order the locks are grabbed in sba_unmap_single(). I moved the resource
locking into sba_search_bitmap() to prevent this. Finally, there's a
potential race between unmapping pdir entries and marking incoming DMA
pages clean. If you see any oddities, please let me know, but I've
tested it pretty thoroughly here. Tony, please apply. Thanks,
BTW, many of the options in this driver that are not on by default are
becoming more and more broken. I'll be working on some patches to clean them
out, but I wanted to get this bug fix out first.
Signed-off-by: Alex Williamson <alex.williamson@hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
This patch introduces using the quicklists for pgd, pmd, and pte levels
by combining the alloc and free functions into a common set of routines.
This greatly simplifies the reading of this header file.
This patch is simple but necessary for large numa configurations.
It simply ensures that only pages from the local node are added to a
cpu's quicklist. This prevents the trapping of pages on a remote node's
quicklist by starting a process, touching a large number of pages to
fill pmd and pte entries, migrating to another node, and then unmapping
or exiting. With those conditions, the pages get trapped and if the
machine has more than 100 nodes of the same size, the calculation of
the pgtable high water mark will be larger than any single node so page
table cache flushing will never occur.
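A rough sketch of that node check (the helper names follow the quicklist
wrappers described above and are assumptions):

    /* Keep a freed page-table page on the quicklist only if it belongs
     * to the local node; otherwise hand it straight back to the page
     * allocator. */
    static inline void pgtable_quicklist_free(void *pgtable_entry)
    {
    #ifdef CONFIG_NUMA
            int nid = page_to_nid(virt_to_page(pgtable_entry));

            if (unlikely(nid != numa_node_id())) {
                    free_page((unsigned long)pgtable_entry);
                    return;
            }
    #endif
            /* ... push onto this cpu's quicklist ... */
    }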
I ran lmbench lat_proc fork and lat_proc exec on a zx1 with and without
this patch and did not notice any change.
On an sn2 machine, there was a slight improvement which is possibly
due to pages from other nodes trapped on the test node before starting
the run. I did not investigate further.
This patch shrinks the quicklist based upon free memory on the node
instead of the high/low water marks. I have written it to enable
preemption periodically and recalculate the amount to shrink every time
we have freed enough pages that the quicklist size should have grown.
I rescan the node's zones each pass because other processes may be
draining node memory at the same time as we are adding.
Signed-off-by: Robin Holt <holt@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
This patch adds the necessary "hook" to allow SGI/SN
machines to perform a system power off upon a
'init 0', 'halt -p', 'poweroff' or 'shutdown -h'.
The "hook" is to set the pm_power_off callback
to ia64_sn_power_down(). pm_power_off is checked
in machine_power_off()/do_poweroff() and, if set, is executed.
ia64_sn_power_down() is a function already present (but not
used currently) in the sn kernel.
ia64_sn_power_down() makes a SAL call to execute the
power off.
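Schematically, the hook amounts to the following (the init function shown is a
placeholder for wherever the SN platform setup code registers it):

    /* Register the SAL-based power-off routine as the generic
     * pm_power_off hook checked by machine_power_off(). */
    extern void ia64_sn_power_down(void);

    static void __init sn_register_power_off(void)      /* placeholder name */
    {
            pm_power_off = ia64_sn_power_down;
    }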
Signed-off-by: Aaron J Young <ayoung@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
This patch provides CX port infrastructure for SGI TIO-based h/w, as
well as a 'core services' driver for SGI FPGA-based h/w.
Signed-off-by: Bruce Losure <blosure@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
- make pfm_sysctl a global such that it is possible
to enable/disable debug printk in sampling formats
using PFM_DEBUG.
- remove unused pfm_debug_var variable
- fix a bug in pfm_handle_work() where a BUG_ON() could
be triggered. There is a path where pfm_handle_work()
can be called with interrupts enabled, i.e., when
TIF_NEED_RESCHED is set. The fix corrects the masking
and unmasking of interrupts in pfm_handle_work() such
that we restore the interrupt mask as it was upon entry
(see the sketch below).
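A minimal sketch of the save/restore pattern described in that last item (not
the literal perfmon code):

    /* Preserve whatever interrupt state the caller had, instead of
     * unconditionally re-enabling interrupts on exit. */
    unsigned long flags;

    local_irq_save(flags);          /* callers may arrive with irqs on or off */
    /* ... process the deferred perfmon work ... */
    local_irq_restore(flags);       /* restore the mask as it was on entry */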
Signed-off-by: stephane eranian <eranian@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Please accept this patch to the Altix SN platform topology export
interface, to support new chipsets and to export PCI topology.
This follows on top of Jack Steiner's patch dated March 1st
("New chipset support for SN platform").
Signed-off-by: Mark Goodwin <markgw@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Recently I noticed that clearing ar.ssd/ar.csd right before srlz.d is
causing significant stalling in the syscall path. The patch below
fixes that by moving the register-writes after srlz.d. On a Madison,
this drops break-based getpid() from 241 to 226 cycles (-15 cycles).
Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Detect user space by the unwind frame with predicate PRED_USER_STACK
set, instead of a user space IP. Tighten up the last ditch check for
running off the top of the kernel stack.
Based on a suggestion by David Mosberger, reworked to fit the current
tree. This survives my stress test which used to break 2.6.9 kernels.
Unlike 2.6.11, the stress test now unwinds to the correct point, so
gdb can get the user space registers.
Signed-off-by: Keith Owens <kaos@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Call cpu_relax() in busy-waiting loops of the ITC-syncing code.
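The shape of the change is simply (generic loop shown, not the literal
ITC-sync code):

    /* Busy-wait politely for the other CPU instead of spinning flat out. */
    while (!flag_set_by_other_cpu)          /* placeholder condition */
            cpu_relax();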
Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Provide a driver for the altix TIOCA AGP chipset. An agpgart backend will
be provided as a separate patch.
Signed-off-by: Mark Maule <maule@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Move a couple of headers out of arch/ia64/sn/include/pci and into
include/asm-ia64/sn.
Signed-off-by: Mark Maule <maule@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Provide an abstraction of the altix pci dma runtime layer so that multiple
pci-based bridges can be supported.
Signed-off-by: Mark Maule <maule@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
This patch is required to support cpu removal for IPF systems. Existing code
just fakes the real offline by keeping the CPU running the idle thread, and
polling for the bit to reappear in cpu_state to get out of the idle loop.
For the cpu-offline to work correctly, we need to pass control of this CPU
back to SAL so it can continue in the boot-rendez mode. This gives the
SAL control to not pick this cpu as the monarch processor for global MCA
events, and in addition SAL does not wait for this cpu to check in for
global MCA events. The handoff is implemented as documented in the
SAL specification section 3.2.5.1 "OS_BOOT_RENDEZ to SAL return State"
Signed-off-by: Ashok Raj <ashok.raj@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Found by Alexander Nyberg, improved by Bjorn Helgaas.
- Fix the incorrect argument to sizeof()
- It looks like the memcpy() code path was derived from code that used
copy_from_user(). But in this case we are doing a kernel-space to
kernel-space copy, so memcpy() is the right routine, but it doesn't
return an error code (see the sketch below).
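For context, a minimal sketch of the distinction (variables are placeholders):

    /* copy_from_user() can fault and returns the number of bytes NOT
     * copied, so the caller must check it: */
    if (copy_from_user(dst, user_src, len))
            return -EFAULT;

    /* memcpy() is for kernel-to-kernel copies; it never fails and only
     * returns the destination pointer, so there is no error code: */
    memcpy(dst, kernel_src, len);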
Signed-off-by: Arun Sharma <arun.sharma@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
ia64 and ppc64 had hugetlb_free_pgtables functions which were no longer being
called, and it wasn't obvious what to do about them.
The ppc64 case turns out to be easy: the associated tables are noted elsewhere
and freed later, safe to either skip its hugetlb areas or go through the
motions of freeing nothing. Since ia64 does need a special case, restore to
ppc64 the special case of skipping them.
The ia64 hugetlb case has been broken since pgd_addr_end went in, though it
probably appeared to work okay if you just had one such area; in fact it's
been broken much longer if you consider a long munmap spanning from another
region into the hugetlb region.
In the ia64 hugetlb region, more virtual address bits are available than in
the other regions, yet the page tables are structured the same way: the page
at the bottom is larger. Here we need to scale down each addr before passing
it to the standard free_pgd_range. I was about to write a hugely_scaled_down
macro, but found htlbpage_to_page already exists for just this purpose. Fixed
an off-by-one in ia64 is_hugepage_only_range.
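Sketched out, the ia64 wrapper this describes looks roughly like the following
(the function name and the floor/ceiling handling are simplified assumptions):

    /* Scale hugetlb-region addresses down so the generic
     * free_pgd_range() sees them at the scale its page tables use. */
    void hugetlb_free_pgd_range(struct mmu_gather **tlb,
                                unsigned long addr, unsigned long end,
                                unsigned long floor, unsigned long ceiling)
    {
            addr = htlbpage_to_page(addr);
            end  = htlbpage_to_page(end);
            /* floor/ceiling would need the same scaling when they also
             * fall inside the hugetlb region (elided here) */
            free_pgd_range(tlb, addr, end, floor, ceiling);
    }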
Uninline free_pgd_range to make it available to ia64. Make sure the
vma-gathering loop in free_pgtables cannot join a hugepage_only_range to any
other (safe to join huges? probably but don't bother).
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Recent woes with some arches needing their own pgd_addr_end macro; and 4-level
clear_page_range regression since 2.6.10's clear_page_tables; and its
long-standing well-known inefficiency in searching throughout the higher-level
page tables for those few entries to clear and free: all can be blamed on
ignoring the list of vmas when we free page tables.
Replace exit_mmap's clear_page_range of the total user address space by
free_pgtables operating on the mm's vma list; unmap_region use it in the same
way, giving floor and ceiling beyond which it may not free tables. This
brings lmbench fork/exec/sh numbers back to 2.6.10 (unless preempt is enabled,
in which case latency fixes spoil unmap_vmas throughput).
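In outline, the new approach is roughly this sketch (simplified; the real code
also coalesces adjacent vmas, unlinks anon_vmas, and special-cases
hugepage-only ranges):

    /* Walk the vma list and free only the page-table ranges the vmas
     * cover, never reaching below floor or above ceiling. */
    void free_pgtables(struct mmu_gather **tlb, struct vm_area_struct *vma,
                       unsigned long floor, unsigned long ceiling)
    {
            while (vma) {
                    struct vm_area_struct *next = vma->vm_next;

                    free_pgd_range(tlb, vma->vm_start, vma->vm_end,
                                   floor, next ? next->vm_start : ceiling);
                    floor = vma->vm_end;
                    vma = next;
            }
    }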
Beware: the do_mmap_pgoff driver failure case must now use unmap_region
instead of zap_page_range, since a page table might have been allocated, and
can only be freed while it is touched by some vma.
Move free_pgtables from mmap.c to memory.c, where its lower levels are adapted
from the clear_page_range levels. (Most of free_pgtables' old code was
actually for a non-existent case, prev not properly set up, dating from before
hch gave us split_vma.) Pass mmu_gather** in the public interfaces, since we
might want to add latency lockdrops later; but no attempt to do so yet, going
by vma should itself reduce latency.
But what if is_hugepage_only_range? Those ia64 and ppc64 cases need careful
examination: put that off until a later patch of the series.
What of x86_64's 32bit vdso page, which __map_syscall32 maps outside any vma?
And the range to sparc64's flush_tlb_pgtables? It's less clear to me now that
we need to do more than is done here - every PMD_SIZE ever occupied will be
flushed, do we really have to flush every PGDIR_SIZE ever partially occupied?
A shame to complicate it unnecessarily.
Special thanks to David Miller for time spent repairing my ceilings.
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.
Let it rip!