Commit Graph

109 Commits (e07b9e08601b400aee7e076e7b31799d3dd48c1e)

Author SHA1 Message Date
Tejun Heo 5a0e3ad6af include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h
percpu.h is included by sched.h and module.h and thus ends up being
included when building most .c files.  percpu.h includes slab.h which
in turn includes gfp.h making everything defined by the two files
universally available and complicating inclusion dependencies.

percpu.h -> slab.h dependency is about to be removed.  Prepare for
this change by updating users of gfp and slab facilities to include
those headers directly instead of assuming availability.  As this
conversion needs to touch a large number of source files, the
following script is used as the basis of conversion.

  http://userweb.kernel.org/~tj/misc/slabh-sweep.py

The script does the following.

* Scan files for gfp and slab usages and update includes such that
  only the necessary includes are there, i.e. if only gfp is used,
  include gfp.h; if slab is used, slab.h (see the example after this
  list).

* When the script inserts a new include, it looks at the include
  blocks and tries to place the new include so that its order conforms
  to its surroundings.  It's put in the include block which contains
  core kernel includes, in the same order that the rest are ordered -
  alphabetical, Christmas tree, rev-Xmas-tree, or at the end if there
  doesn't seem to be any matching order.

* If the script can't find a place to put a new include (mostly
  because the file doesn't have a fitting include block), it prints
  out an error message indicating which .h file needs to be added to
  the file.
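
For illustration only - the file and symbols below are hypothetical - a
typical post-conversion .c file that uses the slab allocator ends up
including slab.h directly instead of inheriting it via percpu.h:

  /* hypothetical drivers/example/foo.c: uses kmalloc()/kfree(), which it
   * previously got only through the percpu.h -> slab.h -> gfp.h chain */
  #include <linux/kernel.h>
  #include <linux/module.h>
  #include <linux/errno.h>
  #include <linux/slab.h>         /* added by the sweep: kmalloc()/kfree() user */

  static int __init foo_init(void)
  {
          void *buf = kmalloc(128, GFP_KERNEL);   /* GFP_KERNEL via slab.h -> gfp.h */

          if (!buf)
                  return -ENOMEM;
          kfree(buf);
          return 0;
  }
  module_init(foo_init);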

The conversion was done in the following steps.

1. The initial automatic conversion of all .c files updated slightly
   over 4000 files, deleting around 700 includes and adding ~480 gfp.h
   and ~3000 slab.h inclusions.  The script emitted errors for ~400
   files.

2. Each error was manually checked.  Some files didn't need the
   inclusion, some needed manual addition, and for others adding it to
   an implementation .h or embedding .c file was more appropriate.
   This step added inclusions to around 150 files.

3. The script was run again and the output was compared to the edits
   from #2 to make sure no file was left behind.

4. Several build tests were done and a couple of problems were fixed.
   e.g. lib/decompress_*.c used malloc/free() wrappers around slab
   APIs requiring slab.h to be added manually.

5. The script was run on all .h files, but without automatically
   editing them, as sprinkling gfp.h and slab.h inclusions around .h
   files could easily lead to inclusion dependency hell.  Most gfp.h
   inclusion directives were ignored as stuff from gfp.h was usually
   widely available and often used in preprocessor macros.  Each
   slab.h inclusion directive was examined and added manually as
   necessary.

6. percpu.h was updated not to include slab.h.

7. Build tests were done on the following configurations and failures
   were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
   distributed build env didn't work with gcov compiles) and a few
   more options had to be turned off depending on the arch to make
   things build (like ipr on powerpc/64 which failed due to missing
   writeq).

   * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
   * powerpc and powerpc64 SMP allmodconfig
   * sparc and sparc64 SMP allmodconfig
   * ia64 SMP allmodconfig
   * s390 SMP allmodconfig
   * alpha SMP allmodconfig
   * um on x86_64 SMP allmodconfig

8. percpu.h modifications were reverted so that it could be applied as
   a separate patch and serve as bisection point.

Given the fact that I had only a couple of failures from tests on step
6, I'm fairly confident about the coverage of this conversion patch.
If there is a breakage, it's likely to be something in one of the arch
headers which should be easily discoverable on most builds of the
specific arch.

Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
2010-03-30 22:02:32 +09:00
Christoph Hellwig fc5377668c tracing: Remove markers
Now that the last users of markers have migrated to the event
tracer we can kill off the (now orphan) support code.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <20090917173527.GA1699@lst.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-09-18 21:22:08 +02:00
Christoph Hellwig ae142e0c52 powerpc/sputrace: Use the generic event tracer
I wrote sputrace before the generic tracing infrastructure was available.
Now that we have the generic event tracer we can convert it over and
remove a lot of code:

  8 files changed, 45 insertions(+), 285 deletions(-)

To use it make sure CONFIG_EVENT_TRACING is enabled and then enable
the spufs trace channel by

  echo 1 > /sys/kernel/debug/tracing/events/spufs/spufs_context/enable

and then read the trace records using e.g.

  cat /sys/kernel/debug/tracing/trace

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-08-20 10:29:21 +10:00
Ingo Molnar 74019224ac timers: add mod_timer_pending()
Impact: new timer API

Based on an idea from Martin Josefsson with the help of
Patrick McHardy and Stephen Hemminger:

introduce the mod_timer_pending() API, a mod_timer() variant that is
a no-op on timers that have already been removed.

(regular mod_timer() re-activates non-pending timers.)

This is useful for the networking code in that it can
allow unserialized mod_timer_pending() timer-forwarding
calls, but a single del_timer*() will stop the timer
from being reactivated again.
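
A minimal, hypothetical usage sketch (the 2.6.29-era timer callback
signature with an unsigned long argument is assumed; the names are made
up):

  #include <linux/timer.h>
  #include <linux/jiffies.h>

  static struct timer_list conn_timer;    /* hypothetical per-connection timer */

  static void conn_timeout(unsigned long data)
  {
          /* tear the connection down */
  }

  static void conn_init(void)
  {
          setup_timer(&conn_timer, conn_timeout, 0);
  }

  /* hot path, may run concurrently with teardown: only pushes the deadline
   * out if the timer is still pending; a timer already stopped by
   * del_timer_sync() stays stopped */
  static void conn_saw_traffic(void)
  {
          mod_timer_pending(&conn_timer, jiffies + 30 * HZ);
  }

  /* teardown: after this, racing conn_saw_traffic() calls cannot re-arm it */
  static void conn_destroy(void)
  {
          del_timer_sync(&conn_timer);
  }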

Also while at it:

- optimize the regular mod_timer() path some more; the timer-stat
  and a debug check were needlessly duplicated in __mod_timer().

- make the exports come straight after the function, as most other
  exports in timer.c already do.

- eliminate __mod_timer() as an external API, change the
  users to mod_timer().

The regular mod_timer() code path is not impacted
significantly, due to inlining optimizations and due to
the simplifications.

Based-on-patch-from: Stephen Hemminger <shemminger@vyatta.com>
Acked-by: Stephen Hemminger <shemminger@vyatta.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Patrick McHardy <kaber@trash.net>
Cc: netdev@vger.kernel.org
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-02-18 19:26:33 +01:00
Rusty Russell 86c6f274f5 cpumask: powerpc: Introduce cpumask_of_{node,pcibus} to replace {node,pcibus}_to_cpumask
Impact: New APIs

The old node_to_cpumask/node_to_pcibus returned a cpumask_t: these
return a pointer to a struct cpumask.  Part of removing cpumasks from
the stack.

(Also replaces powerpc internal uses of node_to_cpumask).
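
A hedged before/after sketch of a typical caller (cpus_on_node() is a
hypothetical helper):

  #include <linux/cpumask.h>
  #include <linux/topology.h>

  /* Old style copied a whole cpumask onto the stack:
   *     cpumask_t mask = node_to_cpumask(node);
   * New style only takes a pointer to an existing mask. */
  static int cpus_on_node(int node)
  {
          const struct cpumask *mask = cpumask_of_node(node);

          return cpumask_weight(mask);
  }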

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-12-26 22:23:39 +10:30
Andre Detsch 34318c253b powerpc/spufs: Explain conditional decrement of aff_sched_count
This patch adds a comment to clarify why atomic_dec_if_positive is being
used to decrement a gang's aff_sched_count on SPU context unbind.

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-10-21 11:16:09 +11:00
Andre Detsch 10baa26c8c powerpc/spufs: Improve search of node for contexts with SPU affinity
This patch improves readability of the code responsible for trying to
find a node with enough SPUs not committed to other affinity gangs.

An additional check is also added, to avoid taking into account gangs that
have no SPU affinity.

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-10-21 11:15:23 +11:00
Andre Detsch b2e601d14d powerpc/spufs: Fix possible scheduling of a context to multiple SPEs
We currently have a race when scheduling a context to a SPE -
after we have found a runnable context in spusched_tick, the same
context may have been scheduled by spu_activate().

This may result in a panic if we try to unschedule a context that has
been freed in the meantime.

This change exits spu_schedule() if the context has already been
scheduled, so we don't end up scheduling it twice.

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-09-08 09:44:43 +10:00
Jeremy Kerr b65fe0356b powerpc/spufs: Fix race for a free SPU
We currently have a race for a free SPE, with one thread doing a
spu_yield() and another doing a spu_activate():

thread 1				thread 2
spu_yield(oldctx)			spu_activate(ctx)
  __spu_deactivate(oldctx)
  spu_unschedule(oldctx, spu)
  spu->alloc_state = SPU_FREE
					spu = spu_get_idle(ctx)
					    - searches for a SPE in
					      state SPU_FREE, gets
					      the context just
					      freed by thread 1
					spu_schedule(ctx, spu)
					  spu->alloc_state = SPU_USED
spu_schedule(newctx, spu)
  - assumes spu is still free
  - tries to schedule context on
    already-used spu

This change introduces a 'free_spu' flag to spu_unschedule, to indicate
whether or not the function should free the spu after descheduling the
context. We only set this flag if we're not going to re-schedule
another context on this SPU.

Add a comment to document this behaviour.
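
A loose sketch of the idea (argument order, helper names and the call
site are assumptions, not the literal patch):

  /* Deschedule ctx from spu; only mark the spu free when the caller is not
   * about to put another context straight onto it. */
  static void spu_unschedule(struct spu *spu, struct spu_context *ctx,
                             int free_spu)
  {
          /* ... detach ctx from the spu ... */
          if (free_spu)
                  spu->alloc_state = SPU_FREE;    /* only now may others grab it */
  }

  /* A caller that immediately re-schedules keeps the spu marked in use, so
   * a concurrent spu_get_idle() can never hand out this spu in between. */
  static void spusched_switch(struct spu *spu, struct spu_context *oldctx,
                              struct spu_context *newctx)
  {
          spu_unschedule(spu, oldctx, 0);
          spu_schedule(newctx, spu);      /* sets spu->alloc_state = SPU_USED */
  }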

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-09-05 10:52:03 +10:00
Jeremy Kerr 9f43e3914d powerpc/spufs: Fix multiple get_spu_context()
Commit 8d5636fbca introduced a reference
count on SPU contexts during find_victim, but this may cause a leak in
the reference count if we later find a better contender for a context to
unschedule.

Move the reference to after we've found our victim context, so we
don't do the extra get_spu_context().

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-09-05 10:51:00 +10:00
Ilpo Järvinen cb9808d3d0 powerpc/spufs: Remove invalid semicolon after if statement
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-08-19 22:04:55 +08:00
Jeremy Kerr 8d5636fbca powerpc/spufs: reference context while dropping state mutex in scheduler
Based on an original patch from Christoph Hellwig <hch@lst.de>.

Currently, there is a possible reference-after-free in the spusched
code - contexts may be freed after we have released their state_mutex
in spusched_tick and find_victim.

This change takes a reference to the context before releasing the
mutex, so that the context doesn't get destroyed.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-08-14 14:59:12 +10:00
Andre Detsch ad1ede1277 powerpc/spufs: better placement of spu affinity reference context
This patch adjusts the placement of a reference context from
a spu affinity chain. The reference context can now be placed
only on nodes that have enough spus not intended to be used by
another gang (already running on the node).

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-07-24 11:01:54 +10:00
Andre Detsch 0855b54322 powerpc/spufs: fix aff_mutex and cbe_spu_info[n].list_mutex deadlock
Currently, it is possible to lock aff_mutex and
cbe_spu_info[n].list_mutex in different orders, allowing a deadlock to
occur. With this change, aff_mutex is not taken within a list_mutex
critical section anymore.

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-07-24 10:57:26 +10:00
Maxim Shchetynin fabb657005 powerpc/spufs: add atomic busy_spus counter to struct cbe_spu_info
As the nr_active counter also includes spus waiting for syscalls to
return, we need a separate counter that only counts spus that are
currently running on the spu side. This counter shall be used by a
cpufreq governor that targets a frequency dependent on the number of
running spus.
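
A hedged sketch of the counter and its bookkeeping (field placement and
the helper names are assumptions):

  struct cbe_spu_info {
          /* ... existing fields ... */
          atomic_t busy_spus;     /* spus currently running on the spu side */
  };

  static void spu_mark_running(struct spu *spu)
  {
          atomic_inc(&cbe_spu_info[spu->node].busy_spus);
  }

  /* called e.g. when the spu stops to wait for a syscall to return */
  static void spu_mark_stopped(struct spu *spu)
  {
          atomic_dec(&cbe_spu_info[spu->node].busy_spus);
  }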

Signed-off-by: Christian Krafft <krafft@de.ibm.com>
Acked-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-09 16:30:42 +10:00
Luke Browning 2442a8ba5a powerpc/spufs: don't extend time slice if context is not in spu_run
An spu context shouldn't get an extra tick if the time slice code
couldn't find something else to run. This means contexts that are not
within spu_run (ie, SPU_SCHED_SPU_RUN is cleared) will not receive
extra ticks while we have no other contexts waiting.

Signed-off-by: Luke Browning <lukebrowning@us.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-07-09 10:13:40 +10:00
Luke Browning 028fda0a6c powerpc/spufs: fix missed stop-and-signal event
There is a delay in the transition to the stopped state for class 2
interrupts. In some cases, the controlling thread detects the state of
the spu as running, and goes back to sleep resulting in a hung
application as the event is missed.

This change detects the stop condition and re-generates the wakeup event
after a context save.

Signed-off-by: Luke Browning <lukebrowning@us.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-06-16 14:35:01 +10:00
Luke Browning 2c911a14b7 powerpc/spufs: synchronize interaction between spu exception handling and time slicing
Time slicing can occur at the same time as spu exception handling
resulting in the wakeup of the wrong thread.

This change uses the spu's register_lock to enforce synchronization
between bind/unbind and spu exception handling so that they are
mutually exclusive.

Signed-off-by: Luke Browning <lukebrowning@us.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-06-16 14:35:01 +10:00
Luke Browning 08fcf1d611 [POWERPC] spufs: Fix pointer reference in find_victim
If victim (not ctx) is in spu_run, add victim to rq.

Signed-off-by: Luke Browning <lukebrowning@us.ibm.com>
Acked-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-05-15 20:47:17 +10:00
Paul Mackerras 2a5f2e3e6c Merge branch 'for-2.6.26' of master.kernel.org:/pub/scm/linux/kernel/git/jwboyer/powerpc-4xx into merge 2008-05-09 20:12:06 +10:00
Christoph Hellwig 7a28a1549f [POWERPC] spufs: don't requeue victim context in find_victim if it's not in spu_run
We should not requeue the victim context in find_victim if the owner is
not in spu_run. First, it's not needed, because leaving the context on
the spu is only an optimization; second, it's harmful, because it means
the owner could re-enter spu_run while the context is on the runqueue
and trip the BUG_ON in __spu_update_sched_info.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-05-08 15:26:32 +10:00
Luke Browning 7a2142002f [POWERPC] spufs: try to route SPU interrupts to local node
Currently, we re-route SPU interrupts to the current cpu, which may be
on a remote node. In the case of time slicing, all spu interrupts will
end up routed to the same cpu, where the spusched_tick occurs.

This change routes mfc interrupts to the cpu where the controlling
thread last ran, provided that cpu is on the same node as the spu
(otherwise don't reroute interrupts).

This should improve performance and provide a more predictable
environment for processing spu exceptions. In the past we have seen
concurrent delivery of spu exceptions to two cpus. This eliminates that
concern.

Signed-off-by: Luke Browning <lukebr@linux.vnet.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-05-05 13:33:43 +10:00
Linus Torvalds ec31b21241 Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc
* 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc:
  [POWERPC] Fix crashkernel= handling when no crashkernel= specified
  [POWERPC] Make emergency stack safe for current_thread_info() use
  [POWERPC] spufs: add .gitignore for spu_save_dump.h & spu_restore_dump.h
  [POWERPC] spufs: trace spu_acquire_saved events
  [POWERPC] spufs: fix marker name for find_victim
  [POWERPC] spufs: add marker for destroy_spu_context
  [POWERPC] spufs: add sputrace marker parameter names
  [POWERPC] spufs: add context switch notification log
  [POWERPC] mpc5200: defconfigs for CM5200, Lite5200B, Motion-PRO and TQM5200
  [POWERPC] mpc5200: Switch mpc5200 dts files to dts-v1 format
  [POWERPC] mpc5200: Fix FEC error handling on FIFO errors
  [POWERPC] mpc5200: add Phytec pcm030 board support
  [POWERPC] mpc5200: add gpiolib support for mpc5200
  [POWERPC] mpc5200: add interrupt type function
  [POWERPC] mpc5200: Fix unterminated of_device_id table
2008-04-30 08:37:40 -07:00
Julio M. Merino Vidal 8a476d4955 [POWERPC] spufs: fix marker name for find_victim
Fix a typo in the marker for the find_victim function, which prevented
it from being traced. It previously read find_vitim.

Signed-off-by: Julio M. Merino Vidal <jmerino@ac.upc.edu>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-04-30 15:16:14 +10:00
Christoph Hellwig 5158e9b521 [POWERPC] spufs: add context switch notification log
There are userspace instrumentation tools that need to monitor spu
context switches. This patch adds a new file called 'switch_log' to
each spufs context directory that can be used to monitor the context
switches.

Context switches in/out and exits from spu_run are logged once the file
has first been opened, and the records can be read from it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-04-30 15:01:54 +10:00
Denis V. Lunev 667471386d powerpc: use non-racy method for proc entries creation
Use proc_create()/proc_create_data() to make sure that ->proc_fops and ->data
are set up before gluing the PDE to the main tree.

Add the correct ->owner to proc_fops to fix the reading vs. module-unloading race.
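
A hedged sketch of the non-racy pattern (the entry name and the show
routine are placeholders): proc_create() publishes the entry only after
->proc_fops is already set, and proc_create_data() does the same for
->data.

  #include <linux/module.h>
  #include <linux/proc_fs.h>
  #include <linux/seq_file.h>

  static int example_show(struct seq_file *m, void *v)
  {
          seq_printf(m, "example\n");
          return 0;
  }

  static int example_open(struct inode *inode, struct file *file)
  {
          return single_open(file, example_show, NULL);
  }

  static const struct file_operations example_proc_fops = {
          .owner   = THIS_MODULE,         /* fixes the read vs. module-unload race */
          .open    = example_open,
          .read    = seq_read,
          .llseek  = seq_lseek,
          .release = single_release,
  };

  static int __init example_proc_init(void)
  {
          /* the entry becomes visible fully initialized, or not at all */
          if (!proc_create("example", 0444, NULL, &example_proc_fops))
                  return -ENOMEM;
          return 0;
  }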

Signed-off-by: Denis V. Lunev <den@openvz.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-04-29 08:06:22 -07:00
Jeremy Kerr ce7c191bca [POWERPC] spufs: don't (ab)use SCHED_IDLE
commit 4ef11014 introduced a usage of SCHED_IDLE to detect when
a context is within spu_run.

Instead of SCHED_IDLE (which has another meaning), add a flag to
sched_flags that tells whether a context should be running.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-03-11 12:28:02 +11:00
Andre Detsch 2a58aa33da [POWERPC] spufs: fix use time accounting on SPE-overcommit
The spu_runcntl_RW register is restored within the spu_restore function.
So, at the end of spu_bind_context, the SPU context is not just loaded,
but running.

This change corrects the state switch to account the time as USER.

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-02-29 15:48:55 +11:00
Jeremy Kerr 4ef110141b [POWERPC] spufs: fix scheduler starvation by idle contexts
2.6.25 has a regression where we can starve the scheduler by creating
(N_SPES+1) contexts, then running them one at a time.

The final context will never be run, as the other contexts are loaded on
the SPEs, none of which are reported as free (ie, spu->alloc_state !=
SPU_FREE), so spu_get_idle() doesn't give us a spu to run on. Because
all of the contexts are stopped, none are descheduled by the scheduler
tick, as spusched_tick returns if spu_stopped(ctx).

This change replaces the spu_stopped() check with checking for SCHED_IDLE
in ctx->policy. We set a context's policy to SCHED_IDLE when we're not
in spu_run(). We also favour SCHED_IDLE contexts when looking for contexts
to unbind, but leave their timeslice intact for later resumption.

This patch fixes the following test in the spufs-testsuite:
  tests/20-scheduler/02-yield-starvation

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-02-19 10:12:02 +11:00
Christoph Hellwig 038200cfdc [POWERPC] spufs: Add marker-based tracing facility
This adds markers to two important points in the spufs code and a new
module (sputrace.ko) that allows reading these out through a proc file.

Long-term I'd rather see something like lttng extended to use the spufs
instrumentation, but for now I think this is a good enough quick
solution.  We'll probably want to add various additional events beyond
the ones I have already.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-02-06 16:26:59 +11:00
Paul Mackerras 9156ad4833 Merge branch 'linux-2.6' 2008-01-24 10:07:21 +11:00
Paul Mackerras a5a971129c [POWERPC] Fix build failure on Cell when CONFIG_SPU_FS=y
Commit aed3a8c9bb introduced a
definition of notify_spus_active in .../cell/spu_syscalls.c, and
another definition under #ifndef MODULE in .../cell/spufs/sched.c.
The latter is not necessary and causes the build to fail when
CONFIG_SPU_FS=y, so this removes it.  It also removes the export
of do_notify_spus_active, which is unnecessary.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Jeremy Kerr <jk@ozlabs.org>
2008-01-02 15:56:30 +11:00
Bob Nelson aed3a8c9bb [POWERPC] Oprofile: Remove dependency on spufs module
This removes an OProfile dependency on the spufs module.  This
dependency was causing a problem for multiplatform systems that are
built with support for Oprofile on Cell but try to load the oprofile
module on a non-Cell system.

Signed-off-by: Bob Nelson <rrnelson@us.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Acked-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-12-28 15:07:52 +11:00
Aegis Lin 90608a2928 [POWERPC] spufs: Use separate timer for /proc/spu_loadavg calculation
The original spusched_timer was designed to take effect only when
a context is waiting in the runqueue.

This change adds an additional, lower-frequency timer purely to handle
the spu_load updates. The new timer is triggered every LOAD_FREQ ticks.
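
A hedged sketch of such a low-frequency housekeeping timer (the names
are illustrative; LOAD_FREQ is the standard loadavg sampling interval
from linux/sched.h):

  #include <linux/timer.h>
  #include <linux/jiffies.h>
  #include <linux/sched.h>        /* LOAD_FREQ */

  static struct timer_list spuloadavg_timer;

  static void spuloadavg_tick(unsigned long data)
  {
          /* recompute the spu load averages here, then re-arm */
          mod_timer(&spuloadavg_timer, jiffies + LOAD_FREQ);
  }

  static void spuloadavg_start(void)
  {
          setup_timer(&spuloadavg_timer, spuloadavg_tick, 0);
          mod_timer(&spuloadavg_timer, jiffies + LOAD_FREQ);
  }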

Signed-off-by: Aegis Lin <aegislin@gmail.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-12-21 19:46:21 +11:00
Christoph Hellwig c9101bdb1b [POWERPC] spufs: make state_mutex interruptible
Make most places that use spu_acquire/spu_acquire_saved interruptible;
this allows getting out of the spufs code when e.g. pressing ctrl+c.
There are a few places where we get called, e.g. from spufs teardown
routines, where we can't simply err out, so these are left with a
comment. For now I've also not touched the poll routines because it's
an open question what libspe would expect in terms of interrupted
system calls.
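
A hedged sketch of the shape of the change (the real spufs prototypes
and callers differ in detail): the lock is taken with
mutex_lock_interruptible(), and callers propagate the error instead of
blocking across a signal.

  #include <linux/mutex.h>

  struct spu_context {
          struct mutex state_mutex;
          /* ... other fields elided ... */
  };

  /* returns -EINTR if a signal (e.g. ctrl+c) arrives while waiting */
  static int spu_acquire(struct spu_context *ctx)
  {
          return mutex_lock_interruptible(&ctx->state_mutex);
  }

  static long example_spufs_op(struct spu_context *ctx)
  {
          int ret = spu_acquire(ctx);

          if (ret)
                  return ret;     /* bail out of spufs instead of hanging */
          /* ... operate on the context ... */
          mutex_unlock(&ctx->state_mutex);
          return 0;
  }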

Acked-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-12-21 19:46:21 +11:00
Luke Browning e65c2f6fce [POWERPC] spufs: decouple spu scheduler from spufs_spu_run (asynchronous scheduling)
Change spufs_spu_run so that the context is queued directly to the
scheduler and the controlling thread advances directly to spufs_wait()
for spe errors and exceptions.

nosched contexts are treated the same as before.

Fixes from Christoph Hellwig <hch@lst.de>

Signed-off-by: Luke Browning <lukebr@linux.vnet.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-12-21 19:46:21 +11:00
Luke Browning b192541b39 [POWERPC] spufs: spu_find_victim may choose wrong victim
Need to re-check priority after dropping lock.  Otherwise, a
more favored context may be preempted.

Signed-off-by: Luke Browning <lukebr@linux.vnet.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-12-21 19:46:20 +11:00
Luke Browning 91569531d1 [POWERPC] spufs: reorganize spu_run_init
This cleans up spu_run_init so that it does all of the spu
initialization for spufs_run_spu.  It initializes the spu context as
much as possible before it activates the spu and writes the runcntl
register.

Signed-off-by: Luke Browning <lukebr@linux.vnet.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-12-21 19:46:20 +11:00
Jeremy Kerr d6ad39bc53 [POWERPC] spufs: rework class 0 and 1 interrupt handling
Based on original patches from
 Arnd Bergmann <arnd.bergman@de.ibm.com>; and
 Luke Browning <lukebr@linux.vnet.ibm.com>

Currently, spu contexts need to be loaded to the SPU in order to take
class 0 and class 1 exceptions.

This change makes the actual interrupt-handlers much simpler (ie,
set the exception information in the context save area), and defers the
handling code to the spufs_handle_class[01] functions, called from
spufs_run_spu.

This should improve the concurrency of spu scheduling, leading to
greater SPU utilization when SPUs are overcommitted.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-12-21 19:46:20 +11:00
Arnd Bergmann 33bfd7a738 [POWERPC] spufs: block fault handlers in spu_acquire_runnable
This change disables the logic that faults-in spu contexts under the
covers from the page fault handler.  When a fault requires a runnable
context, the handler will block until the context is scheduled by
other means.

Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-12-21 19:46:19 +11:00
Jeremy Kerr 7cd58e4381 [POWERPC] spufs: move fault, lscsa_alloc and switch code to spufs module
Currently, part of the spufs code (switch.o, lscsa_alloc.o and fault.o)
is compiled directly into the kernel.

This change moves these components of spufs into the spufs module.

The lscsa and switch objects are fairly straightforward to move in.

For the fault.o module, we split the fault-handling code into two
parts: a/p/p/c/spu_fault.c and a/p/p/c/spufs/fault.c. The former is for
the in-kernel spu_handle_mm_fault function, and we move the rest of the
fault-handling code into spufs.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-12-21 19:46:19 +11:00
Julio M. Merino Vidal 9b1d21f858 [POWERPC] spufs: fix typos in sched.c comments
Fix a few typos in the spufs scheduler comments

Signed-off-by: Julio M. Merino Vidal <jmerino@ac.upc.edu>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-12-21 19:46:18 +11:00
Paul Mackerras 0ce49a3945 Merge branch 'linux-2.6' 2007-09-20 10:09:27 +10:00
Christoph Hellwig c0e7b4aa1c [POWERPC] spusched: Fix null pointer dereference in find_victim
find_victim can dereference a NULL pointer when iterating over the list
of victim spus because list_mutex only guarantees spu->ctx to be stable,
but of course not to be non-NULL.

Also fix find_victim to not call spu_unbind_context without list_mutex
because that violates the above guarantee.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-09-19 15:26:29 +10:00
Andre Detsch 36ddbb1380 [POWERPC] spufs: Fix race condition on gang->aff_ref_spu
Affinity reference point location (gang->aff_ref_spu) is reset
when the whole gang is descheduled. However, the last member of
a gang can be descheduled while we are trying to schedule another
member of the gang. This was leading to a race condition, and
the code was using gang->aff_ref_spu in an unsafe manner.

By holding the gang->aff_mutex a little bit longer, and incrementing
gang->aff_sched_count (which controls when gang->aff_ref_spu
should be reset) a little bit earlier, the problem is fixed.

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-09-19 15:12:16 +10:00
Andre Detsch 683e3ab2b5 [POWERPC] spufs: Fix affinity after introduction of node_allowed() calls
This patch fixes affinity reference point placement, which was not being
done in some situations, after the introduction of node_allowed() calls.

The previously used parameter, 'ctx', is just the iterator of the
previous list_for_each_entry_reverse loop, and its value might be
invalid at the end of the loop. Also, the right context to seek
for information when defining the reference ctx location
_is_ the reference ctx.

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-08-03 19:18:10 +10:00
Masato Noguchi 6f6a6dc0c8 [POWERPC] spufs: Fix incorrect initialization of cbe_spu_info.spus
We currently initialize cbe_spu_info[].spus in both init_spu_base and
spu_sched_init. The initialization in spu_sched_init clears the SPU list,
so we end up with no physical SPUs. Because of this, the spu_run
syscall will block forever.

This change removes the unnecessary initialization in spu_sched_init.

Signed-off-by: Masato Noguchi <Masato.Noguchi@jp.sony.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-07-26 16:17:55 +10:00
Christoph Hellwig 486acd4850 [CELL] spufs: rework list management and associated locking
This sorts out the various lists and related locks in the spu code.

In detail:

 - the per-node free_spus and active_list are gone.  Instead struct spu
   gained an alloc_state member telling whether the spu is free or not
 - the per-node spus array is now locked by a per-node mutex, which
   takes over from the global spu_lock and the per-node active_mutex
 - the spu_alloc* and spu_free functions are gone as the state change is
   now done inline in the spufs code.  This allows some more sharing of
   code for the affinity vs normal case and more efficient locking
 - some little refactoring in the affinity code for this locking scheme
   (see the sketch after this list)
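
Roughly, the resulting layout and locking rule look like the following
hedged sketch (field order and the helper are assumptions; spu->cbe_list
is assumed to be the per-node list linkage):

  #include <linux/list.h>
  #include <linux/mutex.h>

  struct cbe_spu_info {
          struct mutex list_mutex;        /* protects the spus list and each
                                           * spu's alloc_state */
          struct list_head spus;
          /* ... */
  };

  /* claiming a free spu now happens inline in spufs, under list_mutex */
  static struct spu *spu_claim_idle(int node)
  {
          struct spu *spu, *found = NULL;

          mutex_lock(&cbe_spu_info[node].list_mutex);
          list_for_each_entry(spu, &cbe_spu_info[node].spus, cbe_list) {
                  if (spu->alloc_state == SPU_FREE) {
                          spu->alloc_state = SPU_USED;
                          found = spu;
                          break;
                  }
          }
          mutex_unlock(&cbe_spu_info[node].list_mutex);
          return found;
  }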

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-07-20 21:42:28 +02:00
Bob Nelson 1474855d08 [CELL] oprofile: add support to OProfile for profiling CELL BE SPUs
From: Maynard Johnson <mpjohn@us.ibm.com>

This patch updates the existing arch/powerpc/oprofile/op_model_cell.c
to add in the SPU profiling capabilities.  In addition, a 'cell' subdirectory
was added to arch/powerpc/oprofile to hold Cell-specific SPU profiling code.
Exports spu_set_profile_private_kref and spu_get_profile_private_kref which
are used by OProfile to store private profile information in spufs data
structures.

Also incorporated several fixes from other patches (rrn).  Check pointer
returned from kzalloc.  Eliminated unnecessary cast.  Better error
handling and cleanup in the related area.  64-bit unsigned long parameter
was being demoted to 32-bit unsigned int and eventually promoted back to
unsigned long.

Signed-off-by: Carl Love <carll@us.ibm.com>
Signed-off-by: Maynard Johnson <mpjohn@us.ibm.com>
Signed-off-by: Bob Nelson <rrnelson@us.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Acked-by: Paul Mackerras <paulus@samba.org>
2007-07-20 21:42:24 +02:00
Bob Nelson 36aaccc1e9 [CELL] oprofile: enable SPU switch notification to detect currently active SPU tasks
From: Maynard Johnson <mpjohn@us.ibm.com>

This patch adds to the capability of spu_switch_event_register so that
the caller is also notified of currently active SPU tasks.
Exports spu_switch_event_register and spu_switch_event_unregister so
that OProfile can get access to the notifications provided.

Signed-off-by: Maynard Johnson <mpjohn@us.ibm.com>
Signed-off-by: Carl Love <carll@us.ibm.com>
Signed-off-by: Bob Nelson <rrnelson@us.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Acked-by: Paul Mackerras <paulus@samba.org>
2007-07-20 21:42:20 +02:00