Commit graph

322 commits

Author SHA1 Message Date
Christian Krafft
91a69c9646 [POWERPC] cell: add cbe_node_to_cpu function
This patch adds code to handle the conversion of logical CPUs
to CBE nodes. It removes code that assumed there were two
logical CPUs per CBE.

Signed-off-by: Christian Krafft <krafft@de.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:44:38 +02:00
Jeremy Kerr
ccf17e9d00 [POWERPC] spu_base: fix initialisation on systems with no SPEs
This change fixes the case where spu_base and spufs are initialised on a
system with no SPEs - unconditionally create the spu_lists so spu_alloc
doesn't explode, and check for spu_management ops before starting spufs.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>

 arch/powerpc/platforms/cell/spu_base.c    |    7 ++++---
 arch/powerpc/platforms/cell/spufs/inode.c |    5 +++++
 2 files changed, 9 insertions(+), 3 deletions(-)
2007-04-23 21:19:00 +02:00
Christoph Hellwig
befdc746ee [POWERPC] spu_base: remove cleanup_spu_base
spu_base.c is always built into the kernel image, so there is no need
for a cleanup function.  And some of the things it does are in the
way for my following patches, so I'd rather get rid of it ASAP.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:18:59 +02:00
Christoph Hellwig
aa45e2569f [POWERPC] spufs: various run.c cleanups
- remove the spu_acquire_runnable from spu_run_init.  I need to
   open-code it in spufs_run_spu in the next patch
 - remove various inline attributes; we don't really want to inline
   long functions with multiple call sites
 - clean up return values and runcntl_write calls in spu_run_init
 - use normal kernel coding style in spu_reacquire_runnable

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:18:59 +02:00
Akinobu Mita
fe8a29db5b [POWERPC] spufs: enable SPU coredump for kernel-builtin spufs
spu_coredump_calls.owner is NULL when spufs is built into the
kernel, so the checks here break.
Check for the availability of the spu_coredump_calls variable
instead.

Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:18:59 +02:00
Arnd Bergmann
6cf2179202 [POWERPC] spufs: fix memory leak on coredump
The dynamically allocated read/write buffer in spufs_arch_write_note()
is never freed. Convert it to get_free_page at the same time.
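
Purely as illustration (this is not the actual patch; the helper name and
the buffer use are paraphrased), the page-based buffer handling looks
roughly like:

    #include <linux/gfp.h>
    #include <linux/errno.h>

    /* Hypothetical sketch of the intended buffer lifetime; the real work
     * happens in spufs_arch_write_note(), which is not reproduced here. */
    static int write_note_with_page_buffer(void)
    {
        unsigned char *buf = (unsigned char *)__get_free_page(GFP_KERNEL);

        if (!buf)
            return -ENOMEM;
        /* ... read the spufs file contents into buf and emit the note ... */
        free_page((unsigned long)buf);
        return 0;
    }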

Cc: Akinobu Mita <mita@fixstars.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:18:58 +02:00
Jeremy Kerr
d3764397d0 [POWERPC] spufs: Minor cleanup of spu_wait
Change the loop in spu_wait to be a little more straightforward.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:18:58 +02:00
Jeremy Kerr
f11f5ee70f [POWERPC] spufs: add mode= mount option
Add a 'mode=' option to spufs mount arguments. This allows more
control over access to the top-level spufs directory.

Tested on Cell.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:18:58 +02:00
Akinobu Mita
9e2fe2ce4e [POWERPC] spufs: use memcpy_fromio() to copy from local store
GCC may generate an inline copy loop for memcpy() instead of calling
the kernel-defined memcpy(). This inlined version causes an alignment
interrupt when copying from local store.

This patch uses memcpy_fromio() and memcpy_toio() to copy to and from
local store, preventing memcpy() from being inlined.
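
As a rough sketch of the pattern (the wrapper names below are made up;
only memcpy_fromio()/memcpy_toio() are the real kernel helpers):

    #include <linux/types.h>
    #include <asm/io.h>

    /* memcpy_fromio()/memcpy_toio() are real out-of-line functions, so the
     * compiler cannot replace them with an inline copy loop that would do
     * unaligned accesses to the SPE local store. */
    static void ls_copy_out(void *dst, const void __iomem *local_store, size_t len)
    {
        memcpy_fromio(dst, local_store, len);
    }

    static void ls_copy_in(void __iomem *local_store, const void *src, size_t len)
    {
        memcpy_toio(local_store, src, len);
    }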

Signed-off-by: Akinobu Mita <mita@fixstars.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:18:57 +02:00
Christoph Hellwig
8a7d86bdb2 [POWERPC] spufs: avoid spurious memory barriers
We now have proper locking around assignments of the mapping pointers,
and the spin_unlock implies enough of a barrier to get rid of the
explicit one.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:18:57 +02:00
Akinobu Mita
db1384b40d [POWERPC] spufs: fix memory leak on spufs reloading
When SPU isolation mode is enabled, isolated_loader is allocated by
spufs_init_isolated_loader() on module_init(), but nobody ever frees it.

This patch introduces spufs_exit_isolated_loader(), the counterpart
of spufs_init_isolated_loader(), which is called on module_exit().

Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Akinobu Mita <mita@fixstars.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:18:57 +02:00
Akinobu Mita
c99c1994a2 [POWERPC] spufs: fix missing error handling in module_init()
The spufs module_init() forgot to call a few cleanup functions on the
error path. This patch also includes cosmetic changes in
spu_sched_init() (indentation fix and returning an error code).

[modified by hch to apply ontop of the latest schedule changes]

Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Akinobu Mita <mita@fixstars.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:18:56 +02:00
Akinobu Mita
577f8f1021 [POWERPC] spufs: check spu_acquire_runnable() return value
This patch checks the return value of spu_acquire_runnable() in
spufs_mfc_write().

Signed-off-by: Akinobu Mita <mita@fixstars.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:18:56 +02:00
Christoph Hellwig
e45d48a34d [POWERPC] spufs: turn run_sema into run_mutex
There is no reason for run_sema to be a struct semaphore.  Change it
to a mutex and rename it accordingly.
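
A minimal sketch of what such a semaphore-to-mutex conversion looks like
(the struct and function names here are made up, not the actual spufs
code):

    #include <linux/mutex.h>

    struct ctx_sketch {
        struct mutex run_mutex;        /* was: struct semaphore run_sema */
    };

    static void ctx_sketch_init(struct ctx_sketch *ctx)
    {
        mutex_init(&ctx->run_mutex);   /* was: init_MUTEX(&ctx->run_sema) */
    }

    static void ctx_sketch_run(struct ctx_sketch *ctx)
    {
        mutex_lock(&ctx->run_mutex);   /* was: down(&ctx->run_sema) */
        /* ... run the SPU ... */
        mutex_unlock(&ctx->run_mutex); /* was: up(&ctx->run_sema) */
    }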

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:18:56 +02:00
Jeremy Kerr
c8a1e9393a [POWERPC] spufs: provide siginfo for SPE faults
This change populates a siginfo struct for SPE application exceptions
(ie, invalid DMAs and illegal instructions).

Tested on an IBM Cell Blade.
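
As a rough illustration of populating a siginfo for such a fault (the
function name and the 'ea' argument are made up, and the signal/code
values are plausible choices rather than necessarily the exact ones the
patch uses):

    #include <linux/sched.h>
    #include <linux/signal.h>
    #include <linux/string.h>

    /* Sketch only: deliver SIGBUS with the faulting effective address. */
    static void send_spe_fault_sig(unsigned long ea)
    {
        siginfo_t info;

        memset(&info, 0, sizeof(info));
        info.si_signo = SIGBUS;
        info.si_code  = BUS_ADRERR;
        info.si_addr  = (void __user *)ea;
        force_sig_info(info.si_signo, &info, current);
    }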

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:18:55 +02:00
Arnd Bergmann
57dace2391 [POWERPC] spufs: make spu page faults not block scheduling
Until now, we have always entered the spu page fault handler
with a mutex for the spu context held. This has multiple
bad side-effects:
- it becomes impossible to suspend the context during
  page faults
- if an spu program attempts to access its own mmio
  areas through DMA, we get an immediate livelock when
  the nopage function tries to acquire the same mutex

This patch makes the page fault logic operate on a
struct spu_context instead of a struct spu, and moves it
from spu_base.c to a new file fault.c inside of spufs.

We now also need to copy the dar and dsisr contents
of the last fault into the saved context to have it
accessible in case we schedule out the context before
activating the page fault handler.

Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:18:55 +02:00
Christoph Hellwig
62c05d583e [POWERPC] spu_base: move spu_init_channels out of spu_mutex
There is no reason to execute spu_init_channels under spu_mutex;
after the spu has been taken off the freelist it's ours.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:18:55 +02:00
Luke Browning
4e0f4ed0df [POWERPC] spu sched: make addition to stop_wq and runqueue atomic vs wakeup
Addition to stop_wq needs to happen before adding to the runqueue and
under the same lock, so that we don't have a race window for a lost
wakeup in the spu scheduler.

Signed-off-by: Luke Browning <lukebrowning@us.ibm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:18:55 +02:00
Christoph Hellwig
7ec18ab923 [POWERPC] spufs: streamline locking for isolated spu setup
For quite a while now, spu state has been protected by a simple mutex
instead of the old rw_semaphore, and this means we can simplify the
locking around spu_setup_isolated a lot.

Instead of doing an spu_release before entering spu_setup_isolated and
then calling the complicated spu_acquire_exclusive, we can now simply
enter the function locked and in a guaranteed runnable state, so that
the only bit of spu_acquire_exclusive that's left is the call to
spu_unmap_mappings.

Similarly, there's no more need to unlock and reacquire the state_mutex
when spu_setup_isolated is done; we can always return with the lock
held and only drop it in spu_run_init in the failure case.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:18:54 +02:00
Christoph Hellwig
a475c2f435 [POWERPC] spufs: remove woken threads from the runqueue early
A single context should only be woken once, and we should not have
more wakeups for a given priority than the number of contexts on
that runqueue position.

Also add some asserts to trap future problems in this area more
easily.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:18:54 +02:00
Arnd Bergmann
390c534304 [POWERPC] spufs: add memory barriers after set_bit
set_bit does not guarantee ordering on powerpc, so using it
for communication between threads requires explicit
mb() calls.
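
The pattern in question looks roughly like this (the structure, flag and
wait queue below are illustrative stand-ins, not the exact names from
the patch):

    #include <linux/bitops.h>
    #include <linux/wait.h>
    #include <asm/system.h>                /* mb() in this kernel era */

    struct wakeup_sketch {                 /* stand-in for struct spu_context */
        unsigned long sched_flags;
        wait_queue_head_t stop_wq;
    };
    #define SKETCH_WAKE_BIT 0              /* illustrative flag bit */

    static void signal_wakeup(struct wakeup_sketch *ctx)
    {
        /* set_bit() is atomic but implies no ordering on powerpc; the
         * explicit mb() makes the flag visible before the waiter runs. */
        set_bit(SKETCH_WAKE_BIT, &ctx->sched_flags);
        mb();
        wake_up(&ctx->stop_wq);
    }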

Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:18:54 +02:00
Christoph Hellwig
e097b51328 [POWERPC] spu sched: ensure preempted threads are put back on the runqueue, part2
To avoid losing a spu thread, we need to make sure it always gets put
back on the runqueue: in find_victim as well as in the scheduler tick,
as done in the previous patch.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:18:53 +02:00
Christoph Hellwig
b3e76cc324 [POWERPC] spu sched: ensure preempted threads are put back on the runqueue
To avoid losing a spu thread, we need to make sure it always gets put
back on the runqueue.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:18:53 +02:00
Christoph Hellwig
43c2bbd932 [POWERPC] spufs: clear mapping pointers after last close
Make sure the pointers to the various mappings are cleared once the
last user has stopped using them.  This avoids accessing freed memory
when tearing down the gang directory, and also allows pte invalidations
to be optimized away when no one uses these mappings.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:18:53 +02:00
Christoph Hellwig
0887309589 [POWERPC] spufs: use cancel_rearming_delayed_workqueue when stopping spu contexts
The scheduler workqueue may rearm itself and deadlock when we try to
stop it.  Put a flag in place so the work is skipped if we're tearing
down the context.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:18:52 +02:00
Stephen Rothwell
9c1a2bae0c [POWERPC] Rename get_property to of_get_property: the last one
This also fixes a bug where a property value was being modified
in place.

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-04-13 03:55:19 +10:00
Stephen Rothwell
e2eb63927b [POWERPC] Rename get_property to of_get_property: arch/powerpc
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-04-13 03:55:19 +10:00
Christoph Hellwig
dbf8eefa2b [POWERPC] spufs: don't yield CPU in spu_yield
There is no reason to yield the CPU in spu_yield - if the backing
thread reenters spu_run it gets added to the end of the runqueue for
its priority.  So the yield is just a slowdown for the case where we
have higher priority contexts waiting.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-04-13 03:55:15 +10:00
Geert Uytterhoeven
28066ae91b [POWERPC] CBE thermal support on PS3
I wanted to enable CBE_THERM on PS3.  So I had to enable CBE_RAS first.

But the resulting kernel doesn't link, as cbe_regs.c isn't compiled for
non-PPC_CELL_NATIVE.

CBE_RAS should depend on PPC_CELL_NATIVE; this makes it so.

Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-04-13 03:55:15 +10:00
Paul Mackerras
e049d1ca30 Merge branch 'linux-2.6' into for-2.6.22 2007-04-13 03:50:03 +10:00
Kumar Gala
72e77a1b94 [POWERPC] Split cell platforms into their respective Kconfig file
Cleaning up arch/powerpc/Kconfig platform support.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2007-03-22 10:05:00 -05:00
Benjamin Herrenschmidt
94b2a4393c [POWERPC] Fix spu SLB invalidations
The SPU code doesn't properly invalidate the SPUs' SLBs when necessary,
for example when changing a segment size from the hugetlbfs code. In
addition, it saves and restores the SLB content on context switches,
which makes it harder to properly handle those invalidations.

This patch removes the saving & restoring for now; something more
efficient might be found later on. It also adds a spu_flush_all_slbs(mm)
that can be used by the core mm code to flush the SLBs of all SPEs that
are running a given mm at the time of the flush.

In order to do that, it adds a spinlock to the list of all SPEs and moves
some bits & pieces from spufs to spu_base.c.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2007-03-10 00:07:50 +01:00
Christoph Hellwig
50b520d4ef [POWERPC] avoid SPU_ACTIVATE_NOWAKE optimization
This optimization was added recently but is still buggy,
so back it out for now.

Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-03-10 00:07:49 +01:00
Arnd Bergmann
aa0ed2bdb6 [POWERPC] spufs: fix possible memory corruption in spufs_mem_write
Due to a buggy unsigned comparison, it was possible to write
beyond the end of the local store file in spufs under some
circumstances.

This rewrites the buggy function to look more like
simple_copy_from_buffer.

Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Cc: Ulrich Weigand <Ulrich.Weigand@de.ibm.com>
2007-03-10 00:07:48 +01:00
Stephen Rothwell
57190708f1 [POWERPC] Create and use get_pci_dma_ops()
This allows us to hide pci_dma_ops.

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-03-09 15:03:25 +11:00
Stephen Rothwell
9874777016 [POWERPC] Create and use set_pci_dma_ops
This will allow us to build without PCI more easily.

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-03-09 15:03:25 +11:00
Carl Love
bcb63e25ed [POWERPC] cell: PPU Oprofile cleanup patch
This is a cleanup patch that includes the following changes:

 - Some comments were added to clarify the code, based on feedback
   from the community.
 - write_pm_cntrl() and set_count_mode() were passed a structure
   element from a global variable.  The argument was removed, so the
   functions now just operate on the global directly.
 - The set_pm_event() function call in the cell_virtual_cntr()
   routine was moved to a for-loop before the for_each_cpu loop.

Signed-off-by: Carl Love <carll@us.ibm.com>
Signed-off-by: Maynard Johnson <mpjohn@us.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-02-13 22:03:06 +01:00
Masato Noguchi
128b8546a8 [POWERPC] spufs: avoid accessing kernel memory through mmapped /mem node
I found an exploitable bug in the current kernel: there is no range
check when mmapping the "/mem" node in spufs, so an application can
access privileged memory regions.

Since this kernel may already be running on public servers, I am
sending this information only here. If there are such servers anywhere,
please update them ASAP.

Signed-off-by: Masato Noguchi <Masato.Noguchi@jp.sony.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-02-13 21:55:43 +01:00
Christoph Hellwig
2eb1b12049 [POWERPC] spu sched: static timeslicing for SCHED_RR contexts
For SCHED_RR tasks we can do some really trivial timeslicing.  Basically
we fire up a timer for every scheduler tick that searches for a higher
or same priority thread that is on the runqueue, and if there is one,
context switches to it.  Because we can't lock spus from timer context,
we actually run this from a delayed workqueue instead of a timer.

A nice optimization would be to skip the actual priority bitmap search
when there are fewer contexts than physical spus available.  To implement
this I need a so far unpublished patch from Andre, and it will be added
after we have that patch in.

Note that right now we only do the time slicing for SCHED_RR tasks.
The code would work for SCHED_OTHER tasks as well, but their prio
value is derived from the one the PPU thread has at the time of spu_run,
and using this for spu scheduling decisions would make the code very
unfair.  SCHED_OTHER support will be enabled once the spu scheduler
knows how to calculate spu_context.prio (very soon).

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-02-13 21:55:43 +01:00
Christoph Hellwig
72cb360839 [POWERPC] spu sched: use DECLARE_BITMAP
use DECLARE_BITMAP in the spu scheduler instead of reimplementing it.
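
For reference, DECLARE_BITMAP(name, bits) declares an unsigned long
array sized to hold 'bits' bits; a hypothetical runqueue structure
using it (field names are illustrative, not the patch's) might look
like:

    #include <linux/types.h>
    #include <linux/bitops.h>
    #include <linux/list.h>
    #include <linux/sched.h>               /* MAX_PRIO */

    struct runq_sketch {                   /* stand-in for the spu runqueue */
        DECLARE_BITMAP(bitmap, MAX_PRIO);  /* replaces a hand-sized array */
        struct list_head queue[MAX_PRIO];
    };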

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-02-13 21:55:42 +01:00
Christoph Hellwig
52f04fcf66 [POWERPC] spu sched: forced preemption at execution
If we start a spu context with realtime priority, we want it to run
immediately and not wait until some other lower priority thread has
finished.  Try to find a suitable victim and use its spu in this
case.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-02-13 21:55:42 +01:00
Christoph Hellwig
ae7b4c5284 [POWERPC] spu sched: update some comments
Give spu_yield a kerneldoc comment and remove the old comment
documenting spu_activate, spu_deactive and spu_yield as all of them
now have descriptive kerneldoc comments of their own.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-02-13 21:55:42 +01:00
Christoph Hellwig
678b2ff1e6 [POWERPC] spu sched: simplify spu_remove_from_active_list
If we call spu_remove_from_active_list, the spu is always guaranteed
to be on the active list and in runnable state, so we can simply do a
list_del to remove it and unconditionally take the was_active
codepath.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-02-13 21:55:41 +01:00
Christoph Hellwig
26bec67386 [POWERPC] spufs: optimize spu_run
There is no need to directly wake up contexts in spu_activate when
called from spu_run, so add a flag to suppress this wakeup.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-02-13 21:55:41 +01:00
Christoph Hellwig
079cdb6161 [POWERPC] spufs: runqueue simplification
This is the biggest patch in this series, and it reworks the guts of
the spu scheduler runqueue mechanism:

 - instead of embedding a waitqueue in the runqueue there is now a
   simple doubly-linked list, the actual wakeups happen by reusing
   the stop_wq in the spu context (maybe we should rename it one day)
 - spu_free and spu_prio_wakeup are merged into a single spu_reschedule
   function
 - various functionality is split out into small helpers, and kerneldoc
   comments are added in various places to document what's going on.
 - spu_activate is rewritten into a tight loop by removing tests for
   various impossible conditions and using the infrastructure in this
   patch.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-02-13 21:55:41 +01:00
Christoph Hellwig
8389998ae9 [POWERPC] spufs: move prio to spu_context
It doesn't make any sense to have a priority field in the physical spu
structure.  Move it into the spu context instead.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-02-13 21:55:40 +01:00
Christoph Hellwig
6a0641e510 [POWERPC] spufs: state_mutex cleanup
Various cleanups in code surrounding the state semaphore:

 - inline spu_acquire/spu_release
 - cleanup spu_acquire_* and add kerneldoc comments to these functions
 - remove spu_release_exclusive and replace it with spu_release

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-02-13 21:55:40 +01:00
Christoph Hellwig
650f8b0291 [POWERPC] spufs: simplify state_mutex
The r/w semaphore to lock the spus was overkill and can be replaced
with a mutex to make it faster, simpler and easier to debug.  It also
helps to allow making most spufs operations interruptible in future
patches.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-02-13 21:52:37 +01:00
Christoph Hellwig
202557d29e [POWERPC] spufs: sched.c cleanups
Various cleanups to sched.c that don't change the global control flow:

 - add kerneldoc comments to various functions
 - add spu_ prefixes to various functions
 - add/remove context from the runqueue in bind/unbind_context as
   it's part of the logical operation
 - add a call to put_active_spu to spu_unbind_context as it's logically
   part of the unbind operation

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-02-13 21:52:36 +01:00
Christoph Hellwig
81998bafe2 [POWERPC] spufs: bind_context sets SPU_STATE_RUNNABLE
Only bind_context/unbind_context change the spu context state.  Thus
we can move all assignments of SPU_STATE_RUNNABLE into bind_context,
which parallels the unbind side as well.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-02-13 21:52:36 +01:00
Christoph Hellwig
aa56c16807 [POWERPC] spufs: remove superfluous SPU_STATE_SAVED assignments
unbind_context already sets the context state to SPU_STATE_SAVED, thus
the spu_deactivate callers don't need to do it again.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-02-13 21:52:36 +01:00
Christoph Hellwig
5cb23afc9e [POWERPC] spufs: remove empty last line in run.c
Remove the empty last line in arch/powerpc/platforms/cell/spufs/run.c.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-02-13 21:52:35 +01:00
Christoph Hellwig
30a6c337dc [POWERPC] spufs: remove SPU_CONTEXT_PREEMPT
Remove the SPU_CONTEXT_PREEMPT define.  It's unused and won't be used
in this form after the scheduler rework.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-02-13 21:52:35 +01:00
Benjamin Herrenschmidt
17e0e27020 [POWERPC] spufs: Fix bitrot of the SPU mmap facility
It looks like we've had some serious bitrot there, mostly due to the
tracking of the address_spaces of mmap'ed files getting out of sync
with the actual mmap code. The mfc, mss and psmap were not tracked
properly and thus not invalidated on context switches (oops!).

I also removed the various file->f_mapping = inode->i_mapping;
assignments that were done in the other open() routines since that
is already done for us by __dentry_open.

One improvement we might want to make later is to assign the various
ctx-> fields at mmap time instead of file open/close time, so that we
don't call unmap_mapping_range() on things that have not been mmap'ed.

Finally, I added some smp_wmb's after assigning the ctx-> fields to make
sure they are visible to other CPUs. I don't think this is really
necessary as I suspect locking in the fs layer will make that happen
anyway but better safe than sorry.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-02-13 15:35:54 +11:00
Benjamin Herrenschmidt
78bde53e35 [POWERPC] spufs: remove need for struct page for SPEs
This patch removes the need for struct page for SPE local store
and registers from spufs. It also makes the locking much more
obvious and no longer relies on the truncate logic black magic
for protecting against races between unmap_mapping_range() and
new pages being faulted in. It does so by switching to a nopfn()
handler and using the new vm_insert_pfn() to set up the PTEs itself
while holding a lock on the SPE.

The nice thing is that this patch actually removes a lot more code
than it adds :-)

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-02-13 15:35:53 +11:00
Arjan van de Ven
754661f143 [PATCH] mark struct inode_operations const 1
Many struct inode_operations in the kernel can be "const".  Marking them const
moves these to the .rodata section, which avoids false sharing with potential
dirty data.  In addition it'll catch accidental writes at compile time to
these shared resources.
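
A minimal example of the kind of change (the ops table below is made up;
simple_lookup() is a real libfs helper used only to have a valid entry):

    #include <linux/fs.h>

    /* 'static const' places the table in .rodata, so accidental writes
     * fail at compile time instead of dirtying a shared cacheline. */
    static const struct inode_operations example_dir_inode_operations = {
        .lookup = simple_lookup,
    };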

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-02-12 09:48:46 -08:00
Arjan van de Ven
9c2e08c592 [PATCH] mark struct file_operations const 9
Many struct file_operations in the kernel can be "const".  Marking them const
moves these to the .rodata section, which avoids false sharing with potential
dirty data.  In addition it'll catch accidental writes at compile time to
these shared resources.

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-02-12 09:48:46 -08:00
Arjan van de Ven
5dfe4c964a [PATCH] mark struct file_operations const 2
Many struct file_operations in the kernel can be "const".  Marking them const
moves these to the .rodata section, which avoids false sharing with potential
dirty data.  In addition it'll catch accidental writes at compile time to
these shared resources.

[akpm@osdl.org: sparc64 fix]
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-02-12 09:48:44 -08:00
Al Viro
9340b0d356 [PATCH] arch/powerpc trivial annotations
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-02-09 09:14:06 -08:00
Ishizaki Kou
c9868fe0e0 [POWERPC] Celleb: consolidate spu management ops
The spu management ops in arch/powerpc/platforms/cell/spu_priv1_mmio.h
can be used commonly on OF-based platforms. This patch separates the spu
management ops from the native cell code and uses them on the celleb
platform.

Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Kou Ishizaki <kou.ishizaki@toshiba.co.jp>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-02-07 14:03:21 +11:00
Ishizaki Kou
3650cfe2e5 [POWERPC] spufs: Add SPU register lock
spu->register_lock should be held before accessing registers.

Signed-off-by: Kou Ishizaki <kou.ishizaki@toshiba.co.jp>
Acked-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-01-24 21:13:59 +11:00
Arnd Bergmann
ccb4911598 [POWERPC] spufs: fix assignment of node numbers
The difference between 'nid' and 'node' fields in an
spu structure was used incorrectly. The common 'node'
number now reflects the NUMA node, and it is used
in other places in the code as well.

The 'nid' value is meaningful only in one place, namely
the computation of the interrupt numbers based on the
physical location of an spu.  Consequently, we look it
up directly in the place where it is used now.

Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2006-12-19 15:35:39 +01:00
Benjamin Herrenschmidt
6e22ba63f0 [POWERPC] cell: Fix spufs with "new style" device-tree
Some SPU code was a bit too convoluted and broke when adding support
for the new-style device-tree; most notably, the struct pages for SPEs
no longer get created. Oops...

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2006-12-19 15:35:38 +01:00
Jens Osterkamp
a24e57be9b [POWERPC] cell: Enable spider workarounds on all PCI buses
Don't limit spider I/O workarounds to the first two buses.
The IBM Cell blade has three of them (one PCI, two PCIe)
and we want to handle them all.

Signed-off-by: Jens Osterkamp <jens@de.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2006-12-19 15:35:37 +01:00
Linus Torvalds
13d7d84e07 Merge git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc
* git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc: (36 commits)
  [POWERPC] Generic BUG for powerpc
  [PPC] Fix compile failure do to introduction of PHY_POLL
  [POWERPC] Only export __mtdcr/__mfdcr if CONFIG_PPC_DCR is set
  [POWERPC] Remove old dcr.S
  [POWERPC] Fix SPU coredump code for max_fdset removal
  [POWERPC] Fix irq routing on some 32-bit PowerMacs
  [POWERPC] ps3: Add vuart support
  [POWERPC] Support ibm,dynamic-reconfiguration-memory nodes
  [POWERPC] dont allow pSeries_probe to succeed without initialising MMU
  [POWERPC] micro optimise pSeries_probe
  [POWERPC] Add SPURR SPR to sysfs
  [POWERPC] Add DSCR SPR to sysfs
  [POWERPC] Fix 440SPe CPU table entry
  [POWERPC] Add support for FP emulation for the e300c2 core
  [POWERPC] of_device_register: propagate device_create_file return code
  [POWERPC] Fix mmap of PCI resource with hack for X
  [POWERPC] iSeries: head_64.o needs to depend on lparmap.s
  [POWERPC] cbe_thermal: Fix initialization of sysfs attribute_group
  [POWERPC] Remove QE header files from lite5200.c
  [POWERPC] of_platform_make_bus_id(): make `magic' int
  ...
2006-12-11 18:24:58 -08:00
Paul Mackerras
39f44be375 [POWERPC] Fix SPU coredump code for max_fdset removal
Commit bbea9f6966 removed the max_fdset
element of struct fdtable.  It appears that checking max_fds is
sufficient now.

Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-12-11 15:13:37 +11:00
Josef Sipek
b4d1ab58c0 [PATCH] struct path: convert powerpc
Signed-off-by: Josef Sipek <jsipek@fsl.cs.sunysb.edu>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-12-08 08:28:48 -08:00
Christian Krafft
22b6e59047 [POWERPC] cbe_thermal: Fix initialization of sysfs attribute_group
This patch adds NULL to the initialization of the attribute_groups.
The spu_attributes and ppe_attributes arrays are arrays of pointers
that need to be terminated with a NULL entry.
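
In sketch form (the attribute and group names are illustrative, not the
exact cbe_thermal ones):

    #include <linux/sysfs.h>

    static struct attribute example_temperature_attr = {
        .name = "temperature",             /* hypothetical attribute */
        .mode = 0444,
    };

    static struct attribute *example_spu_attributes[] = {
        &example_temperature_attr,
        NULL,                              /* the terminator this patch adds */
    };

    static struct attribute_group example_attr_group = {
        .name  = "thermal",
        .attrs = example_spu_attributes,
    };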

Signed-off-by: Christian Krafft <krafft@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-12-08 17:21:02 +11:00
Stephen Rothwell
a081e126e1 [POWERPC] Fix cell pmu initialisation
Make sure that the pmu is not initialised unless we are running on a cell.
Also make the init routine static.

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-12-08 15:55:54 +11:00
Christoph Lameter
e18b890bb0 [PATCH] slab: remove kmem_cache_t
Replace all uses of kmem_cache_t with struct kmem_cache.

The patch was generated using the following script:

	#!/bin/sh
	#
	# Replace one string by another in all the kernel sources.
	#

	set -e

	for file in `find * -name "*.c" -o -name "*.h"|xargs grep -l $1`; do
		quilt add $file
		sed -e "1,\$s/$1/$2/g" $file >/tmp/$$
		mv /tmp/$$ $file
		quilt refresh
	done

The script was run like this

	sh replace kmem_cache_t "struct kmem_cache"

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-12-07 08:39:25 -08:00
Christoph Lameter
e94b176609 [PATCH] slab: remove SLAB_KERNEL
SLAB_KERNEL is an alias of GFP_KERNEL.
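
The conversion is mechanical; a hypothetical call site before and after:

    #include <linux/slab.h>

    static struct kmem_cache *example_cache;    /* made-up cache */

    static void *alloc_example(void)
    {
        /* before: return kmem_cache_alloc(example_cache, SLAB_KERNEL); */
        return kmem_cache_alloc(example_cache, GFP_KERNEL);
    }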

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-12-07 08:39:24 -08:00
Stephen Rothwell
da06aa08d9 [POWERPC] spufs: we should only execute init_spu_base on cell
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2006-12-04 20:41:11 +11:00
Arnd Bergmann
c2b2226c7e [POWERPC] spufs: always send sigtrap on breakpoint
Currently, we only send a SIGTRAP if the current task is being ptraced.
This is somewhat inconsistent, and it breaks utrace support in Fedora.
Removing the check should do the right thing in all cases.

Cc: Ulrich Weigand <ulrich.weigand@de.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2006-12-04 20:41:09 +11:00
Jeremy Kerr
bd2e5f829e [POWERPC] spufs: return an error in spu_create if isolated create isn't supported
This changes the spu_create system call to return an error (-ENODEV) if
an isolated spu context is requested on hardware that doesn't support
isolated mode.

Tested on systemsim with and without isolation support

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2006-12-04 20:41:07 +11:00
Geoff Levand
e28b003136 [POWERPC] cell: abstract spu management routines
This adds a platform-specific spu management abstraction and the
corresponding routines to support the IBM Cell Blade.  It also removes
the hypervisor-only resources that were included in struct spu.

Three new platform-specific routines are introduced: spu_enumerate_spus(),
spu_create_spu() and spu_destroy_spu().  The underlying design uses a new
type, struct spu_management_ops, to hold function pointers that the
platform setup code is expected to initialize to instances appropriate
to that platform.

For the IBM Cell Blade support, I put the hypervisor-only resources that
were in struct spu into a platform-specific data structure, struct
spu_pdata.

Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2006-12-04 20:40:39 +11:00
Benjamin Herrenschmidt
5850dd8f6d [POWERPC] cell: hard disable interrupts in power_save()
With soft-disabled interrupts in power_save, we can
still get external exceptions on Cell, even if we are
in pause(0) a.k.a. sleep state.

When the CPU really wakes up through the 0x100 (system reset)
vector, while we have already started processing the 0x500
(external) exception, we get a panic in unrecoverable_exception()
because of the lost state.

This occurred in Systemsim for Cell, but as far as I can see,
it can theoretically occur on any machine that uses the
system reset exception to get out of sleep state.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2006-12-04 20:40:21 +11:00
Dwayne Grant McConnell
bf1ab978be [POWERPC] coredump: Add SPU elf notes to coredump.
This patch adds SPU elf notes to the coredump. It creates a separate note
for each of /regs, /fpcr, /lslr, /decr, /decr_status, /mem, /signal1,
/signal1_type, /signal2, /signal2_type, /event_mask, /event_status,
/mbox_info, /ibox_info, /wbox_info, /dma_info, /proxydma_info, /object-id.

A new macro, ARCH_HAVE_EXTRA_NOTES, was created for architectures to
specify they have extra elf core notes.

A new macro, ELF_CORE_EXTRA_NOTES_SIZE, was created so the size of the
additional notes could be calculated and added to the notes phdr entry.

A new macro, ELF_CORE_WRITE_EXTRA_NOTES, was created so the new notes
would be written after the existing notes.

The SPU coredump code resides in spufs. Stub functions are provided in the
kernel which are hooked into the spufs code which does the actual work via
register_arch_coredump_calls().

A new set of __spufs_<file>_read/get() functions was provided to allow the
coredump code to read from the spufs files without having to lock the
SPU context for each file read from.

Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Dwayne Grant McConnell <decimal@us.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2006-12-04 20:40:19 +11:00
Maynard Johnson
18f2190d79 [POWERPC] cell: Add oprofile support
Add PPU event-based and cycle-based profiling support to Oprofile for Cell.

Oprofile is expected to collect data on all CPUs simultaneously.
However, there is one set of performance counters per node.  There are
two hardware threads or virtual CPUs on each node.  Hence, OProfile must
multiplex in time the performance counter collection on the two virtual
CPUs.

The multiplexing of the performance counters is done by a virtual
counter routine.  Initially, the counters are configured to collect data
on the even CPUs in the system, one CPU per node.  In order to capture
the PC for the virtual CPU when the performance counter interrupt occurs
(the specified number of events between samples has occurred), the even
processors are configured to handle the performance counter interrupts
for their node.  The virtual counter routine is called via a kernel
timer after the virtual sample time.  The routine stops the counters,
saves the current counts, loads the last counts for the other virtual
CPU on the node, sets interrupts to be handled by the other virtual CPU
and restarts the counters, the virtual timer routine is scheduled to run
again.  The virtual sample time is kept relatively small to make sure
sampling occurs on both CPUs on the node with a relatively small
granularity.  Whenever the counters overflow, the performance counter
interrupt is called to collect the PC for the CPU where data is being
collected.

The oprofile driver relies on a firmware RTAS call to setup the debug bus
to route the desired signals to the performance counter hardware to be
counted.  The RTAS call must set the routing registers appropriately in
each of the islands to pass the signals down the debug bus as well as
routing the signals from a particular island onto the bus.  There is a
second firmware RTAS call to reset the debug bus to the non-pass-through
state when the counters are not in use.

Signed-off-by: Carl Love <carll@us.ibm.com>
Signed-off-by: Maynard Johnson <mpjohn@us.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-12-04 20:40:14 +11:00
Kevin Corry
0443bbd3d8 [POWERPC] cell: Add routines for managing PMU interrupts
The following routines are added to arch/powerpc/platforms/cell/pmu.c:
 cbe_clear_pm_interrupts()
 cbe_enable_pm_interrupts()
 cbe_disable_pm_interrupts()
 cbe_query_pm_interrupts()
 cbe_pm_irq()
 cbe_init_pm_irq()

This also adds a routine in arch/powerpc/platforms/cell/interrupt.c and
some macros in cbe_regs.h to manipulate the IIC_IR register:
 iic_set_interrupt_routing()

Signed-off-by: Kevin Corry <kevcorry@us.ibm.com>
Signed-off-by: Carl Love <carll@us.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-12-04 20:40:12 +11:00
Kevin Corry
e4f6948cfc [POWERPC] cell: Move PMU-related stuff to include/asm-powerpc/cell-pmu.h
Move some PMU-related macros and function prototypes from cbe_regs.h
and pmu.h in arch/powerpc/platforms/cell/ to a new header at
include/asm-powerpc/cell-pmu.h

This is cleaner to use from the oprofile code, since that sits in
arch/powerpc/oprofile, not in the cell platform directory.

Signed-off-by: Kevin Corry <kevcorry@us.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-12-04 20:40:11 +11:00
Kevin Corry
c93dfa0766 [POWERPC] cell: PMU register macros
More macros for manipulating bits in the Cell PMU control registers.

Signed-off-by: Kevin Corry <kevcorry@us.ibm.com>
Signed-off-by: Carl Love <carll@us.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-12-04 20:40:09 +11:00
Arnd Bergmann
5231800c6f [POWERPC] cell: Add symbol exports for oprofile
Add symbol-exports for the new routines in arch/powerpc/platforms/cell/pmu.c.
They are needed for Oprofile, which can be built as a module.

Signed-off-by: Kevin Corry <kevcorry@us.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-12-04 20:40:07 +11:00
Jeremy Kerr
c6730ed4c2 [POWERPC] spufs: Load isolation kernel from spu_run
In order to fit with the "don't-run-spus-outside-of-spu_run" model, this
patch starts the isolated-mode loader in spu_run, rather than
spu_create. If spu_run is passed an isolated-mode context that isn't in
isolated mode state, it will run the loader.

This fixes potential races with the isolated SPE app doing a
stop-and-signal before the PPE has called spu_run: bugzilla #29111.
Also (in conjunction with a mambo patch), this addresses #28565, as we
always set the runcntrl register when entering spu_run.

It is up to libspe to ensure that isolated-mode apps are cleaned up
after running to completion - ie, put the app through the "ISOLATE EXIT"
state (see Ch11 of the CBEA).

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-12-04 20:40:06 +11:00
Jeremy Kerr
3960c26020 [POWERPC] spufs: Add runcntrl read accessors
This change adds a read accessor for the SPE problem-state run control
register.

This is required for applying (userspace) changes made to the run
control register while the SPE is stopped - simply asserting the master
run control bit is not sufficient. My next patch for isolated-mode
setup requires this.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-12-04 20:40:04 +11:00
Arnd Bergmann
ee2d7340cb [POWERPC] spufs: Use SPU master control to prevent wild SPU execution
When the user changes the runcontrol register, an SPU might be
running without a process being attached to it and waiting for
events. In order to prevent this, make sure we always disable
the priv1 master control when we're not inside of spu_run.

Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-12-04 20:40:02 +11:00
Masato Noguchi
3692dc6614 [POWERPC] spufs: Fix return value of spufs_mfc_write
This patch changes spufs_mfc_write() to return the
correct size instead of 0.

Signed-off-by: Masato Noguchi <Masato.Noguchi@jp.sony.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-12-04 20:40:01 +11:00
Arnd Bergmann
932f535dd4 [POWERPC] spufs: Always map local store non-guarded
When fixing spufs to map the 'mem' file backing store cacheable,
I incorrectly set the physical mapping to use both cache-inhibited
and guarded mapping, which resulted in a serious performance
degradation.

Debugged-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-12-04 20:39:59 +11:00
Christoph Hellwig
5c3ecd659b [POWERPC] spufs: Avoid user-triggered oops in ptrace
When one of the spufs files is mapped into a process address
space, regular users can use ptrace to attempt to access it
with access_process_vm(). With the way that the
mappings currently work, this likely causes an oops.

Setting the vm_flags to VM_IO makes sure that ptrace can
not access them but returns an error code. This is not
the perfect solution in case of the local store mapping,
but it fixes the oops in a well-defined way.
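
A stripped-down sketch of the idea (the real spufs mmap handlers also
install vm_ops and set page protections, which is omitted here):

    #include <linux/fs.h>
    #include <linux/mm.h>

    static int example_spufs_mmap(struct file *file, struct vm_area_struct *vma)
    {
        /* VM_IO makes access_process_vm(), and therefore ptrace, return
         * an error instead of touching the mapping. */
        vma->vm_flags |= VM_IO;
        /* ... install vm_ops for the SPE mapping here ... */
        return 0;
    }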

Also remove leftover VM_RESERVED flags in spufs.  The
VM_RESERVED flag is on its way out and not checked by
the memory management code anymore.

Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Christoph Hellwig <chellwig@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-12-04 20:39:57 +11:00
Masato Noguchi
2ebb2477f9 [POWERPC] spufs: Fix missing stop-and-signal
When there are pending signals, the current spufs_run_spu() always
returns -ERESTARTSYS and is called again automatically.
But if the spe has already stopped due to a stop-and-signal or halt
instruction, returning -ERESTARTSYS causes the stop-and-signal/halt to
be lost and the spu to run past the end point.

For your convenience, I attached sample code to reproduce this bug.
If there is no bug, the printed NPC will be 0x4000.

Signed-off-by: Masato Noguchi <Masato.Noguchi@jp.sony.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-12-04 20:39:55 +11:00
Arnd Bergmann
453d9f72a9 [POWERPC] spufs: Return correct event for data storage interrupt
When we attempt an MFC DMA to an unmapped address, the event
returned from spu_run should be SPE_EVENT_SPE_DATA_STORAGE,
not SPE_EVENT_INVALID_DMA.

Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-12-04 20:39:54 +11:00
Geoff Levand
0021550c01 [POWERPC] spufs: Replace spu.nid with spu.node
Replace the use of the platform-specific variable spu.nid with the
platform-independent variable spu.node.

Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-12-04 20:39:52 +11:00
Dwayne Grant McConnell
17f88cebc2 [POWERPC] spufs: Read from signal files only if data is there
We need to check the channel count of the signal notification registers
before reading them, because it can be undefined when the count is
zero. In order to read count and data atomically, we read from the
saved context.

This patch uses spu_acquire_saved() to force a context save before a
/signal1 or /signal2 read. Because of this it is no longer necessary to
have backing_ops and hw_ops versions of this function so they have been
removed.

Regular applications should not rely on reading this register
to be fast, as it's conceptually a write-only file from the PPE
perspective.

Signed-off-by: Dwayne Grant McConnell <decimal@us.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-12-04 20:39:50 +11:00
Dwayne Grant McConnell
69a2f00ce5 [POWERPC] spufs: Implement /mbox_info, /ibox_info, and /wbox_info.
This patch implements read only access to

/mbox_info - SPU Write Outbound Mailbox
/ibox_info - SPU Write Outbound Interrupt Mailbox
/wbox_info - SPU Read Inbound Mailbox

These files are used by gdb in order to look into the current mailbox
queues without changing the contents at the same time. They are
not meant for general programming use, since the access requires
a context save and is therefore rather slow.

It would be good to complement this patch with one that adds
write support as well.

Signed-off-by: Dwayne Grant McConnell <decimal@us.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-12-04 20:39:49 +11:00
Dwayne Grant McConnell
1182e1d351 [POWERPC] spufs: Remove /spu_tag_mask file
This patch removes the /spu_tag_mask file from spufs. The data provided by
this file is also available from the /dma_info file in the dma_info_mask
of the spu_dma_info struct.

The file was intended to be used by gdb, but that never used it, and
now it has been replaced with the more verbose dma_info file.

Signed-off-by: Dwayne Grant McConnell <decimal@us.ibm.com>
Signed-off-by: Arnd Bergmann  <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-12-04 20:39:47 +11:00
Dwayne Grant McConnell
b9e3bd774b [POWERPC] spufs: Add /lslr, /dma_info and /proxydma files
The /lslr file gives read access to the SPU_LSLR register in hex, for
example 0x3fff. The /dma_info file provides read access to the SPU
Command Queue in a binary format. The /proxydma_info file provides read
access to the Proxy Command Queue in a binary format. The spu_info.h
file provides data structures for interpreting the binary format of
/dma_info and /proxydma_info.

Signed-off-by: Dwayne Grant McConnell <decimal@us.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-12-04 20:39:45 +11:00
Dwayne Grant McConnell
9b5047e249 [POWERPC] spufs: Change %llx to 0x%llx.
This patch changes the /npc, /decr, /decr_status, /spu_tag_mask,
/event_mask, /event_status, and /srr0 files to provide output according
to the format string "0x%llx" instead of "%llx".

Before this patch some files used "0x%llx" and others used "%llx", which
is inconsistent and potentially confusing. A user might assume "%llx"
numbers were decimal if they happened to not contain any a-f digits. This
change will break any code that cannot tolerate a leading 0x in the file
contents. The only known user of these files is libspe, but there might
also be some scripts which access these files. This risk is deemed
acceptable for future consistency.
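
The change itself is just the format string; as a sketch (buffer
handling paraphrased, function name made up):

    #include <linux/kernel.h>

    static int format_reg(char *buf, size_t len, unsigned long long val)
    {
        /* before: return snprintf(buf, len, "%llx\n", val); */
        return snprintf(buf, len, "0x%llx\n", val);
    }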

Signed-off-by: Dwayne Grant McConnell <decimal@us.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-12-04 20:39:44 +11:00
Jeremy Kerr
165785e5c0 [POWERPC] Cell iommu support
This patch adds full cell iommu support (and iommu disabled mode).

It implements mapping/unmapping of iommu pages on demand using the
standard powerpc iommu framework.  It also supports running with
iommu disabled for machines with less than 2GB of memory.  (The
default is off in that case, though it can be forced on with the
kernel command line option iommu=force).

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-12-04 20:39:02 +11:00
Benjamin Herrenschmidt
acfd946a1a [POWERPC] Make cell use direct DMA ops
Now that the direct DMA ops support an offset, we use that instead
of defining our own.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-12-04 20:39:00 +11:00
Benjamin Herrenschmidt
014da7ff47 [POWERPC] Cell "Spider" MMIO workarounds
This patch implements a workaround for a Spider PCI host bridge bug
where it doesn't enforce some of the PCI ordering rules unless some
manual manipulation of a special register is done. In order to be
fully compliant with the PCI spec, I do this on every MMIO read
operation.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-12-04 20:38:54 +11:00
Benjamin Herrenschmidt
d03f387eb3 [POWERPC] Cell fixup DMA offset for new southbridge
This patch makes the Cell DMA code work on both the Spider and the Axon
south bridges by turning cell_dma_valid into a variable instead of a
constant. This is a temporary patch until we have full iommu support.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-12-04 20:38:50 +11:00