Commit graph

14 commits

Becky Bruce
1cebd7a0f6 powerpc: Rename get_dma_direct_offset get_dma_offset
The former name is no longer really accurate now that the
swiotlb case is a possibility.  I also move it into
dma-mapping.h - it no longer needs to be in dma.c, and there
are about to be some more accessors that should all end up in
the same place.  A comment is added to indicate that this
function is not used in configs where there is no simple dma
offset, such as the iommu case.
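
A minimal sketch of the renamed accessor as it might live in
dma-mapping.h (the archdata.dma_data field and PCI_DRAM_OFFSET
fallback are assumptions based on the powerpc conventions of
the time):

    #include <linux/device.h>

    /* Not used in configs without a simple dma offset (e.g. iommu). */
    static inline dma_addr_t get_dma_offset(struct device *dev)
    {
            /* Per-device offset stashed in archdata, if any */
            if (dev && dev->archdata.dma_data)
                    return (dma_addr_t)(unsigned long)dev->archdata.dma_data;
            return PCI_DRAM_OFFSET;        /* platform-wide default */
    }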

Signed-off-by: Becky Bruce <beckyb@kernel.crashing.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-09-24 15:31:43 +10:00
FUJITA Tomonori
80d3e8abb7 powerpc: Add CONFIG_DMA_API_DEBUG support
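
Wiring this up is essentially a one-time dma_debug_init() call
early in boot; a sketch (the preallocation count is an
assumption):

    #include <linux/init.h>
    #include <linux/dma-debug.h>

    #define PREALLOC_DMA_DEBUG_ENTRIES (1 << 16)   /* assumed pool size */

    static int __init dma_init(void)
    {
            /* Preallocate tracking entries for the DMA debug core */
            dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
            return 0;
    }
    fs_initcall(dma_init);
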
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-08-28 14:24:11 +10:00
FUJITA Tomonori
45223c5492 powerpc: use dma_map_ops struct
This converts powerpc to use the generic dma_map_ops struct
(in include/linux/dma-mapping.h) instead of the homegrown
powerpc dma_mapping_ops.
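
With the generic struct, the per-device lookup and dispatch go
through the same type every other architecture uses; a sketch
(the wrapper name my_map_page is hypothetical, and
archdata.dma_ops is assumed to hold the per-device ops):

    #include <linux/dma-mapping.h>

    static inline struct dma_map_ops *get_dma_ops(struct device *dev)
    {
            return dev ? dev->archdata.dma_ops : NULL;
    }

    static inline dma_addr_t my_map_page(struct device *dev,
                    struct page *page, unsigned long offset,
                    size_t size, enum dma_data_direction dir)
    {
            struct dma_map_ops *ops = get_dma_ops(dev);

            /* Dispatch through the generic ops table */
            return ops->map_page(dev, page, offset, size, dir, NULL);
    }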

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Becky Bruce <beckyb@kernel.crashing.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-08-28 14:24:10 +10:00
Benjamin Herrenschmidt
b2f2e8fee3 powerpc/dma: pci_set_dma_mask() shouldn't fail if mask fits in RAM
On an iMac G5, the b43 driver fails to initialise because
trying to set the DMA mask to 30 bits fails, even though
there's only 512MiB of RAM in the machine anyway:
	https://bugzilla.redhat.com/show_bug.cgi?id=514787

We should probably let it succeed if the available RAM in the system
doesn't exceed the requested limit.
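
The essence is a check like the following in the direct ops'
dma_supported hook (lmb_end_of_DRAM() was the top-of-RAM helper
of that era; treat this as a sketch, not the exact patch):

    /* Accept any mask that covers all of RAM, so a 30-bit mask
     * succeeds on a machine with only 512MiB. */
    static int dma_direct_dma_supported(struct device *dev, u64 mask)
    {
            return mask >= (lmb_end_of_DRAM() - 1);
    }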

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-08-10 16:36:38 +10:00
Becky Bruce
ec3cf2ece2 powerpc: Add support for swiotlb on 32-bit
This patch includes the basic infrastructure to use swiotlb
bounce buffering on 32-bit powerpc.  It is not yet enabled on
any platforms.  Probably the most interesting bit is the
addition of addr_needs_map to dma_ops - we need this as
a dma_op because the decision of whether or not an addr
can be mapped by a device is device-specific.
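
A sketch of how the new hook might be consulted (the helper
name dma_needs_bounce is hypothetical; only the addr_needs_map
member follows the description above):

    /* Ask the device's ops whether an address is reachable;
     * a non-zero return means swiotlb must bounce the buffer. */
    static int dma_needs_bounce(struct device *dev, dma_addr_t addr,
                                size_t size)
    {
            struct dma_mapping_ops *ops = get_dma_ops(dev);

            if (ops && ops->addr_needs_map)
                    return ops->addr_needs_map(dev, addr, size);
            return 0;       /* directly reachable */
    }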

Signed-off-by: Becky Bruce <beckyb@kernel.crashing.org>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-06-09 16:49:18 +10:00
Benjamin Herrenschmidt
8b31e49d1d powerpc: Fix up dma_alloc_coherent() on platforms without cache coherency.
The implementation we just revived has issues, such as using a
Kconfig-defined virtual address area in kernel space that nothing
actually carves out (and thus will overlap whatever is there),
or having some dependencies on being self contained in a single
PTE page which adds unnecessary constraints on the kernel virtual
address space.

This fixes it by using more classic PTE accessors and automatically
locating the area for consistent memory, carving an appropriate hole
in the kernel virtual address space, leaving only the size of that
area as a Kconfig option. It also brings some dma-mask related fixes
from the ARM implementation which was almost identical initially but
grew its own fixes.
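
Conceptually the consistent area becomes a fixed-size hole next
to the ioremap space, with only its size configurable; one
plausible sketch of the layout macros (the exact placement
relative to IOREMAP_TOP is an assumption):

    /* Carve CONFIG_CONSISTENT_SIZE bytes of kernel virtual
     * space for non-cacheable coherent allocations. */
    #define CONSISTENT_BASE         (IOREMAP_TOP)
    #define CONSISTENT_END          (CONSISTENT_BASE + CONFIG_CONSISTENT_SIZE)
    #define CONSISTENT_OFFSET(x)    \
            (((unsigned long)(x) - CONSISTENT_BASE) >> PAGE_SHIFT)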

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-05-27 16:33:59 +10:00
Yang Hongyang
284901a90a dma-mapping: replace all DMA_32BIT_MASK macro with DMA_BIT_MASK(32)
Replace all uses of the DMA_32BIT_MASK macro with DMA_BIT_MASK(32).
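
For reference, DMA_BIT_MASK() computes the mask from a bit
count, which is why the fixed-width macros could go away:

    #define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))

    /* DMA_BIT_MASK(32) == 0xffffffffULL, the old DMA_32BIT_MASK */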

Signed-off-by: Yang Hongyang <yanghy@cn.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-04-07 08:31:11 -07:00
Becky Bruce
15e09c0eca powerpc: Add sync_*_for_* to dma_ops
We need to swap these out once we start using swiotlb, so add
them to dma_ops.  Create CONFIG_PPC_NEED_DMA_SYNC_OPS Kconfig
option; this is currently enabled automatically if we're
CONFIG_NOT_COHERENT_CACHE.  In the future, this will also
be enabled for builds that need swiotlb.  If PPC_NEED_DMA_SYNC_OPS
is not defined, the dma_sync_*_for_* ops compile to nothing.
Otherwise, they access the dma_ops pointers for the sync ops.

This patch also changes dma_sync_single_range_* to actually
sync the range - previously it was using a generous
dma_sync_single.  dma_sync_single_* is now implemented
as a dma_sync_single_range with an offset of 0.
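
A sketch of the resulting wrappers (assuming the
dma_mapping_ops struct and get_dma_ops() accessor from this
file's history):

    #ifdef CONFIG_PPC_NEED_DMA_SYNC_OPS
    static inline void dma_sync_single_for_cpu(struct device *dev,
                    dma_addr_t handle, size_t size,
                    enum dma_data_direction dir)
    {
            struct dma_mapping_ops *ops = get_dma_ops(dev);

            /* dma_sync_single_* is just a range sync at offset 0 */
            ops->sync_single_range_for_cpu(dev, handle, 0, size, dir);
    }
    #else
    /* Without the option, the sync ops compile to nothing */
    static inline void dma_sync_single_for_cpu(struct device *dev,
                    dma_addr_t handle, size_t size,
                    enum dma_data_direction dir) { }
    #endif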

Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-12-03 20:46:36 +11:00
Benjamin Herrenschmidt
2434bbb30e powerpc: Fix dma_map_sg() cache flushing on non coherent platforms
On PowerPC 4xx or other non cache-coherent platforms, we lost the
appropriate cache flushing in dma_map_sg() when merging the 32 and
64-bit DMA code (commit 4fc665b88a,
"powerpc: Merge 32 and 64-bit dma code").  This restores it.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Becky Bruce <beckyb@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-12-03 18:24:08 +11:00
Benjamin Herrenschmidt
8aa2659009 powerpc: Fix DMA offset for non-coherent DMA
After Becky's work we can almost have different DMA offsets
between on-chip devices and PCI. Almost, because the
non-coherent DMA code basically ignores the programmed offset
and uses the global one for everything.
This fixes it.
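
The essence of the fix, as a sketch (device_to_bus is a
hypothetical helper; get_dma_direct_offset() is the per-device
accessor of the time):

    /* Form the bus address from the offset programmed for this
     * device, not the global PCI_DRAM_OFFSET. */
    static dma_addr_t device_to_bus(struct device *dev,
                    struct page *page, unsigned long offset)
    {
            return page_to_phys(page) + offset + get_dma_direct_offset(dev);
    }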

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-10-14 10:35:26 +11:00
Becky Bruce
4fc665b88a powerpc: Merge 32 and 64-bit dma code
We essentially adopt the 64-bit dma code, with some changes to support
32-bit systems, including HIGHMEM.  dma functions on 32-bit are now
invoked via accessor functions which call the correct op for a device based
on archdata dma_ops.  If there is no archdata dma_ops, this defaults
to dma_direct_ops.

In addition, the dma_map/unmap_page functions are added to dma_ops
because we can't just fall back on map/unmap_single when HIGHMEM is
enabled. In the case of dma_direct_*, we stop using map/unmap_single
and just use the page version - this saves a lot of ugly
ifdeffing.  We leave map/unmap_single in the dma_ops definition,
though, because they are needed by the iommu code, which does not
implement map/unmap_page.  Ideally, going forward, we will completely
eliminate map/unmap_single and just have map/unmap_page, if it's
workable for 64-bit.
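
A sketch of the 32-bit dispatch with the dma_direct_ops
fallback (simplified from the behaviour described above):

    static inline struct dma_mapping_ops *get_dma_ops(struct device *dev)
    {
            /* Default to direct ops when the bus code has not
             * set per-device ops in archdata. */
            if (unlikely(dev == NULL) || dev->archdata.dma_ops == NULL)
                    return &dma_direct_ops;
            return dev->archdata.dma_ops;
    }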

Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-09-24 16:26:45 -05:00
Becky Bruce
8fae035324 powerpc: Drop archdata numa_node
Use the struct device's numa_node instead; use accessor functions
to get/set numa_node.
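
For example, with the archdata field gone, callers switch to
the generic struct device accessors (assign_pci_node is a
hypothetical caller):

    #include <linux/device.h>
    #include <linux/pci.h>

    static void assign_pci_node(struct pci_dev *pdev)
    {
            /* set_dev_node()/dev_to_node() replace direct
             * archdata numa_node accesses. */
            set_dev_node(&pdev->dev, pcibus_to_node(pdev->bus));
    }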

Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-09-24 16:26:43 -05:00
Becky Bruce
8dd0e95206 powerpc: Move iommu dma ops from dma.c to dma-iommu.c
32-bit platforms are about to start using dma.c; move the iommu
dma ops into their own file to make this a bit cleaner.

Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-09-24 16:26:42 -05:00
Becky Bruce
7c05d7e08d powerpc: Rename dma_64.c to dma.c
This is in preparation for the merge of the 32 and 64-bit
dma code in arch/powerpc.

Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-09-24 16:26:41 -05:00
Renamed from arch/powerpc/kernel/dma_64.c