Enabling extended addressing in the h/w requires that we always assign the
extended address component (eptr) of the talitos h/w pointer. This is
needed for e500-based platforms with large memories.
Signed-off-by: Kim Phillips <kim.phillips@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
align channel access locks onto separate cache lines (for performance
reasons). This is done by placing per-channel variables into their own
private struct, and using the cacheline_aligned attribute within that
struct.
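As a rough sketch of the layout this describes (type and field names are
illustrative, not the actual talitos ones):

#include <linux/cache.h>
#include <linux/spinlock.h>

struct talitos_channel {
        /* fifo head index and its lock get a cache line of their own */
        spinlock_t head_lock ____cacheline_aligned;
        int head;

        /* likewise for the tail side, so head and tail updates on
         * different CPUs don't bounce the same cache line */
        spinlock_t tail_lock ____cacheline_aligned;
        int tail;
};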
Signed-off-by: Kim Phillips <kim.phillips@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
don't do request->src vs. assoc pointer math - it's the same as adding
assoclen and ivsize (just with more effort).
Signed-off-by: Kim Phillips <kim.phillips@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This adds support for Marvell's Cryptographic Engines and Security
Accelerator (CESA), which can be found on a few SoCs.
Tested with dm-crypt.
Acked-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Sebastian Andrzej Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When we encounter partial blocks in finup, we'll invoke the xsha
instruction with a bogus count that is not a multiple of the block
size. This patch fixes it.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The previous change to allow hashing from states other than the
initial state broke compilation on i386 because the inline assembly
tried to squeeze a u64 into a 32-bit register. As we've already
checked for 32-bit overflows, we can simply truncate it to u32 - or
to unsigned long, so that we don't truncate at all on x86-64.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The crypto4xx SHA implementation keeps the hash state in the tfm
data structure. This breaks a fundamental requirement of ahash
implementations that they must be reentrant.
This patch disables the broken implementation.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch changes crypto4xx to use the new style ahash type.
In particular, we now use ahash_alg to define ahash algorithms
instead of crypto_alg.
This is achieved by introducing a union that encapsulates the
new type and the existing crypto_alg structure. They're told
apart through a u32 field containing the type value.
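A sketch of what such a wrapper can look like (names are illustrative,
not necessarily the exact crypto4xx ones):

#include <linux/crypto.h>
#include <crypto/internal/hash.h>

struct crypto4xx_alg_common {
        u32 type;                       /* CRYPTO_ALG_TYPE_* of the entry in use */
        union {
                struct crypto_alg cipher;       /* legacy crypto_alg registration */
                struct ahash_alg hash;          /* new-style ahash registration */
        } u;
};

/* registration then dispatches on ->type, calling crypto_register_alg()
 * for the former and crypto_register_ahash() for the latter */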
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch makes crypto4xx use crypto_ahash_set_reqsize to avoid
accessing crypto_ahash directly.
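The usual pattern is to set the request size from the tfm init hook; a
sketch (the init function and request-context struct are hypothetical):

#include <crypto/internal/hash.h>

struct example_hash_req_ctx {
        u8 partial[64];         /* whatever per-request state the driver needs */
};

static int example_hash_cra_init(struct crypto_tfm *tfm)
{
        /* let the ahash core reserve our per-request context instead of
         * writing to crypto_ahash fields by hand */
        crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm),
                                 sizeof(struct example_hash_req_ctx));
        return 0;
}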
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch converts the padlock-sha implementation to shash.
In doing so the existing mechanism of storing the data until
final is no longer viable as we do not have a way of allocating
data in crypto_shash_init and then reliably freeing it.
This is just as well because a better way of handling the problem
is to hash everything but the last chunk using normal sha code
and then provide the intermediate result to the padlock device.
This is good enough because the primary application of padlock-sha
is IPsec and there the data is laid out in the form of an hmac
header followed by the rest of the packet. In essence we can
provide all the data to the padlock as the hmac header only needs
to be hashed once.
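In shash terms the update path simply feeds a software fallback, and
only the final chunk plus the fallback's intermediate state goes to the
padlock unit; roughly (a sketch, not the exact driver code):

#include <crypto/internal/hash.h>

struct padlock_sha_desc {
        struct shash_desc fallback;
};

static int padlock_sha_update(struct shash_desc *desc,
                              const u8 *data, unsigned int length)
{
        struct padlock_sha_desc *dctx = shash_desc_ctx(desc);

        /* everything except the last chunk is hashed by the fallback;
         * finup/final later export its partial state and hand that,
         * together with the final data, to the xsha instruction */
        return crypto_shash_update(&dctx->fallback, data, length);
}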
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Extend the previous workarounds for the prefetch bug to cover CBC mode,
and clean up the code a bit.
Signed-off-by: Chuck Ebbert <cebbert@redhat.com>
Acked-by: Harald Welte <HaraldWelte@viatech.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The VIA Nano processor has a bug that makes it prefetch extra data
during encryption operations, causing spurious page faults. Extend the
existing workarounds for ECB mode to copy the data to a temporary
buffer to avoid the problem.
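The workaround amounts to running the instruction on a private, padded
copy of the data; schematically (the sizing macros, struct cword and the
rep_xcrypt_ecb() helper stand in for whatever the driver really uses,
and count is assumed to fit the copy buffer):

#include <linux/kernel.h>
#include <linux/string.h>
#include <crypto/aes.h>

#define ECB_FETCH_BLOCKS        8       /* illustrative prefetch window */
#define PADLOCK_ALIGNMENT       16

struct cword;                           /* the driver's control word type */

/* raw "rep xcrypt-ecb" wrapper provided elsewhere in the driver */
void rep_xcrypt_ecb(const u8 *in, u8 *out, u32 *key,
                    struct cword *cword, int count);

static void ecb_crypt_copy(const u8 *in, u8 *out, u32 *key,
                           struct cword *cword, int count)
{
        /* room for the data plus the blocks the engine may prefetch past
         * the end, so the prefetch can never cross into an unmapped page */
        u8 buf[AES_BLOCK_SIZE * 2 * ECB_FETCH_BLOCKS + PADLOCK_ALIGNMENT - 1];
        u8 *tmp = PTR_ALIGN(&buf[0], PADLOCK_ALIGNMENT);

        memcpy(tmp, in, count * AES_BLOCK_SIZE);
        rep_xcrypt_ecb(tmp, tmp, key, cword, count);
        memcpy(out, tmp, count * AES_BLOCK_SIZE);
}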
Signed-off-by: Chuck Ebbert <cebbert@redhat.com>
Acked-by: Harald Welte <HaraldWelte@viatech.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
.ko is normally not included in Kconfig help; make it consistent.
Signed-off-by: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
The remove member of the pci_driver hifn_pci_driver uses __devexit_p(),
so the remove function itself should be marked with __devexit. And where
the remove is marked __devexit, the probe should be marked __devinit.
Similarly, the module_init/module_exit functions should be declared with
plain __init/__exit markings, not the hotplug __dev{init,exit} ones.
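For reference, the resulting annotation scheme looks roughly like this
(a sketch based on the hifn driver's entry points, not a verbatim excerpt):

#include <linux/init.h>
#include <linux/module.h>
#include <linux/pci.h>

static int __devinit hifn_probe(struct pci_dev *pdev,
                                const struct pci_device_id *id)
{
        return 0;               /* the real probe sets up the device */
}

static void __devexit hifn_remove(struct pci_dev *pdev)
{
        /* the real remove tears the device down */
}

static struct pci_driver hifn_pci_driver = {
        .name   = "hifn795x",
        .probe  = hifn_probe,
        .remove = __devexit_p(hifn_remove),
};

static int __init hifn_init(void)
{
        return pci_register_driver(&hifn_pci_driver);
}

static void __exit hifn_fini(void)
{
        pci_unregister_driver(&hifn_pci_driver);
}

module_init(hifn_init);
module_exit(hifn_fini);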
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
Acked-by: Evgeniy Polyakov <zbr@ioremap.net>
CC: Patrick McHardy <kaber@trash.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When we added 64-bit support to padlock the dependency on x86
was lost. This causes build failures on non-x86 architectures.
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Almost everything stays the same; we just need to use the extended
registers in the 64-bit variant.
Signed-off-by: Sebastian Andrzej Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
the ICV check bit only gets set in decrypt entry points
Signed-off-by: Kim Phillips <kim.phillips@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add these ablkcipher algorithms:
cbc(aes),
cbc(des3_ede).
Also add handling of chained scatterlists with zero-length entries,
because eseqiv uses them, along with new map and unmap routines.
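The chained-list walk is conceptually along these lines (a hypothetical
helper, not the actual talitos routine):

#include <linux/scatterlist.h>

static int sg_count_chained(struct scatterlist *sg, int nbytes)
{
        int n = 0;

        /* walk with sg_next() so chained lists are followed; eseqiv can
         * hand us zero-length entries, which simply contribute nothing */
        while (sg && nbytes > 0) {
                nbytes -= sg->length;
                n++;
                sg = sg_next(sg);
        }
        return n;
}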
Signed-off-by: Lee Nipper <lee.nipper@gmail.com>
Signed-off-by: Kim Phillips <kim.phillips@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch is preparation for adding new algorithm types.
Some AEAD-specific elements were renamed.
The algorithm template structure was changed to
use crypto_alg, and talitos_alg_alloc was made
more general with respect to algorithm types.
ipsec_esp_edesc is renamed to talitos_edesc
so that it can be used in the upcoming ablkcipher routines.
Signed-off-by: Lee Nipper <lee.nipper@gmail.com>
Signed-off-by: Kim Phillips <kim.phillips@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: padlock - Revert aes-all alias to aes
crypto: api - Fix algorithm module auto-loading
crypto: eseqiv - Fix IV generation for sync algorithms
crypto: ixp4xx - check firmware for crypto support
Since the padlock-aes driver doesn't require a fallback (it's
only padlock-sha that does), it should use the aes alias rather
than aes-all, so that drivers that do need a fallback can use it.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
- The loaded firmware may not support crypto at all, may support
  only DES and 3DES but not AES, or may support DES, 3DES and AES.
- If the firmware has no crypto support, the module load will fail.
- If AES support is missing, the AES algorithms are not registered
  and a warning is printed during module load.
Signed-off-by: Christian Hohnstaedt <chohnstaedt@innominate.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Replace all uses of the DMA_32BIT_MASK macro with DMA_BIT_MASK(32).
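The substitution is purely mechanical; for example, in a driver's
probe routine:

        /* before */
        err = pci_set_dma_mask(pdev, DMA_32BIT_MASK);

        /* after: DMA_BIT_MASK(32) expands to the same 0x00000000ffffffff mask */
        err = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));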
Signed-off-by: Yang Hongyang<yanghy@cn.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: ixp4xx - Fix handling of chained sg buffers
crypto: shash - Fix unaligned calculation with short length
hwrng: timeriomem - Use phys address rather than virt
It is a fairly common operation to have a pointer to a work and to need a
pointer to the delayed work it is contained in. In particular, all
delayed works which want to rearm themselves will have to do that. So it
would seem fair to offer a helper function for this operation.
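Such a helper is just a container_of() wrapper, e.g.:

#include <linux/kernel.h>
#include <linux/workqueue.h>

static inline struct delayed_work *to_delayed_work(struct work_struct *work)
{
        return container_of(work, struct delayed_work, work);
}

/* typical use in a work function that rearms itself
 * (my_work_fn is a made-up example) */
static void my_work_fn(struct work_struct *work)
{
        struct delayed_work *dwork = to_delayed_work(work);

        /* ... do the actual work ... */

        schedule_delayed_work(dwork, HZ);
}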
[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Acked-by: Ingo Molnar <mingo@elte.hu>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Greg KH <greg@kroah.com>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
- Keep DMA functions away from chained scatterlists:
  use the existing scatterlist iteration inside the driver
  to call dma_map_single() for each chunk and avoid dma_map_sg().
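Sketched out, the per-chunk mapping looks something like this (a
hypothetical helper; the real driver folds it into its existing walk):

#include <linux/dma-mapping.h>
#include <linux/kernel.h>
#include <linux/scatterlist.h>

static int map_chunks(struct device *dev, struct scatterlist *sg,
                      unsigned int nbytes, enum dma_data_direction dir)
{
        int n = 0;

        for (; sg && nbytes; sg = sg_next(sg)) {
                unsigned int len = min(nbytes, sg->length);

                /* one dma_map_single() per chunk; dma_map_sg() cannot
                 * be trusted with a chained list here */
                sg->dma_address = dma_map_single(dev, sg_virt(sg), len, dir);
                nbytes -= len;
                n++;
        }
        return n;
}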
Signed-off-by: Christian Hohnstaedt <chohnstaedt@innominate.com>
Tested-By: Karl Hiramoto <karl@hiramoto.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (29 commits)
crypto: sha512-s390 - Add missing block size
hwrng: timeriomem - Breaks an allyesconfig build on s390:
nlattr: Fix build error with NET off
crypto: testmgr - add zlib test
crypto: zlib - New zlib crypto module, using pcomp
crypto: testmgr - Add support for the pcomp interface
crypto: compress - Add pcomp interface
netlink: Move netlink attribute parsing support to lib
crypto: Fix dead links
hwrng: timeriomem - New driver
crypto: chainiv - Use kcrypto_wq instead of keventd_wq
crypto: cryptd - Per-CPU thread implementation based on kcrypto_wq
crypto: api - Use dedicated workqueue for crypto subsystem
crypto: testmgr - Test skciphers with no IVs
crypto: aead - Avoid infinite loop when nivaead fails selftest
crypto: skcipher - Avoid infinite loop when cipher fails selftest
crypto: api - Fix crypto_alloc_tfm/create_create_tfm return convention
crypto: api - crypto_alg_mod_lookup either tested or untested
crypto: amcc - Add crypt4xx driver
crypto: ansi_cprng - Add maintainer
...
There is another user of the IXP4xx queue manager; fix it.
Signed-off-by: Krzysztof Hałasa <khc@pm.waw.pl>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
With the mandatory algorithm testing at registration, we have
now created a deadlock with algorithms requiring fallbacks.
This can happen if the module containing the algorithm that requires
a fallback is loaded before the fallback module itself. The system
will then try to test the new algorithm, find that it needs to load
a fallback, and then try to load that.
As both algorithms share the same module alias, it can attempt
to load the original algorithm again and block indefinitely.
As algorithms requiring fallbacks are a special case, we can fix
this by giving them a different module alias than the rest. Then
it's just a matter of using the right aliases according to what
algorithms we're trying to find.
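In module terms the split looks something like this (the alias
spellings here are illustrative):

#include <linux/module.h>

/* in a self-contained driver, which may be loaded to serve as someone
 * else's fallback: */
MODULE_ALIAS("aes");

/* in a driver that itself needs a software fallback - the distinct
 * alias means resolving that fallback can never loop back into this
 * same module: */
MODULE_ALIAS("aes-all");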
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds support for the AMCC ppc4xx security device driver. This is
the initial release, which includes the driver framework with AES and SHA1
algorithm support.
The remaining algorithms will be released in the near future.
Signed-off-by: James Hsiao <jhsiao@amcc.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch converts the S390 sha algorithms to the new shash interface.
With fixes by Jan Glauber.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
A previous commit for interrupt mitigation moved the done interrupt
acknowledgement from the ISR to the talitos_done tasklet.
This patch moves the done interrupt acknowledgement back
into the ISR so that done interrupts are always acknowledged.
This covers the case where channel done processing has already been
completed by the tasklet before a pending interrupt is fielded.
Signed-off-by: Lee Nipper <lee.nipper@freescale.com>
Signed-off-by: Kim Phillips <kim.phillips@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Base versions handle constant folding just fine.
Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use KM_SOFTIRQ instead of KM_IRQ in tasklet context.
Add a BUG_ON for the input no-page condition.
Signed-off-by: Evgeniy Polyakov <zbr@ioremap.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fix queue management: change the ring size, and perform the ring check
using stored pointers to the last checked descriptors rather than
walking descriptors one after another.
Signed-off-by: Evgeniy Polyakov <zbr@ioremap.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
HIFN uses the transform context to store per-request data, which breaks
when more than one request is outstanding. Move per request members from
struct hifn_context to a new struct hifn_request_context and convert
the code to use this.
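Schematically, the per-request state moves into a context that each
request carries (field names here are illustrative; cra_init sets
tfm->crt_ablkcipher.reqsize to sizeof(struct hifn_request_context)):

#include <linux/crypto.h>
#include <crypto/algapi.h>

struct hifn_context;            /* the driver's existing per-tfm context */

struct hifn_request_context {
        u8 *iv;
        unsigned int ivsize;
        /* ... plus the per-request sg walk / DMA bookkeeping ... */
};

static int hifn_handle_req(struct ablkcipher_request *req)
{
        /* per-tfm data such as the key stays in struct hifn_context */
        struct hifn_context *ctx = crypto_tfm_ctx(req->base.tfm);
        /* everything tied to this particular request lives here, so
         * concurrent requests no longer clobber each other */
        struct hifn_request_context *rctx = ablkcipher_request_ctx(req);

        rctx->iv = req->info;
        rctx->ivsize = crypto_ablkcipher_ivsize(crypto_ablkcipher_reqtfm(req));

        /* ... build and queue the descriptor from ctx + rctx ... */
        return 0;
}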
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Evgeniy Polyakov <zbr@ioremap.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Resetting the control word is quite expensive. Fortunately this
isn't an issue for the common operations such as CBC and ECB as
the whole operation is done through a single call. However, modes
such as LRW and XTS have to call padlock over and over again for
one operation which really hurts if each call resets the control
word.
This patch uses an idea by Sebastian Siewior to store the last
control word used on a CPU and only reset the control word if
that changes.
Note that any task switch automatically resets the control word
so we only need to be accurate with regard to the stored control
word when no task switches occur.
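The gist of the trick (close to, though not necessarily identical to,
the final code; struct cword is the driver's control word type):

#include <linux/percpu.h>
#include <linux/smp.h>

struct cword;

static DEFINE_PER_CPU(struct cword *, paes_last_cword);

static void padlock_reset_key(struct cword *cword)
{
        int cpu = raw_smp_processor_id();

        /* only force a reload if this CPU last used a different control word */
        if (cword != per_cpu(paes_last_cword, cpu))
#ifndef CONFIG_X86_64
                asm volatile ("pushfl; popfl");
#else
                asm volatile ("pushfq; popfq");
#endif
}

static void padlock_store_cword(struct cword *cword)
{
        /* a task switch resets the loaded key anyway, so tracking the
         * last control word per CPU is all the accuracy we need */
        per_cpu(paes_last_cword, raw_smp_processor_id()) = cword;
}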
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In commit ec6644d632 ("crypto: talitos - Preempt
overflow interrupts"), the author interpreted the test in
atomic_inc_not_zero as being applied after the increment operation,
when it is in fact applied before. This off-by-one fix prevents
overflow error interrupts from occurring when requests are frequent
and large enough to trigger them.
Signed-off-by: Vishnu Suresh <Vishnu@freescale.com>
Signed-off-by: Kim Phillips <kim.phillips@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
SEC versions 2.1 and above add the capability to do the IPsec ICV
memcmp in h/w. Results of the compare are written back into the
descriptor header, along with the done status. A new callback is added
that checks these ICCR bits instead of performing the memcmp on the
core, and it is enabled based on h/w capability.
Signed-off-by: Kim Phillips <kim.phillips@freescale.com>
After testing on different parts, another condition was added
before using the h/w auth check, because different
SEC revisions require different handling.
The SEC 3.0 allows a more flexible link table in which
the auth data can span separate link table entries;
the SEC 2.4/2.1 does not support this case.
So a test was added in the decrypt routine
for the fragmented case: the h/w auth check is disallowed for
revisions that lack the extent field in the link table,
and in that case the auth check is done in software.
A portion of a previous change for SEC 3.0 link table handling
was removed since it became dead code once the h/w auth check
was supported.
This seems to be the best compromise for using the h/w auth check
on the SEC revisions that support it; it keeps the link table logic
simpler for the fragmented cases.
Signed-off-by: Lee Nipper <lee.nipper@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In talitos_interrupt, on a done interrupt, mask further done interrupts
and ack only any error interrupt.
In talitos_done, unmask done interrupts after completing processing.
In flush_channel, ack each done channel processed.
Keep done overflow interrupts masked because, even though each packet
is acked, a few done overflows still occur.
Signed-off-by: Lee Nipper <lee.nipper@freescale.com>
Signed-off-by: Kim Phillips <kim.phillips@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Since we ack early, the re-read interrupt status in talitos_error
may already have been updated with a new value. Pass the error ISR value
directly in order to report and handle the error based on the correct
error status.
Also remove the unused error tasklet.
Signed-off-by: Kim Phillips <kim.phillips@freescale.com>
Signed-off-by: Lee Nipper <lee.nipper@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
On Tue, Sep 23, 2008 at 08:06:32PM +0200, Dimitri Puzin (max@psycast.de) wrote:
> With this patch applied it still doesn't work as expected. The overflow
> messages are gone however syslog shows
> [ 120.924266] hifn0: abort: c: 0, s: 1, d: 0, r: 0.
> when doing cryptsetup luksFormat as in original e-mail. At this point
> cryptsetup hangs and can't be killed with -SIGKILL. I've attached
> SysRq-t dump of this condition.
Yes, I was wrong with the patch: HIFN does not support 64-bit addresses
afaics.
The attached patch prevents HIFN from being registered on 64-bit arches,
so the crypto layer will fall back to the software algorithms.
Signed-off-by: Evgeniy Polyakov <johnpol@2ka.mipt.ru>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>