In support of inter-channel chaining, async_tx utilizes an ack flag to
gate whether a dependent operation can be chained to another. While the
flag is not set, the chain can be considered open for appending. Setting
the ack flag closes the chain and flags the descriptor for garbage
collection. The ASYNC_TX_DEP_ACK flag essentially means "close the
chain after adding this dependency". Since each operation can only have
one child, the api now implicitly sets the ack flag at dependency
submission time. This removes an unnecessary management burden from
clients of the api.
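As a rough before/after illustration (not from the patch itself, and
using the async_memcpy() signature of this era), a client chaining an
operation onto depend_tx previously passed ASYNC_TX_DEP_ACK and can now
simply drop it, as the api acks depend_tx when the dependency is
submitted:

  -	tx = async_memcpy(dest, src, 0, 0, len,
  -			  ASYNC_TX_DEP_ACK, depend_tx, cb_fn, cb_param);
  +	tx = async_memcpy(dest, src, 0, 0, len,
  +			  0, depend_tx, cb_fn, cb_param);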
[ Impact: clean up and enforce one dependency per operation ]
Reviewed-by: Andre Noll <maan@systemlinux.org>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Besides, for the old code, gcc-4.3.3 produced this warning:
"format not a string literal and no format arguments"
Signed-off-by: Alex Riesen <raa.lkml@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As it stands we will test each hash vector both linearly and as
a scatter list if applicable. This means that we cannot have
vectors longer than a page, even with scatter lists.
This patch fixes this by skipping test vectors with np != 0 when
testing linearly.
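A minimal sketch of the skip, assuming testmgr's hash template layout
(np being the number of scatter-list pieces of a vector):

  for (i = 0; i < tcount; i++) {
          if (template[i].np)
                  continue;  /* covered by the scatter-list pass instead */
          /* ... linear test of template[i] ... */
  }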
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As we cannot guarantee the availability of contiguous pages at
run-time, all test vectors must either fit within a page, or use
scatter lists. In some cases vectors were not checked as to
whether they fit inside a page. This patch adds all the missing
checks.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
At present, the tcrypt module always exits with -EAGAIN upon
successfully completing all the tests it's been asked to run. In fips
mode, integrity checking is done by running all self-tests from the
initrd, and it's much simpler to check the ret from modprobe for
success than to scrape dmesg and/or /proc/crypto. Simply stay
loaded, giving modprobe a retval of 0, if self-tests all pass and
we're in fips mode.
A side-effect of tracking success/failure for fips mode is that in
non-fips mode, self-test failures will return the actual failure
return codes, rather than always returning -EAGAIN, which seems more
correct anyway.
The tcrypt_test() portion of the patch is dependent on my earlier
pair of patches that skip non-fips algs in fips mode, at least to
achieve the fully intended behavior.
Nb: testing this patch against the cryptodev tree revealed a test
failure for sha384, which I have yet to look into...
Signed-off-by: Jarod Wilson <jarod@redhat.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
If crypto_{,de}compress_{update,final}() succeed, return the actual number of
bytes produced instead of zero, so their users don't have to calculate that
themselves.
Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Because all fips-allowed algorithms must be self-tested before they
can be used, they will all have entries in testmgr.c's alg_test_descs[].
Skip self-tests for any algs not flagged as fips_approved and return
-EINVAL when in fips mode.
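A simplified sketch of the gate in alg_test(), assuming the fips_allowed
flag introduced in the companion patch:

  if (fips_enabled && !alg_test_descs[i].fips_allowed)
          return -EINVAL;  /* alg not approved for fips mode */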
Signed-off-by: Jarod Wilson <jarod@redhat.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Set the fips_allowed flag in testmgr.c's alg_test_descs[] for algs
that are allowed to be used when in fips mode.
One caveat: des isn't actually allowed anymore, but des (and thus also
ecb(des)) has to be permitted, because disallowing them results in
des3_ede being unable to properly register (see des module init func).
Also, crc32 isn't technically on the fips approved list, but I think
it gets used in various places that necessitate it being allowed.
This list is based on
http://csrc.nist.gov/groups/STM/cavp/index.html
Important note: allowed/approved here does NOT mean "validated", just
that it's an alg that *could* be validated.
Signed-off-by: Jarod Wilson <jarod@redhat.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now with multi-block test vectors, all from SP800-38A, Appendix F.5.
Also added ctr(aes) to case 10 in tcrypt.
Signed-off-by: Jarod Wilson <jarod@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
We currently allocate temporary memory that is used for testing
statically. This renders the testing engine non-reentrant. As
algorithms may nest, i.e., one may construct another in order to
carry out a part of its operation, this is unacceptable. For
example, it has been reported that an AEAD implementation allocates
a cipher in its setkey function, which causes it to fail during
testing as the temporary memory is overwritten.
This patch replaces the static memory with dynamically allocated
buffers. We need a maximum of 16 pages so this slightly increases
the chances of an algorithm failing due to memory shortage.
However, as testing usually occurs at registration, this shouldn't
be a big problem.
Reported-by: Shasi Pulijala <spulijala@amcc.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
According to our FIPS CAVS testing lab guru, when we're in fips mode,
we must print out notices of successful self-test completion for
every alg to be compliant.
New and improved v2, without strncmp crap. Doesn't need to touch a flag
though, due to not moving the notest label around anymore.
Applies atop '[PATCH v2] crypto: catch base cipher self-test failures
in fips mode'.
Personally, I wouldn't mind seeing this info printed out regardless of
whether or not we're in fips mode, I think it's useful info, but will
stick with only in fips mode for now.
Signed-off-by: Jarod Wilson <jarod@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add ANSI X9.31 Continuous Pseudo-Random Number Generator (AES mode),
aka 'ansi_cprng' test vectors, taken from Appendix B.2.9 and B.2.10
of the NIST RNGVS document, found here:
http://csrc.nist.gov/groups/STM/cavp/documents/rng/RNGVS.pdf
Successfully tested against both the cryptodev-2.6 tree and a Red
Hat Enterprise Linux 5.4 kernel, via 'modprobe tcrypt mode=150'.
The selection of 150 was semi-arbitrary; it didn't seem like it should
go any place in particular, so I started a new range for rng tests.
Signed-off-by: Jarod Wilson <jarod@redhat.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add some necessary infrastructure to make it possible to run
self-tests for ansi_cprng. The bits are likely very specific
to the ANSI X9.31 CPRNG in AES mode, and thus perhaps should
be named more specifically if/when we grow additional CPRNG
support...
Successfully tested against the cryptodev-2.6 tree and a
Red Hat Enterprise Linux 5.x kernel with the follow-on
patch that adds the actual test vectors.
Signed-off-by: Jarod Wilson <jarod@redhat.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add an array of encryption and decryption + verification self-tests
for rfc4309(ccm(aes)).
Test vectors all come from sample FIPS CAVS files provided to
Red Hat by a testing lab. Unfortunately, all the published sample
vectors in RFC 3610 and NIST Special Publication 800-38C contain nonce
lengths that the kernel's rfc4309 implementation doesn't support, so
while using some public domain vectors would have been preferred, it's
not possible at this time.
Signed-off-by: Jarod Wilson <jarod@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add infrastructure to tcrypt/testmgr to support handling ccm decryption
test vectors that are expected to fail verification.
Signed-off-by: Jarod Wilson <jarod@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
make C=1:
| crypto/pcompress.c:77:5: warning: symbol 'crypto_register_pcomp' was not declared. Should it be static?
| crypto/pcompress.c:89:5: warning: symbol 'crypto_unregister_pcomp' was not declared. Should it be static?
Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Because the kernel_fpu_begin() and kernel_fpu_end() operations are too
slow, the performance gain of the general mode implementation + aes-aesni
is almost entirely negated.
The AES-NI support for more modes is implemented as follows:
- Add a new AES algorithm implementation named __aes-aesni without
kernel_fpu_begin/end()
- Use fpu(<mode>(AES)) to provide kernel_fpu_begin/end() invoking
- Add <mode>(AES) ablkcipher, which uses cryptd(fpu(<mode>(AES))) to
defer the en/decryption to cryptd context when in soft_irq context.
Now ctr, lrw, pcbc and xts support is added.
Performance testing based on dm-crypt shows that en/decryption time can
be reduced to 50% of that of the general mode implementation + aes-aesni.
Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Blkcipher code that touches the FPU needs to be enclosed by
kernel_fpu_begin() and kernel_fpu_end(). If these are invoked in the
cipher algorithm implementation, they are invoked for each block, which
hurts performance, because they are "slow" operations. This patch
implements an "fpu" template, which makes these operations be invoked
only once per request.
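A hedged sketch of the template's encrypt path (the decrypt path is
symmetric; error handling and instance plumbing elided):

  static int crypto_fpu_encrypt(struct blkcipher_desc *desc_in,
                                struct scatterlist *dst,
                                struct scatterlist *src, unsigned int nbytes)
  {
          struct crypto_fpu_ctx *ctx = crypto_blkcipher_ctx(desc_in->tfm);
          struct blkcipher_desc desc = {
                  .tfm = ctx->child,
                  .info = desc_in->info,
                  .flags = desc_in->flags,
          };
          int err;

          kernel_fpu_begin();  /* once per request, not once per block */
          err = crypto_blkcipher_crt(desc.tfm)->encrypt(&desc, dst, src, nbytes);
          kernel_fpu_end();
          return err;
  }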
Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use crypto_alloc_base() instead of crypto_alloc_ablkcipher() to
allocate the underlying tfm in cryptd_alloc_ablkcipher, because
crypto_alloc_ablkcipher() prefers a GENIV-encapsulated algorithm over
the raw one, while cryptd_alloc_ablkcipher needs the raw one.
Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use kzfree() instead of memset() + kfree().
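The conversion is of this form (illustrative):

  -	memset(ctx, 0, sizeof(*ctx));
  -	kfree(ctx);
  +	kzfree(ctx);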
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Pekka Enberg <penberg@cs.helsinki.fi>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Apply kernel janitors' todos (printk calls need KERN_*
constants at line beginnings; reduce stack footprint where
possible) to tcrypt's test_hash_speed, whose stack
memory footprint was very high (reduced from 1184 bytes
to 160 on i386).
Signed-off-by: Frank Seidel <frank@f-seidel.de>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
A quirk that we've always supported is having an sg entry that's
bigger than a page, or more generally an sg entry that crosses
page boundaries. Even though it would be better to explicitly have
two sg entries for this, we need to support it for the existing users,
in particular, IPsec.
The new ahash sg walking code did try to handle this, but there was
a bug where we didn't increment the page, so we kept on walking over the
first page over and over again.
This patch fixes it.
Tested-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: padlock - Revert aes-all alias to aes
crypto: api - Fix algorithm module auto-loading
crypto: eseqiv - Fix IV generation for sync algorithms
crypto: ixp4xx - check firmware for crypto support
The commit a760a6656e (crypto:
api - Fix module load deadlock with fallback algorithms) broke
the auto-loading of algorithms that require fallbacks. The
problem is that the fallback mask check is missing an AND, which
allows bits that should be ignored to interfere with the
result.
Reported-by: Chuck Ebbert <cebbert@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
If crypto_ablkcipher_encrypt() returns synchronously,
eseqiv_complete2() is called even if req->giv is already the
pointer to the generated IV. The generated IV is overwritten
with some random data in this case. This patch fixes this by
calling eseqiv_complete2() only if the generated IV has to be
copied to req->giv.
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
'zero_sum' does not properly describe the operation of generating parity
and checking that it validates against an existing buffer. Change the
name of the operation to 'val' (for 'validate'). This is in
anticipation of the p+q case where it is a requirement to identify the
target parity buffers separately from the source buffers, because the
target parity buffers will not have corresponding pq coefficients.
Reviewed-by: Andre Noll <maan@systemlinux.org>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/async_tx:
dma: Add SoF and EoF debugging to ipu_idmac.c, minor cleanup
dw_dmac: add cyclic API to DW DMA driver
dmaengine: Add privatecnt to revert DMA_PRIVATE property
dmatest: add dma interrupts and callbacks
dmatest: add xor test
dmaengine: allow dma support for async_tx to be toggled
async_tx: provide __async_inline for HAS_DMA=n archs
dmaengine: kill some unused headers
dmaengine: initialize tx_list in dma_async_tx_descriptor_init
dma: i.MX31 IPU DMA robustness improvements
dma: improve section assignment in i.MX31 IPU DMA driver
dma: ipu_idmac driver cosmetic clean-up
dmaengine: fail device registration if channel registration fails
* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: ixp4xx - Fix handling of chained sg buffers
crypto: shash - Fix unaligned calculation with short length
hwrng: timeriomem - Use phys address rather than virt
* 'for-linus' of git://neil.brown.name/md: (53 commits)
md/raid5 revise rules for when to update metadata during reshape
md/raid5: minor code cleanups in make_request.
md: remove CONFIG_MD_RAID_RESHAPE config option.
md/raid5: be more careful about write ordering when reshaping.
md: don't display meaningless values in sysfs files resync_start and sync_speed
md/raid5: allow layout and chunksize to be changed on active array.
md/raid5: reshape using largest of old and new chunk size
md/raid5: prepare for allowing reshape to change layout
md/raid5: prepare for allowing reshape to change chunksize.
md/raid5: clearly differentiate 'before' and 'after' stripes during reshape.
Documentation/md.txt update
md: allow number of drives in raid5 to be reduced
md/raid5: change reshape-progress measurement to cope with reshaping backwards.
md: add explicit method to signal the end of a reshape.
md/raid5: enhance raid5_size to work correctly with negative delta_disks
md/raid5: drop qd_idx from r6_state
md/raid6: move raid6 data processing to raid6_pq.ko
md: raid5 run(): Fix max_degraded for raid level 4.
md: 'array_size' sysfs attribute
md: centralize ->array_sectors modifications
...
This makes the includes more explicit, and is preparation for moving
md_k.h to drivers/md/md.h
Remove include/raid/md.h as its only remaining use was to #include
other files.
Signed-off-by: NeilBrown <neilb@suse.de>
When the total length is shorter than the calculated number of unaligned bytes, the call to shash->update breaks. For example, calling crc32c on unaligned buffer with length of 1 can result in a system crash.
Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (29 commits)
crypto: sha512-s390 - Add missing block size
hwrng: timeriomem - Breaks an allyesconfig build on s390:
nlattr: Fix build error with NET off
crypto: testmgr - add zlib test
crypto: zlib - New zlib crypto module, using pcomp
crypto: testmgr - Add support for the pcomp interface
crypto: compress - Add pcomp interface
netlink: Move netlink attribute parsing support to lib
crypto: Fix dead links
hwrng: timeriomem - New driver
crypto: chainiv - Use kcrypto_wq instead of keventd_wq
crypto: cryptd - Per-CPU thread implementation based on kcrypto_wq
crypto: api - Use dedicated workqueue for crypto subsystem
crypto: testmgr - Test skciphers with no IVs
crypto: aead - Avoid infinite loop when nivaead fails selftest
crypto: skcipher - Avoid infinite loop when cipher fails selftest
crypto: api - Fix crypto_alloc_tfm/create_create_tfm return convention
crypto: api - crypto_alg_mod_lookup either tested or untested
crypto: amcc - Add crypt4xx driver
crypto: ansi_cprng - Add maintainer
...
To allow an async_tx routine to be compiled away on HAS_DMA=n archs, it
needs to be declared __always_inline; otherwise the compiler may emit
code and cause a link error.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
The current "comp" crypto interface supports one-shot (de)compression only,
i.e. the whole data buffer to be (de)compressed must be passed at once, and
the whole (de)compressed data buffer will be received at once.
In several use-cases (e.g. compressed file systems that store files in big
compressed blocks), this workflow is not suitable.
Furthermore, the "comp" type doesn't provide for the configuration of
(de)compression parameters, and always allocates workspace memory for both
compression and decompression, which may waste memory.
To solve this, add a "pcomp" partial (de)compression interface that provides
the following operations:
- crypto_compress_{init,update,final}() for compression,
- crypto_decompress_{init,update,final}() for decompression,
- crypto_{,de}compress_setup(), to configure (de)compression parameters
(incl. allocating workspace memory).
The (de)compression methods take a struct comp_request, which is modeled
after the z_stream object in zlib, and contains buffer pointer and length
pairs for input and output.
The setup methods take an opaque parameter pointer and length pair. Parameters
are supposed to be encoded using netlink attributes, whose meanings depend on
the actual (name of the) (de)compression algorithm.
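A hedged usage sketch of the compression side (error handling elided;
"zlib" and the params blob stand in for whatever the chosen algorithm
expects):

  struct crypto_pcomp *tfm = crypto_alloc_pcomp("zlib", 0, 0);
  struct comp_request req = {
          .next_in  = in_buf,  .avail_in  = in_len,
          .next_out = out_buf, .avail_out = out_len,
  };

  crypto_compress_setup(tfm, params, paramsize); /* netlink-encoded */
  crypto_compress_init(tfm);
  crypto_compress_update(tfm, &req);  /* consumes avail_in, fills next_out */
  crypto_compress_final(tfm, &req);
  crypto_free_pcomp(tfm);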
Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
With the mandatory algorithm testing at registration, we have
now created a deadlock with algorithms requiring fallbacks.
This can happen if the module containing the algorithm requiring
fallback is loaded first, without the fallback module being loaded
first. The system will then try to test the new algorithm, find
that it needs to load a fallback, and then try to load that.
As both algorithms share the same module alias, it can attempt
to load the original algorithm again and block indefinitely.
As algorithms requiring fallbacks are a special case, we can fix
this by giving them a different module alias than the rest. Then
it's just a matter of using the right aliases according to what
algorithms we're trying to find.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto_ahash_show is changed to use cra_ahash for the digestsize
reference.
Signed-off-by: Lee Nipper <lee.nipper@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
keventd_wq has a potential starvation problem, so use the dedicated
kcrypto_wq instead.
Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use dedicated workqueue for crypto subsystem
A dedicated workqueue named kcrypto_wq is created for use by the crypto
subsystem. The system-shared keventd_wq is not suitable for
encryption/decryption because of a potential starvation problem.
Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As it is, an skcipher with no IV escapes testing altogether because
we only test givcipher objects. This patch fixes the bypass logic
to test these algorithms.
Conversely, we're currently testing nivaead algorithms with IVs,
which would have deadlocked had it not been for the fact that no
nivaead algorithms have any test vectors. This patch also fixes
that case.
Both fixes are ugly as hell, but this ugliness should hopefully
disappear once we move them into the per-type code (i.e., the
AEAD test would live in aead.c and the skcipher stuff in ablkcipher.c).
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When an aead constructed through crypto_nivaead_default fails
its selftest, we'll loop forever trying to construct new aead
objects but failing because it already exists.
The crux of the issue is that once an aead fails the selftest,
we'll ignore it on the next run through crypto_aead_lookup and
attempt to construct a new aead.
We should instead return an error to the caller if we find an
aead that has failed the test.
This bug hasn't manifested itself yet because we don't have any
test vectors for the existing nivaead algorithms. They're tested
through the underlying algorithms only.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When an skcipher constructed through crypto_givcipher_default fails
its selftest, we'll loop forever trying to construct new skcipher
objects but failing because it already exists.
The crux of the issue is that once a givcipher fails the selftest,
we'll ignore it on the next run through crypto_skcipher_lookup and
attempt to construct a new givcipher.
We should instead return an error to the caller if we find a
givcipher that has failed the test.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This is based on a report and patch by Geert Uytterhoeven.
The functions crypto_alloc_tfm and crypto_create_tfm return a
pointer that needs to be adjusted by the caller when successful
and otherwise an error value. This means that the caller has
to check for the error and only perform the adjustment if the
pointer returned is valid.
Since all callers want to make the adjustment and we know how
to adjust it ourselves, it's much easier to just return the adjusted
pointer directly.
The only caveat is that we have to return a void * instead of
struct crypto_tfm *. However, this isn't that bad because both
of these functions are for internal use only (by types code like
shash.c, not even algorithms code).
This patch also moves crypto_alloc_tfm into crypto/internal.h
(crypto_create_tfm is already there) to reflect this.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As it stands crypto_alg_mod_lookup will search either tested or
untested algorithms, but never both at the same time. However,
we need exactly that when constructing givcipher and aead so
this patch adds support for that by setting the tested bit in
type but clearing it in mask. This combination is currently
unused.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
FIPS 140-2 specifies that all access to various cryptographic modules be
prevented in the event that any of the provided self tests fail on the various
implemented algorithms. We already panic when any of the test in testmgr.c
fail when we are operating in fips mode. The continuous test in the cprng here
was missed when that was implemented. This code simply checks for the
fips_enabled flag if the test fails, and warns us via syslog or panics the box
accordingly.
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Pseudo RNGs provide predictable outputs based on input parameters {key, V, DT};
the idea behind them is that only the user should know what the inputs are.
While it's nice to have default known values for testing purposes, it seems
dangerous to allow the use of those default values without some sort of safety
measure in place, lest an attacker easily guess the output of the cprng. This
patch forces the NEED_RESET flag on when allocating a cprng context, so that any
user is forced to reseed it before use. The defaults can still be used for
testing, but this will prevent their inadvertent use, and be more secure.
Signed-off-by: Neil Horman <nhorman@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Intel AES-NI is a new set of Single Instruction Multiple Data (SIMD)
instructions that are going to be introduced in the next generation of
Intel processors, as of 2009. These instructions enable fast and secure
data encryption and decryption, using the Advanced Encryption Standard
(AES), defined by FIPS Publication number 197. The architecture
introduces six instructions that offer full hardware support for
AES. Four of them support high performance data encryption and
decryption, and the other two instructions support the AES key
expansion procedure.
The white paper can be downloaded from:
http://softwarecommunity.intel.com/isn/downloads/intelavx/AES-Instructions-Set_WP.pdf
AES may be used in soft_irq context, but MMX/SSE context can not be
touched safely in soft_irq context. So in_interrupt() is checked; if
in IRQ or soft_irq context, the general x86_64 implementation is used
instead.
Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
cryptd_alloc_ablkcipher() will allocate a cryptd-ed ablkcipher for the
specified algorithm name. The newly allocated one is guaranteed to be a
cryptd-ed ablkcipher, so the underlying blkcipher can be obtained via
cryptd_ablkcipher_child().
Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
We're currently checking the frontend type in init_tfm. This is
completely pointless because the fact that we're called at all
means that the frontend is ours so the type must match as well.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
It turns out that LRW has never worked properly on big endian.
This was never discussed because nobody actually used it that
way. In fact, it was only discovered when Geert Uytterhoeven
loaded it through tcrypt which failed the test on it.
The fix is straightforward: on big endian, to find the nth
bit we should be grouping them by words instead of bytes. So
setbit128_bbe should xor with 128 - BITS_PER_LONG instead of
128 - BITS_PER_BYTE == 0x78.
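A sketch of the fix as described:

  static inline void setbit128_bbe(void *b, int bit)
  {
          __set_bit(bit ^ (0x80 -
  #ifdef __BIG_ENDIAN
                           BITS_PER_LONG
  #else
                           BITS_PER_BYTE  /* 0x80 - 0x08 == 0x78 */
  #endif
                          ), b);
  }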
Tested-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
It's illegal to call flush_dcache_page on slab pages on a number
of architectures. So this patch avoids doing so if PageSlab is
true.
In future we can move the flush_dcache_page call to those page
cache users that actually need it.
Reported-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Geert Uytterhoeven pointed out that we're not zeroing all the
memory when freeing a transform. This patch fixes it by calling
ksize to ensure that we zero everything in sight.
Reported-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Module reference counting for shash is incorrect: when
a new shash transformation is created, the refcount is not
increased as it should be.
Signed-off-by: Adrian-Ken Rueegsegger <rueegsegger@swiss-it.ch>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When we complete a test we'll notify everyone waiting on it, drop
the mutex, and then remove the test larval (after reacquiring the
mutex). If one of the notified parties tries to register another
algorithm with the same driver name prior to the removal of the
test larval, they will fail with EEXIST as only one algorithm of
a given name can be tested at any time.
This broke the initialisation of aead and givcipher algorithms as
they will register two algorithms with the same driver name, in
sequence.
This patch fixes the problem by marking the larval as dead before
we drop the mutex, and also by ignoring all dead or dying algorithms
on the registration path.
Tested-by: Andreas Steffen <andreas.steffen@strongswan.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
It's a valid use case to have null associated data in a ccm vector, but
this case isn't being handled properly right now.
The following ccm decryption/verification test vector, using the
rfc4309 implementation, regularly triggers a panic, as will any
other vector with null assoc data:
* key: ab2f8a74b71cd2b1ff802e487d82f8b9
* iv: c6fb7d800d13abd8a6b2d8
* Associated Data: [NULL]
* Tag Length: 8
* input: d5e8939fc7892e2b
The resulting panic looks like so:
Unable to handle kernel paging request at ffff810064ddaec0 RIP:
[<ffffffff8864c4d7>] :ccm:get_data_to_compute+0x1a6/0x1d6
PGD 8063 PUD 0
Oops: 0002 [1] SMP
last sysfs file: /module/libata/version
CPU 0
Modules linked in: crypto_tester_kmod(U) seqiv krng ansi_cprng chainiv rng ctr aes_generic aes_x86_64 ccm cryptomgr testmgr_cipher testmgr aead crypto_blkcipher crypto_a
lgapi des ipv6 xfrm_nalgo crypto_api autofs4 hidp l2cap bluetooth nfs lockd fscache nfs_acl sunrpc ip_conntrack_netbios_ns ipt_REJECT xt_state ip_conntrack nfnetlink xt_
tcpudp iptable_filter ip_tables x_tables dm_mirror dm_log dm_multipath scsi_dh dm_mod video hwmon backlight sbs i2c_ec button battery asus_acpi acpi_memhotplug ac lp sg
snd_intel8x0 snd_ac97_codec ac97_bus snd_seq_dummy snd_seq_oss joydev snd_seq_midi_event snd_seq snd_seq_device snd_pcm_oss snd_mixer_oss ide_cd snd_pcm floppy parport_p
c shpchp e752x_edac snd_timer e1000 i2c_i801 edac_mc snd soundcore snd_page_alloc i2c_core cdrom parport serio_raw pcspkr ata_piix libata sd_mod scsi_mod ext3 jbd uhci_h
cd ohci_hcd ehci_hcd
Pid: 12844, comm: crypto-tester Tainted: G 2.6.18-128.el5.fips1 #1
RIP: 0010:[<ffffffff8864c4d7>] [<ffffffff8864c4d7>] :ccm:get_data_to_compute+0x1a6/0x1d6
RSP: 0018:ffff8100134434e8 EFLAGS: 00010246
RAX: 0000000000000000 RBX: ffff8100104898b0 RCX: ffffffffab6aea10
RDX: 0000000000000010 RSI: ffff8100104898c0 RDI: ffff810064ddaec0
RBP: 0000000000000000 R08: ffff8100104898b0 R09: 0000000000000000
R10: ffff8100103bac84 R11: ffff8100104898b0 R12: ffff810010489858
R13: ffff8100104898b0 R14: ffff8100103bac00 R15: 0000000000000000
FS: 00002ab881adfd30(0000) GS:ffffffff803ac000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: ffff810064ddaec0 CR3: 0000000012a88000 CR4: 00000000000006e0
Process crypto-tester (pid: 12844, threadinfo ffff810013442000, task ffff81003d165860)
Stack: ffff8100103bac00 ffff8100104898e8 ffff8100134436f8 ffffffff00000000
0000000000000000 ffff8100104898b0 0000000000000000 ffff810010489858
0000000000000000 ffff8100103bac00 ffff8100134436f8 ffffffff8864c634
Call Trace:
[<ffffffff8864c634>] :ccm:crypto_ccm_auth+0x12d/0x140
[<ffffffff8864cf73>] :ccm:crypto_ccm_decrypt+0x161/0x23a
[<ffffffff88633643>] :crypto_tester_kmod:cavs_test_rfc4309_ccm+0x4a5/0x559
[...]
The above is from a RHEL5-based kernel, but upstream is susceptible too.
The fix is trivial: in crypto/ccm.c:crypto_ccm_auth(), pctx->ilen contains
whatever was in memory when pctx was allocated if assoclen is 0. The tested
fix is to simply add an else clause setting pctx->ilen to 0 for the
assoclen == 0 case, so that get_data_to_compute() doesn't try doing
things it's not supposed to.
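A sketch of the described fix in crypto_ccm_auth() (surrounding code
paraphrased):

  if (assoclen) {
          pctx->ilen = format_adata(idata, assoclen);
          get_data_to_compute(cipher, pctx, req->assoc, req->assoclen);
  } else {
          pctx->ilen = 0;  /* was left uninitialized for null assoc data */
  }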
Signed-off-by: Jarod Wilson <jarod@redhat.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When we get left-over bits from a slow walk, it means that the
underlying cipher has gone troppo. However, as we're handling
that case we should ensure that the caller terminates the walk.
This patch does this by setting walk->nbytes to zero.
Reported-by: Roel Kluin <roel.kluin@gmail.com>
Reported-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As it is if an algorithm with a zero-length IV is used (e.g.,
NULL encryption) with authenc, authenc may generate an SG entry
of length zero, which will trigger a BUG check in the hash layer.
This patch fixes it by skipping the IV SG generation if the IV
size is zero.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that clients no longer need to be notified of channel arrival
dma_async_client_register can simply increment the dmaengine_ref_count.
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
async_tx and net_dma each have open-coded versions of issue_pending_all,
so provide a common routine in dmaengine.
The implementation needs to walk the global device list, so implement
rcu to allow dma_issue_pending_all to run lockless. Clients protect
themselves from channel removal events by holding a dmaengine reference.
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Allowing multiple clients to each define their own channel allocation
scheme quickly leads to a pathological situation. For memory-to-memory
offload all clients can share a central allocator.
This simply moves the existing async_tx allocator to dmaengine with
minimal fixups:
* async_tx.c:get_chan_ref_by_cap --> dmaengine.c:nth_chan
* async_tx.c:async_tx_rebalance --> dmaengine.c:dma_channel_rebalance
* split out common code from async_tx.c:__async_tx_find_channel -->
dma_find_channel
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Simply, if a client wants any dmaengine channel then prevent all dmaengine
modules from being removed. Once the clients are done re-enable module
removal.
Why? Beyond reducing complication:
1/ Tracking reference counts per-transaction in an efficient manner, as
is currently done, requires a complicated scheme to avoid cache-line
bouncing effects.
2/ Per-transaction ref-counting gives the false impression that a
dma-driver can be gracefully removed ahead of its user (net, md, or
dma-slave)
3/ None of the in-tree dma-drivers talk to hot pluggable hardware, but
if such an engine were built one day we still would not need to notify
clients of remove events. The driver can simply return NULL to a
->prep() request, something that is much easier for a client to handle.
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
async_tx.ko is a consumer of dma channels. A circular dependency arises
if modules in drivers/dma rely on common code in async_tx.ko. It
prevents either module from being unloaded.
Move dma_wait_for_async_tx and async_tx_run_dependencies to dmaengine.o
where they should have been from the beginning.
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
The tables used by the various AES algorithms are currently
computed at run-time. This has created an init ordering problem
because some AES algorithms may be registered before the tables
have been initialised.
This patch gets around this whole thing by precomputing the tables.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The comment for the deflate test vectors says the winbits parameter is 11,
while the deflate module actually uses -11 (a negative window bits parameter
enables the raw deflate format instead of the zlib format).
Correct this, to avoid confusion about the format used.
Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
ROTATE -> rol32
XOR was always used with the same destination, use ^=
PLUS/PLUSONE use ++ or +=
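Illustrative before/after for one quarter-round line:

  -	x[ 4] = XOR(x[ 4],ROTATE(PLUS(x[ 0],x[12]), 7));
  +	x[ 4] ^= rol32(x[ 0] + x[12], 7);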
Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
While it's slightly insane to bypass the key1 == key2 ||
key2 == key3 check in triple-des, since it reduces it to the
same strength as des, some folks do need to do this from time
to time for backwards compatibility with des.
My own case is FIPS CAVS test vectors. Many triple-des test
vectors use a single key, replicated 3x. In order to get the
expected results, des3_ede_setkey() needs to only reject weak
keys if the CRYPTO_TFM_REQ_WEAK_KEY flag is set.
Also sets a more appropriate RES flag when a weak key is found.
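A sketch of the gated check in des3_ede_setkey(), as described (with K
being the key viewed as 32-bit words):

  if (unlikely(!((K[0] ^ K[2]) | (K[1] ^ K[3])) ||
               !((K[2] ^ K[4]) | (K[3] ^ K[5]))) &&
      (*flags & CRYPTO_TFM_REQ_WEAK_KEY)) {
          *flags |= CRYPTO_TFM_RES_WEAK_KEY;
          return -EINVAL;
  }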
Signed-off-by: Jarod Wilson <jarod@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch changes sha512 and sha384 to the new shash interface.
Signed-off-by: Adrian-Ken Rueegsegger <ken@codelabs.ch>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The message schedule W (u64[80]) is too big for the stack. In order
for this algorithm to be used with shash it is moved to a static
percpu area.
Signed-off-by: Adrian-Ken Rueegsegger <ken@codelabs.ch>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch changes michael_mic to the new shash interface.
Signed-off-by: Adrian-Ken Rueegsegger <ken@codelabs.ch>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch changes wp512, wp384 and wp256 to the new shash interface.
Signed-off-by: Adrian-Ken Rueegsegger <ken@codelabs.ch>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch changes tgr192, tgr160 and tgr128 to the new shash interface.
Signed-off-by: Adrian-Ken Rueegsegger <ken@codelabs.ch>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch changes sha256 and sha224 to the new shash interface.
Signed-off-by: Adrian-Ken Rueegsegger <ken@codelabs.ch>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch changes md5 to the new shash interface.
Signed-off-by: Adrian-Ken Rueegsegger <ken@codelabs.ch>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch changes md4 to the new shash interface.
Signed-off-by: Adrian-Ken Rueegsegger <ken@codelabs.ch>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch changes sha1 to the new shash interface.
Signed-off-by: Adrian-Ken Rueegsegger <ken@codelabs.ch>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Since most cryptographic hash algorithms have no keys, this patch
makes the setkey function optional for ahash and shash.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When self-testing (de)compression algorithms, make sure the actual size of
the (de)compressed output data matches the expected output size.
Otherwise, in case the actual output size would be smaller than the expected
output size, the subsequent buffer compare test would still succeed, and no
error would be reported.
Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Base versions handle constant folding just fine.
Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This warning:
crypto/testmgr.c: In function ‘test_comp’:
crypto/testmgr.c:829: warning: ‘ret’ may be used uninitialized in this function
triggers because GCC correctly notices that in the ctcount == 0 &&
dtcount != 0 input condition case this function can return an undefined
value, if the second loop fails.
Remove the shadowed 'ret' variable from the second loop, which was
probably unintended.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The ANSI X9.31 PRNG docs aren't particularly clear on how to increment DT,
but empirical testing shows we're incrementing from the wrong end. A 10,000
iteration Monte Carlo RNG test currently winds up not getting the expected
result.
From http://csrc.nist.gov/groups/STM/cavp/documents/rng/RNGVS.pdf :
# CAVS 4.3
# ANSI931 MCT
[X9.31]
[AES 128-Key]
COUNT = 0
Key = 9f5b51200bf334b5d82be8c37255c848
DT = 6376bbe52902ba3b67c925fa701f11ac
V = 572c8e76872647977e74fbddc49501d1
R = 48e9bd0d06ee18fbe45790d5c3fc9b73
Currently, we get 0dd08496c4f7178bfa70a2161a79459a after 10000 loops.
Inverting the DT increment routine results in us obtaining the expected result
of 48e9bd0d06ee18fbe45790d5c3fc9b73. Verified on both x86_64 and ppc64.
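The corrected increment carries from the last byte of DT toward the
first (sketch):

  for (i = DEFAULT_BLK_SZ - 1; i >= 0; i--) {
          ctx->DT[i] += 1;
          if (ctx->DT[i] != 0)
                  break;  /* stop once there is no carry */
  }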
Signed-off-by: Jarod Wilson <jarod@redhat.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
While working with some FIPS RNGVS test vectors yesterday, I discovered a
little bug in the way the ansi_cprng code works right now.
For example, the following test vector (complete with expected result)
from http://csrc.nist.gov/groups/STM/cavp/documents/rng/RNGVS.pdf ...
Key = f3b1666d13607242ed061cabb8d46202
DT = e6b3be782a23fa62d71d4afbb0e922fc
V = f0000000000000000000000000000000
R = 88dda456302423e5f69da57e7b95c73a
...when run through ansi_cprng, yields an incorrect R value
of e2afe0d794120103d6e86a2b503bdfaa.
If I load up ansi_cprng w/dbg=1 though, it's fairly obvious what's
going wrong:
----8<----
getting 16 random bytes for context ffff810033fb2b10
Calling _get_more_prng_bytes for context ffff810033fb2b10
Input DT: 00000000: e6 b3 be 78 2a 23 fa 62 d7 1d 4a fb b0 e9 22 fc
Input I: 00000000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Input V: 00000000: f0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
tmp stage 0: 00000000: e6 b3 be 78 2a 23 fa 62 d7 1d 4a fb b0 e9 22 fc
tmp stage 1: 00000000: f4 8e cb 25 94 3e 8c 31 d6 14 cd 8a 23 f1 3f 84
tmp stage 2: 00000000: 8c 53 6f 73 a4 1a af d4 20 89 68 f4 58 64 f8 be
Returning new block for context ffff810033fb2b10
Output DT: 00000000: e7 b3 be 78 2a 23 fa 62 d7 1d 4a fb b0 e9 22 fc
Output I: 00000000: 04 8e cb 25 94 3e 8c 31 d6 14 cd 8a 23 f1 3f 84
Output V: 00000000: 48 89 3b 71 bc e4 00 b6 5e 21 ba 37 8a 0a d5 70
New Random Data: 00000000: 88 dd a4 56 30 24 23 e5 f6 9d a5 7e 7b 95 c7 3a
Calling _get_more_prng_bytes for context ffff810033fb2b10
Input DT: 00000000: e7 b3 be 78 2a 23 fa 62 d7 1d 4a fb b0 e9 22 fc
Input I: 00000000: 04 8e cb 25 94 3e 8c 31 d6 14 cd 8a 23 f1 3f 84
Input V: 00000000: 48 89 3b 71 bc e4 00 b6 5e 21 ba 37 8a 0a d5 70
tmp stage 0: 00000000: e7 b3 be 78 2a 23 fa 62 d7 1d 4a fb b0 e9 22 fc
tmp stage 1: 00000000: 80 6b 3a 8c 23 ae 8f 53 be 71 4c 16 fc 13 b2 ea
tmp stage 2: 00000000: 2a 4d e1 2a 0b 58 8e e6 36 b8 9c 0a 26 22 b8 30
Returning new block for context ffff810033fb2b10
Output DT: 00000000: e8 b3 be 78 2a 23 fa 62 d7 1d 4a fb b0 e9 22 fc
Output I: 00000000: c8 e2 01 fd 9f 4a 8f e5 e0 50 f6 21 76 19 67 9a
Output V: 00000000: ba 98 e3 75 c0 1b 81 8d 03 d6 f8 e2 0c c6 54 4b
New Random Data: 00000000: e2 af e0 d7 94 12 01 03 d6 e8 6a 2b 50 3b df aa
returning 16 from get_prng_bytes in context ffff810033fb2b10
----8<----
The expected result is there, in the first "New Random Data", but we're
incorrectly making a second call to _get_more_prng_bytes, due to some checks
that are slightly off, which resulted in our original bytes never being
returned anywhere.
One approach to fixing this would be to alter some byte_count checks in
get_prng_bytes, but it would mean the last DEFAULT_BLK_SZ bytes would be
copied a byte at a time, rather than in a single memcpy, so a slightly more
involved, equally functional, and ultimately more efficient way of fixing this
was suggested to me by Neil, which I'm submitting here. All of the RNGVS ANSI
X9.31 AES128 VST test vectors I've passed through ansi_cprng are now returning
the expected results with this change.
Signed-off-by: Jarod Wilson <jarod@redhat.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
ARRAY_SIZE is more concise to use when the size of an array is divided by
the size of its type or the size of its first element.
The semantic patch that makes this change is as follows:
(http://www.emn.fr/x-info/coccinelle/)
// <smpl>
@i@
@@
#include <linux/kernel.h>
@depends on i using "paren.iso"@
type T;
T[] E;
@@
- (sizeof(E)/sizeof(T))
+ ARRAY_SIZE(E)
// </smpl>
Signed-off-by: Julia Lawall <julia@diku.dk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch swaps the role of libcrc32c and crc32c. Previously
the implementation was in libcrc32c and crc32c was a wrapper.
Now the code is in crc32c and libcrc32c just calls the crypto
layer.
The reason for the change is to tap into the algorithm selection
capability of the crypto API so that optimised implementations
such as the one utilising Intel's CRC32C instruction can be
used where available.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds a test for the requirement that all crc32c algorithms
shall store the partial result in the first four bytes of the descriptor
context.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch allows shash algorithms to be used through the old hash
interface. This is a transitional measure so we can convert the
underlying algorithms to shash before converting the users across.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch makes /proc/crypto call the type-specific show function
if one is present before calling the legacy show functions for
cipher/digest/compress. This allows us to reuse the type values
for those legacy types. In particular, hash and digest will share
one type value while shash is phased in as the default hash type.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
It is often useful to save the partial state of a hash function
so that it can be used as a base for two or more computations.
The most prominent example is HMAC where all hashes start from
a base determined by the key. Having an import/export interface
means that we only have to compute that base once rather than
for each message.
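A hedged sketch of the intended pattern, in shash terms (the state
buffer is sized for the algorithm's exported state; error handling
elided):

  crypto_shash_setkey(tfm, key, keylen);
  crypto_shash_init(desc);
  crypto_shash_export(desc, state);   /* save the keyed base once */

  /* then, per message: */
  crypto_shash_import(desc, state);   /* restart from the saved base */
  crypto_shash_update(desc, msg, msglen);
  crypto_shash_final(desc, digest);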
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch allows shash algorithms to be used through the ahash
interface. This is required before we can convert digest algorithms
over to shash.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The shash interface replaces the current synchronous hash interface.
It improves over hash in two ways. Firstly shash is reentrant,
meaning that the same tfm may be used by two threads simultaneously
as all hashing state is stored in a local descriptor.
The other enhancement is that shash no longer takes scatter list
entries. This is because shash is specifically designed for
synchronous algorithms and as such scatter lists are unnecessary.
All existing hash users will be converted to shash once the
algorithms have been completely converted.
There is also a new finup function that combines update with final.
This will be extended to ahash once the algorithm conversion is
done.
This is also the first time that an algorithm type has their own
registration function. Existing algorithm types will be converted
to this way in due course.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch reintroduces a completely revamped crypto_alloc_tfm.
The biggest change is that we now take two crypto_type objects
when allocating a tfm, a frontend and a backend. In fact this
simply formalises what we've been doing behind the API's back.
For example, as it stands crypto_alloc_ahash may use an
actual ahash algorithm or a crypto_hash algorithm. Putting
this in the API allows us to do this much more cleanly.
The existing types will be converted across gradually.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The type exit function needs to undo any allocations done by the type
init function. However, the type init function may differ depending
on the upper-level type of the transform (e.g., a crypto_blkcipher
instantiated as a crypto_ablkcipher).
So we need to move the exit function out of the lower-level
structure and into crypto_tfm itself.
As it stands this is a no-op since nobody uses exit functions at
all. However, all cases where a lower-level type is instantiated
as a different upper-level type (such as blkcipher as ablkcipher)
will be converted such that they allocate the underlying transform
and use that instead of casting (e.g., crypto_ablkcipher casted
into crypto_blkcipher). That will need to use a different exit
function depending on the upper-level type.
This patch also allows the type init/exit functions to call (or not)
cra_init/cra_exit instead of always calling them from the top level.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This is a patch that was sent to me by Jarod Wilson, marking off my
outstanding todo to allow the ansi cprng to set/reset the DT counter value in a
cprng instance. Currently crypto_rng_reset accepts a seed byte array which is
interpreted by the ansi_cprng as a {V key} tuple. This patch extends that tuple
to now be {V key DT}, with DT an optional value during reset. This patch also
fixes a bug we noticed in which the offset of the key area of the seed is
started at DEFAULT_PRNG_KSZ rather than DEFAULT_BLK_SZ as it should be.
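A sketch of the resulting seed layout handling in the reset path, per
the description above:

  u8 *key = seed + DEFAULT_BLK_SZ;  /* V comes first, then the key */
  u8 *dt = NULL;

  if (slen < DEFAULT_PRNG_KSZ + DEFAULT_BLK_SZ)
          return -EINVAL;
  if (slen >= 2 * DEFAULT_BLK_SZ + DEFAULT_PRNG_KSZ)
          dt = key + DEFAULT_PRNG_KSZ;  /* optional trailing DT */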
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Jarod Wilson <jarod@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove the private implementation of 32-bit rotation and unaligned
access with byteswapping.
As a bonus, fixes sparse warnings:
crypto/camellia.c:602:2: warning: cast to restricted __be32
crypto/camellia.c:603:2: warning: cast to restricted __be32
crypto/camellia.c:604:2: warning: cast to restricted __be32
crypto/camellia.c:605:2: warning: cast to restricted __be32
crypto/camellia.c:710:2: warning: cast to restricted __be32
crypto/camellia.c:711:2: warning: cast to restricted __be32
crypto/camellia.c:712:2: warning: cast to restricted __be32
crypto/camellia.c:713:2: warning: cast to restricted __be32
crypto/camellia.c:714:2: warning: cast to restricted __be32
crypto/camellia.c:715:2: warning: cast to restricted __be32
crypto/camellia.c:716:2: warning: cast to restricted __be32
crypto/camellia.c:717:2: warning: cast to restricted __be32
[Thanks to Tomoyuki Okazaki for spotting the typo]
Tested-by: Carlo E. Prelz <fluido@fluido.as>
Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The FIPS specification requires that, should the self test for any supported
crypto algorithm fail during operation in fips mode, we must prevent
the use of any crypto functionality until such time as the system can
be re-initialized. Seems like the best way to handle that would be
to panic the system if we were in fips mode and failed a self test.
This patch implements that functionality. I've built and run it
successfully.
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
If we have at least one algorithm built-in then it no longer makes
sense to have the testing framework, and hence cryptomgr to be a
module. It should be either on or off, i.e., built-in or disabled.
This just happens to stop a potential runaway modprobe loop that
seems to trigger on at least one distro.
With fixes from Evgeniy Polyakov.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Mapping the destination multiple times is a misuse of the dma-api.
Since the destination may be reused as a source, ensure that it is only
mapped once and that it is mapped bidirectionally. This appears to add
ugliness on the unmap side in that it always reads back the destination
address from the descriptor, but gcc can determine that dma_unmap is a
nop and not emit the code that calculates its arguments.
Cc: <stable@kernel.org>
Cc: Saeed Bishara <saeed@marvell.com>
Acked-by: Yuri Tikhonov <yur@emcraft.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/async_tx:
fsldma: allow Freescale Elo DMA driver to be compiled as a module
fsldma: remove internal self-test from Freescale Elo DMA driver
drivers/dma/dmatest.c: switch a GFP_ATOMIC to GFP_KERNEL
dmatest: properly handle duplicate DMA channels
drivers/dma/ioat_dma.c: drop code after return
async_tx: make async_tx_run_dependencies() easier to read
* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: skcipher - Use RNG interface instead of get_random_bytes
crypto: rng - RNG interface and implementation
crypto: api - Add fips_enable flag
crypto: skcipher - Move IV generators into their own modules
crypto: cryptomgr - Test ciphers using ECB
crypto: api - Use test infrastructure
crypto: cryptomgr - Add test infrastructure
crypto: tcrypt - Add alg_test interface
crypto: tcrypt - Abort and only log if there is an error
crypto: crc32c - Use Intel CRC32 instruction
crypto: tcrypt - Avoid using contiguous pages
crypto: api - Display larval objects properly
crypto: api - Export crypto_alg_lookup instead of __crypto_alg_lookup
crypto: Kconfig - Replace leading spaces with tabs
* Rename 'next' to 'dep'
* Move the channel switch check inside the loop to simplify
termination
Acked-by: Ilya Yanok <yanok@emcraft.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
This reverts commit bd699f2df6,
which causes camellia to fail the included self-test vectors.
It has also been confirmed that it breaks existing encrypted
disks using camellia.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
We should clear the next pointer of the TX only if we are sure that the
next TX (say NXT) will be submitted to the channel too. Otherwise,
we break the chain of descriptors, because we lose the information
about the next descriptor to run. So next time, when
async_tx_run_dependencies() is invoked with TX, its TX->next will be
NULL, and NXT will never be submitted.
Cc: <stable@kernel.org> [2.6.26]
Signed-off-by: Yuri Tikhonov <yur@emcraft.com>
Signed-off-by: Ilya Yanok <yanok@emcraft.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
This patch makes the IV generators use the new RNG interface so
that the user can pick an RNG other than the default get_random_bytes.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds a random number generator interface as well as a
cryptographic pseudo-random number generator based on AES. It is
meant to be used in cases where a deterministic CPRNG is required.
One of the first applications will be as an input in the IPsec IV
generation process.
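A hedged sketch of the consumer-facing interface (as used by e.g. the
IV generators; "stdrng" selects the default RNG):

  struct crypto_rng *rng = crypto_alloc_rng("stdrng", 0, 0);

  crypto_rng_reset(rng, seed, slen);      /* seed the (C)PRNG */
  crypto_rng_get_bytes(rng, buf, buflen);
  crypto_free_rng(rng);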
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add the ability to turn FIPS-compliant mode on or off at boot.
In order to be FIPS compliant, several checks may need to be performed that may
be construed as not useful in a non-compliant mode. This patch allows us to set
a kernel flag indicating that we are running in a fips-compliant mode from boot
up. It also exports that mode information to user space via a sysctl
(/proc/sys/crypto/fips_enabled).
Tested successfully by me.
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch moves the default IV generators into their own modules
in order to break a dependency loop between cryptomgr, rng, and
blkcipher.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As it is, we only test ciphers when combined with a mode. That means
users that do not invoke a mode of operation may get an untested
cipher.
This patch tests all ciphers using the ECB mode so that simple cipher
users such as ansi-cprng are also protected.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch makes use of the new testing infrastructure by requiring
algorithms to pass a run-time test before they're made available to
users.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch moves the newly created alg_test infrastructure into
cryptomgr. This shall allow us to use it for testing at algorithm
registrations.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch creates a new interface for algorithm testing. A test can
be requested for a particular implementation of an algorithm. This
is achieved by taking both the name of the algorithm and that of
the implementation.
The all-inclusive test has also been rewritten to no longer require
a duplicate listing of all algorithms with tests. In that process
a number of missing tests have also been discovered and rectified.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The info printed is a complete waste of space when there is no error
since it doesn't tell us anything that we don't already know. If there
is an error, we can also be more verbose.
In case there is an error, this patch also aborts the test and
returns the error to the caller. In future this will be used to test
algorithms at registration time.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
From the NHM processor onward, Intel processors can support hardware-accelerated
CRC32c computation with the new CRC32 instruction in the SSE 4.2 instruction set.
The patch detects the availability of the feature, and chooses the most proper
way to calculate the CRC32c checksum.
Byte-code instructions are used for compiler compatibility.
No MMX / XMM registers are involved in the implementation.
Signed-off-by: Austin Zhang <austin.zhang@intel.com>
Signed-off-by: Kent Liu <kent.liu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
If tcrypt is to be used as a run-time integrity test, it needs to be
more resilient in a hostile environment. For a start allocating 32K
of physically contiguous memory is definitely out.
This patch teaches it to use separate pages instead.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Rather than displaying larval objects as real objects, this patch
makes them show up under /proc/crypto as of type larval.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Since the only user of __crypto_alg_lookup is doing exactly what
crypto_alg_lookup does, we can now use the latter in lieu of the former.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Authenc works in two stages for encryption, it first encrypts and
then computes an ICV. The context memory of the request is used
by both operations. The problem is that when an asynchronous
encryption completes, we will compute the ICV and then reread the
context memory of the encryption to get the original request.
It just happens that we have a buffer of 16 bytes in front of the
request pointer, so ICVs of 16 bytes (such as SHA1) do not trigger
the bug. However, any attempt to use a larger ICV instantly kills
the machine when the first asynchronous encryption is completed.
This patch fixes this by saving the request pointer before we start
the ICV computation.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The changeset ca786dc738
crypto: hash - Fixed digest size check
missed one spot for the digest type. This patch corrects that
error.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
My changeset 4b22f0ddb6
crypto: tcrpyt - Remove unnecessary kmap/kunmap calls
introduced a typo that broke AEAD chunk testing. In particular,
axbuf should really be xbuf.
There is also an issue with testing the last segment when encrypting.
The additional part produced by AEAD wasn't tested. Similarly, on
decryption the additional part of the AEAD input is mistaken for
corruption.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/async_tx: (24 commits)
I/OAT: I/OAT version 3.0 support
I/OAT: tcp_dma_copybreak default value dependent on I/OAT version
I/OAT: Add watchdog/reset functionality to ioatdma
iop_adma: cleanup iop_chan_xor_slot_count
iop_adma: document how to calculate the minimum descriptor pool size
iop_adma: directly reclaim descriptors on allocation failure
async_tx: make async_tx_test_ack a boolean routine
async_tx: remove depend_tx from async_tx_sync_epilog
async_tx: export async_tx_quiesce
async_tx: fix handling of the "out of descriptor" condition in async_xor
async_tx: ensure the xor destination buffer remains dma-mapped
async_tx: list_for_each_entry_rcu() cleanup
dmaengine: Driver for the Synopsys DesignWare DMA controller
dmaengine: Add slave DMA interface
dmaengine: add DMA_COMPL_SKIP_{SRC,DEST}_UNMAP flags to control dma unmap
dmaengine: Add dma_client parameter to device_alloc_chan_resources
dmatest: Simple DMA memcpy test client
dmaengine: DMA engine driver for Marvell XOR engine
iop-adma: fix platform driver hotplug/coldplug
dmaengine: track the number of clients using a channel
...
Fixed up conflict in drivers/dca/dca-sysfs.c manually
All callers of async_tx_sync_epilog have called async_tx_quiesce on the
depend_tx, so async_tx_sync_epilog need only call the callback to
complete the operation.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Ensure forward progress is made when a dmaengine driver is unable to
allocate an xor descriptor by breaking the dependency chain with
async_tx_quiesce() and issuing any pending descriptors.
Tested with iop-adma by setting device->max_xor = 2 to force multiple
calls to device_prep_dma_xor for each call to async_xor and limiting the
descriptor slot pool to 5. Discovered that the minimum descriptor pool
size for iop-adma is 2 * iop_chan_xor_slot_cnt(device->max_xor) + 1.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
When the number of source buffers for an xor operation exceeds the hardware
channel maximum async_xor creates a chain of dependent operations. The result
of one operation is reused as an input to the next to continue the xor
calculation. The destination buffer should remain mapped for the duration of
the entire chain. To provide this guarantee the code must no longer be allowed
to fallback to the synchronous path as this will preclude the buffer from being
unmapped, i.e. the dma-driver will potentially miss the descriptor with
!DMA_COMPL_SKIP_DEST_UNMAP.
Cc: Neil Brown <neilb@suse.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
On the RCU update side, don't use list_for_each_entry_rcu().
Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
All new crypto interfaces should go into individual files as much
as possible in order to ensure that crypto.h does not collapse under
its own weight.
This patch moves the ahash code into crypto/hash.h and crypto/internal/hash.h
respectively.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch reimplements crc32c using the ahash interface. This
allows one tfm to be used by an unlimited number of users provided
that they all use the same key (which all current crc32c users do).
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the walking helpers for hash algorithms akin to
those of block ciphers. This is a necessary step before we can
reimplement existing hash algorithms using the new ahash interface.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds a cryptographic pseudo-random number generator
based on CTR(AES-128). It is meant to be used in cases where a
deterministic CPRNG is required.
One of the first applications will be as an input in the IPsec IV
generation process.
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The base field in ahash_tfm appears to have been cut-n-pasted from
ablkcipher. It isn't needed here at all. Similarly, the info field
in ahash_request also appears to have originated from its cipher
counterpart and is vestigial.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The digest size check on hash algorithms is incorrect. It's
perfectly valid for hash algorithms to have a digest length
longer than their block size. For example crc32c has a block
size of 1 and a digest size of 4. Rather than having it lie
about its block size, this patch fixes the checks to do what
they really should, which is to bound the digest size so that
code placing the digest on the stack continues to work.
HMAC however still needs to check this as it's only defined
for such algorithms.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Similar to the rmd128.c annotations, significantly cuts down on the
noise.
Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove the private implementation of 32-bit rotation and unaligned
access with byteswapping.
As a bonus, fixes sparse warnings:
crypto/camellia.c:602:2: warning: cast to restricted __be32
crypto/camellia.c:603:2: warning: cast to restricted __be32
crypto/camellia.c:604:2: warning: cast to restricted __be32
crypto/camellia.c:605:2: warning: cast to restricted __be32
crypto/camellia.c:710:2: warning: cast to restricted __be32
crypto/camellia.c:711:2: warning: cast to restricted __be32
crypto/camellia.c:712:2: warning: cast to restricted __be32
crypto/camellia.c:713:2: warning: cast to restricted __be32
crypto/camellia.c:714:2: warning: cast to restricted __be32
crypto/camellia.c:715:2: warning: cast to restricted __be32
crypto/camellia.c:716:2: warning: cast to restricted __be32
crypto/camellia.c:717:2: warning: cast to restricted __be32
Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Noticed by Neil Horman: we are doing unnecessary kmap/kunmap calls
on kmalloced memory. This patch removes them. For the purposes of
testing SG construction, the underlying crypto code already does plenty
of kmap/kunmap calls anyway.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Patch to add checking of DES3 test vectors using CBC mode. FIPS-140-2
compliance mandates that any supported mode of operation must include a self
test. This satisfies that requirement for cbc(des3_ede). The included test
vector was generated by me using openssl. Key/IV was generated with the
following command:
openssl enc -des_ede_cbc -P
input and output values were generated by repeating the string "Too many
secrets" a few times over, truncating it to 128 bytes, and encrypting it with
openssl using the aforementioned key. Tested successfully by myself.
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Acked-by: Adrian-Ken Rueegsegger <rueegsegger@swiss-it.ch>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch converts the relevant code in the rmd implementations to
use the pointer form of the endian swapping operations. This allows
certain architectures to generate more optimised code. For example,
on sparc64 this more than halves the CPU cycles on a typical hashing
operation.
Based on a patch by David Miller.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch fixes endian issues making rmd320 work
properly on big-endian machines.
Signed-off-by: Adrian-Ken Rueegsegger <rueegsegger@swiss-it.ch>
Acked-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch fixes endian issues making rmd256 work
properly on big-endian machines.
Signed-off-by: Adrian-Ken Rueegsegger <rueegsegger@swiss-it.ch>
Acked-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch fixes endian issues making rmd160 work
properly on big-endian machines.
Signed-off-by: Adrian-Ken Rueegsegger <rueegsegger@swiss-it.ch>
Acked-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch is based on Sebastian Siewior's patch and
fixes endian issues making rmd128 work properly on
big-endian machines.
Signed-off-by: Adrian-Ken Rueegsegger <rueegsegger@swiss-it.ch>
Acked-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch changes tcrypt to use the new asynchronous hash interface
for testing hash algorithm correctness. The speed tests will continue
to use the existing interface for now.
Signed-off-by: Loc Ho <lho@amcc.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds asynchronous hash support to crypto daemon.
Signed-off-by: Loc Ho <lho@amcc.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds Kconfig entries for RIPEMD-256 and RIPEMD-320.
Signed-off-by: Adrian-Ken Rueegsegger <rueegsegger@swiss-it.ch>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds test vectors for RIPEMD-256 and
RIPEMD-320 hash algorithms.
The test vectors are taken from
<http://homes.esat.kuleuven.be/~bosselae/ripemd160.html>
Signed-off-by: Adrian-Ken Rueegsegger <rueegsegger@swiss-it.ch>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds support for the extended RIPEMD hash
algorithms RIPEMD-256 and RIPEMD-320.
Signed-off-by: Adrian-Ken Rueegsegger <rueegsegger@swiss-it.ch>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch puts all common RIPEMD values in the
appropriate header file. Initial values and constants
are the same for all variants of RIPEMD.
Signed-off-by: Adrian-Ken Rueegsegger <rueegsegger@swiss-it.ch>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Check whether the destination buffer is written to beyond the last
byte contained in the scatterlist.
Also change IDX1 of the cross-page access offsets to a multiple of 4.
This triggers a corruption in the HIFN driver and doesn't seem to
negatively impact other testcases.
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Change logs should be kept in source control systems, not the source.
This patch removes the change log from tcrypt to stop people from
extending it any more.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds Kconfig entries for RIPEMD-128 and RIPEMD-160.
Signed-off-by: Adrian-Ken Rueegsegger <rueegsegger@swiss-it.ch>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds test vectors for RIPEMD-128 and
RIPEMD-160 hash algorithms and digests (HMAC).
The test vectors are taken from ISO:IEC 10118-3 (2004)
and RFC2286.
Signed-off-by: Adrian-Ken Rueegsegger <rueegsegger@swiss-it.ch>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds support for RIPEMD-128 and RIPEMD-160
hash algorithms.
Signed-off-by: Adrian-Ken Rueegsegger <rueegsegger@swiss-it.ch>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The EINPROGRESS notifications should be done just like the final
call-backs, i.e., with BH off. This patch fixes the call in cryptd
since previously it was called with BH on.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When chainiv postpones requests it never calls their completion functions.
This causes symptoms such as memory leaks when IPsec is in use.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
commit 636bdeaa 'dmaengine: ack to flags: make use of the unused bits in
the 'ack' field' missed an ->ack conversion in
crypto/async_tx/async_memset.c
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Coverity CID: 2306 & 2307 RESOURCE_LEAK
In the second for loop in test_cipher(), data is allocated space with
kzalloc() and is only ever freed in an error case.
Looking at this loop, data is written to this memory but nothing seems
to read from it.
So here is a patch removing the allocation, I think this is the right
fix.
Only compile tested.
Signed-off-by: Darren Jenkins <darrenrjenkins@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Move rcu-protected lists from list.h into a new header file rculist.h.
This is done because lists are a heavily used primitive structure all over the
kernel and it's currently impossible to include other header files in
list.h without creating some circular dependencies.
For example, list.h implements rcu-protected lists and uses rcu_dereference()
without including rcupdate.h. It actually compiles because users of
rcu_dereference() are macros. Other RCU functions could be used too but
probably aren't because of this.
Therefore this patch creates rculist.h, which includes rcupdate.h without too
many changes/troubles.
Signed-off-by: Franck Bui-Huu <fbuihuu@gmail.com>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Josh Triplett <josh@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
When HMAC gets a key longer than the block size of the hash, it needs
to feed it as input to the hash to reduce it to a fixed length. As
it is HMAC converts the key to a scatter and gather list. However,
this doesn't work on certain platforms if the key is not allocated
via kmalloc. For example, the keys from tcrypt are stored in the
rodata section and this causes it to fail with HMAC on x86-64.
This patch fixes this by copying the key to memory obtained via
kmalloc before hashing it.
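A sketch of the fix against the hash API of that era (the helper name is
made up): copy the key into kmalloc'ed memory so the scatterlist walk's
page lookup is valid, then digest it:

static int hash_long_key(struct hash_desc *desc, const u8 *key,
                         unsigned int keylen, u8 *out)
{
        struct scatterlist sg;
        u8 *tmp = kmalloc(keylen, GFP_KERNEL);
        int err;

        if (!tmp)
                return -ENOMEM;
        memcpy(tmp, key, keylen);       /* the key may live in rodata */
        sg_init_one(&sg, tmp, keylen);
        err = crypto_hash_digest(desc, &sg, keylen, out);
        kfree(tmp);
        return err;
}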
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Normally, kzalloc returns NULL or a valid pointer value, not a value to be
tested using IS_ERR.
Signed-off-by: Julia Lawall <julia@diku.dk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
After attaching the IV to the head during encryption, eseqiv does not
increase the encryption length by that amount. As such the last block
of the actual plain text will be left unencrypted.
Fortunately the only user of this code, hifn, currently crashes so this
shouldn't affect anyone :)
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Ciphers, block modes, and the like are grouped together and sorted.
Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
On Thu, Mar 27, 2008 at 03:40:36PM +0100, Bodo Eggert wrote:
> Kamalesh Babulal <kamalesh@linux.vnet.ibm.com> wrote:
>
> > This patch cleanups the crypto code, replaces the init() and fini()
> > with the <algorithm name>_init/_fini
>
> This part ist OK.
>
> > or init/fini_<algorithm name> (if the
> > <algorithm name>_init/_fini exist)
>
> Having init_foo and foo_init won't be a good thing, will it? I'd start
> confusing them.
>
> What about foo_modinit instead?
Thanks for the suggestion; the init() is replaced with
<algorithm name>_mod_init()
and fini() is replaced with <algorithm name>_mod_fini.
Signed-off-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The key expansion routine can be made a little more generic, become
a kernel-doc entry and then be exported.
Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Tested-by: Stefan Hellermann <stefan@the2masters.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Implement CTS wrapper for CBC mode required for support of AES
encryption support for Kerberos (rfc3962).
Signed-off-by: Kevin Coffman <kwc@citi.umich.edu>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The third test vector of ECB-XTEA-ENC fails for me; all others
are fine. I could not find an RFC or anything else where the
vectors are defined. The test vector has not been modified since git
started recording history. The implementation is very close
(not to say equal) to what is available as Public Domain (they
recommend 64 rounds and the kernel uses 32). Therefore I
believe that there is a typo somewhere and tcrypt has always reported
*fail* instead of *okay*.
This patch replaces input + result of the third test vector with
result + input from the third decryption vector. The key is the
same, the other three test vectors are also the reverse.
Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Currently the tcrypt module is about 2 MiB on x86-32. The
main reason for the huge size is the data segment which contains
all the test vectors for each algorithm. The test vectors are
statically allocated in an array, and the size of the array has been
drastically increased by the merge of the Salsa20 test vectors.
With a hint from Benedikt Spranger I found a way to convert
those fixed-length arrays to strings, which are flexible in size.
VIM and regex were also very helpful :)
So, I am talking about a shrinking of ~97% on x86-32:
text data bss dec hex filename
18309 2039708 20 2058037 1f6735 tcrypt-b4.ko
45628 23516 80 69224 10e68 tcrypt.ko
Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The test routines (test_{cipher,hash,aead}) make a copy
of the test template and process the encryption
in place. This patch changes the creation of the copy so it will
work even if the source address of the input data isn't an array
inside of the template but a pointer.
Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The speed templates always look the same: the key size
is repeated for each block size, and we always test the same
block sizes. The addition of one inner loop makes it possible
to get rid of the struct and to use a tiny
u8 array :)
Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Some of the implemented crypto ciphers support similar key sizes
(16, 24 & 32 bytes). They can be grouped together and use a common
template instead of each having its own containing the same data.
Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Rename sha512 to sha512_generic and add a MODULE_ALIAS for sha512
so all sha512 implementations can be loaded automatically.
Keep the broken tabs so git recognizes this as a rename.
Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
'ack' is currently a simple integer that flags whether or not a client is done
touching fields in the given descriptor. It is effectively just a single bit
of information. Converting this to a flags parameter allows the other bits to
be put to use to control completion actions, like dma-unmap, and capture
results, like xor-zero-sum == 0.
Changes are one of:
1/ convert all open-coded ->ack manipulations to use async_tx_ack
and async_tx_test_ack.
2/ set the ack bit at prep time where possible
3/ make drivers store the flags at prep time
4/ add flags to the device_prep_dma_interrupt prototype
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Shrink struct dma_async_tx_descriptor and introduce
async_tx_channel_switch to properly inject a channel switch interrupt in
the descriptor stream. This simplifies the locking model as drivers no
longer need to handle dma_async_tx_descriptor.lock.
Acked-by: Shannon Nelson <shannon.nelson@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
The kernel crashes when ipsec passes a udp packet of about 14XX bytes
of data to aes-xcbc-mac.
It seems the first xxxx bytes of the data are in the first sg entry,
and the remaining xx bytes are in the next sg entry. But we don't
check the next sg entry to see if we need to go look the page up.
I noticed that in hmac.c we do a scatterwalk_sg_next() to do this check
and possible lookup, thus xcbc.c needs to use this routine too.
A 15-hour run of an ipsec stress test sending streams of tcp and
udp packets of various sizes, using this patch and
aes-xcbc-mac completed successfully, so hopefully this fixes the
problem.
Signed-off-by: Joy Latten <latten@austin.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
If the channel cannot perform the operation in one call to
->device_prep_dma_zero_sum, then fallback to the xor+page_is_zero path.
This only affects users with arrays larger than 16 devices on iop13xx or
32 devices on iop3xx.
Cc: <stable@kernel.org>
Cc: Neil Brown <neilb@suse.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
The previous patch to move chainiv and eseqiv into blkcipher created
a section mismatch for the chainiv exit function which was also called
from __init. This patch removes the __exit marking on it.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When using aes-xcbc-mac for authentication in IPsec,
the kernel crashes. It seems this algorithm doesn't
account for the space IPsec may make in the scatterlist for the authtag.
Thus when crypto_xcbc_digest_update2() gets called,
nbytes may be less than sg[i].length.
Since nbytes is an unsigned number, it wraps
at the end of the loop, allowing us to go back
into the loop and causing a crash in memcpy.
I used the update function in digest.c to model this fix.
Please let me know if it looks ok.
Signed-off-by: Joy Latten <latten@austin.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The XTS blockmode uses a copy of the IV which is saved on the stack
and may or may not be properly aligned. If it is not, it will break
hardware ciphers like geode or padlock.
This patch encrypts the IV in place so we don't have to worry about
alignment.
Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Tested-by: Stefan Hellermann <stefan@the2masters.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Every file should include the headers containing the externs for its
global code (in this case for struct crypto_{init,exit}_digest_ops()).
Signed-off-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
For compatibility with dm-crypt initramfs setups it is useful to merge
chainiv/seqiv into the crypto_blkcipher module. Since they're required
by most algorithms anyway this is an acceptable trade-off.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch fixes the following build error caused by commit
3631c650c4:
<-- snip -->
...
LD .tmp_vmlinux1
crypto/built-in.o: In function `skcipher_null_crypt':
crypto_null.c:(.text+0x3d14): undefined reference to `blkcipher_walk_virt'
crypto_null.c:(.text+0x3d14): relocation truncated to fit: R_MIPS_26 against `blkcipher_walk_virt'
crypto/built-in.o: In function `$L32':
crypto_null.c:(.text+0x3d54): undefined reference to `blkcipher_walk_done'
crypto_null.c:(.text+0x3d54): relocation truncated to fit: R_MIPS_26 against `blkcipher_walk_done'
crypto/built-in.o:(.data+0x2e8): undefined reference to `crypto_blkcipher_type'
make[1]: *** [.tmp_vmlinux1] Error 1
<-- snip -->
Signed-off-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Building latest git fails with the following error:
ERROR: "crypto_alloc_ablkcipher" [crypto/tcrypt.ko] undefined!
This appears to happen because CONFIG_CRYPTO_TEST is set while
CONFIG_CRYPTO_BLKCIPHER is not.
The following patch fixes the problem for me.
Signed-off-by: Frederik Deweerdt <frederik.deweerdt@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The source and destination addresses are included to allow channel
selection based on address alignment.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Reviewed-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
Pass a full set of flags to drivers' per-operation 'prep' routines.
Currently the only flag passed is DMA_PREP_INTERRUPT. The expectation is
that arch-specific async_tx_find_channel() implementations can exploit this
capability to find the best channel for an operation.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Acked-by: Shannon Nelson <shannon.nelson@intel.com>
Reviewed-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
The tx_set_src and tx_set_dest methods were originally implemented to allow
an array of addresses to be passed down from async_xor to the dmaengine
driver while minimizing stack overhead. Removing these methods allows
drivers to have all transaction parameters available at 'prep' time, saves
two function pointers in struct dma_async_tx_descriptor, and reduces the
number of indirect branches.
A consequence of moving this data to the 'prep' routine is that
multi-source routines like async_xor need temporary storage to convert an
array of linear addresses into an array of dma addresses. In order to keep
the same stack footprint of the previous implementation the input array is
reused as storage for the dma addresses. This requires that
sizeof(dma_addr_t) be less than or equal to sizeof(void *). As a
consequence CONFIG_DMADEVICES now depends on !CONFIG_HIGHMEM64G. It also
requires that drivers be able to make descriptor resources available when
the 'prep' routine is polled.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Acked-by: Shannon Nelson <shannon.nelson@intel.com>
Remove the unused ASYNC_TX_ASSUME_COHERENT flag. Async_tx is
meant to hide the difference between asynchronous hardware and synchronous
software operations, this flag requires clients to understand cache
coherency consequences of the async path.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Reviewed-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
A single list_head variable initialized with LIST_HEAD_INIT can almost
always be replaced with a LIST_HEAD declaration; this shrinks the code
and looks better.
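For example:

/* before: the variable name must be repeated in the initializer */
static struct list_head foo_list = LIST_HEAD_INIT(foo_list);

/* after: declaration and initialization in one macro */
static LIST_HEAD(bar_list);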
Signed-off-by: Denis Cheng <crquan@gmail.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
do_async_xor must be compiled away on !HAS_DMA archs.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (125 commits)
[CRYPTO] twofish: Merge common glue code
[CRYPTO] hifn_795x: Fixup container_of() usage
[CRYPTO] cast6: inline bloat--
[CRYPTO] api: Set default CRYPTO_MINALIGN to unsigned long long
[CRYPTO] tcrypt: Make xcbc available as a standalone test
[CRYPTO] xcbc: Remove bogus hash/cipher test
[CRYPTO] xcbc: Fix algorithm leak when block size check fails
[CRYPTO] tcrypt: Zero axbuf in the right function
[CRYPTO] padlock: Only reset the key once for each CBC and ECB operation
[CRYPTO] api: Include sched.h for cond_resched in scatterwalk.h
[CRYPTO] salsa20-asm: Remove unnecessary dependency on CRYPTO_SALSA20
[CRYPTO] tcrypt: Add select of AEAD
[CRYPTO] salsa20: Add x86-64 assembly version
[CRYPTO] salsa20_i586: Salsa20 stream cipher algorithm (i586 version)
[CRYPTO] gcm: Introduce rfc4106
[CRYPTO] api: Show async type
[CRYPTO] chainiv: Avoid lock spinning where possible
[CRYPTO] seqiv: Add select AEAD in Kconfig
[CRYPTO] scatterwalk: Handle zero nbytes in scatterwalk_map_and_copy
[CRYPTO] null: Allow setkey on digest_null
...
Currently the gcm(aes) tests have to be taken together with all other
algorithms. This patch makes it available by itself at number 106.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When setting the digest size, xcbc tests to see if the underlying algorithm
is a hash. This is silly because we don't allow it to be a hash and we've
specifically requested a cipher.
This patch removes the bogus test.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When the underlying algorithm has a block size other than 16 we abort
without freeing it. In fact, we try to return the algorithm itself
as an error!
This patch plugs the leak and makes it return -EINVAL instead.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The axbuf buffer is used by test_aead and therefore should be zeroed
there instead of in test_hash.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This is the x86-64 version of the Salsa20 stream cipher algorithm. The
original assembly code came from
<http://cr.yp.to/snuffle/salsa20/amd64-3/salsa20.s>. It has been
reformatted for clarity.
Signed-off-by: Tan Swee Heng <thesweeheng@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch contains the salsa20-i586 implementation. The original
assembly code came from
<http://cr.yp.to/snuffle/salsa20/x86-pm/salsa20.s>. I have reformatted
it (added indents) so that it matches the other algorithms in
arch/x86/crypto.
Signed-off-by: Tan Swee Heng <thesweeheng@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch introduces the rfc4106 wrapper for GCM just as we have an
rfc4309 wrapper for CCM. The purpose of the wrapper is to include part
of the IV in the key so that it can be negotiated by IPsec.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch makes chainiv avoid spinning by postponing requests on lock
contention if the user allows the use of asynchronous algorithms. If
a synchronous algorithm is requested then we behave as before.
This should improve IPsec performance on SMP when two CPUs attempt to
transmit over the same SA. Currently one of them will spin doing nothing
waiting for the other CPU to finish its encryption. This patch makes it
postpone the request and get on with other work.
If only one CPU is transmitting for a given SA, then we will process
the request synchronously as before.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that seqiv supports AEAD algorithms it needs to select the AEAD option.
Thanks to Erez Zadok for pointing out the problem.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
It's better to return silently than crash and burn when someone feeds us
a zero length. In particular the null digest algorithm when used as part
of authenc will do that to us.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
We need to allow setkey on digest_null if it is to be used directly by
authenc instead of through hmac.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds a null blkcipher algorithm called ecb(cipher_null) for
backwards compatibility. Previously the null algorithm when used by
IPsec copied the data byte by byte. This new algorithm optimises that
to a straight memcpy which lets us better measure inherent overheads in
our IPsec code.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds 7 test vectors to tcrypt for CCM.
The test vectors are from rfc 3610.
There are about 10 more test vectors in RFC 3610
and 4 or 5 more in NIST. I can add these as time permits.
I also needed to set authsize. CCM has a prerequisite of
authsize.
Signed-off-by: Joy Latten <latten@austin.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds Counter with CBC-MAC (CCM) support.
RFC 3610 and NIST Special Publication 800-38C were referenced.
Signed-off-by: Joy Latten <latten@austin.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch makes crypto_alloc_aead always return algorithms that are
capable of generating their own IVs through givencrypt and givdecrypt.
All existing AEAD algorithms already do. New ones must either supply
their own or specify a generic IV generator with the geniv field.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds support for using seqiv with AEAD algorithms. This is
useful for those AEAD algorithms that perform authentication before
encryption because the IV generated by the underlying encryption algorithm
won't be available for authentication.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch creates the infrastructure to help the construction of IV
generator templates that wrap around AEAD algorithms by adding an IV
generator to them. This is useful for AEAD algorithms with no built-in
IV generator or to replace their built-in generator.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Some algorithms always require manual IV construction. For instance,
the generic CCM algorithm requires the first byte of the IV to be manually
constructed. Such algorithms are always used by other algorithms equipped
with their own IV generators and do not need IV generation per se.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch implements the givencrypt function for authenc. It simply
calls the givencrypt operation on the underlying cipher instead of encrypt.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the underlying givcrypt operations for aead and associated
support elements. The rationale is identical to that of the skcipher
givcrypt operations, i.e., sometimes only the algorithm knows how the
IV should be generated.
A new request type aead_givcrypt_request is added which contains an
embedded aead_request structure with two new elements to support this
operation. The new elements are seq and giv. The seq field should
contain a strictly increasing 64-bit integer which may be used by
certain IV generators as an input value. The giv field will be used
to store the generated IV. It does not need to obey the alignment
requirements of the algorithm because it's not used during the operation.
The existing iv field must still be available as it will be used to store
intermediate IVs and the output IV if chaining is desired.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This generator generates an IV based on a sequence number by xoring it
with a salt. This algorithm is mainly useful for CTR and similar modes.
This patch also sets it as the default IV generator for ctr.
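A sketch of the underlying idea; the function and its arguments are
illustrative, not the seqiv internals:

/* IV = sequence number XOR per-tfm salt, zero-padded to ivsize */
static void seq_to_iv(u8 *iv, unsigned int ivsize,
                      const u8 *salt, u64 seq)
{
        memset(iv, 0, ivsize);
        memcpy(iv, &seq, min_t(unsigned int, ivsize, sizeof(seq)));
        crypto_xor(iv, salt, ivsize);
}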
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch converts the gcm algorithm over to crypto_grab_skcipher
which is a prerequisite for IV generation.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the gcm_base template which takes a block cipher
parameter instead of cipher. This allows the user to specify a
specific CTR implementation.
This also fixes a leak of the cipher algorithm that was previously
looked up but never freed.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch converts the authenc algorithm over to crypto_grab_skcipher
which is a prerequisite for IV generation.
This patch also changes authenc to set its ASYNC status depending on
the ASYNC status of the underlying skcipher.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch makes crypto_alloc_ablkcipher/crypto_grab_skcipher always
return algorithms that are capable of generating their own IVs through
givencrypt and givdecrypt. Each algorithm may specify its default IV
generator through the geniv field.
For algorithms that do not set the geniv field, the blkcipher layer will
pick a default. Currently it's chainiv for synchronous algorithms and
eseqiv for asynchronous algorithms. Note that if these wrappers do not
work on an algorithm then that algorithm must specify its own geniv or
it can't be used at all.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This generator generates an IV based on a sequence number by xoring it
with a salt and then encrypting it with the same key as used to encrypt
the plain text. This algorithm requires that the block size be equal
to the IV size. It is mainly useful for CBC.
It has one noteworthy property that for IPsec the IV happens to lie
just before the plain text so the IV generation simply increases the
number of encrypted blocks by one. Therefore the cost of this generator
is entirely dependent on the speed of the underlying cipher.
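Conceptually this is the seqiv construction with one extra step; a sketch
only, since eseqiv really drives the underlying skcipher rather than a raw
cipher tfm:

seq_to_iv(iv, ivsize, salt, seq);       /* as in the seqiv sketch above */
crypto_cipher_encrypt_one(tfm, iv, iv); /* IV = E_k(seq ^ salt), in place */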
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The chain IV generator is the one we've been using in the IPsec stack.
It simply starts out with a random IV, then uses the last block of each
encrypted packet's cipher text as the IV for the next packet.
It can only be used by synchronous ciphers since we have to make sure
that we don't start the encryption of the next packet until the last
one has completed.
It does have the advantage of using very little CPU time since it doesn't
have to generate anything at all.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch creates the infrastructure to help the construction of givcipher
templates that wrap around existing blkcipher/ablkcipher algorithms by adding
an IV generator to them.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
If the underlying algorithm specifies a specific geniv algorithm then
we should use it for the cryptd version as well.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch introduces the geniv field which indicates the default IV
generator for each algorithm. It should point to a string that is not
freed as long as the algorithm is registered.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Different block cipher modes have different requirements for initialisation
vectors. For example, CBC can use a simple randomly generated IV while
modes such as CTR must use an IV generation mechanism that gives a stronger
guarantee on the lack of collisions. Furthermore, disk encryption modes
have their own IV generation algorithms.
Up until now IV generation has been left to the users of the symmetric
key cipher API. This is inconvenient as the number of block cipher modes
increases, because the user needs to be aware of which mode is supposed to
be paired with which IV generation algorithm.
Therefore it makes sense to integrate the IV generation into the crypto
API. This patch takes the first step in that direction by creating two
new ablkcipher operations, givencrypt and givdecrypt, that generate an
IV before performing the actual encryption or decryption.
The operations are currently not exposed to the user. That will be done
once the underlying functionality has actually been implemented.
It also creates the underlying givcipher type. Algorithms that directly
generate IVs would use it instead of ablkcipher. All other algorithms
(including all existing ones) would generate a givcipher algorithm upon
registration. This givcipher algorithm will be constructed from the geniv
string that's stored in every algorithm. That string will locate a template
which is instantiated by the blkcipher/ablkcipher algorithm in question to
give a givcipher algorithm.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Note: From now on the collective of ablkcipher/blkcipher/givcipher will
be known as skcipher, i.e., symmetric key cipher. The name blkcipher has
always been something of a misnomer since it supports stream ciphers too.
This patch adds the function crypto_grab_skcipher as a new way of getting
an ablkcipher spawn. The problem is that previously we did this in two
steps, first getting the algorithm and then calling crypto_init_spawn.
This meant that each spawn user had to be aware of what type and mask to
use for these two steps. This is difficult and also presents a problem
when the type/mask changes as they're about to be for IV generators.
The new interface does both steps together just like crypto_alloc_ablkcipher.
As a side-effect this also allows us to be stronger on type enforcement
for spawns. For now this is only done for ablkcipher but it's trivial
to extend for other types.
This patch also moves the type/mask logic for skcipher into the helpers
crypto_skcipher_type and crypto_skcipher_mask.
Finally this patch introduces the function crypto_require_sync to determine
whether the user is specifically requesting a sync algorithm.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the necessary changes for GCM to be used with async
ciphers. This would allow it to be used with hardware devices that
support CTR.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As discussed previously, this patch moves the basic CTR functionality
into a chainable algorithm called ctr. The IPsec-specific variant of
it is now placed on top with the name rfc3686.
So ctr(aes) gives a chainable cipher with IV size 16 while the IPsec
variant will be called rfc3686(ctr(aes)). This patch also adjusts
gcm accordingly.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
With the impending addition of the givcipher type, both blkcipher and
ablkcipher algorithms will use it to create givcipher objects. As such
it no longer makes sense to split the system between ablkcipher and
blkcipher. In particular, both ablkcipher.c and blkcipher.c would need
to use the givcipher type which has to reside in ablkcipher.c since it
shares much code with it.
This patch merges the two Kconfig options as well as the modules into one.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch fixes the request context alignment so that it is actually
aligned to the value required by the algorithm.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds a new helper crypto_attr_alg_name which is basically the
first half of crypto_attr_alg. That is, it returns an algorithm name
parameter as a string without looking it up. The caller can then look it
up immediately or defer it until later.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
I get here:
----
LD vmlinux
SYSMAP System.map
SYSMAP .tmp_System.map
Building modules, stage 2.
MODPOST 226 modules
ERROR: "crypto_hash_type" [crypto/authenc.ko] undefined!
make[1]: *** [__modpost] Error 1
make: *** [modules] Error 2
---
which fails because crypto_hash_type is declared in crypto/hash.c. You might wanna
fix it like so:
Signed-off-by: Borislav Petkov <bbpetkov@yahoo.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds a simple speed test for salsa20.
Usage: modprobe tcrypt mode=206
Signed-off-by: Tan Swee Heng <thesweeheng@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add common compression tester function
Modify deflate test case to use the common compressor test function
Signed-off-by: Zoltan Sogor <weth@inf.u-szeged.hu>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This is a large test vector for Salsa20 that crosses the 4096-bytes
page boundary.
Signed-off-by: Tan Swee Heng <thesweeheng@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch fixes the multi-page processing bug that affects large test
vectors (the same bug that previously affected ctr.c).
There is an optimization for the case walk.nbytes == nbytes. Also, we
now use crypto_xor() instead of ad-hoc XOR routines.
Signed-off-by: Tan Swee Heng <thesweeheng@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The abreq structure is currently allocated on the stack. This is broken
if the underlying algorithm is asynchronous. This patch changes it so
that it's taken from the private context instead which has been enlarged
accordingly.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Unfortunately the generic chaining hasn't been ported to all architectures
yet, and notably not s390. So this patch restores the chaining that we've
been using previously, which does work everywhere.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The scatterwalk infrastructure is used by algorithms so it needs to
move out of crypto for future users that may live in drivers/crypto
or asm/*/crypto.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch changes gcm/authenc to return EBADMSG instead of EINVAL for
ICV mismatches. This convention has already been adopted by IPsec.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The crypto_aead convention for ICVs is to include it directly in the
output. If we decided to change this in future then we would make
the ICV (if the algorithm has an explicit one) available in the
request itself.
For now no algorithm needs this so this patch changes gcm to conform
to this convention. It also adjusts the tcrypt aead tests to take
this into account.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Currently the gcm(aes) tests have to be taken together with all other
ciphers. This patch makes it available by itself at number 35.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The previous code incorrectly included the hash in the verification which
also meant that we'd crash and burn when it comes to actually verifying
the hash since we'd go past the end of the SG list.
This patch fixes that by subtracting authsize from cryptlen at the start.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Having enckeylen as a template parameter makes it a pain for hardware
devices that implement ciphers with many key sizes since each one would
have to be registered separately.
Since the authenc algorithm is mainly used for legacy purposes where its
key is going to be constructed out of two separate keys, we can in fact
embed this value into the key itself.
This patch does this by prepending an rtnetlink header to the key that
contains the encryption key length.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As it is, authsize is an algorithm parameter which cannot be changed at
run-time. This is inconvenient because hardware that implements such
algorithms would have to register each authsize that they support
separately.
Since authsize is a property common to all AEAD algorithms, we can add
a function setauthsize that sets it at run-time, just like setkey.
This patch does exactly that and also changes authenc so that authsize
is no longer a parameter of its template.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Since alignment masks are always one less than a power of two, we can
use binary OR to find their maximum.
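For example:

/* alignment masks are 2^k - 1, so the OR of two masks is their maximum:
 * (15 | 3) == 15, i.e. 16-byte alignment wins over 4-byte alignment */
static inline unsigned int max_align_mask(unsigned int m1, unsigned int m2)
{
        return m1 | m2;
}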
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
These utilities implemented in lib/hexdump.c are handier; please use them.
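For instance, instead of a hand-rolled hex printer (the buffer and prefix
here are placeholders):

static void dump_ctx(const void *buf, size_t len)
{
        /* 16 bytes per row, 1-byte groups, offset prefix, no ASCII column */
        print_hex_dump(KERN_DEBUG, "ctx: ", DUMP_PREFIX_OFFSET,
                       16, 1, buf, len, false);
}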
Signed-off-by: Denis Cheng <crquan@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add test vectors to tcrypt for AES in CBC mode for key sizes 192 and 256.
The test vectors are copied from NIST SP800-38A.
Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds a large AES CTR mode test vector. The test vector is
4100 bytes in size. It was generated using a C++ program that called
Crypto++.
Note that this patch considerably increases the size of "struct
cipher_testvec" and hence the size of tcrypt.ko.
Signed-off-by: Tan Swee Heng <thesweeheng@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Currently the number of entries in a cipher test vector template is
limited by TVMEMSIZE/sizeof(struct cipher_testvec). This patch
circumvents the problem by pointing cipher_tv to each entry in the
template, rather than the template itself.
Signed-off-by: Tan Swee Heng <thesweeheng@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When the data spans across a page boundary, CTR may incorrectly process
a partial block in the middle because the blkcipher walking code may
supply partial blocks in the middle as long as the total length of the
supplied data is more than a block. CTR is supposed to return any unused
partial block in that case to the walker.
This patch fixes this by doing exactly that, returning partial blocks to
the walker unless we received less than a block-worth of data to start
with.
This also allows us to optimise the bulk of the processing since we no
longer have to worry about partial blocks until the very end.
Thanks to Tan Swee Heng for fixes and actually testing this :)
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add GCM/GMAC support to cryptoapi.
GCM (Galois/Counter Mode) is an AEAD mode of operations for any block cipher
with a block size of 16. The typical example is AES-GCM.
Signed-off-by: Mikko Herranen <mh1@iki.fi>
Reviewed-by: Mika Kukkonen <mika.kukkonen@nsn.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add AEAD support to tcrypt, needed by GCM.
Signed-off-by: Mikko Herranen <mh1@iki.fi>
Reviewed-by: Mika Kukkonen <mika.kukkonen@nsn.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Analogously to the camellia7 patch, move the
"absorb kw2 to other subkeys" and "absorb kw4 to other subkeys"
code parts into camellia_setup_tail(). This further reduces
source and object code size at the cost of two branches
in the key setup code.
Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Move "key XOR is end of F-function" code part into
camellia_setup_tail(), it is sufficiently similar
between camellia_setup128 and camellia_setup256.
Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Unifies the encrypt/decrypt routines for different key lengths.
This reduces module size by ~25%, with a tiny (less than 1%)
speed impact.
Also collapses encrypt/decrypt into more readable
(visually shorter) form using macros.
Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove unused macro params.
Use (u8)(expr) instead of (expr) & 0xff, which helps gcc
realize that it can use simpler instructions.
Move CAMELLIA_FLS macro closer to encrypt/decrypt routines.
Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch replaces the custom xor in CBC with the generic crypto_xor.
It changes the operations for in-place encryption slightly to avoid
calling crypto_xor with tmpbuf since it is not necessarily aligned.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
All common block ciphers have a block size that's a power of 2. In fact,
all of our block ciphers obey this rule.
If we require this then CBC can be optimised to avoid an expensive divide
on in-place decryption.
I've also changed the saving of the first IV in the in-place decryption
case to the last IV because that lets us use walk->iv (which is already
aligned) for the xor operation where alignment is required.
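The saving is the classic mask trick, roughly:

/* valid only because bsize is a power of 2 */
static inline unsigned int tail_bytes(unsigned int nbytes, unsigned int bsize)
{
        return nbytes & (bsize - 1);    /* same as nbytes % bsize, no divide */
}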
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
With the addition of more stream ciphers we need to curb the proliferation
of ad-hoc xor functions. This patch creates a generic pair of functions,
crypto_inc and crypto_xor which does big-endian increment and exclusive or,
respectively.
For optimum performance, they both use u32 operations so alignment must be
as that of u32 even though the arguments are of type u8 *.
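Minimal sketches of the two helpers; the in-tree versions also handle
trailing bytes, while these assume word-multiple, u32-aligned buffers:

static void xor_sketch(u8 *dst, const u8 *src, unsigned int size)
{
        u32 *a = (u32 *)dst;
        const u32 *b = (const u32 *)src;

        for (; size >= 4; size -= 4)
                *a++ ^= *b++;   /* dst ^= src, one word at a time */
}

static void inc_sketch(u8 *a, unsigned int size)
{
        /* big-endian increment: bump the last byte, ripple the carry */
        while (size--)
                if (++a[size])
                        break;
}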
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Up until now, ablkcipher algorithms have been identified as
type BLKCIPHER with the ASYNC bit set. This is suboptimal because
ablkcipher refers to two things. On the one hand it refers to the
top-level ablkcipher interface with requests. On the other hand it
refers to an algorithm type underneath.
As it is you cannot request a synchronous block cipher algorithm
with the ablkcipher interface on top. This is a problem because
we want to be able to eventually phase out the blkcipher top-level
interface.
This patch fixes this by making ABLKCIPHER its own type, just as
we have distinct types for HASH and DIGEST. The type is associated
with the algorithm implementation only.
Which top-level interface is used for synchronous block ciphers is
then determined by the mask that's used. If it's a specific mask
then the old blkcipher interface is given, otherwise we go with the
new ablkcipher interface.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch converts the crypto scatterwalk code to use the generic
scatterlist chaining rather the version specific to crypto.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Resubmitting this patch which extends sha256_generic.c to support SHA-224 as
described in FIPS 180-2 and RFC 3874. HMAC-SHA-224 as described in RFC4231
is then supported through the hmac interface.
Patch includes test vectors for SHA-224 and HMAC-SHA-224.
SHA-224 should be chosen as a hash algorithm when 112 bits of security
strength is required.
Patch generated against the 2.6.24-rc1 kernel and tested against
2.6.24-rc1-git14 which includes fix for scatter gather implementation for HMAC.
Signed-off-by: Jonathan Lynch <jonathan.lynch@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The setkey() function can be shared with the generic algorithm.
Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
No other block mode is 'm' by default.
Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The setkey() function can be shared with the generic algorithm.
Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch exports four tables and the set_key() routine. These resources
can be shared by other AES implementations (aes-x86_64 for instance).
The decryption key has been turned around (deckey[0] is the first piece
of the key instead of deckey[keylen+20]). The encrypt/decrypt functions
now look identical (except that they use different tables and
keys).
Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds countersize to CTR mode.
The template is now ctr(algo,noncesize,ivsize,countersize).
For example, ctr(aes,4,8,4) indicates the counterblock
will be composed of a salt/nonce that is 4 bytes, an iv
that is 8 bytes and the counter is 4 bytes.
When noncesize + ivsize < blocksize, CTR initializes the
last (blocksize - ivsize - noncesize) bytes of the
counter block to zero. Otherwise the counter block is composed of the IV
(and nonce if necessary).
If noncesize + ivsize == blocksize, then this indicates that
user is passing in entire counterblock. Thus countersize
indicates the amount of bytes in counterblock to use as
the counter for incrementing. CTR will increment counter
portion by 1, and begin encryption with that value.
Note that CTR assumes the counter portion of the block that
will be incremented is stored in big endian.
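An illustrative layout for the ctr(aes,4,8,4) example; the helper name is
made up:

/* 16-byte counter block = 4-byte nonce | 8-byte IV | 4-byte counter */
static void build_ctrblk(u8 ctrblk[16], const u8 nonce[4],
                         const u8 iv[8], u32 counter)
{
        __be32 ctr = cpu_to_be32(counter);      /* counter is big endian */

        memcpy(ctrblk, nonce, 4);
        memcpy(ctrblk + 4, iv, 8);
        memcpy(ctrblk + 12, &ctr, 4);   /* bumped by 1 for each block */
}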
Signed-off-by: Joy Latten <latten@austin.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Move huge unrolled pieces of code (3 screenfuls) at the end of
the 128/256 key setup routines into a common camellia_setup_tail()
and convert them to a loop there.
The loop is still unrolled six times, so the performance hit is very
small but the code size win is big.
Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
Acked-by: Noriaki TAKAMIYA <takamiya@po.ntts.co.jp>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Optimize GETU32 to use a 4-byte memcpy (modern gcc will convert
such a memcpy to a single move instruction on i386).
Original GETU32 did four byte fetches, and shifted/XORed those.
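A sketch of the idea, with the kernel's byteorder helpers standing in for
the exact macro body:

static inline u32 getu32(const u8 *p)
{
        __be32 v;

        memcpy(&v, p, 4);       /* gcc folds this into one 4-byte load */
        return be32_to_cpu(v);
}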
Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
Acked-by: Noriaki TAKAMIYA <takamiya@po.ntts.co.jp>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Rename some macros to shorter names: CAMELLIA_RR8 -> ROR8,
making it easier to understand that it is just a right rotation,
nothing camellia-specific in it.
CAMELLIA_SUBKEY_L() -> SUBKEY_L() - just shorter.
Move be32 <-> cpu conversions out of en/decrypt128/256 and into
camellia_en/decrypt - no reason to have that code duplicated twice.
Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
Acked-by: Noriaki TAKAMIYA <takamiya@po.ntts.co.jp>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Move code blocks around so that related pieces are closer together:
e.g. the CAMELLIA_ROUNDSM macro does not need to be separated
from the rest of the code by a huge array of constants.
Remove unused macros (COPY4WORD, SWAP4WORD, XOR4WORD[2]).
Drop the SUBL() and SUBR() macros, which only obscure things;
the same goes for the CAMELLIA_SP1110() macro and the KEY_TABLE_TYPE typedef.
Remove useless comments such as the /* encryption */ before
void camellia_encrypt128(...) - it's obvious enough already!
Combine the swap with the copying at the beginning/end of encrypt/decrypt.
Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
Acked-by: Noriaki TAKAMIYA <takamiya@po.ntts.co.jp>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Currently the twofish cipher key setup code has unrolled loops:
approximately 70-100 instructions are repeated 40 times.
As a result, the twofish module is the biggest module in crypto/*.
Unrolling produces x2.5 more code (+18k on i386) while speeding up key
setup by only 7%:
unrolled: twofish_setkey/sec: 41128
loop: twofish_setkey/sec: 38148
CALC_K256: ~100 insns each
CALC_K192: ~90 insns
CALC_K: ~70 insns
The attached patch removes this unrolling.
$ size */twofish_common.o
   text    data     bss     dec     hex filename
  37920       0       0   37920    9420 crypto.org/twofish_common.o
  13209       0       0   13209    3399 crypto/twofish_common.o
Run tested (modprobe tcrypt reports ok). Please apply.
Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
These three defines are used in all AES-related hardware.
Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
HIFN driver update to use DES weak key checks (exported in this patch).
Signed-off-by: Evgeniy Polyakov <johnpol@2ka.mipt.ru>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch creates include/crypto/des.h for common macros shared between
DES implementations.
Signed-off-by: Evgeniy Polyakov <johnpol@2ka.mipt.ru>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch implements CTR mode for IPsec.
It is based on RFC 3686.
Please note:
1. CTR turns a block cipher into a stream cipher.
Encryption is done in blocks; however, the last block
may be a partial block.
A "counter block" is encrypted, creating a keystream
that is xor'ed with the plaintext. The counter portion
of the counter block is incremented after each block
of plaintext is encrypted.
Decryption is performed in the same manner.
2. The CTR counter block is composed of:
nonce + IV + counter
The size of the counter block is equivalent to the
blocksize of the cipher:
sizeof(nonce) + sizeof(IV) + sizeof(counter) = blocksize
The CTR template requires the name of the cipher
algorithm, the size of the nonce, and the size of the IV:
ctr(cipher,sizeof_nonce,sizeof_iv)
So, for example,
ctr(aes,4,8)
specifies that the counter block will be composed of 4 bytes
from a nonce, 8 bytes from the IV, and 4 bytes for the counter,
since AES has a blocksize of 16 bytes.
3. The counter portion of the counter block is stored
in big endian for conformance to RFC 3686.
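To make the keystream description concrete, here is a minimal
self-contained sketch of the CTR loop (aes_encrypt_block() is a
hypothetical stand-in for the underlying block cipher, not a kernel
API):

	#include <stddef.h>
	#include <stdint.h>

	#define BLOCKSIZE 16

	/* Hypothetical stand-in for the underlying block cipher. */
	extern void aes_encrypt_block(const void *key,
	                              uint8_t out[BLOCKSIZE],
	                              const uint8_t in[BLOCKSIZE]);

	/* Encrypt (or, identically, decrypt) len bytes in CTR mode. */
	static void ctr_encrypt_stream(const void *key,
	                               uint8_t ctrblk[BLOCKSIZE],
	                               uint8_t *dst, const uint8_t *src,
	                               size_t len)
	{
		uint8_t keystream[BLOCKSIZE];
		size_t i, n;

		while (len) {
			/* encrypt the counter block to get keystream */
			aes_encrypt_block(key, keystream, ctrblk);

			/* the last block may be partial */
			n = len < BLOCKSIZE ? len : BLOCKSIZE;
			for (i = 0; i < n; i++)
				dst[i] = src[i] ^ keystream[i];

			/* increment the big-endian counter (last 4 bytes) */
			for (i = BLOCKSIZE; i-- > BLOCKSIZE - 4;)
				if (++ctrblk[i])
					break;

			dst += n;
			src += n;
			len -= n;
		}
	}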
Signed-off-by: Joy Latten <latten@austin.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As it stands, crypto_remove_spawn may try to unregister an instance
which has yet to be registered. This patch fixes this by checking
whether the instance has been registered before attempting to remove it.
It also removes a bogus cra_destroy check in crypto_register_instance as
1) it's outside the mutex;
2) we have a check in __crypto_register_alg already.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
It seems that newer versions of gcc have regressed in their ability to
analyse initialisations. This patch moves the initialisations up to avoid
the warnings.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Non-architecture-specific code should not #include <asm/scatterlist.h>.
This patch therefore either replaces such includes with
#include <linux/scatterlist.h> or simply removes them if they were
unused.
Signed-off-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
This patch moves the sg_init_table out of the timing loops for hash
algorithms so that it doesn't impact on the speed test results.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
hmac_setkey(), hmac_init(), and hmac_final() each have
a single on-stack scatterlist. Initialize it
using sg_init_one() instead of sg_set_buf().
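A sketch of the change for one such helper (the function and variable
names here are illustrative, not the actual hmac code):

	#include <linux/scatterlist.h>

	static void hmac_sg_init(struct scatterlist *sg, void *data,
	                         unsigned int len)
	{
		/* before: sg_set_buf(sg, data, len) filled in only
		 * page/offset/length, leaving the rest of the on-stack
		 * entry uninitialized */

		/* after: fully initialize the one-entry list, then set
		 * the buffer */
		sg_init_one(sg, data, len);
	}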
Signed-off-by: David S. Miller <davem@davemloft.net>
Crypto now uses SG helper functions. Fix hmac_digest to use those
functions correctly and fix the oops associated with it.
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Most drivers need to set the length and offset as well, so we may as
well fold those three lines into one.
Add sg_assign_page() for the two locations that only needed to set
the page, where the offset/length is set outside of the function context.
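A sketch of the two resulting patterns (illustrative snippet, not
taken from a particular driver):

	#include <linux/scatterlist.h>

	static void map_buf(struct scatterlist *sg, struct page *page,
	                    unsigned int len, unsigned int offset)
	{
		/* before:
		 *	sg->page   = page;
		 *	sg->offset = offset;
		 *	sg->length = len;
		 */
		sg_set_page(sg, page, len, offset);
	}

	/* where offset/length are set elsewhere, only assign the page */
	static void map_page_only(struct scatterlist *sg, struct page *page)
	{
		sg_assign_page(sg, page);
	}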
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
Convert the subdirectory "crypto" to UTF-8. The files changed are
<crypto/fcrypt.c> and <crypto/api.c>.
Signed-off-by: John Anthony Kazos Jr. <jakj@j-a-k-j.com>
Signed-off-by: Adrian Bunk <bunk@kernel.org>
There are currently several SHA implementations that all define their own
initialization vectors and size values. Since these values are identical,
move them to a header file under include/crypto.
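For illustration, the kind of shared definitions such a header collects
(SHA-1 values shown; the header would cover the other SHA variants in
the same way):

	#define SHA1_DIGEST_SIZE	20
	#define SHA1_BLOCK_SIZE		64

	#define SHA1_H0			0x67452301UL
	#define SHA1_H1			0xefcdab89UL
	#define SHA1_H2			0x98badcfeUL
	#define SHA1_H3			0x10325476UL
	#define SHA1_H4			0xc3d2e1f0UL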
Signed-off-by: Jan Glauber <jang@de.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Loading the crypto algorithm by the alias instead of by module directly
has the advantage that all possible implementations of this algorithm
are loaded automatically and the crypto API can choose the best one
depending on its priority.
Additionally it ensures that the generic implementation as well as the
HW driver (if available) is loaded in case the HW driver needs the
generic version as fallback in corner cases.
Also remove the probe for sha1 in padlock's init code.
Quote from Herbert:
The probe is actually pointless since we can always probe when
the algorithm is actually used which does not lead to dead-locks
like this.
Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Loading the crypto algorithm by the alias instead of by module directly
has the advantage that all possible implementations of this algorithm
are loaded automatically and the crypto API can choose the best one
depending on its priority.
Additionally it ensures that the generic implementation as well as the
HW driver (if available) is loaded in case the HW driver needs the
generic version as fallback in corner cases.
Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Loading the crypto algorithm by the alias instead of by module directly
has the advantage that all possible implementations of this algorithm
are loaded automatically and the crypto API can choose the best one
depending on its priority.
Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the helper blkcipher_walk_virt_block which is similar to
blkcipher_walk_virt but uses a supplied block size instead of the block
size of the block cipher. This is useful for CTR where the block size is
1 but we still want to walk by the block size of the underlying cipher.
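A sketch of how a CTR implementation might use the helper (simplified
and hypothetical: error handling and the actual keystream XOR are
elided, and AES_BLOCK_SIZE stands for the underlying cipher's block
size):

	#include <crypto/algapi.h>

	static int ctr_crypt_walk(struct blkcipher_desc *desc,
	                          struct scatterlist *dst,
	                          struct scatterlist *src,
	                          unsigned int nbytes)
	{
		struct blkcipher_walk walk;
		int err;

		blkcipher_walk_init(&walk, dst, src, nbytes);
		/* walk in units of the underlying cipher's block size,
		 * even though the blocksize of CTR itself is 1 */
		err = blkcipher_walk_virt_block(desc, &walk, AES_BLOCK_SIZE);

		while (walk.nbytes) {
			/* ... XOR keystream into walk.dst.virt.addr ... */
			err = blkcipher_walk_done(desc, &walk, 0);
		}

		return err;
	}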
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that the block size is no longer a multiple of the alignment, we need to
increase the kmalloc amount in blkcipher_next_slow to use the aligned block
size.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds a comment to explain why we compare the cra_driver_name of
the algorithm being registered against the cra_name of a larval as opposed
to the cra_driver_name of the larval.
In fact, larvals have only one name, cra_name, which is the name that was
requested by the user. The test here is simply trying to find out whether
the algorithm being registered can or cannot satisfy the larval.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Previously we assumed for convenience that the block size is a multiple of
the algorithm's required alignment. With the pending addition of CTR this
will no longer be the case as the block size will be 1 due to it being a
stream cipher. However, the alignment requirement will be that of the
underlying implementation which will most likely be greater than 1.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
We do not allow spaces in algorithm names or parameters. Thanks to Joy Latten
for pointing this out.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As Joy Latten points out, inner algorithm parameters will miss the closing
bracket, which will also cause the outer algorithm to terminate prematurely.
This patch fixes that and also kills the WARN_ON when the number of
parameters exceeds the maximum, as that is a user error.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
XTS is currently considered to be the successor of the LRW mode by the
IEEE 1619 workgroup. LRW was discarded because it is not secure if the
encryption key itself is encrypted with LRW.
XTS does not have this problem. The implementation is pretty
straightforward: a new function was added to gf128mul to handle GF(128)
elements in ble format.
Four test vectors from the specification
http://grouper.ieee.org/groups/1619/email/pdf00086.pdf
were added, and they verify on my system.
Signed-off-by: Rik Snel <rsnel@cube.dyndns.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use max in blkcipher_get_spot() instead of open coding it.
Signed-off-by: Ingo Oeser <ioe-lkml@rameria.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When scatterwalk is built as a module, digest.c is broken because it
requires the crypto_km_types structure, which lives in scatterwalk. This
patch removes the crypto_km_types structure by encoding its logic into
crypto_kmap_type directly.
In fact, this even saves a few bytes of code (not to mention the data
structure itself) on i386 which is about the only place where it's
needed.
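Roughly, the logic can be encoded like this (a sketch of the idea,
assuming the KM_* pairs are consecutive; not necessarily the exact
committed code):

	#include <linux/hardirq.h>
	#include <linux/highmem.h>

	static inline enum km_type crypto_kmap_type(int out)
	{
		enum km_type type;

		/* select within the (in, out) pair via the out flag */
		if (in_softirq())
			type = out * (KM_SOFTIRQ1 - KM_SOFTIRQ0) + KM_SOFTIRQ0;
		else
			type = out * (KM_USER1 - KM_USER0) + KM_USER0;

		return type;
	}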
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the authenc algorithm which constructs an AEAD algorithm
from an asynchronous block cipher and a hash. The construction is done
by concatenating the encrypted result from the cipher with the output
from the hash, as is used by the IPsec ESP protocol.
The authenc algorithm exists as a template with four parameters:
authenc(auth, authsize, enc, enckeylen).
These are the authentication algorithm, the authentication size (i.e.,
the output of the authentication algorithm is truncated to this size),
the encryption algorithm, and the encryption key length. Both the size
field and the key length field are in bytes. For example, AES-128 with
SHA1-HMAC would be represented by
authenc(hmac(sha1), 12, cbc(aes), 16)
The key for the authenc algorithm is the concatenation of the keys for
the authentication algorithm with the encryption algorithm. For the
above example, if a key of length 36 bytes is given, then hmac(sha1)
would receive the first 20 bytes while the last 16 would be given to
cbc(aes).
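A minimal sketch of that key split (auth_setkey()/enc_setkey() are
hypothetical stand-ins for the two underlying setkey operations):

	#include <stdint.h>

	extern int auth_setkey(const uint8_t *key, unsigned int keylen);
	extern int enc_setkey(const uint8_t *key, unsigned int keylen);

	/* Split a concatenated authenc key: the first keylen - enckeylen
	 * bytes go to the authentication algorithm, the last enckeylen
	 * bytes to the cipher. For the 36-byte example with
	 * enckeylen = 16, hmac(sha1) gets bytes 0..19 and cbc(aes)
	 * gets bytes 20..35. */
	static int authenc_setkey(const uint8_t *key, unsigned int keylen,
	                          unsigned int enckeylen)
	{
		int err;

		if (enckeylen > keylen)
			return -1;

		err = auth_setkey(key, keylen - enckeylen);
		if (err)
			return err;

		return enc_setkey(key + keylen - enckeylen, enckeylen);
	}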
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the function scatterwalk_map_and_copy which reads or
writes a chunk of data from a scatterlist at a given offset. It will
be used by authenc to read/write the authentication data at
the end of the cipher/plain text.
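For example, an AEAD implementation could use it like this to read or
write the authentication tag at the end of the ciphertext (a sketch;
cryptlen/authsize are illustrative variable names):

	#include <crypto/scatterwalk.h>

	static void copy_tag(u8 *tag, struct scatterlist *sg,
	                     unsigned int cryptlen, unsigned int authsize,
	                     int out)
	{
		/* out = 0 copies sg -> tag (read the received tag),
		 * out = 1 copies tag -> sg (append the computed tag) */
		scatterwalk_map_and_copy(tag, sg, cryptlen, authsize, out);
	}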
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The scatterwalk code is only used by algorithms that can be built as
a module. Therefore we can move it into algapi.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Since not everyone needs a queue pointer, and those who need it can
always get it from the context anyway, the queue pointer in the
common alg object is redundant.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch ensures that kernel.h and slab.h are included for
the setkey_unaligned function. It also breaks a couple of
long lines.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds support for having multiple parameters to
a template, separated by a comma. It also adds support
for integer parameters in addition to the current algorithm
parameter type.
This will be used by the authenc template which will have
four parameters: the authentication algorithm, the encryption
algorithm, the authentication size and the encryption key
length.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds crypto_aead which is the interface for AEAD
(Authenticated Encryption with Associated Data) algorithms.
AEAD algorithms perform authentication and encryption in one
step. Traditionally users (such as IPsec) would use two
different crypto algorithms to perform these. With AEAD
this comes down to one algorithm and one operation.
Of course if traditional algorithms were used we'd still
be doing two operations underneath. However, real AEAD
algorithms may allow the underlying operations to be
optimised as well.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds support for the SEED cipher (RFC 4269).
The cipher has been used by a few VPN appliance vendors in Korea for
several years, and it was verified by KISA, who developed the
algorithm itself.
Given its importance in the Korean banking industry, it would be great
if Linux incorporated support for it.
Signed-off-by: Hye-Shik Chang <perky@FreeBSD.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Other options requiring specific block cipher algorithms already have
the appropriate select's.
Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fix dma_wait_for_async_tx to not loop forever in the case where a
dependency chain is longer than two entries. This condition will not
happen with current in-kernel drivers, but fix it for future drivers.
Found-by: Saeed Bishara <saeed.bishara@gmail.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
The previous patch had the conditional inverted. This patch fixes it
so that we return the original position if it does not straddle a page.
Thanks to Bob Gilligan for spotting this.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The function blkcipher_get_spot tries to return a buffer of
the specified length that does not straddle a page. It has
an off-by-one bug so it may advance a page unnecessarily.
What's worse, one of its callers doesn't provide a buffer
that's sufficiently long for this operation.
This patch fixes both problems. Thanks to Bob Gilligan for
diagnosing this problem and providing a fix.
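For reference, the repaired helper can be sketched like this (a
reconstruction from the description above, not a verbatim quote of the
commit):

	#include <linux/mm.h>	/* PAGE_MASK */

	/* Return a position at or after start such that a buffer of
	 * length len does not straddle a page. Using start + len - 1
	 * (rather than start + len) avoids advancing a page when the
	 * buffer ends exactly on a page boundary. */
	static inline u8 *blkcipher_get_spot(u8 *start, unsigned int len)
	{
		u8 *end_page = (u8 *)(((unsigned long)(start + len - 1)) &
				      PAGE_MASK);

		return start > end_page ? start : end_page;
	}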
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>