lib/crypto: blake2b: Roll up BLAKE2b round loop on 32-bit

BLAKE2b has a state of 16 64-bit words.  Add in the 16 message words,
and there are 32 64-bit words live during compression, far more than a
32-bit machine has registers for.  With the current code, where all the
rounds are unrolled to enable constant-folding of the blake2b_sigma
values, this results in very large code on 32-bit kernels, including a
recurring issue where gcc uses a large amount of stack.
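
To make the register-pressure point concrete, here is a rough sketch of
the compression function's working set (illustrative only; the kernel's
blake2b_compress_generic() uses u64 arrays along these lines):

	u64 m[16];	/* message block: 16 x 64-bit words */
	u64 v[16];	/* working state: 16 x 64-bit words */

	/*
	 * 32 live u64 values is 64 registers' worth on a 32-bit CPU,
	 * so the compiler must spill to the stack; fully unrolled
	 * rounds multiply the resulting code size.
	 */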

There's just not much benefit to this unrolling when the code is already
so large.  Let's roll up the rounds when !CONFIG_64BIT.

To avoid having to duplicate the code, just write the code once using a
loop, and conditionally use 'unrolled_full' from <linux/unroll.h>.
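
For context, 'unrolled_full' is a per-loop annotation placed directly
before the loop statement.  A paraphrased sketch of how <linux/unroll.h>
defines it (from memory, not verbatim):

	#ifdef CONFIG_CC_IS_CLANG
	#define __pick_unrolled(x, y)	_Pragma(#x)
	#else
	#define __pick_unrolled(x, y)	_Pragma(#y)
	#endif

	#define unrolled_full \
		__pick_unrolled(clang loop unroll(full), GCC unroll 65534)

So CONFIG_64BIT kernels get a full-unroll pragma on the rounds loop,
while !CONFIG_64BIT kernels fall back to the compiler's default
unrolling heuristics.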

Then, fold the now-unneeded ROUND() macro into the loop.  Finally, also
remove the now-unneeded override of the stack frame size warning.

Code size improvements for blake2b_compress_generic():

                  Size before (bytes)    Size after (bytes)
                  -------------------    ------------------
    i386, gcc           27584                 3632
    i386, clang         18208                 3248
    arm32, gcc          19912                 2860
    arm32, clang        21336                 3344

Running the BLAKE2b benchmark on a !CONFIG_64BIT kernel on an x86_64
processor shows a 16384-byte throughput change of 351 => 340 MB/s (gcc)
or 442 => 375 MB/s (clang), so not much of a slowdown.  Note that this
microbenchmark also effectively disregards cache usage, which is
important in practice and is far better with the smaller code.

Note: If we rolled up the loop on x86_64 too, the change would be
7024 bytes => 1584 bytes and 1960 MB/s => 1396 MB/s (gcc), or
6848 bytes => 1696 bytes and 1920 MB/s => 1263 MB/s (clang).
Maybe still worth it, though not quite as clearly beneficial.

Fixes: 91d689337f ("crypto: blake2b - add blake2b generic implementation")
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20251205050330.89704-1-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
Eric Biggers 2025-12-04 21:03:30 -08:00
parent 1cd5bb6e9e
commit 2e8f7b170a
2 changed files with 20 additions and 25 deletions

--- a/lib/crypto/Makefile
+++ b/lib/crypto/Makefile
@@ -33,7 +33,6 @@ obj-$(CONFIG_CRYPTO_LIB_GF128MUL) += gf128mul.o
 obj-$(CONFIG_CRYPTO_LIB_BLAKE2B) += libblake2b.o
 libblake2b-y := blake2b.o
-CFLAGS_blake2b.o := -Wframe-larger-than=4096 # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105930
 ifeq ($(CONFIG_CRYPTO_LIB_BLAKE2B_ARCH),y)
 CFLAGS_blake2b.o += -I$(src)/$(SRCARCH)
 libblake2b-$(CONFIG_ARM) += arm/blake2b-neon-core.o

--- a/lib/crypto/blake2b.c
+++ b/lib/crypto/blake2b.c
@@ -14,6 +14,7 @@
 #include <linux/kernel.h>
 #include <linux/module.h>
 #include <linux/string.h>
+#include <linux/unroll.h>
 #include <linux/types.h>
 
 static const u8 blake2b_sigma[12][16] = {
@@ -73,31 +74,26 @@ blake2b_compress_generic(struct blake2b_ctx *ctx,
 		b = ror64(b ^ c, 63); \
 	} while (0)
 
-#define ROUND(r) do { \
-	G(r, 0, v[0], v[ 4], v[ 8], v[12]); \
-	G(r, 1, v[1], v[ 5], v[ 9], v[13]); \
-	G(r, 2, v[2], v[ 6], v[10], v[14]); \
-	G(r, 3, v[3], v[ 7], v[11], v[15]); \
-	G(r, 4, v[0], v[ 5], v[10], v[15]); \
-	G(r, 5, v[1], v[ 6], v[11], v[12]); \
-	G(r, 6, v[2], v[ 7], v[ 8], v[13]); \
-	G(r, 7, v[3], v[ 4], v[ 9], v[14]); \
-} while (0)
-
-	ROUND(0);
-	ROUND(1);
-	ROUND(2);
-	ROUND(3);
-	ROUND(4);
-	ROUND(5);
-	ROUND(6);
-	ROUND(7);
-	ROUND(8);
-	ROUND(9);
-	ROUND(10);
-	ROUND(11);
+#ifdef CONFIG_64BIT
+	/*
+	 * Unroll the rounds loop to enable constant-folding of the
+	 * blake2b_sigma values. Seems worthwhile on 64-bit kernels.
+	 * Not worthwhile on 32-bit kernels because the code size is
+	 * already so large there due to BLAKE2b using 64-bit words.
+	 */
+	unrolled_full
+#endif
+	for (int r = 0; r < 12; r++) {
+		G(r, 0, v[0], v[4], v[8], v[12]);
+		G(r, 1, v[1], v[5], v[9], v[13]);
+		G(r, 2, v[2], v[6], v[10], v[14]);
+		G(r, 3, v[3], v[7], v[11], v[15]);
+		G(r, 4, v[0], v[5], v[10], v[15]);
+		G(r, 5, v[1], v[6], v[11], v[12]);
+		G(r, 6, v[2], v[7], v[8], v[13]);
+		G(r, 7, v[3], v[4], v[9], v[14]);
+	}
 #undef G
-#undef ROUND
 
 	for (i = 0; i < 8; ++i)
 		ctx->h[i] ^= v[i] ^ v[i + 8];