Merge tag 'timers-vdso-2026-02-09' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull VDSO updates from Thomas Gleixner:
- Provide the missing 64-bit variant of clock_getres()
This allows the extension of CONFIG_COMPAT_32BIT_TIME to the vDSO and
finally the removal of 32-bit time types from the kernel and UAPI.
- Remove the useless and broken getcpu_cache from the VDSO
The intention was to provide a trivial way to retrieve the CPU number
from the VDSO, but as the VDSO data is per process there is no way to
make it work.
- Switch get/put_unaligned() from packed struct to memcpy()
The packed struct violates strict aliasing rules, which requires
passing -fno-strict-aliasing to the compiler. As these are scalar
values, __builtin_memcpy() turns them into simple loads and stores
(a short sketch follows this list).
- Use __typeof_unqual__() for __unqual_scalar_typeof()
The get/put_unaligned() changes triggered a new sparse warning when
__beNN types are used with get/put_unaligned() as sparse builds add a
special 'bitwise' attribute to them which prevents sparse from
evaluating the _Generic() in __unqual_scalar_typeof().
Newer sparse versions support __typeof_unqual__() which avoids the
problem, but requires a recent sparse install. So this adds a sanity
check to sparse builds, which validates that sparse is available and
capable of handling it.
- Force inline __cvdso_clock_getres_common()
Compilers sometimes un-inline aggressively, which results in function
call overhead and problems with automatic stack variable
initialization.
Interestingly enough the force inlining results in smaller code than
the un-inlined variant produced by GCC when optimizing for size.
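A minimal userspace sketch of the two approaches behind the
get/put_unaligned() switch above; the names are illustrative, and the
kernel's real get_unaligned() is a type-generic macro built on the same
idea:

  #include <stdint.h>

  /* Old style: read through a packed struct, which formally violates
   * strict aliasing and therefore needs -fno-strict-aliasing. */
  struct p32 { uint32_t v; } __attribute__((packed));

  static uint32_t get_unaligned_u32_packed(const void *p)
  {
      return ((const struct p32 *)p)->v;
  }

  /* New style: copy the scalar with __builtin_memcpy(), which the
   * compiler lowers to a plain (possibly unaligned) load. */
  static uint32_t get_unaligned_u32_memcpy(const void *p)
  {
      uint32_t v;

      __builtin_memcpy(&v, p, sizeof(v));
      return v;
  }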
* tag 'timers-vdso-2026-02-09' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
vdso/gettimeofday: Force inlining of __cvdso_clock_getres_common()
x86/percpu: Make CONFIG_USE_X86_SEG_SUPPORT work with sparse
compiler: Use __typeof_unqual__() for __unqual_scalar_typeof()
powerpc/vdso: Provide clock_getres_time64()
tools headers: Remove unneeded ignoring of warnings in unaligned.h
tools headers: Update the linux/unaligned.h copy with the kernel sources
vdso: Switch get/put_unaligned() from packed struct to memcpy()
parisc: Inline a type punning version of get_unaligned_le32()
vdso: Remove struct getcpu_cache
MIPS: vdso: Provide getres_time64() for 32-bit ABIs
arm64: vdso32: Provide clock_getres_time64()
ARM: VDSO: Provide clock_getres_time64()
ARM: VDSO: Patch out __vdso_clock_getres() if unavailable
x86/vdso: Provide clock_getres_time64() for x86-32
selftests: vDSO: vdso_test_abi: Add test for clock_getres_time64()
selftests: vDSO: vdso_test_abi: Use UAPI system call numbers
selftests: vDSO: vdso_config: Add configurations for clock_getres_time64()
vdso: Add prototype for __vdso_clock_getres_time64()
Merge tag 'irq-core-2026-02-09' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull irq core updates from Thomas Gleixner:
"Updates for the interrupt core subsystem:
- Remove the interrupt timing infrastructure
This was added seven years ago to be used for power management
purposes, but that integration never happened.
- Clean up the remaining setup_percpu_irq() users
The memory allocator is available when interrupts can be requested
so there is no need for static irq_action. Move the remaining
users to request_percpu_irq() and delete the historical cruft.
- Warn when interrupt flag inconsistencies are detected in
request*_irq().
Inconsistent flags can lead to hard-to-diagnose malfunctions. The
fallout of this new warning has been addressed in next and the
fixes are coming in via the maintainer trees and the tip
irq/cleanup pull requests.
- Invoke affinity notifier when CPU hotplug breaks affinity
Otherwise the code using the notifier misses the affinity change
and operates on stale information.
- The usual cleanups and improvements"
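For the flag-consistency warning above, a hedged sketch of the
consistent combination in a made-up driver; request_threaded_irq() and
the IRQF_ONESHOT semantics are the real genirq API, everything named
foo_* is hypothetical:

  #include <linux/interrupt.h>

  static irqreturn_t foo_hardirq(int irq, void *dev)
  {
      /* quick hard-irq work only; defer the rest to the thread */
      return IRQ_WAKE_THREAD;
  }

  static irqreturn_t foo_thread(int irq, void *dev)
  {
      /* sleepable work runs here while IRQF_ONESHOT keeps the line masked */
      return IRQ_HANDLED;
  }

  static int foo_setup_irq(unsigned int irq, void *dev)
  {
      /* consistent: IRQF_ONESHOT is paired with a threaded handler */
      return request_threaded_irq(irq, foo_hardirq, foo_thread,
                                  IRQF_ONESHOT, "foo", dev);
  }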
* tag 'irq-core-2026-02-09' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
genirq/proc: Replace snprintf with strscpy in register_handler_proc
genirq/cpuhotplug: Notify about affinity changes breaking the affinity mask
genirq: Move clear of kstat_irqs to free_desc()
genirq: Warn about using IRQF_ONESHOT without a threaded handler
irqdomain: Fix up const problem in irq_domain_set_name()
genirq: Remove setup_percpu_irq()
clocksource/drivers/mips-gic-timer: Move GIC timer to request_percpu_irq()
MIPS: Move IP27 timer to request_percpu_irq()
MIPS: Move IP30 timer to request_percpu_irq()
genirq: Remove __request_percpu_irq() helper
genirq: Remove IRQ timing tracking infrastructure
Merge tag 'locking-core-2026-02-08' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull locking updates from Ingo Molnar:
"Lock debugging:
- Implement compiler-driven static analysis locking context checking,
using the upcoming Clang 22 compiler's context analysis features
(Marco Elver)
We removed Sparse context analysis support, because prior to
removal even a defconfig kernel produced 1,700+ context tracking
Sparse warnings, the overwhelming majority of which are false
positives. On an allmodconfig kernel the number of false positive
context tracking Sparse warnings grows to over 5,200... On the plus
side of the balance, actual locking bugs found by Sparse context
analysis are also rather ... sparse: I found only 3 such commits in
the last 3 years. So the rate of false positives and the
maintenance overhead is rather high and there appears to be no
active policy in place to achieve a zero-warnings baseline to move
the annotations & fixers to developers who introduce new code.
Clang context analysis is more complete and more aggressive in
trying to find bugs, at least in principle. Plus it has a different
model to enabling it: it's enabled subsystem by subsystem, which
results in zero warnings on all relevant kernel builds (as far as
our testing managed to cover it). Which allowed us to enable it by
default, similar to other compiler warnings, with the expectation
that there are no warnings going forward. This enforces a
zero-warnings baseline on clang-22+ builds (which are still limited
in distribution, admittedly).
Hopefully the Clang approach can lead to a more maintainable
zero-warnings status quo and policy, with more and more subsystems
and drivers enabling the feature. Context tracking can be enabled
for all kernel code via WARN_CONTEXT_ANALYSIS_ALL=y (default
disabled), but this will generate a lot of false positives.
( Having said that, Sparse support could still be added back,
if anyone is interested - the removal patch is still
relatively straightforward to revert at this stage. )
Rust integration updates: (Alice Ryhl, Fujita Tomonori, Boqun Feng)
- Add support for Atomic<i8/i16/bool> and replace most Rust native
AtomicBool usages with Atomic<bool>
- Clean up LockClassKey and improve its documentation
- Add missing Send and Sync trait implementation for SetOnce
- Make ARef Unpin as it is supposed to be
- Add __rust_helper to a few Rust helpers as a preparation for
helper LTO
- Inline various lock related functions to avoid additional function
calls
WW mutexes:
- Extend ww_mutex tests and other test-ww_mutex updates (John
Stultz)
Misc fixes and cleanups:
- rcu: Mark lockdep_assert_rcu_helper() __always_inline (Arnd
Bergmann)
- locking/local_lock: Include more missing headers (Peter Zijlstra)
- seqlock: fix scoped_seqlock_read kernel-doc (Randy Dunlap)
- rust: sync: Replace `kernel::c_str!` with C-Strings (Tamir
Duberstein)"
* tag 'locking-core-2026-02-08' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (90 commits)
locking/rwlock: Fix write_trylock_irqsave() with CONFIG_INLINE_WRITE_TRYLOCK
rcu: Mark lockdep_assert_rcu_helper() __always_inline
compiler-context-analysis: Remove __assume_ctx_lock from initializers
tomoyo: Use scoped init guard
crypto: Use scoped init guard
kcov: Use scoped init guard
compiler-context-analysis: Introduce scoped init guards
cleanup: Make __DEFINE_LOCK_GUARD handle commas in initializers
seqlock: fix scoped_seqlock_read kernel-doc
tools: Update context analysis macros in compiler_types.h
rust: sync: Replace `kernel::c_str!` with C-Strings
rust: sync: Inline various lock related methods
rust: helpers: Move #define __rust_helper out of atomic.c
rust: wait: Add __rust_helper to helpers
rust: time: Add __rust_helper to helpers
rust: task: Add __rust_helper to helpers
rust: sync: Add __rust_helper to helpers
rust: refcount: Add __rust_helper to helpers
rust: rcu: Add __rust_helper to helpers
rust: processor: Add __rust_helper to helpers
...
Merge tag 'bitmap-for-6.20' of https://github.com/norov/linux
Pull bitmap updates from Yury Norov:
- more rust helpers (Alice)
- more bitops tests (Ryota)
- FIND_NTH_BIT() uninitialized variable fix (Lee Yongjun)
- random cleanups (Andy, H. Peter)
* tag 'bitmap-for-6.20' of https://github.com/norov/linux:
lib/tests: extend KUnit test for bitops with more cases
bitops: Add more files to the MAINTAINERS
lib/find_bit: fix uninitialized variable use in FIND_NTH_BIT
lib/tests: add KUnit test for bitops
rust: cpumask: add __rust_helper to helpers
rust: bitops: add __rust_helper to helpers
rust: bitmap: add __rust_helper to helpers
linux/bitfield.h: replace __auto_type with auto
Merge tag 'bpf-next-7.0' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next
Pull bpf updates from Alexei Starovoitov:
- Support associating BPF program with struct_ops (Amery Hung)
- Switch BPF local storage to rqspinlock and remove recursion detection
counters which were causing false positives (Amery Hung)
- Fix live registers marking for indirect jumps (Anton Protopopov)
- Introduce execution context detection BPF helpers (Changwoo Min)
- Improve verifier precision for 32bit sign extension pattern
(Cupertino Miranda)
- Optimize BTF type lookup by sorting vmlinux BTF and doing binary
search (Donglin Peng)
- Allow states pruning for misc/invalid slots in iterator loops (Eduard
Zingerman)
- In preparation for ASAN support in BPF arenas teach libbpf to move
global BPF variables to the end of the region and enable arena kfuncs
while holding locks (Emil Tsalapatis)
- Introduce support for implicit arguments in kfuncs and migrate a
number of them to new API. This is a prerequisite for cgroup
sub-schedulers in sched-ext (Ihor Solodrai)
- Fix incorrect copied_seq calculation in sockmap (Jiayuan Chen)
- Fix ORC stack unwind from kprobe_multi (Jiri Olsa)
- Speed up fentry attach by using single ftrace direct ops in BPF
trampolines (Jiri Olsa)
- Require frozen map for calculating map hash (KP Singh)
- Fix lock entry creation in TAS fallback in rqspinlock (Kumar
Kartikeya Dwivedi)
- Allow user space to select cpu in lookup/update operations on per-cpu
array and hash maps (Leon Hwang)
- Make kfuncs return trusted pointers by default (Matt Bobrowski)
- Introduce "fsession" support where single BPF program is executed
upon entry and exit from traced kernel function (Menglong Dong)
- Allow bpf_timer and bpf_wq use in all program types (Mykyta
Yatsenko, Andrii Nakryiko, Kumar Kartikeya Dwivedi, Alexei
Starovoitov)
- Make KF_TRUSTED_ARGS the default for all kfuncs and clean up their
definition across the tree (Puranjay Mohan)
- Allow BPF arena calls from non-sleepable context (Puranjay Mohan)
- Improve register id comparison logic in the verifier and extend
linked registers with negative offsets (Puranjay Mohan)
- In preparation for BPF-OOM introduce kfuncs to access memcg events
(Roman Gushchin)
- Use CFI compatible destructor kfunc type (Sami Tolvanen)
- Add bitwise tracking for BPF_END in the verifier (Tianci Cao)
- Add range tracking for BPF_DIV and BPF_MOD in the verifier (Yazhou
Tang)
- Make BPF selftests work with 64k page size (Yonghong Song)
* tag 'bpf-next-7.0' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (268 commits)
selftests/bpf: Fix outdated test on storage->smap
selftests/bpf: Choose another percpu variable in bpf for btf_dump test
selftests/bpf: Remove test_task_storage_map_stress_lookup
selftests/bpf: Update task_local_storage/task_storage_nodeadlock test
selftests/bpf: Update task_local_storage/recursion test
selftests/bpf: Update sk_storage_omem_uncharge test
bpf: Switch to bpf_selem_unlink_nofail in bpf_local_storage_{map_free, destroy}
bpf: Support lockless unlink when freeing map or local storage
bpf: Prepare for bpf_selem_unlink_nofail()
bpf: Remove unused percpu counter from bpf_local_storage_map_free
bpf: Remove cgroup local storage percpu counter
bpf: Remove task local storage percpu counter
bpf: Change local_storage->lock and b->lock to rqspinlock
bpf: Convert bpf_selem_unlink to failable
bpf: Convert bpf_selem_link_map to failable
bpf: Convert bpf_selem_unlink_map to failable
bpf: Select bpf_local_storage_map_bucket based on bpf_local_storage
selftests/xsk: fix number of Tx frags in invalid packet
selftests/xsk: properly handle batch ending in the middle of a packet
bpf: Prevent reentrance into call_rcu_tasks_trace()
...
Merge tag 'hardening-v7.0-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux
Pull hardening updates from Kees Cook:
"Mostly small cleanups and various scattered annotations and flex array
warning fixes that were reviewed but unlanded in other trees. Introduces
a new annotation for expanding counted_by to pointer members, now that
compiler behavior between GCC and Clang has been normalized.
- Various missed __counted_by annotations (Thorsten Blum)
- Various missed -Wflex-array-member-not-at-end fixes (Gustavo A. R.
Silva)
- Avoid leftover tempfiles for interrupted compile-time FORTIFY tests
(Nicolas Schier)
- Remove non-existent CONFIG_UBSAN_REPORT_FULL from docs (Stefan
Wiehler)
- fortify: Use C arithmetic not FIELD_xxx() in FORTIFY_REASON defines
(David Laight)
- Add __counted_by_ptr attribute, tests, and first user (Bill
Wendling, Kees Cook)
- Update MAINTAINERS file to make hardening section not include
pstore"
* tag 'hardening-v7.0-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux:
MAINTAINERS: pstore: Remove L: entry
nfp: tls: Avoid -Wflex-array-member-not-at-end warnings
carl9170: Avoid -Wflex-array-member-not-at-end warning
coredump: Use __counted_by_ptr for struct core_name::corename
lkdtm/bugs: Add __counted_by_ptr() test PTR_BOUNDS
compiler_types.h: Attributes: Add __counted_by_ptr macro
fortify: Cleanup temp file also on non-successful exit
fortify: Rename temporary file to match ignore pattern
fortify: Use C arithmetic not FIELD_xxx() in FORTIFY_REASON defines
ecryptfs: Annotate struct ecryptfs_message with __counted_by
fs/xattr: Annotate struct simple_xattr with __counted_by
crypto: af_alg - Annotate struct af_alg_iv with __counted_by
Kconfig.ubsan: Remove CONFIG_UBSAN_REPORT_FULL from documentation
drm/nouveau: fifo: Avoid -Wflex-array-member-not-at-end warning
Merge tag 'libcrypto-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux
Pull crypto library updates from Eric Biggers:
- Add support for verifying ML-DSA signatures.
ML-DSA (Module-Lattice-Based Digital Signature Algorithm) is a
recently-standardized post-quantum (quantum-resistant) signature
algorithm. It was known as Dilithium pre-standardization.
The first use case in the kernel will be module signing. But there
are also other users of RSA and ECDSA signatures in the kernel that
might want to upgrade to ML-DSA eventually.
- Improve the AES library:
- Make the AES key expansion and single block encryption and
decryption functions use the architecture-optimized AES code.
Enable these optimizations by default.
- Support preparing an AES key for encryption-only, using about
half as much memory as a bidirectional key.
- Replace the existing two generic implementations of AES with a
single one.
- Simplify how Adiantum message hashing is implemented. Remove the
"nhpoly1305" crypto_shash in favor of direct lib/crypto/ support for
NH hashing, and enable optimizations by default.
* tag 'libcrypto-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux: (53 commits)
lib/crypto: mldsa: Clarify the documentation for mldsa_verify() slightly
lib/crypto: aes: Drop 'volatile' from aes_sbox and aes_inv_sbox
lib/crypto: aes: Remove old AES en/decryption functions
lib/crypto: aesgcm: Use new AES library API
lib/crypto: aescfb: Use new AES library API
crypto: omap - Use new AES library API
crypto: inside-secure - Use new AES library API
crypto: drbg - Use new AES library API
crypto: crypto4xx - Use new AES library API
crypto: chelsio - Use new AES library API
crypto: ccp - Use new AES library API
crypto: x86/aes-gcm - Use new AES library API
crypto: arm64/ghash - Use new AES library API
crypto: arm/ghash - Use new AES library API
staging: rtl8723bs: core: Use new AES library API
net: phy: mscc: macsec: Use new AES library API
chelsio: Use new AES library API
Bluetooth: SMP: Use new AES library API
crypto: x86/aes - Remove the superseded AES-NI crypto_cipher
lib/crypto: x86/aes: Add AES-NI optimization
...
Merge tag 'for-7.0/block-stable-pages-20260206' of git://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux
Pull bounce buffer dio for stable pages from Jens Axboe:
"This adds support for bounce buffering of dio for stable pages. This
was all done by Christoph. In his words:
This series tries to address the problem that pages under I/O can be
modified during direct I/O, even when the device or file system
requires stable pages during I/O to calculate checksums, parity or data
operations. It does so by adding block layer helpers to bounce buffer
an iov_iter into a bio, then wires that up in iomap and ultimately
XFS.
The reason that the file system even needs to know about it, is
because reads need a user context to copy the data back, and the
infrastructure to defer ioends to a workqueue currently sits in XFS.
I'm going to look into moving that into ioend and enabling it for
other file systems. Additionally btrfs already has its own
infrastructure for this, and actually an urgent need to bounce buffer,
so this should be useful there and could be wired up easily. In fact
the idea comes from patches by Qu that did this in btrfs.
This patch fixes all but one xfstests failure on T10 PI capable
devices (generic/095 seems to have issues with a mix of mmap and
splice still, I'm looking into that separately), and makes qemu VMs
running Windows, or Linux with swap enabled, work fine on an XFS file
on a device using PI.
Performance numbers on my (not exactly state of the art) NVMe PI test
setup:
Sequential reads using io_uring, QD=16.
Bandwidth and CPU usage (usr/sys):
| size | zero copy | bounce |
+------+--------------------------+--------------------------+
| 4k | 1316MiB/s (12.65/55.40%) | 1081MiB/s (11.76/49.78%) |
| 64K | 3370MiB/s ( 5.46/18.20%) | 3365MiB/s ( 4.47/15.68%) |
| 1M | 3401MiB/s ( 0.76/23.05%) | 3400MiB/s ( 0.80/09.06%) |
+------+--------------------------+--------------------------+
Sequential writes using io_uring, QD=16.
Bandwidth and CPU usage (usr/sys):
| size | zero copy | bounce |
+------+--------------------------+--------------------------+
| 4k | 882MiB/s (11.83/33.88%) | 750MiB/s (10.53/34.08%) |
| 64K | 2009MiB/s ( 7.33/15.80%) | 2007MiB/s ( 7.47/24.71%) |
| 1M | 1992MiB/s ( 7.26/ 9.13%) | 1992MiB/s ( 9.21/19.11%) |
+------+--------------------------+--------------------------+
Note that the 64k read numbers look really odd to me for the baseline
zero copy case, but are reproducible over many repeated runs.
The bounce read numbers should further improve when moving the PI
validation to the file system and removing the double context switch,
which I have patches for that will be sent out soon"
* tag 'for-7.0/block-stable-pages-20260206' of git://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux:
xfs: use bounce buffering direct I/O when the device requires stable pages
iomap: add a flag to bounce buffer direct I/O
iomap: support ioends for direct reads
iomap: rename IOMAP_DIO_DIRTY to IOMAP_DIO_USER_BACKED
iomap: free the bio before completing the dio
iomap: share code between iomap_dio_bio_end_io and iomap_finish_ioend_direct
iomap: split out the per-bio logic from iomap_dio_bio_iter
iomap: simplify iomap_dio_bio_iter
iomap: fix submission side handling of completion side errors
block: add helpers to bounce buffer an iov_iter into bios
block: remove bio_release_page
iov_iter: extract a iov_iter_extract_bvecs helper from bio code
block: open code bio_add_page and fix handling of mismatching P2P ranges
block: refactor get_contig_folio_len
block: add a BIO_MAX_SIZE constant and use it
Merge tag 'linux_kselftest-kunit-6.20-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest
Pull kunit updates from Shuah Khan:
"kunit:
- add __rust_helper to helpers
- fix up const mismatch in many assert functions
- fix up const mismatch in test_list_sort
- protect KUNIT_BINARY_STR_ASSERTION against ERR_PTR values
- respect KBUILD_OUTPUT env variable by default
- add bash completion
kunit tool:
- add test for nested test result reporting
- do not overwrite test status based on subtest counts
- add 32-bit big endian ARM configuration to qemu_configs
- rename test_data_path() to _test_data_path()
- do not rely on implicit working directory change"
* tag 'linux_kselftest-kunit-6.20-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest:
kunit: add bash completion
kunit: tool: test: Don't rely on implicit working directory change
kunit: tool: test: Rename test_data_path() to _test_data_path()
kunit: qemu_configs: Add 32-bit big endian ARM configuration
kunit: tool: Don't overwrite test status based on subtest counts
kunit: tool: Add test for nested test result reporting
kunit: respect KBUILD_OUTPUT env variable by default
kunit: Protect KUNIT_BINARY_STR_ASSERTION against ERR_PTR values
test_list_sort: fix up const mismatch
kunit: fix up const mis-match in many assert functions
rust: kunit: add __rust_helper to helpers
Extend a KUnit test suite for the bitops API to cover more APIs from
include/asm-generic/bitops/instrumented-atomic.h.
- change_bit()
- test_and_set_bit()
- test_and_clear_bit()
- test_and_change_bit()
Verified on x86_64, i386, and arm64 architectures.
Sample KUnit output:
KTAP version 1
# Subtest: test_change_bit
ok 1 BITOPS_4
ok 2 BITOPS_7
ok 3 BITOPS_11
ok 4 BITOPS_31
ok 5 BITOPS_88
# test_change_bit: pass:5 fail:0 skip:0 total:5
ok 2 test_change_bit
KTAP version 1
# Subtest: test_test_and_set_bit_test_and_clear_bit
ok 1 BITOPS_4
ok 2 BITOPS_7
ok 3 BITOPS_11
ok 4 BITOPS_31
ok 5 BITOPS_88
# test_test_and_set_bit_test_and_clear_bit: pass:5 fail:0 skip:0 total:5
ok 3 test_test_and_set_bit_test_and_clear_bit
KTAP version 1
# Subtest: test_test_and_change_bit
ok 1 BITOPS_4
ok 2 BITOPS_7
ok 3 BITOPS_11
ok 4 BITOPS_31
ok 5 BITOPS_88
# test_test_and_change_bit: pass:5 fail:0 skip:0 total:5
ok 4 test_test_and_change_bit
Signed-off-by: Ryota Sakamoto <sakamo.ryota@gmail.com>
Signed-off-by: Yury Norov <ynorov@nvidia.com>
In the FIND_NTH_BIT macro, if the 'size' parameter is 0, neither the
loop condition nor the modulo condition is met. Consequently, the
'tmp' variable is still uninitialized when it is used at the 'found'
label.
This results in the following smatch errors:
lib/find_bit.c:164 __find_nth_bit() error: uninitialized symbol 'tmp'.
lib/find_bit.c:171 __find_nth_and_bit() error: uninitialized symbol 'tmp'.
lib/find_bit.c:178 __find_nth_andnot_bit() error: uninitialized symbol 'tmp'.
lib/find_bit.c:187 __find_nth_and_andnot_bit() error: uninitialized symbol 'tmp'.
Initialize 'tmp' to 0 to ensure that fns() operates on a zeroed value
(no bits set) when size is 0, preventing the use of garbage values.
[Yury: size == 0 is generally a sign of error on client side, and in
this case, any returned value is OK because the returned value would be
greater than 'size'. Applying the patch to reduce the checker noise.]
Signed-off-by: Lee Yongjun <jun85566@gmail.com>
Signed-off-by: Yury Norov <ynorov@nvidia.com>
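A simplified sketch of the pattern being fixed, using the kernel-internal
helpers that lib/find_bit.c builds on (hweight_long(), fns(),
BITMAP_LAST_WORD_MASK()); the real FIND_NTH_BIT is a macro and differs in
detail, the point is only that 'tmp' must start out zeroed:

  static unsigned long find_nth_bit_sketch(const unsigned long *addr,
                                           unsigned long size, unsigned long n)
  {
      unsigned long idx, w, tmp = 0;  /* the fix: zero-initialize tmp */

      for (idx = 0; (idx + 1) * BITS_PER_LONG <= size; idx++) {
          tmp = addr[idx];
          w = hweight_long(tmp);
          if (w > n)
              goto found;
          n -= w;
      }

      /* with size == 0, neither the loop nor this branch assigns tmp */
      if (size % BITS_PER_LONG)
          tmp = addr[idx] & BITMAP_LAST_WORD_MASK(size);
  found:
      return min(idx * BITS_PER_LONG + fns(tmp, n), size);
  }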
Add a KUnit test suite for the bitops API.
The existing 'lib/test_bitops.c' is preserved as-is because it contains
ad-hoc micro-benchmarks 'test_fns' and is intended to ensure no compiler
warnings from C=1 sparse checker or -Wextra compilations.
Introduce 'lib/tests/bitops_kunit.c' for functional regression testing. It
ports the test logic and data patterns from 'lib/test_bitops.c' to KUnit,
verifying correct behavior across various input patterns and
architecture-specific edge cases using isolated stack-allocated bitmaps.
The following test logic has been ported from test_bitops_startup() in
lib/test_bitops.c:
- set_bit() / clear_bit() / find_first_bit() validation ->
test_set_bit_clear_bit()
- get_count_order() validation -> test_get_count_order()
- get_count_order_long() validation -> test_get_count_order_long()
Also improve the find_first_bit() test to check the full bitmap length
(BITOPS_LENGTH) instead of omitting the last bit, ensuring the bitmap is
completely empty after cleanup.
Verified on x86_64, i386, and arm64 architectures.
Sample KUnit output:
KTAP version 1
# Subtest: bitops
# module: bitops_kunit
1..3
KTAP version 1
# Subtest: test_set_bit_clear_bit
ok 1 BITOPS_4
ok 2 BITOPS_7
ok 3 BITOPS_11
ok 4 BITOPS_31
ok 5 BITOPS_88
# test_set_bit_clear_bit: pass:5 fail:0 skip:0 total:5
ok 1 test_set_bit_clear_bit
KTAP version 1
# Subtest: test_get_count_order
ok 1 0x00000003
ok 2 0x00000004
ok 3 0x00001fff
ok 4 0x00002000
ok 5 0x50000000
ok 6 0x80000000
ok 7 0x80003000
# test_get_count_order: pass:7 fail:0 skip:0 total:7
ok 2 test_get_count_order
KTAP version 1
# Subtest: test_get_count_order_long
ok 1 0x0000000300000000
ok 2 0x0000000400000000
ok 3 0x00001fff00000000
ok 4 0x0000200000000000
ok 5 0x5000000000000000
ok 6 0x8000000000000000
ok 7 0x8000300000000000
# test_get_count_order_long: pass:7 fail:0 skip:0 total:7
ok 3 test_get_count_order_long
[Yury: trim Kconfig help message]
CC: Jesse Brandeburg <jesse.brandeburg@intel.com>
CC: Wei Yang <richard.weiyang@gmail.com>
Signed-off-by: Ryota Sakamoto <sakamo.ryota@gmail.com>
Signed-off-by: Yury Norov <ynorov@nvidia.com>
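A self-contained sketch of what one such case looks like with the KUnit
API; the suite and case names here are made up, the real tests live in
lib/tests/bitops_kunit.c as described above:

  #include <kunit/test.h>
  #include <linux/bitmap.h>
  #include <linux/bitops.h>

  #define SKETCH_BITS 88

  static void sketch_set_clear_bit(struct kunit *test)
  {
      DECLARE_BITMAP(bits, SKETCH_BITS);
      unsigned int i;

      bitmap_zero(bits, SKETCH_BITS);

      for (i = 0; i < SKETCH_BITS; i++)
          set_bit(i, bits);
      for (i = 0; i < SKETCH_BITS; i++)
          KUNIT_EXPECT_TRUE(test, test_bit(i, bits));

      for (i = 0; i < SKETCH_BITS; i++)
          clear_bit(i, bits);

      /* check the full bitmap length, as the commit above now does */
      KUNIT_EXPECT_EQ(test, find_first_bit(bits, SKETCH_BITS),
                      (unsigned long)SKETCH_BITS);
  }

  static struct kunit_case sketch_cases[] = {
      KUNIT_CASE(sketch_set_clear_bit),
      {}
  };

  static struct kunit_suite sketch_suite = {
      .name = "bitops_sketch",
      .test_cases = sketch_cases,
  };
  kunit_test_suite(sketch_suite);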
Fix PROCMAP_QUERY to fetch the optional build ID only after dropping the
mmap_lock or the per-VMA lock, whichever was used to lock the VMA in
question, to avoid the deadlock reported by syzbot:
-> #1 (&mm->mmap_lock){++++}-{4:4}:
__might_fault+0xed/0x170
_copy_to_iter+0x118/0x1720
copy_page_to_iter+0x12d/0x1e0
filemap_read+0x720/0x10a0
blkdev_read_iter+0x2b5/0x4e0
vfs_read+0x7f4/0xae0
ksys_read+0x12a/0x250
do_syscall_64+0xcb/0xf80
entry_SYSCALL_64_after_hwframe+0x77/0x7f
-> #0 (&sb->s_type->i_mutex_key#8){++++}-{4:4}:
__lock_acquire+0x1509/0x26d0
lock_acquire+0x185/0x340
down_read+0x98/0x490
blkdev_read_iter+0x2a7/0x4e0
__kernel_read+0x39a/0xa90
freader_fetch+0x1d5/0xa80
__build_id_parse.isra.0+0xea/0x6a0
do_procmap_query+0xd75/0x1050
procfs_procmap_ioctl+0x7a/0xb0
__x64_sys_ioctl+0x18e/0x210
do_syscall_64+0xcb/0xf80
entry_SYSCALL_64_after_hwframe+0x77/0x7f
other info that might help us debug this:
Possible unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  rlock(&mm->mmap_lock);
                               lock(&sb->s_type->i_mutex_key#8);
                               lock(&mm->mmap_lock);
  rlock(&sb->s_type->i_mutex_key#8);
 *** DEADLOCK ***
This seems to be exacerbated (as we haven't seen these syzbot reports
before that) by the recent:
777a8560fd ("lib/buildid: use __kernel_read() for sleepable context")
To make this safe, we need to grab the file refcount while the VMA is still
locked, but other than that everything is pretty straightforward. The internal
build_id_parse() API assumes a VMA is passed, but it only needs the underlying
file reference, so just add another variant, build_id_parse_file(), that
expects the file to be passed directly.
[akpm@linux-foundation.org: fix up kerneldoc]
Link: https://lkml.kernel.org/r/20260129215340.3742283-1-andrii@kernel.org
Fixes: ed5d583a88 ("fs/procfs: implement efficient VMA querying API for /proc/<pid>/maps")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Reported-by: <syzbot+4e70c8e0a2017b432f7a@syzkaller.appspotmail.com>
Reviewed-by: Suren Baghdasaryan <surenb@google.com>
Tested-by: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: Shakeel Butt <shakeel.butt@linux.dev>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Eduard Zingerman <eddyz87@gmail.com>
Cc: Hao Luo <haoluo@google.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: John Fastabend <john.fastabend@gmail.com>
Cc: KP Singh <kpsingh@kernel.org>
Cc: Martin KaFai Lau <martin.lau@linux.dev>
Cc: Song Liu <song@kernel.org>
Cc: Stanislav Fomichev <sdf@fomichev.me>
Cc: Yonghong Song <yonghong.song@linux.dev>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
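A hedged fragment showing the ordering the fix describes; get_file() and
fput() are the real refcount helpers, while 'want_build_id', the buffers
and the exact argument list of build_id_parse_file() are illustrative
assumptions:

  struct file *file = NULL;
  int err = 0;

  /* ... VMA looked up while holding mmap_lock or the per-VMA lock ... */
  if (want_build_id && vma->vm_file)
      file = get_file(vma->vm_file);    /* pin the file while still locked */

  /* ... drop mmap_lock / per-VMA lock ... */

  if (file) {
      err = build_id_parse_file(file, build_id_buf, &build_id_sz);
      fput(file);
  }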
mldsa_verify() implements ML-DSA.Verify with ctx='', so document this
more explicitly. Remove the one-liner comment above mldsa_verify()
which was somewhat misleading.
Reviewed-by: David Howells <dhowells@redhat.com>
Link: https://lore.kernel.org/r/20260202221552.174341-1-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
Remove __assume_ctx_lock() from lock initializers.
Implicitly asserting an active context during initialization caused
false-positive double-lock errors when acquiring a lock immediately after its
initialization. Moving forward, guarded member initialization must either:
1. Use guard(type_init)(&lock) or scoped_guard(type_init, ...).
2. Use context_unsafe() for simple initialization.
Reported-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/all/57062131-e79e-42c2-aa0b-8f931cb8cac2@acm.org/
Link: https://patch.msgid.link/20260119094029.1344361-7-elver@google.com
Add scoped init guard definitions for common synchronization primitives
supported by context analysis.
The scoped init guards treat the context as active within initialization
scope of the underlying context lock, given initialization implies
exclusive access to the underlying object. This allows initialization of
guarded members without disabling context analysis, while documenting
initialization from subsequent usage.
The documentation is updated with the new recommendation. Where scoped
init guards are not provided or cannot be implemented (ww_mutex omitted
for lack of multi-arg guard initializers), the alternative is to just
disable context analysis where guarded members are initialized.
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/all/20251212095943.GM3911114@noisy.programming.kicks-ass.net/
Link: https://patch.msgid.link/20260119094029.1344361-3-elver@google.com
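A hedged sketch of the intended usage based on the description above;
scoped_guard() comes from the kernel's cleanup.h infrastructure, but the
concrete '<type>_init' guard name used here is an assumption:

  #include <linux/cleanup.h>
  #include <linux/spinlock.h>
  #include <linux/types.h>

  struct counter {
      spinlock_t lock;
      u64 value;              /* guarded by @lock */
  };

  static void counter_init(struct counter *c)
  {
      spin_lock_init(&c->lock);

      /*
       * Treat the init scope as holding the freshly initialized lock, so
       * the guarded member can be written without a context_unsafe()
       * escape hatch. The guard type name below is hypothetical.
       */
      scoped_guard(spinlock_init, &c->lock)
          c->value = 0;
  }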
Massage __bio_iov_iter_get_pages so that it doesn't need the bio, and
move it to lib/iov_iter.c so that it can be used by block code for
other things than filling a bio and by other subsystems like netfs.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Reinitialize metadata for large zone device private folios in
zone_device_page_init prior to creating a higher-order zone device private
folio. This step is necessary when the folio's order changes dynamically
between zone_device_page_init calls to avoid building a corrupt folio. As
part of the metadata reinitialization, the dev_pagemap must be passed in
from the caller because the pgmap stored in the folio page may have been
overwritten with a compound head.
Without this fix, individual pages could have invalid pgmap fields and
flags (with PG_locked being notably problematic) due to prior different
order allocations, which can, and will, result in kernel crashes.
Link: https://lkml.kernel.org/r/20260116111325.1736137-2-francois.dugast@intel.com
Fixes: d245f9b4ab ("mm/zone_device: support large zone device private folios")
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
Acked-by: Felix Kuehling <felix.kuehling@amd.com>
Reviewed-by: Balbir Singh <balbirs@nvidia.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Madhavan Srinivasan <maddy@linux.ibm.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: "Christophe Leroy (CS GROUP)" <chleroy@kernel.org>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: "Christian König" <christian.koenig@amd.com>
Cc: David Airlie <airlied@gmail.com>
Cc: Simona Vetter <simona@ffwll.ch>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Maxime Ripard <mripard@kernel.org>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Danilo Krummrich <dakr@kernel.org>
Cc: David Hildenbrand <david@kernel.org>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Leon Romanovsky <leon@kernel.org>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Liam R. Howlett <Liam.Howlett@oracle.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Michal Hocko <mhocko@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Bernd has reported a lockdep splat from flexible proportions code that is
essentially complaining about the following race:
<timer fires>
run_timer_softirq - we are in softirq context
call_timer_fn
writeout_period
fprop_new_period
write_seqcount_begin(&p->sequence);
<hardirq is raised>
...
blk_mq_end_request()
blk_update_request()
ext4_end_bio()
folio_end_writeback()
__wb_writeout_add()
__fprop_add_percpu_max()
if (unlikely(max_frac < FPROP_FRAC_BASE)) {
fprop_fraction_percpu()
seq = read_seqcount_begin(&p->sequence);
- sees odd sequence so loops indefinitely
Note that a deadlock like this is only possible if the bdi has a configured
maximum fraction of writeout throughput, which is very rare in general but
frequent for example for FUSE bdis. To fix this problem we have to make
sure the write section of the sequence counter is IRQ-safe.
Link: https://lkml.kernel.org/r/20260121112729.24463-2-jack@suse.cz
Fixes: a91befde35 ("lib/flex_proportions.c: remove local_irq_ops in fprop_new_period()")
Signed-off-by: Jan Kara <jack@suse.cz>
Reported-by: Bernd Schubert <bernd@bsbernd.com>
Link: https://lore.kernel.org/all/9b845a47-9aee-43dd-99bc-1a82bea00442@bsbernd.com/
Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Joanne Koong <joannelkoong@gmail.com>
Cc: Miklos Szeredi <miklos@szeredi.hu>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
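One way to make the write section of the sequence counter IRQ-safe,
sketched as a fragment against the existing seqcount API, where p is the
struct fprop_global from the trace above; this illustrates the described
fix in fprop_new_period() rather than quoting the actual patch:

  unsigned long flags;

  /* keep hardirqs (and thus the reader in the writeback completion
   * path) away while the sequence count is odd */
  local_irq_save(flags);
  write_seqcount_begin(&p->sequence);
  /* ... age the per-cpu fractions and start the new period ... */
  write_seqcount_end(&p->sequence);
  local_irq_restore(flags);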
With CONFIG_CC_OPTIMIZE_FOR_SIZE=y, GCC may decide not to inline
__cvdso_clock_getres_common(). This introduces spurious internal
function calls in the vDSO fastpath.
Furthermore, with automatic stack variable initialization
(CONFIG_INIT_STACK_ALL_ZERO or CONFIG_INIT_STACK_ALL_PATTERN) GCC can emit
a call to memset() which is not valid in the vDSO.
Mark __cvdso_clock_getres_common() as __always_inline to avoid both issues.
Paradoxically the inlining even reduces the size of the code:
$ ./scripts/bloat-o-meter arch/powerpc/kernel/vdso/vgettimeofday-32.o.before arch/powerpc/kernel/vdso/vgettimeofday-32.o.after
add/remove: 0/1 grow/shrink: 1/1 up/down: 52/-148 (-96)
Function old new delta
__c_kernel_clock_getres_time64 92 144 +52
__c_kernel_clock_getres 136 132 -4
__cvdso_clock_getres_common 144 - -144
Total: Before=2788, After=2692, chg -3.44%
With CONFIG_CC_OPTIMIZE_FOR_PERFORMANCE=y the functions are always inlined
and therefore the behaviour stays the same.
See also the equivalent change for clock_gettime() in commit b91c8c42ff
("lib/vdso: Force inlining of __cvdso_clock_gettime_common()").
Fixes: 21bbfd7404 ("x86/vdso: Provide clock_getres_time64() for x86-32")
Fixes: 1149dcdfc9 ("ARM: VDSO: Provide clock_getres_time64()")
Fixes: f10c2e72b5 ("arm64: vdso32: Provide clock_getres_time64()")
Fixes: bec06cd6a1 ("MIPS: vdso: Provide getres_time64() for 32-bit ABIs")
Fixes: 759a1f9737 ("powerpc/vdso: Provide clock_getres_time64()")
Reported-by: Sverdlin, Alexander <alexander.sverdlin@siemens.com>
Suggested-by: Christophe Leroy (CS GROUP) <chleroy@kernel.org>
Signed-off-by: Thomas Weißschuh <thomas.weissschuh@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
Link: https://lore.kernel.org/lkml/230c749f-ebd6-4829-93ee-601d88000a45@kernel.org/
Link: https://patch.msgid.link/20260123-vdso-clock_getres-inline-v1-1-4d6203b90cd3@linutronix.de
Closes: https://lore.kernel.org/lkml/f45316f65a46da638b3c6aa69effd8980e6677b9.camel@siemens.com/
The volatile keyword is no longer necessary or useful on aes_sbox and
aes_inv_sbox, since the table prefetching is now done using a helper
function that casts to volatile itself and also includes an optimization
barrier. Since it prevents some compiler optimizations, remove it.
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20260112192035.10427-36-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
Now that all callers of the aes_encrypt() and aes_decrypt() type-generic
macros are using the new types, remove the old functions.
Then, replace the macro with direct calls to the new functions, dropping
the "_new" suffix from them.
This completes the change in the type of the key struct that is passed
to aes_encrypt() and aes_decrypt().
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20260112192035.10427-35-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
Switch from the old AES library functions (which use struct
crypto_aes_ctx) to the new ones (which use struct aes_enckey). This
eliminates the unnecessary computation and caching of the decryption
round keys. The new AES en/decryption functions are also much faster
and use AES instructions when supported by the CPU.
Note that in addition to the change in the key preparation function and
the key struct type itself, the change in the type of the key struct
results in aes_encrypt() (which is temporarily a type-generic macro)
calling the new encryption function rather than the old one.
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20260112192035.10427-34-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
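A hedged caller-side sketch of the conversion described above: struct
aes_enckey and the single-block aes_encrypt() call are taken from the
commit text, while aes_prepare_enckey() is only an assumed name for the
key-preparation helper:

  #include <crypto/aes.h>

  static int encrypt_one_block(const u8 *raw_key, const u8 *in, u8 *out)
  {
      struct aes_enckey key;      /* encryption-only: no decryption round keys */
      int err;

      /* replaces aes_expandkey() on a struct crypto_aes_ctx */
      err = aes_prepare_enckey(&key, raw_key, AES_KEYSIZE_128); /* assumed name */
      if (err)
          return err;

      aes_encrypt(&key, out, in); /* arch-optimized single-block encrypt */
      return 0;
  }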
Switch from the old AES library functions (which use struct
crypto_aes_ctx) to the new ones (which use struct aes_enckey). This
eliminates the unnecessary computation and caching of the decryption
round keys. The new AES en/decryption functions are also much faster
and use AES instructions when supported by the CPU.
Note that in addition to the change in the key preparation function and
the key struct type itself, the change in the type of the key struct
results in aes_encrypt() (which is temporarily a type-generic macro)
calling the new encryption function rather than the old one.
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20260112192035.10427-33-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
Optimize the AES library with x86 AES-NI instructions.
The relevant existing assembly functions, aesni_set_key(), aesni_enc(),
and aesni_dec(), are a bit difficult to extract into the library:
- They're coupled to the code for the AES modes.
- They operate on struct crypto_aes_ctx. The AES library now uses
different structs.
- They assume the key is 16-byte aligned. The AES library only
*prefers* 16-byte alignment; it doesn't require it.
Moreover, they're not all that great in the first place:
- They use unrolled loops, which isn't a great choice on x86.
- They use the 'aeskeygenassist' instruction, which is unnecessary, is
slow on Intel CPUs, and forces the loop to be unrolled.
- They have special code for AES-192 key expansion, despite that being
kind of useless. AES-128 and AES-256 are the ones used in practice.
These are small functions anyway.
Therefore, I opted to just write replacements of these functions for the
library. They address all the above issues.
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20260112192035.10427-18-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
Move the SPARC64 AES assembly code into lib/crypto/, wire the key
expansion and single-block en/decryption functions up to the AES library
API, and remove the "aes-sparc64" crypto_cipher algorithm.
The result is that both the AES library and crypto_cipher APIs use the
SPARC64 AES opcodes, whereas previously only crypto_cipher did (and it
wasn't enabled by default, which this commit fixes as well).
Note that some of the functions in the SPARC64 AES assembly code are
still used by the AES mode implementations in
arch/sparc/crypto/aes_glue.c. For now, just export these functions.
These exports will go away once the AES mode implementations are
migrated to the library as well. (Trying to split up the assembly file
seemed like much more trouble than it would be worth.)
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20260112192035.10427-17-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
Implement aes_preparekey_arch(), aes_encrypt_arch(), and
aes_decrypt_arch() using the CPACF AES instructions.
Then, remove the superseded "aes-s390" crypto_cipher.
The result is that both the AES library and crypto_cipher APIs use the
CPACF AES instructions, whereas previously only crypto_cipher did (and
it wasn't enabled by default, which this commit fixes as well).
Note that this preserves the optimization where the AES key is stored in
raw form rather than expanded form. CPACF just takes the raw key.
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Holger Dengler <dengler@linux.ibm.com>
Reviewed-by: Holger Dengler <dengler@linux.ibm.com>
Link: https://lore.kernel.org/r/20260112192035.10427-16-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
Prevent a "BUG: unable to handle kernel NULL pointer dereference in
filemap_read_folio".
For the sleepable context, convert freader to use __kernel_read() instead
of direct page cache access via read_cache_folio(). This simplifies the
faultable code path by using the standard kernel file reading interface
which handles all the complexity of reading file data.
At the moment we are not changing the code for non-sleepable context which
uses filemap_get_folio() and only succeeds if the target folios are
already in memory and up-to-date. The reason is to keep the patch simple
and easier to backport to stable kernels.
Syzbot repro does not crash the kernel anymore and the selftests run
successfully.
In the follow up we will make __kernel_read() with IOCB_NOWAIT work for
non-sleepable contexts. In addition, I would like to replace the
secretmem check with a more generic approach and will add fstest for the
buildid code.
Link: https://lkml.kernel.org/r/20251222205859.3968077-1-shakeel.butt@linux.dev
Fixes: ad41251c29 ("lib/buildid: implement sleepable build_id_parse() API")
Reported-by: syzbot+09b7d050e4806540153d@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=09b7d050e4806540153d
Signed-off-by: Shakeel Butt <shakeel.butt@linux.dev>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Jinchao Wang <wangjinchao600@gmail.com>
Link: https://lkml.kernel.org/r/aUteBPWPYzVWIZFH@ndev
Reviewed-by: Christian Brauner <brauner@kernel.org>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Andrii Nakryiko <andrii@kernel.org>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: "Darrick J. Wong" <djwong@kernel.org>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cross-merge BPF and other fixes after downstream PR.
No conflicts.
Adjacent:
Auto-merging MAINTAINERS
Auto-merging Makefile
Auto-merging kernel/bpf/verifier.c
Auto-merging kernel/sched/ext.c
Auto-merging mm/memcontrol.c
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Move the aes_encrypt_zvkned() and aes_decrypt_zvkned() assembly
functions into lib/crypto/, wire them up to the AES library API, and
remove the "aes-riscv64-zvkned" crypto_cipher algorithm.
To make this possible, change the prototypes of these functions to
take (rndkeys, key_len) instead of a pointer to crypto_aes_ctx, and
change the RISC-V AES-XTS code to implement tweak encryption using the
AES library instead of directly calling aes_encrypt_zvkned().
The result is that both the AES library and crypto_cipher APIs use
RISC-V's AES instructions, whereas previously only crypto_cipher did
(and it wasn't enabled by default, which this commit fixes as well).
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20260112192035.10427-15-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
Move the POWER8 AES assembly code into lib/crypto/, wire the key
expansion and single-block en/decryption functions up to the AES library
API, and remove the superseded "p8_aes" crypto_cipher algorithm.
The result is that both the AES library and crypto_cipher APIs are now
optimized for POWER8, whereas previously only crypto_cipher was (and
optimizations weren't enabled by default, which this commit fixes too).
Note that many of the functions in the POWER8 assembly code are still
used by the AES mode implementations in arch/powerpc/crypto/. For now,
just export these functions. These exports will go away once the AES
modes are migrated to the library as well. (Trying to split up the
assembly file seemed like much more trouble than it would be worth.)
Another challenge with this code is that the POWER8 assembly code uses a
custom format for the expanded AES key. Since that code is imported
from OpenSSL and is also targeted to POWER8 (rather than POWER9 which
has better data movement and byteswap instructions), that is not easily
changed. For now I've just kept the custom format. To maintain full
correctness, this requires executing some slow fallback code in the case
where the usability of VSX changes between key expansion and use. This
should be tolerable, as this case shouldn't happen in practice.
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20260112192035.10427-14-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
Move the PowerPC SPE AES assembly code into lib/crypto/, wire the key
expansion and single-block en/decryption functions up to the AES library
API, and remove the superseded "aes-ppc-spe" crypto_cipher algorithm.
The result is that both the AES library and crypto_cipher APIs are now
optimized with SPE, whereas previously only crypto_cipher was (and
optimizations weren't enabled by default, which this commit fixes too).
Note that many of the functions in the PowerPC SPE assembly code are
still used by the AES mode implementations in arch/powerpc/crypto/. For
now, just export these functions. These exports will go away once the
AES modes are migrated to the library as well. (Trying to split up the
assembly files seemed like much more trouble than it would be worth.)
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20260112192035.10427-13-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
Move the ARM64 optimized AES key expansion and single-block AES
en/decryption code into lib/crypto/, wire it up to the AES library API,
and remove the superseded crypto_cipher algorithms.
The result is that both the AES library and crypto_cipher APIs are now
optimized for ARM64, whereas previously only crypto_cipher was (and the
optimizations weren't enabled by default, which this fixes as well).
Note: to see the diff from arch/arm64/crypto/aes-ce-glue.c to
lib/crypto/arm64/aes.h, view this commit with 'git show -M10'.
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20260112192035.10427-12-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
Move the ARM optimized single-block AES en/decryption code into
lib/crypto/, wire it up to the AES library API, and remove the
superseded "aes-arm" crypto_cipher algorithm.
The result is that both the AES library and crypto_cipher APIs are now
optimized for ARM, whereas previously only crypto_cipher was (and the
optimizations weren't enabled by default, which this fixes as well).
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20260112192035.10427-11-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
The kernel's AES library currently has the following issues:
- It doesn't take advantage of the architecture-optimized AES code,
including the implementations using AES instructions.
- It's much slower than even the other software AES implementations: 2-4
times slower than "aes-generic", "aes-arm", and "aes-arm64".
- It requires that both the encryption and decryption round keys be
computed and cached. This is wasteful for users that need only the
forward (encryption) direction of the cipher: the key struct is 484
bytes when only 244 are actually needed. This missed optimization is
very common, as many AES modes (e.g. GCM, CFB, CTR, CMAC, and even the
tweak key in XTS) use the cipher only in the forward (encryption)
direction even when doing decryption.
- It doesn't provide the flexibility to customize the prepared key
format. The API is defined to do key expansion, and several callers
in drivers/crypto/ use it specifically to expand the key. This is an
issue when integrating the existing powerpc, s390, and sparc code,
which is necessary to provide full parity with the traditional API.
To resolve these issues, I'm proposing the following changes:
1. New structs 'aes_key' and 'aes_enckey' are introduced, with
corresponding functions aes_preparekey() and aes_prepareenckey().
Generally these structs will include the encryption+decryption round
keys and the encryption round keys, respectively. However, the exact
format will be under control of the architecture-specific AES code.
(The verb "prepare" is chosen over "expand" since key expansion isn't
necessarily done. It's also consistent with hmac*_preparekey().)
2. aes_encrypt() and aes_decrypt() will be changed to operate on the new
structs instead of struct crypto_aes_ctx.
3. aes_encrypt() and aes_decrypt() will use architecture-optimized code
when available, or else fall back to a new generic AES implementation
that unifies the existing two fragmented generic AES implementations.
The new generic AES implementation uses tables for both SubBytes and
MixColumns, making it almost as fast as "aes-generic". However,
instead of aes-generic's huge 8192-byte tables per direction, it uses
only 1024 bytes for encryption and 1280 bytes for decryption (similar
to "aes-arm"). The cost is just some extra rotations.
The new generic AES implementation also includes table prefetching,
giving it some "constant-time hardening". That's an improvement over
aes-generic, which has no constant-time hardening.
It does slightly regress in constant-time hardening compared to the old
lib/crypto/aes.c, which had smaller tables, and to aes-fixed-time,
which disabled IRQs on top of that. But I think this is tolerable.
The real solutions for constant-time AES are AES instructions or
bit-slicing. The table-based code remains a best-effort fallback for
the increasingly-rare case where a real solution is unavailable.
4. crypto_aes_ctx and aes_expandkey() will remain for now, but only for
callers that are using them specifically for the AES key expansion
(as opposed to en/decrypting data with the AES library).
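As a rough sketch of how the pieces from points 1-3 fit together for a
caller that only needs the forward direction (the exact prototypes of
aes_prepareenckey() and the new aes_encrypt(), and their header, are
assumptions here based on the description above):

  #include <crypto/aes.h>    /* assumed home of the new structs/functions */

  /* Encrypt one 16-byte block using an encryption-only prepared key.
   * Illustrative only; the final parameter order may differ.
   */
  static int example_encrypt_block(const u8 *raw_key, unsigned int key_len,
                                   const u8 in[AES_BLOCK_SIZE],
                                   u8 out[AES_BLOCK_SIZE])
  {
          struct aes_enckey key;
          int err;

          /* "Prepare" rather than "expand": the arch code picks the format. */
          err = aes_prepareenckey(&key, raw_key, key_len);
          if (err)
                  return err;     /* invalid key length */

          aes_encrypt(&key, out, in);
          return 0;
  }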
This commit begins the migration process by introducing the new structs
and functions, backed by the new generic AES implementation.
To allow callers to be incrementally converted, aes_encrypt() and
aes_decrypt() are temporarily changed into macros that use a _Generic
expression to call either the old functions (which take crypto_aes_ctx)
or the new functions (which take the new types). Once all callers have
been updated, these macros will go away, the old functions will be
removed, and the "_new" suffix will be dropped from the new functions.
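The transitional dispatch described above looks roughly like the
following simplified sketch (placeholder declarations included; the
actual kernel macro and helper names may differ):

  #include <linux/types.h>

  struct crypto_aes_ctx;
  struct aes_key;

  /* Placeholder names for the old and new implementations. */
  void aes_encrypt_old(const struct crypto_aes_ctx *ctx, u8 *out, const u8 *in);
  void aes_encrypt_new(const struct aes_key *key, u8 *out, const u8 *in);

  #define aes_encrypt(key, out, in)                                     \
          _Generic((key),                                               \
                   struct crypto_aes_ctx *:       aes_encrypt_old,      \
                   const struct crypto_aes_ctx *: aes_encrypt_old,      \
                   struct aes_key *:              aes_encrypt_new,      \
                   const struct aes_key *:        aes_encrypt_new       \
          )((key), (out), (in))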
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20260112192035.10427-3-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
Since ML-DSA is FIPS-approved, add the boot-time self-test which is
apparently required.
Just add a test vector manually for now, borrowed from
lib/crypto/tests/mldsa-testvecs.h (where in turn it's borrowed from
leancrypto). The SHA-* FIPS test vectors are generated by
scripts/crypto/gen-fips-testvecs.py instead, but the common Python
libraries don't support ML-DSA yet.
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: David Howells <dhowells@redhat.com>
Link: https://lore.kernel.org/r/20260107044215.109930-1-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
Since the architecture-specific implementations of NH initialize memory
in assembly code, they aren't compatible with KMSAN as-is.
Fixes: 382de740759a ("lib/crypto: nh: Add NH library")
Link: https://lore.kernel.org/r/20260105053652.1708299-1-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
Migrate the x86_64 implementations of NH into lib/crypto/. This makes
the nh() function be optimized on x86_64 kernels.
Note: this temporarily makes the adiantum template not utilize the
x86_64 optimized NH code. This is resolved in a later commit that
converts the adiantum template to use nh() instead of "nhpoly1305".
Link: https://lore.kernel.org/r/20251211011846.8179-6-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
Migrate the arm64 NEON implementation of NH into lib/crypto/. This
makes the nh() function be optimized on arm64 kernels.
Note: this temporarily makes the adiantum template not utilize the arm64
optimized NH code. This is resolved in a later commit that converts the
adiantum template to use nh() instead of "nhpoly1305".
Link: https://lore.kernel.org/r/20251211011846.8179-5-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
Migrate the arm32 NEON implementation of NH into lib/crypto/. This
makes the nh() function be optimized on arm32 kernels.
Note: this temporarily makes the adiantum template not utilize the arm32
optimized NH code. This is resolved in a later commit that converts the
adiantum template to use nh() instead of "nhpoly1305".
Link: https://lore.kernel.org/r/20251211011846.8179-4-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
Add some simple KUnit tests for the nh() function.
These replace the test coverage which will be lost by removing the
nhpoly1305 crypto_shash.
Note that the NH code also continues to be tested indirectly as well,
via the tests for the "adiantum(xchacha12,aes)" crypto_skcipher.
Link: https://lore.kernel.org/r/20251211011846.8179-3-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
Add support for the NH "almost-universal hash function" to lib/crypto/,
specifically the variant of NH used in Adiantum.
This will replace the need for the "nhpoly1305" crypto_shash algorithm.
All the implementations of "nhpoly1305" use architecture-optimized code
only for the NH stage; they just use the generic C Poly1305 code for the
Poly1305 stage. We can achieve the same result in a simpler way using
an (architecture-optimized) nh() function combined with code in
crypto/adiantum.c that passes the results to the Poly1305 library.
This commit begins this cleanup by adding the nh() function. The code
is derived from crypto/nhpoly1305.c and include/crypto/nhpoly1305.h.
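Conceptually, the composition that replaces "nhpoly1305" looks like the
sketch below. Only the Poly1305 calls are the real library API; the
nh() prototype, constants, and (omitted) final-block padding are
assumptions for illustration:

  #include <crypto/poly1305.h>
  #include <linux/types.h>

  #define NH_MESSAGE_UNIT 1024    /* bytes hashed per nh() call (assumed) */
  #define NH_HASH_WORDS   4       /* 64-bit words per NH hash (assumed) */

  /* Assumed prototype for the new library function. */
  void nh(const u32 *key, const u8 *message, size_t message_len,
          __le64 hash[NH_HASH_WORDS]);

  /* NH-then-Poly1305, as the "nhpoly1305" shash did internally. */
  static void nh_poly1305_sketch(const u32 *nh_key,
                                 const u8 poly_key[POLY1305_KEY_SIZE],
                                 const u8 *msg, size_t len,
                                 u8 digest[POLY1305_DIGEST_SIZE])
  {
          struct poly1305_desc_ctx poly;
          __le64 hash[NH_HASH_WORDS];

          poly1305_init(&poly, poly_key);
          while (len) {
                  size_t n = len < NH_MESSAGE_UNIT ? len : NH_MESSAGE_UNIT;

                  nh(nh_key, msg, n, hash);
                  poly1305_update(&poly, (const u8 *)hash, sizeof(hash));
                  msg += n;
                  len -= n;
          }
          poly1305_final(&poly, digest);
  }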
Link: https://lore.kernel.org/r/20251211011846.8179-2-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
Add a KUnit test suite for ML-DSA verification, including the following
for each ML-DSA parameter set (ML-DSA-44, ML-DSA-65, and ML-DSA-87):
- Positive test (valid signature), using vector imported from leancrypto
- Various negative tests:
- Wrong length for signature, message, or public key
- Out-of-range coefficients in z vector
- Invalid encoded hint vector
- Any bit flipped in signature, message, or public key
- Unit test for the internal function use_hint()
- A benchmark
ML-DSA inputs and outputs are very large. To keep the size of the tests
down, use just one valid test vector per parameter set, and generate the
negative tests at runtime by mutating the valid test vector.
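The mutation step itself is simple; a minimal sketch (helper name
illustrative) of deriving one negative input from the valid vector:

  #include <kunit/test.h>
  #include <linux/slab.h>
  #include <linux/string.h>

  /* Copy a valid buffer and flip one bit; the test then feeds the mutated
   * copy to mldsa_verify() in place of the original and expects rejection.
   */
  static void *copy_with_bitflip(struct kunit *test, const void *orig,
                                 size_t len, size_t bit)
  {
          u8 *buf = kunit_kmalloc(test, len, GFP_KERNEL);

          if (!buf)
                  return NULL;
          memcpy(buf, orig, len);
          buf[bit / 8] ^= 1 << (bit % 8);
          return buf;
  }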
I also considered importing the test vectors from Wycheproof. I've
tested that mldsa_verify() indeed passes all of Wycheproof's ML-DSA test
vectors that use an empty context string. However, importing these
permanently would add over 6 MB of source. That's too much to be a
reasonable addition to the Linux kernel tree for one algorithm. It also
wouldn't actually provide much better test coverage than this commit.
Another potential issue is that Wycheproof uses the Apache license.
Similarly, this also differs from the earlier proposal to import a long
list of test vectors from leancrypto. I retained only one valid
signature for each algorithm, and I also added (runtime-generated)
negative tests which were missing. I think this is a better tradeoff.
Reviewed-by: David Howells <dhowells@redhat.com>
Tested-by: David Howells <dhowells@redhat.com>
Link: https://lore.kernel.org/r/20251214181712.29132-3-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
Add support for verifying ML-DSA signatures.
ML-DSA (Module-Lattice-Based Digital Signature Algorithm) is specified
in FIPS 204 and is the standard version of Dilithium. Unlike RSA and
elliptic-curve cryptography, ML-DSA is believed to be secure even
against adversaries in possession of a large-scale quantum computer.
Compared to the earlier patch
(https://lore.kernel.org/r/20251117145606.2155773-3-dhowells@redhat.com/)
that was based on "leancrypto", this implementation:
- Is about 700 lines of source code instead of 4800.
- Generates about 4 KB of object code instead of 28 KB.
- Uses 9-13 KB of memory to verify a signature instead of 31-84 KB.
- Is at least about the same speed, with a microbenchmark showing 3-5%
improvements on one x86_64 CPU and -1% to 1% changes on another.
When memory is a bottleneck, it's likely much faster.
- Correctly implements the RejNTTPoly step of the algorithm.
The API just consists of a single function mldsa_verify(), supporting
pure ML-DSA with any standard parameter set (ML-DSA-44, ML-DSA-65, or
ML-DSA-87) as selected by an enum. That's all that's actually needed.
The following four potential features are unneeded and aren't included.
However, any that ever become needed could fairly easily be added later,
as they only affect how the message representative mu is calculated:
- Nonempty context strings
- Incremental message hashing
- HashML-DSA
- External mu
Signing support would, of course, be a larger and more complex addition.
However, the kernel doesn't, and shouldn't, need ML-DSA signing support.
Note that mldsa_verify() allocates memory, so it can sleep and can fail
with ENOMEM. Unfortunately we don't have much choice about that, since
ML-DSA needs a lot of memory. At least callers have to check for errors
anyway, since the signature could be invalid.
Note that verification doesn't require constant-time code, and in fact
some steps are inherently variable-time. I've used constant-time
patterns in some places anyway, but technically they're not needed.
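A caller-side sketch follows; the prototype and enum constant names
below are assumptions for illustration and may differ from the real
declaration:

  #include <linux/printk.h>
  #include <linux/types.h>

  enum mldsa_param_set { MLDSA_44, MLDSA_65, MLDSA_87 };  /* assumed names */

  /* Assumed prototype: returns 0 if the signature verifies, or a negative
   * error (e.g. -ENOMEM or a bad-signature error) otherwise.  May sleep.
   */
  int mldsa_verify(enum mldsa_param_set params,
                   const u8 *pub_key, size_t pub_key_len,
                   const u8 *msg, size_t msg_len,
                   const u8 *sig, size_t sig_len);

  static int check_sig_sketch(const u8 *key, size_t key_len,
                              const u8 *msg, size_t msg_len,
                              const u8 *sig, size_t sig_len)
  {
          int err = mldsa_verify(MLDSA_87, key, key_len, msg, msg_len,
                                 sig, sig_len);

          if (err)        /* both -ENOMEM and invalid signatures land here */
                  pr_warn("ML-DSA signature rejected: %d\n", err);
          return err;
  }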
Reviewed-by: David Howells <dhowells@redhat.com>
Tested-by: David Howells <dhowells@redhat.com>
Link: https://lore.kernel.org/r/20251214181712.29132-2-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
- A couple more fixes for the lib/crypto KUnit tests
- Fix missing MMU protection for the AES S-box
-----BEGIN PGP SIGNATURE-----
iIoEABYIADIWIQSacvsUNc7UX4ntmEPzXCl4vpKOKwUCaWP6dBQcZWJpZ2dlcnNA
a2VybmVsLm9yZwAKCRDzXCl4vpKOK89sAQCLiV7LQdtHp7IJ21JhdrxPLSfhiUgp
HeSwwFylciRfggD+Ml19H/Pa3kJYbkQChL44AizlartCaiyLuiEfeTF3NQE=
=dYlq
-----END PGP SIGNATURE-----
Merge tag 'libcrypto-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux
Pull crypto library fixes from Eric Biggers:
- A couple more fixes for the lib/crypto KUnit tests
- Fix missing MMU protection for the AES S-box
* tag 'libcrypto-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux:
lib/crypto: aes: Fix missing MMU protection for AES S-box
MAINTAINERS: add test vector generation scripts to "CRYPTO LIBRARY"
lib/crypto: tests: Fix syntax error for old python versions
lib/crypto: tests: polyval_kunit: Increase iterations for preparekey in IRQs
In a vain attempt to consolidate the email zoo, switch everything to the
kernel.org account.
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>