Merge tag 'sched_ext-for-6.17-rc7-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/sched_ext
Pull sched_ext fix from Tejun Heo:
"This contains a fix for sched_ext idle CPU selection that likely fixes
a substantial performance regression.
The scx_bpf_select_cpu_dfl/and() kfuncs were incorrectly detecting all
tasks as migration-disabled when called outside ops.select_cpu(),
causing them to always return -EBUSY instead of finding idle CPUs.
The fix properly distinguishes between genuinely migration-disabled
tasks vs. the current task whose migration is temporarily disabled by
BPF execution"
* tag 'sched_ext-for-6.17-rc7-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/sched_ext:
sched_ext: idle: Handle migration-disabled tasks in BPF code
When scx_bpf_select_cpu_dfl()/and() kfuncs are invoked outside of
ops.select_cpu() we can't rely on @p->migration_disabled to determine if
migration is disabled for the task @p.
In fact, migration is always disabled for the current task while running
BPF code: __bpf_prog_enter() disables migration and __bpf_prog_exit()
re-enables it.
To handle this, when @p->migration_disabled == 1, check whether @p is
the current task. If so, migration was not disabled before entering the
callback, otherwise migration was disabled.
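A minimal sketch of that check (illustrative only; the helper name and
exact predicate are assumptions, not the literal diff):
	/* Hypothetical helper mirroring the described logic. */
	static bool scx_task_migration_disabled(const struct task_struct *p)
	{
		/*
		 * BPF execution disables migration for the current task
		 * (__bpf_prog_enter()), so a count of 1 on @p == current
		 * means migration was enabled before entering the callback.
		 */
		if (p == current)
			return p->migration_disabled > 1;
		return p->migration_disabled != 0;
	}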
This ensures correct idle CPU selection in all cases. The behavior of
ops.select_cpu() remains unchanged, because this callback is never
invoked for the current task and migration-disabled tasks are always
excluded.
Example: without this change scx_bpf_select_cpu_and() called from
ops.enqueue() always returns -EBUSY; with this change applied, it
correctly returns idle CPUs.
Fixes: 06efc9fe0b ("sched_ext: idle: Handle migration-disabled tasks in idle selection")
Cc: stable@vger.kernel.org # v6.16+
Signed-off-by: Andrea Righi <arighi@nvidia.com>
Acked-by: Changwoo Min <changwoo@igalia.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Merge tag 'trace-rv-v6.17-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace
Pull runtime verifier fixes from Steven Rostedt:
- Fix build in some RISC-V flavours
Some system calls are only available on 64-bit RISC-V machines.
#ifdef out the cases of clock_nanosleep and futex in the sleep
monitor if they are not supported by the architecture.
- Fix wrong cast, obsolete after refactoring
Use container_of() to get to the rv_monitor structure from the
enable_monitors_next() 'p' pointer. The assignment worked only
because the list field used happened to be the first field of the
structure.
- Remove redundant include files
Some include files were listed twice. Remove the extra ones and sort
the includes.
- Fix missing unlock on failure
There was an error path that exited the rv_register_monitor()
function without releasing a lock. Change that to goto the lock
release.
- Add Gabriele Monaco as Runtime Verification maintainer
Gabriele is doing most of the work on RV as well as collecting
patches. Add him to the maintainers file for Runtime Verification.
* tag 'trace-rv-v6.17-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace:
rv: Add Gabriele Monaco as maintainer for Runtime Verification
rv: Fix missing mutex unlock in rv_register_monitor()
include/linux/rv.h: remove redundant include file
rv: Fix wrong type cast in enabled_monitors_next()
rv: Support systems with time64-only syscalls
Merge tag 'cgroup-for-6.17-rc6-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup
Pull cgroup fixes from Tejun Heo:
"This contains two cgroup changes. Both are pretty low risk.
- Fix deadlock in cgroup destruction when repeatedly
mounting/unmounting perf_event and net_prio controllers.
The issue occurs because cgroup_destroy_wq has max_active=1, causing
root destruction to wait for CSS offline operations that are queued
behind it.
The fix splits cgroup_destroy_wq into three separate workqueues to
eliminate the blocking.
- Set of->priv to NULL upon file release to make potential bugs
manifest as NULL pointer dereferences rather than use-after-free
errors"
* tag 'cgroup-for-6.17-rc6-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup:
cgroup/psi: Set of->priv to NULL upon file release
cgroup: split cgroup_destroy_wq into 3 workqueues
While collecting SCX related fields in struct task_group into struct
scx_task_group, 6e6558a6bc ("sched_ext, sched/core: Factor out struct
scx_task_group") forgot to update the tg->scx_weight usage in tg_weight(),
which leads to a build failure when CONFIG_FAIR_GROUP_SCHED is disabled but
CONFIG_EXT_GROUP_SCHED is enabled. Fix it.
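A sketch of the fixed helper (the tg->scx.weight field path is an
assumption based on the factor-out described above):
	static unsigned long tg_weight(struct task_group *tg)
	{
	#ifdef CONFIG_FAIR_GROUP_SCHED
		return scale_load_down(tg->shares);
	#else
		/* field moved into struct scx_task_group by 6e6558a6bc */
		return tg->scx.weight;
	#endif
	}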
Fixes: 6e6558a6bc ("sched_ext, sched/core: Factor out struct scx_task_group")
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202509170230.MwZsJSWa-lkp@intel.com/
Tested-by: Andrea Righi <arighi@nvidia.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
scx_bpf_reenqueue_local() can be called from ops.cpu_release() when a
CPU is taken by a higher scheduling class to give tasks queued to the
CPU's local DSQ a chance to be migrated somewhere else, instead of
waiting indefinitely for that CPU to become available again.
In doing so, we decided to skip migration-disabled tasks, under the
assumption that they cannot be migrated anyway.
However, when a higher scheduling class preempts a CPU, the running task
is always inserted at the head of the local DSQ as a migration-disabled
task. This means it is always skipped by scx_bpf_reenqueue_local(), and
ends up being confined to the same CPU even if that CPU is heavily
contended by other higher scheduling class tasks.
As an example, let's consider the following scenario:
$ schedtool -a 0,1, -e yes > /dev/null
$ sudo schedtool -F -p 99 -a 0, -e \
stress-ng -c 1 --cpu-load 99 --cpu-load-slice 1000
The first task (SCHED_EXT) can run on CPU0 or CPU1. The second task
(SCHED_FIFO) is pinned to CPU0 and consumes ~99% of it. If the SCHED_EXT
task initially runs on CPU0, it will remain there because it always sees
CPU0 as "idle" in the short gaps left by the RT task, resulting in ~1%
utilization while CPU1 stays idle:
0[||||||||||||||||||||||100.0%] 8[ 0.0%]
1[ 0.0%] 9[ 0.0%]
2[ 0.0%] 10[ 0.0%]
3[ 0.0%] 11[ 0.0%]
4[ 0.0%] 12[ 0.0%]
5[ 0.0%] 13[ 0.0%]
6[ 0.0%] 14[ 0.0%]
7[ 0.0%] 15[ 0.0%]
PID USER PRI NI S CPU CPU%▽MEM% TIME+ Command
1067 root RT 0 R 0 99.0 0.2 0:31.16 stress-ng-cpu [run]
975 arighi 20 0 R 0 1.0 0.0 0:26.32 yes
By allowing scx_bpf_reenqueue_local() to re-enqueue migration-disabled
tasks, the scheduler can choose to migrate them to other CPUs (CPU1 in
this case) via ops.enqueue(), leading to better CPU utilization:
0[||||||||||||||||||||||100.0%] 8[ 0.0%]
1[||||||||||||||||||||||100.0%] 9[ 0.0%]
2[ 0.0%] 10[ 0.0%]
3[ 0.0%] 11[ 0.0%]
4[ 0.0%] 12[ 0.0%]
5[ 0.0%] 13[ 0.0%]
6[ 0.0%] 14[ 0.0%]
7[ 0.0%] 15[ 0.0%]
PID USER PRI NI S CPU CPU%▽MEM% TIME+ Command
577 root RT 0 R 0 100.0 0.2 0:23.17 stress-ng-cpu [run]
555 arighi 20 0 R 1 100.0 0.0 0:28.67 yes
It's debatable whether per-CPU tasks should be re-enqueued as well, but
doing so is probably safer: the scheduler can recognize re-enqueued
tasks through the %SCX_ENQ_REENQ flag, reassess their placement, and
either put them back at the head of the local DSQ or let another task
attempt to take the CPU.
This also prevents giving per-CPU tasks an implicit priority boost,
which would otherwise make them more likely to reclaim CPUs preempted by
higher scheduling classes.
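For illustration, a BPF scheduler could react to re-enqueued tasks
roughly as below (a hedged sketch; the callback name and dispatch
target are assumptions):
	void BPF_STRUCT_OPS(example_enqueue, struct task_struct *p, u64 enq_flags)
	{
		if (enq_flags & SCX_ENQ_REENQ) {
			/*
			 * Task was kicked off a CPU preempted by a higher
			 * scheduling class: reassess placement instead of
			 * putting it straight back on the same CPU.
			 */
		}
		scx_bpf_dsq_insert(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags);
	}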
Fixes: 97e13ecb02 ("sched_ext: Skip per-CPU tasks in scx_bpf_reenqueue_local()")
Cc: stable@vger.kernel.org # v6.15+
Signed-off-by: Andrea Righi <arighi@nvidia.com>
Acked-by: Changwoo Min <changwoo@igalia.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
If create_monitor_dir() fails, the function returns directly without
releasing rv_interface_lock. This leaves the mutex locked and causes
subsequent monitor registration attempts to deadlock.
Fix it by making the error path jump to out_unlock, ensuring that the
mutex is always released before returning.
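The shape of the fix, sketched (surrounding code abbreviated):
	mutex_lock(&rv_interface_lock);
	...
	retval = create_monitor_dir(monitor);
	if (retval)
		goto out_unlock;	/* was: return retval; leaking the lock */
	...
out_unlock:
	mutex_unlock(&rv_interface_lock);
	return retval;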
Fixes: 24cbfe18d5 ("rv: Merge struct rv_monitor_def into struct rv_monitor")
Signed-off-by: Zhen Ni <zhen.ni@easystack.cn>
Reviewed-by: Gabriele Monaco <gmonaco@redhat.com>
Reviewed-by: Nam Cao <namcao@linutronix.de>
Link: https://lore.kernel.org/r/20250903065112.1878330-1-zhen.ni@easystack.cn
Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
Argument 'p' of enabled_monitors_next() is not a pointer to struct
rv_monitor, it is actually a pointer to the list_head inside struct
rv_monitor. Therefore it is wrong to cast 'p' to struct rv_monitor *.
This wrong type cast has been there since the beginning. But it still
worked because the list_head was the first field in struct rv_monitor_def.
This is no longer true since commit 24cbfe18d5 ("rv: Merge struct
rv_monitor_def into struct rv_monitor") moved the list_head, and this wrong
type cast became a functional problem.
Properly use container_of() instead.
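Sketch of the change (the list_head field name 'list' is an assumption):
	/* before: 'p' actually points at the embedded list_head */
	struct rv_monitor *mon = (struct rv_monitor *)p;

	/* after: recover the containing structure from the list_head */
	struct rv_monitor *mon = container_of(p, struct rv_monitor, list);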
Fixes: 24cbfe18d5 ("rv: Merge struct rv_monitor_def into struct rv_monitor")
Signed-off-by: Nam Cao <namcao@linutronix.de>
Reviewed-by: Gabriele Monaco <gmonaco@redhat.com>
Link: https://lore.kernel.org/r/20250806120911.989365-1-namcao@linutronix.de
Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
Merge tag 'timers-urgent-2025-09-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull timer fix from Ingo Molnar:
"Fix a lost-timeout CPU hotplug bug in the hrtimer code, which can
trigger with certain hardware configs and regular HZ"
* tag 'timers-urgent-2025-09-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
hrtimers: Unconditionally update target CPU base after offline timer migration
Merge tag 'pm-6.17-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull power management fixes from Rafael Wysocki:
"These fix a nasty hibernation regression introduced during the 6.16
cycle, an issue related to energy model management occurring on Intel
hybrid systems where some CPUs are offline to start with, and two
regressions in the amd-pstate driver:
- Restore a pm_restrict_gfp_mask() call in hibernation_snapshot()
that was removed incorrectly during the 6.16 development cycle
(Rafael Wysocki)
- Introduce a function for registering a perf domain without
triggering a system-wide CPU capacity update and make the
intel_pstate driver use it to avoid recurring unsuccessful
attempts to update capacities of all CPUs in the system (Rafael
Wysocki)
- Fix setting of CPPC.min_perf in the active mode with performance
governor in the amd-pstate driver to restore its expected behavior
changed recently (Gautham Shenoy)
- Avoid mistakenly setting EPP to 0 in the amd-pstate driver after
system resume as a result of recent code changes (Mario
Limonciello)"
* tag 'pm-6.17-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
PM: hibernate: Restrict GFP mask in hibernation_snapshot()
PM: EM: Add function for registering a PD without capacity update
cpufreq/amd-pstate: Fix a regression leading to EPP 0 after resume
cpufreq/amd-pstate: Fix setting of CPPC.min_perf in active mode for performance governor
Merge tag 'bpf-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf
Pull bpf fixes from Alexei Starovoitov:
"A number of fixes accumulated due to summer vacations
- Fix out-of-bounds dynptr write in bpf_crypto_crypt() kfunc which
was misidentified as a security issue (Daniel Borkmann)
- Update the list of BPF selftests maintainers (Eduard Zingerman)
- Fix selftests warnings with icecc compiler (Ilya Leoshkevich)
- Disable XDP/cpumap direct return optimization (Jesper Dangaard
Brouer)
- Fix unexpected get_helper_proto() result in unusual configuration
BPF_SYSCALL=y and BPF_EVENTS=n (Jiri Olsa)
- Allow fallback to interpreter when JIT support is limited (KaFai
Wan)
- Fix rqspinlock and choose trylock fallback for NMI waiters. Pick
the simplest fix. A more involved fix is targeted at bpf-next (Kumar
Kartikeya Dwivedi)
- Fix cleanup when tcp_bpf_send_verdict() fails to allocate
psock->cork (Kuniyuki Iwashima)
- Disallow bpf_timer in PREEMPT_RT for now. Proper solution is being
discussed for bpf-next. (Leon Hwang)
- Fix XSK cq descriptor production (Maciej Fijalkowski)
- Tell memcg to use allow_spinning=false path in bpf_timer_init() to
avoid lockup in cgroup_file_notify() (Peilin Ye)
- Fix bpf_strnstr() to handle suffix match cases (Rong Tao)"
* tag 'bpf-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf:
selftests/bpf: Skip timer cases when bpf_timer is not supported
bpf: Reject bpf_timer for PREEMPT_RT
tcp_bpf: Call sk_msg_free() when tcp_bpf_send_verdict() fails to allocate psock->cork.
bpf: Tell memcg to use allow_spinning=false path in bpf_timer_init()
bpf: Allow fall back to interpreter for programs with stack size <= 512
rqspinlock: Choose trylock fallback for NMI waiters
xsk: Fix immature cq descriptor production
bpf: Update the list of BPF selftests maintainers
selftests/bpf: Add tests for bpf_strnstr
selftests/bpf: Fix "expression result unused" warnings with icecc
bpf: Fix bpf_strnstr() to handle suffix match cases better
selftests/bpf: Extend crypto_sanity selftest with invalid dst buffer
bpf: Fix out-of-bounds dynptr write in bpf_crypto_crypt
bpf: Check the helper function is valid in get_helper_proto
bpf, cpumap: Disable page_pool direct xdp_return need larger scope
Merge a hibernation regression fix and a fix related to energy model
management for 6.17-rc6
* pm-sleep:
PM: hibernate: Restrict GFP mask in hibernation_snapshot()
* pm-em:
PM: EM: Add function for registering a PD without capacity update
When CONFIG_PREEMPT_RT is enabled, the kernel warns when running the
timer selftests via './test_progs -t timer':
BUG: sleeping function called from invalid context at kernel/locking/spinlock_rt.c:48
In order to avoid such warning, reject bpf_timer in verifier when
PREEMPT_RT is enabled.
Signed-off-by: Leon Hwang <leon.hwang@linux.dev>
Link: https://lore.kernel.org/r/20250910125740.52172-2-leon.hwang@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Merge tag 'trace-v6.17-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace
Pull tracing fixes from Steven Rostedt:
- Remove redundant __GFP_NOWARN flag in kmalloc
As now __GFP_NOWARN is part of __GFP_NOWAIT, it can be removed from
kmalloc as it is redundant.
- Use copy_from_user_nofault() instead of _inatomic() for trace markers
The trace_marker files are written to to allow user space to quickly
write into the tracing ring buffer.
Back in 2016, the get_user_pages_fast() and the kmap() logic was
replaced by a __copy_from_user_inatomic(), but didn't properly
disable page faults around it.
Since the time this was added, copy_from_user_nofault() was added
which does the required page fault disabling for us.
- Fix the assembly markup in the ftrace direct sample code
The ftrace direct sample code (which is also used for selftests), had
the size directive between the "leave" and the "ret" instead of after
the ret. This caused objtool to think the code was unreachable.
- Only call unregister_pm_notifier() on outer most fgraph registration
There was an error path in register_ftrace_graph() that did not call
unregister_pm_notifier() on error, so it was added in the error path.
The problem with that fix, is that register_pm_notifier() is only
called by the initial user of fgraph. If that succeeds, but another
fgraph registration were to fail, then unregister_pm_notifier() would
be called incorrectly.
- Fix a crash in osnoise when zero size cpumask is passed in
If a zero size CPU mask is passed in, the kmalloc() would return
ZERO_SIZE_PTR which is not checked, and the code would continue
thinking it had real memory and crash. If zero is passed in as the
size of the write, simply return 0.
- Fix possible warning in trace_pid_write()
If while processing a series of numbers passed to the "set_event_pid"
file, and one of the updates fails to allocate (triggered by a fault
injection), it can cause a warning to trigger. Check the return value
of the call to trace_pid_list_set() and break out early with an error
code if it fails.
* tag 'trace-v6.17-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace:
tracing: Silence warning when chunk allocation fails in trace_pid_write
tracing/osnoise: Fix null-ptr-deref in bitmap_parselist()
trace/fgraph: Fix error handling
ftrace/samples: Fix function size computation
tracing: Fix tracing_marker may trigger page fault during preempt_disable
trace: Remove redundant __GFP_NOWARN
Commit 12ffc3b151 ("PM: Restrict swap use to later in the suspend
sequence") incorrectly removed a pm_restrict_gfp_mask() call from
hibernation_snapshot(), so memory allocations involving swap are not
prevented from being carried out in this code path any more which may
lead to serious breakage.
The symptoms of such breakage have become visible after adding a
shrink_shmem_memory() call to hibernation_snapshot() in commit
2640e81947 ("PM: hibernate: shrink shmem pages after dev_pm_ops.prepare()")
which caused this problem to be much more likely to manifest itself.
However, since commit 2640e81947 was initially present in the DRM
tree that did not include commit 12ffc3b151, the symptoms of this
issue were not visible until merge commit 260f6f4fda ("Merge tag
'drm-next-2025-07-30' of https://gitlab.freedesktop.org/drm/kernel")
that exposed it through an entirely reasonable merge conflict
resolution.
Fixes: 12ffc3b151 ("PM: Restrict swap use to later in the suspend sequence")
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=220555
Reported-by: Todd Brandt <todd.e.brandt@linux.intel.com>
Tested-by: Todd Brandt <todd.e.brandt@linux.intel.com>
Cc: 6.16+ <stable@vger.kernel.org> # 6.16+
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Mario Limonciello (AMD) <superm1@kernel.org>
The intel_pstate driver manages CPU capacity changes itself and it does
not need an update of the capacity of all CPUs in the system to be
carried out after registering a PD.
Moreover, in some configurations (for instance, an SMT-capable
hybrid x86 system booted with nosmt in the kernel command line) the
em_check_capacity_update() call at the end of em_dev_register_perf_domain()
always fails and reschedules itself to run once again in 1 s, so
effectively it runs in vain every 1 s forever.
To address this, introduce a new variant of em_dev_register_perf_domain(),
called em_dev_register_pd_no_update(), that does not invoke
em_check_capacity_update(), and make intel_pstate use it instead of the
original.
Fixes: 7b010f9b90 ("cpufreq: intel_pstate: EAS support for hybrid platforms")
Closes: https://lore.kernel.org/linux-pm/40212796-734c-4140-8a85-854f72b8144d@panix.com/
Reported-by: Kenneth R. Crudup <kenny@panix.com>
Tested-by: Kenneth R. Crudup <kenny@panix.com>
Cc: 6.16+ <stable@vger.kernel.org> # 6.16+
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Currently, calling bpf_map_kmalloc_node() from __bpf_async_init() can
cause various locking issues; see the following stack trace (edited for
style) as one example:
...
[10.011566] do_raw_spin_lock.cold
[10.011570] try_to_wake_up (5) double-acquiring the same
[10.011575] kick_pool rq_lock, causing a hardlockup
[10.011579] __queue_work
[10.011582] queue_work_on
[10.011585] kernfs_notify
[10.011589] cgroup_file_notify
[10.011593] try_charge_memcg (4) memcg accounting raises an
[10.011597] obj_cgroup_charge_pages MEMCG_MAX event
[10.011599] obj_cgroup_charge_account
[10.011600] __memcg_slab_post_alloc_hook
[10.011603] __kmalloc_node_noprof
...
[10.011611] bpf_map_kmalloc_node
[10.011612] __bpf_async_init
[10.011615] bpf_timer_init (3) BPF calls bpf_timer_init()
[10.011617] bpf_prog_xxxxxxxxxxxxxxxx_fcg_runnable
[10.011619] bpf__sched_ext_ops_runnable
[10.011620] enqueue_task_scx (2) BPF runs with rq_lock held
[10.011622] enqueue_task
[10.011626] ttwu_do_activate
[10.011629] sched_ttwu_pending (1) grabs rq_lock
...
The above was reproduced on bpf-next (b338cf849e) by modifying
./tools/sched_ext/scx_flatcg.bpf.c to call bpf_timer_init() during
ops.runnable(), and hacking the memcg accounting code a bit to make
a bpf_timer_init() call more likely to raise an MEMCG_MAX event.
We have also run into other similar variants (both internally and on
bpf-next), including double-acquiring cgroup_file_kn_lock, the same
worker_pool::lock, etc.
As suggested by Shakeel, fix this by using __GFP_HIGH instead of
GFP_ATOMIC in __bpf_async_init(), so that e.g. if try_charge_memcg()
raises an MEMCG_MAX event, we call __memcg_memory_event() with
@allow_spinning=false and avoid calling cgroup_file_notify() there.
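In diff form the described change amounts to something like this
(variable name and context are approximate):
	-	cb = bpf_map_kmalloc_node(map, size, GFP_ATOMIC, map->numa_node);
	+	cb = bpf_map_kmalloc_node(map, size, __GFP_HIGH, map->numa_node);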
Depends on mm patch
"memcg: skip cgroup_file_notify if spinning is not allowed":
https://lore.kernel.org/bpf/20250905201606.66198-1-shakeel.butt@linux.dev/
v0 approach s/bpf_map_kmalloc_node/bpf_mem_alloc/
https://lore.kernel.org/bpf/20250905061919.439648-1-yepeilin@google.com/
v1 approach:
https://lore.kernel.org/bpf/20250905234547.862249-1-yepeilin@google.com/
Fixes: b00628b1c7 ("bpf: Introduce bpf timers.")
Suggested-by: Shakeel Butt <shakeel.butt@linux.dev>
Signed-off-by: Peilin Ye <yepeilin@google.com>
Link: https://lore.kernel.org/r/20250909095222.2121438-1-yepeilin@google.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
OpenWRT users reported a regression on ARMv6 devices after updating to the
latest HEAD, where the tcpdump filter:
tcpdump "not ether host 3c37121a2b3c and not ether host 184ecbca2a3a \
and not ether host 14130b4d3f47 and not ether host f0f61cf440b7 \
and not ether host a84b4dedf471 and not ether host d022be17e1d7 \
and not ether host 5c497967208b and not ether host 706655784d5b"
fails with warning: "Kernel filter failed: No error information"
when using config:
# CONFIG_BPF_JIT_ALWAYS_ON is not set
CONFIG_BPF_JIT_DEFAULT_ON=y
The issue arises because commits:
1. "bpf: Fix array bounds error with may_goto" changed default runtime to
__bpf_prog_ret0_warn when jit_requested = 1
2. "bpf: Avoid __bpf_prog_ret0_warn when jit fails" returns error when
jit_requested = 1 but jit fails
This change restores interpreter fallback capability for BPF programs with
stack size <= 512 bytes when jit fails.
Reported-by: Felix Fietkau <nbd@nbd.name>
Closes: https://lore.kernel.org/bpf/2e267b4b-0540-45d8-9310-e127bf95fc63@nbd.name/
Fixes: 6ebc5030e0 ("bpf: Fix array bounds error with may_goto")
Signed-off-by: KaFai Wan <kafai.wan@linux.dev>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20250909144614.2991253-1-kafai.wan@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Currently, out of all 3 types of waiters in the rqspinlock slow path
(i.e., pending bit waiter, wait queue head waiter, and wait queue
non-head waiter), only the pending bit waiter and wait queue head
waiters apply deadlock checks and a timeout on their waiting loop. The
assumption here was that the wait queue head's forward progress would be
sufficient to identify cases where the lock owner or pending bit waiter
is stuck, and non-head waiters relying on the head waiter would prove to
be sufficient for their own forward progress.
However, the head waiter itself can be preempted by a non-head waiter
for the same lock (AA) or a different lock (ABBA) in a manner that
impedes its forward progress. In such a case, non-head waiters not
performing deadlock and timeout checks becomes insufficient, and the
system can enter a state of lockup.
This is typically not a concern with non-NMI lock acquisitions, as lock
holders which run in different contexts (IRQ, non-IRQ) use "irqsave"
variants of the lock APIs, which naturally excludes such lock holders
from preempting one another on the same CPU.
It might seem likely that a similar case may occur for rqspinlock when
programs are attached to contention tracepoints (begin, end), however,
these tracepoints either precede the enqueue into the wait queue, or
succeed it, therefore cannot be used to preempt a head waiter's waiting
loop.
We must still be careful against nested kprobe and fentry programs that
may attach to the middle of the head's waiting loop to stall forward
progress and invoke another rqspinlock acquisition that proceeds as a
non-head waiter. To this end, drop CC_FLAGS_FTRACE from the rqspinlock.o
object file.
For now, this issue is resolved by falling back to a repeated trylock on
the lock word from NMI context, while performing the deadlock checks to
break out early in case forward progress is impossible, and use the
timeout as a final fallback.
A more involved fix to terminate the queue when such a condition occurs
will be made as a follow up. A selftest to stress this aspect of nested
NMI/non-NMI locking attempts will be added in a subsequent patch to the
bpf-next tree when this fix lands and trees are synchronized.
Reported-by: Josef Bacik <josef@toxicpanda.com>
Fixes: 164c246571 ("rqspinlock: Protect waiters in queue from stalls")
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20250909184959.3509085-1-memxor@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
bpf_strnstr() should not treat the terminating '\0' of s2 as a matching
character if the parameter 'len' equals the length of s2, for example:
1. bpf_strnstr("openat", "open", 4) = -ENOENT
2. bpf_strnstr("openat", "open", 5) = 0
This patch makes (1) return 0, fixing not just the `len == strlen(s2)`
case but also the more general case where s2 is a suffix of the first
len characters of s1.
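A small userspace model of the fixed semantics (self-contained and
runnable; -1 stands in for -ENOENT and the return value is the match
offset):
	#include <stdio.h>
	#include <string.h>

	static long strnstr_off(const char *s1, const char *s2, size_t len)
	{
		size_t n = strlen(s2);

		if (n > len)
			return -1;
		for (size_t i = 0; i + n <= len; i++)
			if (!strncmp(s1 + i, s2, n))
				return (long)i;
		return -1;
	}

	int main(void)
	{
		printf("%ld\n", strnstr_off("openat", "open", 4)); /* 0 after the fix */
		printf("%ld\n", strnstr_off("openat", "open", 5)); /* 0 */
		printf("%ld\n", strnstr_off("openat", "nat", 6));  /* 3: suffix match */
		return 0;
	}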
Fixes: e91370550f ("bpf: Add kfuncs for read-only string operations")
Signed-off-by: Rong Tao <rongtao@cestc.cn>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/tencent_17DC57B9D16BC443837021BEACE84B7C1507@qq.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Stanislav reported that in bpf_crypto_crypt() the destination dynptr's
size is not validated to be at least as large as the source dynptr's
size before calling into the crypto backend with 'len = src_len'. This
can result in an OOB write when the destination is smaller than the
source.
Concretely, in mentioned function, psrc and pdst are both linear
buffers fetched from each dynptr:
psrc = __bpf_dynptr_data(src, src_len);
[...]
pdst = __bpf_dynptr_data_rw(dst, dst_len);
[...]
err = decrypt ?
ctx->type->decrypt(ctx->tfm, psrc, pdst, src_len, piv) :
ctx->type->encrypt(ctx->tfm, psrc, pdst, src_len, piv);
The crypto backend expects pdst to be large enough for src_len bytes to
be written. Add an additional src_len > dst_len check and bail
out if it's the case. Note that these kfuncs are accessible under root
privileges only.
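The added guard is conceptually a one-liner before invoking the backend
(a sketch; the exact error code is an assumption):
	/* the destination must be able to hold src_len bytes */
	if (src_len > dst_len)
		return -EINVAL;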
Fixes: 3e1c6f3540 ("bpf: make common crypto API for TC/XDP programs")
Reported-by: Stanislav Fort <disclosure@aisle.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Vadim Fedorenko <vadim.fedorenko@linux.dev>
Reviewed-by: Vadim Fedorenko <vadim.fedorenko@linux.dev>
Link: https://lore.kernel.org/r/20250829143657.318524-1-daniel@iogearbox.net
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Merge tag 'dma-mapping-6.17-2025-09-09' of git://git.kernel.org/pub/scm/linux/kernel/git/mszyprowski/linux
Pull dma-mapping fix from Marek Szyprowski:
- one more fix for DMA API debugging infrastructure (Baochen Qiang)
* tag 'dma-mapping-6.17-2025-09-09' of git://git.kernel.org/pub/scm/linux/kernel/git/mszyprowski/linux:
dma-debug: don't enforce dma mapping check on noncoherent allocations
When testing softirq based hrtimers on an ARM32 board, with high resolution
mode and NOHZ inactive, softirq based hrtimers fail to expire after being
moved away from an offline CPU:
CPU0                                    CPU1
                                        hrtimer_start(..., HRTIMER_MODE_SOFT);
cpu_down(CPU1)                          ...
  hrtimers_cpu_dying()
    // Migrate timers to CPU0
    smp_call_function_single(CPU0, retrigger_next_event);
retrigger_next_event()
  if (!highres && !nohz)
    return;
As retrigger_next_event() is a NOOP when both high resolution timers and
NOHZ are inactive, CPU0's hrtimer_cpu_base::softirq_expires_next is not
updated and the migrated softirq timers never expire unless there is a
softirq based hrtimer queued on CPU0 later.
Fix this by removing the hrtimer_hres_active() and tick_nohz_active() check
in retrigger_next_event(), which enforces a full update of the CPU base.
As this is not a fast path the extra cost does not matter.
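Sketched against the description above, the fix removes the early return
(names and context approximate):
	 static void retrigger_next_event(void *arg)
	 {
		struct hrtimer_cpu_base *base = this_cpu_ptr(&hrtimer_bases);

	-	if (!hrtimer_hres_active(base) && !tick_nohz_active)
	-		return;
	-
		raw_spin_lock(&base->lock);
		hrtimer_update_base(base);
		...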
[ tglx: Massaged change log ]
Fixes: 5c0930ccaa ("hrtimers: Push pending hrtimers away from outgoing CPU earlier")
Co-developed-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/all/20250805081025.54235-1-wangxiongfeng2@huawei.com
Syzkaller triggered a fault injection warning:
WARNING: CPU: 1 PID: 12326 at tracepoint_add_func+0xbfc/0xeb0
Modules linked in:
CPU: 1 UID: 0 PID: 12326 Comm: syz.6.10325 Tainted: G U 6.14.0-rc5-syzkaller #0
Tainted: [U]=USER
Hardware name: Google Compute Engine/Google Compute Engine
RIP: 0010:tracepoint_add_func+0xbfc/0xeb0 kernel/tracepoint.c:294
Code: 09 fe ff 90 0f 0b 90 0f b6 74 24 43 31 ff 41 bc ea ff ff ff
RSP: 0018:ffffc9000414fb48 EFLAGS: 00010283
RAX: 00000000000012a1 RBX: ffffffff8e240ae0 RCX: ffffc90014b78000
RDX: 0000000000080000 RSI: ffffffff81bbd78b RDI: 0000000000000001
RBP: 0000000000000000 R08: 0000000000000001 R09: 0000000000000000
R10: 0000000000000001 R11: 0000000000000001 R12: ffffffffffffffef
R13: 0000000000000000 R14: dffffc0000000000 R15: ffffffff81c264f0
FS: 00007f27217f66c0(0000) GS:ffff8880b8700000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000001b2e80dff8 CR3: 00000000268f8000 CR4: 00000000003526f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<TASK>
tracepoint_probe_register_prio+0xc0/0x110 kernel/tracepoint.c:464
register_trace_prio_sched_switch include/trace/events/sched.h:222 [inline]
register_pid_events kernel/trace/trace_events.c:2354 [inline]
event_pid_write.isra.0+0x439/0x7a0 kernel/trace/trace_events.c:2425
vfs_write+0x24c/0x1150 fs/read_write.c:677
ksys_write+0x12b/0x250 fs/read_write.c:731
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xcd/0x250 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x77/0x7f
We can reproduce the warning with the following steps:
1. echo 8 >> set_event_notrace_pid. Let tr->filtered_pids own one pid
and register the sched_switch tracepoint.
2. echo ' ' >> set_event_pid, and perform fault injection during chunk
allocation in trace_pid_list_alloc, so that a pid_list with no pid is
assigned to tr->filtered_pids.
3. echo ' ' >> set_event_pid, so that a NULL pid_list is assigned to
tr->filtered_pids.
4. echo 9 >> set_event_pid, which triggers the double-register
sched_switch tracepoint warning.
The reason is that syzkaller injects a fault into the chunk allocation
in trace_pid_list_alloc, causing a failure in trace_pid_list_set, which
may trigger a double registration of the same tracepoint. This only
occurs when the system is about to crash, but to suppress this warning,
let's add failure handling logic for trace_pid_list_set.
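The handling, sketched as a diff inside the pid parsing loop (variable
names approximate):
	-		trace_pid_list_set(pid_list, pid);
	+		ret = trace_pid_list_set(pid_list, pid);
	+		if (ret < 0)
	+			break;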
Link: https://lore.kernel.org/20250908024658.2390398-1-pulehui@huaweicloud.com
Fixes: 8d6e90983a ("tracing: Create a sparse bitmask for pid filtering")
Reported-by: syzbot+161412ccaeff20ce4dde@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/all/67cb890e.050a0220.d8275.022e.GAE@google.com
Signed-off-by: Pu Lehui <pulehui@huawei.com>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Merge tag 'timers-urgent-2025-09-07' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull timer fix from Ingo Molnar:
"Fix a severe slowdown regression in the timer vDSO code related to the
while() loop in __iter_div_u64_rem(), when the AUX-clock is enabled"
* tag 'timers-urgent-2025-09-07' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
vdso/vsyscall: Avoid slow division loop in auxiliary clock update
A crash was observed with the following output:
BUG: kernel NULL pointer dereference, address: 0000000000000010
Oops: Oops: 0000 [#1] SMP NOPTI
CPU: 2 UID: 0 PID: 92 Comm: osnoise_cpus Not tainted 6.17.0-rc4-00201-gd69eb204c255 #138 PREEMPT(voluntary)
RIP: 0010:bitmap_parselist+0x53/0x3e0
Call Trace:
<TASK>
osnoise_cpus_write+0x7a/0x190
vfs_write+0xf8/0x410
? do_sys_openat2+0x88/0xd0
ksys_write+0x60/0xd0
do_syscall_64+0xa4/0x260
entry_SYSCALL_64_after_hwframe+0x77/0x7f
</TASK>
This issue can be reproduced by the code below:
fd=open("/sys/kernel/debug/tracing/osnoise/cpus", O_WRONLY);
write(fd, "0-2", 0);
When the user passes 'count=0' to osnoise_cpus_write(), kmalloc() returns
ZERO_SIZE_PTR (16) and cpulist_parse() treats it as a normal pointer,
which triggers the NULL pointer dereference. Add a check for the
parameter 'count'.
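The guard, sketched at the top of the write handler (a minimal
illustration, not the full function):
	static ssize_t osnoise_cpus_write(struct file *filp, const char __user *ubuf,
					  size_t count, loff_t *ppos)
	{
		char *buf;

		/* nothing to parse; avoid kmalloc(0) returning ZERO_SIZE_PTR */
		if (!count)
			return 0;

		buf = kmalloc(count, GFP_KERNEL);
		...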
Cc: <mhiramat@kernel.org>
Cc: <mathieu.desnoyers@efficios.com>
Cc: <tglozar@redhat.com>
Link: https://lore.kernel.org/20250906035610.3880282-1-wangliang74@huawei.com
Fixes: 17f89102fe ("tracing/osnoise: Allow arbitrarily long CPU string")
Signed-off-by: Wang Liang <wangliang74@huawei.com>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Commit edede7a6dc ("trace/fgraph: Fix the warning caused by missing
unregister notifier") added a call to unregister the PM notifier if
register_ftrace_graph() failed. It does so unconditionally. However,
the PM notifier is only registered with the first call to
register_ftrace_graph(). If the first registration was successful and
a subsequent registration failed, the notifier is now unregistered even
if ftrace graphs are still registered.
Fix the problem by only unregistering the PM notifier during error handling
if there are no active fgraph registrations.
Fixes: edede7a6dc ("trace/fgraph: Fix the warning caused by missing unregister notifier")
Closes: https://lore.kernel.org/all/63b0ba5a-a928-438e-84f9-93028dd72e54@roeck-us.net/
Cc: Ye Weihua <yeweihua4@huawei.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Link: https://lore.kernel.org/20250906050618.2634078-1-linux@roeck-us.net
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Merge tag 'audit-pr-20250905' of git://git.kernel.org/pub/scm/linux/kernel/git/pcmoore/audit
Pull audit fix from Paul Moore:
"A single small audit patch to fix a potential out-of-bounds read
caused by a negative array index when comparing paths"
* tag 'audit-pr-20250905' of git://git.kernel.org/pub/scm/linux/kernel/git/pcmoore/audit:
audit: fix out-of-bounds read in audit_compare_dname_path()
When a watch on dir=/ is combined with an fsnotify event for a
single-character name directly under / (e.g., creating /a), an
out-of-bounds read can occur in audit_compare_dname_path().
The helper parent_len() returns 1 for "/". In audit_compare_dname_path(),
when parentlen equals the full path length (1), the code sets p = path + 1
and pathlen = 1 - 1 = 0. The subsequent loop then dereferences
p[pathlen - 1] (i.e., p[-1]), causing an out-of-bounds read.
Fix this by adding a pathlen > 0 check to the while loop condition
to prevent the out-of-bounds access.
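Sketch of the corrected loop (surrounding code elided):
	p = path + parentlen;
	pathlen -= parentlen;

	/* strip trailing slashes without reading before the buffer */
	while (pathlen > 0 && p[pathlen - 1] == '/')
		pathlen--;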
Cc: stable@vger.kernel.org
Fixes: e92eebb0d6 ("audit: fix suffixed '/' filename matching")
Reported-by: Stanislav Fort <disclosure@aisle.com>
Suggested-by: Linus Torvalds <torvalds@linuxfoundation.org>
Signed-off-by: Stanislav Fort <stanislav.fort@aisle.com>
[PM: subject tweak, sign-off email fixes]
Signed-off-by: Paul Moore <paul@paul-moore.com>
The call to __iter_div_u64_rem() in vdso_time_update_aux() is a wrapper
around subtraction. It cannot be used to divide large numbers, as that
introduces long, computationally expensive delays. A regular u64 division
is also not possible in the timekeeper update path as it can be too slow.
Instead of splitting the ktime_t offset into second and subsecond
components during the timekeeper update fast-path, do it together with the
adjustment of tk->offs_aux in the slow-path. Equivalent to the handling of
offs_boot and monotonic_to_boot.
Reuse the storage of monotonic_to_boot for the new field, as it is not used
by auxiliary timekeepers.
Fixes: 380b84e168 ("vdso/vsyscall: Update auxiliary clock data in the datapage")
Reported-by: Miroslav Lichvar <mlichvar@redhat.com>
Signed-off-by: Thomas Weißschuh <thomas.weissschuh@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/all/20250825-vdso-auxclock-division-v1-1-a1d32a16a313@linutronix.de
Closes: https://lore.kernel.org/lkml/aKwsNNWsHJg8IKzj@localhost/
The event_limit can be set via the PERF_EVENT_IOC_REFRESH ioctl to limit
the number of events. When the event_limit reaches 0, the POLL_HUP signal
should be sent, but it is missing.
The corresponding counter should be stopped when the event_limit reaches
0. It was implemented in the ARCH-specific code. However, since the
commit 9734e25fbf ("perf: Fix the throttle logic for a group"), all
the ARCH-specific code has been moved to the generic code. The code to
handle the event_limit was lost.
Add the event->pmu->stop(event, 0); back.
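A sketch of the restored handling in the generic overflow path (context
approximate):
	if (events && atomic_dec_and_test(&event->event_limit)) {
		ret = 1;
		event->pending_kill = POLL_HUP;
		event->pmu->stop(event, 0);	/* restored: stop the counter */
		perf_event_disable_inatomic(event);
	}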
Fixes: 9734e25fbf ("perf: Fix the throttle logic for a group")
Closes: https://lore.kernel.org/lkml/aICYAqM5EQUlTqtX@li-2b55cdcc-350b-11b2-a85c-a78bff51fc11.ibm.com/
Reported-by: Sumanth Korikkar <sumanthk@linux.ibm.com>
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Sumanth Korikkar <sumanthk@linux.ibm.com>
Link: https://lkml.kernel.org/r/20250811182644.1305952-1-kan.liang@linux.intel.com
Both tracing_mark_write and tracing_mark_raw_write call
__copy_from_user_inatomic() with preemption disabled. But in some cases
__copy_from_user_inatomic() may trigger a page fault and subtly end up
calling schedule(). If the task is then migrated to another CPU, the
following warning is triggered:
if (RB_WARN_ON(cpu_buffer,
!local_read(&cpu_buffer->committing)))
An example can illustrate this issue:
process flow CPU
---------------------------------------------------------------------
tracing_mark_raw_write(): cpu:0
...
ring_buffer_lock_reserve(): cpu:0
...
cpu = raw_smp_processor_id() cpu:0
cpu_buffer = buffer->buffers[cpu] cpu:0
...
...
__copy_from_user_inatomic(): cpu:0
...
# page fault
do_mem_abort(): cpu:0
...
# Call schedule
schedule() cpu:0
...
# the task schedule to cpu1
__buffer_unlock_commit(): cpu:1
...
ring_buffer_unlock_commit(): cpu:1
...
cpu = raw_smp_processor_id() cpu:1
cpu_buffer = buffer->buffers[cpu] cpu:1
As shown above, the process acquires the cpuid twice and the return values
are not the same.
Fix this problem by using copy_from_user_nofault instead of
__copy_from_user_inatomic, as the former performs an 'access_ok' check
before copying.
Link: https://lore.kernel.org/20250819105152.2766363-1-luogengkun@huaweicloud.com
Fixes: 656c7f0d2d ("tracing: Replace kmap with copy_from_user() in trace_marker writing")
Signed-off-by: Luo Gengkun <luogengkun@huaweicloud.com>
Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Commit 16f5dfbc85 ("gfp: include __GFP_NOWARN in GFP_NOWAIT")
made GFP_NOWAIT implicitly include __GFP_NOWARN.
Therefore, explicit __GFP_NOWARN combined with GFP_NOWAIT
(e.g., `GFP_NOWAIT | __GFP_NOWARN`) is now redundant. Let's clean
up these redundant flags across subsystems.
No functional changes.
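The cleanup pattern, illustrated with a hypothetical call site:
	-	pid_list = kzalloc(sizeof(*pid_list), GFP_NOWAIT | __GFP_NOWARN);
	+	pid_list = kzalloc(sizeof(*pid_list), GFP_NOWAIT);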
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Link: https://lore.kernel.org/20250805023630.335719-1-rongqianfeng@vivo.com
Signed-off-by: Qianfeng Rong <rongqianfeng@vivo.com>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Merge tag 'sched_urgent_for_v6.17_rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull scheduler fixes from Borislav Petkov:
- Fix a stall on the CPU offline path due to mis-counting a deadline
server task twice as part of the runqueue's running tasks count
- Fix a realtime tasks starvation case where failure to enqueue a timer
whose expiration time is already in the past would cause repeated
attempts to re-enqueue a deadline server task which leads to starving
the former, realtime one
- Prevent a delayed deadline server task stop from breaking the
per-runqueue bandwidth tracking
- Have a function checking whether the deadline server task has
stopped, return the correct value
* tag 'sched_urgent_for_v6.17_rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
sched/deadline: Don't count nr_running for dl_server proxy tasks
sched/deadline: Fix RT task potential starvation when expiry time passed
sched/deadline: Always stop dl-server before changing parameters
sched/deadline: Fix dl_server_stopped()
To avoid a memory leak via mm_alloc() + mmdrop(), the futex cleanup code
has been moved to __mmdrop(). This resulted in a warning if the futex
hash table had been allocated via vmalloc() and mmdrop() was invoked
from atomic context.
The free path must stay in __mmput() to ensure it is invoked from
preemptible context.
In order to avoid the memory leak, delay the allocation of
mm_struct::futex_ref to futex_hash_allocate(). This works because
neither the per-CPU counter nor the private hash has been allocated, and
therefore
- futex_private_hash() callers (such as exit_pi_state_list()) don't
acquire reference if there is no private hash yet. There is also no
reference put.
- Regular callers (futex_hash()) fallback to global hash. No reference
counting here.
The futex_ref member can be allocated in futex_hash_allocate() before
the private hash itself is allocated. This happens either while the
first thread is created or on request. In both cases the process has
just a single thread, so there can be either a futex operation in
progress or the request to create a private hash, but not both.
Move futex_hash_free() back to __mmput();
Move the allocation of mm_struct::futex_ref to futex_hash_allocate().
[ bp: Fold a follow-up fix to prevent a use-after-free:
https://lore.kernel.org/r/20250830213806.sEKuuGSm@linutronix.de ]
Fixes: e703b7e247 ("futex: Move futex cleanup to __mmdrop()")
Closes: https://lore.kernel.org/all/20250821102721.6deae493@kernel.org/
Reported-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lkml.kernel.org/r/20250822141238.PfnkTjFb@linutronix.de
Merge tag 'dma-mapping-6.17-2025-08-28' of git://git.kernel.org/pub/scm/linux/kernel/git/mszyprowski/linux
Pull dma-mapping fixes from Marek Szyprowski:
- another small fix for arm64 systems with memory encryption (Shanker
Donthineni)
- fix for arm32 systems with non-standard CMA configuration (Oreoluwa
Babatunde)
* tag 'dma-mapping-6.17-2025-08-28' of git://git.kernel.org/pub/scm/linux/kernel/git/mszyprowski/linux:
dma/pool: Ensure DMA_DIRECT_REMAP allocations are decrypted
of: reserved_mem: Restructure call site for dma_contiguous_early_fixup()
On CPU offline the kernel stalled with the call trace below:
INFO: task kworker/0:1:11 blocked for more than 120 seconds.
cpuhp holds the cpu hotplug lock endlessly and stalls vmstat_shepherd.
This is because we count nr_running twice on cpuhp enqueuing and fail
the wait condition of cpuhp:
enqueue_task_fair() // pick cpuhp from idle, rq->nr_running = 0
dl_server_start()
[...]
add_nr_running() // rq->nr_running = 1
add_nr_running() // rq->nr_running = 2
[switch to cpuhp, waiting on balance_hotplug_wait()]
rcuwait_wait_event(rq->nr_running == 1 && ...) // failed, rq->nr_running=2
schedule() // wait again
It doesn't make sense to count the dl_server towards runnable tasks,
since it runs other tasks.
Fixes: 63ba8422f8 ("sched/deadline: Introduce deadline servers")
Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250627035420.37712-1-yangyicong@huawei.com
[Symptom]
The fair server mechanism, which is intended to prevent fair-class
starvation when higher-priority tasks monopolize the CPU, can misbehave.
Specifically, RT tasks on the runqueue may not be scheduled as expected.
[Analysis]
The log "sched: DL replenish lagged too much" triggered.
A memory dump of the dl_server shows:
curr = 0xFFFFFF80D6A0AC00 (
dl_server = 0xFFFFFF83CD5B1470(
dl_runtime = 0x02FAF080,
dl_deadline = 0x3B9ACA00,
dl_period = 0x3B9ACA00,
dl_bw = 0xCCCC,
dl_density = 0xCCCC,
runtime = 0x02FAF080,
deadline = 0x0000082031EB0E80,
flags = 0x0,
dl_throttled = 0x0,
dl_yielded = 0x0,
dl_non_contending = 0x0,
dl_overrun = 0x0,
dl_server = 0x1,
dl_server_active = 0x1,
dl_defer = 0x1,
dl_defer_armed = 0x0,
dl_defer_running = 0x1,
dl_timer = (
node = (
expires = 0x000008199756E700),
_softexpires = 0x000008199756E700,
function = 0xFFFFFFDB9AF44D30 = dl_task_timer,
base = 0xFFFFFF83CD5A12C0,
state = 0x0,
is_rel = 0x0,
is_soft = 0x0,
clock_update_flags = 0x4,
clock = 0x000008204A496900,
- The timer expiration time (rq->curr->dl_server->dl_timer->expires)
is already in the past, indicating the timer has expired.
- The timer state (rq->curr->dl_server->dl_timer->state) is 0.
[Suspected Root Cause]
The relevant code flow in the throttle path of
update_curr_dl_se() is as follows:
dequeue_dl_entity(dl_se, 0); // the DL entity is dequeued
if (unlikely(is_dl_boosted(dl_se) || !start_dl_timer(dl_se))) {
	if (dl_server(dl_se)) // timer registration fails
		enqueue_dl_entity(dl_se, ENQUEUE_REPLENISH); // enqueue immediately
	...
}
The failure of `start_dl_timer` is caused by attempting to register a
timer with an expiration time that is already in the past. When this
situation persists, the code repeatedly re-enqueues the DL entity
without properly replenishing or restarting the timer, so RT tasks may
not be scheduled as expected.
[Proposed Solution]:
Instead of immediately re-enqueuing the DL entity on timer registration
failure, this change ensures the DL entity is properly replenished and
the timer is restarted, preventing potential RT starvation.
Fixes: 63ba8422f8 ("sched/deadline: Introduce deadline servers")
Signed-off-by: kuyo chang <kuyo.chang@mediatek.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Closes: https://lore.kernel.org/CAMuHMdXn4z1pioTtBGMfQM0jsLviqS2jwysaWXpoLxWYoGa82w@mail.gmail.com
Tested-by: Geert Uytterhoeven <geert@linux-m68k.org>
Tested-by: Jiri Slaby <jirislaby@kernel.org>
Tested-by: Diederik de Haas <didi.debian@cknow.org>
Link: https://lkml.kernel.org/r/20250615131129.954975-1-kuyo.chang@mediatek.com
Commit cccb45d7c4 ("sched/deadline: Less agressive dl_server
handling") reduced dl-server overhead by delaying disabling servers only
after there are no fair task around for a whole period, which means that
deadline entities are not dequeued right away on a server stop event.
However, the delay opens up a window in which a request for changing
server parameters can break per-runqueue running_bw tracking, as
reported by Yuri.
Close the problematic window by unconditionally calling dl_server_stop()
before applying the new parameters (ensuring deadline entities go
through an actual dequeue).
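Conceptually, the update path becomes (a sketch of the ordering, not the
literal diff; the restart condition is an assumption):
	dl_server_stop(dl_se);		/* force an actual dequeue first */
	/* ... apply the new runtime/period parameters ... */
	if (was_active)
		dl_server_start(dl_se);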
Fixes: cccb45d7c4 ("sched/deadline: Less agressive dl_server handling")
Reported-by: Yuri Andriaccio <yurand2000@gmail.com>
Signed-off-by: Juri Lelli <juri.lelli@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Valentin Schneider <vschneid@redhat.com>
Link: https://lore.kernel.org/r/20250721-upstream-fix-dlserver-lessaggressive-b4-v1-1-4ebc10c87e40@redhat.com
Commit cccb45d7c4 ("sched/deadline: Less agressive dl_server handling")
introduces dl_server_stopped(). But it is obvious that dl_server_stopped()
should return true if dl_se->dl_server_active is 0.
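The fix in sketch form:
	static bool dl_server_stopped(struct sched_dl_entity *dl_se)
	{
		if (!dl_se->dl_server_active)
			return true;	/* previously returned false here */
		...
	}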
Fixes: cccb45d7c4 ("sched/deadline: Less agressive dl_server handling")
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20250809130419.1980742-1-chenhuacai@loongson.cn
Merge tag 'modules-6.17-rc3.fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/modules/linux
Pull modules fix from Daniel Gomez:
"This includes a fix part of the KSPP (Kernel Self Protection Project)
to replace the deprecated and unsafe strcpy() calls in the kernel
parameter string handler and sysfs parameters for built-in modules.
Single commit, no functional changes"
* tag 'modules-6.17-rc3.fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/modules/linux:
params: Replace deprecated strcpy() with strscpy() and memcpy()
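The replacement pattern, illustrated with a hypothetical buffer
(strscpy() bounds the copy and guarantees NUL-termination):
	char buf[64];

	strcpy(buf, val);		/* deprecated: no bounds checking */
	strscpy(buf, val, sizeof(buf));	/* bounded replacement */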