Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Cross-merge networking fixes after downstream PR (net-6.17-rc6).

Conflicts:

net/netfilter/nft_set_pipapo.c
net/netfilter/nft_set_pipapo_avx2.c
  c4eaca2e10 ("netfilter: nft_set_pipapo: don't check genbit from packetpath lookups")
  84c1da7b38 ("netfilter: nft_set_pipapo: use avx2 algorithm for insertions too")

Only trivial adjacent changes (in a doc and a Makefile).

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
pull/1354/merge
Jakub Kicinski 2025-09-11 17:37:09 -07:00
commit fc3a281041
269 changed files with 2633 additions and 1130 deletions


@ -586,6 +586,7 @@ What: /sys/devices/system/cpu/vulnerabilities
/sys/devices/system/cpu/vulnerabilities/srbds
/sys/devices/system/cpu/vulnerabilities/tsa
/sys/devices/system/cpu/vulnerabilities/tsx_async_abort
/sys/devices/system/cpu/vulnerabilities/vmscape
Date: January 2018
Contact: Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description: Information about CPU vulnerabilities


@ -26,3 +26,4 @@ are configurable at compile, boot or run time.
rsb
old_microcode
indirect-target-selection
vmscape


@ -0,0 +1,110 @@
.. SPDX-License-Identifier: GPL-2.0
VMSCAPE
=======
VMSCAPE is a vulnerability that may allow a guest to influence the branch
prediction in host userspace. It particularly affects hypervisors like QEMU.
Even if the hypervisor itself holds no sensitive data such as disk encryption
keys, guest userspace may be able to attack the guest kernel by using the
hypervisor as a confused deputy.
Affected processors
-------------------
The following CPU families are affected by VMSCAPE:
**Intel processors:**
- Skylake generation (Parts without Enhanced-IBRS)
- Cascade Lake generation (Parts affected by ITS guest/host separation)
- Alder Lake and newer (Parts affected by BHI)
Note that BHI-affected parts that use the BHB-clearing software mitigation,
e.g. Icelake, are not vulnerable to VMSCAPE.
**AMD processors:**
- Zen series (families 0x17, 0x19, 0x1a)
**Hygon processors:**
- Family 0x18
Mitigation
----------
Conditional IBPB
----------------
The kernel tracks when a CPU has run a potentially malicious guest and issues
an IBPB before the first exit to userspace after a VM-exit. If userspace did
not run between the VM-exit and the next VM-entry, no IBPB is issued.
Note that the existing userspace mitigations against Spectre-v2 are effective
in protecting userspace tasks from one another, but they are insufficient to
protect a userspace VMM from a malicious guest. This is because Spectre-v2
mitigations are applied at context-switch time, while a userspace VMM can run
after a VM-exit without a context switch.
Vulnerability enumeration and mitigation are not applied inside a guest. This is
because nested hypervisors should already be deploying IBPB to isolate
themselves from nested guests.
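The x86 hunks later in this merge implement this with a per-CPU flag,
``x86_ibpb_exit_to_user``, that KVM sets after running a guest and that the
exit-to-userspace path consumes. The following condensed sketch is assembled
from those hunks for illustration only; the two wrapper function names are
made up here and are not kernel symbols.

/* Illustrative sketch of the conditional-IBPB flow. */
DEFINE_PER_CPU(bool, x86_ibpb_exit_to_user);

/* KVM: after VM-exit, before preemption is re-enabled, mark this CPU. */
static void vmscape_mark_guest_ran(void)
{
	if (cpu_feature_enabled(X86_FEATURE_IBPB_EXIT_TO_USER))
		this_cpu_write(x86_ibpb_exit_to_user, true);
}

/* Exit to userspace: issue the IBPB only if a guest ran on this CPU. */
static void vmscape_ibpb_before_user(void)
{
	if (cpu_feature_enabled(X86_FEATURE_IBPB_EXIT_TO_USER) &&
	    this_cpu_read(x86_ibpb_exit_to_user)) {
		indirect_branch_prediction_barrier();
		this_cpu_write(x86_ibpb_exit_to_user, false);
	}
}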
SMT considerations
------------------
When Simultaneous Multi-Threading (SMT) is enabled, hypervisors can be
vulnerable to cross-thread attacks. For complete protection against VMSCAPE
attacks in SMT environments, STIBP should be enabled.
The kernel will issue a warning if SMT is enabled without adequate STIBP
protection; a condensed sketch of this check follows the list below. No
warning is issued when:
- SMT is disabled
- STIBP is enabled system-wide
- Intel eIBRS is enabled (which implies STIBP protection)
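Condensed into a single predicate, the warning condition mirrors the
``cpu_bugs_smt_update()`` hunk later in this merge (which tests the inverse
conditions and simply skips the warning); the function name below is
illustrative, not a kernel symbol.

/* Illustrative predicate; conditions taken from cpu_bugs_smt_update(). */
static bool vmscape_smt_warning_needed(void)
{
	if (!sched_smt_active())
		return false;	/* SMT is disabled */
	if (spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT ||
	    spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED)
		return false;	/* STIBP is enabled system-wide */
	if (spectre_v2_in_eibrs_mode(spectre_v2_enabled) &&
	    !boot_cpu_has(X86_FEATURE_AUTOIBRS))
		return false;	/* Intel eIBRS (not AutoIBRS) implies STIBP */
	return true;		/* SMT on without STIBP: warn once */
}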
System information and options
------------------------------
The sysfs file showing VMSCAPE mitigation status is:
/sys/devices/system/cpu/vulnerabilities/vmscape
The possible values in this file are:
* 'Not affected':
The processor is not vulnerable to VMSCAPE attacks.
* 'Vulnerable':
The processor is vulnerable and no mitigation has been applied.
* 'Mitigation: IBPB before exit to userspace':
Conditional IBPB mitigation is enabled. The kernel tracks when a CPU has
run a potentially malicious guest and issues an IBPB before the first
exit to userspace after VM-exit.
* 'Mitigation: IBPB on VMEXIT':
IBPB is issued on every VM-exit. This occurs when other mitigations like
RETBLEED or SRSO are already issuing IBPB on VM-exit.
Mitigation control on the kernel command line
----------------------------------------------
The mitigation can be controlled via the ``vmscape=`` command line parameter:
* ``vmscape=off``:
Disable the VMSCAPE mitigation.
* ``vmscape=ibpb``:
Enable conditional IBPB mitigation (default when CONFIG_MITIGATION_VMSCAPE=y).
* ``vmscape=force``:
Force vulnerability detection and mitigation even on processors that are
not known to be affected.


@ -3829,6 +3829,7 @@
srbds=off [X86,INTEL]
ssbd=force-off [ARM64]
tsx_async_abort=off [X86]
vmscape=off [X86]
Exceptions:
This does not have any effect on
@ -8041,6 +8042,16 @@
vmpoff= [KNL,S390] Perform z/VM CP command after power off.
Format: <command>
vmscape= [X86] Controls mitigation for VMscape attacks.
VMscape attacks can leak information from a userspace
hypervisor to a guest via speculative side-channels.
off - disable the mitigation
ibpb - use Indirect Branch Prediction Barrier
(IBPB) mitigation (default)
force - force vulnerability detection even on
unaffected processors
vsyscall= [X86-64,EARLY]
Controls the behavior of vsyscalls (i.e. calls to
fixed addresses of 0xffffffffff600x00 from legacy


@ -20,6 +20,7 @@ properties:
- enum:
- fsl,imx7ulp-spi
- fsl,imx8qxp-spi
- nxp,s32g2-lpspi
- items:
- enum:
- fsl,imx8ulp-spi
@ -27,6 +28,10 @@ properties:
- fsl,imx94-spi
- fsl,imx95-spi
- const: fsl,imx7ulp-spi
- items:
- const: nxp,s32g3-lpspi
- const: nxp,s32g2-lpspi
reg:
maxItems: 1


@ -256,7 +256,7 @@ attribute-sets:
type: u32
-
name: if-idx
type: u32
type: s32
-
name: reset-reason
type: u32


@ -742,7 +742,7 @@ The broadcast manager sends responses to user space in the same form:
struct timeval ival1, ival2; /* count and subsequent interval */
canid_t can_id; /* unique can_id for task */
__u32 nframes; /* number of can_frames following */
struct can_frame frames[0];
struct can_frame frames[];
};
The aligned payload 'frames' uses the same basic CAN frame structure defined


@ -60,10 +60,10 @@ address announcements. Typically, it is the client side that initiates subflows,
and the server side that announces additional addresses via the ``ADD_ADDR`` and
``REMOVE_ADDR`` options.
Path managers are controlled by the ``net.mptcp.pm_type`` sysctl knob -- see
mptcp-sysctl.rst. There are two types: the in-kernel one (type ``0``) where the
same rules are applied for all the connections (see: ``ip mptcp``) ; and the
userspace one (type ``1``), controlled by a userspace daemon (i.e. `mptcpd
Path managers are controlled by the ``net.mptcp.path_manager`` sysctl knob --
see mptcp-sysctl.rst. There are two types: the in-kernel one (``kernel``) where
the same rules are applied for all the connections (see: ``ip mptcp``) ; and the
userspace one (``userspace``), controlled by a userspace daemon (i.e. `mptcpd
<https://mptcpd.mptcp.dev/>`_) where different rules can be applied for each
connection. The path managers can be controlled via a Netlink API; see
../netlink/specs/mptcp_pm.rst.
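As a minimal illustration (not part of this patch set), a program can switch
to the userspace path manager by writing the documented value to the sysctl,
assuming the usual procfs mapping of ``net.mptcp.path_manager``:

#include <stdio.h>

int main(void)
{
	/* net.mptcp.path_manager maps to this procfs path. */
	FILE *f = fopen("/proc/sys/net/mptcp/path_manager", "w");

	if (!f) {
		perror("path_manager sysctl");
		return 1;
	}
	/* "userspace" selects the userspace PM; "kernel" is the in-kernel one. */
	fputs("userspace\n", f);
	fclose(f);
	return 0;
}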


@ -4683,7 +4683,6 @@ F: security/bpf/
BPF [SELFTESTS] (Test Runners & Infrastructure)
M: Andrii Nakryiko <andrii@kernel.org>
M: Eduard Zingerman <eddyz87@gmail.com>
R: Mykola Lysenko <mykolal@fb.com>
L: bpf@vger.kernel.org
S: Maintained
F: tools/testing/selftests/bpf/
@ -5259,7 +5258,6 @@ F: drivers/gpio/gpio-bt8xx.c
BTRFS FILE SYSTEM
M: Chris Mason <clm@fb.com>
M: Josef Bacik <josef@toxicpanda.com>
M: David Sterba <dsterba@suse.com>
L: linux-btrfs@vger.kernel.org
S: Maintained
@ -7822,7 +7820,7 @@ Q: https://patchwork.freedesktop.org/project/nouveau/
Q: https://gitlab.freedesktop.org/drm/nouveau/-/merge_requests
B: https://gitlab.freedesktop.org/drm/nouveau/-/issues
C: irc://irc.oftc.net/nouveau
T: git https://gitlab.freedesktop.org/drm/nouveau.git
T: git https://gitlab.freedesktop.org/drm/misc/kernel.git
F: drivers/gpu/drm/nouveau/
F: include/uapi/drm/nouveau_drm.h
@ -16127,6 +16125,7 @@ M: Andrew Morton <akpm@linux-foundation.org>
M: Mike Rapoport <rppt@kernel.org>
L: linux-mm@kvack.org
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock.git
F: include/linux/numa_memblks.h
F: mm/numa.c
F: mm/numa_emulation.c
@ -17479,6 +17478,7 @@ NETFILTER
M: Pablo Neira Ayuso <pablo@netfilter.org>
M: Jozsef Kadlecsik <kadlec@netfilter.org>
M: Florian Westphal <fw@strlen.de>
R: Phil Sutter <phil@nwl.cc>
L: netfilter-devel@vger.kernel.org
L: coreteam@netfilter.org
S: Maintained


@ -2,7 +2,7 @@
VERSION = 6
PATCHLEVEL = 17
SUBLEVEL = 0
EXTRAVERSION = -rc4
EXTRAVERSION = -rc5
NAME = Baby Opossum Posse
# *DOCUMENTATION*


@ -19,6 +19,7 @@ struct mod_arch_specific {
/* for CONFIG_DYNAMIC_FTRACE */
struct plt_entry *ftrace_trampolines;
struct plt_entry *init_ftrace_trampolines;
};
u64 module_emit_plt_entry(struct module *mod, Elf64_Shdr *sechdrs,


@ -2,6 +2,7 @@ SECTIONS {
.plt 0 : { BYTE(0) }
.init.plt 0 : { BYTE(0) }
.text.ftrace_trampoline 0 : { BYTE(0) }
.init.text.ftrace_trampoline 0 : { BYTE(0) }
#ifdef CONFIG_KASAN_SW_TAGS
/*


@ -17,7 +17,12 @@
#ifndef __ASM_BITSPERLONG_H
#define __ASM_BITSPERLONG_H
#if defined(__KERNEL__) && !defined(__aarch64__)
/* Used by the compat vDSO */
#define __BITS_PER_LONG 32
#else
#define __BITS_PER_LONG 64
#endif
#include <asm-generic/bitsperlong.h>


@ -258,10 +258,17 @@ int ftrace_update_ftrace_func(ftrace_func_t func)
return ftrace_modify_code(pc, 0, new, false);
}
static struct plt_entry *get_ftrace_plt(struct module *mod)
static struct plt_entry *get_ftrace_plt(struct module *mod, unsigned long addr)
{
#ifdef CONFIG_MODULES
struct plt_entry *plt = mod->arch.ftrace_trampolines;
struct plt_entry *plt = NULL;
if (within_module_mem_type(addr, mod, MOD_INIT_TEXT))
plt = mod->arch.init_ftrace_trampolines;
else if (within_module_mem_type(addr, mod, MOD_TEXT))
plt = mod->arch.ftrace_trampolines;
else
return NULL;
return &plt[FTRACE_PLT_IDX];
#else
@ -332,7 +339,7 @@ static bool ftrace_find_callable_addr(struct dyn_ftrace *rec,
if (WARN_ON(!mod))
return false;
plt = get_ftrace_plt(mod);
plt = get_ftrace_plt(mod, pc);
if (!plt) {
pr_err("ftrace: no module PLT for %ps\n", (void *)*addr);
return false;


@ -94,7 +94,7 @@ int load_other_segments(struct kimage *image,
char *initrd, unsigned long initrd_len,
char *cmdline)
{
struct kexec_buf kbuf;
struct kexec_buf kbuf = {};
void *dtb = NULL;
unsigned long initrd_load_addr = 0, dtb_len,
orig_segments = image->nr_segments;


@ -283,7 +283,7 @@ int module_frob_arch_sections(Elf_Ehdr *ehdr, Elf_Shdr *sechdrs,
unsigned long core_plts = 0;
unsigned long init_plts = 0;
Elf64_Sym *syms = NULL;
Elf_Shdr *pltsec, *tramp = NULL;
Elf_Shdr *pltsec, *tramp = NULL, *init_tramp = NULL;
int i;
/*
@ -298,6 +298,9 @@ int module_frob_arch_sections(Elf_Ehdr *ehdr, Elf_Shdr *sechdrs,
else if (!strcmp(secstrings + sechdrs[i].sh_name,
".text.ftrace_trampoline"))
tramp = sechdrs + i;
else if (!strcmp(secstrings + sechdrs[i].sh_name,
".init.text.ftrace_trampoline"))
init_tramp = sechdrs + i;
else if (sechdrs[i].sh_type == SHT_SYMTAB)
syms = (Elf64_Sym *)sechdrs[i].sh_addr;
}
@ -363,5 +366,12 @@ int module_frob_arch_sections(Elf_Ehdr *ehdr, Elf_Shdr *sechdrs,
tramp->sh_size = NR_FTRACE_PLTS * sizeof(struct plt_entry);
}
if (init_tramp) {
init_tramp->sh_type = SHT_NOBITS;
init_tramp->sh_flags = SHF_EXECINSTR | SHF_ALLOC;
init_tramp->sh_addralign = __alignof__(struct plt_entry);
init_tramp->sh_size = NR_FTRACE_PLTS * sizeof(struct plt_entry);
}
return 0;
}


@ -466,6 +466,17 @@ static int module_init_ftrace_plt(const Elf_Ehdr *hdr,
__init_plt(&plts[FTRACE_PLT_IDX], FTRACE_ADDR);
mod->arch.ftrace_trampolines = plts;
s = find_section(hdr, sechdrs, ".init.text.ftrace_trampoline");
if (!s)
return -ENOEXEC;
plts = (void *)s->sh_addr;
__init_plt(&plts[FTRACE_PLT_IDX], FTRACE_ADDR);
mod->arch.init_ftrace_trampolines = plts;
#endif
return 0;
}


@ -65,7 +65,7 @@ config RISCV
select ARCH_SUPPORTS_HUGE_PFNMAP if TRANSPARENT_HUGEPAGE
select ARCH_SUPPORTS_HUGETLBFS if MMU
# LLD >= 14: https://github.com/llvm/llvm-project/issues/50505
select ARCH_SUPPORTS_LTO_CLANG if LLD_VERSION >= 140000
select ARCH_SUPPORTS_LTO_CLANG if LLD_VERSION >= 140000 && CMODEL_MEDANY
select ARCH_SUPPORTS_LTO_CLANG_THIN if LLD_VERSION >= 140000
select ARCH_SUPPORTS_MSEAL_SYSTEM_MAPPINGS if 64BIT && MMU
select ARCH_SUPPORTS_PAGE_TABLE_CHECK if MMU


@ -91,7 +91,7 @@
#endif
.macro asm_per_cpu dst sym tmp
REG_L \tmp, TASK_TI_CPU_NUM(tp)
lw \tmp, TASK_TI_CPU_NUM(tp)
slli \tmp, \tmp, PER_CPU_OFFSET_SHIFT
la \dst, __per_cpu_offset
add \dst, \dst, \tmp


@ -209,7 +209,7 @@ do { \
err = 0; \
break; \
__gu_failed: \
x = 0; \
x = (__typeof__(x))0; \
err = -EFAULT; \
} while (0)
@ -311,7 +311,7 @@ do { \
do { \
if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) && \
!IS_ALIGNED((uintptr_t)__gu_ptr, sizeof(*__gu_ptr))) { \
__inttype(x) ___val = (__inttype(x))x; \
__typeof__(*(__gu_ptr)) ___val = (x); \
if (__asm_copy_to_user_sum_enabled(__gu_ptr, &(___val), sizeof(*__gu_ptr))) \
goto label; \
break; \
@ -438,10 +438,10 @@ unsigned long __must_check clear_user(void __user *to, unsigned long n)
}
#define __get_kernel_nofault(dst, src, type, err_label) \
__get_user_nocheck(*((type *)(dst)), (type *)(src), err_label)
__get_user_nocheck(*((type *)(dst)), (__force __user type *)(src), err_label)
#define __put_kernel_nofault(dst, src, type, err_label) \
__put_user_nocheck(*((type *)(src)), (type *)(dst), err_label)
__put_user_nocheck(*((type *)(src)), (__force __user type *)(dst), err_label)
static __must_check __always_inline bool user_access_begin(const void __user *ptr, size_t len)
{


@ -46,7 +46,7 @@
* a0 = &new_vmalloc[BIT_WORD(cpu)]
* a1 = BIT_MASK(cpu)
*/
REG_L a2, TASK_TI_CPU(tp)
lw a2, TASK_TI_CPU(tp)
/*
* Compute the new_vmalloc element position:
* (cpu / 64) * 8 = (cpu >> 6) << 3


@ -28,7 +28,7 @@ static int riscv_kexec_elf_load(struct kimage *image, struct elfhdr *ehdr,
int i;
int ret = 0;
size_t size;
struct kexec_buf kbuf;
struct kexec_buf kbuf = {};
const struct elf_phdr *phdr;
kbuf.image = image;
@ -66,7 +66,7 @@ static int elf_find_pbase(struct kimage *image, unsigned long kernel_len,
{
int i;
int ret;
struct kexec_buf kbuf;
struct kexec_buf kbuf = {};
const struct elf_phdr *phdr;
unsigned long lowest_paddr = ULONG_MAX;
unsigned long lowest_vaddr = ULONG_MAX;


@ -41,7 +41,7 @@ static void *image_load(struct kimage *image,
struct riscv_image_header *h;
u64 flags;
bool be_image, be_kernel;
struct kexec_buf kbuf;
struct kexec_buf kbuf = {};
int ret;
/* Check Image header */


@ -261,7 +261,7 @@ int load_extra_segments(struct kimage *image, unsigned long kernel_start,
int ret;
void *fdt;
unsigned long initrd_pbase = 0UL;
struct kexec_buf kbuf;
struct kexec_buf kbuf = {};
char *modified_cmdline = NULL;
kbuf.image = image;


@ -1356,7 +1356,7 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
emit_mv(rd, rs, ctx);
#ifdef CONFIG_SMP
/* Load current CPU number in T1 */
emit_ld(RV_REG_T1, offsetof(struct thread_info, cpu),
emit_lw(RV_REG_T1, offsetof(struct thread_info, cpu),
RV_REG_TP, ctx);
/* Load address of __per_cpu_offset array in T2 */
emit_addr(RV_REG_T2, (u64)&__per_cpu_offset, extra_pass, ctx);
@ -1763,7 +1763,7 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
*/
if (insn->src_reg == 0 && insn->imm == BPF_FUNC_get_smp_processor_id) {
/* Load current CPU number in R0 */
emit_ld(bpf_to_rv_reg(BPF_REG_0, ctx), offsetof(struct thread_info, cpu),
emit_lw(bpf_to_rv_reg(BPF_REG_0, ctx), offsetof(struct thread_info, cpu),
RV_REG_TP, ctx);
break;
}


@ -16,7 +16,7 @@
static int kexec_file_add_kernel_elf(struct kimage *image,
struct s390_load_data *data)
{
struct kexec_buf buf;
struct kexec_buf buf = {};
const Elf_Ehdr *ehdr;
const Elf_Phdr *phdr;
Elf_Addr entry;


@ -16,7 +16,7 @@
static int kexec_file_add_kernel_image(struct kimage *image,
struct s390_load_data *data)
{
struct kexec_buf buf;
struct kexec_buf buf = {};
buf.image = image;


@ -129,7 +129,7 @@ static int kexec_file_update_purgatory(struct kimage *image,
static int kexec_file_add_purgatory(struct kimage *image,
struct s390_load_data *data)
{
struct kexec_buf buf;
struct kexec_buf buf = {};
int ret;
buf.image = image;
@ -152,7 +152,7 @@ static int kexec_file_add_purgatory(struct kimage *image,
static int kexec_file_add_initrd(struct kimage *image,
struct s390_load_data *data)
{
struct kexec_buf buf;
struct kexec_buf buf = {};
int ret;
buf.image = image;
@ -184,7 +184,7 @@ static int kexec_file_add_ipl_report(struct kimage *image,
{
__u32 *lc_ipl_parmblock_ptr;
unsigned int len, ncerts;
struct kexec_buf buf;
struct kexec_buf buf = {};
unsigned long addr;
void *ptr, *end;
int ret;


@ -760,8 +760,6 @@ static int __hw_perf_event_init(struct perf_event *event, unsigned int type)
break;
case PERF_TYPE_HARDWARE:
if (is_sampling_event(event)) /* No sampling support */
return -ENOENT;
ev = attr->config;
if (!attr->exclude_user && attr->exclude_kernel) {
/*
@ -859,6 +857,8 @@ static int cpumf_pmu_event_init(struct perf_event *event)
unsigned int type = event->attr.type;
int err = -ENOENT;
if (is_sampling_event(event)) /* No sampling support */
return err;
if (type == PERF_TYPE_HARDWARE || type == PERF_TYPE_RAW)
err = __hw_perf_event_init(event, type);
else if (event->pmu->type == type)


@ -285,10 +285,10 @@ static int paicrypt_event_init(struct perf_event *event)
/* PAI crypto PMU registered as PERF_TYPE_RAW, check event type */
if (a->type != PERF_TYPE_RAW && event->pmu->type != a->type)
return -ENOENT;
/* PAI crypto event must be in valid range */
/* PAI crypto event must be in valid range, try others if not */
if (a->config < PAI_CRYPTO_BASE ||
a->config > PAI_CRYPTO_BASE + paicrypt_cnt)
return -EINVAL;
return -ENOENT;
/* Allow only CRYPTO_ALL for sampling */
if (a->sample_period && a->config != PAI_CRYPTO_BASE)
return -EINVAL;


@ -265,7 +265,7 @@ static int paiext_event_valid(struct perf_event *event)
event->hw.config_base = offsetof(struct paiext_cb, acc);
return 0;
}
return -EINVAL;
return -ENOENT;
}
/* Might be called on different CPU than the one the event is intended for. */


@ -335,7 +335,6 @@ pte_t ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr,
int nodat;
struct mm_struct *mm = vma->vm_mm;
preempt_disable();
pgste = ptep_xchg_start(mm, addr, ptep);
nodat = !!(pgste_val(pgste) & _PGSTE_GPS_NODAT);
old = ptep_flush_lazy(mm, addr, ptep, nodat);
@ -360,7 +359,6 @@ void ptep_modify_prot_commit(struct vm_area_struct *vma, unsigned long addr,
} else {
set_pte(ptep, pte);
}
preempt_enable();
}
static inline void pmdp_idte_local(struct mm_struct *mm,


@ -2701,6 +2701,15 @@ config MITIGATION_TSA
security vulnerability on AMD CPUs which can lead to forwarding of
invalid info to subsequent instructions and thus can affect their
timing and thereby cause a leakage.
config MITIGATION_VMSCAPE
bool "Mitigate VMSCAPE"
depends on KVM
default y
help
Enable mitigation for VMSCAPE attacks. VMSCAPE is a hardware security
vulnerability on Intel and AMD CPUs that may allow a guest to do
Spectre v2 style attacks on the userspace hypervisor.
endif
config ARCH_HAS_ADD_PAGES


@ -495,6 +495,7 @@
#define X86_FEATURE_TSA_SQ_NO (21*32+11) /* AMD CPU not vulnerable to TSA-SQ */
#define X86_FEATURE_TSA_L1_NO (21*32+12) /* AMD CPU not vulnerable to TSA-L1 */
#define X86_FEATURE_CLEAR_CPU_BUF_VM (21*32+13) /* Clear CPU buffers using VERW before VMRUN */
#define X86_FEATURE_IBPB_EXIT_TO_USER (21*32+14) /* Use IBPB on exit-to-userspace, see VMSCAPE bug */
/*
* BUG word(s)
@ -551,4 +552,5 @@
#define X86_BUG_ITS X86_BUG( 1*32+ 7) /* "its" CPU is affected by Indirect Target Selection */
#define X86_BUG_ITS_NATIVE_ONLY X86_BUG( 1*32+ 8) /* "its_native_only" CPU is affected by ITS, VMX is not affected */
#define X86_BUG_TSA X86_BUG( 1*32+ 9) /* "tsa" CPU is affected by Transient Scheduler Attacks */
#define X86_BUG_VMSCAPE X86_BUG( 1*32+10) /* "vmscape" CPU is affected by VMSCAPE attacks from guests */
#endif /* _ASM_X86_CPUFEATURES_H */


@ -93,6 +93,13 @@ static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs,
* 8 (ia32) bits.
*/
choose_random_kstack_offset(rdtsc());
/* Avoid unnecessary reads of 'x86_ibpb_exit_to_user' */
if (cpu_feature_enabled(X86_FEATURE_IBPB_EXIT_TO_USER) &&
this_cpu_read(x86_ibpb_exit_to_user)) {
indirect_branch_prediction_barrier();
this_cpu_write(x86_ibpb_exit_to_user, false);
}
}
#define arch_exit_to_user_mode_prepare arch_exit_to_user_mode_prepare


@ -530,6 +530,8 @@ void alternative_msr_write(unsigned int msr, u64 val, unsigned int feature)
: "memory");
}
DECLARE_PER_CPU(bool, x86_ibpb_exit_to_user);
static inline void indirect_branch_prediction_barrier(void)
{
asm_inline volatile(ALTERNATIVE("", "call write_ibpb", X86_FEATURE_IBPB)


@ -96,6 +96,9 @@ static void __init its_update_mitigation(void);
static void __init its_apply_mitigation(void);
static void __init tsa_select_mitigation(void);
static void __init tsa_apply_mitigation(void);
static void __init vmscape_select_mitigation(void);
static void __init vmscape_update_mitigation(void);
static void __init vmscape_apply_mitigation(void);
/* The base value of the SPEC_CTRL MSR without task-specific bits set */
u64 x86_spec_ctrl_base;
@ -105,6 +108,14 @@ EXPORT_SYMBOL_GPL(x86_spec_ctrl_base);
DEFINE_PER_CPU(u64, x86_spec_ctrl_current);
EXPORT_PER_CPU_SYMBOL_GPL(x86_spec_ctrl_current);
/*
* Set when the CPU has run a potentially malicious guest. An IBPB will
* be needed before running userspace. That IBPB will flush the branch
* predictor content.
*/
DEFINE_PER_CPU(bool, x86_ibpb_exit_to_user);
EXPORT_PER_CPU_SYMBOL_GPL(x86_ibpb_exit_to_user);
u64 x86_pred_cmd __ro_after_init = PRED_CMD_IBPB;
static u64 __ro_after_init x86_arch_cap_msr;
@ -262,6 +273,7 @@ void __init cpu_select_mitigations(void)
its_select_mitigation();
bhi_select_mitigation();
tsa_select_mitigation();
vmscape_select_mitigation();
/*
* After mitigations are selected, some may need to update their
@ -293,6 +305,7 @@ void __init cpu_select_mitigations(void)
bhi_update_mitigation();
/* srso_update_mitigation() depends on retbleed_update_mitigation(). */
srso_update_mitigation();
vmscape_update_mitigation();
spectre_v1_apply_mitigation();
spectre_v2_apply_mitigation();
@ -310,6 +323,7 @@ void __init cpu_select_mitigations(void)
its_apply_mitigation();
bhi_apply_mitigation();
tsa_apply_mitigation();
vmscape_apply_mitigation();
}
/*
@ -2538,88 +2552,6 @@ static void update_mds_branch_idle(void)
}
}
#define MDS_MSG_SMT "MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.\n"
#define TAA_MSG_SMT "TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.\n"
#define MMIO_MSG_SMT "MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.\n"
void cpu_bugs_smt_update(void)
{
mutex_lock(&spec_ctrl_mutex);
if (sched_smt_active() && unprivileged_ebpf_enabled() &&
spectre_v2_enabled == SPECTRE_V2_EIBRS_LFENCE)
pr_warn_once(SPECTRE_V2_EIBRS_LFENCE_EBPF_SMT_MSG);
switch (spectre_v2_user_stibp) {
case SPECTRE_V2_USER_NONE:
break;
case SPECTRE_V2_USER_STRICT:
case SPECTRE_V2_USER_STRICT_PREFERRED:
update_stibp_strict();
break;
case SPECTRE_V2_USER_PRCTL:
case SPECTRE_V2_USER_SECCOMP:
update_indir_branch_cond();
break;
}
switch (mds_mitigation) {
case MDS_MITIGATION_FULL:
case MDS_MITIGATION_AUTO:
case MDS_MITIGATION_VMWERV:
if (sched_smt_active() && !boot_cpu_has(X86_BUG_MSBDS_ONLY))
pr_warn_once(MDS_MSG_SMT);
update_mds_branch_idle();
break;
case MDS_MITIGATION_OFF:
break;
}
switch (taa_mitigation) {
case TAA_MITIGATION_VERW:
case TAA_MITIGATION_AUTO:
case TAA_MITIGATION_UCODE_NEEDED:
if (sched_smt_active())
pr_warn_once(TAA_MSG_SMT);
break;
case TAA_MITIGATION_TSX_DISABLED:
case TAA_MITIGATION_OFF:
break;
}
switch (mmio_mitigation) {
case MMIO_MITIGATION_VERW:
case MMIO_MITIGATION_AUTO:
case MMIO_MITIGATION_UCODE_NEEDED:
if (sched_smt_active())
pr_warn_once(MMIO_MSG_SMT);
break;
case MMIO_MITIGATION_OFF:
break;
}
switch (tsa_mitigation) {
case TSA_MITIGATION_USER_KERNEL:
case TSA_MITIGATION_VM:
case TSA_MITIGATION_AUTO:
case TSA_MITIGATION_FULL:
/*
* TSA-SQ can potentially lead to info leakage between
* SMT threads.
*/
if (sched_smt_active())
static_branch_enable(&cpu_buf_idle_clear);
else
static_branch_disable(&cpu_buf_idle_clear);
break;
case TSA_MITIGATION_NONE:
case TSA_MITIGATION_UCODE_NEEDED:
break;
}
mutex_unlock(&spec_ctrl_mutex);
}
#undef pr_fmt
#define pr_fmt(fmt) "Speculative Store Bypass: " fmt
@ -3330,9 +3262,185 @@ static void __init srso_apply_mitigation(void)
}
}
#undef pr_fmt
#define pr_fmt(fmt) "VMSCAPE: " fmt
enum vmscape_mitigations {
VMSCAPE_MITIGATION_NONE,
VMSCAPE_MITIGATION_AUTO,
VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER,
VMSCAPE_MITIGATION_IBPB_ON_VMEXIT,
};
static const char * const vmscape_strings[] = {
[VMSCAPE_MITIGATION_NONE] = "Vulnerable",
/* [VMSCAPE_MITIGATION_AUTO] */
[VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER] = "Mitigation: IBPB before exit to userspace",
[VMSCAPE_MITIGATION_IBPB_ON_VMEXIT] = "Mitigation: IBPB on VMEXIT",
};
static enum vmscape_mitigations vmscape_mitigation __ro_after_init =
IS_ENABLED(CONFIG_MITIGATION_VMSCAPE) ? VMSCAPE_MITIGATION_AUTO : VMSCAPE_MITIGATION_NONE;
static int __init vmscape_parse_cmdline(char *str)
{
if (!str)
return -EINVAL;
if (!strcmp(str, "off")) {
vmscape_mitigation = VMSCAPE_MITIGATION_NONE;
} else if (!strcmp(str, "ibpb")) {
vmscape_mitigation = VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER;
} else if (!strcmp(str, "force")) {
setup_force_cpu_bug(X86_BUG_VMSCAPE);
vmscape_mitigation = VMSCAPE_MITIGATION_AUTO;
} else {
pr_err("Ignoring unknown vmscape=%s option.\n", str);
}
return 0;
}
early_param("vmscape", vmscape_parse_cmdline);
static void __init vmscape_select_mitigation(void)
{
if (cpu_mitigations_off() ||
!boot_cpu_has_bug(X86_BUG_VMSCAPE) ||
!boot_cpu_has(X86_FEATURE_IBPB)) {
vmscape_mitigation = VMSCAPE_MITIGATION_NONE;
return;
}
if (vmscape_mitigation == VMSCAPE_MITIGATION_AUTO)
vmscape_mitigation = VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER;
}
static void __init vmscape_update_mitigation(void)
{
if (!boot_cpu_has_bug(X86_BUG_VMSCAPE))
return;
if (retbleed_mitigation == RETBLEED_MITIGATION_IBPB ||
srso_mitigation == SRSO_MITIGATION_IBPB_ON_VMEXIT)
vmscape_mitigation = VMSCAPE_MITIGATION_IBPB_ON_VMEXIT;
pr_info("%s\n", vmscape_strings[vmscape_mitigation]);
}
static void __init vmscape_apply_mitigation(void)
{
if (vmscape_mitigation == VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER)
setup_force_cpu_cap(X86_FEATURE_IBPB_EXIT_TO_USER);
}
#undef pr_fmt
#define pr_fmt(fmt) fmt
#define MDS_MSG_SMT "MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.\n"
#define TAA_MSG_SMT "TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.\n"
#define MMIO_MSG_SMT "MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.\n"
#define VMSCAPE_MSG_SMT "VMSCAPE: SMT on, STIBP is required for full protection. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/vmscape.html for more details.\n"
void cpu_bugs_smt_update(void)
{
mutex_lock(&spec_ctrl_mutex);
if (sched_smt_active() && unprivileged_ebpf_enabled() &&
spectre_v2_enabled == SPECTRE_V2_EIBRS_LFENCE)
pr_warn_once(SPECTRE_V2_EIBRS_LFENCE_EBPF_SMT_MSG);
switch (spectre_v2_user_stibp) {
case SPECTRE_V2_USER_NONE:
break;
case SPECTRE_V2_USER_STRICT:
case SPECTRE_V2_USER_STRICT_PREFERRED:
update_stibp_strict();
break;
case SPECTRE_V2_USER_PRCTL:
case SPECTRE_V2_USER_SECCOMP:
update_indir_branch_cond();
break;
}
switch (mds_mitigation) {
case MDS_MITIGATION_FULL:
case MDS_MITIGATION_AUTO:
case MDS_MITIGATION_VMWERV:
if (sched_smt_active() && !boot_cpu_has(X86_BUG_MSBDS_ONLY))
pr_warn_once(MDS_MSG_SMT);
update_mds_branch_idle();
break;
case MDS_MITIGATION_OFF:
break;
}
switch (taa_mitigation) {
case TAA_MITIGATION_VERW:
case TAA_MITIGATION_AUTO:
case TAA_MITIGATION_UCODE_NEEDED:
if (sched_smt_active())
pr_warn_once(TAA_MSG_SMT);
break;
case TAA_MITIGATION_TSX_DISABLED:
case TAA_MITIGATION_OFF:
break;
}
switch (mmio_mitigation) {
case MMIO_MITIGATION_VERW:
case MMIO_MITIGATION_AUTO:
case MMIO_MITIGATION_UCODE_NEEDED:
if (sched_smt_active())
pr_warn_once(MMIO_MSG_SMT);
break;
case MMIO_MITIGATION_OFF:
break;
}
switch (tsa_mitigation) {
case TSA_MITIGATION_USER_KERNEL:
case TSA_MITIGATION_VM:
case TSA_MITIGATION_AUTO:
case TSA_MITIGATION_FULL:
/*
* TSA-SQ can potentially lead to info leakage between
* SMT threads.
*/
if (sched_smt_active())
static_branch_enable(&cpu_buf_idle_clear);
else
static_branch_disable(&cpu_buf_idle_clear);
break;
case TSA_MITIGATION_NONE:
case TSA_MITIGATION_UCODE_NEEDED:
break;
}
switch (vmscape_mitigation) {
case VMSCAPE_MITIGATION_NONE:
case VMSCAPE_MITIGATION_AUTO:
break;
case VMSCAPE_MITIGATION_IBPB_ON_VMEXIT:
case VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER:
/*
* Hypervisors can be attacked across-threads, warn for SMT when
* STIBP is not already enabled system-wide.
*
* Intel eIBRS (!AUTOIBRS) implies STIBP on.
*/
if (!sched_smt_active() ||
spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT ||
spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED ||
(spectre_v2_in_eibrs_mode(spectre_v2_enabled) &&
!boot_cpu_has(X86_FEATURE_AUTOIBRS)))
break;
pr_warn_once(VMSCAPE_MSG_SMT);
break;
}
mutex_unlock(&spec_ctrl_mutex);
}
#ifdef CONFIG_SYSFS
#define L1TF_DEFAULT_MSG "Mitigation: PTE Inversion"
@ -3578,6 +3686,11 @@ static ssize_t tsa_show_state(char *buf)
return sysfs_emit(buf, "%s\n", tsa_strings[tsa_mitigation]);
}
static ssize_t vmscape_show_state(char *buf)
{
return sysfs_emit(buf, "%s\n", vmscape_strings[vmscape_mitigation]);
}
static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
char *buf, unsigned int bug)
{
@ -3644,6 +3757,9 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
case X86_BUG_TSA:
return tsa_show_state(buf);
case X86_BUG_VMSCAPE:
return vmscape_show_state(buf);
default:
break;
}
@ -3735,6 +3851,11 @@ ssize_t cpu_show_tsa(struct device *dev, struct device_attribute *attr, char *bu
{
return cpu_show_common(dev, attr, buf, X86_BUG_TSA);
}
ssize_t cpu_show_vmscape(struct device *dev, struct device_attribute *attr, char *buf)
{
return cpu_show_common(dev, attr, buf, X86_BUG_VMSCAPE);
}
#endif
void __warn_thunk(void)


@ -1236,55 +1236,71 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
#define ITS_NATIVE_ONLY BIT(9)
/* CPU is affected by Transient Scheduler Attacks */
#define TSA BIT(10)
/* CPU is affected by VMSCAPE */
#define VMSCAPE BIT(11)
static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
VULNBL_INTEL_STEPS(INTEL_IVYBRIDGE, X86_STEP_MAX, SRBDS),
VULNBL_INTEL_STEPS(INTEL_HASWELL, X86_STEP_MAX, SRBDS),
VULNBL_INTEL_STEPS(INTEL_HASWELL_L, X86_STEP_MAX, SRBDS),
VULNBL_INTEL_STEPS(INTEL_HASWELL_G, X86_STEP_MAX, SRBDS),
VULNBL_INTEL_STEPS(INTEL_HASWELL_X, X86_STEP_MAX, MMIO),
VULNBL_INTEL_STEPS(INTEL_BROADWELL_D, X86_STEP_MAX, MMIO),
VULNBL_INTEL_STEPS(INTEL_BROADWELL_G, X86_STEP_MAX, SRBDS),
VULNBL_INTEL_STEPS(INTEL_BROADWELL_X, X86_STEP_MAX, MMIO),
VULNBL_INTEL_STEPS(INTEL_BROADWELL, X86_STEP_MAX, SRBDS),
VULNBL_INTEL_STEPS(INTEL_SKYLAKE_X, 0x5, MMIO | RETBLEED | GDS),
VULNBL_INTEL_STEPS(INTEL_SKYLAKE_X, X86_STEP_MAX, MMIO | RETBLEED | GDS | ITS),
VULNBL_INTEL_STEPS(INTEL_SKYLAKE_L, X86_STEP_MAX, MMIO | RETBLEED | GDS | SRBDS),
VULNBL_INTEL_STEPS(INTEL_SKYLAKE, X86_STEP_MAX, MMIO | RETBLEED | GDS | SRBDS),
VULNBL_INTEL_STEPS(INTEL_KABYLAKE_L, 0xb, MMIO | RETBLEED | GDS | SRBDS),
VULNBL_INTEL_STEPS(INTEL_KABYLAKE_L, X86_STEP_MAX, MMIO | RETBLEED | GDS | SRBDS | ITS),
VULNBL_INTEL_STEPS(INTEL_KABYLAKE, 0xc, MMIO | RETBLEED | GDS | SRBDS),
VULNBL_INTEL_STEPS(INTEL_KABYLAKE, X86_STEP_MAX, MMIO | RETBLEED | GDS | SRBDS | ITS),
VULNBL_INTEL_STEPS(INTEL_CANNONLAKE_L, X86_STEP_MAX, RETBLEED),
VULNBL_INTEL_STEPS(INTEL_SANDYBRIDGE_X, X86_STEP_MAX, VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_SANDYBRIDGE, X86_STEP_MAX, VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_IVYBRIDGE_X, X86_STEP_MAX, VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_IVYBRIDGE, X86_STEP_MAX, SRBDS | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_HASWELL, X86_STEP_MAX, SRBDS | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_HASWELL_L, X86_STEP_MAX, SRBDS | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_HASWELL_G, X86_STEP_MAX, SRBDS | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_HASWELL_X, X86_STEP_MAX, MMIO | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_BROADWELL_D, X86_STEP_MAX, MMIO | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_BROADWELL_X, X86_STEP_MAX, MMIO | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_BROADWELL_G, X86_STEP_MAX, SRBDS | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_BROADWELL, X86_STEP_MAX, SRBDS | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_SKYLAKE_X, 0x5, MMIO | RETBLEED | GDS | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_SKYLAKE_X, X86_STEP_MAX, MMIO | RETBLEED | GDS | ITS | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_SKYLAKE_L, X86_STEP_MAX, MMIO | RETBLEED | GDS | SRBDS | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_SKYLAKE, X86_STEP_MAX, MMIO | RETBLEED | GDS | SRBDS | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_KABYLAKE_L, 0xb, MMIO | RETBLEED | GDS | SRBDS | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_KABYLAKE_L, X86_STEP_MAX, MMIO | RETBLEED | GDS | SRBDS | ITS | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_KABYLAKE, 0xc, MMIO | RETBLEED | GDS | SRBDS | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_KABYLAKE, X86_STEP_MAX, MMIO | RETBLEED | GDS | SRBDS | ITS | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_CANNONLAKE_L, X86_STEP_MAX, RETBLEED | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_ICELAKE_L, X86_STEP_MAX, MMIO | MMIO_SBDS | RETBLEED | GDS | ITS | ITS_NATIVE_ONLY),
VULNBL_INTEL_STEPS(INTEL_ICELAKE_D, X86_STEP_MAX, MMIO | GDS | ITS | ITS_NATIVE_ONLY),
VULNBL_INTEL_STEPS(INTEL_ICELAKE_X, X86_STEP_MAX, MMIO | GDS | ITS | ITS_NATIVE_ONLY),
VULNBL_INTEL_STEPS(INTEL_COMETLAKE, X86_STEP_MAX, MMIO | MMIO_SBDS | RETBLEED | GDS | ITS),
VULNBL_INTEL_STEPS(INTEL_COMETLAKE_L, 0x0, MMIO | RETBLEED | ITS),
VULNBL_INTEL_STEPS(INTEL_COMETLAKE_L, X86_STEP_MAX, MMIO | MMIO_SBDS | RETBLEED | GDS | ITS),
VULNBL_INTEL_STEPS(INTEL_COMETLAKE, X86_STEP_MAX, MMIO | MMIO_SBDS | RETBLEED | GDS | ITS | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_COMETLAKE_L, 0x0, MMIO | RETBLEED | ITS | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_COMETLAKE_L, X86_STEP_MAX, MMIO | MMIO_SBDS | RETBLEED | GDS | ITS | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_TIGERLAKE_L, X86_STEP_MAX, GDS | ITS | ITS_NATIVE_ONLY),
VULNBL_INTEL_STEPS(INTEL_TIGERLAKE, X86_STEP_MAX, GDS | ITS | ITS_NATIVE_ONLY),
VULNBL_INTEL_STEPS(INTEL_LAKEFIELD, X86_STEP_MAX, MMIO | MMIO_SBDS | RETBLEED),
VULNBL_INTEL_STEPS(INTEL_ROCKETLAKE, X86_STEP_MAX, MMIO | RETBLEED | GDS | ITS | ITS_NATIVE_ONLY),
VULNBL_INTEL_TYPE(INTEL_ALDERLAKE, ATOM, RFDS),
VULNBL_INTEL_STEPS(INTEL_ALDERLAKE_L, X86_STEP_MAX, RFDS),
VULNBL_INTEL_TYPE(INTEL_RAPTORLAKE, ATOM, RFDS),
VULNBL_INTEL_STEPS(INTEL_RAPTORLAKE_P, X86_STEP_MAX, RFDS),
VULNBL_INTEL_STEPS(INTEL_RAPTORLAKE_S, X86_STEP_MAX, RFDS),
VULNBL_INTEL_STEPS(INTEL_ATOM_GRACEMONT, X86_STEP_MAX, RFDS),
VULNBL_INTEL_TYPE(INTEL_ALDERLAKE, ATOM, RFDS | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_ALDERLAKE, X86_STEP_MAX, VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_ALDERLAKE_L, X86_STEP_MAX, RFDS | VMSCAPE),
VULNBL_INTEL_TYPE(INTEL_RAPTORLAKE, ATOM, RFDS | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_RAPTORLAKE, X86_STEP_MAX, VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_RAPTORLAKE_P, X86_STEP_MAX, RFDS | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_RAPTORLAKE_S, X86_STEP_MAX, RFDS | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_METEORLAKE_L, X86_STEP_MAX, VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_ARROWLAKE_H, X86_STEP_MAX, VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_ARROWLAKE, X86_STEP_MAX, VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_ARROWLAKE_U, X86_STEP_MAX, VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_LUNARLAKE_M, X86_STEP_MAX, VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_SAPPHIRERAPIDS_X, X86_STEP_MAX, VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_GRANITERAPIDS_X, X86_STEP_MAX, VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_EMERALDRAPIDS_X, X86_STEP_MAX, VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_ATOM_GRACEMONT, X86_STEP_MAX, RFDS | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_ATOM_TREMONT, X86_STEP_MAX, MMIO | MMIO_SBDS | RFDS),
VULNBL_INTEL_STEPS(INTEL_ATOM_TREMONT_D, X86_STEP_MAX, MMIO | RFDS),
VULNBL_INTEL_STEPS(INTEL_ATOM_TREMONT_L, X86_STEP_MAX, MMIO | MMIO_SBDS | RFDS),
VULNBL_INTEL_STEPS(INTEL_ATOM_GOLDMONT, X86_STEP_MAX, RFDS),
VULNBL_INTEL_STEPS(INTEL_ATOM_GOLDMONT_D, X86_STEP_MAX, RFDS),
VULNBL_INTEL_STEPS(INTEL_ATOM_GOLDMONT_PLUS, X86_STEP_MAX, RFDS),
VULNBL_INTEL_STEPS(INTEL_ATOM_CRESTMONT_X, X86_STEP_MAX, VMSCAPE),
VULNBL_AMD(0x15, RETBLEED),
VULNBL_AMD(0x16, RETBLEED),
VULNBL_AMD(0x17, RETBLEED | SMT_RSB | SRSO),
VULNBL_HYGON(0x18, RETBLEED | SMT_RSB | SRSO),
VULNBL_AMD(0x19, SRSO | TSA),
VULNBL_AMD(0x1a, SRSO),
VULNBL_AMD(0x17, RETBLEED | SMT_RSB | SRSO | VMSCAPE),
VULNBL_HYGON(0x18, RETBLEED | SMT_RSB | SRSO | VMSCAPE),
VULNBL_AMD(0x19, SRSO | TSA | VMSCAPE),
VULNBL_AMD(0x1a, SRSO | VMSCAPE),
{}
};
@ -1543,6 +1559,14 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
}
}
/*
* Set the bug only on bare-metal. A nested hypervisor should already be
* deploying IBPB to isolate itself from nested guests.
*/
if (cpu_matches(cpu_vuln_blacklist, VMSCAPE) &&
!boot_cpu_has(X86_FEATURE_HYPERVISOR))
setup_force_cpu_bug(X86_BUG_VMSCAPE);
if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
return;


@ -11010,6 +11010,15 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
if (vcpu->arch.guest_fpu.xfd_err)
wrmsrq(MSR_IA32_XFD_ERR, 0);
/*
* Mark this CPU as needing a branch predictor flush before running
* userspace. Must be done before enabling preemption to ensure it gets
* set for the CPU that actually ran the guest, and not the CPU that it
* may migrate to.
*/
if (cpu_feature_enabled(X86_FEATURE_IBPB_EXIT_TO_USER))
this_cpu_write(x86_ibpb_exit_to_user, true);
/*
* Consume any pending interrupts, including the possible source of
* VM-Exit on SVM and any ticks that occur between VM-Exit and now.


@ -7,6 +7,7 @@
#include <linux/init.h>
#include <linux/mm.h>
#include <linux/blkdev.h>
#include <linux/blk-integrity.h>
#include <linux/buffer_head.h>
#include <linux/mpage.h>
#include <linux/uio.h>
@ -54,7 +55,6 @@ static ssize_t __blkdev_direct_IO_simple(struct kiocb *iocb,
struct bio bio;
ssize_t ret;
WARN_ON_ONCE(iocb->ki_flags & IOCB_HAS_METADATA);
if (nr_pages <= DIO_INLINE_BIO_VECS)
vecs = inline_vecs;
else {
@ -131,7 +131,7 @@ static void blkdev_bio_end_io(struct bio *bio)
if (bio->bi_status && !dio->bio.bi_status)
dio->bio.bi_status = bio->bi_status;
if (!is_sync && (dio->iocb->ki_flags & IOCB_HAS_METADATA))
if (bio_integrity(bio))
bio_integrity_unmap_user(bio);
if (atomic_dec_and_test(&dio->ref)) {
@ -233,7 +233,7 @@ static ssize_t __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter,
}
bio->bi_opf |= REQ_NOWAIT;
}
if (!is_sync && (iocb->ki_flags & IOCB_HAS_METADATA)) {
if (iocb->ki_flags & IOCB_HAS_METADATA) {
ret = bio_integrity_map_iter(bio, iocb->private);
if (unlikely(ret))
goto fail;
@ -301,7 +301,7 @@ static void blkdev_bio_end_io_async(struct bio *bio)
ret = blk_status_to_errno(bio->bi_status);
}
if (iocb->ki_flags & IOCB_HAS_METADATA)
if (bio_integrity(bio))
bio_integrity_unmap_user(bio);
iocb->ki_complete(iocb, ret);
@ -422,7 +422,8 @@ static ssize_t blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
}
nr_pages = bio_iov_vecs_to_alloc(iter, BIO_MAX_VECS + 1);
if (likely(nr_pages <= BIO_MAX_VECS)) {
if (likely(nr_pages <= BIO_MAX_VECS &&
!(iocb->ki_flags & IOCB_HAS_METADATA))) {
if (is_sync_kiocb(iocb))
return __blkdev_direct_IO_simple(iocb, iter, bdev,
nr_pages);
@ -687,6 +688,8 @@ static int blkdev_open(struct inode *inode, struct file *filp)
if (bdev_can_atomic_write(bdev))
filp->f_mode |= FMODE_CAN_ATOMIC_WRITE;
if (blk_get_integrity(bdev->bd_disk))
filp->f_mode |= FMODE_HAS_METADATA;
ret = bdev_open(bdev, mode, filp->private_data, NULL, filp);
if (ret)


@ -49,6 +49,18 @@ static int __crypto_sha1_import(struct sha1_ctx *ctx, const void *in)
return 0;
}
static int __crypto_sha1_export_core(const struct sha1_ctx *ctx, void *out)
{
memcpy(out, ctx, offsetof(struct sha1_ctx, buf));
return 0;
}
static int __crypto_sha1_import_core(struct sha1_ctx *ctx, const void *in)
{
memcpy(ctx, in, offsetof(struct sha1_ctx, buf));
return 0;
}
const u8 sha1_zero_message_hash[SHA1_DIGEST_SIZE] = {
0xda, 0x39, 0xa3, 0xee, 0x5e, 0x6b, 0x4b, 0x0d,
0x32, 0x55, 0xbf, 0xef, 0x95, 0x60, 0x18, 0x90,
@ -94,6 +106,16 @@ static int crypto_sha1_import(struct shash_desc *desc, const void *in)
return __crypto_sha1_import(SHA1_CTX(desc), in);
}
static int crypto_sha1_export_core(struct shash_desc *desc, void *out)
{
return __crypto_sha1_export_core(SHA1_CTX(desc), out);
}
static int crypto_sha1_import_core(struct shash_desc *desc, const void *in)
{
return __crypto_sha1_import_core(SHA1_CTX(desc), in);
}
#define HMAC_SHA1_KEY(tfm) ((struct hmac_sha1_key *)crypto_shash_ctx(tfm))
#define HMAC_SHA1_CTX(desc) ((struct hmac_sha1_ctx *)shash_desc_ctx(desc))
@ -143,6 +165,19 @@ static int crypto_hmac_sha1_import(struct shash_desc *desc, const void *in)
return __crypto_sha1_import(&ctx->sha_ctx, in);
}
static int crypto_hmac_sha1_export_core(struct shash_desc *desc, void *out)
{
return __crypto_sha1_export_core(&HMAC_SHA1_CTX(desc)->sha_ctx, out);
}
static int crypto_hmac_sha1_import_core(struct shash_desc *desc, const void *in)
{
struct hmac_sha1_ctx *ctx = HMAC_SHA1_CTX(desc);
ctx->ostate = HMAC_SHA1_KEY(desc->tfm)->ostate;
return __crypto_sha1_import_core(&ctx->sha_ctx, in);
}
static struct shash_alg algs[] = {
{
.base.cra_name = "sha1",
@ -157,6 +192,8 @@ static struct shash_alg algs[] = {
.digest = crypto_sha1_digest,
.export = crypto_sha1_export,
.import = crypto_sha1_import,
.export_core = crypto_sha1_export_core,
.import_core = crypto_sha1_import_core,
.descsize = sizeof(struct sha1_ctx),
.statesize = SHA1_SHASH_STATE_SIZE,
},
@ -175,6 +212,8 @@ static struct shash_alg algs[] = {
.digest = crypto_hmac_sha1_digest,
.export = crypto_hmac_sha1_export,
.import = crypto_hmac_sha1_import,
.export_core = crypto_hmac_sha1_export_core,
.import_core = crypto_hmac_sha1_import_core,
.descsize = sizeof(struct hmac_sha1_ctx),
.statesize = SHA1_SHASH_STATE_SIZE,
},


@ -50,6 +50,19 @@ static int __crypto_sha256_import(struct __sha256_ctx *ctx, const void *in)
return 0;
}
static int __crypto_sha256_export_core(const struct __sha256_ctx *ctx,
void *out)
{
memcpy(out, ctx, offsetof(struct __sha256_ctx, buf));
return 0;
}
static int __crypto_sha256_import_core(struct __sha256_ctx *ctx, const void *in)
{
memcpy(ctx, in, offsetof(struct __sha256_ctx, buf));
return 0;
}
/* SHA-224 */
const u8 sha224_zero_message_hash[SHA224_DIGEST_SIZE] = {
@ -98,6 +111,16 @@ static int crypto_sha224_import(struct shash_desc *desc, const void *in)
return __crypto_sha256_import(&SHA224_CTX(desc)->ctx, in);
}
static int crypto_sha224_export_core(struct shash_desc *desc, void *out)
{
return __crypto_sha256_export_core(&SHA224_CTX(desc)->ctx, out);
}
static int crypto_sha224_import_core(struct shash_desc *desc, const void *in)
{
return __crypto_sha256_import_core(&SHA224_CTX(desc)->ctx, in);
}
/* SHA-256 */
const u8 sha256_zero_message_hash[SHA256_DIGEST_SIZE] = {
@ -146,6 +169,16 @@ static int crypto_sha256_import(struct shash_desc *desc, const void *in)
return __crypto_sha256_import(&SHA256_CTX(desc)->ctx, in);
}
static int crypto_sha256_export_core(struct shash_desc *desc, void *out)
{
return __crypto_sha256_export_core(&SHA256_CTX(desc)->ctx, out);
}
static int crypto_sha256_import_core(struct shash_desc *desc, const void *in)
{
return __crypto_sha256_import_core(&SHA256_CTX(desc)->ctx, in);
}
/* HMAC-SHA224 */
#define HMAC_SHA224_KEY(tfm) ((struct hmac_sha224_key *)crypto_shash_ctx(tfm))
@ -198,6 +231,21 @@ static int crypto_hmac_sha224_import(struct shash_desc *desc, const void *in)
return __crypto_sha256_import(&ctx->ctx.sha_ctx, in);
}
static int crypto_hmac_sha224_export_core(struct shash_desc *desc, void *out)
{
return __crypto_sha256_export_core(&HMAC_SHA224_CTX(desc)->ctx.sha_ctx,
out);
}
static int crypto_hmac_sha224_import_core(struct shash_desc *desc,
const void *in)
{
struct hmac_sha224_ctx *ctx = HMAC_SHA224_CTX(desc);
ctx->ctx.ostate = HMAC_SHA224_KEY(desc->tfm)->key.ostate;
return __crypto_sha256_import_core(&ctx->ctx.sha_ctx, in);
}
/* HMAC-SHA256 */
#define HMAC_SHA256_KEY(tfm) ((struct hmac_sha256_key *)crypto_shash_ctx(tfm))
@ -250,6 +298,21 @@ static int crypto_hmac_sha256_import(struct shash_desc *desc, const void *in)
return __crypto_sha256_import(&ctx->ctx.sha_ctx, in);
}
static int crypto_hmac_sha256_export_core(struct shash_desc *desc, void *out)
{
return __crypto_sha256_export_core(&HMAC_SHA256_CTX(desc)->ctx.sha_ctx,
out);
}
static int crypto_hmac_sha256_import_core(struct shash_desc *desc,
const void *in)
{
struct hmac_sha256_ctx *ctx = HMAC_SHA256_CTX(desc);
ctx->ctx.ostate = HMAC_SHA256_KEY(desc->tfm)->key.ostate;
return __crypto_sha256_import_core(&ctx->ctx.sha_ctx, in);
}
/* Algorithm definitions */
static struct shash_alg algs[] = {
@ -266,6 +329,8 @@ static struct shash_alg algs[] = {
.digest = crypto_sha224_digest,
.export = crypto_sha224_export,
.import = crypto_sha224_import,
.export_core = crypto_sha224_export_core,
.import_core = crypto_sha224_import_core,
.descsize = sizeof(struct sha224_ctx),
.statesize = SHA256_SHASH_STATE_SIZE,
},
@ -282,6 +347,8 @@ static struct shash_alg algs[] = {
.digest = crypto_sha256_digest,
.export = crypto_sha256_export,
.import = crypto_sha256_import,
.export_core = crypto_sha256_export_core,
.import_core = crypto_sha256_import_core,
.descsize = sizeof(struct sha256_ctx),
.statesize = SHA256_SHASH_STATE_SIZE,
},
@ -300,6 +367,8 @@ static struct shash_alg algs[] = {
.digest = crypto_hmac_sha224_digest,
.export = crypto_hmac_sha224_export,
.import = crypto_hmac_sha224_import,
.export_core = crypto_hmac_sha224_export_core,
.import_core = crypto_hmac_sha224_import_core,
.descsize = sizeof(struct hmac_sha224_ctx),
.statesize = SHA256_SHASH_STATE_SIZE,
},
@ -318,6 +387,8 @@ static struct shash_alg algs[] = {
.digest = crypto_hmac_sha256_digest,
.export = crypto_hmac_sha256_export,
.import = crypto_hmac_sha256_import,
.export_core = crypto_hmac_sha256_export_core,
.import_core = crypto_hmac_sha256_import_core,
.descsize = sizeof(struct hmac_sha256_ctx),
.statesize = SHA256_SHASH_STATE_SIZE,
},


@ -50,6 +50,19 @@ static int __crypto_sha512_import(struct __sha512_ctx *ctx, const void *in)
return 0;
}
static int __crypto_sha512_export_core(const struct __sha512_ctx *ctx,
void *out)
{
memcpy(out, ctx, offsetof(struct __sha512_ctx, buf));
return 0;
}
static int __crypto_sha512_import_core(struct __sha512_ctx *ctx, const void *in)
{
memcpy(ctx, in, offsetof(struct __sha512_ctx, buf));
return 0;
}
/* SHA-384 */
const u8 sha384_zero_message_hash[SHA384_DIGEST_SIZE] = {
@ -100,6 +113,16 @@ static int crypto_sha384_import(struct shash_desc *desc, const void *in)
return __crypto_sha512_import(&SHA384_CTX(desc)->ctx, in);
}
static int crypto_sha384_export_core(struct shash_desc *desc, void *out)
{
return __crypto_sha512_export_core(&SHA384_CTX(desc)->ctx, out);
}
static int crypto_sha384_import_core(struct shash_desc *desc, const void *in)
{
return __crypto_sha512_import_core(&SHA384_CTX(desc)->ctx, in);
}
/* SHA-512 */
const u8 sha512_zero_message_hash[SHA512_DIGEST_SIZE] = {
@ -152,6 +175,16 @@ static int crypto_sha512_import(struct shash_desc *desc, const void *in)
return __crypto_sha512_import(&SHA512_CTX(desc)->ctx, in);
}
static int crypto_sha512_export_core(struct shash_desc *desc, void *out)
{
return __crypto_sha512_export_core(&SHA512_CTX(desc)->ctx, out);
}
static int crypto_sha512_import_core(struct shash_desc *desc, const void *in)
{
return __crypto_sha512_import_core(&SHA512_CTX(desc)->ctx, in);
}
/* HMAC-SHA384 */
#define HMAC_SHA384_KEY(tfm) ((struct hmac_sha384_key *)crypto_shash_ctx(tfm))
@ -204,6 +237,21 @@ static int crypto_hmac_sha384_import(struct shash_desc *desc, const void *in)
return __crypto_sha512_import(&ctx->ctx.sha_ctx, in);
}
static int crypto_hmac_sha384_export_core(struct shash_desc *desc, void *out)
{
return __crypto_sha512_export_core(&HMAC_SHA384_CTX(desc)->ctx.sha_ctx,
out);
}
static int crypto_hmac_sha384_import_core(struct shash_desc *desc,
const void *in)
{
struct hmac_sha384_ctx *ctx = HMAC_SHA384_CTX(desc);
ctx->ctx.ostate = HMAC_SHA384_KEY(desc->tfm)->key.ostate;
return __crypto_sha512_import_core(&ctx->ctx.sha_ctx, in);
}
/* HMAC-SHA512 */
#define HMAC_SHA512_KEY(tfm) ((struct hmac_sha512_key *)crypto_shash_ctx(tfm))
@ -256,6 +304,21 @@ static int crypto_hmac_sha512_import(struct shash_desc *desc, const void *in)
return __crypto_sha512_import(&ctx->ctx.sha_ctx, in);
}
static int crypto_hmac_sha512_export_core(struct shash_desc *desc, void *out)
{
return __crypto_sha512_export_core(&HMAC_SHA512_CTX(desc)->ctx.sha_ctx,
out);
}
static int crypto_hmac_sha512_import_core(struct shash_desc *desc,
const void *in)
{
struct hmac_sha512_ctx *ctx = HMAC_SHA512_CTX(desc);
ctx->ctx.ostate = HMAC_SHA512_KEY(desc->tfm)->key.ostate;
return __crypto_sha512_import_core(&ctx->ctx.sha_ctx, in);
}
/* Algorithm definitions */
static struct shash_alg algs[] = {
@ -272,6 +335,8 @@ static struct shash_alg algs[] = {
.digest = crypto_sha384_digest,
.export = crypto_sha384_export,
.import = crypto_sha384_import,
.export_core = crypto_sha384_export_core,
.import_core = crypto_sha384_import_core,
.descsize = sizeof(struct sha384_ctx),
.statesize = SHA512_SHASH_STATE_SIZE,
},
@ -288,6 +353,8 @@ static struct shash_alg algs[] = {
.digest = crypto_sha512_digest,
.export = crypto_sha512_export,
.import = crypto_sha512_import,
.export_core = crypto_sha512_export_core,
.import_core = crypto_sha512_import_core,
.descsize = sizeof(struct sha512_ctx),
.statesize = SHA512_SHASH_STATE_SIZE,
},
@ -306,6 +373,8 @@ static struct shash_alg algs[] = {
.digest = crypto_hmac_sha384_digest,
.export = crypto_hmac_sha384_export,
.import = crypto_hmac_sha384_import,
.export_core = crypto_hmac_sha384_export_core,
.import_core = crypto_hmac_sha384_import_core,
.descsize = sizeof(struct hmac_sha384_ctx),
.statesize = SHA512_SHASH_STATE_SIZE,
},
@ -324,6 +393,8 @@ static struct shash_alg algs[] = {
.digest = crypto_hmac_sha512_digest,
.export = crypto_hmac_sha512_export,
.import = crypto_hmac_sha512_import,
.export_core = crypto_hmac_sha512_export_core,
.import_core = crypto_hmac_sha512_import_core,
.descsize = sizeof(struct hmac_sha512_ctx),
.statesize = SHA512_SHASH_STATE_SIZE,
},


@ -677,7 +677,7 @@ static void ivpu_bo_unbind_all_user_contexts(struct ivpu_device *vdev)
static void ivpu_dev_fini(struct ivpu_device *vdev)
{
ivpu_jobs_abort_all(vdev);
ivpu_pm_cancel_recovery(vdev);
ivpu_pm_disable_recovery(vdev);
ivpu_pm_disable(vdev);
ivpu_prepare_for_reset(vdev);
ivpu_shutdown(vdev);


@ -417,10 +417,10 @@ void ivpu_pm_init(struct ivpu_device *vdev)
ivpu_dbg(vdev, PM, "Autosuspend delay = %d\n", delay);
}
void ivpu_pm_cancel_recovery(struct ivpu_device *vdev)
void ivpu_pm_disable_recovery(struct ivpu_device *vdev)
{
drm_WARN_ON(&vdev->drm, delayed_work_pending(&vdev->pm->job_timeout_work));
cancel_work_sync(&vdev->pm->recovery_work);
disable_work_sync(&vdev->pm->recovery_work);
}
void ivpu_pm_enable(struct ivpu_device *vdev)


@ -25,7 +25,7 @@ struct ivpu_pm_info {
void ivpu_pm_init(struct ivpu_device *vdev);
void ivpu_pm_enable(struct ivpu_device *vdev);
void ivpu_pm_disable(struct ivpu_device *vdev);
void ivpu_pm_cancel_recovery(struct ivpu_device *vdev);
void ivpu_pm_disable_recovery(struct ivpu_device *vdev);
int ivpu_pm_suspend_cb(struct device *dev);
int ivpu_pm_resume_cb(struct device *dev);


@ -937,8 +937,10 @@ static u32 *iort_rmr_alloc_sids(u32 *sids, u32 count, u32 id_start,
new_sids = krealloc_array(sids, count + new_count,
sizeof(*new_sids), GFP_KERNEL);
if (!new_sids)
if (!new_sids) {
kfree(sids);
return NULL;
}
for (i = count; i < total_count; i++)
new_sids[i] = id_start++;


@ -119,7 +119,7 @@ int cpc_read_ffh(int cpu, struct cpc_reg *reg, u64 *val)
*val = data.ret.value;
return (data.ret.error) ? sbi_err_map_linux_errno(data.ret.error) : 0;
return data.ret.error;
}
return -EINVAL;
@ -148,7 +148,7 @@ int cpc_write_ffh(int cpu, struct cpc_reg *reg, u64 val)
smp_call_function_single(cpu, cppc_ffh_csr_write, &data, 1);
return (data.ret.error) ? sbi_err_map_linux_errno(data.ret.error) : 0;
return data.ret.error;
}
return -EINVAL;


@ -603,6 +603,7 @@ CPU_SHOW_VULN_FALLBACK(ghostwrite);
CPU_SHOW_VULN_FALLBACK(old_microcode);
CPU_SHOW_VULN_FALLBACK(indirect_target_selection);
CPU_SHOW_VULN_FALLBACK(tsa);
CPU_SHOW_VULN_FALLBACK(vmscape);
static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL);
@ -622,6 +623,7 @@ static DEVICE_ATTR(ghostwrite, 0444, cpu_show_ghostwrite, NULL);
static DEVICE_ATTR(old_microcode, 0444, cpu_show_old_microcode, NULL);
static DEVICE_ATTR(indirect_target_selection, 0444, cpu_show_indirect_target_selection, NULL);
static DEVICE_ATTR(tsa, 0444, cpu_show_tsa, NULL);
static DEVICE_ATTR(vmscape, 0444, cpu_show_vmscape, NULL);
static struct attribute *cpu_root_vulnerabilities_attrs[] = {
&dev_attr_meltdown.attr,
@ -642,6 +644,7 @@ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
&dev_attr_old_microcode.attr,
&dev_attr_indirect_target_selection.attr,
&dev_attr_tsa.attr,
&dev_attr_vmscape.attr,
NULL
};


@ -1554,13 +1554,15 @@ static void amd_pstate_epp_cpu_exit(struct cpufreq_policy *policy)
pr_debug("CPU %d exiting\n", policy->cpu);
}
static int amd_pstate_epp_update_limit(struct cpufreq_policy *policy)
static int amd_pstate_epp_update_limit(struct cpufreq_policy *policy, bool policy_change)
{
struct amd_cpudata *cpudata = policy->driver_data;
union perf_cached perf;
u8 epp;
if (policy->min != cpudata->min_limit_freq || policy->max != cpudata->max_limit_freq)
if (policy_change ||
policy->min != cpudata->min_limit_freq ||
policy->max != cpudata->max_limit_freq)
amd_pstate_update_min_max_limit(policy);
if (cpudata->policy == CPUFREQ_POLICY_PERFORMANCE)
@ -1584,7 +1586,7 @@ static int amd_pstate_epp_set_policy(struct cpufreq_policy *policy)
cpudata->policy = policy->policy;
ret = amd_pstate_epp_update_limit(policy);
ret = amd_pstate_epp_update_limit(policy, true);
if (ret)
return ret;
@ -1626,13 +1628,14 @@ static int amd_pstate_suspend(struct cpufreq_policy *policy)
* min_perf value across kexec reboots. If this CPU is just resumed back without kexec,
* the limits, epp and desired perf will get reset to the cached values in cpudata struct
*/
ret = amd_pstate_update_perf(policy, perf.bios_min_perf, 0U, 0U, 0U, false);
ret = amd_pstate_update_perf(policy, perf.bios_min_perf,
FIELD_GET(AMD_CPPC_DES_PERF_MASK, cpudata->cppc_req_cached),
FIELD_GET(AMD_CPPC_MAX_PERF_MASK, cpudata->cppc_req_cached),
FIELD_GET(AMD_CPPC_EPP_PERF_MASK, cpudata->cppc_req_cached),
false);
if (ret)
return ret;
/* invalidate to ensure it's rewritten during resume */
cpudata->cppc_req_cached = 0;
/* set this flag to avoid setting core offline*/
cpudata->suspended = true;
@ -1658,7 +1661,7 @@ static int amd_pstate_epp_resume(struct cpufreq_policy *policy)
int ret;
/* enable amd pstate from suspend state*/
ret = amd_pstate_epp_update_limit(policy);
ret = amd_pstate_epp_update_limit(policy, false);
if (ret)
return ret;


@ -1034,8 +1034,8 @@ static bool hybrid_register_perf_domain(unsigned int cpu)
if (!cpu_dev)
return false;
if (em_dev_register_perf_domain(cpu_dev, HYBRID_EM_STATE_COUNT, &cb,
cpumask_of(cpu), false))
if (em_dev_register_pd_no_update(cpu_dev, HYBRID_EM_STATE_COUNT, &cb,
cpumask_of(cpu), false))
return false;
cpudata->pd_registered = true;


@ -128,7 +128,6 @@ static ssize_t altr_sdr_mc_err_inject_write(struct file *file,
ptemp = dma_alloc_coherent(mci->pdev, 16, &dma_handle, GFP_KERNEL);
if (!ptemp) {
dma_free_coherent(mci->pdev, 16, ptemp, dma_handle);
edac_printk(KERN_ERR, EDAC_MC,
"Inject: Buffer Allocation error\n");
return -ENOMEM;


@ -3,6 +3,9 @@
# GPIO infrastructure and drivers
#
config GPIOLIB_LEGACY
def_bool y
menuconfig GPIOLIB
bool "GPIO Support"
help
@ -12,9 +15,6 @@ menuconfig GPIOLIB
If unsure, say N.
config GPIOLIB_LEGACY
def_bool y
if GPIOLIB
config GPIOLIB_FASTPATH_LIMIT


@ -448,7 +448,7 @@ static int psp_sw_init(struct amdgpu_ip_block *ip_block)
psp->cmd = kzalloc(sizeof(struct psp_gfx_cmd_resp), GFP_KERNEL);
if (!psp->cmd) {
dev_err(adev->dev, "Failed to allocate memory to command buffer!\n");
ret = -ENOMEM;
return -ENOMEM;
}
adev->psp.xgmi_context.supports_extended_data =


@ -1462,17 +1462,12 @@ static int dce_v10_0_audio_init(struct amdgpu_device *adev)
static void dce_v10_0_audio_fini(struct amdgpu_device *adev)
{
int i;
if (!amdgpu_audio)
return;
if (!adev->mode_info.audio.enabled)
return;
for (i = 0; i < adev->mode_info.audio.num_pins; i++)
dce_v10_0_audio_enable(adev, &adev->mode_info.audio.pin[i], false);
adev->mode_info.audio.enabled = false;
}


@ -1511,17 +1511,12 @@ static int dce_v11_0_audio_init(struct amdgpu_device *adev)
static void dce_v11_0_audio_fini(struct amdgpu_device *adev)
{
int i;
if (!amdgpu_audio)
return;
if (!adev->mode_info.audio.enabled)
return;
for (i = 0; i < adev->mode_info.audio.num_pins; i++)
dce_v11_0_audio_enable(adev, &adev->mode_info.audio.pin[i], false);
adev->mode_info.audio.enabled = false;
}


@ -1451,17 +1451,12 @@ static int dce_v6_0_audio_init(struct amdgpu_device *adev)
static void dce_v6_0_audio_fini(struct amdgpu_device *adev)
{
int i;
if (!amdgpu_audio)
return;
if (!adev->mode_info.audio.enabled)
return;
for (i = 0; i < adev->mode_info.audio.num_pins; i++)
dce_v6_0_audio_enable(adev, &adev->mode_info.audio.pin[i], false);
adev->mode_info.audio.enabled = false;
}


@ -1443,17 +1443,12 @@ static int dce_v8_0_audio_init(struct amdgpu_device *adev)
static void dce_v8_0_audio_fini(struct amdgpu_device *adev)
{
int i;
if (!amdgpu_audio)
return;
if (!adev->mode_info.audio.enabled)
return;
for (i = 0; i < adev->mode_info.audio.num_pins; i++)
dce_v8_0_audio_enable(adev, &adev->mode_info.audio.pin[i], false);
adev->mode_info.audio.enabled = false;
}


@ -641,8 +641,9 @@ static int mes_v11_0_misc_op(struct amdgpu_mes *mes,
break;
case MES_MISC_OP_CHANGE_CONFIG:
if ((mes->adev->mes.sched_version & AMDGPU_MES_VERSION_MASK) < 0x63) {
dev_err(mes->adev->dev, "MES FW version must be larger than 0x63 to support limit single process feature.\n");
return -EINVAL;
dev_warn_once(mes->adev->dev,
"MES FW version must be larger than 0x63 to support limit single process feature.\n");
return 0;
}
misc_pkt.opcode = MESAPI_MISC__CHANGE_CONFIG;
misc_pkt.change_config.opcode =


@ -1377,7 +1377,7 @@ static int sdma_v6_0_sw_init(struct amdgpu_ip_block *ip_block)
switch (amdgpu_ip_version(adev, SDMA0_HWIP, 0)) {
case IP_VERSION(6, 0, 0):
if ((adev->sdma.instance[0].fw_version >= 24) && !adev->sdma.disable_uq)
if ((adev->sdma.instance[0].fw_version >= 27) && !adev->sdma.disable_uq)
adev->userq_funcs[AMDGPU_HW_IP_DMA] = &userq_mes_funcs;
break;
case IP_VERSION(6, 0, 1):
@ -1385,11 +1385,11 @@ static int sdma_v6_0_sw_init(struct amdgpu_ip_block *ip_block)
adev->userq_funcs[AMDGPU_HW_IP_DMA] = &userq_mes_funcs;
break;
case IP_VERSION(6, 0, 2):
if ((adev->sdma.instance[0].fw_version >= 21) && !adev->sdma.disable_uq)
if ((adev->sdma.instance[0].fw_version >= 23) && !adev->sdma.disable_uq)
adev->userq_funcs[AMDGPU_HW_IP_DMA] = &userq_mes_funcs;
break;
case IP_VERSION(6, 0, 3):
if ((adev->sdma.instance[0].fw_version >= 25) && !adev->sdma.disable_uq)
if ((adev->sdma.instance[0].fw_version >= 27) && !adev->sdma.disable_uq)
adev->userq_funcs[AMDGPU_HW_IP_DMA] = &userq_mes_funcs;
break;
case IP_VERSION(6, 1, 0):


@ -8381,8 +8381,7 @@ static int amdgpu_dm_connector_get_modes(struct drm_connector *connector)
drm_add_modes_noedid(connector, 1920, 1080);
} else {
amdgpu_dm_connector_ddc_get_modes(connector, drm_edid);
if (encoder && (connector->connector_type != DRM_MODE_CONNECTOR_eDP) &&
(connector->connector_type != DRM_MODE_CONNECTOR_LVDS))
if (encoder)
amdgpu_dm_connector_add_common_modes(encoder, connector);
amdgpu_dm_connector_add_freesync_modes(connector, drm_edid);
}


@ -520,6 +520,15 @@ void dpp1_dppclk_control(
REG_UPDATE(DPP_CONTROL, DPP_CLOCK_ENABLE, 0);
}
void dpp_force_disable_cursor(struct dpp *dpp_base)
{
struct dcn10_dpp *dpp = TO_DCN10_DPP(dpp_base);
/* Force disable cursor */
REG_UPDATE(CURSOR0_CONTROL, CUR0_ENABLE, 0);
dpp_base->pos.cur0_ctl.bits.cur0_enable = 0;
}
static const struct dpp_funcs dcn10_dpp_funcs = {
.dpp_read_state = dpp_read_state,
.dpp_reset = dpp_reset,


@ -1525,4 +1525,6 @@ void dpp1_construct(struct dcn10_dpp *dpp1,
void dpp1_cm_get_gamut_remap(struct dpp *dpp_base,
struct dpp_grph_csc_adjustment *adjust);
void dpp_force_disable_cursor(struct dpp *dpp_base);
#endif


@ -1494,6 +1494,7 @@ static struct dpp_funcs dcn30_dpp_funcs = {
.dpp_dppclk_control = dpp1_dppclk_control,
.dpp_set_hdr_multiplier = dpp3_set_hdr_multiplier,
.dpp_get_gamut_remap = dpp3_cm_get_gamut_remap,
.dpp_force_disable_cursor = dpp_force_disable_cursor,
};


@ -528,3 +528,75 @@ void dcn314_disable_link_output(struct dc_link *link,
apply_symclk_on_tx_off_wa(link);
}
/**
* dcn314_dpp_pg_control - DPP power gate control.
*
* @hws: dce_hwseq reference.
* @dpp_inst: DPP instance reference.
* @power_on: true if we want to enable power gate, false otherwise.
*
* Enable or disable power gate in the specific DPP instance.
* If power gating is disabled, will force disable cursor in the DPP instance.
*/
void dcn314_dpp_pg_control(
struct dce_hwseq *hws,
unsigned int dpp_inst,
bool power_on)
{
uint32_t power_gate = power_on ? 0 : 1;
uint32_t pwr_status = power_on ? 0 : 2;
if (hws->ctx->dc->debug.disable_dpp_power_gate) {
/* Workaround for DCN314 with disabled power gating */
if (!power_on) {
/* Force disable cursor if power gating is disabled */
struct dpp *dpp = hws->ctx->dc->res_pool->dpps[dpp_inst];
if (dpp && dpp->funcs->dpp_force_disable_cursor)
dpp->funcs->dpp_force_disable_cursor(dpp);
}
return;
}
if (REG(DOMAIN1_PG_CONFIG) == 0)
return;
switch (dpp_inst) {
case 0: /* DPP0 */
REG_UPDATE(DOMAIN1_PG_CONFIG,
DOMAIN1_POWER_GATE, power_gate);
REG_WAIT(DOMAIN1_PG_STATUS,
DOMAIN1_PGFSM_PWR_STATUS, pwr_status,
1, 1000);
break;
case 1: /* DPP1 */
REG_UPDATE(DOMAIN3_PG_CONFIG,
DOMAIN3_POWER_GATE, power_gate);
REG_WAIT(DOMAIN3_PG_STATUS,
DOMAIN3_PGFSM_PWR_STATUS, pwr_status,
1, 1000);
break;
case 2: /* DPP2 */
REG_UPDATE(DOMAIN5_PG_CONFIG,
DOMAIN5_POWER_GATE, power_gate);
REG_WAIT(DOMAIN5_PG_STATUS,
DOMAIN5_PGFSM_PWR_STATUS, pwr_status,
1, 1000);
break;
case 3: /* DPP3 */
REG_UPDATE(DOMAIN7_PG_CONFIG,
DOMAIN7_POWER_GATE, power_gate);
REG_WAIT(DOMAIN7_PG_STATUS,
DOMAIN7_PGFSM_PWR_STATUS, pwr_status,
1, 1000);
break;
default:
BREAK_TO_DEBUGGER();
break;
}
}


@ -47,4 +47,6 @@ void dcn314_dpp_root_clock_control(struct dce_hwseq *hws, unsigned int dpp_inst,
void dcn314_disable_link_output(struct dc_link *link, const struct link_resource *link_res, enum signal_type signal);
void dcn314_dpp_pg_control(struct dce_hwseq *hws, unsigned int dpp_inst, bool power_on);
#endif /* __DC_HWSS_DCN314_H__ */


@ -141,6 +141,7 @@ static const struct hwseq_private_funcs dcn314_private_funcs = {
.enable_power_gating_plane = dcn314_enable_power_gating_plane,
.dpp_root_clock_control = dcn314_dpp_root_clock_control,
.hubp_pg_control = dcn31_hubp_pg_control,
.dpp_pg_control = dcn314_dpp_pg_control,
.program_all_writeback_pipes_in_tree = dcn30_program_all_writeback_pipes_in_tree,
.update_odm = dcn314_update_odm,
.dsc_pg_control = dcn314_dsc_pg_control,


@ -349,6 +349,9 @@ struct dpp_funcs {
struct dpp *dpp_base,
enum dc_color_space color_space,
struct dc_csc_transform cursor_csc_color_matrix);
void (*dpp_force_disable_cursor)(struct dpp *dpp_base);
};


@ -392,6 +392,17 @@ static int __maybe_unused ti_sn65dsi86_resume(struct device *dev)
gpiod_set_value_cansleep(pdata->enable_gpio, 1);
/*
* After EN is deasserted and an external clock is detected, the bridge
* will sample GPIO3:1 to determine its frequency. The driver will
* overwrite this setting in ti_sn_bridge_set_refclk_freq(). But this is
* racy. Thus we have to wait a couple of us. According to the datasheet
* the GPIO lines has to be stable at least 5 us (td5) but it seems that
* is not enough and the refclk frequency value is still lost or
* overwritten by the bridge itself. Waiting for 20us seems to work.
*/
usleep_range(20, 30);
/*
* If we have a reference clock we can enable communication w/ the
* panel (including the aux channel) w/out any need for an input clock


@ -18,7 +18,7 @@ gv100_fence_emit32(struct nouveau_channel *chan, u64 virtual, u32 sequence)
struct nvif_push *push = &chan->chan.push;
int ret;
ret = PUSH_WAIT(push, 8);
ret = PUSH_WAIT(push, 13);
if (ret)
return ret;
@ -32,6 +32,11 @@ gv100_fence_emit32(struct nouveau_channel *chan, u64 virtual, u32 sequence)
NVDEF(NVC36F, SEM_EXECUTE, PAYLOAD_SIZE, 32BIT) |
NVDEF(NVC36F, SEM_EXECUTE, RELEASE_TIMESTAMP, DIS));
PUSH_MTHD(push, NVC36F, MEM_OP_A, 0,
MEM_OP_B, 0,
MEM_OP_C, NVDEF(NVC36F, MEM_OP_C, MEMBAR_TYPE, SYS_MEMBAR),
MEM_OP_D, NVDEF(NVC36F, MEM_OP_D, OPERATION, MEMBAR));
PUSH_MTHD(push, NVC36F, NON_STALL_INTERRUPT, 0);
PUSH_KICK(push);


@ -7,6 +7,91 @@
#define NVC36F_NON_STALL_INTERRUPT (0x00000020)
#define NVC36F_NON_STALL_INTERRUPT_HANDLE 31:0
// NOTE - MEM_OP_A and MEM_OP_B have been replaced in gp100 with methods for
// specifying the page address for a targeted TLB invalidate and the uTLB for
// a targeted REPLAY_CANCEL for UVM.
// The previous MEM_OP_A/B functionality is in MEM_OP_C/D, with slightly
// rearranged fields.
#define NVC36F_MEM_OP_A (0x00000028)
#define NVC36F_MEM_OP_A_TLB_INVALIDATE_CANCEL_TARGET_CLIENT_UNIT_ID 5:0 // only relevant for REPLAY_CANCEL_TARGETED
#define NVC36F_MEM_OP_A_TLB_INVALIDATE_INVALIDATION_SIZE 5:0 // Used to specify size of invalidate, used for invalidates which are not of the REPLAY_CANCEL_TARGETED type
#define NVC36F_MEM_OP_A_TLB_INVALIDATE_CANCEL_TARGET_GPC_ID 10:6 // only relevant for REPLAY_CANCEL_TARGETED
#define NVC36F_MEM_OP_A_TLB_INVALIDATE_CANCEL_MMU_ENGINE_ID 6:0 // only relevant for REPLAY_CANCEL_VA_GLOBAL
#define NVC36F_MEM_OP_A_TLB_INVALIDATE_SYSMEMBAR 11:11
#define NVC36F_MEM_OP_A_TLB_INVALIDATE_SYSMEMBAR_EN 0x00000001
#define NVC36F_MEM_OP_A_TLB_INVALIDATE_SYSMEMBAR_DIS 0x00000000
#define NVC36F_MEM_OP_A_TLB_INVALIDATE_TARGET_ADDR_LO 31:12
#define NVC36F_MEM_OP_B (0x0000002c)
#define NVC36F_MEM_OP_B_TLB_INVALIDATE_TARGET_ADDR_HI 31:0
#define NVC36F_MEM_OP_C (0x00000030)
#define NVC36F_MEM_OP_C_MEMBAR_TYPE 2:0
#define NVC36F_MEM_OP_C_MEMBAR_TYPE_SYS_MEMBAR 0x00000000
#define NVC36F_MEM_OP_C_MEMBAR_TYPE_MEMBAR 0x00000001
#define NVC36F_MEM_OP_C_TLB_INVALIDATE_PDB 0:0
#define NVC36F_MEM_OP_C_TLB_INVALIDATE_PDB_ONE 0x00000000
#define NVC36F_MEM_OP_C_TLB_INVALIDATE_PDB_ALL 0x00000001 // Probably nonsensical for MMU_TLB_INVALIDATE_TARGETED
#define NVC36F_MEM_OP_C_TLB_INVALIDATE_GPC 1:1
#define NVC36F_MEM_OP_C_TLB_INVALIDATE_GPC_ENABLE 0x00000000
#define NVC36F_MEM_OP_C_TLB_INVALIDATE_GPC_DISABLE 0x00000001
#define NVC36F_MEM_OP_C_TLB_INVALIDATE_REPLAY 4:2 // only relevant if GPC ENABLE
#define NVC36F_MEM_OP_C_TLB_INVALIDATE_REPLAY_NONE 0x00000000
#define NVC36F_MEM_OP_C_TLB_INVALIDATE_REPLAY_START 0x00000001
#define NVC36F_MEM_OP_C_TLB_INVALIDATE_REPLAY_START_ACK_ALL 0x00000002
#define NVC36F_MEM_OP_C_TLB_INVALIDATE_REPLAY_CANCEL_TARGETED 0x00000003
#define NVC36F_MEM_OP_C_TLB_INVALIDATE_REPLAY_CANCEL_GLOBAL 0x00000004
#define NVC36F_MEM_OP_C_TLB_INVALIDATE_REPLAY_CANCEL_VA_GLOBAL 0x00000005
#define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACK_TYPE 6:5 // only relevant if GPC ENABLE
#define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACK_TYPE_NONE 0x00000000
#define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACK_TYPE_GLOBALLY 0x00000001
#define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACK_TYPE_INTRANODE 0x00000002
#define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACCESS_TYPE 9:7 //only relevant for REPLAY_CANCEL_VA_GLOBAL
#define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACCESS_TYPE_VIRT_READ 0
#define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACCESS_TYPE_VIRT_WRITE 1
#define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACCESS_TYPE_VIRT_ATOMIC_STRONG 2
#define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACCESS_TYPE_VIRT_RSVRVD 3
#define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACCESS_TYPE_VIRT_ATOMIC_WEAK 4
#define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACCESS_TYPE_VIRT_ATOMIC_ALL 5
#define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACCESS_TYPE_VIRT_WRITE_AND_ATOMIC 6
#define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACCESS_TYPE_VIRT_ALL 7
#define NVC36F_MEM_OP_C_TLB_INVALIDATE_PAGE_TABLE_LEVEL 9:7 // Invalidate affects this level and all below
#define NVC36F_MEM_OP_C_TLB_INVALIDATE_PAGE_TABLE_LEVEL_ALL 0x00000000 // Invalidate tlb caches at all levels of the page table
#define NVC36F_MEM_OP_C_TLB_INVALIDATE_PAGE_TABLE_LEVEL_PTE_ONLY 0x00000001
#define NVC36F_MEM_OP_C_TLB_INVALIDATE_PAGE_TABLE_LEVEL_UP_TO_PDE0 0x00000002
#define NVC36F_MEM_OP_C_TLB_INVALIDATE_PAGE_TABLE_LEVEL_UP_TO_PDE1 0x00000003
#define NVC36F_MEM_OP_C_TLB_INVALIDATE_PAGE_TABLE_LEVEL_UP_TO_PDE2 0x00000004
#define NVC36F_MEM_OP_C_TLB_INVALIDATE_PAGE_TABLE_LEVEL_UP_TO_PDE3 0x00000005
#define NVC36F_MEM_OP_C_TLB_INVALIDATE_PAGE_TABLE_LEVEL_UP_TO_PDE4 0x00000006
#define NVC36F_MEM_OP_C_TLB_INVALIDATE_PAGE_TABLE_LEVEL_UP_TO_PDE5 0x00000007
#define NVC36F_MEM_OP_C_TLB_INVALIDATE_PDB_APERTURE 11:10 // only relevant if PDB_ONE
#define NVC36F_MEM_OP_C_TLB_INVALIDATE_PDB_APERTURE_VID_MEM 0x00000000
#define NVC36F_MEM_OP_C_TLB_INVALIDATE_PDB_APERTURE_SYS_MEM_COHERENT 0x00000002
#define NVC36F_MEM_OP_C_TLB_INVALIDATE_PDB_APERTURE_SYS_MEM_NONCOHERENT 0x00000003
#define NVC36F_MEM_OP_C_TLB_INVALIDATE_PDB_ADDR_LO 31:12 // only relevant if PDB_ONE
#define NVC36F_MEM_OP_C_ACCESS_COUNTER_CLR_TARGETED_NOTIFY_TAG 19:0
// MEM_OP_D MUST be preceded by MEM_OPs A-C.
#define NVC36F_MEM_OP_D (0x00000034)
#define NVC36F_MEM_OP_D_TLB_INVALIDATE_PDB_ADDR_HI 26:0 // only relevant if PDB_ONE
#define NVC36F_MEM_OP_D_OPERATION 31:27
#define NVC36F_MEM_OP_D_OPERATION_MEMBAR 0x00000005
#define NVC36F_MEM_OP_D_OPERATION_MMU_TLB_INVALIDATE 0x00000009
#define NVC36F_MEM_OP_D_OPERATION_MMU_TLB_INVALIDATE_TARGETED 0x0000000a
#define NVC36F_MEM_OP_D_OPERATION_L2_PEERMEM_INVALIDATE 0x0000000d
#define NVC36F_MEM_OP_D_OPERATION_L2_SYSMEM_INVALIDATE 0x0000000e
// CLEAN_LINES is an alias for Tegra/GPU IP usage
#define NVC36F_MEM_OP_B_OPERATION_L2_INVALIDATE_CLEAN_LINES 0x0000000e
#define NVC36F_MEM_OP_D_OPERATION_L2_CLEAN_COMPTAGS 0x0000000f
#define NVC36F_MEM_OP_D_OPERATION_L2_FLUSH_DIRTY 0x00000010
#define NVC36F_MEM_OP_D_OPERATION_L2_WAIT_FOR_SYS_PENDING_READS 0x00000015
#define NVC36F_MEM_OP_D_OPERATION_ACCESS_COUNTER_CLR 0x00000016
#define NVC36F_MEM_OP_D_ACCESS_COUNTER_CLR_TYPE 1:0
#define NVC36F_MEM_OP_D_ACCESS_COUNTER_CLR_TYPE_MIMC 0x00000000
#define NVC36F_MEM_OP_D_ACCESS_COUNTER_CLR_TYPE_MOMC 0x00000001
#define NVC36F_MEM_OP_D_ACCESS_COUNTER_CLR_TYPE_ALL 0x00000002
#define NVC36F_MEM_OP_D_ACCESS_COUNTER_CLR_TYPE_TARGETED 0x00000003
#define NVC36F_MEM_OP_D_ACCESS_COUNTER_CLR_TARGETED_TYPE 2:2
#define NVC36F_MEM_OP_D_ACCESS_COUNTER_CLR_TARGETED_TYPE_MIMC 0x00000000
#define NVC36F_MEM_OP_D_ACCESS_COUNTER_CLR_TARGETED_TYPE_MOMC 0x00000001
#define NVC36F_MEM_OP_D_ACCESS_COUNTER_CLR_TARGETED_BANK 6:3
#define NVC36F_SEM_ADDR_LO (0x0000005c)
#define NVC36F_SEM_ADDR_LO_OFFSET 31:2
#define NVC36F_SEM_ADDR_HI (0x00000060)


@ -350,6 +350,8 @@ nvkm_fifo_dtor(struct nvkm_engine *engine)
nvkm_chid_unref(&fifo->chid);
nvkm_event_fini(&fifo->nonstall.event);
if (fifo->func->nonstall_dtor)
fifo->func->nonstall_dtor(fifo);
mutex_destroy(&fifo->mutex);
if (fifo->func->dtor)


@ -517,19 +517,11 @@ ga100_fifo_nonstall_intr(struct nvkm_inth *inth)
static void
ga100_fifo_nonstall_block(struct nvkm_event *event, int type, int index)
{
struct nvkm_fifo *fifo = container_of(event, typeof(*fifo), nonstall.event);
struct nvkm_runl *runl = nvkm_runl_get(fifo, index, 0);
nvkm_inth_block(&runl->nonstall.inth);
}
static void
ga100_fifo_nonstall_allow(struct nvkm_event *event, int type, int index)
{
struct nvkm_fifo *fifo = container_of(event, typeof(*fifo), nonstall.event);
struct nvkm_runl *runl = nvkm_runl_get(fifo, index, 0);
nvkm_inth_allow(&runl->nonstall.inth);
}
const struct nvkm_event_func
@ -564,12 +556,26 @@ ga100_fifo_nonstall_ctor(struct nvkm_fifo *fifo)
if (ret)
return ret;
nvkm_inth_allow(&runl->nonstall.inth);
nr = max(nr, runl->id + 1);
}
return nr;
}
void
ga100_fifo_nonstall_dtor(struct nvkm_fifo *fifo)
{
struct nvkm_runl *runl;
nvkm_runl_foreach(runl, fifo) {
if (runl->nonstall.vector < 0)
continue;
nvkm_inth_block(&runl->nonstall.inth);
}
}
int
ga100_fifo_runl_ctor(struct nvkm_fifo *fifo)
{
@ -599,6 +605,7 @@ ga100_fifo = {
.runl_ctor = ga100_fifo_runl_ctor,
.mmu_fault = &tu102_fifo_mmu_fault,
.nonstall_ctor = ga100_fifo_nonstall_ctor,
.nonstall_dtor = ga100_fifo_nonstall_dtor,
.nonstall = &ga100_fifo_nonstall,
.runl = &ga100_runl,
.runq = &ga100_runq,


@ -30,6 +30,7 @@ ga102_fifo = {
.runl_ctor = ga100_fifo_runl_ctor,
.mmu_fault = &tu102_fifo_mmu_fault,
.nonstall_ctor = ga100_fifo_nonstall_ctor,
.nonstall_dtor = ga100_fifo_nonstall_dtor,
.nonstall = &ga100_fifo_nonstall,
.runl = &ga100_runl,
.runq = &ga100_runq,


@ -41,6 +41,7 @@ struct nvkm_fifo_func {
void (*start)(struct nvkm_fifo *, unsigned long *);
int (*nonstall_ctor)(struct nvkm_fifo *);
void (*nonstall_dtor)(struct nvkm_fifo *);
const struct nvkm_event_func *nonstall;
const struct nvkm_runl_func *runl;
@ -200,6 +201,7 @@ u32 tu102_chan_doorbell_handle(struct nvkm_chan *);
int ga100_fifo_runl_ctor(struct nvkm_fifo *);
int ga100_fifo_nonstall_ctor(struct nvkm_fifo *);
void ga100_fifo_nonstall_dtor(struct nvkm_fifo *);
extern const struct nvkm_event_func ga100_fifo_nonstall;
extern const struct nvkm_runl_func ga100_runl;
extern const struct nvkm_runq_func ga100_runq;


@ -601,6 +601,7 @@ r535_fifo_new(const struct nvkm_fifo_func *hw, struct nvkm_device *device,
rm->chan.func = &r535_chan;
rm->nonstall = &ga100_fifo_nonstall;
rm->nonstall_ctor = ga100_fifo_nonstall_ctor;
rm->nonstall_dtor = ga100_fifo_nonstall_dtor;
return nvkm_fifo_new_(rm, device, type, inst, pfifo);
}


@ -391,7 +391,8 @@ EXPORT_SYMBOL(drm_sched_entity_set_priority);
* Add a callback to the current dependency of the entity to wake up the
* scheduler when the entity becomes available.
*/
static bool drm_sched_entity_add_dependency_cb(struct drm_sched_entity *entity)
static bool drm_sched_entity_add_dependency_cb(struct drm_sched_entity *entity,
struct drm_sched_job *sched_job)
{
struct drm_gpu_scheduler *sched = entity->rq->sched;
struct dma_fence *fence = entity->dependency;
@ -421,6 +422,10 @@ static bool drm_sched_entity_add_dependency_cb(struct drm_sched_entity *entity)
entity->dependency = fence;
}
if (trace_drm_sched_job_unschedulable_enabled() &&
!test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &entity->dependency->flags))
trace_drm_sched_job_unschedulable(sched_job, entity->dependency);
if (!dma_fence_add_callback(entity->dependency, &entity->cb,
drm_sched_entity_wakeup))
return true;
@ -461,10 +466,8 @@ struct drm_sched_job *drm_sched_entity_pop_job(struct drm_sched_entity *entity)
while ((entity->dependency =
drm_sched_job_dependency(sched_job, entity))) {
if (drm_sched_entity_add_dependency_cb(entity)) {
trace_drm_sched_job_unschedulable(sched_job, entity->dependency);
if (drm_sched_entity_add_dependency_cb(entity, sched_job))
return NULL;
}
}
/* skip jobs from entity that marked guilty */


@ -819,8 +819,7 @@ static int xe_bo_move(struct ttm_buffer_object *ttm_bo, bool evict,
return ret;
}
tt_has_data = ttm && (ttm_tt_is_populated(ttm) ||
(ttm->page_flags & TTM_TT_FLAG_SWAPPED));
tt_has_data = ttm && (ttm_tt_is_populated(ttm) || ttm_tt_is_swapped(ttm));
move_lacks_source = !old_mem || (handle_system_ccs ? (!bo->ccs_cleared) :
(!mem_type_is_vram(old_mem_type) && !tt_has_data));


@ -379,7 +379,7 @@ static int ina238_write_in(struct device *dev, u32 attr, int channel,
regval = clamp_val(val, -163, 163);
regval = (regval * 1000 * 4) /
(INA238_SHUNT_VOLTAGE_LSB * data->gain);
regval = clamp_val(regval, S16_MIN, S16_MAX);
regval = clamp_val(regval, S16_MIN, S16_MAX) & 0xffff;
switch (attr) {
case hwmon_in_max:
@ -517,9 +517,10 @@ static int ina238_write_power(struct device *dev, u32 attr, long val)
* Unsigned postive values. Compared against the 24-bit power register,
* lower 8-bits are truncated. Same conversion to/from uW as POWER
* register.
* The first clamp_val() is to establish a baseline to avoid overflows.
*/
regval = clamp_val(val, 0, LONG_MAX);
regval = div_u64(val * 4 * 100 * data->rshunt, data->config->power_calculate_factor *
regval = clamp_val(val, 0, LONG_MAX / 2);
regval = div_u64(regval * 4 * 100 * data->rshunt, data->config->power_calculate_factor *
1000ULL * INA238_FIXED_SHUNT * data->gain);
regval = clamp_val(regval >> 8, 0, U16_MAX);
@ -572,7 +573,7 @@ static int ina238_write_temp(struct device *dev, u32 attr, long val)
return -EOPNOTSUPP;
/* Signed */
regval = clamp_val(val, -40000, 125000);
val = clamp_val(val, -40000, 125000);
regval = div_s64(val * 10000, data->config->temp_lsb) << data->config->temp_shift;
regval = clamp_val(regval, S16_MIN, S16_MAX) & (0xffff << data->config->temp_shift);


@ -561,15 +561,14 @@ static int mlxreg_fan_cooling_config(struct device *dev, struct mlxreg_fan *fan)
if (!pwm->connected)
continue;
pwm->fan = fan;
/* Set minimal PWM speed. */
pwm->last_hwmon_state = MLXREG_FAN_PWM_DUTY2STATE(MLXREG_FAN_MIN_DUTY);
pwm->cdev = devm_thermal_of_cooling_device_register(dev, NULL, mlxreg_fan_name[i],
pwm, &mlxreg_fan_cooling_ops);
if (IS_ERR(pwm->cdev)) {
dev_err(dev, "Failed to register cooling device\n");
return PTR_ERR(pwm->cdev);
}
/* Set minimal PWM speed. */
pwm->last_hwmon_state = MLXREG_FAN_PWM_DUTY2STATE(MLXREG_FAN_MIN_DUTY);
}
return 0;


@ -1052,7 +1052,7 @@ static const struct pci_device_id i801_ids[] = {
{ PCI_DEVICE_DATA(INTEL, METEOR_LAKE_P_SMBUS, FEATURES_ICH5 | FEATURE_TCO_CNL) },
{ PCI_DEVICE_DATA(INTEL, METEOR_LAKE_SOC_S_SMBUS, FEATURES_ICH5 | FEATURE_TCO_CNL) },
{ PCI_DEVICE_DATA(INTEL, METEOR_LAKE_PCH_S_SMBUS, FEATURES_ICH5 | FEATURE_TCO_CNL) },
{ PCI_DEVICE_DATA(INTEL, BIRCH_STREAM_SMBUS, FEATURES_ICH5 | FEATURE_TCO_CNL) },
{ PCI_DEVICE_DATA(INTEL, BIRCH_STREAM_SMBUS, FEATURES_ICH5) },
{ PCI_DEVICE_DATA(INTEL, ARROW_LAKE_H_SMBUS, FEATURES_ICH5 | FEATURE_TCO_CNL) },
{ PCI_DEVICE_DATA(INTEL, PANTHER_LAKE_H_SMBUS, FEATURES_ICH5 | FEATURE_TCO_CNL) },
{ PCI_DEVICE_DATA(INTEL, PANTHER_LAKE_P_SMBUS, FEATURES_ICH5 | FEATURE_TCO_CNL) },


@ -99,6 +99,9 @@ static int rtl9300_i2c_config_xfer(struct rtl9300_i2c *i2c, struct rtl9300_i2c_c
{
u32 val, mask;
if (len < 1 || len > 16)
return -EINVAL;
val = chan->bus_freq << RTL9300_I2C_MST_CTRL2_SCL_FREQ_OFS;
mask = RTL9300_I2C_MST_CTRL2_SCL_FREQ_MASK;
@ -222,15 +225,6 @@ static int rtl9300_i2c_smbus_xfer(struct i2c_adapter *adap, u16 addr, unsigned s
}
switch (size) {
case I2C_SMBUS_QUICK:
ret = rtl9300_i2c_config_xfer(i2c, chan, addr, 0);
if (ret)
goto out_unlock;
ret = rtl9300_i2c_reg_addr_set(i2c, 0, 0);
if (ret)
goto out_unlock;
break;
case I2C_SMBUS_BYTE:
if (read_write == I2C_SMBUS_WRITE) {
ret = rtl9300_i2c_config_xfer(i2c, chan, addr, 0);
@ -312,9 +306,9 @@ out_unlock:
static u32 rtl9300_i2c_func(struct i2c_adapter *a)
{
return I2C_FUNC_SMBUS_QUICK | I2C_FUNC_SMBUS_BYTE |
I2C_FUNC_SMBUS_BYTE_DATA | I2C_FUNC_SMBUS_WORD_DATA |
I2C_FUNC_SMBUS_BLOCK_DATA;
return I2C_FUNC_SMBUS_BYTE | I2C_FUNC_SMBUS_BYTE_DATA |
I2C_FUNC_SMBUS_WORD_DATA | I2C_FUNC_SMBUS_BLOCK_DATA |
I2C_FUNC_SMBUS_I2C_BLOCK;
}
static const struct i2c_algorithm rtl9300_i2c_algo = {
@ -323,7 +317,7 @@ static const struct i2c_algorithm rtl9300_i2c_algo = {
};
static struct i2c_adapter_quirks rtl9300_i2c_quirks = {
.flags = I2C_AQ_NO_CLK_STRETCH,
.flags = I2C_AQ_NO_CLK_STRETCH | I2C_AQ_NO_ZERO_LEN,
.max_read_len = 16,
.max_write_len = 16,
};
@ -353,7 +347,7 @@ static int rtl9300_i2c_probe(struct platform_device *pdev)
platform_set_drvdata(pdev, i2c);
if (device_get_child_node_count(dev) >= RTL9300_I2C_MUX_NCHAN)
if (device_get_child_node_count(dev) > RTL9300_I2C_MUX_NCHAN)
return dev_err_probe(dev, -EINVAL, "Too many channels\n");
device_for_each_child_node(dev, child) {


@ -9125,6 +9125,11 @@ void md_do_sync(struct md_thread *thread)
}
action = md_sync_action(mddev);
if (action == ACTION_FROZEN || action == ACTION_IDLE) {
set_bit(MD_RECOVERY_INTR, &mddev->recovery);
goto skip;
}
desc = md_sync_action_name(action);
mddev->last_sync_action = action;


@ -1225,7 +1225,7 @@ static void alloc_behind_master_bio(struct r1bio *r1_bio,
int i = 0;
struct bio *behind_bio = NULL;
behind_bio = bio_alloc_bioset(NULL, vcnt, 0, GFP_NOIO,
behind_bio = bio_alloc_bioset(NULL, vcnt, bio->bi_opf, GFP_NOIO,
&r1_bio->mddev->bio_set);
/* discard op, we don't support writezero/writesame yet */


@ -861,7 +861,6 @@ static int rcar_can_resume(struct device *dev)
{
struct net_device *ndev = dev_get_drvdata(dev);
struct rcar_can_priv *priv = netdev_priv(ndev);
u16 ctlr;
int err;
if (!netif_running(ndev))
@ -873,12 +872,7 @@ static int rcar_can_resume(struct device *dev)
return err;
}
ctlr = readw(&priv->regs->ctlr);
ctlr &= ~RCAR_CAN_CTLR_SLPM;
writew(ctlr, &priv->regs->ctlr);
ctlr &= ~RCAR_CAN_CTLR_CANM;
writew(ctlr, &priv->regs->ctlr);
priv->can.state = CAN_STATE_ERROR_ACTIVE;
rcar_can_start(ndev);
netif_device_attach(ndev);
netif_start_queue(ndev);


@ -690,14 +690,6 @@ static void xcan_write_frame(struct net_device *ndev, struct sk_buff *skb,
dlc |= XCAN_DLCR_EDL_MASK;
}
if (!(priv->devtype.flags & XCAN_FLAG_TX_MAILBOXES) &&
(priv->devtype.flags & XCAN_FLAG_TXFEMP))
can_put_echo_skb(skb, ndev, priv->tx_head % priv->tx_max, 0);
else
can_put_echo_skb(skb, ndev, 0, 0);
priv->tx_head++;
priv->write_reg(priv, XCAN_FRAME_ID_OFFSET(frame_offset), id);
/* If the CAN frame is RTR frame this write triggers transmission
* (not on CAN FD)
@ -730,6 +722,14 @@ static void xcan_write_frame(struct net_device *ndev, struct sk_buff *skb,
data[1]);
}
}
if (!(priv->devtype.flags & XCAN_FLAG_TX_MAILBOXES) &&
(priv->devtype.flags & XCAN_FLAG_TXFEMP))
can_put_echo_skb(skb, ndev, priv->tx_head % priv->tx_max, 0);
else
can_put_echo_skb(skb, ndev, 0, 0);
priv->tx_head++;
}
/**


@ -1273,9 +1273,15 @@ static int b53_setup(struct dsa_switch *ds)
*/
ds->untag_vlan_aware_bridge_pvid = true;
/* Ageing time is set in seconds */
ds->ageing_time_min = 1 * 1000;
ds->ageing_time_max = AGE_TIME_MAX * 1000;
if (dev->chip_id == BCM53101_DEVICE_ID) {
/* BCM53101 uses 0.5 second increments */
ds->ageing_time_min = 1 * 500;
ds->ageing_time_max = AGE_TIME_MAX * 500;
} else {
/* Everything else uses 1 second increments */
ds->ageing_time_min = 1 * 1000;
ds->ageing_time_max = AGE_TIME_MAX * 1000;
}
ret = b53_reset_switch(dev);
if (ret) {
@ -2559,7 +2565,10 @@ int b53_set_ageing_time(struct dsa_switch *ds, unsigned int msecs)
else
reg = B53_AGING_TIME_CONTROL;
atc = DIV_ROUND_CLOSEST(msecs, 1000);
if (dev->chip_id == BCM53101_DEVICE_ID)
atc = DIV_ROUND_CLOSEST(msecs, 500);
else
atc = DIV_ROUND_CLOSEST(msecs, 1000);
if (!is5325(dev) && !is5365(dev))
atc |= AGE_CHANGE;


@ -2363,7 +2363,8 @@ static void fec_enet_phy_reset_after_clk_enable(struct net_device *ndev)
*/
phy_dev = of_phy_find_device(fep->phy_node);
phy_reset_after_clk_enable(phy_dev);
put_device(&phy_dev->mdio.dev);
if (phy_dev)
put_device(&phy_dev->mdio.dev);
}
}


@ -1561,6 +1561,7 @@ I40E_CHECK_CMD_LENGTH(i40e_aq_set_phy_config);
struct i40e_aq_set_mac_config {
__le16 max_frame_size;
u8 params;
#define I40E_AQ_SET_MAC_CONFIG_CRC_EN BIT(2)
u8 tx_timer_priority; /* bitmap */
__le16 tx_timer_value;
__le16 fc_refresh_threshold;


@ -1189,6 +1189,40 @@ int i40e_set_fc(struct i40e_hw *hw, u8 *aq_failures,
return status;
}
/**
* i40e_aq_set_mac_config - Configure MAC settings
* @hw: pointer to the hw struct
* @max_frame_size: Maximum Frame Size to be supported by the port
* @cmd_details: pointer to command details structure or NULL
*
* Set MAC configuration (0x0603). Note that max_frame_size must be greater
* than zero.
*
* Return: 0 on success, or a negative error code on failure.
*/
int i40e_aq_set_mac_config(struct i40e_hw *hw, u16 max_frame_size,
struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_set_mac_config *cmd;
struct libie_aq_desc desc;
cmd = libie_aq_raw(&desc);
if (max_frame_size == 0)
return -EINVAL;
i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_set_mac_config);
cmd->max_frame_size = cpu_to_le16(max_frame_size);
cmd->params = I40E_AQ_SET_MAC_CONFIG_CRC_EN;
#define I40E_AQ_SET_MAC_CONFIG_FC_DEFAULT_THRESHOLD 0x7FFF
cmd->fc_refresh_threshold =
cpu_to_le16(I40E_AQ_SET_MAC_CONFIG_FC_DEFAULT_THRESHOLD);
return i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
}
/**
* i40e_aq_clear_pxe_mode
* @hw: pointer to the hw struct


@ -4156,7 +4156,7 @@ free_queue_irqs:
irq_num = pf->msix_entries[base + vector].vector;
irq_set_affinity_notifier(irq_num, NULL);
irq_update_affinity_hint(irq_num, NULL);
free_irq(irq_num, &vsi->q_vectors[vector]);
free_irq(irq_num, vsi->q_vectors[vector]);
}
return err;
}
@ -16045,13 +16045,17 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
dev_dbg(&pf->pdev->dev, "get supported phy types ret = %pe last_status = %s\n",
ERR_PTR(err), libie_aq_str(pf->hw.aq.asq_last_status));
/* make sure the MFS hasn't been set lower than the default */
#define MAX_FRAME_SIZE_DEFAULT 0x2600
val = FIELD_GET(I40E_PRTGL_SAH_MFS_MASK,
rd32(&pf->hw, I40E_PRTGL_SAH));
if (val < MAX_FRAME_SIZE_DEFAULT)
dev_warn(&pdev->dev, "MFS for port %x (%d) has been set below the default (%d)\n",
pf->hw.port, val, MAX_FRAME_SIZE_DEFAULT);
err = i40e_aq_set_mac_config(hw, MAX_FRAME_SIZE_DEFAULT, NULL);
if (err)
dev_warn(&pdev->dev, "set mac config ret = %pe last_status = %s\n",
ERR_PTR(err), libie_aq_str(pf->hw.aq.asq_last_status));
/* Make sure the MFS is set to the expected value */
val = rd32(hw, I40E_PRTGL_SAH);
FIELD_MODIFY(I40E_PRTGL_SAH_MFS_MASK, &val, MAX_FRAME_SIZE_DEFAULT);
wr32(hw, I40E_PRTGL_SAH, val);
/* Add a filter to drop all Flow control frames from any VSI from being
* transmitted. By doing so we stop a malicious VF from sending out


@ -98,6 +98,8 @@ int i40e_aq_set_mac_loopback(struct i40e_hw *hw,
struct i40e_asq_cmd_details *cmd_details);
int i40e_aq_set_phy_int_mask(struct i40e_hw *hw, u16 mask,
struct i40e_asq_cmd_details *cmd_details);
int i40e_aq_set_mac_config(struct i40e_hw *hw, u16 max_frame_size,
struct i40e_asq_cmd_details *cmd_details);
int i40e_aq_clear_pxe_mode(struct i40e_hw *hw,
struct i40e_asq_cmd_details *cmd_details);
int i40e_aq_set_link_restart_an(struct i40e_hw *hw,


@ -2081,11 +2081,8 @@ static void igb_diag_test(struct net_device *netdev,
} else {
dev_info(&adapter->pdev->dev, "online testing starting\n");
/* PHY is powered down when interface is down */
if (if_running && igb_link_test(adapter, &data[TEST_LINK]))
if (igb_link_test(adapter, &data[TEST_LINK]))
eth_test->flags |= ETH_TEST_FL_FAILED;
else
data[TEST_LINK] = 0;
/* Online tests aren't run; pass by default */
data[TEST_REG] = 0;


@ -4452,8 +4452,7 @@ int igb_setup_rx_resources(struct igb_ring *rx_ring)
if (xdp_rxq_info_is_reg(&rx_ring->xdp_rxq))
xdp_rxq_info_unreg(&rx_ring->xdp_rxq);
res = xdp_rxq_info_reg(&rx_ring->xdp_rxq, rx_ring->netdev,
rx_ring->queue_index,
rx_ring->q_vector->napi.napi_id);
rx_ring->queue_index, 0);
if (res < 0) {
dev_err(dev, "Failed to register xdp_rxq index %u\n",
rx_ring->queue_index);


@ -654,7 +654,7 @@ static void icssg_prueth_hsr_fdb_add_del(struct prueth_emac *emac,
static int icssg_prueth_hsr_add_mcast(struct net_device *ndev, const u8 *addr)
{
struct net_device *real_dev;
struct net_device *real_dev, *port_dev;
struct prueth_emac *emac;
u8 vlan_id, i;
@ -663,11 +663,15 @@ static int icssg_prueth_hsr_add_mcast(struct net_device *ndev, const u8 *addr)
if (is_hsr_master(real_dev)) {
for (i = HSR_PT_SLAVE_A; i < HSR_PT_INTERLINK; i++) {
emac = netdev_priv(hsr_get_port_ndev(real_dev, i));
if (!emac)
port_dev = hsr_get_port_ndev(real_dev, i);
emac = netdev_priv(port_dev);
if (!emac) {
dev_put(port_dev);
return -EINVAL;
}
icssg_prueth_hsr_fdb_add_del(emac, addr, vlan_id,
true);
dev_put(port_dev);
}
} else {
emac = netdev_priv(real_dev);
@ -679,7 +683,7 @@ static int icssg_prueth_hsr_add_mcast(struct net_device *ndev, const u8 *addr)
static int icssg_prueth_hsr_del_mcast(struct net_device *ndev, const u8 *addr)
{
struct net_device *real_dev;
struct net_device *real_dev, *port_dev;
struct prueth_emac *emac;
u8 vlan_id, i;
@ -688,11 +692,15 @@ static int icssg_prueth_hsr_del_mcast(struct net_device *ndev, const u8 *addr)
if (is_hsr_master(real_dev)) {
for (i = HSR_PT_SLAVE_A; i < HSR_PT_INTERLINK; i++) {
emac = netdev_priv(hsr_get_port_ndev(real_dev, i));
if (!emac)
port_dev = hsr_get_port_ndev(real_dev, i);
emac = netdev_priv(port_dev);
if (!emac) {
dev_put(port_dev);
return -EINVAL;
}
icssg_prueth_hsr_fdb_add_del(emac, addr, vlan_id,
false);
dev_put(port_dev);
}
} else {
emac = netdev_priv(real_dev);


@ -2078,10 +2078,6 @@ static void wx_setup_mrqc(struct wx *wx)
{
u32 rss_field = 0;
/* VT, and RSS do not coexist at the same time */
if (test_bit(WX_FLAG_VMDQ_ENABLED, wx->flags))
return;
/* Disable indicating checksum in descriptor, enables RSS hash */
wr32m(wx, WX_PSR_CTL, WX_PSR_CTL_PCSD, WX_PSR_CTL_PCSD);


@ -4211,6 +4211,7 @@ static int macsec_newlink(struct net_device *dev,
if (err < 0)
goto del_dev;
netdev_update_features(dev);
netif_stacked_transfer_operstate(real_dev, dev);
linkwatch_fire_event(dev);


@ -361,7 +361,7 @@ config NXP_TJA11XX_PHY
tristate "NXP TJA11xx PHYs support"
depends on HWMON
help
Currently supports the NXP TJA1100 and TJA1101 PHY.
Currently supports the NXP TJA1100, TJA1101 and TJA1102 PHYs.
config NCN26000_PHY
tristate "Onsemi 10BASE-T1S Ethernet PHY"


@ -1065,23 +1065,19 @@ EXPORT_SYMBOL_GPL(phy_inband_caps);
*/
int phy_config_inband(struct phy_device *phydev, unsigned int modes)
{
int err;
lockdep_assert_held(&phydev->lock);
if (!!(modes & LINK_INBAND_DISABLE) +
!!(modes & LINK_INBAND_ENABLE) +
!!(modes & LINK_INBAND_BYPASS) != 1)
return -EINVAL;
mutex_lock(&phydev->lock);
if (!phydev->drv)
err = -EIO;
return -EIO;
else if (!phydev->drv->config_inband)
err = -EOPNOTSUPP;
else
err = phydev->drv->config_inband(phydev, modes);
mutex_unlock(&phydev->lock);
return -EOPNOTSUPP;
return err;
return phydev->drv->config_inband(phydev, modes);
}
EXPORT_SYMBOL(phy_config_inband);


@ -287,8 +287,7 @@ static bool phy_uses_state_machine(struct phy_device *phydev)
if (phydev->phy_link_change == phy_link_change)
return phydev->attached_dev && phydev->adjust_link;
/* phydev->phy_link_change is implicitly phylink_phy_change() */
return true;
return !!phydev->phy_link_change;
}
static bool mdio_bus_phy_may_suspend(struct phy_device *phydev)
@ -1864,6 +1863,8 @@ void phy_detach(struct phy_device *phydev)
phydev->attached_dev = NULL;
phy_link_topo_del_phy(dev, phydev);
}
phydev->phy_link_change = NULL;
phydev->phylink = NULL;
if (!phydev->is_on_sfp_module)
