Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Cross-merge networking fixes after downstream PR (net-6.17-rc4).

No conflicts.

Adjacent changes:

drivers/net/ethernet/intel/idpf/idpf_txrx.c
  02614eee26 ("idpf: do not linearize big TSO packets")
  6c4e684802 ("idpf: remove obsolete stashing code")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski 2025-06-12 10:08:24 -07:00
commit d23ad54de7
393 changed files with 3911 additions and 2278 deletions


@ -226,6 +226,8 @@ Domen Puncer <domen@coderock.org>
Douglas Gilbert <dougg@torque.net>
Drew Fustini <fustini@kernel.org> <drew@pdp7.com>
<duje@dujemihanovic.xyz> <duje.mihanovic@skole.hr>
Easwar Hariharan <easwar.hariharan@linux.microsoft.com> <easwar.hariharan@intel.com>
Easwar Hariharan <easwar.hariharan@linux.microsoft.com> <eahariha@linux.microsoft.com>
Ed L. Cashin <ecashin@coraid.com>
Elliot Berman <quic_eberman@quicinc.com> <eberman@codeaurora.org>
Enric Balletbo i Serra <eballetbo@kernel.org> <enric.balletbo@collabora.com>


@ -3222,6 +3222,10 @@ D: AIC5800 IEEE 1394, RAW I/O on 1394
D: Starter of Linux1394 effort
S: ask per mail for current address
N: Boris Pismenny
E: borisp@mellanox.com
D: Kernel TLS implementation and offload support.
N: Nicolas Pitre
E: nico@fluxnic.net
D: StrongARM SA1100 support integrator & hacker
@ -4168,6 +4172,9 @@ S: 1513 Brewster Dr.
S: Carrollton, TX 75010
S: USA
N: Dave Watson
D: Kernel TLS implementation.
N: Tim Waugh
E: tim@cyberelk.net
D: Co-architect of the parallel-port sharing system


@ -435,8 +435,8 @@ both cgroups.
Controlling Controllers
-----------------------
Availablity
~~~~~~~~~~~
Availability
~~~~~~~~~~~~
A controller is available in a cgroup when it is supported by the kernel (i.e.,
compiled in, not disabled and not attached to a v1 hierarchy) and listed in the
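
As a quick illustration of the availability rule above, here is a minimal userspace sketch (assuming the conventional cgroup2 mount at /sys/fs/cgroup; the path is an assumption, not part of the patch) that lists the controllers available in a cgroup by reading its cgroup.controllers file:

#include <stdio.h>

int main(void)
{
	char line[256];
	/* Path assumes the default cgroup2 mount point. */
	FILE *f = fopen("/sys/fs/cgroup/cgroup.controllers", "r");

	if (!f) {
		perror("cgroup.controllers");
		return 1;
	}
	if (fgets(line, sizeof(line), f))
		printf("available controllers: %s", line);
	fclose(f);
	return 0;
}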


@ -7,7 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
title: Infineon Buck Regulators with PMBUS interfaces
maintainers:
- Not Me.
- Guenter Roeck <linux@roeck-us.net>
allOf:
- $ref: regulator.yaml#


@ -507,6 +507,8 @@ patternProperties:
description: Espressif Systems Co. Ltd.
"^est,.*":
description: ESTeem Wireless Modems
"^eswin,.*":
description: Beijing ESWIN Technology Group Co. Ltd.
"^ettus,.*":
description: NI Ettus Research
"^eukrea,.*":


@ -8,8 +8,22 @@ like to know when a security bug is found so that it can be fixed and
disclosed as quickly as possible. Please report security bugs to the
Linux kernel security team.
Contact
-------
The security team and maintainers almost always require additional
information beyond what was initially provided in a report and rely on
active and efficient collaboration with the reporter to perform further
testing (e.g., verifying versions, configuration options, mitigations, or
patches). Before contacting the security team, the reporter must ensure
they are available to explain their findings, engage in discussions, and
run additional tests. Reports where the reporter does not respond promptly
or cannot effectively discuss their findings may be abandoned if the
communication does not quickly improve.
As it is with any bug, the more information provided the easier it
will be to diagnose and fix. Please review the procedure outlined in
'Documentation/admin-guide/reporting-issues.rst' if you are unclear about what
information is helpful. Any exploit code is very helpful and will not
be released without consent from the reporter unless it has already been
made public.
The Linux kernel security team can be contacted by email at
<security@kernel.org>. This is a private list of security officers
@ -19,13 +33,6 @@ that can speed up the process considerably. It is possible that the
security team will bring in extra help from area maintainers to
understand and fix the security vulnerability.
As it is with any bug, the more information provided the easier it
will be to diagnose and fix. Please review the procedure outlined in
'Documentation/admin-guide/reporting-issues.rst' if you are unclear about what
information is helpful. Any exploit code is very helpful and will not
be released without consent from the reporter unless it has already been
made public.
Please send plain text emails without attachments where possible.
It is much harder to have a context-quoted discussion about a complex
issue if all the details are hidden away in attachments. Think of it like a


@ -43,7 +43,7 @@ Following IOMMUFD objects are exposed to userspace:
- IOMMUFD_OBJ_HWPT_PAGING, representing an actual hardware I/O page table
(i.e. a single struct iommu_domain) managed by the iommu driver. "PAGING"
primarly indicates this type of HWPT should be linked to an IOAS. It also
primarily indicates this type of HWPT should be linked to an IOAS. It also
indicates that it is backed by an iommu_domain with __IOMMU_DOMAIN_PAGING
feature flag. This can be either an UNMANAGED stage-1 domain for a device
running in the user space, or a nesting parent stage-2 domain for mappings
@ -76,7 +76,7 @@ Following IOMMUFD objects are exposed to userspace:
* Security namespace for guest owned ID, e.g. guest-controlled cache tags
* Non-device-affiliated event reporting, e.g. invalidation queue errors
* Access to a sharable nesting parent pagetable across physical IOMMUs
* Access to a shareable nesting parent pagetable across physical IOMMUs
* Virtualization of various platforms IDs, e.g. RIDs and others
* Delivery of paravirtualized invalidation
* Direct assigned invalidation queues
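
For context on how these objects are driven from userspace, a hedged minimal sketch (assuming the IOMMU_IOAS_ALLOC/IOMMU_IOAS_MAP ioctl layout from include/uapi/linux/iommufd.h; error handling trimmed) that allocates an IOAS and maps an anonymous buffer into it, the step that precedes HWPT allocation:

#include <fcntl.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/iommufd.h>

int main(void)
{
	struct iommu_ioas_alloc alloc = { .size = sizeof(alloc) };
	void *buf = aligned_alloc(4096, 4096);
	int fd = open("/dev/iommu", O_RDWR);

	if (fd < 0 || !buf || ioctl(fd, IOMMU_IOAS_ALLOC, &alloc))
		return 1;

	struct iommu_ioas_map map = {
		.size = sizeof(map),
		.flags = IOMMU_IOAS_MAP_READABLE | IOMMU_IOAS_MAP_WRITEABLE,
		.ioas_id = alloc.out_ioas_id,
		.user_va = (__u64)(unsigned long)buf,
		.length = 4096,
	};
	/* Without IOMMU_IOAS_MAP_FIXED_IOVA the kernel chooses the IOVA
	 * and returns it in map.iova. */
	return ioctl(fd, IOMMU_IOAS_MAP, &map) ? 1 : 0;
}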


@ -937,7 +937,7 @@ S: Maintained
F: drivers/gpio/gpio-altera.c
ALTERA TRIPLE SPEED ETHERNET DRIVER
M: Joyce Ooi <joyce.ooi@intel.com>
M: Boon Khai Ng <boon.khai.ng@altera.com>
L: netdev@vger.kernel.org
S: Maintained
F: drivers/net/ethernet/altera/
@ -4216,7 +4216,7 @@ F: drivers/md/bcache/
BCACHEFS
M: Kent Overstreet <kent.overstreet@linux.dev>
L: linux-bcachefs@vger.kernel.org
S: Supported
S: Externally maintained
C: irc://irc.oftc.net/bcache
P: Documentation/filesystems/bcachefs/SubmittingPatches.rst
T: git https://evilpiepirate.org/git/bcachefs.git
@ -8427,6 +8427,17 @@ T: git https://gitlab.freedesktop.org/drm/misc/kernel.git
F: drivers/gpu/drm/scheduler/
F: include/drm/gpu_scheduler.h
DRM GPUVM
M: Danilo Krummrich <dakr@kernel.org>
R: Matthew Brost <matthew.brost@intel.com>
R: Thomas Hellström <thomas.hellstrom@linux.intel.com>
R: Alice Ryhl <aliceryhl@google.com>
L: dri-devel@lists.freedesktop.org
S: Supported
T: git https://gitlab.freedesktop.org/drm/misc/kernel.git
F: drivers/gpu/drm/drm_gpuvm.c
F: include/drm/drm_gpuvm.h
DRM LOG
M: Jocelyn Falempe <jfalempe@redhat.com>
M: Javier Martinez Canillas <javierm@redhat.com>
@ -10656,7 +10667,8 @@ S: Maintained
F: block/partitions/efi.*
HABANALABS PCI DRIVER
M: Yaron Avizrat <yaron.avizrat@intel.com>
M: Koby Elbaz <koby.elbaz@intel.com>
M: Konstantin Sinyuk <konstantin.sinyuk@intel.com>
L: dri-devel@lists.freedesktop.org
S: Supported
C: irc://irc.oftc.net/dri-devel
@ -11014,7 +11026,7 @@ F: Documentation/admin-guide/perf/hns3-pmu.rst
F: drivers/perf/hisilicon/hns3_pmu.c
HISILICON I2C CONTROLLER DRIVER
M: Yicong Yang <yangyicong@hisilicon.com>
M: Devyn Liu <liudingyuan@h-partners.com>
L: linux-i2c@vger.kernel.org
S: Maintained
W: https://www.hisilicon.com
@ -12282,7 +12294,6 @@ F: include/linux/avf/virtchnl.h
F: include/linux/net/intel/*/
INTEL ETHERNET PROTOCOL DRIVER FOR RDMA
M: Mustafa Ismail <mustafa.ismail@intel.com>
M: Tatyana Nikolova <tatyana.e.nikolova@intel.com>
L: linux-rdma@vger.kernel.org
S: Supported
@ -16059,6 +16070,23 @@ F: mm/mempolicy.c
F: mm/migrate.c
F: mm/migrate_device.c
MEMORY MANAGEMENT - MGLRU (MULTI-GEN LRU)
M: Andrew Morton <akpm@linux-foundation.org>
M: Axel Rasmussen <axelrasmussen@google.com>
M: Yuanchu Xie <yuanchu@google.com>
R: Wei Xu <weixugc@google.com>
L: linux-mm@kvack.org
S: Maintained
W: http://www.linux-mm.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
F: Documentation/admin-guide/mm/multigen_lru.rst
F: Documentation/mm/multigen_lru.rst
F: include/linux/mm_inline.h
F: include/linux/mmzone.h
F: mm/swap.c
F: mm/vmscan.c
F: mm/workingset.c
MEMORY MANAGEMENT - MISC
M: Andrew Morton <akpm@linux-foundation.org>
M: David Hildenbrand <david@redhat.com>
@ -16249,8 +16277,10 @@ S: Maintained
W: http://www.linux-mm.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
F: rust/helpers/mm.c
F: rust/helpers/page.c
F: rust/kernel/mm.rs
F: rust/kernel/mm/
F: rust/kernel/page.rs
MEMORY MAPPING
M: Andrew Morton <akpm@linux-foundation.org>
@ -17819,7 +17849,6 @@ F: net/ipv6/syncookies.c
F: net/ipv6/tcp*.c
NETWORKING [TLS]
M: Boris Pismenny <borisp@nvidia.com>
M: John Fastabend <john.fastabend@gmail.com>
M: Jakub Kicinski <kuba@kernel.org>
L: netdev@vger.kernel.org
@ -20857,8 +20886,8 @@ S: Maintained
F: drivers/firmware/qcom/qcom_qseecom_uefisecapp.c
QUALCOMM RMNET DRIVER
M: Subash Abhinov Kasiviswanathan <quic_subashab@quicinc.com>
M: Sean Tranchetti <quic_stranche@quicinc.com>
M: Subash Abhinov Kasiviswanathan <subash.a.kasiviswanathan@oss.qualcomm.com>
M: Sean Tranchetti <sean.tranchetti@oss.qualcomm.com>
L: netdev@vger.kernel.org
S: Maintained
F: Documentation/networking/device_drivers/cellular/qualcomm/rmnet.rst


@ -2,7 +2,7 @@
VERSION = 6
PATCHLEVEL = 17
SUBLEVEL = 0
EXTRAVERSION = -rc2
EXTRAVERSION = -rc3
NAME = Baby Opossum Posse
# *DOCUMENTATION*


@ -82,13 +82,16 @@
};
};
etop@e180000 {
ethernet@e180000 {
compatible = "lantiq,etop-xway";
reg = <0xe180000 0x40000>;
interrupt-parent = <&icu0>;
interrupts = <73 78>;
interrupt-names = "tx", "rx";
phy-mode = "rmii";
mac-address = [ 00 11 22 33 44 55 ];
lantiq,rx-burst-length = <4>;
lantiq,tx-burst-length = <4>;
};
stp0: stp@e100bb0 {


@ -497,7 +497,7 @@ void __init ltq_soc_init(void)
ifccr = CGU_IFCCR_VR9;
pcicr = CGU_PCICR_VR9;
} else {
clkdev_add_pmu("1e180000.etop", NULL, 1, 0, PMU_PPE);
clkdev_add_pmu("1e180000.ethernet", NULL, 1, 0, PMU_PPE);
}
if (!of_machine_is_compatible("lantiq,ase"))
@ -531,9 +531,9 @@ void __init ltq_soc_init(void)
CLOCK_133M, CLOCK_133M);
clkdev_add_pmu("1e101000.usb", "otg", 1, 0, PMU_USB0);
clkdev_add_pmu("1f203018.usb2-phy", "phy", 1, 0, PMU_USB0_P);
clkdev_add_pmu("1e180000.etop", "ppe", 1, 0, PMU_PPE);
clkdev_add_cgu("1e180000.etop", "ephycgu", CGU_EPHY);
clkdev_add_pmu("1e180000.etop", "ephy", 1, 0, PMU_EPHY);
clkdev_add_pmu("1e180000.ethernet", "ppe", 1, 0, PMU_PPE);
clkdev_add_cgu("1e180000.ethernet", "ephycgu", CGU_EPHY);
clkdev_add_pmu("1e180000.ethernet", "ephy", 1, 0, PMU_EPHY);
clkdev_add_pmu("1e103000.sdio", NULL, 1, 0, PMU_ASE_SDIO);
clkdev_add_pmu("1e116000.mei", "dfe", 1, 0, PMU_DFE);
} else if (of_machine_is_compatible("lantiq,grx390")) {
@ -592,7 +592,7 @@ void __init ltq_soc_init(void)
clkdev_add_pmu("1e101000.usb", "otg", 1, 0, PMU_USB0 | PMU_AHBM);
clkdev_add_pmu("1f203034.usb2-phy", "phy", 1, 0, PMU_USB1_P);
clkdev_add_pmu("1e106000.usb", "otg", 1, 0, PMU_USB1 | PMU_AHBM);
clkdev_add_pmu("1e180000.etop", "switch", 1, 0, PMU_SWITCH);
clkdev_add_pmu("1e180000.ethernet", "switch", 1, 0, PMU_SWITCH);
clkdev_add_pmu("1e103000.sdio", NULL, 1, 0, PMU_SDIO);
clkdev_add_pmu("1e103100.deu", NULL, 1, 0, PMU_DEU);
clkdev_add_pmu("1e116000.mei", "dfe", 1, 0, PMU_DFE);


@ -243,13 +243,13 @@ $(obj)/wrapper.a: $(obj-wlib) FORCE
hostprogs := addnote hack-coff mktree
targets += $(patsubst $(obj)/%,%,$(obj-boot) wrapper.a) zImage.lds
extra-y := $(obj)/wrapper.a $(obj-plat) $(obj)/empty.o \
always-y := $(obj)/wrapper.a $(obj-plat) $(obj)/empty.o \
$(obj)/zImage.lds $(obj)/zImage.coff.lds $(obj)/zImage.ps3.lds
dtstree := $(src)/dts
wrapper := $(src)/wrapper
wrapperbits := $(extra-y) $(addprefix $(obj)/,addnote hack-coff mktree) \
wrapperbits := $(always-y) $(addprefix $(obj)/,addnote hack-coff mktree) \
$(wrapper) FORCE
#############
@ -456,7 +456,7 @@ WRAPPER_DTSDIR := /usr/lib/kernel-wrapper/dts
WRAPPER_BINDIR := /usr/sbin
INSTALL := install
extra-installed := $(patsubst $(obj)/%, $(DESTDIR)$(WRAPPER_OBJDIR)/%, $(extra-y))
extra-installed := $(patsubst $(obj)/%, $(DESTDIR)$(WRAPPER_OBJDIR)/%, $(always-y))
hostprogs-installed := $(patsubst %, $(DESTDIR)$(WRAPPER_BINDIR)/%, $(hostprogs))
wrapper-installed := $(DESTDIR)$(WRAPPER_BINDIR)/wrapper
dts-installed := $(patsubst $(dtstree)/%, $(DESTDIR)$(WRAPPER_DTSDIR)/%, $(wildcard $(dtstree)/*.dts))


@ -19,19 +19,19 @@
set -e
# this should work for both the pSeries zImage and the iSeries vmlinux.sm
image_name=`basename $2`
image_name=$(basename "$2")
echo "Warning: '${INSTALLKERNEL}' command not available... Copying" \
"directly to $4/$image_name-$1" >&2
if [ -f $4/$image_name-$1 ]; then
mv $4/$image_name-$1 $4/$image_name-$1.old
if [ -f "$4"/"$image_name"-"$1" ]; then
mv "$4"/"$image_name"-"$1" "$4"/"$image_name"-"$1".old
fi
if [ -f $4/System.map-$1 ]; then
mv $4/System.map-$1 $4/System-$1.old
if [ -f "$4"/System.map-"$1" ]; then
mv "$4"/System.map-"$1" "$4"/System-"$1".old
fi
cat $2 > $4/$image_name-$1
cp $3 $4/System.map-$1
cat "$2" > "$4"/"$image_name"-"$1"
cp "$3" "$4"/System.map-"$1"


@ -199,7 +199,9 @@ obj-$(CONFIG_ALTIVEC) += vector.o
obj-$(CONFIG_PPC_OF_BOOT_TRAMPOLINE) += prom_init.o
obj64-$(CONFIG_PPC_OF_BOOT_TRAMPOLINE) += prom_entry_64.o
extra-$(CONFIG_PPC_OF_BOOT_TRAMPOLINE) += prom_init_check
ifdef KBUILD_BUILTIN
always-$(CONFIG_PPC_OF_BOOT_TRAMPOLINE) += prom_init_check
endif
obj-$(CONFIG_PPC64) += $(obj64-y)
obj-$(CONFIG_PPC32) += $(obj32-y)


@ -632,19 +632,19 @@ static void __init kvm_check_ins(u32 *inst, u32 features)
#endif
}
switch (inst_no_rt & ~KVM_MASK_RB) {
#ifdef CONFIG_PPC_BOOK3S_32
switch (inst_no_rt & ~KVM_MASK_RB) {
case KVM_INST_MTSRIN:
if (features & KVM_MAGIC_FEAT_SR) {
u32 inst_rb = _inst & KVM_MASK_RB;
kvm_patch_ins_mtsrin(inst, inst_rt, inst_rb);
}
break;
#endif
}
#endif
switch (_inst) {
#ifdef CONFIG_BOOKE
switch (_inst) {
case KVM_INST_WRTEEI_0:
kvm_patch_ins_wrteei_0(inst);
break;
@ -652,8 +652,8 @@ static void __init kvm_check_ins(u32 *inst, u32 features)
case KVM_INST_WRTEEI_1:
kvm_patch_ins_wrtee(inst, 0, 1);
break;
#endif
}
#endif
}
extern u32 kvm_template_start[];


@ -15,8 +15,8 @@
has_renamed_memintrinsics()
{
grep -q "^CONFIG_KASAN=y$" ${KCONFIG_CONFIG} && \
! grep -q "^CONFIG_CC_HAS_KASAN_MEMINTRINSIC_PREFIX=y" ${KCONFIG_CONFIG}
grep -q "^CONFIG_KASAN=y$" "${KCONFIG_CONFIG}" && \
! grep -q "^CONFIG_CC_HAS_KASAN_MEMINTRINSIC_PREFIX=y" "${KCONFIG_CONFIG}"
}
if has_renamed_memintrinsics
@ -42,15 +42,15 @@ check_section()
{
file=$1
section=$2
size=$(objdump -h -j $section $file 2>/dev/null | awk "\$2 == \"$section\" {print \$3}")
size=$(objdump -h -j "$section" "$file" 2>/dev/null | awk "\$2 == \"$section\" {print \$3}")
size=${size:-0}
if [ $size -ne 0 ]; then
if [ "$size" -ne 0 ]; then
ERROR=1
echo "Error: Section $section not empty in prom_init.c" >&2
fi
}
for UNDEF in $($NM -u $OBJ | awk '{print $2}')
for UNDEF in $($NM -u "$OBJ" | awk '{print $2}')
do
# On 64-bit nm gives us the function descriptors, which have
# a leading . on the name, so strip it off here.
@ -87,8 +87,8 @@ do
fi
done
check_section $OBJ .data
check_section $OBJ .bss
check_section $OBJ .init.data
check_section "$OBJ" .data
check_section "$OBJ" .bss
check_section "$OBJ" .init.data
exit $ERROR


@ -141,10 +141,7 @@ void __init check_smt_enabled(void)
smt_enabled_at_boot = 0;
else {
int smt;
int rc;
rc = kstrtoint(smt_enabled_cmdline, 10, &smt);
if (!rc)
if (!kstrtoint(smt_enabled_cmdline, 10, &smt))
smt_enabled_at_boot =
min(threads_per_core, smt);
}


@ -69,7 +69,7 @@ int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu)
/*
* Common checks before entering the guest world. Call with interrupts
* disabled.
* enabled.
*
* returns:
*


@ -110,8 +110,7 @@ static int cpm_pic_probe(struct platform_device *pdev)
out_be32(&data->reg->cpic_cimr, 0);
data->host = irq_domain_create_linear(of_fwnode_handle(dev->of_node),
64, &cpm_pic_host_ops, data);
data->host = irq_domain_create_linear(dev_fwnode(dev), 64, &cpm_pic_host_ops, data);
if (!data->host)
return -ENODEV;


@ -122,16 +122,11 @@ choice
If unsure, select Generic.
config POWERPC64_CPU
bool "Generic (POWER5 and PowerPC 970 and above)"
depends on PPC_BOOK3S_64 && !CPU_LITTLE_ENDIAN
bool "Generic 64 bits powerpc"
depends on PPC_BOOK3S_64
select ARCH_HAS_FAST_MULTIPLIER if CPU_LITTLE_ENDIAN
select PPC_64S_HASH_MMU
config POWERPC64_CPU
bool "Generic (POWER8 and above)"
depends on PPC_BOOK3S_64 && CPU_LITTLE_ENDIAN
select ARCH_HAS_FAST_MULTIPLIER
select PPC_64S_HASH_MMU
select PPC_HAS_LBARX_LHARX
select PPC_HAS_LBARX_LHARX if CPU_LITTLE_ENDIAN
config POWERPC_CPU
bool "Generic 32 bits powerpc"


@ -412,9 +412,8 @@ static int fsl_of_msi_probe(struct platform_device *dev)
}
platform_set_drvdata(dev, msi);
msi->irqhost = irq_domain_create_linear(of_fwnode_handle(dev->dev.of_node),
NR_MSI_IRQS_MAX, &fsl_msi_host_ops, msi);
msi->irqhost = irq_domain_create_linear(dev_fwnode(&dev->dev), NR_MSI_IRQS_MAX,
&fsl_msi_host_ops, msi);
if (msi->irqhost == NULL) {
dev_err(&dev->dev, "No memory for MSI irqhost\n");
err = -ENOMEM;


@ -530,6 +530,9 @@ void setup_vmem(unsigned long kernel_start, unsigned long kernel_end, unsigned l
lowcore_address + sizeof(struct lowcore),
POPULATE_LOWCORE);
for_each_physmem_usable_range(i, &start, &end) {
/* Do not map lowcore with identity mapping */
if (!start)
start = sizeof(struct lowcore);
pgtable_populate((unsigned long)__identity_va(start),
(unsigned long)__identity_va(end),
POPULATE_IDENTITY);


@ -5,6 +5,7 @@ CONFIG_WATCH_QUEUE=y
CONFIG_AUDIT=y
CONFIG_NO_HZ_IDLE=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_POSIX_AUX_CLOCKS=y
CONFIG_BPF_SYSCALL=y
CONFIG_BPF_JIT=y
CONFIG_BPF_JIT_ALWAYS_ON=y
@ -19,6 +20,7 @@ CONFIG_TASK_XACCT=y
CONFIG_TASK_IO_ACCOUNTING=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_SCHED_PROXY_EXEC=y
CONFIG_NUMA_BALANCING=y
CONFIG_MEMCG=y
CONFIG_BLK_CGROUP=y
@ -42,6 +44,7 @@ CONFIG_PROFILING=y
CONFIG_KEXEC=y
CONFIG_KEXEC_FILE=y
CONFIG_KEXEC_SIG=y
CONFIG_CRASH_DM_CRYPT=y
CONFIG_LIVEPATCH=y
CONFIG_MARCH_Z13=y
CONFIG_NR_CPUS=512
@ -105,6 +108,7 @@ CONFIG_CMA_AREAS=7
CONFIG_MEM_SOFT_DIRTY=y
CONFIG_DEFERRED_STRUCT_PAGE_INIT=y
CONFIG_IDLE_PAGE_TRACKING=y
CONFIG_ZONE_DEVICE=y
CONFIG_PERCPU_STATS=y
CONFIG_GUP_TEST=y
CONFIG_ANON_VMA_NAME=y
@ -223,17 +227,19 @@ CONFIG_NETFILTER_XT_TARGET_CONNMARK=m
CONFIG_NETFILTER_XT_TARGET_CONNSECMARK=m
CONFIG_NETFILTER_XT_TARGET_CT=m
CONFIG_NETFILTER_XT_TARGET_DSCP=m
CONFIG_NETFILTER_XT_TARGET_HL=m
CONFIG_NETFILTER_XT_TARGET_HMARK=m
CONFIG_NETFILTER_XT_TARGET_IDLETIMER=m
CONFIG_NETFILTER_XT_TARGET_LOG=m
CONFIG_NETFILTER_XT_TARGET_MARK=m
CONFIG_NETFILTER_XT_NAT=m
CONFIG_NETFILTER_XT_TARGET_NETMAP=m
CONFIG_NETFILTER_XT_TARGET_NFLOG=m
CONFIG_NETFILTER_XT_TARGET_NFQUEUE=m
CONFIG_NETFILTER_XT_TARGET_REDIRECT=m
CONFIG_NETFILTER_XT_TARGET_MASQUERADE=m
CONFIG_NETFILTER_XT_TARGET_TEE=m
CONFIG_NETFILTER_XT_TARGET_TPROXY=m
CONFIG_NETFILTER_XT_TARGET_TRACE=m
CONFIG_NETFILTER_XT_TARGET_SECMARK=m
CONFIG_NETFILTER_XT_TARGET_TCPMSS=m
CONFIG_NETFILTER_XT_TARGET_TCPOPTSTRIP=m
@ -248,6 +254,7 @@ CONFIG_NETFILTER_XT_MATCH_CONNLIMIT=m
CONFIG_NETFILTER_XT_MATCH_CONNMARK=m
CONFIG_NETFILTER_XT_MATCH_CONNTRACK=m
CONFIG_NETFILTER_XT_MATCH_CPU=m
CONFIG_NETFILTER_XT_MATCH_DCCP=m
CONFIG_NETFILTER_XT_MATCH_DEVGROUP=m
CONFIG_NETFILTER_XT_MATCH_DSCP=m
CONFIG_NETFILTER_XT_MATCH_ESP=m
@ -318,16 +325,8 @@ CONFIG_IP_NF_MATCH_AH=m
CONFIG_IP_NF_MATCH_ECN=m
CONFIG_IP_NF_MATCH_RPFILTER=m
CONFIG_IP_NF_MATCH_TTL=m
CONFIG_IP_NF_FILTER=m
CONFIG_IP_NF_TARGET_REJECT=m
CONFIG_IP_NF_NAT=m
CONFIG_IP_NF_TARGET_MASQUERADE=m
CONFIG_IP_NF_MANGLE=m
CONFIG_IP_NF_TARGET_ECN=m
CONFIG_IP_NF_TARGET_TTL=m
CONFIG_IP_NF_RAW=m
CONFIG_IP_NF_SECURITY=m
CONFIG_IP_NF_ARPFILTER=m
CONFIG_IP_NF_ARP_MANGLE=m
CONFIG_NFT_FIB_IPV6=m
CONFIG_IP6_NF_IPTABLES=m
@ -340,15 +339,9 @@ CONFIG_IP6_NF_MATCH_IPV6HEADER=m
CONFIG_IP6_NF_MATCH_MH=m
CONFIG_IP6_NF_MATCH_RPFILTER=m
CONFIG_IP6_NF_MATCH_RT=m
CONFIG_IP6_NF_TARGET_HL=m
CONFIG_IP6_NF_FILTER=m
CONFIG_IP6_NF_TARGET_REJECT=m
CONFIG_IP6_NF_MANGLE=m
CONFIG_IP6_NF_RAW=m
CONFIG_IP6_NF_SECURITY=m
CONFIG_IP6_NF_NAT=m
CONFIG_IP6_NF_TARGET_MASQUERADE=m
CONFIG_NF_TABLES_BRIDGE=m
CONFIG_IP_SCTP=m
CONFIG_RDS=m
CONFIG_RDS_RDMA=m
CONFIG_RDS_TCP=m
@ -383,6 +376,7 @@ CONFIG_NET_SCH_FQ_CODEL=m
CONFIG_NET_SCH_INGRESS=m
CONFIG_NET_SCH_PLUG=m
CONFIG_NET_SCH_ETS=m
CONFIG_NET_SCH_DUALPI2=m
CONFIG_NET_CLS_BASIC=m
CONFIG_NET_CLS_ROUTE4=m
CONFIG_NET_CLS_FW=m
@ -504,6 +498,7 @@ CONFIG_DM_VDO=m
CONFIG_NETDEVICES=y
CONFIG_BONDING=m
CONFIG_DUMMY=m
CONFIG_OVPN=m
CONFIG_EQUALIZER=m
CONFIG_IFB=m
CONFIG_MACVLAN=m
@ -641,6 +636,7 @@ CONFIG_VP_VDPA=m
CONFIG_VHOST_NET=m
CONFIG_VHOST_VSOCK=m
CONFIG_VHOST_VDPA=m
CONFIG_DEV_DAX=m
CONFIG_EXT4_FS=y
CONFIG_EXT4_FS_POSIX_ACL=y
CONFIG_EXT4_FS_SECURITY=y
@ -665,6 +661,7 @@ CONFIG_NILFS2_FS=m
CONFIG_BCACHEFS_FS=y
CONFIG_BCACHEFS_QUOTA=y
CONFIG_BCACHEFS_POSIX_ACL=y
CONFIG_FS_DAX=y
CONFIG_EXPORTFS_BLOCK_OPS=y
CONFIG_FS_ENCRYPTION=y
CONFIG_FS_VERITY=y
@ -755,6 +752,8 @@ CONFIG_HARDENED_USERCOPY=y
CONFIG_BUG_ON_DATA_CORRUPTION=y
CONFIG_CRYPTO_USER=m
CONFIG_CRYPTO_SELFTESTS=y
CONFIG_CRYPTO_SELFTESTS_FULL=y
CONFIG_CRYPTO_NULL=y
CONFIG_CRYPTO_PCRYPT=m
CONFIG_CRYPTO_CRYPTD=m
CONFIG_CRYPTO_BENCHMARK=m
@ -783,7 +782,6 @@ CONFIG_CRYPTO_HCTR2=m
CONFIG_CRYPTO_LRW=m
CONFIG_CRYPTO_PCBC=m
CONFIG_CRYPTO_AEGIS128=m
CONFIG_CRYPTO_CHACHA20POLY1305=m
CONFIG_CRYPTO_GCM=y
CONFIG_CRYPTO_SEQIV=y
CONFIG_CRYPTO_MD4=m
@ -822,6 +820,7 @@ CONFIG_SYSTEM_BLACKLIST_KEYRING=y
CONFIG_CRYPTO_KRB5=m
CONFIG_CRYPTO_KRB5_SELFTESTS=y
CONFIG_CORDIC=m
CONFIG_TRACE_MMIO_ACCESS=y
CONFIG_RANDOM32_SELFTEST=y
CONFIG_XZ_DEC_MICROLZMA=y
CONFIG_DMA_CMA=y


@ -4,6 +4,7 @@ CONFIG_WATCH_QUEUE=y
CONFIG_AUDIT=y
CONFIG_NO_HZ_IDLE=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_POSIX_AUX_CLOCKS=y
CONFIG_BPF_SYSCALL=y
CONFIG_BPF_JIT=y
CONFIG_BPF_JIT_ALWAYS_ON=y
@ -17,6 +18,7 @@ CONFIG_TASK_XACCT=y
CONFIG_TASK_IO_ACCOUNTING=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_SCHED_PROXY_EXEC=y
CONFIG_NUMA_BALANCING=y
CONFIG_MEMCG=y
CONFIG_BLK_CGROUP=y
@ -40,11 +42,12 @@ CONFIG_PROFILING=y
CONFIG_KEXEC=y
CONFIG_KEXEC_FILE=y
CONFIG_KEXEC_SIG=y
CONFIG_CRASH_DM_CRYPT=y
CONFIG_LIVEPATCH=y
CONFIG_MARCH_Z13=y
CONFIG_NR_CPUS=512
CONFIG_NUMA=y
CONFIG_HZ_100=y
CONFIG_HZ_1000=y
CONFIG_CERT_STORE=y
CONFIG_EXPOLINE=y
CONFIG_EXPOLINE_AUTO=y
@ -97,6 +100,7 @@ CONFIG_CMA_AREAS=7
CONFIG_MEM_SOFT_DIRTY=y
CONFIG_DEFERRED_STRUCT_PAGE_INIT=y
CONFIG_IDLE_PAGE_TRACKING=y
CONFIG_ZONE_DEVICE=y
CONFIG_PERCPU_STATS=y
CONFIG_ANON_VMA_NAME=y
CONFIG_USERFAULTFD=y
@ -214,17 +218,19 @@ CONFIG_NETFILTER_XT_TARGET_CONNMARK=m
CONFIG_NETFILTER_XT_TARGET_CONNSECMARK=m
CONFIG_NETFILTER_XT_TARGET_CT=m
CONFIG_NETFILTER_XT_TARGET_DSCP=m
CONFIG_NETFILTER_XT_TARGET_HL=m
CONFIG_NETFILTER_XT_TARGET_HMARK=m
CONFIG_NETFILTER_XT_TARGET_IDLETIMER=m
CONFIG_NETFILTER_XT_TARGET_LOG=m
CONFIG_NETFILTER_XT_TARGET_MARK=m
CONFIG_NETFILTER_XT_NAT=m
CONFIG_NETFILTER_XT_TARGET_NETMAP=m
CONFIG_NETFILTER_XT_TARGET_NFLOG=m
CONFIG_NETFILTER_XT_TARGET_NFQUEUE=m
CONFIG_NETFILTER_XT_TARGET_REDIRECT=m
CONFIG_NETFILTER_XT_TARGET_MASQUERADE=m
CONFIG_NETFILTER_XT_TARGET_TEE=m
CONFIG_NETFILTER_XT_TARGET_TPROXY=m
CONFIG_NETFILTER_XT_TARGET_TRACE=m
CONFIG_NETFILTER_XT_TARGET_SECMARK=m
CONFIG_NETFILTER_XT_TARGET_TCPMSS=m
CONFIG_NETFILTER_XT_TARGET_TCPOPTSTRIP=m
@ -239,6 +245,7 @@ CONFIG_NETFILTER_XT_MATCH_CONNLIMIT=m
CONFIG_NETFILTER_XT_MATCH_CONNMARK=m
CONFIG_NETFILTER_XT_MATCH_CONNTRACK=m
CONFIG_NETFILTER_XT_MATCH_CPU=m
CONFIG_NETFILTER_XT_MATCH_DCCP=m
CONFIG_NETFILTER_XT_MATCH_DEVGROUP=m
CONFIG_NETFILTER_XT_MATCH_DSCP=m
CONFIG_NETFILTER_XT_MATCH_ESP=m
@ -309,16 +316,8 @@ CONFIG_IP_NF_MATCH_AH=m
CONFIG_IP_NF_MATCH_ECN=m
CONFIG_IP_NF_MATCH_RPFILTER=m
CONFIG_IP_NF_MATCH_TTL=m
CONFIG_IP_NF_FILTER=m
CONFIG_IP_NF_TARGET_REJECT=m
CONFIG_IP_NF_NAT=m
CONFIG_IP_NF_TARGET_MASQUERADE=m
CONFIG_IP_NF_MANGLE=m
CONFIG_IP_NF_TARGET_ECN=m
CONFIG_IP_NF_TARGET_TTL=m
CONFIG_IP_NF_RAW=m
CONFIG_IP_NF_SECURITY=m
CONFIG_IP_NF_ARPFILTER=m
CONFIG_IP_NF_ARP_MANGLE=m
CONFIG_NFT_FIB_IPV6=m
CONFIG_IP6_NF_IPTABLES=m
@ -331,15 +330,9 @@ CONFIG_IP6_NF_MATCH_IPV6HEADER=m
CONFIG_IP6_NF_MATCH_MH=m
CONFIG_IP6_NF_MATCH_RPFILTER=m
CONFIG_IP6_NF_MATCH_RT=m
CONFIG_IP6_NF_TARGET_HL=m
CONFIG_IP6_NF_FILTER=m
CONFIG_IP6_NF_TARGET_REJECT=m
CONFIG_IP6_NF_MANGLE=m
CONFIG_IP6_NF_RAW=m
CONFIG_IP6_NF_SECURITY=m
CONFIG_IP6_NF_NAT=m
CONFIG_IP6_NF_TARGET_MASQUERADE=m
CONFIG_NF_TABLES_BRIDGE=m
CONFIG_IP_SCTP=m
CONFIG_RDS=m
CONFIG_RDS_RDMA=m
CONFIG_RDS_TCP=m
@ -373,6 +366,7 @@ CONFIG_NET_SCH_FQ_CODEL=m
CONFIG_NET_SCH_INGRESS=m
CONFIG_NET_SCH_PLUG=m
CONFIG_NET_SCH_ETS=m
CONFIG_NET_SCH_DUALPI2=m
CONFIG_NET_CLS_BASIC=m
CONFIG_NET_CLS_ROUTE4=m
CONFIG_NET_CLS_FW=m
@ -494,6 +488,7 @@ CONFIG_DM_VDO=m
CONFIG_NETDEVICES=y
CONFIG_BONDING=m
CONFIG_DUMMY=m
CONFIG_OVPN=m
CONFIG_EQUALIZER=m
CONFIG_IFB=m
CONFIG_MACVLAN=m
@ -631,6 +626,7 @@ CONFIG_VP_VDPA=m
CONFIG_VHOST_NET=m
CONFIG_VHOST_VSOCK=m
CONFIG_VHOST_VDPA=m
CONFIG_DEV_DAX=m
CONFIG_EXT4_FS=y
CONFIG_EXT4_FS_POSIX_ACL=y
CONFIG_EXT4_FS_SECURITY=y
@ -652,6 +648,7 @@ CONFIG_NILFS2_FS=m
CONFIG_BCACHEFS_FS=m
CONFIG_BCACHEFS_QUOTA=y
CONFIG_BCACHEFS_POSIX_ACL=y
CONFIG_FS_DAX=y
CONFIG_EXPORTFS_BLOCK_OPS=y
CONFIG_FS_ENCRYPTION=y
CONFIG_FS_VERITY=y
@ -683,7 +680,6 @@ CONFIG_TMPFS_POSIX_ACL=y
CONFIG_TMPFS_INODE64=y
CONFIG_TMPFS_QUOTA=y
CONFIG_HUGETLBFS=y
CONFIG_CONFIGFS_FS=m
CONFIG_ECRYPT_FS=m
CONFIG_CRAMFS=m
CONFIG_SQUASHFS=m
@ -741,6 +737,7 @@ CONFIG_BUG_ON_DATA_CORRUPTION=y
CONFIG_CRYPTO_FIPS=y
CONFIG_CRYPTO_USER=m
CONFIG_CRYPTO_SELFTESTS=y
CONFIG_CRYPTO_NULL=y
CONFIG_CRYPTO_PCRYPT=m
CONFIG_CRYPTO_CRYPTD=m
CONFIG_CRYPTO_BENCHMARK=m
@ -769,7 +766,6 @@ CONFIG_CRYPTO_HCTR2=m
CONFIG_CRYPTO_LRW=m
CONFIG_CRYPTO_PCBC=m
CONFIG_CRYPTO_AEGIS128=m
CONFIG_CRYPTO_CHACHA20POLY1305=m
CONFIG_CRYPTO_GCM=y
CONFIG_CRYPTO_SEQIV=y
CONFIG_CRYPTO_MD4=m


@ -1,5 +1,6 @@
CONFIG_NO_HZ_IDLE=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_POSIX_AUX_CLOCKS=y
CONFIG_BPF_SYSCALL=y
# CONFIG_CPU_ISOLATION is not set
# CONFIG_UTS_NS is not set
@ -11,7 +12,7 @@ CONFIG_CC_OPTIMIZE_FOR_SIZE=y
CONFIG_KEXEC=y
CONFIG_MARCH_Z13=y
CONFIG_NR_CPUS=2
CONFIG_HZ_100=y
CONFIG_HZ_1000=y
# CONFIG_CHSC_SCH is not set
# CONFIG_SCM_BUS is not set
# CONFIG_AP is not set


@ -6,6 +6,7 @@
* Author(s): Michael Holzheu <holzheu@linux.vnet.ibm.com>
*/
#include <linux/security.h>
#include <linux/slab.h>
#include "hypfs.h"
@ -66,23 +67,27 @@ static long dbfs_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
long rc;
mutex_lock(&df->lock);
if (df->unlocked_ioctl)
rc = df->unlocked_ioctl(file, cmd, arg);
else
rc = -ENOTTY;
rc = df->unlocked_ioctl(file, cmd, arg);
mutex_unlock(&df->lock);
return rc;
}
static const struct file_operations dbfs_ops = {
static const struct file_operations dbfs_ops_ioctl = {
.read = dbfs_read,
.unlocked_ioctl = dbfs_ioctl,
};
static const struct file_operations dbfs_ops = {
.read = dbfs_read,
};
void hypfs_dbfs_create_file(struct hypfs_dbfs_file *df)
{
df->dentry = debugfs_create_file(df->name, 0400, dbfs_dir, df,
&dbfs_ops);
const struct file_operations *fops = &dbfs_ops;
if (df->unlocked_ioctl && !security_locked_down(LOCKDOWN_DEBUGFS))
fops = &dbfs_ops_ioctl;
df->dentry = debugfs_create_file(df->name, 0400, dbfs_dir, df, fops);
mutex_init(&df->lock);
}


@ -94,12 +94,13 @@ DECLARE_STATIC_CALL(xen_hypercall, xen_hypercall_func);
#ifdef MODULE
#define __ADDRESSABLE_xen_hypercall
#else
#define __ADDRESSABLE_xen_hypercall __ADDRESSABLE_ASM_STR(__SCK__xen_hypercall)
#define __ADDRESSABLE_xen_hypercall \
__stringify(.global STATIC_CALL_KEY(xen_hypercall);)
#endif
#define __HYPERCALL \
__ADDRESSABLE_xen_hypercall \
"call __SCT__xen_hypercall"
__stringify(call STATIC_CALL_TRAMP(xen_hypercall))
#define __HYPERCALL_ENTRY(x) "a" (x)
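
For background on the symbols this hunk juggles: STATIC_CALL_KEY(name) expands to __SCK__##name and STATIC_CALL_TRAMP(name) to __SCT__##name (per include/linux/static_call.h), which is why the old macro referenced __SCK__xen_hypercall and called __SCT__xen_hypercall directly. A minimal sketch of the generic API, independent of the Xen specifics:

#include <linux/static_call.h>

static int default_op(int x)
{
	return x;
}

/* Emits the key (__SCK__my_op) and trampoline (__SCT__my_op) symbols. */
DEFINE_STATIC_CALL(my_op, default_op);

static int fast_op(int x)
{
	return x + 1;
}

static int use_op(int x)
{
	/* Compiles to a direct call through the __SCT__my_op trampoline. */
	return static_call(my_op)(x);
}

static void upgrade_op(void)
{
	/* Re-patches the trampoline (and inline call sites where supported). */
	static_call_update(my_op, fast_op);
}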


@ -1326,8 +1326,8 @@ static const char * const s5_reset_reason_txt[] = {
static __init int print_s5_reset_status_mmio(void)
{
unsigned long value;
void __iomem *addr;
u32 value;
int i;
if (!cpu_feature_enabled(X86_FEATURE_ZEN))
@ -1340,12 +1340,16 @@ static __init int print_s5_reset_status_mmio(void)
value = ioread32(addr);
iounmap(addr);
/* Value with "all bits set" is an error response and should be ignored. */
if (value == U32_MAX)
return 0;
for (i = 0; i < ARRAY_SIZE(s5_reset_reason_txt); i++) {
if (!(value & BIT(i)))
continue;
if (s5_reset_reason_txt[i]) {
pr_info("x86/amd: Previous system reset reason [0x%08lx]: %s\n",
pr_info("x86/amd: Previous system reset reason [0x%08x]: %s\n",
value, s5_reset_reason_txt[i]);
}
}


@ -1068,10 +1068,8 @@ static void __init gds_select_mitigation(void)
if (gds_mitigation == GDS_MITIGATION_AUTO) {
if (should_mitigate_vuln(X86_BUG_GDS))
gds_mitigation = GDS_MITIGATION_FULL;
else {
else
gds_mitigation = GDS_MITIGATION_OFF;
return;
}
}
/* No microcode */


@ -16,6 +16,7 @@
#include <asm/spec-ctrl.h>
#include <asm/delay.h>
#include <asm/msr.h>
#include <asm/resctrl.h>
#include "cpu.h"
@ -117,6 +118,8 @@ static void bsp_init_hygon(struct cpuinfo_x86 *c)
x86_amd_ls_cfg_ssbd_mask = 1ULL << 10;
}
}
resctrl_cpu_detect(c);
}
static void early_init_hygon(struct cpuinfo_x86 *c)


@ -557,7 +557,7 @@ static inline int bio_check_eod(struct bio *bio)
sector_t maxsector = bdev_nr_sectors(bio->bi_bdev);
unsigned int nr_sectors = bio_sectors(bio);
if (nr_sectors &&
if (nr_sectors && maxsector &&
(nr_sectors > maxsector ||
bio->bi_iter.bi_sector > maxsector - nr_sectors)) {
pr_info_ratelimited("%s: attempt to access beyond end of device\n"


@ -95,6 +95,7 @@ static const char *const blk_queue_flag_name[] = {
QUEUE_FLAG_NAME(SQ_SCHED),
QUEUE_FLAG_NAME(DISABLE_WBT_DEF),
QUEUE_FLAG_NAME(NO_ELV_SWITCH),
QUEUE_FLAG_NAME(QOS_ENABLED),
};
#undef QUEUE_FLAG_NAME


@ -5033,6 +5033,7 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
unsigned int memflags;
int i;
struct xarray elv_tbl, et_tbl;
bool queues_frozen = false;
lockdep_assert_held(&set->tag_list_lock);
@ -5056,9 +5057,6 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
blk_mq_sysfs_unregister_hctxs(q);
}
list_for_each_entry(q, &set->tag_list, tag_set_list)
blk_mq_freeze_queue_nomemsave(q);
/*
* Switch IO scheduler to 'none', cleaning up the data associated
* with the previous scheduler. We will switch back once we are done
@ -5068,6 +5066,9 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
if (blk_mq_elv_switch_none(q, &elv_tbl))
goto switch_back;
list_for_each_entry(q, &set->tag_list, tag_set_list)
blk_mq_freeze_queue_nomemsave(q);
queues_frozen = true;
if (blk_mq_realloc_tag_set_tags(set, nr_hw_queues) < 0)
goto switch_back;
@ -5091,8 +5092,12 @@ fallback:
}
switch_back:
/* The blk_mq_elv_switch_back unfreezes queue for us. */
list_for_each_entry(q, &set->tag_list, tag_set_list)
list_for_each_entry(q, &set->tag_list, tag_set_list) {
/* switch_back expects queue to be frozen */
if (!queues_frozen)
blk_mq_freeze_queue_nomemsave(q);
blk_mq_elv_switch_back(q, &elv_tbl, &et_tbl);
}
list_for_each_entry(q, &set->tag_list, tag_set_list) {
blk_mq_sysfs_register_hctxs(q);


@ -2,8 +2,6 @@
#include "blk-rq-qos.h"
__read_mostly DEFINE_STATIC_KEY_FALSE(block_rq_qos);
/*
* Increment 'v', if 'v' is below 'below'. Returns true if we succeeded,
* false if 'v' + 1 would be bigger than 'below'.
@ -319,8 +317,8 @@ void rq_qos_exit(struct request_queue *q)
struct rq_qos *rqos = q->rq_qos;
q->rq_qos = rqos->next;
rqos->ops->exit(rqos);
static_branch_dec(&block_rq_qos);
}
blk_queue_flag_clear(QUEUE_FLAG_QOS_ENABLED, q);
mutex_unlock(&q->rq_qos_mutex);
}
@ -346,7 +344,7 @@ int rq_qos_add(struct rq_qos *rqos, struct gendisk *disk, enum rq_qos_id id,
goto ebusy;
rqos->next = q->rq_qos;
q->rq_qos = rqos;
static_branch_inc(&block_rq_qos);
blk_queue_flag_set(QUEUE_FLAG_QOS_ENABLED, q);
blk_mq_unfreeze_queue(q, memflags);
@ -377,6 +375,8 @@ void rq_qos_del(struct rq_qos *rqos)
break;
}
}
if (!q->rq_qos)
blk_queue_flag_clear(QUEUE_FLAG_QOS_ENABLED, q);
blk_mq_unfreeze_queue(q, memflags);
mutex_lock(&q->debugfs_mutex);


@ -12,7 +12,6 @@
#include "blk-mq-debugfs.h"
struct blk_mq_debugfs_attr;
extern struct static_key_false block_rq_qos;
enum rq_qos_id {
RQ_QOS_WBT,
@ -113,43 +112,55 @@ void __rq_qos_queue_depth_changed(struct rq_qos *rqos);
static inline void rq_qos_cleanup(struct request_queue *q, struct bio *bio)
{
if (static_branch_unlikely(&block_rq_qos) && q->rq_qos)
if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) &&
q->rq_qos)
__rq_qos_cleanup(q->rq_qos, bio);
}
static inline void rq_qos_done(struct request_queue *q, struct request *rq)
{
if (static_branch_unlikely(&block_rq_qos) && q->rq_qos &&
!blk_rq_is_passthrough(rq))
if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) &&
q->rq_qos && !blk_rq_is_passthrough(rq))
__rq_qos_done(q->rq_qos, rq);
}
static inline void rq_qos_issue(struct request_queue *q, struct request *rq)
{
if (static_branch_unlikely(&block_rq_qos) && q->rq_qos)
if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) &&
q->rq_qos)
__rq_qos_issue(q->rq_qos, rq);
}
static inline void rq_qos_requeue(struct request_queue *q, struct request *rq)
{
if (static_branch_unlikely(&block_rq_qos) && q->rq_qos)
if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) &&
q->rq_qos)
__rq_qos_requeue(q->rq_qos, rq);
}
static inline void rq_qos_done_bio(struct bio *bio)
{
if (static_branch_unlikely(&block_rq_qos) &&
bio->bi_bdev && (bio_flagged(bio, BIO_QOS_THROTTLED) ||
bio_flagged(bio, BIO_QOS_MERGED))) {
struct request_queue *q = bdev_get_queue(bio->bi_bdev);
if (q->rq_qos)
__rq_qos_done_bio(q->rq_qos, bio);
}
struct request_queue *q;
if (!bio->bi_bdev || (!bio_flagged(bio, BIO_QOS_THROTTLED) &&
!bio_flagged(bio, BIO_QOS_MERGED)))
return;
q = bdev_get_queue(bio->bi_bdev);
/*
* If a bio has BIO_QOS_xxx set, it implicitly implies that
* q->rq_qos is present. So, we skip re-checking q->rq_qos
* here as an extra optimization and directly call
* __rq_qos_done_bio().
*/
__rq_qos_done_bio(q->rq_qos, bio);
}
static inline void rq_qos_throttle(struct request_queue *q, struct bio *bio)
{
if (static_branch_unlikely(&block_rq_qos) && q->rq_qos) {
if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) &&
q->rq_qos) {
bio_set_flag(bio, BIO_QOS_THROTTLED);
__rq_qos_throttle(q->rq_qos, bio);
}
@ -158,14 +169,16 @@ static inline void rq_qos_throttle(struct request_queue *q, struct bio *bio)
static inline void rq_qos_track(struct request_queue *q, struct request *rq,
struct bio *bio)
{
if (static_branch_unlikely(&block_rq_qos) && q->rq_qos)
if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) &&
q->rq_qos)
__rq_qos_track(q->rq_qos, rq, bio);
}
static inline void rq_qos_merge(struct request_queue *q, struct request *rq,
struct bio *bio)
{
if (static_branch_unlikely(&block_rq_qos) && q->rq_qos) {
if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) &&
q->rq_qos) {
bio_set_flag(bio, BIO_QOS_MERGED);
__rq_qos_merge(q->rq_qos, rq, bio);
}
@ -173,7 +186,8 @@ static inline void rq_qos_merge(struct request_queue *q, struct request *rq,
static inline void rq_qos_queue_depth_changed(struct request_queue *q)
{
if (static_branch_unlikely(&block_rq_qos) && q->rq_qos)
if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) &&
q->rq_qos)
__rq_qos_queue_depth_changed(q->rq_qos);
}


@ -157,16 +157,14 @@ static int blk_validate_integrity_limits(struct queue_limits *lim)
switch (bi->csum_type) {
case BLK_INTEGRITY_CSUM_NONE:
if (bi->pi_tuple_size) {
pr_warn("pi_tuple_size must be 0 when checksum type \
is none\n");
pr_warn("pi_tuple_size must be 0 when checksum type is none\n");
return -EINVAL;
}
break;
case BLK_INTEGRITY_CSUM_CRC:
case BLK_INTEGRITY_CSUM_IP:
if (bi->pi_tuple_size != sizeof(struct t10_pi_tuple)) {
pr_warn("pi_tuple_size mismatch for T10 PI: expected \
%zu, got %u\n",
pr_warn("pi_tuple_size mismatch for T10 PI: expected %zu, got %u\n",
sizeof(struct t10_pi_tuple),
bi->pi_tuple_size);
return -EINVAL;
@ -174,8 +172,7 @@ static int blk_validate_integrity_limits(struct queue_limits *lim)
break;
case BLK_INTEGRITY_CSUM_CRC64:
if (bi->pi_tuple_size != sizeof(struct crc64_pi_tuple)) {
pr_warn("pi_tuple_size mismatch for CRC64 PI: \
expected %zu, got %u\n",
pr_warn("pi_tuple_size mismatch for CRC64 PI: expected %zu, got %u\n",
sizeof(struct crc64_pi_tuple),
bi->pi_tuple_size);
return -EINVAL;
@ -972,6 +969,8 @@ bool queue_limits_stack_integrity(struct queue_limits *t,
goto incompatible;
if (ti->csum_type != bi->csum_type)
goto incompatible;
if (ti->pi_tuple_size != bi->pi_tuple_size)
goto incompatible;
if ((ti->flags & BLK_INTEGRITY_REF_TAG) !=
(bi->flags & BLK_INTEGRITY_REF_TAG))
goto incompatible;
@ -980,6 +979,7 @@ bool queue_limits_stack_integrity(struct queue_limits *t,
ti->flags |= (bi->flags & BLK_INTEGRITY_DEVICE_CAPABLE) |
(bi->flags & BLK_INTEGRITY_REF_TAG);
ti->csum_type = bi->csum_type;
ti->pi_tuple_size = bi->pi_tuple_size;
ti->metadata_size = bi->metadata_size;
ti->pi_offset = bi->pi_offset;
ti->interval_exp = bi->interval_exp;


@ -10437,7 +10437,7 @@ end:
(u64 *)(lin_dma_pkts_arr), DEBUGFS_WRITE64);
WREG32(sob_addr, 0);
kfree(lin_dma_pkts_arr);
kvfree(lin_dma_pkts_arr);
return rc;
}


@ -315,7 +315,7 @@ static void __iomem *einj_get_parameter_address(void)
memcpy_fromio(&v5param, p, v5param_size);
acpi5 = 1;
check_vendor_extension(pa_v5, &v5param);
if (available_error_type & ACPI65_EINJV2_SUPP) {
if (is_v2 && available_error_type & ACPI65_EINJV2_SUPP) {
len = v5param.einjv2_struct.length;
offset = offsetof(struct einjv2_extension_struct, component_arr);
max_nr_components = (len - offset) /
@ -540,6 +540,9 @@ static int __einj_error_inject(u32 type, u32 flags, u64 param1, u64 param2,
struct set_error_type_with_address *v5param;
v5param = kmalloc(v5param_size, GFP_KERNEL);
if (!v5param)
return -ENOMEM;
memcpy_fromio(v5param, einj_param, v5param_size);
v5param->type = type;
if (type & ACPI5_VENDOR_BIT) {
@ -1091,7 +1094,7 @@ err_put_table:
return rc;
}
static void __exit einj_remove(struct faux_device *fdev)
static void einj_remove(struct faux_device *fdev)
{
struct apei_exec_context ctx;
@ -1114,15 +1117,9 @@ static void __exit einj_remove(struct faux_device *fdev)
}
static struct faux_device *einj_dev;
/*
* einj_remove() lives in .exit.text. For drivers registered via
* platform_driver_probe() this is ok because they cannot get unbound at
* runtime. So mark the driver struct with __refdata to prevent modpost
* triggering a section mismatch warning.
*/
static struct faux_device_ops einj_device_ops __refdata = {
static struct faux_device_ops einj_device_ops = {
.probe = einj_probe,
.remove = __exit_p(einj_remove),
.remove = einj_remove,
};
static int __init einj_init(void)


@ -329,7 +329,7 @@ static bool applicable_image(const void *data, struct pfru_update_cap_info *cap,
if (type == PFRU_CODE_INJECT_TYPE)
return payload_hdr->rt_ver >= cap->code_rt_version;
return payload_hdr->rt_ver >= cap->drv_rt_version;
return payload_hdr->svn_ver >= cap->drv_svn;
}
static void print_update_debug_info(struct pfru_updated_result *result,


@ -279,6 +279,19 @@ static struct atm_vcc *find_vcc(struct atm_dev *dev, short vpi, int vci)
return NULL;
}
static int atmtcp_c_pre_send(struct atm_vcc *vcc, struct sk_buff *skb)
{
struct atmtcp_hdr *hdr;
if (skb->len < sizeof(struct atmtcp_hdr))
return -EINVAL;
hdr = (struct atmtcp_hdr *)skb->data;
if (hdr->length == ATMTCP_HDR_MAGIC)
return -EINVAL;
return 0;
}
static int atmtcp_c_send(struct atm_vcc *vcc,struct sk_buff *skb)
{
@ -288,9 +301,6 @@ static int atmtcp_c_send(struct atm_vcc *vcc,struct sk_buff *skb)
struct sk_buff *new_skb;
int result = 0;
if (skb->len < sizeof(struct atmtcp_hdr))
goto done;
dev = vcc->dev_data;
hdr = (struct atmtcp_hdr *) skb->data;
if (hdr->length == ATMTCP_HDR_MAGIC) {
@ -347,6 +357,7 @@ static const struct atmdev_ops atmtcp_v_dev_ops = {
static const struct atmdev_ops atmtcp_c_dev_ops = {
.close = atmtcp_c_close,
.pre_send = atmtcp_c_pre_send,
.send = atmtcp_c_send
};


@ -675,7 +675,7 @@ static void dpm_async_resume_subordinate(struct device *dev, async_func_t func)
idx = device_links_read_lock();
/* Start processing the device's "async" consumers. */
list_for_each_entry_rcu(link, &dev->links.consumers, s_node)
list_for_each_entry_rcu_locked(link, &dev->links.consumers, s_node)
if (READ_ONCE(link->status) != DL_STATE_DORMANT)
dpm_async_with_cleanup(link->consumer, func);
@ -1330,7 +1330,7 @@ static void dpm_async_suspend_superior(struct device *dev, async_func_t func)
idx = device_links_read_lock();
/* Start processing the device's "async" suppliers. */
list_for_each_entry_rcu(link, &dev->links.suppliers, c_node)
list_for_each_entry_rcu_locked(link, &dev->links.suppliers, c_node)
if (READ_ONCE(link->status) != DL_STATE_DORMANT)
dpm_async_with_cleanup(link->supplier, func);


@ -137,20 +137,29 @@ static void loop_global_unlock(struct loop_device *lo, bool global)
static int max_part;
static int part_shift;
static loff_t get_size(loff_t offset, loff_t sizelimit, struct file *file)
static loff_t lo_calculate_size(struct loop_device *lo, struct file *file)
{
struct kstat stat;
loff_t loopsize;
int ret;
/* Compute loopsize in bytes */
loopsize = i_size_read(file->f_mapping->host);
if (offset > 0)
loopsize -= offset;
/*
* Get the accurate file size. This provides better results than
* cached inode data, particularly for network filesystems where
* metadata may be stale.
*/
ret = vfs_getattr_nosec(&file->f_path, &stat, STATX_SIZE, 0);
if (ret)
return 0;
loopsize = stat.size;
if (lo->lo_offset > 0)
loopsize -= lo->lo_offset;
/* offset is beyond i_size, weird but possible */
if (loopsize < 0)
return 0;
if (sizelimit > 0 && sizelimit < loopsize)
loopsize = sizelimit;
if (lo->lo_sizelimit > 0 && lo->lo_sizelimit < loopsize)
loopsize = lo->lo_sizelimit;
/*
* Unfortunately, if we want to do I/O on the device,
* the number of 512-byte sectors has to fit into a sector_t.
@ -158,11 +167,6 @@ static loff_t get_size(loff_t offset, loff_t sizelimit, struct file *file)
return loopsize >> 9;
}
static loff_t get_loop_size(struct loop_device *lo, struct file *file)
{
return get_size(lo->lo_offset, lo->lo_sizelimit, file);
}
/*
* We support direct I/O only if lo_offset is aligned with the logical I/O size
* of backing device, and the logical block size of loop is bigger than that of
@ -569,7 +573,7 @@ static int loop_change_fd(struct loop_device *lo, struct block_device *bdev,
error = -EINVAL;
/* size of the new backing store needs to be the same */
if (get_loop_size(lo, file) != get_loop_size(lo, old_file))
if (lo_calculate_size(lo, file) != lo_calculate_size(lo, old_file))
goto out_err;
/*
@ -1063,7 +1067,7 @@ static int loop_configure(struct loop_device *lo, blk_mode_t mode,
loop_update_dio(lo);
loop_sysfs_init(lo);
size = get_loop_size(lo, file);
size = lo_calculate_size(lo, file);
loop_set_size(lo, size);
/* Order wrt reading lo_state in loop_validate_file(). */
@ -1255,8 +1259,7 @@ out_unfreeze:
if (partscan)
clear_bit(GD_SUPPRESS_PART_SCAN, &lo->lo_disk->state);
if (!err && size_changed) {
loff_t new_size = get_size(lo->lo_offset, lo->lo_sizelimit,
lo->lo_backing_file);
loff_t new_size = lo_calculate_size(lo, lo->lo_backing_file);
loop_set_size(lo, new_size);
}
out_unlock:
@ -1399,7 +1402,7 @@ static int loop_set_capacity(struct loop_device *lo)
if (unlikely(lo->lo_state != Lo_bound))
return -ENXIO;
size = get_loop_size(lo, lo->lo_backing_file);
size = lo_calculate_size(lo, lo->lo_backing_file);
loop_set_size(lo, size);
return 0;


@ -129,8 +129,7 @@ static int cdx_rpmsg_probe(struct rpmsg_device *rpdev)
chinfo.src = RPMSG_ADDR_ANY;
chinfo.dst = rpdev->dst;
strscpy(chinfo.name, cdx_rpmsg_id_table[0].name,
strlen(cdx_rpmsg_id_table[0].name));
strscpy(chinfo.name, cdx_rpmsg_id_table[0].name, sizeof(chinfo.name));
cdx_mcdi->ept = rpmsg_create_ept(rpdev, cdx_rpmsg_cb, NULL, chinfo);
if (!cdx_mcdi->ept) {


@ -1587,6 +1587,9 @@ static int do_insnlist_ioctl(struct comedi_device *dev,
memset(&data[n], 0, (MIN_SAMPLES - n) *
sizeof(unsigned int));
}
} else {
memset(data, 0, max_t(unsigned int, n, MIN_SAMPLES) *
sizeof(unsigned int));
}
ret = parse_insn(dev, insns + i, data, file);
if (ret < 0)
@ -1670,6 +1673,8 @@ static int do_insn_ioctl(struct comedi_device *dev,
memset(&data[insn->n], 0,
(MIN_SAMPLES - insn->n) * sizeof(unsigned int));
}
} else {
memset(data, 0, n_data * sizeof(unsigned int));
}
ret = parse_insn(dev, insn, data, file);
if (ret < 0)


@ -620,11 +620,9 @@ static int insn_rw_emulate_bits(struct comedi_device *dev,
unsigned int chan = CR_CHAN(insn->chanspec);
unsigned int base_chan = (chan < 32) ? 0 : chan;
unsigned int _data[2];
unsigned int i;
int ret;
if (insn->n == 0)
return 0;
memset(_data, 0, sizeof(_data));
memset(&_insn, 0, sizeof(_insn));
_insn.insn = INSN_BITS;
@ -635,18 +633,21 @@ static int insn_rw_emulate_bits(struct comedi_device *dev,
if (insn->insn == INSN_WRITE) {
if (!(s->subdev_flags & SDF_WRITABLE))
return -EINVAL;
_data[0] = 1U << (chan - base_chan); /* mask */
_data[1] = data[0] ? (1U << (chan - base_chan)) : 0; /* bits */
_data[0] = 1U << (chan - base_chan); /* mask */
}
for (i = 0; i < insn->n; i++) {
if (insn->insn == INSN_WRITE)
_data[1] = data[i] ? _data[0] : 0; /* bits */
ret = s->insn_bits(dev, s, &_insn, _data);
if (ret < 0)
return ret;
if (insn->insn == INSN_READ)
data[i] = (_data[1] >> (chan - base_chan)) & 1;
}
ret = s->insn_bits(dev, s, &_insn, _data);
if (ret < 0)
return ret;
if (insn->insn == INSN_READ)
data[0] = (_data[1] >> (chan - base_chan)) & 1;
return 1;
return insn->n;
}
static int __comedi_device_postconfig_async(struct comedi_device *dev,


@ -328,7 +328,8 @@ static int pcl726_attach(struct comedi_device *dev,
* Hook up the external trigger source interrupt only if the
* user config option is valid and the board supports interrupts.
*/
if (it->options[1] && (board->irq_mask & (1 << it->options[1]))) {
if (it->options[1] > 0 && it->options[1] < 16 &&
(board->irq_mask & (1U << it->options[1]))) {
ret = request_irq(it->options[1], pcl726_interrupt, 0,
dev->board_name, dev);
if (ret == 0) {


@ -287,20 +287,15 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
return 0;
}
if (tick_nohz_tick_stopped()) {
/*
* If the tick is already stopped, the cost of possible short
* idle duration misprediction is much higher, because the CPU
* may be stuck in a shallow idle state for a long time as a
* result of it. In that case say we might mispredict and use
* the known time till the closest timer event for the idle
* state selection.
*/
if (predicted_ns < TICK_NSEC)
predicted_ns = data->next_timer_ns;
} else if (latency_req > predicted_ns) {
latency_req = predicted_ns;
}
/*
* If the tick is already stopped, the cost of possible short idle
* duration misprediction is much higher, because the CPU may be stuck
* in a shallow idle state for a long time as a result of it. In that
* case, say we might mispredict and use the known time till the closest
* timer event for the idle state selection.
*/
if (tick_nohz_tick_stopped() && predicted_ns < TICK_NSEC)
predicted_ns = data->next_timer_ns;
/*
* Find the idle state with the lowest power while satisfying
@ -316,13 +311,15 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
if (idx == -1)
idx = i; /* first enabled state */
if (s->exit_latency_ns > latency_req)
break;
if (s->target_residency_ns > predicted_ns) {
/*
* Use a physical idle state, not busy polling, unless
* a timer is going to trigger soon enough.
*/
if ((drv->states[idx].flags & CPUIDLE_FLAG_POLLING) &&
s->exit_latency_ns <= latency_req &&
s->target_residency_ns <= data->next_timer_ns) {
predicted_ns = s->target_residency_ns;
idx = i;
@ -354,8 +351,6 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
return idx;
}
if (s->exit_latency_ns > latency_req)
break;
idx = i;
}


@ -405,12 +405,12 @@ static int zynq_fpga_ops_write(struct fpga_manager *mgr, struct sg_table *sgt)
}
}
priv->dma_nelms =
dma_map_sgtable(mgr->dev.parent, sgt, DMA_TO_DEVICE, 0);
if (priv->dma_nelms == 0) {
err = dma_map_sgtable(mgr->dev.parent, sgt, DMA_TO_DEVICE, 0);
if (err) {
dev_err(&mgr->dev, "Unable to DMA map (TO_DEVICE)\n");
return -ENOMEM;
return err;
}
priv->dma_nelms = sgt->nents;
/* enable clock */
err = clk_enable(priv->clk);


@ -344,6 +344,20 @@ static const struct dmi_system_id gpiolib_acpi_quirks[] __initconst = {
.ignore_interrupt = "AMDI0030:00@8",
},
},
{
/*
* Spurious wakeups from TP_ATTN# pin
* Found in BIOS 5.35
* https://gitlab.freedesktop.org/drm/amd/-/issues/4482
*/
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
DMI_MATCH(DMI_PRODUCT_FAMILY, "ProArt PX13"),
},
.driver_data = &(struct acpi_gpiolib_dmi_quirk) {
.ignore_wake = "ASCP1A00:00@8",
},
},
{} /* Terminating entry */
};


@ -514,7 +514,7 @@ bool amdgpu_dmabuf_is_xgmi_accessible(struct amdgpu_device *adev,
return false;
if (drm_gem_is_imported(obj)) {
struct dma_buf *dma_buf = obj->dma_buf;
struct dma_buf *dma_buf = obj->import_attach->dmabuf;
if (dma_buf->ops != &amdgpu_dmabuf_ops)
/* No XGMI with non AMD GPUs */


@ -317,7 +317,8 @@ static int amdgpu_gem_object_open(struct drm_gem_object *obj,
*/
if (!vm->is_compute_context || !vm->process_info)
return 0;
if (!drm_gem_is_imported(obj) || !dma_buf_is_dynamic(obj->dma_buf))
if (!drm_gem_is_imported(obj) ||
!dma_buf_is_dynamic(obj->import_attach->dmabuf))
return 0;
mutex_lock_nested(&vm->process_info->lock, 1);
if (!WARN_ON(!vm->process_info->eviction_fence)) {


@ -1283,7 +1283,7 @@ int amdgpu_vm_bo_update(struct amdgpu_device *adev, struct amdgpu_bo_va *bo_va,
struct drm_gem_object *obj = &bo->tbo.base;
if (drm_gem_is_imported(obj) && bo_va->is_xgmi) {
struct dma_buf *dma_buf = obj->dma_buf;
struct dma_buf *dma_buf = obj->import_attach->dmabuf;
struct drm_gem_object *gobj = dma_buf->priv;
struct amdgpu_bo *abo = gem_to_amdgpu_bo(gobj);


@ -7792,6 +7792,9 @@ amdgpu_dm_connector_atomic_check(struct drm_connector *conn,
struct amdgpu_dm_connector *aconn = to_amdgpu_dm_connector(conn);
int ret;
if (WARN_ON(unlikely(!old_con_state || !new_con_state)))
return -EINVAL;
trace_amdgpu_dm_connector_atomic_check(new_con_state);
if (conn->connector_type == DRM_MODE_CONNECTOR_DisplayPort) {


@ -299,6 +299,25 @@ static inline int amdgpu_dm_crtc_set_vblank(struct drm_crtc *crtc, bool enable)
irq_type = amdgpu_display_crtc_idx_to_irq_type(adev, acrtc->crtc_id);
if (enable) {
struct dc *dc = adev->dm.dc;
struct drm_vblank_crtc *vblank = drm_crtc_vblank_crtc(crtc);
struct psr_settings *psr = &acrtc_state->stream->link->psr_settings;
struct replay_settings *pr = &acrtc_state->stream->link->replay_settings;
bool sr_supported = (psr->psr_version != DC_PSR_VERSION_UNSUPPORTED) ||
pr->config.replay_supported;
/*
* IPS & self-refresh feature can cause vblank counter resets between
* vblank disable and enable.
* It may cause system stuck due to waiting for the vblank counter.
* Call this function to estimate missed vblanks by using timestamps and
* update the vblank counter in DRM.
*/
if (dc->caps.ips_support &&
dc->config.disable_ips != DMUB_IPS_DISABLE_ALL &&
sr_supported && vblank->config.disable_immediate)
drm_crtc_vblank_restore(crtc);
/* vblank irq on -> Only need vupdate irq in vrr mode */
if (amdgpu_dm_crtc_vrr_active(acrtc_state))
rc = amdgpu_dm_crtc_set_vupdate_irq(crtc, true);


@ -174,11 +174,8 @@ static struct graphics_object_id bios_parser_get_connector_id(
return object_id;
}
if (tbl->ucNumberOfObjects <= i) {
dm_error("Can't find connector id %d in connector table of size %d.\n",
i, tbl->ucNumberOfObjects);
if (tbl->ucNumberOfObjects <= i)
return object_id;
}
id = le16_to_cpu(tbl->asObjects[i].usObjectID);
object_id = object_id_from_bios_object_id(id);


@ -993,7 +993,7 @@ static enum bp_result set_pixel_clock_v3(
allocation.sPCLKInput.usFbDiv =
cpu_to_le16((uint16_t)bp_params->feedback_divider);
allocation.sPCLKInput.ucFracFbDiv =
(uint8_t)bp_params->fractional_feedback_divider;
(uint8_t)(bp_params->fractional_feedback_divider / 100000);
allocation.sPCLKInput.ucPostDiv =
(uint8_t)bp_params->pixel_clock_post_divider;


@ -72,9 +72,9 @@ static const struct state_dependent_clocks dce80_max_clks_by_state[] = {
/* ClocksStateLow */
{ .display_clk_khz = 352000, .pixel_clk_khz = 330000},
/* ClocksStateNominal */
{ .display_clk_khz = 600000, .pixel_clk_khz = 400000 },
{ .display_clk_khz = 625000, .pixel_clk_khz = 400000 },
/* ClocksStatePerformance */
{ .display_clk_khz = 600000, .pixel_clk_khz = 400000 } };
{ .display_clk_khz = 625000, .pixel_clk_khz = 400000 } };
int dentist_get_divider_from_did(int did)
{
@ -391,8 +391,6 @@ static void dce_pplib_apply_display_requirements(
{
struct dm_pp_display_configuration *pp_display_cfg = &context->pp_display_cfg;
pp_display_cfg->avail_mclk_switch_time_us = dce110_get_min_vblank_time_us(context);
dce110_fill_display_configs(context, pp_display_cfg);
if (memcmp(&dc->current_state->pp_display_cfg, pp_display_cfg, sizeof(*pp_display_cfg)) != 0)
@ -405,11 +403,9 @@ static void dce_update_clocks(struct clk_mgr *clk_mgr_base,
{
struct clk_mgr_internal *clk_mgr_dce = TO_CLK_MGR_INTERNAL(clk_mgr_base);
struct dm_pp_power_level_change_request level_change_req;
int patched_disp_clk = context->bw_ctx.bw.dce.dispclk_khz;
/*TODO: W/A for dal3 linux, investigate why this works */
if (!clk_mgr_dce->dfs_bypass_active)
patched_disp_clk = patched_disp_clk * 115 / 100;
const int max_disp_clk =
clk_mgr_dce->max_clks_by_state[DM_PP_CLOCKS_STATE_PERFORMANCE].display_clk_khz;
int patched_disp_clk = MIN(max_disp_clk, context->bw_ctx.bw.dce.dispclk_khz);
level_change_req.power_level = dce_get_required_clocks_state(clk_mgr_base, context);
/* get max clock state from PPLIB */


@ -120,9 +120,15 @@ void dce110_fill_display_configs(
const struct dc_state *context,
struct dm_pp_display_configuration *pp_display_cfg)
{
struct dc *dc = context->clk_mgr->ctx->dc;
int j;
int num_cfgs = 0;
pp_display_cfg->avail_mclk_switch_time_us = dce110_get_min_vblank_time_us(context);
pp_display_cfg->disp_clk_khz = dc->clk_mgr->clks.dispclk_khz;
pp_display_cfg->avail_mclk_switch_time_in_disp_active_us = 0;
pp_display_cfg->crtc_index = dc->res_pool->res_cap->num_timing_generator;
for (j = 0; j < context->stream_count; j++) {
int k;
@ -164,6 +170,23 @@ void dce110_fill_display_configs(
cfg->v_refresh /= stream->timing.h_total;
cfg->v_refresh = (cfg->v_refresh + stream->timing.v_total / 2)
/ stream->timing.v_total;
/* Find first CRTC index and calculate its line time.
* This is necessary for DPM on SI GPUs.
*/
if (cfg->pipe_idx < pp_display_cfg->crtc_index) {
const struct dc_crtc_timing *timing =
&context->streams[0]->timing;
pp_display_cfg->crtc_index = cfg->pipe_idx;
pp_display_cfg->line_time_in_us =
timing->h_total * 10000 / timing->pix_clk_100hz;
}
}
if (!num_cfgs) {
pp_display_cfg->crtc_index = 0;
pp_display_cfg->line_time_in_us = 0;
}
pp_display_cfg->display_count = num_cfgs;
@ -223,25 +246,8 @@ void dce11_pplib_apply_display_requirements(
pp_display_cfg->min_engine_clock_deep_sleep_khz
= context->bw_ctx.bw.dce.sclk_deep_sleep_khz;
pp_display_cfg->avail_mclk_switch_time_us =
dce110_get_min_vblank_time_us(context);
/* TODO: dce11.2*/
pp_display_cfg->avail_mclk_switch_time_in_disp_active_us = 0;
pp_display_cfg->disp_clk_khz = dc->clk_mgr->clks.dispclk_khz;
dce110_fill_display_configs(context, pp_display_cfg);
/* TODO: is this still applicable?*/
if (pp_display_cfg->display_count == 1) {
const struct dc_crtc_timing *timing =
&context->streams[0]->timing;
pp_display_cfg->crtc_index =
pp_display_cfg->disp_configs[0].pipe_idx;
pp_display_cfg->line_time_in_us = timing->h_total * 10000 / timing->pix_clk_100hz;
}
if (memcmp(&dc->current_state->pp_display_cfg, pp_display_cfg, sizeof(*pp_display_cfg)) != 0)
dm_pp_apply_display_requirements(dc->ctx, pp_display_cfg);
}


@ -83,22 +83,13 @@ static const struct state_dependent_clocks dce60_max_clks_by_state[] = {
static int dce60_get_dp_ref_freq_khz(struct clk_mgr *clk_mgr_base)
{
struct clk_mgr_internal *clk_mgr = TO_CLK_MGR_INTERNAL(clk_mgr_base);
int dprefclk_wdivider;
int dp_ref_clk_khz;
int target_div;
struct dc_context *ctx = clk_mgr_base->ctx;
int dp_ref_clk_khz = 0;
/* DCE6 has no DPREFCLK_CNTL to read DP Reference Clock source */
/* Read the mmDENTIST_DISPCLK_CNTL to get the currently
* programmed DID DENTIST_DPREFCLK_WDIVIDER*/
REG_GET(DENTIST_DISPCLK_CNTL, DENTIST_DPREFCLK_WDIVIDER, &dprefclk_wdivider);
/* Convert DENTIST_DPREFCLK_WDIVIDERto actual divider*/
target_div = dentist_get_divider_from_did(dprefclk_wdivider);
/* Calculate the current DFS clock, in kHz.*/
dp_ref_clk_khz = (DENTIST_DIVIDER_RANGE_SCALE_FACTOR
* clk_mgr->base.dentist_vco_freq_khz) / target_div;
if (ASIC_REV_IS_TAHITI_P(ctx->asic_id.hw_internal_rev))
dp_ref_clk_khz = ctx->dc_bios->fw_info.default_display_engine_pll_frequency;
else
dp_ref_clk_khz = clk_mgr_base->clks.dispclk_khz;
return dce_adjust_dp_ref_freq_for_ss(clk_mgr, dp_ref_clk_khz);
}
@ -109,8 +100,6 @@ static void dce60_pplib_apply_display_requirements(
{
struct dm_pp_display_configuration *pp_display_cfg = &context->pp_display_cfg;
pp_display_cfg->avail_mclk_switch_time_us = dce110_get_min_vblank_time_us(context);
dce110_fill_display_configs(context, pp_display_cfg);
if (memcmp(&dc->current_state->pp_display_cfg, pp_display_cfg, sizeof(*pp_display_cfg)) != 0)
@ -123,11 +112,9 @@ static void dce60_update_clocks(struct clk_mgr *clk_mgr_base,
{
struct clk_mgr_internal *clk_mgr_dce = TO_CLK_MGR_INTERNAL(clk_mgr_base);
struct dm_pp_power_level_change_request level_change_req;
int patched_disp_clk = context->bw_ctx.bw.dce.dispclk_khz;
/*TODO: W/A for dal3 linux, investigate why this works */
if (!clk_mgr_dce->dfs_bypass_active)
patched_disp_clk = patched_disp_clk * 115 / 100;
const int max_disp_clk =
clk_mgr_dce->max_clks_by_state[DM_PP_CLOCKS_STATE_PERFORMANCE].display_clk_khz;
int patched_disp_clk = MIN(max_disp_clk, context->bw_ctx.bw.dce.dispclk_khz);
level_change_req.power_level = dce_get_required_clocks_state(clk_mgr_base, context);
/* get max clock state from PPLIB */


@ -217,11 +217,24 @@ static bool create_links(
connectors_num,
num_virtual_links);
// condition loop on link_count to allow skipping invalid indices
/* When getting the number of connectors, the VBIOS reports the number of valid indices,
* but it doesn't say which indices are valid, and not every index has an actual connector.
* So, if we don't find a connector on an index, that is not an error.
*
* - There is no guarantee that the first N indices will be valid
* - VBIOS may report a higher number of valid indices than there are actual connectors
* - Some VBIOS have valid configurations for more connectors than there actually are
* on the card. This may be because the manufacturer used the same VBIOS for different
* variants of the same card.
*/
for (i = 0; dc->link_count < connectors_num && i < MAX_LINKS; i++) {
struct graphics_object_id connector_id = bios->funcs->get_connector_id(bios, i);
struct link_init_data link_init_params = {0};
struct dc_link *link;
if (connector_id.id == CONNECTOR_ID_UNKNOWN)
continue;
DC_LOG_DC("BIOS object table - printing link object info for connector number: %d, link_index: %d", i, dc->link_count);
link_init_params.ctx = dc->ctx;


@ -896,13 +896,13 @@ void dce110_link_encoder_construct(
enc110->base.id, &bp_cap_info);
/* Override features with DCE-specific values */
if (BP_RESULT_OK == result) {
if (result == BP_RESULT_OK) {
enc110->base.features.flags.bits.IS_HBR2_CAPABLE =
bp_cap_info.DP_HBR2_EN;
enc110->base.features.flags.bits.IS_HBR3_CAPABLE =
bp_cap_info.DP_HBR3_EN;
enc110->base.features.flags.bits.HDMI_6GB_EN = bp_cap_info.HDMI_6GB_EN;
} else {
} else if (result != BP_RESULT_NORECORD) {
DC_LOG_WARNING("%s: Failed to get encoder_cap_info from VBIOS with error code %d!\n",
__func__,
result);
@ -1798,13 +1798,13 @@ void dce60_link_encoder_construct(
enc110->base.id, &bp_cap_info);
/* Override features with DCE-specific values */
if (BP_RESULT_OK == result) {
if (result == BP_RESULT_OK) {
enc110->base.features.flags.bits.IS_HBR2_CAPABLE =
bp_cap_info.DP_HBR2_EN;
enc110->base.features.flags.bits.IS_HBR3_CAPABLE =
bp_cap_info.DP_HBR3_EN;
enc110->base.features.flags.bits.HDMI_6GB_EN = bp_cap_info.HDMI_6GB_EN;
} else {
} else if (result != BP_RESULT_NORECORD) {
DC_LOG_WARNING("%s: Failed to get encoder_cap_info from VBIOS with error code %d!\n",
__func__,
result);


@ -4,7 +4,6 @@
#include "dc.h"
#include "dc_dmub_srv.h"
#include "dc_dp_types.h"
#include "dmub/dmub_srv.h"
#include "core_types.h"
#include "dmub_replay.h"
@ -44,45 +43,21 @@ static void dmub_replay_get_state(struct dmub_replay *dmub, enum replay_state *s
/*
* Enable/Disable Replay.
*/
static void dmub_replay_enable(struct dmub_replay *dmub, bool enable, bool wait, uint8_t panel_inst,
struct dc_link *link)
static void dmub_replay_enable(struct dmub_replay *dmub, bool enable, bool wait, uint8_t panel_inst)
{
union dmub_rb_cmd cmd;
struct dc_context *dc = dmub->ctx;
uint32_t retry_count;
enum replay_state state = REPLAY_STATE_0;
struct pipe_ctx *pipe_ctx = NULL;
struct resource_context *res_ctx = &link->ctx->dc->current_state->res_ctx;
uint8_t i;
memset(&cmd, 0, sizeof(cmd));
cmd.replay_enable.header.type = DMUB_CMD__REPLAY;
cmd.replay_enable.data.panel_inst = panel_inst;
cmd.replay_enable.header.sub_type = DMUB_CMD__REPLAY_ENABLE;
if (enable) {
if (enable)
cmd.replay_enable.data.enable = REPLAY_ENABLE;
// hpo stream/link encoder assignments are not static, need to update everytime we try to enable replay
if (link->cur_link_settings.link_rate >= LINK_RATE_UHBR10) {
for (i = 0; i < MAX_PIPES; i++) {
if (res_ctx &&
res_ctx->pipe_ctx[i].stream &&
res_ctx->pipe_ctx[i].stream->link &&
res_ctx->pipe_ctx[i].stream->link == link &&
res_ctx->pipe_ctx[i].stream->link->connector_signal == SIGNAL_TYPE_EDP) {
pipe_ctx = &res_ctx->pipe_ctx[i];
//TODO: refactor for multi edp support
break;
}
}
if (!pipe_ctx)
return;
cmd.replay_enable.data.hpo_stream_enc_inst = pipe_ctx->stream_res.hpo_dp_stream_enc->inst;
cmd.replay_enable.data.hpo_link_enc_inst = pipe_ctx->link_res.hpo_dp_link_enc->inst;
}
} else
else
cmd.replay_enable.data.enable = REPLAY_DISABLE;
cmd.replay_enable.header.payload_bytes = sizeof(struct dmub_rb_cmd_replay_enable_data);
@ -174,17 +149,6 @@ static bool dmub_replay_copy_settings(struct dmub_replay *dmub,
copy_settings_data->digbe_inst = replay_context->digbe_inst;
copy_settings_data->digfe_inst = replay_context->digfe_inst;
if (link->cur_link_settings.link_rate >= LINK_RATE_UHBR10) {
if (pipe_ctx->stream_res.hpo_dp_stream_enc)
copy_settings_data->hpo_stream_enc_inst = pipe_ctx->stream_res.hpo_dp_stream_enc->inst;
else
copy_settings_data->hpo_stream_enc_inst = 0;
if (pipe_ctx->link_res.hpo_dp_link_enc)
copy_settings_data->hpo_link_enc_inst = pipe_ctx->link_res.hpo_dp_link_enc->inst;
else
copy_settings_data->hpo_link_enc_inst = 0;
}
if (pipe_ctx->plane_res.dpp)
copy_settings_data->dpp_inst = pipe_ctx->plane_res.dpp->inst;
else
@ -247,7 +211,6 @@ static void dmub_replay_set_coasting_vtotal(struct dmub_replay *dmub,
pCmd->header.type = DMUB_CMD__REPLAY;
pCmd->header.sub_type = DMUB_CMD__REPLAY_SET_COASTING_VTOTAL;
pCmd->header.payload_bytes = sizeof(struct dmub_cmd_replay_set_coasting_vtotal_data);
pCmd->replay_set_coasting_vtotal_data.panel_inst = panel_inst;
pCmd->replay_set_coasting_vtotal_data.coasting_vtotal = (coasting_vtotal & 0xFFFF);
pCmd->replay_set_coasting_vtotal_data.coasting_vtotal_high = (coasting_vtotal & 0xFFFF0000) >> 16;


@ -19,7 +19,7 @@ struct dmub_replay_funcs {
void (*replay_get_state)(struct dmub_replay *dmub, enum replay_state *state,
uint8_t panel_inst);
void (*replay_enable)(struct dmub_replay *dmub, bool enable, bool wait,
uint8_t panel_inst, struct dc_link *link);
uint8_t panel_inst);
bool (*replay_copy_settings)(struct dmub_replay *dmub, struct dc_link *link,
struct replay_context *replay_context, uint8_t panel_inst);
void (*replay_set_power_opt)(struct dmub_replay *dmub, unsigned int power_opt,


@ -944,7 +944,7 @@ bool edp_set_replay_allow_active(struct dc_link *link, const bool *allow_active,
// TODO: Handle mux change case if force_static is set
// If force_static is set, just change the replay_allow_active state directly
if (replay != NULL && link->replay_settings.replay_feature_enabled)
replay->funcs->replay_enable(replay, *allow_active, wait, panel_inst, link);
replay->funcs->replay_enable(replay, *allow_active, wait, panel_inst);
link->replay_settings.replay_allow_active = *allow_active;
}


@ -4047,14 +4047,6 @@ struct dmub_cmd_replay_copy_settings_data {
* DIG BE HW instance.
*/
uint8_t digbe_inst;
/**
* @hpo_stream_enc_inst: HPO stream encoder instance
*/
uint8_t hpo_stream_enc_inst;
/**
* @hpo_link_enc_inst: HPO link encoder instance
*/
uint8_t hpo_link_enc_inst;
/**
* AUX HW instance.
*/
@ -4159,18 +4151,6 @@ struct dmub_rb_cmd_replay_enable_data {
* This does not support HDMI/DP2 for now.
*/
uint8_t phy_rate;
/**
* @hpo_stream_enc_inst: HPO stream encoder instance
*/
uint8_t hpo_stream_enc_inst;
/**
* @hpo_link_enc_inst: HPO link encoder instance
*/
uint8_t hpo_link_enc_inst;
/**
* @pad: Align structure to 4 byte boundary.
*/
uint8_t pad[2];
};
/**


@ -260,6 +260,9 @@ enum mod_hdcp_status mod_hdcp_hdcp1_create_session(struct mod_hdcp *hdcp)
return MOD_HDCP_STATUS_FAILURE;
}
if (!display)
return MOD_HDCP_STATUS_DISPLAY_NOT_FOUND;
hdcp_cmd = (struct ta_hdcp_shared_memory *)psp->hdcp_context.context.mem_context.shared_buf;
mutex_lock(&psp->hdcp_context.mutex);


@ -1697,9 +1697,11 @@ static int smu_v14_0_2_get_power_limit(struct smu_context *smu,
uint32_t *min_power_limit)
{
struct smu_table_context *table_context = &smu->smu_table;
struct smu_14_0_2_powerplay_table *powerplay_table =
table_context->power_play_table;
PPTable_t *pptable = table_context->driver_pptable;
CustomSkuTable_t *skutable = &pptable->CustomSkuTable;
uint32_t power_limit;
uint32_t power_limit, od_percent_upper = 0, od_percent_lower = 0;
uint32_t msg_limit = pptable->SkuTable.MsgLimits.Power[PPT_THROTTLER_PPT0][POWER_SOURCE_AC];
if (smu_v14_0_get_current_power_limit(smu, &power_limit))
@ -1712,11 +1714,29 @@ static int smu_v14_0_2_get_power_limit(struct smu_context *smu,
if (default_power_limit)
*default_power_limit = power_limit;
if (max_power_limit)
*max_power_limit = msg_limit;
if (powerplay_table) {
if (smu->od_enabled &&
smu_v14_0_2_is_od_feature_supported(smu, PP_OD_FEATURE_PPT_BIT)) {
od_percent_upper = pptable->SkuTable.OverDriveLimitsBasicMax.Ppt;
od_percent_lower = pptable->SkuTable.OverDriveLimitsBasicMin.Ppt;
} else if (smu_v14_0_2_is_od_feature_supported(smu, PP_OD_FEATURE_PPT_BIT)) {
od_percent_upper = 0;
od_percent_lower = pptable->SkuTable.OverDriveLimitsBasicMin.Ppt;
}
}
if (min_power_limit)
*min_power_limit = 0;
dev_dbg(smu->adev->dev, "od percent upper:%d, od percent lower:%d (default power: %d)\n",
od_percent_upper, od_percent_lower, power_limit);
if (max_power_limit) {
*max_power_limit = msg_limit * (100 + od_percent_upper);
*max_power_limit /= 100;
}
if (min_power_limit) {
*min_power_limit = power_limit * (100 + od_percent_lower);
*min_power_limit /= 100;
}
return 0;
}
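The limits above scale the board values by (100 + od_percent) / 100. A worked standalone example with hypothetical pptable numbers, assuming the OverDrive Min percentage is negative, as on earlier SMU generations:

#include <stdio.h>

int main(void)
{
        /* Hypothetical values: 220 W message limit, 200 W default limit,
         * overdrive window of +20% / -30% (Min assumed negative). */
        int msg_limit = 220, power_limit = 200;
        int od_percent_upper = 20, od_percent_lower = -30;

        int max_power_limit = msg_limit * (100 + od_percent_upper) / 100;
        int min_power_limit = power_limit * (100 + od_percent_lower) / 100;

        printf("max %d W, min %d W\n", max_power_limit, min_power_limit);
        /* max 264 W, min 140 W */
        return 0;
}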


@ -1474,8 +1474,8 @@ analogix_dp_probe(struct device *dev, struct analogix_dp_plat_data *plat_data)
dp = devm_drm_bridge_alloc(dev, struct analogix_dp_device, bridge,
&analogix_dp_bridge_funcs);
if (!dp)
return ERR_PTR(-ENOMEM);
if (IS_ERR(dp))
return ERR_CAST(dp);
dp->dev = &pdev->dev;
dp->dpms_mode = DRM_MODE_DPMS_OFF;
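devm_drm_bridge_alloc() reports failure as an ERR_PTR()-encoded pointer, never NULL, so the old NULL test could not catch an allocation error; IS_ERR() plus ERR_CAST() is the matching idiom. A standalone sketch using simplified userspace stand-ins for the include/linux/err.h helpers (widget_alloc() is hypothetical):

#include <stdio.h>

/* Simplified copies of the err.h helpers, for illustration only. */
#define MAX_ERRNO       4095
#define ERR_PTR(err)    ((void *)(long)(err))
#define PTR_ERR(ptr)    ((long)(ptr))
#define IS_ERR(ptr)     ((unsigned long)(ptr) >= (unsigned long)-MAX_ERRNO)

struct widget { int id; };

/* Fails the way devm_drm_bridge_alloc() does: encoded errno, never NULL. */
static struct widget *widget_alloc(int fail)
{
        static struct widget w = { .id = 1 };

        return fail ? ERR_PTR(-12 /* ENOMEM */) : &w;
}

int main(void)
{
        struct widget *w = widget_alloc(1);

        if (!w)
                printf("NULL check: error missed\n");      /* never taken */
        if (IS_ERR(w))
                printf("IS_ERR check: %ld\n", PTR_ERR(w)); /* -12 */
        return 0;
}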


@ -2432,6 +2432,8 @@ static const struct drm_gpuvm_ops lock_ops = {
*
* The expected usage is:
*
* .. code-block:: c
*
* vm_bind {
* struct drm_exec exec;
*


@ -381,6 +381,26 @@ struct DecFifo {
len: usize,
}
// On the arm32 architecture, dividing a `u64` by a constant will generate a call
// to `__aeabi_uldivmod`, which is not present in the kernel.
// So use the multiply-by-inverse method for this architecture.
fn div10(val: u64) -> u64 {
if cfg!(target_arch = "arm") {
let val_h = val >> 32;
let val_l = val & 0xFFFFFFFF;
let b_h: u64 = 0x66666666;
let b_l: u64 = 0x66666667;
let tmp1 = val_h * b_l + ((val_l * b_l) >> 32);
let tmp2 = val_l * b_h + (tmp1 & 0xffffffff);
let tmp3 = val_h * b_h + (tmp1 >> 32) + (tmp2 >> 32);
tmp3 >> 2
} else {
val / 10
}
}
impl DecFifo {
fn push(&mut self, data: u64, len: usize) {
let mut chunk = data;
@ -389,7 +409,7 @@ impl DecFifo {
}
for i in 0..len {
self.decimals[i] = (chunk % 10) as u8;
chunk /= 10;
chunk = div10(chunk);
}
self.len += len;
}
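The constant here is 0x6666666666666667 = (2^66 + 6) / 10, and the four 32x32-bit partial products compute the exact high 64 bits of the 128-bit product, so div10() evaluates floor(val * ((2^66 + 6) / 10) / 2^66). A standalone C mirror of the arithmetic, spot-checked against plain division and a 128-bit multiply (sample values are arbitrary):

#include <stdio.h>
#include <stdint.h>

/* C mirror of the Rust div10(): high 64 bits of val * 0x6666666666666667,
 * shifted right by 2. */
static uint64_t div10(uint64_t val)
{
        uint64_t val_h = val >> 32, val_l = val & 0xFFFFFFFF;
        uint64_t b_h = 0x66666666, b_l = 0x66666667;
        uint64_t tmp1 = val_h * b_l + ((val_l * b_l) >> 32);
        uint64_t tmp2 = val_l * b_h + (tmp1 & 0xffffffff);
        uint64_t tmp3 = val_h * b_h + (tmp1 >> 32) + (tmp2 >> 32);

        return tmp3 >> 2;
}

int main(void)
{
        uint64_t samples[] = { 0, 9, 10, 123456789, 1000000000000ULL };

        for (unsigned i = 0; i < sizeof(samples) / sizeof(samples[0]); i++) {
                uint64_t v = samples[i];
                uint64_t hi = (uint64_t)(((unsigned __int128)v *
                                          0x6666666666666667ULL) >> 64);

                printf("%llu: div10 %llu, ref %llu, mulhi>>2 %llu\n",
                       (unsigned long long)v,
                       (unsigned long long)div10(v),
                       (unsigned long long)(v / 10),
                       (unsigned long long)(hi >> 2));
        }
        return 0;
}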


@ -325,6 +325,17 @@ static int hibmc_dp_link_downgrade_training_eq(struct hibmc_dp_dev *dp)
return hibmc_dp_link_reduce_rate(dp);
}
static void hibmc_dp_update_caps(struct hibmc_dp_dev *dp)
{
dp->link.cap.link_rate = dp->dpcd[DP_MAX_LINK_RATE];
if (dp->link.cap.link_rate > DP_LINK_BW_8_1 || !dp->link.cap.link_rate)
dp->link.cap.link_rate = DP_LINK_BW_8_1;
dp->link.cap.lanes = dp->dpcd[DP_MAX_LANE_COUNT] & DP_MAX_LANE_COUNT_MASK;
if (dp->link.cap.lanes > HIBMC_DP_LANE_NUM_MAX)
dp->link.cap.lanes = HIBMC_DP_LANE_NUM_MAX;
}
int hibmc_dp_link_training(struct hibmc_dp_dev *dp)
{
struct hibmc_dp_link *link = &dp->link;
@ -334,8 +345,7 @@ int hibmc_dp_link_training(struct hibmc_dp_dev *dp)
if (ret)
drm_err(dp->dev, "dp aux read dpcd failed, ret: %d\n", ret);
dp->link.cap.link_rate = dp->dpcd[DP_MAX_LINK_RATE];
dp->link.cap.lanes = 0x2;
hibmc_dp_update_caps(dp);
ret = hibmc_dp_get_serdes_rate_cfg(dp);
if (ret < 0)


@ -32,7 +32,7 @@
DEFINE_DRM_GEM_FOPS(hibmc_fops);
static const char *g_irqs_names_map[HIBMC_MAX_VECTORS] = { "vblank", "hpd" };
static const char *g_irqs_names_map[HIBMC_MAX_VECTORS] = { "hibmc-vblank", "hibmc-hpd" };
static irqreturn_t hibmc_interrupt(int irq, void *arg)
{
@ -115,6 +115,8 @@ static const struct drm_mode_config_funcs hibmc_mode_funcs = {
static int hibmc_kms_init(struct hibmc_drm_private *priv)
{
struct drm_device *dev = &priv->dev;
struct drm_encoder *encoder;
u32 clone_mask = 0;
int ret;
ret = drmm_mode_config_init(dev);
@ -154,6 +156,12 @@ static int hibmc_kms_init(struct hibmc_drm_private *priv)
return ret;
}
drm_for_each_encoder(encoder, dev)
clone_mask |= drm_encoder_mask(encoder);
drm_for_each_encoder(encoder, dev)
encoder->possible_clones = clone_mask;
return 0;
}
@ -277,7 +285,6 @@ static void hibmc_unload(struct drm_device *dev)
static int hibmc_msi_init(struct drm_device *dev)
{
struct pci_dev *pdev = to_pci_dev(dev->dev);
char name[32] = {0};
int valid_irq_num;
int irq;
int ret;
@ -292,9 +299,6 @@ static int hibmc_msi_init(struct drm_device *dev)
valid_irq_num = ret;
for (int i = 0; i < valid_irq_num; i++) {
snprintf(name, ARRAY_SIZE(name) - 1, "%s-%s-%s",
dev->driver->name, pci_name(pdev), g_irqs_names_map[i]);
irq = pci_irq_vector(pdev, i);
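/* devm_request_irq() stores the name pointer rather than copying the
 * string, so the name must outlive the IRQ; the stack buffer removed
 * above would have left a dangling pointer behind. */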
if (i)
@ -302,10 +306,10 @@ static int hibmc_msi_init(struct drm_device *dev)
ret = devm_request_threaded_irq(&pdev->dev, irq,
hibmc_dp_interrupt,
hibmc_dp_hpd_isr,
IRQF_SHARED, name, dev);
IRQF_SHARED, g_irqs_names_map[i], dev);
else
ret = devm_request_irq(&pdev->dev, irq, hibmc_interrupt,
IRQF_SHARED, name, dev);
IRQF_SHARED, g_irqs_names_map[i], dev);
if (ret) {
drm_err(dev, "install irq failed: %d\n", ret);
return ret;
@ -323,13 +327,13 @@ static int hibmc_load(struct drm_device *dev)
ret = hibmc_hw_init(priv);
if (ret)
goto err;
return ret;
ret = drmm_vram_helper_init(dev, pci_resource_start(pdev, 0),
pci_resource_len(pdev, 0));
if (ret) {
drm_err(dev, "Error initializing VRAM MM; %d\n", ret);
goto err;
return ret;
}
ret = hibmc_kms_init(priv);


@ -69,6 +69,7 @@ int hibmc_de_init(struct hibmc_drm_private *priv);
int hibmc_vdac_init(struct hibmc_drm_private *priv);
int hibmc_ddc_create(struct drm_device *drm_dev, struct hibmc_vdac *connector);
void hibmc_ddc_del(struct hibmc_vdac *vdac);
int hibmc_dp_init(struct hibmc_drm_private *priv);


@ -95,3 +95,8 @@ int hibmc_ddc_create(struct drm_device *drm_dev, struct hibmc_vdac *vdac)
return i2c_bit_add_bus(&vdac->adapter);
}
void hibmc_ddc_del(struct hibmc_vdac *vdac)
{
i2c_del_adapter(&vdac->adapter);
}


@ -53,7 +53,7 @@ static void hibmc_connector_destroy(struct drm_connector *connector)
{
struct hibmc_vdac *vdac = to_hibmc_vdac(connector);
i2c_del_adapter(&vdac->adapter);
hibmc_ddc_del(vdac);
drm_connector_cleanup(connector);
}
@ -110,7 +110,7 @@ int hibmc_vdac_init(struct hibmc_drm_private *priv)
ret = drmm_encoder_init(dev, encoder, NULL, DRM_MODE_ENCODER_DAC, NULL);
if (ret) {
drm_err(dev, "failed to init encoder: %d\n", ret);
return ret;
goto err;
}
drm_encoder_helper_add(encoder, &hibmc_encoder_helper_funcs);
@ -121,7 +121,7 @@ int hibmc_vdac_init(struct hibmc_drm_private *priv)
&vdac->adapter);
if (ret) {
drm_err(dev, "failed to init connector: %d\n", ret);
return ret;
goto err;
}
drm_connector_helper_add(connector, &hibmc_connector_helper_funcs);
@ -131,4 +131,9 @@ int hibmc_vdac_init(struct hibmc_drm_private *priv)
connector->polled = DRM_CONNECTOR_POLL_CONNECT | DRM_CONNECTOR_POLL_DISCONNECT;
return 0;
err:
hibmc_ddc_del(vdac);
return ret;
}


@ -1506,10 +1506,14 @@ u32 gen11_gu_misc_irq_ack(struct intel_display *display, const u32 master_ctl)
if (!(master_ctl & GEN11_GU_MISC_IRQ))
return 0;
intel_display_rpm_assert_block(display);
iir = intel_de_read(display, GEN11_GU_MISC_IIR);
if (likely(iir))
intel_de_write(display, GEN11_GU_MISC_IIR, iir);
intel_display_rpm_assert_unblock(display);
return iir;
}


@ -23,6 +23,7 @@
#include "intel_modeset_lock.h"
#include "intel_tc.h"
#define DP_PIN_ASSIGNMENT_NONE 0x0
#define DP_PIN_ASSIGNMENT_C 0x3
#define DP_PIN_ASSIGNMENT_D 0x4
#define DP_PIN_ASSIGNMENT_E 0x5
@ -66,6 +67,7 @@ struct intel_tc_port {
enum tc_port_mode init_mode;
enum phy_fia phy_fia;
u8 phy_fia_idx;
u8 max_lane_count;
};
static enum intel_display_power_domain
@ -307,6 +309,8 @@ static int lnl_tc_port_get_max_lane_count(struct intel_digital_port *dig_port)
REG_FIELD_GET(TCSS_DDI_STATUS_PIN_ASSIGNMENT_MASK, val);
switch (pin_assignment) {
case DP_PIN_ASSIGNMENT_NONE:
return 0;
default:
MISSING_CASE(pin_assignment);
fallthrough;
@ -365,12 +369,12 @@ static int intel_tc_port_get_max_lane_count(struct intel_digital_port *dig_port)
}
}
int intel_tc_port_max_lane_count(struct intel_digital_port *dig_port)
static int get_max_lane_count(struct intel_tc_port *tc)
{
struct intel_display *display = to_intel_display(dig_port);
struct intel_tc_port *tc = to_tc_port(dig_port);
struct intel_display *display = to_intel_display(tc->dig_port);
struct intel_digital_port *dig_port = tc->dig_port;
if (!intel_encoder_is_tc(&dig_port->base) || tc->mode != TC_PORT_DP_ALT)
if (tc->mode != TC_PORT_DP_ALT)
return 4;
assert_tc_cold_blocked(tc);
@ -384,6 +388,25 @@ int intel_tc_port_max_lane_count(struct intel_digital_port *dig_port)
return intel_tc_port_get_max_lane_count(dig_port);
}
static void read_pin_configuration(struct intel_tc_port *tc)
{
tc->max_lane_count = get_max_lane_count(tc);
}
int intel_tc_port_max_lane_count(struct intel_digital_port *dig_port)
{
struct intel_display *display = to_intel_display(dig_port);
struct intel_tc_port *tc = to_tc_port(dig_port);
if (!intel_encoder_is_tc(&dig_port->base))
return 4;
if (DISPLAY_VER(display) < 20)
return get_max_lane_count(tc);
return tc->max_lane_count;
}
void intel_tc_port_set_fia_lane_count(struct intel_digital_port *dig_port,
int required_lanes)
{
@ -596,9 +619,12 @@ static void icl_tc_phy_get_hw_state(struct intel_tc_port *tc)
tc_cold_wref = __tc_cold_block(tc, &domain);
tc->mode = tc_phy_get_current_mode(tc);
if (tc->mode != TC_PORT_DISCONNECTED)
if (tc->mode != TC_PORT_DISCONNECTED) {
tc->lock_wakeref = tc_cold_block(tc);
read_pin_configuration(tc);
}
__tc_cold_unblock(tc, domain, tc_cold_wref);
}
@ -656,8 +682,11 @@ static bool icl_tc_phy_connect(struct intel_tc_port *tc,
tc->lock_wakeref = tc_cold_block(tc);
if (tc->mode == TC_PORT_TBT_ALT)
if (tc->mode == TC_PORT_TBT_ALT) {
read_pin_configuration(tc);
return true;
}
if ((!tc_phy_is_ready(tc) ||
!icl_tc_phy_take_ownership(tc, true)) &&
@ -668,6 +697,7 @@ static bool icl_tc_phy_connect(struct intel_tc_port *tc,
goto out_unblock_tc_cold;
}
read_pin_configuration(tc);
if (!tc_phy_verify_legacy_or_dp_alt_mode(tc, required_lanes))
goto out_release_phy;
@ -858,9 +888,12 @@ static void adlp_tc_phy_get_hw_state(struct intel_tc_port *tc)
port_wakeref = intel_display_power_get(display, port_power_domain);
tc->mode = tc_phy_get_current_mode(tc);
if (tc->mode != TC_PORT_DISCONNECTED)
if (tc->mode != TC_PORT_DISCONNECTED) {
tc->lock_wakeref = tc_cold_block(tc);
read_pin_configuration(tc);
}
intel_display_power_put(display, port_power_domain, port_wakeref);
}
@ -873,6 +906,9 @@ static bool adlp_tc_phy_connect(struct intel_tc_port *tc, int required_lanes)
if (tc->mode == TC_PORT_TBT_ALT) {
tc->lock_wakeref = tc_cold_block(tc);
read_pin_configuration(tc);
return true;
}
@ -894,6 +930,8 @@ static bool adlp_tc_phy_connect(struct intel_tc_port *tc, int required_lanes)
tc->lock_wakeref = tc_cold_block(tc);
read_pin_configuration(tc);
if (!tc_phy_verify_legacy_or_dp_alt_mode(tc, required_lanes))
goto out_unblock_tc_cold;
@ -1124,9 +1162,18 @@ static void xelpdp_tc_phy_get_hw_state(struct intel_tc_port *tc)
tc_cold_wref = __tc_cold_block(tc, &domain);
tc->mode = tc_phy_get_current_mode(tc);
if (tc->mode != TC_PORT_DISCONNECTED)
if (tc->mode != TC_PORT_DISCONNECTED) {
tc->lock_wakeref = tc_cold_block(tc);
read_pin_configuration(tc);
/*
* Set a valid lane count value for a DP-alt sink which got
* disconnected. The driver can only disable the output on this PHY.
*/
if (tc->max_lane_count == 0)
tc->max_lane_count = 4;
}
drm_WARN_ON(display->drm,
(tc->mode == TC_PORT_DP_ALT || tc->mode == TC_PORT_LEGACY) &&
!xelpdp_tc_phy_tcss_power_is_enabled(tc));
@ -1138,14 +1185,19 @@ static bool xelpdp_tc_phy_connect(struct intel_tc_port *tc, int required_lanes)
{
tc->lock_wakeref = tc_cold_block(tc);
if (tc->mode == TC_PORT_TBT_ALT)
if (tc->mode == TC_PORT_TBT_ALT) {
read_pin_configuration(tc);
return true;
}
if (!xelpdp_tc_phy_enable_tcss_power(tc, true))
goto out_unblock_tccold;
xelpdp_tc_phy_take_ownership(tc, true);
read_pin_configuration(tc);
if (!tc_phy_verify_legacy_or_dp_alt_mode(tc, required_lanes))
goto out_release_phy;
@ -1226,14 +1278,19 @@ static void tc_phy_get_hw_state(struct intel_tc_port *tc)
tc->phy_ops->get_hw_state(tc);
}
static bool tc_phy_is_ready_and_owned(struct intel_tc_port *tc,
bool phy_is_ready, bool phy_is_owned)
/* Is the PHY owned by the display, i.e. is it in legacy or DP-alt mode? */
static bool tc_phy_owned_by_display(struct intel_tc_port *tc,
bool phy_is_ready, bool phy_is_owned)
{
struct intel_display *display = to_intel_display(tc->dig_port);
drm_WARN_ON(display->drm, phy_is_owned && !phy_is_ready);
if (DISPLAY_VER(display) < 20) {
drm_WARN_ON(display->drm, phy_is_owned && !phy_is_ready);
return phy_is_ready && phy_is_owned;
return phy_is_ready && phy_is_owned;
} else {
return phy_is_owned;
}
}
static bool tc_phy_is_connected(struct intel_tc_port *tc,
@ -1244,7 +1301,7 @@ static bool tc_phy_is_connected(struct intel_tc_port *tc,
bool phy_is_owned = tc_phy_is_owned(tc);
bool is_connected;
if (tc_phy_is_ready_and_owned(tc, phy_is_ready, phy_is_owned))
if (tc_phy_owned_by_display(tc, phy_is_ready, phy_is_owned))
is_connected = port_pll_type == ICL_PORT_DPLL_MG_PHY;
else
is_connected = port_pll_type == ICL_PORT_DPLL_DEFAULT;
@ -1352,7 +1409,7 @@ tc_phy_get_current_mode(struct intel_tc_port *tc)
phy_is_ready = tc_phy_is_ready(tc);
phy_is_owned = tc_phy_is_owned(tc);
if (!tc_phy_is_ready_and_owned(tc, phy_is_ready, phy_is_owned)) {
if (!tc_phy_owned_by_display(tc, phy_is_ready, phy_is_owned)) {
mode = get_tc_mode_in_phy_not_owned_state(tc, live_mode);
} else {
drm_WARN_ON(display->drm, live_mode == TC_PORT_TBT_ALT);
@ -1441,11 +1498,11 @@ static void intel_tc_port_reset_mode(struct intel_tc_port *tc,
intel_display_power_flush_work(display);
if (!intel_tc_cold_requires_aux_pw(dig_port)) {
enum intel_display_power_domain aux_domain;
bool aux_powered;
aux_domain = intel_aux_power_domain(dig_port);
aux_powered = intel_display_power_is_enabled(display, aux_domain);
drm_WARN_ON(display->drm, aux_powered);
if (intel_display_power_is_enabled(display, aux_domain))
drm_dbg_kms(display->drm, "Port %s: AUX unexpectedly powered\n",
tc->port_name);
}
tc_phy_disconnect(tc);


@ -634,6 +634,8 @@ static void cfl_ctx_workarounds_init(struct intel_engine_cs *engine,
static void icl_ctx_workarounds_init(struct intel_engine_cs *engine,
struct i915_wa_list *wal)
{
struct drm_i915_private *i915 = engine->i915;
/* Wa_1406697149 (WaDisableBankHangMode:icl) */
wa_write(wal, GEN8_L3CNTLREG, GEN8_ERRDETBCTRL);
@ -669,6 +671,15 @@ static void icl_ctx_workarounds_init(struct intel_engine_cs *engine,
/* Wa_1406306137:icl,ehl */
wa_mcr_masked_en(wal, GEN9_ROW_CHICKEN4, GEN11_DIS_PICK_2ND_EU);
if (IS_JASPERLAKE(i915) || IS_ELKHARTLAKE(i915)) {
/*
* Disable Repacking for Compression (masked R/W access)
* before rendering compressed surfaces for display.
*/
wa_masked_en(wal, CACHE_MODE_0_GEN7,
DISABLE_REPACKING_FOR_COMPRESSION);
}
}
/*
@ -2306,15 +2317,6 @@ rcs_engine_wa_init(struct intel_engine_cs *engine, struct i915_wa_list *wal)
GEN8_RC_SEMA_IDLE_MSG_DISABLE);
}
if (IS_JASPERLAKE(i915) || IS_ELKHARTLAKE(i915)) {
/*
* "Disable Repacking for Compression (masked R/W access)
* before rendering compressed surfaces for display."
*/
wa_masked_en(wal, CACHE_MODE_0_GEN7,
DISABLE_REPACKING_FOR_COMPRESSION);
}
if (GRAPHICS_VER(i915) == 11) {
/* This is not a Wa. Enable for better image quality */
wa_masked_en(wal,


@ -60,14 +60,14 @@
* virtual address in the GPU's VA space there is no guarantee that the actual
* mappings are created in the GPU's MMU. If the given memory is swapped out
* at the time the bind operation is executed the kernel will stash the mapping
* details into it's internal alloctor and create the actual MMU mappings once
* details into its internal allocator and create the actual MMU mappings once
* the memory is swapped back in. While this is transparent for userspace, it is
* guaranteed that all the backing memory is swapped back in and all the memory
* mappings, as requested by userspace previously, are actually mapped once the
* DRM_NOUVEAU_EXEC ioctl is called to submit an exec job.
*
* A VM_BIND job can be executed either synchronously or asynchronously. If
* exectued asynchronously, userspace may provide a list of syncobjs this job
* executed asynchronously, userspace may provide a list of syncobjs this job
* will wait for and/or a list of syncobj the kernel will signal once the
* VM_BIND job finished execution. If executed synchronously the ioctl will
* block until the bind job is finished. For synchronous jobs the kernel will
@ -82,7 +82,7 @@
* Since VM_BIND jobs update the GPU's VA space on job submit, EXEC jobs do have
* an up to date view of the VA space. However, the actual mappings might still
* be pending. Hence, EXEC jobs require to have the particular fences - of
* the corresponding VM_BIND jobs they depent on - attached to them.
* the corresponding VM_BIND jobs they depend on - attached to them.
*/
static int

View File

@ -219,7 +219,8 @@ nvif_vmm_ctor(struct nvif_mmu *mmu, const char *name, s32 oclass,
case RAW: args->type = NVIF_VMM_V0_TYPE_RAW; break;
default:
WARN_ON(1);
return -EINVAL;
ret = -EINVAL;
goto done;
}
memcpy(args->data, argv, argc);


@ -325,7 +325,7 @@ r535_gsp_msgq_recv(struct nvkm_gsp *gsp, u32 gsp_rpc_len, int *retries)
rpc = r535_gsp_msgq_peek(gsp, sizeof(*rpc), info.retries);
if (IS_ERR_OR_NULL(rpc)) {
kfree(buf);
kvfree(buf);
return rpc;
}
@ -334,7 +334,7 @@ r535_gsp_msgq_recv(struct nvkm_gsp *gsp, u32 gsp_rpc_len, int *retries)
rpc = r535_gsp_msgq_recv_one_elem(gsp, &info);
if (IS_ERR_OR_NULL(rpc)) {
kfree(buf);
kvfree(buf);
return rpc;
}


@ -39,7 +39,8 @@ impl File {
_ => return Err(EINVAL),
};
getparam.set_value(value);
#[allow(clippy::useless_conversion)]
getparam.set_value(value.into());
Ok(0)
}


@ -53,6 +53,7 @@ config ROCKCHIP_CDN_DP
bool "Rockchip cdn DP"
depends on EXTCON=y || (EXTCON=m && DRM_ROCKCHIP=m)
select DRM_DISPLAY_HELPER
select DRM_BRIDGE_CONNECTOR
select DRM_DISPLAY_DP_HELPER
help
This selects support for Rockchip SoC specific extensions


@ -2579,12 +2579,13 @@ static int vop2_win_init(struct vop2 *vop2)
}
/*
* The window registers are only updated when config done is written.
* Until that they read back the old value. As we read-modify-write
* these registers mark them as non-volatile. This makes sure we read
* the new values from the regmap register cache.
* The window and video port registers are only updated when config
* done is written. Until then, they read back the old value. As we
* read-modify-write these registers, mark them as non-volatile. This
* makes sure we read the new values from the regmap register cache.
*/
static const struct regmap_range vop2_nonvolatile_range[] = {
regmap_reg_range(RK3568_VP0_CTRL_BASE, RK3588_VP3_CTRL_BASE + 255),
regmap_reg_range(0x1000, 0x23ff),
};


@ -1033,13 +1033,14 @@ static void drm_test_fb_xrgb8888_to_xrgb2101010(struct kunit *test)
NULL : &result->dst_pitch;
drm_fb_xrgb8888_to_xrgb2101010(&dst, dst_pitch, &src, &fb, &params->clip, &fmtcnv_state);
buf = le32buf_to_cpu(test, buf, dst_size / sizeof(u32));
buf = le32buf_to_cpu(test, (__force const __le32 *)buf, dst_size / sizeof(u32));
KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
buf = dst.vaddr; /* restore original value of buf */
memset(buf, 0, dst_size);
drm_fb_xrgb8888_to_xrgb2101010(&dst, dst_pitch, &src, &fb, &params->clip, &fmtcnv_state);
buf = le32buf_to_cpu(test, (__force const __le32 *)buf, dst_size / sizeof(u32));
KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
}


@ -408,7 +408,7 @@ struct xe_migrate *xe_migrate_init(struct xe_tile *tile)
/* Special layout, prepared below.. */
vm = xe_vm_create(xe, XE_VM_FLAG_MIGRATION |
XE_VM_FLAG_SET_TILE_ID(tile));
XE_VM_FLAG_SET_TILE_ID(tile), NULL);
if (IS_ERR(vm))
return ERR_CAST(vm);


@ -101,7 +101,7 @@ static int allocate_gsc_client_resources(struct xe_gt *gt,
xe_assert(xe, hwe);
/* PXP instructions must be issued from PPGTT */
vm = xe_vm_create(xe, XE_VM_FLAG_GSC);
vm = xe_vm_create(xe, XE_VM_FLAG_GSC, NULL);
if (IS_ERR(vm))
return PTR_ERR(vm);


@ -1640,7 +1640,7 @@ static void xe_vm_free_scratch(struct xe_vm *vm)
}
}
struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags, struct xe_file *xef)
{
struct drm_gem_object *vm_resv_obj;
struct xe_vm *vm;
@ -1661,9 +1661,10 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
vm->xe = xe;
vm->size = 1ull << xe->info.va_bits;
vm->flags = flags;
if (xef)
vm->xef = xe_file_get(xef);
/**
* GSC VMs are kernel-owned, only used for PXP ops and can sometimes be
* manipulated under the PXP mutex. However, the PXP mutex can be taken
@ -1794,6 +1795,20 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
if (number_tiles > 1)
vm->composite_fence_ctx = dma_fence_context_alloc(1);
if (xef && xe->info.has_asid) {
u32 asid;
down_write(&xe->usm.lock);
err = xa_alloc_cyclic(&xe->usm.asid_to_vm, &asid, vm,
XA_LIMIT(1, XE_MAX_ASID - 1),
&xe->usm.next_asid, GFP_KERNEL);
up_write(&xe->usm.lock);
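/* xa_alloc_cyclic() returns 1 (still success) when the ASID counter wraps,
 * so only a negative return is an error here. */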
if (err < 0)
goto err_unlock_close;
vm->usm.asid = asid;
}
trace_xe_vm_create(vm);
return vm;
@ -1814,6 +1829,8 @@ err_no_resv:
for_each_tile(tile, xe, id)
xe_range_fence_tree_fini(&vm->rftree[id]);
ttm_lru_bulk_move_fini(&xe->ttm, &vm->lru_bulk_move);
if (vm->xef)
xe_file_put(vm->xef);
kfree(vm);
if (flags & XE_VM_FLAG_LR_MODE)
xe_pm_runtime_put(xe);
@ -2059,9 +2076,8 @@ int xe_vm_create_ioctl(struct drm_device *dev, void *data,
struct xe_device *xe = to_xe_device(dev);
struct xe_file *xef = to_xe_file(file);
struct drm_xe_vm_create *args = data;
struct xe_tile *tile;
struct xe_vm *vm;
u32 id, asid;
u32 id;
int err;
u32 flags = 0;
@ -2097,29 +2113,10 @@ int xe_vm_create_ioctl(struct drm_device *dev, void *data,
if (args->flags & DRM_XE_VM_CREATE_FLAG_FAULT_MODE)
flags |= XE_VM_FLAG_FAULT_MODE;
vm = xe_vm_create(xe, flags);
vm = xe_vm_create(xe, flags, xef);
if (IS_ERR(vm))
return PTR_ERR(vm);
if (xe->info.has_asid) {
down_write(&xe->usm.lock);
err = xa_alloc_cyclic(&xe->usm.asid_to_vm, &asid, vm,
XA_LIMIT(1, XE_MAX_ASID - 1),
&xe->usm.next_asid, GFP_KERNEL);
up_write(&xe->usm.lock);
if (err < 0)
goto err_close_and_put;
vm->usm.asid = asid;
}
vm->xef = xe_file_get(xef);
/* Record BO memory for VM pagetable created against client */
for_each_tile(tile, xe, id)
if (vm->pt_root[id])
xe_drm_client_add_bo(vm->xef->client, vm->pt_root[id]->bo);
#if IS_ENABLED(CONFIG_DRM_XE_DEBUG_MEM)
/* Warning: Security issue - never enable by default */
args->reserved[0] = xe_bo_main_addr(vm->pt_root[0]->bo, XE_PAGE_SIZE);
@ -3421,6 +3418,7 @@ static int vm_bind_ioctl_check_args(struct xe_device *xe, struct xe_vm *vm,
free_bind_ops:
if (args->num_binds > 1)
kvfree(*bind_ops);
*bind_ops = NULL;
return err;
}
@ -3527,7 +3525,7 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
struct xe_exec_queue *q = NULL;
u32 num_syncs, num_ufence = 0;
struct xe_sync_entry *syncs = NULL;
struct drm_xe_vm_bind_op *bind_ops;
struct drm_xe_vm_bind_op *bind_ops = NULL;
struct xe_vma_ops vops;
struct dma_fence *fence;
int err;


@ -26,7 +26,7 @@ struct xe_sync_entry;
struct xe_svm_range;
struct drm_exec;
struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags);
struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags, struct xe_file *xef);
struct xe_vm *xe_vm_lookup(struct xe_file *xef, u32 id);
int xe_vma_cmp_vma_cb(const void *key, const struct rb_node *node);


@ -143,10 +143,10 @@ static int rtl9300_i2c_write(struct rtl9300_i2c *i2c, u8 *buf, int len)
return -EIO;
for (i = 0; i < len; i++) {
if (i % 4 == 0)
vals[i/4] = 0;
vals[i/4] <<= 8;
vals[i/4] |= buf[i];
unsigned int shift = (i % 4) * 8;
unsigned int reg = i / 4;
vals[reg] |= buf[i] << shift;
}
return regmap_bulk_write(i2c->regmap, i2c->reg_base + RTL9300_I2C_MST_DATA_WORD0,
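The two packings differ in byte order: the removed shift-then-or loop lands the first byte in the most significant byte of each 32-bit word, while the new form places byte i at bit (i % 4) * 8, first byte least significant, presumably the order the controller shifts data out (inferred from the fix). A standalone comparison for a hypothetical 3-byte payload:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
        uint8_t buf[] = { 0x11, 0x22, 0x33 };   /* one 32-bit data word */
        uint32_t old_val = 0, new_val = 0;
        int len = sizeof(buf);

        /* Old packing: shift-then-or, first byte most significant. */
        for (int i = 0; i < len; i++) {
                old_val <<= 8;
                old_val |= buf[i];
        }

        /* New packing: byte i at bit (i % 4) * 8, order-independent. */
        for (int i = 0; i < len; i++)
                new_val |= (uint32_t)buf[i] << ((i % 4) * 8);

        printf("old 0x%08x, new 0x%08x\n", old_val, new_val);
        /* old 0x00112233, new 0x00332211 */
        return 0;
}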
@ -175,7 +175,7 @@ static int rtl9300_i2c_execute_xfer(struct rtl9300_i2c *i2c, char read_write,
return ret;
ret = regmap_read_poll_timeout(i2c->regmap, i2c->reg_base + RTL9300_I2C_MST_CTRL1,
val, !(val & RTL9300_I2C_MST_CTRL1_I2C_TRIG), 100, 2000);
val, !(val & RTL9300_I2C_MST_CTRL1_I2C_TRIG), 100, 100000);
if (ret)
return ret;
@ -281,15 +281,19 @@ static int rtl9300_i2c_smbus_xfer(struct i2c_adapter *adap, u16 addr, unsigned s
ret = rtl9300_i2c_reg_addr_set(i2c, command, 1);
if (ret)
goto out_unlock;
ret = rtl9300_i2c_config_xfer(i2c, chan, addr, data->block[0]);
if (data->block[0] < 1 || data->block[0] > I2C_SMBUS_BLOCK_MAX) {
ret = -EINVAL;
goto out_unlock;
}
ret = rtl9300_i2c_config_xfer(i2c, chan, addr, data->block[0] + 1);
if (ret)
goto out_unlock;
if (read_write == I2C_SMBUS_WRITE) {
ret = rtl9300_i2c_write(i2c, &data->block[1], data->block[0]);
ret = rtl9300_i2c_write(i2c, &data->block[0], data->block[0] + 1);
if (ret)
goto out_unlock;
}
len = data->block[0];
len = data->block[0] + 1;
break;
default:


@ -477,7 +477,7 @@ static irqreturn_t sca3300_trigger_handler(int irq, void *p)
struct iio_dev *indio_dev = pf->indio_dev;
struct sca3300_data *data = iio_priv(indio_dev);
int bit, ret, val, i = 0;
IIO_DECLARE_BUFFER_WITH_TS(s16, channels, SCA3300_SCAN_MAX);
IIO_DECLARE_BUFFER_WITH_TS(s16, channels, SCA3300_SCAN_MAX) = { };
iio_for_each_active_channel(indio_dev, bit) {
ret = sca3300_read_reg(data, indio_dev->channels[bit].address, &val);


@ -1300,7 +1300,7 @@ config RN5T618_ADC
config ROHM_BD79124
tristate "Rohm BD79124 ADC driver"
depends on I2C
depends on I2C && GPIOLIB
select REGMAP_I2C
select IIO_ADC_HELPER
help


@ -849,7 +849,7 @@ enum {
static int ad7124_syscalib_locked(struct ad7124_state *st, const struct iio_chan_spec *chan)
{
struct device *dev = &st->sd.spi->dev;
struct ad7124_channel *ch = &st->channels[chan->channel];
struct ad7124_channel *ch = &st->channels[chan->address];
int ret;
if (ch->syscalib_mode == AD7124_SYSCALIB_ZERO_SCALE) {
@ -865,8 +865,8 @@ static int ad7124_syscalib_locked(struct ad7124_state *st, const struct iio_chan
if (ret < 0)
return ret;
dev_dbg(dev, "offset for channel %d after zero-scale calibration: 0x%x\n",
chan->channel, ch->cfg.calibration_offset);
dev_dbg(dev, "offset for channel %lu after zero-scale calibration: 0x%x\n",
chan->address, ch->cfg.calibration_offset);
} else {
ch->cfg.calibration_gain = st->gain_default;
@ -880,8 +880,8 @@ static int ad7124_syscalib_locked(struct ad7124_state *st, const struct iio_chan
if (ret < 0)
return ret;
dev_dbg(dev, "gain for channel %d after full-scale calibration: 0x%x\n",
chan->channel, ch->cfg.calibration_gain);
dev_dbg(dev, "gain for channel %lu after full-scale calibration: 0x%x\n",
chan->address, ch->cfg.calibration_gain);
}
return 0;
@ -924,7 +924,7 @@ static int ad7124_set_syscalib_mode(struct iio_dev *indio_dev,
{
struct ad7124_state *st = iio_priv(indio_dev);
st->channels[chan->channel].syscalib_mode = mode;
st->channels[chan->address].syscalib_mode = mode;
return 0;
}
@ -934,7 +934,7 @@ static int ad7124_get_syscalib_mode(struct iio_dev *indio_dev,
{
struct ad7124_state *st = iio_priv(indio_dev);
return st->channels[chan->channel].syscalib_mode;
return st->channels[chan->address].syscalib_mode;
}
static const struct iio_enum ad7124_syscalib_mode_enum = {


@ -200,7 +200,7 @@ struct ad7173_channel_config {
/*
* Following fields are used to compare equality. If you
* make adaptations in it, you most likely also have to adapt
* ad7173_find_live_config(), too.
* ad7173_is_setup_equal(), too.
*/
struct_group(config_props,
bool bipolar;
@ -561,12 +561,19 @@ static void ad7173_reset_usage_cnts(struct ad7173_state *st)
st->config_usage_counter = 0;
}
static struct ad7173_channel_config *
ad7173_find_live_config(struct ad7173_state *st, struct ad7173_channel_config *cfg)
/**
* ad7173_is_setup_equal - Compare two channel setups
* @cfg1: First channel configuration
* @cfg2: Second channel configuration
*
* Compares all configuration options that affect the registers connected to
* SETUP_SEL, namely CONFIGx, FILTERx, GAINx and OFFSETx.
*
* Returns: true if the setups are identical, false otherwise
*/
static bool ad7173_is_setup_equal(const struct ad7173_channel_config *cfg1,
const struct ad7173_channel_config *cfg2)
{
struct ad7173_channel_config *cfg_aux;
int i;
/*
* This is just to make sure that the comparison is adapted after
* struct ad7173_channel_config was changed.
@ -579,14 +586,22 @@ ad7173_find_live_config(struct ad7173_state *st, struct ad7173_channel_config *c
u8 ref_sel;
}));
return cfg1->bipolar == cfg2->bipolar &&
cfg1->input_buf == cfg2->input_buf &&
cfg1->odr == cfg2->odr &&
cfg1->ref_sel == cfg2->ref_sel;
}
static struct ad7173_channel_config *
ad7173_find_live_config(struct ad7173_state *st, struct ad7173_channel_config *cfg)
{
struct ad7173_channel_config *cfg_aux;
int i;
for (i = 0; i < st->num_channels; i++) {
cfg_aux = &st->channels[i].cfg;
if (cfg_aux->live &&
cfg->bipolar == cfg_aux->bipolar &&
cfg->input_buf == cfg_aux->input_buf &&
cfg->odr == cfg_aux->odr &&
cfg->ref_sel == cfg_aux->ref_sel)
if (cfg_aux->live && ad7173_is_setup_equal(cfg, cfg_aux))
return cfg_aux;
}
return NULL;
@ -1228,7 +1243,7 @@ static int ad7173_update_scan_mode(struct iio_dev *indio_dev,
const unsigned long *scan_mask)
{
struct ad7173_state *st = iio_priv(indio_dev);
int i, ret;
int i, j, k, ret;
for (i = 0; i < indio_dev->num_channels; i++) {
if (test_bit(i, scan_mask))
@ -1239,6 +1254,54 @@ static int ad7173_update_scan_mode(struct iio_dev *indio_dev,
return ret;
}
/*
* On some chips, there are more channels than setups, so if there were
* more unique setups requested than the number of available slots,
* ad7173_set_channel() will have written over some of the slots. We
* can detect this by making sure each assigned cfg_slot matches the
* requested configuration. If it doesn't, we know that the slot was
* overwritten by a different channel.
*/
for_each_set_bit(i, scan_mask, indio_dev->num_channels) {
const struct ad7173_channel_config *cfg1, *cfg2;
cfg1 = &st->channels[i].cfg;
for_each_set_bit(j, scan_mask, indio_dev->num_channels) {
cfg2 = &st->channels[j].cfg;
/*
* Only compare configs that are assigned to the same
* SETUP_SEL slot and don't compare channel to itself.
*/
if (i == j || cfg1->cfg_slot != cfg2->cfg_slot)
continue;
/*
* If we find two different configs trying to use the
* same SETUP_SEL slot, then we know that we
* have too many unique configurations requested for
* the available slots and at least one was overwritten.
*/
if (!ad7173_is_setup_equal(cfg1, cfg2)) {
/*
* At this point, there isn't a way to tell
* which setups are actually programmed in the
* ADC anymore, so we could read them back to
* see, but it is simpler to just turn off all
* of the live flags so that everything gets
* reprogrammed on the next attempt to read a sample.
*/
for (k = 0; k < st->num_channels; k++)
st->channels[k].cfg.live = false;
dev_err(&st->sd.spi->dev,
"Too many unique channel configurations requested for scan\n");
return -EINVAL;
}
}
}
return 0;
}


@ -873,6 +873,7 @@ static const struct ad7380_chip_info adaq4381_4_chip_info = {
.has_hardware_gain = true,
.available_scan_masks = ad7380_4_channel_scan_masks,
.timing_specs = &ad7380_4_timing,
.max_conversion_rate_hz = 4 * MEGA,
};
static const struct spi_offload_config ad7380_offload_config = {


@ -89,7 +89,6 @@ struct rzg2l_adc {
struct completion completion;
struct mutex lock;
u16 last_val[RZG2L_ADC_MAX_CHANNELS];
bool was_rpm_active;
};
/**
@ -428,6 +427,8 @@ static int rzg2l_adc_probe(struct platform_device *pdev)
if (!indio_dev)
return -ENOMEM;
platform_set_drvdata(pdev, indio_dev);
adc = iio_priv(indio_dev);
adc->hw_params = device_get_match_data(dev);
@ -460,8 +461,6 @@ static int rzg2l_adc_probe(struct platform_device *pdev)
if (ret)
return ret;
platform_set_drvdata(pdev, indio_dev);
ret = rzg2l_adc_hw_init(dev, adc);
if (ret)
return dev_err_probe(&pdev->dev, ret,
@ -541,14 +540,9 @@ static int rzg2l_adc_suspend(struct device *dev)
};
int ret;
if (pm_runtime_suspended(dev)) {
adc->was_rpm_active = false;
} else {
ret = pm_runtime_force_suspend(dev);
if (ret)
return ret;
adc->was_rpm_active = true;
}
ret = pm_runtime_force_suspend(dev);
if (ret)
return ret;
ret = reset_control_bulk_assert(ARRAY_SIZE(resets), resets);
if (ret)
@ -557,9 +551,7 @@ static int rzg2l_adc_suspend(struct device *dev)
return 0;
rpm_restore:
if (adc->was_rpm_active)
pm_runtime_force_resume(dev);
pm_runtime_force_resume(dev);
return ret;
}
@ -577,11 +569,9 @@ static int rzg2l_adc_resume(struct device *dev)
if (ret)
return ret;
if (adc->was_rpm_active) {
ret = pm_runtime_force_resume(dev);
if (ret)
goto resets_restore;
}
ret = pm_runtime_force_resume(dev);
if (ret)
goto resets_restore;
ret = rzg2l_adc_hw_init(dev, adc);
if (ret)
@ -590,10 +580,7 @@ static int rzg2l_adc_resume(struct device *dev)
return 0;
rpm_restore:
if (adc->was_rpm_active) {
pm_runtime_mark_last_busy(dev);
pm_runtime_put_autosuspend(dev);
}
pm_runtime_force_suspend(dev);
resets_restore:
reset_control_bulk_assert(ARRAY_SIZE(resets), resets);
return ret;


@ -32,8 +32,12 @@ static int inv_icm42600_temp_read(struct inv_icm42600_state *st, s16 *temp)
goto exit;
*temp = (s16)be16_to_cpup(raw);
/*
* Temperature data is invalid if both accel and gyro are off.
* Return -EBUSY in this case.
*/
if (*temp == INV_ICM42600_DATA_INVALID)
ret = -EINVAL;
ret = -EBUSY;
exit:
mutex_unlock(&st->lock);


@ -639,7 +639,7 @@ static irqreturn_t as73211_trigger_handler(int irq __always_unused, void *p)
struct {
__le16 chan[4];
aligned_s64 ts;
} scan;
} scan = { };
int data_result, ret;
mutex_lock(&data->mutex);


@ -3213,11 +3213,12 @@ int bmp280_common_probe(struct device *dev,
/* Bring chip out of reset if there is an assigned GPIO line */
gpiod = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_HIGH);
if (IS_ERR(gpiod))
return dev_err_probe(dev, PTR_ERR(gpiod), "failed to get reset GPIO\n");
/* Deassert the signal */
if (gpiod) {
dev_info(dev, "release reset\n");
gpiod_set_value(gpiod, 0);
}
dev_info(dev, "release reset\n");
gpiod_set_value(gpiod, 0);
data->regmap = regmap;


@ -938,12 +938,18 @@ static irqreturn_t isl29501_trigger_handler(int irq, void *p)
struct iio_dev *indio_dev = pf->indio_dev;
struct isl29501_private *isl29501 = iio_priv(indio_dev);
const unsigned long *active_mask = indio_dev->active_scan_mask;
u32 buffer[4] __aligned(8) = {}; /* 1x16-bit + naturally aligned ts */
u32 value;
struct {
u16 data;
aligned_s64 ts;
} scan = { };
if (test_bit(ISL29501_DISTANCE_SCAN_INDEX, active_mask))
isl29501_register_read(isl29501, REG_DISTANCE, buffer);
if (test_bit(ISL29501_DISTANCE_SCAN_INDEX, active_mask)) {
isl29501_register_read(isl29501, REG_DISTANCE, &value);
scan.data = value;
}
iio_push_to_buffers_with_timestamp(indio_dev, buffer, pf->timestamp);
iio_push_to_buffers_with_timestamp(indio_dev, &scan, pf->timestamp);
iio_trigger_notify_done(indio_dev->trig);
return IRQ_HANDLED;
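Like the sca3300 and as73211 changes above, this zero-initializes an on-stack scan buffer: iio_push_to_buffers_with_timestamp() copies the whole struct to userspace, so alignment padding and unused fields would otherwise leak stale stack bytes. A standalone illustration of the padding hole in an equivalent layout:

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Same shape as the isl29501 scan buffer: one 16-bit sample plus a
 * naturally aligned 64-bit timestamp. */
struct scan {
        uint16_t data;
        int64_t ts __attribute__((aligned(8)));
};

int main(void)
{
        printf("sizeof %zu, ts at offset %zu: %zu padding bytes\n",
               sizeof(struct scan), offsetof(struct scan, ts),
               offsetof(struct scan, ts) - sizeof(uint16_t));
        /* sizeof 16, ts at offset 8: 6 padding bytes. Without "= { }"
         * those six bytes hold stale stack data and would be pushed to
         * userspace along with the sample. */
        return 0;
}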

Some files were not shown because too many files have changed in this diff.