pci-v6.17-changes

-----BEGIN PGP SIGNATURE-----
 
 iQJIBAABCgAyFiEEgMe7l+5h9hnxdsnuWYigwDrT+vwFAmiL3OkUHGJoZWxnYWFz
 QGdvb2dsZS5jb20ACgkQWYigwDrT+vz9bhAAqiD9REYlNUgGX/bEBgCVPFdtjjTz
 FpSLzG23vWd2J0FEy04qtQWH9j71IXnM+yMybzsMe9SsPt2HhczzSCIMpPj0FZNN
 ccOf3gA/KqPux7FORrS3mpM8OO4ICt3XZhCji3nNg5iW5XlH+NrQKPVxRlvBB0rP
 +7RxSjDClUdZ97QSSmp1uZ7Qh1qyV0Ht0qjPMwecrnB2kApt4ZaMphAaKPEjX/4f
 RgZPFqbIpRWt9e87Z8ADr5c2jokZAzIV0zauQ2fhbjBkTcXIXL3yOzUbR+ngBWDD
 oq21rXJBUCQheA7J6j2SKabgF9AZaI5NI9ERld5vJ1inXSZCyuyKopN1AzuKZquG
 N+jyYJqZC99ePvMLbTWs/spU58J03A6TOwaJNE3ISRgbnxFkhvLl7h68XuTDonZm
 hYGloXXUj+i+rh7/eJIDDWa9MTpEvl2p1zc6EDIZ/umlnHwg9rGlGQVARMCs6Ist
 EiJQEtjMMlXiBJMkFhpxesOdyonGkxAL9WtT6MoEOFF7dqgsTqSKiDUPa+6MHV+I
 tsTB630J3ROsWGfQD1uJI2BrCm+op4j6faamH6UMqCrUU0TUZMHiRR3qVWbM6qgU
 /WL1gZ96uy5I7UoE0+gH+wMhMClO2BnsxffocToDE5wOYpGDd5BwPEoY8ej8U2lu
 CBMCkMor1jDtS8Y=
 =ipv3
 -----END PGP SIGNATURE-----

Merge tag 'pci-v6.17-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci

Pull PCI updates from Bjorn Helgaas:
 "Enumeration:

   - Allow built-in drivers, not just modular drivers, to use async
     initial probing (Lukas Wunner)

   - Support Immediate Readiness even on devices with no PM Capability
     (Sean Christopherson)

   - Consolidate definition of PCIE_RESET_CONFIG_WAIT_MS (100ms), the
     required delay between a reset and sending config requests to a
     device (Niklas Cassel)

   - Add pci_is_display() to check for "Display" base class and use it
     in ALSA hda, vfio, vga_switcheroo, vt-d (Mario Limonciello); a
     usage sketch follows this summary

   - Allow 'isolated PCI functions' (multi-function devices without a
     function 0) for LoongArch, similar to s390 and jailhouse (Huacai
     Chen)

  Power control:

   - Add ability to enable optional slot clock for cases where the PCIe
     host controller and the slot are supplied by different clocks
     (Marek Vasut)

  PCIe native device hotplug:

   - Fix runtime PM ref imbalance on Hot-Plug Capable ports caused by
     misinterpreting a config read failure after a device has been
     removed (Lukas Wunner)

   - Avoid creating a useless PCIe port service device for pciehp if the
     slot is handled by the ACPI hotplug driver (Lukas Wunner)

   - Ignore ACPI hotplug slots when calculating depth of pciehp hotplug
     ports (Lukas Wunner)

  Virtualization:

   - Save VF resizable BAR state and restore it after reset (Michał
     Winiarski)

   - Allow IOV resources (VF BARs) to be resized (Michał Winiarski)

   - Add pci_iov_vf_bar_set_size() so drivers can control VF BAR size
     (Michał Winiarski); a hedged sketch follows this summary

  Endpoint framework:

   - Add RC-to-EP doorbell support using platform MSI controller,
     including a test case (Frank Li); see the ioctl sketch after this
     summary

   - Allow BAR assignment via configfs so platforms have flexibility in
     determining BAR usage (Jerome Brunet)

  Native PCIe controller drivers:

   - Convert amazon,al-alpine-v[23]-pcie, apm,xgene-pcie,
     axis,artpec6-pcie, marvell,armada-3700-pcie, st,spear1340-pcie to
     DT schema format (Rob Herring)

   - Use dev_fwnode() instead of of_fwnode_handle() to remove OF
     dependency in altera (fixes an unused variable), designware-host,
     mediatek, mediatek-gen3, mobiveil, plda, xilinx, xilinx-dma,
     xilinx-nwl (Jiri Slaby, Arnd Bergmann)

   - Convert aardvark, altera, brcmstb, designware-host, iproc,
     mediatek, mediatek-gen3, mobiveil, plda, rcar-host, vmd, xilinx,
     xilinx-dma, xilinx-nwl from using pci_msi_create_irq_domain() to
     using msi_create_parent_irq_domain() instead; this makes the
     interrupt controller per-PCI device, allows dynamic allocation of
     vectors after initialization, and allows support of IMS (Nam Cao)

  APM X-Gene PCIe controller driver:

   - Rewrite MSI handling to support MSI CPU affinity, drop useless CPU
     hotplug bits, use device-managed memory allocations, and clean
     things up
     (Marc Zyngier)

   - Probe xgene-msi as a standard platform driver rather than a
     subsys_initcall (Marc Zyngier)

  Broadcom STB PCIe controller driver:

   - Add optional DT 'num-lanes' property and, if present, use it to
     override the Maximum Link Width advertised in Link Capabilities
     (Jim Quinlan)

  Cadence PCIe controller driver:

   - Use PCIe Message routing types from the PCI core rather than
     defining private ones (Hans Zhang)

  Freescale i.MX6 PCIe controller driver:

   - Add IMX8MQ_EP third 64-bit BAR in epc_features (Richard Zhu)

   - Add IMX8MM_EP and IMX8MP_EP fixed 256-byte BAR 4 in epc_features
     (Richard Zhu)

   - Configure LUT for MSI/IOMMU in Endpoint mode so Root Complex can
     trigger doorbell on Endpoint (Frank Li)

   - Remove apps_reset (LTSSM_EN) from
     imx_pcie_{assert,deassert}_core_reset(), which fixes a hotplug
     regression on i.MX8MM (Richard Zhu)

   - Delay Endpoint link start until configfs 'start' is written
     (Richard Zhu)

  Intel VMD host bridge driver:

   - Add Intel Panther Lake (PTL)-H/P/U Vendor ID (George D Sworo)

  Qualcomm PCIe controller driver:

   - Add DT binding and driver support for SA8255p, which supports ECAM
     for Configuration Space access (Mayank Rana)

   - Update DT binding and driver to describe PHYs and per-Root Port
     resets in a Root Port stanza and deprecate describing them in the
     host bridge; this makes it possible to support multiple Root Ports
     in the future (Krishna Chaitanya Chundru)

   - Add Qualcomm QCS615 to SM8150 DT binding (Ziyue Zhang)

   - Add Qualcomm QCS8300 to SA8775p DT binding (Ziyue Zhang)

   - Drop TBU and ref clocks from Qualcomm SM8150 and SC8180x DT
     bindings (Konrad Dybcio)

   - Document 'link_down' reset in Qualcomm SA8775P DT binding (Ziyue
     Zhang)

   - Add required PCIE_RESET_CONFIG_WAIT_MS delay after Link up IRQ
     (Niklas Cassel)

  Rockchip PCIe controller driver:

   - Drop unused PCIe Message routing and code definitions (Hans Zhang)

   - Remove several unused header includes (Hans Zhang)

   - Use standard PCIe config register definitions instead of
     rockchip-specific redefinitions (Geraldo Nascimento)

   - Set Target Link Speed to 5.0 GT/s before retraining so we have a
     chance to train at a higher speed (Geraldo Nascimento)

  Rockchip DesignWare PCIe controller driver:

   - Prevent race between link training and register update via DBI by
     inhibiting link training after hot reset and link down (Wilfred
     Mallawa)

   - Add required PCIE_RESET_CONFIG_WAIT_MS delay after Link up IRQ
     (Niklas Cassel)

  Sophgo PCIe controller driver:

   - Add DT binding and driver for the Sophgo SG2044 PCIe controller in
     Root Complex mode (Inochi Amaoto)

  Synopsys DesignWare PCIe controller driver:

   - Add required PCIE_RESET_CONFIG_WAIT_MS delay after waiting for Link up on
     Ports that support > 5.0 GT/s. Slower Ports still rely on the
     not-quite-correct PCIE_LINK_WAIT_SLEEP_MS 90ms default delay while
     waiting for the Link (Niklas Cassel)"
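
A minimal sketch of the enumeration helper mentioned above, assuming
only that pci_is_display() wraps the "Display" base-class test that
callers such as vga_switcheroo and the Intel IOMMU driver previously
open-coded (their conversions appear in the hunks below):

    #include <linux/pci.h>

    /* Hypothetical caller: decide whether a GPU-specific quirk applies */
    static bool needs_display_quirk(struct pci_dev *pdev)
    {
            /* Replaces: (pdev->class >> 16) == PCI_BASE_CLASS_DISPLAY */
            return pci_is_display(pdev);
    }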
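For the VF BAR sizing hook, a hedged sketch of a PF driver resizing a
VF BAR before enabling SR-IOV; the exact signature of
pci_iov_vf_bar_set_size(), the helper name my_pf_enable_sriov(), and
the BAR index and size encoding are assumptions for illustration:

    #include <linux/pci.h>

    static int my_pf_enable_sriov(struct pci_dev *pf, int num_vfs)
    {
            int ret;

            /*
             * Assumed signature: (dev, VF BAR number, resizable-BAR
             * size encoding); 0 is a placeholder for the smallest size.
             */
            ret = pci_iov_vf_bar_set_size(pf, 0, 0);
            if (ret)
                    return ret;

            return pci_enable_sriov(pf, num_vfs);
    }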
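For the RC-to-EP doorbell support, a hedged user-space sketch that
drives the new PCITEST_DOORBELL ioctl directly; the device node path is
an assumption, and the kselftest shown in the first documentation hunk
below wraps the same ioctl:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/pcitest.h>

    int main(void)
    {
            /* Assumed misc-device path for the first endpoint test device */
            int fd = open("/dev/pci-endpoint-test.0", O_RDWR);
            int ret;

            if (fd < 0) {
                    perror("open");
                    return 1;
            }
            ret = ioctl(fd, PCITEST_DOORBELL);
            if (ret < 0)
                    perror("PCITEST_DOORBELL");
            close(fd);
            return ret < 0 ? 1 : 0;
    }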

* tag 'pci-v6.17-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci: (116 commits)
  dt-bindings: PCI: qcom,pcie-sa8775p: Document 'link_down' reset
  dt-bindings: PCI: Remove 83xx-512x-pci.txt
  dt-bindings: PCI: Convert amazon,al-alpine-v[23]-pcie to DT schema
  dt-bindings: PCI: Convert marvell,armada-3700-pcie to DT schema
  dt-bindings: PCI: Convert apm,xgene-pcie to DT schema
  dt-bindings: PCI: Convert axis,artpec6-pcie to DT schema
  dt-bindings: PCI: Convert st,spear1340-pcie to DT schema
  PCI: Move is_pciehp check out of pciehp_is_native()
  PCI: pciehp: Use is_pciehp instead of is_hotplug_bridge
  PCI/portdrv: Use is_pciehp instead of is_hotplug_bridge
  PCI/ACPI: Fix runtime PM ref imbalance on Hot-Plug Capable ports
  selftests: pci_endpoint: Add doorbell test case
  misc: pci_endpoint_test: Add doorbell test case
  PCI: endpoint: pci-epf-test: Add doorbell test support
  PCI: endpoint: Add pci_epf_align_inbound_addr() helper for inbound address alignment
  PCI: endpoint: pci-ep-msi: Add checks for MSI parent and mutability
  PCI: endpoint: Add RC-to-EP doorbell support using platform MSI controller
  PCI: dwc: Add Sophgo SG2044 PCIe controller driver in Root Complex mode
  PCI: vmd: Switch to msi_create_parent_irq_domain()
  PCI: vmd: Convert to lock guards
  ...
Linus Torvalds 2025-08-01 13:59:07 -07:00
commit 0bd0a41a51
104 changed files with 2996 additions and 1375 deletions

@ -203,3 +203,18 @@ controllers, it is advisable to skip this testcase using this
command::
# pci_endpoint_test -f pci_ep_bar -f pci_ep_basic -v memcpy -T COPY_TEST -v dma
Kselftest EP Doorbell
~~~~~~~~~~~~~~~~~~~~~
If the Endpoint MSI controller is used for the doorbell use case, run the
below command to test it:
# pci_endpoint_test -f pcie_ep_doorbell
# Starting 1 tests from 1 test cases.
# RUN pcie_ep_doorbell.DOORBELL_TEST ...
# OK pcie_ep_doorbell.DOORBELL_TEST
ok 1 pcie_ep_doorbell.DOORBELL_TEST
# PASSED: 1 / 1 tests passed.
# Totals: pass:1 fail:0 xfail:0 xpass:0 skip:0 error:0

@ -1,39 +0,0 @@
* Freescale 83xx and 512x PCI bridges
Freescale 83xx and 512x SOCs include the same PCI bridge core.
83xx/512x specific notes:
- reg: should contain two address length tuples
The first is for the internal PCI bridge registers
The second is for the PCI config space access registers
Example (MPC8313ERDB)
pci0: pci@e0008500 {
interrupt-map-mask = <0xf800 0x0 0x0 0x7>;
interrupt-map = <
/* IDSEL 0x0E -mini PCI */
0x7000 0x0 0x0 0x1 &ipic 18 0x8
0x7000 0x0 0x0 0x2 &ipic 18 0x8
0x7000 0x0 0x0 0x3 &ipic 18 0x8
0x7000 0x0 0x0 0x4 &ipic 18 0x8
/* IDSEL 0x0F - PCI slot */
0x7800 0x0 0x0 0x1 &ipic 17 0x8
0x7800 0x0 0x0 0x2 &ipic 18 0x8
0x7800 0x0 0x0 0x3 &ipic 17 0x8
0x7800 0x0 0x0 0x4 &ipic 18 0x8>;
interrupt-parent = <&ipic>;
interrupts = <66 0x8>;
bus-range = <0x0 0x0>;
ranges = <0x02000000 0x0 0x90000000 0x90000000 0x0 0x10000000
0x42000000 0x0 0x80000000 0x80000000 0x0 0x10000000
0x01000000 0x0 0x00000000 0xe2000000 0x0 0x00100000>;
clock-frequency = <66666666>;
#interrupt-cells = <1>;
#size-cells = <2>;
#address-cells = <3>;
reg = <0xe0008500 0x100 /* internal registers */
0xe0008300 0x8>; /* config space access registers */
compatible = "fsl,mpc8349-pci";
device_type = "pci";
};

@ -1,59 +0,0 @@
Aardvark PCIe controller
This PCIe controller is used on the Marvell Armada 3700 ARM64 SoC.
The Device Tree node describing an Aardvark PCIe controller must
contain the following properties:
- compatible: Should be "marvell,armada-3700-pcie"
- reg: range of registers for the PCIe controller
- interrupts: the interrupt line of the PCIe controller
- #address-cells: set to <3>
- #size-cells: set to <2>
- device_type: set to "pci"
- ranges: ranges for the PCI memory and I/O regions
- #interrupt-cells: set to <1>
- msi-controller: indicates that the PCIe controller can itself
handle MSI interrupts
- msi-parent: pointer to the MSI controller to be used
- interrupt-map-mask and interrupt-map: standard PCI properties to
define the mapping of the PCIe interface to interrupt numbers.
- bus-range: PCI bus numbers covered
- phys: the PCIe PHY handle
- max-link-speed: see pci.txt
- reset-gpios: see pci.txt
In addition, the Device Tree describing an Aardvark PCIe controller
must include a sub-node that describes the legacy interrupt controller
built into the PCIe controller. This sub-node must have the following
properties:
- interrupt-controller
- #interrupt-cells: set to <1>
Example:
pcie0: pcie@d0070000 {
compatible = "marvell,armada-3700-pcie";
device_type = "pci";
reg = <0 0xd0070000 0 0x20000>;
#address-cells = <3>;
#size-cells = <2>;
bus-range = <0x00 0xff>;
interrupts = <GIC_SPI 29 IRQ_TYPE_LEVEL_HIGH>;
#interrupt-cells = <1>;
msi-controller;
msi-parent = <&pcie0>;
ranges = <0x82000000 0 0xe8000000 0 0xe8000000 0 0x1000000 /* Port 0 MEM */
0x81000000 0 0xe9000000 0 0xe9000000 0 0x10000>; /* Port 0 IO*/
interrupt-map-mask = <0 0 0 7>;
interrupt-map = <0 0 0 1 &pcie_intc 0>,
<0 0 0 2 &pcie_intc 1>,
<0 0 0 3 &pcie_intc 2>,
<0 0 0 4 &pcie_intc 3>;
phys = <&comphy1 0>;
pcie_intc: interrupt-controller {
interrupt-controller;
#interrupt-cells = <1>;
};
};

@ -0,0 +1,71 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/amazon,al-alpine-v3-pcie.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Amazon Annapurna Labs Alpine v3 PCIe Host Bridge
maintainers:
- Jonathan Chocron <jonnyc@amazon.com>
description:
Amazon's Annapurna Labs PCIe Host Controller is based on the Synopsys
DesignWare PCI controller.
allOf:
- $ref: snps,dw-pcie.yaml#
properties:
compatible:
enum:
- amazon,al-alpine-v2-pcie
- amazon,al-alpine-v3-pcie
reg:
items:
- description: PCIe ECAM space
- description: AL proprietary registers
- description: Designware PCIe registers
reg-names:
items:
- const: config
- const: controller
- const: dbi
interrupts:
maxItems: 1
unevaluatedProperties: false
required:
- compatible
- reg
- reg-names
examples:
- |
#include <dt-bindings/interrupt-controller/arm-gic.h>
bus {
#address-cells = <2>;
#size-cells = <2>;
pcie@fb600000 {
compatible = "amazon,al-alpine-v3-pcie";
reg = <0x0 0xfb600000 0x0 0x00100000
0x0 0xfd800000 0x0 0x00010000
0x0 0xfd810000 0x0 0x00001000>;
reg-names = "config", "controller", "dbi";
bus-range = <0 255>;
device_type = "pci";
#address-cells = <3>;
#size-cells = <2>;
#interrupt-cells = <1>;
interrupts = <GIC_SPI 49 IRQ_TYPE_LEVEL_HIGH>;
interrupt-map-mask = <0x00 0 0 7>;
interrupt-map = <0x0000 0 0 1 &gic GIC_SPI 41 IRQ_TYPE_LEVEL_HIGH>; /* INTa */
ranges = <0x02000000 0x0 0xc0010000 0x0 0xc0010000 0x0 0x07ff0000>;
};
};

@ -0,0 +1,84 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/apm,xgene-pcie.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: AppliedMicro X-Gene PCIe interface
maintainers:
- Toan Le <toan@os.amperecomputing.com>
allOf:
- $ref: /schemas/pci/pci-host-bridge.yaml#
properties:
compatible:
oneOf:
- items:
- const: apm,xgene-storm-pcie
- const: apm,xgene-pcie
- items:
- const: apm,xgene-pcie
reg:
items:
- description: Controller configuration registers
- description: PCI configuration space registers
reg-names:
items:
- const: csr
- const: cfg
clocks:
maxItems: 1
clock-names:
items:
- const: pcie
dma-coherent: true
msi-parent:
maxItems: 1
required:
- compatible
- reg
- reg-names
- '#interrupt-cells'
- interrupt-map-mask
- interrupt-map
- clocks
unevaluatedProperties: false
examples:
- |
bus {
#address-cells = <2>;
#size-cells = <2>;
pcie@1f2b0000 {
compatible = "apm,xgene-storm-pcie", "apm,xgene-pcie";
device_type = "pci";
#interrupt-cells = <1>;
#size-cells = <2>;
#address-cells = <3>;
reg = <0x00 0x1f2b0000 0x0 0x00010000>, /* Controller registers */
<0xe0 0xd0000000 0x0 0x00040000>; /* PCI config space */
reg-names = "csr", "cfg";
ranges = <0x01000000 0x00 0x00000000 0xe0 0x10000000 0x00 0x00010000>, /* io */
<0x02000000 0x00 0x80000000 0xe1 0x80000000 0x00 0x80000000>; /* mem */
dma-ranges = <0x42000000 0x80 0x00000000 0x80 0x00000000 0x00 0x80000000>,
<0x42000000 0x00 0x00000000 0x00 0x00000000 0x80 0x00000000>;
interrupt-map-mask = <0x0 0x0 0x0 0x7>;
interrupt-map = <0x0 0x0 0x0 0x1 &gic 0x0 0xc2 0x1>,
<0x0 0x0 0x0 0x2 &gic 0x0 0xc3 0x1>,
<0x0 0x0 0x0 0x3 &gic 0x0 0xc4 0x1>,
<0x0 0x0 0x0 0x4 &gic 0x0 0xc5 0x1>;
dma-coherent;
clocks = <&pcie0clk 0>;
};
};

@ -1,50 +0,0 @@
* Axis ARTPEC-6 PCIe interface
This PCIe host controller is based on the Synopsys DesignWare PCIe IP
and thus inherits all the common properties defined in snps,dw-pcie.yaml.
Required properties:
- compatible: "axis,artpec6-pcie", "snps,dw-pcie" for ARTPEC-6 in RC mode;
"axis,artpec6-pcie-ep", "snps,dw-pcie" for ARTPEC-6 in EP mode;
"axis,artpec7-pcie", "snps,dw-pcie" for ARTPEC-7 in RC mode;
"axis,artpec7-pcie-ep", "snps,dw-pcie" for ARTPEC-7 in EP mode;
- reg: base addresses and lengths of the PCIe controller (DBI),
the PHY controller, and configuration address space.
- reg-names: Must include the following entries:
- "dbi"
- "phy"
- "config"
- interrupts: A list of interrupt outputs of the controller. Must contain an
entry for each entry in the interrupt-names property.
- interrupt-names: Must include the following entries:
- "msi": The interrupt that is asserted when an MSI is received
- axis,syscon-pcie: A phandle pointing to the ARTPEC-6 system controller,
used to enable and control the Synopsys IP.
Example:
pcie@f8050000 {
compatible = "axis,artpec6-pcie", "snps,dw-pcie";
reg = <0xf8050000 0x2000
0xf8040000 0x1000
0xc0000000 0x2000>;
reg-names = "dbi", "phy", "config";
#address-cells = <3>;
#size-cells = <2>;
device_type = "pci";
/* downstream I/O */
ranges = <0x81000000 0 0 0xc0002000 0 0x00010000
/* non-prefetchable memory */
0x82000000 0 0xc0012000 0xc0012000 0 0x1ffee000>;
num-lanes = <2>;
bus-range = <0x00 0xff>;
interrupts = <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "msi";
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 0x7>;
interrupt-map = <0 0 0 1 &intc GIC_SPI 144 IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 2 &intc GIC_SPI 145 IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 3 &intc GIC_SPI 146 IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 4 &intc GIC_SPI 147 IRQ_TYPE_LEVEL_HIGH>;
axis,syscon-pcie = <&syscon>;
};

@ -0,0 +1,118 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
# Copyright 2025 Axis AB
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/axis,artpec6-pcie.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Axis ARTPEC-6 PCIe host controller
maintainers:
- Jesper Nilsson <jesper.nilsson@axis.com>
description:
This PCIe host controller is based on the Synopsys DesignWare PCIe IP.
select:
properties:
compatible:
contains:
enum:
- axis,artpec6-pcie
- axis,artpec6-pcie-ep
- axis,artpec7-pcie
- axis,artpec7-pcie-ep
required:
- compatible
properties:
compatible:
items:
- enum:
- axis,artpec6-pcie
- axis,artpec6-pcie-ep
- axis,artpec7-pcie
- axis,artpec7-pcie-ep
- const: snps,dw-pcie
reg:
minItems: 3
maxItems: 4
reg-names:
minItems: 3
maxItems: 4
interrupts:
maxItems: 1
interrupt-names:
items:
- const: msi
axis,syscon-pcie:
$ref: /schemas/types.yaml#/definitions/phandle
description:
System controller phandle used to enable and control the Synopsys IP.
required:
- compatible
- reg
- reg-names
- interrupts
- interrupt-names
- axis,syscon-pcie
oneOf:
- $ref: snps,dw-pcie.yaml#
properties:
reg:
maxItems: 3
reg-names:
items:
- const: dbi
- const: phy
- const: config
- $ref: snps,dw-pcie-ep.yaml#
properties:
reg:
minItems: 4
reg-names:
items:
- const: dbi
- const: dbi2
- const: phy
- const: addr_space
unevaluatedProperties: false
examples:
- |
#include <dt-bindings/interrupt-controller/arm-gic.h>
pcie@f8050000 {
compatible = "axis,artpec6-pcie", "snps,dw-pcie";
device_type = "pci";
reg = <0xf8050000 0x2000
0xf8040000 0x1000
0xc0000000 0x2000>;
reg-names = "dbi", "phy", "config";
#address-cells = <3>;
#size-cells = <2>;
ranges = <0x81000000 0 0 0xc0002000 0 0x00010000>,
<0x82000000 0 0xc0012000 0xc0012000 0 0x1ffee000>;
num-lanes = <2>;
bus-range = <0x00 0xff>;
interrupts = <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "msi";
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 0x7>;
interrupt-map = <0 0 0 1 &intc GIC_SPI 144 IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 2 &intc GIC_SPI 145 IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 3 &intc GIC_SPI 146 IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 4 &intc GIC_SPI 147 IRQ_TYPE_LEVEL_HIGH>;
axis,syscon-pcie = <&syscon>;
};

@ -107,6 +107,10 @@ properties:
- const: bridge
- const: swinit
num-lanes:
default: 1
maximum: 4
required:
- compatible
- reg

@ -0,0 +1,99 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/marvell,armada-3700-pcie.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Marvell Armada 3700 (Aardvark) PCIe Controller
maintainers:
- Thomas Petazzoni <thomas.petazzoni@bootlin.com>
- Pali Rohár <pali@kernel.org>
allOf:
- $ref: /schemas/pci/pci-host-bridge.yaml#
properties:
compatible:
const: marvell,armada-3700-pcie
reg:
maxItems: 1
clocks:
maxItems: 1
interrupts:
maxItems: 1
msi-controller: true
msi-parent:
maxItems: 1
phys:
maxItems: 1
reset-gpios:
description: PCIe reset GPIO signals.
interrupt-controller:
type: object
additionalProperties: false
properties:
interrupt-controller: true
'#interrupt-cells':
const: 1
required:
- interrupt-controller
- '#interrupt-cells'
required:
- compatible
- reg
- interrupts
- '#interrupt-cells'
unevaluatedProperties: false
examples:
- |
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/gpio/gpio.h>
bus {
#address-cells = <2>;
#size-cells = <2>;
pcie@d0070000 {
compatible = "marvell,armada-3700-pcie";
device_type = "pci";
reg = <0 0xd0070000 0 0x20000>;
#address-cells = <3>;
#size-cells = <2>;
bus-range = <0x00 0xff>;
interrupts = <GIC_SPI 29 IRQ_TYPE_LEVEL_HIGH>;
msi-controller;
msi-parent = <&pcie0>;
ranges = <0x82000000 0 0xe8000000 0 0xe8000000 0 0x1000000>,
<0x81000000 0 0xe9000000 0 0xe9000000 0 0x10000>;
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 7>;
interrupt-map = <0 0 0 1 &pcie_intc 0>,
<0 0 0 2 &pcie_intc 1>,
<0 0 0 3 &pcie_intc 2>,
<0 0 0 4 &pcie_intc 3>;
phys = <&comphy1 0>;
max-link-speed = <2>;
reset-gpios = <&gpio1 15 GPIO_ACTIVE_LOW>;
pcie_intc: interrupt-controller {
interrupt-controller;
#interrupt-cells = <1>;
};
};
};

@ -51,7 +51,7 @@ properties:
max-link-speed:
$ref: /schemas/types.yaml#/definitions/uint32
enum: [ 1, 2, 3, 4 ]
enum: [ 1, 2, 3, 4, 5, 6 ]
msi-map:
description: |

@ -1,46 +0,0 @@
* Amazon Annapurna Labs PCIe host bridge
Amazon's Annapurna Labs PCIe Host Controller is based on the Synopsys DesignWare
PCI core. It inherits common properties defined in
Documentation/devicetree/bindings/pci/snps,dw-pcie.yaml.
Properties of the host controller node that differ from it are:
- compatible:
Usage: required
Value type: <stringlist>
Definition: Value should contain
- "amazon,al-alpine-v2-pcie" for alpine_v2
- "amazon,al-alpine-v3-pcie" for alpine_v3
- reg:
Usage: required
Value type: <prop-encoded-array>
Definition: Register ranges as listed in the reg-names property
- reg-names:
Usage: required
Value type: <stringlist>
Definition: Must include the following entries
- "config" PCIe ECAM space
- "controller" AL proprietary registers
- "dbi" Designware PCIe registers
Example:
pcie-external0: pcie@fb600000 {
compatible = "amazon,al-alpine-v3-pcie";
reg = <0x0 0xfb600000 0x0 0x00100000
0x0 0xfd800000 0x0 0x00010000
0x0 0xfd810000 0x0 0x00001000>;
reg-names = "config", "controller", "dbi";
bus-range = <0 255>;
device_type = "pci";
#address-cells = <3>;
#size-cells = <2>;
#interrupt-cells = <1>;
interrupts = <GIC_SPI 49 IRQ_TYPE_LEVEL_HIGH>;
interrupt-map-mask = <0x00 0 0 7>;
interrupt-map = <0x0000 0 0 1 &gic GIC_SPI 41 IRQ_TYPE_LEVEL_HIGH>; /* INTa */
ranges = <0x02000000 0x0 0xc0010000 0x0 0xc0010000 0x0 0x07ff0000>;
};

@ -51,10 +51,18 @@ properties:
phys:
maxItems: 1
deprecated: true
description:
This property is deprecated, instead of referencing this property from
the host bridge node, use the property from the PCIe root port node.
phy-names:
items:
- const: pciephy
deprecated: true
description:
Phandle to the register map node. This property is deprecated, and not
required to add in the root port also, as the root port has only one phy.
power-domains:
maxItems: 1
@ -71,12 +79,18 @@ properties:
maxItems: 12
perst-gpios:
description: GPIO controlled connection to PERST# signal
description: GPIO controlled connection to PERST# signal. This property is
deprecated, instead of referencing this property from the host bridge node,
use the reset-gpios property from the root port node.
maxItems: 1
deprecated: true
wake-gpios:
description: GPIO controlled connection to WAKE# signal
description: GPIO controlled connection to WAKE# signal. This property is
deprecated, instead of referencing this property from the host bridge node,
use the property from the PCIe root port node.
maxItems: 1
deprecated: true
vddpe-3v3-supply:
description: PCIe endpoint power supply
@ -85,6 +99,20 @@ properties:
opp-table:
type: object
patternProperties:
"^pcie@":
type: object
$ref: /schemas/pci/pci-pci-bridge.yaml#
properties:
reg:
maxItems: 1
phys:
maxItems: 1
unevaluatedProperties: false
required:
- reg
- reg-names

@ -0,0 +1,122 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/qcom,pcie-sa8255p.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Qualcomm SA8255p based firmware managed and ECAM compliant PCIe Root Complex
maintainers:
- Bjorn Andersson <andersson@kernel.org>
- Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
description:
Qualcomm SA8255p SoC PCIe root complex controller is based on the Synopsys
DesignWare PCIe IP which is managed by firmware, and configured in ECAM mode.
properties:
compatible:
const: qcom,pcie-sa8255p
reg:
description:
The base address and size of the ECAM area for accessing PCI
Configuration Space, as accessed from the parent bus. The base
address corresponds to the first bus in the "bus-range" property. If
no "bus-range" is specified, this will be bus 0 (the default).
maxItems: 1
ranges:
description:
As described in IEEE Std 1275-1994, but must provide at least a
definition of non-prefetchable memory. A prefetchable memory range
may also be provided.
minItems: 1
maxItems: 2
interrupts:
minItems: 8
maxItems: 8
interrupt-names:
items:
- const: msi0
- const: msi1
- const: msi2
- const: msi3
- const: msi4
- const: msi5
- const: msi6
- const: msi7
power-domains:
maxItems: 1
dma-coherent: true
iommu-map: true
required:
- compatible
- reg
- ranges
- power-domains
- interrupts
- interrupt-names
allOf:
- $ref: /schemas/pci/pci-host-bridge.yaml#
unevaluatedProperties: false
examples:
- |
#include <dt-bindings/interrupt-controller/arm-gic.h>
soc {
#address-cells = <2>;
#size-cells = <2>;
pci@1c00000 {
compatible = "qcom,pcie-sa8255p";
reg = <0x4 0x00000000 0 0x10000000>;
device_type = "pci";
#address-cells = <3>;
#size-cells = <2>;
ranges = <0x02000000 0x0 0x40100000 0x0 0x40100000 0x0 0x1ff00000>,
<0x43000000 0x4 0x10100000 0x4 0x10100000 0x0 0x40000000>;
bus-range = <0x00 0xff>;
dma-coherent;
linux,pci-domain = <0>;
power-domains = <&scmi5_pd 0>;
iommu-map = <0x0 &pcie_smmu 0x0000 0x1>,
<0x100 &pcie_smmu 0x0001 0x1>;
interrupt-parent = <&intc>;
interrupts = <GIC_SPI 307 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 308 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 309 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 312 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 313 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 314 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 374 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 375 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "msi0", "msi1", "msi2", "msi3",
"msi4", "msi5", "msi6", "msi7";
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 0x7>;
interrupt-map = <0 0 0 1 &intc GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 2 &intc GIC_SPI 149 IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 3 &intc GIC_SPI 150 IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 4 &intc GIC_SPI 151 IRQ_TYPE_LEVEL_HIGH>;
pcie@0 {
device_type = "pci";
reg = <0x0 0x0 0x0 0x0 0x0>;
bus-range = <0x01 0xff>;
#address-cells = <3>;
#size-cells = <2>;
ranges;
};
};
};

@ -16,7 +16,12 @@ description:
properties:
compatible:
const: qcom,pcie-sa8775p
oneOf:
- const: qcom,pcie-sa8775p
- items:
- enum:
- qcom,pcie-qcs8300
- const: qcom,pcie-sa8775p
reg:
minItems: 6
@ -61,11 +66,14 @@ properties:
- const: global
resets:
maxItems: 1
items:
- description: PCIe controller reset
- description: PCIe link down reset
reset-names:
items:
- const: pci
- const: link_down
required:
- interconnects
@ -161,8 +169,10 @@ examples:
power-domains = <&gcc PCIE_0_GDSC>;
resets = <&gcc GCC_PCIE_0_BCR>;
reset-names = "pci";
resets = <&gcc GCC_PCIE_0_BCR>,
<&gcc GCC_PCIE_0_LINK_DOWN_BCR>;
reset-names = "pci",
"link_down";
perst-gpios = <&tlmm 2 GPIO_ACTIVE_LOW>;
wake-gpios = <&tlmm 0 GPIO_ACTIVE_HIGH>;

@ -165,9 +165,6 @@ examples:
iommu-map = <0x0 &apps_smmu 0x1c80 0x1>,
<0x100 &apps_smmu 0x1c81 0x1>;
phys = <&pcie1_phy>;
phy-names = "pciephy";
pinctrl-names = "default";
pinctrl-0 = <&pcie1_clkreq_n>;
@ -176,7 +173,18 @@ examples:
resets = <&gcc GCC_PCIE_1_BCR>;
reset-names = "pci";
perst-gpios = <&tlmm 2 GPIO_ACTIVE_LOW>;
vddpe-3v3-supply = <&pp3300_ssd>;
pcie1_port0: pcie@0 {
device_type = "pci";
reg = <0x0 0x0 0x0 0x0 0x0>;
bus-range = <0x01 0xff>;
#address-cells = <3>;
#size-cells = <2>;
ranges;
phys = <&pcie1_phy>;
reset-gpios = <&tlmm 2 GPIO_ACTIVE_LOW>;
};
};
};

@ -33,8 +33,8 @@ properties:
- const: mhi # MHI registers
clocks:
minItems: 8
maxItems: 8
minItems: 6
maxItems: 6
clock-names:
items:
@ -44,8 +44,6 @@ properties:
- const: bus_master # Master AXI clock
- const: bus_slave # Slave AXI clock
- const: slave_q2a # Slave Q2A clock
- const: ref # REFERENCE clock
- const: tbu # PCIe TBU clock
interrupts:
minItems: 8
@ -117,17 +115,13 @@ examples:
<&gcc GCC_PCIE_0_CFG_AHB_CLK>,
<&gcc GCC_PCIE_0_MSTR_AXI_CLK>,
<&gcc GCC_PCIE_0_SLV_AXI_CLK>,
<&gcc GCC_PCIE_0_SLV_Q2A_AXI_CLK>,
<&gcc GCC_PCIE_0_CLKREF_CLK>,
<&gcc GCC_AGGRE_NOC_PCIE_TBU_CLK>;
<&gcc GCC_PCIE_0_SLV_Q2A_AXI_CLK>;
clock-names = "pipe",
"aux",
"cfg",
"bus_master",
"bus_slave",
"slave_q2a",
"ref",
"tbu";
"slave_q2a";
dma-coherent;

@ -16,7 +16,12 @@ description:
properties:
compatible:
const: qcom,pcie-sm8150
oneOf:
- const: qcom,pcie-sm8150
- items:
- enum:
- qcom,pcie-qcs615
- const: qcom,pcie-sm8150
reg:
minItems: 5
@ -33,8 +38,8 @@ properties:
- const: mhi # MHI registers
clocks:
minItems: 8
maxItems: 8
minItems: 6
maxItems: 6
clock-names:
items:
@ -44,8 +49,6 @@ properties:
- const: bus_master # Master AXI clock
- const: bus_slave # Slave AXI clock
- const: slave_q2a # Slave Q2A clock
- const: tbu # PCIe TBU clock
- const: ref # REFERENCE clock
interrupts:
minItems: 8
@ -111,17 +114,13 @@ examples:
<&gcc GCC_PCIE_0_CFG_AHB_CLK>,
<&gcc GCC_PCIE_0_MSTR_AXI_CLK>,
<&gcc GCC_PCIE_0_SLV_AXI_CLK>,
<&gcc GCC_PCIE_0_SLV_Q2A_AXI_CLK>,
<&gcc GCC_AGGRE_NOC_PCIE_TBU_CLK>,
<&rpmhcc RPMH_CXO_CLK>;
<&gcc GCC_PCIE_0_SLV_Q2A_AXI_CLK>;
clock-names = "pipe",
"aux",
"cfg",
"bus_master",
"bus_slave",
"slave_q2a",
"tbu",
"ref";
"slave_q2a";
interrupts = <GIC_SPI 141 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 142 IRQ_TYPE_LEVEL_HIGH>,

@ -108,7 +108,7 @@ properties:
- description: See native 'dbi' CSR region for details.
enum: [ ctrl ]
- description: See native 'elbi/app' CSR region for details.
enum: [ apb, mgmt, link, ulreg, appl ]
enum: [ apb, mgmt, link, ulreg, appl, controller ]
- description: See native 'atu' CSR region for details.
enum: [ atu_dma ]
- description: Syscon-related CSR regions.

@ -0,0 +1,122 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/sophgo,sg2044-pcie.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: DesignWare based PCIe Root Complex controller on Sophgo SoCs
maintainers:
- Inochi Amaoto <inochiama@gmail.com>
description:
SG2044 SoC PCIe Root Complex controller is based on the Synopsys DesignWare
PCIe IP and thus inherits all the common properties defined in
snps,dw-pcie.yaml.
allOf:
- $ref: /schemas/pci/pci-host-bridge.yaml#
- $ref: /schemas/pci/snps,dw-pcie.yaml#
properties:
compatible:
const: sophgo,sg2044-pcie
reg:
items:
- description: Data Bus Interface (DBI) registers
- description: iATU registers
- description: Config registers
- description: Sophgo designed configuration registers
reg-names:
items:
- const: dbi
- const: atu
- const: config
- const: app
clocks:
items:
- description: core clk
clock-names:
items:
- const: core
interrupt-controller:
description: Interrupt controller node for handling legacy PCI interrupts.
type: object
properties:
"#address-cells":
const: 0
"#interrupt-cells":
const: 1
interrupt-controller: true
interrupts:
items:
- description: combined legacy interrupt
required:
- "#address-cells"
- "#interrupt-cells"
- interrupt-controller
- interrupts
additionalProperties: false
msi-parent: true
ranges:
maxItems: 5
required:
- compatible
- reg
- clocks
unevaluatedProperties: false
examples:
- |
#include <dt-bindings/interrupt-controller/irq.h>
soc {
#address-cells = <2>;
#size-cells = <2>;
pcie@6c00400000 {
compatible = "sophgo,sg2044-pcie";
reg = <0x6c 0x00400000 0x0 0x00001000>,
<0x6c 0x00700000 0x0 0x00004000>,
<0x40 0x00000000 0x0 0x00001000>,
<0x6c 0x00780c00 0x0 0x00000400>;
reg-names = "dbi", "atu", "config", "app";
#address-cells = <3>;
#size-cells = <2>;
bus-range = <0x00 0xff>;
clocks = <&clk 0>;
clock-names = "core";
device_type = "pci";
linux,pci-domain = <0>;
msi-parent = <&msi>;
ranges = <0x01000000 0x0 0x00000000 0x40 0x10000000 0x0 0x00200000>,
<0x42000000 0x0 0x00000000 0x0 0x00000000 0x0 0x04000000>,
<0x02000000 0x0 0x04000000 0x0 0x04000000 0x0 0x04000000>,
<0x43000000 0x42 0x00000000 0x42 0x00000000 0x2 0x00000000>,
<0x03000000 0x41 0x00000000 0x41 0x00000000 0x1 0x00000000>;
interrupt-controller {
#address-cells = <0>;
#interrupt-cells = <1>;
interrupt-controller;
interrupt-parent = <&intc>;
interrupts = <64 IRQ_TYPE_LEVEL_HIGH>;
};
};
};
...

@ -1,14 +0,0 @@
SPEAr13XX PCIe DT detail:
================================
SPEAr13XX uses the Synopsys DesignWare PCIe controller and ST MiPHY as PHY
controller.
Required properties:
- compatible : should be "st,spear1340-pcie", "snps,dw-pcie".
- phys : phandle to PHY node associated with PCIe controller
- phy-names : must be "pcie-phy"
- All other definitions as per generic PCI bindings
Optional properties:
- st,pcie-is-gen1 indicates that forced gen1 initialization is needed.

@ -0,0 +1,45 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/st,spear1340-pcie.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: ST SPEAr1340 PCIe controller
maintainers:
- Pratyush Anand <pratyush.anand@gmail.com>
description:
SPEAr13XX uses the Synopsys DesignWare PCIe controller and ST MiPHY as PHY
controller.
select:
properties:
compatible:
contains:
const: st,spear1340-pcie
required:
- compatible
properties:
compatible:
items:
- const: st,spear1340-pcie
- const: snps,dw-pcie
phys:
maxItems: 1
st,pcie-is-gen1:
type: boolean
description: Indicates forced gen1 initialization is needed.
required:
- compatible
- phys
- phy-names
allOf:
- $ref: snps,dw-pcie.yaml#
unevaluatedProperties: false

@ -1,50 +0,0 @@
* AppliedMicro X-Gene PCIe interface
Required properties:
- device_type: set to "pci"
- compatible: should contain "apm,xgene-pcie" to identify the core.
- reg: A list of physical base address and length for each set of controller
registers. Must contain an entry for each entry in the reg-names
property.
- reg-names: Must include the following entries:
"csr": controller configuration registers.
"cfg": PCIe configuration space registers.
- #address-cells: set to <3>
- #size-cells: set to <2>
- ranges: ranges for the outbound memory, I/O regions.
- dma-ranges: ranges for the inbound memory regions.
- #interrupt-cells: set to <1>
- interrupt-map-mask and interrupt-map: standard PCI properties
to define the mapping of the PCIe interface to interrupt
numbers.
- clocks: from common clock binding: handle to pci clock.
Optional properties:
- status: Either "ok" or "disabled".
- dma-coherent: Present if DMA operations are coherent
Example:
pcie0: pcie@1f2b0000 {
status = "disabled";
device_type = "pci";
compatible = "apm,xgene-storm-pcie", "apm,xgene-pcie";
#interrupt-cells = <1>;
#size-cells = <2>;
#address-cells = <3>;
reg = < 0x00 0x1f2b0000 0x0 0x00010000 /* Controller registers */
0xe0 0xd0000000 0x0 0x00040000>; /* PCI config space */
reg-names = "csr", "cfg";
ranges = <0x01000000 0x00 0x00000000 0xe0 0x10000000 0x00 0x00010000 /* io */
0x02000000 0x00 0x80000000 0xe1 0x80000000 0x00 0x80000000>; /* mem */
dma-ranges = <0x42000000 0x80 0x00000000 0x80 0x00000000 0x00 0x80000000
0x42000000 0x00 0x00000000 0x00 0x00000000 0x80 0x00000000>;
interrupt-map-mask = <0x0 0x0 0x0 0x7>;
interrupt-map = <0x0 0x0 0x0 0x1 &gic 0x0 0xc2 0x1
0x0 0x0 0x0 0x2 &gic 0x0 0xc3 0x1
0x0 0x0 0x0 0x3 &gic 0x0 0xc4 0x1
0x0 0x0 0x0 0x4 &gic 0x0 0xc5 0x1>;
dma-coherent;
clocks = <&pcie0clk 0>;
};

@ -5189,7 +5189,6 @@ F: include/linux/platform_data/brcmnand.h
BROADCOM STB PCIE DRIVER
M: Jim Quinlan <jim2101024@gmail.com>
M: Nicolas Saenz Julienne <nsaenz@kernel.org>
M: Florian Fainelli <florian.fainelli@broadcom.com>
R: Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
L: linux-pci@vger.kernel.org
@ -19208,7 +19207,7 @@ M: Pali Rohár <pali@kernel.org>
L: linux-pci@vger.kernel.org
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: Documentation/devicetree/bindings/pci/aardvark-pci.txt
F: Documentation/devicetree/bindings/pci/marvell,armada-3700-pcie.yaml
F: drivers/pci/controller/pci-aardvark.c
PCI DRIVER FOR ALTERA PCIE IP
@ -19223,7 +19222,7 @@ M: Toan Le <toan@os.amperecomputing.com>
L: linux-pci@vger.kernel.org
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: Documentation/devicetree/bindings/pci/xgene-pci.txt
F: Documentation/devicetree/bindings/pci/apm,xgene-pcie.yaml
F: drivers/pci/controller/pci-xgene.c
PCI DRIVER FOR ARM VERSATILE PLATFORM
@ -19542,7 +19541,7 @@ PCIE DRIVER FOR AMAZON ANNAPURNA LABS
M: Jonathan Chocron <jonnyc@amazon.com>
L: linux-pci@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/pci/pcie-al.txt
F: Documentation/devicetree/bindings/pci/amazon,al-alpine-v3-pcie.yaml
F: drivers/pci/controller/dwc/pcie-al.c
PCIE DRIVER FOR AMLOGIC MESON

@ -437,7 +437,7 @@ find_active_client(struct list_head *head)
*/
bool vga_switcheroo_client_probe_defer(struct pci_dev *pdev)
{
if ((pdev->class >> 16) == PCI_BASE_CLASS_DISPLAY) {
if (pci_is_display(pdev)) {
/*
* apple-gmux is needed on pre-retina MacBook Pro
* to probe the panel if pdev is the inactive GPU.

@ -34,7 +34,7 @@
#define ROOT_SIZE VTD_PAGE_SIZE
#define CONTEXT_SIZE VTD_PAGE_SIZE
#define IS_GFX_DEVICE(pdev) ((pdev->class >> 16) == PCI_BASE_CLASS_DISPLAY)
#define IS_GFX_DEVICE(pdev) pci_is_display(pdev)
#define IS_USB_DEVICE(pdev) ((pdev->class >> 8) == PCI_CLASS_SERIAL_USB)
#define IS_ISA_DEVICE(pdev) ((pdev->class >> 8) == PCI_CLASS_BRIDGE_ISA)
#define IS_AZALIA(pdev) ((pdev)->vendor == 0x8086 && (pdev)->device == 0x3a3e)

@ -37,6 +37,8 @@
#define COMMAND_READ BIT(3)
#define COMMAND_WRITE BIT(4)
#define COMMAND_COPY BIT(5)
#define COMMAND_ENABLE_DOORBELL BIT(6)
#define COMMAND_DISABLE_DOORBELL BIT(7)
#define PCI_ENDPOINT_TEST_STATUS 0x8
#define STATUS_READ_SUCCESS BIT(0)
@ -48,6 +50,11 @@
#define STATUS_IRQ_RAISED BIT(6)
#define STATUS_SRC_ADDR_INVALID BIT(7)
#define STATUS_DST_ADDR_INVALID BIT(8)
#define STATUS_DOORBELL_SUCCESS BIT(9)
#define STATUS_DOORBELL_ENABLE_SUCCESS BIT(10)
#define STATUS_DOORBELL_ENABLE_FAIL BIT(11)
#define STATUS_DOORBELL_DISABLE_SUCCESS BIT(12)
#define STATUS_DOORBELL_DISABLE_FAIL BIT(13)
#define PCI_ENDPOINT_TEST_LOWER_SRC_ADDR 0x0c
#define PCI_ENDPOINT_TEST_UPPER_SRC_ADDR 0x10
@ -62,6 +69,7 @@
#define PCI_ENDPOINT_TEST_IRQ_NUMBER 0x28
#define PCI_ENDPOINT_TEST_FLAGS 0x2c
#define FLAG_USE_DMA BIT(0)
#define PCI_ENDPOINT_TEST_CAPS 0x30
@ -70,6 +78,10 @@
#define CAP_MSIX BIT(2)
#define CAP_INTX BIT(3)
#define PCI_ENDPOINT_TEST_DB_BAR 0x34
#define PCI_ENDPOINT_TEST_DB_OFFSET 0x38
#define PCI_ENDPOINT_TEST_DB_DATA 0x3c
#define PCI_DEVICE_ID_TI_AM654 0xb00c
#define PCI_DEVICE_ID_TI_J7200 0xb00f
#define PCI_DEVICE_ID_TI_AM64 0xb010
@ -100,6 +112,7 @@ enum pci_barno {
BAR_3,
BAR_4,
BAR_5,
NO_BAR = -1,
};
struct pci_endpoint_test {
@ -841,6 +854,73 @@ static int pci_endpoint_test_set_irq(struct pci_endpoint_test *test,
return 0;
}
static int pci_endpoint_test_doorbell(struct pci_endpoint_test *test)
{
struct pci_dev *pdev = test->pdev;
struct device *dev = &pdev->dev;
int irq_type = test->irq_type;
enum pci_barno bar;
u32 data, status;
u32 addr;
int left;
if (irq_type < PCITEST_IRQ_TYPE_INTX ||
irq_type > PCITEST_IRQ_TYPE_MSIX) {
dev_err(dev, "Invalid IRQ type\n");
return -EINVAL;
}
pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_IRQ_TYPE, irq_type);
pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_IRQ_NUMBER, 1);
pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_COMMAND,
COMMAND_ENABLE_DOORBELL);
left = wait_for_completion_timeout(&test->irq_raised, msecs_to_jiffies(1000));
status = pci_endpoint_test_readl(test, PCI_ENDPOINT_TEST_STATUS);
if (!left || (status & STATUS_DOORBELL_ENABLE_FAIL)) {
dev_err(dev, "Failed to enable doorbell\n");
return -EINVAL;
}
data = pci_endpoint_test_readl(test, PCI_ENDPOINT_TEST_DB_DATA);
addr = pci_endpoint_test_readl(test, PCI_ENDPOINT_TEST_DB_OFFSET);
bar = pci_endpoint_test_readl(test, PCI_ENDPOINT_TEST_DB_BAR);
pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_IRQ_TYPE, irq_type);
pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_IRQ_NUMBER, 1);
pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_STATUS, 0);
bar = pci_endpoint_test_readl(test, PCI_ENDPOINT_TEST_DB_BAR);
writel(data, test->bar[bar] + addr);
left = wait_for_completion_timeout(&test->irq_raised, msecs_to_jiffies(1000));
status = pci_endpoint_test_readl(test, PCI_ENDPOINT_TEST_STATUS);
if (!left || !(status & STATUS_DOORBELL_SUCCESS))
dev_err(dev, "Failed to trigger doorbell in endpoint\n");
pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_COMMAND,
COMMAND_DISABLE_DOORBELL);
wait_for_completion_timeout(&test->irq_raised, msecs_to_jiffies(1000));
status |= pci_endpoint_test_readl(test, PCI_ENDPOINT_TEST_STATUS);
if (status & STATUS_DOORBELL_DISABLE_FAIL) {
dev_err(dev, "Failed to disable doorbell\n");
return -EINVAL;
}
if (!(status & STATUS_DOORBELL_SUCCESS))
return -EINVAL;
return 0;
}
static long pci_endpoint_test_ioctl(struct file *file, unsigned int cmd,
unsigned long arg)
{
@ -891,6 +971,9 @@ static long pci_endpoint_test_ioctl(struct file *file, unsigned int cmd,
case PCITEST_CLEAR_IRQ:
ret = pci_endpoint_test_clear_irq(test);
break;
case PCITEST_DOORBELL:
ret = pci_endpoint_test_doorbell(test);
break;
}
ret:

@ -341,7 +341,6 @@ void pci_bus_add_device(struct pci_dev *dev)
{
struct device_node *dn = dev->dev.of_node;
struct platform_device *pdev;
int retval;
/*
* Can not put in pci_device_add yet because resources
@ -372,9 +371,7 @@ void pci_bus_add_device(struct pci_dev *dev)
if (!dn || of_device_is_available(dn))
pci_dev_allow_binding(dev);
retval = device_attach(&dev->dev);
if (retval < 0 && retval != -EPROBE_DEFER)
pci_warn(dev, "device attach failed (%d)\n", retval);
device_initial_probe(&dev->dev);
pci_dev_assign_added(dev);
}

@ -13,6 +13,7 @@ config PCI_AARDVARK
depends on OF
depends on PCI_MSI
select PCI_BRIDGE_EMUL
select IRQ_MSI_LIB
help
Add support for Aardvark 64bit PCIe Host Controller. This
controller is part of the South Bridge of the Marvell Armada
@ -29,6 +30,7 @@ config PCIE_ALTERA_MSI
tristate "Altera PCIe MSI feature"
depends on PCIE_ALTERA
depends on PCI_MSI
select IRQ_MSI_LIB
help
Say Y here if you want PCIe MSI support for the Altera FPGA.
This MSI driver supports Altera MSI to GIC controller IP.
@ -62,6 +64,7 @@ config PCIE_BRCMSTB
BMIPS_GENERIC || COMPILE_TEST
depends on OF
depends on PCI_MSI
select IRQ_MSI_LIB
default ARCH_BRCMSTB || BMIPS_GENERIC
help
Say Y here to enable PCIe host controller support for
@ -98,6 +101,7 @@ config PCIE_IPROC_MSI
bool "Broadcom iProc PCIe MSI support"
depends on PCIE_IPROC_PLATFORM || PCIE_IPROC_BCMA
depends on PCI_MSI
select IRQ_MSI_LIB
default ARCH_BCM_IPROC
help
Say Y here if you want to enable MSI support for Broadcom's iProc
@ -152,6 +156,7 @@ config PCI_IXP4XX
config VMD
depends on PCI_MSI && X86_64 && !UML
tristate "Intel Volume Management Device Driver"
select IRQ_MSI_LIB
help
Adds support for the Intel Volume Management Device (VMD). VMD is a
secondary PCI host bridge that allows PCI Express root ports,
@ -191,6 +196,7 @@ config PCIE_MEDIATEK
depends on ARCH_AIROHA || ARCH_MEDIATEK || COMPILE_TEST
depends on OF
depends on PCI_MSI
select IRQ_MSI_LIB
help
Say Y here if you want to enable PCIe controller support on
MediaTek SoCs.
@ -199,6 +205,7 @@ config PCIE_MEDIATEK_GEN3
tristate "MediaTek Gen3 PCIe controller"
depends on ARCH_AIROHA || ARCH_MEDIATEK || COMPILE_TEST
depends on PCI_MSI
select IRQ_MSI_LIB
help
Adds support for PCIe Gen3 MAC controller for MediaTek SoCs.
This PCIe controller is compatible with Gen3, Gen2 and Gen1 speed,
@ -237,6 +244,7 @@ config PCIE_RCAR_HOST
bool "Renesas R-Car PCIe controller (host mode)"
depends on ARCH_RENESAS || COMPILE_TEST
depends on PCI_MSI
select IRQ_MSI_LIB
help
Say Y here if you want PCIe controller support on R-Car SoCs in host
mode.
@ -315,6 +323,7 @@ config PCIE_XILINX
bool "Xilinx AXI PCIe controller"
depends on OF
depends on PCI_MSI
select IRQ_MSI_LIB
help
Say 'Y' here if you want kernel to support the Xilinx AXI PCIe
Host Bridge driver.
@ -324,6 +333,7 @@ config PCIE_XILINX_DMA_PL
depends on ARCH_ZYNQMP || COMPILE_TEST
depends on PCI_MSI
select PCI_HOST_COMMON
select IRQ_MSI_LIB
help
Say 'Y' here if you want kernel support for the Xilinx PL DMA
PCIe host bridge. The controller is a Soft IP which can act as
@ -334,6 +344,7 @@ config PCIE_XILINX_NWL
bool "Xilinx NWL PCIe controller"
depends on ARCH_ZYNQMP || COMPILE_TEST
depends on PCI_MSI
select IRQ_MSI_LIB
help
Say 'Y' here if you want kernel support for Xilinx
NWL PCIe controller. The controller can act as Root Port

@ -353,7 +353,7 @@ static void cdns_pcie_ep_assert_intx(struct cdns_pcie_ep *ep, u8 fn, u8 intx,
}
spin_unlock_irqrestore(&ep->lock, flags);
offset = CDNS_PCIE_NORMAL_MSG_ROUTING(MSG_ROUTING_LOCAL) |
offset = CDNS_PCIE_NORMAL_MSG_ROUTING(PCIE_MSG_TYPE_R_LOCAL) |
CDNS_PCIE_NORMAL_MSG_CODE(msg_code);
writel(0, ep->irq_cpu_addr + offset);
}

@ -250,26 +250,6 @@ struct cdns_pcie_rp_ib_bar {
struct cdns_pcie;
enum cdns_pcie_msg_routing {
/* Route to Root Complex */
MSG_ROUTING_TO_RC,
/* Use Address Routing */
MSG_ROUTING_BY_ADDR,
/* Use ID Routing */
MSG_ROUTING_BY_ID,
/* Route as Broadcast Message from Root Complex */
MSG_ROUTING_BCAST,
/* Local message; terminate at receiver (INTx messages) */
MSG_ROUTING_LOCAL,
/* Gather & route to Root Complex (PME_TO_Ack message) */
MSG_ROUTING_GATHER,
};
struct cdns_pcie_ops {
int (*start_link)(struct cdns_pcie *pcie);
void (*stop_link)(struct cdns_pcie *pcie);

@ -19,6 +19,7 @@ config PCIE_DW_DEBUGFS
config PCIE_DW_HOST
bool
select PCIE_DW
select IRQ_MSI_LIB
config PCIE_DW_EP
bool
@ -296,6 +297,7 @@ config PCIE_QCOM
select PCIE_DW_HOST
select CRC8
select PCIE_QCOM_COMMON
select PCI_HOST_COMMON
help
Say Y here to enable PCIe controller support on Qualcomm SoCs. The
PCIe controller uses the DesignWare core plus Qualcomm-specific
@ -402,6 +404,16 @@ config PCIE_UNIPHIER_EP
Say Y here if you want PCIe endpoint controller support on
UniPhier SoCs. This driver supports Pro5 SoC.
config PCIE_SOPHGO_DW
bool "Sophgo DesignWare PCIe controller (host mode)"
depends on ARCH_SOPHGO || COMPILE_TEST
depends on PCI_MSI
depends on OF
select PCIE_DW_HOST
help
Say Y here if you want PCIe host controller support on
Sophgo SoCs.
config PCIE_SPEAR13XX
bool "STMicroelectronics SPEAr PCIe controller"
depends on ARCH_SPEAR13XX || COMPILE_TEST

View File

@ -20,6 +20,7 @@ obj-$(CONFIG_PCIE_QCOM_EP) += pcie-qcom-ep.o
obj-$(CONFIG_PCIE_ARMADA_8K) += pcie-armada8k.o
obj-$(CONFIG_PCIE_ARTPEC6) += pcie-artpec6.o
obj-$(CONFIG_PCIE_ROCKCHIP_DW) += pcie-dw-rockchip.o
obj-$(CONFIG_PCIE_SOPHGO_DW) += pcie-sophgo.o
obj-$(CONFIG_PCIE_INTEL_GW) += pcie-intel-gw.o
obj-$(CONFIG_PCIE_KEEMBAY) += pcie-keembay.o
obj-$(CONFIG_PCIE_KIRIN) += pcie-kirin.o

@ -860,7 +860,6 @@ static int imx95_pcie_core_reset(struct imx_pcie *imx_pcie, bool assert)
static void imx_pcie_assert_core_reset(struct imx_pcie *imx_pcie)
{
reset_control_assert(imx_pcie->pciephy_reset);
reset_control_assert(imx_pcie->apps_reset);
if (imx_pcie->drvdata->core_reset)
imx_pcie->drvdata->core_reset(imx_pcie, true);
@ -872,7 +871,6 @@ static void imx_pcie_assert_core_reset(struct imx_pcie *imx_pcie)
static int imx_pcie_deassert_core_reset(struct imx_pcie *imx_pcie)
{
reset_control_deassert(imx_pcie->pciephy_reset);
reset_control_deassert(imx_pcie->apps_reset);
if (imx_pcie->drvdata->core_reset)
imx_pcie->drvdata->core_reset(imx_pcie, false);
@ -1063,7 +1061,10 @@ static int imx_pcie_add_lut(struct imx_pcie *imx_pcie, u16 rid, u8 sid)
data1 |= IMX95_PE0_LUT_VLD;
regmap_write(imx_pcie->iomuxc_gpr, IMX95_PE0_LUT_DATA1, data1);
data2 = IMX95_PE0_LUT_MASK; /* Match all bits of RID */
if (imx_pcie->drvdata->mode == DW_PCIE_EP_TYPE)
data2 = 0x7; /* In the EP mode, only 'Device ID' is required */
else
data2 = IMX95_PE0_LUT_MASK; /* Match all bits of RID */
data2 |= FIELD_PREP(IMX95_PE0_LUT_REQID, rid);
regmap_write(imx_pcie->iomuxc_gpr, IMX95_PE0_LUT_DATA2, data2);
@ -1096,18 +1097,14 @@ static void imx_pcie_remove_lut(struct imx_pcie *imx_pcie, u16 rid)
}
}
static int imx_pcie_enable_device(struct pci_host_bridge *bridge,
struct pci_dev *pdev)
static int imx_pcie_add_lut_by_rid(struct imx_pcie *imx_pcie, u32 rid)
{
struct imx_pcie *imx_pcie = to_imx_pcie(to_dw_pcie_from_pp(bridge->sysdata));
u32 sid_i, sid_m, rid = pci_dev_id(pdev);
struct device *dev = imx_pcie->pci->dev;
struct device_node *target;
struct device *dev;
u32 sid_i, sid_m;
int err_i, err_m;
u32 sid = 0;
dev = imx_pcie->pci->dev;
target = NULL;
err_i = of_map_id(dev->of_node, rid, "iommu-map", "iommu-map-mask",
&target, &sid_i);
@ -1182,6 +1179,13 @@ static int imx_pcie_enable_device(struct pci_host_bridge *bridge,
return imx_pcie_add_lut(imx_pcie, rid, sid);
}
static int imx_pcie_enable_device(struct pci_host_bridge *bridge, struct pci_dev *pdev)
{
struct imx_pcie *imx_pcie = to_imx_pcie(to_dw_pcie_from_pp(bridge->sysdata));
return imx_pcie_add_lut_by_rid(imx_pcie, pci_dev_id(pdev));
}
static void imx_pcie_disable_device(struct pci_host_bridge *bridge,
struct pci_dev *pdev)
{
@ -1247,6 +1251,9 @@ static int imx_pcie_host_init(struct dw_pcie_rp *pp)
}
}
/* Make sure that PCIe LTSSM is cleared */
imx_pcie_ltssm_disable(dev);
ret = imx_pcie_deassert_core_reset(imx_pcie);
if (ret < 0) {
dev_err(dev, "pcie deassert core reset failed: %d\n", ret);
@ -1385,6 +1392,8 @@ static const struct pci_epc_features imx8m_pcie_epc_features = {
.msix_capable = false,
.bar[BAR_1] = { .type = BAR_RESERVED, },
.bar[BAR_3] = { .type = BAR_RESERVED, },
.bar[BAR_4] = { .type = BAR_FIXED, .fixed_size = SZ_256, },
.bar[BAR_5] = { .type = BAR_RESERVED, },
.align = SZ_64K,
};
@ -1465,9 +1474,6 @@ static int imx_add_pcie_ep(struct imx_pcie *imx_pcie,
pci_epc_init_notify(ep->epc);
/* Start LTSSM. */
imx_pcie_ltssm_enable(dev);
return 0;
}
@ -1764,6 +1770,12 @@ static int imx_pcie_probe(struct platform_device *pdev)
ret = imx_add_pcie_ep(imx_pcie, pdev);
if (ret < 0)
return ret;
/*
* FIXME: Only single Device (EPF) is supported due to the
* Endpoint framework limitation.
*/
imx_pcie_add_lut_by_rid(imx_pcie, 0);
} else {
pci->pp.use_atu_msg = true;
ret = dw_pcie_host_init(&pci->pp);
@ -1912,7 +1924,7 @@ static const struct imx_pcie_drvdata drvdata[] = {
.mode_mask[0] = IMX6Q_GPR12_DEVICE_TYPE,
.mode_off[1] = IOMUXC_GPR12,
.mode_mask[1] = IMX8MQ_GPR12_PCIE2_CTRL_DEVICE_TYPE,
.epc_features = &imx8m_pcie_epc_features,
.epc_features = &imx8q_pcie_epc_features,
.init_phy = imx8mq_pcie_init_phy,
.enable_ref_clk = imx8mm_pcie_enable_ref_clk,
},

@ -814,14 +814,14 @@ static bool dw_pcie_ptm_context_update_visible(void *drvdata)
{
struct dw_pcie *pci = drvdata;
return (pci->mode == DW_PCIE_EP_TYPE) ? true : false;
return pci->mode == DW_PCIE_EP_TYPE;
}
static bool dw_pcie_ptm_context_valid_visible(void *drvdata)
{
struct dw_pcie *pci = drvdata;
return (pci->mode == DW_PCIE_RC_TYPE) ? true : false;
return pci->mode == DW_PCIE_RC_TYPE;
}
static bool dw_pcie_ptm_local_clock_visible(void *drvdata)
@ -834,38 +834,38 @@ static bool dw_pcie_ptm_master_clock_visible(void *drvdata)
{
struct dw_pcie *pci = drvdata;
return (pci->mode == DW_PCIE_EP_TYPE) ? true : false;
return pci->mode == DW_PCIE_EP_TYPE;
}
static bool dw_pcie_ptm_t1_visible(void *drvdata)
{
struct dw_pcie *pci = drvdata;
return (pci->mode == DW_PCIE_EP_TYPE) ? true : false;
return pci->mode == DW_PCIE_EP_TYPE;
}
static bool dw_pcie_ptm_t2_visible(void *drvdata)
{
struct dw_pcie *pci = drvdata;
return (pci->mode == DW_PCIE_RC_TYPE) ? true : false;
return pci->mode == DW_PCIE_RC_TYPE;
}
static bool dw_pcie_ptm_t3_visible(void *drvdata)
{
struct dw_pcie *pci = drvdata;
return (pci->mode == DW_PCIE_RC_TYPE) ? true : false;
return pci->mode == DW_PCIE_RC_TYPE;
}
static bool dw_pcie_ptm_t4_visible(void *drvdata)
{
struct dw_pcie *pci = drvdata;
return (pci->mode == DW_PCIE_EP_TYPE) ? true : false;
return pci->mode == DW_PCIE_EP_TYPE;
}
const struct pcie_ptm_ops dw_pcie_ptm_ops = {
static const struct pcie_ptm_ops dw_pcie_ptm_ops = {
.check_capability = dw_pcie_ptm_check_capability,
.context_update_write = dw_pcie_ptm_context_update_write,
.context_update_read = dw_pcie_ptm_context_update_read,

@ -10,6 +10,7 @@
#include <linux/iopoll.h>
#include <linux/irqchip/chained_irq.h>
#include <linux/irqchip/irq-msi-lib.h>
#include <linux/irqdomain.h>
#include <linux/msi.h>
#include <linux/of_address.h>
@ -23,35 +24,21 @@
static struct pci_ops dw_pcie_ops;
static struct pci_ops dw_child_pcie_ops;
static void dw_msi_ack_irq(struct irq_data *d)
{
irq_chip_ack_parent(d);
}
#define DW_PCIE_MSI_FLAGS_REQUIRED (MSI_FLAG_USE_DEF_DOM_OPS | \
MSI_FLAG_USE_DEF_CHIP_OPS | \
MSI_FLAG_NO_AFFINITY | \
MSI_FLAG_PCI_MSI_MASK_PARENT)
#define DW_PCIE_MSI_FLAGS_SUPPORTED (MSI_FLAG_MULTI_PCI_MSI | \
MSI_FLAG_PCI_MSIX | \
MSI_GENERIC_FLAGS_MASK)
static void dw_msi_mask_irq(struct irq_data *d)
{
pci_msi_mask_irq(d);
irq_chip_mask_parent(d);
}
static void dw_msi_unmask_irq(struct irq_data *d)
{
pci_msi_unmask_irq(d);
irq_chip_unmask_parent(d);
}
static struct irq_chip dw_pcie_msi_irq_chip = {
.name = "PCI-MSI",
.irq_ack = dw_msi_ack_irq,
.irq_mask = dw_msi_mask_irq,
.irq_unmask = dw_msi_unmask_irq,
};
static struct msi_domain_info dw_pcie_msi_domain_info = {
.flags = MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
MSI_FLAG_NO_AFFINITY | MSI_FLAG_PCI_MSIX |
MSI_FLAG_MULTI_PCI_MSI,
.chip = &dw_pcie_msi_irq_chip,
static const struct msi_parent_ops dw_pcie_msi_parent_ops = {
.required_flags = DW_PCIE_MSI_FLAGS_REQUIRED,
.supported_flags = DW_PCIE_MSI_FLAGS_SUPPORTED,
.bus_select_token = DOMAIN_BUS_PCI_MSI,
.chip_flags = MSI_CHIP_FLAG_SET_ACK,
.prefix = "DW-",
.init_dev_msi_info = msi_lib_init_dev_msi_info,
};
/* MSI int handler */
@ -227,30 +214,23 @@ static const struct irq_domain_ops dw_pcie_msi_domain_ops = {
int dw_pcie_allocate_domains(struct dw_pcie_rp *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct fwnode_handle *fwnode = of_fwnode_handle(pci->dev->of_node);
struct irq_domain_info info = {
.fwnode = dev_fwnode(pci->dev),
.ops = &dw_pcie_msi_domain_ops,
.size = pp->num_vectors,
.host_data = pp,
};
pp->irq_domain = irq_domain_create_linear(fwnode, pp->num_vectors,
&dw_pcie_msi_domain_ops, pp);
pp->irq_domain = msi_create_parent_irq_domain(&info, &dw_pcie_msi_parent_ops);
if (!pp->irq_domain) {
dev_err(pci->dev, "Failed to create IRQ domain\n");
return -ENOMEM;
}
irq_domain_update_bus_token(pp->irq_domain, DOMAIN_BUS_NEXUS);
pp->msi_domain = pci_msi_create_irq_domain(fwnode,
&dw_pcie_msi_domain_info,
pp->irq_domain);
if (!pp->msi_domain) {
dev_err(pci->dev, "Failed to create MSI domain\n");
irq_domain_remove(pp->irq_domain);
return -ENOMEM;
}
return 0;
}
static void dw_pcie_free_msi(struct dw_pcie_rp *pp)
void dw_pcie_free_msi(struct dw_pcie_rp *pp)
{
u32 ctrl;
@ -260,22 +240,36 @@ static void dw_pcie_free_msi(struct dw_pcie_rp *pp)
NULL, NULL);
}
irq_domain_remove(pp->msi_domain);
irq_domain_remove(pp->irq_domain);
}
EXPORT_SYMBOL_GPL(dw_pcie_free_msi);
static void dw_pcie_msi_init(struct dw_pcie_rp *pp)
void dw_pcie_msi_init(struct dw_pcie_rp *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
u64 msi_target = (u64)pp->msi_data;
u32 ctrl, num_ctrls;
if (!pci_msi_enabled() || !pp->has_msi_ctrl)
return;
num_ctrls = pp->num_vectors / MAX_MSI_IRQS_PER_CTRL;
/* Initialize IRQ Status array */
for (ctrl = 0; ctrl < num_ctrls; ctrl++) {
dw_pcie_writel_dbi(pci, PCIE_MSI_INTR0_MASK +
(ctrl * MSI_REG_CTRL_BLOCK_SIZE),
pp->irq_mask[ctrl]);
dw_pcie_writel_dbi(pci, PCIE_MSI_INTR0_ENABLE +
(ctrl * MSI_REG_CTRL_BLOCK_SIZE),
~0);
}
/* Program the msi_data */
dw_pcie_writel_dbi(pci, PCIE_MSI_ADDR_LO, lower_32_bits(msi_target));
dw_pcie_writel_dbi(pci, PCIE_MSI_ADDR_HI, upper_32_bits(msi_target));
}
EXPORT_SYMBOL_GPL(dw_pcie_msi_init);
static int dw_pcie_parse_split_msi_irq(struct dw_pcie_rp *pp)
{
@ -317,7 +311,7 @@ static int dw_pcie_parse_split_msi_irq(struct dw_pcie_rp *pp)
return 0;
}
static int dw_pcie_msi_host_init(struct dw_pcie_rp *pp)
int dw_pcie_msi_host_init(struct dw_pcie_rp *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct device *dev = pci->dev;
@ -391,6 +385,7 @@ static int dw_pcie_msi_host_init(struct dw_pcie_rp *pp)
return 0;
}
EXPORT_SYMBOL_GPL(dw_pcie_msi_host_init);
static void dw_pcie_host_request_msg_tlp_res(struct dw_pcie_rp *pp)
{
@ -909,7 +904,7 @@ static void dw_pcie_config_presets(struct dw_pcie_rp *pp)
int dw_pcie_setup_rc(struct dw_pcie_rp *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
u32 val, ctrl, num_ctrls;
u32 val;
int ret;
/*
@ -920,20 +915,6 @@ int dw_pcie_setup_rc(struct dw_pcie_rp *pp)
dw_pcie_setup(pci);
if (pp->has_msi_ctrl) {
num_ctrls = pp->num_vectors / MAX_MSI_IRQS_PER_CTRL;
/* Initialize IRQ Status array */
for (ctrl = 0; ctrl < num_ctrls; ctrl++) {
dw_pcie_writel_dbi(pci, PCIE_MSI_INTR0_MASK +
(ctrl * MSI_REG_CTRL_BLOCK_SIZE),
pp->irq_mask[ctrl]);
dw_pcie_writel_dbi(pci, PCIE_MSI_INTR0_ENABLE +
(ctrl * MSI_REG_CTRL_BLOCK_SIZE),
~0);
}
}
dw_pcie_msi_init(pp);
/* Setup RC BARs */


@ -702,18 +702,26 @@ int dw_pcie_wait_for_link(struct dw_pcie *pci)
int retries;
/* Check if the link is up or not */
for (retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) {
for (retries = 0; retries < PCIE_LINK_WAIT_MAX_RETRIES; retries++) {
if (dw_pcie_link_up(pci))
break;
msleep(LINK_WAIT_SLEEP_MS);
msleep(PCIE_LINK_WAIT_SLEEP_MS);
}
if (retries >= LINK_WAIT_MAX_RETRIES) {
if (retries >= PCIE_LINK_WAIT_MAX_RETRIES) {
dev_info(pci->dev, "Phy link never came up\n");
return -ETIMEDOUT;
}
/*
* As per PCIe r6.0, sec 6.6.1, for a Downstream Port that supports Link
* speeds greater than 5.0 GT/s, software must wait a minimum of 100 ms
* after Link training completes before sending a Configuration Request.
*/
if (pci->max_link_speed > 2)
msleep(PCIE_RESET_CONFIG_WAIT_MS);
offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
val = dw_pcie_readw_dbi(pci, offset + PCI_EXP_LNKSTA);


@ -62,10 +62,6 @@
#define dw_pcie_cap_set(_pci, _cap) \
set_bit(DW_PCIE_CAP_ ## _cap, &(_pci)->caps)
/* Parameters for the waiting for link up routine */
#define LINK_WAIT_MAX_RETRIES 10
#define LINK_WAIT_SLEEP_MS 90
/* Parameters for the waiting for iATU enabled routine */
#define LINK_WAIT_MAX_IATU_RETRIES 5
#define LINK_WAIT_IATU 9
@ -417,7 +413,6 @@ struct dw_pcie_rp {
const struct dw_pcie_host_ops *ops;
int msi_irq[MAX_MSI_CTRLS];
struct irq_domain *irq_domain;
struct irq_domain *msi_domain;
dma_addr_t msi_data;
struct irq_chip *msi_irq_chip;
u32 num_vectors;
@ -759,6 +754,9 @@ static inline enum dw_pcie_ltssm dw_pcie_get_ltssm(struct dw_pcie *pci)
int dw_pcie_suspend_noirq(struct dw_pcie *pci);
int dw_pcie_resume_noirq(struct dw_pcie *pci);
irqreturn_t dw_handle_msi_irq(struct dw_pcie_rp *pp);
void dw_pcie_msi_init(struct dw_pcie_rp *pp);
int dw_pcie_msi_host_init(struct dw_pcie_rp *pp);
void dw_pcie_free_msi(struct dw_pcie_rp *pp);
int dw_pcie_setup_rc(struct dw_pcie_rp *pp);
int dw_pcie_host_init(struct dw_pcie_rp *pp);
void dw_pcie_host_deinit(struct dw_pcie_rp *pp);
@ -781,6 +779,17 @@ static inline irqreturn_t dw_handle_msi_irq(struct dw_pcie_rp *pp)
return IRQ_NONE;
}
static inline void dw_pcie_msi_init(struct dw_pcie_rp *pp)
{ }
static inline int dw_pcie_msi_host_init(struct dw_pcie_rp *pp)
{
return -ENODEV;
}
static inline void dw_pcie_free_msi(struct dw_pcie_rp *pp)
{ }
static inline int dw_pcie_setup_rc(struct dw_pcie_rp *pp)
{
return 0;


@ -58,6 +58,8 @@
/* Hot Reset Control Register */
#define PCIE_CLIENT_HOT_RESET_CTRL 0x180
#define PCIE_LTSSM_APP_DLY2_EN BIT(1)
#define PCIE_LTSSM_APP_DLY2_DONE BIT(3)
#define PCIE_LTSSM_ENABLE_ENHANCE BIT(4)
/* LTSSM Status Register */
@ -458,6 +460,7 @@ static irqreturn_t rockchip_pcie_rc_sys_irq_thread(int irq, void *arg)
if (reg & PCIE_RDLH_LINK_UP_CHGED) {
if (rockchip_pcie_link_up(pci)) {
msleep(PCIE_RESET_CONFIG_WAIT_MS);
dev_dbg(dev, "Received Link up event. Starting enumeration!\n");
/* Rescan the bus to enumerate endpoint devices */
pci_lock_rescan_remove();
@ -474,7 +477,7 @@ static irqreturn_t rockchip_pcie_ep_sys_irq_thread(int irq, void *arg)
struct rockchip_pcie *rockchip = arg;
struct dw_pcie *pci = &rockchip->pci;
struct device *dev = pci->dev;
u32 reg;
u32 reg, val;
reg = rockchip_pcie_readl_apb(rockchip, PCIE_CLIENT_INTR_STATUS_MISC);
rockchip_pcie_writel_apb(rockchip, reg, PCIE_CLIENT_INTR_STATUS_MISC);
@ -485,6 +488,10 @@ static irqreturn_t rockchip_pcie_ep_sys_irq_thread(int irq, void *arg)
if (reg & PCIE_LINK_REQ_RST_NOT_INT) {
dev_dbg(dev, "hot reset or link-down reset\n");
dw_pcie_ep_linkdown(&pci->ep);
/* Stop delaying link training. */
val = HIWORD_UPDATE_BIT(PCIE_LTSSM_APP_DLY2_DONE);
rockchip_pcie_writel_apb(rockchip, val,
PCIE_CLIENT_HOT_RESET_CTRL);
}
if (reg & PCIE_RDLH_LINK_UP_CHGED) {
@ -566,8 +573,11 @@ static int rockchip_pcie_configure_ep(struct platform_device *pdev,
return ret;
}
/* LTSSM enable control mode */
val = HIWORD_UPDATE_BIT(PCIE_LTSSM_ENABLE_ENHANCE);
/*
* LTSSM enable control mode, and automatically delay link training on
* hot reset/link-down reset.
*/
val = HIWORD_UPDATE_BIT(PCIE_LTSSM_ENABLE_ENHANCE | PCIE_LTSSM_APP_DLY2_EN);
rockchip_pcie_writel_apb(rockchip, val, PCIE_CLIENT_HOT_RESET_CTRL);
rockchip_pcie_writel_apb(rockchip, PCIE_CLIENT_EP_MODE,


@ -21,7 +21,9 @@
#include <linux/limits.h>
#include <linux/init.h>
#include <linux/of.h>
#include <linux/of_pci.h>
#include <linux/pci.h>
#include <linux/pci-ecam.h>
#include <linux/pm_opp.h>
#include <linux/pm_runtime.h>
#include <linux/platform_device.h>
@ -34,6 +36,7 @@
#include <linux/units.h>
#include "../../pci.h"
#include "../pci-host-common.h"
#include "pcie-designware.h"
#include "pcie-qcom-common.h"
@ -255,13 +258,21 @@ struct qcom_pcie_ops {
* @ops: qcom PCIe ops structure
* @override_no_snoop: Override NO_SNOOP attribute in TLP to enable cache
* snooping
* @firmware_managed: Set if the Root Complex is firmware managed
*/
struct qcom_pcie_cfg {
const struct qcom_pcie_ops *ops;
bool override_no_snoop;
bool firmware_managed;
bool no_l0s;
};
struct qcom_pcie_port {
struct list_head list;
struct gpio_desc *reset;
struct phy *phy;
};
struct qcom_pcie {
struct dw_pcie *pci;
void __iomem *parf; /* DT parf */
@ -274,24 +285,37 @@ struct qcom_pcie {
struct icc_path *icc_cpu;
const struct qcom_pcie_cfg *cfg;
struct dentry *debugfs;
struct list_head ports;
bool suspended;
bool use_pm_opp;
};
#define to_qcom_pcie(x) dev_get_drvdata((x)->dev)
static void qcom_perst_assert(struct qcom_pcie *pcie, bool assert)
{
struct qcom_pcie_port *port;
int val = assert ? 1 : 0;
if (list_empty(&pcie->ports))
gpiod_set_value_cansleep(pcie->reset, val);
else
list_for_each_entry(port, &pcie->ports, list)
gpiod_set_value_cansleep(port->reset, val);
usleep_range(PERST_DELAY_US, PERST_DELAY_US + 500);
}
static void qcom_ep_reset_assert(struct qcom_pcie *pcie)
{
gpiod_set_value_cansleep(pcie->reset, 1);
usleep_range(PERST_DELAY_US, PERST_DELAY_US + 500);
qcom_perst_assert(pcie, true);
}
static void qcom_ep_reset_deassert(struct qcom_pcie *pcie)
{
/* Ensure that PERST has been asserted for at least 100 ms */
msleep(PCIE_T_PVPERL_MS);
gpiod_set_value_cansleep(pcie->reset, 0);
usleep_range(PERST_DELAY_US, PERST_DELAY_US + 500);
qcom_perst_assert(pcie, false);
}
static int qcom_pcie_start_link(struct dw_pcie *pci)
@ -1229,6 +1253,59 @@ static bool qcom_pcie_link_up(struct dw_pcie *pci)
return val & PCI_EXP_LNKSTA_DLLLA;
}
static void qcom_pcie_phy_exit(struct qcom_pcie *pcie)
{
struct qcom_pcie_port *port;
if (list_empty(&pcie->ports))
phy_exit(pcie->phy);
else
list_for_each_entry(port, &pcie->ports, list)
phy_exit(port->phy);
}
static void qcom_pcie_phy_power_off(struct qcom_pcie *pcie)
{
struct qcom_pcie_port *port;
if (list_empty(&pcie->ports)) {
phy_power_off(pcie->phy);
} else {
list_for_each_entry(port, &pcie->ports, list)
phy_power_off(port->phy);
}
}
static int qcom_pcie_phy_power_on(struct qcom_pcie *pcie)
{
struct qcom_pcie_port *port;
int ret = 0;
if (list_empty(&pcie->ports)) {
ret = phy_set_mode_ext(pcie->phy, PHY_MODE_PCIE, PHY_MODE_PCIE_RC);
if (ret)
return ret;
ret = phy_power_on(pcie->phy);
if (ret)
return ret;
} else {
list_for_each_entry(port, &pcie->ports, list) {
ret = phy_set_mode_ext(port->phy, PHY_MODE_PCIE, PHY_MODE_PCIE_RC);
if (ret)
return ret;
ret = phy_power_on(port->phy);
if (ret) {
qcom_pcie_phy_power_off(pcie);
return ret;
}
}
}
return ret;
}
static int qcom_pcie_host_init(struct dw_pcie_rp *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
@ -1241,11 +1318,7 @@ static int qcom_pcie_host_init(struct dw_pcie_rp *pp)
if (ret)
return ret;
ret = phy_set_mode_ext(pcie->phy, PHY_MODE_PCIE, PHY_MODE_PCIE_RC);
if (ret)
goto err_deinit;
ret = phy_power_on(pcie->phy);
ret = qcom_pcie_phy_power_on(pcie);
if (ret)
goto err_deinit;
@ -1268,7 +1341,7 @@ static int qcom_pcie_host_init(struct dw_pcie_rp *pp)
err_assert_reset:
qcom_ep_reset_assert(pcie);
err_disable_phy:
phy_power_off(pcie->phy);
qcom_pcie_phy_power_off(pcie);
err_deinit:
pcie->cfg->ops->deinit(pcie);
@ -1281,7 +1354,7 @@ static void qcom_pcie_host_deinit(struct dw_pcie_rp *pp)
struct qcom_pcie *pcie = to_qcom_pcie(pci);
qcom_ep_reset_assert(pcie);
phy_power_off(pcie->phy);
qcom_pcie_phy_power_off(pcie);
pcie->cfg->ops->deinit(pcie);
}
@ -1426,6 +1499,10 @@ static const struct qcom_pcie_cfg cfg_sc8280xp = {
.no_l0s = true,
};
static const struct qcom_pcie_cfg cfg_fw_managed = {
.firmware_managed = true,
};
static const struct dw_pcie_ops dw_pcie_ops = {
.link_up = qcom_pcie_link_up,
.start_link = qcom_pcie_start_link,
@ -1564,6 +1641,7 @@ static irqreturn_t qcom_pcie_global_irq_thread(int irq, void *data)
writel_relaxed(status, pcie->parf + PARF_INT_ALL_CLEAR);
if (FIELD_GET(PARF_INT_ALL_LINK_UP, status)) {
msleep(PCIE_RESET_CONFIG_WAIT_MS);
dev_dbg(dev, "Received Link up event. Starting enumeration!\n");
/* Rescan the bus to enumerate endpoint devices */
pci_lock_rescan_remove();
@ -1579,10 +1657,128 @@ static irqreturn_t qcom_pcie_global_irq_thread(int irq, void *data)
return IRQ_HANDLED;
}
static void qcom_pci_free_msi(void *ptr)
{
struct dw_pcie_rp *pp = (struct dw_pcie_rp *)ptr;
if (pp && pp->has_msi_ctrl)
dw_pcie_free_msi(pp);
}
static int qcom_pcie_ecam_host_init(struct pci_config_window *cfg)
{
struct device *dev = cfg->parent;
struct dw_pcie_rp *pp;
struct dw_pcie *pci;
int ret;
pci = devm_kzalloc(dev, sizeof(*pci), GFP_KERNEL);
if (!pci)
return -ENOMEM;
pci->dev = dev;
pp = &pci->pp;
pci->dbi_base = cfg->win;
pp->num_vectors = MSI_DEF_NUM_VECTORS;
ret = dw_pcie_msi_host_init(pp);
if (ret)
return ret;
pp->has_msi_ctrl = true;
dw_pcie_msi_init(pp);
return devm_add_action_or_reset(dev, qcom_pci_free_msi, pp);
}
static const struct pci_ecam_ops pci_qcom_ecam_ops = {
.init = qcom_pcie_ecam_host_init,
.pci_ops = {
.map_bus = pci_ecam_map_bus,
.read = pci_generic_config_read,
.write = pci_generic_config_write,
}
};
static int qcom_pcie_parse_port(struct qcom_pcie *pcie, struct device_node *node)
{
struct device *dev = pcie->pci->dev;
struct qcom_pcie_port *port;
struct gpio_desc *reset;
struct phy *phy;
int ret;
reset = devm_fwnode_gpiod_get(dev, of_fwnode_handle(node),
"reset", GPIOD_OUT_HIGH, "PERST#");
if (IS_ERR(reset))
return PTR_ERR(reset);
phy = devm_of_phy_get(dev, node, NULL);
if (IS_ERR(phy))
return PTR_ERR(phy);
port = devm_kzalloc(dev, sizeof(*port), GFP_KERNEL);
if (!port)
return -ENOMEM;
ret = phy_init(phy);
if (ret)
return ret;
port->reset = reset;
port->phy = phy;
INIT_LIST_HEAD(&port->list);
list_add_tail(&port->list, &pcie->ports);
return 0;
}
static int qcom_pcie_parse_ports(struct qcom_pcie *pcie)
{
struct device *dev = pcie->pci->dev;
struct qcom_pcie_port *port, *tmp;
int ret = -ENOENT;
for_each_available_child_of_node_scoped(dev->of_node, of_port) {
ret = qcom_pcie_parse_port(pcie, of_port);
if (ret)
goto err_port_del;
}
return ret;
err_port_del:
list_for_each_entry_safe(port, tmp, &pcie->ports, list)
list_del(&port->list);
return ret;
}
static int qcom_pcie_parse_legacy_binding(struct qcom_pcie *pcie)
{
struct device *dev = pcie->pci->dev;
int ret;
pcie->phy = devm_phy_optional_get(dev, "pciephy");
if (IS_ERR(pcie->phy))
return PTR_ERR(pcie->phy);
pcie->reset = devm_gpiod_get_optional(dev, "perst", GPIOD_OUT_HIGH);
if (IS_ERR(pcie->reset))
return PTR_ERR(pcie->reset);
ret = phy_init(pcie->phy);
if (ret)
return ret;
return 0;
}
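In DT terms (illustrative, not a binding excerpt from this merge): the per-port parser consumes "reset-gpios" and "phys" from each Root Port child node, while the legacy fallback reads "perst-gpios" and the "pciephy" PHY straight from the host bridge node, so existing device trees keep working unchanged.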
static int qcom_pcie_probe(struct platform_device *pdev)
{
const struct qcom_pcie_cfg *pcie_cfg;
unsigned long max_freq = ULONG_MAX;
struct qcom_pcie_port *port, *tmp;
struct device *dev = &pdev->dev;
struct dev_pm_opp *opp;
struct qcom_pcie *pcie;
@ -1593,24 +1789,64 @@ static int qcom_pcie_probe(struct platform_device *pdev)
char *name;
pcie_cfg = of_device_get_match_data(dev);
if (!pcie_cfg || !pcie_cfg->ops) {
dev_err(dev, "Invalid platform data\n");
return -EINVAL;
if (!pcie_cfg) {
dev_err(dev, "No platform data\n");
return -ENODATA;
}
pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
if (!pcie)
return -ENOMEM;
pci = devm_kzalloc(dev, sizeof(*pci), GFP_KERNEL);
if (!pci)
return -ENOMEM;
if (!pcie_cfg->firmware_managed && !pcie_cfg->ops) {
dev_err(dev, "No platform ops\n");
return -ENODATA;
}
pm_runtime_enable(dev);
ret = pm_runtime_get_sync(dev);
if (ret < 0)
goto err_pm_runtime_put;
if (pcie_cfg->firmware_managed) {
struct pci_host_bridge *bridge;
struct pci_config_window *cfg;
bridge = devm_pci_alloc_host_bridge(dev, 0);
if (!bridge) {
ret = -ENOMEM;
goto err_pm_runtime_put;
}
/* Parse and map our ECAM configuration space area */
cfg = pci_host_common_ecam_create(dev, bridge,
&pci_qcom_ecam_ops);
if (IS_ERR(cfg)) {
ret = PTR_ERR(cfg);
goto err_pm_runtime_put;
}
bridge->sysdata = cfg;
bridge->ops = (struct pci_ops *)&pci_qcom_ecam_ops.pci_ops;
bridge->msi_domain = true;
ret = pci_host_probe(bridge);
if (ret)
goto err_pm_runtime_put;
return 0;
}
pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
if (!pcie) {
ret = -ENOMEM;
goto err_pm_runtime_put;
}
pci = devm_kzalloc(dev, sizeof(*pci), GFP_KERNEL);
if (!pci) {
ret = -ENOMEM;
goto err_pm_runtime_put;
}
INIT_LIST_HEAD(&pcie->ports);
pci->dev = dev;
pci->ops = &dw_pcie_ops;
pp = &pci->pp;
@ -1619,12 +1855,6 @@ static int qcom_pcie_probe(struct platform_device *pdev)
pcie->cfg = pcie_cfg;
pcie->reset = devm_gpiod_get_optional(dev, "perst", GPIOD_OUT_HIGH);
if (IS_ERR(pcie->reset)) {
ret = PTR_ERR(pcie->reset);
goto err_pm_runtime_put;
}
pcie->parf = devm_platform_ioremap_resource_byname(pdev, "parf");
if (IS_ERR(pcie->parf)) {
ret = PTR_ERR(pcie->parf);
@ -1647,12 +1877,6 @@ static int qcom_pcie_probe(struct platform_device *pdev)
}
}
pcie->phy = devm_phy_optional_get(dev, "pciephy");
if (IS_ERR(pcie->phy)) {
ret = PTR_ERR(pcie->phy);
goto err_pm_runtime_put;
}
/* OPP table is optional */
ret = devm_pm_opp_of_add_table(dev);
if (ret && ret != -ENODEV) {
@ -1699,9 +1923,23 @@ static int qcom_pcie_probe(struct platform_device *pdev)
pp->ops = &qcom_pcie_dw_ops;
ret = phy_init(pcie->phy);
if (ret)
goto err_pm_runtime_put;
ret = qcom_pcie_parse_ports(pcie);
if (ret) {
if (ret != -ENOENT) {
dev_err_probe(pci->dev, ret,
"Failed to parse Root Port: %d\n", ret);
goto err_pm_runtime_put;
}
/*
* In the case of properties not populated in Root Port node,
* fallback to the legacy method of parsing the Host Bridge
* node. This is to maintain DT backwards compatibility.
*/
ret = qcom_pcie_parse_legacy_binding(pcie);
if (ret)
goto err_pm_runtime_put;
}
platform_set_drvdata(pdev, pcie);
@ -1746,7 +1984,9 @@ static int qcom_pcie_probe(struct platform_device *pdev)
err_host_deinit:
dw_pcie_host_deinit(pp);
err_phy_exit:
phy_exit(pcie->phy);
qcom_pcie_phy_exit(pcie);
list_for_each_entry_safe(port, tmp, &pcie->ports, list)
list_del(&port->list);
err_pm_runtime_put:
pm_runtime_put(dev);
pm_runtime_disable(dev);
@ -1756,9 +1996,13 @@ err_pm_runtime_put:
static int qcom_pcie_suspend_noirq(struct device *dev)
{
struct qcom_pcie *pcie = dev_get_drvdata(dev);
struct qcom_pcie *pcie;
int ret = 0;
pcie = dev_get_drvdata(dev);
if (!pcie)
return 0;
/*
* Set minimum bandwidth required to keep data path functional during
* suspend.
@ -1812,9 +2056,13 @@ static int qcom_pcie_suspend_noirq(struct device *dev)
static int qcom_pcie_resume_noirq(struct device *dev)
{
struct qcom_pcie *pcie = dev_get_drvdata(dev);
struct qcom_pcie *pcie;
int ret;
pcie = dev_get_drvdata(dev);
if (!pcie)
return 0;
if (pm_suspend_target_state != PM_SUSPEND_MEM) {
ret = icc_enable(pcie->icc_cpu);
if (ret) {
@ -1849,6 +2097,7 @@ static const struct of_device_id qcom_pcie_match[] = {
{ .compatible = "qcom,pcie-ipq9574", .data = &cfg_2_9_0 },
{ .compatible = "qcom,pcie-msm8996", .data = &cfg_2_3_2 },
{ .compatible = "qcom,pcie-qcs404", .data = &cfg_2_4_0 },
{ .compatible = "qcom,pcie-sa8255p", .data = &cfg_fw_managed },
{ .compatible = "qcom,pcie-sa8540p", .data = &cfg_sc8280xp },
{ .compatible = "qcom,pcie-sa8775p", .data = &cfg_1_34_0},
{ .compatible = "qcom,pcie-sc7280", .data = &cfg_1_9_0 },


@ -0,0 +1,257 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Sophgo DesignWare based PCIe host controller driver
*/
#include <linux/bits.h>
#include <linux/clk.h>
#include <linux/irqchip/chained_irq.h>
#include <linux/irqdomain.h>
#include <linux/module.h>
#include <linux/property.h>
#include <linux/platform_device.h>
#include "pcie-designware.h"
#define to_sophgo_pcie(x) dev_get_drvdata((x)->dev)
#define PCIE_INT_SIGNAL 0xc48
#define PCIE_INT_EN 0xca0
#define PCIE_INT_SIGNAL_INTX GENMASK(8, 5)
#define PCIE_INT_EN_INTX GENMASK(4, 1)
#define PCIE_INT_EN_INT_MSI BIT(5)
struct sophgo_pcie {
struct dw_pcie pci;
void __iomem *app_base;
struct clk_bulk_data *clks;
unsigned int clk_cnt;
struct irq_domain *irq_domain;
};
static int sophgo_pcie_readl_app(struct sophgo_pcie *sophgo, u32 reg)
{
return readl_relaxed(sophgo->app_base + reg);
}
static void sophgo_pcie_writel_app(struct sophgo_pcie *sophgo, u32 val, u32 reg)
{
writel_relaxed(val, sophgo->app_base + reg);
}
static void sophgo_pcie_intx_handler(struct irq_desc *desc)
{
struct dw_pcie_rp *pp = irq_desc_get_handler_data(desc);
struct irq_chip *chip = irq_desc_get_chip(desc);
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct sophgo_pcie *sophgo = to_sophgo_pcie(pci);
unsigned long hwirq, reg;
chained_irq_enter(chip, desc);
reg = sophgo_pcie_readl_app(sophgo, PCIE_INT_SIGNAL);
reg = FIELD_GET(PCIE_INT_SIGNAL_INTX, reg);
for_each_set_bit(hwirq, &reg, PCI_NUM_INTX)
generic_handle_domain_irq(sophgo->irq_domain, hwirq);
chained_irq_exit(chip, desc);
}
static void sophgo_intx_irq_mask(struct irq_data *d)
{
struct dw_pcie_rp *pp = irq_data_get_irq_chip_data(d);
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct sophgo_pcie *sophgo = to_sophgo_pcie(pci);
unsigned long flags;
u32 val;
raw_spin_lock_irqsave(&pp->lock, flags);
val = sophgo_pcie_readl_app(sophgo, PCIE_INT_EN);
val &= ~FIELD_PREP(PCIE_INT_EN_INTX, BIT(d->hwirq));
sophgo_pcie_writel_app(sophgo, val, PCIE_INT_EN);
raw_spin_unlock_irqrestore(&pp->lock, flags);
}
static void sophgo_intx_irq_unmask(struct irq_data *d)
{
struct dw_pcie_rp *pp = irq_data_get_irq_chip_data(d);
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct sophgo_pcie *sophgo = to_sophgo_pcie(pci);
unsigned long flags;
u32 val;
raw_spin_lock_irqsave(&pp->lock, flags);
val = sophgo_pcie_readl_app(sophgo, PCIE_INT_EN);
val |= FIELD_PREP(PCIE_INT_EN_INTX, BIT(d->hwirq));
sophgo_pcie_writel_app(sophgo, val, PCIE_INT_EN);
raw_spin_unlock_irqrestore(&pp->lock, flags);
}
static struct irq_chip sophgo_intx_irq_chip = {
.name = "INTx",
.irq_mask = sophgo_intx_irq_mask,
.irq_unmask = sophgo_intx_irq_unmask,
};
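The enable and status fields sit one nibble apart, which is easy to misread; a worked example, assuming INTx hwirq 2 (INTC):

	/*
	 * Unmask: BIT(2) = 0x4, and FIELD_PREP(PCIE_INT_EN_INTX, 0x4) = 0x8,
	 * i.e. bit 3 of PCIE_INT_EN gets set.
	 *
	 * Handler: the same interrupt reports at bit 7 of PCIE_INT_SIGNAL;
	 * FIELD_GET(PCIE_INT_SIGNAL_INTX, reg) shifts it back down so that
	 * for_each_set_bit() sees hwirq 2 again.
	 */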
static int sophgo_pcie_intx_map(struct irq_domain *domain, unsigned int irq,
irq_hw_number_t hwirq)
{
irq_set_chip_and_handler(irq, &sophgo_intx_irq_chip, handle_level_irq);
irq_set_chip_data(irq, domain->host_data);
return 0;
}
static const struct irq_domain_ops intx_domain_ops = {
.map = sophgo_pcie_intx_map,
};
static int sophgo_pcie_init_irq_domain(struct dw_pcie_rp *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct sophgo_pcie *sophgo = to_sophgo_pcie(pci);
struct device *dev = sophgo->pci.dev;
struct fwnode_handle *intc;
int irq;
intc = device_get_named_child_node(dev, "interrupt-controller");
if (!intc) {
dev_err(dev, "missing child interrupt-controller node\n");
return -ENODEV;
}
irq = fwnode_irq_get(intc, 0);
if (irq < 0) {
dev_err(dev, "failed to get INTx irq number\n");
fwnode_handle_put(intc);
return irq;
}
sophgo->irq_domain = irq_domain_create_linear(intc, PCI_NUM_INTX,
&intx_domain_ops, pp);
fwnode_handle_put(intc);
if (!sophgo->irq_domain) {
dev_err(dev, "failed to get a INTx irq domain\n");
return -EINVAL;
}
return irq;
}
static void sophgo_pcie_msi_enable(struct dw_pcie_rp *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct sophgo_pcie *sophgo = to_sophgo_pcie(pci);
unsigned long flags;
u32 val;
raw_spin_lock_irqsave(&pp->lock, flags);
val = sophgo_pcie_readl_app(sophgo, PCIE_INT_EN);
val |= PCIE_INT_EN_INT_MSI;
sophgo_pcie_writel_app(sophgo, val, PCIE_INT_EN);
raw_spin_unlock_irqrestore(&pp->lock, flags);
}
static int sophgo_pcie_host_init(struct dw_pcie_rp *pp)
{
int irq;
irq = sophgo_pcie_init_irq_domain(pp);
if (irq < 0)
return irq;
irq_set_chained_handler_and_data(irq, sophgo_pcie_intx_handler, pp);
sophgo_pcie_msi_enable(pp);
return 0;
}
static const struct dw_pcie_host_ops sophgo_pcie_host_ops = {
.init = sophgo_pcie_host_init,
};
static int sophgo_pcie_clk_init(struct sophgo_pcie *sophgo)
{
struct device *dev = sophgo->pci.dev;
int ret;
ret = devm_clk_bulk_get_all_enabled(dev, &sophgo->clks);
if (ret < 0)
return dev_err_probe(dev, ret, "failed to get clocks\n");
sophgo->clk_cnt = ret;
return 0;
}
static int sophgo_pcie_resource_get(struct platform_device *pdev,
struct sophgo_pcie *sophgo)
{
sophgo->app_base = devm_platform_ioremap_resource_byname(pdev, "app");
if (IS_ERR(sophgo->app_base))
return dev_err_probe(&pdev->dev, PTR_ERR(sophgo->app_base),
"failed to map app registers\n");
return 0;
}
static int sophgo_pcie_configure_rc(struct sophgo_pcie *sophgo)
{
struct dw_pcie_rp *pp;
pp = &sophgo->pci.pp;
pp->ops = &sophgo_pcie_host_ops;
return dw_pcie_host_init(pp);
}
static int sophgo_pcie_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct sophgo_pcie *sophgo;
int ret;
sophgo = devm_kzalloc(dev, sizeof(*sophgo), GFP_KERNEL);
if (!sophgo)
return -ENOMEM;
platform_set_drvdata(pdev, sophgo);
sophgo->pci.dev = dev;
ret = sophgo_pcie_resource_get(pdev, sophgo);
if (ret)
return ret;
ret = sophgo_pcie_clk_init(sophgo);
if (ret)
return ret;
return sophgo_pcie_configure_rc(sophgo);
}
static const struct of_device_id sophgo_pcie_of_match[] = {
{ .compatible = "sophgo,sg2044-pcie" },
{ }
};
MODULE_DEVICE_TABLE(of, sophgo_pcie_of_match);
static struct platform_driver sophgo_pcie_driver = {
.driver = {
.name = "sophgo-pcie",
.of_match_table = sophgo_pcie_of_match,
.suppress_bind_attrs = true,
},
.probe = sophgo_pcie_probe,
};
builtin_platform_driver(sophgo_pcie_driver);


@ -9,6 +9,7 @@ config PCIE_MOBIVEIL
config PCIE_MOBIVEIL_HOST
bool
depends on PCI_MSI
select IRQ_MSI_LIB
select PCIE_MOBIVEIL
config PCIE_LAYERSCAPE_GEN4


@ -12,6 +12,7 @@
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/irqchip/irq-msi-lib.h>
#include <linux/irqchip/chained_irq.h>
#include <linux/irqdomain.h>
#include <linux/kernel.h>
@ -353,16 +354,19 @@ static const struct irq_domain_ops intx_domain_ops = {
.map = mobiveil_pcie_intx_map,
};
static struct irq_chip mobiveil_msi_irq_chip = {
.name = "Mobiveil PCIe MSI",
.irq_mask = pci_msi_mask_irq,
.irq_unmask = pci_msi_unmask_irq,
};
#define MOBIVEIL_MSI_FLAGS_REQUIRED (MSI_FLAG_USE_DEF_DOM_OPS | \
MSI_FLAG_USE_DEF_CHIP_OPS | \
MSI_FLAG_NO_AFFINITY)
static struct msi_domain_info mobiveil_msi_domain_info = {
.flags = MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
MSI_FLAG_NO_AFFINITY | MSI_FLAG_PCI_MSIX,
.chip = &mobiveil_msi_irq_chip,
#define MOBIVEIL_MSI_FLAGS_SUPPORTED (MSI_GENERIC_FLAGS_MASK | \
MSI_FLAG_PCI_MSIX)
static const struct msi_parent_ops mobiveil_msi_parent_ops = {
.required_flags = MOBIVEIL_MSI_FLAGS_REQUIRED,
.supported_flags = MOBIVEIL_MSI_FLAGS_SUPPORTED,
.bus_select_token = DOMAIN_BUS_PCI_MSI,
.prefix = "Mobiveil-",
.init_dev_msi_info = msi_lib_init_dev_msi_info,
};
static void mobiveil_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
@ -435,23 +439,20 @@ static const struct irq_domain_ops msi_domain_ops = {
static int mobiveil_allocate_msi_domains(struct mobiveil_pcie *pcie)
{
struct device *dev = &pcie->pdev->dev;
struct fwnode_handle *fwnode = of_fwnode_handle(dev->of_node);
struct mobiveil_msi *msi = &pcie->rp.msi;
mutex_init(&msi->lock);
msi->dev_domain = irq_domain_create_linear(NULL, msi->num_of_vectors,
&msi_domain_ops, pcie);
if (!msi->dev_domain) {
dev_err(dev, "failed to create IRQ domain\n");
return -ENOMEM;
}
msi->msi_domain = pci_msi_create_irq_domain(fwnode,
&mobiveil_msi_domain_info,
msi->dev_domain);
if (!msi->msi_domain) {
struct irq_domain_info info = {
.fwnode = dev_fwnode(dev),
.ops = &msi_domain_ops,
.host_data = pcie,
.size = msi->num_of_vectors,
};
msi->dev_domain = msi_create_parent_irq_domain(&info, &mobiveil_msi_parent_ops);
if (!msi->dev_domain) {
dev_err(dev, "failed to create MSI domain\n");
irq_domain_remove(msi->dev_domain);
return -ENOMEM;
}
@ -464,9 +465,8 @@ static int mobiveil_pcie_init_irq_domain(struct mobiveil_pcie *pcie)
struct mobiveil_root_port *rp = &pcie->rp;
/* setup INTx */
rp->intx_domain = irq_domain_create_linear(of_fwnode_handle(dev->of_node), PCI_NUM_INTX,
&intx_domain_ops, pcie);
rp->intx_domain = irq_domain_create_linear(dev_fwnode(dev), PCI_NUM_INTX, &intx_domain_ops,
pcie);
if (!rp->intx_domain) {
dev_err(dev, "Failed to get a INTx IRQ domain\n");
return -ENOMEM;


@ -135,7 +135,6 @@
struct mobiveil_msi { /* MSI information */
struct mutex lock; /* protect bitmap variable */
struct irq_domain *msi_domain;
struct irq_domain *dev_domain;
phys_addr_t msi_pages_phys;
int num_of_vectors;


@ -13,6 +13,7 @@
#include <linux/gpio/consumer.h>
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/irqchip/irq-msi-lib.h>
#include <linux/irqdomain.h>
#include <linux/kernel.h>
#include <linux/module.h>
@ -278,7 +279,6 @@ struct advk_pcie {
struct irq_domain *irq_domain;
struct irq_chip irq_chip;
raw_spinlock_t irq_lock;
struct irq_domain *msi_domain;
struct irq_domain *msi_inner_domain;
raw_spinlock_t msi_irq_lock;
DECLARE_BITMAP(msi_used, MSI_IRQ_NUM);
@ -1332,18 +1332,6 @@ static void advk_msi_irq_unmask(struct irq_data *d)
raw_spin_unlock_irqrestore(&pcie->msi_irq_lock, flags);
}
static void advk_msi_top_irq_mask(struct irq_data *d)
{
pci_msi_mask_irq(d);
irq_chip_mask_parent(d);
}
static void advk_msi_top_irq_unmask(struct irq_data *d)
{
pci_msi_unmask_irq(d);
irq_chip_unmask_parent(d);
}
static struct irq_chip advk_msi_bottom_irq_chip = {
.name = "MSI",
.irq_compose_msi_msg = advk_msi_irq_compose_msi_msg,
@ -1436,17 +1424,20 @@ static const struct irq_domain_ops advk_pcie_irq_domain_ops = {
.xlate = irq_domain_xlate_onecell,
};
static struct irq_chip advk_msi_irq_chip = {
.name = "advk-MSI",
.irq_mask = advk_msi_top_irq_mask,
.irq_unmask = advk_msi_top_irq_unmask,
};
#define ADVK_MSI_FLAGS_REQUIRED (MSI_FLAG_USE_DEF_DOM_OPS | \
MSI_FLAG_USE_DEF_CHIP_OPS | \
MSI_FLAG_PCI_MSI_MASK_PARENT | \
MSI_FLAG_NO_AFFINITY)
#define ADVK_MSI_FLAGS_SUPPORTED (MSI_GENERIC_FLAGS_MASK | \
MSI_FLAG_PCI_MSIX | \
MSI_FLAG_MULTI_PCI_MSI)
static struct msi_domain_info advk_msi_domain_info = {
.flags = MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
MSI_FLAG_NO_AFFINITY | MSI_FLAG_MULTI_PCI_MSI |
MSI_FLAG_PCI_MSIX,
.chip = &advk_msi_irq_chip,
static const struct msi_parent_ops advk_msi_parent_ops = {
.required_flags = ADVK_MSI_FLAGS_REQUIRED,
.supported_flags = ADVK_MSI_FLAGS_SUPPORTED,
.bus_select_token = DOMAIN_BUS_PCI_MSI,
.prefix = "advk-",
.init_dev_msi_info = msi_lib_init_dev_msi_info,
};
static int advk_pcie_init_msi_irq_domain(struct advk_pcie *pcie)
@ -1456,26 +1447,22 @@ static int advk_pcie_init_msi_irq_domain(struct advk_pcie *pcie)
raw_spin_lock_init(&pcie->msi_irq_lock);
mutex_init(&pcie->msi_used_lock);
pcie->msi_inner_domain = irq_domain_create_linear(NULL, MSI_IRQ_NUM,
&advk_msi_domain_ops, pcie);
struct irq_domain_info info = {
.fwnode = dev_fwnode(dev),
.ops = &advk_msi_domain_ops,
.host_data = pcie,
.size = MSI_IRQ_NUM,
};
pcie->msi_inner_domain = msi_create_parent_irq_domain(&info, &advk_msi_parent_ops);
if (!pcie->msi_inner_domain)
return -ENOMEM;
pcie->msi_domain =
pci_msi_create_irq_domain(dev_fwnode(dev),
&advk_msi_domain_info,
pcie->msi_inner_domain);
if (!pcie->msi_domain) {
irq_domain_remove(pcie->msi_inner_domain);
return -ENOMEM;
}
return 0;
}
static void advk_pcie_remove_msi_irq_domain(struct advk_pcie *pcie)
{
irq_domain_remove(pcie->msi_domain);
irq_domain_remove(pcie->msi_inner_domain);
}


@ -22,7 +22,7 @@ static void gen_pci_unmap_cfg(void *ptr)
pci_ecam_free((struct pci_config_window *)ptr);
}
static struct pci_config_window *gen_pci_init(struct device *dev,
struct pci_config_window *pci_host_common_ecam_create(struct device *dev,
struct pci_host_bridge *bridge, const struct pci_ecam_ops *ops)
{
int err;
@ -50,6 +50,7 @@ static struct pci_config_window *gen_pci_init(struct device *dev,
return cfg;
}
EXPORT_SYMBOL_GPL(pci_host_common_ecam_create);
int pci_host_common_init(struct platform_device *pdev,
const struct pci_ecam_ops *ops)
@ -67,7 +68,7 @@ int pci_host_common_init(struct platform_device *pdev,
platform_set_drvdata(pdev, bridge);
/* Parse and map our Configuration Space windows */
cfg = gen_pci_init(dev, bridge, ops);
cfg = pci_host_common_ecam_create(dev, bridge, ops);
if (IS_ERR(cfg))
return PTR_ERR(cfg);


@ -17,4 +17,6 @@ int pci_host_common_init(struct platform_device *pdev,
const struct pci_ecam_ops *ops);
void pci_host_common_remove(struct platform_device *pdev);
struct pci_config_window *pci_host_common_ecam_create(struct device *dev,
struct pci_host_bridge *bridge, const struct pci_ecam_ops *ops);
#endif


@ -1353,11 +1353,9 @@ static int mvebu_pcie_parse_port(struct mvebu_pcie *pcie,
goto skip;
}
ret = devm_add_action(dev, mvebu_pcie_port_clk_put, port);
if (ret < 0) {
clk_put(port->clk);
ret = devm_add_action_or_reset(dev, mvebu_pcie_port_clk_put, port);
if (ret < 0)
goto err;
}
return 1;


@ -6,6 +6,7 @@
* Author: Tanmay Inamdar <tinamdar@apm.com>
* Duc Dang <dhdang@apm.com>
*/
#include <linux/bitfield.h>
#include <linux/cpu.h>
#include <linux/interrupt.h>
#include <linux/irqdomain.h>
@ -22,31 +23,49 @@
#define IDX_PER_GROUP 8
#define IRQS_PER_IDX 16
#define NR_HW_IRQS 16
#define NR_MSI_VEC (IDX_PER_GROUP * IRQS_PER_IDX * NR_HW_IRQS)
#define NR_MSI_BITS (IDX_PER_GROUP * IRQS_PER_IDX * NR_HW_IRQS)
#define NR_MSI_VEC (NR_MSI_BITS / num_possible_cpus())
struct xgene_msi_group {
struct xgene_msi *msi;
int gic_irq;
u32 msi_grp;
};
#define MSI_GROUP_MASK GENMASK(22, 19)
#define MSI_INDEX_MASK GENMASK(18, 16)
#define MSI_INTR_MASK GENMASK(19, 16)
#define MSInRx_HWIRQ_MASK GENMASK(6, 4)
#define DATA_HWIRQ_MASK GENMASK(3, 0)
struct xgene_msi {
struct device_node *node;
struct irq_domain *inner_domain;
u64 msi_addr;
void __iomem *msi_regs;
unsigned long *bitmap;
struct mutex bitmap_lock;
struct xgene_msi_group *msi_groups;
int num_cpus;
unsigned int gic_irq[NR_HW_IRQS];
};
/* Global data */
static struct xgene_msi xgene_msi_ctrl;
static struct xgene_msi *xgene_msi_ctrl;
/*
* X-Gene v1 has 16 groups of MSI termination registers MSInIRx, where
* n is group number (0..F), x is index of registers in each group (0..7)
* X-Gene v1 has 16 frames of MSI termination registers MSInIRx, where n is
* frame number (0..15), x is index of registers in each frame (0..7). Each
* 32b register is at the beginning of a 64kB region, each frame occupying
* 512kB (and the whole thing 8MB of PA space).
*
* Each register supports 16 MSI vectors (0..15) to generate interrupts. A
* write to the MSInIRx from the PCI side generates an interrupt. A read
* from the MSInIRx on the CPU side returns a bitmap of the pending MSIs in
* the lower 16 bits. A side effect of this read is that all pending
* interrupts are acknowledged and cleared.
*
* Additionally, each MSI termination frame has 1 MSIINTn register (n is
* 0..15) to indicate the MSI pending status caused by any of its 8
* termination registers, reported as a bitmap in the lower 8 bits. Each 32b
* register is at the beginning of a 64kB region (and overall occupying an
* extra 1MB).
*
* There is one GIC IRQ assigned for each MSI termination frame, 16 in
* total.
*
* The register layout is as follows:
* MSI0IR0 base_addr
* MSI0IR1 base_addr + 0x10000
@ -67,107 +86,74 @@ static struct xgene_msi xgene_msi_ctrl;
* MSIINT1 base_addr + 0x810000
* ... ...
* MSIINTF base_addr + 0x8F0000
*
* Each index register supports 16 MSI vectors (0..15) to generate interrupt.
* There are total 16 GIC IRQs assigned for these 16 groups of MSI termination
* registers.
*
* Each MSI termination group has 1 MSIINTn register (n is 0..15) to indicate
* the MSI pending status caused by 1 of its 8 index registers.
*/
/* MSInIRx read helper */
static u32 xgene_msi_ir_read(struct xgene_msi *msi,
u32 msi_grp, u32 msir_idx)
static u32 xgene_msi_ir_read(struct xgene_msi *msi, u32 msi_grp, u32 msir_idx)
{
return readl_relaxed(msi->msi_regs + MSI_IR0 +
(msi_grp << 19) + (msir_idx << 16));
(FIELD_PREP(MSI_GROUP_MASK, msi_grp) |
FIELD_PREP(MSI_INDEX_MASK, msir_idx)));
}
/* MSIINTn read helper */
static u32 xgene_msi_int_read(struct xgene_msi *msi, u32 msi_grp)
{
return readl_relaxed(msi->msi_regs + MSI_INT0 + (msi_grp << 16));
return readl_relaxed(msi->msi_regs + MSI_INT0 +
FIELD_PREP(MSI_INTR_MASK, msi_grp));
}
/*
* With 2048 MSI vectors supported, the MSI message can be constructed using
* following scheme:
* - Divide into 8 256-vector groups
* Group 0: 0-255
* Group 1: 256-511
* Group 2: 512-767
* ...
* Group 7: 1792-2047
* - Each 256-vector group is divided into 16 16-vector groups
* As an example: 16 16-vector groups for 256-vector group 0-255 is
* Group 0: 0-15
* Group 1: 16-32
* ...
* Group 15: 240-255
* - The termination address of MSI vector in 256-vector group n and 16-vector
* group x is the address of MSIxIRn
* - The data for MSI vector in 16-vector group x is x
* In order to allow an MSI to be moved from one CPU to another without
* having to repaint both the address and the data (which cannot be done
* atomically), we statically partition the MSI frames between CPUs. Given
* that XGene-1 has 8 CPUs, each CPU gets two frames assigned to it.
*
* We adopt the convention that when an MSI is moved, it is configured to
* target the same register number in the congruent frame assigned to the
* new target CPU. This reserves a given MSI across all CPUs, and reduces
* the MSI capacity from 2048 to 256.
*
* Effectively, this amounts to:
* - hwirq[7]::cpu[2:0] is the target frame number (n in MSInIRx)
* - hwirq[6:4] is the register index in any given frame (x in MSInIRx)
* - hwirq[3:0] is the MSI data
*/
static u32 hwirq_to_reg_set(unsigned long hwirq)
static irq_hw_number_t compute_hwirq(u8 frame, u8 index, u8 data)
{
return (hwirq / (NR_HW_IRQS * IRQS_PER_IDX));
}
static u32 hwirq_to_group(unsigned long hwirq)
{
return (hwirq % NR_HW_IRQS);
}
static u32 hwirq_to_msi_data(unsigned long hwirq)
{
return ((hwirq / NR_HW_IRQS) % IRQS_PER_IDX);
return (FIELD_PREP(BIT(7), FIELD_GET(BIT(3), frame)) |
FIELD_PREP(MSInRx_HWIRQ_MASK, index) |
FIELD_PREP(DATA_HWIRQ_MASK, data));
}
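A worked example of the encoding (values illustrative): frame 0xB, register index 3, data 5 yields hwirq 0xB5 (bit 7 from frame bit 3, the index in bits 6:4, the data in bits 3:0). If that MSI is later targeted at CPU 2, xgene_compose_msi_msg() below rebuilds frame = (1 << 3) | 2 = 0xA, so the doorbell address becomes msi_addr + (0xA << 19) + (3 << 16) = msi_addr + 0x530000 (i.e. MSInIRx with n = 0xA, x = 3), with MSI data 5.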
static void xgene_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
{
struct xgene_msi *msi = irq_data_get_irq_chip_data(data);
u32 reg_set = hwirq_to_reg_set(data->hwirq);
u32 group = hwirq_to_group(data->hwirq);
u64 target_addr = msi->msi_addr + (((8 * group) + reg_set) << 16);
u64 target_addr;
u32 frame, msir;
int cpu;
cpu = cpumask_first(irq_data_get_effective_affinity_mask(data));
msir = FIELD_GET(MSInRx_HWIRQ_MASK, data->hwirq);
frame = FIELD_PREP(BIT(3), FIELD_GET(BIT(7), data->hwirq)) | cpu;
target_addr = msi->msi_addr;
target_addr += (FIELD_PREP(MSI_GROUP_MASK, frame) |
FIELD_PREP(MSI_INTR_MASK, msir));
msg->address_hi = upper_32_bits(target_addr);
msg->address_lo = lower_32_bits(target_addr);
msg->data = hwirq_to_msi_data(data->hwirq);
}
/*
* X-Gene v1 only has 16 MSI GIC IRQs for 2048 MSI vectors. To maintain
* the expected behaviour of .set_affinity for each MSI interrupt, the 16
* MSI GIC IRQs are statically allocated to 8 X-Gene v1 cores (2 GIC IRQs
* for each core). The MSI vector is moved from 1 MSI GIC IRQ to another
* MSI GIC IRQ to steer its MSI interrupt to correct X-Gene v1 core. As a
* consequence, the total MSI vectors that X-Gene v1 supports will be
* reduced to 256 (2048/8) vectors.
*/
static int hwirq_to_cpu(unsigned long hwirq)
{
return (hwirq % xgene_msi_ctrl.num_cpus);
}
static unsigned long hwirq_to_canonical_hwirq(unsigned long hwirq)
{
return (hwirq - hwirq_to_cpu(hwirq));
msg->data = FIELD_GET(DATA_HWIRQ_MASK, data->hwirq);
}
static int xgene_msi_set_affinity(struct irq_data *irqdata,
const struct cpumask *mask, bool force)
{
int target_cpu = cpumask_first(mask);
int curr_cpu;
curr_cpu = hwirq_to_cpu(irqdata->hwirq);
if (curr_cpu == target_cpu)
return IRQ_SET_MASK_OK_DONE;
/* Update MSI number to target the new CPU */
irqdata->hwirq = hwirq_to_canonical_hwirq(irqdata->hwirq) + target_cpu;
irq_data_update_effective_affinity(irqdata, cpumask_of(target_cpu));
/* Force the core code to regenerate the message */
return IRQ_SET_MASK_OK;
}
@ -181,25 +167,23 @@ static int xgene_irq_domain_alloc(struct irq_domain *domain, unsigned int virq,
unsigned int nr_irqs, void *args)
{
struct xgene_msi *msi = domain->host_data;
int msi_irq;
irq_hw_number_t hwirq;
mutex_lock(&msi->bitmap_lock);
msi_irq = bitmap_find_next_zero_area(msi->bitmap, NR_MSI_VEC, 0,
msi->num_cpus, 0);
if (msi_irq < NR_MSI_VEC)
bitmap_set(msi->bitmap, msi_irq, msi->num_cpus);
else
msi_irq = -ENOSPC;
hwirq = find_first_zero_bit(msi->bitmap, NR_MSI_VEC);
if (hwirq < NR_MSI_VEC)
set_bit(hwirq, msi->bitmap);
mutex_unlock(&msi->bitmap_lock);
if (msi_irq < 0)
return msi_irq;
if (hwirq >= NR_MSI_VEC)
return -ENOSPC;
irq_domain_set_info(domain, virq, msi_irq,
irq_domain_set_info(domain, virq, hwirq,
&xgene_msi_bottom_irq_chip, domain->host_data,
handle_simple_irq, NULL, NULL);
irqd_set_resend_when_in_progress(irq_get_irq_data(virq));
return 0;
}
@ -209,12 +193,10 @@ static void xgene_irq_domain_free(struct irq_domain *domain,
{
struct irq_data *d = irq_domain_get_irq_data(domain, virq);
struct xgene_msi *msi = irq_data_get_irq_chip_data(d);
u32 hwirq;
mutex_lock(&msi->bitmap_lock);
hwirq = hwirq_to_canonical_hwirq(d->hwirq);
bitmap_clear(msi->bitmap, hwirq, msi->num_cpus);
clear_bit(d->hwirq, msi->bitmap);
mutex_unlock(&msi->bitmap_lock);
@ -235,10 +217,11 @@ static const struct msi_parent_ops xgene_msi_parent_ops = {
.init_dev_msi_info = msi_lib_init_dev_msi_info,
};
static int xgene_allocate_domains(struct xgene_msi *msi)
static int xgene_allocate_domains(struct device_node *node,
struct xgene_msi *msi)
{
struct irq_domain_info info = {
.fwnode = of_fwnode_handle(msi->node),
.fwnode = of_fwnode_handle(node),
.ops = &xgene_msi_domain_ops,
.size = NR_MSI_VEC,
.host_data = msi,
@ -248,169 +231,114 @@ static int xgene_allocate_domains(struct xgene_msi *msi)
return msi->inner_domain ? 0 : -ENOMEM;
}
static void xgene_free_domains(struct xgene_msi *msi)
static int xgene_msi_init_allocator(struct device *dev)
{
if (msi->inner_domain)
irq_domain_remove(msi->inner_domain);
}
static int xgene_msi_init_allocator(struct xgene_msi *xgene_msi)
{
xgene_msi->bitmap = bitmap_zalloc(NR_MSI_VEC, GFP_KERNEL);
if (!xgene_msi->bitmap)
xgene_msi_ctrl->bitmap = devm_bitmap_zalloc(dev, NR_MSI_VEC, GFP_KERNEL);
if (!xgene_msi_ctrl->bitmap)
return -ENOMEM;
mutex_init(&xgene_msi->bitmap_lock);
xgene_msi->msi_groups = kcalloc(NR_HW_IRQS,
sizeof(struct xgene_msi_group),
GFP_KERNEL);
if (!xgene_msi->msi_groups)
return -ENOMEM;
mutex_init(&xgene_msi_ctrl->bitmap_lock);
return 0;
}
static void xgene_msi_isr(struct irq_desc *desc)
{
unsigned int *irqp = irq_desc_get_handler_data(desc);
struct irq_chip *chip = irq_desc_get_chip(desc);
struct xgene_msi_group *msi_groups;
struct xgene_msi *xgene_msi;
int msir_index, msir_val, hw_irq, ret;
u32 intr_index, grp_select, msi_grp;
struct xgene_msi *xgene_msi = xgene_msi_ctrl;
unsigned long grp_pending;
int msir_idx;
u32 msi_grp;
chained_irq_enter(chip, desc);
msi_groups = irq_desc_get_handler_data(desc);
xgene_msi = msi_groups->msi;
msi_grp = msi_groups->msi_grp;
msi_grp = irqp - xgene_msi->gic_irq;
/*
* MSIINTn (n is 0..F) indicates if there is a pending MSI interrupt.
* If bit x of this register is set (x is 0..7), one or more interrupts
* corresponding to MSInIRx are pending.
*/
grp_select = xgene_msi_int_read(xgene_msi, msi_grp);
while (grp_select) {
msir_index = ffs(grp_select) - 1;
/*
* Calculate MSInIRx address to read to check for interrupts
* (refer to termination address and data assignment
* described in xgene_compose_msi_msg() )
*/
msir_val = xgene_msi_ir_read(xgene_msi, msi_grp, msir_index);
while (msir_val) {
intr_index = ffs(msir_val) - 1;
/*
* Calculate MSI vector number (refer to the termination
* address and data assignment described in
* xgene_compose_msi_msg function)
*/
hw_irq = (((msir_index * IRQS_PER_IDX) + intr_index) *
NR_HW_IRQS) + msi_grp;
/*
* As we have multiple hw_irq that maps to single MSI,
* always look up the virq using the hw_irq as seen from
* CPU0
*/
hw_irq = hwirq_to_canonical_hwirq(hw_irq);
ret = generic_handle_domain_irq(xgene_msi->inner_domain, hw_irq);
grp_pending = xgene_msi_int_read(xgene_msi, msi_grp);
for_each_set_bit(msir_idx, &grp_pending, IDX_PER_GROUP) {
unsigned long msir;
int intr_idx;
msir = xgene_msi_ir_read(xgene_msi, msi_grp, msir_idx);
for_each_set_bit(intr_idx, &msir, IRQS_PER_IDX) {
irq_hw_number_t hwirq;
int ret;
hwirq = compute_hwirq(msi_grp, msir_idx, intr_idx);
ret = generic_handle_domain_irq(xgene_msi->inner_domain,
hwirq);
WARN_ON_ONCE(ret);
msir_val &= ~(1 << intr_index);
}
grp_select &= ~(1 << msir_index);
if (!grp_select) {
/*
* We handled all interrupts happened in this group,
* resample this group MSI_INTx register in case
* something else has been made pending in the meantime
*/
grp_select = xgene_msi_int_read(xgene_msi, msi_grp);
}
}
chained_irq_exit(chip, desc);
}
static enum cpuhp_state pci_xgene_online;
static void xgene_msi_remove(struct platform_device *pdev)
{
struct xgene_msi *msi = platform_get_drvdata(pdev);
for (int i = 0; i < NR_HW_IRQS; i++) {
unsigned int irq = xgene_msi_ctrl->gic_irq[i];
if (!irq)
continue;
irq_set_chained_handler_and_data(irq, NULL, NULL);
}
if (pci_xgene_online)
cpuhp_remove_state(pci_xgene_online);
cpuhp_remove_state(CPUHP_PCI_XGENE_DEAD);
kfree(msi->msi_groups);
bitmap_free(msi->bitmap);
msi->bitmap = NULL;
xgene_free_domains(msi);
if (xgene_msi_ctrl->inner_domain)
irq_domain_remove(xgene_msi_ctrl->inner_domain);
}
static int xgene_msi_hwirq_alloc(unsigned int cpu)
static int xgene_msi_handler_setup(struct platform_device *pdev)
{
struct xgene_msi *msi = &xgene_msi_ctrl;
struct xgene_msi_group *msi_group;
cpumask_var_t mask;
struct xgene_msi *xgene_msi = xgene_msi_ctrl;
int i;
int err;
for (i = cpu; i < NR_HW_IRQS; i += msi->num_cpus) {
msi_group = &msi->msi_groups[i];
if (!msi_group->gic_irq)
continue;
for (i = 0; i < NR_HW_IRQS; i++) {
u32 msi_val;
int irq, err;
irq_set_chained_handler_and_data(msi_group->gic_irq,
xgene_msi_isr, msi_group);
/*
* MSInIRx registers are read-to-clear; before registering
* interrupt handlers, read all of them to clear spurious
* interrupts that may occur before the driver is probed.
*/
for (int msi_idx = 0; msi_idx < IDX_PER_GROUP; msi_idx++)
xgene_msi_ir_read(xgene_msi, i, msi_idx);
/* Read MSIINTn to confirm */
msi_val = xgene_msi_int_read(xgene_msi, i);
if (msi_val) {
dev_err(&pdev->dev, "Failed to clear spurious IRQ\n");
return -EINVAL;
}
irq = platform_get_irq(pdev, i);
if (irq < 0)
return irq;
xgene_msi->gic_irq[i] = irq;
/*
* Statically allocate MSI GIC IRQs to each CPU core.
* With 8-core X-Gene v1, 2 MSI GIC IRQs are allocated
* to each core.
*/
if (alloc_cpumask_var(&mask, GFP_KERNEL)) {
cpumask_clear(mask);
cpumask_set_cpu(cpu, mask);
err = irq_set_affinity(msi_group->gic_irq, mask);
if (err)
pr_err("failed to set affinity for GIC IRQ");
free_cpumask_var(mask);
} else {
pr_err("failed to alloc CPU mask for affinity\n");
err = -EINVAL;
}
irq_set_status_flags(irq, IRQ_NO_BALANCING);
err = irq_set_affinity(irq, cpumask_of(i % num_possible_cpus()));
if (err) {
irq_set_chained_handler_and_data(msi_group->gic_irq,
NULL, NULL);
pr_err("failed to set affinity for GIC IRQ");
return err;
}
irq_set_chained_handler_and_data(irq, xgene_msi_isr,
&xgene_msi_ctrl->gic_irq[i]);
}
return 0;
}
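IRQ_NO_BALANCING matters here: the hwirq encoding above relies on the static frame-to-CPU assignment made with irq_set_affinity(), so the flag keeps userspace IRQ balancers from migrating the chained GIC IRQs and silently breaking the scheme.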
static int xgene_msi_hwirq_free(unsigned int cpu)
{
struct xgene_msi *msi = &xgene_msi_ctrl;
struct xgene_msi_group *msi_group;
int i;
for (i = cpu; i < NR_HW_IRQS; i += msi->num_cpus) {
msi_group = &msi->msi_groups[i];
if (!msi_group->gic_irq)
continue;
irq_set_chained_handler_and_data(msi_group->gic_irq, NULL,
NULL);
}
return 0;
}
static const struct of_device_id xgene_msi_match_table[] = {
{.compatible = "apm,xgene1-msi"},
{},
@ -419,14 +347,15 @@ static const struct of_device_id xgene_msi_match_table[] = {
static int xgene_msi_probe(struct platform_device *pdev)
{
struct resource *res;
int rc, irq_index;
struct xgene_msi *xgene_msi;
int virt_msir;
u32 msi_val, msi_idx;
int rc;
xgene_msi = &xgene_msi_ctrl;
xgene_msi_ctrl = devm_kzalloc(&pdev->dev, sizeof(*xgene_msi_ctrl),
GFP_KERNEL);
if (!xgene_msi_ctrl)
return -ENOMEM;
platform_set_drvdata(pdev, xgene_msi);
xgene_msi = xgene_msi_ctrl;
xgene_msi->msi_regs = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
if (IS_ERR(xgene_msi->msi_regs)) {
@ -434,66 +363,26 @@ static int xgene_msi_probe(struct platform_device *pdev)
goto error;
}
xgene_msi->msi_addr = res->start;
xgene_msi->node = pdev->dev.of_node;
xgene_msi->num_cpus = num_possible_cpus();
rc = xgene_msi_init_allocator(xgene_msi);
rc = xgene_msi_init_allocator(&pdev->dev);
if (rc) {
dev_err(&pdev->dev, "Error allocating MSI bitmap\n");
goto error;
}
rc = xgene_allocate_domains(xgene_msi);
rc = xgene_allocate_domains(dev_of_node(&pdev->dev), xgene_msi);
if (rc) {
dev_err(&pdev->dev, "Failed to allocate MSI domain\n");
goto error;
}
for (irq_index = 0; irq_index < NR_HW_IRQS; irq_index++) {
virt_msir = platform_get_irq(pdev, irq_index);
if (virt_msir < 0) {
rc = virt_msir;
goto error;
}
xgene_msi->msi_groups[irq_index].gic_irq = virt_msir;
xgene_msi->msi_groups[irq_index].msi_grp = irq_index;
xgene_msi->msi_groups[irq_index].msi = xgene_msi;
}
/*
* MSInIRx registers are read-to-clear; before registering
* interrupt handlers, read all of them to clear spurious
* interrupts that may occur before the driver is probed.
*/
for (irq_index = 0; irq_index < NR_HW_IRQS; irq_index++) {
for (msi_idx = 0; msi_idx < IDX_PER_GROUP; msi_idx++)
xgene_msi_ir_read(xgene_msi, irq_index, msi_idx);
/* Read MSIINTn to confirm */
msi_val = xgene_msi_int_read(xgene_msi, irq_index);
if (msi_val) {
dev_err(&pdev->dev, "Failed to clear spurious IRQ\n");
rc = -EINVAL;
goto error;
}
}
rc = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "pci/xgene:online",
xgene_msi_hwirq_alloc, NULL);
if (rc < 0)
goto err_cpuhp;
pci_xgene_online = rc;
rc = cpuhp_setup_state(CPUHP_PCI_XGENE_DEAD, "pci/xgene:dead", NULL,
xgene_msi_hwirq_free);
rc = xgene_msi_handler_setup(pdev);
if (rc)
goto err_cpuhp;
goto error;
dev_info(&pdev->dev, "APM X-Gene PCIe MSI driver loaded\n");
return 0;
err_cpuhp:
dev_err(&pdev->dev, "failed to add CPU MSI notifier\n");
error:
xgene_msi_remove(pdev);
return rc;
@ -507,9 +396,4 @@ static struct platform_driver xgene_msi_driver = {
.probe = xgene_msi_probe,
.remove = xgene_msi_remove,
};
static int __init xgene_pcie_msi_init(void)
{
return platform_driver_register(&xgene_msi_driver);
}
subsys_initcall(xgene_pcie_msi_init);
builtin_platform_driver(xgene_msi_driver);


@ -12,6 +12,7 @@
#include <linux/jiffies.h>
#include <linux/memblock.h>
#include <linux/init.h>
#include <linux/irqdomain.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/of_pci.h>
@ -53,11 +54,9 @@
#define XGENE_V1_PCI_EXP_CAP 0x40
/* PCIe IP version */
#define XGENE_PCIE_IP_VER_UNKN 0
#define XGENE_PCIE_IP_VER_1 1
#define XGENE_PCIE_IP_VER_2 2
#if defined(CONFIG_PCI_XGENE) || (defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS))
struct xgene_pcie {
struct device_node *node;
struct device *dev;
@ -188,7 +187,6 @@ static int xgene_pcie_config_read32(struct pci_bus *bus, unsigned int devfn,
return PCIBIOS_SUCCESSFUL;
}
#endif
#if defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS)
static int xgene_get_csr_resource(struct acpi_device *adev,
@ -279,7 +277,6 @@ const struct pci_ecam_ops xgene_v2_pcie_ecam_ops = {
};
#endif
#if defined(CONFIG_PCI_XGENE)
static u64 xgene_pcie_set_ib_mask(struct xgene_pcie *port, u32 addr,
u32 flags, u64 size)
{
@ -594,6 +591,24 @@ static struct pci_ops xgene_pcie_ops = {
.write = pci_generic_config_write32,
};
static bool xgene_check_pcie_msi_ready(void)
{
struct device_node *np;
struct irq_domain *d;
if (!IS_ENABLED(CONFIG_PCI_XGENE_MSI))
return true;
np = of_find_compatible_node(NULL, NULL, "apm,xgene1-msi");
if (!np)
return true;
d = irq_find_matching_host(np, DOMAIN_BUS_PCI_MSI);
of_node_put(np);
return d && irq_domain_is_msi_parent(d);
}
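Note the intended asymmetry: if the DT carries no apm,xgene1-msi node at all, the host proceeds without MSI support rather than deferring forever, while a present-but-not-yet-ready MSI node (no domain, or a domain that is not an MSI parent yet) turns into -EPROBE_DEFER in the probe below.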
static int xgene_pcie_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
@ -602,6 +617,10 @@ static int xgene_pcie_probe(struct platform_device *pdev)
struct pci_host_bridge *bridge;
int ret;
if (!xgene_check_pcie_msi_ready())
return dev_err_probe(&pdev->dev, -EPROBE_DEFER,
"MSI driver not ready\n");
bridge = devm_pci_alloc_host_bridge(dev, sizeof(*port));
if (!bridge)
return -ENOMEM;
@ -610,10 +629,7 @@ static int xgene_pcie_probe(struct platform_device *pdev)
port->node = of_node_get(dn);
port->dev = dev;
port->version = XGENE_PCIE_IP_VER_UNKN;
if (of_device_is_compatible(port->node, "apm,xgene-pcie"))
port->version = XGENE_PCIE_IP_VER_1;
port->version = XGENE_PCIE_IP_VER_1;
ret = xgene_pcie_map_reg(port, pdev);
if (ret)
@ -647,4 +663,3 @@ static struct platform_driver xgene_pcie_driver = {
.probe = xgene_pcie_probe,
};
builtin_platform_driver(xgene_pcie_driver);
#endif


@ -9,6 +9,7 @@
#include <linux/interrupt.h>
#include <linux/irqchip/chained_irq.h>
#include <linux/irqchip/irq-msi-lib.h>
#include <linux/irqdomain.h>
#include <linux/init.h>
#include <linux/module.h>
@ -29,7 +30,6 @@ struct altera_msi {
DECLARE_BITMAP(used, MAX_MSI_VECTORS);
struct mutex lock; /* protect "used" bitmap */
struct platform_device *pdev;
struct irq_domain *msi_domain;
struct irq_domain *inner_domain;
void __iomem *csr_base;
void __iomem *vector_base;
@ -74,18 +74,20 @@ static void altera_msi_isr(struct irq_desc *desc)
chained_irq_exit(chip, desc);
}
static struct irq_chip altera_msi_irq_chip = {
.name = "Altera PCIe MSI",
.irq_mask = pci_msi_mask_irq,
.irq_unmask = pci_msi_unmask_irq,
};
#define ALTERA_MSI_FLAGS_REQUIRED (MSI_FLAG_USE_DEF_DOM_OPS | \
MSI_FLAG_USE_DEF_CHIP_OPS | \
MSI_FLAG_NO_AFFINITY)
static struct msi_domain_info altera_msi_domain_info = {
.flags = MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
MSI_FLAG_NO_AFFINITY | MSI_FLAG_PCI_MSIX,
.chip = &altera_msi_irq_chip,
};
#define ALTERA_MSI_FLAGS_SUPPORTED (MSI_GENERIC_FLAGS_MASK | \
MSI_FLAG_PCI_MSIX)
static const struct msi_parent_ops altera_msi_parent_ops = {
.required_flags = ALTERA_MSI_FLAGS_REQUIRED,
.supported_flags = ALTERA_MSI_FLAGS_SUPPORTED,
.bus_select_token = DOMAIN_BUS_PCI_MSI,
.prefix = "Altera-",
.init_dev_msi_info = msi_lib_init_dev_msi_info,
};
static void altera_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
{
struct altera_msi *msi = irq_data_get_irq_chip_data(data);
@ -164,20 +166,16 @@ static const struct irq_domain_ops msi_domain_ops = {
static int altera_allocate_domains(struct altera_msi *msi)
{
struct fwnode_handle *fwnode = of_fwnode_handle(msi->pdev->dev.of_node);
struct irq_domain_info info = {
.fwnode = dev_fwnode(&msi->pdev->dev),
.ops = &msi_domain_ops,
.host_data = msi,
.size = msi->num_of_vectors,
};
msi->inner_domain = irq_domain_create_linear(NULL, msi->num_of_vectors,
&msi_domain_ops, msi);
msi->inner_domain = msi_create_parent_irq_domain(&info, &altera_msi_parent_ops);
if (!msi->inner_domain) {
dev_err(&msi->pdev->dev, "failed to create IRQ domain\n");
return -ENOMEM;
}
msi->msi_domain = pci_msi_create_irq_domain(fwnode,
&altera_msi_domain_info, msi->inner_domain);
if (!msi->msi_domain) {
dev_err(&msi->pdev->dev, "failed to create MSI domain\n");
irq_domain_remove(msi->inner_domain);
return -ENOMEM;
}
@ -186,7 +184,6 @@ static int altera_allocate_domains(struct altera_msi *msi)
static void altera_free_domains(struct altera_msi *msi)
{
irq_domain_remove(msi->msi_domain);
irq_domain_remove(msi->inner_domain);
}


@ -852,10 +852,9 @@ static void aglx_isr(struct irq_desc *desc)
static int altera_pcie_init_irq_domain(struct altera_pcie *pcie)
{
struct device *dev = &pcie->pdev->dev;
struct device_node *node = dev->of_node;
/* Setup INTx */
pcie->irq_domain = irq_domain_create_linear(of_fwnode_handle(node), PCI_NUM_INTX,
pcie->irq_domain = irq_domain_create_linear(dev_fwnode(dev), PCI_NUM_INTX,
&intx_domain_ops, pcie);
if (!pcie->irq_domain) {
dev_err(dev, "Failed to get a INTx IRQ domain\n");


@ -12,6 +12,7 @@
#include <linux/iopoll.h>
#include <linux/ioport.h>
#include <linux/irqchip/chained_irq.h>
#include <linux/irqchip/irq-msi-lib.h>
#include <linux/irqdomain.h>
#include <linux/kernel.h>
#include <linux/list.h>
@ -46,6 +47,7 @@
#define PCIE_RC_CFG_PRIV1_ID_VAL3_CLASS_CODE_MASK 0xffffff
#define PCIE_RC_CFG_PRIV1_LINK_CAPABILITY 0x04dc
#define PCIE_RC_CFG_PRIV1_LINK_CAPABILITY_MAX_LINK_WIDTH_MASK 0x1f0
#define PCIE_RC_CFG_PRIV1_LINK_CAPABILITY_ASPM_SUPPORT_MASK 0xc00
#define PCIE_RC_CFG_PRIV1_ROOT_CAP 0x4f8
@ -55,6 +57,9 @@
#define PCIE_RC_DL_MDIO_WR_DATA 0x1104
#define PCIE_RC_DL_MDIO_RD_DATA 0x1108
#define PCIE_RC_PL_REG_PHY_CTL_1 0x1804
#define PCIE_RC_PL_REG_PHY_CTL_1_REG_P2_POWERDOWN_ENA_NOSYNC_MASK 0x8
#define PCIE_RC_PL_PHY_CTL_15 0x184c
#define PCIE_RC_PL_PHY_CTL_15_DIS_PLL_PD_MASK 0x400000
#define PCIE_RC_PL_PHY_CTL_15_PM_CLK_PERIOD_MASK 0xff
@ -265,7 +270,6 @@ struct brcm_msi {
struct device *dev;
void __iomem *base;
struct device_node *np;
struct irq_domain *msi_domain;
struct irq_domain *inner_domain;
struct mutex lock; /* guards the alloc/free operations */
u64 target_addr;
@ -465,17 +469,20 @@ static void brcm_pcie_set_outbound_win(struct brcm_pcie *pcie,
writel(tmp, pcie->base + PCIE_MEM_WIN0_LIMIT_HI(win));
}
static struct irq_chip brcm_msi_irq_chip = {
.name = "BRCM STB PCIe MSI",
.irq_ack = irq_chip_ack_parent,
.irq_mask = pci_msi_mask_irq,
.irq_unmask = pci_msi_unmask_irq,
};
#define BRCM_MSI_FLAGS_REQUIRED (MSI_FLAG_USE_DEF_DOM_OPS | \
MSI_FLAG_USE_DEF_CHIP_OPS | \
MSI_FLAG_NO_AFFINITY)
static struct msi_domain_info brcm_msi_domain_info = {
.flags = MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
MSI_FLAG_NO_AFFINITY | MSI_FLAG_MULTI_PCI_MSI,
.chip = &brcm_msi_irq_chip,
#define BRCM_MSI_FLAGS_SUPPORTED (MSI_GENERIC_FLAGS_MASK | \
MSI_FLAG_MULTI_PCI_MSI)
static const struct msi_parent_ops brcm_msi_parent_ops = {
.required_flags = BRCM_MSI_FLAGS_REQUIRED,
.supported_flags = BRCM_MSI_FLAGS_SUPPORTED,
.bus_select_token = DOMAIN_BUS_PCI_MSI,
.chip_flags = MSI_CHIP_FLAG_SET_ACK,
.prefix = "BRCM-",
.init_dev_msi_info = msi_lib_init_dev_msi_info,
};
static void brcm_pcie_msi_isr(struct irq_desc *desc)
@ -581,21 +588,18 @@ static const struct irq_domain_ops msi_domain_ops = {
static int brcm_allocate_domains(struct brcm_msi *msi)
{
struct fwnode_handle *fwnode = of_fwnode_handle(msi->np);
struct device *dev = msi->dev;
msi->inner_domain = irq_domain_create_linear(NULL, msi->nr, &msi_domain_ops, msi);
if (!msi->inner_domain) {
dev_err(dev, "failed to create IRQ domain\n");
return -ENOMEM;
}
struct irq_domain_info info = {
.fwnode = of_fwnode_handle(msi->np),
.ops = &msi_domain_ops,
.host_data = msi,
.size = msi->nr,
};
msi->msi_domain = pci_msi_create_irq_domain(fwnode,
&brcm_msi_domain_info,
msi->inner_domain);
if (!msi->msi_domain) {
msi->inner_domain = msi_create_parent_irq_domain(&info, &brcm_msi_parent_ops);
if (!msi->inner_domain) {
dev_err(dev, "failed to create MSI domain\n");
irq_domain_remove(msi->inner_domain);
return -ENOMEM;
}
@ -604,7 +608,6 @@ static int brcm_allocate_domains(struct brcm_msi *msi)
static void brcm_free_domains(struct brcm_msi *msi)
{
irq_domain_remove(msi->msi_domain);
irq_domain_remove(msi->inner_domain);
}
@ -970,7 +973,7 @@ static int brcm_pcie_get_inbound_wins(struct brcm_pcie *pcie,
*
* The PCIe host controller by design must set the inbound viewport to
* be a contiguous arrangement of all of the system's memory. In
* addition, its size mut be a power of two. To further complicate
* addition, its size must be a power of two. To further complicate
* matters, the viewport must start on a pcie-address that is aligned
* on a multiple of its size. If a portion of the viewport does not
* represent system memory -- e.g. 3GB of memory requires a 4GB
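To make the constraint concrete (an illustration only; the helper below is invented, not brcmstb code): the smallest legal viewport is the system-memory size rounded up to the next power of two, and its PCIe start address must then be a multiple of that size.

#include <linux/log2.h>

/* e.g. 3 GB of system memory -> a 4 GB viewport, which must be 4 GB aligned */
static unsigned long smallest_inbound_viewport(unsigned long sysmem_size)
{
	return roundup_pow_of_two(sysmem_size);
}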
@ -1072,7 +1075,7 @@ static int brcm_pcie_setup(struct brcm_pcie *pcie)
void __iomem *base = pcie->base;
struct pci_host_bridge *bridge;
struct resource_entry *entry;
u32 tmp, burst, aspm_support;
u32 tmp, burst, aspm_support, num_lanes, num_lanes_cap;
u8 num_out_wins = 0;
int num_inbound_wins = 0;
int memc, ret;
@ -1180,6 +1183,27 @@ static int brcm_pcie_setup(struct brcm_pcie *pcie)
PCIE_RC_CFG_PRIV1_LINK_CAPABILITY_ASPM_SUPPORT_MASK);
writel(tmp, base + PCIE_RC_CFG_PRIV1_LINK_CAPABILITY);
/* 'tmp' still holds the contents of PRIV1_LINK_CAPABILITY */
num_lanes_cap = u32_get_bits(tmp, PCIE_RC_CFG_PRIV1_LINK_CAPABILITY_MAX_LINK_WIDTH_MASK);
num_lanes = 0;
/*
* Use hardware negotiated Max Link Width value by default. If the
* "num-lanes" DT property is present, assume that the chip's default
* link width capability information is incorrect/undesired and use the
* specified value instead.
*/
if (!of_property_read_u32(pcie->np, "num-lanes", &num_lanes) &&
num_lanes && num_lanes <= 4 && num_lanes_cap != num_lanes) {
u32p_replace_bits(&tmp, num_lanes,
PCIE_RC_CFG_PRIV1_LINK_CAPABILITY_MAX_LINK_WIDTH_MASK);
writel(tmp, base + PCIE_RC_CFG_PRIV1_LINK_CAPABILITY);
tmp = readl(base + PCIE_RC_PL_REG_PHY_CTL_1);
u32p_replace_bits(&tmp, 1,
PCIE_RC_PL_REG_PHY_CTL_1_REG_P2_POWERDOWN_ENA_NOSYNC_MASK);
writel(tmp, base + PCIE_RC_PL_REG_PHY_CTL_1);
}
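The bitfield helpers used in this hunk come from <linux/bitfield.h>. A quick sketch of how they behave, with invented values (0x1f0 is the MAX_LINK_WIDTH mask, bits [8:4]):

#include <linux/bitfield.h>
#include <linux/types.h>

static void bitfield_example(void)
{
	u32 tmp = 0x40;				/* field currently holds 4 (x4 link) */
	u32 width = u32_get_bits(tmp, 0x1f0);	/* (0x40 & 0x1f0) >> 4 == 4 */

	u32p_replace_bits(&tmp, 2, 0x1f0);	/* tmp becomes 0x20: field now 2 */
	(void)width;
}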
/*
* For config space accesses on the RC, show the right class for
* a PCIe-PCIe bridge (the default setting is to be EP mode).
@ -1333,11 +1357,7 @@ static int brcm_pcie_start_link(struct brcm_pcie *pcie)
if (ret)
return ret;
/*
* Wait for 100ms after PERST# deassertion; see PCIe CEM specification
* sections 2.2, PCIe r5.0, 6.6.1.
*/
msleep(100);
msleep(PCIE_RESET_CONFIG_WAIT_MS);
/*
* Give the RC/EP even more time to wake up, before trying to


@ -5,6 +5,7 @@
#include <linux/interrupt.h>
#include <linux/irqchip/chained_irq.h>
#include <linux/irqchip/irq-msi-lib.h>
#include <linux/irqdomain.h>
#include <linux/msi.h>
#include <linux/of_irq.h>
@ -81,7 +82,6 @@ struct iproc_msi_grp {
* @bitmap_lock: lock to protect access to the MSI bitmap
* @nr_msi_vecs: total number of MSI vectors
* @inner_domain: inner IRQ domain
* @msi_domain: MSI IRQ domain
* @nr_eq_region: required number of 4K aligned memory region for MSI event
* queues
* @nr_msi_region: required number of 4K aligned address region for MSI posted
@ -101,7 +101,6 @@ struct iproc_msi {
struct mutex bitmap_lock;
unsigned int nr_msi_vecs;
struct irq_domain *inner_domain;
struct irq_domain *msi_domain;
unsigned int nr_eq_region;
unsigned int nr_msi_region;
void *eq_cpu;
@ -165,16 +164,18 @@ static inline unsigned int iproc_msi_eq_offset(struct iproc_msi *msi, u32 eq)
return eq * EQ_LEN * sizeof(u32);
}
static struct irq_chip iproc_msi_irq_chip = {
.name = "iProc-MSI",
};
#define IPROC_MSI_FLAGS_REQUIRED (MSI_FLAG_USE_DEF_DOM_OPS | \
MSI_FLAG_USE_DEF_CHIP_OPS)
#define IPROC_MSI_FLAGS_SUPPORTED (MSI_GENERIC_FLAGS_MASK | \
MSI_FLAG_PCI_MSIX)
static struct msi_domain_info iproc_msi_domain_info = {
.flags = MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
MSI_FLAG_PCI_MSIX,
.chip = &iproc_msi_irq_chip,
static struct msi_parent_ops iproc_msi_parent_ops = {
.required_flags = IPROC_MSI_FLAGS_REQUIRED,
.supported_flags = IPROC_MSI_FLAGS_SUPPORTED,
.bus_select_token = DOMAIN_BUS_PCI_MSI,
.prefix = "iProc-",
.init_dev_msi_info = msi_lib_init_dev_msi_info,
};
/*
* In iProc PCIe core, each MSI group is serviced by a GIC interrupt and a
* dedicated event queue. Each MSI group can support up to 64 MSI vectors.
@ -446,27 +447,22 @@ static void iproc_msi_disable(struct iproc_msi *msi)
static int iproc_msi_alloc_domains(struct device_node *node,
struct iproc_msi *msi)
{
msi->inner_domain = irq_domain_create_linear(NULL, msi->nr_msi_vecs,
&msi_domain_ops, msi);
struct irq_domain_info info = {
.fwnode = of_fwnode_handle(node),
.ops = &msi_domain_ops,
.host_data = msi,
.size = msi->nr_msi_vecs,
};
msi->inner_domain = msi_create_parent_irq_domain(&info, &iproc_msi_parent_ops);
if (!msi->inner_domain)
return -ENOMEM;
msi->msi_domain = pci_msi_create_irq_domain(of_fwnode_handle(node),
&iproc_msi_domain_info,
msi->inner_domain);
if (!msi->msi_domain) {
irq_domain_remove(msi->inner_domain);
return -ENOMEM;
}
return 0;
}
static void iproc_msi_free_domains(struct iproc_msi *msi)
{
if (msi->msi_domain)
irq_domain_remove(msi->msi_domain);
if (msi->inner_domain)
irq_domain_remove(msi->inner_domain);
}
@ -542,7 +538,7 @@ int iproc_msi_init(struct iproc_pcie *pcie, struct device_node *node)
msi->nr_cpus = num_possible_cpus();
if (msi->nr_cpus == 1)
iproc_msi_domain_info.flags |= MSI_FLAG_MULTI_PCI_MSI;
iproc_msi_parent_ops.supported_flags |= MSI_FLAG_MULTI_PCI_MSI;
msi->nr_irqs = of_irq_count(node);
if (!msi->nr_irqs) {


@ -12,6 +12,7 @@
#include <linux/delay.h>
#include <linux/iopoll.h>
#include <linux/irq.h>
#include <linux/irqchip/irq-msi-lib.h>
#include <linux/irqchip/chained_irq.h>
#include <linux/irqdomain.h>
#include <linux/kernel.h>
@ -187,7 +188,6 @@ struct mtk_msi_set {
* @saved_irq_state: IRQ enable state saved at suspend time
* @irq_lock: lock protecting IRQ register access
* @intx_domain: legacy INTx IRQ domain
* @msi_domain: MSI IRQ domain
* @msi_bottom_domain: MSI IRQ bottom domain
* @msi_sets: MSI sets information
* @lock: lock protecting IRQ bit map
@ -210,7 +210,6 @@ struct mtk_gen3_pcie {
u32 saved_irq_state;
raw_spinlock_t irq_lock;
struct irq_domain *intx_domain;
struct irq_domain *msi_domain;
struct irq_domain *msi_bottom_domain;
struct mtk_msi_set msi_sets[PCIE_MSI_SET_NUM];
struct mutex lock;
@ -526,30 +525,22 @@ static int mtk_pcie_startup_port(struct mtk_gen3_pcie *pcie)
return 0;
}
static void mtk_pcie_msi_irq_mask(struct irq_data *data)
{
pci_msi_mask_irq(data);
irq_chip_mask_parent(data);
}
#define MTK_MSI_FLAGS_REQUIRED (MSI_FLAG_USE_DEF_DOM_OPS | \
MSI_FLAG_USE_DEF_CHIP_OPS | \
MSI_FLAG_NO_AFFINITY | \
MSI_FLAG_PCI_MSI_MASK_PARENT)
static void mtk_pcie_msi_irq_unmask(struct irq_data *data)
{
pci_msi_unmask_irq(data);
irq_chip_unmask_parent(data);
}
#define MTK_MSI_FLAGS_SUPPORTED (MSI_GENERIC_FLAGS_MASK | \
MSI_FLAG_PCI_MSIX | \
MSI_FLAG_MULTI_PCI_MSI)
static struct irq_chip mtk_msi_irq_chip = {
.irq_ack = irq_chip_ack_parent,
.irq_mask = mtk_pcie_msi_irq_mask,
.irq_unmask = mtk_pcie_msi_irq_unmask,
.name = "MSI",
};
static struct msi_domain_info mtk_msi_domain_info = {
.flags = MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
MSI_FLAG_NO_AFFINITY | MSI_FLAG_PCI_MSIX |
MSI_FLAG_MULTI_PCI_MSI,
.chip = &mtk_msi_irq_chip,
static const struct msi_parent_ops mtk_msi_parent_ops = {
.required_flags = MTK_MSI_FLAGS_REQUIRED,
.supported_flags = MTK_MSI_FLAGS_SUPPORTED,
.bus_select_token = DOMAIN_BUS_PCI_MSI,
.chip_flags = MSI_CHIP_FLAG_SET_ACK,
.prefix = "MTK3-",
.init_dev_msi_info = msi_lib_init_dev_msi_info,
};
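The removed mtk_pcie_msi_irq_mask()/unmask() wrappers are not lost: MSI_FLAG_PCI_MSI_MASK_PARENT asks the PCI/MSI core to propagate mask and unmask to the parent domain itself. Roughly equivalent to the wrapper it replaces (a simplified sketch of the effect, not the core's literal code):

#include <linux/irq.h>
#include <linux/msi.h>

static void mask_like_the_old_wrapper(struct irq_data *data)
{
	pci_msi_mask_irq(data);		/* mask at the PCI device ... */
	irq_chip_mask_parent(data);	/* ... and in the parent domain */
}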
static void mtk_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
@ -756,29 +747,23 @@ static int mtk_pcie_init_irq_domains(struct mtk_gen3_pcie *pcie)
/* Setup MSI */
mutex_init(&pcie->lock);
pcie->msi_bottom_domain = irq_domain_create_linear(of_fwnode_handle(node),
PCIE_MSI_IRQS_NUM,
&mtk_msi_bottom_domain_ops, pcie);
struct irq_domain_info info = {
.fwnode = dev_fwnode(dev),
.ops = &mtk_msi_bottom_domain_ops,
.host_data = pcie,
.size = PCIE_MSI_IRQS_NUM,
};
pcie->msi_bottom_domain = msi_create_parent_irq_domain(&info, &mtk_msi_parent_ops);
if (!pcie->msi_bottom_domain) {
dev_err(dev, "failed to create MSI bottom domain\n");
ret = -ENODEV;
goto err_msi_bottom_domain;
}
pcie->msi_domain = pci_msi_create_irq_domain(dev->fwnode,
&mtk_msi_domain_info,
pcie->msi_bottom_domain);
if (!pcie->msi_domain) {
dev_err(dev, "failed to create MSI domain\n");
ret = -ENODEV;
goto err_msi_domain;
}
of_node_put(intc_node);
return 0;
err_msi_domain:
irq_domain_remove(pcie->msi_bottom_domain);
err_msi_bottom_domain:
irq_domain_remove(pcie->intx_domain);
out_put_node:
@ -793,9 +778,6 @@ static void mtk_pcie_irq_teardown(struct mtk_gen3_pcie *pcie)
if (pcie->intx_domain)
irq_domain_remove(pcie->intx_domain);
if (pcie->msi_domain)
irq_domain_remove(pcie->msi_domain);
if (pcie->msi_bottom_domain)
irq_domain_remove(pcie->msi_bottom_domain);


@ -12,6 +12,7 @@
#include <linux/iopoll.h>
#include <linux/irq.h>
#include <linux/irqchip/chained_irq.h>
#include <linux/irqchip/irq-msi-lib.h>
#include <linux/irqdomain.h>
#include <linux/kernel.h>
#include <linux/mfd/syscon.h>
@ -180,7 +181,6 @@ struct mtk_pcie_soc {
* @irq: GIC irq
* @irq_domain: legacy INTx IRQ domain
* @inner_domain: inner IRQ domain
* @msi_domain: MSI IRQ domain
* @lock: protect the msi_irq_in_use bitmap
* @msi_irq_in_use: bit map for assigned MSI IRQ
*/
@ -200,7 +200,6 @@ struct mtk_pcie_port {
int irq;
struct irq_domain *irq_domain;
struct irq_domain *inner_domain;
struct irq_domain *msi_domain;
struct mutex lock;
DECLARE_BITMAP(msi_irq_in_use, MTK_MSI_IRQS_NUM);
};
@ -470,40 +469,39 @@ static const struct irq_domain_ops msi_domain_ops = {
.free = mtk_pcie_irq_domain_free,
};
static struct irq_chip mtk_msi_irq_chip = {
.name = "MTK PCIe MSI",
.irq_ack = irq_chip_ack_parent,
.irq_mask = pci_msi_mask_irq,
.irq_unmask = pci_msi_unmask_irq,
};
#define MTK_MSI_FLAGS_REQUIRED (MSI_FLAG_USE_DEF_DOM_OPS | \
MSI_FLAG_USE_DEF_CHIP_OPS | \
MSI_FLAG_NO_AFFINITY)
static struct msi_domain_info mtk_msi_domain_info = {
.flags = MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
MSI_FLAG_NO_AFFINITY | MSI_FLAG_PCI_MSIX,
.chip = &mtk_msi_irq_chip,
#define MTK_MSI_FLAGS_SUPPORTED (MSI_GENERIC_FLAGS_MASK | \
MSI_FLAG_PCI_MSIX)
static const struct msi_parent_ops mtk_msi_parent_ops = {
.required_flags = MTK_MSI_FLAGS_REQUIRED,
.supported_flags = MTK_MSI_FLAGS_SUPPORTED,
.bus_select_token = DOMAIN_BUS_PCI_MSI,
.chip_flags = MSI_CHIP_FLAG_SET_ACK,
.prefix = "MTK-",
.init_dev_msi_info = msi_lib_init_dev_msi_info,
};
static int mtk_pcie_allocate_msi_domains(struct mtk_pcie_port *port)
{
struct fwnode_handle *fwnode = of_fwnode_handle(port->pcie->dev->of_node);
mutex_init(&port->lock);
port->inner_domain = irq_domain_create_linear(fwnode, MTK_MSI_IRQS_NUM,
&msi_domain_ops, port);
struct irq_domain_info info = {
.fwnode = dev_fwnode(port->pcie->dev),
.ops = &msi_domain_ops,
.host_data = port,
.size = MTK_MSI_IRQS_NUM,
};
port->inner_domain = msi_create_parent_irq_domain(&info, &mtk_msi_parent_ops);
if (!port->inner_domain) {
dev_err(port->pcie->dev, "failed to create IRQ domain\n");
return -ENOMEM;
}
port->msi_domain = pci_msi_create_irq_domain(fwnode, &mtk_msi_domain_info,
port->inner_domain);
if (!port->msi_domain) {
dev_err(port->pcie->dev, "failed to create MSI domain\n");
irq_domain_remove(port->inner_domain);
return -ENOMEM;
}
return 0;
}
@ -532,8 +530,6 @@ static void mtk_pcie_irq_teardown(struct mtk_pcie *pcie)
irq_domain_remove(port->irq_domain);
if (IS_ENABLED(CONFIG_PCI_MSI)) {
if (port->msi_domain)
irq_domain_remove(port->msi_domain);
if (port->inner_domain)
irq_domain_remove(port->inner_domain);
}


@ -17,6 +17,7 @@
#include <linux/delay.h>
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/irqchip/irq-msi-lib.h>
#include <linux/irqdomain.h>
#include <linux/kernel.h>
#include <linux/init.h>
@ -597,30 +598,6 @@ static irqreturn_t rcar_pcie_msi_irq(int irq, void *data)
return IRQ_HANDLED;
}
static void rcar_msi_top_irq_ack(struct irq_data *d)
{
irq_chip_ack_parent(d);
}
static void rcar_msi_top_irq_mask(struct irq_data *d)
{
pci_msi_mask_irq(d);
irq_chip_mask_parent(d);
}
static void rcar_msi_top_irq_unmask(struct irq_data *d)
{
pci_msi_unmask_irq(d);
irq_chip_unmask_parent(d);
}
static struct irq_chip rcar_msi_top_chip = {
.name = "PCIe MSI",
.irq_ack = rcar_msi_top_irq_ack,
.irq_mask = rcar_msi_top_irq_mask,
.irq_unmask = rcar_msi_top_irq_unmask,
};
static void rcar_msi_irq_ack(struct irq_data *d)
{
struct rcar_msi *msi = irq_data_get_irq_chip_data(d);
@ -718,30 +695,36 @@ static const struct irq_domain_ops rcar_msi_domain_ops = {
.free = rcar_msi_domain_free,
};
static struct msi_domain_info rcar_msi_info = {
.flags = MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
MSI_FLAG_NO_AFFINITY | MSI_FLAG_MULTI_PCI_MSI,
.chip = &rcar_msi_top_chip,
#define RCAR_MSI_FLAGS_REQUIRED (MSI_FLAG_USE_DEF_DOM_OPS | \
MSI_FLAG_USE_DEF_CHIP_OPS | \
MSI_FLAG_PCI_MSI_MASK_PARENT | \
MSI_FLAG_NO_AFFINITY)
#define RCAR_MSI_FLAGS_SUPPORTED (MSI_GENERIC_FLAGS_MASK | \
MSI_FLAG_MULTI_PCI_MSI)
static const struct msi_parent_ops rcar_msi_parent_ops = {
.required_flags = RCAR_MSI_FLAGS_REQUIRED,
.supported_flags = RCAR_MSI_FLAGS_SUPPORTED,
.bus_select_token = DOMAIN_BUS_PCI_MSI,
.chip_flags = MSI_CHIP_FLAG_SET_ACK,
.prefix = "RCAR-",
.init_dev_msi_info = msi_lib_init_dev_msi_info,
};
static int rcar_allocate_domains(struct rcar_msi *msi)
{
struct rcar_pcie *pcie = &msi_to_host(msi)->pcie;
struct fwnode_handle *fwnode = dev_fwnode(pcie->dev);
struct irq_domain *parent;
struct irq_domain_info info = {
.fwnode = dev_fwnode(pcie->dev),
.ops = &rcar_msi_domain_ops,
.host_data = msi,
.size = INT_PCI_MSI_NR,
};
parent = irq_domain_create_linear(fwnode, INT_PCI_MSI_NR,
&rcar_msi_domain_ops, msi);
if (!parent) {
dev_err(pcie->dev, "failed to create IRQ domain\n");
return -ENOMEM;
}
irq_domain_update_bus_token(parent, DOMAIN_BUS_NEXUS);
msi->domain = pci_msi_create_irq_domain(fwnode, &rcar_msi_info, parent);
msi->domain = msi_create_parent_irq_domain(&info, &rcar_msi_parent_ops);
if (!msi->domain) {
dev_err(pcie->dev, "failed to create MSI domain\n");
irq_domain_remove(parent);
dev_err(pcie->dev, "failed to create IRQ domain\n");
return -ENOMEM;
}
@ -750,10 +733,7 @@ static int rcar_allocate_domains(struct rcar_msi *msi)
static void rcar_free_domains(struct rcar_msi *msi)
{
struct irq_domain *parent = msi->domain->parent;
irq_domain_remove(msi->domain);
irq_domain_remove(parent);
}
static int rcar_pcie_enable_msi(struct rcar_pcie_host *host)


@ -518,9 +518,9 @@ static void rockchip_pcie_ep_retrain_link(struct rockchip_pcie *rockchip)
{
u32 status;
status = rockchip_pcie_read(rockchip, PCIE_EP_CONFIG_LCS);
status = rockchip_pcie_read(rockchip, PCIE_EP_CONFIG_BASE + PCI_EXP_LNKCTL);
status |= PCI_EXP_LNKCTL_RL;
rockchip_pcie_write(rockchip, status, PCIE_EP_CONFIG_LCS);
rockchip_pcie_write(rockchip, status, PCIE_EP_CONFIG_BASE + PCI_EXP_LNKCTL);
}
static bool rockchip_pcie_ep_link_up(struct rockchip_pcie *rockchip)


@ -11,27 +11,19 @@
* ARM PCI Host generic driver.
*/
#include <linux/bitfield.h>
#include <linux/bitrev.h>
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/gpio/consumer.h>
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/iopoll.h>
#include <linux/irq.h>
#include <linux/irqchip/chained_irq.h>
#include <linux/irqdomain.h>
#include <linux/kernel.h>
#include <linux/mfd/syscon.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_pci.h>
#include <linux/pci.h>
#include <linux/pci_ids.h>
#include <linux/phy/phy.h>
#include <linux/platform_device.h>
#include <linux/reset.h>
#include <linux/regmap.h>
#include "../pci.h"
#include "pcie-rockchip.h"
@ -40,18 +32,18 @@ static void rockchip_pcie_enable_bw_int(struct rockchip_pcie *rockchip)
{
u32 status;
status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_LCS);
status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_CR + PCI_EXP_LNKCTL);
status |= (PCI_EXP_LNKCTL_LBMIE | PCI_EXP_LNKCTL_LABIE);
rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_LCS);
rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_CR + PCI_EXP_LNKCTL);
}
static void rockchip_pcie_clr_bw_int(struct rockchip_pcie *rockchip)
{
u32 status;
status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_LCS);
status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_CR + PCI_EXP_LNKCTL);
status |= (PCI_EXP_LNKSTA_LBMS | PCI_EXP_LNKSTA_LABS) << 16;
rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_LCS);
rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_CR + PCI_EXP_LNKCTL);
}
static void rockchip_pcie_update_txcredit_mui(struct rockchip_pcie *rockchip)
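The register conversions in this file are mechanical once you see that PCIE_RC_CONFIG_CR (0xc0, per the header diff further down) is the root port's PCIe capability base, so the standard offsets from <uapi/linux/pci_regs.h> reconstruct the old hard-coded macros exactly:

/*
 * 0xc0 + PCI_EXP_DEVCAP (0x04) = 0xc4 = old PCIE_RC_CONFIG_DCR
 * 0xc0 + PCI_EXP_DEVCTL (0x08) = 0xc8 = old PCIE_RC_CONFIG_DCSR
 * 0xc0 + PCI_EXP_LNKCAP (0x0c) = 0xcc = old PCIE_RC_CONFIG_LINK_CAP
 * 0xc0 + PCI_EXP_LNKCTL (0x10) = 0xd0 = old PCIE_RC_CONFIG_LCS
 */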
@ -269,7 +261,7 @@ static void rockchip_pcie_set_power_limit(struct rockchip_pcie *rockchip)
scale = 3; /* 0.001x */
curr = curr / 1000; /* convert to mA */
power = (curr * 3300) / 1000; /* milliwatt */
while (power > PCIE_RC_CONFIG_DCR_CSPL_LIMIT) {
while (power > FIELD_MAX(PCI_EXP_DEVCAP_PWR_VAL)) {
if (!scale) {
dev_warn(rockchip->dev, "invalid power supply\n");
return;
@ -278,10 +270,10 @@ static void rockchip_pcie_set_power_limit(struct rockchip_pcie *rockchip)
power = power / 10;
}
status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_DCR);
status |= (power << PCIE_RC_CONFIG_DCR_CSPL_SHIFT) |
(scale << PCIE_RC_CONFIG_DCR_CPLS_SHIFT);
rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_DCR);
status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_CR + PCI_EXP_DEVCAP);
status |= FIELD_PREP(PCI_EXP_DEVCAP_PWR_VAL, power);
status |= FIELD_PREP(PCI_EXP_DEVCAP_PWR_SCL, scale);
rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_CR + PCI_EXP_DEVCAP);
}
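A worked pass through the scaling loop above, using an invented 3.3 V / 2 A supply:

/*
 * power = (2000 mA * 3300 mV) / 1000 = 6600 mW, scale = 3 (0.001x)
 * 6600 > FIELD_MAX(PCI_EXP_DEVCAP_PWR_VAL) (= 255) -> power = 660, scale = 2
 *  660 > 255                                       -> power =  66, scale = 1
 *
 * Encoded Slot Power Limit: PWR_VAL = 66, PWR_SCL = 1 (0.1x), i.e. 6.6 W,
 * which matches 2 A at 3.3 V.
 */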
/**
@ -309,14 +301,14 @@ static int rockchip_pcie_host_init_port(struct rockchip_pcie *rockchip)
rockchip_pcie_set_power_limit(rockchip);
/* Set RC's clock architecture as common clock */
status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_LCS);
status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_CR + PCI_EXP_LNKCTL);
status |= PCI_EXP_LNKSTA_SLC << 16;
rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_LCS);
rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_CR + PCI_EXP_LNKCTL);
/* Set RC's RCB to 128 */
status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_LCS);
status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_CR + PCI_EXP_LNKCTL);
status |= PCI_EXP_LNKCTL_RCB;
rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_LCS);
rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_CR + PCI_EXP_LNKCTL);
/* Enable Gen1 training */
rockchip_pcie_write(rockchip, PCIE_CLIENT_LINK_TRAIN_ENABLE,
@ -325,7 +317,7 @@ static int rockchip_pcie_host_init_port(struct rockchip_pcie *rockchip)
msleep(PCIE_T_PVPERL_MS);
gpiod_set_value_cansleep(rockchip->perst_gpio, 1);
msleep(PCIE_T_RRS_READY_MS);
msleep(PCIE_RESET_CONFIG_WAIT_MS);
/* 500ms timeout value should be enough for Gen1/2 training */
err = readl_poll_timeout(rockchip->apb_base + PCIE_CLIENT_BASIC_STATUS1,
@ -341,9 +333,13 @@ static int rockchip_pcie_host_init_port(struct rockchip_pcie *rockchip)
* Enable retrain for gen2. This should be configured only after
* gen1 finished.
*/
status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_LCS);
status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_CR + PCI_EXP_LNKCTL2);
status &= ~PCI_EXP_LNKCTL2_TLS;
status |= PCI_EXP_LNKCTL2_TLS_5_0GT;
rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_CR + PCI_EXP_LNKCTL2);
status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_CR + PCI_EXP_LNKCTL);
status |= PCI_EXP_LNKCTL_RL;
rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_LCS);
rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_CR + PCI_EXP_LNKCTL);
err = readl_poll_timeout(rockchip->apb_base + PCIE_CORE_CTRL,
status, PCIE_LINK_IS_GEN2(status), 20,
@ -380,15 +376,15 @@ static int rockchip_pcie_host_init_port(struct rockchip_pcie *rockchip)
/* Clear L0s from RC's link cap */
if (of_property_read_bool(dev->of_node, "aspm-no-l0s")) {
status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_LINK_CAP);
status &= ~PCIE_RC_CONFIG_LINK_CAP_L0S;
rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_LINK_CAP);
status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_CR + PCI_EXP_LNKCAP);
status &= ~PCI_EXP_LNKCAP_ASPM_L0S;
rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_CR + PCI_EXP_LNKCAP);
}
status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_DCSR);
status &= ~PCIE_RC_CONFIG_DCSR_MPS_MASK;
status |= PCIE_RC_CONFIG_DCSR_MPS_256;
rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_DCSR);
status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_CR + PCI_EXP_DEVCTL);
status &= ~PCI_EXP_DEVCTL_PAYLOAD;
status |= PCI_EXP_DEVCTL_PAYLOAD_256B;
rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_CR + PCI_EXP_DEVCTL);
return 0;
err_power_off_phy:
@ -439,7 +435,7 @@ static irqreturn_t rockchip_pcie_subsys_irq_handler(int irq, void *arg)
dev_dbg(dev, "malformed TLP received from the link\n");
if (sub_reg & PCIE_CORE_INT_UCR)
dev_dbg(dev, "malformed TLP received from the link\n");
dev_dbg(dev, "Unexpected Completion received from the link\n");
if (sub_reg & PCIE_CORE_INT_FCE)
dev_dbg(dev, "an error was observed in the flow control advertisements from the other side\n");
@ -489,7 +485,7 @@ static irqreturn_t rockchip_pcie_client_irq_handler(int irq, void *arg)
dev_dbg(dev, "fatal error interrupt received\n");
if (reg & PCIE_CLIENT_INT_NFATAL_ERR)
dev_dbg(dev, "no fatal error interrupt received\n");
dev_dbg(dev, "non fatal error interrupt received\n");
if (reg & PCIE_CLIENT_INT_CORR_ERR)
dev_dbg(dev, "correctable error interrupt received\n");


@ -155,17 +155,7 @@
#define PCIE_EP_CONFIG_DID_VID (PCIE_EP_CONFIG_BASE + 0x00)
#define PCIE_EP_CONFIG_LCS (PCIE_EP_CONFIG_BASE + 0xd0)
#define PCIE_RC_CONFIG_RID_CCR (PCIE_RC_CONFIG_BASE + 0x08)
#define PCIE_RC_CONFIG_DCR (PCIE_RC_CONFIG_BASE + 0xc4)
#define PCIE_RC_CONFIG_DCR_CSPL_SHIFT 18
#define PCIE_RC_CONFIG_DCR_CSPL_LIMIT 0xff
#define PCIE_RC_CONFIG_DCR_CPLS_SHIFT 26
#define PCIE_RC_CONFIG_DCSR (PCIE_RC_CONFIG_BASE + 0xc8)
#define PCIE_RC_CONFIG_DCSR_MPS_MASK GENMASK(7, 5)
#define PCIE_RC_CONFIG_DCSR_MPS_256 (0x1 << 5)
#define PCIE_RC_CONFIG_LINK_CAP (PCIE_RC_CONFIG_BASE + 0xcc)
#define PCIE_RC_CONFIG_LINK_CAP_L0S BIT(10)
#define PCIE_RC_CONFIG_LCS (PCIE_RC_CONFIG_BASE + 0xd0)
#define PCIE_EP_CONFIG_LCS (PCIE_EP_CONFIG_BASE + 0xd0)
#define PCIE_RC_CONFIG_CR (PCIE_RC_CONFIG_BASE + 0xc0)
#define PCIE_RC_CONFIG_L1_SUBSTATE_CTRL2 (PCIE_RC_CONFIG_BASE + 0x90c)
#define PCIE_RC_CONFIG_THP_CAP (PCIE_RC_CONFIG_BASE + 0x274)
#define PCIE_RC_CONFIG_THP_CAP_NEXT_MASK GENMASK(31, 20)
@ -215,20 +205,6 @@
#define RC_REGION_0_TYPE_MASK GENMASK(3, 0)
#define MAX_AXI_WRAPPER_REGION_NUM 33
#define ROCKCHIP_PCIE_MSG_ROUTING_TO_RC 0x0
#define ROCKCHIP_PCIE_MSG_ROUTING_VIA_ADDR 0x1
#define ROCKCHIP_PCIE_MSG_ROUTING_VIA_ID 0x2
#define ROCKCHIP_PCIE_MSG_ROUTING_BROADCAST 0x3
#define ROCKCHIP_PCIE_MSG_ROUTING_LOCAL_INTX 0x4
#define ROCKCHIP_PCIE_MSG_ROUTING_PME_ACK 0x5
#define ROCKCHIP_PCIE_MSG_CODE_ASSERT_INTA 0x20
#define ROCKCHIP_PCIE_MSG_CODE_ASSERT_INTB 0x21
#define ROCKCHIP_PCIE_MSG_CODE_ASSERT_INTC 0x22
#define ROCKCHIP_PCIE_MSG_CODE_ASSERT_INTD 0x23
#define ROCKCHIP_PCIE_MSG_CODE_DEASSERT_INTA 0x24
#define ROCKCHIP_PCIE_MSG_CODE_DEASSERT_INTB 0x25
#define ROCKCHIP_PCIE_MSG_CODE_DEASSERT_INTC 0x26
#define ROCKCHIP_PCIE_MSG_CODE_DEASSERT_INTD 0x27
#define ROCKCHIP_PCIE_MSG_ROUTING_MASK GENMASK(7, 5)
#define ROCKCHIP_PCIE_MSG_ROUTING(route) \
(((route) << 5) & ROCKCHIP_PCIE_MSG_ROUTING_MASK)


@ -7,6 +7,7 @@
#include <linux/bitfield.h>
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/irqchip/irq-msi-lib.h>
#include <linux/irqdomain.h>
#include <linux/kernel.h>
#include <linux/module.h>
@ -90,7 +91,6 @@ struct xilinx_pl_dma_variant {
};
struct xilinx_msi {
struct irq_domain *msi_domain;
unsigned long *bitmap;
struct irq_domain *dev_domain;
struct mutex lock; /* Protect bitmap variable */
@ -373,20 +373,20 @@ static irqreturn_t xilinx_pl_dma_pcie_intr_handler(int irq, void *dev_id)
return IRQ_HANDLED;
}
static struct irq_chip xilinx_msi_irq_chip = {
.name = "pl_dma:PCIe MSI",
.irq_enable = pci_msi_unmask_irq,
.irq_disable = pci_msi_mask_irq,
.irq_mask = pci_msi_mask_irq,
.irq_unmask = pci_msi_unmask_irq,
};
#define XILINX_MSI_FLAGS_REQUIRED (MSI_FLAG_USE_DEF_DOM_OPS | \
MSI_FLAG_USE_DEF_CHIP_OPS | \
MSI_FLAG_NO_AFFINITY)
static struct msi_domain_info xilinx_msi_domain_info = {
.flags = MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
MSI_FLAG_NO_AFFINITY | MSI_FLAG_MULTI_PCI_MSI,
.chip = &xilinx_msi_irq_chip,
};
#define XILINX_MSI_FLAGS_SUPPORTED (MSI_GENERIC_FLAGS_MASK | \
MSI_FLAG_MULTI_PCI_MSI)
static const struct msi_parent_ops xilinx_msi_parent_ops = {
.required_flags = XILINX_MSI_FLAGS_REQUIRED,
.supported_flags = XILINX_MSI_FLAGS_SUPPORTED,
.bus_select_token = DOMAIN_BUS_PCI_MSI,
.prefix = "pl_dma-",
.init_dev_msi_info = msi_lib_init_dev_msi_info,
};
static void xilinx_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
{
struct pl_dma_pcie *pcie = irq_data_get_irq_chip_data(data);
@ -458,11 +458,6 @@ static void xilinx_pl_dma_pcie_free_irq_domains(struct pl_dma_pcie *port)
irq_domain_remove(msi->dev_domain);
msi->dev_domain = NULL;
}
if (msi->msi_domain) {
irq_domain_remove(msi->msi_domain);
msi->msi_domain = NULL;
}
}
static int xilinx_pl_dma_pcie_init_msi_irq_domain(struct pl_dma_pcie *port)
@ -470,19 +465,17 @@ static int xilinx_pl_dma_pcie_init_msi_irq_domain(struct pl_dma_pcie *port)
struct device *dev = port->dev;
struct xilinx_msi *msi = &port->msi;
int size = BITS_TO_LONGS(XILINX_NUM_MSI_IRQS) * sizeof(long);
struct fwnode_handle *fwnode = of_fwnode_handle(port->dev->of_node);
struct irq_domain_info info = {
.fwnode = dev_fwnode(port->dev),
.ops = &dev_msi_domain_ops,
.host_data = port,
.size = XILINX_NUM_MSI_IRQS,
};
msi->dev_domain = irq_domain_create_linear(NULL, XILINX_NUM_MSI_IRQS,
&dev_msi_domain_ops, port);
msi->dev_domain = msi_create_parent_irq_domain(&info, &xilinx_msi_parent_ops);
if (!msi->dev_domain)
goto out;
msi->msi_domain = pci_msi_create_irq_domain(fwnode,
&xilinx_msi_domain_info,
msi->dev_domain);
if (!msi->msi_domain)
goto out;
mutex_init(&msi->lock);
msi->bitmap = kzalloc(size, GFP_KERNEL);
if (!msi->bitmap)


@ -10,6 +10,7 @@
#include <linux/delay.h>
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/irqchip/irq-msi-lib.h>
#include <linux/irqdomain.h>
#include <linux/kernel.h>
#include <linux/init.h>
@ -145,7 +146,6 @@
#define LINK_WAIT_USLEEP_MAX 100000
struct nwl_msi { /* MSI information */
struct irq_domain *msi_domain;
DECLARE_BITMAP(bitmap, INT_PCI_MSI_NR);
struct irq_domain *dev_domain;
struct mutex lock; /* protect bitmap variable */
@ -418,19 +418,22 @@ static const struct irq_domain_ops intx_domain_ops = {
};
#ifdef CONFIG_PCI_MSI
static struct irq_chip nwl_msi_irq_chip = {
.name = "nwl_pcie:msi",
.irq_enable = pci_msi_unmask_irq,
.irq_disable = pci_msi_mask_irq,
.irq_mask = pci_msi_mask_irq,
.irq_unmask = pci_msi_unmask_irq,
#define NWL_MSI_FLAGS_REQUIRED (MSI_FLAG_USE_DEF_DOM_OPS | \
MSI_FLAG_USE_DEF_CHIP_OPS | \
MSI_FLAG_NO_AFFINITY)
#define NWL_MSI_FLAGS_SUPPORTED (MSI_GENERIC_FLAGS_MASK | \
MSI_FLAG_MULTI_PCI_MSI)
static const struct msi_parent_ops nwl_msi_parent_ops = {
.required_flags = NWL_MSI_FLAGS_REQUIRED,
.supported_flags = NWL_MSI_FLAGS_SUPPORTED,
.bus_select_token = DOMAIN_BUS_PCI_MSI,
.prefix = "nwl-",
.init_dev_msi_info = msi_lib_init_dev_msi_info,
};
static struct msi_domain_info nwl_msi_domain_info = {
.flags = MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
MSI_FLAG_NO_AFFINITY | MSI_FLAG_MULTI_PCI_MSI,
.chip = &nwl_msi_irq_chip,
};
#endif
static void nwl_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
@ -495,22 +498,19 @@ static int nwl_pcie_init_msi_irq_domain(struct nwl_pcie *pcie)
{
#ifdef CONFIG_PCI_MSI
struct device *dev = pcie->dev;
struct fwnode_handle *fwnode = of_fwnode_handle(dev->of_node);
struct nwl_msi *msi = &pcie->msi;
struct irq_domain_info info = {
.fwnode = dev_fwnode(dev),
.ops = &dev_msi_domain_ops,
.host_data = pcie,
.size = INT_PCI_MSI_NR,
};
msi->dev_domain = irq_domain_create_linear(NULL, INT_PCI_MSI_NR, &dev_msi_domain_ops, pcie);
msi->dev_domain = msi_create_parent_irq_domain(&info, &nwl_msi_parent_ops);
if (!msi->dev_domain) {
dev_err(dev, "failed to create dev IRQ domain\n");
return -ENOMEM;
}
msi->msi_domain = pci_msi_create_irq_domain(fwnode,
&nwl_msi_domain_info,
msi->dev_domain);
if (!msi->msi_domain) {
dev_err(dev, "failed to create msi IRQ domain\n");
irq_domain_remove(msi->dev_domain);
return -ENOMEM;
}
#endif
return 0;
}


@ -12,6 +12,7 @@
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/irqchip/irq-msi-lib.h>
#include <linux/irqdomain.h>
#include <linux/kernel.h>
#include <linux/init.h>
@ -203,11 +204,6 @@ static void xilinx_msi_top_irq_ack(struct irq_data *d)
*/
}
static struct irq_chip xilinx_msi_top_chip = {
.name = "PCIe MSI",
.irq_ack = xilinx_msi_top_irq_ack,
};
static void xilinx_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
{
struct xilinx_pcie *pcie = irq_data_get_irq_chip_data(data);
@ -264,29 +260,42 @@ static const struct irq_domain_ops xilinx_msi_domain_ops = {
.free = xilinx_msi_domain_free,
};
static struct msi_domain_info xilinx_msi_info = {
.flags = MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
MSI_FLAG_NO_AFFINITY,
.chip = &xilinx_msi_top_chip,
static bool xilinx_init_dev_msi_info(struct device *dev, struct irq_domain *domain,
struct irq_domain *real_parent, struct msi_domain_info *info)
{
struct irq_chip *chip = info->chip;
if (!msi_lib_init_dev_msi_info(dev, domain, real_parent, info))
return false;
chip->irq_ack = xilinx_msi_top_irq_ack;
return true;
}
#define XILINX_MSI_FLAGS_REQUIRED (MSI_FLAG_USE_DEF_DOM_OPS | \
MSI_FLAG_USE_DEF_CHIP_OPS | \
MSI_FLAG_NO_AFFINITY)
static const struct msi_parent_ops xilinx_msi_parent_ops = {
.required_flags = XILINX_MSI_FLAGS_REQUIRED,
.supported_flags = MSI_GENERIC_FLAGS_MASK,
.bus_select_token = DOMAIN_BUS_PCI_MSI,
.prefix = "xilinx-",
.init_dev_msi_info = xilinx_init_dev_msi_info,
};
static int xilinx_allocate_msi_domains(struct xilinx_pcie *pcie)
{
struct fwnode_handle *fwnode = dev_fwnode(pcie->dev);
struct irq_domain *parent;
struct irq_domain_info info = {
.fwnode = dev_fwnode(pcie->dev),
.ops = &xilinx_msi_domain_ops,
.host_data = pcie,
.size = XILINX_NUM_MSI_IRQS,
};
parent = irq_domain_create_linear(fwnode, XILINX_NUM_MSI_IRQS,
&xilinx_msi_domain_ops, pcie);
if (!parent) {
dev_err(pcie->dev, "failed to create IRQ domain\n");
return -ENOMEM;
}
irq_domain_update_bus_token(parent, DOMAIN_BUS_NEXUS);
pcie->msi_domain = pci_msi_create_irq_domain(fwnode, &xilinx_msi_info, parent);
pcie->msi_domain = msi_create_parent_irq_domain(&info, &xilinx_msi_parent_ops);
if (!pcie->msi_domain) {
dev_err(pcie->dev, "failed to create MSI domain\n");
irq_domain_remove(parent);
return -ENOMEM;
}
@ -295,10 +304,7 @@ static int xilinx_allocate_msi_domains(struct xilinx_pcie *pcie)
static void xilinx_free_msi_domains(struct xilinx_pcie *pcie)
{
struct irq_domain *parent = pcie->msi_domain->parent;
irq_domain_remove(pcie->msi_domain);
irq_domain_remove(parent);
}
/* INTx Functions */


@ -5,6 +5,7 @@ menu "PLDA-based PCIe controllers"
config PCIE_PLDA_HOST
bool
select IRQ_MSI_LIB
config PCIE_MICROCHIP_HOST
tristate "Microchip AXI PCIe controller"


@ -11,6 +11,7 @@
#include <linux/align.h>
#include <linux/bitfield.h>
#include <linux/irqchip/chained_irq.h>
#include <linux/irqchip/irq-msi-lib.h>
#include <linux/irqdomain.h>
#include <linux/msi.h>
#include <linux/pci_regs.h>
@ -134,42 +135,41 @@ static const struct irq_domain_ops msi_domain_ops = {
.free = plda_irq_msi_domain_free,
};
static struct irq_chip plda_msi_irq_chip = {
.name = "PLDA PCIe MSI",
.irq_ack = irq_chip_ack_parent,
.irq_mask = pci_msi_mask_irq,
.irq_unmask = pci_msi_unmask_irq,
};
#define PLDA_MSI_FLAGS_REQUIRED (MSI_FLAG_USE_DEF_DOM_OPS | \
MSI_FLAG_USE_DEF_CHIP_OPS | \
MSI_FLAG_NO_AFFINITY)
#define PLDA_MSI_FLAGS_SUPPORTED (MSI_GENERIC_FLAGS_MASK | \
MSI_FLAG_PCI_MSIX)
static struct msi_domain_info plda_msi_domain_info = {
.flags = MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
MSI_FLAG_NO_AFFINITY | MSI_FLAG_PCI_MSIX,
.chip = &plda_msi_irq_chip,
static const struct msi_parent_ops plda_msi_parent_ops = {
.required_flags = PLDA_MSI_FLAGS_REQUIRED,
.supported_flags = PLDA_MSI_FLAGS_SUPPORTED,
.chip_flags = MSI_CHIP_FLAG_SET_ACK,
.bus_select_token = DOMAIN_BUS_PCI_MSI,
.prefix = "PLDA-",
.init_dev_msi_info = msi_lib_init_dev_msi_info,
};
static int plda_allocate_msi_domains(struct plda_pcie_rp *port)
{
struct device *dev = port->dev;
struct fwnode_handle *fwnode = of_fwnode_handle(dev->of_node);
struct plda_msi *msi = &port->msi;
mutex_init(&port->msi.lock);
msi->dev_domain = irq_domain_create_linear(NULL, msi->num_vectors, &msi_domain_ops, port);
struct irq_domain_info info = {
.fwnode = dev_fwnode(dev),
.ops = &msi_domain_ops,
.host_data = port,
.size = msi->num_vectors,
};
msi->dev_domain = msi_create_parent_irq_domain(&info, &plda_msi_parent_ops);
if (!msi->dev_domain) {
dev_err(dev, "failed to create IRQ domain\n");
return -ENOMEM;
}
msi->msi_domain = pci_msi_create_irq_domain(fwnode,
&plda_msi_domain_info,
msi->dev_domain);
if (!msi->msi_domain) {
dev_err(dev, "failed to create MSI domain\n");
irq_domain_remove(msi->dev_domain);
return -ENOMEM;
}
return 0;
}
@ -563,7 +563,6 @@ static void plda_pcie_irq_domain_deinit(struct plda_pcie_rp *pcie)
irq_set_chained_handler_and_data(pcie->msi_irq, NULL, NULL);
irq_set_chained_handler_and_data(pcie->intx_irq, NULL, NULL);
irq_domain_remove(pcie->msi.msi_domain);
irq_domain_remove(pcie->msi.dev_domain);
irq_domain_remove(pcie->intx_domain);


@ -164,7 +164,6 @@ struct plda_pcie_host_ops {
struct plda_msi {
struct mutex lock; /* Protect used bitmap */
struct irq_domain *msi_domain;
struct irq_domain *dev_domain;
u32 num_vectors;
u64 vector_phy;


@ -368,7 +368,7 @@ static int starfive_pcie_host_init(struct plda_pcie_rp *plda)
* of 100ms following exit from a conventional reset before
* sending a configuration request to the device.
*/
msleep(PCIE_RESET_CONFIG_DEVICE_WAIT_MS);
msleep(PCIE_RESET_CONFIG_WAIT_MS);
if (starfive_pcie_host_wait_for_link(pcie))
dev_info(dev, "port link down\n");


@ -7,6 +7,7 @@
#include <linux/device.h>
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/irqchip/irq-msi-lib.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/msi.h>
@ -174,58 +175,52 @@ static void vmd_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
msg->arch_addr_lo.destid_0_7 = index_from_irqs(vmd, irq);
}
/*
* We rely on MSI_FLAG_USE_DEF_CHIP_OPS to set the IRQ mask/unmask ops.
*/
static void vmd_irq_enable(struct irq_data *data)
{
struct vmd_irq *vmdirq = data->chip_data;
unsigned long flags;
raw_spin_lock_irqsave(&list_lock, flags);
WARN_ON(vmdirq->enabled);
list_add_tail_rcu(&vmdirq->node, &vmdirq->irq->irq_list);
vmdirq->enabled = true;
raw_spin_unlock_irqrestore(&list_lock, flags);
scoped_guard(raw_spinlock_irqsave, &list_lock) {
WARN_ON(vmdirq->enabled);
list_add_tail_rcu(&vmdirq->node, &vmdirq->irq->irq_list);
vmdirq->enabled = true;
}
}
static void vmd_pci_msi_enable(struct irq_data *data)
{
vmd_irq_enable(data->parent_data);
data->chip->irq_unmask(data);
}
static void vmd_irq_disable(struct irq_data *data)
{
struct vmd_irq *vmdirq = data->chip_data;
unsigned long flags;
data->chip->irq_mask(data);
raw_spin_lock_irqsave(&list_lock, flags);
if (vmdirq->enabled) {
list_del_rcu(&vmdirq->node);
vmdirq->enabled = false;
scoped_guard(raw_spinlock_irqsave, &list_lock) {
if (vmdirq->enabled) {
list_del_rcu(&vmdirq->node);
vmdirq->enabled = false;
}
}
raw_spin_unlock_irqrestore(&list_lock, flags);
}
static void vmd_pci_msi_disable(struct irq_data *data)
{
data->chip->irq_mask(data);
vmd_irq_disable(data->parent_data);
}
static struct irq_chip vmd_msi_controller = {
.name = "VMD-MSI",
.irq_enable = vmd_irq_enable,
.irq_disable = vmd_irq_disable,
.irq_compose_msi_msg = vmd_compose_msi_msg,
};
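The scoped_guard()/guard() forms used throughout this file come from <linux/cleanup.h>: the lock is released automatically when the scope ends, on every path, which is also what makes the early returns added to vmd_pci_read()/vmd_pci_write() further down safe. A minimal sketch with an invented lock:

#include <linux/cleanup.h>
#include <linux/errno.h>
#include <linux/spinlock.h>

static DEFINE_RAW_SPINLOCK(example_lock);
static int example_count;

static int guarded_example(bool bail_out)
{
	guard(raw_spinlock_irqsave)(&example_lock);

	if (bail_out)
		return -EAGAIN;	/* the lock is still released on this path */

	example_count++;
	return 0;
}	/* <- and on this one */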
static irq_hw_number_t vmd_get_hwirq(struct msi_domain_info *info,
msi_alloc_info_t *arg)
{
return 0;
}
/*
* XXX: We can be even smarter selecting the best IRQ once we solve the
* affinity problem.
*/
static struct vmd_irq_list *vmd_next_irq(struct vmd_dev *vmd, struct msi_desc *desc)
{
unsigned long flags;
int i, best;
if (vmd->msix_count == 1 + vmd->first_vec)
@ -242,86 +237,119 @@ static struct vmd_irq_list *vmd_next_irq(struct vmd_dev *vmd, struct msi_desc *d
return &vmd->irqs[vmd->first_vec];
}
raw_spin_lock_irqsave(&list_lock, flags);
best = vmd->first_vec + 1;
for (i = best; i < vmd->msix_count; i++)
if (vmd->irqs[i].count < vmd->irqs[best].count)
best = i;
vmd->irqs[best].count++;
raw_spin_unlock_irqrestore(&list_lock, flags);
scoped_guard(raw_spinlock_irq, &list_lock) {
best = vmd->first_vec + 1;
for (i = best; i < vmd->msix_count; i++)
if (vmd->irqs[i].count < vmd->irqs[best].count)
best = i;
vmd->irqs[best].count++;
}
return &vmd->irqs[best];
}
static int vmd_msi_init(struct irq_domain *domain, struct msi_domain_info *info,
unsigned int virq, irq_hw_number_t hwirq,
msi_alloc_info_t *arg)
static void vmd_msi_free(struct irq_domain *domain, unsigned int virq,
unsigned int nr_irqs);
static int vmd_msi_alloc(struct irq_domain *domain, unsigned int virq,
unsigned int nr_irqs, void *arg)
{
struct msi_desc *desc = arg->desc;
struct vmd_dev *vmd = vmd_from_bus(msi_desc_to_pci_dev(desc)->bus);
struct vmd_irq *vmdirq = kzalloc(sizeof(*vmdirq), GFP_KERNEL);
struct msi_desc *desc = ((msi_alloc_info_t *)arg)->desc;
struct vmd_dev *vmd = domain->host_data;
struct vmd_irq *vmdirq;
if (!vmdirq)
return -ENOMEM;
for (int i = 0; i < nr_irqs; ++i) {
vmdirq = kzalloc(sizeof(*vmdirq), GFP_KERNEL);
if (!vmdirq) {
vmd_msi_free(domain, virq, i);
return -ENOMEM;
}
INIT_LIST_HEAD(&vmdirq->node);
vmdirq->irq = vmd_next_irq(vmd, desc);
vmdirq->virq = virq;
INIT_LIST_HEAD(&vmdirq->node);
vmdirq->irq = vmd_next_irq(vmd, desc);
vmdirq->virq = virq + i;
irq_domain_set_info(domain, virq + i, vmdirq->irq->virq,
&vmd_msi_controller, vmdirq,
handle_untracked_irq, vmd, NULL);
}
irq_domain_set_info(domain, virq, vmdirq->irq->virq, info->chip, vmdirq,
handle_untracked_irq, vmd, NULL);
return 0;
}
static void vmd_msi_free(struct irq_domain *domain,
struct msi_domain_info *info, unsigned int virq)
static void vmd_msi_free(struct irq_domain *domain, unsigned int virq,
unsigned int nr_irqs)
{
struct vmd_irq *vmdirq = irq_get_chip_data(virq);
unsigned long flags;
struct vmd_irq *vmdirq;
synchronize_srcu(&vmdirq->irq->srcu);
for (int i = 0; i < nr_irqs; ++i) {
vmdirq = irq_get_chip_data(virq + i);
/* XXX: Potential optimization to rebalance */
raw_spin_lock_irqsave(&list_lock, flags);
vmdirq->irq->count--;
raw_spin_unlock_irqrestore(&list_lock, flags);
synchronize_srcu(&vmdirq->irq->srcu);
kfree(vmdirq);
/* XXX: Potential optimization to rebalance */
scoped_guard(raw_spinlock_irq, &list_lock)
vmdirq->irq->count--;
kfree(vmdirq);
}
}
static int vmd_msi_prepare(struct irq_domain *domain, struct device *dev,
int nvec, msi_alloc_info_t *arg)
static const struct irq_domain_ops vmd_msi_domain_ops = {
.alloc = vmd_msi_alloc,
.free = vmd_msi_free,
};
static bool vmd_init_dev_msi_info(struct device *dev, struct irq_domain *domain,
struct irq_domain *real_parent,
struct msi_domain_info *info)
{
struct pci_dev *pdev = to_pci_dev(dev);
struct vmd_dev *vmd = vmd_from_bus(pdev->bus);
if (WARN_ON_ONCE(info->bus_token != DOMAIN_BUS_PCI_DEVICE_MSIX))
return false;
if (nvec > vmd->msix_count)
return vmd->msix_count;
if (!msi_lib_init_dev_msi_info(dev, domain, real_parent, info))
return false;
info->chip->irq_enable = vmd_pci_msi_enable;
info->chip->irq_disable = vmd_pci_msi_disable;
return true;
}
#define VMD_MSI_FLAGS_SUPPORTED (MSI_GENERIC_FLAGS_MASK | MSI_FLAG_PCI_MSIX)
#define VMD_MSI_FLAGS_REQUIRED (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_NO_AFFINITY)
static const struct msi_parent_ops vmd_msi_parent_ops = {
.supported_flags = VMD_MSI_FLAGS_SUPPORTED,
.required_flags = VMD_MSI_FLAGS_REQUIRED,
.bus_select_token = DOMAIN_BUS_VMD_MSI,
.bus_select_mask = MATCH_PCI_MSI,
.prefix = "VMD-",
.init_dev_msi_info = vmd_init_dev_msi_info,
};
static int vmd_create_irq_domain(struct vmd_dev *vmd)
{
struct irq_domain_info info = {
.size = vmd->msix_count,
.ops = &vmd_msi_domain_ops,
.host_data = vmd,
};
info.fwnode = irq_domain_alloc_named_id_fwnode("VMD-MSI",
vmd->sysdata.domain);
if (!info.fwnode)
return -ENODEV;
vmd->irq_domain = msi_create_parent_irq_domain(&info,
&vmd_msi_parent_ops);
if (!vmd->irq_domain) {
irq_domain_free_fwnode(info.fwnode);
return -ENODEV;
}
memset(arg, 0, sizeof(*arg));
return 0;
}
static void vmd_set_desc(msi_alloc_info_t *arg, struct msi_desc *desc)
{
arg->desc = desc;
}
static struct msi_domain_ops vmd_msi_domain_ops = {
.get_hwirq = vmd_get_hwirq,
.msi_init = vmd_msi_init,
.msi_free = vmd_msi_free,
.msi_prepare = vmd_msi_prepare,
.set_desc = vmd_set_desc,
};
static struct msi_domain_info vmd_msi_domain_info = {
.flags = MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
MSI_FLAG_NO_AFFINITY | MSI_FLAG_PCI_MSIX,
.ops = &vmd_msi_domain_ops,
.chip = &vmd_msi_controller,
};
static void vmd_set_msi_remapping(struct vmd_dev *vmd, bool enable)
{
u16 reg;
@ -332,23 +360,6 @@ static void vmd_set_msi_remapping(struct vmd_dev *vmd, bool enable)
pci_write_config_word(vmd->dev, PCI_REG_VMCONFIG, reg);
}
static int vmd_create_irq_domain(struct vmd_dev *vmd)
{
struct fwnode_handle *fn;
fn = irq_domain_alloc_named_id_fwnode("VMD-MSI", vmd->sysdata.domain);
if (!fn)
return -ENODEV;
vmd->irq_domain = pci_msi_create_irq_domain(fn, &vmd_msi_domain_info, NULL);
if (!vmd->irq_domain) {
irq_domain_free_fwnode(fn);
return -ENODEV;
}
return 0;
}
static void vmd_remove_irq_domain(struct vmd_dev *vmd)
{
/*
@ -387,29 +398,24 @@ static int vmd_pci_read(struct pci_bus *bus, unsigned int devfn, int reg,
{
struct vmd_dev *vmd = vmd_from_bus(bus);
void __iomem *addr = vmd_cfg_addr(vmd, bus, devfn, reg, len);
unsigned long flags;
int ret = 0;
if (!addr)
return -EFAULT;
raw_spin_lock_irqsave(&vmd->cfg_lock, flags);
guard(raw_spinlock_irqsave)(&vmd->cfg_lock);
switch (len) {
case 1:
*value = readb(addr);
break;
return 0;
case 2:
*value = readw(addr);
break;
return 0;
case 4:
*value = readl(addr);
break;
return 0;
default:
ret = -EINVAL;
break;
return -EINVAL;
}
raw_spin_unlock_irqrestore(&vmd->cfg_lock, flags);
return ret;
}
/*
@ -422,32 +428,27 @@ static int vmd_pci_write(struct pci_bus *bus, unsigned int devfn, int reg,
{
struct vmd_dev *vmd = vmd_from_bus(bus);
void __iomem *addr = vmd_cfg_addr(vmd, bus, devfn, reg, len);
unsigned long flags;
int ret = 0;
if (!addr)
return -EFAULT;
raw_spin_lock_irqsave(&vmd->cfg_lock, flags);
guard(raw_spinlock_irqsave)(&vmd->cfg_lock);
switch (len) {
case 1:
writeb(value, addr);
readb(addr);
break;
return 0;
case 2:
writew(value, addr);
readw(addr);
break;
return 0;
case 4:
writel(value, addr);
readl(addr);
break;
return 0;
default:
ret = -EINVAL;
break;
return -EINVAL;
}
raw_spin_unlock_irqrestore(&vmd->cfg_lock, flags);
return ret;
}
static struct pci_ops vmd_ops = {
@ -889,12 +890,6 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
ret = vmd_create_irq_domain(vmd);
if (ret)
return ret;
/*
* Override the IRQ domain bus token so the domain can be
* distinguished from a regular PCI/MSI domain.
*/
irq_domain_update_bus_token(vmd->irq_domain, DOMAIN_BUS_VMD_MSI);
} else {
vmd_set_msi_remapping(vmd, false);
}
@ -1129,6 +1124,8 @@ static const struct pci_device_id vmd_ids[] = {
.driver_data = VMD_FEATS_CLIENT,},
{PCI_VDEVICE(INTEL, 0xb06f),
.driver_data = VMD_FEATS_CLIENT,},
{PCI_VDEVICE(INTEL, 0xb07f),
.driver_data = VMD_FEATS_CLIENT,},
{0,}
};
MODULE_DEVICE_TABLE(pci, vmd_ids);


@ -28,6 +28,14 @@ config PCI_ENDPOINT_CONFIGFS
configure the endpoint function and used to bind the
function with an endpoint controller.
config PCI_ENDPOINT_MSI_DOORBELL
bool "PCI Endpoint MSI Doorbell Support"
depends on PCI_ENDPOINT && GENERIC_MSI_IRQ
help
This enables the EP's MSI interrupt controller to function as a
doorbell. The RC can trigger a doorbell in the EP by writing data to a
dedicated BAR, which the EP maps to the controller's message address.
source "drivers/pci/endpoint/functions/Kconfig"
endmenu


@ -6,3 +6,4 @@
obj-$(CONFIG_PCI_ENDPOINT_CONFIGFS) += pci-ep-cfs.o
obj-$(CONFIG_PCI_ENDPOINT) += pci-epc-core.o pci-epf-core.o\
pci-epc-mem.o functions/
obj-$(CONFIG_PCI_ENDPOINT_MSI_DOORBELL) += pci-ep-msi.o


@ -11,12 +11,14 @@
#include <linux/dmaengine.h>
#include <linux/io.h>
#include <linux/module.h>
#include <linux/msi.h>
#include <linux/slab.h>
#include <linux/pci_ids.h>
#include <linux/random.h>
#include <linux/pci-epc.h>
#include <linux/pci-epf.h>
#include <linux/pci-ep-msi.h>
#include <linux/pci_regs.h>
#define IRQ_TYPE_INTX 0
@ -29,6 +31,8 @@
#define COMMAND_READ BIT(3)
#define COMMAND_WRITE BIT(4)
#define COMMAND_COPY BIT(5)
#define COMMAND_ENABLE_DOORBELL BIT(6)
#define COMMAND_DISABLE_DOORBELL BIT(7)
#define STATUS_READ_SUCCESS BIT(0)
#define STATUS_READ_FAIL BIT(1)
@ -39,6 +43,11 @@
#define STATUS_IRQ_RAISED BIT(6)
#define STATUS_SRC_ADDR_INVALID BIT(7)
#define STATUS_DST_ADDR_INVALID BIT(8)
#define STATUS_DOORBELL_SUCCESS BIT(9)
#define STATUS_DOORBELL_ENABLE_SUCCESS BIT(10)
#define STATUS_DOORBELL_ENABLE_FAIL BIT(11)
#define STATUS_DOORBELL_DISABLE_SUCCESS BIT(12)
#define STATUS_DOORBELL_DISABLE_FAIL BIT(13)
#define FLAG_USE_DMA BIT(0)
@ -66,6 +75,7 @@ struct pci_epf_test {
bool dma_supported;
bool dma_private;
const struct pci_epc_features *epc_features;
struct pci_epf_bar db_bar;
};
struct pci_epf_test_reg {
@ -80,6 +90,9 @@ struct pci_epf_test_reg {
__le32 irq_number;
__le32 flags;
__le32 caps;
__le32 doorbell_bar;
__le32 doorbell_offset;
__le32 doorbell_data;
} __packed;
static struct pci_epf_header test_header = {
@ -667,6 +680,115 @@ static void pci_epf_test_raise_irq(struct pci_epf_test *epf_test,
}
}
static irqreturn_t pci_epf_test_doorbell_handler(int irq, void *data)
{
struct pci_epf_test *epf_test = data;
enum pci_barno test_reg_bar = epf_test->test_reg_bar;
struct pci_epf_test_reg *reg = epf_test->reg[test_reg_bar];
u32 status = le32_to_cpu(reg->status);
status |= STATUS_DOORBELL_SUCCESS;
reg->status = cpu_to_le32(status);
pci_epf_test_raise_irq(epf_test, reg);
return IRQ_HANDLED;
}
static void pci_epf_test_doorbell_cleanup(struct pci_epf_test *epf_test)
{
struct pci_epf_test_reg *reg = epf_test->reg[epf_test->test_reg_bar];
struct pci_epf *epf = epf_test->epf;
free_irq(epf->db_msg[0].virq, epf_test);
reg->doorbell_bar = cpu_to_le32(NO_BAR);
pci_epf_free_doorbell(epf);
}
static void pci_epf_test_enable_doorbell(struct pci_epf_test *epf_test,
struct pci_epf_test_reg *reg)
{
u32 status = le32_to_cpu(reg->status);
struct pci_epf *epf = epf_test->epf;
struct pci_epc *epc = epf->epc;
struct msi_msg *msg;
enum pci_barno bar;
size_t offset;
int ret;
ret = pci_epf_alloc_doorbell(epf, 1);
if (ret)
goto set_status_err;
msg = &epf->db_msg[0].msg;
bar = pci_epc_get_next_free_bar(epf_test->epc_features, epf_test->test_reg_bar + 1);
if (bar < BAR_0)
goto err_doorbell_cleanup;
ret = request_irq(epf->db_msg[0].virq, pci_epf_test_doorbell_handler, 0,
"pci-ep-test-doorbell", epf_test);
if (ret) {
dev_err(&epf->dev,
"Failed to request doorbell IRQ: %d\n",
epf->db_msg[0].virq);
goto err_doorbell_cleanup;
}
reg->doorbell_data = cpu_to_le32(msg->data);
reg->doorbell_bar = cpu_to_le32(bar);
msg = &epf->db_msg[0].msg;
ret = pci_epf_align_inbound_addr(epf, bar, ((u64)msg->address_hi << 32) | msg->address_lo,
&epf_test->db_bar.phys_addr, &offset);
if (ret)
goto err_doorbell_cleanup;
reg->doorbell_offset = cpu_to_le32(offset);
epf_test->db_bar.barno = bar;
epf_test->db_bar.size = epf->bar[bar].size;
epf_test->db_bar.flags = epf->bar[bar].flags;
ret = pci_epc_set_bar(epc, epf->func_no, epf->vfunc_no, &epf_test->db_bar);
if (ret)
goto err_doorbell_cleanup;
status |= STATUS_DOORBELL_ENABLE_SUCCESS;
reg->status = cpu_to_le32(status);
return;
err_doorbell_cleanup:
pci_epf_test_doorbell_cleanup(epf_test);
set_status_err:
status |= STATUS_DOORBELL_ENABLE_FAIL;
reg->status = cpu_to_le32(status);
}
static void pci_epf_test_disable_doorbell(struct pci_epf_test *epf_test,
struct pci_epf_test_reg *reg)
{
enum pci_barno bar = le32_to_cpu(reg->doorbell_bar);
u32 status = le32_to_cpu(reg->status);
struct pci_epf *epf = epf_test->epf;
struct pci_epc *epc = epf->epc;
if (bar < BAR_0)
goto set_status_err;
pci_epf_test_doorbell_cleanup(epf_test);
pci_epc_clear_bar(epc, epf->func_no, epf->vfunc_no, &epf_test->db_bar);
status |= STATUS_DOORBELL_DISABLE_SUCCESS;
reg->status = cpu_to_le32(status);
return;
set_status_err:
status |= STATUS_DOORBELL_DISABLE_FAIL;
reg->status = cpu_to_le32(status);
}
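For orientation, the other half of this handshake: once COMMAND_ENABLE_DOORBELL succeeds, the host reads doorbell_bar, doorbell_offset and doorbell_data from the test registers and rings the doorbell with a single write. A hedged sketch of that host-side step (hypothetical helper; the actual host-side test case from this series is not shown here):

#include <linux/io.h>
#include <linux/types.h>

/* db_base is the ioremapped doorbell BAR advertised by the endpoint */
static void ring_ep_doorbell(void __iomem *db_base, u32 db_offset, u32 db_data)
{
	writel(db_data, db_base + db_offset);	/* lands on the MSI message address */
}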
static void pci_epf_test_cmd_handler(struct work_struct *work)
{
u32 command;
@ -714,6 +836,14 @@ static void pci_epf_test_cmd_handler(struct work_struct *work)
pci_epf_test_copy(epf_test, reg);
pci_epf_test_raise_irq(epf_test, reg);
break;
case COMMAND_ENABLE_DOORBELL:
pci_epf_test_enable_doorbell(epf_test, reg);
pci_epf_test_raise_irq(epf_test, reg);
break;
case COMMAND_DISABLE_DOORBELL:
pci_epf_test_disable_doorbell(epf_test, reg);
pci_epf_test_raise_irq(epf_test, reg);
break;
default:
dev_err(dev, "Invalid command 0x%x\n", command);
break;


@ -70,9 +70,11 @@ static struct workqueue_struct *kpcintb_workqueue;
enum epf_ntb_bar {
BAR_CONFIG,
BAR_DB,
BAR_MW0,
BAR_MW1,
BAR_MW2,
BAR_MW3,
BAR_MW4,
VNTB_BAR_NUM,
};
/*
@ -132,7 +134,7 @@ struct epf_ntb {
bool linkup;
u32 spad_size;
enum pci_barno epf_ntb_bar[6];
enum pci_barno epf_ntb_bar[VNTB_BAR_NUM];
struct epf_ntb_ctrl *reg;
@ -510,7 +512,7 @@ static int epf_ntb_db_bar_init(struct epf_ntb *ntb)
struct device *dev = &ntb->epf->dev;
int ret;
struct pci_epf_bar *epf_bar;
void __iomem *mw_addr;
void *mw_addr;
enum pci_barno barno;
size_t size = sizeof(u32) * ntb->db_count;
@ -576,7 +578,7 @@ static int epf_ntb_mw_bar_init(struct epf_ntb *ntb)
for (i = 0; i < ntb->num_mws; i++) {
size = ntb->mws_size[i];
barno = ntb->epf_ntb_bar[BAR_MW0 + i];
barno = ntb->epf_ntb_bar[BAR_MW1 + i];
ntb->epf->bar[barno].barno = barno;
ntb->epf->bar[barno].size = size;
@ -629,7 +631,7 @@ static void epf_ntb_mw_bar_clear(struct epf_ntb *ntb, int num_mws)
int i;
for (i = 0; i < num_mws; i++) {
barno = ntb->epf_ntb_bar[BAR_MW0 + i];
barno = ntb->epf_ntb_bar[BAR_MW1 + i];
pci_epc_clear_bar(ntb->epf->epc,
ntb->epf->func_no,
ntb->epf->vfunc_no,
@ -654,6 +656,63 @@ static void epf_ntb_epc_destroy(struct epf_ntb *ntb)
pci_epc_put(ntb->epf->epc);
}
/**
* epf_ntb_is_bar_used() - Check if a BAR is used in the NTB configuration
* @ntb: NTB device that facilitates communication between HOST and VHOST
* @barno: BAR number to check
*
* Returns: true if used, false if free.
*/
static bool epf_ntb_is_bar_used(struct epf_ntb *ntb,
enum pci_barno barno)
{
int i;
for (i = 0; i < VNTB_BAR_NUM; i++) {
if (ntb->epf_ntb_bar[i] == barno)
return true;
}
return false;
}
/**
* epf_ntb_find_bar() - Assign BAR number when no configuration is provided
* @ntb: NTB device that facilitates communication between HOST and VHOST
* @epc_features: The features provided by the EPC specific to this EPF
* @bar: NTB BAR index
* @barno: BAR start index
*
* When no BAR configuration was provided through the userspace
* configuration, automatically assign BARs as has historically been
* done by this endpoint function.
*
* Returns: the BAR number found, if any; -1 otherwise
*/
static int epf_ntb_find_bar(struct epf_ntb *ntb,
const struct pci_epc_features *epc_features,
enum epf_ntb_bar bar,
enum pci_barno barno)
{
while (ntb->epf_ntb_bar[bar] < 0) {
barno = pci_epc_get_next_free_bar(epc_features, barno);
if (barno < 0)
break; /* No more BAR available */
/*
* Verify if the BAR found is not already assigned
* through the provided configuration
*/
if (!epf_ntb_is_bar_used(ntb, barno))
ntb->epf_ntb_bar[bar] = barno;
barno += 1;
}
return barno;
}
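A worked example of the auto-assignment above, with an invented EPC where BARs 0, 2 and 4 are free and the user has already pinned db_bar = 2 via configfs:

/*
 * epf_ntb_find_bar(ntb, features, BAR_MW1, 0):
 *   pci_epc_get_next_free_bar() -> 0   free and unused        -> MW1 gets BAR 0
 * next call, continuing from the returned index:
 *   pci_epc_get_next_free_bar() -> 2   already used by BAR_DB, skipped
 *   pci_epc_get_next_free_bar() -> 4   free and unused        -> MW2 gets BAR 4
 */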
/**
* epf_ntb_init_epc_bar() - Identify BARs to be used for each of the NTB
* constructs (scratchpad region, doorbell, memorywindow)
@ -676,23 +735,21 @@ static int epf_ntb_init_epc_bar(struct epf_ntb *ntb)
epc_features = pci_epc_get_features(ntb->epf->epc, ntb->epf->func_no, ntb->epf->vfunc_no);
/* These are required BARs which are mandatory for NTB functionality */
for (bar = BAR_CONFIG; bar <= BAR_MW0; bar++, barno++) {
barno = pci_epc_get_next_free_bar(epc_features, barno);
for (bar = BAR_CONFIG; bar <= BAR_MW1; bar++) {
barno = epf_ntb_find_bar(ntb, epc_features, bar, barno);
if (barno < 0) {
dev_err(dev, "Fail to get NTB function BAR\n");
return barno;
return -ENOENT;
}
ntb->epf_ntb_bar[bar] = barno;
}
/* These are optional BARs which don't impact NTB functionality */
for (bar = BAR_MW1, i = 1; i < num_mws; bar++, barno++, i++) {
barno = pci_epc_get_next_free_bar(epc_features, barno);
for (bar = BAR_MW1, i = 1; i < num_mws; bar++, i++) {
barno = epf_ntb_find_bar(ntb, epc_features, bar, barno);
if (barno < 0) {
ntb->num_mws = i;
dev_dbg(dev, "BAR not available for > MW%d\n", i + 1);
}
ntb->epf_ntb_bar[bar] = barno;
}
return 0;
@ -860,6 +917,37 @@ static ssize_t epf_ntb_##_name##_store(struct config_item *item, \
return len; \
}
#define EPF_NTB_BAR_R(_name, _id) \
static ssize_t epf_ntb_##_name##_show(struct config_item *item, \
char *page) \
{ \
struct config_group *group = to_config_group(item); \
struct epf_ntb *ntb = to_epf_ntb(group); \
\
return sprintf(page, "%d\n", ntb->epf_ntb_bar[_id]); \
}
#define EPF_NTB_BAR_W(_name, _id) \
static ssize_t epf_ntb_##_name##_store(struct config_item *item, \
const char *page, size_t len) \
{ \
struct config_group *group = to_config_group(item); \
struct epf_ntb *ntb = to_epf_ntb(group); \
int val; \
int ret; \
\
ret = kstrtoint(page, 0, &val); \
if (ret) \
return ret; \
\
if (val < NO_BAR || val > BAR_5) \
return -EINVAL; \
\
ntb->epf_ntb_bar[_id] = val; \
\
return len; \
}
static ssize_t epf_ntb_num_mws_store(struct config_item *item,
const char *page, size_t len)
{
@ -899,6 +987,18 @@ EPF_NTB_MW_R(mw3)
EPF_NTB_MW_W(mw3)
EPF_NTB_MW_R(mw4)
EPF_NTB_MW_W(mw4)
EPF_NTB_BAR_R(ctrl_bar, BAR_CONFIG)
EPF_NTB_BAR_W(ctrl_bar, BAR_CONFIG)
EPF_NTB_BAR_R(db_bar, BAR_DB)
EPF_NTB_BAR_W(db_bar, BAR_DB)
EPF_NTB_BAR_R(mw1_bar, BAR_MW1)
EPF_NTB_BAR_W(mw1_bar, BAR_MW1)
EPF_NTB_BAR_R(mw2_bar, BAR_MW2)
EPF_NTB_BAR_W(mw2_bar, BAR_MW2)
EPF_NTB_BAR_R(mw3_bar, BAR_MW3)
EPF_NTB_BAR_W(mw3_bar, BAR_MW3)
EPF_NTB_BAR_R(mw4_bar, BAR_MW4)
EPF_NTB_BAR_W(mw4_bar, BAR_MW4)
CONFIGFS_ATTR(epf_ntb_, spad_count);
CONFIGFS_ATTR(epf_ntb_, db_count);
@ -910,6 +1010,12 @@ CONFIGFS_ATTR(epf_ntb_, mw4);
CONFIGFS_ATTR(epf_ntb_, vbus_number);
CONFIGFS_ATTR(epf_ntb_, vntb_pid);
CONFIGFS_ATTR(epf_ntb_, vntb_vid);
CONFIGFS_ATTR(epf_ntb_, ctrl_bar);
CONFIGFS_ATTR(epf_ntb_, db_bar);
CONFIGFS_ATTR(epf_ntb_, mw1_bar);
CONFIGFS_ATTR(epf_ntb_, mw2_bar);
CONFIGFS_ATTR(epf_ntb_, mw3_bar);
CONFIGFS_ATTR(epf_ntb_, mw4_bar);
static struct configfs_attribute *epf_ntb_attrs[] = {
&epf_ntb_attr_spad_count,
@ -922,6 +1028,12 @@ static struct configfs_attribute *epf_ntb_attrs[] = {
&epf_ntb_attr_vbus_number,
&epf_ntb_attr_vntb_pid,
&epf_ntb_attr_vntb_vid,
&epf_ntb_attr_ctrl_bar,
&epf_ntb_attr_db_bar,
&epf_ntb_attr_mw1_bar,
&epf_ntb_attr_mw2_bar,
&epf_ntb_attr_mw3_bar,
&epf_ntb_attr_mw4_bar,
NULL,
};
@ -1048,7 +1160,7 @@ static int vntb_epf_mw_set_trans(struct ntb_dev *ndev, int pidx, int idx,
struct device *dev;
dev = &ntb->ntb.dev;
barno = ntb->epf_ntb_bar[BAR_MW0 + idx];
barno = ntb->epf_ntb_bar[BAR_MW1 + idx];
epf_bar = &ntb->epf->bar[barno];
epf_bar->phys_addr = addr;
epf_bar->barno = barno;
@ -1379,6 +1491,7 @@ static int epf_ntb_probe(struct pci_epf *epf,
{
struct epf_ntb *ntb;
struct device *dev;
int i;
dev = &epf->dev;
@ -1389,6 +1502,11 @@ static int epf_ntb_probe(struct pci_epf *epf,
epf->header = &epf_ntb_header;
ntb->epf = epf;
ntb->vbus_number = 0xff;
/* Initially, no bar is assigned */
for (i = 0; i < VNTB_BAR_NUM; i++)
ntb->epf_ntb_bar[i] = NO_BAR;
epf_set_drvdata(epf, ntb);
dev_info(dev, "pci-ep epf driver loaded\n");


@ -691,6 +691,7 @@ void pci_ep_cfs_remove_epf_group(struct config_group *group)
if (IS_ERR_OR_NULL(group))
return;
list_del(&group->group_entry);
configfs_unregister_default_group(group);
}
EXPORT_SYMBOL(pci_ep_cfs_remove_epf_group);


@ -0,0 +1,100 @@
// SPDX-License-Identifier: GPL-2.0
/*
* PCI Endpoint *Controller* (EPC) MSI library
*
* Copyright (C) 2025 NXP
* Author: Frank Li <Frank.Li@nxp.com>
*/
#include <linux/device.h>
#include <linux/export.h>
#include <linux/irqdomain.h>
#include <linux/module.h>
#include <linux/msi.h>
#include <linux/of_irq.h>
#include <linux/pci-epc.h>
#include <linux/pci-epf.h>
#include <linux/pci-ep-cfs.h>
#include <linux/pci-ep-msi.h>
#include <linux/slab.h>
static void pci_epf_write_msi_msg(struct msi_desc *desc, struct msi_msg *msg)
{
struct pci_epc *epc;
struct pci_epf *epf;
epc = pci_epc_get(dev_name(msi_desc_to_dev(desc)));
if (!epc)
return;
epf = list_first_entry_or_null(&epc->pci_epf, struct pci_epf, list);
if (epf && epf->db_msg && desc->msi_index < epf->num_db)
memcpy(&epf->db_msg[desc->msi_index].msg, msg, sizeof(*msg));
pci_epc_put(epc);
}
int pci_epf_alloc_doorbell(struct pci_epf *epf, u16 num_db)
{
struct pci_epc *epc = epf->epc;
struct device *dev = &epf->dev;
struct irq_domain *domain;
void *msg;
int ret;
int i;
/* TODO: Multi-EPF support */
if (list_first_entry_or_null(&epc->pci_epf, struct pci_epf, list) != epf) {
dev_err(dev, "MSI doorbell doesn't support multiple EPF\n");
return -EINVAL;
}
domain = of_msi_map_get_device_domain(epc->dev.parent, 0,
DOMAIN_BUS_PLATFORM_MSI);
if (!domain) {
dev_err(dev, "Can't find MSI domain for EPC\n");
return -ENODEV;
}
if (!irq_domain_is_msi_parent(domain))
return -ENODEV;
if (!irq_domain_is_msi_immutable(domain)) {
dev_err(dev, "Mutable MSI controller not supported\n");
return -ENODEV;
}
dev_set_msi_domain(epc->dev.parent, domain);
msg = kcalloc(num_db, sizeof(struct pci_epf_doorbell_msg), GFP_KERNEL);
if (!msg)
return -ENOMEM;
epf->num_db = num_db;
epf->db_msg = msg;
ret = platform_device_msi_init_and_alloc_irqs(epc->dev.parent, num_db,
pci_epf_write_msi_msg);
if (ret) {
dev_err(dev, "Failed to allocate MSI\n");
kfree(msg);
return ret;
}
for (i = 0; i < num_db; i++)
epf->db_msg[i].virq = msi_get_virq(epc->dev.parent, i);
return ret;
}
EXPORT_SYMBOL_GPL(pci_epf_alloc_doorbell);
void pci_epf_free_doorbell(struct pci_epf *epf)
{
platform_device_msi_free_irqs_all(epf->epc->dev.parent);
kfree(epf->db_msg);
epf->db_msg = NULL;
epf->num_db = 0;
}
EXPORT_SYMBOL_GPL(pci_epf_free_doorbell);
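
To make the flow concrete, here is a minimal sketch of how an EPF driver might consume this library; the driver name, handler, and doorbell count are hypothetical, and error handling is abbreviated:

#include <linux/interrupt.h>
#include <linux/pci-ep-msi.h>
#include <linux/pci-epf.h>

static irqreturn_t foo_doorbell_handler(int irq, void *data)
{
	/* The RC wrote to this doorbell's MSI address; kick the function logic. */
	return IRQ_HANDLED;
}

static int foo_epf_bind(struct pci_epf *epf)
{
	int ret, i;

	ret = pci_epf_alloc_doorbell(epf, 2);	/* hypothetical: two doorbells */
	if (ret)
		return ret;	/* e.g. no immutable parent MSI domain */

	for (i = 0; i < epf->num_db; i++) {
		ret = request_irq(epf->db_msg[i].virq, foo_doorbell_handler,
				  0, "foo-doorbell", epf);
		if (ret)
			goto err_free;
	}

	/*
	 * epf->db_msg[i].msg now holds the MSI address/data the RC must
	 * write to ring doorbell i; expose those through a BAR according
	 * to the function's protocol.
	 */
	return 0;

err_free:
	while (--i >= 0)
		free_irq(epf->db_msg[i].virq, epf);
	pci_epf_free_doorbell(epf);
	return ret;
}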


@@ -338,7 +338,7 @@ static void pci_epf_remove_cfs(struct pci_epf_driver *driver)
mutex_lock(&pci_epf_mutex);
list_for_each_entry_safe(group, tmp, &driver->epf_group, group_entry)
pci_ep_cfs_remove_epf_group(group);
list_del(&driver->epf_group);
WARN_ON(!list_empty(&driver->epf_group));
mutex_unlock(&pci_epf_mutex);
}
@@ -477,6 +477,44 @@ struct pci_epf *pci_epf_create(const char *name)
}
EXPORT_SYMBOL_GPL(pci_epf_create);
/**
* pci_epf_align_inbound_addr() - Align the given address based on the BAR
* alignment requirement
* @epf: the EPF device
* @bar: the BAR number corresponding to the given addr
* @addr: inbound address to be aligned
* @base: base address matching the @bar alignment requirement
* @off: offset to be added to the @base address
*
* Helper function to align input @addr based on BAR's alignment requirement.
* The aligned base address and offset are returned via @base and @off.
*
* NOTE: The pci_epf_alloc_space() function already accounts for alignment.
* This API is primarily intended for use with other memory regions not
* allocated by pci_epf_alloc_space(), such as peripheral register spaces or
* the message address of a platform MSI controller.
*
* Return: 0 on success, errno otherwise.
*/
int pci_epf_align_inbound_addr(struct pci_epf *epf, enum pci_barno bar,
u64 addr, dma_addr_t *base, size_t *off)
{
/*
* Most EP controllers require the BAR start address to be aligned to
* the BAR size, because they mask off the lower bits.
*
* Alignment to BAR size also works for controllers that support
* unaligned addresses.
*/
u64 align = epf->bar[bar].size;
*base = round_down(addr, align);
*off = addr & (align - 1);
return 0;
}
EXPORT_SYMBOL_GPL(pci_epf_align_inbound_addr);
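
The note above about platform MSI message addresses is exactly the doorbell case: the RC-visible BAR must land on the controller-imposed alignment. A hedged sketch (the helper name and the choice of BAR_0 are hypothetical):

/* Map BAR_0 onto the doorbell's MSI message address. */
static int foo_map_doorbell(struct pci_epf *epf, struct msi_msg *msg)
{
	u64 db_addr = ((u64)msg->address_hi << 32) | msg->address_lo;
	dma_addr_t base;
	size_t off;
	int ret;

	ret = pci_epf_align_inbound_addr(epf, BAR_0, db_addr, &base, &off);
	if (ret)
		return ret;

	/*
	 * Program the controller's inbound window for BAR_0 at 'base';
	 * host writes to BAR_0 + 'off' then land on the doorbell address.
	 */
	return 0;
}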
static void pci_epf_dev_release(struct device *dev)
{
struct pci_epf *epf = to_pci_epf(dev);


@@ -2,10 +2,6 @@ Contributions are solicited in particular to remedy the following issues:
cpcihp:
* There are no implementations of the ->hardware_test, ->get_power and
->set_power callbacks in struct cpci_hp_controller_ops. Why were they
introduced? Can they be removed from the struct?
* Returned code from pci_hp_add_bridge() is not checked.
cpqphp:


@@ -995,7 +995,7 @@ static inline int pcie_hotplug_depth(struct pci_dev *dev)
while (bus->parent) {
bus = bus->parent;
if (bus->self && bus->self->is_hotplug_bridge)
if (bus->self && bus->self->is_pciehp)
depth++;
}


@@ -7,11 +7,16 @@
* Copyright (C) 2009 Intel Corporation, Yu Zhao <yu.zhao@intel.com>
*/
#include <linux/bitfield.h>
#include <linux/bits.h>
#include <linux/log2.h>
#include <linux/pci.h>
#include <linux/sizes.h>
#include <linux/slab.h>
#include <linux/export.h>
#include <linux/string.h>
#include <linux/delay.h>
#include <asm/div64.h>
#include "pci.h"
#define VIRTFN_ID_LEN 17 /* "virtfn%u\0" for 2^32 - 1 */
@@ -150,7 +155,28 @@ resource_size_t pci_iov_resource_size(struct pci_dev *dev, int resno)
if (!dev->is_physfn)
return 0;
return dev->sriov->barsz[resno - PCI_IOV_RESOURCES];
return dev->sriov->barsz[pci_resource_num_to_vf_bar(resno)];
}
void pci_iov_resource_set_size(struct pci_dev *dev, int resno,
resource_size_t size)
{
if (!pci_resource_is_iov(resno)) {
pci_warn(dev, "%s is not an IOV resource\n",
pci_resource_name(dev, resno));
return;
}
dev->sriov->barsz[pci_resource_num_to_vf_bar(resno)] = size;
}
bool pci_iov_is_memory_decoding_enabled(struct pci_dev *dev)
{
u16 cmd;
pci_read_config_word(dev, dev->sriov->pos + PCI_SRIOV_CTRL, &cmd);
return cmd & PCI_SRIOV_CTRL_MSE;
}
static void pci_read_vf_config_common(struct pci_dev *virtfn)
@@ -341,12 +367,14 @@ int pci_iov_add_virtfn(struct pci_dev *dev, int id)
virtfn->multifunction = 0;
for (i = 0; i < PCI_SRIOV_NUM_BARS; i++) {
res = &dev->resource[i + PCI_IOV_RESOURCES];
int idx = pci_resource_num_from_vf_bar(i);
res = &dev->resource[idx];
if (!res->parent)
continue;
virtfn->resource[i].name = pci_name(virtfn);
virtfn->resource[i].flags = res->flags;
size = pci_iov_resource_size(dev, i + PCI_IOV_RESOURCES);
size = pci_iov_resource_size(dev, idx);
resource_set_range(&virtfn->resource[i],
res->start + size * id, size);
rc = request_resource(res, &virtfn->resource[i]);
@@ -643,8 +671,13 @@ static int sriov_enable(struct pci_dev *dev, int nr_virtfn)
nres = 0;
for (i = 0; i < PCI_SRIOV_NUM_BARS; i++) {
bars |= (1 << (i + PCI_IOV_RESOURCES));
res = &dev->resource[i + PCI_IOV_RESOURCES];
int idx = pci_resource_num_from_vf_bar(i);
resource_size_t vf_bar_sz = pci_iov_resource_size(dev, idx);
bars |= (1 << idx);
res = &dev->resource[idx];
if (vf_bar_sz * nr_virtfn > resource_size(res))
continue;
if (res->parent)
nres++;
}
@@ -810,8 +843,10 @@ found:
nres = 0;
for (i = 0; i < PCI_SRIOV_NUM_BARS; i++) {
res = &dev->resource[i + PCI_IOV_RESOURCES];
res_name = pci_resource_name(dev, i + PCI_IOV_RESOURCES);
int idx = pci_resource_num_from_vf_bar(i);
res = &dev->resource[idx];
res_name = pci_resource_name(dev, idx);
/*
* If it is already FIXED, don't change it, something
@@ -850,6 +885,7 @@ found:
pci_read_config_byte(dev, pos + PCI_SRIOV_FUNC_LINK, &iov->link);
if (pci_pcie_type(dev) == PCI_EXP_TYPE_RC_END)
iov->link = PCI_DEVFN(PCI_SLOT(dev->devfn), iov->link);
iov->vf_rebar_cap = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_VF_REBAR);
if (pdev)
iov->dev = pci_dev_get(pdev);
@@ -869,7 +905,7 @@ fail_max_buses:
dev->is_physfn = 0;
failed:
for (i = 0; i < PCI_SRIOV_NUM_BARS; i++) {
res = &dev->resource[i + PCI_IOV_RESOURCES];
res = &dev->resource[pci_resource_num_from_vf_bar(i)];
res->flags = 0;
}
@@ -888,6 +924,30 @@ static void sriov_release(struct pci_dev *dev)
dev->sriov = NULL;
}
static void sriov_restore_vf_rebar_state(struct pci_dev *dev)
{
unsigned int pos, nbars, i;
u32 ctrl;
pos = pci_iov_vf_rebar_cap(dev);
if (!pos)
return;
pci_read_config_dword(dev, pos + PCI_VF_REBAR_CTRL, &ctrl);
nbars = FIELD_GET(PCI_VF_REBAR_CTRL_NBAR_MASK, ctrl);
for (i = 0; i < nbars; i++, pos += 8) {
int bar_idx, size;
pci_read_config_dword(dev, pos + PCI_VF_REBAR_CTRL, &ctrl);
bar_idx = FIELD_GET(PCI_VF_REBAR_CTRL_BAR_IDX, ctrl);
size = pci_rebar_bytes_to_size(dev->sriov->barsz[bar_idx]);
ctrl &= ~PCI_VF_REBAR_CTRL_BAR_SIZE;
ctrl |= FIELD_PREP(PCI_VF_REBAR_CTRL_BAR_SIZE, size);
pci_write_config_dword(dev, pos + PCI_VF_REBAR_CTRL, ctrl);
}
}
static void sriov_restore_state(struct pci_dev *dev)
{
int i;
@@ -907,7 +967,7 @@ static void sriov_restore_state(struct pci_dev *dev)
pci_write_config_word(dev, iov->pos + PCI_SRIOV_CTRL, ctrl);
for (i = 0; i < PCI_SRIOV_NUM_BARS; i++)
pci_update_resource(dev, i + PCI_IOV_RESOURCES);
pci_update_resource(dev, pci_resource_num_from_vf_bar(i));
pci_write_config_dword(dev, iov->pos + PCI_SRIOV_SYS_PGSIZE, iov->pgsz);
pci_iov_set_numvfs(dev, iov->num_VFs);
@@ -973,7 +1033,7 @@ void pci_iov_update_resource(struct pci_dev *dev, int resno)
{
struct pci_sriov *iov = dev->is_physfn ? dev->sriov : NULL;
struct resource *res = pci_resource_n(dev, resno);
int vf_bar = resno - PCI_IOV_RESOURCES;
int vf_bar = pci_resource_num_to_vf_bar(resno);
struct pci_bus_region region;
u16 cmd;
u32 new;
@@ -1047,8 +1107,10 @@ resource_size_t pci_sriov_resource_alignment(struct pci_dev *dev, int resno)
*/
void pci_restore_iov_state(struct pci_dev *dev)
{
if (dev->is_physfn)
if (dev->is_physfn) {
sriov_restore_vf_rebar_state(dev);
sriov_restore_state(dev);
}
}
/**
@@ -1255,3 +1317,72 @@ int pci_sriov_configure_simple(struct pci_dev *dev, int nr_virtfn)
return nr_virtfn;
}
EXPORT_SYMBOL_GPL(pci_sriov_configure_simple);
/**
* pci_iov_vf_bar_set_size - set a new size for a VF BAR
* @dev: the PCI device
* @resno: the resource number
* @size: new size as defined in the spec (0=1MB, 31=128TB)
*
* Set the new size of a VF BAR that supports VF resizable BAR capability.
* Unlike pci_resize_resource(), this does not cause the resource that
* reserves the MMIO space (originally up to total_VFs) to be resized, which
* means that following calls to pci_enable_sriov() can fail if the resources
* no longer fit.
*
* Return: 0 on success, or negative on failure.
*/
int pci_iov_vf_bar_set_size(struct pci_dev *dev, int resno, int size)
{
u32 sizes;
int ret;
if (!pci_resource_is_iov(resno))
return -EINVAL;
if (pci_iov_is_memory_decoding_enabled(dev))
return -EBUSY;
sizes = pci_rebar_get_possible_sizes(dev, resno);
if (!sizes)
return -ENOTSUPP;
if (!(sizes & BIT(size)))
return -EINVAL;
ret = pci_rebar_set_size(dev, resno, size);
if (ret)
return ret;
pci_iov_resource_set_size(dev, resno, pci_rebar_size_to_bytes(size));
return 0;
}
EXPORT_SYMBOL_GPL(pci_iov_vf_bar_set_size);
/**
* pci_iov_vf_bar_get_sizes - get VF BAR sizes allowing to create up to num_vfs
* @dev: the PCI device
* @resno: the resource number
* @num_vfs: number of VFs
*
* Get the sizes of a VF resizable BAR that can accommodate @num_vfs within
* the currently assigned size of the resource @resno.
*
* Return: A bitmask of sizes in the format defined in the spec (bit 0=1MB,
* bit 31=128TB).
*/
u32 pci_iov_vf_bar_get_sizes(struct pci_dev *dev, int resno, int num_vfs)
{
u64 vf_len = pci_resource_len(dev, resno);
u32 sizes;
if (!num_vfs)
return 0;
do_div(vf_len, num_vfs);
sizes = (roundup_pow_of_two(vf_len + 1) - 1) >> ilog2(SZ_1M);
return sizes & pci_rebar_get_possible_sizes(dev, resno);
}
EXPORT_SYMBOL_GPL(pci_iov_vf_bar_get_sizes);
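
Taken together, the two helpers let a PF driver trade per-VF BAR size against VF count before enabling SR-IOV. A hedged sketch (the driver name and size policy are hypothetical):

#include <linux/bitops.h>
#include <linux/pci.h>

/* Pick the largest VF BAR0 size that still lets num_vfs fit. */
static int foo_fit_vf_bar0(struct pci_dev *pdev, int num_vfs)
{
	int resno = PCI_IOV_RESOURCES;	/* resource number of VF BAR0 */
	u32 sizes;

	sizes = pci_iov_vf_bar_get_sizes(pdev, resno, num_vfs);
	if (!sizes)
		return -ENOSPC;

	/* Bitmask runs from bit 0=1MB up to bit 31=128TB; take the highest. */
	return pci_iov_vf_bar_set_size(pdev, resno, fls(sizes) - 1);
}

A driver would call this ahead of pci_enable_sriov(), since pci_iov_vf_bar_set_size() fails with -EBUSY once VF memory decoding is enabled.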


@@ -943,7 +943,7 @@ int pci_msix_write_tph_tag(struct pci_dev *pdev, unsigned int index, u16 tag)
/*
* This is a horrible hack, but short of implementing a PCI
* specific interrupt chip callback and a huge pile of
* infrastructure, this is the minor nuissance. It provides the
* infrastructure, this is the minor nuisance. It provides the
* protection against concurrent operations on this entry and keeps
* the control word cache in sync.
*/


@@ -816,15 +816,10 @@ int pci_acpi_program_hp_params(struct pci_dev *dev)
bool pciehp_is_native(struct pci_dev *bridge)
{
const struct pci_host_bridge *host;
u32 slot_cap;
if (!IS_ENABLED(CONFIG_HOTPLUG_PCI_PCIE))
return false;
pcie_capability_read_dword(bridge, PCI_EXP_SLTCAP, &slot_cap);
if (!(slot_cap & PCI_EXP_SLTCAP_HPC))
return false;
if (pcie_ports_native)
return true;
@@ -1002,7 +997,7 @@ bool acpi_pci_bridge_d3(struct pci_dev *dev)
struct acpi_device *adev, *rpadev;
const union acpi_object *obj;
if (acpi_pci_disabled || !dev->is_hotplug_bridge)
if (acpi_pci_disabled || !dev->is_pciehp)
return false;
adev = ACPI_COMPANION(&dev->dev);


@@ -1632,7 +1632,7 @@ static int pci_bus_num_vf(struct device *dev)
*/
static int pci_dma_configure(struct device *dev)
{
struct pci_driver *driver = to_pci_driver(dev->driver);
const struct device_driver *drv = READ_ONCE(dev->driver);
struct device *bridge;
int ret = 0;
@@ -1649,8 +1649,8 @@ static int pci_dma_configure(struct device *dev)
pci_put_host_bridge_device(bridge);
/* @driver may not be valid when we're called from the IOMMU layer */
if (!ret && dev->driver && !driver->driver_managed_dma) {
/* @drv may not be valid when we're called from the IOMMU layer */
if (!ret && drv && !to_pci_driver(drv)->driver_managed_dma) {
ret = iommu_device_use_default_domain(dev);
if (ret)
arch_teardown_dma_ops(dev);


@@ -3030,8 +3030,12 @@ static const struct dmi_system_id bridge_d3_blacklist[] = {
* pci_bridge_d3_possible - Is it possible to put the bridge into D3
* @bridge: Bridge to check
*
* This function checks if it is possible to move the bridge to D3.
* Currently we only allow D3 for some PCIe ports and for Thunderbolt.
*
* Return: Whether it is possible to move the bridge to D3.
*
* The return value is guaranteed to be constant across the entire lifetime
* of the bridge, including its hot-removal.
*/
bool pci_bridge_d3_possible(struct pci_dev *bridge)
{
@@ -3046,10 +3050,14 @@ bool pci_bridge_d3_possible(struct pci_dev *bridge)
return false;
/*
* Hotplug ports handled by firmware in System Management Mode
* may not be put into D3 by the OS (Thunderbolt on non-Macs).
* Hotplug ports handled by platform firmware may not be put
* into D3 by the OS, e.g. ACPI slots ...
*/
if (bridge->is_hotplug_bridge && !pciehp_is_native(bridge))
if (bridge->is_hotplug_bridge && !bridge->is_pciehp)
return false;
/* ... or PCIe hotplug ports not handled natively by the OS. */
if (bridge->is_pciehp && !pciehp_is_native(bridge))
return false;
if (pci_bridge_d3_force)
@@ -3068,7 +3076,7 @@ bool pci_bridge_d3_possible(struct pci_dev *bridge)
* by vendors for runtime D3 at least until 2018 because there
* was no OS support.
*/
if (bridge->is_hotplug_bridge)
if (bridge->is_pciehp)
return false;
if (dmi_check_system(bridge_d3_blacklist))
@@ -3205,7 +3213,6 @@ void pci_pm_power_up_and_verify_state(struct pci_dev *pci_dev)
void pci_pm_init(struct pci_dev *dev)
{
int pm;
u16 status;
u16 pmc;
device_enable_async_suspend(&dev->dev);
@@ -3266,9 +3273,6 @@ void pci_pm_init(struct pci_dev *dev)
pci_pme_active(dev, false);
}
pci_read_config_word(dev, PCI_STATUS, &status);
if (status & PCI_STATUS_IMM_READY)
dev->imm_ready = 1;
poweron:
pci_pm_power_up_and_verify_state(dev);
pm_runtime_forbid(&dev->dev);
@@ -3753,7 +3757,13 @@ static int pci_rebar_find_pos(struct pci_dev *pdev, int bar)
unsigned int pos, nbars, i;
u32 ctrl;
pos = pdev->rebar_cap;
if (pci_resource_is_iov(bar)) {
pos = pci_iov_vf_rebar_cap(pdev);
bar = pci_resource_num_to_vf_bar(bar);
} else {
pos = pdev->rebar_cap;
}
if (!pos)
return -ENOTSUPP;


@@ -35,13 +35,6 @@ struct pcie_tlp_log;
*/
#define PCIE_T_PERST_CLK_US 100
/*
* End of conventional reset (PERST# de-asserted) to first configuration
* request (device able to respond with a "Request Retry Status" completion),
* from PCIe r6.0, sec 6.6.1.
*/
#define PCIE_T_RRS_READY_MS 100
/*
* PCIe r6.0, sec 5.3.3.2.1 <PME Synchronization>
* Recommends 1ms to 10ms timeout to check L2 ready.
@@ -61,7 +54,11 @@ struct pcie_tlp_log;
* completes before sending a Configuration Request to the device
* immediately below that Port."
*/
#define PCIE_RESET_CONFIG_DEVICE_WAIT_MS 100
#define PCIE_RESET_CONFIG_WAIT_MS 100
/* Parameters for the waiting for link up routine */
#define PCIE_LINK_WAIT_MAX_RETRIES 10
#define PCIE_LINK_WAIT_SLEEP_MS 90
/* Message Routing (r[2:0]); PCIe r6.0, sec 2.2.8 */
#define PCIE_MSG_TYPE_R_RC 0
@@ -391,12 +388,14 @@ void pci_bus_put(struct pci_bus *bus);
#define PCIE_LNKCAP_SLS2SPEED(lnkcap) \
({ \
((lnkcap) == PCI_EXP_LNKCAP_SLS_64_0GB ? PCIE_SPEED_64_0GT : \
(lnkcap) == PCI_EXP_LNKCAP_SLS_32_0GB ? PCIE_SPEED_32_0GT : \
(lnkcap) == PCI_EXP_LNKCAP_SLS_16_0GB ? PCIE_SPEED_16_0GT : \
(lnkcap) == PCI_EXP_LNKCAP_SLS_8_0GB ? PCIE_SPEED_8_0GT : \
(lnkcap) == PCI_EXP_LNKCAP_SLS_5_0GB ? PCIE_SPEED_5_0GT : \
(lnkcap) == PCI_EXP_LNKCAP_SLS_2_5GB ? PCIE_SPEED_2_5GT : \
u32 lnkcap_sls = (lnkcap) & PCI_EXP_LNKCAP_SLS; \
\
(lnkcap_sls == PCI_EXP_LNKCAP_SLS_64_0GB ? PCIE_SPEED_64_0GT : \
lnkcap_sls == PCI_EXP_LNKCAP_SLS_32_0GB ? PCIE_SPEED_32_0GT : \
lnkcap_sls == PCI_EXP_LNKCAP_SLS_16_0GB ? PCIE_SPEED_16_0GT : \
lnkcap_sls == PCI_EXP_LNKCAP_SLS_8_0GB ? PCIE_SPEED_8_0GT : \
lnkcap_sls == PCI_EXP_LNKCAP_SLS_5_0GB ? PCIE_SPEED_5_0GT : \
lnkcap_sls == PCI_EXP_LNKCAP_SLS_2_5GB ? PCIE_SPEED_2_5GT : \
PCI_SPEED_UNKNOWN); \
})
@@ -411,13 +410,17 @@ void pci_bus_put(struct pci_bus *bus);
PCI_SPEED_UNKNOWN)
#define PCIE_LNKCTL2_TLS2SPEED(lnkctl2) \
((lnkctl2) == PCI_EXP_LNKCTL2_TLS_64_0GT ? PCIE_SPEED_64_0GT : \
(lnkctl2) == PCI_EXP_LNKCTL2_TLS_32_0GT ? PCIE_SPEED_32_0GT : \
(lnkctl2) == PCI_EXP_LNKCTL2_TLS_16_0GT ? PCIE_SPEED_16_0GT : \
(lnkctl2) == PCI_EXP_LNKCTL2_TLS_8_0GT ? PCIE_SPEED_8_0GT : \
(lnkctl2) == PCI_EXP_LNKCTL2_TLS_5_0GT ? PCIE_SPEED_5_0GT : \
(lnkctl2) == PCI_EXP_LNKCTL2_TLS_2_5GT ? PCIE_SPEED_2_5GT : \
PCI_SPEED_UNKNOWN)
({ \
u16 lnkctl2_tls = (lnkctl2) & PCI_EXP_LNKCTL2_TLS; \
\
(lnkctl2_tls == PCI_EXP_LNKCTL2_TLS_64_0GT ? PCIE_SPEED_64_0GT : \
lnkctl2_tls == PCI_EXP_LNKCTL2_TLS_32_0GT ? PCIE_SPEED_32_0GT : \
lnkctl2_tls == PCI_EXP_LNKCTL2_TLS_16_0GT ? PCIE_SPEED_16_0GT : \
lnkctl2_tls == PCI_EXP_LNKCTL2_TLS_8_0GT ? PCIE_SPEED_8_0GT : \
lnkctl2_tls == PCI_EXP_LNKCTL2_TLS_5_0GT ? PCIE_SPEED_5_0GT : \
lnkctl2_tls == PCI_EXP_LNKCTL2_TLS_2_5GT ? PCIE_SPEED_2_5GT : \
PCI_SPEED_UNKNOWN); \
})
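
The statement-expression rewrite matters because LNKCTL2 carries more fields than the Target Link Speed; masking inside the macro keeps call sites correct even when other bits are set. A hypothetical sketch, assuming a struct pci_dev *dev in scope:

u16 lnkctl2;
enum pci_bus_speed speed;

pcie_capability_read_word(dev, PCI_EXP_LNKCTL2, &lnkctl2);
/* Other LNKCTL2 bits (e.g. PCI_EXP_LNKCTL2_ENTER_COMP) no longer break the compare. */
speed = PCIE_LNKCTL2_TLS2SPEED(lnkctl2);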
/* PCIe speed to Mb/s reduced by encoding overhead */
#define PCIE_SPEED2MBS_ENC(speed) \
@@ -486,6 +489,7 @@ struct pci_sriov {
u16 subsystem_vendor; /* VF subsystem vendor */
u16 subsystem_device; /* VF subsystem device */
resource_size_t barsz[PCI_SRIOV_NUM_BARS]; /* VF BAR size */
u16 vf_rebar_cap; /* VF Resizable BAR capability offset */
bool drivers_autoprobe; /* Auto probing of VFs by driver */
};
@@ -710,10 +714,28 @@ void pci_iov_update_resource(struct pci_dev *dev, int resno);
resource_size_t pci_sriov_resource_alignment(struct pci_dev *dev, int resno);
void pci_restore_iov_state(struct pci_dev *dev);
int pci_iov_bus_range(struct pci_bus *bus);
void pci_iov_resource_set_size(struct pci_dev *dev, int resno,
resource_size_t size);
bool pci_iov_is_memory_decoding_enabled(struct pci_dev *dev);
static inline u16 pci_iov_vf_rebar_cap(struct pci_dev *dev)
{
if (!dev->is_physfn)
return 0;
return dev->sriov->vf_rebar_cap;
}
static inline bool pci_resource_is_iov(int resno)
{
return resno >= PCI_IOV_RESOURCES && resno <= PCI_IOV_RESOURCE_END;
}
static inline int pci_resource_num_from_vf_bar(int resno)
{
return resno + PCI_IOV_RESOURCES;
}
static inline int pci_resource_num_to_vf_bar(int resno)
{
return resno - PCI_IOV_RESOURCES;
}
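
These two helpers replace the open-coded `i + PCI_IOV_RESOURCES` arithmetic in the sriov.c hunks above; for illustration:

int resno  = pci_resource_num_from_vf_bar(0);	/* VF BAR0 -> PCI_IOV_RESOURCES */
int vf_bar = pci_resource_num_to_vf_bar(resno);	/* ... and back to 0 */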
extern const struct attribute_group sriov_pf_dev_attr_group;
extern const struct attribute_group sriov_vf_dev_attr_group;
#else
@@ -734,10 +756,30 @@ static inline int pci_iov_bus_range(struct pci_bus *bus)
{
return 0;
}
static inline void pci_iov_resource_set_size(struct pci_dev *dev, int resno,
resource_size_t size) { }
static inline bool pci_iov_is_memory_decoding_enabled(struct pci_dev *dev)
{
return false;
}
static inline u16 pci_iov_vf_rebar_cap(struct pci_dev *dev)
{
return 0;
}
static inline bool pci_resource_is_iov(int resno)
{
return false;
}
static inline int pci_resource_num_from_vf_bar(int resno)
{
WARN_ON_ONCE(1);
return -ENODEV;
}
static inline int pci_resource_num_to_vf_bar(int resno)
{
WARN_ON_ONCE(1);
return -ENODEV;
}
#endif /* CONFIG_PCI_IOV */
#ifdef CONFIG_PCIE_TPH


@@ -116,12 +116,12 @@ struct aer_info {
PCI_ERR_ROOT_MULTI_COR_RCV | \
PCI_ERR_ROOT_MULTI_UNCOR_RCV)
static int pcie_aer_disable;
static bool pcie_aer_disable;
static pci_ers_result_t aer_root_reset(struct pci_dev *dev);
void pci_no_aer(void)
{
pcie_aer_disable = 1;
pcie_aer_disable = true;
}
bool pci_aer_available(void)
@@ -1039,7 +1039,8 @@ static int find_device_iter(struct pci_dev *dev, void *data)
/* List this device */
if (add_error_device(e_info, dev)) {
/* We cannot handle more... Stop iteration */
/* TODO: Should print error message here? */
pci_err(dev, "Exceeded max supported (%d) devices with errors logged\n",
AER_MAX_MULTI_ERR_DEVICES);
return 1;
}


@@ -245,7 +245,7 @@ struct pcie_link_state {
u32 clkpm_disable:1; /* Clock PM disabled */
};
static int aspm_disabled, aspm_force;
static bool aspm_disabled, aspm_force;
static bool aspm_support_enabled = true;
static DEFINE_MUTEX(aspm_lock);
static LIST_HEAD(link_list);
@@ -884,10 +884,9 @@ static void pcie_aspm_cap_init(struct pcie_link_state *link, int blacklist)
/* Configure the ASPM L1 substates. Caller must disable L1 first. */
static void pcie_config_aspm_l1ss(struct pcie_link_state *link, u32 state)
{
u32 val;
u32 val = 0;
struct pci_dev *child = link->downstream, *parent = link->pdev;
val = 0;
if (state & PCIE_LINK_STATE_L1_1)
val |= PCI_L1SS_CTL1_ASPM_L1_1;
if (state & PCIE_LINK_STATE_L1_2)
@@ -1712,11 +1711,11 @@ static int __init pcie_aspm_disable(char *str)
{
if (!strcmp(str, "off")) {
aspm_policy = POLICY_DEFAULT;
aspm_disabled = 1;
aspm_disabled = true;
aspm_support_enabled = false;
pr_info("PCIe ASPM is disabled\n");
} else if (!strcmp(str, "force")) {
aspm_force = 1;
aspm_force = true;
pr_info("PCIe ASPM is forcibly enabled\n");
}
return 1;
@@ -1734,7 +1733,7 @@ void pcie_no_aspm(void)
*/
if (!aspm_force) {
aspm_policy = POLICY_DEFAULT;
aspm_disabled = 1;
aspm_disabled = true;
}
}


@@ -220,7 +220,7 @@ static int get_port_device_capability(struct pci_dev *dev)
struct pci_host_bridge *host = pci_find_host_bridge(dev->bus);
int services = 0;
if (dev->is_hotplug_bridge &&
if (dev->is_pciehp &&
(pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT ||
pci_pcie_type(dev) == PCI_EXP_TYPE_DOWNSTREAM) &&
(pcie_ports_native || host->native_pcie_hotplug)) {


@@ -507,7 +507,7 @@ struct pci_ptm_debugfs *pcie_ptm_create_debugfs(struct device *dev, void *pdata,
if (!ops->check_capability)
return NULL;
/* Check for PTM capability before creating debugfs attrbutes */
/* Check for PTM capability before creating debugfs attributes */
ret = ops->check_capability(pdata);
if (!ret) {
dev_dbg(dev, "PTM capability not present\n");


@@ -1678,7 +1678,7 @@ void set_pcie_hotplug_bridge(struct pci_dev *pdev)
pcie_capability_read_dword(pdev, PCI_EXP_SLTCAP, &reg32);
if (reg32 & PCI_EXP_SLTCAP_HPC)
pdev->is_hotplug_bridge = 1;
pdev->is_hotplug_bridge = pdev->is_pciehp = 1;
}
static void set_pcie_thunderbolt(struct pci_dev *dev)
@@ -2602,6 +2602,15 @@ void pcie_report_downtraining(struct pci_dev *dev)
__pcie_print_link_status(dev, false);
}
static void pci_imm_ready_init(struct pci_dev *dev)
{
u16 status;
pci_read_config_word(dev, PCI_STATUS, &status);
if (status & PCI_STATUS_IMM_READY)
dev->imm_ready = 1;
}
static void pci_init_capabilities(struct pci_dev *dev)
{
pci_ea_init(dev); /* Enhanced Allocation */
@@ -2611,6 +2620,7 @@ static void pci_init_capabilities(struct pci_dev *dev)
/* Buffers for saving PCIe and PCI-X capabilities */
pci_allocate_cap_save_buffers(dev);
pci_imm_ready_init(dev); /* Immediate Readiness */
pci_pm_init(dev); /* Power Management */
pci_vpd_init(dev); /* Vital Product Data */
pci_configure_ari(dev); /* Alternative Routing-ID Forwarding */


@@ -105,13 +105,13 @@ int pcie_failed_link_retrain(struct pci_dev *dev)
!pcie_cap_has_lnkctl2(dev) || !dev->link_active_reporting)
return ret;
pcie_capability_read_word(dev, PCI_EXP_LNKCTL2, &lnkctl2);
pcie_capability_read_word(dev, PCI_EXP_LNKSTA, &lnksta);
if (!(lnksta & PCI_EXP_LNKSTA_DLLLA) && pcie_lbms_seen(dev, lnksta)) {
u16 oldlnkctl2 = lnkctl2;
u16 oldlnkctl2;
pci_info(dev, "broken device, retraining non-functional downstream link at 2.5GT/s\n");
pcie_capability_read_word(dev, PCI_EXP_LNKCTL2, &oldlnkctl2);
ret = pcie_set_target_speed(dev, PCIE_SPEED_2_5GT, false);
if (ret) {
pci_info(dev, "retraining failed\n");
@@ -123,6 +123,8 @@ int pcie_failed_link_retrain(struct pci_dev *dev)
pcie_capability_read_word(dev, PCI_EXP_LNKSTA, &lnksta);
}
pcie_capability_read_word(dev, PCI_EXP_LNKCTL2, &lnkctl2);
if ((lnksta & PCI_EXP_LNKSTA_DLLLA) &&
(lnkctl2 & PCI_EXP_LNKCTL2_TLS) == PCI_EXP_LNKCTL2_TLS_2_5GT &&
pci_match_id(ids, dev)) {


@@ -1888,7 +1888,8 @@ static int iov_resources_unassigned(struct pci_dev *dev, void *data)
bool *unassigned = data;
for (i = 0; i < PCI_SRIOV_NUM_BARS; i++) {
struct resource *r = &dev->resource[i + PCI_IOV_RESOURCES];
int idx = pci_resource_num_from_vf_bar(i);
struct resource *r = &dev->resource[idx];
struct pci_bus_region region;
/* Not assigned or rejected by kernel? */


@@ -423,13 +423,39 @@ void pci_release_resource(struct pci_dev *dev, int resno)
}
EXPORT_SYMBOL(pci_release_resource);
static bool pci_resize_is_memory_decoding_enabled(struct pci_dev *dev,
int resno)
{
u16 cmd;
if (pci_resource_is_iov(resno))
return pci_iov_is_memory_decoding_enabled(dev);
pci_read_config_word(dev, PCI_COMMAND, &cmd);
return cmd & PCI_COMMAND_MEMORY;
}
static void pci_resize_resource_set_size(struct pci_dev *dev, int resno,
int size)
{
resource_size_t res_size = pci_rebar_size_to_bytes(size);
struct resource *res = pci_resource_n(dev, resno);
if (!pci_resource_is_iov(resno)) {
resource_set_size(res, res_size);
} else {
resource_set_size(res, res_size * pci_sriov_get_totalvfs(dev));
pci_iov_resource_set_size(dev, resno, res_size);
}
}
int pci_resize_resource(struct pci_dev *dev, int resno, int size)
{
struct resource *res = pci_resource_n(dev, resno);
struct pci_host_bridge *host;
int old, ret;
u32 sizes;
u16 cmd;
/* Check if we must preserve the firmware's resource assignment */
host = pci_find_host_bridge(dev->bus);
@@ -440,8 +466,7 @@ int pci_resize_resource(struct pci_dev *dev, int resno, int size)
if (!(res->flags & IORESOURCE_UNSET))
return -EBUSY;
pci_read_config_word(dev, PCI_COMMAND, &cmd);
if (cmd & PCI_COMMAND_MEMORY)
if (pci_resize_is_memory_decoding_enabled(dev, resno))
return -EBUSY;
sizes = pci_rebar_get_possible_sizes(dev, resno);
@@ -459,7 +484,7 @@ int pci_resize_resource(struct pci_dev *dev, int resno, int size)
if (ret)
return ret;
resource_set_size(res, pci_rebar_size_to_bytes(size));
pci_resize_resource_set_size(dev, resno, size);
/* Check if the new config works by trying to assign everything. */
if (dev->bus->self) {
@@ -471,7 +496,7 @@ int pci_resize_resource(struct pci_dev *dev, int resno, int size)
error_resize:
pci_rebar_set_size(dev, resno, old);
resource_set_size(res, pci_rebar_size_to_bytes(old));
pci_resize_resource_set_size(dev, resno, old);
return ret;
}
EXPORT_SYMBOL(pci_resize_resource);


@@ -437,8 +437,7 @@ static int vfio_pci_igd_cfg_init(struct vfio_pci_core_device *vdev)
bool vfio_pci_is_intel_display(struct pci_dev *pdev)
{
return (pdev->vendor == PCI_VENDOR_ID_INTEL) &&
((pdev->class >> 16) == PCI_BASE_CLASS_DISPLAY);
return (pdev->vendor == PCI_VENDOR_ID_INTEL) && pci_is_display(pdev);
}
int vfio_pci_igd_init(struct vfio_pci_core_device *vdev)


@@ -90,7 +90,6 @@ enum cpuhp_state {
CPUHP_RADIX_DEAD,
CPUHP_PAGE_ALLOC,
CPUHP_NET_DEV_DEAD,
CPUHP_PCI_XGENE_DEAD,
CPUHP_IOMMU_IOVA_DEAD,
CPUHP_AP_ARM_CACHE_B15_RAC_DEAD,
CPUHP_PADATA_DEAD,


@@ -37,6 +37,9 @@ static inline bool hypervisor_isolated_pci_functions(void)
if (IS_ENABLED(CONFIG_S390))
return true;
if (IS_ENABLED(CONFIG_LOONGARCH))
return true;
return jailhouse_paravirt();
}


@@ -0,0 +1,28 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* PCI Endpoint *Function* side MSI header file
*
* Copyright (C) 2024 NXP
* Author: Frank Li <Frank.Li@nxp.com>
*/
#ifndef __PCI_EP_MSI__
#define __PCI_EP_MSI__
struct pci_epf;
#ifdef CONFIG_PCI_ENDPOINT_MSI_DOORBELL
int pci_epf_alloc_doorbell(struct pci_epf *epf, u16 nums);
void pci_epf_free_doorbell(struct pci_epf *epf);
#else
static inline int pci_epf_alloc_doorbell(struct pci_epf *epf, u16 nums)
{
return -ENODATA;
}
static inline void pci_epf_free_doorbell(struct pci_epf *epf)
{
}
#endif /* CONFIG_PCI_ENDPOINT_MSI_DOORBELL */
#endif /* __PCI_EP_MSI__ */


@@ -12,6 +12,7 @@
#include <linux/configfs.h>
#include <linux/device.h>
#include <linux/mod_devicetable.h>
#include <linux/msi.h>
#include <linux/pci.h>
struct pci_epf;
@@ -128,6 +129,16 @@ struct pci_epf_bar {
int flags;
};
/**
* struct pci_epf_doorbell_msg - represents doorbell message
* @msg: MSI message
* @virq: IRQ number of this doorbell MSI message
*/
struct pci_epf_doorbell_msg {
struct msi_msg msg;
int virq;
};
/**
* struct pci_epf - represents the PCI EPF device
* @dev: the PCI EPF device
@@ -155,6 +166,8 @@ struct pci_epf_bar {
* @vfunction_num_map: bitmap to manage virtual function number
* @pci_vepf: list of virtual endpoint functions associated with this function
* @event_ops: callbacks for capturing the EPC events
* @db_msg: data for MSI from RC side
* @num_db: number of doorbells
*/
struct pci_epf {
struct device dev;
@@ -185,6 +198,8 @@ struct pci_epf {
unsigned long vfunction_num_map;
struct list_head pci_vepf;
const struct pci_epc_event_ops *event_ops;
struct pci_epf_doorbell_msg *db_msg;
u16 num_db;
};
/**
@@ -226,6 +241,9 @@ void *pci_epf_alloc_space(struct pci_epf *epf, size_t size, enum pci_barno bar,
enum pci_epc_interface_type type);
void pci_epf_free_space(struct pci_epf *epf, void *addr, enum pci_barno bar,
enum pci_epc_interface_type type);
int pci_epf_align_inbound_addr(struct pci_epf *epf, enum pci_barno bar,
u64 addr, dma_addr_t *base, size_t *off);
int pci_epf_bind(struct pci_epf *epf);
void pci_epf_unbind(struct pci_epf *epf);
int pci_epf_add_vepf(struct pci_epf *epf_pf, struct pci_epf *epf_vf);


@@ -39,7 +39,7 @@ struct device_link;
struct pci_pwrctrl {
struct device *dev;
/* Private: don't use. */
/* private: internal use only */
struct notifier_block nb;
struct device_link *link;
struct work_struct work;


@@ -328,6 +328,11 @@ struct rcec_ea;
* determined (e.g., for Root Complex Integrated
* Endpoints without the relevant Capability
* Registers).
* @is_hotplug_bridge: Hotplug bridge of any kind (e.g. PCIe Hot-Plug Capable,
* Conventional PCI Hot-Plug, ACPI slot).
* Such bridges are allocated additional MMIO and bus
* number resources to allow for hierarchy expansion.
* @is_pciehp: PCIe Hot-Plug Capable bridge.
*/
struct pci_dev {
struct list_head bus_list; /* Node in per-bus list */
@@ -451,6 +456,7 @@ struct pci_dev {
unsigned int is_physfn:1;
unsigned int is_virtfn:1;
unsigned int is_hotplug_bridge:1;
unsigned int is_pciehp:1;
unsigned int shpc_managed:1; /* SHPC owned by shpchp */
unsigned int is_thunderbolt:1; /* Thunderbolt controller */
/*
@@ -744,6 +750,21 @@ static inline bool pci_is_vga(struct pci_dev *pdev)
return false;
}
/**
* pci_is_display - check if the PCI device is a display controller
* @pdev: PCI device
*
* Determine whether the given PCI device corresponds to a display
* controller. Display controllers are typically used for graphical output
* and are identified based on their class code.
*
* Return: true if the PCI device is a display controller, false otherwise.
*/
static inline bool pci_is_display(struct pci_dev *pdev)
{
return (pdev->class >> 16) == PCI_BASE_CLASS_DISPLAY;
}
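
A hedged sketch of the intended call pattern (the policy hook is hypothetical); this mirrors the open-coded class checks the helper replaces in the vfio hunk above:

struct pci_dev *pdev = NULL;

for_each_pci_dev(pdev) {
	if (!pci_is_display(pdev))
		continue;
	/* Display controller found; apply driver policy here. */
}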
#define for_each_pci_bridge(dev, bus) \
list_for_each_entry(dev, &bus->devices, bus_list) \
if (!pci_is_bridge(dev)) {} else
@@ -2438,6 +2459,8 @@ int pci_sriov_set_totalvfs(struct pci_dev *dev, u16 numvfs);
int pci_sriov_get_totalvfs(struct pci_dev *dev);
int pci_sriov_configure_simple(struct pci_dev *dev, int nr_virtfn);
resource_size_t pci_iov_resource_size(struct pci_dev *dev, int resno);
int pci_iov_vf_bar_set_size(struct pci_dev *dev, int resno, int size);
u32 pci_iov_vf_bar_get_sizes(struct pci_dev *dev, int resno, int num_vfs);
void pci_vf_drivers_autoprobe(struct pci_dev *dev, bool probe);
/* Arch may override these (weak) */
@@ -2490,6 +2513,10 @@ static inline int pci_sriov_get_totalvfs(struct pci_dev *dev)
#define pci_sriov_configure_simple NULL
static inline resource_size_t pci_iov_resource_size(struct pci_dev *dev, int resno)
{ return 0; }
static inline int pci_iov_vf_bar_set_size(struct pci_dev *dev, int resno, int size)
{ return -ENODEV; }
static inline u32 pci_iov_vf_bar_get_sizes(struct pci_dev *dev, int resno, int num_vfs)
{ return 0; }
static inline void pci_vf_drivers_autoprobe(struct pci_dev *dev, bool probe) { }
#endif


@@ -104,6 +104,7 @@ static inline bool shpchp_is_native(struct pci_dev *bridge) { return true; }
static inline bool hotplug_is_native(struct pci_dev *bridge)
{
return pciehp_is_native(bridge) || shpchp_is_native(bridge);
return (bridge->is_pciehp && pciehp_is_native(bridge)) ||
shpchp_is_native(bridge);
}
#endif


@@ -745,6 +745,7 @@
#define PCI_EXT_CAP_ID_L1SS 0x1E /* L1 PM Substates */
#define PCI_EXT_CAP_ID_PTM 0x1F /* Precision Time Measurement */
#define PCI_EXT_CAP_ID_DVSEC 0x23 /* Designated Vendor-Specific */
#define PCI_EXT_CAP_ID_VF_REBAR 0x24 /* VF Resizable BAR */
#define PCI_EXT_CAP_ID_DLF 0x25 /* Data Link Feature */
#define PCI_EXT_CAP_ID_PL_16GT 0x26 /* Physical Layer 16.0 GT/s */
#define PCI_EXT_CAP_ID_NPEM 0x29 /* Native PCIe Enclosure Management */
@@ -1141,6 +1142,14 @@
#define PCI_DVSEC_HEADER2 0x8 /* Designated Vendor-Specific Header2 */
#define PCI_DVSEC_HEADER2_ID(x) ((x) & 0xffff)
/* VF Resizable BARs, same layout as PCI_REBAR */
#define PCI_VF_REBAR_CAP PCI_REBAR_CAP
#define PCI_VF_REBAR_CAP_SIZES PCI_REBAR_CAP_SIZES
#define PCI_VF_REBAR_CTRL PCI_REBAR_CTRL
#define PCI_VF_REBAR_CTRL_BAR_IDX PCI_REBAR_CTRL_BAR_IDX
#define PCI_VF_REBAR_CTRL_NBAR_MASK PCI_REBAR_CTRL_NBAR_MASK
#define PCI_VF_REBAR_CTRL_BAR_SIZE PCI_REBAR_CTRL_BAR_SIZE
/* Data Link Feature */
#define PCI_DLF_CAP 0x04 /* Capabilities Register */
#define PCI_DLF_EXCHANGE_ENABLE 0x80000000 /* Data Link Feature Exchange Enable */


@@ -21,6 +21,7 @@
#define PCITEST_SET_IRQTYPE _IOW('P', 0x8, int)
#define PCITEST_GET_IRQTYPE _IO('P', 0x9)
#define PCITEST_BARS _IO('P', 0xa)
#define PCITEST_DOORBELL _IO('P', 0xb)
#define PCITEST_CLEAR_IRQ _IO('P', 0x10)
#define PCITEST_IRQ_TYPE_UNDEFINED -1
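
From user space the new test can be reached through the pci_endpoint_test character device; a minimal sketch, assuming the usual /dev/pci-endpoint-test.0 node and installed UAPI headers:

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/pcitest.h>

int main(void)
{
	int fd = open("/dev/pci-endpoint-test.0", O_RDWR);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (ioctl(fd, PCITEST_DOORBELL) < 0)	/* run the doorbell test */
		perror("PCITEST_DOORBELL");
	close(fd);
	return 0;
}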

Some files were not shown because too many files have changed in this diff.