Merge tag 'irq-msi-2025-05-25' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull MSI updates from Thomas Gleixner:
"Updates for the MSI subsystem (core code and PCI):

 - Switch the MSI descriptor locking to lock guards

 - Replace a broken and naive implementation of PCI/MSI-X control word
   updates in the PCI/TPH driver with a properly serialized variant in
   the PCI/MSI core code.

 - Remove the MSI descriptor abuse in the SCSI/UFS/QCOM driver by
   replacing the direct access to the MSI descriptors with the proper
   API function calls. People will never understand that APIs exist
   for a reason...

 - Provide core infrastructure for the upcoming PCI endpoint library
   extensions. Currently limited to ARM GICv3+, but in theory
   extensible to other architectures.

 - Provide an MSI domain::teardown() callback, which allows drivers to
   undo the effects of the prepare() callback.

 - Move the MSI domain::prepare() callback invocation to domain
   creation time to avoid redundant (and in the case of ARM/GIC-V3-ITS
   confusing) invocations on every allocation.

   In combination with the new teardown callback this removes some
   ugly hacks in the GIC-V3-ITS driver, which pretended to work around
   the shortcomings of the core code so far. With this update the
   code is correct by design and implementation.

 - Make the irqchip MSI library globally available, provide an MSI
   parent domain creation helper and convert a bunch of (PCI/)MSI
   drivers over to the modern MSI parent mechanism. This is the first
   step to get rid of at least one incarnation of the three PCI/MSI
   management schemes.

 - The usual small cleanups and improvements"
* tag 'irq-msi-2025-05-25' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (33 commits)
PCI/MSI: Use bool for MSI enable state tracking
PCI: tegra: Convert to MSI parent infrastructure
PCI: xgene: Convert to MSI parent infrastructure
PCI: apple: Convert to MSI parent infrastructure
irqchip/msi-lib: Honour the MSI_FLAG_NO_AFFINITY flag
irqchip/mvebu: Convert to msi_create_parent_irq_domain() helper
irqchip/gic: Convert to msi_create_parent_irq_domain() helper
genirq/msi: Add helper for creating MSI-parent irq domains
irqchip: Make irq-msi-lib.h globally available
irqchip/gic-v3-its: Use allocation size from the prepare call
genirq/msi: Engage the .msi_teardown() callback on domain removal
genirq/msi: Move prepare() call to per-device allocation
irqchip/gic-v3-its: Implement .msi_teardown() callback
genirq/msi: Add .msi_teardown() callback as the reverse of .msi_prepare()
irqchip/gic-v3-its: Add support for device tree msi-map and msi-mask
dt-bindings: PCI: pci-ep: Add support for iommu-map and msi-map
irqchip/gic-v3-its: Set IRQ_DOMAIN_FLAG_MSI_IMMUTABLE for ITS
irqdomain: Add IRQ_DOMAIN_FLAG_MSI_IMMUTABLE and irq_domain_is_msi_immutable()
platform-msi: Add msi_remove_device_irq_domain() in platform_device_msi_free_irqs_all()
genirq/msi: Rename msi_[un]lock_descs()
...
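The descriptor-locking conversion in this series replaces explicit msi_lock_descs()/msi_unlock_descs() pairs with scope-based lock guards. As a rough, self-contained user-space illustration of the pattern (not the kernel's implementation: the names `descs_guard`, `DESCS_GUARD` and `lock_depth` are invented for this sketch), a guard can be built on the compiler's `cleanup` variable attribute over a pthread mutex:

```c
#include <pthread.h>

static pthread_mutex_t descs_mutex = PTHREAD_MUTEX_INITIALIZER;
static int lock_depth; /* tracks lock/unlock balance for the demo */

/* Guard object: initializing it takes the lock, and the cleanup handler
 * releases it when the variable goes out of scope -- the same idea as
 * the kernel's guard()/scoped_guard() helpers. */
struct descs_guard { pthread_mutex_t *m; };

static void descs_guard_init(struct descs_guard *g, pthread_mutex_t *m)
{
    pthread_mutex_lock(m);
    lock_depth++;
    g->m = m;
}

static void descs_guard_exit(struct descs_guard *g)
{
    lock_depth--;
    pthread_mutex_unlock(g->m);
}

#define DESCS_GUARD(name) \
    struct descs_guard name __attribute__((cleanup(descs_guard_exit))); \
    descs_guard_init(&name, &descs_mutex)

/* With a guard in place, early returns no longer leak the lock, which
 * is what lets the converted call sites drop their goto-unlock paths. */
static int find_first_positive(const int *v, int n)
{
    DESCS_GUARD(g); /* lock held for the rest of this scope */

    for (int i = 0; i < n; i++) {
        if (v[i] > 0)
            return v[i]; /* cleanup still unlocks here */
    }
    return -1;
}
```

The payoff is visible in the hv_pci and ntb hunks below in this merge: error paths simply `return`, and the guard's cleanup handler drops the lock.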
commit 44ed0f35df
@@ -17,6 +17,24 @@ properties:
   $nodename:
     pattern: "^pcie-ep@"
 
+  iommu-map:
+    $ref: /schemas/types.yaml#/definitions/uint32-matrix
+    items:
+      items:
+        - description: Device ID (see msi-map) base
+          maximum: 0x7ffff
+        - description: phandle to IOMMU
+        - description: IOMMU specifier base (currently always 1 cell)
+        - description: Number of Device IDs
+          maximum: 0x80000
+
+  iommu-map-mask:
+    description:
+      A mask to be applied to each Device ID prior to being mapped to an
+      IOMMU specifier per the iommu-map property.
+    $ref: /schemas/types.yaml#/definitions/uint32
+    maximum: 0x7ffff
+
   max-functions:
     description: Maximum number of functions that can be configured
     $ref: /schemas/types.yaml#/definitions/uint8

@@ -35,6 +53,56 @@ properties:
     $ref: /schemas/types.yaml#/definitions/uint32
     enum: [ 1, 2, 3, 4 ]
 
+  msi-map:
+    description: |
+      Maps a Device ID to an MSI and associated MSI specifier data.
+
+      A PCI Endpoint (EP) can use MSI as a doorbell function. This is achieved by
+      mapping the MSI controller's address into PCI BAR<n>. The PCI Root Complex
+      can write to this BAR<n>, triggering the EP to generate IRQ. This notifies
+      the EP-side driver of an event, eliminating the need for the driver to
+      continuously poll for status changes.
+
+      However, the EP cannot rely on Requester ID (RID) because the RID is
+      determined by the PCI topology of the host system. Since the EP may be
+      connected to different PCI hosts, the RID can vary between systems and is
+      therefore not a reliable identifier.
+
+      Each EP can support up to 8 physical functions and up to 65,536 virtual
+      functions. To uniquely identify each child device, a device ID is defined
+      as
+        - Bits [2:0] for the function number (func)
+        - Bits [18:3] for the virtual function index (vfunc)
+
+      The resulting device ID is computed as:
+
+        (func & 0x7) | (vfunc << 3)
+
+      The property is an arbitrary number of tuples of
+      (device-id-base, msi, msi-base, length).
+
+      Any Device ID id in the interval [id-base, id-base + length) is
+      associated with the listed MSI, with the MSI specifier
+      (id - id-base + msi-base).
+    $ref: /schemas/types.yaml#/definitions/uint32-matrix
+    items:
+      items:
+        - description: The Device ID base matched by the entry
+          maximum: 0x7ffff
+        - description: phandle to msi-controller node
+        - description: (optional) The msi-specifier produced for the first
+            Device ID matched by the entry. Currently, msi-specifier is 0 or
+            1 cells.
+        - description: The length of consecutive Device IDs following the
+            Device ID base
+          maximum: 0x80000
+
+  msi-map-mask:
+    description: A mask to be applied to each Device ID prior to being
+      mapped to an msi-specifier per the msi-map property.
+    $ref: /schemas/types.yaml#/definitions/uint32
+    maximum: 0x7ffff
+
   num-lanes:
     description: maximum number of lanes
     $ref: /schemas/types.yaml#/definitions/uint32
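The device-ID packing and the msi-map interval lookup described in the binding above can be sketched as plain C. This is an illustrative model only (the names `pci_ep_device_id`, `msi_map_entry` and `msi_map_lookup` are invented for the sketch, not kernel APIs):

```c
#include <stdint.h>

/* Device ID as defined in the binding:
 * bits [2:0] carry the function number, bits [18:3] the vfunc index. */
static uint32_t pci_ep_device_id(uint32_t func, uint32_t vfunc)
{
    return (func & 0x7) | (vfunc << 3);
}

/* One msi-map tuple: (device-id-base, msi phandle, msi-base, length). */
struct msi_map_entry {
    uint32_t id_base;
    uint32_t msi_phandle;
    uint32_t msi_base;
    uint32_t length;
};

/* Resolve a device ID to an msi-specifier: any id in
 * [id_base, id_base + length) maps to (id - id_base + msi_base).
 * The msi-map-mask is applied before matching. Returns -1 when no
 * tuple matches. */
static int64_t msi_map_lookup(const struct msi_map_entry *map, int n,
                              uint32_t id, uint32_t mask)
{
    id &= mask;
    for (int i = 0; i < n; i++) {
        if (id >= map[i].id_base &&
            id < map[i].id_base + map[i].length)
            return (int64_t)(id - map[i].id_base + map[i].msi_base);
    }
    return -1;
}
```

For example, function 2 of virtual function 5 yields ID (2 & 0x7) | (5 << 3) = 42, and a tuple (0x0, &msi, 0x100, 0x40) maps that ID to specifier 0x12a.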
@@ -95,5 +95,6 @@ EXPORT_SYMBOL_GPL(platform_device_msi_init_and_alloc_irqs);
 void platform_device_msi_free_irqs_all(struct device *dev)
 {
     msi_domain_free_irqs_all(dev, MSI_DEFAULT_DOMAIN);
+    msi_remove_device_irq_domain(dev, MSI_DEFAULT_DOMAIN);
 }
 EXPORT_SYMBOL_GPL(platform_device_msi_free_irqs_all);
@@ -11,7 +11,7 @@
 #include <linux/of_address.h>
 #include <linux/of_platform.h>
 
-#include "irq-msi-lib.h"
+#include <linux/irqchip/irq-msi-lib.h>
 
 #define MIP_INT_RAISE		0x00
 #define MIP_INT_CLEAR		0x10
@@ -26,7 +26,7 @@
 #include <linux/irqchip/arm-gic.h>
 #include <linux/irqchip/arm-gic-common.h>
 
-#include "irq-msi-lib.h"
+#include <linux/irqchip/irq-msi-lib.h>
 
 /*
  * MSI_TYPER:
@@ -261,23 +261,23 @@ static struct msi_parent_ops gicv2m_msi_parent_ops = {
 
 static __init int gicv2m_allocate_domains(struct irq_domain *parent)
 {
-    struct irq_domain *inner_domain;
+    struct irq_domain_info info = {
+        .ops		= &gicv2m_domain_ops,
+        .parent		= parent,
+    };
     struct v2m_data *v2m;
 
     v2m = list_first_entry_or_null(&v2m_nodes, struct v2m_data, entry);
     if (!v2m)
         return 0;
 
-    inner_domain = irq_domain_create_hierarchy(parent, 0, 0, v2m->fwnode,
-                                               &gicv2m_domain_ops, v2m);
-    if (!inner_domain) {
+    info.host_data = v2m;
+    info.fwnode = v2m->fwnode;
+
+    if (!msi_create_parent_irq_domain(&info, &gicv2m_msi_parent_ops)) {
         pr_err("Failed to create GICv2m domain\n");
         return -ENOMEM;
     }
 
-    irq_domain_update_bus_token(inner_domain, DOMAIN_BUS_NEXUS);
-    inner_domain->flags |= IRQ_DOMAIN_FLAG_MSI_PARENT;
-    inner_domain->msi_parent_ops = &gicv2m_msi_parent_ops;
     return 0;
 }
@@ -8,7 +8,7 @@
 #include <linux/pci.h>
 
 #include "irq-gic-common.h"
-#include "irq-msi-lib.h"
+#include <linux/irqchip/irq-msi-lib.h>
 
 #define ITS_MSI_FLAGS_REQUIRED  (MSI_FLAG_USE_DEF_DOM_OPS |	\
                                  MSI_FLAG_USE_DEF_CHIP_OPS |	\
@@ -67,17 +67,6 @@ static int its_pci_msi_prepare(struct irq_domain *domain, struct device *dev,
     /* ITS specific DeviceID, as the core ITS ignores dev. */
     info->scratchpad[0].ul = pci_msi_domain_get_msi_rid(domain->parent, pdev);
 
-    /*
-     * @domain->msi_domain_info->hwsize contains the size of the
-     * MSI[-X] domain, but vector allocation happens one by one. This
-     * needs some thought when MSI comes into play as the size of MSI
-     * might be unknown at domain creation time and therefore set to
-     * MSI_MAX_INDEX.
-     */
-    msi_info = msi_get_domain_info(domain);
-    if (msi_info->hwsize > nvec)
-        nvec = msi_info->hwsize;
-
     /*
      * Always allocate a power of 2, and special case device 0 for
      * broken systems where the DevID is not wired (and all devices
@@ -118,6 +107,14 @@ static int of_pmsi_get_dev_id(struct irq_domain *domain, struct device *dev,
         index++;
     } while (!ret);
 
+    if (ret) {
+        struct device_node *np = NULL;
+
+        ret = of_map_id(dev->of_node, dev->id, "msi-map", "msi-map-mask", &np, dev_id);
+        if (np)
+            of_node_put(np);
+    }
+
     return ret;
 }
@@ -143,14 +140,6 @@ static int its_pmsi_prepare(struct irq_domain *domain, struct device *dev,
     /* ITS specific DeviceID, as the core ITS ignores dev. */
     info->scratchpad[0].ul = dev_id;
 
-    /*
-     * @domain->msi_domain_info->hwsize contains the size of the device
-     * domain, but vector allocation happens one by one.
-     */
-    msi_info = msi_get_domain_info(domain);
-    if (msi_info->hwsize > nvec)
-        nvec = msi_info->hwsize;
-
     /* Allocate at least 32 MSIs, and always as a power of 2 */
     nvec = max_t(int, 32, roundup_pow_of_two(nvec));
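The sizing rule kept in the hunk above ("allocate at least 32 MSIs, and always as a power of 2", i.e. `max_t(int, 32, roundup_pow_of_two(nvec))`) can be sketched in standalone C. The helpers `roundup_pow2` and `its_nvec` are reimplemented here for the demo and are not kernel functions:

```c
/* Round v up to the next power of two (v >= 1), mimicking the kernel's
 * roundup_pow_of_two() for this standalone demo. */
static unsigned int roundup_pow2(unsigned int v)
{
    unsigned int p = 1;

    while (p < v)
        p <<= 1;
    return p;
}

/* The ITS sizing rule: at least 32 vectors, always a power of two, so
 * the ITT (interrupt translation table) sizing never has to grow for
 * small incremental allocations. */
static unsigned int its_nvec(unsigned int requested)
{
    unsigned int n = roundup_pow2(requested);

    return n > 32 ? n : 32;
}
```

So a request for 1 vector still reserves 32, and a request for 33 reserves 64.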
@@ -159,6 +148,14 @@ static int its_pmsi_prepare(struct irq_domain *domain, struct device *dev,
                             dev, nvec, info);
 }
 
+static void its_msi_teardown(struct irq_domain *domain, msi_alloc_info_t *info)
+{
+    struct msi_domain_info *msi_info;
+
+    msi_info = msi_get_domain_info(domain->parent);
+    msi_info->ops->msi_teardown(domain->parent, info);
+}
+
 static bool its_init_dev_msi_info(struct device *dev, struct irq_domain *domain,
                                   struct irq_domain *real_parent, struct msi_domain_info *info)
 {
@@ -182,6 +179,7 @@ static bool its_init_dev_msi_info(struct device *dev, struct irq_domain *domain,
          * %MSI_MAX_INDEX.
          */
         info->ops->msi_prepare = its_pci_msi_prepare;
+        info->ops->msi_teardown = its_msi_teardown;
         break;
     case DOMAIN_BUS_DEVICE_MSI:
     case DOMAIN_BUS_WIRED_TO_MSI:
@@ -190,6 +188,7 @@ static bool its_init_dev_msi_info(struct device *dev, struct irq_domain *domain,
          * size is also known at domain creation time.
          */
         info->ops->msi_prepare = its_pmsi_prepare;
+        info->ops->msi_teardown = its_msi_teardown;
         break;
     default:
         /* Confused. How did the lib return true? */
@@ -41,7 +41,7 @@
 #include <asm/exception.h>
 
 #include "irq-gic-common.h"
-#include "irq-msi-lib.h"
+#include <linux/irqchip/irq-msi-lib.h>
 
 #define ITS_FLAGS_CMDQ_NEEDS_FLUSHING		(1ULL << 0)
 #define ITS_FLAGS_WORKAROUND_CAVIUM_22375	(1ULL << 1)
@@ -3624,8 +3624,33 @@ out:
     return err;
 }
 
+static void its_msi_teardown(struct irq_domain *domain, msi_alloc_info_t *info)
+{
+    struct its_device *its_dev = info->scratchpad[0].ptr;
+
+    guard(mutex)(&its_dev->its->dev_alloc_lock);
+
+    /* If the device is shared, keep everything around */
+    if (its_dev->shared)
+        return;
+
+    /* LPIs should have been already unmapped at this stage */
+    if (WARN_ON_ONCE(!bitmap_empty(its_dev->event_map.lpi_map,
+                                   its_dev->event_map.nr_lpis)))
+        return;
+
+    its_lpi_free(its_dev->event_map.lpi_map,
+                 its_dev->event_map.lpi_base,
+                 its_dev->event_map.nr_lpis);
+
+    /* Unmap device/itt, and get rid of the tracking */
+    its_send_mapd(its_dev, 0);
+    its_free_device(its_dev);
+}
+
 static struct msi_domain_ops its_msi_domain_ops = {
     .msi_prepare	= its_msi_prepare,
+    .msi_teardown	= its_msi_teardown,
 };
 
 static int its_irq_gic_domain_alloc(struct irq_domain *domain,
@@ -3726,7 +3751,6 @@ static void its_irq_domain_free(struct irq_domain *domain, unsigned int virq,
 {
     struct irq_data *d = irq_domain_get_irq_data(domain, virq);
     struct its_device *its_dev = irq_data_get_irq_chip_data(d);
-    struct its_node *its = its_dev->its;
     int i;
 
     bitmap_release_region(its_dev->event_map.lpi_map,
@@ -3740,26 +3764,6 @@ static void its_irq_domain_free(struct irq_domain *domain, unsigned int virq,
         irq_domain_reset_irq_data(data);
     }
 
-    mutex_lock(&its->dev_alloc_lock);
-
-    /*
-     * If all interrupts have been freed, start mopping the
-     * floor. This is conditioned on the device not being shared.
-     */
-    if (!its_dev->shared &&
-        bitmap_empty(its_dev->event_map.lpi_map,
-                     its_dev->event_map.nr_lpis)) {
-        its_lpi_free(its_dev->event_map.lpi_map,
-                     its_dev->event_map.lpi_base,
-                     its_dev->event_map.nr_lpis);
-
-        /* Unmap device/itt */
-        its_send_mapd(its_dev, 0);
-        its_free_device(its_dev);
-    }
-
-    mutex_unlock(&its->dev_alloc_lock);
-
     irq_domain_free_irqs_parent(domain, virq, nr_irqs);
 }
@@ -5122,7 +5126,12 @@ out_unmap:
 
 static int its_init_domain(struct its_node *its)
 {
-    struct irq_domain *inner_domain;
+    struct irq_domain_info dom_info = {
+        .fwnode		= its->fwnode_handle,
+        .ops		= &its_domain_ops,
+        .domain_flags	= its->msi_domain_flags,
+        .parent		= its_parent,
+    };
     struct msi_domain_info *info;
 
     info = kzalloc(sizeof(*info), GFP_KERNEL);
@@ -5131,21 +5140,12 @@ static int its_init_domain(struct its_node *its)
 
     info->ops = &its_msi_domain_ops;
     info->data = its;
+    dom_info.host_data = info;
 
-    inner_domain = irq_domain_create_hierarchy(its_parent,
-                                               its->msi_domain_flags, 0,
-                                               its->fwnode_handle, &its_domain_ops,
-                                               info);
-    if (!inner_domain) {
+    if (!msi_create_parent_irq_domain(&dom_info, &gic_v3_its_msi_parent_ops)) {
         kfree(info);
         return -ENOMEM;
     }
 
-    irq_domain_update_bus_token(inner_domain, DOMAIN_BUS_NEXUS);
-
-    inner_domain->msi_parent_ops = &gic_v3_its_msi_parent_ops;
-    inner_domain->flags |= IRQ_DOMAIN_FLAG_MSI_PARENT;
-
     return 0;
 }
@@ -5522,7 +5522,7 @@ static struct its_node __init *its_node_init(struct resource *res,
     its->base = its_base;
     its->phys_base = res->start;
     its->get_msi_base = its_irq_get_msi_base;
-    its->msi_domain_flags = IRQ_DOMAIN_FLAG_ISOLATED_MSI;
+    its->msi_domain_flags = IRQ_DOMAIN_FLAG_ISOLATED_MSI | IRQ_DOMAIN_FLAG_MSI_IMMUTABLE;
 
     its->numa_node = numa_node;
     its->fwnode_handle = handle;
@@ -18,7 +18,7 @@
 
 #include <linux/irqchip/arm-gic-v3.h>
 
-#include "irq-msi-lib.h"
+#include <linux/irqchip/irq-msi-lib.h>
 
 struct mbi_range {
     u32 spi_start;
@@ -206,17 +206,13 @@ static const struct msi_parent_ops gic_v3_mbi_msi_parent_ops = {
 
 static int mbi_allocate_domain(struct irq_domain *parent)
 {
-    struct irq_domain *nexus_domain;
+    struct irq_domain_info info = {
+        .fwnode		= parent->fwnode,
+        .ops		= &mbi_domain_ops,
+        .parent		= parent,
+    };
 
-    nexus_domain = irq_domain_create_hierarchy(parent, 0, 0, parent->fwnode,
-                                               &mbi_domain_ops, NULL);
-    if (!nexus_domain)
-        return -ENOMEM;
-
-    irq_domain_update_bus_token(nexus_domain, DOMAIN_BUS_NEXUS);
-    nexus_domain->flags |= IRQ_DOMAIN_FLAG_MSI_PARENT;
-    nexus_domain->msi_parent_ops = &gic_v3_mbi_msi_parent_ops;
-    return 0;
+    return msi_create_parent_irq_domain(&info, &gic_v3_mbi_msi_parent_ops) ? 0 : -ENOMEM;
 }
 
 int __init mbi_init(struct fwnode_handle *fwnode, struct irq_domain *parent)
@@ -24,7 +24,7 @@
 #include <linux/pm_domain.h>
 #include <linux/spinlock.h>
 
-#include "irq-msi-lib.h"
+#include <linux/irqchip/irq-msi-lib.h>
 
 #define IMX_MU_CHANS		4
@@ -18,7 +18,7 @@
 #include <asm/loongarch.h>
 #include <asm/setup.h>
 
-#include "irq-msi-lib.h"
+#include <linux/irqchip/irq-msi-lib.h>
 #include "irq-loongson.h"
 
 #define VECTORS_PER_REG		64
@@ -15,7 +15,7 @@
 #include <linux/pci.h>
 #include <linux/slab.h>
 
-#include "irq-msi-lib.h"
+#include <linux/irqchip/irq-msi-lib.h>
 #include "irq-loongson.h"
 
 static int nr_pics;
@@ -4,7 +4,7 @@
 
 #include <linux/export.h>
 
-#include "irq-msi-lib.h"
+#include <linux/irqchip/irq-msi-lib.h>
 
 /**
  * msi_lib_init_dev_msi_info - Domain info setup for MSI domains
@@ -105,7 +105,12 @@ bool msi_lib_init_dev_msi_info(struct device *dev, struct irq_domain *domain,
      * MSI message into the hardware which is the whole purpose of the
      * device MSI domain aside of mask/unmask which is provided e.g. by
      * PCI/MSI device domains.
+     *
+     * The exception to the rule is when the underlying domain
+     * tells you that affinity is not a thing -- for example when
+     * everything is muxed behind a single interrupt.
      */
-    chip->irq_set_affinity = msi_domain_set_affinity;
+    if (!chip->irq_set_affinity && !(info->flags & MSI_FLAG_NO_AFFINITY))
+        chip->irq_set_affinity = msi_domain_set_affinity;
     return true;
 }
@@ -17,7 +17,7 @@
 #include <linux/of_platform.h>
 #include <linux/platform_device.h>
 
-#include "irq-msi-lib.h"
+#include <linux/irqchip/irq-msi-lib.h>
 
 #include <dt-bindings/interrupt-controller/arm-gic.h>
@@ -170,9 +170,12 @@ static const struct msi_parent_ops gicp_msi_parent_ops = {
 
 static int mvebu_gicp_probe(struct platform_device *pdev)
 {
-    struct irq_domain *inner_domain, *parent_domain;
     struct device_node *node = pdev->dev.of_node;
     struct device_node *irq_parent_dn;
+    struct irq_domain_info info = {
+        .fwnode	= of_fwnode_handle(node),
+        .ops	= &gicp_domain_ops,
+    };
     struct mvebu_gicp *gicp;
     int ret, i;
@@ -217,30 +220,23 @@ static int mvebu_gicp_probe(struct platform_device *pdev)
     if (!gicp->spi_bitmap)
         return -ENOMEM;
 
+    info.size = gicp->spi_cnt;
+    info.host_data = gicp;
+
     irq_parent_dn = of_irq_find_parent(node);
     if (!irq_parent_dn) {
         dev_err(&pdev->dev, "failed to find parent IRQ node\n");
         return -ENODEV;
     }
 
-    parent_domain = irq_find_host(irq_parent_dn);
+    info.parent = irq_find_host(irq_parent_dn);
     of_node_put(irq_parent_dn);
-    if (!parent_domain) {
+    if (!info.parent) {
         dev_err(&pdev->dev, "failed to find parent IRQ domain\n");
         return -ENODEV;
     }
 
-    inner_domain = irq_domain_create_hierarchy(parent_domain, 0,
-                                               gicp->spi_cnt,
-                                               of_node_to_fwnode(node),
-                                               &gicp_domain_ops, gicp);
-    if (!inner_domain)
-        return -ENOMEM;
-
-    irq_domain_update_bus_token(inner_domain, DOMAIN_BUS_GENERIC_MSI);
-    inner_domain->flags |= IRQ_DOMAIN_FLAG_MSI_PARENT;
-    inner_domain->msi_parent_ops = &gicp_msi_parent_ops;
-    return 0;
+    return msi_create_parent_irq_domain(&info, &gicp_msi_parent_ops) ? 0 : -ENOMEM;
 }
 
 static const struct of_device_id mvebu_gicp_of_match[] = {
@@ -20,7 +20,7 @@
 #include <linux/of_platform.h>
 #include <linux/platform_device.h>
 
-#include "irq-msi-lib.h"
+#include <linux/irqchip/irq-msi-lib.h>
 
 #include <dt-bindings/interrupt-controller/mvebu-icu.h>
@@ -18,7 +18,7 @@
 #include <linux/of_address.h>
 #include <linux/slab.h>
 
-#include "irq-msi-lib.h"
+#include <linux/irqchip/irq-msi-lib.h>
 
 #include <dt-bindings/interrupt-controller/arm-gic.h>
@@ -167,7 +167,12 @@ static const struct msi_parent_ops odmi_msi_parent_ops = {
 static int __init mvebu_odmi_init(struct device_node *node,
                                   struct device_node *parent)
 {
-    struct irq_domain *parent_domain, *inner_domain;
+    struct irq_domain_info info = {
+        .fwnode	= of_fwnode_handle(node),
+        .ops	= &odmi_domain_ops,
+        .size	= odmis_count * NODMIS_PER_FRAME,
+        .parent	= irq_find_host(parent),
+    };
     int ret, i;
 
     if (of_property_read_u32(node, "marvell,odmi-frames", &odmis_count))
@@ -203,23 +208,11 @@ static int __init mvebu_odmi_init(struct device_node *node,
         }
     }
 
-    parent_domain = irq_find_host(parent);
-
-    inner_domain = irq_domain_create_hierarchy(parent_domain, 0,
-                                               odmis_count * NODMIS_PER_FRAME,
-                                               of_node_to_fwnode(node),
-                                               &odmi_domain_ops, NULL);
-    if (!inner_domain) {
-        ret = -ENOMEM;
-        goto err_unmap;
-    }
-
-    irq_domain_update_bus_token(inner_domain, DOMAIN_BUS_GENERIC_MSI);
-    inner_domain->flags |= IRQ_DOMAIN_FLAG_MSI_PARENT;
-    inner_domain->msi_parent_ops = &odmi_msi_parent_ops;
-
+    if (msi_create_parent_irq_domain(&info, &odmi_msi_parent_ops))
+        return 0;
+
+    ret = -ENOMEM;
+
 err_unmap:
     for (i = 0; i < odmis_count; i++) {
         struct odmi_data *odmi = &odmis[i];
@@ -14,7 +14,7 @@
 #include <linux/of_irq.h>
 #include <linux/of_platform.h>
 
-#include "irq-msi-lib.h"
+#include <linux/irqchip/irq-msi-lib.h>
 
 /* Cause register */
 #define GICP_SECR(idx)		(0x0 + ((idx) * 0x4))
@@ -366,6 +366,10 @@ static const struct msi_parent_ops sei_msi_parent_ops = {
 static int mvebu_sei_probe(struct platform_device *pdev)
 {
     struct device_node *node = pdev->dev.of_node;
+    struct irq_domain_info info = {
+        .fwnode	= of_fwnode_handle(node),
+        .ops	= &mvebu_sei_cp_domain_ops,
+    };
     struct mvebu_sei *sei;
     u32 parent_irq;
     int ret;
@@ -402,7 +406,7 @@ static int mvebu_sei_probe(struct platform_device *pdev)
     }
 
     /* Create the root SEI domain */
-    sei->sei_domain = irq_domain_create_linear(of_node_to_fwnode(node),
+    sei->sei_domain = irq_domain_create_linear(of_fwnode_handle(node),
                                                (sei->caps->ap_range.size +
                                                 sei->caps->cp_range.size),
                                                &mvebu_sei_domain_ops,
@@ -418,7 +422,7 @@ static int mvebu_sei_probe(struct platform_device *pdev)
     /* Create the 'wired' domain */
     sei->ap_domain = irq_domain_create_hierarchy(sei->sei_domain, 0,
                                                  sei->caps->ap_range.size,
-                                                 of_node_to_fwnode(node),
+                                                 of_fwnode_handle(node),
                                                  &mvebu_sei_ap_domain_ops,
                                                  sei);
     if (!sei->ap_domain) {
@@ -430,21 +434,17 @@ static int mvebu_sei_probe(struct platform_device *pdev)
     irq_domain_update_bus_token(sei->ap_domain, DOMAIN_BUS_WIRED);
 
     /* Create the 'MSI' domain */
-    sei->cp_domain = irq_domain_create_hierarchy(sei->sei_domain, 0,
-                                                 sei->caps->cp_range.size,
-                                                 of_node_to_fwnode(node),
-                                                 &mvebu_sei_cp_domain_ops,
-                                                 sei);
+    info.size = sei->caps->cp_range.size;
+    info.host_data = sei;
+    info.parent = sei->sei_domain;
+
+    sei->cp_domain = msi_create_parent_irq_domain(&info, &sei_msi_parent_ops);
     if (!sei->cp_domain) {
         pr_err("Failed to create CPs IRQ domain\n");
         ret = -ENOMEM;
         goto remove_ap_domain;
     }
 
-    irq_domain_update_bus_token(sei->cp_domain, DOMAIN_BUS_GENERIC_MSI);
-    sei->cp_domain->flags |= IRQ_DOMAIN_FLAG_MSI_PARENT;
-    sei->cp_domain->msi_parent_ops = &sei_msi_parent_ops;
-
     mvebu_sei_reset(sei);
 
     irq_set_chained_handler_and_data(parent_irq, mvebu_sei_handle_cascade_irq, sei);
@@ -20,7 +20,7 @@
 #include <linux/spinlock.h>
 #include <linux/smp.h>
 
-#include "irq-msi-lib.h"
+#include <linux/irqchip/irq-msi-lib.h>
 #include "irq-riscv-imsic-state.h"
 
 static bool imsic_cpu_page_phys(unsigned int cpu, unsigned int guest_index,
@@ -17,7 +17,7 @@
 #include <linux/property.h>
 #include <linux/slab.h>
 
-#include "irq-msi-lib.h"
+#include <linux/irqchip/irq-msi-lib.h>
 
 struct sg204x_msi_chip_info {
     const struct irq_chip	*irqchip;
@@ -106,10 +106,10 @@ int ntb_msi_setup_mws(struct ntb_dev *ntb)
     if (!ntb->msi)
         return -EINVAL;
 
-    msi_lock_descs(&ntb->pdev->dev);
-    desc = msi_first_desc(&ntb->pdev->dev, MSI_DESC_ASSOCIATED);
-    addr = desc->msg.address_lo + ((uint64_t)desc->msg.address_hi << 32);
-    msi_unlock_descs(&ntb->pdev->dev);
+    scoped_guard (msi_descs_lock, &ntb->pdev->dev) {
+        desc = msi_first_desc(&ntb->pdev->dev, MSI_DESC_ASSOCIATED);
+        addr = desc->msg.address_lo + ((uint64_t)desc->msg.address_hi << 32);
+    }
 
     for (peer = 0; peer < ntb_peer_port_count(ntb); peer++) {
         peer_widx = ntb_peer_highest_mw_idx(ntb, peer);
@@ -289,7 +289,7 @@ int ntbm_msi_request_threaded_irq(struct ntb_dev *ntb, irq_handler_t handler,
     if (!ntb->msi)
         return -EINVAL;
 
-    msi_lock_descs(dev);
+    guard(msi_descs_lock)(dev);
     msi_for_each_desc(entry, dev, MSI_DESC_ASSOCIATED) {
         if (irq_has_action(entry->irq))
             continue;
@@ -307,17 +307,11 @@ int ntbm_msi_request_threaded_irq(struct ntb_dev *ntb, irq_handler_t handler,
         ret = ntbm_msi_setup_callback(ntb, entry, msi_desc);
         if (ret) {
             devm_free_irq(&ntb->dev, entry->irq, dev_id);
-            goto unlock;
+            return ret;
         }
 
-        ret = entry->irq;
-        goto unlock;
+        return entry->irq;
     }
-    ret = -ENODEV;
-
-unlock:
-    msi_unlock_descs(dev);
-    return ret;
-}
+    return -ENODEV;
+}
 EXPORT_SYMBOL(ntbm_msi_request_threaded_irq);
@@ -40,6 +40,7 @@ config PCIE_APPLE
     depends on OF
     depends on PCI_MSI
     select PCI_HOST_COMMON
+    select IRQ_MSI_LIB
     help
       Say Y here if you want to enable PCIe controller support on Apple
       system-on-chips, like the Apple M1. This is required for the USB
@@ -227,6 +228,7 @@ config PCI_TEGRA
     bool "NVIDIA Tegra PCIe controller"
     depends on ARCH_TEGRA || COMPILE_TEST
     depends on PCI_MSI
+    select IRQ_MSI_LIB
     help
       Say Y here if you want support for the PCIe host controller found
       on NVIDIA Tegra SoCs.
@@ -303,6 +305,7 @@ config PCI_XGENE_MSI
     bool "X-Gene v1 PCIe MSI feature"
     depends on PCI_XGENE
     depends on PCI_MSI
+    select IRQ_MSI_LIB
     default y
     help
       Say Y here if you want PCIe MSI support for the APM X-Gene v1 SoC.
@@ -3975,24 +3975,18 @@ static int hv_pci_restore_msi_msg(struct pci_dev *pdev, void *arg)
 {
     struct irq_data *irq_data;
     struct msi_desc *entry;
-    int ret = 0;
 
     if (!pdev->msi_enabled && !pdev->msix_enabled)
         return 0;
 
-    msi_lock_descs(&pdev->dev);
+    guard(msi_descs_lock)(&pdev->dev);
     msi_for_each_desc(entry, &pdev->dev, MSI_DESC_ASSOCIATED) {
         irq_data = irq_get_irq_data(entry->irq);
-        if (WARN_ON_ONCE(!irq_data)) {
-            ret = -EINVAL;
-            break;
-        }
-
+        if (WARN_ON_ONCE(!irq_data))
+            return -EINVAL;
         hv_compose_msi_msg(irq_data, &entry->msg);
     }
-    msi_unlock_descs(&pdev->dev);
-
-    return ret;
+    return 0;
 }
 
 /*
@@ -22,6 +22,7 @@
 #include <linux/iopoll.h>
 #include <linux/irq.h>
 #include <linux/irqchip/chained_irq.h>
+#include <linux/irqchip/irq-msi-lib.h>
 #include <linux/irqdomain.h>
 #include <linux/kernel.h>
 #include <linux/init.h>
@@ -1547,7 +1548,7 @@ static void tegra_pcie_msi_irq(struct irq_desc *desc)
             unsigned int index = i * 32 + offset;
             int ret;
 
-            ret = generic_handle_domain_irq(msi->domain->parent, index);
+            ret = generic_handle_domain_irq(msi->domain, index);
             if (ret) {
                 /*
                  * that's weird who triggered this?
@@ -1565,30 +1566,6 @@ static void tegra_pcie_msi_irq(struct irq_desc *desc)
     chained_irq_exit(chip, desc);
 }
 
-static void tegra_msi_top_irq_ack(struct irq_data *d)
-{
-    irq_chip_ack_parent(d);
-}
-
-static void tegra_msi_top_irq_mask(struct irq_data *d)
-{
-    pci_msi_mask_irq(d);
-    irq_chip_mask_parent(d);
-}
-
-static void tegra_msi_top_irq_unmask(struct irq_data *d)
-{
-    pci_msi_unmask_irq(d);
-    irq_chip_unmask_parent(d);
-}
-
-static struct irq_chip tegra_msi_top_chip = {
-    .name		= "Tegra PCIe MSI",
-    .irq_ack	= tegra_msi_top_irq_ack,
-    .irq_mask	= tegra_msi_top_irq_mask,
-    .irq_unmask	= tegra_msi_top_irq_unmask,
-};
-
 static void tegra_msi_irq_ack(struct irq_data *d)
 {
     struct tegra_msi *msi = irq_data_get_irq_chip_data(d);
@@ -1690,42 +1667,40 @@ static const struct irq_domain_ops tegra_msi_domain_ops = {
 	.free = tegra_msi_domain_free,
 };
 
-static struct msi_domain_info tegra_msi_info = {
-	.flags = MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
-		 MSI_FLAG_NO_AFFINITY | MSI_FLAG_PCI_MSIX,
-	.chip = &tegra_msi_top_chip,
+static const struct msi_parent_ops tegra_msi_parent_ops = {
+	.supported_flags = (MSI_GENERIC_FLAGS_MASK |
+			    MSI_FLAG_PCI_MSIX),
+	.required_flags = (MSI_FLAG_USE_DEF_DOM_OPS |
+			   MSI_FLAG_USE_DEF_CHIP_OPS |
+			   MSI_FLAG_PCI_MSI_MASK_PARENT |
+			   MSI_FLAG_NO_AFFINITY),
+	.chip_flags = MSI_CHIP_FLAG_SET_ACK,
+	.bus_select_token = DOMAIN_BUS_PCI_MSI,
+	.init_dev_msi_info = msi_lib_init_dev_msi_info,
 };
 
 static int tegra_allocate_domains(struct tegra_msi *msi)
 {
 	struct tegra_pcie *pcie = msi_to_pcie(msi);
 	struct fwnode_handle *fwnode = dev_fwnode(pcie->dev);
-	struct irq_domain *parent;
+	struct irq_domain_info info = {
+		.fwnode = fwnode,
+		.ops = &tegra_msi_domain_ops,
+		.size = INT_PCI_MSI_NR,
+		.host_data = msi,
+	};
 
-	parent = irq_domain_create_linear(fwnode, INT_PCI_MSI_NR,
-					  &tegra_msi_domain_ops, msi);
-	if (!parent) {
-		dev_err(pcie->dev, "failed to create IRQ domain\n");
-		return -ENOMEM;
-	}
-	irq_domain_update_bus_token(parent, DOMAIN_BUS_NEXUS);
-
-	msi->domain = pci_msi_create_irq_domain(fwnode, &tegra_msi_info, parent);
+	msi->domain = msi_create_parent_irq_domain(&info, &tegra_msi_parent_ops);
 	if (!msi->domain) {
 		dev_err(pcie->dev, "failed to create MSI domain\n");
-		irq_domain_remove(parent);
 		return -ENOMEM;
 	}
 
 	return 0;
 }
 
 static void tegra_free_domains(struct tegra_msi *msi)
 {
-	struct irq_domain *parent = msi->domain->parent;
-
 	irq_domain_remove(msi->domain);
-	irq_domain_remove(parent);
 }
 
 static int tegra_pcie_msi_setup(struct tegra_pcie *pcie)

@@ -12,6 +12,7 @@
 #include <linux/module.h>
 #include <linux/msi.h>
 #include <linux/irqchip/chained_irq.h>
+#include <linux/irqchip/irq-msi-lib.h>
 #include <linux/pci.h>
 #include <linux/platform_device.h>
 #include <linux/of_pci.h>

@@ -32,7 +33,6 @@ struct xgene_msi_group {
 struct xgene_msi {
 	struct device_node *node;
 	struct irq_domain *inner_domain;
-	struct irq_domain *msi_domain;
 	u64 msi_addr;
 	void __iomem *msi_regs;
 	unsigned long *bitmap;

@@ -44,20 +44,6 @@ struct xgene_msi {
 /* Global data */
 static struct xgene_msi xgene_msi_ctrl;
 
-static struct irq_chip xgene_msi_top_irq_chip = {
-	.name = "X-Gene1 MSI",
-	.irq_enable = pci_msi_unmask_irq,
-	.irq_disable = pci_msi_mask_irq,
-	.irq_mask = pci_msi_mask_irq,
-	.irq_unmask = pci_msi_unmask_irq,
-};
-
-static struct msi_domain_info xgene_msi_domain_info = {
-	.flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
-		  MSI_FLAG_PCI_MSIX),
-	.chip = &xgene_msi_top_irq_chip,
-};
-
 /*
  * X-Gene v1 has 16 groups of MSI termination registers MSInIRx, where
  * n is group number (0..F), x is index of registers in each group (0..7)

@@ -235,34 +221,35 @@ static void xgene_irq_domain_free(struct irq_domain *domain,
 	irq_domain_free_irqs_parent(domain, virq, nr_irqs);
 }
 
-static const struct irq_domain_ops msi_domain_ops = {
+static const struct irq_domain_ops xgene_msi_domain_ops = {
 	.alloc = xgene_irq_domain_alloc,
 	.free = xgene_irq_domain_free,
 };
 
+static const struct msi_parent_ops xgene_msi_parent_ops = {
+	.supported_flags = (MSI_GENERIC_FLAGS_MASK |
+			    MSI_FLAG_PCI_MSIX),
+	.required_flags = (MSI_FLAG_USE_DEF_DOM_OPS |
+			   MSI_FLAG_USE_DEF_CHIP_OPS),
+	.bus_select_token = DOMAIN_BUS_PCI_MSI,
+	.init_dev_msi_info = msi_lib_init_dev_msi_info,
+};
+
 static int xgene_allocate_domains(struct xgene_msi *msi)
 {
-	msi->inner_domain = irq_domain_add_linear(NULL, NR_MSI_VEC,
-						  &msi_domain_ops, msi);
-	if (!msi->inner_domain)
-		return -ENOMEM;
+	struct irq_domain_info info = {
+		.fwnode = of_fwnode_handle(msi->node),
+		.ops = &xgene_msi_domain_ops,
+		.size = NR_MSI_VEC,
+		.host_data = msi,
+	};
 
-	msi->msi_domain = pci_msi_create_irq_domain(of_fwnode_handle(msi->node),
-						    &xgene_msi_domain_info,
-						    msi->inner_domain);
-
-	if (!msi->msi_domain) {
-		irq_domain_remove(msi->inner_domain);
-		return -ENOMEM;
-	}
-
-	return 0;
+	msi->inner_domain = msi_create_parent_irq_domain(&info, &xgene_msi_parent_ops);
+	return msi->inner_domain ? 0 : -ENOMEM;
 }
 
 static void xgene_free_domains(struct xgene_msi *msi)
 {
-	if (msi->msi_domain)
-		irq_domain_remove(msi->msi_domain);
 	if (msi->inner_domain)
 		irq_domain_remove(msi->inner_domain);
 }

@@ -22,6 +22,7 @@
 #include <linux/kernel.h>
 #include <linux/iopoll.h>
 #include <linux/irqchip/chained_irq.h>
+#include <linux/irqchip/irq-msi-lib.h>
 #include <linux/irqdomain.h>
 #include <linux/list.h>
 #include <linux/module.h>

@@ -133,7 +134,6 @@ struct apple_pcie {
 	struct mutex lock;
 	struct device *dev;
 	void __iomem *base;
-	struct irq_domain *domain;
 	unsigned long *bitmap;
 	struct list_head ports;
 	struct completion event;

@@ -162,27 +162,6 @@ static void rmw_clear(u32 clr, void __iomem *addr)
 	writel_relaxed(readl_relaxed(addr) & ~clr, addr);
 }
 
-static void apple_msi_top_irq_mask(struct irq_data *d)
-{
-	pci_msi_mask_irq(d);
-	irq_chip_mask_parent(d);
-}
-
-static void apple_msi_top_irq_unmask(struct irq_data *d)
-{
-	pci_msi_unmask_irq(d);
-	irq_chip_unmask_parent(d);
-}
-
-static struct irq_chip apple_msi_top_chip = {
-	.name = "PCIe MSI",
-	.irq_mask = apple_msi_top_irq_mask,
-	.irq_unmask = apple_msi_top_irq_unmask,
-	.irq_eoi = irq_chip_eoi_parent,
-	.irq_set_affinity = irq_chip_set_affinity_parent,
-	.irq_set_type = irq_chip_set_type_parent,
-};
-
 static void apple_msi_compose_msg(struct irq_data *data, struct msi_msg *msg)
 {
 	msg->address_hi = upper_32_bits(DOORBELL_ADDR);

@@ -226,8 +205,7 @@ static int apple_msi_domain_alloc(struct irq_domain *domain, unsigned int virq,
 
 	for (i = 0; i < nr_irqs; i++) {
 		irq_domain_set_hwirq_and_chip(domain, virq + i, hwirq + i,
-					      &apple_msi_bottom_chip,
-					      domain->host_data);
+					      &apple_msi_bottom_chip, pcie);
 	}
 
 	return 0;

@@ -251,12 +229,6 @@ static const struct irq_domain_ops apple_msi_domain_ops = {
 	.free = apple_msi_domain_free,
 };
 
-static struct msi_domain_info apple_msi_info = {
-	.flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
-		  MSI_FLAG_MULTI_PCI_MSI | MSI_FLAG_PCI_MSIX),
-	.chip = &apple_msi_top_chip,
-};
-
 static void apple_port_irq_mask(struct irq_data *data)
 {
 	struct apple_pcie_port *port = irq_data_get_irq_chip_data(data);

@@ -595,11 +567,28 @@ static int apple_pcie_setup_port(struct apple_pcie *pcie,
 	return 0;
 }
 
+static const struct msi_parent_ops apple_msi_parent_ops = {
+	.supported_flags = (MSI_GENERIC_FLAGS_MASK |
+			    MSI_FLAG_PCI_MSIX |
+			    MSI_FLAG_MULTI_PCI_MSI),
+	.required_flags = (MSI_FLAG_USE_DEF_DOM_OPS |
+			   MSI_FLAG_USE_DEF_CHIP_OPS |
+			   MSI_FLAG_PCI_MSI_MASK_PARENT),
+	.chip_flags = MSI_CHIP_FLAG_SET_EOI,
+	.bus_select_token = DOMAIN_BUS_PCI_MSI,
+	.init_dev_msi_info = msi_lib_init_dev_msi_info,
+};
+
 static int apple_msi_init(struct apple_pcie *pcie)
 {
 	struct fwnode_handle *fwnode = dev_fwnode(pcie->dev);
+	struct irq_domain_info info = {
+		.fwnode = fwnode,
+		.ops = &apple_msi_domain_ops,
+		.size = pcie->nvecs,
+		.host_data = pcie,
+	};
 	struct of_phandle_args args = {};
-	struct irq_domain *parent;
 	int ret;
 
 	ret = of_parse_phandle_with_args(to_of_node(fwnode), "msi-ranges",

@@ -619,28 +608,16 @@ static int apple_msi_init(struct apple_pcie *pcie)
 	if (!pcie->bitmap)
 		return -ENOMEM;
 
-	parent = irq_find_matching_fwspec(&pcie->fwspec, DOMAIN_BUS_WIRED);
-	if (!parent) {
+	info.parent = irq_find_matching_fwspec(&pcie->fwspec, DOMAIN_BUS_WIRED);
+	if (!info.parent) {
 		dev_err(pcie->dev, "failed to find parent domain\n");
 		return -ENXIO;
 	}
 
-	parent = irq_domain_create_hierarchy(parent, 0, pcie->nvecs, fwnode,
-					     &apple_msi_domain_ops, pcie);
-	if (!parent) {
+	if (!msi_create_parent_irq_domain(&info, &apple_msi_parent_ops)) {
 		dev_err(pcie->dev, "failed to create IRQ domain\n");
 		return -ENOMEM;
 	}
-	irq_domain_update_bus_token(parent, DOMAIN_BUS_NEXUS);
-
-	pcie->domain = pci_msi_create_irq_domain(fwnode, &apple_msi_info,
-						 parent);
-	if (!pcie->domain) {
-		dev_err(pcie->dev, "failed to create MSI domain\n");
-		irq_domain_remove(parent);
-		return -ENOMEM;
-	}
 
 	return 0;
 }

@@ -53,10 +53,9 @@ void pci_disable_msi(struct pci_dev *dev)
 	if (!pci_msi_enabled() || !dev || !dev->msi_enabled)
 		return;
 
-	msi_lock_descs(&dev->dev);
+	guard(msi_descs_lock)(&dev->dev);
 	pci_msi_shutdown(dev);
 	pci_free_msi_irqs(dev);
-	msi_unlock_descs(&dev->dev);
 }
 EXPORT_SYMBOL(pci_disable_msi);
 

@@ -196,10 +195,9 @@ void pci_disable_msix(struct pci_dev *dev)
 	if (!pci_msi_enabled() || !dev || !dev->msix_enabled)
 		return;
 
-	msi_lock_descs(&dev->dev);
+	guard(msi_descs_lock)(&dev->dev);
 	pci_msix_shutdown(dev);
 	pci_free_msi_irqs(dev);
-	msi_unlock_descs(&dev->dev);
 }
 EXPORT_SYMBOL(pci_disable_msix);
 

@@ -401,7 +399,7 @@ EXPORT_SYMBOL_GPL(pci_restore_msi_state);
  * Return: true if MSI has not been globally disabled through ACPI FADT,
  * PCI bridge quirks, or the "pci=nomsi" kernel command-line option.
  */
-int pci_msi_enabled(void)
+bool pci_msi_enabled(void)
 {
 	return pci_msi_enable;
 }

@@ -15,7 +15,7 @@
 #include "../pci.h"
 #include "msi.h"
 
-int pci_msi_enable = 1;
+bool pci_msi_enable = true;
 
 /**
  * pci_msi_supported - check whether MSI may be enabled on a device

@@ -335,43 +335,13 @@ static int msi_verify_entries(struct pci_dev *dev)
 	return !entry ? 0 : -EIO;
 }
 
-/**
- * msi_capability_init - configure device's MSI capability structure
- * @dev: pointer to the pci_dev data structure of MSI device function
- * @nvec: number of interrupts to allocate
- * @affd: description of automatic IRQ affinity assignments (may be %NULL)
- *
- * Setup the MSI capability structure of the device with the requested
- * number of interrupts. A return value of zero indicates the successful
- * setup of an entry with the new MSI IRQ. A negative return value indicates
- * an error, and a positive return value indicates the number of interrupts
- * which could have been allocated.
- */
-static int msi_capability_init(struct pci_dev *dev, int nvec,
-			       struct irq_affinity *affd)
+static int __msi_capability_init(struct pci_dev *dev, int nvec, struct irq_affinity_desc *masks)
 {
-	struct irq_affinity_desc *masks = NULL;
+	int ret = msi_setup_msi_desc(dev, nvec, masks);
 	struct msi_desc *entry, desc;
-	int ret;
-
-	/* Reject multi-MSI early on irq domain enabled architectures */
-	if (nvec > 1 && !pci_msi_domain_supports(dev, MSI_FLAG_MULTI_PCI_MSI, ALLOW_LEGACY))
-		return 1;
-
-	/*
-	 * Disable MSI during setup in the hardware, but mark it enabled
-	 * so that setup code can evaluate it.
-	 */
-	pci_msi_set_enable(dev, 0);
-	dev->msi_enabled = 1;
-
-	if (affd)
-		masks = irq_create_affinity_masks(nvec, affd);
 
-	msi_lock_descs(&dev->dev);
-	ret = msi_setup_msi_desc(dev, nvec, masks);
 	if (ret)
-		goto fail;
+		return ret;
 
 	/* All MSIs are unmasked by default; mask them all */
 	entry = msi_first_desc(&dev->dev, MSI_DESC_ALL);

@@ -393,24 +363,51 @@ static int msi_capability_init(struct pci_dev *dev, int nvec,
 		goto err;
 
 	/* Set MSI enabled bits */
+	dev->msi_enabled = 1;
 	pci_intx_for_msi(dev, 0);
 	pci_msi_set_enable(dev, 1);
 
 	pcibios_free_irq(dev);
 	dev->irq = entry->irq;
-	goto unlock;
 
+	return 0;
 err:
 	pci_msi_unmask(&desc, msi_multi_mask(&desc));
 	pci_free_msi_irqs(dev);
-fail:
-	dev->msi_enabled = 0;
-unlock:
-	msi_unlock_descs(&dev->dev);
-	kfree(masks);
 	return ret;
 }
 
+/**
+ * msi_capability_init - configure device's MSI capability structure
+ * @dev: pointer to the pci_dev data structure of MSI device function
+ * @nvec: number of interrupts to allocate
+ * @affd: description of automatic IRQ affinity assignments (may be %NULL)
+ *
+ * Setup the MSI capability structure of the device with the requested
+ * number of interrupts. A return value of zero indicates the successful
+ * setup of an entry with the new MSI IRQ. A negative return value indicates
+ * an error, and a positive return value indicates the number of interrupts
+ * which could have been allocated.
+ */
+static int msi_capability_init(struct pci_dev *dev, int nvec,
+			       struct irq_affinity *affd)
+{
+	/* Reject multi-MSI early on irq domain enabled architectures */
+	if (nvec > 1 && !pci_msi_domain_supports(dev, MSI_FLAG_MULTI_PCI_MSI, ALLOW_LEGACY))
+		return 1;
+
+	/*
+	 * Disable MSI during setup in the hardware, but mark it enabled
+	 * so that setup code can evaluate it.
+	 */
+	pci_msi_set_enable(dev, 0);
+
+	struct irq_affinity_desc *masks __free(kfree) =
+		affd ? irq_create_affinity_masks(nvec, affd) : NULL;
+
+	guard(msi_descs_lock)(&dev->dev);
+	return __msi_capability_init(dev, nvec, masks);
+}
 
 int __pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec,
 			   struct irq_affinity *affd)
 {

@@ -666,38 +663,39 @@ static void msix_mask_all(void __iomem *base, int tsize)
 		writel(ctrl, base + PCI_MSIX_ENTRY_VECTOR_CTRL);
 }
 
-static int msix_setup_interrupts(struct pci_dev *dev, struct msix_entry *entries,
-				 int nvec, struct irq_affinity *affd)
+DEFINE_FREE(free_msi_irqs, struct pci_dev *, if (_T) pci_free_msi_irqs(_T));
+
+static int __msix_setup_interrupts(struct pci_dev *__dev, struct msix_entry *entries,
+				   int nvec, struct irq_affinity_desc *masks)
 {
-	struct irq_affinity_desc *masks = NULL;
-	int ret;
+	struct pci_dev *dev __free(free_msi_irqs) = __dev;
 
-	if (affd)
-		masks = irq_create_affinity_masks(nvec, affd);
-
-	msi_lock_descs(&dev->dev);
-	ret = msix_setup_msi_descs(dev, entries, nvec, masks);
+	int ret = msix_setup_msi_descs(dev, entries, nvec, masks);
 	if (ret)
-		goto out_free;
+		return ret;
 
 	ret = pci_msi_setup_msi_irqs(dev, nvec, PCI_CAP_ID_MSIX);
 	if (ret)
-		goto out_free;
+		return ret;
 
 	/* Check if all MSI entries honor device restrictions */
 	ret = msi_verify_entries(dev);
 	if (ret)
-		goto out_free;
+		return ret;
 
 	msix_update_entries(dev, entries);
-	goto out_unlock;
+	retain_and_null_ptr(dev);
+	return 0;
+}
 
-out_free:
-	pci_free_msi_irqs(dev);
-out_unlock:
-	msi_unlock_descs(&dev->dev);
-	kfree(masks);
-	return ret;
+static int msix_setup_interrupts(struct pci_dev *dev, struct msix_entry *entries,
+				 int nvec, struct irq_affinity *affd)
+{
+	struct irq_affinity_desc *masks __free(kfree) =
+		affd ? irq_create_affinity_masks(nvec, affd) : NULL;
+
+	guard(msi_descs_lock)(&dev->dev);
+	return __msix_setup_interrupts(dev, entries, nvec, masks);
 }
 
 /**

@@ -873,13 +871,13 @@ void __pci_restore_msix_state(struct pci_dev *dev)
 
 	write_msg = arch_restore_msi_irqs(dev);
 
-	msi_lock_descs(&dev->dev);
+	scoped_guard (msi_descs_lock, &dev->dev) {
 	msi_for_each_desc(entry, &dev->dev, MSI_DESC_ALL) {
 		if (write_msg)
 			__pci_write_msi_msg(entry, &entry->msg);
 		pci_msix_write_vector_ctrl(entry, entry->pci.msix_ctrl);
 	}
-	msi_unlock_descs(&dev->dev);
+	}
 
 	pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_MASKALL, 0);
 }

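Unlike guard(), the scoped_guard() used above confines the critical section to one block while the function continues afterwards — here the MASKALL update deliberately runs after the descriptor lock is dropped. A hedged userspace sketch of that shape, again using only the compiler `cleanup` attribute with made-up names (the kernel's real macro lives in <linux/cleanup.h>):

```c
#include <assert.h>

static int balance;	/* +1 while the scope "holds the lock" */
static int writes;	/* work performed inside the scope */

struct scope_guard { int once; };

static void scope_unlock(struct scope_guard *g)
{
	(void)g;
	balance--;	/* leaving the statement drops the "lock" */
}

/* Rough shape of scoped_guard(): a one-iteration for loop owning a guard */
#define scoped_guard_demo() \
	for (struct scope_guard _s __attribute__((cleanup(scope_unlock))) = \
		{ .once = (balance++, 1) }; _s.once; _s.once = 0)

static void restore_state(void)
{
	scoped_guard_demo() {
		for (int i = 0; i < 3; i++)
			writes++;	/* done while "locked" */
	}
	assert(balance == 0);	/* statements after the scope run unlocked */
}
```

The for-loop trick gives the guard variable a scope that is exactly the attached block, which is why code following the block (the MASKALL write in the hunk above) is outside the lock.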
@@ -918,6 +916,53 @@ void pci_free_msi_irqs(struct pci_dev *dev)
 	}
 }
 
+#ifdef CONFIG_PCIE_TPH
+/**
+ * pci_msix_write_tph_tag - Update the TPH tag for a given MSI-X vector
+ * @pdev:	The PCIe device to update
+ * @index:	The MSI-X index to update
+ * @tag:	The tag to write
+ *
+ * Returns: 0 on success, error code on failure
+ */
+int pci_msix_write_tph_tag(struct pci_dev *pdev, unsigned int index, u16 tag)
+{
+	struct msi_desc *msi_desc;
+	struct irq_desc *irq_desc;
+	unsigned int virq;
+
+	if (!pdev->msix_enabled)
+		return -ENXIO;
+
+	guard(msi_descs_lock)(&pdev->dev);
+	virq = msi_get_virq(&pdev->dev, index);
+	if (!virq)
+		return -ENXIO;
+	/*
+	 * This is a horrible hack, but short of implementing a PCI
+	 * specific interrupt chip callback and a huge pile of
+	 * infrastructure, this is the minor nuisance. It provides the
+	 * protection against concurrent operations on this entry and keeps
+	 * the control word cache in sync.
+	 */
+	irq_desc = irq_to_desc(virq);
+	if (!irq_desc)
+		return -ENXIO;
+
+	guard(raw_spinlock_irq)(&irq_desc->lock);
+	msi_desc = irq_data_get_msi_desc(&irq_desc->irq_data);
+	if (!msi_desc || msi_desc->pci.msi_attrib.is_virtual)
+		return -ENXIO;
+
+	msi_desc->pci.msix_ctrl &= ~PCI_MSIX_ENTRY_CTRL_ST;
+	msi_desc->pci.msix_ctrl |= FIELD_PREP(PCI_MSIX_ENTRY_CTRL_ST, tag);
+	pci_msix_write_vector_ctrl(msi_desc, msi_desc->pci.msix_ctrl);
+	/* Flush the write */
+	readl(pci_msix_desc_addr(msi_desc));
+	return 0;
+}
+#endif
+
 /* Misc. infrastructure */
 
 struct pci_dev *msi_desc_to_pci_dev(struct msi_desc *desc)

@@ -928,5 +973,5 @@ EXPORT_SYMBOL(msi_desc_to_pci_dev);
 
 void pci_no_msi(void)
 {
-	pci_msi_enable = 0;
+	pci_msi_enable = false;
 }

@@ -87,7 +87,7 @@ static inline __attribute_const__ u32 msi_multi_mask(struct msi_desc *desc)
 void msix_prepare_msi_desc(struct pci_dev *dev, struct msi_desc *desc);
 
 /* Subsystem variables */
-extern int pci_msi_enable;
+extern bool pci_msi_enable;
 
 /* MSI internal functions invoked from the public APIs */
 void pci_msi_shutdown(struct pci_dev *dev);

@@ -1064,6 +1064,15 @@ int pcim_request_region_exclusive(struct pci_dev *pdev, int bar,
 				  const char *name);
 void pcim_release_region(struct pci_dev *pdev, int bar);
 
+#ifdef CONFIG_PCI_MSI
+int pci_msix_write_tph_tag(struct pci_dev *pdev, unsigned int index, u16 tag);
+#else
+static inline int pci_msix_write_tph_tag(struct pci_dev *pdev, unsigned int index, u16 tag)
+{
+	return -ENODEV;
+}
+#endif
+
 /*
  * Config Address for PCI Configuration Mechanism #1
  *

@@ -204,48 +204,6 @@ static u8 get_rp_completer_type(struct pci_dev *pdev)
 	return FIELD_GET(PCI_EXP_DEVCAP2_TPH_COMP_MASK, reg);
 }
 
-/* Write ST to MSI-X vector control reg - Return 0 if OK, otherwise -errno */
-static int write_tag_to_msix(struct pci_dev *pdev, int msix_idx, u16 tag)
-{
-#ifdef CONFIG_PCI_MSI
-	struct msi_desc *msi_desc = NULL;
-	void __iomem *vec_ctrl;
-	u32 val;
-	int err = 0;
-
-	msi_lock_descs(&pdev->dev);
-
-	/* Find the msi_desc entry with matching msix_idx */
-	msi_for_each_desc(msi_desc, &pdev->dev, MSI_DESC_ASSOCIATED) {
-		if (msi_desc->msi_index == msix_idx)
-			break;
-	}
-
-	if (!msi_desc) {
-		err = -ENXIO;
-		goto err_out;
-	}
-
-	/* Get the vector control register (offset 0xc) pointed by msix_idx */
-	vec_ctrl = pdev->msix_base + msix_idx * PCI_MSIX_ENTRY_SIZE;
-	vec_ctrl += PCI_MSIX_ENTRY_VECTOR_CTRL;
-
-	val = readl(vec_ctrl);
-	val &= ~PCI_MSIX_ENTRY_CTRL_ST;
-	val |= FIELD_PREP(PCI_MSIX_ENTRY_CTRL_ST, tag);
-	writel(val, vec_ctrl);
-
-	/* Read back to flush the update */
-	val = readl(vec_ctrl);
-
-err_out:
-	msi_unlock_descs(&pdev->dev);
-	return err;
-#else
-	return -ENODEV;
-#endif
-}
-
 /* Write tag to ST table - Return 0 if OK, otherwise -errno */
 static int write_tag_to_st_table(struct pci_dev *pdev, int index, u16 tag)
 {

@@ -346,7 +304,7 @@ int pcie_tph_set_st_entry(struct pci_dev *pdev, unsigned int index, u16 tag)
 
 	switch (loc) {
 	case PCI_TPH_LOC_MSIX:
-		err = write_tag_to_msix(pdev, index, tag);
+		err = pci_msix_write_tph_tag(pdev, index, tag);
 		break;
 	case PCI_TPH_LOC_CAP:
 		err = write_tag_to_st_table(pdev, index, tag);

@@ -103,19 +103,15 @@ int ti_sci_inta_msi_domain_alloc_irqs(struct device *dev,
 	if (ret)
 		return ret;
 
-	msi_lock_descs(dev);
+	guard(msi_descs_lock)(dev);
 	nvec = ti_sci_inta_msi_alloc_descs(dev, res);
-	if (nvec <= 0) {
-		ret = nvec;
-		goto unlock;
-	}
+	if (nvec <= 0)
+		return nvec;
 
 	/* Use alloc ALL as it's unclear whether there are gaps in the indices */
 	ret = msi_domain_alloc_irqs_all_locked(dev, MSI_DEFAULT_DOMAIN, nvec);
 	if (ret)
 		dev_err(dev, "Failed to allocate IRQs %d\n", ret);
-unlock:
-	msi_unlock_descs(dev);
 	return ret;
 }
 EXPORT_SYMBOL_GPL(ti_sci_inta_msi_domain_alloc_irqs);

@@ -1849,25 +1849,38 @@ static void ufs_qcom_write_msi_msg(struct msi_desc *desc, struct msi_msg *msg)
 	ufshcd_mcq_config_esi(hba, msg);
 }
 
+struct ufs_qcom_irq {
+	unsigned int irq;
+	unsigned int idx;
+	struct ufs_hba *hba;
+};
+
 static irqreturn_t ufs_qcom_mcq_esi_handler(int irq, void *data)
 {
-	struct msi_desc *desc = data;
-	struct device *dev = msi_desc_to_dev(desc);
-	struct ufs_hba *hba = dev_get_drvdata(dev);
-	u32 id = desc->msi_index;
-	struct ufs_hw_queue *hwq = &hba->uhq[id];
+	struct ufs_qcom_irq *qi = data;
+	struct ufs_hba *hba = qi->hba;
+	struct ufs_hw_queue *hwq = &hba->uhq[qi->idx];
 
-	ufshcd_mcq_write_cqis(hba, 0x1, id);
+	ufshcd_mcq_write_cqis(hba, 0x1, qi->idx);
 	ufshcd_mcq_poll_cqe_lock(hba, hwq);
 
 	return IRQ_HANDLED;
 }
 
+static void ufs_qcom_irq_free(struct ufs_qcom_irq *uqi)
+{
+	for (struct ufs_qcom_irq *q = uqi; q->irq; q++)
+		devm_free_irq(q->hba->dev, q->irq, q->hba);
+
+	platform_device_msi_free_irqs_all(uqi->hba->dev);
+	devm_kfree(uqi->hba->dev, uqi);
+}
+
+DEFINE_FREE(ufs_qcom_irq, struct ufs_qcom_irq *, if (_T) ufs_qcom_irq_free(_T))
+
 static int ufs_qcom_config_esi(struct ufs_hba *hba)
 {
 	struct ufs_qcom_host *host = ufshcd_get_variant(hba);
-	struct msi_desc *desc;
-	struct msi_desc *failed_desc = NULL;
 	int nr_irqs, ret;
 
 	if (host->esi_enabled)

@@ -1878,6 +1891,14 @@ static int ufs_qcom_config_esi(struct ufs_hba *hba)
 	 * 2. Poll queues do not need ESI.
 	 */
 	nr_irqs = hba->nr_hw_queues - hba->nr_queues[HCTX_TYPE_POLL];
+
+	struct ufs_qcom_irq *qi __free(ufs_qcom_irq) =
+		devm_kcalloc(hba->dev, nr_irqs, sizeof(*qi), GFP_KERNEL);
+	if (!qi)
+		return -ENOMEM;
+	/* Preset so __free() has a pointer to hba in all error paths */
+	qi[0].hba = hba;
 
 	ret = platform_device_msi_init_and_alloc_irqs(hba->dev, nr_irqs,
 						      ufs_qcom_write_msi_msg);
 	if (ret) {

@@ -1885,41 +1906,31 @@ static int ufs_qcom_config_esi(struct ufs_hba *hba)
 		return ret;
 	}
 
-	msi_lock_descs(hba->dev);
-	msi_for_each_desc(desc, hba->dev, MSI_DESC_ALL) {
-		ret = devm_request_irq(hba->dev, desc->irq,
-				       ufs_qcom_mcq_esi_handler,
-				       IRQF_SHARED, "qcom-mcq-esi", desc);
+	for (int idx = 0; idx < nr_irqs; idx++) {
+		qi[idx].irq = msi_get_virq(hba->dev, idx);
+		qi[idx].idx = idx;
+		qi[idx].hba = hba;
+
+		ret = devm_request_irq(hba->dev, qi[idx].irq, ufs_qcom_mcq_esi_handler,
+				       IRQF_SHARED, "qcom-mcq-esi", qi + idx);
 		if (ret) {
 			dev_err(hba->dev, "%s: Fail to request IRQ for %d, err = %d\n",
-				__func__, desc->irq, ret);
-			failed_desc = desc;
-			break;
+				__func__, qi[idx].irq, ret);
+			qi[idx].irq = 0;
+			return ret;
 		}
 	}
-	msi_unlock_descs(hba->dev);
 
-	if (ret) {
-		/* Rewind */
-		msi_lock_descs(hba->dev);
-		msi_for_each_desc(desc, hba->dev, MSI_DESC_ALL) {
-			if (desc == failed_desc)
-				break;
-			devm_free_irq(hba->dev, desc->irq, hba);
-		}
-		msi_unlock_descs(hba->dev);
-		platform_device_msi_free_irqs_all(hba->dev);
-	} else {
-		if (host->hw_ver.major == 6 && host->hw_ver.minor == 0 &&
-		    host->hw_ver.step == 0)
-			ufshcd_rmwl(hba, ESI_VEC_MASK,
-				    FIELD_PREP(ESI_VEC_MASK, MAX_ESI_VEC - 1),
-				    REG_UFS_CFG3);
-		ufshcd_mcq_enable_esi(hba);
-		host->esi_enabled = true;
-	}
+	retain_and_null_ptr(qi);
 
-	return ret;
+	if (host->hw_ver.major == 6 && host->hw_ver.minor == 0 &&
+	    host->hw_ver.step == 0) {
+		ufshcd_rmwl(hba, ESI_VEC_MASK, FIELD_PREP(ESI_VEC_MASK, MAX_ESI_VEC - 1),
+			    REG_UFS_CFG3);
+	}
+	ufshcd_mcq_enable_esi(hba);
+	host->esi_enabled = true;
+
+	return 0;
 }
 
 static u32 ufs_qcom_freq_to_gear_speed(struct ufs_hba *hba, unsigned long freq)

@@ -216,6 +216,25 @@ const volatile void * __must_check_fn(const volatile void *val)
 
 #define return_ptr(p)	return no_free_ptr(p)
 
+/*
+ * Only for situations where an allocation is handed in to another function
+ * and consumed by that function on success.
+ *
+ *	struct foo *f __free(kfree) = kzalloc(sizeof(*f), GFP_KERNEL);
+ *
+ *	setup(f);
+ *	if (some_condition)
+ *		return -EINVAL;
+ *	....
+ *	ret = bar(f);
+ *	if (!ret)
+ *		retain_and_null_ptr(f);
+ *	return ret;
+ *
+ * After retain_and_null_ptr(f) the variable f is NULL and cannot be
+ * dereferenced anymore.
+ */
+#define retain_and_null_ptr(p)	((void)__get_and_null(p, NULL))
+
 /*
  * DEFINE_CLASS(name, type, exit, init, init_args...):

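The new retain_and_null_ptr() documented above pairs with __free(): the pointer is released automatically on every exit path unless ownership was explicitly handed over, at which point the variable is NULLed so the scope-exit cleanup becomes a no-op. This is exactly the shape used by __msix_setup_interrupts() and the UFS/QCOM conversion earlier in this series. A userspace approximation of that contract (hypothetical helper names, malloc/free standing in for kzalloc/kfree):

```c
#include <assert.h>
#include <stdlib.h>

static int free_calls;	/* how often the scope-exit cleanup actually freed */

static void auto_free(int **slot)
{
	if (*slot) {
		free(*slot);
		free_calls++;
	}
}

#define __free_auto	__attribute__((cleanup(auto_free)))
#define retain_and_null_ptr(p)	((void)((p) = NULL))

/* Consumes the allocation on success, leaves ownership with the caller on error */
static int consumer(int *val, int fail)
{
	if (fail)
		return -1;
	free(val);
	return 0;
}

static int caller(int fail)
{
	int *f __free_auto = malloc(sizeof(*f));
	if (!f)
		return -12;

	int ret = consumer(f, fail);
	if (!ret)
		retain_and_null_ptr(f);	/* success: disarm the scope-exit free */
	return ret;
}
```

The design point the comment makes is the same one this sketch shows: error paths need no cleanup code at all, and the success path states the ownership transfer in exactly one place.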
@@ -2,8 +2,8 @@
 // Copyright (C) 2022 Linutronix GmbH
 // Copyright (C) 2022 Intel
 
-#ifndef _DRIVERS_IRQCHIP_IRQ_MSI_LIB_H
-#define _DRIVERS_IRQCHIP_IRQ_MSI_LIB_H
+#ifndef _IRQCHIP_IRQ_MSI_LIB_H
+#define _IRQCHIP_IRQ_MSI_LIB_H
 
 #include <linux/bits.h>
 #include <linux/irqdomain.h>

@@ -24,4 +24,4 @@ bool msi_lib_init_dev_msi_info(struct device *dev, struct irq_domain *domain,
 			       struct irq_domain *real_parent,
 			       struct msi_domain_info *info);
 
-#endif /* _DRIVERS_IRQCHIP_IRQ_MSI_LIB_H */
+#endif /* _IRQCHIP_IRQ_MSI_LIB_H */

@@ -209,6 +209,9 @@ enum {
 	/* Irq domain must destroy generic chips when removed */
 	IRQ_DOMAIN_FLAG_DESTROY_GC	= (1 << 10),
 
+	/* Address and data pair is mutable when irq_set_affinity() */
+	IRQ_DOMAIN_FLAG_MSI_IMMUTABLE	= (1 << 11),
+
 	/*
 	 * Flags starting from IRQ_DOMAIN_FLAG_NONCORE are reserved
 	 * for implementation specific purposes and ignored by the

@@ -256,6 +259,8 @@ static inline struct fwnode_handle *irq_domain_alloc_fwnode(phys_addr_t *pa)
 
 void irq_domain_free_fwnode(struct fwnode_handle *fwnode);
 
+DEFINE_FREE(irq_domain_free_fwnode, struct fwnode_handle *, if (_T) irq_domain_free_fwnode(_T))
+
 struct irq_domain_chip_generic_info;
 
 /**

@@ -627,6 +632,10 @@ static inline bool irq_domain_is_msi_device(struct irq_domain *domain)
 	return domain->flags & IRQ_DOMAIN_FLAG_MSI_DEVICE;
 }
 
+static inline bool irq_domain_is_msi_immutable(struct irq_domain *domain)
+{
+	return domain->flags & IRQ_DOMAIN_FLAG_MSI_IMMUTABLE;
+}
 #else /* CONFIG_IRQ_DOMAIN_HIERARCHY */
 static inline int irq_domain_alloc_irqs(struct irq_domain *domain, unsigned int nr_irqs,
 					int node, void *arg)

include/linux/msi.h

@@ -229,8 +229,11 @@ struct msi_dev_domain {
 int msi_setup_device_data(struct device *dev);
 
-void msi_lock_descs(struct device *dev);
-void msi_unlock_descs(struct device *dev);
+void __msi_lock_descs(struct device *dev);
+void __msi_unlock_descs(struct device *dev);
+
+DEFINE_LOCK_GUARD_1(msi_descs_lock, struct device, __msi_lock_descs(_T->lock),
+		    __msi_unlock_descs(_T->lock));
 
 struct msi_desc *msi_domain_first_desc(struct device *dev, unsigned int domid,
 				       enum msi_desc_filter filter);
@@ -420,6 +423,7 @@ struct msi_domain_info;
 * @msi_init:		Domain specific init function for MSI interrupts
 * @msi_free:		Domain specific function to free a MSI interrupts
 * @msi_prepare:	Prepare the allocation of the interrupts in the domain
+* @msi_teardown:	Reverse the effects of @msi_prepare
 * @prepare_desc:	Optional function to prepare the allocated MSI descriptor
 *			in the domain
 * @set_desc:		Set the msi descriptor for an interrupt
@@ -435,8 +439,9 @@ struct msi_domain_info;
 * @get_hwirq, @msi_init and @msi_free are callbacks used by the underlying
 * irqdomain.
 *
-* @msi_check, @msi_prepare, @prepare_desc and @set_desc are callbacks used by the
-* msi_domain_alloc/free_irqs*() variants.
+* @msi_check, @msi_prepare, @msi_teardown, @prepare_desc and
+* @set_desc are callbacks used by the msi_domain_alloc/free_irqs*()
+* variants.
 *
 * @domain_alloc_irqs, @domain_free_irqs can be used to override the
 * default allocation/free functions (__msi_domain_alloc/free_irqs). This
@@ -458,6 +463,8 @@ struct msi_domain_ops {
 	int		(*msi_prepare)(struct irq_domain *domain,
 				       struct device *dev, int nvec,
 				       msi_alloc_info_t *arg);
+	void		(*msi_teardown)(struct irq_domain *domain,
+					msi_alloc_info_t *arg);
 	void		(*prepare_desc)(struct irq_domain *domain, msi_alloc_info_t *arg,
 					struct msi_desc *desc);
 	void		(*set_desc)(msi_alloc_info_t *arg,
@@ -486,6 +493,7 @@ struct msi_domain_ops {
 * @handler:		Optional: associated interrupt flow handler
 * @handler_data:	Optional: associated interrupt flow handler data
 * @handler_name:	Optional: associated interrupt flow handler name
+* @alloc_data:		Optional: associated interrupt allocation data
 * @data:		Optional: domain specific data
 */
 struct msi_domain_info {
@@ -498,6 +506,7 @@ struct msi_domain_info {
 	irq_flow_handler_t	handler;
 	void			*handler_data;
 	const char		*handler_name;
+	msi_alloc_info_t	*alloc_data;
 	void			*data;
 };
 
@@ -507,12 +516,14 @@ struct msi_domain_info {
 * @chip:	Interrupt chip for this domain
 * @ops:	MSI domain ops
 * @info:	MSI domain info data
+* @alloc_info:	MSI domain allocation data (architecture specific)
 */
 struct msi_domain_template {
 	char			name[48];
 	struct irq_chip		chip;
 	struct msi_domain_ops	ops;
 	struct msi_domain_info	info;
+	msi_alloc_info_t	alloc_info;
 };
 
 /*
@@ -625,6 +636,10 @@ struct irq_domain *msi_create_irq_domain(struct fwnode_handle *fwnode,
 					 struct msi_domain_info *info,
 					 struct irq_domain *parent);
 
+struct irq_domain_info;
+struct irq_domain *msi_create_parent_irq_domain(struct irq_domain_info *info,
+						const struct msi_parent_ops *msi_parent_ops);
+
 bool msi_create_device_irq_domain(struct device *dev, unsigned int domid,
 				  const struct msi_domain_template *template,
 				  unsigned int hwsize, void *domain_data,
@ -1671,7 +1671,7 @@ void pci_disable_msi(struct pci_dev *dev);
|
|||
int pci_msix_vec_count(struct pci_dev *dev);
|
||||
void pci_disable_msix(struct pci_dev *dev);
|
||||
void pci_restore_msi_state(struct pci_dev *dev);
|
||||
int pci_msi_enabled(void);
|
||||
bool pci_msi_enabled(void);
|
||||
int pci_enable_msi(struct pci_dev *dev);
|
||||
int pci_enable_msix_range(struct pci_dev *dev, struct msix_entry *entries,
|
||||
int minvec, int maxvec);
|
||||
|
|
@ -1704,7 +1704,7 @@ static inline void pci_disable_msi(struct pci_dev *dev) { }
|
|||
static inline int pci_msix_vec_count(struct pci_dev *dev) { return -ENOSYS; }
|
||||
static inline void pci_disable_msix(struct pci_dev *dev) { }
|
||||
static inline void pci_restore_msi_state(struct pci_dev *dev) { }
|
||||
static inline int pci_msi_enabled(void) { return 0; }
|
||||
static inline bool pci_msi_enabled(void) { return false; }
|
||||
static inline int pci_enable_msi(struct pci_dev *dev)
|
||||
{ return -ENOSYS; }
|
||||
static inline int pci_enable_msix_range(struct pci_dev *dev,
|
||||
|
|
|
|||
kernel/irq/msi.c

@@ -59,7 +59,8 @@ struct msi_ctrl {
 static void msi_domain_free_locked(struct device *dev, struct msi_ctrl *ctrl);
 static unsigned int msi_domain_get_hwsize(struct device *dev, unsigned int domid);
 static inline int msi_sysfs_create_group(struct device *dev);
-
+static int msi_domain_prepare_irqs(struct irq_domain *domain, struct device *dev,
+				   int nvec, msi_alloc_info_t *arg);
 
 /**
  * msi_alloc_desc - Allocate an initialized msi_desc
@@ -343,26 +344,30 @@ int msi_setup_device_data(struct device *dev)
 }
 
 /**
- * msi_lock_descs - Lock the MSI descriptor storage of a device
+ * __msi_lock_descs - Lock the MSI descriptor storage of a device
  * @dev: Device to operate on
+ *
+ * Internal function for guard(msi_descs_lock). Don't use in code.
  */
-void msi_lock_descs(struct device *dev)
+void __msi_lock_descs(struct device *dev)
 {
 	mutex_lock(&dev->msi.data->mutex);
 }
-EXPORT_SYMBOL_GPL(msi_lock_descs);
+EXPORT_SYMBOL_GPL(__msi_lock_descs);
 
 /**
- * msi_unlock_descs - Unlock the MSI descriptor storage of a device
+ * __msi_unlock_descs - Unlock the MSI descriptor storage of a device
  * @dev: Device to operate on
+ *
+ * Internal function for guard(msi_descs_lock). Don't use in code.
  */
-void msi_unlock_descs(struct device *dev)
+void __msi_unlock_descs(struct device *dev)
 {
 	/* Invalidate the index which was cached by the iterator */
 	dev->msi.data->__iter_idx = MSI_XA_MAX_INDEX;
 	mutex_unlock(&dev->msi.data->mutex);
 }
-EXPORT_SYMBOL_GPL(msi_unlock_descs);
+EXPORT_SYMBOL_GPL(__msi_unlock_descs);
 
 static struct msi_desc *msi_find_desc(struct msi_device_data *md, unsigned int domid,
 				      enum msi_desc_filter filter)
@@ -448,7 +453,6 @@ EXPORT_SYMBOL_GPL(msi_next_desc);
 unsigned int msi_domain_get_virq(struct device *dev, unsigned int domid, unsigned int index)
 {
 	struct msi_desc *desc;
-	unsigned int ret = 0;
 	bool pcimsi = false;
 	struct xarray *xa;
 
@@ -462,7 +466,7 @@ unsigned int msi_domain_get_virq(struct device *dev, unsigned int domid, unsigne
 	if (dev_is_pci(dev) && domid == MSI_DEFAULT_DOMAIN)
 		pcimsi = to_pci_dev(dev)->msi_enabled;
 
-	msi_lock_descs(dev);
+	guard(msi_descs_lock)(dev);
 	xa = &dev->msi.data->__domains[domid].store;
 	desc = xa_load(xa, pcimsi ? 0 : index);
 	if (desc && desc->irq) {
@@ -471,16 +475,12 @@ unsigned int msi_domain_get_virq(struct device *dev, unsigned int domid, unsigne
 		 * PCI-MSIX and platform MSI use a descriptor per
 		 * interrupt.
 		 */
-		if (pcimsi) {
-			if (index < desc->nvec_used)
-				ret = desc->irq + index;
-		} else {
-			ret = desc->irq;
-		}
+		if (!pcimsi)
+			return desc->irq;
+		if (index < desc->nvec_used)
+			return desc->irq + index;
 	}
-
-	msi_unlock_descs(dev);
-	return ret;
+	return 0;
 }
 EXPORT_SYMBOL_GPL(msi_domain_get_virq);
@ -796,6 +796,10 @@ static int msi_domain_ops_prepare(struct irq_domain *domain, struct device *dev,
|
|||
return 0;
|
||||
}
|
||||
|
||||
static void msi_domain_ops_teardown(struct irq_domain *domain, msi_alloc_info_t *arg)
|
||||
{
|
||||
}
|
||||
|
||||
static void msi_domain_ops_set_desc(msi_alloc_info_t *arg,
|
||||
struct msi_desc *desc)
|
||||
{
|
||||
|
|
@ -821,6 +825,7 @@ static struct msi_domain_ops msi_domain_ops_default = {
|
|||
.get_hwirq = msi_domain_ops_get_hwirq,
|
||||
.msi_init = msi_domain_ops_init,
|
||||
.msi_prepare = msi_domain_ops_prepare,
|
||||
.msi_teardown = msi_domain_ops_teardown,
|
||||
.set_desc = msi_domain_ops_set_desc,
|
||||
};
|
||||
|
||||
|
|
@ -842,6 +847,8 @@ static void msi_domain_update_dom_ops(struct msi_domain_info *info)
|
|||
ops->msi_init = msi_domain_ops_default.msi_init;
|
||||
if (ops->msi_prepare == NULL)
|
||||
ops->msi_prepare = msi_domain_ops_default.msi_prepare;
|
||||
if (ops->msi_teardown == NULL)
|
||||
ops->msi_teardown = msi_domain_ops_default.msi_teardown;
|
||||
if (ops->set_desc == NULL)
|
||||
ops->set_desc = msi_domain_ops_default.set_desc;
|
||||
}
|
||||
|
|
@ -904,6 +911,32 @@ struct irq_domain *msi_create_irq_domain(struct fwnode_handle *fwnode,
|
|||
return __msi_create_irq_domain(fwnode, info, 0, parent);
|
||||
}
|
||||
|
||||
/**
|
||||
* msi_create_parent_irq_domain - Create an MSI-parent interrupt domain
|
||||
* @info: MSI irqdomain creation info
|
||||
* @msi_parent_ops: MSI parent callbacks and configuration
|
||||
*
|
||||
* Return: pointer to the created &struct irq_domain or %NULL on failure
|
||||
*/
|
||||
struct irq_domain *msi_create_parent_irq_domain(struct irq_domain_info *info,
|
||||
const struct msi_parent_ops *msi_parent_ops)
|
||||
{
|
||||
struct irq_domain *d;
|
||||
|
||||
info->hwirq_max = max(info->hwirq_max, info->size);
|
||||
info->size = info->hwirq_max;
|
||||
info->domain_flags |= IRQ_DOMAIN_FLAG_MSI_PARENT;
|
||||
info->bus_token = msi_parent_ops->bus_select_token;
|
||||
|
||||
d = irq_domain_instantiate(info);
|
||||
if (IS_ERR(d))
|
||||
return NULL;
|
||||
|
||||
d->msi_parent_ops = msi_parent_ops;
|
||||
return d;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(msi_create_parent_irq_domain);
|
||||
|
||||
/**
|
||||
* msi_parent_init_dev_msi_info - Delegate initialization of device MSI info down
|
||||
* in the domain hierarchy
|
||||
|
|
@ -998,9 +1031,8 @@ bool msi_create_device_irq_domain(struct device *dev, unsigned int domid,
|
|||
void *chip_data)
|
||||
{
|
||||
struct irq_domain *domain, *parent = dev->msi.domain;
|
||||
struct fwnode_handle *fwnode, *fwnalloced = NULL;
|
||||
struct msi_domain_template *bundle;
|
||||
const struct msi_parent_ops *pops;
|
||||
struct fwnode_handle *fwnode;
|
||||
|
||||
if (!irq_domain_is_msi_parent(parent))
|
||||
return false;
|
||||
|
|
@ -1008,7 +1040,8 @@ bool msi_create_device_irq_domain(struct device *dev, unsigned int domid,
|
|||
if (domid >= MSI_MAX_DEVICE_IRQDOMAINS)
|
||||
return false;
|
||||
|
||||
bundle = kmemdup(template, sizeof(*bundle), GFP_KERNEL);
|
||||
struct msi_domain_template *bundle __free(kfree) =
|
||||
kmemdup(template, sizeof(*bundle), GFP_KERNEL);
|
||||
if (!bundle)
|
||||
return false;
|
||||
|
||||
|
|
@ -1017,6 +1050,7 @@ bool msi_create_device_irq_domain(struct device *dev, unsigned int domid,
|
|||
bundle->info.ops = &bundle->ops;
|
||||
bundle->info.data = domain_data;
|
||||
bundle->info.chip_data = chip_data;
|
||||
bundle->info.alloc_data = &bundle->alloc_info;
|
||||
|
||||
pops = parent->msi_parent_ops;
|
||||
snprintf(bundle->name, sizeof(bundle->name), "%s%s-%s",
|
||||
|
|
@ -1031,41 +1065,43 @@ bool msi_create_device_irq_domain(struct device *dev, unsigned int domid,
|
|||
* node as they are not guaranteed to have a fwnode. They are never
|
||||
* looked up and always handled in the context of the device.
|
||||
*/
|
||||
if (bundle->info.flags & MSI_FLAG_USE_DEV_FWNODE)
|
||||
fwnode = dev->fwnode;
|
||||
struct fwnode_handle *fwnode_alloced __free(irq_domain_free_fwnode) = NULL;
|
||||
|
||||
if (!(bundle->info.flags & MSI_FLAG_USE_DEV_FWNODE))
|
||||
fwnode = fwnode_alloced = irq_domain_alloc_named_fwnode(bundle->name);
|
||||
else
|
||||
fwnode = fwnalloced = irq_domain_alloc_named_fwnode(bundle->name);
|
||||
fwnode = dev->fwnode;
|
||||
|
||||
if (!fwnode)
|
||||
goto free_bundle;
|
||||
return false;
|
||||
|
||||
if (msi_setup_device_data(dev))
|
||||
goto free_fwnode;
|
||||
|
||||
msi_lock_descs(dev);
|
||||
return false;
|
||||
|
||||
guard(msi_descs_lock)(dev);
|
||||
if (WARN_ON_ONCE(msi_get_device_domain(dev, domid)))
|
||||
goto fail;
|
||||
return false;
|
||||
|
||||
if (!pops->init_dev_msi_info(dev, parent, parent, &bundle->info))
|
||||
goto fail;
|
||||
return false;
|
||||
|
||||
domain = __msi_create_irq_domain(fwnode, &bundle->info, IRQ_DOMAIN_FLAG_MSI_DEVICE, parent);
|
||||
if (!domain)
|
||||
goto fail;
|
||||
return false;
|
||||
|
||||
domain->dev = dev;
|
||||
dev->msi.data->__domains[domid].domain = domain;
|
||||
msi_unlock_descs(dev);
|
||||
return true;
|
||||
|
||||
fail:
|
||||
msi_unlock_descs(dev);
|
||||
free_fwnode:
|
||||
irq_domain_free_fwnode(fwnalloced);
|
||||
free_bundle:
|
||||
kfree(bundle);
|
||||
if (msi_domain_prepare_irqs(domain, dev, hwsize, &bundle->alloc_info)) {
|
||||
dev->msi.data->__domains[domid].domain = NULL;
|
||||
irq_domain_remove(domain);
|
||||
return false;
|
||||
}
|
||||
|
||||
/* @bundle and @fwnode_alloced are now in use. Prevent cleanup */
|
||||
retain_and_null_ptr(bundle);
|
||||
retain_and_null_ptr(fwnode_alloced);
|
||||
return true;
|
||||
}
|
||||
|
||||
/**
|
||||
|
|
@@ -1079,23 +1115,21 @@ void msi_remove_device_irq_domain(struct device *dev, unsigned int domid)
 	struct msi_domain_info *info;
 	struct irq_domain *domain;
 
-	msi_lock_descs(dev);
-
+	guard(msi_descs_lock)(dev);
 	domain = msi_get_device_domain(dev, domid);
-
 	if (!domain || !irq_domain_is_msi_device(domain))
-		goto unlock;
+		return;
 
 	dev->msi.data->__domains[domid].domain = NULL;
 	info = domain->host_data;
+
+	info->ops->msi_teardown(domain, info->alloc_data);
+
 	if (irq_domain_is_msi_device(domain))
 		fwnode = domain->fwnode;
 	irq_domain_remove(domain);
 	irq_domain_free_fwnode(fwnode);
 	kfree(container_of(info, struct msi_domain_template, info));
-
-unlock:
-	msi_unlock_descs(dev);
 }
 
 /**
@@ -1111,16 +1145,14 @@ bool msi_match_device_irq_domain(struct device *dev, unsigned int domid,
 {
 	struct msi_domain_info *info;
 	struct irq_domain *domain;
-	bool ret = false;
 
-	msi_lock_descs(dev);
+	guard(msi_descs_lock)(dev);
 	domain = msi_get_device_domain(dev, domid);
 	if (domain && irq_domain_is_msi_device(domain)) {
 		info = domain->host_data;
-		ret = info->bus_token == bus_token;
+		return info->bus_token == bus_token;
 	}
-	msi_unlock_descs(dev);
-	return ret;
+	return false;
 }
 
 static int msi_domain_prepare_irqs(struct irq_domain *domain, struct device *dev,
@ -1238,6 +1270,24 @@ static int msi_init_virq(struct irq_domain *domain, int virq, unsigned int vflag
|
|||
return 0;
|
||||
}
|
||||
|
||||
static int populate_alloc_info(struct irq_domain *domain, struct device *dev,
|
||||
unsigned int nirqs, msi_alloc_info_t *arg)
|
||||
{
|
||||
struct msi_domain_info *info = domain->host_data;
|
||||
|
||||
/*
|
||||
* If the caller has provided a template alloc info, use that. Once
|
||||
* all users of msi_create_irq_domain() have been eliminated, this
|
||||
* should be the only source of allocation information, and the
|
||||
* prepare call below should be finally removed.
|
||||
*/
|
||||
if (!info->alloc_data)
|
||||
return msi_domain_prepare_irqs(domain, dev, nirqs, arg);
|
||||
|
||||
*arg = *info->alloc_data;
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int __msi_domain_alloc_irqs(struct device *dev, struct irq_domain *domain,
|
||||
struct msi_ctrl *ctrl)
|
||||
{
|
||||
|
|
@ -1250,7 +1300,7 @@ static int __msi_domain_alloc_irqs(struct device *dev, struct irq_domain *domain
|
|||
unsigned long idx;
|
||||
int i, ret, virq;
|
||||
|
||||
ret = msi_domain_prepare_irqs(domain, dev, ctrl->nirqs, &arg);
|
||||
ret = populate_alloc_info(domain, dev, ctrl->nirqs, &arg);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
|
|
@ -1391,12 +1441,9 @@ int msi_domain_alloc_irqs_range_locked(struct device *dev, unsigned int domid,
|
|||
int msi_domain_alloc_irqs_range(struct device *dev, unsigned int domid,
|
||||
unsigned int first, unsigned int last)
|
||||
{
|
||||
int ret;
|
||||
|
||||
msi_lock_descs(dev);
|
||||
ret = msi_domain_alloc_irqs_range_locked(dev, domid, first, last);
|
||||
msi_unlock_descs(dev);
|
||||
return ret;
|
||||
guard(msi_descs_lock)(dev);
|
||||
return msi_domain_alloc_irqs_range_locked(dev, domid, first, last);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(msi_domain_alloc_irqs_range);
|
||||
|
||||
|
|
@ -1500,12 +1547,8 @@ struct msi_map msi_domain_alloc_irq_at(struct device *dev, unsigned int domid, u
|
|||
const struct irq_affinity_desc *affdesc,
|
||||
union msi_instance_cookie *icookie)
|
||||
{
|
||||
struct msi_map map;
|
||||
|
||||
msi_lock_descs(dev);
|
||||
map = __msi_domain_alloc_irq_at(dev, domid, index, affdesc, icookie);
|
||||
msi_unlock_descs(dev);
|
||||
return map;
|
||||
guard(msi_descs_lock)(dev);
|
||||
return __msi_domain_alloc_irq_at(dev, domid, index, affdesc, icookie);
|
||||
}
|
||||
|
||||
/**
|
||||
|
|
@ -1542,13 +1585,11 @@ int msi_device_domain_alloc_wired(struct irq_domain *domain, unsigned int hwirq,
|
|||
|
||||
icookie.value = ((u64)type << 32) | hwirq;
|
||||
|
||||
msi_lock_descs(dev);
|
||||
guard(msi_descs_lock)(dev);
|
||||
if (WARN_ON_ONCE(msi_get_device_domain(dev, domid) != domain))
|
||||
map.index = -EINVAL;
|
||||
else
|
||||
map = __msi_domain_alloc_irq_at(dev, domid, MSI_ANY_INDEX, NULL, &icookie);
|
||||
msi_unlock_descs(dev);
|
||||
|
||||
return map.index >= 0 ? map.virq : map.index;
|
||||
}
|
||||
|
||||
|
|
@ -1641,9 +1682,8 @@ void msi_domain_free_irqs_range_locked(struct device *dev, unsigned int domid,
|
|||
void msi_domain_free_irqs_range(struct device *dev, unsigned int domid,
|
||||
unsigned int first, unsigned int last)
|
||||
{
|
||||
msi_lock_descs(dev);
|
||||
guard(msi_descs_lock)(dev);
|
||||
msi_domain_free_irqs_range_locked(dev, domid, first, last);
|
||||
msi_unlock_descs(dev);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(msi_domain_free_irqs_all);
|
||||
|
||||
|
|
@ -1673,9 +1713,8 @@ void msi_domain_free_irqs_all_locked(struct device *dev, unsigned int domid)
|
|||
*/
|
||||
void msi_domain_free_irqs_all(struct device *dev, unsigned int domid)
|
||||
{
|
||||
msi_lock_descs(dev);
|
||||
guard(msi_descs_lock)(dev);
|
||||
msi_domain_free_irqs_all_locked(dev, domid);
|
||||
msi_unlock_descs(dev);
|
||||
}
|
||||
|
||||
/**
|
||||
|
|
@ -1694,12 +1733,11 @@ void msi_device_domain_free_wired(struct irq_domain *domain, unsigned int virq)
|
|||
if (WARN_ON_ONCE(!dev || !desc || domain->bus_token != DOMAIN_BUS_WIRED_TO_MSI))
|
||||
return;
|
||||
|
||||
msi_lock_descs(dev);
|
||||
if (!WARN_ON_ONCE(msi_get_device_domain(dev, MSI_DEFAULT_DOMAIN) != domain)) {
|
||||
guard(msi_descs_lock)(dev);
|
||||
if (WARN_ON_ONCE(msi_get_device_domain(dev, MSI_DEFAULT_DOMAIN) != domain))
|
||||
return;
|
||||
msi_domain_free_irqs_range_locked(dev, MSI_DEFAULT_DOMAIN, desc->msi_index,
|
||||
desc->msi_index);
|
||||
}
|
||||
msi_unlock_descs(dev);
|
||||
}
|
||||
|
||||
/**
|
||||
|
|