Merge tag 'net-6.15-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
 "Happy May Day.

  Things have calmed down on our end (knock on wood); no outstanding
  investigations. Including fixes from Bluetooth and WiFi.

  Current release - fix to a fix:

   - igc: fix lock order in igc_ptp_reset

  Current release - new code bugs:

   - Revert "wifi: iwlwifi: make no_160 more generic"; fixes a regression
     on the Killer line of devices reported by a number of people

   - Revert "wifi: iwlwifi: add support for BE213", initial FW is too
     buggy

   - a number of fixes for mld, the new Intel WiFi subdriver

  Previous releases - regressions:

   - wifi: mac80211: restore monitor for outgoing frames

   - drv: vmxnet3: fix malformed packet sizing in vmxnet3_process_xdp

   - eth: bnxt_en: fix timestamping FIFO getting out of sync on reset,
     delivering stale timestamps

   - use sock_gen_put() in the TCP fraglist GRO heuristic, don't assume
     every socket is a full socket

  Previous releases - always broken:

   - sched: adapt qdiscs for reentrant enqueue cases, fix list
     corruptions

   - xsk: fix race condition in AF_XDP generic RX path, shared UMEM
     can't be protected by a per-socket lock

   - eth: mtk-star-emac: fix spinlock recursion issues on rx/tx poll

   - btusb: avoid NULL pointer dereference in skb_dequeue()

   - dsa: felix: fix broken taprio gate states after clock jump"

* tag 'net-6.15-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (83 commits)
  net: vertexcom: mse102x: Fix RX error handling
  net: vertexcom: mse102x: Add range check for CMD_RTS
  net: vertexcom: mse102x: Fix LEN_MASK
  net: vertexcom: mse102x: Fix possible stuck of SPI interrupt
  net: hns3: defer calling ptp_clock_register()
  net: hns3: fixed debugfs tm_qset size
  net: hns3: fix an interrupt residual problem
  net: hns3: store rx VLAN tag offload state for VF
  octeon_ep: Fix host hang issue during device reboot
  net: fec: ERR007885 Workaround for conventional TX
  net: lan743x: Fix memleak issue when GSO enabled
  ptp: ocp: Fix NULL dereference in Adva board SMA sysfs operations
  net: use sock_gen_put() when sk_state is TCP_TIME_WAIT
  bnxt_en: fix module unload sequence
  bnxt_en: Fix ethtool -d byte order for 32-bit values
  bnxt_en: Fix out-of-bound memcpy() during ethtool -w
  bnxt_en: Fix coredump logic to free allocated buffer
  bnxt_en: delay pci_alloc_irq_vectors() in the AER path
  bnxt_en: call pci_alloc_irq_vectors() after bnxt_reserve_rings()
  bnxt_en: Add missing skb_mark_for_recycle() in bnxt_rx_vlan()
  ...
commit ebd297a2af
Linus Torvalds, 2025-05-01 10:37:49 -07:00
96 changed files with 1786 additions and 737 deletions


@@ -89,8 +89,10 @@ definitions:
       doc: Group of short_detected states
   -
    name: phy-upstream-type
-   enum-name:
+   enum-name: phy-upstream
+   header: linux/ethtool.h
    type: enum
+   name-prefix: phy-upstream
    entries: [ mac, phy ]
   -
    name: tcp-data-split


@@ -957,8 +957,10 @@ static int btintel_pcie_recv_event(struct hci_dev *hdev, struct sk_buff *skb)
        /* This is a debug event that comes from IML and OP image when it
         * starts execution. There is no need pass this event to stack.
         */
-       if (skb->data[2] == 0x97)
+       if (skb->data[2] == 0x97) {
+           hci_recv_diag(hdev, skb);
            return 0;
+       }
    }
 
    return hci_recv_frame(hdev, skb);
@@ -974,7 +976,6 @@ static int btintel_pcie_recv_frame(struct btintel_pcie_data *data,
    u8 pkt_type;
    u16 plen;
    u32 pcie_pkt_type;
-   struct sk_buff *new_skb;
    void *pdata;
    struct hci_dev *hdev = data->hdev;
@@ -1051,24 +1052,20 @@ static int btintel_pcie_recv_frame(struct btintel_pcie_data *data,
    bt_dev_dbg(hdev, "pkt_type: 0x%2.2x len: %u", pkt_type, plen);
 
-   new_skb = bt_skb_alloc(plen, GFP_ATOMIC);
-   if (!new_skb) {
-       bt_dev_err(hdev, "Failed to allocate memory for skb of len: %u",
-              skb->len);
-       ret = -ENOMEM;
-       goto exit_error;
-   }
-
-   hci_skb_pkt_type(new_skb) = pkt_type;
-   skb_put_data(new_skb, skb->data, plen);
+   hci_skb_pkt_type(skb) = pkt_type;
    hdev->stat.byte_rx += plen;
+   skb_trim(skb, plen);
 
    if (pcie_pkt_type == BTINTEL_PCIE_HCI_EVT_PKT)
-       ret = btintel_pcie_recv_event(hdev, new_skb);
+       ret = btintel_pcie_recv_event(hdev, skb);
    else
-       ret = hci_recv_frame(hdev, new_skb);
+       ret = hci_recv_frame(hdev, skb);
+   skb = NULL; /* skb is freed in the callee */
 
 exit_error:
-   if (skb)
-       kfree_skb(skb);
-
    if (ret)
        hdev->stat.err_rx++;
@@ -1202,8 +1199,6 @@ static void btintel_pcie_rx_work(struct work_struct *work)
    struct btintel_pcie_data *data = container_of(work,
                    struct btintel_pcie_data, rx_work);
    struct sk_buff *skb;
-   int err;
-   struct hci_dev *hdev = data->hdev;
 
    if (test_bit(BTINTEL_PCIE_HWEXP_INPROGRESS, &data->flags)) {
        /* Unlike usb products, controller will not send hardware
@@ -1224,11 +1219,7 @@ static void btintel_pcie_rx_work(struct work_struct *work)
    /* Process the sk_buf in queue and send to the HCI layer */
    while ((skb = skb_dequeue(&data->rx_skb_q))) {
-       err = btintel_pcie_recv_frame(data, skb);
-       if (err)
-           bt_dev_err(hdev, "Failed to send received frame: %d",
-                  err);
-       kfree_skb(skb);
+       btintel_pcie_recv_frame(data, skb);
    }
 }
@@ -1281,10 +1272,8 @@ static void btintel_pcie_msix_rx_handle(struct btintel_pcie_data *data)
    bt_dev_dbg(hdev, "RXQ: cr_hia: %u cr_tia: %u", cr_hia, cr_tia);
 
    /* Check CR_TIA and CR_HIA for change */
-   if (cr_tia == cr_hia) {
-       bt_dev_warn(hdev, "RXQ: no new CD found");
+   if (cr_tia == cr_hia)
        return;
-   }
 
    rxq = &data->rxq;
@@ -1320,6 +1309,16 @@ static irqreturn_t btintel_pcie_msix_isr(int irq, void *data)
    return IRQ_WAKE_THREAD;
 }
 
+static inline bool btintel_pcie_is_rxq_empty(struct btintel_pcie_data *data)
+{
+   return data->ia.cr_hia[BTINTEL_PCIE_RXQ_NUM] == data->ia.cr_tia[BTINTEL_PCIE_RXQ_NUM];
+}
+
+static inline bool btintel_pcie_is_txackq_empty(struct btintel_pcie_data *data)
+{
+   return data->ia.cr_tia[BTINTEL_PCIE_TXQ_NUM] == data->ia.cr_hia[BTINTEL_PCIE_TXQ_NUM];
+}
+
 static irqreturn_t btintel_pcie_irq_msix_handler(int irq, void *dev_id)
 {
    struct msix_entry *entry = dev_id;
@@ -1351,12 +1350,18 @@ static irqreturn_t btintel_pcie_irq_msix_handler(int irq, void *dev_id)
        btintel_pcie_msix_gp0_handler(data);
 
    /* For TX */
-   if (intr_fh & BTINTEL_PCIE_MSIX_FH_INT_CAUSES_0)
+   if (intr_fh & BTINTEL_PCIE_MSIX_FH_INT_CAUSES_0) {
        btintel_pcie_msix_tx_handle(data);
+       if (!btintel_pcie_is_rxq_empty(data))
+           btintel_pcie_msix_rx_handle(data);
+   }
 
    /* For RX */
-   if (intr_fh & BTINTEL_PCIE_MSIX_FH_INT_CAUSES_1)
+   if (intr_fh & BTINTEL_PCIE_MSIX_FH_INT_CAUSES_1) {
        btintel_pcie_msix_rx_handle(data);
+       if (!btintel_pcie_is_txackq_empty(data))
+           btintel_pcie_msix_tx_handle(data);
+   }
 
    /*
     * Before sending the interrupt the HW disables it to prevent a nested
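The interrupt rework in the last two hunks covers the case where TX-ack and RX completions arrive together but only one MSI-X cause bit is latched: after servicing the cause that fired, the handler peeks at the other queue's head/tail indices and drains it too. A minimal sketch of the pattern, with hypothetical foo_* names standing in for the driver specifics:

static irqreturn_t foo_irq_thread(int irq, void *dev_id)
{
    struct foo_data *data = dev_id;
    u32 causes = foo_read_int_causes(data);

    if (causes & FOO_CAUSE_TX) {
        foo_tx_handle(data);
        if (!foo_rxq_empty(data))    /* RX landed with the TX irq */
            foo_rx_handle(data);
    }
    if (causes & FOO_CAUSE_RX) {
        foo_rx_handle(data);
        if (!foo_txackq_empty(data)) /* TX ack landed with the RX irq */
            foo_tx_handle(data);
    }
    return IRQ_HANDLED;
}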


@@ -723,6 +723,10 @@ static int btmtksdio_close(struct hci_dev *hdev)
 {
    struct btmtksdio_dev *bdev = hci_get_drvdata(hdev);
 
+   /* Skip btmtksdio_close if BTMTKSDIO_FUNC_ENABLED isn't set */
+   if (!test_bit(BTMTKSDIO_FUNC_ENABLED, &bdev->tx_state))
+       return 0;
+
    sdio_claim_host(bdev->func);
 
    /* Disable interrupt */
@@ -1443,11 +1447,15 @@ static void btmtksdio_remove(struct sdio_func *func)
    if (!bdev)
        return;
 
+   hdev = bdev->hdev;
+
+   /* Make sure to call btmtksdio_close before removing sdio card */
+   if (test_bit(BTMTKSDIO_FUNC_ENABLED, &bdev->tx_state))
+       btmtksdio_close(hdev);
+
    /* Be consistent the state in btmtksdio_probe */
    pm_runtime_get_noresume(bdev->dev);
 
-   hdev = bdev->hdev;
    sdio_set_drvdata(func, NULL);
 
    hci_unregister_dev(hdev);
    hci_free_dev(hdev);
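Both hunks lean on one guard: close() becomes a no-op unless BTMTKSDIO_FUNC_ENABLED is set, which lets remove() route through close() safely whether or not userspace already closed the device. A minimal sketch of the idempotent-teardown idiom (hypothetical foo_* driver, not the btmtksdio code):

static int foo_close(struct foo_dev *fdev)
{
    /* Never opened, or already closed: nothing to tear down. */
    if (!test_bit(FOO_FUNC_ENABLED, &fdev->state))
        return 0;

    foo_disable_irq(fdev);
    clear_bit(FOO_FUNC_ENABLED, &fdev->state);
    return 0;
}

static void foo_remove(struct foo_dev *fdev)
{
    foo_close(fdev);    /* safe to call unconditionally */
    foo_unregister(fdev);
}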


@@ -3010,22 +3010,16 @@ static void btusb_coredump_qca(struct hci_dev *hdev)
        bt_dev_err(hdev, "%s: triggle crash failed (%d)", __func__, err);
 }
 
-/*
- * ==0: not a dump pkt.
- * < 0: fails to handle a dump pkt
- * > 0: otherwise.
- */
+/* Return: 0 on success, negative errno on failure. */
 static int handle_dump_pkt_qca(struct hci_dev *hdev, struct sk_buff *skb)
 {
-   int ret = 1;
+   int ret = 0;
    u8 pkt_type;
    u8 *sk_ptr;
    unsigned int sk_len;
    u16 seqno;
    u32 dump_size;
 
-   struct hci_event_hdr *event_hdr;
-   struct hci_acl_hdr *acl_hdr;
    struct qca_dump_hdr *dump_hdr;
    struct btusb_data *btdata = hci_get_drvdata(hdev);
    struct usb_device *udev = btdata->udev;
@@ -3035,30 +3029,14 @@ static int handle_dump_pkt_qca(struct hci_dev *hdev, struct sk_buff *skb)
    sk_len = skb->len;
 
    if (pkt_type == HCI_ACLDATA_PKT) {
-       acl_hdr = hci_acl_hdr(skb);
-       if (le16_to_cpu(acl_hdr->handle) != QCA_MEMDUMP_ACL_HANDLE)
-           return 0;
        sk_ptr += HCI_ACL_HDR_SIZE;
        sk_len -= HCI_ACL_HDR_SIZE;
-       event_hdr = (struct hci_event_hdr *)sk_ptr;
-   } else {
-       event_hdr = hci_event_hdr(skb);
    }
 
-   if ((event_hdr->evt != HCI_VENDOR_PKT)
-       || (event_hdr->plen != (sk_len - HCI_EVENT_HDR_SIZE)))
-       return 0;
-
    sk_ptr += HCI_EVENT_HDR_SIZE;
    sk_len -= HCI_EVENT_HDR_SIZE;
 
    dump_hdr = (struct qca_dump_hdr *)sk_ptr;
-   if ((sk_len < offsetof(struct qca_dump_hdr, data))
-       || (dump_hdr->vse_class != QCA_MEMDUMP_VSE_CLASS)
-       || (dump_hdr->msg_type != QCA_MEMDUMP_MSG_TYPE))
-       return 0;
-
-   /*it is dump pkt now*/
    seqno = le16_to_cpu(dump_hdr->seqno);
    if (seqno == 0) {
        set_bit(BTUSB_HW_SSR_ACTIVE, &btdata->flags);
@@ -3132,17 +3110,84 @@ out:
    return ret;
 }
 
+/* Return: true if the ACL packet is a dump packet, false otherwise. */
+static bool acl_pkt_is_dump_qca(struct hci_dev *hdev, struct sk_buff *skb)
+{
+   u8 *sk_ptr;
+   unsigned int sk_len;
+
+   struct hci_event_hdr *event_hdr;
+   struct hci_acl_hdr *acl_hdr;
+   struct qca_dump_hdr *dump_hdr;
+
+   sk_ptr = skb->data;
+   sk_len = skb->len;
+
+   acl_hdr = hci_acl_hdr(skb);
+   if (le16_to_cpu(acl_hdr->handle) != QCA_MEMDUMP_ACL_HANDLE)
+       return false;
+
+   sk_ptr += HCI_ACL_HDR_SIZE;
+   sk_len -= HCI_ACL_HDR_SIZE;
+   event_hdr = (struct hci_event_hdr *)sk_ptr;
+
+   if ((event_hdr->evt != HCI_VENDOR_PKT) ||
+       (event_hdr->plen != (sk_len - HCI_EVENT_HDR_SIZE)))
+       return false;
+
+   sk_ptr += HCI_EVENT_HDR_SIZE;
+   sk_len -= HCI_EVENT_HDR_SIZE;
+
+   dump_hdr = (struct qca_dump_hdr *)sk_ptr;
+   if ((sk_len < offsetof(struct qca_dump_hdr, data)) ||
+       (dump_hdr->vse_class != QCA_MEMDUMP_VSE_CLASS) ||
+       (dump_hdr->msg_type != QCA_MEMDUMP_MSG_TYPE))
+       return false;
+
+   return true;
+}
+
+/* Return: true if the event packet is a dump packet, false otherwise. */
+static bool evt_pkt_is_dump_qca(struct hci_dev *hdev, struct sk_buff *skb)
+{
+   u8 *sk_ptr;
+   unsigned int sk_len;
+
+   struct hci_event_hdr *event_hdr;
+   struct qca_dump_hdr *dump_hdr;
+
+   sk_ptr = skb->data;
+   sk_len = skb->len;
+
+   event_hdr = hci_event_hdr(skb);
+
+   if ((event_hdr->evt != HCI_VENDOR_PKT)
+       || (event_hdr->plen != (sk_len - HCI_EVENT_HDR_SIZE)))
+       return false;
+
+   sk_ptr += HCI_EVENT_HDR_SIZE;
+   sk_len -= HCI_EVENT_HDR_SIZE;
+
+   dump_hdr = (struct qca_dump_hdr *)sk_ptr;
+   if ((sk_len < offsetof(struct qca_dump_hdr, data)) ||
+       (dump_hdr->vse_class != QCA_MEMDUMP_VSE_CLASS) ||
+       (dump_hdr->msg_type != QCA_MEMDUMP_MSG_TYPE))
+       return false;
+
+   return true;
+}
+
 static int btusb_recv_acl_qca(struct hci_dev *hdev, struct sk_buff *skb)
 {
-   if (handle_dump_pkt_qca(hdev, skb))
-       return 0;
+   if (acl_pkt_is_dump_qca(hdev, skb))
+       return handle_dump_pkt_qca(hdev, skb);
    return hci_recv_frame(hdev, skb);
 }
 
 static int btusb_recv_evt_qca(struct hci_dev *hdev, struct sk_buff *skb)
 {
-   if (handle_dump_pkt_qca(hdev, skb))
-       return 0;
+   if (evt_pkt_is_dump_qca(hdev, skb))
+       return handle_dump_pkt_qca(hdev, skb);
    return hci_recv_frame(hdev, skb);
 }


@@ -1543,7 +1543,7 @@ static void vsc9959_tas_clock_adjust(struct ocelot *ocelot)
    struct tc_taprio_qopt_offload *taprio;
    struct ocelot_port *ocelot_port;
    struct timespec64 base_ts;
-   int port;
+   int i, port;
    u32 val;
 
    mutex_lock(&ocelot->fwd_domain_lock);
@@ -1575,6 +1575,9 @@ static void vsc9959_tas_clock_adjust(struct ocelot *ocelot)
               QSYS_PARAM_CFG_REG_3_BASE_TIME_SEC_MSB_M,
               QSYS_PARAM_CFG_REG_3);
 
+       for (i = 0; i < taprio->num_entries; i++)
+           vsc9959_tas_gcl_set(ocelot, i, &taprio->entries[i]);
+
        ocelot_rmw(ocelot, QSYS_TAS_PARAM_CFG_CTRL_CONFIG_CHANGE,
               QSYS_TAS_PARAM_CFG_CTRL_CONFIG_CHANGE,
               QSYS_TAS_PARAM_CFG_CTRL);


@@ -186,7 +186,6 @@ void pdsc_auxbus_dev_del(struct pdsc *cf, struct pdsc *pf,
    pds_client_unregister(pf, padev->client_id);
    auxiliary_device_delete(&padev->aux_dev);
    auxiliary_device_uninit(&padev->aux_dev);
-   padev->client_id = 0;
 
    *pd_ptr = NULL;
    mutex_unlock(&pf->config_lock);


@@ -373,8 +373,13 @@ static int xgbe_map_rx_buffer(struct xgbe_prv_data *pdata,
    }
 
    /* Set up the header page info */
-   xgbe_set_buffer_data(&rdata->rx.hdr, &ring->rx_hdr_pa,
-                XGBE_SKB_ALLOC_SIZE);
+   if (pdata->netdev->features & NETIF_F_RXCSUM) {
+       xgbe_set_buffer_data(&rdata->rx.hdr, &ring->rx_hdr_pa,
+                    XGBE_SKB_ALLOC_SIZE);
+   } else {
+       xgbe_set_buffer_data(&rdata->rx.hdr, &ring->rx_hdr_pa,
+                    pdata->rx_buf_size);
+   }
 
    /* Set up the buffer page info */
    xgbe_set_buffer_data(&rdata->rx.buf, &ring->rx_buf_pa,


@@ -320,6 +320,18 @@ static void xgbe_config_sph_mode(struct xgbe_prv_data *pdata)
    XGMAC_IOWRITE_BITS(pdata, MAC_RCR, HDSMS, XGBE_SPH_HDSMS_SIZE);
 }
 
+static void xgbe_disable_sph_mode(struct xgbe_prv_data *pdata)
+{
+   unsigned int i;
+
+   for (i = 0; i < pdata->channel_count; i++) {
+       if (!pdata->channel[i]->rx_ring)
+           break;
+
+       XGMAC_DMA_IOWRITE_BITS(pdata->channel[i], DMA_CH_CR, SPH, 0);
+   }
+}
+
 static int xgbe_write_rss_reg(struct xgbe_prv_data *pdata, unsigned int type,
                  unsigned int index, unsigned int val)
 {
@@ -3545,8 +3557,12 @@ static int xgbe_init(struct xgbe_prv_data *pdata)
    xgbe_config_tx_coalesce(pdata);
    xgbe_config_rx_buffer_size(pdata);
    xgbe_config_tso_mode(pdata);
-   xgbe_config_sph_mode(pdata);
-   xgbe_config_rss(pdata);
+
+   if (pdata->netdev->features & NETIF_F_RXCSUM) {
+       xgbe_config_sph_mode(pdata);
+       xgbe_config_rss(pdata);
+   }
+
    desc_if->wrapper_tx_desc_init(pdata);
    desc_if->wrapper_rx_desc_init(pdata);
    xgbe_enable_dma_interrupts(pdata);
@@ -3702,5 +3718,9 @@ void xgbe_init_function_ptrs_dev(struct xgbe_hw_if *hw_if)
    hw_if->disable_vxlan = xgbe_disable_vxlan;
    hw_if->set_vxlan_id = xgbe_set_vxlan_id;
 
+   /* For Split Header*/
+   hw_if->enable_sph = xgbe_config_sph_mode;
+   hw_if->disable_sph = xgbe_disable_sph_mode;
+
    DBGPR("<--xgbe_init_function_ptrs\n");
 }


@@ -2257,10 +2257,17 @@ static int xgbe_set_features(struct net_device *netdev,
    if (ret)
        return ret;
 
-   if ((features & NETIF_F_RXCSUM) && !rxcsum)
+   if ((features & NETIF_F_RXCSUM) && !rxcsum) {
+       hw_if->enable_sph(pdata);
+       hw_if->enable_vxlan(pdata);
        hw_if->enable_rx_csum(pdata);
-   else if (!(features & NETIF_F_RXCSUM) && rxcsum)
+       schedule_work(&pdata->restart_work);
+   } else if (!(features & NETIF_F_RXCSUM) && rxcsum) {
+       hw_if->disable_sph(pdata);
+       hw_if->disable_vxlan(pdata);
        hw_if->disable_rx_csum(pdata);
+       schedule_work(&pdata->restart_work);
+   }
 
    if ((features & NETIF_F_HW_VLAN_CTAG_RX) && !rxvlan)
        hw_if->enable_rx_vlan_stripping(pdata);


@@ -865,6 +865,10 @@ struct xgbe_hw_if {
    void (*enable_vxlan)(struct xgbe_prv_data *);
    void (*disable_vxlan)(struct xgbe_prv_data *);
    void (*set_vxlan_id)(struct xgbe_prv_data *);
+
+   /* For Split Header */
+   void (*enable_sph)(struct xgbe_prv_data *pdata);
+   void (*disable_sph)(struct xgbe_prv_data *pdata);
 };
 
 /* This structure represents implementation specific routines for an


@@ -2015,6 +2015,7 @@ static struct sk_buff *bnxt_rx_vlan(struct sk_buff *skb, u8 cmp_type,
    }
    return skb;
 vlan_err:
+   skb_mark_for_recycle(skb);
    dev_kfree_skb(skb);
    return NULL;
 }
@@ -3414,6 +3415,9 @@ static void bnxt_free_tx_skbs(struct bnxt *bp)
        bnxt_free_one_tx_ring_skbs(bp, txr, i);
    }
+
+   if (bp->ptp_cfg && !(bp->fw_cap & BNXT_FW_CAP_TX_TS_CMP))
+       bnxt_ptp_free_txts_skbs(bp->ptp_cfg);
 }
 
 static void bnxt_free_one_rx_ring(struct bnxt *bp, struct bnxt_rx_ring_info *rxr)
@@ -11599,6 +11603,9 @@ static void bnxt_init_napi(struct bnxt *bp)
        poll_fn = bnxt_poll_p5;
    else if (BNXT_CHIP_TYPE_NITRO_A0(bp))
        cp_nr_rings--;
+
+   set_bit(BNXT_STATE_NAPI_DISABLED, &bp->state);
+
    for (i = 0; i < cp_nr_rings; i++) {
        bnapi = bp->bnapi[i];
        netif_napi_add_config_locked(bp->dev, &bnapi->napi, poll_fn,
@@ -12318,12 +12325,15 @@ static int bnxt_hwrm_if_change(struct bnxt *bp, bool up)
 {
    struct hwrm_func_drv_if_change_output *resp;
    struct hwrm_func_drv_if_change_input *req;
-   bool fw_reset = !bp->irq_tbl;
    bool resc_reinit = false;
    bool caps_change = false;
    int rc, retry = 0;
+   bool fw_reset;
    u32 flags = 0;
 
+   fw_reset = (bp->fw_reset_state == BNXT_FW_RESET_STATE_ABORT);
+   bp->fw_reset_state = 0;
+
    if (!(bp->fw_cap & BNXT_FW_CAP_IF_CHANGE))
        return 0;
@@ -12392,13 +12402,8 @@ static int bnxt_hwrm_if_change(struct bnxt *bp, bool up)
                set_bit(BNXT_STATE_ABORT_ERR, &bp->state);
                return rc;
            }
+           /* IRQ will be initialized later in bnxt_request_irq()*/
            bnxt_clear_int_mode(bp);
-           rc = bnxt_init_int_mode(bp);
-           if (rc) {
-               clear_bit(BNXT_STATE_FW_RESET_DET, &bp->state);
-               netdev_err(bp->dev, "init int mode failed\n");
-               return rc;
-           }
        }
        rc = bnxt_cancel_reservations(bp, fw_reset);
    }
@@ -12797,8 +12802,6 @@ static int __bnxt_open_nic(struct bnxt *bp, bool irq_re_init, bool link_re_init)
    /* VF-reps may need to be re-opened after the PF is re-opened */
    if (BNXT_PF(bp))
        bnxt_vf_reps_open(bp);
-   if (bp->ptp_cfg && !(bp->fw_cap & BNXT_FW_CAP_TX_TS_CMP))
-       WRITE_ONCE(bp->ptp_cfg->tx_avail, BNXT_MAX_TX_TS);
    bnxt_ptp_init_rtc(bp, true);
    bnxt_ptp_cfg_tstamp_filters(bp);
    if (BNXT_SUPPORTS_MULTI_RSS_CTX(bp))
@@ -14833,7 +14836,7 @@ static void bnxt_fw_reset_abort(struct bnxt *bp, int rc)
    clear_bit(BNXT_STATE_IN_FW_RESET, &bp->state);
    if (bp->fw_reset_state != BNXT_FW_RESET_STATE_POLL_VF)
        bnxt_dl_health_fw_status_update(bp, false);
-   bp->fw_reset_state = 0;
+   bp->fw_reset_state = BNXT_FW_RESET_STATE_ABORT;
    netif_close(bp->dev);
 }
@@ -16003,8 +16006,8 @@ static void bnxt_remove_one(struct pci_dev *pdev)
 
    bnxt_rdma_aux_device_del(bp);
 
-   bnxt_ptp_clear(bp);
    unregister_netdev(dev);
+   bnxt_ptp_clear(bp);
 
    bnxt_rdma_aux_device_uninit(bp);
@@ -16931,10 +16934,9 @@ static pci_ers_result_t bnxt_io_slot_reset(struct pci_dev *pdev)
        if (!err)
            result = PCI_ERS_RESULT_RECOVERED;
 
+       /* IRQ will be initialized later in bnxt_io_resume */
        bnxt_ulp_irq_stop(bp);
        bnxt_clear_int_mode(bp);
-       err = bnxt_init_int_mode(bp);
-       bnxt_ulp_irq_restart(bp, err);
    }
 
 reset_exit:
@@ -16963,10 +16965,13 @@ static void bnxt_io_resume(struct pci_dev *pdev)
    err = bnxt_hwrm_func_qcaps(bp);
    if (!err) {
-       if (netif_running(netdev))
+       if (netif_running(netdev)) {
            err = bnxt_open(netdev);
-       else
+       } else {
            err = bnxt_reserve_rings(bp, true);
+           if (!err)
+               err = bnxt_init_int_mode(bp);
+       }
    }
 
    if (!err)


@@ -2614,6 +2614,7 @@ struct bnxt {
 #define BNXT_FW_RESET_STATE_POLL_FW        4
 #define BNXT_FW_RESET_STATE_OPENING        5
 #define BNXT_FW_RESET_STATE_POLL_FW_DOWN   6
+#define BNXT_FW_RESET_STATE_ABORT          7
 
    u16 fw_reset_min_dsecs;
 #define BNXT_DFLT_FW_RST_MIN_DSECS 20


@@ -110,20 +110,30 @@ static int bnxt_hwrm_dbg_dma_data(struct bnxt *bp, void *msg,
            }
        }
 
-       if (info->dest_buf) {
-           if ((info->seg_start + off + len) <=
-               BNXT_COREDUMP_BUF_LEN(info->buf_len)) {
-               memcpy(info->dest_buf + off, dma_buf, len);
-           } else {
-               rc = -ENOBUFS;
-               break;
-           }
-       }
-
        if (cmn_req->req_type ==
            cpu_to_le16(HWRM_DBG_COREDUMP_RETRIEVE))
            info->dest_buf_size += len;
 
+       if (info->dest_buf) {
+           if ((info->seg_start + off + len) <=
+               BNXT_COREDUMP_BUF_LEN(info->buf_len)) {
+               u16 copylen = min_t(u16, len,
+                           info->dest_buf_size - off);
+
+               memcpy(info->dest_buf + off, dma_buf, copylen);
+               if (copylen < len)
+                   break;
+           } else {
+               rc = -ENOBUFS;
+               if (cmn_req->req_type ==
+                   cpu_to_le16(HWRM_DBG_COREDUMP_LIST)) {
+                   kfree(info->dest_buf);
+                   info->dest_buf = NULL;
+               }
+               break;
+           }
+       }
+
        if (!(cmn_resp->flags & HWRM_DBG_CMN_FLAGS_MORE))
            break;


@@ -2062,6 +2062,17 @@ static int bnxt_get_regs_len(struct net_device *dev)
    return reg_len;
 }
 
+#define BNXT_PCIE_32B_ENTRY(start, end)                \
+   { offsetof(struct pcie_ctx_hw_stats, start),        \
+     offsetof(struct pcie_ctx_hw_stats, end) }
+
+static const struct {
+   u16 start;
+   u16 end;
+} bnxt_pcie_32b_entries[] = {
+   BNXT_PCIE_32B_ENTRY(pcie_ltssm_histogram[0], pcie_ltssm_histogram[3]),
+};
+
 static void bnxt_get_regs(struct net_device *dev, struct ethtool_regs *regs,
              void *_p)
 {
@@ -2094,12 +2105,27 @@ static void bnxt_get_regs(struct net_device *dev, struct ethtool_regs *regs,
    req->pcie_stat_host_addr = cpu_to_le64(hw_pcie_stats_addr);
    rc = hwrm_req_send(bp, req);
    if (!rc) {
-       __le64 *src = (__le64 *)hw_pcie_stats;
-       u64 *dst = (u64 *)(_p + BNXT_PXP_REG_LEN);
-       int i;
+       u8 *dst = (u8 *)(_p + BNXT_PXP_REG_LEN);
+       u8 *src = (u8 *)hw_pcie_stats;
+       int i, j;
 
-       for (i = 0; i < sizeof(*hw_pcie_stats) / sizeof(__le64); i++)
-           dst[i] = le64_to_cpu(src[i]);
+       for (i = 0, j = 0; i < sizeof(*hw_pcie_stats); ) {
+           if (i >= bnxt_pcie_32b_entries[j].start &&
+               i <= bnxt_pcie_32b_entries[j].end) {
+               u32 *dst32 = (u32 *)(dst + i);
+
+               *dst32 = le32_to_cpu(*(__le32 *)(src + i));
+               i += 4;
+               if (i > bnxt_pcie_32b_entries[j].end &&
+                   j < ARRAY_SIZE(bnxt_pcie_32b_entries) - 1)
+                   j++;
+           } else {
+               u64 *dst64 = (u64 *)(dst + i);
+
+               *dst64 = le64_to_cpu(*(__le64 *)(src + i));
+               i += 8;
+           }
+       }
    }
    hwrm_req_drop(bp, req);
 }
@@ -4991,6 +5017,7 @@ static void bnxt_self_test(struct net_device *dev, struct ethtool_test *etest,
    if (!bp->num_tests || !BNXT_PF(bp))
        return;
 
+   memset(buf, 0, sizeof(u64) * bp->num_tests);
    if (etest->flags & ETH_TEST_FL_OFFLINE &&
        bnxt_ulp_registered(bp->edev)) {
        etest->flags |= ETH_TEST_FL_FAILED;
@@ -4998,7 +5025,6 @@ static void bnxt_self_test(struct net_device *dev, struct ethtool_test *etest,
        return;
    }
 
-   memset(buf, 0, sizeof(u64) * bp->num_tests);
    if (!netif_running(dev)) {
        etest->flags |= ETH_TEST_FL_FAILED;
        return;


@@ -794,6 +794,27 @@ next_slot:
    return HZ;
 }
 
+void bnxt_ptp_free_txts_skbs(struct bnxt_ptp_cfg *ptp)
+{
+   struct bnxt_ptp_tx_req *txts_req;
+   u16 cons = ptp->txts_cons;
+
+   /* make sure ptp aux worker finished with
+    * possible BNXT_STATE_OPEN set
+    */
+   ptp_cancel_worker_sync(ptp->ptp_clock);
+
+   ptp->tx_avail = BNXT_MAX_TX_TS;
+   while (cons != ptp->txts_prod) {
+       txts_req = &ptp->txts_req[cons];
+       if (!IS_ERR_OR_NULL(txts_req->tx_skb))
+           dev_kfree_skb_any(txts_req->tx_skb);
+       cons = NEXT_TXTS(cons);
+   }
+   ptp->txts_cons = cons;
+   ptp_schedule_worker(ptp->ptp_clock, 0);
+}
+
 int bnxt_ptp_get_txts_prod(struct bnxt_ptp_cfg *ptp, u16 *prod)
 {
    spin_lock_bh(&ptp->ptp_tx_lock);
@@ -1105,7 +1126,6 @@ out:
 void bnxt_ptp_clear(struct bnxt *bp)
 {
    struct bnxt_ptp_cfg *ptp = bp->ptp_cfg;
-   int i;
 
    if (!ptp)
        return;
@@ -1117,12 +1137,5 @@ void bnxt_ptp_clear(struct bnxt *bp)
    kfree(ptp->ptp_info.pin_config);
    ptp->ptp_info.pin_config = NULL;
 
-   for (i = 0; i < BNXT_MAX_TX_TS; i++) {
-       if (ptp->txts_req[i].tx_skb) {
-           dev_kfree_skb_any(ptp->txts_req[i].tx_skb);
-           ptp->txts_req[i].tx_skb = NULL;
-       }
-   }
-
    bnxt_unmap_ptp_regs(bp);
 }


@@ -162,6 +162,7 @@ int bnxt_ptp_cfg_tstamp_filters(struct bnxt *bp);
 void bnxt_ptp_reapply_pps(struct bnxt *bp);
 int bnxt_hwtstamp_set(struct net_device *dev, struct ifreq *ifr);
 int bnxt_hwtstamp_get(struct net_device *dev, struct ifreq *ifr);
+void bnxt_ptp_free_txts_skbs(struct bnxt_ptp_cfg *ptp);
 int bnxt_ptp_get_txts_prod(struct bnxt_ptp_cfg *ptp, u16 *prod);
 void bnxt_get_tx_ts_p5(struct bnxt *bp, struct sk_buff *skb, u16 prod);
 int bnxt_get_rx_ts_p5(struct bnxt *bp, u64 *ts, u32 pkt_ts);


@@ -352,7 +352,7 @@ parse_eeprom (struct net_device *dev)
    eth_hw_addr_set(dev, psrom->mac_addr);
 
    if (np->chip_id == CHIP_IP1000A) {
-       np->led_mode = psrom->led_mode;
+       np->led_mode = le16_to_cpu(psrom->led_mode);
        return 0;
    }


@@ -335,7 +335,7 @@ typedef struct t_SROM {
    u16 sub_system_id;      /* 0x06 */
    u16 pci_base_1;         /* 0x08 (IP1000A only) */
    u16 pci_base_2;         /* 0x0a (IP1000A only) */
-   u16 led_mode;           /* 0x0c (IP1000A only) */
+   __le16 led_mode;        /* 0x0c (IP1000A only) */
    u16 reserved1[9];       /* 0x0e-0x1f */
    u8 mac_addr[6];         /* 0x20-0x25 */
    u8 reserved2[10];       /* 0x26-0x2f */


@@ -714,7 +714,12 @@ static int fec_enet_txq_submit_skb(struct fec_enet_priv_tx_q *txq,
    txq->bd.cur = bdp;
 
    /* Trigger transmission start */
-   writel(0, txq->bd.reg_desc_active);
+   if (!(fep->quirks & FEC_QUIRK_ERR007885) ||
+       !readl(txq->bd.reg_desc_active) ||
+       !readl(txq->bd.reg_desc_active) ||
+       !readl(txq->bd.reg_desc_active) ||
+       !readl(txq->bd.reg_desc_active))
+       writel(0, txq->bd.reg_desc_active);
 
    return 0;
 }
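As I read the hunk, the ERR007885 workaround skips the trigger write only when the quirk applies and reg_desc_active reads back busy on all four consecutive reads; the repeated reads guard against sampling the register in the narrow window where the hardware is clearing it. Condensed paraphrase of the condition (same logic as above, not new behavior):

bool must_trigger = !(fep->quirks & FEC_QUIRK_ERR007885) ||
                    !readl(txq->bd.reg_desc_active) ||  /* idle on any of  */
                    !readl(txq->bd.reg_desc_active) ||  /* four back-to-   */
                    !readl(txq->bd.reg_desc_active) ||  /* back reads ->   */
                    !readl(txq->bd.reg_desc_active);    /* safe to re-arm  */

if (must_trigger)
    writel(0, txq->bd.reg_desc_active);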


@@ -61,7 +61,7 @@ static struct hns3_dbg_cmd_info hns3_dbg_cmd[] = {
        .name = "tm_qset",
        .cmd = HNAE3_DBG_CMD_TM_QSET,
        .dentry = HNS3_DBG_DENTRY_TM,
-       .buf_len = HNS3_DBG_READ_LEN,
+       .buf_len = HNS3_DBG_READ_LEN_1MB,
        .init = hns3_dbg_common_file_init,
    },
    {


@@ -473,20 +473,14 @@ static void hns3_mask_vector_irq(struct hns3_enet_tqp_vector *tqp_vector,
    writel(mask_en, tqp_vector->mask_addr);
 }
 
-static void hns3_vector_enable(struct hns3_enet_tqp_vector *tqp_vector)
+static void hns3_irq_enable(struct hns3_enet_tqp_vector *tqp_vector)
 {
    napi_enable(&tqp_vector->napi);
    enable_irq(tqp_vector->vector_irq);
-
-   /* enable vector */
-   hns3_mask_vector_irq(tqp_vector, 1);
 }
 
-static void hns3_vector_disable(struct hns3_enet_tqp_vector *tqp_vector)
+static void hns3_irq_disable(struct hns3_enet_tqp_vector *tqp_vector)
 {
-   /* disable vector */
-   hns3_mask_vector_irq(tqp_vector, 0);
-
    disable_irq(tqp_vector->vector_irq);
    napi_disable(&tqp_vector->napi);
    cancel_work_sync(&tqp_vector->rx_group.dim.work);
@@ -707,11 +701,42 @@ static int hns3_set_rx_cpu_rmap(struct net_device *netdev)
    return 0;
 }
 
+static void hns3_enable_irqs_and_tqps(struct net_device *netdev)
+{
+   struct hns3_nic_priv *priv = netdev_priv(netdev);
+   struct hnae3_handle *h = priv->ae_handle;
+   u16 i;
+
+   for (i = 0; i < priv->vector_num; i++)
+       hns3_irq_enable(&priv->tqp_vector[i]);
+
+   for (i = 0; i < priv->vector_num; i++)
+       hns3_mask_vector_irq(&priv->tqp_vector[i], 1);
+
+   for (i = 0; i < h->kinfo.num_tqps; i++)
+       hns3_tqp_enable(h->kinfo.tqp[i]);
+}
+
+static void hns3_disable_irqs_and_tqps(struct net_device *netdev)
+{
+   struct hns3_nic_priv *priv = netdev_priv(netdev);
+   struct hnae3_handle *h = priv->ae_handle;
+   u16 i;
+
+   for (i = 0; i < h->kinfo.num_tqps; i++)
+       hns3_tqp_disable(h->kinfo.tqp[i]);
+
+   for (i = 0; i < priv->vector_num; i++)
+       hns3_mask_vector_irq(&priv->tqp_vector[i], 0);
+
+   for (i = 0; i < priv->vector_num; i++)
+       hns3_irq_disable(&priv->tqp_vector[i]);
+}
+
 static int hns3_nic_net_up(struct net_device *netdev)
 {
    struct hns3_nic_priv *priv = netdev_priv(netdev);
    struct hnae3_handle *h = priv->ae_handle;
-   int i, j;
    int ret;
 
    ret = hns3_nic_reset_all_ring(h);
@@ -720,23 +745,13 @@ static int hns3_nic_net_up(struct net_device *netdev)
 
    clear_bit(HNS3_NIC_STATE_DOWN, &priv->state);
 
-   /* enable the vectors */
-   for (i = 0; i < priv->vector_num; i++)
-       hns3_vector_enable(&priv->tqp_vector[i]);
-
-   /* enable rcb */
-   for (j = 0; j < h->kinfo.num_tqps; j++)
-       hns3_tqp_enable(h->kinfo.tqp[j]);
+   hns3_enable_irqs_and_tqps(netdev);
 
    /* start the ae_dev */
    ret = h->ae_algo->ops->start ? h->ae_algo->ops->start(h) : 0;
    if (ret) {
        set_bit(HNS3_NIC_STATE_DOWN, &priv->state);
-       while (j--)
-           hns3_tqp_disable(h->kinfo.tqp[j]);
-
-       for (j = i - 1; j >= 0; j--)
-           hns3_vector_disable(&priv->tqp_vector[j]);
+       hns3_disable_irqs_and_tqps(netdev);
    }
 
    return ret;
@@ -823,17 +838,9 @@ static void hns3_reset_tx_queue(struct hnae3_handle *h)
 static void hns3_nic_net_down(struct net_device *netdev)
 {
    struct hns3_nic_priv *priv = netdev_priv(netdev);
-   struct hnae3_handle *h = hns3_get_handle(netdev);
    const struct hnae3_ae_ops *ops;
-   int i;
 
-   /* disable vectors */
-   for (i = 0; i < priv->vector_num; i++)
-       hns3_vector_disable(&priv->tqp_vector[i]);
-
-   /* disable rcb */
-   for (i = 0; i < h->kinfo.num_tqps; i++)
-       hns3_tqp_disable(h->kinfo.tqp[i]);
+   hns3_disable_irqs_and_tqps(netdev);
 
    /* stop ae_dev */
    ops = priv->ae_handle->ae_algo->ops;
@@ -5864,8 +5871,6 @@ int hns3_set_channels(struct net_device *netdev,
 void hns3_external_lb_prepare(struct net_device *ndev, bool if_running)
 {
    struct hns3_nic_priv *priv = netdev_priv(ndev);
-   struct hnae3_handle *h = priv->ae_handle;
-   int i;
 
    if (!if_running)
        return;
@@ -5876,11 +5881,7 @@ void hns3_external_lb_prepare(struct net_device *ndev, bool if_running)
    netif_carrier_off(ndev);
    netif_tx_disable(ndev);
 
-   for (i = 0; i < priv->vector_num; i++)
-       hns3_vector_disable(&priv->tqp_vector[i]);
-
-   for (i = 0; i < h->kinfo.num_tqps; i++)
-       hns3_tqp_disable(h->kinfo.tqp[i]);
+   hns3_disable_irqs_and_tqps(ndev);
 
    /* delay ring buffer clearing to hns3_reset_notify_uninit_enet
     * during reset process, because driver may not be able
@@ -5896,7 +5897,6 @@ void hns3_external_lb_restore(struct net_device *ndev, bool if_running)
 {
    struct hns3_nic_priv *priv = netdev_priv(ndev);
    struct hnae3_handle *h = priv->ae_handle;
-   int i;
 
    if (!if_running)
        return;
@@ -5912,11 +5912,7 @@ void hns3_external_lb_restore(struct net_device *ndev, bool if_running)
 
    clear_bit(HNS3_NIC_STATE_DOWN, &priv->state);
 
-   for (i = 0; i < priv->vector_num; i++)
-       hns3_vector_enable(&priv->tqp_vector[i]);
-
-   for (i = 0; i < h->kinfo.num_tqps; i++)
-       hns3_tqp_enable(h->kinfo.tqp[i]);
+   hns3_enable_irqs_and_tqps(ndev);
 
    netif_tx_wake_all_queues(ndev);


@@ -440,6 +440,13 @@ static int hclge_ptp_create_clock(struct hclge_dev *hdev)
    ptp->info.settime64 = hclge_ptp_settime;
 
    ptp->info.n_alarm = 0;
+
+   spin_lock_init(&ptp->lock);
+   ptp->io_base = hdev->hw.hw.io_base + HCLGE_PTP_REG_OFFSET;
+   ptp->ts_cfg.rx_filter = HWTSTAMP_FILTER_NONE;
+   ptp->ts_cfg.tx_type = HWTSTAMP_TX_OFF;
+   hdev->ptp = ptp;
+
    ptp->clock = ptp_clock_register(&ptp->info, &hdev->pdev->dev);
    if (IS_ERR(ptp->clock)) {
        dev_err(&hdev->pdev->dev,
@@ -451,12 +458,6 @@ static int hclge_ptp_create_clock(struct hclge_dev *hdev)
        return -ENODEV;
    }
 
-   spin_lock_init(&ptp->lock);
-   ptp->io_base = hdev->hw.hw.io_base + HCLGE_PTP_REG_OFFSET;
-   ptp->ts_cfg.rx_filter = HWTSTAMP_FILTER_NONE;
-   ptp->ts_cfg.tx_type = HWTSTAMP_TX_OFF;
-   hdev->ptp = ptp;
-
    return 0;
 }
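This hunk is purely an ordering fix: ptp_clock_register() can invoke the driver's PTP callbacks before it even returns, so the lock, register base, timestamp config, and the hdev->ptp back-pointer must all be live first. The general rule, as a sketch (hypothetical foo_* driver, not the hclge code):

static int foo_ptp_create_clock(struct foo_dev *fdev, struct foo_ptp *ptp)
{
    /* 1. Make the object fully usable by the callbacks... */
    spin_lock_init(&ptp->lock);
    ptp->io_base = fdev->io_base + FOO_PTP_REG_OFFSET;
    fdev->ptp = ptp;

    /* 2. ...then publish it to the PTP core, which may call back
     *    into the driver immediately (e.g. to read the clock).
     */
    ptp->clock = ptp_clock_register(&ptp->info, fdev->dev);
    if (IS_ERR(ptp->clock)) {
        fdev->ptp = NULL;
        return PTR_ERR(ptp->clock);
    }
    return 0;
}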


@@ -1292,9 +1292,8 @@ static void hclgevf_sync_vlan_filter(struct hclgevf_dev *hdev)
    rtnl_unlock();
 }
 
-static int hclgevf_en_hw_strip_rxvtag(struct hnae3_handle *handle, bool enable)
+static int hclgevf_en_hw_strip_rxvtag_cmd(struct hclgevf_dev *hdev, bool enable)
 {
-   struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
    struct hclge_vf_to_pf_msg send_msg;
 
    hclgevf_build_send_msg(&send_msg, HCLGE_MBX_SET_VLAN,
@@ -1303,6 +1302,19 @@ static int hclgevf_en_hw_strip_rxvtag(struct hnae3_handle *handle, bool enable)
    return hclgevf_send_mbx_msg(hdev, &send_msg, false, NULL, 0);
 }
 
+static int hclgevf_en_hw_strip_rxvtag(struct hnae3_handle *handle, bool enable)
+{
+   struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+   int ret;
+
+   ret = hclgevf_en_hw_strip_rxvtag_cmd(hdev, enable);
+   if (ret)
+       return ret;
+
+   hdev->rxvtag_strip_en = enable;
+   return 0;
+}
+
 static int hclgevf_reset_tqp(struct hnae3_handle *handle)
 {
 #define HCLGEVF_RESET_ALL_QUEUE_DONE 1U
@@ -2204,12 +2216,13 @@ static int hclgevf_rss_init_hw(struct hclgevf_dev *hdev)
                       tc_valid, tc_size);
 }
 
-static int hclgevf_init_vlan_config(struct hclgevf_dev *hdev)
+static int hclgevf_init_vlan_config(struct hclgevf_dev *hdev,
+                   bool rxvtag_strip_en)
 {
    struct hnae3_handle *nic = &hdev->nic;
    int ret;
 
-   ret = hclgevf_en_hw_strip_rxvtag(nic, true);
+   ret = hclgevf_en_hw_strip_rxvtag(nic, rxvtag_strip_en);
    if (ret) {
        dev_err(&hdev->pdev->dev,
            "failed to enable rx vlan offload, ret = %d\n", ret);
@@ -2879,7 +2892,7 @@ static int hclgevf_reset_hdev(struct hclgevf_dev *hdev)
    if (ret)
        return ret;
 
-   ret = hclgevf_init_vlan_config(hdev);
+   ret = hclgevf_init_vlan_config(hdev, hdev->rxvtag_strip_en);
    if (ret) {
        dev_err(&hdev->pdev->dev,
            "failed(%d) to initialize VLAN config\n", ret);
@@ -2994,7 +3007,7 @@ static int hclgevf_init_hdev(struct hclgevf_dev *hdev)
        goto err_config;
    }
 
-   ret = hclgevf_init_vlan_config(hdev);
+   ret = hclgevf_init_vlan_config(hdev, true);
    if (ret) {
        dev_err(&hdev->pdev->dev,
            "failed(%d) to initialize VLAN config\n", ret);


@@ -253,6 +253,7 @@ struct hclgevf_dev {
    int *vector_irq;
 
    bool gro_en;
+   bool rxvtag_strip_en;
 
    unsigned long vlan_del_fail_bmap[BITS_TO_LONGS(VLAN_N_VID)];


@@ -2345,15 +2345,15 @@ ice_get_set_tx_topo(struct ice_hw *hw, u8 *buf, u16 buf_size,
        cmd->set_flags |= ICE_AQC_TX_TOPO_FLAGS_SRC_RAM |
                  ICE_AQC_TX_TOPO_FLAGS_LOAD_NEW;
 
-       if (hw->mac_type == ICE_MAC_GENERIC_3K_E825)
-           desc.flags |= cpu_to_le16(ICE_AQ_FLAG_RD);
+       desc.flags |= cpu_to_le16(ICE_AQ_FLAG_RD);
    } else {
        ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_tx_topo);
        cmd->get_flags = ICE_AQC_TX_TOPO_GET_RAM;
-   }
 
-   if (hw->mac_type != ICE_MAC_GENERIC_3K_E825)
-       desc.flags |= cpu_to_le16(ICE_AQ_FLAG_RD);
+       if (hw->mac_type == ICE_MAC_E810 ||
+           hw->mac_type == ICE_MAC_GENERIC)
+           desc.flags |= cpu_to_le16(ICE_AQ_FLAG_RD);
+   }
 
    status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
    if (status)


@@ -2097,6 +2097,11 @@ int ice_vc_add_fdir_fltr(struct ice_vf *vf, u8 *msg)
    pf = vf->pf;
    dev = ice_pf_to_dev(pf);
    vf_vsi = ice_get_vf_vsi(vf);
+   if (!vf_vsi) {
+       dev_err(dev, "Can not get FDIR vf_vsi for VF %u\n", vf->vf_id);
+       v_ret = VIRTCHNL_STATUS_ERR_PARAM;
+       goto err_exit;
+   }
 
 #define ICE_VF_MAX_FDIR_FILTERS    128
    if (!ice_fdir_num_avail_fltr(&pf->hw, vf_vsi) ||


@@ -629,13 +629,13 @@ bool idpf_is_capability_ena(struct idpf_adapter *adapter, bool all,
    VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V4 |\
    VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V6)
 
-#define IDPF_CAP_RX_CSUM_L4V4 (\
-   VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_TCP |\
-   VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_UDP)
+#define IDPF_CAP_TX_CSUM_L4V4 (\
+   VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_TCP |\
+   VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_UDP)
 
-#define IDPF_CAP_RX_CSUM_L4V6 (\
-   VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP |\
-   VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP)
+#define IDPF_CAP_TX_CSUM_L4V6 (\
+   VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_TCP |\
+   VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_UDP)
 
 #define IDPF_CAP_RX_CSUM (\
    VIRTCHNL2_CAP_RX_CSUM_L3_IPV4 |\
@@ -644,11 +644,9 @@ bool idpf_is_capability_ena(struct idpf_adapter *adapter, bool all,
    VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP |\
    VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP)
 
-#define IDPF_CAP_SCTP_CSUM (\
+#define IDPF_CAP_TX_SCTP_CSUM (\
    VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_SCTP |\
-   VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_SCTP |\
-   VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_SCTP |\
-   VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_SCTP)
+   VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_SCTP)
 
 #define IDPF_CAP_TUNNEL_TX_CSUM (\
    VIRTCHNL2_CAP_TX_CSUM_L3_SINGLE_TUNNEL |\


@@ -703,8 +703,10 @@ static int idpf_cfg_netdev(struct idpf_vport *vport)
 {
    struct idpf_adapter *adapter = vport->adapter;
    struct idpf_vport_config *vport_config;
+   netdev_features_t other_offloads = 0;
+   netdev_features_t csum_offloads = 0;
+   netdev_features_t tso_offloads = 0;
    netdev_features_t dflt_features;
-   netdev_features_t offloads = 0;
    struct idpf_netdev_priv *np;
    struct net_device *netdev;
    u16 idx = vport->idx;
@@ -766,53 +768,32 @@ static int idpf_cfg_netdev(struct idpf_vport *vport)
    if (idpf_is_cap_ena_all(adapter, IDPF_RSS_CAPS, IDPF_CAP_RSS))
        dflt_features |= NETIF_F_RXHASH;
-   if (idpf_is_cap_ena_all(adapter, IDPF_CSUM_CAPS, IDPF_CAP_RX_CSUM_L4V4))
-       dflt_features |= NETIF_F_IP_CSUM;
-   if (idpf_is_cap_ena_all(adapter, IDPF_CSUM_CAPS, IDPF_CAP_RX_CSUM_L4V6))
-       dflt_features |= NETIF_F_IPV6_CSUM;
+   if (idpf_is_cap_ena_all(adapter, IDPF_CSUM_CAPS, IDPF_CAP_TX_CSUM_L4V4))
+       csum_offloads |= NETIF_F_IP_CSUM;
+   if (idpf_is_cap_ena_all(adapter, IDPF_CSUM_CAPS, IDPF_CAP_TX_CSUM_L4V6))
+       csum_offloads |= NETIF_F_IPV6_CSUM;
    if (idpf_is_cap_ena(adapter, IDPF_CSUM_CAPS, IDPF_CAP_RX_CSUM))
-       dflt_features |= NETIF_F_RXCSUM;
-   if (idpf_is_cap_ena_all(adapter, IDPF_CSUM_CAPS, IDPF_CAP_SCTP_CSUM))
-       dflt_features |= NETIF_F_SCTP_CRC;
+       csum_offloads |= NETIF_F_RXCSUM;
+   if (idpf_is_cap_ena_all(adapter, IDPF_CSUM_CAPS, IDPF_CAP_TX_SCTP_CSUM))
+       csum_offloads |= NETIF_F_SCTP_CRC;
 
    if (idpf_is_cap_ena(adapter, IDPF_SEG_CAPS, VIRTCHNL2_CAP_SEG_IPV4_TCP))
-       dflt_features |= NETIF_F_TSO;
+       tso_offloads |= NETIF_F_TSO;
    if (idpf_is_cap_ena(adapter, IDPF_SEG_CAPS, VIRTCHNL2_CAP_SEG_IPV6_TCP))
-       dflt_features |= NETIF_F_TSO6;
+       tso_offloads |= NETIF_F_TSO6;
    if (idpf_is_cap_ena_all(adapter, IDPF_SEG_CAPS,
                VIRTCHNL2_CAP_SEG_IPV4_UDP |
                VIRTCHNL2_CAP_SEG_IPV6_UDP))
-       dflt_features |= NETIF_F_GSO_UDP_L4;
+       tso_offloads |= NETIF_F_GSO_UDP_L4;
    if (idpf_is_cap_ena_all(adapter, IDPF_RSC_CAPS, IDPF_CAP_RSC))
-       offloads |= NETIF_F_GRO_HW;
-   /* advertise to stack only if offloads for encapsulated packets is
-    * supported
-    */
-   if (idpf_is_cap_ena(vport->adapter, IDPF_SEG_CAPS,
-               VIRTCHNL2_CAP_SEG_TX_SINGLE_TUNNEL)) {
-       offloads |= NETIF_F_GSO_UDP_TUNNEL |
-               NETIF_F_GSO_GRE |
-               NETIF_F_GSO_GRE_CSUM |
-               NETIF_F_GSO_PARTIAL |
-               NETIF_F_GSO_UDP_TUNNEL_CSUM |
-               NETIF_F_GSO_IPXIP4 |
-               NETIF_F_GSO_IPXIP6 |
-               0;
-
-       if (!idpf_is_cap_ena_all(vport->adapter, IDPF_CSUM_CAPS,
-                    IDPF_CAP_TUNNEL_TX_CSUM))
-           netdev->gso_partial_features |=
-               NETIF_F_GSO_UDP_TUNNEL_CSUM;
-
-       netdev->gso_partial_features |= NETIF_F_GSO_GRE_CSUM;
-       offloads |= NETIF_F_TSO_MANGLEID;
-   }
+       other_offloads |= NETIF_F_GRO_HW;
    if (idpf_is_cap_ena(adapter, IDPF_OTHER_CAPS, VIRTCHNL2_CAP_LOOPBACK))
-       offloads |= NETIF_F_LOOPBACK;
+       other_offloads |= NETIF_F_LOOPBACK;
 
-   netdev->features |= dflt_features;
-   netdev->hw_features |= dflt_features | offloads;
-   netdev->hw_enc_features |= dflt_features | offloads;
+   netdev->features |= dflt_features | csum_offloads | tso_offloads;
+   netdev->hw_features |= netdev->features | other_offloads;
+   netdev->vlan_features |= netdev->features | other_offloads;
+   netdev->hw_enc_features |= dflt_features | other_offloads;
    idpf_set_ethtool_ops(netdev);
    netif_set_affinity_auto(netdev);
    SET_NETDEV_DEV(netdev, &adapter->pdev->dev);
@@ -1132,11 +1113,9 @@ static struct idpf_vport *idpf_vport_alloc(struct idpf_adapter *adapter,
    num_max_q = max(max_q->max_txq, max_q->max_rxq);
    vport->q_vector_idxs = kcalloc(num_max_q, sizeof(u16), GFP_KERNEL);
-   if (!vport->q_vector_idxs) {
-       kfree(vport);
-       return NULL;
-   }
+   if (!vport->q_vector_idxs)
+       goto free_vport;
 
    idpf_vport_init(vport, max_q);
 
    /* This alloc is done separate from the LUT because it's not strictly
@@ -1146,11 +1125,9 @@ static struct idpf_vport *idpf_vport_alloc(struct idpf_adapter *adapter,
     */
    rss_data = &adapter->vport_config[idx]->user_config.rss_data;
    rss_data->rss_key = kzalloc(rss_data->rss_key_size, GFP_KERNEL);
-   if (!rss_data->rss_key) {
-       kfree(vport);
-       return NULL;
-   }
+   if (!rss_data->rss_key)
+       goto free_vector_idxs;
 
    /* Initialize default rss key */
    netdev_rss_key_fill((void *)rss_data->rss_key, rss_data->rss_key_size);
@@ -1163,6 +1140,13 @@ static struct idpf_vport *idpf_vport_alloc(struct idpf_adapter *adapter,
    adapter->next_vport = idpf_get_free_slot(adapter);
 
    return vport;
+
+free_vector_idxs:
+   kfree(vport->q_vector_idxs);
+free_vport:
+   kfree(vport);
+
+   return NULL;
 }
 
 /**


@@ -89,6 +89,7 @@ static void idpf_shutdown(struct pci_dev *pdev)
 {
    struct idpf_adapter *adapter = pci_get_drvdata(pdev);
 
+   cancel_delayed_work_sync(&adapter->serv_task);
    cancel_delayed_work_sync(&adapter->vc_event_task);
    idpf_vc_core_deinit(adapter);
    idpf_deinit_dflt_mbx(adapter);


@@ -1290,6 +1290,8 @@ void igc_ptp_reset(struct igc_adapter *adapter)
    /* reset the tstamp_config */
    igc_ptp_set_timestamp_mode(adapter, &adapter->tstamp_config);
 
+   mutex_lock(&adapter->ptm_lock);
+
    spin_lock_irqsave(&adapter->tmreg_lock, flags);
 
    switch (adapter->hw.mac.type) {
@@ -1308,7 +1310,6 @@ void igc_ptp_reset(struct igc_adapter *adapter)
        if (!igc_is_crosststamp_supported(adapter))
            break;
 
-       mutex_lock(&adapter->ptm_lock);
        wr32(IGC_PCIE_DIG_DELAY, IGC_PCIE_DIG_DELAY_DEFAULT);
        wr32(IGC_PCIE_PHY_DELAY, IGC_PCIE_PHY_DELAY_DEFAULT);
 
@@ -1332,7 +1333,6 @@ void igc_ptp_reset(struct igc_adapter *adapter)
            netdev_err(adapter->netdev, "Timeout reading IGC_PTM_STAT register\n");
 
        igc_ptm_reset(hw);
-       mutex_unlock(&adapter->ptm_lock);
        break;
    default:
        /* No work to do. */
@@ -1349,5 +1349,7 @@ void igc_ptp_reset(struct igc_adapter *adapter)
 out:
    spin_unlock_irqrestore(&adapter->tmreg_lock, flags);
 
+   mutex_unlock(&adapter->ptm_lock);
+
    wrfl();
 }
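This is the "fix to a fix" from the summary: ptm_lock is a mutex, and a mutex can sleep, so it must not be taken inside the tmreg_lock spin_lock_irqsave() region as the earlier fix did on one path; the hunks hoist mutex_lock() above the spinlock and drop it only after the spinlock is released, giving every path one consistent acquire order. The rule in miniature (generic sketch, not the igc code):

static void foo_reset(struct foo_adapter *ad)
{
    unsigned long flags;

    mutex_lock(&ad->slow_lock);              /* A: may sleep      */
    spin_lock_irqsave(&ad->reg_lock, flags); /* B: atomic context */

    /* ... program hardware registers ... */

    spin_unlock_irqrestore(&ad->reg_lock, flags);
    mutex_unlock(&ad->slow_lock);            /* release in reverse */
}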


@@ -1223,7 +1223,7 @@ static void octep_hb_timeout_task(struct work_struct *work)
         miss_cnt);
    rtnl_lock();
    if (netif_running(oct->netdev))
-       octep_stop(oct->netdev);
+       dev_close(oct->netdev);
    rtnl_unlock();
 }


@@ -835,7 +835,9 @@ static void octep_vf_tx_timeout(struct net_device *netdev, unsigned int txqueue)
    struct octep_vf_device *oct = netdev_priv(netdev);
 
    netdev_hold(netdev, NULL, GFP_ATOMIC);
-   schedule_work(&oct->tx_timeout_task);
+   if (!schedule_work(&oct->tx_timeout_task))
+       netdev_put(netdev, NULL);
 }
 
 static int octep_vf_set_mac(struct net_device *netdev, void *p)
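schedule_work() returns false when the work item was already pending; in that case this call queued nothing, so the reference just taken for the worker would never be dropped and the netdev refcount would leak. Sketch of the other half of the pairing, assuming (not shown in the hunk) that the worker drops the reference when it finishes:

/* Worker side: consume the reference taken by the caller
 * (hypothetical foo_* names).
 */
static void foo_tx_timeout_task(struct work_struct *work)
{
    struct foo_vf_device *oct =
        container_of(work, struct foo_vf_device, tx_timeout_task);

    /* ... recover the TX queue ... */

    netdev_put(oct->netdev, NULL);
}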


@@ -269,12 +269,8 @@ static const char * const mtk_clks_source_name[] = {
    "ethwarp_wocpu2",
    "ethwarp_wocpu1",
    "ethwarp_wocpu0",
-   "top_usxgmii0_sel",
-   "top_usxgmii1_sel",
    "top_sgm0_sel",
    "top_sgm1_sel",
-   "top_xfi_phy0_xtal_sel",
-   "top_xfi_phy1_xtal_sel",
    "top_eth_gmii_sel",
    "top_eth_refck_50m_sel",
    "top_eth_sys_200m_sel",
@@ -2252,14 +2248,18 @@ skip_rx:
        ring->data[idx] = new_data;
        rxd->rxd1 = (unsigned int)dma_addr;
 release_desc:
+       if (MTK_HAS_CAPS(eth->soc->caps, MTK_36BIT_DMA)) {
+           if (unlikely(dma_addr == DMA_MAPPING_ERROR))
+               addr64 = FIELD_GET(RX_DMA_ADDR64_MASK,
+                          rxd->rxd2);
+           else
+               addr64 = RX_DMA_PREP_ADDR64(dma_addr);
+       }
+
        if (MTK_HAS_CAPS(eth->soc->caps, MTK_SOC_MT7628))
            rxd->rxd2 = RX_DMA_LSO;
        else
-           rxd->rxd2 = RX_DMA_PREP_PLEN0(ring->buf_size);
-
-       if (MTK_HAS_CAPS(eth->soc->caps, MTK_36BIT_DMA) &&
-           likely(dma_addr != DMA_MAPPING_ERROR))
-           rxd->rxd2 |= RX_DMA_PREP_ADDR64(dma_addr);
+           rxd->rxd2 = RX_DMA_PREP_PLEN0(ring->buf_size) | addr64;
 
        ring->calc_idx = idx;
        done++;


@@ -1163,6 +1163,7 @@ static int mtk_star_tx_poll(struct napi_struct *napi, int budget)
 	struct net_device *ndev = priv->ndev;
 	unsigned int head = ring->head;
 	unsigned int entry = ring->tail;
+	unsigned long flags;
 
 	while (entry != head && count < (MTK_STAR_RING_NUM_DESCS - 1)) {
 		ret = mtk_star_tx_complete_one(priv);
@@ -1182,9 +1183,9 @@ static int mtk_star_tx_poll(struct napi_struct *napi, int budget)
 		netif_wake_queue(ndev);
 
 	if (napi_complete(napi)) {
-		spin_lock(&priv->lock);
+		spin_lock_irqsave(&priv->lock, flags);
 		mtk_star_enable_dma_irq(priv, false, true);
-		spin_unlock(&priv->lock);
+		spin_unlock_irqrestore(&priv->lock, flags);
 	}
 
 	return 0;
@@ -1341,16 +1342,16 @@ push_new_skb:
 static int mtk_star_rx_poll(struct napi_struct *napi, int budget)
 {
 	struct mtk_star_priv *priv;
+	unsigned long flags;
 	int work_done = 0;
 
 	priv = container_of(napi, struct mtk_star_priv, rx_napi);
 
 	work_done = mtk_star_rx(priv, budget);
-	if (work_done < budget) {
-		napi_complete_done(napi, work_done);
-		spin_lock(&priv->lock);
+	if (work_done < budget && napi_complete_done(napi, work_done)) {
+		spin_lock_irqsave(&priv->lock, flags);
 		mtk_star_enable_dma_irq(priv, true, false);
-		spin_unlock(&priv->lock);
+		spin_unlock_irqrestore(&priv->lock, flags);
 	}
 
 	return work_done;
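
Both hunks apply the same two rules: priv->lock is also taken from the
interrupt handler, so the poll path has to use the irqsave variants to
avoid spinlock recursion when the IRQ fires on the same CPU, and the
hardware interrupt is re-armed only when NAPI completion is actually
accepted. A condensed sketch with hypothetical mydev_* names, not the
driver's own code:

static int mydev_rx_poll(struct napi_struct *napi, int budget)
{
	struct mydev_priv *priv = container_of(napi, struct mydev_priv,
					       rx_napi);
	int work_done = mydev_rx(priv, budget);
	unsigned long flags;

	if (work_done < budget && napi_complete_done(napi, work_done)) {
		/* The device IRQ handler takes this lock too. */
		spin_lock_irqsave(&priv->lock, flags);
		mydev_enable_rx_irq(priv);
		spin_unlock_irqrestore(&priv->lock, flags);
	}
	return work_done;
}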


@@ -176,6 +176,7 @@ static int mlx5e_tx_reporter_ptpsq_unhealthy_recover(void *ctx)
 
 	priv = ptpsq->txqsq.priv;
 
+	rtnl_lock();
 	mutex_lock(&priv->state_lock);
 	chs = &priv->channels;
 	netdev = priv->netdev;
@@ -183,22 +184,19 @@ static int mlx5e_tx_reporter_ptpsq_unhealthy_recover(void *ctx)
 	carrier_ok = netif_carrier_ok(netdev);
 	netif_carrier_off(netdev);
 
-	rtnl_lock();
 	mlx5e_deactivate_priv_channels(priv);
-	rtnl_unlock();
 
 	mlx5e_ptp_close(chs->ptp);
 	err = mlx5e_ptp_open(priv, &chs->params, chs->c[0]->lag_port, &chs->ptp);
 
-	rtnl_lock();
 	mlx5e_activate_priv_channels(priv);
-	rtnl_unlock();
 
 	/* return carrier back if needed */
 	if (carrier_ok)
 		netif_carrier_on(netdev);
 
 	mutex_unlock(&priv->state_lock);
+	rtnl_unlock();
 
 	return err;
 }
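
The recover path previously took and dropped the RTNL twice inside the
state_lock critical section; the fix acquires it once, before the mutex,
and holds it across the whole sequence. Keeping a single outer-to-inner
order (rtnl_lock, then the driver mutex) is what rules out AB-BA
deadlocks against callers that enter with the RTNL already held. The
resulting shape, with hypothetical helper names:

static int my_recover(struct my_priv *priv)
{
	int err;

	rtnl_lock();			/* outer lock first */
	mutex_lock(&priv->state_lock);	/* then the driver mutex */
	my_deactivate_channels(priv);
	err = my_reopen_ptp(priv);
	my_activate_channels(priv);
	mutex_unlock(&priv->state_lock);
	rtnl_unlock();
	return err;
}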


@@ -165,9 +165,6 @@ static int mlx5e_tc_tun_parse_vxlan(struct mlx5e_priv *priv,
 	struct flow_match_enc_keyid enc_keyid;
 	void *misc_c, *misc_v;
 
-	misc_c = MLX5_ADDR_OF(fte_match_param, spec->match_criteria, misc_parameters);
-	misc_v = MLX5_ADDR_OF(fte_match_param, spec->match_value, misc_parameters);
-
 	if (!flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_KEYID))
 		return 0;
@@ -182,6 +179,30 @@ static int mlx5e_tc_tun_parse_vxlan(struct mlx5e_priv *priv,
 		err = mlx5e_tc_tun_parse_vxlan_gbp_option(priv, spec, f);
 		if (err)
 			return err;
+
+		/* We can't mix custom tunnel headers with symbolic ones and we
+		 * don't have a symbolic field name for GBP, so we use custom
+		 * tunnel headers in this case. We need hardware support to
+		 * match on custom tunnel headers, but we already know it's
+		 * supported because the previous call successfully checked for
+		 * that.
+		 */
+		misc_c = MLX5_ADDR_OF(fte_match_param, spec->match_criteria,
+				      misc_parameters_5);
+		misc_v = MLX5_ADDR_OF(fte_match_param, spec->match_value,
+				      misc_parameters_5);
+
+		/* Shift by 8 to account for the reserved bits in the vxlan
+		 * header after the VNI.
+		 */
+		MLX5_SET(fte_match_set_misc5, misc_c, tunnel_header_1,
+			 be32_to_cpu(enc_keyid.mask->keyid) << 8);
+		MLX5_SET(fte_match_set_misc5, misc_v, tunnel_header_1,
+			 be32_to_cpu(enc_keyid.key->keyid) << 8);
+
+		spec->match_criteria_enable |= MLX5_MATCH_MISC_PARAMETERS_5;
+
+		return 0;
 	}
 
 	/* match on VNI is required */
@@ -195,6 +216,11 @@ static int mlx5e_tc_tun_parse_vxlan(struct mlx5e_priv *priv,
 		return -EOPNOTSUPP;
 	}
 
+	misc_c = MLX5_ADDR_OF(fte_match_param, spec->match_criteria,
+			      misc_parameters);
+	misc_v = MLX5_ADDR_OF(fte_match_param, spec->match_value,
+			      misc_parameters);
+
 	MLX5_SET(fte_match_set_misc, misc_c, vxlan_vni,
 		 be32_to_cpu(enc_keyid.mask->keyid));
 	MLX5_SET(fte_match_set_misc, misc_v, vxlan_vni,


@@ -1750,9 +1750,6 @@ extra_split_attr_dests_needed(struct mlx5e_tc_flow *flow, struct mlx5_flow_attr
 	    !list_is_first(&attr->list, &flow->attrs))
 		return 0;
 
-	if (flow_flag_test(flow, SLOW))
-		return 0;
-
 	esw_attr = attr->esw_attr;
 	if (!esw_attr->split_count ||
 	    esw_attr->split_count == esw_attr->out_count - 1)
@@ -1766,7 +1763,7 @@ extra_split_attr_dests_needed(struct mlx5e_tc_flow *flow, struct mlx5_flow_attr
 	for (i = esw_attr->split_count; i < esw_attr->out_count; i++) {
 		/* external dest with encap is considered as internal by firmware */
 		if (esw_attr->dests[i].vport == MLX5_VPORT_UPLINK &&
-		    !(esw_attr->dests[i].flags & MLX5_ESW_DEST_ENCAP_VALID))
+		    !(esw_attr->dests[i].flags & MLX5_ESW_DEST_ENCAP))
 			ext_dest = true;
 		else
 			int_dest = true;


@@ -3533,7 +3533,9 @@ int esw_offloads_enable(struct mlx5_eswitch *esw)
 	int err;
 
 	mutex_init(&esw->offloads.termtbl_mutex);
-	mlx5_rdma_enable_roce(esw->dev);
+	err = mlx5_rdma_enable_roce(esw->dev);
+	if (err)
+		goto err_roce;
 
 	err = mlx5_esw_host_number_init(esw);
 	if (err)
@@ -3594,6 +3596,7 @@ err_vport_metadata:
 	esw_offloads_metadata_uninit(esw);
 err_metadata:
 	mlx5_rdma_disable_roce(esw->dev);
+err_roce:
 	mutex_destroy(&esw->offloads.termtbl_mutex);
 	return err;
 }


@@ -118,8 +118,8 @@ static void mlx5_rdma_make_default_gid(struct mlx5_core_dev *dev, union ib_gid *
 
 static int mlx5_rdma_add_roce_addr(struct mlx5_core_dev *dev)
 {
+	u8 mac[ETH_ALEN] = {};
 	union ib_gid gid;
-	u8 mac[ETH_ALEN];
 
 	mlx5_rdma_make_default_gid(dev, &gid);
 	return mlx5_core_roce_gid_set(dev, 0,
@@ -140,17 +140,17 @@ void mlx5_rdma_disable_roce(struct mlx5_core_dev *dev)
 	mlx5_nic_vport_disable_roce(dev);
 }
 
-void mlx5_rdma_enable_roce(struct mlx5_core_dev *dev)
+int mlx5_rdma_enable_roce(struct mlx5_core_dev *dev)
 {
 	int err;
 
 	if (!MLX5_CAP_GEN(dev, roce))
-		return;
+		return 0;
 
 	err = mlx5_nic_vport_enable_roce(dev);
 	if (err) {
 		mlx5_core_err(dev, "Failed to enable RoCE: %d\n", err);
-		return;
+		return err;
 	}
 
 	err = mlx5_rdma_add_roce_addr(dev);
@@ -165,10 +165,11 @@ void mlx5_rdma_enable_roce(struct mlx5_core_dev *dev)
 		goto del_roce_addr;
 	}
 
-	return;
+	return err;
 
 del_roce_addr:
 	mlx5_rdma_del_roce_addr(dev);
 disable_roce:
 	mlx5_nic_vport_disable_roce(dev);
+	return err;
 }


@@ -8,12 +8,12 @@
 
 #ifdef CONFIG_MLX5_ESWITCH
 
-void mlx5_rdma_enable_roce(struct mlx5_core_dev *dev);
+int mlx5_rdma_enable_roce(struct mlx5_core_dev *dev);
 void mlx5_rdma_disable_roce(struct mlx5_core_dev *dev);
 
 #else /* CONFIG_MLX5_ESWITCH */
 
-static inline void mlx5_rdma_enable_roce(struct mlx5_core_dev *dev) {}
+static inline int mlx5_rdma_enable_roce(struct mlx5_core_dev *dev) { return 0; }
 static inline void mlx5_rdma_disable_roce(struct mlx5_core_dev *dev) {}
 
 #endif /* CONFIG_MLX5_ESWITCH */
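
Together with the eswitch hunk above, this is the usual shape for making
an init step fallible: the helper now returns the first error instead of
swallowing it, and the caller gains a matching unwind label so earlier
steps are torn down in reverse order. A generic sketch (hypothetical
names, not mlx5 code):

static int my_offloads_enable(struct my_dev *dev)
{
	int err;

	my_mutex_setup(dev);
	err = my_enable_roce(dev);	/* now fallible */
	if (err)
		goto err_roce;
	err = my_next_init_step(dev);
	if (err)
		goto err_step;
	return 0;

err_step:
	my_disable_roce(dev);
err_roce:
	my_mutex_teardown(dev);
	return err;
}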


@@ -1815,6 +1815,7 @@ static void lan743x_tx_frame_add_lso(struct lan743x_tx *tx,
 	if (nr_frags <= 0) {
 		tx->frame_data0 |= TX_DESC_DATA0_LS_;
 		tx->frame_data0 |= TX_DESC_DATA0_IOC_;
+		tx->frame_last = tx->frame_first;
 	}
 	tx_descriptor = &tx->ring_cpu_ptr[tx->frame_tail];
 	tx_descriptor->data0 = cpu_to_le32(tx->frame_data0);
@@ -1884,6 +1885,7 @@ static int lan743x_tx_frame_add_fragment(struct lan743x_tx *tx,
 		tx->frame_first = 0;
 		tx->frame_data0 = 0;
 		tx->frame_tail = 0;
+		tx->frame_last = 0;
 		return -ENOMEM;
 	}
 
@@ -1924,16 +1926,18 @@ static void lan743x_tx_frame_end(struct lan743x_tx *tx,
 	    TX_DESC_DATA0_DTYPE_DATA_) {
 		tx->frame_data0 |= TX_DESC_DATA0_LS_;
 		tx->frame_data0 |= TX_DESC_DATA0_IOC_;
+		tx->frame_last = tx->frame_tail;
 	}
 
-	tx_descriptor = &tx->ring_cpu_ptr[tx->frame_tail];
-	buffer_info = &tx->buffer_info[tx->frame_tail];
+	tx_descriptor = &tx->ring_cpu_ptr[tx->frame_last];
+	buffer_info = &tx->buffer_info[tx->frame_last];
 	buffer_info->skb = skb;
 	if (time_stamp)
 		buffer_info->flags |= TX_BUFFER_INFO_FLAG_TIMESTAMP_REQUESTED;
 	if (ignore_sync)
 		buffer_info->flags |= TX_BUFFER_INFO_FLAG_IGNORE_SYNC;
 
+	tx_descriptor = &tx->ring_cpu_ptr[tx->frame_tail];
 	tx_descriptor->data0 = cpu_to_le32(tx->frame_data0);
 	tx->frame_tail = lan743x_tx_next_index(tx, tx->frame_tail);
 	tx->last_tail = tx->frame_tail;


@@ -980,6 +980,7 @@ struct lan743x_tx {
 	u32 frame_first;
 	u32 frame_data0;
 	u32 frame_tail;
+	u32 frame_last;
 
 	struct lan743x_tx_buffer_info *buffer_info;


@@ -830,6 +830,7 @@ EXPORT_SYMBOL(ocelot_vlan_prepare);
 int ocelot_vlan_add(struct ocelot *ocelot, int port, u16 vid, bool pvid,
 		    bool untagged)
 {
+	struct ocelot_port *ocelot_port = ocelot->ports[port];
 	int err;
 
 	/* Ignore VID 0 added to our RX filter by the 8021q module, since
@@ -849,6 +850,11 @@ int ocelot_vlan_add(struct ocelot *ocelot, int port, u16 vid, bool pvid,
 					  ocelot_bridge_vlan_find(ocelot, vid));
 		if (err)
 			return err;
+	} else if (ocelot_port->pvid_vlan &&
+		   ocelot_bridge_vlan_find(ocelot, vid) == ocelot_port->pvid_vlan) {
+		err = ocelot_port_set_pvid(ocelot, port, NULL);
+		if (err)
+			return err;
 	}
 
 	/* Untagged egress vlan clasification */


@@ -1925,8 +1925,8 @@ static u16 rtase_calc_time_mitigation(u32 time_us)
 
 	time_us = min_t(int, time_us, RTASE_MITI_MAX_TIME);
 
-	msb = fls(time_us);
-	if (msb >= RTASE_MITI_COUNT_BIT_NUM) {
+	if (time_us > RTASE_MITI_TIME_COUNT_MASK) {
+		msb = fls(time_us);
 		time_unit = msb - RTASE_MITI_COUNT_BIT_NUM;
 		time_count = time_us >> (msb - RTASE_MITI_COUNT_BIT_NUM);
 	} else {
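
The mitigation register encodes a microsecond value as a mantissa
(time_count) plus a power-of-two scale (time_unit), roughly
time_us ~= time_count << time_unit, and after the fix the value is only
scaled when it no longer fits in the count field. A standalone model of
the encode step, assuming a 4-bit count field; the driver's real
constants may differ:

#include <stdio.h>

#define COUNT_BITS 4u
#define COUNT_MASK ((1u << COUNT_BITS) - 1)	/* 0xf */

static unsigned int fls32(unsigned int x)
{
	unsigned int r = 0;

	while (x) {	/* position of the most significant set bit */
		x >>= 1;
		r++;
	}
	return r;
}

static void encode(unsigned int time_us, unsigned int *unit,
		   unsigned int *count)
{
	if (time_us > COUNT_MASK) {
		unsigned int msb = fls32(time_us);

		*unit = msb - COUNT_BITS;
		*count = time_us >> *unit;	/* keep top COUNT_BITS bits */
	} else {
		*unit = 0;
		*count = time_us;	/* fits as-is, full precision */
	}
}

int main(void)
{
	unsigned int u, c;

	encode(10, &u, &c);	/* 10 <= 0xf: stored exactly */
	printf("10us  -> unit=%u count=%u\n", u, c);
	encode(500, &u, &c);	/* 500 -> 15 << 5 = 480, small rounding loss */
	printf("500us -> unit=%u count=%u\n", u, c);
	return 0;
}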


@@ -6,6 +6,7 @@
 
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
 
+#include <linux/if_vlan.h>
 #include <linux/interrupt.h>
 #include <linux/module.h>
 #include <linux/kernel.h>
@@ -33,7 +34,7 @@
 #define CMD_CTR		(0x2 << CMD_SHIFT)
 
 #define CMD_MASK	GENMASK(15, CMD_SHIFT)
-#define LEN_MASK	GENMASK(CMD_SHIFT - 1, 0)
+#define LEN_MASK	GENMASK(CMD_SHIFT - 2, 0)
 
 #define DET_CMD_LEN	4
 #define DET_SOF_LEN	2
@@ -262,7 +263,7 @@ static int mse102x_tx_frame_spi(struct mse102x_net *mse, struct sk_buff *txp,
 }
 
 static int mse102x_rx_frame_spi(struct mse102x_net *mse, u8 *buff,
-				unsigned int frame_len)
+				unsigned int frame_len, bool drop)
 {
 	struct mse102x_net_spi *mses = to_mse102x_spi(mse);
 	struct spi_transfer *xfer = &mses->spi_xfer;
@@ -280,6 +281,9 @@ static int mse102x_rx_frame_spi(struct mse102x_net *mse, u8 *buff,
 		netdev_err(mse->ndev, "%s: spi_sync() failed: %d\n",
 			   __func__, ret);
 		mse->stats.xfer_err++;
+	} else if (drop) {
+		netdev_dbg(mse->ndev, "%s: Drop frame\n", __func__);
+		ret = -EINVAL;
 	} else if (*sof != cpu_to_be16(DET_SOF)) {
 		netdev_dbg(mse->ndev, "%s: SPI start of frame is invalid (0x%04x)\n",
 			   __func__, *sof);
@@ -307,6 +311,7 @@ static void mse102x_rx_pkt_spi(struct mse102x_net *mse)
 	struct sk_buff *skb;
 	unsigned int rxalign;
 	unsigned int rxlen;
+	bool drop = false;
 	__be16 rx = 0;
 	u16 cmd_resp;
 	u8 *rxpkt;
@@ -329,7 +334,8 @@ static void mse102x_rx_pkt_spi(struct mse102x_net *mse)
 			net_dbg_ratelimited("%s: Unexpected response (0x%04x)\n",
 					    __func__, cmd_resp);
 			mse->stats.invalid_rts++;
-			return;
+			drop = true;
+			goto drop;
 		}
 
 		net_dbg_ratelimited("%s: Unexpected response to first CMD\n",
@@ -337,12 +343,20 @@ static void mse102x_rx_pkt_spi(struct mse102x_net *mse)
 	}
 
 	rxlen = cmd_resp & LEN_MASK;
-	if (!rxlen) {
-		net_dbg_ratelimited("%s: No frame length defined\n", __func__);
+	if (rxlen < ETH_ZLEN || rxlen > VLAN_ETH_FRAME_LEN) {
+		net_dbg_ratelimited("%s: Invalid frame length: %d\n", __func__,
+				    rxlen);
 		mse->stats.invalid_len++;
-		return;
+		drop = true;
 	}
 
+	/* In case of a invalid CMD_RTS, the frame must be consumed anyway.
+	 * So assume the maximum possible frame length.
+	 */
+drop:
+	if (drop)
+		rxlen = VLAN_ETH_FRAME_LEN;
+
 	rxalign = ALIGN(rxlen + DET_SOF_LEN + DET_DFT_LEN, 4);
 	skb = netdev_alloc_skb_ip_align(mse->ndev, rxalign);
 	if (!skb)
@@ -353,7 +367,7 @@ static void mse102x_rx_pkt_spi(struct mse102x_net *mse)
 	 * They are copied, but ignored.
 	 */
 	rxpkt = skb_put(skb, rxlen) - DET_SOF_LEN;
-	if (mse102x_rx_frame_spi(mse, rxpkt, rxlen)) {
+	if (mse102x_rx_frame_spi(mse, rxpkt, rxlen, drop)) {
 		mse->ndev->stats.rx_errors++;
 		dev_kfree_skb(skb);
 		return;
@@ -509,6 +523,7 @@ static irqreturn_t mse102x_irq(int irq, void *_mse)
 static int mse102x_net_open(struct net_device *ndev)
 {
 	struct mse102x_net *mse = netdev_priv(ndev);
+	struct mse102x_net_spi *mses = to_mse102x_spi(mse);
 	int ret;
 
 	ret = request_threaded_irq(ndev->irq, NULL, mse102x_irq, IRQF_ONESHOT,
@@ -524,6 +539,13 @@ static int mse102x_net_open(struct net_device *ndev)
 
 	netif_carrier_on(ndev);
 
+	/* The SPI interrupt can stuck in case of pending packet(s).
+	 * So poll for possible packet(s) to re-arm the interrupt.
+	 */
+	mutex_lock(&mses->lock);
+	mse102x_rx_pkt_spi(mse);
+	mutex_unlock(&mses->lock);
+
 	netif_dbg(mse, ifup, ndev, "network device up\n");
 
 	return 0;


@@ -17,6 +17,7 @@
 #define  REG2_LEDACT	GENMASK(23, 22)
 #define  REG2_LEDLINK	GENMASK(25, 24)
 #define  REG2_DIV4SEL	BIT(27)
+#define  REG2_REVERSED	BIT(28)
 #define  REG2_ADCBYPASS	BIT(30)
 #define  REG2_CLKINSEL	BIT(31)
 #define ETH_REG3	0x4
@@ -65,7 +66,7 @@ static void gxl_enable_internal_mdio(struct gxl_mdio_mux *priv)
 	 * The only constraint is that it must match the one in
 	 * drivers/net/phy/meson-gxl.c to properly match the PHY.
 	 */
-	writel(FIELD_PREP(REG2_PHYID, EPHY_GXL_ID),
+	writel(REG2_REVERSED | FIELD_PREP(REG2_PHYID, EPHY_GXL_ID),
 	       priv->regs + ETH_REG2);
 
 	/* Enable the internal phy */


@@ -630,16 +630,6 @@ static const struct driver_info	zte_rndis_info = {
 	.tx_fixup =	rndis_tx_fixup,
 };
 
-static const struct driver_info	wwan_rndis_info = {
-	.description =	"Mobile Broadband RNDIS device",
-	.flags =	FLAG_WWAN | FLAG_POINTTOPOINT | FLAG_FRAMING_RN | FLAG_NO_SETINT,
-	.bind =		rndis_bind,
-	.unbind =	rndis_unbind,
-	.status =	rndis_status,
-	.rx_fixup =	rndis_rx_fixup,
-	.tx_fixup =	rndis_tx_fixup,
-};
-
 /*-------------------------------------------------------------------------*/
 
 static const struct usb_device_id	products [] = {
@@ -676,11 +666,9 @@ static const struct usb_device_id	products [] = {
 	USB_INTERFACE_INFO(USB_CLASS_WIRELESS_CONTROLLER, 1, 3),
 	.driver_info = (unsigned long) &rndis_info,
 }, {
-	/* Mobile Broadband Modem, seen in Novatel Verizon USB730L and
-	 * Telit FN990A (RNDIS)
-	 */
+	/* Novatel Verizon USB730L */
 	USB_INTERFACE_INFO(USB_CLASS_MISC, 4, 1),
-	.driver_info = (unsigned long)&wwan_rndis_info,
+	.driver_info = (unsigned long) &rndis_info,
 },
 	{ },		// END
 };


@@ -397,7 +397,7 @@ vmxnet3_process_xdp(struct vmxnet3_adapter *adapter,
 
 	xdp_init_buff(&xdp, PAGE_SIZE, &rq->xdp_rxq);
 	xdp_prepare_buff(&xdp, page_address(page), rq->page_pool->p.offset,
-			 rbi->len, false);
+			 rcd->len, false);
 	xdp_buff_clear_frags_flag(&xdp);
 
 	xdp_prog = rcu_dereference(rq->adapter->xdp_bpf_prog);


@@ -627,7 +627,11 @@ static void vxlan_vni_delete_group(struct vxlan_dev *vxlan,
 	 * default dst remote_ip previously added for this vni
 	 */
 	if (!vxlan_addr_any(&vninode->remote_ip) ||
-	    !vxlan_addr_any(&dst->remote_ip))
+	    !vxlan_addr_any(&dst->remote_ip)) {
+		u32 hash_index = fdb_head_index(vxlan, all_zeros_mac,
+						vninode->vni);
+
+		spin_lock_bh(&vxlan->hash_lock[hash_index]);
 		__vxlan_fdb_delete(vxlan, all_zeros_mac,
 				   (vxlan_addr_any(&vninode->remote_ip) ?
 				    dst->remote_ip : vninode->remote_ip),
@@ -635,6 +639,8 @@ static void vxlan_vni_delete_group(struct vxlan_dev *vxlan,
 				   vninode->vni, vninode->vni,
 				   dst->remote_ifindex,
 				   true);
+		spin_unlock_bh(&vxlan->hash_lock[hash_index]);
+	}
 
 	if (vxlan->dev->flags & IFF_UP) {
 		if (vxlan_addr_multicast(&vninode->remote_ip) &&


@@ -896,14 +896,16 @@ brcmf_usb_dl_writeimage(struct brcmf_usbdev_info *devinfo, u8 *fw, int fwlen)
 	}
 
 	/* 1) Prepare USB boot loader for runtime image */
-	brcmf_usb_dl_cmd(devinfo, DL_START, &state, sizeof(state));
+	err = brcmf_usb_dl_cmd(devinfo, DL_START, &state, sizeof(state));
+	if (err)
+		goto fail;
 
 	rdlstate = le32_to_cpu(state.state);
 	rdlbytes = le32_to_cpu(state.bytes);
 
 	/* 2) Check we are in the Waiting state */
 	if (rdlstate != DL_WAITING) {
-		brcmf_err("Failed to DL_START\n");
+		brcmf_err("Invalid DL state: %u\n", rdlstate);
 		err = -EINVAL;
 		goto fail;
 	}


@@ -142,8 +142,6 @@ const struct iwl_cfg_trans_params iwl_sc_trans_cfg = {
 	.ltr_delay = IWL_CFG_TRANS_LTR_DELAY_2500US,
 };
 
-const char iwl_sp_name[] = "Intel(R) Wi-Fi 7 BE213 160MHz";
-
 const struct iwl_cfg iwl_cfg_sc = {
 	.fw_name_mac = "sc",
 	IWL_DEVICE_SC,


@@ -2,7 +2,7 @@
 /*
  * Copyright (C) 2005-2014, 2018-2021 Intel Corporation
  * Copyright (C) 2016-2017 Intel Deutschland GmbH
- * Copyright (C) 2018-2025 Intel Corporation
+ * Copyright (C) 2018-2024 Intel Corporation
 */
 #ifndef __IWL_CONFIG_H__
 #define __IWL_CONFIG_H__
@@ -451,8 +451,11 @@ struct iwl_cfg {
 #define IWL_CFG_RF_ID_HR	0x7
 #define IWL_CFG_RF_ID_HR1	0x4
 
-#define IWL_CFG_BW_NO_LIM	(U16_MAX - 1)
-#define IWL_CFG_BW_ANY		U16_MAX
+#define IWL_CFG_NO_160		0x1
+#define IWL_CFG_160		0x0
+
+#define IWL_CFG_NO_320		0x1
+#define IWL_CFG_320		0x0
 
 #define IWL_CFG_CORES_BT	0x0
 #define IWL_CFG_CORES_BT_GNSS	0x5
@@ -464,7 +467,7 @@ struct iwl_cfg {
 #define IWL_CFG_IS_JACKET	0x1
 
 #define IWL_SUBDEVICE_RF_ID(subdevice)	((u16)((subdevice) & 0x00F0) >> 4)
-#define IWL_SUBDEVICE_BW_LIM(subdevice)	((u16)((subdevice) & 0x0200) >> 9)
+#define IWL_SUBDEVICE_NO_160(subdevice)	((u16)((subdevice) & 0x0200) >> 9)
 #define IWL_SUBDEVICE_CORES(subdevice)	((u16)((subdevice) & 0x1C00) >> 10)
 
 struct iwl_dev_info {
@@ -472,10 +475,10 @@ struct iwl_dev_info {
 	u16 subdevice;
 	u16 mac_type;
 	u16 rf_type;
-	u16 bw_limit;
 	u8 mac_step;
 	u8 rf_step;
 	u8 rf_id;
+	u8 no_160;
 	u8 cores;
 	u8 cdb;
 	u8 jacket;
@@ -489,7 +492,7 @@ extern const unsigned int iwl_dev_info_table_size;
 const struct iwl_dev_info *
 iwl_pci_find_dev_info(u16 device, u16 subsystem_device,
 		      u16 mac_type, u8 mac_step, u16 rf_type, u8 cdb,
-		      u8 jacket, u8 rf_id, u8 bw_limit, u8 cores, u8 rf_step);
+		      u8 jacket, u8 rf_id, u8 no_160, u8 cores, u8 rf_step);
 extern const struct pci_device_id iwl_hw_card_ids[];
 #endif
@@ -550,7 +553,6 @@ extern const char iwl_ax231_name[];
 extern const char iwl_ax411_name[];
 extern const char iwl_fm_name[];
 extern const char iwl_wh_name[];
-extern const char iwl_sp_name[];
 extern const char iwl_gl_name[];
 extern const char iwl_mtp_name[];
 extern const char iwl_dr_name[];


@@ -148,6 +148,7 @@
  * during a error FW error.
  */
 #define CSR_FUNC_SCRATCH_INIT_VALUE	(0x01010101)
+#define CSR_FUNC_SCRATCH_POWER_OFF_MASK	0xFFFF
 
 /* Bits for CSR_HW_IF_CONFIG_REG */
 #define CSR_HW_IF_CONFIG_REG_MSK_MAC_STEP_DASH	(0x0000000F)


@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
 /*
- * Copyright (C) 2005-2014, 2018-2023, 2025 Intel Corporation
+ * Copyright (C) 2005-2014, 2018-2023 Intel Corporation
  * Copyright (C) 2013-2015 Intel Mobile Communications GmbH
  * Copyright (C) 2016-2017 Intel Deutschland GmbH
 */
@@ -944,8 +944,7 @@ iwl_nvm_fixup_sband_iftd(struct iwl_trans *trans,
 			 IEEE80211_EHT_MAC_CAP0_MAX_MPDU_LEN_MASK);
 		break;
 	case NL80211_BAND_6GHZ:
-		if (!trans->reduced_cap_sku &&
-		    trans->bw_limit >= 320) {
+		if (!trans->reduced_cap_sku) {
 			iftype_data->eht_cap.eht_cap_elem.phy_cap_info[0] |=
 				IEEE80211_EHT_PHY_CAP0_320MHZ_IN_6GHZ;
 			iftype_data->eht_cap.eht_cap_elem.phy_cap_info[1] |=
@@ -1095,22 +1094,19 @@ iwl_nvm_fixup_sband_iftd(struct iwl_trans *trans,
 			iftype_data->eht_cap.eht_mcs_nss_supp.bw._320.rx_tx_mcs13_max_nss = 0;
 	}
 
-	if (trans->bw_limit < 160)
+	if (trans->no_160)
 		iftype_data->he_cap.he_cap_elem.phy_cap_info[0] &=
 			~IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_160MHZ_IN_5G;
 
-	if (trans->bw_limit < 320 || trans->reduced_cap_sku) {
+	if (trans->reduced_cap_sku) {
 		memset(&iftype_data->eht_cap.eht_mcs_nss_supp.bw._320, 0,
 		       sizeof(iftype_data->eht_cap.eht_mcs_nss_supp.bw._320));
-		iftype_data->eht_cap.eht_cap_elem.phy_cap_info[2] &=
-			~IEEE80211_EHT_PHY_CAP2_SOUNDING_DIM_320MHZ_MASK;
-	}
-
-	if (trans->reduced_cap_sku) {
 		iftype_data->eht_cap.eht_mcs_nss_supp.bw._80.rx_tx_mcs13_max_nss = 0;
 		iftype_data->eht_cap.eht_mcs_nss_supp.bw._160.rx_tx_mcs13_max_nss = 0;
 		iftype_data->eht_cap.eht_cap_elem.phy_cap_info[8] &=
 			~IEEE80211_EHT_PHY_CAP8_RX_4096QAM_WIDER_BW_DL_OFDMA;
+		iftype_data->eht_cap.eht_cap_elem.phy_cap_info[2] &=
+			~IEEE80211_EHT_PHY_CAP2_SOUNDING_DIM_320MHZ_MASK;
 	}
 }


@@ -21,6 +21,7 @@ struct iwl_trans_dev_restart_data {
 	struct list_head list;
 	unsigned int restart_count;
 	time64_t last_error;
+	bool backoff;
 	char name[];
 };
 
@@ -125,13 +126,20 @@ iwl_trans_determine_restart_mode(struct iwl_trans *trans)
 	if (!data)
 		return at_least;
 
-	if (ktime_get_boottime_seconds() - data->last_error >=
+	if (!data->backoff &&
+	    ktime_get_boottime_seconds() - data->last_error >=
 	    IWL_TRANS_RESET_OK_TIME)
 		data->restart_count = 0;
 
 	index = data->restart_count;
-	if (index >= ARRAY_SIZE(escalation_list))
+	if (index >= ARRAY_SIZE(escalation_list)) {
 		index = ARRAY_SIZE(escalation_list) - 1;
+		if (!data->backoff) {
+			data->backoff = true;
+			return IWL_RESET_MODE_BACKOFF;
+		}
+		data->backoff = false;
+	}
 
 	return max(at_least, escalation_list[index]);
 }
@@ -140,7 +148,8 @@ iwl_trans_determine_restart_mode(struct iwl_trans *trans)
 
 static void iwl_trans_restart_wk(struct work_struct *wk)
 {
-	struct iwl_trans *trans = container_of(wk, typeof(*trans), restart.wk);
+	struct iwl_trans *trans = container_of(wk, typeof(*trans),
+					       restart.wk.work);
 	struct iwl_trans_reprobe *reprobe;
 	enum iwl_reset_mode mode;
@@ -168,6 +177,12 @@ static void iwl_trans_restart_wk(struct work_struct *wk)
 		return;
 
 	mode = iwl_trans_determine_restart_mode(trans);
+	if (mode == IWL_RESET_MODE_BACKOFF) {
+		IWL_ERR(trans, "Too many device errors - delay next reset\n");
+		queue_delayed_work(system_unbound_wq, &trans->restart.wk,
+				   IWL_TRANS_RESET_DELAY);
+		return;
+	}
 
 	iwl_trans_inc_restart_count(trans->dev);
@@ -227,7 +242,7 @@ struct iwl_trans *iwl_trans_alloc(unsigned int priv_size,
 	trans->dev = dev;
 	trans->num_rx_queues = 1;
 
-	INIT_WORK(&trans->restart.wk, iwl_trans_restart_wk);
+	INIT_DELAYED_WORK(&trans->restart.wk, iwl_trans_restart_wk);
 
 	return trans;
 }
@@ -271,7 +286,7 @@ int iwl_trans_init(struct iwl_trans *trans)
 
 void iwl_trans_free(struct iwl_trans *trans)
 {
-	cancel_work_sync(&trans->restart.wk);
+	cancel_delayed_work_sync(&trans->restart.wk);
 	kmem_cache_destroy(trans->dev_cmd_pool);
 }
@@ -403,7 +418,7 @@ void iwl_trans_op_mode_leave(struct iwl_trans *trans)
 
 	iwl_trans_pcie_op_mode_leave(trans);
 
-	cancel_work_sync(&trans->restart.wk);
+	cancel_delayed_work_sync(&trans->restart.wk);
 
 	trans->op_mode = NULL;
@@ -540,7 +555,6 @@ void __releases(nic_access)
 iwl_trans_release_nic_access(struct iwl_trans *trans)
 {
 	iwl_trans_pcie_release_nic_access(trans);
-	__release(nic_access);
 }
 IWL_EXPORT_SYMBOL(iwl_trans_release_nic_access);
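
Converting restart.wk from a work_struct to a delayed_work is what lets
the backoff path above re-queue itself with a real delay. The change is
mechanical but easy to get wrong in one spot: delayed_work embeds the
plain work_struct as its .work member, so the handler's container_of()
must go through that member. A sketch with hypothetical my_* names:

struct my_trans {
	struct delayed_work restart_wk;
};

static void my_restart_wk(struct work_struct *wk)
{
	struct my_trans *trans = container_of(wk, struct my_trans,
					      restart_wk.work);

	/* ... attempt recovery; on repeated failure, back off: */
	queue_delayed_work(system_unbound_wq, &trans->restart_wk, 10 * HZ);
}

Queueing with a delay of 0, as iwl_trans_schedule_reset() now does,
keeps the old run-as-soon-as-possible behaviour.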


@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
 /*
- * Copyright (C) 2005-2014, 2018-2023, 2025 Intel Corporation
+ * Copyright (C) 2005-2014, 2018-2023 Intel Corporation
  * Copyright (C) 2013-2015 Intel Mobile Communications GmbH
  * Copyright (C) 2016-2017 Intel Deutschland GmbH
 */
@@ -876,7 +876,7 @@ struct iwl_txq {
  *	only valid for discrete (not integrated) NICs
  * @invalid_tx_cmd: invalid TX command buffer
  * @reduced_cap_sku: reduced capability supported SKU
- * @bw_limit: the max bandwidth
+ * @no_160: device not supporting 160 MHz
  * @step_urm: STEP is in URM, no support for MCS>9 in 320 MHz
  * @restart: restart worker data
  * @restart.wk: restart worker
@@ -911,8 +911,7 @@ struct iwl_trans {
 	char hw_id_str[52];
 	u32 sku_id[3];
 	bool reduced_cap_sku;
-	u16 bw_limit;
-	bool step_urm;
+	u8 no_160:1, step_urm:1;
 
 	u8 dsbr_urm_fw_dependent:1,
 	   dsbr_urm_permanent:1;
@@ -962,7 +961,7 @@ struct iwl_trans {
 	struct iwl_dma_ptr invalid_tx_cmd;
 
 	struct {
-		struct work_struct wk;
+		struct delayed_work wk;
 		struct iwl_fw_error_dump_mode mode;
 		bool during_reset;
 	} restart;
@@ -1163,7 +1162,7 @@ static inline void iwl_trans_schedule_reset(struct iwl_trans *trans,
 	 */
 	trans->restart.during_reset = test_bit(STATUS_IN_SW_RESET,
 					       &trans->status);
-	queue_work(system_unbound_wq, &trans->restart.wk);
+	queue_delayed_work(system_unbound_wq, &trans->restart.wk, 0);
 }
 
 static inline void iwl_trans_fw_error(struct iwl_trans *trans,
@@ -1262,6 +1261,9 @@ enum iwl_reset_mode {
 	IWL_RESET_MODE_RESCAN,
 	IWL_RESET_MODE_FUNC_RESET,
 	IWL_RESET_MODE_PROD_RESET,
+
+	/* keep last - special backoff value */
+	IWL_RESET_MODE_BACKOFF,
 };
 
 void iwl_trans_pcie_reset(struct iwl_trans *trans, enum iwl_reset_mode mode);


@@ -124,9 +124,9 @@ void iwl_mld_handle_bar_frame_release_notif(struct iwl_mld *mld,
 	rcu_read_lock();
 
 	baid_data = rcu_dereference(mld->fw_id_to_ba[baid]);
-	if (!IWL_FW_CHECK(mld, !baid_data,
-			  "Got valid BAID %d but not allocated, invalid BAR release!\n",
-			  baid))
+	if (IWL_FW_CHECK(mld, !baid_data,
+			 "Got valid BAID %d but not allocated, invalid BAR release!\n",
+			 baid))
 		goto out_unlock;
 
 	if (IWL_FW_CHECK(mld, tid != baid_data->tid ||


@@ -949,8 +949,9 @@ void iwl_mld_add_vif_debugfs(struct ieee80211_hw *hw,
 	snprintf(name, sizeof(name), "%pd", vif->debugfs_dir);
 	snprintf(target, sizeof(target), "../../../%pd3/iwlmld",
 		 vif->debugfs_dir);
-	mld_vif->dbgfs_slink =
-		debugfs_create_symlink(name, mld->debugfs_dir, target);
+	if (!mld_vif->dbgfs_slink)
+		mld_vif->dbgfs_slink =
+			debugfs_create_symlink(name, mld->debugfs_dir, target);
 
 	if (iwlmld_mod_params.power_scheme != IWL_POWER_SCHEME_CAM &&
 	    vif->type == NL80211_IFTYPE_STATION) {


@@ -333,19 +333,22 @@ int iwl_mld_load_fw(struct iwl_mld *mld)
 
 	ret = iwl_trans_start_hw(mld->trans);
 	if (ret)
-		return ret;
+		goto err;
 
 	ret = iwl_mld_run_fw_init_sequence(mld);
 	if (ret)
-		return ret;
+		goto err;
 
 	ret = iwl_mld_init_mcc(mld);
 	if (ret)
-		return ret;
+		goto err;
 
 	mld->fw_status.running = true;
 
 	return 0;
+
+err:
+	iwl_mld_stop_fw(mld);
+	return ret;
 }
 
 void iwl_mld_stop_fw(struct iwl_mld *mld)
@@ -358,6 +361,10 @@ void iwl_mld_stop_fw(struct iwl_mld *mld)
 
 	iwl_trans_stop_device(mld->trans);
 
+	wiphy_work_cancel(mld->wiphy, &mld->async_handlers_wk);
+
+	iwl_mld_purge_async_handlers_list(mld);
+
 	mld->fw_status.running = false;
 }


@@ -651,6 +651,7 @@ void iwl_mld_mac80211_remove_interface(struct ieee80211_hw *hw,
 
 #ifdef CONFIG_IWLWIFI_DEBUGFS
 	debugfs_remove(iwl_mld_vif_from_mac80211(vif)->dbgfs_slink);
+	iwl_mld_vif_from_mac80211(vif)->dbgfs_slink = NULL;
 #endif
 
 	iwl_mld_rm_vif(mld, vif);


@@ -75,6 +75,7 @@ void iwl_construct_mld(struct iwl_mld *mld, struct iwl_trans *trans,
 
 	/* Setup async RX handling */
 	spin_lock_init(&mld->async_handlers_lock);
+	INIT_LIST_HEAD(&mld->async_handlers_list);
 	wiphy_work_init(&mld->async_handlers_wk,
 			iwl_mld_async_handlers_wk);
 
@@ -414,9 +415,14 @@ iwl_op_mode_mld_start(struct iwl_trans *trans, const struct iwl_cfg *cfg,
 		wiphy_unlock(mld->wiphy);
 		rtnl_unlock();
 		iwl_fw_flush_dumps(&mld->fwrt);
-		goto free_hw;
+		goto err;
 	}
 
+	/* We are about to stop the FW. Notifications may require an
+	 * operational FW, so handle them all here before we stop.
+	 */
+	wiphy_work_flush(mld->wiphy, &mld->async_handlers_wk);
+
 	iwl_mld_stop_fw(mld);
 
 	wiphy_unlock(mld->wiphy);
@@ -455,7 +461,8 @@ leds_exit:
 	iwl_mld_leds_exit(mld);
 free_nvm:
 	kfree(mld->nvm_data);
-free_hw:
+err:
+	iwl_trans_op_mode_leave(mld->trans);
 	ieee80211_free_hw(mld->hw);
 	return ERR_PTR(ret);
 }


@@ -298,11 +298,6 @@ iwl_cleanup_mld(struct iwl_mld *mld)
 #endif
 
 	iwl_mld_low_latency_restart_cleanup(mld);
-
-	/* Empty the list of async notification handlers so we won't process
-	 * notifications from the dead fw after the reconfig flow.
-	 */
-	iwl_mld_purge_async_handlers_list(mld);
 }
 
 enum iwl_power_scheme {


@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
/* /*
* Copyright (C) 2005-2014, 2018-2025 Intel Corporation * Copyright (C) 2005-2014, 2018-2024 Intel Corporation
* Copyright (C) 2013-2015 Intel Mobile Communications GmbH * Copyright (C) 2013-2015 Intel Mobile Communications GmbH
* Copyright (C) 2016-2017 Intel Deutschland GmbH * Copyright (C) 2016-2017 Intel Deutschland GmbH
*/ */
@ -552,17 +552,16 @@ MODULE_DEVICE_TABLE(pci, iwl_hw_card_ids);
EXPORT_SYMBOL_IF_IWLWIFI_KUNIT(iwl_hw_card_ids); EXPORT_SYMBOL_IF_IWLWIFI_KUNIT(iwl_hw_card_ids);
#define _IWL_DEV_INFO(_device, _subdevice, _mac_type, _mac_step, _rf_type, \ #define _IWL_DEV_INFO(_device, _subdevice, _mac_type, _mac_step, _rf_type, \
_rf_id, _rf_step, _bw_limit, _cores, _cdb, _cfg, _name) \ _rf_id, _rf_step, _no_160, _cores, _cdb, _cfg, _name) \
{ .device = (_device), .subdevice = (_subdevice), .cfg = &(_cfg), \ { .device = (_device), .subdevice = (_subdevice), .cfg = &(_cfg), \
.name = _name, .mac_type = _mac_type, .rf_type = _rf_type, .rf_step = _rf_step, \ .name = _name, .mac_type = _mac_type, .rf_type = _rf_type, .rf_step = _rf_step, \
.bw_limit = _bw_limit, .cores = _cores, .rf_id = _rf_id, \ .no_160 = _no_160, .cores = _cores, .rf_id = _rf_id, \
.mac_step = _mac_step, .cdb = _cdb, .jacket = IWL_CFG_ANY } .mac_step = _mac_step, .cdb = _cdb, .jacket = IWL_CFG_ANY }
#define IWL_DEV_INFO(_device, _subdevice, _cfg, _name) \ #define IWL_DEV_INFO(_device, _subdevice, _cfg, _name) \
_IWL_DEV_INFO(_device, _subdevice, IWL_CFG_ANY, IWL_CFG_ANY, \ _IWL_DEV_INFO(_device, _subdevice, IWL_CFG_ANY, IWL_CFG_ANY, \
IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY, \ IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY, \
IWL_CFG_BW_NO_LIM, IWL_CFG_ANY, IWL_CFG_ANY, \ IWL_CFG_ANY, _cfg, _name)
_cfg, _name)
VISIBLE_IF_IWLWIFI_KUNIT const struct iwl_dev_info iwl_dev_info_table[] = { VISIBLE_IF_IWLWIFI_KUNIT const struct iwl_dev_info iwl_dev_info_table[] = {
#if IS_ENABLED(CONFIG_IWLMVM) #if IS_ENABLED(CONFIG_IWLMVM)
@ -725,66 +724,66 @@ VISIBLE_IF_IWLWIFI_KUNIT const struct iwl_dev_info iwl_dev_info_table[] = {
_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
IWL_CFG_MAC_TYPE_PU, IWL_CFG_ANY, IWL_CFG_MAC_TYPE_PU, IWL_CFG_ANY,
IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1, IWL_CFG_ANY, IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1, IWL_CFG_ANY,
IWL_CFG_BW_NO_LIM, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB, IWL_CFG_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
iwl9560_2ac_cfg_soc, iwl9461_160_name), iwl9560_2ac_cfg_soc, iwl9461_160_name),
_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
IWL_CFG_MAC_TYPE_PU, IWL_CFG_ANY, IWL_CFG_MAC_TYPE_PU, IWL_CFG_ANY,
IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1, IWL_CFG_ANY, IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1, IWL_CFG_ANY,
80, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB, IWL_CFG_NO_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
iwl9560_2ac_cfg_soc, iwl9461_name), iwl9560_2ac_cfg_soc, iwl9461_name),
_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
IWL_CFG_MAC_TYPE_PU, IWL_CFG_ANY, IWL_CFG_MAC_TYPE_PU, IWL_CFG_ANY,
IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1_DIV, IWL_CFG_ANY, IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1_DIV, IWL_CFG_ANY,
IWL_CFG_BW_NO_LIM, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB, IWL_CFG_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
iwl9560_2ac_cfg_soc, iwl9462_160_name), iwl9560_2ac_cfg_soc, iwl9462_160_name),
_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
IWL_CFG_MAC_TYPE_PU, IWL_CFG_ANY, IWL_CFG_MAC_TYPE_PU, IWL_CFG_ANY,
IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1_DIV, IWL_CFG_ANY, IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1_DIV, IWL_CFG_ANY,
80, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB, IWL_CFG_NO_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
iwl9560_2ac_cfg_soc, iwl9462_name), iwl9560_2ac_cfg_soc, iwl9462_name),
_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
IWL_CFG_MAC_TYPE_PU, IWL_CFG_ANY, IWL_CFG_MAC_TYPE_PU, IWL_CFG_ANY,
IWL_CFG_RF_TYPE_JF2, IWL_CFG_RF_ID_JF, IWL_CFG_ANY, IWL_CFG_RF_TYPE_JF2, IWL_CFG_RF_ID_JF, IWL_CFG_ANY,
IWL_CFG_BW_NO_LIM, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB, IWL_CFG_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
iwl9560_2ac_cfg_soc, iwl9560_160_name), iwl9560_2ac_cfg_soc, iwl9560_160_name),
_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
IWL_CFG_MAC_TYPE_PU, IWL_CFG_ANY, IWL_CFG_MAC_TYPE_PU, IWL_CFG_ANY,
IWL_CFG_RF_TYPE_JF2, IWL_CFG_RF_ID_JF, IWL_CFG_ANY, IWL_CFG_RF_TYPE_JF2, IWL_CFG_RF_ID_JF, IWL_CFG_ANY,
80, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB, IWL_CFG_NO_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
iwl9560_2ac_cfg_soc, iwl9560_name), iwl9560_2ac_cfg_soc, iwl9560_name),
_IWL_DEV_INFO(0x2526, IWL_CFG_ANY, _IWL_DEV_INFO(0x2526, IWL_CFG_ANY,
IWL_CFG_MAC_TYPE_TH, IWL_CFG_ANY, IWL_CFG_MAC_TYPE_TH, IWL_CFG_ANY,
IWL_CFG_RF_TYPE_TH, IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_RF_TYPE_TH, IWL_CFG_ANY, IWL_CFG_ANY,
IWL_CFG_BW_NO_LIM, IWL_CFG_CORES_BT_GNSS, IWL_CFG_NO_CDB, IWL_CFG_160, IWL_CFG_CORES_BT_GNSS, IWL_CFG_NO_CDB,
iwl9260_2ac_cfg, iwl9270_160_name), iwl9260_2ac_cfg, iwl9270_160_name),
_IWL_DEV_INFO(0x2526, IWL_CFG_ANY, _IWL_DEV_INFO(0x2526, IWL_CFG_ANY,
IWL_CFG_MAC_TYPE_TH, IWL_CFG_ANY, IWL_CFG_MAC_TYPE_TH, IWL_CFG_ANY,
IWL_CFG_RF_TYPE_TH, IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_RF_TYPE_TH, IWL_CFG_ANY, IWL_CFG_ANY,
80, IWL_CFG_CORES_BT_GNSS, IWL_CFG_NO_CDB, IWL_CFG_NO_160, IWL_CFG_CORES_BT_GNSS, IWL_CFG_NO_CDB,
iwl9260_2ac_cfg, iwl9270_name), iwl9260_2ac_cfg, iwl9270_name),
_IWL_DEV_INFO(0x271B, IWL_CFG_ANY, _IWL_DEV_INFO(0x271B, IWL_CFG_ANY,
IWL_CFG_MAC_TYPE_TH, IWL_CFG_ANY, IWL_CFG_MAC_TYPE_TH, IWL_CFG_ANY,
IWL_CFG_RF_TYPE_TH1, IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_RF_TYPE_TH1, IWL_CFG_ANY, IWL_CFG_ANY,
IWL_CFG_BW_NO_LIM, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB, IWL_CFG_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
iwl9260_2ac_cfg, iwl9162_160_name), iwl9260_2ac_cfg, iwl9162_160_name),
_IWL_DEV_INFO(0x271B, IWL_CFG_ANY, _IWL_DEV_INFO(0x271B, IWL_CFG_ANY,
IWL_CFG_MAC_TYPE_TH, IWL_CFG_ANY, IWL_CFG_MAC_TYPE_TH, IWL_CFG_ANY,
IWL_CFG_RF_TYPE_TH1, IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_RF_TYPE_TH1, IWL_CFG_ANY, IWL_CFG_ANY,
80, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB, IWL_CFG_NO_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
iwl9260_2ac_cfg, iwl9162_name), iwl9260_2ac_cfg, iwl9162_name),
_IWL_DEV_INFO(0x2526, IWL_CFG_ANY, _IWL_DEV_INFO(0x2526, IWL_CFG_ANY,
IWL_CFG_MAC_TYPE_TH, IWL_CFG_ANY, IWL_CFG_MAC_TYPE_TH, IWL_CFG_ANY,
IWL_CFG_RF_TYPE_TH, IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_RF_TYPE_TH, IWL_CFG_ANY, IWL_CFG_ANY,
IWL_CFG_BW_NO_LIM, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB, IWL_CFG_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
iwl9260_2ac_cfg, iwl9260_160_name), iwl9260_2ac_cfg, iwl9260_160_name),
_IWL_DEV_INFO(0x2526, IWL_CFG_ANY, _IWL_DEV_INFO(0x2526, IWL_CFG_ANY,
IWL_CFG_MAC_TYPE_TH, IWL_CFG_ANY, IWL_CFG_MAC_TYPE_TH, IWL_CFG_ANY,
IWL_CFG_RF_TYPE_TH, IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_RF_TYPE_TH, IWL_CFG_ANY, IWL_CFG_ANY,
80, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB, IWL_CFG_NO_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
iwl9260_2ac_cfg, iwl9260_name), iwl9260_2ac_cfg, iwl9260_name),
/* Qu with Jf */ /* Qu with Jf */
@ -792,132 +791,132 @@ VISIBLE_IF_IWLWIFI_KUNIT const struct iwl_dev_info iwl_dev_info_table[] = {
_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
IWL_CFG_MAC_TYPE_QU, SILICON_B_STEP, IWL_CFG_MAC_TYPE_QU, SILICON_B_STEP,
IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1, IWL_CFG_ANY, IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1, IWL_CFG_ANY,
IWL_CFG_BW_NO_LIM, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB, IWL_CFG_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
iwl9560_qu_b0_jf_b0_cfg, iwl9461_160_name), iwl9560_qu_b0_jf_b0_cfg, iwl9461_160_name),
_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
IWL_CFG_MAC_TYPE_QU, SILICON_B_STEP, IWL_CFG_MAC_TYPE_QU, SILICON_B_STEP,
IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1, IWL_CFG_ANY, IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1, IWL_CFG_ANY,
80, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB, IWL_CFG_NO_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
iwl9560_qu_b0_jf_b0_cfg, iwl9461_name), iwl9560_qu_b0_jf_b0_cfg, iwl9461_name),
_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
IWL_CFG_MAC_TYPE_QU, SILICON_B_STEP, IWL_CFG_MAC_TYPE_QU, SILICON_B_STEP,
IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1_DIV, IWL_CFG_ANY, IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1_DIV, IWL_CFG_ANY,
IWL_CFG_BW_NO_LIM, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB, IWL_CFG_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
iwl9560_qu_b0_jf_b0_cfg, iwl9462_160_name), iwl9560_qu_b0_jf_b0_cfg, iwl9462_160_name),
_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
IWL_CFG_MAC_TYPE_QU, SILICON_B_STEP, IWL_CFG_MAC_TYPE_QU, SILICON_B_STEP,
IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1_DIV, IWL_CFG_ANY, IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1_DIV, IWL_CFG_ANY,
80, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB, IWL_CFG_NO_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
iwl9560_qu_b0_jf_b0_cfg, iwl9462_name), iwl9560_qu_b0_jf_b0_cfg, iwl9462_name),
_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
IWL_CFG_MAC_TYPE_QU, SILICON_B_STEP, IWL_CFG_MAC_TYPE_QU, SILICON_B_STEP,
IWL_CFG_RF_TYPE_JF2, IWL_CFG_RF_ID_JF, IWL_CFG_ANY, IWL_CFG_RF_TYPE_JF2, IWL_CFG_RF_ID_JF, IWL_CFG_ANY,
IWL_CFG_BW_NO_LIM, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB, IWL_CFG_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
iwl9560_qu_b0_jf_b0_cfg, iwl9560_160_name), iwl9560_qu_b0_jf_b0_cfg, iwl9560_160_name),
_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
IWL_CFG_MAC_TYPE_QU, SILICON_B_STEP, IWL_CFG_MAC_TYPE_QU, SILICON_B_STEP,
IWL_CFG_RF_TYPE_JF2, IWL_CFG_RF_ID_JF, IWL_CFG_ANY, IWL_CFG_RF_TYPE_JF2, IWL_CFG_RF_ID_JF, IWL_CFG_ANY,
80, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB, IWL_CFG_NO_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
iwl9560_qu_b0_jf_b0_cfg, iwl9560_name), iwl9560_qu_b0_jf_b0_cfg, iwl9560_name),
_IWL_DEV_INFO(IWL_CFG_ANY, 0x1551, _IWL_DEV_INFO(IWL_CFG_ANY, 0x1551,
IWL_CFG_MAC_TYPE_QU, SILICON_B_STEP, IWL_CFG_MAC_TYPE_QU, SILICON_B_STEP,
IWL_CFG_RF_TYPE_JF2, IWL_CFG_RF_ID_JF, IWL_CFG_ANY, IWL_CFG_RF_TYPE_JF2, IWL_CFG_RF_ID_JF, IWL_CFG_ANY,
80, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB, IWL_CFG_NO_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
iwl9560_qu_b0_jf_b0_cfg, iwl9560_killer_1550s_name), iwl9560_qu_b0_jf_b0_cfg, iwl9560_killer_1550s_name),
_IWL_DEV_INFO(IWL_CFG_ANY, 0x1552, _IWL_DEV_INFO(IWL_CFG_ANY, 0x1552,
IWL_CFG_MAC_TYPE_QU, SILICON_B_STEP, IWL_CFG_MAC_TYPE_QU, SILICON_B_STEP,
IWL_CFG_RF_TYPE_JF2, IWL_CFG_RF_ID_JF, IWL_CFG_ANY, IWL_CFG_RF_TYPE_JF2, IWL_CFG_RF_ID_JF, IWL_CFG_ANY,
80, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB, IWL_CFG_NO_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
iwl9560_qu_b0_jf_b0_cfg, iwl9560_killer_1550i_name), iwl9560_qu_b0_jf_b0_cfg, iwl9560_killer_1550i_name),
/* Qu C step */ /* Qu C step */
_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
IWL_CFG_MAC_TYPE_QU, SILICON_C_STEP, IWL_CFG_MAC_TYPE_QU, SILICON_C_STEP,
IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1, IWL_CFG_ANY, IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1, IWL_CFG_ANY,
IWL_CFG_BW_NO_LIM, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB, IWL_CFG_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
iwl9560_qu_c0_jf_b0_cfg, iwl9461_160_name), iwl9560_qu_c0_jf_b0_cfg, iwl9461_160_name),
_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
IWL_CFG_MAC_TYPE_QU, SILICON_C_STEP, IWL_CFG_MAC_TYPE_QU, SILICON_C_STEP,
IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1, IWL_CFG_ANY, IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1, IWL_CFG_ANY,
80, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB, IWL_CFG_NO_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
iwl9560_qu_c0_jf_b0_cfg, iwl9461_name), iwl9560_qu_c0_jf_b0_cfg, iwl9461_name),
_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
IWL_CFG_MAC_TYPE_QU, SILICON_C_STEP, IWL_CFG_MAC_TYPE_QU, SILICON_C_STEP,
IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1_DIV, IWL_CFG_ANY, IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1_DIV, IWL_CFG_ANY,
IWL_CFG_BW_NO_LIM, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB, IWL_CFG_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
iwl9560_qu_c0_jf_b0_cfg, iwl9462_160_name), iwl9560_qu_c0_jf_b0_cfg, iwl9462_160_name),
_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
IWL_CFG_MAC_TYPE_QU, SILICON_C_STEP, IWL_CFG_MAC_TYPE_QU, SILICON_C_STEP,
IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1_DIV, IWL_CFG_ANY, IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1_DIV, IWL_CFG_ANY,
80, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB, IWL_CFG_NO_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
iwl9560_qu_c0_jf_b0_cfg, iwl9462_name), iwl9560_qu_c0_jf_b0_cfg, iwl9462_name),
_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
IWL_CFG_MAC_TYPE_QU, SILICON_C_STEP, IWL_CFG_MAC_TYPE_QU, SILICON_C_STEP,
IWL_CFG_RF_TYPE_JF2, IWL_CFG_RF_ID_JF, IWL_CFG_ANY, IWL_CFG_RF_TYPE_JF2, IWL_CFG_RF_ID_JF, IWL_CFG_ANY,
IWL_CFG_BW_NO_LIM, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB, IWL_CFG_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
iwl9560_qu_c0_jf_b0_cfg, iwl9560_160_name), iwl9560_qu_c0_jf_b0_cfg, iwl9560_160_name),
_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
IWL_CFG_MAC_TYPE_QU, SILICON_C_STEP, IWL_CFG_MAC_TYPE_QU, SILICON_C_STEP,
IWL_CFG_RF_TYPE_JF2, IWL_CFG_RF_ID_JF, IWL_CFG_ANY, IWL_CFG_RF_TYPE_JF2, IWL_CFG_RF_ID_JF, IWL_CFG_ANY,
80, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB, IWL_CFG_NO_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
iwl9560_qu_c0_jf_b0_cfg, iwl9560_name), iwl9560_qu_c0_jf_b0_cfg, iwl9560_name),
_IWL_DEV_INFO(IWL_CFG_ANY, 0x1551, _IWL_DEV_INFO(IWL_CFG_ANY, 0x1551,
IWL_CFG_MAC_TYPE_QU, SILICON_C_STEP, IWL_CFG_MAC_TYPE_QU, SILICON_C_STEP,
IWL_CFG_RF_TYPE_JF2, IWL_CFG_RF_ID_JF, IWL_CFG_ANY, IWL_CFG_RF_TYPE_JF2, IWL_CFG_RF_ID_JF, IWL_CFG_ANY,
IWL_CFG_BW_NO_LIM, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB, IWL_CFG_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
iwl9560_qu_c0_jf_b0_cfg, iwl9560_killer_1550s_name), iwl9560_qu_c0_jf_b0_cfg, iwl9560_killer_1550s_name),
_IWL_DEV_INFO(IWL_CFG_ANY, 0x1552, _IWL_DEV_INFO(IWL_CFG_ANY, 0x1552,
IWL_CFG_MAC_TYPE_QU, SILICON_C_STEP, IWL_CFG_MAC_TYPE_QU, SILICON_C_STEP,
IWL_CFG_RF_TYPE_JF2, IWL_CFG_RF_ID_JF, IWL_CFG_ANY, IWL_CFG_RF_TYPE_JF2, IWL_CFG_RF_ID_JF, IWL_CFG_ANY,
80, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB, IWL_CFG_NO_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
iwl9560_qu_c0_jf_b0_cfg, iwl9560_killer_1550i_name), iwl9560_qu_c0_jf_b0_cfg, iwl9560_killer_1550i_name),
/* QuZ */ /* QuZ */
_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
IWL_CFG_MAC_TYPE_QUZ, IWL_CFG_ANY, IWL_CFG_MAC_TYPE_QUZ, IWL_CFG_ANY,
IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1, IWL_CFG_ANY, IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1, IWL_CFG_ANY,
IWL_CFG_BW_NO_LIM, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB, IWL_CFG_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
iwl9560_quz_a0_jf_b0_cfg, iwl9461_160_name), iwl9560_quz_a0_jf_b0_cfg, iwl9461_160_name),
_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
IWL_CFG_MAC_TYPE_QUZ, IWL_CFG_ANY, IWL_CFG_MAC_TYPE_QUZ, IWL_CFG_ANY,
IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1, IWL_CFG_ANY, IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1, IWL_CFG_ANY,
80, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB, IWL_CFG_NO_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
iwl9560_quz_a0_jf_b0_cfg, iwl9461_name), iwl9560_quz_a0_jf_b0_cfg, iwl9461_name),
_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
IWL_CFG_MAC_TYPE_QUZ, IWL_CFG_ANY, IWL_CFG_MAC_TYPE_QUZ, IWL_CFG_ANY,
IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1_DIV, IWL_CFG_ANY, IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1_DIV, IWL_CFG_ANY,
IWL_CFG_BW_NO_LIM, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB, IWL_CFG_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
iwl9560_quz_a0_jf_b0_cfg, iwl9462_160_name), iwl9560_quz_a0_jf_b0_cfg, iwl9462_160_name),
_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
IWL_CFG_MAC_TYPE_QUZ, IWL_CFG_ANY, IWL_CFG_MAC_TYPE_QUZ, IWL_CFG_ANY,
IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1_DIV, IWL_CFG_ANY, IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1_DIV, IWL_CFG_ANY,
80, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB, IWL_CFG_NO_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
iwl9560_quz_a0_jf_b0_cfg, iwl9462_name), iwl9560_quz_a0_jf_b0_cfg, iwl9462_name),
_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
IWL_CFG_MAC_TYPE_QUZ, IWL_CFG_ANY, IWL_CFG_MAC_TYPE_QUZ, IWL_CFG_ANY,
IWL_CFG_RF_TYPE_JF2, IWL_CFG_RF_ID_JF, IWL_CFG_ANY, IWL_CFG_RF_TYPE_JF2, IWL_CFG_RF_ID_JF, IWL_CFG_ANY,
IWL_CFG_BW_NO_LIM, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB, IWL_CFG_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
iwl9560_quz_a0_jf_b0_cfg, iwl9560_160_name), iwl9560_quz_a0_jf_b0_cfg, iwl9560_160_name),
_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
IWL_CFG_MAC_TYPE_QUZ, IWL_CFG_ANY, IWL_CFG_MAC_TYPE_QUZ, IWL_CFG_ANY,
IWL_CFG_RF_TYPE_JF2, IWL_CFG_RF_ID_JF, IWL_CFG_ANY, IWL_CFG_RF_TYPE_JF2, IWL_CFG_RF_ID_JF, IWL_CFG_ANY,
80, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB, IWL_CFG_NO_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
iwl9560_quz_a0_jf_b0_cfg, iwl9560_name), iwl9560_quz_a0_jf_b0_cfg, iwl9560_name),
_IWL_DEV_INFO(IWL_CFG_ANY, 0x1551, _IWL_DEV_INFO(IWL_CFG_ANY, 0x1551,
IWL_CFG_MAC_TYPE_QUZ, IWL_CFG_ANY, IWL_CFG_MAC_TYPE_QUZ, IWL_CFG_ANY,
IWL_CFG_RF_TYPE_JF2, IWL_CFG_RF_ID_JF, IWL_CFG_ANY, IWL_CFG_RF_TYPE_JF2, IWL_CFG_RF_ID_JF, IWL_CFG_ANY,
IWL_CFG_BW_NO_LIM, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB, IWL_CFG_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
iwl9560_quz_a0_jf_b0_cfg, iwl9560_killer_1550s_name), iwl9560_quz_a0_jf_b0_cfg, iwl9560_killer_1550s_name),
_IWL_DEV_INFO(IWL_CFG_ANY, 0x1552, _IWL_DEV_INFO(IWL_CFG_ANY, 0x1552,
IWL_CFG_MAC_TYPE_QUZ, IWL_CFG_ANY, IWL_CFG_MAC_TYPE_QUZ, IWL_CFG_ANY,
IWL_CFG_RF_TYPE_JF2, IWL_CFG_RF_ID_JF, IWL_CFG_ANY, IWL_CFG_RF_TYPE_JF2, IWL_CFG_RF_ID_JF, IWL_CFG_ANY,
80, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB, IWL_CFG_NO_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
iwl9560_quz_a0_jf_b0_cfg, iwl9560_killer_1550i_name), iwl9560_quz_a0_jf_b0_cfg, iwl9560_killer_1550i_name),
/* Qu with Hr */ /* Qu with Hr */
@@ -925,189 +924,189 @@ VISIBLE_IF_IWLWIFI_KUNIT const struct iwl_dev_info iwl_dev_info_table[] = {
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_QU, SILICON_B_STEP,
 		      IWL_CFG_RF_TYPE_HR1, IWL_CFG_ANY, IWL_CFG_ANY,
-		      IWL_CFG_BW_ANY, IWL_CFG_ANY, IWL_CFG_NO_CDB,
+		      IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_NO_CDB,
 		      iwl_qu_b0_hr1_b0, iwl_ax101_name),
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_QU, SILICON_B_STEP,
 		      IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY, IWL_CFG_ANY,
-		      80, IWL_CFG_ANY, IWL_CFG_NO_CDB,
+		      IWL_CFG_NO_160, IWL_CFG_ANY, IWL_CFG_NO_CDB,
 		      iwl_qu_b0_hr_b0, iwl_ax203_name),

 	/* Qu C step */
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_QU, SILICON_C_STEP,
 		      IWL_CFG_RF_TYPE_HR1, IWL_CFG_ANY, IWL_CFG_ANY,
-		      IWL_CFG_BW_ANY, IWL_CFG_ANY, IWL_CFG_NO_CDB,
+		      IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_NO_CDB,
 		      iwl_qu_c0_hr1_b0, iwl_ax101_name),
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_QU, SILICON_C_STEP,
 		      IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY, IWL_CFG_ANY,
-		      80, IWL_CFG_ANY, IWL_CFG_NO_CDB,
+		      IWL_CFG_NO_160, IWL_CFG_ANY, IWL_CFG_NO_CDB,
 		      iwl_qu_c0_hr_b0, iwl_ax203_name),
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_QU, SILICON_C_STEP,
 		      IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY, IWL_CFG_ANY,
-		      IWL_CFG_BW_NO_LIM, IWL_CFG_ANY, IWL_CFG_NO_CDB,
+		      IWL_CFG_160, IWL_CFG_ANY, IWL_CFG_NO_CDB,
 		      iwl_qu_c0_hr_b0, iwl_ax201_name),

 	/* QuZ */
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_QUZ, IWL_CFG_ANY,
 		      IWL_CFG_RF_TYPE_HR1, IWL_CFG_ANY, IWL_CFG_ANY,
-		      IWL_CFG_BW_ANY, IWL_CFG_ANY, IWL_CFG_NO_CDB,
+		      IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_NO_CDB,
 		      iwl_quz_a0_hr1_b0, iwl_ax101_name),
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_QUZ, SILICON_B_STEP,
 		      IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY, IWL_CFG_ANY,
-		      80, IWL_CFG_ANY, IWL_CFG_NO_CDB,
+		      IWL_CFG_NO_160, IWL_CFG_ANY, IWL_CFG_NO_CDB,
 		      iwl_cfg_quz_a0_hr_b0, iwl_ax203_name),
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_QUZ, SILICON_B_STEP,
 		      IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY, IWL_CFG_ANY,
-		      IWL_CFG_BW_NO_LIM, IWL_CFG_ANY, IWL_CFG_NO_CDB,
+		      IWL_CFG_160, IWL_CFG_ANY, IWL_CFG_NO_CDB,
 		      iwl_cfg_quz_a0_hr_b0, iwl_ax201_name),

 	/* Ma */
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_MA, IWL_CFG_ANY,
 		      IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY, IWL_CFG_ANY,
-		      IWL_CFG_BW_ANY, IWL_CFG_ANY, IWL_CFG_NO_CDB,
+		      IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_NO_CDB,
 		      iwl_cfg_ma, iwl_ax201_name),
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_MA, IWL_CFG_ANY,
 		      IWL_CFG_RF_TYPE_GF, IWL_CFG_ANY, IWL_CFG_ANY,
-		      IWL_CFG_BW_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
+		      IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
 		      iwl_cfg_ma, iwl_ax211_name),
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_MA, IWL_CFG_ANY,
 		      IWL_CFG_RF_TYPE_FM, IWL_CFG_ANY, IWL_CFG_ANY,
-		      IWL_CFG_BW_ANY, IWL_CFG_ANY, IWL_CFG_NO_CDB,
+		      IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_NO_CDB,
 		      iwl_cfg_ma, iwl_ax231_name),

 	/* So with Hr */
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_SO, IWL_CFG_ANY,
 		      IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY, IWL_CFG_ANY,
-		      80, IWL_CFG_ANY, IWL_CFG_NO_CDB,
+		      IWL_CFG_NO_160, IWL_CFG_ANY, IWL_CFG_NO_CDB,
 		      iwl_cfg_so_a0_hr_a0, iwl_ax203_name),
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_SO, IWL_CFG_ANY,
 		      IWL_CFG_RF_TYPE_HR1, IWL_CFG_ANY, IWL_CFG_ANY,
-		      80, IWL_CFG_ANY, IWL_CFG_NO_CDB,
+		      IWL_CFG_NO_160, IWL_CFG_ANY, IWL_CFG_NO_CDB,
 		      iwl_cfg_so_a0_hr_a0, iwl_ax101_name),
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_SO, IWL_CFG_ANY,
 		      IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY, IWL_CFG_ANY,
-		      IWL_CFG_BW_NO_LIM, IWL_CFG_ANY, IWL_CFG_NO_CDB,
+		      IWL_CFG_160, IWL_CFG_ANY, IWL_CFG_NO_CDB,
 		      iwl_cfg_so_a0_hr_a0, iwl_ax201_name),

 	/* So-F with Hr */
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_SOF, IWL_CFG_ANY,
 		      IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY, IWL_CFG_ANY,
-		      80, IWL_CFG_ANY, IWL_CFG_NO_CDB,
+		      IWL_CFG_NO_160, IWL_CFG_ANY, IWL_CFG_NO_CDB,
 		      iwl_cfg_so_a0_hr_a0, iwl_ax203_name),
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_SOF, IWL_CFG_ANY,
 		      IWL_CFG_RF_TYPE_HR1, IWL_CFG_ANY, IWL_CFG_ANY,
-		      80, IWL_CFG_ANY, IWL_CFG_NO_CDB,
+		      IWL_CFG_NO_160, IWL_CFG_ANY, IWL_CFG_NO_CDB,
 		      iwl_cfg_so_a0_hr_a0, iwl_ax101_name),
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_SOF, IWL_CFG_ANY,
 		      IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY, IWL_CFG_ANY,
-		      IWL_CFG_BW_NO_LIM, IWL_CFG_ANY, IWL_CFG_NO_CDB,
+		      IWL_CFG_160, IWL_CFG_ANY, IWL_CFG_NO_CDB,
 		      iwl_cfg_so_a0_hr_a0, iwl_ax201_name),

 	/* So-F with Gf */
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_SOF, IWL_CFG_ANY,
 		      IWL_CFG_RF_TYPE_GF, IWL_CFG_ANY, IWL_CFG_ANY,
-		      IWL_CFG_BW_NO_LIM, IWL_CFG_ANY, IWL_CFG_NO_CDB,
+		      IWL_CFG_160, IWL_CFG_ANY, IWL_CFG_NO_CDB,
 		      iwlax211_2ax_cfg_so_gf_a0, iwl_ax211_name),
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_SOF, IWL_CFG_ANY,
 		      IWL_CFG_RF_TYPE_GF, IWL_CFG_ANY, IWL_CFG_ANY,
-		      IWL_CFG_BW_NO_LIM, IWL_CFG_ANY, IWL_CFG_CDB,
+		      IWL_CFG_160, IWL_CFG_ANY, IWL_CFG_CDB,
 		      iwlax411_2ax_cfg_so_gf4_a0, iwl_ax411_name),

 	/* SoF with JF2 */
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_SOF, IWL_CFG_ANY,
 		      IWL_CFG_RF_TYPE_JF2, IWL_CFG_RF_ID_JF, IWL_CFG_ANY,
-		      IWL_CFG_BW_NO_LIM, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
+		      IWL_CFG_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
 		      iwlax210_2ax_cfg_so_jf_b0, iwl9560_160_name),
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_SOF, IWL_CFG_ANY,
 		      IWL_CFG_RF_TYPE_JF2, IWL_CFG_RF_ID_JF, IWL_CFG_ANY,
-		      80, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
+		      IWL_CFG_NO_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
 		      iwlax210_2ax_cfg_so_jf_b0, iwl9560_name),

 	/* SoF with JF */
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_SOF, IWL_CFG_ANY,
 		      IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1, IWL_CFG_ANY,
-		      IWL_CFG_BW_NO_LIM, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
+		      IWL_CFG_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
 		      iwlax210_2ax_cfg_so_jf_b0, iwl9461_160_name),
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_SOF, IWL_CFG_ANY,
 		      IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1_DIV, IWL_CFG_ANY,
-		      IWL_CFG_BW_NO_LIM, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
+		      IWL_CFG_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
 		      iwlax210_2ax_cfg_so_jf_b0, iwl9462_160_name),
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_SOF, IWL_CFG_ANY,
 		      IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1, IWL_CFG_ANY,
-		      80, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
+		      IWL_CFG_NO_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
 		      iwlax210_2ax_cfg_so_jf_b0, iwl9461_name),
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_SOF, IWL_CFG_ANY,
 		      IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1_DIV, IWL_CFG_ANY,
-		      80, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
+		      IWL_CFG_NO_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
 		      iwlax210_2ax_cfg_so_jf_b0, iwl9462_name),

 	/* So with GF */
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_SO, IWL_CFG_ANY,
 		      IWL_CFG_RF_TYPE_GF, IWL_CFG_ANY, IWL_CFG_ANY,
-		      IWL_CFG_BW_NO_LIM, IWL_CFG_ANY, IWL_CFG_NO_CDB,
+		      IWL_CFG_160, IWL_CFG_ANY, IWL_CFG_NO_CDB,
 		      iwlax211_2ax_cfg_so_gf_a0, iwl_ax211_name),
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_SO, IWL_CFG_ANY,
 		      IWL_CFG_RF_TYPE_GF, IWL_CFG_ANY, IWL_CFG_ANY,
-		      IWL_CFG_BW_NO_LIM, IWL_CFG_ANY, IWL_CFG_CDB,
+		      IWL_CFG_160, IWL_CFG_ANY, IWL_CFG_CDB,
 		      iwlax411_2ax_cfg_so_gf4_a0, iwl_ax411_name),

 	/* So with JF2 */
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_SO, IWL_CFG_ANY,
 		      IWL_CFG_RF_TYPE_JF2, IWL_CFG_RF_ID_JF, IWL_CFG_ANY,
-		      IWL_CFG_BW_NO_LIM, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
+		      IWL_CFG_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
 		      iwlax210_2ax_cfg_so_jf_b0, iwl9560_160_name),
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_SO, IWL_CFG_ANY,
 		      IWL_CFG_RF_TYPE_JF2, IWL_CFG_RF_ID_JF, IWL_CFG_ANY,
-		      80, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
+		      IWL_CFG_NO_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
 		      iwlax210_2ax_cfg_so_jf_b0, iwl9560_name),

 	/* So with JF */
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_SO, IWL_CFG_ANY,
 		      IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1, IWL_CFG_ANY,
-		      IWL_CFG_BW_NO_LIM, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
+		      IWL_CFG_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
 		      iwlax210_2ax_cfg_so_jf_b0, iwl9461_160_name),
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_SO, IWL_CFG_ANY,
 		      IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1_DIV, IWL_CFG_ANY,
-		      IWL_CFG_BW_NO_LIM, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
+		      IWL_CFG_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
 		      iwlax210_2ax_cfg_so_jf_b0, iwl9462_160_name),
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_SO, IWL_CFG_ANY,
 		      IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1, IWL_CFG_ANY,
-		      80, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
+		      IWL_CFG_NO_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
 		      iwlax210_2ax_cfg_so_jf_b0, iwl9461_name),
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_SO, IWL_CFG_ANY,
 		      IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1_DIV, IWL_CFG_ANY,
-		      80, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
+		      IWL_CFG_NO_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
 		      iwlax210_2ax_cfg_so_jf_b0, iwl9462_name),
 #endif /* CONFIG_IWLMVM */
@@ -1116,13 +1115,13 @@ VISIBLE_IF_IWLWIFI_KUNIT const struct iwl_dev_info iwl_dev_info_table[] = {
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_BZ, IWL_CFG_ANY,
 		      IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY, IWL_CFG_ANY,
-		      IWL_CFG_BW_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
+		      IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
 		      iwl_cfg_bz, iwl_ax201_name),
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_BZ, IWL_CFG_ANY,
 		      IWL_CFG_RF_TYPE_GF, IWL_CFG_ANY, IWL_CFG_ANY,
-		      IWL_CFG_BW_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
+		      IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
 		      iwl_cfg_bz, iwl_ax211_name),
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
@@ -1134,119 +1133,104 @@ VISIBLE_IF_IWLWIFI_KUNIT const struct iwl_dev_info iwl_dev_info_table[] = {
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_BZ, IWL_CFG_ANY,
 		      IWL_CFG_RF_TYPE_WH, IWL_CFG_ANY, IWL_CFG_ANY,
-		      IWL_CFG_BW_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
+		      IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
 		      iwl_cfg_bz, iwl_wh_name),
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_BZ_W, IWL_CFG_ANY,
 		      IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY, IWL_CFG_ANY,
-		      IWL_CFG_BW_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
+		      IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
 		      iwl_cfg_bz, iwl_ax201_name),
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_BZ_W, IWL_CFG_ANY,
 		      IWL_CFG_RF_TYPE_GF, IWL_CFG_ANY, IWL_CFG_ANY,
-		      IWL_CFG_BW_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
+		      IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
 		      iwl_cfg_bz, iwl_ax211_name),
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_BZ_W, IWL_CFG_ANY,
 		      IWL_CFG_RF_TYPE_FM, IWL_CFG_ANY, IWL_CFG_ANY,
-		      IWL_CFG_BW_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
+		      IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
 		      iwl_cfg_bz, iwl_fm_name),
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_BZ_W, IWL_CFG_ANY,
 		      IWL_CFG_RF_TYPE_WH, IWL_CFG_ANY, IWL_CFG_ANY,
-		      IWL_CFG_BW_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
+		      IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
 		      iwl_cfg_bz, iwl_wh_name),

 	/* Ga (Gl) */
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_GL, IWL_CFG_ANY,
 		      IWL_CFG_RF_TYPE_FM, IWL_CFG_ANY, IWL_CFG_ANY,
-		      IWL_CFG_BW_NO_LIM, IWL_CFG_ANY, IWL_CFG_NO_CDB,
+		      IWL_CFG_320, IWL_CFG_ANY, IWL_CFG_NO_CDB,
 		      iwl_cfg_gl, iwl_gl_name),
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_GL, IWL_CFG_ANY,
 		      IWL_CFG_RF_TYPE_FM, IWL_CFG_ANY, IWL_CFG_ANY,
-		      160, IWL_CFG_ANY, IWL_CFG_NO_CDB,
+		      IWL_CFG_NO_320, IWL_CFG_ANY, IWL_CFG_NO_CDB,
 		      iwl_cfg_gl, iwl_mtp_name),

 	/* Sc */
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_SC, IWL_CFG_ANY,
 		      IWL_CFG_RF_TYPE_GF, IWL_CFG_ANY, IWL_CFG_ANY,
-		      IWL_CFG_BW_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
+		      IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
 		      iwl_cfg_sc, iwl_ax211_name),
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_SC, IWL_CFG_ANY,
 		      IWL_CFG_RF_TYPE_FM, IWL_CFG_ANY, IWL_CFG_ANY,
-		      IWL_CFG_BW_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
+		      IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
 		      iwl_cfg_sc, iwl_fm_name),
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_SC, IWL_CFG_ANY,
 		      IWL_CFG_RF_TYPE_WH, IWL_CFG_ANY, IWL_CFG_ANY,
-		      IWL_CFG_BW_NO_LIM, IWL_CFG_ANY, IWL_CFG_ANY,
+		      IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
 		      iwl_cfg_sc, iwl_wh_name),
-	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
-		      IWL_CFG_MAC_TYPE_SC, IWL_CFG_ANY,
-		      IWL_CFG_RF_TYPE_WH, IWL_CFG_ANY, IWL_CFG_ANY,
-		      160, IWL_CFG_ANY, IWL_CFG_ANY,
-		      iwl_cfg_sc, iwl_sp_name),
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_SC2, IWL_CFG_ANY,
 		      IWL_CFG_RF_TYPE_GF, IWL_CFG_ANY, IWL_CFG_ANY,
-		      IWL_CFG_BW_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
+		      IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
 		      iwl_cfg_sc2, iwl_ax211_name),
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_SC2, IWL_CFG_ANY,
 		      IWL_CFG_RF_TYPE_FM, IWL_CFG_ANY, IWL_CFG_ANY,
-		      IWL_CFG_BW_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
+		      IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
 		      iwl_cfg_sc2, iwl_fm_name),
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_SC2, IWL_CFG_ANY,
 		      IWL_CFG_RF_TYPE_WH, IWL_CFG_ANY, IWL_CFG_ANY,
-		      IWL_CFG_BW_NO_LIM, IWL_CFG_ANY, IWL_CFG_ANY,
+		      IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
 		      iwl_cfg_sc2, iwl_wh_name),
-	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
-		      IWL_CFG_MAC_TYPE_SC2, IWL_CFG_ANY,
-		      IWL_CFG_RF_TYPE_WH, IWL_CFG_ANY, IWL_CFG_ANY,
-		      160, IWL_CFG_ANY, IWL_CFG_ANY,
-		      iwl_cfg_sc2, iwl_sp_name),
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_SC2F, IWL_CFG_ANY,
 		      IWL_CFG_RF_TYPE_GF, IWL_CFG_ANY, IWL_CFG_ANY,
-		      IWL_CFG_BW_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
+		      IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
 		      iwl_cfg_sc2f, iwl_ax211_name),
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_SC2F, IWL_CFG_ANY,
 		      IWL_CFG_RF_TYPE_FM, IWL_CFG_ANY, IWL_CFG_ANY,
-		      IWL_CFG_BW_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
+		      IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
 		      iwl_cfg_sc2f, iwl_fm_name),
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_SC2F, IWL_CFG_ANY,
 		      IWL_CFG_RF_TYPE_WH, IWL_CFG_ANY, IWL_CFG_ANY,
-		      IWL_CFG_BW_NO_LIM, IWL_CFG_ANY, IWL_CFG_ANY,
+		      IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
 		      iwl_cfg_sc2f, iwl_wh_name),
-	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
-		      IWL_CFG_MAC_TYPE_SC2F, IWL_CFG_ANY,
-		      IWL_CFG_RF_TYPE_WH, IWL_CFG_ANY, IWL_CFG_ANY,
-		      160, IWL_CFG_ANY, IWL_CFG_ANY,
-		      iwl_cfg_sc2f, iwl_sp_name),

 	/* Dr */
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_DR, IWL_CFG_ANY,
 		      IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
-		      IWL_CFG_BW_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
+		      IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
 		      iwl_cfg_dr, iwl_dr_name),

 	/* Br */
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_BR, IWL_CFG_ANY,
 		      IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
-		      IWL_CFG_BW_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
+		      IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
 		      iwl_cfg_br, iwl_br_name),
 #endif /* CONFIG_IWLMLD */
 };
@@ -1398,7 +1382,7 @@ out:
 VISIBLE_IF_IWLWIFI_KUNIT const struct iwl_dev_info *
 iwl_pci_find_dev_info(u16 device, u16 subsystem_device,
 		      u16 mac_type, u8 mac_step, u16 rf_type, u8 cdb,
-		      u8 jacket, u8 rf_id, u8 bw_limit, u8 cores, u8 rf_step)
+		      u8 jacket, u8 rf_id, u8 no_160, u8 cores, u8 rf_step)
 {
 	int num_devices = ARRAY_SIZE(iwl_dev_info_table);
 	int i;
@@ -1441,15 +1425,8 @@ iwl_pci_find_dev_info(u16 device, u16 subsystem_device,
 		    dev_info->rf_id != rf_id)
 			continue;

-		/*
-		 * Check that bw_limit have the same "boolean" value since
-		 * IWL_SUBDEVICE_BW_LIM can only return a boolean value and
-		 * dev_info->bw_limit encodes a non-boolean value.
-		 * dev_info->bw_limit == IWL_CFG_BW_NO_LIM must be equal to
-		 * !bw_limit to have a match.
-		 */
-		if (dev_info->bw_limit != IWL_CFG_BW_ANY &&
-		    (dev_info->bw_limit == IWL_CFG_BW_NO_LIM) == !!bw_limit)
+		if (dev_info->no_160 != (u8)IWL_CFG_ANY &&
+		    dev_info->no_160 != no_160)
 			continue;

 		if (dev_info->cores != (u8)IWL_CFG_ANY &&
@@ -1587,13 +1564,13 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 				    CSR_HW_RFID_IS_CDB(iwl_trans->hw_rf_id),
 				    CSR_HW_RFID_IS_JACKET(iwl_trans->hw_rf_id),
 				    IWL_SUBDEVICE_RF_ID(pdev->subsystem_device),
-				    IWL_SUBDEVICE_BW_LIM(pdev->subsystem_device),
+				    IWL_SUBDEVICE_NO_160(pdev->subsystem_device),
 				    IWL_SUBDEVICE_CORES(pdev->subsystem_device),
 				    CSR_HW_RFID_STEP(iwl_trans->hw_rf_id));
 	if (dev_info) {
 		iwl_trans->cfg = dev_info->cfg;
 		iwl_trans->name = dev_info->name;
-		iwl_trans->bw_limit = dev_info->bw_limit;
+		iwl_trans->no_160 = dev_info->no_160 == IWL_CFG_NO_160;
 	}

 #if IS_ENABLED(CONFIG_IWLMVM)
@@ -1759,11 +1736,27 @@ static int _iwl_pci_resume(struct device *device, bool restore)
 	 * Scratch value was altered, this means the device was powered off, we
 	 * need to reset it completely.
 	 * Note: MAC (bits 0:7) will be cleared upon suspend even with wowlan,
-	 * so assume that any bits there mean that the device is usable.
+	 * but not bits [15:8]. So if we have bits set in lower word, assume
+	 * the device is alive.
+	 * For older devices, just try silently to grab the NIC.
 	 */
-	if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_BZ &&
-	    !iwl_read32(trans, CSR_FUNC_SCRATCH))
-		device_was_powered_off = true;
+	if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_BZ) {
+		if (!(iwl_read32(trans, CSR_FUNC_SCRATCH) &
+		      CSR_FUNC_SCRATCH_POWER_OFF_MASK))
+			device_was_powered_off = true;
+	} else {
+		/*
+		 * bh are re-enabled by iwl_trans_pcie_release_nic_access,
+		 * so re-enable them if _iwl_trans_pcie_grab_nic_access fails.
+		 */
+		local_bh_disable();
+		if (_iwl_trans_pcie_grab_nic_access(trans, true)) {
+			iwl_trans_pcie_release_nic_access(trans);
+		} else {
+			device_was_powered_off = true;
+			local_bh_enable();
+		}
+	}

 	if (restore || device_was_powered_off) {
 		trans->state = IWL_TRANS_NO_FW;


@@ -558,10 +558,10 @@ void iwl_trans_pcie_free(struct iwl_trans *trans);
 void iwl_trans_pcie_free_pnvm_dram_regions(struct iwl_dram_regions *dram_regions,
 					   struct device *dev);

-bool __iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans);
-#define _iwl_trans_pcie_grab_nic_access(trans)	\
+bool __iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans, bool silent);
+#define _iwl_trans_pcie_grab_nic_access(trans, silent)	\
 	__cond_lock(nic_access_nobh,		\
-		    likely(__iwl_trans_pcie_grab_nic_access(trans)))
+		    likely(__iwl_trans_pcie_grab_nic_access(trans, silent)))

 void iwl_trans_pcie_check_product_reset_status(struct pci_dev *pdev);
 void iwl_trans_pcie_check_product_reset_mode(struct pci_dev *pdev);
@@ -1105,7 +1105,8 @@ void iwl_trans_pcie_set_bits_mask(struct iwl_trans *trans, u32 reg,
 int iwl_trans_pcie_read_config32(struct iwl_trans *trans, u32 ofs,
 				 u32 *val);
 bool iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans);
-void iwl_trans_pcie_release_nic_access(struct iwl_trans *trans);
+void __releases(nic_access_nobh)
+iwl_trans_pcie_release_nic_access(struct iwl_trans *trans);

 /* transport gen 1 exported functions */
 void iwl_trans_pcie_fw_alive(struct iwl_trans *trans, u32 scd_addr);


@@ -2351,7 +2351,8 @@ void iwl_trans_pcie_reset(struct iwl_trans *trans, enum iwl_reset_mode mode)
 	struct iwl_trans_pcie_removal *removal;
 	char _msg = 0, *msg = &_msg;

-	if (WARN_ON(mode < IWL_RESET_MODE_REMOVE_ONLY))
+	if (WARN_ON(mode < IWL_RESET_MODE_REMOVE_ONLY ||
+		    mode == IWL_RESET_MODE_BACKOFF))
 		return;

 	if (test_bit(STATUS_TRANS_DEAD, &trans->status))
@@ -2405,7 +2406,7 @@ EXPORT_SYMBOL(iwl_trans_pcie_reset);
  * This version doesn't disable BHs but rather assumes they're
  * already disabled.
  */
-bool __iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans)
+bool __iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans, bool silent)
 {
 	int ret;
 	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
@@ -2457,6 +2458,11 @@ bool __iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans)
 	if (unlikely(ret < 0)) {
 		u32 cntrl = iwl_read32(trans, CSR_GP_CNTRL);

+		if (silent) {
+			spin_unlock(&trans_pcie->reg_lock);
+			return false;
+		}
+
 		WARN_ONCE(1,
 			  "Timeout waiting for hardware access (CSR_GP_CNTRL 0x%08x)\n",
 			  cntrl);
@@ -2488,7 +2494,7 @@ bool iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans)
 	bool ret;

 	local_bh_disable();
-	ret = __iwl_trans_pcie_grab_nic_access(trans);
+	ret = __iwl_trans_pcie_grab_nic_access(trans, false);
 	if (ret) {
 		/* keep BHs disabled until iwl_trans_pcie_release_nic_access */
 		return ret;
@@ -2497,7 +2503,8 @@ bool iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans)
 	return false;
 }

-void iwl_trans_pcie_release_nic_access(struct iwl_trans *trans)
+void __releases(nic_access_nobh)
+iwl_trans_pcie_release_nic_access(struct iwl_trans *trans)
 {
 	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
@@ -2524,6 +2531,7 @@ void iwl_trans_pcie_release_nic_access(struct iwl_trans *trans)
 	 * scheduled on different CPUs (after we drop reg_lock).
 	 */
 out:
+	__release(nic_access_nobh);
 	spin_unlock_bh(&trans_pcie->reg_lock);
 }
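
The __releases()/__release() pairing above exists for sparse's lock-context checking: the grab side is wrapped in __cond_lock() (see the internal.h hunk), so the checker counts the context as acquired only when the call returns true, and the release side must balance it explicitly, even on the out: path. A minimal standalone sketch of the same pattern, with hypothetical demo_* names; the annotation macros mirror the kernel's compiler_types.h and compile to no-ops outside sparse:

#include <stdbool.h>
#include <pthread.h>

#define __acquire(x)      (void)0
#define __release(x)      (void)0
#define __releases(x)
#define __cond_lock(x, c) (c) /* under sparse: ((c) ? ({ __acquire(x); 1; }) : 0) */

struct demo { pthread_mutex_t lock; bool usable; };

static bool __demo_try_grab(struct demo *d)
{
	pthread_mutex_lock(&d->lock);
	if (d->usable)
		return true;            /* success: return with the lock held */
	pthread_mutex_unlock(&d->lock); /* failure: leave no context behind */
	return false;
}

/* sparse sees the context held only on the true branch */
#define demo_try_grab(d) __cond_lock(demo_ctx, __demo_try_grab(d))

static void __releases(demo_ctx) demo_release(struct demo *d)
{
	__release(demo_ctx); /* balances the conditional acquire */
	pthread_mutex_unlock(&d->lock);
}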


@@ -1021,7 +1021,7 @@ static int iwl_pcie_set_cmd_in_flight(struct iwl_trans *trans,
 	 * returned. This needs to be done only on NICs that have
 	 * apmg_wake_up_wa set (see above.)
 	 */
-	if (!_iwl_trans_pcie_grab_nic_access(trans))
+	if (!_iwl_trans_pcie_grab_nic_access(trans, false))
 		return -EIO;

 	/*


@@ -2,7 +2,7 @@
 /*
  * KUnit tests for the iwlwifi device info table
  *
- * Copyright (C) 2023-2025 Intel Corporation
+ * Copyright (C) 2023-2024 Intel Corporation
  */
 #include <kunit/test.h>
 #include <linux/pci.h>
@@ -13,9 +13,9 @@ MODULE_IMPORT_NS("EXPORTED_FOR_KUNIT_TESTING");
 static void iwl_pci_print_dev_info(const char *pfx, const struct iwl_dev_info *di)
 {
-	printk(KERN_DEBUG "%sdev=%.4x,subdev=%.4x,mac_type=%.4x,mac_step=%.4x,rf_type=%.4x,cdb=%d,jacket=%d,rf_id=%.2x,bw_limit=%d,cores=%.2x\n",
+	printk(KERN_DEBUG "%sdev=%.4x,subdev=%.4x,mac_type=%.4x,mac_step=%.4x,rf_type=%.4x,cdb=%d,jacket=%d,rf_id=%.2x,no_160=%d,cores=%.2x\n",
 	       pfx, di->device, di->subdevice, di->mac_type, di->mac_step,
-	       di->rf_type, di->cdb, di->jacket, di->rf_id, di->bw_limit,
+	       di->rf_type, di->cdb, di->jacket, di->rf_id, di->no_160,
 	       di->cores);
 }
@@ -31,13 +31,8 @@ static void devinfo_table_order(struct kunit *test)
 					    di->mac_type, di->mac_step,
 					    di->rf_type, di->cdb,
 					    di->jacket, di->rf_id,
-					    di->bw_limit != IWL_CFG_BW_NO_LIM,
-					    di->cores, di->rf_step);
-		if (!ret) {
-			iwl_pci_print_dev_info("No entry found for: ", di);
-			KUNIT_FAIL(test,
-				   "No entry found for entry at index %d\n", idx);
-		} else if (ret != di) {
+					    di->no_160, di->cores, di->rf_step);
+		if (ret != di) {
 			iwl_pci_print_dev_info("searched: ", di);
 			iwl_pci_print_dev_info("found: ", ret);
 			KUNIT_FAIL(test,


@@ -102,7 +102,6 @@ int plfxlc_mac_init_hw(struct ieee80211_hw *hw)
 void plfxlc_mac_release(struct plfxlc_mac *mac)
 {
 	plfxlc_chip_release(&mac->chip);
-	lockdep_assert_held(&mac->lock);
 }

 int plfxlc_op_start(struct ieee80211_hw *hw)


@@ -2578,12 +2578,60 @@ static const struct ocp_sma_op ocp_fb_sma_op = {
 	.set_output = ptp_ocp_sma_fb_set_output,
 };

+static int
+ptp_ocp_sma_adva_set_output(struct ptp_ocp *bp, int sma_nr, u32 val)
+{
+	u32 reg, mask, shift;
+	unsigned long flags;
+	u32 __iomem *gpio;
+
+	gpio = sma_nr > 2 ? &bp->sma_map1->gpio2 : &bp->sma_map2->gpio2;
+	shift = sma_nr & 1 ? 0 : 16;
+
+	mask = 0xffff << (16 - shift);
+
+	spin_lock_irqsave(&bp->lock, flags);
+
+	reg = ioread32(gpio);
+	reg = (reg & mask) | (val << shift);
+
+	iowrite32(reg, gpio);
+
+	spin_unlock_irqrestore(&bp->lock, flags);
+
+	return 0;
+}
+
+static int
+ptp_ocp_sma_adva_set_inputs(struct ptp_ocp *bp, int sma_nr, u32 val)
+{
+	u32 reg, mask, shift;
+	unsigned long flags;
+	u32 __iomem *gpio;
+
+	gpio = sma_nr > 2 ? &bp->sma_map2->gpio1 : &bp->sma_map1->gpio1;
+	shift = sma_nr & 1 ? 0 : 16;
+
+	mask = 0xffff << (16 - shift);
+
+	spin_lock_irqsave(&bp->lock, flags);
+
+	reg = ioread32(gpio);
+	reg = (reg & mask) | (val << shift);
+
+	iowrite32(reg, gpio);
+
+	spin_unlock_irqrestore(&bp->lock, flags);
+
+	return 0;
+}
+
 static const struct ocp_sma_op ocp_adva_sma_op = {
 	.tbl		= { ptp_ocp_adva_sma_in, ptp_ocp_adva_sma_out },
 	.init		= ptp_ocp_sma_fb_init,
 	.get		= ptp_ocp_sma_fb_get,
-	.set_inputs	= ptp_ocp_sma_fb_set_inputs,
-	.set_output	= ptp_ocp_sma_fb_set_output,
+	.set_inputs	= ptp_ocp_sma_adva_set_inputs,
+	.set_output	= ptp_ocp_sma_adva_set_output,
 };

 static int
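
The two adva helpers added above pack a pair of SMA connectors into one 32-bit GPIO word and use the parity of sma_nr to pick which half-word to rewrite. A standalone sketch of that shift/mask arithmetic (function name and values are ours, not from the driver):

#include <stdint.h>
#include <stdio.h>

/* keep one 16-bit half of reg, write val into the other half */
static uint32_t set_half(uint32_t reg, int sma_nr, uint32_t val)
{
	unsigned int shift = (sma_nr & 1) ? 0 : 16;
	uint32_t mask = 0xffffu << (16 - shift);

	return (reg & mask) | (val << shift);
}

int main(void)
{
	printf("%08x\n", set_half(0xaaaabbbb, 1, 0x1234)); /* aaaa1234 */
	printf("%08x\n", set_half(0xaaaabbbb, 2, 0x1234)); /* 1234bbbb */
	return 0;
}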


@@ -1931,6 +1931,8 @@ struct hci_cp_le_pa_create_sync {
 	__u8      sync_cte_type;
 } __packed;

+#define HCI_OP_LE_PA_CREATE_SYNC_CANCEL	0x2045
+
 #define HCI_OP_LE_PA_TERM_SYNC		0x2046
 struct hci_cp_le_pa_term_sync {
 	__le16   handle;
@@ -2830,7 +2832,7 @@ struct hci_evt_le_create_big_complete {
 	__le16  bis_handle[];
 } __packed;

-#define HCI_EVT_LE_BIG_SYNC_ESTABILISHED 0x1d
+#define HCI_EVT_LE_BIG_SYNC_ESTABLISHED 0x1d
 struct hci_evt_le_big_sync_estabilished {
 	__u8    status;
 	__u8    handle;


@@ -1113,10 +1113,8 @@ static inline struct hci_conn *hci_conn_hash_lookup_bis(struct hci_dev *hdev,
 	return NULL;
 }

-static inline struct hci_conn *hci_conn_hash_lookup_sid(struct hci_dev *hdev,
-							__u8 sid,
-							bdaddr_t *dst,
-							__u8 dst_type)
+static inline struct hci_conn *
+hci_conn_hash_lookup_create_pa_sync(struct hci_dev *hdev)
 {
 	struct hci_conn_hash *h = &hdev->conn_hash;
 	struct hci_conn  *c;
@@ -1124,8 +1122,10 @@ static inline struct hci_conn *hci_conn_hash_lookup_sid(struct hci_dev *hdev,
 	rcu_read_lock();

 	list_for_each_entry_rcu(c, &h->list, list) {
-		if (c->type != ISO_LINK || bacmp(&c->dst, dst) ||
-		    c->dst_type != dst_type || c->sid != sid)
+		if (c->type != ISO_LINK)
+			continue;
+
+		if (!test_bit(HCI_CONN_CREATE_PA_SYNC, &c->flags))
 			continue;

 		rcu_read_unlock();
@@ -1524,8 +1524,6 @@ bool hci_setup_sync(struct hci_conn *conn, __u16 handle);
 void hci_sco_setup(struct hci_conn *conn, __u8 status);
 bool hci_iso_setup_path(struct hci_conn *conn);
 int hci_le_create_cis_pending(struct hci_dev *hdev);
-int hci_pa_create_sync_pending(struct hci_dev *hdev);
-int hci_le_big_create_sync_pending(struct hci_dev *hdev);
 int hci_conn_check_create_cis(struct hci_conn *conn);

 struct hci_conn *hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t *dst,
@@ -1566,9 +1564,9 @@ struct hci_conn *hci_connect_bis(struct hci_dev *hdev, bdaddr_t *dst,
 			   __u8 data_len, __u8 *data);
 struct hci_conn *hci_pa_create_sync(struct hci_dev *hdev, bdaddr_t *dst,
 		       __u8 dst_type, __u8 sid, struct bt_iso_qos *qos);
-int hci_le_big_create_sync(struct hci_dev *hdev, struct hci_conn *hcon,
-			   struct bt_iso_qos *qos,
-			   __u16 sync_handle, __u8 num_bis, __u8 bis[]);
+int hci_conn_big_create_sync(struct hci_dev *hdev, struct hci_conn *hcon,
+			     struct bt_iso_qos *qos, __u16 sync_handle,
+			     __u8 num_bis, __u8 bis[]);
 int hci_conn_check_link_mode(struct hci_conn *conn);
 int hci_conn_check_secure(struct hci_conn *conn, __u8 sec_level);
 int hci_conn_security(struct hci_conn *conn, __u8 sec_level, __u8 auth_type,


@@ -185,3 +185,6 @@ int hci_connect_le_sync(struct hci_dev *hdev, struct hci_conn *conn);
 int hci_cancel_connect_sync(struct hci_dev *hdev, struct hci_conn *conn);
 int hci_le_conn_update_sync(struct hci_dev *hdev, struct hci_conn *conn,
 			    struct hci_conn_params *params);
+
+int hci_connect_pa_sync(struct hci_dev *hdev, struct hci_conn *conn);
+int hci_connect_big_sync(struct hci_dev *hdev, struct hci_conn *conn);


@@ -71,9 +71,6 @@ struct xdp_sock {
 	 */
 	u32 tx_budget_spent;

-	/* Protects generic receive. */
-	spinlock_t rx_lock;
-
 	/* Statistics */
 	u64 rx_dropped;
 	u64 rx_queue_full;


@@ -53,6 +53,8 @@ struct xsk_buff_pool {
 	refcount_t users;
 	struct xdp_umem *umem;
 	struct work_struct work;
+	/* Protects generic receive in shared and non-shared umem mode. */
+	spinlock_t rx_lock;
 	struct list_head free_list;
 	struct list_head xskb_list;
 	u32 heads_cnt;
@@ -238,8 +240,8 @@ static inline u64 xp_get_handle(struct xdp_buff_xsk *xskb,
 		return orig_addr;

 	offset = xskb->xdp.data - xskb->xdp.data_hard_start;
-	orig_addr -= offset;
 	offset += pool->headroom;
+	orig_addr -= offset;
 	return orig_addr + (offset << XSK_UNALIGNED_BUF_OFFSET_SHIFT);
 }
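
The single-line move in xp_get_handle() restores the invariant that the encoded base plus the encoded offset equals the umem address of xdp.data: the base has to drop by the full offset, pool->headroom included, not only by the distance from data_hard_start. A standalone arithmetic check, with illustrative numbers:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t orig_addr = 10000; /* xdp.data, relative to the umem */
	uint64_t data_off  = 320;   /* xdp.data - xdp.data_hard_start */
	uint64_t headroom  = 256;   /* pool->headroom */

	uint64_t old_base = orig_addr - data_off;	/* 9680 */
	uint64_t off      = data_off + headroom;	/* 576  */
	uint64_t new_base = orig_addr - off;		/* 9424 */

	/* only the fixed ordering satisfies base + off == orig_addr */
	printf("old: %llu, fixed: %llu\n",
	       (unsigned long long)(old_base + off),	/* 10256: off by 256 */
	       (unsigned long long)(new_base + off));	/* 10000 */
	return 0;
}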


@@ -31,11 +31,6 @@ enum ethtool_header_flags {
 	ETHTOOL_FLAG_STATS = 4,
 };

-enum {
-	ETHTOOL_PHY_UPSTREAM_TYPE_MAC,
-	ETHTOOL_PHY_UPSTREAM_TYPE_PHY,
-};
-
 enum ethtool_tcp_data_split {
 	ETHTOOL_TCP_DATA_SPLIT_UNKNOWN,
 	ETHTOOL_TCP_DATA_SPLIT_DISABLED,


@@ -2064,95 +2064,6 @@ static int create_big_sync(struct hci_dev *hdev, void *data)
 	return hci_le_create_big(conn, &conn->iso_qos);
 }

-static void create_pa_complete(struct hci_dev *hdev, void *data, int err)
-{
-	bt_dev_dbg(hdev, "");
-
-	if (err)
-		bt_dev_err(hdev, "Unable to create PA: %d", err);
-}
-
-static bool hci_conn_check_create_pa_sync(struct hci_conn *conn)
-{
-	if (conn->type != ISO_LINK || conn->sid == HCI_SID_INVALID)
-		return false;
-
-	return true;
-}
-
-static int create_pa_sync(struct hci_dev *hdev, void *data)
-{
-	struct hci_cp_le_pa_create_sync cp = {0};
-	struct hci_conn *conn;
-	int err = 0;
-
-	hci_dev_lock(hdev);
-
-	rcu_read_lock();
-
-	/* The spec allows only one pending LE Periodic Advertising Create
-	 * Sync command at a time. If the command is pending now, don't do
-	 * anything. We check for pending connections after each PA Sync
-	 * Established event.
-	 *
-	 * BLUETOOTH CORE SPECIFICATION Version 5.3 | Vol 4, Part E
-	 * page 2493:
-	 *
-	 * If the Host issues this command when another HCI_LE_Periodic_
-	 * Advertising_Create_Sync command is pending, the Controller shall
-	 * return the error code Command Disallowed (0x0C).
-	 */
-	list_for_each_entry_rcu(conn, &hdev->conn_hash.list, list) {
-		if (test_bit(HCI_CONN_CREATE_PA_SYNC, &conn->flags))
-			goto unlock;
-	}
-
-	list_for_each_entry_rcu(conn, &hdev->conn_hash.list, list) {
-		if (hci_conn_check_create_pa_sync(conn)) {
-			struct bt_iso_qos *qos = &conn->iso_qos;
-
-			cp.options = qos->bcast.options;
-			cp.sid = conn->sid;
-			cp.addr_type = conn->dst_type;
-			bacpy(&cp.addr, &conn->dst);
-			cp.skip = cpu_to_le16(qos->bcast.skip);
-			cp.sync_timeout = cpu_to_le16(qos->bcast.sync_timeout);
-			cp.sync_cte_type = qos->bcast.sync_cte_type;
-
-			break;
-		}
-	}
-
-unlock:
-	rcu_read_unlock();
-
-	hci_dev_unlock(hdev);
-
-	if (bacmp(&cp.addr, BDADDR_ANY)) {
-		hci_dev_set_flag(hdev, HCI_PA_SYNC);
-		set_bit(HCI_CONN_CREATE_PA_SYNC, &conn->flags);
-
-		err = __hci_cmd_sync_status(hdev, HCI_OP_LE_PA_CREATE_SYNC,
-					    sizeof(cp), &cp, HCI_CMD_TIMEOUT);
-		if (!err)
-			err = hci_update_passive_scan_sync(hdev);
-
-		if (err) {
-			hci_dev_clear_flag(hdev, HCI_PA_SYNC);
-			clear_bit(HCI_CONN_CREATE_PA_SYNC, &conn->flags);
-		}
-	}
-
-	return err;
-}
-
-int hci_pa_create_sync_pending(struct hci_dev *hdev)
-{
-	/* Queue start pa_create_sync and scan */
-	return hci_cmd_sync_queue(hdev, create_pa_sync,
-				  NULL, create_pa_complete);
-}
-
 struct hci_conn *hci_pa_create_sync(struct hci_dev *hdev, bdaddr_t *dst,
 				    __u8 dst_type, __u8 sid,
 				    struct bt_iso_qos *qos)
@@ -2167,97 +2078,18 @@ struct hci_conn *hci_pa_create_sync(struct hci_dev *hdev, bdaddr_t *dst,
 	conn->dst_type = dst_type;
 	conn->sid = sid;
 	conn->state = BT_LISTEN;
+	conn->conn_timeout = msecs_to_jiffies(qos->bcast.sync_timeout * 10);

 	hci_conn_hold(conn);

-	hci_pa_create_sync_pending(hdev);
+	hci_connect_pa_sync(hdev, conn);

 	return conn;
 }

-static bool hci_conn_check_create_big_sync(struct hci_conn *conn)
-{
-	if (!conn->num_bis)
-		return false;
-
-	return true;
-}
-
-static void big_create_sync_complete(struct hci_dev *hdev, void *data, int err)
-{
-	bt_dev_dbg(hdev, "");
-
-	if (err)
-		bt_dev_err(hdev, "Unable to create BIG sync: %d", err);
-}
-
-static int big_create_sync(struct hci_dev *hdev, void *data)
-{
-	DEFINE_FLEX(struct hci_cp_le_big_create_sync, pdu, bis, num_bis, 0x11);
-	struct hci_conn *conn;
-
-	rcu_read_lock();
-
-	pdu->num_bis = 0;
-
-	/* The spec allows only one pending LE BIG Create Sync command at
-	 * a time. If the command is pending now, don't do anything. We
-	 * check for pending connections after each BIG Sync Established
-	 * event.
-	 *
-	 * BLUETOOTH CORE SPECIFICATION Version 5.3 | Vol 4, Part E
-	 * page 2586:
-	 *
-	 * If the Host sends this command when the Controller is in the
-	 * process of synchronizing to any BIG, i.e. the HCI_LE_BIG_Sync_
-	 * Established event has not been generated, the Controller shall
-	 * return the error code Command Disallowed (0x0C).
-	 */
-	list_for_each_entry_rcu(conn, &hdev->conn_hash.list, list) {
-		if (test_bit(HCI_CONN_CREATE_BIG_SYNC, &conn->flags))
-			goto unlock;
-	}
-
-	list_for_each_entry_rcu(conn, &hdev->conn_hash.list, list) {
-		if (hci_conn_check_create_big_sync(conn)) {
-			struct bt_iso_qos *qos = &conn->iso_qos;
-
-			set_bit(HCI_CONN_CREATE_BIG_SYNC, &conn->flags);
-
-			pdu->handle = qos->bcast.big;
-			pdu->sync_handle = cpu_to_le16(conn->sync_handle);
-			pdu->encryption = qos->bcast.encryption;
-			memcpy(pdu->bcode, qos->bcast.bcode,
-			       sizeof(pdu->bcode));
-			pdu->mse = qos->bcast.mse;
-			pdu->timeout = cpu_to_le16(qos->bcast.timeout);
-			pdu->num_bis = conn->num_bis;
-			memcpy(pdu->bis, conn->bis, conn->num_bis);
-
-			break;
-		}
-	}
-
-unlock:
-	rcu_read_unlock();
-
-	if (!pdu->num_bis)
-		return 0;
-
-	return hci_send_cmd(hdev, HCI_OP_LE_BIG_CREATE_SYNC,
-			    struct_size(pdu, bis, pdu->num_bis), pdu);
-}
-
-int hci_le_big_create_sync_pending(struct hci_dev *hdev)
-{
-	/* Queue big_create_sync */
-	return hci_cmd_sync_queue_once(hdev, big_create_sync,
-				       NULL, big_create_sync_complete);
-}
-
-int hci_le_big_create_sync(struct hci_dev *hdev, struct hci_conn *hcon,
-			   struct bt_iso_qos *qos,
-			   __u16 sync_handle, __u8 num_bis, __u8 bis[])
+int hci_conn_big_create_sync(struct hci_dev *hdev, struct hci_conn *hcon,
+			     struct bt_iso_qos *qos, __u16 sync_handle,
+			     __u8 num_bis, __u8 bis[])
 {
 	int err;
@@ -2274,9 +2106,10 @@ int hci_le_big_create_sync(struct hci_dev *hdev, struct hci_conn *hcon,
 		hcon->num_bis = num_bis;
 		memcpy(hcon->bis, bis, num_bis);
+		hcon->conn_timeout = msecs_to_jiffies(qos->bcast.timeout * 10);
 	}

-	return hci_le_big_create_sync_pending(hdev);
+	return hci_connect_big_sync(hdev, hcon);
 }

 static void create_big_complete(struct hci_dev *hdev, void *data, int err)


@@ -6378,8 +6378,7 @@ static void hci_le_pa_sync_estabilished_evt(struct hci_dev *hdev, void *data,
 	hci_dev_clear_flag(hdev, HCI_PA_SYNC);

-	conn = hci_conn_hash_lookup_sid(hdev, ev->sid, &ev->bdaddr,
-					ev->bdaddr_type);
+	conn = hci_conn_hash_lookup_create_pa_sync(hdev);
 	if (!conn) {
 		bt_dev_err(hdev,
 			   "Unable to find connection for dst %pMR sid 0x%2.2x",
@@ -6418,9 +6417,6 @@ static void hci_le_pa_sync_estabilished_evt(struct hci_dev *hdev, void *data,
 	}

 unlock:
-	/* Handle any other pending PA sync command */
-	hci_pa_create_sync_pending(hdev);
-
 	hci_dev_unlock(hdev);
 }
@@ -6932,7 +6928,7 @@ static void hci_le_big_sync_established_evt(struct hci_dev *hdev, void *data,
 	bt_dev_dbg(hdev, "status 0x%2.2x", ev->status);

-	if (!hci_le_ev_skb_pull(hdev, skb, HCI_EVT_LE_BIG_SYNC_ESTABILISHED,
+	if (!hci_le_ev_skb_pull(hdev, skb, HCI_EVT_LE_BIG_SYNC_ESTABLISHED,
 				flex_array_size(ev, bis, ev->num_bis)))
 		return;
@@ -7003,9 +6999,6 @@ static void hci_le_big_sync_established_evt(struct hci_dev *hdev, void *data,
 	}

 unlock:
-	/* Handle any other pending BIG sync command */
-	hci_le_big_create_sync_pending(hdev);
-
 	hci_dev_unlock(hdev);
 }
@@ -7127,8 +7120,8 @@ static const struct hci_le_ev {
 		     hci_le_create_big_complete_evt,
 		     sizeof(struct hci_evt_le_create_big_complete),
 		     HCI_MAX_EVENT_SIZE),
-	/* [0x1d = HCI_EV_LE_BIG_SYNC_ESTABILISHED] */
-	HCI_LE_EV_VL(HCI_EVT_LE_BIG_SYNC_ESTABILISHED,
+	/* [0x1d = HCI_EV_LE_BIG_SYNC_ESTABLISHED] */
+	HCI_LE_EV_VL(HCI_EVT_LE_BIG_SYNC_ESTABLISHED,
 		     hci_le_big_sync_established_evt,
 		     sizeof(struct hci_evt_le_big_sync_estabilished),
 		     HCI_MAX_EVENT_SIZE),


@@ -2693,16 +2693,16 @@ static u8 hci_update_accept_list_sync(struct hci_dev *hdev)

 	/* Force address filtering if PA Sync is in progress */
 	if (hci_dev_test_flag(hdev, HCI_PA_SYNC)) {
-		struct hci_cp_le_pa_create_sync *sent;
+		struct hci_conn *conn;

-		sent = hci_sent_cmd_data(hdev, HCI_OP_LE_PA_CREATE_SYNC);
-		if (sent) {
+		conn = hci_conn_hash_lookup_create_pa_sync(hdev);
+		if (conn) {
 			struct conn_params pa;

 			memset(&pa, 0, sizeof(pa));

-			bacpy(&pa.addr, &sent->addr);
-			pa.addr_type = sent->addr_type;
+			bacpy(&pa.addr, &conn->dst);
+			pa.addr_type = conn->dst_type;

 			/* Clear first since there could be addresses left
 			 * behind.
@@ -6895,3 +6895,143 @@ int hci_le_conn_update_sync(struct hci_dev *hdev, struct hci_conn *conn,
 	return __hci_cmd_sync_status(hdev, HCI_OP_LE_CONN_UPDATE,
 				     sizeof(cp), &cp, HCI_CMD_TIMEOUT);
 }
+
+static void create_pa_complete(struct hci_dev *hdev, void *data, int err)
+{
+	bt_dev_dbg(hdev, "err %d", err);
+
+	if (!err)
+		return;
+
+	hci_dev_clear_flag(hdev, HCI_PA_SYNC);
+
+	if (err == -ECANCELED)
+		return;
+
+	hci_dev_lock(hdev);
+
+	hci_update_passive_scan_sync(hdev);
+
+	hci_dev_unlock(hdev);
+}
+
+static int hci_le_pa_create_sync(struct hci_dev *hdev, void *data)
+{
+	struct hci_cp_le_pa_create_sync cp;
+	struct hci_conn *conn = data;
+	struct bt_iso_qos *qos = &conn->iso_qos;
+	int err;
+
+	if (!hci_conn_valid(hdev, conn))
+		return -ECANCELED;
+
+	if (hci_dev_test_and_set_flag(hdev, HCI_PA_SYNC))
+		return -EBUSY;
+
+	/* Mark HCI_CONN_CREATE_PA_SYNC so hci_update_passive_scan_sync can
+	 * program the address in the allow list so PA advertisements can be
+	 * received.
+	 */
+	set_bit(HCI_CONN_CREATE_PA_SYNC, &conn->flags);
+
+	hci_update_passive_scan_sync(hdev);
+
+	memset(&cp, 0, sizeof(cp));
+	cp.options = qos->bcast.options;
+	cp.sid = conn->sid;
+	cp.addr_type = conn->dst_type;
+	bacpy(&cp.addr, &conn->dst);
+	cp.skip = cpu_to_le16(qos->bcast.skip);
+	cp.sync_timeout = cpu_to_le16(qos->bcast.sync_timeout);
+	cp.sync_cte_type = qos->bcast.sync_cte_type;
+
+	/* The spec allows only one pending LE Periodic Advertising Create
+	 * Sync command at a time so we forcefully wait for PA Sync Established
+	 * event since cmd_work can only schedule one command at a time.
+	 *
+	 * BLUETOOTH CORE SPECIFICATION Version 5.3 | Vol 4, Part E
+	 * page 2493:
+	 *
+	 * If the Host issues this command when another HCI_LE_Periodic_
+	 * Advertising_Create_Sync command is pending, the Controller shall
+	 * return the error code Command Disallowed (0x0C).
+	 */
+	err = __hci_cmd_sync_status_sk(hdev, HCI_OP_LE_PA_CREATE_SYNC,
+				       sizeof(cp), &cp,
+				       HCI_EV_LE_PA_SYNC_ESTABLISHED,
+				       conn->conn_timeout, NULL);
+	if (err == -ETIMEDOUT)
+		__hci_cmd_sync_status(hdev, HCI_OP_LE_PA_CREATE_SYNC_CANCEL,
+				      0, NULL, HCI_CMD_TIMEOUT);
+
+	return err;
+}
+
+int hci_connect_pa_sync(struct hci_dev *hdev, struct hci_conn *conn)
+{
+	return hci_cmd_sync_queue_once(hdev, hci_le_pa_create_sync, conn,
+				       create_pa_complete);
+}
+
+static void create_big_complete(struct hci_dev *hdev, void *data, int err)
+{
+	struct hci_conn *conn = data;
+
+	bt_dev_dbg(hdev, "err %d", err);
+
+	if (err == -ECANCELED)
+		return;
+
+	if (hci_conn_valid(hdev, conn))
+		clear_bit(HCI_CONN_CREATE_BIG_SYNC, &conn->flags);
+}
+
+static int hci_le_big_create_sync(struct hci_dev *hdev, void *data)
+{
+	DEFINE_FLEX(struct hci_cp_le_big_create_sync, cp, bis, num_bis, 0x11);
+	struct hci_conn *conn = data;
+	struct bt_iso_qos *qos = &conn->iso_qos;
+	int err;
+
+	if (!hci_conn_valid(hdev, conn))
+		return -ECANCELED;
+
+	set_bit(HCI_CONN_CREATE_BIG_SYNC, &conn->flags);
+
+	memset(cp, 0, sizeof(*cp));
+	cp->handle = qos->bcast.big;
+	cp->sync_handle = cpu_to_le16(conn->sync_handle);
+	cp->encryption = qos->bcast.encryption;
+	memcpy(cp->bcode, qos->bcast.bcode, sizeof(cp->bcode));
+	cp->mse = qos->bcast.mse;
+	cp->timeout = cpu_to_le16(qos->bcast.timeout);
+	cp->num_bis = conn->num_bis;
+	memcpy(cp->bis, conn->bis, conn->num_bis);
+
+	/* The spec allows only one pending LE BIG Create Sync command at
+	 * a time, so we forcefully wait for BIG Sync Established event since
+	 * cmd_work can only schedule one command at a time.
+	 *
+	 * BLUETOOTH CORE SPECIFICATION Version 5.3 | Vol 4, Part E
+	 * page 2586:
+	 *
+	 * If the Host sends this command when the Controller is in the
+	 * process of synchronizing to any BIG, i.e. the HCI_LE_BIG_Sync_
+	 * Established event has not been generated, the Controller shall
+	 * return the error code Command Disallowed (0x0C).
+	 */
+	err = __hci_cmd_sync_status_sk(hdev, HCI_OP_LE_BIG_CREATE_SYNC,
+				       struct_size(cp, bis, cp->num_bis), cp,
+				       HCI_EVT_LE_BIG_SYNC_ESTABLISHED,
+				       conn->conn_timeout, NULL);
+	if (err == -ETIMEDOUT)
+		hci_le_big_terminate_sync(hdev, cp->handle);
+
+	return err;
+}
+
+int hci_connect_big_sync(struct hci_dev *hdev, struct hci_conn *conn)
+{
+	return hci_cmd_sync_queue_once(hdev, hci_le_big_create_sync, conn,
+				       create_big_complete);
+}


@@ -1462,14 +1462,13 @@ static void iso_conn_big_sync(struct sock *sk)
 	lock_sock(sk);

 	if (!test_and_set_bit(BT_SK_BIG_SYNC, &iso_pi(sk)->flags)) {
-		err = hci_le_big_create_sync(hdev, iso_pi(sk)->conn->hcon,
-					     &iso_pi(sk)->qos,
-					     iso_pi(sk)->sync_handle,
-					     iso_pi(sk)->bc_num_bis,
-					     iso_pi(sk)->bc_bis);
+		err = hci_conn_big_create_sync(hdev, iso_pi(sk)->conn->hcon,
+					       &iso_pi(sk)->qos,
+					       iso_pi(sk)->sync_handle,
+					       iso_pi(sk)->bc_num_bis,
+					       iso_pi(sk)->bc_bis);
 		if (err)
-			bt_dev_err(hdev, "hci_le_big_create_sync: %d",
-				   err);
+			bt_dev_err(hdev, "hci_big_create_sync: %d", err);
 	}

 	release_sock(sk);
@@ -1922,7 +1921,7 @@ static void iso_conn_ready(struct iso_conn *conn)
 						 hcon);
 		} else if (test_bit(HCI_CONN_BIG_SYNC_FAILED, &hcon->flags)) {
 			ev = hci_recv_event_data(hcon->hdev,
-						 HCI_EVT_LE_BIG_SYNC_ESTABILISHED);
+						 HCI_EVT_LE_BIG_SYNC_ESTABLISHED);

 			/* Get reference to PA sync parent socket, if it exists */
 			parent = iso_get_sock(&hcon->src, &hcon->dst,
@@ -2113,12 +2112,11 @@ int iso_connect_ind(struct hci_dev *hdev, bdaddr_t *bdaddr, __u8 *flags)
 		if (!test_bit(BT_SK_DEFER_SETUP, &bt_sk(sk)->flags) &&
 		    !test_and_set_bit(BT_SK_BIG_SYNC, &iso_pi(sk)->flags)) {
-			err = hci_le_big_create_sync(hdev,
-						     hcon,
-						     &iso_pi(sk)->qos,
-						     iso_pi(sk)->sync_handle,
-						     iso_pi(sk)->bc_num_bis,
-						     iso_pi(sk)->bc_bis);
+			err = hci_conn_big_create_sync(hdev, hcon,
+						       &iso_pi(sk)->qos,
+						       iso_pi(sk)->sync_handle,
+						       iso_pi(sk)->bc_num_bis,
+						       iso_pi(sk)->bc_bis);
 			if (err) {
 				bt_dev_err(hdev, "hci_le_big_create_sync: %d",
 					   err);


@@ -7415,6 +7415,9 @@ static int l2cap_recv_frag(struct l2cap_conn *conn, struct sk_buff *skb,
 			return -ENOMEM;
 		/* Init rx_len */
 		conn->rx_len = len;
+
+		skb_set_delivery_time(conn->rx_skb, skb->tstamp,
+				      skb->tstamp_type);
 	}

 	/* Copy as much as the rx_skb can hold */


@@ -439,7 +439,7 @@ static void tcp4_check_fraglist_gro(struct list_head *head, struct sk_buff *skb,
 					     iif, sdif);
 	NAPI_GRO_CB(skb)->is_flist = !sk;
 	if (sk)
-		sock_put(sk);
+		sock_gen_put(sk);
 }

 INDIRECT_CALLABLE_SCOPE
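
The lookup feeding this heuristic can return TIME_WAIT or NEW_SYN_RECV minisockets, which are not full sockets and must not be dropped with sock_put(). sock_gen_put() frees by socket state instead; roughly this shape (paraphrased from memory of net/ipv4/inet_hashtables.c, not quoted verbatim):

void sock_gen_put(struct sock *sk)
{
	if (!refcount_dec_and_test(&sk->sk_refcnt))
		return;

	if (sk->sk_state == TCP_TIME_WAIT)
		inet_twsk_free(inet_twsk(sk));
	else if (sk->sk_state == TCP_NEW_SYN_RECV)
		reqsk_free(inet_reqsk(sk));
	else
		sk_free(sk);
}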


@@ -247,6 +247,62 @@ static struct sk_buff *__udpv4_gso_segment_list_csum(struct sk_buff *segs)
 	return segs;
 }

+static void __udpv6_gso_segment_csum(struct sk_buff *seg,
+				     struct in6_addr *oldip,
+				     const struct in6_addr *newip,
+				     __be16 *oldport, __be16 newport)
+{
+	struct udphdr *uh = udp_hdr(seg);
+
+	if (ipv6_addr_equal(oldip, newip) && *oldport == newport)
+		return;
+
+	if (uh->check) {
+		inet_proto_csum_replace16(&uh->check, seg, oldip->s6_addr32,
+					  newip->s6_addr32, true);
+
+		inet_proto_csum_replace2(&uh->check, seg, *oldport, newport,
+					 false);
+		if (!uh->check)
+			uh->check = CSUM_MANGLED_0;
+	}
+
+	*oldip = *newip;
+	*oldport = newport;
+}
+
+static struct sk_buff *__udpv6_gso_segment_list_csum(struct sk_buff *segs)
+{
+	const struct ipv6hdr *iph;
+	const struct udphdr *uh;
+	struct ipv6hdr *iph2;
+	struct sk_buff *seg;
+	struct udphdr *uh2;
+
+	seg = segs;
+	uh = udp_hdr(seg);
+	iph = ipv6_hdr(seg);
+	uh2 = udp_hdr(seg->next);
+	iph2 = ipv6_hdr(seg->next);
+
+	if (!(*(const u32 *)&uh->source ^ *(const u32 *)&uh2->source) &&
+	    ipv6_addr_equal(&iph->saddr, &iph2->saddr) &&
+	    ipv6_addr_equal(&iph->daddr, &iph2->daddr))
+		return segs;
+
+	while ((seg = seg->next)) {
+		uh2 = udp_hdr(seg);
+		iph2 = ipv6_hdr(seg);
+
+		__udpv6_gso_segment_csum(seg, &iph2->saddr, &iph->saddr,
+					 &uh2->source, uh->source);
+		__udpv6_gso_segment_csum(seg, &iph2->daddr, &iph->daddr,
+					 &uh2->dest, uh->dest);
+	}
+
+	return segs;
+}
+
 static struct sk_buff *__udp_gso_segment_list(struct sk_buff *skb,
 					      netdev_features_t features,
 					      bool is_ipv6)
@@ -259,7 +315,10 @@ static struct sk_buff *__udp_gso_segment_list(struct sk_buff *skb,
 	udp_hdr(skb)->len = htons(sizeof(struct udphdr) + mss);

-	return is_ipv6 ? skb : __udpv4_gso_segment_list_csum(skb);
+	if (is_ipv6)
+		return __udpv6_gso_segment_list_csum(skb);
+	else
+		return __udpv4_gso_segment_list_csum(skb);
 }

 struct sk_buff *__udp_gso_segment(struct sk_buff *gso_skb,
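
__udpv6_gso_segment_csum() patches the UDP checksum incrementally rather than recomputing it over the payload, and it maps a resulting zero to CSUM_MANGLED_0 because an all-zero UDP checksum means "none" on the wire. A self-contained illustration of one 16-bit replacement step in the RFC 1624 form HC' = ~(~HC + ~m + m'); the helper name is ours, not a kernel API:

#include <stdint.h>
#include <stdio.h>

static uint16_t csum_fixup16(uint16_t check, uint16_t old, uint16_t new)
{
	uint32_t sum = (uint16_t)~check;

	sum += (uint16_t)~old;
	sum += new;
	sum = (sum & 0xffff) + (sum >> 16); /* fold the carries back in */
	sum = (sum & 0xffff) + (sum >> 16);
	return (uint16_t)~sum;
}

int main(void)
{
	/* e.g. rewriting source port 1000 -> 2000 in a checksummed header */
	printf("%04x\n", csum_fixup16(0x1c2e, 1000, 2000));
	return 0;
}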


@@ -42,7 +42,7 @@ static void tcp6_check_fraglist_gro(struct list_head *head, struct sk_buff *skb,
 					     iif, sdif);
 	NAPI_GRO_CB(skb)->is_flist = !sk;
 	if (sk)
-		sock_put(sk);
+		sock_gen_put(sk);
 #endif /* IS_ENABLED(CONFIG_IPV6) */
 }


@@ -1085,7 +1085,13 @@ static void __ieee80211_tx_status(struct ieee80211_hw *hw,
 	ieee80211_report_used_skb(local, skb, false, status->ack_hwtstamp);

-	if (status->free_list)
+	/*
+	 * This is a bit racy but we can avoid a lot of work
+	 * with this test...
+	 */
+	if (local->tx_mntrs)
+		ieee80211_tx_monitor(local, skb, retry_count, status);
+	else if (status->free_list)
 		list_add_tail(&skb->list, status->free_list);
 	else
 		dev_kfree_skb(skb);


@@ -35,6 +35,11 @@ struct drr_sched {
 	struct Qdisc_class_hash		clhash;
 };
 
+static bool cl_is_active(struct drr_class *cl)
+{
+	return !list_empty(&cl->alist);
+}
+
 static struct drr_class *drr_find_class(struct Qdisc *sch, u32 classid)
 {
 	struct drr_sched *q = qdisc_priv(sch);
@@ -337,7 +342,6 @@ static int drr_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 	struct drr_sched *q = qdisc_priv(sch);
 	struct drr_class *cl;
 	int err = 0;
-	bool first;
 
 	cl = drr_classify(skb, sch, &err);
 	if (cl == NULL) {
@@ -347,7 +351,6 @@ static int drr_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 		return err;
 	}
 
-	first = !cl->qdisc->q.qlen;
 	err = qdisc_enqueue(skb, cl->qdisc, to_free);
 	if (unlikely(err != NET_XMIT_SUCCESS)) {
 		if (net_xmit_drop_count(err)) {
@@ -357,7 +360,7 @@ static int drr_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 		return err;
 	}
 
-	if (first) {
+	if (!cl_is_active(cl)) {
 		list_add_tail(&cl->alist, &q->active);
 		cl->deficit = cl->quantum;
 	}


@@ -74,6 +74,11 @@ static const struct nla_policy ets_class_policy[TCA_ETS_MAX + 1] = {
 	[TCA_ETS_QUANTA_BAND] = { .type = NLA_U32 },
 };
 
+static bool cl_is_active(struct ets_class *cl)
+{
+	return !list_empty(&cl->alist);
+}
+
 static int ets_quantum_parse(struct Qdisc *sch, const struct nlattr *attr,
 			     unsigned int *quantum,
 			     struct netlink_ext_ack *extack)
@@ -416,7 +421,6 @@ static int ets_qdisc_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 	struct ets_sched *q = qdisc_priv(sch);
 	struct ets_class *cl;
 	int err = 0;
-	bool first;
 
 	cl = ets_classify(skb, sch, &err);
 	if (!cl) {
@@ -426,7 +430,6 @@ static int ets_qdisc_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 		return err;
 	}
 
-	first = !cl->qdisc->q.qlen;
 	err = qdisc_enqueue(skb, cl->qdisc, to_free);
 	if (unlikely(err != NET_XMIT_SUCCESS)) {
 		if (net_xmit_drop_count(err)) {
@@ -436,7 +439,7 @@ static int ets_qdisc_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 		return err;
 	}
 
-	if (first && !ets_class_is_strict(q, cl)) {
+	if (!cl_is_active(cl) && !ets_class_is_strict(q, cl)) {
 		list_add_tail(&cl->alist, &q->active);
 		cl->deficit = cl->quantum;
 	}


@@ -1569,7 +1569,7 @@ hfsc_enqueue(struct sk_buff *skb, struct Qdisc *sch, struct sk_buff **to_free)
 		return err;
 	}
 
-	if (first) {
+	if (first && !cl->cl_nactive) {
 		if (cl->cl_flags & HFSC_RSC)
 			init_ed(cl, len);
 		if (cl->cl_flags & HFSC_FSC)


@@ -202,6 +202,11 @@ struct qfq_sched {
  */
 enum update_reason {enqueue, requeue};
 
+static bool cl_is_active(struct qfq_class *cl)
+{
+	return !list_empty(&cl->alist);
+}
+
 static struct qfq_class *qfq_find_class(struct Qdisc *sch, u32 classid)
 {
 	struct qfq_sched *q = qdisc_priv(sch);
@@ -1215,7 +1220,6 @@ static int qfq_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 	struct qfq_class *cl;
 	struct qfq_aggregate *agg;
 	int err = 0;
-	bool first;
 
 	cl = qfq_classify(skb, sch, &err);
 	if (cl == NULL) {
@@ -1237,7 +1241,6 @@ static int qfq_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 	}
 
 	gso_segs = skb_is_gso(skb) ? skb_shinfo(skb)->gso_segs : 1;
-	first = !cl->qdisc->q.qlen;
 	err = qdisc_enqueue(skb, cl->qdisc, to_free);
 	if (unlikely(err != NET_XMIT_SUCCESS)) {
 		pr_debug("qfq_enqueue: enqueue failed %d\n", err);
@@ -1253,8 +1256,8 @@ static int qfq_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 	++sch->q.qlen;
 
 	agg = cl->agg;
-	/* if the queue was not empty, then done here */
-	if (!first) {
+	/* if the class is active, then done here */
+	if (cl_is_active(cl)) {
 		if (unlikely(skb == cl->qdisc->ops->peek(cl->qdisc)) &&
 		    list_first_entry(&agg->active, struct qfq_class, alist)
 		    == cl && cl->deficit < len)


@@ -338,13 +338,14 @@ int xsk_generic_rcv(struct xdp_sock *xs, struct xdp_buff *xdp)
 	u32 len = xdp_get_buff_len(xdp);
 	int err;
 
-	spin_lock_bh(&xs->rx_lock);
 	err = xsk_rcv_check(xs, xdp, len);
 	if (!err) {
+		spin_lock_bh(&xs->pool->rx_lock);
 		err = __xsk_rcv(xs, xdp, len);
 		xsk_flush(xs);
+		spin_unlock_bh(&xs->pool->rx_lock);
 	}
-	spin_unlock_bh(&xs->rx_lock);
 	return err;
 }
 
@@ -1734,7 +1735,6 @@ static int xsk_create(struct net *net, struct socket *sock, int protocol,
 	xs = xdp_sk(sk);
 	xs->state = XSK_READY;
 	mutex_init(&xs->mutex);
-	spin_lock_init(&xs->rx_lock);
 
 	INIT_LIST_HEAD(&xs->map_list);
 	spin_lock_init(&xs->map_list_lock);


@@ -89,6 +89,7 @@ struct xsk_buff_pool *xp_create_and_assign_umem(struct xdp_sock *xs,
 	pool->addrs = umem->addrs;
 	pool->tx_metadata_len = umem->tx_metadata_len;
 	pool->tx_sw_csum = umem->flags & XDP_UMEM_TX_SW_CSUM;
+	spin_lock_init(&pool->rx_lock);
 	INIT_LIST_HEAD(&pool->free_list);
 	INIT_LIST_HEAD(&pool->xskb_list);
 	INIT_LIST_HEAD(&pool->xsk_tx_list);


@@ -0,0 +1 @@
+run_net_forwarding_test.sh


@@ -266,18 +266,14 @@ run_test()
 		"${base_time}" \
 		"${CYCLE_TIME_NS}" \
 		"${SHIFT_TIME_NS}" \
+		"${GATE_DURATION_NS}" \
 		"${NUM_PKTS}" \
 		"${STREAM_VID}" \
 		"${STREAM_PRIO}" \
 		"" \
 		"${isochron_dat}"
 
-	# Count all received packets by looking at the non-zero RX timestamps
-	received=$(isochron report \
-		--input-file "${isochron_dat}" \
-		--printf-format "%u\n" --printf-args "R" | \
-		grep -w -v '0' | wc -l)
+	received=$(isochron_report_num_received "${isochron_dat}")
 
 	if [ "${received}" = "${expected}" ]; then
 		RET=0
 	else


@@ -1,7 +1,7 @@
 #!/bin/bash
 # SPDX-License-Identifier: GPL-2.0
 
-ALL_TESTS="ping_ipv4 ping_ipv6 learning flooding vlan_deletion extern_learn other_tpid"
+ALL_TESTS="ping_ipv4 ping_ipv6 learning flooding vlan_deletion extern_learn other_tpid 8021p drop_untagged"
 NUM_NETIFS=4
 CHECK_TC="yes"
 source lib.sh
@@ -194,6 +194,100 @@ other_tpid()
 	tc qdisc del dev $h2 clsact
 }
 
+8021p_do()
+{
+	local should_fail=$1; shift
+	local mac=de:ad:be:ef:13:37
+
+	tc filter add dev $h2 ingress protocol all pref 1 handle 101 \
+		flower dst_mac $mac action drop
+
+	$MZ -q $h1 -c 1 -b $mac -a own "81:00 00:00 08:00 aa-aa-aa-aa-aa-aa-aa-aa-aa"
+	sleep 1
+
+	tc -j -s filter show dev $h2 ingress \
+		| jq -e ".[] | select(.options.handle == 101) \
+		| select(.options.actions[0].stats.packets == 1)" &> /dev/null
+	check_err_fail $should_fail $? "802.1p-tagged reception"
+
+	tc filter del dev $h2 ingress pref 1
+}
+
+8021p()
+{
+	RET=0
+
+	tc qdisc add dev $h2 clsact
+	ip link set $h2 promisc on
+
+	# Test that with the default_pvid, 1, packets tagged with VID 0 are
+	# accepted.
+	8021p_do 0
+
+	# Test that packets tagged with VID 0 are still accepted after changing
+	# the default_pvid.
+	ip link set br0 type bridge vlan_default_pvid 10
+	8021p_do 0
+
+	log_test "Reception of 802.1p-tagged traffic"
+
+	ip link set $h2 promisc off
+	tc qdisc del dev $h2 clsact
+}
+
+send_untagged_and_8021p()
+{
+	ping_do $h1 192.0.2.2
+	check_fail $?
+
+	8021p_do 1
+}
+
+drop_untagged()
+{
+	RET=0
+
+	tc qdisc add dev $h2 clsact
+	ip link set $h2 promisc on
+
+	# Test that with no PVID, untagged and 802.1p-tagged traffic is
+	# dropped.
+	ip link set br0 type bridge vlan_default_pvid 1
+
+	# First we reconfigure the default_pvid, 1, as a non-PVID VLAN.
+	bridge vlan add dev $swp1 vid 1 untagged
+	send_untagged_and_8021p
+	bridge vlan add dev $swp1 vid 1 pvid untagged
+
+	# Next we try to delete VID 1 altogether
+	bridge vlan del dev $swp1 vid 1
+	send_untagged_and_8021p
+	bridge vlan add dev $swp1 vid 1 pvid untagged
+
+	# Set up the bridge without a default_pvid, then check that the 8021q
+	# module, when the bridge port goes down and then up again, does not
+	# accidentally re-enable untagged packet reception.
+	ip link set br0 type bridge vlan_default_pvid 0
+	ip link set $swp1 down
+	ip link set $swp1 up
+	setup_wait
+	send_untagged_and_8021p
+
+	# Remove swp1 as a bridge port and let it rejoin the bridge while it
+	# has no default_pvid.
+	ip link set $swp1 nomaster
+	ip link set $swp1 master br0
+	send_untagged_and_8021p
+
+	# Restore settings
+	ip link set br0 type bridge vlan_default_pvid 1
+
+	log_test "Dropping of untagged and 802.1p-tagged traffic with no PVID"
+
+	ip link set $h2 promisc off
+	tc qdisc del dev $h2 clsact
+}
+
 trap cleanup EXIT
 
 setup_prepare


@@ -0,0 +1,421 @@
#!/bin/bash
# SPDX-License-Identifier: GPL-2.0
ALL_TESTS=" \
test_clock_jump_backward \
test_taprio_after_ptp \
test_max_sdu \
test_clock_jump_backward_forward \
"
NUM_NETIFS=4
source tc_common.sh
source lib.sh
source tsn_lib.sh
require_command python3
# The test assumes the usual topology from the README, where h1 is connected to
# swp1, h2 to swp2, and swp1 and swp2 are together in a bridge.
# Additional assumption: h1 and h2 use the same PHC, and so do swp1 and swp2.
# By synchronizing h1 to swp1 via PTP, h2 is also implicitly synchronized to
# swp1 (and both to CLOCK_REALTIME).
h1=${NETIFS[p1]}
swp1=${NETIFS[p2]}
swp2=${NETIFS[p3]}
h2=${NETIFS[p4]}
UDS_ADDRESS_H1="/var/run/ptp4l_h1"
UDS_ADDRESS_SWP1="/var/run/ptp4l_swp1"
H1_IPV4="192.0.2.1"
H2_IPV4="192.0.2.2"
H1_IPV6="2001:db8:1::1"
H2_IPV6="2001:db8:1::2"
# Tunables
NUM_PKTS=100
STREAM_VID=10
STREAM_PRIO_1=6
STREAM_PRIO_2=5
STREAM_PRIO_3=4
# PTP uses TC 0
ALL_GATES=$((1 << 0 | 1 << STREAM_PRIO_1 | 1 << STREAM_PRIO_2))
# Use a conservative cycle of 10 ms to allow the test to still pass when the
# kernel has some extra overhead like lockdep etc
CYCLE_TIME_NS=10000000
# Create two Gate Control List entries, one OPEN and one CLOSE, of equal
# durations
GATE_DURATION_NS=$((CYCLE_TIME_NS / 2))
# Give 2/3 of the cycle time to user space and 1/3 to the kernel
FUDGE_FACTOR=$((CYCLE_TIME_NS / 3))
# Shift the isochron base time by half the gate time, so that packets are
# always received by swp1 close to the middle of the time slot, to minimize
# inaccuracies due to network sync
SHIFT_TIME_NS=$((GATE_DURATION_NS / 2))
path_delay=
h1_create()
{
simple_if_init $h1 $H1_IPV4/24 $H1_IPV6/64
}
h1_destroy()
{
simple_if_fini $h1 $H1_IPV4/24 $H1_IPV6/64
}
h2_create()
{
simple_if_init $h2 $H2_IPV4/24 $H2_IPV6/64
}
h2_destroy()
{
simple_if_fini $h2 $H2_IPV4/24 $H2_IPV6/64
}
switch_create()
{
local h2_mac_addr=$(mac_get $h2)
ip link set $swp1 up
ip link set $swp2 up
ip link add br0 type bridge vlan_filtering 1
ip link set $swp1 master br0
ip link set $swp2 master br0
ip link set br0 up
bridge vlan add dev $swp2 vid $STREAM_VID
bridge vlan add dev $swp1 vid $STREAM_VID
bridge fdb add dev $swp2 \
$h2_mac_addr vlan $STREAM_VID static master
}
switch_destroy()
{
ip link del br0
}
ptp_setup()
{
# Set up swp1 as a master PHC for h1, synchronized to the local
# CLOCK_REALTIME.
phc2sys_start $UDS_ADDRESS_SWP1
ptp4l_start $h1 true $UDS_ADDRESS_H1
ptp4l_start $swp1 false $UDS_ADDRESS_SWP1
}
ptp_cleanup()
{
ptp4l_stop $swp1
ptp4l_stop $h1
phc2sys_stop
}
txtime_setup()
{
local if_name=$1
tc qdisc add dev $if_name clsact
# Classify PTP on TC 7 and isochron on TC 6
tc filter add dev $if_name egress protocol 0x88f7 \
flower action skbedit priority 7
tc filter add dev $if_name egress protocol 802.1Q \
flower vlan_ethtype 0xdead action skbedit priority 6
tc qdisc add dev $if_name handle 100: parent root mqprio num_tc 8 \
queues 1@0 1@1 1@2 1@3 1@4 1@5 1@6 1@7 \
map 0 1 2 3 4 5 6 7 \
hw 1
# Set up TC 5, 6, 7 for SO_TXTIME. tc-mqprio queues count from 1.
tc qdisc replace dev $if_name parent 100:$((STREAM_PRIO_1 + 1)) etf \
clockid CLOCK_TAI offload delta $FUDGE_FACTOR
tc qdisc replace dev $if_name parent 100:$((STREAM_PRIO_2 + 1)) etf \
clockid CLOCK_TAI offload delta $FUDGE_FACTOR
tc qdisc replace dev $if_name parent 100:$((STREAM_PRIO_3 + 1)) etf \
clockid CLOCK_TAI offload delta $FUDGE_FACTOR
}
txtime_cleanup()
{
local if_name=$1
tc qdisc del dev $if_name clsact
tc qdisc del dev $if_name root
}
taprio_replace()
{
local if_name="$1"; shift
local extra_args="$1"; shift
# STREAM_PRIO_1 always has an open gate.
# STREAM_PRIO_2 has a gate open for GATE_DURATION_NS (half the cycle time)
# STREAM_PRIO_3 always has a closed gate.
tc qdisc replace dev $if_name root stab overhead 24 taprio num_tc 8 \
queues 1@0 1@1 1@2 1@3 1@4 1@5 1@6 1@7 \
map 0 1 2 3 4 5 6 7 \
sched-entry S $(printf "%x" $ALL_GATES) $GATE_DURATION_NS \
sched-entry S $(printf "%x" $((ALL_GATES & ~(1 << STREAM_PRIO_2)))) $GATE_DURATION_NS \
base-time 0 flags 0x2 $extra_args
taprio_wait_for_admin $if_name
}
taprio_cleanup()
{
local if_name=$1
tc qdisc del dev $if_name root
}
probe_path_delay()
{
local isochron_dat="$(mktemp)"
local received
log_info "Probing path delay"
isochron_do "$h1" "$h2" "$UDS_ADDRESS_H1" "" 0 \
"$CYCLE_TIME_NS" "" "" "$NUM_PKTS" \
"$STREAM_VID" "$STREAM_PRIO_1" "" "$isochron_dat"
received=$(isochron_report_num_received "$isochron_dat")
if [ "$received" != "$NUM_PKTS" ]; then
echo "Cannot establish basic data path between $h1 and $h2"
exit $ksft_fail
fi
printf "pdelay = {}\n" > isochron_data.py
isochron report --input-file "$isochron_dat" \
--printf-format "pdelay[%u] = %d - %d\n" \
--printf-args "qRT" \
>> isochron_data.py
cat <<-'EOF' > isochron_postprocess.py
#!/usr/bin/env python3
from isochron_data import pdelay
import numpy as np
w = np.array(list(pdelay.values()))
print("{}".format(np.max(w)))
EOF
path_delay=$(python3 ./isochron_postprocess.py)
log_info "Path delay from $h1 to $h2 estimated at $path_delay ns"
if [ "$path_delay" -gt "$GATE_DURATION_NS" ]; then
echo "Path delay larger than gate duration, aborting"
exit $ksft_fail
fi
rm -f ./isochron_data.py 2> /dev/null
rm -f ./isochron_postprocess.py 2> /dev/null
rm -f "$isochron_dat" 2> /dev/null
}
setup_prepare()
{
vrf_prepare
h1_create
h2_create
switch_create
txtime_setup $h1
# Temporarily set up PTP just to probe the end-to-end path delay.
ptp_setup
probe_path_delay
ptp_cleanup
}
cleanup()
{
pre_cleanup
isochron_recv_stop
txtime_cleanup $h1
switch_destroy
h2_destroy
h1_destroy
vrf_cleanup
}
run_test()
{
local base_time=$1; shift
local stream_prio=$1; shift
local expected_delay=$1; shift
local should_fail=$1; shift
local test_name=$1; shift
local isochron_dat="$(mktemp)"
local received
local median_delay
RET=0
# Set the shift time equal to the cycle time, which effectively
# cancels the default advance time. Packets won't be sent early in
# software, which ensures that they won't prematurely enter through
# the open gate in __test_out_of_band(). Also, the gate is open for
# long enough that this won't cause a problem in __test_in_band().
isochron_do "$h1" "$h2" "$UDS_ADDRESS_H1" "" "$base_time" \
"$CYCLE_TIME_NS" "$SHIFT_TIME_NS" "$GATE_DURATION_NS" \
"$NUM_PKTS" "$STREAM_VID" "$stream_prio" "" "$isochron_dat"
received=$(isochron_report_num_received "$isochron_dat")
[ "$received" = "$NUM_PKTS" ]
check_err_fail $should_fail $? "Reception of $NUM_PKTS packets"
if [ $should_fail = 0 ] && [ "$received" = "$NUM_PKTS" ]; then
printf "pdelay = {}\n" > isochron_data.py
isochron report --input-file "$isochron_dat" \
--printf-format "pdelay[%u] = %d - %d\n" \
--printf-args "qRT" \
>> isochron_data.py
cat <<-'EOF' > isochron_postprocess.py
#!/usr/bin/env python3
from isochron_data import pdelay
import numpy as np
w = np.array(list(pdelay.values()))
print("{}".format(int(np.median(w))))
EOF
median_delay=$(python3 ./isochron_postprocess.py)
# If the condition below is true, packets were delayed by a closed gate
[ "$median_delay" -gt $((path_delay + expected_delay)) ]
check_fail $? "Median delay $median_delay is greater than expected delay $expected_delay plus path delay $path_delay"
# If the condition below is true, packets were sent expecting them to
# hit a closed gate in the switch, but were not delayed
[ "$expected_delay" -gt 0 ] && [ "$median_delay" -lt "$expected_delay" ]
check_fail $? "Median delay $median_delay is less than expected delay $expected_delay"
fi
log_test "$test_name"
rm -f ./isochron_data.py 2> /dev/null
rm -f ./isochron_postprocess.py 2> /dev/null
rm -f "$isochron_dat" 2> /dev/null
}
__test_always_open()
{
run_test 0.000000000 $STREAM_PRIO_1 0 0 "Gate always open"
}
__test_always_closed()
{
run_test 0.000000000 $STREAM_PRIO_3 0 1 "Gate always closed"
}
__test_in_band()
{
# Send packets in-band with the OPEN gate entry
run_test 0.000000000 $STREAM_PRIO_2 0 0 "In band with gate"
}
__test_out_of_band()
{
# Send packets in-band with the CLOSE gate entry
run_test 0.005000000 $STREAM_PRIO_2 \
$((GATE_DURATION_NS - SHIFT_TIME_NS)) 0 \
"Out of band with gate"
}
run_subtests()
{
__test_always_open
__test_always_closed
__test_in_band
__test_out_of_band
}
test_taprio_after_ptp()
{
log_info "Setting up taprio after PTP"
ptp_setup
taprio_replace $swp2
run_subtests
taprio_cleanup $swp2
ptp_cleanup
}
__test_under_max_sdu()
{
# Limit max-sdu for STREAM_PRIO_1
taprio_replace "$swp2" "max-sdu 0 0 0 0 0 0 100 0"
run_test 0.000000000 $STREAM_PRIO_1 0 0 "Under maximum SDU"
}
__test_over_max_sdu()
{
# Limit max-sdu for STREAM_PRIO_1
taprio_replace "$swp2" "max-sdu 0 0 0 0 0 0 20 0"
run_test 0.000000000 $STREAM_PRIO_1 0 1 "Over maximum SDU"
}
test_max_sdu()
{
ptp_setup
__test_under_max_sdu
__test_over_max_sdu
taprio_cleanup $swp2
ptp_cleanup
}
# Perform a clock jump in the past without synchronization running, so that the
# time base remains where it was set by phc_ctl.
test_clock_jump_backward()
{
# This is a more complex schedule specifically crafted in a way that
# has been problematic on NXP LS1028A. Not much to test with it other
# than the fact that it passes traffic.
tc qdisc replace dev $swp2 root stab overhead 24 taprio num_tc 8 \
queues 1@0 1@1 1@2 1@3 1@4 1@5 1@6 1@7 map 0 1 2 3 4 5 6 7 \
base-time 0 sched-entry S 20 300000 sched-entry S 10 200000 \
sched-entry S 20 300000 sched-entry S 48 200000 \
sched-entry S 20 300000 sched-entry S 83 200000 \
sched-entry S 40 300000 sched-entry S 00 200000 flags 2
log_info "Forcing a backward clock jump"
phc_ctl $swp1 set 0
ping_test $h1 192.0.2.2
taprio_cleanup $swp2
}
# Test that taprio tolerates clock jumps.
# Since ptp4l and phc2sys are running, it is expected for the time to
# eventually recover (through yet another clock jump). Isochron waits
# until that is the case.
test_clock_jump_backward_forward()
{
log_info "Forcing a backward and a forward clock jump"
taprio_replace $swp2
phc_ctl $swp1 set 0
ptp_setup
ping_test $h1 192.0.2.2
run_subtests
ptp_cleanup
taprio_cleanup $swp2
}
tc_offload_check
if [[ $? -ne 0 ]]; then
log_test_skip "Could not test offloaded functionality"
exit $EXIT_STATUS
fi
trap cleanup EXIT
setup_prepare
setup_wait
tests_run
exit $EXIT_STATUS


@@ -2,6 +2,8 @@
 # SPDX-License-Identifier: GPL-2.0
 # Copyright 2021-2022 NXP
 
+tc_testing_scripts_dir=$(dirname $0)/../../tc-testing/scripts
+
 REQUIRE_ISOCHRON=${REQUIRE_ISOCHRON:=yes}
 REQUIRE_LINUXPTP=${REQUIRE_LINUXPTP:=yes}
@@ -18,6 +20,7 @@ fi
 if [[ "$REQUIRE_LINUXPTP" = "yes" ]]; then
 	require_command phc2sys
 	require_command ptp4l
+	require_command phc_ctl
 fi
 
 phc2sys_start()
@@ -182,6 +185,7 @@ isochron_do()
 	local base_time=$1; shift
 	local cycle_time=$1; shift
 	local shift_time=$1; shift
+	local window_size=$1; shift
 	local num_pkts=$1; shift
 	local vid=$1; shift
 	local priority=$1; shift
@@ -212,6 +216,10 @@ isochron_do()
 		extra_args="${extra_args} --shift-time=${shift_time}"
 	fi
 
+	if ! [ -z "${window_size}" ]; then
+		extra_args="${extra_args} --window-size=${window_size}"
+	fi
+
 	if [ "${use_l2}" = "true" ]; then
 		extra_args="${extra_args} --l2 --etype=0xdead ${vid}"
 		receiver_extra_args="--l2 --etype=0xdead"
@@ -247,3 +255,21 @@ isochron_do()
 
 	cpufreq_restore ${ISOCHRON_CPU}
 }
+
+isochron_report_num_received()
+{
+	local isochron_dat=$1; shift
+
+	# Count all received packets by looking at the non-zero RX timestamps
+	isochron report \
+		--input-file "${isochron_dat}" \
+		--printf-format "%u\n" --printf-args "R" | \
+		grep -w -v '0' | wc -l
+}
+
+taprio_wait_for_admin()
+{
+	local if_name="$1"; shift
+
+	"$tc_testing_scripts_dir/taprio_wait_for_admin.sh" "$(which tc)" "$if_name"
+}


@@ -352,5 +352,191 @@
"$TC qdisc del dev $DUMMY handle 1:0 root",
"$IP addr del 10.10.10.10/24 dev $DUMMY || true"
]
},
{
"id": "90ec",
"name": "Test DRR's enqueue reentrant behaviour with netem",
"category": [
"qdisc",
"drr"
],
"plugins": {
"requires": "nsPlugin"
},
"setup": [
"$IP link set dev $DUMMY up || true",
"$IP addr add 10.10.10.10/24 dev $DUMMY || true",
"$TC qdisc add dev $DUMMY handle 1:0 root drr",
"$TC class replace dev $DUMMY parent 1:0 classid 1:1 drr",
"$TC qdisc add dev $DUMMY parent 1:1 handle 2:0 netem duplicate 100%",
"$TC filter add dev $DUMMY parent 1:0 protocol ip prio 1 u32 match ip protocol 1 0xff flowid 1:1"
],
"cmdUnderTest": "ping -c 1 -I $DUMMY 10.10.10.1 > /dev/null || true",
"expExitCode": "0",
"verifyCmd": "$TC -j -s qdisc ls dev $DUMMY handle 1:0",
"matchJSON": [
{
"kind": "drr",
"handle": "1:",
"bytes": 196,
"packets": 2
}
],
"matchCount": "1",
"teardown": [
"$TC qdisc del dev $DUMMY handle 1:0 root",
"$IP addr del 10.10.10.10/24 dev $DUMMY || true"
]
},
{
"id": "1f1f",
"name": "Test ETS's enqueue reentrant behaviour with netem",
"category": [
"qdisc",
"ets"
],
"plugins": {
"requires": "nsPlugin"
},
"setup": [
"$IP link set dev $DUMMY up || true",
"$IP addr add 10.10.10.10/24 dev $DUMMY || true",
"$TC qdisc add dev $DUMMY handle 1:0 root ets bands 2",
"$TC class replace dev $DUMMY parent 1:0 classid 1:1 ets quantum 1500",
"$TC qdisc add dev $DUMMY parent 1:1 handle 2:0 netem duplicate 100%",
"$TC filter add dev $DUMMY parent 1:0 protocol ip prio 1 u32 match ip protocol 1 0xff flowid 1:1"
],
"cmdUnderTest": "ping -c 1 -I $DUMMY 10.10.10.1 > /dev/null || true",
"expExitCode": "0",
"verifyCmd": "$TC -j -s class show dev $DUMMY",
"matchJSON": [
{
"class": "ets",
"handle": "1:1",
"stats": {
"bytes": 196,
"packets": 2
}
}
],
"matchCount": "1",
"teardown": [
"$TC qdisc del dev $DUMMY handle 1:0 root",
"$IP addr del 10.10.10.10/24 dev $DUMMY || true"
]
},
{
"id": "5e6d",
"name": "Test QFQ's enqueue reentrant behaviour with netem",
"category": [
"qdisc",
"qfq"
],
"plugins": {
"requires": "nsPlugin"
},
"setup": [
"$IP link set dev $DUMMY up || true",
"$IP addr add 10.10.10.10/24 dev $DUMMY || true",
"$TC qdisc add dev $DUMMY handle 1:0 root qfq",
"$TC class replace dev $DUMMY parent 1:0 classid 1:1 qfq weight 100 maxpkt 1500",
"$TC qdisc add dev $DUMMY parent 1:1 handle 2:0 netem duplicate 100%",
"$TC filter add dev $DUMMY parent 1:0 protocol ip prio 1 u32 match ip protocol 1 0xff flowid 1:1"
],
"cmdUnderTest": "ping -c 1 -I $DUMMY 10.10.10.1 > /dev/null || true",
"expExitCode": "0",
"verifyCmd": "$TC -j -s qdisc ls dev $DUMMY handle 1:0",
"matchJSON": [
{
"kind": "qfq",
"handle": "1:",
"bytes": 196,
"packets": 2
}
],
"matchCount": "1",
"teardown": [
"$TC qdisc del dev $DUMMY handle 1:0 root",
"$IP addr del 10.10.10.10/24 dev $DUMMY || true"
]
},
{
"id": "bf1d",
"name": "Test HFSC's enqueue reentrant behaviour with netem",
"category": [
"qdisc",
"hfsc"
],
"plugins": {
"requires": "nsPlugin"
},
"setup": [
"$IP link set dev $DUMMY up || true",
"$IP addr add 10.10.10.10/24 dev $DUMMY || true",
"$TC qdisc add dev $DUMMY handle 1:0 root hfsc",
"$TC class add dev $DUMMY parent 1:0 classid 1:1 hfsc ls m2 10Mbit",
"$TC qdisc add dev $DUMMY parent 1:1 handle 2:0 netem duplicate 100%",
"$TC filter add dev $DUMMY parent 1:0 protocol ip prio 1 u32 match ip dst 10.10.10.1/32 flowid 1:1",
"$TC class add dev $DUMMY parent 1:0 classid 1:2 hfsc ls m2 10Mbit",
"$TC qdisc add dev $DUMMY parent 1:2 handle 3:0 netem duplicate 100%",
"$TC filter add dev $DUMMY parent 1:0 protocol ip prio 2 u32 match ip dst 10.10.10.2/32 flowid 1:2",
"ping -c 1 10.10.10.1 -I$DUMMY > /dev/null || true",
"$TC filter del dev $DUMMY parent 1:0 protocol ip prio 1",
"$TC class del dev $DUMMY classid 1:1"
],
"cmdUnderTest": "ping -c 1 10.10.10.2 -I$DUMMY > /dev/null || true",
"expExitCode": "0",
"verifyCmd": "$TC -j -s qdisc ls dev $DUMMY handle 1:0",
"matchJSON": [
{
"kind": "hfsc",
"handle": "1:",
"bytes": 392,
"packets": 4
}
],
"matchCount": "1",
"teardown": [
"$TC qdisc del dev $DUMMY handle 1:0 root",
"$IP addr del 10.10.10.10/24 dev $DUMMY || true"
]
},
{
"id": "7c3b",
"name": "Test nested DRR's enqueue reentrant behaviour with netem",
"category": [
"qdisc",
"drr"
],
"plugins": {
"requires": "nsPlugin"
},
"setup": [
"$IP link set dev $DUMMY up || true",
"$IP addr add 10.10.10.10/24 dev $DUMMY || true",
"$TC qdisc add dev $DUMMY handle 1:0 root drr",
"$TC class add dev $DUMMY parent 1:0 classid 1:1 drr",
"$TC filter add dev $DUMMY parent 1:0 protocol ip prio 1 u32 match ip protocol 1 0xff flowid 1:1",
"$TC qdisc add dev $DUMMY handle 2:0 parent 1:1 drr",
"$TC class add dev $DUMMY classid 2:1 parent 2:0 drr",
"$TC filter add dev $DUMMY parent 2:0 protocol ip prio 1 u32 match ip protocol 1 0xff flowid 2:1",
"$TC qdisc add dev $DUMMY parent 2:1 handle 3:0 netem duplicate 100%"
],
"cmdUnderTest": "ping -c 1 -I $DUMMY 10.10.10.1 > /dev/null || true",
"expExitCode": "0",
"verifyCmd": "$TC -j -s qdisc ls dev $DUMMY handle 1:0",
"matchJSON": [
{
"kind": "drr",
"handle": "1:",
"bytes": 196,
"packets": 2
}
],
"matchCount": "1",
"teardown": [
"$TC qdisc del dev $DUMMY handle 1:0 root",
"$IP addr del 10.10.10.10/24 dev $DUMMY || true"
]
}
]