Merge tag 'net-6.17-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Paolo Abeni:
 "Including fixes from Bluetooth.

  Current release - regressions:

    - ipv4: fix regression in local-broadcast routes

    - vsock: fix error-handling regression introduced in v6.17-rc1

  Previous releases - regressions:

    - bluetooth:
        - mark connection as closed during suspend disconnect
        - fix set_local_name race condition

    - eth:
        - ice: fix NULL pointer dereference on reset
        - mlx5: fix memory leak in hws_pool_buddy_init error path
        - bnxt_en: fix stats context reservation logic
        - hv: fix loss of receive events from host during channel open

  Previous releases - always broken:

    - page_pool: fix incorrect mp_ops error handling

    - sctp: initialize more fields in sctp_v6_from_sk()

    - eth:
        - octeontx2-vf: fix max packet length errors
        - idpf: fix Tx flow scheduling to avoid Tx timeouts
        - bnxt_en: fix memory corruption during ifdown
        - ice: fix incorrect counter for buffer allocation failures
        - mlx5: fix lockdep assertion on sync reset unload event
        - fbnic: fixup rtnl_lock and devl_lock handling
        - xgmac: do not enable RX FIFO overflow interrupts

    - phy: mscc: fix when the PTP clock is registered and unregistered

  Misc:

    - add new compositions for the Telit Cinterion LE910C4-WWX"

* tag 'net-6.17-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (60 commits)
  net: ipv4: fix regression in local-broadcast routes
  net: macb: Disable clocks once
  fbnic: Move phylink resume out of service_task and into open/close
  fbnic: Fixup rtnl_lock and devl_lock handling related to mailbox code
  net: rose: fix a typo in rose_clear_routes()
  l2tp: do not use sock_hold() in pppol2tp_session_get_sock()
  sctp: initialize more fields in sctp_v6_from_sk()
  MAINTAINERS: rmnet: Update email addresses
  net: rose: include node references in rose_neigh refcount
  net: rose: convert 'use' field to refcount_t
  net: rose: split remove and free operations in rose_remove_neigh()
  net: hv_netvsc: fix loss of early receive events from host during channel open.
  net: stmmac: Set CIC bit only for TX queues with COE
  net: stmmac: xgmac: Correct supported speed modes
  net: stmmac: xgmac: Do not enable RX FIFO Overflow interrupts
  net/mlx5e: Set local Xoff after FW update
  net/mlx5e: Update and set Xon/Xoff upon port speed set
  net/mlx5e: Update and set Xon/Xoff upon MTU set
  net/mlx5: Prevent flow steering mode changes in switchdev mode
  net/mlx5: Nack sync reset when SFs are present
  ...
Linus Torvalds 2025-08-28 17:35:51 -07:00
commit 9c736ace06
69 changed files with 1000 additions and 807 deletions

View File

@ -3222,6 +3222,10 @@ D: AIC5800 IEEE 1394, RAW I/O on 1394
D: Starter of Linux1394 effort
S: ask per mail for current address
N: Boris Pismenny
E: borisp@mellanox.com
D: Kernel TLS implementation and offload support.
N: Nicolas Pitre
E: nico@fluxnic.net
D: StrongARM SA1100 support integrator & hacker
@ -4168,6 +4172,9 @@ S: 1513 Brewster Dr.
S: Carrollton, TX 75010
S: USA
N: Dave Watson
D: Kernel TLS implementation.
N: Tim Waugh
E: tim@cyberelk.net
D: Co-architect of the parallel-port sharing system

View File

@ -937,7 +937,7 @@ S: Maintained
F: drivers/gpio/gpio-altera.c
ALTERA TRIPLE SPEED ETHERNET DRIVER
M: Joyce Ooi <joyce.ooi@intel.com>
M: Boon Khai Ng <boon.khai.ng@altera.com>
L: netdev@vger.kernel.org
S: Maintained
F: drivers/net/ethernet/altera/
@ -17848,7 +17848,6 @@ F: net/ipv6/syncookies.c
F: net/ipv6/tcp*.c
NETWORKING [TLS]
M: Boris Pismenny <borisp@nvidia.com>
M: John Fastabend <john.fastabend@gmail.com>
M: Jakub Kicinski <kuba@kernel.org>
L: netdev@vger.kernel.org
@ -20878,8 +20877,8 @@ S: Maintained
F: drivers/firmware/qcom/qcom_qseecom_uefisecapp.c
QUALCOMM RMNET DRIVER
M: Subash Abhinov Kasiviswanathan <quic_subashab@quicinc.com>
M: Sean Tranchetti <quic_stranche@quicinc.com>
M: Subash Abhinov Kasiviswanathan <subash.a.kasiviswanathan@oss.qualcomm.com>
M: Sean Tranchetti <sean.tranchetti@oss.qualcomm.com>
L: netdev@vger.kernel.org
S: Maintained
F: Documentation/networking/device_drivers/cellular/qualcomm/rmnet.rst

View File

@ -279,6 +279,19 @@ static struct atm_vcc *find_vcc(struct atm_dev *dev, short vpi, int vci)
return NULL;
}
static int atmtcp_c_pre_send(struct atm_vcc *vcc, struct sk_buff *skb)
{
struct atmtcp_hdr *hdr;
if (skb->len < sizeof(struct atmtcp_hdr))
return -EINVAL;
hdr = (struct atmtcp_hdr *)skb->data;
if (hdr->length == ATMTCP_HDR_MAGIC)
return -EINVAL;
return 0;
}
static int atmtcp_c_send(struct atm_vcc *vcc,struct sk_buff *skb)
{
@ -288,9 +301,6 @@ static int atmtcp_c_send(struct atm_vcc *vcc,struct sk_buff *skb)
struct sk_buff *new_skb;
int result = 0;
if (skb->len < sizeof(struct atmtcp_hdr))
goto done;
dev = vcc->dev_data;
hdr = (struct atmtcp_hdr *) skb->data;
if (hdr->length == ATMTCP_HDR_MAGIC) {
@ -347,6 +357,7 @@ static const struct atmdev_ops atmtcp_v_dev_ops = {
static const struct atmdev_ops atmtcp_c_dev_ops = {
.close = atmtcp_c_close,
.pre_send = atmtcp_c_pre_send,
.send = atmtcp_c_send
};

View File

@ -39,12 +39,13 @@
#include "hfc_pci.h"
static void hfcpci_softirq(struct timer_list *unused);
static const char *hfcpci_revision = "2.0";
static int HFC_cnt;
static uint debug;
static uint poll, tics;
static struct timer_list hfc_tl;
static DEFINE_TIMER(hfc_tl, hfcpci_softirq);
static unsigned long hfc_jiffies;
MODULE_AUTHOR("Karsten Keil");
@ -2305,8 +2306,7 @@ hfcpci_softirq(struct timer_list *unused)
hfc_jiffies = jiffies + 1;
else
hfc_jiffies += tics;
hfc_tl.expires = hfc_jiffies;
add_timer(&hfc_tl);
mod_timer(&hfc_tl, hfc_jiffies);
}
static int __init
@ -2332,10 +2332,8 @@ HFC_init(void)
if (poll != HFCPCI_BTRANS_THRESHOLD) {
printk(KERN_INFO "%s: Using alternative poll value of %d\n",
__func__, poll);
timer_setup(&hfc_tl, hfcpci_softirq, 0);
hfc_tl.expires = jiffies + tics;
hfc_jiffies = hfc_tl.expires;
add_timer(&hfc_tl);
hfc_jiffies = jiffies + tics;
mod_timer(&hfc_tl, hfc_jiffies);
} else
tics = 0; /* indicate the use of controller's timer */

View File

@ -8016,7 +8016,8 @@ static int __bnxt_reserve_rings(struct bnxt *bp)
}
rx_rings = min_t(int, rx_rings, hwr.grp);
hwr.cp = min_t(int, hwr.cp, bp->cp_nr_rings);
if (hwr.stat > bnxt_get_ulp_stat_ctxs(bp))
if (bnxt_ulp_registered(bp->edev) &&
hwr.stat > bnxt_get_ulp_stat_ctxs(bp))
hwr.stat -= bnxt_get_ulp_stat_ctxs(bp);
hwr.cp = min_t(int, hwr.cp, hwr.stat);
rc = bnxt_trim_rings(bp, &rx_rings, &hwr.tx, hwr.cp, sh);
@ -8024,6 +8025,11 @@ static int __bnxt_reserve_rings(struct bnxt *bp)
hwr.rx = rx_rings << 1;
tx_cp = bnxt_num_tx_to_cp(bp, hwr.tx);
hwr.cp = sh ? max_t(int, tx_cp, rx_rings) : tx_cp + rx_rings;
if (hwr.tx != bp->tx_nr_rings) {
netdev_warn(bp->dev,
"Able to reserve only %d out of %d requested TX rings\n",
hwr.tx, bp->tx_nr_rings);
}
bp->tx_nr_rings = hwr.tx;
/* If we cannot reserve all the RX rings, reset the RSS map only
@ -12851,6 +12857,17 @@ static int bnxt_set_xps_mapping(struct bnxt *bp)
return rc;
}
static int bnxt_tx_nr_rings(struct bnxt *bp)
{
return bp->num_tc ? bp->tx_nr_rings_per_tc * bp->num_tc :
bp->tx_nr_rings_per_tc;
}
static int bnxt_tx_nr_rings_per_tc(struct bnxt *bp)
{
return bp->num_tc ? bp->tx_nr_rings / bp->num_tc : bp->tx_nr_rings;
}
static int __bnxt_open_nic(struct bnxt *bp, bool irq_re_init, bool link_re_init)
{
int rc = 0;
@ -12868,6 +12885,13 @@ static int __bnxt_open_nic(struct bnxt *bp, bool irq_re_init, bool link_re_init)
if (rc)
return rc;
/* Make adjustments if reserved TX rings are less than requested */
bp->tx_nr_rings -= bp->tx_nr_rings_xdp;
bp->tx_nr_rings_per_tc = bnxt_tx_nr_rings_per_tc(bp);
if (bp->tx_nr_rings_xdp) {
bp->tx_nr_rings_xdp = bp->tx_nr_rings_per_tc;
bp->tx_nr_rings += bp->tx_nr_rings_xdp;
}
rc = bnxt_alloc_mem(bp, irq_re_init);
if (rc) {
netdev_err(bp->dev, "bnxt_alloc_mem err: %x\n", rc);
@ -16325,7 +16349,7 @@ static void bnxt_trim_dflt_sh_rings(struct bnxt *bp)
bp->cp_nr_rings = min_t(int, bp->tx_nr_rings_per_tc, bp->rx_nr_rings);
bp->rx_nr_rings = bp->cp_nr_rings;
bp->tx_nr_rings_per_tc = bp->cp_nr_rings;
bp->tx_nr_rings = bp->tx_nr_rings_per_tc;
bp->tx_nr_rings = bnxt_tx_nr_rings(bp);
}
static int bnxt_set_dflt_rings(struct bnxt *bp, bool sh)
@ -16357,7 +16381,7 @@ static int bnxt_set_dflt_rings(struct bnxt *bp, bool sh)
bnxt_trim_dflt_sh_rings(bp);
else
bp->cp_nr_rings = bp->tx_nr_rings_per_tc + bp->rx_nr_rings;
bp->tx_nr_rings = bp->tx_nr_rings_per_tc;
bp->tx_nr_rings = bnxt_tx_nr_rings(bp);
avail_msix = bnxt_get_max_func_irqs(bp) - bp->cp_nr_rings;
if (avail_msix >= BNXT_MIN_ROCE_CP_RINGS) {
@ -16370,7 +16394,7 @@ static int bnxt_set_dflt_rings(struct bnxt *bp, bool sh)
rc = __bnxt_reserve_rings(bp);
if (rc && rc != -ENODEV)
netdev_warn(bp->dev, "Unable to reserve tx rings\n");
bp->tx_nr_rings_per_tc = bp->tx_nr_rings;
bp->tx_nr_rings_per_tc = bnxt_tx_nr_rings_per_tc(bp);
if (sh)
bnxt_trim_dflt_sh_rings(bp);
@ -16379,7 +16403,7 @@ static int bnxt_set_dflt_rings(struct bnxt *bp, bool sh)
rc = __bnxt_reserve_rings(bp);
if (rc && rc != -ENODEV)
netdev_warn(bp->dev, "2nd rings reservation failed.\n");
bp->tx_nr_rings_per_tc = bp->tx_nr_rings;
bp->tx_nr_rings_per_tc = bnxt_tx_nr_rings_per_tc(bp);
}
if (BNXT_CHIP_TYPE_NITRO_A0(bp)) {
bp->rx_nr_rings++;
@ -16413,7 +16437,7 @@ static int bnxt_init_dflt_ring_mode(struct bnxt *bp)
if (rc)
goto init_dflt_ring_err;
bp->tx_nr_rings_per_tc = bp->tx_nr_rings;
bp->tx_nr_rings_per_tc = bnxt_tx_nr_rings_per_tc(bp);
bnxt_set_dflt_rfs(bp);
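
A minimal userspace sketch of the per-TC ring arithmetic behind the bnxt_tx_nr_rings() and bnxt_tx_nr_rings_per_tc() helpers added above; the simplified functions below are illustrative stand-ins, not driver code.

/* Sanity-check the per-TC ring math: total TX rings = rings-per-TC times the
 * number of traffic classes, and dividing back recovers the per-TC count.
 */
#include <assert.h>
#include <stdio.h>

static int tx_nr_rings(int per_tc, int num_tc)
{
        return num_tc ? per_tc * num_tc : per_tc;
}

static int tx_nr_rings_per_tc(int total, int num_tc)
{
        return num_tc ? total / num_tc : total;
}

int main(void)
{
        /* With 3 traffic classes and 4 rings per class... */
        int total = tx_nr_rings(4, 3);

        assert(total == 12);
        assert(tx_nr_rings_per_tc(total, 3) == 4);
        /* ...and with TCs disabled the counts pass through unchanged. */
        assert(tx_nr_rings(4, 0) == 4);
        assert(tx_nr_rings_per_tc(4, 0) == 4);
        printf("ring math checks passed\n");
        return 0;
}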

View File

@ -3090,7 +3090,7 @@ static void gem_update_stats(struct macb *bp)
/* Add GEM_OCTTXH, GEM_OCTRXH */
val = bp->macb_reg_readl(bp, offset + 4);
bp->ethtool_stats[i] += ((u64)val) << 32;
*(p++) += ((u64)val) << 32;
*p += ((u64)val) << 32;
}
}
@ -5399,19 +5399,16 @@ static void macb_remove(struct platform_device *pdev)
if (dev) {
bp = netdev_priv(dev);
unregister_netdev(dev);
phy_exit(bp->sgmii_phy);
mdiobus_unregister(bp->mii_bus);
mdiobus_free(bp->mii_bus);
unregister_netdev(dev);
device_set_wakeup_enable(&bp->pdev->dev, 0);
cancel_work_sync(&bp->hresp_err_bh_work);
pm_runtime_disable(&pdev->dev);
pm_runtime_dont_use_autosuspend(&pdev->dev);
if (!pm_runtime_suspended(&pdev->dev)) {
macb_clks_disable(bp->pclk, bp->hclk, bp->tx_clk,
bp->rx_clk, bp->tsu_clk);
pm_runtime_set_suspended(&pdev->dev);
}
pm_runtime_set_suspended(&pdev->dev);
phylink_destroy(bp->phylink);
free_netdev(dev);
}

View File

@ -1099,7 +1099,7 @@ get_stats (struct net_device *dev)
dev->stats.rx_bytes += dr32(OctetRcvOk);
dev->stats.tx_bytes += dr32(OctetXmtOk);
dev->stats.multicast = dr32(McstFramesRcvdOk);
dev->stats.multicast += dr32(McstFramesRcvdOk);
dev->stats.collisions += dr32(SingleColFrames)
+ dr32(MultiColFrames);

View File

@ -510,6 +510,7 @@ enum ice_pf_flags {
ICE_FLAG_LINK_LENIENT_MODE_ENA,
ICE_FLAG_PLUG_AUX_DEV,
ICE_FLAG_UNPLUG_AUX_DEV,
ICE_FLAG_AUX_DEV_CREATED,
ICE_FLAG_MTU_CHANGED,
ICE_FLAG_GNSS, /* GNSS successfully initialized */
ICE_FLAG_DPLL, /* SyncE/PTP dplls initialized */

View File

@ -13,16 +13,45 @@
static DEFINE_XARRAY(ice_adapters);
static DEFINE_MUTEX(ice_adapters_mutex);
static unsigned long ice_adapter_index(u64 dsn)
#define ICE_ADAPTER_FIXED_INDEX BIT_ULL(63)
#define ICE_ADAPTER_INDEX_E825C \
(ICE_DEV_ID_E825C_BACKPLANE | ICE_ADAPTER_FIXED_INDEX)
static u64 ice_adapter_index(struct pci_dev *pdev)
{
switch (pdev->device) {
case ICE_DEV_ID_E825C_BACKPLANE:
case ICE_DEV_ID_E825C_QSFP:
case ICE_DEV_ID_E825C_SFP:
case ICE_DEV_ID_E825C_SGMII:
/* E825C devices have multiple NACs which are connected to the
* same clock source, and which must share the same
* ice_adapter structure. We can't use the serial number since
* each NAC has its own NVM generated with its own unique
* Device Serial Number. Instead, rely on the embedded nature
* of the E825C devices, and use a fixed index. This relies on
* the fact that all E825C physical functions in a given
* system are part of the same overall device.
*/
return ICE_ADAPTER_INDEX_E825C;
default:
return pci_get_dsn(pdev) & ~ICE_ADAPTER_FIXED_INDEX;
}
}
static unsigned long ice_adapter_xa_index(struct pci_dev *pdev)
{
u64 index = ice_adapter_index(pdev);
#if BITS_PER_LONG == 64
return dsn;
return index;
#else
return (u32)dsn ^ (u32)(dsn >> 32);
return (u32)index ^ (u32)(index >> 32);
#endif
}
static struct ice_adapter *ice_adapter_new(u64 dsn)
static struct ice_adapter *ice_adapter_new(struct pci_dev *pdev)
{
struct ice_adapter *adapter;
@ -30,7 +59,7 @@ static struct ice_adapter *ice_adapter_new(u64 dsn)
if (!adapter)
return NULL;
adapter->device_serial_number = dsn;
adapter->index = ice_adapter_index(pdev);
spin_lock_init(&adapter->ptp_gltsyn_time_lock);
spin_lock_init(&adapter->txq_ctx_lock);
refcount_set(&adapter->refcount, 1);
@ -64,24 +93,23 @@ static void ice_adapter_free(struct ice_adapter *adapter)
*/
struct ice_adapter *ice_adapter_get(struct pci_dev *pdev)
{
u64 dsn = pci_get_dsn(pdev);
struct ice_adapter *adapter;
unsigned long index;
int err;
index = ice_adapter_index(dsn);
index = ice_adapter_xa_index(pdev);
scoped_guard(mutex, &ice_adapters_mutex) {
err = xa_insert(&ice_adapters, index, NULL, GFP_KERNEL);
if (err == -EBUSY) {
adapter = xa_load(&ice_adapters, index);
refcount_inc(&adapter->refcount);
WARN_ON_ONCE(adapter->device_serial_number != dsn);
WARN_ON_ONCE(adapter->index != ice_adapter_index(pdev));
return adapter;
}
if (err)
return ERR_PTR(err);
adapter = ice_adapter_new(dsn);
adapter = ice_adapter_new(pdev);
if (!adapter)
return ERR_PTR(-ENOMEM);
xa_store(&ice_adapters, index, adapter, GFP_KERNEL);
@ -100,11 +128,10 @@ struct ice_adapter *ice_adapter_get(struct pci_dev *pdev)
*/
void ice_adapter_put(struct pci_dev *pdev)
{
u64 dsn = pci_get_dsn(pdev);
struct ice_adapter *adapter;
unsigned long index;
index = ice_adapter_index(dsn);
index = ice_adapter_xa_index(pdev);
scoped_guard(mutex, &ice_adapters_mutex) {
adapter = xa_load(&ice_adapters, index);
if (WARN_ON(!adapter))
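
A small standalone illustration of the index handling above, assuming nothing beyond standard C: on 64-bit builds the 64-bit adapter index is used directly as the xarray key, while 32-bit builds XOR-fold the two halves, which is why the full index is still kept in struct ice_adapter for collision checks.

/* Userspace approximation of ice_adapter_xa_index(); not kernel code. */
#include <limits.h>
#include <stdint.h>
#include <stdio.h>

static unsigned long fold_index(uint64_t index)
{
#if ULONG_MAX == 0xffffffffffffffffULL
        return index;                                   /* 64-bit: lossless */
#else
        return (uint32_t)index ^ (uint32_t)(index >> 32);   /* 32-bit: lossy fold */
#endif
}

int main(void)
{
        uint64_t dsn = 0x1122334455667788ULL;   /* hypothetical device serial number */

        printf("xa index for %#llx = %#lx\n",
               (unsigned long long)dsn, fold_index(dsn));
        return 0;
}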

View File

@ -33,7 +33,7 @@ struct ice_port_list {
* @txq_ctx_lock: Spinlock protecting access to the GLCOMM_QTX_CNTX_CTL register
* @ctrl_pf: Control PF of the adapter
* @ports: Ports list
* @device_serial_number: DSN cached for collision detection on 32bit systems
* @index: 64-bit index cached for collision detection on 32bit systems
*/
struct ice_adapter {
refcount_t refcount;
@ -44,7 +44,7 @@ struct ice_adapter {
struct ice_pf *ctrl_pf;
struct ice_port_list ports;
u64 device_serial_number;
u64 index;
};
struct ice_adapter *ice_adapter_get(struct pci_dev *pdev);

View File

@ -2377,7 +2377,13 @@ ice_get_set_tx_topo(struct ice_hw *hw, u8 *buf, u16 buf_size,
* The function will apply the new Tx topology from the package buffer
* if available.
*
* Return: zero when update was successful, negative values otherwise.
* Return:
* * 0 - Successfully applied topology configuration.
* * -EBUSY - Failed to acquire global configuration lock.
* * -EEXIST - Topology configuration has already been applied.
* * -EIO - Unable to apply topology configuration.
* * -ENODEV - Failed to re-initialize device after applying configuration.
* * Other negative error codes indicate unexpected failures.
*/
int ice_cfg_tx_topo(struct ice_hw *hw, const void *buf, u32 len)
{
@ -2410,7 +2416,7 @@ int ice_cfg_tx_topo(struct ice_hw *hw, const void *buf, u32 len)
if (status) {
ice_debug(hw, ICE_DBG_INIT, "Get current topology is failed\n");
return status;
return -EIO;
}
/* Is default topology already applied ? */
@ -2497,31 +2503,45 @@ update_topo:
ICE_GLOBAL_CFG_LOCK_TIMEOUT);
if (status) {
ice_debug(hw, ICE_DBG_INIT, "Failed to acquire global lock\n");
return status;
return -EBUSY;
}
/* Check if reset was triggered already. */
reg = rd32(hw, GLGEN_RSTAT);
if (reg & GLGEN_RSTAT_DEVSTATE_M) {
/* Reset is in progress, re-init the HW again */
ice_debug(hw, ICE_DBG_INIT, "Reset is in progress. Layer topology might be applied already\n");
ice_check_reset(hw);
return 0;
/* Reset is in progress, re-init the HW again */
goto reinit_hw;
}
/* Set new topology */
status = ice_get_set_tx_topo(hw, new_topo, size, NULL, NULL, true);
if (status) {
ice_debug(hw, ICE_DBG_INIT, "Failed setting Tx topology\n");
return status;
ice_debug(hw, ICE_DBG_INIT, "Failed to set Tx topology, status %pe\n",
ERR_PTR(status));
/* only report -EIO here as the caller checks the error value
* and reports an informational error message informing that
* the driver failed to program Tx topology.
*/
status = -EIO;
}
/* New topology is updated, delay 1 second before issuing the CORER */
/* Even if Tx topology config failed, we need to CORE reset here to
* clear the global configuration lock. Delay 1 second to allow
* hardware to settle then issue a CORER
*/
msleep(1000);
ice_reset(hw, ICE_RESET_CORER);
/* CORER will clear the global lock, so no explicit call
* required for release.
*/
ice_check_reset(hw);
return 0;
reinit_hw:
/* Since we triggered a CORER, re-initialize hardware */
ice_deinit_hw(hw);
if (ice_init_hw(hw)) {
ice_debug(hw, ICE_DBG_INIT, "Failed to re-init hardware after setting Tx topology\n");
return -ENODEV;
}
return status;
}

View File

@ -336,6 +336,7 @@ int ice_plug_aux_dev(struct ice_pf *pf)
mutex_lock(&pf->adev_mutex);
cdev->adev = adev;
mutex_unlock(&pf->adev_mutex);
set_bit(ICE_FLAG_AUX_DEV_CREATED, pf->flags);
return 0;
}
@ -347,15 +348,16 @@ void ice_unplug_aux_dev(struct ice_pf *pf)
{
struct auxiliary_device *adev;
if (!test_and_clear_bit(ICE_FLAG_AUX_DEV_CREATED, pf->flags))
return;
mutex_lock(&pf->adev_mutex);
adev = pf->cdev_info->adev;
pf->cdev_info->adev = NULL;
mutex_unlock(&pf->adev_mutex);
if (adev) {
auxiliary_device_delete(adev);
auxiliary_device_uninit(adev);
}
auxiliary_device_delete(adev);
auxiliary_device_uninit(adev);
}
/**

View File

@ -4536,17 +4536,23 @@ ice_init_tx_topology(struct ice_hw *hw, const struct firmware *firmware)
dev_info(dev, "Tx scheduling layers switching feature disabled\n");
else
dev_info(dev, "Tx scheduling layers switching feature enabled\n");
/* if there was a change in topology ice_cfg_tx_topo triggered
* a CORER and we need to re-init hw
return 0;
} else if (err == -ENODEV) {
/* If we failed to re-initialize the device, we can no longer
* continue loading.
*/
ice_deinit_hw(hw);
err = ice_init_hw(hw);
dev_warn(dev, "Failed to initialize hardware after applying Tx scheduling configuration.\n");
return err;
} else if (err == -EIO) {
dev_info(dev, "DDP package does not support Tx scheduling layers switching feature - please update to the latest DDP package and try again\n");
return 0;
} else if (err == -EEXIST) {
return 0;
}
/* Do not treat this as a fatal error. */
dev_info(dev, "Failed to apply Tx scheduling configuration, err %pe\n",
ERR_PTR(err));
return 0;
}

View File

@ -1352,7 +1352,7 @@ construct_skb:
skb = ice_construct_skb(rx_ring, xdp);
/* exit if we failed to retrieve a buffer */
if (!skb) {
rx_ring->ring_stats->rx_stats.alloc_page_failed++;
rx_ring->ring_stats->rx_stats.alloc_buf_failed++;
xdp_verdict = ICE_XDP_CONSUMED;
}
ice_put_rx_mbuf(rx_ring, xdp, &xdp_xmit, ntc, xdp_verdict);

View File

@ -179,6 +179,58 @@ static int idpf_tx_singleq_csum(struct sk_buff *skb,
return 1;
}
/**
* idpf_tx_singleq_dma_map_error - handle TX DMA map errors
* @txq: queue to send buffer on
* @skb: send buffer
* @first: original first buffer info buffer for packet
* @idx: starting point on ring to unwind
*/
static void idpf_tx_singleq_dma_map_error(struct idpf_tx_queue *txq,
struct sk_buff *skb,
struct idpf_tx_buf *first, u16 idx)
{
struct libeth_sq_napi_stats ss = { };
struct libeth_cq_pp cp = {
.dev = txq->dev,
.ss = &ss,
};
u64_stats_update_begin(&txq->stats_sync);
u64_stats_inc(&txq->q_stats.dma_map_errs);
u64_stats_update_end(&txq->stats_sync);
/* clear dma mappings for failed tx_buf map */
for (;;) {
struct idpf_tx_buf *tx_buf;
tx_buf = &txq->tx_buf[idx];
libeth_tx_complete(tx_buf, &cp);
if (tx_buf == first)
break;
if (idx == 0)
idx = txq->desc_count;
idx--;
}
if (skb_is_gso(skb)) {
union idpf_tx_flex_desc *tx_desc;
/* If we failed a DMA mapping for a TSO packet, we will have
* used one additional descriptor for a context
* descriptor. Reset that here.
*/
tx_desc = &txq->flex_tx[idx];
memset(tx_desc, 0, sizeof(*tx_desc));
if (idx == 0)
idx = txq->desc_count;
idx--;
}
/* Update tail in case netdev_xmit_more was previously true */
idpf_tx_buf_hw_update(txq, idx, false);
}
/**
* idpf_tx_singleq_map - Build the Tx base descriptor
* @tx_q: queue to send buffer on
@ -219,8 +271,9 @@ static void idpf_tx_singleq_map(struct idpf_tx_queue *tx_q,
for (frag = &skb_shinfo(skb)->frags[0];; frag++) {
unsigned int max_data = IDPF_TX_MAX_DESC_DATA_ALIGNED;
if (dma_mapping_error(tx_q->dev, dma))
return idpf_tx_dma_map_error(tx_q, skb, first, i);
if (unlikely(dma_mapping_error(tx_q->dev, dma)))
return idpf_tx_singleq_dma_map_error(tx_q, skb,
first, i);
/* record length, and DMA address */
dma_unmap_len_set(tx_buf, len, size);
@ -362,11 +415,11 @@ netdev_tx_t idpf_tx_singleq_frame(struct sk_buff *skb,
{
struct idpf_tx_offload_params offload = { };
struct idpf_tx_buf *first;
u32 count, buf_count = 1;
int csum, tso, needed;
unsigned int count;
__be16 protocol;
count = idpf_tx_desc_count_required(tx_q, skb);
count = idpf_tx_res_count_required(tx_q, skb, &buf_count);
if (unlikely(!count))
return idpf_tx_drop_skb(tx_q, skb);

File diff suppressed because it is too large

View File

@ -108,8 +108,8 @@ do { \
*/
#define IDPF_TX_SPLITQ_RE_MIN_GAP 64
#define IDPF_RX_BI_GEN_M BIT(16)
#define IDPF_RX_BI_BUFID_M GENMASK(15, 0)
#define IDPF_RFL_BI_GEN_M BIT(16)
#define IDPF_RFL_BI_BUFID_M GENMASK(15, 0)
#define IDPF_RXD_EOF_SPLITQ VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_EOF_M
#define IDPF_RXD_EOF_SINGLEQ VIRTCHNL2_RX_BASE_DESC_STATUS_EOF_M
@ -118,10 +118,6 @@ do { \
((((txq)->next_to_clean > (txq)->next_to_use) ? 0 : (txq)->desc_count) + \
(txq)->next_to_clean - (txq)->next_to_use - 1)
#define IDPF_TX_BUF_RSV_UNUSED(txq) ((txq)->stash->buf_stack.top)
#define IDPF_TX_BUF_RSV_LOW(txq) (IDPF_TX_BUF_RSV_UNUSED(txq) < \
(txq)->desc_count >> 2)
#define IDPF_TX_COMPLQ_OVERFLOW_THRESH(txcq) ((txcq)->desc_count >> 1)
/* Determine the absolute number of completions pending, i.e. the number of
* completions that are expected to arrive on the TX completion queue.
@ -131,11 +127,7 @@ do { \
0 : U32_MAX) + \
(txq)->num_completions_pending - (txq)->complq->num_completions)
#define IDPF_TX_SPLITQ_COMPL_TAG_WIDTH 16
/* Adjust the generation for the completion tag and wrap if necessary */
#define IDPF_TX_ADJ_COMPL_TAG_GEN(txq) \
((++(txq)->compl_tag_cur_gen) >= (txq)->compl_tag_gen_max ? \
0 : (txq)->compl_tag_cur_gen)
#define IDPF_TXBUF_NULL U32_MAX
#define IDPF_TXD_LAST_DESC_CMD (IDPF_TX_DESC_CMD_EOP | IDPF_TX_DESC_CMD_RS)
@ -152,18 +144,6 @@ union idpf_tx_flex_desc {
#define idpf_tx_buf libeth_sqe
/**
* struct idpf_buf_lifo - LIFO for managing OOO completions
* @top: Used to know how many buffers are left
* @size: Total size of LIFO
* @bufs: Backing array
*/
struct idpf_buf_lifo {
u16 top;
u16 size;
struct idpf_tx_stash **bufs;
};
/**
* struct idpf_tx_offload_params - Offload parameters for a given packet
* @tx_flags: Feature flags enabled for this packet
@ -196,6 +176,9 @@ struct idpf_tx_offload_params {
* @compl_tag: Associated tag for completion
* @td_tag: Descriptor tunneling tag
* @offload: Offload parameters
* @prev_ntu: stored TxQ next_to_use in case of rollback
* @prev_refill_ntc: stored refillq next_to_clean in case of packet rollback
* @prev_refill_gen: stored refillq generation bit in case of packet rollback
*/
struct idpf_tx_splitq_params {
enum idpf_tx_desc_dtype_value dtype;
@ -206,6 +189,10 @@ struct idpf_tx_splitq_params {
};
struct idpf_tx_offload_params offload;
u16 prev_ntu;
u16 prev_refill_ntc;
bool prev_refill_gen;
};
enum idpf_tx_ctx_desc_eipt_offload {
@ -467,17 +454,6 @@ struct idpf_tx_queue_stats {
#define IDPF_ITR_IDX_SPACING(spacing, dflt) (spacing ? spacing : dflt)
#define IDPF_DIM_DEFAULT_PROFILE_IX 1
/**
* struct idpf_txq_stash - Tx buffer stash for Flow-based scheduling mode
* @buf_stack: Stack of empty buffers to store buffer info for out of order
* buffer completions. See struct idpf_buf_lifo
* @sched_buf_hash: Hash table to store buffers
*/
struct idpf_txq_stash {
struct idpf_buf_lifo buf_stack;
DECLARE_HASHTABLE(sched_buf_hash, 12);
} ____cacheline_aligned;
/**
* struct idpf_rx_queue - software structure representing a receive queue
* @rx: universal receive descriptor array
@ -610,6 +586,8 @@ libeth_cacheline_set_assert(struct idpf_rx_queue, 64,
* @netdev: &net_device corresponding to this queue
* @next_to_use: Next descriptor to use
* @next_to_clean: Next descriptor to clean
* @last_re: last descriptor index that RE bit was set
* @tx_max_bufs: Max buffers that can be transmitted with scatter-gather
* @cleaned_bytes: Splitq only, TXQ only: When a TX completion is received on
* the TX completion queue, it can be for any TXQ associated
* with that completion queue. This means we can clean up to
@ -620,11 +598,7 @@ libeth_cacheline_set_assert(struct idpf_rx_queue, 64,
* only once at the end of the cleaning routine.
* @clean_budget: singleq only, queue cleaning budget
* @cleaned_pkts: Number of packets cleaned for the above said case
* @tx_max_bufs: Max buffers that can be transmitted with scatter-gather
* @stash: Tx buffer stash for Flow-based scheduling mode
* @compl_tag_bufid_m: Completion tag buffer id mask
* @compl_tag_cur_gen: Used to keep track of current completion tag generation
* @compl_tag_gen_max: To determine when compl_tag_cur_gen should be reset
* @refillq: Pointer to refill queue
* @cached_tstamp_caps: Tx timestamp capabilities negotiated with the CP
* @tstamp_task: Work that handles Tx timestamp read
* @stats_sync: See struct u64_stats_sync
@ -633,6 +607,7 @@ libeth_cacheline_set_assert(struct idpf_rx_queue, 64,
* @size: Length of descriptor ring in bytes
* @dma: Physical address of ring
* @q_vector: Backreference to associated vector
* @buf_pool_size: Total number of idpf_tx_buf
*/
struct idpf_tx_queue {
__cacheline_group_begin_aligned(read_mostly);
@ -654,7 +629,6 @@ struct idpf_tx_queue {
u16 desc_count;
u16 tx_min_pkt_len;
u16 compl_tag_gen_s;
struct net_device *netdev;
__cacheline_group_end_aligned(read_mostly);
@ -662,6 +636,8 @@ struct idpf_tx_queue {
__cacheline_group_begin_aligned(read_write);
u16 next_to_use;
u16 next_to_clean;
u16 last_re;
u16 tx_max_bufs;
union {
u32 cleaned_bytes;
@ -669,12 +645,7 @@ struct idpf_tx_queue {
};
u16 cleaned_pkts;
u16 tx_max_bufs;
struct idpf_txq_stash *stash;
u16 compl_tag_bufid_m;
u16 compl_tag_cur_gen;
u16 compl_tag_gen_max;
struct idpf_sw_queue *refillq;
struct idpf_ptp_vport_tx_tstamp_caps *cached_tstamp_caps;
struct work_struct *tstamp_task;
@ -689,11 +660,12 @@ struct idpf_tx_queue {
dma_addr_t dma;
struct idpf_q_vector *q_vector;
u32 buf_pool_size;
__cacheline_group_end_aligned(cold);
};
libeth_cacheline_set_assert(struct idpf_tx_queue, 64,
112 + sizeof(struct u64_stats_sync),
24);
104 + sizeof(struct u64_stats_sync),
32);
/**
* struct idpf_buf_queue - software structure representing a buffer queue
@ -903,7 +875,6 @@ struct idpf_rxq_group {
* @vport: Vport back pointer
* @num_txq: Number of TX queues associated
* @txqs: Array of TX queue pointers
* @stashes: array of OOO stashes for the queues
* @complq: Associated completion queue pointer, split queue only
* @num_completions_pending: Total number of completions pending for the
* completion queue, acculumated for all TX queues
@ -918,7 +889,6 @@ struct idpf_txq_group {
u16 num_txq;
struct idpf_tx_queue *txqs[IDPF_LARGE_MAX_Q];
struct idpf_txq_stash *stashes;
struct idpf_compl_queue *complq;
@ -1011,6 +981,17 @@ static inline void idpf_vport_intr_set_wb_on_itr(struct idpf_q_vector *q_vector)
reg->dyn_ctl);
}
/**
* idpf_tx_splitq_get_free_bufs - get number of free buf_ids in refillq
* @refillq: pointer to refillq containing buf_ids
*/
static inline u32 idpf_tx_splitq_get_free_bufs(struct idpf_sw_queue *refillq)
{
return (refillq->next_to_use > refillq->next_to_clean ?
0 : refillq->desc_count) +
refillq->next_to_use - refillq->next_to_clean - 1;
}
int idpf_vport_singleq_napi_poll(struct napi_struct *napi, int budget);
void idpf_vport_init_num_qs(struct idpf_vport *vport,
struct virtchnl2_create_vport *vport_msg);
@ -1038,10 +1019,8 @@ void idpf_tx_buf_hw_update(struct idpf_tx_queue *tx_q, u32 val,
bool xmit_more);
unsigned int idpf_size_to_txd_count(unsigned int size);
netdev_tx_t idpf_tx_drop_skb(struct idpf_tx_queue *tx_q, struct sk_buff *skb);
void idpf_tx_dma_map_error(struct idpf_tx_queue *txq, struct sk_buff *skb,
struct idpf_tx_buf *first, u16 ring_idx);
unsigned int idpf_tx_desc_count_required(struct idpf_tx_queue *txq,
struct sk_buff *skb);
unsigned int idpf_tx_res_count_required(struct idpf_tx_queue *txq,
struct sk_buff *skb, u32 *buf_count);
void idpf_tx_timeout(struct net_device *netdev, unsigned int txqueue);
netdev_tx_t idpf_tx_singleq_frame(struct sk_buff *skb,
struct idpf_tx_queue *tx_q);
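
For reference, a self-contained sketch of the wrap-aware accounting used by idpf_tx_splitq_get_free_bufs() above; the helper below simply restates the same formula outside the driver.

/* Number of entries between next_to_clean and next_to_use in a ring of
 * `count` slots, computed with the same wrap-aware expression as the driver.
 */
#include <assert.h>
#include <stdio.h>

static unsigned int ring_free_entries(unsigned int ntu, unsigned int ntc,
                                      unsigned int count)
{
        return (ntu > ntc ? 0 : count) + ntu - ntc - 1;
}

int main(void)
{
        /* next_to_use has wrapped around the end of an 8-entry ring. */
        assert(ring_free_entries(1, 2, 8) == 6);
        /* Equal indices give count - 1, i.e. one slot is always held back. */
        assert(ring_free_entries(2, 2, 8) == 7);
        printf("ring accounting checks passed\n");
        return 0;
}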

View File

@ -3125,7 +3125,7 @@ static int ixgbe_get_orom_ver_info(struct ixgbe_hw *hw,
if (err)
return err;
combo_ver = le32_to_cpu(civd.combo_ver);
combo_ver = get_unaligned_le32(&civd.combo_ver);
orom->major = (u8)FIELD_GET(IXGBE_OROM_VER_MASK, combo_ver);
orom->patch = (u8)FIELD_GET(IXGBE_OROM_VER_PATCH_MASK, combo_ver);
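
A hedged userspace illustration of why the read above moves to get_unaligned_le32(): with the structure packed (see the header change below), the 32-bit field can land at an unaligned offset, so it is reassembled byte by byte. The struct layout here is simplified, not the real ixgbe_orom_civd_info.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct civd_info {
        uint8_t  signature;
        uint32_t combo_ver;     /* unaligned once the struct is packed */
} __attribute__((packed));

/* Byte-wise little-endian load that is safe at any alignment. */
static uint32_t get_unaligned_le32(const void *p)
{
        uint8_t b[4];

        memcpy(b, p, sizeof(b));
        return (uint32_t)b[0] | ((uint32_t)b[1] << 8) |
               ((uint32_t)b[2] << 16) | ((uint32_t)b[3] << 24);
}

int main(void)
{
        struct civd_info civd = { .signature = 0x3a, .combo_ver = 0x01020304 };

        printf("combo_ver = %#x\n", get_unaligned_le32(&civd.combo_ver));
        return 0;
}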

View File

@ -932,7 +932,7 @@ struct ixgbe_orom_civd_info {
__le32 combo_ver; /* Combo Image Version number */
u8 combo_name_len; /* Length of the unicode combo image version string, max of 32 */
__le16 combo_name[32]; /* Unicode string representing the Combo Image version */
};
} __packed;
/* Function specific capabilities */
struct ixgbe_hw_func_caps {

View File

@ -1978,6 +1978,13 @@ static int cgx_probe(struct pci_dev *pdev, const struct pci_device_id *id)
goto err_release_regions;
}
if (!is_cn20k(pdev) &&
!is_cgx_mapped_to_nix(pdev->subsystem_device, cgx->cgx_id)) {
dev_notice(dev, "CGX %d not mapped to NIX, skipping probe\n",
cgx->cgx_id);
goto err_release_regions;
}
cgx->lmac_count = cgx->mac_ops->get_nr_lmacs(cgx);
if (!cgx->lmac_count) {
dev_notice(dev, "CGX %d LMAC count is zero, skipping probe\n", cgx->cgx_id);

View File

@ -783,6 +783,20 @@ static inline bool is_cn10kb(struct rvu *rvu)
return false;
}
static inline bool is_cgx_mapped_to_nix(unsigned short id, u8 cgx_id)
{
/* On CNF10KA and CNF10KB silicons only two CGX blocks are connected
* to NIX.
*/
if (id == PCI_SUBSYS_DEVID_CNF10K_A || id == PCI_SUBSYS_DEVID_CNF10K_B)
return cgx_id <= 1;
return !(cgx_id && !(id == PCI_SUBSYS_DEVID_96XX ||
id == PCI_SUBSYS_DEVID_98XX ||
id == PCI_SUBSYS_DEVID_CN10K_A ||
id == PCI_SUBSYS_DEVID_CN10K_B));
}
static inline bool is_rvu_npc_hash_extract_en(struct rvu *rvu)
{
u64 npc_const3;

View File

@ -124,7 +124,9 @@ void otx2_get_dev_stats(struct otx2_nic *pfvf)
dev_stats->rx_ucast_frames;
dev_stats->tx_bytes = OTX2_GET_TX_STATS(TX_OCTS);
dev_stats->tx_drops = OTX2_GET_TX_STATS(TX_DROP);
dev_stats->tx_drops = OTX2_GET_TX_STATS(TX_DROP) +
(unsigned long)atomic_long_read(&dev_stats->tx_discards);
dev_stats->tx_bcast_frames = OTX2_GET_TX_STATS(TX_BCAST);
dev_stats->tx_mcast_frames = OTX2_GET_TX_STATS(TX_MCAST);
dev_stats->tx_ucast_frames = OTX2_GET_TX_STATS(TX_UCAST);

View File

@ -153,6 +153,7 @@ struct otx2_dev_stats {
u64 tx_bcast_frames;
u64 tx_mcast_frames;
u64 tx_drops;
atomic_long_t tx_discards;
};
/* Driver counted stats */

View File

@ -2220,6 +2220,7 @@ static netdev_tx_t otx2_xmit(struct sk_buff *skb, struct net_device *netdev)
{
struct otx2_nic *pf = netdev_priv(netdev);
int qidx = skb_get_queue_mapping(skb);
struct otx2_dev_stats *dev_stats;
struct otx2_snd_queue *sq;
struct netdev_queue *txq;
int sq_idx;
@ -2232,6 +2233,8 @@ static netdev_tx_t otx2_xmit(struct sk_buff *skb, struct net_device *netdev)
/* Check for minimum and maximum packet length */
if (skb->len <= ETH_HLEN ||
(!skb_shinfo(skb)->gso_size && skb->len > pf->tx_max_pktlen)) {
dev_stats = &pf->hw.dev_stats;
atomic_long_inc(&dev_stats->tx_discards);
dev_kfree_skb(skb);
return NETDEV_TX_OK;
}
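
A C11 sketch (illustrative only, not driver code) of the pattern used above: packets dropped in software for invalid lengths bump an atomic counter that is later added to the hardware drop statistics.

#include <stdatomic.h>
#include <stdio.h>

static atomic_long tx_discards = 0;     /* software drops (e.g. bad length) */

static int xmit(unsigned int len, unsigned int min_len, unsigned int max_len)
{
        if (len <= min_len || len > max_len) {
                atomic_fetch_add(&tx_discards, 1);
                return 0;               /* packet consumed (dropped), not an error */
        }
        return 1;                       /* would be handed to the hardware queue */
}

int main(void)
{
        unsigned long hw_drops = 3;     /* pretend this came from a HW register */

        xmit(10, 14, 1500);             /* too short: counted as a discard */
        xmit(9000, 14, 1500);           /* too long: counted as a discard */
        xmit(100, 14, 1500);            /* fine */

        printf("tx_drops = %lu\n", hw_drops + atomic_load(&tx_discards));
        return 0;
}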

View File

@ -417,9 +417,19 @@ static netdev_tx_t otx2vf_xmit(struct sk_buff *skb, struct net_device *netdev)
{
struct otx2_nic *vf = netdev_priv(netdev);
int qidx = skb_get_queue_mapping(skb);
struct otx2_dev_stats *dev_stats;
struct otx2_snd_queue *sq;
struct netdev_queue *txq;
/* Check for minimum and maximum packet length */
if (skb->len <= ETH_HLEN ||
(!skb_shinfo(skb)->gso_size && skb->len > vf->tx_max_pktlen)) {
dev_stats = &vf->hw.dev_stats;
atomic_long_inc(&dev_stats->tx_discards);
dev_kfree_skb(skb);
return NETDEV_TX_OK;
}
sq = &vf->qset.sq[qidx];
txq = netdev_get_tx_queue(netdev, qidx);

View File

@ -371,7 +371,8 @@ static void rvu_rep_get_stats(struct work_struct *work)
stats->rx_mcast_frames = rsp->rx.mcast;
stats->tx_bytes = rsp->tx.octs;
stats->tx_frames = rsp->tx.ucast + rsp->tx.bcast + rsp->tx.mcast;
stats->tx_drops = rsp->tx.drop;
stats->tx_drops = rsp->tx.drop +
(unsigned long)atomic_long_read(&stats->tx_discards);
exit:
mutex_unlock(&priv->mbox.lock);
}
@ -418,6 +419,16 @@ static netdev_tx_t rvu_rep_xmit(struct sk_buff *skb, struct net_device *dev)
struct otx2_nic *pf = rep->mdev;
struct otx2_snd_queue *sq;
struct netdev_queue *txq;
struct rep_stats *stats;
/* Check for minimum and maximum packet length */
if (skb->len <= ETH_HLEN ||
(!skb_shinfo(skb)->gso_size && skb->len > pf->tx_max_pktlen)) {
stats = &rep->stats;
atomic_long_inc(&stats->tx_discards);
dev_kfree_skb(skb);
return NETDEV_TX_OK;
}
sq = &pf->qset.sq[rep->rep_id];
txq = netdev_get_tx_queue(dev, 0);

View File

@ -27,6 +27,7 @@ struct rep_stats {
u64 tx_bytes;
u64 tx_frames;
u64 tx_drops;
atomic_long_t tx_discards;
};
struct rep_dev {

View File

@ -160,7 +160,7 @@ static int mlx5_devlink_reload_fw_activate(struct devlink *devlink, struct netli
if (err)
return err;
mlx5_unload_one_devl_locked(dev, true);
mlx5_sync_reset_unload_flow(dev, true);
err = mlx5_health_wait_pci_up(dev);
if (err)
NL_SET_ERR_MSG_MOD(extack, "FW activate aborted, PCI reads fail after reset");

View File

@ -575,7 +575,6 @@ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
if (err)
return err;
}
priv->dcbx.xoff = xoff;
/* Apply the settings */
if (update_buffer) {
@ -584,6 +583,8 @@ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
return err;
}
priv->dcbx.xoff = xoff;
if (update_prio2buffer)
err = mlx5e_port_set_priority2buffer(priv->mdev, prio2buffer);

View File

@ -66,11 +66,23 @@ struct mlx5e_port_buffer {
struct mlx5e_bufferx_reg buffer[MLX5E_MAX_NETWORK_BUFFER];
};
#ifdef CONFIG_MLX5_CORE_EN_DCB
int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
u32 change, unsigned int mtu,
struct ieee_pfc *pfc,
u32 *buffer_size,
u8 *prio2buffer);
#else
static inline int
mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
u32 change, unsigned int mtu,
void *pfc,
u32 *buffer_size,
u8 *prio2buffer)
{
return 0;
}
#endif
int mlx5e_port_query_buffer(struct mlx5e_priv *priv,
struct mlx5e_port_buffer *port_buffer);

View File

@ -49,6 +49,7 @@
#include "en.h"
#include "en/dim.h"
#include "en/txrx.h"
#include "en/port_buffer.h"
#include "en_tc.h"
#include "en_rep.h"
#include "en_accel/ipsec.h"
@ -138,6 +139,8 @@ void mlx5e_update_carrier(struct mlx5e_priv *priv)
if (up) {
netdev_info(priv->netdev, "Link up\n");
netif_carrier_on(priv->netdev);
mlx5e_port_manual_buffer_config(priv, 0, priv->netdev->mtu,
NULL, NULL, NULL);
} else {
netdev_info(priv->netdev, "Link down\n");
netif_carrier_off(priv->netdev);
@ -3040,9 +3043,11 @@ int mlx5e_set_dev_port_mtu(struct mlx5e_priv *priv)
struct mlx5e_params *params = &priv->channels.params;
struct net_device *netdev = priv->netdev;
struct mlx5_core_dev *mdev = priv->mdev;
u16 mtu;
u16 mtu, prev_mtu;
int err;
mlx5e_query_mtu(mdev, params, &prev_mtu);
err = mlx5e_set_mtu(mdev, params, params->sw_mtu);
if (err)
return err;
@ -3052,6 +3057,18 @@ int mlx5e_set_dev_port_mtu(struct mlx5e_priv *priv)
netdev_warn(netdev, "%s: VPort MTU %d is different than netdev mtu %d\n",
__func__, mtu, params->sw_mtu);
if (mtu != prev_mtu && MLX5_BUFFER_SUPPORTED(mdev)) {
err = mlx5e_port_manual_buffer_config(priv, 0, mtu,
NULL, NULL, NULL);
if (err) {
netdev_warn(netdev, "%s: Failed to set Xon/Xoff values with MTU %d (err %d), setting back to previous MTU %d\n",
__func__, mtu, err, prev_mtu);
mlx5e_set_mtu(mdev, params, prev_mtu);
return err;
}
}
params->sw_mtu = mtu;
return 0;
}

View File

@ -3734,6 +3734,13 @@ static int mlx5_fs_mode_validate(struct devlink *devlink, u32 id,
char *value = val.vstr;
u8 eswitch_mode;
eswitch_mode = mlx5_eswitch_mode(dev);
if (eswitch_mode == MLX5_ESWITCH_OFFLOADS) {
NL_SET_ERR_MSG_FMT_MOD(extack,
"Changing fs mode is not supported when eswitch offloads enabled.");
return -EOPNOTSUPP;
}
if (!strcmp(value, "dmfs"))
return 0;
@ -3759,14 +3766,6 @@ static int mlx5_fs_mode_validate(struct devlink *devlink, u32 id,
return -EINVAL;
}
eswitch_mode = mlx5_eswitch_mode(dev);
if (eswitch_mode == MLX5_ESWITCH_OFFLOADS) {
NL_SET_ERR_MSG_FMT_MOD(extack,
"Moving to %s is not supported when eswitch offloads enabled.",
value);
return -EOPNOTSUPP;
}
return 0;
}

View File

@ -6,13 +6,15 @@
#include "fw_reset.h"
#include "diag/fw_tracer.h"
#include "lib/tout.h"
#include "sf/sf.h"
enum {
MLX5_FW_RESET_FLAGS_RESET_REQUESTED,
MLX5_FW_RESET_FLAGS_NACK_RESET_REQUEST,
MLX5_FW_RESET_FLAGS_PENDING_COMP,
MLX5_FW_RESET_FLAGS_DROP_NEW_REQUESTS,
MLX5_FW_RESET_FLAGS_RELOAD_REQUIRED
MLX5_FW_RESET_FLAGS_RELOAD_REQUIRED,
MLX5_FW_RESET_FLAGS_UNLOAD_EVENT,
};
struct mlx5_fw_reset {
@ -219,7 +221,7 @@ int mlx5_fw_reset_set_live_patch(struct mlx5_core_dev *dev)
return mlx5_reg_mfrl_set(dev, MLX5_MFRL_REG_RESET_LEVEL0, 0, 0, false);
}
static void mlx5_fw_reset_complete_reload(struct mlx5_core_dev *dev, bool unloaded)
static void mlx5_fw_reset_complete_reload(struct mlx5_core_dev *dev)
{
struct mlx5_fw_reset *fw_reset = dev->priv.fw_reset;
struct devlink *devlink = priv_to_devlink(dev);
@ -228,8 +230,7 @@ static void mlx5_fw_reset_complete_reload(struct mlx5_core_dev *dev, bool unload
if (test_bit(MLX5_FW_RESET_FLAGS_PENDING_COMP, &fw_reset->reset_flags)) {
complete(&fw_reset->done);
} else {
if (!unloaded)
mlx5_unload_one(dev, false);
mlx5_sync_reset_unload_flow(dev, false);
if (mlx5_health_wait_pci_up(dev))
mlx5_core_err(dev, "reset reload flow aborted, PCI reads still not working\n");
else
@ -272,7 +273,7 @@ static void mlx5_sync_reset_reload_work(struct work_struct *work)
mlx5_sync_reset_clear_reset_requested(dev, false);
mlx5_enter_error_state(dev, true);
mlx5_fw_reset_complete_reload(dev, false);
mlx5_fw_reset_complete_reload(dev);
}
#define MLX5_RESET_POLL_INTERVAL (HZ / 10)
@ -428,6 +429,11 @@ static bool mlx5_is_reset_now_capable(struct mlx5_core_dev *dev,
return false;
}
if (!mlx5_core_is_ecpf(dev) && !mlx5_sf_table_empty(dev)) {
mlx5_core_warn(dev, "SFs should be removed before reset\n");
return false;
}
#if IS_ENABLED(CONFIG_HOTPLUG_PCI_PCIE)
if (reset_method != MLX5_MFRL_REG_PCI_RESET_METHOD_HOT_RESET) {
err = mlx5_check_hotplug_interrupt(dev, bridge);
@ -586,65 +592,23 @@ static int mlx5_sync_pci_reset(struct mlx5_core_dev *dev, u8 reset_method)
return err;
}
static void mlx5_sync_reset_now_event(struct work_struct *work)
void mlx5_sync_reset_unload_flow(struct mlx5_core_dev *dev, bool locked)
{
struct mlx5_fw_reset *fw_reset = container_of(work, struct mlx5_fw_reset,
reset_now_work);
struct mlx5_core_dev *dev = fw_reset->dev;
int err;
if (mlx5_sync_reset_clear_reset_requested(dev, false))
return;
mlx5_core_warn(dev, "Sync Reset now. Device is going to reset.\n");
err = mlx5_cmd_fast_teardown_hca(dev);
if (err) {
mlx5_core_warn(dev, "Fast teardown failed, no reset done, err %d\n", err);
goto done;
}
err = mlx5_sync_pci_reset(dev, fw_reset->reset_method);
if (err) {
mlx5_core_warn(dev, "mlx5_sync_pci_reset failed, no reset done, err %d\n", err);
set_bit(MLX5_FW_RESET_FLAGS_RELOAD_REQUIRED, &fw_reset->reset_flags);
}
mlx5_enter_error_state(dev, true);
done:
fw_reset->ret = err;
mlx5_fw_reset_complete_reload(dev, false);
}
static void mlx5_sync_reset_unload_event(struct work_struct *work)
{
struct mlx5_fw_reset *fw_reset;
struct mlx5_core_dev *dev;
struct mlx5_fw_reset *fw_reset = dev->priv.fw_reset;
unsigned long timeout;
int poll_freq = 20;
bool reset_action;
u8 rst_state;
int err;
fw_reset = container_of(work, struct mlx5_fw_reset, reset_unload_work);
dev = fw_reset->dev;
if (mlx5_sync_reset_clear_reset_requested(dev, false))
return;
mlx5_core_warn(dev, "Sync Reset Unload. Function is forced down.\n");
err = mlx5_cmd_fast_teardown_hca(dev);
if (err)
mlx5_core_warn(dev, "Fast teardown failed, unloading, err %d\n", err);
else
mlx5_enter_error_state(dev, true);
if (test_bit(MLX5_FW_RESET_FLAGS_PENDING_COMP, &fw_reset->reset_flags))
if (locked)
mlx5_unload_one_devl_locked(dev, false);
else
mlx5_unload_one(dev, false);
if (!test_bit(MLX5_FW_RESET_FLAGS_UNLOAD_EVENT, &fw_reset->reset_flags))
return;
mlx5_set_fw_rst_ack(dev);
mlx5_core_warn(dev, "Sync Reset Unload done, device reset expected\n");
@ -672,17 +636,73 @@ static void mlx5_sync_reset_unload_event(struct work_struct *work)
goto done;
}
mlx5_core_warn(dev, "Sync Reset, got reset action. rst_state = %u\n", rst_state);
mlx5_core_warn(dev, "Sync Reset, got reset action. rst_state = %u\n",
rst_state);
if (rst_state == MLX5_FW_RST_STATE_TOGGLE_REQ) {
err = mlx5_sync_pci_reset(dev, fw_reset->reset_method);
if (err) {
mlx5_core_warn(dev, "mlx5_sync_pci_reset failed, err %d\n", err);
mlx5_core_warn(dev, "mlx5_sync_pci_reset failed, err %d\n",
err);
fw_reset->ret = err;
}
}
done:
mlx5_fw_reset_complete_reload(dev, true);
clear_bit(MLX5_FW_RESET_FLAGS_UNLOAD_EVENT, &fw_reset->reset_flags);
}
static void mlx5_sync_reset_now_event(struct work_struct *work)
{
struct mlx5_fw_reset *fw_reset = container_of(work, struct mlx5_fw_reset,
reset_now_work);
struct mlx5_core_dev *dev = fw_reset->dev;
int err;
if (mlx5_sync_reset_clear_reset_requested(dev, false))
return;
mlx5_core_warn(dev, "Sync Reset now. Device is going to reset.\n");
err = mlx5_cmd_fast_teardown_hca(dev);
if (err) {
mlx5_core_warn(dev, "Fast teardown failed, no reset done, err %d\n", err);
goto done;
}
err = mlx5_sync_pci_reset(dev, fw_reset->reset_method);
if (err) {
mlx5_core_warn(dev, "mlx5_sync_pci_reset failed, no reset done, err %d\n", err);
set_bit(MLX5_FW_RESET_FLAGS_RELOAD_REQUIRED, &fw_reset->reset_flags);
}
mlx5_enter_error_state(dev, true);
done:
fw_reset->ret = err;
mlx5_fw_reset_complete_reload(dev);
}
static void mlx5_sync_reset_unload_event(struct work_struct *work)
{
struct mlx5_fw_reset *fw_reset;
struct mlx5_core_dev *dev;
int err;
fw_reset = container_of(work, struct mlx5_fw_reset, reset_unload_work);
dev = fw_reset->dev;
if (mlx5_sync_reset_clear_reset_requested(dev, false))
return;
set_bit(MLX5_FW_RESET_FLAGS_UNLOAD_EVENT, &fw_reset->reset_flags);
mlx5_core_warn(dev, "Sync Reset Unload. Function is forced down.\n");
err = mlx5_cmd_fast_teardown_hca(dev);
if (err)
mlx5_core_warn(dev, "Fast teardown failed, unloading, err %d\n", err);
else
mlx5_enter_error_state(dev, true);
mlx5_fw_reset_complete_reload(dev);
}
static void mlx5_sync_reset_abort_event(struct work_struct *work)

View File

@ -12,6 +12,7 @@ int mlx5_fw_reset_set_reset_sync(struct mlx5_core_dev *dev, u8 reset_type_sel,
int mlx5_fw_reset_set_live_patch(struct mlx5_core_dev *dev);
int mlx5_fw_reset_wait_reset_done(struct mlx5_core_dev *dev);
void mlx5_sync_reset_unload_flow(struct mlx5_core_dev *dev, bool locked);
int mlx5_fw_reset_verify_fw_complete(struct mlx5_core_dev *dev,
struct netlink_ext_ack *extack);
void mlx5_fw_reset_events_start(struct mlx5_core_dev *dev);

View File

@ -518,3 +518,13 @@ void mlx5_sf_table_cleanup(struct mlx5_core_dev *dev)
WARN_ON(!xa_empty(&table->function_ids));
kfree(table);
}
bool mlx5_sf_table_empty(const struct mlx5_core_dev *dev)
{
struct mlx5_sf_table *table = dev->priv.sf_table;
if (!table)
return true;
return xa_empty(&table->function_ids);
}

View File

@ -17,6 +17,7 @@ void mlx5_sf_hw_table_destroy(struct mlx5_core_dev *dev);
int mlx5_sf_table_init(struct mlx5_core_dev *dev);
void mlx5_sf_table_cleanup(struct mlx5_core_dev *dev);
bool mlx5_sf_table_empty(const struct mlx5_core_dev *dev);
int mlx5_devlink_sf_port_new(struct devlink *devlink,
const struct devlink_port_new_attrs *add_attr,
@ -61,6 +62,11 @@ static inline void mlx5_sf_table_cleanup(struct mlx5_core_dev *dev)
{
}
static inline bool mlx5_sf_table_empty(const struct mlx5_core_dev *dev)
{
return true;
}
#endif
#endif

View File

@ -117,7 +117,7 @@ static int hws_action_get_shared_stc_nic(struct mlx5hws_context *ctx,
mlx5hws_err(ctx, "No such stc_type: %d\n", stc_type);
pr_warn("HWS: Invalid stc_type: %d\n", stc_type);
ret = -EINVAL;
goto unlock_and_out;
goto free_shared_stc;
}
ret = mlx5hws_action_alloc_single_stc(ctx, &stc_attr, tbl_type,

View File

@ -279,7 +279,7 @@ int mlx5hws_pat_get_pattern(struct mlx5hws_context *ctx,
return ret;
clean_pattern:
mlx5hws_cmd_header_modify_pattern_destroy(ctx->mdev, *pattern_id);
mlx5hws_cmd_header_modify_pattern_destroy(ctx->mdev, ptrn_id);
out_unlock:
mutex_unlock(&ctx->pattern_cache->lock);
return ret;
@ -527,7 +527,6 @@ int mlx5hws_pat_calc_nop(__be64 *pattern, size_t num_actions,
u32 *nop_locations, __be64 *new_pat)
{
u16 prev_src_field = INVALID_FIELD, prev_dst_field = INVALID_FIELD;
u16 src_field, dst_field;
u8 action_type;
bool dependent;
size_t i, j;
@ -539,6 +538,9 @@ int mlx5hws_pat_calc_nop(__be64 *pattern, size_t num_actions,
return 0;
for (i = 0, j = 0; i < num_actions; i++, j++) {
u16 src_field = INVALID_FIELD;
u16 dst_field = INVALID_FIELD;
if (j >= max_actions)
return -EINVAL;

View File

@ -124,6 +124,7 @@ static int hws_pool_buddy_init(struct mlx5hws_pool *pool)
mlx5hws_err(pool->ctx, "Failed to create resource type: %d size %zu\n",
pool->type, pool->alloc_log_sz);
mlx5hws_buddy_cleanup(buddy);
kfree(buddy);
return -ENOMEM;
}

View File

@ -52,6 +52,8 @@ int __fbnic_open(struct fbnic_net *fbn)
fbnic_bmc_rpc_init(fbd);
fbnic_rss_reinit(fbd, fbn);
phylink_resume(fbn->phylink);
return 0;
time_stop:
fbnic_time_stop(fbn);
@ -84,6 +86,8 @@ static int fbnic_stop(struct net_device *netdev)
{
struct fbnic_net *fbn = netdev_priv(netdev);
phylink_suspend(fbn->phylink, fbnic_bmc_present(fbn->fbd));
fbnic_down(fbn);
fbnic_pcs_free_irq(fbn->fbd);

View File

@ -118,14 +118,12 @@ static void fbnic_service_task_start(struct fbnic_net *fbn)
struct fbnic_dev *fbd = fbn->fbd;
schedule_delayed_work(&fbd->service_task, HZ);
phylink_resume(fbn->phylink);
}
static void fbnic_service_task_stop(struct fbnic_net *fbn)
{
struct fbnic_dev *fbd = fbn->fbd;
phylink_suspend(fbn->phylink, fbnic_bmc_present(fbd));
cancel_delayed_work(&fbd->service_task);
}
@ -443,11 +441,10 @@ static int __fbnic_pm_resume(struct device *dev)
/* Re-enable mailbox */
err = fbnic_fw_request_mbx(fbd);
devl_unlock(priv_to_devlink(fbd));
if (err)
goto err_free_irqs;
devl_unlock(priv_to_devlink(fbd));
/* Only send log history if log buffer is empty to prevent duplicate
* log entries.
*/
@ -464,20 +461,20 @@ static int __fbnic_pm_resume(struct device *dev)
rtnl_lock();
if (netif_running(netdev)) {
if (netif_running(netdev))
err = __fbnic_open(fbn);
if (err)
goto err_free_mbx;
}
rtnl_unlock();
if (err)
goto err_free_mbx;
return 0;
err_free_mbx:
fbnic_fw_log_disable(fbd);
rtnl_unlock();
devl_lock(priv_to_devlink(fbd));
fbnic_fw_free_mbx(fbd);
devl_unlock(priv_to_devlink(fbd));
err_free_irqs:
fbnic_free_irqs(fbd);
err_invalidate_uc_addr:

View File

@ -49,6 +49,14 @@ static void dwxgmac2_core_init(struct mac_device_info *hw,
writel(XGMAC_INT_DEFAULT_EN, ioaddr + XGMAC_INT_EN);
}
static void dwxgmac2_update_caps(struct stmmac_priv *priv)
{
if (!priv->dma_cap.mbps_10_100)
priv->hw->link.caps &= ~(MAC_10 | MAC_100);
else if (!priv->dma_cap.half_duplex)
priv->hw->link.caps &= ~(MAC_10HD | MAC_100HD);
}
static void dwxgmac2_set_mac(void __iomem *ioaddr, bool enable)
{
u32 tx = readl(ioaddr + XGMAC_TX_CONFIG);
@ -1424,6 +1432,7 @@ static void dwxgmac2_set_arp_offload(struct mac_device_info *hw, bool en,
const struct stmmac_ops dwxgmac210_ops = {
.core_init = dwxgmac2_core_init,
.update_caps = dwxgmac2_update_caps,
.set_mac = dwxgmac2_set_mac,
.rx_ipc = dwxgmac2_rx_ipc,
.rx_queue_enable = dwxgmac2_rx_queue_enable,
@ -1532,8 +1541,8 @@ int dwxgmac2_setup(struct stmmac_priv *priv)
mac->mcast_bits_log2 = ilog2(mac->multicast_filter_bins);
mac->link.caps = MAC_ASYM_PAUSE | MAC_SYM_PAUSE |
MAC_1000FD | MAC_2500FD | MAC_5000FD |
MAC_10000FD;
MAC_10 | MAC_100 | MAC_1000FD |
MAC_2500FD | MAC_5000FD | MAC_10000FD;
mac->link.duplex = 0;
mac->link.speed10 = XGMAC_CONFIG_SS_10_MII;
mac->link.speed100 = XGMAC_CONFIG_SS_100_MII;

View File

@ -203,10 +203,6 @@ static void dwxgmac2_dma_rx_mode(struct stmmac_priv *priv, void __iomem *ioaddr,
}
writel(value, ioaddr + XGMAC_MTL_RXQ_OPMODE(channel));
/* Enable MTL RX overflow */
value = readl(ioaddr + XGMAC_MTL_QINTEN(channel));
writel(value | XGMAC_RXOIE, ioaddr + XGMAC_MTL_QINTEN(channel));
}
static void dwxgmac2_dma_tx_mode(struct stmmac_priv *priv, void __iomem *ioaddr,
@ -386,8 +382,11 @@ static int dwxgmac2_dma_interrupt(struct stmmac_priv *priv,
static int dwxgmac2_get_hw_feature(void __iomem *ioaddr,
struct dma_features *dma_cap)
{
struct stmmac_priv *priv;
u32 hw_cap;
priv = container_of(dma_cap, struct stmmac_priv, dma_cap);
/* MAC HW feature 0 */
hw_cap = readl(ioaddr + XGMAC_HW_FEATURE0);
dma_cap->edma = (hw_cap & XGMAC_HWFEAT_EDMA) >> 31;
@ -410,6 +409,8 @@ static int dwxgmac2_get_hw_feature(void __iomem *ioaddr,
dma_cap->vlhash = (hw_cap & XGMAC_HWFEAT_VLHASH) >> 4;
dma_cap->half_duplex = (hw_cap & XGMAC_HWFEAT_HDSEL) >> 3;
dma_cap->mbps_1000 = (hw_cap & XGMAC_HWFEAT_GMIISEL) >> 1;
if (dma_cap->mbps_1000 && priv->synopsys_id >= DWXGMAC_CORE_2_20)
dma_cap->mbps_10_100 = 1;
/* MAC HW feature 1 */
hw_cap = readl(ioaddr + XGMAC_HW_FEATURE1);

View File

@ -2584,6 +2584,7 @@ static bool stmmac_xdp_xmit_zc(struct stmmac_priv *priv, u32 queue, u32 budget)
struct netdev_queue *nq = netdev_get_tx_queue(priv->dev, queue);
struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[queue];
struct stmmac_txq_stats *txq_stats = &priv->xstats.txq_stats[queue];
bool csum = !priv->plat->tx_queues_cfg[queue].coe_unsupported;
struct xsk_buff_pool *pool = tx_q->xsk_pool;
unsigned int entry = tx_q->cur_tx;
struct dma_desc *tx_desc = NULL;
@ -2671,7 +2672,7 @@ static bool stmmac_xdp_xmit_zc(struct stmmac_priv *priv, u32 queue, u32 budget)
}
stmmac_prepare_tx_desc(priv, tx_desc, 1, xdp_desc.len,
true, priv->mode, true, true,
csum, priv->mode, true, true,
xdp_desc.len);
stmmac_enable_dma_transmission(priv, priv->ioaddr, queue);
@ -4983,6 +4984,7 @@ static int stmmac_xdp_xmit_xdpf(struct stmmac_priv *priv, int queue,
{
struct stmmac_txq_stats *txq_stats = &priv->xstats.txq_stats[queue];
struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[queue];
bool csum = !priv->plat->tx_queues_cfg[queue].coe_unsupported;
unsigned int entry = tx_q->cur_tx;
struct dma_desc *tx_desc;
dma_addr_t dma_addr;
@ -5034,7 +5036,7 @@ static int stmmac_xdp_xmit_xdpf(struct stmmac_priv *priv, int queue,
stmmac_set_desc_addr(priv, tx_desc, dma_addr);
stmmac_prepare_tx_desc(priv, tx_desc, 1, xdpf->len,
true, priv->mode, true, true,
csum, priv->mode, true, true,
xdpf->len);
tx_q->tx_count_frames++;

View File

@ -1812,6 +1812,11 @@ struct netvsc_device *netvsc_device_add(struct hv_device *device,
/* Enable NAPI handler before init callbacks */
netif_napi_add(ndev, &net_device->chan_table[0].napi, netvsc_poll);
napi_enable(&net_device->chan_table[0].napi);
netif_queue_set_napi(ndev, 0, NETDEV_QUEUE_TYPE_RX,
&net_device->chan_table[0].napi);
netif_queue_set_napi(ndev, 0, NETDEV_QUEUE_TYPE_TX,
&net_device->chan_table[0].napi);
/* Open the channel */
device->channel->next_request_id_callback = vmbus_next_request_id;
@ -1831,12 +1836,6 @@ struct netvsc_device *netvsc_device_add(struct hv_device *device,
/* Channel is opened */
netdev_dbg(ndev, "hv_netvsc channel opened successfully\n");
napi_enable(&net_device->chan_table[0].napi);
netif_queue_set_napi(ndev, 0, NETDEV_QUEUE_TYPE_RX,
&net_device->chan_table[0].napi);
netif_queue_set_napi(ndev, 0, NETDEV_QUEUE_TYPE_TX,
&net_device->chan_table[0].napi);
/* Connect with the NetVsp */
ret = netvsc_connect_vsp(device, net_device, device_info);
if (ret != 0) {
@ -1854,14 +1853,14 @@ struct netvsc_device *netvsc_device_add(struct hv_device *device,
close:
RCU_INIT_POINTER(net_device_ctx->nvdev, NULL);
netif_queue_set_napi(ndev, 0, NETDEV_QUEUE_TYPE_TX, NULL);
netif_queue_set_napi(ndev, 0, NETDEV_QUEUE_TYPE_RX, NULL);
napi_disable(&net_device->chan_table[0].napi);
/* Now, we can close the channel safely */
vmbus_close(device->channel);
cleanup:
netif_queue_set_napi(ndev, 0, NETDEV_QUEUE_TYPE_TX, NULL);
netif_queue_set_napi(ndev, 0, NETDEV_QUEUE_TYPE_RX, NULL);
napi_disable(&net_device->chan_table[0].napi);
netif_napi_del(&net_device->chan_table[0].napi);
cleanup2:

View File

@ -1252,17 +1252,26 @@ static void netvsc_sc_open(struct vmbus_channel *new_sc)
new_sc->rqstor_size = netvsc_rqstor_size(netvsc_ring_bytes);
new_sc->max_pkt_size = NETVSC_MAX_PKT_SIZE;
/* Enable napi before opening the vmbus channel to avoid races
* as the host placing data on the host->guest ring may be left
* out if napi was not enabled.
*/
napi_enable(&nvchan->napi);
netif_queue_set_napi(ndev, chn_index, NETDEV_QUEUE_TYPE_RX,
&nvchan->napi);
netif_queue_set_napi(ndev, chn_index, NETDEV_QUEUE_TYPE_TX,
&nvchan->napi);
ret = vmbus_open(new_sc, netvsc_ring_bytes,
netvsc_ring_bytes, NULL, 0,
netvsc_channel_cb, nvchan);
if (ret == 0) {
napi_enable(&nvchan->napi);
netif_queue_set_napi(ndev, chn_index, NETDEV_QUEUE_TYPE_RX,
&nvchan->napi);
netif_queue_set_napi(ndev, chn_index, NETDEV_QUEUE_TYPE_TX,
&nvchan->napi);
} else {
if (ret != 0) {
netdev_notice(ndev, "sub channel open failed: %d\n", ret);
netif_queue_set_napi(ndev, chn_index, NETDEV_QUEUE_TYPE_TX,
NULL);
netif_queue_set_napi(ndev, chn_index, NETDEV_QUEUE_TYPE_RX,
NULL);
napi_disable(&nvchan->napi);
}
if (atomic_inc_return(&nvscdev->open_chn) == nvscdev->num_chn)

View File

@ -481,6 +481,7 @@ static inline void vsc8584_config_macsec_intr(struct phy_device *phydev)
void vsc85xx_link_change_notify(struct phy_device *phydev);
void vsc8584_config_ts_intr(struct phy_device *phydev);
int vsc8584_ptp_init(struct phy_device *phydev);
void vsc8584_ptp_deinit(struct phy_device *phydev);
int vsc8584_ptp_probe_once(struct phy_device *phydev);
int vsc8584_ptp_probe(struct phy_device *phydev);
irqreturn_t vsc8584_handle_ts_interrupt(struct phy_device *phydev);
@ -495,6 +496,9 @@ static inline int vsc8584_ptp_init(struct phy_device *phydev)
{
return 0;
}
static inline void vsc8584_ptp_deinit(struct phy_device *phydev)
{
}
static inline int vsc8584_ptp_probe_once(struct phy_device *phydev)
{
return 0;

@ -2337,9 +2337,7 @@ static int vsc85xx_probe(struct phy_device *phydev)
static void vsc85xx_remove(struct phy_device *phydev)
{
struct vsc8531_private *priv = phydev->priv;
skb_queue_purge(&priv->rx_skbs_list);
vsc8584_ptp_deinit(phydev);
}
/* Microsemi VSC85xx PHYs */

@ -1298,7 +1298,6 @@ static void vsc8584_set_input_clk_configured(struct phy_device *phydev)
static int __vsc8584_init_ptp(struct phy_device *phydev)
{
struct vsc8531_private *vsc8531 = phydev->priv;
static const u32 ltc_seq_e[] = { 0, 400000, 0, 0, 0 };
static const u8 ltc_seq_a[] = { 8, 6, 5, 4, 2 };
u32 val;
@ -1515,17 +1514,7 @@ static int __vsc8584_init_ptp(struct phy_device *phydev)
vsc85xx_ts_eth_cmp1_sig(phydev);
vsc8531->mii_ts.rxtstamp = vsc85xx_rxtstamp;
vsc8531->mii_ts.txtstamp = vsc85xx_txtstamp;
vsc8531->mii_ts.hwtstamp = vsc85xx_hwtstamp;
vsc8531->mii_ts.ts_info = vsc85xx_ts_info;
phydev->mii_ts = &vsc8531->mii_ts;
memcpy(&vsc8531->ptp->caps, &vsc85xx_clk_caps, sizeof(vsc85xx_clk_caps));
vsc8531->ptp->ptp_clock = ptp_clock_register(&vsc8531->ptp->caps,
&phydev->mdio.dev);
return PTR_ERR_OR_ZERO(vsc8531->ptp->ptp_clock);
return 0;
}
void vsc8584_config_ts_intr(struct phy_device *phydev)
@ -1552,6 +1541,16 @@ int vsc8584_ptp_init(struct phy_device *phydev)
return 0;
}
void vsc8584_ptp_deinit(struct phy_device *phydev)
{
struct vsc8531_private *vsc8531 = phydev->priv;
if (vsc8531->ptp->ptp_clock) {
ptp_clock_unregister(vsc8531->ptp->ptp_clock);
skb_queue_purge(&vsc8531->rx_skbs_list);
}
}
irqreturn_t vsc8584_handle_ts_interrupt(struct phy_device *phydev)
{
struct vsc8531_private *priv = phydev->priv;
@ -1612,7 +1611,16 @@ int vsc8584_ptp_probe(struct phy_device *phydev)
vsc8531->ptp->phydev = phydev;
return 0;
vsc8531->mii_ts.rxtstamp = vsc85xx_rxtstamp;
vsc8531->mii_ts.txtstamp = vsc85xx_txtstamp;
vsc8531->mii_ts.hwtstamp = vsc85xx_hwtstamp;
vsc8531->mii_ts.ts_info = vsc85xx_ts_info;
phydev->mii_ts = &vsc8531->mii_ts;
memcpy(&vsc8531->ptp->caps, &vsc85xx_clk_caps, sizeof(vsc85xx_clk_caps));
vsc8531->ptp->ptp_clock = ptp_clock_register(&vsc8531->ptp->caps,
&phydev->mdio.dev);
return PTR_ERR_OR_ZERO(vsc8531->ptp->ptp_clock);
}
int vsc8584_ptp_probe_once(struct phy_device *phydev)
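
The mscc hunks above move ptp_clock_register() out of __vsc8584_init_ptp() and pair it with the new vsc8584_ptp_deinit() called from vsc85xx_remove(), so the PTP clock is registered once at probe time and unregistered (with the timestamping skb queue purged) when the PHY goes away. In sketch form, assuming the vsc8531_private fields shown above (simplified, not the exact driver code):

        /* Sketch of the pairing: probe registers the clock, deinit (from
         * the PHY remove path) unregisters it and drops queued rx skbs.
         */
        static int vsc_ptp_probe_sketch(struct phy_device *phydev)
        {
                struct vsc8531_private *priv = phydev->priv;

                priv->ptp->ptp_clock = ptp_clock_register(&priv->ptp->caps,
                                                          &phydev->mdio.dev);
                return PTR_ERR_OR_ZERO(priv->ptp->ptp_clock);
        }

        static void vsc_ptp_deinit_sketch(struct phy_device *phydev)
        {
                struct vsc8531_private *priv = phydev->priv;

                if (priv->ptp->ptp_clock) {
                        ptp_clock_unregister(priv->ptp->ptp_clock);
                        skb_queue_purge(&priv->rx_skbs_list);
                }
        }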

@ -1355,6 +1355,9 @@ static const struct usb_device_id products[] = {
{QMI_FIXED_INTF(0x2357, 0x0201, 4)}, /* TP-LINK HSUPA Modem MA180 */
{QMI_FIXED_INTF(0x2357, 0x9000, 4)}, /* TP-LINK MA260 */
{QMI_QUIRK_SET_DTR(0x1bc7, 0x1031, 3)}, /* Telit LE910C1-EUX */
{QMI_QUIRK_SET_DTR(0x1bc7, 0x1034, 2)}, /* Telit LE910C4-WWX */
{QMI_QUIRK_SET_DTR(0x1bc7, 0x1037, 4)}, /* Telit LE910C4-WWX */
{QMI_QUIRK_SET_DTR(0x1bc7, 0x1038, 3)}, /* Telit LE910C4-WWX */
{QMI_QUIRK_SET_DTR(0x1bc7, 0x103a, 0)}, /* Telit LE910C4-WWX */
{QMI_QUIRK_SET_DTR(0x1bc7, 0x1040, 2)}, /* Telit LE922A */
{QMI_QUIRK_SET_DTR(0x1bc7, 0x1050, 2)}, /* Telit FN980 */

@ -185,6 +185,7 @@ struct atmdev_ops { /* only send is required */
int (*compat_ioctl)(struct atm_dev *dev,unsigned int cmd,
void __user *arg);
#endif
int (*pre_send)(struct atm_vcc *vcc, struct sk_buff *skb);
int (*send)(struct atm_vcc *vcc,struct sk_buff *skb);
int (*send_bh)(struct atm_vcc *vcc, struct sk_buff *skb);
int (*send_oam)(struct atm_vcc *vcc,void *cell,int flags);

@ -4172,6 +4172,8 @@ int skb_copy_and_crc32c_datagram_iter(const struct sk_buff *skb, int offset,
struct iov_iter *to, int len, u32 *crcp);
int skb_copy_datagram_from_iter(struct sk_buff *skb, int offset,
struct iov_iter *from, int len);
int skb_copy_datagram_from_iter_full(struct sk_buff *skb, int offset,
struct iov_iter *from, int len);
int zerocopy_sg_from_iter(struct sk_buff *skb, struct iov_iter *frm);
void skb_free_datagram(struct sock *sk, struct sk_buff *skb);
int skb_kill_datagram(struct sock *sk, struct sk_buff *skb, unsigned int flags);

@ -93,7 +93,7 @@ int hci_update_class_sync(struct hci_dev *hdev);
int hci_update_eir_sync(struct hci_dev *hdev);
int hci_update_class_sync(struct hci_dev *hdev);
int hci_update_name_sync(struct hci_dev *hdev);
int hci_update_name_sync(struct hci_dev *hdev, const u8 *name);
int hci_write_ssp_mode_sync(struct hci_dev *hdev, u8 mode);
int hci_get_random_address(struct hci_dev *hdev, bool require_privacy,

@ -8,6 +8,7 @@
#ifndef _ROSE_H
#define _ROSE_H
#include <linux/refcount.h>
#include <linux/rose.h>
#include <net/ax25.h>
#include <net/sock.h>
@ -96,7 +97,7 @@ struct rose_neigh {
ax25_cb *ax25;
struct net_device *dev;
unsigned short count;
unsigned short use;
refcount_t use;
unsigned int number;
char restarted;
char dce_mode;
@ -151,6 +152,21 @@ struct rose_sock {
#define rose_sk(sk) ((struct rose_sock *)(sk))
static inline void rose_neigh_hold(struct rose_neigh *rose_neigh)
{
refcount_inc(&rose_neigh->use);
}
static inline void rose_neigh_put(struct rose_neigh *rose_neigh)
{
if (refcount_dec_and_test(&rose_neigh->use)) {
if (rose_neigh->ax25)
ax25_cb_put(rose_neigh->ax25);
kfree(rose_neigh->digipeat);
kfree(rose_neigh);
}
}
/* af_rose.c */
extern ax25_address rose_callsign;
extern int sysctl_rose_restart_request_timeout;
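
With 'use' converted to a refcount_t, every stored rose_neigh pointer now owns a reference: take one with rose_neigh_hold() when the pointer is assigned and drop it with rose_neigh_put() when the pointer is released; the final put frees the structure. A minimal sketch of the convention the later rose hunks apply (illustrative helpers, not from the patch):

        /* Illustrative only: whoever stores the pointer owns one reference. */
        static void rose_attach_neigh_sketch(struct rose_sock *rose,
                                             struct rose_neigh *neigh)
        {
                rose_neigh_hold(neigh);         /* rose->neighbour now owns a ref */
                rose->neighbour = neigh;
        }

        static void rose_detach_neigh_sketch(struct rose_sock *rose)
        {
                if (rose->neighbour) {
                        rose_neigh_put(rose->neighbour);  /* may free the neigh */
                        rose->neighbour = NULL;
                }
        }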

@ -635,18 +635,27 @@ int vcc_sendmsg(struct socket *sock, struct msghdr *m, size_t size)
skb->dev = NULL; /* for paths shared with net_device interfaces */
if (!copy_from_iter_full(skb_put(skb, size), size, &m->msg_iter)) {
atm_return_tx(vcc, skb);
kfree_skb(skb);
error = -EFAULT;
goto out;
goto free_skb;
}
if (eff != size)
memset(skb->data + size, 0, eff-size);
if (vcc->dev->ops->pre_send) {
error = vcc->dev->ops->pre_send(vcc, skb);
if (error)
goto free_skb;
}
error = vcc->dev->ops->send(vcc, skb);
error = error ? error : size;
out:
release_sock(sk);
return error;
free_skb:
atm_return_tx(vcc, skb);
kfree_skb(skb);
goto out;
}
__poll_t vcc_poll(struct file *file, struct socket *sock, poll_table *wait)
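
The new pre_send() hook lets a device validate or prepare the skb while vcc_sendmsg() still owns it; if the hook returns an error, the core frees the skb on the shared free_skb path and the sender sees the failure before anything reaches ->send(). A hypothetical driver-side use (the names and the length check are illustrative, not from the patch):

        /* Hypothetical driver: reject oversized frames up front so the
         * error is reported to the sender rather than dropped later.
         */
        static int example_pre_send(struct atm_vcc *vcc, struct sk_buff *skb)
        {
                if (vcc->qos.txtp.max_sdu && skb->len > vcc->qos.txtp.max_sdu)
                        return -EMSGSIZE;
                return 0;
        }

        static int example_send(struct atm_vcc *vcc, struct sk_buff *skb)
        {
                dev_kfree_skb_any(skb);         /* stub: a real driver queues to HW */
                return 0;
        }

        static const struct atmdev_ops example_atmdev_ops = {
                .pre_send = example_pre_send,
                .send     = example_send,
        };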

@ -149,8 +149,6 @@ static void hci_conn_cleanup(struct hci_conn *conn)
hci_chan_list_flush(conn);
hci_conn_hash_del(hdev, conn);
if (HCI_CONN_HANDLE_UNSET(conn->handle))
ida_free(&hdev->unset_handle_ida, conn->handle);
@ -1152,28 +1150,54 @@ void hci_conn_del(struct hci_conn *conn)
disable_delayed_work_sync(&conn->auto_accept_work);
disable_delayed_work_sync(&conn->idle_work);
if (conn->type == ACL_LINK) {
/* Unacked frames */
hdev->acl_cnt += conn->sent;
} else if (conn->type == LE_LINK) {
cancel_delayed_work(&conn->le_conn_timeout);
/* Remove the connection from the list so unacked logic can detect when
* a certain pool is not being utilized.
*/
hci_conn_hash_del(hdev, conn);
if (hdev->le_pkts)
hdev->le_cnt += conn->sent;
/* Handle unacked frames:
*
* - In case there is no connection, or if restoring the buffers
* considered in transit would overflow, restore all buffers to the
* pool.
* - Otherwise restore just the buffers considered in transit for the
* hci_conn
*/
switch (conn->type) {
case ACL_LINK:
if (!hci_conn_num(hdev, ACL_LINK) ||
hdev->acl_cnt + conn->sent > hdev->acl_pkts)
hdev->acl_cnt = hdev->acl_pkts;
else
hdev->acl_cnt += conn->sent;
} else {
/* Unacked ISO frames */
if (conn->type == CIS_LINK ||
conn->type == BIS_LINK ||
conn->type == PA_LINK) {
if (hdev->iso_pkts)
hdev->iso_cnt += conn->sent;
else if (hdev->le_pkts)
break;
case LE_LINK:
cancel_delayed_work(&conn->le_conn_timeout);
if (hdev->le_pkts) {
if (!hci_conn_num(hdev, LE_LINK) ||
hdev->le_cnt + conn->sent > hdev->le_pkts)
hdev->le_cnt = hdev->le_pkts;
else
hdev->le_cnt += conn->sent;
} else {
if ((!hci_conn_num(hdev, LE_LINK) &&
!hci_conn_num(hdev, ACL_LINK)) ||
hdev->acl_cnt + conn->sent > hdev->acl_pkts)
hdev->acl_cnt = hdev->acl_pkts;
else
hdev->acl_cnt += conn->sent;
}
break;
case CIS_LINK:
case BIS_LINK:
case PA_LINK:
if (!hci_iso_count(hdev) ||
hdev->iso_cnt + conn->sent > hdev->iso_pkts)
hdev->iso_cnt = hdev->iso_pkts;
else
hdev->iso_cnt += conn->sent;
break;
}
skb_queue_purge(&conn->data_q);
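
The switch above applies one rule per pool: if no connection of that link type remains, or adding back this connection's in-flight frames would exceed the controller's advertised pool size, the counter is reset to the full pool; otherwise only conn->sent is returned. The LE case additionally falls back to the ACL pool when the controller has no dedicated LE buffers. The clamp rule in compact form (hypothetical helper, not part of the patch):

        /* Hypothetical helper expressing the clamp rule used for each pool. */
        static unsigned int restore_credits_sketch(unsigned int cur,
                                                   unsigned int pool,
                                                   unsigned int sent,
                                                   bool pool_still_in_use)
        {
                if (!pool_still_in_use || cur + sent > pool)
                        return pool;            /* hand the whole pool back */
                return cur + sent;              /* return only in-flight credits */
        }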

@ -2703,7 +2703,7 @@ static void hci_cs_disconnect(struct hci_dev *hdev, u8 status)
if (!conn)
goto unlock;
if (status) {
if (status && status != HCI_ERROR_UNKNOWN_CONN_ID) {
mgmt_disconnect_failed(hdev, &conn->dst, conn->type,
conn->dst_type, status);
@ -2718,6 +2718,12 @@ static void hci_cs_disconnect(struct hci_dev *hdev, u8 status)
goto done;
}
/* During suspend, mark connection as closed immediately
* since we might not receive HCI_EV_DISCONN_COMPLETE
*/
if (hdev->suspended)
conn->state = BT_CLOSED;
mgmt_conn = test_and_clear_bit(HCI_CONN_MGMT_CONNECTED, &conn->flags);
if (conn->type == ACL_LINK) {
@ -4398,7 +4404,17 @@ static void hci_num_comp_pkts_evt(struct hci_dev *hdev, void *data,
if (!conn)
continue;
conn->sent -= count;
/* Check if there are really enough packets outstanding before
* attempting to decrease the sent counter, otherwise it could
* underflow.
*/
if (conn->sent >= count) {
conn->sent -= count;
} else {
bt_dev_warn(hdev, "hcon %p sent %u < count %u",
conn, conn->sent, count);
conn->sent = 0;
}
for (i = 0; i < count; ++i)
hci_conn_tx_dequeue(conn);
@ -7008,6 +7024,7 @@ static void hci_le_big_sync_lost_evt(struct hci_dev *hdev, void *data,
{
struct hci_evt_le_big_sync_lost *ev = data;
struct hci_conn *bis, *conn;
bool mgmt_conn;
bt_dev_dbg(hdev, "big handle 0x%2.2x", ev->handle);
@ -7026,6 +7043,10 @@ static void hci_le_big_sync_lost_evt(struct hci_dev *hdev, void *data,
while ((bis = hci_conn_hash_lookup_big_state(hdev, ev->handle,
BT_CONNECTED,
HCI_ROLE_SLAVE))) {
mgmt_conn = test_and_clear_bit(HCI_CONN_MGMT_CONNECTED, &bis->flags);
mgmt_device_disconnected(hdev, &bis->dst, bis->type, bis->dst_type,
ev->reason, mgmt_conn);
clear_bit(HCI_CONN_BIG_SYNC, &bis->flags);
hci_disconn_cfm(bis, ev->reason);
hci_conn_del(bis);

@ -3481,13 +3481,13 @@ int hci_update_scan_sync(struct hci_dev *hdev)
return hci_write_scan_enable_sync(hdev, scan);
}
int hci_update_name_sync(struct hci_dev *hdev)
int hci_update_name_sync(struct hci_dev *hdev, const u8 *name)
{
struct hci_cp_write_local_name cp;
memset(&cp, 0, sizeof(cp));
memcpy(cp.name, hdev->dev_name, sizeof(cp.name));
memcpy(cp.name, name, sizeof(cp.name));
return __hci_cmd_sync_status(hdev, HCI_OP_WRITE_LOCAL_NAME,
sizeof(cp), &cp,
@ -3540,7 +3540,7 @@ int hci_powered_update_sync(struct hci_dev *hdev)
hci_write_fast_connectable_sync(hdev, false);
hci_update_scan_sync(hdev);
hci_update_class_sync(hdev);
hci_update_name_sync(hdev);
hci_update_name_sync(hdev, hdev->dev_name);
hci_update_eir_sync(hdev);
}

@ -3892,8 +3892,11 @@ static void set_name_complete(struct hci_dev *hdev, void *data, int err)
static int set_name_sync(struct hci_dev *hdev, void *data)
{
struct mgmt_pending_cmd *cmd = data;
struct mgmt_cp_set_local_name *cp = cmd->param;
if (lmp_bredr_capable(hdev)) {
hci_update_name_sync(hdev);
hci_update_name_sync(hdev, cp->name);
hci_update_eir_sync(hdev);
}
@ -9705,7 +9708,9 @@ void mgmt_device_disconnected(struct hci_dev *hdev, bdaddr_t *bdaddr,
if (!mgmt_connected)
return;
if (link_type != ACL_LINK && link_type != LE_LINK)
if (link_type != ACL_LINK &&
link_type != LE_LINK &&
link_type != BIS_LINK)
return;
bacpy(&ev.addr.bdaddr, bdaddr);

@ -618,6 +618,20 @@ fault:
}
EXPORT_SYMBOL(skb_copy_datagram_from_iter);
int skb_copy_datagram_from_iter_full(struct sk_buff *skb, int offset,
struct iov_iter *from, int len)
{
struct iov_iter_state state;
int ret;
iov_iter_save_state(from, &state);
ret = skb_copy_datagram_from_iter(skb, offset, from, len);
if (ret)
iov_iter_restore(from, &state);
return ret;
}
EXPORT_SYMBOL(skb_copy_datagram_from_iter_full);
int zerocopy_fill_skb_from_iter(struct sk_buff *skb,
struct iov_iter *from, size_t length)
{

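skb_copy_datagram_from_iter_full() gives callers all-or-nothing semantics: the iterator state is saved before the copy and restored if it fails, so msg_iter is never left partially consumed; the vsock change at the end of this series relies on this. A sketch of the calling pattern (hypothetical caller, not from the patch):

        /* Sketch: on failure msg_iter is unchanged, so the caller can drop
         * the skb and later retry the very same data.
         */
        static int fill_skb_sketch(struct sk_buff *skb, struct msghdr *msg,
                                   size_t len)
        {
                int err;

                err = skb_copy_datagram_from_iter_full(skb, 0, &msg->msg_iter, len);
                if (err)
                        return err;     /* nothing consumed from msg_iter */

                return 0;
        }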
@ -287,8 +287,10 @@ static int page_pool_init(struct page_pool *pool,
}
if (pool->mp_ops) {
if (!pool->dma_map || !pool->dma_sync)
return -EOPNOTSUPP;
if (!pool->dma_map || !pool->dma_sync) {
err = -EOPNOTSUPP;
goto free_ptr_ring;
}
if (WARN_ON(!is_kernel_rodata((unsigned long)pool->mp_ops))) {
err = -EFAULT;

@ -2575,12 +2575,16 @@ static struct rtable *__mkroute_output(const struct fib_result *res,
!netif_is_l3_master(dev_out))
return ERR_PTR(-EINVAL);
if (ipv4_is_lbcast(fl4->daddr))
if (ipv4_is_lbcast(fl4->daddr)) {
type = RTN_BROADCAST;
else if (ipv4_is_multicast(fl4->daddr))
/* reset fi to prevent gateway resolution */
fi = NULL;
} else if (ipv4_is_multicast(fl4->daddr)) {
type = RTN_MULTICAST;
else if (ipv4_is_zeronet(fl4->daddr))
} else if (ipv4_is_zeronet(fl4->daddr)) {
return ERR_PTR(-EINVAL);
}
if (dev_out->flags & IFF_LOOPBACK)
flags |= RTCF_LOCAL;

@ -129,22 +129,12 @@ static const struct ppp_channel_ops pppol2tp_chan_ops = {
static const struct proto_ops pppol2tp_ops;
/* Retrieves the pppol2tp socket associated to a session.
* A reference is held on the returned socket, so this function must be paired
* with sock_put().
*/
/* Retrieves the pppol2tp socket associated to a session. */
static struct sock *pppol2tp_session_get_sock(struct l2tp_session *session)
{
struct pppol2tp_session *ps = l2tp_session_priv(session);
struct sock *sk;
rcu_read_lock();
sk = rcu_dereference(ps->sk);
if (sk)
sock_hold(sk);
rcu_read_unlock();
return sk;
return rcu_dereference(ps->sk);
}
/* Helpers to obtain tunnel/session contexts from sockets.
@ -206,14 +196,13 @@ end:
static void pppol2tp_recv(struct l2tp_session *session, struct sk_buff *skb, int data_len)
{
struct pppol2tp_session *ps = l2tp_session_priv(session);
struct sock *sk = NULL;
struct sock *sk;
/* If the socket is bound, send it in to PPP's input queue. Otherwise
* queue it on the session socket.
*/
rcu_read_lock();
sk = rcu_dereference(ps->sk);
sk = pppol2tp_session_get_sock(session);
if (!sk)
goto no_sock;
@ -510,13 +499,14 @@ static void pppol2tp_show(struct seq_file *m, void *arg)
struct l2tp_session *session = arg;
struct sock *sk;
rcu_read_lock();
sk = pppol2tp_session_get_sock(session);
if (sk) {
struct pppox_sock *po = pppox_sk(sk);
seq_printf(m, " interface %s\n", ppp_dev_name(&po->chan));
sock_put(sk);
}
rcu_read_unlock();
}
static void pppol2tp_session_init(struct l2tp_session *session)
@ -1530,6 +1520,7 @@ static void pppol2tp_seq_session_show(struct seq_file *m, void *v)
port = ntohs(inet->inet_sport);
}
rcu_read_lock();
sk = pppol2tp_session_get_sock(session);
if (sk) {
state = sk->sk_state;
@ -1565,8 +1556,8 @@ static void pppol2tp_seq_session_show(struct seq_file *m, void *v)
struct pppox_sock *po = pppox_sk(sk);
seq_printf(m, " interface %s\n", ppp_dev_name(&po->chan));
sock_put(sk);
}
rcu_read_unlock();
}
static int pppol2tp_seq_show(struct seq_file *m, void *v)
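
pppol2tp_session_get_sock() now only dereferences the RCU-protected pointer instead of taking a socket reference, so every caller must hold rcu_read_lock() across the lookup and all uses of the returned socket (or take its own reference with sock_hold() if it needs the socket beyond the critical section). The calling convention as a sketch (hypothetical caller, mirroring the seq-file hunks above):

        /* Sketch: the returned sock is only valid inside the RCU section. */
        static void show_session_sketch(struct seq_file *m,
                                        struct l2tp_session *session)
        {
                struct sock *sk;

                rcu_read_lock();
                sk = pppol2tp_session_get_sock(session);
                if (sk) {
                        struct pppox_sock *po = pppox_sk(sk);

                        seq_printf(m, " interface %s\n", ppp_dev_name(&po->chan));
                }
                rcu_read_unlock();      /* sk must not be used past this point */
        }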

@ -170,7 +170,7 @@ void rose_kill_by_neigh(struct rose_neigh *neigh)
if (rose->neighbour == neigh) {
rose_disconnect(s, ENETUNREACH, ROSE_OUT_OF_ORDER, 0);
rose->neighbour->use--;
rose_neigh_put(rose->neighbour);
rose->neighbour = NULL;
}
}
@ -212,7 +212,7 @@ start:
if (rose->device == dev) {
rose_disconnect(sk, ENETUNREACH, ROSE_OUT_OF_ORDER, 0);
if (rose->neighbour)
rose->neighbour->use--;
rose_neigh_put(rose->neighbour);
netdev_put(rose->device, &rose->dev_tracker);
rose->device = NULL;
}
@ -655,7 +655,7 @@ static int rose_release(struct socket *sock)
break;
case ROSE_STATE_2:
rose->neighbour->use--;
rose_neigh_put(rose->neighbour);
release_sock(sk);
rose_disconnect(sk, 0, -1, -1);
lock_sock(sk);
@ -823,6 +823,7 @@ static int rose_connect(struct socket *sock, struct sockaddr *uaddr, int addr_le
rose->lci = rose_new_lci(rose->neighbour);
if (!rose->lci) {
err = -ENETUNREACH;
rose_neigh_put(rose->neighbour);
goto out_release;
}
@ -834,12 +835,14 @@ static int rose_connect(struct socket *sock, struct sockaddr *uaddr, int addr_le
dev = rose_dev_first();
if (!dev) {
err = -ENETUNREACH;
rose_neigh_put(rose->neighbour);
goto out_release;
}
user = ax25_findbyuid(current_euid());
if (!user) {
err = -EINVAL;
rose_neigh_put(rose->neighbour);
dev_put(dev);
goto out_release;
}
@ -874,8 +877,6 @@ static int rose_connect(struct socket *sock, struct sockaddr *uaddr, int addr_le
rose->state = ROSE_STATE_1;
rose->neighbour->use++;
rose_write_internal(sk, ROSE_CALL_REQUEST);
rose_start_heartbeat(sk);
rose_start_t1timer(sk);
@ -1077,7 +1078,7 @@ int rose_rx_call_request(struct sk_buff *skb, struct net_device *dev, struct ros
GFP_ATOMIC);
make_rose->facilities = facilities;
make_rose->neighbour->use++;
rose_neigh_hold(make_rose->neighbour);
if (rose_sk(sk)->defer) {
make_rose->state = ROSE_STATE_5;

@ -56,7 +56,7 @@ static int rose_state1_machine(struct sock *sk, struct sk_buff *skb, int framety
case ROSE_CLEAR_REQUEST:
rose_write_internal(sk, ROSE_CLEAR_CONFIRMATION);
rose_disconnect(sk, ECONNREFUSED, skb->data[3], skb->data[4]);
rose->neighbour->use--;
rose_neigh_put(rose->neighbour);
break;
default:
@ -79,12 +79,12 @@ static int rose_state2_machine(struct sock *sk, struct sk_buff *skb, int framety
case ROSE_CLEAR_REQUEST:
rose_write_internal(sk, ROSE_CLEAR_CONFIRMATION);
rose_disconnect(sk, 0, skb->data[3], skb->data[4]);
rose->neighbour->use--;
rose_neigh_put(rose->neighbour);
break;
case ROSE_CLEAR_CONFIRMATION:
rose_disconnect(sk, 0, -1, -1);
rose->neighbour->use--;
rose_neigh_put(rose->neighbour);
break;
default:
@ -121,7 +121,7 @@ static int rose_state3_machine(struct sock *sk, struct sk_buff *skb, int framety
case ROSE_CLEAR_REQUEST:
rose_write_internal(sk, ROSE_CLEAR_CONFIRMATION);
rose_disconnect(sk, 0, skb->data[3], skb->data[4]);
rose->neighbour->use--;
rose_neigh_put(rose->neighbour);
break;
case ROSE_RR:
@ -234,7 +234,7 @@ static int rose_state4_machine(struct sock *sk, struct sk_buff *skb, int framety
case ROSE_CLEAR_REQUEST:
rose_write_internal(sk, ROSE_CLEAR_CONFIRMATION);
rose_disconnect(sk, 0, skb->data[3], skb->data[4]);
rose->neighbour->use--;
rose_neigh_put(rose->neighbour);
break;
default:
@ -254,7 +254,7 @@ static int rose_state5_machine(struct sock *sk, struct sk_buff *skb, int framety
if (frametype == ROSE_CLEAR_REQUEST) {
rose_write_internal(sk, ROSE_CLEAR_CONFIRMATION);
rose_disconnect(sk, 0, skb->data[3], skb->data[4]);
rose_sk(sk)->neighbour->use--;
rose_neigh_put(rose_sk(sk)->neighbour);
}
return 0;

@ -93,11 +93,11 @@ static int __must_check rose_add_node(struct rose_route_struct *rose_route,
rose_neigh->ax25 = NULL;
rose_neigh->dev = dev;
rose_neigh->count = 0;
rose_neigh->use = 0;
rose_neigh->dce_mode = 0;
rose_neigh->loopback = 0;
rose_neigh->number = rose_neigh_no++;
rose_neigh->restarted = 0;
refcount_set(&rose_neigh->use, 1);
skb_queue_head_init(&rose_neigh->queue);
@ -178,6 +178,7 @@ static int __must_check rose_add_node(struct rose_route_struct *rose_route,
}
}
rose_neigh->count++;
rose_neigh_hold(rose_neigh);
goto out;
}
@ -187,6 +188,7 @@ static int __must_check rose_add_node(struct rose_route_struct *rose_route,
rose_node->neighbour[rose_node->count] = rose_neigh;
rose_node->count++;
rose_neigh->count++;
rose_neigh_hold(rose_neigh);
}
out:
@ -234,20 +236,12 @@ static void rose_remove_neigh(struct rose_neigh *rose_neigh)
if ((s = rose_neigh_list) == rose_neigh) {
rose_neigh_list = rose_neigh->next;
if (rose_neigh->ax25)
ax25_cb_put(rose_neigh->ax25);
kfree(rose_neigh->digipeat);
kfree(rose_neigh);
return;
}
while (s != NULL && s->next != NULL) {
if (s->next == rose_neigh) {
s->next = rose_neigh->next;
if (rose_neigh->ax25)
ax25_cb_put(rose_neigh->ax25);
kfree(rose_neigh->digipeat);
kfree(rose_neigh);
return;
}
@ -263,10 +257,10 @@ static void rose_remove_route(struct rose_route *rose_route)
struct rose_route *s;
if (rose_route->neigh1 != NULL)
rose_route->neigh1->use--;
rose_neigh_put(rose_route->neigh1);
if (rose_route->neigh2 != NULL)
rose_route->neigh2->use--;
rose_neigh_put(rose_route->neigh2);
if ((s = rose_route_list) == rose_route) {
rose_route_list = rose_route->next;
@ -330,9 +324,12 @@ static int rose_del_node(struct rose_route_struct *rose_route,
for (i = 0; i < rose_node->count; i++) {
if (rose_node->neighbour[i] == rose_neigh) {
rose_neigh->count--;
rose_neigh_put(rose_neigh);
if (rose_neigh->count == 0 && rose_neigh->use == 0)
if (rose_neigh->count == 0) {
rose_remove_neigh(rose_neigh);
rose_neigh_put(rose_neigh);
}
rose_node->count--;
@ -381,11 +378,11 @@ void rose_add_loopback_neigh(void)
sn->ax25 = NULL;
sn->dev = NULL;
sn->count = 0;
sn->use = 0;
sn->dce_mode = 1;
sn->loopback = 1;
sn->number = rose_neigh_no++;
sn->restarted = 1;
refcount_set(&sn->use, 1);
skb_queue_head_init(&sn->queue);
@ -436,6 +433,7 @@ int rose_add_loopback_node(const rose_address *address)
rose_node_list = rose_node;
rose_loopback_neigh->count++;
rose_neigh_hold(rose_loopback_neigh);
out:
spin_unlock_bh(&rose_node_list_lock);
@ -467,6 +465,7 @@ void rose_del_loopback_node(const rose_address *address)
rose_remove_node(rose_node);
rose_loopback_neigh->count--;
rose_neigh_put(rose_loopback_neigh);
out:
spin_unlock_bh(&rose_node_list_lock);
@ -506,6 +505,7 @@ void rose_rt_device_down(struct net_device *dev)
memmove(&t->neighbour[i], &t->neighbour[i + 1],
sizeof(t->neighbour[0]) *
(t->count - i));
rose_neigh_put(s);
}
if (t->count <= 0)
@ -513,6 +513,7 @@ void rose_rt_device_down(struct net_device *dev)
}
rose_remove_neigh(s);
rose_neigh_put(s);
}
spin_unlock_bh(&rose_neigh_list_lock);
spin_unlock_bh(&rose_node_list_lock);
@ -548,6 +549,7 @@ static int rose_clear_routes(void)
{
struct rose_neigh *s, *rose_neigh;
struct rose_node *t, *rose_node;
int i;
spin_lock_bh(&rose_node_list_lock);
spin_lock_bh(&rose_neigh_list_lock);
@ -558,17 +560,21 @@ static int rose_clear_routes(void)
while (rose_node != NULL) {
t = rose_node;
rose_node = rose_node->next;
if (!t->loopback)
if (!t->loopback) {
for (i = 0; i < t->count; i++)
rose_neigh_put(t->neighbour[i]);
rose_remove_node(t);
}
}
while (rose_neigh != NULL) {
s = rose_neigh;
rose_neigh = rose_neigh->next;
if (s->use == 0 && !s->loopback) {
s->count = 0;
if (!s->loopback) {
rose_remove_neigh(s);
rose_neigh_put(s);
}
}
@ -684,6 +690,7 @@ struct rose_neigh *rose_get_neigh(rose_address *addr, unsigned char *cause,
for (i = 0; i < node->count; i++) {
if (node->neighbour[i]->restarted) {
res = node->neighbour[i];
rose_neigh_hold(node->neighbour[i]);
goto out;
}
}
@ -695,6 +702,7 @@ struct rose_neigh *rose_get_neigh(rose_address *addr, unsigned char *cause,
for (i = 0; i < node->count; i++) {
if (!rose_ftimer_running(node->neighbour[i])) {
res = node->neighbour[i];
rose_neigh_hold(node->neighbour[i]);
goto out;
}
failed = 1;
@ -784,13 +792,13 @@ static void rose_del_route_by_neigh(struct rose_neigh *rose_neigh)
}
if (rose_route->neigh1 == rose_neigh) {
rose_route->neigh1->use--;
rose_neigh_put(rose_route->neigh1);
rose_route->neigh1 = NULL;
rose_transmit_clear_request(rose_route->neigh2, rose_route->lci2, ROSE_OUT_OF_ORDER, 0);
}
if (rose_route->neigh2 == rose_neigh) {
rose_route->neigh2->use--;
rose_neigh_put(rose_route->neigh2);
rose_route->neigh2 = NULL;
rose_transmit_clear_request(rose_route->neigh1, rose_route->lci1, ROSE_OUT_OF_ORDER, 0);
}
@ -919,7 +927,7 @@ int rose_route_frame(struct sk_buff *skb, ax25_cb *ax25)
rose_clear_queues(sk);
rose->cause = ROSE_NETWORK_CONGESTION;
rose->diagnostic = 0;
rose->neighbour->use--;
rose_neigh_put(rose->neighbour);
rose->neighbour = NULL;
rose->lci = 0;
rose->state = ROSE_STATE_0;
@ -1044,12 +1052,12 @@ int rose_route_frame(struct sk_buff *skb, ax25_cb *ax25)
if ((new_lci = rose_new_lci(new_neigh)) == 0) {
rose_transmit_clear_request(rose_neigh, lci, ROSE_NETWORK_CONGESTION, 71);
goto out;
goto put_neigh;
}
if ((rose_route = kmalloc(sizeof(*rose_route), GFP_ATOMIC)) == NULL) {
rose_transmit_clear_request(rose_neigh, lci, ROSE_NETWORK_CONGESTION, 120);
goto out;
goto put_neigh;
}
rose_route->lci1 = lci;
@ -1062,8 +1070,8 @@ int rose_route_frame(struct sk_buff *skb, ax25_cb *ax25)
rose_route->lci2 = new_lci;
rose_route->neigh2 = new_neigh;
rose_route->neigh1->use++;
rose_route->neigh2->use++;
rose_neigh_hold(rose_route->neigh1);
rose_neigh_hold(rose_route->neigh2);
rose_route->next = rose_route_list;
rose_route_list = rose_route;
@ -1075,6 +1083,8 @@ int rose_route_frame(struct sk_buff *skb, ax25_cb *ax25)
rose_transmit_link(skb, rose_route->neigh2);
res = 1;
put_neigh:
rose_neigh_put(new_neigh);
out:
spin_unlock_bh(&rose_route_list_lock);
spin_unlock_bh(&rose_neigh_list_lock);
@ -1190,7 +1200,7 @@ static int rose_neigh_show(struct seq_file *seq, void *v)
(rose_neigh->loopback) ? "RSLOOP-0" : ax2asc(buf, &rose_neigh->callsign),
rose_neigh->dev ? rose_neigh->dev->name : "???",
rose_neigh->count,
rose_neigh->use,
refcount_read(&rose_neigh->use) - rose_neigh->count - 1,
(rose_neigh->dce_mode) ? "DCE" : "DTE",
(rose_neigh->restarted) ? "yes" : "no",
ax25_display_timer(&rose_neigh->t0timer) / HZ,
@ -1295,18 +1305,22 @@ void __exit rose_rt_free(void)
struct rose_neigh *s, *rose_neigh = rose_neigh_list;
struct rose_node *t, *rose_node = rose_node_list;
struct rose_route *u, *rose_route = rose_route_list;
int i;
while (rose_neigh != NULL) {
s = rose_neigh;
rose_neigh = rose_neigh->next;
rose_remove_neigh(s);
rose_neigh_put(s);
}
while (rose_node != NULL) {
t = rose_node;
rose_node = rose_node->next;
for (i = 0; i < t->count; i++)
rose_neigh_put(t->neighbour[i]);
rose_remove_node(t);
}

@ -180,7 +180,7 @@ static void rose_timer_expiry(struct timer_list *t)
break;
case ROSE_STATE_2: /* T3 */
rose->neighbour->use--;
rose_neigh_put(rose->neighbour);
rose_disconnect(sk, ETIMEDOUT, -1, -1);
break;

@ -547,7 +547,9 @@ static void sctp_v6_from_sk(union sctp_addr *addr, struct sock *sk)
{
addr->v6.sin6_family = AF_INET6;
addr->v6.sin6_port = 0;
addr->v6.sin6_flowinfo = 0;
addr->v6.sin6_addr = sk->sk_v6_rcv_saddr;
addr->v6.sin6_scope_id = 0;
}
/* Initialize sk->sk_rcv_saddr from sctp_addr. */

@ -105,12 +105,14 @@ static int virtio_transport_fill_skb(struct sk_buff *skb,
size_t len,
bool zcopy)
{
struct msghdr *msg = info->msg;
if (zcopy)
return __zerocopy_sg_from_iter(info->msg, NULL, skb,
&info->msg->msg_iter, len, NULL);
return __zerocopy_sg_from_iter(msg, NULL, skb,
&msg->msg_iter, len, NULL);
virtio_vsock_skb_put(skb, len);
return skb_copy_datagram_from_iter(skb, 0, &info->msg->msg_iter, len);
return skb_copy_datagram_from_iter_full(skb, 0, &msg->msg_iter, len);
}
static void virtio_transport_init_hdr(struct sk_buff *skb,