SCSI misc on 20250730

Smaller set of driver updates than usual (ufs, lpfc, mpi3mr).  The
 rest (including the core file changes) are doc updates and some minor
 bug fixes.
 
 Signed-off-by: James E.J. Bottomley <James.Bottomley@HansenPartnership.com>
 -----BEGIN PGP SIGNATURE-----
 
 iJwEABMIAEQWIQTnYEDbdso9F2cI+arnQslM7pishQUCaIosYSYcamFtZXMuYm90
 dG9tbGV5QGhhbnNlbnBhcnRuZXJzaGlwLmNvbQAKCRDnQslM7pishQWTAQCfaWMn
 U7rAoU2zEkv4/6kajfw0Nz62IjbX3fLveBOgFwD/ZQqXVPpD+316ksjzwM+5E+O9
 fxYASbF/IOLC8g1z7JU=
 =7x/z
 -----END PGP SIGNATURE-----

Merge tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi

Pull SCSI updates from James Bottomley:
 "Smaller set of driver updates than usual (ufs, lpfc, mpi3mr).

  The rest (including the core file changes) are doc updates and some
  minor bug fixes"

* tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (49 commits)
  scsi: libiscsi: Initialize iscsi_conn->dd_data only if memory is allocated
  scsi: scsi_transport_fc: Add comments to describe added 'rport' parameter
  scsi: bfa: Double-free fix
  scsi: isci: Fix dma_unmap_sg() nents value
  scsi: mvsas: Fix dma_unmap_sg() nents value
  scsi: elx: efct: Fix dma_unmap_sg() nents value
  scsi: scsi_transport_fc: Change to use per-rport devloss_work_q
  scsi: ufs: exynos: Fix programming of HCI_UTRL_NEXUS_TYPE
  scsi: core: Fix kernel doc for scsi_track_queue_full()
  scsi: ibmvscsi_tgt: Fix dma_unmap_sg() nents value
  scsi: ibmvscsi_tgt: Fix typo in comment
  scsi: mpi3mr: Update driver version to 8.14.0.5.50
  scsi: mpi3mr: Serialize admin queue BAR writes on 32-bit systems
  scsi: mpi3mr: Drop unnecessary volatile from __iomem pointers
  scsi: mpi3mr: Fix race between config read submit and interrupt completion
  scsi: ufs: ufs-qcom: Enable QUnipro Internal Clock Gating
  scsi: ufs: core: Add ufshcd_dme_rmw() to modify DME attributes
  scsi: ufs: ufs-qcom: Update esi_vec_mask for HW major version >= 6
  scsi: core: Use scsi_cmd_priv() instead of open-coding it
  scsi: qla2xxx: Remove firmware URL
  ...
Linus Torvalds 2025-07-31 12:13:53 -07:00
commit 2c8c9aae44
52 changed files with 815 additions and 254 deletions


@@ -1685,3 +1685,86 @@ Description:
		================ ========================================
		The file is read only.

What:		/sys/bus/platform/drivers/ufshcd/*/hid/analysis_trigger
What:		/sys/bus/platform/devices/*.ufs/hid/analysis_trigger
Date:		May 2025
Contact:	Huan Tang <tanghuan@vivo.com>
Description:	The host can enable or disable HID analysis operation.

		======= =========================================
		disable	disable HID analysis operation
		enable	enable HID analysis operation
		======= =========================================

		The file is write only.

What:		/sys/bus/platform/drivers/ufshcd/*/hid/defrag_trigger
What:		/sys/bus/platform/devices/*.ufs/hid/defrag_trigger
Date:		May 2025
Contact:	Huan Tang <tanghuan@vivo.com>
Description:	The host can enable or disable HID defragmentation operation.

		======= =========================================
		disable	disable HID defragmentation operation
		enable	enable HID defragmentation operation
		======= =========================================

		The attribute is write only.

What:		/sys/bus/platform/drivers/ufshcd/*/hid/fragmented_size
What:		/sys/bus/platform/devices/*.ufs/hid/fragmented_size
Date:		May 2025
Contact:	Huan Tang <tanghuan@vivo.com>
Description:	The total fragmented size in the device is reported through
		this attribute.

		The attribute is read only.

What:		/sys/bus/platform/drivers/ufshcd/*/hid/defrag_size
What:		/sys/bus/platform/devices/*.ufs/hid/defrag_size
Date:		May 2025
Contact:	Huan Tang <tanghuan@vivo.com>
Description:	The host sets the size to be defragmented by an HID
		defragmentation operation.

		The attribute is read/write.

What:		/sys/bus/platform/drivers/ufshcd/*/hid/progress_ratio
What:		/sys/bus/platform/devices/*.ufs/hid/progress_ratio
Date:		May 2025
Contact:	Huan Tang <tanghuan@vivo.com>
Description:	Defragmentation progress is reported by this attribute; it
		indicates the ratio of the completed defragmentation size
		over the requested defragmentation size.

		==== ============================================
		1    1%
		...
		100  100%
		==== ============================================

		The attribute is read only.

What:		/sys/bus/platform/drivers/ufshcd/*/hid/state
What:		/sys/bus/platform/devices/*.ufs/hid/state
Date:		May 2025
Contact:	Huan Tang <tanghuan@vivo.com>
Description:	The HID state is reported by this attribute.

		==================== ===========================
		idle                 Idle (analysis required)
		analysis_in_progress Analysis in progress
		defrag_required      Defrag required
		defrag_in_progress   Defrag in progress
		defrag_completed     Defrag completed
		defrag_not_required  Defrag is not required
		==================== ===========================

		The attribute is read only.
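As a usage illustration (not part of the patch): the HID nodes above are ordinary sysfs attributes, driven with shell redirection. The path wildcards depend on the platform, and the real nodes require a UFS device with HID support, so the sketch below substitutes a hypothetical `/tmp` stand-in directory purely to show the write/read pattern.

```shell
# Hypothetical stand-in for /sys/bus/platform/devices/*.ufs/hid/ — the real
# nodes only exist on hardware with HID support, so plain files are used here.
HID=/tmp/hid-demo
mkdir -p "$HID"

echo enable > "$HID/analysis_trigger"   # start HID analysis (write-only node)
echo 1048576 > "$HID/defrag_size"       # requested defrag size (read/write node)
echo enable > "$HID/defrag_trigger"     # start defragmentation (write-only node)
cat "$HID/defrag_size"                  # read back the requested size
```

On real hardware, progress would then be polled via `progress_ratio` and `state`, both read-only.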


@@ -30,7 +30,40 @@ This file is found at Documentation/scsi/scsi_fc_transport.rst
FC Remote Ports (rports)
========================
<< To Be Supplied >>

In the Fibre Channel (FC) subsystem, a remote port (rport) refers to a
remote Fibre Channel node that the local port can communicate with.
These are typically storage targets (e.g., arrays, tapes) that respond
to SCSI commands over the FC transport.

In Linux, rports are managed by the FC transport class and are
represented in sysfs under:

    /sys/class/fc_remote_ports/

Each rport directory contains attributes describing the remote port,
such as port ID, node name, port state, and link speed.

rports are typically created by the FC transport when a new device is
discovered during a fabric login or scan, and they persist until the
device is removed or the link is lost.

Common attributes:

- node_name: World Wide Node Name (WWNN).
- port_name: World Wide Port Name (WWPN).
- port_id: FC address of the remote port.
- roles: Indicates whether the port is an initiator, target, or both.
- port_state: Shows the current operational state.

After discovering a remote port, the driver typically populates an
fc_rport_identifiers structure and invokes fc_remote_port_add() to
create and register the remote port with the SCSI subsystem via the
Fibre Channel (FC) transport class.

rports are also visible in sysfs as children of the FC host adapter.

For developers: use fc_remote_port_add() and fc_remote_port_delete() when
implementing a driver that interacts with the FC transport class.
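A minimal sketch of that registration pattern follows. This is illustrative only, not code from the patch: the function name and error handling are hypothetical, while the `fc_rport_identifiers` fields, `FC_PORT_ROLE_FCP_TARGET`, and the `fc_remote_port_add()`/`fc_remote_port_delete()` signatures follow include/scsi/scsi_transport_fc.h. It is a kernel-context fragment and is not compilable standalone.

```c
#include <scsi/scsi_transport_fc.h>

/* Hypothetical helper: report a newly discovered target to the FC
 * transport class.  Channel 0 is assumed for simplicity. */
static void example_report_rport(struct Scsi_Host *shost,
				 u64 wwnn, u64 wwpn, u32 port_id)
{
	struct fc_rport_identifiers ids = {
		.node_name = wwnn,
		.port_name = wwpn,
		.port_id   = port_id,
		.roles     = FC_PORT_ROLE_FCP_TARGET,
	};
	struct fc_rport *rport;

	rport = fc_remote_port_add(shost, 0, &ids);
	if (!rport)
		return;		/* registration failed */

	/* ... later, when the device disappears: */
	fc_remote_port_delete(rport);
}
```

The transport class then creates the corresponding /sys/class/fc_remote_ports/ entry and manages its devloss timer.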
FC Virtual Ports (vports)


@ -706,6 +706,7 @@ bfad_im_probe(struct bfad_s *bfad)
if (bfad_thread_workq(bfad) != BFA_STATUS_OK) {
kfree(im);
bfad->im = NULL;
return BFA_STATUS_FAILED;
}


@ -382,7 +382,7 @@ efct_lio_sg_unmap(struct efct_io *io)
return;
dma_unmap_sg(&io->efct->pci->dev, cmd->t_data_sg,
ocp->seg_map_cnt, cmd->data_direction);
cmd->t_data_nents, cmd->data_direction);
ocp->seg_map_cnt = 0;
}


@ -1299,26 +1299,6 @@ static void fcoe_thread_cleanup_local(unsigned int cpu)
flush_work(&p->work);
}
/**
* fcoe_select_cpu() - Selects CPU to handle post-processing of incoming
* command.
*
* This routine selects next CPU based on cpumask to distribute
* incoming requests in round robin.
*
* Returns: int CPU number
*/
static inline unsigned int fcoe_select_cpu(void)
{
static unsigned int selected_cpu;
selected_cpu = cpumask_next(selected_cpu, cpu_online_mask);
if (selected_cpu >= nr_cpu_ids)
selected_cpu = cpumask_first(cpu_online_mask);
return selected_cpu;
}
/**
* fcoe_rcv() - Receive packets from a net device
* @skb: The received packet
@ -1405,7 +1385,7 @@ static int fcoe_rcv(struct sk_buff *skb, struct net_device *netdev,
cpu = ntohs(fh->fh_ox_id) & fc_cpu_mask;
else {
if (ntohs(fh->fh_rx_id) == FC_XID_UNKNOWN)
cpu = fcoe_select_cpu();
cpu = skb->alloc_cpu;
else
cpu = ntohs(fh->fh_rx_id) & fc_cpu_mask;
}


@ -2401,7 +2401,7 @@ static void slot_complete_v2_hw(struct hisi_hba *hisi_hba,
slot_err_v2_hw(hisi_hba, task, slot, 2);
if (ts->stat != SAS_DATA_UNDERRUN)
dev_info(dev, "erroneous completion iptt=%d task=%pK dev id=%d CQ hdr: 0x%x 0x%x 0x%x 0x%x Error info: 0x%x 0x%x 0x%x 0x%x\n",
dev_info(dev, "erroneous completion iptt=%d task=%p dev id=%d CQ hdr: 0x%x 0x%x 0x%x 0x%x Error info: 0x%x 0x%x 0x%x 0x%x\n",
slot->idx, task, sas_dev->device_id,
complete_hdr->dw0, complete_hdr->dw1,
complete_hdr->act, complete_hdr->dw3,
@ -2467,7 +2467,7 @@ out:
spin_lock_irqsave(&task->task_state_lock, flags);
if (task->task_state_flags & SAS_TASK_STATE_ABORTED) {
spin_unlock_irqrestore(&task->task_state_lock, flags);
dev_info(dev, "slot complete: task(%pK) aborted\n", task);
dev_info(dev, "slot complete: task(%p) aborted\n", task);
return;
}
task->task_state_flags |= SAS_TASK_STATE_DONE;
@ -2478,7 +2478,7 @@ out:
spin_lock_irqsave(&device->done_lock, flags);
if (test_bit(SAS_HA_FROZEN, &ha->state)) {
spin_unlock_irqrestore(&device->done_lock, flags);
dev_info(dev, "slot complete: task(%pK) ignored\n",
dev_info(dev, "slot complete: task(%p) ignored\n",
task);
return;
}


@ -2409,7 +2409,7 @@ static void slot_complete_v3_hw(struct hisi_hba *hisi_hba,
if (slot_err_v3_hw(hisi_hba, task, slot)) {
if (ts->stat != SAS_DATA_UNDERRUN)
dev_info(dev, "erroneous completion iptt=%d task=%pK dev id=%d addr=%016llx CQ hdr: 0x%x 0x%x 0x%x 0x%x Error info: 0x%x 0x%x 0x%x 0x%x\n",
dev_info(dev, "erroneous completion iptt=%d task=%p dev id=%d addr=%016llx CQ hdr: 0x%x 0x%x 0x%x 0x%x Error info: 0x%x 0x%x 0x%x 0x%x\n",
slot->idx, task, sas_dev->device_id,
SAS_ADDR(device->sas_addr),
dw0, dw1, complete_hdr->act, dw3,
@ -2470,7 +2470,7 @@ out:
spin_lock_irqsave(&task->task_state_lock, flags);
if (task->task_state_flags & SAS_TASK_STATE_ABORTED) {
spin_unlock_irqrestore(&task->task_state_lock, flags);
dev_info(dev, "slot complete: task(%pK) aborted\n", task);
dev_info(dev, "slot complete: task(%p) aborted\n", task);
return;
}
task->task_state_flags |= SAS_TASK_STATE_DONE;
@ -2481,7 +2481,7 @@ out:
spin_lock_irqsave(&device->done_lock, flags);
if (test_bit(SAS_HA_FROZEN, &ha->state)) {
spin_unlock_irqrestore(&device->done_lock, flags);
dev_info(dev, "slot complete: task(%pK) ignored\n",
dev_info(dev, "slot complete: task(%p) ignored\n",
task);
return;
}


@ -425,7 +425,7 @@ static void ibmvscsis_disconnect(struct work_struct *work)
/*
* check which state we are in and see if we
* should transitition to the new state
* should transition to the new state
*/
switch (vscsi->state) {
/* Should never be called while in this state. */


@ -184,7 +184,8 @@ static int srp_direct_data(struct ibmvscsis_cmd *cmd, struct srp_direct_buf *md,
err = rdma_io(cmd, sg, nsg, md, 1, dir, len);
if (dma_map)
dma_unmap_sg(iue->target->dev, sg, nsg, DMA_BIDIRECTIONAL);
dma_unmap_sg(iue->target->dev, sg, cmd->se_cmd.t_data_nents,
DMA_BIDIRECTIONAL);
return err;
}
@ -256,7 +257,8 @@ rdma:
err = rdma_io(cmd, sg, nsg, md, nmd, dir, len);
if (dma_map)
dma_unmap_sg(iue->target->dev, sg, nsg, DMA_BIDIRECTIONAL);
dma_unmap_sg(iue->target->dev, sg, cmd->se_cmd.t_data_nents,
DMA_BIDIRECTIONAL);
free_mem:
if (token && dma_map) {


@ -2904,7 +2904,7 @@ static void isci_request_io_request_complete(struct isci_host *ihost,
task->total_xfer_len, task->data_dir);
else /* unmap the sgl dma addresses */
dma_unmap_sg(&ihost->pdev->dev, task->scatter,
request->num_sg_entries, task->data_dir);
task->num_scatter, task->data_dir);
break;
case SAS_PROTOCOL_SMP: {
struct scatterlist *sg = &task->smp_task.smp_req;


@ -3185,7 +3185,8 @@ iscsi_conn_setup(struct iscsi_cls_session *cls_session, int dd_size,
return NULL;
conn = cls_conn->dd_data;
conn->dd_data = cls_conn->dd_data + sizeof(*conn);
if (dd_size)
conn->dd_data = cls_conn->dd_data + sizeof(*conn);
conn->session = session;
conn->cls_conn = cls_conn;
conn->c_stage = ISCSI_CONN_INITIAL_STAGE;


@ -1,7 +1,7 @@
/*******************************************************************
* This file is part of the Emulex Linux Device Driver for *
* Fibre Channel Host Bus Adapters. *
* Copyright (C) 2017-2024 Broadcom. All Rights Reserved. The term *
* Copyright (C) 2017-2025 Broadcom. All Rights Reserved. The term *
* Broadcom refers to Broadcom Inc. and/or its subsidiaries. *
* Copyright (C) 2004-2016 Emulex. All rights reserved. *
* EMULEX and SLI are trademarks of Emulex. *
@ -264,9 +264,9 @@ ct_free_mpvirt:
ct_free_mp:
kfree(mp);
ct_exit:
lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
"6440 Unsol CT: Rsp err %d Data: x%lx\n",
rc, vport->fc_flag);
lpfc_vlog_msg(vport, KERN_WARNING, LOG_ELS,
"6440 Unsol CT: Rsp err %d Data: x%lx\n",
rc, vport->fc_flag);
}
/**
@ -313,7 +313,7 @@ lpfc_ct_handle_mibreq(struct lpfc_hba *phba, struct lpfc_iocbq *ctiocbq)
mi_cmd = be16_to_cpu(ct_req->CommandResponse.bits.CmdRsp);
lpfc_vlog_msg(vport, KERN_WARNING, LOG_ELS,
"6442 MI Cmd : x%x Not Supported\n", mi_cmd);
"6442 MI Cmd: x%x Not Supported\n", mi_cmd);
lpfc_ct_reject_event(ndlp, ct_req,
bf_get(wqe_ctxt_tag,
&ctiocbq->wqe.xmit_els_rsp.wqe_com),
@ -2229,21 +2229,6 @@ lpfc_cmpl_ct_disc_fdmi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
/* Look for a retryable error */
if (ulp_status == IOSTAT_LOCAL_REJECT) {
switch ((ulp_word4 & IOERR_PARAM_MASK)) {
case IOERR_SLI_ABORTED:
case IOERR_SLI_DOWN:
/* Driver aborted this IO. No retry as error
* is likely Offline->Online or some adapter
* error. Recovery will try again, but if port
* is not active there's no point to continue
* issuing follow up FDMI commands.
*/
if (!(phba->sli.sli_flag & LPFC_SLI_ACTIVE)) {
free_ndlp = cmdiocb->ndlp;
lpfc_ct_free_iocb(phba, cmdiocb);
lpfc_nlp_put(free_ndlp);
return;
}
break;
case IOERR_ABORT_IN_PROGRESS:
case IOERR_SEQUENCE_TIMEOUT:
case IOERR_ILLEGAL_FRAME:
@ -2269,6 +2254,9 @@ lpfc_cmpl_ct_disc_fdmi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
lpfc_ct_free_iocb(phba, cmdiocb);
lpfc_nlp_put(free_ndlp);
if (ulp_status != IOSTAT_SUCCESS)
return;
ndlp = lpfc_findnode_did(vport, FDMI_DID);
if (!ndlp)
return;


@ -1,7 +1,7 @@
/*******************************************************************
* This file is part of the Emulex Linux Device Driver for *
* Fibre Channel Host Bus Adapters. *
* Copyright (C) 2017-2024 Broadcom. All Rights Reserved. The term *
* Copyright (C) 2017-2025 Broadcom. All Rights Reserved. The term *
* Broadcom refers to Broadcom Inc. and/or its subsidiaries. *
* Copyright (C) 2007-2015 Emulex. All rights reserved. *
* EMULEX and SLI are trademarks of Emulex. *
@ -6218,8 +6218,9 @@ lpfc_debugfs_initialize(struct lpfc_vport *vport)
i++;
}
lpfc_debugfs_max_slow_ring_trc = (1 << i);
pr_err("lpfc_debugfs_max_disc_trc changed to "
"%d\n", lpfc_debugfs_max_disc_trc);
pr_info("lpfc_debugfs_max_slow_ring_trc "
"changed to %d\n",
lpfc_debugfs_max_slow_ring_trc);
}
}
@ -6251,7 +6252,7 @@ lpfc_debugfs_initialize(struct lpfc_vport *vport)
atomic_set(&phba->nvmeio_trc_cnt, 0);
if (lpfc_debugfs_max_nvmeio_trc) {
num = lpfc_debugfs_max_nvmeio_trc - 1;
if (num & lpfc_debugfs_max_disc_trc) {
if (num & lpfc_debugfs_max_nvmeio_trc) {
/* Change to be a power of 2 */
num = lpfc_debugfs_max_nvmeio_trc;
i = 0;
@ -6260,10 +6261,9 @@ lpfc_debugfs_initialize(struct lpfc_vport *vport)
i++;
}
lpfc_debugfs_max_nvmeio_trc = (1 << i);
lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
"0575 lpfc_debugfs_max_nvmeio_trc "
"changed to %d\n",
lpfc_debugfs_max_nvmeio_trc);
pr_info("lpfc_debugfs_max_nvmeio_trc changed "
"to %d\n",
lpfc_debugfs_max_nvmeio_trc);
}
phba->nvmeio_trc_size = lpfc_debugfs_max_nvmeio_trc;
@ -6308,8 +6308,8 @@ nvmeio_off:
i++;
}
lpfc_debugfs_max_disc_trc = (1 << i);
pr_err("lpfc_debugfs_max_disc_trc changed to %d\n",
lpfc_debugfs_max_disc_trc);
pr_info("lpfc_debugfs_max_disc_trc changed to %d\n",
lpfc_debugfs_max_disc_trc);
}
}


@ -7861,6 +7861,13 @@ lpfc_rscn_recovery_check(struct lpfc_vport *vport)
/* Move all affected nodes by pending RSCNs to NPR state. */
list_for_each_entry_safe(ndlp, n, &vport->fc_nodes, nlp_listp) {
if (test_bit(FC_UNLOADING, &vport->load_flag)) {
lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
"1000 %s Unloading set\n",
__func__);
return 0;
}
if ((ndlp->nlp_state == NLP_STE_UNUSED_NODE) ||
!lpfc_rscn_payload_check(vport, ndlp->nlp_DID))
continue;
@ -8369,9 +8376,9 @@ lpfc_els_rcv_flogi(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
clear_bit(FC_PUBLIC_LOOP, &vport->fc_flag);
lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
"3311 Rcv Flogi PS x%x new PS x%x "
"fc_flag x%lx new fc_flag x%lx\n",
"fc_flag x%lx new fc_flag x%lx, hba_flag x%lx\n",
port_state, vport->port_state,
fc_flag, vport->fc_flag);
fc_flag, vport->fc_flag, phba->hba_flag);
/*
* We temporarily set fc_myDID to make it look like we are


@ -183,7 +183,8 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport)
/* Don't schedule a worker thread event if the vport is going down. */
if (test_bit(FC_UNLOADING, &vport->load_flag) ||
!test_bit(HBA_SETUP, &phba->hba_flag)) {
(phba->sli_rev == LPFC_SLI_REV4 &&
!test_bit(HBA_SETUP, &phba->hba_flag))) {
spin_lock_irqsave(&ndlp->lock, iflags);
ndlp->rport = NULL;
@ -1266,6 +1267,10 @@ lpfc_linkdown(struct lpfc_hba *phba)
}
phba->defer_flogi_acc.flag = false;
/* reinitialize initial HBA flag */
clear_bit(HBA_FLOGI_ISSUED, &phba->hba_flag);
clear_bit(HBA_RHBA_CMPL, &phba->hba_flag);
/* Clear external loopback plug detected flag */
phba->link_flag &= ~LS_EXTERNAL_LOOPBACK;
@ -1436,10 +1441,6 @@ lpfc_linkup(struct lpfc_hba *phba)
phba->pport->rcv_flogi_cnt = 0;
spin_unlock_irq(shost->host_lock);
/* reinitialize initial HBA flag */
clear_bit(HBA_FLOGI_ISSUED, &phba->hba_flag);
clear_bit(HBA_RHBA_CMPL, &phba->hba_flag);
return 0;
}


@ -1,7 +1,7 @@
/*******************************************************************
* This file is part of the Emulex Linux Device Driver for *
* Fibre Channel Host Bus Adapters. *
* Copyright (C) 2017-2024 Broadcom. All Rights Reserved. The term *
* Copyright (C) 2017-2025 Broadcom. All Rights Reserved. The term *
* Broadcom refers to Broadcom Inc. and/or its subsidiaries. *
* Copyright (C) 2009-2016 Emulex. All rights reserved. *
* EMULEX and SLI are trademarks of Emulex. *
@ -1328,6 +1328,9 @@ struct cq_context {
#define LPFC_CQ_CNT_512 0x1
#define LPFC_CQ_CNT_1024 0x2
#define LPFC_CQ_CNT_WORD7 0x3
#define lpfc_cq_context_cqe_sz_SHIFT 25
#define lpfc_cq_context_cqe_sz_MASK 0x00000003
#define lpfc_cq_context_cqe_sz_WORD word0
#define lpfc_cq_context_autovalid_SHIFT 15
#define lpfc_cq_context_autovalid_MASK 0x00000001
#define lpfc_cq_context_autovalid_WORD word0
@ -1383,9 +1386,9 @@ struct lpfc_mbx_cq_create_set {
#define lpfc_mbx_cq_create_set_valid_SHIFT 29
#define lpfc_mbx_cq_create_set_valid_MASK 0x00000001
#define lpfc_mbx_cq_create_set_valid_WORD word1
#define lpfc_mbx_cq_create_set_cqe_cnt_SHIFT 27
#define lpfc_mbx_cq_create_set_cqe_cnt_MASK 0x00000003
#define lpfc_mbx_cq_create_set_cqe_cnt_WORD word1
#define lpfc_mbx_cq_create_set_cqecnt_SHIFT 27
#define lpfc_mbx_cq_create_set_cqecnt_MASK 0x00000003
#define lpfc_mbx_cq_create_set_cqecnt_WORD word1
#define lpfc_mbx_cq_create_set_cqe_size_SHIFT 25
#define lpfc_mbx_cq_create_set_cqe_size_MASK 0x00000003
#define lpfc_mbx_cq_create_set_cqe_size_WORD word1
@ -1398,13 +1401,16 @@ struct lpfc_mbx_cq_create_set {
#define lpfc_mbx_cq_create_set_clswm_SHIFT 12
#define lpfc_mbx_cq_create_set_clswm_MASK 0x00000003
#define lpfc_mbx_cq_create_set_clswm_WORD word1
#define lpfc_mbx_cq_create_set_cqe_cnt_hi_SHIFT 0
#define lpfc_mbx_cq_create_set_cqe_cnt_hi_MASK 0x0000001F
#define lpfc_mbx_cq_create_set_cqe_cnt_hi_WORD word1
uint32_t word2;
#define lpfc_mbx_cq_create_set_arm_SHIFT 31
#define lpfc_mbx_cq_create_set_arm_MASK 0x00000001
#define lpfc_mbx_cq_create_set_arm_WORD word2
#define lpfc_mbx_cq_create_set_cq_cnt_SHIFT 16
#define lpfc_mbx_cq_create_set_cq_cnt_MASK 0x00007FFF
#define lpfc_mbx_cq_create_set_cq_cnt_WORD word2
#define lpfc_mbx_cq_create_set_cqe_cnt_lo_SHIFT 16
#define lpfc_mbx_cq_create_set_cqe_cnt_lo_MASK 0x00007FFF
#define lpfc_mbx_cq_create_set_cqe_cnt_lo_WORD word2
#define lpfc_mbx_cq_create_set_num_cq_SHIFT 0
#define lpfc_mbx_cq_create_set_num_cq_MASK 0x0000FFFF
#define lpfc_mbx_cq_create_set_num_cq_WORD word2


@ -2627,27 +2627,33 @@ lpfc_get_hba_model_desc(struct lpfc_hba *phba, uint8_t *mdp, uint8_t *descp)
"Obsolete, Unsupported Fibre Channel Adapter"};
break;
case PCI_DEVICE_ID_BMID:
m = (typeof(m)){"LP1150", "PCI-X2", "Fibre Channel Adapter"};
m = (typeof(m)){"LP1150", "PCI-X2",
"Obsolete, Unsupported Fibre Channel Adapter"};
break;
case PCI_DEVICE_ID_BSMB:
m = (typeof(m)){"LP111", "PCI-X2",
"Obsolete, Unsupported Fibre Channel Adapter"};
break;
case PCI_DEVICE_ID_ZEPHYR:
m = (typeof(m)){"LPe11000", "PCIe", "Fibre Channel Adapter"};
m = (typeof(m)){"LPe11000", "PCIe",
"Obsolete, Unsupported Fibre Channel Adapter"};
break;
case PCI_DEVICE_ID_ZEPHYR_SCSP:
m = (typeof(m)){"LPe11000", "PCIe", "Fibre Channel Adapter"};
m = (typeof(m)){"LPe11000", "PCIe",
"Obsolete, Unsupported Fibre Channel Adapter"};
break;
case PCI_DEVICE_ID_ZEPHYR_DCSP:
m = (typeof(m)){"LP2105", "PCIe", "FCoE Adapter"};
m = (typeof(m)){"LP2105", "PCIe",
"Obsolete, Unsupported FCoE Adapter"};
GE = 1;
break;
case PCI_DEVICE_ID_ZMID:
m = (typeof(m)){"LPe1150", "PCIe", "Fibre Channel Adapter"};
m = (typeof(m)){"LPe1150", "PCIe",
"Obsolete, Unsupported Fibre Channel Adapter"};
break;
case PCI_DEVICE_ID_ZSMB:
m = (typeof(m)){"LPe111", "PCIe", "Fibre Channel Adapter"};
m = (typeof(m)){"LPe111", "PCIe",
"Obsolete, Unsupported Fibre Channel Adapter"};
break;
case PCI_DEVICE_ID_LP101:
m = (typeof(m)){"LP101", "PCI-X",
@ -2666,22 +2672,28 @@ lpfc_get_hba_model_desc(struct lpfc_hba *phba, uint8_t *mdp, uint8_t *descp)
"Obsolete, Unsupported Fibre Channel Adapter"};
break;
case PCI_DEVICE_ID_SAT:
m = (typeof(m)){"LPe12000", "PCIe", "Fibre Channel Adapter"};
m = (typeof(m)){"LPe12000", "PCIe",
"Obsolete, Unsupported Fibre Channel Adapter"};
break;
case PCI_DEVICE_ID_SAT_MID:
m = (typeof(m)){"LPe1250", "PCIe", "Fibre Channel Adapter"};
m = (typeof(m)){"LPe1250", "PCIe",
"Obsolete, Unsupported Fibre Channel Adapter"};
break;
case PCI_DEVICE_ID_SAT_SMB:
m = (typeof(m)){"LPe121", "PCIe", "Fibre Channel Adapter"};
m = (typeof(m)){"LPe121", "PCIe",
"Obsolete, Unsupported Fibre Channel Adapter"};
break;
case PCI_DEVICE_ID_SAT_DCSP:
m = (typeof(m)){"LPe12002-SP", "PCIe", "Fibre Channel Adapter"};
m = (typeof(m)){"LPe12002-SP", "PCIe",
"Obsolete, Unsupported Fibre Channel Adapter"};
break;
case PCI_DEVICE_ID_SAT_SCSP:
m = (typeof(m)){"LPe12000-SP", "PCIe", "Fibre Channel Adapter"};
m = (typeof(m)){"LPe12000-SP", "PCIe",
"Obsolete, Unsupported Fibre Channel Adapter"};
break;
case PCI_DEVICE_ID_SAT_S:
m = (typeof(m)){"LPe12000-S", "PCIe", "Fibre Channel Adapter"};
m = (typeof(m)){"LPe12000-S", "PCIe",
"Obsolete, Unsupported Fibre Channel Adapter"};
break;
case PCI_DEVICE_ID_PROTEUS_VF:
m = (typeof(m)){"LPev12000", "PCIe IOV",
@ -2697,22 +2709,25 @@ lpfc_get_hba_model_desc(struct lpfc_hba *phba, uint8_t *mdp, uint8_t *descp)
break;
case PCI_DEVICE_ID_TIGERSHARK:
oneConnect = 1;
m = (typeof(m)){"OCe10100", "PCIe", "FCoE"};
m = (typeof(m)){"OCe10100", "PCIe",
"Obsolete, Unsupported FCoE Adapter"};
break;
case PCI_DEVICE_ID_TOMCAT:
oneConnect = 1;
m = (typeof(m)){"OCe11100", "PCIe", "FCoE"};
m = (typeof(m)){"OCe11100", "PCIe",
"Obsolete, Unsupported FCoE Adapter"};
break;
case PCI_DEVICE_ID_FALCON:
m = (typeof(m)){"LPSe12002-ML1-E", "PCIe",
"EmulexSecure Fibre"};
"Obsolete, Unsupported Fibre Channel Adapter"};
break;
case PCI_DEVICE_ID_BALIUS:
m = (typeof(m)){"LPVe12002", "PCIe Shared I/O",
"Obsolete, Unsupported Fibre Channel Adapter"};
break;
case PCI_DEVICE_ID_LANCER_FC:
m = (typeof(m)){"LPe16000", "PCIe", "Fibre Channel Adapter"};
m = (typeof(m)){"LPe16000", "PCIe",
"Obsolete, Unsupported Fibre Channel Adapter"};
break;
case PCI_DEVICE_ID_LANCER_FC_VF:
m = (typeof(m)){"LPe16000", "PCIe",
@ -2720,12 +2735,13 @@ lpfc_get_hba_model_desc(struct lpfc_hba *phba, uint8_t *mdp, uint8_t *descp)
break;
case PCI_DEVICE_ID_LANCER_FCOE:
oneConnect = 1;
m = (typeof(m)){"OCe15100", "PCIe", "FCoE"};
m = (typeof(m)){"OCe15100", "PCIe",
"Obsolete, Unsupported FCoE Adapter"};
break;
case PCI_DEVICE_ID_LANCER_FCOE_VF:
oneConnect = 1;
m = (typeof(m)){"OCe15100", "PCIe",
"Obsolete, Unsupported FCoE"};
"Obsolete, Unsupported FCoE Adapter"};
break;
case PCI_DEVICE_ID_LANCER_G6_FC:
m = (typeof(m)){"LPe32000", "PCIe", "Fibre Channel Adapter"};
@ -2739,7 +2755,8 @@ lpfc_get_hba_model_desc(struct lpfc_hba *phba, uint8_t *mdp, uint8_t *descp)
case PCI_DEVICE_ID_SKYHAWK:
case PCI_DEVICE_ID_SKYHAWK_VF:
oneConnect = 1;
m = (typeof(m)){"OCe14000", "PCIe", "FCoE"};
m = (typeof(m)){"OCe14000", "PCIe",
"Obsolete, Unsupported FCoE Adapter"};
break;
default:
m = (typeof(m)){"Unknown", "", ""};
@ -7919,8 +7936,6 @@ lpfc_sli4_driver_resource_setup(struct lpfc_hba *phba)
int longs;
int extra;
uint64_t wwn;
u32 if_type;
u32 if_fam;
phba->sli4_hba.num_present_cpu = lpfc_present_cpu;
phba->sli4_hba.num_possible_cpu = cpumask_last(cpu_possible_mask) + 1;
@ -8180,28 +8195,11 @@ lpfc_sli4_driver_resource_setup(struct lpfc_hba *phba)
*/
rc = lpfc_get_sli4_parameters(phba, mboxq);
if (rc) {
if_type = bf_get(lpfc_sli_intf_if_type,
&phba->sli4_hba.sli_intf);
if_fam = bf_get(lpfc_sli_intf_sli_family,
&phba->sli4_hba.sli_intf);
if (phba->sli4_hba.extents_in_use &&
phba->sli4_hba.rpi_hdrs_in_use) {
lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
"2999 Unsupported SLI4 Parameters "
"Extents and RPI headers enabled.\n");
if (if_type == LPFC_SLI_INTF_IF_TYPE_0 &&
if_fam == LPFC_SLI_INTF_FAMILY_BE2) {
mempool_free(mboxq, phba->mbox_mem_pool);
rc = -EIO;
goto out_free_bsmbx;
}
}
if (!(if_type == LPFC_SLI_INTF_IF_TYPE_0 &&
if_fam == LPFC_SLI_INTF_FAMILY_BE2)) {
mempool_free(mboxq, phba->mbox_mem_pool);
rc = -EIO;
goto out_free_bsmbx;
}
lpfc_log_msg(phba, KERN_WARNING, LOG_INIT,
"2999 Could not get SLI4 parameters\n");
rc = -EIO;
mempool_free(mboxq, phba->mbox_mem_pool);
goto out_free_bsmbx;
}
/*

View File

@ -1,7 +1,7 @@
/*******************************************************************
* This file is part of the Emulex Linux Device Driver for *
* Fibre Channel Host Bus Adapters. *
* Copyright (C) 2017-2024 Broadcom. All Rights Reserved. The term *
* Copyright (C) 2017-2025 Broadcom. All Rights Reserved. The term *
* Broadcom refers to Broadcom Inc. and/or its subsidiaries. *
* Copyright (C) 2004-2016 Emulex. All rights reserved. *
* EMULEX and SLI are trademarks of Emulex. *
@ -390,6 +390,10 @@ lpfc_sli4_vport_delete_fcp_xri_aborted(struct lpfc_vport *vport)
if (!(vport->cfg_enable_fc4_type & LPFC_ENABLE_FCP))
return;
/* may be called before queues established if hba_setup fails */
if (!phba->sli4_hba.hdwq)
return;
spin_lock_irqsave(&phba->hbalock, iflag);
for (idx = 0; idx < phba->cfg_hdw_queue; idx++) {
qp = &phba->sli4_hba.hdwq[idx];
@ -532,7 +536,8 @@ lpfc_sli4_io_xri_aborted(struct lpfc_hba *phba,
psb = container_of(iocbq, struct lpfc_io_buf, cur_iocbq);
psb->flags &= ~LPFC_SBUF_XBUSY;
spin_unlock_irqrestore(&phba->hbalock, iflag);
if (!list_empty(&pring->txq))
if (test_bit(HBA_SETUP, &phba->hba_flag) &&
!list_empty(&pring->txq))
lpfc_worker_wake_up(phba);
return;
}


@ -5167,7 +5167,6 @@ lpfc_sli4_brdreset(struct lpfc_hba *phba)
phba->link_events = 0;
phba->pport->fc_myDID = 0;
phba->pport->fc_prevDID = 0;
clear_bit(HBA_SETUP, &phba->hba_flag);
spin_lock_irq(&phba->hbalock);
psli->sli_flag &= ~(LPFC_PROCESS_LA);
@ -5284,6 +5283,7 @@ lpfc_sli_brdrestart_s4(struct lpfc_hba *phba)
"0296 Restart HBA Data: x%x x%x\n",
phba->pport->port_state, psli->sli_flag);
clear_bit(HBA_SETUP, &phba->hba_flag);
lpfc_sli4_queue_unset(phba);
rc = lpfc_sli4_brdreset(phba);
@ -16477,10 +16477,10 @@ lpfc_cq_create_set(struct lpfc_hba *phba, struct lpfc_queue **cqp,
case 4096:
if (phba->sli4_hba.pc_sli4_params.cqv ==
LPFC_Q_CREATE_VERSION_2) {
bf_set(lpfc_mbx_cq_create_set_cqe_cnt,
bf_set(lpfc_mbx_cq_create_set_cqe_cnt_lo,
&cq_set->u.request,
cq->entry_count);
bf_set(lpfc_mbx_cq_create_set_cqe_cnt,
cq->entry_count);
bf_set(lpfc_mbx_cq_create_set_cqecnt,
&cq_set->u.request,
LPFC_CQ_CNT_WORD7);
break;
@ -16496,15 +16496,15 @@ lpfc_cq_create_set(struct lpfc_hba *phba, struct lpfc_queue **cqp,
}
fallthrough; /* otherwise default to smallest */
case 256:
bf_set(lpfc_mbx_cq_create_set_cqe_cnt,
bf_set(lpfc_mbx_cq_create_set_cqecnt,
&cq_set->u.request, LPFC_CQ_CNT_256);
break;
case 512:
bf_set(lpfc_mbx_cq_create_set_cqe_cnt,
bf_set(lpfc_mbx_cq_create_set_cqecnt,
&cq_set->u.request, LPFC_CQ_CNT_512);
break;
case 1024:
bf_set(lpfc_mbx_cq_create_set_cqe_cnt,
bf_set(lpfc_mbx_cq_create_set_cqecnt,
&cq_set->u.request, LPFC_CQ_CNT_1024);
break;
}


@ -1,7 +1,7 @@
/*******************************************************************
* This file is part of the Emulex Linux Device Driver for *
* Fibre Channel Host Bus Adapters. *
* Copyright (C) 2017-2024 Broadcom. All Rights Reserved. The term *
* Copyright (C) 2017-2025 Broadcom. All Rights Reserved. The term *
* Broadcom refers to Broadcom Inc. and/or its subsidiaries. *
* Copyright (C) 2009-2016 Emulex. All rights reserved. *
* EMULEX and SLI are trademarks of Emulex. *
@ -575,8 +575,10 @@ struct lpfc_pc_sli4_params {
#define LPFC_CQ_4K_PAGE_SZ 0x1
#define LPFC_CQ_16K_PAGE_SZ 0x4
#define LPFC_CQ_32K_PAGE_SZ 0x8
#define LPFC_WQ_4K_PAGE_SZ 0x1
#define LPFC_WQ_16K_PAGE_SZ 0x4
#define LPFC_WQ_32K_PAGE_SZ 0x8
struct lpfc_iov {
uint32_t pf_number;


@ -20,7 +20,7 @@
* included with this package. *
*******************************************************************/
#define LPFC_DRIVER_VERSION "14.4.0.9"
#define LPFC_DRIVER_VERSION "14.4.0.10"
#define LPFC_DRIVER_NAME "lpfc"
/* Used for SLI 2/3 */


@ -56,8 +56,8 @@ extern struct list_head mrioc_list;
extern int prot_mask;
extern atomic64_t event_counter;
#define MPI3MR_DRIVER_VERSION "8.13.0.5.50"
#define MPI3MR_DRIVER_RELDATE "20-February-2025"
#define MPI3MR_DRIVER_VERSION "8.14.0.5.50"
#define MPI3MR_DRIVER_RELDATE "27-June-2025"
#define MPI3MR_DRIVER_NAME "mpi3mr"
#define MPI3MR_DRIVER_LICENSE "GPL"
@ -1137,6 +1137,8 @@ struct scmd_priv {
* @logdata_buf: Circular buffer to store log data entries
* @logdata_buf_idx: Index of entry in buffer to store
* @logdata_entry_sz: log data entry size
* @adm_req_q_bar_writeq_lock: Admin request queue lock
* @adm_reply_q_bar_writeq_lock: Admin reply queue lock
* @pend_large_data_sz: Counter to track pending large data
* @io_throttle_data_length: I/O size to track in 512b blocks
* @io_throttle_high: I/O size to start throttle in 512b blocks
@ -1185,7 +1187,7 @@ struct mpi3mr_ioc {
char name[MPI3MR_NAME_LENGTH];
char driver_name[MPI3MR_NAME_LENGTH];
volatile struct mpi3_sysif_registers __iomem *sysif_regs;
struct mpi3_sysif_registers __iomem *sysif_regs;
resource_size_t sysif_regs_phys;
int bars;
u64 dma_mask;
@ -1339,6 +1341,8 @@ struct mpi3mr_ioc {
u8 *logdata_buf;
u16 logdata_buf_idx;
u16 logdata_entry_sz;
spinlock_t adm_req_q_bar_writeq_lock;
spinlock_t adm_reply_q_bar_writeq_lock;
atomic_t pend_large_data_sz;
u32 io_throttle_data_length;


@ -795,9 +795,8 @@ void mpi3mr_release_diag_bufs(struct mpi3mr_ioc *mrioc, u8 skip_rel_action)
*
* @hdb: HDB pointer
* @type: Trigger type
* @data: Trigger data
* @force: Trigger overwrite flag
* @trigger_data: Pointer to trigger data information
* @force: Trigger overwrite flag
*
* Updates trigger type and trigger data based on parameter
* passed to this function
@ -822,9 +821,8 @@ void mpi3mr_set_trigger_data_in_hdb(struct diag_buffer_desc *hdb,
*
* @mrioc: Adapter instance reference
* @type: Trigger type
* @data: Trigger data
* @force: Trigger overwrite flag
* @trigger_data: Pointer to trigger data information
* @force: Trigger overwrite flag
*
* Updates trigger type and trigger data based on parameter
* passed to this function
@ -3388,6 +3386,8 @@ static DEVICE_ATTR_RO(persistent_id);
* @buf: the buffer returned
*
* A sysfs 'read-only' sdev attribute, only works with SATA devices
*
* Returns: the number of characters written to @buf
*/
static ssize_t
sas_ncq_prio_supported_show(struct device *dev,
@ -3406,6 +3406,8 @@ static DEVICE_ATTR_RO(sas_ncq_prio_supported);
* @buf: the buffer returned
*
* A sysfs 'read/write' sdev attribute, only works with SATA devices
*
* Returns: the number of characters written to @buf
*/
static ssize_t
sas_ncq_prio_enable_show(struct device *dev,


@ -23,17 +23,22 @@ module_param(poll_queues, int, 0444);
MODULE_PARM_DESC(poll_queues, "Number of queues for io_uring poll mode. (Range 1 - 126)");
#if defined(writeq) && defined(CONFIG_64BIT)
static inline void mpi3mr_writeq(__u64 b, volatile void __iomem *addr)
static inline void mpi3mr_writeq(__u64 b, void __iomem *addr,
spinlock_t *write_queue_lock)
{
writeq(b, addr);
}
#else
static inline void mpi3mr_writeq(__u64 b, volatile void __iomem *addr)
static inline void mpi3mr_writeq(__u64 b, void __iomem *addr,
spinlock_t *write_queue_lock)
{
__u64 data_out = b;
unsigned long flags;
spin_lock_irqsave(write_queue_lock, flags);
writel((u32)(data_out), addr);
writel((u32)(data_out >> 32), (addr + 4));
spin_unlock_irqrestore(write_queue_lock, flags);
}
#endif
@ -428,8 +433,8 @@ static void mpi3mr_process_admin_reply_desc(struct mpi3mr_ioc *mrioc,
MPI3MR_SENSE_BUF_SZ);
}
if (cmdptr->is_waiting) {
complete(&cmdptr->done);
cmdptr->is_waiting = 0;
complete(&cmdptr->done);
} else if (cmdptr->callback)
cmdptr->callback(mrioc, cmdptr);
}
@ -2954,9 +2959,11 @@ static int mpi3mr_setup_admin_qpair(struct mpi3mr_ioc *mrioc)
(mrioc->num_admin_req);
writel(num_admin_entries, &mrioc->sysif_regs->admin_queue_num_entries);
mpi3mr_writeq(mrioc->admin_req_dma,
&mrioc->sysif_regs->admin_request_queue_address);
&mrioc->sysif_regs->admin_request_queue_address,
&mrioc->adm_req_q_bar_writeq_lock);
mpi3mr_writeq(mrioc->admin_reply_dma,
&mrioc->sysif_regs->admin_reply_queue_address);
&mrioc->sysif_regs->admin_reply_queue_address,
&mrioc->adm_reply_q_bar_writeq_lock);
writel(mrioc->admin_req_pi, &mrioc->sysif_regs->admin_request_queue_pi);
writel(mrioc->admin_reply_ci, &mrioc->sysif_regs->admin_reply_queue_ci);
return retval;


@ -49,6 +49,13 @@ static void mpi3mr_send_event_ack(struct mpi3mr_ioc *mrioc, u8 event,
#define MPI3_EVENT_WAIT_FOR_DEVICES_TO_REFRESH (0xFFFE)
/*
* SAS Log info code for an NCQ collateral abort after an NCQ error:
* IOC_LOGINFO_PREFIX_PL | PL_LOGINFO_CODE_SATA_NCQ_FAIL_ALL_CMDS_AFTR_ERR
* See: drivers/message/fusion/lsi/mpi_log_sas.h
*/
#define IOC_LOGINFO_SATA_NCQ_FAIL_AFTER_ERR 0x31080000
/**
* mpi3mr_host_tag_for_scmd - Get host tag for a scmd
* @mrioc: Adapter instance reference
@ -3430,7 +3437,18 @@ void mpi3mr_process_op_reply_desc(struct mpi3mr_ioc *mrioc,
scmd->result = DID_NO_CONNECT << 16;
break;
case MPI3_IOCSTATUS_SCSI_IOC_TERMINATED:
scmd->result = DID_SOFT_ERROR << 16;
if (ioc_loginfo == IOC_LOGINFO_SATA_NCQ_FAIL_AFTER_ERR) {
/*
* This is an ATA NCQ command aborted due to another NCQ
* command failure. We must retry this command
* immediately but without incrementing its retry
* counter.
*/
WARN_ON_ONCE(xfer_count != 0);
scmd->result = DID_IMM_RETRY << 16;
} else {
scmd->result = DID_SOFT_ERROR << 16;
}
break;
case MPI3_IOCSTATUS_SCSI_TASK_TERMINATED:
case MPI3_IOCSTATUS_SCSI_EXT_TERMINATED:
@ -5365,6 +5383,8 @@ mpi3mr_probe(struct pci_dev *pdev, const struct pci_device_id *id)
spin_lock_init(&mrioc->tgtdev_lock);
spin_lock_init(&mrioc->watchdog_lock);
spin_lock_init(&mrioc->chain_buf_lock);
spin_lock_init(&mrioc->adm_req_q_bar_writeq_lock);
spin_lock_init(&mrioc->adm_reply_q_bar_writeq_lock);
spin_lock_init(&mrioc->sas_node_lock);
spin_lock_init(&mrioc->trigger_lock);


@ -2914,7 +2914,6 @@ int mpt3sas_send_mctp_passthru_req(struct mpt3_passthru_command *command)
{
struct MPT3SAS_ADAPTER *ioc;
MPI2RequestHeader_t *mpi_request = NULL, *request;
MPI2DefaultReply_t *mpi_reply;
Mpi26MctpPassthroughRequest_t *mctp_passthru_req;
u16 smid;
unsigned long timeout;
@ -3022,8 +3021,6 @@ int mpt3sas_send_mctp_passthru_req(struct mpt3_passthru_command *command)
goto issue_host_reset;
}
mpi_reply = ioc->ctl_cmds.reply;
/* copy out xdata to user */
if (data_in_sz)
memcpy(command->data_in_buf_ptr, data_in, data_in_sz);


@ -195,6 +195,14 @@ struct sense_info {
#define MPT3SAS_PORT_ENABLE_COMPLETE (0xFFFD)
#define MPT3SAS_ABRT_TASK_SET (0xFFFE)
#define MPT3SAS_REMOVE_UNRESPONDING_DEVICES (0xFFFF)
/*
* SAS Log info code for an NCQ collateral abort after an NCQ error:
* IOC_LOGINFO_PREFIX_PL | PL_LOGINFO_CODE_SATA_NCQ_FAIL_ALL_CMDS_AFTR_ERR
* See: drivers/message/fusion/lsi/mpi_log_sas.h
*/
#define IOC_LOGINFO_SATA_NCQ_FAIL_AFTER_ERR 0x31080000
/**
* struct fw_event_work - firmware event struct
* @list: link list framework
@ -5814,6 +5822,17 @@ _scsih_io_done(struct MPT3SAS_ADAPTER *ioc, u16 smid, u8 msix_index, u32 reply)
scmd->result = DID_TRANSPORT_DISRUPTED << 16;
goto out;
}
if (log_info == IOC_LOGINFO_SATA_NCQ_FAIL_AFTER_ERR) {
/*
* This is an ATA NCQ command aborted due to another NCQ
* command failure. We must retry this command
* immediately but without incrementing its retry
* counter.
*/
WARN_ON_ONCE(xfer_cnt != 0);
scmd->result = DID_IMM_RETRY << 16;
break;
}
if (log_info == 0x31110630) {
if (scmd->retries > 2) {
scmd->result = DID_NO_CONNECT << 16;


@ -818,7 +818,7 @@ err_out:
dev_printk(KERN_ERR, mvi->dev, "mvsas prep failed[%d]!\n", rc);
if (!sas_protocol_ata(task->task_proto))
if (n_elem)
dma_unmap_sg(mvi->dev, task->scatter, n_elem,
dma_unmap_sg(mvi->dev, task->scatter, task->num_scatter,
task->data_dir);
prep_out:
return rc;
@ -864,7 +864,7 @@ static void mvs_slot_task_free(struct mvs_info *mvi, struct sas_task *task,
if (!sas_protocol_ata(task->task_proto))
if (slot->n_elem)
dma_unmap_sg(mvi->dev, task->scatter,
slot->n_elem, task->data_dir);
task->num_scatter, task->data_dir);
switch (task->task_proto) {
case SAS_PROTOCOL_SMP:


@ -170,6 +170,14 @@ struct forensic_data {
#define SPCV_MSGU_CFG_TABLE_TRANSFER_DEBUG_INFO 0x80
#define MAIN_MERRDCTO_MERRDCES 0xA0/* DWORD 0x28) */
/**
 * enum fatal_error_reporter - indicates the originator of the fatal error
 * @REPORTER_DRIVER: fatal error detected by the driver
 * @REPORTER_FIRMWARE: fatal error reported by the firmware
 */
enum fatal_error_reporter {
REPORTER_DRIVER,
REPORTER_FIRMWARE,
};
struct pm8001_dispatch {
char *name;
int (*chip_init)(struct pm8001_hba_info *pm8001_ha);
@ -715,6 +723,8 @@ ssize_t pm80xx_get_non_fatal_dump(struct device *cdev,
struct device_attribute *attr, char *buf);
ssize_t pm8001_get_gsm_dump(struct device *cdev, u32, char *buf);
int pm80xx_fatal_errors(struct pm8001_hba_info *pm8001_ha);
void pm80xx_fatal_error_uevent_emit(struct pm8001_hba_info *pm8001_ha,
enum fatal_error_reporter error_reporter);
void pm8001_free_dev(struct pm8001_device *pm8001_dev);
/* ctl shared API */
extern const struct attribute_group *pm8001_host_groups[];


@ -1551,6 +1551,52 @@ static int mpi_uninit_check(struct pm8001_hba_info *pm8001_ha)
return 0;
}
/**
* pm80xx_fatal_error_uevent_emit - emits a single fatal error uevent
* @pm8001_ha: our hba card information
* @error_reporter: reporter of fatal error
*/
void pm80xx_fatal_error_uevent_emit(struct pm8001_hba_info *pm8001_ha,
enum fatal_error_reporter error_reporter)
{
struct kobj_uevent_env *env;
pm8001_dbg(pm8001_ha, FAIL, "emitting fatal error uevent");
env = kzalloc(sizeof(struct kobj_uevent_env), GFP_KERNEL);
if (!env)
return;
if (add_uevent_var(env, "DRIVER=%s", DRV_NAME))
goto exit;
if (add_uevent_var(env, "HBA_NUM=%u", pm8001_ha->id))
goto exit;
if (add_uevent_var(env, "EVENT_TYPE=FATAL_ERROR"))
goto exit;
switch (error_reporter) {
case REPORTER_DRIVER:
if (add_uevent_var(env, "REPORTED_BY=DRIVER"))
goto exit;
break;
case REPORTER_FIRMWARE:
if (add_uevent_var(env, "REPORTED_BY=FIRMWARE"))
goto exit;
break;
default:
if (add_uevent_var(env, "REPORTED_BY=OTHER"))
goto exit;
break;
}
kobject_uevent_env(&pm8001_ha->shost->shost_dev.kobj, KOBJ_CHANGE, env->envp);
exit:
kfree(env);
}
/**
* pm80xx_fatal_errors - returns non-zero *ONLY* when fatal errors
* @pm8001_ha: our hba card information
@ -1580,6 +1626,7 @@ pm80xx_fatal_errors(struct pm8001_hba_info *pm8001_ha)
"Fatal error SCRATCHPAD1 = 0x%x SCRATCHPAD2 = 0x%x SCRATCHPAD3 = 0x%x SCRATCHPAD_RSVD0 = 0x%x SCRATCHPAD_RSVD1 = 0x%x\n",
scratch_pad1, scratch_pad2, scratch_pad3,
scratch_pad_rsvd0, scratch_pad_rsvd1);
pm80xx_fatal_error_uevent_emit(pm8001_ha, REPORTER_DRIVER);
ret = 1;
}
@ -4039,6 +4086,7 @@ static int process_oq(struct pm8001_hba_info *pm8001_ha, u8 vec)
pm8001_dbg(pm8001_ha, FAIL,
"Firmware Fatal error! Regval:0x%x\n",
regval);
pm80xx_fatal_error_uevent_emit(pm8001_ha, REPORTER_FIRMWARE);
pm8001_handle_event(pm8001_ha, NULL, IO_FATAL_ERROR);
print_scratchpad_registers(pm8001_ha);
return ret;
@ -4677,8 +4725,12 @@ pm80xx_chip_phy_start_req(struct pm8001_hba_info *pm8001_ha, u8 phy_id)
&pm8001_ha->phy[phy_id].dev_sas_addr, SAS_ADDR_SIZE);
payload.sas_identify.phy_id = phy_id;
return pm8001_mpi_build_cmd(pm8001_ha, 0, opcode, &payload,
ret = pm8001_mpi_build_cmd(pm8001_ha, 0, opcode, &payload,
sizeof(payload), 0);
if (ret < 0)
pm8001_tag_free(pm8001_ha, tag);
return ret;
}
/**
@ -4704,8 +4756,12 @@ static int pm80xx_chip_phy_stop_req(struct pm8001_hba_info *pm8001_ha,
payload.tag = cpu_to_le32(tag);
payload.phy_id = cpu_to_le32(phy_id);
return pm8001_mpi_build_cmd(pm8001_ha, 0, opcode, &payload,
ret = pm8001_mpi_build_cmd(pm8001_ha, 0, opcode, &payload,
sizeof(payload), 0);
if (ret < 0)
pm8001_tag_free(pm8001_ha, tag);
return ret;
}
/*


@ -25,11 +25,7 @@ config SCSI_QLA_FC
Upon request, the driver caches the firmware image until
the driver is unloaded.
Firmware images can be retrieved from:
http://ldriver.qlogic.com/firmware/
They are also included in the linux-firmware tree as well.
Firmware images are included in the linux-firmware tree.
config TCM_QLA2XXX
tristate "TCM_QLA2XXX fabric module for QLogic 24xx+ series target mode HBAs"


@ -179,10 +179,9 @@ qla2x00_dfs_tgt_port_database_show(struct seq_file *s, void *unused)
struct qla_hw_data *ha = vha->hw;
struct gid_list_info *gid_list;
dma_addr_t gid_list_dma;
fc_port_t fc_port;
char *id_iter;
int rc, i;
uint16_t entries, loop_id;
uint16_t entries;
seq_printf(s, "%s\n", vha->host_str);
gid_list = dma_alloc_coherent(&ha->pdev->dev,
@ -205,18 +204,11 @@ qla2x00_dfs_tgt_port_database_show(struct seq_file *s, void *unused)
seq_puts(s, "Port Name Port ID Loop ID\n");
for (i = 0; i < entries; i++) {
struct gid_list_info *gid =
(struct gid_list_info *)id_iter;
loop_id = le16_to_cpu(gid->loop_id);
memset(&fc_port, 0, sizeof(fc_port_t));
struct gid_list_info *gid = (struct gid_list_info *)id_iter;
fc_port.loop_id = loop_id;
rc = qla24xx_gpdb_wait(vha, &fc_port, 0);
seq_printf(s, "%8phC %02x%02x%02x %d\n",
fc_port.port_name, fc_port.d_id.b.domain,
fc_port.d_id.b.area, fc_port.d_id.b.al_pa,
fc_port.loop_id);
rc = qla24xx_print_fc_port_id(vha, s, le16_to_cpu(gid->loop_id));
if (rc != QLA_SUCCESS)
break;
id_iter += ha->gid_list_info_size;
}
out_free_id_list:


@ -557,6 +557,7 @@ qla26xx_dport_diagnostics_v2(scsi_qla_host_t *,
int qla24xx_send_mb_cmd(struct scsi_qla_host *, mbx_cmd_t *);
int qla24xx_gpdb_wait(struct scsi_qla_host *, fc_port_t *, u8);
int qla24xx_print_fc_port_id(struct scsi_qla_host *, struct seq_file *, u16);
int qla24xx_gidlist_wait(struct scsi_qla_host *, void *, dma_addr_t,
uint16_t *);
int __qla24xx_parse_gpdb(struct scsi_qla_host *, fc_port_t *,


@ -8603,8 +8603,6 @@ failed:
return QLA_SUCCESS;
}
#define QLA_FW_URL "http://ldriver.qlogic.com/firmware/"
int
qla2x00_load_risc(scsi_qla_host_t *vha, uint32_t *srisc_addr)
{
@ -8622,8 +8620,6 @@ qla2x00_load_risc(scsi_qla_host_t *vha, uint32_t *srisc_addr)
if (!blob) {
ql_log(ql_log_info, vha, 0x0083,
"Firmware image unavailable.\n");
ql_log(ql_log_info, vha, 0x0084,
"Firmware images can be retrieved from: "QLA_FW_URL ".\n");
return QLA_FUNCTION_FAILED;
}


@ -6597,6 +6597,54 @@ done:
return rval;
}
int qla24xx_print_fc_port_id(struct scsi_qla_host *vha, struct seq_file *s, u16 loop_id)
{
int rval = QLA_FUNCTION_FAILED;
dma_addr_t pd_dma;
struct port_database_24xx *pd;
struct qla_hw_data *ha = vha->hw;
mbx_cmd_t mc;
if (!vha->hw->flags.fw_started)
goto done;
pd = dma_pool_zalloc(ha->s_dma_pool, GFP_KERNEL, &pd_dma);
if (pd == NULL) {
ql_log(ql_log_warn, vha, 0xd047,
"Failed to allocate port database structure.\n");
goto done;
}
memset(&mc, 0, sizeof(mc));
mc.mb[0] = MBC_GET_PORT_DATABASE;
mc.mb[1] = loop_id;
mc.mb[2] = MSW(pd_dma);
mc.mb[3] = LSW(pd_dma);
mc.mb[6] = MSW(MSD(pd_dma));
mc.mb[7] = LSW(MSD(pd_dma));
mc.mb[9] = vha->vp_idx;
rval = qla24xx_send_mb_cmd(vha, &mc);
if (rval != QLA_SUCCESS) {
ql_dbg(ql_dbg_mbx, vha, 0x1193, "%s: fail\n", __func__);
goto done_free_sp;
}
ql_dbg(ql_dbg_mbx, vha, 0x1197, "%s: %8phC done\n",
__func__, pd->port_name);
seq_printf(s, "%8phC %02x%02x%02x %d\n",
pd->port_name, pd->port_id[0],
pd->port_id[1], pd->port_id[2],
loop_id);
done_free_sp:
if (pd)
dma_pool_free(ha->s_dma_pool, pd, pd_dma);
done:
return rval;
}
/*
* qla24xx_gpdb_wait
* NOTE: Do not call this routine from DPC thread


@ -242,9 +242,11 @@ EXPORT_SYMBOL(scsi_change_queue_depth);
* specific SCSI device to determine if and when there is a
* need to adjust the queue depth on the device.
*
* Returns: 0 - No change needed, >0 - Adjust queue depth to this new depth,
* -1 - Drop back to untagged operation using host->cmd_per_lun
* as the untagged command depth
* Returns:
* * 0 - No change needed
* * >0 - Adjust queue depth to this new depth
* * -1 - Drop back to untagged operation using host->cmd_per_lun as the
* untagged command depth
*
* Lock Status: None held on entry
*
@ -708,20 +710,15 @@ void scsi_cdl_check(struct scsi_device *sdev)
int scsi_cdl_enable(struct scsi_device *sdev, bool enable)
{
char buf[64];
bool is_ata;
int ret;
if (!sdev->cdl_supported)
return -EOPNOTSUPP;
rcu_read_lock();
is_ata = rcu_dereference(sdev->vpd_pg89);
rcu_read_unlock();
/*
* For ATA devices, CDL needs to be enabled with a SET FEATURES command.
*/
if (is_ata) {
if (sdev->is_ata) {
struct scsi_mode_data data;
struct scsi_sense_hdr sshdr;
char *buf_data;


@ -8770,7 +8770,7 @@ static int sdebug_add_store(void)
dif_size = sdebug_store_sectors * sizeof(struct t10_pi_tuple);
sip->dif_storep = vmalloc(dif_size);
pr_info("dif_storep %u bytes @ %pK\n", dif_size,
pr_info("dif_storep %u bytes @ %p\n", dif_size,
sip->dif_storep);
if (!sip->dif_storep) {


@ -269,17 +269,12 @@ static struct {
static struct scsi_dev_info_list_table *scsi_devinfo_lookup_by_key(int key)
{
struct scsi_dev_info_list_table *devinfo_table;
int found = 0;
list_for_each_entry(devinfo_table, &scsi_dev_info_list, node)
if (devinfo_table->key == key) {
found = 1;
break;
}
if (!found)
return ERR_PTR(-EINVAL);
if (devinfo_table->key == key)
return devinfo_table;
return devinfo_table;
return ERR_PTR(-EINVAL);
}
/*


@ -1843,7 +1843,7 @@ static blk_status_t scsi_queue_rq(struct blk_mq_hw_ctx *hctx,
* a function to initialize that data.
*/
if (shost->hostt->cmd_size && !shost->hostt->init_cmd_priv)
memset(cmd + 1, 0, shost->hostt->cmd_size);
memset(scsi_cmd_priv(cmd), 0, shost->hostt->cmd_size);
if (!(req->rq_flags & RQF_DONTPREP)) {
ret = scsi_prepare_cmd(req);


@ -909,7 +909,8 @@ static int scsi_add_lun(struct scsi_device *sdev, unsigned char *inq_result,
sdev->model = (char *) (sdev->inquiry + 16);
sdev->rev = (char *) (sdev->inquiry + 32);
if (strncmp(sdev->vendor, "ATA ", 8) == 0) {
sdev->is_ata = strncmp(sdev->vendor, "ATA ", 8) == 0;
if (sdev->is_ata) {
/*
* sata emulation layer device. This is a hack to work around
* the SATL power management specifications which state that


@ -446,13 +446,6 @@ static int fc_host_setup(struct transport_container *tc, struct device *dev,
return -ENOMEM;
fc_host->dev_loss_tmo = fc_dev_loss_tmo;
fc_host->devloss_work_q = alloc_workqueue("fc_dl_%d", 0, 0,
shost->host_no);
if (!fc_host->devloss_work_q) {
destroy_workqueue(fc_host->work_q);
fc_host->work_q = NULL;
return -ENOMEM;
}
fc_bsg_hostadd(shost, fc_host);
/* ignore any bsg add error - we just can't do sgio */
@ -2814,6 +2807,7 @@ fc_flush_work(struct Scsi_Host *shost)
/**
* fc_queue_devloss_work - Schedule work for the fc_host devloss workqueue.
* @shost: Pointer to Scsi_Host bound to fc_host.
* @rport: rport associated with the devloss work
* @work: Work to queue for execution.
* @delay: jiffies to delay the work queuing
*
@ -2821,10 +2815,10 @@ fc_flush_work(struct Scsi_Host *shost)
* 1 on success / 0 already queued / < 0 for error
*/
static int
fc_queue_devloss_work(struct Scsi_Host *shost, struct delayed_work *work,
unsigned long delay)
fc_queue_devloss_work(struct Scsi_Host *shost, struct fc_rport *rport,
struct delayed_work *work, unsigned long delay)
{
if (unlikely(!fc_host_devloss_work_q(shost))) {
if (unlikely(!rport->devloss_work_q)) {
printk(KERN_ERR
"ERROR: FC host '%s' attempted to queue work, "
"when no workqueue created.\n", shost->hostt->name);
@ -2833,17 +2827,18 @@ fc_queue_devloss_work(struct Scsi_Host *shost, struct delayed_work *work,
return -EINVAL;
}
return queue_delayed_work(fc_host_devloss_work_q(shost), work, delay);
return queue_delayed_work(rport->devloss_work_q, work, delay);
}
/**
* fc_flush_devloss - Flush a fc_host's devloss workqueue.
* @shost: Pointer to Scsi_Host bound to fc_host.
* @rport: rport associated with the devloss work
*/
static void
fc_flush_devloss(struct Scsi_Host *shost)
fc_flush_devloss(struct Scsi_Host *shost, struct fc_rport *rport)
{
if (!fc_host_devloss_work_q(shost)) {
if (unlikely(!rport->devloss_work_q)) {
printk(KERN_ERR
"ERROR: FC host '%s' attempted to flush work, "
"when no workqueue created.\n", shost->hostt->name);
@ -2851,7 +2846,7 @@ fc_flush_devloss(struct Scsi_Host *shost)
return;
}
flush_workqueue(fc_host_devloss_work_q(shost));
flush_workqueue(rport->devloss_work_q);
}
@ -2913,13 +2908,6 @@ fc_remove_host(struct Scsi_Host *shost)
fc_host->work_q = NULL;
destroy_workqueue(work_q);
}
/* flush all devloss work items, then kill it */
if (fc_host->devloss_work_q) {
work_q = fc_host->devloss_work_q;
fc_host->devloss_work_q = NULL;
destroy_workqueue(work_q);
}
}
EXPORT_SYMBOL(fc_remove_host);
@ -2967,6 +2955,7 @@ fc_rport_final_delete(struct work_struct *work)
struct device *dev = &rport->dev;
struct Scsi_Host *shost = rport_to_shost(rport);
struct fc_internal *i = to_fc_internal(shost->transportt);
struct workqueue_struct *work_q;
unsigned long flags;
int do_callback = 0;
@ -2988,9 +2977,9 @@ fc_rport_final_delete(struct work_struct *work)
if (rport->flags & FC_RPORT_DEVLOSS_PENDING) {
spin_unlock_irqrestore(shost->host_lock, flags);
if (!cancel_delayed_work(&rport->fail_io_work))
fc_flush_devloss(shost);
fc_flush_devloss(shost, rport);
if (!cancel_delayed_work(&rport->dev_loss_work))
fc_flush_devloss(shost);
fc_flush_devloss(shost, rport);
cancel_work_sync(&rport->scan_work);
spin_lock_irqsave(shost->host_lock, flags);
rport->flags &= ~FC_RPORT_DEVLOSS_PENDING;
@ -3021,6 +3010,12 @@ fc_rport_final_delete(struct work_struct *work)
fc_bsg_remove(rport->rqst_q);
if (rport->devloss_work_q) {
work_q = rport->devloss_work_q;
rport->devloss_work_q = NULL;
destroy_workqueue(work_q);
}
transport_remove_device(dev);
device_del(dev);
transport_destroy_device(dev);
@ -3093,6 +3088,22 @@ fc_remote_port_create(struct Scsi_Host *shost, int channel,
spin_unlock_irqrestore(shost->host_lock, flags);
rport->devloss_work_q = alloc_workqueue("fc_dl_%d_%d", 0, 0,
shost->host_no, rport->number);
if (!rport->devloss_work_q) {
printk(KERN_ERR "FC Remote Port alloc_workqueue failed\n");
/*
 * Note that device_initialize() / get_device() has not been called
 * yet. The incremented rport->number cannot be reclaimed because
 * host_lock was released.
 */
spin_lock_irqsave(shost->host_lock, flags);
list_del(&rport->peers);
scsi_host_put(shost); /* for fc_host->rport list */
spin_unlock_irqrestore(shost->host_lock, flags);
kfree(rport);
return NULL;
}
dev = &rport->dev;
device_initialize(dev); /* takes self reference */
dev->parent = get_device(&shost->shost_gendev); /* parent reference */
@ -3255,9 +3266,9 @@ fc_remote_port_add(struct Scsi_Host *shost, int channel,
* be checked and will NOOP the function.
*/
if (!cancel_delayed_work(&rport->fail_io_work))
fc_flush_devloss(shost);
fc_flush_devloss(shost, rport);
if (!cancel_delayed_work(&rport->dev_loss_work))
fc_flush_devloss(shost);
fc_flush_devloss(shost, rport);
spin_lock_irqsave(shost->host_lock, flags);
@ -3451,11 +3462,12 @@ fc_remote_port_delete(struct fc_rport *rport)
/* see if we need to kill io faster than waiting for device loss */
if ((rport->fast_io_fail_tmo != -1) &&
(rport->fast_io_fail_tmo < timeout))
fc_queue_devloss_work(shost, &rport->fail_io_work,
rport->fast_io_fail_tmo * HZ);
fc_queue_devloss_work(shost, rport, &rport->fail_io_work,
rport->fast_io_fail_tmo * HZ);
/* cap the length the devices can be blocked until they are deleted */
fc_queue_devloss_work(shost, &rport->dev_loss_work, timeout * HZ);
fc_queue_devloss_work(shost, rport, &rport->dev_loss_work,
timeout * HZ);
}
EXPORT_SYMBOL(fc_remote_port_delete);
@ -3514,9 +3526,9 @@ fc_remote_port_rolechg(struct fc_rport *rport, u32 roles)
* transaction.
*/
if (!cancel_delayed_work(&rport->fail_io_work))
fc_flush_devloss(shost);
fc_flush_devloss(shost, rport);
if (!cancel_delayed_work(&rport->dev_loss_work))
fc_flush_devloss(shost);
fc_flush_devloss(shost, rport);
spin_lock_irqsave(shost->host_lock, flags);
rport->flags &= ~(FC_RPORT_FAST_FAIL_TIMEDOUT |


@ -3464,19 +3464,14 @@ static void sd_read_write_same(struct scsi_disk *sdkp, unsigned char *buffer)
}
if (scsi_report_opcode(sdev, buffer, SD_BUF_SIZE, INQUIRY, 0) < 0) {
struct scsi_vpd *vpd;
sdev->no_report_opcodes = 1;
/* Disable WRITE SAME if REPORT SUPPORTED OPERATION
* CODES is unsupported and the device has an ATA
* Information VPD page (SAT).
/*
* Disable WRITE SAME if REPORT SUPPORTED OPERATION CODES is
* unsupported and this is an ATA device.
*/
rcu_read_lock();
vpd = rcu_dereference(sdev->vpd_pg89);
if (vpd)
if (sdev->is_ata)
sdev->no_write_same = 1;
rcu_read_unlock();
}
if (scsi_report_opcode(sdev, buffer, SD_BUF_SIZE, WRITE_SAME_16, 0) == 1)


@ -87,6 +87,23 @@ static const char *ufs_wb_resize_status_to_string(enum wb_resize_status status)
}
}
static const char * const ufs_hid_states[] = {
[HID_IDLE] = "idle",
[ANALYSIS_IN_PROGRESS] = "analysis_in_progress",
[DEFRAG_REQUIRED] = "defrag_required",
[DEFRAG_IN_PROGRESS] = "defrag_in_progress",
[DEFRAG_COMPLETED] = "defrag_completed",
[DEFRAG_NOT_REQUIRED] = "defrag_not_required",
};
static const char *ufs_hid_state_to_string(enum ufs_hid_state state)
{
if (state < NUM_UFS_HID_STATES)
return ufs_hid_states[state];
return "unknown";
}
static const char *ufshcd_uic_link_state_to_string(
enum uic_link_state state)
{
@ -1763,6 +1780,178 @@ static const struct attribute_group ufs_sysfs_attributes_group = {
.attrs = ufs_sysfs_attributes,
};
static int hid_query_attr(struct ufs_hba *hba, enum query_opcode opcode,
enum attr_idn idn, u32 *attr_val)
{
int ret;
down(&hba->host_sem);
if (!ufshcd_is_user_access_allowed(hba)) {
up(&hba->host_sem);
return -EBUSY;
}
ufshcd_rpm_get_sync(hba);
ret = ufshcd_query_attr(hba, opcode, idn, 0, 0, attr_val);
ufshcd_rpm_put_sync(hba);
up(&hba->host_sem);
return ret;
}
static ssize_t analysis_trigger_store(struct device *dev,
struct device_attribute *attr, const char *buf, size_t count)
{
struct ufs_hba *hba = dev_get_drvdata(dev);
int mode;
int ret;
if (sysfs_streq(buf, "enable"))
mode = HID_ANALYSIS_ENABLE;
else if (sysfs_streq(buf, "disable"))
mode = HID_ANALYSIS_AND_DEFRAG_DISABLE;
else
return -EINVAL;
ret = hid_query_attr(hba, UPIU_QUERY_OPCODE_WRITE_ATTR,
QUERY_ATTR_IDN_HID_DEFRAG_OPERATION, &mode);
return ret < 0 ? ret : count;
}
static DEVICE_ATTR_WO(analysis_trigger);
static ssize_t defrag_trigger_store(struct device *dev,
struct device_attribute *attr, const char *buf, size_t count)
{
struct ufs_hba *hba = dev_get_drvdata(dev);
int mode;
int ret;
if (sysfs_streq(buf, "enable"))
mode = HID_ANALYSIS_AND_DEFRAG_ENABLE;
else if (sysfs_streq(buf, "disable"))
mode = HID_ANALYSIS_AND_DEFRAG_DISABLE;
else
return -EINVAL;
ret = hid_query_attr(hba, UPIU_QUERY_OPCODE_WRITE_ATTR,
QUERY_ATTR_IDN_HID_DEFRAG_OPERATION, &mode);
return ret < 0 ? ret : count;
}
static DEVICE_ATTR_WO(defrag_trigger);
static ssize_t fragmented_size_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct ufs_hba *hba = dev_get_drvdata(dev);
u32 value;
int ret;
ret = hid_query_attr(hba, UPIU_QUERY_OPCODE_READ_ATTR,
QUERY_ATTR_IDN_HID_AVAILABLE_SIZE, &value);
if (ret)
return ret;
return sysfs_emit(buf, "%u\n", value);
}
static DEVICE_ATTR_RO(fragmented_size);
static ssize_t defrag_size_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct ufs_hba *hba = dev_get_drvdata(dev);
u32 value;
int ret;
ret = hid_query_attr(hba, UPIU_QUERY_OPCODE_READ_ATTR,
QUERY_ATTR_IDN_HID_SIZE, &value);
if (ret)
return ret;
return sysfs_emit(buf, "%u\n", value);
}
static ssize_t defrag_size_store(struct device *dev,
struct device_attribute *attr, const char *buf, size_t count)
{
struct ufs_hba *hba = dev_get_drvdata(dev);
u32 value;
int ret;
if (kstrtou32(buf, 0, &value))
return -EINVAL;
ret = hid_query_attr(hba, UPIU_QUERY_OPCODE_WRITE_ATTR,
QUERY_ATTR_IDN_HID_SIZE, &value);
return ret < 0 ? ret : count;
}
static DEVICE_ATTR_RW(defrag_size);
static ssize_t progress_ratio_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct ufs_hba *hba = dev_get_drvdata(dev);
u32 value;
int ret;
ret = hid_query_attr(hba, UPIU_QUERY_OPCODE_READ_ATTR,
QUERY_ATTR_IDN_HID_PROGRESS_RATIO, &value);
if (ret)
return ret;
return sysfs_emit(buf, "%u\n", value);
}
static DEVICE_ATTR_RO(progress_ratio);
static ssize_t state_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct ufs_hba *hba = dev_get_drvdata(dev);
u32 value;
int ret;
ret = hid_query_attr(hba, UPIU_QUERY_OPCODE_READ_ATTR,
QUERY_ATTR_IDN_HID_STATE, &value);
if (ret)
return ret;
return sysfs_emit(buf, "%s\n", ufs_hid_state_to_string(value));
}
static DEVICE_ATTR_RO(state);
static struct attribute *ufs_sysfs_hid[] = {
&dev_attr_analysis_trigger.attr,
&dev_attr_defrag_trigger.attr,
&dev_attr_fragmented_size.attr,
&dev_attr_defrag_size.attr,
&dev_attr_progress_ratio.attr,
&dev_attr_state.attr,
NULL,
};
static umode_t ufs_sysfs_hid_is_visible(struct kobject *kobj,
struct attribute *attr, int n)
{
struct device *dev = container_of(kobj, struct device, kobj);
struct ufs_hba *hba = dev_get_drvdata(dev);
return hba->dev_info.hid_sup ? attr->mode : 0;
}
static const struct attribute_group ufs_sysfs_hid_group = {
.name = "hid",
.attrs = ufs_sysfs_hid,
.is_visible = ufs_sysfs_hid_is_visible,
};
static const struct attribute_group *ufs_sysfs_groups[] = {
&ufs_sysfs_default_group,
&ufs_sysfs_capabilities_group,
@ -1777,6 +1966,7 @@ static const struct attribute_group *ufs_sysfs_groups[] = {
&ufs_sysfs_string_descriptors_group,
&ufs_sysfs_flags_group,
&ufs_sysfs_attributes_group,
&ufs_sysfs_hid_group,
NULL,
};


@ -2566,7 +2566,7 @@ ufshcd_wait_for_uic_cmd(struct ufs_hba *hba, struct uic_command *uic_cmd)
* @hba: per adapter instance
* @uic_cmd: UIC command
*
* Return: 0 only if success.
* Return: 0 if successful; < 0 upon failure.
*/
static int
__ufshcd_send_uic_cmd(struct ufs_hba *hba, struct uic_command *uic_cmd)
@ -2826,8 +2826,6 @@ static void ufshcd_prepare_utp_query_req_upiu(struct ufs_hba *hba,
/* Copy the Descriptor */
if (query->request.upiu_req.opcode == UPIU_QUERY_OPCODE_WRITE_DESC)
memcpy(ucd_req_ptr + 1, query->descriptor, len);
memset(lrbp->ucd_rsp_ptr, 0, sizeof(struct utp_upiu_rsp));
}
static inline void ufshcd_prepare_utp_nop_upiu(struct ufshcd_lrb *lrbp)
@ -2840,8 +2838,6 @@ static inline void ufshcd_prepare_utp_nop_upiu(struct ufshcd_lrb *lrbp)
.transaction_code = UPIU_TRANSACTION_NOP_OUT,
.task_tag = lrbp->task_tag,
};
memset(lrbp->ucd_rsp_ptr, 0, sizeof(struct utp_upiu_rsp));
}
/**
@ -2867,6 +2863,8 @@ static int ufshcd_compose_devman_upiu(struct ufs_hba *hba,
else
ret = -EINVAL;
memset(lrbp->ucd_rsp_ptr, 0, sizeof(struct utp_upiu_rsp));
return ret;
}
@ -3074,6 +3072,9 @@ static void ufshcd_setup_dev_cmd(struct ufs_hba *hba, struct ufshcd_lrb *lrbp,
hba->dev_cmd.type = cmd_type;
}
/*
* Return: 0 upon success; < 0 upon failure.
*/
static int ufshcd_compose_dev_cmd(struct ufs_hba *hba,
struct ufshcd_lrb *lrbp, enum dev_cmd_type cmd_type, int tag)
{
@ -3186,9 +3187,13 @@ ufshcd_dev_cmd_completion(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
break;
}
WARN_ONCE(err > 0, "Incorrect return value %d > 0\n", err);
return err;
}
/*
* Return: 0 upon success; < 0 upon failure.
*/
static int ufshcd_wait_for_dev_cmd(struct ufs_hba *hba,
struct ufshcd_lrb *lrbp, int max_timeout)
{
@ -3263,6 +3268,7 @@ retry:
}
}
WARN_ONCE(err > 0, "Incorrect return value %d > 0\n", err);
return err;
}
@ -3280,6 +3286,9 @@ static void ufshcd_dev_man_unlock(struct ufs_hba *hba)
ufshcd_release(hba);
}
/*
* Return: 0 upon success; < 0 upon failure.
*/
static int ufshcd_issue_dev_cmd(struct ufs_hba *hba, struct ufshcd_lrb *lrbp,
const u32 tag, int timeout)
{
@ -3367,6 +3376,7 @@ static int ufshcd_query_flag_retry(struct ufs_hba *hba,
dev_err(hba->dev,
"%s: query flag, opcode %d, idn %d, failed with error %d after %d retries\n",
__func__, opcode, idn, ret, retries);
WARN_ONCE(ret > 0, "Incorrect return value %d > 0\n", ret);
return ret;
}
@ -3378,7 +3388,7 @@ static int ufshcd_query_flag_retry(struct ufs_hba *hba,
* @index: flag index to access
* @flag_res: the flag value after the query request completes
*
* Return: 0 for success, non-zero in case of failure.
* Return: 0 for success; < 0 upon failure.
*/
int ufshcd_query_flag(struct ufs_hba *hba, enum query_opcode opcode,
enum flag_idn idn, u8 index, bool *flag_res)
@ -3434,6 +3444,7 @@ int ufshcd_query_flag(struct ufs_hba *hba, enum query_opcode opcode,
out_unlock:
ufshcd_dev_man_unlock(hba);
WARN_ONCE(err > 0, "Incorrect return value %d > 0\n", err);
return err;
}
@ -3446,7 +3457,7 @@ out_unlock:
* @selector: selector field
* @attr_val: the attribute value after the query request completes
*
* Return: 0 for success, non-zero in case of failure.
* Return: 0 upon success; < 0 upon failure.
*/
int ufshcd_query_attr(struct ufs_hba *hba, enum query_opcode opcode,
enum attr_idn idn, u8 index, u8 selector, u32 *attr_val)
@ -3495,6 +3506,7 @@ int ufshcd_query_attr(struct ufs_hba *hba, enum query_opcode opcode,
out_unlock:
ufshcd_dev_man_unlock(hba);
WARN_ONCE(err > 0, "Incorrect return value %d > 0\n", err);
return err;
}
@ -3509,7 +3521,7 @@ out_unlock:
* @attr_val: the attribute value after the query request
* completes
*
* Return: 0 for success, non-zero in case of failure.
* Return: 0 for success; < 0 upon failure.
*/
int ufshcd_query_attr_retry(struct ufs_hba *hba,
enum query_opcode opcode, enum attr_idn idn, u8 index, u8 selector,
@ -3532,9 +3544,13 @@ int ufshcd_query_attr_retry(struct ufs_hba *hba,
dev_err(hba->dev,
"%s: query attribute, idn %d, failed with error %d after %d retries\n",
__func__, idn, ret, QUERY_REQ_RETRIES);
WARN_ONCE(ret > 0, "Incorrect return value %d > 0\n", ret);
return ret;
}
/*
* Return: 0 if successful; < 0 upon failure.
*/
static int __ufshcd_query_descriptor(struct ufs_hba *hba,
enum query_opcode opcode, enum desc_idn idn, u8 index,
u8 selector, u8 *desc_buf, int *buf_len)
@ -3592,6 +3608,7 @@ static int __ufshcd_query_descriptor(struct ufs_hba *hba,
out_unlock:
hba->dev_cmd.query.descriptor = NULL;
ufshcd_dev_man_unlock(hba);
WARN_ONCE(err > 0, "Incorrect return value %d > 0\n", err);
return err;
}
@ -3608,7 +3625,7 @@ out_unlock:
* The buf_len parameter will contain, on return, the length parameter
* received on the response.
*
* Return: 0 for success, non-zero in case of failure.
* Return: 0 for success; < 0 upon failure.
*/
int ufshcd_query_descriptor_retry(struct ufs_hba *hba,
enum query_opcode opcode,
@ -3626,6 +3643,7 @@ int ufshcd_query_descriptor_retry(struct ufs_hba *hba,
break;
}
WARN_ONCE(err > 0, "Incorrect return value %d > 0\n", err);
return err;
}
@@ -3638,7 +3656,7 @@ int ufshcd_query_descriptor_retry(struct ufs_hba *hba,
* @param_read_buf: pointer to buffer where parameter would be read
* @param_size: sizeof(param_read_buf)
*
-* Return: 0 in case of success, non-zero otherwise.
+* Return: 0 in case of success; < 0 upon failure.
*/
int ufshcd_read_desc_param(struct ufs_hba *hba,
enum desc_idn desc_id,
@@ -3705,6 +3723,7 @@ int ufshcd_read_desc_param(struct ufs_hba *hba,
out:
if (is_kmalloc)
kfree(desc_buf);
+WARN_ONCE(ret > 0, "Incorrect return value %d > 0\n", ret);
return ret;
}
@@ -3818,7 +3837,7 @@ out:
* @param_read_buf: pointer to buffer where parameter would be read
* @param_size: sizeof(param_read_buf)
*
-* Return: 0 in case of success, non-zero otherwise.
+* Return: 0 in case of success; < 0 upon failure.
*/
static inline int ufshcd_read_unit_desc_param(struct ufs_hba *hba,
int lun,
@@ -4253,6 +4272,30 @@ out:
}
EXPORT_SYMBOL_GPL(ufshcd_dme_get_attr);
+/**
+ * ufshcd_dme_rmw - read, modify and write a DME attribute
+ * @hba: per adapter instance
+ * @mask: indicates which bits to clear from the value that has been read
+ * @val: actual value to write
+ * @attr: DME attribute
+ *
+ * Return: 0 upon success; < 0 upon failure.
+ */
+int ufshcd_dme_rmw(struct ufs_hba *hba, u32 mask,
+u32 val, u32 attr)
+{
+u32 cfg = 0;
+int err;
+
+err = ufshcd_dme_get(hba, UIC_ARG_MIB(attr), &cfg);
+if (err)
+return err;
+
+cfg &= ~mask;
+cfg |= (val & mask);
+
+return ufshcd_dme_set(hba, UIC_ARG_MIB(attr), cfg);
+}
+EXPORT_SYMBOL_GPL(ufshcd_dme_rmw);
/**
* ufshcd_uic_pwr_ctrl - executes UIC commands (which affects the link power
* state) and waits for it to take effect.
@@ -4796,7 +4839,7 @@ out:
* 3. Program UTRL and UTMRL base address
* 4. Configure run-stop-registers
*
-* Return: 0 on success, non-zero value on failure.
+* Return: 0 if successful; < 0 upon failure.
*/
int ufshcd_make_hba_operational(struct ufs_hba *hba)
{
@@ -8420,6 +8463,10 @@ static int ufs_get_device_desc(struct ufs_hba *hba)
dev_info->rtt_cap = desc_buf[DEVICE_DESC_PARAM_RTT_CAP];
+dev_info->hid_sup = get_unaligned_be32(desc_buf +
+DEVICE_DESC_PARAM_EXT_UFS_FEATURE_SUP) &
+UFS_DEV_HID_SUPPORT;
model_index = desc_buf[DEVICE_DESC_PARAM_PRDCT_NAME];
err = ufshcd_read_string_desc(hba, model_index,


@@ -1110,8 +1110,8 @@ static int exynos_ufs_post_link(struct ufs_hba *hba)
hci_writel(ufs, val, HCI_TXPRDT_ENTRY_SIZE);
hci_writel(ufs, ilog2(DATA_UNIT_SIZE), HCI_RXPRDT_ENTRY_SIZE);
-hci_writel(ufs, (1 << hba->nutrs) - 1, HCI_UTRL_NEXUS_TYPE);
-hci_writel(ufs, (1 << hba->nutmrs) - 1, HCI_UTMRL_NEXUS_TYPE);
+hci_writel(ufs, BIT(hba->nutrs) - 1, HCI_UTRL_NEXUS_TYPE);
+hci_writel(ufs, BIT(hba->nutmrs) - 1, HCI_UTMRL_NEXUS_TYPE);
hci_writel(ufs, 0xf, HCI_AXIDMA_RWDATA_BURST_LEN);
if (ufs->opts & EXYNOS_UFS_OPT_SKIP_CONNECTION_ESTAB)


@@ -552,11 +552,32 @@ out_disable_phy:
*/
static void ufs_qcom_enable_hw_clk_gating(struct ufs_hba *hba)
{
+int err;
+
/* Enable UTP internal clock gating */
ufshcd_rmwl(hba, REG_UFS_CFG2_CGC_EN_ALL, REG_UFS_CFG2_CGC_EN_ALL,
REG_UFS_CFG2);
/* Ensure that HW clock gating is enabled before next operations */
ufshcd_readl(hba, REG_UFS_CFG2);
+
+/* Enable Unipro internal clock gating */
+err = ufshcd_dme_rmw(hba, DL_VS_CLK_CFG_MASK,
+DL_VS_CLK_CFG_MASK, DL_VS_CLK_CFG);
+if (err)
+goto out;
+
+err = ufshcd_dme_rmw(hba, PA_VS_CLK_CFG_REG_MASK,
+PA_VS_CLK_CFG_REG_MASK, PA_VS_CLK_CFG_REG);
+if (err)
+goto out;
+
+err = ufshcd_dme_rmw(hba, DME_VS_CORE_CLK_CTRL_DME_HW_CGC_EN,
+DME_VS_CORE_CLK_CTRL_DME_HW_CGC_EN,
+DME_VS_CORE_CLK_CTRL);
+out:
+if (err)
+dev_err(hba->dev, "failed to enable HW clock gating: %d\n", err);
}
static int ufs_qcom_hce_enable_notify(struct ufs_hba *hba,
@@ -2109,8 +2130,7 @@ static int ufs_qcom_config_esi(struct ufs_hba *hba)
retain_and_null_ptr(qi);
-if (host->hw_ver.major == 6 && host->hw_ver.minor == 0 &&
-host->hw_ver.step == 0) {
+if (host->hw_ver.major >= 6) {
ufshcd_rmwl(hba, ESI_VEC_MASK, FIELD_PREP(ESI_VEC_MASK, MAX_ESI_VEC - 1),
REG_UFS_CFG3);
}


@@ -24,6 +24,15 @@
#define UFS_QCOM_LIMIT_HS_RATE PA_HS_MODE_B
+/* bit and mask definitions for PA_VS_CLK_CFG_REG attribute */
+#define PA_VS_CLK_CFG_REG 0x9004
+#define PA_VS_CLK_CFG_REG_MASK GENMASK(8, 0)
+
+/* bit and mask definitions for DL_VS_CLK_CFG attribute */
+#define DL_VS_CLK_CFG 0xA00B
+#define DL_VS_CLK_CFG_MASK GENMASK(9, 0)
+
+#define DME_VS_CORE_CLK_CTRL_DME_HW_CGC_EN BIT(9)
/* QCOM UFS host controller vendor specific registers */
enum {
REG_UFS_SYS1CLK_1US = 0xC0,


@@ -184,6 +184,11 @@ struct scsi_device {
*/
unsigned force_runtime_start_on_system_start:1;
+
+/*
+ * Set if the device is an ATA device.
+ */
+unsigned is_ata:1;
unsigned removable:1;
unsigned changed:1; /* Data invalid due to media change */
unsigned busy:1; /* Used to prevent races */


@@ -383,6 +383,8 @@ struct fc_rport { /* aka fc_starget_attrs */
struct work_struct stgt_delete_work;
struct work_struct rport_delete_work;
struct request_queue *rqst_q; /* bsg support */
+
+struct workqueue_struct *devloss_work_q;
} __attribute__((aligned(sizeof(unsigned long))));
/* bit field values for struct fc_rport "flags" field: */
@@ -576,7 +578,6 @@ struct fc_host_attrs {
/* work queues for rport state manipulation */
struct workqueue_struct *work_q;
-struct workqueue_struct *devloss_work_q;
/* bsg support */
struct request_queue *rqst_q;
@@ -654,8 +655,6 @@ struct fc_host_attrs {
(((struct fc_host_attrs *)(x)->shost_data)->npiv_vports_inuse)
#define fc_host_work_q(x) \
(((struct fc_host_attrs *)(x)->shost_data)->work_q)
-#define fc_host_devloss_work_q(x) \
-(((struct fc_host_attrs *)(x)->shost_data)->devloss_work_q)
#define fc_host_dev_loss_tmo(x) \
(((struct fc_host_attrs *)(x)->shost_data)->dev_loss_tmo)
#define fc_host_max_ct_payload(x) \


@@ -200,6 +200,14 @@ TRACE_EVENT(scsi_dispatch_cmd_start,
__print_hex(__get_dynamic_array(cmnd), __entry->cmd_len))
);
+
+#define scsi_rtn_name(result) { result, #result }
+#define show_rtn_name(val) \
+__print_symbolic(val, \
+scsi_rtn_name(SCSI_MLQUEUE_HOST_BUSY), \
+scsi_rtn_name(SCSI_MLQUEUE_DEVICE_BUSY), \
+scsi_rtn_name(SCSI_MLQUEUE_EH_RETRY), \
+scsi_rtn_name(SCSI_MLQUEUE_TARGET_BUSY))
TRACE_EVENT(scsi_dispatch_cmd_error,
TP_PROTO(struct scsi_cmnd *cmd, int rtn),
@@ -240,14 +248,15 @@ TRACE_EVENT(scsi_dispatch_cmd_error,
TP_printk("host_no=%u channel=%u id=%u lun=%u data_sgl=%u prot_sgl=%u" \
" prot_op=%s driver_tag=%d scheduler_tag=%d cmnd=(%s %s raw=%s)" \
-" rtn=%d",
+" rtn=%s",
__entry->host_no, __entry->channel, __entry->id,
__entry->lun, __entry->data_sglen, __entry->prot_sglen,
show_prot_op_name(__entry->prot_op), __entry->driver_tag,
__entry->scheduler_tag, show_opcode_name(__entry->opcode),
__parse_cdb(__get_dynamic_array(cmnd), __entry->cmd_len),
__print_hex(__get_dynamic_array(cmnd), __entry->cmd_len),
-__entry->rtn)
+show_rtn_name(__entry->rtn)
)
);
DECLARE_EVENT_CLASS(scsi_cmd_done_timeout_template,


@@ -182,6 +182,11 @@ enum attr_idn {
QUERY_ATTR_IDN_CURR_WB_BUFF_SIZE = 0x1F,
QUERY_ATTR_IDN_TIMESTAMP = 0x30,
QUERY_ATTR_IDN_DEV_LVL_EXCEPTION_ID = 0x34,
+QUERY_ATTR_IDN_HID_DEFRAG_OPERATION = 0x35,
+QUERY_ATTR_IDN_HID_AVAILABLE_SIZE = 0x36,
+QUERY_ATTR_IDN_HID_SIZE = 0x37,
+QUERY_ATTR_IDN_HID_PROGRESS_RATIO = 0x38,
+QUERY_ATTR_IDN_HID_STATE = 0x39,
QUERY_ATTR_IDN_WB_BUF_RESIZE_HINT = 0x3C,
QUERY_ATTR_IDN_WB_BUF_RESIZE_EN = 0x3D,
QUERY_ATTR_IDN_WB_BUF_RESIZE_STATUS = 0x3E,
@@ -401,6 +406,7 @@ enum {
UFS_DEV_HPB_SUPPORT = BIT(7),
UFS_DEV_WRITE_BOOSTER_SUP = BIT(8),
UFS_DEV_LVL_EXCEPTION_SUP = BIT(12),
+UFS_DEV_HID_SUPPORT = BIT(13),
};
#define UFS_DEV_HPB_SUPPORT_VERSION 0x310
@@ -466,6 +472,24 @@ enum ufs_ref_clk_freq {
REF_CLK_FREQ_INVAL = -1,
};
+
+/* bDefragOperation attribute values */
+enum ufs_hid_defrag_operation {
+HID_ANALYSIS_AND_DEFRAG_DISABLE = 0,
+HID_ANALYSIS_ENABLE = 1,
+HID_ANALYSIS_AND_DEFRAG_ENABLE = 2,
+};
+
+/* bHIDState attribute values */
+enum ufs_hid_state {
+HID_IDLE = 0,
+ANALYSIS_IN_PROGRESS = 1,
+DEFRAG_REQUIRED = 2,
+DEFRAG_IN_PROGRESS = 3,
+DEFRAG_COMPLETED = 4,
+DEFRAG_NOT_REQUIRED = 5,
+NUM_UFS_HID_STATES = 6,
+};
/* bWriteBoosterBufferResizeEn attribute */
enum wb_resize_en {
WB_RESIZE_EN_IDLE = 0,
@@ -625,6 +649,8 @@ struct ufs_dev_info {
u32 rtc_update_period;
u8 rtt_cap; /* bDeviceRTTCap */
+
+bool hid_sup;
};
/*


@@ -1480,6 +1480,7 @@ void ufshcd_resume_complete(struct device *dev);
bool ufshcd_is_hba_active(struct ufs_hba *hba);
void ufshcd_pm_qos_init(struct ufs_hba *hba);
void ufshcd_pm_qos_exit(struct ufs_hba *hba);
+int ufshcd_dme_rmw(struct ufs_hba *hba, u32 mask, u32 val, u32 attr);
/* Wrapper functions for safely calling variant operations */
static inline int ufshcd_vops_init(struct ufs_hba *hba)