Merge tag 'v6.18-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Pull crypto updates from Herbert Xu:

  Drivers:
   - Add ciphertext hiding support to ccp
   - Add hashjoin, gather and UDMA data move features to hisilicon
   - Add lz4 and lz77_only to hisilicon
   - Add xilinx hwrng driver
   - Add ti driver with ecb/cbc aes support
   - Add ring buffer idle and command queue telemetry for GEN6 in qat

  Others:
   - Use rcu_dereference_all to stop false alarms in rhashtable
   - Fix CPU number wraparound in padata

* tag 'v6.18-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (78 commits)
  dt-bindings: rng: hisi-rng: convert to DT schema
  crypto: doc - Add explicit title heading to API docs
  hwrng: ks-sa - fix division by zero in ks_sa_rng_init
  KEYS: X.509: Fix Basic Constraints CA flag parsing
  crypto: anubis - simplify return statement in anubis_mod_init
  crypto: hisilicon/qm - set NULL to qm->debug.qm_diff_regs
  crypto: hisilicon/qm - clear all VF configurations in the hardware
  crypto: hisilicon - enable error reporting again
  crypto: hisilicon/qm - mask axi error before memory init
  crypto: hisilicon/qm - invalidate queues in use
  crypto: qat - Return pointer directly in adf_ctl_alloc_resources
  crypto: aspeed - Fix dma_unmap_sg() direction
  rhashtable: Use rcu_dereference_all and rcu_dereference_all_check
  crypto: comp - Use same definition of context alloc and free ops
  crypto: omap - convert from tasklet to BH workqueue
  crypto: qat - Replace kzalloc() + copy_from_user() with memdup_user()
  crypto: caam - double the entropy delay interval for retry
  padata: WQ_PERCPU added to alloc_workqueue users
  padata: replace use of system_unbound_wq with system_dfl_wq
  crypto: cryptd - WQ_PERCPU added to alloc_workqueue users
  ...
commit 908057d185
@@ -57,6 +57,7 @@ Description:	(RO) Reports device telemetry counters.
 		gp_lat_acc_avg		average get to put latency [ns]
 		bw_in			PCIe, write bandwidth [Mbps]
 		bw_out			PCIe, read bandwidth [Mbps]
+		re_acc_avg		average ring empty time [ns]
 		at_page_req_lat_avg	Address Translator(AT), average page
 					request latency [ns]
 		at_trans_lat_avg	AT, average page translation latency [ns]
@@ -85,6 +86,32 @@ Description:	(RO) Reports device telemetry counters.
 		exec_cph<N>		execution count of Cipher slice N
 		util_ath<N>		utilization of Authentication slice N [%]
 		exec_ath<N>		execution count of Authentication slice N
+		cmdq_wait_cnv<N>	wait time for cmdq N to get Compression and verify
+					slice ownership
+		cmdq_exec_cnv<N>	Compression and verify slice execution time while
+					owned by cmdq N
+		cmdq_drain_cnv<N>	time taken for cmdq N to release Compression and
+					verify slice ownership
+		cmdq_wait_dcprz<N>	wait time for cmdq N to get Decompression
+					slice N ownership
+		cmdq_exec_dcprz<N>	Decompression slice execution time while
+					owned by cmdq N
+		cmdq_drain_dcprz<N>	time taken for cmdq N to release Decompression
+					slice ownership
+		cmdq_wait_pke<N>	wait time for cmdq N to get PKE slice ownership
+		cmdq_exec_pke<N>	PKE slice execution time while owned by cmdq N
+		cmdq_drain_pke<N>	time taken for cmdq N to release PKE slice
+					ownership
+		cmdq_wait_ucs<N>	wait time for cmdq N to get UCS slice ownership
+		cmdq_exec_ucs<N>	UCS slice execution time while owned by cmdq N
+		cmdq_drain_ucs<N>	time taken for cmdq N to release UCS slice
+					ownership
+		cmdq_wait_ath<N>	wait time for cmdq N to get Authentication slice
+					ownership
+		cmdq_exec_ath<N>	Authentication slice execution time while owned
+					by cmdq N
+		cmdq_drain_ath<N>	time taken for cmdq N to release Authentication
+					slice ownership
 		=======================	========================================

 		The telemetry report file can be read with the following command::
@@ -1,3 +1,6 @@
+Authenticated Encryption With Associated Data (AEAD)
+====================================================
+
 Authenticated Encryption With Associated Data (AEAD) Algorithm Definitions
 --------------------------------------------------------------------------

@@ -1,3 +1,6 @@
+Asymmetric Cipher
+=================
+
 Asymmetric Cipher Algorithm Definitions
 ---------------------------------------

@@ -1,3 +1,6 @@
+Message Digest
+==============
+
 Message Digest Algorithm Definitions
 ------------------------------------

@@ -1,3 +1,6 @@
+Key-agreement Protocol Primitives (KPP)
+=======================================
+
 Key-agreement Protocol Primitives (KPP) Cipher Algorithm Definitions
 --------------------------------------------------------------------

@@ -1,3 +1,6 @@
+Random Number Generator (RNG)
+=============================
+
 Random Number Algorithm Definitions
 -----------------------------------

@@ -1,3 +1,6 @@
+Asymmetric Signature
+====================
+
 Asymmetric Signature Algorithm Definitions
 ------------------------------------------

@@ -1,3 +1,6 @@
+Symmetric Key Cipher
+====================
+
 Block Cipher Algorithm Definitions
 ----------------------------------
@@ -0,0 +1,50 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/crypto/ti,am62l-dthev2.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: K3 SoC DTHE V2 crypto module
+
+maintainers:
+  - T Pratham <t-pratham@ti.com>
+
+properties:
+  compatible:
+    enum:
+      - ti,am62l-dthev2
+
+  reg:
+    maxItems: 1
+
+  dmas:
+    items:
+      - description: AES Engine RX DMA Channel
+      - description: AES Engine TX DMA Channel
+      - description: SHA Engine TX DMA Channel
+
+  dma-names:
+    items:
+      - const: rx
+      - const: tx1
+      - const: tx2
+
+required:
+  - compatible
+  - reg
+  - dmas
+  - dma-names
+
+additionalProperties: false
+
+examples:
+  - |
+    crypto@40800000 {
+        compatible = "ti,am62l-dthev2";
+        reg = <0x40800000 0x10000>;
+
+        dmas = <&main_bcdma 0 0 0x4700 0>,
+               <&main_bcdma 0 0 0xc701 0>,
+               <&main_bcdma 0 0 0xc700 0>;
+        dma-names = "rx", "tx1", "tx2";
+    };
@@ -0,0 +1,35 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/crypto/xlnx,versal-trng.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Xilinx Versal True Random Number Generator Hardware Accelerator
+
+maintainers:
+  - Harsh Jain <h.jain@amd.com>
+  - Mounika Botcha <mounika.botcha@amd.com>
+
+description:
+  The Versal True Random Number Generator consists of Ring Oscillators as
+  entropy source and a deterministic CTR_DRBG random bit generator (DRBG).
+
+properties:
+  compatible:
+    const: xlnx,versal-trng
+
+  reg:
+    maxItems: 1
+
+required:
+  - reg
+
+additionalProperties: false
+
+examples:
+  - |
+    rng@f1230000 {
+        compatible = "xlnx,versal-trng";
+        reg = <0xf1230000 0x1000>;
+    };
+...
@@ -1,12 +0,0 @@
-Hisilicon Random Number Generator
-
-Required properties:
-- compatible : Should be "hisilicon,hip04-rng" or "hisilicon,hip05-rng"
-- reg : Offset and length of the register set of this block
-
-Example:
-
-rng@d1010000 {
-	compatible = "hisilicon,hip05-rng";
-	reg = <0xd1010000 0x100>;
-};
@@ -0,0 +1,32 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/rng/hisi-rng.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Hisilicon Random Number Generator
+
+maintainers:
+  - Kefeng Wang <wangkefeng.wang@huawei>
+
+properties:
+  compatible:
+    enum:
+      - hisilicon,hip04-rng
+      - hisilicon,hip05-rng
+
+  reg:
+    maxItems: 1
+
+required:
+  - compatible
+  - reg
+
+additionalProperties: false
+
+examples:
+  - |
+    rng@d1010000 {
+        compatible = "hisilicon,hip05-rng";
+        reg = <0xd1010000 0x100>;
+    };
MAINTAINERS | 13 +++++++++++++

@@ -25558,6 +25558,13 @@ S:	Odd Fixes
 F:	drivers/clk/ti/
 F:	include/linux/clk/ti.h

+TI DATA TRANSFORM AND HASHING ENGINE (DTHE) V2 CRYPTO DRIVER
+M:	T Pratham <t-pratham@ti.com>
+L:	linux-crypto@vger.kernel.org
+S:	Supported
+F:	Documentation/devicetree/bindings/crypto/ti,am62l-dthev2.yaml
+F:	drivers/crypto/ti/
+
 TI DAVINCI MACHINE SUPPORT
 M:	Bartosz Golaszewski <brgl@bgdev.pl>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
@@ -28008,6 +28015,12 @@ F:	Documentation/misc-devices/xilinx_sdfec.rst
 F:	drivers/misc/xilinx_sdfec.c
 F:	include/uapi/misc/xilinx_sdfec.h

+XILINX TRNG DRIVER
+M:	Mounika Botcha <mounika.botcha@amd.com>
+M:	Harsh Jain <h.jain@amd.com>
+S:	Maintained
+F:	drivers/crypto/xilinx/xilinx-trng.c
+
 XILINX UARTLITE SERIAL DRIVER
 M:	Peter Korsgaard <jacmet@sunsite.dk>
 L:	linux-serial@vger.kernel.org
@@ -71,6 +71,7 @@ config CRYPTO_POLYVAL_ARM64_CE
 config CRYPTO_AES_ARM64
 	tristate "Ciphers: AES, modes: ECB, CBC, CTR, CTS, XCTR, XTS"
 	select CRYPTO_AES
+	select CRYPTO_LIB_SHA256
 	help
 	  Block ciphers: AES cipher algorithms (FIPS-197)
 	  Length-preserving ciphers: AES with ECB, CBC, CTR, CTS,
@@ -122,7 +122,6 @@ struct crypto_aes_xts_ctx {
 struct crypto_aes_essiv_cbc_ctx {
 	struct crypto_aes_ctx key1;
 	struct crypto_aes_ctx __aligned(8) key2;
-	struct crypto_shash *hash;
 };

 struct mac_tfm_ctx {
@@ -171,7 +170,7 @@ static int __maybe_unused essiv_cbc_set_key(struct crypto_skcipher *tfm,
 	if (ret)
 		return ret;

-	crypto_shash_tfm_digest(ctx->hash, in_key, key_len, digest);
+	sha256(in_key, key_len, digest);

 	return aes_expandkey(&ctx->key2, digest, sizeof(digest));
 }
@@ -388,22 +387,6 @@ static int cts_cbc_decrypt(struct skcipher_request *req)
 	return skcipher_walk_done(&walk, 0);
 }

-static int __maybe_unused essiv_cbc_init_tfm(struct crypto_skcipher *tfm)
-{
-	struct crypto_aes_essiv_cbc_ctx *ctx = crypto_skcipher_ctx(tfm);
-
-	ctx->hash = crypto_alloc_shash("sha256", 0, 0);
-
-	return PTR_ERR_OR_ZERO(ctx->hash);
-}
-
-static void __maybe_unused essiv_cbc_exit_tfm(struct crypto_skcipher *tfm)
-{
-	struct crypto_aes_essiv_cbc_ctx *ctx = crypto_skcipher_ctx(tfm);
-
-	crypto_free_shash(ctx->hash);
-}
-
 static int __maybe_unused essiv_cbc_encrypt(struct skcipher_request *req)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
@@ -793,8 +776,6 @@ static struct skcipher_alg aes_algs[] = { {
 	.setkey		= essiv_cbc_set_key,
 	.encrypt	= essiv_cbc_encrypt,
 	.decrypt	= essiv_cbc_decrypt,
-	.init		= essiv_cbc_init_tfm,
-	.exit		= essiv_cbc_exit_tfm,
 } };

 static int cbcmac_setkey(struct crypto_shash *tfm, const u8 *in_key,
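Note: the ESSIV rework above replaces an allocated "sha256" shash transform with the synchronous SHA-256 library call, which is what lets the ctx->hash member and the init/exit hooks disappear. A minimal sketch of the derivation step on its own; the function name is illustrative, not from the tree:

    #include <crypto/sha2.h>
    #include <linux/types.h>

    /* ESSIV derives the IV-generation key (key2) as the SHA-256 digest
     * of the data key; the library call needs no transform object. */
    static void example_essiv_derive_key2(const u8 *in_key,
    				      unsigned int key_len,
    				      u8 digest[SHA256_DIGEST_SIZE])
    {
    	sha256(in_key, key_len, digest);
    }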
@@ -10,14 +10,15 @@
 #ifndef _CRYPTO_ARCH_S390_SHA_H
 #define _CRYPTO_ARCH_S390_SHA_H

+#include <crypto/hash.h>
 #include <crypto/sha2.h>
 #include <crypto/sha3.h>
+#include <linux/build_bug.h>
 #include <linux/types.h>

 /* must be big enough for the largest SHA variant */
 #define CPACF_MAX_PARMBLOCK_SIZE	SHA3_STATE_SIZE
 #define SHA_MAX_BLOCK_SIZE		SHA3_224_BLOCK_SIZE
-#define S390_SHA_CTX_SIZE		sizeof(struct s390_sha_ctx)

 struct s390_sha_ctx {
 	u64 count;		/* message length in bytes */
@@ -42,4 +43,9 @@ int s390_sha_update_blocks(struct shash_desc *desc, const u8 *data,
 int s390_sha_finup(struct shash_desc *desc, const u8 *src, unsigned int len,
 		   u8 *out);

+static inline void __check_s390_sha_ctx_size(void)
+{
+	BUILD_BUG_ON(S390_SHA_CTX_SIZE != sizeof(struct s390_sha_ctx));
+}
+
 #endif
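The new __check_s390_sha_ctx_size() helper is a compile-time guard: BUILD_BUG_ON() fails the build if a hand-maintained size constant ever drifts from the real structure (the constant itself presumably now lives in a separate header, which is why its definition is dropped from this one). A userspace analogue of the same pattern, with made-up names:

    #include <stdint.h>

    struct example_sha_ctx {
    	uint64_t count;
    	uint8_t  state[200];	/* big enough for SHA-3 */
    };

    /* The constant a separate header might carry: */
    #define EXAMPLE_SHA_CTX_SIZE (8 + 200)

    /* Fails at compile time, not at run time, if the two drift apart. */
    _Static_assert(EXAMPLE_SHA_CTX_SIZE == sizeof(struct example_sha_ctx),
    	       "example_sha_ctx size constant is stale");

    int main(void) { return 0; }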
@@ -54,8 +54,10 @@ static int crypto842_sdecompress(struct crypto_scomp *tfm,
 }

 static struct scomp_alg scomp = {
+	.streams = {
 		.alloc_ctx	= crypto842_alloc_ctx,
 		.free_ctx	= crypto842_free_ctx,
+	},
 	.compress	= crypto842_scompress,
 	.decompress	= crypto842_sdecompress,
 	.base		= {
@@ -683,10 +683,7 @@ static struct crypto_alg anubis_alg = {

 static int __init anubis_mod_init(void)
 {
-	int ret = 0;
-
-	ret = crypto_register_alg(&anubis_alg);
-	return ret;
+	return crypto_register_alg(&anubis_alg);
 }

 static void __exit anubis_mod_fini(void)
@@ -610,11 +610,14 @@ int x509_process_extension(void *context, size_t hdrlen,
 	/*
 	 * Get hold of the basicConstraints
 	 * v[1] is the encoding size
-	 *	(Expect 0x2 or greater, making it 1 or more bytes)
+	 *	(Expect 0x00 for empty SEQUENCE with CA:FALSE, or
+	 *	 0x03 or greater for non-empty SEQUENCE)
 	 * v[2] is the encoding type
 	 *	(Expect an ASN1_BOOL for the CA)
-	 * v[3] is the contents of the ASN1_BOOL
-	 *	(Expect 1 if the CA is TRUE)
+	 * v[3] is the length of the ASN1_BOOL
+	 *	(Expect 1 for a single byte boolean)
+	 * v[4] is the contents of the ASN1_BOOL
+	 *	(Expect 0xFF if the CA is TRUE)
 	 * vlen should match the entire extension size
 	 */
 	if (v[0] != (ASN1_CONS_BIT | ASN1_SEQ))
@@ -623,8 +626,13 @@ int x509_process_extension(void *context, size_t hdrlen,
 		return -EBADMSG;
 	if (v[1] != vlen - 2)
 		return -EBADMSG;
-	if (vlen >= 4 && v[1] != 0 && v[2] == ASN1_BOOL && v[3] == 1)
+	/* Empty SEQUENCE means CA:FALSE (default value omitted per DER) */
+	if (v[1] == 0)
+		return 0;
+	if (vlen >= 5 && v[2] == ASN1_BOOL && v[3] == 1 && v[4] == 0xFF)
 		ctx->cert->pub->key_eflags |= 1 << KEY_EFLAG_CA;
+	else
+		return -EBADMSG;
 	return 0;
 }
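For reference, the DER bytes the fixed check walks: basicConstraints is a SEQUENCE that either contains a single-byte BOOLEAN (the CA flag) or, because DER omits default-valued fields, is empty for CA:FALSE. The byte values below are standard DER; the buffers themselves are only illustrative:

    /* SEQUENCE { BOOLEAN TRUE }  ->  CA:TRUE */
    static const unsigned char ca_true[] = { 0x30, 0x03, 0x01, 0x01, 0xFF };
    /* v[0]=0x30 constructed SEQUENCE, v[1]=0x03 three content bytes,
     * v[2]=0x01 BOOLEAN tag, v[3]=0x01 length, v[4]=0xFF TRUE */

    /* Empty SEQUENCE  ->  CA:FALSE (default omitted per DER) */
    static const unsigned char ca_false[] = { 0x30, 0x00 };

This layout is also why the old "v[3] == 1" test was effectively reading the BOOLEAN's length byte while the comment claimed it was the value: the value sits at v[4].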
@@ -1115,7 +1115,8 @@ static int __init cryptd_init(void)
 {
 	int err;

-	cryptd_wq = alloc_workqueue("cryptd", WQ_MEM_RECLAIM | WQ_CPU_INTENSIVE,
+	cryptd_wq = alloc_workqueue("cryptd",
+				    WQ_MEM_RECLAIM | WQ_CPU_INTENSIVE | WQ_PERCPU,
 				    1);
 	if (!cryptd_wq)
 		return -ENOMEM;
@@ -117,6 +117,7 @@ int jent_hash_time(void *hash_state, __u64 time, u8 *addtl,
 		pr_warn_ratelimited("Unexpected digest size\n");
 		return -EINVAL;
 	}
+	kmsan_unpoison_memory(intermediary, sizeof(intermediary));

 	/*
 	 * This loop fills a buffer which is injected into the entropy pool.
@@ -68,8 +68,10 @@ static int lz4_sdecompress(struct crypto_scomp *tfm, const u8 *src,
 }

 static struct scomp_alg scomp = {
+	.streams = {
 		.alloc_ctx	= lz4_alloc_ctx,
 		.free_ctx	= lz4_free_ctx,
+	},
 	.compress	= lz4_scompress,
 	.decompress	= lz4_sdecompress,
 	.base		= {

@@ -66,8 +66,10 @@ static int lz4hc_sdecompress(struct crypto_scomp *tfm, const u8 *src,
 }

 static struct scomp_alg scomp = {
+	.streams = {
 		.alloc_ctx	= lz4hc_alloc_ctx,
 		.free_ctx	= lz4hc_free_ctx,
+	},
 	.compress	= lz4hc_scompress,
 	.decompress	= lz4hc_sdecompress,
 	.base		= {

@@ -70,8 +70,10 @@ static int lzorle_sdecompress(struct crypto_scomp *tfm, const u8 *src,
 }

 static struct scomp_alg scomp = {
+	.streams = {
 		.alloc_ctx	= lzorle_alloc_ctx,
 		.free_ctx	= lzorle_free_ctx,
+	},
 	.compress	= lzorle_scompress,
 	.decompress	= lzorle_sdecompress,
 	.base		= {

@@ -70,8 +70,10 @@ static int lzo_sdecompress(struct crypto_scomp *tfm, const u8 *src,
 }

 static struct scomp_alg scomp = {
+	.streams = {
 		.alloc_ctx	= lzo_alloc_ctx,
 		.free_ctx	= lzo_free_ctx,
+	},
 	.compress	= lzo_scompress,
 	.decompress	= lzo_sdecompress,
 	.base		= {
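All four conversions above are the same mechanical change: the per-stream context hooks move into a .streams sub-structure so that every scomp algorithm spells them identically. A toy model of the shape, using stand-in struct definitions rather than the kernel's real ones (only the initializer layout is taken from the diff):

    /* Stand-in types; not the kernel's scomp_alg definition. */
    struct toy_streams {
    	void *(*alloc_ctx)(void);
    	void (*free_ctx)(void *ctx);
    };

    struct toy_scomp_alg {
    	struct toy_streams streams;
    	int (*compress)(const void *src, void *dst);
    	int (*decompress)(const void *src, void *dst);
    };

    static void *toy_alloc_ctx(void) { return 0; }
    static void toy_free_ctx(void *ctx) { (void)ctx; }
    static int toy_compress(const void *s, void *d) { (void)s; (void)d; return 0; }
    static int toy_decompress(const void *s, void *d) { (void)s; (void)d; return 0; }

    static struct toy_scomp_alg toy_alg = {
    	.streams = {
    		.alloc_ctx	= toy_alloc_ctx,
    		.free_ctx	= toy_free_ctx,
    	},
    	.compress	= toy_compress,
    	.decompress	= toy_decompress,
    };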
@@ -312,6 +312,7 @@ config HW_RANDOM_INGENIC_TRNG
 config HW_RANDOM_NOMADIK
 	tristate "ST-Ericsson Nomadik Random Number Generator support"
 	depends on ARCH_NOMADIK || COMPILE_TEST
+	depends on ARM_AMBA
 	default HW_RANDOM
 	help
 	  This driver provides kernel-side support for the Random Number
@@ -188,7 +188,7 @@ static int cn10k_rng_probe(struct pci_dev *pdev, const struct pci_device_id *id)

 	rng->reg_base = pcim_iomap(pdev, 0, 0);
 	if (!rng->reg_base)
-		return dev_err_probe(&pdev->dev, -ENOMEM, "Error while mapping CSRs, exiting\n");
+		return -ENOMEM;

 	rng->ops.name = devm_kasprintf(&pdev->dev, GFP_KERNEL,
 				       "cn10k-rng-%s", dev_name(&pdev->dev));
@@ -231,6 +231,10 @@ static int ks_sa_rng_probe(struct platform_device *pdev)
 	if (IS_ERR(ks_sa_rng->regmap_cfg))
 		return dev_err_probe(dev, -EINVAL, "syscon_node_to_regmap failed\n");

+	ks_sa_rng->clk = devm_clk_get_enabled(dev, NULL);
+	if (IS_ERR(ks_sa_rng->clk))
+		return dev_err_probe(dev, PTR_ERR(ks_sa_rng->clk), "Failed to get clock\n");
+
 	pm_runtime_enable(dev);
 	ret = pm_runtime_resume_and_get(dev);
 	if (ret < 0) {
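The commit title ("fix division by zero in ks_sa_rng_init") suggests the driver later divides by the functional clock's rate, and clk_get_rate() on a clock that was never acquired and enabled can legitimately report 0. A sketch of the guard pattern, with an invented function name and computation:

    #include <linux/clk.h>
    #include <linux/device.h>

    static int example_rng_clock_setup(struct device *dev)
    {
    	struct clk *clk;
    	unsigned long rate;

    	clk = devm_clk_get_enabled(dev, NULL);	/* get + prepare + enable */
    	if (IS_ERR(clk))
    		return dev_err_probe(dev, PTR_ERR(clk), "Failed to get clock\n");

    	rate = clk_get_rate(clk);
    	if (!rate)
    		return -EINVAL;	/* a later rate-based division would trap */

    	return 0;
    }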
@@ -150,7 +150,7 @@ static int timeriomem_rng_probe(struct platform_device *pdev)
 		priv->rng_ops.quality = pdata->quality;
 	}

-	priv->period = ns_to_ktime(period * NSEC_PER_USEC);
+	priv->period = us_to_ktime(period);
 	init_completion(&priv->completion);
 	hrtimer_setup(&priv->timer, timeriomem_rng_trigger, CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
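No behavior change here: us_to_ktime(x) amounts to ns_to_ktime(x * NSEC_PER_USEC), so the helper just states the unit conversion once. A trivial sketch:

    #include <linux/ktime.h>

    /* Both expressions produce the same ktime_t from a microsecond
     * period; the helper avoids the open-coded multiply. */
    static ktime_t example_period(u64 period_us)
    {
    	ktime_t by_hand   = ns_to_ktime(period_us * NSEC_PER_USEC);
    	ktime_t by_helper = us_to_ktime(period_us);

    	return ktime_compare(by_hand, by_helper) ? by_hand : by_helper;
    }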
@@ -725,6 +725,18 @@ config CRYPTO_DEV_TEGRA
 	  Select this to enable Tegra Security Engine which accelerates various
 	  AES encryption/decryption and HASH algorithms.

+config CRYPTO_DEV_XILINX_TRNG
+	tristate "Support for Xilinx True Random Generator"
+	depends on ZYNQMP_FIRMWARE || COMPILE_TEST
+	select CRYPTO_RNG
+	select HW_RANDOM
+	help
+	  Xilinx Versal SoC driver provides kernel-side support for True Random Number
+	  Generator and Pseudo random Number in CTR_DRBG mode as defined in NIST SP800-90A.
+
+	  To compile this driver as a module, choose M here: the module
+	  will be called xilinx-trng.
+
 config CRYPTO_DEV_ZYNQMP_AES
 	tristate "Support for Xilinx ZynqMP AES hw accelerator"
 	depends on ZYNQMP_FIRMWARE || COMPILE_TEST
@@ -864,5 +876,6 @@ config CRYPTO_DEV_SA2UL
 source "drivers/crypto/aspeed/Kconfig"
 source "drivers/crypto/starfive/Kconfig"
 source "drivers/crypto/inside-secure/eip93/Kconfig"
+source "drivers/crypto/ti/Kconfig"

 endif # CRYPTO_HW
@@ -49,3 +49,4 @@ obj-$(CONFIG_CRYPTO_DEV_AMLOGIC_GXL) += amlogic/
 obj-y += intel/
 obj-y += starfive/
 obj-y += cavium/
+obj-y += ti/
@@ -111,7 +111,7 @@ static int sun8i_ce_cipher_fallback(struct skcipher_request *areq)

 	if (IS_ENABLED(CONFIG_CRYPTO_DEV_SUN8I_CE_DEBUG)) {
 		struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
-		struct sun8i_ce_alg_template *algt __maybe_unused;
+		struct sun8i_ce_alg_template *algt;

 		algt = container_of(alg, struct sun8i_ce_alg_template,
 				    alg.skcipher.base);
@@ -131,21 +131,19 @@ static int sun8i_ce_cipher_fallback(struct skcipher_request *areq)
 	return err;
 }

-static int sun8i_ce_cipher_prepare(struct crypto_engine *engine, void *async_req)
+static int sun8i_ce_cipher_prepare(struct skcipher_request *areq,
+				   struct ce_task *cet)
 {
-	struct skcipher_request *areq = container_of(async_req, struct skcipher_request, base);
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(areq);
 	struct sun8i_cipher_tfm_ctx *op = crypto_skcipher_ctx(tfm);
 	struct sun8i_ce_dev *ce = op->ce;
 	struct sun8i_cipher_req_ctx *rctx = skcipher_request_ctx(areq);
 	struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
 	struct sun8i_ce_alg_template *algt;
-	struct sun8i_ce_flow *chan;
-	struct ce_task *cet;
 	struct scatterlist *sg;
 	unsigned int todo, len, offset, ivsize;
 	u32 common, sym;
-	int flow, i;
+	int i;
 	int nr_sgs = 0;
 	int nr_sgd = 0;
 	int err = 0;
@@ -163,14 +161,9 @@ static int sun8i_ce_cipher_prepare(struct skcipher_request *areq,
 	if (IS_ENABLED(CONFIG_CRYPTO_DEV_SUN8I_CE_DEBUG))
 		algt->stat_req++;

-	flow = rctx->flow;
-
-	chan = &ce->chanlist[flow];
-
-	cet = chan->tl;
 	memset(cet, 0, sizeof(struct ce_task));

-	cet->t_id = cpu_to_le32(flow);
+	cet->t_id = cpu_to_le32(rctx->flow);
 	common = ce->variant->alg_cipher[algt->ce_algo_id];
 	common |= rctx->op_dir | CE_COMM_INT;
 	cet->t_common_ctl = cpu_to_le32(common);
@@ -209,11 +202,11 @@ static int sun8i_ce_cipher_prepare(struct skcipher_request *areq,
 	if (areq->iv && ivsize > 0) {
 		if (rctx->op_dir & CE_DECRYPTION) {
 			offset = areq->cryptlen - ivsize;
-			scatterwalk_map_and_copy(chan->backup_iv, areq->src,
+			scatterwalk_map_and_copy(rctx->backup_iv, areq->src,
 						 offset, ivsize, 0);
 		}
-		memcpy(chan->bounce_iv, areq->iv, ivsize);
-		rctx->addr_iv = dma_map_single(ce->dev, chan->bounce_iv, ivsize,
+		memcpy(rctx->bounce_iv, areq->iv, ivsize);
+		rctx->addr_iv = dma_map_single(ce->dev, rctx->bounce_iv, ivsize,
 					       DMA_TO_DEVICE);
 		if (dma_mapping_error(ce->dev, rctx->addr_iv)) {
 			dev_err(ce->dev, "Cannot DMA MAP IV\n");
@@ -276,7 +269,6 @@ static int sun8i_ce_cipher_prepare(struct skcipher_request *areq,
 		goto theend_sgs;
 	}

-	chan->timeout = areq->cryptlen;
 	rctx->nr_sgs = ns;
 	rctx->nr_sgd = nd;
 	return 0;
@@ -300,13 +292,13 @@ theend_iv:

 		offset = areq->cryptlen - ivsize;
 		if (rctx->op_dir & CE_DECRYPTION) {
-			memcpy(areq->iv, chan->backup_iv, ivsize);
-			memzero_explicit(chan->backup_iv, ivsize);
+			memcpy(areq->iv, rctx->backup_iv, ivsize);
+			memzero_explicit(rctx->backup_iv, ivsize);
 		} else {
 			scatterwalk_map_and_copy(areq->iv, areq->dst, offset,
 						 ivsize, 0);
 		}
-		memzero_explicit(chan->bounce_iv, ivsize);
+		memzero_explicit(rctx->bounce_iv, ivsize);
 	}

 	dma_unmap_single(ce->dev, rctx->addr_key, op->keylen, DMA_TO_DEVICE);
@@ -315,24 +307,17 @@ theend:
 	return err;
 }

-static void sun8i_ce_cipher_unprepare(struct crypto_engine *engine,
-				      void *async_req)
+static void sun8i_ce_cipher_unprepare(struct skcipher_request *areq,
+				      struct ce_task *cet)
 {
-	struct skcipher_request *areq = container_of(async_req, struct skcipher_request, base);
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(areq);
 	struct sun8i_cipher_tfm_ctx *op = crypto_skcipher_ctx(tfm);
 	struct sun8i_ce_dev *ce = op->ce;
 	struct sun8i_cipher_req_ctx *rctx = skcipher_request_ctx(areq);
-	struct sun8i_ce_flow *chan;
-	struct ce_task *cet;
 	unsigned int ivsize, offset;
 	int nr_sgs = rctx->nr_sgs;
 	int nr_sgd = rctx->nr_sgd;
-	int flow;
-
-	flow = rctx->flow;
-	chan = &ce->chanlist[flow];
-	cet = chan->tl;
 	ivsize = crypto_skcipher_ivsize(tfm);

 	if (areq->src == areq->dst) {
@@ -349,43 +334,43 @@ static void sun8i_ce_cipher_unprepare(struct skcipher_request *areq,
 					 DMA_TO_DEVICE);
 		offset = areq->cryptlen - ivsize;
 		if (rctx->op_dir & CE_DECRYPTION) {
-			memcpy(areq->iv, chan->backup_iv, ivsize);
-			memzero_explicit(chan->backup_iv, ivsize);
+			memcpy(areq->iv, rctx->backup_iv, ivsize);
+			memzero_explicit(rctx->backup_iv, ivsize);
 		} else {
 			scatterwalk_map_and_copy(areq->iv, areq->dst, offset,
 						 ivsize, 0);
 		}
-		memzero_explicit(chan->bounce_iv, ivsize);
+		memzero_explicit(rctx->bounce_iv, ivsize);
 	}

 	dma_unmap_single(ce->dev, rctx->addr_key, op->keylen, DMA_TO_DEVICE);
 }

-static void sun8i_ce_cipher_run(struct crypto_engine *engine, void *areq)
-{
-	struct skcipher_request *breq = container_of(areq, struct skcipher_request, base);
-	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(breq);
-	struct sun8i_cipher_tfm_ctx *op = crypto_skcipher_ctx(tfm);
-	struct sun8i_ce_dev *ce = op->ce;
-	struct sun8i_cipher_req_ctx *rctx = skcipher_request_ctx(breq);
-	int flow, err;
-
-	flow = rctx->flow;
-	err = sun8i_ce_run_task(ce, flow, crypto_tfm_alg_name(breq->base.tfm));
-	sun8i_ce_cipher_unprepare(engine, areq);
-	local_bh_disable();
-	crypto_finalize_skcipher_request(engine, breq, err);
-	local_bh_enable();
-}
-
 int sun8i_ce_cipher_do_one(struct crypto_engine *engine, void *areq)
 {
-	int err = sun8i_ce_cipher_prepare(engine, areq);
+	struct skcipher_request *req = skcipher_request_cast(areq);
+	struct sun8i_cipher_req_ctx *rctx = skcipher_request_ctx(req);
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct sun8i_cipher_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
+	struct sun8i_ce_dev *ce = ctx->ce;
+	struct sun8i_ce_flow *chan;
+	int err;
+
+	chan = &ce->chanlist[rctx->flow];
+
+	err = sun8i_ce_cipher_prepare(req, chan->tl);
 	if (err)
 		return err;

-	sun8i_ce_cipher_run(engine, areq);
+	err = sun8i_ce_run_task(ce, rctx->flow,
+				crypto_tfm_alg_name(req->base.tfm));
+
+	sun8i_ce_cipher_unprepare(req, chan->tl);
+
+	local_bh_disable();
+	crypto_finalize_skcipher_request(engine, req, err);
+	local_bh_enable();
+
 	return 0;
 }
@@ -169,6 +169,12 @@ static const struct ce_variant ce_r40_variant = {
 	.trng = CE_ID_NOTSUPP,
 };

+static void sun8i_ce_dump_task_descriptors(struct sun8i_ce_flow *chan)
+{
+	print_hex_dump(KERN_INFO, "TASK: ", DUMP_PREFIX_NONE, 16, 4,
+		       chan->tl, sizeof(struct ce_task), false);
+}
+
 /*
  * sun8i_ce_get_engine_number() get the next channel slot
  * This is a simple round-robin way of getting the next channel
@@ -183,7 +189,6 @@ int sun8i_ce_run_task(struct sun8i_ce_dev *ce, int flow, const char *name)
 {
 	u32 v;
 	int err = 0;
-	struct ce_task *cet = ce->chanlist[flow].tl;

 #ifdef CONFIG_CRYPTO_DEV_SUN8I_CE_DEBUG
 	ce->chanlist[flow].stat_req++;
@@ -210,11 +215,10 @@ int sun8i_ce_run_task(struct sun8i_ce_dev *ce, int flow, const char *name)
 	mutex_unlock(&ce->mlock);

 	wait_for_completion_interruptible_timeout(&ce->chanlist[flow].complete,
-			msecs_to_jiffies(ce->chanlist[flow].timeout));
+			msecs_to_jiffies(CE_DMA_TIMEOUT_MS));

 	if (ce->chanlist[flow].status == 0) {
-		dev_err(ce->dev, "DMA timeout for %s (tm=%d) on flow %d\n", name,
-			ce->chanlist[flow].timeout, flow);
+		dev_err(ce->dev, "DMA timeout for %s on flow %d\n", name, flow);
 		err = -EFAULT;
 	}
 	/* No need to lock for this read, the channel is locked so
@@ -226,9 +230,8 @@ int sun8i_ce_run_task(struct sun8i_ce_dev *ce, int flow, const char *name)
 	/* Sadly, the error bit is not per flow */
 	if (v) {
 		dev_err(ce->dev, "CE ERROR: %x for flow %x\n", v, flow);
+		sun8i_ce_dump_task_descriptors(&ce->chanlist[flow]);
 		err = -EFAULT;
-		print_hex_dump(KERN_INFO, "TASK: ", DUMP_PREFIX_NONE, 16, 4,
-			       cet, sizeof(struct ce_task), false);
 	}
 	if (v & CE_ERR_ALGO_NOTSUP)
 		dev_err(ce->dev, "CE ERROR: algorithm not supported\n");
@@ -245,9 +248,8 @@ int sun8i_ce_run_task(struct sun8i_ce_dev *ce, int flow, const char *name)
 	v &= 0xF;
 	if (v) {
 		dev_err(ce->dev, "CE ERROR: %x for flow %x\n", v, flow);
+		sun8i_ce_dump_task_descriptors(&ce->chanlist[flow]);
 		err = -EFAULT;
-		print_hex_dump(KERN_INFO, "TASK: ", DUMP_PREFIX_NONE, 16, 4,
-			       cet, sizeof(struct ce_task), false);
 	}
 	if (v & CE_ERR_ALGO_NOTSUP)
 		dev_err(ce->dev, "CE ERROR: algorithm not supported\n");
@@ -261,9 +263,8 @@ int sun8i_ce_run_task(struct sun8i_ce_dev *ce, int flow, const char *name)
 	v &= 0xFF;
 	if (v) {
 		dev_err(ce->dev, "CE ERROR: %x for flow %x\n", v, flow);
+		sun8i_ce_dump_task_descriptors(&ce->chanlist[flow]);
 		err = -EFAULT;
-		print_hex_dump(KERN_INFO, "TASK: ", DUMP_PREFIX_NONE, 16, 4,
-			       cet, sizeof(struct ce_task), false);
 	}
 	if (v & CE_ERR_ALGO_NOTSUP)
 		dev_err(ce->dev, "CE ERROR: algorithm not supported\n");
@@ -758,18 +759,6 @@ static int sun8i_ce_allocate_chanlist(struct sun8i_ce_dev *ce)
 			err = -ENOMEM;
 			goto error_engine;
 		}
-		ce->chanlist[i].bounce_iv = devm_kmalloc(ce->dev, AES_BLOCK_SIZE,
-							 GFP_KERNEL | GFP_DMA);
-		if (!ce->chanlist[i].bounce_iv) {
-			err = -ENOMEM;
-			goto error_engine;
-		}
-		ce->chanlist[i].backup_iv = devm_kmalloc(ce->dev, AES_BLOCK_SIZE,
-							 GFP_KERNEL);
-		if (!ce->chanlist[i].backup_iv) {
-			err = -ENOMEM;
-			goto error_engine;
-		}
 	}
 	return 0;
 error_engine:
@@ -1063,7 +1052,7 @@ static int sun8i_ce_probe(struct platform_device *pdev)
 	pm_runtime_put_sync(ce->dev);

 	if (IS_ENABLED(CONFIG_CRYPTO_DEV_SUN8I_CE_DEBUG)) {
-		struct dentry *dbgfs_dir __maybe_unused;
+		struct dentry *dbgfs_dir;
 		struct dentry *dbgfs_stats __maybe_unused;

 		/* Ignore error of debugfs */
@ -26,7 +26,7 @@
|
||||||
static void sun8i_ce_hash_stat_fb_inc(struct crypto_ahash *tfm)
|
static void sun8i_ce_hash_stat_fb_inc(struct crypto_ahash *tfm)
|
||||||
{
|
{
|
||||||
if (IS_ENABLED(CONFIG_CRYPTO_DEV_SUN8I_CE_DEBUG)) {
|
if (IS_ENABLED(CONFIG_CRYPTO_DEV_SUN8I_CE_DEBUG)) {
|
||||||
struct sun8i_ce_alg_template *algt __maybe_unused;
|
struct sun8i_ce_alg_template *algt;
|
||||||
struct ahash_alg *alg = crypto_ahash_alg(tfm);
|
struct ahash_alg *alg = crypto_ahash_alg(tfm);
|
||||||
|
|
||||||
algt = container_of(alg, struct sun8i_ce_alg_template,
|
algt = container_of(alg, struct sun8i_ce_alg_template,
|
||||||
|
|
@ -58,7 +58,8 @@ int sun8i_ce_hash_init_tfm(struct crypto_ahash *tfm)
|
||||||
|
|
||||||
crypto_ahash_set_reqsize(tfm,
|
crypto_ahash_set_reqsize(tfm,
|
||||||
sizeof(struct sun8i_ce_hash_reqctx) +
|
sizeof(struct sun8i_ce_hash_reqctx) +
|
||||||
crypto_ahash_reqsize(op->fallback_tfm));
|
crypto_ahash_reqsize(op->fallback_tfm) +
|
||||||
|
CRYPTO_DMA_PADDING);
|
||||||
|
|
||||||
if (IS_ENABLED(CONFIG_CRYPTO_DEV_SUN8I_CE_DEBUG))
|
if (IS_ENABLED(CONFIG_CRYPTO_DEV_SUN8I_CE_DEBUG))
|
||||||
memcpy(algt->fbname,
|
memcpy(algt->fbname,
|
||||||
|
|
@ -84,7 +85,7 @@ void sun8i_ce_hash_exit_tfm(struct crypto_ahash *tfm)
|
||||||
|
|
||||||
int sun8i_ce_hash_init(struct ahash_request *areq)
|
int sun8i_ce_hash_init(struct ahash_request *areq)
|
||||||
{
|
{
|
||||||
struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx(areq);
|
struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx_dma(areq);
|
||||||
struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
|
struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
|
||||||
struct sun8i_ce_hash_tfm_ctx *tfmctx = crypto_ahash_ctx(tfm);
|
struct sun8i_ce_hash_tfm_ctx *tfmctx = crypto_ahash_ctx(tfm);
|
||||||
|
|
||||||
|
|
@ -100,7 +101,7 @@ int sun8i_ce_hash_init(struct ahash_request *areq)
|
||||||
|
|
||||||
int sun8i_ce_hash_export(struct ahash_request *areq, void *out)
|
int sun8i_ce_hash_export(struct ahash_request *areq, void *out)
|
||||||
{
|
{
|
||||||
struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx(areq);
|
struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx_dma(areq);
|
||||||
struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
|
struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
|
||||||
struct sun8i_ce_hash_tfm_ctx *tfmctx = crypto_ahash_ctx(tfm);
|
struct sun8i_ce_hash_tfm_ctx *tfmctx = crypto_ahash_ctx(tfm);
|
||||||
|
|
||||||
|
|
@ -114,7 +115,7 @@ int sun8i_ce_hash_export(struct ahash_request *areq, void *out)
|
||||||
|
|
||||||
int sun8i_ce_hash_import(struct ahash_request *areq, const void *in)
|
int sun8i_ce_hash_import(struct ahash_request *areq, const void *in)
|
||||||
{
|
{
|
||||||
struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx(areq);
|
struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx_dma(areq);
|
||||||
struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
|
struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
|
||||||
struct sun8i_ce_hash_tfm_ctx *tfmctx = crypto_ahash_ctx(tfm);
|
struct sun8i_ce_hash_tfm_ctx *tfmctx = crypto_ahash_ctx(tfm);
|
||||||
|
|
||||||
|
|
@ -128,7 +129,7 @@ int sun8i_ce_hash_import(struct ahash_request *areq, const void *in)
|
||||||
|
|
||||||
int sun8i_ce_hash_final(struct ahash_request *areq)
|
int sun8i_ce_hash_final(struct ahash_request *areq)
|
||||||
{
|
{
|
||||||
struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx(areq);
|
struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx_dma(areq);
|
||||||
struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
|
struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
|
||||||
struct sun8i_ce_hash_tfm_ctx *tfmctx = crypto_ahash_ctx(tfm);
|
struct sun8i_ce_hash_tfm_ctx *tfmctx = crypto_ahash_ctx(tfm);
|
||||||
|
|
||||||
|
|
@ -145,7 +146,7 @@ int sun8i_ce_hash_final(struct ahash_request *areq)
|
||||||
|
|
||||||
int sun8i_ce_hash_update(struct ahash_request *areq)
|
int sun8i_ce_hash_update(struct ahash_request *areq)
|
||||||
{
|
{
|
||||||
struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx(areq);
|
struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx_dma(areq);
|
||||||
struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
|
struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
|
||||||
struct sun8i_ce_hash_tfm_ctx *tfmctx = crypto_ahash_ctx(tfm);
|
struct sun8i_ce_hash_tfm_ctx *tfmctx = crypto_ahash_ctx(tfm);
|
||||||
|
|
||||||
|
|
@ -160,7 +161,7 @@ int sun8i_ce_hash_update(struct ahash_request *areq)
|
||||||
|
|
||||||
int sun8i_ce_hash_finup(struct ahash_request *areq)
|
int sun8i_ce_hash_finup(struct ahash_request *areq)
|
||||||
{
|
{
|
||||||
struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx(areq);
|
struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx_dma(areq);
|
||||||
struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
|
struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
|
||||||
struct sun8i_ce_hash_tfm_ctx *tfmctx = crypto_ahash_ctx(tfm);
|
struct sun8i_ce_hash_tfm_ctx *tfmctx = crypto_ahash_ctx(tfm);
|
||||||
|
|
||||||
|
|
@ -178,7 +179,7 @@ int sun8i_ce_hash_finup(struct ahash_request *areq)
|
||||||
|
|
||||||
static int sun8i_ce_hash_digest_fb(struct ahash_request *areq)
|
static int sun8i_ce_hash_digest_fb(struct ahash_request *areq)
|
||||||
{
|
{
|
||||||
struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx(areq);
|
struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx_dma(areq);
|
||||||
struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
|
struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
|
||||||
struct sun8i_ce_hash_tfm_ctx *tfmctx = crypto_ahash_ctx(tfm);
|
struct sun8i_ce_hash_tfm_ctx *tfmctx = crypto_ahash_ctx(tfm);
|
||||||
|
|
||||||
|
|
@ -238,19 +239,15 @@ static bool sun8i_ce_hash_need_fallback(struct ahash_request *areq)
|
||||||
int sun8i_ce_hash_digest(struct ahash_request *areq)
|
int sun8i_ce_hash_digest(struct ahash_request *areq)
|
||||||
{
|
{
|
||||||
struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
|
struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
|
||||||
struct ahash_alg *alg = __crypto_ahash_alg(tfm->base.__crt_alg);
|
struct sun8i_ce_hash_tfm_ctx *ctx = crypto_ahash_ctx(tfm);
|
||||||
struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx(areq);
|
struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx_dma(areq);
|
||||||
struct sun8i_ce_alg_template *algt;
|
struct sun8i_ce_dev *ce = ctx->ce;
|
||||||
struct sun8i_ce_dev *ce;
|
|
||||||
struct crypto_engine *engine;
|
struct crypto_engine *engine;
|
||||||
int e;
|
int e;
|
||||||
|
|
||||||
if (sun8i_ce_hash_need_fallback(areq))
|
if (sun8i_ce_hash_need_fallback(areq))
|
||||||
return sun8i_ce_hash_digest_fb(areq);
|
return sun8i_ce_hash_digest_fb(areq);
|
||||||
|
|
||||||
algt = container_of(alg, struct sun8i_ce_alg_template, alg.hash.base);
|
|
||||||
ce = algt->ce;
|
|
||||||
|
|
||||||
e = sun8i_ce_get_engine_number(ce);
|
e = sun8i_ce_get_engine_number(ce);
|
||||||
rctx->flow = e;
|
rctx->flow = e;
|
||||||
engine = ce->chanlist[e].engine;
|
engine = ce->chanlist[e].engine;
|
||||||
|
|
@ -316,28 +313,22 @@ static u64 hash_pad(__le32 *buf, unsigned int bufsize, u64 padi, u64 byte_count,
|
||||||
return j;
|
return j;
|
||||||
}
|
}
|
||||||
|
|
||||||
int sun8i_ce_hash_run(struct crypto_engine *engine, void *breq)
|
static int sun8i_ce_hash_prepare(struct ahash_request *areq, struct ce_task *cet)
|
||||||
{
|
{
|
||||||
struct ahash_request *areq = container_of(breq, struct ahash_request, base);
|
|
||||||
struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
|
struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
|
||||||
struct ahash_alg *alg = __crypto_ahash_alg(tfm->base.__crt_alg);
|
struct ahash_alg *alg = __crypto_ahash_alg(tfm->base.__crt_alg);
|
||||||
struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx(areq);
|
struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx_dma(areq);
|
||||||
struct sun8i_ce_alg_template *algt;
|
struct sun8i_ce_alg_template *algt;
|
||||||
struct sun8i_ce_dev *ce;
|
struct sun8i_ce_dev *ce;
|
||||||
struct sun8i_ce_flow *chan;
|
|
||||||
struct ce_task *cet;
|
|
||||||
struct scatterlist *sg;
|
struct scatterlist *sg;
|
||||||
int nr_sgs, flow, err;
|
int nr_sgs, err;
|
||||||
unsigned int len;
|
unsigned int len;
|
||||||
u32 common;
|
u32 common;
|
||||||
u64 byte_count;
|
u64 byte_count;
|
||||||
__le32 *bf;
|
__le32 *bf;
|
||||||
void *buf, *result;
|
|
||||||
int j, i, todo;
|
int j, i, todo;
|
||||||
u64 bs;
|
u64 bs;
|
||||||
int digestsize;
|
int digestsize;
|
||||||
dma_addr_t addr_res, addr_pad;
|
|
||||||
int ns = sg_nents_for_len(areq->src, areq->nbytes);
|
|
||||||
|
|
||||||
algt = container_of(alg, struct sun8i_ce_alg_template, alg.hash.base);
|
algt = container_of(alg, struct sun8i_ce_alg_template, alg.hash.base);
|
||||||
ce = algt->ce;
|
ce = algt->ce;
|
||||||
|
|
@ -349,32 +340,16 @@ int sun8i_ce_hash_run(struct crypto_engine *engine, void *breq)
|
||||||
if (digestsize == SHA384_DIGEST_SIZE)
|
if (digestsize == SHA384_DIGEST_SIZE)
|
||||||
digestsize = SHA512_DIGEST_SIZE;
|
digestsize = SHA512_DIGEST_SIZE;
|
||||||
|
|
||||||
/* the padding could be up to two block. */
|
+	bf = (__le32 *)rctx->pad;
-	buf = kcalloc(2, bs, GFP_KERNEL | GFP_DMA);
-	if (!buf) {
-		err = -ENOMEM;
-		goto err_out;
-	}
-	bf = (__le32 *)buf;
-
-	result = kzalloc(digestsize, GFP_KERNEL | GFP_DMA);
-	if (!result) {
-		err = -ENOMEM;
-		goto err_free_buf;
-	}
-
-	flow = rctx->flow;
-	chan = &ce->chanlist[flow];
-
 	if (IS_ENABLED(CONFIG_CRYPTO_DEV_SUN8I_CE_DEBUG))
 		algt->stat_req++;
 
 	dev_dbg(ce->dev, "%s %s len=%d\n", __func__, crypto_tfm_alg_name(areq->base.tfm), areq->nbytes);
 
-	cet = chan->tl;
 	memset(cet, 0, sizeof(struct ce_task));
 
-	cet->t_id = cpu_to_le32(flow);
+	cet->t_id = cpu_to_le32(rctx->flow);
 	common = ce->variant->alg_hash[algt->ce_algo_id];
 	common |= CE_COMM_INT;
 	cet->t_common_ctl = cpu_to_le32(common);
 
@@ -382,11 +357,12 @@ int sun8i_ce_hash_run(struct crypto_engine *engine, void *breq)
 	cet->t_sym_ctl = 0;
 	cet->t_asym_ctl = 0;
 
-	nr_sgs = dma_map_sg(ce->dev, areq->src, ns, DMA_TO_DEVICE);
+	rctx->nr_sgs = sg_nents_for_len(areq->src, areq->nbytes);
+	nr_sgs = dma_map_sg(ce->dev, areq->src, rctx->nr_sgs, DMA_TO_DEVICE);
 	if (nr_sgs <= 0 || nr_sgs > MAX_SG) {
 		dev_err(ce->dev, "Invalid sg number %d\n", nr_sgs);
 		err = -EINVAL;
-		goto err_free_result;
+		goto err_out;
 	}
 
 	len = areq->nbytes;
 
@@ -401,10 +377,13 @@ int sun8i_ce_hash_run(struct crypto_engine *engine, void *breq)
 		err = -EINVAL;
 		goto err_unmap_src;
 	}
-	addr_res = dma_map_single(ce->dev, result, digestsize, DMA_FROM_DEVICE);
-	cet->t_dst[0].addr = desc_addr_val_le32(ce, addr_res);
-	cet->t_dst[0].len = cpu_to_le32(digestsize / 4);
-	if (dma_mapping_error(ce->dev, addr_res)) {
+	rctx->result_len = digestsize;
+	rctx->addr_res = dma_map_single(ce->dev, rctx->result, rctx->result_len,
+					DMA_FROM_DEVICE);
+	cet->t_dst[0].addr = desc_addr_val_le32(ce, rctx->addr_res);
+	cet->t_dst[0].len = cpu_to_le32(rctx->result_len / 4);
+	if (dma_mapping_error(ce->dev, rctx->addr_res)) {
 		dev_err(ce->dev, "DMA map dest\n");
 		err = -EINVAL;
 		goto err_unmap_src;
 
@@ -432,10 +411,12 @@ int sun8i_ce_hash_run(struct crypto_engine *engine, void *breq)
 		goto err_unmap_result;
 	}
 
-	addr_pad = dma_map_single(ce->dev, buf, j * 4, DMA_TO_DEVICE);
-	cet->t_src[i].addr = desc_addr_val_le32(ce, addr_pad);
+	rctx->pad_len = j * 4;
+	rctx->addr_pad = dma_map_single(ce->dev, rctx->pad, rctx->pad_len,
+					DMA_TO_DEVICE);
+	cet->t_src[i].addr = desc_addr_val_le32(ce, rctx->addr_pad);
 	cet->t_src[i].len = cpu_to_le32(j);
-	if (dma_mapping_error(ce->dev, addr_pad)) {
+	if (dma_mapping_error(ce->dev, rctx->addr_pad)) {
 		dev_err(ce->dev, "DMA error on padding SG\n");
 		err = -EINVAL;
 		goto err_unmap_result;
 
@@ -446,29 +427,59 @@ int sun8i_ce_hash_run(struct crypto_engine *engine, void *breq)
 	else
 		cet->t_dlen = cpu_to_le32(areq->nbytes / 4 + j);
 
-	chan->timeout = areq->nbytes;
+	return 0;
 
-	err = sun8i_ce_run_task(ce, flow, crypto_ahash_alg_name(tfm));
-
-	dma_unmap_single(ce->dev, addr_pad, j * 4, DMA_TO_DEVICE);
-
 err_unmap_result:
-	dma_unmap_single(ce->dev, addr_res, digestsize, DMA_FROM_DEVICE);
-	if (!err)
-		memcpy(areq->result, result, crypto_ahash_digestsize(tfm));
+	dma_unmap_single(ce->dev, rctx->addr_res, rctx->result_len,
+			 DMA_FROM_DEVICE);
 
 err_unmap_src:
-	dma_unmap_sg(ce->dev, areq->src, ns, DMA_TO_DEVICE);
+	dma_unmap_sg(ce->dev, areq->src, rctx->nr_sgs, DMA_TO_DEVICE);
 
-err_free_result:
-	kfree(result);
-
-err_free_buf:
-	kfree(buf);
-
 err_out:
+	return err;
+}
+
+static void sun8i_ce_hash_unprepare(struct ahash_request *areq,
+				    struct ce_task *cet)
+{
+	struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx_dma(areq);
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
+	struct sun8i_ce_hash_tfm_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct sun8i_ce_dev *ce = ctx->ce;
+
+	dma_unmap_single(ce->dev, rctx->addr_pad, rctx->pad_len, DMA_TO_DEVICE);
+	dma_unmap_single(ce->dev, rctx->addr_res, rctx->result_len,
+			 DMA_FROM_DEVICE);
+	dma_unmap_sg(ce->dev, areq->src, rctx->nr_sgs, DMA_TO_DEVICE);
+}
+
+int sun8i_ce_hash_run(struct crypto_engine *engine, void *async_req)
+{
+	struct ahash_request *areq = ahash_request_cast(async_req);
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
+	struct sun8i_ce_hash_tfm_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx_dma(areq);
+	struct sun8i_ce_dev *ce = ctx->ce;
+	struct sun8i_ce_flow *chan;
+	int err;
+
+	chan = &ce->chanlist[rctx->flow];
+
+	err = sun8i_ce_hash_prepare(areq, chan->tl);
+	if (err)
+		return err;
+
+	err = sun8i_ce_run_task(ce, rctx->flow, crypto_ahash_alg_name(tfm));
+
+	sun8i_ce_hash_unprepare(areq, chan->tl);
+
+	if (!err)
+		memcpy(areq->result, rctx->result,
+		       crypto_ahash_digestsize(tfm));
+
 	local_bh_disable();
-	crypto_finalize_hash_request(engine, breq, err);
+	crypto_finalize_hash_request(engine, async_req, err);
 	local_bh_enable();
 
 	return 0;
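The rework above removes every kzalloc()/kfree() from the hash datapath:
scratch memory lives in the request context, and all DMA teardown funnels
through a single unprepare helper. A minimal standalone sketch of the
pattern (demo_* names are illustrative, not the driver's API):

    #include <linux/dma-mapping.h>
    #include <linux/errno.h>
    #include <linux/types.h>

    struct demo_reqctx {
            dma_addr_t addr_res;    /* mapped in prepare, unmapped in unprepare */
            size_t result_len;
            u8 result[64];          /* fixed worst-case size: no allocation */
    };

    static int demo_prepare(struct device *dev, struct demo_reqctx *rctx,
                            size_t digestsize)
    {
            rctx->result_len = digestsize;
            rctx->addr_res = dma_map_single(dev, rctx->result, rctx->result_len,
                                            DMA_FROM_DEVICE);
            if (dma_mapping_error(dev, rctx->addr_res))
                    return -EINVAL;
            return 0;
    }

    static void demo_unprepare(struct device *dev, struct demo_reqctx *rctx)
    {
            /* unmap with the same size and direction used for mapping */
            dma_unmap_single(dev, rctx->addr_res, rctx->result_len,
                             DMA_FROM_DEVICE);
    }

With the prepare/run/unprepare split, the completion path always unmaps
exactly what was mapped, and the error labels collapse accordingly.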
@@ -137,7 +137,6 @@ int sun8i_ce_prng_generate(struct crypto_rng *tfm, const u8 *src,
 
 	cet->t_dst[0].addr = desc_addr_val_le32(ce, dma_dst);
 	cet->t_dst[0].len = cpu_to_le32(todo / 4);
-	ce->chanlist[flow].timeout = 2000;
 
 	err = sun8i_ce_run_task(ce, 3, "PRNG");
 	mutex_unlock(&ce->rnglock);
@@ -79,7 +79,6 @@ static int sun8i_ce_trng_read(struct hwrng *rng, void *data, size_t max, bool wait)
 
 	cet->t_dst[0].addr = desc_addr_val_le32(ce, dma_dst);
 	cet->t_dst[0].len = cpu_to_le32(todo / 4);
-	ce->chanlist[flow].timeout = todo;
 
 	err = sun8i_ce_run_task(ce, 3, "TRNG");
 	mutex_unlock(&ce->rnglock);
@@ -106,9 +106,13 @@
 #define MAX_SG 8
 
 #define CE_MAX_CLOCKS 4
+#define CE_DMA_TIMEOUT_MS 3000
 
 #define MAXFLOW 4
 
+#define CE_MAX_HASH_DIGEST_SIZE	SHA512_DIGEST_SIZE
+#define CE_MAX_HASH_BLOCK_SIZE	SHA512_BLOCK_SIZE
+
 /*
  * struct ce_clock - Describe clocks used by sun8i-ce
  * @name: Name of clock needed by this variant
@@ -187,8 +191,6 @@ struct ce_task {
  * @status: set to 1 by interrupt if task is done
  * @t_phy: Physical address of task
  * @tl: pointer to the current ce_task for this flow
- * @backup_iv: buffer which contain the next IV to store
- * @bounce_iv: buffer which contain the IV
  * @stat_req: number of request done by this flow
  */
 struct sun8i_ce_flow {
@@ -196,10 +198,7 @@ struct sun8i_ce_flow {
 	struct completion complete;
 	int status;
 	dma_addr_t t_phy;
-	int timeout;
 	struct ce_task *tl;
-	void *backup_iv;
-	void *bounce_iv;
 #ifdef CONFIG_CRYPTO_DEV_SUN8I_CE_DEBUG
 	unsigned long stat_req;
 #endif
@@ -264,6 +263,8 @@ static inline __le32 desc_addr_val_le32(struct sun8i_ce_dev *dev,
  * @nr_sgd: The number of destination SG (as given by dma_map_sg())
  * @addr_iv: The IV addr returned by dma_map_single, need to unmap later
  * @addr_key: The key addr returned by dma_map_single, need to unmap later
+ * @bounce_iv: Current IV buffer
+ * @backup_iv: Next IV buffer
  * @fallback_req: request struct for invoking the fallback skcipher TFM
  */
 struct sun8i_cipher_req_ctx {
@@ -273,6 +274,8 @@ struct sun8i_cipher_req_ctx {
 	int nr_sgd;
 	dma_addr_t addr_iv;
 	dma_addr_t addr_key;
+	u8 bounce_iv[AES_BLOCK_SIZE] __aligned(sizeof(u32));
+	u8 backup_iv[AES_BLOCK_SIZE];
 	struct skcipher_request fallback_req; // keep at the end
 };
 
@@ -304,9 +307,23 @@ struct sun8i_ce_hash_tfm_ctx {
  * struct sun8i_ce_hash_reqctx - context for an ahash request
  * @fallback_req: pre-allocated fallback request
  * @flow: the flow to use for this request
+ * @nr_sgs: number of entries in the source scatterlist
+ * @result_len: result length in bytes
+ * @pad_len: padding length in bytes
+ * @addr_res: DMA address of the result buffer, returned by dma_map_single()
+ * @addr_pad: DMA address of the padding buffer, returned by dma_map_single()
+ * @result: per-request result buffer
+ * @pad: per-request padding buffer (up to 2 blocks)
 */
 struct sun8i_ce_hash_reqctx {
 	int flow;
+	int nr_sgs;
+	size_t result_len;
+	size_t pad_len;
+	dma_addr_t addr_res;
+	dma_addr_t addr_pad;
+	u8 result[CE_MAX_HASH_DIGEST_SIZE] __aligned(CRYPTO_DMA_ALIGN);
+	u8 pad[2 * CE_MAX_HASH_BLOCK_SIZE];
 	struct ahash_request fallback_req; // keep at the end
 };
 
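The new fixed-size pad[2 * CE_MAX_HASH_BLOCK_SIZE] buffer is sized for the
worst case of Merkle-Damgard padding: the mandatory 0x80 byte plus the 64- or
128-bit length field can spill into a second block when the message ends near
a block boundary. A standalone illustration of that bound (plain C, not
driver code):

    #include <stdio.h>

    /* Bytes of SHA-1/SHA-2 style padding for a message of msg_len bytes,
     * block size bs, and an 8- or 16-byte length field. */
    static unsigned int pad_bytes(unsigned int msg_len, unsigned int bs,
                                  unsigned int len_field)
    {
            unsigned int pad = bs - (msg_len % bs);

            if (pad < 1 + len_field)
                    pad += bs;      /* spills into a second block */
            return pad;
    }

    int main(void)
    {
            printf("%u\n", pad_bytes(60, 64, 8));    /* SHA-256: 68 bytes */
            printf("%u\n", pad_bytes(120, 128, 16)); /* SHA-512: 136 bytes */
            return 0;
    }

The padding never exceeds bs + len_field bytes, so two blocks always suffice.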
@@ -346,7 +346,7 @@ free_req:
 
 	} else {
 		dma_unmap_sg(hace_dev->dev, req->dst, rctx->dst_nents,
-			     DMA_TO_DEVICE);
+			     DMA_FROM_DEVICE);
 		dma_unmap_sg(hace_dev->dev, req->src, rctx->src_nents,
 			     DMA_TO_DEVICE);
 	}
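This aspeed fix, and the atmel-tdes hunk right after it, correct the same
one-line bug: dma_unmap_sg() must use the direction the buffer was mapped
with, otherwise DMA debugging complains and, on non-coherent systems, the
wrong cache maintenance is performed. A sketch of the correct pairing
(placeholder names):

    #include <linux/dma-mapping.h>
    #include <linux/scatterlist.h>

    static int demo_map(struct device *dev, struct scatterlist *src, int ns,
                        struct scatterlist *dst, int nd)
    {
            if (!dma_map_sg(dev, src, ns, DMA_TO_DEVICE))
                    return -EIO;
            if (!dma_map_sg(dev, dst, nd, DMA_FROM_DEVICE)) {
                    dma_unmap_sg(dev, src, ns, DMA_TO_DEVICE);
                    return -EIO;
            }
            return 0;
    }

    static void demo_unmap(struct device *dev, struct scatterlist *src, int ns,
                           struct scatterlist *dst, int nd)
    {
            /* destination was mapped FROM_DEVICE, so unmap it that way too */
            dma_unmap_sg(dev, dst, nd, DMA_FROM_DEVICE);
            dma_unmap_sg(dev, src, ns, DMA_TO_DEVICE);
    }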
@@ -512,7 +512,7 @@ static int atmel_tdes_crypt_start(struct atmel_tdes_dev *dd)
 
 	if (err && (dd->flags & TDES_FLAGS_FAST)) {
 		dma_unmap_sg(dd->dev, dd->in_sg, 1, DMA_TO_DEVICE);
-		dma_unmap_sg(dd->dev, dd->out_sg, 1, DMA_TO_DEVICE);
+		dma_unmap_sg(dd->dev, dd->out_sg, 1, DMA_FROM_DEVICE);
 	}
 
 	return err;
@@ -592,8 +592,8 @@ static int init_clocks(struct device *dev, const struct caam_imx_data *data)
 	int ret;
 
 	ctrlpriv->num_clks = data->num_clks;
-	ctrlpriv->clks = devm_kmemdup(dev, data->clks,
-				      data->num_clks * sizeof(data->clks[0]),
+	ctrlpriv->clks = devm_kmemdup_array(dev, data->clks,
+					    data->num_clks, sizeof(*data->clks),
 				      GFP_KERNEL);
 	if (!ctrlpriv->clks)
 		return -ENOMEM;
@@ -703,12 +703,12 @@ static int caam_ctrl_rng_init(struct device *dev)
 	 */
 	if (needs_entropy_delay_adjustment())
 		ent_delay = 12000;
-	if (!(ctrlpriv->rng4_sh_init || inst_handles)) {
+	if (!inst_handles) {
 		dev_info(dev,
 			 "Entropy delay = %u\n",
 			 ent_delay);
 		kick_trng(dev, ent_delay);
-		ent_delay += 400;
+		ent_delay = ent_delay * 2;
 	}
 	/*
 	 * if instantiate_rng(...) fails, the loop will rerun
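devm_kmemdup_array() takes the element count and size as separate arguments,
so the count * size multiplication is overflow-checked for the caller, the
same way kcalloc() improves on kzalloc(n * size). Usage sketch (hypothetical
struct):

    #include <linux/device.h>

    struct demo_clk { const char *name; };

    static struct demo_clk *demo_dup(struct device *dev,
                                     const struct demo_clk *src, size_t n)
    {
            /* overflow-checked equivalent of
             * devm_kmemdup(dev, src, n * sizeof(*src), GFP_KERNEL) */
            return devm_kmemdup_array(dev, src, n, sizeof(*src), GFP_KERNEL);
    }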
@@ -74,7 +74,7 @@ struct attribute_group psp_security_attr_group = {
 	.is_visible = psp_security_is_visible,
 };
 
-static int psp_poulate_hsti(struct psp_device *psp)
+static int psp_populate_hsti(struct psp_device *psp)
 {
 	struct hsti_request *req;
 	int ret;
@@ -84,11 +84,11 @@ static int psp_poulate_hsti(struct psp_device *psp)
 		return 0;
 
 	/* Allocate command-response buffer */
-	req = kzalloc(sizeof(*req), GFP_KERNEL | __GFP_ZERO);
+	req = kzalloc(sizeof(*req), GFP_KERNEL);
 	if (!req)
 		return -ENOMEM;
 
-	req->header.payload_size = sizeof(req);
+	req->header.payload_size = sizeof(*req);
 
 	ret = psp_send_platform_access_msg(PSP_CMD_HSTI_QUERY, (struct psp_request *)req);
 	if (ret)
@@ -114,7 +114,7 @@ int psp_init_hsti(struct psp_device *psp)
 	int ret;
 
 	if (PSP_FEATURE(psp, HSTI)) {
-		ret = psp_poulate_hsti(psp);
+		ret = psp_populate_hsti(psp);
 		if (ret)
 			return ret;
 	}
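Besides the spelling fix, the hsti.c hunk above corrects two classic C slips:
kzalloc() already implies __GFP_ZERO, and sizeof(req) measures the pointer,
not the structure it points to, so the reported payload size was 8 bytes on
64-bit. A standalone illustration:

    #include <stdio.h>
    #include <stdlib.h>

    struct hsti_like { unsigned int payload_size; char payload[120]; };

    int main(void)
    {
            struct hsti_like *req = calloc(1, sizeof(*req)); /* ~kzalloc */

            if (!req)
                    return 1;
            printf("sizeof(req)  = %zu\n", sizeof(req));  /* pointer: 8 on 64-bit */
            printf("sizeof(*req) = %zu\n", sizeof(*req)); /* the whole struct */
            free(req);
            return 0;
    }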
@@ -249,6 +249,8 @@ static int sev_cmd_buffer_len(int cmd)
 	case SEV_CMD_SNP_GUEST_REQUEST:	return sizeof(struct sev_data_snp_guest_request);
 	case SEV_CMD_SNP_CONFIG:	return sizeof(struct sev_user_data_snp_config);
 	case SEV_CMD_SNP_COMMIT:	return sizeof(struct sev_data_snp_commit);
+	case SEV_CMD_SNP_FEATURE_INFO:	return sizeof(struct sev_data_snp_feature_info);
+	case SEV_CMD_SNP_VLEK_LOAD:	return sizeof(struct sev_user_data_snp_vlek_load);
 	default:			return 0;
 	}
 
@@ -862,9 +864,10 @@ static int __sev_do_cmd_locked(int cmd, void *data, int *psp_ret)
 	struct sev_device *sev;
 	unsigned int cmdbuff_hi, cmdbuff_lo;
 	unsigned int phys_lsb, phys_msb;
-	unsigned int reg, ret = 0;
+	unsigned int reg;
 	void *cmd_buf;
 	int buf_len;
+	int ret = 0;
 
 	if (!psp || !psp->sev_data)
 		return -ENODEV;
@@ -1248,6 +1251,88 @@ static void snp_leak_hv_fixed_pages(void)
 			       1 << entry->order, false);
 }
 
+bool sev_is_snp_ciphertext_hiding_supported(void)
+{
+	struct psp_device *psp = psp_master;
+	struct sev_device *sev;
+
+	if (!psp || !psp->sev_data)
+		return false;
+
+	sev = psp->sev_data;
+
+	/*
+	 * Feature information indicates if CipherTextHiding feature is
+	 * supported by the SEV firmware and additionally platform status
+	 * indicates if CipherTextHiding feature is enabled in the
+	 * Platform BIOS.
+	 */
+	return ((sev->snp_feat_info_0.ecx & SNP_CIPHER_TEXT_HIDING_SUPPORTED) &&
+		sev->snp_plat_status.ciphertext_hiding_cap);
+}
+EXPORT_SYMBOL_GPL(sev_is_snp_ciphertext_hiding_supported);
+
+static int snp_get_platform_data(struct sev_device *sev, int *error)
+{
+	struct sev_data_snp_feature_info snp_feat_info;
+	struct snp_feature_info *feat_info;
+	struct sev_data_snp_addr buf;
+	struct page *page;
+	int rc;
+
+	/*
+	 * This function is expected to be called before SNP is
+	 * initialized.
+	 */
+	if (sev->snp_initialized)
+		return -EINVAL;
+
+	buf.address = __psp_pa(&sev->snp_plat_status);
+	rc = sev_do_cmd(SEV_CMD_SNP_PLATFORM_STATUS, &buf, error);
+	if (rc) {
+		dev_err(sev->dev, "SNP PLATFORM_STATUS command failed, ret = %d, error = %#x\n",
+			rc, *error);
+		return rc;
+	}
+
+	sev->api_major = sev->snp_plat_status.api_major;
+	sev->api_minor = sev->snp_plat_status.api_minor;
+	sev->build = sev->snp_plat_status.build_id;
+
+	/*
+	 * Do feature discovery of the currently loaded firmware,
+	 * and cache feature information from CPUID 0x8000_0024,
+	 * sub-function 0.
+	 */
+	if (!sev->snp_plat_status.feature_info)
+		return 0;
+
+	/*
+	 * Use dynamically allocated structure for the SNP_FEATURE_INFO
+	 * command to ensure structure is 8-byte aligned, and does not
+	 * cross a page boundary.
+	 */
+	page = alloc_page(GFP_KERNEL);
+	if (!page)
+		return -ENOMEM;
+
+	feat_info = page_address(page);
+	snp_feat_info.length = sizeof(snp_feat_info);
+	snp_feat_info.ecx_in = 0;
+	snp_feat_info.feature_info_paddr = __psp_pa(feat_info);
+
+	rc = sev_do_cmd(SEV_CMD_SNP_FEATURE_INFO, &snp_feat_info, error);
+	if (!rc)
+		sev->snp_feat_info_0 = *feat_info;
+	else
+		dev_err(sev->dev, "SNP FEATURE_INFO command failed, ret = %d, error = %#x\n",
			rc, *error);
+
+	__free_page(page);
+
+	return rc;
+}
+
 static int snp_filter_reserved_mem_regions(struct resource *rs, void *arg)
 {
 	struct sev_data_range_list *range_list = arg;
@@ -1278,7 +1363,7 @@ static int snp_filter_reserved_mem_regions(struct resource *rs, void *arg)
 	return 0;
 }
 
-static int __sev_snp_init_locked(int *error)
+static int __sev_snp_init_locked(int *error, unsigned int max_snp_asid)
 {
 	struct psp_device *psp = psp_master;
 	struct sev_data_snp_init_ex data;
@@ -1345,6 +1430,12 @@ static int __sev_snp_init_locked(int *error)
 		snp_add_hv_fixed_pages(sev, snp_range_list);
 
 	memset(&data, 0, sizeof(data));
+
+	if (max_snp_asid) {
+		data.ciphertext_hiding_en = 1;
+		data.max_snp_asid = max_snp_asid;
+	}
+
 	data.init_rmp = 1;
 	data.list_paddr_en = 1;
 	data.list_paddr = __psp_pa(snp_range_list);
@@ -1468,7 +1559,7 @@ static int __sev_platform_init_locked(int *error)
 
 	sev = psp_master->sev_data;
 
-	if (sev->state == SEV_STATE_INIT)
+	if (sev->sev_plat_status.state == SEV_STATE_INIT)
 		return 0;
 
 	__sev_platform_init_handle_tmr(sev);
@@ -1500,7 +1591,7 @@ static int __sev_platform_init_locked(int *error)
 		return rc;
 	}
 
-	sev->state = SEV_STATE_INIT;
+	sev->sev_plat_status.state = SEV_STATE_INIT;
 
 	/* Prepare for first SEV guest launch after INIT */
 	wbinvd_on_all_cpus();
@@ -1538,10 +1629,10 @@ static int _sev_platform_init_locked(struct sev_platform_init_args *args)
 
 	sev = psp_master->sev_data;
 
-	if (sev->state == SEV_STATE_INIT)
+	if (sev->sev_plat_status.state == SEV_STATE_INIT)
 		return 0;
 
-	rc = __sev_snp_init_locked(&args->error);
+	rc = __sev_snp_init_locked(&args->error, args->max_snp_asid);
 	if (rc && rc != -ENODEV)
 		return rc;
 
@@ -1575,7 +1666,7 @@ static int __sev_platform_shutdown_locked(int *error)
 
 	sev = psp->sev_data;
 
-	if (sev->state == SEV_STATE_UNINIT)
+	if (sev->sev_plat_status.state == SEV_STATE_UNINIT)
 		return 0;
 
 	ret = __sev_do_cmd_locked(SEV_CMD_SHUTDOWN, NULL, error);
@@ -1585,7 +1676,7 @@ static int __sev_platform_shutdown_locked(int *error)
 		return ret;
 	}
 
-	sev->state = SEV_STATE_UNINIT;
+	sev->sev_plat_status.state = SEV_STATE_UNINIT;
 	dev_dbg(sev->dev, "SEV firmware shutdown\n");
 
 	return ret;
@@ -1624,7 +1715,7 @@ static int snp_move_to_init_state(struct sev_issue_cmd *argp, bool *shutdown_required)
 {
 	int error, rc;
 
-	rc = __sev_snp_init_locked(&error);
+	rc = __sev_snp_init_locked(&error, 0);
 	if (rc) {
 		argp->error = SEV_RET_INVALID_PLATFORM_STATE;
 		return rc;
@@ -1693,7 +1784,7 @@ static int sev_ioctl_do_pek_pdh_gen(int cmd, struct sev_issue_cmd *argp, bool writable)
 	if (!writable)
 		return -EPERM;
 
-	if (sev->state == SEV_STATE_UNINIT) {
+	if (sev->sev_plat_status.state == SEV_STATE_UNINIT) {
 		rc = sev_move_to_init_state(argp, &shutdown_required);
 		if (rc)
 			return rc;
@@ -1742,7 +1833,7 @@ static int sev_ioctl_do_pek_csr(struct sev_issue_cmd *argp, bool writable)
 	data.len = input.length;
 
 cmd:
-	if (sev->state == SEV_STATE_UNINIT) {
+	if (sev->sev_plat_status.state == SEV_STATE_UNINIT) {
 		ret = sev_move_to_init_state(argp, &shutdown_required);
 		if (ret)
 			goto e_free_blob;
@@ -1790,6 +1881,16 @@ static int sev_get_api_version(void)
 	struct sev_user_data_status status;
 	int error = 0, ret;
 
+	/*
+	 * Cache SNP platform status and SNP feature information
+	 * if SNP is available.
+	 */
+	if (cc_platform_has(CC_ATTR_HOST_SEV_SNP)) {
+		ret = snp_get_platform_data(sev, &error);
+		if (ret)
+			return 1;
+	}
+
 	ret = sev_platform_status(&status, &error);
 	if (ret) {
 		dev_err(sev->dev,
@@ -1797,10 +1898,12 @@ static int sev_get_api_version(void)
 		return 1;
 	}
 
+	/* Cache SEV platform status */
+	sev->sev_plat_status = status;
+
 	sev->api_major = status.api_major;
 	sev->api_minor = status.api_minor;
 	sev->build = status.build;
-	sev->state = status.state;
 
 	return 0;
 }
@@ -2029,7 +2132,7 @@ static int sev_ioctl_do_pek_import(struct sev_issue_cmd *argp, bool writable)
 	data.oca_cert_len = input.oca_cert_len;
 
 	/* If platform is not in INIT state then transition it to INIT */
-	if (sev->state != SEV_STATE_INIT) {
+	if (sev->sev_plat_status.state != SEV_STATE_INIT) {
 		ret = sev_move_to_init_state(argp, &shutdown_required);
 		if (ret)
 			goto e_free_oca;
@@ -2200,7 +2303,7 @@ static int sev_ioctl_do_pdh_export(struct sev_issue_cmd *argp, bool writable)
 
 cmd:
 	/* If platform is not in INIT state then transition it to INIT. */
-	if (sev->state != SEV_STATE_INIT) {
+	if (sev->sev_plat_status.state != SEV_STATE_INIT) {
 		if (!writable) {
 			ret = -EPERM;
 			goto e_free_cert;
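The extra __sev_snp_init_locked() parameter threads the ciphertext-hiding
ASID limit down from the platform-init args; internal callers pass 0 to leave
the feature off. A hedged sketch of how a hypervisor-side caller might gate
it (the real KVM wiring may differ; demo_* names are illustrative):

    #include <linux/psp-sev.h>

    static int demo_snp_init(unsigned int want_max_snp_asid)
    {
            struct sev_platform_init_args args = {};

            /* Request ciphertext hiding only when firmware and BIOS both
             * report support; max_snp_asid == 0 keeps it disabled. */
            if (want_max_snp_asid && sev_is_snp_ciphertext_hiding_supported())
                    args.max_snp_asid = want_max_snp_asid;

            return sev_platform_init(&args);
    }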
@@ -42,7 +42,6 @@ struct sev_device {
 
 	struct sev_vdata *vdata;
 
-	int state;
 	unsigned int int_rcvd;
 	wait_queue_head_t int_queue;
 	struct sev_misc_dev *misc;
@@ -57,6 +56,11 @@ struct sev_device {
 	bool cmd_buf_backup_active;
 
 	bool snp_initialized;
+
+	struct sev_user_data_status sev_plat_status;
+
+	struct sev_user_data_snp_status snp_plat_status;
+	struct snp_feature_info snp_feat_info_0;
 };
 
 int sev_dev_init(struct psp_device *psp);
@@ -4,9 +4,9 @@ config CRYPTO_DEV_CHELSIO
 	depends on CHELSIO_T4
 	select CRYPTO_LIB_AES
 	select CRYPTO_LIB_GF128MUL
-	select CRYPTO_SHA1
-	select CRYPTO_SHA256
-	select CRYPTO_SHA512
+	select CRYPTO_LIB_SHA1
+	select CRYPTO_LIB_SHA256
+	select CRYPTO_LIB_SHA512
 	select CRYPTO_AUTHENC
 	help
 	  The Chelsio Crypto Co-processor driver for T6 adapters.
@@ -51,7 +51,6 @@
 
 #include <crypto/aes.h>
 #include <crypto/algapi.h>
-#include <crypto/hash.h>
 #include <crypto/gcm.h>
 #include <crypto/sha1.h>
 #include <crypto/sha2.h>
@@ -277,88 +276,60 @@ static void get_aes_decrypt_key(unsigned char *dec_key,
 		}
 	}
 
-static struct crypto_shash *chcr_alloc_shash(unsigned int ds)
+static int chcr_prepare_hmac_key(const u8 *raw_key, unsigned int raw_key_len,
+				 int digestsize, void *istate, void *ostate)
 {
-	struct crypto_shash *base_hash = ERR_PTR(-EINVAL);
+	__be32 *istate32 = istate, *ostate32 = ostate;
+	__be64 *istate64 = istate, *ostate64 = ostate;
+	union {
+		struct hmac_sha1_key sha1;
+		struct hmac_sha224_key sha224;
+		struct hmac_sha256_key sha256;
+		struct hmac_sha384_key sha384;
+		struct hmac_sha512_key sha512;
+	} k;
 
-	switch (ds) {
+	switch (digestsize) {
 	case SHA1_DIGEST_SIZE:
-		base_hash = crypto_alloc_shash("sha1", 0, 0);
+		hmac_sha1_preparekey(&k.sha1, raw_key, raw_key_len);
+		for (int i = 0; i < ARRAY_SIZE(k.sha1.istate.h); i++) {
+			istate32[i] = cpu_to_be32(k.sha1.istate.h[i]);
+			ostate32[i] = cpu_to_be32(k.sha1.ostate.h[i]);
+		}
 		break;
 	case SHA224_DIGEST_SIZE:
-		base_hash = crypto_alloc_shash("sha224", 0, 0);
+		hmac_sha224_preparekey(&k.sha224, raw_key, raw_key_len);
+		for (int i = 0; i < ARRAY_SIZE(k.sha224.key.istate.h); i++) {
+			istate32[i] = cpu_to_be32(k.sha224.key.istate.h[i]);
+			ostate32[i] = cpu_to_be32(k.sha224.key.ostate.h[i]);
+		}
 		break;
 	case SHA256_DIGEST_SIZE:
-		base_hash = crypto_alloc_shash("sha256", 0, 0);
+		hmac_sha256_preparekey(&k.sha256, raw_key, raw_key_len);
+		for (int i = 0; i < ARRAY_SIZE(k.sha256.key.istate.h); i++) {
+			istate32[i] = cpu_to_be32(k.sha256.key.istate.h[i]);
+			ostate32[i] = cpu_to_be32(k.sha256.key.ostate.h[i]);
+		}
 		break;
 	case SHA384_DIGEST_SIZE:
-		base_hash = crypto_alloc_shash("sha384", 0, 0);
+		hmac_sha384_preparekey(&k.sha384, raw_key, raw_key_len);
+		for (int i = 0; i < ARRAY_SIZE(k.sha384.key.istate.h); i++) {
+			istate64[i] = cpu_to_be64(k.sha384.key.istate.h[i]);
+			ostate64[i] = cpu_to_be64(k.sha384.key.ostate.h[i]);
+		}
 		break;
 	case SHA512_DIGEST_SIZE:
-		base_hash = crypto_alloc_shash("sha512", 0, 0);
+		hmac_sha512_preparekey(&k.sha512, raw_key, raw_key_len);
+		for (int i = 0; i < ARRAY_SIZE(k.sha512.key.istate.h); i++) {
+			istate64[i] = cpu_to_be64(k.sha512.key.istate.h[i]);
+			ostate64[i] = cpu_to_be64(k.sha512.key.ostate.h[i]);
+		}
 		break;
+	default:
+		return -EINVAL;
 	}
-
-	return base_hash;
-}
-
-static int chcr_compute_partial_hash(struct shash_desc *desc,
-				     char *iopad, char *result_hash,
-				     int digest_size)
-{
-	struct sha1_state sha1_st;
-	struct sha256_state sha256_st;
-	struct sha512_state sha512_st;
-	int error;
-
-	if (digest_size == SHA1_DIGEST_SIZE) {
-		error = crypto_shash_init(desc) ?:
-			crypto_shash_update(desc, iopad, SHA1_BLOCK_SIZE) ?:
-			crypto_shash_export_core(desc, &sha1_st);
-		memcpy(result_hash, sha1_st.state, SHA1_DIGEST_SIZE);
-	} else if (digest_size == SHA224_DIGEST_SIZE) {
-		error = crypto_shash_init(desc) ?:
-			crypto_shash_update(desc, iopad, SHA256_BLOCK_SIZE) ?:
-			crypto_shash_export_core(desc, &sha256_st);
-		memcpy(result_hash, sha256_st.state, SHA256_DIGEST_SIZE);
-
-	} else if (digest_size == SHA256_DIGEST_SIZE) {
-		error = crypto_shash_init(desc) ?:
-			crypto_shash_update(desc, iopad, SHA256_BLOCK_SIZE) ?:
-			crypto_shash_export_core(desc, &sha256_st);
-		memcpy(result_hash, sha256_st.state, SHA256_DIGEST_SIZE);
-
-	} else if (digest_size == SHA384_DIGEST_SIZE) {
-		error = crypto_shash_init(desc) ?:
-			crypto_shash_update(desc, iopad, SHA512_BLOCK_SIZE) ?:
-			crypto_shash_export_core(desc, &sha512_st);
-		memcpy(result_hash, sha512_st.state, SHA512_DIGEST_SIZE);
-
-	} else if (digest_size == SHA512_DIGEST_SIZE) {
-		error = crypto_shash_init(desc) ?:
-			crypto_shash_update(desc, iopad, SHA512_BLOCK_SIZE) ?:
-			crypto_shash_export_core(desc, &sha512_st);
-		memcpy(result_hash, sha512_st.state, SHA512_DIGEST_SIZE);
-	} else {
-		error = -EINVAL;
-		pr_err("Unknown digest size %d\n", digest_size);
-	}
-	return error;
-}
-
-static void chcr_change_order(char *buf, int ds)
-{
-	int i;
-
-	if (ds == SHA512_DIGEST_SIZE) {
-		for (i = 0; i < (ds / sizeof(u64)); i++)
-			*((__be64 *)buf + i) =
-				cpu_to_be64(*((u64 *)buf + i));
-	} else {
-		for (i = 0; i < (ds / sizeof(u32)); i++)
-			*((__be32 *)buf + i) =
-				cpu_to_be32(*((u32 *)buf + i));
-	}
+	memzero_explicit(&k, sizeof(k));
+	return 0;
 }
 
 static inline int is_hmac(struct crypto_tfm *tfm)
@@ -1547,11 +1518,6 @@ static int get_alg_config(struct algo_param *params,
 	return 0;
 }
 
-static inline void chcr_free_shash(struct crypto_shash *base_hash)
-{
-	crypto_free_shash(base_hash);
-}
-
 /**
  * create_hash_wr - Create hash work request
  * @req: Cipher req base
@@ -2202,53 +2168,13 @@ static int chcr_ahash_setkey(struct crypto_ahash *tfm, const u8 *key,
 			     unsigned int keylen)
 {
 	struct hmac_ctx *hmacctx = HMAC_CTX(h_ctx(tfm));
-	unsigned int digestsize = crypto_ahash_digestsize(tfm);
-	unsigned int bs = crypto_tfm_alg_blocksize(crypto_ahash_tfm(tfm));
-	unsigned int i, err = 0, updated_digestsize;
-
-	SHASH_DESC_ON_STACK(shash, hmacctx->base_hash);
-
 	/* use the key to calculate the ipad and opad. ipad will sent with the
 	 * first request's data. opad will be sent with the final hash result
 	 * ipad in hmacctx->ipad and opad in hmacctx->opad location
 	 */
-	shash->tfm = hmacctx->base_hash;
-	if (keylen > bs) {
-		err = crypto_shash_digest(shash, key, keylen,
-					  hmacctx->ipad);
-		if (err)
-			goto out;
-		keylen = digestsize;
-	} else {
-		memcpy(hmacctx->ipad, key, keylen);
-	}
-	memset(hmacctx->ipad + keylen, 0, bs - keylen);
-	unsafe_memcpy(hmacctx->opad, hmacctx->ipad, bs,
-		      "fortified memcpy causes -Wrestrict warning");
-
-	for (i = 0; i < bs / sizeof(int); i++) {
-		*((unsigned int *)(&hmacctx->ipad) + i) ^= IPAD_DATA;
-		*((unsigned int *)(&hmacctx->opad) + i) ^= OPAD_DATA;
-	}
-
-	updated_digestsize = digestsize;
-	if (digestsize == SHA224_DIGEST_SIZE)
-		updated_digestsize = SHA256_DIGEST_SIZE;
-	else if (digestsize == SHA384_DIGEST_SIZE)
-		updated_digestsize = SHA512_DIGEST_SIZE;
-	err = chcr_compute_partial_hash(shash, hmacctx->ipad,
-					hmacctx->ipad, digestsize);
-	if (err)
-		goto out;
-	chcr_change_order(hmacctx->ipad, updated_digestsize);
-
-	err = chcr_compute_partial_hash(shash, hmacctx->opad,
-					hmacctx->opad, digestsize);
-	if (err)
-		goto out;
-	chcr_change_order(hmacctx->opad, updated_digestsize);
-out:
-	return err;
+	return chcr_prepare_hmac_key(key, keylen, crypto_ahash_digestsize(tfm),
+				     hmacctx->ipad, hmacctx->opad);
 }
 
 static int chcr_aes_xts_setkey(struct crypto_skcipher *cipher, const u8 *key,
@@ -2344,30 +2270,11 @@ static int chcr_hmac_init(struct ahash_request *areq)
 
 static int chcr_hmac_cra_init(struct crypto_tfm *tfm)
 {
-	struct chcr_context *ctx = crypto_tfm_ctx(tfm);
-	struct hmac_ctx *hmacctx = HMAC_CTX(ctx);
-	unsigned int digestsize =
-		crypto_ahash_digestsize(__crypto_ahash_cast(tfm));
-
 	crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm),
				 sizeof(struct chcr_ahash_req_ctx));
-	hmacctx->base_hash = chcr_alloc_shash(digestsize);
-	if (IS_ERR(hmacctx->base_hash))
-		return PTR_ERR(hmacctx->base_hash);
 	return chcr_device_init(crypto_tfm_ctx(tfm));
 }
 
-static void chcr_hmac_cra_exit(struct crypto_tfm *tfm)
-{
-	struct chcr_context *ctx = crypto_tfm_ctx(tfm);
-	struct hmac_ctx *hmacctx = HMAC_CTX(ctx);
-
-	if (hmacctx->base_hash) {
-		chcr_free_shash(hmacctx->base_hash);
-		hmacctx->base_hash = NULL;
-	}
-}
-
 inline void chcr_aead_common_exit(struct aead_request *req)
 {
 	struct chcr_aead_reqctx *reqctx = aead_request_ctx_dma(req);
@@ -3557,15 +3464,12 @@ static int chcr_authenc_setkey(struct crypto_aead *authenc, const u8 *key,
 	struct chcr_authenc_ctx *actx = AUTHENC_CTX(aeadctx);
 	/* it contains auth and cipher key both*/
 	struct crypto_authenc_keys keys;
-	unsigned int bs, subtype;
+	unsigned int subtype;
 	unsigned int max_authsize = crypto_aead_alg(authenc)->maxauthsize;
-	int err = 0, i, key_ctx_len = 0;
+	int err = 0, key_ctx_len = 0;
 	unsigned char ck_size = 0;
-	unsigned char pad[CHCR_HASH_MAX_BLOCK_SIZE_128] = { 0 };
-	struct crypto_shash *base_hash = ERR_PTR(-EINVAL);
 	struct algo_param param;
 	int align;
-	u8 *o_ptr = NULL;
 
 	crypto_aead_clear_flags(aeadctx->sw_cipher, CRYPTO_TFM_REQ_MASK);
 	crypto_aead_set_flags(aeadctx->sw_cipher, crypto_aead_get_flags(authenc)
@@ -3613,68 +3517,26 @@ static int chcr_authenc_setkey(struct crypto_aead *authenc, const u8 *key,
 		get_aes_decrypt_key(actx->dec_rrkey, aeadctx->key,
				    aeadctx->enckey_len << 3);
 	}
-	base_hash = chcr_alloc_shash(max_authsize);
-	if (IS_ERR(base_hash)) {
-		pr_err("Base driver cannot be loaded\n");
-		goto out;
-	}
-	{
-		SHASH_DESC_ON_STACK(shash, base_hash);
-
-		shash->tfm = base_hash;
-		bs = crypto_shash_blocksize(base_hash);
 	align = KEYCTX_ALIGN_PAD(max_authsize);
-		o_ptr = actx->h_iopad + param.result_size + align;
-
-		if (keys.authkeylen > bs) {
-			err = crypto_shash_digest(shash, keys.authkey,
-						  keys.authkeylen,
-						  o_ptr);
-			if (err) {
-				pr_err("Base driver cannot be loaded\n");
-				goto out;
-			}
-			keys.authkeylen = max_authsize;
-		} else
-			memcpy(o_ptr, keys.authkey, keys.authkeylen);
-
-		/* Compute the ipad-digest*/
-		memset(pad + keys.authkeylen, 0, bs - keys.authkeylen);
-		memcpy(pad, o_ptr, keys.authkeylen);
-		for (i = 0; i < bs >> 2; i++)
-			*((unsigned int *)pad + i) ^= IPAD_DATA;
-
-		if (chcr_compute_partial_hash(shash, pad, actx->h_iopad,
-					      max_authsize))
-			goto out;
-		/* Compute the opad-digest */
-		memset(pad + keys.authkeylen, 0, bs - keys.authkeylen);
-		memcpy(pad, o_ptr, keys.authkeylen);
-		for (i = 0; i < bs >> 2; i++)
-			*((unsigned int *)pad + i) ^= OPAD_DATA;
-
-		if (chcr_compute_partial_hash(shash, pad, o_ptr, max_authsize))
-			goto out;
+	err = chcr_prepare_hmac_key(keys.authkey, keys.authkeylen, max_authsize,
+				    actx->h_iopad,
+				    actx->h_iopad + param.result_size + align);
+	if (err)
+		goto out;
 
-		/* convert the ipad and opad digest to network order */
-		chcr_change_order(actx->h_iopad, param.result_size);
-		chcr_change_order(o_ptr, param.result_size);
-		key_ctx_len = sizeof(struct _key_ctx) +
-			roundup(keys.enckeylen, 16) +
-			(param.result_size + align) * 2;
-		aeadctx->key_ctx_hdr = FILL_KEY_CTX_HDR(ck_size, param.mk_size,
-						0, 1, key_ctx_len >> 4);
-		actx->auth_mode = param.auth_mode;
-		chcr_free_shash(base_hash);
+	key_ctx_len = sizeof(struct _key_ctx) + roundup(keys.enckeylen, 16) +
		      (param.result_size + align) * 2;
+	aeadctx->key_ctx_hdr = FILL_KEY_CTX_HDR(ck_size, param.mk_size, 0, 1,
						key_ctx_len >> 4);
+	actx->auth_mode = param.auth_mode;
 
-		memzero_explicit(&keys, sizeof(keys));
-		return 0;
-	}
+	memzero_explicit(&keys, sizeof(keys));
+	return 0;
 out:
 	aeadctx->enckey_len = 0;
 	memzero_explicit(&keys, sizeof(keys));
-	if (!IS_ERR(base_hash))
-		chcr_free_shash(base_hash);
 	return -EINVAL;
 }
 
@@ -4490,7 +4352,6 @@ static int chcr_register_alg(void)
 
 		if (driver_algs[i].type == CRYPTO_ALG_TYPE_HMAC) {
 			a_hash->halg.base.cra_init = chcr_hmac_cra_init;
-			a_hash->halg.base.cra_exit = chcr_hmac_cra_exit;
 			a_hash->init = chcr_hmac_init;
 			a_hash->setkey = chcr_ahash_setkey;
 			a_hash->halg.base.cra_ctxsize = SZ_AHASH_H_CTX;
@@ -241,7 +241,6 @@ struct chcr_aead_ctx {
 };
 
 struct hmac_ctx {
-	struct crypto_shash *base_hash;
 	u8 ipad[CHCR_HASH_MAX_BLOCK_SIZE_128];
 	u8 opad[CHCR_HASH_MAX_BLOCK_SIZE_128];
 };
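Dropping base_hash from hmac_ctx is possible because the chcr rework above
computes the HMAC inner/outer midstates directly with the library preparekey
helpers instead of driving a full shash through ipad/opad by hand. The same
idea in isolation for the SHA-256 case (a sketch, not the driver code):

    #include <crypto/sha2.h>
    #include <linux/kernel.h>
    #include <linux/string.h>

    /* Precompute HMAC-SHA256 midstates once at setkey time; hardware can
     * then be fed the two states instead of the raw key. */
    static void demo_prepare_hmac256(const u8 *key, unsigned int keylen,
                                     __be32 *istate, __be32 *ostate)
    {
            struct hmac_sha256_key k;
            int i;

            hmac_sha256_preparekey(&k, key, keylen);
            for (i = 0; i < ARRAY_SIZE(k.key.istate.h); i++) {
                    istate[i] = cpu_to_be32(k.key.istate.h[i]);
                    ostate[i] = cpu_to_be32(k.key.ostate.h[i]);
            }
            memzero_explicit(&k, sizeof(k));    /* key material is sensitive */
    }

This also removes the overlong-key special case, since preparekey hashes
keys longer than the block size internally.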
@@ -888,6 +888,7 @@ static int qm_diff_regs_init(struct hisi_qm *qm,
 		dfx_regs_uninit(qm, qm->debug.qm_diff_regs, ARRAY_SIZE(qm_diff_regs));
 		ret = PTR_ERR(qm->debug.acc_diff_regs);
 		qm->debug.acc_diff_regs = NULL;
+		qm->debug.qm_diff_regs = NULL;
 		return ret;
 	}
@ -39,6 +39,7 @@
|
||||||
#define HPRE_HAC_RAS_NFE_ENB 0x301414
|
#define HPRE_HAC_RAS_NFE_ENB 0x301414
|
||||||
#define HPRE_HAC_RAS_FE_ENB 0x301418
|
#define HPRE_HAC_RAS_FE_ENB 0x301418
|
||||||
#define HPRE_HAC_INT_SET 0x301500
|
#define HPRE_HAC_INT_SET 0x301500
|
||||||
|
#define HPRE_AXI_ERROR_MASK GENMASK(21, 10)
|
||||||
#define HPRE_RNG_TIMEOUT_NUM 0x301A34
|
#define HPRE_RNG_TIMEOUT_NUM 0x301A34
|
||||||
#define HPRE_CORE_INT_ENABLE 0
|
#define HPRE_CORE_INT_ENABLE 0
|
||||||
#define HPRE_RDCHN_INI_ST 0x301a00
|
#define HPRE_RDCHN_INI_ST 0x301a00
|
||||||
|
|
@ -78,6 +79,11 @@
|
||||||
#define HPRE_PREFETCH_ENABLE (~(BIT(0) | BIT(30)))
|
#define HPRE_PREFETCH_ENABLE (~(BIT(0) | BIT(30)))
|
||||||
#define HPRE_PREFETCH_DISABLE BIT(30)
|
#define HPRE_PREFETCH_DISABLE BIT(30)
|
||||||
#define HPRE_SVA_DISABLE_READY (BIT(4) | BIT(8))
|
#define HPRE_SVA_DISABLE_READY (BIT(4) | BIT(8))
|
||||||
|
#define HPRE_SVA_PREFTCH_DFX4 0x301144
|
||||||
|
#define HPRE_WAIT_SVA_READY 500000
|
||||||
|
#define HPRE_READ_SVA_STATUS_TIMES 3
|
||||||
|
#define HPRE_WAIT_US_MIN 10
|
||||||
|
#define HPRE_WAIT_US_MAX 20
|
||||||
|
|
||||||
/* clock gate */
|
/* clock gate */
|
||||||
#define HPRE_CLKGATE_CTL 0x301a10
|
#define HPRE_CLKGATE_CTL 0x301a10
|
||||||
|
|
@ -466,6 +472,33 @@ struct hisi_qp *hpre_create_qp(u8 type)
|
||||||
return NULL;
|
return NULL;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
static int hpre_wait_sva_ready(struct hisi_qm *qm)
|
||||||
|
{
|
||||||
|
u32 val, try_times = 0;
|
||||||
|
u8 count = 0;
|
||||||
|
|
||||||
|
/*
|
||||||
|
* Read the register value every 10-20us. If the value is 0 for three
|
||||||
|
* consecutive times, the SVA module is ready.
|
||||||
|
*/
|
||||||
|
do {
|
||||||
|
val = readl(qm->io_base + HPRE_SVA_PREFTCH_DFX4);
|
||||||
|
if (val)
|
||||||
|
count = 0;
|
||||||
|
else if (++count == HPRE_READ_SVA_STATUS_TIMES)
|
||||||
|
break;
|
||||||
|
|
||||||
|
usleep_range(HPRE_WAIT_US_MIN, HPRE_WAIT_US_MAX);
|
||||||
|
} while (++try_times < HPRE_WAIT_SVA_READY);
|
||||||
|
|
||||||
|
if (try_times == HPRE_WAIT_SVA_READY) {
|
||||||
|
pci_err(qm->pdev, "failed to wait sva prefetch ready\n");
|
||||||
|
return -ETIMEDOUT;
|
||||||
|
}
|
||||||
|
|
||||||
|
return 0;
|
||||||
|
}
|
||||||
|
|
||||||
static void hpre_config_pasid(struct hisi_qm *qm)
|
static void hpre_config_pasid(struct hisi_qm *qm)
|
||||||
{
|
{
|
||||||
u32 val1, val2;
|
u32 val1, val2;
|
||||||
|
|
@ -563,27 +596,6 @@ static void disable_flr_of_bme(struct hisi_qm *qm)
|
||||||
writel(PEH_AXUSER_CFG_ENABLE, qm->io_base + QM_PEH_AXUSER_CFG_ENABLE);
|
writel(PEH_AXUSER_CFG_ENABLE, qm->io_base + QM_PEH_AXUSER_CFG_ENABLE);
|
||||||
}
|
}
|
||||||
|
|
||||||
static void hpre_open_sva_prefetch(struct hisi_qm *qm)
|
|
||||||
{
|
|
||||||
u32 val;
|
|
||||||
int ret;
|
|
||||||
|
|
||||||
if (!test_bit(QM_SUPPORT_SVA_PREFETCH, &qm->caps))
|
|
||||||
return;
|
|
||||||
|
|
||||||
/* Enable prefetch */
|
|
||||||
val = readl_relaxed(qm->io_base + HPRE_PREFETCH_CFG);
|
|
||||||
val &= HPRE_PREFETCH_ENABLE;
|
|
||||||
writel(val, qm->io_base + HPRE_PREFETCH_CFG);
|
|
||||||
|
|
||||||
ret = readl_relaxed_poll_timeout(qm->io_base + HPRE_PREFETCH_CFG,
|
|
||||||
val, !(val & HPRE_PREFETCH_DISABLE),
|
|
||||||
HPRE_REG_RD_INTVRL_US,
|
|
||||||
HPRE_REG_RD_TMOUT_US);
|
|
||||||
if (ret)
|
|
||||||
pci_err(qm->pdev, "failed to open sva prefetch\n");
|
|
||||||
}
|
|
||||||
|
|
||||||
static void hpre_close_sva_prefetch(struct hisi_qm *qm)
|
static void hpre_close_sva_prefetch(struct hisi_qm *qm)
|
||||||
{
|
{
|
||||||
u32 val;
|
u32 val;
|
||||||
|
|
@ -602,6 +614,36 @@ static void hpre_close_sva_prefetch(struct hisi_qm *qm)
|
||||||
HPRE_REG_RD_TMOUT_US);
|
HPRE_REG_RD_TMOUT_US);
|
||||||
if (ret)
|
if (ret)
|
||||||
pci_err(qm->pdev, "failed to close sva prefetch\n");
|
pci_err(qm->pdev, "failed to close sva prefetch\n");
|
||||||
|
|
||||||
|
(void)hpre_wait_sva_ready(qm);
|
||||||
|
}
|
||||||
|
|
||||||
|
static void hpre_open_sva_prefetch(struct hisi_qm *qm)
|
||||||
|
{
|
||||||
|
u32 val;
|
||||||
|
int ret;
|
||||||
|
|
||||||
|
if (!test_bit(QM_SUPPORT_SVA_PREFETCH, &qm->caps))
|
||||||
|
return;
|
||||||
|
|
||||||
|
/* Enable prefetch */
|
||||||
|
val = readl_relaxed(qm->io_base + HPRE_PREFETCH_CFG);
|
||||||
|
val &= HPRE_PREFETCH_ENABLE;
|
||||||
|
writel(val, qm->io_base + HPRE_PREFETCH_CFG);
|
||||||
|
|
||||||
|
ret = readl_relaxed_poll_timeout(qm->io_base + HPRE_PREFETCH_CFG,
|
||||||
|
val, !(val & HPRE_PREFETCH_DISABLE),
|
||||||
|
HPRE_REG_RD_INTVRL_US,
|
||||||
|
HPRE_REG_RD_TMOUT_US);
|
||||||
|
if (ret) {
|
||||||
|
pci_err(qm->pdev, "failed to open sva prefetch\n");
|
||||||
|
hpre_close_sva_prefetch(qm);
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
ret = hpre_wait_sva_ready(qm);
|
||||||
|
if (ret)
|
||||||
|
hpre_close_sva_prefetch(qm);
|
||||||
}
|
}
|
||||||
|
|
||||||
static void hpre_enable_clock_gate(struct hisi_qm *qm)
|
static void hpre_enable_clock_gate(struct hisi_qm *qm)
|
||||||
|
|
@ -721,6 +763,7 @@ static int hpre_set_user_domain_and_cache(struct hisi_qm *qm)
|
||||||
|
|
||||||
/* Config data buffer pasid needed by Kunpeng 920 */
|
/* Config data buffer pasid needed by Kunpeng 920 */
|
||||||
hpre_config_pasid(qm);
|
hpre_config_pasid(qm);
|
||||||
|
hpre_open_sva_prefetch(qm);
|
||||||
|
|
||||||
hpre_enable_clock_gate(qm);
|
hpre_enable_clock_gate(qm);
|
||||||
|
|
||||||
|
|
@ -756,8 +799,7 @@ static void hpre_master_ooo_ctrl(struct hisi_qm *qm, bool enable)
|
||||||
val1 = readl(qm->io_base + HPRE_AM_OOO_SHUTDOWN_ENB);
|
val1 = readl(qm->io_base + HPRE_AM_OOO_SHUTDOWN_ENB);
|
||||||
if (enable) {
|
if (enable) {
|
||||||
val1 |= HPRE_AM_OOO_SHUTDOWN_ENABLE;
|
val1 |= HPRE_AM_OOO_SHUTDOWN_ENABLE;
|
||||||
val2 = hisi_qm_get_hw_info(qm, hpre_basic_info,
|
val2 = qm->err_info.dev_err.shutdown_mask;
|
||||||
HPRE_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver);
|
|
||||||
} else {
|
} else {
|
||||||
val1 &= ~HPRE_AM_OOO_SHUTDOWN_ENABLE;
|
val1 &= ~HPRE_AM_OOO_SHUTDOWN_ENABLE;
|
||||||
val2 = 0x0;
|
val2 = 0x0;
|
||||||
|
|
@ -771,38 +813,33 @@ static void hpre_master_ooo_ctrl(struct hisi_qm *qm, bool enable)
|
||||||
|
|
||||||
static void hpre_hw_error_disable(struct hisi_qm *qm)
|
static void hpre_hw_error_disable(struct hisi_qm *qm)
|
||||||
{
|
{
|
||||||
u32 ce, nfe;
|
struct hisi_qm_err_mask *dev_err = &qm->err_info.dev_err;
|
||||||
|
u32 err_mask = dev_err->ce | dev_err->nfe | dev_err->fe;
|
||||||
ce = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_CE_MASK_CAP, qm->cap_ver);
|
|
||||||
nfe = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_NFE_MASK_CAP, qm->cap_ver);
|
|
||||||
|
|
||||||
/* disable hpre hw error interrupts */
|
/* disable hpre hw error interrupts */
|
||||||
writel(ce | nfe | HPRE_HAC_RAS_FE_ENABLE, qm->io_base + HPRE_INT_MASK);
|
writel(err_mask, qm->io_base + HPRE_INT_MASK);
|
||||||
/* disable HPRE block master OOO when nfe occurs on Kunpeng930 */
|
/* disable HPRE block master OOO when nfe occurs on Kunpeng930 */
|
||||||
hpre_master_ooo_ctrl(qm, false);
|
hpre_master_ooo_ctrl(qm, false);
|
||||||
}
|
}
|
||||||
|
|
||||||
static void hpre_hw_error_enable(struct hisi_qm *qm)
|
static void hpre_hw_error_enable(struct hisi_qm *qm)
|
||||||
{
|
{
|
||||||
u32 ce, nfe, err_en;
|
struct hisi_qm_err_mask *dev_err = &qm->err_info.dev_err;
|
||||||
|
u32 err_mask = dev_err->ce | dev_err->nfe | dev_err->fe;
|
||||||
ce = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_CE_MASK_CAP, qm->cap_ver);
|
|
||||||
nfe = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_NFE_MASK_CAP, qm->cap_ver);
|
|
||||||
|
|
||||||
/* clear HPRE hw error source if having */
|
/* clear HPRE hw error source if having */
|
||||||
writel(ce | nfe | HPRE_HAC_RAS_FE_ENABLE, qm->io_base + HPRE_HAC_SOURCE_INT);
|
writel(err_mask, qm->io_base + HPRE_HAC_SOURCE_INT);
|
||||||
|
|
||||||
/* configure error type */
|
/* configure error type */
|
||||||
writel(ce, qm->io_base + HPRE_RAS_CE_ENB);
|
writel(dev_err->ce, qm->io_base + HPRE_RAS_CE_ENB);
|
||||||
writel(nfe, qm->io_base + HPRE_RAS_NFE_ENB);
|
writel(dev_err->nfe, qm->io_base + HPRE_RAS_NFE_ENB);
|
||||||
writel(HPRE_HAC_RAS_FE_ENABLE, qm->io_base + HPRE_RAS_FE_ENB);
|
writel(dev_err->fe, qm->io_base + HPRE_RAS_FE_ENB);
|
||||||
|
|
||||||
/* enable HPRE block master OOO when nfe occurs on Kunpeng930 */
|
/* enable HPRE block master OOO when nfe occurs on Kunpeng930 */
|
||||||
hpre_master_ooo_ctrl(qm, true);
|
hpre_master_ooo_ctrl(qm, true);
|
||||||
|
|
||||||
/* enable hpre hw error interrupts */
|
/* enable hpre hw error interrupts */
|
||||||
err_en = ce | nfe | HPRE_HAC_RAS_FE_ENABLE;
|
writel(~err_mask, qm->io_base + HPRE_INT_MASK);
|
||||||
writel(~err_en, qm->io_base + HPRE_INT_MASK);
|
|
||||||
}
|
}
|
||||||
|
|
||||||
static inline struct hisi_qm *hpre_file_to_qm(struct hpre_debugfs_file *file)
|
static inline struct hisi_qm *hpre_file_to_qm(struct hpre_debugfs_file *file)
|
||||||
|
|
@ -1171,7 +1208,7 @@ static int hpre_pre_store_cap_reg(struct hisi_qm *qm)
|
||||||
size_t i, size;
|
size_t i, size;
|
||||||
|
|
||||||
size = ARRAY_SIZE(hpre_cap_query_info);
|
size = ARRAY_SIZE(hpre_cap_query_info);
|
||||||
hpre_cap = devm_kzalloc(dev, sizeof(*hpre_cap) * size, GFP_KERNEL);
|
hpre_cap = devm_kcalloc(dev, size, sizeof(*hpre_cap), GFP_KERNEL);
|
||||||
if (!hpre_cap)
|
if (!hpre_cap)
|
||||||
return -ENOMEM;
|
return -ENOMEM;
|
||||||
|
|
||||||
|
|
@ -1357,12 +1394,20 @@ static void hpre_clear_hw_err_status(struct hisi_qm *qm, u32 err_sts)
|
||||||
|
|
||||||
static void hpre_disable_error_report(struct hisi_qm *qm, u32 err_type)
|
static void hpre_disable_error_report(struct hisi_qm *qm, u32 err_type)
|
||||||
{
|
{
|
||||||
u32 nfe_mask;
|
u32 nfe_mask = qm->err_info.dev_err.nfe;
|
||||||
|
|
||||||
nfe_mask = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_NFE_MASK_CAP, qm->cap_ver);
|
|
||||||
writel(nfe_mask & (~err_type), qm->io_base + HPRE_RAS_NFE_ENB);
|
writel(nfe_mask & (~err_type), qm->io_base + HPRE_RAS_NFE_ENB);
|
||||||
}
|
}
|
||||||
|
|
||||||
|
static void hpre_enable_error_report(struct hisi_qm *qm)
|
||||||
|
{
|
||||||
|
u32 nfe_mask = qm->err_info.dev_err.nfe;
|
||||||
|
u32 ce_mask = qm->err_info.dev_err.ce;
|
||||||
|
|
||||||
|
+	writel(nfe_mask, qm->io_base + HPRE_RAS_NFE_ENB);
+	writel(ce_mask, qm->io_base + HPRE_RAS_CE_ENB);
+}
+
 static void hpre_open_axi_master_ooo(struct hisi_qm *qm)
 {
 	u32 value;
@@ -1380,16 +1425,18 @@ static enum acc_err_result hpre_get_err_result(struct hisi_qm *qm)
 
 	err_status = hpre_get_hw_err_status(qm);
 	if (err_status) {
-		if (err_status & qm->err_info.ecc_2bits_mask)
+		if (err_status & qm->err_info.dev_err.ecc_2bits_mask)
 			qm->err_status.is_dev_ecc_mbit = true;
 		hpre_log_hw_error(qm, err_status);
 
-		if (err_status & qm->err_info.dev_reset_mask) {
+		if (err_status & qm->err_info.dev_err.reset_mask) {
 			/* Disable the same error reporting until device is recovered. */
 			hpre_disable_error_report(qm, err_status);
 			return ACC_ERR_NEED_RESET;
 		}
 		hpre_clear_hw_err_status(qm, err_status);
+		/* Avoid firmware disable error report, re-enable. */
+		hpre_enable_error_report(qm);
 	}
 
 	return ACC_ERR_RECOVERED;
@@ -1400,28 +1447,64 @@ static bool hpre_dev_is_abnormal(struct hisi_qm *qm)
 	u32 err_status;
 
 	err_status = hpre_get_hw_err_status(qm);
-	if (err_status & qm->err_info.dev_shutdown_mask)
+	if (err_status & qm->err_info.dev_err.shutdown_mask)
 		return true;
 
 	return false;
 }
 
+static void hpre_disable_axi_error(struct hisi_qm *qm)
+{
+	struct hisi_qm_err_mask *dev_err = &qm->err_info.dev_err;
+	u32 err_mask = dev_err->ce | dev_err->nfe | dev_err->fe;
+	u32 val;
+
+	val = ~(err_mask & (~HPRE_AXI_ERROR_MASK));
+	writel(val, qm->io_base + HPRE_INT_MASK);
+
+	if (qm->ver > QM_HW_V2)
+		writel(dev_err->shutdown_mask & (~HPRE_AXI_ERROR_MASK),
+		       qm->io_base + HPRE_OOO_SHUTDOWN_SEL);
+}
+
+static void hpre_enable_axi_error(struct hisi_qm *qm)
+{
+	struct hisi_qm_err_mask *dev_err = &qm->err_info.dev_err;
+	u32 err_mask = dev_err->ce | dev_err->nfe | dev_err->fe;
+
+	/* clear axi error source */
+	writel(HPRE_AXI_ERROR_MASK, qm->io_base + HPRE_HAC_SOURCE_INT);
+
+	writel(~err_mask, qm->io_base + HPRE_INT_MASK);
+
+	if (qm->ver > QM_HW_V2)
+		writel(dev_err->shutdown_mask, qm->io_base + HPRE_OOO_SHUTDOWN_SEL);
+}
+
 static void hpre_err_info_init(struct hisi_qm *qm)
 {
 	struct hisi_qm_err_info *err_info = &qm->err_info;
+	struct hisi_qm_err_mask *qm_err = &err_info->qm_err;
+	struct hisi_qm_err_mask *dev_err = &err_info->dev_err;
 
-	err_info->fe = HPRE_HAC_RAS_FE_ENABLE;
-	err_info->ce = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_QM_CE_MASK_CAP, qm->cap_ver);
-	err_info->nfe = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_QM_NFE_MASK_CAP, qm->cap_ver);
-	err_info->ecc_2bits_mask = HPRE_CORE_ECC_2BIT_ERR | HPRE_OOO_ECC_2BIT_ERR;
-	err_info->dev_shutdown_mask = hisi_qm_get_hw_info(qm, hpre_basic_info,
-			HPRE_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver);
-	err_info->qm_shutdown_mask = hisi_qm_get_hw_info(qm, hpre_basic_info,
-			HPRE_QM_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver);
-	err_info->qm_reset_mask = hisi_qm_get_hw_info(qm, hpre_basic_info,
-			HPRE_QM_RESET_MASK_CAP, qm->cap_ver);
-	err_info->dev_reset_mask = hisi_qm_get_hw_info(qm, hpre_basic_info,
-			HPRE_RESET_MASK_CAP, qm->cap_ver);
+	qm_err->fe = HPRE_HAC_RAS_FE_ENABLE;
+	qm_err->ce = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_QM_CE_MASK_CAP, qm->cap_ver);
+	qm_err->nfe = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_QM_NFE_MASK_CAP, qm->cap_ver);
+	qm_err->shutdown_mask = hisi_qm_get_hw_info(qm, hpre_basic_info,
+			HPRE_QM_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver);
+	qm_err->reset_mask = hisi_qm_get_hw_info(qm, hpre_basic_info,
+			HPRE_QM_RESET_MASK_CAP, qm->cap_ver);
+	qm_err->ecc_2bits_mask = QM_ECC_MBIT;
+
+	dev_err->fe = HPRE_HAC_RAS_FE_ENABLE;
+	dev_err->ce = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_CE_MASK_CAP, qm->cap_ver);
+	dev_err->nfe = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_NFE_MASK_CAP, qm->cap_ver);
+	dev_err->shutdown_mask = hisi_qm_get_hw_info(qm, hpre_basic_info,
+			HPRE_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver);
+	dev_err->reset_mask = hisi_qm_get_hw_info(qm, hpre_basic_info,
+			HPRE_RESET_MASK_CAP, qm->cap_ver);
+	dev_err->ecc_2bits_mask = HPRE_CORE_ECC_2BIT_ERR | HPRE_OOO_ECC_2BIT_ERR;
 
 	err_info->msi_wr_port = HPRE_WR_MSI_PORT;
 	err_info->acpi_rst = "HRST";
 }
@@ -1439,6 +1522,8 @@ static const struct hisi_qm_err_ini hpre_err_ini = {
 	.err_info_init = hpre_err_info_init,
 	.get_err_result = hpre_get_err_result,
 	.dev_is_abnormal = hpre_dev_is_abnormal,
+	.disable_axi_error = hpre_disable_axi_error,
+	.enable_axi_error = hpre_enable_axi_error,
 };
 
 static int hpre_pf_probe_init(struct hpre *hpre)
@@ -1450,8 +1535,6 @@ static int hpre_pf_probe_init(struct hpre *hpre)
 	if (ret)
 		return ret;
 
-	hpre_open_sva_prefetch(qm);
-
 	hisi_qm_dev_err_init(qm);
 	ret = hpre_show_last_regs_init(qm);
 	if (ret)
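The hpre hunks above are the first instance of a pattern repeated in the qm, sec and zip diffs below: the flat err_info fields (ce/nfe/fe plus the qm_*/dev_* shutdown and reset masks) are split into two per-block mask sets, qm_err and dev_err. A minimal userspace sketch of the new shape; the mask values here are made up and merely stand in for what hisi_qm_get_hw_info() reads from capability registers:

#include <stdio.h>

struct err_mask {
	unsigned int ce;		/* correctable error bits */
	unsigned int nfe;		/* non-fatal error bits */
	unsigned int fe;		/* fatal error bits */
	unsigned int shutdown_mask;
	unsigned int reset_mask;
	unsigned int ecc_2bits_mask;
};

struct err_info {
	struct err_mask qm_err;		/* QM (queue management) block */
	struct err_mask dev_err;	/* device (HPRE/SEC/ZIP) block */
};

int main(void)
{
	/* illustrative values only; the driver queries these per hardware version */
	struct err_info info = {
		.qm_err  = { .ce = 0xc, .nfe = 0x3fc0, .ecc_2bits_mask = 1u << 2 },
		.dev_err = { .ce = 0x2, .nfe = 0x1fffffe4, .ecc_2bits_mask = 0x3u << 22 },
	};
	/* callers now name the block they mean instead of sharing one set of fields */
	unsigned int qm_irq_mask = info.qm_err.ce | info.qm_err.nfe | info.qm_err.fe;

	printf("qm irq mask: %#x\n", qm_irq_mask);
	return 0;
}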
@@ -45,6 +45,8 @@
 
 #define QM_SQ_TYPE_MASK			GENMASK(3, 0)
 #define QM_SQ_TAIL_IDX(sqc)		((le16_to_cpu((sqc).w11) >> 6) & 0x1)
+#define QM_SQC_DISABLE_QP		(1U << 6)
+#define QM_XQC_RANDOM_DATA		0xaaaa
 
 /* cqc shift */
 #define QM_CQ_HOP_NUM_SHIFT		0
@@ -145,9 +147,9 @@
 #define QM_RAS_CE_TIMES_PER_IRQ		1
 #define QM_OOO_SHUTDOWN_SEL		0x1040f8
 #define QM_AXI_RRESP_ERR		BIT(0)
-#define QM_ECC_MBIT			BIT(2)
 #define QM_DB_TIMEOUT			BIT(10)
 #define QM_OF_FIFO_OF			BIT(11)
+#define QM_RAS_AXI_ERROR		(BIT(0) | BIT(1) | BIT(12))
 
 #define QM_RESET_WAIT_TIMEOUT		400
 #define QM_PEH_VENDOR_ID		0x1000d8
@@ -163,7 +165,6 @@
 #define ACC_MASTER_TRANS_RETURN		0x300150
 #define ACC_MASTER_GLOBAL_CTRL		0x300000
 #define ACC_AM_CFG_PORT_WR_EN		0x30001c
-#define QM_RAS_NFE_MBIT_DISABLE		~QM_ECC_MBIT
 #define ACC_AM_ROB_ECC_INT_STS		0x300104
 #define ACC_ROB_ECC_ERR_MULTPL		BIT(1)
 #define QM_MSI_CAP_ENABLE		BIT(16)
@@ -520,7 +521,7 @@ static bool qm_check_dev_error(struct hisi_qm *qm)
 		return false;
 
 	err_status = qm_get_hw_error_status(pf_qm);
-	if (err_status & pf_qm->err_info.qm_shutdown_mask)
+	if (err_status & pf_qm->err_info.qm_err.shutdown_mask)
 		return true;
 
 	if (pf_qm->err_ini->dev_is_abnormal)
@@ -1395,17 +1396,17 @@ static void qm_hw_error_init_v1(struct hisi_qm *qm)
 
 static void qm_hw_error_cfg(struct hisi_qm *qm)
 {
-	struct hisi_qm_err_info *err_info = &qm->err_info;
+	struct hisi_qm_err_mask *qm_err = &qm->err_info.qm_err;
 
-	qm->error_mask = err_info->nfe | err_info->ce | err_info->fe;
+	qm->error_mask = qm_err->nfe | qm_err->ce | qm_err->fe;
 	/* clear QM hw residual error source */
 	writel(qm->error_mask, qm->io_base + QM_ABNORMAL_INT_SOURCE);
 
 	/* configure error type */
-	writel(err_info->ce, qm->io_base + QM_RAS_CE_ENABLE);
+	writel(qm_err->ce, qm->io_base + QM_RAS_CE_ENABLE);
 	writel(QM_RAS_CE_TIMES_PER_IRQ, qm->io_base + QM_RAS_CE_THRESHOLD);
-	writel(err_info->nfe, qm->io_base + QM_RAS_NFE_ENABLE);
-	writel(err_info->fe, qm->io_base + QM_RAS_FE_ENABLE);
+	writel(qm_err->nfe, qm->io_base + QM_RAS_NFE_ENABLE);
+	writel(qm_err->fe, qm->io_base + QM_RAS_FE_ENABLE);
 }
 
 static void qm_hw_error_init_v2(struct hisi_qm *qm)
@@ -1434,7 +1435,7 @@ static void qm_hw_error_init_v3(struct hisi_qm *qm)
 	qm_hw_error_cfg(qm);
 
 	/* enable close master ooo when hardware error happened */
-	writel(qm->err_info.qm_shutdown_mask, qm->io_base + QM_OOO_SHUTDOWN_SEL);
+	writel(qm->err_info.qm_err.shutdown_mask, qm->io_base + QM_OOO_SHUTDOWN_SEL);
 
 	irq_unmask = ~qm->error_mask;
 	irq_unmask &= readl(qm->io_base + QM_ABNORMAL_INT_MASK);
@@ -1496,6 +1497,7 @@ static void qm_log_hw_error(struct hisi_qm *qm, u32 error_status)
 
 static enum acc_err_result qm_hw_error_handle_v2(struct hisi_qm *qm)
 {
+	struct hisi_qm_err_mask *qm_err = &qm->err_info.qm_err;
 	u32 error_status;
 
 	error_status = qm_get_hw_error_status(qm);
@@ -1504,17 +1506,16 @@ static enum acc_err_result qm_hw_error_handle_v2(struct hisi_qm *qm)
 		qm->err_status.is_qm_ecc_mbit = true;
 
 		qm_log_hw_error(qm, error_status);
-		if (error_status & qm->err_info.qm_reset_mask) {
+		if (error_status & qm_err->reset_mask) {
 			/* Disable the same error reporting until device is recovered. */
-			writel(qm->err_info.nfe & (~error_status),
-			       qm->io_base + QM_RAS_NFE_ENABLE);
+			writel(qm_err->nfe & (~error_status), qm->io_base + QM_RAS_NFE_ENABLE);
 			return ACC_ERR_NEED_RESET;
 		}
 
 		/* Clear error source if not need reset. */
 		writel(error_status, qm->io_base + QM_ABNORMAL_INT_SOURCE);
-		writel(qm->err_info.nfe, qm->io_base + QM_RAS_NFE_ENABLE);
-		writel(qm->err_info.ce, qm->io_base + QM_RAS_CE_ENABLE);
+		writel(qm_err->nfe, qm->io_base + QM_RAS_NFE_ENABLE);
+		writel(qm_err->ce, qm->io_base + QM_RAS_CE_ENABLE);
 	}
 
 	return ACC_ERR_RECOVERED;
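The qm_hw_error_handle_v2() hunk keeps the existing recovery idea: when a reset-worthy error fires, exactly those NFE bits are masked so the same source cannot storm until recovery completes, and the full mask is restored afterwards. A small sketch of that mask arithmetic, modeling the register as a plain variable and using made-up bit values:

#include <stdio.h>

static unsigned int ras_nfe_enable;	/* stands in for the QM_RAS_NFE_ENABLE register */

/* keep every non-fatal source armed except the bits that just fired */
static void disable_fired_sources(unsigned int nfe, unsigned int error_status)
{
	ras_nfe_enable = nfe & ~error_status;
}

int main(void)
{
	unsigned int nfe = 0x3fc0;	/* full non-fatal mask (illustrative) */
	unsigned int fired = 1u << 10;	/* e.g. a doorbell-timeout style bit */

	disable_fired_sources(nfe, fired);
	printf("mask while resetting: %#x\n", ras_nfe_enable);

	/* once the device has recovered, the full mask is written back */
	ras_nfe_enable = nfe;
	printf("mask after recovery:  %#x\n", ras_nfe_enable);
	return 0;
}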
@@ -2742,6 +2743,27 @@ static void qm_remove_uacce(struct hisi_qm *qm)
 	}
 }
 
+static void qm_uacce_api_ver_init(struct hisi_qm *qm)
+{
+	struct uacce_device *uacce = qm->uacce;
+
+	switch (qm->ver) {
+	case QM_HW_V1:
+		uacce->api_ver = HISI_QM_API_VER_BASE;
+		break;
+	case QM_HW_V2:
+		uacce->api_ver = HISI_QM_API_VER2_BASE;
+		break;
+	case QM_HW_V3:
+	case QM_HW_V4:
+		uacce->api_ver = HISI_QM_API_VER3_BASE;
+		break;
+	default:
+		uacce->api_ver = HISI_QM_API_VER5_BASE;
+		break;
+	}
+}
+
 static int qm_alloc_uacce(struct hisi_qm *qm)
 {
 	struct pci_dev *pdev = qm->pdev;
@@ -2775,13 +2797,6 @@ static int qm_alloc_uacce(struct hisi_qm *qm)
 	uacce->is_vf = pdev->is_virtfn;
 	uacce->priv = qm;
 
-	if (qm->ver == QM_HW_V1)
-		uacce->api_ver = HISI_QM_API_VER_BASE;
-	else if (qm->ver == QM_HW_V2)
-		uacce->api_ver = HISI_QM_API_VER2_BASE;
-	else
-		uacce->api_ver = HISI_QM_API_VER3_BASE;
-
 	if (qm->ver == QM_HW_V1)
 		mmio_page_nr = QM_DOORBELL_PAGE_NR;
 	else if (!test_bit(QM_SUPPORT_DB_ISOLATION, &qm->caps))
@@ -2801,6 +2816,7 @@ static int qm_alloc_uacce(struct hisi_qm *qm)
 	uacce->qf_pg_num[UACCE_QFRT_DUS] = dus_page_nr;
 
 	qm->uacce = uacce;
+	qm_uacce_api_ver_init(qm);
 	INIT_LIST_HEAD(&qm->isolate_data.qm_hw_errs);
 	mutex_init(&qm->isolate_data.isolate_lock);
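qm_uacce_api_ver_init() replaces the old if/else chain with a switch whose default arm picks the newest uacce API string, so future hardware revisions fall through sensibly. A compilable sketch of the same mapping; the version codes and strings here are stand-ins for QM_HW_V* and HISI_QM_API_VER*_BASE:

#include <stdio.h>

enum qm_hw_ver { HW_V1 = 1, HW_V2, HW_V3, HW_V4, HW_V5 };

static const char *api_ver_for(enum qm_hw_ver ver)
{
	switch (ver) {
	case HW_V1:
		return "api_v1";
	case HW_V2:
		return "api_v2";
	case HW_V3:
	case HW_V4:
		return "api_v3";	/* V3 and V4 share one uacce ABI */
	default:
		return "api_v5";	/* V5 and anything newer */
	}
}

int main(void)
{
	printf("%s\n", api_ver_for(HW_V4));
	return 0;
}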
@@ -3179,6 +3195,9 @@ static int qm_eq_aeq_ctx_cfg(struct hisi_qm *qm)
 
 	qm_init_eq_aeq_status(qm);
 
+	/* Before starting the dev, clear the memory and then configure to device using. */
+	memset(qm->qdma.va, 0, qm->qdma.size);
+
 	ret = qm_eq_ctx_cfg(qm);
 	if (ret) {
 		dev_err(dev, "Set eqc failed!\n");
@@ -3190,9 +3209,13 @@ static int qm_eq_aeq_ctx_cfg(struct hisi_qm *qm)
 
 static int __hisi_qm_start(struct hisi_qm *qm)
 {
+	struct device *dev = &qm->pdev->dev;
 	int ret;
 
-	WARN_ON(!qm->qdma.va);
+	if (!qm->qdma.va) {
+		dev_err(dev, "qm qdma is NULL!\n");
+		return -EINVAL;
+	}
 
 	if (qm->fun_type == QM_HW_PF) {
 		ret = hisi_qm_set_vft(qm, 0, qm->qp_base, qm->qp_num);
@@ -3266,7 +3289,7 @@ static int qm_restart(struct hisi_qm *qm)
 	for (i = 0; i < qm->qp_num; i++) {
 		qp = &qm->qp_array[i];
 		if (atomic_read(&qp->qp_status.flags) == QP_STOP &&
-		    qp->is_resetting == true) {
+		    qp->is_resetting == true && qp->is_in_kernel == true) {
 			ret = qm_start_qp_nolock(qp, 0);
 			if (ret < 0) {
 				dev_err(dev, "Failed to start qp%d!\n", i);
@@ -3298,24 +3321,44 @@ static void qm_stop_started_qp(struct hisi_qm *qm)
 }
 
 /**
- * qm_clear_queues() - Clear all queues memory in a qm.
- * @qm: The qm in which the queues will be cleared.
+ * qm_invalid_queues() - invalid all queues in use.
+ * @qm: The qm in which the queues will be invalidated.
  *
- * This function clears all queues memory in a qm. Reset of accelerator can
- * use this to clear queues.
+ * This function invalid all queues in use. If the doorbell command is sent
+ * to device in user space after the device is reset, the device discards
+ * the doorbell command.
  */
-static void qm_clear_queues(struct hisi_qm *qm)
+static void qm_invalid_queues(struct hisi_qm *qm)
 {
 	struct hisi_qp *qp;
+	struct qm_sqc *sqc;
+	struct qm_cqc *cqc;
 	int i;
 
+	/*
+	 * Normal stop queues is no longer used and does not need to be
+	 * invalid queues.
+	 */
+	if (qm->status.stop_reason == QM_NORMAL)
+		return;
+
+	if (qm->status.stop_reason == QM_DOWN)
+		hisi_qm_cache_wb(qm);
+
 	for (i = 0; i < qm->qp_num; i++) {
 		qp = &qm->qp_array[i];
-		if (qp->is_in_kernel && qp->is_resetting)
+		if (!qp->is_resetting)
+			continue;
+
+		/* Modify random data and set sqc close bit to invalid queue. */
+		sqc = qm->sqc + i;
+		cqc = qm->cqc + i;
+		sqc->w8 = cpu_to_le16(QM_XQC_RANDOM_DATA);
+		sqc->w13 = cpu_to_le16(QM_SQC_DISABLE_QP);
+		cqc->w8 = cpu_to_le16(QM_XQC_RANDOM_DATA);
+		if (qp->is_in_kernel)
 			memset(qp->qdma.va, 0, qp->qdma.size);
 	}
-
-	memset(qm->qdma.va, 0, qm->qdma.size);
 }
 
 /**
@@ -3372,7 +3415,7 @@ int hisi_qm_stop(struct hisi_qm *qm, enum qm_stop_reason r)
 		}
 	}
 
-	qm_clear_queues(qm);
+	qm_invalid_queues(qm);
 	qm->status.stop_reason = QM_NORMAL;
 
 err_unlock:
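qm_invalid_queues() no longer wipes the whole qdma region; it poisons each in-use queue context so a stale user-space doorbell after reset simply fails to match and is discarded by the device. A userspace model of the poisoning; the field layout here is a simplification (the real SQC/CQC are hardware-defined, little-endian structures):

#include <stdio.h>

#define XQC_RANDOM_DATA	0xaaaa
#define SQC_DISABLE_QP	(1u << 6)

struct sqc { unsigned short w8, w13; };
struct cqc { unsigned short w8; };

static void invalidate_queue(struct sqc *sqc, struct cqc *cqc)
{
	sqc->w8 = XQC_RANDOM_DATA;	/* context no longer matches the device's view */
	sqc->w13 = SQC_DISABLE_QP;	/* close bit: a stale doorbell is discarded */
	cqc->w8 = XQC_RANDOM_DATA;
}

int main(void)
{
	struct sqc s = {0, 0};
	struct cqc c = {0};

	invalidate_queue(&s, &c);
	printf("sqc.w8=%#x sqc.w13=%#x cqc.w8=%#x\n", s.w8, s.w13, c.w8);
	return 0;
}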
@@ -3617,19 +3660,19 @@ static int qm_vf_q_assign(struct hisi_qm *qm, u32 num_vfs)
 	return 0;
 }
 
-static int qm_clear_vft_config(struct hisi_qm *qm)
+static void qm_clear_vft_config(struct hisi_qm *qm)
 {
-	int ret;
 	u32 i;
 
-	for (i = 1; i <= qm->vfs_num; i++) {
-		ret = hisi_qm_set_vft(qm, i, 0, 0);
-		if (ret)
-			return ret;
-	}
-	qm->vfs_num = 0;
+	/*
+	 * When disabling SR-IOV, clear the configuration of each VF in the hardware
+	 * sequentially. Failure to clear a single VF should not affect the clearing
+	 * operation of other VFs.
+	 */
+	for (i = 1; i <= qm->vfs_num; i++)
+		(void)hisi_qm_set_vft(qm, i, 0, 0);
 
-	return 0;
+	qm->vfs_num = 0;
}
 
 static int qm_func_shaper_enable(struct hisi_qm *qm, u32 fun_index, u32 qos)
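qm_clear_vft_config() becomes void because disabling SR-IOV is best-effort cleanup: one VF that fails to clear must not stop the rest from being cleared. A sketch of that contract, with a stand-in set_vft() that fails for one VF:

#include <stdio.h>

static int set_vft(unsigned int vf)
{
	return vf == 2 ? -1 : 0;	/* pretend clearing VF 2 fails */
}

/* cleanup keeps walking the VFs even when one of them fails to clear */
static void clear_vft_config(unsigned int vfs_num)
{
	unsigned int i;

	for (i = 1; i <= vfs_num; i++)
		(void)set_vft(i);
}

int main(void)
{
	clear_vft_config(4);	/* VFs 1, 3 and 4 are still cleared */
	printf("done\n");
	return 0;
}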
@@ -3826,6 +3869,10 @@ static ssize_t qm_get_qos_value(struct hisi_qm *qm, const char *buf,
 	}
 
 	pdev = container_of(dev, struct pci_dev, dev);
+	if (pci_physfn(pdev) != qm->pdev) {
+		pci_err(qm->pdev, "the pdev input does not match the pf!\n");
+		return -EINVAL;
+	}
 
 	*fun_index = pdev->devfn;
 
@@ -3960,13 +4007,13 @@ int hisi_qm_sriov_enable(struct pci_dev *pdev, int max_vfs)
 		goto err_put_sync;
 	}
 
+	qm->vfs_num = num_vfs;
 	ret = pci_enable_sriov(pdev, num_vfs);
 	if (ret) {
 		pci_err(pdev, "Can't enable VF!\n");
 		qm_clear_vft_config(qm);
 		goto err_put_sync;
 	}
-	qm->vfs_num = num_vfs;
 
 	pci_info(pdev, "VF enabled, vfs_num(=%d)!\n", num_vfs);
 
@@ -4001,11 +4048,10 @@ int hisi_qm_sriov_disable(struct pci_dev *pdev, bool is_frozen)
 	}
 
 	pci_disable_sriov(pdev);
-
-	qm->vfs_num = 0;
+	qm_clear_vft_config(qm);
 	qm_pm_put_sync(qm);
 
-	return qm_clear_vft_config(qm);
+	return 0;
 }
 EXPORT_SYMBOL_GPL(hisi_qm_sriov_disable);
@@ -4179,9 +4225,9 @@ static void qm_dev_ecc_mbit_handle(struct hisi_qm *qm)
 	    !qm->err_status.is_qm_ecc_mbit &&
 	    !qm->err_ini->close_axi_master_ooo) {
 		nfe_enb = readl(qm->io_base + QM_RAS_NFE_ENABLE);
-		writel(nfe_enb & QM_RAS_NFE_MBIT_DISABLE,
+		writel(nfe_enb & ~qm->err_info.qm_err.ecc_2bits_mask,
 		       qm->io_base + QM_RAS_NFE_ENABLE);
-		writel(QM_ECC_MBIT, qm->io_base + QM_ABNORMAL_INT_SET);
+		writel(qm->err_info.qm_err.ecc_2bits_mask, qm->io_base + QM_ABNORMAL_INT_SET);
 	}
 }
 
@@ -4447,9 +4493,6 @@ static void qm_restart_prepare(struct hisi_qm *qm)
 {
 	u32 value;
 
-	if (qm->err_ini->open_sva_prefetch)
-		qm->err_ini->open_sva_prefetch(qm);
-
 	if (qm->ver >= QM_HW_V3)
 		return;
 
@@ -4463,12 +4506,12 @@ static void qm_restart_prepare(struct hisi_qm *qm)
 	       qm->io_base + ACC_AM_CFG_PORT_WR_EN);
 
 	/* clear dev ecc 2bit error source if having */
-	value = qm_get_dev_err_status(qm) & qm->err_info.ecc_2bits_mask;
+	value = qm_get_dev_err_status(qm) & qm->err_info.dev_err.ecc_2bits_mask;
 	if (value && qm->err_ini->clear_dev_hw_err_status)
 		qm->err_ini->clear_dev_hw_err_status(qm, value);
 
 	/* clear QM ecc mbit error source */
-	writel(QM_ECC_MBIT, qm->io_base + QM_ABNORMAL_INT_SOURCE);
+	writel(qm->err_info.qm_err.ecc_2bits_mask, qm->io_base + QM_ABNORMAL_INT_SOURCE);
 
 	/* clear AM Reorder Buffer ecc mbit source */
 	writel(ACC_ROB_ECC_ERR_MULTPL, qm->io_base + ACC_AM_ROB_ECC_INT_STS);
@@ -4495,6 +4538,34 @@ clear_flags:
 	qm->err_status.is_dev_ecc_mbit = false;
 }
 
+static void qm_disable_axi_error(struct hisi_qm *qm)
+{
+	struct hisi_qm_err_mask *qm_err = &qm->err_info.qm_err;
+	u32 val;
+
+	val = ~(qm->error_mask & (~QM_RAS_AXI_ERROR));
+	writel(val, qm->io_base + QM_ABNORMAL_INT_MASK);
+	if (qm->ver > QM_HW_V2)
+		writel(qm_err->shutdown_mask & (~QM_RAS_AXI_ERROR),
+		       qm->io_base + QM_OOO_SHUTDOWN_SEL);
+
+	if (qm->err_ini->disable_axi_error)
+		qm->err_ini->disable_axi_error(qm);
+}
+
+static void qm_enable_axi_error(struct hisi_qm *qm)
+{
+	/* clear axi error source */
+	writel(QM_RAS_AXI_ERROR, qm->io_base + QM_ABNORMAL_INT_SOURCE);
+
+	writel(~qm->error_mask, qm->io_base + QM_ABNORMAL_INT_MASK);
+	if (qm->ver > QM_HW_V2)
+		writel(qm->err_info.qm_err.shutdown_mask, qm->io_base + QM_OOO_SHUTDOWN_SEL);
+
+	if (qm->err_ini->enable_axi_error)
+		qm->err_ini->enable_axi_error(qm);
+}
+
 static int qm_controller_reset_done(struct hisi_qm *qm)
 {
 	struct pci_dev *pdev = qm->pdev;
@@ -4528,6 +4599,7 @@ static int qm_controller_reset_done(struct hisi_qm *qm)
 
 	qm_restart_prepare(qm);
 	hisi_qm_dev_err_init(qm);
+	qm_disable_axi_error(qm);
 	if (qm->err_ini->open_axi_master_ooo)
 		qm->err_ini->open_axi_master_ooo(qm);
 
@@ -4550,7 +4622,7 @@ static int qm_controller_reset_done(struct hisi_qm *qm)
 	ret = qm_wait_vf_prepare_finish(qm);
 	if (ret)
 		pci_err(pdev, "failed to start by vfs in soft reset!\n");
-
+	qm_enable_axi_error(qm);
 	qm_cmd_init(qm);
 	qm_restart_done(qm);
@@ -4731,6 +4803,15 @@ flr_done:
 }
 EXPORT_SYMBOL_GPL(hisi_qm_reset_done);
 
+static irqreturn_t qm_rsvd_irq(int irq, void *data)
+{
+	struct hisi_qm *qm = data;
+
+	dev_info(&qm->pdev->dev, "Reserved interrupt, ignore!\n");
+
+	return IRQ_HANDLED;
+}
+
 static irqreturn_t qm_abnormal_irq(int irq, void *data)
 {
 	struct hisi_qm *qm = data;
@@ -4760,8 +4841,6 @@ void hisi_qm_dev_shutdown(struct pci_dev *pdev)
 	ret = hisi_qm_stop(qm, QM_DOWN);
 	if (ret)
 		dev_err(&pdev->dev, "Fail to stop qm in shutdown!\n");
-
-	hisi_qm_cache_wb(qm);
 }
 EXPORT_SYMBOL_GPL(hisi_qm_dev_shutdown);
@@ -5014,7 +5093,7 @@ static void qm_unregister_abnormal_irq(struct hisi_qm *qm)
 	struct pci_dev *pdev = qm->pdev;
 	u32 irq_vector, val;
 
-	if (qm->fun_type == QM_HW_VF)
+	if (qm->fun_type == QM_HW_VF && qm->ver < QM_HW_V3)
 		return;
 
 	val = qm->cap_tables.qm_cap_table[QM_ABNORMAL_IRQ].cap_val;
@@ -5031,17 +5110,28 @@ static int qm_register_abnormal_irq(struct hisi_qm *qm)
 	u32 irq_vector, val;
 	int ret;
 
-	if (qm->fun_type == QM_HW_VF)
-		return 0;
-
 	val = qm->cap_tables.qm_cap_table[QM_ABNORMAL_IRQ].cap_val;
 	if (!((val >> QM_IRQ_TYPE_SHIFT) & QM_ABN_IRQ_TYPE_MASK))
 		return 0;
 
 	irq_vector = val & QM_IRQ_VECTOR_MASK;
 
+	/* For VF, this is a reserved interrupt in V3 version. */
+	if (qm->fun_type == QM_HW_VF) {
+		if (qm->ver < QM_HW_V3)
+			return 0;
+
+		ret = request_irq(pci_irq_vector(pdev, irq_vector), qm_rsvd_irq,
+				  IRQF_NO_AUTOEN, qm->dev_name, qm);
+		if (ret) {
+			dev_err(&pdev->dev, "failed to request reserved irq, ret = %d!\n", ret);
+			return ret;
+		}
+
+		return 0;
+	}
+
 	ret = request_irq(pci_irq_vector(pdev, irq_vector), qm_abnormal_irq, 0, qm->dev_name, qm);
 	if (ret)
-		dev_err(&qm->pdev->dev, "failed to request abnormal irq, ret = %d", ret);
+		dev_err(&qm->pdev->dev, "failed to request abnormal irq, ret = %d!\n", ret);
 
 	return ret;
 }
@@ -5407,6 +5497,12 @@ static int hisi_qm_pci_init(struct hisi_qm *qm)
 	pci_set_master(pdev);
 
 	num_vec = qm_get_irq_num(qm);
+	if (!num_vec) {
+		dev_err(dev, "Device irq num is zero!\n");
+		ret = -EINVAL;
+		goto err_get_pci_res;
+	}
+	num_vec = roundup_pow_of_two(num_vec);
 	ret = pci_alloc_irq_vectors(pdev, num_vec, num_vec, PCI_IRQ_MSI);
 	if (ret < 0) {
 		dev_err(dev, "Failed to enable MSI vectors!\n");
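hisi_qm_pci_init() now rejects a zero vector count and rounds the count up to a power of two before pci_alloc_irq_vectors(). A portable model of the rounding; the kernel provides roundup_pow_of_two() for this:

#include <stdio.h>

static unsigned int roundup_pow2(unsigned int n)
{
	unsigned int p = 1;

	while (p < n)
		p <<= 1;
	return p;
}

int main(void)
{
	/* e.g. 5 required interrupts -> 8 MSI vectors requested */
	printf("%u -> %u\n", 5u, roundup_pow2(5));
	return 0;
}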
@@ -922,7 +922,8 @@ static int sec_hw_init(struct sec_dev_info *info)
 	struct iommu_domain *domain;
 	u32 sec_ipv4_mask = 0;
 	u32 sec_ipv6_mask[10] = {};
-	u32 i, ret;
+	int ret;
+	u32 i;
 
 	domain = iommu_get_domain_for_dev(info->dev);
@@ -1944,14 +1944,12 @@ static void sec_request_uninit(struct sec_req *req)
 static int sec_request_init(struct sec_ctx *ctx, struct sec_req *req)
 {
 	struct sec_qp_ctx *qp_ctx;
-	int i;
+	int i = 0;
 
-	for (i = 0; i < ctx->sec->ctx_q_num; i++) {
+	do {
 		qp_ctx = &ctx->qp_ctx[i];
 		req->req_id = sec_alloc_req_id(req, qp_ctx);
-		if (req->req_id >= 0)
-			break;
-	}
+	} while (req->req_id < 0 && ++i < ctx->sec->ctx_q_num);
 
 	req->qp_ctx = qp_ctx;
 	req->backlog = &qp_ctx->backlog;
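The sec_request_init() rework swaps the for-plus-break for a do/while with one useful property: qp_ctx is assigned on every pass, so even when every queue is full the function leaves req->qp_ctx pointing at a real (the last probed) context rather than at uninitialized memory. A standalone model with a stand-in allocator:

#include <stdio.h>

static int alloc_req_id(int q)
{
	return q >= 2 ? 100 + q : -1;	/* queues 0 and 1 are full */
}

int main(void)
{
	int ctx_q_num = 4, i = 0, req_id, qp_ctx;

	/* qp_ctx is assigned on every iteration, including a final failing one */
	do {
		qp_ctx = i;
		req_id = alloc_req_id(i);
	} while (req_id < 0 && ++i < ctx_q_num);

	printf("req_id=%d from qp_ctx=%d\n", req_id, qp_ctx);
	return 0;
}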
@@ -47,6 +47,8 @@
 #define SEC_RAS_FE_ENB_MSK		0x0
 #define SEC_OOO_SHUTDOWN_SEL		0x301014
 #define SEC_RAS_DISABLE			0x0
+#define SEC_AXI_ERROR_MASK		(BIT(0) | BIT(1))
+
 #define SEC_MEM_START_INIT_REG		0x301100
 #define SEC_MEM_INIT_DONE_REG		0x301104
 
@@ -93,6 +95,16 @@
 #define SEC_PREFETCH_ENABLE		(~(BIT(0) | BIT(1) | BIT(11)))
 #define SEC_PREFETCH_DISABLE		BIT(1)
 #define SEC_SVA_DISABLE_READY		(BIT(7) | BIT(11))
+#define SEC_SVA_PREFETCH_INFO		0x301ED4
+#define SEC_SVA_STALL_NUM		GENMASK(23, 8)
+#define SEC_SVA_PREFETCH_NUM		GENMASK(2, 0)
+#define SEC_WAIT_SVA_READY		500000
+#define SEC_READ_SVA_STATUS_TIMES	3
+#define SEC_WAIT_US_MIN			10
+#define SEC_WAIT_US_MAX			20
+#define SEC_WAIT_QP_US_MIN		1000
+#define SEC_WAIT_QP_US_MAX		2000
+#define SEC_MAX_WAIT_TIMES		2000
 
 #define SEC_DELAY_10_US			10
 #define SEC_POLL_TIMEOUT_US		1000
@@ -464,6 +476,81 @@ static void sec_set_endian(struct hisi_qm *qm)
 	writel_relaxed(reg, qm->io_base + SEC_CONTROL_REG);
 }
 
+static int sec_wait_sva_ready(struct hisi_qm *qm, __u32 offset, __u32 mask)
+{
+	u32 val, try_times = 0;
+	u8 count = 0;
+
+	/*
+	 * Read the register value every 10-20us. If the value is 0 for three
+	 * consecutive times, the SVA module is ready.
+	 */
+	do {
+		val = readl(qm->io_base + offset);
+		if (val & mask)
+			count = 0;
+		else if (++count == SEC_READ_SVA_STATUS_TIMES)
+			break;
+
+		usleep_range(SEC_WAIT_US_MIN, SEC_WAIT_US_MAX);
+	} while (++try_times < SEC_WAIT_SVA_READY);
+
+	if (try_times == SEC_WAIT_SVA_READY) {
+		pci_err(qm->pdev, "failed to wait sva prefetch ready\n");
+		return -ETIMEDOUT;
+	}
+
+	return 0;
+}
+
+static void sec_close_sva_prefetch(struct hisi_qm *qm)
+{
+	u32 val;
+	int ret;
+
+	if (!test_bit(QM_SUPPORT_SVA_PREFETCH, &qm->caps))
+		return;
+
+	val = readl_relaxed(qm->io_base + SEC_PREFETCH_CFG);
+	val |= SEC_PREFETCH_DISABLE;
+	writel(val, qm->io_base + SEC_PREFETCH_CFG);
+
+	ret = readl_relaxed_poll_timeout(qm->io_base + SEC_SVA_TRANS,
+					 val, !(val & SEC_SVA_DISABLE_READY),
+					 SEC_DELAY_10_US, SEC_POLL_TIMEOUT_US);
+	if (ret)
+		pci_err(qm->pdev, "failed to close sva prefetch\n");
+
+	(void)sec_wait_sva_ready(qm, SEC_SVA_PREFETCH_INFO, SEC_SVA_STALL_NUM);
+}
+
+static void sec_open_sva_prefetch(struct hisi_qm *qm)
+{
+	u32 val;
+	int ret;
+
+	if (!test_bit(QM_SUPPORT_SVA_PREFETCH, &qm->caps))
+		return;
+
+	/* Enable prefetch */
+	val = readl_relaxed(qm->io_base + SEC_PREFETCH_CFG);
+	val &= SEC_PREFETCH_ENABLE;
+	writel(val, qm->io_base + SEC_PREFETCH_CFG);
+
+	ret = readl_relaxed_poll_timeout(qm->io_base + SEC_PREFETCH_CFG,
+					 val, !(val & SEC_PREFETCH_DISABLE),
+					 SEC_DELAY_10_US, SEC_POLL_TIMEOUT_US);
+	if (ret) {
+		pci_err(qm->pdev, "failed to open sva prefetch\n");
+		sec_close_sva_prefetch(qm);
+		return;
+	}
+
+	ret = sec_wait_sva_ready(qm, SEC_SVA_TRANS, SEC_SVA_PREFETCH_NUM);
+	if (ret)
+		sec_close_sva_prefetch(qm);
+}
+
 static void sec_engine_sva_config(struct hisi_qm *qm)
 {
 	u32 reg;
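sec_wait_sva_ready() is a debounced poll: a single idle read is not trusted; the status must read idle three consecutive times (with a 10-20us pause between reads) before the SVA module counts as quiesced. The same helper shape is duplicated in the zip driver below. A compilable model with a faked status register:

#include <stdio.h>

#define READ_TIMES	3	/* consecutive idle reads required */
#define MAX_TRIES	500000

static unsigned int read_status(void)
{
	static int n;

	return n++ < 5 ? 0x1 : 0x0;	/* busy for five polls, then idle */
}

static int wait_ready(unsigned int mask)
{
	unsigned int try_times = 0;
	unsigned char count = 0;

	do {
		if (read_status() & mask)
			count = 0;		/* busy again: restart the idle streak */
		else if (++count == READ_TIMES)
			return 0;		/* idle three reads in a row: ready */
		/* the driver sleeps 10-20us here between reads */
	} while (++try_times < MAX_TRIES);

	return -1;				/* the driver returns -ETIMEDOUT */
}

int main(void)
{
	printf("ready: %d\n", wait_ready(0x1));
	return 0;
}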
@@ -497,45 +584,7 @@ static void sec_engine_sva_config(struct hisi_qm *qm)
 		writel_relaxed(reg, qm->io_base +
 			       SEC_INTERFACE_USER_CTRL1_REG);
 	}
-}
-
-static void sec_open_sva_prefetch(struct hisi_qm *qm)
-{
-	u32 val;
-	int ret;
-
-	if (!test_bit(QM_SUPPORT_SVA_PREFETCH, &qm->caps))
-		return;
-
-	/* Enable prefetch */
-	val = readl_relaxed(qm->io_base + SEC_PREFETCH_CFG);
-	val &= SEC_PREFETCH_ENABLE;
-	writel(val, qm->io_base + SEC_PREFETCH_CFG);
-
-	ret = readl_relaxed_poll_timeout(qm->io_base + SEC_PREFETCH_CFG,
-					 val, !(val & SEC_PREFETCH_DISABLE),
-					 SEC_DELAY_10_US, SEC_POLL_TIMEOUT_US);
-	if (ret)
-		pci_err(qm->pdev, "failed to open sva prefetch\n");
-}
-
-static void sec_close_sva_prefetch(struct hisi_qm *qm)
-{
-	u32 val;
-	int ret;
-
-	if (!test_bit(QM_SUPPORT_SVA_PREFETCH, &qm->caps))
-		return;
-
-	val = readl_relaxed(qm->io_base + SEC_PREFETCH_CFG);
-	val |= SEC_PREFETCH_DISABLE;
-	writel(val, qm->io_base + SEC_PREFETCH_CFG);
-
-	ret = readl_relaxed_poll_timeout(qm->io_base + SEC_SVA_TRANS,
-					 val, !(val & SEC_SVA_DISABLE_READY),
-					 SEC_DELAY_10_US, SEC_POLL_TIMEOUT_US);
-	if (ret)
-		pci_err(qm->pdev, "failed to close sva prefetch\n");
+	sec_open_sva_prefetch(qm);
 }
 
 static void sec_enable_clock_gate(struct hisi_qm *qm)
@@ -666,8 +715,7 @@ static void sec_master_ooo_ctrl(struct hisi_qm *qm, bool enable)
 	val1 = readl(qm->io_base + SEC_CONTROL_REG);
 	if (enable) {
 		val1 |= SEC_AXI_SHUTDOWN_ENABLE;
-		val2 = hisi_qm_get_hw_info(qm, sec_basic_info,
-					   SEC_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver);
+		val2 = qm->err_info.dev_err.shutdown_mask;
 	} else {
 		val1 &= SEC_AXI_SHUTDOWN_DISABLE;
 		val2 = 0x0;
@@ -681,7 +729,8 @@ static void sec_master_ooo_ctrl(struct hisi_qm *qm, bool enable)
 
 static void sec_hw_error_enable(struct hisi_qm *qm)
 {
-	u32 ce, nfe;
+	struct hisi_qm_err_mask *dev_err = &qm->err_info.dev_err;
+	u32 err_mask = dev_err->ce | dev_err->nfe | dev_err->fe;
 
 	if (qm->ver == QM_HW_V1) {
 		writel(SEC_CORE_INT_DISABLE, qm->io_base + SEC_CORE_INT_MASK);
@@ -689,22 +738,19 @@ static void sec_hw_error_enable(struct hisi_qm *qm)
 		return;
 	}
 
-	ce = hisi_qm_get_hw_info(qm, sec_basic_info, SEC_CE_MASK_CAP, qm->cap_ver);
-	nfe = hisi_qm_get_hw_info(qm, sec_basic_info, SEC_NFE_MASK_CAP, qm->cap_ver);
-
 	/* clear SEC hw error source if having */
-	writel(ce | nfe | SEC_RAS_FE_ENB_MSK, qm->io_base + SEC_CORE_INT_SOURCE);
+	writel(err_mask, qm->io_base + SEC_CORE_INT_SOURCE);
 
 	/* enable RAS int */
-	writel(ce, qm->io_base + SEC_RAS_CE_REG);
-	writel(SEC_RAS_FE_ENB_MSK, qm->io_base + SEC_RAS_FE_REG);
-	writel(nfe, qm->io_base + SEC_RAS_NFE_REG);
+	writel(dev_err->ce, qm->io_base + SEC_RAS_CE_REG);
+	writel(dev_err->fe, qm->io_base + SEC_RAS_FE_REG);
+	writel(dev_err->nfe, qm->io_base + SEC_RAS_NFE_REG);
 
 	/* enable SEC block master OOO when nfe occurs on Kunpeng930 */
 	sec_master_ooo_ctrl(qm, true);
 
 	/* enable SEC hw error interrupts */
-	writel(ce | nfe | SEC_RAS_FE_ENB_MSK, qm->io_base + SEC_CORE_INT_MASK);
+	writel(err_mask, qm->io_base + SEC_CORE_INT_MASK);
 }
 
 static void sec_hw_error_disable(struct hisi_qm *qm)
@@ -1061,12 +1107,20 @@ static void sec_clear_hw_err_status(struct hisi_qm *qm, u32 err_sts)
 
 static void sec_disable_error_report(struct hisi_qm *qm, u32 err_type)
 {
-	u32 nfe_mask;
+	u32 nfe_mask = qm->err_info.dev_err.nfe;
 
-	nfe_mask = hisi_qm_get_hw_info(qm, sec_basic_info, SEC_NFE_MASK_CAP, qm->cap_ver);
 	writel(nfe_mask & (~err_type), qm->io_base + SEC_RAS_NFE_REG);
 }
 
+static void sec_enable_error_report(struct hisi_qm *qm)
+{
+	u32 nfe_mask = qm->err_info.dev_err.nfe;
+	u32 ce_mask = qm->err_info.dev_err.ce;
+
+	writel(nfe_mask, qm->io_base + SEC_RAS_NFE_REG);
+	writel(ce_mask, qm->io_base + SEC_RAS_CE_REG);
+}
+
 static void sec_open_axi_master_ooo(struct hisi_qm *qm)
 {
 	u32 val;
@@ -1082,16 +1136,18 @@ static enum acc_err_result sec_get_err_result(struct hisi_qm *qm)
 
 	err_status = sec_get_hw_err_status(qm);
 	if (err_status) {
-		if (err_status & qm->err_info.ecc_2bits_mask)
+		if (err_status & qm->err_info.dev_err.ecc_2bits_mask)
 			qm->err_status.is_dev_ecc_mbit = true;
 		sec_log_hw_error(qm, err_status);
 
-		if (err_status & qm->err_info.dev_reset_mask) {
+		if (err_status & qm->err_info.dev_err.reset_mask) {
 			/* Disable the same error reporting until device is recovered. */
 			sec_disable_error_report(qm, err_status);
 			return ACC_ERR_NEED_RESET;
 		}
 		sec_clear_hw_err_status(qm, err_status);
+		/* Avoid firmware disable error report, re-enable. */
+		sec_enable_error_report(qm);
 	}
 
 	return ACC_ERR_RECOVERED;
@@ -1102,28 +1158,62 @@ static bool sec_dev_is_abnormal(struct hisi_qm *qm)
 	u32 err_status;
 
 	err_status = sec_get_hw_err_status(qm);
-	if (err_status & qm->err_info.dev_shutdown_mask)
+	if (err_status & qm->err_info.dev_err.shutdown_mask)
 		return true;
 
 	return false;
 }
 
+static void sec_disable_axi_error(struct hisi_qm *qm)
+{
+	struct hisi_qm_err_mask *dev_err = &qm->err_info.dev_err;
+	u32 err_mask = dev_err->ce | dev_err->nfe | dev_err->fe;
+
+	writel(err_mask & ~SEC_AXI_ERROR_MASK, qm->io_base + SEC_CORE_INT_MASK);
+
+	if (qm->ver > QM_HW_V2)
+		writel(dev_err->shutdown_mask & (~SEC_AXI_ERROR_MASK),
+		       qm->io_base + SEC_OOO_SHUTDOWN_SEL);
+}
+
+static void sec_enable_axi_error(struct hisi_qm *qm)
+{
+	struct hisi_qm_err_mask *dev_err = &qm->err_info.dev_err;
+	u32 err_mask = dev_err->ce | dev_err->nfe | dev_err->fe;
+
+	/* clear axi error source */
+	writel(SEC_AXI_ERROR_MASK, qm->io_base + SEC_CORE_INT_SOURCE);
+
+	writel(err_mask, qm->io_base + SEC_CORE_INT_MASK);
+
+	if (qm->ver > QM_HW_V2)
+		writel(dev_err->shutdown_mask, qm->io_base + SEC_OOO_SHUTDOWN_SEL);
+}
+
 static void sec_err_info_init(struct hisi_qm *qm)
 {
 	struct hisi_qm_err_info *err_info = &qm->err_info;
+	struct hisi_qm_err_mask *qm_err = &err_info->qm_err;
+	struct hisi_qm_err_mask *dev_err = &err_info->dev_err;
 
-	err_info->fe = SEC_RAS_FE_ENB_MSK;
-	err_info->ce = hisi_qm_get_hw_info(qm, sec_basic_info, SEC_QM_CE_MASK_CAP, qm->cap_ver);
-	err_info->nfe = hisi_qm_get_hw_info(qm, sec_basic_info, SEC_QM_NFE_MASK_CAP, qm->cap_ver);
-	err_info->ecc_2bits_mask = SEC_CORE_INT_STATUS_M_ECC;
-	err_info->qm_shutdown_mask = hisi_qm_get_hw_info(qm, sec_basic_info,
-			SEC_QM_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver);
-	err_info->dev_shutdown_mask = hisi_qm_get_hw_info(qm, sec_basic_info,
-			SEC_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver);
-	err_info->qm_reset_mask = hisi_qm_get_hw_info(qm, sec_basic_info,
-			SEC_QM_RESET_MASK_CAP, qm->cap_ver);
-	err_info->dev_reset_mask = hisi_qm_get_hw_info(qm, sec_basic_info,
-			SEC_RESET_MASK_CAP, qm->cap_ver);
+	qm_err->fe = SEC_RAS_FE_ENB_MSK;
+	qm_err->ce = hisi_qm_get_hw_info(qm, sec_basic_info, SEC_QM_CE_MASK_CAP, qm->cap_ver);
+	qm_err->nfe = hisi_qm_get_hw_info(qm, sec_basic_info, SEC_QM_NFE_MASK_CAP, qm->cap_ver);
+	qm_err->shutdown_mask = hisi_qm_get_hw_info(qm, sec_basic_info,
+			SEC_QM_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver);
+	qm_err->reset_mask = hisi_qm_get_hw_info(qm, sec_basic_info,
+			SEC_QM_RESET_MASK_CAP, qm->cap_ver);
+	qm_err->ecc_2bits_mask = QM_ECC_MBIT;
+
+	dev_err->fe = SEC_RAS_FE_ENB_MSK;
+	dev_err->ce = hisi_qm_get_hw_info(qm, sec_basic_info, SEC_CE_MASK_CAP, qm->cap_ver);
+	dev_err->nfe = hisi_qm_get_hw_info(qm, sec_basic_info, SEC_NFE_MASK_CAP, qm->cap_ver);
+	dev_err->shutdown_mask = hisi_qm_get_hw_info(qm, sec_basic_info,
+			SEC_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver);
+	dev_err->reset_mask = hisi_qm_get_hw_info(qm, sec_basic_info,
+			SEC_RESET_MASK_CAP, qm->cap_ver);
+	dev_err->ecc_2bits_mask = SEC_CORE_INT_STATUS_M_ECC;
 
 	err_info->msi_wr_port = BIT(0);
 	err_info->acpi_rst = "SRST";
 }
@@ -1141,6 +1231,8 @@ static const struct hisi_qm_err_ini sec_err_ini = {
 	.err_info_init = sec_err_info_init,
 	.get_err_result = sec_get_err_result,
 	.dev_is_abnormal = sec_dev_is_abnormal,
+	.disable_axi_error = sec_disable_axi_error,
+	.enable_axi_error = sec_enable_axi_error,
 };
 
 static int sec_pf_probe_init(struct sec_dev *sec)
@@ -1152,7 +1244,6 @@ static int sec_pf_probe_init(struct sec_dev *sec)
 	if (ret)
 		return ret;
 
-	sec_open_sva_prefetch(qm);
 	hisi_qm_dev_err_init(qm);
 	sec_debug_regs_clear(qm);
 	ret = sec_show_last_regs_init(qm);
@@ -1169,7 +1260,7 @@ static int sec_pre_store_cap_reg(struct hisi_qm *qm)
 	size_t i, size;
 
 	size = ARRAY_SIZE(sec_cap_query_info);
-	sec_cap = devm_kzalloc(&pdev->dev, sizeof(*sec_cap) * size, GFP_KERNEL);
+	sec_cap = devm_kcalloc(&pdev->dev, size, sizeof(*sec_cap), GFP_KERNEL);
 	if (!sec_cap)
 		return -ENOMEM;
@@ -15,6 +15,7 @@
 #define DAE_REG_RD_TMOUT_US		USEC_PER_SEC
 
 #define DAE_ALG_NAME			"hashagg"
+#define DAE_V5_ALG_NAME			"hashagg\nudma\nhashjoin\ngather"
 
 /* error */
 #define DAE_AXI_CFG_OFFSET		0x331000
@@ -82,6 +83,7 @@ int hisi_dae_set_user_domain(struct hisi_qm *qm)
 
 int hisi_dae_set_alg(struct hisi_qm *qm)
 {
+	const char *alg_name;
 	size_t len;
 
 	if (!dae_is_support(qm))
@@ -90,9 +92,14 @@ int hisi_dae_set_alg(struct hisi_qm *qm)
 	if (!qm->uacce)
 		return 0;
 
+	if (qm->ver >= QM_HW_V5)
+		alg_name = DAE_V5_ALG_NAME;
+	else
+		alg_name = DAE_ALG_NAME;
+
 	len = strlen(qm->uacce->algs);
 	/* A line break may be required */
-	if (len + strlen(DAE_ALG_NAME) + 1 >= QM_DEV_ALG_MAX_LEN) {
+	if (len + strlen(alg_name) + 1 >= QM_DEV_ALG_MAX_LEN) {
 		pci_err(qm->pdev, "algorithm name is too long!\n");
 		return -EINVAL;
 	}
@@ -100,7 +107,7 @@ int hisi_dae_set_alg(struct hisi_qm *qm)
 	if (len)
 		strcat((char *)qm->uacce->algs, "\n");
 
-	strcat((char *)qm->uacce->algs, DAE_ALG_NAME);
+	strcat((char *)qm->uacce->algs, alg_name);
 
 	return 0;
 }
@@ -168,6 +175,12 @@ static void hisi_dae_disable_error_report(struct hisi_qm *qm, u32 err_type)
 	writel(DAE_ERR_NFE_MASK & (~err_type), qm->io_base + DAE_ERR_NFE_OFFSET);
 }
 
+static void hisi_dae_enable_error_report(struct hisi_qm *qm)
+{
+	writel(DAE_ERR_CE_MASK, qm->io_base + DAE_ERR_CE_OFFSET);
+	writel(DAE_ERR_NFE_MASK, qm->io_base + DAE_ERR_NFE_OFFSET);
+}
+
 static void hisi_dae_log_hw_error(struct hisi_qm *qm, u32 err_type)
 {
 	const struct hisi_dae_hw_error *err = dae_hw_error;
@@ -209,6 +222,8 @@ enum acc_err_result hisi_dae_get_err_result(struct hisi_qm *qm)
 		return ACC_ERR_NEED_RESET;
 	}
 	hisi_dae_clear_hw_err_status(qm, err_status);
+	/* Avoid firmware disable error report, re-enable. */
+	hisi_dae_enable_error_report(qm);
 
 	return ACC_ERR_RECOVERED;
 }
@@ -65,6 +65,7 @@
 #define HZIP_SRAM_ECC_ERR_NUM_SHIFT	16
 #define HZIP_SRAM_ECC_ERR_ADDR_SHIFT	24
 #define HZIP_CORE_INT_MASK_ALL		GENMASK(12, 0)
+#define HZIP_AXI_ERROR_MASK		(BIT(2) | BIT(3))
 #define HZIP_SQE_SIZE			128
 #define HZIP_PF_DEF_Q_NUM		64
 #define HZIP_PF_DEF_Q_BASE		0
@@ -80,6 +81,7 @@
 #define HZIP_ALG_GZIP_BIT		GENMASK(3, 2)
 #define HZIP_ALG_DEFLATE_BIT		GENMASK(5, 4)
 #define HZIP_ALG_LZ77_BIT		GENMASK(7, 6)
+#define HZIP_ALG_LZ4_BIT		GENMASK(9, 8)
 
 #define HZIP_BUF_SIZE			22
 #define HZIP_SQE_MASK_OFFSET		64
@@ -95,10 +97,16 @@
 #define HZIP_PREFETCH_ENABLE		(~(BIT(26) | BIT(17) | BIT(0)))
 #define HZIP_SVA_PREFETCH_DISABLE	BIT(26)
 #define HZIP_SVA_DISABLE_READY		(BIT(26) | BIT(30))
+#define HZIP_SVA_PREFETCH_NUM		GENMASK(18, 16)
+#define HZIP_SVA_STALL_NUM		GENMASK(15, 0)
 #define HZIP_SHAPER_RATE_COMPRESS	750
 #define HZIP_SHAPER_RATE_DECOMPRESS	140
 #define HZIP_DELAY_1_US			1
 #define HZIP_POLL_TIMEOUT_US		1000
+#define HZIP_WAIT_SVA_READY		500000
+#define HZIP_READ_SVA_STATUS_TIMES	3
+#define HZIP_WAIT_US_MIN		10
+#define HZIP_WAIT_US_MAX		20
 
 /* clock gating */
 #define HZIP_PEH_CFG_AUTO_GATE		0x3011A8
@@ -111,6 +119,9 @@
 /* zip comp high performance */
 #define HZIP_HIGH_PERF_OFFSET		0x301208
 
+#define HZIP_LIT_LEN_EN_OFFSET		0x301204
+#define HZIP_LIT_LEN_EN_EN		BIT(4)
+
 enum {
 	HZIP_HIGH_COMP_RATE,
 	HZIP_HIGH_COMP_PERF,
@@ -141,6 +152,12 @@ static const struct qm_dev_alg zip_dev_algs[] = { {
 	}, {
 		.alg_msk = HZIP_ALG_LZ77_BIT,
 		.alg = "lz77_zstd\n",
+	}, {
+		.alg_msk = HZIP_ALG_LZ77_BIT,
+		.alg = "lz77_only\n",
+	}, {
+		.alg_msk = HZIP_ALG_LZ4_BIT,
+		.alg = "lz4\n",
 	},
 };
@@ -448,10 +465,23 @@ bool hisi_zip_alg_support(struct hisi_qm *qm, u32 alg)
 	return false;
 }
 
-static int hisi_zip_set_high_perf(struct hisi_qm *qm)
+static void hisi_zip_literal_set(struct hisi_qm *qm)
+{
+	u32 val;
+
+	if (qm->ver < QM_HW_V3)
+		return;
+
+	val = readl_relaxed(qm->io_base + HZIP_LIT_LEN_EN_OFFSET);
+	val &= ~HZIP_LIT_LEN_EN_EN;
+
+	/* enable literal length in stream mode compression */
+	writel(val, qm->io_base + HZIP_LIT_LEN_EN_OFFSET);
+}
+
+static void hisi_zip_set_high_perf(struct hisi_qm *qm)
 {
 	u32 val;
-	int ret;
 
 	val = readl_relaxed(qm->io_base + HZIP_HIGH_PERF_OFFSET);
 	if (perf_mode == HZIP_HIGH_COMP_PERF)
@@ -461,33 +491,33 @@ static int hisi_zip_set_high_perf(struct hisi_qm *qm)
 
 	/* Set perf mode */
 	writel(val, qm->io_base + HZIP_HIGH_PERF_OFFSET);
-	ret = readl_relaxed_poll_timeout(qm->io_base + HZIP_HIGH_PERF_OFFSET,
-					 val, val == perf_mode, HZIP_DELAY_1_US,
-					 HZIP_POLL_TIMEOUT_US);
-	if (ret)
-		pci_err(qm->pdev, "failed to set perf mode\n");
-
-	return ret;
 }
 
-static void hisi_zip_open_sva_prefetch(struct hisi_qm *qm)
+static int hisi_zip_wait_sva_ready(struct hisi_qm *qm, __u32 offset, __u32 mask)
 {
-	u32 val;
-	int ret;
+	u32 val, try_times = 0;
+	u8 count = 0;
 
-	if (!test_bit(QM_SUPPORT_SVA_PREFETCH, &qm->caps))
-		return;
+	/*
+	 * Read the register value every 10-20us. If the value is 0 for three
+	 * consecutive times, the SVA module is ready.
+	 */
+	do {
+		val = readl(qm->io_base + offset);
+		if (val & mask)
+			count = 0;
+		else if (++count == HZIP_READ_SVA_STATUS_TIMES)
+			break;
 
-	/* Enable prefetch */
-	val = readl_relaxed(qm->io_base + HZIP_PREFETCH_CFG);
-	val &= HZIP_PREFETCH_ENABLE;
-	writel(val, qm->io_base + HZIP_PREFETCH_CFG);
+		usleep_range(HZIP_WAIT_US_MIN, HZIP_WAIT_US_MAX);
+	} while (++try_times < HZIP_WAIT_SVA_READY);
 
-	ret = readl_relaxed_poll_timeout(qm->io_base + HZIP_PREFETCH_CFG,
-					 val, !(val & HZIP_SVA_PREFETCH_DISABLE),
-					 HZIP_DELAY_1_US, HZIP_POLL_TIMEOUT_US);
-	if (ret)
-		pci_err(qm->pdev, "failed to open sva prefetch\n");
+	if (try_times == HZIP_WAIT_SVA_READY) {
+		pci_err(qm->pdev, "failed to wait sva prefetch ready\n");
+		return -ETIMEDOUT;
+	}
+
+	return 0;
 }
 
 static void hisi_zip_close_sva_prefetch(struct hisi_qm *qm)
@@ -507,6 +537,35 @@ static void hisi_zip_close_sva_prefetch(struct hisi_qm *qm)
 					 HZIP_DELAY_1_US, HZIP_POLL_TIMEOUT_US);
 	if (ret)
 		pci_err(qm->pdev, "failed to close sva prefetch\n");
+
+	(void)hisi_zip_wait_sva_ready(qm, HZIP_SVA_TRANS, HZIP_SVA_STALL_NUM);
+}
+
+static void hisi_zip_open_sva_prefetch(struct hisi_qm *qm)
+{
+	u32 val;
+	int ret;
+
+	if (!test_bit(QM_SUPPORT_SVA_PREFETCH, &qm->caps))
+		return;
+
+	/* Enable prefetch */
+	val = readl_relaxed(qm->io_base + HZIP_PREFETCH_CFG);
+	val &= HZIP_PREFETCH_ENABLE;
+	writel(val, qm->io_base + HZIP_PREFETCH_CFG);
+
+	ret = readl_relaxed_poll_timeout(qm->io_base + HZIP_PREFETCH_CFG,
+					 val, !(val & HZIP_SVA_PREFETCH_DISABLE),
+					 HZIP_DELAY_1_US, HZIP_POLL_TIMEOUT_US);
+	if (ret) {
+		pci_err(qm->pdev, "failed to open sva prefetch\n");
+		hisi_zip_close_sva_prefetch(qm);
+		return;
+	}
+
+	ret = hisi_zip_wait_sva_ready(qm, HZIP_SVA_TRANS, HZIP_SVA_PREFETCH_NUM);
+	if (ret)
+		hisi_zip_close_sva_prefetch(qm);
 }
 
 static void hisi_zip_enable_clock_gate(struct hisi_qm *qm)
@@ -530,6 +589,7 @@ static int hisi_zip_set_user_domain_and_cache(struct hisi_qm *qm)
 	void __iomem *base = qm->io_base;
 	u32 dcomp_bm, comp_bm;
 	u32 zip_core_en;
+	int ret;
 
 	/* qm user domain */
 	writel(AXUSER_BASE, base + QM_ARUSER_M_CFG_1);
@@ -565,6 +625,7 @@ static int hisi_zip_set_user_domain_and_cache(struct hisi_qm *qm)
 		writel(AXUSER_BASE, base + HZIP_DATA_WUSER_32_63);
 		writel(AXUSER_BASE, base + HZIP_SGL_RUSER_32_63);
 	}
+	hisi_zip_open_sva_prefetch(qm);
 
 	/* let's open all compression/decompression cores */
 
@@ -580,9 +641,19 @@ static int hisi_zip_set_user_domain_and_cache(struct hisi_qm *qm)
 	       CQC_CACHE_WB_ENABLE | FIELD_PREP(SQC_CACHE_WB_THRD, 1) |
 	       FIELD_PREP(CQC_CACHE_WB_THRD, 1), base + QM_CACHE_CTL);
 
+	hisi_zip_set_high_perf(qm);
+	hisi_zip_literal_set(qm);
 	hisi_zip_enable_clock_gate(qm);
 
-	return hisi_dae_set_user_domain(qm);
+	ret = hisi_dae_set_user_domain(qm);
+	if (ret)
+		goto close_sva_prefetch;
+
+	return 0;
+
+close_sva_prefetch:
+	hisi_zip_close_sva_prefetch(qm);
+	return ret;
 }
 
 static void hisi_zip_master_ooo_ctrl(struct hisi_qm *qm, bool enable)
@@ -592,8 +663,7 @@ static void hisi_zip_master_ooo_ctrl(struct hisi_qm *qm, bool enable)
 	val1 = readl(qm->io_base + HZIP_SOFT_CTRL_ZIP_CONTROL);
 	if (enable) {
 		val1 |= HZIP_AXI_SHUTDOWN_ENABLE;
-		val2 = hisi_qm_get_hw_info(qm, zip_basic_cap_info,
-					   ZIP_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver);
+		val2 = qm->err_info.dev_err.shutdown_mask;
 	} else {
 		val1 &= ~HZIP_AXI_SHUTDOWN_ENABLE;
 		val2 = 0x0;
@@ -607,7 +677,8 @@ static void hisi_zip_master_ooo_ctrl(struct hisi_qm *qm, bool enable)
 
 static void hisi_zip_hw_error_enable(struct hisi_qm *qm)
 {
-	u32 nfe, ce;
+	struct hisi_qm_err_mask *dev_err = &qm->err_info.dev_err;
+	u32 err_mask = dev_err->ce | dev_err->nfe | dev_err->fe;
 
 	if (qm->ver == QM_HW_V1) {
 		writel(HZIP_CORE_INT_MASK_ALL,
@@ -616,33 +687,29 @@ static void hisi_zip_hw_error_enable(struct hisi_qm *qm)
 		return;
 	}
 
-	nfe = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_NFE_MASK_CAP, qm->cap_ver);
-	ce = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_CE_MASK_CAP, qm->cap_ver);
-
 	/* clear ZIP hw error source if having */
-	writel(ce | nfe | HZIP_CORE_INT_RAS_FE_ENB_MASK, qm->io_base + HZIP_CORE_INT_SOURCE);
+	writel(err_mask, qm->io_base + HZIP_CORE_INT_SOURCE);
 
 	/* configure error type */
-	writel(ce, qm->io_base + HZIP_CORE_INT_RAS_CE_ENB);
-	writel(HZIP_CORE_INT_RAS_FE_ENB_MASK, qm->io_base + HZIP_CORE_INT_RAS_FE_ENB);
-	writel(nfe, qm->io_base + HZIP_CORE_INT_RAS_NFE_ENB);
+	writel(dev_err->ce, qm->io_base + HZIP_CORE_INT_RAS_CE_ENB);
+	writel(dev_err->fe, qm->io_base + HZIP_CORE_INT_RAS_FE_ENB);
+	writel(dev_err->nfe, qm->io_base + HZIP_CORE_INT_RAS_NFE_ENB);
 
 	hisi_zip_master_ooo_ctrl(qm, true);
 
 	/* enable ZIP hw error interrupts */
-	writel(0, qm->io_base + HZIP_CORE_INT_MASK_REG);
+	writel(~err_mask, qm->io_base + HZIP_CORE_INT_MASK_REG);
 
 	hisi_dae_hw_error_enable(qm);
 }
|
||||||
static void hisi_zip_hw_error_disable(struct hisi_qm *qm)
|
static void hisi_zip_hw_error_disable(struct hisi_qm *qm)
|
||||||
{
|
{
|
||||||
u32 nfe, ce;
|
struct hisi_qm_err_mask *dev_err = &qm->err_info.dev_err;
|
||||||
|
u32 err_mask = dev_err->ce | dev_err->nfe | dev_err->fe;
|
||||||
|
|
||||||
/* disable ZIP hw error interrupts */
|
/* disable ZIP hw error interrupts */
|
||||||
nfe = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_NFE_MASK_CAP, qm->cap_ver);
|
writel(err_mask, qm->io_base + HZIP_CORE_INT_MASK_REG);
|
||||||
ce = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_CE_MASK_CAP, qm->cap_ver);
|
|
||||||
writel(ce | nfe | HZIP_CORE_INT_RAS_FE_ENB_MASK, qm->io_base + HZIP_CORE_INT_MASK_REG);
|
|
||||||
|
|
||||||
hisi_zip_master_ooo_ctrl(qm, false);
|
hisi_zip_master_ooo_ctrl(qm, false);
|
||||||
|
|
||||||
|
|
@ -1116,12 +1183,20 @@ static void hisi_zip_clear_hw_err_status(struct hisi_qm *qm, u32 err_sts)
|
||||||
|
|
||||||
static void hisi_zip_disable_error_report(struct hisi_qm *qm, u32 err_type)
|
static void hisi_zip_disable_error_report(struct hisi_qm *qm, u32 err_type)
|
||||||
{
|
{
|
||||||
u32 nfe_mask;
|
u32 nfe_mask = qm->err_info.dev_err.nfe;
|
||||||
|
|
||||||
nfe_mask = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_NFE_MASK_CAP, qm->cap_ver);
|
|
||||||
writel(nfe_mask & (~err_type), qm->io_base + HZIP_CORE_INT_RAS_NFE_ENB);
|
writel(nfe_mask & (~err_type), qm->io_base + HZIP_CORE_INT_RAS_NFE_ENB);
|
||||||
}
|
}
|
||||||
|
|
||||||
|
static void hisi_zip_enable_error_report(struct hisi_qm *qm)
|
||||||
|
{
|
||||||
|
u32 nfe_mask = qm->err_info.dev_err.nfe;
|
||||||
|
u32 ce_mask = qm->err_info.dev_err.ce;
|
||||||
|
|
||||||
|
writel(nfe_mask, qm->io_base + HZIP_CORE_INT_RAS_NFE_ENB);
|
||||||
|
writel(ce_mask, qm->io_base + HZIP_CORE_INT_RAS_CE_ENB);
|
||||||
|
}
|
||||||
|
|
||||||
static void hisi_zip_open_axi_master_ooo(struct hisi_qm *qm)
|
static void hisi_zip_open_axi_master_ooo(struct hisi_qm *qm)
|
||||||
{
|
{
|
||||||
u32 val;
|
u32 val;
|
||||||
|
|
@ -1160,16 +1235,18 @@ static enum acc_err_result hisi_zip_get_err_result(struct hisi_qm *qm)
|
||||||
/* Get device hardware new error status */
|
/* Get device hardware new error status */
|
||||||
err_status = hisi_zip_get_hw_err_status(qm);
|
err_status = hisi_zip_get_hw_err_status(qm);
|
||||||
if (err_status) {
|
if (err_status) {
|
||||||
if (err_status & qm->err_info.ecc_2bits_mask)
|
if (err_status & qm->err_info.dev_err.ecc_2bits_mask)
|
||||||
qm->err_status.is_dev_ecc_mbit = true;
|
qm->err_status.is_dev_ecc_mbit = true;
|
||||||
hisi_zip_log_hw_error(qm, err_status);
|
hisi_zip_log_hw_error(qm, err_status);
|
||||||
|
|
||||||
if (err_status & qm->err_info.dev_reset_mask) {
|
if (err_status & qm->err_info.dev_err.reset_mask) {
|
||||||
/* Disable the same error reporting until device is recovered. */
|
/* Disable the same error reporting until device is recovered. */
|
||||||
hisi_zip_disable_error_report(qm, err_status);
|
hisi_zip_disable_error_report(qm, err_status);
|
||||||
return ACC_ERR_NEED_RESET;
|
zip_result = ACC_ERR_NEED_RESET;
|
||||||
} else {
|
} else {
|
||||||
hisi_zip_clear_hw_err_status(qm, err_status);
|
hisi_zip_clear_hw_err_status(qm, err_status);
|
||||||
|
/* Avoid firmware disable error report, re-enable. */
|
||||||
|
hisi_zip_enable_error_report(qm);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
@ -1185,7 +1262,7 @@ static bool hisi_zip_dev_is_abnormal(struct hisi_qm *qm)
|
||||||
u32 err_status;
|
u32 err_status;
|
||||||
|
|
||||||
err_status = hisi_zip_get_hw_err_status(qm);
|
err_status = hisi_zip_get_hw_err_status(qm);
|
||||||
if (err_status & qm->err_info.dev_shutdown_mask)
|
if (err_status & qm->err_info.dev_err.shutdown_mask)
|
||||||
return true;
|
return true;
|
||||||
|
|
||||||
return hisi_dae_dev_is_abnormal(qm);
|
return hisi_dae_dev_is_abnormal(qm);
|
||||||
|
|
@ -1196,23 +1273,59 @@ static int hisi_zip_set_priv_status(struct hisi_qm *qm)
|
||||||
return hisi_dae_close_axi_master_ooo(qm);
|
return hisi_dae_close_axi_master_ooo(qm);
|
||||||
}
|
}
|
||||||
|
|
||||||
|
static void hisi_zip_disable_axi_error(struct hisi_qm *qm)
|
||||||
|
{
|
||||||
|
struct hisi_qm_err_mask *dev_err = &qm->err_info.dev_err;
|
||||||
|
u32 err_mask = dev_err->ce | dev_err->nfe | dev_err->fe;
|
||||||
|
u32 val;
|
||||||
|
|
||||||
|
val = ~(err_mask & (~HZIP_AXI_ERROR_MASK));
|
||||||
|
writel(val, qm->io_base + HZIP_CORE_INT_MASK_REG);
|
||||||
|
|
||||||
|
if (qm->ver > QM_HW_V2)
|
||||||
|
writel(dev_err->shutdown_mask & (~HZIP_AXI_ERROR_MASK),
|
||||||
|
qm->io_base + HZIP_OOO_SHUTDOWN_SEL);
|
||||||
|
}
|
||||||
|
|
||||||
|
static void hisi_zip_enable_axi_error(struct hisi_qm *qm)
|
||||||
|
{
|
||||||
|
struct hisi_qm_err_mask *dev_err = &qm->err_info.dev_err;
|
||||||
|
u32 err_mask = dev_err->ce | dev_err->nfe | dev_err->fe;
|
||||||
|
|
||||||
|
/* clear axi error source */
|
||||||
|
writel(HZIP_AXI_ERROR_MASK, qm->io_base + HZIP_CORE_INT_SOURCE);
|
||||||
|
|
||||||
|
writel(~err_mask, qm->io_base + HZIP_CORE_INT_MASK_REG);
|
||||||
|
|
||||||
|
if (qm->ver > QM_HW_V2)
|
||||||
|
writel(dev_err->shutdown_mask, qm->io_base + HZIP_OOO_SHUTDOWN_SEL);
|
||||||
|
}
|
||||||
|
|
||||||
static void hisi_zip_err_info_init(struct hisi_qm *qm)
|
static void hisi_zip_err_info_init(struct hisi_qm *qm)
|
||||||
{
|
{
|
||||||
struct hisi_qm_err_info *err_info = &qm->err_info;
|
struct hisi_qm_err_info *err_info = &qm->err_info;
|
||||||
|
struct hisi_qm_err_mask *qm_err = &err_info->qm_err;
|
||||||
|
struct hisi_qm_err_mask *dev_err = &err_info->dev_err;
|
||||||
|
|
||||||
err_info->fe = HZIP_CORE_INT_RAS_FE_ENB_MASK;
|
qm_err->fe = HZIP_CORE_INT_RAS_FE_ENB_MASK;
|
||||||
err_info->ce = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_QM_CE_MASK_CAP, qm->cap_ver);
|
qm_err->ce = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_QM_CE_MASK_CAP, qm->cap_ver);
|
||||||
err_info->nfe = hisi_qm_get_hw_info(qm, zip_basic_cap_info,
|
qm_err->nfe = hisi_qm_get_hw_info(qm, zip_basic_cap_info,
|
||||||
ZIP_QM_NFE_MASK_CAP, qm->cap_ver);
|
ZIP_QM_NFE_MASK_CAP, qm->cap_ver);
|
||||||
err_info->ecc_2bits_mask = HZIP_CORE_INT_STATUS_M_ECC;
|
qm_err->ecc_2bits_mask = QM_ECC_MBIT;
|
||||||
err_info->qm_shutdown_mask = hisi_qm_get_hw_info(qm, zip_basic_cap_info,
|
qm_err->reset_mask = hisi_qm_get_hw_info(qm, zip_basic_cap_info,
|
||||||
ZIP_QM_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver);
|
|
||||||
err_info->dev_shutdown_mask = hisi_qm_get_hw_info(qm, zip_basic_cap_info,
|
|
||||||
ZIP_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver);
|
|
||||||
err_info->qm_reset_mask = hisi_qm_get_hw_info(qm, zip_basic_cap_info,
|
|
||||||
ZIP_QM_RESET_MASK_CAP, qm->cap_ver);
|
ZIP_QM_RESET_MASK_CAP, qm->cap_ver);
|
||||||
err_info->dev_reset_mask = hisi_qm_get_hw_info(qm, zip_basic_cap_info,
|
qm_err->shutdown_mask = hisi_qm_get_hw_info(qm, zip_basic_cap_info,
|
||||||
|
ZIP_QM_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver);
|
||||||
|
|
||||||
|
dev_err->fe = HZIP_CORE_INT_RAS_FE_ENB_MASK;
|
||||||
|
dev_err->ce = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_CE_MASK_CAP, qm->cap_ver);
|
||||||
|
dev_err->nfe = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_NFE_MASK_CAP, qm->cap_ver);
|
||||||
|
dev_err->ecc_2bits_mask = HZIP_CORE_INT_STATUS_M_ECC;
|
||||||
|
dev_err->shutdown_mask = hisi_qm_get_hw_info(qm, zip_basic_cap_info,
|
||||||
|
ZIP_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver);
|
||||||
|
dev_err->reset_mask = hisi_qm_get_hw_info(qm, zip_basic_cap_info,
|
||||||
ZIP_RESET_MASK_CAP, qm->cap_ver);
|
ZIP_RESET_MASK_CAP, qm->cap_ver);
|
||||||
|
|
||||||
err_info->msi_wr_port = HZIP_WR_PORT;
|
err_info->msi_wr_port = HZIP_WR_PORT;
|
||||||
err_info->acpi_rst = "ZRST";
|
err_info->acpi_rst = "ZRST";
|
||||||
}
|
}
|
||||||
|
|
@ -1232,6 +1345,8 @@ static const struct hisi_qm_err_ini hisi_zip_err_ini = {
|
||||||
.get_err_result = hisi_zip_get_err_result,
|
.get_err_result = hisi_zip_get_err_result,
|
||||||
.set_priv_status = hisi_zip_set_priv_status,
|
.set_priv_status = hisi_zip_set_priv_status,
|
||||||
.dev_is_abnormal = hisi_zip_dev_is_abnormal,
|
.dev_is_abnormal = hisi_zip_dev_is_abnormal,
|
||||||
|
.disable_axi_error = hisi_zip_disable_axi_error,
|
||||||
|
.enable_axi_error = hisi_zip_enable_axi_error,
|
||||||
};
|
};
|
||||||
|
|
||||||
static int hisi_zip_pf_probe_init(struct hisi_zip *hisi_zip)
|
static int hisi_zip_pf_probe_init(struct hisi_zip *hisi_zip)
|
||||||
|
|
@ -1251,11 +1366,6 @@ static int hisi_zip_pf_probe_init(struct hisi_zip *hisi_zip)
|
||||||
if (ret)
|
if (ret)
|
||||||
return ret;
|
return ret;
|
||||||
|
|
||||||
ret = hisi_zip_set_high_perf(qm);
|
|
||||||
if (ret)
|
|
||||||
return ret;
|
|
||||||
|
|
||||||
hisi_zip_open_sva_prefetch(qm);
|
|
||||||
hisi_qm_dev_err_init(qm);
|
hisi_qm_dev_err_init(qm);
|
||||||
hisi_zip_debug_regs_clear(qm);
|
hisi_zip_debug_regs_clear(qm);
|
||||||
|
|
||||||
|
|
@ -1273,7 +1383,7 @@ static int zip_pre_store_cap_reg(struct hisi_qm *qm)
|
||||||
size_t i, size;
|
size_t i, size;
|
||||||
|
|
||||||
size = ARRAY_SIZE(zip_cap_query_info);
|
size = ARRAY_SIZE(zip_cap_query_info);
|
||||||
zip_cap = devm_kzalloc(&pdev->dev, sizeof(*zip_cap) * size, GFP_KERNEL);
|
zip_cap = devm_kcalloc(&pdev->dev, size, sizeof(*zip_cap), GFP_KERNEL);
|
||||||
if (!zip_cap)
|
if (!zip_cap)
|
||||||
return -ENOMEM;
|
return -ENOMEM;
|
||||||
|
|
||||||
|
|
|
||||||
|
|
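The new hisi_zip_wait_sva_ready() helper above accepts the device as quiescent only after several consecutive idle reads, not after the first one. A minimal sketch of that debounced-poll pattern; the register, mask, and timing names here are illustrative stand-ins, not the driver's own:

static int wait_status_stable(void __iomem *base)
{
	u32 try_times = 0, count = 0;

	do {
		if (!(readl(base + REG_STATUS) & BUSY_MASK))
			count++;		/* one more idle sample */
		else
			count = 0;		/* busy again: restart the run */

		if (count == READY_SAMPLES)	/* N idle reads in a row */
			return 0;

		usleep_range(WAIT_US_MIN, WAIT_US_MAX);
	} while (++try_times < MAX_TRIES);

	return -ETIMEDOUT;
}

Resetting the counter on any busy sample is the point: a status flag that briefly reads idle between transactions is not mistaken for real quiescence.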
@@ -232,7 +232,7 @@ static int kmb_ocs_dma_prepare(struct ahash_request *req)
 	struct device *dev = rctx->hcu_dev->dev;
 	unsigned int remainder = 0;
 	unsigned int total;
-	size_t nents;
+	int nents;
 	size_t count;
 	int rc;
 	int i;
@@ -253,6 +253,9 @@ static int kmb_ocs_dma_prepare(struct ahash_request *req)
 	/* Determine the number of scatter gather list entries to process. */
 	nents = sg_nents_for_len(req->src, rctx->sg_data_total - remainder);
+
+	if (nents < 0)
+		return nents;
 
 	/* If there are entries to process, map them. */
 	if (nents) {
 		rctx->sg_dma_nents = dma_map_sg(dev, req->src, nents,
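The type change matters because sg_nents_for_len() reports a too-short scatterlist as a negative errno; stored in a size_t, -EINVAL silently becomes a huge positive entry count. A sketch of the corrected shape (the helper name is hypothetical):

static int map_hash_src(struct device *dev, struct scatterlist *sg, size_t len)
{
	int nents = sg_nents_for_len(sg, len);	/* may be -EINVAL */

	if (nents < 0)
		return nents;	/* a size_t result would have hidden this */

	return dma_map_sg(dev, sg, nents, DMA_TO_DEVICE) ? 0 : -EIO;
}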
@@ -6,12 +6,11 @@ config CRYPTO_DEV_QAT
 	select CRYPTO_SKCIPHER
 	select CRYPTO_AKCIPHER
 	select CRYPTO_DH
-	select CRYPTO_HMAC
 	select CRYPTO_RSA
-	select CRYPTO_SHA1
-	select CRYPTO_SHA256
-	select CRYPTO_SHA512
 	select CRYPTO_LIB_AES
+	select CRYPTO_LIB_SHA1
+	select CRYPTO_LIB_SHA256
+	select CRYPTO_LIB_SHA512
 	select FW_LOADER
 	select CRC8
@@ -89,26 +89,14 @@ err_chrdev_unreg:
 	return -EFAULT;
 }
 
-static int adf_ctl_alloc_resources(struct adf_user_cfg_ctl_data **ctl_data,
-				   unsigned long arg)
+static struct adf_user_cfg_ctl_data *adf_ctl_alloc_resources(unsigned long arg)
 {
 	struct adf_user_cfg_ctl_data *cfg_data;
 
-	cfg_data = kzalloc(sizeof(*cfg_data), GFP_KERNEL);
-	if (!cfg_data)
-		return -ENOMEM;
-
-	/* Initialize device id to NO DEVICE as 0 is a valid device id */
-	cfg_data->device_id = ADF_CFG_NO_DEVICE;
-
-	if (copy_from_user(cfg_data, (void __user *)arg, sizeof(*cfg_data))) {
+	cfg_data = memdup_user((void __user *)arg, sizeof(*cfg_data));
+	if (IS_ERR(cfg_data))
 		pr_err("QAT: failed to copy from user cfg_data.\n");
-		kfree(cfg_data);
-		return -EIO;
-	}
 
-	*ctl_data = cfg_data;
-	return 0;
+	return cfg_data;
 }
 
 static int adf_add_key_value_data(struct adf_accel_dev *accel_dev,
@@ -188,13 +176,13 @@ out_err:
 static int adf_ctl_ioctl_dev_config(struct file *fp, unsigned int cmd,
 				    unsigned long arg)
 {
-	int ret;
 	struct adf_user_cfg_ctl_data *ctl_data;
 	struct adf_accel_dev *accel_dev;
+	int ret = 0;
 
-	ret = adf_ctl_alloc_resources(&ctl_data, arg);
-	if (ret)
-		return ret;
+	ctl_data = adf_ctl_alloc_resources(arg);
+	if (IS_ERR(ctl_data))
+		return PTR_ERR(ctl_data);
 
 	accel_dev = adf_devmgr_get_dev_by_id(ctl_data->device_id);
 	if (!accel_dev) {
@@ -267,9 +255,9 @@ static int adf_ctl_ioctl_dev_stop(struct file *fp, unsigned int cmd,
 	int ret;
 	struct adf_user_cfg_ctl_data *ctl_data;
 
-	ret = adf_ctl_alloc_resources(&ctl_data, arg);
-	if (ret)
-		return ret;
+	ctl_data = adf_ctl_alloc_resources(arg);
+	if (IS_ERR(ctl_data))
+		return PTR_ERR(ctl_data);
 
 	if (adf_devmgr_verify_id(ctl_data->device_id)) {
 		pr_err("QAT: Device %d not found\n", ctl_data->device_id);
@@ -301,9 +289,9 @@ static int adf_ctl_ioctl_dev_start(struct file *fp, unsigned int cmd,
 	struct adf_user_cfg_ctl_data *ctl_data;
 	struct adf_accel_dev *accel_dev;
 
-	ret = adf_ctl_alloc_resources(&ctl_data, arg);
-	if (ret)
-		return ret;
+	ctl_data = adf_ctl_alloc_resources(arg);
+	if (IS_ERR(ctl_data))
+		return PTR_ERR(ctl_data);
 
 	ret = -ENODEV;
 	accel_dev = adf_devmgr_get_dev_by_id(ctl_data->device_id);
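The kzalloc() + copy_from_user() pair collapses into memdup_user(), which allocates and copies in one step and returns an ERR_PTR() on failure, so no kfree() is needed on the error path. The idiom in isolation, with a hypothetical struct name:

struct my_cfg *cfg;

cfg = memdup_user(uptr, sizeof(*cfg));
if (IS_ERR(cfg))
	return PTR_ERR(cfg);	/* -EFAULT or -ENOMEM */

Because memdup_user() overwrites the whole structure with the user's copy, the old pre-initialization of device_id was dead code and is dropped along with it.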
@@ -21,6 +21,25 @@
 
 #define SLICE_IDX(sl) offsetof(struct icp_qat_fw_init_admin_slice_cnt, sl##_cnt)
 
+#define ADF_GEN6_TL_CMDQ_WAIT_COUNTER(_name)				\
+	ADF_TL_COUNTER("cmdq_wait_" #_name, ADF_TL_SIMPLE_COUNT,	\
+		       ADF_TL_CMDQ_REG_OFF(_name, reg_tm_cmdq_wait_cnt, gen6))
+#define ADF_GEN6_TL_CMDQ_EXEC_COUNTER(_name)				\
+	ADF_TL_COUNTER("cmdq_exec_" #_name, ADF_TL_SIMPLE_COUNT,	\
+		       ADF_TL_CMDQ_REG_OFF(_name, reg_tm_cmdq_exec_cnt, gen6))
+#define ADF_GEN6_TL_CMDQ_DRAIN_COUNTER(_name)				\
+	ADF_TL_COUNTER("cmdq_drain_" #_name, ADF_TL_SIMPLE_COUNT,	\
+		       ADF_TL_CMDQ_REG_OFF(_name, reg_tm_cmdq_drain_cnt, \
+					   gen6))
+
+#define CPR_QUEUE_COUNT		5
+#define DCPR_QUEUE_COUNT	3
+#define PKE_QUEUE_COUNT		1
+#define WAT_QUEUE_COUNT		7
+#define WCP_QUEUE_COUNT		7
+#define USC_QUEUE_COUNT		3
+#define ATH_QUEUE_COUNT		2
+
 /* Device level counters. */
 static const struct adf_tl_dbg_counter dev_counters[] = {
 	/* PCIe partial transactions. */
@@ -57,6 +76,10 @@ static const struct adf_tl_dbg_counter dev_counters[] = {
 	/* Maximum uTLB used. */
 	ADF_TL_COUNTER(AT_MAX_UTLB_USED_NAME, ADF_TL_SIMPLE_COUNT,
 		       ADF_GEN6_TL_DEV_REG_OFF(reg_tl_at_max_utlb_used)),
+	/* Ring Empty average[ns] across all rings */
+	ADF_TL_COUNTER_LATENCY(RE_ACC_NAME, ADF_TL_COUNTER_NS_AVG,
+			       ADF_GEN6_TL_DEV_REG_OFF(reg_tl_re_acc),
+			       ADF_GEN6_TL_DEV_REG_OFF(reg_tl_re_cnt)),
 };
 
 /* Accelerator utilization counters */
@@ -95,6 +118,80 @@ static const struct adf_tl_dbg_counter sl_exec_counters[ADF_TL_SL_CNT_COUNT] = {
 	[SLICE_IDX(ath)] = ADF_GEN6_TL_SL_EXEC_COUNTER(ath),
 };
 
+static const struct adf_tl_dbg_counter cnv_cmdq_counters[] = {
+	ADF_GEN6_TL_CMDQ_WAIT_COUNTER(cnv),
+	ADF_GEN6_TL_CMDQ_EXEC_COUNTER(cnv),
+	ADF_GEN6_TL_CMDQ_DRAIN_COUNTER(cnv)
+};
+
+#define NUM_CMDQ_COUNTERS ARRAY_SIZE(cnv_cmdq_counters)
+
+static const struct adf_tl_dbg_counter dcprz_cmdq_counters[] = {
+	ADF_GEN6_TL_CMDQ_WAIT_COUNTER(dcprz),
+	ADF_GEN6_TL_CMDQ_EXEC_COUNTER(dcprz),
+	ADF_GEN6_TL_CMDQ_DRAIN_COUNTER(dcprz)
+};
+
+static_assert(ARRAY_SIZE(dcprz_cmdq_counters) == NUM_CMDQ_COUNTERS);
+
+static const struct adf_tl_dbg_counter pke_cmdq_counters[] = {
+	ADF_GEN6_TL_CMDQ_WAIT_COUNTER(pke),
+	ADF_GEN6_TL_CMDQ_EXEC_COUNTER(pke),
+	ADF_GEN6_TL_CMDQ_DRAIN_COUNTER(pke)
+};
+
+static_assert(ARRAY_SIZE(pke_cmdq_counters) == NUM_CMDQ_COUNTERS);
+
+static const struct adf_tl_dbg_counter wat_cmdq_counters[] = {
+	ADF_GEN6_TL_CMDQ_WAIT_COUNTER(wat),
+	ADF_GEN6_TL_CMDQ_EXEC_COUNTER(wat),
+	ADF_GEN6_TL_CMDQ_DRAIN_COUNTER(wat)
+};
+
+static_assert(ARRAY_SIZE(wat_cmdq_counters) == NUM_CMDQ_COUNTERS);
+
+static const struct adf_tl_dbg_counter wcp_cmdq_counters[] = {
+	ADF_GEN6_TL_CMDQ_WAIT_COUNTER(wcp),
+	ADF_GEN6_TL_CMDQ_EXEC_COUNTER(wcp),
+	ADF_GEN6_TL_CMDQ_DRAIN_COUNTER(wcp)
+};
+
+static_assert(ARRAY_SIZE(wcp_cmdq_counters) == NUM_CMDQ_COUNTERS);
+
+static const struct adf_tl_dbg_counter ucs_cmdq_counters[] = {
+	ADF_GEN6_TL_CMDQ_WAIT_COUNTER(ucs),
+	ADF_GEN6_TL_CMDQ_EXEC_COUNTER(ucs),
+	ADF_GEN6_TL_CMDQ_DRAIN_COUNTER(ucs)
+};
+
+static_assert(ARRAY_SIZE(ucs_cmdq_counters) == NUM_CMDQ_COUNTERS);
+
+static const struct adf_tl_dbg_counter ath_cmdq_counters[] = {
+	ADF_GEN6_TL_CMDQ_WAIT_COUNTER(ath),
+	ADF_GEN6_TL_CMDQ_EXEC_COUNTER(ath),
+	ADF_GEN6_TL_CMDQ_DRAIN_COUNTER(ath)
+};
+
+static_assert(ARRAY_SIZE(ath_cmdq_counters) == NUM_CMDQ_COUNTERS);
+
+/* CMDQ drain counters. */
+static const struct adf_tl_dbg_counter *cmdq_counters[ADF_TL_SL_CNT_COUNT] = {
+	/* Compression accelerator execution count. */
+	[SLICE_IDX(cpr)] = cnv_cmdq_counters,
+	/* Decompression accelerator execution count. */
+	[SLICE_IDX(dcpr)] = dcprz_cmdq_counters,
+	/* PKE execution count. */
+	[SLICE_IDX(pke)] = pke_cmdq_counters,
+	/* Wireless Authentication accelerator execution count. */
+	[SLICE_IDX(wat)] = wat_cmdq_counters,
+	/* Wireless Cipher accelerator execution count. */
+	[SLICE_IDX(wcp)] = wcp_cmdq_counters,
+	/* UCS accelerator execution count. */
+	[SLICE_IDX(ucs)] = ucs_cmdq_counters,
+	/* Authentication accelerator execution count. */
+	[SLICE_IDX(ath)] = ath_cmdq_counters,
+};
+
 /* Ring pair counters. */
 static const struct adf_tl_dbg_counter rp_counters[] = {
 	/* PCIe partial transactions. */
@@ -122,12 +219,17 @@ static const struct adf_tl_dbg_counter rp_counters[] = {
 	/* Payload DevTLB miss rate. */
 	ADF_TL_COUNTER(AT_PAYLD_DTLB_MISS_NAME, ADF_TL_SIMPLE_COUNT,
 		       ADF_GEN6_TL_RP_REG_OFF(reg_tl_at_payld_devtlb_miss)),
+	/* Ring Empty average[ns]. */
+	ADF_TL_COUNTER_LATENCY(RE_ACC_NAME, ADF_TL_COUNTER_NS_AVG,
+			       ADF_GEN6_TL_RP_REG_OFF(reg_tl_re_acc),
+			       ADF_GEN6_TL_RP_REG_OFF(reg_tl_re_cnt)),
 };
 
 void adf_gen6_init_tl_data(struct adf_tl_hw_data *tl_data)
 {
 	tl_data->layout_sz = ADF_GEN6_TL_LAYOUT_SZ;
 	tl_data->slice_reg_sz = ADF_GEN6_TL_SLICE_REG_SZ;
+	tl_data->cmdq_reg_sz = ADF_GEN6_TL_CMDQ_REG_SZ;
 	tl_data->rp_reg_sz = ADF_GEN6_TL_RP_REG_SZ;
 	tl_data->num_hbuff = ADF_GEN6_TL_NUM_HIST_BUFFS;
 	tl_data->max_rp = ADF_GEN6_TL_MAX_RP_NUM;
@@ -139,8 +241,18 @@ void adf_gen6_init_tl_data(struct adf_tl_hw_data *tl_data)
 	tl_data->num_dev_counters = ARRAY_SIZE(dev_counters);
 	tl_data->sl_util_counters = sl_util_counters;
 	tl_data->sl_exec_counters = sl_exec_counters;
+	tl_data->cmdq_counters = cmdq_counters;
+	tl_data->num_cmdq_counters = NUM_CMDQ_COUNTERS;
 	tl_data->rp_counters = rp_counters;
 	tl_data->num_rp_counters = ARRAY_SIZE(rp_counters);
 	tl_data->max_sl_cnt = ADF_GEN6_TL_MAX_SLICES_PER_TYPE;
+
+	tl_data->multiplier.cpr_cnt = CPR_QUEUE_COUNT;
+	tl_data->multiplier.dcpr_cnt = DCPR_QUEUE_COUNT;
+	tl_data->multiplier.pke_cnt = PKE_QUEUE_COUNT;
+	tl_data->multiplier.wat_cnt = WAT_QUEUE_COUNT;
+	tl_data->multiplier.wcp_cnt = WCP_QUEUE_COUNT;
+	tl_data->multiplier.ucs_cnt = USC_QUEUE_COUNT;
+	tl_data->multiplier.ath_cnt = ATH_QUEUE_COUNT;
 }
 EXPORT_SYMBOL_GPL(adf_gen6_init_tl_data);
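The three counters per queue (wait, exec, drain) are wired through a pointer table indexed by slice type, so the debugfs code below can walk every slice type with one loop regardless of how many queues each slice exposes.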
@@ -212,6 +212,23 @@ int adf_tl_halt(struct adf_accel_dev *accel_dev)
 	return ret;
 }
 
+static void adf_set_cmdq_cnt(struct adf_accel_dev *accel_dev,
+			     struct adf_tl_hw_data *tl_data)
+{
+	struct icp_qat_fw_init_admin_slice_cnt *slice_cnt, *cmdq_cnt;
+
+	slice_cnt = &accel_dev->telemetry->slice_cnt;
+	cmdq_cnt = &accel_dev->telemetry->cmdq_cnt;
+
+	cmdq_cnt->cpr_cnt = slice_cnt->cpr_cnt * tl_data->multiplier.cpr_cnt;
+	cmdq_cnt->dcpr_cnt = slice_cnt->dcpr_cnt * tl_data->multiplier.dcpr_cnt;
+	cmdq_cnt->pke_cnt = slice_cnt->pke_cnt * tl_data->multiplier.pke_cnt;
+	cmdq_cnt->wat_cnt = slice_cnt->wat_cnt * tl_data->multiplier.wat_cnt;
+	cmdq_cnt->wcp_cnt = slice_cnt->wcp_cnt * tl_data->multiplier.wcp_cnt;
+	cmdq_cnt->ucs_cnt = slice_cnt->ucs_cnt * tl_data->multiplier.ucs_cnt;
+	cmdq_cnt->ath_cnt = slice_cnt->ath_cnt * tl_data->multiplier.ath_cnt;
+}
+
 int adf_tl_run(struct adf_accel_dev *accel_dev, int state)
 {
 	struct adf_tl_hw_data *tl_data = &GET_TL_DATA(accel_dev);
@@ -235,6 +252,8 @@ int adf_tl_run(struct adf_accel_dev *accel_dev, int state)
 		return ret;
 	}
 
+	adf_set_cmdq_cnt(accel_dev, tl_data);
+
 	telemetry->hbuffs = state;
 	atomic_set(&telemetry->state, state);
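The per-queue totals are a plain multiply: if firmware reported, say, two compression slices and CPR_QUEUE_COUNT is 5, adf_set_cmdq_cnt() yields cmdq_cnt->cpr_cnt = 2 * 5 = 10, and the debugfs dump then emits ten numbered banks of cmdq_wait_cnv<n>/cmdq_exec_cnv<n>/cmdq_drain_cnv<n> counters (the two-slice figure is illustrative).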
@@ -28,19 +28,23 @@ struct dentry;
 struct adf_tl_hw_data {
 	size_t layout_sz;
 	size_t slice_reg_sz;
+	size_t cmdq_reg_sz;
 	size_t rp_reg_sz;
 	size_t msg_cnt_off;
 	const struct adf_tl_dbg_counter *dev_counters;
 	const struct adf_tl_dbg_counter *sl_util_counters;
 	const struct adf_tl_dbg_counter *sl_exec_counters;
+	const struct adf_tl_dbg_counter **cmdq_counters;
 	const struct adf_tl_dbg_counter *rp_counters;
 	u8 num_hbuff;
 	u8 cpp_ns_per_cycle;
 	u8 bw_units_to_bytes;
 	u8 num_dev_counters;
 	u8 num_rp_counters;
+	u8 num_cmdq_counters;
 	u8 max_rp;
 	u8 max_sl_cnt;
+	struct icp_qat_fw_init_admin_slice_cnt multiplier;
 };
 
 struct adf_telemetry {
@@ -69,6 +73,7 @@ struct adf_telemetry {
 	struct mutex wr_lock;
 	struct delayed_work work_ctx;
 	struct icp_qat_fw_init_admin_slice_cnt slice_cnt;
+	struct icp_qat_fw_init_admin_slice_cnt cmdq_cnt;
 };
 
 #ifdef CONFIG_DEBUG_FS
@@ -339,6 +339,48 @@ static int tl_calc_and_print_sl_counters(struct adf_accel_dev *accel_dev,
 	return 0;
 }
 
+static int tl_print_cmdq_counter(struct adf_telemetry *telemetry,
+				 const struct adf_tl_dbg_counter *ctr,
+				 struct seq_file *s, u8 cnt_id, u8 counter)
+{
+	size_t cmdq_regs_sz = GET_TL_DATA(telemetry->accel_dev).cmdq_reg_sz;
+	size_t offset_inc = cnt_id * cmdq_regs_sz;
+	struct adf_tl_dbg_counter slice_ctr;
+	char cnt_name[MAX_COUNT_NAME_SIZE];
+
+	slice_ctr = *(ctr + counter);
+	slice_ctr.offset1 += offset_inc;
+	snprintf(cnt_name, MAX_COUNT_NAME_SIZE, "%s%d", slice_ctr.name, cnt_id);
+
+	return tl_calc_and_print_counter(telemetry, s, &slice_ctr, cnt_name);
+}
+
+static int tl_calc_and_print_cmdq_counters(struct adf_accel_dev *accel_dev,
+					   struct seq_file *s, u8 cnt_type,
+					   u8 cnt_id)
+{
+	struct adf_tl_hw_data *tl_data = &GET_TL_DATA(accel_dev);
+	struct adf_telemetry *telemetry = accel_dev->telemetry;
+	const struct adf_tl_dbg_counter **cmdq_tl_counters;
+	const struct adf_tl_dbg_counter *ctr;
+	u8 counter;
+	int ret;
+
+	cmdq_tl_counters = tl_data->cmdq_counters;
+	ctr = cmdq_tl_counters[cnt_type];
+
+	for (counter = 0; counter < tl_data->num_cmdq_counters; counter++) {
+		ret = tl_print_cmdq_counter(telemetry, ctr, s, cnt_id, counter);
+		if (ret) {
+			dev_notice(&GET_DEV(accel_dev),
+				   "invalid slice utilization counter type\n");
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
 static void tl_print_msg_cnt(struct seq_file *s, u32 msg_cnt)
 {
 	seq_printf(s, "%-*s", TL_KEY_MIN_PADDING, SNAPSHOT_CNT_MSG);
@@ -352,6 +394,7 @@ static int tl_print_dev_data(struct adf_accel_dev *accel_dev,
 	struct adf_telemetry *telemetry = accel_dev->telemetry;
 	const struct adf_tl_dbg_counter *dev_tl_counters;
 	u8 num_dev_counters = tl_data->num_dev_counters;
+	u8 *cmdq_cnt = (u8 *)&telemetry->cmdq_cnt;
 	u8 *sl_cnt = (u8 *)&telemetry->slice_cnt;
 	const struct adf_tl_dbg_counter *ctr;
 	unsigned int i;
@@ -387,6 +430,15 @@ static int tl_print_dev_data(struct adf_accel_dev *accel_dev,
 		}
 	}
 
+	/* Print per command queue telemetry. */
+	for (i = 0; i < ADF_TL_SL_CNT_COUNT; i++) {
+		for (j = 0; j < cmdq_cnt[i]; j++) {
+			ret = tl_calc_and_print_cmdq_counters(accel_dev, s, i, j);
+			if (ret)
+				return ret;
+		}
+	}
+
 	return 0;
 }
@@ -17,6 +17,7 @@ struct adf_accel_dev;
 #define LAT_ACC_NAME "gp_lat_acc_avg"
 #define BW_IN_NAME "bw_in"
 #define BW_OUT_NAME "bw_out"
+#define RE_ACC_NAME "re_acc_avg"
 #define PAGE_REQ_LAT_NAME "at_page_req_lat_avg"
 #define AT_TRANS_LAT_NAME "at_trans_lat_avg"
 #define AT_MAX_UTLB_USED_NAME "at_max_tlb_used"
@@ -43,6 +44,10 @@ struct adf_accel_dev;
 	(ADF_TL_DEV_REG_OFF(slice##_slices[0], qat_gen) + \
 	 offsetof(struct adf_##qat_gen##_tl_slice_data_regs, reg))
 
+#define ADF_TL_CMDQ_REG_OFF(slice, reg, qat_gen) \
+	(ADF_TL_DEV_REG_OFF(slice##_cmdq[0], qat_gen) + \
+	 offsetof(struct adf_##qat_gen##_tl_cmdq_data_regs, reg))
+
 #define ADF_TL_RP_REG_OFF(reg, qat_gen) \
 	(ADF_TL_DATA_REG_OFF(tl_ring_pairs_data_regs[0], qat_gen) + \
 	 offsetof(struct adf_##qat_gen##_tl_ring_pair_data_regs, reg))
@@ -5,12 +5,10 @@
 #include <linux/crypto.h>
 #include <crypto/internal/aead.h>
 #include <crypto/internal/cipher.h>
-#include <crypto/internal/hash.h>
 #include <crypto/internal/skcipher.h>
 #include <crypto/aes.h>
 #include <crypto/sha1.h>
 #include <crypto/sha2.h>
-#include <crypto/hmac.h>
 #include <crypto/algapi.h>
 #include <crypto/authenc.h>
 #include <crypto/scatterwalk.h>
@@ -68,16 +66,10 @@ struct qat_alg_aead_ctx {
 	dma_addr_t dec_cd_paddr;
 	struct icp_qat_fw_la_bulk_req enc_fw_req;
 	struct icp_qat_fw_la_bulk_req dec_fw_req;
-	struct crypto_shash *hash_tfm;
 	enum icp_qat_hw_auth_algo qat_hash_alg;
+	unsigned int hash_digestsize;
+	unsigned int hash_blocksize;
 	struct qat_crypto_instance *inst;
-	union {
-		struct sha1_state sha1;
-		struct sha256_state sha256;
-		struct sha512_state sha512;
-	};
-	char ipad[SHA512_BLOCK_SIZE]; /* sufficient for SHA-1/SHA-256 as well */
-	char opad[SHA512_BLOCK_SIZE];
 };
 
 struct qat_alg_skcipher_ctx {
@@ -94,126 +86,58 @@ struct qat_alg_skcipher_ctx {
 	int mode;
 };
 
-static int qat_get_inter_state_size(enum icp_qat_hw_auth_algo qat_hash_alg)
-{
-	switch (qat_hash_alg) {
-	case ICP_QAT_HW_AUTH_ALGO_SHA1:
-		return ICP_QAT_HW_SHA1_STATE1_SZ;
-	case ICP_QAT_HW_AUTH_ALGO_SHA256:
-		return ICP_QAT_HW_SHA256_STATE1_SZ;
-	case ICP_QAT_HW_AUTH_ALGO_SHA512:
-		return ICP_QAT_HW_SHA512_STATE1_SZ;
-	default:
-		return -EFAULT;
-	}
-}
-
 static int qat_alg_do_precomputes(struct icp_qat_hw_auth_algo_blk *hash,
 				  struct qat_alg_aead_ctx *ctx,
 				  const u8 *auth_key,
 				  unsigned int auth_keylen)
 {
-	SHASH_DESC_ON_STACK(shash, ctx->hash_tfm);
-	int block_size = crypto_shash_blocksize(ctx->hash_tfm);
-	int digest_size = crypto_shash_digestsize(ctx->hash_tfm);
-	__be32 *hash_state_out;
-	__be64 *hash512_state_out;
-	int i, offset;
-
-	memset(ctx->ipad, 0, block_size);
-	memset(ctx->opad, 0, block_size);
-	shash->tfm = ctx->hash_tfm;
-
-	if (auth_keylen > block_size) {
-		int ret = crypto_shash_digest(shash, auth_key,
-					      auth_keylen, ctx->ipad);
-		if (ret)
-			return ret;
-
-		memcpy(ctx->opad, ctx->ipad, digest_size);
-	} else {
-		memcpy(ctx->ipad, auth_key, auth_keylen);
-		memcpy(ctx->opad, auth_key, auth_keylen);
-	}
-
-	for (i = 0; i < block_size; i++) {
-		char *ipad_ptr = ctx->ipad + i;
-		char *opad_ptr = ctx->opad + i;
-		*ipad_ptr ^= HMAC_IPAD_VALUE;
-		*opad_ptr ^= HMAC_OPAD_VALUE;
-	}
-
-	if (crypto_shash_init(shash))
-		return -EFAULT;
-
-	if (crypto_shash_update(shash, ctx->ipad, block_size))
-		return -EFAULT;
-
-	hash_state_out = (__be32 *)hash->sha.state1;
-	hash512_state_out = (__be64 *)hash_state_out;
-
-	switch (ctx->qat_hash_alg) {
-	case ICP_QAT_HW_AUTH_ALGO_SHA1:
-		if (crypto_shash_export_core(shash, &ctx->sha1))
-			return -EFAULT;
-		for (i = 0; i < digest_size >> 2; i++, hash_state_out++)
-			*hash_state_out = cpu_to_be32(ctx->sha1.state[i]);
-		break;
-	case ICP_QAT_HW_AUTH_ALGO_SHA256:
-		if (crypto_shash_export_core(shash, &ctx->sha256))
-			return -EFAULT;
-		for (i = 0; i < digest_size >> 2; i++, hash_state_out++)
-			*hash_state_out = cpu_to_be32(ctx->sha256.state[i]);
-		break;
-	case ICP_QAT_HW_AUTH_ALGO_SHA512:
-		if (crypto_shash_export_core(shash, &ctx->sha512))
-			return -EFAULT;
-		for (i = 0; i < digest_size >> 3; i++, hash512_state_out++)
-			*hash512_state_out = cpu_to_be64(ctx->sha512.state[i]);
-		break;
-	default:
-		return -EFAULT;
-	}
-
-	if (crypto_shash_init(shash))
-		return -EFAULT;
-
-	if (crypto_shash_update(shash, ctx->opad, block_size))
-		return -EFAULT;
-
-	offset = round_up(qat_get_inter_state_size(ctx->qat_hash_alg), 8);
-	if (offset < 0)
-		return -EFAULT;
-
-	hash_state_out = (__be32 *)(hash->sha.state1 + offset);
-	hash512_state_out = (__be64 *)hash_state_out;
-
-	switch (ctx->qat_hash_alg) {
-	case ICP_QAT_HW_AUTH_ALGO_SHA1:
-		if (crypto_shash_export_core(shash, &ctx->sha1))
-			return -EFAULT;
-		for (i = 0; i < digest_size >> 2; i++, hash_state_out++)
-			*hash_state_out = cpu_to_be32(ctx->sha1.state[i]);
-		break;
-	case ICP_QAT_HW_AUTH_ALGO_SHA256:
-		if (crypto_shash_export_core(shash, &ctx->sha256))
-			return -EFAULT;
-		for (i = 0; i < digest_size >> 2; i++, hash_state_out++)
-			*hash_state_out = cpu_to_be32(ctx->sha256.state[i]);
-		break;
-	case ICP_QAT_HW_AUTH_ALGO_SHA512:
-		if (crypto_shash_export_core(shash, &ctx->sha512))
-			return -EFAULT;
-		for (i = 0; i < digest_size >> 3; i++, hash512_state_out++)
-			*hash512_state_out = cpu_to_be64(ctx->sha512.state[i]);
-		break;
-	default:
-		return -EFAULT;
-	}
-	memzero_explicit(ctx->ipad, block_size);
-	memzero_explicit(ctx->opad, block_size);
-	return 0;
+	switch (ctx->qat_hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1: {
+		struct hmac_sha1_key key;
+		__be32 *istate = (__be32 *)hash->sha.state1;
+		__be32 *ostate = (__be32 *)(hash->sha.state1 +
+					    round_up(sizeof(key.istate.h), 8));
+
+		hmac_sha1_preparekey(&key, auth_key, auth_keylen);
+		for (int i = 0; i < ARRAY_SIZE(key.istate.h); i++) {
+			istate[i] = cpu_to_be32(key.istate.h[i]);
+			ostate[i] = cpu_to_be32(key.ostate.h[i]);
+		}
+		memzero_explicit(&key, sizeof(key));
+		return 0;
+	}
+	case ICP_QAT_HW_AUTH_ALGO_SHA256: {
+		struct hmac_sha256_key key;
+		__be32 *istate = (__be32 *)hash->sha.state1;
+		__be32 *ostate = (__be32 *)(hash->sha.state1 +
+					    sizeof(key.key.istate.h));
+
+		hmac_sha256_preparekey(&key, auth_key, auth_keylen);
+		for (int i = 0; i < ARRAY_SIZE(key.key.istate.h); i++) {
+			istate[i] = cpu_to_be32(key.key.istate.h[i]);
+			ostate[i] = cpu_to_be32(key.key.ostate.h[i]);
+		}
+		memzero_explicit(&key, sizeof(key));
+		return 0;
+	}
+	case ICP_QAT_HW_AUTH_ALGO_SHA512: {
+		struct hmac_sha512_key key;
+		__be64 *istate = (__be64 *)hash->sha.state1;
+		__be64 *ostate = (__be64 *)(hash->sha.state1 +
+					    sizeof(key.key.istate.h));
+
+		hmac_sha512_preparekey(&key, auth_key, auth_keylen);
+		for (int i = 0; i < ARRAY_SIZE(key.key.istate.h); i++) {
+			istate[i] = cpu_to_be64(key.key.istate.h[i]);
+			ostate[i] = cpu_to_be64(key.key.ostate.h[i]);
+		}
+		memzero_explicit(&key, sizeof(key));
+		return 0;
+	}
+	default:
+		return -EFAULT;
+	}
 }
 
 static void qat_alg_init_common_hdr(struct icp_qat_fw_comn_req_hdr *header)
 {
@@ -259,7 +183,7 @@ static int qat_alg_aead_init_enc_session(struct crypto_aead *aead_tfm,
 		ICP_QAT_HW_AUTH_CONFIG_BUILD(ICP_QAT_HW_AUTH_MODE1,
 					     ctx->qat_hash_alg, digestsize);
 	hash->sha.inner_setup.auth_counter.counter =
-		cpu_to_be32(crypto_shash_blocksize(ctx->hash_tfm));
+		cpu_to_be32(ctx->hash_blocksize);
 
 	if (qat_alg_do_precomputes(hash, ctx, keys->authkey, keys->authkeylen))
 		return -EFAULT;
@@ -326,7 +250,7 @@ static int qat_alg_aead_init_dec_session(struct crypto_aead *aead_tfm,
 	struct icp_qat_hw_cipher_algo_blk *cipher =
 		(struct icp_qat_hw_cipher_algo_blk *)((char *)dec_ctx +
 		sizeof(struct icp_qat_hw_auth_setup) +
-		roundup(crypto_shash_digestsize(ctx->hash_tfm), 8) * 2);
+		roundup(ctx->hash_digestsize, 8) * 2);
 	struct icp_qat_fw_la_bulk_req *req_tmpl = &ctx->dec_fw_req;
 	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req_tmpl->cd_pars;
 	struct icp_qat_fw_comn_req_hdr *header = &req_tmpl->comn_hdr;
@@ -346,7 +270,7 @@ static int qat_alg_aead_init_dec_session(struct crypto_aead *aead_tfm,
 						     ctx->qat_hash_alg,
 						     digestsize);
 	hash->sha.inner_setup.auth_counter.counter =
-		cpu_to_be32(crypto_shash_blocksize(ctx->hash_tfm));
+		cpu_to_be32(ctx->hash_blocksize);
 
 	if (qat_alg_do_precomputes(hash, ctx, keys->authkey, keys->authkeylen))
 		return -EFAULT;
@@ -368,7 +292,7 @@ static int qat_alg_aead_init_dec_session(struct crypto_aead *aead_tfm,
 	cipher_cd_ctrl->cipher_state_sz = AES_BLOCK_SIZE >> 3;
 	cipher_cd_ctrl->cipher_cfg_offset =
 		(sizeof(struct icp_qat_hw_auth_setup) +
-		 roundup(crypto_shash_digestsize(ctx->hash_tfm), 8) * 2) >> 3;
+		 roundup(ctx->hash_digestsize, 8) * 2) >> 3;
 	ICP_QAT_FW_COMN_CURR_ID_SET(cipher_cd_ctrl, ICP_QAT_FW_SLICE_CIPHER);
 	ICP_QAT_FW_COMN_NEXT_ID_SET(cipher_cd_ctrl, ICP_QAT_FW_SLICE_DRAM_WR);
 
@@ -1150,32 +1074,35 @@ static int qat_alg_skcipher_xts_decrypt(struct skcipher_request *req)
 }
 
 static int qat_alg_aead_init(struct crypto_aead *tfm,
-			     enum icp_qat_hw_auth_algo hash,
-			     const char *hash_name)
+			     enum icp_qat_hw_auth_algo hash_alg,
+			     unsigned int hash_digestsize,
+			     unsigned int hash_blocksize)
 {
 	struct qat_alg_aead_ctx *ctx = crypto_aead_ctx(tfm);
 
-	ctx->hash_tfm = crypto_alloc_shash(hash_name, 0, 0);
-	if (IS_ERR(ctx->hash_tfm))
-		return PTR_ERR(ctx->hash_tfm);
-	ctx->qat_hash_alg = hash;
+	ctx->qat_hash_alg = hash_alg;
+	ctx->hash_digestsize = hash_digestsize;
+	ctx->hash_blocksize = hash_blocksize;
 	crypto_aead_set_reqsize(tfm, sizeof(struct qat_crypto_request));
 	return 0;
 }
 
 static int qat_alg_aead_sha1_init(struct crypto_aead *tfm)
 {
-	return qat_alg_aead_init(tfm, ICP_QAT_HW_AUTH_ALGO_SHA1, "sha1");
+	return qat_alg_aead_init(tfm, ICP_QAT_HW_AUTH_ALGO_SHA1,
+				 SHA1_DIGEST_SIZE, SHA1_BLOCK_SIZE);
 }
 
 static int qat_alg_aead_sha256_init(struct crypto_aead *tfm)
 {
-	return qat_alg_aead_init(tfm, ICP_QAT_HW_AUTH_ALGO_SHA256, "sha256");
+	return qat_alg_aead_init(tfm, ICP_QAT_HW_AUTH_ALGO_SHA256,
+				 SHA256_DIGEST_SIZE, SHA256_BLOCK_SIZE);
 }
 
 static int qat_alg_aead_sha512_init(struct crypto_aead *tfm)
 {
-	return qat_alg_aead_init(tfm, ICP_QAT_HW_AUTH_ALGO_SHA512, "sha512");
+	return qat_alg_aead_init(tfm, ICP_QAT_HW_AUTH_ALGO_SHA512,
				 SHA512_DIGEST_SIZE, SHA512_BLOCK_SIZE);
 }
 
 static void qat_alg_aead_exit(struct crypto_aead *tfm)
@@ -1184,8 +1111,6 @@ static void qat_alg_aead_exit(struct crypto_aead *tfm)
 	struct qat_crypto_instance *inst = ctx->inst;
 	struct device *dev;
 
-	crypto_free_shash(ctx->hash_tfm);
-
 	if (!inst)
 		return;
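The rewrite leans on the library HMAC helpers: hmac_shaN_preparekey() precomputes the two chaining states that result from hashing (key XOR ipad) and (key XOR opad) through one compression block each, which is exactly the inner/outer state pair the firmware expects in state1. A standalone sketch of the SHA-256 flavour, mirroring the diff (field nesting as used above; the function name is hypothetical):

#include <crypto/sha2.h>

static void load_hmac_sha256_state(__be32 *istate, __be32 *ostate,
				   const u8 *raw_key, size_t keylen)
{
	struct hmac_sha256_key key;
	int i;

	hmac_sha256_preparekey(&key, raw_key, keylen);
	for (i = 0; i < ARRAY_SIZE(key.key.istate.h); i++) {
		istate[i] = cpu_to_be32(key.key.istate.h[i]);
		ostate[i] = cpu_to_be32(key.key.ostate.h[i]);
	}
	memzero_explicit(&key, sizeof(key));	/* derived key material */
}

This also removes the per-tfm crypto_shash allocation, which is why the Kconfig hunk above swaps the CRYPTO_SHA* selects for CRYPTO_LIB_SHA*.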
@@ -1900,7 +1900,7 @@ static int qat_uclo_map_objs_from_mof(struct icp_qat_mof_handle *mobj_handle)
 	if (sobj_hdr)
 		sobj_chunk_num = sobj_hdr->num_chunks;
 
-	mobj_hdr = kzalloc((uobj_chunk_num + sobj_chunk_num) *
+	mobj_hdr = kcalloc(size_add(uobj_chunk_num, sobj_chunk_num),
 			   sizeof(*mobj_hdr), GFP_KERNEL);
 	if (!mobj_hdr)
 		return -ENOMEM;
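kcalloc() checks the count-times-size multiplication for overflow, and size_add() from <linux/overflow.h> saturates rather than wraps, so two attacker-influenced chunk counts cannot combine into a short allocation. The idiom in isolation (names illustrative):

#include <linux/overflow.h>
#include <linux/slab.h>

hdrs = kcalloc(size_add(a_count, b_count), sizeof(*hdrs), GFP_KERNEL);
if (!hdrs)
	return -ENOMEM;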
@@ -1615,7 +1615,7 @@ int otx2_cpt_dl_custom_egrp_create(struct otx2_cptpf_dev *cptpf,
 		return -EINVAL;
 	}
 	err_msg = "Invalid engine group format";
-	strscpy(tmp_buf, ctx->val.vstr, strlen(ctx->val.vstr) + 1);
+	strscpy(tmp_buf, ctx->val.vstr);
 	start = tmp_buf;
 
 	has_se = has_ie = has_ae = false;
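The old three-argument call bounded the copy by the source length, which defeats the purpose of strscpy(); the two-argument form infers the bound from the destination array. A sketch (src stands in for any NUL-terminated input):

char tmp_buf[64];	/* bound inferred by strscpy() below */

if (strscpy(tmp_buf, src) < 0)
	return -E2BIG;	/* source did not fit */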
@@ -1043,8 +1043,10 @@ static struct scomp_alg nx842_powernv_alg = {
 	.base.cra_priority	= 300,
 	.base.cra_module	= THIS_MODULE,
 
+	.streams		= {
 		.alloc_ctx	= nx842_powernv_crypto_alloc_ctx,
 		.free_ctx	= nx842_crypto_free_ctx,
+	},
 	.compress		= nx842_crypto_compress,
 	.decompress		= nx842_crypto_decompress,
 };
@@ -1020,8 +1020,10 @@ static struct scomp_alg nx842_pseries_alg = {
 	.base.cra_priority	= 300,
 	.base.cra_module	= THIS_MODULE,
 
+	.streams		= {
 		.alloc_ctx	= nx842_pseries_crypto_alloc_ctx,
 		.free_ctx	= nx842_crypto_free_ctx,
+	},
 	.compress		= nx842_crypto_compress,
 	.decompress		= nx842_crypto_decompress,
 };
@@ -32,6 +32,7 @@
 #include <linux/pm_runtime.h>
 #include <linux/scatterlist.h>
 #include <linux/string.h>
+#include <linux/workqueue.h>
 
 #include "omap-crypto.h"
 #include "omap-aes.h"
@@ -221,7 +222,7 @@ static void omap_aes_dma_out_callback(void *data)
 	struct omap_aes_dev *dd = data;
 
 	/* dma_lch_out - completed */
-	tasklet_schedule(&dd->done_task);
+	queue_work(system_bh_wq, &dd->done_task);
 }
 
 static int omap_aes_dma_init(struct omap_aes_dev *dd)
@@ -494,9 +495,9 @@ static void omap_aes_copy_ivout(struct omap_aes_dev *dd, u8 *ivbuf)
 		((u32 *)ivbuf)[i] = omap_aes_read(dd, AES_REG_IV(dd, i));
 }
 
-static void omap_aes_done_task(unsigned long data)
+static void omap_aes_done_task(struct work_struct *t)
 {
-	struct omap_aes_dev *dd = (struct omap_aes_dev *)data;
+	struct omap_aes_dev *dd = from_work(dd, t, done_task);
 
 	pr_debug("enter done_task\n");
 
@@ -925,7 +926,7 @@ static irqreturn_t omap_aes_irq(int irq, void *dev_id)
 
 		if (!dd->total)
 			/* All bytes read! */
-			tasklet_schedule(&dd->done_task);
+			queue_work(system_bh_wq, &dd->done_task);
 		else
 			/* Enable DATA_IN interrupt for next block */
 			omap_aes_write(dd, AES_REG_IRQ_ENABLE(dd), 0x2);
@@ -1140,7 +1141,7 @@ static int omap_aes_probe(struct platform_device *pdev)
 		 (reg & dd->pdata->major_mask) >> dd->pdata->major_shift,
 		 (reg & dd->pdata->minor_mask) >> dd->pdata->minor_shift);
 
-	tasklet_init(&dd->done_task, omap_aes_done_task, (unsigned long)dd);
+	INIT_WORK(&dd->done_task, omap_aes_done_task);
 
 	err = omap_aes_dma_init(dd);
 	if (err == -EPROBE_DEFER) {
@@ -1229,7 +1230,7 @@ err_engine:
 
 	omap_aes_dma_cleanup(dd);
 err_irq:
-	tasklet_kill(&dd->done_task);
+	cancel_work_sync(&dd->done_task);
 err_pm_disable:
 	pm_runtime_disable(dev);
 err_res:
@@ -1264,7 +1265,7 @@ static void omap_aes_remove(struct platform_device *pdev)
 
 	crypto_engine_exit(dd->engine);
 
-	tasklet_kill(&dd->done_task);
+	cancel_work_sync(&dd->done_task);
 	omap_aes_dma_cleanup(dd);
 	pm_runtime_disable(dd->dev);
 }
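The same four-step recipe converts each omap driver from tasklets to the BH workqueue: the struct member, the init call, the schedule call, and the kill call each have a one-line replacement, and the handler recovers its device with from_work() instead of a cast. In one place (struct and callee hypothetical):

/* struct tasklet_struct done_task  ->  struct work_struct done_task */

static void my_done_work(struct work_struct *t)
{
	struct my_dev *dd = from_work(dd, t, done_task);

	/* still runs in softirq (BH) context, like the old tasklet */
	finish_request(dd);	/* hypothetical completion step */
}

/* tasklet_init(&dd->done_task, fn, (unsigned long)dd)
 *			->  INIT_WORK(&dd->done_task, my_done_work);
 * tasklet_schedule(&dd->done_task)
 *			->  queue_work(system_bh_wq, &dd->done_task);
 * tasklet_kill(&dd->done_task)
 *			->  cancel_work_sync(&dd->done_task);
 */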
@@ -159,7 +159,7 @@ struct omap_aes_dev {
 	unsigned long		flags;
 	int			err;
 
-	struct tasklet_struct	done_task;
+	struct work_struct	done_task;
 	struct aead_queue	aead_queue;
 	spinlock_t		lock;
@@ -32,6 +32,7 @@
 #include <linux/pm_runtime.h>
 #include <linux/scatterlist.h>
 #include <linux/string.h>
+#include <linux/workqueue.h>
 
 #include "omap-crypto.h"
 
@@ -130,7 +131,7 @@ struct omap_des_dev {
 	unsigned long		flags;
 	int			err;
 
-	struct tasklet_struct	done_task;
+	struct work_struct	done_task;
 
 	struct skcipher_request	*req;
 	struct crypto_engine	*engine;
@@ -325,7 +326,7 @@ static void omap_des_dma_out_callback(void *data)
 	struct omap_des_dev *dd = data;
 
 	/* dma_lch_out - completed */
-	tasklet_schedule(&dd->done_task);
+	queue_work(system_bh_wq, &dd->done_task);
 }
 
 static int omap_des_dma_init(struct omap_des_dev *dd)
@@ -580,9 +581,9 @@ static int omap_des_crypt_req(struct crypto_engine *engine,
 	omap_des_crypt_dma_start(dd);
 }
 
-static void omap_des_done_task(unsigned long data)
+static void omap_des_done_task(struct work_struct *t)
 {
-	struct omap_des_dev *dd = (struct omap_des_dev *)data;
+	struct omap_des_dev *dd = from_work(dd, t, done_task);
 	int i;
 
 	pr_debug("enter done_task\n");
 
@@ -890,7 +891,7 @@ static irqreturn_t omap_des_irq(int irq, void *dev_id)
 
 		if (!dd->total)
 			/* All bytes read! */
-			tasklet_schedule(&dd->done_task);
+			queue_work(system_bh_wq, &dd->done_task);
 		else
 			/* Enable DATA_IN interrupt for next block */
 			omap_des_write(dd, DES_REG_IRQ_ENABLE(dd), 0x2);
@@ -986,7 +987,7 @@ static int omap_des_probe(struct platform_device *pdev)
 		 (reg & dd->pdata->major_mask) >> dd->pdata->major_shift,
 		 (reg & dd->pdata->minor_mask) >> dd->pdata->minor_shift);
 
-	tasklet_init(&dd->done_task, omap_des_done_task, (unsigned long)dd);
+	INIT_WORK(&dd->done_task, omap_des_done_task);
 
 	err = omap_des_dma_init(dd);
 	if (err == -EPROBE_DEFER) {
@@ -1053,7 +1054,7 @@ err_engine:
 
 	omap_des_dma_cleanup(dd);
 err_irq:
-	tasklet_kill(&dd->done_task);
+	cancel_work_sync(&dd->done_task);
 err_get:
 	pm_runtime_disable(dev);
 err_res:
@@ -1077,7 +1078,7 @@ static void omap_des_remove(struct platform_device *pdev)
 		crypto_engine_unregister_skcipher(
 				&dd->pdata->algs_info[i].algs_list[j]);
 
-	tasklet_kill(&dd->done_task);
+	cancel_work_sync(&dd->done_task);
 	omap_des_dma_cleanup(dd);
 	pm_runtime_disable(dd->dev);
 }
--- a/drivers/crypto/omap-sham.c
+++ b/drivers/crypto/omap-sham.c
@@ -37,6 +37,7 @@
 #include <linux/scatterlist.h>
 #include <linux/slab.h>
 #include <linux/string.h>
+#include <linux/workqueue.h>
 
 #define MD5_DIGEST_SIZE		16
 
@@ -217,7 +218,7 @@ struct omap_sham_dev {
 	int			irq;
 	int			err;
 	struct dma_chan		*dma_lch;
-	struct tasklet_struct	done_task;
+	struct work_struct	done_task;
 	u8			polling_mode;
 	u8			xmit_buf[BUFLEN] OMAP_ALIGNED;
 
@@ -561,7 +562,7 @@ static void omap_sham_dma_callback(void *param)
 	struct omap_sham_dev *dd = param;
 
 	set_bit(FLAGS_DMA_READY, &dd->flags);
-	tasklet_schedule(&dd->done_task);
+	queue_work(system_bh_wq, &dd->done_task);
 }
 
 static int omap_sham_xmit_dma(struct omap_sham_dev *dd, size_t length,
@@ -1703,9 +1704,9 @@ static struct ahash_engine_alg algs_sha384_sha512[] = {
 	},
 };
 
-static void omap_sham_done_task(unsigned long data)
+static void omap_sham_done_task(struct work_struct *t)
 {
-	struct omap_sham_dev *dd = (struct omap_sham_dev *)data;
+	struct omap_sham_dev *dd = from_work(dd, t, done_task);
 	int err = 0;
 
 	dev_dbg(dd->dev, "%s: flags=%lx\n", __func__, dd->flags);
@@ -1739,7 +1740,7 @@ finish:
 static irqreturn_t omap_sham_irq_common(struct omap_sham_dev *dd)
 {
 	set_bit(FLAGS_OUTPUT_READY, &dd->flags);
-	tasklet_schedule(&dd->done_task);
+	queue_work(system_bh_wq, &dd->done_task);
 
 	return IRQ_HANDLED;
 }
@@ -2059,7 +2060,7 @@ static int omap_sham_probe(struct platform_device *pdev)
 	platform_set_drvdata(pdev, dd);
 
 	INIT_LIST_HEAD(&dd->list);
-	tasklet_init(&dd->done_task, omap_sham_done_task, (unsigned long)dd);
+	INIT_WORK(&dd->done_task, omap_sham_done_task);
 	crypto_init_queue(&dd->queue, OMAP_SHAM_QUEUE_LENGTH);
 
 	err = (dev->of_node) ? omap_sham_get_res_of(dd, dev, &res) :
@@ -2194,7 +2195,7 @@ static void omap_sham_remove(struct platform_device *pdev)
 				&dd->pdata->algs_info[i].algs_list[j]);
 		dd->pdata->algs_info[i].registered--;
 	}
-	tasklet_kill(&dd->done_task);
+	cancel_work_sync(&dd->done_task);
 	pm_runtime_dont_use_autosuspend(&pdev->dev);
 	pm_runtime_disable(&pdev->dev);
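The omap-aes, omap-des and omap-sham hunks above all apply the same recipe for moving a driver off tasklets onto the BH workqueue. A minimal sketch of the pattern, using a hypothetical foo driver (the four substitutions match the diffs one-to-one):

#include <linux/workqueue.h>

struct foo_dev {
	struct work_struct done_task;	/* was: struct tasklet_struct */
};

/* The handler now takes a work_struct; from_work() recovers the
 * containing structure, replacing the old (unsigned long) data cast. */
static void foo_done_task(struct work_struct *t)
{
	struct foo_dev *dd = from_work(dd, t, done_task);
	/* ... completion processing, unchanged ... */
}

static void foo_setup(struct foo_dev *dd)
{
	INIT_WORK(&dd->done_task, foo_done_task);	/* was: tasklet_init() */
}

static void foo_irq_handler(struct foo_dev *dd)
{
	queue_work(system_bh_wq, &dd->done_task);	/* was: tasklet_schedule() */
}

static void foo_teardown(struct foo_dev *dd)
{
	cancel_work_sync(&dd->done_task);		/* was: tasklet_kill() */
}

Queueing on system_bh_wq keeps the handler in softirq (BH) context, so the locking assumptions the code made as a tasklet continue to hold.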
--- a/drivers/crypto/rockchip/rk3288_crypto_ahash.c
+++ b/drivers/crypto/rockchip/rk3288_crypto_ahash.c
@@ -254,7 +254,7 @@ static void rk_hash_unprepare(struct crypto_engine *engine, void *breq)
 	struct rk_ahash_rctx *rctx = ahash_request_ctx(areq);
 	struct rk_crypto_info *rkc = rctx->dev;
 
-	dma_unmap_sg(rkc->dev, areq->src, rctx->nrsg, DMA_TO_DEVICE);
+	dma_unmap_sg(rkc->dev, areq->src, sg_nents(areq->src), DMA_TO_DEVICE);
 }
 
 static int rk_hash_run(struct crypto_engine *engine, void *breq)
--- a/drivers/crypto/starfive/jh7110-aes.c
+++ b/drivers/crypto/starfive/jh7110-aes.c
@@ -511,8 +511,7 @@ static int starfive_aes_map_sg(struct starfive_cryp_dev *cryp,
 		     stsg = sg_next(stsg), dtsg = sg_next(dtsg)) {
 			src_nents = dma_map_sg(cryp->dev, stsg, 1, DMA_BIDIRECTIONAL);
 			if (src_nents == 0)
-				return dev_err_probe(cryp->dev, -ENOMEM,
-						     "dma_map_sg error\n");
+				return -ENOMEM;
 
 			dst_nents = src_nents;
 			len = min(sg_dma_len(stsg), remain);
@@ -528,13 +527,11 @@ static int starfive_aes_map_sg(struct starfive_cryp_dev *cryp,
 		for (stsg = src, dtsg = dst;;) {
 			src_nents = dma_map_sg(cryp->dev, stsg, 1, DMA_TO_DEVICE);
 			if (src_nents == 0)
-				return dev_err_probe(cryp->dev, -ENOMEM,
-						     "dma_map_sg src error\n");
+				return -ENOMEM;
 
 			dst_nents = dma_map_sg(cryp->dev, dtsg, 1, DMA_FROM_DEVICE);
 			if (dst_nents == 0)
-				return dev_err_probe(cryp->dev, -ENOMEM,
-						     "dma_map_sg dst error\n");
+				return -ENOMEM;
 
 			len = min(sg_dma_len(stsg), sg_dma_len(dtsg));
 			len = min(len, remain);
@@ -669,8 +666,7 @@ static int starfive_aes_aead_do_one_req(struct crypto_engine *engine, void *areq
 	if (cryp->assoclen) {
 		rctx->adata = kzalloc(cryp->assoclen + AES_BLOCK_SIZE, GFP_KERNEL);
 		if (!rctx->adata)
-			return dev_err_probe(cryp->dev, -ENOMEM,
-					     "Failed to alloc memory for adata");
+			return -ENOMEM;
 
 		if (sg_copy_to_buffer(req->src, sg_nents_for_len(req->src, cryp->assoclen),
 				      rctx->adata, cryp->assoclen) != cryp->assoclen)
--- a/drivers/crypto/starfive/jh7110-hash.c
+++ b/drivers/crypto/starfive/jh7110-hash.c
@@ -229,8 +229,7 @@ static int starfive_hash_one_request(struct crypto_engine *engine, void *areq)
 	for_each_sg(rctx->in_sg, tsg, rctx->in_sg_len, i) {
 		src_nents = dma_map_sg(cryp->dev, tsg, 1, DMA_TO_DEVICE);
 		if (src_nents == 0)
-			return dev_err_probe(cryp->dev, -ENOMEM,
-					     "dma_map_sg error\n");
+			return -ENOMEM;
 
 		ret = starfive_hash_dma_xfer(cryp, tsg);
 		dma_unmap_sg(cryp->dev, tsg, 1, DMA_TO_DEVICE);
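A pattern shared by the starfive hunks above (and the tegra ones below): dev_err_probe() calls are dropped where they add no value. dev_err_probe() exists to record probe-time failure reasons and to silence -EPROBE_DEFER; for -ENOMEM the allocator already complains, and in per-request paths a log line per failed request is just noise. A minimal sketch of the distinction, using a hypothetical foo driver:

#include <linux/clk.h>
#include <linux/dev_printk.h>
#include <linux/dma-mapping.h>
#include <linux/err.h>
#include <linux/platform_device.h>
#include <linux/scatterlist.h>

struct foo_dev {
	struct device *dev;
	struct scatterlist *sg;
};

/* Probe path: dev_err_probe() is appropriate here; it logs the reason
 * and handles -EPROBE_DEFER quietly. */
static int foo_probe(struct platform_device *pdev)
{
	struct clk *clk = devm_clk_get(&pdev->dev, NULL);

	if (IS_ERR(clk))
		return dev_err_probe(&pdev->dev, PTR_ERR(clk),
				     "failed to get clock\n");
	return 0;
}

/* Request hot path: just return the code; -ENOMEM needs no message. */
static int foo_do_one_request(struct foo_dev *dd)
{
	if (dma_map_sg(dd->dev, dd->sg, 1, DMA_TO_DEVICE) == 0)
		return -ENOMEM;
	return 0;
}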
--- a/drivers/crypto/stm32/stm32-cryp.c
+++ b/drivers/crypto/stm32/stm32-cryp.c
@@ -2781,5 +2781,5 @@ static struct platform_driver stm32_cryp_driver = {
 module_platform_driver(stm32_cryp_driver);
 
 MODULE_AUTHOR("Fabien Dessenne <fabien.dessenne@st.com>");
-MODULE_DESCRIPTION("STMicrolectronics STM32 CRYP hardware driver");
+MODULE_DESCRIPTION("STMicroelectronics STM32 CRYP hardware driver");
 MODULE_LICENSE("GPL");
--- a/drivers/crypto/tegra/tegra-se-hash.c
+++ b/drivers/crypto/tegra/tegra-se-hash.c
@@ -400,8 +400,9 @@ static int tegra_sha_do_update(struct ahash_request *req)
 	struct tegra_sha_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
 	struct tegra_sha_reqctx *rctx = ahash_request_ctx(req);
 	struct tegra_se *se = ctx->se;
-	unsigned int nblks, nresidue, size, ret;
+	unsigned int nblks, nresidue, size;
 	u32 *cpuvaddr = se->cmdbuf->addr;
+	int ret;
 
 	nresidue = (req->nbytes + rctx->residue.size) % rctx->blk_size;
 	nblks = (req->nbytes + rctx->residue.size) / rctx->blk_size;
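The tegra-se-hash hunk moves ret out of the unsigned declarations because a negative error code stored in an unsigned int can never test < 0, so error checks silently pass. A standalone illustration of the bug class (userspace C, hypothetical values):

#include <errno.h>
#include <stdio.h>

int main(void)
{
	unsigned int ret = -ENOMEM;	/* wraps to a huge positive value */
	int err = -ENOMEM;

	if (ret < 0)			/* never true: unsigned is never negative */
		printf("unsigned: error caught\n");	/* unreachable */
	if (err < 0)			/* true: the error is detected */
		printf("int: error caught, err=%d\n", err);
	return 0;
}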
--- a/drivers/crypto/tegra/tegra-se-main.c
+++ b/drivers/crypto/tegra/tegra-se-main.c
@@ -310,7 +310,7 @@ static int tegra_se_probe(struct platform_device *pdev)
 
 	se->engine = crypto_engine_alloc_init(dev, 0);
 	if (!se->engine)
-		return dev_err_probe(dev, -ENOMEM, "failed to init crypto engine\n");
+		return -ENOMEM;
 
 	ret = crypto_engine_start(se->engine);
 	if (ret) {
--- /dev/null
+++ b/drivers/crypto/ti/Kconfig
@@ -0,0 +1,14 @@
+# SPDX-License-Identifier: GPL-2.0-only
+config CRYPTO_DEV_TI_DTHEV2
+	tristate "Support for TI DTHE V2 cryptography engine"
+	depends on ARCH_K3 || COMPILE_TEST
+	select CRYPTO_ENGINE
+	select CRYPTO_SKCIPHER
+	select CRYPTO_ECB
+	select CRYPTO_CBC
+	help
+	  This enables support for the TI DTHE V2 hw cryptography engine
+	  which can be found on TI K3 SOCs. Selecting this enables use
+	  of hardware offloading for cryptographic algorithms on
+	  these devices, providing enhanced resistance against side-channel
+	  attacks.
--- /dev/null
+++ b/drivers/crypto/ti/Makefile
@@ -0,0 +1,3 @@
+# SPDX-License-Identifier: GPL-2.0-only
+obj-$(CONFIG_CRYPTO_DEV_TI_DTHEV2) += dthev2.o
+dthev2-objs := dthev2-common.o dthev2-aes.o
--- /dev/null
+++ b/drivers/crypto/ti/dthev2-aes.c
@@ -0,0 +1,411 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * K3 DTHE V2 crypto accelerator driver
+ *
+ * Copyright (C) Texas Instruments 2025 - https://www.ti.com
+ * Author: T Pratham <t-pratham@ti.com>
+ */
+
+#include <crypto/aead.h>
+#include <crypto/aes.h>
+#include <crypto/algapi.h>
+#include <crypto/engine.h>
+#include <crypto/internal/aead.h>
+#include <crypto/internal/skcipher.h>
+
+#include "dthev2-common.h"
+
+#include <linux/delay.h>
+#include <linux/dmaengine.h>
+#include <linux/dma-mapping.h>
+#include <linux/io.h>
+#include <linux/scatterlist.h>
+
+/* Registers */
+
+// AES Engine
+#define DTHE_P_AES_BASE		0x7000
+#define DTHE_P_AES_KEY1_0	0x0038
+#define DTHE_P_AES_KEY1_1	0x003C
+#define DTHE_P_AES_KEY1_2	0x0030
+#define DTHE_P_AES_KEY1_3	0x0034
+#define DTHE_P_AES_KEY1_4	0x0028
+#define DTHE_P_AES_KEY1_5	0x002C
+#define DTHE_P_AES_KEY1_6	0x0020
+#define DTHE_P_AES_KEY1_7	0x0024
+#define DTHE_P_AES_IV_IN_0	0x0040
+#define DTHE_P_AES_IV_IN_1	0x0044
+#define DTHE_P_AES_IV_IN_2	0x0048
+#define DTHE_P_AES_IV_IN_3	0x004C
+#define DTHE_P_AES_CTRL		0x0050
+#define DTHE_P_AES_C_LENGTH_0	0x0054
+#define DTHE_P_AES_C_LENGTH_1	0x0058
+#define DTHE_P_AES_AUTH_LENGTH	0x005C
+#define DTHE_P_AES_DATA_IN_OUT	0x0060
+
+#define DTHE_P_AES_SYSCONFIG	0x0084
+#define DTHE_P_AES_IRQSTATUS	0x008C
+#define DTHE_P_AES_IRQENABLE	0x0090
+
+/* Register write values and macros */
+
+enum aes_ctrl_mode_masks {
+	AES_CTRL_ECB_MASK = 0x00,
+	AES_CTRL_CBC_MASK = BIT(5),
+};
+
+#define DTHE_AES_CTRL_MODE_CLEAR_MASK	~GENMASK(28, 5)
+
+#define DTHE_AES_CTRL_DIR_ENC		BIT(2)
+
+#define DTHE_AES_CTRL_KEYSIZE_16B	BIT(3)
+#define DTHE_AES_CTRL_KEYSIZE_24B	BIT(4)
+#define DTHE_AES_CTRL_KEYSIZE_32B	(BIT(3) | BIT(4))
+
+#define DTHE_AES_CTRL_SAVE_CTX_SET	BIT(29)
+
+#define DTHE_AES_CTRL_OUTPUT_READY	BIT_MASK(0)
+#define DTHE_AES_CTRL_INPUT_READY	BIT_MASK(1)
+#define DTHE_AES_CTRL_SAVED_CTX_READY	BIT_MASK(30)
+#define DTHE_AES_CTRL_CTX_READY		BIT_MASK(31)
+
+#define DTHE_AES_SYSCONFIG_DMA_DATA_IN_OUT_EN	GENMASK(6, 5)
+#define DTHE_AES_IRQENABLE_EN_ALL		GENMASK(3, 0)
+
+/* Misc */
+#define AES_IV_SIZE		AES_BLOCK_SIZE
+#define AES_BLOCK_WORDS		(AES_BLOCK_SIZE / sizeof(u32))
+#define AES_IV_WORDS		AES_BLOCK_WORDS
+
+static int dthe_cipher_init_tfm(struct crypto_skcipher *tfm)
+{
+	struct dthe_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
+	struct dthe_data *dev_data = dthe_get_dev(ctx);
+
+	ctx->dev_data = dev_data;
+	ctx->keylen = 0;
+
+	return 0;
+}
+
+static int dthe_aes_setkey(struct crypto_skcipher *tfm, const u8 *key, unsigned int keylen)
+{
+	struct dthe_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
+
+	if (keylen != AES_KEYSIZE_128 && keylen != AES_KEYSIZE_192 && keylen != AES_KEYSIZE_256)
+		return -EINVAL;
+
+	ctx->keylen = keylen;
+	memcpy(ctx->key, key, keylen);
+
+	return 0;
+}
+
+static int dthe_aes_ecb_setkey(struct crypto_skcipher *tfm, const u8 *key, unsigned int keylen)
+{
+	struct dthe_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
+
+	ctx->aes_mode = DTHE_AES_ECB;
+
+	return dthe_aes_setkey(tfm, key, keylen);
+}
+
+static int dthe_aes_cbc_setkey(struct crypto_skcipher *tfm, const u8 *key, unsigned int keylen)
+{
+	struct dthe_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
+
+	ctx->aes_mode = DTHE_AES_CBC;
+
+	return dthe_aes_setkey(tfm, key, keylen);
+}
+
+static void dthe_aes_set_ctrl_key(struct dthe_tfm_ctx *ctx,
+				  struct dthe_aes_req_ctx *rctx,
+				  u32 *iv_in)
+{
+	struct dthe_data *dev_data = dthe_get_dev(ctx);
+	void __iomem *aes_base_reg = dev_data->regs + DTHE_P_AES_BASE;
+	u32 ctrl_val = 0;
+
+	writel_relaxed(ctx->key[0], aes_base_reg + DTHE_P_AES_KEY1_0);
+	writel_relaxed(ctx->key[1], aes_base_reg + DTHE_P_AES_KEY1_1);
+	writel_relaxed(ctx->key[2], aes_base_reg + DTHE_P_AES_KEY1_2);
+	writel_relaxed(ctx->key[3], aes_base_reg + DTHE_P_AES_KEY1_3);
+
+	if (ctx->keylen > AES_KEYSIZE_128) {
+		writel_relaxed(ctx->key[4], aes_base_reg + DTHE_P_AES_KEY1_4);
+		writel_relaxed(ctx->key[5], aes_base_reg + DTHE_P_AES_KEY1_5);
+	}
+	if (ctx->keylen == AES_KEYSIZE_256) {
+		writel_relaxed(ctx->key[6], aes_base_reg + DTHE_P_AES_KEY1_6);
+		writel_relaxed(ctx->key[7], aes_base_reg + DTHE_P_AES_KEY1_7);
+	}
+
+	if (rctx->enc)
+		ctrl_val |= DTHE_AES_CTRL_DIR_ENC;
+
+	if (ctx->keylen == AES_KEYSIZE_128)
+		ctrl_val |= DTHE_AES_CTRL_KEYSIZE_16B;
+	else if (ctx->keylen == AES_KEYSIZE_192)
+		ctrl_val |= DTHE_AES_CTRL_KEYSIZE_24B;
+	else
+		ctrl_val |= DTHE_AES_CTRL_KEYSIZE_32B;
+
+	// Write AES mode
+	ctrl_val &= DTHE_AES_CTRL_MODE_CLEAR_MASK;
+	switch (ctx->aes_mode) {
+	case DTHE_AES_ECB:
+		ctrl_val |= AES_CTRL_ECB_MASK;
+		break;
+	case DTHE_AES_CBC:
+		ctrl_val |= AES_CTRL_CBC_MASK;
+		break;
+	}
+
+	if (iv_in) {
+		ctrl_val |= DTHE_AES_CTRL_SAVE_CTX_SET;
+		for (int i = 0; i < AES_IV_WORDS; ++i)
+			writel_relaxed(iv_in[i],
+				       aes_base_reg + DTHE_P_AES_IV_IN_0 + (DTHE_REG_SIZE * i));
+	}
+
+	writel_relaxed(ctrl_val, aes_base_reg + DTHE_P_AES_CTRL);
+}
+
+static void dthe_aes_dma_in_callback(void *data)
+{
+	struct skcipher_request *req = (struct skcipher_request *)data;
+	struct dthe_aes_req_ctx *rctx = skcipher_request_ctx(req);
+
+	complete(&rctx->aes_compl);
+}
+
+static int dthe_aes_run(struct crypto_engine *engine, void *areq)
+{
+	struct skcipher_request *req = container_of(areq, struct skcipher_request, base);
+	struct dthe_tfm_ctx *ctx = crypto_skcipher_ctx(crypto_skcipher_reqtfm(req));
+	struct dthe_data *dev_data = dthe_get_dev(ctx);
+	struct dthe_aes_req_ctx *rctx = skcipher_request_ctx(req);
+
+	unsigned int len = req->cryptlen;
+	struct scatterlist *src = req->src;
+	struct scatterlist *dst = req->dst;
+
+	int src_nents = sg_nents_for_len(src, len);
+	int dst_nents;
+
+	int src_mapped_nents;
+	int dst_mapped_nents;
+
+	bool diff_dst;
+	enum dma_data_direction src_dir, dst_dir;
+
+	struct device *tx_dev, *rx_dev;
+	struct dma_async_tx_descriptor *desc_in, *desc_out;
+
+	int ret;
+
+	void __iomem *aes_base_reg = dev_data->regs + DTHE_P_AES_BASE;
+
+	u32 aes_irqenable_val = readl_relaxed(aes_base_reg + DTHE_P_AES_IRQENABLE);
+	u32 aes_sysconfig_val = readl_relaxed(aes_base_reg + DTHE_P_AES_SYSCONFIG);
+
+	aes_sysconfig_val |= DTHE_AES_SYSCONFIG_DMA_DATA_IN_OUT_EN;
+	writel_relaxed(aes_sysconfig_val, aes_base_reg + DTHE_P_AES_SYSCONFIG);
+
+	aes_irqenable_val |= DTHE_AES_IRQENABLE_EN_ALL;
+	writel_relaxed(aes_irqenable_val, aes_base_reg + DTHE_P_AES_IRQENABLE);
+
+	if (src == dst) {
+		diff_dst = false;
+		src_dir = DMA_BIDIRECTIONAL;
+		dst_dir = DMA_BIDIRECTIONAL;
+	} else {
+		diff_dst = true;
+		src_dir = DMA_TO_DEVICE;
+		dst_dir = DMA_FROM_DEVICE;
+	}
+
+	tx_dev = dmaengine_get_dma_device(dev_data->dma_aes_tx);
+	rx_dev = dmaengine_get_dma_device(dev_data->dma_aes_rx);
+
+	src_mapped_nents = dma_map_sg(tx_dev, src, src_nents, src_dir);
+	if (src_mapped_nents == 0) {
+		ret = -EINVAL;
+		goto aes_err;
+	}
+
+	if (!diff_dst) {
+		dst_nents = src_nents;
+		dst_mapped_nents = src_mapped_nents;
+	} else {
+		dst_nents = sg_nents_for_len(dst, len);
+		dst_mapped_nents = dma_map_sg(rx_dev, dst, dst_nents, dst_dir);
+		if (dst_mapped_nents == 0) {
+			dma_unmap_sg(tx_dev, src, src_nents, src_dir);
+			ret = -EINVAL;
+			goto aes_err;
+		}
+	}
+
+	desc_in = dmaengine_prep_slave_sg(dev_data->dma_aes_rx, dst, dst_mapped_nents,
+					  DMA_DEV_TO_MEM, DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+	if (!desc_in) {
+		dev_err(dev_data->dev, "IN prep_slave_sg() failed\n");
+		ret = -EINVAL;
+		goto aes_prep_err;
+	}
+
+	desc_out = dmaengine_prep_slave_sg(dev_data->dma_aes_tx, src, src_mapped_nents,
+					   DMA_MEM_TO_DEV, DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+	if (!desc_out) {
+		dev_err(dev_data->dev, "OUT prep_slave_sg() failed\n");
+		ret = -EINVAL;
+		goto aes_prep_err;
+	}
+
+	desc_in->callback = dthe_aes_dma_in_callback;
+	desc_in->callback_param = req;
+
+	init_completion(&rctx->aes_compl);
+
+	if (ctx->aes_mode == DTHE_AES_ECB)
+		dthe_aes_set_ctrl_key(ctx, rctx, NULL);
+	else
+		dthe_aes_set_ctrl_key(ctx, rctx, (u32 *)req->iv);
+
+	writel_relaxed(lower_32_bits(req->cryptlen), aes_base_reg + DTHE_P_AES_C_LENGTH_0);
+	writel_relaxed(upper_32_bits(req->cryptlen), aes_base_reg + DTHE_P_AES_C_LENGTH_1);
+
+	dmaengine_submit(desc_in);
+	dmaengine_submit(desc_out);
+
+	dma_async_issue_pending(dev_data->dma_aes_rx);
+	dma_async_issue_pending(dev_data->dma_aes_tx);
+
+	// Need to do a timeout to ensure finalise gets called if DMA callback fails for any reason
+	ret = wait_for_completion_timeout(&rctx->aes_compl, msecs_to_jiffies(DTHE_DMA_TIMEOUT_MS));
+	if (!ret) {
+		ret = -ETIMEDOUT;
+		dmaengine_terminate_sync(dev_data->dma_aes_rx);
+		dmaengine_terminate_sync(dev_data->dma_aes_tx);
+
+		for (int i = 0; i < AES_BLOCK_WORDS; ++i)
+			readl_relaxed(aes_base_reg + DTHE_P_AES_DATA_IN_OUT + (DTHE_REG_SIZE * i));
+	} else {
+		ret = 0;
+	}
+
+	// For modes other than ECB, read IV_OUT
+	if (ctx->aes_mode != DTHE_AES_ECB) {
+		u32 *iv_out = (u32 *)req->iv;
+
+		for (int i = 0; i < AES_IV_WORDS; ++i)
+			iv_out[i] = readl_relaxed(aes_base_reg +
+						  DTHE_P_AES_IV_IN_0 +
+						  (DTHE_REG_SIZE * i));
+	}
+
+aes_prep_err:
+	dma_unmap_sg(tx_dev, src, src_nents, src_dir);
+	if (dst_dir != DMA_BIDIRECTIONAL)
+		dma_unmap_sg(rx_dev, dst, dst_nents, dst_dir);
+
+aes_err:
+	local_bh_disable();
+	crypto_finalize_skcipher_request(dev_data->engine, req, ret);
+	local_bh_enable();
+	return ret;
+}
+
+static int dthe_aes_crypt(struct skcipher_request *req)
+{
+	struct dthe_tfm_ctx *ctx = crypto_skcipher_ctx(crypto_skcipher_reqtfm(req));
+	struct dthe_data *dev_data = dthe_get_dev(ctx);
+	struct crypto_engine *engine;
+
+	/*
+	 * If data is not a multiple of AES_BLOCK_SIZE, need to return -EINVAL
+	 * If data length input is zero, no need to do any operation.
+	 */
+	if (req->cryptlen % AES_BLOCK_SIZE)
+		return -EINVAL;
+
+	if (req->cryptlen == 0)
+		return 0;
+
+	engine = dev_data->engine;
+	return crypto_transfer_skcipher_request_to_engine(engine, req);
+}
+
+static int dthe_aes_encrypt(struct skcipher_request *req)
+{
+	struct dthe_aes_req_ctx *rctx = skcipher_request_ctx(req);
+
+	rctx->enc = 1;
+	return dthe_aes_crypt(req);
+}
+
+static int dthe_aes_decrypt(struct skcipher_request *req)
+{
+	struct dthe_aes_req_ctx *rctx = skcipher_request_ctx(req);
+
+	rctx->enc = 0;
+	return dthe_aes_crypt(req);
+}
+
+static struct skcipher_engine_alg cipher_algs[] = {
+	{
+		.base.init		= dthe_cipher_init_tfm,
+		.base.setkey		= dthe_aes_ecb_setkey,
+		.base.encrypt		= dthe_aes_encrypt,
+		.base.decrypt		= dthe_aes_decrypt,
+		.base.min_keysize	= AES_MIN_KEY_SIZE,
+		.base.max_keysize	= AES_MAX_KEY_SIZE,
+		.base.base = {
+			.cra_name		= "ecb(aes)",
+			.cra_driver_name	= "ecb-aes-dthev2",
+			.cra_priority		= 299,
+			.cra_flags		= CRYPTO_ALG_TYPE_SKCIPHER |
+						  CRYPTO_ALG_KERN_DRIVER_ONLY,
+			.cra_alignmask		= AES_BLOCK_SIZE - 1,
+			.cra_blocksize		= AES_BLOCK_SIZE,
+			.cra_ctxsize		= sizeof(struct dthe_tfm_ctx),
+			.cra_reqsize		= sizeof(struct dthe_aes_req_ctx),
+			.cra_module		= THIS_MODULE,
+		},
+		.op.do_one_request = dthe_aes_run,
+	}, /* ECB AES */
+	{
+		.base.init		= dthe_cipher_init_tfm,
+		.base.setkey		= dthe_aes_cbc_setkey,
+		.base.encrypt		= dthe_aes_encrypt,
+		.base.decrypt		= dthe_aes_decrypt,
+		.base.min_keysize	= AES_MIN_KEY_SIZE,
+		.base.max_keysize	= AES_MAX_KEY_SIZE,
+		.base.ivsize		= AES_IV_SIZE,
+		.base.base = {
+			.cra_name		= "cbc(aes)",
+			.cra_driver_name	= "cbc-aes-dthev2",
+			.cra_priority		= 299,
+			.cra_flags		= CRYPTO_ALG_TYPE_SKCIPHER |
+						  CRYPTO_ALG_KERN_DRIVER_ONLY,
+			.cra_alignmask		= AES_BLOCK_SIZE - 1,
+			.cra_blocksize		= AES_BLOCK_SIZE,
+			.cra_ctxsize		= sizeof(struct dthe_tfm_ctx),
+			.cra_reqsize		= sizeof(struct dthe_aes_req_ctx),
+			.cra_module		= THIS_MODULE,
+		},
+		.op.do_one_request = dthe_aes_run,
+	} /* CBC AES */
+};
+
+int dthe_register_aes_algs(void)
+{
+	return crypto_engine_register_skciphers(cipher_algs, ARRAY_SIZE(cipher_algs));
+}
+
+void dthe_unregister_aes_algs(void)
+{
+	crypto_engine_unregister_skciphers(cipher_algs, ARRAY_SIZE(cipher_algs));
+}
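With dthev2-aes.c in place, the two algorithms it registers are reachable through the generic skcipher API. A minimal hedged sketch of an in-kernel consumer (hypothetical example function, not part of the driver; the buffer must be DMA-able, i.e. not on the stack):

#include <crypto/skcipher.h>
#include <linux/crypto.h>
#include <linux/scatterlist.h>

/* One synchronous CBC-AES encryption of a single 16-byte block. */
static int example_cbc_aes(u8 *buf /* AES_BLOCK_SIZE bytes */,
			   const u8 *key, unsigned int keylen,
			   u8 *iv /* AES_BLOCK_SIZE bytes */)
{
	struct crypto_skcipher *tfm;
	struct skcipher_request *req;
	struct scatterlist sg;
	DECLARE_CRYPTO_WAIT(wait);
	int ret;

	/* "cbc(aes)" resolves to cbc-aes-dthev2 when that driver wins
	 * the priority comparison against other implementations. */
	tfm = crypto_alloc_skcipher("cbc(aes)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	ret = crypto_skcipher_setkey(tfm, key, keylen);
	if (ret)
		goto out_free_tfm;

	req = skcipher_request_alloc(tfm, GFP_KERNEL);
	if (!req) {
		ret = -ENOMEM;
		goto out_free_tfm;
	}

	sg_init_one(&sg, buf, AES_BLOCK_SIZE);
	skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP,
				      crypto_req_done, &wait);
	skcipher_request_set_crypt(req, &sg, &sg, AES_BLOCK_SIZE, iv);

	ret = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);

	skcipher_request_free(req);
out_free_tfm:
	crypto_free_skcipher(tfm);
	return ret;
}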
--- /dev/null
+++ b/drivers/crypto/ti/dthev2-common.c
@@ -0,0 +1,217 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * K3 DTHE V2 crypto accelerator driver
+ *
+ * Copyright (C) Texas Instruments 2025 - https://www.ti.com
+ * Author: T Pratham <t-pratham@ti.com>
+ */
+
+#include <crypto/aes.h>
+#include <crypto/algapi.h>
+#include <crypto/engine.h>
+#include <crypto/internal/aead.h>
+#include <crypto/internal/skcipher.h>
+
+#include "dthev2-common.h"
+
+#include <linux/delay.h>
+#include <linux/dmaengine.h>
+#include <linux/dmapool.h>
+#include <linux/dma-mapping.h>
+#include <linux/io.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/mod_devicetable.h>
+#include <linux/platform_device.h>
+#include <linux/scatterlist.h>
+
+#define DRIVER_NAME	"dthev2"
+
+static struct dthe_list dthe_dev_list = {
+	.dev_list = LIST_HEAD_INIT(dthe_dev_list.dev_list),
+	.lock = __SPIN_LOCK_UNLOCKED(dthe_dev_list.lock),
+};
+
+struct dthe_data *dthe_get_dev(struct dthe_tfm_ctx *ctx)
+{
+	struct dthe_data *dev_data;
+
+	if (ctx->dev_data)
+		return ctx->dev_data;
+
+	spin_lock_bh(&dthe_dev_list.lock);
+	dev_data = list_first_entry(&dthe_dev_list.dev_list, struct dthe_data, list);
+	if (dev_data)
+		list_move_tail(&dev_data->list, &dthe_dev_list.dev_list);
+	spin_unlock_bh(&dthe_dev_list.lock);
+
+	return dev_data;
+}
+
+static int dthe_dma_init(struct dthe_data *dev_data)
+{
+	int ret;
+	struct dma_slave_config cfg;
+
+	dev_data->dma_aes_rx = NULL;
+	dev_data->dma_aes_tx = NULL;
+	dev_data->dma_sha_tx = NULL;
+
+	dev_data->dma_aes_rx = dma_request_chan(dev_data->dev, "rx");
+	if (IS_ERR(dev_data->dma_aes_rx)) {
+		return dev_err_probe(dev_data->dev, PTR_ERR(dev_data->dma_aes_rx),
+				     "Unable to request rx DMA channel\n");
+	}
+
+	dev_data->dma_aes_tx = dma_request_chan(dev_data->dev, "tx1");
+	if (IS_ERR(dev_data->dma_aes_tx)) {
+		ret = dev_err_probe(dev_data->dev, PTR_ERR(dev_data->dma_aes_tx),
+				    "Unable to request tx1 DMA channel\n");
+		goto err_dma_aes_tx;
+	}
+
+	dev_data->dma_sha_tx = dma_request_chan(dev_data->dev, "tx2");
+	if (IS_ERR(dev_data->dma_sha_tx)) {
+		ret = dev_err_probe(dev_data->dev, PTR_ERR(dev_data->dma_sha_tx),
+				    "Unable to request tx2 DMA channel\n");
+		goto err_dma_sha_tx;
+	}
+
+	memzero_explicit(&cfg, sizeof(cfg));
+
+	cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+	cfg.src_maxburst = 4;
+
+	ret = dmaengine_slave_config(dev_data->dma_aes_rx, &cfg);
+	if (ret) {
+		dev_err(dev_data->dev, "Can't configure IN dmaengine slave: %d\n", ret);
+		goto err_dma_config;
+	}
+
+	cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+	cfg.dst_maxburst = 4;
+
+	ret = dmaengine_slave_config(dev_data->dma_aes_tx, &cfg);
+	if (ret) {
+		dev_err(dev_data->dev, "Can't configure OUT dmaengine slave: %d\n", ret);
+		goto err_dma_config;
+	}
+
+	return 0;
+
+err_dma_config:
+	dma_release_channel(dev_data->dma_sha_tx);
+err_dma_sha_tx:
+	dma_release_channel(dev_data->dma_aes_tx);
+err_dma_aes_tx:
+	dma_release_channel(dev_data->dma_aes_rx);
+
+	return ret;
+}
+
+static int dthe_register_algs(void)
+{
+	return dthe_register_aes_algs();
+}
+
+static void dthe_unregister_algs(void)
+{
+	dthe_unregister_aes_algs();
+}
+
+static int dthe_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct dthe_data *dev_data;
+	int ret;
+
+	dev_data = devm_kzalloc(dev, sizeof(*dev_data), GFP_KERNEL);
+	if (!dev_data)
+		return -ENOMEM;
+
+	dev_data->dev = dev;
+	dev_data->regs = devm_platform_ioremap_resource(pdev, 0);
+	if (IS_ERR(dev_data->regs))
+		return PTR_ERR(dev_data->regs);
+
+	platform_set_drvdata(pdev, dev_data);
+
+	spin_lock(&dthe_dev_list.lock);
+	list_add(&dev_data->list, &dthe_dev_list.dev_list);
+	spin_unlock(&dthe_dev_list.lock);
+
+	ret = dthe_dma_init(dev_data);
+	if (ret)
+		goto probe_dma_err;
+
+	dev_data->engine = crypto_engine_alloc_init(dev, 1);
+	if (!dev_data->engine) {
+		ret = -ENOMEM;
+		goto probe_engine_err;
+	}
+
+	ret = crypto_engine_start(dev_data->engine);
+	if (ret) {
+		dev_err(dev, "Failed to start crypto engine\n");
+		goto probe_engine_start_err;
+	}
+
+	ret = dthe_register_algs();
+	if (ret) {
+		dev_err(dev, "Failed to register algs\n");
+		goto probe_engine_start_err;
+	}
+
+	return 0;
+
+probe_engine_start_err:
+	crypto_engine_exit(dev_data->engine);
+probe_engine_err:
+	dma_release_channel(dev_data->dma_aes_rx);
+	dma_release_channel(dev_data->dma_aes_tx);
+	dma_release_channel(dev_data->dma_sha_tx);
+probe_dma_err:
+	spin_lock(&dthe_dev_list.lock);
+	list_del(&dev_data->list);
+	spin_unlock(&dthe_dev_list.lock);
+
+	return ret;
+}
+
+static void dthe_remove(struct platform_device *pdev)
+{
+	struct dthe_data *dev_data = platform_get_drvdata(pdev);
+
+	spin_lock(&dthe_dev_list.lock);
+	list_del(&dev_data->list);
+	spin_unlock(&dthe_dev_list.lock);
+
+	dthe_unregister_algs();
+
+	crypto_engine_exit(dev_data->engine);
+
+	dma_release_channel(dev_data->dma_aes_rx);
+	dma_release_channel(dev_data->dma_aes_tx);
+	dma_release_channel(dev_data->dma_sha_tx);
+}
+
+static const struct of_device_id dthe_of_match[] = {
+	{ .compatible = "ti,am62l-dthev2", },
+	{},
+};
+MODULE_DEVICE_TABLE(of, dthe_of_match);
+
+static struct platform_driver dthe_driver = {
+	.probe	= dthe_probe,
+	.remove	= dthe_remove,
+	.driver	= {
+		.name		= DRIVER_NAME,
+		.of_match_table	= dthe_of_match,
+	},
+};
+
+module_platform_driver(dthe_driver);
+
+MODULE_AUTHOR("T Pratham <t-pratham@ti.com>");
+MODULE_DESCRIPTION("Texas Instruments DTHE V2 driver");
+MODULE_LICENSE("GPL");
--- /dev/null
+++ b/drivers/crypto/ti/dthev2-common.h
@@ -0,0 +1,101 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * K3 DTHE V2 crypto accelerator driver
+ *
+ * Copyright (C) Texas Instruments 2025 - https://www.ti.com
+ * Author: T Pratham <t-pratham@ti.com>
+ */
+
+#ifndef __TI_DTHEV2_H__
+#define __TI_DTHEV2_H__
+
+#include <crypto/aead.h>
+#include <crypto/aes.h>
+#include <crypto/algapi.h>
+#include <crypto/engine.h>
+#include <crypto/hash.h>
+#include <crypto/internal/aead.h>
+#include <crypto/internal/hash.h>
+#include <crypto/internal/skcipher.h>
+
+#include <linux/delay.h>
+#include <linux/dmaengine.h>
+#include <linux/dmapool.h>
+#include <linux/dma-mapping.h>
+#include <linux/io.h>
+#include <linux/scatterlist.h>
+
+#define DTHE_REG_SIZE		4
+#define DTHE_DMA_TIMEOUT_MS	2000
+
+enum dthe_aes_mode {
+	DTHE_AES_ECB = 0,
+	DTHE_AES_CBC,
+};
+
+/* Driver specific struct definitions */
+
+/**
+ * struct dthe_data - DTHE_V2 driver instance data
+ * @dev: Device pointer
+ * @regs: Base address of the register space
+ * @list: list node for dev
+ * @engine: Crypto engine instance
+ * @dma_aes_rx: AES Rx DMA Channel
+ * @dma_aes_tx: AES Tx DMA Channel
+ * @dma_sha_tx: SHA Tx DMA Channel
+ */
+struct dthe_data {
+	struct device *dev;
+	void __iomem *regs;
+	struct list_head list;
+	struct crypto_engine *engine;
+
+	struct dma_chan *dma_aes_rx;
+	struct dma_chan *dma_aes_tx;
+
+	struct dma_chan *dma_sha_tx;
+};
+
+/**
+ * struct dthe_list - device data list head
+ * @dev_list: linked list head
+ * @lock: Spinlock protecting accesses to the list
+ */
+struct dthe_list {
+	struct list_head dev_list;
+	spinlock_t lock;
+};
+
+/**
+ * struct dthe_tfm_ctx - Transform ctx struct containing ctx for all sub-components of DTHE V2
+ * @dev_data: Device data struct pointer
+ * @keylen: AES key length
+ * @key: AES key
+ * @aes_mode: AES mode
+ */
+struct dthe_tfm_ctx {
+	struct dthe_data *dev_data;
+	unsigned int keylen;
+	u32 key[AES_KEYSIZE_256 / sizeof(u32)];
+	enum dthe_aes_mode aes_mode;
+};
+
+/**
+ * struct dthe_aes_req_ctx - AES engine req ctx struct
+ * @enc: flag indicating encryption or decryption operation
+ * @aes_compl: Completion variable for use in manual completion in case of DMA callback failure
+ */
+struct dthe_aes_req_ctx {
+	int enc;
+	struct completion aes_compl;
+};
+
+/* Struct definitions end */
+
+struct dthe_data *dthe_get_dev(struct dthe_tfm_ctx *ctx);
+
+int dthe_register_aes_algs(void);
+void dthe_unregister_aes_algs(void);
+
+#endif
--- a/drivers/crypto/xilinx/Makefile
+++ b/drivers/crypto/xilinx/Makefile
@@ -1,3 +1,4 @@
 # SPDX-License-Identifier: GPL-2.0-only
+obj-$(CONFIG_CRYPTO_DEV_XILINX_TRNG) += xilinx-trng.o
 obj-$(CONFIG_CRYPTO_DEV_ZYNQMP_AES) += zynqmp-aes-gcm.o
 obj-$(CONFIG_CRYPTO_DEV_ZYNQMP_SHA3) += zynqmp-sha.o
--- /dev/null
+++ b/drivers/crypto/xilinx/xilinx-trng.c
@@ -0,0 +1,405 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * AMD Versal True Random Number Generator driver
+ * Copyright (c) 2024 - 2025 Advanced Micro Devices, Inc.
+ */
+
+#include <linux/bitfield.h>
+#include <linux/clk.h>
+#include <linux/crypto.h>
+#include <linux/delay.h>
+#include <linux/errno.h>
+#include <linux/firmware/xlnx-zynqmp.h>
+#include <linux/hw_random.h>
+#include <linux/io.h>
+#include <linux/iopoll.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/mod_devicetable.h>
+#include <linux/platform_device.h>
+#include <linux/string.h>
+#include <crypto/internal/cipher.h>
+#include <crypto/internal/rng.h>
+#include <crypto/aes.h>
+
+/* TRNG Registers Offsets */
+#define TRNG_STATUS_OFFSET		0x4U
+#define TRNG_CTRL_OFFSET		0x8U
+#define TRNG_EXT_SEED_OFFSET		0x40U
+#define TRNG_PER_STRNG_OFFSET		0x80U
+#define TRNG_CORE_OUTPUT_OFFSET		0xC0U
+#define TRNG_RESET_OFFSET		0xD0U
+#define TRNG_OSC_EN_OFFSET		0xD4U
+
+/* Mask values */
+#define TRNG_RESET_VAL_MASK		BIT(0)
+#define TRNG_OSC_EN_VAL_MASK		BIT(0)
+#define TRNG_CTRL_PRNGSRST_MASK		BIT(0)
+#define TRNG_CTRL_EUMODE_MASK		BIT(8)
+#define TRNG_CTRL_TRSSEN_MASK		BIT(2)
+#define TRNG_CTRL_PRNGSTART_MASK	BIT(5)
+#define TRNG_CTRL_PRNGXS_MASK		BIT(3)
+#define TRNG_CTRL_PRNGMODE_MASK		BIT(7)
+#define TRNG_STATUS_DONE_MASK		BIT(0)
+#define TRNG_STATUS_QCNT_MASK		GENMASK(11, 9)
+#define TRNG_STATUS_QCNT_16_BYTES	0x800
+
+/* Sizes in bytes */
+#define TRNG_SEED_LEN_BYTES		48U
+#define TRNG_ENTROPY_SEED_LEN_BYTES	64U
+#define TRNG_SEC_STRENGTH_SHIFT		5U
+#define TRNG_SEC_STRENGTH_BYTES		BIT(TRNG_SEC_STRENGTH_SHIFT)
+#define TRNG_BYTES_PER_REG		4U
+#define TRNG_RESET_DELAY		10
+#define TRNG_NUM_INIT_REGS		12U
+#define TRNG_READ_4_WORD		4
+#define TRNG_DATA_READ_DELAY		8000
+
+struct xilinx_rng {
+	void __iomem *rng_base;
+	struct device *dev;
+	struct mutex lock;	/* Protect access to TRNG device */
+	struct hwrng trng;
+};
+
+struct xilinx_rng_ctx {
+	struct xilinx_rng *rng;
+};
+
+static struct xilinx_rng *xilinx_rng_dev;
+
+static void xtrng_readwrite32(void __iomem *addr, u32 mask, u8 value)
+{
+	u32 val;
+
+	val = ioread32(addr);
+	val = (val & (~mask)) | (mask & value);
+	iowrite32(val, addr);
+}
+
+static void xtrng_trng_reset(void __iomem *addr)
+{
+	xtrng_readwrite32(addr + TRNG_RESET_OFFSET, TRNG_RESET_VAL_MASK, TRNG_RESET_VAL_MASK);
+	udelay(TRNG_RESET_DELAY);
+	xtrng_readwrite32(addr + TRNG_RESET_OFFSET, TRNG_RESET_VAL_MASK, 0);
+}
+
+static void xtrng_hold_reset(void __iomem *addr)
+{
+	xtrng_readwrite32(addr + TRNG_CTRL_OFFSET, TRNG_CTRL_PRNGSRST_MASK,
+			  TRNG_CTRL_PRNGSRST_MASK);
+	iowrite32(TRNG_RESET_VAL_MASK, addr + TRNG_RESET_OFFSET);
+	udelay(TRNG_RESET_DELAY);
+}
+
+static void xtrng_softreset(struct xilinx_rng *rng)
+{
+	xtrng_readwrite32(rng->rng_base + TRNG_CTRL_OFFSET, TRNG_CTRL_PRNGSRST_MASK,
+			  TRNG_CTRL_PRNGSRST_MASK);
+	udelay(TRNG_RESET_DELAY);
+	xtrng_readwrite32(rng->rng_base + TRNG_CTRL_OFFSET, TRNG_CTRL_PRNGSRST_MASK, 0);
+}
+
+/* Return no. of bytes read */
+static size_t xtrng_readblock32(void __iomem *rng_base, __be32 *buf, int blocks32, bool wait)
+{
+	int read = 0, ret;
+	int timeout = 1;
+	int i, idx;
+	u32 val;
+
+	if (wait)
+		timeout = TRNG_DATA_READ_DELAY;
+
+	for (i = 0; i < (blocks32 * 2); i++) {
+		/* TRNG core generate data in 16 bytes. Read twice to complete 32 bytes read */
+		ret = readl_poll_timeout(rng_base + TRNG_STATUS_OFFSET, val,
+					 (val & TRNG_STATUS_QCNT_MASK) ==
+					 TRNG_STATUS_QCNT_16_BYTES, !!wait, timeout);
+		if (ret)
+			break;
+
+		for (idx = 0; idx < TRNG_READ_4_WORD; idx++) {
+			*(buf + read) = cpu_to_be32(ioread32(rng_base + TRNG_CORE_OUTPUT_OFFSET));
+			read += 1;
+		}
+	}
+	return read * 4;
+}
+
+static int xtrng_collect_random_data(struct xilinx_rng *rng, u8 *rand_gen_buf,
+				     int no_of_random_bytes, bool wait)
+{
+	u8 randbuf[TRNG_SEC_STRENGTH_BYTES];
+	int byteleft, blocks, count = 0;
+	int ret;
+
+	byteleft = no_of_random_bytes & (TRNG_SEC_STRENGTH_BYTES - 1);
+	blocks = no_of_random_bytes >> TRNG_SEC_STRENGTH_SHIFT;
+	xtrng_readwrite32(rng->rng_base + TRNG_CTRL_OFFSET, TRNG_CTRL_PRNGSTART_MASK,
+			  TRNG_CTRL_PRNGSTART_MASK);
+	if (blocks) {
+		ret = xtrng_readblock32(rng->rng_base, (__be32 *)rand_gen_buf, blocks, wait);
+		if (!ret)
+			return 0;
+		count += ret;
+	}
+
+	if (byteleft) {
+		ret = xtrng_readblock32(rng->rng_base, (__be32 *)randbuf, 1, wait);
+		if (!ret)
+			return count;
+		memcpy(rand_gen_buf + (blocks * TRNG_SEC_STRENGTH_BYTES), randbuf, byteleft);
+		count += byteleft;
+	}
+
+	xtrng_readwrite32(rng->rng_base + TRNG_CTRL_OFFSET,
+			  TRNG_CTRL_PRNGMODE_MASK | TRNG_CTRL_PRNGSTART_MASK, 0U);
+
+	return count;
+}
+
+static void xtrng_write_multiple_registers(void __iomem *base_addr, u32 *values, size_t n)
+{
+	void __iomem *reg_addr;
+	size_t i;
+
+	/* Write seed value into EXTERNAL_SEED Registers in big endian format */
+	for (i = 0; i < n; i++) {
+		reg_addr = (base_addr + ((n - 1 - i) * TRNG_BYTES_PER_REG));
+		iowrite32((u32 __force)(cpu_to_be32(values[i])), reg_addr);
+	}
+}
+
+static void xtrng_enable_entropy(struct xilinx_rng *rng)
+{
+	iowrite32(TRNG_OSC_EN_VAL_MASK, rng->rng_base + TRNG_OSC_EN_OFFSET);
+	xtrng_softreset(rng);
+	iowrite32(TRNG_CTRL_EUMODE_MASK | TRNG_CTRL_TRSSEN_MASK, rng->rng_base + TRNG_CTRL_OFFSET);
+}
+
+static int xtrng_reseed_internal(struct xilinx_rng *rng)
+{
+	u8 entropy[TRNG_ENTROPY_SEED_LEN_BYTES];
+	u32 val;
+	int ret;
+
+	memset(entropy, 0, sizeof(entropy));
+	xtrng_enable_entropy(rng);
+
+	/* collect random data to use it as entropy (input for DF) */
+	ret = xtrng_collect_random_data(rng, entropy, TRNG_SEED_LEN_BYTES, true);
+	if (ret != TRNG_SEED_LEN_BYTES)
+		return -EINVAL;
+
+	xtrng_write_multiple_registers(rng->rng_base + TRNG_EXT_SEED_OFFSET,
+				       (u32 *)entropy, TRNG_NUM_INIT_REGS);
+	/* select reseed operation */
+	iowrite32(TRNG_CTRL_PRNGXS_MASK, rng->rng_base + TRNG_CTRL_OFFSET);
+
+	/* Start the reseed operation with above configuration and wait for STATUS.Done bit to be
+	 * set. Monitor STATUS.CERTF bit, if set indicates SP800-90B entropy health test has failed.
+	 */
+	xtrng_readwrite32(rng->rng_base + TRNG_CTRL_OFFSET, TRNG_CTRL_PRNGSTART_MASK,
+			  TRNG_CTRL_PRNGSTART_MASK);
+
+	ret = readl_poll_timeout(rng->rng_base + TRNG_STATUS_OFFSET, val,
+				 (val & TRNG_STATUS_DONE_MASK) == TRNG_STATUS_DONE_MASK,
+				 1U, 15000U);
+	if (ret)
+		return ret;
+
+	xtrng_readwrite32(rng->rng_base + TRNG_CTRL_OFFSET, TRNG_CTRL_PRNGSTART_MASK, 0U);
+
+	return 0;
+}
+
+static int xtrng_random_bytes_generate(struct xilinx_rng *rng, u8 *rand_buf_ptr,
+				       u32 rand_buf_size, int wait)
+{
+	int nbytes;
+	int ret;
+
+	xtrng_readwrite32(rng->rng_base + TRNG_CTRL_OFFSET,
+			  TRNG_CTRL_PRNGMODE_MASK | TRNG_CTRL_PRNGXS_MASK,
+			  TRNG_CTRL_PRNGMODE_MASK | TRNG_CTRL_PRNGXS_MASK);
+	nbytes = xtrng_collect_random_data(rng, rand_buf_ptr, rand_buf_size, wait);
+
+	ret = xtrng_reseed_internal(rng);
+	if (ret) {
+		dev_err(rng->dev, "Re-seed fail\n");
+		return ret;
+	}
+
+	return nbytes;
+}
+
+static int xtrng_trng_generate(struct crypto_rng *tfm, const u8 *src, u32 slen,
+			       u8 *dst, u32 dlen)
+{
+	struct xilinx_rng_ctx *ctx = crypto_rng_ctx(tfm);
+	int ret;
+
+	mutex_lock(&ctx->rng->lock);
+	ret = xtrng_random_bytes_generate(ctx->rng, dst, dlen, true);
+	mutex_unlock(&ctx->rng->lock);
+
+	return ret < 0 ? ret : 0;
+}
+
+static int xtrng_trng_seed(struct crypto_rng *tfm, const u8 *seed, unsigned int slen)
+{
+	return 0;
+}
+
+static int xtrng_trng_init(struct crypto_tfm *rtfm)
+{
+	struct xilinx_rng_ctx *ctx = crypto_tfm_ctx(rtfm);
+
+	ctx->rng = xilinx_rng_dev;
+
+	return 0;
+}
+
+static struct rng_alg xtrng_trng_alg = {
+	.generate = xtrng_trng_generate,
+	.seed = xtrng_trng_seed,
+	.seedsize = 0,
+	.base = {
+		.cra_name = "stdrng",
+		.cra_driver_name = "xilinx-trng",
+		.cra_priority = 300,
+		.cra_ctxsize = sizeof(struct xilinx_rng_ctx),
+		.cra_module = THIS_MODULE,
+		.cra_init = xtrng_trng_init,
+	},
+};
+
+static int xtrng_hwrng_trng_read(struct hwrng *hwrng, void *data, size_t max, bool wait)
+{
+	u8 buf[TRNG_SEC_STRENGTH_BYTES];
+	struct xilinx_rng *rng;
+	int ret = -EINVAL, i = 0;
+
+	rng = container_of(hwrng, struct xilinx_rng, trng);
+	/* Return in case wait not set and lock not available. */
+	if (!mutex_trylock(&rng->lock) && !wait)
+		return 0;
+	else if (!mutex_is_locked(&rng->lock) && wait)
+		mutex_lock(&rng->lock);
+
+	while (i < max) {
+		ret = xtrng_random_bytes_generate(rng, buf, TRNG_SEC_STRENGTH_BYTES, wait);
+		if (ret < 0)
+			break;
+
+		memcpy(data + i, buf, min_t(int, ret, (max - i)));
+		i += min_t(int, ret, (max - i));
+	}
+	mutex_unlock(&rng->lock);
+
+	return ret;
+}
+
+static int xtrng_hwrng_register(struct hwrng *trng)
+{
+	int ret;
+
+	trng->name = "Xilinx Versal Crypto Engine TRNG";
+	trng->read = xtrng_hwrng_trng_read;
+
+	ret = hwrng_register(trng);
+	if (ret)
+		pr_err("Fail to register the TRNG\n");
+
+	return ret;
+}
+
+static void xtrng_hwrng_unregister(struct hwrng *trng)
+{
+	hwrng_unregister(trng);
+}
+
+static int xtrng_probe(struct platform_device *pdev)
+{
+	struct xilinx_rng *rng;
+	int ret;
+
+	rng = devm_kzalloc(&pdev->dev, sizeof(*rng), GFP_KERNEL);
+	if (!rng)
+		return -ENOMEM;
+
+	rng->dev = &pdev->dev;
+	rng->rng_base = devm_platform_ioremap_resource(pdev, 0);
+	if (IS_ERR(rng->rng_base)) {
+		dev_err(&pdev->dev, "Failed to map resource %ld\n", PTR_ERR(rng->rng_base));
+		return PTR_ERR(rng->rng_base);
+	}
+
+	xtrng_trng_reset(rng->rng_base);
+	ret = xtrng_reseed_internal(rng);
+	if (ret) {
+		dev_err(&pdev->dev, "TRNG Seed fail\n");
+		return ret;
+	}
+
+	xilinx_rng_dev = rng;
+	mutex_init(&rng->lock);
+	ret = crypto_register_rng(&xtrng_trng_alg);
+	if (ret) {
+		dev_err(&pdev->dev, "Crypto Random device registration failed: %d\n", ret);
+		return ret;
+	}
+	ret = xtrng_hwrng_register(&rng->trng);
+	if (ret) {
+		dev_err(&pdev->dev, "HWRNG device registration failed: %d\n", ret);
+		goto crypto_rng_free;
+	}
+	platform_set_drvdata(pdev, rng);
+
+	return 0;
+
+crypto_rng_free:
+	crypto_unregister_rng(&xtrng_trng_alg);
+
+	return ret;
+}
+
+static void xtrng_remove(struct platform_device *pdev)
+{
+	struct xilinx_rng *rng;
+	u32 zero[TRNG_NUM_INIT_REGS] = { };
+
+	rng = platform_get_drvdata(pdev);
+	xtrng_hwrng_unregister(&rng->trng);
+	crypto_unregister_rng(&xtrng_trng_alg);
+	xtrng_write_multiple_registers(rng->rng_base + TRNG_EXT_SEED_OFFSET, zero,
+				       TRNG_NUM_INIT_REGS);
+	xtrng_write_multiple_registers(rng->rng_base + TRNG_PER_STRNG_OFFSET, zero,
+				       TRNG_NUM_INIT_REGS);
+	xtrng_hold_reset(rng->rng_base);
+	xilinx_rng_dev = NULL;
+}
+
+static const struct of_device_id xtrng_of_match[] = {
+	{ .compatible = "xlnx,versal-trng", },
+	{},
+};
+
+MODULE_DEVICE_TABLE(of, xtrng_of_match);
+
+static struct platform_driver xtrng_driver = {
+	.driver = {
+		.name = "xlnx,versal-trng",
+		.of_match_table = xtrng_of_match,
+	},
+	.probe = xtrng_probe,
+	.remove = xtrng_remove,
+};
+
+module_platform_driver(xtrng_driver);
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Harsh Jain <h.jain@amd.com>");
+MODULE_AUTHOR("Mounika Botcha <mounika.botcha@amd.com>");
+MODULE_DESCRIPTION("True Random Number Generator Driver");
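The new driver exposes the same hardware twice: through hwrng as "Xilinx Versal Crypto Engine TRNG" (feeding /dev/hwrng and the kernel entropy pool) and through the crypto_rng API as "stdrng"/"xilinx-trng". A minimal sketch of the crypto_rng consumer side (hypothetical example function, not from the driver):

#include <crypto/rng.h>

/* Pull len random bytes through the crypto_rng API. */
static int example_get_random(u8 *out, unsigned int len)
{
	struct crypto_rng *rng;
	int ret;

	/* "stdrng" picks the highest-priority registered rng; request
	 * "xilinx-trng" by driver name instead to pin this device. */
	rng = crypto_alloc_rng("stdrng", 0, 0);
	if (IS_ERR(rng))
		return PTR_ERR(rng);

	ret = crypto_rng_get_bytes(rng, out, len);
	crypto_free_rng(rng);
	return ret;
}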
--- a/include/crypto/hash.h
+++ b/include/crypto/hash.h
@@ -177,14 +177,26 @@ struct shash_desc {
 #define HASH_MAX_DIGESTSIZE	 64
 
+/*
+ * The size of a core hash state and a partial block.  The final byte
+ * is the length of the partial block.
+ */
+#define HASH_STATE_AND_BLOCK(state, block) ((state) + (block) + 1)
+
 /* Worst case is sha3-224. */
-#define HASH_MAX_STATESIZE	200 + 144 + 1
+#define HASH_MAX_STATESIZE	HASH_STATE_AND_BLOCK(200, 144)
+
+/* This needs to match arch/s390/crypto/sha.h. */
+#define S390_SHA_CTX_SIZE	216
 
 /*
  * Worst case is hmac(sha3-224-s390).  Its context is a nested 'shash_desc'
  * containing a 'struct s390_sha_ctx'.
  */
-#define HASH_MAX_DESCSIZE	(sizeof(struct shash_desc) + 361)
+#define SHA3_224_S390_DESCSIZE	HASH_STATE_AND_BLOCK(S390_SHA_CTX_SIZE, 144)
+#define HASH_MAX_DESCSIZE	(sizeof(struct shash_desc) + \
+				 SHA3_224_S390_DESCSIZE)
 #define MAX_SYNC_HASH_REQSIZE	(sizeof(struct ahash_request) + \
 				 HASH_MAX_DESCSIZE)
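The new macros keep the numeric values bit-for-bit; checking the arithmetic with the constants from the hunk above:

/*
 * HASH_MAX_STATESIZE: sha3-224 keeps a 200-byte Keccak state plus a
 * 144-byte partial block plus 1 length byte:
 *	HASH_STATE_AND_BLOCK(200, 144) = 200 + 144 + 1 = 345
 * which equals the old literal "200 + 144 + 1".
 *
 * HASH_MAX_DESCSIZE: hmac(sha3-224-s390) nests a 216-byte s390 context:
 *	SHA3_224_S390_DESCSIZE = HASH_STATE_AND_BLOCK(216, 144)
 *	                       = 216 + 144 + 1 = 361
 * which equals the old hard-coded "+ 361".
 */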
@@ -18,11 +18,8 @@ struct crypto_scomp {
 /**
  * struct scomp_alg - synchronous compression algorithm
  *
- * @alloc_ctx:	Function allocates algorithm specific context
- * @free_ctx:	Function frees context allocated with alloc_ctx
  * @compress:	Function performs a compress operation
  * @decompress:	Function performs a de-compress operation
- * @base:	Common crypto API algorithm data structure
  * @streams:	Per-cpu memory for algorithm
  * @calg:	Cmonn algorithm data structure shared with acomp
  */
@@ -34,13 +31,7 @@ struct scomp_alg {
 			  unsigned int slen, u8 *dst, unsigned int *dlen,
 			  void *ctx);

-	union {
-		struct {
-			void *(*alloc_ctx)(void);
-			void (*free_ctx)(void *ctx);
-		};
-		struct crypto_acomp_streams streams;
-	};
+	struct crypto_acomp_streams streams;

 	union {
 		struct COMP_ALG_COMMON;
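With the union gone, scomp algorithms no longer supply their own alloc_ctx/free_ctx pair; per-cpu context management always goes through the shared streams machinery. A hypothetical registration after this change (names and bodies are illustrative; only the field layout and the compress signature come from the hunks above):

        static int foo_scompress(struct crypto_scomp *tfm, const u8 *src,
                                 unsigned int slen, u8 *dst, unsigned int *dlen,
                                 void *ctx)
        {
                /* compress src into dst using the per-cpu ctx */
                return 0;
        }

        static int foo_sdecompress(struct crypto_scomp *tfm, const u8 *src,
                                   unsigned int slen, u8 *dst, unsigned int *dlen,
                                   void *ctx)
        {
                /* inverse operation, same calling convention */
                return 0;
        }

        static struct scomp_alg foo_alg = {
                .compress   = foo_scompress,
                .decompress = foo_sdecompress,
                /* .streams now carries the per-cpu context, replacing the
                 * removed alloc_ctx/free_ctx callbacks */
        };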
@@ -104,6 +104,8 @@
 #define UACCE_MODE_SVA		1 /* use uacce sva mode */
 #define UACCE_MODE_DESC	"0(default) means only register to crypto, 1 means both register to crypto and uacce"

+#define QM_ECC_MBIT		BIT(2)
+
 enum qm_stop_reason {
 	QM_NORMAL,
 	QM_SOFT_RESET,
@@ -125,6 +127,7 @@ enum qm_hw_ver {
 	QM_HW_V2 = 0x21,
 	QM_HW_V3 = 0x30,
 	QM_HW_V4 = 0x50,
+	QM_HW_V5 = 0x51,
 };

 enum qm_fun_type {
@@ -239,19 +242,22 @@ enum acc_err_result {
 	ACC_ERR_RECOVERED,
 };

-struct hisi_qm_err_info {
-	char *acpi_rst;
-	u32 msi_wr_port;
+struct hisi_qm_err_mask {
 	u32 ecc_2bits_mask;
-	u32 qm_shutdown_mask;
-	u32 dev_shutdown_mask;
-	u32 qm_reset_mask;
-	u32 dev_reset_mask;
+	u32 shutdown_mask;
+	u32 reset_mask;
 	u32 ce;
 	u32 nfe;
 	u32 fe;
 };

+struct hisi_qm_err_info {
+	char *acpi_rst;
+	u32 msi_wr_port;
+	struct hisi_qm_err_mask qm_err;
+	struct hisi_qm_err_mask dev_err;
+};
+
 struct hisi_qm_err_status {
 	u32 is_qm_ecc_mbit;
 	u32 is_dev_ecc_mbit;
@@ -272,6 +278,8 @@ struct hisi_qm_err_ini {
 	enum acc_err_result (*get_err_result)(struct hisi_qm *qm);
 	bool (*dev_is_abnormal)(struct hisi_qm *qm);
 	int (*set_priv_status)(struct hisi_qm *qm);
+	void (*disable_axi_error)(struct hisi_qm *qm);
+	void (*enable_axi_error)(struct hisi_qm *qm);
 };

 struct hisi_qm_cap_info {
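The hisi_acc_qm.h refactor folds four flat mask fields into two instances of the new struct hisi_qm_err_mask, one for the QM block and one for the device, so the QM and device error paths can share code. Call sites change along these lines (values are illustrative; field names come from the hunk):

        static void foo_err_info_init(struct hisi_qm_err_info *info)
        {
                info->qm_err.shutdown_mask  = 0x1;  /* was info->qm_shutdown_mask */
                info->qm_err.reset_mask     = 0x2;  /* was info->qm_reset_mask */
                info->dev_err.shutdown_mask = 0x4;  /* was info->dev_shutdown_mask */
                info->dev_err.reset_mask    = 0x8;  /* was info->dev_reset_mask */
        }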
@@ -107,6 +107,7 @@ enum sev_cmd {
 	SEV_CMD_SNP_DOWNLOAD_FIRMWARE_EX = 0x0CA,
 	SEV_CMD_SNP_COMMIT		= 0x0CB,
 	SEV_CMD_SNP_VLEK_LOAD		= 0x0CD,
+	SEV_CMD_SNP_FEATURE_INFO	= 0x0CE,

 	SEV_CMD_MAX,
 };
@@ -747,10 +748,13 @@ struct sev_data_snp_guest_request {
 struct sev_data_snp_init_ex {
 	u32 init_rmp:1;
 	u32 list_paddr_en:1;
-	u32 rsvd:30;
+	u32 rapl_dis:1;
+	u32 ciphertext_hiding_en:1;
+	u32 rsvd:28;
 	u32 rsvd1;
 	u64 list_paddr;
-	u8 rsvd2[48];
+	u16 max_snp_asid;
+	u8 rsvd2[46];
 } __packed;

 /**
@@ -799,10 +803,13 @@ struct sev_data_snp_shutdown_ex {
 * @probe: True if this is being called as part of CCP module probe, which
 *	will defer SEV_INIT/SEV_INIT_EX firmware initialization until needed
 *	unless psp_init_on_probe module param is set
+ * @max_snp_asid: When non-zero, enable ciphertext hiding and specify the
+ *	maximum ASID that can be used for an SEV-SNP guest.
 */
 struct sev_platform_init_args {
 	int error;
 	bool probe;
+	unsigned int max_snp_asid;
 };

 /**
@@ -814,6 +821,36 @@ struct sev_data_snp_commit {
 	u32 len;
 } __packed;

+/**
+ * struct sev_data_snp_feature_info - SEV_SNP_FEATURE_INFO structure
+ *
+ * @length: len of the command buffer read by the PSP
+ * @ecx_in: subfunction index
+ * @feature_info_paddr : System Physical Address of the FEATURE_INFO structure
+ */
+struct sev_data_snp_feature_info {
+	u32 length;
+	u32 ecx_in;
+	u64 feature_info_paddr;
+} __packed;
+
+/**
+ * struct feature_info - FEATURE_INFO structure
+ *
+ * @eax: output of SNP_FEATURE_INFO command
+ * @ebx: output of SNP_FEATURE_INFO command
+ * @ecx: output of SNP_FEATURE_INFO command
+ * #edx: output of SNP_FEATURE_INFO command
+ */
+struct snp_feature_info {
+	u32 eax;
+	u32 ebx;
+	u32 ecx;
+	u32 edx;
+} __packed;
+
+#define SNP_CIPHER_TEXT_HIDING_SUPPORTED	BIT(3)
+
 #ifdef CONFIG_CRYPTO_DEV_SP_PSP

 /**
@@ -957,6 +994,7 @@ void *psp_copy_user_blob(u64 uaddr, u32 len);
 void *snp_alloc_firmware_page(gfp_t mask);
 void snp_free_firmware_page(void *addr);
 void sev_platform_shutdown(void);
+bool sev_is_snp_ciphertext_hiding_supported(void);

 #else	/* !CONFIG_CRYPTO_DEV_SP_PSP */

@@ -993,6 +1031,8 @@ static inline void snp_free_firmware_page(void *addr) { }

 static inline void sev_platform_shutdown(void) { }

+static inline bool sev_is_snp_ciphertext_hiding_supported(void) { return false; }
+
 #endif	/* CONFIG_CRYPTO_DEV_SP_PSP */

 #endif	/* __PSP_SEV_H__ */
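These psp-sev.h additions let a caller discover the ciphertext-hiding capability and pass a maximum SNP ASID through sev_platform_init_args. A sketch of the intended call pattern (the caller and the ASID value are hypothetical; the two entry points are from the hunks above):

        static int foo_snp_setup(void)
        {
                struct sev_platform_init_args args = { .probe = false };

                if (sev_is_snp_ciphertext_hiding_supported())
                        args.max_snp_asid = 64;  /* illustrative split of the ASID space */

                return sev_platform_init(&args);
        }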
@@ -713,6 +713,24 @@ do { \
 				(c) || rcu_read_lock_sched_held(), \
 				__rcu)

+/**
+ * rcu_dereference_all_check() - rcu_dereference_all with debug checking
+ * @p: The pointer to read, prior to dereferencing
+ * @c: The conditions under which the dereference will take place
+ *
+ * This is similar to rcu_dereference_check(), but allows protection
+ * by all forms of vanilla RCU readers, including preemption disabled,
+ * bh-disabled, and interrupt-disabled regions of code.  Note that "vanilla
+ * RCU" excludes SRCU and the various Tasks RCU flavors.  Please note
+ * that this macro should not be backported to any Linux-kernel version
+ * preceding v5.0 due to changes in synchronize_rcu() semantics prior
+ * to that version.
+ */
+#define rcu_dereference_all_check(p, c) \
+	__rcu_dereference_check((p), __UNIQUE_ID(rcu), \
+				(c) || rcu_read_lock_any_held(), \
+				__rcu)
+
 /*
  * The tracing infrastructure traces RCU (we want that), but unfortunately
  * some of the RCU checks causes tracing to lock up the system.
@@ -767,6 +785,14 @@ do { \
  */
 #define rcu_dereference_sched(p) rcu_dereference_sched_check(p, 0)

+/**
+ * rcu_dereference_all() - fetch RCU-all-protected pointer for dereferencing
+ * @p: The pointer to read, prior to dereferencing
+ *
+ * Makes rcu_dereference_check() do the dirty work.
+ */
+#define rcu_dereference_all(p) rcu_dereference_all_check(p, 0)
+
 /**
  * rcu_pointer_handoff() - Hand off a pointer from RCU to other mechanism
  * @p: The pointer to hand off
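rcu_dereference_all() differs from rcu_dereference() only in which readers lockdep accepts: any vanilla RCU reader, including preempt-, bh-, or irq-disabled regions, counts as protection. That is what silences the rhashtable false alarms mentioned in the pull text. A minimal sketch (gptr and struct foo are illustrative):

        struct foo { int val; };
        static struct foo __rcu *gptr;

        static int foo_read_bh(void)
        {
                struct foo *p;
                int ret = 0;

                local_bh_disable();             /* a bh-disabled vanilla reader */
                p = rcu_dereference_all(gptr);  /* no lockdep splat here */
                if (p)
                        ret = p->val;
                local_bh_enable();
                return ret;
        }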
@@ -122,7 +122,7 @@ static inline unsigned int rht_bucket_index(const struct bucket_table *tbl,
 	return hash & (tbl->size - 1);
 }

-static inline unsigned int rht_key_get_hash(struct rhashtable *ht,
+static __always_inline unsigned int rht_key_get_hash(struct rhashtable *ht,
 	const void *key, const struct rhashtable_params params,
 	unsigned int hash_rnd)
 {
@@ -152,7 +152,7 @@ static inline unsigned int rht_key_get_hash(struct rhashtable *ht,
 	return hash;
 }

-static inline unsigned int rht_key_hashfn(
+static __always_inline unsigned int rht_key_hashfn(
 	struct rhashtable *ht, const struct bucket_table *tbl,
 	const void *key, const struct rhashtable_params params)
 {
@@ -161,7 +161,7 @@ static inline unsigned int rht_key_hashfn(
 	return rht_bucket_index(tbl, hash);
 }

-static inline unsigned int rht_head_hashfn(
+static __always_inline unsigned int rht_head_hashfn(
 	struct rhashtable *ht, const struct bucket_table *tbl,
 	const struct rhash_head *he, const struct rhashtable_params params)
 {
@@ -272,13 +272,13 @@ struct rhash_lock_head __rcu **rht_bucket_nested_insert(
 	rcu_dereference_protected(p, lockdep_rht_mutex_is_held(ht))

 #define rht_dereference_rcu(p, ht) \
-	rcu_dereference_check(p, lockdep_rht_mutex_is_held(ht))
+	rcu_dereference_all_check(p, lockdep_rht_mutex_is_held(ht))

 #define rht_dereference_bucket(p, tbl, hash) \
 	rcu_dereference_protected(p, lockdep_rht_bucket_is_held(tbl, hash))

 #define rht_dereference_bucket_rcu(p, tbl, hash) \
-	rcu_dereference_check(p, lockdep_rht_bucket_is_held(tbl, hash))
+	rcu_dereference_all_check(p, lockdep_rht_bucket_is_held(tbl, hash))

 #define rht_entry(tpos, pos, member) \
 	({ tpos = container_of(pos, typeof(*tpos), member); 1; })
@@ -373,7 +373,7 @@ static inline struct rhash_head *__rht_ptr(
 static inline struct rhash_head *rht_ptr_rcu(
 	struct rhash_lock_head __rcu *const *bkt)
 {
-	return __rht_ptr(rcu_dereference(*bkt), bkt);
+	return __rht_ptr(rcu_dereference_all(*bkt), bkt);
 }

 static inline struct rhash_head *rht_ptr(
@@ -497,7 +497,7 @@ static inline void rht_assign_unlock(struct bucket_table *tbl,
 	for (({barrier(); }), \
 	     pos = head; \
 	     !rht_is_a_nulls(pos); \
-	     pos = rcu_dereference_raw(pos->next))
+	     pos = rcu_dereference_all(pos->next))

 /**
  * rht_for_each_rcu - iterate over rcu hash chain
@@ -513,7 +513,7 @@ static inline void rht_assign_unlock(struct bucket_table *tbl,
 	for (({barrier(); }), \
 	     pos = rht_ptr_rcu(rht_bucket(tbl, hash)); \
 	     !rht_is_a_nulls(pos); \
-	     pos = rcu_dereference_raw(pos->next))
+	     pos = rcu_dereference_all(pos->next))

 /**
  * rht_for_each_entry_rcu_from - iterated over rcu hash chain from given head
@@ -560,7 +560,7 @@ static inline void rht_assign_unlock(struct bucket_table *tbl,
  * list returned by rhltable_lookup.
  */
 #define rhl_for_each_rcu(pos, list) \
-	for (pos = list; pos; pos = rcu_dereference_raw(pos->next))
+	for (pos = list; pos; pos = rcu_dereference_all(pos->next))

 /**
  * rhl_for_each_entry_rcu - iterate over rcu hash table list of given type
@@ -574,7 +574,7 @@ static inline void rht_assign_unlock(struct bucket_table *tbl,
  */
 #define rhl_for_each_entry_rcu(tpos, pos, list, member) \
 	for (pos = list; pos && rht_entry(tpos, pos, member); \
-	     pos = rcu_dereference_raw(pos->next))
+	     pos = rcu_dereference_all(pos->next))

 static inline int rhashtable_compare(struct rhashtable_compare_arg *arg,
 				     const void *obj)
@@ -586,7 +586,7 @@ static inline int rhashtable_compare(struct rhashtable_compare_arg *arg,
 }

 /* Internal function, do not use. */
-static inline struct rhash_head *__rhashtable_lookup(
+static __always_inline struct rhash_head *__rhashtable_lookup(
 	struct rhashtable *ht, const void *key,
 	const struct rhashtable_params params)
 {
@@ -639,7 +639,7 @@ restart:
  *
  * Returns the first entry on which the compare function returned true.
  */
-static inline void *rhashtable_lookup(
+static __always_inline void *rhashtable_lookup(
 	struct rhashtable *ht, const void *key,
 	const struct rhashtable_params params)
 {
@@ -662,7 +662,7 @@ static inline void *rhashtable_lookup(
  *
 * Returns the first entry on which the compare function returned true.
 */
-static inline void *rhashtable_lookup_fast(
+static __always_inline void *rhashtable_lookup_fast(
 	struct rhashtable *ht, const void *key,
 	const struct rhashtable_params params)
 {
@@ -689,7 +689,7 @@ static inline void *rhashtable_lookup_fast(
 *
 * Returns the list of entries that match the given key.
 */
-static inline struct rhlist_head *rhltable_lookup(
+static __always_inline struct rhlist_head *rhltable_lookup(
 	struct rhltable *hlt, const void *key,
 	const struct rhashtable_params params)
 {
@@ -702,7 +702,7 @@ static inline struct rhlist_head *rhltable_lookup(
 * function returns the existing element already in hashes if there is a clash,
 * otherwise it returns an error via ERR_PTR().
 */
-static inline void *__rhashtable_insert_fast(
+static __always_inline void *__rhashtable_insert_fast(
 	struct rhashtable *ht, const void *key, struct rhash_head *obj,
 	const struct rhashtable_params params, bool rhlist)
 {
@@ -825,7 +825,7 @@ out_unlock:
 * Will trigger an automatic deferred table resizing if residency in the
 * table grows beyond 70%.
 */
-static inline int rhashtable_insert_fast(
+static __always_inline int rhashtable_insert_fast(
 	struct rhashtable *ht, struct rhash_head *obj,
 	const struct rhashtable_params params)
 {
@@ -854,7 +854,7 @@ static inline int rhashtable_insert_fast(
 * Will trigger an automatic deferred table resizing if residency in the
 * table grows beyond 70%.
 */
-static inline int rhltable_insert_key(
+static __always_inline int rhltable_insert_key(
 	struct rhltable *hlt, const void *key, struct rhlist_head *list,
 	const struct rhashtable_params params)
 {
@@ -877,7 +877,7 @@ static inline int rhltable_insert_key(
 * Will trigger an automatic deferred table resizing if residency in the
 * table grows beyond 70%.
 */
-static inline int rhltable_insert(
+static __always_inline int rhltable_insert(
 	struct rhltable *hlt, struct rhlist_head *list,
 	const struct rhashtable_params params)
 {
@@ -902,7 +902,7 @@ static inline int rhltable_insert(
 * Will trigger an automatic deferred table resizing if residency in the
 * table grows beyond 70%.
 */
-static inline int rhashtable_lookup_insert_fast(
+static __always_inline int rhashtable_lookup_insert_fast(
 	struct rhashtable *ht, struct rhash_head *obj,
 	const struct rhashtable_params params)
 {
@@ -929,7 +929,7 @@ static inline int rhashtable_lookup_insert_fast(
 * object if it exists, NULL if it did not and the insertion was successful,
 * and an ERR_PTR otherwise.
 */
-static inline void *rhashtable_lookup_get_insert_fast(
+static __always_inline void *rhashtable_lookup_get_insert_fast(
 	struct rhashtable *ht, struct rhash_head *obj,
 	const struct rhashtable_params params)
 {
@@ -956,7 +956,7 @@ static inline void *rhashtable_lookup_get_insert_fast(
 *
 * Returns zero on success.
 */
-static inline int rhashtable_lookup_insert_key(
+static __always_inline int rhashtable_lookup_insert_key(
 	struct rhashtable *ht, const void *key, struct rhash_head *obj,
 	const struct rhashtable_params params)
 {
@@ -982,7 +982,7 @@ static inline int rhashtable_lookup_insert_key(
 * object if it exists, NULL if it does not and the insertion was successful,
 * and an ERR_PTR otherwise.
 */
-static inline void *rhashtable_lookup_get_insert_key(
+static __always_inline void *rhashtable_lookup_get_insert_key(
 	struct rhashtable *ht, const void *key, struct rhash_head *obj,
 	const struct rhashtable_params params)
 {
@@ -992,7 +992,7 @@ static inline void *rhashtable_lookup_get_insert_key(
 }

 /* Internal function, please use rhashtable_remove_fast() instead */
-static inline int __rhashtable_remove_fast_one(
+static __always_inline int __rhashtable_remove_fast_one(
 	struct rhashtable *ht, struct bucket_table *tbl,
 	struct rhash_head *obj, const struct rhashtable_params params,
 	bool rhlist)
@@ -1074,7 +1074,7 @@ unlocked:
 }

 /* Internal function, please use rhashtable_remove_fast() instead */
-static inline int __rhashtable_remove_fast(
+static __always_inline int __rhashtable_remove_fast(
 	struct rhashtable *ht, struct rhash_head *obj,
 	const struct rhashtable_params params, bool rhlist)
 {
@@ -1115,7 +1115,7 @@ static inline int __rhashtable_remove_fast(
 *
 * Returns zero on success, -ENOENT if the entry could not be found.
 */
-static inline int rhashtable_remove_fast(
+static __always_inline int rhashtable_remove_fast(
 	struct rhashtable *ht, struct rhash_head *obj,
 	const struct rhashtable_params params)
 {
@@ -1137,7 +1137,7 @@ static inline int rhashtable_remove_fast(
 *
 * Returns zero on success, -ENOENT if the entry could not be found.
 */
-static inline int rhltable_remove(
+static __always_inline int rhltable_remove(
 	struct rhltable *hlt, struct rhlist_head *list,
 	const struct rhashtable_params params)
 {
@@ -1145,7 +1145,7 @@ static inline int rhltable_remove(
 }

 /* Internal function, please use rhashtable_replace_fast() instead */
-static inline int __rhashtable_replace_fast(
+static __always_inline int __rhashtable_replace_fast(
 	struct rhashtable *ht, struct bucket_table *tbl,
 	struct rhash_head *obj_old, struct rhash_head *obj_new,
 	const struct rhashtable_params params)
@@ -1208,7 +1208,7 @@ unlocked:
 * Returns zero on success, -ENOENT if the entry could not be found,
 * -EINVAL if hash is not the same for the old and new objects.
 */
-static inline int rhashtable_replace_fast(
+static __always_inline int rhashtable_replace_fast(
 	struct rhashtable *ht, struct rhash_head *obj_old,
 	struct rhash_head *obj_new,
 	const struct rhashtable_params params)
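All of these rhashtable fast-path helpers take struct rhashtable_params by value precisely so each call site can be specialized and constant-folded at compile time; __always_inline makes that guaranteed rather than left to inlining heuristics. Typical caller shape (object layout illustrative; API per rhashtable.h):

        struct obj {
                u32 key;
                struct rhash_head node;
        };

        static const struct rhashtable_params obj_params = {
                .key_len     = sizeof(u32),
                .key_offset  = offsetof(struct obj, key),
                .head_offset = offsetof(struct obj, node),
        };

        /* obj_params is folded into straight-line code at this call site. */
        static struct obj *obj_find(struct rhashtable *ht, u32 key)
        {
                return rhashtable_lookup_fast(ht, &key, obj_params);
        }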
@@ -185,6 +185,10 @@ struct sev_user_data_get_id2 {
 * @mask_chip_id: whether chip id is present in attestation reports or not
 * @mask_chip_key: whether attestation reports are signed or not
 * @vlek_en: VLEK (Version Loaded Endorsement Key) hashstick is loaded
+ * @feature_info: whether SNP_FEATURE_INFO command is available
+ * @rapl_dis: whether RAPL is disabled
+ * @ciphertext_hiding_cap: whether platform has ciphertext hiding capability
+ * @ciphertext_hiding_en: whether ciphertext hiding is enabled
 * @rsvd1: reserved
 * @guest_count: the number of guest currently managed by the firmware
 * @current_tcb_version: current TCB version
@@ -200,7 +204,11 @@ struct sev_user_data_snp_status {
 	__u32 mask_chip_id:1;		/* Out */
 	__u32 mask_chip_key:1;		/* Out */
 	__u32 vlek_en:1;		/* Out */
-	__u32 rsvd1:29;
+	__u32 feature_info:1;		/* Out */
+	__u32 rapl_dis:1;		/* Out */
+	__u32 ciphertext_hiding_cap:1;	/* Out */
+	__u32 ciphertext_hiding_en:1;	/* Out */
+	__u32 rsvd1:25;
 	__u32 guest_count;		/* Out */
 	__u64 current_tcb_version;	/* Out */
 	__u64 reported_tcb_version;	/* Out */
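Userspace can consume the four new status bits after issuing the usual SNP platform-status query. A sketch of the check (the ioctl plumbing that fills in `status` is elided and assumed; only the bitfield names come from the hunk above):

        struct sev_user_data_snp_status status = {};

        /* ... issue SNP_PLATFORM_STATUS so firmware fills in `status` ... */

        if (status.feature_info && status.ciphertext_hiding_cap &&
            !status.ciphertext_hiding_en)
                fprintf(stderr, "ciphertext hiding available but not enabled\n");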
@@ -31,6 +31,7 @@ struct hisi_qp_info {
 #define HISI_QM_API_VER_BASE "hisi_qm_v1"
 #define HISI_QM_API_VER2_BASE "hisi_qm_v2"
 #define HISI_QM_API_VER3_BASE "hisi_qm_v3"
+#define HISI_QM_API_VER5_BASE "hisi_qm_v5"

 /* UACCE_CMD_QM_SET_QP_CTX: Set qp algorithm type */
 #define UACCE_CMD_QM_SET_QP_CTX _IOWR('H', 10, struct hisi_qp_ctx)
@@ -291,8 +291,12 @@ static void padata_reorder(struct padata_priv *padata)
 		struct padata_serial_queue *squeue;
 		int cb_cpu;

-		cpu = cpumask_next_wrap(cpu, pd->cpumask.pcpu);
 		processed++;
+		/* When sequence wraps around, reset to the first CPU. */
+		if (unlikely(processed == 0))
+			cpu = cpumask_first(pd->cpumask.pcpu);
+		else
+			cpu = cpumask_next_wrap(cpu, pd->cpumask.pcpu);

 		cb_cpu = padata->cb_cpu;
 		squeue = per_cpu_ptr(pd->squeue, cb_cpu);
@@ -486,9 +490,9 @@ void __init padata_do_multithreaded(struct padata_mt_job *job)
 		do {
 			nid = next_node_in(old_node, node_states[N_CPU]);
 		} while (!atomic_try_cmpxchg(&last_used_nid, &old_node, nid));
-		queue_work_node(nid, system_unbound_wq, &pw->pw_work);
+		queue_work_node(nid, system_dfl_wq, &pw->pw_work);
 	} else {
-		queue_work(system_unbound_wq, &pw->pw_work);
+		queue_work(system_dfl_wq, &pw->pw_work);
 	}

 	/* Use the current thread, which saves starting a workqueue worker. */
@@ -963,8 +967,9 @@ struct padata_instance *padata_alloc(const char *name)

 	cpus_read_lock();

-	pinst->serial_wq = alloc_workqueue("%s_serial", WQ_MEM_RECLAIM |
-					   WQ_CPU_INTENSIVE, 1, name);
+	pinst->serial_wq = alloc_workqueue("%s_serial",
+					   WQ_MEM_RECLAIM | WQ_CPU_INTENSIVE | WQ_PERCPU,
+					   1, name);
 	if (!pinst->serial_wq)
 		goto err_put_cpus;
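The first padata hunk is the CPU-number wraparound fix called out in the pull text: submission maps job N to the (N mod weight)-th CPU of the pcpu mask, while the reorder side steps through the mask one CPU at a time. Because 2^32 is generally not a multiple of the mask weight, the two walks fall out of sync when the 32-bit counter wraps unless the reorder side jumps back to the first CPU. A plain-C model of the corrected step (illustrative; it mirrors the hunk's logic, not the kernel code verbatim):

        static unsigned int next_cpu_index(unsigned int *processed,
                                           unsigned int idx, unsigned int weight)
        {
                (*processed)++;
                if (*processed == 0)            /* counter wrapped past UINT_MAX */
                        return 0;               /* cpumask_first() in the fix */
                return (idx + 1) % weight;      /* cpumask_next_wrap() otherwise */
        }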
@@ -26,7 +26,7 @@
 #define HAVE_OP(x)	1
 #endif

-#define NEED_OP(x)	if (!HAVE_OP(x)) goto output_overrun
+#define NEED_OP(x)	if (unlikely(!HAVE_OP(x))) goto output_overrun

 static noinline int
 LZO_SAFE(lzo1x_1_do_compress)(const unsigned char *in, size_t in_len,

@@ -22,9 +22,9 @@

 #define HAVE_IP(x)	((size_t)(ip_end - ip) >= (size_t)(x))
 #define HAVE_OP(x)	((size_t)(op_end - op) >= (size_t)(x))
-#define NEED_IP(x)	if (!HAVE_IP(x)) goto input_overrun
-#define NEED_OP(x)	if (!HAVE_OP(x)) goto output_overrun
-#define TEST_LB(m_pos)	if ((m_pos) < out) goto lookbehind_overrun
+#define NEED_IP(x)	if (unlikely(!HAVE_IP(x))) goto input_overrun
+#define NEED_OP(x)	if (unlikely(!HAVE_OP(x))) goto output_overrun
+#define TEST_LB(m_pos)	if (unlikely((m_pos) < out)) goto lookbehind_overrun

 /* This MAX_255_COUNT is the maximum number of times we can add 255 to a base
 * count without overflowing an integer. The multiply will overflow when
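unlikely() is the kernel's standard branch-prediction hint built on __builtin_expect; annotating these overrun checks tells the compiler to keep the no-overrun path as the straight-line fast path in the LZO loops. Roughly what the annotated macro compiles down to (a sketch; the kernel's definition lives in compiler.h):

        #define unlikely(x)	__builtin_expect(!!(x), 0)

        /* NEED_OP(x) therefore becomes: */
        if (__builtin_expect(!!(!HAVE_OP(x)), 0))
                goto output_overrun;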