Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 updates from Catalin Marinas:
"A quick summary: perf support for Branch Record Buffer Extensions
(BRBE), typical PMU hardware updates, small additions to MTE for
store-only tag checking and exposing non-address bits to signal
handlers, HAVE_LIVEPATCH enabled on arm64, VMAP_STACK forced on.
There is also a TLBI optimisation on hardware that does not require
break-before-make when changing the user PTEs between contiguous and
non-contiguous.
More details:
Perf and PMU updates:
  - Add support for new (v3) HiSilicon SLLC and DDRC PMUs
- Add support for Arm-NI PMU integrations that share interrupts
between clock domains within a given instance
- Allow SPE to be configured with a lower sample period than the
minimum recommendation advertised by PMSIDR_EL1.Interval
  - Add support for Arm's "Branch Record Buffer Extension" (BRBE)
- Adjust the perf watchdog period according to cpu frequency changes
- Minor driver fixes and cleanups
Hardware features:
- Support for MTE store-only checking (FEAT_MTE_STORE_ONLY)
- Support for reporting the non-address bits during a synchronous MTE
tag check fault (FEAT_MTE_TAGGED_FAR)
- Optimise the TLBI when folding/unfolding contiguous PTEs on
hardware with FEAT_BBM (break-before-make) level 2 and no TLB
conflict aborts
Software features:
- Enable HAVE_LIVEPATCH after implementing arch_stack_walk_reliable()
and using the text-poke API for late module relocations
- Force VMAP_STACK always on and change arm64_efi_rt_init() to use
arch_alloc_vmap_stack() in order to avoid KASAN false positives
ACPI:
- Improve SPCR handling and messaging on systems lacking an SPCR
table
Debug:
- Simplify the debug exception entry path
- Drop redundant DBG_MDSCR_* macros
Kselftests:
- Cleanups and improvements for SME, SVE and FPSIMD tests
Miscellaneous:
- Optimise loop to reduce redundant operations in contpte_ptep_get()
- Remove ISB when resetting POR_EL0 during signal handling
- Mark the kernel as tainted on SEA and SError panic
- Remove redundant gcs_free() call"
* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (93 commits)
arm64/gcs: task_gcs_el0_enable() should use passed task
arm64: Kconfig: Keep selects somewhat alphabetically ordered
arm64: signal: Remove ISB when resetting POR_EL0
kselftest/arm64: Handle attempts to disable SM on SME only systems
kselftest/arm64: Fix SVE write data generation for SME only systems
kselftest/arm64: Test SME on SME only systems in fp-ptrace
kselftest/arm64: Test FPSIMD format data writes via NT_ARM_SVE in fp-ptrace
kselftest/arm64: Allow sve-ptrace to run on SME only systems
arm64/mm: Drop redundant addr increment in set_huge_pte_at()
kselftest/arm64: Provide local defines for AT_HWCAP3
arm64: Mark kernel as tainted on SEA and SError panic
arm64/gcs: Don't call gcs_free() when releasing task_struct
drivers/perf: hisi: Support PMUs with no interrupt
drivers/perf: hisi: Relax the event number check of v2 PMUs
drivers/perf: hisi: Add support for HiSilicon SLLC v3 PMU driver
drivers/perf: hisi: Use ACPI driver_data to retrieve SLLC PMU information
drivers/perf: hisi: Add support for HiSilicon DDRC v3 PMU driver
drivers/perf: hisi: Simplify the probe process for each DDRC version
perf/arm-ni: Support sharing IRQs within an NI instance
perf/arm-ni: Consolidate CPU affinity handling
...
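
For readers coming to BRBE cold: the branch records are exposed through perf's
long-standing branch-stack sampling ABI rather than a new interface, so
`perf record -j any,u` works unchanged on BRBE hardware. Below is a minimal,
illustrative consumer using only generic perf_event_open() fields; it is a
sketch for orientation, not code from this series:

    #include <linux/perf_event.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* Ask for branch records with each sample; on BRBE-capable hardware
     * running this kernel they are filled from the branch record buffer. */
    int main(void)
    {
            struct perf_event_attr attr;

            memset(&attr, 0, sizeof(attr));
            attr.size = sizeof(attr);
            attr.type = PERF_TYPE_HARDWARE;
            attr.config = PERF_COUNT_HW_CPU_CYCLES;
            attr.sample_period = 100000;
            attr.sample_type = PERF_SAMPLE_IP | PERF_SAMPLE_BRANCH_STACK;
            attr.branch_sample_type = PERF_SAMPLE_BRANCH_ANY |
                                      PERF_SAMPLE_BRANCH_USER;
            attr.exclude_kernel = 1;

            int fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
            if (fd < 0) {
                    perror("perf_event_open");
                    return 1;
            }
            /* ... mmap() the ring buffer and parse PERF_RECORD_SAMPLE
             * entries, each carrying struct perf_branch_entry records ... */
            close(fd);
            return 0;
    }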
@@ -388,6 +388,27 @@ Before jumping into the kernel, the following conditions must be met:
 
   - SMCR_EL2.EZT0 (bit 30) must be initialised to 0b1.
 
+  For CPUs with the Branch Record Buffer Extension (FEAT_BRBE):
+
+  - If EL3 is present:
+
+    - MDCR_EL3.SBRBE (bits 33:32) must be initialised to 0b01 or 0b11.
+
+  - If the kernel is entered at EL1 and EL2 is present:
+
+    - BRBCR_EL2.CC (bit 3) must be initialised to 0b1.
+    - BRBCR_EL2.MPRED (bit 4) must be initialised to 0b1.
+
+    - HDFGRTR_EL2.nBRBDATA (bit 61) must be initialised to 0b1.
+    - HDFGRTR_EL2.nBRBCTL (bit 60) must be initialised to 0b1.
+    - HDFGRTR_EL2.nBRBIDR (bit 59) must be initialised to 0b1.
+
+    - HDFGWTR_EL2.nBRBDATA (bit 61) must be initialised to 0b1.
+    - HDFGWTR_EL2.nBRBCTL (bit 60) must be initialised to 0b1.
+
+    - HFGITR_EL2.nBRBIALL (bit 56) must be initialised to 0b1.
+    - HFGITR_EL2.nBRBINJ (bit 55) must be initialised to 0b1.
+
   For CPUs with the Performance Monitors Extension (FEAT_PMUv3p9):
 
   - If EL3 is present:

@@ -435,6 +435,12 @@ HWCAP2_SME_SF8DP4
 HWCAP2_POE
     Functionality implied by ID_AA64MMFR3_EL1.S1POE == 0b0001.
 
+HWCAP3_MTE_FAR
+    Functionality implied by ID_AA64PFR2_EL1.MTEFAR == 0b0001.
+
+HWCAP3_MTE_STORE_ONLY
+    Functionality implied by ID_AA64PFR2_EL1.MTESTOREONLY == 0b0001.
+
 4. Unused AT_HWCAP bits
 -----------------------
 
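The documented bits can be probed from user space with getauxval(). A minimal
sketch follows; the HWCAP3_* values mirror the uapi hunk later in this diff,
and AT_HWCAP3 is defined locally as a fallback for older libc headers (the
kselftest commits above take the same approach):

    #include <stdio.h>
    #include <sys/auxv.h>

    /* Fallbacks for toolchains whose headers predate these values. */
    #ifndef AT_HWCAP3
    #define AT_HWCAP3               29
    #endif
    #ifndef HWCAP3_MTE_FAR
    #define HWCAP3_MTE_FAR          (1UL << 0)
    #endif
    #ifndef HWCAP3_MTE_STORE_ONLY
    #define HWCAP3_MTE_STORE_ONLY   (1UL << 1)
    #endif

    int main(void)
    {
            /* getauxval() returns 0 for unknown types, so this degrades
             * gracefully on kernels without AT_HWCAP3. */
            unsigned long hwcap3 = getauxval(AT_HWCAP3);

            printf("mtefar:       %s\n",
                   (hwcap3 & HWCAP3_MTE_FAR) ? "present" : "absent");
            printf("mtestoreonly: %s\n",
                   (hwcap3 & HWCAP3_MTE_STORE_ONLY) ? "present" : "absent");
            return 0;
    }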
@@ -60,11 +60,12 @@ that signal handlers in applications making use of tags cannot rely
 on the tag information for user virtual addresses being maintained
 in these fields unless the flag was set.
 
-Due to architecture limitations, bits 63:60 of the fault address
-are not preserved in response to synchronous tag check faults
-(SEGV_MTESERR) even if SA_EXPOSE_TAGBITS was set. Applications should
-treat the values of these bits as undefined in order to accommodate
-future architecture revisions which may preserve the bits.
+If FEAT_MTE_TAGGED_FAR (Armv8.9) is supported, bits 63:60 of the fault address
+are preserved in response to synchronous tag check faults (SEGV_MTESERR)
+otherwise not preserved even if SA_EXPOSE_TAGBITS was set.
+Applications should interpret the values of these bits based on
+the support for the HWCAP3_MTE_FAR. If the support is not present,
+the values of these bits should be considered as undefined otherwise valid.
 
 For signals raised in response to watchpoint debug exceptions, the
 tag information will be preserved regardless of the SA_EXPOSE_TAGBITS
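A hedged sketch of how a signal handler might consume this: bits 59:56 (the
allocation tag) are meaningful in any case, while bits 63:60 only become valid
when HWCAP3_MTE_FAR is advertised. SEGV_MTESERR and SA_EXPOSE_TAGBITS are the
existing uapi values; the local defines are fallbacks for older headers:

    #include <signal.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/auxv.h>

    #ifndef SA_EXPOSE_TAGBITS
    #define SA_EXPOSE_TAGBITS       0x00000800
    #endif
    #ifndef SEGV_MTESERR
    #define SEGV_MTESERR            9
    #endif
    #ifndef AT_HWCAP3
    #define AT_HWCAP3               29
    #endif
    #ifndef HWCAP3_MTE_FAR
    #define HWCAP3_MTE_FAR          (1UL << 0)
    #endif

    static void handler(int sig, siginfo_t *si, void *ucontext)
    {
            uintptr_t far = (uintptr_t)si->si_addr;

            if (si->si_code == SEGV_MTESERR) {
                    /* Bits 59:56 carry the allocation tag; bits 63:60 are
                     * only valid when HWCAP3_MTE_FAR is present. */
                    if (getauxval(AT_HWCAP3) & HWCAP3_MTE_FAR)
                            fprintf(stderr, "tag fault, top byte 0x%02x\n",
                                    (unsigned int)(far >> 56));
                    else
                            fprintf(stderr, "tag fault, tag 0x%x\n",
                                    (unsigned int)((far >> 56) & 0xf));
            }
            _exit(1);
    }

    int main(void)
    {
            struct sigaction sa = {
                    .sa_sigaction   = handler,
                    .sa_flags       = SA_SIGINFO | SA_EXPOSE_TAGBITS,
            };

            sigaction(SIGSEGV, &sa, NULL);
            /* ... enable MTE via prctl(PR_SET_TAGGED_ADDR_CTRL, ...) and
             * dereference a pointer whose tag mismatches its allocation ... */
            return 0;
    }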
@@ -232,6 +232,7 @@ config ARM64
 	select HAVE_HW_BREAKPOINT if PERF_EVENTS
 	select HAVE_IOREMAP_PROT
 	select HAVE_IRQ_TIME_ACCOUNTING
+	select HAVE_LIVEPATCH
 	select HAVE_MOD_ARCH_SPECIFIC
 	select HAVE_NMI
 	select HAVE_PERF_EVENTS
@@ -240,6 +241,7 @@ config ARM64
 	select HAVE_PERF_USER_STACK_DUMP
 	select HAVE_PREEMPT_DYNAMIC_KEY
 	select HAVE_REGS_AND_STACK_ACCESS_API
+	select HAVE_RELIABLE_STACKTRACE
 	select HAVE_POSIX_CPU_TIMERS_TASK_WORK
 	select HAVE_FUNCTION_ARG_ACCESS_API
 	select MMU_GATHER_RCU_TABLE_FREE
@@ -278,6 +280,7 @@ config ARM64
 	select HAVE_SOFTIRQ_ON_OWN_STACK
 	select USER_STACKTRACE_SUPPORT
 	select VDSO_GETRANDOM
+	select VMAP_STACK
 	help
 	  ARM 64-bit (AArch64) Linux support.
 
@@ -2498,3 +2501,4 @@ source "drivers/acpi/Kconfig"
 
 source "arch/arm64/kvm/Kconfig"
 
+source "kernel/livepatch/Kconfig"

@@ -58,7 +58,7 @@
 	.macro	disable_step_tsk, flgs, tmp
 	tbz	\flgs, #TIF_SINGLESTEP, 9990f
 	mrs	\tmp, mdscr_el1
-	bic	\tmp, \tmp, #DBG_MDSCR_SS
+	bic	\tmp, \tmp, #MDSCR_EL1_SS
 	msr	mdscr_el1, \tmp
 	isb	// Take effect before a subsequent clear of DAIF.D
 9990:
@@ -68,7 +68,7 @@
 	.macro	enable_step_tsk, flgs, tmp
 	tbz	\flgs, #TIF_SINGLESTEP, 9990f
 	mrs	\tmp, mdscr_el1
-	orr	\tmp, \tmp, #DBG_MDSCR_SS
+	orr	\tmp, \tmp, #MDSCR_EL1_SS
 	msr	mdscr_el1, \tmp
 9990:
 	.endm

@@ -275,6 +275,14 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
 #define ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU	((u16)BIT(5))
 /* Panic when a conflict is detected */
 #define ARM64_CPUCAP_PANIC_ON_CONFLICT		((u16)BIT(6))
+/*
+ * When paired with SCOPE_LOCAL_CPU, all early CPUs must satisfy the
+ * condition. This is different from SCOPE_SYSTEM where the check is performed
+ * only once at the end of the SMP boot on the sanitised ID registers.
+ * SCOPE_SYSTEM is not suitable for cases where the capability depends on
+ * properties local to a CPU like MIDR_EL1.
+ */
+#define ARM64_CPUCAP_MATCH_ALL_EARLY_CPUS	((u16)BIT(7))
 
 /*
  * CPU errata workarounds that need to be enabled at boot time if one or
@@ -304,6 +312,16 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
 	(ARM64_CPUCAP_SCOPE_LOCAL_CPU		|	\
 	 ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU	|	\
 	 ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU)
+/*
+ * CPU feature detected at boot time and present on all early CPUs. Late CPUs
+ * are permitted to have the feature even if it hasn't been enabled, although
+ * the feature will not be used by Linux in this case. If all early CPUs have
+ * the feature, then every late CPU must have it.
+ */
+#define ARM64_CPUCAP_EARLY_LOCAL_CPU_FEATURE		\
+	(ARM64_CPUCAP_SCOPE_LOCAL_CPU		|	\
+	 ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU	|	\
+	 ARM64_CPUCAP_MATCH_ALL_EARLY_CPUS)
 
 /*
  * CPU feature detected at boot time, on one or more CPUs. A late CPU
@@ -391,6 +409,11 @@ static inline int cpucap_default_scope(const struct arm64_cpu_capabilities *cap)
 	return cap->type & ARM64_CPUCAP_SCOPE_MASK;
 }
 
+static inline bool cpucap_match_all_early_cpus(const struct arm64_cpu_capabilities *cap)
+{
+	return cap->type & ARM64_CPUCAP_MATCH_ALL_EARLY_CPUS;
+}
+
 /*
  * Generic helper for handling capabilities with multiple (match,enable) pairs
  * of call backs, sharing the same capability bit.
@@ -848,6 +871,11 @@ static inline bool system_supports_pmuv3(void)
 	return cpus_have_final_cap(ARM64_HAS_PMUV3);
 }
 
+static inline bool system_supports_bbml2_noabort(void)
+{
+	return alternative_has_cap_unlikely(ARM64_HAS_BBML2_NOABORT);
+}
+
 int do_emulate_mrs(struct pt_regs *regs, u32 sys_reg, u32 rt);
 bool try_emulate_mrs(struct pt_regs *regs, u32 isn);

@@ -13,14 +13,8 @@
 #include <asm/ptrace.h>
 
 /* Low-level stepping controls. */
-#define DBG_MDSCR_SS		(1 << 0)
 #define DBG_SPSR_SS		(1 << 21)
 
-/* MDSCR_EL1 enabling bits */
-#define DBG_MDSCR_KDE		(1 << 13)
-#define DBG_MDSCR_MDE		(1 << 15)
-#define DBG_MDSCR_MASK		~(DBG_MDSCR_KDE | DBG_MDSCR_MDE)
-
 #define	DBG_ESR_EVT(x)		(((x) >> 27) & 0x7)
 
 /* AArch64 */
@@ -62,30 +56,6 @@ struct task_struct;
 #define DBG_HOOK_HANDLED	0
 #define DBG_HOOK_ERROR		1
 
-struct step_hook {
-	struct list_head node;
-	int (*fn)(struct pt_regs *regs, unsigned long esr);
-};
-
-void register_user_step_hook(struct step_hook *hook);
-void unregister_user_step_hook(struct step_hook *hook);
-
-void register_kernel_step_hook(struct step_hook *hook);
-void unregister_kernel_step_hook(struct step_hook *hook);
-
-struct break_hook {
-	struct list_head node;
-	int (*fn)(struct pt_regs *regs, unsigned long esr);
-	u16 imm;
-	u16 mask; /* These bits are ignored when comparing with imm */
-};
-
-void register_user_break_hook(struct break_hook *hook);
-void unregister_user_break_hook(struct break_hook *hook);
-
-void register_kernel_break_hook(struct break_hook *hook);
-void unregister_kernel_break_hook(struct break_hook *hook);
-
 u8 debug_monitors_arch(void);
 
 enum dbg_active_el {
@@ -108,17 +78,15 @@ void kernel_rewind_single_step(struct pt_regs *regs);
 void kernel_fastforward_single_step(struct pt_regs *regs);
 
 #ifdef CONFIG_HAVE_HW_BREAKPOINT
-int reinstall_suspended_bps(struct pt_regs *regs);
+bool try_step_suspended_breakpoints(struct pt_regs *regs);
 #else
-static inline int reinstall_suspended_bps(struct pt_regs *regs)
+static inline bool try_step_suspended_breakpoints(struct pt_regs *regs)
 {
-	return -ENODEV;
+	return false;
 }
 #endif
 
-int aarch32_break_handler(struct pt_regs *regs);
-
-void debug_traps_init(void);
+bool try_handle_aarch32_break(struct pt_regs *regs);
 
 #endif	/* __ASSEMBLY */
 #endif	/* __ASM_DEBUG_MONITORS_H */

@@ -189,6 +189,28 @@
 .Lskip_set_cptr_\@:
 .endm
 
+/*
+ * Configure BRBE to permit recording cycle counts and branch mispredicts.
+ *
+ * At any EL, to record cycle counts BRBE requires that both BRBCR_EL2.CC=1 and
+ * BRBCR_EL1.CC=1.
+ *
+ * At any EL, to record branch mispredicts BRBE requires that both
+ * BRBCR_EL2.MPRED=1 and BRBCR_EL1.MPRED=1.
+ *
+ * Set {CC,MPRED} in BRBCR_EL2 in case nVHE mode is used and we are
+ * executing in EL1.
+ */
+.macro __init_el2_brbe
+	mrs	x1, id_aa64dfr0_el1
+	ubfx	x1, x1, #ID_AA64DFR0_EL1_BRBE_SHIFT, #4
+	cbz	x1, .Lskip_brbe_\@
+
+	mov_q	x0, BRBCR_ELx_CC | BRBCR_ELx_MPRED
+	msr_s	SYS_BRBCR_EL2, x0
+.Lskip_brbe_\@:
+.endm
+
 /* Disable any fine grained traps */
 .macro __init_el2_fgt
 	mrs	x1, id_aa64mmfr0_el1
@@ -196,20 +218,62 @@
 	cbz	x1, .Lskip_fgt_\@
 
 	mov	x0, xzr
+	mov	x2, xzr
 	mrs	x1, id_aa64dfr0_el1
 	ubfx	x1, x1, #ID_AA64DFR0_EL1_PMSVer_SHIFT, #4
 	cmp	x1, #3
 	b.lt	.Lskip_spe_fgt_\@
 	/* Disable PMSNEVFR_EL1 read and write traps */
-	orr	x0, x0, #(1 << 62)
+	orr	x0, x0, #HDFGRTR_EL2_nPMSNEVFR_EL1_MASK
+	orr	x2, x2, #HDFGWTR_EL2_nPMSNEVFR_EL1_MASK
 
 .Lskip_spe_fgt_\@:
+	mrs	x1, id_aa64dfr0_el1
+	ubfx	x1, x1, #ID_AA64DFR0_EL1_BRBE_SHIFT, #4
+	cbz	x1, .Lskip_brbe_fgt_\@
+
+	/*
+	 * Disable read traps for the following registers
+	 *
+	 * [BRBSRC|BRBTGT|RBINF]_EL1
+	 * [BRBSRCINJ|BRBTGTINJ|BRBINFINJ|BRBTS]_EL1
+	 */
+	orr	x0, x0, #HDFGRTR_EL2_nBRBDATA_MASK
+
+	/*
+	 * Disable write traps for the following registers
+	 *
+	 * [BRBSRCINJ|BRBTGTINJ|BRBINFINJ|BRBTS]_EL1
+	 */
+	orr	x2, x2, #HDFGWTR_EL2_nBRBDATA_MASK
+
+	/* Disable read and write traps for [BRBCR|BRBFCR]_EL1 */
+	orr	x0, x0, #HDFGRTR_EL2_nBRBCTL_MASK
+	orr	x2, x2, #HDFGWTR_EL2_nBRBCTL_MASK
+
+	/* Disable read traps for BRBIDR_EL1 */
+	orr	x0, x0, #HDFGRTR_EL2_nBRBIDR_MASK
+
+.Lskip_brbe_fgt_\@:
+
 .Lset_debug_fgt_\@:
 	msr_s	SYS_HDFGRTR_EL2, x0
-	msr_s	SYS_HDFGWTR_EL2, x0
+	msr_s	SYS_HDFGWTR_EL2, x2
 
 	mov	x0, xzr
+	mov	x2, xzr
+
+	mrs	x1, id_aa64dfr0_el1
+	ubfx	x1, x1, #ID_AA64DFR0_EL1_BRBE_SHIFT, #4
+	cbz	x1, .Lskip_brbe_insn_fgt_\@
+
+	/* Disable traps for BRBIALL instruction */
+	orr	x2, x2, #HFGITR_EL2_nBRBIALL_MASK
+
+	/* Disable traps for BRBINJ instruction */
+	orr	x2, x2, #HFGITR_EL2_nBRBINJ_MASK
+
+.Lskip_brbe_insn_fgt_\@:
 	mrs	x1, id_aa64pfr1_el1
 	ubfx	x1, x1, #ID_AA64PFR1_EL1_SME_SHIFT, #4
 	cbz	x1, .Lskip_sme_fgt_\@
@@ -250,7 +314,7 @@
 .Lset_fgt_\@:
 	msr_s	SYS_HFGRTR_EL2, x0
 	msr_s	SYS_HFGWTR_EL2, x0
-	msr_s	SYS_HFGITR_EL2, xzr
+	msr_s	SYS_HFGITR_EL2, x2
 
 	mrs	x1, id_aa64pfr0_el1	// AMU traps UNDEF without AMU
 	ubfx	x1, x1, #ID_AA64PFR0_EL1_AMU_SHIFT, #4
@@ -300,6 +364,7 @@
 	__init_el2_hcrx
 	__init_el2_timers
 	__init_el2_debug
+	__init_el2_brbe
 	__init_el2_lor
 	__init_el2_stage2
 	__init_el2_gicv3

@@ -59,8 +59,20 @@ void do_el0_bti(struct pt_regs *regs);
 void do_el1_bti(struct pt_regs *regs, unsigned long esr);
 void do_el0_gcs(struct pt_regs *regs, unsigned long esr);
 void do_el1_gcs(struct pt_regs *regs, unsigned long esr);
-void do_debug_exception(unsigned long addr_if_watchpoint, unsigned long esr,
-			struct pt_regs *regs);
+#ifdef CONFIG_HAVE_HW_BREAKPOINT
+void do_breakpoint(unsigned long esr, struct pt_regs *regs);
+void do_watchpoint(unsigned long addr, unsigned long esr,
+		   struct pt_regs *regs);
+#else
+static inline void do_breakpoint(unsigned long esr, struct pt_regs *regs) {}
+static inline void do_watchpoint(unsigned long addr, unsigned long esr,
+				 struct pt_regs *regs) {}
+#endif /* CONFIG_HAVE_HW_BREAKPOINT */
+void do_el0_softstep(unsigned long esr, struct pt_regs *regs);
+void do_el1_softstep(unsigned long esr, struct pt_regs *regs);
+void do_el0_brk64(unsigned long esr, struct pt_regs *regs);
+void do_el1_brk64(unsigned long esr, struct pt_regs *regs);
+void do_bkpt32(unsigned long esr, struct pt_regs *regs);
 void do_fpsimd_acc(unsigned long esr, struct pt_regs *regs);
 void do_sve_acc(unsigned long esr, struct pt_regs *regs);
 void do_sme_acc(unsigned long esr, struct pt_regs *regs);

@@ -58,7 +58,7 @@ static inline u64 gcsss2(void)
 
 static inline bool task_gcs_el0_enabled(struct task_struct *task)
 {
-	return current->thread.gcs_el0_mode & PR_SHADOW_STACK_ENABLE;
+	return task->thread.gcs_el0_mode & PR_SHADOW_STACK_ENABLE;
 }
 
 void gcs_set_el0_mode(struct task_struct *task);

@@ -176,6 +176,8 @@
 #define KERNEL_HWCAP_POE		__khwcap2_feature(POE)
 
 #define __khwcap3_feature(x)		(const_ilog2(HWCAP3_ ## x) + 128)
+#define KERNEL_HWCAP_MTE_FAR		__khwcap3_feature(MTE_FAR)
+#define KERNEL_HWCAP_MTE_STORE_ONLY	__khwcap3_feature(MTE_STORE_ONLY)
 
 /*
  * This yields a mask that user programs can use to figure out what

@@ -24,6 +24,18 @@ static inline void arch_kgdb_breakpoint(void)
 extern void kgdb_handle_bus_error(void);
 extern int kgdb_fault_expected;
 
+int kgdb_brk_handler(struct pt_regs *regs, unsigned long esr);
+int kgdb_compiled_brk_handler(struct pt_regs *regs, unsigned long esr);
+#ifdef CONFIG_KGDB
+int kgdb_single_step_handler(struct pt_regs *regs, unsigned long esr);
+#else
+static inline int kgdb_single_step_handler(struct pt_regs *regs,
+					   unsigned long esr)
+{
+	return DBG_HOOK_ERROR;
+}
+#endif
+
 #endif /* !__ASSEMBLY__ */
 
 /*

@@ -41,4 +41,12 @@ void __kretprobe_trampoline(void);
 void __kprobes *trampoline_probe_handler(struct pt_regs *regs);
 
 #endif /* CONFIG_KPROBES */
+
+int __kprobes kprobe_brk_handler(struct pt_regs *regs,
+				 unsigned long esr);
+int __kprobes kprobe_ss_brk_handler(struct pt_regs *regs,
+				    unsigned long esr);
+int __kprobes kretprobe_brk_handler(struct pt_regs *regs,
+				    unsigned long esr);
+
 #endif /* _ARM_KPROBES_H */

@@ -704,6 +704,7 @@ struct kvm_host_data {
 #define KVM_HOST_DATA_FLAG_EL1_TRACING_CONFIGURED	5
 #define KVM_HOST_DATA_FLAG_VCPU_IN_HYP_CONTEXT		6
 #define KVM_HOST_DATA_FLAG_L1_VNCR_MAPPED		7
+#define KVM_HOST_DATA_FLAG_HAS_BRBE			8
 	unsigned long flags;
 
 	struct kvm_cpu_context host_ctxt;
@@ -737,6 +738,7 @@ struct kvm_host_data {
 		u64 trfcr_el1;
 		/* Values of trap registers for the host before guest entry. */
 		u64 mdcr_el2;
+		u64 brbcr_el1;
 	} host_debug_state;
 
 	/* Guest trace filter value */

@@ -118,7 +118,7 @@
  * VMAP'd stacks are allocated at page granularity, so we must ensure that such
  * stacks are a multiple of page size.
  */
-#if defined(CONFIG_VMAP_STACK) && (MIN_THREAD_SHIFT < PAGE_SHIFT)
+#if (MIN_THREAD_SHIFT < PAGE_SHIFT)
 #define THREAD_SHIFT		PAGE_SHIFT
 #else
 #define THREAD_SHIFT		MIN_THREAD_SHIFT
@@ -135,11 +135,7 @@
  * checking sp & (1 << THREAD_SHIFT), which we can do cheaply in the entry
  * assembly.
  */
-#ifdef CONFIG_VMAP_STACK
 #define THREAD_ALIGN		(2 * THREAD_SIZE)
-#else
-#define THREAD_ALIGN		THREAD_SIZE
-#endif
 
 #define IRQ_STACK_SIZE		THREAD_SIZE

@@ -23,6 +23,8 @@
 #define MTE_CTRL_TCF_ASYNC		(1UL << 17)
 #define MTE_CTRL_TCF_ASYMM		(1UL << 18)
 
+#define MTE_CTRL_STORE_ONLY		(1UL << 19)
+
 #ifndef __ASSEMBLY__
 
 #include <linux/build_bug.h>

@@ -59,7 +59,6 @@ static inline bool on_task_stack(const struct task_struct *tsk,
 
 #define on_thread_stack()	(on_task_stack(current, current_stack_pointer, 1))
 
-#ifdef CONFIG_VMAP_STACK
 DECLARE_PER_CPU(unsigned long [OVERFLOW_STACK_SIZE/sizeof(long)], overflow_stack);
 
 static inline struct stack_info stackinfo_get_overflow(void)
@@ -72,11 +71,8 @@ static inline struct stack_info stackinfo_get_overflow(void)
 		.high = high,
 	};
 }
-#else
-#define stackinfo_get_overflow()	stackinfo_get_unknown()
-#endif
 
-#if defined(CONFIG_ARM_SDE_INTERFACE) && defined(CONFIG_VMAP_STACK)
+#if defined(CONFIG_ARM_SDE_INTERFACE)
 DECLARE_PER_CPU(unsigned long *, sdei_stack_normal_ptr);
 DECLARE_PER_CPU(unsigned long *, sdei_stack_critical_ptr);

@@ -202,16 +202,8 @@
 #define SYS_DBGVCR32_EL2		sys_reg(2, 4, 0, 7, 0)
 
 #define SYS_BRBINF_EL1(n)		sys_reg(2, 1, 8, (n & 15), (((n & 16) >> 2) | 0))
-#define SYS_BRBINFINJ_EL1		sys_reg(2, 1, 9, 1, 0)
 #define SYS_BRBSRC_EL1(n)		sys_reg(2, 1, 8, (n & 15), (((n & 16) >> 2) | 1))
-#define SYS_BRBSRCINJ_EL1		sys_reg(2, 1, 9, 1, 1)
 #define SYS_BRBTGT_EL1(n)		sys_reg(2, 1, 8, (n & 15), (((n & 16) >> 2) | 2))
-#define SYS_BRBTGTINJ_EL1		sys_reg(2, 1, 9, 1, 2)
-#define SYS_BRBTS_EL1			sys_reg(2, 1, 9, 0, 2)
-
-#define SYS_BRBCR_EL1			sys_reg(2, 1, 9, 0, 0)
-#define SYS_BRBFCR_EL1			sys_reg(2, 1, 9, 0, 1)
-#define SYS_BRBIDR0_EL1			sys_reg(2, 1, 9, 2, 0)
 
 #define SYS_TRCITECR_EL1		sys_reg(3, 0, 1, 2, 3)
 #define SYS_TRCACATR(m)			sys_reg(2, 1, 2, ((m & 7) << 1), (2 | (m >> 3)))
@@ -277,8 +269,6 @@
 /* ETM */
 #define SYS_TRCOSLAR			sys_reg(2, 1, 1, 0, 4)
 
-#define SYS_BRBCR_EL2			sys_reg(2, 4, 9, 0, 0)
-
 #define SYS_MIDR_EL1			sys_reg(3, 0, 0, 0, 0)
 #define SYS_MPIDR_EL1			sys_reg(3, 0, 0, 0, 5)
 #define SYS_REVIDR_EL1			sys_reg(3, 0, 0, 0, 6)
@@ -821,6 +811,12 @@
 #define OP_COSP_RCTX			sys_insn(1, 3, 7, 3, 6)
 #define OP_CPP_RCTX			sys_insn(1, 3, 7, 3, 7)
 
+/*
+ * BRBE Instructions
+ */
+#define BRB_IALL_INSN	__emit_inst(0xd5000000 | OP_BRB_IALL | (0x1f))
+#define BRB_INJ_INSN	__emit_inst(0xd5000000 | OP_BRB_INJ | (0x1f))
+
 /* Common SCTLR_ELx flags. */
 #define SCTLR_ELx_ENTP2	(BIT(60))
 #define SCTLR_ELx_DSSBS	(BIT(44))

@@ -25,10 +25,6 @@ void arm64_notify_die(const char *str, struct pt_regs *regs,
 		      int signo, int sicode, unsigned long far,
 		      unsigned long err);
 
-void hook_debug_fault_code(int nr, int (*fn)(unsigned long, unsigned long,
-					     struct pt_regs *),
-			   int sig, int code, const char *name);
-
 struct mm_struct;
 extern void __show_regs(struct pt_regs *);

@@ -70,6 +70,7 @@ void arch_setup_new_exec(void);
 #define TIF_SYSCALL_TRACEPOINT	10	/* syscall tracepoint for ftrace */
 #define TIF_SECCOMP		11	/* syscall secure computing */
 #define TIF_SYSCALL_EMU		12	/* syscall emulation active */
+#define TIF_PATCH_PENDING	13	/* pending live patching update */
 #define TIF_MEMDIE		18	/* is terminating due to OOM killer */
 #define TIF_FREEZE		19
 #define TIF_RESTORE_SIGMASK	20
@@ -96,6 +97,7 @@ void arch_setup_new_exec(void);
 #define _TIF_SYSCALL_TRACEPOINT	(1 << TIF_SYSCALL_TRACEPOINT)
 #define _TIF_SECCOMP		(1 << TIF_SECCOMP)
 #define _TIF_SYSCALL_EMU	(1 << TIF_SYSCALL_EMU)
+#define _TIF_PATCH_PENDING	(1 << TIF_PATCH_PENDING)
 #define _TIF_UPROBE		(1 << TIF_UPROBE)
 #define _TIF_SINGLESTEP		(1 << TIF_SINGLESTEP)
 #define _TIF_32BIT		(1 << TIF_32BIT)
@@ -107,7 +109,8 @@ void arch_setup_new_exec(void);
 #define _TIF_WORK_MASK		(_TIF_NEED_RESCHED | _TIF_NEED_RESCHED_LAZY | \
 				 _TIF_NOTIFY_RESUME | _TIF_FOREIGN_FPSTATE | \
 				 _TIF_UPROBE | _TIF_MTE_ASYNC_FAULT | \
-				 _TIF_NOTIFY_SIGNAL | _TIF_SIGPENDING)
+				 _TIF_NOTIFY_SIGNAL | _TIF_SIGPENDING | \
+				 _TIF_PATCH_PENDING)
 
 #define _TIF_SYSCALL_WORK	(_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | \
 				 _TIF_SYSCALL_TRACEPOINT | _TIF_SECCOMP | \

@@ -29,6 +29,12 @@ void arm64_force_sig_fault_pkey(unsigned long far, const char *str, int pkey);
 void arm64_force_sig_mceerr(int code, unsigned long far, short lsb, const char *str);
 void arm64_force_sig_ptrace_errno_trap(int errno, unsigned long far, const char *str);
 
+int bug_brk_handler(struct pt_regs *regs, unsigned long esr);
+int cfi_brk_handler(struct pt_regs *regs, unsigned long esr);
+int reserved_fault_brk_handler(struct pt_regs *regs, unsigned long esr);
+int kasan_brk_handler(struct pt_regs *regs, unsigned long esr);
+int ubsan_brk_handler(struct pt_regs *regs, unsigned long esr);
+
 int early_brk64(unsigned long addr, unsigned long esr, struct pt_regs *regs);
 
 /*

@@ -28,4 +28,15 @@ struct arch_uprobe {
 	bool simulate;
 };
 
+int uprobe_brk_handler(struct pt_regs *regs, unsigned long esr);
+#ifdef CONFIG_UPROBES
+int uprobe_single_step_handler(struct pt_regs *regs, unsigned long esr);
+#else
+static inline int uprobe_single_step_handler(struct pt_regs *regs,
+					     unsigned long esr)
+{
+	return DBG_HOOK_ERROR;
+}
+#endif
+
 #endif

@@ -143,5 +143,7 @@
 /*
  * HWCAP3 flags - for AT_HWCAP3
  */
+#define HWCAP3_MTE_FAR		(1UL << 0)
+#define HWCAP3_MTE_STORE_ONLY	(1UL << 1)
 
 #endif /* _UAPI__ASM_HWCAP_H */

@@ -80,7 +80,7 @@ obj-y += head.o
 always-$(KBUILD_BUILTIN) += vmlinux.lds
 
 ifeq ($(CONFIG_DEBUG_EFI),y)
-AFLAGS_head.o += -DVMLINUX_PATH="\"$(realpath $(objtree)/vmlinux)\""
+AFLAGS_head.o += -DVMLINUX_PATH="\"$(abspath vmlinux)\""
 endif
 
 # for cleaning

@@ -197,6 +197,8 @@ out:
  */
 void __init acpi_boot_table_init(void)
 {
+	int ret;
+
 	/*
 	 * Enable ACPI instead of device tree unless
 	 * - ACPI has been disabled explicitly (acpi=off), or
@@ -250,10 +252,12 @@ done:
 		 * behaviour, use acpi=nospcr to disable console in ACPI SPCR
 		 * table as default serial console.
 		 */
-		acpi_parse_spcr(earlycon_acpi_spcr_enable,
+		ret = acpi_parse_spcr(earlycon_acpi_spcr_enable,
 					!param_acpi_nospcr);
-		pr_info("Use ACPI SPCR as default console: %s\n",
-				param_acpi_nospcr ? "No" : "Yes");
+		if (!ret || param_acpi_nospcr || !IS_ENABLED(CONFIG_ACPI_SPCR_TABLE))
+			pr_info("Use ACPI SPCR as default console: No\n");
+		else
+			pr_info("Use ACPI SPCR as default console: Yes\n");
 
 		if (IS_ENABLED(CONFIG_ACPI_BGRT))
 			acpi_table_parse(ACPI_SIG_BGRT, acpi_parse_bgrt);

@@ -320,6 +320,8 @@ static const struct arm64_ftr_bits ftr_id_aa64pfr1[] = {
 
 static const struct arm64_ftr_bits ftr_id_aa64pfr2[] = {
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR2_EL1_FPMR_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR2_EL1_MTEFAR_SHIFT, 4, ID_AA64PFR2_EL1_MTEFAR_NI),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR2_EL1_MTESTOREONLY_SHIFT, 4, ID_AA64PFR2_EL1_MTESTOREONLY_NI),
 	ARM64_FTR_END,
 };
 
@@ -2213,6 +2215,38 @@ static bool hvhe_possible(const struct arm64_cpu_capabilities *entry,
 	return arm64_test_sw_feature_override(ARM64_SW_FEATURE_OVERRIDE_HVHE);
 }
 
+static bool has_bbml2_noabort(const struct arm64_cpu_capabilities *caps, int scope)
+{
+	/*
+	 * We want to allow usage of BBML2 in as wide a range of kernel contexts
+	 * as possible. This list is therefore an allow-list of known-good
+	 * implementations that both support BBML2 and additionally, fulfill the
+	 * extra constraint of never generating TLB conflict aborts when using
+	 * the relaxed BBML2 semantics (such aborts make use of BBML2 in certain
+	 * kernel contexts difficult to prove safe against recursive aborts).
+	 *
+	 * Note that implementations can only be considered "known-good" if their
+	 * implementors attest to the fact that the implementation never raises
+	 * TLB conflict aborts for BBML2 mapping granularity changes.
+	 */
+	static const struct midr_range supports_bbml2_noabort_list[] = {
+		MIDR_REV_RANGE(MIDR_CORTEX_X4, 0, 3, 0xf),
+		MIDR_REV_RANGE(MIDR_NEOVERSE_V3, 0, 2, 0xf),
+		{}
+	};
+
+	/* Does our cpu guarantee to never raise TLB conflict aborts? */
+	if (!is_midr_in_range_list(supports_bbml2_noabort_list))
+		return false;
+
+	/*
+	 * We currently ignore the ID_AA64MMFR2_EL1 register, and only care
+	 * about whether the MIDR check passes.
+	 */
+
+	return true;
+}
+
 #ifdef CONFIG_ARM64_PAN
 static void cpu_enable_pan(const struct arm64_cpu_capabilities *__unused)
 {
@@ -2874,6 +2908,20 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.matches = has_cpuid_feature,
 		ARM64_CPUID_FIELDS(ID_AA64PFR1_EL1, MTE, MTE3)
 	},
+	{
+		.desc = "FAR on MTE Tag Check Fault",
+		.capability = ARM64_MTE_FAR,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.matches = has_cpuid_feature,
+		ARM64_CPUID_FIELDS(ID_AA64PFR2_EL1, MTEFAR, IMP)
+	},
+	{
+		.desc = "Store Only MTE Tag Check",
+		.capability = ARM64_MTE_STORE_ONLY,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.matches = has_cpuid_feature,
+		ARM64_CPUID_FIELDS(ID_AA64PFR2_EL1, MTESTOREONLY, IMP)
+	},
 #endif /* CONFIG_ARM64_MTE */
 	{
 		.desc = "RCpc load-acquire (LDAPR)",
@@ -2980,6 +3028,12 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.matches = has_cpuid_feature,
 		ARM64_CPUID_FIELDS(ID_AA64MMFR2_EL1, EVT, IMP)
 	},
+	{
+		.desc = "BBM Level 2 without TLB conflict abort",
+		.capability = ARM64_HAS_BBML2_NOABORT,
+		.type = ARM64_CPUCAP_EARLY_LOCAL_CPU_FEATURE,
+		.matches = has_bbml2_noabort,
+	},
 	{
 		.desc = "52-bit Virtual Addressing for KVM (LPA2)",
 		.capability = ARM64_HAS_LPA2,
@@ -3218,6 +3272,8 @@ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
 #ifdef CONFIG_ARM64_MTE
 	HWCAP_CAP(ID_AA64PFR1_EL1, MTE, MTE2, CAP_HWCAP, KERNEL_HWCAP_MTE),
 	HWCAP_CAP(ID_AA64PFR1_EL1, MTE, MTE3, CAP_HWCAP, KERNEL_HWCAP_MTE3),
+	HWCAP_CAP(ID_AA64PFR2_EL1, MTEFAR, IMP, CAP_HWCAP, KERNEL_HWCAP_MTE_FAR),
+	HWCAP_CAP(ID_AA64PFR2_EL1, MTESTOREONLY, IMP, CAP_HWCAP, KERNEL_HWCAP_MTE_STORE_ONLY),
 #endif /* CONFIG_ARM64_MTE */
 	HWCAP_CAP(ID_AA64MMFR0_EL1, ECV, IMP, CAP_HWCAP, KERNEL_HWCAP_ECV),
 	HWCAP_CAP(ID_AA64MMFR1_EL1, AFP, IMP, CAP_HWCAP, KERNEL_HWCAP_AFP),
@@ -3377,18 +3433,49 @@ static void update_cpu_capabilities(u16 scope_mask)
 
 	scope_mask &= ARM64_CPUCAP_SCOPE_MASK;
 	for (i = 0; i < ARM64_NCAPS; i++) {
+		bool match_all = false;
+		bool caps_set = false;
+		bool boot_cpu = false;
+
 		caps = cpucap_ptrs[i];
-		if (!caps || !(caps->type & scope_mask) ||
-		    cpus_have_cap(caps->capability) ||
-		    !caps->matches(caps, cpucap_default_scope(caps)))
+		if (!caps || !(caps->type & scope_mask))
 			continue;
 
-		if (caps->desc && !caps->cpus)
+		match_all = cpucap_match_all_early_cpus(caps);
+		caps_set = cpus_have_cap(caps->capability);
+		boot_cpu = scope_mask & SCOPE_BOOT_CPU;
+
+		/*
+		 * Unless it's a match-all CPUs feature, avoid probing if
+		 * already detected.
+		 */
+		if (!match_all && caps_set)
+			continue;
+
+		/*
+		 * A match-all CPUs capability is only set when probing the
+		 * boot CPU. It may be cleared subsequently if not detected on
+		 * secondary ones.
+		 */
+		if (match_all && !caps_set && !boot_cpu)
+			continue;
+
+		if (!caps->matches(caps, cpucap_default_scope(caps))) {
+			if (match_all)
+				__clear_bit(caps->capability, system_cpucaps);
+			continue;
+		}
+
+		/*
+		 * Match-all CPUs capabilities are logged later when the
+		 * system capabilities are finalised.
+		 */
+		if (!match_all && caps->desc && !caps->cpus)
 			pr_info("detected: %s\n", caps->desc);
 
 		__set_bit(caps->capability, system_cpucaps);
 
-		if ((scope_mask & SCOPE_BOOT_CPU) && (caps->type & SCOPE_BOOT_CPU))
+		if (boot_cpu && (caps->type & SCOPE_BOOT_CPU))
 			set_bit(caps->capability, boot_cpucaps);
 	}
 }
@@ -3789,17 +3876,24 @@ static void __init setup_system_capabilities(void)
 	enable_cpu_capabilities(SCOPE_ALL & ~SCOPE_BOOT_CPU);
 	apply_alternatives_all();
 
-	/*
-	 * Log any cpucaps with a cpumask as these aren't logged by
-	 * update_cpu_capabilities().
-	 */
 	for (int i = 0; i < ARM64_NCAPS; i++) {
 		const struct arm64_cpu_capabilities *caps = cpucap_ptrs[i];
 
-		if (caps && caps->cpus && caps->desc &&
-		    cpumask_any(caps->cpus) < nr_cpu_ids)
+		if (!caps || !caps->desc)
+			continue;
+
+		/*
+		 * Log any cpucaps with a cpumask as these aren't logged by
+		 * update_cpu_capabilities().
+		 */
+		if (caps->cpus && cpumask_any(caps->cpus) < nr_cpu_ids)
 			pr_info("detected: %s on CPU%*pbl\n",
 				caps->desc, cpumask_pr_args(caps->cpus));
+
+		/* Log match-all CPUs capabilities */
+		if (cpucap_match_all_early_cpus(caps) &&
+		    cpus_have_cap(caps->capability))
+			pr_info("detected: %s\n", caps->desc);
 	}
 
 	/*

@@ -160,6 +160,8 @@ static const char *const hwcap_str[] = {
 	[KERNEL_HWCAP_SME_SFEXPA]	= "smesfexpa",
 	[KERNEL_HWCAP_SME_STMOP]	= "smestmop",
 	[KERNEL_HWCAP_SME_SMOP4]	= "smesmop4",
+	[KERNEL_HWCAP_MTE_FAR]		= "mtefar",
+	[KERNEL_HWCAP_MTE_STORE_ONLY]	= "mtestoreonly",
 };
 
 #ifdef CONFIG_COMPAT

@@ -21,8 +21,12 @@
 #include <asm/cputype.h>
 #include <asm/daifflags.h>
 #include <asm/debug-monitors.h>
+#include <asm/exception.h>
+#include <asm/kgdb.h>
+#include <asm/kprobes.h>
 #include <asm/system_misc.h>
 #include <asm/traps.h>
+#include <asm/uprobes.h>
 
 /* Determine debug architecture. */
 u8 debug_monitors_arch(void)
@@ -34,7 +38,7 @@ u8 debug_monitors_arch(void)
 /*
  * MDSCR access routines.
  */
-static void mdscr_write(u32 mdscr)
+static void mdscr_write(u64 mdscr)
 {
 	unsigned long flags;
 	flags = local_daif_save();
@@ -43,7 +47,7 @@ static void mdscr_write(u32 mdscr)
 }
 NOKPROBE_SYMBOL(mdscr_write);
 
-static u32 mdscr_read(void)
+static u64 mdscr_read(void)
 {
 	return read_sysreg(mdscr_el1);
 }
@@ -79,16 +83,16 @@ static DEFINE_PER_CPU(int, kde_ref_count);
 
 void enable_debug_monitors(enum dbg_active_el el)
 {
-	u32 mdscr, enable = 0;
+	u64 mdscr, enable = 0;
 
 	WARN_ON(preemptible());
 
 	if (this_cpu_inc_return(mde_ref_count) == 1)
-		enable = DBG_MDSCR_MDE;
+		enable = MDSCR_EL1_MDE;
 
 	if (el == DBG_ACTIVE_EL1 &&
 	    this_cpu_inc_return(kde_ref_count) == 1)
-		enable |= DBG_MDSCR_KDE;
+		enable |= MDSCR_EL1_KDE;
 
 	if (enable && debug_enabled) {
 		mdscr = mdscr_read();
@@ -100,16 +104,16 @@ NOKPROBE_SYMBOL(enable_debug_monitors);
 
 void disable_debug_monitors(enum dbg_active_el el)
 {
-	u32 mdscr, disable = 0;
+	u64 mdscr, disable = 0;
 
 	WARN_ON(preemptible());
 
 	if (this_cpu_dec_return(mde_ref_count) == 0)
-		disable = ~DBG_MDSCR_MDE;
+		disable = ~MDSCR_EL1_MDE;
 
 	if (el == DBG_ACTIVE_EL1 &&
 	    this_cpu_dec_return(kde_ref_count) == 0)
-		disable &= ~DBG_MDSCR_KDE;
+		disable &= ~MDSCR_EL1_KDE;
 
 	if (disable) {
 		mdscr = mdscr_read();
@@ -156,74 +160,6 @@ NOKPROBE_SYMBOL(clear_user_regs_spsr_ss);
 #define set_regs_spsr_ss(r)	set_user_regs_spsr_ss(&(r)->user_regs)
 #define clear_regs_spsr_ss(r)	clear_user_regs_spsr_ss(&(r)->user_regs)
 
-static DEFINE_SPINLOCK(debug_hook_lock);
-static LIST_HEAD(user_step_hook);
-static LIST_HEAD(kernel_step_hook);
-
-static void register_debug_hook(struct list_head *node, struct list_head *list)
-{
-	spin_lock(&debug_hook_lock);
-	list_add_rcu(node, list);
-	spin_unlock(&debug_hook_lock);
-
-}
-
-static void unregister_debug_hook(struct list_head *node)
-{
-	spin_lock(&debug_hook_lock);
-	list_del_rcu(node);
-	spin_unlock(&debug_hook_lock);
-	synchronize_rcu();
-}
-
-void register_user_step_hook(struct step_hook *hook)
-{
-	register_debug_hook(&hook->node, &user_step_hook);
-}
-
-void unregister_user_step_hook(struct step_hook *hook)
-{
-	unregister_debug_hook(&hook->node);
-}
-
-void register_kernel_step_hook(struct step_hook *hook)
-{
-	register_debug_hook(&hook->node, &kernel_step_hook);
-}
-
-void unregister_kernel_step_hook(struct step_hook *hook)
-{
-	unregister_debug_hook(&hook->node);
-}
-
-/*
- * Call registered single step handlers
- * There is no Syndrome info to check for determining the handler.
- * So we call all the registered handlers, until the right handler is
- * found which returns zero.
- */
-static int call_step_hook(struct pt_regs *regs, unsigned long esr)
-{
-	struct step_hook *hook;
-	struct list_head *list;
-	int retval = DBG_HOOK_ERROR;
-
-	list = user_mode(regs) ? &user_step_hook : &kernel_step_hook;
-
-	/*
-	 * Since single-step exception disables interrupt, this function is
-	 * entirely not preemptible, and we can use rcu list safely here.
-	 */
-	list_for_each_entry_rcu(hook, list, node) {
-		retval = hook->fn(regs, esr);
-		if (retval == DBG_HOOK_HANDLED)
-			break;
-	}
-
-	return retval;
-}
-NOKPROBE_SYMBOL(call_step_hook);
-
 static void send_user_sigtrap(int si_code)
 {
 	struct pt_regs *regs = current_pt_regs();
@@ -238,105 +174,110 @@ static void send_user_sigtrap(int si_code)
 			      "User debug trap");
 }
 
-static int single_step_handler(unsigned long unused, unsigned long esr,
-			       struct pt_regs *regs)
+/*
+ * We have already unmasked interrupts and enabled preemption
+ * when calling do_el0_softstep() from entry-common.c.
+ */
+void do_el0_softstep(unsigned long esr, struct pt_regs *regs)
 {
-	bool handler_found = false;
+	if (uprobe_single_step_handler(regs, esr) == DBG_HOOK_HANDLED)
+		return;
 
+	send_user_sigtrap(TRAP_TRACE);
 	/*
-	 * If we are stepping a pending breakpoint, call the hw_breakpoint
-	 * handler first.
+	 * ptrace will disable single step unless explicitly
+	 * asked to re-enable it. For other clients, it makes
+	 * sense to leave it enabled (i.e. rewind the controls
+	 * to the active-not-pending state).
 	 */
-	if (!reinstall_suspended_bps(regs))
-		return 0;
+	user_rewind_single_step(current);
+}
 
-	if (!handler_found && call_step_hook(regs, esr) == DBG_HOOK_HANDLED)
-		handler_found = true;
+void do_el1_softstep(unsigned long esr, struct pt_regs *regs)
+{
+	if (kgdb_single_step_handler(regs, esr) == DBG_HOOK_HANDLED)
+		return;
 
-	if (!handler_found && user_mode(regs)) {
-		send_user_sigtrap(TRAP_TRACE);
-
-		/*
-		 * ptrace will disable single step unless explicitly
-		 * asked to re-enable it. For other clients, it makes
-		 * sense to leave it enabled (i.e. rewind the controls
-		 * to the active-not-pending state).
-		 */
-		user_rewind_single_step(current);
-	} else if (!handler_found) {
-		pr_warn("Unexpected kernel single-step exception at EL1\n");
-		/*
-		 * Re-enable stepping since we know that we will be
-		 * returning to regs.
-		 */
-		set_regs_spsr_ss(regs);
+	pr_warn("Unexpected kernel single-step exception at EL1\n");
+	/*
+	 * Re-enable stepping since we know that we will be
+	 * returning to regs.
	 */
+	set_regs_spsr_ss(regs);
+}
+NOKPROBE_SYMBOL(do_el1_softstep);
+
+static int call_el1_break_hook(struct pt_regs *regs, unsigned long esr)
+{
+	if (esr_brk_comment(esr) == BUG_BRK_IMM)
+		return bug_brk_handler(regs, esr);
+
+	if (IS_ENABLED(CONFIG_CFI_CLANG) && esr_is_cfi_brk(esr))
+		return cfi_brk_handler(regs, esr);
+
+	if (esr_brk_comment(esr) == FAULT_BRK_IMM)
+		return reserved_fault_brk_handler(regs, esr);
+
+	if (IS_ENABLED(CONFIG_KASAN_SW_TAGS) &&
+	    (esr_brk_comment(esr) & ~KASAN_BRK_MASK) == KASAN_BRK_IMM)
+		return kasan_brk_handler(regs, esr);
+
+	if (IS_ENABLED(CONFIG_UBSAN_TRAP) && esr_is_ubsan_brk(esr))
+		return ubsan_brk_handler(regs, esr);
+
+	if (IS_ENABLED(CONFIG_KGDB)) {
+		if (esr_brk_comment(esr) == KGDB_DYN_DBG_BRK_IMM)
+			return kgdb_brk_handler(regs, esr);
+		if (esr_brk_comment(esr) == KGDB_COMPILED_DBG_BRK_IMM)
+			return kgdb_compiled_brk_handler(regs, esr);
 	}
 
-	return 0;
-}
-NOKPROBE_SYMBOL(single_step_handler);
-
-static LIST_HEAD(user_break_hook);
-static LIST_HEAD(kernel_break_hook);
-
-void register_user_break_hook(struct break_hook *hook)
-{
-	register_debug_hook(&hook->node, &user_break_hook);
-}
-
-void unregister_user_break_hook(struct break_hook *hook)
-{
-	unregister_debug_hook(&hook->node);
-}
-
-void register_kernel_break_hook(struct break_hook *hook)
-{
-	register_debug_hook(&hook->node, &kernel_break_hook);
-}
-
-void unregister_kernel_break_hook(struct break_hook *hook)
-{
-	unregister_debug_hook(&hook->node);
-}
-
-static int call_break_hook(struct pt_regs *regs, unsigned long esr)
-{
-	struct break_hook *hook;
-	struct list_head *list;
-
-	list = user_mode(regs) ? &user_break_hook : &kernel_break_hook;
-
-	/*
-	 * Since brk exception disables interrupt, this function is
-	 * entirely not preemptible, and we can use rcu list safely here.
-	 */
-	list_for_each_entry_rcu(hook, list, node) {
-		if ((esr_brk_comment(esr) & ~hook->mask) == hook->imm)
-			return hook->fn(regs, esr);
+	if (IS_ENABLED(CONFIG_KPROBES)) {
+		if (esr_brk_comment(esr) == KPROBES_BRK_IMM)
+			return kprobe_brk_handler(regs, esr);
+		if (esr_brk_comment(esr) == KPROBES_BRK_SS_IMM)
+			return kprobe_ss_brk_handler(regs, esr);
 	}
 
+	if (IS_ENABLED(CONFIG_KRETPROBES) &&
+	    esr_brk_comment(esr) == KRETPROBES_BRK_IMM)
+		return kretprobe_brk_handler(regs, esr);
+
 	return DBG_HOOK_ERROR;
 }
-NOKPROBE_SYMBOL(call_break_hook);
+NOKPROBE_SYMBOL(call_el1_break_hook);
 
-static int brk_handler(unsigned long unused, unsigned long esr,
-		       struct pt_regs *regs)
+/*
+ * We have already unmasked interrupts and enabled preemption
+ * when calling do_el0_brk64() from entry-common.c.
+ */
+void do_el0_brk64(unsigned long esr, struct pt_regs *regs)
 {
-	if (call_break_hook(regs, esr) == DBG_HOOK_HANDLED)
-		return 0;
+	if (IS_ENABLED(CONFIG_UPROBES) &&
+	    esr_brk_comment(esr) == UPROBES_BRK_IMM &&
+	    uprobe_brk_handler(regs, esr) == DBG_HOOK_HANDLED)
+		return;
 
-	if (user_mode(regs)) {
-		send_user_sigtrap(TRAP_BRKPT);
-	} else {
-		pr_warn("Unexpected kernel BRK exception at EL1\n");
-		return -EFAULT;
-	}
+	send_user_sigtrap(TRAP_BRKPT);
+}
 
-	return 0;
+void do_el1_brk64(unsigned long esr, struct pt_regs *regs)
+{
+	if (call_el1_break_hook(regs, esr) == DBG_HOOK_HANDLED)
+		return;
+
+	die("Oops - BRK", regs, esr);
 }
-NOKPROBE_SYMBOL(brk_handler);
+NOKPROBE_SYMBOL(do_el1_brk64);
 
-int aarch32_break_handler(struct pt_regs *regs)
+#ifdef CONFIG_COMPAT
+void do_bkpt32(unsigned long esr, struct pt_regs *regs)
+{
+	arm64_notify_die("aarch32 BKPT", regs, SIGTRAP, TRAP_BRKPT, regs->pc, esr);
+}
+#endif /* CONFIG_COMPAT */
+
+bool try_handle_aarch32_break(struct pt_regs *regs)
 {
 	u32 arm_instr;
 	u16 thumb_instr;
@@ -344,7 +285,7 @@
 	void __user *pc = (void __user *)instruction_pointer(regs);
 
 	if (!compat_user_mode(regs))
-		return -EFAULT;
+		return false;
 
 	if (compat_thumb_mode(regs)) {
 		/* get 16-bit Thumb instruction */
@@ -368,20 +309,12 @@
 	}
 
 	if (!bp)
-		return -EFAULT;
+		return false;
 
 	send_user_sigtrap(TRAP_BRKPT);
-	return 0;
-}
-NOKPROBE_SYMBOL(aarch32_break_handler);
-
-void __init debug_traps_init(void)
-{
-	hook_debug_fault_code(DBG_ESR_EVT_HWSS, single_step_handler, SIGTRAP,
-			      TRAP_TRACE, "single-step handler");
-	hook_debug_fault_code(DBG_ESR_EVT_BRK, brk_handler, SIGTRAP,
-			      TRAP_BRKPT, "BRK handler");
+	return true;
 }
+NOKPROBE_SYMBOL(try_handle_aarch32_break);
 
 /* Re-enable single step for syscall restarting. */
 void user_rewind_single_step(struct task_struct *task)
@@ -415,7 +348,7 @@ void kernel_enable_single_step(struct pt_regs *regs)
 {
 	WARN_ON(!irqs_disabled());
 	set_regs_spsr_ss(regs);
-	mdscr_write(mdscr_read() | DBG_MDSCR_SS);
+	mdscr_write(mdscr_read() | MDSCR_EL1_SS);
 	enable_debug_monitors(DBG_ACTIVE_EL1);
 }
 NOKPROBE_SYMBOL(kernel_enable_single_step);
@@ -423,7 +356,7 @@ NOKPROBE_SYMBOL(kernel_enable_single_step);
 void kernel_disable_single_step(void)
 {
 	WARN_ON(!irqs_disabled());
-	mdscr_write(mdscr_read() & ~DBG_MDSCR_SS);
+	mdscr_write(mdscr_read() & ~MDSCR_EL1_SS);
 	disable_debug_monitors(DBG_ACTIVE_EL1);
 }
 NOKPROBE_SYMBOL(kernel_disable_single_step);
@@ -431,7 +364,7 @@ NOKPROBE_SYMBOL(kernel_disable_single_step);
 int kernel_active_single_step(void)
 {
 	WARN_ON(!irqs_disabled());
-	return mdscr_read() & DBG_MDSCR_SS;
+	return mdscr_read() & MDSCR_EL1_SS;
 }
 NOKPROBE_SYMBOL(kernel_active_single_step);

@@ -215,11 +215,6 @@ static int __init arm64_efi_rt_init(void)
 	if (!efi_enabled(EFI_RUNTIME_SERVICES))
 		return 0;
 
-	if (!IS_ENABLED(CONFIG_VMAP_STACK)) {
-		clear_bit(EFI_RUNTIME_SERVICES, &efi.flags);
-		return -ENOMEM;
-	}
-
 	p = arch_alloc_vmap_stack(THREAD_SIZE, NUMA_NO_NODE);
 	if (!p) {
 		pr_warn("Failed to allocate EFI runtime stack\n");

@@ -8,6 +8,7 @@
 #include <linux/context_tracking.h>
 #include <linux/kasan.h>
 #include <linux/linkage.h>
+#include <linux/livepatch.h>
 #include <linux/lockdep.h>
 #include <linux/ptrace.h>
 #include <linux/resume_user_mode.h>
@@ -144,6 +145,9 @@ static void do_notify_resume(struct pt_regs *regs, unsigned long thread_flags)
 				       (void __user *)NULL, current);
 		}
 
+		if (thread_flags & _TIF_PATCH_PENDING)
+			klp_update_patch_state(current);
+
 		if (thread_flags & (_TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL))
 			do_signal(regs);
 
@@ -344,7 +348,7 @@ static DEFINE_PER_CPU(int, __in_cortex_a76_erratum_1463225_wa);
 
 static void cortex_a76_erratum_1463225_svc_handler(void)
 {
-	u32 reg, val;
+	u64 reg, val;
 
 	if (!unlikely(test_thread_flag(TIF_SINGLESTEP)))
 		return;
@@ -354,7 +358,7 @@ static void cortex_a76_erratum_1463225_svc_handler(void)
 
 	__this_cpu_write(__in_cortex_a76_erratum_1463225_wa, 1);
 	reg = read_sysreg(mdscr_el1);
-	val = reg | DBG_MDSCR_SS | DBG_MDSCR_KDE;
+	val = reg | MDSCR_EL1_SS | MDSCR_EL1_KDE;
 	write_sysreg(val, mdscr_el1);
 	asm volatile("msr daifclr, #8");
 	isb();
@@ -441,6 +445,28 @@ static __always_inline void fpsimd_syscall_exit(void)
 	__this_cpu_write(fpsimd_last_state.to_save, FP_STATE_CURRENT);
 }
 
+/*
+ * In debug exception context, we explicitly disable preemption despite
+ * having interrupts disabled.
+ * This serves two purposes: it makes it much less likely that we would
+ * accidentally schedule in exception context and it will force a warning
+ * if we somehow manage to schedule by accident.
+ */
+static void debug_exception_enter(struct pt_regs *regs)
+{
+	preempt_disable();
+
+	/* This code is a bit fragile. Test it. */
+	RCU_LOCKDEP_WARN(!rcu_is_watching(), "exception_enter didn't work");
+}
+NOKPROBE_SYMBOL(debug_exception_enter);
+
+static void debug_exception_exit(struct pt_regs *regs)
+{
+	preempt_enable_no_resched();
+}
+NOKPROBE_SYMBOL(debug_exception_exit);
+
 UNHANDLED(el1t, 64, sync)
 UNHANDLED(el1t, 64, irq)
 UNHANDLED(el1t, 64, fiq)
@@ -504,13 +530,51 @@ static void noinstr el1_mops(struct pt_regs *regs, unsigned long esr)
 	exit_to_kernel_mode(regs);
 }
 
-static void noinstr el1_dbg(struct pt_regs *regs, unsigned long esr)
+static void noinstr el1_breakpt(struct pt_regs *regs, unsigned long esr)
+{
+	arm64_enter_el1_dbg(regs);
+	debug_exception_enter(regs);
+	do_breakpoint(esr, regs);
+	debug_exception_exit(regs);
+	arm64_exit_el1_dbg(regs);
+}
+
+static void noinstr el1_softstp(struct pt_regs *regs, unsigned long esr)
+{
+	arm64_enter_el1_dbg(regs);
+	if (!cortex_a76_erratum_1463225_debug_handler(regs)) {
+		debug_exception_enter(regs);
+		/*
+		 * After handling a breakpoint, we suspend the breakpoint
+		 * and use single-step to move to the next instruction.
+		 * If we are stepping a suspended breakpoint there's nothing more to do:
+		 * the single-step is complete.
+		 */
+		if (!try_step_suspended_breakpoints(regs))
+			do_el1_softstep(esr, regs);
+		debug_exception_exit(regs);
+	}
+	arm64_exit_el1_dbg(regs);
+}
+
+static void noinstr el1_watchpt(struct pt_regs *regs, unsigned long esr)
 {
+	/* Watchpoints are the only debug exception to write FAR_EL1 */
 	unsigned long far = read_sysreg(far_el1);
 
 	arm64_enter_el1_dbg(regs);
-	if (!cortex_a76_erratum_1463225_debug_handler(regs))
-		do_debug_exception(far, esr, regs);
+	debug_exception_enter(regs);
+	do_watchpoint(far, esr, regs);
+	debug_exception_exit(regs);
+	arm64_exit_el1_dbg(regs);
+}
+
+static void noinstr el1_brk64(struct pt_regs *regs, unsigned long esr)
+{
+	arm64_enter_el1_dbg(regs);
+	debug_exception_enter(regs);
+	do_el1_brk64(esr, regs);
+	debug_exception_exit(regs);
 	arm64_exit_el1_dbg(regs);
 }
 
@@ -553,10 +617,16 @@ asmlinkage void noinstr el1h_64_sync_handler(struct pt_regs *regs)
 		el1_mops(regs, esr);
 		break;
 	case ESR_ELx_EC_BREAKPT_CUR:
+		el1_breakpt(regs, esr);
+		break;
 	case ESR_ELx_EC_SOFTSTP_CUR:
+		el1_softstp(regs, esr);
+		break;
 	case ESR_ELx_EC_WATCHPT_CUR:
+		el1_watchpt(regs, esr);
+		break;
 	case ESR_ELx_EC_BRK64:
-		el1_dbg(regs, esr);
+		el1_brk64(regs, esr);
 		break;
 	case ESR_ELx_EC_FPAC:
 		el1_fpac(regs, esr);
@@ -747,17 +817,59 @@ static void noinstr el0_inv(struct pt_regs *regs, unsigned long esr)
 	exit_to_user_mode(regs);
 }
 
-static void noinstr el0_dbg(struct pt_regs *regs, unsigned long esr)
+static void noinstr el0_breakpt(struct pt_regs *regs, unsigned long esr)
 {
-	/* Only watchpoints write FAR_EL1, otherwise its UNKNOWN */
-	unsigned long far = read_sysreg(far_el1);
-
+	if (!is_ttbr0_addr(regs->pc))
+		arm64_apply_bp_hardening();
+
 	enter_from_user_mode(regs);
-	do_debug_exception(far, esr, regs);
+	debug_exception_enter(regs);
+	do_breakpoint(esr, regs);
+	debug_exception_exit(regs);
 	local_daif_restore(DAIF_PROCCTX);
 	exit_to_user_mode(regs);
 }
 
+static void noinstr el0_softstp(struct pt_regs *regs, unsigned long esr)
+{
+	if (!is_ttbr0_addr(regs->pc))
+		arm64_apply_bp_hardening();
+
+	enter_from_user_mode(regs);
+	/*
+	 * After handling a breakpoint, we suspend the breakpoint
+	 * and use single-step to move to the next instruction.
+	 * If we are stepping a suspended breakpoint there's nothing more to do:
+	 * the single-step is complete.
+	 */
+	if (!try_step_suspended_breakpoints(regs)) {
+		local_daif_restore(DAIF_PROCCTX);
+		do_el0_softstep(esr, regs);
+	}
+	exit_to_user_mode(regs);
+}
+
+static void noinstr el0_watchpt(struct pt_regs *regs, unsigned long esr)
+{
+	/* Watchpoints are the only debug exception to write FAR_EL1 */
+	unsigned long far = read_sysreg(far_el1);
+
+	enter_from_user_mode(regs);
+	debug_exception_enter(regs);
+	do_watchpoint(far, esr, regs);
+	debug_exception_exit(regs);
+	local_daif_restore(DAIF_PROCCTX);
+	exit_to_user_mode(regs);
+}
+
+static void noinstr el0_brk64(struct pt_regs *regs, unsigned long esr)
+{
+	enter_from_user_mode(regs);
+	local_daif_restore(DAIF_PROCCTX);
+	do_el0_brk64(esr, regs);
+	exit_to_user_mode(regs);
+}
+
 static void noinstr el0_svc(struct pt_regs *regs)
 {
 	enter_from_user_mode(regs);
@ -826,10 +938,16 @@ asmlinkage void noinstr el0t_64_sync_handler(struct pt_regs *regs)
|
|||
el0_gcs(regs, esr);
|
||||
break;
|
||||
case ESR_ELx_EC_BREAKPT_LOW:
|
||||
el0_breakpt(regs, esr);
|
||||
break;
|
||||
case ESR_ELx_EC_SOFTSTP_LOW:
|
||||
el0_softstp(regs, esr);
|
||||
break;
|
||||
case ESR_ELx_EC_WATCHPT_LOW:
|
||||
el0_watchpt(regs, esr);
|
||||
break;
|
||||
case ESR_ELx_EC_BRK64:
|
||||
el0_dbg(regs, esr);
|
||||
el0_brk64(regs, esr);
|
||||
break;
|
||||
case ESR_ELx_EC_FPAC:
|
||||
el0_fpac(regs, esr);
|
||||
|
|
@ -912,6 +1030,14 @@ static void noinstr el0_svc_compat(struct pt_regs *regs)
|
|||
exit_to_user_mode(regs);
|
||||
}
|
||||
|
||||
static void noinstr el0_bkpt32(struct pt_regs *regs, unsigned long esr)
|
||||
{
|
||||
enter_from_user_mode(regs);
|
||||
local_daif_restore(DAIF_PROCCTX);
|
||||
do_bkpt32(esr, regs);
|
||||
exit_to_user_mode(regs);
|
||||
}
|
||||
|
||||
asmlinkage void noinstr el0t_32_sync_handler(struct pt_regs *regs)
|
||||
{
|
||||
unsigned long esr = read_sysreg(esr_el1);
|
||||
|
|
@ -946,10 +1072,16 @@ asmlinkage void noinstr el0t_32_sync_handler(struct pt_regs *regs)
|
|||
el0_cp15(regs, esr);
|
||||
break;
|
||||
case ESR_ELx_EC_BREAKPT_LOW:
|
||||
el0_breakpt(regs, esr);
|
||||
break;
|
||||
case ESR_ELx_EC_SOFTSTP_LOW:
|
||||
el0_softstp(regs, esr);
|
||||
break;
|
||||
case ESR_ELx_EC_WATCHPT_LOW:
|
||||
el0_watchpt(regs, esr);
|
||||
break;
|
||||
case ESR_ELx_EC_BKPT32:
|
||||
el0_dbg(regs, esr);
|
||||
el0_bkpt32(regs, esr);
|
||||
break;
|
||||
default:
|
||||
el0_inv(regs, esr);
|
||||
|
|
@ -977,7 +1109,6 @@ UNHANDLED(el0t, 32, fiq)
|
|||
UNHANDLED(el0t, 32, error)
|
||||
#endif /* CONFIG_COMPAT */
|
||||
|
||||
#ifdef CONFIG_VMAP_STACK
|
||||
asmlinkage void noinstr __noreturn handle_bad_stack(struct pt_regs *regs)
|
||||
{
|
||||
unsigned long esr = read_sysreg(esr_el1);
|
||||
|
|
@ -986,7 +1117,6 @@ asmlinkage void noinstr __noreturn handle_bad_stack(struct pt_regs *regs)
|
|||
arm64_enter_nmi(regs);
|
||||
panic_bad_stack(regs, esr, far);
|
||||
}
|
||||
#endif /* CONFIG_VMAP_STACK */
|
||||
|
||||
#ifdef CONFIG_ARM_SDE_INTERFACE
|
||||
asmlinkage noinstr unsigned long
|
||||
|
|
|
|||
|
|
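The dispatch above keys purely on the ESR_ELx exception class (EC) field, so each debug cause now reaches a dedicated noinstr handler instead of the old catch-all el1_dbg()/el0_dbg(). A minimal standalone sketch of that decode, using the architectural EC encodings from the Arm ARM (the function and macro names here are illustrative, not kernel API):

#include <stdint.h>

#define ESR_EC_SHIFT	26
#define ESR_EC_MASK	0x3fULL
#define EC_BREAKPT_CUR	0x31	/* hardware breakpoint, current EL */
#define EC_SOFTSTP_CUR	0x33	/* software step, current EL */
#define EC_WATCHPT_CUR	0x35	/* watchpoint, current EL */
#define EC_BRK64	0x3c	/* BRK instruction */

static const char *el1_debug_handler_for(uint64_t esr)
{
	/* same shape as the el1h_64_sync_handler() switch above */
	switch ((esr >> ESR_EC_SHIFT) & ESR_EC_MASK) {
	case EC_BREAKPT_CUR:	return "el1_breakpt";
	case EC_SOFTSTP_CUR:	return "el1_softstp";
	case EC_WATCHPT_CUR:	return "el1_watchpt";
	case EC_BRK64:		return "el1_brk64";
	default:		return "unhandled";
	}
}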
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
@@ -55,7 +55,6 @@
 	.endif
 
 	sub	sp, sp, #PT_REGS_SIZE
-#ifdef CONFIG_VMAP_STACK
 	/*
 	 * Test whether the SP has overflowed, without corrupting a GPR.
 	 * Task and IRQ stacks are aligned so that SP & (1 << THREAD_SHIFT)
@@ -97,7 +96,6 @@
 	/* We were already on the overflow stack. Restore sp/x0 and carry on. */
 	sub	sp, sp, x0
 	mrs	x0, tpidrro_el0
-#endif
 	b	el\el\ht\()_\regsize\()_\label
 .org .Lventry_start\@ + 128	// Did we overflow the ventry slot?
 	.endm
@@ -540,7 +538,6 @@ SYM_CODE_START(vectors)
 	kernel_ventry	0, t, 32, error		// Error 32-bit EL0
 SYM_CODE_END(vectors)
 
-#ifdef CONFIG_VMAP_STACK
 SYM_CODE_START_LOCAL(__bad_stack)
 	/*
 	 * We detected an overflow in kernel_ventry, which switched to the
@@ -568,7 +565,6 @@ SYM_CODE_START_LOCAL(__bad_stack)
 	bl	handle_bad_stack
 	ASM_BUG()
 SYM_CODE_END(__bad_stack)
-#endif /* CONFIG_VMAP_STACK */
 
 
 	.macro entry_handler	el:req, ht:req, regsize:req, label:req
@@ -1009,7 +1005,6 @@ SYM_CODE_START(__sdei_asm_handler)
 1:	adr_this_cpu dst=x5, sym=sdei_active_critical_event, tmp=x6
 2:	str	x19, [x5]
 
-#ifdef CONFIG_VMAP_STACK
 	/*
 	 * entry.S may have been using sp as a scratch register, find whether
 	 * this is a normal or critical event and switch to the appropriate
@@ -1022,7 +1017,6 @@
 2:	mov	x6, #SDEI_STACK_SIZE
 	add	x5, x5, x6
 	mov	sp, x5
-#endif
 
 #ifdef CONFIG_SHADOW_CALL_STACK
 	/* Use a separate shadow call stack for normal and critical events */
diff --git a/arch/arm64/kernel/hw_breakpoint.c b/arch/arm64/kernel/hw_breakpoint.c
@@ -22,6 +22,7 @@
 #include <asm/current.h>
 #include <asm/debug-monitors.h>
 #include <asm/esr.h>
+#include <asm/exception.h>
 #include <asm/hw_breakpoint.h>
 #include <asm/traps.h>
 #include <asm/cputype.h>
@@ -618,8 +619,7 @@ NOKPROBE_SYMBOL(toggle_bp_registers);
 /*
  * Debug exception handlers.
  */
-static int breakpoint_handler(unsigned long unused, unsigned long esr,
-			      struct pt_regs *regs)
+void do_breakpoint(unsigned long esr, struct pt_regs *regs)
 {
 	int i, step = 0, *kernel_step;
 	u32 ctrl_reg;
@@ -662,7 +662,7 @@ unlock:
 	}
 
 	if (!step)
-		return 0;
+		return;
 
 	if (user_mode(regs)) {
 		debug_info->bps_disabled = 1;
@@ -670,7 +670,7 @@ unlock:
 
 		/* If we're already stepping a watchpoint, just return. */
 		if (debug_info->wps_disabled)
-			return 0;
+			return;
 
 		if (test_thread_flag(TIF_SINGLESTEP))
 			debug_info->suspended_step = 1;
@@ -681,7 +681,7 @@ unlock:
 		kernel_step = this_cpu_ptr(&stepping_kernel_bp);
 
 		if (*kernel_step != ARM_KERNEL_STEP_NONE)
-			return 0;
+			return;
 
 		if (kernel_active_single_step()) {
 			*kernel_step = ARM_KERNEL_STEP_SUSPEND;
@@ -690,10 +690,8 @@ unlock:
 			kernel_enable_single_step(regs);
 		}
 	}
-
-	return 0;
 }
-NOKPROBE_SYMBOL(breakpoint_handler);
+NOKPROBE_SYMBOL(do_breakpoint);
 
 /*
  * Arm64 hardware does not always report a watchpoint hit address that matches
@@ -752,8 +750,7 @@ static int watchpoint_report(struct perf_event *wp, unsigned long addr,
 	return step;
 }
 
-static int watchpoint_handler(unsigned long addr, unsigned long esr,
-			      struct pt_regs *regs)
+void do_watchpoint(unsigned long addr, unsigned long esr, struct pt_regs *regs)
 {
 	int i, step = 0, *kernel_step, access, closest_match = 0;
 	u64 min_dist = -1, dist;
@@ -808,7 +805,7 @@ static int watchpoint_handler(unsigned long addr, unsigned long esr,
 	rcu_read_unlock();
 
 	if (!step)
-		return 0;
+		return;
 
 	/*
 	 * We always disable EL0 watchpoints because the kernel can
@@ -821,7 +818,7 @@ static int watchpoint_handler(unsigned long addr, unsigned long esr,
 
 		/* If we're already stepping a breakpoint, just return. */
 		if (debug_info->bps_disabled)
-			return 0;
+			return;
 
 		if (test_thread_flag(TIF_SINGLESTEP))
 			debug_info->suspended_step = 1;
@@ -832,7 +829,7 @@ static int watchpoint_handler(unsigned long addr, unsigned long esr,
 		kernel_step = this_cpu_ptr(&stepping_kernel_bp);
 
 		if (*kernel_step != ARM_KERNEL_STEP_NONE)
-			return 0;
+			return;
 
 		if (kernel_active_single_step()) {
 			*kernel_step = ARM_KERNEL_STEP_SUSPEND;
@@ -841,44 +838,41 @@ static int watchpoint_handler(unsigned long addr, unsigned long esr,
 			kernel_enable_single_step(regs);
 		}
 	}
-
-	return 0;
 }
-NOKPROBE_SYMBOL(watchpoint_handler);
+NOKPROBE_SYMBOL(do_watchpoint);
 
 /*
  * Handle single-step exception.
  */
-int reinstall_suspended_bps(struct pt_regs *regs)
+bool try_step_suspended_breakpoints(struct pt_regs *regs)
 {
 	struct debug_info *debug_info = &current->thread.debug;
-	int handled_exception = 0, *kernel_step;
-
-	kernel_step = this_cpu_ptr(&stepping_kernel_bp);
+	int *kernel_step = this_cpu_ptr(&stepping_kernel_bp);
+	bool handled_exception = false;
 
 	/*
-	 * Called from single-step exception handler.
-	 * Return 0 if execution can resume, 1 if a SIGTRAP should be
-	 * reported.
+	 * Called from single-step exception entry.
+	 * Return true if we stepped a breakpoint and can resume execution,
+	 * false if we need to handle a single-step.
 	 */
 	if (user_mode(regs)) {
 		if (debug_info->bps_disabled) {
 			debug_info->bps_disabled = 0;
 			toggle_bp_registers(AARCH64_DBG_REG_BCR, DBG_ACTIVE_EL0, 1);
-			handled_exception = 1;
+			handled_exception = true;
 		}
 
 		if (debug_info->wps_disabled) {
 			debug_info->wps_disabled = 0;
 			toggle_bp_registers(AARCH64_DBG_REG_WCR, DBG_ACTIVE_EL0, 1);
-			handled_exception = 1;
+			handled_exception = true;
 		}
 
 		if (handled_exception) {
 			if (debug_info->suspended_step) {
 				debug_info->suspended_step = 0;
 				/* Allow exception handling to fall-through. */
-				handled_exception = 0;
+				handled_exception = false;
 			} else {
 				user_disable_single_step(current);
 			}
@@ -892,17 +886,17 @@ int reinstall_suspended_bps(struct pt_regs *regs)
 
 		if (*kernel_step != ARM_KERNEL_STEP_SUSPEND) {
 			kernel_disable_single_step();
-			handled_exception = 1;
+			handled_exception = true;
 		} else {
-			handled_exception = 0;
+			handled_exception = false;
 		}
 
 		*kernel_step = ARM_KERNEL_STEP_NONE;
 	}
 
-	return !handled_exception;
+	return handled_exception;
 }
-NOKPROBE_SYMBOL(reinstall_suspended_bps);
+NOKPROBE_SYMBOL(try_step_suspended_breakpoints);
 
 /*
  * Context-switcher for restoring suspended breakpoints.
@@ -987,12 +981,6 @@ static int __init arch_hw_breakpoint_init(void)
 	pr_info("found %d breakpoint and %d watchpoint registers.\n",
 		core_num_brps, core_num_wrps);
 
-	/* Register debug fault handlers. */
-	hook_debug_fault_code(DBG_ESR_EVT_HWBP, breakpoint_handler, SIGTRAP,
-			      TRAP_HWBKPT, "hw-breakpoint handler");
-	hook_debug_fault_code(DBG_ESR_EVT_HWWP, watchpoint_handler, SIGTRAP,
-			      TRAP_HWBKPT, "hw-watchpoint handler");
-
 	/*
 	 * Reset the breakpoint resources. We assume that a halting
 	 * debugger will leave the world in a nice state for us.
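Note the inverted contract: the old reinstall_suspended_bps() returned 0 when the step had been consumed, whereas try_step_suspended_breakpoints() returns true in that case. A short sketch of the call-site pattern (the declarations are stand-ins for the kernel prototypes introduced by this series):

#include <stdbool.h>

struct pt_regs;

/* stand-ins for the kernel declarations; signatures follow the patch */
bool try_step_suspended_breakpoints(struct pt_regs *regs);
void do_el1_softstep(unsigned long esr, struct pt_regs *regs);

static void softstep_example(unsigned long esr, struct pt_regs *regs)
{
	/*
	 * true:  a suspended break/watchpoint was stepped, just resume;
	 * false: a genuine single-step that still needs handling.
	 */
	if (!try_step_suspended_breakpoints(regs))
		do_el1_softstep(esr, regs);
}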
diff --git a/arch/arm64/kernel/irq.c b/arch/arm64/kernel/irq.c
@@ -51,7 +51,6 @@ static void init_irq_scs(void)
 			scs_alloc(early_cpu_to_node(cpu));
 }
 
-#ifdef CONFIG_VMAP_STACK
 static void __init init_irq_stacks(void)
 {
 	int cpu;
@@ -62,18 +61,6 @@ static void __init init_irq_stacks(void)
 		per_cpu(irq_stack_ptr, cpu) = p;
 	}
 }
-#else
-/* irq stack only needs to be 16 byte aligned - not IRQ_STACK_SIZE aligned. */
-DEFINE_PER_CPU_ALIGNED(unsigned long [IRQ_STACK_SIZE/sizeof(long)], irq_stack);
-
-static void init_irq_stacks(void)
-{
-	int cpu;
-
-	for_each_possible_cpu(cpu)
-		per_cpu(irq_stack_ptr, cpu) = per_cpu(irq_stack, cpu);
-}
-#endif
 
 #ifndef CONFIG_PREEMPT_RT
 static void ____do_softirq(struct pt_regs *regs)
diff --git a/arch/arm64/kernel/kgdb.c b/arch/arm64/kernel/kgdb.c
@@ -234,23 +234,23 @@ int kgdb_arch_handle_exception(int exception_vector, int signo,
 	return err;
 }
 
-static int kgdb_brk_fn(struct pt_regs *regs, unsigned long esr)
+int kgdb_brk_handler(struct pt_regs *regs, unsigned long esr)
 {
 	kgdb_handle_exception(1, SIGTRAP, 0, regs);
 	return DBG_HOOK_HANDLED;
 }
-NOKPROBE_SYMBOL(kgdb_brk_fn)
+NOKPROBE_SYMBOL(kgdb_brk_handler)
 
-static int kgdb_compiled_brk_fn(struct pt_regs *regs, unsigned long esr)
+int kgdb_compiled_brk_handler(struct pt_regs *regs, unsigned long esr)
 {
 	compiled_break = 1;
 	kgdb_handle_exception(1, SIGTRAP, 0, regs);
 
 	return DBG_HOOK_HANDLED;
 }
-NOKPROBE_SYMBOL(kgdb_compiled_brk_fn);
+NOKPROBE_SYMBOL(kgdb_compiled_brk_handler);
 
-static int kgdb_step_brk_fn(struct pt_regs *regs, unsigned long esr)
+int kgdb_single_step_handler(struct pt_regs *regs, unsigned long esr)
 {
 	if (!kgdb_single_step)
 		return DBG_HOOK_ERROR;
@@ -258,21 +258,7 @@ static int kgdb_step_brk_fn(struct pt_regs *regs, unsigned long esr)
 	kgdb_handle_exception(0, SIGTRAP, 0, regs);
 	return DBG_HOOK_HANDLED;
 }
-NOKPROBE_SYMBOL(kgdb_step_brk_fn);
-
-static struct break_hook kgdb_brkpt_hook = {
-	.fn		= kgdb_brk_fn,
-	.imm		= KGDB_DYN_DBG_BRK_IMM,
-};
-
-static struct break_hook kgdb_compiled_brkpt_hook = {
-	.fn		= kgdb_compiled_brk_fn,
-	.imm		= KGDB_COMPILED_DBG_BRK_IMM,
-};
-
-static struct step_hook kgdb_step_hook = {
-	.fn		= kgdb_step_brk_fn
-};
+NOKPROBE_SYMBOL(kgdb_single_step_handler);
 
 static int __kgdb_notify(struct die_args *args, unsigned long cmd)
 {
@@ -311,15 +297,7 @@ static struct notifier_block kgdb_notifier = {
  */
 int kgdb_arch_init(void)
 {
-	int ret = register_die_notifier(&kgdb_notifier);
-
-	if (ret != 0)
-		return ret;
-
-	register_kernel_break_hook(&kgdb_brkpt_hook);
-	register_kernel_break_hook(&kgdb_compiled_brkpt_hook);
-	register_kernel_step_hook(&kgdb_step_hook);
-	return 0;
+	return register_die_notifier(&kgdb_notifier);
 }
 
 /*
@@ -329,9 +307,6 @@ int kgdb_arch_init(void)
  */
 void kgdb_arch_exit(void)
 {
-	unregister_kernel_break_hook(&kgdb_brkpt_hook);
-	unregister_kernel_break_hook(&kgdb_compiled_brkpt_hook);
-	unregister_kernel_step_hook(&kgdb_step_hook);
 	unregister_die_notifier(&kgdb_notifier);
 }
diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
@@ -23,6 +23,7 @@
 #include <asm/insn.h>
 #include <asm/scs.h>
 #include <asm/sections.h>
+#include <asm/text-patching.h>
 
 enum aarch64_reloc_op {
 	RELOC_OP_NONE,
@@ -48,7 +49,17 @@ static u64 do_reloc(enum aarch64_reloc_op reloc_op, __le32 *place, u64 val)
 	return 0;
 }
 
-static int reloc_data(enum aarch64_reloc_op op, void *place, u64 val, int len)
+#define WRITE_PLACE(place, val, mod) do {				\
+	__typeof__(val) __val = (val);					\
+									\
+	if (mod->state == MODULE_STATE_UNFORMED)			\
+		*(place) = __val;					\
+	else								\
+		aarch64_insn_copy(place, &(__val), sizeof(*place));	\
+} while (0)
+
+static int reloc_data(enum aarch64_reloc_op op, void *place, u64 val, int len,
+		      struct module *me)
 {
 	s64 sval = do_reloc(op, place, val);
 
@@ -66,7 +77,7 @@ static int reloc_data(enum aarch64_reloc_op op, void *place, u64 val, int len)
 
 	switch (len) {
 	case 16:
-		*(s16 *)place = sval;
+		WRITE_PLACE((s16 *)place, sval, me);
 		switch (op) {
 		case RELOC_OP_ABS:
 			if (sval < 0 || sval > U16_MAX)
@@ -82,7 +93,7 @@ static int reloc_data(enum aarch64_reloc_op op, void *place, u64 val, int len)
 		}
 		break;
 	case 32:
-		*(s32 *)place = sval;
+		WRITE_PLACE((s32 *)place, sval, me);
 		switch (op) {
 		case RELOC_OP_ABS:
 			if (sval < 0 || sval > U32_MAX)
@@ -98,7 +109,7 @@ static int reloc_data(enum aarch64_reloc_op op, void *place, u64 val, int len)
 		}
 		break;
 	case 64:
-		*(s64 *)place = sval;
+		WRITE_PLACE((s64 *)place, sval, me);
 		break;
 	default:
 		pr_err("Invalid length (%d) for data relocation\n", len);
@@ -113,7 +124,8 @@ enum aarch64_insn_movw_imm_type {
 };
 
 static int reloc_insn_movw(enum aarch64_reloc_op op, __le32 *place, u64 val,
-			   int lsb, enum aarch64_insn_movw_imm_type imm_type)
+			   int lsb, enum aarch64_insn_movw_imm_type imm_type,
+			   struct module *me)
 {
 	u64 imm;
 	s64 sval;
@@ -145,7 +157,7 @@ static int reloc_insn_movw(enum aarch64_reloc_op op, __le32 *place, u64 val,
 
 	/* Update the instruction with the new encoding. */
 	insn = aarch64_insn_encode_immediate(AARCH64_INSN_IMM_16, insn, imm);
-	*place = cpu_to_le32(insn);
+	WRITE_PLACE(place, cpu_to_le32(insn), me);
 
 	if (imm > U16_MAX)
 		return -ERANGE;
@@ -154,7 +166,8 @@ static int reloc_insn_movw(enum aarch64_reloc_op op, __le32 *place, u64 val,
 }
 
 static int reloc_insn_imm(enum aarch64_reloc_op op, __le32 *place, u64 val,
-			  int lsb, int len, enum aarch64_insn_imm_type imm_type)
+			  int lsb, int len, enum aarch64_insn_imm_type imm_type,
+			  struct module *me)
 {
 	u64 imm, imm_mask;
 	s64 sval;
@@ -170,7 +183,7 @@ static int reloc_insn_imm(enum aarch64_reloc_op op, __le32 *place, u64 val,
 
 	/* Update the instruction's immediate field. */
 	insn = aarch64_insn_encode_immediate(imm_type, insn, imm);
-	*place = cpu_to_le32(insn);
+	WRITE_PLACE(place, cpu_to_le32(insn), me);
 
 	/*
 	 * Extract the upper value bits (including the sign bit) and
@@ -189,17 +202,17 @@ static int reloc_insn_imm(enum aarch64_reloc_op op, __le32 *place, u64 val,
 }
 
 static int reloc_insn_adrp(struct module *mod, Elf64_Shdr *sechdrs,
-			   __le32 *place, u64 val)
+			   __le32 *place, u64 val, struct module *me)
 {
 	u32 insn;
 
 	if (!is_forbidden_offset_for_adrp(place))
 		return reloc_insn_imm(RELOC_OP_PAGE, place, val, 12, 21,
-				      AARCH64_INSN_IMM_ADR);
+				      AARCH64_INSN_IMM_ADR, me);
 
 	/* patch ADRP to ADR if it is in range */
 	if (!reloc_insn_imm(RELOC_OP_PREL, place, val & ~0xfff, 0, 21,
-			    AARCH64_INSN_IMM_ADR)) {
+			    AARCH64_INSN_IMM_ADR, me)) {
 		insn = le32_to_cpu(*place);
 		insn &= ~BIT(31);
 	} else {
@@ -211,7 +224,7 @@ static int reloc_insn_adrp(struct module *mod, Elf64_Shdr *sechdrs,
 					   AARCH64_INSN_BRANCH_NOLINK);
 	}
 
-	*place = cpu_to_le32(insn);
+	WRITE_PLACE(place, cpu_to_le32(insn), me);
 	return 0;
 }
 
@@ -255,23 +268,23 @@ int apply_relocate_add(Elf64_Shdr *sechdrs,
 		/* Data relocations. */
 		case R_AARCH64_ABS64:
 			overflow_check = false;
-			ovf = reloc_data(RELOC_OP_ABS, loc, val, 64);
+			ovf = reloc_data(RELOC_OP_ABS, loc, val, 64, me);
 			break;
 		case R_AARCH64_ABS32:
-			ovf = reloc_data(RELOC_OP_ABS, loc, val, 32);
+			ovf = reloc_data(RELOC_OP_ABS, loc, val, 32, me);
 			break;
 		case R_AARCH64_ABS16:
-			ovf = reloc_data(RELOC_OP_ABS, loc, val, 16);
+			ovf = reloc_data(RELOC_OP_ABS, loc, val, 16, me);
 			break;
 		case R_AARCH64_PREL64:
 			overflow_check = false;
-			ovf = reloc_data(RELOC_OP_PREL, loc, val, 64);
+			ovf = reloc_data(RELOC_OP_PREL, loc, val, 64, me);
 			break;
 		case R_AARCH64_PREL32:
-			ovf = reloc_data(RELOC_OP_PREL, loc, val, 32);
+			ovf = reloc_data(RELOC_OP_PREL, loc, val, 32, me);
 			break;
 		case R_AARCH64_PREL16:
-			ovf = reloc_data(RELOC_OP_PREL, loc, val, 16);
+			ovf = reloc_data(RELOC_OP_PREL, loc, val, 16, me);
 			break;
 
 		/* MOVW instruction relocations. */
@@ -280,88 +293,88 @@ int apply_relocate_add(Elf64_Shdr *sechdrs,
 			fallthrough;
 		case R_AARCH64_MOVW_UABS_G0:
 			ovf = reloc_insn_movw(RELOC_OP_ABS, loc, val, 0,
-					      AARCH64_INSN_IMM_MOVKZ);
+					      AARCH64_INSN_IMM_MOVKZ, me);
 			break;
 		case R_AARCH64_MOVW_UABS_G1_NC:
 			overflow_check = false;
 			fallthrough;
 		case R_AARCH64_MOVW_UABS_G1:
 			ovf = reloc_insn_movw(RELOC_OP_ABS, loc, val, 16,
-					      AARCH64_INSN_IMM_MOVKZ);
+					      AARCH64_INSN_IMM_MOVKZ, me);
 			break;
 		case R_AARCH64_MOVW_UABS_G2_NC:
 			overflow_check = false;
 			fallthrough;
 		case R_AARCH64_MOVW_UABS_G2:
 			ovf = reloc_insn_movw(RELOC_OP_ABS, loc, val, 32,
-					      AARCH64_INSN_IMM_MOVKZ);
+					      AARCH64_INSN_IMM_MOVKZ, me);
 			break;
 		case R_AARCH64_MOVW_UABS_G3:
 			/* We're using the top bits so we can't overflow. */
 			overflow_check = false;
 			ovf = reloc_insn_movw(RELOC_OP_ABS, loc, val, 48,
-					      AARCH64_INSN_IMM_MOVKZ);
+					      AARCH64_INSN_IMM_MOVKZ, me);
 			break;
 		case R_AARCH64_MOVW_SABS_G0:
 			ovf = reloc_insn_movw(RELOC_OP_ABS, loc, val, 0,
-					      AARCH64_INSN_IMM_MOVNZ);
+					      AARCH64_INSN_IMM_MOVNZ, me);
 			break;
 		case R_AARCH64_MOVW_SABS_G1:
 			ovf = reloc_insn_movw(RELOC_OP_ABS, loc, val, 16,
-					      AARCH64_INSN_IMM_MOVNZ);
+					      AARCH64_INSN_IMM_MOVNZ, me);
 			break;
 		case R_AARCH64_MOVW_SABS_G2:
 			ovf = reloc_insn_movw(RELOC_OP_ABS, loc, val, 32,
-					      AARCH64_INSN_IMM_MOVNZ);
+					      AARCH64_INSN_IMM_MOVNZ, me);
 			break;
 		case R_AARCH64_MOVW_PREL_G0_NC:
 			overflow_check = false;
 			ovf = reloc_insn_movw(RELOC_OP_PREL, loc, val, 0,
-					      AARCH64_INSN_IMM_MOVKZ);
+					      AARCH64_INSN_IMM_MOVKZ, me);
 			break;
 		case R_AARCH64_MOVW_PREL_G0:
 			ovf = reloc_insn_movw(RELOC_OP_PREL, loc, val, 0,
-					      AARCH64_INSN_IMM_MOVNZ);
+					      AARCH64_INSN_IMM_MOVNZ, me);
 			break;
 		case R_AARCH64_MOVW_PREL_G1_NC:
 			overflow_check = false;
 			ovf = reloc_insn_movw(RELOC_OP_PREL, loc, val, 16,
-					      AARCH64_INSN_IMM_MOVKZ);
+					      AARCH64_INSN_IMM_MOVKZ, me);
 			break;
 		case R_AARCH64_MOVW_PREL_G1:
 			ovf = reloc_insn_movw(RELOC_OP_PREL, loc, val, 16,
-					      AARCH64_INSN_IMM_MOVNZ);
+					      AARCH64_INSN_IMM_MOVNZ, me);
 			break;
 		case R_AARCH64_MOVW_PREL_G2_NC:
 			overflow_check = false;
 			ovf = reloc_insn_movw(RELOC_OP_PREL, loc, val, 32,
-					      AARCH64_INSN_IMM_MOVKZ);
+					      AARCH64_INSN_IMM_MOVKZ, me);
 			break;
 		case R_AARCH64_MOVW_PREL_G2:
 			ovf = reloc_insn_movw(RELOC_OP_PREL, loc, val, 32,
-					      AARCH64_INSN_IMM_MOVNZ);
+					      AARCH64_INSN_IMM_MOVNZ, me);
 			break;
 		case R_AARCH64_MOVW_PREL_G3:
 			/* We're using the top bits so we can't overflow. */
 			overflow_check = false;
 			ovf = reloc_insn_movw(RELOC_OP_PREL, loc, val, 48,
-					      AARCH64_INSN_IMM_MOVNZ);
+					      AARCH64_INSN_IMM_MOVNZ, me);
 			break;
 
 		/* Immediate instruction relocations. */
 		case R_AARCH64_LD_PREL_LO19:
 			ovf = reloc_insn_imm(RELOC_OP_PREL, loc, val, 2, 19,
-					     AARCH64_INSN_IMM_19);
+					     AARCH64_INSN_IMM_19, me);
 			break;
 		case R_AARCH64_ADR_PREL_LO21:
 			ovf = reloc_insn_imm(RELOC_OP_PREL, loc, val, 0, 21,
-					     AARCH64_INSN_IMM_ADR);
+					     AARCH64_INSN_IMM_ADR, me);
 			break;
 		case R_AARCH64_ADR_PREL_PG_HI21_NC:
 			overflow_check = false;
 			fallthrough;
 		case R_AARCH64_ADR_PREL_PG_HI21:
-			ovf = reloc_insn_adrp(me, sechdrs, loc, val);
+			ovf = reloc_insn_adrp(me, sechdrs, loc, val, me);
 			if (ovf && ovf != -ERANGE)
 				return ovf;
 			break;
@@ -369,46 +382,46 @@ int apply_relocate_add(Elf64_Shdr *sechdrs,
 		case R_AARCH64_LDST8_ABS_LO12_NC:
 			overflow_check = false;
 			ovf = reloc_insn_imm(RELOC_OP_ABS, loc, val, 0, 12,
-					     AARCH64_INSN_IMM_12);
+					     AARCH64_INSN_IMM_12, me);
 			break;
 		case R_AARCH64_LDST16_ABS_LO12_NC:
 			overflow_check = false;
 			ovf = reloc_insn_imm(RELOC_OP_ABS, loc, val, 1, 11,
-					     AARCH64_INSN_IMM_12);
+					     AARCH64_INSN_IMM_12, me);
 			break;
 		case R_AARCH64_LDST32_ABS_LO12_NC:
 			overflow_check = false;
 			ovf = reloc_insn_imm(RELOC_OP_ABS, loc, val, 2, 10,
-					     AARCH64_INSN_IMM_12);
+					     AARCH64_INSN_IMM_12, me);
 			break;
 		case R_AARCH64_LDST64_ABS_LO12_NC:
 			overflow_check = false;
 			ovf = reloc_insn_imm(RELOC_OP_ABS, loc, val, 3, 9,
-					     AARCH64_INSN_IMM_12);
+					     AARCH64_INSN_IMM_12, me);
 			break;
 		case R_AARCH64_LDST128_ABS_LO12_NC:
 			overflow_check = false;
 			ovf = reloc_insn_imm(RELOC_OP_ABS, loc, val, 4, 8,
-					     AARCH64_INSN_IMM_12);
+					     AARCH64_INSN_IMM_12, me);
 			break;
 		case R_AARCH64_TSTBR14:
 			ovf = reloc_insn_imm(RELOC_OP_PREL, loc, val, 2, 14,
-					     AARCH64_INSN_IMM_14);
+					     AARCH64_INSN_IMM_14, me);
 			break;
 		case R_AARCH64_CONDBR19:
 			ovf = reloc_insn_imm(RELOC_OP_PREL, loc, val, 2, 19,
-					     AARCH64_INSN_IMM_19);
+					     AARCH64_INSN_IMM_19, me);
 			break;
 		case R_AARCH64_JUMP26:
 		case R_AARCH64_CALL26:
 			ovf = reloc_insn_imm(RELOC_OP_PREL, loc, val, 2, 26,
-					     AARCH64_INSN_IMM_26);
+					     AARCH64_INSN_IMM_26, me);
 			if (ovf == -ERANGE) {
 				val = module_emit_plt_entry(me, sechdrs, loc, &rel[i], sym);
 				if (!val)
 					return -ENOEXEC;
 				ovf = reloc_insn_imm(RELOC_OP_PREL, loc, val, 2,
-						     26, AARCH64_INSN_IMM_26);
+						     26, AARCH64_INSN_IMM_26, me);
 			}
 			break;
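WRITE_PLACE() is what lets apply_relocate_add() run against a module that is already executing (the livepatch late-relocation case): while the module is still MODULE_STATE_UNFORMED its text is private and a plain store suffices, but once it is live the write must go through the text-poke machinery. A user-space analogue of the idea, with patch_copy() as a hypothetical stand-in for the kernel's aarch64_insn_copy():

#include <stddef.h>
#include <string.h>

enum mod_state { MOD_UNFORMED, MOD_LIVE };

/* stand-in: the kernel remaps the target writable via a fixmap and does
 * the required cache maintenance; memcpy only models the data flow */
static void patch_copy(void *dst, const void *src, size_t len)
{
	memcpy(dst, src, len);
}

#define WRITE_PLACE(place, val, state) do {				\
	__typeof__(val) __val = (val);					\
									\
	if ((state) == MOD_UNFORMED)					\
		*(place) = __val;					\
	else								\
		patch_copy((place), &__val, sizeof(*(place)));		\
} while (0)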
diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
@@ -200,7 +200,7 @@ static void mte_update_sctlr_user(struct task_struct *task)
 	 * program requested values go with what was requested.
 	 */
 	resolved_mte_tcf = (mte_ctrl & pref) ? pref : mte_ctrl;
-	sctlr &= ~SCTLR_EL1_TCF0_MASK;
+	sctlr &= ~(SCTLR_EL1_TCF0_MASK | SCTLR_EL1_TCSO0_MASK);
 	/*
 	 * Pick an actual setting. The order in which we check for
 	 * set bits and map into register values determines our
@@ -212,6 +212,10 @@ static void mte_update_sctlr_user(struct task_struct *task)
 		sctlr |= SYS_FIELD_PREP_ENUM(SCTLR_EL1, TCF0, ASYNC);
 	else if (resolved_mte_tcf & MTE_CTRL_TCF_SYNC)
 		sctlr |= SYS_FIELD_PREP_ENUM(SCTLR_EL1, TCF0, SYNC);
+
+	if (mte_ctrl & MTE_CTRL_STORE_ONLY)
+		sctlr |= SYS_FIELD_PREP(SCTLR_EL1, TCSO0, 1);
+
 	task->thread.sctlr_user = sctlr;
 }
 
@@ -371,6 +375,9 @@ long set_mte_ctrl(struct task_struct *task, unsigned long arg)
 	    (arg & PR_MTE_TCF_SYNC))
 		mte_ctrl |= MTE_CTRL_TCF_ASYMM;
 
+	if (arg & PR_MTE_STORE_ONLY)
+		mte_ctrl |= MTE_CTRL_STORE_ONLY;
+
 	task->thread.mte_ctrl = mte_ctrl;
 	if (task == current) {
 		preempt_disable();
@@ -398,6 +405,8 @@ long get_mte_ctrl(struct task_struct *task)
 		ret |= PR_MTE_TCF_ASYNC;
 	if (mte_ctrl & MTE_CTRL_TCF_SYNC)
 		ret |= PR_MTE_TCF_SYNC;
+	if (mte_ctrl & MTE_CTRL_STORE_ONLY)
+		ret |= PR_MTE_STORE_ONLY;
 
 	return ret;
 }
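mte_update_sctlr_user() now derives two independent SCTLR_EL1 fields from the per-task mte_ctrl word: TCF0 selects the tag-check-fault reporting mode, and the new TCSO0 bit restricts tag checking to stores. A condensed sketch of that mapping; the bit positions below are illustrative placeholders, not the architectural encodings:

#include <stdint.h>

#define CTRL_TCF_SYNC		(1u << 0)	/* illustrative */
#define CTRL_TCF_ASYNC		(1u << 1)	/* illustrative */
#define CTRL_STORE_ONLY		(1u << 2)	/* illustrative */

#define SCTLR_TCF0_MASK		(3ull << 1)	/* illustrative */
#define SCTLR_TCF0_SYNC		(1ull << 1)
#define SCTLR_TCF0_ASYNC	(2ull << 1)
#define SCTLR_TCSO0		(1ull << 3)	/* illustrative */

static uint64_t update_sctlr(uint64_t sctlr, unsigned int ctrl)
{
	/* clear both fields up front, exactly as the patched code does */
	sctlr &= ~(SCTLR_TCF0_MASK | SCTLR_TCSO0);

	if (ctrl & CTRL_TCF_ASYNC)
		sctlr |= SCTLR_TCF0_ASYNC;
	else if (ctrl & CTRL_TCF_SYNC)
		sctlr |= SCTLR_TCF0_SYNC;

	if (ctrl & CTRL_STORE_ONLY)
		sctlr |= SCTLR_TCSO0;

	return sctlr;
}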
diff --git a/arch/arm64/kernel/pi/Makefile b/arch/arm64/kernel/pi/Makefile
@@ -41,4 +41,4 @@ obj-y := idreg-override.pi.o \
 obj-$(CONFIG_RELOCATABLE)	+= relocate.pi.o
 obj-$(CONFIG_RANDOMIZE_BASE)	+= kaslr_early.pi.o
 obj-$(CONFIG_UNWIND_PATCH_PAC_INTO_SCS) += patch-scs.pi.o
-extra-y		:= $(patsubst %.pi.o,%.o,$(obj-y))
+targets		:= $(patsubst %.pi.o,%.o,$(obj-y))
diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c
@@ -292,8 +292,8 @@ int __kprobes kprobe_fault_handler(struct pt_regs *regs, unsigned int fsr)
 	return 0;
 }
 
-static int __kprobes
-kprobe_breakpoint_handler(struct pt_regs *regs, unsigned long esr)
+int __kprobes
+kprobe_brk_handler(struct pt_regs *regs, unsigned long esr)
 {
 	struct kprobe *p, *cur_kprobe;
 	struct kprobe_ctlblk *kcb;
@@ -336,13 +336,8 @@ kprobe_breakpoint_handler(struct pt_regs *regs, unsigned long esr)
 	return DBG_HOOK_HANDLED;
 }
 
-static struct break_hook kprobes_break_hook = {
-	.imm = KPROBES_BRK_IMM,
-	.fn = kprobe_breakpoint_handler,
-};
-
-static int __kprobes
-kprobe_breakpoint_ss_handler(struct pt_regs *regs, unsigned long esr)
+int __kprobes
+kprobe_ss_brk_handler(struct pt_regs *regs, unsigned long esr)
 {
 	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
 	unsigned long addr = instruction_pointer(regs);
@@ -360,13 +355,8 @@ kprobe_breakpoint_ss_handler(struct pt_regs *regs, unsigned long esr)
 	return DBG_HOOK_ERROR;
 }
 
-static struct break_hook kprobes_break_ss_hook = {
-	.imm = KPROBES_BRK_SS_IMM,
-	.fn = kprobe_breakpoint_ss_handler,
-};
-
-static int __kprobes
-kretprobe_breakpoint_handler(struct pt_regs *regs, unsigned long esr)
+int __kprobes
+kretprobe_brk_handler(struct pt_regs *regs, unsigned long esr)
 {
 	if (regs->pc != (unsigned long)__kretprobe_trampoline)
 		return DBG_HOOK_ERROR;
@@ -375,11 +365,6 @@ kretprobe_breakpoint_handler(struct pt_regs *regs, unsigned long esr)
 	return DBG_HOOK_HANDLED;
 }
 
-static struct break_hook kretprobes_break_hook = {
-	.imm = KRETPROBES_BRK_IMM,
-	.fn = kretprobe_breakpoint_handler,
-};
-
 /*
  * Provide a blacklist of symbols identifying ranges which cannot be kprobed.
  * This blacklist is exposed to userspace via debugfs (kprobes/blacklist).
@@ -422,9 +407,5 @@ int __kprobes arch_trampoline_kprobe(struct kprobe *p)
 
 int __init arch_init_kprobes(void)
 {
-	register_kernel_break_hook(&kprobes_break_hook);
-	register_kernel_break_hook(&kprobes_break_ss_hook);
-	register_kernel_break_hook(&kretprobes_break_hook);
-
 	return 0;
 }
diff --git a/arch/arm64/kernel/probes/kprobes_trampoline.S b/arch/arm64/kernel/probes/kprobes_trampoline.S
@@ -12,7 +12,7 @@
 SYM_CODE_START(__kretprobe_trampoline)
 	/*
 	 * Trigger a breakpoint exception. The PC will be adjusted by
-	 * kretprobe_breakpoint_handler(), and no subsequent instructions will
+	 * kretprobe_brk_handler(), and no subsequent instructions will
 	 * be executed from the trampoline.
 	 */
 	brk #KRETPROBES_BRK_IMM
diff --git a/arch/arm64/kernel/probes/uprobes.c b/arch/arm64/kernel/probes/uprobes.c
@@ -173,7 +173,7 @@ int arch_uprobe_exception_notify(struct notifier_block *self,
 	return NOTIFY_DONE;
 }
 
-static int uprobe_breakpoint_handler(struct pt_regs *regs,
-				     unsigned long esr)
+int uprobe_brk_handler(struct pt_regs *regs,
+		       unsigned long esr)
 {
 	if (uprobe_pre_sstep_notifier(regs))
@@ -182,7 +182,7 @@ static int uprobe_breakpoint_handler(struct pt_regs *regs,
 	return DBG_HOOK_ERROR;
 }
 
-static int uprobe_single_step_handler(struct pt_regs *regs,
-				      unsigned long esr)
+int uprobe_single_step_handler(struct pt_regs *regs,
+			       unsigned long esr)
 {
 	struct uprobe_task *utask = current->utask;
@@ -194,23 +194,3 @@ static int uprobe_single_step_handler(struct pt_regs *regs,
 	return DBG_HOOK_ERROR;
 }
-
-/* uprobe breakpoint handler hook */
-static struct break_hook uprobes_break_hook = {
-	.imm = UPROBES_BRK_IMM,
-	.fn = uprobe_breakpoint_handler,
-};
-
-/* uprobe single step handler hook */
-static struct step_hook uprobes_step_hook = {
-	.fn = uprobe_single_step_handler,
-};
-
-static int __init arch_init_uprobes(void)
-{
-	register_user_break_hook(&uprobes_break_hook);
-	register_user_step_hook(&uprobes_step_hook);
-
-	return 0;
-}
-
-device_initcall(arch_init_uprobes);
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
@@ -307,13 +307,13 @@ static int copy_thread_gcs(struct task_struct *p,
 	p->thread.gcs_base = 0;
 	p->thread.gcs_size = 0;
 
+	p->thread.gcs_el0_mode = current->thread.gcs_el0_mode;
+	p->thread.gcs_el0_locked = current->thread.gcs_el0_locked;
+
 	gcs = gcs_alloc_thread_stack(p, args);
 	if (IS_ERR_VALUE(gcs))
 		return PTR_ERR((void *)gcs);
 
-	p->thread.gcs_el0_mode = current->thread.gcs_el0_mode;
-	p->thread.gcs_el0_locked = current->thread.gcs_el0_locked;
-
 	return 0;
 }
 
@@ -341,7 +341,6 @@ void flush_thread(void)
 void arch_release_task_struct(struct task_struct *tsk)
 {
 	fpsimd_release_task(tsk);
-	gcs_free(tsk);
 }
 
 int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
@@ -856,10 +855,14 @@ long set_tagged_addr_ctrl(struct task_struct *task, unsigned long arg)
 	if (is_compat_thread(ti))
 		return -EINVAL;
 
-	if (system_supports_mte())
+	if (system_supports_mte()) {
 		valid_mask |= PR_MTE_TCF_SYNC | PR_MTE_TCF_ASYNC \
 			| PR_MTE_TAG_MASK;
+
+		if (cpus_have_cap(ARM64_MTE_STORE_ONLY))
+			valid_mask |= PR_MTE_STORE_ONLY;
+	}
 
 	if (arg & ~valid_mask)
 		return -EINVAL;
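With the valid_mask change above, userspace can request store-only tag checking through the existing tagged-address-control prctl. A hedged usage sketch; PR_MTE_STORE_ONLY comes from the updated <linux/prctl.h>, and the fallback definition below assumes the upstream value rather than quoting it authoritatively:

#include <sys/prctl.h>

#ifndef PR_MTE_STORE_ONLY
#define PR_MTE_STORE_ONLY	(1UL << 19)	/* assumed uapi value */
#endif

int enable_store_only_mte(void)
{
	unsigned long ctrl = PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_SYNC |
			     PR_MTE_STORE_ONLY;

	/* fails with EINVAL when FEAT_MTE_STORE_ONLY is absent, because
	 * the new bit then stays outside valid_mask */
	return prctl(PR_SET_TAGGED_ADDR_CTRL, ctrl, 0, 0, 0);
}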
diff --git a/arch/arm64/kernel/sdei.c b/arch/arm64/kernel/sdei.c
@@ -34,10 +34,8 @@ unsigned long sdei_exit_mode;
 DECLARE_PER_CPU(unsigned long *, sdei_stack_normal_ptr);
 DECLARE_PER_CPU(unsigned long *, sdei_stack_critical_ptr);
 
-#ifdef CONFIG_VMAP_STACK
 DEFINE_PER_CPU(unsigned long *, sdei_stack_normal_ptr);
 DEFINE_PER_CPU(unsigned long *, sdei_stack_critical_ptr);
-#endif
 
 DECLARE_PER_CPU(unsigned long *, sdei_shadow_call_stack_normal_ptr);
 DECLARE_PER_CPU(unsigned long *, sdei_shadow_call_stack_critical_ptr);
@@ -65,8 +63,7 @@ static void free_sdei_stacks(void)
 {
 	int cpu;
 
-	if (!IS_ENABLED(CONFIG_VMAP_STACK))
-		return;
+	BUILD_BUG_ON(!IS_ENABLED(CONFIG_VMAP_STACK));
 
 	for_each_possible_cpu(cpu) {
 		_free_sdei_stack(&sdei_stack_normal_ptr, cpu);
@@ -91,8 +88,7 @@ static int init_sdei_stacks(void)
 	int cpu;
 	int err = 0;
 
-	if (!IS_ENABLED(CONFIG_VMAP_STACK))
-		return 0;
+	BUILD_BUG_ON(!IS_ENABLED(CONFIG_VMAP_STACK));
 
 	for_each_possible_cpu(cpu) {
 		err = _init_sdei_stack(&sdei_stack_normal_ptr, cpu);
diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
@@ -95,8 +95,11 @@ static void save_reset_user_access_state(struct user_access_state *ua_state)
 
 		ua_state->por_el0 = read_sysreg_s(SYS_POR_EL0);
 		write_sysreg_s(por_enable_all, SYS_POR_EL0);
-		/* Ensure that any subsequent uaccess observes the updated value */
-		isb();
+		/*
+		 * No ISB required as we can tolerate spurious Overlay faults -
+		 * the fault handler will check again based on the new value
+		 * of POR_EL0.
+		 */
 	}
 }
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
@@ -152,6 +152,8 @@ kunwind_recover_return_address(struct kunwind_state *state)
 		orig_pc = kretprobe_find_ret_addr(state->task,
 						  (void *)state->common.fp,
 						  &state->kr_cur);
+		if (!orig_pc)
+			return -EINVAL;
 		state->common.pc = orig_pc;
 		state->flags.kretprobe = 1;
 	}
@@ -277,21 +279,24 @@ kunwind_next(struct kunwind_state *state)
 
 typedef bool (*kunwind_consume_fn)(const struct kunwind_state *state, void *cookie);
 
-static __always_inline void
+static __always_inline int
 do_kunwind(struct kunwind_state *state, kunwind_consume_fn consume_state,
 	   void *cookie)
 {
-	if (kunwind_recover_return_address(state))
-		return;
+	int ret;
+
+	ret = kunwind_recover_return_address(state);
+	if (ret)
+		return ret;
 
 	while (1) {
-		int ret;
-
 		if (!consume_state(state, cookie))
-			break;
+			return -EINVAL;
 		ret = kunwind_next(state);
+		if (ret == -ENOENT)
+			return 0;
 		if (ret < 0)
-			break;
+			return ret;
 	}
 }
@@ -324,7 +329,7 @@ do_kunwind(struct kunwind_state *state, kunwind_consume_fn consume_state,
 	 : stackinfo_get_unknown();		\
 	})
 
-static __always_inline void
+static __always_inline int
 kunwind_stack_walk(kunwind_consume_fn consume_state,
 		   void *cookie, struct task_struct *task,
 		   struct pt_regs *regs)
@@ -332,10 +337,8 @@ kunwind_stack_walk(kunwind_consume_fn consume_state,
 	struct stack_info stacks[] = {
 		stackinfo_get_task(task),
 		STACKINFO_CPU(irq),
-#if defined(CONFIG_VMAP_STACK)
 		STACKINFO_CPU(overflow),
-#endif
-#if defined(CONFIG_VMAP_STACK) && defined(CONFIG_ARM_SDE_INTERFACE)
+#if defined(CONFIG_ARM_SDE_INTERFACE)
 		STACKINFO_SDEI(normal),
 		STACKINFO_SDEI(critical),
 #endif
@@ -352,7 +355,7 @@ kunwind_stack_walk(kunwind_consume_fn consume_state,
 
 	if (regs) {
 		if (task != current)
-			return;
+			return -EINVAL;
 		kunwind_init_from_regs(&state, regs);
 	} else if (task == current) {
 		kunwind_init_from_caller(&state);
@@ -360,7 +363,7 @@ kunwind_stack_walk(kunwind_consume_fn consume_state,
 		kunwind_init_from_task(&state, task);
 	}
 
-	do_kunwind(&state, consume_state, cookie);
+	return do_kunwind(&state, consume_state, cookie);
 }
 
 struct kunwind_consume_entry_data {
@@ -387,6 +390,36 @@ noinline noinstr void arch_stack_walk(stack_trace_consume_fn consume_entry,
 	kunwind_stack_walk(arch_kunwind_consume_entry, &data, task, regs);
 }
 
+static __always_inline bool
+arch_reliable_kunwind_consume_entry(const struct kunwind_state *state, void *cookie)
+{
+	/*
+	 * At an exception boundary we can reliably consume the saved PC. We do
+	 * not know whether the LR was live when the exception was taken, and
+	 * so we cannot perform the next unwind step reliably.
+	 *
+	 * All that matters is whether the *entire* unwind is reliable, so give
+	 * up as soon as we hit an exception boundary.
+	 */
+	if (state->source == KUNWIND_SOURCE_REGS_PC)
+		return false;
+
+	return arch_kunwind_consume_entry(state, cookie);
+}
+
+noinline noinstr int arch_stack_walk_reliable(stack_trace_consume_fn consume_entry,
+					      void *cookie,
+					      struct task_struct *task)
+{
+	struct kunwind_consume_entry_data data = {
+		.consume_entry = consume_entry,
+		.cookie = cookie,
+	};
+
+	return kunwind_stack_walk(arch_reliable_kunwind_consume_entry, &data,
+				  task, NULL);
+}
+
 struct bpf_unwind_consume_entry_data {
 	bool (*consume_entry)(void *cookie, u64 ip, u64 sp, u64 fp);
 	void *cookie;
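arch_stack_walk_reliable() is the piece that HAVE_LIVEPATCH depends on: it returns 0 only when every frame was unwound reliably, and any exception boundary aborts the walk. A minimal consumer sketch (kernel context assumed; print_entry and dump_reliable are illustrative names, while stack_trace_consume_fn and the arch_stack_walk_reliable() signature come from the patch itself):

/* matches typedef bool (*stack_trace_consume_fn)(void *, unsigned long) */
static bool print_entry(void *cookie, unsigned long pc)
{
	pr_info("%pS\n", (void *)pc);
	return true;	/* keep unwinding */
}

static int dump_reliable(struct task_struct *task)
{
	/* 0 on a fully reliable trace, negative errno otherwise */
	return arch_stack_walk_reliable(print_entry, NULL, task);
}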
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
@@ -454,7 +454,7 @@ void do_el0_undef(struct pt_regs *regs, unsigned long esr)
 	u32 insn;
 
 	/* check for AArch32 breakpoint instructions */
-	if (!aarch32_break_handler(regs))
+	if (try_handle_aarch32_break(regs))
 		return;
 
 	if (user_insn_read(regs, &insn))
@@ -894,8 +894,6 @@ void bad_el0_sync(struct pt_regs *regs, int reason, unsigned long esr)
 		      "Bad EL0 synchronous exception");
 }
 
-#ifdef CONFIG_VMAP_STACK
-
 DEFINE_PER_CPU(unsigned long [OVERFLOW_STACK_SIZE/sizeof(long)], overflow_stack)
 	__aligned(16);
 
@@ -927,10 +925,10 @@ void __noreturn panic_bad_stack(struct pt_regs *regs, unsigned long esr, unsigned long far)
 	nmi_panic(NULL, "kernel stack overflow");
 	cpu_park_loop();
 }
-#endif
 
 void __noreturn arm64_serror_panic(struct pt_regs *regs, unsigned long esr)
 {
+	add_taint(TAINT_MACHINE_CHECK, LOCKDEP_STILL_OK);
 	console_verbose();
 
 	pr_crit("SError Interrupt on CPU%d, code 0x%016lx -- %s\n",
@@ -987,7 +985,7 @@ void do_serror(struct pt_regs *regs, unsigned long esr)
 int is_valid_bugaddr(unsigned long addr)
 {
 	/*
-	 * bug_handler() only called for BRK #BUG_BRK_IMM.
+	 * bug_brk_handler() only called for BRK #BUG_BRK_IMM.
 	 * So the answer is trivial -- any spurious instances with no
 	 * bug table entry will be rejected by report_bug() and passed
 	 * back to the debug-monitors code and handled as a fatal
@@ -997,7 +995,7 @@ int is_valid_bugaddr(unsigned long addr)
 }
 #endif
 
-static int bug_handler(struct pt_regs *regs, unsigned long esr)
+int bug_brk_handler(struct pt_regs *regs, unsigned long esr)
 {
 	switch (report_bug(regs->pc, regs)) {
 	case BUG_TRAP_TYPE_BUG:
@@ -1017,13 +1015,8 @@ static int bug_handler(struct pt_regs *regs, unsigned long esr)
 	return DBG_HOOK_HANDLED;
 }
 
-static struct break_hook bug_break_hook = {
-	.fn = bug_handler,
-	.imm = BUG_BRK_IMM,
-};
-
 #ifdef CONFIG_CFI_CLANG
-static int cfi_handler(struct pt_regs *regs, unsigned long esr)
+int cfi_brk_handler(struct pt_regs *regs, unsigned long esr)
 {
 	unsigned long target;
 	u32 type;
@@ -1046,15 +1039,9 @@ static int cfi_handler(struct pt_regs *regs, unsigned long esr)
 	arm64_skip_faulting_instruction(regs, AARCH64_INSN_SIZE);
 	return DBG_HOOK_HANDLED;
 }
-
-static struct break_hook cfi_break_hook = {
-	.fn = cfi_handler,
-	.imm = CFI_BRK_IMM_BASE,
-	.mask = CFI_BRK_IMM_MASK,
-};
 #endif /* CONFIG_CFI_CLANG */
 
-static int reserved_fault_handler(struct pt_regs *regs, unsigned long esr)
+int reserved_fault_brk_handler(struct pt_regs *regs, unsigned long esr)
 {
 	pr_err("%s generated an invalid instruction at %pS!\n",
 		"Kernel text patching",
@@ -1064,11 +1051,6 @@ static int reserved_fault_handler(struct pt_regs *regs, unsigned long esr)
 	return DBG_HOOK_ERROR;
 }
 
-static struct break_hook fault_break_hook = {
-	.fn = reserved_fault_handler,
-	.imm = FAULT_BRK_IMM,
-};
-
 #ifdef CONFIG_KASAN_SW_TAGS
 
 #define KASAN_ESR_RECOVER	0x20
@@ -1076,7 +1058,7 @@ static struct break_hook fault_break_hook = {
 #define KASAN_ESR_SIZE_MASK	0x0f
 #define KASAN_ESR_SIZE(esr)	(1 << ((esr) & KASAN_ESR_SIZE_MASK))
 
-static int kasan_handler(struct pt_regs *regs, unsigned long esr)
+int kasan_brk_handler(struct pt_regs *regs, unsigned long esr)
 {
 	bool recover = esr & KASAN_ESR_RECOVER;
 	bool write = esr & KASAN_ESR_WRITE;
@@ -1107,62 +1089,12 @@ static int kasan_handler(struct pt_regs *regs, unsigned long esr)
 	arm64_skip_faulting_instruction(regs, AARCH64_INSN_SIZE);
 	return DBG_HOOK_HANDLED;
 }
-
-static struct break_hook kasan_break_hook = {
-	.fn	= kasan_handler,
-	.imm	= KASAN_BRK_IMM,
-	.mask	= KASAN_BRK_MASK,
-};
 #endif
 
 #ifdef CONFIG_UBSAN_TRAP
-static int ubsan_handler(struct pt_regs *regs, unsigned long esr)
+int ubsan_brk_handler(struct pt_regs *regs, unsigned long esr)
 {
 	die(report_ubsan_failure(esr & UBSAN_BRK_MASK), regs, esr);
 	return DBG_HOOK_HANDLED;
 }
-
-static struct break_hook ubsan_break_hook = {
-	.fn	= ubsan_handler,
-	.imm	= UBSAN_BRK_IMM,
-	.mask	= UBSAN_BRK_MASK,
-};
 #endif
-
-/*
- * Initial handler for AArch64 BRK exceptions
- * This handler only used until debug_traps_init().
- */
-int __init early_brk64(unsigned long addr, unsigned long esr,
-		       struct pt_regs *regs)
-{
-#ifdef CONFIG_CFI_CLANG
-	if (esr_is_cfi_brk(esr))
-		return cfi_handler(regs, esr) != DBG_HOOK_HANDLED;
-#endif
-#ifdef CONFIG_KASAN_SW_TAGS
-	if ((esr_brk_comment(esr) & ~KASAN_BRK_MASK) == KASAN_BRK_IMM)
-		return kasan_handler(regs, esr) != DBG_HOOK_HANDLED;
-#endif
-#ifdef CONFIG_UBSAN_TRAP
-	if (esr_is_ubsan_brk(esr))
-		return ubsan_handler(regs, esr) != DBG_HOOK_HANDLED;
-#endif
-	return bug_handler(regs, esr) != DBG_HOOK_HANDLED;
-}
-
-void __init trap_init(void)
-{
-	register_kernel_break_hook(&bug_break_hook);
-#ifdef CONFIG_CFI_CLANG
-	register_kernel_break_hook(&cfi_break_hook);
-#endif
-	register_kernel_break_hook(&fault_break_hook);
-#ifdef CONFIG_KASAN_SW_TAGS
-	register_kernel_break_hook(&kasan_break_hook);
-#endif
-#ifdef CONFIG_UBSAN_TRAP
-	register_kernel_break_hook(&ubsan_break_hook);
-#endif
-	debug_traps_init();
-}
diff --git a/arch/arm64/kernel/watchdog_hld.c b/arch/arm64/kernel/watchdog_hld.c
@@ -34,3 +34,61 @@ bool __init arch_perf_nmi_is_available(void)
 	 */
 	return arm_pmu_irq_is_nmi();
 }
+
+static int watchdog_perf_update_period(void *data)
+{
+	int cpu = smp_processor_id();
+	u64 max_cpu_freq, new_period;
+
+	max_cpu_freq = cpufreq_get_hw_max_freq(cpu) * 1000UL;
+	if (!max_cpu_freq)
+		return 0;
+
+	new_period = watchdog_thresh * max_cpu_freq;
+	hardlockup_detector_perf_adjust_period(new_period);
+
+	return 0;
+}
+
+static int watchdog_freq_notifier_callback(struct notifier_block *nb,
+					   unsigned long val, void *data)
+{
+	struct cpufreq_policy *policy = data;
+	int cpu;
+
+	if (val != CPUFREQ_CREATE_POLICY)
+		return NOTIFY_DONE;
+
+	/*
+	 * Let each online CPU related to the policy update the period by their
+	 * own. This will serialize with the framework on start/stop the lockup
+	 * detector (softlockup_{start,stop}_all) and avoid potential race
+	 * condition. Otherwise we may have below theoretical race condition:
+	 * (core 0/1 share the same policy)
+	 * [core 0]                      [core 1]
+	 *                               hardlockup_detector_event_create()
+	 *                                 hw_nmi_get_sample_period()
+	 * (cpufreq registered, notifier callback invoked)
+	 * watchdog_freq_notifier_callback()
+	 *   watchdog_perf_update_period()
+	 *     (since core 1's event's not yet created,
+	 *      the period is not set)
+	 *                               perf_event_create_kernel_counter()
+	 *                                 (event's period is SAFE_MAX_CPU_FREQ)
+	 */
+	for_each_cpu(cpu, policy->cpus)
+		smp_call_on_cpu(cpu, watchdog_perf_update_period, NULL, false);
+
+	return NOTIFY_DONE;
+}
+
+static struct notifier_block watchdog_freq_notifier = {
+	.notifier_call = watchdog_freq_notifier_callback,
+};
+
+static int __init init_watchdog_freq_notifier(void)
+{
+	return cpufreq_register_notifier(&watchdog_freq_notifier,
+					 CPUFREQ_POLICY_NOTIFIER);
+}
+core_initcall(init_watchdog_freq_notifier);
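For a concrete feel of the numbers (values assumed for illustration): cpufreq_get_hw_max_freq() reports kHz, so on a part whose maximum frequency is 2.5 GHz, max_cpu_freq = 2,500,000 * 1000 = 2.5e9 cycles per second. With the default watchdog_thresh of 10 seconds, the adjusted perf event period becomes 10 * 2.5e9 = 2.5e10 CPU cycles, i.e. one cycle-counter overflow per watchdog window, instead of the conservative SAFE_MAX_CPU_FREQ-based guess that was used before cpufreq registered.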
diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
@@ -81,6 +81,10 @@ void kvm_init_host_debug_data(void)
 	    !(read_sysreg_s(SYS_PMBIDR_EL1) & PMBIDR_EL1_P))
 		host_data_set_flag(HAS_SPE);
 
+	/* Check if we have BRBE implemented and available at the host */
+	if (cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_EL1_BRBE_SHIFT))
+		host_data_set_flag(HAS_BRBE);
+
 	if (cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_EL1_TraceFilt_SHIFT)) {
 		/* Force disable trace in protected mode in case of no TRBE */
 		if (is_protected_kvm_enabled())
diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
@@ -92,12 +92,42 @@ static void __trace_switch_to_host(void)
 			   *host_data_ptr(host_debug_state.trfcr_el1));
 }
 
+static void __debug_save_brbe(u64 *brbcr_el1)
+{
+	*brbcr_el1 = 0;
+
+	/* Check if the BRBE is enabled */
+	if (!(read_sysreg_el1(SYS_BRBCR) & (BRBCR_ELx_E0BRE | BRBCR_ELx_ExBRE)))
+		return;
+
+	/*
+	 * Prohibit branch record generation while we are in guest.
+	 * Since access to BRBCR_EL1 is trapped, the guest can't
+	 * modify the filtering set by the host.
+	 */
+	*brbcr_el1 = read_sysreg_el1(SYS_BRBCR);
+	write_sysreg_el1(0, SYS_BRBCR);
+}
+
+static void __debug_restore_brbe(u64 brbcr_el1)
+{
+	if (!brbcr_el1)
+		return;
+
+	/* Restore BRBE controls */
+	write_sysreg_el1(brbcr_el1, SYS_BRBCR);
+}
+
 void __debug_save_host_buffers_nvhe(struct kvm_vcpu *vcpu)
 {
 	/* Disable and flush SPE data generation */
 	if (host_data_test_flag(HAS_SPE))
 		__debug_save_spe(host_data_ptr(host_debug_state.pmscr_el1));
 
+	/* Disable BRBE branch records */
+	if (host_data_test_flag(HAS_BRBE))
+		__debug_save_brbe(host_data_ptr(host_debug_state.brbcr_el1));
+
 	if (__trace_needs_switch())
 		__trace_switch_to_guest();
 }
@@ -111,6 +141,8 @@ void __debug_restore_host_buffers_nvhe(struct kvm_vcpu *vcpu)
 {
 	if (host_data_test_flag(HAS_SPE))
 		__debug_restore_spe(*host_data_ptr(host_debug_state.pmscr_el1));
+	if (host_data_test_flag(HAS_BRBE))
+		__debug_restore_brbe(*host_data_ptr(host_debug_state.brbcr_el1));
 	if (__trace_needs_switch())
 		__trace_switch_to_host();
 }
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -272,7 +272,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 	 * We're about to restore some new MMU state. Make sure
 	 * ongoing page-table walks that have started before we
 	 * trapped to EL2 have completed. This also synchronises the
-	 * above disabling of SPE and TRBE.
+	 * above disabling of BRBE, SPE and TRBE.
 	 *
 	 * See DDI0487I.a D8.1.5 "Out-of-context translation regimes",
 	 * rule R_LFHQG and subsequent information statements.
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
@@ -1617,8 +1617,10 @@ static u64 __kvm_read_sanitised_id_reg(const struct kvm_vcpu *vcpu,
 		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MPAM_frac);
 		break;
 	case SYS_ID_AA64PFR2_EL1:
-		/* We only expose FPMR */
-		val &= ID_AA64PFR2_EL1_FPMR;
+		val &= ID_AA64PFR2_EL1_FPMR |
+		       (kvm_has_mte(vcpu->kvm) ?
+			ID_AA64PFR2_EL1_MTEFAR | ID_AA64PFR2_EL1_MTESTOREONLY :
+			0);
 		break;
 	case SYS_ID_AA64ISAR1_EL1:
 		if (!vcpu_has_ptrauth(vcpu))
@@ -2878,7 +2880,10 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 					ID_AA64PFR1_EL1_MPAM_frac |
 					ID_AA64PFR1_EL1_RAS_frac |
 					ID_AA64PFR1_EL1_MTE)),
-	ID_WRITABLE(ID_AA64PFR2_EL1, ID_AA64PFR2_EL1_FPMR),
+	ID_WRITABLE(ID_AA64PFR2_EL1,
+		    ID_AA64PFR2_EL1_FPMR |
+		    ID_AA64PFR2_EL1_MTEFAR |
+		    ID_AA64PFR2_EL1_MTESTOREONLY),
 	ID_UNALLOCATED(4,3),
 	ID_WRITABLE(ID_AA64ZFR0_EL1, ~ID_AA64ZFR0_EL1_RES0),
 	ID_HIDDEN(ID_AA64SMFR0_EL1),
@ -68,7 +68,144 @@ static void contpte_convert(struct mm_struct *mm, unsigned long addr,
|
|||
pte = pte_mkyoung(pte);
|
||||
}
|
||||
|
||||
__flush_tlb_range(&vma, start_addr, addr, PAGE_SIZE, true, 3);
|
||||
/*
|
||||
* On eliding the __tlb_flush_range() under BBML2+noabort:
|
||||
*
|
||||
* NOTE: Instead of using N=16 as the contiguous block length, we use
|
||||
* N=4 for clarity.
|
||||
*
|
||||
* NOTE: 'n' and 'c' are used to denote the "contiguous bit" being
|
||||
* unset and set, respectively.
|
||||
*
|
||||
* We worry about two cases where contiguous bit is used:
|
||||
* - When folding N smaller non-contiguous ptes as 1 contiguous block.
|
||||
* - When unfolding a contiguous block into N smaller non-contiguous ptes.
|
||||
*
|
||||
* Currently, the BBML0 folding case looks as follows:
|
||||
*
|
||||
* 0) Initial page-table layout:
|
||||
*
|
||||
* +----+----+----+----+
|
||||
* |RO,n|RO,n|RO,n|RW,n| <--- last page being set as RO
|
||||
* +----+----+----+----+
|
||||
*
|
||||
* 1) Aggregate AF + dirty flags using __ptep_get_and_clear():
|
||||
*
|
||||
* +----+----+----+----+
|
||||
* | 0 | 0 | 0 | 0 |
|
||||
* +----+----+----+----+
|
||||
*
|
||||
* 2) __flush_tlb_range():
|
||||
*
|
||||
* |____ tlbi + dsb ____|
|
||||
*
|
||||
* 3) __set_ptes() to repaint contiguous block:
|
||||
*
|
||||
* +----+----+----+----+
|
||||
* |RO,c|RO,c|RO,c|RO,c|
|
||||
* +----+----+----+----+
|
||||
*
|
||||
* 4) The kernel will eventually __flush_tlb() for changed page:
|
||||
*
|
||||
* |____| <--- tlbi + dsb
|
||||
*
|
||||
* As expected, the intermediate tlbi+dsb ensures that other PEs
|
||||
* only ever see an invalid (0) entry, or the new contiguous TLB entry.
|
||||
 * The final tlbi+dsb will always throw away the newly installed
 * contiguous TLB entry, which is a micro-optimisation opportunity,
 * but does not affect correctness.
 *
 * In the BBML2 case, the change is avoiding the intermediate tlbi+dsb.
 * This means a few things, but notably other PEs will still "see" any
 * stale cached TLB entries. This could lead to a "contiguous bit
 * misprogramming" issue until the final tlbi+dsb of the changed page,
 * which would clear out both the stale (RW,n) entry and the new (RO,c)
 * contiguous entry installed in its place.
 *
 * What this is saying, is the following:
 *
 * +----+----+----+----+
 * |RO,n|RO,n|RO,n|RW,n| <--- old page tables, all non-contiguous
 * +----+----+----+----+
 *
 * +----+----+----+----+
 * |RO,c|RO,c|RO,c|RO,c| <--- new page tables, all contiguous
 * +----+----+----+----+
 *                  /\
 *                  ||
 *
 * If both the old single (RW,n) and new contiguous (RO,c) TLB entries
 * are present, and a write is made to this address, do we fault or
 * is the write permitted (via amalgamation)?
 *
 * The relevant Arm ARM DDI 0487L.a requirements are RNGLXZ and RJQQTC,
 * and together state that when BBML1 or BBML2 are implemented, either
 * a TLB conflict abort is raised (which we expressly forbid), or will
 * "produce an OA, access permissions, and memory attributes that are
 * consistent with any of the programmed translation table values".
 *
 * That is to say, will either raise a TLB conflict, or produce one of
 * the cached TLB entries, but never amalgamate.
 *
 * Thus, as the page tables are only considered "consistent" after
 * the final tlbi+dsb (which evicts both the single stale (RW,n) TLB
 * entry as well as the new contiguous (RO,c) TLB entry), omitting the
 * initial tlbi+dsb is correct.
 *
 * It is also important to note that at the end of the BBML2 folding
 * case, we are still left with potentially all N TLB entries still
 * cached (the N-1 non-contiguous ptes, and the single contiguous
 * block). However, over time, natural TLB pressure will cause the
 * non-contiguous pte TLB entries to be flushed, leaving only the
 * contiguous block TLB entry. This means that omitting the tlbi+dsb is
 * not only correct, but also keeps our eventual performance benefits.
 *
 * For the unfolding case, BBML0 looks as follows:
 *
 * 0) Initial page-table layout:
 *
 * +----+----+----+----+
 * |RW,c|RW,c|RW,c|RW,c| <--- last page being set as RO
 * +----+----+----+----+
 *
 * 1) Aggregate AF + dirty flags using __ptep_get_and_clear():
 *
 * +----+----+----+----+
 * |  0 |  0 |  0 |  0 |
 * +----+----+----+----+
 *
 * 2) __flush_tlb_range():
 *
 * |____ tlbi + dsb ____|
 *
 * 3) __set_ptes() to repaint as non-contiguous:
 *
 * +----+----+----+----+
 * |RW,n|RW,n|RW,n|RW,n|
 * +----+----+----+----+
 *
 * 4) Update changed page permissions:
 *
 * +----+----+----+----+
 * |RW,n|RW,n|RW,n|RO,n| <--- last page permissions set
 * +----+----+----+----+
 *
 * 5) The kernel will eventually __flush_tlb() for changed page:
 *
 *                  |____| <--- tlbi + dsb
 *
 * For BBML2, we again remove the intermediate tlbi+dsb. Here, there
 * are no issues, as the final tlbi+dsb covering the changed page is
 * guaranteed to remove the original large contiguous (RW,c) TLB entry,
 * as well as the intermediate (RW,n) TLB entry; the next access will
 * install the new (RO,n) TLB entry and the page tables are only
 * considered "consistent" after the final tlbi+dsb, so software must
 * be prepared for this inconsistency prior to finishing the mm dance
 * regardless.
 */

	if (!system_supports_bbml2_noabort())
		__flush_tlb_range(&vma, start_addr, addr, PAGE_SIZE, true, 3);

	__set_ptes(mm, start_addr, start_ptep, pte, CONT_PTES);
}
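Condensed to its skeleton, the fold sequence the comment above justifies looks like the following sketch (illustrative stand-in helpers, not the kernel API); the only behavioural difference on BBML2 hardware is skipping step 2:

#include <stdio.h>

/* Hypothetical stand-ins for __ptep_get_and_clear(), __flush_tlb_range()
 * and __set_ptes(); here they only log what a real implementation does. */
static void clear_entries(int nr)	{ printf("clear %d ptes\n", nr); }
static void flush_span(unsigned long s, unsigned long len)
					{ printf("tlbi+dsb [%#lx,+%#lx)\n", s, len); }
static void install_contig(int nr)	{ printf("set %d contiguous ptes\n", nr); }

/* Folding nr small entries into one contiguous mapping: on BBML2
 * hardware the intermediate invalidate (step 2) can be omitted, since
 * old and new TLB entries may legally coexist without amalgamation. */
static void fold_contig_range(unsigned long start, int nr,
			      unsigned long page_size, int has_bbml2)
{
	clear_entries(nr);			/* 1) tear down old entries */
	if (!has_bbml2)
		flush_span(start, nr * page_size); /* 2) only pre-BBML2 needs it */
	install_contig(nr);			/* 3) single contiguous entry */
}

int main(void)
{
	fold_contig_range(0x400000, 16, 4096, 1); /* BBML2: no middle flush */
	return 0;
}
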
@@ -169,17 +306,46 @@ pte_t contpte_ptep_get(pte_t *ptep, pte_t orig_pte)
	for (i = 0; i < CONT_PTES; i++, ptep++) {
		pte = __ptep_get(ptep);

		if (pte_dirty(pte))
		if (pte_dirty(pte)) {
			orig_pte = pte_mkdirty(orig_pte);
			for (; i < CONT_PTES; i++, ptep++) {
				pte = __ptep_get(ptep);
				if (pte_young(pte)) {
					orig_pte = pte_mkyoung(orig_pte);
					break;
				}
			}
			break;
		}

		if (pte_young(pte))
		if (pte_young(pte)) {
			orig_pte = pte_mkyoung(orig_pte);
			i++;
			ptep++;
			for (; i < CONT_PTES; i++, ptep++) {
				pte = __ptep_get(ptep);
				if (pte_dirty(pte)) {
					orig_pte = pte_mkdirty(orig_pte);
					break;
				}
			}
			break;
		}
	}

	return orig_pte;
}
EXPORT_SYMBOL_GPL(contpte_ptep_get);

static inline bool contpte_is_consistent(pte_t pte, unsigned long pfn,
					 pgprot_t orig_prot)
{
	pgprot_t prot = pte_pgprot(pte_mkold(pte_mkclean(pte)));

	return pte_valid_cont(pte) && pte_pfn(pte) == pfn &&
	       pgprot_val(prot) == pgprot_val(orig_prot);
}

pte_t contpte_ptep_get_lockless(pte_t *orig_ptep)
{
	/*

@@ -202,7 +368,6 @@ pte_t contpte_ptep_get_lockless(pte_t *orig_ptep)
	pgprot_t orig_prot;
	unsigned long pfn;
	pte_t orig_pte;
	pgprot_t prot;
	pte_t *ptep;
	pte_t pte;
	int i;

@@ -219,18 +384,44 @@ retry:

	for (i = 0; i < CONT_PTES; i++, ptep++, pfn++) {
		pte = __ptep_get(ptep);
		prot = pte_pgprot(pte_mkold(pte_mkclean(pte)));

		if (!pte_valid_cont(pte) ||
		    pte_pfn(pte) != pfn ||
		    pgprot_val(prot) != pgprot_val(orig_prot))
		if (!contpte_is_consistent(pte, pfn, orig_prot))
			goto retry;

		if (pte_dirty(pte))
		if (pte_dirty(pte)) {
			orig_pte = pte_mkdirty(orig_pte);
			for (; i < CONT_PTES; i++, ptep++, pfn++) {
				pte = __ptep_get(ptep);

		if (pte_young(pte))
				if (!contpte_is_consistent(pte, pfn, orig_prot))
					goto retry;

				if (pte_young(pte)) {
					orig_pte = pte_mkyoung(orig_pte);
					break;
				}
			}
			break;
		}

		if (pte_young(pte)) {
			orig_pte = pte_mkyoung(orig_pte);
			i++;
			ptep++;
			pfn++;
			for (; i < CONT_PTES; i++, ptep++, pfn++) {
				pte = __ptep_get(ptep);

				if (!contpte_is_consistent(pte, pfn, orig_prot))
					goto retry;

				if (pte_dirty(pte)) {
					orig_pte = pte_mkdirty(orig_pte);
					break;
				}
			}
			break;
		}
	}

	return orig_pte;
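The shape of the new loop is easier to see in isolation. A minimal sketch in plain C (hypothetical flag accessors rather than the pte API): once one of the two accumulated bits has been found, the remainder of the walk only needs to look for the other one, so the scan can stop as soon as both are known.

#include <stdbool.h>
#include <stdio.h>

struct entry { bool dirty, young; };

/* Accumulate dirty/young over n entries, stopping as soon as both are
 * known; mirrors the early-exit structure of contpte_ptep_get(). */
static struct entry fold_flags(const struct entry *tbl, int n)
{
	struct entry acc = { false, false };

	for (int i = 0; i < n; i++) {
		acc.dirty |= tbl[i].dirty;
		acc.young |= tbl[i].young;
		if (acc.dirty && acc.young)
			break;	/* nothing more to learn from the rest */
	}
	return acc;
}

int main(void)
{
	struct entry tbl[16] = { [1] = { .young = true }, [2] = { .dirty = true } };
	struct entry acc = fold_flags(tbl, 16);

	printf("dirty=%d young=%d\n", acc.dirty, acc.young); /* stops at i == 2 */
	return 0;
}
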
@@ -53,18 +53,12 @@ struct fault_info {
};

static const struct fault_info fault_info[];
static struct fault_info debug_fault_info[];

static inline const struct fault_info *esr_to_fault_info(unsigned long esr)
{
	return fault_info + (esr & ESR_ELx_FSC);
}

static inline const struct fault_info *esr_to_debug_fault_info(unsigned long esr)
{
	return debug_fault_info + DBG_ESR_EVT(esr);
}

static void data_abort_decode(unsigned long esr)
{
	unsigned long iss2 = ESR_ELx_ISS2(esr);

@@ -838,6 +832,7 @@ static int do_sea(unsigned long far, unsigned long esr, struct pt_regs *regs)
		 */
		siaddr = untagged_addr(far);
	}
	add_taint(TAINT_MACHINE_CHECK, LOCKDEP_STILL_OK);
	arm64_notify_die(inf->name, regs, inf->sig, inf->code, siaddr, esr);

	return 0;

@@ -849,9 +844,12 @@ static int do_tag_check_fault(unsigned long far, unsigned long esr,
	/*
	 * The architecture specifies that bits 63:60 of FAR_EL1 are UNKNOWN
	 * for tag check faults. Set them to corresponding bits in the untagged
	 * address.
	 * address if ARM64_MTE_FAR isn't supported.
	 * Otherwise, bits 63:60 of FAR_EL1 are not UNKNOWN.
	 */
	far = (__untagged_addr(far) & ~MTE_TAG_MASK) | (far & MTE_TAG_MASK);
	if (!cpus_have_cap(ARM64_MTE_FAR))
		far = (__untagged_addr(far) & ~MTE_TAG_MASK) | (far & MTE_TAG_MASK);

	do_bad_area(far, esr, regs);
	return 0;
}
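The bit manipulation in do_tag_check_fault() is compact; here is a standalone sketch of the same merge (the mask matches the architectural logical-tag position, bits 59:56, but the untagging helper is a simplification of the kernel's __untagged_addr()):

#include <stdint.h>
#include <stdio.h>

#define MTE_TAG_MASK	(0xfULL << 56)	/* MTE logical tag, bits 59:56 */

/* Simplified untagging: clear the whole top byte (the kernel's
 * __untagged_addr() also handles kernel-space sign extension). */
static uint64_t untag(uint64_t addr)
{
	return addr & ~(0xffULL << 56);
}

int main(void)
{
	/* FAR with UNKNOWN bits 63:60 holding garbage (0xd), tag 0x3 */
	uint64_t far = 0xd3000000aabb1040ULL;
	/* Keep the genuine tag bits, rebuild everything above them */
	uint64_t fixed = (untag(far) & ~MTE_TAG_MASK) | (far & MTE_TAG_MASK);

	printf("%#llx\n", (unsigned long long)fixed);	/* 0x3000000aabb1040 */
	return 0;
}
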
@@ -950,75 +948,6 @@ void do_sp_pc_abort(unsigned long addr, unsigned long esr, struct pt_regs *regs)
}
NOKPROBE_SYMBOL(do_sp_pc_abort);

/*
 * __refdata because early_brk64 is __init, but the reference to it is
 * clobbered at arch_initcall time.
 * See traps.c and debug-monitors.c:debug_traps_init().
 */
static struct fault_info __refdata debug_fault_info[] = {
	{ do_bad,	SIGTRAP,	TRAP_HWBKPT,	"hardware breakpoint"	},
	{ do_bad,	SIGTRAP,	TRAP_HWBKPT,	"hardware single-step"	},
	{ do_bad,	SIGTRAP,	TRAP_HWBKPT,	"hardware watchpoint"	},
	{ do_bad,	SIGKILL,	SI_KERNEL,	"unknown 3"		},
	{ do_bad,	SIGTRAP,	TRAP_BRKPT,	"aarch32 BKPT"		},
	{ do_bad,	SIGKILL,	SI_KERNEL,	"aarch32 vector catch"	},
	{ early_brk64,	SIGTRAP,	TRAP_BRKPT,	"aarch64 BRK"		},
	{ do_bad,	SIGKILL,	SI_KERNEL,	"unknown 7"		},
};

void __init hook_debug_fault_code(int nr,
				  int (*fn)(unsigned long, unsigned long, struct pt_regs *),
				  int sig, int code, const char *name)
{
	BUG_ON(nr < 0 || nr >= ARRAY_SIZE(debug_fault_info));

	debug_fault_info[nr].fn		= fn;
	debug_fault_info[nr].sig	= sig;
	debug_fault_info[nr].code	= code;
	debug_fault_info[nr].name	= name;
}

/*
 * In debug exception context, we explicitly disable preemption despite
 * having interrupts disabled.
 * This serves two purposes: it makes it much less likely that we would
 * accidentally schedule in exception context and it will force a warning
 * if we somehow manage to schedule by accident.
 */
static void debug_exception_enter(struct pt_regs *regs)
{
	preempt_disable();

	/* This code is a bit fragile.  Test it. */
	RCU_LOCKDEP_WARN(!rcu_is_watching(), "exception_enter didn't work");
}
NOKPROBE_SYMBOL(debug_exception_enter);

static void debug_exception_exit(struct pt_regs *regs)
{
	preempt_enable_no_resched();
}
NOKPROBE_SYMBOL(debug_exception_exit);

void do_debug_exception(unsigned long addr_if_watchpoint, unsigned long esr,
			struct pt_regs *regs)
{
	const struct fault_info *inf = esr_to_debug_fault_info(esr);
	unsigned long pc = instruction_pointer(regs);

	debug_exception_enter(regs);

	if (user_mode(regs) && !is_ttbr0_addr(pc))
		arm64_apply_bp_hardening();

	if (inf->fn(addr_if_watchpoint, esr, regs)) {
		arm64_notify_die(inf->name, regs, inf->sig, inf->code, pc, esr);
	}

	debug_exception_exit(regs);
}
NOKPROBE_SYMBOL(do_debug_exception);

/*
 * Used during anonymous page fault handling.
 */

@@ -157,12 +157,6 @@ void gcs_free(struct task_struct *task)
	if (!system_supports_gcs())
		return;

	/*
	 * When fork() with CLONE_VM fails, the child (tsk) already
	 * has a GCS allocated, and exit_thread() calls this function
	 * to free it. In this case the parent (current) and the
	 * child share the same mm struct.
	 */
	if (!task->mm || task->mm != current->mm)
		return;


@@ -225,7 +225,7 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
	ncontig = num_contig_ptes(sz, &pgsize);

	if (!pte_present(pte)) {
		for (i = 0; i < ncontig; i++, ptep++, addr += pgsize)
		for (i = 0; i < ncontig; i++, ptep++)
			__set_ptes_anysz(mm, ptep, pte, 1, pgsize);
		return;
	}

@@ -454,7 +454,7 @@ SYM_FUNC_START(__cpu_setup)
	dsb	nsh

	msr	cpacr_el1, xzr			// Reset cpacr_el1
	mov	x1, #1 << 12			// Reset mdscr_el1 and disable
	mov	x1, MDSCR_EL1_TDCC		// Reset mdscr_el1 and disable
	msr	mdscr_el1, x1			// access to the DCC from EL0
	reset_pmuserenr_el0 x1			// Disable PMU access from EL0
	reset_amuserenr_el0 x1			// Disable AMU access from EL0

@@ -45,6 +45,7 @@ HAS_LPA2
HAS_LSE_ATOMICS
HAS_MOPS
HAS_NESTED_VIRT
HAS_BBML2_NOABORT
HAS_PAN
HAS_PMUV3
HAS_S1PIE

@@ -68,6 +69,8 @@ MPAM
MPAM_HCR
MTE
MTE_ASYMM
MTE_FAR
MTE_STORE_ONLY
SME
SME_FA64
SME2

@@ -1329,6 +1329,138 @@ UnsignedEnum 3:0 MTEPERM
EndEnum
EndSysreg


SysregFields BRBINFx_EL1
Res0	63:47
Field	46	CCU
Field	45:40	CC_EXP
Field	39:32	CC_MANT
Res0	31:18
Field	17	LASTFAILED
Field	16	T
Res0	15:14
Enum	13:8	TYPE
	0b000000	DIRECT_UNCOND
	0b000001	INDIRECT
	0b000010	DIRECT_LINK
	0b000011	INDIRECT_LINK
	0b000101	RET
	0b000111	ERET
	0b001000	DIRECT_COND
	0b100001	DEBUG_HALT
	0b100010	CALL
	0b100011	TRAP
	0b100100	SERROR
	0b100110	INSN_DEBUG
	0b100111	DATA_DEBUG
	0b101010	ALIGN_FAULT
	0b101011	INSN_FAULT
	0b101100	DATA_FAULT
	0b101110	IRQ
	0b101111	FIQ
	0b110000	IMPDEF_TRAP_EL3
	0b111001	DEBUG_EXIT
EndEnum
Enum	7:6	EL
	0b00	EL0
	0b01	EL1
	0b10	EL2
	0b11	EL3
EndEnum
Field	5	MPRED
Res0	4:2
Enum	1:0	VALID
	0b00	NONE
	0b01	TARGET
	0b10	SOURCE
	0b11	FULL
EndEnum
EndSysregFields

SysregFields BRBCR_ELx
Res0	63:24
Field	23	EXCEPTION
Field	22	ERTN
Res0	21:10
Field	9	FZPSS
Field	8	FZP
Res0	7
Enum	6:5	TS
	0b01	VIRTUAL
	0b10	GUEST_PHYSICAL
	0b11	PHYSICAL
EndEnum
Field	4	MPRED
Field	3	CC
Res0	2
Field	1	ExBRE
Field	0	E0BRE
EndSysregFields

Sysreg	BRBCR_EL1	2	1	9	0	0
Fields	BRBCR_ELx
EndSysreg

Sysreg	BRBFCR_EL1	2	1	9	0	1
Res0	63:30
Enum	29:28	BANK
	0b00	BANK_0
	0b01	BANK_1
EndEnum
Res0	27:23
Field	22	CONDDIR
Field	21	DIRCALL
Field	20	INDCALL
Field	19	RTN
Field	18	INDIRECT
Field	17	DIRECT
Field	16	EnI
Res0	15:8
Field	7	PAUSED
Field	6	LASTFAILED
Res0	5:0
EndSysreg

Sysreg	BRBTS_EL1	2	1	9	0	2
Field	63:0	TS
EndSysreg

Sysreg	BRBINFINJ_EL1	2	1	9	1	0
Fields	BRBINFx_EL1
EndSysreg

Sysreg	BRBSRCINJ_EL1	2	1	9	1	1
Field	63:0	ADDRESS
EndSysreg

Sysreg	BRBTGTINJ_EL1	2	1	9	1	2
Field	63:0	ADDRESS
EndSysreg

Sysreg	BRBIDR0_EL1	2	1	9	2	0
Res0	63:16
Enum	15:12	CC
	0b0101	20_BIT
EndEnum
Enum	11:8	FORMAT
	0b0000	FORMAT_0
EndEnum
Enum	7:0	NUMREC
	0b00001000	8
	0b00010000	16
	0b00100000	32
	0b01000000	64
EndEnum
EndSysreg

Sysreg	BRBCR_EL2	2	4	9	0	0
Fields	BRBCR_ELx
EndSysreg

Sysreg	BRBCR_EL12	2	5	9	0	0
Fields	BRBCR_ELx
EndSysreg

Sysreg	ID_AA64ZFR0_EL1	3	0	0	4	4
Res0	63:60
UnsignedEnum	59:56	F64MM

@@ -36,7 +36,7 @@ aflags-zboot-header-$(EFI_ZBOOT_FORWARD_CFI) := \
	-DPE_DLL_CHAR_EX=IMAGE_DLLCHARACTERISTICS_EX_FORWARD_CFI_COMPAT

AFLAGS_zboot-header.o += -DMACHINE_TYPE=IMAGE_FILE_MACHINE_$(EFI_ZBOOT_MACH_TYPE) \
			 -DZBOOT_EFI_PATH="\"$(realpath $(obj)/vmlinuz.efi.elf)\"" \
			 -DZBOOT_EFI_PATH="\"$(abspath $(obj)/vmlinuz.efi.elf)\"" \
			 -DZBOOT_SIZE_LEN=$(zboot-size-len-y) \
			 -DCOMP_TYPE="\"$(comp-type-y)\"" \
			 $(aflags-zboot-header-y)

@@ -220,6 +220,9 @@ bool arm_smmu_sva_supported(struct arm_smmu_device *smmu)
		feat_mask |= ARM_SMMU_FEAT_VAX;
	}

	if (system_supports_bbml2_noabort())
		feat_mask |= ARM_SMMU_FEAT_BBML2;

	if ((smmu->features & feat_mask) != feat_mask)
		return false;


@@ -4457,6 +4457,9 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
	if (FIELD_GET(IDR3_FWB, reg))
		smmu->features |= ARM_SMMU_FEAT_S2FWB;

	if (FIELD_GET(IDR3_BBM, reg) == 2)
		smmu->features |= ARM_SMMU_FEAT_BBML2;

	/* IDR5 */
	reg = readl_relaxed(smmu->base + ARM_SMMU_IDR5);


@@ -60,6 +60,7 @@ struct arm_smmu_device;
#define ARM_SMMU_IDR3			0xc
#define IDR3_FWB			(1 << 8)
#define IDR3_RIL			(1 << 10)
#define IDR3_BBM			GENMASK(12, 11)

#define ARM_SMMU_IDR5			0x14
#define IDR5_STALL_MAX			GENMASK(31, 16)

@@ -755,6 +756,7 @@ struct arm_smmu_device {
#define ARM_SMMU_FEAT_HA		(1 << 21)
#define ARM_SMMU_FEAT_HD		(1 << 22)
#define ARM_SMMU_FEAT_S2FWB		(1 << 23)
#define ARM_SMMU_FEAT_BBML2		(1 << 24)
	u32				features;

#define ARM_SMMU_OPT_SKIP_PREFETCH	(1 << 0)

@@ -223,6 +223,17 @@ config ARM_SPE_PMU
	  Extension, which provides periodic sampling of operations in
	  the CPU pipeline and reports this via the perf AUX interface.

config ARM64_BRBE
	bool "Enable support for branch stack sampling using FEAT_BRBE"
	depends on ARM_PMUV3 && ARM64
	default y
	help
	  Enable perf support for Branch Record Buffer Extension (BRBE) which
	  records all branches taken in an execution path. This supports some
	  branch types and privilege based filtering. It captures additional
	  relevant information such as cycle count, misprediction and branch
	  type, branch privilege level etc.

config ARM_DMC620_PMU
	tristate "Enable PMU support for the ARM DMC-620 memory controller"
	depends on (ARM64 && ACPI) || COMPILE_TEST


@@ -23,6 +23,7 @@ obj-$(CONFIG_STARFIVE_STARLINK_PMU) += starfive_starlink_pmu.o
obj-$(CONFIG_THUNDERX2_PMU) += thunderx2_pmu.o
obj-$(CONFIG_XGENE_PMU) += xgene_pmu.o
obj-$(CONFIG_ARM_SPE_PMU) += arm_spe_pmu.o
obj-$(CONFIG_ARM64_BRBE) += arm_brbe.o
obj-$(CONFIG_ARM_DMC620_PMU) += arm_dmc620_pmu.o
obj-$(CONFIG_MARVELL_CN10K_TAD_PMU) += marvell_cn10k_tad_pmu.o
obj-$(CONFIG_MARVELL_CN10K_DDR_PMU) += marvell_cn10k_ddr_pmu.o

@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2016-2020 Arm Limited
// CMN-600 Coherent Mesh Network PMU driver
// ARM CMN/CI interconnect PMU driver

#include <linux/acpi.h>
#include <linux/bitfield.h>

@@ -2245,12 +2245,11 @@ static enum cmn_node_type arm_cmn_subtype(enum cmn_node_type type)

static int arm_cmn_discover(struct arm_cmn *cmn, unsigned int rgn_offset)
{
	void __iomem *cfg_region;
	void __iomem *cfg_region, __iomem *xp_region;
	struct arm_cmn_node cfg, *dn;
	struct arm_cmn_dtm *dtm;
	enum cmn_part part;
	u16 child_count, child_poff;
	u32 xp_offset[CMN_MAX_XPS];
	u64 reg;
	int i, j;
	size_t sz;

@@ -2302,11 +2301,12 @@ static int arm_cmn_discover(struct arm_cmn *cmn, unsigned int rgn_offset)
	cmn->num_dns = cmn->num_xps;

	/* Pass 1: visit the XPs, enumerate their children */
	cfg_region += child_poff;
	for (i = 0; i < cmn->num_xps; i++) {
		reg = readq_relaxed(cfg_region + child_poff + i * 8);
		xp_offset[i] = reg & CMN_CHILD_NODE_ADDR;
		reg = readq_relaxed(cfg_region + i * 8);
		xp_region = cmn->base + (reg & CMN_CHILD_NODE_ADDR);

		reg = readq_relaxed(cmn->base + xp_offset[i] + CMN_CHILD_INFO);
		reg = readq_relaxed(xp_region + CMN_CHILD_INFO);
		cmn->num_dns += FIELD_GET(CMN_CI_CHILD_COUNT, reg);
	}


@@ -2332,11 +2332,12 @@ static int arm_cmn_discover(struct arm_cmn *cmn, unsigned int rgn_offset)
	cmn->dns = dn;
	cmn->dtms = dtm;
	for (i = 0; i < cmn->num_xps; i++) {
		void __iomem *xp_region = cmn->base + xp_offset[i];
		struct arm_cmn_node *xp = dn++;
		unsigned int xp_ports = 0;

		arm_cmn_init_node_info(cmn, xp_offset[i], xp);
		reg = readq_relaxed(cfg_region + i * 8);
		xp_region = cmn->base + (reg & CMN_CHILD_NODE_ADDR);
		arm_cmn_init_node_info(cmn, reg & CMN_CHILD_NODE_ADDR, xp);
		/*
		 * Thanks to the order in which XP logical IDs seem to be
		 * assigned, we can handily infer the mesh X dimension by

@@ -2655,6 +2656,7 @@ static struct platform_driver arm_cmn_driver = {
		.name = "arm-cmn",
		.of_match_table = of_match_ptr(arm_cmn_of_match),
		.acpi_match_table = ACPI_PTR(arm_cmn_acpi_match),
		.suppress_bind_attrs = true,
	},
	.probe = arm_cmn_probe,
	.remove = arm_cmn_remove,

@@ -2693,5 +2695,5 @@ module_init(arm_cmn_init);
module_exit(arm_cmn_exit);

MODULE_AUTHOR("Robin Murphy <robin.murphy@arm.com>");
MODULE_DESCRIPTION("Arm CMN-600 PMU driver");
MODULE_DESCRIPTION("Arm CMN/CI interconnect PMU driver");
MODULE_LICENSE("GPL v2");

@@ -102,10 +102,9 @@ struct arm_ni_unit {
struct arm_ni_cd {
	void __iomem *pmu_base;
	u16 id;
	s8 irq_friend;
	int num_units;
	int irq;
	int cpu;
	struct hlist_node cpuhp_node;
	struct pmu pmu;
	struct arm_ni_unit *units;
	struct perf_event *evcnt[NI_NUM_COUNTERS];

@@ -117,13 +116,18 @@ struct arm_ni {
	void __iomem *base;
	enum ni_part part;
	int id;
	int cpu;
	int num_cds;
	struct hlist_node cpuhp_node;
	struct arm_ni_cd cds[] __counted_by(num_cds);
};

#define cd_to_ni(cd) container_of((cd), struct arm_ni, cds[(cd)->id])
#define pmu_to_cd(p) container_of((p), struct arm_ni_cd, pmu)

#define ni_for_each_cd(n, c) \
	for (struct arm_ni_cd *c = n->cds; c < n->cds + n->num_cds; c++) if (c->pmu_base)

#define cd_for_each_unit(cd, u) \
	for (struct arm_ni_unit *u = cd->units; u < cd->units + cd->num_units; u++)

@@ -218,9 +222,9 @@ static const struct attribute_group arm_ni_format_attrs_group = {
static ssize_t arm_ni_cpumask_show(struct device *dev,
				   struct device_attribute *attr, char *buf)
{
	struct arm_ni_cd *cd = pmu_to_cd(dev_get_drvdata(dev));
	struct arm_ni *ni = cd_to_ni(pmu_to_cd(dev_get_drvdata(dev)));

	return cpumap_print_to_pagebuf(true, buf, cpumask_of(cd->cpu));
	return cpumap_print_to_pagebuf(true, buf, cpumask_of(ni->cpu));
}

static struct device_attribute arm_ni_cpumask_attr =

@@ -314,7 +318,7 @@ static int arm_ni_event_init(struct perf_event *event)
	if (is_sampling_event(event))
		return -EINVAL;

	event->cpu = cd->cpu;
	event->cpu = cd_to_ni(cd)->cpu;
	if (NI_EVENT_TYPE(event) == NI_PMU)
		return arm_ni_validate_group(event);

@@ -445,33 +449,37 @@ static irqreturn_t arm_ni_handle_irq(int irq, void *dev_id)
{
	struct arm_ni_cd *cd = dev_id;
	irqreturn_t ret = IRQ_NONE;
	u32 reg = readl_relaxed(cd->pmu_base + NI_PMOVSCLR);

	if (reg & (1U << NI_CCNT_IDX)) {
		ret = IRQ_HANDLED;
		if (!(WARN_ON(!cd->ccnt))) {
			arm_ni_event_read(cd->ccnt);
			arm_ni_init_ccnt(cd);
	for (;;) {
		u32 reg = readl_relaxed(cd->pmu_base + NI_PMOVSCLR);

		if (reg & (1U << NI_CCNT_IDX)) {
			ret = IRQ_HANDLED;
			if (!(WARN_ON(!cd->ccnt))) {
				arm_ni_event_read(cd->ccnt);
				arm_ni_init_ccnt(cd);
			}
		}
	}
	for (int i = 0; i < NI_NUM_COUNTERS; i++) {
		if (!(reg & (1U << i)))
			continue;
		ret = IRQ_HANDLED;
		if (!(WARN_ON(!cd->evcnt[i]))) {
			arm_ni_event_read(cd->evcnt[i]);
			arm_ni_init_evcnt(cd, i);
		for (int i = 0; i < NI_NUM_COUNTERS; i++) {
			if (!(reg & (1U << i)))
				continue;
			ret = IRQ_HANDLED;
			if (!(WARN_ON(!cd->evcnt[i]))) {
				arm_ni_event_read(cd->evcnt[i]);
				arm_ni_init_evcnt(cd, i);
			}
		}
		writel_relaxed(reg, cd->pmu_base + NI_PMOVSCLR);
		if (!cd->irq_friend)
			return ret;
		cd += cd->irq_friend;
	}
	writel_relaxed(reg, cd->pmu_base + NI_PMOVSCLR);
	return ret;
}

static int arm_ni_init_cd(struct arm_ni *ni, struct arm_ni_node *node, u64 res_start)
{
	struct arm_ni_cd *cd = ni->cds + node->id;
	const char *name;
	int err;

	cd->id = node->id;
	cd->num_units = node->num_components;

@@ -531,19 +539,11 @@ static int arm_ni_init_cd(struct arm_ni *ni, struct arm_ni_node *node, u64 res_s
		       cd->pmu_base + NI_PMCR);
	writel_relaxed(U32_MAX, cd->pmu_base + NI_PMCNTENCLR);
	writel_relaxed(U32_MAX, cd->pmu_base + NI_PMOVSCLR);
	writel_relaxed(U32_MAX, cd->pmu_base + NI_PMINTENSET);

	cd->irq = platform_get_irq(to_platform_device(ni->dev), cd->id);
	if (cd->irq < 0)
		return cd->irq;

	err = devm_request_irq(ni->dev, cd->irq, arm_ni_handle_irq,
			       IRQF_NOBALANCING | IRQF_NO_THREAD,
			       dev_name(ni->dev), cd);
	if (err)
		return err;

	cd->cpu = cpumask_local_spread(0, dev_to_node(ni->dev));
	cd->pmu = (struct pmu) {
		.module = THIS_MODULE,
		.parent = ni->dev,

@@ -564,32 +564,19 @@ static int arm_ni_init_cd(struct arm_ni *ni, struct arm_ni_node *node, u64 res_s
	if (!name)
		return -ENOMEM;

	err = cpuhp_state_add_instance_nocalls(arm_ni_hp_state, &cd->cpuhp_node);
	if (err)
		return err;

	err = perf_pmu_register(&cd->pmu, name, -1);
	if (err)
		cpuhp_state_remove_instance_nocalls(arm_ni_hp_state, &cd->cpuhp_node);

	return err;
	return perf_pmu_register(&cd->pmu, name, -1);
}

static void arm_ni_remove(struct platform_device *pdev)
{
	struct arm_ni *ni = platform_get_drvdata(pdev);

	for (int i = 0; i < ni->num_cds; i++) {
		struct arm_ni_cd *cd = ni->cds + i;

		if (!cd->pmu_base)
			continue;

	ni_for_each_cd(ni, cd) {
		writel_relaxed(0, cd->pmu_base + NI_PMCR);
		writel_relaxed(U32_MAX, cd->pmu_base + NI_PMINTENCLR);
		perf_pmu_unregister(&cd->pmu);
		cpuhp_state_remove_instance_nocalls(arm_ni_hp_state, &cd->cpuhp_node);
	}
	cpuhp_state_remove_instance_nocalls(arm_ni_hp_state, &ni->cpuhp_node);
}

static void arm_ni_probe_domain(void __iomem *base, struct arm_ni_node *node)

@@ -602,6 +589,34 @@ static void arm_ni_probe_domain(void __iomem *base, struct arm_ni_node *node)
	node->num_components = readl_relaxed(base + NI_CHILD_NODE_INFO);
}

static int arm_ni_init_irqs(struct arm_ni *ni)
{
	int err;

	ni_for_each_cd(ni, cd) {
		for (struct arm_ni_cd *prev = cd; prev-- > ni->cds; ) {
			if (prev->irq == cd->irq) {
				prev->irq_friend = cd - prev;
				goto set_inten;
			}
		}
		err = devm_request_irq(ni->dev, cd->irq, arm_ni_handle_irq,
				       IRQF_NOBALANCING | IRQF_NO_THREAD | IRQF_NO_AUTOEN,
				       dev_name(ni->dev), cd);
		if (err)
			return err;

		irq_set_affinity(cd->irq, cpumask_of(ni->cpu));
set_inten:
		writel_relaxed(U32_MAX, cd->pmu_base + NI_PMINTENSET);
	}

	ni_for_each_cd(ni, cd)
		if (!cd->irq_friend)
			enable_irq(cd->irq);
	return 0;
}
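The irq_friend linkage used above is just a relative pointer: each clock domain stores the signed array distance to the next domain sharing its interrupt line, and zero terminates the chain. A toy sketch of the same walk:

#include <stdio.h>

struct cd {
	int irq;
	int irq_friend;	/* array distance to next sharer, 0 = end of chain */
};

/* Visit every domain chained to the one the interrupt was delivered to,
 * mirroring the for(;;) walk in arm_ni_handle_irq(). */
static void visit_chain(struct cd *cd)
{
	for (;;) {
		printf("handle cd with irq %d\n", cd->irq);
		if (!cd->irq_friend)
			return;
		cd += cd->irq_friend;
	}
}

int main(void)
{
	/* cds[0] and cds[2] share IRQ 10: 0 links forward by 2, 2 ends it */
	struct cd cds[3] = { { 10, 2 }, { 11, 0 }, { 10, 0 } };

	visit_chain(&cds[0]);	/* visits cds[0] then cds[2] */
	return 0;
}
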
static int arm_ni_probe(struct platform_device *pdev)
{
	struct arm_ni_node cfg, vd, pd, cd;

@@ -609,7 +624,7 @@ static int arm_ni_probe(struct platform_device *pdev)
	struct resource *res;
	void __iomem *base;
	static atomic_t id;
	int num_cds;
	int ret, num_cds;
	u32 reg, part;

	/*

@@ -660,8 +675,13 @@ static int arm_ni_probe(struct platform_device *pdev)
	ni->num_cds = num_cds;
	ni->part = part;
	ni->id = atomic_fetch_inc(&id);
	ni->cpu = cpumask_local_spread(0, dev_to_node(ni->dev));
	platform_set_drvdata(pdev, ni);

	ret = cpuhp_state_add_instance_nocalls(arm_ni_hp_state, &ni->cpuhp_node);
	if (ret)
		return ret;

	for (int v = 0; v < cfg.num_components; v++) {
		reg = readl_relaxed(cfg.base + NI_CHILD_PTR(v));
		arm_ni_probe_domain(base + reg, &vd);

@@ -669,8 +689,6 @@ static int arm_ni_probe(struct platform_device *pdev)
			reg = readl_relaxed(vd.base + NI_CHILD_PTR(p));
			arm_ni_probe_domain(base + reg, &pd);
			for (int c = 0; c < pd.num_components; c++) {
				int ret;

				reg = readl_relaxed(pd.base + NI_CHILD_PTR(c));
				arm_ni_probe_domain(base + reg, &cd);
				ret = arm_ni_init_cd(ni, &cd, res->start);

@@ -683,7 +701,11 @@ static int arm_ni_probe(struct platform_device *pdev)
		}
	}

	return 0;
	ret = arm_ni_init_irqs(ni);
	if (ret)
		arm_ni_remove(pdev);

	return ret;
}

#ifdef CONFIG_OF

@@ -707,47 +729,50 @@ static struct platform_driver arm_ni_driver = {
		.name = "arm-ni",
		.of_match_table = of_match_ptr(arm_ni_of_match),
		.acpi_match_table = ACPI_PTR(arm_ni_acpi_match),
		.suppress_bind_attrs = true,
	},
	.probe = arm_ni_probe,
	.remove = arm_ni_remove,
};

static void arm_ni_pmu_migrate(struct arm_ni_cd *cd, unsigned int cpu)
static void arm_ni_pmu_migrate(struct arm_ni *ni, unsigned int cpu)
{
	perf_pmu_migrate_context(&cd->pmu, cd->cpu, cpu);
	irq_set_affinity(cd->irq, cpumask_of(cpu));
	cd->cpu = cpu;
	ni_for_each_cd(ni, cd) {
		perf_pmu_migrate_context(&cd->pmu, ni->cpu, cpu);
		irq_set_affinity(cd->irq, cpumask_of(cpu));
	}
	ni->cpu = cpu;
}

static int arm_ni_pmu_online_cpu(unsigned int cpu, struct hlist_node *cpuhp_node)
{
	struct arm_ni_cd *cd;
	struct arm_ni *ni;
	int node;

	cd = hlist_entry_safe(cpuhp_node, struct arm_ni_cd, cpuhp_node);
	node = dev_to_node(cd_to_ni(cd)->dev);
	if (cpu_to_node(cd->cpu) != node && cpu_to_node(cpu) == node)
		arm_ni_pmu_migrate(cd, cpu);
	ni = hlist_entry_safe(cpuhp_node, struct arm_ni, cpuhp_node);
	node = dev_to_node(ni->dev);
	if (cpu_to_node(ni->cpu) != node && cpu_to_node(cpu) == node)
		arm_ni_pmu_migrate(ni, cpu);
	return 0;
}

static int arm_ni_pmu_offline_cpu(unsigned int cpu, struct hlist_node *cpuhp_node)
{
	struct arm_ni_cd *cd;
	struct arm_ni *ni;
	unsigned int target;
	int node;

	cd = hlist_entry_safe(cpuhp_node, struct arm_ni_cd, cpuhp_node);
	if (cpu != cd->cpu)
	ni = hlist_entry_safe(cpuhp_node, struct arm_ni, cpuhp_node);
	if (cpu != ni->cpu)
		return 0;

	node = dev_to_node(cd_to_ni(cd)->dev);
	node = dev_to_node(ni->dev);
	target = cpumask_any_and_but(cpumask_of_node(node), cpu_online_mask, cpu);
	if (target >= nr_cpu_ids)
		target = cpumask_any_but(cpu_online_mask, cpu);

	if (target < nr_cpu_ids)
		arm_ni_pmu_migrate(cd, target);
		arm_ni_pmu_migrate(ni, target);
	return 0;
}

@@ -0,0 +1,805 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
 * Branch Record Buffer Extension Driver.
 *
 * Copyright (C) 2022-2025 ARM Limited
 *
 * Author: Anshuman Khandual <anshuman.khandual@arm.com>
 */
#include <linux/types.h>
#include <linux/bitmap.h>
#include <linux/perf/arm_pmu.h>
#include "arm_brbe.h"

#define BRBFCR_EL1_BRANCH_FILTERS (BRBFCR_EL1_DIRECT   | \
				   BRBFCR_EL1_INDIRECT | \
				   BRBFCR_EL1_RTN      | \
				   BRBFCR_EL1_INDCALL  | \
				   BRBFCR_EL1_DIRCALL  | \
				   BRBFCR_EL1_CONDDIR)

/*
 * BRBTS_EL1 is currently not used for branch stack implementation
 * purpose but BRBCR_ELx.TS needs to have a valid value from all
 * available options. BRBCR_ELx_TS_VIRTUAL is selected for this.
 */
#define BRBCR_ELx_DEFAULT_TS	FIELD_PREP(BRBCR_ELx_TS_MASK, BRBCR_ELx_TS_VIRTUAL)

/*
 * BRBE Buffer Organization
 *
 * The BRBE buffer is arranged as multiple banks of 32 branch record
 * entries each. An individual branch record in a given bank can be
 * accessed by selecting the bank in BRBFCR_EL1.BANK and then reading
 * the [BRBSRC, BRBTGT, BRBINF] register set at indices [0..31].
 *
 * Bank 0
 *
 * ---------------------------------	------
 * | 00 | BRBSRC | BRBTGT | BRBINF |	| 00 |
 * ---------------------------------	------
 * | 01 | BRBSRC | BRBTGT | BRBINF |	| 01 |
 * ---------------------------------	------
 * | .. | BRBSRC | BRBTGT | BRBINF |	| .. |
 * ---------------------------------	------
 * | 31 | BRBSRC | BRBTGT | BRBINF |	| 31 |
 * ---------------------------------	------
 *
 * Bank 1
 *
 * ---------------------------------	------
 * | 32 | BRBSRC | BRBTGT | BRBINF |	| 00 |
 * ---------------------------------	------
 * | 33 | BRBSRC | BRBTGT | BRBINF |	| 01 |
 * ---------------------------------	------
 * | .. | BRBSRC | BRBTGT | BRBINF |	| .. |
 * ---------------------------------	------
 * | 63 | BRBSRC | BRBTGT | BRBINF |	| 31 |
 * ---------------------------------	------
 */
#define BRBE_BANK_MAX_ENTRIES	32
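Given that layout, mapping a flat record index onto a (bank, register index) pair is a divide/modulo by 32; a sketch of the arithmetic the driver performs when it walks the banks:

#include <stdio.h>

#define BRBE_BANK_MAX_ENTRIES 32

int main(void)
{
	/* Flat record 37 lives in bank 1, register set index 5 */
	int idx = 37;
	int bank = idx / BRBE_BANK_MAX_ENTRIES;	/* BRBFCR_EL1.BANK value */
	int reg  = idx % BRBE_BANK_MAX_ENTRIES;	/* BRBSRC/TGT/INF<n> index */

	printf("record %d -> bank %d, reg %d\n", idx, bank, reg);
	return 0;
}
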
struct brbe_regset {
	u64 brbsrc;
	u64 brbtgt;
	u64 brbinf;
};

#define PERF_BR_ARM64_MAX	(PERF_BR_MAX + PERF_BR_NEW_MAX)

struct brbe_hw_attr {
	int	brbe_version;
	int	brbe_cc;
	int	brbe_nr;
	int	brbe_format;
};

#define BRBE_REGN_CASE(n, case_macro) \
	case n: case_macro(n); break

#define BRBE_REGN_SWITCH(x, case_macro)			\
	do {						\
		switch (x) {				\
		BRBE_REGN_CASE(0, case_macro);		\
		BRBE_REGN_CASE(1, case_macro);		\
		BRBE_REGN_CASE(2, case_macro);		\
		BRBE_REGN_CASE(3, case_macro);		\
		BRBE_REGN_CASE(4, case_macro);		\
		BRBE_REGN_CASE(5, case_macro);		\
		BRBE_REGN_CASE(6, case_macro);		\
		BRBE_REGN_CASE(7, case_macro);		\
		BRBE_REGN_CASE(8, case_macro);		\
		BRBE_REGN_CASE(9, case_macro);		\
		BRBE_REGN_CASE(10, case_macro);		\
		BRBE_REGN_CASE(11, case_macro);		\
		BRBE_REGN_CASE(12, case_macro);		\
		BRBE_REGN_CASE(13, case_macro);		\
		BRBE_REGN_CASE(14, case_macro);		\
		BRBE_REGN_CASE(15, case_macro);		\
		BRBE_REGN_CASE(16, case_macro);		\
		BRBE_REGN_CASE(17, case_macro);		\
		BRBE_REGN_CASE(18, case_macro);		\
		BRBE_REGN_CASE(19, case_macro);		\
		BRBE_REGN_CASE(20, case_macro);		\
		BRBE_REGN_CASE(21, case_macro);		\
		BRBE_REGN_CASE(22, case_macro);		\
		BRBE_REGN_CASE(23, case_macro);		\
		BRBE_REGN_CASE(24, case_macro);		\
		BRBE_REGN_CASE(25, case_macro);		\
		BRBE_REGN_CASE(26, case_macro);		\
		BRBE_REGN_CASE(27, case_macro);		\
		BRBE_REGN_CASE(28, case_macro);		\
		BRBE_REGN_CASE(29, case_macro);		\
		BRBE_REGN_CASE(30, case_macro);		\
		BRBE_REGN_CASE(31, case_macro);		\
		default: WARN(1, "Invalid BRB* index %d\n", x);	\
		}					\
	} while (0)

#define RETURN_READ_BRBSRCN(n) \
	return read_sysreg_s(SYS_BRBSRC_EL1(n))
static inline u64 get_brbsrc_reg(int idx)
{
	BRBE_REGN_SWITCH(idx, RETURN_READ_BRBSRCN);
	return 0;
}

#define RETURN_READ_BRBTGTN(n) \
	return read_sysreg_s(SYS_BRBTGT_EL1(n))
static u64 get_brbtgt_reg(int idx)
{
	BRBE_REGN_SWITCH(idx, RETURN_READ_BRBTGTN);
	return 0;
}

#define RETURN_READ_BRBINFN(n) \
	return read_sysreg_s(SYS_BRBINF_EL1(n))
static u64 get_brbinf_reg(int idx)
{
	BRBE_REGN_SWITCH(idx, RETURN_READ_BRBINFN);
	return 0;
}
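A note on the design choice, since the 32-way switch looks heavy-handed: the BRBSRC/BRBTGT/BRBINF register number is part of the system-register encoding itself, so read_sysreg_s() needs a compile-time constant. A runtime index therefore has to be dispatched to one of 32 distinct instructions, and the macro pair keeps that dispatch from being written out by hand for each of the three register files.
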
static u64 brbe_record_valid(u64 brbinf)
{
	return FIELD_GET(BRBINFx_EL1_VALID_MASK, brbinf);
}

static bool brbe_invalid(u64 brbinf)
{
	return brbe_record_valid(brbinf) == BRBINFx_EL1_VALID_NONE;
}

static bool brbe_record_is_complete(u64 brbinf)
{
	return brbe_record_valid(brbinf) == BRBINFx_EL1_VALID_FULL;
}

static bool brbe_record_is_source_only(u64 brbinf)
{
	return brbe_record_valid(brbinf) == BRBINFx_EL1_VALID_SOURCE;
}

static bool brbe_record_is_target_only(u64 brbinf)
{
	return brbe_record_valid(brbinf) == BRBINFx_EL1_VALID_TARGET;
}

static int brbinf_get_in_tx(u64 brbinf)
{
	return FIELD_GET(BRBINFx_EL1_T_MASK, brbinf);
}

static int brbinf_get_mispredict(u64 brbinf)
{
	return FIELD_GET(BRBINFx_EL1_MPRED_MASK, brbinf);
}

static int brbinf_get_lastfailed(u64 brbinf)
{
	return FIELD_GET(BRBINFx_EL1_LASTFAILED_MASK, brbinf);
}

static u16 brbinf_get_cycles(u64 brbinf)
{
	u32 exp, mant, cycles;
	/*
	 * Captured cycle count is unknown and hence
	 * should not be passed on to userspace.
	 */
	if (brbinf & BRBINFx_EL1_CCU)
		return 0;

	exp = FIELD_GET(BRBINFx_EL1_CC_EXP_MASK, brbinf);
	mant = FIELD_GET(BRBINFx_EL1_CC_MANT_MASK, brbinf);

	if (!exp)
		return mant;

	cycles = (mant | 0x100) << (exp - 1);

	return min(cycles, U16_MAX);
}
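The mantissa/exponent decode above packs a wide cycle count into 8+6 bits; a worked example of the same arithmetic (values chosen for illustration):

#include <stdint.h>
#include <stdio.h>

/* Same decode as brbinf_get_cycles(), minus the field extraction:
 * exp == 0 means the mantissa is the exact count; otherwise the count
 * is (mant | 0x100) << (exp - 1), saturated to 16 bits. */
static uint16_t decode_cycles(uint32_t exp, uint32_t mant)
{
	uint32_t cycles;

	if (!exp)
		return mant;
	cycles = (mant | 0x100) << (exp - 1);
	return cycles > 0xffff ? 0xffff : cycles;
}

int main(void)
{
	printf("%u\n", decode_cycles(0, 200));	/* exact: 200           */
	printf("%u\n", decode_cycles(3, 0x40));	/* (0x140 << 2) = 1280  */
	printf("%u\n", decode_cycles(9, 0xff));	/* saturates to 65535   */
	return 0;
}
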
static int brbinf_get_type(u64 brbinf)
{
	return FIELD_GET(BRBINFx_EL1_TYPE_MASK, brbinf);
}

static int brbinf_get_el(u64 brbinf)
{
	return FIELD_GET(BRBINFx_EL1_EL_MASK, brbinf);
}

void brbe_invalidate(void)
{
	/* Ensure all branches before this point are recorded */
	isb();
	asm volatile(BRB_IALL_INSN);
	/* Ensure all branch records are invalidated after this point */
	isb();
}

static bool valid_brbe_nr(int brbe_nr)
{
	return brbe_nr == BRBIDR0_EL1_NUMREC_8 ||
	       brbe_nr == BRBIDR0_EL1_NUMREC_16 ||
	       brbe_nr == BRBIDR0_EL1_NUMREC_32 ||
	       brbe_nr == BRBIDR0_EL1_NUMREC_64;
}

static bool valid_brbe_cc(int brbe_cc)
{
	return brbe_cc == BRBIDR0_EL1_CC_20_BIT;
}

static bool valid_brbe_format(int brbe_format)
{
	return brbe_format == BRBIDR0_EL1_FORMAT_FORMAT_0;
}

static bool valid_brbidr(u64 brbidr)
{
	int brbe_format, brbe_cc, brbe_nr;

	brbe_format = FIELD_GET(BRBIDR0_EL1_FORMAT_MASK, brbidr);
	brbe_cc = FIELD_GET(BRBIDR0_EL1_CC_MASK, brbidr);
	brbe_nr = FIELD_GET(BRBIDR0_EL1_NUMREC_MASK, brbidr);

	return valid_brbe_format(brbe_format) && valid_brbe_cc(brbe_cc) && valid_brbe_nr(brbe_nr);
}

static bool valid_brbe_version(int brbe_version)
{
	return brbe_version == ID_AA64DFR0_EL1_BRBE_IMP ||
	       brbe_version == ID_AA64DFR0_EL1_BRBE_BRBE_V1P1;
}

static void select_brbe_bank(int bank)
{
	u64 brbfcr;

	brbfcr = read_sysreg_s(SYS_BRBFCR_EL1);
	brbfcr &= ~BRBFCR_EL1_BANK_MASK;
	brbfcr |= SYS_FIELD_PREP(BRBFCR_EL1, BANK, bank);
	write_sysreg_s(brbfcr, SYS_BRBFCR_EL1);
	/*
	 * Arm ARM (DDI 0487K.a) D.18.4 rule PPBZP requires explicit sync
	 * between setting BANK and accessing branch records.
	 */
	isb();
}

static bool __read_brbe_regset(struct brbe_regset *entry, int idx)
{
	entry->brbinf = get_brbinf_reg(idx);

	if (brbe_invalid(entry->brbinf))
		return false;

	entry->brbsrc = get_brbsrc_reg(idx);
	entry->brbtgt = get_brbtgt_reg(idx);
	return true;
}

/*
 * Generic perf branch filters supported on BRBE
 *
 * New branch filters need to be evaluated to determine whether they can be
 * supported on BRBE. This ensures that such branch filters are not silently
 * accepted only to fail later. PERF_SAMPLE_BRANCH_HV is a special case that
 * is selectively supported only on platforms where the kernel is in hyp mode.
 */
#define BRBE_EXCLUDE_BRANCH_FILTERS (PERF_SAMPLE_BRANCH_ABORT_TX	| \
				     PERF_SAMPLE_BRANCH_IN_TX		| \
				     PERF_SAMPLE_BRANCH_NO_TX		| \
				     PERF_SAMPLE_BRANCH_CALL_STACK	| \
				     PERF_SAMPLE_BRANCH_COUNTERS)

#define BRBE_ALLOWED_BRANCH_TYPES   (PERF_SAMPLE_BRANCH_ANY		| \
				     PERF_SAMPLE_BRANCH_ANY_CALL	| \
				     PERF_SAMPLE_BRANCH_ANY_RETURN	| \
				     PERF_SAMPLE_BRANCH_IND_CALL	| \
				     PERF_SAMPLE_BRANCH_COND		| \
				     PERF_SAMPLE_BRANCH_IND_JUMP	| \
				     PERF_SAMPLE_BRANCH_CALL)

#define BRBE_ALLOWED_BRANCH_FILTERS (PERF_SAMPLE_BRANCH_USER		| \
				     PERF_SAMPLE_BRANCH_KERNEL		| \
				     PERF_SAMPLE_BRANCH_HV		| \
				     BRBE_ALLOWED_BRANCH_TYPES		| \
				     PERF_SAMPLE_BRANCH_NO_FLAGS	| \
				     PERF_SAMPLE_BRANCH_NO_CYCLES	| \
				     PERF_SAMPLE_BRANCH_TYPE_SAVE	| \
				     PERF_SAMPLE_BRANCH_HW_INDEX	| \
				     PERF_SAMPLE_BRANCH_PRIV_SAVE)

#define BRBE_PERF_BRANCH_FILTERS    (BRBE_ALLOWED_BRANCH_FILTERS	| \
				     BRBE_EXCLUDE_BRANCH_FILTERS)

/*
 * BRBE supports the following functional branch type filters while
 * generating branch records. These branch filters can be enabled,
 * either individually or as a group, i.e. by ORing multiple filters
 * together.
 *
 * BRBFCR_EL1_CONDDIR  - Conditional direct branch
 * BRBFCR_EL1_DIRCALL  - Direct call
 * BRBFCR_EL1_INDCALL  - Indirect call
 * BRBFCR_EL1_INDIRECT - Indirect branch
 * BRBFCR_EL1_DIRECT   - Direct branch
 * BRBFCR_EL1_RTN      - Subroutine return
 */
static u64 branch_type_to_brbfcr(int branch_type)
{
	u64 brbfcr = 0;

	if (branch_type & PERF_SAMPLE_BRANCH_ANY) {
		brbfcr |= BRBFCR_EL1_BRANCH_FILTERS;
		return brbfcr;
	}

	if (branch_type & PERF_SAMPLE_BRANCH_ANY_CALL) {
		brbfcr |= BRBFCR_EL1_INDCALL;
		brbfcr |= BRBFCR_EL1_DIRCALL;
	}

	if (branch_type & PERF_SAMPLE_BRANCH_ANY_RETURN)
		brbfcr |= BRBFCR_EL1_RTN;

	if (branch_type & PERF_SAMPLE_BRANCH_IND_CALL)
		brbfcr |= BRBFCR_EL1_INDCALL;

	if (branch_type & PERF_SAMPLE_BRANCH_COND)
		brbfcr |= BRBFCR_EL1_CONDDIR;

	if (branch_type & PERF_SAMPLE_BRANCH_IND_JUMP)
		brbfcr |= BRBFCR_EL1_INDIRECT;

	if (branch_type & PERF_SAMPLE_BRANCH_CALL)
		brbfcr |= BRBFCR_EL1_DIRCALL;

	return brbfcr;
}

/*
 * BRBE supports the following privilege mode filters while generating
 * branch records.
 *
 * BRBCR_ELx_E0BRE - EL0 branch records
 * BRBCR_ELx_ExBRE - EL1/EL2 branch records
 *
 * BRBE also supports the following additional functional branch type
 * filters while generating branch records.
 *
 * BRBCR_ELx_EXCEPTION - Exception
 * BRBCR_ELx_ERTN      - Exception return
 */
static u64 branch_type_to_brbcr(int branch_type)
{
	u64 brbcr = BRBCR_ELx_FZP | BRBCR_ELx_DEFAULT_TS;

	if (branch_type & PERF_SAMPLE_BRANCH_USER)
		brbcr |= BRBCR_ELx_E0BRE;

	/*
	 * When running in the hyp mode, writing into BRBCR_EL1
	 * actually writes into BRBCR_EL2 instead. Field E2BRE
	 * is also at the same position as E1BRE.
	 */
	if (branch_type & PERF_SAMPLE_BRANCH_KERNEL)
		brbcr |= BRBCR_ELx_ExBRE;

	if (branch_type & PERF_SAMPLE_BRANCH_HV) {
		if (is_kernel_in_hyp_mode())
			brbcr |= BRBCR_ELx_ExBRE;
	}

	if (!(branch_type & PERF_SAMPLE_BRANCH_NO_CYCLES))
		brbcr |= BRBCR_ELx_CC;

	if (!(branch_type & PERF_SAMPLE_BRANCH_NO_FLAGS))
		brbcr |= BRBCR_ELx_MPRED;

	/*
	 * The exception and exception return branches could be
	 * captured, irrespective of the perf event's privilege.
	 * If the perf event does not have enough privilege for
	 * a given exception level, then addresses which fall
	 * under that exception level will be reported as zero
	 * for the captured branch record, creating source only
	 * or target only records.
	 */
	if (branch_type & PERF_SAMPLE_BRANCH_KERNEL) {
		if (branch_type & PERF_SAMPLE_BRANCH_ANY) {
			brbcr |= BRBCR_ELx_EXCEPTION;
			brbcr |= BRBCR_ELx_ERTN;
		}

		if (branch_type & PERF_SAMPLE_BRANCH_ANY_CALL)
			brbcr |= BRBCR_ELx_EXCEPTION;

		if (branch_type & PERF_SAMPLE_BRANCH_ANY_RETURN)
			brbcr |= BRBCR_ELx_ERTN;
	}
	return brbcr;
}
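Concretely, a request of PERF_SAMPLE_BRANCH_USER | PERF_SAMPLE_BRANCH_ANY comes out of the two helpers above as all six BRBFCR_EL1 type filters plus BRBCR_ELx.E0BRE, with CC and MPRED enabled by default since neither NO_CYCLES nor NO_FLAGS was passed; EXCEPTION and ERTN stay off because those are only turned on for kernel-privileged events.
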
bool brbe_branch_attr_valid(struct perf_event *event)
{
	u64 branch_type = event->attr.branch_sample_type;

	/*
	 * Ensure both perf branch filter allowed and exclude
	 * masks are always in sync with the generic perf ABI.
	 */
	BUILD_BUG_ON(BRBE_PERF_BRANCH_FILTERS != (PERF_SAMPLE_BRANCH_MAX - 1));

	if (branch_type & BRBE_EXCLUDE_BRANCH_FILTERS) {
		pr_debug("requested branch filter not supported 0x%llx\n", branch_type);
		return false;
	}

	/* Ensure at least 1 branch type is enabled */
	if (!(branch_type & BRBE_ALLOWED_BRANCH_TYPES)) {
		pr_debug("no branch type enabled 0x%llx\n", branch_type);
		return false;
	}

	/*
	 * No branches are recorded in guests nor nVHE hypervisors, so
	 * excluding the host or both kernel and user is invalid.
	 *
	 * Ideally we'd just require exclude_guest and exclude_hv, but setting
	 * event filters with perf for kernel or user don't set exclude_guest.
	 * So effectively, exclude_guest and exclude_hv are ignored.
	 */
	if (event->attr.exclude_host || (event->attr.exclude_user && event->attr.exclude_kernel)) {
		pr_debug("branch filter in hypervisor or guest only not supported 0x%llx\n", branch_type);
		return false;
	}

	event->hw.branch_reg.config = branch_type_to_brbfcr(event->attr.branch_sample_type);
	event->hw.extra_reg.config = branch_type_to_brbcr(event->attr.branch_sample_type);

	return true;
}

unsigned int brbe_num_branch_records(const struct arm_pmu *armpmu)
{
	return FIELD_GET(BRBIDR0_EL1_NUMREC_MASK, armpmu->reg_brbidr);
}

void brbe_probe(struct arm_pmu *armpmu)
{
	u64 brbidr, aa64dfr0 = read_sysreg_s(SYS_ID_AA64DFR0_EL1);
	u32 brbe;

	brbe = cpuid_feature_extract_unsigned_field(aa64dfr0, ID_AA64DFR0_EL1_BRBE_SHIFT);
	if (!valid_brbe_version(brbe))
		return;

	brbidr = read_sysreg_s(SYS_BRBIDR0_EL1);
	if (!valid_brbidr(brbidr))
		return;

	armpmu->reg_brbidr = brbidr;
}

/*
 * BRBE is assumed to be disabled/paused on entry
 */
void brbe_enable(const struct arm_pmu *arm_pmu)
{
	struct pmu_hw_events *cpuc = this_cpu_ptr(arm_pmu->hw_events);
	u64 brbfcr = 0, brbcr = 0;

	/*
	 * Discard existing records to avoid a discontinuity, e.g. records
	 * missed during handling an overflow.
	 */
	brbe_invalidate();

	/*
	 * Merge the permitted branch filters of all events.
	 */
	for (int i = 0; i < ARMPMU_MAX_HWEVENTS; i++) {
		struct perf_event *event = cpuc->events[i];

		if (event && has_branch_stack(event)) {
			brbfcr |= event->hw.branch_reg.config;
			brbcr |= event->hw.extra_reg.config;
		}
	}

	/*
	 * In VHE mode with MDCR_EL2.HPMN equal to PMCR_EL0.N, BRBCR_EL1.FZP
	 * controls freezing the branch records on counter overflow rather than
	 * BRBCR_EL2.FZP (which writes to BRBCR_EL1 are redirected to).
	 * The exception levels are enabled/disabled in BRBCR_EL2, so keep EL1
	 * and EL0 recording disabled for guests.
	 *
	 * As BRBCR_EL1 CC and MPRED bits also need to match, use the same
	 * value for both registers just masking the exception levels.
	 */
	if (is_kernel_in_hyp_mode())
		write_sysreg_s(brbcr & ~(BRBCR_ELx_ExBRE | BRBCR_ELx_E0BRE), SYS_BRBCR_EL12);
	write_sysreg_s(brbcr, SYS_BRBCR_EL1);
	/* Ensure BRBCR_ELx settings take effect before unpausing */
	isb();

	/* Finally write SYS_BRBFCR_EL1 to unpause BRBE */
	write_sysreg_s(brbfcr, SYS_BRBFCR_EL1);
	/* Synchronization in PMCR write ensures ordering WRT PMU enabling */
}

void brbe_disable(void)
{
	/*
	 * No need for synchronization here as synchronization in PMCR write
	 * ensures ordering and in the interrupt handler this is a NOP as
	 * we're already paused.
	 */
	write_sysreg_s(BRBFCR_EL1_PAUSED, SYS_BRBFCR_EL1);
	write_sysreg_s(0, SYS_BRBCR_EL1);
}
static const int brbe_type_to_perf_type_map[BRBINFx_EL1_TYPE_DEBUG_EXIT + 1][2] = {
	[BRBINFx_EL1_TYPE_DIRECT_UNCOND] = { PERF_BR_UNCOND, 0 },
	[BRBINFx_EL1_TYPE_INDIRECT] = { PERF_BR_IND, 0 },
	[BRBINFx_EL1_TYPE_DIRECT_LINK] = { PERF_BR_CALL, 0 },
	[BRBINFx_EL1_TYPE_INDIRECT_LINK] = { PERF_BR_IND_CALL, 0 },
	[BRBINFx_EL1_TYPE_RET] = { PERF_BR_RET, 0 },
	[BRBINFx_EL1_TYPE_DIRECT_COND] = { PERF_BR_COND, 0 },
	[BRBINFx_EL1_TYPE_CALL] = { PERF_BR_SYSCALL, 0 },
	[BRBINFx_EL1_TYPE_ERET] = { PERF_BR_ERET, 0 },
	[BRBINFx_EL1_TYPE_IRQ] = { PERF_BR_IRQ, 0 },
	[BRBINFx_EL1_TYPE_TRAP] = { PERF_BR_IRQ, 0 },
	[BRBINFx_EL1_TYPE_SERROR] = { PERF_BR_SERROR, 0 },
	[BRBINFx_EL1_TYPE_ALIGN_FAULT] = { PERF_BR_EXTEND_ABI, PERF_BR_NEW_FAULT_ALGN },
	[BRBINFx_EL1_TYPE_INSN_FAULT] = { PERF_BR_EXTEND_ABI, PERF_BR_NEW_FAULT_INST },
	[BRBINFx_EL1_TYPE_DATA_FAULT] = { PERF_BR_EXTEND_ABI, PERF_BR_NEW_FAULT_DATA },
};

static void brbe_set_perf_entry_type(struct perf_branch_entry *entry, u64 brbinf)
{
	int brbe_type = brbinf_get_type(brbinf);

	if (brbe_type <= BRBINFx_EL1_TYPE_DEBUG_EXIT) {
		const int *br_type = brbe_type_to_perf_type_map[brbe_type];

		entry->type = br_type[0];
		entry->new_type = br_type[1];
	}
}

static int brbinf_get_perf_priv(u64 brbinf)
{
	int brbe_el = brbinf_get_el(brbinf);

	switch (brbe_el) {
	case BRBINFx_EL1_EL_EL0:
		return PERF_BR_PRIV_USER;
	case BRBINFx_EL1_EL_EL1:
		return PERF_BR_PRIV_KERNEL;
	case BRBINFx_EL1_EL_EL2:
		if (is_kernel_in_hyp_mode())
			return PERF_BR_PRIV_KERNEL;
		return PERF_BR_PRIV_HV;
	default:
		pr_warn_once("%d - unknown branch privilege captured\n", brbe_el);
		return PERF_BR_PRIV_UNKNOWN;
	}
}

static bool perf_entry_from_brbe_regset(int index, struct perf_branch_entry *entry,
					const struct perf_event *event)
{
	struct brbe_regset bregs;
	u64 brbinf;

	if (!__read_brbe_regset(&bregs, index))
		return false;

	brbinf = bregs.brbinf;
	perf_clear_branch_entry_bitfields(entry);
	if (brbe_record_is_complete(brbinf)) {
		entry->from = bregs.brbsrc;
		entry->to = bregs.brbtgt;
	} else if (brbe_record_is_source_only(brbinf)) {
		entry->from = bregs.brbsrc;
		entry->to = 0;
	} else if (brbe_record_is_target_only(brbinf)) {
		entry->from = 0;
		entry->to = bregs.brbtgt;
	}

	brbe_set_perf_entry_type(entry, brbinf);

	if (!branch_sample_no_cycles(event))
		entry->cycles = brbinf_get_cycles(brbinf);

	if (!branch_sample_no_flags(event)) {
		/* Mispredict info is available for source only and complete branch records. */
		if (!brbe_record_is_target_only(brbinf)) {
			entry->mispred = brbinf_get_mispredict(brbinf);
			entry->predicted = !entry->mispred;
		}

		/*
		 * Currently the TME feature is neither implemented in any
		 * hardware nor supported in the kernel. Just warn here once
		 * if TME related information shows up rather unexpectedly.
		 */
		if (brbinf_get_lastfailed(brbinf) || brbinf_get_in_tx(brbinf))
			pr_warn_once("Unknown transaction states\n");
	}

	/*
	 * Branch privilege level is available for target only and complete
	 * branch records.
	 */
	if (!brbe_record_is_source_only(brbinf))
		entry->priv = brbinf_get_perf_priv(brbinf);

	return true;
}

#define PERF_BR_ARM64_ALL (			\
	BIT(PERF_BR_COND)	|		\
	BIT(PERF_BR_UNCOND)	|		\
	BIT(PERF_BR_IND)	|		\
	BIT(PERF_BR_CALL)	|		\
	BIT(PERF_BR_IND_CALL)	|		\
	BIT(PERF_BR_RET))

#define PERF_BR_ARM64_ALL_KERNEL (		\
	BIT(PERF_BR_SYSCALL)	|		\
	BIT(PERF_BR_IRQ)	|		\
	BIT(PERF_BR_SERROR)	|		\
	BIT(PERF_BR_MAX + PERF_BR_NEW_FAULT_ALGN) |	\
	BIT(PERF_BR_MAX + PERF_BR_NEW_FAULT_DATA) |	\
	BIT(PERF_BR_MAX + PERF_BR_NEW_FAULT_INST))

static void prepare_event_branch_type_mask(u64 branch_sample,
					   unsigned long *event_type_mask)
{
	if (branch_sample & PERF_SAMPLE_BRANCH_ANY) {
		if (branch_sample & PERF_SAMPLE_BRANCH_KERNEL)
			bitmap_from_u64(event_type_mask,
					BIT(PERF_BR_ERET) | PERF_BR_ARM64_ALL |
					PERF_BR_ARM64_ALL_KERNEL);
		else
			bitmap_from_u64(event_type_mask, PERF_BR_ARM64_ALL);
		return;
	}

	bitmap_zero(event_type_mask, PERF_BR_ARM64_MAX);

	if (branch_sample & PERF_SAMPLE_BRANCH_ANY_CALL) {
		if (branch_sample & PERF_SAMPLE_BRANCH_KERNEL)
			bitmap_from_u64(event_type_mask, PERF_BR_ARM64_ALL_KERNEL);

		set_bit(PERF_BR_CALL, event_type_mask);
		set_bit(PERF_BR_IND_CALL, event_type_mask);
	}

	if (branch_sample & PERF_SAMPLE_BRANCH_IND_JUMP)
		set_bit(PERF_BR_IND, event_type_mask);

	if (branch_sample & PERF_SAMPLE_BRANCH_COND)
		set_bit(PERF_BR_COND, event_type_mask);

	if (branch_sample & PERF_SAMPLE_BRANCH_CALL)
		set_bit(PERF_BR_CALL, event_type_mask);

	if (branch_sample & PERF_SAMPLE_BRANCH_IND_CALL)
		set_bit(PERF_BR_IND_CALL, event_type_mask);

	if (branch_sample & PERF_SAMPLE_BRANCH_ANY_RETURN) {
		set_bit(PERF_BR_RET, event_type_mask);

		if (branch_sample & PERF_SAMPLE_BRANCH_KERNEL)
			set_bit(PERF_BR_ERET, event_type_mask);
	}
}

/*
 * BRBE is configured with an OR of permissions from all events, so there may
 * be events which have to be dropped or events where just the source or target
 * address has to be zeroed.
 */
static bool filter_branch_privilege(struct perf_branch_entry *entry, u64 branch_sample_type)
{
	bool from_user = access_ok((void __user *)(unsigned long)entry->from, 4);
	bool to_user = access_ok((void __user *)(unsigned long)entry->to, 4);
	bool exclude_kernel = !((branch_sample_type & PERF_SAMPLE_BRANCH_KERNEL) ||
		(is_kernel_in_hyp_mode() && (branch_sample_type & PERF_SAMPLE_BRANCH_HV)));

	/* We can only have a half record if permissions have not been expanded */
	if (!entry->from || !entry->to)
		return true;

	/*
	 * If record is within a single exception level, just need to either
	 * drop or keep the entire record.
	 */
	if (from_user == to_user)
		return ((entry->priv == PERF_BR_PRIV_KERNEL) && !exclude_kernel) ||
		       ((entry->priv == PERF_BR_PRIV_USER) &&
			(branch_sample_type & PERF_SAMPLE_BRANCH_USER));

	/*
	 * Record is across exception levels, mask addresses for the exception
	 * level we're not capturing.
	 */
	if (!(branch_sample_type & PERF_SAMPLE_BRANCH_USER)) {
		if (from_user)
			entry->from = 0;
		if (to_user)
			entry->to = 0;
	}

	if (exclude_kernel) {
		if (!from_user)
			entry->from = 0;
		if (!to_user)
			entry->to = 0;
	}

	return true;
}

static bool filter_branch_type(struct perf_branch_entry *entry,
			       const unsigned long *event_type_mask)
{
	if (entry->type == PERF_BR_EXTEND_ABI)
		return test_bit(PERF_BR_MAX + entry->new_type, event_type_mask);
	else
		return test_bit(entry->type, event_type_mask);
}

static bool filter_branch_record(struct perf_branch_entry *entry,
				 u64 branch_sample,
				 const unsigned long *event_type_mask)
{
	return filter_branch_type(entry, event_type_mask) &&
	       filter_branch_privilege(entry, branch_sample);
}

void brbe_read_filtered_entries(struct perf_branch_stack *branch_stack,
				const struct perf_event *event)
{
	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
	int nr_hw = brbe_num_branch_records(cpu_pmu);
	int nr_banks = DIV_ROUND_UP(nr_hw, BRBE_BANK_MAX_ENTRIES);
	int nr_filtered = 0;
	u64 branch_sample_type = event->attr.branch_sample_type;
	DECLARE_BITMAP(event_type_mask, PERF_BR_ARM64_MAX);

	prepare_event_branch_type_mask(branch_sample_type, event_type_mask);

	for (int bank = 0; bank < nr_banks; bank++) {
		int nr_remaining = nr_hw - (bank * BRBE_BANK_MAX_ENTRIES);
		int nr_this_bank = min(nr_remaining, BRBE_BANK_MAX_ENTRIES);

		select_brbe_bank(bank);

		for (int i = 0; i < nr_this_bank; i++) {
			struct perf_branch_entry *pbe = &branch_stack->entries[nr_filtered];

			if (!perf_entry_from_brbe_regset(i, pbe, event))
				goto done;

			if (!filter_branch_record(pbe, branch_sample_type, event_type_mask))
				continue;

			nr_filtered++;
		}
	}

done:
	branch_stack->nr = nr_filtered;
}

@@ -0,0 +1,47 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
 * Branch Record Buffer Extension Helpers.
 *
 * Copyright (C) 2022-2025 ARM Limited
 *
 * Author: Anshuman Khandual <anshuman.khandual@arm.com>
 */

struct arm_pmu;
struct perf_branch_stack;
struct perf_event;

#ifdef CONFIG_ARM64_BRBE
void brbe_probe(struct arm_pmu *arm_pmu);
unsigned int brbe_num_branch_records(const struct arm_pmu *armpmu);
void brbe_invalidate(void);

void brbe_enable(const struct arm_pmu *arm_pmu);
void brbe_disable(void);

bool brbe_branch_attr_valid(struct perf_event *event);
void brbe_read_filtered_entries(struct perf_branch_stack *branch_stack,
				const struct perf_event *event);
#else
static inline void brbe_probe(struct arm_pmu *arm_pmu) { }
static inline unsigned int brbe_num_branch_records(const struct arm_pmu *armpmu)
{
	return 0;
}

static inline void brbe_invalidate(void) { }

static inline void brbe_enable(const struct arm_pmu *arm_pmu) { };
static inline void brbe_disable(void) { };

static inline bool brbe_branch_attr_valid(struct perf_event *event)
{
	WARN_ON_ONCE(!has_branch_stack(event));
	return false;
}

static inline void brbe_read_filtered_entries(struct perf_branch_stack *branch_stack,
					      const struct perf_event *event)
{
}
#endif
|
@@ -99,7 +99,7 @@ static const struct pmu_irq_ops percpu_pmunmi_ops = {
    .free_pmuirq = armpmu_free_percpu_pmunmi
};

static DEFINE_PER_CPU(struct arm_pmu *, cpu_armpmu);
DEFINE_PER_CPU(struct arm_pmu *, cpu_armpmu);
static DEFINE_PER_CPU(int, cpu_irq);
static DEFINE_PER_CPU(const struct pmu_irq_ops *, cpu_irq_ops);
@@ -318,6 +318,12 @@ armpmu_del(struct perf_event *event, int flags)
    int idx = hwc->idx;

    armpmu_stop(event, PERF_EF_UPDATE);

    if (has_branch_stack(event)) {
        hw_events->branch_users--;
        perf_sched_cb_dec(event->pmu);
    }

    hw_events->events[idx] = NULL;
    armpmu->clear_event_idx(hw_events, event);
    perf_event_update_userpage(event);
@@ -345,6 +351,11 @@ armpmu_add(struct perf_event *event, int flags)
    /* The newly-allocated counter should be empty */
    WARN_ON_ONCE(hw_events->events[idx]);

    if (has_branch_stack(event)) {
        hw_events->branch_users++;
        perf_sched_cb_inc(event->pmu);
    }

    event->hw.idx = idx;
    hw_events->events[idx] = event;
@@ -509,8 +520,7 @@ static int armpmu_event_init(struct perf_event *event)
        !cpumask_test_cpu(event->cpu, &armpmu->supported_cpus))
        return -ENOENT;

    /* does not support taken branch sampling */
    if (has_branch_stack(event))
    if (has_branch_stack(event) && !armpmu->reg_brbidr)
        return -EOPNOTSUPP;

    return __hw_perf_event_init(event);

@@ -25,6 +25,8 @@
#include <linux/smp.h>
#include <linux/nmi.h>

#include "arm_brbe.h"

/* ARMv8 Cortex-A53 specific event types. */
#define ARMV8_A53_PERFCTR_PREF_LINEFILL    0xC2
@@ -438,7 +440,19 @@ static ssize_t threshold_max_show(struct device *dev,

static DEVICE_ATTR_RO(threshold_max);

static ssize_t branches_show(struct device *dev,
                             struct device_attribute *attr, char *page)
{
    struct pmu *pmu = dev_get_drvdata(dev);
    struct arm_pmu *cpu_pmu = container_of(pmu, struct arm_pmu, pmu);

    return sysfs_emit(page, "%d\n", brbe_num_branch_records(cpu_pmu));
}

static DEVICE_ATTR_RO(branches);

static struct attribute *armv8_pmuv3_caps_attrs[] = {
    &dev_attr_branches.attr,
    &dev_attr_slots.attr,
    &dev_attr_bus_slots.attr,
    &dev_attr_bus_width.attr,
@@ -446,9 +460,22 @@ static struct attribute *armv8_pmuv3_caps_attrs[] = {
    NULL,
};

static umode_t caps_is_visible(struct kobject *kobj, struct attribute *attr, int i)
{
    struct device *dev = kobj_to_dev(kobj);
    struct pmu *pmu = dev_get_drvdata(dev);
    struct arm_pmu *cpu_pmu = container_of(pmu, struct arm_pmu, pmu);

    if (i == 0)
        return brbe_num_branch_records(cpu_pmu) ? attr->mode : 0;

    return attr->mode;
}

static const struct attribute_group armv8_pmuv3_caps_attr_group = {
    .name = "caps",
    .attrs = armv8_pmuv3_caps_attrs,
    .is_visible = caps_is_visible,
};

/*
@@ -809,6 +836,7 @@ static void armv8pmu_disable_event(struct perf_event *event)
static void armv8pmu_start(struct arm_pmu *cpu_pmu)
{
    struct perf_event_context *ctx;
    struct pmu_hw_events *hw_events = this_cpu_ptr(cpu_pmu->hw_events);
    int nr_user = 0;

    ctx = perf_cpu_task_ctx();
@@ -822,16 +850,34 @@ static void armv8pmu_start(struct arm_pmu *cpu_pmu)

    kvm_vcpu_pmu_resync_el0();

    if (hw_events->branch_users)
        brbe_enable(cpu_pmu);

    /* Enable all counters */
    armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
}

static void armv8pmu_stop(struct arm_pmu *cpu_pmu)
{
    struct pmu_hw_events *hw_events = this_cpu_ptr(cpu_pmu->hw_events);

    if (hw_events->branch_users)
        brbe_disable();

    /* Disable all counters */
    armv8pmu_pmcr_write(armv8pmu_pmcr_read() & ~ARMV8_PMU_PMCR_E);
}

static void read_branch_records(struct pmu_hw_events *cpuc,
                                struct perf_event *event,
                                struct perf_sample_data *data)
{
    struct perf_branch_stack *branch_stack = cpuc->branch_stack;

    brbe_read_filtered_entries(branch_stack, event);
    perf_sample_save_brstack(data, event, branch_stack, NULL);
}

static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
{
    u64 pmovsr;
@@ -882,6 +928,9 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
        if (!armpmu_event_set_period(event))
            continue;

        if (has_branch_stack(event))
            read_branch_records(cpuc, event, &data);

        /*
         * Perf event overflow will queue the processing of the event as
         * an irq_work which will be taken care of in the handling of
@@ -938,7 +987,7 @@ static int armv8pmu_get_event_idx(struct pmu_hw_events *cpuc,

    /* Always prefer to place a cycle counter into the cycle counter. */
    if ((evtype == ARMV8_PMUV3_PERFCTR_CPU_CYCLES) &&
        !armv8pmu_event_get_threshold(&event->attr)) {
        !armv8pmu_event_get_threshold(&event->attr) && !has_branch_stack(event)) {
        if (!test_and_set_bit(ARMV8_PMU_CYCLE_IDX, cpuc->used_mask))
            return ARMV8_PMU_CYCLE_IDX;
        else if (armv8pmu_event_is_64bit(event) &&
@@ -987,6 +1036,19 @@ static int armv8pmu_user_event_idx(struct perf_event *event)
    return event->hw.idx + 1;
}

static void armv8pmu_sched_task(struct perf_event_pmu_context *pmu_ctx,
                                struct task_struct *task, bool sched_in)
{
    struct arm_pmu *armpmu = *this_cpu_ptr(&cpu_armpmu);
    struct pmu_hw_events *hw_events = this_cpu_ptr(armpmu->hw_events);

    if (!hw_events->branch_users)
        return;

    if (sched_in)
        brbe_invalidate();
}

/*
 * Add an event filter to a given event.
 */
@@ -1004,6 +1066,13 @@ static int armv8pmu_set_event_filter(struct hw_perf_event *event,
        return -EOPNOTSUPP;
    }

    if (has_branch_stack(perf_event)) {
        if (!brbe_num_branch_records(cpu_pmu) || !brbe_branch_attr_valid(perf_event))
            return -EOPNOTSUPP;

        perf_event->attach_state |= PERF_ATTACH_SCHED_CB;
    }

    /*
     * If we're running in hyp mode, then we *are* the hypervisor.
     * Therefore we ignore exclude_hv in this configuration, since
@@ -1070,6 +1139,11 @@ static void armv8pmu_reset(void *info)
    /* Clear the counters we flip at guest entry/exit */
    kvm_clr_pmu_events(mask);

    if (brbe_num_branch_records(cpu_pmu)) {
        brbe_disable();
        brbe_invalidate();
    }

    /*
     * Initialize & Reset PMNC. Request overflow interrupt for
     * 64 bit cycle counter but cheat in armv8pmu_write_counter().
@@ -1238,6 +1312,25 @@ static void __armv8pmu_probe_pmu(void *info)
        cpu_pmu->reg_pmmir = read_pmmir();
    else
        cpu_pmu->reg_pmmir = 0;

    brbe_probe(cpu_pmu);
}

static int branch_records_alloc(struct arm_pmu *armpmu)
{
    size_t size = struct_size_t(struct perf_branch_stack, entries,
                                brbe_num_branch_records(armpmu));
    int cpu;

    for_each_cpu(cpu, &armpmu->supported_cpus) {
        struct pmu_hw_events *events_cpu;

        events_cpu = per_cpu_ptr(armpmu->hw_events, cpu);
        events_cpu->branch_stack = kmalloc(size, GFP_KERNEL);
        if (!events_cpu->branch_stack)
            return -ENOMEM;
    }
    return 0;
}

static int armv8pmu_probe_pmu(struct arm_pmu *cpu_pmu)
@@ -1254,7 +1347,15 @@ static int armv8pmu_probe_pmu(struct arm_pmu *cpu_pmu)
    if (ret)
        return ret;

    return probe.present ? 0 : -ENODEV;
    if (!probe.present)
        return -ENODEV;

    if (brbe_num_branch_records(cpu_pmu)) {
        ret = branch_records_alloc(cpu_pmu);
        if (ret)
            return ret;
    }
    return 0;
}

static void armv8pmu_disable_user_access_ipi(void *unused)
@@ -1313,6 +1414,8 @@ static int armv8_pmu_init(struct arm_pmu *cpu_pmu, char *name,
    cpu_pmu->set_event_filter = armv8pmu_set_event_filter;

    cpu_pmu->pmu.event_idx = armv8pmu_user_event_idx;
    if (brbe_num_branch_records(cpu_pmu))
        cpu_pmu->pmu.sched_task = armv8pmu_sched_task;

    cpu_pmu->name = name;
    cpu_pmu->map_event = map_event;

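As a side note on the allocation in branch_records_alloc() above, struct_size_t() sizes a structure with a trailing flexible array while checking the multiplication for overflow. A minimal standalone sketch of the arithmetic, with a made-up record count:

/*
 * Illustrative only: sizeof(struct perf_branch_stack) plus
 * nr_records * sizeof(struct perf_branch_entry), overflow-checked.
 */
#include <linux/overflow.h>
#include <linux/perf_event.h>

static size_t example_brbe_stack_size(unsigned int nr_records)
{
    /* e.g. nr_records == 64 on a part with 64 BRBE records */
    return struct_size_t(struct perf_branch_stack, entries, nr_records);
}
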
@@ -308,17 +308,21 @@ static u64 arm_spe_event_to_pmscr(struct perf_event *event)

static void arm_spe_event_sanitise_period(struct perf_event *event)
{
    struct arm_spe_pmu *spe_pmu = to_spe_pmu(event->pmu);
    u64 period = event->hw.sample_period;
    u64 max_period = PMSIRR_EL1_INTERVAL_MASK;

    if (period < spe_pmu->min_period)
        period = spe_pmu->min_period;
    else if (period > max_period)
        period = max_period;
    else
        period &= max_period;
    /*
     * The PMSIDR_EL1.Interval field (stored in spe_pmu->min_period) is a
     * recommendation for the minimum interval, not a hardware limitation.
     *
     * According to the Arm ARM (DDI 0487 L.a), section D24.7.12 PMSIRR_EL1,
     * Sampling Interval Reload Register, the INTERVAL field (bits [31:8])
     * states: "Software must set this to a nonzero value". Use 1 as the
     * minimum value.
     */
    u64 min_period = FIELD_PREP(PMSIRR_EL1_INTERVAL_MASK, 1);

    period = clamp_t(u64, period, min_period, max_period) & max_period;
    event->hw.sample_period = period;
}

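A worked example of the new clamp, assuming PMSIRR_EL1_INTERVAL_MASK is GENMASK(31, 8) (the INTERVAL field occupies bits [31:8]): the minimum programmable period becomes 1 << 8 = 256, and the final mask strips bits [7:0] so the period always lands on an INTERVAL-field boundary. The helper below is a standalone sketch, not driver code.

#include <linux/bitfield.h>
#include <linux/bits.h>
#include <linux/minmax.h>
#include <linux/types.h>

static u64 example_sanitise(u64 requested)
{
    const u64 max = GENMASK(31, 8);                 /* PMSIRR_EL1_INTERVAL_MASK */
    const u64 min = FIELD_PREP(GENMASK(31, 8), 1);  /* == 256 */

    /* e.g. requested = 1000 -> clamped to 1000, masked to 768 (bits [7:0] cleared) */
    return clamp_t(u64, requested, min, max) & max;
}
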
@@ -113,7 +113,7 @@ struct cxl_pmu_info {

/*
 * All CPMU counters are discoverable via the Event Capabilities Registers.
 * Each Event Capability register contains a a VID / GroupID.
 * Each Event Capability register contains a VID / GroupID.
 * A counter may then count any combination (by summing) of events in
 * that group which are in the Supported Events Bitmask.
 * However, there are some complexities to the scheme.
@@ -406,7 +406,7 @@ static struct attribute *cxl_pmu_event_attrs[] = {
    CXL_PMU_EVENT_CXL_ATTR(s2m_bisnp_curblk, CXL_PMU_GID_S2M_BISNP, BIT(4)),
    CXL_PMU_EVENT_CXL_ATTR(s2m_bisnp_datblk, CXL_PMU_GID_S2M_BISNP, BIT(5)),
    CXL_PMU_EVENT_CXL_ATTR(s2m_bisnp_invblk, CXL_PMU_GID_S2M_BISNP, BIT(6)),
    /* CXL rev 3.1 Table 3-50 S2M NDR Opcopdes */
    /* CXL rev 3.1 Table 3-50 S2M NDR Opcodes */
    CXL_PMU_EVENT_CXL_ATTR(s2m_ndr_cmp, CXL_PMU_GID_S2M_NDR, BIT(0)),
    CXL_PMU_EVENT_CXL_ATTR(s2m_ndr_cmps, CXL_PMU_GID_S2M_NDR, BIT(1)),
    CXL_PMU_EVENT_CXL_ATTR(s2m_ndr_cmpe, CXL_PMU_GID_S2M_NDR, BIT(2)),
@@ -627,7 +627,7 @@ static void cxl_pmu_event_start(struct perf_event *event, int flags)
    hwc->state = 0;

    /*
     * Currently only hdm filter control is implemnted, this code will
     * Currently only hdm filter control is implemented, this code will
     * want generalizing when more filters are added.
     */
    if (info->filter_hdm) {
@@ -834,8 +834,8 @@ static int cxl_pmu_probe(struct device *dev)
    if (rc)
        return rc;

    info->hw_events = devm_kcalloc(dev, sizeof(*info->hw_events),
                                   info->num_counters, GFP_KERNEL);
    info->hw_events = devm_kcalloc(dev, info->num_counters,
                                   sizeof(*info->hw_events), GFP_KERNEL);
    if (!info->hw_events)
        return -ENOMEM;
@@ -873,7 +873,7 @@ static int cxl_pmu_probe(struct device *dev)
        return rc;
    irq = rc;

    irq_name = devm_kasprintf(dev, GFP_KERNEL, "%s_overflow\n", dev_name);
    irq_name = devm_kasprintf(dev, GFP_KERNEL, "%s_overflow", dev_name);
    if (!irq_name)
        return -ENOMEM;

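The devm_kcalloc() change above is purely an argument-order fix: the allocator takes the element count before the element size. Swapping them still allocates n * size bytes, but it defeats the intent of the count/size split and trips static analysers. A hedged sketch with a made-up element type:

#include <linux/device.h>
#include <linux/slab.h>

struct example_hw_event { u64 config; };

static struct example_hw_event *alloc_events(struct device *dev, size_t n)
{
    /* devm_kcalloc(dev, count, size, flags) */
    return devm_kcalloc(dev, n, sizeof(struct example_hw_event),
                        GFP_KERNEL);
}
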
@@ -461,9 +461,11 @@ static void imx93_ddr_perf_monitor_config(struct ddr_pmu *pmu, int event,
                                          int counter, int axi_id, int axi_mask)
{
    u32 pmcfg1, pmcfg2;
    u32 mask[] = { MX93_PMCFG1_RD_TRANS_FILT_EN,
                   MX93_PMCFG1_WR_TRANS_FILT_EN,
                   MX93_PMCFG1_RD_BT_FILT_EN };
    static const u32 mask[] = {
        MX93_PMCFG1_RD_TRANS_FILT_EN,
        MX93_PMCFG1_WR_TRANS_FILT_EN,
        MX93_PMCFG1_RD_BT_FILT_EN
    };

    pmcfg1 = readl_relaxed(pmu->base + PMCFG1);

@@ -43,12 +43,21 @@
#define DDRC_V2_EVENT_TYPE    0xe74
#define DDRC_V2_PERF_CTRL    0xeA0

/* DDRC interrupt registers definition in v3 */
#define DDRC_V3_INT_MASK    0x534
#define DDRC_V3_INT_STATUS    0x538
#define DDRC_V3_INT_CLEAR    0x53C

/* DDRC has 8-counters */
#define DDRC_NR_COUNTERS    0x8
#define DDRC_V1_PERF_CTRL_EN    0x2
#define DDRC_V2_PERF_CTRL_EN    0x1
#define DDRC_V1_NR_EVENTS    0x7
#define DDRC_V2_NR_EVENTS    0x90
#define DDRC_V2_NR_EVENTS    0xFF

#define DDRC_EVENT_CNTn(base, n)    ((base) + (n) * 8)
#define DDRC_EVENT_TYPEn(base, n)    ((base) + (n) * 4)
#define DDRC_UNIMPLEMENTED_REG        GENMASK(31, 0)

/*
 * For PMU v1, there are eight-events and every event has been mapped
@@ -63,47 +72,37 @@ static const u32 ddrc_reg_off[] = {
    DDRC_PRE_CMD, DDRC_ACT_CMD, DDRC_RNK_CHG, DDRC_RW_CHG
};

/*
 * Select the counter register offset using the counter index.
 * In PMU v1, there are no programmable counter, the count
 * is read form the statistics counter register itself.
 */
static u32 hisi_ddrc_pmu_v1_get_counter_offset(int cntr_idx)
{
    return ddrc_reg_off[cntr_idx];
}
struct hisi_ddrc_pmu_regs {
    u32 event_cnt;
    u32 event_ctrl;
    u32 event_type;
    u32 perf_ctrl;
    u32 perf_ctrl_en;
    u32 int_mask;
    u32 int_clear;
    u32 int_status;
};

static u32 hisi_ddrc_pmu_v2_get_counter_offset(int cntr_idx)
{
    return DDRC_V2_EVENT_CNT + cntr_idx * 8;
}

static u64 hisi_ddrc_pmu_v1_read_counter(struct hisi_pmu *ddrc_pmu,
static u64 hisi_ddrc_pmu_read_counter(struct hisi_pmu *ddrc_pmu,
                                      struct hw_perf_event *hwc)
{
    return readl(ddrc_pmu->base +
                 hisi_ddrc_pmu_v1_get_counter_offset(hwc->idx));
    struct hisi_ddrc_pmu_regs *regs = ddrc_pmu->dev_info->private;

    if (regs->event_cnt == DDRC_UNIMPLEMENTED_REG)
        return readl(ddrc_pmu->base + ddrc_reg_off[hwc->idx]);

    return readq(ddrc_pmu->base + DDRC_EVENT_CNTn(regs->event_cnt, hwc->idx));
}

static void hisi_ddrc_pmu_v1_write_counter(struct hisi_pmu *ddrc_pmu,
static void hisi_ddrc_pmu_write_counter(struct hisi_pmu *ddrc_pmu,
                                        struct hw_perf_event *hwc, u64 val)
{
    writel((u32)val,
           ddrc_pmu->base + hisi_ddrc_pmu_v1_get_counter_offset(hwc->idx));
}
    struct hisi_ddrc_pmu_regs *regs = ddrc_pmu->dev_info->private;

static u64 hisi_ddrc_pmu_v2_read_counter(struct hisi_pmu *ddrc_pmu,
                                         struct hw_perf_event *hwc)
{
    return readq(ddrc_pmu->base +
                 hisi_ddrc_pmu_v2_get_counter_offset(hwc->idx));
}

static void hisi_ddrc_pmu_v2_write_counter(struct hisi_pmu *ddrc_pmu,
                                           struct hw_perf_event *hwc, u64 val)
{
    writeq(val,
           ddrc_pmu->base + hisi_ddrc_pmu_v2_get_counter_offset(hwc->idx));
    if (regs->event_cnt == DDRC_UNIMPLEMENTED_REG)
        writel((u32)val, ddrc_pmu->base + ddrc_reg_off[hwc->idx]);
    else
        writeq(val, ddrc_pmu->base + DDRC_EVENT_CNTn(regs->event_cnt, hwc->idx));
}

/*
@@ -114,54 +113,12 @@ static void hisi_ddrc_pmu_v2_write_counter(struct hisi_pmu *ddrc_pmu,
static void hisi_ddrc_pmu_write_evtype(struct hisi_pmu *ddrc_pmu, int idx,
                                       u32 type)
{
    u32 offset;
    struct hisi_ddrc_pmu_regs *regs = ddrc_pmu->dev_info->private;

    if (ddrc_pmu->identifier >= HISI_PMU_V2) {
        offset = DDRC_V2_EVENT_TYPE + 4 * idx;
        writel(type, ddrc_pmu->base + offset);
    }
}
    if (regs->event_type == DDRC_UNIMPLEMENTED_REG)
        return;

static void hisi_ddrc_pmu_v1_start_counters(struct hisi_pmu *ddrc_pmu)
{
    u32 val;

    /* Set perf_enable in DDRC_PERF_CTRL to start event counting */
    val = readl(ddrc_pmu->base + DDRC_PERF_CTRL);
    val |= DDRC_V1_PERF_CTRL_EN;
    writel(val, ddrc_pmu->base + DDRC_PERF_CTRL);
}

static void hisi_ddrc_pmu_v1_stop_counters(struct hisi_pmu *ddrc_pmu)
{
    u32 val;

    /* Clear perf_enable in DDRC_PERF_CTRL to stop event counting */
    val = readl(ddrc_pmu->base + DDRC_PERF_CTRL);
    val &= ~DDRC_V1_PERF_CTRL_EN;
    writel(val, ddrc_pmu->base + DDRC_PERF_CTRL);
}

static void hisi_ddrc_pmu_v1_enable_counter(struct hisi_pmu *ddrc_pmu,
                                            struct hw_perf_event *hwc)
{
    u32 val;

    /* Set counter index(event code) in DDRC_EVENT_CTRL register */
    val = readl(ddrc_pmu->base + DDRC_EVENT_CTRL);
    val |= (1 << GET_DDRC_EVENTID(hwc));
    writel(val, ddrc_pmu->base + DDRC_EVENT_CTRL);
}

static void hisi_ddrc_pmu_v1_disable_counter(struct hisi_pmu *ddrc_pmu,
                                             struct hw_perf_event *hwc)
{
    u32 val;

    /* Clear counter index(event code) in DDRC_EVENT_CTRL register */
    val = readl(ddrc_pmu->base + DDRC_EVENT_CTRL);
    val &= ~(1 << GET_DDRC_EVENTID(hwc));
    writel(val, ddrc_pmu->base + DDRC_EVENT_CTRL);
    writel(type, ddrc_pmu->base + DDRC_EVENT_TYPEn(regs->event_type, idx));
}

static int hisi_ddrc_pmu_v1_get_event_idx(struct perf_event *event)
@@ -180,120 +137,96 @@ static int hisi_ddrc_pmu_v1_get_event_idx(struct perf_event *event)
    return idx;
}

static int hisi_ddrc_pmu_v2_get_event_idx(struct perf_event *event)
static int hisi_ddrc_pmu_get_event_idx(struct perf_event *event)
{
    struct hisi_pmu *ddrc_pmu = to_hisi_pmu(event->pmu);
    struct hisi_ddrc_pmu_regs *regs = ddrc_pmu->dev_info->private;

    if (regs->event_type == DDRC_UNIMPLEMENTED_REG)
        return hisi_ddrc_pmu_v1_get_event_idx(event);

    return hisi_uncore_pmu_get_event_idx(event);
}

static void hisi_ddrc_pmu_v2_start_counters(struct hisi_pmu *ddrc_pmu)
static void hisi_ddrc_pmu_start_counters(struct hisi_pmu *ddrc_pmu)
{
    struct hisi_ddrc_pmu_regs *regs = ddrc_pmu->dev_info->private;
    u32 val;

    val = readl(ddrc_pmu->base + DDRC_V2_PERF_CTRL);
    val |= DDRC_V2_PERF_CTRL_EN;
    writel(val, ddrc_pmu->base + DDRC_V2_PERF_CTRL);
    val = readl(ddrc_pmu->base + regs->perf_ctrl);
    val |= regs->perf_ctrl_en;
    writel(val, ddrc_pmu->base + regs->perf_ctrl);
}

static void hisi_ddrc_pmu_v2_stop_counters(struct hisi_pmu *ddrc_pmu)
static void hisi_ddrc_pmu_stop_counters(struct hisi_pmu *ddrc_pmu)
{
    struct hisi_ddrc_pmu_regs *regs = ddrc_pmu->dev_info->private;
    u32 val;

    val = readl(ddrc_pmu->base + DDRC_V2_PERF_CTRL);
    val &= ~DDRC_V2_PERF_CTRL_EN;
    writel(val, ddrc_pmu->base + DDRC_V2_PERF_CTRL);
    val = readl(ddrc_pmu->base + regs->perf_ctrl);
    val &= ~regs->perf_ctrl_en;
    writel(val, ddrc_pmu->base + regs->perf_ctrl);
}

static void hisi_ddrc_pmu_v2_enable_counter(struct hisi_pmu *ddrc_pmu,
static void hisi_ddrc_pmu_enable_counter(struct hisi_pmu *ddrc_pmu,
                                         struct hw_perf_event *hwc)
{
    struct hisi_ddrc_pmu_regs *regs = ddrc_pmu->dev_info->private;
    u32 val;

    val = readl(ddrc_pmu->base + DDRC_V2_EVENT_CTRL);
    val |= 1 << hwc->idx;
    writel(val, ddrc_pmu->base + DDRC_V2_EVENT_CTRL);
    val = readl(ddrc_pmu->base + regs->event_ctrl);
    val |= BIT_ULL(hwc->idx);
    writel(val, ddrc_pmu->base + regs->event_ctrl);
}

static void hisi_ddrc_pmu_v2_disable_counter(struct hisi_pmu *ddrc_pmu,
static void hisi_ddrc_pmu_disable_counter(struct hisi_pmu *ddrc_pmu,
                                          struct hw_perf_event *hwc)
{
    struct hisi_ddrc_pmu_regs *regs = ddrc_pmu->dev_info->private;
    u32 val;

    val = readl(ddrc_pmu->base + DDRC_V2_EVENT_CTRL);
    val &= ~(1 << hwc->idx);
    writel(val, ddrc_pmu->base + DDRC_V2_EVENT_CTRL);
    val = readl(ddrc_pmu->base + regs->event_ctrl);
    val &= ~BIT_ULL(hwc->idx);
    writel(val, ddrc_pmu->base + regs->event_ctrl);
}

static void hisi_ddrc_pmu_v1_enable_counter_int(struct hisi_pmu *ddrc_pmu,
                                                struct hw_perf_event *hwc)
static void hisi_ddrc_pmu_enable_counter_int(struct hisi_pmu *ddrc_pmu,
                                             struct hw_perf_event *hwc)
{
    struct hisi_ddrc_pmu_regs *regs = ddrc_pmu->dev_info->private;
    u32 val;

    /* Write 0 to enable interrupt */
    val = readl(ddrc_pmu->base + DDRC_INT_MASK);
    val &= ~(1 << hwc->idx);
    writel(val, ddrc_pmu->base + DDRC_INT_MASK);
    val = readl(ddrc_pmu->base + regs->int_mask);
    val &= ~BIT_ULL(hwc->idx);
    writel(val, ddrc_pmu->base + regs->int_mask);
}

static void hisi_ddrc_pmu_v1_disable_counter_int(struct hisi_pmu *ddrc_pmu,
                                                 struct hw_perf_event *hwc)
static void hisi_ddrc_pmu_disable_counter_int(struct hisi_pmu *ddrc_pmu,
                                              struct hw_perf_event *hwc)
{
    struct hisi_ddrc_pmu_regs *regs = ddrc_pmu->dev_info->private;
    u32 val;

    /* Write 1 to mask interrupt */
    val = readl(ddrc_pmu->base + DDRC_INT_MASK);
    val |= 1 << hwc->idx;
    writel(val, ddrc_pmu->base + DDRC_INT_MASK);
    val = readl(ddrc_pmu->base + regs->int_mask);
    val |= BIT_ULL(hwc->idx);
    writel(val, ddrc_pmu->base + regs->int_mask);
}

static void hisi_ddrc_pmu_v2_enable_counter_int(struct hisi_pmu *ddrc_pmu,
                                                struct hw_perf_event *hwc)
static u32 hisi_ddrc_pmu_get_int_status(struct hisi_pmu *ddrc_pmu)
{
    u32 val;
    struct hisi_ddrc_pmu_regs *regs = ddrc_pmu->dev_info->private;

    val = readl(ddrc_pmu->base + DDRC_V2_INT_MASK);
    val &= ~(1 << hwc->idx);
    writel(val, ddrc_pmu->base + DDRC_V2_INT_MASK);
    return readl(ddrc_pmu->base + regs->int_status);
}

static void hisi_ddrc_pmu_v2_disable_counter_int(struct hisi_pmu *ddrc_pmu,
                                                 struct hw_perf_event *hwc)
static void hisi_ddrc_pmu_clear_int_status(struct hisi_pmu *ddrc_pmu,
                                           int idx)
{
    u32 val;
    struct hisi_ddrc_pmu_regs *regs = ddrc_pmu->dev_info->private;

    val = readl(ddrc_pmu->base + DDRC_V2_INT_MASK);
    val |= 1 << hwc->idx;
    writel(val, ddrc_pmu->base + DDRC_V2_INT_MASK);
    writel(1 << idx, ddrc_pmu->base + regs->int_clear);
}

static u32 hisi_ddrc_pmu_v1_get_int_status(struct hisi_pmu *ddrc_pmu)
{
    return readl(ddrc_pmu->base + DDRC_INT_STATUS);
}

static void hisi_ddrc_pmu_v1_clear_int_status(struct hisi_pmu *ddrc_pmu,
                                              int idx)
{
    writel(1 << idx, ddrc_pmu->base + DDRC_INT_CLEAR);
}

static u32 hisi_ddrc_pmu_v2_get_int_status(struct hisi_pmu *ddrc_pmu)
{
    return readl(ddrc_pmu->base + DDRC_V2_INT_STATUS);
}

static void hisi_ddrc_pmu_v2_clear_int_status(struct hisi_pmu *ddrc_pmu,
                                              int idx)
{
    writel(1 << idx, ddrc_pmu->base + DDRC_V2_INT_CLEAR);
}

static const struct acpi_device_id hisi_ddrc_pmu_acpi_match[] = {
    { "HISI0233", },
    { "HISI0234", },
    {}
};
MODULE_DEVICE_TABLE(acpi, hisi_ddrc_pmu_acpi_match);

static int hisi_ddrc_pmu_init_data(struct platform_device *pdev,
                                   struct hisi_pmu *ddrc_pmu)
{
@@ -314,6 +247,10 @@ static int hisi_ddrc_pmu_init_data(struct platform_device *pdev,
        return -EINVAL;
    }

    ddrc_pmu->dev_info = device_get_match_data(&pdev->dev);
    if (!ddrc_pmu->dev_info)
        return -ENODEV;

    ddrc_pmu->base = devm_platform_ioremap_resource(pdev, 0);
    if (IS_ERR(ddrc_pmu->base)) {
        dev_err(&pdev->dev, "ioremap failed for ddrc_pmu resource\n");
@@ -396,34 +333,19 @@ static const struct attribute_group *hisi_ddrc_pmu_v2_attr_groups[] = {
    NULL
};

static const struct hisi_uncore_ops hisi_uncore_ddrc_v1_ops = {
static const struct hisi_uncore_ops hisi_uncore_ddrc_ops = {
    .write_evtype = hisi_ddrc_pmu_write_evtype,
    .get_event_idx = hisi_ddrc_pmu_v1_get_event_idx,
    .start_counters = hisi_ddrc_pmu_v1_start_counters,
    .stop_counters = hisi_ddrc_pmu_v1_stop_counters,
    .enable_counter = hisi_ddrc_pmu_v1_enable_counter,
    .disable_counter = hisi_ddrc_pmu_v1_disable_counter,
    .enable_counter_int = hisi_ddrc_pmu_v1_enable_counter_int,
    .disable_counter_int = hisi_ddrc_pmu_v1_disable_counter_int,
    .write_counter = hisi_ddrc_pmu_v1_write_counter,
    .read_counter = hisi_ddrc_pmu_v1_read_counter,
    .get_int_status = hisi_ddrc_pmu_v1_get_int_status,
    .clear_int_status = hisi_ddrc_pmu_v1_clear_int_status,
};

static const struct hisi_uncore_ops hisi_uncore_ddrc_v2_ops = {
    .write_evtype = hisi_ddrc_pmu_write_evtype,
    .get_event_idx = hisi_ddrc_pmu_v2_get_event_idx,
    .start_counters = hisi_ddrc_pmu_v2_start_counters,
    .stop_counters = hisi_ddrc_pmu_v2_stop_counters,
    .enable_counter = hisi_ddrc_pmu_v2_enable_counter,
    .disable_counter = hisi_ddrc_pmu_v2_disable_counter,
    .enable_counter_int = hisi_ddrc_pmu_v2_enable_counter_int,
    .disable_counter_int = hisi_ddrc_pmu_v2_disable_counter_int,
    .write_counter = hisi_ddrc_pmu_v2_write_counter,
    .read_counter = hisi_ddrc_pmu_v2_read_counter,
    .get_int_status = hisi_ddrc_pmu_v2_get_int_status,
    .clear_int_status = hisi_ddrc_pmu_v2_clear_int_status,
    .get_event_idx = hisi_ddrc_pmu_get_event_idx,
    .start_counters = hisi_ddrc_pmu_start_counters,
    .stop_counters = hisi_ddrc_pmu_stop_counters,
    .enable_counter = hisi_ddrc_pmu_enable_counter,
    .disable_counter = hisi_ddrc_pmu_disable_counter,
    .enable_counter_int = hisi_ddrc_pmu_enable_counter_int,
    .disable_counter_int = hisi_ddrc_pmu_disable_counter_int,
    .write_counter = hisi_ddrc_pmu_write_counter,
    .read_counter = hisi_ddrc_pmu_read_counter,
    .get_int_status = hisi_ddrc_pmu_get_int_status,
    .clear_int_status = hisi_ddrc_pmu_clear_int_status,
};

static int hisi_ddrc_pmu_dev_probe(struct platform_device *pdev,
@@ -439,18 +361,10 @@ static int hisi_ddrc_pmu_dev_probe(struct platform_device *pdev,
    if (ret)
        return ret;

    if (ddrc_pmu->identifier >= HISI_PMU_V2) {
        ddrc_pmu->counter_bits = 48;
        ddrc_pmu->check_event = DDRC_V2_NR_EVENTS;
        ddrc_pmu->pmu_events.attr_groups = hisi_ddrc_pmu_v2_attr_groups;
        ddrc_pmu->ops = &hisi_uncore_ddrc_v2_ops;
    } else {
        ddrc_pmu->counter_bits = 32;
        ddrc_pmu->check_event = DDRC_V1_NR_EVENTS;
        ddrc_pmu->pmu_events.attr_groups = hisi_ddrc_pmu_v1_attr_groups;
        ddrc_pmu->ops = &hisi_uncore_ddrc_v1_ops;
    }

    ddrc_pmu->pmu_events.attr_groups = ddrc_pmu->dev_info->attr_groups;
    ddrc_pmu->counter_bits = ddrc_pmu->dev_info->counter_bits;
    ddrc_pmu->check_event = ddrc_pmu->dev_info->check_event;
    ddrc_pmu->ops = &hisi_uncore_ddrc_ops;
    ddrc_pmu->num_counters = DDRC_NR_COUNTERS;
    ddrc_pmu->dev = &pdev->dev;
    ddrc_pmu->on_cpu = -1;
@@ -515,6 +429,68 @@ static void hisi_ddrc_pmu_remove(struct platform_device *pdev)
                               &ddrc_pmu->node);
}

static struct hisi_ddrc_pmu_regs hisi_ddrc_v1_pmu_regs = {
    .event_cnt = DDRC_UNIMPLEMENTED_REG,
    .event_ctrl = DDRC_EVENT_CTRL,
    .event_type = DDRC_UNIMPLEMENTED_REG,
    .perf_ctrl = DDRC_PERF_CTRL,
    .perf_ctrl_en = DDRC_V1_PERF_CTRL_EN,
    .int_mask = DDRC_INT_MASK,
    .int_clear = DDRC_INT_CLEAR,
    .int_status = DDRC_INT_STATUS,
};

static const struct hisi_pmu_dev_info hisi_ddrc_v1 = {
    .counter_bits = 32,
    .check_event = DDRC_V1_NR_EVENTS,
    .attr_groups = hisi_ddrc_pmu_v1_attr_groups,
    .private = &hisi_ddrc_v1_pmu_regs,
};

static struct hisi_ddrc_pmu_regs hisi_ddrc_v2_pmu_regs = {
    .event_cnt = DDRC_V2_EVENT_CNT,
    .event_ctrl = DDRC_V2_EVENT_CTRL,
    .event_type = DDRC_V2_EVENT_TYPE,
    .perf_ctrl = DDRC_V2_PERF_CTRL,
    .perf_ctrl_en = DDRC_V2_PERF_CTRL_EN,
    .int_mask = DDRC_V2_INT_MASK,
    .int_clear = DDRC_V2_INT_CLEAR,
    .int_status = DDRC_V2_INT_STATUS,
};

static const struct hisi_pmu_dev_info hisi_ddrc_v2 = {
    .counter_bits = 48,
    .check_event = DDRC_V2_NR_EVENTS,
    .attr_groups = hisi_ddrc_pmu_v2_attr_groups,
    .private = &hisi_ddrc_v2_pmu_regs,
};

static struct hisi_ddrc_pmu_regs hisi_ddrc_v3_pmu_regs = {
    .event_cnt = DDRC_V2_EVENT_CNT,
    .event_ctrl = DDRC_V2_EVENT_CTRL,
    .event_type = DDRC_V2_EVENT_TYPE,
    .perf_ctrl = DDRC_V2_PERF_CTRL,
    .perf_ctrl_en = DDRC_V2_PERF_CTRL_EN,
    .int_mask = DDRC_V3_INT_MASK,
    .int_clear = DDRC_V3_INT_CLEAR,
    .int_status = DDRC_V3_INT_STATUS,
};

static const struct hisi_pmu_dev_info hisi_ddrc_v3 = {
    .counter_bits = 48,
    .check_event = DDRC_V2_NR_EVENTS,
    .attr_groups = hisi_ddrc_pmu_v2_attr_groups,
    .private = &hisi_ddrc_v3_pmu_regs,
};

static const struct acpi_device_id hisi_ddrc_pmu_acpi_match[] = {
    { "HISI0233", (kernel_ulong_t)&hisi_ddrc_v1 },
    { "HISI0234", (kernel_ulong_t)&hisi_ddrc_v2 },
    { "HISI0235", (kernel_ulong_t)&hisi_ddrc_v3 },
    {}
};
MODULE_DEVICE_TABLE(acpi, hisi_ddrc_pmu_acpi_match);

static struct platform_driver hisi_ddrc_pmu_driver = {
    .driver = {
        .name = "hisi_ddrc_pmu",

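The net effect of this DDRC rework is a single table-driven ops structure in place of per-version callbacks, with DDRC_UNIMPLEMENTED_REG acting as an all-ones sentinel for registers a revision lacks: one read path serves v1 (fixed-purpose 32-bit counters at per-event offsets) and v2/v3 (programmable 64-bit counters at a fixed stride). A simplified, standalone sketch of the dispatch pattern, with types and helpers invented for the example:

#include <stdint.h>

#define EXAMPLE_UNIMPLEMENTED    UINT32_MAX

struct example_regs {
    uint32_t event_cnt;    /* EXAMPLE_UNIMPLEMENTED on v1 */
};

static uint64_t example_read_counter(const struct example_regs *regs,
                                     const uint32_t *v1_fixed_off, int idx,
                                     uint64_t (*read32)(uint32_t off),
                                     uint64_t (*read64)(uint32_t off))
{
    if (regs->event_cnt == EXAMPLE_UNIMPLEMENTED)
        return read32(v1_fixed_off[idx]);       /* v1 fallback */

    return read64(regs->event_cnt + idx * 8);   /* v2/v3, stride 8 */
}
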
@@ -47,9 +47,9 @@
#define HHA_SRCID_CMD        GENMASK(16, 6)
#define HHA_SRCID_MSK        GENMASK(30, 20)
#define HHA_DATSRC_SKT_EN    BIT(23)
#define HHA_EVTYPE_NONE        0xff
#define HHA_EVTYPE_MASK        GENMASK(7, 0)
#define HHA_V1_NR_EVENT        0x65
#define HHA_V2_NR_EVENT        0xCE
#define HHA_V2_NR_EVENT        0xFF

HISI_PMU_EVENT_ATTR_EXTRACTOR(srcid_cmd, config1, 10, 0);
HISI_PMU_EVENT_ATTR_EXTRACTOR(srcid_msk, config1, 21, 11);
@@ -197,7 +197,7 @@ static void hisi_hha_pmu_write_evtype(struct hisi_pmu *hha_pmu, int idx,

    /* Write event code to HHA_EVENT_TYPEx register */
    val = readl(hha_pmu->base + reg);
    val &= ~(HHA_EVTYPE_NONE << shift);
    val &= ~(HHA_EVTYPE_MASK << shift);
    val |= (type << shift);
    writel(val, hha_pmu->base + reg);
}

@@ -440,7 +440,7 @@ static int hisi_pa_pmu_dev_probe(struct platform_device *pdev,
    pa_pmu->pmu_events.attr_groups = pa_pmu->dev_info->attr_groups;
    pa_pmu->num_counters = PA_NR_COUNTERS;
    pa_pmu->ops = &hisi_uncore_pa_ops;
    pa_pmu->check_event = 0xB0;
    pa_pmu->check_event = PA_EVTYPE_MASK;
    pa_pmu->counter_bits = 64;
    pa_pmu->dev = &pdev->dev;
    pa_pmu->on_cpu = -1;

@@ -510,7 +510,9 @@ int hisi_uncore_pmu_online_cpu(unsigned int cpu, struct hlist_node *node)
        return 0;

    hisi_pmu->on_cpu = cpumask_local_spread(0, dev_to_node(hisi_pmu->dev));
    WARN_ON(irq_set_affinity(hisi_pmu->irq, cpumask_of(hisi_pmu->on_cpu)));
    if (hisi_pmu->irq > 0)
        WARN_ON(irq_set_affinity(hisi_pmu->irq,
                                 cpumask_of(hisi_pmu->on_cpu)));
    return 0;
}
@@ -525,7 +527,8 @@ int hisi_uncore_pmu_online_cpu(unsigned int cpu, struct hlist_node *node)
    hisi_pmu->on_cpu = cpu;

    /* Overflow interrupt also should use the same CPU */
    WARN_ON(irq_set_affinity(hisi_pmu->irq, cpumask_of(cpu)));
    if (hisi_pmu->irq > 0)
        WARN_ON(irq_set_affinity(hisi_pmu->irq, cpumask_of(cpu)));

    return 0;
}
@@ -560,7 +563,9 @@ int hisi_uncore_pmu_offline_cpu(unsigned int cpu, struct hlist_node *node)
    perf_pmu_migrate_context(&hisi_pmu->pmu, cpu, target);
    /* Use this CPU for event counting */
    hisi_pmu->on_cpu = target;
    WARN_ON(irq_set_affinity(hisi_pmu->irq, cpumask_of(target)));

    if (hisi_pmu->irq > 0)
        WARN_ON(irq_set_affinity(hisi_pmu->irq, cpumask_of(target)));

    return 0;
}

@@ -72,6 +72,8 @@ struct hisi_uncore_ops {
struct hisi_pmu_dev_info {
    const char *name;
    const struct attribute_group **attr_groups;
    u32 counter_bits;
    u32 check_event;
    void *private;
};

@@ -28,6 +28,18 @@
#define SLLC_VERSION        0x1cf0
#define SLLC_EVENT_CNT0_L    0x1d00

/* SLLC registers definition in v3 */
#define SLLC_V3_INT_MASK    0x6834
#define SLLC_V3_INT_STATUS    0x6838
#define SLLC_V3_INT_CLEAR    0x683c
#define SLLC_V3_VERSION        0x6c00
#define SLLC_V3_PERF_CTRL    0x6d00
#define SLLC_V3_SRCID_CTRL    0x6d04
#define SLLC_V3_TGTID_CTRL    0x6d08
#define SLLC_V3_EVENT_CTRL    0x6d14
#define SLLC_V3_EVENT_TYPE0    0x6d18
#define SLLC_V3_EVENT_CNT0_L    0x6e00

#define SLLC_EVTYPE_MASK    0xff
#define SLLC_PERF_CTRL_EN    BIT(0)
#define SLLC_FILT_EN        BIT(1)
@@ -40,7 +52,14 @@
#define SLLC_TGTID_MAX_SHIFT    12
#define SLLC_SRCID_CMD_SHIFT    1
#define SLLC_SRCID_MSK_SHIFT    12
#define SLLC_NR_EVENTS        0x80

#define SLLC_V3_TGTID_MIN_SHIFT    1
#define SLLC_V3_TGTID_MAX_SHIFT    10
#define SLLC_V3_SRCID_CMD_SHIFT    1
#define SLLC_V3_SRCID_MSK_SHIFT    10

#define SLLC_NR_EVENTS        0xff
#define SLLC_EVENT_CNTn(cnt0, n)    ((cnt0) + (n) * 8)

HISI_PMU_EVENT_ATTR_EXTRACTOR(tgtid_min, config1, 10, 0);
HISI_PMU_EVENT_ATTR_EXTRACTOR(tgtid_max, config1, 21, 11);
@@ -48,6 +67,23 @@ HISI_PMU_EVENT_ATTR_EXTRACTOR(srcid_cmd, config1, 32, 22);
HISI_PMU_EVENT_ATTR_EXTRACTOR(srcid_msk, config1, 43, 33);
HISI_PMU_EVENT_ATTR_EXTRACTOR(tracetag_en, config1, 44, 44);

struct hisi_sllc_pmu_regs {
    u32 int_mask;
    u32 int_clear;
    u32 int_status;
    u32 perf_ctrl;
    u32 srcid_ctrl;
    u32 srcid_cmd_shift;
    u32 srcid_mask_shift;
    u32 tgtid_ctrl;
    u32 tgtid_min_shift;
    u32 tgtid_max_shift;
    u32 event_ctrl;
    u32 event_type0;
    u32 version;
    u32 event_cnt0;
};

static bool tgtid_is_valid(u32 max, u32 min)
{
    return max > 0 && max >= min;
@@ -56,96 +92,104 @@ static bool tgtid_is_valid(u32 max, u32 min)
static void hisi_sllc_pmu_enable_tracetag(struct perf_event *event)
{
    struct hisi_pmu *sllc_pmu = to_hisi_pmu(event->pmu);
    struct hisi_sllc_pmu_regs *regs = sllc_pmu->dev_info->private;
    u32 tt_en = hisi_get_tracetag_en(event);

    if (tt_en) {
        u32 val;

        val = readl(sllc_pmu->base + SLLC_PERF_CTRL);
        val = readl(sllc_pmu->base + regs->perf_ctrl);
        val |= SLLC_TRACETAG_EN | SLLC_FILT_EN;
        writel(val, sllc_pmu->base + SLLC_PERF_CTRL);
        writel(val, sllc_pmu->base + regs->perf_ctrl);
    }
}

static void hisi_sllc_pmu_disable_tracetag(struct perf_event *event)
{
    struct hisi_pmu *sllc_pmu = to_hisi_pmu(event->pmu);
    struct hisi_sllc_pmu_regs *regs = sllc_pmu->dev_info->private;
    u32 tt_en = hisi_get_tracetag_en(event);

    if (tt_en) {
        u32 val;

        val = readl(sllc_pmu->base + SLLC_PERF_CTRL);
        val = readl(sllc_pmu->base + regs->perf_ctrl);
        val &= ~(SLLC_TRACETAG_EN | SLLC_FILT_EN);
        writel(val, sllc_pmu->base + SLLC_PERF_CTRL);
        writel(val, sllc_pmu->base + regs->perf_ctrl);
    }
}

static void hisi_sllc_pmu_config_tgtid(struct perf_event *event)
{
    struct hisi_pmu *sllc_pmu = to_hisi_pmu(event->pmu);
    struct hisi_sllc_pmu_regs *regs = sllc_pmu->dev_info->private;
    u32 min = hisi_get_tgtid_min(event);
    u32 max = hisi_get_tgtid_max(event);

    if (tgtid_is_valid(max, min)) {
        u32 val = (max << SLLC_TGTID_MAX_SHIFT) | (min << SLLC_TGTID_MIN_SHIFT);
        u32 val = (max << regs->tgtid_max_shift) |
                  (min << regs->tgtid_min_shift);

        writel(val, sllc_pmu->base + SLLC_TGTID_CTRL);
        writel(val, sllc_pmu->base + regs->tgtid_ctrl);
        /* Enable the tgtid */
        val = readl(sllc_pmu->base + SLLC_PERF_CTRL);
        val = readl(sllc_pmu->base + regs->perf_ctrl);
        val |= SLLC_TGTID_EN | SLLC_FILT_EN;
        writel(val, sllc_pmu->base + SLLC_PERF_CTRL);
        writel(val, sllc_pmu->base + regs->perf_ctrl);
    }
}

static void hisi_sllc_pmu_clear_tgtid(struct perf_event *event)
{
    struct hisi_pmu *sllc_pmu = to_hisi_pmu(event->pmu);
    struct hisi_sllc_pmu_regs *regs = sllc_pmu->dev_info->private;
    u32 min = hisi_get_tgtid_min(event);
    u32 max = hisi_get_tgtid_max(event);

    if (tgtid_is_valid(max, min)) {
        u32 val;

        writel(SLLC_TGTID_NONE, sllc_pmu->base + SLLC_TGTID_CTRL);
        writel(SLLC_TGTID_NONE, sllc_pmu->base + regs->tgtid_ctrl);
        /* Disable the tgtid */
        val = readl(sllc_pmu->base + SLLC_PERF_CTRL);
        val = readl(sllc_pmu->base + regs->perf_ctrl);
        val &= ~(SLLC_TGTID_EN | SLLC_FILT_EN);
        writel(val, sllc_pmu->base + SLLC_PERF_CTRL);
        writel(val, sllc_pmu->base + regs->perf_ctrl);
    }
}

static void hisi_sllc_pmu_config_srcid(struct perf_event *event)
{
    struct hisi_pmu *sllc_pmu = to_hisi_pmu(event->pmu);
    struct hisi_sllc_pmu_regs *regs = sllc_pmu->dev_info->private;
    u32 cmd = hisi_get_srcid_cmd(event);

    if (cmd) {
        u32 val, msk;

        msk = hisi_get_srcid_msk(event);
        val = (cmd << SLLC_SRCID_CMD_SHIFT) | (msk << SLLC_SRCID_MSK_SHIFT);
        writel(val, sllc_pmu->base + SLLC_SRCID_CTRL);
        val = (cmd << regs->srcid_cmd_shift) |
              (msk << regs->srcid_mask_shift);
        writel(val, sllc_pmu->base + regs->srcid_ctrl);
        /* Enable the srcid */
        val = readl(sllc_pmu->base + SLLC_PERF_CTRL);
        val = readl(sllc_pmu->base + regs->perf_ctrl);
        val |= SLLC_SRCID_EN | SLLC_FILT_EN;
        writel(val, sllc_pmu->base + SLLC_PERF_CTRL);
        writel(val, sllc_pmu->base + regs->perf_ctrl);
    }
}

static void hisi_sllc_pmu_clear_srcid(struct perf_event *event)
{
    struct hisi_pmu *sllc_pmu = to_hisi_pmu(event->pmu);
    struct hisi_sllc_pmu_regs *regs = sllc_pmu->dev_info->private;
    u32 cmd = hisi_get_srcid_cmd(event);

    if (cmd) {
        u32 val;

        writel(SLLC_SRCID_NONE, sllc_pmu->base + SLLC_SRCID_CTRL);
        writel(SLLC_SRCID_NONE, sllc_pmu->base + regs->srcid_ctrl);
        /* Disable the srcid */
        val = readl(sllc_pmu->base + SLLC_PERF_CTRL);
        val = readl(sllc_pmu->base + regs->perf_ctrl);
        val &= ~(SLLC_SRCID_EN | SLLC_FILT_EN);
        writel(val, sllc_pmu->base + SLLC_PERF_CTRL);
        writel(val, sllc_pmu->base + regs->perf_ctrl);
    }
}
@@ -167,29 +211,27 @@ static void hisi_sllc_pmu_clear_filter(struct perf_event *event)
    }
}

static u32 hisi_sllc_pmu_get_counter_offset(int idx)
{
    return (SLLC_EVENT_CNT0_L + idx * 8);
}

static u64 hisi_sllc_pmu_read_counter(struct hisi_pmu *sllc_pmu,
                                      struct hw_perf_event *hwc)
{
    return readq(sllc_pmu->base +
                 hisi_sllc_pmu_get_counter_offset(hwc->idx));
    struct hisi_sllc_pmu_regs *regs = sllc_pmu->dev_info->private;

    return readq(sllc_pmu->base + SLLC_EVENT_CNTn(regs->event_cnt0, hwc->idx));
}

static void hisi_sllc_pmu_write_counter(struct hisi_pmu *sllc_pmu,
                                        struct hw_perf_event *hwc, u64 val)
{
    writeq(val, sllc_pmu->base +
           hisi_sllc_pmu_get_counter_offset(hwc->idx));
    struct hisi_sllc_pmu_regs *regs = sllc_pmu->dev_info->private;

    writeq(val, sllc_pmu->base + SLLC_EVENT_CNTn(regs->event_cnt0, hwc->idx));
}

static void hisi_sllc_pmu_write_evtype(struct hisi_pmu *sllc_pmu, int idx,
                                       u32 type)
{
    u32 reg, reg_idx, shift, val;
    struct hisi_sllc_pmu_regs *regs = sllc_pmu->dev_info->private;
    u32 reg, val;

    /*
     * Select the appropriate event select register(SLLC_EVENT_TYPE0/1).
@@ -198,96 +240,98 @@ static void hisi_sllc_pmu_write_evtype(struct hisi_pmu *sllc_pmu, int idx,
     * SLLC_EVENT_TYPE0 is chosen. For the latter 4 hardware counters,
     * SLLC_EVENT_TYPE1 is chosen.
     */
    reg = SLLC_EVENT_TYPE0 + (idx / 4) * 4;
    reg_idx = idx % 4;
    shift = 8 * reg_idx;
    reg = regs->event_type0 + (idx / 4) * 4;

    /* Write event code to SLLC_EVENT_TYPEx Register */
    val = readl(sllc_pmu->base + reg);
    val &= ~(SLLC_EVTYPE_MASK << shift);
    val |= (type << shift);
    val &= ~(SLLC_EVTYPE_MASK << HISI_PMU_EVTYPE_SHIFT(idx));
    val |= (type << HISI_PMU_EVTYPE_SHIFT(idx));
    writel(val, sllc_pmu->base + reg);
}

static void hisi_sllc_pmu_start_counters(struct hisi_pmu *sllc_pmu)
{
    struct hisi_sllc_pmu_regs *regs = sllc_pmu->dev_info->private;
    u32 val;

    val = readl(sllc_pmu->base + SLLC_PERF_CTRL);
    val = readl(sllc_pmu->base + regs->perf_ctrl);
    val |= SLLC_PERF_CTRL_EN;
    writel(val, sllc_pmu->base + SLLC_PERF_CTRL);
    writel(val, sllc_pmu->base + regs->perf_ctrl);
}

static void hisi_sllc_pmu_stop_counters(struct hisi_pmu *sllc_pmu)
{
    struct hisi_sllc_pmu_regs *regs = sllc_pmu->dev_info->private;
    u32 val;

    val = readl(sllc_pmu->base + SLLC_PERF_CTRL);
    val = readl(sllc_pmu->base + regs->perf_ctrl);
    val &= ~(SLLC_PERF_CTRL_EN);
    writel(val, sllc_pmu->base + SLLC_PERF_CTRL);
    writel(val, sllc_pmu->base + regs->perf_ctrl);
}

static void hisi_sllc_pmu_enable_counter(struct hisi_pmu *sllc_pmu,
                                         struct hw_perf_event *hwc)
{
    struct hisi_sllc_pmu_regs *regs = sllc_pmu->dev_info->private;
    u32 val;

    val = readl(sllc_pmu->base + SLLC_EVENT_CTRL);
    val |= 1 << hwc->idx;
    writel(val, sllc_pmu->base + SLLC_EVENT_CTRL);
    val = readl(sllc_pmu->base + regs->event_ctrl);
    val |= BIT_ULL(hwc->idx);
    writel(val, sllc_pmu->base + regs->event_ctrl);
}

static void hisi_sllc_pmu_disable_counter(struct hisi_pmu *sllc_pmu,
                                          struct hw_perf_event *hwc)
{
    struct hisi_sllc_pmu_regs *regs = sllc_pmu->dev_info->private;
    u32 val;

    val = readl(sllc_pmu->base + SLLC_EVENT_CTRL);
    val &= ~(1 << hwc->idx);
    writel(val, sllc_pmu->base + SLLC_EVENT_CTRL);
    val = readl(sllc_pmu->base + regs->event_ctrl);
    val &= ~BIT_ULL(hwc->idx);
    writel(val, sllc_pmu->base + regs->event_ctrl);
}

static void hisi_sllc_pmu_enable_counter_int(struct hisi_pmu *sllc_pmu,
                                             struct hw_perf_event *hwc)
{
    struct hisi_sllc_pmu_regs *regs = sllc_pmu->dev_info->private;
    u32 val;

    val = readl(sllc_pmu->base + SLLC_INT_MASK);
    /* Write 0 to enable interrupt */
    val &= ~(1 << hwc->idx);
    writel(val, sllc_pmu->base + SLLC_INT_MASK);
    val = readl(sllc_pmu->base + regs->int_mask);
    val &= ~BIT_ULL(hwc->idx);
    writel(val, sllc_pmu->base + regs->int_mask);
}

static void hisi_sllc_pmu_disable_counter_int(struct hisi_pmu *sllc_pmu,
                                              struct hw_perf_event *hwc)
{
    struct hisi_sllc_pmu_regs *regs = sllc_pmu->dev_info->private;
    u32 val;

    val = readl(sllc_pmu->base + SLLC_INT_MASK);
    /* Write 1 to mask interrupt */
    val |= 1 << hwc->idx;
    writel(val, sllc_pmu->base + SLLC_INT_MASK);
    val = readl(sllc_pmu->base + regs->int_mask);
    val |= BIT_ULL(hwc->idx);
    writel(val, sllc_pmu->base + regs->int_mask);
}

static u32 hisi_sllc_pmu_get_int_status(struct hisi_pmu *sllc_pmu)
{
    return readl(sllc_pmu->base + SLLC_INT_STATUS);
    struct hisi_sllc_pmu_regs *regs = sllc_pmu->dev_info->private;

    return readl(sllc_pmu->base + regs->int_status);
}

static void hisi_sllc_pmu_clear_int_status(struct hisi_pmu *sllc_pmu, int idx)
{
    writel(1 << idx, sllc_pmu->base + SLLC_INT_CLEAR);
}
    struct hisi_sllc_pmu_regs *regs = sllc_pmu->dev_info->private;

static const struct acpi_device_id hisi_sllc_pmu_acpi_match[] = {
    { "HISI0263", },
    {}
};
MODULE_DEVICE_TABLE(acpi, hisi_sllc_pmu_acpi_match);
    writel(BIT_ULL(idx), sllc_pmu->base + regs->int_clear);
}

static int hisi_sllc_pmu_init_data(struct platform_device *pdev,
                                   struct hisi_pmu *sllc_pmu)
{
    struct hisi_sllc_pmu_regs *regs;

    hisi_uncore_pmu_init_topology(sllc_pmu, &pdev->dev);

    /*
@@ -304,13 +348,18 @@ static int hisi_sllc_pmu_init_data(struct platform_device *pdev,
        return -EINVAL;
    }

    sllc_pmu->dev_info = device_get_match_data(&pdev->dev);
    if (!sllc_pmu->dev_info)
        return -ENODEV;

    sllc_pmu->base = devm_platform_ioremap_resource(pdev, 0);
    if (IS_ERR(sllc_pmu->base)) {
        dev_err(&pdev->dev, "ioremap failed for sllc_pmu resource.\n");
        return PTR_ERR(sllc_pmu->base);
    }

    sllc_pmu->identifier = readl(sllc_pmu->base + SLLC_VERSION);
    regs = sllc_pmu->dev_info->private;
    sllc_pmu->identifier = readl(sllc_pmu->base + regs->version);

    return 0;
}
@@ -352,6 +401,48 @@ static const struct attribute_group *hisi_sllc_pmu_v2_attr_groups[] = {
    NULL
};

static struct hisi_sllc_pmu_regs hisi_sllc_v2_pmu_regs = {
    .int_mask = SLLC_INT_MASK,
    .int_clear = SLLC_INT_CLEAR,
    .int_status = SLLC_INT_STATUS,
    .perf_ctrl = SLLC_PERF_CTRL,
    .srcid_ctrl = SLLC_SRCID_CTRL,
    .srcid_cmd_shift = SLLC_SRCID_CMD_SHIFT,
    .srcid_mask_shift = SLLC_SRCID_MSK_SHIFT,
    .tgtid_ctrl = SLLC_TGTID_CTRL,
    .tgtid_min_shift = SLLC_TGTID_MIN_SHIFT,
    .tgtid_max_shift = SLLC_TGTID_MAX_SHIFT,
    .event_ctrl = SLLC_EVENT_CTRL,
    .event_type0 = SLLC_EVENT_TYPE0,
    .version = SLLC_VERSION,
    .event_cnt0 = SLLC_EVENT_CNT0_L,
};

static const struct hisi_pmu_dev_info hisi_sllc_v2 = {
    .private = &hisi_sllc_v2_pmu_regs,
};

static struct hisi_sllc_pmu_regs hisi_sllc_v3_pmu_regs = {
    .int_mask = SLLC_V3_INT_MASK,
    .int_clear = SLLC_V3_INT_CLEAR,
    .int_status = SLLC_V3_INT_STATUS,
    .perf_ctrl = SLLC_V3_PERF_CTRL,
    .srcid_ctrl = SLLC_V3_SRCID_CTRL,
    .srcid_cmd_shift = SLLC_V3_SRCID_CMD_SHIFT,
    .srcid_mask_shift = SLLC_V3_SRCID_MSK_SHIFT,
    .tgtid_ctrl = SLLC_V3_TGTID_CTRL,
    .tgtid_min_shift = SLLC_V3_TGTID_MIN_SHIFT,
    .tgtid_max_shift = SLLC_V3_TGTID_MAX_SHIFT,
    .event_ctrl = SLLC_V3_EVENT_CTRL,
    .event_type0 = SLLC_V3_EVENT_TYPE0,
    .version = SLLC_V3_VERSION,
    .event_cnt0 = SLLC_V3_EVENT_CNT0_L,
};

static const struct hisi_pmu_dev_info hisi_sllc_v3 = {
    .private = &hisi_sllc_v3_pmu_regs,
};

static const struct hisi_uncore_ops hisi_uncore_sllc_ops = {
    .write_evtype = hisi_sllc_pmu_write_evtype,
    .get_event_idx = hisi_uncore_pmu_get_event_idx,
@@ -443,6 +534,13 @@ static void hisi_sllc_pmu_remove(struct platform_device *pdev)
                               &sllc_pmu->node);
}

static const struct acpi_device_id hisi_sllc_pmu_acpi_match[] = {
    { "HISI0263", (kernel_ulong_t)&hisi_sllc_v2 },
    { "HISI0264", (kernel_ulong_t)&hisi_sllc_v3 },
    {}
};
MODULE_DEVICE_TABLE(acpi, hisi_sllc_pmu_acpi_match);

static struct platform_driver hisi_sllc_pmu_driver = {
    .driver = {
        .name = "hisi_sllc_pmu",

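The SLLC write_evtype() rework keeps the same byte-lane packing: four 8-bit event codes share each 32-bit EVENT_TYPEx register, so the counter index selects both the register and the byte lane within it. The standalone sketch below recomputes the arithmetic that the driver expresses through its HISI_PMU_EVTYPE_SHIFT() helper; for idx 5 it selects event_type0 + 4 and byte lane 1 (shift 8).

#include <stdint.h>

static uint32_t example_pack_evtype(uint32_t old, int idx, uint32_t type)
{
    /* idx 5 -> register event_type0 + (5 / 4) * 4, shift 8 * (5 % 4) == 8 */
    unsigned int shift = 8 * (idx % 4);

    old &= ~(0xffu << shift);       /* clear the 8-bit lane (EVTYPE_MASK) */
    return old | (type << shift);   /* insert the new event code */
}
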
@@ -1503,7 +1503,7 @@ int acpi_parse_spcr(bool enable_earlycon, bool enable_console);
#else
static inline int acpi_parse_spcr(bool enable_earlycon, bool enable_console)
{
    return 0;
    return -ENODEV;
}
#endif

@@ -103,10 +103,12 @@ void watchdog_hardlockup_check(unsigned int cpu, struct pt_regs *regs);
extern void hardlockup_detector_perf_stop(void);
extern void hardlockup_detector_perf_restart(void);
extern void hardlockup_config_perf_event(const char *str);
extern void hardlockup_detector_perf_adjust_period(u64 period);
#else
static inline void hardlockup_detector_perf_stop(void) { }
static inline void hardlockup_detector_perf_restart(void) { }
static inline void hardlockup_config_perf_event(const char *str) { }
static inline void hardlockup_detector_perf_adjust_period(u64 period) { }
#endif

void watchdog_hardlockup_stop(void);

@@ -70,6 +70,11 @@ struct pmu_hw_events {
    struct arm_pmu *percpu_pmu;

    int irq;

    struct perf_branch_stack *branch_stack;

    /* Active events requesting branch records */
    unsigned int branch_users;
};

enum armpmu_attr_groups {
@@ -115,6 +120,7 @@ struct arm_pmu {
    /* PMUv3 only */
    int pmuver;
    u64 reg_pmmir;
    u64 reg_brbidr;
#define ARMV8_PMUV3_MAX_COMMON_EVENTS        0x40
    DECLARE_BITMAP(pmceid_bitmap, ARMV8_PMUV3_MAX_COMMON_EVENTS);
#define ARMV8_PMUV3_EXT_COMMON_EVENT_BASE    0x4000
@@ -126,6 +132,8 @@ struct arm_pmu {

#define to_arm_pmu(p) (container_of(p, struct arm_pmu, pmu))

DECLARE_PER_CPU(struct arm_pmu *, cpu_armpmu);

u64 armpmu_event_update(struct perf_event *event);

int armpmu_event_set_period(struct perf_event *event);

@@ -244,6 +244,8 @@ struct prctl_mm_map {
# define PR_MTE_TAG_MASK        (0xffffUL << PR_MTE_TAG_SHIFT)
/* Unused; kept only for source compatibility */
# define PR_MTE_TCF_SHIFT        1
/* MTE tag check store only */
# define PR_MTE_STORE_ONLY        (1UL << 19)
/* RISC-V pointer masking tag length */
# define PR_PMLEN_SHIFT            24
# define PR_PMLEN_MASK            (0x7fUL << PR_PMLEN_SHIFT)

@@ -186,6 +186,28 @@ void watchdog_hardlockup_disable(unsigned int cpu)
    }
}

/**
 * hardlockup_detector_perf_adjust_period - Adjust the event period due
 *                                          to current cpu frequency change
 * @period: The target period to be set
 */
void hardlockup_detector_perf_adjust_period(u64 period)
{
    struct perf_event *event = this_cpu_read(watchdog_ev);

    if (!(watchdog_enabled & WATCHDOG_HARDLOCKUP_ENABLED))
        return;

    if (!event)
        return;

    if (event->attr.sample_period == period)
        return;

    if (perf_event_period(event, period))
        pr_err("failed to change period to %llu\n", period);
}

/**
 * hardlockup_detector_perf_stop - Globally stop watchdog events
 *

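hardlockup_detector_perf_adjust_period() gives arch code a hook to retune the watchdog event when the CPU clock changes. The sketch below shows one plausible caller shape, a cpufreq transition notifier; it is illustrative only and simplifies how the arm64 series actually wires this up (the period here is derived as new frequency in kHz times 1000 times watchdog_thresh seconds).

#include <linux/cpufreq.h>
#include <linux/nmi.h>

static int example_cpufreq_notify(struct notifier_block *nb,
                                  unsigned long state, void *data)
{
    struct cpufreq_freqs *freqs = data;

    if (state == CPUFREQ_POSTCHANGE)
        /* cycles per watchdog window at the new frequency */
        hardlockup_detector_perf_adjust_period(
                (u64)freqs->new * 1000 * watchdog_thresh);

    return NOTIFY_DONE;
}
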
@@ -12,4 +12,4 @@ $(OUTPUT)/syscall-abi: syscall-abi.c syscall-abi-asm.S
$(OUTPUT)/tpidr2: tpidr2.c
	$(CC) -fno-asynchronous-unwind-tables -fno-ident -s -Os -nostdlib \
		-static -include ../../../../include/nolibc/nolibc.h \
		-ffreestanding -Wall $^ -o $@ -lgcc
		-I../.. -ffreestanding -Wall $^ -o $@ -lgcc

@@ -21,6 +21,10 @@

#define TESTS_PER_HWCAP 3

#ifndef AT_HWCAP3
#define AT_HWCAP3 29
#endif

/*
 * Function expected to generate exception when the feature is not
 * supported and return when it is supported. If the specific exception
@@ -1098,6 +1102,18 @@ static const struct hwcap_data {
        .sigill_fn = hbc_sigill,
        .sigill_reliable = true,
    },
    {
        .name = "MTE_FAR",
        .at_hwcap = AT_HWCAP3,
        .hwcap_bit = HWCAP3_MTE_FAR,
        .cpuinfo = "mtefar",
    },
    {
        .name = "MTE_STOREONLY",
        .at_hwcap = AT_HWCAP3,
        .hwcap_bit = HWCAP3_MTE_STORE_ONLY,
        .cpuinfo = "mtestoreonly",
    },
};

typedef void (*sighandler_fn)(int, siginfo_t *, void *);

@@ -3,31 +3,12 @@
#include <linux/sched.h>
#include <linux/wait.h>

#include "kselftest.h"

#define SYS_TPIDR2 "S3_3_C13_C0_5"

#define EXPECTED_TESTS 5

static void putstr(const char *str)
{
	write(1, str, strlen(str));
}

static void putnum(unsigned int num)
{
	char c;

	if (num / 10)
		putnum(num / 10);

	c = '0' + (num % 10);
	write(1, &c, 1);
}

static int tests_run;
static int tests_passed;
static int tests_failed;
static int tests_skipped;

static void set_tpidr2(uint64_t val)
{
	asm volatile (
@@ -50,20 +31,6 @@ static uint64_t get_tpidr2(void)
	return val;
}

static void print_summary(void)
{
	if (tests_passed + tests_failed + tests_skipped != EXPECTED_TESTS)
		putstr("# UNEXPECTED TEST COUNT: ");

	putstr("# Totals: pass:");
	putnum(tests_passed);
	putstr(" fail:");
	putnum(tests_failed);
	putstr(" xfail:0 xpass:0 skip:");
	putnum(tests_skipped);
	putstr(" error:0\n");
}

/* Processes should start with TPIDR2 == 0 */
static int default_value(void)
{
@@ -105,9 +72,8 @@ static int write_fork_read(void)
	if (newpid == 0) {
		/* In child */
		if (get_tpidr2() != oldpid) {
			putstr("# TPIDR2 changed in child: ");
			putnum(get_tpidr2());
			putstr("\n");
			ksft_print_msg("TPIDR2 changed in child: %llx\n",
				       get_tpidr2());
			exit(0);
		}

@@ -115,14 +81,12 @@ static int write_fork_read(void)
		if (get_tpidr2() == getpid()) {
			exit(1);
		} else {
			putstr("# Failed to set TPIDR2 in child\n");
			ksft_print_msg("Failed to set TPIDR2 in child\n");
			exit(0);
		}
	}
	if (newpid < 0) {
		putstr("# fork() failed: -");
		putnum(-newpid);
		putstr("\n");
		ksft_print_msg("fork() failed: %d\n", newpid);
		return 0;
	}

@@ -132,23 +96,22 @@ static int write_fork_read(void)
		if (waiting < 0) {
			if (errno == EINTR)
				continue;
			putstr("# waitpid() failed: ");
			putnum(errno);
			putstr("\n");
			ksft_print_msg("waitpid() failed: %d\n", errno);
			return 0;
		}
		if (waiting != newpid) {
			putstr("# waitpid() returned wrong PID\n");
			ksft_print_msg("waitpid() returned wrong PID: %d != %d\n",
				       waiting, newpid);
			return 0;
		}

		if (!WIFEXITED(status)) {
			putstr("# child did not exit\n");
			ksft_print_msg("child did not exit\n");
			return 0;
		}

	if (getpid() != get_tpidr2()) {
		putstr("# TPIDR2 corrupted in parent\n");
		ksft_print_msg("TPIDR2 corrupted in parent\n");
		return 0;
	}

@@ -188,35 +151,32 @@ static int write_clone_read(void)

	stack = malloc(__STACK_SIZE);
	if (!stack) {
		putstr("# malloc() failed\n");
		ksft_print_msg("malloc() failed\n");
		return 0;
	}

	ret = sys_clone(CLONE_VM, (unsigned long)stack + __STACK_SIZE,
			&parent_tid, 0, &child_tid);
	if (ret == -1) {
		putstr("# clone() failed\n");
		putnum(errno);
		putstr("\n");
		ksft_print_msg("clone() failed: %d\n", errno);
		return 0;
	}

	if (ret == 0) {
		/* In child */
		if (get_tpidr2() != 0) {
			putstr("# TPIDR2 non-zero in child: ");
			putnum(get_tpidr2());
			putstr("\n");
			ksft_print_msg("TPIDR2 non-zero in child: %llx\n",
				       get_tpidr2());
			exit(0);
		}

		if (gettid() == 0)
			putstr("# Child TID==0\n");
			ksft_print_msg("Child TID==0\n");
		set_tpidr2(gettid());
		if (get_tpidr2() == gettid()) {
			exit(1);
		} else {
			putstr("# Failed to set TPIDR2 in child\n");
			ksft_print_msg("Failed to set TPIDR2 in child\n");
			exit(0);
		}
	}

@@ -227,25 +187,22 @@ static int write_clone_read(void)
		if (waiting < 0) {
			if (errno == EINTR)
				continue;
			putstr("# wait4() failed: ");
			putnum(errno);
			putstr("\n");
			ksft_print_msg("wait4() failed: %d\n", errno);
			return 0;
		}
		if (waiting != ret) {
			putstr("# wait4() returned wrong PID ");
			putnum(waiting);
			putstr("\n");
			ksft_print_msg("wait4() returned wrong PID %d\n",
				       waiting);
			return 0;
		}

		if (!WIFEXITED(status)) {
			putstr("# child did not exit\n");
			ksft_print_msg("child did not exit\n");
			return 0;
		}

	if (parent != get_tpidr2()) {
		putstr("# TPIDR2 corrupted in parent\n");
		ksft_print_msg("TPIDR2 corrupted in parent\n");
		return 0;
	}

@@ -253,35 +210,14 @@ static int write_clone_read(void)
	}
}

#define run_test(name) \
	if (name()) { \
		tests_passed++; \
	} else { \
		tests_failed++; \
		putstr("not "); \
	} \
	putstr("ok "); \
	putnum(++tests_run); \
	putstr(" " #name "\n");

#define skip_test(name) \
	tests_skipped++; \
	putstr("ok "); \
	putnum(++tests_run); \
	putstr(" # SKIP " #name "\n");

int main(int argc, char **argv)
{
	int ret;

	putstr("TAP version 13\n");
	putstr("1..");
	putnum(EXPECTED_TESTS);
	putstr("\n");
	ksft_print_header();
	ksft_set_plan(5);

	putstr("# PID: ");
	putnum(getpid());
	putstr("\n");
	ksft_print_msg("PID: %d\n", getpid());

	/*
	 * This test is run with nolibc which doesn't support hwcap and
@@ -290,23 +226,21 @@ int main(int argc, char **argv)
	 */
	ret = open("/proc/sys/abi/sme_default_vector_length", O_RDONLY, 0);
	if (ret >= 0) {
		run_test(default_value);
		run_test(write_read);
		run_test(write_sleep_read);
		run_test(write_fork_read);
		run_test(write_clone_read);
		ksft_test_result(default_value(), "default_value\n");
		ksft_test_result(write_read, "write_read\n");
		ksft_test_result(write_sleep_read, "write_sleep_read\n");
		ksft_test_result(write_fork_read, "write_fork_read\n");
		ksft_test_result(write_clone_read, "write_clone_read\n");

	} else {
		putstr("# SME support not present\n");
		ksft_print_msg("SME support not present\n");

		skip_test(default_value);
		skip_test(write_read);
		skip_test(write_sleep_read);
		skip_test(write_fork_read);
		skip_test(write_clone_read);
		ksft_test_result_skip("default_value\n");
		ksft_test_result_skip("write_read\n");
		ksft_test_result_skip("write_sleep_read\n");
		ksft_test_result_skip("write_fork_read\n");
		ksft_test_result_skip("write_clone_read\n");
	}

	print_summary();

	return 0;
	ksft_finished();
}

@@ -1061,11 +1061,31 @@ static bool sve_write_supported(struct test_config *config)
		if (config->sme_vl_in != config->sme_vl_expected) {
			return false;
		}

		if (!sve_supported())
			return false;
	}

	return true;
}

static bool sve_write_fpsimd_supported(struct test_config *config)
{
	if (!sve_supported())
		return false;

	if ((config->svcr_in & SVCR_ZA) != (config->svcr_expected & SVCR_ZA))
		return false;

	if (config->svcr_expected & SVCR_SM)
		return false;

	if (config->sme_vl_in != config->sme_vl_expected)
		return false;

	return true;
}

static void fpsimd_write_expected(struct test_config *config)
{
	int vl;
@@ -1134,6 +1154,9 @@ static void sve_write_expected(struct test_config *config)
	int vl = vl_expected(config);
	int sme_vq = __sve_vq_from_vl(config->sme_vl_expected);

	if (!vl)
		return;

	fill_random(z_expected, __SVE_ZREGS_SIZE(__sve_vq_from_vl(vl)));
	fill_random(p_expected, __SVE_PREGS_SIZE(__sve_vq_from_vl(vl)));

@@ -1152,7 +1175,7 @@ static void sve_write_expected(struct test_config *config)
	}
}

static void sve_write(pid_t child, struct test_config *config)
static void sve_write_sve(pid_t child, struct test_config *config)
{
	struct user_sve_header *sve;
	struct iovec iov;
@@ -1161,6 +1184,9 @@ static void sve_write(pid_t child, struct test_config *config)
	vl = vl_expected(config);
	vq = __sve_vq_from_vl(vl);

	if (!vl)
		return;

	iov.iov_len = SVE_PT_SVE_OFFSET + SVE_PT_SVE_SIZE(vq, SVE_PT_REGS_SVE);
	iov.iov_base = malloc(iov.iov_len);
	if (!iov.iov_base) {
@@ -1195,6 +1221,45 @@ static void sve_write(pid_t child, struct test_config *config)
	free(iov.iov_base);
}

static void sve_write_fpsimd(pid_t child, struct test_config *config)
{
	struct user_sve_header *sve;
	struct user_fpsimd_state *fpsimd;
	struct iovec iov;
	int ret, vl, vq;

	vl = vl_expected(config);
	vq = __sve_vq_from_vl(vl);

	if (!vl)
		return;

	iov.iov_len = SVE_PT_SVE_OFFSET + SVE_PT_SVE_SIZE(vq,
							  SVE_PT_REGS_FPSIMD);
	iov.iov_base = malloc(iov.iov_len);
	if (!iov.iov_base) {
		ksft_print_msg("Failed allocating %lu byte SVE write buffer\n",
			       iov.iov_len);
		return;
	}
	memset(iov.iov_base, 0, iov.iov_len);

	sve = iov.iov_base;
	sve->size = iov.iov_len;
	sve->flags = SVE_PT_REGS_FPSIMD;
	sve->vl = vl;

	fpsimd = iov.iov_base + SVE_PT_REGS_OFFSET;
	memcpy(&fpsimd->vregs, v_expected, sizeof(v_expected));

	ret = ptrace(PTRACE_SETREGSET, child, NT_ARM_SVE, &iov);
	if (ret != 0)
		ksft_print_msg("Failed to write SVE: %s (%d)\n",
			       strerror(errno), errno);

	free(iov.iov_base);
}

static bool za_write_supported(struct test_config *config)
{
	if ((config->svcr_in & SVCR_SM) != (config->svcr_expected & SVCR_SM))
@@ -1386,7 +1451,13 @@ static struct test_definition sve_test_defs[] = {
		.name = "SVE write",
		.supported = sve_write_supported,
		.set_expected_values = sve_write_expected,
		.modify_values = sve_write,
		.modify_values = sve_write_sve,
	},
	{
		.name = "SVE write FPSIMD format",
		.supported = sve_write_fpsimd_supported,
		.set_expected_values = fpsimd_write_expected,
		.modify_values = sve_write_fpsimd,
	},
};

@@ -1607,7 +1678,7 @@ int main(void)
	 * Run the test set if there is no SVE or SME, with those we
	 * have to pick a VL for each run.
	 */
	if (!sve_supported()) {
	if (!sve_supported() && !sme_supported()) {
		test_config.sve_vl_in = 0;
		test_config.sve_vl_expected = 0;
		test_config.sme_vl_in = 0;

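The write paths above have a matching read path: PTRACE_GETREGSET on NT_ARM_SVE returns a user_sve_header whose flags say which layout follows, so a tracer can tell whether the kernel handed back SVE- or FPSIMD-format data. A minimal sketch of that check — error handling trimmed, and reading only the header like this relies on the kernel truncating the regset copy to iov_len:

#include <stdbool.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/uio.h>
#include <asm/ptrace.h>		/* user_sve_header, SVE_PT_REGS_* */
#include <linux/elf.h>		/* NT_ARM_SVE */

/* Does the stopped tracee currently store FPSIMD-format register data? */
static bool sve_regset_is_fpsimd(pid_t child)
{
	struct user_sve_header hdr;
	struct iovec iov = { .iov_base = &hdr, .iov_len = sizeof(hdr) };

	if (ptrace(PTRACE_GETREGSET, child, NT_ARM_SVE, &iov) != 0)
		return false;	/* no NT_ARM_SVE regset at all */

	return (hdr.flags & SVE_PT_REGS_MASK) == SVE_PT_REGS_FPSIMD;
}
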
@@ -170,7 +170,7 @@ static void ptrace_set_get_inherit(pid_t child, const struct vec_type *type)
	memset(&sve, 0, sizeof(sve));
	sve.size = sizeof(sve);
	sve.vl = sve_vl_from_vq(SVE_VQ_MIN);
	sve.flags = SVE_PT_VL_INHERIT;
	sve.flags = SVE_PT_VL_INHERIT | SVE_PT_REGS_SVE;
	ret = set_sve(child, type, &sve);
	if (ret != 0) {
		ksft_test_result_fail("Failed to set %s SVE_PT_VL_INHERIT\n",
@@ -235,6 +235,7 @@ static void ptrace_set_get_vl(pid_t child, const struct vec_type *type,
	/* Set the VL by doing a set with no register payload */
	memset(&sve, 0, sizeof(sve));
	sve.size = sizeof(sve);
	sve.flags = SVE_PT_REGS_SVE;
	sve.vl = vl;
	ret = set_sve(child, type, &sve);
	if (ret != 0) {
@@ -253,7 +254,7 @@ static void ptrace_set_get_vl(pid_t child, const struct vec_type *type,
		return;
	}

	ksft_test_result(new_sve->vl = prctl_vl, "Set %s VL %u\n",
	ksft_test_result(new_sve->vl == prctl_vl, "Set %s VL %u\n",
			 type->name, vl);

	free(new_sve);
@@ -301,8 +302,10 @@ static void ptrace_sve_fpsimd(pid_t child, const struct vec_type *type)
			p[j] = j;
	}

	/* This should only succeed for SVE */
	ret = set_sve(child, type, sve);
	ksft_test_result(ret == 0, "%s FPSIMD set via SVE: %d\n",
	ksft_test_result((type->regset == NT_ARM_SVE) == (ret == 0),
			 "%s FPSIMD set via SVE: %d\n",
			 type->name, ret);
	if (ret)
		goto out;
@@ -750,9 +753,6 @@ int main(void)
	ksft_print_header();
	ksft_set_plan(EXPECTED_TESTS);

	if (!(getauxval(AT_HWCAP) & HWCAP_SVE))
		ksft_exit_skip("SVE not available\n");

	child = fork();
	if (!child)
		return do_child();

@@ -31,7 +31,7 @@ static int check_buffer_by_byte(int mem_type, int mode)
	int i, j, item;
	bool err;

	mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG);
	mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG, false);
	item = ARRAY_SIZE(sizes);

	for (i = 0; i < item; i++) {
@@ -68,7 +68,7 @@ static int check_buffer_underflow_by_byte(int mem_type, int mode,
	bool err;
	char *und_ptr = NULL;

	mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG);
	mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG, false);
	item = ARRAY_SIZE(sizes);
	for (i = 0; i < item; i++) {
		ptr = (char *)mte_allocate_memory_tag_range(sizes[i], mem_type, 0,
@@ -164,7 +164,7 @@ static int check_buffer_overflow_by_byte(int mem_type, int mode,
	size_t tagged_size, overflow_size;
	char *over_ptr = NULL;

	mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG);
	mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG, false);
	item = ARRAY_SIZE(sizes);
	for (i = 0; i < item; i++) {
		ptr = (char *)mte_allocate_memory_tag_range(sizes[i], mem_type, 0,
@@ -337,7 +337,7 @@ static int check_buffer_by_block(int mem_type, int mode)
{
	int i, item, result = KSFT_PASS;

	mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG);
	mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG, false);
	item = ARRAY_SIZE(sizes);
	cur_mte_cxt.fault_valid = false;
	for (i = 0; i < item; i++) {
@@ -368,7 +368,7 @@ static int check_memory_initial_tags(int mem_type, int mode, int mapping)
	int run, fd;
	int total = ARRAY_SIZE(sizes);

	mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG);
	mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG, false);
	for (run = 0; run < total; run++) {
		/* check initial tags for anonymous mmap */
		ptr = (char *)mte_allocate_memory(sizes[run], mem_type, mapping, false);
@@ -415,7 +415,7 @@ int main(int argc, char *argv[])
		return err;

	/* Register SIGSEGV handler */
	mte_register_signal(SIGSEGV, mte_default_handler);
	mte_register_signal(SIGSEGV, mte_default_handler, false);

	/* Set test plan */
	ksft_set_plan(20);

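Every mte_switch_mode() caller in these tests gains a third argument, and every mte_register_signal() caller a matching bool. Reconstructed from the call sites, the helpers' new shapes are roughly the following — an assumption about mte_common_util.h, not a copy of it:

#include <stdbool.h>
#include <signal.h>

/* Assumed declarations after this series: the extra parameters opt a
 * test into store-only tag checking and SA_EXPOSE_TAGBITS reporting
 * respectively; callers pass `false` to keep the historical behaviour. */
int mte_switch_mode(int mte_option, unsigned long incl_mask, bool stonly);
void mte_register_signal(int signal,
			 void (*handler)(int, siginfo_t *, void *),
			 bool export_tags);
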
@@ -88,7 +88,7 @@ static int check_child_memory_mapping(int mem_type, int mode, int mapping)
	int item = ARRAY_SIZE(sizes);

	item = ARRAY_SIZE(sizes);
	mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG);
	mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG, false);
	for (run = 0; run < item; run++) {
		ptr = (char *)mte_allocate_memory_tag_range(sizes[run], mem_type, mapping,
							    UNDERFLOW, OVERFLOW);
@@ -109,7 +109,7 @@ static int check_child_file_mapping(int mem_type, int mode, int mapping)
	int run, fd, map_size, result = KSFT_PASS;
	int total = ARRAY_SIZE(sizes);

	mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG);
	mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG, false);
	for (run = 0; run < total; run++) {
		fd = create_temp_file();
		if (fd == -1)
@@ -160,8 +160,8 @@ int main(int argc, char *argv[])
		return err;

	/* Register SIGSEGV handler */
	mte_register_signal(SIGSEGV, mte_default_handler);
	mte_register_signal(SIGBUS, mte_default_handler);
	mte_register_signal(SIGSEGV, mte_default_handler, false);
	mte_register_signal(SIGBUS, mte_default_handler, false);

	/* Set test plan */
	ksft_set_plan(12);

@@ -151,7 +151,7 @@ static int check_hugetlb_memory_mapping(int mem_type, int mode, int mapping, int

	map_size = default_huge_page_size();

	mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG);
	mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG, false);
	map_ptr = (char *)mte_allocate_memory(map_size, mem_type, mapping, false);
	if (check_allocated_memory(map_ptr, map_size, mem_type, false) != KSFT_PASS)
		return KSFT_FAIL;
@@ -180,7 +180,7 @@ static int check_clear_prot_mte_flag(int mem_type, int mode, int mapping)
	unsigned long map_size;

	prot_flag = PROT_READ | PROT_WRITE;
	mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG);
	mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG, false);
	map_size = default_huge_page_size();
	map_ptr = (char *)mte_allocate_memory_tag_range(map_size, mem_type, mapping,
							0, 0);
@@ -210,7 +210,7 @@ static int check_child_hugetlb_memory_mapping(int mem_type, int mode, int mappin

	map_size = default_huge_page_size();

	mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG);
	mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG, false);
	ptr = (char *)mte_allocate_memory_tag_range(map_size, mem_type, mapping,
						    0, 0);
	if (check_allocated_memory_range(ptr, map_size, mem_type,
@@ -235,8 +235,8 @@ int main(int argc, char *argv[])
		return err;

	/* Register signal handlers */
	mte_register_signal(SIGBUS, mte_default_handler);
	mte_register_signal(SIGSEGV, mte_default_handler);
	mte_register_signal(SIGBUS, mte_default_handler, false);
	mte_register_signal(SIGSEGV, mte_default_handler, false);

	allocate_hugetlb();

@@ -106,7 +106,7 @@ static int check_madvise_options(int mem_type, int mode, int mapping)
		return err;
	}

	mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG);
	mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG, false);
	ptr = mte_allocate_memory(TEST_UNIT * page_sz, mem_type, mapping, true);
	if (check_allocated_memory(ptr, TEST_UNIT * page_sz, mem_type, false) != KSFT_PASS)
		return KSFT_FAIL;
@@ -141,8 +141,8 @@ int main(int argc, char *argv[])
		return KSFT_FAIL;
	}
	/* Register signal handlers */
	mte_register_signal(SIGBUS, mte_default_handler);
	mte_register_signal(SIGSEGV, mte_default_handler);
	mte_register_signal(SIGBUS, mte_default_handler, false);
	mte_register_signal(SIGSEGV, mte_default_handler, false);

	/* Set test plan */
	ksft_set_plan(4);

@@ -3,6 +3,7 @@

#define _GNU_SOURCE

#include <assert.h>
#include <errno.h>
#include <fcntl.h>
#include <signal.h>
@@ -23,6 +24,35 @@
#define OVERFLOW MT_GRANULE_SIZE
#define TAG_CHECK_ON 0
#define TAG_CHECK_OFF 1
#define ATAG_CHECK_ON 1
#define ATAG_CHECK_OFF 0

#define TEST_NAME_MAX 256

enum mte_mem_check_type {
	CHECK_ANON_MEM = 0,
	CHECK_FILE_MEM = 1,
	CHECK_CLEAR_PROT_MTE = 2,
};

enum mte_tag_op_type {
	TAG_OP_ALL = 0,
	TAG_OP_STONLY = 1,
};

struct check_mmap_testcase {
	int check_type;
	int mem_type;
	int mte_sync;
	int mapping;
	int tag_check;
	int atag_check;
	int tag_op;
	bool enable_tco;
};

#define TAG_OP_ALL 0
#define TAG_OP_STONLY 1

static size_t page_size;
static int sizes[] = {
@@ -30,8 +60,17 @@ static int sizes[] = {
	/* page size - 1*/ 0, /* page_size */ 0, /* page size + 1 */ 0
};

static int check_mte_memory(char *ptr, int size, int mode, int tag_check)
static int check_mte_memory(char *ptr, int size, int mode,
			    int tag_check, int atag_check, int tag_op)
{
	char buf[MT_GRANULE_SIZE];

	if (!mtefar_support && atag_check == ATAG_CHECK_ON)
		return KSFT_SKIP;

	if (atag_check == ATAG_CHECK_ON)
		ptr = mte_insert_atag(ptr);

	mte_initialize_current_context(mode, (uintptr_t)ptr, size);
	memset(ptr, '1', size);
	mte_wait_after_trig();
@@ -54,16 +93,34 @@ static int check_mte_memory(char *ptr, int size, int mode, int tag_check)
	if (cur_mte_cxt.fault_valid == true && tag_check == TAG_CHECK_OFF)
		return KSFT_FAIL;

	if (tag_op == TAG_OP_STONLY) {
		mte_initialize_current_context(mode, (uintptr_t)ptr, -UNDERFLOW);
		memcpy(buf, ptr - UNDERFLOW, MT_GRANULE_SIZE);
		mte_wait_after_trig();
		if (cur_mte_cxt.fault_valid == true)
			return KSFT_FAIL;

		mte_initialize_current_context(mode, (uintptr_t)ptr, size + OVERFLOW);
		memcpy(buf, ptr + size, MT_GRANULE_SIZE);
		mte_wait_after_trig();
		if (cur_mte_cxt.fault_valid == true)
			return KSFT_FAIL;
	}

	return KSFT_PASS;
}

static int check_anonymous_memory_mapping(int mem_type, int mode, int mapping, int tag_check)
static int check_anonymous_memory_mapping(int mem_type, int mode, int mapping,
					  int tag_check, int atag_check, int tag_op)
{
	char *ptr, *map_ptr;
	int run, result, map_size;
	int item = ARRAY_SIZE(sizes);

	mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG);
	if (tag_op == TAG_OP_STONLY && !mtestonly_support)
		return KSFT_SKIP;

	mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG, tag_op);
	for (run = 0; run < item; run++) {
		map_size = sizes[run] + OVERFLOW + UNDERFLOW;
		map_ptr = (char *)mte_allocate_memory(map_size, mem_type, mapping, false);
@@ -79,23 +136,27 @@ static int check_anonymous_memory_mapping(int mem_type, int mode, int mapping, i
			munmap((void *)map_ptr, map_size);
			return KSFT_FAIL;
		}
		result = check_mte_memory(ptr, sizes[run], mode, tag_check);
		result = check_mte_memory(ptr, sizes[run], mode, tag_check, atag_check, tag_op);
		mte_clear_tags((void *)ptr, sizes[run]);
		mte_free_memory((void *)map_ptr, map_size, mem_type, false);
		if (result == KSFT_FAIL)
			return KSFT_FAIL;
		if (result != KSFT_PASS)
			return result;
	}
	return KSFT_PASS;
}

static int check_file_memory_mapping(int mem_type, int mode, int mapping, int tag_check)
static int check_file_memory_mapping(int mem_type, int mode, int mapping,
				     int tag_check, int atag_check, int tag_op)
{
	char *ptr, *map_ptr;
	int run, fd, map_size;
	int total = ARRAY_SIZE(sizes);
	int result = KSFT_PASS;

	mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG);
	if (tag_op == TAG_OP_STONLY && !mtestonly_support)
		return KSFT_SKIP;

	mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG, tag_op);
	for (run = 0; run < total; run++) {
		fd = create_temp_file();
		if (fd == -1)
@@ -117,24 +178,24 @@ static int check_file_memory_mapping(int mem_type, int mode, int mapping, int ta
			close(fd);
			return KSFT_FAIL;
		}
		result = check_mte_memory(ptr, sizes[run], mode, tag_check);
		result = check_mte_memory(ptr, sizes[run], mode, tag_check, atag_check, tag_op);
		mte_clear_tags((void *)ptr, sizes[run]);
		munmap((void *)map_ptr, map_size);
		close(fd);
		if (result == KSFT_FAIL)
			break;
		if (result != KSFT_PASS)
			return result;
	}
	return result;
	return KSFT_PASS;
}

static int check_clear_prot_mte_flag(int mem_type, int mode, int mapping)
static int check_clear_prot_mte_flag(int mem_type, int mode, int mapping, int atag_check)
{
	char *ptr, *map_ptr;
	int run, prot_flag, result, fd, map_size;
	int total = ARRAY_SIZE(sizes);

	prot_flag = PROT_READ | PROT_WRITE;
	mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG);
	mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG, false);
	for (run = 0; run < total; run++) {
		map_size = sizes[run] + OVERFLOW + UNDERFLOW;
		ptr = (char *)mte_allocate_memory_tag_range(sizes[run], mem_type, mapping,
@@ -150,10 +211,10 @@ static int check_clear_prot_mte_flag(int mem_type, int mode, int mapping)
			ksft_print_msg("FAIL: mprotect not ignoring clear PROT_MTE property\n");
			return KSFT_FAIL;
		}
		result = check_mte_memory(ptr, sizes[run], mode, TAG_CHECK_ON);
		result = check_mte_memory(ptr, sizes[run], mode, TAG_CHECK_ON, atag_check, TAG_OP_ALL);
		mte_free_memory_tag_range((void *)ptr, sizes[run], mem_type, UNDERFLOW, OVERFLOW);
		if (result != KSFT_PASS)
			return KSFT_FAIL;
			return result;

		fd = create_temp_file();
		if (fd == -1)
@@ -174,19 +235,715 @@ static int check_clear_prot_mte_flag(int mem_type, int mode, int mapping)
			close(fd);
			return KSFT_FAIL;
		}
		result = check_mte_memory(ptr, sizes[run], mode, TAG_CHECK_ON);
		result = check_mte_memory(ptr, sizes[run], mode, TAG_CHECK_ON, atag_check, TAG_OP_ALL);
		mte_free_memory_tag_range((void *)ptr, sizes[run], mem_type, UNDERFLOW, OVERFLOW);
		close(fd);
		if (result != KSFT_PASS)
			return KSFT_FAIL;
			return result;
	}
	return KSFT_PASS;
}

const char *format_test_name(struct check_mmap_testcase *tc)
{
	static char test_name[TEST_NAME_MAX];
	const char *check_type_str;
	const char *mem_type_str;
	const char *sync_str;
	const char *mapping_str;
	const char *tag_check_str;
	const char *atag_check_str;
	const char *tag_op_str;

	switch (tc->check_type) {
	case CHECK_ANON_MEM:
		check_type_str = "anonymous memory";
		break;
	case CHECK_FILE_MEM:
		check_type_str = "file memory";
		break;
	case CHECK_CLEAR_PROT_MTE:
		check_type_str = "clear PROT_MTE flags";
		break;
	default:
		assert(0);
		break;
	}

	switch (tc->mem_type) {
	case USE_MMAP:
		mem_type_str = "mmap";
		break;
	case USE_MPROTECT:
		mem_type_str = "mmap/mprotect";
		break;
	default:
		assert(0);
		break;
	}

	switch (tc->mte_sync) {
	case MTE_NONE_ERR:
		sync_str = "no error";
		break;
	case MTE_SYNC_ERR:
		sync_str = "sync error";
		break;
	case MTE_ASYNC_ERR:
		sync_str = "async error";
		break;
	default:
		assert(0);
		break;
	}

	switch (tc->mapping) {
	case MAP_SHARED:
		mapping_str = "shared";
		break;
	case MAP_PRIVATE:
		mapping_str = "private";
		break;
	default:
		assert(0);
		break;
	}

	switch (tc->tag_check) {
	case TAG_CHECK_ON:
		tag_check_str = "tag check on";
		break;
	case TAG_CHECK_OFF:
		tag_check_str = "tag check off";
		break;
	default:
		assert(0);
		break;
	}

	switch (tc->atag_check) {
	case ATAG_CHECK_ON:
		atag_check_str = "with address tag [63:60]";
		break;
	case ATAG_CHECK_OFF:
		atag_check_str = "without address tag [63:60]";
		break;
	default:
		assert(0);
		break;
	}

	snprintf(test_name, sizeof(test_name),
		 "Check %s with %s mapping, %s mode, %s memory and %s (%s)\n",
		 check_type_str, mapping_str, sync_str, mem_type_str,
		 tag_check_str, atag_check_str);

	switch (tc->tag_op) {
	case TAG_OP_ALL:
		tag_op_str = "";
		break;
	case TAG_OP_STONLY:
		tag_op_str = " / store-only";
		break;
	default:
		assert(0);
		break;
	}

	snprintf(test_name, TEST_NAME_MAX,
		 "Check %s with %s mapping, %s mode, %s memory and %s (%s%s)\n",
		 check_type_str, mapping_str, sync_str, mem_type_str,
		 tag_check_str, atag_check_str, tag_op_str);

	return test_name;
}

int main(int argc, char *argv[])
{
	int err;
	int err, i;
	int item = ARRAY_SIZE(sizes);
	struct check_mmap_testcase test_cases[] = {
		{
			.check_type = CHECK_ANON_MEM,
			.mem_type = USE_MMAP,
			.mte_sync = MTE_SYNC_ERR,
			.mapping = MAP_PRIVATE,
			.tag_check = TAG_CHECK_OFF,
			.atag_check = ATAG_CHECK_OFF,
			.tag_op = TAG_OP_ALL,
			.enable_tco = true,
		},
		{
			.check_type = CHECK_FILE_MEM,
			.mem_type = USE_MPROTECT,
			.mte_sync = MTE_SYNC_ERR,
			.mapping = MAP_PRIVATE,
			.tag_check = TAG_CHECK_OFF,
			.atag_check = ATAG_CHECK_OFF,
			.tag_op = TAG_OP_ALL,
			.enable_tco = true,
		},
		{
			.check_type = CHECK_ANON_MEM,
			.mem_type = USE_MMAP,
			.mte_sync = MTE_NONE_ERR,
			.mapping = MAP_PRIVATE,
			.tag_check = TAG_CHECK_OFF,
			.atag_check = ATAG_CHECK_OFF,
			.tag_op = TAG_OP_ALL,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_FILE_MEM,
			.mem_type = USE_MPROTECT,
			.mte_sync = MTE_NONE_ERR,
			.mapping = MAP_PRIVATE,
			.tag_check = TAG_CHECK_OFF,
			.atag_check = ATAG_CHECK_OFF,
			.tag_op = TAG_OP_ALL,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_ANON_MEM,
			.mem_type = USE_MMAP,
			.mte_sync = MTE_SYNC_ERR,
			.mapping = MAP_PRIVATE,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_OFF,
			.tag_op = TAG_OP_ALL,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_ANON_MEM,
			.mem_type = USE_MPROTECT,
			.mte_sync = MTE_SYNC_ERR,
			.mapping = MAP_PRIVATE,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_OFF,
			.tag_op = TAG_OP_ALL,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_ANON_MEM,
			.mem_type = USE_MMAP,
			.mte_sync = MTE_SYNC_ERR,
			.mapping = MAP_SHARED,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_OFF,
			.tag_op = TAG_OP_ALL,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_ANON_MEM,
			.mem_type = USE_MPROTECT,
			.mte_sync = MTE_SYNC_ERR,
			.mapping = MAP_SHARED,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_OFF,
			.tag_op = TAG_OP_ALL,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_ANON_MEM,
			.mem_type = USE_MMAP,
			.mte_sync = MTE_ASYNC_ERR,
			.mapping = MAP_PRIVATE,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_OFF,
			.tag_op = TAG_OP_ALL,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_ANON_MEM,
			.mem_type = USE_MPROTECT,
			.mte_sync = MTE_ASYNC_ERR,
			.mapping = MAP_PRIVATE,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_OFF,
			.tag_op = TAG_OP_ALL,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_ANON_MEM,
			.mem_type = USE_MMAP,
			.mte_sync = MTE_ASYNC_ERR,
			.mapping = MAP_SHARED,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_OFF,
			.tag_op = TAG_OP_ALL,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_ANON_MEM,
			.mem_type = USE_MPROTECT,
			.mte_sync = MTE_ASYNC_ERR,
			.mapping = MAP_SHARED,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_OFF,
			.tag_op = TAG_OP_ALL,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_FILE_MEM,
			.mem_type = USE_MMAP,
			.mte_sync = MTE_SYNC_ERR,
			.mapping = MAP_PRIVATE,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_OFF,
			.tag_op = TAG_OP_ALL,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_FILE_MEM,
			.mem_type = USE_MPROTECT,
			.mte_sync = MTE_SYNC_ERR,
			.mapping = MAP_PRIVATE,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_OFF,
			.tag_op = TAG_OP_ALL,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_FILE_MEM,
			.mem_type = USE_MMAP,
			.mte_sync = MTE_SYNC_ERR,
			.mapping = MAP_SHARED,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_OFF,
			.tag_op = TAG_OP_ALL,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_FILE_MEM,
			.mem_type = USE_MPROTECT,
			.mte_sync = MTE_SYNC_ERR,
			.mapping = MAP_SHARED,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_OFF,
			.tag_op = TAG_OP_ALL,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_FILE_MEM,
			.mem_type = USE_MMAP,
			.mte_sync = MTE_ASYNC_ERR,
			.mapping = MAP_PRIVATE,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_OFF,
			.tag_op = TAG_OP_ALL,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_FILE_MEM,
			.mem_type = USE_MPROTECT,
			.mte_sync = MTE_ASYNC_ERR,
			.mapping = MAP_PRIVATE,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_OFF,
			.tag_op = TAG_OP_ALL,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_FILE_MEM,
			.mem_type = USE_MMAP,
			.mte_sync = MTE_ASYNC_ERR,
			.mapping = MAP_SHARED,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_OFF,
			.tag_op = TAG_OP_ALL,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_FILE_MEM,
			.mem_type = USE_MPROTECT,
			.mte_sync = MTE_ASYNC_ERR,
			.mapping = MAP_SHARED,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_OFF,
			.tag_op = TAG_OP_ALL,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_CLEAR_PROT_MTE,
			.mem_type = USE_MMAP,
			.mte_sync = MTE_SYNC_ERR,
			.mapping = MAP_PRIVATE,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_OFF,
			.tag_op = TAG_OP_ALL,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_CLEAR_PROT_MTE,
			.mem_type = USE_MPROTECT,
			.mte_sync = MTE_SYNC_ERR,
			.mapping = MAP_PRIVATE,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_OFF,
			.tag_op = TAG_OP_ALL,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_ANON_MEM,
			.mem_type = USE_MMAP,
			.mte_sync = MTE_SYNC_ERR,
			.mapping = MAP_PRIVATE,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_ON,
			.tag_op = TAG_OP_ALL,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_ANON_MEM,
			.mem_type = USE_MPROTECT,
			.mte_sync = MTE_SYNC_ERR,
			.mapping = MAP_PRIVATE,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_ON,
			.tag_op = TAG_OP_ALL,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_ANON_MEM,
			.mem_type = USE_MMAP,
			.mte_sync = MTE_SYNC_ERR,
			.mapping = MAP_SHARED,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_ON,
			.tag_op = TAG_OP_ALL,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_ANON_MEM,
			.mem_type = USE_MPROTECT,
			.mte_sync = MTE_SYNC_ERR,
			.mapping = MAP_SHARED,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_ON,
			.tag_op = TAG_OP_ALL,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_FILE_MEM,
			.mem_type = USE_MMAP,
			.mte_sync = MTE_SYNC_ERR,
			.mapping = MAP_PRIVATE,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_ON,
			.tag_op = TAG_OP_ALL,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_FILE_MEM,
			.mem_type = USE_MPROTECT,
			.mte_sync = MTE_SYNC_ERR,
			.mapping = MAP_PRIVATE,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_ON,
			.tag_op = TAG_OP_ALL,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_FILE_MEM,
			.mem_type = USE_MMAP,
			.mte_sync = MTE_SYNC_ERR,
			.mapping = MAP_SHARED,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_ON,
			.tag_op = TAG_OP_ALL,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_FILE_MEM,
			.mem_type = USE_MPROTECT,
			.mte_sync = MTE_SYNC_ERR,
			.mapping = MAP_SHARED,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_ON,
			.tag_op = TAG_OP_ALL,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_FILE_MEM,
			.mem_type = USE_MMAP,
			.mte_sync = MTE_ASYNC_ERR,
			.mapping = MAP_PRIVATE,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_ON,
			.tag_op = TAG_OP_ALL,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_ANON_MEM,
			.mem_type = USE_MMAP,
			.mte_sync = MTE_SYNC_ERR,
			.mapping = MAP_PRIVATE,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_OFF,
			.tag_op = TAG_OP_STONLY,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_ANON_MEM,
			.mem_type = USE_MPROTECT,
			.mte_sync = MTE_SYNC_ERR,
			.mapping = MAP_PRIVATE,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_OFF,
			.tag_op = TAG_OP_STONLY,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_ANON_MEM,
			.mem_type = USE_MMAP,
			.mte_sync = MTE_SYNC_ERR,
			.mapping = MAP_SHARED,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_OFF,
			.tag_op = TAG_OP_STONLY,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_ANON_MEM,
			.mem_type = USE_MPROTECT,
			.mte_sync = MTE_SYNC_ERR,
			.mapping = MAP_SHARED,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_OFF,
			.tag_op = TAG_OP_STONLY,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_ANON_MEM,
			.mem_type = USE_MMAP,
			.mte_sync = MTE_ASYNC_ERR,
			.mapping = MAP_PRIVATE,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_OFF,
			.tag_op = TAG_OP_STONLY,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_ANON_MEM,
			.mem_type = USE_MPROTECT,
			.mte_sync = MTE_ASYNC_ERR,
			.mapping = MAP_PRIVATE,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_OFF,
			.tag_op = TAG_OP_STONLY,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_ANON_MEM,
			.mem_type = USE_MMAP,
			.mte_sync = MTE_ASYNC_ERR,
			.mapping = MAP_SHARED,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_OFF,
			.tag_op = TAG_OP_STONLY,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_ANON_MEM,
			.mem_type = USE_MPROTECT,
			.mte_sync = MTE_ASYNC_ERR,
			.mapping = MAP_SHARED,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_OFF,
			.tag_op = TAG_OP_STONLY,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_FILE_MEM,
			.mem_type = USE_MMAP,
			.mte_sync = MTE_SYNC_ERR,
			.mapping = MAP_PRIVATE,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_OFF,
			.tag_op = TAG_OP_STONLY,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_FILE_MEM,
			.mem_type = USE_MPROTECT,
			.mte_sync = MTE_SYNC_ERR,
			.mapping = MAP_PRIVATE,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_OFF,
			.tag_op = TAG_OP_STONLY,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_FILE_MEM,
			.mem_type = USE_MMAP,
			.mte_sync = MTE_SYNC_ERR,
			.mapping = MAP_SHARED,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_OFF,
			.tag_op = TAG_OP_STONLY,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_FILE_MEM,
			.mem_type = USE_MPROTECT,
			.mte_sync = MTE_SYNC_ERR,
			.mapping = MAP_SHARED,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_OFF,
			.tag_op = TAG_OP_STONLY,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_FILE_MEM,
			.mem_type = USE_MMAP,
			.mte_sync = MTE_ASYNC_ERR,
			.mapping = MAP_PRIVATE,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_OFF,
			.tag_op = TAG_OP_STONLY,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_FILE_MEM,
			.mem_type = USE_MPROTECT,
			.mte_sync = MTE_ASYNC_ERR,
			.mapping = MAP_PRIVATE,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_OFF,
			.tag_op = TAG_OP_STONLY,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_FILE_MEM,
			.mem_type = USE_MMAP,
			.mte_sync = MTE_ASYNC_ERR,
			.mapping = MAP_SHARED,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_OFF,
			.tag_op = TAG_OP_STONLY,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_FILE_MEM,
			.mem_type = USE_MPROTECT,
			.mte_sync = MTE_ASYNC_ERR,
			.mapping = MAP_SHARED,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_OFF,
			.tag_op = TAG_OP_STONLY,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_ANON_MEM,
			.mem_type = USE_MMAP,
			.mte_sync = MTE_SYNC_ERR,
			.mapping = MAP_PRIVATE,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_ON,
			.tag_op = TAG_OP_STONLY,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_ANON_MEM,
			.mem_type = USE_MPROTECT,
			.mte_sync = MTE_SYNC_ERR,
			.mapping = MAP_PRIVATE,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_ON,
			.tag_op = TAG_OP_STONLY,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_ANON_MEM,
			.mem_type = USE_MMAP,
			.mte_sync = MTE_SYNC_ERR,
			.mapping = MAP_SHARED,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_ON,
			.tag_op = TAG_OP_STONLY,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_ANON_MEM,
			.mem_type = USE_MPROTECT,
			.mte_sync = MTE_SYNC_ERR,
			.mapping = MAP_SHARED,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_ON,
			.tag_op = TAG_OP_STONLY,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_FILE_MEM,
			.mem_type = USE_MMAP,
			.mte_sync = MTE_SYNC_ERR,
			.mapping = MAP_PRIVATE,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_ON,
			.tag_op = TAG_OP_STONLY,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_FILE_MEM,
			.mem_type = USE_MPROTECT,
			.mte_sync = MTE_SYNC_ERR,
			.mapping = MAP_PRIVATE,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_ON,
			.tag_op = TAG_OP_STONLY,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_FILE_MEM,
			.mem_type = USE_MMAP,
			.mte_sync = MTE_SYNC_ERR,
			.mapping = MAP_SHARED,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_ON,
			.tag_op = TAG_OP_STONLY,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_FILE_MEM,
			.mem_type = USE_MPROTECT,
			.mte_sync = MTE_SYNC_ERR,
			.mapping = MAP_SHARED,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_ON,
			.tag_op = TAG_OP_STONLY,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_FILE_MEM,
			.mem_type = USE_MMAP,
			.mte_sync = MTE_ASYNC_ERR,
			.mapping = MAP_PRIVATE,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_ON,
			.tag_op = TAG_OP_STONLY,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_CLEAR_PROT_MTE,
			.mem_type = USE_MMAP,
			.mte_sync = MTE_SYNC_ERR,
			.mapping = MAP_PRIVATE,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_ON,
			.tag_op = TAG_OP_ALL,
			.enable_tco = false,
		},
		{
			.check_type = CHECK_CLEAR_PROT_MTE,
			.mem_type = USE_MPROTECT,
			.mte_sync = MTE_SYNC_ERR,
			.mapping = MAP_PRIVATE,
			.tag_check = TAG_CHECK_ON,
			.atag_check = ATAG_CHECK_ON,
			.tag_op = TAG_OP_ALL,
			.enable_tco = false,
		},
	};

	err = mte_default_setup();
	if (err)
@@ -200,64 +957,51 @@ int main(int argc, char *argv[])
	sizes[item - 2] = page_size;
	sizes[item - 1] = page_size + 1;

	/* Register signal handlers */
	mte_register_signal(SIGBUS, mte_default_handler);
	mte_register_signal(SIGSEGV, mte_default_handler);

	/* Set test plan */
	ksft_set_plan(22);
	ksft_set_plan(ARRAY_SIZE(test_cases));

	mte_enable_pstate_tco();
	for (i = 0 ; i < ARRAY_SIZE(test_cases); i++) {
		/* Register signal handlers */
		mte_register_signal(SIGBUS, mte_default_handler,
				    test_cases[i].atag_check == ATAG_CHECK_ON);
		mte_register_signal(SIGSEGV, mte_default_handler,
				    test_cases[i].atag_check == ATAG_CHECK_ON);

	evaluate_test(check_anonymous_memory_mapping(USE_MMAP, MTE_SYNC_ERR, MAP_PRIVATE, TAG_CHECK_OFF),
		      "Check anonymous memory with private mapping, sync error mode, mmap memory and tag check off\n");
	evaluate_test(check_file_memory_mapping(USE_MPROTECT, MTE_SYNC_ERR, MAP_PRIVATE, TAG_CHECK_OFF),
		      "Check file memory with private mapping, sync error mode, mmap/mprotect memory and tag check off\n");
		if (test_cases[i].enable_tco)
			mte_enable_pstate_tco();
		else
			mte_disable_pstate_tco();

	mte_disable_pstate_tco();
	evaluate_test(check_anonymous_memory_mapping(USE_MMAP, MTE_NONE_ERR, MAP_PRIVATE, TAG_CHECK_OFF),
		      "Check anonymous memory with private mapping, no error mode, mmap memory and tag check off\n");
	evaluate_test(check_file_memory_mapping(USE_MPROTECT, MTE_NONE_ERR, MAP_PRIVATE, TAG_CHECK_OFF),
		      "Check file memory with private mapping, no error mode, mmap/mprotect memory and tag check off\n");

	evaluate_test(check_anonymous_memory_mapping(USE_MMAP, MTE_SYNC_ERR, MAP_PRIVATE, TAG_CHECK_ON),
		      "Check anonymous memory with private mapping, sync error mode, mmap memory and tag check on\n");
	evaluate_test(check_anonymous_memory_mapping(USE_MPROTECT, MTE_SYNC_ERR, MAP_PRIVATE, TAG_CHECK_ON),
		      "Check anonymous memory with private mapping, sync error mode, mmap/mprotect memory and tag check on\n");
	evaluate_test(check_anonymous_memory_mapping(USE_MMAP, MTE_SYNC_ERR, MAP_SHARED, TAG_CHECK_ON),
		      "Check anonymous memory with shared mapping, sync error mode, mmap memory and tag check on\n");
	evaluate_test(check_anonymous_memory_mapping(USE_MPROTECT, MTE_SYNC_ERR, MAP_SHARED, TAG_CHECK_ON),
		      "Check anonymous memory with shared mapping, sync error mode, mmap/mprotect memory and tag check on\n");
	evaluate_test(check_anonymous_memory_mapping(USE_MMAP, MTE_ASYNC_ERR, MAP_PRIVATE, TAG_CHECK_ON),
		      "Check anonymous memory with private mapping, async error mode, mmap memory and tag check on\n");
	evaluate_test(check_anonymous_memory_mapping(USE_MPROTECT, MTE_ASYNC_ERR, MAP_PRIVATE, TAG_CHECK_ON),
		      "Check anonymous memory with private mapping, async error mode, mmap/mprotect memory and tag check on\n");
	evaluate_test(check_anonymous_memory_mapping(USE_MMAP, MTE_ASYNC_ERR, MAP_SHARED, TAG_CHECK_ON),
		      "Check anonymous memory with shared mapping, async error mode, mmap memory and tag check on\n");
	evaluate_test(check_anonymous_memory_mapping(USE_MPROTECT, MTE_ASYNC_ERR, MAP_SHARED, TAG_CHECK_ON),
		      "Check anonymous memory with shared mapping, async error mode, mmap/mprotect memory and tag check on\n");

	evaluate_test(check_file_memory_mapping(USE_MMAP, MTE_SYNC_ERR, MAP_PRIVATE, TAG_CHECK_ON),
		      "Check file memory with private mapping, sync error mode, mmap memory and tag check on\n");
	evaluate_test(check_file_memory_mapping(USE_MPROTECT, MTE_SYNC_ERR, MAP_PRIVATE, TAG_CHECK_ON),
		      "Check file memory with private mapping, sync error mode, mmap/mprotect memory and tag check on\n");
	evaluate_test(check_file_memory_mapping(USE_MMAP, MTE_SYNC_ERR, MAP_SHARED, TAG_CHECK_ON),
		      "Check file memory with shared mapping, sync error mode, mmap memory and tag check on\n");
	evaluate_test(check_file_memory_mapping(USE_MPROTECT, MTE_SYNC_ERR, MAP_SHARED, TAG_CHECK_ON),
		      "Check file memory with shared mapping, sync error mode, mmap/mprotect memory and tag check on\n");
	evaluate_test(check_file_memory_mapping(USE_MMAP, MTE_ASYNC_ERR, MAP_PRIVATE, TAG_CHECK_ON),
		      "Check file memory with private mapping, async error mode, mmap memory and tag check on\n");
	evaluate_test(check_file_memory_mapping(USE_MPROTECT, MTE_ASYNC_ERR, MAP_PRIVATE, TAG_CHECK_ON),
		      "Check file memory with private mapping, async error mode, mmap/mprotect memory and tag check on\n");
	evaluate_test(check_file_memory_mapping(USE_MMAP, MTE_ASYNC_ERR, MAP_SHARED, TAG_CHECK_ON),
		      "Check file memory with shared mapping, async error mode, mmap memory and tag check on\n");
	evaluate_test(check_file_memory_mapping(USE_MPROTECT, MTE_ASYNC_ERR, MAP_SHARED, TAG_CHECK_ON),
		      "Check file memory with shared mapping, async error mode, mmap/mprotect memory and tag check on\n");

	evaluate_test(check_clear_prot_mte_flag(USE_MMAP, MTE_SYNC_ERR, MAP_PRIVATE),
		      "Check clear PROT_MTE flags with private mapping, sync error mode and mmap memory\n");
	evaluate_test(check_clear_prot_mte_flag(USE_MPROTECT, MTE_SYNC_ERR, MAP_PRIVATE),
		      "Check clear PROT_MTE flags with private mapping and sync error mode and mmap/mprotect memory\n");
		switch (test_cases[i].check_type) {
		case CHECK_ANON_MEM:
			evaluate_test(check_anonymous_memory_mapping(test_cases[i].mem_type,
								     test_cases[i].mte_sync,
								     test_cases[i].mapping,
								     test_cases[i].tag_check,
								     test_cases[i].atag_check,
								     test_cases[i].tag_op),
				      format_test_name(&test_cases[i]));
			break;
		case CHECK_FILE_MEM:
			evaluate_test(check_file_memory_mapping(test_cases[i].mem_type,
								test_cases[i].mte_sync,
								test_cases[i].mapping,
								test_cases[i].tag_check,
								test_cases[i].atag_check,
								test_cases[i].tag_op),
				      format_test_name(&test_cases[i]));
			break;
		case CHECK_CLEAR_PROT_MTE:
			evaluate_test(check_clear_prot_mte_flag(test_cases[i].mem_type,
								test_cases[i].mte_sync,
								test_cases[i].mapping,
								test_cases[i].atag_check),
				      format_test_name(&test_cases[i]));
			break;
		default:
			exit(KSFT_FAIL);
		}
	}

	mte_restore_setup();
	ksft_print_cnts();

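The atag cases above route each pointer through mte_insert_atag() before the faulting access, so the reported fault address carries a non-zero logical tag in bits [63:60] — exactly the bits FEAT_MTE_TAGGED_FAR exposes through siginfo. A standalone sketch of what such a helper has to do; the mask math is illustrative and the selftest's own helper may derive the tag differently:

#include <stdint.h>

#define ATAG_SHIFT	60
#define ATAG_MASK	(0xfUL << ATAG_SHIFT)

/* Illustrative: plant an address tag in bits [63:60] while leaving the
 * MTE allocation tag in bits [59:56] untouched. */
static inline void *insert_atag(void *ptr, uint64_t atag)
{
	uint64_t addr = (uint64_t)ptr & ~ATAG_MASK;

	return (void *)(addr | ((atag & 0xf) << ATAG_SHIFT));
}
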
@@ -12,6 +12,10 @@

#include "kselftest.h"

#ifndef AT_HWCAP3
#define AT_HWCAP3 29
#endif

static int set_tagged_addr_ctrl(int val)
{
	int ret;
@@ -60,7 +64,7 @@ void check_basic_read(void)
/*
 * Attempt to set a specified combination of modes.
 */
void set_mode_test(const char *name, int hwcap2, int mask)
void set_mode_test(const char *name, int hwcap2, int hwcap3, int mask)
{
	int ret;

@@ -69,6 +73,11 @@ void set_mode_test(const char *name, int hwcap2, int mask)
		return;
	}

	if ((getauxval(AT_HWCAP3) & hwcap3) != hwcap3) {
		ksft_test_result_skip("%s\n", name);
		return;
	}

	ret = set_tagged_addr_ctrl(mask);
	if (ret < 0) {
		ksft_test_result_fail("%s\n", name);
@@ -81,7 +90,7 @@ void set_mode_test(const char *name, int hwcap2, int mask)
		return;
	}

	if ((ret & PR_MTE_TCF_MASK) == mask) {
	if ((ret & (PR_MTE_TCF_MASK | PR_MTE_STORE_ONLY)) == mask) {
		ksft_test_result_pass("%s\n", name);
	} else {
		ksft_print_msg("Got %x, expected %x\n",
@@ -93,12 +102,16 @@ void set_mode_test(const char *name, int hwcap2, int mask)
struct mte_mode {
	int mask;
	int hwcap2;
	int hwcap3;
	const char *name;
} mte_modes[] = {
	{ PR_MTE_TCF_NONE, 0, "NONE" },
	{ PR_MTE_TCF_SYNC, HWCAP2_MTE, "SYNC" },
	{ PR_MTE_TCF_ASYNC, HWCAP2_MTE, "ASYNC" },
	{ PR_MTE_TCF_SYNC | PR_MTE_TCF_ASYNC, HWCAP2_MTE, "SYNC+ASYNC" },
	{ PR_MTE_TCF_NONE, 0, 0, "NONE" },
	{ PR_MTE_TCF_SYNC, HWCAP2_MTE, 0, "SYNC" },
	{ PR_MTE_TCF_ASYNC, HWCAP2_MTE, 0, "ASYNC" },
	{ PR_MTE_TCF_SYNC | PR_MTE_TCF_ASYNC, HWCAP2_MTE, 0, "SYNC+ASYNC" },
	{ PR_MTE_TCF_SYNC | PR_MTE_STORE_ONLY, HWCAP2_MTE, HWCAP3_MTE_STORE_ONLY, "SYNC+STONLY" },
	{ PR_MTE_TCF_ASYNC | PR_MTE_STORE_ONLY, HWCAP2_MTE, HWCAP3_MTE_STORE_ONLY, "ASYNC+STONLY" },
	{ PR_MTE_TCF_SYNC | PR_MTE_TCF_ASYNC | PR_MTE_STORE_ONLY, HWCAP2_MTE, HWCAP3_MTE_STORE_ONLY, "SYNC+ASYNC+STONLY" },
};

int main(void)
@@ -106,11 +119,11 @@ int main(void)
	int i;

	ksft_print_header();
	ksft_set_plan(5);
	ksft_set_plan(ARRAY_SIZE(mte_modes));

	check_basic_read();
	for (i = 0; i < ARRAY_SIZE(mte_modes); i++)
		set_mode_test(mte_modes[i].name, mte_modes[i].hwcap2,
		set_mode_test(mte_modes[i].name, mte_modes[i].hwcap2, mte_modes[i].hwcap3,
			      mte_modes[i].mask);

	ksft_print_cnts();

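The new mte_modes entries reach PR_MTE_STORE_ONLY through the same prctl() path as the existing TCF bits. A minimal direct user of that interface, assuming the PR_MTE_STORE_ONLY value from the updated uapi headers — the fallback bit below is an assumption for older headers:

#include <stdio.h>
#include <sys/prctl.h>

#ifndef PR_MTE_STORE_ONLY
#define PR_MTE_STORE_ONLY	(1UL << 19)	/* assumed uapi bit */
#endif

int main(void)
{
	/* Report tag-check faults synchronously, but only for stores. */
	unsigned long ctrl = PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_SYNC |
			     PR_MTE_STORE_ONLY;

	if (prctl(PR_SET_TAGGED_ADDR_CTRL, ctrl, 0, 0, 0)) {
		perror("PR_SET_TAGGED_ADDR_CTRL");
		return 1;
	}
	return 0;
}
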
@@ -57,7 +57,7 @@ static int check_single_included_tags(int mem_type, int mode)
		return KSFT_FAIL;

	for (tag = 0; (tag < MT_TAG_COUNT) && (result == KSFT_PASS); tag++) {
		ret = mte_switch_mode(mode, MT_INCLUDE_VALID_TAG(tag));
		ret = mte_switch_mode(mode, MT_INCLUDE_VALID_TAG(tag), false);
		if (ret != 0)
			result = KSFT_FAIL;
		/* Try to catch a excluded tag by a number of tries. */
@@ -91,7 +91,7 @@ static int check_multiple_included_tags(int mem_type, int mode)

	for (tag = 0; (tag < MT_TAG_COUNT - 1) && (result == KSFT_PASS); tag++) {
		excl_mask |= 1 << tag;
		mte_switch_mode(mode, MT_INCLUDE_VALID_TAGS(excl_mask));
		mte_switch_mode(mode, MT_INCLUDE_VALID_TAGS(excl_mask), false);
		/* Try to catch a excluded tag by a number of tries. */
		for (run = 0; (run < RUNS) && (result == KSFT_PASS); run++) {
			ptr = mte_insert_tags(ptr, BUFFER_SIZE);
@@ -120,7 +120,7 @@ static int check_all_included_tags(int mem_type, int mode)
			mem_type, false) != KSFT_PASS)
		return KSFT_FAIL;

	ret = mte_switch_mode(mode, MT_INCLUDE_TAG_MASK);
	ret = mte_switch_mode(mode, MT_INCLUDE_TAG_MASK, false);
	if (ret != 0)
		return KSFT_FAIL;
	/* Try to catch a excluded tag by a number of tries. */
@@ -145,7 +145,7 @@ static int check_none_included_tags(int mem_type, int mode)
	if (check_allocated_memory(ptr, BUFFER_SIZE, mem_type, false) != KSFT_PASS)
		return KSFT_FAIL;

	ret = mte_switch_mode(mode, MT_EXCLUDE_TAG_MASK);
	ret = mte_switch_mode(mode, MT_EXCLUDE_TAG_MASK, false);
	if (ret != 0)
		return KSFT_FAIL;
	/* Try to catch a excluded tag by a number of tries. */
@@ -180,7 +180,7 @@ int main(int argc, char *argv[])
		return err;

	/* Register SIGSEGV handler */
	mte_register_signal(SIGSEGV, mte_default_handler);
	mte_register_signal(SIGSEGV, mte_default_handler, false);

	/* Set test plan */
	ksft_set_plan(4);

@@ -44,7 +44,7 @@ static int check_usermem_access_fault(int mem_type, int mode, int mapping,

	err = KSFT_PASS;
	len = 2 * page_sz;
	mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG);
	mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG, false);
	fd = create_temp_file();
	if (fd == -1)
		return KSFT_FAIL;
@@ -211,7 +211,7 @@ int main(int argc, char *argv[])
		return err;

	/* Register signal handlers */
	mte_register_signal(SIGSEGV, mte_default_handler);
	mte_register_signal(SIGSEGV, mte_default_handler, false);

	/* Set test plan */
	ksft_set_plan(64);

@ -6,6 +6,7 @@
|
|||
#include <signal.h>
|
||||
#include <stdio.h>
|
||||
#include <stdlib.h>
|
||||
#include <time.h>
|
||||
#include <unistd.h>
|
||||
|
||||
#include <linux/auxvec.h>
|
||||
|
|
@@ -19,20 +20,40 @@
 #include "mte_common_util.h"
 #include "mte_def.h"
 
+#ifndef SA_EXPOSE_TAGBITS
+#define SA_EXPOSE_TAGBITS 0x00000800
+#endif
+
 #define INIT_BUFFER_SIZE 256
 
 struct mte_fault_cxt cur_mte_cxt;
+bool mtefar_support;
+bool mtestonly_support;
 static unsigned int mte_cur_mode;
 static unsigned int mte_cur_pstate_tco;
+static bool mte_cur_stonly;
 
 void mte_default_handler(int signum, siginfo_t *si, void *uc)
 {
+	struct sigaction sa;
 	unsigned long addr = (unsigned long)si->si_addr;
+	unsigned char si_tag, si_atag;
+
+	sigaction(signum, NULL, &sa);
+
+	if (sa.sa_flags & SA_EXPOSE_TAGBITS) {
+		si_tag = MT_FETCH_TAG(addr);
+		si_atag = MT_FETCH_ATAG(addr);
+		addr = MT_CLEAR_TAGS(addr);
+	} else {
+		si_tag = 0;
+		si_atag = 0;
+	}
 
 	if (signum == SIGSEGV) {
 #ifdef DEBUG
-		ksft_print_msg("INFO: SIGSEGV signal at pc=%lx, fault addr=%lx, si_code=%lx\n",
-			       ((ucontext_t *)uc)->uc_mcontext.pc, addr, si->si_code);
+		ksft_print_msg("INFO: SIGSEGV signal at pc=%lx, fault addr=%lx, si_code=%lx, si_tag=%x, si_atag=%x\n",
+			       ((ucontext_t *)uc)->uc_mcontext.pc, addr, si->si_code, si_tag, si_atag);
 #endif
		if (si->si_code == SEGV_MTEAERR) {
			if (cur_mte_cxt.trig_si_code == si->si_code)

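For readers following the handler plumbing above: with SA_EXPOSE_TAGBITS set, the kernel leaves the MTE logical tag in bits [59:56] of si_addr and, on FEAT_MTE_TAGGED_FAR hardware, the address tag in bits [63:60]. A minimal standalone sketch of that decoding (illustration only, not part of the patch; fprintf used for brevity despite not being async-signal-safe):

	#include <signal.h>
	#include <stdio.h>

	static void tag_report_handler(int sig, siginfo_t *si, void *uc)
	{
		unsigned long addr = (unsigned long)si->si_addr;
		unsigned char tag  = (addr >> 56) & 0xf;	/* MTE logical tag */
		unsigned char atag = (addr >> 60) & 0xf;	/* address tag (MTE_FAR) */

		/* Strip both tag nibbles to recover the untagged fault address. */
		fprintf(stderr, "fault at %#lx, tag=%#x, atag=%#x\n",
			addr & ~(0xffUL << 56), tag, atag);
	}
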
@@ -45,13 +66,18 @@ void mte_default_handler(int signum, siginfo_t *si, void *uc)
 		}
 		/* Compare the context for precise error */
 		else if (si->si_code == SEGV_MTESERR) {
+			if ((!mtefar_support && si_atag) || (si_atag != MT_FETCH_ATAG(cur_mte_cxt.trig_addr))) {
+				ksft_print_msg("Invalid MTE synchronous exception caught for address tag! si_tag=%x, si_atag: %x\n", si_tag, si_atag);
+				exit(KSFT_FAIL);
+			}
+
 			if (cur_mte_cxt.trig_si_code == si->si_code &&
 			    ((cur_mte_cxt.trig_range >= 0 &&
-			      addr >= MT_CLEAR_TAG(cur_mte_cxt.trig_addr) &&
-			      addr <= (MT_CLEAR_TAG(cur_mte_cxt.trig_addr) + cur_mte_cxt.trig_range)) ||
+			      addr >= MT_CLEAR_TAGS(cur_mte_cxt.trig_addr) &&
+			      addr <= (MT_CLEAR_TAGS(cur_mte_cxt.trig_addr) + cur_mte_cxt.trig_range)) ||
 			     (cur_mte_cxt.trig_range < 0 &&
-			      addr <= MT_CLEAR_TAG(cur_mte_cxt.trig_addr) &&
-			      addr >= (MT_CLEAR_TAG(cur_mte_cxt.trig_addr) + cur_mte_cxt.trig_range)))) {
+			      addr <= MT_CLEAR_TAGS(cur_mte_cxt.trig_addr) &&
+			      addr >= (MT_CLEAR_TAGS(cur_mte_cxt.trig_addr) + cur_mte_cxt.trig_range)))) {
				cur_mte_cxt.fault_valid = true;
				/* Adjust the pc by 4 */
				((ucontext_t *)uc)->uc_mcontext.pc += 4;

@@ -67,11 +93,11 @@ void mte_default_handler(int signum, siginfo_t *si, void *uc)
 		ksft_print_msg("INFO: SIGBUS signal at pc=%llx, fault addr=%lx, si_code=%x\n",
 			       ((ucontext_t *)uc)->uc_mcontext.pc, addr, si->si_code);
 		if ((cur_mte_cxt.trig_range >= 0 &&
-		     addr >= MT_CLEAR_TAG(cur_mte_cxt.trig_addr) &&
-		     addr <= (MT_CLEAR_TAG(cur_mte_cxt.trig_addr) + cur_mte_cxt.trig_range)) ||
+		     addr >= MT_CLEAR_TAGS(cur_mte_cxt.trig_addr) &&
+		     addr <= (MT_CLEAR_TAGS(cur_mte_cxt.trig_addr) + cur_mte_cxt.trig_range)) ||
 		    (cur_mte_cxt.trig_range < 0 &&
-		     addr <= MT_CLEAR_TAG(cur_mte_cxt.trig_addr) &&
-		     addr >= (MT_CLEAR_TAG(cur_mte_cxt.trig_addr) + cur_mte_cxt.trig_range))) {
+		     addr <= MT_CLEAR_TAGS(cur_mte_cxt.trig_addr) &&
+		     addr >= (MT_CLEAR_TAGS(cur_mte_cxt.trig_addr) + cur_mte_cxt.trig_range))) {
			cur_mte_cxt.fault_valid = true;
			/* Adjust the pc by 4 */
			((ucontext_t *)uc)->uc_mcontext.pc += 4;

@@ -79,12 +105,17 @@ void mte_default_handler(int signum, siginfo_t *si, void *uc)
 	}
 }
 
-void mte_register_signal(int signal, void (*handler)(int, siginfo_t *, void *))
+void mte_register_signal(int signal, void (*handler)(int, siginfo_t *, void *),
+			 bool export_tags)
 {
 	struct sigaction sa;
 
 	sa.sa_sigaction = handler;
 	sa.sa_flags = SA_SIGINFO;
+
+	if (export_tags && signal == SIGSEGV)
+		sa.sa_flags |= SA_EXPOSE_TAGBITS;
+
 	sigemptyset(&sa.sa_mask);
 	sigaction(signal, &sa, NULL);
 }

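A caller that wants the tagged si_addr opts in through the new export_tags argument; passing false preserves the old behaviour. A one-line usage sketch (hypothetical caller, not from the patch):

	/* Expose tag bits in si_addr for SIGSEGV reports from this test. */
	mte_register_signal(SIGSEGV, mte_default_handler, true);
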
@@ -120,6 +151,19 @@ void mte_clear_tags(void *ptr, size_t size)
 	mte_clear_tag_address_range(ptr, size);
 }
 
+void *mte_insert_atag(void *ptr)
+{
+	unsigned char atag;
+
+	atag = mtefar_support ? (random() % MT_ATAG_MASK) + 1 : 0;
+	return (void *)MT_SET_ATAG((unsigned long)ptr, atag);
+}
+
+void *mte_clear_atag(void *ptr)
+{
+	return (void *)MT_CLEAR_ATAG((unsigned long)ptr);
+}
+
 static void *__mte_allocate_memory_range(size_t size, int mem_type, int mapping,
					 size_t range_before, size_t range_after,
					 bool tags, int fd)

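Note that mte_insert_atag() only generates a non-zero address tag (1..15) when FEAT_MTE_TAGGED_FAR is present; otherwise the pointer comes back with a zero atag. A usage sketch (hypothetical test snippet; USE_MMAP and BUFFER_SIZE as used elsewhere in these tests):

	char *ptr = mte_allocate_memory(BUFFER_SIZE, USE_MMAP, 0, true);

	/* Tag the allocation, then stash a random atag in bits [63:60]. */
	ptr = mte_insert_tags(ptr, BUFFER_SIZE);
	ptr = mte_insert_atag(ptr);
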
@@ -272,7 +316,7 @@ void mte_initialize_current_context(int mode, uintptr_t ptr, ssize_t range)
 		cur_mte_cxt.trig_si_code = 0;
 }
 
-int mte_switch_mode(int mte_option, unsigned long incl_mask)
+int mte_switch_mode(int mte_option, unsigned long incl_mask, bool stonly)
 {
 	unsigned long en = 0;
 

@@ -304,6 +348,9 @@ int mte_switch_mode(int mte_option, unsigned long incl_mask)
 		break;
 	}
 
+	if (mtestonly_support && stonly)
+		en |= PR_MTE_STORE_ONLY;
+
 	en |= (incl_mask << PR_MTE_TAG_SHIFT);
	/* Enable address tagging ABI, mte error reporting mode and tag inclusion mask. */
	if (prctl(PR_SET_TAGGED_ADDR_CTRL, en, 0, 0, 0) != 0) {

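The new bit simply composes with the existing PR_SET_TAGGED_ADDR_CTRL flags. A minimal standalone sketch, not the selftest code (PR_MTE_STORE_ONLY is the prctl bit introduced alongside this series; the other PR_MTE_* names come from the uapi prctl header):

	#include <sys/prctl.h>

	/* Enable synchronous MTE tag checking on stores only, with the
	 * given tag inclusion mask (a sketch under the assumptions above).
	 */
	static int enable_store_only_mte(unsigned long incl_mask)
	{
		unsigned long en = PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_SYNC;

		en |= PR_MTE_STORE_ONLY;		/* skip checks on loads */
		en |= incl_mask << PR_MTE_TAG_SHIFT;	/* tag inclusion mask */

		return prctl(PR_SET_TAGGED_ADDR_CTRL, en, 0, 0, 0);
	}
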
@@ -316,12 +363,21 @@
 int mte_default_setup(void)
 {
 	unsigned long hwcaps2 = getauxval(AT_HWCAP2);
+	unsigned long hwcaps3 = getauxval(AT_HWCAP3);
 	unsigned long en = 0;
 	int ret;
 
+	/* To generate random address tag */
+	srandom(time(NULL));
+
 	if (!(hwcaps2 & HWCAP2_MTE))
 		ksft_exit_skip("MTE features unavailable\n");
 
+	mtefar_support = !!(hwcaps3 & HWCAP3_MTE_FAR);
+
+	if (hwcaps3 & HWCAP3_MTE_STORE_ONLY)
+		mtestonly_support = true;
+
 	/* Get current mte mode */
 	ret = prctl(PR_GET_TAGGED_ADDR_CTRL, en, 0, 0, 0);
	if (ret < 0) {

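For reference, the same capability probe outside the harness is a one-liner, since getauxval() returns 0 for auxv entries the running kernel does not provide. A sketch, assuming uapi headers new enough to define AT_HWCAP3 and the HWCAP3_MTE_* bits added alongside this series:

	#include <stdbool.h>
	#include <sys/auxv.h>

	static bool mte_far_supported(void)
	{
		/* Zero on kernels that predate AT_HWCAP3. */
		return !!(getauxval(AT_HWCAP3) & HWCAP3_MTE_FAR);
	}
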
@@ -335,6 +391,8 @@ int mte_default_setup(void)
 	else if (ret & PR_MTE_TCF_NONE)
 		mte_cur_mode = MTE_NONE_ERR;
 
+	mte_cur_stonly = (ret & PR_MTE_STORE_ONLY) ? true : false;
+
 	mte_cur_pstate_tco = mte_get_pstate_tco();
	/* Disable PSTATE.TCO */
	mte_disable_pstate_tco();

@@ -343,7 +401,7 @@ int mte_default_setup(void)
 
 void mte_restore_setup(void)
 {
-	mte_switch_mode(mte_cur_mode, MTE_ALLOW_NON_ZERO_TAG);
+	mte_switch_mode(mte_cur_mode, MTE_ALLOW_NON_ZERO_TAG, mte_cur_stonly);
	if (mte_cur_pstate_tco == MT_PSTATE_TCO_EN)
		mte_enable_pstate_tco();
	else if (mte_cur_pstate_tco == MT_PSTATE_TCO_DIS)

@@ -37,10 +37,13 @@ struct mte_fault_cxt {
 };
 
 extern struct mte_fault_cxt cur_mte_cxt;
+extern bool mtefar_support;
+extern bool mtestonly_support;
 
 /* MTE utility functions */
 void mte_default_handler(int signum, siginfo_t *si, void *uc);
-void mte_register_signal(int signal, void (*handler)(int, siginfo_t *, void *));
+void mte_register_signal(int signal, void (*handler)(int, siginfo_t *, void *),
+			 bool export_tags);
 void mte_wait_after_trig(void);
 void *mte_allocate_memory(size_t size, int mem_type, int mapping, bool tags);
 void *mte_allocate_memory_tag_range(size_t size, int mem_type, int mapping,

@@ -54,9 +57,11 @@ void mte_free_memory_tag_range(void *ptr, size_t size, int mem_type,
 			       size_t range_before, size_t range_after);
 void *mte_insert_tags(void *ptr, size_t size);
 void mte_clear_tags(void *ptr, size_t size);
+void *mte_insert_atag(void *ptr);
+void *mte_clear_atag(void *ptr);
 int mte_default_setup(void);
 void mte_restore_setup(void);
-int mte_switch_mode(int mte_option, unsigned long incl_mask);
+int mte_switch_mode(int mte_option, unsigned long incl_mask, bool stonly);
 void mte_initialize_current_context(int mode, uintptr_t ptr, ssize_t range);
 
 /* Common utility functions */

@@ -42,6 +42,8 @@
 #define MT_TAG_COUNT		16
 #define MT_INCLUDE_TAG_MASK	0xFFFF
 #define MT_EXCLUDE_TAG_MASK	0x0
+#define MT_ATAG_SHIFT		60
+#define MT_ATAG_MASK		0xFUL
 
 #define MT_ALIGN_GRANULE	(MT_GRANULE_SIZE - 1)
 #define MT_CLEAR_TAG(x)		((x) & ~(MT_TAG_MASK << MT_TAG_SHIFT))

@@ -49,6 +51,12 @@
 #define MT_FETCH_TAG(x)		((x >> MT_TAG_SHIFT) & (MT_TAG_MASK))
 #define MT_ALIGN_UP(x)		((x + MT_ALIGN_GRANULE) & ~(MT_ALIGN_GRANULE))
 
+#define MT_CLEAR_ATAG(x)	((x) & ~(MT_TAG_MASK << MT_ATAG_SHIFT))
+#define MT_SET_ATAG(x, y)	((x) | (((y) & MT_ATAG_MASK) << MT_ATAG_SHIFT))
+#define MT_FETCH_ATAG(x)	((x >> MT_ATAG_SHIFT) & (MT_ATAG_MASK))
+
+#define MT_CLEAR_TAGS(x)	(MT_CLEAR_ATAG(MT_CLEAR_TAG(x)))
+
 #define MT_PSTATE_TCO_SHIFT	25
 #define MT_PSTATE_TCO_MASK	~(0x1 << MT_PSTATE_TCO_SHIFT)
 #define MT_PSTATE_TCO_EN	1

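A worked example of the new address-tag macros (values chosen for illustration): with MT_ATAG_SHIFT = 60 the atag occupies the top nibble, next to the logical tag at bits [59:56]:

	unsigned long p      = 0xffff8000UL;
	unsigned long tagged = MT_SET_ATAG(p, 0x5);   /* 0x50000000ffff8000 */
	unsigned char atag   = MT_FETCH_ATAG(tagged); /* 0x5 */
	unsigned long clean  = MT_CLEAR_TAGS(tagged); /* 0xffff8000 again */
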
@@ -140,7 +140,7 @@ static void enable_os_lock(void)
 
 static void enable_monitor_debug_exceptions(void)
 {
-	uint32_t mdscr;
+	uint64_t mdscr;
 
	asm volatile("msr daifclr, #8");
 

@@ -223,7 +223,7 @@ void install_hw_bp_ctx(uint8_t addr_bp, uint8_t ctx_bp, uint64_t addr,
 
 static void install_ss(void)
 {
-	uint32_t mdscr;
+	uint64_t mdscr;
 
	asm volatile("msr daifclr, #8");
 

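The width change matters because MDSCR_EL1 is a 64-bit system register: reading it into a uint32_t silently truncates the upper half on the read-modify-write. A sketch of the guest-side pattern (helper and flag names assumed from the KVM selftest headers, not quoted from the patch):

	uint64_t mdscr;

	/* Read-modify-write MDSCR_EL1 without dropping the upper 32 bits. */
	mdscr = read_sysreg(mdscr_el1);
	mdscr |= MDSCR_KDE | MDSCR_MDE;	/* enable monitor debug exceptions */
	write_sysreg(mdscr, mdscr_el1);
	isb();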