From 8d5fae6011260de209aaf231120e8146b14bc8e0 Mon Sep 17 00:00:00 2001
From: Breno Leitao
Date: Tue, 10 Mar 2026 03:13:16 -0700
Subject: [PATCH 1/5] perf/x86: Move event pointer setup earlier in
 x86_pmu_enable()

A production AMD EPYC system crashed with a NULL pointer dereference in
the PMU NMI handler:

  BUG: kernel NULL pointer dereference, address: 0000000000000198
  RIP: x86_perf_event_update+0xc/0xa0
  Call Trace:
   amd_pmu_v2_handle_irq+0x1a6/0x390
   perf_event_nmi_handler+0x24/0x40

The faulting instruction is `cmpq $0x0, 0x198(%rdi)` with RDI=0,
corresponding to the `if (unlikely(!hwc->event_base))` check in
x86_perf_event_update(), where hwc = &event->hw and event is NULL.

drgn inspection of the vmcore on CPU 106 showed a mismatch between
cpuc->active_mask and cpuc->events[]:

  active_mask: 0x1e (bits 1, 2, 3, 4)
  events[1]:   0xff1100136cbd4f38 (valid)
  events[2]:   0x0 (NULL, but active_mask bit 2 set)
  events[3]:   0xff1100076fd2cf38 (valid)
  events[4]:   0xff1100079e990a90 (valid)

The event that should occupy events[2] was found in event_list[2] with
hw.idx=2 and hw.state=0x0, confirming x86_pmu_start() had run (which
clears hw.state and sets active_mask) but events[2] was never
populated. Another event (event_list[0]) had hw.state=0x7
(STOPPED|UPTODATE|ARCH), showing it was stopped when the PMU
rescheduled events, confirming the throttle-then-reschedule sequence
occurred.

The root cause is commit 7e772a93eb61 ("perf/x86: Fix NULL event access
and potential PEBS record loss") which moved the cpuc->events[idx]
assignment out of x86_pmu_start() and into step 2 of x86_pmu_enable(),
after the PERF_HES_ARCH check. This broke any path that calls
pmu->start() without going through x86_pmu_enable() -- specifically the
unthrottle path:

  perf_adjust_freq_unthr_events()
    -> perf_event_unthrottle_group()
      -> perf_event_unthrottle()
        -> event->pmu->start(event, 0)
          -> x86_pmu_start()  // sets active_mask but not events[]

The race sequence is:

 1. A group of perf events overflows, triggering group throttle via
    perf_event_throttle_group(). All events are stopped: active_mask
    bits cleared, events[] preserved (x86_pmu_stop() no longer clears
    events[] after commit 7e772a93eb61).

 2. While still throttled (PERF_HES_STOPPED), x86_pmu_enable() runs due
    to other scheduling activity. Stopped events that need to move
    counters get PERF_HES_ARCH set and events[old_idx] cleared. In
    step 2 of x86_pmu_enable(), PERF_HES_ARCH causes these events to be
    skipped -- events[new_idx] is never set.

 3. The timer tick unthrottles the group via pmu->start(). Since commit
    7e772a93eb61 removed the events[] assignment from x86_pmu_start(),
    active_mask[new_idx] is set but events[new_idx] remains NULL.

 4. A PMC overflow NMI fires. The handler iterates active counters,
    finds active_mask[2] set, reads events[2] which is NULL, and
    crashes dereferencing it.

Move the cpuc->events[hwc->idx] assignment in x86_pmu_enable() to
before the PERF_HES_ARCH check, so that events[] is populated even for
events that are not immediately started. This ensures the unthrottle
path via pmu->start() always finds a valid event pointer.
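The invariant this restores can be stated compactly: any counter index
set in active_mask must have a non-NULL events[] entry, because the NMI
handler dereferences events[idx] unconditionally. A minimal userspace
sketch of that invariant (hypothetical, simplified types -- not the
kernel structures):

  #include <assert.h>
  #include <stddef.h>

  #define NUM_COUNTERS 8

  struct cpu_hw_events {
          unsigned long active_mask;
          void *events[NUM_COUNTERS];
  };

  /* The assumption the overflow NMI handler relies on. */
  static void assert_nmi_invariant(const struct cpu_hw_events *cpuc)
  {
          for (int idx = 0; idx < NUM_COUNTERS; idx++) {
                  if (cpuc->active_mask & (1UL << idx))
                          assert(cpuc->events[idx] != NULL);
          }
  }

In the vmcore state above, active_mask bit 2 is set while events[2] is
NULL, which is exactly the state this assertion rejects and the NMI
handler trips over.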
Fixes: 7e772a93eb61 ("perf/x86: Fix NULL event access and potential PEBS record loss")
Signed-off-by: Breno Leitao
Signed-off-by: Peter Zijlstra (Intel)
Cc: stable@vger.kernel.org
Link: https://patch.msgid.link/20260310-perf-v2-1-4a3156fce43c@debian.org
---
 arch/x86/events/core.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 03ce1bc7ef2e..54b4c315d927 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -1372,6 +1372,8 @@ static void x86_pmu_enable(struct pmu *pmu)
 		else if (i < n_running)
 			continue;
 
+		cpuc->events[hwc->idx] = event;
+
 		if (hwc->state & PERF_HES_ARCH)
 			continue;
 
@@ -1379,7 +1381,6 @@
 		 * if cpuc->enabled = 0, then no wrmsr as
 		 * per x86_pmu_enable_event()
 		 */
-		cpuc->events[hwc->idx] = event;
 		x86_pmu_start(event, PERF_EF_RELOAD);
 	}
 	cpuc->n_added = 0;

From f1cac6ac62d28a9a57b17f51ac5795bf250c12d3 Mon Sep 17 00:00:00 2001
From: Peter Zijlstra
Date: Wed, 11 Mar 2026 21:29:14 +0100
Subject: [PATCH 2/5] x86/perf: Make sure to program the counter value for
 stopped events on migration

Both Mi Dapeng and Ian Rogers noted that not everything that sets
HES_STOPPED is required to EF_UPDATE.

Specifically, the 'step 1' loop of rescheduling explicitly does
EF_UPDATE to ensure the counter value is read. However, 'step 2' then
simply leaves the new counter uninitialized when HES_STOPPED, even
though, as noted above, the thing that stopped the event might not be
aware it needs to EF_RELOAD -- since it didn't EF_UPDATE on stop.

One affected location is throttling: throttle does pmu->stop(, 0) and
unthrottle does pmu->start(, 0), possibly restarting an uninitialized
counter.

Fixes: a4eaf7f14675 ("perf: Rework the PMU methods")
Reported-by: Dapeng Mi
Reported-by: Ian Rogers
Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Dapeng Mi
Link: https://patch.msgid.link/20260311204035.GX606826@noisy.programming.kicks-ass.net
---
 arch/x86/events/core.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 54b4c315d927..810ab21ffd99 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -1374,8 +1374,10 @@ static void x86_pmu_enable(struct pmu *pmu)
 
 		cpuc->events[hwc->idx] = event;
 
-		if (hwc->state & PERF_HES_ARCH)
+		if (hwc->state & PERF_HES_ARCH) {
+			static_call(x86_pmu_set_period)(event);
 			continue;
+		}
 
 		/*
 		 * if cpuc->enabled = 0, then no wrmsr as

From 4b9ce671960627b2505b3f64742544ae9801df97 Mon Sep 17 00:00:00 2001
From: Peter Zijlstra
Date: Mon, 9 Mar 2026 13:55:46 +0100
Subject: [PATCH 3/5] perf: Make sure to use pmu_ctx->pmu for groups

Oliver reported that x86_pmu_del() ended up doing an out-of-bounds
memory access when group_sched_in() fails and needs to roll back.

This *should* be handled by the transaction callbacks, but he found
that when the group leader is a software event, the transaction
handlers of the wrong PMU are used, despite the move_group case in
perf_event_open() and group_sched_in() using pmu_ctx->pmu.

Turns out, inherit uses event->pmu to clone the events, effectively
undoing the move_group case for all inherited contexts.

Fix this by also making inherit use pmu_ctx->pmu, ensuring all
inherited counters end up in the same pmu context.

Similarly, __perf_event_read() should equally use pmu_ctx->pmu for the
group case.
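The pointer relationship can be sketched with stub types (an
illustrative model, not the kernel structures): after a move_group,
the group's pmu_ctx->pmu is the hardware PMU, while a software group
leader's event->pmu still points at the software PMU, so cloning via
event->pmu selects the wrong transaction callbacks:

  struct pmu { const char *name; };
  struct perf_event_pmu_context { struct pmu *pmu; };

  struct perf_event {
          struct pmu *pmu;                        /* sw pmu for a sw leader */
          struct perf_event_pmu_context *pmu_ctx; /* hw pmu after move_group */
  };

  /* Which pmu the clone must follow when inheriting 'parent_event'. */
  static struct pmu *pmu_for_clone(struct perf_event *parent_event)
  {
          /* buggy: 'return parent_event->pmu;' undoes the move_group */
          return parent_event->pmu_ctx->pmu;
  }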
Fixes: bd2756811766 ("perf: Rewrite core context handling")
Reported-by: Oliver Rosenberg
Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Ian Rogers
Link: https://patch.msgid.link/20260309133713.GB606826@noisy.programming.kicks-ass.net
---
 kernel/events/core.c | 19 ++++++++-----------
 1 file changed, 8 insertions(+), 11 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 1f5699b339ec..89b40e439717 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -4813,7 +4813,7 @@ static void __perf_event_read(void *info)
 	struct perf_event *sub, *event = data->event;
 	struct perf_event_context *ctx = event->ctx;
 	struct perf_cpu_context *cpuctx = this_cpu_ptr(&perf_cpu_context);
-	struct pmu *pmu = event->pmu;
+	struct pmu *pmu;
 
 	/*
 	 * If this is a task context, we need to check whether it is
@@ -4825,7 +4825,7 @@ static void __perf_event_read(void *info)
 	if (ctx->task && cpuctx->task_ctx != ctx)
 		return;
 
-	raw_spin_lock(&ctx->lock);
+	guard(raw_spinlock)(&ctx->lock);
 	ctx_time_update_event(ctx, event);
 
 	perf_event_update_time(event);
@@ -4833,25 +4833,22 @@
 		perf_event_update_sibling_time(event);
 
 	if (event->state != PERF_EVENT_STATE_ACTIVE)
-		goto unlock;
+		return;
 
 	if (!data->group) {
-		pmu->read(event);
+		perf_pmu_read(event);
 		data->ret = 0;
-		goto unlock;
+		return;
 	}
 
+	pmu = event->pmu_ctx->pmu;
 	pmu->start_txn(pmu, PERF_PMU_TXN_READ);
-	pmu->read(event);
-
+	perf_pmu_read(event);
 	for_each_sibling_event(sub, event)
 		perf_pmu_read(sub);
 
 	data->ret = pmu->commit_txn(pmu);
-
-unlock:
-	raw_spin_unlock(&ctx->lock);
 }
 
 static inline u64 perf_event_count(struct perf_event *event, bool self)
@@ -14744,7 +14741,7 @@ inherit_event(struct perf_event *parent_event,
 	get_ctx(child_ctx);
 	child_event->ctx = child_ctx;
 
-	pmu_ctx = find_get_pmu_context(child_event->pmu, child_ctx, child_event);
+	pmu_ctx = find_get_pmu_context(parent_event->pmu_ctx->pmu, child_ctx, child_event);
 	if (IS_ERR(pmu_ctx)) {
 		free_event(child_event);
 		return ERR_CAST(pmu_ctx);

From 1d07bbd7ea36ea0b8dfa8068dbe67eb3a32d9590 Mon Sep 17 00:00:00 2001
From: Dapeng Mi
Date: Sat, 28 Feb 2026 13:33:20 +0800
Subject: [PATCH 4/5] perf/x86/intel: Add missing branch counters constraint
 apply

When running the command

  perf record -e "{instructions,instructions:p}" -j any,counter sleep 1

a "shift-out-of-bounds" warning is reported on CWF:

  UBSAN: shift-out-of-bounds in /kbuild/src/consumer/arch/x86/events/intel/lbr.c:970:15
  shift exponent 64 is too large for 64-bit type 'long long unsigned int'
  ......
  intel_pmu_lbr_counters_reorder.isra.0.cold+0x2a/0xa7
  intel_pmu_lbr_save_brstack+0xc0/0x4c0
  setup_arch_pebs_sample_data+0x114b/0x2400

The warning occurs because the second "instructions:p" event, which
involves branch counters sampling, is incorrectly programmed on fixed
counter 0 instead of the general-purpose (GP) counters 0-3. On CWF,
only GP counters 0-3 support branch counters sampling, so any event
involving branch counters sampling must be programmed on one of them.
Since the counter index of fixed counter 0 is 32, the "src" value in
the code below is right-shifted by 64 bits, which triggers the
"shift-out-of-bounds" warning:

  cnt = (src >> (order[j] * LBR_INFO_BR_CNTR_BITS)) & LBR_INFO_BR_CNTR_MASK;

The root cause is that the branch counters constraint is never applied
to the new event in a branch counters sampling event group, because
that event is not yet part of the sibling list when the group is
walked (see the sketch below).
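A stripped-down model of the gap (illustrative types and names, not
the kernel code) -- the walk covers the leader and the already
installed siblings, so the event currently being configured needs an
explicit step of its own:

  struct event {
          struct event *next_sibling;
          unsigned long dyn_constraint;
  };

  static void apply_branch_cntr_constraint(struct event *leader,
                                           struct event *new_event,
                                           unsigned long gp_counter_mask)
  {
          struct event *sibling;

          leader->dyn_constraint &= gp_counter_mask;
          for (sibling = leader->next_sibling; sibling; sibling = sibling->next_sibling)
                  sibling->dyn_constraint &= gp_counter_mask;

          /* new_event is not on the sibling list yet; without this
           * step it keeps its default constraint -- the missing apply. */
          if (new_event != leader)
                  new_event->dyn_constraint &= gp_counter_mask;
  }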
Without that last step, the second "instructions:p" event keeps its
default constraint and is scheduled on fixed counter 0 instead of the
appropriate GP counters 0-3.

To address this, we apply the missing branch counters constraint for
the last event in the group. Additionally, we introduce a new helper,
intel_set_branch_counter_constr(), to apply the branch counters
constraint and avoid code duplication.

Fixes: 33744916196b ("perf/x86/intel: Support branch counters logging")
Reported-by: Xudong Hao
Signed-off-by: Dapeng Mi
Signed-off-by: Peter Zijlstra (Intel)
Link: https://patch.msgid.link/20260228053320.140406-2-dapeng1.mi@linux.intel.com
Cc: stable@vger.kernel.org
---
 arch/x86/events/intel/core.c | 31 +++++++++++++++++++++----------
 1 file changed, 21 insertions(+), 10 deletions(-)

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index cf3a4fe06ff2..36c68210d4d2 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -4628,6 +4628,19 @@ static inline void intel_pmu_set_acr_caused_constr(struct perf_event *event,
 	event->hw.dyn_constraint &= hybrid(event->pmu, acr_cause_mask64);
 }
 
+static inline int intel_set_branch_counter_constr(struct perf_event *event,
+						  int *num)
+{
+	if (branch_sample_call_stack(event))
+		return -EINVAL;
+	if (branch_sample_counters(event)) {
+		(*num)++;
+		event->hw.dyn_constraint &= x86_pmu.lbr_counters;
+	}
+
+	return 0;
+}
+
 static int intel_pmu_hw_config(struct perf_event *event)
 {
 	int ret = x86_pmu_hw_config(event);
@@ -4698,21 +4711,19 @@ static int intel_pmu_hw_config(struct perf_event *event)
 	 * group, which requires the extra space to store the counters.
 	 */
 	leader = event->group_leader;
-	if (branch_sample_call_stack(leader))
+	if (intel_set_branch_counter_constr(leader, &num))
 		return -EINVAL;
-	if (branch_sample_counters(leader)) {
-		num++;
-		leader->hw.dyn_constraint &= x86_pmu.lbr_counters;
-	}
 	leader->hw.flags |= PERF_X86_EVENT_BRANCH_COUNTERS;
 
 	for_each_sibling_event(sibling, leader) {
-		if (branch_sample_call_stack(sibling))
+		if (intel_set_branch_counter_constr(sibling, &num))
+			return -EINVAL;
+	}
+
+	/* event isn't installed as a sibling yet. */
+	if (event != leader) {
+		if (intel_set_branch_counter_constr(event, &num))
 			return -EINVAL;
-		if (branch_sample_counters(sibling)) {
-			num++;
-			sibling->hw.dyn_constraint &= x86_pmu.lbr_counters;
-		}
 	}
 
 	if (num > fls(x86_pmu.lbr_counters))

From e7fcc54524f04e42641de99028edd9c69dc19f8c Mon Sep 17 00:00:00 2001
From: Dapeng Mi
Date: Wed, 11 Mar 2026 15:52:00 +0800
Subject: [PATCH 5/5] perf/x86/intel: Fix OMR snoop information parsing issues

When omr_source is 0x2, the omr_snoop (bit[6]) and omr_promoted
(bit[7]) fields are combined to represent the snoop information.
However, the omr_promoted field was not left-shifted by 1 bit,
resulting in incorrect snoop information.

Besides, the snoop information parsing is not accurate for some OMR
sources; for example, it should be SNOOP_NONE instead of SNOOP_HIT for
memory accesses with omr_source >= 0x7.

Fix these issues.
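A worked example of the bit packing (values chosen for illustration):
with omr_snoop = 1 and omr_promoted = 1, the combined field should be
0x3 and map to SNOOP_NONE, but without the shift the code computes 0x1
and mis-reports SNOOP_MISS:

  u8 snoop;

  snoop = omr_snoop | omr_promoted;        /* 1 | 1 = 0x1 -> SNOOP_MISS (wrong) */
  snoop = omr_snoop | (omr_promoted << 1); /* 1 | 2 = 0x3 -> SNOOP_NONE (right) */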
Closes: https://lore.kernel.org/all/CAP-5=fW4zLWFw1v38zCzB9-cseNSTTCtup=p2SDxZq7dPayVww@mail.gmail.com/
Fixes: d2bdcde9626c ("perf/x86/intel: Add support for PEBS memory auxiliary info field in DMR")
Reported-by: Ian Rogers
Signed-off-by: Dapeng Mi
Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Ian Rogers
Link: https://patch.msgid.link/20260311075201.2951073-1-dapeng1.mi@linux.intel.com
---
 arch/x86/events/intel/ds.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index 5027afc97b65..7f0d515c07c5 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -345,12 +345,12 @@ static u64 parse_omr_data_source(u8 dse)
 	if (omr.omr_remote)
 		val |= REM;
 
-	val |= omr.omr_hitm ? P(SNOOP, HITM) : P(SNOOP, HIT);
-
 	if (omr.omr_source == 0x2) {
-		u8 snoop = omr.omr_snoop | omr.omr_promoted;
+		u8 snoop = omr.omr_snoop | (omr.omr_promoted << 1);
 
-		if (snoop == 0x0)
+		if (omr.omr_hitm)
+			val |= P(SNOOP, HITM);
+		else if (snoop == 0x0)
 			val |= P(SNOOP, NA);
 		else if (snoop == 0x1)
 			val |= P(SNOOP, MISS);
@@ -359,7 +359,10 @@
 		else if (snoop == 0x3)
 			val |= P(SNOOP, NONE);
 	} else if (omr.omr_source > 0x2 && omr.omr_source < 0x7) {
+		val |= omr.omr_hitm ? P(SNOOP, HITM) : P(SNOOP, HIT);
 		val |= omr.omr_snoop ? P(SNOOPX, FWD) : 0;
+	} else {
+		val |= P(SNOOP, NONE);
 	}
 
 	return val;
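
For a quick sanity check of the new mapping outside the kernel, the
decode logic can be modeled in userspace (a hypothetical, simplified
model: the P(SNOOP, x) flag macros are replaced by an enum, field
extraction from dse is skipped, and the SNOOPX_FWD detail is omitted):

  #include <assert.h>
  #include <stdint.h>

  enum snoop { SNOOP_NA, SNOOP_MISS, SNOOP_HIT, SNOOP_HITM, SNOOP_NONE };

  static enum snoop decode_snoop(uint8_t source, uint8_t hitm,
                                 uint8_t snoop_bit, uint8_t promoted)
  {
          if (source == 0x2) {
                  uint8_t snoop = snoop_bit | (promoted << 1);

                  if (hitm)
                          return SNOOP_HITM;
                  if (snoop == 0x0)
                          return SNOOP_NA;
                  if (snoop == 0x1)
                          return SNOOP_MISS;
                  if (snoop == 0x2)
                          return SNOOP_HIT;
                  return SNOOP_NONE;
          }
          if (source > 0x2 && source < 0x7)
                  return hitm ? SNOOP_HITM : SNOOP_HIT;
          return SNOOP_NONE;      /* omr_source < 0x2 or >= 0x7 */
  }

  int main(void)
  {
          assert(decode_snoop(0x2, 0, 1, 1) == SNOOP_NONE); /* was SNOOP_MISS */
          assert(decode_snoop(0x8, 0, 0, 0) == SNOOP_NONE); /* was SNOOP_HIT  */
          return 0;
  }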