Revert "arch: arm: cortex_m: Only trigger context switch if thread is preemptible"

This reverts commit 42036cdbca.

Architecture-specific code should not do preemption checking before a
context switch. This is already handled by the scheduler, so duplicating
it there would be redundant and error-prone. These checks used to be
necessary, but the scheduler has since been rewritten, and the checks
were removed in 3a0cb2d35d (kernel: Remove legacy preemption checking,
2018-05-23).
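
For context, here is a hypothetical, simplified sketch (plain C,
illustrative names only; the real logic lives in kernel/sched.c) of the
kind of preemptibility test the scheduler applies when it updates the
ready-queue cache, before exception exit ever runs:

    #include <stdbool.h>

    struct thread_model {
            int prio;            /* lower value = higher priority;
                                  * negative = cooperative, as in Zephyr
                                  */
            bool sched_locked;   /* k_sched_lock() in effect */
            bool is_metairq;     /* meta-IRQ threads preempt anything */
    };

    static bool sketch_should_preempt(const struct thread_model *current,
                                      const struct thread_model *candidate)
    {
            if (candidate->is_metairq) {
                    return true;
            }
            if (current->sched_locked || current->prio < 0) {
                    return false;
            }
            return candidate->prio < current->prio;
    }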

The check this reverts was also incorrect, as it took neither scheduler
locking nor meta-IRQs into account.
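
As a concrete illustration of that blind spot (reusing the hypothetical
thread_model sketch above): with the reverted check, a meta-IRQ thread
made ready from an ISR while a cooperative thread is running would never
get PendSV pended, even though the scheduler had already selected it.

    #include <stdio.h>

    /* The reverted (buggy) exception-exit test: it second-guesses the
     * scheduler by re-checking the current thread's priority.
     */
    static bool reverted_check(const struct thread_model *current,
                               const struct thread_model *cached)
    {
            return current->prio >= 0 && cached != current;
    }

    /* The restored behaviour: trust ready_q.cache unconditionally. */
    static bool restored_check(const struct thread_model *current,
                               const struct thread_model *cached)
    {
            return cached != current;
    }

    int main(void)
    {
            struct thread_model coop = { .prio = -1 };
            struct thread_model metairq = { .prio = -16, .is_metairq = true };

            printf("reverted: %d\n", reverted_check(&coop, &metairq)); /* 0: switch lost */
            printf("restored: %d\n", restored_check(&coop, &metairq)); /* 1: switch happens */
            return 0;
    }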

Fixes #80574

Signed-off-by: Kalle Kietäväinen <kalle.kietavainen@silabs.com>
(cherry picked from commit d929b8a9fa)
Branch: backport-84509-to-v4.0-branch
Author: Kalle Kietäväinen (5 months ago), committed by github-actions[bot]
Commit: 1c0fb77c01

Changed files: 1 (arch/arm/core/cortex_m/exc_exit.c, 9 changed lines)

arch/arm/core/cortex_m/exc_exit.c

@@ -55,13 +55,8 @@ FUNC_ALIAS(z_arm_exc_exit, z_arm_int_exit, void);
 Z_GENERIC_SECTION(.text._HandlerModeExit) void z_arm_exc_exit(void)
 {
 #ifdef CONFIG_PREEMPT_ENABLED
-        /* If thread is preemptible */
-        if (_kernel.cpus->current->base.prio >= 0) {
-                /* and cached thread is not current thread */
-                if (_kernel.ready_q.cache != _kernel.cpus->current) {
-                        /* trigger a context switch */
-                        SCB->ICSR |= SCB_ICSR_PENDSVSET_Msk;
-                }
+        if (_kernel.ready_q.cache != _kernel.cpus->current) {
+                SCB->ICSR |= SCB_ICSR_PENDSVSET_Msk;
         }
 #endif /* CONFIG_PREEMPT_ENABLED */
