commit 37f8173dd8
Currently, instrumentation of atomic primitives is done at the architecture
level, while composites or fallbacks are provided at the generic level. The
result is that there are no uninstrumented variants of the fallbacks. Since
there is now a need for such variants to isolate text poke from any form of
instrumentation, invert this ordering.

Doing this means moving the instrumentation into the generic code as well
as having (for now) two variants of the fallbacks.

Notes:

 - the various *cond_read* primitives are not proper fallbacks and got
   moved into linux/atomic.c. No arch_ variants are generated because the
   base primitives smp_cond_load*() are instrumented.

 - once all architectures are moved over to arch_atomic_, one of the
   fallback variants can be removed and some 2300 lines reclaimed.

 - atomic_{read,set}*() are no longer double-instrumented.

Reported-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lkml.kernel.org/r/20200505134058.769149955@linutronix.de
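For context, the inverted ordering means the instrumented wrapper now lives
at the generic level and delegates to the uninstrumented arch_ primitive.
A minimal sketch of the shape of such a wrapper (simplified; the real header
is generated by scripts/atomic/gen-atomic-instrumented.sh, and the exact
instrument_* annotation used here is an assumption, not quoted from the
generated code):

#include <linux/instrumented.h>

/*
 * Sketch: the generic wrapper carries the instrumentation, so the
 * arch_ primitive it calls stays uninstrumented and remains usable
 * from code that must avoid instrumentation (e.g. text poke).
 */
static __always_inline int
atomic_fetch_add_unless(atomic_t *v, int a, int u)
{
	instrument_atomic_write(v, sizeof(*v));	/* assumed annotation */
	return arch_atomic_fetch_add_unless(v, a, u);
}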
cat << EOF
/**
 * ${arch}${atomic}_fetch_add_unless - add unless the number is already a given value
 * @v: pointer of type ${atomic}_t
 * @a: the amount to add to v...
 * @u: ...unless v is equal to u.
 *
 * Atomically adds @a to @v, so long as @v was not already @u.
 * Returns original value of @v
 */
static __always_inline ${int}
${arch}${atomic}_fetch_add_unless(${atomic}_t *v, ${int} a, ${int} u)
{
	${int} c = ${arch}${atomic}_read(v);

	do {
		if (unlikely(c == u))
			break;
	} while (!${arch}${atomic}_try_cmpxchg(v, &c, c + a));

	return c;
}
EOF
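To make the template concrete: with the variables hypothetically expanded as
${arch}=arch_, ${atomic}=atomic and ${int}=int (an illustrative instantiation,
not quoted from the generated headers), the heredoc above emits:

static __always_inline int
arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
{
	int c = arch_atomic_read(v);

	do {
		if (unlikely(c == u))
			break;
	} while (!arch_atomic_try_cmpxchg(v, &c, c + a));

	return c;
}

Note that try_cmpxchg() reloads the observed value into c when the
compare-and-exchange fails, so each retry re-checks c == u against fresh
data without issuing a second explicit read.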