locking/mutexes: Optimize mutex trylock slowpath
The mutex_trylock() function calls into __mutex_trylock_fastpath() when
trying to obtain the mutex. On 32 bit x86, in the !__HAVE_ARCH_CMPXCHG
case, __mutex_trylock_fastpath() calls directly into
__mutex_trylock_slowpath() regardless of whether or not the mutex is
locked.

In __mutex_trylock_slowpath(), we then acquire the wait_lock spinlock,
xchg() lock->count with -1, then set lock->count back to 0 if there are
no waiters, and return true if the prev lock count was 1.

However, if the mutex is already locked, then there isn't much point in
attempting all of the above expensive operations. In this patch, we only
attempt the above trylock operations if the mutex is unlocked.

Signed-off-by: Jason Low <jason.low2@hp.com>
Reviewed-by: Davidlohr Bueso <davidlohr@hp.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: akpm@linux-foundation.org
Cc: tim.c.chen@linux.intel.com
Cc: paulmck@linux.vnet.ibm.com
Cc: rostedt@goodmis.org
Cc: Waiman.Long@hp.com
Cc: scott.norton@hp.com
Cc: aswin@hp.com
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/1402511843-4721-5-git-send-email-jason.low2@hp.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
commit 72d5305dcb
parent 0d968dd8c6
@@ -820,6 +820,10 @@ static inline int __mutex_trylock_slowpath(atomic_t *lock_count)
         unsigned long flags;
         int prev;
 
+        /* No need to trylock if the mutex is locked. */
+        if (mutex_is_locked(lock))
+                return 0;
+
         spin_lock_mutex(&lock->wait_lock, flags);
 
         prev = atomic_xchg(&lock->count, -1);
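For context, the hunk above only shows the early-return being added. A sketch of how the whole trylock slowpath reads with this change applied, reconstructed from the commit message rather than copied from the tree, is below; the debug and lockdep helpers (mutex_set_owner(), mutex_acquire()) and exact annotations are assumed from the mutex code of that era.

static inline int __mutex_trylock_slowpath(atomic_t *lock_count)
{
        struct mutex *lock = container_of(lock_count, struct mutex, count);
        unsigned long flags;
        int prev;

        /* No need to trylock if the mutex is locked. */
        if (mutex_is_locked(lock))
                return 0;

        spin_lock_mutex(&lock->wait_lock, flags);

        /* Returns the previous count: 1 means the mutex was free. */
        prev = atomic_xchg(&lock->count, -1);
        if (likely(prev == 1)) {
                mutex_set_owner(lock);
                mutex_acquire(&lock->dep_map, 0, 1, _RET_IP_);
        }

        /* Set it back to 0 if there are no waiters: */
        if (likely(list_empty(&lock->wait_list)))
                atomic_set(&lock->count, 0);

        spin_unlock_mutex(&lock->wait_lock, flags);

        return prev == 1;
}

The new mutex_is_locked() check means the wait_lock acquisition and the atomic_xchg() are only attempted when the trylock still has a chance of succeeding; if the mutex is already held, the function bails out immediately with 0.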