locks: fix a potential use-after-free problem when waking up a waiter

Commit 16306a61d3b7 ("fs/locks: always delete_block after waiting.") added
logic that checks waiter->fl_blocker without holding blocked_lock_lock. This
can trigger a use-after-free when we try to wake up a waiter:

Thread 1 has created a write flock a on a file; now thread 2 tries to unlock
and delete flock a, while thread 3 tries to add flock b on the same file (a
rough userspace sketch of this pattern follows the race diagram below).

Thread 2                                Thread 3
                                        flock syscall (create flock b)
                                        ...flock_lock_inode_wait
                                            flock_lock_inode (inserts b's
                                            fl_blocked_member into flock a's
                                            fl_blocked_requests list)
                                            sleep
flock syscall (unlock)
...flock_lock_inode_wait
    locks_delete_lock_ctx
    ...__locks_wake_up_blocks
        __locks_delete_block(b)
            (sets b->fl_blocker = NULL)
        ...
                                        woken by a signal
                                        locks_delete_block
                                            b->fl_blocker == NULL &&
                                            list_empty(&b->fl_blocked_requests)
                                            -> return directly, no lock taken
                                        locks_free_lock(b)
        wake_up(&b->fl_wait)
        trigger UAF
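
For illustration, here is a rough userspace sketch of the racing pattern
described above. It is hypothetical and not a reliable reproducer, since
actually hitting the freed-lock window depends on kernel-internal timing;
the path /tmp/flock-race and the use of SIGUSR1 are arbitrary choices.

#include <fcntl.h>
#include <pthread.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/file.h>
#include <unistd.h>

static int fd_a, fd_b;

static void on_sig(int sig) { (void)sig; }  /* handler exists only to interrupt flock() */

static void *blocker(void *arg)
{
        (void)arg;
        /* "flock b": blocks behind flock a, later woken by the signal */
        if (flock(fd_b, LOCK_EX) == -1)
                perror("flock b");          /* EINTR takes the locks_delete_block() path */
        return NULL;
}

int main(void)
{
        struct sigaction sa;
        pthread_t t;

        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = on_sig;             /* no SA_RESTART, so flock() can return EINTR */
        sigaction(SIGUSR1, &sa, NULL);

        fd_a = open("/tmp/flock-race", O_CREAT | O_RDWR, 0600);
        fd_b = open("/tmp/flock-race", O_RDWR);

        flock(fd_a, LOCK_EX);               /* "flock a" held exclusively */
        pthread_create(&t, NULL, blocker, NULL);
        usleep(100 * 1000);                 /* give flock b time to queue up and sleep */

        pthread_kill(t, SIGUSR1);           /* "break by a signal" */
        flock(fd_a, LOCK_UN);               /* the unlock path races with the signalled waiter */

        pthread_join(t, NULL);
        return 0;
}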

Fix it by removing this lockless check; this patch may also fix CVE-2019-19769.

Cc: stable@vger.kernel.org
Fixes: 16306a61d3b7 ("fs/locks: always delete_block after waiting.")
Signed-off-by: yangerkun <yangerkun@huawei.com>
Signed-off-by: Jeff Layton <jlayton@kernel.org>
Author: yangerkun, 2020-03-04 15:25:56 +08:00 (committed by Jeff Layton)
parent 0a68ff5e2e
commit 6d390e4b5d


@@ -753,20 +753,6 @@ int locks_delete_block(struct file_lock *waiter)
 {
 	int status = -ENOENT;
 
-	/*
-	 * If fl_blocker is NULL, it won't be set again as this thread
-	 * "owns" the lock and is the only one that might try to claim
-	 * the lock. So it is safe to test fl_blocker locklessly.
-	 * Also if fl_blocker is NULL, this waiter is not listed on
-	 * fl_blocked_requests for some lock, so no other request can
-	 * be added to the list of fl_blocked_requests for this
-	 * request. So if fl_blocker is NULL, it is safe to
-	 * locklessly check if fl_blocked_requests is empty. If both
-	 * of these checks succeed, there is no need to take the lock.
-	 */
-	if (waiter->fl_blocker == NULL &&
-	    list_empty(&waiter->fl_blocked_requests))
-		return status;
 	spin_lock(&blocked_lock_lock);
 	if (waiter->fl_blocker)
 		status = 0;
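
With the lockless fast path removed, locks_delete_block() always takes
blocked_lock_lock before inspecting the waiter. The tail of the function lies
outside the hunk above, so the following is only a sketch of what it reduces
to, assuming the surrounding mainline code of that era:

int locks_delete_block(struct file_lock *waiter)
{
	int status = -ENOENT;

	spin_lock(&blocked_lock_lock);
	if (waiter->fl_blocker)			/* still queued behind another lock */
		status = 0;
	__locks_wake_up_blocks(waiter);		/* pass our own waiters on */
	__locks_delete_block(waiter);		/* clears fl_blocker under the lock */
	spin_unlock(&blocked_lock_lock);
	return status;
}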