dm thin metadata: THIN_MAX_CONCURRENT_LOCKS should be 6
For btree removal, there is a corner case in which a single thread can take 6 locks, which is more than THIN_MAX_CONCURRENT_LOCKS (5) and leads to deadlock.

A btree removal might eventually call rebalance_children() -> rebalance3() to rebalance the entries of three neighbouring child nodes when the shadow_spine has already acquired two write locks. In rebalance3(), it tries to shadow and acquire the write locks of all three child nodes. However, shadowing a child node requires acquiring a read lock on the original child node and a write lock on the new block. Although the read lock is released once the block has been shadowed, shadowing the third child node in rebalance3() can still take a sixth lock (2 write locks for the shadow_spine + 2 write locks for the shadows of the first two child nodes + 1 write lock for the shadow of the last child node + 1 read lock on the last child node).

Cc: stable@vger.kernel.org
Signed-off-by: Dennis Yang <dennisyang@qnap.com>
Acked-by: Joe Thornber <thornber@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
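To make the worst-case accounting above easier to check, here is a minimal, purely illustrative C sketch. The enum names are invented for this example and are not kernel identifiers; only the breakdown and the total of 6 come from the commit message.

/* Illustrative only: tallies the peak lock count described in the commit
 * message for a btree removal that reaches rebalance3(). */
#include <assert.h>
#include <stdio.h>

#define THIN_MAX_CONCURRENT_LOCKS 6   /* value after this patch */

enum {
    SPINE_WRITE_LOCKS     = 2, /* shadow_spine already holds these */
    CHILD_SHADOW_WRITES   = 3, /* one write lock per shadowed child node */
    THIRD_CHILD_READ_LOCK = 1, /* read lock on the original third child,
                                * still held while that child is shadowed */
};

int main(void)
{
    int peak = SPINE_WRITE_LOCKS + CHILD_SHADOW_WRITES + THIRD_CHILD_READ_LOCK;

    printf("peak concurrent locks in rebalance3(): %d\n", peak);
    assert(peak <= THIN_MAX_CONCURRENT_LOCKS); /* 6 <= 6, but 6 > the old limit of 5 */
    return 0;
}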
commit 490ae017f5
parent 1291a0d504
@@ -80,10 +80,14 @@
 #define SECTOR_TO_BLOCK_SHIFT 3
 
 /*
- * 3 for btree insert +
- * 2 for btree lookup used within space map
+ * For btree insert:
+ *  3 for btree insert +
+ *  2 for btree lookup used within space map
+ * For btree remove:
+ *  2 for shadow spine +
+ *  4 for rebalance 3 child node
  */
-#define THIN_MAX_CONCURRENT_LOCKS 5
+#define THIN_MAX_CONCURRENT_LOCKS 6
 
 /* This should be plenty */
 #define SPACE_MAP_ROOT_SIZE 128
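As a toy demonstration of the failure mode (not kernel code; the cap and the lock sequence below only model what the commit message describes), the following sketch shows that the sixth lock does not fit under the old per-thread limit of 5 but does under 6.

/* Toy model, not kernel code: a per-thread cap on concurrently held block
 * locks, exercised with the lock sequence that btree removal reaches in
 * rebalance3() according to the commit message. */
#include <stdbool.h>
#include <stdio.h>

struct lock_budget {
    int cap;  /* models THIN_MAX_CONCURRENT_LOCKS */
    int held; /* locks currently held by the thread */
};

static bool take_lock(struct lock_budget *b, const char *what)
{
    if (b->held >= b->cap) {
        printf("cap %d: BLOCKED taking %s (would be lock #%d)\n",
               b->cap, what, b->held + 1);
        return false;
    }
    b->held++;
    printf("cap %d: took %s (%d held)\n", b->cap, what, b->held);
    return true;
}

static void simulate(int cap)
{
    struct lock_budget b = { .cap = cap, .held = 0 };

    /* Transient read locks taken while shadowing children 1 and 2 have
     * already been released, so they are omitted here. */
    take_lock(&b, "shadow spine write lock #1");
    take_lock(&b, "shadow spine write lock #2");
    take_lock(&b, "write lock on shadow of child 1");
    take_lock(&b, "write lock on shadow of child 2");
    take_lock(&b, "read lock on original child 3");
    take_lock(&b, "write lock on shadow of child 3"); /* the sixth lock */
}

int main(void)
{
    simulate(5); /* old limit: the sixth lock cannot be granted */
    simulate(6); /* new limit: all six locks fit */
    return 0;
}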