kernel_optimize_test/drivers/md
Jonathan Brassow b80aa7a0c2 dm raid1: fix EIO after log failure
This patch adds the ability to requeue write I/O to
core device-mapper when there is a log device failure.

If a write to the log produces an error, the pending writes are
put on the "failures" list.  Since the log is marked as failed,
they will stay on the failures list until a suspend happens.
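
As a rough illustration of that holding path (this is a sketch, not the
actual dm-raid1.c code; the structure layout and the helper name
hold_failed_write() are assumptions), a failed write is parked on a
per-mirror-set bio list and the mirror thread is woken, instead of the
bio being completed with -EIO:

/*
 * Sketch only: field and helper names are assumptions, not the real
 * dm-raid1.c definitions.  The bio_list helpers come from this tree's
 * dm-bio-list.h.
 */
#include <linux/bio.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

#include "dm-bio-list.h"

struct mirror_set_sketch {
        spinlock_t lock;
        struct bio_list failures;       /* writes held after a log failure */
        struct workqueue_struct *kmirrord_wq;
        struct work_struct kmirrord_work;
};

static void hold_failed_write(struct mirror_set_sketch *ms, struct bio *bio)
{
        unsigned long flags;

        /* Park the bio; it stays here until a suspend happens. */
        spin_lock_irqsave(&ms->lock, flags);
        bio_list_add(&ms->failures, bio);
        spin_unlock_irqrestore(&ms->lock, flags);

        /* Wake the mirror thread; it decides later what to do with it. */
        queue_work(ms->kmirrord_wq, &ms->kmirrord_work);
}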

Suspends come in two phases, presuspend and postsuspend.  We must
make sure that all the writes on the failures list are requeued
in the presuspend phase (a requirement of dm core).  This means
that recovery must be complete (because writes may be delayed
behind it) and the failures list must be requeued before we
return from presuspend.

The mechanism to ensure recovery is complete (or stopped) was
already in place, but needed to be moved from postsuspend to
presuspend.  We rely on 'flush_workqueue' to ensure that the
mirror thread has finished and has therefore requeued all writes
on the failures list.
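
A minimal sketch of that presuspend ordering, reusing the structure
from the earlier sketch (mirror_presuspend_sketch() and the recovery
placeholders are assumptions, not the driver's real functions):

/* Placeholders for the driver's recovery shutdown; bodies omitted. */
void stop_recovery_sketch(struct mirror_set_sketch *ms);
void wait_for_recovery_sketch(struct mirror_set_sketch *ms);

static void mirror_presuspend_sketch(struct mirror_set_sketch *ms)
{
        /*
         * Recovery must be complete or stopped before anything else,
         * because writes may be delayed behind recovery of their region.
         */
        stop_recovery_sketch(ms);
        wait_for_recovery_sketch(ms);

        /*
         * flush_workqueue() returns only after the mirror thread has run
         * through its queued work.  While it is holding failed writes it
         * keeps re-queueing itself (see the sketch after the caller list
         * below), so by the time this returns every write on the
         * failures list has been requeued to core device-mapper.
         */
        flush_workqueue(ms->kmirrord_wq);
}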

Because we are using flush_workqueue, we must ensure that no
additional 'queue_work' calls will produce additional I/O
that we need to requeue (because once we return from
presuspend, we are unable to do anything about it).  'queue_work'
is called in response to the following functions:
- complete_resync_work = NA, recovery is stopped
- rh_dec (mirror_end_io) = NA, only calls 'queue_work' if it
                           is ready to recover the region
                           (recovery is stopped) or it needs
                           to clear the region in the log*
                           **this doesn't get called while
                           suspending**
- rh_recovery_end = NA, recovery is stopped
- rh_recovery_start = NA, recovery is stopped
- write_callback = 1) Writes w/o failures simply call
                   bio_endio -> mirror_end_io -> rh_dec
                   (see rh_dec above)
                   2) Writes with failures are put on
                   the failures list and queue_work is
                   called**
                   ** write_callbacks don't happen
                   during suspend **
- do_failures = NA, 'queue_work' not called if suspending
                (see the sketch following this list)
- add_mirror (initialization) = NA, only done on mirror creation
- queue_bio = NA, 1) delayed I/O scheduled before flush_workqueue
              is called.  2) No more I/Os are being issued.
              3) Re-attempted READs can still be handled.
              (Write completions are handled through rh_dec/
               write_callback - mentioned above - and do not
              use queue_bio.)
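
The requeue path the do_failures entry refers to can be sketched as
follows.  This is an approximation, not the patch's code: the function
name and the suspend_in_progress() check are placeholders, while
bio_endio() with DM_ENDIO_REQUEUE is device-mapper's way for a target
to hand an I/O back to the core for reissue.

#include "dm.h"         /* for DM_ENDIO_REQUEUE (local header in this tree) */

/*
 * Placeholder: how the worker learns that a suspend is in progress is
 * outside this sketch (in the real driver it ties into dm core's
 * suspend handling).
 */
int suspend_in_progress(struct mirror_set_sketch *ms);

static void do_failures_sketch(struct mirror_set_sketch *ms)
{
        struct bio_list failures;
        struct bio *bio;

        /* Take a private snapshot of the held writes. */
        spin_lock_irq(&ms->lock);
        failures = ms->failures;
        bio_list_init(&ms->failures);
        spin_unlock_irq(&ms->lock);

        if (!failures.head)
                return;

        if (!suspend_in_progress(ms)) {
                /*
                 * Not suspending: keep holding the writes and re-wake
                 * ourselves, so flush_workqueue() always has pending work
                 * to wait on while failures are outstanding.  Note that
                 * 'queue_work' is not called in the suspending case.
                 */
                spin_lock_irq(&ms->lock);
                bio_list_merge(&ms->failures, &failures);
                spin_unlock_irq(&ms->lock);
                queue_work(ms->kmirrord_wq, &ms->kmirrord_work);
                return;
        }

        /*
         * Suspending: requeue every held write to core device-mapper so
         * it is reissued once the mirror has been reconfigured, instead
         * of failing it with -EIO.
         */
        while ((bio = bio_list_pop(&failures)))
                bio_endio(bio, DM_ENDIO_REQUEUE);
}

Handing the bios back with DM_ENDIO_REQUEUE rather than -EIO is what
lets userspace reload the table with a working log and have the writes
reissued transparently, which is the EIO problem named in the subject.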

Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
2008-02-08 02:11:35 +00:00
raid6test md: raid6: clean up the style of raid6test/test.c 2008-02-06 10:41:18 -08:00
.gitignore
bitmap.c md: change ITERATE_RDEV to rdev_for_each 2008-02-06 10:41:19 -08:00
dm-bio-list.h dm: bio_list macro renaming 2007-10-20 02:01:11 +01:00
dm-bio-record.h
dm-crypt.c dm crypt: use async crypto 2008-02-08 02:11:14 +00:00
dm-delay.c dm: bio_list macro renaming 2007-10-20 02:01:11 +01:00
dm-emc.c dm mpath: emc fix an error message 2007-10-20 02:01:12 +01:00
dm-exception-store.c dm snapshot: use uninitialized_var 2008-02-08 02:10:11 +00:00
dm-hw-handler.c dm: use kzalloc 2007-10-20 02:01:07 +01:00
dm-hw-handler.h dm mpath: add retry pg init 2007-10-20 02:01:18 +01:00
dm-io.c Drop 'size' argument from bio_endio and bi_end_io 2007-10-10 09:25:57 +02:00
dm-io.h
dm-ioctl.c dm ioctl: use uninitialized_var 2008-02-08 02:10:16 +00:00
dm-linear.c
dm-log.c dm log: auto load modules 2008-02-08 02:11:19 +00:00
dm-log.h dm log: split suspend 2007-10-20 02:01:21 +01:00
dm-mpath-hp-sw.c dm mpath: hp retry if not ready 2007-10-20 02:01:20 +01:00
dm-mpath-rdac.c dm mpath: rdac fix init race 2007-10-20 02:00:57 +01:00
dm-mpath.c dm mpath: add missing static 2008-02-08 02:10:35 +00:00
dm-mpath.h
dm-path-selector.c dm: use kzalloc 2007-10-20 02:01:07 +01:00
dm-path-selector.h
dm-raid1.c dm raid1: fix EIO after log failure 2008-02-08 02:11:35 +00:00
dm-round-robin.c
dm-snap.c dm snapshot: combine consecutive exceptions in memory 2008-02-08 02:11:27 +00:00
dm-snap.h dm snapshot: combine consecutive exceptions in memory 2008-02-08 02:11:27 +00:00
dm-stripe.c dm: stripe enhanced status return 2008-02-08 02:11:24 +00:00
dm-table.c dm: table use uninitialized_var 2008-02-08 02:10:14 +00:00
dm-target.c dm: use kzalloc 2007-10-20 02:01:07 +01:00
dm-uevent.c dm: uevent generate events 2007-10-20 02:01:26 +01:00
dm-uevent.h dm: uevent generate events 2007-10-20 02:01:26 +01:00
dm-zero.c Drop 'size' argument from bio_endio and bi_end_io 2007-10-10 09:25:57 +02:00
dm.c dm: move deferred bio flushing to workqueue 2008-02-08 02:11:17 +00:00
dm.h dm: trigger change uevent on rename 2007-12-20 17:32:11 +00:00
faulty.c md: change ITERATE_RDEV to rdev_for_each 2008-02-06 10:41:19 -08:00
Kconfig dm: targets no longer experimental 2008-02-08 02:10:32 +00:00
kcopyd.c kcopyd use mutex instead of semaphore 2007-10-20 02:01:08 +01:00
kcopyd.h
linear.c md: change ITERATE_RDEV to rdev_for_each 2008-02-06 10:41:19 -08:00
Makefile dm: add uevent to core 2007-10-20 02:01:24 +01:00
md.c md: change ITERATE_RDEV_GENERIC to rdev_for_each_list, and remove ITERATE_RDEV_PENDING. 2008-02-06 10:41:19 -08:00
mktables.c md: raid6: Fix mktable.c 2008-02-06 10:41:18 -08:00
multipath.c md: change ITERATE_RDEV to rdev_for_each 2008-02-06 10:41:19 -08:00
raid0.c md: change ITERATE_RDEV to rdev_for_each 2008-02-06 10:41:19 -08:00
raid1.c md: change ITERATE_RDEV to rdev_for_each 2008-02-06 10:41:19 -08:00
raid5.c md: fix an occasional deadlock in raid5 2008-02-06 10:41:19 -08:00
raid6.h
raid6algos.c x86 merge fallout: uml 2007-10-29 07:41:32 -07:00
raid6altivec.uc
raid6int.uc
raid6mmx.c x86 merge fallout: uml 2007-10-29 07:41:32 -07:00
raid6recov.c
raid6sse1.c x86 merge fallout: uml 2007-10-29 07:41:32 -07:00
raid6sse2.c x86 merge fallout: uml 2007-10-29 07:41:32 -07:00
raid6x86.h x86 merge fallout: uml 2007-10-29 07:41:32 -07:00
raid10.c md: change ITERATE_RDEV to rdev_for_each 2008-02-06 10:41:19 -08:00
unroll.pl