slow-work: kill it

slow-work doesn't have any users left.  Kill it.

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: David Howells <dhowells@redhat.com>
This commit is contained in:
Tejun Heo 2010-07-20 22:09:02 +02:00
parent 6ecd7c2dd9
commit 181a51f6e0
8 changed files with 0 additions and 1886 deletions

@ -1,322 +0,0 @@
====================================
SLOW WORK ITEM EXECUTION THREAD POOL
====================================
By: David Howells <dhowells@redhat.com>
The slow work item execution thread pool is a pool of threads for performing
things that take a relatively long time, such as making mkdir calls.
Typically, when processing something, these items will spend a lot of time
blocking a thread on I/O, thus making that thread unavailable for doing other
work.
The standard workqueue model is unsuitable for this class of work item as that
limits the owner to a single thread or a single thread per CPU. For some
tasks, however, more threads - or fewer - are required.
There is just one pool per system. It contains no threads unless something
wants to use it - and that something must register its interest first. When
the pool is active, the number of threads it contains is dynamic, varying
between a maximum and minimum setting, depending on the load.
====================
CLASSES OF WORK ITEM
====================
This pool supports two classes of work items:
(*) Slow work items.
(*) Very slow work items.
The former are expected to finish much quicker than the latter.
An operation of the very slow class may, for instance, batch together several
lookups, mkdirs, and a create.
An operation of the ordinarily slow class may, for example, write data out or
expand files, provided the time taken to do so isn't too long.
Operations of both types may sleep during execution, thus tying up the thread
loaned to them.
A further class of work item is available, based on the slow work item class:
(*) Delayed slow work items.
These are slow work items that have a timer to defer queueing of the item for
a while.
THREAD-TO-CLASS ALLOCATION
--------------------------
Not all the threads in the pool are available to work on very slow work items.
The number will be between one and one fewer than the number of active threads.
This is configurable (see the "Pool Configuration" section).
All the threads are available to work on ordinarily slow work items, but a
percentage of the threads will prefer to work on very slow work items.
The configuration ensures that at least one thread will be available to work on
very slow work items, and at least one thread will be available that won't work
on very slow work items at all.
=====================
USING SLOW WORK ITEMS
=====================
Firstly, a module or subsystem wanting to make use of slow work items must
register its interest:
int ret = slow_work_register_user(struct module *module);
This will return 0 if successful, or a -ve error upon failure. The module
pointer should be the module interested in using this facility (almost
certainly THIS_MODULE).
Slow work items may then be set up by:
(1) Declaring a slow_work struct type variable:
#include <linux/slow-work.h>
struct slow_work myitem;
(2) Declaring the operations to be used for this item:
struct slow_work_ops myitem_ops = {
.get_ref = myitem_get_ref,
.put_ref = myitem_put_ref,
.execute = myitem_execute,
};
[*] For a description of the ops, see section "Item Operations".
(3) Initialising the item:
slow_work_init(&myitem, &myitem_ops);
or:
delayed_slow_work_init(&myitem, &myitem_ops);
or:
vslow_work_init(&myitem, &myitem_ops);
depending on its class.
A suitably set up work item can then be enqueued for processing:
int ret = slow_work_enqueue(&myitem);
This will return a -ve error if the thread pool is unable to gain a reference
on the item, 0 otherwise, or (for delayed work):
int ret = delayed_slow_work_enqueue(&myitem, my_jiffy_delay);
The items are reference counted, so there ought to be no need for a flush
operation. But as the reference counting is optional, means to cancel
existing work items are also included:
cancel_slow_work(&myitem);
cancel_delayed_slow_work(&myitem);
can be used to cancel pending work.  These cancel functions wait for existing
work to have been executed (or prevent execution of it, depending on timing).
When all a module's slow work items have been processed, and the
module has no further interest in the facility, it should unregister its
interest:
slow_work_unregister_user(struct module *module);
The module pointer is used to wait for all outstanding work items for that
module before completing the unregistration. This prevents the put_ref() code
from being taken away before it completes. module should almost certainly be
THIS_MODULE.
================
HELPER FUNCTIONS
================
The slow-work facility provides a function by which it can be determined
whether or not an item is queued for later execution:
bool queued = slow_work_is_queued(struct slow_work *work);
If it returns false, then the item is not on the queue (it may be executing
with a requeue pending). This can be used to work out whether an item on which
another depends is on the queue, thus allowing a dependent item to be queued
after it.
If the above shows an item on which another depends not to be queued, then the
owner of the dependent item might need to wait.  However, to avoid locking up
the threads unnecessarily by sleeping in them, it can make sense under some
circumstances to return the work item to the queue, thus deferring it until
some other items have had a chance to make use of the yielded thread.
To yield a thread and defer an item, the work function should simply enqueue
the work item again and return. However, this doesn't work if there's nothing
actually on the queue, as the thread just vacated will jump straight back into
the item's work function, thus busy waiting on a CPU.
Instead, the item should use the thread to wait for the dependency to go away,
but rather than using schedule() or schedule_timeout() to sleep, it should use
the following function:
bool requeue = slow_work_sleep_till_thread_needed(
struct slow_work *work,
signed long *_timeout);
This will add a second wait and then sleep, such that it will be woken up if
either something appears on the queue that could usefully make use of the
thread - and behind which this item can be queued - or if the event the caller
set up to wait for happens.  True will be returned if something else appeared
on the queue and this work function should perhaps return, or false if
something else woke it up.  The timeout is as for schedule_timeout().
For example:
wq = bit_waitqueue(&my_flags, MY_BIT);
init_wait(&wait);
requeue = false;
do {
prepare_to_wait(wq, &wait, TASK_UNINTERRUPTIBLE);
if (!test_bit(MY_BIT, &my_flags))
break;
requeue = slow_work_sleep_till_thread_needed(&my_work,
&timeout);
} while (timeout > 0 && !requeue);
finish_wait(wq, &wait);
if (!test_bit(MY_BIT, &my_flags))
goto do_my_thing;
if (requeue)
return; // to slow_work
===============
ITEM OPERATIONS
===============
Each work item requires a table of operations of type struct slow_work_ops.
Only ->execute() is required; the getting and putting of a reference and the
describing of an item are all optional.
(*) Get a reference on an item:
int (*get_ref)(struct slow_work *work);
This allows the thread pool to attempt to pin an item by getting a
reference on it. This function should return 0 if the reference was
granted, or a -ve error otherwise. If an error is returned,
slow_work_enqueue() will fail.
The reference is held whilst the item is queued and whilst it is being
executed. The item may then be requeued with the same reference held, or
the reference will be released.
(*) Release a reference on an item:
void (*put_ref)(struct slow_work *work);
This allows the thread pool to unpin an item by releasing the reference on
it. The thread pool will not touch the item again once this has been
called.
(*) Execute an item:
void (*execute)(struct slow_work *work);
This should perform the work required of the item. It may sleep, it may
perform disk I/O and it may wait for locks.
(*) View an item through /proc:
void (*desc)(struct slow_work *work, struct seq_file *m);
If supplied, this should print to 'm' a small string describing the work
the item is to do. This should be no more than about 40 characters, and
shouldn't include a newline character.
See the 'Viewing executing and queued items' section below.
==================
POOL CONFIGURATION
==================
The slow-work thread pool has a number of configurables:
(*) /proc/sys/kernel/slow-work/min-threads
The minimum number of threads that should be in the pool whilst it is in
use. This may be anywhere between 2 and max-threads.
(*) /proc/sys/kernel/slow-work/max-threads
The maximum number of threads that should be in the pool.  This may be
anywhere between min-threads and 255 or NR_CPUS * 2, whichever is greater.
(*) /proc/sys/kernel/slow-work/vslow-percentage
The percentage of active threads in the pool that may be used to execute
very slow work items. This may be between 1 and 99. The resultant number
is bounded to between 1 and one fewer than the number of active threads.
This ensures there is always at least one thread that can process very
slow work items, and always at least one thread that won't.
==================================
VIEWING EXECUTING AND QUEUED ITEMS
==================================
If CONFIG_SLOW_WORK_DEBUG is enabled, a debugfs file is made available:
/sys/kernel/debug/slow_work/runqueue
through which the list of work items being executed and the queues of items to
be executed may be viewed. The owner of a work item is given the chance to
add some information of its own.
The contents look something like the following:
THR PID ITEM ADDR FL MARK DESC
=== ===== ================ == ===== ==========
0 3005 ffff880023f52348 a 952ms FSC: OBJ17d3: LOOK
1 3006 ffff880024e33668 2 160ms FSC: OBJ17e5 OP60d3b: Write1/Store fl=2
2 3165 ffff8800296dd180 a 424ms FSC: OBJ17e4: LOOK
3 4089 ffff8800262c8d78 a 212ms FSC: OBJ17ea: CRTN
4 4090 ffff88002792bed8 2 388ms FSC: OBJ17e8 OP60d36: Write1/Store fl=2
5 4092 ffff88002a0ef308 2 388ms FSC: OBJ17e7 OP60d2e: Write1/Store fl=2
6 4094 ffff88002abaf4b8 2 132ms FSC: OBJ17e2 OP60d4e: Write1/Store fl=2
7 4095 ffff88002bb188e0 a 388ms FSC: OBJ17e9: CRTN
vsq - ffff880023d99668 1 308ms FSC: OBJ17e0 OP60f91: Write1/EnQ fl=2
vsq - ffff8800295d1740 1 212ms FSC: OBJ16be OP4d4b6: Write1/EnQ fl=2
vsq - ffff880025ba3308 1 160ms FSC: OBJ179a OP58dec: Write1/EnQ fl=2
vsq - ffff880024ec83e0 1 160ms FSC: OBJ17ae OP599f2: Write1/EnQ fl=2
vsq - ffff880026618e00 1 160ms FSC: OBJ17e6 OP60d33: Write1/EnQ fl=2
vsq - ffff880025a2a4b8 1 132ms FSC: OBJ16a2 OP4d583: Write1/EnQ fl=2
vsq - ffff880023cbe6d8 9 212ms FSC: OBJ17eb: LOOK
vsq - ffff880024d37590 9 212ms FSC: OBJ17ec: LOOK
vsq - ffff880027746cb0 9 212ms FSC: OBJ17ed: LOOK
vsq - ffff880024d37ae8 9 212ms FSC: OBJ17ee: LOOK
vsq - ffff880024d37cb0 9 212ms FSC: OBJ17ef: LOOK
vsq - ffff880025036550 9 212ms FSC: OBJ17f0: LOOK
vsq - ffff8800250368e0 9 212ms FSC: OBJ17f1: LOOK
vsq - ffff880025036aa8 9 212ms FSC: OBJ17f2: LOOK
In the 'THR' column, executing items show the thread they're occupying and
queued items indicate which queue they're on.  'PID' shows the process ID of
a slow-work thread that's executing something. 'FL' shows the work item flags.
'MARK' indicates how long since an item was queued or began executing. Lastly,
the 'DESC' column permits the owner of an item to give some information.

@ -1,163 +0,0 @@
/* Worker thread pool for slow items, such as filesystem lookups or mkdirs
*
* Copyright (C) 2008 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public Licence
* as published by the Free Software Foundation; either version
* 2 of the Licence, or (at your option) any later version.
*
* See Documentation/slow-work.txt
*/
#ifndef _LINUX_SLOW_WORK_H
#define _LINUX_SLOW_WORK_H
#ifdef CONFIG_SLOW_WORK
#include <linux/sysctl.h>
#include <linux/timer.h>
struct slow_work;
#ifdef CONFIG_SLOW_WORK_DEBUG
struct seq_file;
#endif
/*
* The operations used to support slow work items
*/
struct slow_work_ops {
/* owner */
struct module *owner;
/* get a ref on a work item
* - return 0 if successful, -ve if not
*/
int (*get_ref)(struct slow_work *work);
/* discard a ref to a work item */
void (*put_ref)(struct slow_work *work);
/* execute a work item */
void (*execute)(struct slow_work *work);
#ifdef CONFIG_SLOW_WORK_DEBUG
/* describe a work item for debugfs */
void (*desc)(struct slow_work *work, struct seq_file *m);
#endif
};
/*
* A slow work item
* - A reference is held on the parent object by the thread pool when it is
* queued
*/
struct slow_work {
struct module *owner; /* the owning module */
unsigned long flags;
#define SLOW_WORK_PENDING 0 /* item pending (further) execution */
#define SLOW_WORK_EXECUTING 1 /* item currently executing */
#define SLOW_WORK_ENQ_DEFERRED 2 /* item enqueue deferred */
#define SLOW_WORK_VERY_SLOW 3 /* item is very slow */
#define SLOW_WORK_CANCELLING 4 /* item is being cancelled, don't enqueue */
#define SLOW_WORK_DELAYED 5 /* item is struct delayed_slow_work with active timer */
const struct slow_work_ops *ops; /* operations table for this item */
struct list_head link; /* link in queue */
#ifdef CONFIG_SLOW_WORK_DEBUG
struct timespec mark; /* jiffies at which queued or exec begun */
#endif
};
struct delayed_slow_work {
struct slow_work work;
struct timer_list timer;
};
/**
* slow_work_init - Initialise a slow work item
* @work: The work item to initialise
* @ops: The operations to use to handle the slow work item
*
* Initialise a slow work item.
*/
static inline void slow_work_init(struct slow_work *work,
const struct slow_work_ops *ops)
{
work->flags = 0;
work->ops = ops;
INIT_LIST_HEAD(&work->link);
}
/**
* delayed_slow_work_init - Initialise a delayed slow work item
* @dwork: The delayed work item to initialise
* @ops: The operations to use to handle the slow work item
*
* Initialise a delayed slow work item.
*/
static inline void delayed_slow_work_init(struct delayed_slow_work *dwork,
const struct slow_work_ops *ops)
{
init_timer(&dwork->timer);
slow_work_init(&dwork->work, ops);
}
/**
* vslow_work_init - Initialise a very slow work item
* @work: The work item to initialise
* @ops: The operations to use to handle the slow work item
*
* Initialise a very slow work item. This item will be restricted such that
* only a certain number of the pool threads will be able to execute items of
* this type.
*/
static inline void vslow_work_init(struct slow_work *work,
const struct slow_work_ops *ops)
{
work->flags = 1 << SLOW_WORK_VERY_SLOW;
work->ops = ops;
INIT_LIST_HEAD(&work->link);
}
/**
* slow_work_is_queued - Determine if a slow work item is on the work queue
* @work: The work item to test
*
* Determine if the specified slow-work item is on the work queue. This
* returns true if it is actually on the queue.
*
* If the item is executing and has been marked for requeue when execution
* finishes, then false will be returned.
*
* Anyone wishing to wait for completion of execution can wait on the
* SLOW_WORK_EXECUTING bit.
*/
static inline bool slow_work_is_queued(struct slow_work *work)
{
unsigned long flags = work->flags;
return flags & SLOW_WORK_PENDING && !(flags & SLOW_WORK_EXECUTING);
}
extern int slow_work_enqueue(struct slow_work *work);
extern void slow_work_cancel(struct slow_work *work);
extern int slow_work_register_user(struct module *owner);
extern void slow_work_unregister_user(struct module *owner);
extern int delayed_slow_work_enqueue(struct delayed_slow_work *dwork,
unsigned long delay);
static inline void delayed_slow_work_cancel(struct delayed_slow_work *dwork)
{
slow_work_cancel(&dwork->work);
}
extern bool slow_work_sleep_till_thread_needed(struct slow_work *work,
signed long *_timeout);
#ifdef CONFIG_SYSCTL
extern ctl_table slow_work_sysctls[];
#endif
#endif /* CONFIG_SLOW_WORK */
#endif /* _LINUX_SLOW_WORK_H */

@ -1143,30 +1143,6 @@ config TRACEPOINTS
source "arch/Kconfig"
config SLOW_WORK
default n
bool
help
The slow work thread pool provides a number of dynamically allocated
threads that can be used by the kernel to perform operations that
take a relatively long time.
An example of this would be CacheFiles doing a path lookup followed
by a series of mkdirs and a create call, all of which have to touch
disk.
See Documentation/slow-work.txt.
config SLOW_WORK_DEBUG
bool "Slow work debugging through debugfs"
default n
depends on SLOW_WORK && DEBUG_FS
help
Display the contents of the slow work run queue through debugfs,
including items currently executing.
See Documentation/slow-work.txt.
endmenu # General setup
config HAVE_GENERIC_DMA_COHERENT

@ -99,8 +99,6 @@ obj-$(CONFIG_TRACING) += trace/
obj-$(CONFIG_X86_DS) += trace/
obj-$(CONFIG_RING_BUFFER) += trace/
obj-$(CONFIG_SMP) += sched_cpupri.o
obj-$(CONFIG_SLOW_WORK) += slow-work.o
obj-$(CONFIG_SLOW_WORK_DEBUG) += slow-work-debugfs.o
obj-$(CONFIG_PERF_EVENTS) += perf_event.o
obj-$(CONFIG_HAVE_HW_BREAKPOINT) += hw_breakpoint.o
obj-$(CONFIG_USER_RETURN_NOTIFIER) += user-return-notifier.o

@ -1,227 +0,0 @@
/* Slow work debugging
*
* Copyright (C) 2009 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public Licence
* as published by the Free Software Foundation; either version
* 2 of the Licence, or (at your option) any later version.
*/
#include <linux/module.h>
#include <linux/slow-work.h>
#include <linux/fs.h>
#include <linux/time.h>
#include <linux/seq_file.h>
#include "slow-work.h"
#define ITERATOR_SHIFT (BITS_PER_LONG - 4)
#define ITERATOR_SELECTOR (0xfUL << ITERATOR_SHIFT)
#define ITERATOR_COUNTER (~ITERATOR_SELECTOR)
void slow_work_new_thread_desc(struct slow_work *work, struct seq_file *m)
{
seq_puts(m, "Slow-work: New thread");
}
/*
* Render the time mark field on a work item into a 5-char time with units plus
* a space
*/
static void slow_work_print_mark(struct seq_file *m, struct slow_work *work)
{
struct timespec now, diff;
now = CURRENT_TIME;
diff = timespec_sub(now, work->mark);
if (diff.tv_sec < 0)
seq_puts(m, " -ve ");
else if (diff.tv_sec == 0 && diff.tv_nsec < 1000)
seq_printf(m, "%3luns ", diff.tv_nsec);
else if (diff.tv_sec == 0 && diff.tv_nsec < 1000000)
seq_printf(m, "%3luus ", diff.tv_nsec / 1000);
else if (diff.tv_sec == 0 && diff.tv_nsec < 1000000000)
seq_printf(m, "%3lums ", diff.tv_nsec / 1000000);
else if (diff.tv_sec <= 1)
seq_puts(m, " 1s ");
else if (diff.tv_sec < 60)
seq_printf(m, "%4lus ", diff.tv_sec);
else if (diff.tv_sec < 60 * 60)
seq_printf(m, "%4lum ", diff.tv_sec / 60);
else if (diff.tv_sec < 60 * 60 * 24)
seq_printf(m, "%4luh ", diff.tv_sec / 3600);
else
seq_puts(m, "exces ");
}
/*
* Describe a slow work item for debugfs
*/
static int slow_work_runqueue_show(struct seq_file *m, void *v)
{
struct slow_work *work;
struct list_head *p = v;
unsigned long id;
switch ((unsigned long) v) {
case 1:
seq_puts(m, "THR PID ITEM ADDR FL MARK DESC\n");
return 0;
case 2:
seq_puts(m, "=== ===== ================ == ===== ==========\n");
return 0;
case 3 ... 3 + SLOW_WORK_THREAD_LIMIT - 1:
id = (unsigned long) v - 3;
read_lock(&slow_work_execs_lock);
work = slow_work_execs[id];
if (work) {
smp_read_barrier_depends();
seq_printf(m, "%3lu %5d %16p %2lx ",
id, slow_work_pids[id], work, work->flags);
slow_work_print_mark(m, work);
if (work->ops->desc)
work->ops->desc(work, m);
seq_putc(m, '\n');
}
read_unlock(&slow_work_execs_lock);
return 0;
default:
work = list_entry(p, struct slow_work, link);
seq_printf(m, "%3s - %16p %2lx ",
work->flags & SLOW_WORK_VERY_SLOW ? "vsq" : "sq",
work, work->flags);
slow_work_print_mark(m, work);
if (work->ops->desc)
work->ops->desc(work, m);
seq_putc(m, '\n');
return 0;
}
}
/*
* map the iterator to a work item
*/
static void *slow_work_runqueue_index(struct seq_file *m, loff_t *_pos)
{
struct list_head *p;
unsigned long count, id;
switch (*_pos >> ITERATOR_SHIFT) {
case 0x0:
if (*_pos == 0)
*_pos = 1;
if (*_pos < 3)
return (void *)(unsigned long) *_pos;
if (*_pos < 3 + SLOW_WORK_THREAD_LIMIT)
for (id = *_pos - 3;
id < SLOW_WORK_THREAD_LIMIT;
id++, (*_pos)++)
if (slow_work_execs[id])
return (void *)(unsigned long) *_pos;
*_pos = 0x1UL << ITERATOR_SHIFT;
case 0x1:
count = *_pos & ITERATOR_COUNTER;
list_for_each(p, &slow_work_queue) {
if (count == 0)
return p;
count--;
}
*_pos = 0x2UL << ITERATOR_SHIFT;
case 0x2:
count = *_pos & ITERATOR_COUNTER;
list_for_each(p, &vslow_work_queue) {
if (count == 0)
return p;
count--;
}
*_pos = 0x3UL << ITERATOR_SHIFT;
default:
return NULL;
}
}
/*
* set up the iterator to start reading from the first line
*/
static void *slow_work_runqueue_start(struct seq_file *m, loff_t *_pos)
{
spin_lock_irq(&slow_work_queue_lock);
return slow_work_runqueue_index(m, _pos);
}
/*
* move to the next line
*/
static void *slow_work_runqueue_next(struct seq_file *m, void *v, loff_t *_pos)
{
struct list_head *p = v;
unsigned long selector = *_pos >> ITERATOR_SHIFT;
(*_pos)++;
switch (selector) {
case 0x0:
return slow_work_runqueue_index(m, _pos);
case 0x1:
if (*_pos >> ITERATOR_SHIFT == 0x1) {
p = p->next;
if (p != &slow_work_queue)
return p;
}
*_pos = 0x2UL << ITERATOR_SHIFT;
p = &vslow_work_queue;
case 0x2:
if (*_pos >> ITERATOR_SHIFT == 0x2) {
p = p->next;
if (p != &vslow_work_queue)
return p;
}
*_pos = 0x3UL << ITERATOR_SHIFT;
default:
return NULL;
}
}
/*
* clean up after reading
*/
static void slow_work_runqueue_stop(struct seq_file *m, void *v)
{
spin_unlock_irq(&slow_work_queue_lock);
}
static const struct seq_operations slow_work_runqueue_ops = {
.start = slow_work_runqueue_start,
.stop = slow_work_runqueue_stop,
.next = slow_work_runqueue_next,
.show = slow_work_runqueue_show,
};
/*
* open "/sys/kernel/debug/slow_work/runqueue" to list queue contents
*/
static int slow_work_runqueue_open(struct inode *inode, struct file *file)
{
return seq_open(file, &slow_work_runqueue_ops);
}
const struct file_operations slow_work_runqueue_fops = {
.owner = THIS_MODULE,
.open = slow_work_runqueue_open,
.read = seq_read,
.llseek = seq_lseek,
.release = seq_release,
};

File diff suppressed because it is too large

@ -1,72 +0,0 @@
/* Slow work private definitions
*
* Copyright (C) 2009 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public Licence
* as published by the Free Software Foundation; either version
* 2 of the Licence, or (at your option) any later version.
*/
#define SLOW_WORK_CULL_TIMEOUT (5 * HZ) /* cull threads 5s after running out of
* things to do */
#define SLOW_WORK_OOM_TIMEOUT (5 * HZ) /* can't start new threads for 5s after
* OOM */
#define SLOW_WORK_THREAD_LIMIT 255 /* abs maximum number of slow-work threads */
/*
* slow-work.c
*/
#ifdef CONFIG_SLOW_WORK_DEBUG
extern struct slow_work *slow_work_execs[];
extern pid_t slow_work_pids[];
extern rwlock_t slow_work_execs_lock;
#endif
extern struct list_head slow_work_queue;
extern struct list_head vslow_work_queue;
extern spinlock_t slow_work_queue_lock;
/*
* slow-work-debugfs.c
*/
#ifdef CONFIG_SLOW_WORK_DEBUG
extern const struct file_operations slow_work_runqueue_fops;
extern void slow_work_new_thread_desc(struct slow_work *, struct seq_file *);
#endif
/*
* Helper functions
*/
static inline void slow_work_set_thread_pid(int id, pid_t pid)
{
#ifdef CONFIG_SLOW_WORK_DEBUG
slow_work_pids[id] = pid;
#endif
}
static inline void slow_work_mark_time(struct slow_work *work)
{
#ifdef CONFIG_SLOW_WORK_DEBUG
work->mark = CURRENT_TIME;
#endif
}
static inline void slow_work_begin_exec(int id, struct slow_work *work)
{
#ifdef CONFIG_SLOW_WORK_DEBUG
slow_work_execs[id] = work;
#endif
}
static inline void slow_work_end_exec(int id, struct slow_work *work)
{
#ifdef CONFIG_SLOW_WORK_DEBUG
write_lock(&slow_work_execs_lock);
slow_work_execs[id] = NULL;
write_unlock(&slow_work_execs_lock);
#endif
}

@ -50,7 +50,6 @@
#include <linux/acpi.h>
#include <linux/reboot.h>
#include <linux/ftrace.h>
#include <linux/slow-work.h>
#include <linux/perf_event.h>
#include <linux/kprobes.h>
#include <linux/pipe_fs_i.h>
@ -906,13 +905,6 @@ static struct ctl_table kern_table[] = {
.proc_handler = proc_dointvec,
},
#endif
#ifdef CONFIG_SLOW_WORK
{
.procname = "slow-work",
.mode = 0555,
.child = slow_work_sysctls,
},
#endif
#ifdef CONFIG_PERF_EVENTS
{
.procname = "perf_event_paranoid",