This includes two new updates for the ftrace infrastructure.

1) With the change of the code for filtering events by pid, from a list
   of pids to a bitmask, we can now easily implement following forks.
   With the new tracing option "event-fork" set, tasks with pids in
   set_event_pid will, when they fork, have their child pids added to
   set_event_pid, and the children will be traced as well. Note, if
   "event-fork" is set and a task with its pid in set_event_pid exits,
   its pid will be removed from set_event_pid.

2) The addition of Tom Zanussi's hist triggers. This includes very
   thorough documentation on how to use the hist triggers with events.
   This introduces a quick and easy way to get histogram data from
   events and their fields.

Some other cleanups and updates were added as well. For example, Masami
Hiramatsu added test cases for the event trigger and hist triggers, and
I added a speed-up of filtering by using a temp buffer when filters are
set.

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQEcBAABAgAGBQJXPIv1AAoJEKKk/i67LK/8WZcIAIaaHJMctDCfXPg8OoT1LLI/
yUxgWvQRM7iwGV8YjuaXlyxTDJU0XVoNpPF5ZGiePlRDSCUboNvgcNVHRusJJKqM
oV1BTsq2x5eY12agA8kSOHcqGP7saqa2H+RJ4+3jNB/DTtOwJ8RzodlqWQ7PZbRG
0IDvD7buh9NeDS2am835RB+Xhy/jNBrkoJjpvMNaG5nZypsMq8D524RzyBm6RYjp
p+KLo3/yDc0+khv1hIs1c/w+LXNs7XtpPjpAKBa8B4xOiXndh3IosjX3JnL+0f+6
EvXt6qRfBKCE5o2BM397qjE3V/L0/SfzTijuL1WMd88ZvPGqwcsslQekmxKAb1E=
=WBTB
-----END PGP SIGNATURE-----

Merge tag 'trace-v4.7' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace

Pull tracing updates from Steven Rostedt.

* tag 'trace-v4.7' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace: (45 commits)
  tracing: Use temp buffer when filtering events
  tracing: Remove TRACE_EVENT_FL_USE_CALL_FILTER logic
  tracing: Remove unused function trace_current_buffer_lock_reserve()
  tracing: Remove one use of trace_current_buffer_lock_reserve()
  tracing: Have trace_buffer_unlock_commit() call the _regs version with NULL
  tracing: Remove unused function trace_current_buffer_discard_commit()
  tracing: Move trace_buffer_unlock_commit{_regs}() to local header
  tracing: Fold filter_check_discard() into its only user
  tracing: Make filter_check_discard() local
  tracing: Move event_trigger_unlock_commit{_regs}() to local header
  tracing: Don't use the address of the buffer array name in copy_from_user
  tracing: Handle tracing_map_alloc_elts() error path correctly
  tracing: Add check for NULL event field when creating hist field
  tracing: checking for NULL instead of IS_ERR()
  tracing: Do not inherit event-fork option for instances
  tracing: Fix unsigned comparison to zero in hist trigger code
  kselftests/ftrace: Add a test for log2 modifier of hist trigger
  tracing: Add hist trigger 'log2' modifier
  kselftests/ftrace: Add hist trigger testcases
  kselftests/ftrace : Add event trigger testcases
  ...
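For readers who want to try the pid-following feature the message describes, a minimal user-space sketch follows. It is not part of the patch; the tracefs mount point (/sys/kernel/tracing) and the choice of sched events are assumptions for illustration only.

#include <stdio.h>
#include <unistd.h>

static int write_str(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return -1;
	}
	fputs(val, f);
	fclose(f);
	return 0;
}

int main(void)
{
	char pid[32];

	/* Filter events to this process only. */
	snprintf(pid, sizeof(pid), "%d\n", getpid());
	write_str("/sys/kernel/tracing/set_event_pid", pid);

	/* New in this pull: also follow children of the filtered tasks. */
	write_str("/sys/kernel/tracing/options/event-fork", "1\n");

	/* Enable some events; the sched subsystem is only an example. */
	write_str("/sys/kernel/tracing/events/sched/enable", "1\n");

	return 0;
}

Echoing "noevent-fork" into trace_options (the "no"-prefix convention the documentation below describes) turns the behaviour back off.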
commit 2600a46ee0
@@ -210,6 +210,11 @@ of ftrace. Here is a list of some of the key files:
Note, sched_switch and sched_wake_up will also trace events
listed in this file.

To have the PIDs of children of tasks with their PID in this file
added on fork, enable the "event-fork" option. That option will also
cause the PIDs of tasks to be removed from this file when the task
exits.

set_graph_function:

Set a "trigger" function where tracing should start
@@ -725,16 +730,14 @@ noraw
nohex
nobin
noblock
nostacktrace
trace_printk
noftrace_preempt
nobranch
annotate
nouserstacktrace
nosym-userobj
noprintk-msg-only
context-info
latency-format
nolatency-format
sleep-time
graph-time
record-cmd

@@ -742,7 +745,10 @@ overwrite
nodisable_on_free
irq-info
markers
noevent-fork
function-trace
nodisplay-graph
nostacktrace

To disable one of the options, echo in the option prepended with
"no".
@@ -796,11 +802,6 @@ Here are the available options:

block - When set, reading trace_pipe will not block when polled.

stacktrace - This is one of the options that changes the trace
itself. When a trace is recorded, so is the stack
of functions. This allows for back traces of
trace sites.

trace_printk - Can disable trace_printk() from writing into the buffer.

branch - Enable branch tracing with the tracer.

@@ -897,6 +898,10 @@ x494] <- /root/a.out[+0x4a8] <- /lib/libc-2.7.so[+0x1e1a6]
When disabled, the trace_marker will error with EINVAL
on write.

event-fork - When set, tasks with PIDs listed in set_event_pid will have
the PIDs of their children added to set_event_pid when those
tasks fork. Also, when tasks with PIDs in set_event_pid exit,
their PIDs will be removed from the file.

function-trace - The latency tracers will enable function tracing
if this option is enabled (default it is). When

@@ -904,8 +909,17 @@ x494] <- /root/a.out[+0x4a8] <- /lib/libc-2.7.so[+0x1e1a6]
functions. This keeps the overhead of the tracer down
when performing latency tests.

Note: Some tracers have their own options. They only appear
when the tracer is active.
display-graph - When set, the latency tracers (irqsoff, wakeup, etc) will
use function graph tracing instead of function tracing.

stacktrace - This is one of the options that changes the trace
itself. When a trace is recorded, so is the stack
of functions. This allows for back traces of
trace sites.

Note: Some tracers have their own options. They only appear in this
file when the tracer is active. They always appear in the
options directory.

@ -154,21 +154,6 @@ trace_event_buffer_lock_reserve(struct ring_buffer **current_buffer,
|
|||
struct trace_event_file *trace_file,
|
||||
int type, unsigned long len,
|
||||
unsigned long flags, int pc);
|
||||
struct ring_buffer_event *
|
||||
trace_current_buffer_lock_reserve(struct ring_buffer **current_buffer,
|
||||
int type, unsigned long len,
|
||||
unsigned long flags, int pc);
|
||||
void trace_buffer_unlock_commit(struct trace_array *tr,
|
||||
struct ring_buffer *buffer,
|
||||
struct ring_buffer_event *event,
|
||||
unsigned long flags, int pc);
|
||||
void trace_buffer_unlock_commit_regs(struct trace_array *tr,
|
||||
struct ring_buffer *buffer,
|
||||
struct ring_buffer_event *event,
|
||||
unsigned long flags, int pc,
|
||||
struct pt_regs *regs);
|
||||
void trace_current_buffer_discard_commit(struct ring_buffer *buffer,
|
||||
struct ring_buffer_event *event);
|
||||
|
||||
void tracing_record_cmdline(struct task_struct *tsk);
|
||||
|
||||
|
@ -229,7 +214,6 @@ enum {
|
|||
TRACE_EVENT_FL_NO_SET_FILTER_BIT,
|
||||
TRACE_EVENT_FL_IGNORE_ENABLE_BIT,
|
||||
TRACE_EVENT_FL_WAS_ENABLED_BIT,
|
||||
TRACE_EVENT_FL_USE_CALL_FILTER_BIT,
|
||||
TRACE_EVENT_FL_TRACEPOINT_BIT,
|
||||
TRACE_EVENT_FL_KPROBE_BIT,
|
||||
TRACE_EVENT_FL_UPROBE_BIT,
|
||||
|
@ -244,7 +228,6 @@ enum {
|
|||
* WAS_ENABLED - Set and stays set when an event was ever enabled
|
||||
* (used for module unloading, if a module event is enabled,
|
||||
* it is best to clear the buffers that used it).
|
||||
* USE_CALL_FILTER - For trace internal events, don't use file filter
|
||||
* TRACEPOINT - Event is a tracepoint
|
||||
* KPROBE - Event is a kprobe
|
||||
* UPROBE - Event is a uprobe
|
||||
|
@ -255,7 +238,6 @@ enum {
|
|||
TRACE_EVENT_FL_NO_SET_FILTER = (1 << TRACE_EVENT_FL_NO_SET_FILTER_BIT),
|
||||
TRACE_EVENT_FL_IGNORE_ENABLE = (1 << TRACE_EVENT_FL_IGNORE_ENABLE_BIT),
|
||||
TRACE_EVENT_FL_WAS_ENABLED = (1 << TRACE_EVENT_FL_WAS_ENABLED_BIT),
|
||||
TRACE_EVENT_FL_USE_CALL_FILTER = (1 << TRACE_EVENT_FL_USE_CALL_FILTER_BIT),
|
||||
TRACE_EVENT_FL_TRACEPOINT = (1 << TRACE_EVENT_FL_TRACEPOINT_BIT),
|
||||
TRACE_EVENT_FL_KPROBE = (1 << TRACE_EVENT_FL_KPROBE_BIT),
|
||||
TRACE_EVENT_FL_UPROBE = (1 << TRACE_EVENT_FL_UPROBE_BIT),
|
||||
|
@ -407,16 +389,12 @@ enum event_trigger_type {
|
|||
ETT_SNAPSHOT = (1 << 1),
|
||||
ETT_STACKTRACE = (1 << 2),
|
||||
ETT_EVENT_ENABLE = (1 << 3),
|
||||
ETT_EVENT_HIST = (1 << 4),
|
||||
ETT_HIST_ENABLE = (1 << 5),
|
||||
};
|
||||
|
||||
extern int filter_match_preds(struct event_filter *filter, void *rec);
|
||||
|
||||
extern int filter_check_discard(struct trace_event_file *file, void *rec,
|
||||
struct ring_buffer *buffer,
|
||||
struct ring_buffer_event *event);
|
||||
extern int call_filter_check_discard(struct trace_event_call *call, void *rec,
|
||||
struct ring_buffer *buffer,
|
||||
struct ring_buffer_event *event);
|
||||
extern enum event_trigger_type event_triggers_call(struct trace_event_file *file,
|
||||
void *rec);
|
||||
extern void event_triggers_post_call(struct trace_event_file *file,
|
||||
|
@ -450,100 +428,6 @@ trace_trigger_soft_disabled(struct trace_event_file *file)
|
|||
return false;
|
||||
}
|
||||
|
||||
/*
|
||||
* Helper function for event_trigger_unlock_commit{_regs}().
|
||||
* If there are event triggers attached to this event that requires
|
||||
* filtering against its fields, then they wil be called as the
|
||||
* entry already holds the field information of the current event.
|
||||
*
|
||||
* It also checks if the event should be discarded or not.
|
||||
* It is to be discarded if the event is soft disabled and the
|
||||
* event was only recorded to process triggers, or if the event
|
||||
* filter is active and this event did not match the filters.
|
||||
*
|
||||
* Returns true if the event is discarded, false otherwise.
|
||||
*/
|
||||
static inline bool
|
||||
__event_trigger_test_discard(struct trace_event_file *file,
|
||||
struct ring_buffer *buffer,
|
||||
struct ring_buffer_event *event,
|
||||
void *entry,
|
||||
enum event_trigger_type *tt)
|
||||
{
|
||||
unsigned long eflags = file->flags;
|
||||
|
||||
if (eflags & EVENT_FILE_FL_TRIGGER_COND)
|
||||
*tt = event_triggers_call(file, entry);
|
||||
|
||||
if (test_bit(EVENT_FILE_FL_SOFT_DISABLED_BIT, &file->flags))
|
||||
ring_buffer_discard_commit(buffer, event);
|
||||
else if (!filter_check_discard(file, entry, buffer, event))
|
||||
return false;
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
/**
|
||||
* event_trigger_unlock_commit - handle triggers and finish event commit
|
||||
* @file: The file pointer assoctiated to the event
|
||||
* @buffer: The ring buffer that the event is being written to
|
||||
* @event: The event meta data in the ring buffer
|
||||
* @entry: The event itself
|
||||
* @irq_flags: The state of the interrupts at the start of the event
|
||||
* @pc: The state of the preempt count at the start of the event.
|
||||
*
|
||||
* This is a helper function to handle triggers that require data
|
||||
* from the event itself. It also tests the event against filters and
|
||||
* if the event is soft disabled and should be discarded.
|
||||
*/
|
||||
static inline void
|
||||
event_trigger_unlock_commit(struct trace_event_file *file,
|
||||
struct ring_buffer *buffer,
|
||||
struct ring_buffer_event *event,
|
||||
void *entry, unsigned long irq_flags, int pc)
|
||||
{
|
||||
enum event_trigger_type tt = ETT_NONE;
|
||||
|
||||
if (!__event_trigger_test_discard(file, buffer, event, entry, &tt))
|
||||
trace_buffer_unlock_commit(file->tr, buffer, event, irq_flags, pc);
|
||||
|
||||
if (tt)
|
||||
event_triggers_post_call(file, tt, entry);
|
||||
}
|
||||
|
||||
/**
|
||||
* event_trigger_unlock_commit_regs - handle triggers and finish event commit
|
||||
* @file: The file pointer assoctiated to the event
|
||||
* @buffer: The ring buffer that the event is being written to
|
||||
* @event: The event meta data in the ring buffer
|
||||
* @entry: The event itself
|
||||
* @irq_flags: The state of the interrupts at the start of the event
|
||||
* @pc: The state of the preempt count at the start of the event.
|
||||
*
|
||||
* This is a helper function to handle triggers that require data
|
||||
* from the event itself. It also tests the event against filters and
|
||||
* if the event is soft disabled and should be discarded.
|
||||
*
|
||||
* Same as event_trigger_unlock_commit() but calls
|
||||
* trace_buffer_unlock_commit_regs() instead of trace_buffer_unlock_commit().
|
||||
*/
|
||||
static inline void
|
||||
event_trigger_unlock_commit_regs(struct trace_event_file *file,
|
||||
struct ring_buffer *buffer,
|
||||
struct ring_buffer_event *event,
|
||||
void *entry, unsigned long irq_flags, int pc,
|
||||
struct pt_regs *regs)
|
||||
{
|
||||
enum event_trigger_type tt = ETT_NONE;
|
||||
|
||||
if (!__event_trigger_test_discard(file, buffer, event, entry, &tt))
|
||||
trace_buffer_unlock_commit_regs(file->tr, buffer, event,
|
||||
irq_flags, pc, regs);
|
||||
|
||||
if (tt)
|
||||
event_triggers_post_call(file, tt, entry);
|
||||
}
|
||||
|
||||
#ifdef CONFIG_BPF_EVENTS
|
||||
unsigned int trace_call_bpf(struct bpf_prog *prog, void *ctx);
|
||||
#else
|
||||
|
|
|
@@ -528,6 +528,32 @@ config MMIOTRACE
	  See Documentation/trace/mmiotrace.txt.
	  If you are not helping to develop drivers, say N.

config TRACING_MAP
	bool
	depends on ARCH_HAVE_NMI_SAFE_CMPXCHG
	help
	  tracing_map is a special-purpose lock-free map for tracing,
	  separated out as a stand-alone facility in order to allow it
	  to be shared between multiple tracers. It isn't meant to be
	  generally used outside of that context, and is normally
	  selected by tracers that use it.

config HIST_TRIGGERS
	bool "Histogram triggers"
	depends on ARCH_HAVE_NMI_SAFE_CMPXCHG
	select TRACING_MAP
	default n
	help
	  Hist triggers allow one or more arbitrary trace event fields
	  to be aggregated into hash tables and dumped to stdout by
	  reading a debugfs/tracefs file. They're useful for
	  gathering quick and dirty (though precise) summaries of
	  event activity as an initial guide for further investigation
	  using more advanced tools.

	  See Documentation/trace/events.txt.
	  If in doubt, say N.

config MMIOTRACE_TEST
	tristate "Test module for mmiotrace"
	depends on MMIOTRACE && m

@ -31,6 +31,7 @@ obj-$(CONFIG_TRACING) += trace_output.o
|
|||
obj-$(CONFIG_TRACING) += trace_seq.o
|
||||
obj-$(CONFIG_TRACING) += trace_stat.o
|
||||
obj-$(CONFIG_TRACING) += trace_printk.o
|
||||
obj-$(CONFIG_TRACING_MAP) += tracing_map.o
|
||||
obj-$(CONFIG_CONTEXT_SWITCH_TRACER) += trace_sched_switch.o
|
||||
obj-$(CONFIG_FUNCTION_TRACER) += trace_functions.o
|
||||
obj-$(CONFIG_IRQSOFF_TRACER) += trace_irqsoff.o
|
||||
|
@ -53,6 +54,7 @@ obj-$(CONFIG_EVENT_TRACING) += trace_event_perf.o
|
|||
endif
|
||||
obj-$(CONFIG_EVENT_TRACING) += trace_events_filter.o
|
||||
obj-$(CONFIG_EVENT_TRACING) += trace_events_trigger.o
|
||||
obj-$(CONFIG_HIST_TRIGGERS) += trace_events_hist.o
|
||||
obj-$(CONFIG_BPF_EVENTS) += bpf_trace.o
|
||||
obj-$(CONFIG_KPROBE_EVENT) += trace_kprobe.o
|
||||
obj-$(CONFIG_TRACEPOINTS) += power-traces.o
|
||||
|
|
|
@ -253,6 +253,9 @@ unsigned long long ns2usecs(cycle_t nsec)
|
|||
#define TOP_LEVEL_TRACE_FLAGS (TRACE_ITER_PRINTK | \
|
||||
TRACE_ITER_PRINTK_MSGONLY | TRACE_ITER_RECORD_CMD)
|
||||
|
||||
/* trace_flags that are default zero for instances */
|
||||
#define ZEROED_TRACE_FLAGS \
|
||||
TRACE_ITER_EVENT_FORK
|
||||
|
||||
/*
|
||||
* The global_trace is the descriptor that holds the tracing
|
||||
|
@ -303,33 +306,18 @@ void trace_array_put(struct trace_array *this_tr)
|
|||
mutex_unlock(&trace_types_lock);
|
||||
}
|
||||
|
||||
int filter_check_discard(struct trace_event_file *file, void *rec,
|
||||
struct ring_buffer *buffer,
|
||||
struct ring_buffer_event *event)
|
||||
{
|
||||
if (unlikely(file->flags & EVENT_FILE_FL_FILTERED) &&
|
||||
!filter_match_preds(file->filter, rec)) {
|
||||
ring_buffer_discard_commit(buffer, event);
|
||||
return 1;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(filter_check_discard);
|
||||
|
||||
int call_filter_check_discard(struct trace_event_call *call, void *rec,
|
||||
struct ring_buffer *buffer,
|
||||
struct ring_buffer_event *event)
|
||||
{
|
||||
if (unlikely(call->flags & TRACE_EVENT_FL_FILTERED) &&
|
||||
!filter_match_preds(call->filter, rec)) {
|
||||
ring_buffer_discard_commit(buffer, event);
|
||||
__trace_event_discard_commit(buffer, event);
|
||||
return 1;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(call_filter_check_discard);
|
||||
|
||||
static cycle_t buffer_ftrace_now(struct trace_buffer *buf, int cpu)
|
||||
{
|
||||
|
@ -1672,6 +1660,16 @@ tracing_generic_entry_update(struct trace_entry *entry, unsigned long flags,
|
|||
}
|
||||
EXPORT_SYMBOL_GPL(tracing_generic_entry_update);
|
||||
|
||||
static __always_inline void
|
||||
trace_event_setup(struct ring_buffer_event *event,
|
||||
int type, unsigned long flags, int pc)
|
||||
{
|
||||
struct trace_entry *ent = ring_buffer_event_data(event);
|
||||
|
||||
tracing_generic_entry_update(ent, flags, pc);
|
||||
ent->type = type;
|
||||
}
|
||||
|
||||
struct ring_buffer_event *
|
||||
trace_buffer_lock_reserve(struct ring_buffer *buffer,
|
||||
int type,
|
||||
|
@ -1681,34 +1679,137 @@ trace_buffer_lock_reserve(struct ring_buffer *buffer,
|
|||
struct ring_buffer_event *event;
|
||||
|
||||
event = ring_buffer_lock_reserve(buffer, len);
|
||||
if (event != NULL) {
|
||||
struct trace_entry *ent = ring_buffer_event_data(event);
|
||||
|
||||
tracing_generic_entry_update(ent, flags, pc);
|
||||
ent->type = type;
|
||||
}
|
||||
if (event != NULL)
|
||||
trace_event_setup(event, type, flags, pc);
|
||||
|
||||
return event;
|
||||
}
|
||||
|
||||
DEFINE_PER_CPU(struct ring_buffer_event *, trace_buffered_event);
|
||||
DEFINE_PER_CPU(int, trace_buffered_event_cnt);
|
||||
static int trace_buffered_event_ref;
|
||||
|
||||
/**
|
||||
* trace_buffered_event_enable - enable buffering events
|
||||
*
|
||||
* When events are being filtered, it is quicker to use a temporary
|
||||
* buffer to write the event data into if there's a likely chance
|
||||
* that it will not be committed. The discard of the ring buffer
|
||||
* is not as fast as committing, and is much slower than copying
|
||||
* a commit.
|
||||
*
|
||||
* When an event is to be filtered, allocate per cpu buffers to
|
||||
* write the event data into, and if the event is filtered and discarded
|
||||
* it is simply dropped, otherwise, the entire data is to be committed
|
||||
* in one shot.
|
||||
*/
|
||||
void trace_buffered_event_enable(void)
|
||||
{
|
||||
struct ring_buffer_event *event;
|
||||
struct page *page;
|
||||
int cpu;
|
||||
|
||||
WARN_ON_ONCE(!mutex_is_locked(&event_mutex));
|
||||
|
||||
if (trace_buffered_event_ref++)
|
||||
return;
|
||||
|
||||
for_each_tracing_cpu(cpu) {
|
||||
page = alloc_pages_node(cpu_to_node(cpu),
|
||||
GFP_KERNEL | __GFP_NORETRY, 0);
|
||||
if (!page)
|
||||
goto failed;
|
||||
|
||||
event = page_address(page);
|
||||
memset(event, 0, sizeof(*event));
|
||||
|
||||
per_cpu(trace_buffered_event, cpu) = event;
|
||||
|
||||
preempt_disable();
|
||||
if (cpu == smp_processor_id() &&
|
||||
this_cpu_read(trace_buffered_event) !=
|
||||
per_cpu(trace_buffered_event, cpu))
|
||||
WARN_ON_ONCE(1);
|
||||
preempt_enable();
|
||||
}
|
||||
|
||||
return;
|
||||
failed:
|
||||
trace_buffered_event_disable();
|
||||
}
|
||||
|
||||
static void enable_trace_buffered_event(void *data)
|
||||
{
|
||||
/* Probably not needed, but do it anyway */
|
||||
smp_rmb();
|
||||
this_cpu_dec(trace_buffered_event_cnt);
|
||||
}
|
||||
|
||||
static void disable_trace_buffered_event(void *data)
|
||||
{
|
||||
this_cpu_inc(trace_buffered_event_cnt);
|
||||
}
|
||||
|
||||
/**
|
||||
* trace_buffered_event_disable - disable buffering events
|
||||
*
|
||||
* When a filter is removed, it is faster to not use the buffered
|
||||
* events, and to commit directly into the ring buffer. Free up
|
||||
* the temp buffers when there are no more users. This requires
|
||||
* special synchronization with current events.
|
||||
*/
|
||||
void trace_buffered_event_disable(void)
|
||||
{
|
||||
int cpu;
|
||||
|
||||
WARN_ON_ONCE(!mutex_is_locked(&event_mutex));
|
||||
|
||||
if (WARN_ON_ONCE(!trace_buffered_event_ref))
|
||||
return;
|
||||
|
||||
if (--trace_buffered_event_ref)
|
||||
return;
|
||||
|
||||
preempt_disable();
|
||||
/* For each CPU, set the buffer as used. */
|
||||
smp_call_function_many(tracing_buffer_mask,
|
||||
disable_trace_buffered_event, NULL, 1);
|
||||
preempt_enable();
|
||||
|
||||
/* Wait for all current users to finish */
|
||||
synchronize_sched();
|
||||
|
||||
for_each_tracing_cpu(cpu) {
|
||||
free_page((unsigned long)per_cpu(trace_buffered_event, cpu));
|
||||
per_cpu(trace_buffered_event, cpu) = NULL;
|
||||
}
|
||||
/*
|
||||
* Make sure trace_buffered_event is NULL before clearing
|
||||
* trace_buffered_event_cnt.
|
||||
*/
|
||||
smp_wmb();
|
||||
|
||||
preempt_disable();
|
||||
/* Do the work on each cpu */
|
||||
smp_call_function_many(tracing_buffer_mask,
|
||||
enable_trace_buffered_event, NULL, 1);
|
||||
preempt_enable();
|
||||
}
|
||||
|
||||
void
|
||||
__buffer_unlock_commit(struct ring_buffer *buffer, struct ring_buffer_event *event)
|
||||
{
|
||||
__this_cpu_write(trace_cmdline_save, true);
|
||||
ring_buffer_unlock_commit(buffer, event);
|
||||
}
|
||||
|
||||
void trace_buffer_unlock_commit(struct trace_array *tr,
|
||||
struct ring_buffer *buffer,
|
||||
struct ring_buffer_event *event,
|
||||
unsigned long flags, int pc)
|
||||
{
|
||||
__buffer_unlock_commit(buffer, event);
|
||||
|
||||
ftrace_trace_stack(tr, buffer, flags, 6, pc, NULL);
|
||||
ftrace_trace_userstack(buffer, flags, pc);
|
||||
/* If this is the temp buffer, we need to commit fully */
|
||||
if (this_cpu_read(trace_buffered_event) == event) {
|
||||
/* Length is in event->array[0] */
|
||||
ring_buffer_write(buffer, event->array[0], &event->array[1]);
|
||||
/* Release the temp buffer */
|
||||
this_cpu_dec(trace_buffered_event_cnt);
|
||||
} else
|
||||
ring_buffer_unlock_commit(buffer, event);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(trace_buffer_unlock_commit);
|
||||
|
||||
static struct ring_buffer *temp_buffer;
|
||||
|
||||
|
@ -1719,8 +1820,23 @@ trace_event_buffer_lock_reserve(struct ring_buffer **current_rb,
|
|||
unsigned long flags, int pc)
|
||||
{
|
||||
struct ring_buffer_event *entry;
|
||||
int val;
|
||||
|
||||
*current_rb = trace_file->tr->trace_buffer.buffer;
|
||||
|
||||
if ((trace_file->flags &
|
||||
(EVENT_FILE_FL_SOFT_DISABLED | EVENT_FILE_FL_FILTERED)) &&
|
||||
(entry = this_cpu_read(trace_buffered_event))) {
|
||||
/* Try to use the per cpu buffer first */
|
||||
val = this_cpu_inc_return(trace_buffered_event_cnt);
|
||||
if (val == 1) {
|
||||
trace_event_setup(entry, type, flags, pc);
|
||||
entry->array[0] = len;
|
||||
return entry;
|
||||
}
|
||||
this_cpu_dec(trace_buffered_event_cnt);
|
||||
}
|
||||
|
||||
entry = trace_buffer_lock_reserve(*current_rb,
|
||||
type, len, flags, pc);
|
||||
/*
|
||||
|
@ -1738,17 +1854,6 @@ trace_event_buffer_lock_reserve(struct ring_buffer **current_rb,
|
|||
}
|
||||
EXPORT_SYMBOL_GPL(trace_event_buffer_lock_reserve);
|
||||
|
||||
struct ring_buffer_event *
|
||||
trace_current_buffer_lock_reserve(struct ring_buffer **current_rb,
|
||||
int type, unsigned long len,
|
||||
unsigned long flags, int pc)
|
||||
{
|
||||
*current_rb = global_trace.trace_buffer.buffer;
|
||||
return trace_buffer_lock_reserve(*current_rb,
|
||||
type, len, flags, pc);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(trace_current_buffer_lock_reserve);
|
||||
|
||||
void trace_buffer_unlock_commit_regs(struct trace_array *tr,
|
||||
struct ring_buffer *buffer,
|
||||
struct ring_buffer_event *event,
|
||||
|
@ -1760,14 +1865,6 @@ void trace_buffer_unlock_commit_regs(struct trace_array *tr,
|
|||
ftrace_trace_stack(tr, buffer, flags, 0, pc, regs);
|
||||
ftrace_trace_userstack(buffer, flags, pc);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(trace_buffer_unlock_commit_regs);
|
||||
|
||||
void trace_current_buffer_discard_commit(struct ring_buffer *buffer,
|
||||
struct ring_buffer_event *event)
|
||||
{
|
||||
ring_buffer_discard_commit(buffer, event);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(trace_current_buffer_discard_commit);
|
||||
|
||||
void
|
||||
trace_function(struct trace_array *tr,
|
||||
|
@ -3571,6 +3668,9 @@ int set_tracer_flag(struct trace_array *tr, unsigned int mask, int enabled)
|
|||
if (mask == TRACE_ITER_RECORD_CMD)
|
||||
trace_event_enable_cmd_record(enabled);
|
||||
|
||||
if (mask == TRACE_ITER_EVENT_FORK)
|
||||
trace_event_follow_fork(tr, enabled);
|
||||
|
||||
if (mask == TRACE_ITER_OVERWRITE) {
|
||||
ring_buffer_change_overwrite(tr->trace_buffer.buffer, enabled);
|
||||
#ifdef CONFIG_TRACER_MAX_TRACE
|
||||
|
@ -3658,7 +3758,7 @@ tracing_trace_options_write(struct file *filp, const char __user *ubuf,
|
|||
if (cnt >= sizeof(buf))
|
||||
return -EINVAL;
|
||||
|
||||
if (copy_from_user(&buf, ubuf, cnt))
|
||||
if (copy_from_user(buf, ubuf, cnt))
|
||||
return -EFAULT;
|
||||
|
||||
buf[cnt] = 0;
|
||||
|
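The two copy_from_user() lines in the hunk above show the "Don't use the address of the buffer array name" cleanup from the commit list: for a local array, the array name and its address denote the same storage but have different pointer types, so passing the name keeps the argument type consistent. A stand-alone sketch of why both spellings reach the same bytes (ordinary user-space C, with memcpy() standing in for copy_from_user()):

#include <stdio.h>
#include <string.h>

int main(void)
{
	char buf[64];

	/*
	 * Same address, different types: "buf" decays to char *,
	 * while "&buf" has type char (*)[64].
	 */
	printf("buf  = %p\n", (void *)buf);
	printf("&buf = %p\n", (void *)&buf);

	/* memcpy() stands in for copy_from_user() in this sketch. */
	memcpy(buf, "event-fork\n", sizeof("event-fork\n"));
	fputs(buf, stdout);
	return 0;
}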
@ -3804,11 +3904,18 @@ static const char readme_msg[] =
|
|||
"\t trigger: traceon, traceoff\n"
|
||||
"\t enable_event:<system>:<event>\n"
|
||||
"\t disable_event:<system>:<event>\n"
|
||||
#ifdef CONFIG_HIST_TRIGGERS
|
||||
"\t enable_hist:<system>:<event>\n"
|
||||
"\t disable_hist:<system>:<event>\n"
|
||||
#endif
|
||||
#ifdef CONFIG_STACKTRACE
|
||||
"\t\t stacktrace\n"
|
||||
#endif
|
||||
#ifdef CONFIG_TRACER_SNAPSHOT
|
||||
"\t\t snapshot\n"
|
||||
#endif
|
||||
#ifdef CONFIG_HIST_TRIGGERS
|
||||
"\t\t hist (see below)\n"
|
||||
#endif
|
||||
"\t example: echo traceoff > events/block/block_unplug/trigger\n"
|
||||
"\t echo traceoff:3 > events/block/block_unplug/trigger\n"
|
||||
|
@ -3825,6 +3932,56 @@ static const char readme_msg[] =
|
|||
"\t To remove a trigger with a count:\n"
|
||||
"\t echo '!<trigger>:0 > <system>/<event>/trigger\n"
|
||||
"\t Filters can be ignored when removing a trigger.\n"
|
||||
#ifdef CONFIG_HIST_TRIGGERS
|
||||
" hist trigger\t- If set, event hits are aggregated into a hash table\n"
|
||||
"\t Format: hist:keys=<field1[,field2,...]>\n"
|
||||
"\t [:values=<field1[,field2,...]>]\n"
|
||||
"\t [:sort=<field1[,field2,...]>]\n"
|
||||
"\t [:size=#entries]\n"
|
||||
"\t [:pause][:continue][:clear]\n"
|
||||
"\t [:name=histname1]\n"
|
||||
"\t [if <filter>]\n\n"
|
||||
"\t When a matching event is hit, an entry is added to a hash\n"
|
||||
"\t table using the key(s) and value(s) named, and the value of a\n"
|
||||
"\t sum called 'hitcount' is incremented. Keys and values\n"
|
||||
"\t correspond to fields in the event's format description. Keys\n"
|
||||
"\t can be any field, or the special string 'stacktrace'.\n"
|
||||
"\t Compound keys consisting of up to two fields can be specified\n"
|
||||
"\t by the 'keys' keyword. Values must correspond to numeric\n"
|
||||
"\t fields. Sort keys consisting of up to two fields can be\n"
|
||||
"\t specified using the 'sort' keyword. The sort direction can\n"
|
||||
"\t be modified by appending '.descending' or '.ascending' to a\n"
|
||||
"\t sort field. The 'size' parameter can be used to specify more\n"
|
||||
"\t or fewer than the default 2048 entries for the hashtable size.\n"
|
||||
"\t If a hist trigger is given a name using the 'name' parameter,\n"
|
||||
"\t its histogram data will be shared with other triggers of the\n"
|
||||
"\t same name, and trigger hits will update this common data.\n\n"
|
||||
"\t Reading the 'hist' file for the event will dump the hash\n"
|
||||
"\t table in its entirety to stdout. If there are multiple hist\n"
|
||||
"\t triggers attached to an event, there will be a table for each\n"
|
||||
"\t trigger in the output. The table displayed for a named\n"
|
||||
"\t trigger will be the same as any other instance having the\n"
|
||||
"\t same name. The default format used to display a given field\n"
|
||||
"\t can be modified by appending any of the following modifiers\n"
|
||||
"\t to the field name, as applicable:\n\n"
|
||||
"\t .hex display a number as a hex value\n"
|
||||
"\t .sym display an address as a symbol\n"
|
||||
"\t .sym-offset display an address as a symbol and offset\n"
|
||||
"\t .execname display a common_pid as a program name\n"
|
||||
"\t .syscall display a syscall id as a syscall name\n\n"
|
||||
"\t .log2 display log2 value rather than raw number\n\n"
|
||||
"\t The 'pause' parameter can be used to pause an existing hist\n"
|
||||
"\t trigger or to start a hist trigger but not log any events\n"
|
||||
"\t until told to do so. 'continue' can be used to start or\n"
|
||||
"\t restart a paused hist trigger.\n\n"
|
||||
"\t The 'clear' parameter will clear the contents of a running\n"
|
||||
"\t hist trigger and leave its current paused/active state\n"
|
||||
"\t unchanged.\n\n"
|
||||
"\t The enable_hist and disable_hist triggers can be used to\n"
|
||||
"\t have one event conditionally start and stop another event's\n"
|
||||
"\t already-attached hist trigger. The syntax is analagous to\n"
|
||||
"\t the enable_event and disable_event triggers.\n"
|
||||
#endif
|
||||
;
|
||||
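The readme text above documents the syntax that the new events/*/trigger and events/*/hist files accept. A hedged user-space sketch of attaching one histogram and dumping it is shown here; it is not part of the patch, and the tracefs mount point plus the kmem:kmalloc event and its field names are assumptions for illustration.

#include <stdio.h>

#define EV "/sys/kernel/tracing/events/kmem/kmalloc"

int main(void)
{
	FILE *f;
	char line[512];

	f = fopen(EV "/trigger", "w");
	if (!f)
		return 1;
	/* keys/values/sort and the .sym modifier follow the readme format above */
	fprintf(f, "hist:keys=call_site.sym:values=bytes_req:sort=bytes_req.descending\n");
	fclose(f);

	f = fopen(EV "/hist", "r");
	if (!f)
		return 1;
	while (fgets(line, sizeof(line), f))
		fputs(line, stdout);
	fclose(f);
	return 0;
}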
|
||||
static ssize_t
|
||||
|
@ -4474,7 +4631,7 @@ tracing_set_trace_write(struct file *filp, const char __user *ubuf,
|
|||
if (cnt > MAX_TRACER_SIZE)
|
||||
cnt = MAX_TRACER_SIZE;
|
||||
|
||||
if (copy_from_user(&buf, ubuf, cnt))
|
||||
if (copy_from_user(buf, ubuf, cnt))
|
||||
return -EFAULT;
|
||||
|
||||
buf[cnt] = 0;
|
||||
|
@ -5264,7 +5421,7 @@ static ssize_t tracing_clock_write(struct file *filp, const char __user *ubuf,
|
|||
if (cnt >= sizeof(buf))
|
||||
return -EINVAL;
|
||||
|
||||
if (copy_from_user(&buf, ubuf, cnt))
|
||||
if (copy_from_user(buf, ubuf, cnt))
|
||||
return -EFAULT;
|
||||
|
||||
buf[cnt] = 0;
|
||||
|
@ -6650,7 +6807,7 @@ static int instance_mkdir(const char *name)
|
|||
if (!alloc_cpumask_var(&tr->tracing_cpumask, GFP_KERNEL))
|
||||
goto out_free_tr;
|
||||
|
||||
tr->trace_flags = global_trace.trace_flags;
|
||||
tr->trace_flags = global_trace.trace_flags & ~ZEROED_TRACE_FLAGS;
|
||||
|
||||
cpumask_copy(tr->tracing_cpumask, cpu_all_mask);
|
||||
|
||||
|
@ -6724,6 +6881,12 @@ static int instance_rmdir(const char *name)
|
|||
|
||||
list_del(&tr->list);
|
||||
|
||||
/* Disable all the flags that were enabled coming in */
|
||||
for (i = 0; i < TRACE_FLAGS_MAX_SIZE; i++) {
|
||||
if ((1 << i) & ZEROED_TRACE_FLAGS)
|
||||
set_tracer_flag(tr, 1 << i, 0);
|
||||
}
|
||||
|
||||
tracing_set_nop(tr);
|
||||
event_trace_del_tracer(tr);
|
||||
ftrace_destroy_function_files(tr);
|
||||
|
|
|
@ -177,9 +177,8 @@ struct trace_options {
|
|||
};
|
||||
|
||||
struct trace_pid_list {
|
||||
unsigned int nr_pids;
|
||||
int order;
|
||||
pid_t *pids;
|
||||
int pid_max;
|
||||
unsigned long *pids;
|
||||
};
|
||||
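The struct trace_pid_list hunk above is the core of the pid-filter rework the merge message describes: the sorted pid_t array (searched with bsearch()) is replaced by a bitmap sized by pid_max, so a membership test becomes a single test_bit(). A minimal stand-alone sketch of the same idea in plain user-space C (not the kernel helpers) might look like:

#include <limits.h>
#include <stdbool.h>
#include <stdlib.h>

#define BITS_PER_LONG (CHAR_BIT * sizeof(unsigned long))

struct pid_bitmap {
	int pid_max;            /* one bit per possible pid */
	unsigned long *bits;
};

static struct pid_bitmap *pid_bitmap_alloc(int pid_max)
{
	size_t words = (pid_max + BITS_PER_LONG - 1) / BITS_PER_LONG;
	struct pid_bitmap *m = malloc(sizeof(*m));

	if (!m)
		return NULL;
	m->pid_max = pid_max;
	m->bits = calloc(words, sizeof(unsigned long));
	if (!m->bits) {
		free(m);
		return NULL;
	}
	return m;
}

static void pid_bitmap_set(struct pid_bitmap *m, int pid)
{
	if (pid < m->pid_max)
		m->bits[pid / BITS_PER_LONG] |= 1UL << (pid % BITS_PER_LONG);
}

/* O(1) membership test, replacing the old O(log n) bsearch() */
static bool pid_bitmap_test(struct pid_bitmap *m, int pid)
{
	if (pid >= m->pid_max)
		return false;
	return m->bits[pid / BITS_PER_LONG] & (1UL << (pid % BITS_PER_LONG));
}

int main(void)
{
	struct pid_bitmap *m = pid_bitmap_alloc(32768);

	if (!m)
		return 1;
	pid_bitmap_set(m, 1234);
	return pid_bitmap_test(m, 1234) ? 0 : 1;
}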
|
||||
/*
|
||||
|
@ -656,6 +655,7 @@ static inline void __trace_stack(struct trace_array *tr, unsigned long flags,
|
|||
extern cycle_t ftrace_now(int cpu);
|
||||
|
||||
extern void trace_find_cmdline(int pid, char comm[]);
|
||||
extern void trace_event_follow_fork(struct trace_array *tr, bool enable);
|
||||
|
||||
#ifdef CONFIG_DYNAMIC_FTRACE
|
||||
extern unsigned long ftrace_update_tot_cnt;
|
||||
|
@ -967,6 +967,7 @@ extern int trace_get_user(struct trace_parser *parser, const char __user *ubuf,
|
|||
C(STOP_ON_FREE, "disable_on_free"), \
|
||||
C(IRQ_INFO, "irq-info"), \
|
||||
C(MARKERS, "markers"), \
|
||||
C(EVENT_FORK, "event-fork"), \
|
||||
FUNCTION_FLAGS \
|
||||
FGRAPH_FLAGS \
|
||||
STACK_FLAGS \
|
||||
|
@ -1064,6 +1065,137 @@ struct trace_subsystem_dir {
|
|||
int nr_events;
|
||||
};
|
||||
|
||||
extern int call_filter_check_discard(struct trace_event_call *call, void *rec,
|
||||
struct ring_buffer *buffer,
|
||||
struct ring_buffer_event *event);
|
||||
|
||||
void trace_buffer_unlock_commit_regs(struct trace_array *tr,
|
||||
struct ring_buffer *buffer,
|
||||
struct ring_buffer_event *event,
|
||||
unsigned long flags, int pc,
|
||||
struct pt_regs *regs);
|
||||
|
||||
static inline void trace_buffer_unlock_commit(struct trace_array *tr,
|
||||
struct ring_buffer *buffer,
|
||||
struct ring_buffer_event *event,
|
||||
unsigned long flags, int pc)
|
||||
{
|
||||
trace_buffer_unlock_commit_regs(tr, buffer, event, flags, pc, NULL);
|
||||
}
|
||||
|
||||
DECLARE_PER_CPU(struct ring_buffer_event *, trace_buffered_event);
|
||||
DECLARE_PER_CPU(int, trace_buffered_event_cnt);
|
||||
void trace_buffered_event_disable(void);
|
||||
void trace_buffered_event_enable(void);
|
||||
|
||||
static inline void
|
||||
__trace_event_discard_commit(struct ring_buffer *buffer,
|
||||
struct ring_buffer_event *event)
|
||||
{
|
||||
if (this_cpu_read(trace_buffered_event) == event) {
|
||||
/* Simply release the temp buffer */
|
||||
this_cpu_dec(trace_buffered_event_cnt);
|
||||
return;
|
||||
}
|
||||
ring_buffer_discard_commit(buffer, event);
|
||||
}
|
||||
|
||||
/*
|
||||
* Helper function for event_trigger_unlock_commit{_regs}().
|
||||
* If there are event triggers attached to this event that requires
|
||||
* filtering against its fields, then they wil be called as the
|
||||
* entry already holds the field information of the current event.
|
||||
*
|
||||
* It also checks if the event should be discarded or not.
|
||||
* It is to be discarded if the event is soft disabled and the
|
||||
* event was only recorded to process triggers, or if the event
|
||||
* filter is active and this event did not match the filters.
|
||||
*
|
||||
* Returns true if the event is discarded, false otherwise.
|
||||
*/
|
||||
static inline bool
|
||||
__event_trigger_test_discard(struct trace_event_file *file,
|
||||
struct ring_buffer *buffer,
|
||||
struct ring_buffer_event *event,
|
||||
void *entry,
|
||||
enum event_trigger_type *tt)
|
||||
{
|
||||
unsigned long eflags = file->flags;
|
||||
|
||||
if (eflags & EVENT_FILE_FL_TRIGGER_COND)
|
||||
*tt = event_triggers_call(file, entry);
|
||||
|
||||
if (test_bit(EVENT_FILE_FL_SOFT_DISABLED_BIT, &file->flags) ||
|
||||
(unlikely(file->flags & EVENT_FILE_FL_FILTERED) &&
|
||||
!filter_match_preds(file->filter, entry))) {
|
||||
__trace_event_discard_commit(buffer, event);
|
||||
return true;
|
||||
}
|
||||
|
||||
return false;
|
||||
}
|
||||
|
||||
/**
|
||||
* event_trigger_unlock_commit - handle triggers and finish event commit
|
||||
* @file: The file pointer assoctiated to the event
|
||||
* @buffer: The ring buffer that the event is being written to
|
||||
* @event: The event meta data in the ring buffer
|
||||
* @entry: The event itself
|
||||
* @irq_flags: The state of the interrupts at the start of the event
|
||||
* @pc: The state of the preempt count at the start of the event.
|
||||
*
|
||||
* This is a helper function to handle triggers that require data
|
||||
* from the event itself. It also tests the event against filters and
|
||||
* if the event is soft disabled and should be discarded.
|
||||
*/
|
||||
static inline void
|
||||
event_trigger_unlock_commit(struct trace_event_file *file,
|
||||
struct ring_buffer *buffer,
|
||||
struct ring_buffer_event *event,
|
||||
void *entry, unsigned long irq_flags, int pc)
|
||||
{
|
||||
enum event_trigger_type tt = ETT_NONE;
|
||||
|
||||
if (!__event_trigger_test_discard(file, buffer, event, entry, &tt))
|
||||
trace_buffer_unlock_commit(file->tr, buffer, event, irq_flags, pc);
|
||||
|
||||
if (tt)
|
||||
event_triggers_post_call(file, tt, entry);
|
||||
}
|
||||
|
||||
/**
|
||||
* event_trigger_unlock_commit_regs - handle triggers and finish event commit
|
||||
* @file: The file pointer assoctiated to the event
|
||||
* @buffer: The ring buffer that the event is being written to
|
||||
* @event: The event meta data in the ring buffer
|
||||
* @entry: The event itself
|
||||
* @irq_flags: The state of the interrupts at the start of the event
|
||||
* @pc: The state of the preempt count at the start of the event.
|
||||
*
|
||||
* This is a helper function to handle triggers that require data
|
||||
* from the event itself. It also tests the event against filters and
|
||||
* if the event is soft disabled and should be discarded.
|
||||
*
|
||||
* Same as event_trigger_unlock_commit() but calls
|
||||
* trace_buffer_unlock_commit_regs() instead of trace_buffer_unlock_commit().
|
||||
*/
|
||||
static inline void
|
||||
event_trigger_unlock_commit_regs(struct trace_event_file *file,
|
||||
struct ring_buffer *buffer,
|
||||
struct ring_buffer_event *event,
|
||||
void *entry, unsigned long irq_flags, int pc,
|
||||
struct pt_regs *regs)
|
||||
{
|
||||
enum event_trigger_type tt = ETT_NONE;
|
||||
|
||||
if (!__event_trigger_test_discard(file, buffer, event, entry, &tt))
|
||||
trace_buffer_unlock_commit_regs(file->tr, buffer, event,
|
||||
irq_flags, pc, regs);
|
||||
|
||||
if (tt)
|
||||
event_triggers_post_call(file, tt, entry);
|
||||
}
|
||||
|
||||
#define FILTER_PRED_INVALID ((unsigned short)-1)
|
||||
#define FILTER_PRED_IS_RIGHT (1 << 15)
|
||||
#define FILTER_PRED_FOLD (1 << 15)
|
||||
|
@ -1161,6 +1293,15 @@ extern struct mutex event_mutex;
|
|||
extern struct list_head ftrace_events;
|
||||
|
||||
extern const struct file_operations event_trigger_fops;
|
||||
extern const struct file_operations event_hist_fops;
|
||||
|
||||
#ifdef CONFIG_HIST_TRIGGERS
|
||||
extern int register_trigger_hist_cmd(void);
|
||||
extern int register_trigger_hist_enable_disable_cmds(void);
|
||||
#else
|
||||
static inline int register_trigger_hist_cmd(void) { return 0; }
|
||||
static inline int register_trigger_hist_enable_disable_cmds(void) { return 0; }
|
||||
#endif
|
||||
|
||||
extern int register_trigger_cmds(void);
|
||||
extern void clear_event_triggers(struct trace_array *tr);
|
||||
|
@ -1174,9 +1315,41 @@ struct event_trigger_data {
|
|||
char *filter_str;
|
||||
void *private_data;
|
||||
bool paused;
|
||||
bool paused_tmp;
|
||||
struct list_head list;
|
||||
char *name;
|
||||
struct list_head named_list;
|
||||
struct event_trigger_data *named_data;
|
||||
};
|
||||
|
||||
/* Avoid typos */
|
||||
#define ENABLE_EVENT_STR "enable_event"
|
||||
#define DISABLE_EVENT_STR "disable_event"
|
||||
#define ENABLE_HIST_STR "enable_hist"
|
||||
#define DISABLE_HIST_STR "disable_hist"
|
||||
|
||||
struct enable_trigger_data {
|
||||
struct trace_event_file *file;
|
||||
bool enable;
|
||||
bool hist;
|
||||
};
|
||||
|
||||
extern int event_enable_trigger_print(struct seq_file *m,
|
||||
struct event_trigger_ops *ops,
|
||||
struct event_trigger_data *data);
|
||||
extern void event_enable_trigger_free(struct event_trigger_ops *ops,
|
||||
struct event_trigger_data *data);
|
||||
extern int event_enable_trigger_func(struct event_command *cmd_ops,
|
||||
struct trace_event_file *file,
|
||||
char *glob, char *cmd, char *param);
|
||||
extern int event_enable_register_trigger(char *glob,
|
||||
struct event_trigger_ops *ops,
|
||||
struct event_trigger_data *data,
|
||||
struct trace_event_file *file);
|
||||
extern void event_enable_unregister_trigger(char *glob,
|
||||
struct event_trigger_ops *ops,
|
||||
struct event_trigger_data *test,
|
||||
struct trace_event_file *file);
|
||||
extern void trigger_data_free(struct event_trigger_data *data);
|
||||
extern int event_trigger_init(struct event_trigger_ops *ops,
|
||||
struct event_trigger_data *data);
|
||||
|
@ -1189,7 +1362,18 @@ extern void unregister_trigger(char *glob, struct event_trigger_ops *ops,
|
|||
extern int set_trigger_filter(char *filter_str,
|
||||
struct event_trigger_data *trigger_data,
|
||||
struct trace_event_file *file);
|
||||
extern struct event_trigger_data *find_named_trigger(const char *name);
|
||||
extern bool is_named_trigger(struct event_trigger_data *test);
|
||||
extern int save_named_trigger(const char *name,
|
||||
struct event_trigger_data *data);
|
||||
extern void del_named_trigger(struct event_trigger_data *data);
|
||||
extern void pause_named_trigger(struct event_trigger_data *data);
|
||||
extern void unpause_named_trigger(struct event_trigger_data *data);
|
||||
extern void set_named_trigger_data(struct event_trigger_data *data,
|
||||
struct event_trigger_data *named_data);
|
||||
extern int register_event_command(struct event_command *cmd);
|
||||
extern int unregister_event_command(struct event_command *cmd);
|
||||
extern int register_trigger_hist_enable_disable_cmds(void);
|
||||
|
||||
/**
|
||||
* struct event_trigger_ops - callbacks for trace event triggers
|
||||
|
|
|
@ -15,7 +15,7 @@
|
|||
#include <linux/kthread.h>
|
||||
#include <linux/tracefs.h>
|
||||
#include <linux/uaccess.h>
|
||||
#include <linux/bsearch.h>
|
||||
#include <linux/vmalloc.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/ctype.h>
|
||||
#include <linux/sort.h>
|
||||
|
@ -381,6 +381,7 @@ static int __ftrace_event_enable_disable(struct trace_event_file *file,
|
|||
{
|
||||
struct trace_event_call *call = file->event_call;
|
||||
struct trace_array *tr = file->tr;
|
||||
unsigned long file_flags = file->flags;
|
||||
int ret = 0;
|
||||
int disable;
|
||||
|
||||
|
@ -463,6 +464,15 @@ static int __ftrace_event_enable_disable(struct trace_event_file *file,
|
|||
break;
|
||||
}
|
||||
|
||||
/* Enable or disable use of trace_buffered_event */
|
||||
if ((file_flags & EVENT_FILE_FL_SOFT_DISABLED) !=
|
||||
(file->flags & EVENT_FILE_FL_SOFT_DISABLED)) {
|
||||
if (file->flags & EVENT_FILE_FL_SOFT_DISABLED)
|
||||
trace_buffered_event_enable();
|
||||
else
|
||||
trace_buffered_event_disable();
|
||||
}
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
|
@ -489,24 +499,26 @@ static void ftrace_clear_events(struct trace_array *tr)
|
|||
mutex_unlock(&event_mutex);
|
||||
}
|
||||
|
||||
static int cmp_pid(const void *key, const void *elt)
|
||||
{
|
||||
const pid_t *search_pid = key;
|
||||
const pid_t *pid = elt;
|
||||
/* Shouldn't this be in a header? */
|
||||
extern int pid_max;
|
||||
|
||||
if (*search_pid == *pid)
|
||||
return 0;
|
||||
if (*search_pid < *pid)
|
||||
return -1;
|
||||
return 1;
|
||||
/* Returns true if found in filter */
|
||||
static bool
|
||||
find_filtered_pid(struct trace_pid_list *filtered_pids, pid_t search_pid)
|
||||
{
|
||||
/*
|
||||
* If pid_max changed after filtered_pids was created, we
|
||||
* by default ignore all pids greater than the previous pid_max.
|
||||
*/
|
||||
if (search_pid >= filtered_pids->pid_max)
|
||||
return false;
|
||||
|
||||
return test_bit(search_pid, filtered_pids->pids);
|
||||
}
|
||||
|
||||
static bool
|
||||
check_ignore_pid(struct trace_pid_list *filtered_pids, struct task_struct *task)
|
||||
ignore_this_task(struct trace_pid_list *filtered_pids, struct task_struct *task)
|
||||
{
|
||||
pid_t search_pid;
|
||||
pid_t *pid;
|
||||
|
||||
/*
|
||||
* Return false, because if filtered_pids does not exist,
|
||||
* all pids are good to trace.
|
||||
|
@ -514,15 +526,68 @@ check_ignore_pid(struct trace_pid_list *filtered_pids, struct task_struct *task)
|
|||
if (!filtered_pids)
|
||||
return false;
|
||||
|
||||
search_pid = task->pid;
|
||||
return !find_filtered_pid(filtered_pids, task->pid);
|
||||
}
|
||||
|
||||
pid = bsearch(&search_pid, filtered_pids->pids,
|
||||
filtered_pids->nr_pids, sizeof(pid_t),
|
||||
cmp_pid);
|
||||
if (!pid)
|
||||
return true;
|
||||
static void filter_add_remove_task(struct trace_pid_list *pid_list,
|
||||
struct task_struct *self,
|
||||
struct task_struct *task)
|
||||
{
|
||||
if (!pid_list)
|
||||
return;
|
||||
|
||||
return false;
|
||||
/* For forks, we only add if the forking task is listed */
|
||||
if (self) {
|
||||
if (!find_filtered_pid(pid_list, self->pid))
|
||||
return;
|
||||
}
|
||||
|
||||
/* Sorry, but we don't support pid_max changing after setting */
|
||||
if (task->pid >= pid_list->pid_max)
|
||||
return;
|
||||
|
||||
/* "self" is set for forks, and NULL for exits */
|
||||
if (self)
|
||||
set_bit(task->pid, pid_list->pids);
|
||||
else
|
||||
clear_bit(task->pid, pid_list->pids);
|
||||
}
|
||||
|
||||
static void
|
||||
event_filter_pid_sched_process_exit(void *data, struct task_struct *task)
|
||||
{
|
||||
struct trace_pid_list *pid_list;
|
||||
struct trace_array *tr = data;
|
||||
|
||||
pid_list = rcu_dereference_sched(tr->filtered_pids);
|
||||
filter_add_remove_task(pid_list, NULL, task);
|
||||
}
|
||||
|
||||
static void
|
||||
event_filter_pid_sched_process_fork(void *data,
|
||||
struct task_struct *self,
|
||||
struct task_struct *task)
|
||||
{
|
||||
struct trace_pid_list *pid_list;
|
||||
struct trace_array *tr = data;
|
||||
|
||||
pid_list = rcu_dereference_sched(tr->filtered_pids);
|
||||
filter_add_remove_task(pid_list, self, task);
|
||||
}
|
||||
|
||||
void trace_event_follow_fork(struct trace_array *tr, bool enable)
|
||||
{
|
||||
if (enable) {
|
||||
register_trace_prio_sched_process_fork(event_filter_pid_sched_process_fork,
|
||||
tr, INT_MIN);
|
||||
register_trace_prio_sched_process_exit(event_filter_pid_sched_process_exit,
|
||||
tr, INT_MAX);
|
||||
} else {
|
||||
unregister_trace_sched_process_fork(event_filter_pid_sched_process_fork,
|
||||
tr);
|
||||
unregister_trace_sched_process_exit(event_filter_pid_sched_process_exit,
|
||||
tr);
|
||||
}
|
||||
}
|
||||
|
||||
static void
|
||||
|
@ -535,8 +600,8 @@ event_filter_pid_sched_switch_probe_pre(void *data, bool preempt,
|
|||
pid_list = rcu_dereference_sched(tr->filtered_pids);
|
||||
|
||||
this_cpu_write(tr->trace_buffer.data->ignore_pid,
|
||||
check_ignore_pid(pid_list, prev) &&
|
||||
check_ignore_pid(pid_list, next));
|
||||
ignore_this_task(pid_list, prev) &&
|
||||
ignore_this_task(pid_list, next));
|
||||
}
|
||||
|
||||
static void
|
||||
|
@ -549,7 +614,7 @@ event_filter_pid_sched_switch_probe_post(void *data, bool preempt,
|
|||
pid_list = rcu_dereference_sched(tr->filtered_pids);
|
||||
|
||||
this_cpu_write(tr->trace_buffer.data->ignore_pid,
|
||||
check_ignore_pid(pid_list, next));
|
||||
ignore_this_task(pid_list, next));
|
||||
}
|
||||
|
||||
static void
|
||||
|
@ -565,7 +630,7 @@ event_filter_pid_sched_wakeup_probe_pre(void *data, struct task_struct *task)
|
|||
pid_list = rcu_dereference_sched(tr->filtered_pids);
|
||||
|
||||
this_cpu_write(tr->trace_buffer.data->ignore_pid,
|
||||
check_ignore_pid(pid_list, task));
|
||||
ignore_this_task(pid_list, task));
|
||||
}
|
||||
|
||||
static void
|
||||
|
@ -582,7 +647,7 @@ event_filter_pid_sched_wakeup_probe_post(void *data, struct task_struct *task)
|
|||
|
||||
/* Set tracing if current is enabled */
|
||||
this_cpu_write(tr->trace_buffer.data->ignore_pid,
|
||||
check_ignore_pid(pid_list, current));
|
||||
ignore_this_task(pid_list, current));
|
||||
}
|
||||
|
||||
static void __ftrace_clear_event_pids(struct trace_array *tr)
|
||||
|
@ -620,7 +685,7 @@ static void __ftrace_clear_event_pids(struct trace_array *tr)
|
|||
/* Wait till all users are no longer using pid filtering */
|
||||
synchronize_sched();
|
||||
|
||||
free_pages((unsigned long)pid_list->pids, pid_list->order);
|
||||
vfree(pid_list->pids);
|
||||
kfree(pid_list);
|
||||
}
|
||||
|
||||
|
@ -964,11 +1029,32 @@ static void t_stop(struct seq_file *m, void *p)
|
|||
mutex_unlock(&event_mutex);
|
||||
}
|
||||
|
||||
static void *
|
||||
p_next(struct seq_file *m, void *v, loff_t *pos)
|
||||
{
|
||||
struct trace_array *tr = m->private;
|
||||
struct trace_pid_list *pid_list = rcu_dereference_sched(tr->filtered_pids);
|
||||
unsigned long pid = (unsigned long)v;
|
||||
|
||||
(*pos)++;
|
||||
|
||||
/* pid already is +1 of the actual prevous bit */
|
||||
pid = find_next_bit(pid_list->pids, pid_list->pid_max, pid);
|
||||
|
||||
/* Return pid + 1 to allow zero to be represented */
|
||||
if (pid < pid_list->pid_max)
|
||||
return (void *)(pid + 1);
|
||||
|
||||
return NULL;
|
||||
}
|
||||
|
||||
static void *p_start(struct seq_file *m, loff_t *pos)
|
||||
__acquires(RCU)
|
||||
{
|
||||
struct trace_pid_list *pid_list;
|
||||
struct trace_array *tr = m->private;
|
||||
unsigned long pid;
|
||||
loff_t l = 0;
|
||||
|
||||
/*
|
||||
* Grab the mutex, to keep calls to p_next() having the same
|
||||
|
@ -981,10 +1067,18 @@ static void *p_start(struct seq_file *m, loff_t *pos)
|
|||
|
||||
pid_list = rcu_dereference_sched(tr->filtered_pids);
|
||||
|
||||
if (!pid_list || *pos >= pid_list->nr_pids)
|
||||
if (!pid_list)
|
||||
return NULL;
|
||||
|
||||
return (void *)&pid_list->pids[*pos];
|
||||
pid = find_first_bit(pid_list->pids, pid_list->pid_max);
|
||||
if (pid >= pid_list->pid_max)
|
||||
return NULL;
|
||||
|
||||
/* Return pid + 1 so that zero can be the exit value */
|
||||
for (pid++; pid && l < *pos;
|
||||
pid = (unsigned long)p_next(m, (void *)pid, &l))
|
||||
;
|
||||
return (void *)pid;
|
||||
}
|
||||
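The rewritten p_start()/p_next() above return the found bit position plus one, so that a legitimate pid of 0 can be told apart from the NULL pointer that ends a seq_file walk; the reworked p_show() later in the diff subtracts the one again before printing. A stand-alone sketch of the same +1 token convention over a tiny bitmap (plain C, not the kernel seq_file interface; the helper name is made up for illustration):

#include <stdio.h>

/* Tiny fixed-size "pid" bitmap just for the walk-through. */
static unsigned char bits[8] = { 0x01, 0x10 };   /* pids 0 and 12 set */
#define NBITS (8 * sizeof(bits))

/* Returns pid + 1 for the next set bit at or after 'pid', or 0 at the end. */
static unsigned long next_token(unsigned long pid)
{
	for (; pid < NBITS; pid++)
		if (bits[pid / 8] & (1 << (pid % 8)))
			return pid + 1;       /* +1 so that pid 0 is not "NULL" */
	return 0;
}

int main(void)
{
	unsigned long tok = next_token(0);

	while (tok) {
		printf("%lu\n", tok - 1); /* undo the +1, as p_show() does */
		tok = next_token(tok);    /* tok is already previous pid + 1 */
	}
	return 0;
}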
|
||||
static void p_stop(struct seq_file *m, void *p)
|
||||
|
@ -994,25 +1088,11 @@ static void p_stop(struct seq_file *m, void *p)
|
|||
mutex_unlock(&event_mutex);
|
||||
}
|
||||
|
||||
static void *
|
||||
p_next(struct seq_file *m, void *v, loff_t *pos)
|
||||
{
|
||||
struct trace_array *tr = m->private;
|
||||
struct trace_pid_list *pid_list = rcu_dereference_sched(tr->filtered_pids);
|
||||
|
||||
(*pos)++;
|
||||
|
||||
if (*pos >= pid_list->nr_pids)
|
||||
return NULL;
|
||||
|
||||
return (void *)&pid_list->pids[*pos];
|
||||
}
|
||||
|
||||
static int p_show(struct seq_file *m, void *v)
|
||||
{
|
||||
pid_t *pid = v;
|
||||
unsigned long pid = (unsigned long)v - 1;
|
||||
|
||||
seq_printf(m, "%d\n", *pid);
|
||||
seq_printf(m, "%lu\n", pid);
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -1561,11 +1641,6 @@ show_header(struct file *filp, char __user *ubuf, size_t cnt, loff_t *ppos)
|
|||
return r;
|
||||
}
|
||||
|
||||
static int max_pids(struct trace_pid_list *pid_list)
|
||||
{
|
||||
return (PAGE_SIZE << pid_list->order) / sizeof(pid_t);
|
||||
}
|
||||
|
||||
static void ignore_task_cpu(void *data)
|
||||
{
|
||||
struct trace_array *tr = data;
|
||||
|
@ -1579,7 +1654,7 @@ static void ignore_task_cpu(void *data)
|
|||
mutex_is_locked(&event_mutex));
|
||||
|
||||
this_cpu_write(tr->trace_buffer.data->ignore_pid,
|
||||
check_ignore_pid(pid_list, current));
|
||||
ignore_this_task(pid_list, current));
|
||||
}
|
||||
|
||||
static ssize_t
|
||||
|
@ -1589,7 +1664,7 @@ ftrace_event_pid_write(struct file *filp, const char __user *ubuf,
|
|||
struct seq_file *m = filp->private_data;
|
||||
struct trace_array *tr = m->private;
|
||||
struct trace_pid_list *filtered_pids = NULL;
|
||||
struct trace_pid_list *pid_list = NULL;
|
||||
struct trace_pid_list *pid_list;
|
||||
struct trace_event_file *file;
|
||||
struct trace_parser parser;
|
||||
unsigned long val;
|
||||
|
@ -1597,7 +1672,7 @@ ftrace_event_pid_write(struct file *filp, const char __user *ubuf,
|
|||
ssize_t read = 0;
|
||||
ssize_t ret = 0;
|
||||
pid_t pid;
|
||||
int i;
|
||||
int nr_pids = 0;
|
||||
|
||||
if (!cnt)
|
||||
return 0;
|
||||
|
@ -1610,10 +1685,43 @@ ftrace_event_pid_write(struct file *filp, const char __user *ubuf,
|
|||
return -ENOMEM;
|
||||
|
||||
mutex_lock(&event_mutex);
|
||||
filtered_pids = rcu_dereference_protected(tr->filtered_pids,
|
||||
lockdep_is_held(&event_mutex));
|
||||
|
||||
/*
|
||||
* Load as many pids into the array before doing a
|
||||
* swap from the tr->filtered_pids to the new list.
|
||||
* Always recreate a new array. The write is an all or nothing
|
||||
* operation. Always create a new array when adding new pids by
|
||||
* the user. If the operation fails, then the current list is
|
||||
* not modified.
|
||||
*/
|
||||
pid_list = kmalloc(sizeof(*pid_list), GFP_KERNEL);
|
||||
if (!pid_list) {
|
||||
read = -ENOMEM;
|
||||
goto out;
|
||||
}
|
||||
pid_list->pid_max = READ_ONCE(pid_max);
|
||||
/* Only truncating will shrink pid_max */
|
||||
if (filtered_pids && filtered_pids->pid_max > pid_list->pid_max)
|
||||
pid_list->pid_max = filtered_pids->pid_max;
|
||||
pid_list->pids = vzalloc((pid_list->pid_max + 7) >> 3);
|
||||
if (!pid_list->pids) {
|
||||
kfree(pid_list);
|
||||
read = -ENOMEM;
|
||||
goto out;
|
||||
}
|
||||
if (filtered_pids) {
|
||||
/* copy the current bits to the new max */
|
||||
pid = find_first_bit(filtered_pids->pids,
|
||||
filtered_pids->pid_max);
|
||||
while (pid < filtered_pids->pid_max) {
|
||||
set_bit(pid, pid_list->pids);
|
||||
pid = find_next_bit(filtered_pids->pids,
|
||||
filtered_pids->pid_max,
|
||||
pid + 1);
|
||||
nr_pids++;
|
||||
}
|
||||
}
|
||||
|
||||
while (cnt > 0) {
|
||||
|
||||
this_pos = 0;
|
||||
|
@ -1631,92 +1739,35 @@ ftrace_event_pid_write(struct file *filp, const char __user *ubuf,
|
|||
ret = -EINVAL;
|
||||
if (kstrtoul(parser.buffer, 0, &val))
|
||||
break;
|
||||
if (val > INT_MAX)
|
||||
if (val >= pid_list->pid_max)
|
||||
break;
|
||||
|
||||
pid = (pid_t)val;
|
||||
|
||||
ret = -ENOMEM;
|
||||
if (!pid_list) {
|
||||
pid_list = kmalloc(sizeof(*pid_list), GFP_KERNEL);
|
||||
if (!pid_list)
|
||||
break;
|
||||
set_bit(pid, pid_list->pids);
|
||||
nr_pids++;
|
||||
|
||||
filtered_pids = rcu_dereference_protected(tr->filtered_pids,
|
||||
lockdep_is_held(&event_mutex));
|
||||
if (filtered_pids)
|
||||
pid_list->order = filtered_pids->order;
|
||||
else
|
||||
pid_list->order = 0;
|
||||
|
||||
pid_list->pids = (void *)__get_free_pages(GFP_KERNEL,
|
||||
pid_list->order);
|
||||
if (!pid_list->pids)
|
||||
break;
|
||||
|
||||
if (filtered_pids) {
|
||||
pid_list->nr_pids = filtered_pids->nr_pids;
|
||||
memcpy(pid_list->pids, filtered_pids->pids,
|
||||
pid_list->nr_pids * sizeof(pid_t));
|
||||
} else
|
||||
pid_list->nr_pids = 0;
|
||||
}
|
||||
|
||||
if (pid_list->nr_pids >= max_pids(pid_list)) {
|
||||
pid_t *pid_page;
|
||||
|
||||
pid_page = (void *)__get_free_pages(GFP_KERNEL,
|
||||
pid_list->order + 1);
|
||||
if (!pid_page)
|
||||
break;
|
||||
memcpy(pid_page, pid_list->pids,
|
||||
pid_list->nr_pids * sizeof(pid_t));
|
||||
free_pages((unsigned long)pid_list->pids, pid_list->order);
|
||||
|
||||
pid_list->order++;
|
||||
pid_list->pids = pid_page;
|
||||
}
|
||||
|
||||
pid_list->pids[pid_list->nr_pids++] = pid;
|
||||
trace_parser_clear(&parser);
|
||||
ret = 0;
|
||||
}
|
||||
trace_parser_put(&parser);
|
||||
|
||||
if (ret < 0) {
|
||||
if (pid_list)
|
||||
free_pages((unsigned long)pid_list->pids, pid_list->order);
|
||||
vfree(pid_list->pids);
|
||||
kfree(pid_list);
|
||||
mutex_unlock(&event_mutex);
|
||||
return ret;
|
||||
read = ret;
|
||||
goto out;
|
||||
}
|
||||
|
||||
if (!pid_list) {
|
||||
mutex_unlock(&event_mutex);
|
||||
return ret;
|
||||
if (!nr_pids) {
|
||||
/* Cleared the list of pids */
|
||||
vfree(pid_list->pids);
|
||||
kfree(pid_list);
|
||||
read = ret;
|
||||
if (!filtered_pids)
|
||||
goto out;
|
||||
pid_list = NULL;
|
||||
}
|
||||
|
||||
sort(pid_list->pids, pid_list->nr_pids, sizeof(pid_t), cmp_pid, NULL);
|
||||
|
||||
/* Remove duplicates */
|
||||
for (i = 1; i < pid_list->nr_pids; i++) {
|
||||
int start = i;
|
||||
|
||||
while (i < pid_list->nr_pids &&
|
||||
pid_list->pids[i - 1] == pid_list->pids[i])
|
||||
i++;
|
||||
|
||||
if (start != i) {
|
||||
if (i < pid_list->nr_pids) {
|
||||
memmove(&pid_list->pids[start], &pid_list->pids[i],
|
||||
(pid_list->nr_pids - i) * sizeof(pid_t));
|
||||
pid_list->nr_pids -= i - start;
|
||||
i = start;
|
||||
} else
|
||||
pid_list->nr_pids = start;
|
||||
}
|
||||
}
|
||||
|
||||
rcu_assign_pointer(tr->filtered_pids, pid_list);
|
||||
|
||||
list_for_each_entry(file, &tr->events, list) {
|
||||
|
@ -1726,7 +1777,7 @@ ftrace_event_pid_write(struct file *filp, const char __user *ubuf,
|
|||
if (filtered_pids) {
|
||||
synchronize_sched();
|
||||
|
||||
free_pages((unsigned long)filtered_pids->pids, filtered_pids->order);
|
||||
vfree(filtered_pids->pids);
|
||||
kfree(filtered_pids);
|
||||
} else {
|
||||
/*
|
||||
|
@ -1763,10 +1814,12 @@ ftrace_event_pid_write(struct file *filp, const char __user *ubuf,
|
|||
*/
|
||||
on_each_cpu(ignore_task_cpu, tr, 1);
|
||||
|
||||
out:
|
||||
mutex_unlock(&event_mutex);
|
||||
|
||||
ret = read;
|
||||
*ppos += read;
|
||||
if (read > 0)
|
||||
*ppos += read;
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
@@ -2121,6 +2174,10 @@ event_create_dir(struct dentry *parent, struct trace_event_file *file)
	trace_create_file("trigger", 0644, file->dir, file,
			  &event_trigger_fops);

#ifdef CONFIG_HIST_TRIGGERS
	trace_create_file("hist", 0444, file->dir, file,
			  &event_hist_fops);
#endif
	trace_create_file("format", 0444, file->dir, call,
			  &ftrace_event_format_fops);

@ -3368,7 +3425,7 @@ static __init void event_trace_self_tests(void)
|
|||
|
||||
static DEFINE_PER_CPU(atomic_t, ftrace_test_event_disable);
|
||||
|
||||
static struct trace_array *event_tr;
|
||||
static struct trace_event_file event_trace_file __initdata;
|
||||
|
||||
static void __init
|
||||
function_test_events_call(unsigned long ip, unsigned long parent_ip,
|
||||
|
@ -3392,17 +3449,17 @@ function_test_events_call(unsigned long ip, unsigned long parent_ip,
|
|||
|
||||
local_save_flags(flags);
|
||||
|
||||
event = trace_current_buffer_lock_reserve(&buffer,
|
||||
TRACE_FN, sizeof(*entry),
|
||||
flags, pc);
|
||||
event = trace_event_buffer_lock_reserve(&buffer, &event_trace_file,
|
||||
TRACE_FN, sizeof(*entry),
|
||||
flags, pc);
|
||||
if (!event)
|
||||
goto out;
|
||||
entry = ring_buffer_event_data(event);
|
||||
entry->ip = ip;
|
||||
entry->parent_ip = parent_ip;
|
||||
|
||||
trace_buffer_unlock_commit(event_tr, buffer, event, flags, pc);
|
||||
|
||||
event_trigger_unlock_commit(&event_trace_file, buffer, event,
|
||||
entry, flags, pc);
|
||||
out:
|
||||
atomic_dec(&per_cpu(ftrace_test_event_disable, cpu));
|
||||
preempt_enable_notrace();
|
||||
|
@ -3417,9 +3474,11 @@ static struct ftrace_ops trace_ops __initdata =
|
|||
static __init void event_trace_self_test_with_function(void)
|
||||
{
|
||||
int ret;
|
||||
event_tr = top_trace_array();
|
||||
if (WARN_ON(!event_tr))
|
||||
|
||||
event_trace_file.tr = top_trace_array();
|
||||
if (WARN_ON(!event_trace_file.tr))
|
||||
return;
|
||||
|
||||
ret = register_ftrace_function(&trace_ops);
|
||||
if (WARN_ON(ret < 0)) {
|
||||
pr_info("Failed to enable function tracer for event tests\n");
|
||||
|
|
|
@ -689,10 +689,7 @@ static void append_filter_err(struct filter_parse_state *ps,
|
|||
|
||||
static inline struct event_filter *event_filter(struct trace_event_file *file)
|
||||
{
|
||||
if (file->event_call->flags & TRACE_EVENT_FL_USE_CALL_FILTER)
|
||||
return file->event_call->filter;
|
||||
else
|
||||
return file->filter;
|
||||
return file->filter;
|
||||
}
|
||||
|
||||
/* caller must hold event_mutex */
|
||||
|
@ -826,12 +823,12 @@ static void __free_preds(struct event_filter *filter)
|
|||
|
||||
static void filter_disable(struct trace_event_file *file)
|
||||
{
|
||||
struct trace_event_call *call = file->event_call;
|
||||
unsigned long old_flags = file->flags;
|
||||
|
||||
if (call->flags & TRACE_EVENT_FL_USE_CALL_FILTER)
|
||||
call->flags &= ~TRACE_EVENT_FL_FILTERED;
|
||||
else
|
||||
file->flags &= ~EVENT_FILE_FL_FILTERED;
|
||||
file->flags &= ~EVENT_FILE_FL_FILTERED;
|
||||
|
||||
if (old_flags != file->flags)
|
||||
trace_buffered_event_disable();
|
||||
}
|
||||
|
||||
static void __free_filter(struct event_filter *filter)
|
||||
|
@ -883,13 +880,8 @@ static int __alloc_preds(struct event_filter *filter, int n_preds)
|
|||
|
||||
static inline void __remove_filter(struct trace_event_file *file)
|
||||
{
|
||||
struct trace_event_call *call = file->event_call;
|
||||
|
||||
filter_disable(file);
|
||||
if (call->flags & TRACE_EVENT_FL_USE_CALL_FILTER)
|
||||
remove_filter_string(call->filter);
|
||||
else
|
||||
remove_filter_string(file->filter);
|
||||
remove_filter_string(file->filter);
|
||||
}
|
||||
|
||||
static void filter_free_subsystem_preds(struct trace_subsystem_dir *dir,
|
||||
|
@ -906,15 +898,8 @@ static void filter_free_subsystem_preds(struct trace_subsystem_dir *dir,
|
|||
|
||||
static inline void __free_subsystem_filter(struct trace_event_file *file)
|
||||
{
|
||||
struct trace_event_call *call = file->event_call;
|
||||
|
||||
if (call->flags & TRACE_EVENT_FL_USE_CALL_FILTER) {
|
||||
__free_filter(call->filter);
|
||||
call->filter = NULL;
|
||||
} else {
|
||||
__free_filter(file->filter);
|
||||
file->filter = NULL;
|
||||
}
|
||||
__free_filter(file->filter);
|
||||
file->filter = NULL;
|
||||
}
|
||||
|
||||
static void filter_free_subsystem_filters(struct trace_subsystem_dir *dir,
|
||||
|
@ -1718,69 +1703,43 @@ static int replace_preds(struct trace_event_call *call,
|
|||
|
||||
static inline void event_set_filtered_flag(struct trace_event_file *file)
|
||||
{
|
||||
struct trace_event_call *call = file->event_call;
|
||||
unsigned long old_flags = file->flags;
|
||||
|
||||
if (call->flags & TRACE_EVENT_FL_USE_CALL_FILTER)
|
||||
call->flags |= TRACE_EVENT_FL_FILTERED;
|
||||
else
|
||||
file->flags |= EVENT_FILE_FL_FILTERED;
|
||||
file->flags |= EVENT_FILE_FL_FILTERED;
|
||||
|
||||
if (old_flags != file->flags)
|
||||
trace_buffered_event_enable();
|
||||
}
|
||||
|
||||
static inline void event_set_filter(struct trace_event_file *file,
|
||||
struct event_filter *filter)
|
||||
{
|
||||
struct trace_event_call *call = file->event_call;
|
||||
|
||||
if (call->flags & TRACE_EVENT_FL_USE_CALL_FILTER)
|
||||
rcu_assign_pointer(call->filter, filter);
|
||||
else
|
||||
rcu_assign_pointer(file->filter, filter);
|
||||
rcu_assign_pointer(file->filter, filter);
|
||||
}
|
||||
|
||||
static inline void event_clear_filter(struct trace_event_file *file)
|
||||
{
|
||||
struct trace_event_call *call = file->event_call;
|
||||
|
||||
if (call->flags & TRACE_EVENT_FL_USE_CALL_FILTER)
|
||||
RCU_INIT_POINTER(call->filter, NULL);
|
||||
else
|
||||
RCU_INIT_POINTER(file->filter, NULL);
|
||||
RCU_INIT_POINTER(file->filter, NULL);
|
||||
}
|
||||
|
||||
static inline void
|
||||
event_set_no_set_filter_flag(struct trace_event_file *file)
|
||||
{
|
||||
struct trace_event_call *call = file->event_call;
|
||||
|
||||
if (call->flags & TRACE_EVENT_FL_USE_CALL_FILTER)
|
||||
call->flags |= TRACE_EVENT_FL_NO_SET_FILTER;
|
||||
else
|
||||
file->flags |= EVENT_FILE_FL_NO_SET_FILTER;
|
||||
file->flags |= EVENT_FILE_FL_NO_SET_FILTER;
|
||||
}
|
||||
|
||||
static inline void
|
||||
event_clear_no_set_filter_flag(struct trace_event_file *file)
|
||||
{
|
||||
struct trace_event_call *call = file->event_call;
|
||||
|
||||
if (call->flags & TRACE_EVENT_FL_USE_CALL_FILTER)
|
||||
call->flags &= ~TRACE_EVENT_FL_NO_SET_FILTER;
|
||||
else
|
||||
file->flags &= ~EVENT_FILE_FL_NO_SET_FILTER;
|
||||
file->flags &= ~EVENT_FILE_FL_NO_SET_FILTER;
|
||||
}
|
||||
|
||||
static inline bool
|
||||
event_no_set_filter_flag(struct trace_event_file *file)
|
||||
{
|
||||
struct trace_event_call *call = file->event_call;
|
||||
|
||||
if (file->flags & EVENT_FILE_FL_NO_SET_FILTER)
|
||||
return true;
|
||||
|
||||
if ((call->flags & TRACE_EVENT_FL_USE_CALL_FILTER) &&
|
||||
(call->flags & TRACE_EVENT_FL_NO_SET_FILTER))
|
||||
return true;
|
||||
|
||||
return false;
|
||||
}
|
||||
|
||||
|
|
kernel/trace/trace_events_hist.c: new file, 1755 lines (diff not shown here because it is too large).
|
@ -347,7 +347,7 @@ __init int register_event_command(struct event_command *cmd)
|
|||
* Currently we only unregister event commands from __init, so mark
|
||||
* this __init too.
|
||||
*/
|
||||
static __init int unregister_event_command(struct event_command *cmd)
|
||||
__init int unregister_event_command(struct event_command *cmd)
|
||||
{
|
||||
struct event_command *p, *n;
|
||||
int ret = -ENODEV;
|
||||
|
@ -641,6 +641,7 @@ event_trigger_callback(struct event_command *cmd_ops,
|
|||
trigger_data->ops = trigger_ops;
|
||||
trigger_data->cmd_ops = cmd_ops;
|
||||
INIT_LIST_HEAD(&trigger_data->list);
|
||||
INIT_LIST_HEAD(&trigger_data->named_list);
|
||||
|
||||
if (glob[0] == '!') {
|
||||
cmd_ops->unreg(glob+1, trigger_ops, trigger_data, file);
|
||||
|
@ -764,6 +765,148 @@ int set_trigger_filter(char *filter_str,
|
|||
return ret;
|
||||
}
|
||||
|
||||
static LIST_HEAD(named_triggers);
|
||||
|
||||
/**
|
||||
* find_named_trigger - Find the common named trigger associated with @name
|
||||
* @name: The name of the set of named triggers to find the common data for
|
||||
*
|
||||
* Named triggers are sets of triggers that share a common set of
|
||||
* trigger data. The first named trigger registered with a given name
|
||||
* owns the common trigger data that the others subsequently
|
||||
* registered with the same name will reference. This function
|
||||
* returns the common trigger data associated with that first
|
||||
* registered instance.
|
||||
*
|
||||
* Return: the common trigger data for the given named trigger on
|
||||
* success, NULL otherwise.
|
||||
*/
|
||||
struct event_trigger_data *find_named_trigger(const char *name)
|
||||
{
|
||||
struct event_trigger_data *data;
|
||||
|
||||
if (!name)
|
||||
return NULL;
|
||||
|
||||
list_for_each_entry(data, &named_triggers, named_list) {
|
||||
if (data->named_data)
|
||||
continue;
|
||||
if (strcmp(data->name, name) == 0)
|
||||
return data;
|
||||
}
|
||||
|
||||
return NULL;
|
||||
}
|
||||
|
||||
/**
|
||||
* is_named_trigger - determine if a given trigger is a named trigger
|
||||
* @test: The trigger data to test
|
||||
*
|
||||
* Return: true if 'test' is a named trigger, false otherwise.
|
||||
*/
|
||||
bool is_named_trigger(struct event_trigger_data *test)
|
||||
{
|
||||
struct event_trigger_data *data;
|
||||
|
||||
list_for_each_entry(data, &named_triggers, named_list) {
|
||||
if (test == data)
|
||||
return true;
|
||||
}
|
||||
|
||||
return false;
|
||||
}
|
||||
|
||||
/**
|
||||
* save_named_trigger - save the trigger in the named trigger list
|
||||
* @name: The name of the named trigger set
|
||||
* @data: The trigger data to save
|
||||
*
|
||||
* Return: 0 if successful, negative error otherwise.
|
||||
*/
|
||||
int save_named_trigger(const char *name, struct event_trigger_data *data)
|
||||
{
|
||||
data->name = kstrdup(name, GFP_KERNEL);
|
||||
if (!data->name)
|
||||
return -ENOMEM;
|
||||
|
||||
list_add(&data->named_list, &named_triggers);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
* del_named_trigger - delete a trigger from the named trigger list
|
||||
* @data: The trigger data to delete
|
||||
*/
|
||||
void del_named_trigger(struct event_trigger_data *data)
|
||||
{
|
||||
kfree(data->name);
|
||||
data->name = NULL;
|
||||
|
||||
list_del(&data->named_list);
|
||||
}
|
||||
|
||||
static void __pause_named_trigger(struct event_trigger_data *data, bool pause)
|
||||
{
|
||||
struct event_trigger_data *test;
|
||||
|
||||
list_for_each_entry(test, &named_triggers, named_list) {
|
||||
if (strcmp(test->name, data->name) == 0) {
|
||||
if (pause) {
|
||||
test->paused_tmp = test->paused;
|
||||
test->paused = true;
|
||||
} else {
|
||||
test->paused = test->paused_tmp;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* pause_named_trigger - Pause all named triggers with the same name
|
||||
* @data: The trigger data of a named trigger to pause
|
||||
*
|
||||
* Pauses a named trigger along with all other triggers having the
|
||||
* same name. Because named triggers share a common set of data,
|
||||
* pausing only one is meaningless, so pausing one named trigger needs
|
||||
* to pause all triggers with the same name.
|
||||
*/
|
||||
void pause_named_trigger(struct event_trigger_data *data)
|
||||
{
|
||||
__pause_named_trigger(data, true);
|
||||
}
|
||||
|
||||
/**
|
||||
* unpause_named_trigger - Un-pause all named triggers with the same name
|
||||
* @data: The trigger data of a named trigger to unpause
|
||||
*
|
||||
* Un-pauses a named trigger along with all other triggers having the
|
||||
* same name. Because named triggers share a common set of data,
|
||||
* unpausing only one is meaningless, so unpausing one named trigger
|
||||
* needs to unpause all triggers with the same name.
|
||||
*/
|
||||
void unpause_named_trigger(struct event_trigger_data *data)
|
||||
{
|
||||
__pause_named_trigger(data, false);
|
||||
}
|
||||
|
||||
/**
|
||||
* set_named_trigger_data - Associate common named trigger data
|
||||
 * @data: The trigger data to associate
 * @named_data: The common trigger data owned by the first trigger registered with this name
|
||||
*
|
||||
* Named triggers are sets of triggers that share a common set of
|
||||
* trigger data. The first named trigger registered with a given name
|
||||
* owns the common trigger data that the others subsequently
|
||||
* registered with the same name will reference. This function
|
||||
* associates the common trigger data from the first trigger with the
|
||||
* given trigger.
|
||||
*/
|
||||
void set_named_trigger_data(struct event_trigger_data *data,
|
||||
struct event_trigger_data *named_data)
|
||||
{
|
||||
data->named_data = named_data;
|
||||
}
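As a rough illustration of how the named-trigger helpers above fit together (this sketch is not part of the patch; the function name and its placement are invented, and only find_named_trigger(), save_named_trigger() and set_named_trigger_data() are taken from the code above), a trigger command registering a trigger created with a ":name=foo" parameter might do something like:

/* Illustrative sketch only -- not from this patch. */
static int example_register_named(const char *name,
				  struct event_trigger_data *data)
{
	struct event_trigger_data *named_data;
	int ret;

	/* The first trigger registered with @name owns the common data. */
	named_data = find_named_trigger(name);

	/*
	 * Every named trigger is kept on the named_triggers list so that
	 * pause_named_trigger()/unpause_named_trigger() can reach the
	 * whole set by name.
	 */
	ret = save_named_trigger(name, data);
	if (ret < 0)
		return ret;	/* -ENOMEM from the kstrdup() of @name */

	/* Later registrations just reference the owner's common data. */
	if (named_data)
		set_named_trigger_data(data, named_data);

	return 0;
}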
|
||||
|
||||
static void
|
||||
traceon_trigger(struct event_trigger_data *data, void *rec)
|
||||
{
|
||||
|
@ -1062,15 +1205,6 @@ static __init void unregister_trigger_traceon_traceoff_cmds(void)
|
|||
unregister_event_command(&trigger_traceoff_cmd);
|
||||
}
|
||||
|
||||
/* Avoid typos */
|
||||
#define ENABLE_EVENT_STR "enable_event"
|
||||
#define DISABLE_EVENT_STR "disable_event"
|
||||
|
||||
struct enable_trigger_data {
|
||||
struct trace_event_file *file;
|
||||
bool enable;
|
||||
};
|
||||
|
||||
static void
|
||||
event_enable_trigger(struct event_trigger_data *data, void *rec)
|
||||
{
|
||||
|
@ -1100,14 +1234,16 @@ event_enable_count_trigger(struct event_trigger_data *data, void *rec)
|
|||
event_enable_trigger(data, rec);
|
||||
}
|
||||
|
||||
static int
|
||||
event_enable_trigger_print(struct seq_file *m, struct event_trigger_ops *ops,
|
||||
struct event_trigger_data *data)
|
||||
int event_enable_trigger_print(struct seq_file *m,
|
||||
struct event_trigger_ops *ops,
|
||||
struct event_trigger_data *data)
|
||||
{
|
||||
struct enable_trigger_data *enable_data = data->private_data;
|
||||
|
||||
seq_printf(m, "%s:%s:%s",
|
||||
enable_data->enable ? ENABLE_EVENT_STR : DISABLE_EVENT_STR,
|
||||
enable_data->hist ?
|
||||
(enable_data->enable ? ENABLE_HIST_STR : DISABLE_HIST_STR) :
|
||||
(enable_data->enable ? ENABLE_EVENT_STR : DISABLE_EVENT_STR),
|
||||
enable_data->file->event_call->class->system,
|
||||
trace_event_name(enable_data->file->event_call));
|
||||
|
||||
|
@ -1124,9 +1260,8 @@ event_enable_trigger_print(struct seq_file *m, struct event_trigger_ops *ops,
|
|||
return 0;
|
||||
}
|
||||
|
||||
static void
|
||||
event_enable_trigger_free(struct event_trigger_ops *ops,
|
||||
struct event_trigger_data *data)
|
||||
void event_enable_trigger_free(struct event_trigger_ops *ops,
|
||||
struct event_trigger_data *data)
|
||||
{
|
||||
struct enable_trigger_data *enable_data = data->private_data;
|
||||
|
||||
|
@ -1171,10 +1306,9 @@ static struct event_trigger_ops event_disable_count_trigger_ops = {
|
|||
.free = event_enable_trigger_free,
|
||||
};
|
||||
|
||||
static int
|
||||
event_enable_trigger_func(struct event_command *cmd_ops,
|
||||
struct trace_event_file *file,
|
||||
char *glob, char *cmd, char *param)
|
||||
int event_enable_trigger_func(struct event_command *cmd_ops,
|
||||
struct trace_event_file *file,
|
||||
char *glob, char *cmd, char *param)
|
||||
{
|
||||
struct trace_event_file *event_enable_file;
|
||||
struct enable_trigger_data *enable_data;
|
||||
|
@ -1183,6 +1317,7 @@ event_enable_trigger_func(struct event_command *cmd_ops,
|
|||
struct trace_array *tr = file->tr;
|
||||
const char *system;
|
||||
const char *event;
|
||||
bool hist = false;
|
||||
char *trigger;
|
||||
char *number;
|
||||
bool enable;
|
||||
|
@ -1207,8 +1342,15 @@ event_enable_trigger_func(struct event_command *cmd_ops,
|
|||
if (!event_enable_file)
|
||||
goto out;
|
||||
|
||||
enable = strcmp(cmd, ENABLE_EVENT_STR) == 0;
|
||||
#ifdef CONFIG_HIST_TRIGGERS
|
||||
hist = ((strcmp(cmd, ENABLE_HIST_STR) == 0) ||
|
||||
(strcmp(cmd, DISABLE_HIST_STR) == 0));
|
||||
|
||||
enable = ((strcmp(cmd, ENABLE_EVENT_STR) == 0) ||
|
||||
(strcmp(cmd, ENABLE_HIST_STR) == 0));
|
||||
#else
|
||||
enable = strcmp(cmd, ENABLE_EVENT_STR) == 0;
|
||||
#endif
|
||||
trigger_ops = cmd_ops->get_trigger_ops(cmd, trigger);
|
||||
|
||||
ret = -ENOMEM;
|
||||
|
@ -1228,6 +1370,7 @@ event_enable_trigger_func(struct event_command *cmd_ops,
|
|||
INIT_LIST_HEAD(&trigger_data->list);
|
||||
RCU_INIT_POINTER(trigger_data->filter, NULL);
|
||||
|
||||
enable_data->hist = hist;
|
||||
enable_data->enable = enable;
|
||||
enable_data->file = event_enable_file;
|
||||
trigger_data->private_data = enable_data;
|
||||
|
@ -1305,10 +1448,10 @@ event_enable_trigger_func(struct event_command *cmd_ops,
|
|||
goto out;
|
||||
}
|
||||
|
||||
static int event_enable_register_trigger(char *glob,
|
||||
struct event_trigger_ops *ops,
|
||||
struct event_trigger_data *data,
|
||||
struct trace_event_file *file)
|
||||
int event_enable_register_trigger(char *glob,
|
||||
struct event_trigger_ops *ops,
|
||||
struct event_trigger_data *data,
|
||||
struct trace_event_file *file)
|
||||
{
|
||||
struct enable_trigger_data *enable_data = data->private_data;
|
||||
struct enable_trigger_data *test_enable_data;
|
||||
|
@ -1318,6 +1461,8 @@ static int event_enable_register_trigger(char *glob,
|
|||
list_for_each_entry_rcu(test, &file->triggers, list) {
|
||||
test_enable_data = test->private_data;
|
||||
if (test_enable_data &&
|
||||
(test->cmd_ops->trigger_type ==
|
||||
data->cmd_ops->trigger_type) &&
|
||||
(test_enable_data->file == enable_data->file)) {
|
||||
ret = -EEXIST;
|
||||
goto out;
|
||||
|
@ -1343,10 +1488,10 @@ static int event_enable_register_trigger(char *glob,
|
|||
return ret;
|
||||
}
|
||||
|
||||
static void event_enable_unregister_trigger(char *glob,
|
||||
struct event_trigger_ops *ops,
|
||||
struct event_trigger_data *test,
|
||||
struct trace_event_file *file)
|
||||
void event_enable_unregister_trigger(char *glob,
|
||||
struct event_trigger_ops *ops,
|
||||
struct event_trigger_data *test,
|
||||
struct trace_event_file *file)
|
||||
{
|
||||
struct enable_trigger_data *test_enable_data = test->private_data;
|
||||
struct enable_trigger_data *enable_data;
|
||||
|
@ -1356,6 +1501,8 @@ static void event_enable_unregister_trigger(char *glob,
|
|||
list_for_each_entry_rcu(data, &file->triggers, list) {
|
||||
enable_data = data->private_data;
|
||||
if (enable_data &&
|
||||
(data->cmd_ops->trigger_type ==
|
||||
test->cmd_ops->trigger_type) &&
|
||||
(enable_data->file == test_enable_data->file)) {
|
||||
unregistered = true;
|
||||
list_del_rcu(&data->list);
|
||||
|
@ -1375,8 +1522,12 @@ event_enable_get_trigger_ops(char *cmd, char *param)
|
|||
struct event_trigger_ops *ops;
|
||||
bool enable;
|
||||
|
||||
#ifdef CONFIG_HIST_TRIGGERS
|
||||
enable = ((strcmp(cmd, ENABLE_EVENT_STR) == 0) ||
|
||||
(strcmp(cmd, ENABLE_HIST_STR) == 0));
|
||||
#else
|
||||
enable = strcmp(cmd, ENABLE_EVENT_STR) == 0;
|
||||
|
||||
#endif
|
||||
if (enable)
|
||||
ops = param ? &event_enable_count_trigger_ops :
|
||||
&event_enable_trigger_ops;
|
||||
|
@ -1447,6 +1598,8 @@ __init int register_trigger_cmds(void)
|
|||
register_trigger_snapshot_cmd();
|
||||
register_trigger_stacktrace_cmd();
|
||||
register_trigger_enable_disable_cmds();
|
||||
register_trigger_hist_enable_disable_cmds();
|
||||
register_trigger_hist_cmd();
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
|
kernel/trace/tracing_map.c: new file, 1062 lines (diff not shown here because it is too large).
kernel/trace/tracing_map.h: new file, 283 lines, shown below.
|
@ -0,0 +1,283 @@
|
|||
#ifndef __TRACING_MAP_H
|
||||
#define __TRACING_MAP_H
|
||||
|
||||
#define TRACING_MAP_BITS_DEFAULT 11
|
||||
#define TRACING_MAP_BITS_MAX 17
|
||||
#define TRACING_MAP_BITS_MIN 7
|
||||
|
||||
#define TRACING_MAP_KEYS_MAX 2
|
||||
#define TRACING_MAP_VALS_MAX 3
|
||||
#define TRACING_MAP_FIELDS_MAX (TRACING_MAP_KEYS_MAX + \
|
||||
TRACING_MAP_VALS_MAX)
|
||||
#define TRACING_MAP_SORT_KEYS_MAX 2
|
||||
|
||||
typedef int (*tracing_map_cmp_fn_t) (void *val_a, void *val_b);
|
||||
|
||||
/*
|
||||
* This is an overview of the tracing_map data structures and how they
|
||||
* relate to the tracing_map API. The details of the algorithms
|
||||
* aren't discussed here - this is just a general overview of the data
|
||||
* structures and how they interact with the API.
|
||||
*
|
||||
* The central data structure of the tracing_map is an initially
|
||||
* zeroed array of struct tracing_map_entry (stored in the map field
|
||||
* of struct tracing_map). tracing_map_entry is a very simple data
|
||||
* structure containing only two fields: a 32-bit unsigned 'key'
|
||||
* variable and a pointer named 'val'. This array of struct
|
||||
* tracing_map_entry is essentially a hash table which will be
|
||||
* modified by a single function, tracing_map_insert(), but which can
|
||||
* be traversed and read by a user at any time (though the user does
|
||||
* this indirectly via an array of tracing_map_sort_entry - see the
|
||||
* explanation of that data structure in the discussion of the
|
||||
* sorting-related data structures below).
|
||||
*
|
||||
* The central function of the tracing_map API is
|
||||
* tracing_map_insert(). tracing_map_insert() hashes the
|
||||
* arbitrarily-sized key passed into it into a 32-bit unsigned key.
|
||||
* It then uses this key, truncated to the array size, as an index
|
||||
* into the array of tracing_map_entries. If the value of the 'key'
|
||||
* field of the tracing_map_entry found at that location is 0, then
|
||||
* that entry is considered to be free and can be claimed, by
|
||||
* replacing the 0 in the 'key' field of the tracing_map_entry with
|
||||
* the new 32-bit hashed key. Once claimed, that tracing_map_entry's
|
||||
* 'val' field is then used to store a unique element which will be
|
||||
* forever associated with that 32-bit hashed key in the
|
||||
* tracing_map_entry.
|
||||
*
|
||||
* That unique element now in the tracing_map_entry's 'val' field is
|
||||
* an instance of tracing_map_elt, where 'elt' in the latter part of
|
||||
* that variable name is short for 'element'. The purpose of a
|
||||
* tracing_map_elt is to hold values specific to the particular
|
||||
 * 32-bit hashed key it's associated with. Things such as the unique
|
||||
* set of aggregated sums associated with the 32-bit hashed key, along
|
||||
* with a copy of the full key associated with the entry, and which
|
||||
* was used to produce the 32-bit hashed key.
|
||||
*
|
||||
* When tracing_map_create() is called to create the tracing map, the
|
||||
* user specifies (indirectly via the map_bits param, the details are
|
||||
* unimportant for this discussion) the maximum number of elements
|
||||
* that the map can hold (stored in the max_elts field of struct
|
||||
* tracing_map). This is the maximum possible number of
|
||||
* tracing_map_entries in the tracing_map_entry array which can be
|
||||
* 'claimed' as described in the above discussion, and therefore is
|
||||
* also the maximum number of tracing_map_elts that can be associated
|
||||
* with the tracing_map_entry array in the tracing_map. Because of
|
||||
* the way the insertion algorithm works, the size of the allocated
|
||||
* tracing_map_entry array is always twice the maximum number of
|
||||
* elements (2 * max_elts). This value is stored in the map_size
|
||||
* field of struct tracing_map.
|
||||
*
|
||||
* Because tracing_map_insert() needs to work from any context,
|
||||
* including from within the memory allocation functions themselves,
|
||||
* both the tracing_map_entry array and a pool of max_elts
|
||||
* tracing_map_elts are pre-allocated before any call is made to
|
||||
* tracing_map_insert().
|
||||
*
|
||||
* The tracing_map_entry array is allocated as a single block by
|
||||
* tracing_map_create().
|
||||
*
|
||||
* Because the tracing_map_elts are much larger objects and can't
|
||||
* generally be allocated together as a single large array without
|
||||
* failure, they're allocated individually, by tracing_map_init().
|
||||
*
|
||||
* The pool of tracing_map_elts are allocated by tracing_map_init()
|
||||
* rather than by tracing_map_create() because at the time
|
||||
* tracing_map_create() is called, there isn't enough information to
|
||||
 * create the tracing_map_elts. Specifically, the user first needs to
|
||||
* tell the tracing_map implementation how many fields the
|
||||
* tracing_map_elts contain, and which types of fields they are (key
|
||||
* or sum). The user does this via the tracing_map_add_sum_field()
|
||||
* and tracing_map_add_key_field() functions, following which the user
|
||||
* calls tracing_map_init() to finish up the tracing map setup. The
|
||||
* array holding the pointers which make up the pre-allocated pool of
|
||||
* tracing_map_elts is allocated as a single block and is stored in
|
||||
* the elts field of struct tracing_map.
|
||||
*
|
||||
* There is also a set of structures used for sorting that might
|
||||
* benefit from some minimal explanation.
|
||||
*
|
||||
* struct tracing_map_sort_key is used to drive the sort at any given
|
||||
* time. By 'any given time' we mean that a different
|
||||
* tracing_map_sort_key will be used at different times depending on
|
||||
* whether the sort currently being performed is a primary or a
|
||||
* secondary sort.
|
||||
*
|
||||
* The sort key is very simple, consisting of the field index of the
|
||||
* tracing_map_elt field to sort on (which the user saved when adding
|
||||
* the field), and whether the sort should be done in an ascending or
|
||||
* descending order.
|
||||
*
|
||||
* For the convenience of the sorting code, a tracing_map_sort_entry
|
||||
* is created for each tracing_map_elt, again individually allocated
|
||||
* to avoid failures that might be expected if allocated as a single
|
||||
* large array of struct tracing_map_sort_entry.
|
||||
* tracing_map_sort_entry instances are the objects expected by the
|
||||
* various internal sorting functions, and are also what the user
|
||||
* ultimately receives after calling tracing_map_sort_entries().
|
||||
* Because it doesn't make sense for users to access an unordered and
|
||||
* sparsely populated tracing_map directly, the
|
||||
* tracing_map_sort_entries() function is provided so that users can
|
||||
* retrieve a sorted list of all existing elements. In addition to
|
||||
* the associated tracing_map_elt 'elt' field contained within the
|
||||
* tracing_map_sort_entry, which is the object of interest to the
|
||||
* user, tracing_map_sort_entry objects contain a number of additional
|
||||
* fields which are used for caching and internal purposes and can
|
||||
* safely be ignored.
|
||||
*/
|
||||
|
||||
struct tracing_map_field {
|
||||
tracing_map_cmp_fn_t cmp_fn;
|
||||
union {
|
||||
atomic64_t sum;
|
||||
unsigned int offset;
|
||||
};
|
||||
};
|
||||
|
||||
struct tracing_map_elt {
|
||||
struct tracing_map *map;
|
||||
struct tracing_map_field *fields;
|
||||
void *key;
|
||||
void *private_data;
|
||||
};
|
||||
|
||||
struct tracing_map_entry {
|
||||
u32 key;
|
||||
struct tracing_map_elt *val;
|
||||
};
|
||||
|
||||
struct tracing_map_sort_key {
|
||||
unsigned int field_idx;
|
||||
bool descending;
|
||||
};
|
||||
|
||||
struct tracing_map_sort_entry {
|
||||
void *key;
|
||||
struct tracing_map_elt *elt;
|
||||
bool elt_copied;
|
||||
bool dup;
|
||||
};
|
||||
|
||||
struct tracing_map_array {
|
||||
unsigned int entries_per_page;
|
||||
unsigned int entry_size_shift;
|
||||
unsigned int entry_shift;
|
||||
unsigned int entry_mask;
|
||||
unsigned int n_pages;
|
||||
void **pages;
|
||||
};
|
||||
|
||||
#define TRACING_MAP_ARRAY_ELT(array, idx) \
|
||||
(array->pages[idx >> array->entry_shift] + \
|
||||
((idx & array->entry_mask) << array->entry_size_shift))
|
||||
|
||||
#define TRACING_MAP_ENTRY(array, idx) \
|
||||
((struct tracing_map_entry *)TRACING_MAP_ARRAY_ELT(array, idx))
|
||||
|
||||
#define TRACING_MAP_ELT(array, idx) \
|
||||
((struct tracing_map_elt **)TRACING_MAP_ARRAY_ELT(array, idx))
|
||||
|
||||
struct tracing_map {
|
||||
unsigned int key_size;
|
||||
unsigned int map_bits;
|
||||
unsigned int map_size;
|
||||
unsigned int max_elts;
|
||||
atomic_t next_elt;
|
||||
struct tracing_map_array *elts;
|
||||
struct tracing_map_array *map;
|
||||
const struct tracing_map_ops *ops;
|
||||
void *private_data;
|
||||
struct tracing_map_field fields[TRACING_MAP_FIELDS_MAX];
|
||||
unsigned int n_fields;
|
||||
int key_idx[TRACING_MAP_KEYS_MAX];
|
||||
unsigned int n_keys;
|
||||
struct tracing_map_sort_key sort_key;
|
||||
atomic64_t hits;
|
||||
atomic64_t drops;
|
||||
};
|
||||
|
||||
/**
|
||||
* struct tracing_map_ops - callbacks for tracing_map
|
||||
*
|
||||
* The methods in this structure define callback functions for various
|
||||
* operations on a tracing_map or objects related to a tracing_map.
|
||||
*
|
||||
* For a detailed description of tracing_map_elt objects please see
|
||||
* the overview of tracing_map data structures at the beginning of
|
||||
* this file.
|
||||
*
|
||||
* All the methods below are optional.
|
||||
*
|
||||
* @elt_alloc: When a tracing_map_elt is allocated, this function, if
|
||||
* defined, will be called and gives clients the opportunity to
|
||||
* allocate additional data and attach it to the element
|
||||
* (tracing_map_elt->private_data is meant for that purpose).
|
||||
* Element allocation occurs before tracing begins, when the
|
||||
* tracing_map_init() call is made by client code.
|
||||
*
|
||||
* @elt_copy: At certain points in the lifetime of an element, it may
|
||||
* need to be copied. The copy should include a copy of the
|
||||
* client-allocated data, which can be copied into the 'to'
|
||||
* element from the 'from' element.
|
||||
*
|
||||
* @elt_free: When a tracing_map_elt is freed, this function is called
|
||||
* and allows client-allocated per-element data to be freed.
|
||||
*
|
||||
* @elt_clear: This callback allows per-element client-defined data to
|
||||
* be cleared, if applicable.
|
||||
*
|
||||
* @elt_init: This callback allows per-element client-defined data to
|
||||
* be initialized when used i.e. when the element is actually
|
||||
* claimed by tracing_map_insert() in the context of the map
|
||||
* insertion.
|
||||
*/
|
||||
struct tracing_map_ops {
|
||||
int (*elt_alloc)(struct tracing_map_elt *elt);
|
||||
void (*elt_copy)(struct tracing_map_elt *to,
|
||||
struct tracing_map_elt *from);
|
||||
void (*elt_free)(struct tracing_map_elt *elt);
|
||||
void (*elt_clear)(struct tracing_map_elt *elt);
|
||||
void (*elt_init)(struct tracing_map_elt *elt);
|
||||
};
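To make the callback descriptions above concrete, here is a minimal, hypothetical tracing_map_ops implementation: it only attaches and frees a small per-element context through elt->private_data. The example_elt_data structure and every example_* name are invented for illustration and are not part of this patch.

/* Hypothetical per-element client data, hung off elt->private_data. */
struct example_elt_data {
	u64	first_seen_ts;
};

static int example_elt_alloc(struct tracing_map_elt *elt)
{
	/* Runs from tracing_map_init(), before tracing starts. */
	elt->private_data = kzalloc(sizeof(struct example_elt_data),
				    GFP_KERNEL);
	return elt->private_data ? 0 : -ENOMEM;
}

static void example_elt_free(struct tracing_map_elt *elt)
{
	kfree(elt->private_data);
	elt->private_data = NULL;
}

static void example_elt_copy(struct tracing_map_elt *to,
			     struct tracing_map_elt *from)
{
	*(struct example_elt_data *)to->private_data =
		*(struct example_elt_data *)from->private_data;
}

static const struct tracing_map_ops example_map_ops = {
	.elt_alloc	= example_elt_alloc,
	.elt_copy	= example_elt_copy,
	.elt_free	= example_elt_free,
};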
|
||||
|
||||
extern struct tracing_map *
|
||||
tracing_map_create(unsigned int map_bits,
|
||||
unsigned int key_size,
|
||||
const struct tracing_map_ops *ops,
|
||||
void *private_data);
|
||||
extern int tracing_map_init(struct tracing_map *map);
|
||||
|
||||
extern int tracing_map_add_sum_field(struct tracing_map *map);
|
||||
extern int tracing_map_add_key_field(struct tracing_map *map,
|
||||
unsigned int offset,
|
||||
tracing_map_cmp_fn_t cmp_fn);
|
||||
|
||||
extern void tracing_map_destroy(struct tracing_map *map);
|
||||
extern void tracing_map_clear(struct tracing_map *map);
|
||||
|
||||
extern struct tracing_map_elt *
|
||||
tracing_map_insert(struct tracing_map *map, void *key);
|
||||
extern struct tracing_map_elt *
|
||||
tracing_map_lookup(struct tracing_map *map, void *key);
|
||||
|
||||
extern tracing_map_cmp_fn_t tracing_map_cmp_num(int field_size,
|
||||
int field_is_signed);
|
||||
extern int tracing_map_cmp_string(void *val_a, void *val_b);
|
||||
extern int tracing_map_cmp_none(void *val_a, void *val_b);
|
||||
|
||||
extern void tracing_map_update_sum(struct tracing_map_elt *elt,
|
||||
unsigned int i, u64 n);
|
||||
extern u64 tracing_map_read_sum(struct tracing_map_elt *elt, unsigned int i);
|
||||
extern void tracing_map_set_field_descr(struct tracing_map *map,
|
||||
unsigned int i,
|
||||
unsigned int key_offset,
|
||||
tracing_map_cmp_fn_t cmp_fn);
|
||||
extern int
|
||||
tracing_map_sort_entries(struct tracing_map *map,
|
||||
struct tracing_map_sort_key *sort_keys,
|
||||
unsigned int n_sort_keys,
|
||||
struct tracing_map_sort_entry ***sort_entries);
|
||||
|
||||
extern void
|
||||
tracing_map_destroy_sort_entries(struct tracing_map_sort_entry **entries,
|
||||
unsigned int n_entries);
|
||||
#endif /* __TRACING_MAP_H */
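Putting the overview comment and the declarations above together, a client of this API creates the map, declares its key and sum fields, calls tracing_map_init(), and then inserts and updates elements from the tracing path. The sketch below is only a guess at typical usage under the declared signatures; the key struct, the map_bits value and the example_* names are assumptions, not code from this patch.

/* Illustrative tracing_map client -- not from this patch. */
struct example_key {
	pid_t	pid;
};

static struct tracing_map *example_map_setup(void)
{
	struct tracing_map *map;
	int ret;

	/* map_bits of 10; ops may be NULL since all callbacks are optional. */
	map = tracing_map_create(10, sizeof(struct example_key), NULL, NULL);
	if (IS_ERR(map))
		return map;

	/*
	 * Declare fields before tracing_map_init() sizes the elements.
	 * The sum field added first gets field index 0.
	 */
	ret = tracing_map_add_sum_field(map);
	if (ret < 0)
		goto fail;

	ret = tracing_map_add_key_field(map,
					offsetof(struct example_key, pid),
					tracing_map_cmp_num(sizeof(pid_t), 1));
	if (ret < 0)
		goto fail;

	ret = tracing_map_init(map);	/* pre-allocates entries and elts */
	if (ret < 0)
		goto fail;

	return map;
fail:
	tracing_map_destroy(map);
	return ERR_PTR(ret);
}

/* Fast path: find or claim the element for @pid and bump its sum field. */
static void example_map_hit(struct tracing_map *map, pid_t pid, u64 bytes)
{
	struct example_key key = { .pid = pid };
	struct tracing_map_elt *elt;

	elt = tracing_map_insert(map, &key);
	if (elt)
		tracing_map_update_sum(elt, 0, bytes);
}

A sorted read-out would then go through tracing_map_sort_entries() and tracing_map_destroy_sort_entries(), as described at the end of the overview comment.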
|
|
@@ -14,3 +14,12 @@ enable_tracing() { # start trace recording
reset_tracer() { # reset the current tracer
    echo nop > current_tracer
}

reset_trigger() { # reset all current setting triggers
    grep -v ^# events/*/*/trigger |
    while read line; do
        cmd=`echo $line | cut -f2- -d: | cut -f1 -d" "`
        echo "!$cmd" > `echo $line | cut -f1 -d:`
    done
}
@ -0,0 +1,64 @@
|
|||
#!/bin/sh
|
||||
# description: event trigger - test event enable/disable trigger
|
||||
|
||||
do_reset() {
|
||||
reset_trigger
|
||||
echo > set_event
|
||||
clear_trace
|
||||
}
|
||||
|
||||
fail() { #msg
|
||||
do_reset
|
||||
echo $1
|
||||
exit $FAIL
|
||||
}
|
||||
|
||||
if [ ! -f set_event -o ! -d events/sched ]; then
|
||||
echo "event tracing is not supported"
|
||||
exit_unsupported
|
||||
fi
|
||||
|
||||
if [ ! -f events/sched/sched_process_fork/trigger ]; then
|
||||
echo "event trigger is not supported"
|
||||
exit_unsupported
|
||||
fi
|
||||
|
||||
reset_tracer
|
||||
do_reset
|
||||
|
||||
FEATURE=`grep enable_event events/sched/sched_process_fork/trigger`
|
||||
if [ -z "$FEATURE" ]; then
|
||||
echo "event enable/disable trigger is not supported"
|
||||
exit_unsupported
|
||||
fi
|
||||
|
||||
echo "Test enable_event trigger"
|
||||
echo 0 > events/sched/sched_switch/enable
|
||||
echo 'enable_event:sched:sched_switch' > events/sched/sched_process_fork/trigger
|
||||
( echo "forked")
|
||||
if [ `cat events/sched/sched_switch/enable` != '1*' ]; then
|
||||
fail "enable_event trigger on sched_process_fork did not work"
|
||||
fi
|
||||
|
||||
reset_trigger
|
||||
|
||||
echo "Test disable_event trigger"
|
||||
echo 1 > events/sched/sched_switch/enable
|
||||
echo 'disable_event:sched:sched_switch' > events/sched/sched_process_fork/trigger
|
||||
( echo "forked")
|
||||
if [ `cat events/sched/sched_switch/enable` != '0*' ]; then
|
||||
fail "disable_event trigger on sched_process_fork did not work"
|
||||
fi
|
||||
|
||||
reset_trigger
|
||||
|
||||
echo "Test semantic error for event enable/disable trigger"
|
||||
! echo 'enable_event:nogroup:noevent' > events/sched/sched_process_fork/trigger
|
||||
! echo 'disable_event+1' > events/sched/sched_process_fork/trigger
|
||||
echo 'enable_event:sched:sched_switch' > events/sched/sched_process_fork/trigger
|
||||
! echo 'enable_event:sched:sched_switch' > events/sched/sched_process_fork/trigger
|
||||
! echo 'disable_event:sched:sched_switch' > events/sched/sched_process_fork/trigger
|
||||
|
||||
do_reset
|
||||
|
||||
exit 0
|
|
@ -0,0 +1,59 @@
|
|||
#!/bin/sh
|
||||
# description: event trigger - test trigger filter
|
||||
|
||||
do_reset() {
|
||||
reset_trigger
|
||||
echo > set_event
|
||||
clear_trace
|
||||
}
|
||||
|
||||
fail() { #msg
|
||||
do_reset
|
||||
echo $1
|
||||
exit $FAIL
|
||||
}
|
||||
|
||||
if [ ! -f set_event -o ! -d events/sched ]; then
|
||||
echo "event tracing is not supported"
|
||||
exit_unsupported
|
||||
fi
|
||||
|
||||
if [ ! -f events/sched/sched_process_fork/trigger ]; then
|
||||
echo "event trigger is not supported"
|
||||
exit_unsupported
|
||||
fi
|
||||
|
||||
reset_tracer
|
||||
do_reset
|
||||
|
||||
echo "Test trigger filter"
|
||||
echo 1 > tracing_on
|
||||
echo 'traceoff if child_pid == 0' > events/sched/sched_process_fork/trigger
|
||||
( echo "forked")
|
||||
if [ `cat tracing_on` -ne 1 ]; then
|
||||
fail "traceoff trigger on sched_process_fork did not work"
|
||||
fi
|
||||
|
||||
reset_trigger
|
||||
|
||||
echo "Test semantic error for trigger filter"
|
||||
! echo 'traceoff if a' > events/sched/sched_process_fork/trigger
|
||||
! echo 'traceoff if common_pid=0' > events/sched/sched_process_fork/trigger
|
||||
! echo 'traceoff if common_pid==b' > events/sched/sched_process_fork/trigger
|
||||
echo 'traceoff if common_pid == 0' > events/sched/sched_process_fork/trigger
|
||||
echo '!traceoff' > events/sched/sched_process_fork/trigger
|
||||
! echo 'traceoff if common_pid == child_pid' > events/sched/sched_process_fork/trigger
|
||||
echo 'traceoff if common_pid <= 0' > events/sched/sched_process_fork/trigger
|
||||
echo '!traceoff' > events/sched/sched_process_fork/trigger
|
||||
echo 'traceoff if common_pid >= 0' > events/sched/sched_process_fork/trigger
|
||||
echo '!traceoff' > events/sched/sched_process_fork/trigger
|
||||
echo 'traceoff if parent_pid >= 0 && child_pid >= 0' > events/sched/sched_process_fork/trigger
|
||||
echo '!traceoff' > events/sched/sched_process_fork/trigger
|
||||
echo 'traceoff if parent_pid >= 0 || child_pid >= 0' > events/sched/sched_process_fork/trigger
|
||||
echo '!traceoff' > events/sched/sched_process_fork/trigger
|
||||
|
||||
|
||||
|
||||
do_reset
|
||||
|
||||
exit 0
|
|
@ -0,0 +1,75 @@
|
|||
#!/bin/sh
|
||||
# description: event trigger - test histogram modifiers
|
||||
|
||||
do_reset() {
|
||||
reset_trigger
|
||||
echo > set_event
|
||||
clear_trace
|
||||
}
|
||||
|
||||
fail() { #msg
|
||||
do_reset
|
||||
echo $1
|
||||
exit $FAIL
|
||||
}
|
||||
|
||||
if [ ! -f set_event -o ! -d events/sched ]; then
|
||||
echo "event tracing is not supported"
|
||||
exit_unsupported
|
||||
fi
|
||||
|
||||
if [ ! -f events/sched/sched_process_fork/trigger ]; then
|
||||
echo "event trigger is not supported"
|
||||
exit_unsupported
|
||||
fi
|
||||
|
||||
reset_tracer
|
||||
do_reset
|
||||
|
||||
FEATURE=`grep hist events/sched/sched_process_fork/trigger`
|
||||
if [ -z "$FEATURE" ]; then
|
||||
echo "hist trigger is not supported"
|
||||
exit_unsupported
|
||||
fi
|
||||
|
||||
echo "Test histogram with execname modifier"
|
||||
|
||||
echo 'hist:keys=common_pid.execname' > events/sched/sched_process_fork/trigger
|
||||
for i in `seq 1 10` ; do ( echo "forked" > /dev/null); done
|
||||
COMM=`cat /proc/$$/comm`
|
||||
grep "common_pid: $COMM" events/sched/sched_process_fork/hist > /dev/null || \
|
||||
fail "execname modifier on sched_process_fork did not work"
|
||||
|
||||
reset_trigger
|
||||
|
||||
echo "Test histogram with hex modifier"
|
||||
|
||||
echo 'hist:keys=parent_pid.hex' > events/sched/sched_process_fork/trigger
|
||||
for i in `seq 1 10` ; do ( echo "forked" > /dev/null); done
|
||||
# Note that $$ is the parent pid of the forked subshells.
HEX=`printf %x $$`
|
||||
grep "parent_pid: $HEX" events/sched/sched_process_fork/hist > /dev/null || \
|
||||
fail "hex modifier on sched_process_fork did not work"
|
||||
|
||||
reset_trigger
|
||||
|
||||
echo "Test histogram with syscall modifier"
|
||||
|
||||
echo 'hist:keys=id.syscall' > events/raw_syscalls/sys_exit/trigger
|
||||
for i in `seq 1 10` ; do ( echo "forked" > /dev/null); done
|
||||
grep "id: sys_" events/raw_syscalls/sys_exit/hist > /dev/null || \
|
||||
fail "syscall modifier on raw_syscalls/sys_exit did not work"
|
||||
|
||||
|
||||
reset_trigger
|
||||
|
||||
echo "Test histgram with log2 modifier"
|
||||
|
||||
echo 'hist:keys=bytes_req.log2' > events/kmem/kmalloc/trigger
|
||||
for i in `seq 1 10` ; do ( echo "forked" > /dev/null); done
|
||||
grep 'bytes_req: ~ 2^[0-9]*' events/kmem/kmalloc/hist > /dev/null || \
|
||||
fail "log2 modifier on kmem/kmalloc did not work"
|
||||
|
||||
do_reset
|
||||
|
||||
exit 0
|
|
@ -0,0 +1,83 @@
|
|||
#!/bin/sh
|
||||
# description: event trigger - test histogram trigger
|
||||
|
||||
do_reset() {
|
||||
reset_trigger
|
||||
echo > set_event
|
||||
clear_trace
|
||||
}
|
||||
|
||||
fail() { #msg
|
||||
do_reset
|
||||
echo $1
|
||||
exit $FAIL
|
||||
}
|
||||
|
||||
if [ ! -f set_event -o ! -d events/sched ]; then
|
||||
echo "event tracing is not supported"
|
||||
exit_unsupported
|
||||
fi
|
||||
|
||||
if [ ! -f events/sched/sched_process_fork/trigger ]; then
|
||||
echo "event trigger is not supported"
|
||||
exit_unsupported
|
||||
fi
|
||||
|
||||
reset_tracer
|
||||
do_reset
|
||||
|
||||
FEATURE=`grep hist events/sched/sched_process_fork/trigger`
|
||||
if [ -z "$FEATURE" ]; then
|
||||
echo "hist trigger is not supported"
|
||||
exit_unsupported
|
||||
fi
|
||||
|
||||
echo "Test histogram basic tigger"
|
||||
|
||||
echo 'hist:keys=parent_pid:vals=child_pid' > events/sched/sched_process_fork/trigger
|
||||
for i in `seq 1 10` ; do ( echo "forked" > /dev/null); done
|
||||
grep parent_pid events/sched/sched_process_fork/hist > /dev/null || \
|
||||
fail "hist trigger on sched_process_fork did not work"
|
||||
grep child events/sched/sched_process_fork/hist > /dev/null || \
|
||||
fail "hist trigger on sched_process_fork did not work"
|
||||
|
||||
reset_trigger
|
||||
|
||||
echo "Test histogram with compound keys"
|
||||
|
||||
echo 'hist:keys=parent_pid,child_pid' > events/sched/sched_process_fork/trigger
|
||||
for i in `seq 1 10` ; do ( echo "forked" > /dev/null); done
|
||||
grep '^{ parent_pid:.*, child_pid:.*}' events/sched/sched_process_fork/hist > /dev/null || \
|
||||
fail "compound keys on sched_process_fork did not work"
|
||||
|
||||
reset_trigger
|
||||
|
||||
echo "Test histogram with string key"
|
||||
|
||||
echo 'hist:keys=parent_comm' > events/sched/sched_process_fork/trigger
|
||||
for i in `seq 1 10` ; do ( echo "forked" > /dev/null); done
|
||||
COMM=`cat /proc/$$/comm`
|
||||
grep "parent_comm: $COMM" events/sched/sched_process_fork/hist > /dev/null || \
|
||||
fail "string key on sched_process_fork did not work"
|
||||
|
||||
reset_trigger
|
||||
|
||||
echo "Test histogram with sort key"
|
||||
|
||||
echo 'hist:keys=parent_pid,child_pid:sort=child_pid.ascending' > events/sched/sched_process_fork/trigger
|
||||
for i in `seq 1 10` ; do ( echo "forked" > /dev/null); done
|
||||
|
||||
check_inc() {
|
||||
while [ $# -gt 1 ]; do
|
||||
[ $1 -gt $2 ] && return 1
|
||||
shift 1
|
||||
done
|
||||
return 0
|
||||
}
|
||||
check_inc `grep -o "child_pid:[[:space:]]*[[:digit:]]*" \
|
||||
events/sched/sched_process_fork/hist | cut -d: -f2 ` ||
|
||||
fail "sort param on sched_process_fork did not work"
|
||||
|
||||
do_reset
|
||||
|
||||
exit 0
|
|
@ -0,0 +1,73 @@
|
|||
#!/bin/sh
|
||||
# description: event trigger - test multiple histogram triggers
|
||||
|
||||
do_reset() {
|
||||
reset_trigger
|
||||
echo > set_event
|
||||
clear_trace
|
||||
}
|
||||
|
||||
fail() { #msg
|
||||
do_reset
|
||||
echo $1
|
||||
exit $FAIL
|
||||
}
|
||||
|
||||
if [ ! -f set_event -o ! -d events/sched ]; then
|
||||
echo "event tracing is not supported"
|
||||
exit_unsupported
|
||||
fi
|
||||
|
||||
if [ ! -f events/sched/sched_process_fork/trigger ]; then
|
||||
echo "event trigger is not supported"
|
||||
exit_unsupported
|
||||
fi
|
||||
|
||||
reset_tracer
|
||||
do_reset
|
||||
|
||||
FEATURE=`grep hist events/sched/sched_process_fork/trigger`
|
||||
if [ -z "$FEATURE" ]; then
|
||||
echo "hist trigger is not supported"
|
||||
exit_unsupported
|
||||
fi
|
||||
|
||||
reset_trigger
|
||||
|
||||
echo "Test histogram multiple tiggers"
|
||||
|
||||
echo 'hist:keys=parent_pid:vals=child_pid' > events/sched/sched_process_fork/trigger
|
||||
echo 'hist:keys=parent_comm:vals=child_pid' >> events/sched/sched_process_fork/trigger
|
||||
for i in `seq 1 10` ; do ( echo "forked" > /dev/null); done
|
||||
grep parent_pid events/sched/sched_process_fork/hist > /dev/null || \
|
||||
fail "hist trigger on sched_process_fork did not work"
|
||||
grep child events/sched/sched_process_fork/hist > /dev/null || \
|
||||
fail "hist trigger on sched_process_fork did not work"
|
||||
COMM=`cat /proc/$$/comm`
|
||||
grep "parent_comm: $COMM" events/sched/sched_process_fork/hist > /dev/null || \
|
||||
fail "string key on sched_process_fork did not work"
|
||||
|
||||
reset_trigger
|
||||
|
||||
echo "Test histogram with its name"
|
||||
|
||||
echo 'hist:name=test_hist:keys=common_pid' > events/sched/sched_process_fork/trigger
|
||||
for i in `seq 1 10` ; do ( echo "forked" > /dev/null); done
|
||||
grep test_hist events/sched/sched_process_fork/hist > /dev/null || \
|
||||
fail "named event on sched_process_fork did not work"
|
||||
|
||||
echo "Test same named histogram on different events"
|
||||
|
||||
echo 'hist:name=test_hist:keys=common_pid' > events/sched/sched_process_exit/trigger
|
||||
for i in `seq 1 10` ; do ( echo "forked" > /dev/null); done
|
||||
grep test_hist events/sched/sched_process_exit/hist > /dev/null || \
|
||||
fail "named event on sched_process_fork did not work"
|
||||
|
||||
diffs=`diff events/sched/sched_process_exit/hist events/sched/sched_process_fork/hist | wc -l`
|
||||
test $diffs -eq 0 || fail "Same name histograms are not same"
|
||||
|
||||
reset_trigger
|
||||
|
||||
do_reset
|
||||
|
||||
exit 0
|
|
@ -0,0 +1,56 @@
|
|||
#!/bin/sh
|
||||
# description: event trigger - test snapshot-trigger
|
||||
|
||||
do_reset() {
|
||||
reset_trigger
|
||||
echo > set_event
|
||||
clear_trace
|
||||
}
|
||||
|
||||
fail() { #msg
|
||||
do_reset
|
||||
echo $1
|
||||
exit $FAIL
|
||||
}
|
||||
|
||||
if [ ! -f set_event -o ! -d events/sched ]; then
|
||||
echo "event tracing is not supported"
|
||||
exit_unsupported
|
||||
fi
|
||||
|
||||
if [ ! -f events/sched/sched_process_fork/trigger ]; then
|
||||
echo "event trigger is not supported"
|
||||
exit_unsupported
|
||||
fi
|
||||
|
||||
reset_tracer
|
||||
do_reset
|
||||
|
||||
FEATURE=`grep snapshot events/sched/sched_process_fork/trigger`
|
||||
if [ -z "$FEATURE" ]; then
|
||||
echo "snapshot trigger is not supported"
|
||||
exit_unsupported
|
||||
fi
|
||||
|
||||
echo "Test snapshot tigger"
|
||||
echo 0 > snapshot
|
||||
echo 1 > events/sched/sched_process_fork/enable
|
||||
( echo "forked")
|
||||
echo 'snapshot:1' > events/sched/sched_process_fork/trigger
|
||||
( echo "forked")
|
||||
grep sched_process_fork snapshot > /dev/null || \
|
||||
fail "snapshot trigger on sched_process_fork did not work"
|
||||
|
||||
reset_trigger
|
||||
echo 0 > snapshot
|
||||
echo 0 > events/sched/sched_process_fork/enable
|
||||
|
||||
echo "Test snapshot semantic errors"
|
||||
|
||||
! echo "snapshot+1" > events/sched/sched_process_fork/trigger
|
||||
echo "snapshot" > events/sched/sched_process_fork/trigger
|
||||
! echo "snapshot" > events/sched/sched_process_fork/trigger
|
||||
|
||||
do_reset
|
||||
|
||||
exit 0
|
|
@ -0,0 +1,53 @@
|
|||
#!/bin/sh
|
||||
# description: event trigger - test stacktrace-trigger
|
||||
|
||||
do_reset() {
|
||||
reset_trigger
|
||||
echo > set_event
|
||||
clear_trace
|
||||
}
|
||||
|
||||
fail() { #msg
|
||||
do_reset
|
||||
echo $1
|
||||
exit $FAIL
|
||||
}
|
||||
|
||||
if [ ! -f set_event -o ! -d events/sched ]; then
|
||||
echo "event tracing is not supported"
|
||||
exit_unsupported
|
||||
fi
|
||||
|
||||
if [ ! -f events/sched/sched_process_fork/trigger ]; then
|
||||
echo "event trigger is not supported"
|
||||
exit_unsupported
|
||||
fi
|
||||
|
||||
reset_tracer
|
||||
do_reset
|
||||
|
||||
FEATURE=`grep stacktrace events/sched/sched_process_fork/trigger`
|
||||
if [ -z "$FEATURE" ]; then
|
||||
echo "stacktrace trigger is not supported"
|
||||
exit_unsupported
|
||||
fi
|
||||
|
||||
echo "Test stacktrace tigger"
|
||||
echo 0 > trace
|
||||
echo 0 > options/stacktrace
|
||||
echo 'stacktrace' > events/sched/sched_process_fork/trigger
|
||||
( echo "forked")
|
||||
grep "<stack trace>" trace > /dev/null || \
|
||||
fail "stacktrace trigger on sched_process_fork did not work"
|
||||
|
||||
reset_trigger
|
||||
|
||||
echo "Test stacktrace semantic errors"
|
||||
|
||||
! echo "stacktrace:foo" > events/sched/sched_process_fork/trigger
|
||||
echo "stacktrace" > events/sched/sched_process_fork/trigger
|
||||
! echo "stacktrace" > events/sched/sched_process_fork/trigger
|
||||
|
||||
do_reset
|
||||
|
||||
exit 0
|
|
@ -0,0 +1,58 @@
|
|||
#!/bin/sh
|
||||
# description: event trigger - test traceon/off trigger
|
||||
|
||||
do_reset() {
|
||||
reset_trigger
|
||||
echo > set_event
|
||||
clear_trace
|
||||
}
|
||||
|
||||
fail() { #msg
|
||||
do_reset
|
||||
echo $1
|
||||
exit $FAIL
|
||||
}
|
||||
|
||||
if [ ! -f set_event -o ! -d events/sched ]; then
|
||||
echo "event tracing is not supported"
|
||||
exit_unsupported
|
||||
fi
|
||||
|
||||
if [ ! -f events/sched/sched_process_fork/trigger ]; then
|
||||
echo "event trigger is not supported"
|
||||
exit_unsupported
|
||||
fi
|
||||
|
||||
reset_tracer
|
||||
do_reset
|
||||
|
||||
echo "Test traceoff trigger"
|
||||
echo 1 > tracing_on
|
||||
echo 'traceoff' > events/sched/sched_process_fork/trigger
|
||||
( echo "forked")
|
||||
if [ `cat tracing_on` -ne 0 ]; then
|
||||
fail "traceoff trigger on sched_process_fork did not work"
|
||||
fi
|
||||
|
||||
reset_trigger
|
||||
|
||||
echo "Test traceon trigger"
|
||||
echo 0 > tracing_on
|
||||
echo 'traceon' > events/sched/sched_process_fork/trigger
|
||||
( echo "forked")
|
||||
if [ `cat tracing_on` -ne 1 ]; then
|
||||
fail "traceoff trigger on sched_process_fork did not work"
|
||||
fi
|
||||
|
||||
reset_trigger
|
||||
|
||||
echo "Test semantic error for traceoff/on trigger"
|
||||
! echo 'traceoff:badparam' > events/sched/sched_process_fork/trigger
|
||||
! echo 'traceoff+0' > events/sched/sched_process_fork/trigger
|
||||
echo 'traceon' > events/sched/sched_process_fork/trigger
|
||||
! echo 'traceon' > events/sched/sched_process_fork/trigger
|
||||
! echo 'traceoff' > events/sched/sched_process_fork/trigger
|
||||
|
||||
do_reset
|
||||
|
||||
exit 0