tracing/perf: Avoid perf_trace_buf_*() in perf_trace_##call() when possible
perf_trace_buf_prepare() + perf_trace_buf_submit(task => NULL) make no sense if hlist_empty(head). Change perf_trace_##call() to check ->perf_events beforehand and do nothing if it is empty.

This removes the overhead for tasks without events associated with them. For example, "perf record -e sched:sched_switch -p1" attaches the counter(s) to a single task, but every task in the system will do perf_trace_buf_prepare/submit() just to realize that it was not attached to this event.

However, we can only do this if __task == NULL, so we also add the __builtin_constant_p(__task) check. With this patch "perf bench sched pipe" shows approximately 4% improvement when "perf record -p1" runs in parallel; many thanks to Steven for the testing.

Link: http://lkml.kernel.org/r/20130806160847.GA2746@redhat.com

Tested-by: David Ahern <dsahern@gmail.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
commit d027e6a9c8
parent 12473965c3
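To make the __builtin_constant_p(!__task) point above concrete, here is a minimal userspace sketch; it is not kernel code, and names such as RETURN_IF_NO_CONSUMERS and the toy struct definitions are invented for illustration. Only the shape of the condition mirrors the patch below.

/*
 * Hypothetical sketch of why the __builtin_constant_p(!__task) guard
 * is free.  When the tracepoint macro expands with a literal NULL
 * task, the whole test folds at compile time to hlist_empty() and the
 * function can bail out before any buffer setup; when __task is a
 * runtime variable, __builtin_constant_p() yields 0 and the guard is
 * compiled out entirely, keeping the old behavior.
 * Build with optimizations, e.g.: cc -O2 demo.c
 */
#include <stdio.h>
#include <stddef.h>

struct task { int pid; };
struct hlist_head { void *first; };

static int hlist_empty(const struct hlist_head *h)
{
	return h->first == NULL;
}

/* Shape of the early-return check the patch adds to perf_trace_##call(). */
#define RETURN_IF_NO_CONSUMERS(head, __task)				\
	do {								\
		if (__builtin_constant_p(!(__task)) && !(__task) &&	\
					hlist_empty(head))		\
			return;						\
	} while (0)

static void trace_without_task(struct hlist_head *head)
{
	/* __task is the constant NULL: test folds to hlist_empty(head). */
	RETURN_IF_NO_CONSUMERS(head, NULL);
	puts("prepare + submit (per-CPU consumers exist)");
}

static void trace_with_task(struct hlist_head *head, struct task *task)
{
	/* __task is a runtime value: guard vanishes, always fall through. */
	RETURN_IF_NO_CONSUMERS(head, task);
	printf("prepare + submit for pid %d\n", task ? task->pid : -1);
}

int main(void)
{
	struct hlist_head empty = { NULL };
	struct task init = { .pid = 1 };

	trace_without_task(&empty);	/* returns early, prints nothing */
	trace_with_task(&empty, &init);	/* must not skip: per-task events */
	return 0;
}

The design point the sketch illustrates: the guard only ever fires in expansions where __task is the literal NULL, so tracepoints that pass a real task (where hlist_empty() alone would be an incorrect reason to skip) pay no cost at all.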
@@ -667,6 +667,12 @@ perf_trace_##call(void *__data, proto)			\
 	int rctx;							\
 									\
 	__data_size = ftrace_get_offsets_##call(&__data_offsets, args); \
+									\
+	head = this_cpu_ptr(event_call->perf_events);			\
+	if (__builtin_constant_p(!__task) && !__task &&			\
+				hlist_empty(head))			\
+		return;							\
+									\
 	__entry_size = ALIGN(__data_size + sizeof(*entry) + sizeof(u32),\
 			     sizeof(u64));				\
 	__entry_size -= sizeof(u32);					\
@@ -681,7 +687,6 @@ perf_trace_##call(void *__data, proto)			\
 									\
 	{ assign; }							\
 									\
-	head = this_cpu_ptr(event_call->perf_events);			\
 	perf_trace_buf_submit(entry, __entry_size, rctx, __addr,	\
 				__count, &__regs, head, __task);	\
 }