kernel_optimize_test/kernel/bpf
Daniel Borkmann 50b045a8c0 bpf, lru: avoid messing with eviction heuristics upon syscall lookup
One of the biggest issues we face right now with picking an LRU map over
a regular hash table is that a map walk out of user space, for example
to just dump the existing entries or to remove certain ones, will
completely mess up the LRU eviction heuristics, and wrong entries, such
as ones that were just created, will get evicted instead. The reason for
this is that we mark an entry as "in use" via bpf_lru_node_set_ref()
from the system call lookup side as well. Thus, upon a walk, all entries
get marked, so the information about the actual least recently used
entries is "lost".

In the case of Cilium, where the LRU map can be used (among other
things) as a BPF-based connection tracker, this behavior causes
disruption upon control plane changes that need to walk the map from
user space to evict certain entries. The conclusion from the bpfconf [0]
discussion was that we should simply remove the marking from the system
call side, as no good use case could be found where it is actually
needed there. Therefore, this patch removes the marking for the regular
LRU and the per-CPU flavor. Should the need ever arise in the future,
the behavior could be selected via a map creation flag, but for the
reason mentioned above we avoid that here.

  [0] http://vger.kernel.org/bpfconf.html
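
The direction, then, is to keep marking on the program-side lookup but
skip it on the syscall-side one. Below is a simplified sketch of that
split, not the exact hashtab.c code; the __htab_lru_lookup() helper and
its mark_ref parameter are illustrative only:

  /* Simplified, illustrative sketch of kernel-internal code; the real
   * lookup paths live in kernel/bpf/hashtab.c and kernel/bpf/syscall.c.
   */
  static void *__htab_lru_lookup(struct bpf_map *map, void *key,
                                 bool mark_ref)
  {
      struct htab_elem *l = __htab_map_lookup_elem(map, key);

      if (l && mark_ref)
          bpf_lru_node_set_ref(&l->lru_node);  /* refresh LRU state */
      return l ? l->key + round_up(map->key_size, 8) : NULL;
  }

  /* lookup from BPF programs: keep updating the LRU reference */
  static void *htab_lru_map_lookup_elem(struct bpf_map *map, void *key)
  {
      return __htab_lru_lookup(map, key, true);
  }

  /* lookup from the bpf(2) syscall (map walks/dumps): leave LRU alone */
  static void *htab_lru_map_lookup_elem_sys(struct bpf_map *map, void *key)
  {
      return __htab_lru_lookup(map, key, false);
  }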

Fixes: 29ba732acb ("bpf: Add BPF_MAP_TYPE_LRU_HASH")
Fixes: 8f8449384e ("bpf: Add BPF_MAP_TYPE_LRU_PERCPU_HASH")
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-05-14 10:47:29 -07:00
arraymap.c bpf: allow for key-less BTF in array map 2019-04-09 17:05:46 -07:00
bpf_lru_list.c
bpf_lru_list.h
btf.c bpf: allow for key-less BTF in array map 2019-04-09 17:05:46 -07:00
cgroup.c bpf: add map helper functions push, pop, peek in more BPF programs 2019-04-16 10:24:02 +02:00
core.c bpf: fix out of bounds backwards jmps due to dead code removal 2019-05-10 18:49:27 -07:00
cpumap.c bpf: cpumap memory prefetchw optimizations for struct page 2019-04-17 19:09:25 -07:00
devmap.c bpf: devmap: fix use-after-free Read in __dev_map_entry_free 2019-05-14 01:25:49 +02:00
disasm.c bpf: implement lookup-free direct value access for maps 2019-04-09 17:05:46 -07:00
disasm.h bpf: Remove struct bpf_verifier_env argument from print_bpf_insn 2018-03-23 17:38:57 +01:00
hashtab.c bpf, lru: avoid messing with eviction heuristics upon syscall lookup 2019-05-14 10:47:29 -07:00
helpers.c bpf: Introduce bpf_strtol and bpf_strtoul helpers 2019-04-12 13:54:59 -07:00
inode.c bpf: switch to ->free_inode() 2019-05-01 22:43:26 -04:00
local_storage.c bpf: add program side {rd, wr}only support for maps 2019-04-09 17:05:46 -07:00
lpm_trie.c bpf: add program side {rd, wr}only support for maps 2019-04-09 17:05:46 -07:00
Makefile bpf: add queue and stack maps 2018-10-19 13:24:31 -07:00
map_in_map.c bpf: set inner_map_meta->spin_lock_off correctly 2019-02-27 17:03:13 -08:00
map_in_map.h
offload.c bpf: offload: add priv field for drivers 2019-02-12 17:07:09 +01:00
percpu_freelist.c bpf: fix lockdep false positive in percpu_freelist 2019-01-31 23:18:21 +01:00
percpu_freelist.h bpf: fix lockdep false positive in percpu_freelist 2019-01-31 23:18:21 +01:00
queue_stack_maps.c bpf: add program side {rd, wr}only support for maps 2019-04-09 17:05:46 -07:00
reuseport_array.c bpf: Introduce BPF_MAP_TYPE_REUSEPORT_SOCKARRAY 2018-08-11 01:58:46 +02:00
stackmap.c bpf: fix lockdep false positive in stackmap 2019-02-11 16:36:24 +01:00
syscall.c bpf: add map_lookup_elem_sys_only for lookups from syscall side 2019-05-14 10:47:29 -07:00
tnum.c bpf/verifier: improve register value range tracking with ARSH 2018-04-29 08:45:53 -07:00
verifier.c bpf: fix undefined behavior in narrow load handling 2019-05-13 02:05:50 +02:00
xskmap.c Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net 2018-10-19 11:03:06 -07:00