kernel_optimize_test/net
Maciej Żenczykowski aefa927744 bpf: Do not change gso_size during bpf_skb_change_proto()
[ Upstream commit 364745fbe981a4370f50274475da4675661104df ]

This is technically a backwards-incompatible change in behaviour, but I'm
going to argue that it is very unlikely to break things, and likely to fix
*far* more than it breaks.

In no particular order, various reasons follow:

(a) I've long had a bug assigned to myself to debug a super rare kernel crash
on Android Pixel phones which can (per stacktrace) be traced back to BPF clat
IPv6 to IPv4 protocol conversion causing some sort of ugly failure much later
on during transmit deep in the GSO engine, AFAICT precisely because of this
change to gso_size, though I've never been able to manually reproduce it. I
believe it may be related to the particular network offload support of attached
USB ethernet dongle being used for tethering off of an IPv6-only cellular
connection. The reason might be that we end up with more segments than the max
permitted, or with a GSO packet with only one segment... (either way we break
some assumption and hit a BUG_ON)

(b) There is no check that the gso_size is > 20 when reducing it by 20, so we
might end up with a gso_size of 0, or with one that has underflowed (gso_size
is an unsigned 16-bit field, so it wraps around rather than going negative).
This can't possibly be good. Indeed this is probably somehow exploitable (or
at least can result in a kernel crash) by delivering crafted packets and perhaps
triggering an infinite loop or a divide by zero... As a reminder: gso_size (MSS)
is related to MTU, but not directly derived from it: gso_size/MSS may be
significantly smaller than what one would get by deriving it from the local MTU.
And on some NICs (which do loose MTU checking on receive) it may even be larger;
for example my work PC with a 1500 byte MTU can receive 1520 byte frames [and
sometimes does, due to bugs in a vendor plat46 implementation]. Indeed even just
going from 21 to 1 is potentially problematic, because it increases the number
of segments by a factor of 21 (think DoS, or some other crash due to too many
segments); the sketch below spells out the arithmetic.
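
For illustration only, here is a tiny standalone C sketch of the arithmetic
above; the MSS and payload values are hypothetical, and no real skb fields are
modeled:

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
      /* gso_size is an unsigned 16-bit field, so "reducing" a value that is
       * smaller than the decrement wraps around instead of going negative.
       */
      uint16_t gso_size = 10;                         /* hypothetical tiny MSS */
      uint16_t wrapped = (uint16_t)(gso_size - 20);   /* 65526, not -10 */

      /* Going from MSS 21 to MSS 1 multiplies the segment count by 21. */
      uint32_t payload = 4200;                        /* hypothetical GSO payload */
      uint32_t segs_at_21 = (payload + 21 - 1) / 21;  /* 200 segments */
      uint32_t segs_at_1  = payload / 1;              /* 4200 segments */

      printf("10 - 20 as u16: %u\n", wrapped);
      printf("segments at MSS 21: %u, at MSS 1: %u\n", segs_at_21, segs_at_1);
      return 0;
  }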

(c) It's always safe to not increase the gso_size, because it doesn't result in
the max packet size increasing.  So the skb_increase_gso_size() call was always
unnecessary for correctness (and outright undesirable, see later). As such the
only part which is potentially dangerous (ie. could cause backwards compatibility
issues) is the removal of the skb_decrease_gso_size() call.

(d) If the packets are ultimately destined to the local device, then there is
absolutely no benefit to playing around with gso_size. It only matters if the
packets will egress the device. ie. we're either forwarding, or transmitting
from the device.

(e) This logic only triggers for packets which are GSO. It does not trigger for
skbs which are not GSO. It will not convert a non-GSO MTU sized packet into a
GSO packet (and you don't even know what the MTU is, so you can't even fix it).
As such your transmit path must *already* be able to handle an MTU 20 bytes
larger than your receive path (for IPv4 to IPv6 translation) - and indeed 28
bytes larger due to IPv4 fragments. Thus removing the skb_decrease_gso_size()
call doesn't actually increase the size of the packets your transmit side must
be able to handle. ie. to handle non-GSO max-MTU packets, the IPv4/IPv6 device/
route MTUs must already be set correctly. For example, with an IPv4 egress MTU
of 1500, IPv4 to IPv6 translation will already build 1520 byte IPv6 frames, so
you need a 1520 byte device MTU. This means that if your IPv6 device's egress
MTU is 1280, your IPv4 route MTU must be 1260 (and actually 1252, because of
the need to handle fragments). This is to handle normal non-GSO packets. Thus
the reduction is simply not needed for GSO packets, because when they're
correctly built, they will already be the right size (the arithmetic is
sketched below).
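
The MTU bookkeeping above as a standalone sketch; the 20 and 28 byte overheads
are the IPv4-to-IPv6 header growth and that growth plus an 8-byte IPv6
fragment header, and the example MTUs are the ones used above:

  #include <stdio.h>

  #define V4_TO_V6_OVERHEAD 20  /* IPv6 header is 20 bytes larger than IPv4 */
  #define V4_FRAG_OVERHEAD  28  /* plus an 8-byte IPv6 fragment header */

  int main(void)
  {
      int ipv4_egress_mtu = 1500;  /* example IPv4 egress MTU */
      int ipv6_device_mtu = 1280;  /* example IPv6 device MTU */

      printf("IPv4 MTU %d -> up to %d byte translated IPv6 frames\n",
             ipv4_egress_mtu, ipv4_egress_mtu + V4_TO_V6_OVERHEAD);
      printf("IPv6 device MTU %d -> IPv4 route MTU %d (%d with fragments)\n",
             ipv6_device_mtu,
             ipv6_device_mtu - V4_TO_V6_OVERHEAD,
             ipv6_device_mtu - V4_FRAG_OVERHEAD);
      return 0;
  }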

(f) TSO/GSO should be able to exactly undo GRO: the number of packets (TCP
segments) should not be modified, so that TCP's MSS counting works correctly
(this matters for congestion control). If protocol conversion changes the
gso_size, then the number of TCP segments may increase or decrease. Packet loss
after protocol conversion can then result in partial loss of the MSS-sized
segments the sender sent. How is the sending TCP stack going to react to
receiving ACKs/SACKs in the middle of the segments it sent? (A made-up numeric
example follows.)
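
Here is a made-up numeric example of that boundary mismatch (MSS 1400, ten
coalesced segments, and a hypothetical post-conversion gso_size of 1420; none
of these numbers come from real traffic):

  #include <stdio.h>

  int main(void)
  {
      unsigned int payload    = 14000;  /* 10 sender segments of MSS 1400 */
      unsigned int sender_mss = 1400;
      unsigned int new_mss    = 1420;   /* hypothetical post-conversion gso_size */
      unsigned int off;

      /* Sender segment boundaries fall every 1400 bytes; re-segmenting the
       * coalesced payload at 1420 bytes produces boundaries that straddle
       * them, so losing one re-segmented packet loses parts of two of the
       * sender's original segments.
       */
      printf("sender boundaries at multiples of %u\n", sender_mss);
      printf("re-segmented boundaries:");
      for (off = new_mss; off < payload; off += new_mss)
          printf(" %u", off);
      printf("\n");
      return 0;
  }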

(g) skb_{decrease,increase}_gso_size() are already no-ops for the GSO_BY_FRAGS
case (besides triggering a WARN_ON_ONCE). This means you already cannot
guarantee that gso_size (and thus the resulting packet MTU) is changed; ie. you
must assume it won't be changed (the guard is paraphrased in the sketch below).
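
For reference, the behaviour relied on in (b) and (g) boils down to roughly
the following; this is a userspace paraphrase of the guard in those helpers,
not the kernel source (GSO_BY_FRAGS is the kernel's 0xffff sentinel):

  #include <stdint.h>
  #include <stdio.h>

  #define GSO_BY_FRAGS 0xffff  /* the kernel's "segment by frag list" sentinel */

  static void decrease_gso_size(uint16_t *gso_size, uint16_t decrement)
  {
      if (*gso_size == GSO_BY_FRAGS) {
          fprintf(stderr, "warn-once: GSO_BY_FRAGS, leaving gso_size alone\n");
          return;  /* the no-op case that point (g) relies on */
      }
      *gso_size -= decrement;  /* no check that *gso_size > decrement, cf. (b) */
  }

  int main(void)
  {
      uint16_t mss = GSO_BY_FRAGS;

      decrease_gso_size(&mss, 20);
      printf("GSO_BY_FRAGS stays %#x\n", mss);
      return 0;
  }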

(h) Changing gso_size is outright buggy for UDP GSO packets, where framing
matters (I believe that's also the case for SCTP, but it's already excluded
by [g]).  So the only remaining case is TCP, which also doesn't want it
(see [f]).

(i) See also the reasoning on the previous attempt at fixing this
(commit fa7b83bf3b156c767f3e4a25bbf3817b08f3ff8e) which shows that the current
behaviour causes TCP packet loss:

  In the forwarding path GRO -> BPF 6 to 4 -> GSO for TCP traffic, the
  coalesced packet payload can be > MSS, but < MSS + 20.

  bpf_skb_proto_6_to_4() will upgrade the MSS and it can be > the payload
  length. tcp_gso_segment() then checks whether the payload length is <= MSS,
  and that condition causes the packet to be dropped.

  tcp_gso_segment():
    [...]
    mss = skb_shinfo(skb)->gso_size;
    if (unlikely(skb->len <= mss)) goto out;
    [...]

Thus changing the gso_size is simply a very bad idea. Increasing it is
unnecessary and buggy, and decreasing it can underflow.
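
For context: per the subject line, the change is simply to stop touching
gso_size in the protocol-conversion helpers. Roughly, the GSO branch of
bpf_skb_proto_{4_to_6,6_to_4}() is left doing only the following (a sketch
from memory of net/core/filter.c, not the literal diff):

  if (skb_is_gso(skb)) {
      struct skb_shared_info *shinfo = skb_shinfo(skb);

      /* gso_size is deliberately left alone; only flag the skb so the
       * header is re-validated and gso_segs is recomputed when the
       * packet is actually segmented.
       */
      shinfo->gso_type |= SKB_GSO_DODGY;
      shinfo->gso_segs = 0;
  }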

Fixes: 6578171a7f ("bpf: add bpf_skb_change_proto helper")
Signed-off-by: Maciej Żenczykowski <maze@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Dongseok Yi <dseok.yi@samsung.com>
Cc: Willem de Bruijn <willemb@google.com>
Link: https://lore.kernel.org/bpf/CANP3RGfjLikQ6dg=YpBU0OeHvyv7JOki7CyOUS9modaXAi-9vQ@mail.gmail.com
Link: https://lore.kernel.org/bpf/20210617000953.2787453-2-zenczykowski@gmail.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-07-14 16:56:27 +02:00
6lowpan
9p net: 9p: advance iov on empty read 2021-04-07 15:00:08 +02:00
802
8021q
appletalk appletalk: Fix skb allocation size in loopback case 2021-04-07 15:00:08 +02:00
atm
ax25
batman-adv batman-adv: Avoid WARN_ON timing related checks 2021-06-23 14:42:41 +02:00
bluetooth Bluetooth: Remove spurious error message 2021-07-14 16:55:34 +02:00
bpf
bpfilter
bridge net: bridge: fix vlan tunnel dst refcnt when egressing 2021-06-23 14:42:53 +02:00
caif net: caif: fix memory leak in cfusbl_device_notify 2021-06-10 13:39:25 +02:00
can can: j1939: j1939_sk_setsockopt(): prevent allocation of j1939 filter for optlen == 0 2021-07-14 16:56:27 +02:00
ceph
core bpf: Do not change gso_size during bpf_skb_change_proto() 2021-07-14 16:56:27 +02:00
dcb
dccp ipv6: weaken the v4mapped source check 2021-03-30 14:32:01 +02:00
decnet
dns_resolver
dsa net: dsa: tag_8021q: fix the VLAN IDs used for encoding sub-VLANs 2021-06-10 13:39:17 +02:00
ethernet
ethtool net: ethtool: clear heap allocations for ethtool function 2021-06-30 08:47:20 -04:00
hsr net: hsr: fix mac_len checks 2021-06-03 09:00:50 +02:00
ieee802154 net: ieee802154: fix null deref in parse dev addr 2021-06-18 10:00:03 +02:00
ife
ipv4 net/ipv4: swap flow ports when validating source 2021-07-14 16:56:25 +02:00
ipv6 ipv6: exthdrs: do not blindly use init_net 2021-07-14 16:56:27 +02:00
iucv
kcm
key
l2tp
l3mdev
lapb
llc
mac80211 mac80211: remove iwlwifi specific workaround NDPs of null_response 2021-07-14 16:56:26 +02:00
mac802154 net: mac802154: Fix general protection fault 2021-04-14 08:42:13 +02:00
mpls
mptcp mptcp: generate subflow hmac after mptcp_finish_join() 2021-07-14 16:56:17 +02:00
ncsi net/ncsi: Avoid channel_monitor hrtimer deadlock 2021-04-14 08:42:08 +02:00
netfilter netfilter: nf_tables_offload: check FLOW_DISSECTOR_KEY_BASIC in VLAN transfer logic 2021-07-14 16:56:24 +02:00
netlabel netlabel: Fix memory leak in netlbl_mgmt_add_common 2021-07-14 16:56:22 +02:00
netlink netlink: disable IRQs for netlink_lock_table() 2021-06-16 12:01:36 +02:00
netrom
nfc net/nfc/rawsock.c: fix a permission check bug 2021-06-16 12:01:35 +02:00
nsh
openvswitch openvswitch: meter: fix race when getting now_ms. 2021-06-03 09:00:47 +02:00
packet net/packet: annotate accesses to po->ifindex 2021-06-30 08:47:22 -04:00
phonet
psample
qrtr net: qrtr: ns: Fix error return code in qrtr_ns_init() 2021-07-14 16:56:16 +02:00
rds net: rds: fix memory leak in rds_recvmsg 2021-06-23 14:42:42 +02:00
rfkill
rose
rxrpc
sched pkt_sched: sch_qfq: fix qfq_change_class() error path 2021-07-14 16:56:25 +02:00
sctp sctp: fix a SCTP_MIB_CURRESTAB leak in sctp_sf_do_dupcook_b 2021-05-19 10:13:06 +02:00
smc net/smc: remove device from smcd_dev_list after failed device_add() 2021-06-03 09:00:48 +02:00
strparser
sunrpc SUNRPC: Should wake up the privileged task firstly. 2021-07-14 16:55:42 +02:00
switchdev
tipc tipc: fix unique bearer names sanity check 2021-06-10 13:39:22 +02:00
tls tls: prevent oversized sendfile() hangs by ignoring MSG_MORE 2021-07-14 16:56:24 +02:00
unix net/af_unix: fix a data-race in unix_dgram_sendmsg / unix_release_sock 2021-06-23 14:42:46 +02:00
vmw_vsock vsock/virtio: free queued packets when closing socket 2021-05-14 09:50:41 +02:00
wimax
wireless cfg80211: call cfg80211_leave_ocb when switching away from OCB 2021-06-30 08:47:19 -04:00
x25 net/x25: Return the correct errno code 2021-06-18 10:00:06 +02:00
xdp xsk: Fix broken Tx ring validation 2021-07-14 16:56:23 +02:00
xfrm xfrm: Fix xfrm offload fallback fail case 2021-07-14 16:56:25 +02:00
compat.c net: Return the correct errno code 2021-06-18 10:00:06 +02:00
devres.c
Kconfig
Makefile
socket.c net: make get_net_ns return error if NET_NS is disabled 2021-06-23 14:42:44 +02:00
sysctl_net.c