author     Magnus Karlsson <magnus.karlsson@intel.com>   2021-06-18 09:58:05 +0200
committer  Daniel Borkmann <daniel@iogearbox.net>        2021-06-18 16:59:20 +0200
commit     f654fae47e83e56b454fbbfd0af0a4f232e356d6 (patch)
tree       3634a11285d2edd56ef556ea22e2409f3ec6ba7a /net
parent     2f99619820c2269534eb2c0cde44870313c6d353 (diff)
xsk: Fix broken Tx ring validation
Fix broken Tx ring validation for AF_XDP. The commit under the Fixes tag
fixed an off-by-one error in the validation but introduced another error:
descriptors are now let through even if they straddle a chunk boundary,
which they are not allowed to do in aligned mode. Worse, they are let
through even if they straddle the end of the umem itself, tricking the
kernel into reading data outside the allowed umem region, which might or
might not be mapped at all.

Fix this by reintroducing the old code, but subtracting one from the
length to fix the off-by-one error that the original patch was
addressing. The test chunk != chunk_end makes sure packets do not
straddle chunk boundaries. Note that packets of zero length are allowed
in the interface, hence the test that the length is non-zero.

Fixes: ac31565c2193 ("xsk: Fix for xp_aligned_validate_desc() when len == chunk_size")
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Acked-by: Björn Töpel <bjorn@kernel.org>
Link: https://lore.kernel.org/bpf/20210618075805.14412-1-magnus.karlsson@gmail.com
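A minimal user-space sketch of the boundary check described above, assuming a
power-of-two chunk size so the chunk start is addr & ~(CHUNK_SIZE - 1), which
mirrors what the kernel's xp_aligned_extract_addr() computes via the pool's
chunk mask. CHUNK_SIZE, extract_chunk() and desc_within_one_chunk() are
hypothetical names chosen for illustration, not kernel API:

/* Hedged sketch, not kernel code: all names below are hypothetical. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define CHUNK_SIZE 2048ULL      /* assumed power-of-two chunk size */

static uint64_t extract_chunk(uint64_t addr)
{
        /* Start address of the chunk containing addr, like the kernel's
         * xp_aligned_extract_addr() with a power-of-two chunk mask. */
        return addr & ~(CHUNK_SIZE - 1);
}

static bool desc_within_one_chunk(uint64_t addr, uint32_t len)
{
        if (len == 0)
                return true;    /* zero-length packets are allowed */

        /* addr + len - 1 is the last byte the descriptor touches; the
         * "- 1" is the off-by-one fix, so a frame that ends exactly on a
         * chunk boundary still counts as contained in its chunk. */
        return extract_chunk(addr) == extract_chunk(addr + len - 1);
}

int main(void)
{
        /* A full-chunk frame at a chunk start: valid (prints 1). This is
         * the len == chunk_size case the original off-by-one fix targeted. */
        printf("%d\n", desc_within_one_chunk(0, CHUNK_SIZE));
        /* Two bytes straddling chunks 0 and 1: invalid (prints 0). */
        printf("%d\n", desc_within_one_chunk(CHUNK_SIZE - 1, 2));
        return 0;
}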
Diffstat (limited to 'net')
-rw-r--r--  net/xdp/xsk_queue.h | 11 +++++++----
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/net/xdp/xsk_queue.h b/net/xdp/xsk_queue.h
index 9d2a89d793c0..9ae13cccfb28 100644
--- a/net/xdp/xsk_queue.h
+++ b/net/xdp/xsk_queue.h
@@ -128,12 +128,15 @@ static inline bool xskq_cons_read_addr_unchecked(struct xsk_queue *q, u64 *addr)
 static inline bool xp_aligned_validate_desc(struct xsk_buff_pool *pool,
 					    struct xdp_desc *desc)
 {
-	u64 chunk;
-
-	if (desc->len > pool->chunk_size)
-		return false;
+	u64 chunk, chunk_end;
 
 	chunk = xp_aligned_extract_addr(pool, desc->addr);
+	if (likely(desc->len)) {
+		chunk_end = xp_aligned_extract_addr(pool, desc->addr + desc->len - 1);
+		if (chunk != chunk_end)
+			return false;
+	}
+
 	if (chunk >= pool->addrs_cnt)
 		return false;
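A second hedged sketch (hypothetical sizes and names again, not kernel code)
of why the chunk != chunk_end test also guards the end of the umem: the
chunk >= pool->addrs_cnt test above only looks at the descriptor's start
address, so a frame that begins inside the last chunk but runs past the umem
would otherwise slip through:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define CHUNK_SIZE 2048ULL            /* assumed power-of-two chunk size */
#define UMEM_SIZE  (4 * CHUNK_SIZE)   /* assumed 4-chunk umem */

static bool validate_aligned(uint64_t addr, uint32_t len)
{
        uint64_t chunk = addr & ~(CHUNK_SIZE - 1);

        if (len) {
                uint64_t chunk_end = (addr + len - 1) & ~(CHUNK_SIZE - 1);

                if (chunk != chunk_end) /* straddles a chunk boundary */
                        return false;
        }
        return chunk < UMEM_SIZE;       /* start chunk inside the umem */
}

int main(void)
{
        /* Starts 16 bytes before the end of the umem and runs 16 bytes
         * past it; the start-address bounds test alone would pass this,
         * but the chunk_end test rejects it (prints 0). */
        printf("%d\n", validate_aligned(UMEM_SIZE - 16, 32));
        /* Exactly fills the last chunk: valid (prints 1). */
        printf("%d\n", validate_aligned(UMEM_SIZE - CHUNK_SIZE, CHUNK_SIZE));
        return 0;
}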