dma-mapping: Clarify valid conditions for CPU cache line overlap

Rename the DMA_ATTR_CPU_CACHE_CLEAN attribute to better reflect that it
is a debugging aid informing the DMA core code that CPU cache line
overlaps are allowed, and refine the documentation describing its use.

Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Link: https://lore.kernel.org/r/20260316-dma-debug-overlap-v3-3-1dde90a7f08b@nvidia.com
Leon Romanovsky 2026-03-16 21:06:47 +02:00 committed by Marek Szyprowski
parent 6f45b1604c
commit 9bb0a4d6a4
5 changed files with 24 additions and 18 deletions


@@ -149,11 +149,17 @@ For architectures that require cache flushing for DMA coherence
 DMA_ATTR_MMIO will not perform any cache flushing. The address
 provided must never be mapped cacheable into the CPU.
-DMA_ATTR_CPU_CACHE_CLEAN
-------------------------
+DMA_ATTR_DEBUGGING_IGNORE_CACHELINES
+------------------------------------
-This attribute indicates the CPU will not dirty any cacheline overlapping this
-DMA_FROM_DEVICE/DMA_BIDIRECTIONAL buffer while it is mapped. This allows
-multiple small buffers to safely share a cacheline without risk of data
-corruption, suppressing DMA debug warnings about overlapping mappings.
-All mappings sharing a cacheline should have this attribute.
+This attribute indicates that CPU cache lines may overlap for buffers mapped
+with DMA_FROM_DEVICE or DMA_BIDIRECTIONAL.
+Such overlap may occur when callers map multiple small buffers that reside
+within the same cache line. In this case, callers must guarantee that the CPU
+will not dirty these cache lines after the mappings are established. When this
+condition is met, multiple buffers can safely share a cache line without risking
+data corruption.
+All mappings that share a cache line must set this attribute to suppress DMA
+debug warnings about overlapping mappings.


@@ -2912,10 +2912,10 @@ EXPORT_SYMBOL_GPL(virtqueue_add_inbuf);
  * @data: the token identifying the buffer.
  * @gfp: how to do memory allocations (if necessary).
  *
- * Same as virtqueue_add_inbuf but passes DMA_ATTR_CPU_CACHE_CLEAN to indicate
- * that the CPU will not dirty any cacheline overlapping this buffer while it
- * is available, and to suppress overlapping cacheline warnings in DMA debug
- * builds.
+ * Same as virtqueue_add_inbuf but passes DMA_ATTR_DEBUGGING_IGNORE_CACHELINES
+ * to indicate that the CPU will not dirty any cacheline overlapping this buffer
+ * while it is available, and to suppress overlapping cacheline warnings in DMA
+ * debug builds.
  *
  * Caller must ensure we don't call this with other virtqueue operations
  * at the same time (except where noted).
@@ -2928,7 +2928,7 @@ int virtqueue_add_inbuf_cache_clean(struct virtqueue *vq,
 				    gfp_t gfp)
 {
 	return virtqueue_add(vq, &sg, num, 0, 1, data, NULL, false, gfp,
-			     DMA_ATTR_CPU_CACHE_CLEAN);
+			     DMA_ATTR_DEBUGGING_IGNORE_CACHELINES);
 }
 EXPORT_SYMBOL_GPL(virtqueue_add_inbuf_cache_clean);


@@ -80,11 +80,11 @@
 #define DMA_ATTR_MMIO		(1UL << 10)
 /*
- * DMA_ATTR_CPU_CACHE_CLEAN: Indicates the CPU will not dirty any cacheline
- * overlapping this buffer while it is mapped for DMA. All mappings sharing
- * a cacheline must have this attribute for this to be considered safe.
+ * DMA_ATTR_DEBUGGING_IGNORE_CACHELINES: Indicates the CPU cache line can be
+ * overlapped. All mappings sharing a cacheline must have this attribute for
+ * this to be considered safe.
  */
-#define DMA_ATTR_CPU_CACHE_CLEAN	(1UL << 11)
+#define DMA_ATTR_DEBUGGING_IGNORE_CACHELINES	(1UL << 11)
 /*
  * A dma_addr_t can hold any valid DMA or bus address for the platform. It can


@@ -33,7 +33,7 @@ TRACE_DEFINE_ENUM(DMA_NONE);
 		{ DMA_ATTR_NO_WARN, "NO_WARN" }, \
 		{ DMA_ATTR_PRIVILEGED, "PRIVILEGED" }, \
 		{ DMA_ATTR_MMIO, "MMIO" }, \
-		{ DMA_ATTR_CPU_CACHE_CLEAN, "CACHE_CLEAN" })
+		{ DMA_ATTR_DEBUGGING_IGNORE_CACHELINES, "CACHELINES_OVERLAP" })
 DECLARE_EVENT_CLASS(dma_map,
 	TP_PROTO(struct device *dev, phys_addr_t phys_addr, dma_addr_t dma_addr,


@@ -601,7 +601,7 @@ static void add_dma_entry(struct dma_debug_entry *entry, unsigned long attrs)
 	unsigned long flags;
 	int rc;
-	entry->is_cache_clean = !!(attrs & DMA_ATTR_CPU_CACHE_CLEAN);
+	entry->is_cache_clean = attrs & DMA_ATTR_DEBUGGING_IGNORE_CACHELINES;
 	bucket = get_hash_bucket(entry, &flags);
 	hash_bucket_add(bucket, entry);