author    | Chris Wilson <chris@chris-wilson.co.uk> | 2019-12-16 12:26:03 +0000
committer | Chris Wilson <chris@chris-wilson.co.uk> | 2019-12-16 23:13:12 +0000
commit    | 0a9a5532d2962a74004e041eeb2217de930f8c27 (patch)
tree      | 002a6ff59443b6b210ce7c917add4038daf166fe /drivers/gpu/drm/i915/intel_memory_region.c
parent    | 884054403393a46ae88f24b0d76581278222f5ce (diff)
download  | linux-0a9a5532d2962a74004e041eeb2217de930f8c27.tar.gz
drm/i915/gem: Apply lmem size restriction to get_pages
When creating a handle, it is just that, an abstract handle. The fact
that we cannot currently support a handle larger than the size of the
backing storage is an artifact of our whole-object-at-a-time handling in
get_pages() and, being an implementation limitation, is best handled at
that point -- similar to shmem, where we only barf when asked to
populate the whole object if it is larger than RAM. (Pinning the whole
object at a time is a major hindrance that we are likely to have to
overcome in the near future.) In the case of the buddy allocator, the
late check is preferable, as the request size may often be smaller than
the required size.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191216122603.2598155-1-chris@chris-wilson.co.uk
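For illustration, a minimal userspace sketch of the late size check described in the message above, assuming a buddy allocator whose smallest block is chunk_size bytes and whose largest order is max_order; check_buddy_request() and its plain integer parameters are hypothetical stand-ins for the real i915 structures, not kernel code:

#include <stdint.h>
#include <stdio.h>

/*
 * Illustrative model of the check added to
 * __intel_memory_region_get_pages_buddy(): the request is rejected only
 * when backing pages are actually needed, not at handle creation.
 */
static int check_buddy_request(uint64_t size, unsigned int max_order,
			       uint64_t chunk_size)
{
	/* Largest block the allocator can hand out: chunk_size * 2^max_order. */
	const uint64_t limit = chunk_size << max_order;

	return size > limit ? -7 /* -E2BIG */ : 0;
}

int main(void)
{
	/* Hypothetical numbers: 4 KiB chunks, max_order 20 -> 4 GiB limit. */
	printf("%d\n", check_buddy_request(8ull << 30, 20, 4096)); /* -7 */
	printf("%d\n", check_buddy_request(1ull << 30, 20, 4096)); /*  0 */
	return 0;
}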
Diffstat (limited to 'drivers/gpu/drm/i915/intel_memory_region.c')
-rw-r--r-- | drivers/gpu/drm/i915/intel_memory_region.c | 3 |
1 file changed, 3 insertions(+), 0 deletions(-)
diff --git a/drivers/gpu/drm/i915/intel_memory_region.c b/drivers/gpu/drm/i915/intel_memory_region.c
index baaeaecc64af..e24c280e5930 100644
--- a/drivers/gpu/drm/i915/intel_memory_region.c
+++ b/drivers/gpu/drm/i915/intel_memory_region.c
@@ -73,6 +73,9 @@ __intel_memory_region_get_pages_buddy(struct intel_memory_region *mem,
 		min_order = ilog2(size) - ilog2(mem->mm.chunk_size);
 	}
 
+	if (size > BIT(mem->mm.max_order) * mem->mm.chunk_size)
+		return -E2BIG;
+
 	n_pages = size >> ilog2(mem->mm.chunk_size);
 
 	mutex_lock(&mem->mm_lock);
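With this check in place, the largest request that __intel_memory_region_get_pages_buddy() will attempt to populate is BIT(mem->mm.max_order) * mem->mm.chunk_size bytes; anything larger now fails with -E2BIG at get_pages() time rather than being refused when the handle is created. (With the illustrative numbers from the sketch above, 4 KiB chunks and max_order 20, that limit works out to 2^20 * 4 KiB = 4 GiB.)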