Merge tag 'drm-misc-fixes-2025-08-28' of https://gitlab.freedesktop.org/drm/misc/kernel into drm-fixes

Several nouveau fixes to remove unused code, fix an error path, and be
less restrictive with the formats it accepts. A fix for amdgpu to pin
vmapped dma-bufs, and a revert for tegra to fix a regression in the
dma-buf / GEM code.

Signed-off-by: Dave Airlie <airlied@redhat.com>

From: Maxime Ripard <mripard@redhat.com>
Link: https://lore.kernel.org/r/20250828-hypersonic-colorful-squirrel-64f04b@houat
7 changed files with 88 additions and 60 deletions

drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c

@@ -285,6 +285,36 @@ static int amdgpu_dma_buf_begin_cpu_access(struct dma_buf *dma_buf,
return ret;
}
+
+static int amdgpu_dma_buf_vmap(struct dma_buf *dma_buf, struct iosys_map *map)
+{
+	struct drm_gem_object *obj = dma_buf->priv;
+	struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
+	int ret;
+
+	/*
+	 * Pin to keep buffer in place while it's vmap'ed. The actual
+	 * domain is not that important as long as it's mapable. Using
+	 * GTT and VRAM should be compatible with most use cases.
+	 */
+	ret = amdgpu_bo_pin(bo, AMDGPU_GEM_DOMAIN_GTT | AMDGPU_GEM_DOMAIN_VRAM);
+	if (ret)
+		return ret;
+
+	ret = drm_gem_dmabuf_vmap(dma_buf, map);
+	if (ret)
+		amdgpu_bo_unpin(bo);
+
+	return ret;
+}
+
+static void amdgpu_dma_buf_vunmap(struct dma_buf *dma_buf, struct iosys_map *map)
+{
+	struct drm_gem_object *obj = dma_buf->priv;
+	struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
+
+	drm_gem_dmabuf_vunmap(dma_buf, map);
+	amdgpu_bo_unpin(bo);
+}
const struct dma_buf_ops amdgpu_dmabuf_ops = {
.attach = amdgpu_dma_buf_attach,
.pin = amdgpu_dma_buf_pin,
@@ -294,8 +324,8 @@ const struct dma_buf_ops amdgpu_dmabuf_ops = {
.release = drm_gem_dmabuf_release,
.begin_cpu_access = amdgpu_dma_buf_begin_cpu_access,
.mmap = drm_gem_dmabuf_mmap,
-	.vmap = drm_gem_dmabuf_vmap,
-	.vunmap = drm_gem_dmabuf_vunmap,
+	.vmap = amdgpu_dma_buf_vmap,
+	.vunmap = amdgpu_dma_buf_vunmap,
};
/**
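For context, a minimal, hypothetical importer-side sketch of why the pin
matters: with amdgpu_dma_buf_vmap() now pinning the BO, a CPU mapping
obtained through the generic dma-buf API stays resident until the matching
vunmap. example_cpu_fill() is a made-up name; dma_buf_vmap_unlocked(),
dma_buf_vunmap_unlocked() and iosys_map_memset() are the stock kernel API.

#include <linux/dma-buf.h>
#include <linux/iosys-map.h>

static int example_cpu_fill(struct dma_buf *dmabuf, int value, size_t len)
{
	struct iosys_map map;
	int ret;

	ret = dma_buf_vmap_unlocked(dmabuf, &map);	/* exporter pins the BO */
	if (ret)
		return ret;

	iosys_map_memset(&map, 0, value, len);	/* safe: pages cannot move */

	dma_buf_vunmap_unlocked(dmabuf, &map);	/* exporter unpins again */
	return 0;
}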

drivers/gpu/drm/drm_gpuvm.c

@@ -40,7 +40,7 @@
* mapping's backing &drm_gem_object buffers.
*
* &drm_gem_object buffers maintain a list of &drm_gpuva objects representing
- * all existent GPU VA mappings using this &drm_gem_object as backing buffer.
+ * all existing GPU VA mappings using this &drm_gem_object as backing buffer.
*
* GPU VAs can be flagged as sparse, such that drivers may use GPU VAs to also
* keep track of sparse PTEs in order to support Vulkan 'Sparse Resources'.
@@ -72,7 +72,7 @@
* but it can also be a 'dummy' object, which can be allocated with
* drm_gpuvm_resv_object_alloc().
*
- * In order to connect a struct drm_gpuva its backing &drm_gem_object each
+ * In order to connect a struct drm_gpuva to its backing &drm_gem_object each
* &drm_gem_object maintains a list of &drm_gpuvm_bo structures, and each
* &drm_gpuvm_bo contains a list of &drm_gpuva structures.
*
@@ -81,7 +81,7 @@
* This is ensured by the API through drm_gpuvm_bo_obtain() and
* drm_gpuvm_bo_obtain_prealloc() which first look into the corresponding
* &drm_gem_object list of &drm_gpuvm_bos for an existing instance of this
- * particular combination. If not existent a new instance is created and linked
+ * particular combination. If not present, a new instance is created and linked
* to the &drm_gem_object.
*
* &drm_gpuvm_bo structures, since unique for a given &drm_gpuvm, are also used
@@ -108,7 +108,7 @@
* sequence of operations to satisfy a given map or unmap request.
*
* Therefore the DRM GPU VA manager provides an algorithm implementing splitting
- * and merging of existent GPU VA mappings with the ones that are requested to
+ * and merging of existing GPU VA mappings with the ones that are requested to
* be mapped or unmapped. This feature is required by the Vulkan API to
* implement Vulkan 'Sparse Memory Bindings' - drivers UAPIs often refer to this
* as VM BIND.
@@ -119,7 +119,7 @@
* execute in order to integrate the new mapping cleanly into the current state
* of the GPU VA space.
*
- * Depending on how the new GPU VA mapping intersects with the existent mappings
+ * Depending on how the new GPU VA mapping intersects with the existing mappings
* of the GPU VA space the &drm_gpuvm_ops callbacks contain an arbitrary amount
* of unmap operations, a maximum of two remap operations and a single map
* operation. The caller might receive no callback at all if no operation is
@@ -139,16 +139,16 @@
* one unmap operation and one or two map operations, such that drivers can
* derive the page table update delta accordingly.
*
- * Note that there can't be more than two existent mappings to split up, one at
+ * Note that there can't be more than two existing mappings to split up, one at
* the beginning and one at the end of the new mapping, hence there is a
* maximum of two remap operations.
*
* Analogous to drm_gpuvm_sm_map() drm_gpuvm_sm_unmap() uses &drm_gpuvm_ops to
* call back into the driver in order to unmap a range of GPU VA space. The
- * logic behind this function is way simpler though: For all existent mappings
+ * logic behind this function is way simpler though: For all existing mappings
* enclosed by the given range unmap operations are created. For mappings which
- * are only partically located within the given range, remap operations are
- * created such that those mappings are split up and re-mapped partically.
+ * are only partially located within the given range, remap operations are
+ * created such that those mappings are split up and re-mapped partially.
*
* As an alternative to drm_gpuvm_sm_map() and drm_gpuvm_sm_unmap(),
* drm_gpuvm_sm_map_ops_create() and drm_gpuvm_sm_unmap_ops_create() can be used
@@ -168,7 +168,7 @@
* provided helper functions drm_gpuva_map(), drm_gpuva_remap() and
* drm_gpuva_unmap() instead.
*
- * The following diagram depicts the basic relationships of existent GPU VA
+ * The following diagram depicts the basic relationships of existing GPU VA
* mappings, a newly requested mapping and the resulting mappings as implemented
* by drm_gpuvm_sm_map() - it doesn't cover any arbitrary combinations of these.
*
@@ -218,7 +218,7 @@
*
*
* 4) Existent mapping is a left aligned subset of the requested one, hence
- * replace the existent one.
+ * replace the existing one.
*
* ::
*
@@ -236,9 +236,9 @@
* and/or non-contiguous BO offset.
*
*
- * 5) Requested mapping's range is a left aligned subset of the existent one,
+ * 5) Requested mapping's range is a left aligned subset of the existing one,
* but backed by a different BO. Hence, map the requested mapping and split
- * the existent one adjusting its BO offset.
+ * the existing one adjusting its BO offset.
*
* ::
*
@@ -271,9 +271,9 @@
* new: |-----|-----| (a.bo_offset=n, a'.bo_offset=n+1)
*
*
- * 7) Requested mapping's range is a right aligned subset of the existent one,
+ * 7) Requested mapping's range is a right aligned subset of the existing one,
* but backed by a different BO. Hence, map the requested mapping and split
- * the existent one, without adjusting the BO offset.
+ * the existing one, without adjusting the BO offset.
*
* ::
*
@@ -304,7 +304,7 @@
*
* 9) Existent mapping is overlapped at the end by the requested mapping backed
* by a different BO. Hence, map the requested mapping and split up the
- * existent one, without adjusting the BO offset.
+ * existing one, without adjusting the BO offset.
*
* ::
*
@@ -334,9 +334,9 @@
* new: |-----|-----------| (a'.bo_offset=n, a.bo_offset=n+1)
*
*
- * 11) Requested mapping's range is a centered subset of the existent one
+ * 11) Requested mapping's range is a centered subset of the existing one
* having a different backing BO. Hence, map the requested mapping and split
- * up the existent one in two mappings, adjusting the BO offset of the right
+ * up the existing one in two mappings, adjusting the BO offset of the right
* one accordingly.
*
* ::
@@ -351,7 +351,7 @@
* new: |-----|-----|-----| (a.bo_offset=n,b.bo_offset=m,a'.bo_offset=n+2)
*
*
- * 12) Requested mapping is a contiguous subset of the existent one. Split it
+ * 12) Requested mapping is a contiguous subset of the existing one. Split it
* up, but indicate that the backing PTEs could be kept.
*
* ::
@@ -367,7 +367,7 @@
*
*
* 13) Existent mapping is a right aligned subset of the requested one, hence
- * replace the existent one.
+ * replace the existing one.
*
* ::
*
@@ -386,7 +386,7 @@
*
*
* 14) Existent mapping is a centered subset of the requested one, hence
- * replace the existent one.
+ * replace the existing one.
*
* ::
*
@@ -406,7 +406,7 @@
*
* 15) Existent mappings is overlapped at the beginning by the requested mapping
* backed by a different BO. Hence, map the requested mapping and split up
- * the existent one, adjusting its BO offset accordingly.
+ * the existing one, adjusting its BO offset accordingly.
*
* ::
*
@@ -469,8 +469,8 @@
* make use of them.
*
* The below code is strictly limited to illustrate the generic usage pattern.
- * To maintain simplicitly, it doesn't make use of any abstractions for common
- * code, different (asyncronous) stages with fence signalling critical paths,
+ * To maintain simplicity, it doesn't make use of any abstractions for common
+ * code, different (asynchronous) stages with fence signalling critical paths,
* any other helpers or error handling in terms of freeing memory and dropping
* previously taken locks.
*
@@ -479,7 +479,7 @@
* // Allocates a new &drm_gpuva.
* struct drm_gpuva * driver_gpuva_alloc(void);
*
- * // Typically drivers would embedd the &drm_gpuvm and &drm_gpuva
+ * // Typically drivers would embed the &drm_gpuvm and &drm_gpuva
* // structure in individual driver structures and lock the dma-resv with
* // drm_exec or similar helpers.
* int driver_mapping_create(struct drm_gpuvm *gpuvm,
@@ -582,7 +582,7 @@
* .sm_step_unmap = driver_gpuva_unmap,
* };
*
- * // Typically drivers would embedd the &drm_gpuvm and &drm_gpuva
+ * // Typically drivers would embed the &drm_gpuvm and &drm_gpuva
* // structure in individual driver structures and lock the dma-resv with
* // drm_exec or similar helpers.
* int driver_mapping_create(struct drm_gpuvm *gpuvm,
@@ -680,7 +680,7 @@
*
* This helper is here to provide lockless list iteration. Lockless as in, the
* iterator releases the lock immediately after picking the first element from
- * the list, so list insertion deletion can happen concurrently.
+ * the list, so list insertion and deletion can happen concurrently.
*
* Elements popped from the original list are kept in a local list, so removal
* and is_empty checks can still happen while we're iterating the list.
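A hedged sketch of the pattern described here, with a made-up element type
(the in-tree helper is a local macro with extra bookkeeping): elements are
popped one at a time under the lock and parked on a local list, so
concurrent insertion and deletion stay safe.

#include <linux/list.h>
#include <linux/spinlock.h>

struct example_node {			/* hypothetical element type */
	struct list_head entry;
};

static void example_drain(struct list_head *list, spinlock_t *lock,
			  void (*fn)(struct example_node *))
{
	struct example_node *node;
	LIST_HEAD(parked);

	for (;;) {
		spin_lock(lock);
		node = list_first_entry_or_null(list, struct example_node, entry);
		if (node)
			list_move_tail(&node->entry, &parked);
		spin_unlock(lock);

		if (!node)
			break;

		fn(node);	/* runs without the lock held */
	}

	/* Restore the original list once iteration is done. */
	spin_lock(lock);
	list_splice(&parked, list);
	spin_unlock(lock);
}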
@@ -1160,7 +1160,7 @@ drm_gpuvm_prepare_objects_locked(struct drm_gpuvm *gpuvm,
}
/**
- * drm_gpuvm_prepare_objects() - prepare all assoiciated BOs
+ * drm_gpuvm_prepare_objects() - prepare all associated BOs
* @gpuvm: the &drm_gpuvm
* @exec: the &drm_exec locking context
* @num_fences: the amount of &dma_fences to reserve
@@ -1230,13 +1230,13 @@ drm_gpuvm_prepare_range(struct drm_gpuvm *gpuvm, struct drm_exec *exec,
EXPORT_SYMBOL_GPL(drm_gpuvm_prepare_range);
/**
- * drm_gpuvm_exec_lock() - lock all dma-resv of all assoiciated BOs
+ * drm_gpuvm_exec_lock() - lock all dma-resv of all associated BOs
* @vm_exec: the &drm_gpuvm_exec wrapper
*
* Acquires all dma-resv locks of all &drm_gem_objects the given
* &drm_gpuvm contains mappings of.
*
- * Addionally, when calling this function with struct drm_gpuvm_exec::extra
+ * Additionally, when calling this function with struct drm_gpuvm_exec::extra
* being set the driver receives the given @fn callback to lock additional
* dma-resv in the context of the &drm_gpuvm_exec instance. Typically, drivers
* would call drm_exec_prepare_obj() from within this callback.
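A minimal usage sketch for the function documented here; error paths are
trimmed, the job-submission step is elided, and gpuvm is assumed to be the
driver's &drm_gpuvm.

#include <drm/drm_exec.h>
#include <drm/drm_gpuvm.h>

static int example_lock_vm(struct drm_gpuvm *gpuvm)
{
	struct drm_gpuvm_exec vm_exec = {
		.vm = gpuvm,
		.flags = DRM_EXEC_INTERRUPTIBLE_WAIT,
		.num_fences = 1,
	};
	int ret;

	ret = drm_gpuvm_exec_lock(&vm_exec);
	if (ret)
		return ret;

	/* ... validate BOs, submit the job, add its fences ... */

	drm_gpuvm_exec_unlock(&vm_exec);
	return 0;
}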
@@ -1293,7 +1293,7 @@ fn_lock_array(struct drm_gpuvm_exec *vm_exec)
}
/**
- * drm_gpuvm_exec_lock_array() - lock all dma-resv of all assoiciated BOs
+ * drm_gpuvm_exec_lock_array() - lock all dma-resv of all associated BOs
* @vm_exec: the &drm_gpuvm_exec wrapper
* @objs: additional &drm_gem_objects to lock
* @num_objs: the number of additional &drm_gem_objects to lock
@@ -1588,7 +1588,7 @@ drm_gpuvm_bo_find(struct drm_gpuvm *gpuvm,
EXPORT_SYMBOL_GPL(drm_gpuvm_bo_find);
/**
- * drm_gpuvm_bo_obtain() - obtains and instance of the &drm_gpuvm_bo for the
+ * drm_gpuvm_bo_obtain() - obtains an instance of the &drm_gpuvm_bo for the
* given &drm_gpuvm and &drm_gem_object
* @gpuvm: The &drm_gpuvm the @obj is mapped in.
* @obj: The &drm_gem_object being mapped in the @gpuvm.
@@ -1624,7 +1624,7 @@ drm_gpuvm_bo_obtain(struct drm_gpuvm *gpuvm,
EXPORT_SYMBOL_GPL(drm_gpuvm_bo_obtain);
/**
- * drm_gpuvm_bo_obtain_prealloc() - obtains and instance of the &drm_gpuvm_bo
+ * drm_gpuvm_bo_obtain_prealloc() - obtains an instance of the &drm_gpuvm_bo
* for the given &drm_gpuvm and &drm_gem_object
* @__vm_bo: A pre-allocated struct drm_gpuvm_bo.
*
@@ -1688,7 +1688,7 @@ EXPORT_SYMBOL_GPL(drm_gpuvm_bo_extobj_add);
* @vm_bo: the &drm_gpuvm_bo to add or remove
* @evict: indicates whether the object is evicted
*
- * Adds a &drm_gpuvm_bo to or removes it from the &drm_gpuvms evicted list.
+ * Adds a &drm_gpuvm_bo to or removes it from the &drm_gpuvm's evicted list.
*/
void
drm_gpuvm_bo_evict(struct drm_gpuvm_bo *vm_bo, bool evict)
@@ -1790,7 +1790,7 @@ __drm_gpuva_remove(struct drm_gpuva *va)
* drm_gpuva_remove() - remove a &drm_gpuva
* @va: the &drm_gpuva to remove
*
- * This removes the given &va from the underlaying tree.
+ * This removes the given &va from the underlying tree.
*
* It is safe to use this function using the safe versions of iterating the GPU
* VA space, such as drm_gpuvm_for_each_va_safe() and
@@ -2358,7 +2358,7 @@ EXPORT_SYMBOL_GPL(drm_gpuvm_sm_map);
*
* This function iterates the given range of the GPU VA space. It utilizes the
* &drm_gpuvm_ops to call back into the driver providing the operations to
- * unmap and, if required, split existent mappings.
+ * unmap and, if required, split existing mappings.
*
* Drivers may use these callbacks to update the GPU VA space right away within
* the callback. In case the driver decides to copy and store the operations for
@@ -2475,7 +2475,7 @@ static const struct drm_gpuvm_ops lock_ops = {
* required without the earlier DRIVER_OP_MAP. This is safe because we've
* already locked the GEM object in the earlier DRIVER_OP_MAP step.
*
- * Returns: 0 on success or a negative error codec
+ * Returns: 0 on success or a negative error code
*/
int
drm_gpuvm_sm_map_exec_lock(struct drm_gpuvm *gpuvm,
@@ -2619,12 +2619,12 @@ static const struct drm_gpuvm_ops gpuvm_list_ops = {
* @req_offset: the offset within the &drm_gem_object
*
* This function creates a list of operations to perform splitting and merging
- * of existent mapping(s) with the newly requested one.
+ * of existing mapping(s) with the newly requested one.
*
* The list can be iterated with &drm_gpuva_for_each_op and must be processed
* in the given order. It can contain map, unmap and remap operations, but it
* also can be empty if no operation is required, e.g. if the requested mapping
- * already exists is the exact same way.
+ * already exists in the exact same way.
*
* There can be an arbitrary amount of unmap operations, a maximum of two remap
* operations and a single map operation. The latter one represents the original
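A hedged sketch of the ops-list flow documented above: build the list for a
requested mapping and walk it in order. The driver_handle_*() helpers are
hypothetical; the drm_gpuvm calls follow this file's documented API.

#include <drm/drm_gpuvm.h>

static int example_apply_map(struct drm_gpuvm *gpuvm, u64 req_addr,
			     u64 req_range, struct drm_gem_object *obj,
			     u64 req_offset)
{
	struct drm_gpuva_ops *ops;
	struct drm_gpuva_op *op;

	ops = drm_gpuvm_sm_map_ops_create(gpuvm, req_addr, req_range,
					  obj, req_offset);
	if (IS_ERR(ops))
		return PTR_ERR(ops);

	drm_gpuva_for_each_op(op, ops) {
		switch (op->op) {
		case DRM_GPUVA_OP_MAP:
			driver_handle_map(op);		/* hypothetical */
			break;
		case DRM_GPUVA_OP_REMAP:
			driver_handle_remap(op);	/* hypothetical */
			break;
		case DRM_GPUVA_OP_UNMAP:
			driver_handle_unmap(op);	/* hypothetical */
			break;
		default:
			break;
		}
	}

	drm_gpuva_ops_free(gpuvm, ops);
	return 0;
}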

drivers/gpu/drm/nouveau/dispnv50/wndw.c

@@ -795,6 +795,10 @@ static bool nv50_plane_format_mod_supported(struct drm_plane *plane,
struct nouveau_drm *drm = nouveau_drm(plane->dev);
uint8_t i;
+	/* All chipsets can display all formats in linear layout */
+	if (modifier == DRM_FORMAT_MOD_LINEAR)
+		return true;
+
if (drm->client.device.info.chipset < 0xc0) {
const struct drm_format_info *info = drm_format_info(format);
const uint8_t kind = (modifier >> 12) & 0xff;
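For context, the generic shape of such a hook (an illustrative sketch, not
nouveau code): accepting DRM_FORMAT_MOD_LINEAR up front makes linear
buffers valid before any chipset-specific block-layout validation runs.

#include <drm/drm_fourcc.h>
#include <drm/drm_plane.h>

static bool example_format_mod_supported(struct drm_plane *plane,
					 uint32_t format, uint64_t modifier)
{
	/* Linear layout is displayable everywhere, so accept it first. */
	if (modifier == DRM_FORMAT_MOD_LINEAR)
		return true;

	/* ... hardware-specific checks for block-linear modifiers ... */
	return false;
}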

drivers/gpu/drm/nouveau/nvkm/falcon/gm200.c

@@ -103,7 +103,7 @@ gm200_flcn_pio_imem_wr_init(struct nvkm_falcon *falcon, u8 port, bool sec, u32 i
static void
gm200_flcn_pio_imem_wr(struct nvkm_falcon *falcon, u8 port, const u8 *img, int len, u16 tag)
{
-	nvkm_falcon_wr32(falcon, 0x188 + (port * 0x10), tag++);
+	nvkm_falcon_wr32(falcon, 0x188 + (port * 0x10), tag);
while (len >= 4) {
nvkm_falcon_wr32(falcon, 0x184 + (port * 0x10), *(u32 *)img);
img += 4;
@@ -249,9 +249,11 @@ int
gm200_flcn_fw_load(struct nvkm_falcon_fw *fw)
{
struct nvkm_falcon *falcon = fw->falcon;
-	int target, ret;
+	int ret;
if (fw->inst) {
+		int target;
+
nvkm_falcon_mask(falcon, 0x048, 0x00000001, 0x00000001);
switch (nvkm_memory_target(fw->inst)) {
@@ -285,15 +287,6 @@ gm200_flcn_fw_load(struct nvkm_falcon_fw *fw)
}
if (fw->boot) {
-		switch (nvkm_memory_target(&fw->fw.mem.memory)) {
-		case NVKM_MEM_TARGET_VRAM: target = 4; break;
-		case NVKM_MEM_TARGET_HOST: target = 5; break;
-		case NVKM_MEM_TARGET_NCOH: target = 6; break;
-		default:
-			WARN_ON(1);
-			return -EINVAL;
-		}
-
ret = nvkm_falcon_pio_wr(falcon, fw->boot, 0, 0,
IMEM, falcon->code.limit - fw->boot_size, fw->boot_size,
fw->boot_addr >> 8, false);

drivers/gpu/drm/nouveau/nvkm/subdev/gsp/fwsec.c

@@ -209,11 +209,12 @@ nvkm_gsp_fwsec_v2(struct nvkm_gsp *gsp, const char *name,
fw->boot_addr = bld->start_tag << 8;
fw->boot_size = bld->code_size;
fw->boot = kmemdup(bl->data + hdr->data_offset + bld->code_off, fw->boot_size, GFP_KERNEL);
-	if (!fw->boot)
-		ret = -ENOMEM;
	nvkm_firmware_put(bl);
+	if (!fw->boot)
+		return -ENOMEM;
+
/* Patch in interface data. */
return nvkm_gsp_fwsec_patch(gsp, fw, desc->InterfaceOffset, init_cmd);
}
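The bug fixed here is a dropped error code: on allocation failure the old
code assigned ret = -ENOMEM but never returned it, falling through to patch
a NULL fw->boot. Restated with comments (a sketch matching the diff above,
not new code), the fixed control flow reads:

	fw->boot = kmemdup(bl->data + hdr->data_offset + bld->code_off,
			   fw->boot_size, GFP_KERNEL);

	nvkm_firmware_put(bl);	/* firmware released on every path */

	if (!fw->boot)
		return -ENOMEM;	/* previously assigned to 'ret' and lost */

	/* Patch in interface data. */
	return nvkm_gsp_fwsec_patch(gsp, fw, desc->InterfaceOffset, init_cmd);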

drivers/gpu/drm/tegra/gem.c

@@ -526,7 +526,7 @@ void tegra_bo_free_object(struct drm_gem_object *gem)
if (drm_gem_is_imported(gem)) {
dma_buf_unmap_attachment_unlocked(gem->import_attach, bo->sgt,
DMA_TO_DEVICE);
-		dma_buf_detach(gem->dma_buf, gem->import_attach);
+		dma_buf_detach(gem->import_attach->dmabuf, gem->import_attach);
}
}
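The revert restores detaching via the attachment's own dma_buf pointer,
which is always set for imported objects, rather than gem->dma_buf, which
may not be populated on tegra's import path. A hypothetical helper showing
the same cleanup order (example_gem_release_import() is a made-up name):

#include <linux/dma-buf.h>
#include <linux/dma-direction.h>

static void example_gem_release_import(struct drm_gem_object *gem,
				       struct sg_table *sgt)
{
	struct dma_buf_attachment *attach = gem->import_attach;

	dma_buf_unmap_attachment_unlocked(attach, sgt, DMA_TO_DEVICE);
	dma_buf_detach(attach->dmabuf, attach);	/* not gem->dma_buf */
}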

include/drm/drm_gpuvm.h

@@ -103,7 +103,7 @@ struct drm_gpuva {
} va;
/**
-	 * @gem: structure containing the &drm_gem_object and it's offset
+	 * @gem: structure containing the &drm_gem_object and its offset
*/
struct {
/**
@@ -843,7 +843,7 @@ struct drm_gpuva_op_map {
} va;
/**
-	 * @gem: structure containing the &drm_gem_object and it's offset
+	 * @gem: structure containing the &drm_gem_object and its offset
*/
struct {
/**
@@ -1189,11 +1189,11 @@ struct drm_gpuvm_ops {
/**
* @sm_step_unmap: called from &drm_gpuvm_sm_map and
-	 * &drm_gpuvm_sm_unmap to unmap an existent mapping
+	 * &drm_gpuvm_sm_unmap to unmap an existing mapping
*
-	 * This callback is called when existent mapping needs to be unmapped.
+	 * This callback is called when existing mapping needs to be unmapped.
* This is the case when either a newly requested mapping encloses an
-	 * existent mapping or an unmap of an existent mapping is requested.
+	 * existing mapping or an unmap of an existing mapping is requested.
*
* The &priv pointer matches the one the driver passed to
* &drm_gpuvm_sm_map or &drm_gpuvm_sm_unmap, respectively.
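A hedged sketch of a driver-side &sm_step_unmap implementation matching
this contract; struct driver_ctx and driver_vm_unmap_range() are
hypothetical, the drm_gpuvm calls are the documented API.

#include <drm/drm_gpuvm.h>
#include <linux/slab.h>

static int driver_gpuva_unmap(struct drm_gpuva_op *op, void *priv)
{
	struct driver_ctx *ctx = priv;		/* hypothetical driver context */
	struct drm_gpuva *va = op->unmap.va;

	/* Tear down the PTEs covered by the mapping being unmapped. */
	driver_vm_unmap_range(ctx, va->va.addr, va->va.range);

	/* Unlink the &drm_gpuva from the VA space and free it. */
	drm_gpuva_unmap(&op->unmap);
	kfree(va);

	return 0;
}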