vkd3d: Write Vulkan descriptors in a worker thread.

Merged Conor McCarthy requested to merge cmccarthy/vkd3d:desc_writes into master

Merge request reports

Activity

     for (; i != UINT_MAX; i = next)
     {
         src = &descriptors[i];
-        next = (int)src->next >> 1;
+        next = vkd3d_atomic_exchange(&src->next, 0);
+        next = (int)next >> 1;
 
+        /* A race exists here between updating src->next and getting the current object. The best
+         * we can do is get the object last, which may result in a harmless rewrite later. */
         u.object = d3d12_desc_get_object_ref(src, device);
 
         if (!u.object)
         {
             vkd3d_atomic_exchange(&src->next, 0);
  • Conor McCarthy added 2 commits

    • 9fc3d18e - vkd3d: Update the descriptor `next` index before getting a reference for writing.
    • 7887d56d - vkd3d: Write Vulkan descriptors in a worker thread.

    • This looks fine, as far as I can tell. The only part I'm a bit concerned about is how many threads this is going to create. Is it possible that an application creates a lot of descriptor heaps, each of them spawning a new thread? That wouldn't be very gentle on the OS. Would it be possible to have one thread per device, taking care of all the heaps belonging to that device? Or, if parallel processing is advisable, a per-device pool of threads taking care of all the heaps belonging to that device?

    • Author Developer

      Typically few heaps are used. Only one of each type can be used per draw or compute call, and they can be shared, so there's no gain in creating more than one shader-visible heap.

    • I'll leave it to Henri to decide what's best here, but I still think that having one thread per device should be easily doable and would avoid wasting resources if an application, for some reason, creates a lot of heaps. Apparently neither we nor native implementations really enforce which heap descriptors come from, so it could still be that some application creates many heaps, each with just a few descriptors.

    • Author Developer

      We enforce one per heap type; see d3d12_command_list_bind_descriptor_table(). It's true this doesn't prevent an app from creating many heaps and using them in consecutive draw calls or multiple command lists. Changing heap bindings can impose a pretty big performance cost though.

  • Giovanni Mascellani approved this merge request

    • From 7887d56d0c76f52c7afb8a84225d138ac9922f33 Mon Sep 17 00:00:00 2001
      From: Conor McCarthy <cmccarthy@codeweavers.com>
      Date: Sun, 30 Jul 2023 13:34:09 +1000
      Subject: [PATCH 2/2] vkd3d: Write Vulkan descriptors in a worker thread.
      
      Raises framerate in Horizon Zero Dawn by about 5-10%.

      It's great that it does, but this doesn't help much with understanding which underlying issue we're addressing, or with determining whether this MR is the most appropriate way to do that.

      I'll leave it to Henri to decide what's best here, but I still think that having one thread per device should be easily doable and would avoid wasting resources if an application, for some reason, creates a lot of heaps. Apparently neither we nor native implementations really enforce which heap descriptors come from, so it could still be that some application creates many heaps, each with just a few descriptors.

      Yeah, in general I'd prefer creating fewer threads rather than more, unless it either can't be avoided or there's a clear advantage to creating more threads. In fact, I wonder how hard it would be to use vkd3d_fence_worker_main() for this. Waiting for fences is a blocking operation, but it may not have to be, and in principle these waits are expected to complete quickly. That also depends on which issue we're trying to address here, of course...

    • Yeah, in general I'd prefer creating fewer threads rather than more, unless it either can't be avoided or there's a clear advantage to creating more threads. In fact, I wonder how hard it would be to use vkd3d_fence_worker_main() for this. Waiting for fences is a blocking operation, but it may not have to be, and in principle these waits are expected to complete quickly. That also depends on which issue we're trying to address here, of course...

      As was probably evident from my comment, I'm also in favor of using as few threads as possible. Still, I'm not sure that reusing the fence worker would be a good idea, since the fence worker has to wait on either a timeline semaphore or a fence. Maybe the descriptor worker could even be rewritten to synchronize with a timeline semaphore (though I'm not sure whether the condition variable semantics can be implemented that way), but it doesn't look like the same can be done for plain fences. And I'm not aware of any reasonable way to wait at the same time on semaphores/fences and on whatever vkd3d already abstracts for mutexes and condition variables.

      (I could go on with a rant about why Vulkan has so many different blocking functions, vkWaitSemaphores(), vkWaitForFences(), vkWaitForPresentKHR(), vkAcquireNextImageKHR() and maybe others I'm not aware of, and no way to uniformly wait and multiplex on them, but I'm sure I'm preaching to the choir; and we have to deal with it anyway, see for instance wine!3262 (comment 38797))

  • Yeah, in general I'd prefer creating fewer threads rather than more, unless it either can't be avoided or there's a clear advantage to creating more threads. In fact, I wonder how hard it would be to use vkd3d_fence_worker_main() for this. Waiting for fences is a blocking operation, but it may not have to be, and in principle these waits are expected to complete quickly. That also depends on which issue we're trying to address here, of course...

    As was probably evident from my comment, I'm also in favor of using as few threads as possible. Still, I'm not sure that reusing the fence worker would be a good idea, since the fence worker has to wait on either a timeline semaphore or a fence.

    Sure, but those waits don't necessarily need to be blocking waits. It may still be a bad idea, of course, depending on which issue we're trying to address, but I'd like for some serious thought to be given to whether we can make it work.

  • Author Developer

    The issue is that, to prevent concurrent writes of the same Vulkan descriptor, we must delay writing them until command list submission, so instead of descriptors being written on the fly, often by multiple threads, we write them all at the last millisecond from a single thread. In my testing of HZD, the worker thread always handled all or very nearly all writes by list submission time, so it removes this bottleneck (a sketch of that submission-time flush follows below).

    I think the fence worker is unsuitable. If it has some fences to wait for, it must poll vkWaitSemaphoresKHR() with a zero or extremely short wait time in case some descriptor writes come along. To avoid spinning we would need to use a short wait time, not zero, which will delay descriptor handling. And when writes do occur, they may delay fence handling.

    Two or four threads in struct d3d12_device look to me like the best option. Or we could make it dynamic, so another thread is created if too many writes remain when command lists are submitted, at the cost of greater complexity.

    Edited by Conor McCarthy
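
    As a concrete illustration of the above, here is a minimal sketch (not code from this series) of the submission-time flush being described: any updates the worker has not yet handled are written from the submitting thread before the Vulkan submit. The heap fields and d3d12_desc_flush_vk_heap_updates_locked() are taken from the patches quoted later in this discussion; the wrapper function and its exact call site are assumptions for illustration.

    /* Hypothetical helper, assumed to run from d3d12_command_queue_ExecuteCommandLists()
     * just before submission; with the worker thread in place it usually finds little or
     * nothing left to flush. */
    static void flush_pending_descriptor_writes(struct d3d12_device *device)
    {
        struct d3d12_descriptor_heap *heap;
        size_t i;

        for (i = 0; i < device->heap_count; ++i)
        {
            heap = device->heaps[i];
            if (heap->dirty_list_head == UINT_MAX)
                continue; /* No descriptor updates queued for this heap. */

            vkd3d_mutex_lock(&heap->vk_sets_mutex);
            d3d12_desc_flush_vk_heap_updates_locked(heap, device);
            vkd3d_mutex_unlock(&heap->vk_sets_mutex);
        }
    }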
    • I think the fence worker is unsuitable. If it has some fences to wait for, it must poll vkWaitSemaphoresKHR() with a zero or extremely short wait time in case some descriptor writes come along. To avoid spinning we would need to use a short wait time, not zero, which will delay descriptor handling. And when writes do occur, they may delay fence handling.

      Well, it depends on the specifics, I think. In particular:

      • How long does a d3d12_desc_flush_vk_heap_updates_locked() call typically take? How long does it maximally take? If the answer to the previous question is some approximation of "infinity", could we put an upper bound on that by e.g. limiting the maximum number of descriptors we process in a single d3d12_desc_flush_vk_heap_updates_locked() call?
      • How long does waiting for a fence typically take once we know it has been submitted? Would it be terrible to poll fences with e.g. a 1ms timeout?
      • What is the worst case behaviour? If descriptor writes were to get stuck behind a fence we'd need to wait for d3d12_command_queue_ExecuteCommandLists() to process them, but that should be no worse than what we're currently doing. The reverse might be worse, but we should be able to avoid that by polling fences inside d3d12_desc_flush_vk_heap_updates_locked() if needed.
      • Are there any nasty edge cases?
    • Author Developer

      From the Vulkan spec:

      timeout is the timeout period in units of nanoseconds. timeout is adjusted to the closest value allowed by the implementation-dependent timeout accuracy, which may be substantially longer than one nanosecond, and may be longer than the requested period.

      The shortest timeout in current drivers is probably much shorter than we need, but this could bite us in the future, and there's no way to query the actual minimum value.
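
      For reference, this is the polling pattern under discussion, as a minimal self-contained sketch using the core Vulkan 1.2 vkWaitSemaphores() (the KHR alias behaves the same) and a single timeline semaphore; the 1 ms timeout is purely illustrative, and per the spec text above the effective granularity may be longer than requested.

      #include <vulkan/vulkan.h>
      #include <stdbool.h>

      /* Returns true once the timeline semaphore has reached 'value'; on VK_TIMEOUT the
       * caller gets control back after roughly 1 ms (or whatever the driver rounds that
       * up to) and could flush pending descriptor writes before polling again. */
      static bool poll_timeline_value(VkDevice vk_device, VkSemaphore vk_semaphore, uint64_t value)
      {
          const VkSemaphoreWaitInfo wait_info =
          {
              .sType = VK_STRUCTURE_TYPE_SEMAPHORE_WAIT_INFO,
              .semaphoreCount = 1,
              .pSemaphores = &vk_semaphore,
              .pValues = &value,
          };

          return vkWaitSemaphores(vk_device, &wait_info, 1000000ull) == VK_SUCCESS;
      }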

    • Well, it depends on the specifics, I think. In particular:

      • How long does a d3d12_desc_flush_vk_heap_updates_locked() call typically take? How long does it maximally take? If the answer to the previous question is some approximation of "infinity", could we put an upper bound on that by e.g. limiting the maximum number of descriptors we process in a single d3d12_desc_flush_vk_heap_updates_locked() call?
      • How long does waiting for a fence typically take once we know it has been submitted? Would it be terrible to poll fences with e.g. a 1ms timeout?
      • What is the worst case behaviour? If descriptor writes were to get stuck behind a fence we'd need to wait for d3d12_command_queue_ExecuteCommandLists() to process them, but that should be no worse than what we're currently doing. The reverse might be worse, but we should be able to avoid that by polling fences inside d3d12_desc_flush_vk_heap_updates_locked() if needed.
      • Are there any nasty edge cases?

      My problem with this approach is that it depends on a lot of magic constants which would require tuning, and this tuning depends on the computer, on the game, possibly on the specific scene of the game, on the game settings and possibly many other factors. Polling at 1 ms can be nothing, if for some random reason fences are immediately ready, or it can be a lot, if you have 16 ms of budget per frame and waste 5 of them just polling for a handful of fences that manage to block every other operation. Getting into this sort of business to save on a thread per device doesn't seem ideal to me, though I'll admit you have more experience than me.

    • The issue is that, to prevent concurrent writes of the same Vulkan descriptor, we must delay writing them until command list submission, so instead of descriptors being written on the fly, often by multiple threads, we write them all at the last millisecond from a single thread.

      Are concurrent writes to the same descriptor really allowed in DX12? If so, wouldn't that mean every DX12 driver has to use atomics/locks to update its descriptors? That seems like a lot of burden to put on drivers for little benefit to games (since they'd still be unsure of which descriptor write overwrote which). Are you sure this isn't a leftover from pre-update-after-bind, where entire descriptor sets needed synchronization, in which case we should be able to update concurrently as long as update after bind is used?

    • Author Developer

      It's unclear whether "free-threaded" means concurrent writes are allowed, but they do happen surprisingly often. It's unlikely Windows drivers do any kind of locking, so most likely the concurrently written descriptors are identical, or at least produce a valid descriptor if the data is a blend of both. I have seen crashes in Vulkan from concurrent writes though, so we must prevent them. Descriptor buffers are used in vkd3d-proton to allow memcpying of Vulkan descriptors, and that seems to handle concurrency as well as Windows does.
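
      For what it's worth, the coordination that makes the deferred approach safe is visible in the diff at the top of this page: the flush itself is serialized under the heap's vk_sets_mutex, and each pending update is claimed with an atomic exchange on the descriptor's next field, so application threads only queue updates and never touch the Vulkan descriptor themselves. A simplified, self-contained illustration of that claim step (the struct and helper names here are illustrative, not the actual vkd3d definitions):

      #include <stdatomic.h>

      struct desc_stub
      {
          /* Bit 0 marks "update pending"; the remaining bits hold the next dirty index,
           * mirroring the (int)next >> 1 decoding shown in the diff above. */
          atomic_uint next;
      };

      /* Whichever thread sees a non-zero value here owns the flush for this descriptor;
       * a concurrent application write simply re-queues the descriptor afterwards. */
      static unsigned int claim_pending_update(struct desc_stub *desc)
      {
          return atomic_exchange(&desc->next, 0);
      }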

  • Conor McCarthy added 606 commits

    • 7887d56d...736f3ae2 - 602 commits from branch wine:master
    • 76d36b13 - vkd3d: Update the descriptor `next` index before getting a reference for writing.
    • 28ec704f - vkd3d: Write Vulkan descriptors in a worker thread.
    • d564bcc0 - vkd3d: Rename the device mutex to pipeline_cache_mutex.
    • 15d5a936 - vkd3d: Co-locate all descriptor-related members.

  • Author Developer

    It now uses a single device worker thread.

  • Conor McCarthy added 36 commits

    • 15d5a936...45679a96 - 32 commits from branch wine:master
    • dbf9874f - vkd3d: Update the descriptor `next` index before getting a reference for writing.
    • 8c142267 - vkd3d: Write Vulkan descriptors in a worker thread.
    • 6fda06b1 - vkd3d: Rename the device mutex to pipeline_cache_mutex.
    • 0269fe49 - vkd3d: Co-locate all descriptor-related members.

  • +static HRESULT device_worker_stop(struct d3d12_device *device)
    +{
    +    HRESULT hr;
    +
    +    TRACE("device %p.\n", device);
    +
    +    vkd3d_mutex_lock(&device->worker_mutex);
    +
    +    device->worker_should_exit = true;
    +    vkd3d_cond_signal(&device->worker_cond);
    +
    +    vkd3d_mutex_unlock(&device->worker_mutex);
    +
    +    if (FAILED(hr = vkd3d_join_thread(device->vkd3d_instance, &device->worker_thread)))
    +        return hr;
    +
    +    vkd3d_mutex_destroy(&device->worker_mutex);
    +    vkd3d_cond_destroy(&device->worker_cond);
    +
    +    return S_OK;
    +}
     ...
    +static void *device_worker_main(void *arg)
    +{
    +    struct d3d12_descriptor_heap *heap;
    +    struct d3d12_device *device = arg;
    +    size_t i;
    +
    +    vkd3d_set_thread_name("device_worker");
    +
    +    vkd3d_mutex_lock(&device->worker_mutex);
    +
    +    for (;;)
    +    {
    +        for (i = 0; i < device->heap_count && !device->worker_should_exit; ++i)
    +        {
    +            heap = device->heaps[i];
    +            if (heap->dirty_list_head == UINT_MAX)
    +                continue;
    +            vkd3d_mutex_lock(&heap->vk_sets_mutex);
    +            d3d12_desc_flush_vk_heap_updates_locked(heap, device);
    +            vkd3d_mutex_unlock(&heap->vk_sets_mutex);
    +        }
    +
    +        if (device->worker_should_exit)
    +            break;
    +
    +        vkd3d_cond_wait(&device->worker_cond, &device->worker_mutex);
    +    }
    +
    +    vkd3d_mutex_unlock(&device->worker_mutex);
    +
    +    return NULL;
    +}

    Does it make sense to check device->worker_should_exit on each loop iteration above? device->worker_mutex should prevent device_worker_stop() from modifying device->worker_should_exit before the vkd3d_cond_wait() call, right?

    I also still think it would be helpful to explain the problem we're trying to solve (i.e., descriptor writes getting delayed until command list submission, and then potentially becoming a bottleneck) somewhere in the actual commit. Perhaps as a comment in device_worker_main().

  • Conor McCarthy added 3 commits

    • a1d015a0 - vkd3d: Write Vulkan descriptors in a worker thread.
    • e1d6b35e - vkd3d: Rename the device mutex to pipeline_cache_mutex.
    • 980f7fa8 - vkd3d: Co-locate all descriptor-related members.

  • @@ -4291,6 +4358,12 @@ static HRESULT d3d12_device_init(struct d3d12_device *device,
         if (FAILED(hr = vkd3d_vk_descriptor_heap_layouts_init(device)))
             goto out_cleanup_uav_clear_state;
     
    +    if (device->use_vk_heaps && FAILED(hr = vkd3d_create_thread(device->vkd3d_instance,
    +            device_worker_main, device, &device->worker_thread)))
    +    {
    +        WARN("Failed to create worker thread, hr %#x.\n", hr);
    +    }

    This means failing to create the worker thread is non-fatal. That's probably fine, but it also means we can't check "device->use_vk_heaps" to determine whether the worker thread exists. That may be benign for d3d12_device_add_descriptor_heap() and d3d12_device_remove_descriptor_heap(), but it seems more questionable for device_worker_stop().
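
    One way that could be addressed, as a sketch only: track whether the worker was actually created with a hypothetical device->worker_thread_running flag, set after vkd3d_create_thread() succeeds, and have device_worker_stop() check that instead of device->use_vk_heaps. This is not what the series currently does.

    static HRESULT device_worker_stop(struct d3d12_device *device)
    {
        HRESULT hr;

        TRACE("device %p.\n", device);

        /* Hypothetical flag: only set once vkd3d_create_thread() has succeeded, so a
         * device whose worker failed to start (or which does not use Vulkan heaps at
         * all) has nothing to signal or join. */
        if (!device->worker_thread_running)
            return S_OK;

        vkd3d_mutex_lock(&device->worker_mutex);
        device->worker_should_exit = true;
        vkd3d_cond_signal(&device->worker_cond);
        vkd3d_mutex_unlock(&device->worker_mutex);

        if (FAILED(hr = vkd3d_join_thread(device->vkd3d_instance, &device->worker_thread)))
            return hr;

        vkd3d_mutex_destroy(&device->worker_mutex);
        vkd3d_cond_destroy(&device->worker_cond);
        device->worker_thread_running = false;

        return S_OK;
    }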

  • Conor McCarthy added 3 commits

    • c420e138 - vkd3d: Write Vulkan descriptors in a worker thread.
    • dbab6e1a - vkd3d: Rename the device mutex to pipeline_cache_mutex.
    • 629ce39f - vkd3d: Co-locate all descriptor-related members.

  • Conor McCarthy added 36 commits

    • 629ce39f...21491d1b - 32 commits from branch wine:master
    • fcc57e8b - vkd3d: Update the descriptor `next` index before getting a reference for writing.
    • e2d87fad - vkd3d: Write Vulkan descriptors in a worker thread.
    • d664fad2 - vkd3d: Rename the device mutex to pipeline_cache_mutex.
    • 2ca0302b - vkd3d: Co-locate all descriptor-related members.

  • Henri Verbeet approved this merge request
