
vkd3d: Hold locks to read protected variables

Merged Giovanni Mascellani requested to merge giomasce/vkd3d:amiata into master
3 unresolved threads

Merge request reports


Activity

  • requested review from @cmccarthy

    • My understanding of the problem with blocked_queue_count is this sequence (sketched in code below):

      1. thread0 is in d3d12_command_queue_Wait(), past the check if (!command_queue->ops_count && value <= fence->max_pending_value), but has not yet called d3d12_device_add_blocked_command_queues().
      2. thread1 is handling a signal which unblocks the wait, but had not yet updated max_pending_value when thread0 checked it. It executes if (!device->blocked_queue_count) before thread0 calls d3d12_device_add_blocked_command_queues().
      3. thread0 calls d3d12_device_add_blocked_command_queues(), but the corresponding call to d3d12_device_flush_blocked_queues() in thread1 has already run, so the newly blocked queue is never flushed.
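      A minimal, self-contained sketch of that interleaving, with hypothetical names and simplified fields rather than the actual vkd3d code paths:

          #include <pthread.h>
          #include <stddef.h>

          static pthread_mutex_t device_mutex = PTHREAD_MUTEX_INITIALIZER;
          static size_t blocked_queue_count;

          /* thread0: the decision to block and the registration of the
           * blocked queue are two separate steps, so another thread can
           * run in between them. */
          static void wait_path(void)
          {
              /* ... checked value <= fence->max_pending_value, decided
               * to block ... */
              pthread_mutex_lock(&device_mutex);
              ++blocked_queue_count;  /* d3d12_device_add_blocked_command_queues() */
              pthread_mutex_unlock(&device_mutex);
          }

          /* thread1: if this unsynchronized early-out runs in the window
           * between thread0's decision and its registration, the newly
           * blocked queue is never flushed. */
          static void signal_path(void)
          {
              /* ... updated max_pending_value ... */
              if (!blocked_queue_count)  /* unlocked read */
                  return;
              /* ... d3d12_device_flush_blocked_queues() ... */
          }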
    • Notice that on relaxed memory architectures, like ARM, something even worse can happen: even after d3d12_device_add_blocked_command_queues() has released its mutex, another thread is not guaranteed to observe the new value of blocked_queue_count unless it first acquires the mutex itself. The reading thread might, for example, keep using an earlier cached value of blocked_queue_count without ever checking that the cache is still up to date. So I think we should really protect the read with the mutex; see the sketch after this comment.
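      Continuing the sketch above, a hedged illustration of the shape of the fix: the reader takes the same mutex, and the lock is the acquire operation that pairs with the writer's unlock (a release), which is what guarantees visibility on weakly ordered hardware:

          #include <stdbool.h>

          /* Correct reader: taking device_mutex pairs (acquire) with the
           * unlock (release) in wait_path() above, so the current counter
           * value is observed.  An unlocked read of the plain variable is
           * also a data race in the C memory model, so the compiler may
           * cache it freely. */
          static bool device_has_blocked_queues(void)
          {
              bool result;

              pthread_mutex_lock(&device_mutex);
              result = blocked_queue_count != 0;
              pthread_mutex_unlock(&device_mutex);

              return result;
          }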

    • I don't think we need to lock the fence mutex before checking max_pending_value. All cases which update max_pending_value lead to a new call to d3d12_command_queue_flush_ops(), except for updates from vkd3d_wait_for_gpu_timeline_semaphore(), which are not relevant because any higher value waited on there has already been included in max_pending_value.

    • As before, I don't think we should assume strong memory models, as I guess we want our code to be portable. And on relaxed memory models you're not guaranteed that memory reads are current unless you've performed some acquire operation, like locking a mutex.

      Specifically, here is a scenario which could go wrong: thread A waits on a fence with a value larger than the current max_pending_value, and therefore marks that queue as blocked. Thread A then continues doing other work until, for some unrelated reason, it begins flushing queues. In d3d12_device_flush_blocked_queues_once() it "steals" all the blocked queues, locks the device mutex, and is about to start processing them when it exhausts its time quantum and has to yield to another thread. On another core, thread B finally signals the fence with a value that would unblock the queue, and updates max_pending_value. When thread B reaches d3d12_device_flush_blocked_queues_once() there is nothing to do, because thread A has temporarily stolen the queues, so thread B resumes its other work. This should not be a problem: as soon as thread A is rescheduled it should see the new max_pending_value and act on it. But, as I said, without sufficient synchronization thread A is not guaranteed to see the max_pending_value that was set by thread B, and it could conclude that there is nothing to do for that fence, incorrectly missing the queue unblock. A sketch of the locked read follows.
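      A hedged sketch of that locked read, with a simplified stand-in struct (the real d3d12_fence has more fields):

          #include <pthread.h>
          #include <stdint.h>

          struct d3d12_fence
          {
              pthread_mutex_t mutex;
              uint64_t max_pending_value;
              /* ... */
          };

          /* Thread A reads max_pending_value only while holding the
           * fence's mutex: the lock acquires against thread B's unlock
           * after it updated the value, so a stale read cannot make
           * thread A skip the unblock. */
          static uint64_t fence_get_max_pending_value(struct d3d12_fence *fence)
          {
              uint64_t value;

              pthread_mutex_lock(&fence->mutex);
              value = fence->max_pending_value;
              pthread_mutex_unlock(&fence->mutex);

              return value;
          }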

    • The device mutex is for protecting the cache, so it should probably be renamed at some point. You're right about creating a separate one for blocked queues.

    • Actually, while I was creating blocked_queues_mutex I also wondered whether the current mutex could be split further. In state.c it seems to protect both cache->render_passes and graphics->compiled_pipelines, and it's not clear to me whether these two uses are independent. If they are, the mutex could be split again; in principle I think it's a good idea to keep locking as granular as possible. But since I don't know that code very well, I didn't venture to change anything. A sketch of such a split follows.
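      A sketch of what such a split could look like, with simplified and partly hypothetical fields; whether render_passes and compiled_pipelines could each get their own lock depends on whether they are ever updated together:

          #include <pthread.h>
          #include <stddef.h>

          struct d3d12_command_queue;

          struct d3d12_device
          {
              pthread_mutex_t mutex;                 /* protects the caches */
              pthread_mutex_t blocked_queues_mutex;  /* protects only the two fields below */
              struct d3d12_command_queue *blocked_queues[8];
              size_t blocked_queue_count;
          };

      Keeping each mutex tied to one clearly named set of fields also documents the locking scheme, which is part of why renaming the existing device mutex was suggested above.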

    • That makes sense too.

  • Conor McCarthy approved this merge request

  • Henri Verbeet approved this merge request

  • added 9 commits

    • d1c2a1cc...240b2f96 - 6 commits from branch wine:master
    • e076fd9c - vkd3d: Do not read blocked_queue_count without holding the device mutex.
    • df360266 - vkd3d: Do not read max_pending_value without holding the fence's mutex.
    • 8e087b0f - vkd3d: Use a dedicated mutex to protect the blocked queues.


  • Alexandre Julliard approved this merge request
