vkd3d: Fix invalid atomic behaviour in the view cache linked list.
There was some discussion on whether this object cache is on such a hot path that it requires arch-specific optimizations (instead of just protecting the object cache with a mutex).
After a few tests with Cyberpunk 2077 (using Conor's branch `sm6_rebase`), it seems that yes, there is some performance loss if we just use a mutex instead of the lock-free implementation: specifically, the game calls `CreateConstantBufferView()` many times and from many threads, causing a lot of mutex contention; also, while for other view types creation and destruction involve some Vulkan calls, for CBVs there is only a little arithmetic and a few atomic operations, so it is conceivable that a contended mutex impacts performance significantly.

added 1 commit
- 2c56c4a3 - vkd3d: Fix invalid atomic behaviour in the view cache linked list.
The performance difference is larger in Horizon Zero Dawn.
I concluded it's unsafe to use tag values stored in each object because we could load a value which becomes stale before the swap. A single tag value is more likely to overflow in 32-bit, but 32-bit use is likely very uncommon and the issue is rare enough that it's unlikely to be coincident with an overflow.
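For context, the single-tag variant discussed here is the classic tagged-pointer answer to the ABA problem. Below is a rough sketch of a pop under that scheme, assuming GCC's generic `__atomic` builtins and a lock-free 16-byte compare-and-swap (e.g. `-mcx16` on x86-64); the names are illustrative, not the code in this MR:

```c
#include <stdbool.h>
#include <stdint.h>

struct cache_entry
{
    struct cache_entry *next;
};

/* Head pointer packed with a generation tag. The tag is bumped on every
 * successful pop, so a head value that went stale because of an intervening
 * pop/push of the same pointer no longer compares equal (the ABA case).
 * 16-byte alignment is what a double-wide CAS needs on x86-64. */
struct tagged_head
{
    _Alignas(16) struct cache_entry *ptr;
    uintptr_t tag;
};

static struct cache_entry *cache_pop(struct tagged_head *head)
{
    struct tagged_head old, desired;

    __atomic_load(head, &old, __ATOMIC_ACQUIRE);
    do
    {
        if (!old.ptr)
            return NULL;
        /* Dereferencing a possibly stale pointer is only safe because cached
         * objects are never returned to the allocator. */
        desired.ptr = old.ptr->next;
        desired.tag = old.tag + 1;
    }
    while (!__atomic_compare_exchange(head, &old, &desired, false,
            __ATOMIC_ACQUIRE, __ATOMIC_ACQUIRE));

    return old.ptr;
}
```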
I previously talked about parts of this with Giovanni on IRC, but the productive thing to do is probably to continue that conversation here.
I have a fairly strong dislike for the direction this is going in. In particular, in no specific order:
- Implementing lock-free data structures correctly is notoriously hard, and we're probably seeing an example of that here. Perhaps more importantly, while implementing these correctly may be hard, reviewing the code is generally even harder.
- Adding inline assembly and architecture-specific code doesn't help.
- Neither does inlining the linked list implementation in the device and object cache code.
- If the issue with using a regular mutex or even a spinlock is contention, perhaps we should try to address that, instead of attempting to make the synchronisation primitives faster. (Do these caches need to be global to the device? Could we e.g. make them local to the CPU core or thread accessing them?)
- In the case of CBVs in particular, given the number of them that applications appear to create and destroy per frame, as well as the fact that these are fairly small structures, allocating them individually using vkd3d_malloc() seems less than ideal. (I.e., I imagine we'd want to use slab allocation for these in order to improve both locality and allocation overhead.)
-
Making caches local to the thread has the potential for some thorny edge cases where one thread creates more than it frees, and another does the opposite (on copied descriptors), so its cache grows until the system is out of memory.
The new implementation is much simpler than the 128-bit CAS version and has about the same performance. It's somewhat similar to the old mutex array scheme.
I tried a slab allocator a while back and it had no effect on performance, but it would use less memory so may be worth revisiting.
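For reference, the slab idea being discussed might look roughly like the sketch below: one allocation backs many small cached objects, which are then threaded onto the existing free list. The names and the fixed slab size are illustrative, not code from the branch.

```c
#include <stdlib.h>

#define SLAB_OBJECT_COUNT 64 /* arbitrary, for illustration */

/* Fixed-size cached object; the real payload (view/CBV data) would live
 * alongside the next pointer. */
struct slab_object
{
    struct slab_object *next;
};

/* Carve one malloc'd block into SLAB_OBJECT_COUNT objects and return them as
 * a singly linked list, ready to be pushed onto the descriptor object cache.
 * Freeing the slab itself requires remembering the block's base address,
 * which is the part a real implementation has to take care of. */
static struct slab_object *slab_carve(void)
{
    struct slab_object *objects, *list = NULL;
    size_t i;

    if (!(objects = malloc(SLAB_OBJECT_COUNT * sizeof(*objects))))
        return NULL;

    for (i = 0; i < SLAB_OBJECT_COUNT; ++i)
    {
        objects[i].next = list;
        list = &objects[i];
    }

    return list;
}
```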
    55 55     free(ptr);
    56 56 }
    57 57
       58 static inline void *vkd3d_malloc_aligned(size_t size, size_t alignment)
       59 {
       60     /* aligned_alloc() requires C11. */
       61     void *p = malloc(size);
       62
       63     /* Systems which support double-wide CAS should return a pointer with the required alignment. */
       64     if ((intptr_t)p & (alignment - 1))

This doesn't look safe.
`malloc` is only required to return an alignment suitable for built-in POD types, but it may have a larger alignment depending on where it's able to find available memory, resulting in this spuriously succeeding or failing. If you can't use win32's `_aligned_malloc`/`_aligned_free`, or POSIX's `posix_memalign`, you may need to manually align with over-allocation.

changed this line in version 3 of the diff
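For reference, manual alignment by over-allocation (the approach suggested above) could look roughly like the following; the helper names and the power-of-two alignment assumption are mine, not code from this MR:

```c
#include <stdint.h>
#include <stdlib.h>

/* Allocate enough extra room to find an address with the requested
 * (power-of-two) alignment, and stash the pointer malloc() returned just
 * below that address so it can be recovered when freeing. */
static inline void *vkd3d_malloc_aligned(size_t size, size_t alignment)
{
    void *p, **aligned;

    if (!(p = malloc(size + alignment + sizeof(void *))))
        return NULL;

    aligned = (void **)(((uintptr_t)p + sizeof(void *) + alignment - 1) & ~((uintptr_t)alignment - 1));
    aligned[-1] = p;

    return aligned;
}

static inline void vkd3d_free_aligned(void *p)
{
    if (p)
        free(((void **)p)[-1]);
}
```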
added 1 commit
- 620e4f85 - vkd3d: Fix invalid atomic behaviour in the view cache linked list.
added 1 commit
- 66514be1 - vkd3d: Fix invalid atomic behaviour in the view cache linked list.
Horizon Zero Dawn benefits a bit from using 16 heads instead of 8, which is also more future-proof. Like Cyberpunk, it creates many CBVs per frame, as does SotTR.
It's possible on x86_64 to eliminate view objects for all buffer types with the `buffer_device_address` extension, which reduces the buffer info to a 64-bit address and a 64-bit size; these can be written to the descriptor with a single `_mm_stream_si128` or `_mm_store_si128` instruction. But this would also require descriptor buffers, because they are the only way to write a Vulkan buffer descriptor with a device address. It doesn't seem worthwhile at the moment.

    1690 1690 HRESULT vkd3d_uav_clear_state_init(struct vkd3d_uav_clear_state *state, struct d3d12_device *device);
    1691 1691 void vkd3d_uav_clear_state_cleanup(struct vkd3d_uav_clear_state *state, struct d3d12_device *device);
    1692 1692
         1693 struct desc_object_cache_head
         1694 {
         1695     void *head;
         1696     unsigned int spinlock;
         1697 };

Comment on lines +1693 to +1697
In theory it is advisable to avoid putting more than one spinlock on the same cache line, because the line would be contended by different cores even when they mean to operate on different spinlocks. I guess that would amount to ensuring that `struct desc_object_cache_head` is padded and aligned to the size of a cache line. On Intel architectures a cache line is 64 bytes, so you are currently putting four spinlocks in the same line.

That, of course, could turn out to be the usual theoretical thing that doesn't matter at all in practice, but maybe it's worth a try.
Unfortunately C doesn't (as far as I know) offer a portable way to query the cache line size at compilation time (as C++17 does). Experimenting a little with the compiler explorer, it seems that most architectures are either 32 or 64 bytes, with PowerPC being 128 bytes and ARM64 possibly even 256 bytes. Given that we mostly care about Intel and ARM, I guess we can just settle for 64, with 256 for ARM64.
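To illustrate, padding each head out to its own cache line could look roughly like this, assuming C11 `_Alignas` (or GCC's `aligned` attribute) and a hard-coded 64-byte line size; this is a sketch of the suggestion, not code from the MR:

```c
/* Assumed line size; PowerPC would want 128 and some ARM64 parts 256. */
#define VKD3D_CACHE_LINE_SIZE 64

/* Aligning the first member to the line size also rounds the struct size up
 * to a multiple of it, so consecutive array elements land on distinct cache
 * lines and cores spinning on different locks don't falsely share a line. */
struct desc_object_cache_head
{
    _Alignas(VKD3D_CACHE_LINE_SIZE) void *head;
    unsigned int spinlock;
};
```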
    2285      /* Objects are cached so that vkd3d_view_incref() can safely check the refcount
    2286       * of an object freed by another thread. */
         2285 #define HEAD_INDEX_MASK (ARRAY_SIZE(cache->heads) - 1)
         2286
         2287 /* Objects are cached so that vkd3d_view_incref() can safely check the refcount of an
         2288  * object freed by another thread. This could be implemented as a single atomic linked
         2289  * list, but it requires handling the ABA problem, which brings issues with cross-platform
         2290  * support, compiler support, and non-universal x86-64 support for 128-bit CAS. */
    2287 2291 static void *vkd3d_desc_object_cache_get(struct vkd3d_desc_object_cache *cache)
    2288 2292 {
    2289 2293     union d3d12_desc_object u;
    2290          void *next;
         2294     unsigned int i;
    2291 2295
    2292          do
         2296     STATIC_ASSERT(!(ARRAY_SIZE(cache->heads) & HEAD_INDEX_MASK));

Making caches local to the thread has the potential for some thorny edge cases where one thread creates more than it frees, and another does the opposite (on copied descriptors), so its cache grows until the system is out of memory.
Right, thread local caches would need occasional rebalancing against a global cache. The nice thing about them is that they're essentially wait-free though, and even with such worst case behaviour you'd have less contention than with just the global cache.
In principle we could actually get per-CPU caches on Linux using RSEQs (restartable sequences), but unfortunately I'm not aware of any equivalent Win32 mechanism.
The new implementation is much simpler than the 128-bit CAS version and has about the same performance. It's somewhat similar to the old mutex array scheme.
Yeah, conceptually I like this much better. (And fwiw, that's a fairly standard scheme usually referred to as mutex/lock striping.) This does still have quite a number of atomic operations in the hot path though.
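For readers unfamiliar with the term, lock striping just means splitting one shared structure across an array of independently locked buckets. A generic sketch (using a plain pthread mutex per stripe for brevity, and hypothetical names rather than the vkd3d code) might look like:

```c
#include <pthread.h>
#include <stddef.h>

#define STRIPE_COUNT 16 /* power of two so the index can be masked */

struct cache_entry
{
    struct cache_entry *next;
};

/* One independently locked list head per stripe; unrelated threads usually
 * hit different stripes and so rarely contend on the same lock. The caller
 * is assumed to have initialised every lock, e.g. with pthread_mutex_init(). */
struct stripe
{
    pthread_mutex_t lock;
    struct cache_entry *head;
};

static struct cache_entry *striped_pop(struct stripe stripes[STRIPE_COUNT], unsigned int hint)
{
    struct cache_entry *e = NULL;
    unsigned int i;

    /* Start at a caller-supplied hint (e.g. derived from the thread id) and
     * probe the remaining stripes only if the preferred ones are empty. */
    for (i = 0; i < STRIPE_COUNT && !e; ++i)
    {
        struct stripe *s = &stripes[(hint + i) & (STRIPE_COUNT - 1)];

        pthread_mutex_lock(&s->lock);
        if ((e = s->head))
            s->head = e->next;
        pthread_mutex_unlock(&s->lock);
    }

    return e;
}
```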
Unfortunately C doesn't (as far as I know) offer a portable way to query the cache line size at compilation time (as C++17 does). Experimenting a little with the compiler explorer, it seems that most architectures are either 32 or 64 bytes, with PowerPC being 128 bytes and ARM64 possibly even 256 bytes. Given that we mostly care about Intel and ARM, I guess we can just settle for 64, with 256 for ARM64.
Recent gcc versions have __GCC_DESTRUCTIVE_SIZE. That's not portable of course, but we could easily do something along the lines of
    #ifdef __GCC_DESTRUCTIVE_SIZE
    # define VKD3D_DESTRUCTIVE_SIZE __GCC_DESTRUCTIVE_SIZE
    #elif ...
    # define VKD3D_DESTRUCTIVE_SIZE ...
    #else
    # define VKD3D_DESTRUCTIVE_SIZE 64
    #endif
Spinning is the big performance killer. That seems to be the case for mutexes too because entry uses spinlocking. I see no measurable performance gain from a 64-byte alignment, but there is always the chance of gains on other hardware. FWIW the old 128-bit CAS implementation was only very slightly slower than this despite using a single atomic value.
I did some measurements with Cyberpunk 2077 to see how many times we need to spin (i.e., execute the `for` loop) on average for each call to `vkd3d_desc_object_cache_get()`. Results seem to be good: the ratio never reaches 2. It starts at 1, then grows a bit towards 1.5-1.6, then it decreases back, seemingly converging to 1. That means that after some transient we basically never spin more than once for each call to `vkd3d_desc_object_cache_get()`.

I think the MR is already good enough to be accepted. Further optimizations like tuning the cache size or thread-local caches could be considered in the future if some more performance has to be squeezed out (though I wouldn't oppose having them immediately if anybody wants to implement them right away).
Spinning is the big performance killer. That seems to be the case for mutexes too because entry uses spinlocking. I see no measurable performance gain from a 64-byte alignment,
I imagine at least part of that is due to the atomic operations on cache->next_index and cache->free_count in vkd3d_desc_object_cache_get() and vkd3d_desc_object_cache_push().
I did some measurements with Cyberpunk 2077 to see how many times we need to spin (i.e., execute the `for` loop) on average for each call to `vkd3d_desc_object_cache_get()`. Results seem to be good: the ratio never reaches 2. It starts at 1, then grows a bit towards 1.5-1.6, then it decreases back, seemingly converging to 1. That means that after some transient we basically never spin more than once for each call to `vkd3d_desc_object_cache_get()`.

So it essentially gets rid of the contention; that's great to know.
I think the MR is already good enough to be accepted. Further optimizations like tuning the cache size or thread-local caches could be considered in the future if some more performance has to be squeezed out (though I wouldn't oppose having them immediately if anybody wants to implement them right away).
I think it's an improvement too, so I'll approve this. I do think there's further room for improvement though, both in terms of performance and in terms of code quality, and I'd prefer seeing those sooner rather than later. (E.g., I don't like the magic "16"; I don't like that we're rolling our own spinlocks here; I don't like the number of atomic operations in what's supposed to be a hot path.)
I think it's an improvement too, so I'll approve this. I do think there's further room for improvement though, both in terms of performance and in terms of code quality, and I'd prefer seeing those sooner rather than later. (E.g., I don't like the magic "16"; I don't like that we're rolling our own spinlocks here; I don't like the number of atomic operations in what's supposed to be a hot path.)
I gave this some more thought, and I'm not sure I like the idea of using spinlocks (either implemented by us or by others) any more. They're not wait-free and they don't coordinate with the operating system, meaning that if a thread is suspended while holding a spinlock, any other thread trying to acquire the same spinlock will spin busily for an entire scheduling quantum (or more). In our case that's slightly different because there is striping, but there are still scenarios in which that can fail (depending on the number of active threads, CPUs and stripe buckets), so I don't like it. While I understand the engineering problems of the wait-free option with its CPU-specific code, I can't help thinking that after some initial investment it would remove some opportunities for stuttering that may be even harder to reproduce and debug later.