- Feb 24, 2025
-
Conor McCarthy authored
-
Conor McCarthy authored
-
Conor McCarthy authored
-
Conor McCarthy authored
-
- Feb 20, 2025
-
Giovanni Mascellani authored
If the root signature wasn't explicitly specified. This fixes a failure in The Touryst.
-
Giovanni Mascellani authored
-
Henri Verbeet authored
-
Henri Verbeet authored
-
Henri Verbeet authored
-
Henri Verbeet authored
The DXIL parser doesn't need them.
-
Henri Verbeet authored
We currently generate our own I/O signatures inside the DXIL parser, but use the element counts from the DXBC container signatures to allocate the input_params/output_params/patch_constant_params arrays. That happens to work for well-behaved inputs, but it's asking for trouble.
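A minimal sketch of the hazard, with hypothetical names (the real code differs): the array is sized from the container signature, but populated from the DXIL parser's own element count, so a malformed input where the two disagree writes out of bounds.

    /* Hypothetical sketch: the count comes from the DXBC container
     * signature, the elements from the DXIL parser; nothing ties the
     * two together for a malformed input. */
    params = vkd3d_calloc(container_signature->element_count, sizeof(*params));
    for (i = 0; i < dxil_element_count; ++i)  /* may exceed the allocation */
        params[i] = build_param_from_dxil(sm6, i);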
-
Henri Verbeet authored
vkd3d-shader/dxbc: Update the vkd3d_shader_parse_input_signature() documentation for dxbc-dxil shaders.
-
Elizabeth Figura authored
-
Elizabeth Figura authored
-
Elizabeth Figura authored
We already check for error instructions when parsing swizzles, but if allocation fails at codegen time we would like to avoid asserting when subsequently constructing a swizzle.
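A minimal sketch of the guard, with names modelled loosely on the HLSL IR and to be read as illustrative rather than the actual change:

    /* Hypothetical sketch: if an earlier allocation failure produced an
     * error-typed value, return it unchanged instead of asserting on the
     * operand's type class when building the swizzle. */
    static struct hlsl_ir_node *add_swizzle(struct hlsl_ctx *ctx,
            uint32_t swizzle, struct hlsl_ir_node *val)
    {
        if (val->data_type->class == HLSL_CLASS_ERROR)
            return val;
        assert(val->data_type->class <= HLSL_CLASS_VECTOR);
        return hlsl_new_swizzle(ctx, swizzle, 4, val, &val->loc);
    }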
-
Elizabeth Figura authored
Similar to d1c2ae3f, this is a bit too strict and may prevent e.g. simultaneous use of float and float1 at codegen time. In this case, however, the motivation is that when allocation fails at codegen time, we would like to allow one or more arguments to have error type.
-
Elizabeth Figura authored
The primary motivation here is to avoid needing to worry about instructions potentially pointing to the preallocated error instruction in the case of allocation failure. This doesn't cover all passes, but none of the other passes make assumptions about instruction sources.
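For context, a sketch of the pattern in question, with illustrative names: with a preallocated error instruction, constructors return it instead of NULL on out-of-memory, so any pass that runs afterwards must not assume anything about such an instruction's sources.

    /* Illustrative: on allocation failure, hand back the shared,
     * preallocated error instruction rather than NULL. */
    if (!(node = hlsl_alloc(ctx, sizeof(*node))))
        return ctx->error_instr;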
-
Francisco Casas authored
This is because lower_nonconstant_array_loads() can potentially turn nonconstant loads into constant loads, allowing copy-prop to turn these loads into previous instructions, which might help other passes as well.

This patch lowers the number of required temps for the following ps_2_0 shader from 19 to 16:

    int i;
    float3x3 mats[4];

    float4 main() : sv_target
    {
        return mul(mats[i], float3(1, 2, 3)).xyzz;
    }
-
Francisco Casas authored
-
Francisco Casas authored
This can save a significant number of temp registers, because it allows us to avoid referencing the temp (and having to store it) when not needed. For instance, this patch lowers the number of required temps for the following ps_2_0 shader from 24 to 19:

    int i;
    float3x3 mats[4];

    float4 main() : sv_target
    {
        return mul(mats[i], float3(1, 2, 3)).xyzz;
    }

It is also needed for SM1 vertex shader relative addressing, since non-constant loads are required to be directly on the uniform ('c' registers) instead of the temp, and non-constant loads cannot be transformed by copy propagation.
-
Elizabeth Figura authored
Fix the last few places that care.
-
Francisco Casas authored
-
Nikolay Sivov authored
Signed-off-by: Nikolay Sivov <nsivov@codeweavers.com>
-
- Feb 19, 2025
-
Giovanni Mascellani authored
Since 4a94bfc2 we segregate different D3D12 descriptor types into different Vulkan descriptor sets. This change was introduced to reduce descriptor waste when allocating a new descriptor pool; that can be very useful when using virtual heaps, which often have to cycle through many descriptors, but it is expected to have limited impact for Vulkan heaps, given that in that case most descriptors are allocated through the descriptor heap rather than through the command allocator.

Instead, it has a rather detrimental effect with Vulkan heaps, because it tends to use many more Vulkan descriptor sets than necessary, often with just a handful of descriptors each. This causes a regression on some Vulkan implementations that support too few descriptor sets.

With this change we revert to a situation similar to before, stuffing all the descriptors that do not live in a root descriptor table into as few descriptor sets as possible (at most one or two, depending on whether push descriptors are used).
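A minimal Vulkan sketch of the "few sets" arrangement this returns to; binding numbers and descriptor counts are illustrative, not vkd3d's actual values. Descriptors outside root descriptor tables share a single set layout, one binding per descriptor type:

    #include <vulkan/vulkan.h>

    static const VkDescriptorSetLayoutBinding static_bindings[] =
    {
        {0, VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, 8, VK_SHADER_STAGE_ALL, NULL},
        {1, VK_DESCRIPTOR_TYPE_SAMPLED_IMAGE,  8, VK_SHADER_STAGE_ALL, NULL},
        {2, VK_DESCRIPTOR_TYPE_STORAGE_IMAGE,  8, VK_SHADER_STAGE_ALL, NULL},
    };

    static VkResult create_static_set_layout(VkDevice device,
            VkDescriptorSetLayout *layout)
    {
        const VkDescriptorSetLayoutCreateInfo info =
        {
            .sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO,
            .bindingCount = sizeof(static_bindings) / sizeof(*static_bindings),
            .pBindings = static_bindings,
        };

        /* One set for everything outside root descriptor tables, instead
         * of one set per D3D12 descriptor type. */
        return vkCreateDescriptorSetLayout(device, &info, NULL, layout);
    }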
-
Giovanni Mascellani authored
Soon it won't necessarily be used for push descriptors anymore, but it will still contain root descriptors.
-
Conor McCarthy authored
-
Conor McCarthy authored
-
Conor McCarthy authored
-
Conor McCarthy authored
-
Conor McCarthy authored
-
Conor McCarthy authored
-
Giovanni Mascellani authored
We're already implicitly using it for image layouts in which either depth or stencil is writeable and the other is not. Correspondingly, add the _KHR suffix in those cases, so the extension usage is more evident.

According to the Vulkan Hardware Database, only four reports without this extension were filed since 2023, and all of them for configurations we likely don't target.
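A minimal sketch of what requiring the extension looks like at device creation, to be read as illustrative rather than vkd3d's actual initialization code: the feature bit is enabled, after which layouts with the _KHR suffix such as VK_IMAGE_LAYOUT_DEPTH_ATTACHMENT_OPTIMAL_KHR may be used explicitly.

    #include <vulkan/vulkan.h>

    /* Illustrative: enable the extension and its feature bit when
     * creating the device. */
    VkPhysicalDeviceSeparateDepthStencilLayoutsFeaturesKHR separate_ds =
    {
        .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_SEPARATE_DEPTH_STENCIL_LAYOUTS_FEATURES_KHR,
        .separateDepthStencilLayouts = VK_TRUE,
    };
    const char *const extensions[] =
        {VK_KHR_SEPARATE_DEPTH_STENCIL_LAYOUTS_EXTENSION_NAME};
    const VkDeviceCreateInfo device_info =
    {
        .sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO,
        .pNext = &separate_ds,
        .enabledExtensionCount = 1,
        .ppEnabledExtensionNames = extensions,
    };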
-
Francisco Casas authored
-
Francisco Casas authored
-
Francisco Casas authored
-
Francisco Casas authored
This could be useful, since there are many shaders that contain `#include` directives or use parameter-defined macros, and we can't reproduce bugs from the source alone.
-
Giovanni Mascellani authored
The current TPF validator enforces that for each register involved in a DCL_INDEX_RANGE instruction there must be a signature element for that register and the DCL_INDEX_RANGE write mask. This is an excessively strong requirement, and causes some shaders from The Falconeer to be incorrectly rejected.

The excessively strong check was needed to avoid triggering a bug in the I/O normaliser. Since that bug is now solved, the check can be relaxed.
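A sketch of the check being relaxed, with illustrative names: every register spanned by the range had to have a signature element matching both that register and the declaration's write mask.

    /* Illustrative: the old, overly strict validation. */
    for (reg = range->first_register;
            reg < range->first_register + range->count; ++reg)
    {
        if (!find_signature_element(signature, reg, range->write_mask))
            return VKD3D_ERROR_INVALID_SHADER;
    }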
-
Giovanni Mascellani authored
A good part of the I/O normaliser's job is to merge together signature elements that are spanned by DCL_INDEX_RANGE instructions. The current algorithm assumes that each index range touches exactly one signature element for each register index spanned by the range. The assumption is used in shader_signature_merge() in the form of expecting that, if the index range is N registers long, then, once you find the first signature element of an index range, the other elements that will have to be merged with it are exactly the following N-1 according to the order given by signature_element_register_compare() or signature_element_mask_compare(), depending on the signature type.

This doesn't necessarily happen. For example, The Falconeer has a few hull shaders in which this happens:

    hs_fork_phase
    dcl_hs_fork_phase_instance_count 13
    dcl_input vForkInstanceId
    dcl_output o4.z
    dcl_output o5.z
    dcl_output o6.z
    dcl_output o7.z
    dcl_output o12.z
    dcl_output o13.z
    dcl_output o14.z
    dcl_output o15.z
    dcl_output o16.z
    dcl_output o17.z
    dcl_output o18.z
    dcl_output o19.z
    dcl_output o20.z
    dcl_temps 1
    dcl_index_range o4.z 17
    iadd r0.x, vForkInstanceId.x, l(4)
    ult r0.y, vForkInstanceId.x, l(4)
    movc r0.x, r0.y, vForkInstanceId.x, r0.x
    mov o[r0.x + 4].z, l(0)
    ret

Here the index range "skips" o8.z through o11.z, because those registers only use mask .xy. The current algorithm fails on such a shader.

Even depending on the signature element order doesn't look ideal. I don't have a full counterexample for that, but it looks fragile, especially given that the register allocation algorithm in FXC is notoriously full of unexpected corner cases.

We solve both problems by slightly changing the architecture of the normaliser: first we move computing the masks for the merged signature element from signature_element_range_expand_mask(), which is executed while merging signatures, to io_normaliser_add_index_range(), which is executed before merging signatures. Then, while we are merging signatures, we can decide for each individual signature element whether it has to be retained or not, and how it should be patched. The algorithm becomes independent of the order, because each signature element can be processed individually.
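A sketch of the reworked merge step under the new architecture, with illustrative names: with the merged mask computed up front in io_normaliser_add_index_range(), each element can be retained, patched, or dropped on its own, with no dependence on element order.

    /* Illustrative: per-element decision, independent of signature order. */
    for (i = 0; i < signature->element_count; ++i)
    {
        struct signature_element *e = &signature->elements[i];
        const struct index_range *range;

        if (!(range = find_index_range(normaliser, e->register_index, e->mask)))
            continue;                      /* Untouched by any index range. */
        if (e->register_index == range->first_register)
            e->mask = range->merged_mask;  /* Patch the base element. */
        else
            e->retained = false;           /* Merged into the base element. */
    }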
-
Giovanni Mascellani authored
In order to allow it to hold other fields.
-
Giovanni Mascellani authored
The assumptions the I/O normaliser makes on its input program are rather intricate. In theory the VSIR validator checks should be strong enough, but the validator isn't run by default anyway. Whether the TPF parser validation is strong enough is not completely clear to me, and considering that the I/O normaliser could end up being used on different programs as well, it's probably better to revalidate locally just in case.
-