configure: Build with -flto=auto when available.
Have you measured whether this gives any positive result at runtime? It does add some load at compile time: after a quick (and certainly not scientific) benchmark on my laptop, if I change a single file (dxil.c) in a simple way (but not too trivially, so as to prevent ccache hits), it takes me 6.5 seconds to recompile the project. That becomes more than 9.5 seconds with LTO (I guess because the optimizer has to be run on the whole library again, instead of just on the changed file). That's ~50% slower. It's also only three seconds, so not such a big deal, but it would probably be a good idea to check whether we're getting something in return.

I can't say I've done extensive benchmarking, in part because the change seemed straightforward enough and the overhead fairly modest (although I suppose that may be somewhat specific to my setup), but it does at least allow small helper functions like vsir_register_init() to be inlined in places outside ir.c, where that previously wasn't possible.
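To illustrate the inlining point, here is a minimal two-file sketch; the file and symbol names are made up for the example and are not the actual vkd3d-shader code. Without LTO the calling translation unit only sees the helper's prototype and must emit a call, while building and linking with -flto=auto lets the link-time optimizer inline the helper's stores directly into the caller.

```c
/* reg.h -- shared declarations (hypothetical names for the example). */
struct reg
{
    int type;
    unsigned int idx;
};
void reg_init(struct reg *r, int type, unsigned int idx);
void use_reg(struct reg *r);

/* helper.c -- the small helper lives in its own translation unit,
 * in the same way vsir_register_init() lives in ir.c. */
void reg_init(struct reg *r, int type, unsigned int idx)
{
    r->type = type;
    r->idx = idx;
}

/* caller.c -- a separate translation unit, like a caller outside ir.c.
 * Without LTO the compiler only sees the prototype here and must emit a
 * call; building and linking with
 *     gcc -O2 -flto=auto -c helper.c caller.c main.c
 *     gcc -O2 -flto=auto -o demo helper.o caller.o main.o
 * lets the link-time optimizer inline the two stores into use_reg(). */
void use_reg(struct reg *r)
{
    reg_init(r, 1, 0);
}

/* main.c -- trivial driver so the example links and runs. */
int main(void)
{
    struct reg r;
    use_reg(&r);
    return (int)r.idx;
}
```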
I ran a few tests of the DXIL -> SPIR-V compiler under callgrind (measuring instruction fetch counts), on three different sets of 100 shaders randomly taken from commercial games. I got these results:
|             | Shader set 1 | Shader set 2 | Shader set 3 |
|-------------|--------------|--------------|--------------|
| LTO         | 582,991,044  | 672,954,225  | 704,396,238  |
| no LTO      | 600,647,762  | 692,299,378  | 726,273,989  |
| improvement | 3.02%        | 2.87%        | 3.10%        |
So it seems that LTO gains us a ~3% reduction in the number of instructions fetched. I also ran some tests without valgrind, looking at the user time, but the results were too noisy to be useful. I guess I need a more careful methodology if I want to meaningfully measure a 3% margin from user time.
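One way to keep such measurements focused on the compilation itself (rather than process start-up and shader loading) is callgrind's client requests. This is only a sketch: run_compile() below is a hypothetical placeholder for whatever the benchmark harness calls, not a vkd3d-shader entry point.

```c
#include <stdio.h>
#include <valgrind/callgrind.h>

/* Placeholder for the DXIL -> SPIR-V compilation being measured; replace
 * with the real call (this is not a vkd3d-shader API). */
static int run_compile(const char *dxil_path)
{
    printf("compiling %s\n", dxil_path);
    return 0;
}

int main(void)
{
    /* Run as: valgrind --tool=callgrind --collect-atstart=no ./bench
     * Event collection is off until the first toggle, so start-up work
     * doesn't pollute the instruction-fetch counts. */
    CALLGRIND_TOGGLE_COLLECT;
    run_compile("shader.dxil");
    CALLGRIND_TOGGLE_COLLECT;

    /* Flush the counters collected so far to the callgrind output file. */
    CALLGRIND_DUMP_STATS;
    return 0;
}
```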
added 54 commits

- b0ad6e4a...127bcf90 - 53 commits from branch wine:master
- a52a91d7 - configure: Build with -flto=auto when available.