Those are just basic and essential optimizations, nothing too surprising here.
The sum of integers is actually a question I ask developers in interviews (works well from juniors to seniors), with the extra problem of what happens if we were to use floating-point instead of integers.
To provide the solution to the second part of the question: there is no closed-form solution. Since floating-point math is not associative, there's no O(1) optimization that preserves the exact output of the O(n) loop.
Technically there is a closed-form solution as long as the answer is less than 2^24 for a float32 or 2^53 for a float64: below those caps every integer is exactly representable as a floating-point number, and integer addition carried out in floating point gives bit-identical results. I doubt a compiler would catch that one, but it technically could do the optimisation and produce the exact same bits. Of course, this would not hold if `result` were initialised to a non-integer value.
To those who don't know about compiler optimisation, the replacement with a closed form is rather surprising, I'd say; especially when someone with Matt Godbolt's experience, of all people, says it is surprising.
Also, this series is targeted at an audience new to compilers, so it's likely to be surprising to them, even if not to you.
I'm curious what exactly you ask here. I consider myself a decent engineer (for practical purposes), but without a CS degree, and I likely would not have passed that question.
I know compilers can do some crazy optimizations, but I wouldn't have guessed they'd transform something from O(n) to O(1). That said, I still don't feel this has much relevance to my actual job. Such performance knowledge is so abstracted away from everyday programming by database systems and managed offerings like Spark and Snowflake that, unless you intend to work on those systems, it isn't that useful (though being aware that these optimizations happen certainly can be).
I'm actually surprised that GCC doesn't do this! If there's one thing compilers do well, it's pattern-matching on code and replacing it with something more efficient; just try pasting things from Hacker's Delight and watch them always get canonicalised to the equivalent, fastest machine code.
For Matt, the creator of Compiler Explorer, those are surprises. For you, they're essentials. You and the juniors you hire must have deeper knowledge than him.
It’s neat. I wonder if anyone has attempted detecting a graph-coloring problem and replacing it with a constant.