My submission for the 2nd SAP Community Coding Challenge. Feel free to suggest improvements or point out bugs.
const getStartingNumberWithLongestChain = limit => {
    // Buffer the chain lengths in a prefilled array
    let chainLengthBuffer = new Uint32Array(limit);
    chainLengthBuffer[1] = 1;

    const getChainLength = number => {
        let n = number;
        let chainLength = 0;
        // Calculate till the buffer is reached, the remaining
        // length can be read from the buffer
        // The buffer is prefilled at index 1, which is always
        // the last number of the chain
        while (!chainLengthBuffer[n]) {
            // Bitwise n % 2 === 1
            if (n & 1) {
                // If you multiply an odd number with an odd
                // number, you get an odd number
                // Odd number + 1 = even number
                // => you can do odd and then even without
                // checking odd or even in between
                n = 3 * n + 1;
                chainLength++;
            }
            n /= 2;
            chainLength++;
        }
        chainLength += chainLengthBuffer[n];
        // Add result to buffer
        chainLengthBuffer[number] = chainLength;
        return chainLength;
    };

    let startingNumber = 1;
    let longestChainLength = 1;
    for (let i = 1; i < limit; i++) {
        const chainLength = getChainLength(i);
        if (chainLength > longestChainLength) {
            longestChainLength = chainLength;
            startingNumber = i;
        }
    }
    return startingNumber;
};

console.log(getStartingNumberWithLongestChain(1000000));
The answer is: 837799
Click here for a runnable version with proper syntax highlighting: https://repl.it/@AlexMFrank/ccc2
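As a quick sanity check of the chain counting (my own standalone snippet, not part of the submission), a naive, unmemoized version gives the same lengths, counting every number in the sequence including the start and the final 1:

```javascript
// Naive Collatz chain length, no memoization: counts each number
// in the sequence from `start` down to 1, inclusive.
const naiveChainLength = start => {
  let n = start;
  let length = 1;
  while (n !== 1) {
    n = n & 1 ? 3 * n + 1 : n / 2;
    length++;
  }
  return length;
};

console.log(naiveChainLength(6)); // 6 → 3 → 10 → 5 → 16 → 8 → 4 → 2 → 1, so 9
```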
Very cool!
I wrote this inspired by yours, and it runs about twice as fast on my laptop (my previous "fast" version ran at about half that speed...).
No idea why!!!!
Perhaps fewer array lookups? I check the index instead...
Thanks. I hope that you've also submitted your version.
I couldn't explain the big difference, so I did some measurements. In Node.js on a Thinkpad T490, 1 million array lookups took about 3 ms, which is slower than I expected but still didn't explain the execution time difference. However, those were all lookups where the index existed in the array. The lookup time increased when the index was outside the array, and it got really bad once the index went beyond the 32-bit signed integer range: then the 1 million lookups took over 100 ms.
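A rough micro-benchmark along those lines might look like this (my own sketch, not the exact measurement above; array size, iteration count, and timings are illustrative and depend on the machine and engine):

```javascript
// Compare lookups with an in-range index against lookups with an
// index far beyond the 32-bit signed integer range.
const buffer = new Array(1000).fill(1);

const timeLookupsMs = (arr, index, iterations) => {
  const start = process.hrtime.bigint();
  let sink = 0;
  for (let i = 0; i < iterations; i++) {
    // Out-of-bounds reads yield undefined; `|| 0` keeps sink numeric
    sink += arr[index] || 0;
  }
  const end = process.hrtime.bigint();
  return Number(end - start) / 1e6; // nanoseconds to milliseconds
};

const inBounds = timeLookupsMs(buffer, 500, 1_000_000);
const outOfBounds = timeLookupsMs(buffer, 2 ** 32, 1_000_000);
console.log(`in bounds: ${inBounds.toFixed(2)} ms, out of bounds: ${outOfBounds.toFixed(2)} ms`);
```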
So if the line:
while (!chainLengthBuffer[n]) {
in my coding is changed to:
while (n > limit || !chainLengthBuffer[n]) {
the execution times get quite similar.
I submitted a functional one, much more compact and elegant, but about 45 times slower.
I think out-of-bounds access actually grows the array: it stays sparse, but over several thousand accesses it ends up being reallocated a few times.
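One way to check what out-of-bounds access actually does to each kind of array (my own snippet; note the buffer in the solution above is a Uint32Array, which has a fixed length):

```javascript
const plain = new Array(10).fill(0);
const typed = new Uint32Array(10);

// Out-of-bounds reads return undefined and leave the length unchanged
// for both kinds of array.
const plainRead = plain[1000]; // undefined, plain.length stays 10
const typedRead = typed[1000]; // undefined, typed.length stays 10

// An out-of-bounds write does extend a plain array into a sparse
// ("holey") one: indices 10..999 become holes.
plain[1000] = 1; // plain.length is now 1001

// The same write on a typed array is silently ignored.
typed[1000] = 1; // typed.length stays 10

console.log(plain.length, typed.length); // 1001 10
```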
It looks like this issue is JS-engine specific. Google describes V8's behavior here on their blog:
https://v8.dev/blog/elements-kinds#avoid-reading-beyond-the-length-of-the-array
I've created a jsperf test for this and found that in Firefox, lookups with an existing index and lookups without one take about the same time. In Chrome, however, doing a single lookup with an index outside the 32-bit signed integer range destroys your array lookup performance:
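The guard suggested earlier in the thread sidesteps exactly this: checking the index before the lookup means the engine never sees an out-of-range read. A minimal sketch of the two patterns (names are my own, not from the submission):

```javascript
const limit = 1000;
const buffer = new Uint32Array(limit);
buffer[3] = 7; // pretend a chain length is already cached at index 3

// Relies on the out-of-bounds read returning a falsy undefined;
// per the V8 blog post, huge indices can fall off the fast path in Chrome.
const isUncached = n => !buffer[n];

// Bounds-checks first, so the engine only ever performs in-range lookups.
const isUncachedGuarded = n => n >= limit || !buffer[n];

console.log(isUncached(2 ** 40));        // true, but via an out-of-bounds read
console.log(isUncachedGuarded(2 ** 40)); // true, without touching the array
console.log(isUncachedGuarded(3));       // false, cached value found
```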