Why is <= slower than < in V8?


I am reading the slides Breaking the JavaScript Speed Limit with V8, and there is an example like the code below. I cannot figure out why <= is slower than < in this case. Can anybody explain that? Any comments are appreciated.

Slow:

```javascript
this.isPrimeDivisible = function(candidate) {
    for (var i = 1; i <= this.prime_count; ++i) {
        if (candidate % this.primes[i] == 0) return true;
    }
    return false;
}
```

(Hint: primes is an array of length prime_count)

Faster:

```javascript
this.isPrimeDivisible = function(candidate) {
    for (var i = 1; i < this.prime_count; ++i) {
        if (candidate % this.primes[i] == 0) return true;
    }
    return false;
}
```

Other answers and comments point out that the first loop executes one more iteration than the second. That is true, but for an array that grows to 25,000 elements, one iteration more or less would make only a minuscule difference. As a ballpark guess, if we assume the average length as it grows is 12,500, then the difference we might expect is around 1/12,500, or only 0.008%.

The performance difference here is much larger than would be explained by that one extra iteration, and the problem is explained near the end of the presentation.

this.primes is a contiguous array (every element holds a value) and the elements are all numbers.

A JavaScript engine may optimize such an array into a simple array of actual numbers, instead of an array of objects that happen to contain numbers but could also hold other values or no value at all. The first format is much faster to access: it takes less code, and the array is much smaller, so it fits better in cache. But there are some conditions that can prevent this optimized format from being used.
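As a conceptual sketch (the exact representation transitions are engine-internal and may differ between V8 versions), here are the kinds of operations that keep an array eligible for a numbers-only layout versus those that force it into a general layout:

```javascript
// Conceptual sketch: what keeps an array "numbers-only" vs. what breaks it.
// The exact internal representations are engine details, not observable from JS.
let a = [1, 2, 3];      // all numbers: eligible for a compact numbers-only layout
a.push(4);              // still all numbers - still eligible
a.push(4.5);            // still numbers (now doubles) - typically still fast
a.push("five");         // a non-number: the array must now hold arbitrary values

console.log(a.length);  // 6
console.log(a[5]);      // "five"
```

The array still works identically from the program's point of view; only the engine's internal storage strategy (and therefore the speed of element access) changes.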

One condition would be if some of the array elements are missing. For example:

```javascript
let array = [];
array[0] = 10;
array[2] = 20;
```

Now what is the value of array[1]? It has no value. (It isn't even correct to say it has the value undefined - an array element containing the undefined value is different from an array element that is missing entirely.)

There isn't a way to represent this with numbers only, so the JavaScript engine is forced to use the less optimized format. If array[1] contained a numeric value like the other two elements, the array could potentially be optimized into an array of numbers only.
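The distinction between a hole and a stored undefined can be observed directly with the `in` operator, as this small self-contained sketch shows:

```javascript
// A hole is different from an element that stores undefined.
let holey = [];
holey[0] = 10;
holey[2] = 20;                   // index 1 was never assigned: a hole

let dense = [10, undefined, 20]; // index 1 exists and holds undefined

// Both reads produce undefined...
console.log(holey[1]);           // undefined
console.log(dense[1]);           // undefined

// ...but only the dense array actually has an element at index 1:
console.log(1 in holey);         // false - a hole
console.log(1 in dense);         // true  - a real element
```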

Another reason for an array to be forced into the deoptimized format is attempting to access an element outside the bounds of the array, as discussed in the presentation.

The first loop with <= attempts to read an element past the end of the array. This doesn't prevent the algorithm from working correctly, because in the last iteration:

• this.primes[i] evaluates to undefined because i is past the array end.
• candidate % undefined (for any value of candidate) evaluates to NaN.
• NaN == 0 evaluates to false.
• Therefore, the return true is not executed.

So the extra iteration has no effect on the logic. The code produces the same result as it would without the extra iteration.
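The chain of evaluations above can be verified by hand with a tiny sketch (the array contents here are illustrative):

```javascript
// Walking through the last iteration of the <= loop by hand.
const primes = [2, 3, 5];   // suppose prime_count is 3, so the <= loop reads primes[3]
const candidate = 7;

const pastEnd = primes[3];               // reading past the end of the array
console.log(pastEnd);                    // undefined
console.log(candidate % pastEnd);        // NaN
console.log(candidate % pastEnd == 0);   // false - so `return true` is skipped
```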

But to get there, it tried to read a nonexistent element past the end of the array. This forces the array out of optimization - or at least did at the time of this talk.

The second loop with < reads only elements that exist within the array, so it allows an optimized array and code.
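For reference, a bounds-safe version of the same check (rewritten here as a standalone function for illustration) touches only indices that exist, so the engine can keep the compact numbers-only representation:

```javascript
// Bounds-safe version: every index the loop touches exists in the array,
// so no out-of-bounds read can deoptimize the array's representation.
function isPrimeDivisible(candidate, primes) {
    for (let i = 0; i < primes.length; i++) {  // never reads past the end
        if (candidate % primes[i] === 0) return true;
    }
    return false;
}

console.log(isPrimeDivisible(10, [2, 3, 5])); // true  (10 % 2 === 0)
console.log(isPrimeDivisible(7,  [2, 3, 5])); // false
```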

The problem is described in pages 90-91 of the talk, with related discussion in the pages before and after that.

I happened to attend this very Google I/O presentation and talked with the speaker (one of the V8 authors) afterward. I had been using a technique in my own code that involved reading past the end of an array as a misguided (in hindsight) attempt to optimize one particular situation. He confirmed that even just reading past the end of an array would prevent the simple optimized format from being used.

If what the V8 author said still holds, then reading past the end of the array prevents it from being optimized, forcing a fall back to the slower format.

Now it's possible that V8 has been improved in the meantime to efficiently handle this case, or that other JavaScript engines handle it differently. I don't know one way or the other on that, but this deoptimization is what the presentation was talking about.