Why is if (variable1 % variable2 == 0) inefficient?


I am new to Java, and was running some code last night, and this really bothered me. I was building a simple program to display every X outputs in a for loop, and I noticed a MASSIVE decrease in performance when I used modulus as variable % variable versus variable % 5000 or whatnot. Can someone explain to me why this is and what is causing it, so I can be better?

Here is the "efficient" code (sorry if I get a bit of syntax wrong, I'm not on the computer with the code right now):

long startNum = 0;
long stopNum = 1000000000L;

for (long i = startNum; i <= stopNum; i++) {
    if (i % 50000 == 0) {
        System.out.println(i);
    }
}

Here is the "inefficient" code:

long startNum = 0;
long stopNum = 1000000000L;
long progressCheck = 50000;

for (long i = startNum; i <= stopNum; i++) {
    if (i % progressCheck == 0) {
        System.out.println(i);
    }
}

Mind you, I had a Date variable to measure the differences, and once the loop became long enough, the first one took 50 ms while the other took 12 seconds or something like that. You may have to increase stopNum or decrease progressCheck if your PC is more efficient than mine, or whatnot.

I looked for this question around the web, but I can't find an answer, maybe I'm just not asking it right.

EDIT: I did not expect my question to be so popular; I appreciate all the answers. I did perform a benchmark on each half in time taken, and the inefficient code took considerably longer: 1/4 of a second vs 10 seconds, give or take. Granted, they are using println, but they are both doing the same amount of it, so I wouldn't imagine that would skew it much, especially since the discrepancy is repeatable. As for the answers, since I am new to Java, I will let votes decide for now which answer is best. I will try to pick one by Wednesday.

EDIT2: I am going to make another test tonight where, instead of modulus, it just increments a variable, and when that variable reaches progressCheck, it will perform one output and then reset the variable to 0, for a 3rd option.
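For reference, the counter-based alternative described in EDIT2 could look something like the sketch below. The class and method names are my own, and the bound is much smaller than in the question so it runs quickly; the point is only that the counter fires exactly where i % progressCheck == 0 would, with no division in the loop:

```java
public class CounterCheck {
    // Counts how many "progress" outputs occur between 0 and stopNum (inclusive)
    // using an incrementing counter instead of the modulo operator.
    static long countTicks(long stopNum, long progressCheck) {
        long counter = 0;
        long ticks = 0;
        for (long i = 0; i <= stopNum; i++) {
            if (counter == 0) {          // fires exactly when i % progressCheck == 0
                ticks++;
            }
            counter++;
            if (counter == progressCheck) {
                counter = 0;             // reset instead of dividing
            }
        }
        return ticks;
    }

    public static void main(String[] args) {
        // Ticks at i = 0, 50_000, 100_000, ..., 1_000_000 -> 21 in total.
        System.out.println(countTicks(1_000_000L, 50_000));
    }
}
```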


You are measuring the OSR (on-stack replacement) stub.

An OSR stub is a special version of a compiled method, intended specifically for transferring execution from interpreted mode to compiled code while the method is running.

OSR stubs are not as optimized as regular methods, because they need a frame layout compatible with the interpreted frame. I showed this already in the following answers: 1, 2, 3.

A similar thing happens here, too. While the "inefficient code" is running a long loop, the method is compiled specially for the on-stack replacement right inside the loop. The state is transferred from the interpreted frame to the OSR-compiled method, and this state includes the progressCheck local variable. At this point the JIT cannot replace the variable with a constant, and thus cannot apply certain optimizations like strength reduction.

In particular, this means the JIT does not replace the integer division with multiplication.
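To illustrate what that strength reduction looks like: when the divisor is a compile-time constant, the division can be rewritten as a multiplication by a "magic" reciprocal plus a shift, and the remainder then becomes a multiply and a subtract. The sketch below shows the idea for non-negative ints divided by 3; it is an illustration of the technique, not the exact code HotSpot emits:

```java
public class MagicDivide {
    // 0x55555556 is ceil(2^32 / 3). Multiplying by it and keeping the high
    // 32 bits computes n / 3 for any non-negative int n, with no division.
    static int divideBy3(int n) {
        return (int) ((n * 0x55555556L) >>> 32);
    }

    // The remainder is then just a multiply and a subtract.
    static int mod3(int n) {
        return n - 3 * divideBy3(n);
    }

    public static void main(String[] args) {
        for (int n = 0; n < 1000; n++) {
            if (mod3(n) != n % 3) {
                throw new AssertionError("mismatch at " + n);
            }
        }
        System.out.println("magic multiply matches % for 0..999");
    }
}
```

The JIT can only emit this form when the divisor's value is known at compile time, which is exactly what the OSR stub loses when progressCheck arrives as a live local variable.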

However, if you run the same method several times, the second and subsequent runs will execute the regular (non-OSR) code, which is fully optimized. Here is a benchmark to prove the theory:

@State(Scope.Benchmark)
public class Div {

    @Benchmark
    public void divConst(Blackhole blackhole) {
        long startNum = 0;
        long stopNum = 100000000L;

        for (long i = startNum; i <= stopNum; i++) {
            if (i % 50000 == 0) {
                blackhole.consume(i);
            }
        }
    }

    @Benchmark
    public void divVar(Blackhole blackhole) {
        long startNum = 0;
        long stopNum = 100000000L;
        long progressCheck = 50000;

        for (long i = startNum; i <= stopNum; i++) {
            if (i % progressCheck == 0) {
                blackhole.consume(i);
            }
        }
    }
}

And the results:

# Benchmark: bench.Div.divConst

# Run progress: 0,00% complete, ETA 00:00:16
# Fork: 1 of 1
# Warmup Iteration   1: 126,967 ms/op
# Warmup Iteration   2: 105,660 ms/op
# Warmup Iteration   3: 106,205 ms/op
Iteration   1: 105,620 ms/op
Iteration   2: 105,789 ms/op
Iteration   3: 105,915 ms/op
Iteration   4: 105,629 ms/op
Iteration   5: 105,632 ms/op


# Benchmark: bench.Div.divVar

# Run progress: 50,00% complete, ETA 00:00:09
# Fork: 1 of 1
# Warmup Iteration   1: 844,708 ms/op          <-- much slower!
# Warmup Iteration   2: 105,893 ms/op          <-- as fast as divConst
# Warmup Iteration   3: 105,601 ms/op
Iteration   1: 105,570 ms/op
Iteration   2: 105,475 ms/op
Iteration   3: 105,702 ms/op
Iteration   4: 105,535 ms/op
Iteration   5: 105,766 ms/op

The very first iteration of divVar is indeed much slower, because of the inefficiently compiled OSR stub. But as soon as the method is rerun from the beginning, the new, unconstrained version is executed, which leverages all the available compiler optimizations.
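You can observe the same effect without JMH with a rough sketch like the one below (class and method names are mine, and exact timings depend heavily on the JVM and machine, so treat the printed numbers as indicative only). The first call of loop is likely compiled via OSR while the loop is already running; the second call starts directly in the regular, fully optimized compilation:

```java
public class WarmupDemo {
    // Same loop body as the "inefficient" code, but counting hits
    // instead of printing, so the timing is not dominated by I/O.
    static long loop(long stopNum, long progressCheck) {
        long hits = 0;
        for (long i = 0; i <= stopNum; i++) {
            if (i % progressCheck == 0) {
                hits++;
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        // First call: the method is likely compiled via OSR mid-loop.
        long t0 = System.nanoTime();
        loop(100_000_000L, 50_000);
        long first = System.nanoTime() - t0;

        // Second call: starts in the regular (non-OSR) compiled version.
        t0 = System.nanoTime();
        loop(100_000_000L, 50_000);
        long second = System.nanoTime() - t0;

        System.out.printf("first: %d ms, second: %d ms%n",
                first / 1_000_000, second / 1_000_000);
    }
}
```

On a typical HotSpot JVM the first call should be noticeably slower than the second, mirroring the warmup-iteration pattern in the JMH output above.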

