List:       openjdk-discuss
Subject:    Re: Question to JMH and JIT
From:       Ngor <ngortheone () gmail ! com>
Date:       2019-02-04 19:37:12
Message-ID: CAKJNb8-bCdd=yyfKmjezSOSc8YgyduajFT0eLFoCwduV2G9ONw () mail ! gmail ! com

In addition, I highly recommend reading this paper on the topic:

Statistically Rigorous Java Performance Evaluation
https://dri.es/files/oopsla07-georges.pdf

On Mon, Feb 4, 2019 at 7:44 AM Herr Knack <koehlerkokser@gmail.com> wrote:

> Hey there,
>
> I am currently trying to find a mathematical model that predicts the
> approximate runtime of a function term from its basic structure on my
> machine (Ubuntu 16.04, i7-5500U, 2.40GHz×4). For the first
> microbenchmark measurements I used JMH and measured the throughput of
> some simple functions (4 forks, with 3 warmup and 5 measurement
> iterations per fork, 10 seconds per iteration).
>
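
Not your harness, but a minimal sketch of the setup you describe could
look roughly like this (class, method and field names are made up for
illustration). The important parts are that x comes from @State rather
than being a compile-time constant, and that the result is returned, so
the JIT can neither constant-fold the whole term nor throw the
computation away as dead code:

    import java.util.concurrent.TimeUnit;
    import org.openjdk.jmh.annotations.*;

    @BenchmarkMode(Mode.Throughput)
    @OutputTimeUnit(TimeUnit.SECONDS)
    @Fork(4)
    @Warmup(iterations = 3, time = 10, timeUnit = TimeUnit.SECONDS)
    @Measurement(iterations = 5, time = 10, timeUnit = TimeUnit.SECONDS)
    @State(Scope.Thread)
    public class TermBench {

        // Input read from benchmark state, not a literal, so the JIT
        // cannot fold the whole expression into a constant.
        double x = 42.0;

        // First term from your list: 15 operations (3+, 11*, 1/).
        // JMH consumes the returned value, which prevents dead-code
        // elimination of the computation.
        @Benchmark
        public double firstTerm() {
            return (((0.5 * x) * 0.5)
                    * (((((0.5 * x) * 0.5) / 0.5) + (((0.5 * x) * 0.5) * 0.5))
                    * (((0.5 * x) * 0.5) + x))) + x;
        }
    }

If your benchmark methods do not look roughly like this, the numbers may
say more about the harness than about the term structure.
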
> I have found that it is really difficult to see the connection between
> the structure of the function term and its runtime in the
> microbenchmark. I assume that the JIT compiler does some weird stuff to
> optimize the functions, but I can't see which optimizations are done
> there in detail.
>
> Could the JIT be the problem? Does anyone know whether the workflow of
> JIT optimizations is documented anywhere?
>
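
You can also ask the VM what it is doing instead of guessing. A few
things that exist out of the box (the jar name below is just the default
JMH uber-jar name; perfasm additionally needs Linux perf and the hsdis
library installed):

    java -XX:+PrintCompilation -jar benchmarks.jar
    java -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining -jar benchmarks.jar
    java -jar benchmarks.jar -prof perfasm

PrintCompilation shows which methods get compiled and when, PrintInlining
shows inlining decisions, and JMH's perfasm profiler shows the generated
assembly for the hottest regions, which is the most direct way to see
what the JIT actually did to your terms.
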
> I hope someone can help, because this is driving me crazy. Here are
> some results I got. I converted the throughput (runs/sec) to an average
> time per run in ns. The standard deviation is smaller than you might
> think, so the average time is most likely accurate to within +/-0.3 ns:
>
> - (((0.5 * x) * 0.5) * (((((0.5 * x) * 0.5) / 0.5) + (((0.5 * x) * 0.5) *
> 0.5)) * (((0.5 * x) * 0.5) + x))) + x, 15 operations (3+, 11*, 1/), 28.09
> ns
>
> - x + ((((0.5 * x) * (0.5 * x)) * (0.5 * x)) + ((0.5 * x) * ((((0.5 * x) *
> (0.5 * x)) * (0.5 * x)) * (((0.5 * x) * (0.5 * x)) / ((((0.5 * x) * ((0.5 *
> x) * (0.5 * x))) / (0.5 + ((0.5 * x) * (0.5 * x)))) + (((0.5 * x) * (0.5 *
> x)) * (0.5 * x))))))), 35 operations (4+,29*,2/), 38.61 ns
>
> - (((x / (x / (0.5 * x))) * (x / (x / (0.5 * x)))) * (0.5 * x)) + (x +
> ((((x / (x / (0.5 * x))) * (x / (x / (0.5 * x)))) * (0.5 * x)) / (((x /
> (0.5 * x)) + 0.5) - ((((x / (x / (0.5 * x))) * (x / (x / (0.5 * x)))) *
> (0.5 * x)) * (x / (x / (0.5 * x))))))), 38 operations (4+-,18*,16/), 40.70
> ns
>
> I assume that there is a constant offset for method calls and so on. In
> addition, I suppose that term pieces that have already been calculated
> do not have to be calculated again. But these conclusions did not help
> me either.
>
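
That guess points in the right direction: the JIT does eliminate repeated
subexpressions, which is one reason the raw operation count correlates so
poorly with runtime. As a rough illustration with your first term,
(0.5 * x) * 0.5 occurs four times, so if the compiler computes it once
and reuses it, the 15 written operations (3+, 11*, 1/) shrink to about 9
actually executed ones (3+, 5*, 1/), before any further optimization.
Floating-point semantics limit some reorderings, but reusing the value of
an identical pure subterm like this is safe.
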
> Regards,
>
> KKokser
>