Benchmark Mechanism
- The benchmark is automated with Puppeteer, using devtools-timeline-model to parse the raw trace data. (This is the equivalent of manually checking JavaScript execution time in the Dev Tools Performance tab.)
- Each task is run for a fixed number of iterations and the mean execution duration is taken.
- Steps for each iteration:
  - Open a new page ("empty_page.html")
  - Slow down the CPU by a factor of X
  - Load the required libraries through script tags
  - Load the respective task through a script tag
  - Start tracing (to capture Dev Tools' performance metrics)
  - Run the task (Puppeteer's code-injection execution time is excluded by scheduling the task with setTimeout before tracing starts.)
  - Wait a pre-defined number of seconds for all task operations to complete
  - Stop tracing
  - Assert that the task completed
  - Parse the raw trace data with devtools-timeline-model to extract JavaScript, layout & painting execution times
  - Close the page
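The iteration steps above can be sketched as a single Puppeteer function. This is an illustrative outline, not the benchmark's actual source: the names `runIteration`, `window.runTask`, `window.taskCompleted`, and the 100ms scheduling delay are assumptions, and `browser` is expected to be a Puppeteer `Browser` instance (e.g. from `await puppeteer.launch()`).

```javascript
// Hypothetical sketch of one benchmark iteration (see the steps above).
// `browser` is assumed to be a Puppeteer Browser instance.
async function runIteration(browser, { libraryUrls, taskUrl, cpuSlowdown, waitMs }) {
  const page = await browser.newPage();             // open a new page
  await page.goto('file://' + __dirname + '/empty_page.html');
  await page.emulateCPUThrottling(cpuSlowdown);     // slow down the CPU by X
  for (const url of libraryUrls) {
    await page.addScriptTag({ url });               // load required libraries
  }
  await page.addScriptTag({ url: taskUrl });        // load the task itself
  // Schedule the task via setTimeout BEFORE tracing starts, so Puppeteer's
  // code-injection time is excluded from the trace. The 100ms delay is an
  // assumed value that gives tracing time to begin.
  await page.evaluate(() => setTimeout(() => window.runTask(), 100));
  await page.tracing.start({ path: 'trace.json' }); // start tracing
  await new Promise((resolve) => setTimeout(resolve, waitMs)); // wait for completion
  await page.tracing.stop();                        // stop tracing
  // Assert the task actually completed (flag name is an assumption).
  const done = await page.evaluate(() => window.taskCompleted === true);
  if (!done) throw new Error('Task did not complete');
  await page.close();                               // close the page
}
```

Parsing the resulting `trace.json` with devtools-timeline-model then happens outside the page lifecycle, after the iteration finishes.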
- The mean execution duration is reported in microseconds. (1000 microseconds = 1 millisecond)
- The coefficient of variation is shown below the execution duration.
  - This measures how much the iterations deviated from the mean execution duration. A lower value means the test is more stable.
  - Variation for smaller operations can be high. If that is the case, repeat the task equally for all candidates to increase the execution time.
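As a sketch of how the coefficient of variation could be computed from the per-iteration durations (the benchmark's actual implementation may differ, and the function name is an assumption):

```javascript
// Coefficient of variation = standard deviation / mean.
// Durations are in microseconds, matching the benchmark's reporting unit.
function coefficientOfVariation(durations) {
  const mean = durations.reduce((sum, d) => sum + d, 0) / durations.length;
  const variance =
    durations.reduce((sum, d) => sum + (d - mean) ** 2, 0) / durations.length;
  return Math.sqrt(variance) / mean;
}

// A stable task: durations cluster tightly around their mean of 1000us.
coefficientOfVariation([980, 1000, 1020]); // low CV (~0.016)
```

Because the CV is normalized by the mean, a fixed absolute jitter produces a larger CV for short tasks, which is why repeating small operations (raising the mean) brings the reported variation down.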
- Refer to `_task_template.js` for details.
- The factor of slowness is computed against the selected base candidate. E.g.:
  - Vanilla JS execution duration = 400ms
  - Candidate 1 execution duration = 600ms
  - Candidate 1 is 1.5x slower than Vanilla JS
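The factor in the example above is simply the ratio of the candidate's duration to the base candidate's duration; a minimal sketch (function name assumed):

```javascript
// Factor of slowness relative to the base candidate: values > 1 mean slower,
// values < 1 mean faster than the base.
function slownessFactor(candidateDuration, baseDuration) {
  return candidateDuration / baseDuration;
}

slownessFactor(600, 400); // → 1.5, i.e. Candidate 1 is 1.5x slower than Vanilla JS
```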