Clang 6 is able to optimise the fibonacci test to something like:
int fibs[20] = {1, 2, 3, 5, 8, ...};
printf("%d\n", fibs[i]);
and thus completes in essentially zero time. (gcc and gfortran 8 also run the fibonacci test remarkably fast, which suggests to me that they use a smaller memoization table.)
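To illustrate the mechanism (a minimal hypothetical reduction, not the repo's actual harness): with a compile-time-constant argument the whole recursion can be folded away, and taking the argument from the command line is one way to force the work to happen at runtime.

#include <stdio.h>
#include <stdlib.h>

/* Naive recursion, in the style of the benchmark under discussion. */
static int fib(int n) {
    return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

int main(int argc, char **argv) {
    /* With a literal like fib(20), Clang can evaluate the call at
     * compile time and emit only the constant; reading n from argv
     * keeps the recursion in the generated code. */
    int n = argc > 1 ? atoi(argv[1]) : 20;
    printf("%d\n", fib(n));
    return 0;
}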
The pi_sum test does the same computation 500 times, but gfortran 8 completes it extremely fast, which suggests that it recognises the loop body is running the same code with no side effects and optimises the repetition away entirely.
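A sketch of the pattern (my assumed shape of pi_sum, simplified, not the repo's exact code): the outer loop redoes identical, side-effect-free work, so a compiler that can prove the body pure is free to run it only once.

#include <stdio.h>

static double pisum(void) {
    double sum = 0.0;
    /* The outer loop recomputes exactly the same value 500 times; a
     * compiler that proves the body has no side effects may collapse
     * the repetition to a single iteration. */
    for (int j = 0; j < 500; j++) {
        sum = 0.0;
        for (int k = 1; k <= 10000; k++)
            sum += 1.0 / ((double)k * k);
    }
    return sum;
}

int main(void) {
    printf("%.10f\n", pisum());
    return 0;
}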
The quicksort test counts the time to generate the 5000 random input numbers in fortran and julia but not in C or python, so the languages are not all timing the same work.
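For a fair comparison the input generation should sit outside the timed region in every language. A sketch of what I mean (hypothetical harness, with C's qsort standing in for the quicksort under test):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static int cmp(const void *a, const void *b) {
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

int main(void) {
    enum { N = 5000 };
    static double v[N];

    /* Untimed: generate the 5000 random inputs up front. */
    for (int i = 0; i < N; i++)
        v[i] = (double)rand() / RAND_MAX;

    /* Timed: the sort only, so every language measures the same work. */
    clock_t t0 = clock();
    qsort(v, N, sizeof v[0], cmp);
    clock_t t1 = clock();

    printf("sort took %.3f ms\n", 1000.0 * (t1 - t0) / CLOCKS_PER_SEC);
    return 0;
}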
The tests that call cblas dominate the total runtime and are a little strange: since every language ends up running the same library code, they really test whether the language has cblas bindings, which is a feature test, not a benchmark.
A more general point is that benchmarks like these, which consist of running extremely simple algorithms many times, are not really an informative way to measure the performance of a language. The point of this repo, as I understand it, is to tell me, a potential julia user, whether and by how much julia is likely to be faster than, e.g., python for my particular scientific problem, but it doesn't do this. Since a large part of the point of julia is to be faster, it would be useful to have some meaningful way of showing that it is, indeed, faster.
For example, have you considered instead writing julia implementations of the algorithms here: https://benchmarksgame-team.pages.debian.net/benchmarksgame/ ? This would get you comparisons with other languages for free (and you could have CI run them on every commit to catch regressions).