I've been testing the difference in the time it takes to sum the elements of a matrix in row-major order
std::vector<double> v( n * n );

// Timing begins
double sum{ 0.0 };
for (std::size_t i = 0; i < n; i++) {
    for (std::size_t j = 0; j < n; j++) {
        sum += v[i * n + j];
    }
}
// Timing ends
and in column-major order
std::vector<double> v( n * n );

// Timing begins
double sum{ 0.0 };
for (std::size_t j = 0; j < n; j++) {
    for (std::size_t i = 0; i < n; i++) {
        sum += v[i * n + j];
    }
}
// Timing ends
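For reference, a minimal self-contained harness along these lines is sketched below. The std::chrono::steady_clock timing and the ns-per-byte output are my assumptions about how the measurement is done; the exact code in the linked archive may differ.

#include <chrono>
#include <cstddef>
#include <cstdlib>
#include <iostream>
#include <vector>

int main(int argc, char* argv[]) {
    // Matrix dimension n, taken from the command line (default 1000).
    const std::size_t n =
        (argc > 1) ? static_cast<std::size_t>(std::atoi(argv[1])) : 1000;
    std::vector<double> v(n * n, 1.0);

    // Time one full traversal and report nanoseconds per byte of the array.
    auto time_sum = [&](bool row_major) {
        const auto start = std::chrono::steady_clock::now();
        double sum{ 0.0 };
        if (row_major) {
            for (std::size_t i = 0; i < n; i++)
                for (std::size_t j = 0; j < n; j++)
                    sum += v[i * n + j];
        } else {
            for (std::size_t j = 0; j < n; j++)
                for (std::size_t i = 0; i < n; i++)
                    sum += v[i * n + j];
        }
        const auto stop = std::chrono::steady_clock::now();
        const double ns = static_cast<double>(
            std::chrono::duration_cast<std::chrono::nanoseconds>(stop - start).count());
        // Printing sum keeps the loop from being optimized away.
        std::cout << (row_major ? "row major:    " : "column major: ")
                  << ns / static_cast<double>(n * n * sizeof(double))
                  << " ns/byte (sum = " << sum << ")\n";
    };

    time_sum(true);   // row-major traversal
    time_sum(false);  // column-major traversal
    return 0;
}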
The code has been compiled with
g++ -std=c++11 -Ofast -fno-tree-vectorize -DNDEBUG main.cpp -o main
and also with similar settings with icpc (this is more of a hardware question than a compiler question). We expect the row-major order (blue) to be significantly faster than the column-major order (yellow). If I plot the time it takes to run this algorithm (in nanoseconds) divided by the size of the array in bytes, I get the following graph on my computer, which has a Core i7.
The x-axis displays n, and the y-axis displays the time in nanoseconds for the summation divided by the size (in bytes) of v. Everything seems normal. The huge gap between the two starts around n = 850, where the matrix takes 850 × 850 × 8 bytes ≈ 5.8 MB, which matches the 6 MB size of my L3 cache. For large n, the column-major order is about 10 times slower than the row-major order. I am pleased with these results.
The next thing I do is run the same program on Amazon Web Services, on a machine with a Xeon E5-2670. Here are the results.
The column-major order is about 10 times slower than the row-major order for 700 <= n <= 2000, but for n >= 2100 the cost per byte of the column-major order suddenly drops, and it is only about 2 times slower than the row-major order! Does anyone have an explanation for this strange behaviour?
PS: For those who are interested, the full code is available here: https://www.dropbox.com/s/778hwpuriwqbi6o/InsideLoop.zip?dl=0