
Measuring the overhead of gettimeofday()


Hi,

I wrote the following small program to measure the overhead of gettimeofday() on my platform.

#include <stdio.h>
#include <sys/time.h>

int main(void)
{
        const size_t num_syscalls = 1000000000;
        struct timeval begin, end, nao;
        size_t i;
        gettimeofday(&begin, NULL);
        for (i = 0; i < num_syscalls; ++i) {
                gettimeofday(&nao, NULL);
        }
        gettimeofday(&end, NULL);
        /* tv_sec/tv_usec are not guaranteed to be unsigned int, so cast to long */
        printf("time = %ld.%06ld\n", (long) begin.tv_sec, (long) begin.tv_usec);
        printf("time = %ld.%06ld\n", (long) end.tv_sec, (long) end.tv_usec);
        printf("Number of Calls = %zu\n", num_syscalls);
        return 0;
}

And I got the following output

time = 1396927460.707331
time = 1396927491.641229
Number of Calls = 1000000000

This means about 31 seconds for 1000000000 calls, i.e., the overhead of gettimeofday() is roughly 31 ns per call.

My question is: Is this small program correctly measuring the overhead of gettimeofday()? Is it possible that the function was "cached", so that its retrieval from memory was faster than if it had been called only once?

Thanks for your time and kind help.

Best regards,

    Wentao

