Unpredictable results in OpenMP code fixed by inlining function

I have a rather large and complex C code, parallelized with OpenMP and built with the Intel C compiler 16.0.1.159. The code produced slightly different results on every run until I applied the fix shown below (not because of data races, as discussed below). I cannot explain this fix, and it is not a satisfactory one anyway. In addition, there was an intermediate fix that also looked strange. On top of that, I ran the code through Intel Inspector, and it did not detect any data races. Unfortunately, I could not come up with a minimal reproducer.

Could somebody help with a hypothetical explanation of what I am observing?

The initial implementation was roughly as follows, and it produced bad results (different results from one run to the next). The real code has many more levels of function nesting than shown, but I think this is a faithful prototype:

void FuncA(DataType* d) {
  *d = ...;
}

void FuncB(DataType* d) {
  FuncA(d);
}

void FuncC(DataType** data, int n) {
#pragma omp parallel for
  for (int i = 0; i < n; i++) {
    FuncB(data[i]);   /* each iteration writes only through its own pointer */
  }
}
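
As a quick sanity check, the helper below (a debug-only sketch; DataType and n are assumed to be defined as in the prototype above) verifies that no two entries of data[] point to the same object. Duplicate or overlapping pointers would make two iterations write to the same element and could explain run-to-run differences even though each iteration otherwise touches only data[i]:

#include <assert.h>

/* Debug-only sketch: aborts if two entries of data[] alias the same object.
   The O(n^2) scan is meant for a one-off diagnostic run, not production. */
static void CheckNoAliasing(DataType** data, int n) {
  for (int i = 0; i < n; i++) {
    for (int j = i + 1; j < n; j++) {
      assert(data[i] != data[j]);   /* a duplicate pointer means a shared write */
    }
  }
}

Calling CheckNoAliasing(data, n) right before the parallel loop (with NDEBUG not defined) either confirms that the iterations are independent or pinpoints the shared element.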

Fix #1, shown below, puts the call to the innermost function inside a critical region. It worked; the code produced correct results:

void FuncA(DataType* d) {
  *d = ...;
}

void FuncB(DataType* d) {
#pragma omp critical   /* Fix #1: the call into FuncA is serialized */
  {
    FuncA(d);
  }
}

void FuncC(DataType** data, int n) {
#pragma omp parallel for
  for (int i = 0; i < n; i++) {
    FuncB(data[i]);
  }
}

Fix #2, shown below, puts the entire body of the innermost function inside a critical region. It did not work; the code again produced different results on every run. Question 1: why does this not work when Fix #1 does?

void FuncA(DataType* d) {
#pragma omp critical   /* Fix #2: only the innermost assignment is serialized */
  {
    *d = ...;
  }
}

void FuncB(DataType* d) {
  FuncA(d);
}

void FuncC(DataType** data, int n) {
#pragma omp parallel for
  for (int i = 0; i < n; i++) {
    FuncB(data[i]);
  }
}
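
One way to probe Question 1 is to take the pragma out of the picture. If FuncA happened to live in a translation unit compiled without the OpenMP option, the critical construct there would be silently ignored while the one in FuncB would still be honored; that is only a guess, but it would produce exactly this asymmetry. A lock-based variant of Fix #2, sketched below, does not depend on the pragma in FuncA being recognized (funcA_lock and its one-time initialization are assumptions, not part of the real code):

#include <omp.h>

/* Explicit-lock variant of Fix #2: gives the same mutual exclusion around the
   assignment, but without relying on #pragma omp critical being processed in
   this translation unit. */
static omp_lock_t funcA_lock;   /* assumed: omp_init_lock(&funcA_lock) is
                                   called once before FuncC is entered */

void FuncA(DataType* d) {
  omp_set_lock(&funcA_lock);
  *d = ...;                     /* same elided body as in the original FuncA */
  omp_unset_lock(&funcA_lock);
}

If this variant behaves like Fix #1, the difference between the two fixes lies in how the pragma inside FuncA is compiled rather than in the locking itself.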

Finally, Fix #3, shown below, declares the innermost function as inline, and it only partly worked. The code produced correct results every time at the point where the original version had failed; however, the results fell apart later in the execution:

inline void FuncA(DataType* d) {
  *d = ...;
}

void FuncB(DataType* d) {
  FuncA(d);
}

void FuncC(DataType** data, int n) {
#pragma omp parallel for
  for (int i = 0; i < n; i++) {
    FuncB(data[i]);
  }
}
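
If the inlining decision itself is what changes the behavior, forcing inlining off again should bring the bad results back, which would at least confirm the correlation. A minimal sketch of that experiment, assuming the code is built with icc on Linux, where GCC-style attributes are accepted:

/* Diagnostic sketch: forbid inlining of FuncA to check whether the observed
   behavior really tracks the compiler's inlining decision. */
__attribute__((noinline))
void FuncA(DataType* d) {
  *d = ...;
}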

Question 2: the last case makes me think that OpenMP perhaps has a limit on the call stack depth. Is that correct?
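
To make Question 2 concrete: as far as I can tell, the OpenMP specification does not cap the call depth inside a parallel region; the practical limit is each thread's stack size (OMP_STACKSIZE controls the stacks of the worker threads). The self-contained sketch below, with a made-up depth and payload, runs a fairly deep call chain inside a parallel for and gives the same answer on every run as long as no stack overflows:

#include <stdio.h>

/* Deep call chains inside an OpenMP loop are legal; the only hard limit is
   each thread's stack.  The depth and per-frame payload here are made up. */
static long long deep(int depth) {
  long long local = depth;          /* consumes a little stack in every frame */
  if (depth == 0)
    return local;
  return local + deep(depth - 1);   /* stands in for deep function nesting */
}

int main(void) {
  long long sum = 0;
#pragma omp parallel for reduction(+:sum)
  for (int i = 0; i < 8; i++)
    sum += deep(10000);             /* roughly 10k frames per call chain */
  printf("sum = %lld\n", sum);      /* 400040000 on every run */
  return 0;
}

If a worker-thread stack overflow were the issue in the real code, enlarging the stacks (for example, OMP_STACKSIZE=16M in the environment) would be a cheap experiment; an overflow usually crashes the run outright, though, rather than silently changing results.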

Once again, I am as sure as I can be that there are no memory leaks or data races in FuncA or FuncB. This was verified by eye as well as by Intel Inspector.

