I have a question regarding how OpenMP threads are created/cleaned up when parallel regions are encountered in child threads. I'm observing a situation where the total number of threads created by OpenMP grows to a large enough number that it eventually causes a system error.
In a nutshell, the software I'm working with calls pthread_create whenever it receives a request to do some work. The pthread that gets spawned is handed the request and goes off to compute the result; in the process, it encounters an OpenMP parallel region. Once the thread is done, it is joined by the parent thread.
I have boiled down the program to the following toy problem that does a matrix-matrix multiply in each pthread and runs the threads in different ways. It appears as if the threads OpenMP is creating within each pthread are not being reaped when the pthread that created them is joined. This behavior is not seen with gcc.
If, for example, I set OMP_NUM_THREADS=8 and call do_work() 10 times, I observe the following:
- When run 10 times sequentially in the main thread, the total thread count after all 10 calls is 9.
- When run in 10 separate pthreads one after the next (joining between each pthread_create), the total thread count after all 10 calls is 9.
- When run in 10 separate pthreads created (mostly) simultaneously, the total thread count after all 10 calls is 72 (+/-).
Is it correct that OpenMP worker threads can be shared between the child threads of a program? That would explain case #2, and it would also explain why more workers have to be created when several threads hit a parallel region simultaneously (as in #3). Regardless, it certainly looks like the OpenMP-spawned threads are not being reaped once the pthread that spawned them is joined.
It makes sense to keep OpenMP threads around in case a subsequent parallel region is encountered (so that thread-creation overhead can be avoided), but is it intended that these worker threads persist after the thread that spawned them has gone away?
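As a sanity check on the sharing question, one thing that could be done (this is just a sketch, not part of the toy program below; it assumes Linux, so SYS_gettid is available) is to log the kernel TID of every OpenMP worker from inside a parallel region and compare the sets printed by different pthreads:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <omp.h>

    /* Print the kernel thread id of every OpenMP worker in the team.
     * If the same TIDs show up when this is called from different
     * pthreads, the workers are being reused; if new TIDs keep
     * appearing, new workers are being created each time. */
    void log_worker_tids(const char *tag)
    {
        #pragma omp parallel
        {
            printf("%s: omp thread %d of %d has tid %ld\n",
                   tag, omp_get_thread_num(), omp_get_num_threads(),
                   (long) syscall(SYS_gettid));
        }
    }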
My working theory is that after the real code has been running long enough, these OpenMP threads pile up until the program eventually hits a parallel region and fails with "OMP: System error #11: Resource temporarily unavailable" because an enormous number of threads already exists. I can raise the system limit the program is running into, but I'd rather understand where all of these threads are coming from.
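For reference, error #11 is EAGAIN on Linux, which is what pthread_create returns when the per-user task limit is exhausted. Assuming that limit is the one being hit (that is an assumption on my part; it could be another resource), a minimal standalone sketch to print it would be:

    #include <stdio.h>
    #include <sys/resource.h>

    /* Print the soft/hard limits on the number of tasks (processes plus
     * threads) this user may have. Assumption: hitting the soft limit is
     * what makes pthread_create fail with EAGAIN ("error #11"). */
    int main(void)
    {
        struct rlimit rl;
        if (getrlimit(RLIMIT_NPROC, &rl) == 0)
            printf("RLIMIT_NPROC: soft = %llu, hard = %llu\n",
                   (unsigned long long) rl.rlim_cur,
                   (unsigned long long) rl.rlim_max);
        return 0;
    }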
For this toy problem, I'm compiling with:
    icc -O0 -openmp omp_test.c -o omp_test_icc
    gcc -O0 -fopenmp omp_test.c -o omp_test_gcc
I'm using icc version 15.0.3 and gcc version 4.8.3. The source for omp_test.c is:
    #include <stdlib.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <syscall.h>
    #include <math.h>
    #include <time.h>
    #include <omp.h>
    #include <pthread.h>

    #define N 1000

    pid_t main_pid;

    /* Ask ps for the number of lightweight processes (threads) in the process. */
    void threadCount(int pid, char *msg)
    {
        int total_threads;
        FILE *fp;
        char ps[256];

        sprintf(ps, "ps h -o nlwp -p %d", pid);
        fp = popen(ps, "r");
        fscanf(fp, "%d", &total_threads);
        pclose(fp);

        printf("%s -- total_threads = %d\n", msg, total_threads);
    }

    void do_work()
    {
        double *a = malloc(N*N*sizeof(double));
        double *b = malloc(N*N*sizeof(double));
        double *c = malloc(N*N*sizeof(double));
        int i, j, k;
        double start = omp_get_wtime();

        #pragma omp parallel shared(a, b, c) private(i, j, k)
        {
            #pragma omp for
            for (i = 0; i < N; i++) {
                for (j = 0; j < N; j++) {
                    a[i * N + j] = 1;
                    b[i * N + j] = 2;
                }
            }

            #pragma omp for
            for (i = 0; i < N; i++) {
                for (j = 0; j < N; j++) {
                    c[i * N + j] = 0;
                    for (k = 0; k < N; k++) {
                        c[i * N + j] = c[i * N + j] + a[i * N + k] * b[k * N + j];
                    }
                }
            }
        }

        printf("C[100] = %f -- Time: %f\n", c[100], omp_get_wtime() - start);

        free(a);
        free(b);
        free(c);
    }

    int main()
    {
        int n_pthreads = 10;
        pthread_t threads[n_pthreads];
        main_pid = getpid();
        printf("main pid = %d\n", main_pid);

        double start;
        int i;

        // --------------
        // in main thread
        // --------------
        threadCount(main_pid, "before inlined");
        start = omp_get_wtime();
        for (i = 0; i < n_pthreads; i++) {
            do_work();
        }
        printf("(inlined sequential) time = %f\n", omp_get_wtime() - start);
        threadCount(main_pid, "after inlined");
        printf("\n");

        // ------------------------------------
        // in pthreads created run sequentially
        // ------------------------------------
        threadCount(main_pid, "before sequential threaded");
        start = omp_get_wtime();
        for (i = 0; i < n_pthreads; i++) {
            pthread_create(&threads[i], NULL, (void *) &do_work, NULL);
            pthread_join(threads[i], NULL);
        }
        printf("(sequential threaded) time = %f\n", omp_get_wtime() - start);
        threadCount(main_pid, "after sequential threaded");
        printf("\n");

        // -----------------------------------------------
        // in pthreads created run (mostly) simultaneously
        // -----------------------------------------------
        threadCount(main_pid, "before parallel threaded");
        start = omp_get_wtime();
        for (i = 0; i < n_pthreads; i++) {
            pthread_create(&threads[i], NULL, (void *) &do_work, NULL);
        }
        for (i = 0; i < n_pthreads; i++) {
            pthread_join(threads[i], NULL);
        }
        printf("(parallel threaded) time = %f\n", omp_get_wtime() - start);
        threadCount(main_pid, "after parallel threaded");
        printf("\n");

        return 0;
    }