Enhanced Debugging of Data Races in Parallel Programs Using OpenMP






Parallel computing is pervasive. The variety and number of parallel hardware architectures increase daily. As this technology evolves, parallel developers will need productivity-enhancing tools to analyze, debug, and tune their parallel applications.

Current debugging tools that excel for sequential programs lack the features necessary to locate errors common to parallel programs (e.g., deadlock, livelock, priority inversion, race conditions). Data races, one type of race condition, are the most common software fault in shared-memory programs, and they can be difficult to find because they are nondeterministic. Current data-race detection tools are often plagued by numerous false positives and may incur high execution-time and memory overheads. Improving the accuracy and efficiency of data-race detectors can both increase the acceptance of shared-memory programming models like OpenMP and improve developer productivity.

I describe a complementary analysis technique to detect data races in parallel programs using the OpenMP programming model. This hybrid-analysis technique capitalizes on the strengths of stand-alone static and dynamic data-race detection methods.

A proposed future tool would combine the static-analysis capabilities of the OpenUH compiler with the dynamic-analysis techniques used in Sun (now Oracle) Thread Analyzer. This combination is expected to improve the debugging experience for developers resolving data-race errors that result from poor programmer discipline, erroneous synchronization, or inappropriate data scoping.



Data races, Parallel programming, Static analysis, Dynamic analysis, Complementary analysis, Debugging, RaceFree