StarPU Handbook
The StarPU OpenMP Runtime Support (SORS)

StarPU provides the necessary routines and support to implement an OpenMP (http://www.openmp.org/) runtime compliant with revision 3.1 of the language specification, and compliant with the task-related data dependency functionalities introduced in revision 4.0 of the language. This StarPU OpenMP Runtime Support (SORS) has been designed to be targeted by OpenMP compilers such as the Klang-OMP compiler. Most supported OpenMP directives can be implemented either inline or as outlined functions.

All functions are defined in OpenMP Runtime Support.

Implementation Details and Specificities

Main Thread

When using the SORS, the main thread gets involved in executing OpenMP tasks just like every other thread, in order to comply with the specification's execution model. This contrasts with StarPU's usual execution model, where the main thread submits tasks but does not take part in executing them.

Extended Task Semantics

The semantics of tasks generated by the SORS are extended with respect to regular StarPU tasks in that SORS tasks may block and be preempted by SORS calls, whereas regular StarPU tasks cannot. SORS tasks may coexist with regular StarPU tasks. However, only tasks created through the SORS API functions inherit the extended semantics.

Configuration

The SORS can be compiled into libstarpu through the configure option --enable-openmp. Conditionally compiled source code may check for the availability of the OpenMP Runtime Support by testing whether the C preprocessor macro STARPU_OPENMP is defined or not.
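
For instance, code that must build both with and without the SORS can be guarded as follows (a minimal sketch; the function names are hypothetical):

#include <starpu.h>    /* pulls in the StarPU configuration macros */

#ifdef STARPU_OPENMP
/* StarPU was configured with --enable-openmp: the SORS API is available */
void run_with_sors(void);
#else
/* fallback path when the SORS is not compiled in */
void run_without_sors(void);
#endif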

Initialization and Shutdown

The SORS needs to be initialized and terminated with starpu_omp_init() / starpu_omp_shutdown() instead of starpu_init() / starpu_shutdown(). This requirement is necessary to make sure that the main thread gets the proper execution environment to run OpenMP tasks. These calls will usually be performed by a compiler runtime; thus, they can be executed from a constructor/destructor pair such as this:

__attribute__((constructor))
static void omp_constructor(void)
{
    int ret = starpu_omp_init();
    STARPU_CHECK_RETURN_VALUE(ret, "starpu_omp_init");
}

__attribute__((destructor))
static void omp_destructor(void)
{
    starpu_omp_shutdown();
}
See Also
starpu_omp_init()
starpu_omp_shutdown()

Parallel Regions and Worksharing

The SORS provides functions to create OpenMP parallel regions as well as to map work onto participating workers. The current implementation does not provide nested active parallel regions: parallel regions may be created recursively, but only the first-level parallel region may have more than one worker. From an internal point of view, the SORS' parallel regions are implemented as a set of implicit, extended-semantics StarPU tasks, following the execution model of the OpenMP specification. Thus, the SORS' parallel region tasks may block and be preempted by SORS calls, enabling constructs such as barriers.

Parallel Regions

Parallel regions can be created with the function starpu_omp_parallel_region(), which accepts a set of attributes as a parameter. The execution of the calling task is suspended until the parallel region completes. The field starpu_omp_parallel_region_attr::cl is a regular StarPU codelet; however, only CPU codelets are supported for parallel regions. Here is an example of use:

void parallel_region_f(void *buffers[], void *args)
{
    (void) buffers;
    (void) args;
    pthread_t tid = pthread_self();
    int worker_id = starpu_worker_get_id();
    printf("[tid %p] task thread = %d\n", (void *)tid, worker_id);
}

void f(void)
{
    struct starpu_omp_parallel_region_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.cl.cpu_funcs[0] = parallel_region_f;
    attr.cl.where        = STARPU_CPU;
    attr.if_clause       = 1;
    starpu_omp_parallel_region(&attr);
}
See Also
struct starpu_omp_parallel_region_attr
starpu_omp_parallel_region()

Parallel For

OpenMP for loops are provided by the starpu_omp_for() group of functions. Variants are available for inline or outlined implementations. The SORS supports static, dynamic, and guided loop scheduling clauses. The auto scheduling clause is implemented as static. The runtime scheduling clause honors the scheduling mode selected through the environment variable OMP_SCHEDULE or the starpu_omp_set_schedule() function. For loops with the ordered clause are also supported. An implicit barrier can be enforced or skipped at the end of the worksharing construct, according to the value of the nowait parameter.

The canonical family of starpu_omp_for() functions provides each instance with the first iteration number and the number of iterations (possibly zero) to perform. The alternate family of starpu_omp_for_alt() functions provides each instance with the (possibly empty) range of iterations to perform, including the first and excluding the last.

The family of starpu_omp_ordered() functions makes it possible to implement OpenMP's ordered construct, a region within a parallel for loop that is guaranteed to be executed in the sequential order of the loop iterations.

/* example values and storage for the iteration space (chosen here for illustration) */
#define NB_ITERS 256
#define CHUNK 16
int array[NB_ITERS];

void for_g(unsigned long long i, unsigned long long nb_i, void *arg)
{
    (void) arg;
    for (; nb_i > 0; i++, nb_i--)
    {
        array[i] = 1;
    }
}

void parallel_region_f(void *buffers[], void *args)
{
    (void) buffers;
    (void) args;
    starpu_omp_for(for_g, NULL, NB_ITERS, CHUNK, starpu_omp_sched_static, 0, 0);
}
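
For comparison, here is a sketch of the same loop written with the alternate family, whose routine receives the begin (included) and end (excluded) iteration numbers; it assumes starpu_omp_for_alt() takes the same iteration count, chunk, schedule, ordered and nowait parameters as starpu_omp_for(), and reuses array, NB_ITERS and CHUNK from the example above.

void for_alt_g(unsigned long long begin_i, unsigned long long end_i, void *arg)
{
    (void) arg;
    for (; begin_i < end_i; begin_i++)
    {
        array[begin_i] = 1;
    }
}

void parallel_region_alt_f(void *buffers[], void *args)
{
    (void) buffers;
    (void) args;
    starpu_omp_for_alt(for_alt_g, NULL, NB_ITERS, CHUNK, starpu_omp_sched_static, 0, 0);
}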
See Also
starpu_omp_for()
starpu_omp_for_inline_first()
starpu_omp_for_inline_next()
starpu_omp_for_alt()
starpu_omp_for_inline_first_alt()
starpu_omp_for_inline_next_alt()
starpu_omp_ordered()
starpu_omp_ordered_inline_begin()
starpu_omp_ordered_inline_end()

Sections

OpenMP sections worksharing constructs are supported using the set of starpu_omp_sections() variants. The general principle is either to provide an array of per-section functions or a single function that redirects execution to the suitable per-section function. An implicit barrier can be enforced or skipped at the end of the worksharing construct, according to the value of the nowait parameter.

void parallel_region_f(void *buffers[], void *args)
{
    (void) buffers;
    (void) args;
    /* f, g, h, i and their respective arguments are assumed to be defined elsewhere */
    void (*section_funcs[4])(void *);
    void *section_args[4];
    section_funcs[0] = f;
    section_funcs[1] = g;
    section_funcs[2] = h;
    section_funcs[3] = i;
    section_args[0] = arg_f;
    section_args[1] = arg_g;
    section_args[2] = arg_h;
    section_args[3] = arg_i;
    starpu_omp_sections(4, section_funcs, section_args, 0);
}
See Also
starpu_omp_sections()
starpu_omp_sections_combined()

Single

OpenMP single worksharing constructs are supported using the set of starpu_omp_single() variants. An implicit barrier can be enforced or skipped at the end of the worksharing construct, according to the value of the nowait parameter.

void single_f(void *arg)
{
    (void) arg;
    pthread_t tid = pthread_self();
    int worker_id = starpu_worker_get_id();
    printf("[tid %p] task thread = %d -- single\n", (void *)tid, worker_id);
}

void parallel_region_f(void *buffers[], void *args)
{
    (void) buffers;
    (void) args;
    starpu_omp_single(single_f, NULL, 0);
}

The SORS also provides dedicated support for single sections with copyprivate clauses through the starpu_omp_single_copyprivate() function variants. The OpenMP master directive is supported as well using the starpu_omp_master() function variants.
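
For instance, the outlined master variant can be used as sketched below, assuming starpu_omp_master() takes the outlined function and its argument like the other outlined constructs:

void master_f(void *arg)
{
    (void) arg;
    /* executed only by the master thread of the current team */
    printf("master: worker id = %d\n", starpu_worker_get_id());
}

void parallel_region_f(void *buffers[], void *args)
{
    (void) buffers;
    (void) args;
    starpu_omp_master(master_f, NULL);
}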

See Also
starpu_omp_master()
starpu_omp_master_inline()
starpu_omp_single()
starpu_omp_single_inline()
starpu_omp_single_copyprivate()
starpu_omp_single_copyprivate_inline_begin()
starpu_omp_single_copyprivate_inline_end()

Tasks

The SORS implements the necessary support for the so-called explicit tasks of OpenMP 3.1 and OpenMP 4.0, together with OpenMP 4.0's task data dependency management.

Explicit Tasks

Explicit OpenMP tasks are created with the SORS using the starpu_omp_task_region() function. The implementation supports if, final, untied and mergeable clauses as defined in the OpenMP specification. Unless specified otherwise by the appropriate clause(s), the created task may be executed by any participating worker of the current parallel region.

The current SORS implementation requires explicit tasks to be created within the context of an active parallel region. In particular, an explicit task cannot be created by the main thread outside of a parallel region. Explicit OpenMP tasks created using starpu_omp_task_region() are implemented as StarPU tasks with extended semantics, and may as such be blocked and preempted by SORS routines.

The current SORS implementation supports recursive explicit task creation, to ensure compliance with the OpenMP specification. However, it should be noted that StarPU is neither designed nor optimized for efficiently scheduling recursive task applications.

The code below shows how to create 4 explicit tasks within a parallel region.

void task_region_g(void *buffers[], void *args)
{
    (void) buffers;
    (void) args;
    pthread_t tid = pthread_self();
    int worker_id = starpu_worker_get_id();
    printf("[tid %p] task thread = %d: explicit task \"g\"\n", (void *)tid, worker_id);
}

void parallel_region_f(void *buffers[], void *args)
{
    (void) buffers;
    (void) args;
    struct starpu_omp_task_region_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.cl.cpu_funcs[0]  = task_region_g;
    attr.cl.where         = STARPU_CPU;
    attr.if_clause        = 1;
    attr.final_clause     = 0;
    attr.untied_clause    = 1;
    attr.mergeable_clause = 0;
    starpu_omp_task_region(&attr);
    starpu_omp_task_region(&attr);
    starpu_omp_task_region(&attr);
    starpu_omp_task_region(&attr);
}
See Also
struct starpu_omp_task_region_attr
starpu_omp_task_region()

Data Dependencies

The SORS implements inter-task data dependencies as specified in OpenMP 4.0. Data dependencies are expressed using regular StarPU data handles (starpu_data_handle_t) plugged into the task's attr.cl codelet. The family of starpu_vector_data_register()-like functions and the starpu_data_lookup() function may be used, respectively, to register a memory area and to retrieve the data handle currently associated with a pointer. The testcase ./tests/openmp/task_02.c gives a detailed example of using OpenMP 4.0 task dependencies with the SORS implementation.
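
The sketch below shows the registration side and how a handle could be attached to an explicit task; the vector, its size NX, and the attr.handles field name are assumptions based on the referenced testcase rather than guaranteed API details.

int vector[NX];    /* NX is a hypothetical size */

void submit_dependent_task(void)
{
    starpu_data_handle_t handle;
    /* register the memory area as a StarPU vector */
    starpu_vector_data_register(&handle, STARPU_MAIN_RAM, (uintptr_t)vector, NX, sizeof(vector[0]));
    /* the handle currently associated with a pointer can also be retrieved later */
    starpu_data_handle_t same_handle = starpu_data_lookup(vector);
    (void) same_handle;

    struct starpu_omp_task_region_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.cl.cpu_funcs[0] = task_region_g;    /* task body, as in the examples above */
    attr.cl.where        = STARPU_CPU;
    attr.cl.nbuffers     = 1;
    attr.cl.modes[0]     = STARPU_RW;        /* inout dependency on the registered data */
    attr.handles         = &handle;          /* field name assumed from ./tests/openmp/task_02.c */
    attr.if_clause       = 1;
    starpu_omp_task_region(&attr);
}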

Note: the OpenMP 4.0 specification only supports data dependencies between sibling tasks, that is, tasks created by the same implicit or explicit parent task. The current SORS implementation likewise only supports data dependencies between sibling tasks. Consequently, the behaviour is unspecified if dependencies are expressed between tasks that have not been created by the same parent task.

TaskWait and TaskGroup

The SORS implements both the taskwait and taskgroup OpenMP task synchronization constructs specified in OpenMP 4.0, with the starpu_omp_taskwait() and starpu_omp_taskgroup() functions respectively.

An example of starpu_omp_taskwait() use, creating two explicit tasks and waiting for their completion:

void task_region_g(void *buffers[], void *args)
{
    (void) buffers;
    (void) args;
    printf("Hello, World!\n");
}

void parallel_region_f(void *buffers[], void *args)
{
    (void) buffers;
    (void) args;
    struct starpu_omp_task_region_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.cl.cpu_funcs[0]  = task_region_g;
    attr.cl.where         = STARPU_CPU;
    attr.if_clause        = 1;
    attr.final_clause     = 0;
    attr.untied_clause    = 1;
    attr.mergeable_clause = 0;
    starpu_omp_task_region(&attr);
    starpu_omp_task_region(&attr);
    starpu_omp_taskwait();
}

An example of starpu_omp_taskgroup() use, creating a task group of two explicit tasks:

void task_region_g(void *buffers[], void *args)
{
    (void) buffers;
    (void) args;
    printf("Hello, World!\n");
}

void taskgroup_f(void *arg)
{
    (void) arg;
    struct starpu_omp_task_region_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.cl.cpu_funcs[0]  = task_region_g;
    attr.cl.where         = STARPU_CPU;
    attr.if_clause        = 1;
    attr.final_clause     = 0;
    attr.untied_clause    = 1;
    attr.mergeable_clause = 0;
    starpu_omp_task_region(&attr);
    starpu_omp_task_region(&attr);
}

void parallel_region_f(void *buffers[], void *args)
{
    (void) buffers;
    (void) args;
    starpu_omp_taskgroup(taskgroup_f, (void *)NULL);
}
See Also
starpu_omp_task_region()
starpu_omp_taskwait()
starpu_omp_taskgroup()
starpu_omp_taskgroup_inline_begin()
starpu_omp_taskgroup_inline_end()

Synchronization Support

The SORS implements objects and methods to build common OpenMP synchronization constructs.

Simple Locks

The SORS Simple Locks are opaque starpu_omp_lock_t objects enabling multiple tasks to synchronize with each other, following the Simple Lock constructs defined by the OpenMP specification. In accordance with the specification, simple locks may not be acquired multiple times by the same task without being released in between; otherwise, deadlocks may result. Code requiring the possibility to lock multiple times recursively should use Nestable Locks (see Nestable Locks). Code NOT requiring the possibility to lock multiple times recursively should use Simple Locks, as they incur less processing overhead than Nestable Locks.
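
A minimal usage sketch, protecting a hypothetical shared counter:

starpu_omp_lock_t lock;
int counter;

void increment_f(void *arg)
{
    (void) arg;
    starpu_omp_set_lock(&lock);      /* blocks until the lock is acquired */
    counter++;                       /* update protected by the simple lock */
    starpu_omp_unset_lock(&lock);
}

void setup_and_teardown(void)
{
    starpu_omp_init_lock(&lock);
    /* ... parallel work calling increment_f ... */
    starpu_omp_destroy_lock(&lock);
}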

See Also
starpu_omp_lock_t
starpu_omp_init_lock()
starpu_omp_destroy_lock()
starpu_omp_set_lock()
starpu_omp_unset_lock()
starpu_omp_test_lock()

Nestable Locks

The SORS Nestable Locks are opaque starpu_omp_nest_lock_t objects enabling multiple tasks to synchronize with each other, following the Nestable Lock constructs defined by the OpenMP specification. In accordance with the specification, nestable locks may be acquired multiple times recursively by the same task without deadlocking. Nested locking and unlocking operations must be well parenthesized at any time, otherwise deadlock and/or undefined behaviour may occur. Code requiring the possibility to lock multiple times recursively should use Nestable Locks. Code NOT requiring the possibility to lock multiple times recursively should use Simple Locks (see Simple Locks) instead, as they incur less processing overhead than Nestable Locks.
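
A sketch of recursive acquisition with a nestable lock (the recursive helper is hypothetical):

starpu_omp_nest_lock_t nest_lock;

void update_recursively(int depth)
{
    /* the task holding the nestable lock may re-acquire it */
    starpu_omp_set_nest_lock(&nest_lock);
    if (depth > 0)
        update_recursively(depth - 1);
    /* each set must be matched by an unset, in well-parenthesized order */
    starpu_omp_unset_nest_lock(&nest_lock);
}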

See Also
starpu_omp_nest_lock_t
starpu_omp_init_nest_lock()
starpu_omp_destroy_nest_lock()
starpu_omp_set_nest_lock()
starpu_omp_unset_nest_lock()
starpu_omp_test_nest_lock()

Critical Sections

The SORS implements support for OpenMP critical sections through the family of starpu_omp_critical() functions. Critical sections may optionally be named. There is a single, common anonymous critical section. Mutual exclusion only occurs within the scope of a single critical section, either a named one or the anonymous one.
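
For instance, with the outlined variant, assuming starpu_omp_critical() takes the outlined function, its argument and an optional section name (NULL selecting the anonymous critical section):

void critical_f(void *arg)
{
    int *shared_counter = arg;
    (*shared_counter)++;    /* executed by at most one task at a time for this section */
}

void parallel_region_f(void *buffers[], void *args)
{
    (void) buffers;
    (void) args;
    static int counter;
    /* named critical section "acc"; passing NULL instead would use the anonymous section */
    starpu_omp_critical(critical_f, &counter, "acc");
}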

See Also
starpu_omp_critical()
starpu_omp_critical_inline_begin()
starpu_omp_critical_inline_end()

Barriers

The SORS provides the starpu_omp_barrier() function to implement barriers over parallel region teams. In accordance with the OpenMP specification, the starpu_omp_barrier() function waits for every implicit task of the parallel region to reach the barrier and every explicit task launched by the parallel region to complete, before returning.
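
A minimal sketch of its use inside a parallel region body:

void parallel_region_f(void *buffers[], void *args)
{
    (void) buffers;
    (void) args;
    /* phase 1: each implicit task works on its own share of the data */
    /* ... */
    starpu_omp_barrier();    /* wait for the whole team and for explicit tasks to complete */
    /* phase 2: safe to read data produced during phase 1 */
}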

See Also
starpu_omp_barrier()