The Habanero Java (HJ) language under development at Rice University builds on past work on X10 v1.5. HJ is intended for use in teaching Computer Science at the undergraduate level (COMP 322), as well as to serve as a research testbed for new language, compiler, and runtime software technologies for extreme scale systems. HJ proposes an execution model for multicore processors based on four orthogonal dimensions for portable parallelism:
- Lightweight dynamic task creation and termination using the async, finish, future, forall, forasync, and ateach constructs
- Collective and point-to-point synchronization using phasers
- Mutual exclusion and isolation using isolated
- Locality control using hierarchical place trees
Since HJ is based on Java, the use of certain non-blocking primitives from the Java Concurrency Utilities is also permitted in HJ programs, most notably operations on Java Concurrent Collections such as java.util.concurrent.ConcurrentHashMap and on Java Atomic Variables.
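For example, the following plain-Java fragment shows the kinds of non-blocking primitives referred to above (the class name is illustrative; the library calls are standard java.util.concurrent):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Non-blocking primitives from the Java Concurrency Utilities that HJ
// programs may use directly: a concurrent map and an atomic counter.
public class ConcurrencyUtilsDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        map.putIfAbsent("hits", 0);            // thread-safe insert-if-missing
        map.merge("hits", 1, Integer::sum);    // atomic read-modify-write

        AtomicInteger counter = new AtomicInteger(0);
        counter.incrementAndGet();             // lock-free increment

        System.out.println(map.get("hits") + " " + counter.get()); // 1 1
    }
}
```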
A short summary of the HJ language is included below, and the following paper provides an overview of the language:
Habanero-Java: the New Adventures of Old X10. 9th International Conference on the Principles and Practice of Programming in Java (PPPJ), August 2011.
We also have a new library implementation of HJ, called HJ-lib, that can be used with any standard Java 8 implementation. HJ-lib puts a particular emphasis on the usability and safety of parallel constructs. HJ-lib is built using Java 8 closures and can run on any Java 8 JVM; older JVMs can be targeted by relying on external bytecode transformation tools for compatibility.
More details can be found in the papers on the Habanero publications web page.
A download of the HJ language implementation can be found here. Instructions for downloading and installing the HJ-lib jar file are available here.
HJ Language Summary
(Following standard conventions for syntax specification, the [ ... ] square brackets below refer to optional clauses.)
async [at (place)]
      [phased [(ph1<mode1>, ...)] ]
      [seq (condition)]
      [await (ddf1, ...)]
      Stmt
- async — Asynchronously start a new child task to execute Stmt
- at — A destination place may optionally be specified for where the task should execute
- phased — The task may optionally be phased on a specified subset of its parent's phasers with specified modes (e.g., phaser ph1 with mode1), or on the entire set of the parent's phasers and modes (by default, if no subset is specified)
- seq — A boolean condition may optionally be specified as a tuning parameter to determine if the async should just be executed sequentially in the parent task. The seq clause cannot be combined with the phased or await clauses
- await — The task may optionally be delayed to only start after all specified events (data-driven futures) become available
finish [ (accum1, ...) ] Stmt
- Execute Stmt, but wait until all (transitively) spawned asyncs and futures in Stmt's scope have terminated
- Propagate a multiset of all exceptions thrown by asyncs spawned within Stmt's scope
- Optionally, a set of accumulators (e.g., accum1) can be specified as being registered with this finish scope
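Since HJ-lib runs on any Java 8 JVM, the async/finish pattern can be approximated in standard Java. The sketch below is illustrative only (the class name is hypothetical and this is not HJ's actual API): each submitted task plays the role of an async, and joining all child futures before proceeding mimics the end of a finish scope.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicInteger;

// Standard-Java sketch of HJ's "finish { async S; ... }": spawn child
// tasks, then block until every child has terminated before proceeding.
public class FinishAsyncSketch {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        AtomicInteger sum = new AtomicInteger(0);
        List<Future<?>> children = new ArrayList<>();      // the "finish" scope
        for (int i = 1; i <= 4; i++) {
            final int v = i;
            children.add(pool.submit(() -> sum.addAndGet(v))); // ~ async
        }
        for (Future<?> f : children) f.get();  // ~ end of finish: join all children
        System.out.println(sum.get()); // 10
        pool.shutdown();
    }
}
```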
final future<T> f = async<T> [at (place)]
      [phased [(ph1<mode1>, ...)] ] Stmt-Block-with-Return
- Asynchronously start a new child task to evaluate Stmt-Block-with-Return, with optional at and phased clauses as in async
- f is a reference to an object of type future<T>, which is a container for the value to be computed by the future task; T may be a primitive type (including void) or an object type (class)
- Stmt-Block-with-Return is a statement block that dynamically terminates with a return statement as in a method body; a return statement is not needed if the return type is void
f.get()
- Wait until future f has completed execution, and propagate its return value; if T = void, then f.get() is evaluated as a statement (like a method call with a void return value)
- get() also propagates any exception thrown by Stmt-Block-with-Return
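A rough standard-Java analogue of an HJ future and its get() is CompletableFuture (shown here as an illustration, not as HJ's API): the get blocks until the value is ready, and it also propagates an exception thrown by the future's body.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

// Standard-Java analogue of "final future<int> f = async<int> { return ...; }"
// followed by f.get().
public class FutureSketch {
    public static void main(String[] args) throws Exception {
        CompletableFuture<Integer> f =
            CompletableFuture.supplyAsync(() -> 6 * 7); // ~ async<T> child task
        System.out.println(f.get()); // 42: blocks until the child task returns

        CompletableFuture<Integer> g = CompletableFuture.supplyAsync(() -> {
            throw new IllegalStateException("boom");    // body throws
        });
        try {
            g.get();
        } catch (ExecutionException e) { // exception propagated, as with HJ's get()
            System.out.println(e.getCause().getMessage()); // boom
        }
    }
}
```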
point
- A point is an n-dimensional tuple of ints
- A point variable can hold values of different ranks, e.g., point p; p = [1]; … p = [2,3]; …
for (point [i1, …] : [lo1:hi1, …]) Stmt
- Execute multiple instances of Stmt sequentially in lexicographic order, one per iteration in the rectangular region [lo1:hi1, …]
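The lexicographic iteration order over a rectangular region can be pictured with ordinary nested loops; this plain-Java sketch (class name illustrative) enumerates the region [0:1, 0:1] in the same order an HJ pointwise for loop would:

```java
// Lexicographic iteration over the rectangular region [0:1, 0:1],
// as in HJ's "for (point [i,j] : [0:1, 0:1]) Stmt".
public class PointLoopDemo {
    public static void main(String[] args) {
        StringBuilder order = new StringBuilder();
        for (int i = 0; i <= 1; i++)       // outer dimension varies slowest
            for (int j = 0; j <= 1; j++)   // inner dimension varies fastest
                order.append("(" + i + "," + j + ")");
        System.out.println(order); // (0,0)(0,1)(1,0)(1,1)
    }
}
```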
forall (point [i1, …] : [lo1:hi1, …]) Stmt
- Create multiple parallel instances of Stmt as child tasks, one per forall iteration in the rectangular region [lo1:hi1, …]
- An implicit finish is included for all iterations of the forall
- Each forall instance has an anonymous pre-allocated phaser shared by all its iterations; no explicit phased clause is permitted for a forall
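The forall semantics (parallel iterations plus an implicit join) can be approximated in standard Java with a parallel stream; this is a hedged sketch of the idea, not HJ code:

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.stream.IntStream;

// Standard-Java sketch of "forall (point [i] : [1:100]) Stmt": iterations
// may run in parallel, and control does not pass the loop until all
// iterations have finished (~ the implicit finish).
public class ForallSketch {
    public static void main(String[] args) {
        AtomicLong sum = new AtomicLong(0);
        IntStream.rangeClosed(1, 100).parallel().forEach(sum::addAndGet);
        // This line runs only after every iteration terminated.
        System.out.println(sum.get()); // 5050
    }
}
```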
forasync (point [i1, …] : [lo1:hi1, …]) [phased [(ph1<mode1>, ...)] ] Stmt
- Like forall, create multiple instances of Stmt as child tasks, one per forasync iteration in the rectangular region [lo1:hi1, …]
- There is no implicit finish in forasync
- As with async, a forasync iteration may optionally be phased on a specified subset, (ph1<mode1>, ...), of its parent's phasers or on the entire set
new phaser(mode1)
- Allocate a phaser with the specified mode, which can be one of SIG, WAIT, SIG_WAIT, SINGLE
- The scope of a phaser is limited to its immediately enclosing finish
next ;
- Advance each phaser that this task is registered on to its next phase, in accordance with this task's registration mode
- Wait on each phaser that the task is registered on with a wait capability (WAIT, SIG_WAIT, SINGLE)
next single Stmt
- Execute a single instance of Stmt during the phase transition performed by next
- All tasks executing the next single statement must be registered with all its phasers in SINGLE mode
signal ;
- Signal each phaser that the task is registered on with a signal capability (SIG, SIG_WAIT, SINGLE)
- signal is a non-blocking operation; the computation between signal and next serves as a “split-phase barrier”
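Standard Java provides a directly analogous construct in java.util.concurrent.Phaser, where arriveAndAwaitAdvance() plays the role of next (signal then wait) and arrive() plays the role of signal alone, enabling the split-phase pattern. A minimal sketch (class name illustrative, not HJ code):

```java
import java.util.concurrent.Phaser;

// Two tasks registered on one barrier, both with signal-and-wait capability
// (~ SIG_WAIT mode in HJ). Neither task starts its phase-1 work until both
// have completed their phase-0 work.
public class PhaserSketch {
    public static void main(String[] args) throws InterruptedException {
        Phaser ph = new Phaser(2);          // two registered parties
        StringBuffer log = new StringBuffer(); // synchronized, safe to share
        Runnable worker = () -> {
            log.append("a");                // phase 0 work
            ph.arriveAndAwaitAdvance();     // ~ next: signal + wait
            log.append("b");                // phase 1 work
            ph.arriveAndAwaitAdvance();     // (ph.arrive() alone ~ HJ's signal)
        };
        Thread t1 = new Thread(worker), t2 = new Thread(worker);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(log); // aabb: all phase-0 work precedes phase-1 work
    }
}
```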
isolated Stmt
- Execute Stmt in isolation (mutual exclusion) relative to all other instances of isolated statements
- Stmt must not contain any parallel constructs
- Weak atomicity: no guarantee of isolation with respect to non-isolated statements
isolated [(obj1, ...)] Stmt
- Object-based isolation — mutual exclusion is only guaranteed for a pair of isolated statements with a non-empty intersection of their object sets
- If no object set is specified, then the default set is the universe of all objects
- A null value for an object is treated like an empty contribution to the set
- Weak atomicity: no guarantee of isolation with respect to non-isolated statements
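Object-based isolation can be pictured via Java's built-in synchronized statement: two critical sections exclude each other only when they lock the same object, much as two isolated statements exclude each other only when their object sets intersect. A minimal sketch (class and field names illustrative, not HJ's mechanism):

```java
// Two threads increment a shared counter; each increment runs in mutual
// exclusion because both critical sections name the same lock object
// (~ isolated (lockObj) Stmt in HJ). Without the mutual exclusion,
// increments could be lost.
public class IsolatedSketch {
    static int counter = 0;
    static final Object lockObj = new Object();

    public static void main(String[] args) throws InterruptedException {
        Runnable r = () -> {
            for (int i = 0; i < 1000; i++) {
                synchronized (lockObj) { counter++; } // ~ isolated(lockObj) { ... }
            }
        };
        Thread t1 = new Thread(r), t2 = new Thread(r);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter); // 2000: no lost updates
    }
}
```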
complex32, complex64
- HJ includes complex as a primitive type, e.g.,
  complex32 cf = (1.0f, 2.0f); complex64 cd = (1.0, 2.0);
- The following operations are supported on complex:
  +, -, *, /, ==, !=, toString(), exp(), sin(), cos(), sqrt(), pow()
array views
T[.] declares a view on a 1-D Java array, e.g.,
      double[.] view = new arrayView(baseArray, offset, [lo1:hi1, …])
where
      baseArray = base 1-D Java array
      offset = starting offset in baseArray for view
      [lo1:hi1, …] = rectangular region for view
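The underlying index arithmetic of such a view can be sketched in plain Java; the class and method names below (View2D, get) are illustrative, not HJ's arrayView API:

```java
// A 2-D "view" over a 1-D base array with a starting offset: element (i, j)
// of the view maps to base[offset + i * cols + j], mirroring the idea of
// HJ's arrayView(baseArray, offset, [lo1:hi1, lo2:hi2]).
public class ArrayViewSketch {
    static class View2D {
        final double[] base;
        final int offset, cols;
        View2D(double[] base, int offset, int cols) {
            this.base = base; this.offset = offset; this.cols = cols;
        }
        double get(int i, int j) { return base[offset + i * cols + j]; }
    }

    public static void main(String[] args) {
        double[] base = {9, 9, 1, 2, 3, 4}; // first two slots skipped by offset
        View2D v = new View2D(base, 2, 2);  // 2x2 view starting at index 2
        System.out.println(v.get(1, 0));    // 3.0 = base[2 + 1*2 + 0]
    }
}
```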
abstract performance metrics
- The programmer inserts calls of the form perf.addLocalOps(N) in sequential code
- The HJ implementation computes total work and critical path length in units of the programmer's local ops
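The accounting rule behind these metrics can be illustrated with a toy calculation (this is an assumption-laden sketch of the usual work/span model, not HJ's implementation): for two asyncs running in parallel inside one finish, every local op counts toward total work, while only the longer branch extends the critical path.

```java
// Toy work / critical-path accounting for "finish { async S1; async S2; }",
// where S1 and S2 each report their local ops via perf.addLocalOps(...).
public class MetricsSketch {
    public static void main(String[] args) {
        long ops1 = 300, ops2 = 500;              // reported local ops (assumed)
        long totalWork = ops1 + ops2;             // every op executes somewhere
        long criticalPath = Math.max(ops1, ops2); // parallel branches overlap
        System.out.println(totalWork + " " + criticalPath); // 800 500
    }
}
```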