Threads

(The following is partly a recap from CS-A1120 Programming 2.)

Threads are the core mechanism for executing code concurrently on multicore machines. Threads execute

  • in parallel on different cores when such are available, or

  • in time slices when there are not enough cores for all the active threads.
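
As a side note, the number of processing units the JVM sees can be queried directly (this value is not needed in the examples below):

// Logical cores available to the JVM; with more active threads than
// this, the threads are run in time slices
println(Runtime.getRuntime.availableProcessors())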

Different threads within the same process, such as a Scala application or a C++ program, share memory and can thus access the same data structures. Each process has at least one thread, the main thread. As Scala programs run on top of the Java Virtual Machine, they use Java threads. To perform computations concurrently,

  • ​ threads can start new threads (“fork”),

  • ​ wait until other threads have finished (“join”), and

  • ​ communicate between each other through the shared memory.

Communication between threads is a source of many errors; when threads do communicate, one should take care of proper synchronization or use high-level concurrency objects. Observe that debugging concurrent programs can be much more difficult than debugging sequential ones because the timing of the thread executions usually cannot be controlled: a bug can appear in one execution but not in the next because the threads happen to be scheduled slightly differently. As stated earlier, in this course we will not discuss thread communication and synchronization further but assume that the tasks executed in threads are independent and that the only synchronization happens at the end, when the threads join.
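
As a small illustration of the kind of bug we will avoid (a hypothetical toy program, not one of the course examples): below, two threads increment a shared counter without any synchronization. Because each increment is a separate read-modify-write, updates from the two threads can interleave and get lost, so the final value typically differs from run to run.

var counter = 0
val incrementer = new Runnable {
  // "counter += 1" is a read-modify-write: both threads can read the
  // same old value, and one of the two updates is then lost
  def run() = { for (_ <- 1 to 100000) counter += 1 }
}
val t1 = new Thread(incrementer)
val t2 = new Thread(incrementer)
t1.start(); t2.start()
t1.join(); t2.join()
// Usually prints a value less than 200000, and a different one on each run
println(counter)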

Example

Computing the median and average of an array in parallel by using Java threads in Scala can be done as follows.

// The input array (example data; any Array[Int] works)
val a = Array(3, 1, 4, 1, 5, 9, 2, 6)

abstract class ArrayRunnable[T](val a: Array[T]) extends Runnable {
  var result: Int = 0 // written by the thread, read by main after join
}
val medianRunnable = new ArrayRunnable(a) {
  def run() = { result = a.sorted.apply(a.length / 2) }
}
val avgRunnable = new ArrayRunnable(a) {
  def run() = { result = a.sum / a.length }
}
val medianThread = new Thread(medianRunnable)
val avgThread = new Thread(avgRunnable)
// Run the threads in parallel
medianThread.start()
avgThread.start()
// Synchronize: wait until both have finished
medianThread.join()
avgThread.join()
// Get the results: the joins guarantee that they are ready
val (median, avg) = (medianRunnable.result, avgRunnable.result)

Assume that threads are only created with forks and later joined, and that no other synchronization is done. In such a case, each execution can be illustrated with a computation DAG in which vertices correspond to different points of execution in each thread: entry, fork, join, and exit.

  • Consecutive execution points in a thread are connected with an edge.

  • Each fork vertex is connected to the entry vertex of the created thread.

  • Each exit vertex in a non-main thread is connected to the corresponding join vertex.

Example

The computation DAG for an execution where the main thread forks two new threads that compute the median and average of an array in parallel:

[Figure: computation DAG of the fork-join execution (par-threads-2.png)]

Recall that the work of a parallel program is the sum of the execution times of all its parallel computations (\( \approx \) the number of CPU cycles used in all the threads). In the DAG representation, this is the sum of the running times of all the computation edges.

On the other hand, the span is the “wall-clock time” of the program in an ideal setting with an unbounded number of parallel processing units. In the DAG representation, it corresponds to the length of the longest path.
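
As a notational aside, following a common convention in the literature (e.g., Cormen et al.; the notation is not used elsewhere in this material): if \(T_1\) denotes the work and \(T_\infty\) the span, then the running time \(T_p\) on \(p\) processing units satisfies the work law \(T_p \ge T_1 / p\) and the span law \(T_p \ge T_\infty\). Consequently, the speedup is bounded by \(T_1 / T_p \le \min(p,\, T_1 / T_\infty)\), and the ratio \(T_1 / T_\infty\) is called the parallelism of the program.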

Example

Recall our previous example of computing the median and average of an array in parallel by using Java threads in Scala:

abstract class ArrayRunnable[T](val a: Array[T]) extends Runnable {
  var result: Int = 0 // written by the thread, read by main after join
}
val medianRunnable = new ArrayRunnable(a) {
  def run() = { result = a.sorted.apply(a.length / 2) }
}
val avgRunnable = new ArrayRunnable(a) {
  def run() = { result = a.sum / a.length }
}
val medianThread = new Thread(medianRunnable)
val avgThread = new Thread(avgRunnable)
// Run the threads in parallel
medianThread.start()
avgThread.start()
// Synchronize: wait until both have finished
medianThread.join()
avgThread.join()
// Get the results: the joins guarantee that they are ready
val (median, avg) = (medianRunnable.result, avgRunnable.result)

The work is the sum of

  • the times taken in thread allocation, initialization, and deallocation, and

  • the times taken in the median and average computations.

The span is the sum of

  • the times taken in thread allocation, initialization, and deallocation, and

  • the larger of the times taken in the median and average computations.
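
To make this concrete with made-up numbers: if thread allocation, initialization, and deallocation take \(1\) ms in total, the median computation \(8\) ms, and the average computation \(2\) ms, then the work is \(1 + 8 + 2 = 11\) ms while the span is \(1 + \max(8, 2) = 9\) ms. Thus even with an unbounded number of cores, this program runs at most \(11 / 9 \approx 1.2\) times faster than a sequential execution of the same work.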

However, programming with plain threads is tedious and error-prone. Furthermore, creating and deleting threads is not cheap:

Thread objects use a significant amount of memory, and in a large-scale application, allocating and deallocating many thread objects creates a significant memory management overhead. (An excerpt from the Java tutorials.)

In the rest of the round, we will use higher-level abstractions in which thread management is taken care of by a concurrency platform, and we can focus on the tasks that should be run in parallel. The main thing we have to take care of is that there are no data races. This is usually quite easy to achieve: just make sure that tasks that can run in parallel will never read values written by the others, as the sketch below illustrates.
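
Here is a minimal sketch of this discipline, still using plain threads for illustration (the array contents and names are made up): both tasks read the shared input array, but each writes only to its own disjoint half of the output array, so no synchronization beyond the final joins is needed.

val input = Array(1, 2, 3, 4, 5, 6, 7, 8) // example data
val output = new Array[Int](input.length)
val mid = input.length / 2
// Each thread writes only to its own disjoint slice of output
val lowerTask = new Thread(new Runnable {
  def run() = { for (i <- 0 until mid) output(i) = input(i) * input(i) }
})
val upperTask = new Thread(new Runnable {
  def run() = { for (i <- mid until input.length) output(i) = input(i) * input(i) }
})
lowerTask.start(); upperTask.start()
lowerTask.join(); upperTask.join()
// Safe to read: the joins guarantee that both halves have been written
println(output.mkString(", "))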