
Experimental analysis

The O-notation described earlier disregards constant factors. In practice, however, constant factors in running times and other resource usages do matter. In addition to mathematical analysis, we may thus perform experimental performance analysis. For instance, we may

  • ​ take implementations of two algorithms that perform the same task,

  • ​ run these on the same input set, and

  • ​ compare their running times.

Similarly, we may

  • instrument the code, for instance, to count all the comparison operations performed, and then run the algorithm on various inputs to see how many comparisons are made on average or in the worst case, or

  • use a profiling tool (such as valgrind for C/C++/binary programs) to find out the number of cache misses and so on.
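As an illustration of the first approach, the following is our own minimal sketch (the counter and the function are hypothetical names, not from any library) of an insertion sort instrumented to count key comparisons:

```scala
// Hypothetical sketch: insertion sort instrumented to count key comparisons.
var comparisons = 0L

def insertionSort(a: Array[Int]): Unit = {
  for (i <- 1 until a.length) {
    val key = a(i)
    var j = i - 1
    // Each evaluation of the ordering test a(j) > key is one comparison
    while (j >= 0 && { comparisons += 1; a(j) > key }) {
      a(j + 1) = a(j)
      j -= 1
    }
    a(j + 1) = key
  }
}

val rand = new scala.util.Random(2024)
val a = Array.fill(1000)(rand.nextInt(100000))
insertionSort(a)
println(s"comparisons on a random input of 1000 elements: $comparisons")
```

Running this on many random inputs and averaging the counts gives a machine-independent measure of the work done, unlike raw running times.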

Example: Square matrix multiplication

Let’s compare the performance of some \( n \times n \) matrix multiplication implementations:

  • ​ Scala naive: the code in Section Running times of algorithms

  • ​ C++ naive: the same in C++

  • ​ Scala transpose: transpose the other matrix for better cache behavior

  • Scala transpose parallel: the previous one, but using 8 threads on a 3.20 GHz Intel Xeon E3-1230 CPU

  • cuBLAS (GPU): use an NVIDIA Quadro 2000 (a mid-range graphics processing unit) and a well-designed implementation in the NVIDIA cuBLAS library (version 7.5)
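The transpose trick mentioned above can be sketched as follows (our own minimal Scala version, not the course implementation; matrices are represented as arrays of rows):

```scala
// Minimal sketch of the cache-friendly transpose trick (our own code).
// Matrices are arrays of rows, so b(j)(i) jumps between rows: transposing b
// first lets the inner loop scan both operands over consecutive memory.
def multiplyTransposed(a: Array[Array[Double]], b: Array[Array[Double]]): Array[Array[Double]] = {
  val n = a.length
  val bT = Array.tabulate(n, n)((i, j) => b(j)(i)) // transpose of b
  Array.tabulate(n, n) { (i, j) =>
    var s = 0.0
    var k = 0
    // Both a(i) and bT(j) are now scanned left to right, row-wise
    while (k < n) { s += a(i)(k) * bT(j)(k); k += 1 }
    s
  }
}

val a = Array(Array(1.0, 2.0), Array(3.0, 4.0))
val b = Array(Array(5.0, 6.0), Array(7.0, 8.0))
println(multiplyTransposed(a, b).map(_.mkString(" ")).mkString("\n"))
```

The extra \( \Oh{n^2} \) transposition work is negligible compared to the \( \Oh{n^3} \) multiplication, but the improved memory access pattern can reduce cache misses substantially.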

[Figure: running-time comparison of the matrix multiplication implementations (_images/matrixmult-comparisons.jpg)]

We see that the implementations scale similarly, but on \( 2000 \times 2000 \) matrices (and on this particular machine) the running time drops from \( 68\,\mathrm{s} \) for the naive Scala implementation to \( 0.45\,\mathrm{s} \) for the GPU implementation.

Simple running time measurement in Scala

As seen in the CS-A1120 Programming 2 course, we can measure running times of functions quite easily.

import java.lang.management.{ManagementFactory, ThreadMXBean}

package object timer {
  val bean: ThreadMXBean = ManagementFactory.getThreadMXBean()

  // CPU time consumed by the current thread, in nanoseconds
  // (0 if the JVM does not support per-thread CPU time measurement)
  def getCpuTime: Long =
    if (bean.isCurrentThreadCpuTimeSupported()) bean.getCurrentThreadCpuTime()
    else 0

  // Evaluate f and return its result together with the CPU time
  // (in seconds) consumed by the current thread
  def measureCpuTime[T](f: => T): (T, Double) = {
    val start = getCpuTime
    val r = f
    val end = getCpuTime
    (r, (end - start) / 1000000000.0) // nanoseconds to seconds
  }

  // Evaluate f and return its result together with the elapsed
  // wall-clock time in seconds
  def measureWallClockTime[T](f: => T): (T, Double) = {
    val start = System.nanoTime
    val r = f
    val end = System.nanoTime
    (r, (end - start) / 1000000000.0) // nanoseconds to seconds
  }
}

Observe that wall-clock times (the function measureWallClockTime) should be used when measuring running times of parallel programs. This is because the method getCurrentThreadCpuTime used in measureCpuTime measures the CPU time of the current thread only.
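As a usage sketch, the wall-clock variant can be applied like this (a standalone snippet that inlines the same measurement logic instead of importing the timer package object above):

```scala
// Standalone usage sketch: the same wall-clock measurement idea as above,
// inlined so that the snippet runs on its own.
def measureWallClockTime[T](f: => T): (T, Double) = {
  val start = System.nanoTime
  val r = f
  val end = System.nanoTime
  (r, (end - start) / 1e9) // nanoseconds to seconds
}

// Measure how long it takes to sum the first million positive integers
val (sum, t) = measureWallClockTime {
  (1 to 1000000).map(_.toLong).sum
}
println(s"sum = $sum, took $t seconds")
```

The call-by-name parameter `f: => T` is what makes the `measure... { ... }` block syntax work: the block is not evaluated until `f` is referenced between the two time stamps.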

Unfortunately, things are not always that simple. Let’s run the following code, which twenty times fills an array with 10000 random integers and sorts it:

import timer._

val a = new Array[Int](10000)
val rand = new scala.util.Random()
val times = scala.collection.mutable.ArrayBuffer[Double]()
for (test <- 0 until 20) {
  // Fill the array with fresh random integers
  for (i <- a.indices) a(i) = rand.nextInt
  // Measure the CPU time spent in sorting
  val (dummy, t) = measureCpuTime { a.sorted }
  times += t
}
println(times.mkString(", "))

An output of the program:

0.024590893, 0.010173347, 0.005155264, 0.003917635, 0.002668954, 0.002594947, 0.002807369, 0.002570412, 0.004068068, 0.006141965, 0.005393813, 0.005688233, 0.005168036, 0.005198876, 0.005008517, 0.004631029, 0.002407551, 0.001865849, 0.001878646, 0.002058858

The program measures the same code on similar inputs, yet the running times improve significantly after some repetitions. Why does this happen? The behaviour can be explained as follows.

  • ​ A Scala program is first compiled into Java bytecode.

  • ​ The resulting Java bytecode is executed in a Java virtual machine (JVM).

  • ​ The execution may start in interpreted mode.

  • After a while, some parts of the program may be just-in-time compiled to native machine code during the execution of the program.

  • ​ The compiled code may then be globally optimized during the execution.

  • ​ In addition, garbage collection may happen during the execution.

As a consequence, one should be careful when interpreting measured running times of programs with short running times. For programs with longer running times this is usually not such a big problem.
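A simple home-grown way to reduce these effects (our own hypothetical helper, far less sophisticated than dedicated benchmarking tools) is to repeat the measurement, discard the first warm-up runs, and report the median of the rest:

```scala
// Hypothetical helper: run f `reps` times, throw away the first `warmup`
// measurements, and report the median of the remaining wall-clock times.
def medianTime[T](reps: Int, warmup: Int)(f: => T): Double = {
  val times = (0 until reps).map { _ =>
    val start = System.nanoTime
    f
    (System.nanoTime - start) / 1e9 // nanoseconds to seconds
  }
  val kept = times.drop(warmup).sorted
  kept(kept.length / 2) // median of the post-warm-up measurements
}

val t = medianTime(reps = 10, warmup = 3) {
  Array.fill(10000)(scala.util.Random.nextInt).sorted
}
println(s"median sorting time: $t seconds")
```

The median is preferred over the mean here because an occasional garbage collection pause can inflate a single measurement dramatically.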

As we saw, reliably benchmarking programs written in interpreted and virtual machine based languages (such as Java and Scala) can be challenging, especially if the running times are small. In this course, we’ll sometimes use the ScalaMeter tool for performance analysis. It tries to take the JVM-related issues into account by, among other things, “warming up” the virtual machine and using statistical analysis techniques. See, for instance, the documentation of the tool and the article

  1. Georges, Buytaert, and Eeckhout: Statistically rigorous Java performance evaluation, Proc. OOPSLA 2007, pp. 57–76.