<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" lang="en-gb">
<head>
<meta http-equiv="content-type" content="text/html; charset=UTF-8" />
<title>pprocess - Tutorial</title>
<link href="styles.css" rel="stylesheet" type="text/css" />
</head>
<body>

<h1>pprocess - Tutorial</h1>

<p>The <code>pprocess</code> module provides several mechanisms for running
Python code concurrently in several processes. The most straightforward way of
making a program parallel-aware - that is, where the program can take
advantage of more than one processor to simultaneously process data - is to
use the <code>pmap</code> function.</p>

<ul>
<li><a href="#pmap">Converting Map-Style Code</a></li>
<li><a href="#Map">Converting Invocations to Parallel Operations</a></li>
<li><a href="#Queue">Converting Arbitrarily-Ordered Invocations</a></li>
<li><a href="#create">Converting Inline Computations</a></li>
<li><a href="#MakeReusable">Reusing Processes in Parallel Programs</a></li>
<li><a href="#BackgroundCallable">Performing Computations in Background Processes</a></li>
<li><a href="#ManagingBackgroundProcesses">Managing Several Background Processes</a></li>
<li><a href="#Summary">Summary</a></li>
</ul>

<p>For a brief summary of each of the features of <code>pprocess</code>, see
the <a href="reference.html">reference document</a>.</p>

<h2 id="pmap">Converting Map-Style Code</h2>

<p>Consider a program using the built-in <code>map</code> function and a sequence of inputs:</p>

<pre>
t = time.time()

# Initialise an array.

sequence = []
for i in range(0, N):
    for j in range(0, N):
        sequence.append((i, j))

# Perform the work.

results = map(calculate, sequence)

# Show the results.

for i in range(0, N):
    for result in results[i*N:i*N+N]:
        print result,
    print

print "Time taken:", time.time() - t</pre>

<p>(This code in context with <code>import</code> statements and functions is
found in the <code>examples/simple_map.py</code> file.)</p>

<p>The principal features of this program involve the preparation of an array
for input purposes, and the use of the <code>map</code> function to iterate
over the combinations of <code>i</code> and <code>j</code> in the array. Even
if the <code>calculate</code> function could be invoked independently for each
input value, we have to wait for each computation to complete before
initiating a new one. The <code>calculate</code> function may be defined as
follows:</p>

<pre>
def calculate(t):

    "A supposedly time-consuming calculation on 't'."

    i, j = t
    time.sleep(delay)
    return i * N + j
</pre>

<p>In order to reduce the processing time - to speed the code up, in other
words - we can make this code use several processes instead of just one. Here
is the modified code:</p>

<pre>
t = time.time()

# Initialise an array.

sequence = []
for i in range(0, N):
    for j in range(0, N):
        sequence.append((i, j))

# Perform the work.

results = <strong>pprocess.pmap</strong>(calculate, sequence<strong>, limit=limit</strong>)

# Show the results.

for i in range(0, N):
    for result in results[i*N:i*N+N]:
        print result,
    print

print "Time taken:", time.time() - t</pre>

<p>(This code in context with <code>import</code> statements and functions is
found in the <code>examples/simple_pmap.py</code> file.)</p>

<p>By replacing usage of the <code>map</code> function with the
<code>pprocess.pmap</code> function, and specifying the limit on the number of
processes to be active at any given time (the value of the <code>limit</code>
variable is defined elsewhere), several calculations can now be performed in
parallel.</p>

<h2 id="Map">Converting Invocations to Parallel Operations</h2>

<p>Although some programs make natural use of the <code>map</code> function,
others may employ an invocation in a nested loop. This may also be converted
to a parallel program. Consider the following Python code:</p>

<pre>
t = time.time()

# Initialise an array.

results = []

# Perform the work.

print "Calculating..."
for i in range(0, N):
    for j in range(0, N):
        results.append(calculate(i, j))

# Show the results.

for i in range(0, N):
    for result in results[i*N:i*N+N]:
        print result,
    print

print "Time taken:", time.time() - t</pre>

<p>(This code in context with <code>import</code> statements and functions is
found in the <code>examples/simple1.py</code> file.)</p>

<p>Here, a computation in the <code>calculate</code> function is performed for
each combination of <code>i</code> and <code>j</code> in the nested loop,
returning a result value. However, we must wait for the completion of this
function for each element before moving on to the next element, and this means
that the computations are performed sequentially. Consequently, on a system
with more than one processor, even if we could call <code>calculate</code> for
more than one combination of <code>i</code> and <code>j</code>
and have the computations executing at the same time, the above program will
not take advantage of such capabilities.</p>

<p>We use a slightly modified version of <code>calculate</code> which employs
two parameters instead of one:</p>

<pre>
def calculate(i, j):

    """
    A supposedly time-consuming calculation on 'i' and 'j'.
    """

    time.sleep(delay)
    return i * N + j
</pre>

<p>In order to reduce the processing time - to speed the code up, in other
words - we can make this code use several processes instead of just one. Here
is the modified code:</p>

<pre id="simple_managed_map">
t = time.time()

# Initialise the results using a map with a limit on the number of
# channels/processes.

<strong>results = pprocess.Map(limit=limit)</strong>

# Wrap the calculate function and manage it.

<strong>calc = results.manage(pprocess.MakeParallel(calculate))</strong>

# Perform the work.

print "Calculating..."
for i in range(0, N):
    for j in range(0, N):
        <strong>calc</strong>(i, j)

# Show the results.

for i in range(0, N):
    for result in results[i*N:i*N+N]:
        print result,
    print

print "Time taken:", time.time() - t</pre>

<p>(This code in context with <code>import</code> statements and functions is
found in the <code>examples/simple_managed_map.py</code> file.)</p>

<p>The principal changes in the above code involve the use of a
<code>pprocess.Map</code> object to collect the results, and a version of the
<code>calculate</code> function which is managed by the <code>Map</code>
object. What the <code>Map</code> object does is to arrange the results of
computations such that iterating over the object or accessing the object using
list operations provides the results in the same order as their corresponding
inputs.</p>
paulb@124 | 214 | |
paulb@145 | 215 | <h2 id="Queue">Converting Arbitrarily-Ordered Invocations</h2> |
paulb@124 | 216 | |
<p>In some programs, it is not important to receive the results of
computations in any particular order, usually because either the order of
these results is irrelevant, or because the results provide "positional"
information which lets them be handled in an appropriate way. Consider the
following Python code:</p>

<pre>
t = time.time()

# Initialise an array.

results = [0] * N * N

# Perform the work.

print "Calculating..."
for i in range(0, N):
    for j in range(0, N):
        i2, j2, result = calculate(i, j)
        results[i2*N+j2] = result

# Show the results.

for i in range(0, N):
    for result in results[i*N:i*N+N]:
        print result,
    print

print "Time taken:", time.time() - t
</pre>

<p>(This code in context with <code>import</code> statements and functions is
found in the <code>examples/simple2.py</code> file.)</p>

<p>Here, a result array is initialised first and each computation is performed
sequentially. A significant difference to the previous examples is the return
value of the <code>calculate</code> function: the position details
corresponding to <code>i</code> and <code>j</code> are returned alongside the
result. Obviously, this is of limited value in the above code because the
order of the computations and the reception of results is fixed, and so we
get no benefit from parallelisation in the above example.</p>

<p>We can bring the benefits of parallel processing to the above program with
the following code:</p>

<pre id="simple_managed_queue">
t = time.time()

# Initialise the communications queue with a limit on the number of
# channels/processes.

<strong>queue = pprocess.Queue(limit=limit)</strong>

# Initialise an array.

results = [0] * N * N

# Wrap the calculate function and manage it.

<strong>calc = queue.manage(pprocess.MakeParallel(calculate))</strong>

# Perform the work.

print "Calculating..."
for i in range(0, N):
    for j in range(0, N):
        <strong>calc(i, j)</strong>

# Store the results as they arrive.

print "Finishing..."
<strong>for i, j, result in queue:</strong>
    <strong>results[i*N+j] = result</strong>

# Show the results.

for i in range(0, N):
    for result in results[i*N:i*N+N]:
        print result,
    print

print "Time taken:", time.time() - t
</pre>

<p>(This code in context with <code>import</code> statements and functions is
found in the <code>examples/simple_managed_queue.py</code> file.)</p>

<p>This revised code employs a <code>pprocess.Queue</code> object whose
purpose is to collect the results of computations and to make them available
in the order in which they were received. The code collecting results has been
moved into a separate loop independent of the original computation loop and
taking advantage of the more relevant "positional" information emerging from
the queue.</p>

<p>We can take this example further, illustrating some of the mechanisms
employed by <code>pprocess</code>. Instead of collecting results in a queue,
we can define a class containing a method which is called when new results
arrive:</p>

<pre>
class MyExchange(pprocess.Exchange):

    "Parallel convenience class containing the array assignment operation."

    def store_data(self, ch):
        i, j, result = ch.receive()
        self.D[i*N+j] = result
</pre>

<p>This code exposes the channel paradigm which is used throughout
<code>pprocess</code> and is available to applications, if desired. The effect
of the method is the storage of a result received through the channel in an
attribute of the object. The following code shows how this class can be used,
with differences to the previous program illustrated:</p>

<pre>
t = time.time()

# Initialise the communications exchange with a limit on the number of
# channels/processes.

<strong>exchange = MyExchange(limit=limit)</strong>

# Initialise an array - it is stored in the exchange to permit automatic
# assignment of values as the data arrives.

<strong>results = exchange.D = [0] * N * N</strong>

# Wrap the calculate function and manage it.

calc = <strong>exchange</strong>.manage(pprocess.MakeParallel(calculate))

# Perform the work.

print "Calculating..."
for i in range(0, N):
    for j in range(0, N):
        calc(i, j)

# Wait for the results.

print "Finishing..."
<strong>exchange.finish()</strong>

# Show the results.

for i in range(0, N):
    for result in results[i*N:i*N+N]:
        print result,
    print

print "Time taken:", time.time() - t
</pre>

<p>(This code in context with <code>import</code> statements and functions is
found in the <code>examples/simple_managed.py</code> file.)</p>

<p>The main visible differences between this and the previous program are the
storage of the result array in the exchange, the removal of the queue
consumption code from the main program, placing the act of storing values in
the exchange's <code>store_data</code> method, and the need to call the
<code>finish</code> method on the <code>MyExchange</code> object so that we do
not try to access the results too soon. One underlying benefit not visible in
the above code is that we no longer need to accumulate results in a queue or
other structure so that they may be processed and assigned to the correct
positions in the result array.</p>

<p>For the curious, we may remove some of the remaining conveniences of the
above program to expose other features of <code>pprocess</code>. First, we
define a slightly modified version of the <code>calculate</code> function:</p>

<pre>
def calculate(ch, i, j):

    """
    A supposedly time-consuming calculation on 'i' and 'j', using 'ch' to
    communicate with the parent process.
    """

    time.sleep(delay)
    ch.send((i, j, i * N + j))
</pre>

<p>This function accepts a channel, <code>ch</code>, through which results
will be sent, and through which other values could potentially be received,
although we choose not to do so here. The program using this function is as
follows, with differences to the previous program illustrated:</p>

<pre>
t = time.time()

# Initialise the communications exchange with a limit on the number of
# channels/processes.

exchange = MyExchange(limit=limit)

# Initialise an array - it is stored in the exchange to permit automatic
# assignment of values as the data arrives.

results = exchange.D = [0] * N * N

# Perform the work.

print "Calculating..."
for i in range(0, N):
    for j in range(0, N):
        <strong>exchange.start(calculate, i, j)</strong>

# Wait for the results.

print "Finishing..."
exchange.finish()

# Show the results.

for i in range(0, N):
    for result in results[i*N:i*N+N]:
        print result,
    print

print "Time taken:", time.time() - t
</pre>

<p>(This code in context with <code>import</code> statements and functions is
found in the <code>examples/simple_start.py</code> file.)</p>

<p>Here, we have discarded two conveniences: the wrapping of callables using
<code>MakeParallel</code>, which lets us use functions without providing any
channel parameters, and the management of callables using the
<code>manage</code> method on queues, exchanges, and so on. The
<code>start</code> method still calls the provided callable, but using a
different notation from that employed previously.</p>

<h2 id="create">Converting Inline Computations</h2>

<p>Although many programs employ functions and other useful abstractions which
can be treated as parallelisable units, some programs perform computations
"inline", meaning that the code responsible appears directly within a loop or
related control-flow construct. Consider the following code:</p>

<pre>
t = time.time()

# Initialise an array.

results = [0] * N * N

# Perform the work.

print "Calculating..."
for i in range(0, N):
    for j in range(0, N):
        time.sleep(delay)
        results[i*N+j] = i * N + j

# Show the results.

for i in range(0, N):
    for result in results[i*N:i*N+N]:
        print result,
    print

print "Time taken:", time.time() - t
</pre>

<p>(This code in context with <code>import</code> statements and functions is
found in the <code>examples/simple.py</code> file.)</p>

<p>To simulate "work", as in the different versions of the
<code>calculate</code> function, we use the <code>time.sleep</code> function
(which does not actually do work, and which will cause a process to be
descheduled in most cases, but which simulates the delay associated with work
being done). This inline work, which must be performed sequentially in the
above program, can be performed in parallel in a somewhat modified version of
the program:</p>

<pre>
t = time.time()

# Initialise the results using a map with a limit on the number of
# channels/processes.

<strong>results = pprocess.Map(limit=limit)</strong>

# Perform the work.
# NOTE: Could use the with statement in the loop to package the
# NOTE: try...finally functionality.

print "Calculating..."
for i in range(0, N):
    for j in range(0, N):
        <strong>ch = results.create()</strong>
        <strong>if ch:</strong>
            <strong>try: # Calculation work.</strong>

                time.sleep(delay)
                <strong>ch.send(i * N + j)</strong>

            <strong>finally: # Important finalisation.</strong>

                <strong>pprocess.exit(ch)</strong>

# Show the results.

for i in range(0, N):
    for result in results[i*N:i*N+N]:
        print result,
    print

print "Time taken:", time.time() - t
</pre>

<p>(This code in context with <code>import</code> statements and functions is
found in the <code>examples/simple_create_map.py</code> file.)</p>

<p>Although seemingly more complicated, the bulk of the changes in this
modified program are focused on obtaining a channel object, <code>ch</code>,
at the point where the computations are performed, and the wrapping of the
computation code in a <code>try</code>...<code>finally</code> statement which
ensures that the process associated with the channel exits when the
computation is complete. In order for the results of these computations to be
collected, a <code>pprocess.Map</code> object is used, since it will maintain
the results in the same order as the initiation of the computations which
produced them.</p>
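<p>The underlying mechanism - forking a child process that performs the work,
sends its result over a channel, and is guaranteed to exit by a
<code>finally</code> clause - can be illustrated in bare-bones form with
<code>os.fork</code> and a pipe. This is a POSIX-only sketch of the idea, not
<code>pprocess</code>'s actual API.</p>

```python
# A bare-bones illustration of the create()/exit() idea: the child does the
# "calculation work", sends its result over a pipe, and the finally clause
# guarantees the "important finalisation" (exiting). POSIX-only.
import os

def compute_in_child(value):
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:
        # Child process: calculation work, then guaranteed exit.
        try:
            os.write(w, str(value * 2).encode())
        finally:
            os._exit(0)
    # Parent process: read the child's result from the channel.
    os.close(w)
    result = int(os.read(r, 64).decode())
    os.close(r)
    os.waitpid(pid, 0)
    return result

print(compute_in_child(21))
```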

<h2 id="MakeReusable">Reusing Processes in Parallel Programs</h2>

<p>One notable aspect of the above programs when parallelised is that each
invocation of a computation in parallel creates a new process in which the
computation is to be performed, regardless of whether existing processes had
just finished producing results and could theoretically have been asked to
perform new computations. In other words, processes were created and destroyed
instead of being reused.</p>

<p>However, we can request that processes be reused for computations by
enabling the <code>reuse</code> feature of exchange-like objects and employing
suitable reusable callables. Consider this modified version of the <a
href="#simple_managed_map">simple_managed_map</a> program:</p>

<pre>
t = time.time()

# Initialise the results using a map with a limit on the number of
# channels/processes.

results = pprocess.Map(limit=limit<strong>, reuse=1</strong>)

# Wrap the calculate function and manage it.

calc = results.manage(pprocess.Make<strong>Reusable</strong>(calculate))

# Perform the work.

print "Calculating..."
for i in range(0, N):
    for j in range(0, N):
        calc(i, j)

# Show the results.

for i in range(0, N):
    for result in results[i*N:i*N+N]:
        print result,
    print

print "Time taken:", time.time() - t
</pre>

<p>(This code in context with <code>import</code> statements and functions is
found in the <code>examples/simple_manage_map_reusable.py</code> file.)</p>

<p>By indicating that processes and channels shall be reused, and by wrapping
the <code>calculate</code> function with the necessary support, the
computations may be performed in parallel using a pool of processes instead of
creating a new process for each computation and then discarding it, only to
create a new process for the next computation.</p>

<h2 id="BackgroundCallable">Performing Computations in Background Processes</h2>

<p>Occasionally, it is desirable to initiate time-consuming computations and
not only to leave such processes running in the background, but also to detach
the creating process from them completely, potentially terminating the
creating process altogether, and yet still to be able to collect the results
of the created processes at a later time, potentially in a completely
different process. For such situations, we can make use of the
<code>BackgroundCallable</code> class, which converts a parallel-aware
callable into a callable which will run in a background process when
invoked.</p>

<p>Consider this excerpt from a modified version of the <a
href="#simple_managed_queue">simple_managed_queue</a> program:</p>

<pre>
<strong>def task():</strong>

    # Initialise the communications queue with a limit on the number of
    # channels/processes.

    queue = pprocess.Queue(limit=limit)

    # Initialise an array.

    results = [0] * N * N

    # Wrap the calculate function and manage it.

    calc = queue.manage(pprocess.MakeParallel(calculate))

    # Perform the work.

    print "Calculating..."
    for i in range(0, N):
        for j in range(0, N):
            calc(i, j)

    # Store the results as they arrive.

    print "Finishing..."
    for i, j, result in queue:
        results[i*N+j] = result

    <strong>return results</strong>
</pre>

<p>Here, we have converted the main program into a function, and instead of
printing out the results, we return the results list from the function.</p>

<p>Now, let us consider the new main program (with the relevant mechanisms
highlighted):</p>

<pre>
t = time.time()

if "--reconnect" not in sys.argv:

    # Wrap the computation and manage it.

    <strong>ptask = pprocess.BackgroundCallable("task.socket", pprocess.MakeParallel(task))</strong>

    # Perform the work.

    ptask()

    # Discard the callable.

    del ptask
    print "Discarded the callable."

if "--start" not in sys.argv:

    # Open a queue and reconnect to the task.

    print "Opening a queue."
    <strong>queue = pprocess.BackgroundQueue("task.socket")</strong>

    # Wait for the results.

    print "Waiting for persistent results"
    for results in queue:
        pass # should only be one element

    # Show the results.

    for i in range(0, N):
        for result in results[i*N:i*N+N]:
            print result,
paulb@145 | 682 | print |
paulb@145 | 683 | |
paulb@145 | 684 | print "Time taken:", time.time() - t |
paulb@145 | 685 | </pre> |
paulb@145 | 686 | |
paulb@145 | 687 | <p>(This code in context with <code>import</code> statements and functions is |
paulb@145 | 688 | found in the <code>examples/simple_background_queue.py</code> file.)</p> |
paulb@145 | 689 | |
paulb@145 | 690 | <p>This new main program has two parts: the part which initiates the |
paulb@145 | 691 | computation, and the part which connects to the computation in order to collect |
paulb@145 | 692 | the results. Both parts can be run in the same process, and this should result |
paulb@145 | 693 | in similar behaviour to that of the original |
paulb@145 | 694 | <a href="#simple_managed_queue">simple_managed_queue</a> program.</p> |
paulb@145 | 695 | |
paulb@145 | 696 | <p>In the above program, however, we are free to specify <code>--start</code> as |
paulb@145 | 697 | an option when running the program, and the result of this is merely to initiate |
paulb@145 | 698 | the computation in a background process, using <code>BackgroundCallable</code> |
paulb@145 | 699 | to obtain a callable which, when invoked, creates the background process and |
paulb@145 | 700 | runs the computation. After doing this, the program will then exit, but it will |
paulb@145 | 701 | leave the computation running as a collection of background processes, and a |
paulb@145 | 702 | special file called <code>task.socket</code> will exist in the current working |
paulb@145 | 703 | directory.</p> |
paulb@145 | 704 | |
paulb@145 | 705 | <p>When the above program is run using the <code>--reconnect</code> option, an |
paulb@145 | 706 | attempt will be made to reconnect to the background processes already created by |
paulb@145 | 707 | attempting to contact them using the previously created <code>task.socket</code> |
paulb@145 | 708 | special file (which is, in fact, a UNIX-domain socket); this being done using |
paulb@145 | 709 | the <code>BackgroundQueue</code> function which will handle the incoming results |
paulb@145 | 710 | in a fashion similar to that of a <code>Queue</code> object. Since only one |
paulb@145 | 711 | result is returned by the computation (as defined by the <code>return</code> |
paulb@145 | 712 | statement in the <code>task</code> function), we need only expect one element to |
paulb@145 | 713 | be collected by the queue: a list containing all of the results produced in the |
paulb@145 | 714 | computation.</p> |
paulb@145 | 715 | |
paulb@145 | 716 | <h2 id="ManagingBackgroundProcesses">Managing Several Background Processes</h2> |
paulb@145 | 717 | |
paulb@145 | 718 | <p>In the above example, a single background process was used to manage a number |
paulb@145 | 719 | of other processes, with all of them running in the background. However, it can |
paulb@145 | 720 | be desirable to manage more than one background process.</p> |
paulb@145 | 721 | |
paulb@145 | 722 | <p>Consider this excerpt from a modified version of the <a |
paulb@145 | 723 | href="#simple_managed_queue">simple_managed_queue</a> program:</p> |
paulb@145 | 724 | |
paulb@145 | 725 | <pre> |
paulb@145 | 726 | <strong>def task(i):</strong> |
paulb@145 | 727 | |
paulb@145 | 728 | # Initialise the communications queue with a limit on the number of |
paulb@145 | 729 | # channels/processes. |
paulb@145 | 730 | |
paulb@145 | 731 | queue = pprocess.Queue(limit=limit) |
paulb@145 | 732 | |
paulb@145 | 733 | # Initialise an array. |
paulb@145 | 734 | |
paulb@145 | 735 | results = [0] * N |
paulb@145 | 736 | |
paulb@145 | 737 | # Wrap the calculate function and manage it. |
paulb@145 | 738 | |
paulb@145 | 739 | calc = queue.manage(pprocess.MakeParallel(calculate)) |
paulb@145 | 740 | |
paulb@145 | 741 | # Perform the work. |
paulb@145 | 742 | |
paulb@145 | 743 | print "Calculating..." |
paulb@145 | 744 | <strong>for j in range(0, N):</strong> |
paulb@145 | 745 | <strong>calc(i, j)</strong> |
paulb@145 | 746 | |
paulb@145 | 747 | # Store the results as they arrive. |
paulb@145 | 748 | |
paulb@145 | 749 | print "Finishing..." |
paulb@145 | 750 | <strong>for i, j, result in queue:</strong> |
paulb@145 | 751 | <strong>results[j] = result</strong> |
paulb@145 | 752 | |
paulb@145 | 753 | <strong>return i, results</strong> |
paulb@145 | 754 | </pre> |
paulb@145 | 755 | |
paulb@145 | 756 | <p>Just as we see in the previous example, a function called <code>task</code> |
paulb@145 | 757 | has been defined to hold a background computation, and this function returns a |
paulb@145 | 758 | portion of the results. However, unlike the previous example or the original |
paulb@145 | 759 | example, the scope of the results of the computation collected in the function |
paulb@145 | 760 | have been changed: here, only results for calculations involving a certain value |
paulb@145 | 761 | of <code>i</code> are collected, with the particular value of <code>i</code> |
paulb@145 | 762 | returned along with the appropriate portion of the results.</p> |
paulb@145 | 763 | |
paulb@145 | 764 | <p>Now, let us consider the new main program (with the relevant mechanisms |
paulb@145 | 765 | highlighted):</p> |
paulb@145 | 766 | |
paulb@145 | 767 | <pre> |
paulb@145 | 768 | t = time.time() |
paulb@145 | 769 | |
paulb@145 | 770 | if "--reconnect" not in sys.argv: |
paulb@145 | 771 | |
paulb@145 | 772 | # Wrap the computation and manage it. |
paulb@145 | 773 | |
paulb@145 | 774 | <strong>ptask = pprocess.MakeParallel(task)</strong> |
paulb@145 | 775 | |
paulb@145 | 776 | <strong>for i in range(0, N):</strong> |
paulb@145 | 777 | |
paulb@145 | 778 | # Make a distinct callable for each part of the computation. |
paulb@145 | 779 | |
paulb@145 | 780 | <strong>ptask_i = pprocess.BackgroundCallable("task-%d.socket" % i, ptask)</strong> |
paulb@145 | 781 | |
paulb@145 | 782 | # Perform the work. |
paulb@145 | 783 | |
paulb@145 | 784 | <strong>ptask_i(i)</strong> |
paulb@145 | 785 | |
paulb@145 | 786 | # Discard the callable. |
paulb@145 | 787 | |
paulb@145 | 788 | del ptask |
paulb@145 | 789 | print "Discarded the callable." |
paulb@145 | 790 | |
paulb@145 | 791 | if "--start" not in sys.argv: |
paulb@145 | 792 | |
paulb@145 | 793 | # Open a queue and reconnect to the task. |
paulb@145 | 794 | |
paulb@145 | 795 | print "Opening a queue." |
paulb@145 | 796 | <strong>queue = pprocess.PersistentQueue()</strong> |
paulb@145 | 797 | <strong>for i in range(0, N):</strong> |
paulb@145 | 798 | <strong>queue.connect("task-%d.socket" % i)</strong> |
paulb@145 | 799 | |
paulb@145 | 800 | # Initialise an array. |
paulb@145 | 801 | |
paulb@145 | 802 | <strong>results = [0] * N</strong> |
paulb@145 | 803 | |
paulb@145 | 804 | # Wait for the results. |
paulb@145 | 805 | |
paulb@145 | 806 | print "Waiting for persistent results" |
paulb@145 | 807 | <strong>for i, result in queue:</strong> |
paulb@145 | 808 | <strong>results[i] = result</strong> |
paulb@145 | 809 | |
paulb@145 | 810 | # Show the results. |
paulb@145 | 811 | |
paulb@145 | 812 | for i in range(0, N): |
paulb@145 | 813 | <strong>for result in results[i]:</strong> |
paulb@145 | 814 | print result, |
paulb@145 | 815 | print |
paulb@145 | 816 | |
paulb@145 | 817 | print "Time taken:", time.time() - t |
paulb@145 | 818 | </pre> |
paulb@145 | 819 | |
paulb@145 | 820 | <p>(This code in context with <code>import</code> statements and functions is |
paulb@145 | 821 | found in the <code>examples/simple_persistent_queue.py</code> file.)</p> |
paulb@145 | 822 | |
paulb@145 | 823 | <p>In the first section, the process of making a parallel-aware callable is as |
paulb@145 | 824 | expected, but instead of then invoking a single background version of such a |
paulb@145 | 825 | callable, as in the previous example, we create a version for each value of |
paulb@145 | 826 | <code>i</code> (using <code>BackgroundCallable</code>) and then invoke each one. |
paulb@145 | 827 | The result of this is a total of <code>N</code> background processes, each |
paulb@145 | 828 | running an invocation of the <code>task</code> function with a distinct value of |
paulb@145 | 829 | <code>i</code> (which in turn perform computations), and each employing a |
paulb@145 | 830 | UNIX-domain socket for communication with a name of the form |
paulb@145 | 831 | <code>task-<em>i</em>.socket</code>.</p> |
paulb@145 | 832 | |
paulb@145 | 833 | <p>In the second section, since we now have more than one background process, we |
paulb@145 | 834 | must find a way to monitor them after reconnecting to them; to achieve this, a |
paulb@145 | 835 | <code>PersistentQueue</code> is created, which acts like a regular |
paulb@145 | 836 | <code>Queue</code> object but is instead focused on handling persistent |
paulb@145 | 837 | communications. Upon connecting the queue to each of the previously created |
paulb@145 | 838 | UNIX-domain sockets, the queue acts like a regular <code>Queue</code> and |
paulb@145 | 839 | exposes received results through an iterator. Here, the principal difference |
paulb@145 | 840 | from previous examples is the structure of results: instead of collecting each |
paulb@145 | 841 | individual value in a flat <code>i</code> by <code>j</code> array, a list is |
paulb@145 | 842 | returned for each value of <code>i</code> and is stored directly in another |
paulb@145 | 843 | list.</p> |
paulb@145 | 844 | |
paulb@145 | 845 | <h3>Applications of Background Computations</h3> |
paulb@145 | 846 | |
paulb@145 | 847 | <p>Background computations are useful because they provide flexibility in the |
paulb@145 | 848 | way the results can be collected. One area in which they can be useful is Web |
paulb@145 | 849 | programming, where a process handling an incoming HTTP request may need to |
paulb@145 | 850 | initiate a computation but then immediately send output to the Web client - such |
paulb@145 | 851 | as a page indicating that the computation is "in progress" - without having to |
paulb@145 | 852 | wait for the computation or to allocate resources to monitor it. Moreover, in |
paulb@145 | 853 | some Web architectures, notably those employing the Common Gateway Interface |
paulb@145 | 854 | (CGI), it is necessary for a process handling an incoming request to terminate |
paulb@145 | 855 | before its output will be sent to clients. By using a |
paulb@145 | 856 | <code>BackgroundCallable</code>, a Web server process can initiate a |
paulb@145 | 857 | communication, and then subsequent server processes can be used to reconnect to |
paulb@145 | 858 | the background computation and to wait efficiently for results.</p> |
paulb@145 | 859 | |
paulb@145 | 860 | <h2 id="Summary">Summary</h2> |
paulb@124 | 861 | |
paulb@124 | 862 | <p>The following table indicates the features used in converting one |
paulb@124 | 863 | sequential example program to another parallel program:</p> |
paulb@124 | 864 | |
paulb@124 | 865 | <table border="1" cellspacing="0" cellpadding="5"> |
paulb@124 | 866 | <thead> |
paulb@124 | 867 | <tr> |
paulb@124 | 868 | <th>Sequential Example</th> |
paulb@124 | 869 | <th>Parallel Example</th> |
paulb@124 | 870 | <th>Features Used</th> |
paulb@124 | 871 | </tr> |
paulb@124 | 872 | </thead> |
paulb@124 | 873 | <tbody> |
paulb@124 | 874 | <tr> |
paulb@124 | 875 | <td>simple_map</td> |
paulb@124 | 876 | <td>simple_pmap</td> |
paulb@124 | 877 | <td>pmap</td> |
paulb@124 | 878 | </tr> |
paulb@124 | 879 | <tr> |
paulb@124 | 880 | <td>simple1</td> |
paulb@124 | 881 | <td>simple_managed_map</td> |
paulb@124 | 882 | <td>MakeParallel, Map, manage</td> |
paulb@124 | 883 | </tr> |
paulb@124 | 884 | <tr> |
paulb@145 | 885 | <td rowspan="5">simple2</td> |
paulb@124 | 886 | <td>simple_managed_queue</td> |
paulb@124 | 887 | <td>MakeParallel, Queue, manage</td> |
paulb@124 | 888 | </tr> |
paulb@124 | 889 | <tr> |
paulb@124 | 890 | <td>simple_managed</td> |
paulb@124 | 891 | <td>MakeParallel, Exchange (subclass), manage, finish</td> |
paulb@124 | 892 | </tr> |
paulb@124 | 893 | <tr> |
paulb@124 | 894 | <td>simple_start</td> |
paulb@124 | 895 | <td>Channel, Exchange (subclass), start, finish</td> |
paulb@124 | 896 | </tr> |
paulb@124 | 897 | <tr> |
paulb@145 | 898 | <td>simple_background_queue</td> |
paulb@145 | 899 | <td>MakeParallel, BackgroundCallable, BackgroundQueue</td> |
paulb@145 | 900 | </tr> |
paulb@145 | 901 | <tr> |
paulb@145 | 902 | <td>simple_persistent_queue</td> |
paulb@145 | 903 | <td>MakeParallel, BackgroundCallable, PersistentQueue</td> |
paulb@145 | 904 | </tr> |
paulb@145 | 905 | <tr> |
paulb@124 | 906 | <td>simple</td> |
paulb@124 | 907 | <td>simple_create_map</td> |
paulb@124 | 908 | <td>Channel, Map, create, exit</td> |
paulb@124 | 909 | </tr> |
paulb@124 | 910 | </tbody> |
paulb@124 | 911 | </table> |
paulb@124 | 912 | |
paulb@124 | 913 | </body> |
paulb@124 | 914 | </html> |