Shared counter with Python’s multiprocessing

January 4th, 2012 at 5:52 am

One of the ways of exchanging data between processes with the multiprocessing module is directly shared memory, via multiprocessing.Value. As with any mechanism this general, it can sometimes be tricky to use. I’ve seen a variation of this question asked a couple of times on StackOverflow:

I have some processes that do work, and I want them to increment some shared counter because [... some irrelevant reason ...] – how can this be done?

The wrong way

And surprisingly enough, some answers given to this question are wrong, since they use multiprocessing.Value incorrectly, as follows:

import time
from multiprocessing import Process, Value

def func(val):
    for i in range(50):
        time.sleep(0.01)
        val.value += 1

if __name__ == '__main__':
    v = Value('i', 0)
    procs = [Process(target=func, args=(v,)) for i in range(10)]

    for p in procs: p.start()
    for p in procs: p.join()

    print(v.value)

This code is a demonstration of the problem, distilling only the usage of the shared counter. A "pool" of 10 processes is created to run the func function. All processes share a Value, and each increments it 50 times. You would expect this code to eventually print 500, but in all likelihood it won’t. Here’s some output taken from 10 runs of that code:

> for i in {1..10}; do python sync_nolock_wrong.py; done
435
464
484
448
491
481
490
471
497
494

Why does this happen?

I must admit that the documentation of multiprocessing.Value can be a bit confusing here, especially for beginners. It states that by default, a lock is created to synchronize access to the value, so one may be falsely led to believe that it would be OK to modify this value in any way imaginable from multiple processes. But it’s not.
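
Concretely, that default is the lock keyword argument in Value’s documented signature. A quick sketch of the two variants:

from multiprocessing import Value

# The default (lock=True) wraps the shared ctypes object so that each
# individual read or write of .value is protected by an internal lock.
v = Value('i', 0)

# With lock=False there is no wrapper and no locking at all -- this is
# equivalent to using multiprocessing.RawValue.
raw = Value('i', 0, lock=False)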

Explanation – the default locking done by Value

This section is advanced and isn’t strictly required for the overall flow of the post. If you just want to understand how to synchronize the counter correctly, feel free to skip it.

The locking done by multiprocessing.Value is very fine-grained. Value is a wrapper around a ctypes object, which has an underlying value attribute representing the actual object in memory. All Value does is ensure that only a single process or thread at a time may read or write this value attribute. This is important, since (for some types, on some architectures) writes and reads may not be atomic: actually filling up the object’s memory may take the CPU several instructions, and another process reading the same (shared) memory at the same time could see some intermediate, invalid state. The built-in lock of Value prevents this from happening.
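
To make this concrete, here is a simplified model of what the synchronized wrapper does (an illustration only, not the actual stdlib code):

class SynchronizedValue(object):
    """Simplified model of the wrapper returned by multiprocessing.Value."""
    def __init__(self, raw_ctypes_obj, lock):
        self._obj = raw_ctypes_obj
        self._lock = lock

    @property
    def value(self):
        with self._lock:        # lock held only for this single read
            return self._obj.value

    @value.setter
    def value(self, new_value):
        with self._lock:        # lock held only for this single write
            self._obj.value = new_value

Note that the lock lives around each single attribute access, and nothing more.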

However, when we do this:

val.value += 1

What Python actually performs is the following (bytecode disassembled with the dis module). I’ve annotated the locking done by Value in #<-- comments:

 0 LOAD_FAST                0 (val)
 3 DUP_TOP
                                     #<--- Value lock acquired
 4 LOAD_ATTR                0 (value)
                                     #<--- Value lock released
 7 LOAD_CONST               1 (1)
10 INPLACE_ADD
11 ROT_TWO
                                     #<--- Value lock acquired
12 STORE_ATTR               0 (value)
                                     #<--- Value lock released

So it’s obvious that while process #1 is now at instruction 7 (LOAD_CONST), nothing prevents process #2 from also loading the (old) value attribute and reaching instruction 7 as well. Both processes will then proceed to increment their private copy and write it back. The result: the actual value got incremented only once, not twice.
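
Incidentally, this disassembly is easy to reproduce yourself with the standard dis module (the exact opcodes and byte offsets vary between CPython versions):

import dis

def increment(val):
    val.value += 1

dis.dis(increment)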

The right way

Fortunately, this problem is very easy to fix. A separate Lock is needed to guarantee the atomicity of modifications to the Value:

import time
from multiprocessing import Process, Value, Lock

def func(val, lock):
    for i in range(50):
        time.sleep(0.01)
        with lock:
            val.value += 1

if __name__ == '__main__':
    v = Value('i', 0)
    lock = Lock()
    procs = [Process(target=func, args=(v, lock)) for i in range(10)]

    for p in procs: p.start()
    for p in procs: p.join()

    print(v.value)

Now we get the expected result:

> for i in {1..10}; do python sync_lock_right.py; done
500
500
500
500
500
500
500
500
500
500

A value and a lock may seem like too much baggage to carry around at all times. So, we can create a simple "synchronized shared counter" object to encapsulate this functionality:

import time
from multiprocessing import Process, Value, Lock

class Counter(object):
    def __init__(self, initval=0):
        self.val = Value('i', initval)
        self.lock = Lock()

    def increment(self):
        with self.lock:
            self.val.value += 1

    def value(self):
        with self.lock:
            return self.val.value

def func(counter):
    for i in range(50):
        time.sleep(0.01)
        counter.increment()

if __name__ == '__main__':
    counter = Counter(0)
    procs = [Process(target=func, args=(counter,)) for i in range(10)]

    for p in procs: p.start()
    for p in procs: p.join()

    print(counter.value())

Bonus: since we’ve now placed a coarser-grained lock on the modification of the value, we may throw away Value with its fine-grained lock altogether, and just use multiprocessing.RawValue, which simply wraps a shared object without any locking.
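
For completeness, here’s a sketch of the same Counter with RawValue swapped in. The behavior is identical, and our own lock now provides all the synchronization:

from multiprocessing import Process, RawValue, Lock

class Counter(object):
    def __init__(self, initval=0):
        # RawValue has no built-in per-access lock; the Lock below is
        # the only synchronization, and it suffices because every
        # access goes through increment() or value().
        self.val = RawValue('i', initval)
        self.lock = Lock()

    def increment(self):
        with self.lock:
            self.val.value += 1

    def value(self):
        with self.lock:
            return self.val.value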


9 Responses to “Shared counter with Python’s multiprocessing”

  1. A. Jesse Jiryu Davis Says:

    Hi Eli, great post. My only criticism is, in Counter.value(), do you need to acquire the lock before returning the value?

    I can’t think of any race condition that’s avoided by acquiring the lock there. After all, if there *is* a race condition between one process doing increment() and another doing value(), then whether the reader-process gets the value before or after the increment is undefined, *regardless* of whether it acquires the lock.

    So I’d remove the lock from value(), since it’s a slight performance penalty.

  2. eliben Says:

    A. Jesse,

    This is a valid point, but my intention was to get rid of Value completely, replacing it with RawValue (see the “Bonus” note at the end). In such a case the lock is required, for the same reason the original lock of Value exists.

    If performance is important, there’s really no point wrapping Value with such a counter – it causes a lot of useless locking & unlocking.

  3. Ken Swift Says:

    Man, posts like this are the raisins I’m looking for! I wish most Planet Python posts were as educative as yours.

    Thanks!

  4. fungusakafungus Says:

    As I understand the docs and the code, Value is just a synchronized (http://docs.python.org/library/multiprocessing.html#multiprocessing.sharedctypes.synchronized) RawValue, so instead of explicitly importing, instantiating and passing a Lock, one could synchronize on val.get_lock():

    def func(val):
        for i in range(50):
            time.sleep(0.01)
            with val.get_lock():
                val.value += 1

  5. fungusakafungus Says:

    Well, actually, thanks for the post! I’ve been meaning to look into this multiprocessing thing for a long time.

  6. eliben Says:

    fungusakafungus,

    I suppose this could work if the lock is recursive. However, get_lock is not a documented method – at least not in an officially meaningful way. It is mentioned off-hand in the documentation of multiprocessing.sharedctypes.synchronized, but without reading the source it’s not clear whether it can be used on a Value.

  7. anupam saini Says:

    Great post and very informative too.

    I use multiprocessing Queues and Locks and I believe posts like these are very helpful in understanding the concepts.

  8. Timey Says:

    Hi, I just stumbled over this site while I was looking for a way to implement a counter in my parallel code. And this works and looks good.

    Is it possible to do this not using multiprocessing.Process but using multiprocessing.Pool.map()?
    Because I need an ordered result like Pool.map() returns.
    Or, turning this around, can Process be used to return an ordered result? Because if you use it like this, it will throw out results as they come in.
    For example, if you have an array [0,1,2,3,4,5] and want to calculate the squares of its elements, then depending on the speed of each core, the resulting array from Process might be [1,0,4,9,25,16]. I mean, the order of calculation does not matter, but at least I need to know which input parameters created a specific result.
    Thx.

  9. eliben Says:

    Timey,

    I vaguely recall looking at Pool but finding no good way to pass custom parameters to the processes it creates. I may be wrong though.
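
    That said, Pool.map itself does return its results in the order of the input iterable, even when the workers finish out of order – a minimal sketch:

    from multiprocessing import Pool

    def square(x):
        return x * x

    if __name__ == '__main__':
        with Pool(4) as pool:
            # map() blocks until all results are ready and returns them
            # in input order, regardless of which worker finished first.
            print(pool.map(square, [0, 1, 2, 3, 4, 5]))  # [0, 1, 4, 9, 16, 25]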
