I have a simple Python multiprocessing script that sets up a pool of workers which attempt to append work output to a Manager list. The script has three call levels: main calls f1, which spawns several worker processes that call another function, g1. When one attempts to debug the script (incidentally on Windows 7, 64-bit, VS 2010 with PyTools), it runs into a nested process-creation loop, spawning an endless number of processes. Can anyone determine why? I'm sure I am missing something very simple. Here's the problematic code:
    import multiprocessing
    import logging

    manager = multiprocessing.Manager()
    results = manager.list()

    def g1(x):
        y = x * x
        print "processing: y = %s" % y
        results.append(y)

    def f1():
        logger = multiprocessing.log_to_stderr()
        logger.setLevel(multiprocessing.SUBDEBUG)
        pool = multiprocessing.Pool(processes=4)
        for i in range(0, 15):
            pool.apply_async(g1, [i])
        pool.close()
        pool.join()

    def main():
        f1()

    if __name__ == "__main__":
        main()
PS: I tried adding multiprocessing.freeze_support() to main, to no avail.
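For comparison, here is a minimal sketch of the same script rearranged so that the Manager is created inside the __main__ guard rather than at module level (written with Python 3 print syntax). On Windows, child processes re-import the module, so any module-level Manager() call is re-executed in every child, each time spawning another manager process; keeping it inside the guard, and passing the list proxy to the workers as an argument, is one way to avoid that. This is only an illustrative rearrangement, not a verified fix for the exact setup above:

```python
import multiprocessing

def g1(x, results):
    # results is a Manager list proxy; proxies are picklable,
    # so the pool can ship them to worker processes as arguments.
    y = x * x
    print("processing: y = %s" % y)
    results.append(y)

def f1(results):
    pool = multiprocessing.Pool(processes=4)
    for i in range(15):
        pool.apply_async(g1, (i, results))
    pool.close()
    pool.join()

def main():
    # The Manager is created only in the parent process: children
    # re-importing this module on Windows never reach this code,
    # so no nested managers (and no process explosion) are spawned.
    manager = multiprocessing.Manager()
    results = manager.list()
    f1(results)
    print(sorted(results))

if __name__ == "__main__":
    main()
```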