Heaven rewards diligence; there is no end to learning.

pool

python multiprocessing.Pool kill *specific* long running or hung process

Question: I need to execute a pool of many parallel database connections and queries. I would like to use a multiprocessing.Pool or a concurrent.futures ProcessPoolExecutor (Python 2.7.5). In some cases, query requests take too long or will never finish (hung/zombie process). I would like to kill the specific process in the multiprocessing.Pool or concurrent.futures ProcessPoolExecutor that has timed out. Here is an example of how to kill/re-spawn the entire process pool, but ideally I would minimize that CPU thrashing, since I only want to kill a specific long-running process that has not
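A minimal sketch of one common workaround (not from the question; run_query and the placeholder queries are hypothetical): Pool does not expose a per-task kill, so the workers are managed as plain Process objects and only the one that exceeds its deadline is terminated.

import multiprocessing

def run_query(query):
    pass  # hypothetical stand-in for the long-running database work

if __name__ == "__main__":
    workers = []
    for query in ["q1", "q2", "q3"]:              # placeholder queries
        p = multiprocessing.Process(target=run_query, args=(query,))
        p.start()
        workers.append(p)
    for p in workers:
        p.join(timeout=30)                        # wait up to 30 seconds per worker
        if p.is_alive():                          # still running: treat as hung
            p.terminate()                         # kill only this specific process
            p.join()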

2021-06-11 05:19:04    Category: Tech Sharing    python   process   timeout   multiprocessing   pool

Profiling a python multiprocessing pool

Question: I'm trying to run cProfile.runctx() on each process in a multiprocessing pool, to get an idea of what the multiprocessing bottlenecks are in my source. Here is a simplified example of what I'm trying to do:

from multiprocessing import Pool
import cProfile

def square(i):
    return i*i

def square_wrapper(i):
    cProfile.runctx("result = square(i)", globals(), locals(), "file_"+str(i))
    # NameError happens here - 'result' is not defined.
    return result

if __name__ == "__main__":
    pool = Pool(8)
    results = pool.map_async(square_wrapper, range(15)).get(99999)
    print results

Unfortunately, trying to
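A minimal sketch of one way around the NameError (an assumption on my part, not the accepted answer): drive cProfile.Profile explicitly inside the wrapper, so the wrapped call's return value stays in scope while each worker still writes its own stats file.

import cProfile
from multiprocessing import Pool

def square(i):
    return i*i

def square_wrapper(i):
    profiler = cProfile.Profile()
    profiler.enable()
    result = square(i)                       # call the real work directly
    profiler.disable()
    profiler.dump_stats("file_" + str(i))    # one stats file per task, as before
    return result

if __name__ == "__main__":
    pool = Pool(8)
    print(pool.map(square_wrapper, range(15)))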

2021-06-09 12:25:35    Category: Tech Sharing    python   profiling   multiprocessing   pool

To Pool or not to Pool java crypto service providers

Question:

Solution:
MessageDigest => create new instances as often as needed
KeyFactory => use a single shared instance
SecureRandom => use a StackObjectPool
Cipher => use a StackObjectPool

Question: I face a regular dilemma while coding security frameworks: "to pool or not to pool". Basically this question is divided into two "groups":
Group 1: SecureRandom, because the call to nextBytes(...) is synchronized and it could become a bottleneck for a WebApp / a multi-threaded app.
Group 2: Crypto service providers like MessageDigest, Signature, Cipher, KeyFactory, ... (because of the cost of the
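The pooled options in the summary (StackObjectPool for SecureRandom and Cipher) follow the classic object-pool pattern. As a rough, language-agnostic illustration only (a Python sketch standing in for a Java StackObjectPool, with a hypothetical factory), a bounded pool hands out pre-built, expensive or contended objects and takes them back after use:

import queue

class ObjectPool(object):
    """Hand out pre-built objects and take them back instead of re-creating them."""
    def __init__(self, factory, size):
        self._items = queue.Queue()
        for _ in range(size):
            self._items.put(factory())

    def acquire(self):
        return self._items.get()        # blocks while every object is checked out

    def release(self, obj):
        self._items.put(obj)

# Hypothetical usage: one pooled RNG per concurrent request instead of a single
# shared, synchronized instance.
# rng_pool = ObjectPool(factory=SomeRandomSource, size=8)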

2021-06-09 10:29:29    Category: Tech Sharing    java   pool

How to get the amount of “work” left to be done by a Python multiprocessing Pool?

Question: So far, whenever I needed to use multiprocessing I have done so by manually creating a "process pool" and sharing a working Queue with all subprocesses. For example:

from multiprocessing import Process, Queue

class MyClass:
    def __init__(self, num_processes):
        self._log = logging.getLogger()
        self.process_list = []
        self.work_queue = Queue()
        for i in range(num_processes):
            p_name = 'CPU_%02d' % (i+1)
            self._log.info('Initializing process %s', p_name)
            p = Process(target = do_stuff,
                        args = (self.work_queue, 'arg1'),
                        name = p_name)

This way I could add stuff to the queue, which would be
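A minimal sketch of one way to report remaining work with this queue-based setup (my assumption, not the question's code): Queue.qsize() gives an approximate count of items still waiting, though it is not implemented on every platform (notably macOS).

from multiprocessing import Process, Queue
import time

def worker(q):
    while True:
        item = q.get()
        if item is None:                  # sentinel: no more work
            break
        time.sleep(0.1)                   # stand-in for real work

if __name__ == "__main__":
    q = Queue()
    for i in range(100):
        q.put(i)
    procs = [Process(target=worker, args=(q,)) for _ in range(4)]
    for p in procs:
        p.start()
    while not q.empty():
        print("items still queued: about %d" % q.qsize())
        time.sleep(1)
    for _ in procs:
        q.put(None)                       # one sentinel per worker
    for p in procs:
        p.join()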

2021-06-05 07:03:37    Category: Tech Sharing    python   process   parallel-processing   multiprocessing   pool

Python NotImplementedError: pool objects cannot be passed between processes

Question: I'm trying to deliver work when a page is appended to the pages list, but my code output returns a NotImplementedError. Here is the code with what I'm trying to do:

Code:

from multiprocessing import Pool, current_process
import time
import random
import copy_reg
import types
import threading

class PageControler(object):
    def __init__(self):
        self.nProcess = 3
        self.pages = [1,2,3,4,5,6,7,8,9,10]
        self.manageWork()

    def manageWork(self):
        self.pool = Pool(processes=self.nProcess)
        time.sleep(2)
        work_queue = threading.Thread(target=self.modifyQueue)
        work_queue.start()
        #pool.close()
        #pool.join(
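A minimal sketch of what the error usually means (my reading, not the accepted answer): a Pool object cannot be pickled, so it must not be part of anything that gets sent to the worker processes. Keeping the Pool in the parent and passing only plain data avoids the NotImplementedError; process_page here is a hypothetical worker.

from multiprocessing import Pool

def process_page(page):                       # hypothetical worker function
    return page * page

if __name__ == "__main__":
    pages = [1, 2, 3, 4, 5]
    pool = Pool(processes=3)
    try:
        results = pool.map(process_page, pages)   # only the page numbers are pickled
        print(results)
    finally:
        pool.close()
        pool.join()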

2021-05-31 15:56:16    Category: Tech Sharing    python   multiprocessing   pool

How to prevent destructors from being called on objects managed by boost::fast_pool_allocator?

Question: I would like to take advantage of the following advertised feature of boost::fast_pool_allocator (see the Boost documentation for Boost Pool):

    For example, you could have a situation where you want to allocate a bunch of small objects at one point, and then reach a point in your program where none of them are needed any more. Using pool interfaces, you can choose to run their destructors or just drop them off into oblivion...

(See here for this quote.) The key phrase is "drop them off into oblivion". I do not want the destructors called on these objects. (The reason is that I have

2021-05-25 03:17:24    Category: Tech Sharing    c++   c++11   boost   pool

Node.js and MongoDB, reusing the DB object

Question: I'm new to both Node.js and MongoDB, but I've managed to put some parts together from SO and the documentation for Mongo. The Mongo documentation gives this example:

// Retrieve
var MongoClient = require('mongodb').MongoClient;

// Connect to the db
MongoClient.connect("mongodb://localhost:27017/exampleDb", function(err, db) {
  if(!err) {
    console.log("We are connected");
  }
});

Which looks fine if I only need to use the DB in one function at one place. Searching and reading on SO has shown me that I should not open a new connection each time, but rather use a pool and reuse the database

2021-05-19 10:31:12    Category: Tech Sharing    node.js   mongodb   object   reusability   pool

python multiprocessing pool terminate

Question: I'm working on a renderfarm, and I need my clients to be able to launch multiple instances of a renderer without blocking, so the client can receive new commands. I've got that working correctly; however, I'm having trouble terminating the created processes. At the global level, I define my pool (so that I can access it from any function):

p = Pool(2)

I then call my renderer with apply_async:

for i in range(totalInstances):
    p.apply_async(render, (allRenderArgs[i],args[2]), callback=renderFinished)
p.close()

That function finishes, launches the processes in the background, and waits for
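A minimal sketch of the terminate path (assumed setup; render here is a placeholder, not the asker's renderer): Pool.terminate() stops all worker processes immediately without waiting for outstanding work, which is the usual way to cancel everything a pool has been handed.

from multiprocessing import Pool
import time

def render(job):                     # placeholder for the real renderer
    time.sleep(60)
    return job

if __name__ == "__main__":
    p = Pool(2)
    for i in range(10):
        p.apply_async(render, (i,))
    p.close()
    time.sleep(2)                    # e.g. a "cancel" command arrives here
    p.terminate()                    # kill the workers and drop pending tasks
    p.join()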

2021-05-18 13:26:17    Category: Tech Sharing    python   multiprocessing   pool

java.lang.IllegalMonitorStateException: (m=null) Failed to get monitor for

Question: Why may this happen? The thing is that the monitor object is not null for sure, but we still get this exception quite often:

java.lang.IllegalMonitorStateException: (m=null) Failed to get monitor for (tIdx=60)
    at java.lang.Object.wait(Object.java:474)
    at ...

The code that provokes this is a simple pool solution:

public Object takeObject() {
    Object obj = internalTakeObject();
    while (obj == null) {
        try {
            available.wait();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        obj = internalTakeObject();
    }
    return obj;
}

private Object internalTakeObject() {
    Object obj = null
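For context (a general fact about Object.wait(), not necessarily the accepted diagnosis of this exact trace): wait() throws IllegalMonitorStateException when the calling thread does not hold the object's monitor, i.e. when the call is not made inside synchronized (available) { ... }. Python's threading.Condition enforces the same rule, so a rough analogue of a corrected take/put pair looks like this (names are hypothetical):

import threading

available = threading.Condition()
pool_items = []

def take_object():
    with available:                     # the lock must be held before wait()
        while not pool_items:
            available.wait()            # releases the lock while waiting
        return pool_items.pop()

def put_object(obj):
    with available:
        pool_items.append(obj)
        available.notify()              # wake one waiting taker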

2021-05-18 07:34:29    Category: Tech Sharing    java   multithreading   locking   pool

Gevent pool with nested web requests

Question: I'm trying to organize a pool with a maximum of 10 concurrent downloads. The function should download the base URL, then parse all URLs on that page and download each of them, but the OVERALL number of concurrent downloads should not exceed 10.

from lxml import etree
import gevent
from gevent import monkey, pool
import requests

monkey.patch_all()

urls = [
    'http://www.google.com',
    'http://www.yandex.ru',
    'http://www.python.org',
    'http://stackoverflow.com',
    # ... another 100 urls
]

LINKS_ON_PAGE=[]
POOL = pool.Pool(10)

def parse_urls(page):
    html = etree.HTML(page)
    if html:
        links = [link for link in html
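A minimal sketch of one way to keep the overall cap (my assumption, not the asker's final code; extract_links is a placeholder): reuse a single gevent Pool of size 10 for both phases, so the base pages and then every link found on them are fetched at most 10 at a time.

from gevent import monkey, pool
monkey.patch_all()                       # patch sockets before importing requests
import requests

POOL = pool.Pool(10)                     # global cap on concurrent downloads

def fetch(url):
    return requests.get(url, timeout=10).text

def extract_links(page):
    return []                            # placeholder: parse hrefs out of the page

if __name__ == "__main__":
    base_urls = ['http://www.python.org', 'http://stackoverflow.com']
    pages = POOL.map(fetch, base_urls)               # phase 1: at most 10 at once
    links = [l for page in pages for l in extract_links(page)]
    link_pages = POOL.map(fetch, links)              # phase 2: same pool, same cap
    print("downloaded %d linked pages" % len(link_pages))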

2021-05-18 06:32:51    Category: Tech Sharing    python   web   pool   gevent