
relstorage

"Unhandled exception in thread started by Error in sys.excepthook" during relstorage zodb pack

Question: We have a reasonably large Plone instance running on its own mount point. The ZMI interface lists the size of the database as 7101.4M. We run a weekly pack of the database using the RelStorage zodbpack.py script, removing objects older than 7 days. For the past two weeks the cron job that runs the pack has output the following: Sun Jun 26 07:00:38 BST 2011 packing cms mount /home/zope/home/parts/zope2/lib/python/zope/configuration/xmlconfig.py:323: DeprecationWarning: zope.app.annotation has moved to zope.annotation. Import of zope.app.annotation will become unsupported in Zope 3.5 __import__(arguments[0]) /home/zope/home/eggs/p4a.common-1.0.7-py2.4.egg/p4a/common/configure.zcml:19: DeprecationWarning: The five:localsite directive is deprecated and will be removed in Zope 2.12. See

2022-02-08 07:27:19    Category: Tech Share    python   plone   zope   zodb   relstorage

"Unhandled exception in thread started by Error in sys.excepthook" during relstorage zodb pack

We have a reasonably large Plone instance running on its own mount point. The ZMI interface lists the size of the database as 7101.4M. We run a weekly pack of the database using the Relstorage zodbpack.py script, removing objects older than 7 days. The last two weeks the cron job that runs the pack has output the following: Sun Jun 26 07:00:38 BST 2011 packing cms mount /home/zope/home/parts/zope2/lib/python/zope/configuration/xmlconfig.py:323: DeprecationWarning: zope.app.annotation has moved to zope.annotation. Import of zope.app.annotation will become unsupported in Zope 3.5 __import__

2022-01-11 17:04:21    Category: Q&A    python   plone   zope   zodb   relstorage
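For context, a weekly pack that removes objects older than 7 days corresponds to calling pack() on the underlying storage with a timestamp seven days in the past. Below is a minimal Python sketch of that call, assuming an already-opened ZODB storage object named storage; that name and the surrounding setup are assumptions, not taken from the post.

    import time

    from ZODB.serialize import referencesf  # standard reference-finding helper passed to pack()

    # Pack away object revisions older than 7 days, mirroring a weekly
    # "remove objects older than 7 days" run. `storage` is assumed to be an
    # already-opened FileStorage or RelStorage instance.
    pack_time = time.time() - 7 * 24 * 60 * 60
    storage.pack(pack_time, referencesf)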

How to pack blobstorage with Plone and RelStorage

Question: I have run zodbpack on my local build and reduced my data.fs and blobstorage significantly (like to <50%). When I run zodbpack on the test server, the postgres 'zodb' database packs and comes down in size, but the blobstorage does not change in size. I've tried running it with every combination of blob-dir /mnt/drbd/blobstorage I can think of, but it doesn't budge. Can it be packed? RelStorage = 1.5.1. I'm considering upgrading RelStorage to 1.6.0, not sure if that will help. Update: Now running relstorage locally, I also see the same behaviour... the database packs, the blobs don't clean up. $ bin/zodbpack -d 7 zodbpack-conf.xml zodbpack-conf.xml <relstorage> pack-gc false blob-dir /Users/aaron/Development/restores/blobstorage <postgresql> dsn dbname='zodb' user='postgres' host='localhost' password='password' </postgresql> </relstorage> What am I missing?

2021-10-08 03:45:14    Category: Tech Share    postgresql   plone   blobstorage   relstorage
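The command and configuration flattened into the excerpt above read more easily laid out as the zodbpack config file they came from; the values below are copied from the excerpt, and only the line breaks and indentation are reconstructed. Note that pack-gc false disables garbage collection during the pack, so only old object revisions are removed, which may be relevant when unreferenced blobs are expected to disappear.

    $ bin/zodbpack -d 7 zodbpack-conf.xml

    zodbpack-conf.xml:

    <relstorage>
      pack-gc false
      blob-dir /Users/aaron/Development/restores/blobstorage
      <postgresql>
        dsn dbname='zodb' user='postgres' host='localhost' password='password'
      </postgresql>
    </relstorage>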

How to improve performance of a script operating on large amount of data?

Question: My machine learning script produces a lot of data (millions of BTrees contained in one root BTree) and stores it in ZODB's FileStorage, mainly because all of it wouldn't fit in RAM. The script also frequently modifies previously added data. When I increased the complexity of the problem, and thus more data needs to be stored, I noticed performance issues: the script now computes data on average two to even ten times slower (the only thing that changed is the amount of data to be stored and later retrieved to be changed). I tried setting cache_size to various values between 1000 and 50000. To be honest, the differences in speed were negligible. I thought about switching to RelStorage, but unfortunately the documentation only mentions how to configure frameworks such as Zope or Plone. I only use ZODB. I wonder whether RelStorage would be faster in my case. This is how I currently set up the ZODB connection: import ZODB connection = ZODB.connection('zodb.fs', ...) dbroot = connection.root() I am well aware that ZODB is currently the bottleneck of my script. I am looking for advice on how to solve this problem. I chose ZODB because I thought a NoSQL database would fit my case better, and I liked the idea of an interface similar to Python's dict. Code and data structures: root data structure: if not hasattr(dbroot, 'actions_values')

2021-09-30 02:45:13    Category: Tech Share    python   performance   machine-learning   zodb   relstorage
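On the RelStorage part of this question (the docs only describe Zope/Plone setups), RelStorage can also be opened from a plain script by handing a ZConfig snippet to ZODB.config. Below is a minimal sketch, assuming a PostgreSQL backend; the dsn and cache-size values are placeholders, not taken from the post.

    import ZODB.config

    # Standalone RelStorage setup via ZConfig; dsn and cache-size are placeholder
    # values and would need to match the actual database.
    config = """
    %import relstorage
    <zodb>
      cache-size 50000
      <relstorage>
        <postgresql>
          dsn dbname='zodb' user='postgres' host='localhost' password='password'
        </postgresql>
      </relstorage>
    </zodb>
    """

    db = ZODB.config.databaseFromString(config)
    connection = db.open()
    dbroot = connection.root()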

How to pack blobstorage with Plone and RelStorage

I have run zodbpack on my local build and reduced my data.fs and blobstorage significantly (like to <50%). When I run zodbpack on the test server, the postgres 'zodb' database packs and comes down in size, but the blobstorage doesn't budge in size. I've tried running it with every combination of blob-dir /mnt/drbd/blobstorage I can think of, but it doesn't budge. Can it be packed? RelStorage = 1.5.1. I'm considering upgrading RelStorage to 1.6.0, not sure if that will help. Update: Now running relstorage locally, I also see the same behaviour... database packs, blobs don't clean up. $ bin/zodbpack -d 7

2021-09-01 09:50:22    Category: Q&A    postgresql   plone   blobstorage   relstorage

How to improve performance of a script operating on large amount of data?

My machine learning script produces a lot of data (millions of BTrees contained in one root BTree) and stores it in ZODB's FileStorage, mainly because all of it wouldn't fit in RAM. The script also frequently modifies previously added data. When I increased the complexity of the problem, and thus more data needs to be stored, I noticed performance issues: the script now computes data on average two to even ten times slower (the only thing that changed is the amount of data to be stored and later retrieved to be changed). I tried setting cache_size to various values between 1000 and 50000. To be

2021-06-28 02:46:09    Category: Q&A    python   performance   machine-learning   zodb   relstorage