
Why does PostgreSQL abort this serializable schedule?

Theory states that a group of concurrent transactions is serializable if, and only if, their concurrent execution is equivalent to one of their possible serial executions.

Now the following concurrent execution of transactions T1 and T2 is serializable, because it is equivalent to the serial execution "T1 then T2":

T1: r1x   w1y  c1
T2:    w2x   c2

(i.e., T1 reads x, T2 writes x, T1 writes y, T2 commits, and finally, T1 commits)

However, when this schedule is tried in PostgreSQL 10.4, like this:

T1: begin;
T1: set transaction isolation level serializable;
T2: begin;
T2: set transaction isolation level serializable;
T2: update variables set value = value + 1 where name = 'x';
T1: update variables set value = value + 1 where name = 'y';
T2: commit;
T1: commit;

the database aborts T1 when this transaction tries to commit. Why?
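
The variables table is not defined above. A minimal setup that matches the statements, with an assumed schema and seed data, would be:

create table variables (
    name  text primary key,
    value integer not null
);

insert into variables (name, value) values ('x', 0), ('y', 0);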

Comments

PostgreSQL uses heuristics to decide whether to abort a serializable transaction, because an exact check would be too expensive. So it can happen that transactions are aborted even though an equivalent serial execution exists (false positives).

But I suspect a different reason in this case. If you look at the execution plans, you will probably see sequential scans. A sequential scan reads all rows, so T2 has read y during its update, and likewise T1 has read x. That gives a read-write dependency in each direction: T1 read x, which T2 then wrote, and T2 read y, which T1 then wrote. Such a cycle of dependencies is exactly what the serializable isolation level has to prevent, so one of the transactions is aborted at commit.

The behavior of serializable transactions depends on the execution plan chosen!
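
You can check this with EXPLAIN, assuming the variables table sketched above. The plan names mentioned in the comments are typical output for a tiny table, not guaranteed:

explain update variables set value = value + 1 where name = 'y';
-- A "Seq Scan on variables" node means the statement read the whole table,
-- so the predicate lock (SIReadLock) covers every row, including x.

-- For experimentation only: discourage sequential scans so that the planner
-- uses the index on name. Then only the rows actually accessed are
-- predicate-locked, and the schedule above typically commits cleanly.
set enable_seqscan = off;
explain update variables set value = value + 1 where name = 'y';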
