How to Cope with Deadlocks
This section builds on the conceptual information about deadlocks covered earlier. It explains how to organize database operations to minimize deadlocks and the subsequent error handling required in applications.
Deadlocks are a classic problem in transactional databases, but they are not dangerous unless they are so frequent that you cannot run certain transactions at all. Normally, you must write your applications so that they are always prepared to re-issue a transaction if it gets rolled back because of a deadlock.
InnoDB uses automatic row-level locking. You can get deadlocks even in the case of transactions that just insert or delete a single row. That is because these operations are not really “atomic”; they automatically set locks on the (possibly several) index records of the row inserted or deleted.
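To make this concrete, here is a minimal two-session sketch that produces a deadlock from plain single-row DELETE statements; the table t and its two rows are hypothetical:

-- Assumes: CREATE TABLE t (id INT PRIMARY KEY) ENGINE=InnoDB;
--          with rows id = 1 and id = 2 already inserted.

-- session 1
START TRANSACTION;
DELETE FROM t WHERE id = 1;   -- locks the index record for row 1

-- session 2
START TRANSACTION;
DELETE FROM t WHERE id = 2;   -- locks the index record for row 2

-- session 1
DELETE FROM t WHERE id = 2;   -- blocks, waiting for session 2

-- session 2
DELETE FROM t WHERE id = 1;   -- cycle complete: InnoDB detects the deadlock
                              -- and rolls one transaction back with ERROR 1213 (SQLSTATE 40001)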
You can cope with deadlocks and reduce the likelihood of their occurrence with the following techniques:
- Use SHOW ENGINE INNODB STATUS to determine the cause of the latest deadlock (see the first sketch after this list). That can help you to tune your application to avoid deadlocks.
- Always be prepared to re-issue a transaction if it fails due to deadlock. Deadlocks are not dangerous; just try again. (A retry sketch follows this list.)
- Commit your transactions immediately after making a set of related changes. Small transactions are less prone to collision. In particular, do not leave an interactive session open for a long time with an uncommitted transaction.
- If you are using locking reads (SELECT ... FOR UPDATE or SELECT ... LOCK IN SHARE MODE), try using a lower isolation level such as READ COMMITTED (an isolation-level sketch follows this list).
- When modifying multiple tables within a transaction, or different sets of rows in the same table, do those operations in a consistent order each time, as sketched after this list. Then transactions form well-defined queues and do not deadlock. For example, organize database operations into functions within your application, or call stored routines, rather than coding multiple similar sequences of INSERT, UPDATE, and DELETE statements in different places.
- Add well-chosen indexes to your tables. Then your queries need to scan fewer index records and consequently set fewer locks. Use EXPLAIN to determine which indexes the MySQL server regards as the most appropriate for your queries (see the EXPLAIN sketch after this list).
- Use less locking. If you can afford to permit a SELECT to return data from an old snapshot, do not add the FOR UPDATE or LOCK IN SHARE MODE clause to it. Using the READ COMMITTED isolation level is good here, because each consistent read within the same transaction reads from its own fresh snapshot; the isolation-level sketch after this list applies here as well. You should also set the value of innodb_support_xa to 0, which reduces the number of disk flushes due to synchronizing on-disk data and the binary log.
- If nothing else helps, serialize your transactions with table-level locks. The correct way to use LOCK TABLES with transactional tables, such as InnoDB tables, is to begin a transaction with SET autocommit = 0 (not START TRANSACTION) followed by LOCK TABLES, and to not call UNLOCK TABLES until you commit the transaction explicitly. For example, if you need to write to table t1 and read from table t2, you can do this:

SET autocommit=0;
LOCK TABLES t1 WRITE, t2 READ, ...;
... do something with tables t1 and t2 here ...
COMMIT;
UNLOCK TABLES;
Table-level locks prevent concurrent updates to the table, avoiding deadlocks at the expense of less responsiveness for a busy system.
- Another way to serialize transactions is to create an auxiliary “semaphore” table that contains just a single row. Have each transaction update that row before accessing other tables (see the final sketch after this list). In that way, all transactions happen in a serial fashion. Note that the InnoDB instant deadlock detection algorithm also works in this case, because the serializing lock is a row-level lock. With MySQL table-level locks, the timeout method must be used to resolve deadlocks.
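Sketch for the diagnosis tip above: SHOW ENGINE INNODB STATUS prints a LATEST DETECTED DEADLOCK section describing the two transactions involved, the statements they were executing, and the locks they held and waited for.

SHOW ENGINE INNODB STATUS\G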
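Sketch for the retry advice: a deadlock surfaces as error 1213 with SQLSTATE '40001', which a stored procedure can trap and retry. The account table, the transfer logic, and the three-attempt limit below are illustrative assumptions, not part of the original text.

DELIMITER //
CREATE PROCEDURE transfer_with_retry()
BEGIN
  DECLARE attempts INT DEFAULT 0;
  retry: LOOP
    SET attempts = attempts + 1;
    BEGIN
      -- A deadlock raises SQLSTATE '40001'; the EXIT handler leaves this
      -- inner block, so control falls through to END LOOP and tries again.
      DECLARE EXIT HANDLER FOR SQLSTATE '40001'
      BEGIN
        ROLLBACK;
        IF attempts >= 3 THEN
          RESIGNAL;              -- give up after three attempts
        END IF;
      END;

      START TRANSACTION;
      UPDATE account SET balance = balance - 10 WHERE id = 1;
      UPDATE account SET balance = balance + 10 WHERE id = 2;
      COMMIT;
      LEAVE retry;               -- success
    END;
  END LOOP retry;
END//
DELIMITER ;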
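Isolation-level sketch (referenced by two items above): run the session at READ COMMITTED and use plain nonlocking reads where slightly stale data is acceptable. The account table is again hypothetical.

SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
START TRANSACTION;
SELECT balance FROM account WHERE id = 1;  -- nonlocking consistent read from a fresh snapshot
COMMIT;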
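Ordering sketch: if every transaction locks rows in the same order (here, ascending id), lock waits form a queue rather than a cycle. Row ids and amounts are illustrative.

START TRANSACTION;
UPDATE account SET balance = balance - 5 WHERE id = 3;  -- always the lower id first
UPDATE account SET balance = balance + 5 WHERE id = 7;  -- then the higher id
COMMIT;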
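EXPLAIN sketch: the key and rows columns of the output show which index the optimizer chose and roughly how many index records (and therefore locks) the query touches.

EXPLAIN SELECT balance FROM account WHERE id = 3;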
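Semaphore sketch: a minimal single-row table (all names hypothetical) whose row lock serializes transactions while still leaving deadlock resolution to InnoDB's row-level detection.

CREATE TABLE sem (id INT PRIMARY KEY, cnt INT) ENGINE=InnoDB;
INSERT INTO sem VALUES (1, 0);

START TRANSACTION;
UPDATE sem SET cnt = cnt + 1 WHERE id = 1;  -- exclusive row lock taken first
-- ... work with the other tables here ...
COMMIT;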