@xiaohaizi
2020-04-30T05:49:42.000000Z
In a search where at most one record in the index may match, we can use a LOCK_REC_NOT_GAP type record lock when locking a non-delete-marked matching record.
Note that in a unique secondary index there may be different delete-marked versions of a record where only the primary key values differ: thus in a secondary index we must use next-key locks when locking delete-marked records.
Determining whether this is a unique_search:
if (match_mode == ROW_SEL_EXACT
    && dict_index_is_unique(index)
    && dtuple_get_n_fields(search_tuple)
       == dict_index_get_n_unique(index)
    && (dict_index_is_clust(index)
        || !dtuple_contains_null(search_tuple))) {

    unique_search = TRUE;
}
if (trx->isolation_level <= TRX_ISO_READ_COMMITTED
    && prebuilt->select_lock_type != LOCK_NONE
    && trx->mysql_thd != NULL
    && thd_is_select(trx->mysql_thd)) {

    /* It is a plain locking SELECT and the isolation
    level is low: do not lock gaps */

    set_also_gap_locks = FALSE;
}
Under READ UNCOMMITTED and READ COMMITTED, a plain record lock (LOCK_REC_NOT_GAP) is placed on each record that is read.
Under REPEATABLE READ, a next-key lock is placed on each record that is read.
For an exact-match search that finds no matching record, a gap lock is placed on the record the cursor lands on, provided the isolation level is REPEATABLE READ or higher.
For a unique search whose matching record is not delete-marked, a plain record lock is enough.
For a '>= primary key value' search on the clustered index, a plain record lock is placed on the first record if it matches the value exactly.
When secondary index records are scanned, the corresponding clustered index records also get plain record locks.
When records are updated or deleted, the corresponding secondary index records also get plain record locks.
A simplified model of these rules is sketched right after this list.
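As a rough mental model of the rules above, here is a minimal standalone sketch. It is not InnoDB code: choose_scan_lock_type, the enum names, and the scenarios in main() are invented for illustration; they only mirror the inputs that row_search_mvcc() looks at when picking the lock for a scanned record.

#include <cstdio>

/* Hypothetical, simplified model of the lock choice; none of these names exist in InnoDB. */
enum IsolationLevel { READ_UNCOMMITTED, READ_COMMITTED, REPEATABLE_READ, SERIALIZABLE };
enum ScanLockType { REC_NOT_GAP, NEXT_KEY };

ScanLockType choose_scan_lock_type(IsolationLevel iso,
                                   bool unique_search,
                                   bool rec_delete_marked,
                                   bool clust_ge_exact_match) {
    /* RU/RC locking reads never lock gaps: plain record lock only. */
    if (iso <= READ_COMMITTED) {
        return REC_NOT_GAP;
    }
    /* Unique search that hit a non-delete-marked record: locking its existence is enough. */
    if (unique_search && !rec_delete_marked) {
        return REC_NOT_GAP;
    }
    /* '>= full primary key value' search that landed exactly on the value:
       the gap before the record cannot hide a phantom. */
    if (clust_ge_exact_match) {
        return REC_NOT_GAP;
    }
    /* Otherwise REPEATABLE READ / SERIALIZABLE take a next-key lock. */
    return NEXT_KEY;
}

int main() {
    std::printf("RC, non-unique scan        -> %d (REC_NOT_GAP)\n",
                choose_scan_lock_type(READ_COMMITTED, false, false, false));
    std::printf("RR, non-unique scan        -> %d (NEXT_KEY)\n",
                choose_scan_lock_type(REPEATABLE_READ, false, false, false));
    std::printf("RR, unique hit, not marked -> %d (REC_NOT_GAP)\n",
                choose_scan_lock_type(REPEATABLE_READ, true, false, false));
    return 0;
}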
if (!moves_up
    && !page_rec_is_supremum(rec)
    && set_also_gap_locks
    && !(srv_locks_unsafe_for_binlog
         || trx->isolation_level <= TRX_ISO_READ_COMMITTED)
    && prebuilt->select_lock_type != LOCK_NONE
    && !dict_index_is_spatial(index)) {

    /* Try to place a gap lock on the next index record
    to prevent phantoms in ORDER BY ... DESC queries */
    const rec_t* next_rec = page_rec_get_next_const(rec);

    offsets = rec_get_offsets(next_rec, index, offsets,
                              ULINT_UNDEFINED, &heap);
    err = sel_set_rec_lock(pcur,
                           next_rec, index, offsets,
                           prebuilt->select_lock_type,
                           LOCK_GAP, thr, &mtr);

    switch (err) {
    case DB_SUCCESS_LOCKED_REC:
        err = DB_SUCCESS;
        /* fall through */
    case DB_SUCCESS:
        break;
    default:
        goto lock_wait_or_error;
    }
}
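To see which gap this protects, here is a tiny standalone sketch with assumed toy data (not InnoDB code): for a descending scan positioned on rec, the LOCK_GAP taken on next_rec covers the open interval between rec and its successor, so nothing can be inserted there and later surface as a phantom.

#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    /* Hypothetical index keys; the DESC scan is positioned on rec = 30,
       e.g. for a locking query such as WHERE c < 40 ORDER BY c DESC. */
    std::vector<int> keys = {10, 20, 30, 40};
    int rec = 30;

    /* next_rec is the record that follows rec in index order
       (page_rec_get_next_const() in the excerpt above); here it is 40. */
    auto it = std::upper_bound(keys.begin(), keys.end(), rec);
    if (it != keys.end()) {
        /* A gap lock on next_rec blocks inserts into (rec, next_rec),
           e.g. a new key 35 that would otherwise match the query on a re-scan. */
        std::printf("gap lock on %d protects the interval (%d, %d)\n", *it, rec, *it);
    } else {
        /* If rec were the last user record, next_rec would be the page supremum. */
        std::printf("next record is the supremum\n");
    }
    return 0;
}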
if (set_also_gap_locks
    && !(srv_locks_unsafe_for_binlog
         || trx->isolation_level
            <= TRX_ISO_READ_COMMITTED)
    && prebuilt->select_lock_type != LOCK_NONE
    && !dict_index_is_spatial(index)) {

    /* Try to place a gap lock on the index
    record only if innodb_locks_unsafe_for_binlog
    option is not set or this session is not
    using a READ COMMITTED isolation level. */

    err = sel_set_rec_lock(
        pcur,
        rec, index, offsets,
        prebuilt->select_lock_type, LOCK_GAP,
        thr, &mtr);

    switch (err) {
    case DB_SUCCESS_LOCKED_REC:
    case DB_SUCCESS:
        break;
    default:
        goto lock_wait_or_error;
    }
}
/* Try to place a lock on the index record; note that delete
marked records are a special case in a unique search. If there
is a non-delete marked record, then it is enough to lock its
existence with LOCK_REC_NOT_GAP. */

/* If innodb_locks_unsafe_for_binlog option is used
or this session is using a READ COMMITTED isolation
level we lock only the record, i.e., next-key locking is
not used. */

if (!set_also_gap_locks
    || srv_locks_unsafe_for_binlog
    || trx->isolation_level <= TRX_ISO_READ_COMMITTED
    || (unique_search && !rec_get_deleted_flag(rec, comp))
    || dict_index_is_spatial(index)) {

    goto no_gap_lock;
} else {
    lock_type = LOCK_ORDINARY;
}
The case of a '>= primary key value' search:
/* If we are doing a 'greater or equal than a primary key
value' search from a clustered index, and we find a record
that has that exact primary key value, then there is no need
to lock the gap before the record, because no insert in the
gap can be in our search range. That is, no phantom row can
appear that way.

An example: if col1 is the primary key, the search is WHERE
col1 >= 100, and we find a record where col1 = 100, then no
need to lock the gap before that record. */

if (index == clust_index
    && mode == PAGE_CUR_GE
    && direction == 0
    && dtuple_get_n_fields_cmp(search_tuple)
       == dict_index_get_n_unique(index)
    && 0 == cmp_dtuple_rec(search_tuple, rec, offsets)) {

no_gap_lock:
    lock_type = LOCK_REC_NOT_GAP;
}
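A small standalone check can make the source comment's example concrete. This is a sketch only: can_skip_gap_before_rec and its boolean parameters are made up here to mirror the condition above, they are not InnoDB functions.

#include <cstdio>

/* Hypothetical model of the condition above: an equality-style '>=' search on the
   full primary key that lands exactly on a matching record does not need the gap
   before that record locked. */
static bool can_skip_gap_before_rec(bool is_clust_index, bool mode_is_ge,
                                    bool first_positioning,
                                    bool search_covers_whole_pk,
                                    bool rec_equals_search_value) {
    return is_clust_index && mode_is_ge && first_positioning
           && search_covers_whole_pk && rec_equals_search_value;
}

int main() {
    /* WHERE col1 >= 100 on primary key col1, and a record with col1 = 100 exists:
       the next-key lock degrades to LOCK_REC_NOT_GAP on that record. */
    std::printf("skip gap: %s\n",
                can_skip_gap_before_rec(true, true, true, true, true) ? "yes" : "no");

    /* WHERE col1 >= 100 but the first record found is col1 = 102:
       the gap before it must stay protected, so the next-key lock is kept. */
    std::printf("skip gap: %s\n",
                can_skip_gap_before_rec(true, true, true, true, false) ? "yes" : "no");
    return 0;
}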