AUTO INCREMENT in MariaDB
AUTO_INCREMENT is a feature that lets the database engine generate a synthetic Primary Key (PK) for a table without any complicated manual implementation. For each new row, AUTO_INCREMENT simply assigns the column the next numerical value, incrementing the counter with every insert.
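As a minimal sketch (the table and column names here are illustrative, not from any real schema):

```sql
-- The engine assigns 1, 2, 3, ... to `id` automatically.
CREATE TABLE items (
    id INT NOT NULL AUTO_INCREMENT,
    name VARCHAR(50),
    PRIMARY KEY (id)
) ENGINE=InnoDB;

-- No id is supplied; on a fresh table these rows receive ids 1 and 2.
INSERT INTO items (name) VALUES ('first'), ('second');
```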
Basic Overview
MariaDB supports a variety of integer types. Each type has a fixed storage size, measured in bytes, which determines the largest value that type can represent. By default these types are signed, meaning negative values are allowed. Here are some of the most commonly used types and their maximum values:
Type Storage Size Maximum Value
TINYINT 1 Byte 127
SMALLINT 2 Bytes 32767
MEDIUMINT 3 Bytes 8388607
INT 4 Bytes 2147483647
BIGINT 8 Bytes 9223372036854775807
They can also be defined as unsigned, disallowing negative values but effectively doubling the possible maximum value.
A small sidenote: sometimes you may see declarations like INT(11) or TINYINT(1). The value in the brackets is only the display width, i.e. how many characters should be displayed for this column during a SELECT. It has no effect on storage size or maximum value.
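Both points can be seen in one hedged sketch: an unsigned TINYINT accepts values up to 255 rather than 127, while the display width in the brackets changes nothing about the range (table name is illustrative):

```sql
-- Display width 2, yet the column still stores the full unsigned range.
CREATE TABLE unsigned_demo (
    id TINYINT(2) UNSIGNED NOT NULL AUTO_INCREMENT,
    PRIMARY KEY (id)
) ENGINE=InnoDB;

INSERT INTO unsigned_demo (id) VALUES (255);  -- succeeds: 255 is the unsigned maximum
INSERT INTO unsigned_demo (id) VALUES (256);  -- fails with an out-of-range error
```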
Reaching the Max Value
Once the max value of an integer column is reached, no new data can be inserted, and you will be greeted with an error message like:
(root@linuxpc) [test]> SHOW CREATE TABLE example;
+---------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Table | Create Table
|
+---------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| example | CREATE TABLE `example` (
`id` tinyint(4) NOT NULL AUTO_INCREMENT,
`col2` bigint(20) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=128 DEFAULT CHARSET=latin1 COLLATE=latin1_swedish_ci |
+---------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.000 sec)
(root@linuxpc) [test]> INSERT INTO example (col2) VALUES (128);
ERROR 167 (22003): Out of range value for column 'id' at row 1
The INSERT fails, and no more data can be written to this table until the column is changed to a larger integer type (one with a bigger storage size). This can be particularly problematic when a busy table has to be changed from INT -> BIGINT, due to the table downtime the conversion requires.
Gaps
AUTO_INCREMENT does not backfill gaps: if a record with a high id is manually inserted, AUTO_INCREMENT will not go backwards to fill in the skipped ids. For example:
(root@linuxpc) [test]> SELECT MAX(id) FROM example;
+---------+
| MAX(id) |
+---------+
| 127 |
+---------+
1 row in set (0.000 sec)
(root@linuxpc) [test]> ALTER TABLE example MODIFY COLUMN id INT NOT NULL AUTO_INCREMENT;
Query OK, 127 rows affected (0.056 sec)
Records: 127 Duplicates: 0 Warnings: 0
(root@linuxpc) [test]> INSERT INTO example (id) VALUES (1000);
Query OK, 1 row affected (0.005 sec)
(root@linuxpc) [test]> INSERT INTO example (col2) VALUES (0);
Query OK, 1 row affected (0.005 sec)
(root@linuxpc) [test]> SELECT MAX(id) FROM example;
+---------+
| MAX(id) |
+---------+
| 1001 |
+---------+
1 row in set (0.000 sec)
An accidental manual insert is not the only way for AUTO_INCREMENT gaps to appear; there are a few common ways they can occur naturally:
-> Rolled-back transactions: the transaction still reserves an id value, so rolling it back leaves a gap.
-> INSERT IGNORE / unique constraint violations: a failed insertion allocates an id but never saves the row, so a gap is left.
-> Bulk INSERT: InnoDB may pre-allocate more ids than necessary, creating leftover gaps.
-> Deleted rows: DELETE FROM ... naturally leaves a gap.
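The rollback case can be demonstrated directly; a sketch, assuming the `example` table from above:

```sql
START TRANSACTION;
INSERT INTO example (col2) VALUES (1);  -- reserves the next id
ROLLBACK;                               -- the row is discarded, but the id is not returned

INSERT INTO example (col2) VALUES (2);  -- receives the id after the reserved one,
                                        -- leaving a gap in the sequence
```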
In general these gaps have no functional impact, but it is useful to note that any chunked UPDATE/DELETE on the PK does need to take them into account.
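As a hedged sketch of why gaps matter for chunking (again using the `example` table):

```sql
-- A fixed-width range such as WHERE id BETWEEN 1 AND 1000 may match far fewer
-- than 1000 rows where gaps are large. Chunking by row count handles gaps
-- naturally (MariaDB supports ORDER BY/LIMIT on a single-table DELETE):
DELETE FROM example ORDER BY id LIMIT 1000;
```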
Besides that, it is good to remember that large gaps mean the value will reach its exhaustion point more quickly, so it might be wise to monitor how close important tables are to exhaustion.
Monitoring Tables close to Max Value exhaustion
Thankfully, all of the necessary information is available in information_schema. The first of the two tables we need is TABLES, to get the list of tables we're interested in and to filter out any schemas we don't want to monitor. The second is COLUMNS, so that for each table we can check that it is applicable: we want primary keys of an integer data type.
TABLES.AUTO_INCREMENT holds the next value the auto increment will assign when a new row is added to the table. Dividing this value by the maximum value of the column's data type tells us how close we are to exhaustion as a percentage. I like to be alerted at a threshold of 80%, but feel free to use whatever value makes sense for your situation.
SELECT
t.TABLE_SCHEMA AS _schema,
t.TABLE_NAME AS _table,
t.AUTO_INCREMENT AS _auto_increment,
c.DATA_TYPE AS pk_type,
c.COLUMN_TYPE,
(
t.AUTO_INCREMENT /
(CASE DATA_TYPE
WHEN 'tinyint'
THEN IF(COLUMN_TYPE REGEXP 'unsigned',
255,
127
)
WHEN 'smallint'
THEN IF(COLUMN_TYPE REGEXP 'unsigned',
65535,
32767
)
WHEN 'mediumint'
THEN IF(COLUMN_TYPE REGEXP 'unsigned',
16777215,
8388607
)
WHEN 'int'
THEN IF(COLUMN_TYPE REGEXP 'unsigned',
4294967295,
2147483647
)
WHEN 'bigint'
THEN IF(COLUMN_TYPE REGEXP 'unsigned',
18446744073709551615,
9223372036854775807
)
END / 100)
) AS max_value
FROM information_schema.TABLES t
INNER JOIN information_schema.COLUMNS c
ON t.TABLE_SCHEMA = c.TABLE_SCHEMA
AND t.TABLE_NAME = c.TABLE_NAME
WHERE
t.TABLE_SCHEMA NOT IN ('sys','mysql','information_schema','performance_schema')
AND t.AUTO_INCREMENT IS NOT NULL
AND c.COLUMN_KEY = 'PRI'
AND c.DATA_TYPE REGEXP 'int'
HAVING max_value > 80
ORDER BY max_value DESC;
Example output for a tinyint table filled to 110 of its 127 ids:
+---------+---------+-----------------+---------+-------------+-----------+
| _schema | _table | _auto_increment | pk_type | COLUMN_TYPE | max_value |
+---------+---------+-----------------+---------+-------------+-----------+
| test | example | 111 | tinyint | tinyint(4) | 87.4016 |
+---------+---------+-----------------+---------+-------------+-----------+
1 row in set (0.020 sec)
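The reported percentage can be sanity-checked by hand: AUTO_INCREMENT is 111 and the signed TINYINT maximum is 127, mirroring the `t.AUTO_INCREMENT / (max / 100)` expression in the query:

```sql
SELECT 111 / (127 / 100) AS max_value;  -- ≈ 87.4, matching the output above
```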
Conclusion
Adding a script to monitor for auto increment exhaustion can be a simple yet effective way to give yourself an early warning for tables that are about to hit this problem, leaving you ample time to make appropriate preparations. This is particularly useful for INT -> BIGINT situations on core/central tables, where the change might be very disruptive to the business.