Subject | Transaction tagged indexes |
---|---|
Author | Ann Harrison |
Post date | 2015-08-21T14:39:45Z |
I've been thinking about the problems of count(*) being slow, the inability to handle an M to M join without reading the data in the junction table, and keeping indexes dense.
If anyone on this list is unaware of it, Firebird can't resolve queries by reading the index without reference to the data because the identity of the transaction that created the record - thus the record's visibility to the current transaction - is kept in the record, not in the index.
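To make the point concrete, here's a throwaway sketch in Python - names and structures are hypothetical, nothing like the actual on-disk format - of why every index hit forces a record fetch:

```python
# Hypothetical sketch, not engine code: the index maps key -> record
# numbers only; the creating transaction id lives in the record header,
# so visibility can't be decided from the index alone.
from dataclasses import dataclass

@dataclass
class Record:
    creator_txn: int   # stored in the record header, not in the index
    data: str

def is_visible(record: Record, committed: set) -> bool:
    # The visibility decision needs the record's transaction id.
    return record.creator_txn in committed

def count_key(index: dict, records: dict, key, committed: set) -> int:
    # Even count(*) must fetch every record the index points at.
    return sum(1 for rec_no in index.get(key, [])
               if is_visible(records[rec_no], committed))

records = {7: Record(creator_txn=12, data="..."),
           8: Record(creator_txn=99, data="...")}
index = {"smith": [7, 8]}
print(count_key(index, records, "smith", committed={12}))  # 1 - txn 99 isn't committed
```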
For decades, there's been discussion of keeping transaction information in the index. Unfortunately, the index entry has to contain both the id of the transaction that created the entry and the id of the transaction that invalidated it by deleting the record or by changing the key value. That means, of course, that a DELETE not only creates a deleted stub record, it must also modify the current index entries for the record. An UPDATE that changes key fields must create new index entries and mark the old ones as invalidated.
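Sketched the same way - again hypothetical names, just the two-id scheme described above:

```python
# Sketch of the two-id scheme: each entry carries a creator and an
# (initially empty) invalidator transaction id.  Names are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class IndexEntry:
    key: str
    rec_no: int
    creator_txn: int                        # transaction that made this entry
    invalidator_txn: Optional[int] = None   # set by DELETE or by a key UPDATE

def delete_record(entries: list, rec_no: int, txn: int) -> None:
    # A DELETE must revisit every index and stamp the record's live entries.
    for e in entries:
        if e.rec_no == rec_no and e.invalidator_txn is None:
            e.invalidator_txn = txn

def update_key(entries: list, rec_no: int, new_key: str, txn: int) -> None:
    # A key-changing UPDATE invalidates the old entry and adds a fresh one.
    delete_record(entries, rec_no, txn)
    entries.append(IndexEntry(new_key, rec_no, creator_txn=txn))
```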
So, keeping transaction information in the index will greatly increase the size of indexes and increase the overhead of maintaining them. The size increases by one or two 64-bit transaction ids per entry, and transaction ids don't compress well either. Maintenance more or less doubles for updates; for deletes it goes from no index work at all to modifying an entry in every index on the table.
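Some back-of-envelope arithmetic - the entry sizes here are assumptions for illustration, not measured Firebird numbers:

```python
# Back-of-envelope only; entry sizes are assumed, not measured.
entries = 10_000_000
compressed_entry = 12   # assumed bytes per entry after prefix compression
txn_ids = 2 * 8         # creator + invalidator, 64-bit each, incompressible

today = entries * compressed_entry
tagged = entries * (compressed_entry + txn_ids)
print(f"{today / 2**20:.0f} MiB -> {tagged / 2**20:.0f} MiB "
      f"({tagged / today:.1f}x)")   # 114 MiB -> 267 MiB, about 2.3x
```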
So, is it worth making index access slower and index maintenance harder to speed up count(*) and multi-way joins using junction tables? If not, should there be an option that the user can select to have slower indexes - or some slower indexes - but faster response on some queries? Traditionally, Firebird has said "No" to both. The first because it penalizes "good" SQL to benefit the stupid and lazy, and the second because it puts the burden of optimization on the database designer - "if your database is slow, it's your fault".
My thought was that primary key indexes rarely change and are particularly useful for junction records. If primary key indexes - or all unique indexes - carried transaction information and other indexes did not, Firebird could support an index-only count(*) without looking at data. It would also handle joins like "students s join registrations r on r.student_id = s.student_id join courses c on c.course_id = r.course_id" without reading the registrations table's data.
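Here's roughly what that buys, in the same hypothetical sketch form, assuming registrations has a composite primary key on (student_id, course_id) whose index carries the transaction tags:

```python
# Sketch: resolving the junction step of the join from a transaction-tagged
# primary key index on registrations(student_id, course_id), without ever
# touching the table's data pages.  All names are hypothetical.

def visible(creator, invalidator, committed):
    return creator in committed and (invalidator is None
                                     or invalidator not in committed)

def course_ids_for_student(pk_index, student_id, committed):
    # pk_index rows: (student_id, course_id, creator_txn, invalidator_txn)
    return [c for s, c, cr, inv in pk_index
            if s == student_id and visible(cr, inv, committed)]

pk_index = [(1, 101, 5, None), (1, 102, 5, 9), (2, 101, 7, None)]
print(course_ids_for_student(pk_index, 1, {5, 7}))     # [101, 102] - txn 9 not committed
print(course_ids_for_student(pk_index, 1, {5, 7, 9}))  # [101] - the delete is now visible
```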
Oh, and one other bit of trivia on the same general topic. NuoDB has added an "approximate count" based on the index. Apparently their customers don't care about the odd record that's actually been deleted, or isn't yet committed from the counting transaction's point of view.
Cheers,
Ann
At one time there was an index optimization for the slightly unlikely case that a record has a key that bounces back and forth between two values - say "Night" and "Day". Originally, when you stored a record with "Night", that created one index entry. When you modified the key field to "Day", that created a second. When you modified the key field again, back to "Night", no new entry was created. That required some very complex bookkeeping for the garbage collector when cleaning up a bunch of old versions of the record - say removing three "Night" versions and two "Day" versions in a record with a primary version of "Day" and five back versions. I think Vlad said that mechanism was dropped. Good. It doesn't work with transactional indexes.
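For what it's worth, here's why the two can't coexist, in the same sketch notation: once entries carry transaction ids, the second "Night" version has its own lifetime and can't reuse the first one's entry, so there is nothing left to suppress.

```python
# Hypothetical illustration.  Under the old duplicate suppression,
# bouncing Night -> Day -> Night reused the original "Night" entry,
# so the index held just two entries for this record:
rec_no = 42
suppressed = {("Night", rec_no), ("Day", rec_no)}

# With transaction tags, each key change has a distinct lifetime,
# so every bounce must create a fresh entry:
tagged = [
    ("Night", rec_no, 10, 11),    # created by txn 10, invalidated by txn 11
    ("Day",   rec_no, 11, 12),
    ("Night", rec_no, 12, None),  # same key, new lifetime -> new entry
]
```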