Subject: Why did a large delete make a table partially inaccessible?
Author: Matt Clark
Hello all,
I maintain a widely indexed, active production database on Classic
server, currently about 60 GB, with daily updates from over 100 stores.
In a recent effort to prune redundant data I removed 18 million rows
from a 20-million-row table. There are five indexes on the table, and
after deleting the data several problems materialised. First, several
programs that indirectly (via triggers) add rows to this table refused
to work. Second, I had a garbage collection session running for nearly
4 hours before I was forced to kill it to take my laptop home, when I
normally don't have to wait more than 45 minutes. After I finally
resorted to dropping the indexes and the table, the garbage collection
seemed able to complete.

What I'm really looking for here is some understanding of what
happened internally so I can avoid doing it again!
From what I've read subsequently, I'm guessing this problem would have
been avoided if I'd disabled the indexes before the delete, but I'm now
curious to understand why the delete created such a problem for the
garbage collection. Would it ever have completed, or did I create a
condition that the GC couldn't fix? And why did dropping the table
ultimately fix it?
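
For reference, the approach I've read about goes roughly like this
(the table and index names below are made up for illustration, and as
I understand it this only works for ordinary indexes, not ones backing
a primary or foreign key constraint):

```sql
-- Deactivate each user-defined index before the mass delete
-- (hypothetical names; constraint-backed indexes can't be deactivated).
ALTER INDEX IDX_SALES_STORE INACTIVE;
ALTER INDEX IDX_SALES_DATE INACTIVE;

DELETE FROM SALES WHERE SALE_DATE < '2003-01-01';
COMMIT;

-- Reactivating rebuilds each index from scratch, rather than leaving
-- millions of stale index entries for garbage collection to clean up.
ALTER INDEX IDX_SALES_STORE ACTIVE;
ALTER INDEX IDX_SALES_DATE ACTIVE;
```

Is that roughly the right idea, or am I misunderstanding what the
index deactivation buys me here?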