Subject: RE: [IB-Architect] trouble with sweep
Author: Jim Starkey
At 12:17 PM 10/19/2002 +0400, Dmitry Yemanov wrote:
>Pavel and others,
>
>You are talking about disadvantages of the GC thread under heavy load, but
>there are also other bad things related to the GC design and implementation:
>
>1) It's slow. I understand all these long-chain issues, but the server's
>response time should be predictable for the user. Now someone can delete one
>million records quite fast, but subsequent selects may take hours to
>complete because of the GC. And that's without any noticeable load, even in
>single-user mode. It's a shock for users who worked with e.g. MSSQL before
>IB/FB.
>

The work involved in undoing an insert is almost exactly the same
as doing the insert. The work has to be done. If the GC thread
is doing it without contention, it appears free. If the GC thread
is running under heavy contention, it's going to slow things down,
delay garbage collection, reduce server efficiency, and overall
consume significantly more resources than if it were done cooperatively.

Do you design for light load (i.e. benchmarks) or for the real world?
Pick one. Or find an algorithm without the tradeoff (my choice).

Note that fine granularity solves the problem.
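To make the tradeoff concrete, here is a toy model of cooperative garbage collection with fine granularity — a sketch in Python, not InterBase/Firebird code. The batch size and names are invented for illustration: each reader that stumbles on garbage cleans only a small bounded batch, so the cost of a mass delete is amortized across many subsequent statements instead of landing on one GC pass.

```python
# Toy model of cooperative GC with fine granularity.
# Not engine code -- just an illustration of amortized cleanup cost.

GC_BATCH = 10  # hypothetical cap on garbage versions a reader cleans per visit


def mass_delete(table, n):
    """A mass delete is cheap for the deleter: it just leaves n dead versions."""
    table.extend(["dead"] * n)


def cooperative_read(table):
    """A reader collects at most GC_BATCH dead versions it encounters."""
    cleaned = 0
    while "dead" in table and cleaned < GC_BATCH:
        table.remove("dead")
        cleaned += 1
    return cleaned


table = []
mass_delete(table, 100)

# Each subsequent statement pays a small, bounded, predictable cost.
costs = []
while "dead" in table:
    costs.append(cooperative_read(table))

print(max(costs))   # -> 10: no single reader pays more than GC_BATCH
print(len(costs))   # -> 10: the work is spread over many statements
```

With coarse granularity (one reader forced to collect everything), the first select after the delete would pay the whole bill — exactly the "hours to complete" behavior Dmitry describes.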

>2) It's unreliable. Just kill the server while a database is being GC'ed and
>most likely you'll have your database corrupted. Just a real-world example.
>Someone does a mass update/delete, then starts a complex report, then waits
>10, 20, 30 minutes - the server is 100% busy. One kills the connection - the
>server is still 100% busy. One kills the server - and gets the database
>corrupted.
>

That's gibberish. The backout code uses careful write like everything
else; otherwise the database wouldn't work at all. And it's the same code
whether kicked off by the GC thread or a worker thread.
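For readers unfamiliar with the term: careful write means pages are flushed in an order such that the on-disk image is self-consistent after a crash at any point — a page is written before any page that points to it. A minimal sketch of that invariant (page names and structure invented for illustration, not the engine's actual page types):

```python
# Toy model of careful write: flush a page before any page that points
# to it, so a crash between writes never leaves a dangling pointer on disk.


def careful_update(disk, data_id, payload):
    writes = [
        (data_id, payload),    # 1. write the new data page first
        ("pointer", data_id),  # 2. only then write the page pointing to it
    ]
    # Simulate a crash after the first i writes have reached disk and
    # check that every possible crash state is consistent.
    for i in range(len(writes) + 1):
        crashed = dict(disk)
        for page, value in writes[:i]:
            crashed[page] = value
        target = crashed.get("pointer")
        # Invariant: whatever the pointer page references must exist on disk.
        assert target is None or target in crashed, "dangling pointer!"
    for page, value in writes:
        disk[page] = value


disk = {}
careful_update(disk, "data_1", "record backout")
print(disk["pointer"])  # -> data_1
```

Because the backout path obeys the same ordering discipline as every other writer, killing the server mid-GC loses in-flight work but cannot, by itself, corrupt the database.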

>Comments anyone? Are these unavoidable effects of the current design?
>

The effects are not only avoidable, but avoided.

Jim Starkey