Subject: Garbage Collection Question (WAS: Re: Slow SELECT/DELETE)
Author: federonline
Post date: 2007-06-05T16:47:08Z
Hi, Adam.
This brings up a couple issues that relate to our project. We use FB
2.0 and our DB is less than 300M. My question is mostly about
terminology...but some background first....
We have an application that collects data asynchronously and in real
time, storing it in a table. The relevance of that data diminishes
over time, and the data is generally irrelevant after 12 hours. Every
5-10 minutes, our application launches a thread that purges everything
older than 2 days (1200-2000 rows, depending on input devices), which
keeps the DB small.
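For concreteness, the purge thread runs the equivalent of a single
statement like this (the table and column names here are made up, but
it is the shape of the real thing):

  -- delete everything collected more than 2 days ago;
  -- in Firebird, subtracting a number from a timestamp subtracts days
  DELETE FROM SENSOR_DATA
  WHERE COLLECTED_AT < CURRENT_TIMESTAMP - 2;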
My questions are:
1) Do you use "garbage collection" to mean re-indexing?
2) Can you tell me the effect of the constant purging of data on the
index?
3) If the effects are very negative, is there a way, programmatically
(through the API), to force an index rebuild or "garbage collection",
if those terms are almost synonymous? (See the sketch after this list
for the kind of thing I mean.)
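To be concrete about question 3: this is the sort of thing I imagine
issuing over an ordinary connection (the index name is invented for
the example). My understanding is that deactivating and reactivating
an index rebuilds it from scratch, while SET STATISTICS only
recomputes its selectivity:

  ALTER INDEX IDX_SENSOR_COLLECTED_AT INACTIVE;
  ALTER INDEX IDX_SENSOR_COLLECTED_AT ACTIVE;   -- full rebuild
  SET STATISTICS INDEX IDX_SENSOR_COLLECTED_AT; -- selectivity only

(I gather the INACTIVE/ACTIVE trick does not work on indexes that back
constraints, so treat this as a sketch, not something we have tested.)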
I have to say, generally we have no issues, but we have also only let
the system run for 2-3 months before bringing it down, running a
backup, and running gfix and a sweep (none of which has ever reported
an error). I'm just nervous about deploying it for a year without
anyone to service it.
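For reference, that maintenance pass is nothing exotic, just the
standard command-line tools, roughly like this (paths and credentials
are placeholders):

  gbak -b -user SYSDBA -password ******** our_db.fdb our_db.fbk
  gfix -sweep -user SYSDBA -password ******** our_db.fdb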
Thanks in advance.
Kurt.
> --- In firebird-support@yahoogroups.com, "Adam" wrote:
>
> --- In firebird-support@yahoogroups.com, "pdhpoolplayer1957" wrote:
> >
> > Using the FlameRobin interface to our Firebird instance, I entered
> > a simple query:
> >
> > SELECT datetime FROM IPCLOG WHERE datetime < '05/01/2007'
> >
> > This query takes over 1.5 hours to run. There is an index on the
> > datetime field, and the statistics have been updated. Is there
> > anything else I can do to improve this abysmal performance?
>
> If you have just done a bulk delete, Firebird is probably performing
> garbage collection during your select. There are probably a number
> of factors at play.
>
> The way the Firebird 1.5 index structure works makes it expensive to
> clean up poor indices (i.e., an index with lots of duplicate values).
> It is not clear to me whether your datetime field fits this
> definition.
>
> Firebird 2 will probably be a real help here, because it structures
> the index in a manner that makes it far less expensive to clean up.
> The new ability to select the garbage collection mode may also be
> useful.
>
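P.S. Adam: I assume the "ability to select garbage collection mode"
you mention is the new GCPolicy parameter in firebird.conf? If I have
that right, a line like the one below would move cleanup off the
reading statements and onto the background thread (SuperServer only,
as I understand it; the documented values are cooperative, background,
and combined):

  # firebird.conf, Firebird 2.0 SuperServer
  GCPolicy = background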