Subject Re: [firebird-support] Re: Curious about Index deletion
Author Helen Borrie
At 09:04 PM 18/02/2007, you wrote:
>I messed up with the table name and field name too.
>Sorry for that.
>WORD => W1
>You are correct, Helen.
>The WHF (ID, W1, F1) table can't contain a primary key, but I can
>make all of its columns NOT NULL. I still want to query like this:
>select ID from WHF where W1 IN ('a','b','c'...) order by F1 DESC;
>How should I index the table?
>Would it be like
>(a) create index id_index on WHF(ID); -- NO, there is no use for it
>(b) create desc index f1_index on WHF(F1); -- YES, for the sorting
>(c) create index w1_index on WHF(W1); -- YES, for the search
>Is this okay?

Why can't the table have a primary key?
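For illustration only (and assuming ID does in fact identify each row uniquely, which the original post does not confirm), declaring a key on such a table might look like this:

```sql
-- Hypothetical sketch: valid only if ID is unique for every row.
-- A primary key also gives the engine a unique index it can use.
ALTER TABLE WHF
  ADD CONSTRAINT pk_whf PRIMARY KEY (ID);
```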

>Another query....
>If FB creates the index at runtime, how does it manage that for huge
>tables? I am just curious about it.

It doesn't create the index at runtime. It builds a binary tree
from the index and stores the index pages in the cache at runtime.

>Also, after my first post I noticed that when I tried to insert new
>index information into my table, it does something like trying to
>copy the database into temp. I noticed the file names and sizes:
>fb_sort_9cljjl 955 MB
>fb_sort_czhojl 700+ MB
>So the application stopped as there was no free space left on my
>primary drive.

It is doing this because, to build an index, it has to build a sort
table. If there is enough RAM, it will use RAM. If there is not
enough RAM, it will place these intermediate tables in your temporary
directory and work on them from there.

If you are working with indexes requiring sort tables of those sizes,
then you will need to configure a decent amount of sort space on an
attached hard drive that does have enough room. See the
TempDirectories parameter in firebird.conf. You can configure sort
space on any hard drive that is physically attached to that machine.
If you have a fast drive, put your first TempDirectory there with,
for example, 2 GB of space assigned to it. Sorting is very I/O
intensive, so also avoid a partition on the same drive where you have
configured your RAM cache; that way your sorts are not competing with
another I/O-intensive operation.

>Also, according to you it must be trying to update its structure of
>index pages. But 955 + 700 MB is a huge amount of space for 1.86 GB
>of total data.

I can't tell what you are talking about here. When inserts happen,
it *will* update index pages, just as it updates data pages. It will
also update the runtime structures that it uses to find index pages quickly.

If bulk inserts are causing more database growth than you expect,
then it could be due to a number of factors, including:

The database is fairly new, or is newly restored. A new database
starts out with the minimum amount of disk space allocated. As more
pages are needed, the engine asks the OS for more blocks of
disk. So, at the start of its life, the database will grow faster
than later on, when free space is likely to be available on pages
where old record versions and old index stubs have been garbage collected.

Another cause may be accumulation of large amounts of garbage with no
GC occurring at all, because of a lack of hard commits. (I didn't see
any commits in your code; I wonder whether this is an essential area
that you are unaware of.) If you don't commit your
transactions frequently enough, record versions and their associated
index nodes remain "interesting" and will not be flagged for
GC. Performance can degrade drastically under these conditions. In
short, you should not write client code that does not take good care
of transaction management.
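In isql terms, the point about hard commits can be sketched like this (the table and values are illustrative):

```sql
-- Illustrative only: commit after each batch of work so that old record
-- versions and their index nodes stop being "interesting" and become
-- eligible for garbage collection.
INSERT INTO WHF (ID, W1, F1) VALUES (1, 'a', 10);
INSERT INTO WHF (ID, W1, F1) VALUES (2, 'b', 20);
COMMIT;
-- Note: COMMIT RETAINING is not a hard commit; it keeps the transaction's
-- context open, so garbage can still accumulate behind it.
```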

By the sound of it, you are short of machine resource as well.

>And lastly, I will assume you haven't written me those last sentences...

Why? Do you intend to ignore them?