Subject: Re: Curious about Index deletion
Author: Manish Lal Joshi
>Why can't the table have a primary key?
I don't want to keep them. It is not required here. Is that bad?
>It doesn't create the index at runtime. It creates a binary tree
>from the index and it stores the index pages in the cache at runtime.
Okay, for this I think I will study some details about it.
>It is doing this because, to build an index, it has to build a sort
>table. If there is enough RAM, it will use RAM. If there is not
>enough RAM, it will place these intermediate tables in your temporary
>directory and work on them from there.
>If you are working with indexes requiring sort tables of those sizes
>then you will need to configure a decent amount of sort space on an
>attached HDD that does have enough. See the TempDirectories
>parameter in firebird.conf. You can configure sort space on any hard
>drive that is physically run on that machine. If you have a fast
>drive, put your first TempDirectory there with e.g. 2 Gb of space
>assigned to it. Sorting is very i/o intensive. Also avoid a
>partition on the same HDD where you have configured your RAM cache,
>so that your sorts are not competing with another i/o intensive
>operation.
>I can't tell what you are talking about here. When inserts happen,
>it *will* update index pages, just as it updates data pages. It will
>also update the runtime structures that it uses to find index pages.
>If bulk inserts are causing more database growth than you expect,
>then it could be due to a number of factors, including:
>The database is fairly new, or is newly restored. A new database
>starts out with the minimum amount of disk space allocated. As more
>pages are needed, the engine asks the OS for more blocks of
>disk. So, at the start of its life, the database will grow faster
>than later on, when free space is likely to be available on pages
>where old record versions and old index stubs have been garbage
>collected.
>Another cause may be accumulation of large amounts of garbage with no
>GC occurring at all, because of lack of hard commits. (I didn't see
>any commits in your code, actually...so I wonder if it is an
>essential area that you are unaware of.) If you don't commit your
>transactions frequently enough, record versions and their associated
>index nodes remain "interesting" and will not be flagged for
>GC. Performance can degrade drastically under these conditions. In
>short, you should not write client code that does not take good care
>of transaction management.
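To make the transaction point concrete, here is a minimal sketch of committing in batches during a bulk insert, so record versions stop being "interesting" and can be garbage-collected. It uses Python's built-in sqlite3 module purely so the example is self-contained and runnable; with Firebird you would follow the same connect / execute / commit pattern through your driver of choice, and the table name and batch size are just placeholders:

```python
import sqlite3

# Bulk insert with a hard commit every BATCH rows. Without these
# commits, one long-running transaction keeps every old record
# version "interesting" and blocks garbage collection.
BATCH = 1000

con = sqlite3.connect(":memory:")          # stand-in for a Firebird connection
con.execute("CREATE TABLE tbl (id INTEGER, val TEXT)")

rows = [(i, "row %d" % i) for i in range(5000)]
for n, row in enumerate(rows, start=1):
    con.execute("INSERT INTO tbl VALUES (?, ?)", row)
    if n % BATCH == 0:
        con.commit()                       # hard commit: ends the transaction
con.commit()                               # commit the final partial batch

print(con.execute("SELECT COUNT(*) FROM tbl").fetchone()[0])
```

The batch size is a trade-off: committing every row adds per-transaction overhead, while committing only at the end recreates the no-GC problem described above.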
>By the sound of it, you are short of machine resource as well.
But I was wondering why it is taking so much space just to update the
index. I guess it will eventually clear the space, as it is placing
them in the temp folder.
But I am still wondering what it is transferring there. The whole
DB? Can't be, right?
Hey, I have been using FB for only 5 days...
And about transactions?
I am doing a partial commit of the transaction as soon as it holds some
records from the front end:

insert into tbl values();

Is that okay, or do I also have to write it in the backend?
Show me the way?
>>And lastly i will assume you havent wrote me those last sentences..
>Why? Do you intend to ignore them?
As your last sentences were all due to a misunderstanding in the table
names (i.e. WHF and others), I can ignore them.