| Subject | Re: [firebird-support] Server and DB page sizes and memory |
|---|---|
| Author | Erik LaBianca |
| Post date | 2006-11-09T23:17:04Z |
Ann,
Thanks for the reply.
> Have you looked at the sort memory parameters in the configuration
> file? You can increase the size of the in-memory sort block and
> avoid writing out intermediate files. The behavior you're seeing
> (fast with smaller chunks, slow with large chunks) suggests that
> your problem is with writing and reading sort files from the
> temporary directories. You might also point the Firebird temporary
> directory to a RAM disk.
>
SortMemBlockSize = 1048576 (Default)
SortMemUpperLimit = 1073741824 (1G)
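For reference, here's roughly how those entries look in my firebird.conf
(a sketch; the comments are mine, not from the stock file):

```
# firebird.conf (excerpt)
# Size of each memory block the sort module allocates, in bytes.
SortMemBlockSize = 1048576
# Total memory the sort module may use before it starts spilling
# intermediate runs to temporary files on disk, in bytes.
SortMemUpperLimit = 1073741824
```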
From reading the description, it didn't seem like increasing
SortMemBlockSize should make much of a performance difference, but
perhaps I'm wrong?
I was also able to shave off a fair amount of time by optimizing my
stored procedure a bit. I had tried to optimize by adding a stored
column holding my sort key and indexing it, which turned out to be a
very bad idea. Going back to a computed column helped a bunch. The
MySQL folks had some good info about optimizing for large databases
that won't fit into RAM.
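In case it's useful to anyone else, the change was essentially this
(table, column, and index names here are made up for illustration):

```sql
-- What I tried first: a stored sort-key column plus an index on it.
-- Every update then has to maintain both the column and the index,
-- which made the bulk updates much slower.
ALTER TABLE readings ADD sort_key BIGINT;
CREATE INDEX idx_readings_sort ON readings (sort_key);

-- What I went back to: drop the stored column and its index, and use
-- a computed column instead. It's evaluated on read, so updates no
-- longer touch an extra column or index.
DROP INDEX idx_readings_sort;
ALTER TABLE readings DROP sort_key;
ALTER TABLE readings ADD sort_key COMPUTED BY (station_id * 100000 + sample_no);
```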
I will definitely make sure my /tmp is mounted as tmpfs as well; one of
the other machines I was testing with was configured that way, but this
one is not.
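Something like this is what I have in mind (the mount size and paths
are just examples, not what I'm actually running):

```
# /etc/fstab -- mount /tmp as a RAM-backed tmpfs (size is an example)
tmpfs   /tmp   tmpfs   size=2g   0 0

# firebird.conf -- point Firebird's sort files at the tmpfs mount
TempDirectories = /tmp
```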
I'm still getting fairly slow write performance, however... I would
hope five disks in RAID-0 could update 4.5 million records in less than
ten minutes or so, but apparently not (ten minutes for 4.5 million rows
works out to only about 7,500 row updates per second).
Thanks
--erik