Subject | Re: [firebird-support] Ramdrive |
---|---|
Author | unordained |
Post date | 2003-07-06T06:21:35Z |
Bernard: random comments (i'm just a firebird user) --
a) watch out for the sort file. i've noticed rather large sort files in /tmp/ on my linux box (hey, the
query wasn't done yet, okay?) and you'd want -those- in memory too, so the sort can run quickly.
maybe. (assuming newer versions of firebird still spill sorts to disk like this ... i'm lost somewhere in the past.)
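(fwiw, if the sort-spill location matters, newer firebird lets you point temp files somewhere specific via firebird.conf -- the parameter below assumes firebird 1.5 or later, and the mount path is just a made-up example:)

```
# firebird.conf -- direct sort/temp files to a chosen location,
# e.g. a ramdrive mount, if you accept losing them on a crash
# (they're scratch data anyway)
TempDirectories = /mnt/ramdisk/fbtmp
```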
b) just hope the shadow file (and the database itself) is required to be flushed entirely to disk before commit()
returns successfully. you'd hate to have transactions lost in lala land because your RAM instantly
said the write worked, right before the machine crashed. (again, i dunno what i'm talking about.)
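(for what it's worth, this is what firebird calls "forced writes", and you can toggle it per-database with gfix -- the sketch below assumes a unix install, and the database path is hypothetical:)

```
# turn forced (synchronous) writes on, so commit() doesn't return
# until the OS reports the pages written:
gfix -write sync /data/mydb.fdb

# the async alternative is faster, but risks exactly the
# lost-in-lala-land transactions described above:
#   gfix -write async /data/mydb.fdb
```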
i'm interested in seeing where this goes -- our db is only about 300 megs, and i've got at least a
gig of ram on that machine. hate to waste it. there are a few table scans that occur because the
optimizer gets confused (i think nobody believes me, but a table-self-join across a parent_item_id
has sent our version of firebird spinning for hours ... something about the two sets of items being
large, and it deciding, index-wise, to pre-fetch everything ... a sub-query might sometimes run
faster.) a ram drive is probably the worst solution to the problem, but it's still a possible
solution. i'm sure there are better reasons out there, somewhere.
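(to make the self-join complaint concrete, here's a sketch of the shape of query i mean -- the schema and column names are hypothetical, and whether the rewrite actually wins depends on your data and firebird version:)

```sql
-- hypothetical schema: items(item_id, parent_item_id, some_flag, ...)
-- the self-join form that sends the optimizer spinning:
SELECT c.*
FROM items c
  JOIN items p ON c.parent_item_id = p.item_id
WHERE p.some_flag = 1;

-- the sometimes-faster rewrite as a correlated subquery:
SELECT c.*
FROM items c
WHERE EXISTS (SELECT 1 FROM items p
              WHERE p.item_id = c.parent_item_id
                AND p.some_flag = 1);
```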
regardless, your commit()s are going to need to take a while if you want them to work right. that's
one thing you don't want your ramdrive optimizing out of existence.
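(to make that last point concrete, here's a minimal sketch -- plain python, nothing firebird-specific -- of what a durable commit path has to do, and why a ramdrive makes the fsync fast but hollow:)

```python
import os
import tempfile

def durable_write(path, data):
    # roughly what a database must do before commit() may return:
    # write, flush userspace buffers, then fsync so the OS pushes
    # the pages all the way to the device.
    with open(path, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())
    # on a ramdrive/tmpfs that fsync is nearly free -- and nearly
    # meaningless, since the "stable storage" vanishes with the power.

tmpdir = tempfile.mkdtemp()
log = os.path.join(tmpdir, "commit.log")
durable_write(log, b"transaction committed\n")
print(open(log, "rb").read().decode(), end="")
```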
-philip