Subject | Re: Memory tune-up on Firebird 1.5 Embedded
---|---
Author | Adam
Post date | 2007-01-10T22:53:18Z
> Are there any other resources that we can disable or limit without
> hurting the DB performance and functionality, given our current usage
> pattern in which the DB is being used much more as a reliable
> persistence layer than as an ongoing SELECT-driven data source?

You can't have your cake and eat it. Both approaches make sense at
different times. I agree with you that you have a valid design because:
* You have a limited amount of data that fits into an affordable
amount of memory.
* You do not need to concern yourself with concurrency.
In turn, you understand that your system will have limited
scalability. If you are running in a 32-bit environment, then 2GB is
your limit (3GB if you know the boot switch to push it). I don't know
your problem domain, but I can only assume you have considered the
implications of loading everything into memory. To solve larger
problems, you would need to take a different approach.
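For reference, the boot switch mentioned above is the /3GB flag in boot.ini on 32-bit Windows; the application must also be linked large-address-aware to benefit. The path and OS name below are only illustrative:

```
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP" /fastdetect /3GB
```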
However, I am confused as to why you need to constantly perform
selects if you have already loaded your database into memory in your
application. Surely it would be faster to use what is already in memory.
In any case, reducing the cache sizes and sort space will cause
certain operations to hit the disk where previously they may have been
able to use the in-memory cache of the DBMS. Depending on your
operations, this may or may not be significant. It would be quicker to
run some benchmarks yourself than to do the complex math needed to
theorise about what might happen.
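If you do decide to shrink them, the relevant knobs live in firebird.conf. The parameter names below are from the 1.5-era configuration file and the values are only illustrative; check both against the comments in your own firebird.conf before changing anything:

```
# Page buffers allocated per database (counted in pages, not bytes).
DefaultDbCachePages = 512

# Upper limit on memory used for in-memory sorting, in bytes;
# sorts larger than this spill to temporary files on disk.
SortMemUpperLimit = 8388608
```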
> On a different but important note: many of our customers use our
> application from a flash drive (U3 compatible): the database (fdb)
> remains on the flash drive, while the executable is temporarily
> copied to the hard drive and executed on the host machine. Are there
> any parameters that you would suggest tweaking in this scenario? The
> problems there are the ability of the customer to basically yank out
> the drive without a proper eject, which means that you cannot assume
> write operations will terminate as expected. That would be a bit
> similar to the file storage going down or going offline on a server.

To the database engine, it is like power being cut before it had a
chance to complete its writes. You have lost any uncommitted work
(obviously) and any unflushed writes. If you have disabled forced
writes, you could be in for corruption trouble, or if you're lucky you
may just lose some data you thought was committed. If you create the
database in 1.5 or higher, forced writes are enabled by default. This
maintains a 'careful write order', which means in theory that you
should be fine.
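You can check and change the forced-writes flag with the gstat and gfix utilities that ship with Firebird; the database filename here is a placeholder:

```
gstat -h mydb.fdb            # the header page attributes should include "force write"
gfix -write sync mydb.fdb    # enable forced writes
gfix -write async mydb.fdb   # disable them (not recommended on removable media)
```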
That said, your application should detect if the media is removed
while your application is running, and provide an error message that
advises the user never to do it again. After the connection to the
embedded database is closed, your application could possibly flush the
drive manually (I am sure it would be possible using some Windows API)
and then advise the user that it is safe to remove the media.
Once you disconnect from the embedded database, all the writes should
have been flushed, so even if they rip it out it should be fine.
Adam