Subject | Re: [firebird-support] why Blob is so slow ? |
---|---|
Author | Tupy... nambá |
Post date | 2012-04-19T16:18:44Z |
MSSQL has two DBCC commands that allow defragmentation. Defragmentation is not garbage collection: it means putting all the parts of an object (files or columns, depending on the level, disk or database) side by side, so that reads become fast because all the data is found close together. Normally, this is the way to get quick reads. Garbage collection, on the other hand, is the removal of deleted data.
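For context, a minimal sketch of what those commands look like in practice, assuming SQL Server 2000-era tooling; presumably the two commands meant are DBCC INDEXDEFRAG and DBCC DBREINDEX, and the database, table and index names here (MyDatabase, Invoices, PK_Invoices) are made up:

```sh
# Report fragmentation for a table first (hypothetical names throughout):
osql -E -d MyDatabase -Q "DBCC SHOWCONTIG ('Invoices')"

# Defragment one index online:
osql -E -d MyDatabase -Q "DBCC INDEXDEFRAG (MyDatabase, 'Invoices', 'PK_Invoices')"

# Rebuild the table's indexes completely (takes locks, but defragments fully):
osql -E -d MyDatabase -Q "DBCC DBREINDEX ('Invoices')"
```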
From a quick read of some PostgreSQL pages, VACUUM seems to be the defragmentation command for PostgreSQL.
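As a point of reference, plain VACUUM mostly reclaims dead rows (closer to garbage collection), while VACUUM FULL also rewrites tables compactly (closer to defragmentation); a small sketch with a made-up database name:

```sh
# Reclaim dead rows (garbage-collection-like):
psql -d mydb -c "VACUUM;"
# Rewrite tables compactly (defragmentation-like):
psql -d mydb -c "VACUUM FULL;"
```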
Since you know that you can defragment a Firebird database by doing a backup/restore, you can restore the database and compare the read times in the two situations. If you get a meaningful increase in read speed (SELECTs and so on) after the restore, it means your problem is heavy fragmentation.
Also, after that restore you can take a new backup and do a second restore, and see whether the time drops. The first restore should take a long time, but the second should not, because the second backup stores already-defragmented data. A sketch of the whole test follows.
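Here is how that test could look with Firebird's standard command-line tools; the paths, credentials and the query.sql file are placeholders:

```sh
# Time a representative query on the current, possibly fragmented database:
time isql -user SYSDBA -password masterkey /data/mydb.fdb -i query.sql

# Back up, then restore into a fresh (defragmented) database file:
gbak -b -user SYSDBA -password masterkey /data/mydb.fdb /data/mydb.fbk
gbak -c -user SYSDBA -password masterkey /data/mydb.fbk /data/mydb_restored.fdb

# Run the same query against the restored copy and compare the timings:
time isql -user SYSDBA -password masterkey /data/mydb_restored.fdb -i query.sql
```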
If you can, give it a try... until now, all I have are theories. Your results will be interesting for all of us.
--- On Thu, 4/19/12, Alexandre Benson Smith <iblist@...> wrote:
I used MSSQL 6.5 (yes, it's a long time ago), so I can't comment on the
need for defragmentation.
I don't know Postgres, but I think VACUUM is similar to FB garbage
collection.
There is a way to defragment FB, a backup/restore, but I don't think
it's needed; at least I have never had the need for such an operation.
A big blob will be stored in a bunch of pages that tend to be
contiguous at the end of the file (yes, I know unused pages are reused),
so I don't think that's the reason.
A typical NFE would be around 10 KB. Depending on the page size, it
could be stored with the record, or stored in two blob pages with just
the blob id on the record page. Anyway, I prefer to have a separate
table to hold the blobs: in my case the blobs are not accessed very
often, so I prefer to fit as many records per page as I can and read a
separate table (and therefore page) for the blob contents when I need
them.
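A minimal sketch of that separate-blob-table layout, in Firebird DDL with made-up table and column names:

```sh
# Hypothetical schema: keep the NFE XML in its own table so the main
# table's data pages hold as many records as possible.
isql -user SYSDBA -password masterkey /data/mydb.fdb <<'SQL'
CREATE TABLE NFE (
    NFE_ID    INTEGER NOT NULL PRIMARY KEY,
    ISSUED_AT TIMESTAMP
);
CREATE TABLE NFE_XML (
    NFE_ID  INTEGER NOT NULL PRIMARY KEY REFERENCES NFE (NFE_ID),
    XML_DOC BLOB SUB_TYPE TEXT
);
SQL
```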
It's good to read your thoughts; I am just discussing the options :)
see you !