| Subject | Re: [firebird-support] why Blob is so slow ? |
|---|---|
| Author | Alexandre Benson Smith |
| Post date | 2012-04-19T15:32:55Z |
On 19/4/2012 12:13, Tupy... nambá wrote:
> Hi, Alexandre,
> For the example you gave (NFE), I agree with you: the number of files that will be generated is very large and each file itself is not so big, so they will probably not become a problem. And, in this case, they are part of a transaction. Probably not a problem, but I'm not sure; one has to make comparisons to be sure about the best solution. I was speaking in a generic way, especially where we have contracts, photos, and other non-transactional documents.
>
> But, having many NFEs (as many as there are transactions), don't you agree that these BLOBs will be a great source of fragmentation inside the DB?
> And, if my thinking is right, since Firebird doesn't have a way to defragment inside the DB, you don't have a way to resolve this.
> Maybe, to have a good solution for this kind of business, one would have to use MS SQL Server to periodically defragment the DB, or another DBMS that has this functionality. I searched for something like this in Postgres and found a command named VACUUM that does something similar. Think about all of this, if you want. If you have to have BLOBs, I think Firebird is not a good solution for a great number of them. That's my thought; you don't need to agree.
> Friendly, best regards,
> Roberto Camargo.
>
>
I had used MSSQL 6.5 (yes, it's a long time ago), so I can't comment on the
need of defragmentation.
I don't know Postgres, but I think VACUUM is similar to FB's garbage
collection.
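
Just as a side note, assuming Firebird 2.1 or later (where the MON$ tables
exist), a rough way to see how much garbage collection may be pending is to
compare the transaction counters:

    -- The wider the gap between the oldest transaction and the next
    -- transaction, the more old record versions may still be waiting
    -- to be garbage collected.
    SELECT MON$PAGE_SIZE,
           MON$OLDEST_TRANSACTION,
           MON$OLDEST_ACTIVE,
           MON$NEXT_TRANSACTION,
           MON$SWEEP_INTERVAL
    FROM MON$DATABASE;
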
There is a way to defragment a FB database (a backup/restore cycle), but I
don't think it's needed; at least I have never needed such an operation.
A big blob will be stored in a bunch of pages that tend to be contiguous at
the end of the file (yes, I know unused pages are reused), so I don't think
that's the reason.
A typical NFE would be around 10 KB. Depending on the page size, it could be
stored with the record, or in a couple of blob pages with just the blob id on
the record page. Anyway, I prefer to keep the blobs in a separate table,
because in my case the blobs are not accessed very often, so I would rather
fit as many records per page as I can and read a separate table (and
therefore separate pages) for the blob contents only when I need them.
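
To make that layout concrete, a minimal sketch of what I mean; the table and
column names are invented for the example, and the file name and 8K page size
are just one possible choice:

    -- Example only: names and page size are hypothetical.
    CREATE DATABASE 'nfe.fdb' PAGE_SIZE 8192;

    -- Main table: small rows, so many records fit on each data page.
    CREATE TABLE NFE_HEADER (
        NFE_ID    INTEGER NOT NULL PRIMARY KEY,
        ISSUED_AT TIMESTAMP,
        CUSTOMER  VARCHAR(60)
    );

    -- Separate table holding only the XML blob, read when really needed.
    CREATE TABLE NFE_XML (
        NFE_ID  INTEGER NOT NULL PRIMARY KEY REFERENCES NFE_HEADER (NFE_ID),
        XML_DOC BLOB SUB_TYPE TEXT
    );

Scanning NFE_HEADER then touches only small record pages; the blob pages
behind NFE_XML are read only when the document itself is requested.
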
It's good to read your thoughts, I am just arguing about the options :)
see you!