Subject: Re: [firebird-support] Database for images
Author: Alexandre Benson Smith
Post date: 2007-05-08T19:47:40Z
Aage Johansen wrote:
> Since very few images are less than 16KB (they are mostly 50KB and
> more) most blobs would span several pages

OK, so you will have no problem with blobs being stored on the data
pages and reducing the density of records per data page. That's good
(IMHO).

> and I reasoned that the
> fewer pages the better.

I think it makes little difference; I will talk more about this below...
> If an image is 90KB, at least 6 pages (@ 16KB) must be read, while
> for 8KB pages at least 12 pages will be read.

In both cases 96KB will be read. If your disk cluster size is 16KB,
only one cluster is read per 16KB page; if your cluster size is 8KB,
two clusters are read. On the other hand, if your cluster size is 16KB
and your page size is 8KB, the OS will read in 16KB multiples
regardless of the number of pages the DB needs, so every time you read
an odd number of pages you "lose" 8KB of unnecessary read/write.
I have no numbers on this, and I don't know exactly how to configure it
on Linux, but in my mind the optimum scenario is for the disk cluster
size to match the database page size. My customers' DBs are quite small
(around 2GB), so I have never needed to go that deep into performance
optimization; taking care of the plans was always enough. But in your
case I think every bit of improvement should be significant.
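
To make the arithmetic concrete, here is a small back-of-the-envelope
sketch (Python, purely illustrative; the 90KB blob and the page/cluster
sizes come from your example, and it assumes the worst case where every
page lands in a different cluster):

  import math

  def cost(blob_kb, page_kb, cluster_kb):
      # Pages the engine must fetch to cover the blob.
      pages = math.ceil(blob_kb / page_kb)
      # The OS reads whole clusters, so in the worst case each page
      # read is rounded up to a full cluster.
      kb_per_page = math.ceil(page_kb / cluster_kb) * cluster_kb
      return pages, pages * kb_per_page

  for page_kb, cluster_kb in [(16, 16), (8, 8), (8, 16)]:
      pages, kb = cost(90, page_kb, cluster_kb)
      print(f"page={page_kb}KB cluster={cluster_kb}KB: "
            f"{pages} pages, up to {kb}KB read")

With matching sizes you read 96KB either way; only the mismatched
8KB-page/16KB-cluster case can waste an extra 8KB per scattered page.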
> Will the total time for reading the pages only depend on the
> blob size, and be independent of the number of pages?
>

I think what matters, in fact, is the total number of KB read (or, in
other words, the total number of disk clusters read). But on the other
hand, if you have 16KB pages, "every" 16KB of data that you need to
read/write is a contiguous stretch of disk, while with 8KB pages, when
you read 16KB you could potentially read page number 100 and page
number 505 of the database, which could be far away from each other on
the disk, so performance would decrease.
Again, I have no facts or real numbers about this, but in my mind that
is how it would work.
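
If you want to put rough numbers on that intuition, a crude model (the
seek time and transfer rate below are assumptions I picked for a
typical disk, not measurements) looks like this:

  SEEK_MS = 8.0          # assumed average seek time
  XFER_MS_PER_KB = 0.02  # assumed transfer cost (~50MB/s)

  def worst_case_ms(blob_kb, page_kb):
      pages = -(-blob_kb // page_kb)  # ceiling division
      # Worst case: every page lives somewhere else on the disk,
      # so each one costs a seek plus its own transfer time.
      return pages * (SEEK_MS + page_kb * XFER_MS_PER_KB)

  for page_kb in (16, 8):
      print(f"{page_kb}KB pages: ~{worst_case_ms(90, page_kb):.0f}ms worst case")

Seeks dominate, so the same 96KB costs roughly twice as much when it is
split across twice as many scattered pages; if the pages happen to be
contiguous, the difference mostly disappears.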
>
> BTW:
> Testing backup and restore of the database with 140.000 images -
> 7.5GB - which is about 1/40 of the total shows that backup takes
> about 7-8 minutes (and so did restore). This means that backup (or
> restore) of the full database will take about 5-6 hours. No problem
> at all. (Don't worry - be happy!).
> The restored database was just 1MB smaller than the original (7.5GB total)
>

Anyway, I think NBACKUP would be a very good choice if you can go to FB
2.0; the amount of changed data will be quite small compared to a full
backup.
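
Just as a sketch of what an incremental cycle could look like (the file
names are placeholders; check the nbackup docs for the exact options in
your version):

  nbackup -B 0 /data/images.fdb /backup/images.nbk0   (full backup, e.g. weekly)
  nbackup -B 1 /data/images.fdb /backup/images.nbk1   (changes since level 0, e.g. daily)

Level 0 is a full physical copy; each higher level stores only the
pages changed since the previous level, which is why the daily files
should stay small.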
Was the speed you are talking about from a local gbak?
Did you back up to the same disk/volume? If so, will your volume have
enough space to run a full gbak once the database reaches its full
size?
Did you measure backing up to a different volume/controller/etc.? (I
think it should be a bit faster reading from one and writing to
another.)
see you !
--
Alexandre Benson Smith
Development
THOR Software e Comercial Ltda
Santo Andre - Sao Paulo - Brazil
www.thorsoftware.com.br