Subject | Re: [firebird-support] Re: Why it's soo slow ? it's just a very simple select ... |
---|---|
Author | Mark Rotteveel |
Post date | 2012-03-01T13:53:41Z |
On Thu, 01 Mar 2012 12:38:15 -0000, "nathanelrick" <nathanelrick@...>
wrote:
> OK, doing more tests: I changed the page size to 16K (instead of 8K
> before), same table, and now
>
> CREATE TABLE DESCRIPTIONS
> (
>   IDObj VARCHAR(15) NOT NULL,
>   type SMALLINT NOT NULL,
>   Lang VARCHAR(3) NOT NULL,
>   Descr VARCHAR(10000),
>   PRIMARY KEY (IDObj, type, Lang)
> );
>
> and a foreign key on IDOBJ
>
> With 32 million rows, then
>
>   select IDObj from description where IDObj=randomID
>
> with 50 simultaneous threads takes around 80 ms to return!!
> With 1 thread it takes around 8 ms to return!!
>
> So I don't understand WHY it is so much faster with a page size of 16K??
> The index is just a single-column index on a VARCHAR(15) field... but it
> is still slower (around 2x slower) than the same query on a table without
> any VARCHAR(10000) fields and with the same amount of records. This makes
> me crazy, because the select doesn't use this VARCHAR(10000) field at all!

Quite simple: with a field of VARCHAR(10000) on 8K pages it needs to read
at least two pages if the VARCHAR is filled for over 80%; for smaller
VARCHARs there is still a relatively high chance that it will need to read
two pages. With a 16K page size this chance is lower, but still relatively
high (especially if the field is filled to a large percentage). If there
are multiple record versions that need to be processed, even more pages
need to be read. Reading more pages => more IO => more time.
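
As a side note, you can check the page size a database was actually created
with from the monitoring tables (available since Firebird 2.1), for example:

  -- Returns the page size (in bytes) of the current database
  SELECT MON$PAGE_SIZE FROM MON$DATABASE;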
BTW: Try using BLOBs instead of VARCHAR(10000); it might reduce this
problem.
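
For example, a minimal sketch of the same table with the description as a
text BLOB (larger BLOB values are stored on separate blob pages, so the
records on the data pages stay small; whether it helps in practice should
be measured):

  CREATE TABLE DESCRIPTIONS
  (
    IDObj VARCHAR(15) NOT NULL,
    type SMALLINT NOT NULL,
    Lang VARCHAR(3) NOT NULL,
    Descr BLOB SUB_TYPE TEXT, -- instead of VARCHAR(10000)
    PRIMARY KEY (IDObj, type, Lang)
  );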
Mark