Subject Re: [firebird-support] What the best big table or several small table
Author Vander Clock Stephane
> > i have one table with a lot of indexes (around 30, around 20
> > million records)
>
> 30 indexes. And are they all useful? How selective are they?
>

Yes, all of them are necessary :( but I think I will start a new thread
about the indexes, because I have some more questions that could be
interesting for someone else.


>
> > We access the data only through queries like
> >
> > select first 200 skip 0 .... from MyTable where c = XY and d = ZW
> > and a > x and b < Z ... ORDER BY c, d
> >
> > what is best:
>
> Keep an eye on the execution plan and the I/O stats, in the monitoring
> tables or via the trace API.
>
> Btw, you know that with the usage of FIRST above, you probably won't get
> your expected result?
>
> You probably want to do:
>
> select first 200 skip 0 ... from (
> select ... from ... order by c, d
> )
>

Hmm, I don't understand, where is the difference?
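
To be sure I follow, here is how I read the two forms side by side
(MyTable and the filter values are only placeholders from my first mail):

-- the form I use today: FIRST/SKIP written directly on the query
select first 200 skip 0 *
from MyTable
where c = 'XY' and d = 'ZW' and a > 1 and b < 9
order by c, d;

-- the form you suggest: FIRST/SKIP applied on top of an already
-- ordered derived table
select first 200 skip 0 *
from (select *
      from MyTable
      where c = 'XY' and d = 'ZW' and a > 1 and b < 9
      order by c, d);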


> > 1/ One table with lots of records?
> > 2/ Several tables with fewer records?
>
> Things get more complicated with several tables, IMHO. So if you don't
> have a strong reason, e.g. maintenance- or performance-wise, go with
> one table.
>
> Basically, Firebird can handle tables with 20 million records. It
> largely depends on how well optimized the access is.
>
> > because I can easily split the rows into 10 different tables
> >
> > also, does an empty or rarely used table use Firebird resources?
>
> Sorry, I don't understand this question.
>

1. OK, let's say I am a real estate website and I have properties to sell
all over the world.
For this I must allow users to see all the properties in one location (for
example New York)
but also in the locations inside it (for example Manhattan).

For this I set up my location ID as a string, like this:

for US > NEW YORK(State) > NEW YORK(City) the id will be
US-0001234-0001245

for US > NEW YORK(State) > NEW YORK(City) > Manhattan the id will be
US-0001234-0001245--0001298

etc...
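
For reference, the location is simply stored as a varchar column. A
simplified sketch of my table, with example names only (the real one has
many more columns):

create table PROPERTY (
    PROPERTY_ID integer not null,   -- identifier of the real property
    LOCATION    varchar(60) not null,
    NB_BEDROOM  integer,
    NB_ROOM     integer,
    SURFACE     numeric(9,2)
);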

Now in my query, if a user wants to see all the properties in Manhattan,
I simply do
Where Location like 'US-0001234-0001245--0001298%';

AND to see all the properties in New York City (including Manhattan)
Where Location like 'US-0001234-0001245%';

AND to see all the properties in the US
Where Location like 'US%';

So it's OK, BUT the problem is that I also need to add some other filter
criteria (like nb_room, surface, etc.).

And for a query like

where Location like 'US%' and nb_bedroom > 2 and nb_bedroom < 3

I am in trouble: because of the "like" I cannot use the multiple-column
index on (LOCATION, NB_BEDROOM) ...
I can use only the single-column index on location + the single-column
index on nb_bedroom
(and believe me, the speed is not the same).
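
To make it concrete, the indexes I am talking about look like this
(PROPERTY and the index names are just examples, not my real schema):

create index IDX_PROPERTY_LOCATION on PROPERTY (LOCATION);
create index IDX_PROPERTY_BEDROOM  on PROPERTY (NB_BEDROOM);
create index IDX_PROPERTY_LOC_BED  on PROPERTY (LOCATION, NB_BEDROOM);

/* with the prefix match on LOCATION, this query ends up using only the
   single-column indexes, as I described above */
select * from PROPERTY
where LOCATION like 'US%' and NB_BEDROOM > 2 and NB_BEDROOM < 3;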

So for this I simply "duplicate" the rows in the database!

For every property with Location 'US-0001234-0001245--0001298' I create
4 rows in the database, with these locations:

Row 1: US
Row 2: US-0001234
Row 3: US-0001234-0001245
Row 4: US-0001234-0001245--0001298

In this way I can easily use my multiple-column index for the query
where Location = 'US' and nb_bedroom > 2 and nb_bedroom < 3
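
In SQL terms the duplication looks roughly like this (PROPERTY and
PROPERTY_ID are again only example names):

/* the same property stored once per location level */
insert into PROPERTY (PROPERTY_ID, LOCATION, NB_BEDROOM)
values (1, 'US', 3);
insert into PROPERTY (PROPERTY_ID, LOCATION, NB_BEDROOM)
values (1, 'US-0001234', 3);
insert into PROPERTY (PROPERTY_ID, LOCATION, NB_BEDROOM)
values (1, 'US-0001234-0001245', 3);
insert into PROPERTY (PROPERTY_ID, LOCATION, NB_BEDROOM)
values (1, 'US-0001234-0001245--0001298', 3);

/* LOCATION is now an equality, so the multiple-column index on
   (LOCATION, NB_BEDROOM) can serve both conditions */
select * from PROPERTY
where LOCATION = 'US' and nb_bedroom > 2 and nb_bedroom < 3;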

But now you understand ... I can either keep one single table for all the
rows OR split these rows into 4 different tables:

TABLE_LOCATION_LEVEL1
Row 1: US

TABLE_LOCATION_LEVEL2
Row 2: US-0001234

TABLE_LOCATION_LEVEL3
Row 3: US-0001234-0001245

TABLE_LOCATION_LEVEL4
Row 4: US-0001234-0001245--0001298


But before doing so, I just need to know the cost
of having multiple tables instead of one big single table!
These tables will also be in different databases / servers to reduce the
load on the main database.
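
With the split, the application would simply pick the table that matches
the depth of the requested location, for example (table names as in the
list above, columns as in my earlier examples):

/* country level: all properties in the US */
select * from TABLE_LOCATION_LEVEL1
where LOCATION = 'US' and NB_BEDROOM > 2 and NB_BEDROOM < 3;

/* fourth level: all properties in Manhattan */
select * from TABLE_LOCATION_LEVEL4
where LOCATION = 'US-0001234-0001245--0001298'
  and NB_BEDROOM > 2 and NB_BEDROOM < 3;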
