| Subject | Re: Long table or many tables? |
|---|---|
| Author | epedder |
| Post date | 2009-10-09T21:03:21Z |
Hello Lester,
> > Is it that handling many tables imposes a performance
> > penalty? Is it that the data is effectively stored
> > the same way anyway? Or perhaps the cache system is
> > less efficient with this design? Any of this information
> > would help me in the early stages of planning a possible
> > replacement design in the future.
>
> The 'volume' of data will not change however you organize it,
> so if you have 100 users, then adding 100 table headers and
> 100 sets of indexes DOES add to the overheads. A single table and a
> single set of indexes will much simplify the 'metadata' lookup stages.
> When LOOKING for data, it is stored in pages, so once you
> identify the page the record is on, it can be read.
> Accessing that page via multiple tables just means you have to
> cache multiple 'starting points', as it does not really matter whether
> you do find user table -> find record, or simply find record in the
> current table. It takes the same amount of time to access once
> you have the 'page number', however you get it. Data is not
> stored in the same block of space for each small table; it can be
> anywhere in the 'database file'.

That's exactly what I wanted to know.
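For anyone following along, here is a minimal sketch of the single-table design being described, using SQLite for brevity. All table and column names are illustrative, not from our actual system: one `messages` table with a composite index replaces the per-user tables, so the planner narrows to a user's rows through one set of metadata rather than one per user.

```python
import sqlite3

# Single shared table instead of one table per user (names are hypothetical).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE messages (
    id      INTEGER PRIMARY KEY,
    user_id INTEGER NOT NULL,
    posted  TEXT    NOT NULL,
    subject TEXT,
    body    TEXT
);
-- One index serves every user; lookups go straight from (user_id, posted)
-- to the page holding the row, with a single set of table metadata cached.
CREATE INDEX idx_messages_user_posted ON messages (user_id, posted);
""")
conn.execute(
    "INSERT INTO messages (user_id, posted, subject, body) VALUES (?, ?, ?, ?)",
    (7, "2009-10-09", "hi", "hello world"))

# Per-user access is just a WHERE clause on the shared index.
rows = conn.execute(
    "SELECT subject FROM messages WHERE user_id = ? ORDER BY posted",
    (7,)).fetchall()
print(rows)  # [('hi',)]
```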
> One thought would be - is the message content in a large character
> field or in a blob? Keeping large text fields in blobs helps in
> these sorts of cases, as the actual record can be a lot smaller,
> and more will fit on a 'page' in storage. You only need to access
> the 'content' once you have identified the header records.

A good point, but we need to be able to order results on those text
fields. Though it may be worth benchmarking splitting the text fields
into short VARCHARs for indexing, with the rest in a BLOB.
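Something like the following is what I would benchmark - again a sketch, with hypothetical names and an arbitrary 64-character prefix length: a short indexable prefix stays in the narrow header row for ORDER BY, while the full text lives in a separate BLOB table that is only read once a header is chosen.

```python
import sqlite3

# Arbitrary prefix length for the indexable portion (an assumption to tune).
PREFIX_LEN = 64

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE msg_header (
    id          INTEGER PRIMARY KEY,
    body_prefix TEXT NOT NULL        -- short, kept in the narrow header row
);
CREATE TABLE msg_body (
    id        INTEGER PRIMARY KEY REFERENCES msg_header(id),
    full_body BLOB NOT NULL          -- large content, read only when needed
);
CREATE INDEX idx_header_prefix ON msg_header (body_prefix);
""")

def insert_message(text: str) -> int:
    # Store the sortable prefix in the header, the whole text as a BLOB.
    cur = conn.execute("INSERT INTO msg_header (body_prefix) VALUES (?)",
                       (text[:PREFIX_LEN],))
    msg_id = cur.lastrowid
    conn.execute("INSERT INTO msg_body (id, full_body) VALUES (?, ?)",
                 (msg_id, text.encode()))
    return msg_id

insert_message("banana " + "x" * 500)
insert_message("apple " + "y" * 500)

# Ordering touches only the narrow header rows; the BLOBs stay unread,
# so far more header records fit per page.
ordered = [r[0] for r in conn.execute(
    "SELECT body_prefix FROM msg_header ORDER BY body_prefix")]
print([p.split()[0] for p in ordered])  # ['apple', 'banana']
```

The caveat is that ordering is only correct up to the prefix length, which is why it needs benchmarking against real data.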
Thanks very much for your insight,
- Elric