Subject: Re: [Firebird-Architect] Feature Request
Author: Christian Stengel
Post date: 2005-04-30T19:22:43Z
Hi Jim,

> The way to do a fast load is to load the data with indexes disabled,
> then enable the indexes. The indexes will be built using a "fast load"
> scheme from a sorted data stream, which is much faster than incremental
> index updates. This does not require additional database support.

Will this be faster than in Firebird? I am doing that currently this way.

Loading 1.000.000 records takes:
No index (first 1.000.000 records): 0m46.865s
With 1 index: 1m13.135s

Unfortunately, the more data you get in, the longer it takes: even
without any indices, the 10th batch of 1.000.000 records takes
1m12.937s. Creating the index afterwards on 10.000.000 records takes
3m16.072s - so yes, I am saving time.
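In Firebird SQL the disable/load/enable cycle looks roughly like this
(a minimal sketch; the table and index names are invented for
illustration):

    -- hypothetical staging table with a single index
    CREATE TABLE measurements (
        id      INTEGER NOT NULL,
        reading VARCHAR(32)
    );
    CREATE INDEX idx_measurements_id ON measurements (id);

    -- deactivate the index before the bulk load
    ALTER INDEX idx_measurements_id INACTIVE;

    -- ... load the records here (e.g. through the C API) ...

    -- reactivating the index rebuilds it in one pass
    ALTER INDEX idx_measurements_id ACTIVE;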
>> 3. metric calculations
>> Ann sometimes asked whether there is a need for spatial extensions
> That's what UDFs are for. Use them.

OK. But internal functions would be nicer - easier to handle, and you
can be sure that they are available on all platforms ...
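A UDF does work today; a minimal sketch of declaring one for a metric
calculation (the function, entry point and module names are invented
for illustration):

    -- hypothetical distance UDF living in a spatial_udf library
    DECLARE EXTERNAL FUNCTION distance
        DOUBLE PRECISION, DOUBLE PRECISION,
        DOUBLE PRECISION, DOUBLE PRECISION
        RETURNS DOUBLE PRECISION BY VALUE
        ENTRY_POINT 'distance' MODULE_NAME 'spatial_udf';

But that means building and shipping the library for every platform,
which is exactly the portability problem an internal function would
avoid.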
>> 4. compressed data field
> Records are already run-length compressed. The zip compression
> algorithm loses big time for short items. A 6 character string, for
> example, bloats to 118 bytes. The algorithm is space efficient only
> for large strings.

Great. In FB 1.5.2 I can edit a .gdb file with vi and see the strings
unencoded.
>> 5. Index file for external tables
> Not a chance in the world unless the external file manager does it
> itself or you wish to write your own external file manager. External
> files are useful to import or export data or to interact with an
> application that uses RMS. It is not intended to ever be an
> alternative record storage format.

Can the file manager easily be exchanged in Vulcan (e.g. without the
need of recompiling the whole engine)?

External files are slower than importing data with the C API in
current versions of Firebird; the only benefit is that they are faster
to implement (0m46.865s vs. 0m55.354s with external tables).

Yes, I know that. But when you have to deal with 35.000.000 records
per day from one source, you have to be creative, and one approach
could be: use external files as temporary storage, aggregate the data,
and throw them away after a month or so :-). And in that case a
one-time index could save time :-)
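A sketch of that temporary-storage idea in Firebird SQL (the file
path, table and column names are invented; external tables need
fixed-length text records, and the server must be configured to allow
access to the file):

    -- hypothetical external table over a fixed-length text file
    CREATE TABLE raw_import EXTERNAL FILE '/data/import/records.txt' (
        id      CHAR(10),
        payload CHAR(64),
        eol     CHAR(1)   -- holds the newline that ends each row
    );

    -- aggregate into a permanent, indexable table ...
    INSERT INTO daily_totals (id, total)
        SELECT id, COUNT(*) FROM raw_import GROUP BY id;

    -- ... then throw the temporary storage away
    DROP TABLE raw_import;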
Thanks,
Chris