Subject: RE: [firebird-support] Re: select count(*) took time
> I'm just evaluating firebird to see if it is a viable solution vs Oracle.

If you need Oracle-specific features, look at a project named Fyracle, which adds Oracle features to Firebird.
> Is there any way to spread a firebird database over several files?

Yes, but there is no need to.
> I execute over a loop and commit every 100 records.

Commit only where your logic requires it (does it make sense in your case to commit half of the records?), not as a performance tweak. If transaction size is the cause of any performance issue, there is a flag that can affect this, but the effect won't be very large.
> why select count(*) in firebird is too slow

Because every record must be touched to determine whether your transaction can see it. In your case, what is the need for count(*)?
> basically it is a prepared statement

What language was your example in? It looks like it would re-prepare the statement every time.
From: firstname.lastname@example.org [mailto:email@example.com] On Behalf Of Hoang-Vu PHUNG
Sent: Tuesday, May 23, 2006 3:08 PM
Subject: RE: [firebird-support] Re: select count(*) took time
Well, my table contains more than 100 columns, with a record size of about 5 kilobytes. So it is rather big, but not that big.
I just use Jaybird inside Jython. I'm just evaluating Firebird to see if it is a viable solution vs Oracle.
My Oracle database is about 15 gigabytes. Is there any way to spread a Firebird database over several files?
Regarding my insertions: basically it is a prepared statement, and then I execute it in a loop and commit every 100 records. The code, roughly, is:
sqlInsert = "insert into bigtable (a1, ..., an) values (?, ..., ?)"
cursor = db.cursor()
# loop, committing every 100 records:
cursor.execute(sqlInsert, params)
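The batched-commit loop described above can be sketched as follows. This is a generic Python DB-API sketch, not the poster's actual Jython/Jaybird code: it uses an in-memory sqlite3 database as a stand-in for the Firebird connection, and the table name, columns, row count, and batch size are all illustrative.

```python
import sqlite3

# In-memory SQLite as a stand-in for a Firebird connection via Jaybird;
# the DB-API pattern (one prepared statement, commit every N rows) is the same.
db = sqlite3.connect(":memory:")
db.execute("create table bigtable (a1 integer, a2 text)")

sql_insert = "insert into bigtable (a1, a2) values (?, ?)"
cursor = db.cursor()

BATCH_SIZE = 100  # commit every 100 records, as in the original loop
rows = [(i, "row %d" % i) for i in range(250)]  # made-up sample data

for n, params in enumerate(rows, start=1):
    cursor.execute(sql_insert, params)
    if n % BATCH_SIZE == 0:
        db.commit()
db.commit()  # commit the final partial batch

print(db.execute("select count(*) from bigtable").fetchone()[0])  # 250
```

Note that with this pattern the statement text stays constant and only the parameters vary, which is what lets the driver reuse a prepared statement instead of re-preparing on every iteration.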
I'm still confused about why select count(*) in Firebird is so slow compared to other databases. Of course, your trick can help, but imagine having to do that for every big table... Then again, requesting the size of a file is quite fast. Finally, scanning a file of 200'000 rows to count them is also fast, even in Perl or Java...
--- Adam <s3057043@...> wrote:
--- In firstname.lastname@example.org, Hoang-Vu PHUNG <hvu_phung@...> wrote:
> Hi all,
> I have loaded about 200'000 records in a table, which makes nearly
> 800 megabytes.
> Now if I do select count(*), it takes nearly one minute to finish!
> Why???
Because every record needs to be read to know whether ** YOUR TRANSACTION ** is allowed to count it. It is not as simple as looking at how many records are physically present in the table or index, or it would be quick. Deleted records and yet-to-be-committed inserts are still in the table at that moment, so unless you expect your count to include all of those records, it had better check that you can see each one. The transaction that wrote each record version is recorded on the data page, not in the index, so if the entire table has to be read anyway, the quickest way to do it is to jump straight to the data pages and read them in storage order.
If you want an approximate count (not accurate but often enough), look
Never use count(*) when you want to check whether a record exists; that is what the keyword EXISTS does, really quickly.
Visit http://firebird.sourceforge.net and click the Resources item on the main (top) menu. Try Knowledgebase and FAQ links !
Also search the knowledgebases at http://www.ibphoenix.com