Subject RE: [firebird-support] Re: Firebird Usage Load Problem
Author Rick Debay
> I am sweeping my database every day through cron now.

Your last post showed:

Oldest transaction 1569535
Oldest active 1569536
Oldest snapshot 1569533
Next transaction 1584914

That's a difference of 15381 (1584914 - 1569533) between your oldest
snapshot and your next transaction.
If you sweep every night, either your system is really busy or there is
a rogue client that's keeping a transaction open.

I notice that you commit after every statement; perhaps you should
commit only occasionally instead:

cursor = connection.cursor()
i = 0
while not done:
    cursor.execute(<sql statement>)
    i += 1
    if i == 10000:
        connection.commit()  # commit on the connection, per the Python DB-API
        i = 0

This should help things a lot by not committing the work after every
statement.
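
If it helps, here is a self-contained sketch of that batching pattern
against kinterbasdb. The DSN, credentials, table, columns, and the data
list are all made-up placeholders, and 10,000 is just a starting batch
size:

import kinterbasdb

# Everything below uses made-up names (your.fdb, some_table, the data
# list) purely to illustrate batched commits; substitute your own.
connection = kinterbasdb.connect(dsn='host:/path/to/your.fdb',
                                 user='sysdba', password='masterkey')
cursor = connection.cursor()

data = [('a', 1), ('b', 2)]           # stand-in for your real input rows
insert_sql = "INSERT INTO some_table (col1, col2) VALUES (?, ?)"

BATCH_SIZE = 10000
count = 0
for line in data:
    cursor.execute(insert_sql, line)  # qmark parameters, per the DB-API
    count += 1
    if count == BATCH_SIZE:
        connection.commit()           # one commit per batch, not per row
        count = 0

connection.commit()                   # commit whatever is left over
cursor.close()
connection.close()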

Out of curiosity, why would Python care whether you were doing a join?
Does kinterbasdb deconstruct your query? It's strange that the SQL
itself would cause an error. Perhaps the FB engine is throwing an error
when asked to prepare the SQL, and Python is having trouble returning
the error code.
What happens if you run the join statement from isql?
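
For reference, here is a rough sketch of what the joined query might
look like; table_a, table_b, a_id, and payload are made-up names since
I don't have your DDL, and I'm presuming the tables are linked by a
foreign key. You can paste the same SELECT into isql to see whether the
engine itself complains:

import kinterbasdb

connection = kinterbasdb.connect(dsn='host:/path/to/your.fdb',
                                 user='sysdba', password='masterkey')
cursor = connection.cursor()

# Join the two (hypothetical) tables on the presumed foreign key a_id,
# replacing the fetch-all-keys-then-loop-and-select pattern.
cursor.execute("SELECT a.a_id, b.payload "
               "FROM table_a a "
               "JOIN table_b b ON b.a_id = a.a_id")

for row in cursor.fetchall():
    pass                              # per-record processing goes here

connection.commit()
cursor.close()
connection.close()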

-----Original Message-----
From: firebird-support@yahoogroups.com
[mailto:firebird-support@yahoogroups.com] On Behalf Of Maurice Ling
Sent: Thursday, July 14, 2005 7:50 PM
To: firebird-support@yahoogroups.com
Subject: [firebird-support] Re: Firebird Usage Load Problem

--- In firebird-support@yahoogroups.com, "Rick Debay" <rdebay@r...>
wrote:
> > ...get a list of primary keys from A table using the
> > kinterbasdb.cursor.execute() method and roll out as a Python list
> > using kinterbasdb.cursor.fetchall().
> > For each of the elements in the list (primary keys), it is used to
> > pull out a record in another table for processing
>
> Instead of getting a list and looping through the list selecting from
> another table, select directly from a join between the two tables,
> which presumably are connected by a foreign key.
> Can you post the DDL for the two tables, including foreign keys and
> indexes, and the plan used by the optimizer?

I've tried to do an inner join before, but I get a kernel error through
Python (from the vm_allocate() call) on my Mac system. This issue has
been documented, and Python does not have a vm_allocate() call anywhere
in its own code, which is why I used this workaround to get all my
records.

>
> > ...insert the results (usually a few records) into another table.
> > This process is repeated about 500k times.
>
> Does this need to be one transaction, or can you commit every 10,000
> items?

KinterbasDB keeps transactions as transparent as possible. In general
usage, a transaction is not even a concept you need to worry about.

What I did is this (Python code):

import kinterbasdb

connection = kinterbasdb.connect(<connection string>)
cursor = connection.cursor()

# inserting data, for example
for line in data:
    cursor.execute(<sql statement>)
    cursor.commit()

# when I'm done
cursor.commit()  # final commit, just in case
cursor.close()
connection.close()

>
> Since the sysadmin installed it just for you, do they do any
> maintenance such as sweeps or index selectivity calculations?
> I'm guessing that regular maintenance is done on the other databases,
> including vacuuming the Postgres, your other MGA database.

I am sweeping my database every day through cron now. It doesn't seem
to help very much.

cheers
maurice