Subject | Re: JayBird and large attachment
---|---
Author | Roman Rokytskyy |
Post date | 2003-08-11T13:06:32Z |
Hi,
> If I correctly understood, it may be better to disable autocommit.

In general - yes.
> So, when I'm querying a huge database with a SELECT statement that
> returns hundreds of thousands of rows, when should I commit? Between
> each row retrieval, or after some heuristic, predefined number (10,
> 100, 1000...)? What is the best practice?

Why do you need to commit at all? For read-only selects you do not
need to commit. Firebird is an RDBMS with optimistic concurrency,
which means that readers do not block writers. So, as long as you do
not modify the row, you will not get any lock conflict there.
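For read-only work it can look like this (just a sketch; the JDBC URL,
credentials and query are placeholders, and the table is the one from
the example further down):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ReadOnlyScan {
    public static void main(String[] args) throws Exception {
        Class.forName("org.firebirdsql.jdbc.FBDriver");
        Connection con = DriverManager.getConnection(
            "jdbc:firebirdsql:localhost/3050:/path/to/db.fdb", // placeholder
            "SYSDBA", "masterkey");
        con.setAutoCommit(false); // one transaction for the whole scan
        Statement stmt = con.createStatement();
        ResultSet rs = stmt.executeQuery(
            "SELECT id, document_name FROM atable");
        while (rs.next()) {
            // no commit between rows - readers do not block writers
            System.out.println(rs.getInt(1) + " " + rs.getString(2));
        }
        rs.close();
        stmt.close();
        con.commit(); // a single commit at the very end is enough
        con.close();
    }
}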
> Our database is composed of rows storing an XML file within a BLOB,
> which may be huge.

Blobs are stored separately from the row data (only a 64-bit blob ID is
stored together with the record). This means that a blob is transferred
only when you try to access its data. So, if you have 100,000 rows in
the table
CREATE TABLE atable (
  id INTEGER NOT NULL PRIMARY KEY,
  document_name CHAR(20) NOT NULL,
  xml_content BLOB SUB_TYPE 1
)
you will transfer 100,000 x 32 bytes (4-byte INTEGER + 20-byte CHAR +
8-byte blob ID), i.e. ~3.2 MB of table data, plus the size of those
blobs you actually ask for (if you load all blobs to the client, that
will be ~100,000 x avg. XML size).
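In JDBC terms the lazy transfer looks roughly like this (a sketch;
isInteresting() is a made-up filter):

Statement stmt = con.createStatement();
ResultSet rs = stmt.executeQuery(
    "SELECT id, document_name, xml_content FROM atable");
while (rs.next()) {
    String name = rs.getString("document_name"); // comes with the row
    if (isInteresting(name)) { // hypothetical filter
        // only now is the blob content transferred from the server
        java.io.InputStream xml = rs.getBinaryStream("xml_content");
        // ... read the XML ...
        xml.close();
    }
}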
The engine groups records so that each TCP packet carries as much
payload as possible (with a 1500-byte packet size the engine will send
~40 records per packet). Statement.setFetchSize(int) also works
correctly, which lets you use the TCP stack more efficiently.
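For example (a sketch; 500 is an arbitrary batch size):

Statement stmt = con.createStatement();
stmt.setFetchSize(500); // hint: fetch up to 500 rows per round trip
ResultSet rs = stmt.executeQuery("SELECT id, document_name FROM atable");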
In auto-commit mode the blobs are fetched to the client for each record
too. First, this is a clear overhead; second, the network usage pattern
might not be optimal.
Best regards,
Roman Rokytskyy