| Subject | GBAK size limit question (again, sorry), also BLOB questions |
|---|---|
| Author | jeeggers |
| Post date | 2003-09-19T17:45:02Z |
So I think I've read most of the Q&As about the size limit for GBAK
backups and restores, but I'm still none the wiser. From what I
understand, GBAK operates on the database itself (rather than on the
files that make up the database), is that correct? If so, does the
usage of the gbak command not differ whether my database is made up of
one file or several? And does the backup result in a single file for
the entire database, or does it create files matching those that make
up the database itself?
So then, on NTFS on Win NT4 or Win2k, what is the largest database
that I can back up?
Another question, about BLOBs: I'm storing BLOBs as big as 120 MB per
row. Am I insane? When I update the record (but not the blob), does it
still create (on disk) a completely new record with a copy of the
blob (and the old record is garbage-collected in the next sweep)?
Doesn't that mean I should really keep the blobs in a table separate
from the rest of the logical record if the blob itself is never
updated?
When using the JDBC driver, is there a way to read the blob value in
chunks so that I don't have to hold the entire blob in memory at one
time? And is there an Oracle equivalent to DBMS_LOB.GETLENGTH() to
tell me the number of bytes in the blob? (e.g. SELECT
DBMS_LOB.GETLENGTH(myblobcol) FROM bar)
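(For what it's worth, at the plain-JDBC level `java.sql.Blob` already offers `length()` for the byte count and `getBinaryStream()` for streaming, so chunked reading looks roughly like the sketch below. This is only the generic JDBC pattern, not anything driver-specific; the table/column names are made up, and a `ByteArrayInputStream` stands in for `blob.getBinaryStream()` so the loop can be shown self-contained.)

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class BlobChunks {
    // Copy a stream in fixed-size chunks. With a real driver the source
    // would be rs.getBlob("myblobcol").getBinaryStream(), so only one
    // chunk is ever held in memory at a time.
    static long copyInChunks(InputStream in, ByteArrayOutputStream out,
                             int chunkSize) throws IOException {
        byte[] buf = new byte[chunkSize];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) { // reads at most chunkSize bytes
            out.write(buf, 0, n);          // process this chunk, then discard
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        byte[] fake = new byte[100_000];   // stand-in for a large blob
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        long copied = copyInChunks(new ByteArrayInputStream(fake), sink, 8192);
        System.out.println(copied);        // prints 100000
    }
}
```

With a live connection the loop is identical: `Blob b = rs.getBlob(1);` then `b.length()` for the size and `b.getBinaryStream()` as the `InputStream`.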
Thanks,
Johannes