Subject Re: [Firebird-Java] Re: FB Blob problem - OutOfMemory
Author Rick Fincher
Hi Phil,

No problem, happens to all of us!

For future reference, the two code snippets below are from this group's
YahooGroups file download section. They're old, but they may give you some
new ideas about reading blobs.

You can also search this group's messages for "blobs" to see the
various posts about blob issues that may be useful to you.

Rick


public byte[] selectBlob( int rowid ) {

    // In this example I'm assuming there's an open, active
    // Connection instance called 'con'.

    // This example uses an imaginary SQL table of the following
    // form:
    //
    //   CREATE TABLE blobs (
    //     ROWID   INT NOT NULL,
    //     ROWDATA BLOB,
    //
    //     PRIMARY KEY (rowid)
    //   );

    byte[] returndata = null;

    try {

        // A PreparedStatement avoids building the SQL by string
        // concatenation:
        PreparedStatement sment =
            con.prepareStatement("SELECT rowdata FROM blobs WHERE rowid = ?");
        sment.setInt(1, rowid);

        ResultSet rs = sment.executeQuery();

        if ( rs.next() ) {

            try {

                // The ByteArrayOutputStream buffers all bytes written to it
                // until we call toByteArray(), which returns them to us as
                // an array of bytes:
                ByteArrayOutputStream baos = new ByteArrayOutputStream(1024);

                // Create an input stream from the BLOB column. By default,
                // rs.getBinaryStream() returns a vanilla InputStream
                // instance. We wrap it in a BufferedInputStream for
                // efficiency, but you don't have to:
                InputStream stream = rs.getBinaryStream("rowdata");

                // Make sure it's not a NULL value in the column:
                if ( !rs.wasNull() ) {

                    BufferedInputStream bis = new BufferedInputStream(stream);

                    // A temporary buffer for the byte data:
                    byte[] bindata = new byte[1024];

                    // Holds how many bytes each read() of the input
                    // stream returned; read() returns -1 when we've hit
                    // the end of the stream:
                    int bytesread;

                    while ( (bytesread = bis.read(bindata, 0, bindata.length)) != -1 ) {

                        // Write out 'bytesread' bytes to the writer instance:
                        baos.write(bindata, 0, bytesread);
                    }

                    // Close the binary input stream:
                    bis.close();

                    // Now we can get our bytes out of the writer object:
                    returndata = baos.toByteArray();
                }

            } catch ( IOException ioe ) {
                System.err.println("Problem retrieving binary data: " + ioe);
            }
        }

        rs.close();
        sment.close();

    } catch ( SQLException se ) {
        System.err.println("Couldn't retrieve binary data: " + se);
    }

    return returndata;
}
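
If a blob might be too big to hold in memory at all, you can stream it
straight to disk instead of collecting it into a byte array. Here's a
minimal sketch of the copy loop factored out into a helper (the class
name, buffer size, and method name are my own invention, not from the
snippet above); memory use stays bounded no matter how large the blob is:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class BlobStreamCopy {

    /**
     * Copies everything from 'in' to 'out' in fixed-size chunks and
     * returns the total number of bytes copied. Only one 8 KB buffer
     * is ever allocated, so this works for arbitrarily large blobs.
     */
    public static long copy(InputStream in, OutputStream out)
            throws IOException {
        byte[] buffer = new byte[8192];
        long total = 0;
        int bytesread;
        while ((bytesread = in.read(buffer, 0, buffer.length)) != -1) {
            out.write(buffer, 0, bytesread);
            total += bytesread;
        }
        return total;
    }
}
```

With a ResultSet positioned on the row, something like
`BlobStreamCopy.copy(rs.getBinaryStream("rowdata"), new FileOutputStream(f))`
would write the blob to a file without ever holding the whole thing in
the heap (column name assumed from the example above).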

-----------------------------------------------------------

import java.sql.*;

public class ReadBlobs
{
public static void main(String args[])
{
for (int i=0; i<args.length; i++)
System.out.println("args["+i+"] = "+args[i]);
        if (args.length < 2) {
            System.out.println("usage: java ReadBlobs win|linux fb|ic");
            return;
        }
String databaseURL="", user="", password="", driverName = "";
user = "sysdba";
password = "masterkey";
if ("win".equals(args[0])) {
if ("fb".equals(args[1])) {
databaseURL =
"jdbc:firebirdsql:localhost/3050:C:\\test.gdb";
driverName = "org.firebirdsql.jdbc.FBDriver";
} else if ("ic".equals(args[1])) {
databaseURL = "jdbc:interbase://localhost/C:\\test.gdb";
driverName = "interbase.interclient.Driver";
}
} else if ("linux".equals(args[0])) {
if ("fb".equals(args[1])) {
databaseURL =
"jdbc:firebirdsql:localhost/3050:/home/admin/test.gdb";
driverName = "org.firebirdsql.jdbc.FBDriver";
} else if ("ic".equals(args[1])) {
databaseURL =
"jdbc:interbase://localhost/home/admin/test.gdb";
driverName = "interbase.interclient.Driver";
}
}
System.out.println(driverName);
System.out.println(databaseURL);

String sname = null;
Connection con;
PreparedStatement ps;
ResultSet rs;
int i=0;
byte[] bf;
long time0 = 0, time1 = 0, time2 = 0, time3 = 0, etime = 0;
time0 = System.currentTimeMillis();

try {
Class.forName(driverName);
con = DriverManager.getConnection(databaseURL,user,password);
con.setAutoCommit(false);
String sql = "select BLOB_FIELD from TEST_BLOB";
ps = con.prepareStatement(sql);
time1 = System.currentTimeMillis();
etime = time1-time0;
System.out.println("time1 = "+etime);

rs = ps.executeQuery();
time2 = System.currentTimeMillis();
etime = time2-time1;
System.out.println("time2 = "+etime);

while ( rs.next() ) {
i++;
bf = rs.getBytes(1);
System.out.println(i+" "+bf.length);
}
time3 = System.currentTimeMillis();
etime = time3-time2;
System.out.println("time3 = "+etime);

rs.close();
ps.close();
con.close();

}
catch ( ClassNotFoundException cnfe )
{
System.err.println("Couldn't locate the driver class: "+cnfe);
}
catch ( SQLException se )
{
System.err.println("Exception creating the database connection: " + se);
se.printStackTrace(System.out);
}
} // main
} // CLASS
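
One caveat about the test above: rs.getBytes(1) materializes each blob as
a single byte array, which is exactly what triggers an OutOfMemoryError
on a large blob when the heap is small. If all you need is the length,
you can count the bytes off the stream instead. A sketch, assuming you'd
swap `rs.getBytes(1).length` for `count(rs.getBinaryStream(1))` (the
helper class and buffer size are my own choice):

```java
import java.io.IOException;
import java.io.InputStream;

public class BlobLength {

    /**
     * Reads a stream to the end and returns how many bytes it held,
     * without ever buffering the whole blob in one array.
     */
    public static long count(InputStream in) throws IOException {
        byte[] buffer = new byte[8192];
        long total = 0;
        int n;
        while ((n = in.read(buffer, 0, buffer.length)) != -1) {
            total += n;
        }
        return total;
    }
}
```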


phil_hhn wrote:
>
> --- In Firebird-Java@yahoogroups.com
> <mailto:Firebird-Java%40yahoogroups.com>, Rick Fincher <rnf@...> wrote:
> >
> > Hi Phil,
> >
> > If you use a ResultSet with autocommit mode on, the entire thing gets
> > returned at once and uses all the memory available to the JVM that the
> > JDBC driver is running in. Turn off autocommit and give it a try.
> >
> > Be aware, though, that you must have enough memory allocated to the JVM
> > to read the largest graphic you will ever retrieve or you will get the
> > OOM exception. You may not be able to guarantee this if your app gets
> > deployed elsewhere.
> >
> > The size problem is probably a result of padding the buffers in the
> > stream/blob reads. You can get around that by storing the image
> size in
> > a field of the record, then reading the size upon retrieval of the
> > image, and only reading that number of bytes from the stream and
> putting
> > them in the image.
> >
> > It is also helpful to store info like mime-type, creation date,
> original
> > filename, etc. if you might ever need to access the image through a web
> > app or store it back to disk as a file.
> >
> > Hope this helps,
> >
> > Rick
>
> Hi Rick, thanks for your reply. Autocommit is not on... and I'm
> embarrassed to say I posted my problem a little early, after a couple
> of days of frustration. It turns out the object size WAS correct and
> consistent, and I'd made so many changes I'd lost track of the correct
> objects. Sorry about that.
> But your tips re. possible padding of the buffers is interesting -
> I'll keep an eye on that, but as it happens it seems the bytes read is
> correct. I also already store most of the parameters you suggest...
>
> Despite all this, I still cannot read/write the large blobs (my
> troublesome one is 9.5MB) without OOM exceptions, but I've come to the
> conclusion that the large blobs that are giving me problems (they
> aren't images) can be skipped from my process.... so for now I can
> probably just avoid it, but it's a pretty crappy solution :(
>
> Once again thanks for your comments and sorry for posting the wrong
> message about the wrong blob size...
>
> Thanks, Phil
>


