| Subject | precompilers |
|---|---|
| Author | Stephen Boyd |
| Post date | 2009-07-24T14:04:47Z |
I know that the subject of embedded SQL precompilers is considered passé and that the mere mention of the subject elicits massive yawns from all concerned, but ...
I have a Cobol application that needs to be migrated to Firebird from ISAM files. I have dusted off GPRE and fixed it so that it actually works with our Cobol compiler. You all have probably seen my posts about this in the past. It works flawlessly. You would think that would be enough. However, all I am getting from the Cobol guys is a constant litany of complaints about how it is too much work, how it "isn't enough like Cobol", and on and on and on and on ... Some days I get so fed up I just want to tell them to take their project and put it where the sun don't shine. But then I calm down and start to think about it. There are ways the precompiler could be "improved" at the cost of deviating from "standard" ESQL.
The problems as I see them are these:
--1--
The current ESQL syntax is cumbersome. Consider that for every ESQL statement you code you effectively have to specify the list of columns to be operated on twice: once to specify the columns to be retrieved / updated, and once to specify the host variables containing the column values. This is probably the biggest source of complaints.
--2--
Handling NULL fields is a pain. To a Cobol programmer the concept of NULL fields is completely foreign. To them NULL means spaces or zeros depending on the data type of the column. Having to include indicator variables on all of the ESQL statements and then test them and set the host variable to spaces or zeros is the second biggest source of whining.
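That substitution rule could live in the runtime library once, instead of being re-coded after every statement. A minimal sketch in C of what I have in mind (the name `fb_null_to_host` and the type codes are my own invention, not anything in the Firebird API):

```c
#include <string.h>

/* Hypothetical host-variable type codes used by the runtime library. */
enum host_type { HT_ALPHA, HT_NUMERIC };

/* If the column came back NULL (indicator < 0), overwrite the host
   field with spaces for alphanumeric items or display zeros for
   numeric ones, so the Cobol program never sees an indicator
   variable at all. */
void fb_null_to_host(short indicator, enum host_type type,
                     char *field, size_t len)
{
    if (indicator < 0)
        memset(field, type == HT_ALPHA ? ' ' : '0', len);
}
```

The Cobol side would simply get spaces or zeros, which is what the programmers already believe NULL means.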
--3--
The way that dynamic SQL is implemented by the current precompiler makes it virtually impossible to use dynamic SQL in a Cobol program. It is difficult, verging on impossible, to use pointers to dereference the XSQLDA.
There are a number of ways of addressing these issues:
We could develop a set of standard subroutines to read / write the various tables. This has the disadvantage of encouraging lazy programming. Why bother to custom-craft an ESQL statement to retrieve / update just those columns you need when you can just call a subroutine that affects the entire row? This approach would be massively appealing to Cobol programmers and I can virtually guarantee that you would never see another ESQL statement coded outside of these standard subroutines. We would end up with a database that, for all intents and purposes, would contain a collection of ISAM files with all of the disadvantages of using a database and none of the advantages. Also, this does nothing to address the dynamic SQL issue.
Or we could create a new precompiler. One that would automatically create and reference the host variables needed by the ESQL statements. That would automatically substitute sensible values for NULL fields. One that would allow dynamic SQL statements to be manipulated in a Cobol-friendly manner.
To implement the new precompiler I would ignore gpre completely and start with a blank slate. The new precompiler would not create the BLR for the ESQL statements but would rather make calls to a new subroutine library that would use the database engine to compile the ESQL statements as needed and return / retrieve the column values from a fixed format "record" buffer, the format of which would be determined by the precompiler. Sensible values would be substituted for NULL columns by the subroutine library. For example:
EXEC SQL
SELECT LD_LOCATION, LD_LOAD_NUMBER, LD_LOAD_SUFFIX
FROM LOADS;
would generate something like this:
WORKING-STORAGE SECTION.
01 SELECT1-SQL.
03 FILLER PIC X(??) VALUE
"SELECT LD_LOCATION, LD_LOAD_NUMBER, LD_LOAD_SUFFIX FROM LOADS".
01 LOADS.
03 LD-LOCATION PIC X(6).
03 LD-LOAD-NUMBER PIC 9(6).
03 LD-LOAD-SUFFIX PIC X(2).
PROCEDURE DIVISION.
CALL "FBLIB" USING SQLCODE,
SELECT1-SQL,
LOADS.
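On the library side, FBLIB would have to move each fetched column into the fixed-format buffer at the offset the precompiler assigned to it. A sketch of that packing step in C, using the LOADS layout above (the offsets and the `pack_*` helper names are my own; the real library would get the values from the Firebird API rather than from arguments):

```c
#include <stdio.h>
#include <string.h>

/* Record layout of the generated LOADS group item:
   LD-LOCATION     PIC X(6)  -> bytes 0..5
   LD-LOAD-NUMBER  PIC 9(6)  -> bytes 6..11 (display digits)
   LD-LOAD-SUFFIX  PIC X(2)  -> bytes 12..13 */
#define LOADS_LEN 14

/* Copy an alphanumeric value, space-padded on the right, like a
   Cobol MOVE to a PIC X(n) item. */
void pack_alpha(char *rec, size_t off, size_t len, const char *val)
{
    size_t n = strlen(val);
    if (n > len) n = len;
    memcpy(rec + off, val, n);
    memset(rec + off + n, ' ', len - n);
}

/* Render an integer as zero-filled display digits, like a MOVE to
   a PIC 9(n) item. */
void pack_numeric(char *rec, size_t off, size_t len, long val)
{
    char tmp[32];
    snprintf(tmp, sizeof tmp, "%0*ld", (int)len, val);
    memcpy(rec + off, tmp, len);
}
```

After a fetch, the Cobol program would see the row in LOADS exactly as if it had been read from an ISAM file.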
The programmer would have no control over the data type assigned to the automatically generated host variables. They would be given a data type consistent with the corresponding Firebird data type.
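That mapping is mechanical. For example, it might look roughly like this (the type names mirror Firebird's SQL types; which PIC clause each one gets is a design decision on my part, not something Firebird dictates):

```c
#include <stdio.h>
#include <string.h>

/* Sketch: derive the PIC clause for a generated host variable from
   a Firebird column description.  Firebird reports the scale of
   NUMERIC/DECIMAL columns as a negative number (e.g. -2 for two
   decimal places). */
void pic_for(const char *fb_type, int length, int scale,
             char *out, size_t outlen)
{
    if (strcmp(fb_type, "CHAR") == 0 || strcmp(fb_type, "VARCHAR") == 0)
        snprintf(out, outlen, "PIC X(%d)", length);
    else if (strcmp(fb_type, "SMALLINT") == 0)
        snprintf(out, outlen, "PIC S9(4)");
    else if (strcmp(fb_type, "INTEGER") == 0 && scale == 0)
        snprintf(out, outlen, "PIC S9(9)");
    else if (strcmp(fb_type, "INTEGER") == 0)  /* scaled: NUMERIC(9,n) */
        snprintf(out, outlen, "PIC S9(%d)V9(%d)", 9 + scale, -scale);
    else
        snprintf(out, outlen, "PIC X(%d)", length);  /* fallback */
}
```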
Dynamic SQL could be handled by providing a set of subroutines that would accept an SQL statement and parameter values as inputs and provide the field values as outputs. Something like this:
WORKING-STORAGE SECTION.
01 DYNSQL PIC X(??) VALUE
"SELECT * FROM LOADS WHERE LD_LOCATION = :LD_LOCATION".
01 DYNSQL-H PIC 9(9).
01 LD-LOCATION PIC X(6).
PROCEDURE DIVISION.
CALL "PREPARESQL" USING SQLCODE,
DYNSQL
GIVING DYNSQL-H.
CALL "SETPARAM" USING DYNSQL-H,
"LD_LOCATION",
"LOCCODE".
CALL "EXECSQL" USING SQLCODE,
DYNSQL-H.
CALL "GETFIELD" USING DYNSQL-H,
"LD_LOCATION"
GIVING LD-LOCATION.
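The handle scheme above is easy to sketch: PREPARESQL hands the Cobol program a small integer, and the library keeps all the statement state behind it, so no pointers or XSQLDA ever appear in the Cobol source. A toy version of that bookkeeping in C (no actual SQL is executed here; all the names are hypothetical):

```c
#include <string.h>

#define MAX_STMTS  16
#define MAX_PARAMS 8

struct param { char name[32]; char value[64]; };

struct stmt {
    int  used;
    char sql[256];
    struct param params[MAX_PARAMS];
    int  nparams;
};

static struct stmt stmts[MAX_STMTS];

/* PREPARESQL: store the statement text and hand back a small
   integer handle for later calls.  Returns -1 if no slot is free.
   The real routine would also ask the engine to prepare the
   statement and describe its parameters and output columns. */
int prepare_sql(const char *sql)
{
    for (int h = 0; h < MAX_STMTS; h++) {
        if (!stmts[h].used) {
            stmts[h].used = 1;
            stmts[h].nparams = 0;
            strncpy(stmts[h].sql, sql, sizeof stmts[h].sql - 1);
            return h;
        }
    }
    return -1;
}

/* SETPARAM: record a named parameter value against the handle;
   the real library would bind it when the statement executes. */
void set_param(int h, const char *name, const char *value)
{
    struct param *p = &stmts[h].params[stmts[h].nparams++];
    memset(p, 0, sizeof *p);
    strncpy(p->name, name, sizeof p->name - 1);
    strncpy(p->value, value, sizeof p->value - 1);
}

/* Look a recorded value back up by name, the way GETFIELD would
   look a column up in the fetched row. */
const char *get_param(int h, const char *name)
{
    for (int i = 0; i < stmts[h].nparams; i++)
        if (strcmp(stmts[h].params[i].name, name) == 0)
            return stmts[h].params[i].value;
    return NULL;
}
```

The point of the design is that everything the Cobol program touches is a display item or a small integer, both of which it already knows how to handle.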
I fully understand that this will not be as efficient as the code currently generated by gpre since the SQL statements will have to be compiled at run time rather than at compile time. And I will have to provide routines to convert from Firebird data types to the host language data types and vice versa.
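Those conversions are mostly mechanical, too. Firebird stores NUMERIC/DECIMAL columns as integers scaled by a negative power of ten, and a Cobol PIC 9(n)V9(m) item holds the same digits with an implied decimal point, so the library mainly has to reformat. A sketch (positive values only; the function name is mine):

```c
#include <stdio.h>

/* Firebird reports NUMERIC(p,s) as a raw integer plus a negative
   scale: raw 12345 with scale -2 means 123.45.  Format such a value
   with an explicit decimal point, e.g. for a DISPLAY statement.
   Moving it into a PIC 9(n)V9(m) item would just be zero-filled
   copying of the raw digits, since the point is implied. */
void scaled_to_string(long raw, int scale, char *out, size_t outlen)
{
    if (scale == 0) {
        snprintf(out, outlen, "%ld", raw);
        return;
    }
    long pow10 = 1;
    for (int i = 0; i < -scale; i++) pow10 *= 10;
    snprintf(out, outlen, "%ld.%0*ld", raw / pow10, -scale, raw % pow10);
}
```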
I appreciate any and all comments about this approach. Is it workable or am I completely out to lunch? What have I not considered? You could probably write a book on that topic.