Subject | Re: [Firebird-Architect] Gpre & Cobol |
---|---|
Author | Ann W. Harrison |
Post date | 2006-08-29T15:09:37Z |
Stephen Boyd wrote:
> First and most important. Is anyone actually using Cobol with GPRE?

Among the challenges of an open source project is that you never know
who uses what. There may be a cabal of COBOL users somewhere
in Karjackistan... who knows?

In answer to your larger question, which is architectural...

> ...If I rework this to actually match ANSI-85 Cobol is
> that going to break anyone else's code? Or should I introduce the
> concept of a Cobol dialect that could be set via the command line to
> generate legacy (-ansi) code vs RM/Cobol or any other Cobol that might
> come along?
What we did with Pascal, where the differences between compilers
were enormous, was to have different code generators for the
different versions of Pascal. That's probably the right way
to handle this. Teach the main line part of gpre that -cob85
should cause it to invoke your version of the code generator
and create a new cob85.cpp, leaving the current cob.cpp as it
is. You may want to invent a new extension for COBOL 85 files
before preprocessing and teach the main part of gpre to recognize
that extension to save using the switch all the time.
> 1) Using USAGE COMP data items as though they were 16 or 32 bit
> integers.
> 2) Using PIC S9(9) to define a 32 bit integer field or PIC S9(4) to
> define a 16 bit integer field.
>
> 3) In some places integer (USAGE BINARY) fields are assumed to be in
> native byte order.
>
> 4) Use of USAGE COMP-1 and COMP-2 to hold floating point values.
>
> 5) Use of USAGE COMP-X in some places to hold binary character data.
>
> 6) Some relatively minor bugs regarding column alignment.

Those should be reasonably straightforward changes to the code
generator, though as I said before, I'd create a new module and
make the change there.
>Yes. I have both a problem with your solution and a suggestion.
>
> In order to support RM/Cobol (and possibly other Cobols) we have to
> re-jigger the byte order from Cobol order to native order and vice
> versa. I am thinking that we will need to create an entirely new DLL
> a la gds32.dll with Cobol specific entry points that flip the bytes
> and call the existing isc_* routines. Anyone have any problems with
> this? Or a better idea?
Don't add a new library - that's killing a fly with an ax. You'll
spend the rest of your life trying to keep the two in sync.
There is an entry point called isc_vax_integer (approximately) that
inverts byte order. Look at the generated code for the C preprocessor
and you'll see where it adds code to copy strings around. A similar
call to the vax_integer routine will take the native format binary
and reverse the bytes. Just generate that call for any binary data item.
> I'm uncertain what to do about floating point data items for Cobol
> compilers that don't support them.

Have GPRE declare them to be a format that the compiler does
understand. The engine is really good about converting among
the data types it understands. If the chosen type is a scaled
binary, then you'll need to generate the call to invert bytes.
If you've got several compilers with different rules,
you may need separate code generators (and switches) for each.
Regards,
Ann
> 3) In some places integer (USAGE BINARY) fields are assumed to be in
> native byte order. This is not the case for RM/Cobol and I would be
> surprised to find it were the case for other Cobols, since native
> byte order on Intel processors would tend to break some legacy
> mainframe code.

Ah me. The PDP-11 COBOL compiler was originally little-endian, though
PDP-11's were big-endian machines. It never occurred to the guy who
did the port that there were different ways to represent integers.
Thirty years later, it's still a problem for COBOL compilers.... We
used to call the little-endian one COMP and the big-endian COMP-5.
The Fortran guys thought we were idiots.