Subject: Re: [firebird-support] Handling version control
Author: Dalton Calford
Hi Milan and Miles,


We have many different applications, spread across three cities, in two
different languages, with some clients being standard end users while
other client applications are embedded machines.

For all intents and purposes, we have a single database that all these
users log into (alright, it is a cluster of specialized servers and data
farms that comprise the spiderweb of our redundant systems).

When I say that the system is a single database, it means that a client
application, on login, may be directed to any one of many data servers
and the information is replicated appropriately.

Now, with an organization this large, some applications are very old
legacy code, while others are hot off the presses in alpha and beta
development. Some applications have evolved over time while others have
been used constantly without modification.

So, this is how we do it.

1.) Assumption 1: if something is working, don't fix it.

This means that you are not allowed to change any stored procedures
in the database unless they are found to be flawed. Once a
view/procedure/trigger is accepted as solid, then it is frozen.
This means that any new functionality, no matter how minor, is
always done in a new stored procedure, even if it means having a
lot of duplicate code in the database.
This ensures that no matter how old a client is, as long as the
client does not break any business rules, it can still connect to
the database.
We name our database objects (procedures, etc.) in a logical manner
and place a date tag on the end of the name. This way we always
know the latest code, and we can go back to earlier code in the
database to see how it used to be done.
Our metadata changes with the needs of our company, but, our
interface to the different database clients stays frozen at the time
that version (or sub-version) was released.
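As an illustration of the date-tag convention above (the procedure,
table and column names here are invented, not from our actual schema):

```sql
-- Hypothetical example of the date-tag naming convention: an older
-- GET_CUST_BALANCE_20030501 stays frozen for old clients, while new
-- behaviour goes into a new procedure carrying a newer date tag.
SET TERM ^ ;

CREATE PROCEDURE GET_CUST_BALANCE_20040215 (CUST_ID INTEGER)
RETURNS (BALANCE NUMERIC(18,2), CREDIT_LIMIT NUMERIC(18,2))
AS
BEGIN
  SELECT BALANCE, CREDIT_LIMIT
    FROM CUSTOMER
   WHERE ID = :CUST_ID
    INTO :BALANCE, :CREDIT_LIMIT;
  SUSPEND;
END ^

SET TERM ; ^
```

Old clients keep calling the old name; only new clients are built
against the new one.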

2.) Assumption 2: Client Verification

Although Assumption 1 tries its best to ensure that our
in-house applications stay working, we sometimes find our business
decisions need to be modified. Our clients from the past 6 years or
so all check to see if they are allowed to work and download new
versions from a blob table in the database (select blob to local
file, check checksums, rename, exit the current app and execute the
newly created upgrade file). If there is any chance that a client
does not verify its allowed version, we change the offending
procedures/triggers to raise a user-defined error.
To ensure that older/offending clients cannot get away with not
calling a procedure and thus not raising an error, all of our tables
are accessed through views and stored procedures, with no user having
direct access. This was a problem in IB 5.6 and below, as we found
that granting too many individual user/procedure rights caused some
of those rights to be ignored. This showed up as random problems,
but once we took individual users out of the equation and granted
rights only to roles, the problem went away. In this manner, we were
able to lock out any bad clients without worrying about locking out
good clients that some of our developers have never seen, since the
client was designed before they started with the company.
(I know of 9 small apps that I have encountered over the years that
are still used on a regular basis, but they have never been worked
on by any of the current developers.)
One of our verification techniques has the client insert a value
into a control table without committing the transaction. The client
then performs its work, all within the same transaction, with the
triggers raising an error unless they find the right value inserted
into the control table. Finally, the client deletes the entry from
the control table and commits the transaction.
It is a little more work, but it ensures that our development dept
maintains both backward and forward compatibility.
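A minimal sketch of that control-table handshake, assuming Firebird
1.5 or later for the multi-event trigger and CURRENT_CONNECTION (all
object names here are invented):

```sql
CREATE EXCEPTION E_CLIENT_NOT_VERIFIED 'Client has not verified itself';

SET TERM ^ ;

-- Refuse any write to INVOICE unless this transaction has already
-- inserted its marker row into the control table.
CREATE TRIGGER INVOICE_REQUIRE_MARKER FOR INVOICE
ACTIVE BEFORE INSERT OR UPDATE OR DELETE
AS
BEGIN
  IF (NOT EXISTS (SELECT 1 FROM APP_CONTROL
                  WHERE CONNECTION_ID = CURRENT_CONNECTION)) THEN
    EXCEPTION E_CLIENT_NOT_VERIFIED;
END ^

SET TERM ; ^

-- Client side, all within one transaction:
--   INSERT INTO APP_CONTROL (CONNECTION_ID) VALUES (CURRENT_CONNECTION);
--   ... perform work ...
--   DELETE FROM APP_CONTROL WHERE CONNECTION_ID = CURRENT_CONNECTION;
--   COMMIT;
```

Since the marker row is never committed, a crashed or misbehaving
client leaves nothing behind once its transaction rolls back.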

3.) Assumption 3: Have the database do the work.

When you are working on a piece of metadata and you have not had it
released (or approved for that matter), it is beneficial to see the
different changes you have gone through as you have performed your
work. This is done by adding a few triggers to the system tables so
that when a procedure is created or updated, the previous version of
the metadata is inserted into a version table. This makes reverting
to an earlier version a snap, although you may want to clear out
the work-in-progress versions once the metadata is approved/released.
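A sketch of such a system-table trigger, assuming a server version
that still permits user triggers on system tables (later Firebird
releases disallow this); the history table is invented:

```sql
CREATE TABLE PROC_HISTORY (
  PROC_NAME  CHAR(31),
  OLD_SOURCE BLOB SUB_TYPE TEXT,
  CHANGED_AT TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

SET TERM ^ ;

-- Whenever a procedure is redefined, keep the previous source text.
CREATE TRIGGER SAVE_PROC_SOURCE FOR RDB$PROCEDURES
ACTIVE BEFORE UPDATE
AS
BEGIN
  INSERT INTO PROC_HISTORY (PROC_NAME, OLD_SOURCE)
  VALUES (OLD.RDB$PROCEDURE_NAME, OLD.RDB$PROCEDURE_SOURCE);
END ^

SET TERM ; ^
```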

4.) Assumption 4: The database is the version control

When a client application goes gold (is released), its install
binaries, upgrade binaries and source code are all uploaded to the
database. (Remember that we have multiple databases, including
application and history databases, that all work together as a
single repository.) That means we can download any of our prior
released versions, either binary or source, and that diff files can
be generated as needed.
This is a very primitive version of source control, but it was set
up years ago and it works, so we don't fix it.
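One possible layout for such a release repository table (the table
and columns are invented for illustration):

```sql
CREATE TABLE APP_RELEASE (
  APP_NAME    VARCHAR(64) NOT NULL,
  VERSION     VARCHAR(20) NOT NULL,
  KIND        VARCHAR(10) NOT NULL,  -- 'INSTALL', 'UPGRADE' or 'SOURCE'
  RELEASED_AT TIMESTAMP,
  CHECKSUM    VARCHAR(40),           -- verified by the client after download
  CONTENT     BLOB SUB_TYPE 0,       -- the binary or source archive itself
  PRIMARY KEY (APP_NAME, VERSION, KIND)
);
```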

This is the way we maintain our environment. It is wasteful of space,
but with the cost of drives being so low, we can afford to be wasteful
in order to save the very expensive cost of manpower in case we have to
fix a screwup.

When approaching version control, you need to perform a full needs
analysis. Once you have determined the needs, you may find an off the
shelf solution for your development environment.

I hope some of this helps.

best regards

Dalton

Milan Babuskov wrote:

>Myles Wakeham wrote:
>
>
>>I'm sure I'm not the only person to be struggling with finding a strategy
>>for this, so I was hoping anyone out there who is currently dealing with a
>>Firebird SQL database and client applications in production could share
>>their strategy for dealing with this, and how they handle updates, etc.
>>
>>
>
>I have a simple table in database called "current_version" with fields
>like "database_version" and "last_update". Database version is a simple
>integer, and last_update is a timestamp and it's optional.
>
>Each client app. has sql scripts embedded in executable (alternatively,
>you can ship separate .sql files) that can update the database to any
>version. So, while developing the application, I record each DDL
>statement and some important DML statements and increment version number
>by one for each of them. This is largely automated by FlameRobin which
>has a feature to log incremental statements to files.
>
>So, the client app. holds database version it needs (a simple integer)
>and a bunch of scripts that can upgrade any database to that version.
>
>Whenever client app is run, it does
>
>SELECT database_version FROM current_version;
>
>If it is lower than version number hardcoded in app. it starts running
>.sql scripts one by one and subsequently runs UPDATE current_version =
>:number_of_script.
>
>If it is higher than version number hardcoded in app., the app. refuses
>to run, and needs to be upgraded first.
>
>Using this solution, you only get a single check - reading a one-row
>table. It has worked fine for a few years now. The only problem I ran into
>is adding foreign keys, but there are workarounds for that.