Subject Re: Can a "bad" stored procedure do low level file damage to an .fdb?
Author Adam
--- In, "red_october_canada"
<red_october_canada@...> wrote:
> After reading years worth of newsgroup postings, and all the trouble
> users can create by incorrectly writing a stored procedure, I have
> deduced the following:
> 1) An incorrectly written stored procedure can do low level file
> damage to a Firebird database file, corrupting it, and possibly making
> it unusable

Some links?

I imagine some maliciously written stored procedures could be used to
directly manipulate the system tables, or that performing nasty DDL
via execute procedure could bring some tears, but otherwise no.

UDFs (which are different to stored procedures) can, if poorly
written, crash the database engine, causing it to abandon any active
transactions, but I have never seen one cause any corruption.

> 2) Triggers, even incorrectly written, cannot do low level file
> damage. Yes, they may scramble your data, however, they will not
> corrupt the database file structure and ruin it.

Triggers and procedures share the same PSQL language, so I imagine
that if it is possible to shoot yourself in the foot with one, it is
possible with the other as well.
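To illustrate that it really is one language (a sketch only; the
table, column, and exception names here are invented), the trigger
body below is plain PSQL, exactly what you would write in a procedure:

```sql
SET TERM ^ ;

CREATE EXCEPTION E_NO_CUSTOMER 'Invoice must have a CustomerID'^

/* Hypothetical names: the BEGIN..END body is ordinary PSQL. */
CREATE TRIGGER INVOICE_BI FOR INVOICE
ACTIVE BEFORE INSERT POSITION 0
AS
BEGIN
  IF (NEW.CUSTOMERID IS NULL) THEN
    EXCEPTION E_NO_CUSTOMER;
END^

SET TERM ; ^
```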

> Is this deduction true? I have been purposely avoiding using stored
> procedures because of this. Additionally, using stored procedures, in
> my opinion, is the "business rules" layer of a 3TCS system "leaking"
> into the "data storage" layer. SQL is not a particularly smart
> language to be writing complex procedures in, so I've been keeping all
> the "smarts" in a separate .exe or .dll using a true programming

So are you avoiding it because of fears of corruption, or on the
fundamental principle that business rules should be separated from
data storage?

IMO, the main reason to avoid PSQL is portability between DBMS
backends. The easiest way to support multiple backends is to treat
the DBMS as a dumb lowest common denominator. Of course there are
disadvantages: you cannot make use of the features that make a given
DBMS powerful. In a sense, you inherit the worst traits of each
database. This becomes even worse if you share backends between MGA
and locking databases: you have to throttle Firebird by implementing
locking mechanisms, and you have to throttle SQL Server by avoiding
count(*) and other operations that are more expensive under MGA.

At the heart of an RDBMS are the ACID properties. The 'C'onsistency
rules are meant to allow the RDBMS to reject inconsistent data. Of
course, the consistency of data is all about the business rules; you
can't have an invoice without a CustomerID, etc.
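That kind of rule can be declared right in the database itself. A
sketch (the table and column names are just made up for illustration):

```sql
/* Hypothetical schema: every invoice must reference an
   existing customer, enforced by the engine itself. */
CREATE TABLE CUSTOMER (
  CUSTOMERID INTEGER NOT NULL PRIMARY KEY,
  NAME       VARCHAR(60) NOT NULL
);

CREATE TABLE INVOICE (
  INVOICEID  INTEGER NOT NULL PRIMARY KEY,
  CUSTOMERID INTEGER NOT NULL REFERENCES CUSTOMER (CUSTOMERID)
);
```

No client or middle tier can insert an invoice that violates that,
no matter which tool it connects with.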

While you could police this in a business-rules tier, you may then
need to consider opening up that tier to third parties who want to
integrate, or risk them violating your consistency. In fact, you have
a bit of a battle on your hands stopping even database admin tools
from doing 'bad things'.

On that specific point, having stored procedures and triggers in the
database allows you to feel confident that even if some third-party
application one day needs to start adding data, it cannot break your
consistency.

Stored procedures are helpful in reducing round trips between the
client application and server and can give considerable performance
benefits in that area.
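As a sketch (all names invented), one EXECUTE PROCEDURE call from the
client can do work that would otherwise take several round trips:

```sql
SET TERM ^ ;

CREATE GENERATOR GEN_INVOICEID^

/* Hypothetical procedure: create an invoice server-side in a
   single call instead of one statement per step. */
CREATE PROCEDURE ADD_INVOICE (CUSTOMERID INTEGER)
RETURNS (INVOICEID INTEGER)
AS
BEGIN
  INVOICEID = GEN_ID(GEN_INVOICEID, 1);
  INSERT INTO INVOICE (INVOICEID, CUSTOMERID)
    VALUES (:INVOICEID, :CUSTOMERID);
  /* ...insert detail rows, update totals, etc., all server-side... */
END^

SET TERM ; ^
```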

PSQL is also a lot more powerful than SQL alone. While the exception
handling could be improved a bit, and some debugging hooks would be
quite useful, it usually does the job.
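For what it's worth, you can already trap errors with a WHEN block at
the end of a PSQL body. A sketch, with invented names:

```sql
SET TERM ^ ;

/* Sketch: trap any error rather than letting the call fail. */
CREATE PROCEDURE SAFE_DELETE_CUSTOMER (CUSTOMERID INTEGER)
RETURNS (OK SMALLINT)
AS
BEGIN
  OK = 1;
  DELETE FROM CUSTOMER WHERE CUSTOMERID = :CUSTOMERID;

  WHEN ANY DO
    OK = 0;  /* e.g. a foreign key violation */
END^

SET TERM ; ^
```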

But again, if you need to support 100 different DBMSs, then you
probably don't feel like tying a piece of functionality particularly
to Firebird, so you will correctly avoid leaking business rules into
the data layer.

I think the lesson is that there isn't a single correct design for all
situations (although there are plenty of examples of bad designs).