Subject Re: [Firebird-Architect] Block Encryption, Initialization Vector, and Security
Author Jim Starkey
On 11/9/2010 5:42 PM, Geoff Worboys wrote:
> Jim Starkey wrote:
>> I'd like to suggest a short diversion into cryptology with
>> regards to ECB (electronic code book), CBC (cipher block
>> chaining), initialization vectors, Firebird, and security.
>> It's not important, but it is interesting.
> [...]
>
> ECB vs CBC? CBC isn't really a contender for this situation
> any more. For want of better/easier reference you could read
> this: http://en.wikipedia.org/wiki/Disk_encryption_theory
There are a bunch of assumptions he makes that don't -- and can't --
apply to a database engine: user access to raw pages, the ability to
create raw pages and cause them to be decrypted, and the concern that,
given these, an attacker learning whether a page has changed is a
problem. I'm not at all convinced he has anything but a straw man argument.
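For anyone who wants to see what the ECB-versus-CBC concern actually amounts
to, here is a minimal sketch -- Python with the third-party cryptography
package, chosen purely for illustration; none of this is Firebird code. Under
ECB, equal plaintext blocks encrypt to equal ciphertext blocks; under CBC with
an IV, they don't:

    # Illustration only: ECB leaks equal-block structure, CBC does not.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(32)             # 256-bit AES key
    page = b"\x00" * 16 * 4          # four identical 16-byte blocks

    ecb = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    ct = ecb.update(page) + ecb.finalize()
    print(len({ct[i:i+16] for i in range(0, len(ct), 16)}))   # 1 distinct block

    cbc = Cipher(algorithms.AES(key), modes.CBC(os.urandom(16))).encryptor()
    ct = cbc.update(page) + cbc.finalize()
    print(len({ct[i:i+16] for i in range(0, len(ct), 16)}))   # 4 distinct blocks

Whether that kind of leak matters for a database page is exactly the
threat-model question above.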
> The art of cryptography is NOT to make something that you
> don't have the skills to break, the art is to stop someone
> else - who may well be smarter or more ingenious than you
And this is why it is universally held that a secret algorithm is
less secure than a published and analyzed algorithm. In other words,
security by obscurity is no damn good. XORing a bit pattern at the PIO
level falls into this category: obscure until somebody reads the code.
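To spell out why, here is a minimal sketch (plain Python, purely illustrative;
the pad and page contents are made up). Any page region whose plaintext an
attacker can predict -- zero-filled free space, a standard page header --
hands back the pad, and with it every other page:

    # Illustration only: a fixed XOR pad falls to a single known plaintext.
    import os

    PAGE_SIZE = 2048
    pad = os.urandom(PAGE_SIZE)                # the "secret" fixed pattern

    def xor(data, pad):
        return bytes(d ^ p for d, p in zip(data, pad))

    secret_page = os.urandom(PAGE_SIZE)        # data worth protecting
    known_page = bytes(PAGE_SIZE)              # a page the attacker can predict

    recovered_pad = xor(xor(known_page, pad), known_page)   # ciphertext XOR plaintext
    assert xor(xor(secret_page, pad), recovered_pad) == secret_page
    print("pad recovered; every page is readable")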

The existing mainstream algorithms -- AES/Rijndael, RSA, and SHA -- have been
extensively analyzed. And although most have been found to be slightly
weaker than initially thought, each is still considered strong enough to be
essentially unbreakable.
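And using one of them on a database page is not much code. The sketch below is
only illustrative -- Python again, with an IV derived per page in an
ESSIV-like way; the key handling and derivation here are assumptions for the
example, not a design. The point is that identical pages at different page
numbers encrypt differently, and whole pages need no padding:

    # Illustration only: whole-page AES-CBC with a per-page IV.
    import hashlib, os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    PAGE_SIZE = 2048                 # every Firebird page size is a multiple of 16
    key = os.urandom(32)             # however the real key is delivered
    iv_secret = os.urandom(16)

    def page_iv(page_number):
        # Distinct, unpredictable IV per page (ESSIV-like derivation).
        return hashlib.sha256(iv_secret + page_number.to_bytes(8, "little")).digest()[:16]

    def encrypt_page(plain, page_number):
        enc = Cipher(algorithms.AES(key), modes.CBC(page_iv(page_number))).encryptor()
        return enc.update(plain) + enc.finalize()     # no padding: 2048 % 16 == 0

    def decrypt_page(cipher, page_number):
        dec = Cipher(algorithms.AES(key), modes.CBC(page_iv(page_number))).decryptor()
        return dec.update(cipher) + dec.finalize()

    page = bytes(PAGE_SIZE)
    assert encrypt_page(page, 1) != encrypt_page(page, 2)
    assert decrypt_page(encrypt_page(page, 7), 7) == page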
> The art of cryptography is to give as little away as
> possible because it can be VERY difficult to determine what
> an attacker may find to use against you.
We're trying to protect data from snooping. Even large-scale,
government-supported, heavy-iron, highly motivated snooping. That said,
there are lots of things we don't care about because of the nature of
the database on-disk structure.
> For example most modern algorithms are resilient to many
> recognised forms of attack but the rules can change quite
> dramatically if an attacker can, for example, determine
> some part of the encryption key (eg: from memory chips,
> from file fragments and so on - perhaps some bug in the
> implementation lets something usually minor slip, but
> something minor combined with a poor implementation can
> lead to disaster).
So? Haven't you been arguing against a security architecture that
supports the secure transmission of session keys? How can someone argue
that XORing disk pages is OK, but strong encryption is not, on the grounds
that a busted piece of hardware might give away a key?
> Many techniques have dated, not because they are easily
> broken, but because they give away details about the
> information they were protecting. Such details make the
> security vulnerable or potentially vulnerable and so new
> techniques are derived to combat these problems.
Could you give us an example of a peer-reviewed cryptosystem broken by
anything other than brute force? Of course it's possible. But the current
crop of codes has been pretty well studied. And even if one is broken, a
security architecture isn't dependent on any specific algorithm.
> Many here, yourself included apparently, will argue that
> if an attacker can't immediately turn the data into
> plain-text then the encryption is good-enough.
I would substitute "10,000 years" for "immediately", but then, yes.
> My counter to that is simply this: If you are going
> to implement encryption at a performance cost of around
> 30% (or whatever) then why not do it to the best of your
> ability?
>
> . You may actually find performance increases (some
> newer techniques can be faster than old ones).
>
> . If you've spent x% of your performance it would
> be nice to get as much as you can for it
>
> . The only cost is admitting that you are not
> omniscient and accepting that a reference from
> a specialist in the field could actually be
> worthwhile.
>
> Throwing 30 year old code at something, without finding out
> if it is still the best solution, is not the way to get the
> best possible result. This is true for system programming
> and it is especially true for cryptography and security.
Are you accusing AES, RSA, and SHA-1 of being 30 years old, or just
Firebird? If the latter, the JRD architecture is only 28 years old and
Interbase is only 26 years old (depending on which piece you count).
Firebird is much newer, but it started with code that the author considered
obsolete. He's currently trying to convince you to update the sucker
with modern technology and is getting quite frustrated at his inability
to do so.

>
> I understand that I am probably wasting my breath here. I
> have good confidence that those that are actually likely to
> do any of the work are smart enough to study the current
> technology. I'm just hoping my posts may encourage those
> looking to develop their own encryption code to find some
> real and actual expertise to assist them - to remind readers
> that despite the "all knowing" tone of Jim's postings, he is
> not a specialist in this field. (See how polite I can be. :-)
I, er, am sufficiently expert to know that a) 2048 is divisible by 16, and b)
XORing a fixed pattern, with the code published, is likely to be insecure.
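On point (a), the arithmetic holds for every standard Firebird page size, so
whole-page block encryption needs no padding. A trivial check, shown only to
spell the claim out:

    # Every Firebird page size is an exact multiple of the 16-byte AES block.
    for page_size in (1024, 2048, 4096, 8192, 16384):
        assert page_size % 16 == 0
        print(page_size, "->", page_size // 16, "AES blocks, no padding")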

So what is your position:

* Firebird is so perfect it doesn't need a security architecture
* Users can write their own encryption, so what's the problem
* Hackers aren't smart enough to read the code

or what?

I'm only arguing that you need a defensible security architecture before
you start hacking. Design first, implement second.



--
Jim Starkey
Founder, NimbusDB, Inc.
978 526-1376


