Subject Re: Re: [Firebird-Architect] Database Password
Author Roman Rokytskyy
> While the server is running there is no way to change the executable;
> the compromise must be done by replacing the binaries, and the only way
> (*I can think of*) to check this is to verify the binaries' signatures
> *manually* before providing any keys to the server.

You forget code injection directly into the process memory - that's how
exploits work.

So, let's see. In order to have access to the database file, a user is
either given that access explicitly or possesses admin rights.

The former is not interesting for us; the latter means that the user can
also execute arbitrary code, which means he can run privileged code at
the kernel level and from there modify the memory of the Firebird
process to execute some additional code.

Bad. We can't protect against anybody with admin rights while Firebird
is running and has the key cached in memory.
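
To illustrate the point with a minimal sketch (not Firebird-specific):
on Linux, anyone with root can dump the readable memory of a running
process through the standard /proc interface - the same mechanism
debuggers and gcore use - and then search the dump for key material.
The paths and arguments below are hypothetical.

#!/usr/bin/env python3
# Sketch only: dump the readable memory regions of a running process
# via /proc (Linux; needs root or ptrace permission on the target).
import re
import sys

def dump_process_memory(pid, out_path):
    with open(f"/proc/{pid}/maps") as maps, \
         open(f"/proc/{pid}/mem", "rb") as mem, \
         open(out_path, "wb") as out:
        for line in maps:
            m = re.match(r"([0-9a-f]+)-([0-9a-f]+) (\S+)", line)
            if not m or "r" not in m.group(3):
                continue                      # skip unreadable regions
            start, end = int(m.group(1), 16), int(m.group(2), 16)
            try:
                mem.seek(start)
                out.write(mem.read(end - start))
            except (OSError, ValueError, OverflowError):
                pass                          # some special regions fail

if __name__ == "__main__":
    dump_process_memory(int(sys.argv[1]), sys.argv[2])

The paper cited further down shows how to recover the encryption key
from exactly such a dump.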

>> If the security of the server is compromised physically, one can always
>> replace the binary.
>>
>
> Yes, but the admin would notice the change because the file signature
> would change and is checked by the admin, not by a self-check routine
> (that can be changed the same way to fake a valid binary).

Rootkits can mask this while they are running.

>> At this level we can only discuss whether personal estimates of the
>> probabilities of which case is more likely are correct or not. I would
>> not base the decision to implement database encryption a-la TrueCrypt
>> only on these probabilities.
>>
>>
>
> Ok. My suggestion for encryption a-la TrueCrypt is to not hold the key
> inside the file or provide it through the client app; the key should be
> given manually by an admin who has checked the binary integrity before
> providing the keys. This is my point.

Fine, but looks irrelevant at the moment.

> If the server is compromised at the physical level (machine stolen), no
> one would be able to access the data without a valid password.

If the machine is stolen (and will have to be rebooted), it is enough to
put the database file on an encrypted drive.

If the machine is compromised at the physical level, we have to
distinguish whether one can get admin rights without shutting it down or
not.

If not - we do not care; we can always protect the file on the file
system by means of the OS. If yes - see above about code injection on
the fly.

> If the server is compromised and the binaries replaced, it would be
> noticed by the admin checking the binary signatures at start-up time or
> when he needs to provide the keys.

To protect yourself from rootkits, you would have to take the server
down, access the hard drive from another computer and look for known
rootkits. This, however, does not protect you from unknown ones.

Without doing this you have no chance to check the signature, since a
rootkit, once active, can feed you the right binary data when the file
is accessed via normal API calls. Even more, the exploit does not really
need to modify the binary at all - it can watch for starting processes
and modify the running process in memory.
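
For completeness, the offline check itself is trivial once the disk is
mounted read-only on a trusted machine - a sketch, with a hypothetical
path and a placeholder reference hash; the point is only that it must
run outside the possibly compromised OS:

# Sketch: verify a binary against a previously recorded hash, run from
# a trusted environment with the suspect disk mounted read-only.
import hashlib

KNOWN_GOOD_SHA256 = "<digest recorded at install time>"  # placeholder
BINARY = "/mnt/suspect/opt/firebird/bin/fbserver"        # hypothetical path

def sha256_of(path, chunk_size=1 << 20):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            digest.update(block)
    return digest.hexdigest()

if sha256_of(BINARY) != KNOWN_GOOD_SHA256:
    print("binary does not match the recorded signature")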

> If the server is running, and in some way one could access the database
> file, he could not access the data, since all data is encrypted and the
> key is neither in the engine code nor in the database file.

It is in memory, though, and memory can be accessed with the appropriate
permissions:

"Recovery of Encryption Keys from Memory Using a Linear Scan
Hargreaves, C.; Chivers, H.

Third International Conference on Availability, Reliability and
Security (ARES 2008), 4-7 March 2008, pages 1369-1376.
DOI: 10.1109/ARES.2008.109

Summary: As encrypted containers are encountered more frequently the need
for live imaging is likely to increase. However, an acquired live image
of an open encrypted file system cannot later be verified against any
original evidence, since when the power is removed the decrypted
contents are no longer accessible. This paper shows that if a memory
image is also obtained at the same time as the live container image, by
the design of on-the-fly encryption, decryption keys can be recovered
from the memory dump. These keys can then be used offline to gain access
to the encrypted container file, facilitating standard, repeatable,
forensic file system analysis. The recovery method uses a linear scan of
memory to generate trial keys from all possible memory positions to
decrypt the container. The effectiveness of this approach is
demonstrated by recovering TrueCrypt decryption keys from a memory dump
of a Windows XP system."
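
The method is simple enough to sketch. The toy version below only
illustrates the principle - it assumes AES-256 in ECB mode and a
ciphertext block whose plaintext is known, while real TrueCrypt volumes
use XTS and volume headers - and it uses the third-party pycryptodome
package; file names are made up:

# Toy illustration of the linear-scan idea: try every offset of a memory
# dump as an AES-256 key against a ciphertext block with known plaintext.
from Crypto.Cipher import AES   # pycryptodome

KEY_LEN = 32                    # AES-256
KNOWN_PLAINTEXT = b"\x00" * 16  # assume one block decrypts to all zeros

def scan_for_key(dump_path, ciphertext_block):
    with open(dump_path, "rb") as f:
        data = f.read()
    for offset in range(len(data) - KEY_LEN + 1):
        candidate = data[offset:offset + KEY_LEN]
        plaintext = AES.new(candidate, AES.MODE_ECB).decrypt(ciphertext_block)
        if plaintext == KNOWN_PLAINTEXT:
            return offset, candidate
    return None

# scan_for_key("memory.dmp", one_encrypted_block_from_the_container)

The cost is one block decryption per byte of the dump, which is entirely
practical for a few gigabytes of RAM.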

> When I was learning about reverse engineering (again, I am not a reverse
> engineer, just trying to learn a bit as a hobby and to protect myself a
> bit), I read something like:
> If there is code, it can be reverse engineered; how hard it is to find
> the right place to change, and how much time it will take, may vary, but
> it is always possible.

Yup - and that's exactly what those tools do - make the process complex
and expensive enough.

> The only thing that looks more secure is if part of the code (the key,
> the integrity check code, and a critical part of the code) is stored on
> a dongle that is encrypted and tied in some way to the serial code of
> the dongle so it could not be copied. Without the dongle one has a
> missing part of the code; inside the dongle is stored the key to decrypt
> the data or the executable itself; inside the dongle is the validation
> code; and if the dongle is duplicated the code would be invalid.

Solved as well - one popular business software package in Russia was
protected by such a dongle. After a short period of time software
emulators of such dongles were created and keys were available for free.
Even more, for some time that company's support was suggesting using
those emulators to work around some hardware-level compatibility issues
(naturally, they recommended extracting the key from the dongle first to
have a "legal" emulator).

Roman