Subject:   Re: [firebird-support] Plausible: Linux scheduler tuning for Firebird 1.5 Classic Server?
Author:    Steve Wiser
Post date: 2007-01-15T13:39:43Z
I have never tried changing that kernel setting and testing it with a
database server, but we did look into the I/O scheduler in the past.
See http://www.redhat.com/magazine/008jun05/features/schedulers/ for a
list of the available schedulers and what each one does. We did test
switching to the "deadline" scheduler, but it didn't make a measurable
difference in our (admittedly non-scientific) tests...
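
If you want to experiment without a rebuild, the I/O scheduler can be
switched per-device at runtime through sysfs. A minimal sketch in C
(untested; assumes the disk is /dev/sda and that the kernel was built
with the deadline scheduler - run as root and adjust the device name):

    /* Show the schedulers available for one block device, then
     * select "deadline" for it on the fly via sysfs. */
    #include <stdio.h>

    int main(void)
    {
        const char *path = "/sys/block/sda/queue/scheduler";
        char buf[256];

        /* Reading lists the choices, active one in brackets,
         * e.g. "noop anticipatory [cfq] deadline" */
        FILE *f = fopen(path, "r");
        if (!f) { perror(path); return 1; }
        if (fgets(buf, sizeof buf, f))
            printf("before: %s", buf);
        fclose(f);

        /* Writing a scheduler name switches that queue over */
        f = fopen(path, "w");
        if (!f) { perror(path); return 1; }
        fputs("deadline\n", f);
        fclose(f);
        return 0;
    }

The same switch is usually done with a one-line echo into that file;
the C version just makes the mechanism explicit.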
-steve
Stephen J. Friedl wrote:
>
> Today I had to rebuild a customer's Linux 2.6 kernel to enable >4G RAM
> support, and while wandering through menuconfig found an option to the
> scheduler that looks like it has some promise for application DB
> performance. I realize this is not a Linux kernel support forum, but
> there may be some collective wisdom here.
>
> These are the three options possible:
>
> =================================================================
>
> CONFIG_PREEMPT_NONE:
>
> This is the traditional Linux preemption model, geared towards
> throughput. It will still provide good latencies most of the
> time, but there are no guarantees and occasional longer delays
> are possible.
>
> Select this option if you are building a kernel for a server or
> scientific/computation system, or if you want to maximize the
> raw processing power of the kernel, irrespective of scheduling
> latencies.
>
> CONFIG_PREEMPT_VOLUNTARY: **DEFAULT**
>
> This option reduces the latency of the kernel by adding more
> "explicit preemption points" to the kernel code. These new
> preemption points have been selected to reduce the maximum
> latency of rescheduling, providing faster application reactions,
> at the cost of slightly lower throughput.
>
> This allows reaction to interactive events by allowing a
> low priority process to voluntarily preempt itself even if it
> is in kernel mode executing a system call. This allows
> applications to run more 'smoothly' even when the system is
> under load.
>
> Select this if you are building a kernel for a desktop system.
>
> CONFIG_PREEMPT:
>
> This option reduces the latency of the kernel by making
> all kernel code (that is not executing in a critical section)
> preemptible. This allows reaction to interactive events by
> permitting a low priority process to be preempted involuntarily
> even if it is in kernel mode executing a system call and would
> otherwise not be about to reach a natural preemption point.
> This allows applications to run more 'smoothly' even when the
> system is under load, at the cost of slightly lower throughput
> and a slight runtime overhead to kernel code.
>
> Select this if you are building a kernel for a desktop or
> embedded system with latency requirements in the milliseconds
> range.
>
> =================================================================
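>
> (As a sanity check after a rebuild: something like this small C
> sketch, assuming the distro drops its build config in
> /boot/config-<release> - some kernels expose /proc/config.gz
> instead - prints which preemption model the running kernel was
> actually built with.)
>
>     /* Print the CONFIG_PREEMPT* lines from the running kernel's
>      * build config. Untested sketch; the config file's location
>      * varies by distro. */
>     #include <stdio.h>
>     #include <string.h>
>     #include <sys/utsname.h>
>
>     int main(void)
>     {
>         struct utsname u;
>         char path[512], line[512];
>
>         if (uname(&u) != 0) { perror("uname"); return 1; }
>         snprintf(path, sizeof path, "/boot/config-%s", u.release);
>
>         FILE *f = fopen(path, "r");
>         if (!f) { perror(path); return 1; }
>
>         /* Matches both "CONFIG_PREEMPT...=y" and the
>          * "# CONFIG_PREEMPT... is not set" comment lines */
>         while (fgets(line, sizeof line, f))
>             if (strstr(line, "CONFIG_PREEMPT") != NULL)
>                 fputs(line, stdout);
>
>         fclose(f);
>         return 0;
>     }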
>
> The application I'm working with appears to be heavily bottlenecked
> on lock contention: waiting on the lock mutexes appears to generate a
> lot of context-switch activity (a rough way to measure this is
> sketched below).
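>
> On 2.6, getrusage(2) fills in per-process counts of voluntary and
> involuntary context switches, which separates "slept waiting on a
> lock" from "got preempted". A rough, untested sketch (the loop is
> just a placeholder for the real contended path):
>
>     /* Count voluntary vs. involuntary context switches around a
>      * workload. Build with: cc -o ctxsw ctxsw.c -lpthread */
>     #include <stdio.h>
>     #include <pthread.h>
>     #include <sys/time.h>
>     #include <sys/resource.h>
>
>     static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
>
>     int main(void)
>     {
>         struct rusage before, after;
>         int i;
>
>         getrusage(RUSAGE_SELF, &before);
>
>         for (i = 0; i < 100000; i++) {
>             pthread_mutex_lock(&lock);
>             /* ... placeholder for the contended DB path ... */
>             pthread_mutex_unlock(&lock);
>         }
>
>         getrusage(RUSAGE_SELF, &after);
>         printf("voluntary:   %ld\n", after.ru_nvcsw  - before.ru_nvcsw);
>         printf("involuntary: %ld\n", after.ru_nivcsw - before.ru_nivcsw);
>         return 0;
>     }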
>
> I routinely see what looks like pathological locking behavior, with
> incredible amounts of context switching even from a *single* process,
> and in a bit of testing on an old two-processor Linux machine the
> first option - no preemption - "feels" like it runs a bit better.
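>
> (To put a number on "feels": the system-wide switch rate can be
> sampled from the "ctxt" line of /proc/stat, the same counter vmstat
> reports in its "cs" column. A minimal untested sketch:)
>
>     /* Sample the system-wide context-switch counter over one
>      * second; this is the figure vmstat shows as "cs". */
>     #include <stdio.h>
>     #include <unistd.h>
>
>     static long long read_ctxt(void)
>     {
>         char line[256];
>         long long v = -1;
>         FILE *f = fopen("/proc/stat", "r");
>         if (!f) return -1;
>         while (fgets(line, sizeof line, f))
>             if (sscanf(line, "ctxt %lld", &v) == 1)
>                 break;
>         fclose(f);
>         return v;
>     }
>
>     int main(void)
>     {
>         long long a = read_ctxt();
>         sleep(1);
>         long long b = read_ctxt();
>         printf("context switches/sec: %lld\n", b - a);
>         return 0;
>     }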
>
> Certainly there are no realtime latency concerns with a machine that's
> running only as a DB server, so in that respect this option can't hurt.
>
> But I'm looking for some kind of gut feel here: "Yah, that's a big
> deal" -vs- "Won't be any more impact on Firebird than on any other
> kind of server app".
>
> Thanks,
> Steve
>
> ---
> Steve Friedl | UNIX Wizard | Microsoft MVP | www.unixwiz.net