Subject: RE: [firebird-support] SSD and "hotspots"?
> Since FB database files have a write hotspot (the header page?), which is
> written for every transaction, I would assume that SSDs would wear out
> pretty fast. But is this true?

Not any longer. SSD controllers have logic which moves "hot spots" around (known as wear levelling) to prevent the problem from arising.

> Or does the OS or disk hardware move that datablock around for each write,
> to spread the wear across the entire "disk"?

The built-in SSD controller does it.
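To illustrate the idea (this is not part of the original thread, and real FTL firmware works on erase blocks and pages and is far more involved), here is a toy Python sketch of dynamic wear levelling: the flash translation layer remaps each logical-block write to the least-worn free physical block, so a "hot" logical block like FB's header page does not hammer one physical location. `ToyFTL` and its counters are invented for illustration.

```python
# Toy sketch of dynamic wear levelling -- NOT real SSD firmware.
# A flash translation layer (FTL) remaps each logical write to the
# least-worn free physical block, spreading wear from a hotspot.

class ToyFTL:
    def __init__(self, num_physical_blocks):
        self.erase_counts = [0] * num_physical_blocks
        self.mapping = {}                       # logical block -> physical block
        self.free = set(range(num_physical_blocks))

    def write(self, logical_block):
        # Pick the least-worn free physical block as the new home.
        target = min(self.free, key=lambda b: self.erase_counts[b])
        self.free.discard(target)
        old = self.mapping.get(logical_block)
        if old is not None:
            # The stale copy must be erased before reuse: one wear cycle.
            self.erase_counts[old] += 1
            self.free.add(old)
        self.mapping[logical_block] = target

ftl = ToyFTL(num_physical_blocks=8)
for _ in range(1000):
    ftl.write(0)            # hammer a single logical block (the "hotspot")

# Wear ends up spread almost evenly across all physical blocks,
# instead of 1000 erases landing on one block.
print(ftl.erase_counts)
```

Running it shows the ~1000 rewrites of logical block 0 distributed nearly evenly over the 8 physical blocks, which is why the header-page hotspot no longer burns out one flash cell.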
> I would assume that since forced writes cause quite a lot of disk seeks for
> every commit, an SSD would provide a good performance boost.

A pretty good boost. Using 8KB pages and a queue depth of 4:

Read performance: 40 MB/s (spinning disk) vs 180 MB/s (SSD)
Write performance: 40 MB/s (spinning disk) vs 205 MB/s (SSD)
> Especially a disk like the Intel X25, and especially if we put them in a
> RAID 1+0 or RAID 5 config. Comments?

A couple of comments.
First, I have seen comments about problems with SSDs and RAID controllers. Apparently the controller needs to know how to "handle" SSDs, otherwise performance tanks (I think it is related to TRIM support in the controllers).
Unless you consistently have a large number (16+) of simultaneous/pending disk operations (Windows Performance Monitor can show you the disk queue depth), an SSD RAID will be of limited value.
If you are planning on using the Intel X25-M SSDs, I would not. I would choose the Crucial C300 units -- they are *faster* and cheaper ($/GB).
If you are planning on using the X25-E SSDs, I would wait until Intel announces the G3 series later in 2011 -- the units will be larger capacity, cheaper and faster than the current G2s.
Finally, a RAID 5 array made up of 4 x 2TB 7200rpm 6 Gb/s SAS drives **smokes** a single SSD (280 MB/s reads and 320 MB/s writes at 8KB pages and a queue depth of 4).