
As you may have seen in some of my previous posts, I have a few servers at home. These serve several purposes, most of which are directly or indirectly related to bettering myself, learning more about server software and hardware, and supporting project development.

The Dell R510 is my most recent server acquisition, and one I got more for personal use than for projects or other development-related work. With its 12 3.5" hard drive bays it was the perfect fit for my needs: I wanted a server to use for file storage that had room for growth. With 12 bays, I should be good for years to come.

This post will only be about the hardware, cataloging the internals for future reference, as well as showing those who may be interested in an R510 what it looks like from the inside.

Note: I took these photos several months ago on the day the server arrived, and in my excitement did not think about what I placed the server on when checking its internals. Static electricity can cause big issues if you're not careful, so avoid putting servers on carpet as much as you can. Fortunately, I at least did not place any of its components directly onto the carpet.

Overview

The front of the unit with its cover in place.

The R510 is a 2U server, with its front entirely used up by the 12 3.5" hot-swappable drive bays. There is another variant that has 8 bays, with the top area of the front used for things like the VGA port and a DVD drive, but as this particular unit is the 12-bay model, its power button is accessible on the left "wing," with the front VGA port and single USB 2.0 port on the right "wing."

And the front with the cover removed, exposing all 12 3.5" hot-swappable drive bays.

On the back you'll find its two hot-swappable power supplies (PSUs). Under the power button are a few status lights for the hard drives, and the "i" button, which turns on an identifying blue light on the front and back of the unit, or blinks if something bad has happened (such as a power supply failure or a drive issue).

DELL PERC H200 VS H700 SPEED UPDATE

I’ve recently done some very basic disk performance testing of a Dell PowerEdge R610 with 24 GB of RAM (1333 MHz), dual Intel X5550 CPUs, a PERC 6/i RAID controller, and a bunch of 146 GB 15K RPM 2.5″ disks, as well as four of the Dell 50 GB enterprise SSDs (which are Samsung drives). I tested various combinations of RAID 0, 1, 5, 6, 10, and 50 with 1, 2, 3, 4, and 6 disks. While the RAID controller configurations varied, all of the configs had the element size set to 64 KB, the read policy set to Adaptive Read Ahead, and the write policy set to Write Back.
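The post doesn’t show how the virtual disks were created, but since the PERC 6/i is an LSI MegaRAID-based controller, a roughly equivalent array could also be built from a running system with LSI’s MegaCli tool. This is only a sketch of that idea, not the author’s method; the enclosure:slot IDs and the adapter number are placeholders:

    # 3-disk RAID5 virtual disk: Write Back (WB), Adaptive Read Ahead (ADRA),
    # 64 KB stripe element size; [32:0,...] and -a0 are placeholder IDs
    MegaCli -CfgLdAdd -r5 [32:0,32:1,32:2] WB ADRA -strpsz64 -a0

Dell’s “element size” corresponds to the per-disk stripe element, so -strpsz64 matches the 64 KB setting used across all of the configs here.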

The operating system was Red Hat Enterprise Linux 5 Update 4, 64-bit, updated with the latest patches available at the time of testing. The filesystems were all LVM-based ext3 filesystems, formatted with “mke2fs -j -m 0 -O dir_index.” For the benchmark itself I used bonnie++, in the form “bonnie++ -r 32768” to indicate that I had 32 GB of RAM (though I actually had 24; this ensures that writes and reads are larger than the cache, so caching has a negligible effect on the results). I ran each test three times and averaged the results.
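Put together, each filesystem under test would have been prepared and benchmarked roughly like this. This is a reconstruction rather than the author’s actual script; the volume group, logical volume, size, and mount point names are placeholders:

    # Carve out a test logical volume and format it as described above
    # (testvg, bench, 200G, and /mnt/bench are placeholders)
    lvcreate -L 200G -n bench testvg
    mke2fs -j -m 0 -O dir_index /dev/testvg/bench
    mkdir -p /mnt/bench
    mount /dev/testvg/bench /mnt/bench

    # Three runs per configuration, averaged afterwards; -r 32768 claims
    # 32 GB of RAM so the test files exceed the real 24 GB and the page
    # cache can't skew the numbers (-u is required when running as root)
    for run in 1 2 3; do
        bonnie++ -d /mnt/bench -r 32768 -u root
    done
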
Click on the graphic for a larger version that’s more readable.

There is one big surprise in this data, which I will have to revisit: the sequential block read performance of the 6-disk RAID5. Is that an anomaly in my configuration, or is it really that fast? I will need to look into it again when I set the test environment back up. I would have expected results more consistent with the RAID6 read performance, but perhaps the RAID6 algorithms aren’t as mature as the RAID5 ones.

This isn’t as complete as it could be, and other disk benchmarks, like iozone, do a better job of characterizing disk performance with random workloads, where the SSDs would likely do much better. There are also newer disk controllers out there, namely the Dell H700 with its 6 Gbps SAS links, that may improve on these scores. But it’s what I needed for something I’m doing, and if it helps someone else, I’m glad I posted it.
