We have a 42U rack that's getting a load of new 1U and 2U servers real soon. One of the guys here reckons you need to leave a 1U gap between the servers to aid cooling.
Question is, do you? Looking around the datacenter, no one else seems to be doing it, and it also reduces how much we can fit in. We're using Dell 1850 and 2950 hardware.
Simply NO. Servers, switches, KVMs, and PSUs are all designed to be stacked directly on top of each other in the rack. I'm basing this on a few years building COs and data centers for AT&T.
You don't need to leave a gap between systems that are designed to be rack-mounted. If you were building the systems yourself, you'd need to select components carefully: some CPU and motherboard combinations run too hot even though they physically fit inside a 1U case.
Dell gear will be fine.
You do need to keep the space between and behind the racks clear of clutter. Most servers today channel their airflow front to back; if you don't leave enough open air behind the rack, it will get very hot back there and reduce the cooling capacity.
On a typical 48-port switch, the front panel is covered with RJ-45 connectors and the back with redundant power connections, PoE power tray hookups, stacking ports, and uplinks. Many 1U network switches route their airflow side to side because they can't get enough air through the maze of connectors front to back. So you also need to make sure the channels beside the rack are relatively open, to let the switches get enough airflow.
In a crowded server rack, tidiness is important.