Datacenter Cooling: Technical Support Gets Ultra Technical

Published on 17 February 14

Stuck in windowless rooms with inadequate ventilation and often little supervision, racks upon racks of servers grow hotter by the minute, causing errors on both the end-user and server sides. The struggle to cool datacenters has been a quiet issue for roughly 30 years, and now, with so many programs and so much data virtualized on cloud platforms, cooling these rooms and the equipment within them has become too large a problem to tackle with conventional methods.


With roughly 1 million kWh of electricity wasted by datacenters every year, the first instinct is to replace the equipment. But what exactly causes datacenters to struggle with cooling? Let's examine further:
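
To put that wasted-electricity figure in dollar terms, here is a minimal sketch; the 1 million kWh comes from the article above, while the electricity price is an assumed average commercial rate, not a measured one.

```python
# Rough annual cost of the wasted electricity cited in the article.
# price_per_kwh is an illustrative assumption, not a quoted figure.
wasted_kwh_per_year = 1_000_000   # figure from the article
price_per_kwh = 0.12              # assumed average commercial rate, USD

annual_waste_cost = wasted_kwh_per_year * price_per_kwh
print(f"Wasted electricity cost: ${annual_waste_cost:,.0f}/year")
```

Even at a modest rate, the waste adds up to six figures per year for a single facility of that size.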


Lack of Central Cooling Source


Saving money and man-hours may look frugal on the surface, yet refusing to channel adequate cool air into server rooms is counterproductive. Innovations that would let servers operate with less heat emission are still several years away, yet immediate remedies aren't being implemented either, so servers go down for periods of time while they cool off. Centralized cooling, or placing larger rack servers within a 'hot aisle', still hasn't been fully adopted even at larger companies to date.
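
One immediate remedy is simply watching rack inlet temperatures and reacting before servers shut themselves down. Here is a minimal sketch of such a check; the sensor readings and rack IDs are hypothetical, and the 27 °C threshold is the upper bound of ASHRAE's recommended inlet range.

```python
# Flag racks whose inlet temperature exceeds a safe threshold.
# Readings are hypothetical; 27 C is ASHRAE's recommended upper bound.
ASHRAE_RECOMMENDED_MAX_C = 27.0

def racks_needing_attention(inlet_temps_c: dict) -> list:
    """Return rack IDs whose inlet temperature exceeds the threshold."""
    return [rack for rack, t in inlet_temps_c.items()
            if t > ASHRAE_RECOMMENDED_MAX_C]

readings = {"rack-01": 24.5, "rack-02": 29.1, "rack-03": 26.8}
hot_racks = racks_needing_attention(readings)
print(hot_racks)  # only rack-02 is over the limit
```

In practice a check like this would feed an alerting system or throttle workloads, rather than just print.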


Sure, your average technical support call to iTok may involve basic troubleshooting; but when it comes to figuring out how to cool datacenters and fix ventilation issues, you'd be surprised what U.S.-based companies like iTok can pull off.


High Voltage Wiring


Today's wiring standard for datacenters calls for more scalable, modular configurations to replace the higher-voltage direct-current power eaters. Rather than invest in rewiring their datacenters, many smaller IT firms avoid the cost entirely by placing fans in larger rooms to cool the wiring down, or by cutting off the company's central air unit to reduce voltage strain. Uninterruptible power supplies are also in short supply, which drives up costs and leaves datacenters susceptible to electromagnetic surges during massive storms, while doing nothing to keep wiring from overheating, especially where older CAT cables are still relied upon for connections.
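
Why does wiring overheat at all? Conductors dissipate heat according to P = I²R, so undersized or aging cable runs turn a meaningful fraction of the power they carry into heat. The sketch below uses entirely illustrative values (current, conductor resistance, run length) just to show the arithmetic.

```python
# Resistive loss in a power feed: P = I^2 * R.
# All values below are illustrative assumptions, not real cable specs.
current_a = 30.0               # load current, amps
resistance_ohm_per_m = 0.005   # assumed conductor resistance per metre
run_length_m = 40.0            # one-way run length

# Current flows out and back, so the resistive path is twice the run.
total_resistance = resistance_ohm_per_m * run_length_m * 2
loss_watts = current_a ** 2 * total_resistance
print(f"Heat dissipated in the cable run: {loss_watts:.0f} W")
```

Because loss grows with the square of current, higher-voltage distribution (which lowers current for the same power) or shorter, thicker runs cut this heat dramatically.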


Uneven Cooling


Datacenters that do have cooling devices have often planned their locations poorly, so only the rack servers closest to the cooling source benefit. When cooling is unevenly distributed, server farms can struggle to carry out commands or even stay properly operational, especially if one server relies on the actions of another. Inadequately placed supply ducting with no cold-air return in place also wastes electricity, because the cooling unit must strain to produce cool air when no return path exists.
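
Uneven cooling shows up clearly in the numbers: inlet temperatures climb with distance from the cooling unit. A quick way to quantify it is the spread and standard deviation across racks; the readings below are hypothetical.

```python
import statistics

# Hypothetical inlet temperatures, racks ordered by distance
# from the cooling unit: a steady climb suggests uneven cooling.
temps_c = [21.0, 22.5, 24.0, 27.5, 30.0]

spread = max(temps_c) - min(temps_c)
stdev = statistics.stdev(temps_c)
print(f"Spread: {spread:.1f} C, stdev: {stdev:.2f} C")
```

A spread of several degrees between the nearest and farthest racks is a strong hint that airflow, not total cooling capacity, is the bottleneck.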


While cooling is perhaps the catalyst behind many other datacenter issues, we'll look toward innovative measures datacenter managers can deploy: cooling both the wiring configurations and the servers, evening out air distribution, and keeping costs down while adhering to environmental standards, as the next decade of datacenter needs comes to rely on cloud computing solutions.


Moore’s Law


According to Moore's Law, the processing power of semiconductors doubles every 18 to 24 months. Shrinking production footprints mean that space is being compromised without any cooling solution in place to compensate. Compacting technology has initially feasible perks, yet in the world of electrical consumption and overheating servers, shrinking space isn't always plausible. Overall, space consumption is falling at roughly 30% per annum, while chip power has nearly doubled over the last two years to an output of 118 watts per chip.
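
The arithmetic behind that doubling is worth spelling out, because exponentials escalate fast. The sketch below takes the article's 118 W/chip figure as the result of one doubling over 24 months (so the 59 W starting point is an inferred, illustrative value) and projects forward.

```python
# Exponential growth at a fixed doubling period.
# 118 W/chip is the article's figure; the 59 W start is inferred.
doubling_period_months = 24
start_watts = 59.0

def watts_after(months: float) -> float:
    """Chip power after the given number of months of doubling."""
    return start_watts * 2 ** (months / doubling_period_months)

print(watts_after(24))   # one doubling: the article's current 118 W
print(watts_after(72))   # three doublings: what the trend would imply
```

Three more doublings at that pace would put a single chip well past 400 W, which is exactly why density gains without matching cooling capacity are unsustainable.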


Summary


On average, 60 million megawatt-hours are expended without productive results every year, raising alarms across the information technology world. With current server configurations, only about half of the electricity purchased actually reaches the computer loads; in effect, IT centers are buying twice the power their datacenters actually use.
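
The industry metric for this is Power Usage Effectiveness (PUE): total facility power divided by the power delivered to IT equipment. The article's 50% figure corresponds to a PUE of 2.0; the absolute kW values below are illustrative.

```python
# PUE = total facility power / IT equipment power.
# A facility where only half the power reaches compute loads has PUE 2.0.
total_facility_kw = 1000.0   # illustrative total draw
it_load_kw = 500.0           # half reaches the servers, per the article

pue = total_facility_kw / it_load_kw
print(f"PUE: {pue:.1f}")
```

For comparison, well-run modern facilities report PUE values much closer to 1.0, meaning the 2.0 implied here leaves enormous room for savings.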


With many vendors still providing power-hungry data products, IT companies are looking both for an effective data-handling solution and for a way to get onboard with renewable energy sources. With overhead costs to consider and the economy not yet back to pre-recession standards, finding datacenter vendors with renewable-energy-ready servers has become inconvenient at best. Still, technical support for these datacenter issues is much simpler today than it was 10 years ago, so give someone a call when your server farm seems to be running too hot.

