Conventionally, a server is a computer tasked with delivering services on behalf of users. Typically, a user queries a server for information, data, or computational output over a network or the Internet; this mode of interaction contrasts with a traditional user interface, in which the user controls the computer directly through keyboard or mouse hardware signals.
At the outset in the 1970s, servers catered to a wide range of business and industry-related uses, since a single machine often served a small organization whose requirements were both varied and infrequent. Two innovations happened at this point. [1] Firstly, there was a technological improvement to the capacity of existing mainframe designs; specialization revolutionized server design such that mainframes became increasingly modular. The benefits of modularization, from a software development perspective, include the ability to confine the responsibilities of software and hardware to certain key tasks. This organization of logic and functionality improves maintainability, scalability, and debugging. [2]
In addition to this specialization of server design and implementation, companies simultaneously specialized the delivery of their services around individual product teams. Ultimately, each of these teams came to rely on its own server, given the specialized nature of its departmental needs. To accommodate this trend, servers became much more tailored to particular groups' specifications. Correspondingly, these "departmental servers" were less powerful than mainframes, but also much cheaper and more accessible. [3]
Due to the specialization of servers' processing ability and their relative affordability, demand has skyrocketed. With this ever-increasing demand for processing power and storage space, the energy required to run these expanding resources has risen accordingly. One manifestation of this need is the advent of the server farm. Often, large corporations or organizations require far more processing capacity than one machine can deliver alone. A server farm is simply a collection of individual servers that consolidates processing power to meet a wider range of computational demands.
According to the SMART 2020 report, server farms create carbon footprints that grow more than 7% per year, making them one of the greatest challenges faced by the proponents of green IT. Data centers need numerous auxiliary systems, including storage devices, power supplies, and cooling systems. In 2010, over 10% of electricity consumption in the U.S. was attributable to computer and IT equipment. At the current rate, analysts project that 10% of the world's power bill will go toward running computers. [4] To give a concrete sense of how much energy this is, Dixon notes that a single 50,000-square-foot data center draws about 5 megawatts, continuously; this power would satisfy the needs of 5000 homes. In another staggering example, a group of US data centers collectively draws 7000 megawatts from seven different power plants, more power than is used by the State of Mississippi. [4] Even more surprising, this astronomical figure covers only the servers themselves: the cooling systems consume as much energy again.
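The figures above can be sanity-checked with some back-of-the-envelope arithmetic. The only number below not taken from the text is the assumed average household draw of about 1 kW, which is what makes the 5 MW / 5000-home comparison consistent:

```python
DATA_CENTER_POWER_W = 5e6   # 5 MW continuous draw of one 50,000 sq ft facility [4]
AVG_HOME_POWER_W = 1e3      # assumed ~1 kW average household draw (not from the text)

# How many average homes could the facility's draw supply?
homes_served = DATA_CENTER_POWER_W / AVG_HOME_POWER_W
print(f"Equivalent homes: {homes_served:.0f}")

# Annual energy consumed at that continuous draw.
HOURS_PER_YEAR = 8760
annual_mwh = (DATA_CENTER_POWER_W / 1e6) * HOURS_PER_YEAR
print(f"Annual energy: {annual_mwh:.0f} MWh")
```

At roughly 43,800 MWh per year for a single facility, and with cooling doubling the total, the scale of the problem becomes clear.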
The question of how to run servers efficiently is now pervasive; given exploding computational demand, servers are unlikely to disappear anytime soon, and highly efficient power sources are still years from mainstream use. Intermediate solutions for dealing with the tremendous heat generation and energy consumption of these servers are therefore necessary. Firstly, drawing on renewable power sources, such as solar, reduces the carbon cost of the energy consumed. In addition, more efficient chips and cooling systems help to maximize server efficiency. Servers are generally quick to turn on and off; as such, powering servers down during reduced-load periods is an effective way to conserve energy. [5] Google, which spends millions of dollars on power monthly, is among the companies that use heat exchangers to provide heat for local homes while dissipating their tremendous waste heat. Intelligent cooling distribution based on server activity also significantly reduces energy consumption. This heat, initially an overwhelming and untapped source of energy, can now be put to useful work elsewhere. [6]
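The idea of powering servers down during reduced-load periods can be sketched in a few lines. The function name, the per-server capacity, the headroom fraction, and the hourly load profile below are all hypothetical illustrations; a real fleet manager would also account for boot latency, session draining, and redundancy:

```python
import math

def servers_needed(current_load, capacity_per_server, headroom=0.25):
    """Servers to keep powered on for the current load, plus safety headroom.

    At least one server always stays on to accept new requests.
    """
    return max(1, math.ceil(current_load * (1 + headroom) / capacity_per_server))

# Hypothetical hourly load profile (requests/s), servers rated at 500 req/s each.
hourly_load = [200, 150, 120, 900, 2400, 3100, 2800, 1200]
fleet = [servers_needed(load, capacity_per_server=500) for load in hourly_load]
print(fleet)  # fewer machines stay powered during the quiet hours
```

Every server that stays off during a quiet hour saves its full idle draw plus the cooling energy that would otherwise be spent removing its heat.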
© Justin Lee. The author grants permission to copy, distribute and display this work in unaltered form, with attribution to the author, for noncommercial purposes only. All other rights, including commercial rights, are reserved to the author.
[1] N. Bogdanowicz, "Introduction to Smart Grid Concepts," Physics 240, Stanford University, Fall 2011.
[2] C. Barile, "Solar Thermal Energy Storage Systems," Physics 240, Stanford University, Fall 2010.
[3] S. Mueller, M. A. Soper and B. Sosinsky, Upgrading and Repairing Servers (Que Publishing, 2006), pp. 85-87.
[4] P. Dixon and J. Gorecki, Sustainability: How Smart Innovation and Agile Companies Will Help Protect Our Future (Kogan Page, 2010), p. 115.
[5] P. A. Cook, "World Energy Usage," Physics 240, Stanford University, Fall 2010.
[6] M. van Staden and F. Musco, Local Governments and Climate Change (Springer, 2010), p. 51.