Fig. 1: The infrastructure required to power a Google data center. (Source: Wikimedia Commons)
An essential part of internet infrastructure is the data center. Data centers, for instance, host websites and provide cloud computing for an increasing number of users. [1] Data centers already support data communication, storage, and processing in almost every industry in the United States. [2] Because of the energy-intensive nature of computing, data centers consume large amounts of energy. [2] The electricity needed to power even one data center must be managed with significant power infrastructure (see Fig. 1). The first strategies to increase energy efficiency in computing systems focused on reducing the power needed to switch transistors at the device and materials level, as well as on designing more efficient circuits. [3] Now, with the implementation of large-scale computational resources in data centers, many server- and cluster-level strategies have also been adopted to better manage computational energy efficiency. [1] In addition, large-scale data centers must manage heat flow to prevent infrastructure failure at high temperatures. [1,2] In this report, we review different strategies for managing energy consumption in data centers and their effectiveness in increasing energy efficiency.
The energy required to run a data center can be broken down broadly into power consumed by computing resources and power consumed by supporting infrastructure, such as cooling systems. [4,5] In a perfectly energy-efficient scenario, the computing resources would consume nearly 100% of the data center's energy. In 2005, all data centers in the world consumed 152.5 billion kWh (5.49 × 10^17 J) of electricity. [5] Of that energy, 50% (2.75 × 10^17 J) went to infrastructure. [6] In the same year, the United States used 56 billion kWh (2.016 × 10^17 J) for data centers, with approximately the same proportion dedicated to infrastructure. [5] By 2008, U.S. demand was estimated at 69 billion kWh (2.484 × 10^17 J), again with 50% (1.242 × 10^17 J) going to supporting infrastructure. [4] In the same estimate, IT resources used 40% of the total demand (9.936 × 10^16 J) to power servers of various kinds, [4] 38% of the total electricity consumption (9.439 × 10^16 J) went to cooling, and the remaining 12% went to power distribution (again, see Fig. 1 for the infrastructure used for power distribution). [4] Although these data are not very recent, they loosely characterize the needs of data centers. Demand for data centers will only rise over time, so power consumption in all respects can be expected to increase, and it is important to understand where power is needed.
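The figures above can be sanity-checked with the exact conversion 1 kWh = 3.6 × 10^6 J. A short sketch (the function and variable names are our own, not from the cited sources):

```python
# Sanity-check the unit conversions reported above.
# 1 kWh = 3.6e6 J (exact by definition).
J_PER_KWH = 3.6e6

def kwh_to_joules(billion_kwh):
    """Convert billions of kWh to joules."""
    return billion_kwh * 1e9 * J_PER_KWH

world_2005 = kwh_to_joules(152.5)  # worldwide demand, 2005 [5]
demand_2008 = kwh_to_joules(69)    # estimated demand, 2008 [4]

print(f"World 2005 total:        {world_2005:.3e} J")        # ~5.49e17 J
print(f"  infrastructure (50%):  {0.50 * world_2005:.3e} J")
print(f"2008 total:              {demand_2008:.3e} J")       # ~2.484e17 J
print(f"  servers (40%):         {0.40 * demand_2008:.3e} J")
print(f"  cooling (38%):         {0.38 * demand_2008:.3e} J")
```

Note that the 38% cooling share plus the 12% power-distribution share together make up the 50% attributed to supporting infrastructure.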
Given these data, we can understand the relative demands of data centers and where to focus efforts to reduce energy cost. Most of the energy cost of a modern data center goes into powering servers for computation and other functions. This cost will always be high because of the role data centers play in modern internet infrastructure, but many inefficiencies are being reduced using clever management algorithms and schemes. [1,7] The other major use of power, supporting infrastructure, can be reduced primarily by cutting cooling costs. [4-8] If the computational resources of a data center were more energy efficient, they would generate less heat, and less energy would in turn be consumed to cool the facility. Improving the energy efficiency of computational architecture would therefore doubly reduce the energy consumption of data centers.
An intuitive way to allocate the resources of a data center is as follows: as a request for computing resources arrives, an entire server is dedicated to completing the task, and it becomes available again only when the task finishes. This method requires no check beyond whether a server is busy (a Boolean check). However, not all of a server's resources are necessarily used while it is allocated to a task. As a result, instead of a few servers handling many tasks, many servers each handle only a few. Keeping more servers running in this way consumes much more power than consolidating the computing onto a few servers. [4,7] The most widely implemented solution is known as server virtualization. [4,7] The idea of this technique is to present each incoming task request with what appears to be an entire server, when the task is in fact being allocated only a portion of one. Many different tasks can thus be run efficiently on fewer physical servers, requiring fewer active servers at any time. [4,7] The exact algorithm for this technique varies across systems and remains an active area of research. [7]
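The consolidation idea can be illustrated with a minimal first-fit placement sketch. This is only a toy example of the packing problem, not the algorithm of any particular system from [7]; real virtualization schedulers must also handle migration, interference, and changing demands. All names here are illustrative, and resource demands are in arbitrary integer units.

```python
# A minimal sketch of server consolidation via first-fit placement.
# Each task requests a fraction of one server's capacity; tasks are
# packed onto as few servers as possible.

def consolidate(task_demands, capacity=10):
    """Assign each task (an integer resource demand) to the first
    server with enough spare capacity, opening a new server only
    when no existing one fits. Returns a list of per-server loads."""
    servers = []  # each entry: list of task demands on that server
    for demand in task_demands:
        for load in servers:
            if sum(load) + demand <= capacity:
                load.append(demand)
                break
        else:
            servers.append([demand])
    return servers

# Ten tasks that would occupy ten dedicated servers fit on three:
tasks = [3, 2, 5, 4, 1, 2, 3, 4, 1, 2]
placement = consolidate(tasks)
print(f"{len(placement)} servers instead of {len(tasks)}")
```

With one server per task, ten machines would idle at 10–50% utilization; first-fit packing runs the same workload on three nearly full machines, which is the power saving consolidation targets.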
Another way to increase energy efficiency is to schedule tasks across the many cores available in a server so as to reduce energy consumption. [1] This solution is only possible because of the invention of the multi-core processor. [1] Multiple cores allow different sub-tasks of a program to be computed in parallel, reducing overall computation time and energy consumption. [1] As with virtualization, researchers are still developing techniques to most efficiently allocate resources in multi-core computers of all scales. [9]
There are two approaches to reducing the cost of cooling the computational systems in data centers. One is to increase the efficiency of the cooling systems themselves, by better managing airflow and installing more efficient evaporative cooling systems. [10] The other is to reduce the heat generated in servers in the first place by increasing the efficiency of individual computer chips. [3] Because designing new chips requires a discussion of device physics, we limit the scope of this topic to noting that new chip architectures will become increasingly necessary as multi-core processing reaches the limits of its energy efficiency gains.
Demand for computing resources will only increase in the coming years. Consumer usage of the internet will grow as personal computing becomes cheaper. An integral part of internet infrastructure, the data center, must therefore be ready to provide efficient service. Many incentives exist to keep data centers energy efficient, the largest being monetary. To keep operating costs low, data centers have been using a variety of techniques to lower the energy cost of their servers and supporting infrastructure. These technologies have greatly reduced the energy cost of data centers, but we must keep innovating for future data centers to remain viable.
© Anudeep Mangu. The author warrants that the work is the author's own and that Stanford University provided no input other than typesetting and referencing guidelines. The author grants permission to copy, distribute and display this work in unaltered form, with attribution to the author, for noncommercial purposes only. All other rights, including commercial rights, are reserved to the author.
[1] M. Zakarya and L. Gillam, "Energy Efficient Computing, Clusters, Grids and Clouds: A Taxonomy and Survey," Sustain. Comput. Informatics Sys. 14, 13 (2017).
[2] V. Reddy et al., "Metrics for Sustainable Data Centers," IEEE Trans. Sustain. Comput. 2, 290 (2017).
[3] A. P. Chandrakasan and R. W. Brodersen, "Minimizing Power Consumption in Digital CMOS Circuits," Proc. IEEE 83, 498 (1995).
[4] E. R. Masanet et al., "Estimating the Energy use and Efficiency Potential of U.S. Data Centers," Proc. IEEE 99, 1440 (2011).
[5] J. G. Koomey, "Worldwide Electricity Used in Data Centers," Environ. Res. Lett. 3, 034008 (2008).
[6] H. Rong et al., "Optimizing Energy Consumption for Data Centers," Renew. Sustain. Energy Rev., 58, 574 (2015).
[7] T. C. Ferreto et al., "Server Consolidation with Migration Control for Virtualized Data Centers," Future Gener. Comp. Sys. 27, 1027 (2011).
[8] M. Dayarathna, Y. Wen, and R. Fan, "Data Center Energy Consumption Modeling: A Survey," IEEE Commun. Surv. Tut. 18, 732 (2016).
[9] K. M. Attia, M. A. El-Hossein, and H. A. Ali, "Dynamic Power Management Techniques in Multi-Core Architectures: A Survey Study," Ain Shams Eng. J. 8, 445 (2017).
[10] Y. Liu et al., "Energy Savings of Hybrid Dew-point Evaporative Cooler and Micro-channel Separated Heat Pipe Cooling Systems for Computer Data Centers," Energy 163, 629 (2018).