Computing servers generally have a narrow dynamic power range: even completely idle servers consume between 50% and 70% of their peak power. Since a server's utilization is the main influence on its power consumption, energy efficiency is achieved when the utilization of the servers that are powered on reaches its peak. To this end, enterprises generally adopt the following technique: consolidate as many workloads as possible via virtualization onto a minimum number of servers (i.e. maximize utilization) and power down the ones that remain idle (i.e. reduce power consumption). However, such an approach can severely impact server performance and reliability. In this paper, we propose a methodology to determine the ideal power-consumption and utilization values for a server without performance degradation. We accomplish this through a series of experiments using two types of workloads commonly found in enterprises: the TPC-H and SPECpower_ssj2008 benchmarks. We use the former to measure the number of queries answered successfully per hour for different numbers of users (i.e. Throughput@Size) in the VM. We use the latter to measure the power consumption and the number of operations successfully handled by a VM at different target loads. We conducted experiments varying the utilization level and the number of users for different VMs, and the results show that it is possible to reach a server's maximum power consumption without experiencing performance degradation when running individual or mixed workloads.
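The consolidation rationale above can be illustrated with a commonly used linear power model, in which a server draws a fixed idle baseline plus a share proportional to utilization. This is a minimal sketch, not the paper's methodology; the wattage and idle-fraction parameters are hypothetical, chosen only to fall within the 50–70% idle range the abstract cites.

```python
def server_power(utilization, p_peak=400.0, idle_fraction=0.6):
    """Estimate power draw (watts) with a linear model:
    idle baseline plus a utilization-proportional share of the remainder.
    p_peak and idle_fraction are illustrative, hypothetical values."""
    p_idle = idle_fraction * p_peak
    return p_idle + (p_peak - p_idle) * utilization

# Two lightly loaded servers vs. one consolidated server with the
# other powered down: the idle baseline is paid only once.
spread = 2 * server_power(0.30)        # two servers at 30% each
consolidated = server_power(0.60)      # one server at 60%, one off
```

Under this model the consolidated configuration draws substantially less total power for the same aggregate load, which is precisely why enterprises pack workloads onto fewer servers; the paper's contribution is determining how far utilization can be pushed before performance degrades.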
Number of pages: 12
Publication status: Published - 1 Jan 2014
Event: 5th ACM/SPEC International Conference on Performance Engineering, ICPE 2014 - Dublin, Ireland
Duration: 22 Mar 2014 → 26 Mar 2014