Clouds and data centres are significant consumers of power. There are, however, opportunities to optimise carbon cost here, since resource redundancy is provisioned extensively. Data centre resources, and subsequently the clouds which support them, are traditionally organised into tiers; switch-off activity when managing redundant resources therefore takes an approach that exploits the cost advantages of closing down entire portions of the network. We suggest, however, an alternative approach to optimising cloud operation while maintaining application QoS. Simulation experiments show that network operation can be optimised by selecting servers that process traffic at a rate more closely matching the packet arrival rate; resources that provision capacity in excess of that required may then be powered off for improved efficiency. This recognises that there is a server speed at which performance is optimised, and that operating above or below this rate will not achieve it. A series of policies has been defined in this work for integration into cloud management procedures; performance results from their implementation and evaluation in simulation show improved efficiency when servers are selected on the basis of these relationships.
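The rate-matching idea in the abstract can be illustrated with a minimal sketch. This is not the authors' actual policy implementation; the function names, power states, and numeric rates below are illustrative assumptions only. The sketch picks the slowest server whose service rate still exceeds the packet arrival rate, and marks servers with surplus capacity for power-off:

```python
# Hypothetical sketch of the server-selection idea described in the abstract:
# choose the server whose processing rate most closely matches (while still
# exceeding) the packet arrival rate, and power off the surplus servers.
# All names and rates are illustrative, not taken from the paper.

def select_server(arrival_rate, service_rates):
    """Return the index of the slowest server whose service rate still
    exceeds the arrival rate, or None if no server can keep up."""
    candidates = [(mu, i) for i, mu in enumerate(service_rates)
                  if mu > arrival_rate]
    if not candidates:
        return None
    return min(candidates)[1]

def power_plan(arrival_rate, service_rates):
    """Mark the rate-matched server active and all others powered off."""
    chosen = select_server(arrival_rate, service_rates)
    return ["active" if i == chosen else "off"
            for i in range(len(service_rates))]
```

For example, with an arrival rate of 50 packets/s and servers rated at 40, 60, and 120 packets/s, the 60 packets/s server is the closest match that still keeps up, so the plan is `["off", "active", "off"]`; the fastest server provisions capacity well beyond what is required and is powered off.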
Title of host publication: Unknown Host Publication
Number of pages: 8
Publication status: Published - 31 Dec 2012
Event: IEEE Second Symposium on Network Cloud Computing and Applications - London, UK
Duration: 31 Dec 2012 → …
Conference: IEEE Second Symposium on Network Cloud Computing and Applications
Period: 31/12/12 → …
- autonomic content distribution
- cloud data centre
- context awareness
- dynamic configuration
- energy tolerance
- policy-based management
- self-managing platform