Monthly bulletin of the IEEE Computer Society Special Technical Community on Sustainable Computing
Providing quick access to timely information on sustainable computing.
Moreover, the ACM e-Energy 2015 Chairs Shivkumar Kalyanaraman (IBM Research, India), Deva P. Seetharam (DataGlen Technologies, India) and Rajeev Shorey (TCS Innovation Labs, USA/India) have provided a report on the last edition of the conference and welcome the interested readers to the 2016 edition taking place next June in Waterloo, Ontario, Canada.
As usual, the newsletter closes with the list of upcoming conferences and workshops in the field of sustainable computing, kindly provided by our Vice Communication Chair Lizhe Wang.
Name: Omer Rana
Current position: Professor of Performance Engineering, School of Computer Science and Informatics, Cardiff University, UK
Alumni: Imperial College of Science, Technology and Medicine, University of London, UK. University of Southampton, UK.
Currently working on:
Investigating how Cloud computing systems, generally based on the use of large-scale, centralized data centres, can be extended towards the edges of the network. The motivation behind this work is to understand how edge device capability (growing in scale and complexity), in-network function (through recent support for ‘programmable’ networks) and data centre based systems could be combined. A number of latency-sensitive applications have requirements that could be addressed through such an infrastructure.
Favorite memory as a student:
As a graduate student at Imperial College, two presentations always remain in my mind -- the first by Douglas Adams (of “The Hitchhiker's Guide to the Galaxy” fame) and the other by Margaret Boden (University of Sussex -- on “creativity”). Both presentations were about the importance of taking a multi-disciplinary and multi-view perspective on problem solving, and how creative problem solving could be supported by overcoming constraints in thinking that were often imposed by one specific discipline. Imperial College, and London (as a city) in general, provided ample opportunities to engage in discussions with other scientists and those in the arts and humanities.
Could you share a research contribution from your work, and explain why it is something that you are particularly proud of?
Our recent work on developing marketplaces for federated clouds is an important achievement for me. It has arisen based on work with industry in the UK, and has been motivated by the need for multi-site, multi-actor coordination between different project partners for data and resource sharing. Understanding how a marketplace-based model could be at the center of a resource management framework was an important challenge to consider in this work.
What is one thing that makes your work exciting for you?
Engaging with students and research associates, who come to me with interesting ideas, in ways I have never thought of before, remains the raison d'être for working in academia. Universities in the UK are still “idea democracies”, and it is still exciting to come to work with the thought that I might learn something new today. I hope governments with an eye to over-regulate in times of austerity budgets do not interfere with this.
What do you think is the most important problem(s) to be solved in the next 10 years within sustainable computing?
It would be imprudent for me to identify potential problem(s) in sustainable computing in the next 10 years. However, given the trajectory of recent research, I believe energy efficiency in large scale computing systems (especially our data centres) remains an important challenge. With increasing reliance on off-loading computation to data centres, understanding how data centre computation can be shared across edge devices may be one way to achieve this -- but this will probably have a 3 to 4 year time horizon. Some of this effort could also be driven by better understanding how users engage with services made available to them through cloud-based systems -- and whether a multi-tiered architecture (e.g. use of Cisco’s “cloudlets” and edge-device hosted services) could be more effective in improving energy usage.
What courses and skills are most important for students wanting to work in this area?
This is a multi-perspective problem, combining aspects of “social” with the “technical”. Understanding service usage profiles and mobility contexts could be taught in courses on data science and social computing. More technical courses, focusing on distributed systems and networks, could provide the basis for understanding mechanisms for energy profiling and systems architectures. A combination of these is probably needed to develop more sustainable infrastructures.
More interviews can be found here.
Antonio Marotta, CINI - Consorzio Interuniversitario Nazionale per l’Informatica, Rome, Italy
Stefano Avallone, University of Naples Federico II, Italy
Energy efficiency has become an important concern across different areas of IT, since it is a cross-layer concept. In recent years, the scientific community has taken a strong interest in a new paradigm, Cloud Computing, whose objective is to increase the cost-efficiency of the underlying infrastructure (network, computing and storage resources) used to run virtualized services. Virtualized data centers need to find the right compromise between resource utilization, power consumption and the Quality of Service perceived by users deploying or accessing applications in the cloud. Indeed, there is a growing effort to introduce new metrics that evaluate the efficiency of resource utilization by comparing it with the related energy consumption. Cloud Computing and virtualization are the cornerstone of several mechanisms and techniques aimed at increasing energy efficiency: large data centers increasingly need to reduce their consumption, for both environmental and economic reasons. Among the techniques usually applied to minimize power consumption is Virtual Machine (VM) consolidation. By leveraging VM live migration [1], consolidation packs as many VMs as possible onto the fewest physical servers, with the objective of saving energy. Although this problem has been studied in the scientific literature, in this work we present a different VM consolidation model [2] that reallocates a certain number of VMs onto a minimal set of servers characterized by different energy efficiency levels.
The problem starts from an existing allocation scheme and consists of finding a new one, i.e. a subset of active servers needed to host the VMs, that minimizes a linear combination of (i) the power consumption of the new set of active servers, normalized to the total initial power, and (ii) the number of migrations, normalized to the total number of VMs. The new allocation must guarantee that the resource utilization at each server does not exceed the available capacity.
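As a quick illustration, the objective described above can be sketched as follows (in Python; the weights `alpha` and `beta` of the linear combination, and all numbers in the toy example, are hypothetical, since the text does not specify them):

```python
def objective(allocation, server_power, initial_power, n_migrations, n_vms,
              alpha=0.5, beta=0.5):
    """Score a candidate allocation: lower is better.

    allocation: dict server -> list of hosted VMs
    server_power: dict server -> power draw when active (watts)
    """
    # Servers left with no VMs are hibernated and contribute no power.
    active_power = sum(p for s, p in server_power.items() if allocation[s])
    return alpha * active_power / initial_power + beta * n_migrations / n_vms

# Toy example: one migration empties server s3, which can be hibernated.
server_power = {"s1": 200.0, "s2": 180.0, "s3": 150.0}
allocation = {"s1": ["vm1", "vm2"], "s2": ["vm3", "vm4"], "s3": []}
initial_power = 530.0  # all three servers were active before consolidation
score = objective(allocation, server_power, initial_power,
                  n_migrations=1, n_vms=4)
```

Both terms are normalized to [0, 1], so the weights directly trade power savings against migration cost.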
The proposed problem is solved through an algorithm based on the Simulated Annealing approach. The algorithm starts from an initial solution that coincides with the current allocation, and each solution is evaluated through the objective function. A perturbation phase generates a new allocation: it is not performed in a random fashion as in the original heuristic, but on the basis of a parameter that we call the “Migration Desirability”. The latter evaluates a VM migration by considering not only resource utilization, but also the power consumption of the source and destination servers.
We try to drive the algorithm to perform the best VM migrations, in terms of the number of hibernated servers and the resulting power consumption. The VMs to be migrated are chosen on the basis of the resource utilization of their source node: the lower the resource usage, the more convenient a candidate the source node is for hibernation. This value is obtained using a sigmoid-like function of the server’s utilization, as in Fig. 1.
For the VMs to migrate, different destination servers can be used: in this case, the selection is based on a value obtained with a Gaussian function of the physical machine utilization. Furthermore, migrations that have already been accepted or refused, or that would undo a previously accepted migration, are discarded. When computing the desirability, we also take into account that a migration should reduce power consumption as much as possible at the source node while increasing it as little as possible at the destination, so that the most energy-efficient servers are always selected. The migration with the highest desirability value is selected and performed.
The second phase of the algorithm rates the new allocation by means of the objective function, computing the difference between the current objective value and the new one. If the difference is negative we have an improvement, since the objective is to be minimized, and the migration is unconditionally accepted. Otherwise, migrations can still be accepted even if they do not decrease the objective function, because they may enable further consolidation in the near future; we keep track of the number of accepted migrations that gave no improvement. The acceptance probability is a decreasing function of two factors. The first is the “temperature”, chosen high enough at the beginning of the algorithm to ensure a high probability of accepting new migrations; this value decreases with a cooling factor and with the number of VM moves accepted without improving the objective function. The second is the difference in objective function values: the larger this difference, the lower the probability of performing the migration.
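The two desirability shapes described above can be sketched as follows (Python; the midpoint, target and width parameters are illustrative assumptions, not the values used in the paper):

```python
import math

def source_desirability(u, midpoint=0.5, steepness=10.0):
    """Decreasing sigmoid of source-server utilization u in [0, 1]:
    lightly loaded servers are better candidates for emptying and
    hibernation. midpoint/steepness are illustrative parameters."""
    return 1.0 / (1.0 + math.exp(steepness * (u - midpoint)))

def destination_desirability(u, target=0.7, width=0.15):
    """Gaussian of destination utilization: prefer destinations whose
    utilization sits near an assumed target, avoiding both nearly idle
    and nearly saturated servers."""
    return math.exp(-((u - target) ** 2) / (2.0 * width ** 2))
```

With these shapes, a VM on a 10%-loaded server moving to a 70%-loaded server scores far higher than the reverse, which is exactly the bias the perturbation phase needs.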
At the end of the evaluation, if the migration is accepted it is inserted into a list; otherwise it is undone and recorded in a sort of “tabu” list.
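The acceptance step follows the classical simulated-annealing pattern, which can be sketched as follows (Python; the paper's exact probability function and cooling schedule may differ, so this is a standard Metropolis-style form, not the authors' implementation):

```python
import math
import random

def accept_migration(delta, temperature, rng=random.random):
    """Decide whether to keep a candidate migration.

    delta: new objective value minus current objective value.
    Improvements (delta < 0) are always accepted; worsening moves are
    accepted with probability exp(-delta / temperature)."""
    if delta < 0:
        return True
    return rng() < math.exp(-delta / temperature)

def cool(temperature, cooling_factor=0.95, non_improving_accepts=0):
    """Geometric cooling, sped up by the number of accepted moves that
    brought no improvement (an illustrative schedule)."""
    return temperature * cooling_factor ** (1 + non_improving_accepts)
```

At high temperature almost any move is accepted; as the temperature drops, only near-improving migrations survive, so the search gradually settles into a consolidated allocation.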
The Simulated Annealing based algorithm was implemented in Java and compared with two known consolidation approaches, namely First Fit Decreasing (FFD) consolidation and Sercon [3]. In Fig. 2, the x axis shows each configuration, which consists of the number of active nodes before consolidation and the number of VMs. Compared to FFD, our heuristic achieves a reduction of between 27% and 37% in the number of active nodes and between 31% and 44% in power consumption. Compared to Sercon, the reduction is in the range of 9%-17% for the number of active nodes and 14%-24% for power consumption. The consolidation model was also solved optimally with IBM CPLEX [4], with the aim of computing the gap in solution quality and execution time of the heuristic. In particular, Fig. 3 shows that the worsening in objective function value is between 3.2% and 5.4% with respect to the optimum, while the reduction in execution time is 89% in the worst case. The scalability of the algorithm can also be tuned by choosing the source and destination desirability thresholds, even though the accuracy of the heuristic solution is then reduced.
[1] D. Kapil, E. Pilli, and R. Joshi, “Live virtual machine migration techniques: Survey and research challenges,” in 2013 IEEE 3rd International Advance Computing Conference (IACC), 2013, pp. 963–969.
[2] A. Marotta and S. Avallone, “A Simulated Annealing Based Approach for Power Efficient Virtual Machines Consolidation,” in 2015 IEEE 8th International Conference on Cloud Computing (CLOUD), New York City, NY, 2015, pp. 445–452.
[3] A. Murtazaev and S. Oh, “Sercon: Server Consolidation Algorithm using Live Migration of Virtual Machines for Green Computing,” IETE Technical Review, vol. 28, no. 3, p. 212, 2011.
[4] IBM CPLEX. [Online]. Available: http://www-01.ibm.com/software/commerce/optimization/cplex-optimizer/
Monica Vitali, DEIB, Politecnico di Milano, Milan, Italy
Call for participation: The author of this article is co-chair of the EnBIS (Energy-awareness and Big Data Management in Information Systems) 2016 workshop, held in conjunction with CAiSE 2016. For further information about the topics and to register for the event, please visit http://enbis-2016.deib.polimi.it/
General interest in environmental issues is continuously growing, and Information Technology (IT) is one of the main contributors to pollution and energy consumption. The impact of technology and its growth rate are evident at the everyday scale, and even more significant at a larger one. Attention towards Green IT and Energy Efficiency (EE) has been growing [1,2,3,4], exploring different perspectives (usage of resources, applications, and networks), driven by the growing energy demand of IT.
Managing the energy efficiency of complex systems such as data centers and clouds is a difficult task, requiring significant effort and the coordination of several perspectives. In previous work [5], we analyzed the application perspective, without neglecting related issues such as physical and virtual resource management.
Figure 1: A goal oriented model for managing green applications 
As can be observed in Fig. 1, the goal layer is composed of the metrics used to evaluate the energy efficiency and the quality of service of the monitored application. These metrics are not independent: a modification in the state of one goal can affect another goal. This is depicted in the model with links connecting goals. These relations can be obvious or hidden. To discover them, we model the goal layer as a Bayesian Network, in which each goal is associated with a Conditional Probability Table describing the probability of its satisfaction or dissatisfaction given the state of its parents. The structure of the Bayesian Network is learned automatically by an algorithm that explores historical values of the metrics. Knowing the relations between the goals of the model makes it possible to perform what-if analysis, to explore the model for indirect adaptation (improving a goal by modifying one of its parents), and to predict side effects of adaptation strategies. Learning the Bayesian Network is not an easy task, and the computation time is exponential in the number of goals considered. However, the conditional independence property of the Bayesian Network can be exploited to perform a distributed learning approach [6], shown in Fig. 2. The distributed algorithm is justified by the observation that variables belonging to one VM/host are conditionally independent of those belonging to other VMs/hosts. The performance of the learning algorithm is significantly improved in terms of computation time.
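As a minimal illustration of how such a Conditional Probability Table supports what-if analysis, consider a hypothetical two-goal fragment (Python; the goal names and all probabilities are invented for illustration, not values learned from real monitoring data):

```python
# Two-goal fragment: EnergyEfficiency (EE) is a parent of ResponseTime (RT).
p_ee = 0.6                          # P(EE satisfied)
cpt_rt = {True: 0.4, False: 0.9}    # CPT: P(RT satisfied | EE state)

def p_rt_satisfied(p_parent):
    """Marginal P(RT satisfied), summing over the parent's two states."""
    return p_parent * cpt_rt[True] + (1.0 - p_parent) * cpt_rt[False]

baseline = p_rt_satisfied(p_ee)
# What-if analysis: an adaptation expected to raise P(EE) to 0.9 would
# lower P(RT satisfied), exposing a trade-off between the two goals
# before the adaptation strategy is actually applied.
what_if = p_rt_satisfied(0.9)
```

In the real model the same computation is propagated through a learned network of many goals, but the principle is identical: CPTs turn the qualitative links between goals into quantitative predictions of adaptation side effects.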
Figure 2: Learning the Bayesian Network with a distributed algorithm 
Figure 3: Learning actions effect over goals 
The proposed methodology enables the management of applications running in a distributed environment in a dynamic, flexible, and adaptive way. However, new challenges are arising and have to be analyzed. For instance, the Internet of Things era provides new information coming from unconventional sources that can be analyzed and explored to support the management of green applications. This means dealing with heterogeneous, high-volume information that needs to be integrated with the conventional information collected by the monitoring system.
[1] A. Beloglazov, R. Buyya, Managing overloaded hosts for dynamic consolidation of virtual machines in cloud data centers under quality of service constraints, IEEE Trans. Parallel Distrib. Syst. 24 (7) (2013) 1366–1379.
[2] A. Beloglazov, R. Buyya, Y.C. Lee, A. Zomaya, A taxonomy and survey of energy-efficient data centers and cloud computing systems, Adv. Comput. 82 (2) (2011) 47–111.
[3] K. Bilal, S.U.R. Malik, O. Khalid, A. Hameed, E. Alvarez, V. Wijaysekara, R. Irfan, S. Shrestha, D. Dwivedy, M. Ali, et al., A taxonomy and survey on green data center networks, Future Gener. Comput. Syst. 36 (2014) 189–208.
[4] D. Borgetto, M. Maurer, G. Da-Costa, J.-M. Pierson, I. Brandic, Energy-efficient and SLA-aware management of IaaS clouds, in: Proceedings of the 3rd International Conference on Future Energy Systems: Where Energy, Computing and Communication Meet, 2012.
[5] M. Vitali, B. Pernici, U. O’Reilly, Learning a goal-oriented model for energy efficient adaptive applications in data centers, Information Sciences 319 (2015) 152–170.
[6] M. Vitali, Managing energy efficiency and quality of service in cloud applications using a distributed monitoring system, in: 23rd Italian Symposium on Advanced Database Systems, SEBD 2015, Gaeta, Italy, June 14–17, 2015, pp. 24–35.
Shivkumar Kalyanaraman, IBM Research - India
The Sixth ACM International Conference on Future Energy Systems (ACM e-Energy 2015) was hosted for the first time in Asia, in Bangalore from July 14-17, 2015. The conference is the premier venue for researchers working in the broad areas of computing and communication for smart energy systems, and in energy-efficient computing and communication systems.
Overall, over 85 papers were submitted to the regular and challenge tracks, as well as 12 to the poster/demo track. The Technical Program Committee and the chairs selected a small subset of them (the acceptance rate for the main track was 22.8%) through a meticulous review process. In addition to a single track of full papers and challenge papers, the program included keynote addresses by eminent researchers, a panel discussion and a poster/demo session. The program, keynotes and papers are available online at: http://conferences.sigcomm.org/eenergy/2015/program.php
The technical program of the conference was informative and thought provoking with sessions on Demand Response, Electric Vehicles (including batteries), Challenge Papers, Energy Analytics, Energy Markets, Smart Grids, and a Poster/Demo session. The best paper award was closely contested, and won by the paper: “Bugs in the Freezer: Detecting Faults in Supermarket Refrigeration Systems Using Energy Signals,” by S. Srinivasan, A. Vasan, V. Sarangan and A. Sivasubramanian from Tata Consultancy Services Limited, India (and Penn State, USA).
The day before the main conference featured three concurrent full-day workshops:
1. Energy Efficient Data Centers (E2DC): The fourth international workshop on Energy Efficient Data Centers focused on topics in energy-efficient and energy-aware data centers, considering data centers as active participants in smart grids and smart cities.
2. Distributed Energy Networks (DEN): The Distributed Energy Networks (DEN) workshop explored technologies that will shift the paradigm of energy networks from a conventional centralized top-down approach to a decentralized peer-to-peer one in which consumers, producers and prosumers can actively participate.
3. Smart Grid Communication, Computation and Control (C3): The C3 workshop brought together representatives from the communication, control and computation communities to discuss collaborative progress towards smart grid solutions, and to elucidate limitations and opportunities of emerging smart grid proposals.
The conference had keynote speakers from across the globe: Prof. Iven Mareels (University of Melbourne, Australia), Prof. Ashok Jhunjhunwala (IIT Madras, India), Bruce Nordman (Lawrence Berkeley Laboratory, USA).
Prof. Iven Mareels’ keynote gave a view of electric grid innovation from the last-mile and controls perspective, along with the Australian experience. Specifically, he spoke on how demand and supply in the last mile of the grid (the low-voltage distribution network) can be coordinated through a power-matching strategy that respects the physical infrastructure's operational limits. Prof. Mareels gave examples of how many more solar PVs and Electric Vehicles (EVs) can be absorbed at the last mile using demand-control strategies, compared to the classical case where supply follows demand. He showed results indicating that decentralized demand shaping allows significantly more PV and EV penetration than would be possible without such demand moderation.
Prof. Ashok Jhunjhunwala’s keynote examined India’s demand-supply gap in power (and the resulting power cuts), as well as the almost 80 million homes that have no grid connectivity. He presented a bold vision in which, using DC appliances as a demand-side intervention and decentralized solar as a supply-side intervention, the gap can be considerably reduced. An interesting aspect of his work is that it operates within the constraints of the Indian grid and, with minimal change, offers 24x7 supply, at least at a limited power level, to each home, while at the same time creating a demand pull for solar-DC.
Bruce Nordman’s keynote emphasized nanogrids for local power distribution within buildings. In this model, individual devices are organized into nanogrids (each a single domain of power), with nanogrids networked to each other, to local generation, and to a building-wide microgrid. Nanogrids inherently work well with DC power, for efficiency and other benefits, though they should also be extended to AC systems as feasible. This new model for electricity distribution in buildings is implemented with a layered model of power that isolates communication about power from communication for control, function, and all other purposes.
Workshop keynote speakers included Srinivasan Keshav (University of Waterloo, Canada, in DEN), Amod Ranade (Schneider Electric, India, in E2DC), and Prof. Anurag Kumar (IISc, in C3 workshop). The conference also included an exciting panel discussion, moderated by Prof. Sarvapali Ramchurn, on Smart Energy Systems: Where East meets West with Prof. Keshav, Prof. Jhunjhunwala and Dr. Venkatesh Sarangan.
We are grateful to ACM, Tata Consultancy Services, IBM Research and Rolta for providing generous financial support. We thank the members of the organizing and steering committees for making e-Energy 2015 a productive and successful event.
The 2016 edition of the ACM e-Energy conference will be held in Waterloo, Ontario, Canada during June 21-24, 2016. Given that it is being held in North America for the first time since 2013, and in view of the new Trudeau government's emphasis on green energy and clean technology, we hope to have a strong turnout of experts who work at the intersection of energy and IT. The main conference will be co-located with four workshops: on Energy Efficient Data Centres; on Communications, Computation and Control for Resilient Smart Energy Systems; on Electric Vehicle Systems, Data, and Applications; and on Energy-aware Simulation. More details can be found at http://conferences.sigcomm.org/eenergy/2016/
The following venues are requesting submissions on subtopics related to sustainable computing or IT for sustainability.
Journal and Special Issue Call For Papers
Journal Papers Due
Sustainable Computing (Open)
IEEE Transactions on Sustainable Computing (Open)
Conference, Workshop & Symposium Call For Participation