The Hasso Plattner Institute (HPI) and the Massachusetts Institute of Technology (MIT) collaborate on tackling global societal challenges through the joint research program “Designing for Sustainability”. One joint project aims to make Artificial Intelligence more environmentally sustainable. Prof. Ralf Herbrich, Head of the Chair for Artificial Intelligence and Sustainability at HPI, MIT Media Lab Professor Deblina Sarkar, PhD student Shivam Kajale and PhD student Nicolas Adler are members of the multidisciplinary team. This project will develop energy-efficient paradigms for computing. We spoke to Prof. Sarkar about her research – and about the collaboration with HPI.
Why do we need new paradigms for computing?
Prof. Sarkar: Artificial Intelligence (AI) is a powerful technology that has seen tremendous growth in its application space and accuracy over the last two decades, and this trend is only accelerating. This growth has been fuelled by ingenious developments in software algorithms and assisted by extensively parallelized computing hardware. However, the field has now matured to a point where issues that could earlier be overlooked are being duly noticed. The difficulty of sustaining the growth of Artificial Intelligence, and of computing in general, on the traditional computer design paradigm, i.e., the von Neumann architecture, has come into focus. Apart from the global challenges, a lack of adequate scaling in the energy dissipation of computing hardware will start limiting the complexity of tasks that can be performed on battery-powered mobile devices. Thus, there is an immediate need to design alternative, energy-efficient paradigms for computing to solve this impending problem.
What numbers do you have? With the current AI growth, how much electricity will be consumed by computers in 2030 – if we do nothing?
Prof. Sarkar: A study published in 2018 estimated that, at the current growth rate of AI systems, an alarming 30% of the world’s primary electricity will be consumed by computers in 2030, even if we are able to keep up with Moore’s law scaling. This has direct environmental implications, as electricity generation is a major contributor to global greenhouse gas emissions. According to the Electric Power Annual 2020 report, 60.9% of electricity in the U.S. was generated from fossil fuels, contributing about 25% of total U.S. greenhouse gas emissions (U.S. GHG inventory report 2019). These numbers highlight how growing computing infrastructures are on track to become a major source of greenhouse gas emissions. Another study, from 2016, estimates that by 2040 the total energy spent on computation will reach ≈ 10²⁷ J, far greater than the total energy humans may be able to generate by then.
With your research, you are rethinking computer design in terms of materials, devices, architecture, and algorithms. What is key here?
Prof. Sarkar: The key here is the unique fusion of the diverse expertise of the two groups. Through our proposed project, we intend to use our collective but complementary expertise across materials science, electronic devices, computing architectures, and algorithms to provide a well-rounded solution to this impending problem. The computing ecosystem encompasses a wide intellectual space, including materials and electronic devices, systems architecture, and algorithms. A radical shift from the existing computing framework, as is warranted and proposed, requires innovation as well as compatibility across all these domains. Hence, arriving at a solution will require close collaboration between experts from all of them. The team we have created for this project between MIT and HPI is ideal for this work. My lab focuses on using emerging material systems and principles of physics to build scalable and highly energy-efficient electronic devices for computer hardware. Complementary to this, Prof. Ralf Herbrich’s group has expertise in the theory and algorithms of approximate and stochastic computing systems for machine learning and AI. Prof. Herbrich’s perspective as an algorithm developer helps clearly identify the requirements for hardware that can run these algorithms, while my team’s knowledge of materials, applied physics, and nanotechnology makes it possible to design, fabricate, and characterize suitable devices. Together, both groups will theoretically model the designed devices based on collected data, identify relevant computational problems for system-level simulation, and benchmark energy-delay performance.
What do you want to achieve in this joint research with HPI?
Prof. Sarkar: Our project for enabling the energy-efficient growth of AI aligns with several of the United Nations Sustainable Development Goals, such as climate action, sustainable consumption and, by extension, affordable and clean energy. As indicated above, computation alone may account for up to 10% of worldwide greenhouse gas emissions by the end of this decade. This can be avoided by transitioning to energy-efficient computing paradigms such as those proposed in our work. By providing clear benchmarking of the improvements in energy efficiency and latency associated with our stochastic computing devices, we can accelerate this transition at scale, so that large-scale computing services such as artificial intelligence can be consumed in an environmentally and practically sustainable manner. Finally, by capping the impending energy demands of the computation sector, we can ease the pressure on global energy supplies and help ensure affordable and clean energy.
Thank you for the interview!
Find all projects of the joint research program “Designing for Sustainability”, co-created by the MIT Morningside Academy for Design and the Hasso Plattner Institute (HPI), here.