Artificial intelligence (AI) methods and applications consume energy and resources in every phase of their life cycle. This begins with problem definition and exploratory investigations and continues through data collection and processing, model development and training, to the application and adaptation of the models. These phases often require large amounts of data, which in turn entail complex training and application scenarios. At the same time, the wealth of available (environmentally relevant) data offers considerable opportunities to use AI to solve environmental and sustainability problems. It is therefore all the more important that AI itself does not become a resource driver, that its energy and resource consumption is made transparent, and that valid measurement methods and metrics exist to support AI developers and users with regard to sustainable AI, for example through process models and recommendations for action. Since the computing power required to train large AI models has increased more than 300,000-fold, innovative approaches and new algorithms are needed, as well as robust methods that make the environmental impact assessable and transparent for developers and users.
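To make the notion of "measurable" concrete, the following minimal sketch illustrates one direct measurement approach: reading the CPU package energy counter exposed by the Linux powercap/RAPL interface before and after a workload. It is only one of several possible approaches, covers only the processor package (not GPUs, memory or peripherals), and the sysfs path and wrap-around handling shown are assumptions that hold on typical Intel Linux systems with read access to the counters.

```python
# Sketch: direct energy measurement via the Linux powercap/RAPL sysfs interface.
# Assumption for illustration: an Intel Linux host where the path below exists
# and is readable; this counter covers the CPU package only.
RAPL_DOMAIN = "/sys/class/powercap/intel-rapl:0"  # typically the package-0 domain

def read_energy_uj() -> int:
    """Return the cumulative energy counter of the RAPL domain in microjoules."""
    with open(f"{RAPL_DOMAIN}/energy_uj") as f:
        return int(f.read())

def read_max_range_uj() -> int:
    """Return the value at which the energy counter wraps around."""
    with open(f"{RAPL_DOMAIN}/max_energy_range_uj") as f:
        return int(f.read())

def measure_energy_joules(workload):
    """Run `workload` and return (result, consumed package energy in joules)."""
    max_range = read_max_range_uj()
    start = read_energy_uj()
    result = workload()
    end = read_energy_uj()
    delta_uj = end - start if end >= start else end + max_range - start  # counter wrap
    return result, delta_uj / 1e6

if __name__ == "__main__":
    # Stand-in workload for a training or inference step.
    _, joules = measure_energy_joules(lambda: sum(i * i for i in range(10_000_000)))
    print(f"CPU package energy: {joules:.2f} J")
```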
[1] Further developed from Guldner, A. (2022): Assessing the Resource- and Energy Efficiency of AI-based Cyber-Physical Systems. Talk at the workshop "AI and Cyber-Physical Process Systems" (AI-CPPS@KI2022), 45th German Conference on Artificial Intelligence. Available online.
Overall, decisions made during AI development, both in the course of software development and in the selection of hardware components, strongly influence energy and resource consumption once the systems are in use (see Fig. 1 for examples). Because of the complexity of the systems, the large number of individual decisions in the development process (e.g. frequency of data collection, data transfer, system architecture, structure of the AI models, selection of algorithms) and the many variables in practical application, simple methods for determining energy and resource efficiency are not sufficient. Instead, a reference model specifically geared towards AI development and application is required, one that structures the relationships and interdependencies in a meaningful way and makes them as universally applicable and transparent as possible. With the help of key figures from the reference model, AI developers and users will be able to apply process models and take decisions in the development process with greater confidence from a sustainability perspective.
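Design decisions such as model size or algorithm choice can already be compared during development with lightweight software-based trackers. The sketch below uses the open-source CodeCarbon library as one possible tool for this, not necessarily the one adopted in the project; the two candidate architectures and the placeholder train() function are assumptions for illustration only.

```python
# Sketch: comparing the estimated energy/emissions of two design decisions
# with the open-source CodeCarbon tracker. Model sizes and the train()
# stand-in are illustrative assumptions, not part of the project's model.
from codecarbon import EmissionsTracker

def train(hidden_units: int) -> None:
    # Placeholder for an actual training run; in practice this would call the
    # chosen framework (PyTorch, TensorFlow, scikit-learn, ...).
    total = 0
    for _ in range(hidden_units * 10_000):
        total += 1

for hidden_units in (128, 512):                 # two candidate architectures
    tracker = EmissionsTracker(project_name=f"mlp_{hidden_units}")
    tracker.start()
    train(hidden_units)
    emissions_kg = tracker.stop()               # estimated kg CO2-eq for this run
    print(f"{hidden_units} hidden units: {emissions_kg:.6f} kg CO2-eq")
```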
The project addresses these problems and develops and tests criteria and metrics for optimizing AI systems along their life cycle with regard to resource and energy requirements. This includes the handling of (training) data, the choice of methods and frameworks, hardware and sensors, the communication infrastructure and the software architecture, as well as the question of how to train and apply or adapt the model efficiently. The project also aims to answer how the resource consumption of AI processes can be at least roughly quantified along the life cycle and which measurement methods are suitable for this. To this end, existing methods and procedures will be compared, systematically distinguishing between the different levels of effects (direct and indirect, and where relevant induced and rebound effects). In addition, hardware components that previous studies have not yet examined in a life cycle assessment approach (e.g. sensors, actuators, energy-harvesting components) will be systematically included in the AI reference model. The model will be evaluated on the basis of case studies and ultimately disseminated.
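As a rough illustration of what such criteria and metrics might look like in practice, the sketch below derives two simple key figures (energy per 1,000 inferences and accuracy per kWh) from hypothetical measured values. The field names, the metrics themselves and the numbers are assumptions for illustration, not the indicators to be developed in the project.

```python
# Sketch: deriving simple efficiency key figures from measured values.
# All fields, metrics and numbers are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class MeasuredRun:
    training_energy_kwh: float   # measured energy for the full training phase
    inference_energy_kwh: float  # measured energy for a batch of inferences
    num_inferences: int          # size of that inference batch
    accuracy: float              # model quality on a held-out test set

def key_figures(run: MeasuredRun) -> dict:
    """Derive simple efficiency indicators from one measured run."""
    energy_per_1k = 1000 * run.inference_energy_kwh / run.num_inferences
    accuracy_per_kwh = run.accuracy / (run.training_energy_kwh + run.inference_energy_kwh)
    return {
        "energy_per_1k_inferences_kwh": energy_per_1k,
        "accuracy_per_kwh": accuracy_per_kwh,
    }

# Example: comparing two hypothetical model variants with the same key figures.
small = MeasuredRun(1.2, 0.05, 10_000, 0.91)
large = MeasuredRun(9.8, 0.20, 10_000, 0.93)
for name, run in (("small", small), ("large", large)):
    print(name, key_figures(run))
```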