Volume 2023, Number September (2023), Pages 1-9
Digital Economy: AI for Environmental Intelligence in the Digital Economy
Jagabondhu Hazra, Shantanu Godbole, Kommy Weldemariam, Maja Vuković
In today's digital economy, companies face climate-related damage to their assets, disruptions to supply chains and operations, and increasing pressure from consumers and regulators to meet their sustainability goals. Researchers need better tools to support climate research; businesses need better technologies to accelerate their sustainable digital transformation journeys. These include reimagining operations, supply chains, emissions management, and ESG and climate-risk reporting with the help of emerging technologies so that organizations can meet their sustainability goals. In this paper, we focus on some of the proposed approaches for helping enterprises decarbonize their operations as they embrace digital transformation.
Enterprises are under significant pressure from investors, consumers, and policymakers to act on climate change mitigation by disclosing their greenhouse gas (GHG) emissions and committing to reducing emissions from industrial activities, including operations, manufacturing, logistics, supply chains, and computing. More than 20% of the world's largest companies have set long-term, net-zero targets but need technology help to measure, track, and reduce their emissions while building operational resiliency to the effects of climate change. Unfortunately, many enterprises lack an integrated view of how their critical business operations and processes, such as supply chain, asset management, or infrastructure operations, contribute to their carbon footprint, which makes it difficult for them to embark on a well-planned journey to reduce and optimize their carbon emissions. There is a unique opportunity to address this problem using highly differentiated technology offerings (aka "accelerated discoveries") that will support enterprises interested in carbon performance with enterprise-grade support and integration.
Enterprise Decarbonization in the Digital Economy
We have developed a suite of solutions for managing an enterprise's carbon performance by leveraging AI, optimization, and remote sensing capabilities [1, 2, 3]. Figure 1 shows a high-level view of the technology stack.
The four core elements of the technology stack are building blocks, data and AI enhancements, decarbonization accelerators, and enterprise imperatives. Each is described in detail below.
The Carbon Performance Engine is built on top of IBM's Environmental Intelligence Suite, a geospatial big data platform, and provides six GHG Protocol-compliant carbon accounting APIs: the stationary emissions, fugitive emissions, mobile emissions, location-based emissions, market-based emissions, and transport and distribution APIs. Key benefits of this engine include:
- Leveraging AI. Get built-in, AI-driven augmentation capabilities with anomaly detection and natural language processing for handling data, such as the various names for fugitive gases in different countries.
- Accelerated digital transformation. Accelerate the transformation of emission data to carbon equivalents by automating your data conversion using these APIs.
- Data-driven managed reference data. Access a hassle-free process for updating the reference data, no matter how the guidance and standards evolve. The APIs automatically update the data points for our customers.
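At its core, the conversion these APIs automate is activity data multiplied by an emission factor (and, for fugitive gases, a global warming potential). The sketch below illustrates that arithmetic; the factors, GWP values, and function names are illustrative placeholders, not the engine's actual API or managed reference data.

```python
# Minimal sketch of GHG-protocol-style carbon accounting, analogous in spirit
# to the stationary and fugitive emissions APIs described above. All factors
# below are placeholder values for illustration only.

EMISSION_FACTORS_KG_CO2_PER_UNIT = {
    "natural_gas_m3": 1.9,   # stationary combustion (placeholder factor)
    "diesel_litre": 2.7,     # mobile combustion (placeholder factor)
}

GWP_100 = {"R-134a": 1430, "R-410A": 2088}  # illustrative 100-year GWPs

def stationary_emissions(fuel: str, quantity: float) -> float:
    """Convert fuel consumption into kg CO2e via an emission factor."""
    return quantity * EMISSION_FACTORS_KG_CO2_PER_UNIT[fuel]

def fugitive_emissions(refrigerant: str, leaked_kg: float) -> float:
    """Convert refrigerant leakage into kg CO2e via its GWP."""
    return leaked_kg * GWP_100[refrigerant]

total = stationary_emissions("natural_gas_m3", 1000) + fugitive_emissions("R-134a", 2.0)
print(round(total, 1))  # 1900.0 + 2860.0 = 4760.0 kg CO2e
```

The managed-reference-data benefit above corresponds to keeping tables like these up to date behind the API, so callers never maintain factors themselves.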
Data and AI Enhancements
Data quality is a common pain point across enterprises. The accuracy of carbon footprint estimation heavily relies on data quality. To address this pressing issue, we have developed a set of AI tools that address some of the data gaps. For example, we use NLP to extract the Chemical Abstracts Service (CAS) Number of a chemical substance from the textual description of the chemical. Next, we validate the CAS Number and then use it to identify the chemical compound using our database. This helps to resolve the chemical name unambiguously and leads to accurate carbon accounting of fugitive emissions.
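The extraction-and-validation step can be sketched as follows. The regex and the CAS check-digit rule are standard; the tiny lookup table is a toy stand-in for the chemical database mentioned above.

```python
import re

# Extract a CAS Number from free text, validate its check digit, and resolve
# the chemical. The CAS check digit is the weighted sum of the other digits
# (rightmost digit weight 1, increasing leftward) modulo 10.

CAS_PATTERN = re.compile(r"\b(\d{2,7}-\d{2}-\d)\b")

def valid_cas(cas: str) -> bool:
    digits = cas.replace("-", "")
    body, check = digits[:-1], int(digits[-1])
    total = sum(int(d) * w for w, d in enumerate(reversed(body), start=1))
    return total % 10 == check

# Placeholder stand-in for the chemical compound database.
CHEMICAL_DB = {"7732-18-5": "water", "75-45-6": "chlorodifluoromethane (R-22)"}

def resolve_chemical(description: str):
    """Return the first validated, known chemical named in the text, if any."""
    for cas in CAS_PATTERN.findall(description):
        if valid_cas(cas):
            return CHEMICAL_DB.get(cas)
    return None

print(resolve_chemical("Refrigerant leak, HCFC-22, CAS 75-45-6, 3 kg"))
```

Because the check digit catches most transcription errors, invalid numbers can be flagged for review instead of silently producing a wrong emission factor.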
GHG downscaling. Though GHG satellites offer a scalable method of measuring GHG emissions, they are limited by coarse spatial and/or temporal resolutions. Further, cloud and dust occlusion and nighttime measurements result in missing data; for example, only 7% of the retrieved Orbiting Carbon Observatory-2 (OCO2) satellite data are valid and usable for analysis. In this work, we have addressed the data gaps and low resolution of GHG satellite data, in particular from the OCO2 satellite, by applying fixed rank kriging (FRK), a statistical interpolation technique, to OCO2 level-two (L2) data to generate fine-resolution level-three (L3) spatiotemporal maps. Compared to the OCO2 satellite's native spatiotemporal resolution of 16 days and 1.2x2.2 km2, we have increased the temporal resolution to one day and the spatial resolution to 1x1 km2. Daily spatial maps at 111-km, 11.1-km, and 1-km grid resolutions have been generated and validated across 13 Total Carbon Column Observing Network (TCCON) sensor sites. As part of the validation, we obtained root mean square error (RMSE) values below 2.5 ppm, with input data ranging from 400 to 410 ppm, across the 13 TCCON sites in 2019.
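FRK itself fits basis-function covariance models, which is beyond a short sketch; as a much simpler stand-in for the gap-filling idea, the code below fills a fine target grid from sparse valid retrievals using inverse-distance weighting. The coordinates and XCO2 values are synthetic.

```python
import math

# Synthetic valid L2 soundings: (lat, lon, XCO2 in ppm).
observations = [
    (10.0, 20.0, 405.0),
    (10.5, 20.5, 407.0),
    (11.0, 20.0, 404.0),
]

def idw(lat, lon, obs, power=2.0):
    """Inverse-distance-weighted estimate at a grid cell (simplified
    stand-in for the FRK interpolation described above)."""
    num = den = 0.0
    for olat, olon, val in obs:
        d = math.hypot(lat - olat, lon - olon)
        if d < 1e-9:          # grid cell coincides with a sounding
            return val
        w = 1.0 / d ** power
        num += w * val
        den += w
    return num / den

# Fill a coarse 0.5-degree grid over the scene, including cells with no data.
grid = [(10.0 + 0.5 * i, 20.0 + 0.5 * j) for i in range(3) for j in range(2)]
filled = {cell: idw(*cell, observations) for cell in grid}
print({cell: round(v, 2) for cell, v in filled.items()})
```

Unlike this toy, FRK also produces uncertainty estimates, which is what makes the L3 maps suitable for downstream validation against TCCON ground truth.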
Most organizations face a challenge in obtaining high-quality data to support data-driven innovation. Existing approaches focus on scenarios with small missing rates and are limited by the availability of domain-specific data. To address this challenge, we have developed a scalable transfer learning-based imputation framework, which leverages the bidirectional property of time series and learns the forward and backward temporal dependencies using a long short-term memory (LSTM) architecture. We investigated the model's efficacy on open-source building energy consumption data (~1.2M samples) spread across different geographies (U.S., Canada, UK). We experimented with model transferability across geographies and building usage types and analyzed sensitivity to different missing rates. The model demonstrates significantly improved performance compared to baseline models from the literature, maintaining an error rate of less than 10% in estimating GHG emissions from building energy consumption, even when up to 60% of the data is missing, across all test cases considered.
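The production framework uses a bidirectional LSTM; as a minimal, model-free stand-in, the sketch below imputes gaps by running a last-observation fill forward and backward through the series and averaging the two passes, which captures the same "use both temporal directions" idea without any learned model.

```python
# Bidirectional gap filling for a time series with None marking missing
# values (a simplified illustration, not the LSTM framework itself).

def directional_fill(series):
    """Carry the last observed value forward over None gaps."""
    out, last = [], None
    for v in series:
        last = v if v is not None else last
        out.append(last)
    return out

def bidirectional_impute(series):
    fwd = directional_fill(series)                # forward pass
    bwd = directional_fill(series[::-1])[::-1]    # backward pass
    imputed = []
    for orig, f, b in zip(series, fwd, bwd):
        if orig is not None:
            imputed.append(orig)                  # keep observed values
        elif f is not None and b is not None:
            imputed.append((f + b) / 2)           # average both directions
        else:
            imputed.append(f if f is not None else b)  # edge of series
    return imputed

# Hourly building energy readings (kWh) with a 60% gap, as in the stress test above.
readings = [12.0, None, None, None, 20.0]
print(bidirectional_impute(readings))  # [12.0, 16.0, 16.0, 16.0, 20.0]
```

An LSTM replaces the naive carry-forward with learned temporal dynamics, which is what allows accuracy to hold up at high missing rates and to transfer across geographies.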
AI Decarbonization Accelerators
We have developed a set of decarbonization AI-based accelerators for enterprise carbon reduction and management.
Emission Performance Accelerator. This tool identifies carbon hotspots in the enterprise supply chain based on the carbon performance of enterprise assets and operations. We use IBM's Carbon Accounting Engine1 to estimate the carbon footprint of each individual asset or operation. We then leverage a variant of multi-dimensional subset scan (MDSS)2 and the Python toolkit PyOD to detect anomalous subgroups in spatiotemporal enterprise data. This asset compares and contrasts emissions and associated contextual parameters (weather, asset age, electricity usage, etc.) to find the factors behind the regulatory risk of assets and helps identify quantitative and qualitative opportunities for carbon reduction using explainable AI.
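MDSS and PyOD are the actual tools named above; as a self-contained illustration of the subgroup-hotspot idea, the sketch below flags asset subgroups whose mean emission intensity is well above a robust fleet-wide baseline. The records, grouping key, and threshold are synthetic assumptions.

```python
import statistics

# Synthetic asset records: (region, building, kg CO2e per m^2 per year).
records = [
    ("EU", "A", 10.2), ("EU", "B", 9.8), ("EU", "C", 10.5),
    ("EU", "D", 10.1), ("US", "E", 24.0), ("US", "F", 23.5),
]

def hotspot_subgroups(records, factor=1.5):
    """Flag subgroups whose mean intensity exceeds factor x the median
    asset intensity (a simple stand-in for MDSS-style subset scanning)."""
    baseline = statistics.median(v for _, _, v in records)
    groups = {}
    for region, _, value in records:
        groups.setdefault(region, []).append(value)
    return [region for region, values in groups.items()
            if statistics.mean(values) > factor * baseline]

print(hotspot_subgroups(records))  # ['US']
```

A real subset scan searches over combinations of attributes (region, asset age, weather band, and so on) rather than a single fixed key, which is what makes the flagged subgroups explainable.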
Emission Optimization Accelerator. We have developed a multi-objective optimization framework to help enterprises make informed decisions for supply-chain optimization. The developed asset is being used to enhance two of IBM's existing offerings—Sterling Fulfilment Optimizer (SFO) and Maximo MRO Inventory Optimization (MRO IO)—based on two repeatable solution scenarios: (i) As-is carbon savings, wherein the current carbon performance of existing solutions is compared with that of baseline solutions; and (ii) explicit carbon optimization, where solutions that explicitly balance the trade-off between economic costs and carbon emissions are recommended.
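The cost/carbon trade-off behind the "explicit carbon optimization" scenario can be sketched with a weighted-sum scalarization, one common way to explore a multi-objective frontier. The candidate fulfilment plans and their cost and emissions figures below are synthetic.

```python
# Candidate fulfilment plans: plan -> (cost in USD, kg CO2e). Synthetic data.
plans = {
    "air_freight":   (120.0, 90.0),
    "truck_direct":  (100.0, 60.0),
    "rail_consolid": (110.0, 25.0),
}

def best_plan(carbon_weight):
    """Minimize (1 - w) * cost + w * carbon for a weight w in [0, 1].
    Sweeping w traces out non-dominated cost/carbon trade-offs."""
    return min(
        plans,
        key=lambda p: (1 - carbon_weight) * plans[p][0] + carbon_weight * plans[p][1],
    )

for w in (0.0, 0.5, 1.0):
    print(w, best_plan(w))
# 0.0 -> truck_direct (pure cost), 1.0 -> rail_consolid (pure carbon)
```

In practice the two objectives live on different scales, so production solvers normalize them or use epsilon-constraint methods rather than raw weighted sums; the "as-is carbon savings" scenario corresponds to evaluating the w=0 solution's emissions against a baseline.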
Land Emission Reduction Accelerator. IBM Research has also developed an ensemble of static calculators and dynamic approaches, namely physical models and surface-level estimates of downscaled GHG satellite data, to estimate the carbon footprint of the growing stage of food supply chains. This asset leverages the spectrum of data requirements, spatiotemporal resolutions, and accuracy levels of static and dynamic approaches to accurately calculate emissions due to the growing stage, identify hotspots, and take steps to reduce GHG emissions. The team leveraged novel sensor and IoT technologies, coupled with a new approach to inversion modeling, to locate and quantify methane sources, and a geospatial platform to complement the methane analytics with remote observation analysis.
Illustrative Enterprise Imperatives
We have been integrating the above core capabilities with IBM's product portfolio to enable differentiated solutions for enterprise clients and ecosystem partners. These include carbon accounting for hybrid cloud tenants, sustainable order fulfilment optimization (Sterling SFO), carbon-aware inventory optimization (MRO IO3), carbon-aware repair scheduling, carbon performance of Maximo HSE (Health Safety and Environment) assets,4 and Farm-to-Gate carbon tracking in IBM Food Trust,5 all of which are part of the IBM Sustainability Software suite.
Carbon-aware Sterling Fulfilment Optimizer. In this context, we explored two scenarios. The first is "as-is carbon savings," where we deployed the back-end engine to compute carbon savings due to SFO as-is (i.e., without accounting for emissions in fulfilment optimization) and demonstrated total as-is carbon savings of approximately 10% over 40 days on a real client data set. The second is "explicit carbon optimization," where we applied multi-objective optimization algorithms to further optimize multiple emission factors.
Carbon-aware MRO IO. We demonstrated significant emission savings with a minimal cost impact on representative MRO IO data. Using this capability, enterprise clients can estimate the increase in MRO costs for a desired emission reduction. Conversely, they can also estimate the possible emission reduction achievable within a given budget.
Emission Performance Accelerator. We have tested this capability using enterprise data from 500-plus buildings across the globe and identified, with explainability, the buildings that have carbon hotspots. Capitalizing on existing inventory management capabilities (such as in IBM Maximo6), we are validating our proposed approaches to identify assets with high regulatory risk due to excessive leakage of refrigerants. Given that grocery supermarkets are located in densely populated areas, repercussions from violations could immediately have a deleterious business impact, providing yet another motivation to prevent, monitor, and repair leaks as early as possible. With this capability, large supermarket operators could save millions of dollars annually. Finally, we also used this capability in a palm oil use case to identify areas with a high carbon footprint per ton of palm oil produced.
We can no longer reverse all the effects of climate change before our planet begins to heal. But we can take crucial steps to reduce our impact as we work toward a more sustainable future, such as accurately measuring an organization's carbon footprint and identifying ways to reduce emissions and optimize across business processes ranging from asset management to infrastructure to supply chains across industries. We have designed new AI-enhanced, general-purpose carbon footprint reporting, tracking, and optimization capabilities to help enterprises account for, reduce, and optimize emissions from their business processes and supply chains as they embark on their digital transformation journeys. For example, our new carbon footprint engine uses AI and natural language processing algorithms to move carbon accounting and optimization from manual aggregation and measurement processes to an automated method, making it easier to solve data quality issues, analyze hotspots to identify "super" emitters, and achieve multi-objective optimization that balances monetary and emission metrics side by side.
Jain, A., Guruprasad, R., Hazra, J., Kayongo, I., Kulkarni, K., and Syam, H. A Framework for GHG Hot-spot Identification and Recommendation of Farming Practices using Cohort Analysis. 2021 INFORMS Annual Meeting, 2021.
Jagabondhu Hazra is a senior technical staff member leading the climate and sustainability initiatives at IBM Research. Under his leadership, IBM has designed, developed, and commercialized an AI-powered, scalable, and standards-compliant Carbon Performance Engine, an industry-agnostic generalized tool to accurately estimate the carbon footprint (Scopes 1, 2, and 3) at the enterprise level across all types of asset classes, identify GHG hotspots, and carry out what-if analysis to reduce the carbon footprint. He was also responsible for transformative innovation in agribusiness and led development of a geospatial big data platform called IBM Watson Decision Platform for Agribusiness (WDPA). Prior to this, he was instrumental in setting up the UBD-IBM Research Center in Brunei, worked as adjunct faculty at University Brunei Darussalam, and was appointed technical lead of a complex eight-entity (industrial and academic) EU-funded project called OPTi. He is an IEEE senior member, has published more than 80 research papers in international journals and conferences, and has filed more than 60 patents with the USPTO. For his technical contributions, he has received awards including the INAE Young Engineer Award, the Best of IBM Award, the IBM Eminence and Excellence Award, and the IBM Outstanding Technical Achievement Award.
Shantanu Godbole leads IBM Research's vision and execution around sustainability. He focuses on building innovative technology solutions that help enterprises and their supply chains with climate resiliency and carbon emissions accounting, optimization, and reduction. He has previously led applied AI and blockchain research in the agriculture, retail, financial services, and education industries. His focus is on researching and experimenting in-market with clients to develop first-of-a-kind solutions that then make their way into IBM product and service offerings. He led the founding and growth of the IBM Center for Blockchain Innovation in Singapore, building solutions in the domains of trade and finance. He has been with IBM Research since 2006 and spent two years in IBM's analytics services business as a global analytics architect, building and deploying text mining, speech analytics, and predictive analytics solutions. He is a senior technical staff member at IBM and a member of the IBM Academy of Technology. He has been recognized with several awards, including an IBM Corporate Award, the Best of IBM Award (twice), multiple Outstanding Technical Achievement Awards, an IBM Research Client Award, and a 10-year most influential paper award at PAKDD. He is part of the UN Advisory Group on Cross-border Paperless Trade Facilitation. He holds a Ph.D. in machine learning from IIT Bombay and is active in professional service and mentoring.
Dr. Kommy Weldemariam is a distinguished engineer at IBM Research. He is passionate about researching and solving societal, business, and scientific problems using data, artificial intelligence, and cloud platforms, while embracing open standards and open-source technologies. His expertise lies in digital transformation, software systems, and innovation. He leads global teams in developing cutting-edge tech for public services, sustainability, climate change, and healthcare. Dr. Weldemariam is a master inventor with an impressive portfolio of more than 300 U.S. patents. He has also authored more than 100 scientific publications, focusing on data-driven approaches for digitization, agriculture, healthcare, education, supply chains, sustainability, climate change, and secure software systems design. Kommy's international background is a cornerstone of his work, having lived, studied, and worked in North America, Africa, Europe, and Asia. This unique perspective enables him to create impactful solutions and technologies that resonate globally. His contributions have been featured in major media outlets, including Forbes, Fortune, The Economist, and TechCrunch. Beyond his technical achievements, Kommy is deeply committed to mentoring and developing the next generation of technology leaders. He has supervised and mentored hundreds of engineers, researchers, and scientists, nurturing their growth and fostering innovation in the field.
Maja Vuković is an IBM Fellow responsible for technical and research strategy for AI-driven application modernization. She works on cutting-edge technologies to deliver breakthrough innovations to IBM products and solutions. Her main research interests include AI planning, machine learning, data science, and AI for Code. Her research focuses on AI solutions to challenges in software engineering (AI for Code) and continuous modernization through AIOps. Maja is an IBM Master Inventor with 200 granted patents. She is a senior member of IEEE and was awarded the IEEE Women in Services Computing Award. Maja received her Ph.D. from the University of Cambridge for her work on context-aware service composition using AI planning.
4 Maximo HSE - https://www.ibm.com/docs/en/mhs-and-em
5 IBM Food Trust - https://www.ibm.com/blockchain/solutions/food-trust
Copyright 2023 held by Owner/Author.
The Digital Library is published by the Association for Computing Machinery. Copyright © 2023 ACM, Inc.