The Consumer Packaged Goods (CPG) industry has always been driven by innovation, with the goal of meeting consumers’ ever-changing expectations and preferences. In recent years, the integration of Artificial Intelligence (AI) and Explainable AI (XAI) has emerged as a game changer, revolutionizing this sector’s research and development (R&D) processes.
Especially in the cosmetic and food segments, the formulation process for creating new product variants or entirely new products often requires timelines incompatible with market deadlines.
Matilde: accelerate innovation and optimize formulations with Explainable AI
Intellico’s Matilde solution is an Explainable AI platform designed to support the formulation process from idea to finished formulation, significantly reducing time-to-market. Its simulation engine autonomously predicts optimal ingredient combinations, enabling users to project the end attributes of the new product (such as hydration level) from the ingredients’ composition and intrinsic properties (such as density). As a result, Matilde effectively acts as a “virtual laboratory”, capable of modeling many candidate solutions and sending only a small number of particularly promising formulations to final validation.
Matilde combines the predictive power of AI models with the logic of “Explainability”. Beyond returning a prediction, Matilde lets users explore the formulations it identifies as similar to the reference one, highlighting the factors (ingredients or properties) with the greatest impact on the product’s final characteristics. The prediction is therefore not just a score on a scale: it is explained through all the components the model has identified, and both the result and the way the model reasoned are presented visually through graphs.
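Matilde’s internal models are not public, so the following is only a minimal sketch of the general idea, assuming a tabular dataset where each row is a past formulation (hypothetical ingredient fractions plus an intrinsic property such as density) and the target is a measured attribute such as a hydration score. All column names and values are illustrative.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical historical formulations: ingredient fractions plus one
# intrinsic property (density), with a measured hydration score as target.
rng = np.random.default_rng(0)
data = pd.DataFrame({
    "glycerin_pct": rng.uniform(0, 20, 300),
    "aloe_pct": rng.uniform(0, 10, 300),
    "oil_pct": rng.uniform(0, 30, 300),
    "density_g_ml": rng.uniform(0.9, 1.1, 300),
})
# Synthetic target just for the sketch: hydration driven mainly by glycerin.
data["hydration"] = (
    0.6 * data["glycerin_pct"] + 0.3 * data["aloe_pct"]
    - 0.1 * data["oil_pct"] + rng.normal(0, 1, 300)
)

X, y = data.drop(columns="hydration"), data["hydration"]
model = GradientBoostingRegressor().fit(X, y)

# Predict the hydration of a new candidate formulation...
candidate = pd.DataFrame([{"glycerin_pct": 12.0, "aloe_pct": 4.0,
                           "oil_pct": 8.0, "density_g_ml": 1.0}])
print("predicted hydration:", model.predict(candidate)[0])

# ...and inspect which inputs drive the model's predictions overall.
for name, imp in zip(X.columns, model.feature_importances_):
    print(f"{name}: {imp:.2f}")
```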
With Matilde, AI does not replace the formulators, but accompanies them through the various stages of research and development, allowing them to:
1) explore past projects, identifying previous iterations to use as a baseline for the new product;
2) understand which factors impact the expected result of the formulation, and therefore focus experimentation on the most relevant variables;
3) simulate new formulations and see how close the output is to the reference formula (a minimal sketch of this step follows the list).
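As a rough illustration of step 3, the sketch below samples random candidate formulations, scores them with a stand-in predictive model, and keeps the few whose predicted attribute lands closest to a reference target. Every name, number, and the model itself are hypothetical; this is the “virtual laboratory” idea in miniature, not Matilde’s actual engine.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Stand-in model: in practice this would be the trained attribute predictor.
X_hist = rng.uniform(0, 20, (200, 3))          # three ingredient fractions
y_hist = X_hist @ [0.6, 0.3, -0.1] + rng.normal(0, 1, 200)
model = LinearRegression().fit(X_hist, y_hist)

target_hydration = 9.0   # attribute of the reference formula

# Simulate many candidate formulations and rank them by how close their
# predicted attribute is to the reference target.
candidates = rng.uniform(0, 20, (10_000, 3))
preds = model.predict(candidates)
best = np.argsort(np.abs(preds - target_hydration))[:5]
for i in best:
    print(candidates[i].round(1), "->", round(preds[i], 2))
```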

Explainable AI: Making Artificial Intelligence transparent and ethical
Explainable AI (XAI) is a branch of artificial intelligence that aims to bridge the gap between the opacity of complex machine learning models and the need for transparency and interpretability. Traditional AI models, such as deep neural networks, deliver consistently strong performance but are not transparent enough for decision makers to understand the factors that influenced their outputs (the “black box” problem).
XAI is a rapidly evolving field. An analysis of the Scopus database1 shows a steadily growing focus on the topic in the scientific literature, with the volume of publications rising sharply. Researchers and practitioners have been actively exploring approaches and methodologies to enhance the interpretability and transparency of AI models. A wide range of XAI techniques has been developed, including methods that rank the most relevant inputs (e.g., feature importance analysis) and methods that assess the impact of variables through scenario analysis (e.g., counterfactual explanations, SHAP values), among others.
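As one concrete example, the open-source shap library implements SHAP values for common model types. The snippet below (synthetic data, unnamed hypothetical features) attributes a tree model’s prediction additively to its inputs:

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (300, 3))
y = 2 * X[:, 0] + X[:, 1] ** 2 + rng.normal(0, 0.05, 300)

model = RandomForestRegressor(n_estimators=100).fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])

# One additive contribution per feature; together with the base value
# they sum to the model's prediction for this row.
print("base value:", explainer.expected_value)
print("contributions:", shap_values[0])
```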
The European Union’s Artificial Intelligence Act (AIA), currently under discussion, stipulates that high-risk AI systems must be transparent and explainable to users. This aligns with Gartner2’s prediction that, by 2025, 30% of government and large-enterprise contracts will require the use of explainable and ethical AI.
Why we care about Explainable AI: keeping humans in the driver’s seat
What prevents AI from being widely adopted in organizations is the limited trust that decision makers place in models. As AI technologies become increasingly integrated into companies’ processes, it becomes crucial to understand and trust the decisions made by AI systems. According to a survey run by IBM3 in 2022, 4 out of 5 companies declared they could not apply AI at scale because they were unable to explain how their AI arrived at a decision. Barriers to AI adoption include the need to avoid unintentional biases, to monitor the deterioration of model performance over time (drift), to better understand the importance of input data, and to explain why the model returns a given prediction.
Explainable AI addresses this challenge by providing human-readable, understandable explanations for the decisions made by AI models, allowing users to comprehend how and why certain conclusions are reached. Users can trace the logic behind specific predictions, enrich the knowledge base behind their own decision-making, or feed corrections back to the model to improve its prediction criteria. The result is an empowered decision maker who remains in the driver’s seat, with a better understanding of the underlying determinants. This is especially valuable in sectors where the relations among factors and variables are hard to analyze with traditional techniques, such as healthcare and chemical formulation.
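Tracing the logic behind a prediction can start with something as simple as a what-if probe: hold the inputs fixed, vary one factor, and watch how the output moves. The toy sketch below does this by hand around a hypothetical two-input classifier; dedicated counterfactual methods instead search for the smallest input change that reaches a target outcome.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy binary model: approve/reject based on two hypothetical inputs.
X = rng.uniform(0, 1, (500, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)
model = LogisticRegression().fit(X, y)

base = np.array([[0.4, 0.3]])
print("baseline P(positive):", model.predict_proba(base)[0, 1].round(3))

# What-if: sweep the first input while holding the second fixed.
for v in np.linspace(0.0, 1.0, 6):
    probe = base.copy()
    probe[0, 0] = v
    p = model.predict_proba(probe)[0, 1]
    print(f"input_1={v:.1f} -> P(positive)={p:.3f}")
```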
XAI in the CPG sector: use cases
Companies in the CPG sector can also leverage XAI to optimize material selection and enhance marketing strategies.
As regards material selection, choosing the right materials for CPG products is critical to ensuring quality, safety, and sustainability. Here, AI plays a crucial role in helping researchers and engineers make informed choices. To select the best components for a product, machine learning algorithms can analyze the properties of alternative materials, historical performance data, and consumer feedback. Furthermore, AI-powered simulations and virtual testing can forecast how materials will behave and interact under various conditions, speeding up R&D and minimizing the need for expensive physical prototypes. By optimizing material selection, CPG firms can improve product durability, eco-friendliness, and overall consumer satisfaction.
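In practice, the selection step often reduces to a scored trade-off across criteria. The sketch below is purely illustrative: the material names, the model-predicted durability scores, the eco scores, and the weights are all hypothetical placeholders for values that would come from trained models, lifecycle databases, and product requirements.

```python
import pandas as pd

# Hypothetical candidate materials with model-predicted durability scores
# and an eco score from a lifecycle database (all values illustrative).
materials = pd.DataFrame({
    "material": ["PET", "rPET", "PLA", "glass"],
    "pred_durability": [0.86, 0.80, 0.65, 0.95],
    "eco_score": [0.35, 0.70, 0.80, 0.55],
    "cost_index": [1.0, 1.1, 1.4, 1.8],
})

# Simple weighted trade-off; cost counts against a candidate.
weights = {"pred_durability": 0.5, "eco_score": 0.3, "cost_index": -0.2}
materials["score"] = sum(materials[c] * w for c, w in weights.items())

print(materials.sort_values("score", ascending=False))
```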
Concerning marketing strategies, AI technologies help consumer goods retailers derive important insights from consumer behavior patterns, purchase histories, and social media interactions in today’s data-rich world. Machine learning algorithms can analyze immense amounts of data to build a comprehensive picture of consumer preferences, trends, and sentiment. These insights enable businesses to adjust their campaigns, improve product positioning, and deliver personalized experiences to targeted audiences. Predictive analytics further assists in forecasting demand, minimizing waste, and streamlining inventory management.
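At its simplest, demand forecasting is trend extrapolation over historical sales. The sketch below fits a linear trend to hypothetical weekly unit sales for one SKU; production systems would use seasonality-aware models, but the idea is the same.

```python
import numpy as np

# Hypothetical weekly unit sales for one SKU.
sales = np.array([120, 135, 128, 150, 162, 158, 175, 181])
weeks = np.arange(len(sales))

# Fit a linear trend and extrapolate four weeks ahead.
slope, intercept = np.polyfit(weeks, sales, 1)
future = np.arange(len(sales), len(sales) + 4)
forecast = slope * future + intercept
print(forecast.round())
```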
Conclusions
With the advent of the EU’s Artificial Intelligence Act, we are at the dawn of a new era in which Explainable AI replaces the ‘black box’ with a transparent view of how models reach their conclusions. XAI harmonizes the collaboration between humans and AI, blending the power of AI with the need for human oversight. It ensures that humans not only remain in the driver’s seat but are equipped with a clear map of AI’s decision-making. By turning the daunting ‘black box’ into an accessible, interactive tool, XAI amplifies the potential for widespread AI integration across organizations.
In the dynamic world of Consumer Packaged Goods, XAI has taken center stage, revolutionizing everything from material selection to marketing strategies. Matilde is the tool created by Intellico to facilitate the creation of new products, reducing time-to-market by offering a virtual laboratory where new product hypotheses can be simulated and analyzed.
As we stand on the cusp of this AI revolution, one thing is clear – with Explainable AI, the future of artificial intelligence is not just promising, it’s understandable.
Sources:
1. Scopus (2021).
2. Gartner (2023), “Beyond ChatGPT: The Future of Generative AI for Enterprises”.
3. IBM (2022), “Global AI Adoption Index”.
Contributors:
Riccardo Turrisi Grifeo
Matteo Mainetti