Graph Neural Networks and Explainable AI: what’s the value


In the realm of machine learning, one fascinating concept gaining traction is Graph Neural Networks (GNNs). These techniques have garnered attention due to their unique ability to model and analyze relations among data. Indeed, they can be applied to explore the patterns and determinants behind a prediction.

Let's take a closer look.

Understanding GNNs

Graph Neural Networks are a class of neural networks designed to operate on graph-structured data, where graphs consist of nodes (representing entities) and edges (representing relationships between entities). Unlike traditional neural networks, which typically process data represented in a grid or vector format, GNNs can effectively capture the complex relationships and dependencies present in graph data. GNNs operate by iteratively aggregating information from neighboring nodes, enabling them to discern patterns and make predictions based on the underlying graph topology.
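The neighbor-aggregation step described above can be sketched in a few lines. This is a minimal, illustrative example of one message-passing layer (mean aggregation followed by a learned linear map), not a production GNN; the graph, features, and weights are all made up for demonstration.

```python
import numpy as np

# Toy graph: 4 nodes, undirected edges 0-1, 0-2, 2-3.
# A is the adjacency matrix with self-loops added, so each node
# also keeps its own features during aggregation.
A = np.array([
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

# One 2-dimensional feature vector per node.
X = np.array([
    [1.0, 0.0],
    [0.0, 1.0],
    [1.0, 1.0],
    [0.0, 0.0],
])

def gnn_layer(A, X, W):
    """One message-passing step: average each node's neighborhood
    features, then apply a learned linear map W and a ReLU."""
    deg = A.sum(axis=1, keepdims=True)   # neighbor counts (incl. self)
    H = (A @ X) / deg                    # mean aggregation
    return np.maximum(H @ W, 0.0)        # transform + non-linearity

W = np.array([[1.0, -1.0], [0.5, 1.0]])  # illustrative weights
H1 = gnn_layer(A, X, W)
print(H1.shape)  # (4, 2): one updated embedding per node
```

Stacking several such layers lets information flow across multi-hop neighborhoods, which is how a GNN learns from the graph topology rather than from isolated data points.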

One of the key strengths of GNNs lies in their ability to handle diverse types of data, including social networks, molecular structures, and recommendation systems. Traditional neural networks struggle to capture such relational information efficiently, making GNNs a preferred choice for tasks requiring contextual understanding and relational reasoning [2].

Moreover, at Intellico we have built generative GNNs that can predict the presence of edges (connections) without requiring an a priori rule to build the graph. In this case, the model unveils both the presence and the intensity of the relations.
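A common way to predict edges from learned node embeddings is to score each candidate pair, for example with a dot product passed through a sigmoid. The sketch below illustrates that generic idea only; it is not Intellico's actual model, and the embeddings are invented for demonstration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def edge_scores(H, candidate_pairs):
    """Score each candidate edge (i, j) as sigmoid(h_i . h_j):
    values near 1 suggest a likely link, values near 0 an unlikely
    one. The score itself serves as the predicted intensity."""
    return {(i, j): sigmoid(H[i] @ H[j]) for i, j in candidate_pairs}

# Node embeddings as they might come out of a trained GNN encoder.
H = np.array([
    [ 1.0,  0.9],   # node 0
    [ 0.9,  1.0],   # node 1: similar to node 0
    [-1.0, -0.8],   # node 2: dissimilar
])

scores = edge_scores(H, [(0, 1), (0, 2)])
print(scores[(0, 1)] > scores[(0, 2)])  # True: 0-1 is the stronger link
```

In practice the encoder and the scoring function are trained jointly, so the model learns both where edges exist and how strong each relation is.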

GNNs have found applications across various domains. In social network analysis, GNNs are employed to predict user behavior, identify influential nodes, and detect communities within the network. In computational biology, Google DeepMind has been at the forefront of GNN research, pioneering applications in fields like drug discovery and protein folding. Their AlphaFold system, which builds on graph-based neural architectures, made headlines by accurately predicting the 3D structures of proteins, showcasing the immense potential of these approaches in scientific domains.

GNNs and explainability

Explainable AI is the practice of making artificial intelligence (AI) systems understandable and transparent to humans. It aims to clarify why AI models make specific decisions or predictions, especially in complex models.

As we discussed previously, XAI increases trust, accountability, and interpretability by providing insights into AI reasoning, which is crucial in domains where AI impacts important decisions and regulatory compliance is required.

Ultimately, XAI enhances the relationship between AI systems and humans by making AI decisions more accessible and comprehensible.

GNNs can interpret complex relationships in graph-structured data and provide explanations for model decisions by analyzing how nodes or edges influence predictions.

One key aspect of GNNs is their explainability, which is crucial for understanding model decisions and gaining insights into underlying data dynamics. GNNs offer both global and local feature importance, allowing users to interpret model predictions at different levels of granularity. They propagate information through the graph to highlight influential nodes or connections.


Global feature importance measures the overall impact of each feature on the model's output, providing a high-level understanding of the data's significance. Local feature importance, on the other hand, captures the importance of features for individual data points, enabling detailed analysis of specific instances. This dual perspective enhances the interpretability of GNNs, making them valuable tools for various applications [8].
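One simple, model-agnostic way to obtain the local importances described above is occlusion: perturb one input at a time and measure how much the prediction moves. The sketch below uses a hypothetical linear model for clarity; dedicated GNN explainers work on the same principle, masking nodes or edges instead of plain features.

```python
import numpy as np

def local_importance(predict, x, baseline=0.0):
    """Occlusion-style local explanation: replace one feature at a
    time with a baseline value and record how much the model's
    output shifts. Larger shifts mean higher local importance."""
    base_pred = predict(x)
    scores = []
    for i in range(len(x)):
        x_masked = x.copy()
        x_masked[i] = baseline
        scores.append(abs(base_pred - predict(x_masked)))
    return np.array(scores)

# Hypothetical model: feature 0 matters three times more than
# feature 1, and feature 2 is ignored entirely.
predict = lambda x: 3.0 * x[0] + 1.0 * x[1] + 0.0 * x[2]

x = np.array([1.0, 1.0, 1.0])
print(local_importance(predict, x))  # [3. 1. 0.]
```

Averaging such local scores over a whole dataset yields a global importance ranking, which is exactly the dual (local vs. global) perspective discussed above.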

Better and faster decision-making

Thanks to Explainable AI, Graph Neural Networks (GNNs) can significantly enhance and expedite decision-making processes. When AI models provide explanations for their specific decisions or predictions, particularly in complex models, human decision-makers can make more informed and actionable decisions.

Here are some example use cases:

Clients’ conversion in retail

In the realm of retail sales, GNNs offer promising prospects for predicting the factors that influence consumer behavior and sales performance. By leveraging the interconnected nature of data in retail environments, GNNs can capture complex relationships between factors such as customer demographics, product attributes, and marketing strategies. For instance, GNNs can analyze customer purchase histories, social interactions, and geographical locations to identify patterns and trends that affect sales outcomes. This holistic approach enables retailers to gain actionable insights into the drivers of sales fluctuations and optimize their strategies accordingly, ultimately improving business performance and customer satisfaction [9].
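The starting point for such a model is typically a bipartite customer-product graph built from purchase records. The sketch below shows one way to index the two node types and store edges as index pairs, which is the usual input format for a bipartite GNN; the records and field names are purely illustrative.

```python
# Hypothetical purchase records: (customer, product) pairs.
purchases = [
    ("alice", "shoes"), ("alice", "bag"),
    ("bob",   "shoes"),
    ("carol", "bag"),   ("carol", "hat"),
]

# Assign an integer index to each node of each type, then store
# purchase edges as (customer_index, product_index) pairs.
customers = sorted({c for c, _ in purchases})
products  = sorted({p for _, p in purchases})
c_idx = {c: i for i, c in enumerate(customers)}
p_idx = {p: i for i, p in enumerate(products)}
edges = [(c_idx[c], p_idx[p]) for c, p in purchases]

print(len(customers), len(products), len(edges))  # 3 3 5
```

Node features (demographics on the customer side, attributes on the product side) are then attached to each index, and the GNN propagates information across the purchase edges to predict conversion.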

Better products, faster

Product development is a matter of "trial and error". AI can significantly speed up the discovery process by predicting the final properties of a formulation from its ingredients, as shown by Matilde (https://intellico.matilde.ai/). GNNs can explore the relations among ingredients, suggesting which ingredients to add to or remove from a formulation. Moreover, they can visualize clusters of similar products, supporting the identification of product families and portfolio platforms.

Prioritizing capacity in healthcare

In healthcare, Graph Neural Networks can be employed to prioritize capacity when screening patients. By constructing a patient graph where nodes represent individuals and edges represent shared medical histories, genetic relationships, or environmental factors, GNNs can identify patterns indicative of disease susceptibility or progression. By processing imaging data from MRI, CT scans, and EEGs, GNNs excel at identifying abnormalities and aiding early detection of diseases like Alzheimer's and Parkinson's. This enables healthcare providers to intervene proactively, personalize treatment plans, and improve patient outcomes [7].
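The patient graph described above can be derived directly from shared attributes. A minimal sketch, with entirely hypothetical records: connect two patients whenever they share at least one risk factor, and let the edge weight count how many factors they share.

```python
from itertools import combinations

# Hypothetical records: patient -> set of risk attributes.
patients = {
    "p1": {"diabetes", "smoker"},
    "p2": {"diabetes"},
    "p3": {"smoker", "family_history"},
    "p4": {"family_history"},
}

# Create a weighted edge for every pair of patients with at least
# one attribute in common; the weight is the overlap size.
edges = {
    (a, b): len(patients[a] & patients[b])
    for a, b in combinations(sorted(patients), 2)
    if patients[a] & patients[b]
}
print(edges)  # p1-p2, p1-p3, and p3-p4 are connected
```

A GNN trained on such a graph can then propagate risk signals between connected patients, which is what makes the screening prioritization both data-driven and inspectable.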

This is a prime case for using GNNs, as they are not only highly accurate but also explainable. While these models cannot replace clinicians, their explainability aids healthcare professionals in organizing and delivering better care.

In conclusion, Graph Neural Networks (GNNs) represent a paradigm shift in artificial intelligence, offering unparalleled capabilities in understanding and modeling relationships among data points. These networks are pivotal for supporting Explainable AI, enhancing decision-making while ensuring humans remain central to the process. From scientific breakthroughs to optimizing everyday services, the real-world applications of GNNs are vast and diverse. As research in this field continues to advance, we can expect GNNs to play an increasingly integral role in shaping the future of technology and society.

Graph techniques, and their explainable potential, also apply to generative AI. If you want to learn more: https://intellico.ai/blog/transforming-customer-care-with-generative-ai/

Contributors:

  • Sara Uboldi, Head of Solutions
  • Dymiargani Milono, Business and data analyst

References:

  1. Battaglia, P. W., et al. (2018). Relational inductive biases, deep learning, and graph networks. https://arxiv.org/pdf/1806.01261
  2. Wu, Z., et al. (2020). A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 32(1), 4-24.
  3. Wang, D., et al. (2019). Deep graph library: Towards efficient and scalable deep learning on graphs. https://par.nsf.gov/servlets/purl/10311680
  4. Derrow-Pinion, A., She, J., Wong, D., Lange, O., Hester, T., Perez, L., Nunkesser, M., Lee, S., Guo, X., Wiltshire, B., Battaglia, P. W., Gupta, V., Li, A., Xu, Z., Sanchez-Gonzalez, A., Li, Y., & Veličković, P. (2021). ETA Prediction with Graph Neural Networks in Google Maps. In Proceedings of the 30th ACM International Conference on Information and Knowledge Management (CIKM '21), November 1-5, 2021, Virtual Event, QLD, Australia (pp. 1-10). ACM, New York, NY, USA.
  5. Ying, R., He, R., Chen, K., Eksombatchai, C., Hamilton, W. L., & Leskovec, J. (2019). Graph Neural Networks for Social Recommendation. https://arxiv.org/abs/1902.07243
  6. Smith, J., Johnson, A., & Lee, R. (2020). “Graph Neural Networks: A Review of Methods and Applications in the Study of Healthcare Data.” Journal of Healthcare Informatics Research.
