Graph Neural Networks Applied to Similarity Search of Extrusion Die – a technical overview



Identifying patterns of similarity is a common challenge across many businesses. In marketing, we may need to understand patterns in purchasing behaviour to retain clients or increase sales. In healthcare, we may want to understand how patients show similar symptoms. In product development, we may look for similarities between past trials/designs and new ones to be developed.

Grapho: leveraging similarities

Design-by-reuse is a practice common in Engineer-to-Order and Make-to-Order companies, which aim to improve their productivity by leveraging the thousands of projects they have designed in the past.

In the previous article, “AI & EU: AI Regio Project and Explainable AI for Manufacturing and R&D,” we presented how Intellico developed the Grapho solution within the AI Regio project. Grapho is a technology that employs advanced artificial intelligence techniques based on GNNs (Graph Neural Networks) to make the process of identifying and reusing existing components in new product generations more efficient and accurate. Its purpose is to improve the productivity and competitiveness of European manufacturing companies by speeding up the identification of common platforms and thereby reducing variants.

Here we summarize some technical characteristics of the model. Contact us for additional details.

State of the Art

Pattern identification and recognition requires an unsupervised approach, given the difficulty of defining meaningful labels that both capture high-level concepts and account for fine-grained details. For this reason, we employed Self-Supervised Learning (SSL).

SSL takes advantage of the intrinsic properties of the data to aid in its learning process, creating versatile pre-trained models that can be further refined for a variety of specific tasks. Recent advancements have underscored the effectiveness of SSL, showcasing its superiority over supervised networks, especially in the fields of image and video applications — including image classification, object detection, action classification and action localization.

Contrastive Learning has emerged as a leading paradigm in SSL. This technique operates on the principle of instance discrimination: a random data sample, referred to as the anchor, is selected and transformed to create a positive sample, under the assumption that the transformation preserves a certain level of similarity between the two. The negative sample is so named because, contrary to the positive sample, it is used by the model to learn the concept of dissimilarity; it is often drawn randomly from the dataset or according to certain criteria (when meaningful labels exist for the problem). The learning process then maximizes the similarity between the anchor and the positive sample while distancing both as much as possible from the negative sample. The key to the effectiveness of the learned representation lies in how data points are transformed into positive samples and how negative samples are drawn from the dataset.
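The anchor/positive/negative objective described above can be sketched with a standard triplet loss. This is a minimal illustration of the paradigm, not Grapho's actual loss; the embeddings, margin value, and Euclidean distance are illustrative assumptions.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Contrastive (triplet) objective: pull the anchor toward the
    positive sample, push it away from the negative sample, up to
    a fixed margin."""
    d_pos = np.linalg.norm(anchor - positive)  # anchor-positive distance
    d_neg = np.linalg.norm(anchor - negative)  # anchor-negative distance
    return max(0.0, d_pos - d_neg + margin)

anchor   = np.array([1.0, 0.0])
positive = np.array([0.9, 0.1])  # a transformed copy of the anchor
negative = np.array([0.5, 0.5])  # a sample drawn as "dissimilar"
loss = triplet_loss(anchor, positive, negative)
```

When the negative already sits far beyond the margin, the loss is zero and the triplet contributes no gradient, which is exactly why the choice of negatives matters so much.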


With the objective of creating a competent and interpretable system able to discern similarities across an array of images (and thus between different projects), a dual-component system has been devised, comprising a similarity model and a class proposal model.
The similarity model plays a crucial role: it learns the underlying similarities between images in the original dataset and supports the construction of an embedding database tailored for quick retrieval of past projects. This not only promotes efficient reuse of previous designs but also removes the need to start the design phase from scratch, streamlining the design process.
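Retrieval from such an embedding database reduces to a nearest-neighbour lookup in embedding space. The sketch below is a simplified stand-in for that step, assuming cosine similarity; the database contents and dimensions are illustrative, not Grapho's real data.

```python
import numpy as np

# Hypothetical embedding database: each row is the similarity model's
# embedding of one archived project (values are illustrative).
db = np.array([
    [0.9, 0.1, 0.0],
    [0.0, 1.0, 0.2],
    [0.8, 0.2, 0.1],
])

def retrieve(query, database, k=2):
    """Return indices of the k past projects most similar to the
    query, ranked by cosine similarity of their embeddings."""
    q = query / np.linalg.norm(query)
    d = database / np.linalg.norm(database, axis=1, keepdims=True)
    sims = d @ q                   # cosine similarity to each row
    return np.argsort(-sims)[:k]   # indices of the k best matches

nearest = retrieve(np.array([1.0, 0.0, 0.0]), db)
```

In production such a scan would typically use an approximate-nearest-neighbour index rather than a brute-force pass, but the interface is the same: embed the new design, then rank the archive.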

Subsequently, the class proposal model builds an interpretable representation: it uses the embeddings generated by the similarity model to construct a graph, from which a novel set of artificial ’smart’ classes is generated. These classes offer transparent insights into the internal dynamics of the similarity model and pave the way for explainable and intuitive design suggestions.

Model of Similarity: building blocks

State-of-the-art image processing leverages Convolutional Neural Networks (CNNs), due to their ability to extract information about both low-level details and high-level concepts. This ability stems from convolutional filters that operate on adjacent sets of pixels, introducing a bias in the model by correlating the feature representations of nearby areas within an image. Furthermore, CNNs exhibit strong translational invariance when a pooling operation is applied to a feature map.

CNNs also exhibit a weaker invariance to flipping and mirroring, albeit under certain symmetry assumptions about the image structure. In the specific context of analyzing aluminum die projects, these inherent abilities become indispensable: over the years, a single project may have been archived in numerous orientations (rotated, flipped, or mirrored), and it is crucial to detect these transformations.
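The archived orientations mentioned above also make natural positive-sample transformations for contrastive training. As a minimal sketch (treating a drawing as a square array), the eight orientations of a die image are its four 90-degree rotations plus their mirrored counterparts:

```python
import numpy as np

def orientation_variants(image):
    """Generate the orientations a single die drawing may have been
    archived in: the four 90-degree rotations and their left-right
    mirrored counterparts (the dihedral group of the square)."""
    rotations = [np.rot90(image, k) for k in range(4)]
    return rotations + [np.fliplr(r) for r in rotations]

img = np.arange(9).reshape(3, 3)   # toy stand-in for a die drawing
variants = orientation_variants(img)
```

Any of these eight variants can serve as a positive sample for the anchor, since they depict the same physical die.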

The model proposed is trained in three steps, reusing the parameters and output generated by the previous stages:

  • Autoencoder: a U-Net-like encoder serves as the starting point for our architecture. The model is trained on the original dataset in an unsupervised fashion to derive an initial low-dimensional representation of the input images. At this stage, the model can only recognize general shape similarity.
  • Contrastive Model: a Multi-Layer Perceptron (MLP) head is attached on top of the encoder, in parallel with the decoder, and a contrastive training loop is performed. The autoencoder retains a pivotal role, since the reconstruction error of the three images (anchor, positive, and negative) is still considered, allowing the CNN to retain relevant geometrical features and offering resistance against noisy or overly general labels.
  • Graph Neural Networks: GNNs are leveraged to further refine the selection of negative pairs in the contrastive loss, avoiding the limitations of the random sampling employed in earlier stages. The GNN builds a graph representation on top of the learned CNN embeddings. The contrastive loss is then applied not to random pairs of images, but to consciously selected closest negative samples, yielding a more challenging and therefore more informative set of negatives for training. This focused strategy accelerates learning, fostering a model adept at distinguishing even subtly different images.
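The hard-negative selection in the final stage can be illustrated with a much-simplified stand-in for the graph-based mechanism: for each anchor, pick the nearest other embedding as the negative, since that is the sample the current model confuses most easily. This is a sketch of the idea only, not Grapho's GNN; the embeddings are illustrative.

```python
import numpy as np

def hardest_negative(anchor_idx, embeddings):
    """Pick the closest other embedding as the hard negative:
    the most informative sample for the contrastive loss
    (a simplified stand-in for the graph-based selection)."""
    dists = np.linalg.norm(embeddings - embeddings[anchor_idx], axis=1)
    dists[anchor_idx] = np.inf  # exclude the anchor itself
    return int(np.argmin(dists))

emb = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
neg = hardest_negative(0, emb)  # the nearest neighbour, not a random draw
```

Compared with random sampling, such negatives produce non-zero triplet losses far more often, which is what accelerates training in the later stages.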


Grapho has been tested with a panel of skilled and experienced designers on a sample of 100 projects with different levels of complexity:

– Simple projects have a low number of details. On these, GRAPHO exhibited a 95% reuse rate, outperforming the baseline.

– Complex projects are characterized by a high number of details and difficult extrusion patterns. On this class of profiles, GRAPHO achieved a reuse rate above 80%, again outperforming the baseline.

– GRAPHO also excelled in speed, conducting a comprehensive scan of the entire database in 0.5 seconds, while the baseline exhibited prolonged, unfeasible execution times.

Do you need more information?

Fill out the dedicated form to be contacted by one of our experts.