The advent of Large Language Models (LLMs) like ChatGPT is leading to an increased use of artificial intelligence in various business contexts, leveraging Generative AI to create texts. However, despite their technological potential, there are still issues to address to ensure their correct use. This article will further explore the characteristics of Large Language Models as an example of Generative AI, the techniques of prompt engineering to optimize the performance of LLMs, and some use cases of business application. Lastly, we will emphasize the importance of not replacing humans with LLMs and Generative AI, but enhancing them in the decision-making process, especially in sensitive areas such as the legal and healthcare sectors.
Can LLMs stand alone in making a significant impact?
While the application of Large Language Models (LLMs) like ChatGPT is undeniably a part of our present reality, a multitude of issues remain to be addressed. The technological potential they bring to the table, particularly in enhancing process efficiency, is substantial. However, these models also present inherent limitations, including:
- “Hallucinations”: since LLMs do not reference their sources, the reliability of their responses can sometimes be questionable (e.g., due to an unreliable source), and it can also be unclear whether the information provided derives from the model’s training or fine-tuning datasets or is generated by the model itself;
- “User-dependency”, as the answer can vary based on the user posing the question and the specific form of the question;
- “Content accessibility”, since users can potentially access the knowledge accumulated by the LLM during its training, risks associated with incomplete knowledge or access to confidential information may arise;
- “Language dependencies”, as LLMs are predominantly trained on English language datasets, resulting in better performance in this language.
Moreover, a significant, yet unresolved aspect relates to the “explainability” of the results proposed by the system when employing natural language to convey analytical outcomes and predictions. The notable improvement in interaction and interpretation speed provided by natural language does not overcome humans’ inherent limitation in understanding the ‘why’ behind a specific AI-produced result or prediction. Consequently, the risk of creating highly effective solutions that are essentially black boxes is a real and present danger.
Therefore, a company that fully harnesses the potential of artificial intelligence should not solely rely on a single technology. Instead, it must thoughtfully integrate the various technologies available in the market to suit its internal requirements.
Ask Grapho: 3 engines for your company
With Ask Grapho, we decided to meet companies’ needs by designing a solution that combines Grapho, a GNN-based engine, with two Large Language Model engines.
The first engine is based on Machine Learning and Explainable AI to make predictions, returning not only the result of the prediction but also insights into the criteria the model adopted to solve the problem. This decision-support engine, our proprietary solution known as Grapho, is adept at forecasting outcomes for complex, multivariate problems. These could range from analyzing and evaluating real estate investments to developing novel chemical formulations or understanding multi-channel customer/user buying behaviors.
Our second engine is an internal analysis engine, built upon local LLM solutions (including open source such as LLama2). It is designed to facilitate user inquiries about the predictions made by Grapho or other AI models using natural language. This engine leverages LLMs to operate on internal data sources where data security and protection are paramount. It can also access information/data within a company’s broader internal knowledge base, such as internal regulatory documents or strategic reports.
Lastly, the third engine, for external analysis, is designed to provide real-time insights into external data sources on specific topics of interest (e.g. news, documents and content, and open-source databases available on the web). This engine employs a combination of scraping models, based on keywords and key reference sources, as well as readily accessible market LLMs via API, such as GPT-4 and Claude.
Understanding Large Language Models: their structure and capabilities
Generative AI is a type of artificial intelligence capable of generating various forms of media, such as text, images, videos, or music, in response to specific requests (prompts). In recent months, this technology has garnered significant interest, starting with the launch and subsequent rapid proliferation of ChatGPT, which has led to the gradual diffusion of open-source solutions as well, tailored for specific training and customization needs. In general, the term “foundation models” is used to denote models that undergo extensive training on a broad information base, making them adaptable to diverse tasks.
LLMs dissect text into fundamental units known as tokens, which can represent words, sub-words, or larger language structures. These models process raw text using tokens and generate predictions. Typically, an LLM is trained to forecast the next word or sequence of words (tokens) or alternatively, identify missing words in a sentence. They rely on a “transformer” architecture that enables them to analyze all the words in a sentence simultaneously rather than sequentially (GPT, for example, stands for Generative Pre-trained Transformer, while BERT stands for Bidirectional Encoder Representations from Transformers). LLMs are called ‘Large’ due to their reliance on neural networks with millions or billions of parameters, pre-trained to respond at a human-like level.
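As a toy illustration of the tokenization step described above (not a real BPE implementation, and with a made-up vocabulary), a greedy longest-match splitter shows how a word can decompose into known sub-word units:

```python
# Toy illustration of subword tokenization: greedy longest-match over a
# small hypothetical vocabulary. Real LLM tokenizers (e.g. BPE) are learned
# from data; this sketch only shows the idea of splitting text into tokens.
VOCAB = {"trans", "form", "er", "un", "believ", "able", "s", "the", " "}

def tokenize(text, vocab=VOCAB, max_len=8):
    """Greedily split text into the longest known subwords."""
    tokens = []
    i = 0
    while i < len(text):
        for j in range(min(len(text), i + max_len), i, -1):
            piece = text[i:j]
            if piece in vocab:
                tokens.append(piece)
                i = j
                break
        else:
            tokens.append(text[i])  # unknown character becomes its own token
            i += 1
    return tokens

print(tokenize("transformers"))  # ['trans', 'form', 'er', 's']
```

The model then operates on these token IDs rather than on raw characters, which is why token boundaries can affect prediction quality.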
Among language-based generative artificial intelligence systems (LLMs, Large Language Models), there are “closed” solutions like ChatGPT (OpenAI), Bard (Google, based on the LaMDA model), and Claude (Anthropic).
Additionally, there are local solutions (also open-source) including: LLama2 by Meta, Bloom by Hugging Face, Dolly 2.0 by Databricks. While certain solutions are commercially viable, others have limited use and may require a license. Localized solutions offer several benefits:
- they are flexible, allowing users to modify and adapt models to their specific needs, improving performance on unique data sets
- they give companies complete control over their data, reducing the risk of data breaches.
Prompt engineering: the key to the correct use of LLMs
The careful crafting of inquiries, a process known as “prompt engineering,” is pivotal to ensuring that the answers derived from LLMs do not depend on the individual user’s experience and hit the mark. For instance, if we want to extract data from a contract, we need to design the template to be filled.
This is a crucial technique for optimizing LLMs for specific tasks, enabling optimal performance in various domains. By providing tailored instructions, constraints, or example outputs as prompts, LLMs can better understand the context and generate relevant answers. Consequently, the behavior of LLMs can be tailored to meet the demands of specific tasks. Prompt engineering also proves essential in scenarios with limited data, enabling LLMs to generalize and deliver high-quality results even with insufficient training samples. Nonetheless, it poses challenges in devising effective prompts and addressing possible limitations.
When it comes to prompt engineering, a few best practices can enhance the performance of LLMs. For instance, specify the desired tone and role (“Act as…“), define the target and context, and segment complex prompts into manageable steps (“chained prompting“). Furthermore, prompts can be improved by providing useful details, including examples and keywords, or by specifying any constraints on form.
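These best practices can be made concrete with a small helper that assembles a structured prompt. The function and field names below are illustrative assumptions, not a fixed standard; a minimal sketch:

```python
def build_prompt(role, task, context, constraints=None, examples=None):
    """Assemble a structured prompt: role ("Act as..."), context, task,
    optional constraints on form, and optional examples."""
    parts = [f"Act as {role}.", f"Context: {context}", f"Task: {task}"]
    if constraints:
        parts.append("Constraints: " + "; ".join(constraints))
    if examples:
        parts.append("Examples:")
        parts.extend(f"- {e}" for e in examples)
    return "\n".join(parts)

prompt = build_prompt(
    role="a real-estate analyst",
    task="Summarize the main risks in the attached lease contract.",
    context="Due-diligence review of a commercial property portfolio.",
    constraints=["answer in bullet points", "maximum 150 words"],
)
print(prompt)
```

Chained prompting would then send each step of a complex task as a separate `build_prompt` call, feeding the previous answer back in as context.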
Beyond these best practices, we can rely on specific techniques to improve models, especially local solutions, such as “few-shot learning” and “fine-tuning.”
“Few-shot learning“ is employed when data sources for training the algorithm are limited:
- Meta-learning entails leveraging insights from previous tasks to execute a new task. Pre-training the LLM with an array of tasks and corresponding prompts enables the model to learn to generalize and perform unfamiliar tasks effectively.
- Adaptive prompting incorporates dynamic prompts based on user input. For instance, if a user requests a recipe, the LLM could inquire about specific ingredients or cooking methods, resulting in more accurate, contextualized responses.
“Fine-tuning” (or “transfer learning”), on the other hand, is used to adapt a pre-trained model to a specific task/domain, selecting a dataset pertinent to the task:
- Task-specific Prompt Design refines the prompts for desired tasks. For example, in sentiment analysis, prompts should include examples of expressions conveying both positive and negative sentiments.
- Domain Adaptation manages fine-tuning by using prompts constructed with domain-specific language. For instance, prompts containing particular terminology can train a model for use in legal contexts.
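For fine-tuning, training data is typically prepared as prompt/completion pairs. The legal examples and the JSONL layout below are an illustrative sketch; the exact schema (prompt/completion pairs vs. chat messages) varies by provider:

```python
import json

# Hypothetical fine-tuning records for a legal-domain assistant. The field
# names and content are illustrative, not a definitive provider format.
samples = [
    {"prompt": "Define 'force majeure' in one sentence.",
     "completion": "A clause freeing the parties from liability for events beyond their control."},
    {"prompt": "What is an indemnification clause?",
     "completion": "A commitment by one party to compensate the other for specified losses."},
]

def to_jsonl(records):
    """Serialize records as JSON Lines: one training example per line."""
    return "\n".join(json.dumps(r, ensure_ascii=False) for r in records)

print(to_jsonl(samples))
```

Domain adaptation then amounts to curating such records with the target sector’s terminology before running the fine-tuning job.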
In both instances, prompts should offer ample contextual information. The process of prompt engineering is experimental in nature; it necessitates the exploration of various structures, instructions, and constraints to enhance LLM performance over time.
In general, it is crucial to remember that LLMs should not be viewed as replacements for decision-makers. Maintaining the decision-makers’ role and enhancing their ability to analyze and generate content is particularly vital in sensitive sectors such as legal or healthcare.
Some use cases of business application of Large Language Models
Large Language Models (LLMs) are making their presence felt across various domains. Applications have diversified both horizontally (e.g., productivity-enhancing solutions, such as content generation, note management or presentation generation, or conversational automation, such as chatbots) and vertically (e.g., applications for marketing, healthcare, the legal context or real estate). Research by the “Osservatorio Artificial Intelligence” of the Politecnico di Milano has shown that 60% of generative AI solutions (out of a sample of 270) concern single-mode tools suitable for many sectors (focused on performing one task optimally, but for generic applications). Vertical industry applications account for 24%, of which 17% are single-mode.
In the different fields of vertical application, the legal context offers an interesting point of view because it shows how different applications of LLMs can result in a change of business processes, without necessarily replacing the decision-maker. Consider the example of a due-diligence workflow, where LLMs can:
- streamline the document analysis process, which typically requires hours of manual review. LLMs can rapidly analyze several documents, provide summaries, and gather additional information from external sources.
- enhance contract analysis in terms of both speed and accuracy, extracting vital information and details based on predefined templates, thereby reducing potential errors.
- support Information Retrieval by providing the most relevant sources among available content in response to a specific question. This is contingent upon correctly implemented prompts and the model’s familiarity with the information templates.
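The template-based contract extraction mentioned above can be sketched as a prompt builder. The field names and template wording are illustrative assumptions, not a fixed schema:

```python
# A sketch of a template-driven extraction prompt for contract analysis.
# Field names and wording are hypothetical examples for illustration.
EXTRACTION_TEMPLATE = (
    "Extract the following fields from the contract below.\n"
    "Return valid JSON with exactly these keys; use null for missing fields.\n"
    "Fields: {fields}\n\n"
    "Contract:\n{contract_text}\n"
)

def extraction_prompt(contract_text,
                      fields=("parties", "effective_date", "termination_clause")):
    """Fill the extraction template with the target fields and contract text."""
    return EXTRACTION_TEMPLATE.format(fields=", ".join(fields),
                                      contract_text=contract_text)

print(extraction_prompt("This agreement is made between Acme Corp and Beta Ltd."))
```

Pinning the output to a fixed JSON schema makes the LLM’s answer machine-checkable, which helps reduce the extraction errors the workflow is meant to eliminate.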
Such functionalities are extendable to all sectors where due-diligence workflows are implemented, such as real estate, investments, and insurance. In these cases, different technologies can merge: AI solutions that run predictive models on demand or on optimal pricing (e.g. property price prediction) can be fed by other solutions designed to analyse sources available within the company (e.g. internal databases of previous transactions) or collected externally (e.g. competitor prices, availability of targets). This exemplifies situations where solutions like Ask Grapho can truly shine.
These principles can further apply to other contexts, such as sentiment analysis and prospect analysis, among others.
Lastly, it is worth emphasising how the latest draft of the proposed European Act on Artificial Intelligence envisages a specific regulation for the use of generative AI, including LLMs. In particular, the draft defines that the use of generative AI will only be permitted if it meets certain requirements, including transparency, ethics and security, with a clear indication that the content has been AI-generated. Thus, future applications will need to navigate within these evolving boundaries, perhaps even more rigorously than other AI solutions.
LLMs are ushering in a new era in business processes by harnessing their incredible ability to generate and understand human language. They serve as catalysts for innovation, contributing substantially to both creativity and automation. Whether it’s generating original content based on user-supplied data or extracting valuable insights from a vast array of documents and sources, the benefits are substantial.
For instance, LLMs can evaluate sentences to discern whether the sentiment conveyed is positive or negative, considering factors such as tone, keywords, and context. They can also interpret monitoring data from a diabetic patient’s mobile app and deliver personalized advice on glucose levels, thereby improving condition management and fostering healthier lifestyle choices.
Yet, their potential extends beyond these applications. LLMs have the power to transform companies’ internal processes, bringing efficiency to unprecedented levels. However, there are challenges to be faced: model ‘hallucinations’, user dependency, accessibility of content and interpretation of results. These hurdles, while significant, are not insurmountable.
The path to a company’s success lies in a holistic approach: not focusing exclusively on a single technology but exploiting a combination of the best solutions available on the market according to specific needs. This is the reason behind our design for Ask Grapho, a solution that integrates the powerful GNN-based Grapho engine with two LLM engines:
- An internal analysis engine, built on local LLM solutions (open-source such as LLama2) and thus designed to run on internal data sources where the company must ensure security and data protection.
- An external analysis engine providing a real-time overview of external sources on the specific topic under study (whether fed by news, documents and content, or open-source databases available on the web). This engine utilizes market-available LLMs that can be accessed via APIs (like Claude).
Achieving success with these tools hinges on the meticulous and purposeful design of prompts, a practice known as ‘prompt engineering’. This process necessitates careful management to ensure that LLM responses are genuinely beneficial and pertinent.
In conclusion, it is crucial to understand that LLMs are not intended to replace decision-makers, but to empower them. Keeping the human in the driving seat by enhancing their ability to analyse and generate content is crucial, especially in sensitive sectors such as legal or healthcare. By intelligently harnessing the power of artificial intelligence, we can pave the way for a future brimming with limitless opportunities.
Contributor: Nicolas Gutierrez de Esteban