MarioDeFelipe

Opening


We have been storing data in SAP for decades; the volume of data grew so large that we turned to Data Warehouses and similar technologies.

Now, with Transformer models, we have the capacity to understand extremely large corpora of data. Imagine showing a general-knowledge model the relationship between BKPF and BSEG and having it answer queries about it. If I have caught your attention, keep reading.

-

In evaluating how Language Models interact with SAP data, I am still investigating my interaction with existing models, technologies like RAG, and even some fine-tuning techniques. I am avoiding pre-training models with domain data. And I believe I am not going in the right direction 🥀🥺

I believe Foundation Models have the power to reshape all industries, but existing Large Language Models fall short on multilingualism and their applicability to complex industry use cases.

From embedding and fine-tuning a generic Foundation Model, I am slowly moving toward incorporating vertical, domain-specific data into models. My final goal is to build a custom model that processes comprehensive company knowledge from SAP, because none of the academic research I am reviewing, and sometimes implementing, goes specifically in that direction.

Apart from academia, I have a significant interest in how the software industry is approaching SAP, and I see two large groups of vendor approaches.

On one side, we have the traditional data vendors, who are exceptionally skilled in Data Management, Governance, Data Quality, Integration, Ingestion, etc. These companies talk a lot about building Domain-Specific Models because it is in their interest to show customers that everything to the right of the picture becomes irrelevant if you don't have a suitable dataset for the models to interact with.

(Figure: the vendor landscape, with traditional data-management vendors on the left and LLM-layer innovators on the right.)

A strong message like that is correct but not 100% accurate. Foundation Models are trained on a large variety of (let's hope public) data that may already be relevant for enterprises and enterprise queries, and there is a lot of research on letting models interact with other corpora of data by performing an API query or a DB query.

Shifting to the right end of the picture, we see the companies, primarily startups or research groups, bringing much innovation at the LLM layer: how to master Prompt Engineering and how to apply RAG.

On this right end, the entry barrier for corporations is much lower. We can easily interact with a given model, apply some of these techniques with a few API calls, and get some fantastic results. But API call scalability becomes a risk really, really soon. Third-party model API calls are charged per token (roughly, chunks of characters), and a quick look at Cohere or OpenAI pricing shows Cohere can be between 10 and 500 times cheaper than OpenAI; depending on the selected engine, costs can climb quite fast.
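
To make the scaling risk concrete, here is a back-of-envelope sketch. The per-1K-token prices below are placeholders for illustration only, not actual vendor list prices; always check the current pricing pages.

```python
# Rough monthly cost of a token-billed LLM API. The prices are illustrative
# placeholders, NOT real vendor list prices -- check current pricing pages.
def monthly_cost(calls_per_day: int, tokens_per_call: int,
                 price_per_1k_tokens: float) -> float:
    return calls_per_day * 30 * tokens_per_call / 1000 * price_per_1k_tokens

# e.g., 10,000 calls/day at ~1,500 tokens each:
for engine, price in {"cheap_engine": 0.002, "premium_engine": 0.06}.items():
    print(f"{engine}: ${monthly_cost(10_000, 1_500, price):,.0f}/month")
```

At those made-up rates, the same workload costs $900 or $27,000 per month, which is exactly why engine choice dominates the bill.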


RAG is cool, but Vector DBs are not everything.


Retrieval Augmented Generation (RAG) has disrupted the field of open-domain question answering, enabling systems to produce human-like responses to a wide range of queries. At the core of RAG is a retrieval module, which scans an extensive collection of texts to identify relevant context passages. These passages are then processed by a neural generative module, typically a pre-trained language model such as GPT-3, to formulate a final answer.
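
As a minimal sketch of that retrieve-then-generate loop, here the retrieval module is a sentence-transformers encoder over a toy corpus; the passages, model name, and final generation call are illustrative assumptions, not a production setup.

```python
# Minimal RAG sketch: embed a corpus, retrieve the closest passages, and
# build the prompt a generative model would receive. Toy data throughout.
from sentence_transformers import SentenceTransformer, util

passages = [
    "BKPF stores accounting document headers.",
    "BSEG stores the line items belonging to a BKPF header.",
    "MARA stores general material master data.",
]
encoder = SentenceTransformer("all-MiniLM-L6-v2")
passage_vecs = encoder.encode(passages, convert_to_tensor=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k passages most semantically similar to the question."""
    q_vec = encoder.encode(question, convert_to_tensor=True)
    hits = util.semantic_search(q_vec, passage_vecs, top_k=k)[0]
    return [passages[h["corpus_id"]] for h in hits]

question = "How are BKPF and BSEG related?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQ: {question}\nA:"
print(prompt)  # this prompt then goes to the generative module (any LLM API)
```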


However, despite its effectiveness, this approach has several limitations. One of the system's critical components, the vector search conducted over embedded passages, has inherent constraints that can hinder the system's ability to engage in nuanced reasoning. This becomes particularly apparent when confronted with complex questions that require multi-step reasoning across multiple documents.

The primary advantage of vector search is its ability to search for semantic similarity rather than relying solely on literal keyword matches. The vector representations capture the conceptual meaning, enabling the identification of more relevant search results that may differ linguistically but share similar semantic concepts. This allows a higher quality of search compared to traditional keyword matching.


However, there are limitations when converting data into vectors and conducting searches in high-dimensional semantic space. Vector search struggles to capture the diverse relationships and intricate interconnections between documents.



Performing vector search over a corpus of structured SAP data, especially when considering complex table relations, has several limitations and challenges.


Addressing these limitations may require a combination of specialized algorithms, data preprocessing, and domain-specific knowledge to create an effective vector search system for SAP data with complex table relations. Additionally, ongoing monitoring and optimization are essential to ensure the system remains accurate and efficient as the SAP data evolves.

LLMs have demonstrated their prowess in understanding and generating human languages, among other applications. However, their ability to work with graph data remains underexplored. Evaluating LLMs on graph-related tasks can enhance their utility, especially in relationship detection, knowledge inference, and graph pattern recognition.

Numerous studies have focused on assessing LLMs, primarily GPT models, in handling graph-related queries and producing accurate responses. However, these studies have mainly concentrated on closed-source models from OpenAI, overlooking other open-source alternatives. Additionally, they have not thoroughly explored aspects like fidelity in multi-answer questions or self-rectification capabilities in LLMs.

Knowledge Graphs and LLMs


To overcome these limitations, Knowledge Graphs (KGs) and Knowledge Graph Prompting (KGP) emerge as a promising alternative. KGP explicitly encodes various relationships into an interconnected graph structure, enhancing the richness of reasoning capabilities.

A graph DB structure is beneficial for modeling and analyzing complex relationships between different data types, such as customer data, product data, market data, third-party data, and balance sheet data.

  1. Nodes and Labels:

    • In a graph database, each piece of data is represented as a node. Nodes can be labeled to categorize them into different types. For instance, you might have labels like "Customer," "Product," "Market," "ThirdParty," and "BalanceSheet."



  2. Properties:

    • Each node can have properties associated with it. For example, a "Customer" node might have properties like "customer_id," "name," and "email." Similarly, a "Product" node may have properties like "product_id," "name," and "price."



  3. Relationships:

    • Relationships between nodes are represented as edges in the graph. These edges define how different pieces of data are connected. For instance:

      • A "Customer" node might have relationships with "Product" nodes to represent their purchased products.

      • "Market" nodes may be linked to "Product" nodes to show which products are available in specific markets.

      • "ThirdParty" nodes could be connected to "Product" nodes to indicate third-party services or data sources associated with specific products.

      • "BalanceSheet" nodes may be linked to "Customer" nodes to show financial transactions and balances.





  4. Queries:

    • Graph databases support powerful query languages, such as Gremlin, designed explicitly for traversing and querying graph data. These queries can retrieve information about relationships and patterns within the data.




Let's say you want to find all the products purchased by a specific customer, their associated markets, any third-party services used during those transactions, and the corresponding balance sheet information. In a graph database:

  • Start with the "Customer" node representing the specific customer.

  • Traverse the edges labeled as "purchased" to reach the "Product" nodes.

  • Traverse the edges labeled as "available_in" to find the associated "Market" nodes.

  • Traverse the edges labeled as "third_party_service" to discover third-party data or services.

  • Traverse the edges labeled as "transaction" to find "BalanceSheet" nodes and associated financial data.

Advantages of Knowledge Graphs: Diverse Relationship Modeling

  1. Structural Relationships: KGP encodes the contextual hierarchy of information by linking passages to specific documents or sections, aiding in determining importance, validity, and relevance during reasoning.

  2. Temporal Relationships: KGP factors in temporal dynamics by ordering passages chronologically, facilitating reasoning about unfolding narratives and timelines.

  3. Entity Relationships: KGP's entity-centric approach allows focused exploration of the knowledge graph, facilitating the aggregation of facts about specific entities across documents.



While vector search has been a significant leap in open-domain question answering, it has limitations, particularly in handling complex queries and diverse relationships between content. Knowledge Graph Prompting offers a promising solution by explicitly modeling various relationships, enhancing reasoning capabilities, and addressing some of the shortcomings of vector search. As AI systems become increasingly integrated into our lives, understanding these strengths and weaknesses becomes paramount in harnessing their full potential.

A typical automotive data model represents the structure and relationships of data within the automotive industry. It organizes and manages information related to vehicles, their components, customers, dealerships, and other relevant entities. Below, I'll outline some key entities and their relationships in a simplified automotive data model:

  1. Vehicle:

    • Attributes: VIN (Vehicle Identification Number), make, model, year, color, engine type, etc.

    • Relationships:

      • Many-to-One with Manufacturer (each vehicle is made by one manufacturer)

      • Many-to-Many with Dealership (a car can be sold by multiple dealerships)

      • One-to-Many with Service Records (a vehicle can have multiple service records)





  2. Manufacturer:

    • Attributes: Name, headquarters location, founding year, etc.

    • Relationships:

      • One-to-Many with Vehicles (a manufacturer produces many vehicles)





  3. Customer:

    • Attributes: Customer ID, name, contact information, etc.

    • Relationships:

      • Many-to-Many with Vehicles (a customer can own multiple vehicles)

      • Many-to-Many with Dealership (a customer can buy from multiple dealerships)





  4. Dealership:

    • Attributes: Dealer ID, name, location, contact information, etc.

    • Relationships:

      • Many-to-Many with Vehicles (a dealership can sell multiple vehicles)

      • Many-to-Many with Customers (a dealership can have multiple customers)

      • One-to-Many with Sales Transactions (a dealership can have multiple sales transactions)





  5. Service Record:

    • Attributes: Service ID, date, description, cost, etc.

    • Relationships:

      • Many-to-One with Vehicle (a service record is associated with one vehicle)





  6. Sales Transaction:

    • Attributes: Transaction ID, date, price, payment method, etc.

    • Relationships:

      • Many-to-One with Vehicle (a transaction is associated with one vehicle)

      • Many-to-One with Customer (a transaction is associated with one customer)

      • Many-to-One with Dealership (a transaction occurs at one dealership)





  7. Inventory:

    • Attributes: Inventory ID, quantity, price, etc.

    • Relationships:

      • Many-to-One with Dealership (inventory is managed by one dealership)

      • Many-to-One with Vehicle (inventory includes one type of vehicle)





  8. Employee:

    • Attributes: Employee ID, name, role, contact information, etc.

    • Relationships:

      • Many-to-One with Dealership (an employee works at one dealership)






This is a simplified representation; in a real-world scenario with ECC or S/4HANA, the data model could be far more complex, bringing in Ariba or Fieldglass data plus additional automotive-industry entities and attributes, including parts, suppliers, warranties, and more.
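
As a sketch of how a slice of this model lands in a property graph, here is a load script using the official Neo4j Python driver (5.x API); the connection details, labels, and relationship names are illustrative assumptions.

```python
# Load a small slice of the automotive model into Neo4j (driver 5.x API).
# URI, credentials, labels, and relationship names are illustrative.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))

def load_sample(tx):
    # One manufacturer, one vehicle, one customer, one sales transaction.
    tx.run(
        """
        MERGE (m:Manufacturer {name: $maker})
        MERGE (v:Vehicle {vin: $vin, model: $model})
        MERGE (c:Customer {customer_id: $cid, name: $cname})
        MERGE (m)-[:PRODUCES]->(v)
        MERGE (c)-[:OWNS]->(v)
        CREATE (t:SalesTransaction {date: date($date), price: $price})
        MERGE (t)-[:FOR_VEHICLE]->(v)
        MERGE (t)-[:BY_CUSTOMER]->(c)
        """,
        maker="ACME Motors", vin="WVWZZZ1JZXW000001", model="Hatch",
        cid="C-42", cname="Jane Doe", date="2023-09-01", price=21500,
    )

with driver.session() as session:
    session.execute_write(load_sample)
driver.close()
```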



The interaction between KGs and LLMs has particular benefits for enterprises that work with relational databases (case a). I spent some days reading 55 papers that could guide me on how LLMs could better understand the SAP data model; this is how I summarize them.

Several works have explored using KGs to assist LLMs in making predictions (Yasunaga et al., 2021; Lin et al., 2019; Feng et al., 2020). For example, KagNet (Lin et al., 2019) employs graph neural networks to model relational graphs, enabling relational reasoning in symbolic and semantic spaces. QA-GNN (Yasunaga et al., 2021) learns representations by connecting QA context and KGs in joint graphs. However, these methods often involve training additional knowledge-aware modules like graph neural networks (GNNs), which can be challenging to adapt to novel domains and may underutilize the strengths of LLMs.

1️⃣ KG-enhanced LLM Pre-training: This research group focuses on incorporating knowledge graphs during the pre-training stage of Large Language Models. By doing so, it aims to improve the ability of LLMs to express and understand knowledge. This typically involves modifying the pre-training process to make the model more aware of structured knowledge from KGs.

2️⃣ KG-enhanced LLM Inference: In this category, research uses knowledge graphs during the inference stage of LLMs. This allows LLMs to access the latest information from KGs without requiring retraining. This is particularly useful for keeping LLMs up-to-date with current knowledge.

3️⃣ KG-enhanced LLM Interpretability: This group focuses on leveraging knowledge graphs to better understand the knowledge learned by LLMs and to interpret the reasoning process of LLMs. This can help make the decision-making process of LLMs more transparent and interpretable.

Integrating knowledge graphs with large language models is an exciting area of research that can potentially improve the performance and interpretability of these models across a wide range of applications. It allows them to incorporate structured knowledge, stay current with evolving information, and provide more transparent reasoning in their outputs.
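
As a minimal sketch of category 2️⃣, KG-enhanced inference: retrieved triples are serialized into the prompt so the model can use current facts without any retraining. The triples and the formatting helper below are illustrative assumptions.

```python
# KG-enhanced inference sketch: serialize retrieved KG triples into the
# prompt so the LLM can use up-to-date facts without retraining.
triples = [
    ("ACME Motors", "PRODUCES", "Vehicle WVW-0001"),
    ("Vehicle WVW-0001", "LAST_SERVICED_ON", "2023-06-14"),
]

def triples_to_text(facts) -> str:
    """Render (subject, predicate, object) triples as prompt-friendly lines."""
    return "\n".join(f"({s}) -[{p}]-> ({o})" for s, p, o in facts)

question = "Who makes vehicle WVW-0001, and when was it last serviced?"
prompt = (
    "Answer using only the knowledge graph facts below.\n\n"
    f"{triples_to_text(triples)}\n\nQ: {question}\nA:"
)
print(prompt)  # send to any chat/completion API; no fine-tuning required
```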



Let's do another example with a Supply Chain Graph:

(Figure: a supply chain knowledge graph incorporating SAP data for suppliers, materials, components, products, and customers.)

This approach allows us to structure the complex SAP data model into nodes and relationships, thereby generating a holistic picture of how materials, components, and products flow from suppliers to customers. The inherent interconnections and dependencies become evident and analyzable. We believe the future of LLM-based applications lies in combining a vector similarity search approach with database query languages such as Gremlin or SPARQL.
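
A hedged sketch of exactly that hybrid pattern: vector similarity picks the entry node, and a graph query then expands its neighborhood. The entity names and the stubbed expand_neighborhood() helper are illustrative assumptions.

```python
# Hybrid retrieval sketch: vector similarity links the question to a graph
# node, then a graph query (Gremlin here) expands that node's neighborhood.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")
entities = ["Material: raw steel", "Supplier: ACME Metals", "Plant: 1010"]
entity_vecs = encoder.encode(entities, convert_to_tensor=True)

def link_entity(question: str) -> str:
    """Vector step: find the graph node most similar to the question."""
    q_vec = encoder.encode(question, convert_to_tensor=True)
    hit = util.semantic_search(q_vec, entity_vecs, top_k=1)[0][0]
    return entities[hit["corpus_id"]]

def expand_neighborhood(entity: str) -> str:
    """Graph step: in practice a Gremlin/SPARQL query, stubbed as a string."""
    return f"g.V().has('name', '{entity}').both().valueMap(true).toList()"

entity = link_entity("Which suppliers deliver raw steel?")
print(entity)                       # the linked entry node
print(expand_neighborhood(entity))  # traversal the application would execute
```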

How Knowledge Graphs Help LLMs Reason Better


Knowledge graphs have emerged as a powerful way to structure and encode world knowledge in a machine-readable format. Traditional passage retrieval techniques based on vector similarity have limitations in their reasoning abilities. This article explores how modern large language models (LLMs) can enhance the construction of high-quality knowledge graphs and how these augmented graphs, combined with content retrieval, are poised to shape the future of retrieval systems.

Building knowledge graphs traditionally involved complex processes like entity extraction, relationship extraction, and graph population. However, tools like LlamaIndex's KnowledgeGraphIndex leverage LLMs to automate these tasks, including entity recognition and relation extraction, reducing the barriers to using knowledge graphs effectively.
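
For illustration, a minimal sketch with LlamaIndex's KnowledgeGraphIndex, using the 0.8-era API (imports and defaults may have moved in newer releases); an OpenAI key in the environment, the ./data directory, and the query are assumptions.

```python
# KnowledgeGraphIndex sketch (llama_index 0.8-era API; newer releases may
# differ). Assumes an OpenAI key in the environment and docs under ./data.
from llama_index import SimpleDirectoryReader, KnowledgeGraphIndex

documents = SimpleDirectoryReader("./data").load_data()

# The LLM extracts (subject, predicate, object) triplets from each chunk
# and populates the graph, replacing manual entity/relation extraction.
index = KnowledgeGraphIndex.from_documents(documents, max_triplets_per_chunk=2)

query_engine = index.as_query_engine(include_text=False)
print(query_engine.query("How are document headers and line items related?"))
```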

Knowledge graphs offer a promising path beyond traditional vector similarity for passage retrieval when combined with LLMs and content retrieval methods. They enhance multi-hop reasoning capabilities and have the potential to shape the future of intelligent retrieval systems, providing greater flexibility and depth of understanding.




In this example, an intelligent search system demonstrates the powerful combination of Large Language Models (LLMs) and knowledge graphs. It uses one LLM to generate Cypher statements for knowledge graph queries, retrieves the relevant information, and then employs another LLM to generate accurate answers. This flexible approach allows the use of different LLMs for distinct tasks, or various prompts on a single LLM, enhancing the system's adaptability.
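
A minimal sketch of that two-step pattern follows; llm() stands in for any chat-completion API, and the schema string and Neo4j connection details are illustrative assumptions.

```python
# Two-step pattern: LLM #1 writes Cypher from the question, the graph
# returns rows, LLM #2 phrases the final answer. llm() is a stub for any
# chat-completion API; schema and connection details are illustrative.
from neo4j import GraphDatabase

SCHEMA = "(:Customer)-[:OWNS]->(:Vehicle)<-[:FOR_VEHICLE]-(:SalesTransaction)"

def llm(prompt: str) -> str:
    raise NotImplementedError("plug in any chat-completion API here")

def answer(question: str) -> str:
    # Step 1: have the first LLM translate the question into Cypher.
    cypher = llm(f"Graph schema: {SCHEMA}\n"
                 f"Write one Cypher query answering: {question}")
    # Step 2: run the query against the knowledge graph.
    driver = GraphDatabase.driver("bolt://localhost:7687",
                                  auth=("neo4j", "password"))
    with driver.session() as session:
        rows = [record.data() for record in session.run(cypher)]
    driver.close()
    # Step 3: have a second LLM call phrase the final answer.
    return llm(f"Question: {question}\nQuery results: {rows}\n"
               f"Answer concisely using only these results:")
```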

The convergence of knowledge graphs and Large Language Models is a robust strategy for building intelligent search systems over structured data, particularly for questions that require integrating structured and unstructured sources to produce contextually relevant responses.

Figure 1, extracted from research at Southeast University Nanjing (Knowledge Solver): an example comparing the vanilla LLM in (a) with the zero-shot Knowledge Solver in (b) on question-answering tasks. The LLM searches for the knowledge needed to perform the task by harnessing its own generalizability. Purple marks the nodes and relations on the LLM's chosen correct path.

Figure 2, extracted from research at the University of Illinois: for each question-answer choice pair, a relevant knowledge subgraph is retrieved, encoded into a text prompt, and injected directly into the LLM to help it perform knowledge-intensive tasks. In this question-answering scenario, the LLM interacts with the provided external knowledge to choose the correct path for answering the question.

Conclusion


Some pioneering research efforts involve fine-tuning various LLMs, such as BART and T5, for KG-to-text generation. These methods often represent the input graph as a linear traversal, and this simple approach has shown success, outperforming many state-of-the-art KG-to-text generation systems.
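
To illustrate what "a linear traversal" of the input graph means in practice, here is a sketch of the WebNLG-style linearization such fine-tuned models typically consume; the <H>/<R>/<T> tag convention is one common format, not a fixed standard.

```python
# KG-to-text linearization sketch: flatten triples into a tagged token
# sequence that a fine-tuned seq2seq model (BART/T5) takes as input.
def linearize(triples) -> str:
    """Flatten (head, relation, tail) triples into one tagged string."""
    return " ".join(f"<H> {h} <R> {r} <T> {t}" for h, r, t in triples)

triples = [
    ("BKPF", "has line items in", "BSEG"),
    ("BKPF", "stores", "accounting document headers"),
]
print(linearize(triples))
# <H> BKPF <R> has line items in <T> BSEG <H> BKPF <R> stores <T> ...
```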

While graph databases have been around for many years, they have never gained much popularity, although this may change fast. In this blog, I have introduced how the RAG framework combines the capabilities of knowledge graphs and large language models (LLMs) to enhance reasoning and information retrieval. Here are some of the benefits:

  1. Structured Context: Knowledge graphs provide structured data, enhancing an LLM's understanding of relationships between entities and improving context-aware reasoning.

  2. Efficient Information Retrieval: Knowledge graphs enable precise and efficient information retrieval, reducing noise and improving the accuracy of responses.

  3. Multi-hop Reasoning: LLMs can perform multi-hop reasoning by traversing knowledge graph paths, allowing them to answer complex questions and make inferences based on structured data; a short sketch follows below.
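
As a sketch of point 3, here is a multi-hop Cypher query over the hypothetical automotive graph from earlier in this post; an LLM (or the application around it) would generate and run a traversal like this to chain facts across several hops.

```python
# Multi-hop reasoning as one graph traversal: chain facts across
# Customer -> Vehicle -> SalesTransaction -> Dealership in a single query.
# Labels and relationship names are the hypothetical automotive ones above.
MULTI_HOP_CYPHER = """
MATCH (c:Customer {customer_id: $cid})-[:OWNS]->(v:Vehicle)
      <-[:FOR_VEHICLE]-(t:SalesTransaction)-[:AT_DEALERSHIP]->(d:Dealership)
RETURN v.model AS vehicle, t.date AS bought_on, d.name AS dealership
"""
# Run with any Neo4j session: session.run(MULTI_HOP_CYPHER, cid="C-42")
```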


Organizations can create custom knowledge graphs tailored to their specific domain or industry. This gives LLMs access to domain-specific knowledge, making them more valuable in specialized applications. However, this is a real challenge for SAP environments, which are very complex in ECC systems and somewhat less complicated in S/4HANA, but still not easy.

Interestingly, one of the first courses SAP has introduced on Generative AI covers how the new Graph API can be utilized to develop Large Language Model (LLM) based applications.



In the next blog, I will discuss how Knowledge Graphs can help with a ubiquitous problem for LLMs in the enterprise, especially for SAP, and how to overcome Role-Based authorization issues.


