Get Your Data Ready for Successful AI Initiatives

Your AI Copilot is Only as Good as Your Data Foundation

The Quantexa Decision Intelligence Platform and your copilot combine in a Contextual RAG architecture for better AI-informed decisions.

While AI models, large language models (LLMs) in particular, become more powerful and widely commoditized, the need for accurate, context-rich data grows ever more critical. In regulated industries it is more essential still, because organizations must also ensure explainability and governance. For the modern organization, Contextual Retrieval-Augmented Generation (RAG) with Quantexa’s Decision Intelligence Platform provides a solution: it delivers trusted data and organizational context, and it enables trusted AI in combination with models such as LLMs.

In this rapidly evolving landscape, it’s crucial to remember that your AI is only as good as the data it’s built upon. That’s where Quantexa’s platform comes in, offering a foundation that ensures your AI-driven decisions are both informed and reliable.


The rapid innovation of LLMs and copilots

In November 2022, OpenAI introduced ChatGPT, putting the power of a large language model behind a simple, easy-to-access chat user interface.

The copilot was born: a means by which one can engage with, or prompt, a Generative Pretrained Transformer (GPT) AI model, trained on a global corpus of data and combined with real-time input to produce an accurate response, prediction, or action.

Since then, a Generative AI lexicon has emerged, centered on Large Language Models (LLMs), a subset of Foundation Models, which can generate art, music, code, and more, as well as text. LLMs understand your prompts and author text responses.

Typically, LLMs require compute-hungry hardware such as GPUs (Graphics Processing Units), so named in an earlier decade when they dominated “embarrassingly parallel” image transformations in gaming systems. That parallelism translated well to the inner workings of neural networks, setting the preconditions for the transformer architectures that power LLMs. Compute-efficient DeepSeek LLMs are now challenging the GPU paradigm: they can run on commodity CPU hardware through advanced algorithmic implementations, task-specific optimizations, and careful attention to hardware integration.

Incorporating enterprise data: Retrieval-Augmented Generation

Yet in all cases, LLMs are only as good as the data they’re trained on, which likely does not include your organization’s proprietary data. Only by integrating that proprietary data alongside LLMs can GenAI deliver enterprise value, and only with an architecture that provides protective guardrails can you mitigate data leakage and hallucination risks, the latter being a function of Garbage In, Garbage Out (GIGO).

From mid-2023 onward, the term Retrieval-Augmented Generation (RAG) became popular.

RAG is a process, or pipeline, whereby an organization's applications and databases augment LLM prompts and output. Underneath, however, it can be complex, requiring searchable vector embeddings to be structured in vector databases or technical computing environments. These embeddings are numerical representations, generated by neural networks like those inside LLMs themselves, which encode an understanding of objects or words.
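To make the pipeline concrete, here is a minimal sketch of RAG retrieval and prompt augmentation. The document store, the bag-of-words "embedding," and the function names are all illustrative stand-ins: a real deployment would use a vector database and embeddings produced by a neural encoder.

```python
import math

# Hypothetical in-memory document store; a real deployment would use a
# vector database and embeddings produced by a neural network.
DOCS = {
    "doc1": "Michael Greene is flagged as a high-risk individual.",
    "doc2": "Quarterly revenue grew across all regions.",
}

def embed(text):
    """Toy bag-of-words 'embedding' standing in for a neural encoder."""
    vec = {}
    for word in text.lower().split():
        word = word.strip(".,?!")
        vec[word] = vec.get(word, 0) + 1
    return vec

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(DOCS, key=lambda d: cosine(q, embed(DOCS[d])), reverse=True)
    return ranked[:k]

def augmented_prompt(query):
    """RAG in miniature: retrieved context is prepended to the LLM prompt."""
    context = "\n".join(DOCS[d] for d in retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

The essence is the last function: the retrieved enterprise data travels with the prompt, so the LLM answers from your context rather than its training data alone.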

Graph aficionados, meanwhile, have evangelized GraphRAG: applications of graph technologies, most popularly knowledge graphs, repositories of relationship information governed by a user-defined ontology or set of rules. GraphRAG also entails the overhead of converting graph structures into searchable vector embeddings. At Quantexa, we deploy graphs and knowledge graphs across our Decision Intelligence Platform, but we believe graphs are just one (highly useful) tool in the contextual toolbox.
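The relationship-repository idea behind a knowledge graph can be sketched with a toy adjacency structure. The entities, relations, and traversal below are illustrative assumptions, not Quantexa's actual schema or API.

```python
# A toy knowledge graph as adjacency lists of (relation, target) edges --
# entity names and relation labels are illustrative, not a real ontology.
GRAPH = {
    "Michael Greene": [("director_of", "Acme Holdings"), ("account_at", "Bank A")],
    "Acme Holdings": [("registered_in", "Offshore Registry X")],
}

def related(entity, depth=2):
    """Walk outgoing edges up to `depth` hops and collect relationship facts."""
    facts, frontier = [], [entity]
    for _ in range(depth):
        next_frontier = []
        for node in frontier:
            for relation, target in GRAPH.get(node, []):
                facts.append((node, relation, target))
                next_frontier.append(target)
        frontier = next_frontier
    return facts
```

A two-hop walk from "Michael Greene" surfaces not only his direct links but also the offshore registration of a company he directs, which is exactly the kind of relationship context a flat vector store loses.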

Thus Quantexa embraces what is called Contextual RAG. At the highest level, it adds extra explanatory context to RAG pipelines beyond graphs and vectors. In Quantexa’s case, that context is wrapped up in its Contextual Fabric: the data layer that underpins the journey from data to decision intelligence, unifying data from across and beyond your organization and building entity-oriented context through capabilities such as entity resolution, graphs, and more.

How does Quantexa inform Contextual RAG strategies?

Let’s explore a prompt: “Can you tell me about Michael Greene? What risks are associated with him and the transactions he has made?”

This prompt investigates a potentially risky customer, perhaps as part of a Perpetual KYC (Know Your Customer) investigation. Perpetual KYC is a pertinent use case because the discipline requires you to continuously check, update and maintain customer and counterparty records, which an LLM won't do.

LLMs can guide with generic information and offer some (public) pointers, but that’s about it. For example:

“Michael Green is known for his expertise in market structures and passive investment strategies, which he sees as posing significant risks to the financial system. His concern...”

Note the spelling of the name – Green versus Greene. Is this actually the Michael Green we’re investigating, or is it a hallucination?

A common RAG pipeline might also fail to resolve the Green-versus-Greene entity conundrum, or to surface knowledge about Michael Greene’s networks and relationships that resides elsewhere in your organization. Remember, your AI is only as good as your data foundation; by over-structuring your data as vectors and skipping key data preparation tasks such as entity resolution, context can be lost along the way.

A Contextual RAG approach can re-use your data sources to garner greater context about the real Michael Greene, and, with entity resolution, mitigate the risk of assessing the wrong Michael Green(e). Here is where Quantexa comes into play.
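The Green-versus-Greene problem is, at its core, an entity resolution problem. Here is a minimal name-matching sketch using Python's standard library; the records, thresholds, and function names are hypothetical, and a production entity-resolution engine would compare many attributes (address, date of birth, identifiers), not names alone.

```python
from difflib import SequenceMatcher

# Hypothetical customer records; real entity resolution would use far
# richer attributes than a name string.
RECORDS = [
    {"name": "Michael Greene", "customer_id": "C-1001"},
    {"name": "Michaela Green", "customer_id": "C-2002"},
]

def name_similarity(a, b):
    """Character-level similarity in [0, 1] between two names."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def resolve(query_name, threshold=0.9):
    """Return candidate records scoring above the threshold, best first,
    so 'Green' vs 'Greene' is surfaced explicitly rather than silently
    conflated."""
    scored = [(name_similarity(query_name, r["name"]), r) for r in RECORDS]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [r for score, r in scored if score >= threshold]
```

Raising the threshold narrows the candidates to the exact match, while a lower threshold keeps near-misses visible for review; either way the ambiguity is made explicit instead of being resolved by accident.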

Copilot options with Quantexa

When Quantexa is deployed, whether through its own copilot, Q Assist, or by delivering context via its APIs to your own copilot, context can accompany the prompt. With Q Assist specifically, you can:

  • Incorporate internal database insights, e.g. customer interactions, prior investigations, in conjunction with external information derived from, say, watchlists or documents.

  • Have confidence in relationships determined via entity resolution and entity-to-entity connections, facilitated through graph capabilities.

  • Rank scores relevant to the prompt and the use case, e.g. a customer 360 or pKYC case.
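The three capabilities above can be pictured as a single context payload travelling with the prompt. The sketch below is an illustration only: the function, field names, and example values are assumptions, not Quantexa's actual API schema.

```python
# Hypothetical assembly of a contextual prompt from a resolved entity,
# graph relationships, and scores; all names and values are illustrative.
def build_contextual_prompt(question, entity, relationships, scores):
    lines = [f"Entity: {entity['name']} (resolved id: {entity['id']})"]
    lines += [f"Relationship: {src} --{rel}--> {dst}" for src, rel, dst in relationships]
    lines += [f"Score [{name}]: {value}" for name, value in sorted(scores.items())]
    context = "\n".join(lines)
    return f"Use only the context below to answer.\n\n{context}\n\nQuestion: {question}"

prompt = build_contextual_prompt(
    "What risks are associated with Michael Greene?",
    entity={"name": "Michael Greene", "id": "C-1001"},
    relationships=[("Michael Greene", "linked_to", "Offshore Registry Document 42")],
    scores={"network_risk": 285},
)
```

The instruction to answer only from the supplied context is the guardrail: it steers the LLM away from its public training data (and the wrong Michael Green) toward the resolved, scored entity.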


Of the copilot options, Q Assist offers the greatest opportunity to leverage the Quantexa Contextual Fabric, that layer of (ever-changing) unified data, resolved entities, graphs, and scores, but platform components can also be integrated with your own copilot.

Thus when answering whether Michael Greene presents risk, I get meaningful responses drawn from across my data estate, like:

“Michael Greene is a customer of the bank and is flagged as a high risk individual.

Michael Greene is linked to offshore and corporate registry documents that indicate personal risk.

Overall, with a score of 285, Michael Greene is deemed a risky individual in this network.”

Democratize decision intelligence & mitigate AI risk

Contextual RAG with Quantexa harnesses your organization’s data and innate knowledge to ensure the LLM’s public inference is managed from the prompt, drawing on your enterprise’s decision intelligence simply and efficiently. It helps democratize queries, giving subject matter experts direct access via a copilot and lessening the need for bespoke user interfaces or dashboards.

Your AI copilot is only as good as your data foundation, so use your data comprehensively and wisely. Contextual RAG coupled with the Quantexa Decision Intelligence Platform makes for a powerful, simple, effective contextual combination.

For more information, watch the recorded webinar, Your AI Copilot is Only as Good as Your Data Foundation.
