OutSystems, OpenAI Embeddings and Qdrant Vector Database — Answer Right

Stefan Weber
Published in ITNEXT
7 min read · Aug 20, 2023


This is the second article in a two-part series about developing generative AI solutions with OutSystems, OpenAI and the Qdrant Vector Database. In this article we combine Qdrant semantic similarity search with OpenAI chat completions to generate tailored answers to custom questions, using only articles from our own knowledge base.

This article series includes a sample QnA application that I have published on Forge. I will explain the implementation details using this sample. See links below.

Part 1 — “Find Similar”

The first article covers the basics of vector embeddings and vector databases. It also shows you how to use pre-built Forge components to create vector embeddings, store them in the Qdrant Vector Database, and run similarity queries.

Part 2 — “Answer Right” (this article)

In the second part, we're expanding our application by combining our similarity search with OpenAI's chat completions. We use the search results from our QnA application as context for an OpenAI prompt, which lets OpenAI answer users' questions based only on the answers we've collected. This is a common pattern for grounding generative AI in a reliable information source to avoid incorrect answers.

Prerequisites

To follow along, make sure that you have downloaded and configured the Vector Embeddings Demo application from Forge as described in the first article.

Import the sample data by clicking the Bootstrap Sample data button. The sample data are question-answer pairs taken from the Munich Airport FAQ.

Trying out the application

Let’s begin by exploring some example questions. Navigate to the Answer Right screen and input “How can I reach the city center from the airport?” Then, click the Ask button.

The demo application follows a series of steps.

  • First, it conducts a similarity search within the Qdrant collection — as we did in the previous article (located in the left column).
  • Next, it compiles a prompt that includes instructions, discovered question-answer pairs, and the question you entered. This prompt is then dispatched to the chat completions endpoint of OpenAI (found in the middle column).
  • Finally, OpenAI uses the provided prompt to generate an answer (located in the right column).

When you compare your question with the answers retrieved by the Qdrant similarity search, you may notice that the first answer found pertains to traveling from the city center to the airport, the opposite direction of our intended query. Nevertheless, the two questions are semantically similar.

Another question, “Where should I head upon my arrival at the airport?” is also identified as semantically similar, although it is not relevant to our specific inquiry.

At this point, we encounter a scenario akin to searching a vast online QnA knowledge base, which often returns numerous search results and leaves it to you to scroll through and identify the correct ones. Furthermore, the actual answer to your question might be spread across multiple responses.

That's not what we want. We want OpenAI to create a tailored answer to our question.

So, we generate a prompt that directs OpenAI to return a customized answer to a specific question by relying solely on a collection of question-answer pairs we feed into the prompt.

The prompt we assemble in the demo application consists of:

  • Instruction — “Following is a question by a user. Use only the following question answer pairs to answer the question. If you cannot answer the question output I cannot answer the question”
  • followed by the list of question-answer pairs retrieved by similarity search, divided by dashes.
  • followed by the entered question.

This method of feeding retrieved external content into the prompt context is known as Retrieval Augmented Generation (RAG).
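To make this structure concrete, here is a minimal sketch in Python of how such a system context could be assembled. The instruction text is quoted from the demo; the function name, the Question/Answer labels and the exact separator format are illustrative assumptions:

```python
# Sketch: assembling the RAG system context from retrieved QA pairs.
# The instruction text is from the demo; labels and separators are assumptions.
def build_system_context(qa_pairs: list[tuple[str, str]]) -> str:
    parts = [
        "Following is a question by a user. "
        "Use only the following question answer pairs to answer the question. "
        "If you cannot answer the question output I cannot answer the question"
    ]
    for question, answer in qa_pairs:
        parts.append("---")  # dashes divide the retrieved pairs
        parts.append(f"Question: {question}\nAnswer: {answer}")
    return "\n".join(parts)
```

The entered question itself is then sent as a separate user message, as shown in the technical walkthrough below.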

Let us add another QnA Article to the Knowledge Base. Click on the Add Article button. Enter the following information, then click Create.

Question
How much is a train ticket from the airport to the city center?

Answer
Munich Airport is located in fare zone 5. We recommend the MVV Day Ticket with zone M-5 validity or the CityTourCard with zone M-6 validity for your trip by train. Both offers are available for single travelers and for couples, families or groups. The group version of the ticket allows up to five people to travel. The MVV M-5 Day Ticket for a single person costs 14,80 Euro. The City Tour Card M-6 for a single person costs 24,50 Euro.
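Behind the Create button, the demo embeds the question and upserts the article into the Qdrant collection, as covered in the first article. Purely as an illustration, with the collection name, point ID, payload keys and embedding model all being assumptions, the equivalent client code looks roughly like this:

```python
# Sketch: what saving a QnA article amounts to (see Part 1 of this series).
# Collection name, point ID, payload keys and model are assumptions.
from openai import OpenAI
from qdrant_client import QdrantClient
from qdrant_client.models import PointStruct

openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
qdrant = QdrantClient(url="http://localhost:6333")

question = "How much is a train ticket from the airport to the city center?"
answer = "Munich Airport is located in fare zone 5. ..."  # shortened

# Embed the question so it can later be compared question-to-question.
vector = openai_client.embeddings.create(
    model="text-embedding-ada-002", input=question
).data[0].embedding

qdrant.upsert(
    collection_name="articles",
    points=[PointStruct(id=42, vector=vector,
                        payload={"question": question, "answer": answer})],
)
```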

Once we’ve saved the article, we’ll modify our inquiry to include a query about ticket prices as well. Please input “How do i get from the airport to city center and how much is a ticket?” into the search box on the Answer Right screen.

It’s important to note that the updated answer will now incorporate the newly added ticket price information from our knowledge base.

This serves as an effective illustration of how Retrieval Augmented Generation (RAG) enables us to generate customized responses to inquiries by utilizing the latest data available.

Technical Walkthrough

Let us investigate the implementation details. Open the VectorEmbeddingsDemo module in the Vector Embeddings Demo application in Service Studio.

In the Logic tab, open the Articles_AnswerQuery server action.

Articles_AnswerQuery server action

This action is responsible for querying our Qdrant collection, generating a prompt and returning the answer.

It first searches for relevant articles in our knowledge base using the provided search term. We reuse the Articles_SearchArticles server action we have already seen in the previous article.

Note that this server action by default returns a maximum of 6 search results. You may modify the configuration to suit your needs.
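Expressed as plain client code, this search step looks roughly like the sketch below; the demo performs it through Forge components, and the collection name, payload keys and embedding model are assumptions:

```python
# Sketch: the similarity search step, roughly what Articles_SearchArticles does.
# Collection name, payload keys and model are assumptions.
from openai import OpenAI
from qdrant_client import QdrantClient

openai_client = OpenAI()
qdrant = QdrantClient(url="http://localhost:6333")

query = "How can I reach the city center from the airport?"
query_vector = openai_client.embeddings.create(
    model="text-embedding-ada-002", input=query
).data[0].embedding

hits = qdrant.search(
    collection_name="articles",
    query_vector=query_vector,
    limit=6,  # the demo action returns at most 6 results by default
)
qa_pairs = [(hit.payload["question"], hit.payload["answer"]) for hit in hits]
```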

The PrepareAnswerSystemContext action constructs the instructional part of the prompt. It combines the instruction with the returned question-answer pairs. This becomes the system context of our prompt.

For simplicity, I am using the built-in StringBuilder server action from the Text extension. In more complex prompt scenarios, I suggest using a more mature template builder like the awesome Handlebars.Net Forge component by Miguel Antunes.

Next, both the system context and the original question are appended to a local variable LocalMessages, which serves as input for the OpenAI connector.

Then we call the OpenAI chat completions endpoint using the OpenAI_CreateChatCompletion server action.

Last, we return the first answer from the result, along with all related data used to produce the answer.
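Continuing the earlier sketches, the system context, the user message and the completion call correspond roughly to the following; the model name is an assumption, and the demo issues the request through the Forge OpenAI connector instead:

```python
# Sketch: the chat completion step, mirroring LocalMessages and
# OpenAI_CreateChatCompletion. The model name is an assumption.
from openai import OpenAI

client = OpenAI()

system_context = build_system_context(qa_pairs)  # from the earlier sketches
query = "How can I reach the city center from the airport?"

messages = [
    {"role": "system", "content": system_context},  # instruction + QA pairs
    {"role": "user", "content": query},             # the original question
]

response = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
answer = response.choices[0].message.content  # the first answer, as in the demo
```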

As you can see, it is actually quite simple to implement. But one case remains: what if you don't have a question?

Generating Questions from Statements

In many cases, there are no question-answer pairs. Company policies, for example, only contain the respective statements. A policy might read: "You must not accept any gifts from suppliers and customers".

However, users of our knowledge base would rather ask a question such as "Am I allowed to accept a gift from a customer?". As we have already learned, in terms of semantic similarity search, like should be compared with like. Comparing a question to a statement will generally result in a lower similarity score than comparing question to question, as the sample application does.
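You can check this "like should be compared with like" intuition yourself with a few lines of client code; the embedding model and the example texts are illustrative:

```python
# Sketch: question-to-statement vs. question-to-question similarity.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(text: str) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-ada-002", input=text)
    return np.array(resp.data[0].embedding)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

user_q = embed("Am I allowed to accept a gift from a customer?")
statement = embed("You must not accept any gifts from suppliers and customers.")
generated_q = embed("Can employees accept gifts from suppliers or customers?")

print("question vs. statement:         ", cosine(user_q, statement))
print("question vs. generated question:", cosine(user_q, generated_q))
```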

But how do I efficiently get to a question if I only have a statement? OpenAI supports us here as well. We generate a prompt with the respective statement and ask OpenAI to create a corresponding question for it. We then use this question for creating embeddings and performing the similarity search.

This is a good start, as you do not have to create your questions manually. In addition, the question does not need to be perfect, because the similarity search compensates for small deviations.

In the Logic tab of the demo application, open the Articles_SaveArticle server action. You will note that this server action now has an additional If statement that checks whether a question has been provided. If not, it calls another server action, CreateQuestionFromAnswer (located in the Internal Folder), which returns a question that is then used for the article.

Open the CreateQuestionFromAnswer server action.

CreateQuestionFromAnswer server action

The flow is similar to that of Articles_AnswerQuery, but instead of producing an answer, it creates a corresponding question from a given answer.

The PrepareStatementSystemContext action again uses a StringBuilder to combine an instruction with the given answer.
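Expressed as plain client code, the question generation could look like the sketch below; the exact instruction wording used in the demo may differ, so treat the prompt text and model name as assumptions:

```python
# Sketch: generating a question from a statement, mirroring
# CreateQuestionFromAnswer. Instruction wording and model are assumptions.
from openai import OpenAI

client = OpenAI()
statement = "You must not accept any gifts from suppliers and customers."

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "Create a single question that the following statement answers."},
        {"role": "user", "content": statement},
    ],
)
question = response.choices[0].message.content
# The generated question is then embedded and stored with the article.
```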

Summary

In the article and demo, we learned that it’s quite simple to create a personalized answering app using OutSystems, OpenAI and Qdrant. By giving OpenAI clear instructions to answer questions only using provided question-answer pairs, we can avoid getting incorrect answers due to hallucination and retrieve tailored answers even if the search text contains multiple questions.

We also learned that OpenAI can help create a corresponding question for a statement, which is useful for information such as company policies.

Thank you for reading. I hope you liked it and that I have explained the important parts well. Let me know if not 😊

If you have difficulties getting up and running, please use the OutSystems Forum to get help. Suggestions on how to improve this article are very welcome. Send me a message via my OutSystems Profile or respond directly here on Medium.

If you like my articles, please leave some claps. Follow me and subscribe to receive a notification whenever I publish a new article. Happy Low Coding!
