How to Build a RAG Chatbot Using Agent Cloud And MongoDB


Enterprises are constantly seeking ways to improve efficiency, gain a competitive edge, and deliver exceptional customer service.

Retrieval-Augmented Generation (RAG) technology is emerging as a powerful tool that addresses these needs by combining information retrieval with AI generation. This innovative approach unlocks a range of benefits that can significantly transform enterprises' operations.

One of the most impactful applications of RAG lies in enhanced customer support. RAG can empower chatbots and virtual assistants to provide more accurate and contextually relevant responses by retrieving information from a company's knowledge base or past customer interactions. This translates to faster resolution times, improved customer satisfaction, and a reduced burden on human support teams.

Another key advantage of RAG is its ability to streamline knowledge management. Enterprises often struggle with vast amounts of unstructured data stored in documents, emails, and reports. RAG tackles this challenge by enabling users to retrieve the information they need quickly. This empowers employees to find answers to internal queries, access relevant documents for decision-making, and conduct research more efficiently, ultimately boosting overall productivity.

RAG goes beyond simply retrieving and generating information; it can also play a crucial role in data analysis. By identifying relevant data points and insights from large datasets, RAG can automate parts of the data analysis pipeline, leading to faster extraction of actionable insights. This empowers enterprises to make data-driven decisions with greater speed and accuracy.

RAG is not just a technological innovation; it represents the future of enterprise operations. Its ability to unlock the true potential of data, streamline workflows, and empower better decision-making positions RAG as a game-changer for businesses across all sectors.

What is Retrieval-Augmented Generation (RAG)?

Retrieval-Augmented Generation (RAG) combines an LLM with an external knowledge source: at query time, relevant documents are retrieved and supplied to the model as context, grounding its responses in domain-specific data rather than in its training data alone.

In this blog, we will learn to build a RAG chatbot in minutes using AgentCloud.

AgentCloud is an open-source platform that enables companies to build and deploy private LLM chat apps, empowering teams to securely interact with their data. AgentCloud internally uses Airbyte to create data pipelines that split, chunk, and embed data from over 300 data sources, including NoSQL databases like MongoDB. It simplifies the process of ingesting data into the vector store, both for the initial setup and for subsequent scheduled updates, ensuring that the vector store is kept up to date. AgentCloud uses Qdrant as the vector store to efficiently store and manage large sets of vector embeddings. For a given user query, the RAG application fetches relevant documents from the vector store by comparing the similarity of their vector representations to the query vector.
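The similarity lookup at the heart of that retrieval step can be sketched in a few lines of plain Python (a toy illustration with made-up 3-dimensional vectors and hypothetical document names; real embeddings have hundreds of dimensions, and Qdrant performs this search far more efficiently):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings"; a vector store holds one such vector per document chunk.
documents = {
    "python-course": [0.9, 0.1, 0.2],
    "cooking-course": [0.1, 0.8, 0.3],
}
query = [0.85, 0.15, 0.25]

# Rank documents by how similar their vectors are to the query vector.
ranked = sorted(documents, key=lambda d: cosine_similarity(query, documents[d]), reverse=True)
print(ranked[0])  # the document closest to the query
```

The document whose vector points in nearly the same direction as the query vector ranks first, which is exactly the behavior the RAG application relies on when fetching context.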

Setting up Agent Cloud via Docker

To run AgentCloud locally, we must have Docker installed on our system. Then we can follow the steps below.

  1. First, we need to clone the repository:

 git clone

  2. Next, move into the agentcloud directory:

cd agentcloud

  3. To run it locally, make the install script executable and run it:

chmod +x && ./

The install command will download the required Docker images and start the containers in Docker.

Once the install script has run successfully, we can view the running containers in the Docker app:

To access Agent Cloud in the browser, we can visit this URL: http://localhost:3000/register

Agent Cloud Signup Page

Next, we need to sign up on the platform.

Agent Cloud Sign up Screen in Local


After signing up, log in to the app to reach this landing screen.

Agent Cloud Landing Page on Local

Congrats! Our setup is now complete. Next, we will move on to building our RAG application.

Adding a New Model

Agent Cloud allows us to use models like FastEmbed and OpenAI in our app. Let's go to the Models screen and click the Add Model option to add a new model.

Agent Cloud Model Screen

In the configure screen, we can select a model; let's select the fast-bge-small-en model to embed the text content. Then click the Save button to complete the model setup.

FastEmbed, a lightweight library with minimal dependencies, is ideal for serverless environments like AWS Lambda.  The core model, fast-bge-small-en, efficiently captures text meaning for tasks like classification and retrieval due to its compact size. This combination offers developers a powerful solution for real-time text analysis in serverless deployments.

Model Pop Up Window

After successfully adding the model, we will be able to view it in the Models list.

Model Window After Adding a Model

Creating a Data Source

We will be using MongoDB as our data source.

MongoDB is a NoSQL database that offers a flexible alternative to traditional relational databases. Unlike relational databases with rigid schemas, MongoDB stores data in JSON-like documents, allowing for easy adaptation to ever-changing data structures.

In our MongoDB, we have a database called course_db which contains a collection called course_catalog. Inside this collection, we have stored different course information.

MongoDB Data Source

There are multiple fields in each document, but the ones we are interested in are:

  • title 
  • description
  • level
  • duration
  • skills_covered
  • url
  • meta_data
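To make that structure concrete, a course_catalog document might look like the following sketch (the field names come from our collection; all the values are invented for illustration):

```python
# A hypothetical course_catalog document; values are illustrative only.
course_doc = {
    "title": "Introduction to Python",
    "description": "A hands-on introduction to Python programming.",
    "level": "Beginner",
    "duration": "4 weeks",
    "skills_covered": ["variables", "functions", "file I/O"],
    "url": "https://example.com/courses/intro-to-python",
    # meta_data aggregates the searchable details; this is the field
    # we will later embed into the vector store.
    "meta_data": "Introduction to Python | Beginner | 4 weeks | variables, functions, file I/O",
}
print(sorted(course_doc))
```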

To access and utilize MongoDB data within the RAG, we'll create a MongoDB data source.

First, we need to go to the Data Sources page and click on the New Connection button.

Data Sources Screen in Agent Cloud App

We will select MongoDB as the Datasource.

Selecting MongoDB Data Source

We will set the Datasource Name to course_db_mongo, derived from the database name, and add a short description of the new data source. We have kept the Schedule Type as Manual, which means the MongoDB data will be synced to the vector store manually.

I am running MongoDB on my local machine with Docker.

To connect Airbyte to MongoDB, we need to provide the MongoDB connection string and the Mongo database name. For the cluster type, I have selected Self-Managed Replica Set since MongoDB is running locally. The rest of the values can be left at their defaults.
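For reference, the connection settings for this local setup look roughly like the following (these values are assumptions based on my environment, not universal defaults). Note that because Airbyte itself runs inside Docker, `localhost` will not reach a MongoDB instance on the host machine; on Docker Desktop, `host.docker.internal` does:

```text
Connection string : mongodb://host.docker.internal:27017
Database name     : course_db
Cluster type      : Self-Managed Replica Set
```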

Next, we need to select the collection that we want to sync, which is course_catalog. We will be syncing all the fields to the vector store.

Creating a data source

After that, we need to select the field to be embedded and click Continue. The meta_data field in MongoDB has all the relevant information required, so we will select this field for embedding.

The data source is now created. On the first run, it embeds the Mongo data and stores it in the Qdrant vector store.

Processing Data Sources

We can check the Qdrant DB running locally to verify the data sync.

Qdrant runs on port 6333 and can be accessed at the link below: http://localhost:6333/dashboard#/collections

On the Collections page, we can see that a new collection has been created.

As the data syncs, this collection gets populated with the documents.
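Besides the dashboard, the sync can also be verified programmatically via Qdrant's REST API (a sketch: the live fetch assumes Qdrant is reachable on localhost:6333, so below we only exercise the response parsing on a sample payload, and the collection name is an assumption; yours may differ):

```python
import json
from urllib.request import urlopen  # used by the live check in the comment below

def collection_names(payload):
    """Extract collection names from a Qdrant GET /collections response."""
    return [c["name"] for c in payload["result"]["collections"]]

# With Qdrant running locally, the live response can be fetched like this:
#   payload = json.load(urlopen("http://localhost:6333/collections"))
#   print(collection_names(payload))

# Offline, the same parsing works on a payload in the shape Qdrant returns:
sample = {"status": "ok", "result": {"collections": [{"name": "course_db_mongo"}]}}
print(collection_names(sample))
```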

Setting up Tools

Tools are an essential component for enabling the AI agent to interact with its environment effectively, process information, and take appropriate actions to achieve its goals. The tools used by an AI agent can include functions, APIs, data sources, and other resources that help the agent perform specific tasks autonomously and efficiently. 

The tool we will be setting up will be responsible for querying the data source and fetching relevant documents. Agent Cloud by default creates a tool for us when a new data source is added. In the below screenshot, we can see that the course_db_mongo tool has already been created for us by the platform.

Tools screen in Agent Cloud App

The AI agents read the tool's description and make a judgment about using the tool for the task, so we need to make sure that the description of the tool covers all the information that the agent would require.

Edit Tools Screen in Agent Cloud

Creating an Agent

An AI Agent is a sophisticated system that uses LLM technology to reason through problems, create plans to solve them, and execute those plans with the assistance of various tools. These agents are characterized by their complex reasoning capabilities, memory functions, and ability to execute tasks autonomously.

To create the agent, we will first go to the Agents page and then click on the New Agent button.

Agent Screen in the App

This brings us to the agent configuration page, where we define the Name, Role, Goal, and Backstory of an Agent. We have selected OpenAI GPT-4 as both the Model and the Function Calling Model. In the Tools section, we will select the course_db_mongo tool.

How to Create an Agent in Agent Cloud

If you don't have the OpenAI GPT-4 model configured, you can click on the Model option and add a new one. A modal for configuring a new model opens, where you can add the model name, the model type, the Credentials (your OpenAI API key), and finally the LLM model. On clicking the Save button, the OpenAI GPT-4 model will be configured.

Now click on the Save button on the agent configuration page and a new agent will be created for us.

Course Info Agent Created

Creating a Task

Tasks are specific assignments given to an agent for completion. To create a new task, we need to click the `Add Task` button on the Task screen.

Create Tasks Screen

On the Task configuration page, we need to define the Name and Task Description. We will select course_db_mongo as the Tool and Course Information Agent as the Preferred Agent.

On clicking the Save button a new task will be created for us.

Creating the App

We will now move on to creating the App. In our app, we bind the Agent and Task together to create a conversational RAG app that helps users answer questions related to courses. In the app configuration, we will select the App Type as Conversation Chat App. The Task will be the Course Information Task we created before, and the Agent will be the Course Information Agent. Since we want the App to process tasks sequentially, we will select the Process as sequential. Finally, we will select OpenAI GPT-4 as the LLM model. Then we can click on the Save button to save our configuration.
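Conceptually, the conversational RAG loop the app now runs boils down to something like this (a schematic sketch with stubbed retrieval and LLM calls, not AgentCloud's actual code):

```python
def retrieve(query):
    """Stub: in the real app, Qdrant returns the most similar course chunks."""
    return ["Course: Introduction to Python | Beginner | 4 weeks"]

def call_llm(prompt):
    """Stub: in the real app, this is a chat completion call to GPT-4."""
    return f"Answer based on: {prompt[:40]}..."

def answer(query):
    # 1. Embed the query and fetch relevant documents from the vector store.
    context = "\n".join(retrieve(query))
    # 2. Ask the LLM to answer using only the retrieved context.
    prompt = f"Context:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)

print(answer("Are there any Python courses?"))
```

The retrieval step grounds every answer in the synced MongoDB data, which is what keeps the chatbot's responses specific to our course catalog.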

Now let's test our App. Clicking the play button opens a chat window where we can have a conversation.

I want to check if there are any Python courses on the list. The Agent uses the course_db_mongo tool to retrieve the Python courses.

Here is another example where I inquire about a beginner course on Google Workspace, and the agent was able to retrieve the course with the difficulty level of beginner.

In this last example, I fetch a web development course with a duration of 2 weeks.


In this blog, we learned to build a RAG chat app with Agent Cloud and MongoDB. We covered creating a data source, embedding it, and storing it in Qdrant DB. Additionally, we learned how to build tools for Agents and create an app where users can interact with their private data.
