Talk to your data

Agent Cloud is an open-source platform enabling companies to build and deploy private LLM chat apps (like ChatGPT) that let teams securely talk to their data.

Have a Mac or Linux?
Copy and paste this into your terminal to get started
Agent Cloud is an open-source generative AI platform with built-in RAG as a Service that enables you to build and deploy LLM-powered conversational chat apps for talking with your data.
Trusted by developers at:
Stanford University · Microsoft · Bumble · Google

Retrieve data from 300+ data sources

Agent Cloud comes with a built-in data pipeline, allowing you to split, chunk, and embed data from over 300 sources out of the box.
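The split → chunk → embed flow described above can be sketched in plain Python. This is an illustrative sketch only: the function names and the stub embedder are ours, not Agent Cloud's actual pipeline code, and a real model (e.g. text-embedding-3) would return a dense vector.

```python
# Illustrative sketch of an ingest flow: split -> chunk -> embed -> store.
def split(text: str) -> list[str]:
    """Split raw text into paragraphs."""
    return [p for p in text.split("\n\n") if p.strip()]

def chunk(paragraph: str, size: int = 100) -> list[str]:
    """Naive fixed-size chunking of a paragraph."""
    return [paragraph[i:i + size] for i in range(0, len(paragraph), size)]

def embed(text: str) -> list[float]:
    """Stub embedding: vowel frequencies; a real model returns a dense vector."""
    return [text.count(v) / len(text) for v in "aeiou"]

store: list[dict] = []
for para in split("Quarterly revenue grew 8%.\n\nChurn fell to 2%."):
    for c in chunk(para):
        store.append({"text": c, "vector": embed(c)})
```

Each stored record pairs the chunk text with its vector, which is the shape a vector database upsert expects.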
[Image: Agent Cloud integration options, with service icons including BigQuery, HubSpot, and Salesforce among others]
"Host your own internal GPT builder which can use any LLM and access hundreds of data sources."

Bring your own LLM or use OpenAI

Agent Cloud is designed to be LLM-agnostic, enabling users to connect their own open-source model or leverage OpenAI. To be fully private, open-source or self-managed users can connect Agent Cloud to their own locally hosted model.
[Image: AgentCloud integration with various Large Language Models, including Mistral AI, Meta, OpenAI, and Anthropic]

The end-to-end RAG pipeline

Select your connector

Use our collection of data sources to sync data from other systems like Confluence, or upload your own PDF, DOCX, TXT, or CSV file.
When selecting systems like databases (Postgres, Snowflake, BigQuery), you can choose which tables and even which columns to ingest.

Prep your data

For files, you can provide instructions on how to split and chunk your data. Leverage OpenAI's latest text-embedding-3 models for embedding, or select from open-source models like BGE/base.

Vector store your data

Once data has been embedded, the platform stores it in a vector database. We also expose the underlying vector database (Qdrant) so you can inspect your collections directly.

Keep data fresh

Select the frequency at which you would like to sync data from the source: manual, scheduled, or a cron expression. This means users can query fresh data and know how recently the source was updated.
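For example, a cron expression for a sync every six hours (an illustrative value, not a platform default):

```
0 */6 * * *    # minute  hour  day-of-month  month  day-of-week
```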

Start chatting with your data!

Now that your data is synced, simply create an agent with your choice of LLM and start a session to talk to your data.

Scale from startup to enterprise

We're not shy to say that we stand on the shoulders of giants. Our open-source architecture is modular and designed to scale with your org, making the build-vs-buy discussion a non-starter.
Built-in ELT pipeline (powered by Airbyte)
Built-in message bus (powered by RabbitMQ)
Built-in vector database (powered by Qdrant)

Privately chat with your data in your cloud.

Get started with the Agent Cloud Community edition today, or talk to us for enterprise enquiries.


Got questions? We have some answers below
What is your software license?

AGPL 3.0, a copyleft license, which can be found on our GitHub page.

What hardware requirements do I need to run Agent Cloud locally?

If running via Docker, we strongly recommend a machine with at least 16 GB of RAM.

E.g. a base MacBook Air M1/M2 with 8 GB RAM will not suffice, as Airbyte requires more resources.

If you are also running Ollama or LM Studio locally, you will need additional RAM if you want agents to run inference against a local LLM.

If you are running without Docker, 8 GB RAM may suffice, but it is harder to get started.

What local OS do you support?

Currently we have a Docker script for Mac/Linux users. For Windows we recommend using WSL.

How can I access the ELT platform (Airbyte) locally?

You can access the Airbyte instance directly by going to http://localhost:8000 with the username airbyte and password password.
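Beyond the browser, you can hit the Airbyte API from a script using those same default credentials. Below is a standard-library sketch; the `/api/v1/health` path is an assumption for illustration, and building the request does not require the server to be running.

```python
# Sketch: build a basic-auth request to the local Airbyte instance,
# using the default credentials mentioned above (airbyte / password).
# The /api/v1/health path is an assumption for illustration.
import base64
import urllib.request

def airbyte_request(path: str,
                    user: str = "airbyte",
                    password: str = "password") -> urllib.request.Request:
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        f"http://localhost:8000{path}",
        headers={"Authorization": f"Basic {token}"},
    )

req = airbyte_request("/api/v1/health")
# urllib.request.urlopen(req) would perform the call when Airbyte is running.
```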

How can I access the Vector DB locally?

The vector DB is Qdrant, and it is located at http://localhost:6333/dashboard#/collections. For more information about Qdrant, see their docs here.

How can I contribute to the repository?

If you want an initial idea of how your code would fit into the repo, please raise a feature request. Otherwise, if you're very keen, write the code and raise a pull request.
You can also chat with us on Discord.

What are your goals?

We aim to be the leading open source platform enabling companies to deploy private and secure AI apps on their infrastructure.

Can I use a local Large Language Model?

Yes. We support any local LLM that has an OpenAI-compatible endpoint (i.e. it responds the same way OpenAI does).
This means you can use LM Studio or, more recently, Ollama with our app for local inference.

For cloud inference, we currently support OpenAI.
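Because a local server exposes an OpenAI-compatible endpoint, the same chat-completions request shape works against it. Below is a standard-library sketch: the port (Ollama's default, 11434) and the model name `llama3` are assumptions for illustration, and constructing the request does not contact the server.

```python
# Sketch: an OpenAI-compatible /v1/chat/completions request aimed at a
# local model server. Port 11434 (Ollama's default) and the model name
# "llama3" are illustrative assumptions.
import json
import urllib.request

def chat_request(prompt: str,
                 base_url: str = "http://localhost:11434/v1",
                 model: str = "llama3") -> urllib.request.Request:
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = chat_request("Summarise last week's sales data.")
# urllib.request.urlopen(req) would send it when the local server is running.
```

Swapping `base_url` between a local server and OpenAI's API is the whole point of an OpenAI-compatible endpoint: the request body stays identical.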

Which Cloud models do you support?

Currently we support OpenAI and Azure OpenAI. Our long-term vision is to support all cloud providers by leveraging the litellm library within our Python backend. Contributors welcome!

Can I use local Embedding models?

Yes. Our app can embed locally via fastembed.
You don't need to do anything to get this working; just go to the Models screen > Fastembed > select a model.

For cloud embedding inference, we support OpenAI text-ada models for now.

What splitting/chunking methods do you support?

For files
At present we support two methods:
Basic: character splitting (e.g. on \n)
Advanced: semantic chunking, which leverages an embedding model to group sentences that are semantically similar. You can read about semantic chunking here.
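The semantic method can be sketched with a stub similarity function. A real implementation compares embedding vectors (e.g. cosine similarity); the word-overlap stub and the threshold below are illustrative assumptions, not the platform's implementation.

```python
# Sketch of semantic chunking: adjacent sentences are merged into one
# chunk while their similarity stays above a threshold.
def similarity(a: str, b: str) -> float:
    """Stub: word-overlap (Jaccard); a real model would compare embeddings."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def semantic_chunks(sentences: list[str], threshold: float = 0.2) -> list[str]:
    chunks = [sentences[0]] if sentences else []
    for s in sentences[1:]:
        if similarity(chunks[-1], s) >= threshold:
            chunks[-1] = f"{chunks[-1]} {s}"   # similar: extend current chunk
        else:
            chunks.append(s)                    # dissimilar: start a new chunk
    return chunks

chunks = semantic_chunks([
    "Sales rose in March.",
    "Sales rose again in April.",
    "The office moved to Berlin.",
])
```

Here the two sales sentences merge into one chunk, while the unrelated sentence starts a new one.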

For data sources (not files)
At present, data sources other than files are automatically chunked by the message that comes through RabbitMQ. For example, when leveraging BigQuery as the data source, a message equals one row, so data is chunked row by row. We are adding support long term to let users select which fields to embed and which to store as metadata. For now, any field selected will be embedded AND used as metadata.
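That row-by-row behaviour can be sketched as follows. The record shape and function name are illustrative, not the actual RabbitMQ message format: every selected field is both embedded (as concatenated text) and kept as metadata, matching the current behaviour described above.

```python
# Sketch: each row-message becomes one chunk; every selected field is
# embedded AND stored as metadata (the current behaviour described above).
def row_to_point(row: dict, selected_fields: list[str]) -> dict:
    selected = {k: row[k] for k in selected_fields}
    text = " | ".join(f"{k}: {v}" for k, v in selected.items())
    return {
        "text": text,          # what gets embedded
        "metadata": selected,  # ...and the same fields stored as metadata
    }

point = row_to_point(
    {"id": 7, "name": "Acme", "mrr": 1200, "internal_note": "skip"},
    selected_fields=["name", "mrr"],
)
```

Unselected fields (like `internal_note` here) are neither embedded nor stored.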

We intend to expose an API endpoint for vector upsert to enable more custom chunking strategies. Alternatively, feel free to contribute to the vector-proxy app in the repo if you are comfortable coding in Rust!

Can I control which fields get synced when I sync a data source?

Yes, you can select both the tables and the fields that are synced. This differs for structured vs. unstructured data and conforms to the Airbyte stream settings for the source.

Can I control which fields get embedded and which fields get stored as metadata?

We are adding support long term to let users select which fields to embed and which to store as metadata. For now, any field selected when connecting a data source will be embedded AND used as metadata. For files, only the document text is embedded; we attempt to extract any other document metadata available in the file (e.g. page number, document name) and store it as metadata.

Can I control the sync frequency?

Yes. You can select basic or advanced scheduling, giving you hourly, daily, or cron-expression syncs.

To edit, go to:
Data Sources > select a data source > Schedule tab > Edit Schedule

What file upload formats are supported?

For local file uploads we support PDF, DOCX, TXT, and CSV files.

How do you price your platform?

Check out our pricing page.

What is Agent Cloud?

A single place where companies can build and deploy AI apps. This includes single-agent chat apps, multi-agent chat apps, and knowledge retrieval apps. The platform enables developers and engineers to build apps both for themselves and for the teams they interface with in sales, HR, operations, etc.

How can my data retrieval be truly private?

If you want it to be truly private, don't use our managed cloud product, or anyone else's for that matter. Instead, we recommend deploying our open-source app to your infrastructure and signing an enterprise self-managed license.