RAG Dixa Data

Agent Cloud enables you to split/chunk, embed, vector-store, and sync your Dixa data, providing a production-ready RAG pipeline.

How does Agent Cloud automate RAG for Dixa?
Authenticate Dixa to Agent Cloud
Split and Chunk data using simple character chunking or advanced semantic chunking strategies
Embed using the latest OpenAI text-embedding-3 models or open-source models like BAAI/bge-base-en
Vector-store your embeddings in our Vector DB (powered by Qdrant)
Keep data fresh by syncing data on custom schedules (hourly, daily, or cron expression)
Chat UI that can be securely shared within your company
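The chunk → embed → store flow above can be sketched in plain Python. This is an illustrative sketch only: the `embed` function below is a stand-in for a real model (such as fastembed or an OpenAI embedding model), and the list `store` stands in for a Qdrant collection.

```python
# Illustrative sketch of the pipeline stages above (chunk -> embed -> store).
# The embedding function is a stand-in, NOT a real model.

def chunk(text: str, size: int = 40) -> list[str]:
    """Basic character chunking: fixed-size windows over the text."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(piece: str) -> list[float]:
    """Stand-in embedding: a deterministic 8-dimensional vector."""
    return [ord(c) / 255 for c in piece[:8].ljust(8)]

store: list[dict] = []  # stand-in for a Qdrant collection

doc = "Dixa conversations exported for retrieval-augmented generation."
for piece in chunk(doc):
    store.append({"text": piece, "vector": embed(piece)})
```

In production, each stored point would carry an ID and metadata alongside the vector, and the sync scheduler would re-run this flow on your chosen cadence.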
View our pricing options

FAQs

Got questions? We have some answers below
What is your software license?

AGPL-3.0, a copyleft license; the full license text is available on our GitHub page.

What hardware requirements do I need to run Agent Cloud locally?

If running via Docker, we strongly recommend a machine with at least 16 GB of RAM.

E.g. a base MacBook Air M1/M2 with 8 GB RAM will not suffice, as Airbyte requires more resources.

If you are also running Ollama or LM Studio locally, you will need additional RAM to run inference against a local LLM.

If you are running without Docker, 8 GB RAM may suffice, but it is harder to get started.

What local OS do you support?

Currently we have a Docker install.sh script for Mac/Linux users. For Windows we recommend using WSL.

How can I access the ELT platform (Airbyte) locally?

You can access the Airbyte instance directly at http://localhost:8000 with the username airbyte and the password password.
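You can also talk to the local Airbyte API programmatically with those same default credentials. A minimal sketch, assuming a default local install; the `/api/v1/health` path is an assumption about the Airbyte server API, and the request is only constructed here, not sent:

```python
# Sketch: building a Basic-auth request against the local Airbyte instance
# using the default airbyte/password credentials mentioned above.
# The endpoint path is an assumption; the request is not actually sent.
import base64
import urllib.request

credentials = base64.b64encode(b"airbyte:password").decode()
req = urllib.request.Request(
    "http://localhost:8000/api/v1/health",  # hypothetical health-check path
    headers={"Authorization": f"Basic {credentials}"},
)
# urllib.request.urlopen(req) would send it once Airbyte is running
```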

How can I access the Vector DB locally?

The vector DB is Qdrant, and it is located at http://localhost:6333/dashboard#/collections. For more information, see the Qdrant docs.

How can I contribute to the repository?

If you want an initial idea of how your code would fit into the repo, please raise a feature request. Otherwise, if you're keen, write the code and raise a pull request.
You can also chat with us on Discord.

What are your goals?

We aim to be the leading open source platform enabling companies to deploy private and secure AI apps on their infrastructure.

Can I use a local Large Language Model?

Yes. We support any local LLM that exposes an OpenAI-compatible endpoint (i.e. it accepts requests and returns responses in the same format as the OpenAI API).
This means you can use LM Studio or, more recently, Ollama with our app for local inference.

For cloud inference we currently support OpenAI.
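To make "OpenAI-compatible" concrete, here is a sketch of the request shape a local server like Ollama or LM Studio accepts. The URL is Ollama's default OpenAI-compatible route on a standard local install, and the model name is a placeholder for whatever model you have pulled; the request is constructed but not sent:

```python
# Sketch: an OpenAI-format chat completion request aimed at a local Ollama
# server. URL and model name are assumptions for a default local install;
# the request is not actually sent.
import json
import urllib.request

payload = {
    "model": "llama3",  # placeholder: whatever model you have pulled locally
    "messages": [
        {"role": "user", "content": "Summarise my recent Dixa tickets."}
    ],
}
req = urllib.request.Request(
    "http://localhost:11434/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
```

Because the body and route match the OpenAI format, the same client code works whether it points at OpenAI's hosted API or a local server.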

Which Cloud models do you support?

Currently we support OpenAI and Azure OpenAI. Our long-term vision is to support all cloud providers by leveraging the litellm library within our Python backend. Contributors welcome!

Can I use local Embedding models?

Yes. Our app can embed locally via fastembed.
Nothing extra is required: go to the Models screen > Fastembed > select a model.

For cloud embedding inference we currently support OpenAI's text embedding models.

What splitting/chunking methods do you support?

For files
At present we support two methods:
Basic: character splitting (e.g. on `\n`)
Advanced: semantic chunking, which leverages an embedding model to group sentences that are semantically similar. You can read more about semantic chunking in our docs.
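The basic method above can be sketched in a few lines: split on newlines, then merge lines into chunks up to a target size. This is an illustrative sketch of the idea, not Agent Cloud's actual splitter:

```python
# Sketch of basic character splitting: break on newlines, then merge lines
# into chunks no longer than max_len (a single overlong line stays whole).
def basic_split(text: str, max_len: int = 80) -> list[str]:
    chunks, current = [], ""
    for line in text.split("\n"):
        if current and len(current) + len(line) + 1 > max_len:
            chunks.append(current)
            current = line
        else:
            current = f"{current}\n{line}" if current else line
    if current:
        chunks.append(current)
    return chunks

doc = (
    "First paragraph.\n"
    "Second paragraph.\n"
    "A much longer third paragraph that overflows the limit."
)
```

Semantic chunking replaces the length check with an embedding-similarity check, starting a new chunk when consecutive sentences stop being semantically similar.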

For data sources (not files)
At present, data sources other than files are automatically chunked by the message that comes through RabbitMQ. For example, when using BigQuery as the data source, one message equals one row, so the data is chunked row by row. Longer term, we are adding support for users to select which fields to embed and which fields to store as metadata; for now, any selected field is embedded AND used as metadata.
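The per-row behaviour described above amounts to turning each incoming message (one row) into one chunk, where every field is both embedded and kept as metadata. A minimal sketch with hypothetical field names:

```python
# Sketch of the row-per-message chunking described above: one queue message
# is one row, and the whole row becomes one chunk. Field names are
# hypothetical, not a real Dixa or BigQuery schema.
def row_to_chunk(row: dict) -> dict:
    text = " ".join(str(value) for value in row.values())
    # Currently every selected field is embedded AND stored as metadata.
    return {"text": text, "metadata": dict(row)}

row = {"ticket_id": 42, "subject": "Refund request", "status": "open"}
chunk = row_to_chunk(row)
```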

We intend to expose an API endpoint for vector upserts to enable more custom chunking strategies. Alternatively, feel free to contribute to the vector-proxy app in the repo if you are comfortable coding in Rust!

Can I control which fields get synced when I sync a data source?

Yes, you can select both the table and the fields that are synced. This differs for structured vs unstructured data and conforms to the Airbyte stream settings for the source.

Can I control which fields get embedded and which fields get stored as metadata?

Yes, we support selecting fields to embed and fields to store as metadata. For files, only the document text is embedded; we attempt to extract any other metadata available in the file and store it alongside the vector (e.g. page number, document name).
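Conceptually, this split takes a record and divides it into the text that gets embedded and the fields kept only as metadata. A hypothetical sketch; the field names are illustrative, not Agent Cloud's actual schema:

```python
# Sketch of splitting a record into embeddable text vs metadata-only fields,
# as described above. Field names are illustrative.
def split_record(record: dict, embed_fields: list[str]) -> tuple[str, dict]:
    text = " ".join(str(record[f]) for f in embed_fields)
    metadata = {k: v for k, v in record.items() if k not in embed_fields}
    return text, metadata

record = {"body": "Customer asked about billing.", "page": 3, "doc": "faq.pdf"}
text, metadata = split_record(record, embed_fields=["body"])
```

Only `text` would be sent to the embedding model; `metadata` travels with the vector so retrieval results can cite their source (page, document name, and so on).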

Can I control the sync frequency?

Yes, you can select basic or advanced scheduling, giving you hourly, daily, or cron-expression syncs. Note: for cloud, this depends on your plan.

To edit the schedule, go to Data Sources > select a data source > Schedule tab > edit schedule.

What file upload formats are supported?

For local file uploads we support:
PDF, DOCX, TXT, CSV, XLSX

How do you price your platform?

Check out our pricing page for details.

What is Agent Cloud?

A single place where companies can build and deploy AI apps, including single-agent chat apps, multi-agent chat apps, and knowledge retrieval apps. The platform enables developers and engineers to build apps both for themselves and for the teams they work with in sales, HR, operations, and so on.

How can my data retrieval be truly private?

If you want it to be truly private, don't use our managed cloud product (or anyone else's, for that matter). Instead, we recommend deploying our open-source app to your own infrastructure, running an LLM on-prem or in your own cloud, and signing an enterprise self-managed license.