Get started

Search components in the Aryn Conversational Search Stack

Aryn’s Conversational Search Stack uses OpenSearch and large language models (LLMs) for its conversational search pipeline. Aryn added these conversational capabilities in OpenSearch v2.10. The architecture is built around OpenSearch’s document search core: hybrid search (a combination of vector and keyword search) retrieves the best documents from the knowledge base, and OpenSearch’s Search Pipelines send those documents to an LLM using a RAG (retrieval-augmented generation) pipeline to generate an answer. The pipeline also returns the hybrid search results, so users can see which documents were used to generate the natural language answer. We also use conversation memory to store the history of each conversation so it can be used as context for the next user query.
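To make the flow concrete, here is a minimal sketch of a conversational search request sent to a local OpenSearch cluster. The index name, search pipeline name, embedding model ID, conversation ID, field names ("text", "embedding"), and LLM model name are all placeholders, not values from this documentation; substitute the values from your own deployment.

```python
# Hedged sketch: hybrid (keyword + vector) retrieval routed through a RAG search
# pipeline, with conversation memory supplying prior turns as context.
import requests

OPENSEARCH_URL = "https://localhost:9200"    # assumed local cluster
INDEX = "my-knowledge-base"                  # hypothetical index name
PIPELINE = "rag-pipeline"                    # hypothetical RAG search pipeline name
EMBEDDING_MODEL_ID = "<embedding-model-id>"  # deployed text-embedding model
CONVERSATION_ID = "<conversation-id>"        # ID returned by the conversation memory API

question = "How do I configure hybrid search?"

query = {
    # Hybrid search: combine keyword (match) and vector (neural) retrieval.
    # "text" and "embedding" are assumed field names in the index.
    "query": {
        "hybrid": {
            "queries": [
                {"match": {"text": {"query": question}}},
                {
                    "neural": {
                        "embedding": {
                            "query_text": question,
                            "model_id": EMBEDDING_MODEL_ID,
                            "k": 5,
                        }
                    }
                },
            ]
        }
    },
    # RAG extension: the search pipeline sends the retrieved documents, the
    # question, and prior turns from conversation memory to the LLM.
    "ext": {
        "generative_qa_parameters": {
            "llm_question": question,
            "conversation_id": CONVERSATION_ID,
        }
    },
}

response = requests.post(
    f"{OPENSEARCH_URL}/{INDEX}/_search",
    params={"search_pipeline": PIPELINE},
    json=query,
    auth=("admin", "admin"),  # default demo credentials; change for real clusters
    verify=False,             # demo clusters use self-signed certificates
)
result = response.json()

# Natural language answer generated by the LLM (assumed response shape).
print(result["ext"]["retrieval_augmented_generation"]["answer"])
# Hybrid search hits that were used as context for the answer.
print([hit["_source"] for hit in result["hits"]["hits"]])
```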

You can easily see conversational search in action with the Aryn Quickstart, which will ingest a sample dataset and make it available for search.

For more information about this functionality:

Neural Search: neural search configuration

Search Pipeline & RAG: search pipeline configuration
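As a rough illustration of what the search pipeline configuration involves, the sketch below creates a search pipeline with a RAG response processor. It assumes an LLM has already been registered and deployed through the ML Commons connector framework; the pipeline name, model ID, and context field are placeholders for your own configuration, and the linked documentation above is the authoritative reference.

```python
# Hedged sketch: define a search pipeline whose response processor forwards
# retrieved documents to an LLM for answer generation.
import requests

OPENSEARCH_URL = "https://localhost:9200"
PIPELINE = "rag-pipeline"                  # must match the name used at query time
LLM_MODEL_ID = "<llm-connector-model-id>"  # LLM registered via an ML Commons connector

pipeline_body = {
    "response_processors": [
        {
            "retrieval_augmented_generation": {
                "tag": "conversational-search",
                "description": "Send hybrid search results to the LLM",
                "model_id": LLM_MODEL_ID,
                # Document field(s) passed to the LLM as context;
                # "text" is an assumed field name.
                "context_field_list": ["text"],
            }
        }
    ]
}

resp = requests.put(
    f"{OPENSEARCH_URL}/_search/pipeline/{PIPELINE}",
    json=pipeline_body,
    auth=("admin", "admin"),  # default demo credentials; change for real clusters
    verify=False,             # demo clusters use self-signed certificates
)
resp.raise_for_status()
```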