End to End Examples
The repository includes several end-to-end examples that use LlamaIndex.TS.
Check out the examples below, or try them out and complete them in minutes with the interactive GitHub Codespaces tutorials provided by Dev-Docs here:
Chat Engine
Read a file and chat about it with the LLM.
Vector Index
Create a vector index and query it. The vector index will use embeddings to fetch the top k most relevant nodes. By default, the top k is 2.
Summary Index
Create a list index and query it. This example also uses the LLMRetriever, which asks the LLM to select the best nodes to use when generating an answer.
Save / Load an Index
Create and load a vector index. Persistence to disk in LlamaIndex.TS happens automatically once a storage context object is created.
Customized Vector Index
Create a vector index and query it, while also configuring the LLM, the ServiceContext, and the similarity_top_k.
OpenAI LLM
Create an OpenAI LLM and directly use it for chat.
Llama2 DeuceLLM
Create a Llama-2 LLM and directly use it for chat.
SubQuestionQueryEngine
Uses the SubQuestionQueryEngine, which breaks complex queries into multiple sub-questions, then aggregates a response across the answers to all sub-questions.
Low Level Modules
This example uses several low-level components, which removes the need for an actual query engine. These components can be used anywhere, in any application, or customized and sub-classed to meet your own needs.
JSON Entity Extraction
Features OpenAI's chat API (using json_mode) to extract a JSON object from a sales call transcript.