LlamaIndex question answering over your files using Ollama, with the language models Llama3, Phi-3, GPT-3, and Gemini behind the scenes.


LlamaIndex for JavaScript projects

In this video, I explain step by step how to build this project.

What does this repo contain?

  • Load CSV and TXT files (see the sketch after this list)
  • Configure Ollama to use Llama2
  • Use and configure a Hugging Face embedding model
  • Custom prompt template
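
The snippet below is a minimal sketch of how these pieces can be wired together with LlamaIndex.TS. It assumes the llamaindex package's Ollama, HuggingFaceEmbedding, Settings, SimpleDirectoryReader, and VectorStoreIndex exports (exact option names vary between llamaindex versions); the model names, data path, and prompt text are placeholders, not this repo's actual configuration.

import {
  Ollama,
  HuggingFaceEmbedding,
  Settings,
  SimpleDirectoryReader,
  VectorStoreIndex,
} from "llamaindex";

// Point LlamaIndex at a locally running Ollama model (Llama2, as in the list above).
Settings.llm = new Ollama({ model: "llama2" });

// Hugging Face embedding model; bge-base models produce 768-dimensional vectors,
// which lines up with the 768-dimension vector index shown later.
Settings.embedModel = new HuggingFaceEmbedding({
  modelType: "BAAI/bge-base-en-v1.5",
});

// One common shape for a custom prompt template in older llamaindex versions:
// a plain function from { context, query } to the final prompt string.
// How it is plugged into the query engine depends on the version, so that
// wiring is omitted here.
const customQaPrompt = ({ context = "", query = "" }) =>
  `Answer using only the context below.\n\nContext:\n${context}\n\nQuestion: ${query}\nAnswer:`;

async function main() {
  // Load CSV/TXT files from a local folder (path is a placeholder).
  const documents = await new SimpleDirectoryReader().loadData({
    directoryPath: "./data",
  });

  // Build a vector index over the documents and ask a question.
  const index = await VectorStoreIndex.fromDocuments(documents);
  const queryEngine = index.asQueryEngine();
  const response = await queryEngine.query({ query: "What do these files describe?" });
  console.log(response.toString());
}

main().catch(console.error);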

Requirements

Add a .env file and set OPENAI_API_KEY (see OpenAI API keys):

OPENAI_API_KEY='sk-xxxxx'
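
How the key is loaded is up to the application; a common Node pattern is to read it from process.env, for example with the dotenv package (an illustrative assumption, not necessarily what this repo uses):

import "dotenv/config"; // assumes the dotenv package; a NestJS app may use @nestjs/config instead

const apiKey = process.env.OPENAI_API_KEY;
if (!apiKey) {
  throw new Error("OPENAI_API_KEY is not set; add it to your .env file");
}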

Vector search index

name: embedded_flowers_index_768

{
  "fields": [
    {
      "numDimensions": 768,
      "path": "embedding",
      "similarity": "euclidean",
      "type": "vector"
    }
  ]
}
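
This definition follows the MongoDB Atlas Vector Search format, and numDimensions (768) has to match the output size of the embedding model (bge-base models, for example, produce 768-dimensional vectors). The sketch below shows one way such an index might be attached through llamaindex's MongoDBAtlasVectorSearch store, assuming the vectors live in Atlas; the connection string, database name, and collection name are placeholders, not the repo's actual values.

import { MongoClient } from "mongodb";
import { MongoDBAtlasVectorSearch, VectorStoreIndex } from "llamaindex";

async function openIndex() {
  const uri = process.env.MONGODB_URI; // placeholder env var name
  if (!uri) throw new Error("MONGODB_URI is not set");
  const client = new MongoClient(uri);

  const vectorStore = new MongoDBAtlasVectorSearch({
    mongodbClient: client,
    dbName: "rag",                           // placeholder
    collectionName: "flowers",               // placeholder
    indexName: "embedded_flowers_index_768", // must match the index name above
  });

  // Reuse the existing Atlas collection as the backing store for queries.
  return VectorStoreIndex.fromVectorStore(vectorStore);
}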

Installation

  1. Install Node.js (via nvm):

     $ curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
     $ nvm install 20
     $ source ~/.bashrc

  2. Install Yarn:

     $ curl -o- -L https://yarnpkg.com/install.sh | bash
     $ source ~/.bashrc

  3. Install the project dependencies:

     $ yarn install

Running the app

# development
$ yarn run start

# watch mode
$ yarn run start:dev

# production mode
$ yarn run start:prod

