Run your own ChatGPT locally with Ollama! (Part 2)

Using LibreChat as UI for Ollama!

[Image: A llama using a computer]

Recap

This tutorial is a continuation of Part 1.

Previously, we managed to get Ollama running locally and run the Llama3.2 model by Meta AI (well, I'm assuming you got it working haha). However, it wasn't exactly ChatGPT-like. Thus, we will be beautifying it with another awesome open-source tool - LibreChat!

What exactly is LibreChat?

LibreChat is an open-source application packed with features. Picture ChatGPT but on steroids! Some of the features it offers: a polished chat UI, AI model selection, a multilingual and customizable interface, speech & audio support, and many more. To understand it better, you can visit the official LibreChat documentation. Although building our own UI is possible (you can try it yourself), I prefer not to reinvent the wheel. Furthermore, LibreChat is probably the best wheel out there. Alright, with the introduction out of the way, let's get started.

Steps

1. Downloading Docker

Docker is a platform for packaging and running applications in isolated environments called containers. We will be running LibreChat with Docker Desktop, which provides a user-friendly interface for working with Docker.

Go to their official site to download and install Docker Desktop. Once installed, you might need to restart your computer.
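Once Docker Desktop is installed and your computer has restarted, it's worth confirming that everything is in place from a terminal before moving on:

```shell
# Confirm Docker itself is installed and on your PATH.
docker --version

# Docker Desktop bundles the Compose plugin, which we will use
# in a later step to start LibreChat.
docker compose version
```

If both commands print a version number, you're good to go.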

2. Downloading LibreChat

There are two ways to download LibreChat.

The first way is to manually download the codebase from the LibreChat GitHub repository. Click on the Code dropdown and download it as a ZIP. Afterwards, extract all the files from the ZIP folder.

Alternatively, you can use Git to clone the codebase to your local machine.

git clone https://github.com/danny-avila/LibreChat.git
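Either way, change into the project folder afterwards - the configuration files we edit next and the docker compose commands in the later steps all live in (and must be run from) this directory:

```shell
# Move into the freshly cloned/extracted LibreChat folder.
cd LibreChat
```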

3. Configuring LibreChat

  • Rename the .env.example file to .env
  • Rename the docker-compose.override.yml.example to docker-compose.override.yml
  • In the docker-compose.override.yml file, uncomment the code block below. This allows the app to read the librechat.yaml file for its configuration.
  • services:
        api:
          volumes:
          - type: bind
            source: ./librechat.yaml
            target: /app/librechat.yaml
  • Rename librechat.example.yaml to librechat.yaml
  • Comment out the following sections in the librechat.yaml file. Unfortunately, I'm not sure exactly what these configurations do (they appear to be related to the Actions and MCP server features), but they were causing the app to fail.
  • # Example Actions Object Structure
      # actions:
      #   allowedDomains:
      #     - "swapi.dev"
      #     - "librechat.ai"
      #     - "google.com"
    
      # Example MCP Servers Object Structure
      # mcpServers:
      #   everything:
      #     # type: sse # type can optionally be omitted
      #     url: http://localhost:3001/sse
      #   puppeteer:
      #     type: stdio
      #     command: npx
      #     args:
      #       - -y
      #       - "@modelcontextprotocol/server-puppeteer"
      #   filesystem:
      #     # type: stdio
      #     command: npx
      #     args:
      #       - -y
      #       - "@modelcontextprotocol/server-filesystem"
      #       - /home/user/LibreChat/
      #     iconPath: /home/user/LibreChat/client/public/assets/logo.svg
      #   mcp-obsidian:
      #     command: npx
      #     args:
      #       - -y
      #       - "mcp-obsidian"
      #       - /path/to/obsidian/vault
      
  • Last but not least, in the librechat.yaml file, add a custom endpoint for Ollama. Go to endpoints and, under custom, add the following.
  • endpoints:
        # ...
        # Commented codes
        # ...
        custom:
          - name: "Ollama"
            apiKey: "ollama"
            # use 'host.docker.internal' instead of localhost if running LibreChat in a docker container
            baseURL: "http://host.docker.internal:11434/v1/chat/completions" 
            models:
              default: [
                "llama3.2",
                ]
              fetch: true
            titleConvo: true
            titleModel: "current_model"
            summarize: false
            summaryModel: "current_model"
            forcePrompt: false
            modelDisplayLabel: "Ollama"
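As a quick sanity check before moving on, you can confirm that the endpoint this config points at is actually reachable. Ollama exposes an OpenAI-compatible API (which is exactly why the custom endpoint above works), so with Ollama running on its default port 11434 you can ask it for its model list:

```shell
# Lists the models Ollama exposes through its OpenAI-compatible API.
# You should see llama3.2 in the JSON response if Part 1 worked.
curl http://localhost:11434/v1/models
```

Note that the config uses host.docker.internal instead of localhost because LibreChat runs inside a Docker container, and localhost inside the container would point at the container itself rather than your machine.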

4. Starting LibreChat

Before we start the application, make sure that Ollama and Docker Desktop are running. If they are not, simply search for the applications in Windows search and launch them. Once the two programs are running and you've configured the files, head over to the terminal and run the following.

docker compose up -d

If the application has started successfully, you will see green ticks in the terminal showing that the containers are up. Furthermore, you can check in Docker Desktop that a new container has been created for your application. If it is not running successfully, do check out the references at the bottom that I've used as guides. And of course, ask Professor ChatGPT.
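If you prefer the terminal over Docker Desktop, you can also check on the containers from there. The api service name below comes from the docker-compose.override.yml block we uncommented earlier:

```shell
# Show the status of the LibreChat containers.
docker compose ps

# Follow the logs of the api service to spot startup errors
# (for example, a misconfigured librechat.yaml).
docker compose logs -f api
```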

5. Using LibreChat

Now, the moment of truth.

Open up your browser and navigate to localhost:3080. If LibreChat is running, you should see a sign-in page. Create an account first. The account is created locally, so your information stays private. You can just use a simple, random email and password.

Once you have successfully logged in, click on the dropdown at the top left of the chat UI and select Ollama.

You can now start chatting with your locally hosted "ChatGPT"! There are more cool things you can do with LibreChat, but I'll leave it up to you to explore! Have fun~

References