Taking Control of AI: Running LLMs on Your Local Machine 🚀💻

Hey there, fellow tech enthusiasts! You've likely heard the constant hum about the latest generative AI tools – ChatGPT, Bard, Claude, and more. These cutting-edge tools all share a common backbone: Large Language Models (LLMs). If you've dabbled in the AI world, you might have seen discussions about installing these LLMs locally. But why go local with LLMs? Let's dive in.

Why Run Your Own LLM Locally? 🤔🛠️

Running your own LLM offers unparalleled freedom and control:

  1. Say Goodbye to Rate Limits: No API quotas, throttling, or usage fees; the only limits are the ones your hardware sets.

  2. Customize to Your Heart's Content: Experiment and tweak settings for personalized use.

  3. Versatility at Your Fingertips: Different models for different tasks? Absolutely.

  4. Be Your Own AI Trainer: Train models tailored to your specific needs.

And if privacy is your priority, running LLMs locally is a game-changer. No more worries about sensitive, confidential, or IP-protected data being exposed in the cloud.

The Easiest Path to Local LLM Installation 🛣️👨‍💻

For those of us on Windows, there's a clever workaround to tap into the power of LLMs. You could use LM Studio, but in this guide we will cover installing Ollama for its simplified user experience and accessibility. There is one catch, though: Ollama is currently tailored only for Linux and Mac.

But no worries, we're not left behind. The solution? Windows Subsystem for Linux (WSL). With WSL, we can seamlessly run Linux alongside our Windows environment, opening the door to using Ollama right on our Windows machines.

Let's Get Started! 🌟

So, gear up to run a large language model right on your Windows machine. Follow along as I walk you through every step. I am currently running Windows 10, but Windows 11 shouldn't be much different to set up. Here's to unlocking the power of AI, right from your desktop!

Installing WSL

Opening PowerShell as an Administrator 🔑🔵

  1. Search for PowerShell: Click on the search bar in your Windows taskbar and type in "PowerShell".

  2. Run as Administrator: Right-click on the PowerShell app and select "Run as Administrator". This step is crucial as installing WSL (Windows Subsystem for Linux) requires administrative privileges.

  3. Enter the Install Command: Simply type

     wsl --install
    

    and press Enter. This command initiates the installation of WSL on your machine, opening the door to running genuine Linux distributions within Windows.

    Note: The installation might take a few minutes, so grab a coffee and let Windows do its magic!
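
Once the install wraps up (a reboot may be required), it's worth a quick sanity check. Assuming a recent WSL build, this command reports the default distribution and the WSL version in use:

wsl --status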

Choosing Your Linux Flavor 🐦🐧

With WSL installed, you now have access to multiple Linux distributions:

  1. List Available Distributions: To see what Linux flavors you can install, type the following command:

     wsl.exe -l -o
    
  2. Select Your Preferred Distribution: You can choose from popular options like Ubuntu, Debian, or Kali Linux. For the sake of this tutorial, we will be installing Ubuntu 22.04.

To do this, we enter the command:

wsl --install -d Ubuntu-22.04
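
And if you ever end up with several distributions installed, you can make Ubuntu 22.04 the default, so a bare wsl command drops you straight into it:

wsl --set-default Ubuntu-22.04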

Creating Your Linux User Account 🔑👤
Once WSL installation completes, it's time to personalize your new Linux environment:

Username and Password Setup: The system will prompt you to create a username and password. Choose a username that's easy for you to remember. When it comes to the password, remember: strong and secure is the way to go! This step is just like setting up any new system – straightforward and essential for your security.

Launching Your Linux Distribution 🚀🔍
Now that your account is ready, let's launch your Linux distribution:

  1. Search and Run: Simply type "Ubuntu" (or the name of the Linux distribution you installed) in your Windows search bar.

  2. Accessing Linux: You can start it right from the search results. For convenience, consider pinning it to your taskbar. If you're a fan of Windows Terminal, you can also launch it from there, or start it straight from the command line as shown below.
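
For example, from PowerShell you can jump straight into your new distribution with:

wsl -d Ubuntu-22.04

The -d flag selects which distribution to launch, which becomes handy once you have more than one installed.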

Congratulations: You're Linux-Ready on Windows! 🎉

With these steps, you now have a fully functional Linux system running alongside your Windows environment. This setup paves the way for installing and experimenting with various software that's native to Linux, including the Ollama project for LLMs.

Installing Ollama: A Breeze on Your Linux Setup 🌬️💻

The Simplicity of Ollama's Installer 🛠️👌

Installing Ollama on your newly set up Linux environment is surprisingly simple. The developers behind Ollama have crafted an installer that's not only efficient but also incredibly user-friendly, particularly for Linux and Mac systems. But don't worry, Windows users with WSL are also in luck!

Running the Install Command 📥

Just open your Linux terminal and enter the following command:

curl https://ollama.ai/install.sh | sh

And... You're Done! ✅

That's really it! With these simple steps, Ollama is now installed on your system. But we're not quite finished yet.

With Ollama successfully installed, the next step is to select a Large Language Model (LLM) to work with. The great news is, Ollama supports a wide array of models right from the get-go, making this step a breeze. Let's pick one and dive in.
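
As a quick sketch, here's how you'd grab neural-chat, the model we'll run later in this guide (any model name from the Ollama library works the same way):

ollama pull neural-chat

Pulling just downloads the model weights ahead of time; it's optional, since ollama run will fetch a missing model automatically.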

Starting Up Ollama 🔄

Ollama runs in the background, simplifying your workflow. To ensure it's up and running, you can initiate it with this simple command:

ollama serve

Enter this at the prompt in your Linux terminal. Once done, Ollama is set to start automatically every time you boot up WSL. In my experience, you'll need to open a new terminal at this point before continuing.
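
To double-check that the server is really listening, you can poke its default port from that new terminal (assuming you haven't changed the port):

curl http://localhost:11434

A healthy instance replies with a short "Ollama is running" message.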

What's Next? The Adventure Continues... 🌐🔮

But our journey doesn't end here. With your powerful model active, the next thrilling step awaits: setting up a web interface to interact with Ollama. Imagine commanding this coding virtuoso through a sleek, intuitive web portal. Stay tuned, as the next part of our guide unveils how to create this dynamic interface – the journey into advanced AI interaction is about to get even more exciting...

Setting Up a Web Interface for Ollama: Enhancing Your AI Experience 🌐💻

Getting Started with ollama-webui 🚀

Want to add a sleek web interface to your Ollama setup? Thanks to the intuitive ollama-webui, it's straightforward. Let's walk through the steps.

Cloning the Repository 📁🔗

First things first, let's clone the ollama-webui repository:

git clone https://github.com/ollama-webui/ollama-webui.git && cd ollama-webui/

You can place this repository anywhere you prefer in your system.

Installing with NPM 🛠️📦

Next, we'll use NPM (Node Package Manager) to install the web interface's dependencies:

npm install

This process is efficient and should only take a few minutes.

Handling "Unable to Locate Package NPM" Error 🛠️❓

If you bump into the "unable to locate package npm" error, here's how to tackle it. This is a common hiccup, especially on Linux systems that are freshly installed.

Updating Your System 🔄

First, make sure your package lists are updated:

sudo apt update

This command refreshes your system's package list, ensuring it recognizes all the latest software, including NPM.

Installing NPM 📦

After updating, proceed with the NPM installation:

sudo apt install npm

Clearing the NPM Cache 🧹

Clearing the cache is crucial for avoiding conflicts and ensuring smooth installations:

npm cache clean --force

Installing 'n' for Node Version Management 🌐🔀

'n' is a handy package for managing Node versions. Install it globally:

npm install -g n

Updating Node to the Latest or LTS Version ⬆️

Now, let's get your Node version up to date:

n lts
n latest

These commands install the Long-Term Support (LTS) and the latest versions of Node, ensuring you have the most stable and recent features.

Removing Old Node Versions 🗑️

Keep your system clean by removing outdated versions:

n prune

This command clears out cached Node versions other than the active one, keeping your system tidy.
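
To confirm you're left with a working, current toolchain, check the active versions:

node -v
npm -v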

Setting Up the Environment File 🌐📄

Copy the required .env file for configuration:

cp -RPp example.env .env

Building the Frontend 🏗️🖥️

Finally, make sure the dependencies are installed and start the development server:

npm i
npm run dev

And there you have it! You're all set to run ollama-webui with a fully updated and optimized environment.

✨ Looks pretty familiar, doesn't it?

Configuring Ollama for the Web Interface ⚙️🖥️

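The key piece of configuration is the address Ollama listens on. As a minimal sketch, assuming the systemd service that the Ollama installer created, you can add an OLLAMA_HOST override so the API accepts connections beyond localhost:

sudo systemctl edit ollama.service

Then, in the editor that opens, add:

[Service]
Environment="OLLAMA_HOST=0.0.0.0"

If the webui runs on the same machine, the default localhost binding may already be enough, so treat this as a starting point to adapt to your setup.
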
With the configuration set, the next step is to reload the system daemon and restart Ollama. This ensures all changes take effect:

sudo systemctl daemon-reload
sudo systemctl restart ollama

These commands refresh the system's understanding of the services and apply the new configuration for Ollama.

Running Your Chosen Model in Ollama 🐬🚀

Now, let's activate Ollama with the model of your choice. In this example, I'm going back to using the neural-chat model:

ollama run neural-chat

This command starts the neural-chat model within Ollama, setting the stage for interaction.
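
You'll be dropped into an interactive prompt where you can chat with the model right away, which makes for a nice sanity check before wiring up the web interface:

>>> Explain WSL in one sentence.

Type /bye at the prompt whenever you want to leave the session.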

Launching the Web Interface 🌐💻

To interact with Ollama through a more intuitive interface, we'll set up the web server:

  1. Open a Second Terminal: Make sure it's using the same Ubuntu installation where you've set up Ollama.

  2. Navigate to ollama-webui Directory:

     cd ollama-webui
    
  3. Start the Web Server:

     npm run dev
    

    Running this command initiates the web interface.
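
Keep an eye on the terminal output for the local address to open in your browser; for a Vite-based dev server like this one, it's typically http://localhost:5173, but trust whatever URL the command prints.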

Finalizing Your Web Interface Setup ⚙️🌐

Awesome! With the web server up and running, there's just one more step to complete your setup:

  1. Accessing Settings: In the web interface, navigate to the 'Settings' ⚙ icon. This is where you'll fine-tune the connection to your Ollama instance.

  2. Configuring Ollama URL: Make sure to enter your Ollama URL exactly as follows:
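
     http://localhost:11434

    This assumes Ollama is listening on its default port, 11434, on the local machine; if you changed OLLAMA_HOST or run Ollama elsewhere, substitute your host and port.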

This step is crucial to ensure seamless communication between the web interface and your Ollama model.

And You're Ready to Go!! 🚀🎉

Congratulations! Your journey to setting up a local LLM with Ollama and its web interface is complete. You're now fully equipped to explore the vast capabilities of AI right from your desktop. Dive in, experiment, and discover the endless possibilities that your new AI setup has to offer!
