Guide to Running Microsoft Phi-3 AI Locally on Windows

Key Notes

  • Microsoft’s Phi-3 is a compact yet robust AI model that can be run locally on Windows machines.
  • Utilize Ollama with the command ollama run phi3 in a terminal to interact with the AI.
  • For a graphical interface, use LM Studio and load a separately downloaded Phi-3 GGUF file.

Unlocking the Power of AI: Running Microsoft’s Phi-3 on Your Windows PC

Microsoft’s Phi-3 AI model has made waves in the tech community, making it essential for enthusiasts and developers to know how to run it locally. This guide covers step-by-step instructions on installing and using Phi-3 via both Ollama and LM Studio, ensuring you can access its capabilities without needing an online connection.

Essential Insights Before You Begin

  • Microsoft’s Phi-3 is a small but powerful AI model that you can run locally on Windows.
  • Install Ollama, then run the command ollama run phi3 in a terminal app (such as Command Prompt). Once Phi-3 is downloaded, you can chat with the AI directly in the terminal.
  • You can also use software such as LM Studio for a graphical interface to chat with Phi-3 locally. Download the Phi-3 GGUF file separately and save it inside LM Studio’s models directory, then load the model within LM Studio and start chatting with Phi-3 on Windows.

Executing Microsoft’s Phi-3 on Windows with Ollama

Ollama provides an excellent framework for running large language models locally. Here’s how to use it to get Microsoft’s Phi-3 up and running on your Windows machine.

Step 1: Acquire and Set Up Ollama

Commence by downloading and installing Ollama:

  1. Go to the Ollama website and click Download for Windows (Preview).
  2. After downloading, run the setup file.
  3. Complete the installation by clicking Install.

Step 2: Execute the Phi-3 Command to Retrieve the Model

To download the Phi-3 model, follow these steps:

  1. Visit Ollama.com and navigate to the Models section.
  2. Search for and locate phi3.
  3. Copy the command available for downloading phi3.
  4. Open the Command Prompt (or a terminal of your choice) from the Start menu.
  5. Paste the copied command within the terminal application.
  6. Press Enter and wait for the Phi-3 model to download.
  7. Once you see the “Send a message” prompt, the model is ready for interaction.
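Beyond the interactive prompt, Ollama also exposes a local REST API (by default at http://localhost:11434) that you can script against once the model is downloaded. Here is a minimal sketch in Python, assuming the Ollama server is running and phi3 has been pulled as described above; the prompt text is just an example:

```python
import json
import urllib.request

# Ollama's default local generate endpoint
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "phi3") -> dict:
    """Build the JSON payload that Ollama's /api/generate endpoint expects."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_phi3(prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the response text."""
    payload = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

For example, calling `ask_phi3("Explain GGUF files in one sentence.")` returns Phi-3’s answer as a string, all without leaving your machine.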

Step 3: Begin Conversations with Microsoft’s Phi-3 LLM

Engage directly with the model in your terminal. Type your prompt and hit Enter. Here are some functionalities you might explore:

  • Testing censorship resistance
  • Assessing comprehension of complex topics
  • Detecting hallucinations
  • Evaluating creativity
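If you would rather script these kinds of probes than type them by hand, the same local Ollama server accepts a chat-style message list via its /api/chat endpoint. A sketch under that assumption; the probe prompts below are purely illustrative:

```python
import json
import urllib.request

# Ollama's default local chat endpoint
CHAT_URL = "http://localhost:11434/api/chat"

# Illustrative probe prompts for the categories above
PROBES = {
    "comprehension": "Summarize the Chandrasekhar limit in two sentences.",
    "hallucination": "Who won the 1897 World Cup?",  # no such event; a good answer says so
    "creativity": "Write a haiku about a rainy Tuesday.",
}

def chat_once(content: str, model: str = "phi3") -> dict:
    """Build a single-turn chat payload for /api/chat."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": content}],
        "stream": False,
    }

def send(payload: dict) -> str:
    """POST the payload to the local Ollama server; returns the assistant's reply."""
    data = json.dumps(payload).encode("utf-8")
    req = urllib.request.Request(
        CHAT_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]
```

Looping `send(chat_once(p))` over the `PROBES` values lets you compare Phi-3’s answers across categories in one run.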

Running Microsoft’s Phi-3 on Windows Using LM Studio

If a simpler, graphical interface is preferable for interacting with Phi-3, consider using LM Studio. Here’s how to set it up:

Step 1: Download and Install LM Studio

  1. Download LM Studio for Windows from its official website.
  2. Open the installer and allow it to complete the installation of LM Studio.

Step 2: Acquire the Phi-3 GGUF File

To integrate Phi-3 with LM Studio, download the GGUF file separately:

  1. Open the Phi-3 GGUF download page (for example, on Hugging Face) and select Files.
  2. Choose a version of the Phi-3 model; we recommend the smaller version for ease of use.
  3. Initiate the download of the selected version.
  4. Save the file in a location that is easy to access.

Step 3: Add the Phi-3 Model into LM Studio

Now, load the downloaded Phi-3 model:

  1. Launch LM Studio and click on My Models.
  2. Locate the ‘Local models folder’ entry and open it via Show in File Explorer.
  3. Create a folder named Microsoft in the directory.
  4. Within the Microsoft folder, create another folder titled Phi-3.
  5. Transfer the downloaded GGUF file into the Phi-3 folder.
  6. After moving the file, it should display in LM Studio (you may need to restart the application).
  7. Select AI Chat on the menu for interaction.
  8. Select Select a model to load and choose the Phi-3 model.
  9. Wait for LM Studio to load the model.
  10. For optimal performance, consider configuring the model to operate through GPU; adjust settings under Advanced Configuration > Hardware Settings.
  11. Change ‘GPU Acceleration’ to Max.
  12. Click on Reload model to apply configuration.
  13. Once the model is fully loaded, begin interactions in LM Studio.
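Besides its chat window, LM Studio can also run a local OpenAI-compatible server (via its Local Server tab, by default at http://localhost:1234/v1), which lets you script against the loaded model. A sketch assuming that server is started with Phi-3 loaded; the model name string is an assumption, as LM Studio serves whichever model is currently loaded:

```python
import json
import urllib.request

# LM Studio's default local server, OpenAI-compatible chat endpoint
SERVER = "http://localhost:1234/v1/chat/completions"

def build_chat_payload(prompt: str, model: str = "phi-3") -> dict:
    """Build an OpenAI-style chat payload for LM Studio's local server."""
    return {
        "model": model,  # placeholder; LM Studio answers with the loaded model
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def ask(prompt: str) -> str:
    """Send the prompt to the local server and return the assistant's reply."""
    data = json.dumps(build_chat_payload(prompt)).encode("utf-8")
    req = urllib.request.Request(
        SERVER, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

Because the endpoint follows the OpenAI chat-completions shape, any OpenAI-compatible client library can be pointed at it as well.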

Step 4: Initiate Your Conversation with Microsoft’s Phi-3 LLM

You are now ready to engage with the model. Enter any prompt and chat with Microsoft’s Phi-3 locally on your Windows machine, no internet connection required.

Summary

This tutorial covered the steps to run Microsoft’s Phi-3 on your Windows PC, whether through the command line with Ollama or the graphical interface of LM Studio. Whichever approach you prefer, you can now explore Phi-3’s capabilities entirely locally, with no internet connection required once the model is downloaded.

FAQ (Frequently Asked Questions)

What to do if Ollama gets stuck while downloading Phi-3?

If Ollama stalls while downloading Phi-3, re-run ollama run phi3; the download should resume from where it stopped.