Running Microsoft Phi-3 AI Locally on Windows: A Step-by-Step Guide
Key Notes
This guide details how you can install and run Microsoft’s Phi-3 AI model on Windows using both Ollama and LM Studio, providing actionable steps for both approaches.
What You Should Know
- Microsoft’s Phi-3 is a compact yet potent AI model that can efficiently run locally on Windows.
- Install Ollama, then run the command ollama run phi3 in your terminal (such as Command Prompt). Once the download completes, you can chat with the AI directly in the terminal.
- Alternatively, use LM Studio for a graphical interface to chat with Phi-3 locally. Download the Phi-3 GGUF file separately, place it in LM Studio’s models directory, and load the model there to start chatting.
Microsoft’s Phi-3 models represent a leap forward in AI, reported to match or outperform larger models such as Llama 3 and Mixtral on several benchmarks. Thanks to its compact size, Phi-3 can run smoothly on your Windows machine. Below, you’ll find detailed instructions on using both Ollama and LM Studio for this purpose.
Executing Microsoft’s Phi-3 on Windows with Ollama
Ollama serves as a comprehensive framework designed for running and experimenting with large language models (LLMs). Let’s explore how to set it up to run Microsoft’s Phi-3 locally on your Windows machine.
Step 1: Get Ollama Installed
Begin by downloading and installing Ollama:
- Ollama for Windows | Download Link
- Click on the above link and select Download for Windows (Preview).
- Once the download is complete, run the setup file.
- Follow the prompts and click Install to finalize the installation of Ollama.
Step 2: Execute the Phi-3 Download Command
Now, let’s download the Phi-3 model through Ollama:
- Visit Ollama.com and navigate to the Models section.
- Search for and select phi3, or scroll down to find it.
- Copy the command that appears to download phi3.
- Open Command Prompt or any terminal app from the Start menu.
- Paste the copied command into the terminal.
- Hit Enter and wait for Phi-3 to finish downloading.
- Once the “Send a message” prompt appears, you’re ready to converse with the AI locally.
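The terminal steps above boil down to two commands. A quick sketch, assuming the Ollama installer has finished and `ollama` is on your PATH:

```shell
# Download Phi-3 (if not already present) and start an interactive chat.
ollama run phi3

# Optionally, confirm the model is now stored locally.
ollama list
```

The same `ollama run phi3` command both downloads the model on first use and opens the chat prompt on subsequent runs.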
Step 3: Engage with Microsoft’s Phi-3 LLM
You can initiate a chat directly within the terminal. Simply enter your prompt and press Enter.
Here are some areas where Phi-3 excels:
- Testing censorship resistance
- Assessing complex topic understanding
- Identifying hallucinations
- Evaluating creativity
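Beyond the interactive prompt, Ollama also exposes a local REST API (on port 11434 by default) that you can script against. A minimal sketch using only the Python standard library; the endpoint and payload fields follow Ollama's documented `/api/generate` route:

```python
import json
import urllib.request

def build_request(prompt, model="phi3"):
    # Payload for Ollama's local /api/generate endpoint.
    # stream=False asks for a single JSON response instead of chunks.
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_request("Summarize what Phi-3 is in one sentence.")
# With Ollama running, urllib.request.urlopen(req) returns a JSON body
# whose "response" field holds the model's reply.
```

This only builds the request; actually sending it requires the Ollama service to be running in the background, which the Windows installer sets up for you.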
Executing Microsoft’s Phi-3 on Windows with LM Studio
If a terminal interface is not what you prefer, LM Studio provides a robust graphical environment for interacting with Microsoft’s Phi-3. Follow these steps to set it up:
Step 1: Install LM Studio
- LM Studio | Download link
- Follow the link to download LM Studio for Windows.
- Once downloaded, execute the installer to set up LM Studio.
Step 2: Acquire the Phi-3 GGUF File
You will need to download the Phi-3 GGUF file separately:
- Phi-3 Mini GGUF file | Download Link
- Click the link above, and navigate to Files.
- Select one of the Phi-3 model versions—preferably the smaller option.
- Hit Download to retrieve the GGUF file.
- Save it in a convenient location on your computer.
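Before importing the file, you can sanity-check that the download is a valid GGUF file: the format begins with the 4-byte ASCII magic `GGUF`. A small Python sketch:

```python
def looks_like_gguf(path):
    # Every GGUF file starts with the ASCII magic bytes "GGUF".
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"
```

If this returns False, the download was likely truncated, or you saved an HTML error page instead of the model file.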
Step 3: Import the Phi-3 Model
Now, let’s load the downloaded Phi-3 model into LM Studio:
- Open LM Studio and select My Models from the left-hand menu.
- Check the ‘Local models folder’ path. Click on Show in File Explorer to access it.
- Create a new folder named Microsoft within this directory.
- Within the Microsoft folder, create a subfolder called Phi-3.
- Place the downloaded Phi-3 GGUF file in the Phi-3 folder.
- The model should now appear in LM Studio. You may need to restart LM Studio for the change to take effect.
- To initiate the Phi-3 model, go to the AI Chat option in the sidebar.
- Select Select a model to load and find the Phi-3 model.
- Allow it to load completely. Consider offloading to the GPU to reduce CPU load by navigating to Advanced Configuration > Hardware Settings.
- Enable Max under ‘GPU Acceleration’ and click on Reload model to apply configuration.
- Once the model finishes loading, you can start chatting with Phi-3.
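The folder-creation steps above can also be scripted. A hedged sketch: `install_gguf` is a hypothetical helper, and `models_root` should be the actual ‘Local models folder’ path shown in LM Studio’s My Models tab:

```python
import shutil
from pathlib import Path

def install_gguf(gguf_file, models_root):
    # LM Studio expects models under <models_root>/<publisher>/<model>/,
    # hence the Microsoft/Phi-3 layout from the steps above.
    dest_dir = Path(models_root) / "Microsoft" / "Phi-3"
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / Path(gguf_file).name
    shutil.copy2(gguf_file, dest)  # copy rather than move, keeping a backup
    return dest
```

After running it, restart LM Studio (or revisit My Models) and the model should be listed for loading.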
Step 4: Engage with Microsoft’s Phi-3 LLM
That’s it! Enter your prompt, and you can chat with Microsoft’s Phi-3 model locally on your Windows PC, even without an internet connection.
Additional Tips
- Ensure that your Windows version meets all necessary system requirements for optimal performance.
- Restart your terminal or LM Studio if models don’t appear as expected after installation.
- Consider checking for updates to Ollama and LM Studio regularly for improved functionalities.
Summary
This guide provides a step-by-step approach to successfully installing and running Microsoft’s Phi-3 model locally on Windows, using both Ollama and LM Studio. Whether you’re comfortable with terminal commands or prefer a graphical user interface, both methods empower you to leverage this AI model effectively.
Conclusion
With Microsoft’s Phi-3 running locally, you can explore advanced AI functionalities directly on your device. Set up either Ollama or LM Studio as instructed, and dive into interactive conversations with Phi-3. Don’t hesitate to experiment and play around with various prompts to fully utilize the capabilities of this exceptional model!
FAQ
How to fix Ollama stuck at downloading Phi-3?
If the Phi-3 download stalls in Ollama, simply re-enter the command ollama run phi3 in the same terminal. The download will resume from where it left off.