# IPEX-LLM Quickstart

> **Note**: We are adding more Quickstart guides.

This section includes efficient guides that show you how to:

- `bigdl-llm` Migration Guide
- Install IPEX-LLM on Linux with Intel GPU
- Install IPEX-LLM on Windows with Intel GPU
- Install IPEX-LLM in Docker on Windows with Intel GPU
- Run Performance Benchmarking with IPEX-LLM
- Run Local RAG using Langchain-Chatchat on Intel GPU
- Run Text Generation WebUI on Intel GPU
- Run Open WebUI on Intel GPU
- Run Coding Copilot (Continue) in VSCode with Intel GPU
- Run Dify on Intel GPU
- Run llama.cpp with IPEX-LLM on Intel GPU
- Run Ollama with IPEX-LLM on Intel GPU
- Run Llama 3 on Intel GPU using llama.cpp and Ollama with IPEX-LLM
- Run IPEX-LLM Serving with FastChat
- Finetune LLM with Axolotl on Intel GPU