Published in OpenVINO-toolkit
How to Run Llama 3.1 Locally with OpenVINO™
Run Llama 3.1 Locally: Optimize and Accelerate Your AI Models with OpenVINO™ and Minimal Code.
Aug 22
Published in OpenVINO-toolkit
Build Agentic-RAG with OpenVINO™ and LlamaIndex
Enhancing LLM Capabilities with Agentic-RAG: A Comprehensive Guide to Building Advanced AI Systems Using OpenVINO™ and LlamaIndex.
Aug 16
Published in OpenVINO-toolkit
How to Build Faster GenAI Apps with Fewer Lines of Code using OpenVINO™ GenAI API
Authors: Raymond Lo, Dmitriy Pastushenkov, Zhuo Wu
Jul 9
Published in OpenVINO-toolkit
How to run and develop your AI app on Intel NPU (Intel AI Boost)
Jan 5
Running Llama2 on CPU with OpenVINO
With the new weight compression feature from OpenVINO, you can now run Llama2-7B with less than 16 GB of RAM on CPUs!
Sep 6, 2023
How to run Your First OpenVINO C++ code with Xeus-cling and Jupyter Lab
Did you know you can write high-performance C++ code that runs on Jupyter Lab? More interestingly, did you know we can run AI/ML in C++…
May 24, 2023
Hot AI Topics April 2023
SAM, OpenVINO Notebooks, Intel N-Series, Gordon Moore…
May 2, 2023
How to save your MacBook Pro (Intel) battery from self-destruction
If you are still using an older MacBook (e.g. MacBook Pro Retina, 15-inch, Mid 2015 or earlier), this tool will significantly reduce the…
Feb 24, 2023
Published in OpenVINO-toolkit
How to run Stable Diffusion on Intel GPUs with OpenVINO
Note: It will also work on CPUs! :)
Feb 15, 2023