Run Large Language Models Locally (2) - Providing an External Knowledge Base to the Model

In the previous article, we demonstrated how to run large language models (LLMs) locally using Ollama. This article focuses on improving answer accuracy by letting the model retrieve custom data from an external knowledge base, making it appear “smarter.” It touches on the concepts of LangChain and RAG, which will not be explained in detail here.

Prepare the Model

Visit Ollama’s model page and search for qwen. This time we will use the “Tongyi Qianwen” model, which has a better understanding of Chinese semantics, for the experiment. ...
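To give a rough sense of the setup the excerpt describes, here is a minimal RAG sketch. It assumes a local Ollama server with the model already fetched (e.g. via `ollama pull qwen`) and the `langchain`, `langchain-community`, and `chromadb` packages installed; the sample documents and the question are hypothetical placeholders, not from the original article.

```python
# Minimal RAG sketch: embed a few texts, index them in an in-memory
# Chroma store, and let a local qwen model answer from retrieved context.
from langchain_community.llms import Ollama
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import Chroma
from langchain.chains import RetrievalQA

# Local model served by Ollama; assumes `ollama pull qwen` has been run.
llm = Ollama(model="qwen")
embeddings = OllamaEmbeddings(model="qwen")

# Toy knowledge base; a real setup would load and split your own files.
docs = [
    "Our product's admin console listens on port 8443.",  # hypothetical fact
    "Support tickets are answered within one business day.",
]
vectorstore = Chroma.from_texts(docs, embedding=embeddings)

# RetrievalQA stuffs the retrieved passages into the prompt for the LLM.
qa = RetrievalQA.from_chain_type(llm=llm, retriever=vectorstore.as_retriever())
print(qa.invoke({"query": "Which port does the admin console use?"}))
```

In this sketch the Chroma index lives in memory and is rebuilt on every run; a persistent knowledge base would instead load documents with LangChain's document loaders and persist the vector store to disk.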

2024-03-04 · caol64