Run a Large Language Model Locally (3) - Enabling the Model to Use Tools Autonomously

This article is the third in a series on running large language models (LLMs) locally. In the previous two articles, we introduced how to run Ollama locally and how to improve the model's answers by providing an external knowledge base. This article continues by exploring how to use the function-calling feature to extend the model's capabilities and take it a step further on the road to "intelligence". Function-calling: according to the official OpenAI documentation, function-calling is a way for large language models to connect to external tools....

2024-03-07 · caol64

Run a Large Language Model Locally (2) - Providing an External Knowledge Base to the Model

In the previous article, we demonstrated how to run large language models (LLMs) locally using Ollama. This article focuses on improving an LLM's accuracy by allowing it to retrieve custom data from an external knowledge base, making it appear "smarter." This article touches on the concepts of LangChain and RAG, which will not be explained in detail here. Prepare the model: visit Ollama's model page, search for qwen, and this time we will...

2024-03-04 · caol64

Run a Large Language Model Locally

With the rise of ChatGPT, LLMs (large language models) have become a hot topic in the fields of artificial intelligence and natural language processing. In this article, I will walk you through the process of running a large language model on your own personal computer. Pros and cons: there are many advantages to running a large language model locally, including privacy protection, no expensive costs, no dependence on network connectivity, and the ability to try out various open-source models. I think the first two advantages are enough for everyone to try....

2024-02-27 · caol64