Run a large language model locally (3) - Enabling the Model to Use Tools Autonomously

This article is the third in a series on running large language models (LLMs) locally. In the previous two articles, we introduced how to run Ollama locally and how to improve the model's answers by providing an external knowledge base. This article continues that exploration, using the function-calling feature to extend the model's capabilities and take it a step further down the road to "intelligence".

Function-calling

According to the OpenAI official documentation, function-calling is a way for large language models to connect to external tools. In short, developers provide the model with a set of tools (functions) in advance. Once the model understands the user's question, it decides whether to call a tool to obtain additional context that helps it make a better decision. ...
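The flow described above can be exercised end to end against a local model. Below is a minimal sketch, assuming Ollama is serving its OpenAI-compatible API on the default port and the chosen model supports tool calls; the get_current_weather tool and the qwen:7b model tag are illustrative placeholders, not anything prescribed by the article.

```python
# Minimal function-calling sketch against a local Ollama server.
# Assumptions: `ollama serve` is running on localhost:11434 and the
# chosen model supports tool calls; get_current_weather is a made-up tool.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

tools = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",  # hypothetical tool
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="qwen:7b",  # any locally pulled, tool-capable model
    messages=[{"role": "user", "content": "What's the weather in Hangzhou?"}],
    tools=tools,
)

# If the model decided a tool is needed, it returns the call instead of text.
message = response.choices[0].message
if message.tool_calls:
    for call in message.tool_calls:
        print(call.function.name, call.function.arguments)
else:
    print(message.content)
```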

2024-03-07 · caol64

Run a large language model locally (2) - Providing an External Knowledge Base to the Model

In the previous article, we demonstrated how to run large language models (LLMs) locally using Ollama. This article focuses on improving the accuracy of an LLM's answers by letting it retrieve custom data from an external knowledge base, making it appear "smarter." The article touches on the concepts of LangChain and RAG, which will not be explained in detail here.

Prepare the Model

Visit Ollama's model page, search for qwen, and this time we will use the "Tongyi Qianwen" model, which has a better grasp of Chinese semantics, for the experiment. ...
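The retrieval flow the article describes can be sketched in a few lines of LangChain. The snippet below is a minimal illustration under stated assumptions: the qwen model has been pulled into Ollama, notes.txt stands in for your own knowledge file, and FAISS (faiss-cpu) serves as the vector store; none of these specifics come from the article itself.

```python
# Minimal RAG sketch with LangChain + a local Ollama model.
# Assumptions: `ollama pull qwen:7b` has been run, faiss-cpu is
# installed, and notes.txt is a placeholder for your own data.
from langchain_community.llms import Ollama
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import FAISS
from langchain.text_splitter import RecursiveCharacterTextSplitter

# 1. Split the custom document into chunks and index them.
with open("notes.txt") as f:
    text = f.read()
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
store = FAISS.from_texts(splitter.split_text(text), OllamaEmbeddings(model="qwen:7b"))

# 2. Retrieve the chunks most relevant to the user's question.
question = "What deadlines are mentioned in my notes?"  # example question
context = "\n".join(d.page_content for d in store.similarity_search(question, k=3))

# 3. Ask the model to answer grounded in the retrieved context.
llm = Ollama(model="qwen:7b")
print(llm.invoke(f"Answer using only this context:\n{context}\n\nQuestion: {question}"))
```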

2024-03-04 · caol64

Run a large language model locally

With the rise of ChatGPT, LLMs (Large Language Models) have become a hot topic in artificial intelligence and natural language processing. In this article, I will walk you through running a large language model on your own personal computer.

Pros and Cons

There are many advantages to running a large language model locally:

- Privacy protection
- No expensive costs
- No dependence on the network
- The chance to try out various open-source models

I think the first two advantages alone are enough reason to try. Everyone has some private data they would rather not send to third parties; if you can use AI to process that data locally, or even offline, isn't that perfect? In addition, no matter how much data you run through a local model, you pay no API or token fees. Are you excited? ...
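As a taste of what "running locally" looks like in practice, here is a quick sanity check against an Ollama server, the tool this series uses. It assumes the server is already running and a model has been pulled; llama2 is used here purely as an example tag, and the endpoint is Ollama's standard REST API.

```python
# Quick sanity check against a locally running Ollama server.
# Assumptions: `ollama serve` is up on the default port and
# `ollama pull llama2` has been run; llama2 is just an example tag.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama2", "prompt": "Hello! Who are you?", "stream": False},
)
print(resp.json()["response"])
```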

2024-02-27 · caol64