Running a Large Language Model Locally (3) - Enabling the Model to Use Tools Autonomously
This article is the third in a series on running large language models (LLMs) locally. In the previous two articles, we introduced how to run Ollama locally and how to improve the model's answers by connecting it to an external database. This article continues the exploration: we will use the function-calling feature to extend the model's capabilities and take it a step further down the road to "intelligence".

Function-calling

According to the OpenAI official documentation, function-calling is a way for large language models to connect to external tools....
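To make this concrete, here is a minimal sketch of one function-calling round trip against a local Ollama server, using its REST chat endpoint. It assumes Ollama is running on its default port (11434) with a tool-capable model such as llama3.1 already pulled; the get_weather function, its canned reply, and the city parameter are hypothetical stand-ins for a real external tool.

```python
# Minimal function-calling round trip with a local Ollama server.
# Assumptions: Ollama is serving on localhost:11434 and a tool-capable
# model (llama3.1 here) has been pulled. get_weather is a hypothetical
# stand-in for a real external tool.
import json
import requests

OLLAMA_CHAT = "http://localhost:11434/api/chat"
MODEL = "llama3.1"

def get_weather(city: str) -> str:
    """Hypothetical local 'tool'; returns a canned forecast as JSON."""
    return json.dumps({"city": city, "forecast": "sunny", "temp_c": 23})

# Describe the tool to the model using a JSON Schema, in the same shape
# the OpenAI documentation uses for function definitions.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Paris?"}]

# Round 1: the model sees the tool schema and may answer with a tool
# call instead of plain text.
resp = requests.post(OLLAMA_CHAT, json={
    "model": MODEL,
    "messages": messages,
    "tools": tools,
    "stream": False,
}).json()

msg = resp["message"]
if msg.get("tool_calls"):
    messages.append(msg)  # keep the assistant's tool request in the history
    for call in msg["tool_calls"]:
        fn = call["function"]
        if fn["name"] == "get_weather":
            # We execute the tool ourselves and feed the result back.
            result = get_weather(**fn["arguments"])
            messages.append({"role": "tool", "content": result})

    # Round 2: the model turns the tool output into a natural-language answer.
    final = requests.post(OLLAMA_CHAT, json={
        "model": MODEL,
        "messages": messages,
        "stream": False,
    }).json()
    print(final["message"]["content"])
else:
    # The model answered directly; no tool was needed.
    print(msg["content"])
```

The key design point is the second round: the model never executes anything itself. It only names the tool and the arguments it wants, and our code runs the function and hands the result back so the model can compose the final answer.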