π€ Working with LLMs Part 5 - Build an AI Assistant Using Tool Calling π
Imagine asking an AI: What's the weather in Tokyo? The AI doesn't actually know the current weatherβit needs to call an external function to fetch real-time data. This is where tool calling (also known as function calling) becomes powerful.
πΊ Tutorial Video
- π€ Working with LLMs Part 5: Build an AI Assistant Using Tool Calling π
β
π― Introduction to Tool Calling
Imagine asking an AI: βWhatβs the weather in Tokyo?β The AI doesnβt actually know the current weatherβit needs to call an external function to fetch real-time data. This is where tool calling (also known as function calling) becomes powerful.
Tool calling enables Large Language Models (LLMs) to:
- π§ Execute external functions
- π Access real-time data
- ποΈ Query databases
- π Read files from your system
- π Make API calls
Instead of just generating text, LLMs can now take actions and retrieve information dynamically, making them truly useful assistants.
π‘ Use Case: Building an AI Documentation Assistant
The Problem
You have dozens of markdown documentation files scattered across directories. Finding specific information requires:
- π Manually searching through files
- π Remembering which file contains what
- β° Wasting time on repetitive lookups
The Solution
Build an AI assistant that:
- β Understands your questions in natural language
- β Automatically finds and reads relevant documentation
- β Provides accurate answers from your local files
- β Saves time and improves productivity
Real-World Scenario
Imagine youβre a developer working on a project with extensive documentation:
π Your Documentation:
docs/
βββ installation-guide.md
βββ api-reference.md
βββ authentication.md
βββ deployment-guide.md
βββ troubleshooting.md
βββ best-practices.md
Traditional Approach (Without AI Assistant):
- π Think βWhere did I document the authentication process?β
- π Open file explorer, browse through files
- π Open authentication.md, scan through content
- β° Spend 5-10 minutes finding the specific section
- π Repeat for every question
With AI Documentation Assistant:
- π¬ Ask: βHow do I authenticate users in the API?β
- β‘ AI instantly reads authentication.md
- π― Gets precise answer in seconds
- β Continue coding without context switching
Use Case Benefits
Without AI Assistant | With AI Assistant |
---|---|
π« Manual file searching | π€ Automatic file detection |
β° 5-10 minutes per lookup | β‘ Instant answers |
π§ Need to remember file structure | π¬ Just ask in plain English |
π Read entire documents | π― Get specific information |
π Context switching disrupts flow | π Stay in your workflow |
Who Benefits?
- π¨βπ» Developers - Quick access to API docs and guides
- π Technical Writers - Verify documentation content
- π New Team Members - Learn codebase faster
- π§ DevOps Engineers - Find deployment procedures
- π₯ Support Teams - Answer customer questions accurately
π How It Works: The Flow
βββββββββββββββ
β User β "What's in the API guide?"
ββββββββ¬βββββββ
β
βΌ
βββββββββββββββββββββββ
β Streamlit UI β Captures user question
ββββββββ¬βββββββββββββββ
β
βΌ
βββββββββββββββββββββββ
β Ollama LLM β Analyzes question + available tools
β + Tool Defs β Decides: "I need to read api-guide.md"
ββββββββ¬βββββββββββββββ
β
βΌ
βββββββββββββββββββββββ
β Tool Function β Executes: read_file("api-guide.md")
β (Python) β Returns: File content
ββββββββ¬βββββββββββββββ
β
βΌ
βββββββββββββββββββββββ
β Ollama LLM β Processes file content
β (with context) β Generates natural answer
ββββββββ¬βββββββββββββββ
β
βΌ
βββββββββββββββββββββββ
β User Gets Answer β "The API guide covers authentication,
βββββββββββββββββββββββ endpoints, and rate limits..."
π οΈ Implementation: Key Components
1οΈβ£ Define Tools (JSON Schema)
First, define what functions your LLM can call:
tools = [
{
"type": "function",
"function": {
"name": "read_file",
"description": "Read the complete content of a markdown file",
"parameters": {
"type": "object",
"properties": {
"file_name": {
"type": "string",
"description": "Name of the markdown file (e.g., 'api-guide.md')"
}
},
"required": ["file_name"]
}
}
},
{
"type": "function",
"function": {
"name": "list_all_files",
"description": "List all available markdown files",
"parameters": {
"type": "object",
"properties": {},
"required": []
}
}
}
]
Key Points:
- π Each tool has a clear
name
anddescription
- π Parameters are defined using JSON Schema
- π― Descriptions help the LLM decide when to use each tool
2οΈβ£ Implement Tool Functions
Create the actual Python functions that do the work:
from pathlib import Path
def read_file(file_name: str) -> str:
"""Read and return markdown file content"""
docs_dir = Path("./docs")
file_path = docs_dir / file_name
if not file_path.exists():
return f"β File not found: {file_name}"
with open(file_path, 'r', encoding='utf-8') as f:
content = f.read()
return f"π Content of {file_name}:\n\n{content}"
def list_all_files() -> str:
"""List all markdown files in directory"""
docs_dir = Path("./docs")
md_files = list(docs_dir.glob('**/*.md'))
result = f"π Found {len(md_files)} file(s):\n\n"
for file in md_files:
result += f" β’ {file.name}\n"
return result
# Map function names to implementations
available_functions = {
'read_file': read_file,
'list_all_files': list_all_files
}
3οΈβ£ Call LLM with Tools
Send user message and tool definitions to the LLM:
import ollama
# User asks a question
user_message = "What's in the installation guide?"
# Call LLM with tools available
response = ollama.chat(
model='llama3.2',
messages=[{'role': 'user', 'content': user_message}],
tools=tools
)
print(response['message'])
What happens:
- π§ LLM analyzes the question
- π― Decides it needs to call
read_file("installation.md")
- π€ Returns a tool call request
4οΈβ£ Execute Tool Calls
When the LLM requests a tool, execute it:
# Check if LLM wants to call a tool
if response['message'].get('tool_calls'):
# Prepare messages list
messages = [{'role': 'user', 'content': user_message}]
messages.append(response['message'])
# Execute each tool call
for tool in response['message']['tool_calls']:
function_name = tool['function']['name']
arguments = tool['function']['arguments']
# Call the actual function
function_to_call = available_functions[function_name]
function_result = function_to_call(**arguments)
# Add result to conversation
messages.append({
'role': 'tool',
'content': function_result
})
# Get final response from LLM with tool results
final_response = ollama.chat(
model='llama3.2',
messages=messages
)
print(final_response['message']['content'])
5οΈβ£ Build the UI with Streamlit
Create an interactive chat interface:
import streamlit as st
st.title("π¬ Documentation Assistant")
# Chat input
if prompt := st.chat_input("Ask about your documentation..."):
# Add user message
st.session_state.messages.append({"role": "user", "content": prompt})
# Get response with tool calling logic
response = get_llm_response(prompt)
# Display response
st.session_state.messages.append({"role": "assistant", "content": response})
π¨ Real-World Example
User Input:
"How do I authenticate with the API?"
Behind the Scenes:
- LLM Decision:
{ "tool_call": "read_file", "arguments": {"file_name": "api-guide.md"} }
- Tool Execution:
# Reads api-guide.md from disk content = """ # API Authentication Use Bearer token authentication: Headers: Authorization: Bearer YOUR_API_KEY """
- LLM Response:
To authenticate with the API, you need to use Bearer token authentication. Include your API key in the Authorization header as follows: Authorization: Bearer YOUR_API_KEY This is documented in the API guide's authentication section.
β¨ Key Benefits
Feature | Benefit |
---|---|
π― Natural Language | Ask questions in plain English |
π Automated Search | No manual file hunting |
π Context-Aware | Understands entire documentation |
π Real-Time | Always reads latest file content |
π¨ User-Friendly | Chat interface anyone can use |
π¦ When to Use Tool Calling
β Good Use Cases:
- π Reading files/documents
- π Fetching real-time data (weather, stocks, news)
- ποΈ Querying databases
- π Running calculations
- π Searching external APIs
β Not Needed For:
- π General knowledge questions
- π€ Creative writing
- π Explaining concepts
- π‘ Brainstorming ideas
π― Quick Start Guide
# 1. Install dependencies
pip install streamlit ollama
# 2. Pull an Ollama model
ollama pull llama3.2
# 3. Create your tool definitions and functions
# 4. Run your app
streamlit run app.py
π Key Takeaways
- Tool calling bridges the gap between LLM intelligence and real-world data
- Define clear tool schemas so the LLM knows when and how to use them
- Implement reliable functions that return structured, useful data
- Combine multiple tools to create powerful AI assistants
- The LLM decides which tools to use based on user questions
π Whatβs Next?
Now that you understand tool calling, you can:
- π Add API integrations (weather, news, databases)
- π Create data analysis assistants
- π€ Build autonomous agents
- π Chain multiple tool calls together
- π Scale to production applications
Tool calling transforms LLMs from text generators into intelligent agents that can take action!
π Resources
- GitHub Repository - AI Documentation Assistant
- Ollama Documentation
- Anthropic Tool Use Guide
- OpenAI Function Calling
- Streamlit Documentation
Ready to build your own AI assistant? Start with a simple use case and expand from there. The possibilities are endless! π