Agent With Tools
Palico provides you the flexibility and tools to build complex agent interactions. With Palico you can run functions on the client and server side, manage state across requests, and stream messages or intermediate steps to the client.
Starter Agent
The starter Palico app from the quickstart comes with a basic LLM agent that can fetch weather data, with human-in-the-loop confirmation before taking action. This agent is a good starting point for understanding how agents work in Palico, and we’ll use it as a reference to explain the different parts of an agent.
The starter agent folder has the following files:
Agent Executor
The agentExecutor() function is the “brain” of your agent. It is responsible for:
- Taking in the current state of the conversation (previous messages + new message)
- Calling the LLM model
- Running tools if needed:
  - if the tool is client-side, pausing execution and asking the client to run the tool
  - if the tool is server-side, executing the tool and continuing
- Recursively calling the LLM with the tool results if needed
- Returning the final response
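The steps above can be sketched in TypeScript roughly as follows. This is an illustrative reimplementation of the loop described here, not Palico’s actual agentExecutor() source; every type and parameter name below is an assumption.

```typescript
// Illustrative executor loop (not Palico's actual agentExecutor() source).
type ToolCall = { id: string; name: string; args: unknown };

type Message =
  | { role: "system" | "user" | "assistant"; content: string; toolCalls?: ToolCall[] }
  | { role: "tool"; content: string; toolCallId: string };

type Tool = {
  name: string;
  // Client-side tools omit `execute`; the loop pauses and hands the call to the client.
  execute?: (args: unknown) => Promise<string>;
};

type LLMResponse = { content: string; toolCalls?: ToolCall[] };

async function runExecutorLoop(
  messages: Message[], // conversation state: previous messages + the new message
  tools: Tool[],
  callLLM: (messages: Message[]) => Promise<LLMResponse>,
  maxSteps = 5 // guard against infinite tool loops
): Promise<{ done: boolean; messages: Message[]; pendingToolCalls?: ToolCall[] }> {
  for (let step = 0; step < maxSteps; step++) {
    const reply = await callLLM(messages);
    messages.push({ role: "assistant", content: reply.content, toolCalls: reply.toolCalls });
    if (!reply.toolCalls?.length) {
      return { done: true, messages }; // no tools requested: this is the final response
    }
    const pendingToolCalls: ToolCall[] = [];
    for (const call of reply.toolCalls) {
      const tool = tools.find((t) => t.name === call.name);
      if (tool?.execute) {
        // Server-side tool: run it and append the result for the next LLM call.
        messages.push({ role: "tool", content: await tool.execute(call.args), toolCallId: call.id });
      } else {
        // Client-side tool: pause and ask the client to run it.
        pendingToolCalls.push(call);
      }
    }
    if (pendingToolCalls.length) {
      return { done: false, messages, pendingToolCalls };
    }
    // Otherwise loop again: call the LLM with the tool results appended.
  }
  throw new Error("max steps exceeded");
}
```

Note how a client-side tool call exits the loop with `done: false`, so the caller can return control to the client and resume later.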
Learn more about Agent Executor Algorithm
This function is defined in the agent_executor.ts file and is called from the index.ts file.
Input Parameters
The current state of the conversation. This includes all the messages previously sent to the LLM model, plus the new message sent by the user. These messages are saved in a database and restored between requests. Learn more about Message
The maximum number of steps the agent can take. This is to prevent infinite loops.
The function that calls the LLM model. By default the starter app uses OpenAI, but you can use your own function. Learn more about Chat Completion
The list of tools that the agent can execute. Learn more about Tools.
A callback function that is called when a tool is executed. This is used to notify the client about the tool call. Learn more about Streaming
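Taken together, the parameters above might be modeled like this. All names here (conversation, chatCompletion, onToolCall, and so on) are hypothetical stand-ins mirroring the descriptions, not Palico’s real field names.

```typescript
// Hypothetical shape of the agent executor's inputs (illustrative names only).
type Message = { role: string; content?: string };
type ToolCall = { id: string; name: string; args: unknown };
type Tool = { name: string; execute?: (args: unknown) => Promise<unknown> };
type ChatCompletionFunction = (messages: Message[], tools: Tool[]) => Promise<Message>;

interface AgentExecutorParams {
  conversation: Message[]; // prior messages (restored from storage) + the new user message
  maxSteps: number; // upper bound on LLM/tool iterations; guards against infinite loops
  chatCompletion: ChatCompletionFunction; // calls the LLM; OpenAI by default in the starter app
  tools: Tool[]; // tools the agent is allowed to execute
  onToolCall?: (call: ToolCall) => void; // streams tool-call events to the client
}
```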
Tools
The core capability of an agent is its ability to take actions. In Palico we have two types of tools:
- Server-side Tools: These are tools that are executed on the server-side, within the runtime of your Palico app. For example: fetching data from an API, querying a database, etc.
- Client-side Tools: These are tools that are executed on the client-side. For example: human-in-the-loop confirmation, getting user’s location, running a script on the client’s machine, etc.
The tools.ts file contains the definitions of these tools. Here’s an example of a client-side and a server-side tool.
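As a rough sketch of what such definitions might look like (the Tool shape below is an assumption for this example, not Palico’s actual schema):

```typescript
// Illustrative sketch of the two tool kinds (not Palico's exact schema).
type Tool = {
  name: string;
  description: string;
  parameters: Record<string, unknown>; // e.g. a JSON Schema describing the arguments
  // Server-side tools define `execute`; client-side tools omit it.
  execute?: (args: any) => Promise<unknown>;
};

// Server-side tool: runs inside your Palico app's runtime.
const getWeather: Tool = {
  name: "get_weather",
  description: "Fetch the current weather for a city",
  parameters: { type: "object", properties: { city: { type: "string" } } },
  execute: async (args: { city: string }) => {
    // A real implementation would call a weather API here.
    return { city: args.city, forecast: "sunny" };
  },
};

// Client-side tool: no `execute`; the client runs it and posts the result back.
const confirmAction: Tool = {
  name: "confirm_action",
  description: "Ask the user to confirm before taking an action",
  parameters: { type: "object", properties: { prompt: { type: "string" } } },
};
```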
Note that client-side tools do not have an execute function, as they are executed on the client-side.
You can modify the tools.ts file to add more tools to your agent.
Client-Side Tool Execution
Sometimes you need to run actions on the client-side. For example, you might want to ask for human confirmation before taking an action. This can be done by returning a toolCall response from the Chat function of your Palico app.
From the client-side, you can run the tool and send the results back to the server. Here’s a simple example of how you can use Palico’s React library to run a tool in a Next.js app.
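As a framework-agnostic sketch of the same round-trip (Palico’s React library wraps this flow for you; the types and the Transport shape here are illustrative assumptions, not the real Client-SDK API):

```typescript
// Illustrative client-side tool round-trip (not the real Client-SDK API).
type ToolCall = { id: string; name: string; args: any };
type AgentResponse = { message?: string; toolCalls?: ToolCall[] };

// Sends a payload to your Palico app and returns the agent's response,
// e.g. a fetch() POST to your conversation endpoint. Injected for testability.
type Transport = (payload: unknown) => Promise<AgentResponse>;

// Local handlers for client-side tools, keyed by tool name.
const clientTools: Record<string, (args: any) => Promise<unknown>> = {
  // A real app would show a confirmation dialog here.
  confirm_action: async (args) => ({ confirmed: true, prompt: args.prompt }),
};

async function chat(send: Transport, userMessage: string): Promise<string | undefined> {
  let response = await send({ userMessage });
  // While the agent is paused on client-side tool calls, run them locally
  // and send the results back so the agent can resume.
  while (response.toolCalls?.length) {
    const toolResults = await Promise.all(
      response.toolCalls.map(async (call) => ({
        toolCallId: call.id,
        result: await clientTools[call.name](call.args),
      }))
    );
    response = await send({ toolResults });
  }
  return response.message;
}
```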
Learn more about Client-SDK
State Management
A conversation with an agent can span multiple messages and tool executions between client and server. For example, if during the execution of your agent you need to ask for human confirmation, you need to save the state of the conversation and restore it when the client sends the confirmation. Palico’s state management tools let you persist this state without operating any storage infrastructure yourself.
Here’s an example of how you can manage state in your agent across multiple messages.
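As a sketch of the pattern, the getState/setState helpers below are hypothetical stand-ins backed by an in-memory Map; Palico’s state management persists the equivalent state for you between requests.

```typescript
// Sketch of conversation-scoped state across requests.
// getState/setState are hypothetical stand-ins, not Palico's real API.
const store = new Map<string, unknown>();

async function getState<T>(conversationId: string): Promise<T | undefined> {
  return store.get(conversationId) as T | undefined;
}

async function setState(conversationId: string, state: unknown): Promise<void> {
  store.set(conversationId, state);
}

type PendingAction = { tool: string; args: unknown };

// Request 1: the agent wants human confirmation, so save the pending action
// and hand a client-side tool call back to the client.
async function requestConfirmation(conversationId: string, action: PendingAction) {
  await setState(conversationId, { pendingAction: action });
  return { toolCall: { name: "confirm_action", args: { prompt: `Run ${action.tool}?` } } };
}

// Request 2: restore the saved action and act on the user's answer.
async function handleConfirmation(conversationId: string, confirmed: boolean) {
  const state = await getState<{ pendingAction: PendingAction }>(conversationId);
  if (!state) throw new Error("no pending action for this conversation");
  await setState(conversationId, {}); // clear the pending action
  return confirmed ? `executing ${state.pendingAction.tool}` : "cancelled";
}
```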
Learn More
Deep-dive into components of an agent
Types
These are common types used in the starter agent.
Message
Message is a structured way to define requests sent to an LLM model across the different requests in a conversation. We save these messages between requests. It contains the following fields:
The sender of the message. This can be: “system” | “user” | “tool” | “assistant”
The content of the message. For system or user, this is often just text.
If an “assistant” message requires a tool to be executed, this field contains the tool call information.
If a “tool” message is a response to a tool call, this field contains the result of the tool call.
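Putting the fields above together, a Message might be modeled like this (the field names are an assumption for illustration, not Palico’s exact type):

```typescript
// Illustrative Message shape matching the fields described above.
type ToolCall = { id: string; name: string; args: unknown };

type Message = {
  role: "system" | "user" | "tool" | "assistant";
  content?: string; // text content; for system/user this is often just text
  toolCalls?: ToolCall[]; // set on assistant messages that request tool execution
  toolCallResult?: { toolCallId: string; result: unknown }; // set on tool messages answering a tool call
};
```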
Chat Completion
ChatCompletionFunction represents a function that:
- Takes in Message[] and Tool[] as input
- Calls an LLM model
- Returns a Message response from the LLM model
The starter app comes with an OpenAI chat completion function, but developers can use their own chat completion function. The function signature is as follows:
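A plausible shape, based on the three-step description above (this is an illustrative reconstruction, not Palico’s verbatim type definition):

```typescript
// Sketch of ChatCompletionFunction per the description above (illustrative).
type Message = { role: string; content?: string };
type Tool = { name: string; description?: string };

type ChatCompletionFunction = (
  messages: Message[],
  tools: Tool[]
) => Promise<Message>;

// Example: a stubbed implementation you could swap in for the OpenAI one.
const echoCompletion: ChatCompletionFunction = async (messages, _tools) => {
  const last = messages[messages.length - 1];
  return { role: "assistant", content: `You said: ${last.content}` };
};
```

Any function matching this shape can replace the default OpenAI-backed implementation, which makes it easy to swap providers or stub the LLM in tests.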