Build Your Application
With Palico you can build complex LLM applications with complete flexibility.
Your First Application
Define your application by creating a folder in `src/agents` and adding an `index.ts` file. The `index.ts` file should export a function of type `Chat`. You have complete control over the implementation details of your chat function. Learn more about the `Chat` function’s interface.
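For example, a minimal handler might look like the sketch below. The `@palico-ai/app` import path and the `userMessage`/`message` field names follow common Palico starter conventions but are assumptions here; check your generated project for the exact names.

```typescript
// src/agents/my_agent/index.ts
// Minimal sketch of a Chat handler. The import path and the
// userMessage/message field names are assumptions based on
// Palico starter conventions.
import { Chat } from "@palico-ai/app";

const handler: Chat = async ({ userMessage }) => {
  // Call your LLM of choice here; this sketch just echoes the input.
  return {
    message: `You said: ${userMessage}`,
  };
};

export default handler;
```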
Preview Changes
You can preview your changes in the Chat Playground in Palico Studio. Start your Palico App (typically with `npm start`) and find your Palico Studio URL in the terminal output. By default, Palico Studio runs on http://localhost:3000.
Streaming Response
You can stream responses to the client using the `stream.push()` method on the `ChatRequest` object.
You can stream chunks of data back to the user such as messages, intermediate steps, or other data.
Learn more about Streaming.
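A minimal streaming sketch, assuming `stream.push()` accepts partial, response-shaped chunks (the chunk shape here is illustrative):

```typescript
import { Chat } from "@palico-ai/app";

// Streaming sketch: push chunks to the client as they become available.
// The chunk shape ({ message }) is an assumption; adjust to your setup.
const handler: Chat = async ({ userMessage, stream }) => {
  const words = `Echoing: ${userMessage}`.split(" ");
  for (const word of words) {
    stream.push({ message: word + " " });
  }
  // Stream-based handlers are not expected to return a ChatResponse.
};

export default handler;
```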
Multi-Turn Conversations
Oftentimes, LLM applications are multi-turn conversations between your agent and your client. Palico helps you manage these conversations by providing a `conversationId` and a `requestId` as part of the request input. Each request has a unique `requestId`, and all requests in a conversation share the same `conversationId`.
Long-Term Memory
With Palico you can create and restore conversation state without worrying about the underlying storage infrastructure. This lets you build multi-turn conversational applications such as chatbots with memory, or complex agent interactions.
Learn more about Conversation State Management.
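A sketch of the pattern. The `getState`/`saveState` helpers below are hypothetical placeholders for whatever state API your Palico version exposes; see the Conversation State Management docs for the real interface.

```typescript
import { Chat } from "@palico-ai/app";

// Hypothetical state helpers -- stand-ins for Palico's conversation
// state API, not real exports.
declare function getState<T>(conversationId: string): Promise<T | undefined>;
declare function saveState<T>(conversationId: string, state: T): Promise<void>;

interface MemoryState {
  history: string[];
}

const handler: Chat = async ({ conversationId, userMessage }) => {
  // Restore any prior turns for this conversation.
  const state =
    (await getState<MemoryState>(conversationId)) ?? { history: [] };
  state.history.push(userMessage ?? "");
  await saveState(conversationId, state);
  return { message: `This conversation has ${state.history.length} turn(s).` };
};

export default handler;
```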
Calling Other Agents
You can call other agents using the `Agent.chat()` method. For example, suppose you have another agent called `my_other_agent` that you want to call from `my_agent`.
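A sketch of what that might look like (the exact parameter and response field names are assumptions; check the `Agent.chat()` reference):

```typescript
// src/agents/my_agent/index.ts
import { Agent, Chat } from "@palico-ai/app";

// Delegate part of the work to another agent. The parameter names
// (agentName, userMessage) are assumptions.
const handler: Chat = async ({ userMessage }) => {
  const response = await Agent.chat({
    agentName: "my_other_agent",
    userMessage,
  });
  return { message: response.message };
};

export default handler;
```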
It’s better to encapsulate the different non-deterministic parts of your application (e.g. LLM model calls) into separate agents. This way you can improve each agent independently and, in turn, improve the overall application.
Chat Handler Function
`Chat` is the function you implement to define your application logic. It takes a `ChatRequest` as input and returns a `ChatResponse` as output; for stream-based applications, no return value is expected. The input and output of the function are described below.
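Conceptually, the signature looks roughly like this simplified sketch (the real `Chat` type may carry generics for payload and configuration):

```typescript
import type { ChatRequest, ChatResponse } from "@palico-ai/app";

// Simplified sketch of the Chat handler type: stream-based handlers
// resolve without returning a ChatResponse.
type Chat = (request: ChatRequest) => Promise<ChatResponse | void>;
```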
Request Input
- Unique identifier for the conversation.
- Indicates whether this is the first request in the conversation.
- The message sent by the user.
- Additional data sent by the user.
- For client-side tool execution, the results of the tool call. Learn more about tool executions with Agents.
- Configuration data for how to execute the application. This can be treated as feature flags and used to swap different LLM models, prompts, or other configurations. Learn more about App Config.
- An object used to stream chunks of data back to the user, such as messages, intermediate steps, or other data. Learn more about Streaming.
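Putting those together, the request shape looks roughly like this (field names are inferred from the descriptions above and may differ from the actual `ChatRequest` type):

```typescript
// Rough sketch of ChatRequest; field names are inferred from the
// descriptions above, not taken from the actual type definition.
interface ChatRequest {
  conversationId: string;              // unique per conversation
  requestId: string;                   // unique per request
  isNewConversation: boolean;          // first request in the conversation?
  userMessage?: string;                // the message sent by the user
  payload?: Record<string, unknown>;   // additional user-provided data
  toolCallResults?: unknown[];         // results of client-side tool calls
  appConfig?: Record<string, unknown>; // feature-flag style configuration
  stream: { push: (chunk: unknown) => void }; // streaming interface
}
```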
Response Output
- The message to be sent back to the user.
- Additional data to be sent back to the user.
- For client-side tool execution, the tool calls to be executed. Learn more about tool executions with Agents.
- Intermediate steps that the agent has taken. These can be used for debugging or logging purposes, or to provide additional context to the client.
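Putting those together, the response and intermediate step shapes look roughly like this (field names are inferred from the descriptions above and may differ from the actual types):

```typescript
// Rough sketch of ChatResponse and IntermediateStep; field names are
// inferred from the descriptions above, not from the actual types.
interface IntermediateStep {
  name: string;                   // label for the step
  data?: Record<string, unknown>; // arbitrary step details
}

interface ChatResponse {
  message?: string;                       // message sent back to the user
  data?: Record<string, unknown>;         // additional data for the client
  toolCalls?: unknown[];                  // client-side tool calls to execute
  intermediateSteps?: IntermediateStep[]; // steps the agent has taken
}
```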