Your First Application
Define your application by creating a folder in the `src/agents` directory and adding an `index.ts` file. The `index.ts` file should export a function of type `Chat`. You have complete control over the implementation details of your chat function.
src/agents/my_agent/index.ts
The `Chat` function’s interface is described in the Chat Handler Function section below.
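A minimal sketch of what such an `index.ts` could look like. The real `ChatRequest`/`ChatResponse` types come from the Palico SDK; the local stand-in types and field names below are assumptions that only approximate the interface described in this guide:

```typescript
// Stand-in types — assumptions approximating the Palico SDK's
// ChatRequest/ChatResponse; the real types are imported from the SDK.
interface ChatRequest {
  conversationId: string;   // shared by all requests in a conversation
  requestId: string;        // unique per request
  isNewConversation: boolean;
  userMessage?: string;     // field name is an assumption
}

interface ChatResponse {
  message?: string;
  data?: Record<string, unknown>;
}

type Chat = (request: ChatRequest) => Promise<ChatResponse>;

// A trivial chat handler: echo the user's message back.
const handler: Chat = async (request) => {
  return { message: `You said: ${request.userMessage ?? ""}` };
};

export default handler;
```

The only hard requirement stated by the guide is that the file exports a function of type `Chat`; everything inside the function body is up to you.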
Preview Changes
You can preview your changes in the Chat Playground in Palico Studio. Start your Palico App by running the following command:

Streaming Response
You can stream responses to the client using the `stream.push()` method in the `ChatRequest` object.
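A sketch of a streaming handler, assuming the request exposes a `stream` object with a `push()` method as described above; the chunk shape and field names here are assumptions, not the exact Palico types:

```typescript
// Assumed chunk shape — the SDK's actual chunk type may differ.
interface ChatResponseChunk {
  message?: string;
}

// Assumed request shape: only the `stream` member matters here.
interface StreamingChatRequest {
  userMessage?: string;
  stream: { push: (chunk: ChatResponseChunk) => void };
}

// Push the reply piece by piece instead of returning it all at once.
const streamingHandler = async (request: StreamingChatRequest): Promise<void> => {
  const words = ["Hello", "from", "a", "streamed", "response"];
  for (const word of words) {
    request.stream.push({ message: word });
  }
  // No return value: for stream-based handlers, output goes through the stream.
};
```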
Multi-Turn Conversations
Oftentimes, LLM applications are multi-turn conversations between your agent and your client. Palico helps you manage these conversations by providing a `conversationId` and a `requestId` as part of the request input. Each request has a unique `requestId`, and all requests in a conversation share the same `conversationId`.
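The relationship between the two identifiers can be sketched as follows. The `Map`-based history below is purely illustrative (with Palico, conversation state is managed for you), and the field names are assumptions:

```typescript
// Illustration of how conversationId / requestId tie requests together.
interface TurnRequest {
  conversationId: string;   // same value across a whole conversation
  requestId: string;        // unique per request
  isNewConversation: boolean;
  userMessage: string;
}

// Illustrative only — not how Palico actually stores state.
const history = new Map<string, string[]>();

function handleTurn(request: TurnRequest): number {
  // All requests sharing a conversationId accumulate into one history.
  if (request.isNewConversation) {
    history.set(request.conversationId, []);
  }
  const turns = history.get(request.conversationId) ?? [];
  turns.push(request.userMessage);
  history.set(request.conversationId, turns);
  return turns.length; // turn number within this conversation
}
```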
Long-Term Memory
With Palico you can create and restore conversation state without worrying about the underlying storage infrastructure. This allows you to build multi-turn conversation applications such as chatbots with memory, or complex agent interactions.

Calling Other Agents
You can call other agents using the `Agent.chat()` method. For example, let’s say you have another agent called `my_other_agent`:

You can call `my_other_agent` from `my_agent` like this:
agents/my_agent/index.ts
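A sketch of the delegation pattern. The `Agent` object below is a local stand-in for Palico's `Agent.chat()` helper; its exact signature in the SDK may differ, so treat the parameter names as assumptions:

```typescript
// Assumed parameter shape for Agent.chat() — names are illustrative.
interface AgentChatParams {
  agentName: string;
  userMessage: string;
}

// Local stand-in: the real Agent.chat() would route the message to the
// named agent's own Chat handler.
const Agent = {
  chat: async (params: AgentChatParams): Promise<{ message: string }> => {
    return { message: `(${params.agentName}) handled: ${params.userMessage}` };
  },
};

// Inside my_agent's chat handler, forward the message to my_other_agent.
const delegate = async (userMessage: string) => {
  const response = await Agent.chat({
    agentName: "my_other_agent",
    userMessage,
  });
  return { message: response.message };
};
```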
Chat Handler Function
`Chat` is a function you implement to define your application logic. It takes a `ChatRequest` as input and returns a `ChatResponse` as output. For stream-based applications, no return value is expected. The input and output of the function are defined as follows:
Request Input

- Unique identifier for the conversation (the `conversationId`).
- Indicates if this is the first request in the conversation.
- The message sent by the user.
- Additional data sent by the user.
- For client-side tool execution, the results of the tool call. Learn more about tool executions with Agents.
- Configuration data for how to execute the application. This can be treated as feature flags and can be used to swap different LLM models, prompts, or other configurations. Learn more about App Config.
Response Output

- The message to be sent back to the user.
- Additional data to be sent back to the user.
- For client-side tool execution, the tool calls to be executed. Learn more about tool executions with Agents.
- Intermediate steps that the agent has taken. This can be used for debugging or logging purposes, or to provide additional context to the client. An intermediate step is defined as:
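The original definition block is not reproduced here. As a rough sketch, an intermediate step and the response fields described above might look like the following; every field name in this block is an assumption, not the SDK's exact type:

```typescript
// Hypothetical shape of an intermediate step — field names are guesses
// based on the description above, not the SDK's actual definition.
interface IntermediateStep {
  name: string;                     // hypothetical: label for the step
  data?: Record<string, unknown>;   // hypothetical: step details
}

// Hypothetical response shape mirroring the field descriptions above.
interface ChatResponseSketch {
  message?: string;                       // message sent back to the user
  data?: Record<string, unknown>;         // additional data for the client
  toolCalls?: unknown[];                  // client-side tool calls to execute
  intermediateSteps?: IntermediateStep[]; // steps taken, for debugging/logging
}

const example: ChatResponseSketch = {
  message: "done",
  intermediateSteps: [{ name: "lookup", data: { rows: 3 } }],
};
```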