Your First Application

Define your application by creating a folder in the src/agents directory and adding an index.ts file to it.

src/
  agents/
    my_agent/
      index.ts

The index.ts file should export a function of type Chat. You have complete control over the implementation details of your chat function.

src/agents/my_agent/index.ts
import { Chat } from '@palico-ai/app';
import { getOpenAIClient } from '../../utils/openai';

// 1. implement Chat function
const handler: Chat = async ({ userMessage }) => {
  // 2. implement your application logic
  if (!userMessage) throw new Error('User message is required');

  const response = await getOpenAIClient().chat.completions.create({
    model: 'gpt-3.5-turbo-0125',
    messages: [{ role: 'user', content: userMessage }],
  });

  const responseMessage = response.choices[0].message.content;
  if (!responseMessage) {
    throw new Error('No response message from OpenAI');
  }

  return {
    message: responseMessage,
    data: { /* additional data */ }
  };
};

// 3. export the handler
export default handler;

Learn more about the Chat function’s interface.

Preview Changes

You can preview your changes in the Chat Playground in Palico Studio. Start your Palico App by running the following command:

npm start

Find your Palico Studio URL in the terminal output. It should look something like this:

Palico Studio: http://localhost:3000
Database URL: postgresql://root:root@localhost:5433/palicoapp
API URL: http://localhost:8000

By default, the Palico Studio runs on http://localhost:3000.

Streaming Response

You can stream responses to the client using the stream.push() method on the ChatRequest object.

import { Chat } from '@palico-ai/app';
import { getOpenAIClient } from '../../utils/openai';

const handler: Chat = async ({ userMessage, stream }) => {
  // call the LLM with streaming enabled
  const response = await getOpenAIClient().chat.completions.create({
    model: 'gpt-3.5-turbo-0125',
    stream: true,
    messages: [{ role: 'user', content: userMessage }],
  });
  for await (const chunk of response) {
    if (chunk.choices[0].delta.content) {
      // stream chunks back to the client
      stream.push({
        message: chunk.choices[0].delta.content,
        data: { /* additional data */ },
        intermediateSteps: [ /* intermediate steps */ ],
      });
    }
  }
};

You can stream chunks of data back to the user, such as messages, intermediate steps, or other data.

Learn more about Streaming.

Multi-Turn Conversations

LLM applications are often multi-turn conversations between your agent and your client. Palico helps you manage these conversations by providing a conversationId and a requestId as part of the request input. Each request has a unique requestId, and all requests in a conversation share the same conversationId.

const handler: Chat = async ({
  conversationId,
  requestId,
  isNewConversation,
}) => {
  // your application logic
};
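
For example, a handler can branch on isNewConversation and log both IDs for tracing. Below is a minimal sketch; the greeting and the log format are illustrative, not part of the Palico API:

import { Chat } from '@palico-ai/app';

const handler: Chat = async ({
  conversationId,
  requestId,
  isNewConversation,
  userMessage,
}) => {
  // conversationId groups requests; requestId is unique per request
  console.log(`conversation ${conversationId} / request ${requestId}`);

  if (isNewConversation) {
    // first request in this conversation
    return { message: 'Hi! What can I help you with today?' };
  }
  return { message: `You said: ${userMessage}` };
};

export default handler;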

Long-Term Memory

With Palico you can create and restore conversation state without worrying about the underlying storage infrastructure. This lets you build multi-turn conversational applications such as chatbots with memory or complex agent interactions.

import {
  Chat,
  // import conversation state management functions
  getConversationState,
  setConversationState,
} from "@palico-ai/app";
import { getOpenAIClient } from "../../utils/openai";

// minimal message shape stored in conversation state
type StateMessage = { role: "user" | "assistant"; content: string };

const handler: Chat = async ({
  conversationId,
  isNewConversation,
  userMessage,
}) => {
  if (!userMessage) throw new Error("User message is required");

  let state: {
    messages: StateMessage[];
  };
  // create or restore conversation state
  if (isNewConversation) {
    state = { messages: [] };
  } else {
    state = await getConversationState(conversationId);
  }

  // add user message to conversation state and call the LLM
  state.messages.push({ role: "user", content: userMessage });
  const response = await getOpenAIClient().chat.completions.create({
    model: "gpt-3.5-turbo-0125",
    messages: state.messages,
  });
  const responseMessage = response.choices[0].message.content;
  if (!responseMessage) {
    throw new Error("No response message from OpenAI");
  }
  // add assistant message to conversation state
  state.messages.push({ role: "assistant", content: responseMessage });

  // save conversation state
  await setConversationState(conversationId, state);
  return { message: responseMessage };
};

Learn more about Conversation State Management.

Calling Other Agents

You can call other agents using the Agent.chat() method. For example, let’s say you have another agent called my_other_agent:

src/
  agents/
    my_agent/
      index.ts
    my_other_agent/
      index.ts

You can call my_other_agent from my_agent like this:

src/agents/my_agent/index.ts
import { Chat, Agent } from '@palico-ai/app';

const handler: Chat = async ({ userMessage }) => {
  const response = await Agent.chat({
    agentName: 'my_other_agent', // refers to the agent folder name
    userMessage,
    appConfig: { model: 'gpt-3.5-turbo' },
  });
  return {
    message: response.message,
  };
};

It’s better to encapsulate the different non-deterministic parts of your application (e.g. LLM model calls) into separate agents. This way you can improve each agent independently and ultimately improve the overall application.
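
For example, you might split a task into a drafting step and a reviewing step, each owned by its own agent. This is only a sketch: the agent names draft_agent and review_agent are hypothetical, and appConfig is simply passed through:

import { Chat, Agent } from '@palico-ai/app';

const handler: Chat = async ({ userMessage, appConfig }) => {
  // step 1: a dedicated agent drafts a response
  const draft = await Agent.chat({
    agentName: 'draft_agent', // hypothetical agent folder
    userMessage,
    appConfig,
  });
  // step 2: a second agent reviews and refines the draft
  const reviewed = await Agent.chat({
    agentName: 'review_agent', // hypothetical agent folder
    userMessage: draft.message,
    appConfig,
  });
  return { message: reviewed.message };
};

export default handler;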

Chat Handler Function

Chat is the function you implement to define your application logic. It takes a ChatRequest as input and returns a ChatResponse as output. For stream-based applications, no return value is expected. The input and output of the function are defined as follows:

Request Input

conversationId
string
required

Unique identifier for the conversation. All requests in the same conversation share this ID.

requestId
string
required

Unique identifier for this request.

isNewConversation
boolean
required

Indicates if this is the first request in the conversation.

userMessage
string

The message sent by the user.

payload
json

Additional data sent by the user.

toolCallResults
ToolCallResult[]

For client-side tool execution, the results of the tool call. Learn more about tool executions with Agents.

appConfig
json

Configuration data for how to execute the application. This can be treated as feature-flags and can be used to swap different LLM models, prompts, or other configurations. Learn more about App Config.

stream
ChatResponseStream
required

Object used to stream chunks of data back to the user such as messages, intermediate steps, or other data. Learn more about Streaming.
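
As an illustration of how these inputs fit together, the sketch below reads both payload and appConfig inside a handler. The userName and model fields are hypothetical; your client and your agent only need to agree on a shape:

import { Chat } from '@palico-ai/app';

const handler: Chat = async ({ userMessage, payload, appConfig }) => {
  // hypothetical payload field agreed upon with your client
  const userName = payload?.userName ?? 'there';
  // hypothetical feature flag: let appConfig pick the model
  const model = appConfig?.model ?? 'gpt-3.5-turbo-0125';
  // ...call your LLM of choice with `model` here...
  return {
    message: `Hello ${userName}!`,
    data: { modelUsed: model },
  };
};

export default handler;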

Response Output

message
string

The message to be sent back to the user.

data
json

Additional data to be sent back to the user.

toolCalls
ToolCall[]

For client-side tool execution, the tool calls to be executed. Learn more about tool executions with Agents.

intermediateSteps
IntermediateStep[]

Intermediate steps that the agent has taken. These can be used for debugging or logging, or to provide additional context to the client. An intermediate step is defined as:

{
  name: string; // name or description of the intermediate step
  data?: any; // additional data for the intermediate step
}
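
For example, a retrieval-style handler might report which documents it fetched alongside its final answer. searchDocuments and answerWithDocuments below are hypothetical helpers standing in for your own retrieval and LLM logic:

import { Chat } from '@palico-ai/app';

const handler: Chat = async ({ userMessage }) => {
  // hypothetical helpers standing in for your own retrieval + LLM logic
  const documents = await searchDocuments(userMessage);
  const responseMessage = await answerWithDocuments(userMessage, documents);
  return {
    message: responseMessage,
    intermediateSteps: [
      { name: 'retrieved-documents', data: { count: documents.length } },
    ],
  };
};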

What’s Next?