# Palico AI

## Docs

- [Conversation Reply](https://docs.palico.ai/client-sdk/api/agent/conversation_reply.md): Reply to a conversation with an agent.
- [New Conversation](https://docs.palico.ai/client-sdk/api/agent/new_conversation.md): Start a new conversation with an agent.
- [Getting Started](https://docs.palico.ai/client-sdk/api/introduction.md): The Palico REST API allows you to interact with your Palico app from any application.
- [Getting Started](https://docs.palico.ai/client-sdk/introduction.md)
- [Getting Started](https://docs.palico.ai/client-sdk/react-sdk/introduction.md)
- [Components](https://docs.palico.ai/components.md): Learn about the different components of a Palico app.
- [Agent With Tools](https://docs.palico.ai/guides/agents.md): Palico provides the flexibility and tools to build complex agent interactions. With Palico you can run functions on the client and server side, manage state across requests, and stream messages or intermediate steps to the client.
- [AI Gateway](https://docs.palico.ai/guides/ai_gateway.md): AI Gateway is a service that lets you use different LLM models from a single endpoint.
- [Build Your Application](https://docs.palico.ai/guides/build.md): With Palico you can build complex LLM applications with complete flexibility.
- [Long-Term Memory](https://docs.palico.ai/guides/conversation_state.md): Create and restore conversation state without worrying about the underlying storage infrastructure. Build chatbots with memory or complex stateless agent interactions.
- [Deployment](https://docs.palico.ai/guides/deployment.md): Deploy your Palico app to production with Docker. Set up CI/CD and pull request previews with Coolify.
- [Experiments](https://docs.palico.ai/guides/experiments.md): To improve the performance of an application, we need to set up an iterative loop that helps us _systematically_ measure and improve its performance.
- [Feature Flags](https://docs.palico.ai/guides/feature_flag.md)
- [Prompt Management](https://docs.palico.ai/guides/prompt_management.md): This guide will help you develop a structured way to manage your prompts.
- [Streaming](https://docs.palico.ai/guides/streaming.md): You can stream messages, intermediate steps, and other data back to the user using the [ChatResponseStream](https://typedoc.palico.ai/classes/_palico_ai_app.ChatResponseStream.html) object. You can access this object from your `Chat` function's input.
- [Logs And Traces](https://docs.palico.ai/guides/telemetry.md): Add custom logs and traces and view them in Palico Studio.
- [LangChain](https://docs.palico.ai/integrations/langchain.md): Palico provides complete flexibility over the implementation details of your LLM application, so you can use libraries like LangChain to help build your LLM application logic.
- [Llama Index](https://docs.palico.ai/integrations/llamaindex.md): Palico provides complete flexibility over the implementation details of your LLM application, so you can use libraries like LlamaIndex to help build your LLM application logic.
- [Model Providers](https://docs.palico.ai/integrations/llm_providers.md): You can use any LLM provider with Palico. Here are some examples of popular LLM providers.
- [Vector Database](https://docs.palico.ai/integrations/vector_db.md): Learn how to use a vector database to store and retrieve vectors for your LLM application.
- [Quickstart](https://docs.palico.ai/quickstart.md): Set up a Palico app locally in just a few steps.

## OpenAPI Specs

- [openapi](https://docs.palico.ai/api-reference/openapi.json)

## Optional

- [Palico.AI](https://www.palico.ai/)
- [Blog](https://blog.palico.ai/)
- [Community](https://discord.gg/TyQPVSXa)
- [Types](https://typedoc.palico.ai/)