
Basic Info

The Basic Info section allows you to configure the core technical settings for your AI Agent, including the provider, language model, and response behavior settings.

Basic Info section showing core technical settings for your AI Agent

Provider

Select the AI provider that will power your AI Agent. The provider you choose determines which language models are available. Available providers include:

  • OpenAI - Access to GPT models including GPT-4o, GPT-4o mini, and more

  • Google - Access to Google's Gemini models

  • Anthropic - Access to Claude models

  • DeepSeek - Access to DeepSeek models

  • Alibaba - Access to Alibaba's language models

Choose an AI provider to power your AI Agent
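
To make the relationship concrete, here is a minimal sketch of how a provider narrows the set of selectable models. It is purely illustrative: Kaily is configured through its dashboard rather than code, the provider names come from the list above, and the model entries are placeholders rather than an authoritative catalog.

```typescript
// Hypothetical mapping: selecting a provider determines which models can be
// chosen in the next step. The model lists are illustrative placeholders.
const modelsByProvider: Record<string, string[]> = {
  OpenAI: ["GPT-4o", "GPT-4o mini"],
  Google: ["Gemini models"],
  Anthropic: ["Claude models"],
  DeepSeek: ["DeepSeek models"],
  Alibaba: ["Alibaba language models"],
};

function availableModels(provider: string): string[] {
  return modelsByProvider[provider] ?? [];
}

console.log(availableModels("OpenAI")); // ["GPT-4o", "GPT-4o mini"]
```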

LLM Model

After selecting a provider, choose the specific Large Language Model (LLM) that will power your AI Agent. The model you select determines the capabilities, performance, and cost of your AI Agent instance.

Each model has different characteristics:

  • Performance: Some models offer faster response times, while others provide more advanced reasoning capabilities
  • Cost: Different models consume credits at different rates per message
  • Capabilities: Models vary in their ability to handle complex queries, context understanding, and specialized tasks

Choose the specific language model that will power your AI Agent

note

Different LLM models may consume credits at different rates. Review the cost structure displayed next to each model to manage usage effectively.
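
To make the cost note concrete, the sketch below shows how a per-message credit rate translates into total usage. The rates are invented for illustration only; the actual rate for each model is the one displayed next to it in the Kaily UI.

```typescript
// Hypothetical per-message credit rates; real rates are shown next to each
// model in the UI and may differ.
const creditsPerMessage = {
  "GPT-4o": 5,
  "GPT-4o mini": 1,
} as const;

type ModelName = keyof typeof creditsPerMessage;

// Estimate credits consumed for an expected message volume.
function estimateCredits(model: ModelName, messages: number): number {
  return creditsPerMessage[model] * messages;
}

// 1,000 messages on a lighter model vs. a more capable one (illustrative numbers).
console.log(estimateCredits("GPT-4o mini", 1000)); // 1000 credits
console.log(estimateCredits("GPT-4o", 1000));      // 5000 credits
```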

Message Streaming

Message Streaming controls how responses are delivered to users during conversations.

When enabled (default):

  • Responses are streamed in real-time as they're generated
  • Users see the message appear word-by-word, creating a more interactive and engaging experience
  • This provides immediate feedback and makes the conversation feel more natural

When disabled:

  • The full message will appear only once it's completely generated
  • Users wait for the entire response before seeing any content
  • This may result in longer perceived wait times, but ensures complete messages are displayed at once

Enable or disable real-time message streaming for user responses
tip

Enable Message Streaming for a more engaging user experience, especially for longer responses. Disable it if you prefer to show complete messages at once.
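
If you are curious what the difference looks like under the hood, the sketch below contrasts the two delivery modes in generic terms. It is not Kaily's API; `generateTokens` is a hypothetical token source used only to show when text becomes visible to the user.

```typescript
// Hypothetical token source standing in for the model's output stream.
async function* generateTokens(): AsyncGenerator<string> {
  for (const token of ["Hello", ", ", "how ", "can ", "I ", "help?"]) {
    await new Promise((resolve) => setTimeout(resolve, 200)); // simulate generation delay
    yield token;
  }
}

// Streaming enabled: each token is rendered as soon as it is produced,
// so the user watches the reply grow word by word.
async function renderStreamed(show: (text: string) => void): Promise<void> {
  let text = "";
  for await (const token of generateTokens()) {
    text += token;
    show(text);
  }
}

// Streaming disabled: the reply is rendered only once it is complete,
// so the user waits and then sees the full message at once.
async function renderBuffered(show: (text: string) => void): Promise<void> {
  let text = "";
  for await (const token of generateTokens()) {
    text += token;
  }
  show(text);
}

// Example: print the progressively growing reply to the console.
renderStreamed((text) => console.log(text));
```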

AI Action Execution Messages

The AI Action Execution Messages setting controls whether users receive feedback when AI actions are executed during conversations.

When enabled:

  • Users will see success or failure messages when an AI action is executed
  • This provides transparency about what actions the AI Agent is performing
  • Users are informed about the outcome of each action, improving trust and understanding

When disabled:

  • No execution messages are displayed to users
  • Actions are performed silently in the background
  • This creates a cleaner conversation flow but provides less visibility into AI Agent actions

Control whether users receive feedback when AI actions are executed

tip

Enable AI Action Execution Messages to provide transparency and build user trust. Disable them for a cleaner, more streamlined conversation experience.
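
Conceptually, this toggle decides whether the outcome of an executed action is surfaced in the conversation or kept internal. The sketch below is a hypothetical illustration of that behavior, not Kaily's implementation; the `ActionResult` shape and the message texts are invented.

```typescript
// Hypothetical result of an AI action executed during a conversation.
interface ActionResult {
  name: string; // e.g. "create_ticket"
  success: boolean;
  error?: string;
}

// When the setting is enabled, the outcome is reported back to the user;
// when disabled, the action runs silently and nothing is added to the chat.
function actionFeedback(
  result: ActionResult,
  executionMessagesEnabled: boolean
): string | null {
  if (!executionMessagesEnabled) return null;
  return result.success
    ? `Action "${result.name}" completed successfully.`
    : `Action "${result.name}" failed: ${result.error ?? "unknown error"}.`;
}

console.log(actionFeedback({ name: "create_ticket", success: true }, true));
// -> Action "create_ticket" completed successfully.
console.log(actionFeedback({ name: "create_ticket", success: true }, false));
// -> null (nothing is shown to the user)
```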
