How to Configure AI Services
The AI Services section in Geta.ai allows you to configure and manage the AI models used across your account.
These AI models power:
Smart Reply Assistant (Inbox Module)
AI Template Generator
Other AI-driven features across the platform
By selecting the right model, you control:
Response quality
Response speed
Usage cost
AI provider preference
Where AI Services Are Used
The selected AI model is applied to:
1. Smart Reply Assistant (Inbox)
Automatically generates AI-powered replies for customer conversations.
2. AI Template Generator
Helps create WhatsApp templates using AI.
⚠️ Important: The model selected in AI Services directly affects cost and output quality in these modules.
How to Configure Smart AI LLM
Step 1: Navigate to AI Services
Log in to your Geta.ai account
Go to Settings
Click on AI Services
The AI Services Configuration page opens
Step 2: Select Smart AI LLM
Click on Smart AI LLM
You will see:
100+ models
Multiple providers (OpenAI, Google, Anthropic, DeepSeek, etc.)
Pricing per model
Step 3: Filter and Select a Model
Use Filter by Provider dropdown
Choose from:
All Providers
OpenAI
Google
Anthropic
DeepSeek
etc.
Browse the model cards
Each model displays:
Model Name
Input Price (per million tokens)
Output Price (per million tokens)
Select Model button
Click Select Model
Click Save
Your selected model will now power:
Inbox Smart Reply
AI Template Generator
Understanding AI Model Pricing
Each AI model has:
Input Price: the cost of tokens you send to the model
Output Price: the cost of tokens the model generates
Tokens represent chunks of text (words, punctuation, characters).
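If you want a rough sense of how many tokens a message uses, the sketch below counts them locally with OpenAI's open-source tiktoken tokenizer. This is an optional, illustrative check done outside Geta.ai; non-OpenAI providers use their own tokenizers, so treat the count as an approximation.

```python
# Rough local token count using OpenAI's tokenizer (pip install tiktoken).
# Counts for non-OpenAI providers will differ slightly, since each provider
# uses its own tokenizer.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")
message = "Hi! Your order has shipped and should arrive within 3 business days."
token_count = len(encoding.encode(message))
print(token_count)  # typically around 15 tokens for a short sentence like this
```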
Cost Calculation Formula
The total AI cost is calculated using this formula:
Total Cost = (Input Tokens / 1,000,000 × Input Price) + (Output Tokens / 1,000,000 × Output Price)
Example Calculation
Let's assume the selected model has:
Input Price = $1.25 per 1M tokens
Output Price = $10.00 per 1M tokens
If your automation used:
50,000 input tokens
20,000 output tokens
Step 1: Input Cost
(50,000 / 1,000,000) × $1.25 = 0.05 × $1.25 = $0.0625
Step 2: Output Cost
(20,000 / 1,000,000) × $10.00 = 0.02 × $10.00 = $0.20
Step 3: Total Cost
$0.0625 + $0.20 = $0.2625
Total AI cost = $0.2625
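The same formula is easy to script. Below is a minimal sketch that reproduces the example above; the function name estimate_ai_cost is just an illustrative helper, not part of Geta.ai.

```python
def estimate_ai_cost(input_tokens: int, output_tokens: int,
                     input_price_per_m: float, output_price_per_m: float) -> float:
    """Estimate cost in USD from per-million-token prices (illustrative helper)."""
    input_cost = (input_tokens / 1_000_000) * input_price_per_m
    output_cost = (output_tokens / 1_000_000) * output_price_per_m
    return input_cost + output_cost

# The example above: $1.25/M input, $10.00/M output, 50k input + 20k output tokens.
cost = estimate_ai_cost(50_000, 20_000, 1.25, 10.00)
print(f"${cost:.4f}")  # $0.2625
```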
Quick Reference Example (1 Million Tokens Split 50/50)
If total usage = 1,000,000 tokens:
| Type | Tokens | Unit Price | Subtotal |
|---|---|---|---|
| Input | 500,000 | $1.25/M | $0.625 |
| Output | 500,000 | $10.00/M | $5.00 |
| Total | 1,000,000 | – | $5.625 |
⚠️ Important Notes
If no model or API key is selected, the system automatically uses the default model.
Costs are deducted from your available wallet balance.
Higher-quality models typically have higher output pricing.
Output tokens usually cost more than input tokens.
How to Configure Your Own OpenAI API Key
If you prefer to use your own OpenAI account:
Step 1: Switch to Custom API
Go to AI Services
Click on Configure Your Own OpenAI Key
Step 2: API Configuration
Enter your OpenAI API Key
Click Select Model
Choose your preferred OpenAI model
Click Save
Now:
Your OpenAI account will be billed directly
Geta.ai will not deduct wallet balance for AI usage
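Optionally, before pasting your key into Geta.ai, you can confirm it is valid by listing the models available to your OpenAI account. This check runs entirely outside Geta.ai and assumes the official openai Python package (v1+) is installed; keep the key itself out of code and version control.

```python
# Optional sanity check of your own OpenAI API key, run outside Geta.ai.
# Requires the official openai package (pip install openai) and your key
# exported as OPENAI_API_KEY; nothing here changes your Geta.ai settings.
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
models = client.models.list()
print([m.id for m in models.data][:5])  # a few model IDs printing confirms the key works
```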
When Should You Use Smart AI LLM vs Your Own API?
| Use Case | Recommendation |
|---|---|
| Simple usage | Use Smart AI LLM |
| No API management required | Use Smart AI LLM |
| Enterprise with own OpenAI billing | Configure Own API |
| Cost optimization control | Configure Own API |
Best Practices for Cost Optimization
Use smaller models for template generation
Use advanced models only when needed (see the cost comparison sketch after this list)
Monitor token-heavy automations
Keep prompts concise
Avoid unnecessary repeated calls
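To see why model choice matters, here is a rough comparison that applies the pricing formula to the same workload at two price points. The model names and prices below are assumptions for illustration only; check the model cards in AI Services for actual per-million-token prices.

```python
# Hypothetical price points for illustration only; real prices are shown on
# the model cards in AI Services.
PRICES = {
    "smaller-model":  {"input": 0.15, "output": 0.60},   # $ per 1M tokens (assumed)
    "advanced-model": {"input": 1.25, "output": 10.00},  # $ per 1M tokens (assumed)
}

input_tokens, output_tokens = 50_000, 20_000  # same workload for both models

for name, price in PRICES.items():
    cost = (input_tokens / 1_000_000) * price["input"] \
         + (output_tokens / 1_000_000) * price["output"]
    print(f"{name}: ${cost:.4f}")
# smaller-model:  $0.0195
# advanced-model: $0.2625
```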
Summary
The AI Services module in Geta.ai allows you to:
Choose from 100+ AI models
Filter by provider
Understand pricing before selection
Control how AI works in Inbox & Template Generator
Calculate cost transparently using token-based billing
By understanding the pricing formula and token usage, you can optimize both performance and cost efficiently.