v0.0.5 - AI Enhancements & UILM Updates
Release Notes
This release expands flexibility and intelligence across the platform. Users now have greater control over AI Agents through customizable parameters, safer workflows with UILM history and revert options, smarter insights with AI-powered Logs, Metrics & Tracing, and seamless environment migration starting with the UILM module. Together, these improvements make the platform more adaptable, reliable, and efficient for a wide range of use cases.
Environment Migration
Our cloud platform now supports environment migration, giving users the flexibility to move data between environments. This capability streamlines workflows, simplifies testing, and ensures more consistent deployments across projects. We're starting with the UILM (Language Module), where users can migrate their language keys and modules. Migration support will gradually extend across all modules and services.
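As a rough sketch of how such a migration could be driven programmatically, the example below promotes UILM language keys and modules from one environment to another. The base URL, endpoint path, payload fields, and authentication header are assumptions made purely for illustration and are not the platform's documented API.

```python
import requests

# Hypothetical example: promote UILM language keys from staging to production.
# The base URL, endpoint, payload fields, and token header are illustrative
# assumptions; consult the platform documentation for the actual migration API.
BASE_URL = "https://api.example-platform.com/v1"
API_TOKEN = "YOUR_API_TOKEN"

response = requests.post(
    f"{BASE_URL}/uilm/migrations",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={
        "source_environment": "staging",
        "target_environment": "production",
        "modules": ["checkout", "onboarding"],  # migrate only selected modules
        "include_language_keys": True,
    },
    timeout=30,
)
response.raise_for_status()
print("Migration started:", response.json())
```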
UILM
UILM now includes language history with revert capability, enabling users to track changes and restore previous versions when needed. This is especially useful if a recent update introduces errors, if you want to compare different states, or if you simply need to roll back to a stable configuration. By making past changes visible and reversible, UILM adds both safety and flexibility to your workflow.
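To sketch what the history-and-revert workflow could look like programmatically, the snippet below lists recent versions of a language key and rolls back to an earlier one. Every endpoint, field name, and response shape here is a hypothetical assumption; the actual UILM API may differ.

```python
import requests

BASE_URL = "https://api.example-platform.com/v1"  # hypothetical base URL
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}

# List the change history for a language key (hypothetical endpoint and shape).
history = requests.get(
    f"{BASE_URL}/uilm/keys/checkout.title/history",
    headers=HEADERS,
    timeout=30,
).json()

# Pick an earlier version that was known to be stable and revert to it.
stable_version = history["versions"][-2]["id"]
requests.post(
    f"{BASE_URL}/uilm/keys/checkout.title/revert",
    headers=HEADERS,
    json={"version_id": stable_version},
    timeout=30,
).raise_for_status()
```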
Bug Fixes
- Resolved an issue where API calls were made for deselected modules in "Create New Key."
- Fixed an issue where the revert button appeared when no previous data was available.
- Corrected non-functional search within the module selection field.
Logs, Metrics & Tracing
Our Logs, Metrics & Tracing (LMT) now has AI support, allowing users to query logs, traces, and metrics in natural language. You can even search directly by trace ID, making it faster to diagnose issues and uncover insights.
New Features
- Added support for searching by trace ID.
- Introduced Ask AI, providing contextual insights powered by AI.
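Purely as an illustration of these two capabilities, the sketch below fetches a trace by its ID and then poses a natural-language question through Ask AI. Every endpoint, parameter, and response field shown is a hypothetical assumption rather than the documented LMT API.

```python
import requests

BASE_URL = "https://api.example-platform.com/v1"  # hypothetical base URL
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}

# Look up all spans belonging to a single trace by its ID (hypothetical endpoint).
trace = requests.get(
    f"{BASE_URL}/lmt/traces/4bf92f3577b34da6a3ce929d0e0e4736",
    headers=HEADERS,
    timeout=30,
).json()

# Ask AI: pose a natural-language question about logs, metrics, and traces.
answer = requests.post(
    f"{BASE_URL}/lmt/ask-ai",
    headers=HEADERS,
    json={"question": "Why did checkout latency spike in the last hour?"},
    timeout=60,
).json()
print(answer["summary"])
```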
AI Agents
AI Agents now give users more control through adjustable parameters, ensuring the assistant behaves as needed — whether that's more focused, more creative, or more efficient.
New Features
- Context window – set how many past messages are remembered
- Summary – automatically summarizes each conversation to preserve context
- Embedding model – choose from small to large models:
  - Embedding Ada v2
  - Embedding 3 Small
  - Embedding 3 Large
- Chunking strategy – choose how text is split into chunks:
  - Semantic
  - Recursive
  - Markdown
  - Character
- LLM Conversations – added source links under each response to trace the origin of collected information
- Redesigned the conversation page
- Redesigned the agent playground page
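Taken together, the new parameters could be expressed as a single agent configuration. The snippet below is a hypothetical sketch built only from the options listed above; the field names, allowed values, and endpoint are assumptions, not a confirmed schema.

```python
import requests

BASE_URL = "https://api.example-platform.com/v1"  # hypothetical base URL
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}

# Hypothetical agent configuration combining the new adjustable parameters.
agent_config = {
    "context_window": 10,                    # past messages the agent remembers
    "summary_enabled": True,                 # auto-summarize each conversation
    "embedding_model": "embedding-3-small",  # or "embedding-ada-v2", "embedding-3-large"
    "chunking_strategy": "semantic",         # or "recursive", "markdown", "character"
}

requests.patch(
    f"{BASE_URL}/agents/my-support-agent",
    headers=HEADERS,
    json=agent_config,
    timeout=30,
).raise_for_status()
```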
Improvements
- Enhanced response quality with improved retrieval re-ranking, including semantic chunking for more accurate and relevant results
- Refactored the AI Agent Playground to use the Conversation API instead of query-based interaction, providing a more natural conversational view
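For readers unfamiliar with the chunking strategies mentioned above, here is a minimal, generic sketch of character-based and recursive splitting. It is not the platform's implementation and makes no claims about how semantic chunking or re-ranking work internally; it only illustrates the general idea of splitting text into chunks before embedding and retrieval.

```python
# Generic illustration of two chunking strategies, not the platform's code.

def character_chunks(text: str, size: int = 200, overlap: int = 20) -> list[str]:
    """Fixed-size character chunking with a small overlap between chunks."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

def recursive_chunks(text: str, size: int = 200) -> list[str]:
    """Recursive chunking: split on paragraphs first, then sentences, then characters."""
    if len(text) <= size:
        return [text]
    for separator in ("\n\n", ". "):
        parts = text.split(separator)
        if len(parts) > 1:
            chunks: list[str] = []
            for part in parts:
                chunks.extend(recursive_chunks(part, size))
            return chunks
    # Fall back to plain character splitting when no separator helps.
    return character_chunks(text, size, overlap=0)

doc = "First paragraph about agents.\n\nSecond paragraph. It has two sentences."
print(recursive_chunks(doc, size=30))
```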