Test the Retrieval Process

This endpoint tests the retrieval process to determine the effectiveness of an agent's RAG (Retrieval-Augmented Generation) configuration. It evaluates retrieval quality by running a test query against the knowledge base and measuring performance metrics against ground-truth passages.

API Endpoint

| Property | Value |
| --- | --- |
| Request Method | POST |
| Request URL | https://api.seliseblocks.com/kb/retrieval-test/{agent_id} |

Request

Request Example

curl -X POST 'https://api.seliseblocks.com/kb/retrieval-test/a1b2c3d4-e5f6-7890-abcd-ef1234567890' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
    "enable_query_enhancement": true,
    "enable_rerank": true,
    "ground_truth_texts": [
      "Our refund policy allows returns within 30 days of purchase.",
      "Refunds are processed within 5-7 business days."
    ],
    "project_key": "YOUR_PROJECT_KEY",
    "query": "What is our refund policy?",
    "score_threshold": 0.7,
    "top_k": 5
  }'
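
The same request can be issued from application code with any HTTP client. Below is a minimal Python sketch, assuming the requests library; the agent UUID, YOUR_PROJECT_KEY, and the ground-truth passages are the placeholder values from the example above, not values from your project.

import requests

AGENT_ID = "a1b2c3d4-e5f6-7890-abcd-ef1234567890"  # placeholder agent UUID from the example above
URL = f"https://api.seliseblocks.com/kb/retrieval-test/{AGENT_ID}"

payload = {
    "enable_query_enhancement": True,
    "enable_rerank": True,
    "ground_truth_texts": [
        "Our refund policy allows returns within 30 days of purchase.",
        "Refunds are processed within 5-7 business days.",
    ],
    "project_key": "YOUR_PROJECT_KEY",  # placeholder project key
    "query": "What is our refund policy?",
    "score_threshold": 0.7,
    "top_k": 5,
}

response = requests.post(
    URL,
    json=payload,  # json= also sets the Content-Type: application/json header
    headers={"accept": "application/json"},
    timeout=30,
)
response.raise_for_status()
body = response.json()
print(body["is_success"], len(body.get("result", [])))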

Request Headers

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| accept | string | Yes | Accepted response format. Use application/json. |
| Content-Type | string | Yes | Data type; must be application/json. |

Path Parameters

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| agent_id | string | Yes | UUID of the AI agent to test retrieval for. |

Request Body

Request Body Schema

{
  "enable_query_enhancement": false,
  "enable_rerank": true,
  "ground_truth_texts": [
    "Our refund policy allows returns within 30 days of purchase.",
    "Refunds are processed within 5-7 business days."
  ],
  "project_key": "my-project-123",
  "query": "What is our refund policy?",
  "score_threshold": 0.7,
  "top_k": 5
}

Request Body Parameters

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| enable_query_enhancement | boolean | No | Whether to enable query enhancement for better retrieval. Default: false. |
| enable_rerank | boolean | No | Whether to enable reranking of retrieved results. Default: true. |
| ground_truth_texts | array | Yes | Array of expected relevant text passages for evaluation. |
| project_key | string | Yes | The project key for your project. |
| query | string | Yes | The test query to retrieve relevant documents for. |
| score_threshold | number | No | Minimum similarity score threshold (0.0-1.0). Default: 0.7. |
| top_k | integer | No | Number of top results to retrieve. Default: 5. |
tip

Use Cases for Retrieval Testing

  • Evaluate the effectiveness of your RAG configuration
  • Compare different embedding models or chunking strategies
  • Optimize score thresholds and top_k values (see the sketch after this list)
  • Measure retrieval quality with ground truth data
  • Fine-tune query enhancement and reranking settings
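
As one way to approach that tuning, the sketch below sweeps a few score_threshold and top_k combinations against the same query and ground truth and compares the returned metrics. It is a minimal Python sketch assuming the requests library; the candidate values, agent UUID, and project key are illustrative placeholders, not recommended settings.

import requests

AGENT_ID = "a1b2c3d4-e5f6-7890-abcd-ef1234567890"  # placeholder agent UUID
URL = f"https://api.seliseblocks.com/kb/retrieval-test/{AGENT_ID}"

BASE_PAYLOAD = {
    "enable_query_enhancement": False,
    "enable_rerank": True,
    "ground_truth_texts": [
        "Our refund policy allows returns within 30 days of purchase.",
        "Refunds are processed within 5-7 business days.",
    ],
    "project_key": "YOUR_PROJECT_KEY",  # placeholder project key
    "query": "What is our refund policy?",
}

# Illustrative candidate values; adjust to your own knowledge base.
for score_threshold in (0.6, 0.7, 0.8):
    for top_k in (3, 5, 10):
        payload = {**BASE_PAYLOAD, "score_threshold": score_threshold, "top_k": top_k}
        resp = requests.post(URL, json=payload, headers={"accept": "application/json"}, timeout=30)
        resp.raise_for_status()
        body = resp.json()
        # Each result entry carries a metrics object (precision, recall, f1_score, hit_rate).
        for entry in body.get("result", []):
            m = entry.get("metrics", {})
            print(
                f"threshold={score_threshold} top_k={top_k} "
                f"query_type={entry.get('query_type')} "
                f"precision={m.get('precision')} recall={m.get('recall')} f1={m.get('f1_score')}"
            )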

Response

Success Response (200 OK)

Returns an object containing retrieval test results and performance metrics.

{
  "is_success": true,
  "result": [
    {
      "query": "What is our refund policy?",
      "query_type": "original",
      "documents": [
        {
          "keywords": ["refund", "policy", "returns", "30 days"],
          "chars": 156,
          "score": 0.89,
          "tokens": 42,
          "text": "Our refund policy allows returns within 30 days of purchase. All items must be in original condition with tags attached..."
        },
        {
          "keywords": ["refund", "processing", "business days"],
          "chars": 98,
          "score": 0.82,
          "tokens": 28,
          "text": "Refunds are processed within 5-7 business days after we receive your returned item..."
        }
      ],
      "metrics": {
        "f1_score": 0.77,
        "hit_rate": true,
        "k": 5,
        "precision": 0.8,
        "recall": 0.75
      }
    }
  ],
  "analysis_data": {
    "enhanced_queries": [
      "What is our refund policy?",
      "refund policy details",
      "return policy timeframe"
    ],
    "score_of_adequate_context": 0.85
  }
}
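
To work with this structure programmatically, a handler only needs to walk the result array and, for each query variation, inspect its documents and metrics. A minimal Python sketch, assuming the parsed response is available as a dict named body (for example, body = response.json() from the request sketch earlier):

def summarize_retrieval_test(body: dict) -> None:
    """Print retrieved documents and metrics from a retrieval-test response body."""
    if not body.get("is_success"):
        print("Retrieval test did not succeed")
        return

    for entry in body.get("result", []):
        print(f"Query ({entry['query_type']}): {entry['query']}")
        for doc in entry.get("documents", []):
            # Each document carries a similarity score plus keyword/size metadata.
            print(f"  score={doc['score']:.2f} tokens={doc['tokens']} text={doc['text'][:60]}...")
        metrics = entry.get("metrics", {})
        print(f"  precision={metrics.get('precision')} recall={metrics.get('recall')} "
              f"f1={metrics.get('f1_score')} hit_rate={metrics.get('hit_rate')}")

    analysis = body.get("analysis_data", {})
    print("Enhanced queries:", analysis.get("enhanced_queries", []))
    print("Adequate-context score:", analysis.get("score_of_adequate_context"))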

Response Fields

| Field | Type | Description |
| --- | --- | --- |
| is_success | boolean | Indicates whether the retrieval test was successful. |
| result | array | Array of retrieval result objects for each query variation. |
| analysis_data | object | Additional analysis data, including enhanced queries and context scores. |

Result Object Fields

| Field | Type | Description |
| --- | --- | --- |
| query | string | The query that was used for retrieval. |
| query_type | string | Type of query (e.g., original, enhanced). |
| documents | array | Array of retrieved document objects. |
| metrics | object | Performance metrics for this retrieval. |

Document Object Fields

| Field | Type | Description |
| --- | --- | --- |
| keywords | array | Extracted keywords from the document. |
| chars | integer | Number of characters in the document. |
| score | number | Similarity score (0.0-1.0). |
| tokens | integer | Number of tokens in the document. |
| text | string | The actual text content of the document. |

Metrics Object Fields

| Field | Type | Description |
| --- | --- | --- |
| f1_score | number | F1 score measuring the balance between precision and recall. |
| hit_rate | boolean | Whether at least one ground truth text was retrieved. |
| k | integer | Number of documents retrieved (the top_k value used). |
| precision | number | Precision score (relevant retrieved / total retrieved). |
| recall | number | Recall score (relevant retrieved / total relevant). |

Analysis Data Fields

| Field | Type | Description |
| --- | --- | --- |
| enhanced_queries | array | Array of enhanced query variations generated. |
| score_of_adequate_context | number | Overall score indicating context adequacy (0.0-1.0). |
note

Understanding the Metrics

  • Precision: What percentage of retrieved documents are relevant?
  • Recall: What percentage of relevant documents were retrieved?
  • F1 Score: Harmonic mean of precision and recall (see the worked example after this list)
  • Hit Rate: Whether any ground truth texts were found in top_k results
  • Score of Adequate Context: Overall assessment of retrieval quality
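
As a quick sanity check of these definitions against the sample response above: with precision 0.8 and recall 0.75, the F1 score is 2 * 0.8 * 0.75 / (0.8 + 0.75) ≈ 0.77, which matches the reported value. The Python sketch below reproduces the arithmetic from a hypothetical set of retrieved and relevant chunk IDs. The IDs are illustrative only, and the service may match ground-truth passages at the text level rather than by ID; the sketch treats matching as a simple set intersection for clarity.

def retrieval_metrics(retrieved_ids: set, relevant_ids: set) -> dict:
    """Compute precision, recall, F1, and hit rate for one retrieval run."""
    true_positives = len(retrieved_ids & relevant_ids)
    precision = true_positives / len(retrieved_ids) if retrieved_ids else 0.0
    recall = true_positives / len(relevant_ids) if relevant_ids else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {
        "precision": precision,
        "recall": recall,
        "f1_score": f1,
        "hit_rate": true_positives > 0,  # at least one ground-truth passage retrieved
    }

# Hypothetical chunk IDs: 5 chunks retrieved, 2 of the 4 ground-truth chunks among them.
print(retrieval_metrics({"c1", "c2", "c3", "c4", "c5"}, {"c1", "c2", "c8", "c9"}))
# -> precision 0.4, recall 0.5, f1 ~0.44, hit_rate True

# Cross-check of the sample response: precision 0.8 and recall 0.75 give F1 ~0.77.
print(round(2 * 0.8 * 0.75 / (0.8 + 0.75), 2))  # 0.77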

Error Response (422 Unprocessable Entity)

Returns validation error details when the request body is invalid.

{
  "detail": [
    {
      "loc": [
        "body",
        "ground_truth_texts"
      ],
      "msg": "field required",
      "type": "value_error.missing"
    }
  ]
}

Error Response Fields

| Field | Type | Description |
| --- | --- | --- |
| detail | array | Array of validation error objects. |
| loc | array | Location of the error in the request (e.g., path, body). |
| msg | string | Human-readable error message. |
| type | string | Error type identifier. |
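
These fields can be surfaced directly to callers. A small Python sketch for reporting them, assuming a requests response object named response as in the earlier examples:

def report_validation_errors(response) -> None:
    """Print each 422 validation error as 'location: message (type)'."""
    if response.status_code != 422:
        return
    for error in response.json().get("detail", []):
        location = " -> ".join(str(part) for part in error.get("loc", []))
        print(f"{location}: {error.get('msg')} ({error.get('type')})")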

Error Codes

| Status Code | Description | Response Type |
| --- | --- | --- |
| 200 | Successful Response | Success |
| 404 | Not Found - Agent does not exist | Not Found |
| 422 | Validation Error - Invalid request parameters | Unprocessable Entity |