Search and Retrieve Results
Use the Cribl API to execute searches and retrieve Cribl Search results so that you can automate operations. Applications include triggering alerts based on specific search results and aggregating data for reporting purposes.
About the Example Requests
Replace the variables in the example requests with the corresponding information for your Cribl deployment. In the cURL command options, replace ${token} with a valid API Bearer token. You can also set the $token environment variable to match the value of a Bearer token.
You must commit and deploy the changes you make. You can use the Cribl API to automate commit and deploy commands.
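For example, you could export the token in your shell before running the examples. This is only a convenience sketch; the token value shown is a placeholder, not a real credential.
# Store a Bearer token in an environment variable so the cURL examples can reference it.
# Replace the placeholder value with a token generated for your Organization.
export token="<your-api-bearer-token>"

# Verify that the variable is set before running the example requests.
echo "${token}"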
Execute a Search and Retrieve the Results
This example demonstrates how to use the Cribl API to execute a search, optionally confirm that the search is complete, and retrieve the Cribl Search results.
1. Execute the Search
Define your search in the request body. See Build a Search and the POST /search/jobs endpoint in the API Reference for more information about Cribl Search query options. In this example, the request body includes:
query (required): The search query string, with escaped quotes around the dataset name. In API requests, always begin the query with the cribl operator. This example retrieves the first 1,000 records in the goatherd_sample_dataset dataset.
earliest (optional): The beginning of the query time range. This example retrieves results only for the last hour.
latest (optional): The end of the query time range. This example retrieves results up to the current time.
sampleRate (optional): The ratio used to reduce the number of results. In this example, the search query returns all matching records from the dataset, up to the 1,000-record limit, without excluding any data based on sampling.
curl --request POST \
--url 'https://${workspaceName}-${organizationId}.cribl.cloud/api/v1/m/default_search/search/jobs' \
--header 'Authorization: Bearer ${token}' \
--header 'Content-Type: application/json' \
--data '{
"query": "cribl dataset=\"goatherd_sample_dataset\" | limit 1000",
"earliest": "-1h",
"latest": "now",
"sampleRate": 1
}'
The response is a JSON object with details about the Cribl Search job, similar to the following example. The response includes the job’s id, which you need for your requests to check the search status and retrieve the Cribl Search results in subsequent steps.
{
"items": [
{
"query": "cribl dataset=\"goatherd_sample_dataset\" | limit 1000",
"earliest": "-1h",
"latest": "now",
"sampleRate": 1,
"user": "google-oauth2|12345678910111EXAMPLE",
"displayUsername": "Alex Smith",
"isPrivate": true,
"group": "default_search",
"id": "1349305736255.Acp7er",
"timeCreated": 1744114795547,
"internal": {
"email": "asmith@goatherd.io",
"roles": [
"org_owner",
"ws_owner"
],
[...]
}
}
]
}
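If you are scripting this workflow, one option is to capture the job id directly from the response. The following sketch is illustrative only: it assumes the jq utility is installed and that the ${workspaceName}, ${organizationId}, and ${token} variables are already set in your shell.
# Execute the search and capture the job id from the response with jq.
jobId=$(curl --silent --request POST \
  --url "https://${workspaceName}-${organizationId}.cribl.cloud/api/v1/m/default_search/search/jobs" \
  --header "Authorization: Bearer ${token}" \
  --header 'Content-Type: application/json' \
  --data '{
    "query": "cribl dataset=\"goatherd_sample_dataset\" | limit 1000",
    "earliest": "-1h",
    "latest": "now",
    "sampleRate": 1
  }' | jq -r '.items[0].id')

echo "Search job id: ${jobId}"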
2. Confirm that the Search Is Complete (Optional)
The search must finish before you can retrieve the results. Many searches finish quickly, but you can confirm the status by sending a request to the GET /search/jobs/${id}/status endpoint. In your request URL, use the id value from the response in step 1, as demonstrated in this example.
curl --request GET \
--url 'https://${workspaceName}-${organizationId}.cribl.cloud/api/v1/m/default_search/search/jobs/1349305736255.Acp7er/status' \
--header 'Authorization: Bearer ${token}' \
--header 'Content-Type: application/json'
The response is a JSON object that includes a status attribute, similar to the following example. If the status value is completed, the search is complete.
{"items":[{"status":"completed","timeStarted":1744114795547,"timeCreated":1744114795066,"timeCompleted":1744114801517,"timeNow":1744115261804,"cacheStatusesByStageId":{"root":{"goatherd_sample_dataset":{"usedCache":false,"reason":"Not a Lake Dataset"}}}}],"count":1}
3. Retrieve the Search Results
When the search is complete, send a request to the GET /search/jobs/${id}/results endpoint to retrieve the Cribl Search results. For the Content-Type header, specify Newline Delimited JSON (x-ndjson) so that the response lists the search results as individual JSON objects separated by newlines.
In your request URL, use the id value from the response in step 1. Your request must also include the limit query parameter, which specifies the maximum number of records to return in the response.
You can provide any value for limit, but it may be impractical to retrieve all records at once. Instead, you can send multiple requests that use limit with the optional offset query parameter to paginate the response into more manageable batches. The offset specifies the starting point from which to return records.
For example, if you want to retrieve 100 results per page, set the limit value to 100 and the offset value to 0 in your first request, as shown below. Then, in subsequent requests, continue incrementing the offset value until you retrieve all of the search results.
curl --request GET \
--url 'https://${workspaceName}-${organizationId}.cribl.cloud/api/v1/m/default_search/search/jobs/1349305736255.Acp7er/results?limit=100&offset=0' \
--header 'Authorization: Bearer ${token}' \
--header 'Content-Type: application/x-ndjson'
The response is an NDJSON stream that contains the first 100 search results. The first line is a summary object that describes the job and the result counts, followed by one JSON object per event.
{"isFinished":true,"limit":10,"offset":0,"persistedEventCount":1000,"totalEventCount":1000,"job":{"id":"1349305736255.Acp7er","query":"cribl dataset=\"goatherd_sample_dataset\" | limit 1000","earliest":"-1h","latest":"now","timeCreated":1744114795066,"timeStarted":1744114795547,"timeCompleted":1744114801517,"status":"completed"}}
{"dataset":"goatherd_sample_dataset","_raw":"2 847602856271 eni-4tjvafdk47gjasdsg 52.15.47.93 10.0.0.89 6000 443 6 9 9288 1744114734 1744114739 ACCEPT OK","source":"s3://goatherd-search-example/data/goatlogs/2025/04/08/12/Goats-5cWrf3.1.raw.gz","dataSource":"goatlogs","version":"2","account_id":"847602856271","interface_id":"eni-4tjvafdk47gjasdsg","srcaddr":"52.15.47.93","dstaddr":"10.0.0.89","srcport":"6000","dstport":"443","protocol":"6","packets":"9","bytes":"9288","start":"1744114734","end":"1744114739","action":"ACCEPT","log_status":"OK","_time":1744114734,"datatype":"aws_vpcflow"}
{"dataset":"goatherd_sample_dataset","_raw":"2 847602856271 eni-9rgj38dgk83h59djt 189.16.1.201 10.8.90.121 2083 53 6 13 2431 1744114731 1744114738 ACCEPT OK","source":"s3://goatherd-search-example/data/goatlogs/2025/04/08/12/Goats-5cWrf3.1.raw.gz","dataSource":"goatlogs","version":"2","account_id":"847602856271","interface_id":"eni-9rgj38dgk83h59djt","srcaddr":"189.16.1.201","dstaddr":"10.8.90.121","srcport":"2083","dstport":"53","protocol":"6","packets":"13","bytes":"2431","start":"1744114731","end":"1744114738","action":"ACCEPT","log_status":"OK","_time":1744114731,"datatype":"aws_vpcflow"}
{"dataset":"goatherd_sample_dataset","_raw":"2 847602856271 eni-0bjag724hdfh8sdfk 10.5.46.358 54.266.152.25 2002 8080 6 45 13545 1744114724 1744114730 ACCEPT OK","source":"s3://goatherd-search-example/data/goatlogs/2025/04/08/12/Goats-Vx67Pn.2.raw.gz","dataSource":"goatlogs","version":"2","account_id":"847602856271","interface_id":"eni-0bjag724hdfh8sdfk","srcaddr":"10.5.46.358","dstaddr":"54.266.152.25","srcport":"2002","dstport":"8080","protocol":"6","packets":"45","bytes":"13545","start":"1744114724","end":"1744114730","action":"ACCEPT","log_status":"OK","_time":1744114724,"datatype":"aws_vpcflow"}
{...}
To retrieve the next 100 results, send a second request with the offset value set to 100 to skip the first 100 records that you already retrieved. For example, the following request retrieves results 101 through 200.
curl --request GET \
--url 'https://${workspaceName}-${organizationId}.cribl.cloud/api/v1/m/default_search/search/jobs/1349305736255.Acp7er/results?limit=100&offset=100' \
--header 'Authorization: Bearer ${token}' \
--header 'Content-Type: application/x-ndjson'
To retrieve the next 100 results, send a third request with the limit set to 100 and the offset set to 200 to skip the first 200 records. Continue sending requests, incrementing the offset value by 100, until you retrieve all of the search results.
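To automate the pagination, you can loop until the offset reaches the totalEventCount reported in the summary line of each response. The following sketch is an illustration under the same assumptions as the earlier ones (jq installed, ${jobId} and ${token} set); the output file name and page size are arbitrary.
# Page through the results 100 records at a time and append each page to a local file.
limit=100
offset=0
total=1   # placeholder; replaced by totalEventCount from the first page
while [ "${offset}" -lt "${total}" ]; do
  page=$(curl --silent --request GET \
    --url "https://${workspaceName}-${organizationId}.cribl.cloud/api/v1/m/default_search/search/jobs/${jobId}/results?limit=${limit}&offset=${offset}" \
    --header "Authorization: Bearer ${token}" \
    --header 'Content-Type: application/x-ndjson')
  printf '%s\n' "${page}" >> results.ndjson
  # The first line of each page is a summary object that includes totalEventCount.
  total=$(printf '%s\n' "${page}" | head -n 1 | jq -r '.totalEventCount')
  offset=$((offset + limit))
done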