Version: 3.2

New Relic Logs & Metrics

Cribl LogStream supports sending events to the New Relic Log API and the New Relic Metric API.

As of LogStream v.3.1.2, this Destination is updated to authenticate using New Relic's Ingest License API key. (New Relic will retire the Insights Insert API keys, which this Destination previously used for authentication.)

Also as of v.3.1.2, LogStream provides a separate New Relic Events Destination that you can use to send ad hoc (loosely structured) events to New Relic via the New Relic Event API.

Configuring Cribl LogStream to Output to New Relic

In the QuickConnect UI: Click + Add beside Destinations. From the resulting drawer's tiles, select New Relic Ingest > Logs & Metrics. Next, click either + Add New or (if displayed) Select Existing. The resulting drawer will provide the following options and fields.

Or, in the Data Routes UI: From the top nav of a LogStream instance or Group, select Data > Destinations. From the resulting page's tiles or the Destinations left nav, select New Relic Ingest > Logs & Metrics. Next, click + Add New to open a New Destination modal that provides the following options and fields.

General Settings

Output ID: Enter a unique name to identify this New Relic definition.

Authentication method: Select one of the following buttons.

  • Manual: This default option exposes an API key field. Directly enter your New Relic Ingest License API key, as you created or accessed it from New Relic's account drop-down. (For details, see the New Relic API Keys documentation.)

  • Secret: This option exposes an API key (text secret) drop-down, in which you can select a stored secret that references a New Relic Ingest License API key. A Create link is available to store a new, reusable secret.

Region: Select which New Relic region endpoint to use.

Log type: Name of the logType to send with events. E.g., observability or access_log.

This sets a default. Where a sourcetype is specified in an event, it will override this value.
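The default-and-override behavior can be sketched as follows (a minimal sketch; the function name and event shape are illustrative, not LogStream internals):

```javascript
// Sketch: the configured Log type acts as a default, which an event's
// sourcetype field overrides when present. (Illustrative only.)
const configuredLogType = 'observability';

function logTypeFor(event) {
  // An event carrying a sourcetype wins over the configured default.
  return event.sourcetype || configuredLogType;
}

console.log(logTypeFor({ sourcetype: 'access_log' })); // access_log
console.log(logTypeFor({ message: 'no sourcetype here' })); // observability
```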

Log message field: Name of the field to send as the log message value. If not specified, the event will be serialized and sent as JSON.
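The two behaviors can be sketched like this (the event shape and the field name message are illustrative, not required by the Destination):

```javascript
// Hypothetical event arriving at the Destination:
const event = { host: 'web-01', message: 'GET /index.html 200', level: 'info' };

// With Log message field set to "message", only that field's value is sent
// as the log line:
const messageOnly = event['message'];

// With the setting left empty, the whole event is serialized and sent as JSON:
const wholeEvent = JSON.stringify(event);

console.log(messageOnly);
console.log(wholeEvent);
```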

Fields: Additional metadata fields to (optionally) add, as Name-Value pairs.

  • Name: Enter the metadata field name.

  • Value: JavaScript expression to compute the field’s value, enclosed in single quotes, double quotes, or backticks. (Can evaluate to a constant.)

  • Add Field: Click to add more metadata Name-Value pairs.
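A Value expression can be a quoted constant or can compute from fields of the event in flight. A minimal sketch (the event shape and field names are illustrative; inside a real Value expression, event fields are in scope directly rather than accessed through an object):

```javascript
// Illustrative event; in LogStream, its fields would be available directly
// inside the Value expression.
const event = { host: 'web-01', status: 503 };

// A constant value, as it would appear enclosed in single quotes:
const environment = 'production';

// A computed value, derived from an event field:
const severity = event.status >= 500 ? 'error' : 'info';

console.log(environment, severity);
```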

Backpressure behavior: Select whether to block, drop, or queue events when all receivers are exerting backpressure. (Causes might include a broken or denied connection, or a rate limiter.) Defaults to Block. For the Persistent Queue option, see the section just below.

Persistent Queue Settings

This section is displayed when the Backpressure behavior is set to Persistent Queue.

Max file size: The maximum size to store in each queue file before closing it. Enter a numeral with units of KB, MB, etc. Defaults to 1 MB.

Max queue size: The maximum amount of disk space the queue is allowed to consume. Once this limit is reached, queueing is stopped, and data blocking is applied. Enter a numeral with units of KB, MB, etc.

Queue file path: The location for the persistent queue files. This will be of the form: your/path/here/<worker-id>/<output-id>. Defaults to $CRIBL_HOME/state/queues.

Compression: Codec to use to compress the persisted data, once a file is closed. Defaults to None; Gzip is also available.

Queue-full behavior: Whether to block or drop events when the queue is exerting backpressure (because disk space is low or fully consumed). Block is the same behavior as non-PQ blocking, corresponding to the Block option on the Backpressure behavior drop-down. Drop new data throws away incoming data, while leaving the contents of the PQ unchanged.

Processing Settings

Post-Processing

Pipeline: Pipeline to process data before sending the data out using this output.

System fields: A list of fields to automatically add to events that use this output. By default, includes cribl_pipe (identifying the LogStream Pipeline that processed the event). Supports wildcards. Other options include:

  • cribl_host – LogStream Node that processed the event.
  • cribl_wp – LogStream Worker Process that processed the event.
  • cribl_input – LogStream Source that processed the event.
  • cribl_output – LogStream Destination that processed the event.

Advanced Settings

Validate server certs: Toggle to Yes to reject certificates that are not authorized by a CA in the CA certificate path, or by another trusted CA (e.g., the system's CA).

Round-robin DNS: Toggle to Yes to use round-robin DNS lookup. When a DNS server returns multiple addresses, this will cause LogStream to cycle through them in the order returned.

Compress: Toggle to Yes to compress the payload body before sending.

Request timeout: Amount of time (in seconds) to wait for a request to complete before aborting it. Defaults to 30.

Request concurrency: Maximum number of concurrent requests before blocking. This is set per Worker Process. Defaults to 5.

Max body size (KB): Maximum size of the request body. Defaults to 1000 KB.

Flush period (sec): Maximum time between requests. Low values can cause the payload size to be smaller than the configured Max body size. Defaults to 1 second.
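The interplay between Max body size and Flush period can be sketched as a simple rule (an assumption-level simplification, not the Destination's actual implementation): a batch is sent when either limit is hit, which is why a short flush period can ship bodies smaller than the size cap.

```javascript
// Simplified flush rule (assumption): send when the pending body reaches
// the size cap, or when the flush period elapses, whichever comes first.
function shouldFlush(bodyBytes, msSinceLastFlush, maxBodyKB = 1000, flushPeriodSec = 1) {
  return bodyBytes >= maxBodyKB * 1024 || msSinceLastFlush >= flushPeriodSec * 1000;
}

console.log(shouldFlush(1000 * 1024, 0)); // size cap reached -> true
console.log(shouldFlush(200, 1500));      // flush period elapsed -> true
console.log(shouldFlush(200, 200));       // neither limit hit -> false
```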

Extra HTTP headers: Click + Add Header to insert extra headers as Name/Value pairs.

Environment: If you're using GitOps, optionally use this field to specify a single Git branch on which to enable this configuration. If empty, the config will be enabled everywhere.

Verifying the New Relic Destination

Once you've configured log and/or metrics sources, create one or more Routes to send data to New Relic.

In New Relic, you can create visualizations incorporating the LogStream-supplied data, then add them to new or existing dashboards as widgets.

Logs and metrics land in two different places in New Relic.

Log Queries

To access and query log data:

  • Navigate to the New Relic home screen's Logs header option, and click the (+) button at right.

  • Then, to build your queries, use the Find logs where input field, and add desired columns to the table view below the graph.

Metrics Queries

To access and query metrics data:

  • From the New Relic home screen, click Browse Data > Metrics, then search for the desired metric names.

  • Then, customize time range and dimensions to build the desired logic for your queries.

  • Alternatively, you can use NRQL to build your own query searches.
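As a starting point, NRQL queries along these lines will surface LogStream-supplied data; the logtype value and metric name below are illustrative placeholders, so substitute your own:

```sql
SELECT count(*) FROM Log WHERE logtype = 'observability' SINCE 30 minutes ago

SELECT average(my.custom.metric) FROM Metric SINCE 1 hour ago TIMESERIES
```

The first query counts logs that carry the logtype you configured in this Destination's Log type setting; the second charts a metric sent via the Metric API over time.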