Modern observability isn’t just about logs and traces; it’s about actionable signals. OpenTelemetry (OTel) Events and New Relic Custom Events are both event-driven signals - but they solve different problems. The “why” behind each is about who consumes the data and what decisions it enables.
As teams adopt AI-powered services, LLM-based pipelines, and complex distributed architectures, the volume of signals grows exponentially. Knowing which event mechanism to reach for - and when - can mean the difference between a team that reacts to incidents and one that proactively improves its systems.
Why This Matters
If your signals are only good for debugging, product and AI teams miss critical insights. If your signals are only good for analytics, engineers lose the diagnostic trail. The best teams do both:
- OTel Events → precise diagnostic context tied to traces
- New Relic Custom Events → analytics-ready signals that power dashboards, alerts, and model evaluation
Consider a scenario where an LLM-powered chatbot starts returning low-quality answers. The engineering team needs trace-level detail to find the root cause (slow embedding lookup? bad prompt template?). Meanwhile, the product team needs aggregate quality scores to decide whether to roll back a model version. These are fundamentally different questions answered by fundamentally different event types.
Understanding this split is the difference between “We can debug it” and “We can improve it”.
OpenTelemetry Events: The “Why”
OpenTelemetry Events are best understood as structured logs with a semantic name, defined in the OTel Events specification. Their purpose is to enrich traces and timelines so engineers can diagnose what happened and why. Unlike plain log lines, OTel Events carry a well-defined schema, a semantic event name, and automatic correlation to the active trace and span - making them far more useful during incident investigation.
Why use OTel Events?
- Vendor neutrality - Instrument once, export to any backend (New Relic, Jaeger, Grafana, etc.). No proprietary SDK lock-in.
- Rich, structured context - Every event carries typed key-value attributes rather than free-form text, enabling precise filtering and aggregation.
- Trace correlation - Events are automatically linked to the active trace_id and span_id, so you can see exactly where in a request lifecycle something occurred.
- Debugging and root-cause analysis - When something breaks, OTel Events give you the breadcrumbs to reconstruct the full chain of causality.
Capabilities
| Capability | Description |
|---|---|
| Structured attributes | Key-value pairs with typed data (strings, ints, arrays) |
| Semantic naming | Convention-based names like com.acme.user_login or llm.completion |
| Trace/span correlation | Automatic trace_id and span_id propagation |
| Resource context | Service name, version, environment, and other resource attributes travel with every event |
| Baggage propagation | Cross-service context (e.g., tenant ID, feature flags) can be included |
OTel Events are implemented as LogRecord entries with the event_name attribute set. This means they flow through the standard OTel logging pipeline and can be collected by any OTel-compatible collector.
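Because these events ride the standard logging pipeline, the first step is wiring stdlib logging into an OTel LoggerProvider. A minimal sketch of that wiring, assuming the experimental logs modules in opentelemetry-python (the underscore-prefixed module paths are still provisional and may change between releases):

```python
import logging

from opentelemetry._logs import set_logger_provider
from opentelemetry.exporter.otlp.proto.http._log_exporter import OTLPLogExporter
from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor
from opentelemetry.sdk.resources import Resource

# Resource attributes (service name, etc.) travel with every event
# emitted through this provider
provider = LoggerProvider(
    resource=Resource.create({"service.name": "chat-assistant"})
)
set_logger_provider(provider)

# Export LogRecords over OTLP; the endpoint comes from the standard
# OTEL_EXPORTER_OTLP_* environment variables
provider.add_log_record_processor(BatchLogRecordProcessor(OTLPLogExporter()))

# Bridge stdlib logging into OTel: logger.info(..., extra={...}) calls
# become LogRecords with those attributes attached
logging.getLogger().addHandler(LoggingHandler(logger_provider=provider))
```

This is configuration, not business logic: once the handler is attached, the `logger.info` patterns used throughout this post emit OTel LogRecords without further changes.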
Example: OTel Event as a LogRecord (Python)
import logging
import uuid

from opentelemetry import trace

logger = logging.getLogger(__name__)

# Get the current span context for correlation
current_span = trace.get_current_span()
ctx = current_span.get_span_context()
trace_id = format(ctx.trace_id, '032x')
span_id = format(ctx.span_id, '016x')

# service_name, rating, feedback, and the token counts come from the
# surrounding request handler
logger.info("[llm_feedback]", extra={
    "service_name": service_name,
    "trace_id": trace_id,
    "span_id": span_id,
    "rating": rating,
    "category": feedback,
    "feedback_id": str(uuid.uuid4()),
    "vendor": "openai",
    "model": "gpt-4o",
    "event_name": "LlmFeedbackMessage",
    "prompt_tokens": prompt_tokens,
    "completion_tokens": completion_tokens,
})
Because trace_id and span_id are included, this event can later be viewed alongside the full distributed trace in any OTel-compatible backend - giving you the exact request context surrounding the feedback.
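A note on the '032x' and '016x' format specs used above: they zero-pad the SDK's integer IDs to the 32- and 16-character lowercase hex widths that W3C Trace Context expects, so leading zeros are never lost. A quick stdlib check, using the example IDs from the traceparent spec:

```python
# A 128-bit trace ID and 64-bit span ID as integers, which is how the
# OTel SDK stores them internally
trace_id_int = 0x4BF92F3577B34DA6A3CE929D0E0E4736
span_id_int = 0x00F067AA0BA902B7

# Zero-padded lowercase hex, matching the W3C Trace Context encoding
trace_id = format(trace_id_int, "032x")
span_id = format(span_id_int, "016x")

print(trace_id)  # 4bf92f3577b34da6a3ce929d0e0e4736
print(span_id)   # 00f067aa0ba902b7  (note the preserved leading zeros)
```

Without the padding, a span ID with high zero bits would serialize to fewer than 16 characters and fail to match the ID recorded on the trace.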
New Relic Custom Events: The “Why”
New Relic Custom Events exist to make business and AI signals first-class citizens in your observability platform. Instead of burying important metrics inside log lines, Custom Events promote them to dedicated, queryable event types that power dashboards, comparisons, alerts, and automated evaluations.
Think of Custom Events as purpose-built data tables. Each event type (e.g., LlmFeedbackMessage, OrderCompleted, ModelEvaluation) becomes its own queryable table in NRDB (New Relic Database), optimized for fast aggregation and time-series analysis.
Why use Custom Events?
- Fast analytics and trends - Custom Events are stored in a columnar format optimized for aggregation. Queries that would be slow against raw logs return in milliseconds.
- NRQL-powered dashboards - Build real-time dashboards with full NRQL query support, including faceted breakdowns, percentiles, histograms, and time-series comparisons.
- Alerting - Set up NRQL alert conditions directly on Custom Event data (e.g., alert when average(quality_score) drops below a threshold).
- AI and model evaluation - Track quality scores, token usage, latency, and user feedback per model version to inform rollback and promotion decisions.
- Retention flexibility - Custom Events have configurable retention (default 30 days, extendable), independent of log retention policies.
Capabilities
| Capability | Description |
|---|---|
| Dedicated event type | Each Custom Event gets its own NRDB table (e.g., MyEvent) |
| NRQL queryable | Full SQL-like query language: SELECT * FROM MyEvent WHERE ... |
| Attribute limits | Up to 254 attributes per event, with string values up to 4 KB |
| Throughput | Up to 100k events/minute per account via the Event API |
| Dashboard integration | Native support in New Relic dashboards, alerts, and SLIs |
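The Event API row above can also be exercised directly, without any agent. A minimal stdlib sketch, assuming the insights-collector endpoint and Api-Key header described in New Relic's Event API docs (verify the endpoint for your region and key type before relying on it; the account ID and key below are placeholders):

```python
import gzip
import json
import urllib.request

# Hypothetical values - substitute your own account ID and license key
ACCOUNT_ID = "1234567"
EVENTS_URL = f"https://insights-collector.newrelic.com/v1/accounts/{ACCOUNT_ID}/events"

def build_payload(events):
    """The Event API expects a JSON array of flat objects, each carrying
    an eventType that names the NRDB table to insert into."""
    for event in events:
        if "eventType" not in event:
            raise ValueError("every event needs an eventType")
    # Gzip the body to stay well under the payload size limit
    return gzip.compress(json.dumps(events).encode("utf-8"))

def send_events(events, license_key):
    request = urllib.request.Request(
        EVENTS_URL,
        data=build_payload(events),
        headers={
            "Content-Type": "application/json",
            "Content-Encoding": "gzip",
            "Api-Key": license_key,
        },
    )
    return urllib.request.urlopen(request)  # network call - not exercised here
```

In practice the OTel route shown below is usually preferable, since it reuses the pipeline you already have; the raw API is useful for batch backfills or systems with no OTel SDK.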
Example: Emit a New Relic Custom Event via OTel LogRecord
The key insight is that you don’t need the proprietary New Relic SDK to create Custom Events. If you’re already sending OTel data to New Relic, you can promote any LogRecord to a Custom Event by adding a single attribute:
newrelic.event.type=<EventType>
For example, a LogRecord with attribute newrelic.event.type=MyEvent will be ingested as a Custom Event with type=MyEvent.
Here’s a Python example:
logger.info("[model_evaluation]", extra={
    "newrelic.event.type": "ModelEvaluation",
    "model_name": "gpt-4o",
    "model_version": "2026-02-01",
    "quality_score": 0.87,
    "latency_ms": 1230,
    "prompt_tokens": 512,
    "completion_tokens": 256,
    "evaluation_method": "cosine_similarity",
    "environment": "production",
})
This event is now queryable in New Relic with NRQL:
SELECT average(quality_score), percentile(latency_ms, 95)
FROM ModelEvaluation
WHERE model_name = 'gpt-4o'
SINCE 1 day ago
TIMESERIES
Practical “Why” Scenarios
Scenario 1: Debugging a Bad Response
A user reports that the AI assistant gave a nonsensical answer. With OTel Events, you can:
- Find the user's request by trace_id or a user identifier.
- See the exact prompt that was sent to the LLM, including the system message and retrieved context chunks.
- Inspect the span timeline to identify whether the issue was a slow vector search, a malformed prompt template, or an upstream service timeout.
- Check the span_id to see if the embedding retrieval step returned irrelevant documents.
This level of detail is only possible because OTel Events are correlated to the full distributed trace.
Scenario 2: Tracking AI Quality Over Time
Your team ships a new prompt template or upgrades from one model version to another. With Custom Events, you can:
- Record quality_score, model_version, and prompt_template_id on every evaluation.
- Build an NRQL dashboard comparing quality across model versions:
SELECT average(quality_score)
FROM ModelEvaluation
FACET model_version
SINCE 7 days ago
TIMESERIES
- Set up an alert: if average(quality_score) drops below 0.7 for any 15-minute window, notify the team.
- Correlate quality dips with deployment events to quickly identify regressions.
Scenario 3: Product Analytics
Product managers want to understand user engagement and satisfaction patterns. Custom Events power dashboards like:
SELECT average(quality_score), count(*)
FROM LlmFeedbackMessage
FACET category
SINCE 30 days ago
TIMESERIES
This enables questions like: “Which feedback categories are trending negatively?” or “Did last week’s feature launch improve satisfaction scores?”
Scenario 4: Cost Tracking and Token Budgeting
With Custom Events, you can track token usage per request and aggregate it by team, feature, or customer:
SELECT sum(prompt_tokens) + sum(completion_tokens) AS 'total_tokens',
sum(estimated_cost_usd) AS 'total_cost'
FROM LlmUsage
FACET customer_tier
SINCE 1 month ago
This gives finance and engineering leadership direct visibility into AI infrastructure costs without requiring a separate analytics pipeline.
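The estimated_cost_usd attribute queried above has to be computed at emit time, before the event is sent. A minimal sketch of that calculation; the PRICING numbers are placeholder per-1k-token rates, not real list prices, so substitute your provider's current pricing:

```python
# Hypothetical per-1k-token rates in USD - look up current pricing
# for the models you actually use
PRICING = {
    "gpt-4o": {"prompt": 0.0025, "completion": 0.01},
}

def estimated_cost_usd(model, prompt_tokens, completion_tokens):
    """Compute the estimated_cost_usd attribute recorded on each LlmUsage event."""
    rates = PRICING[model]
    return round(
        prompt_tokens / 1000 * rates["prompt"]
        + completion_tokens / 1000 * rates["completion"],
        6,
    )

print(estimated_cost_usd("gpt-4o", 512, 256))  # → 0.00384
```

Keeping the rate table in one place means a price change is a one-line update, and every downstream NRQL sum stays consistent.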
Putting It Together: Dual-Track Strategy
The most effective observability strategy uses both event types in tandem. OTel Events handle diagnostic fidelity; Custom Events handle analytics velocity. Here’s how to think about the split:
| Dimension | OTel Events | New Relic Custom Events |
|---|---|---|
| Primary audience | Engineers, SREs | Product, AI/ML, leadership |
| Primary use case | Debugging, root-cause analysis | Dashboards, alerts, trends |
| Correlation | Trace-aligned (trace_id, span_id) | Standalone or loosely correlated |
| Query language | Depends on backend | NRQL (native) |
| Portability | Vendor-neutral | New Relic-specific |
| Data shape | Enriched log records | Flat, analytics-optimized rows |
| Retention | Follows log retention policy | Configurable (default 30 days) |
Example: Emit Both for an LLM Interaction
In practice, a single user interaction might produce both event types:
from opentelemetry import trace

current_span = trace.get_current_span()
ctx = current_span.get_span_context()
trace_id = format(ctx.trace_id, '032x')
span_id = format(ctx.span_id, '016x')

# 1. OTel Event: diagnostic detail tied to the trace
logger.info("[llm_completion]", extra={
    "event_name": "LlmCompletion",
    "trace_id": trace_id,
    "span_id": span_id,
    "model": "gpt-4o",
    "prompt_template": template_name,
    "prompt_hash": prompt_hash,
    "retrieved_chunks": len(context_docs),
    "completion_tokens": completion_tokens,
    "finish_reason": finish_reason,
})

# 2. Custom Event: analytics-ready signal for dashboards
logger.info("[model_eval]", extra={
    "newrelic.event.type": "LlmEvaluation",
    "model": "gpt-4o",
    "quality_score": quality_score,
    "latency_ms": latency_ms,
    "prompt_tokens": prompt_tokens,
    "completion_tokens": completion_tokens,
    "estimated_cost_usd": estimated_cost,
    "customer_tier": customer_tier,
    "feature": "chat_assistant",
})
The OTel Event gives you the ability to drill into a single request and see everything that happened. The Custom Event lets you zoom out and ask: “How is this model performing across all requests this week?”
When to Use Which: A Quick Decision Guide
- “I need to debug a specific request” → OTel Event (find it by trace_id)
- “I need a dashboard for stakeholders” → Custom Event (query with NRQL)
- “I need to alert on quality regression” → Custom Event (NRQL alert condition)
- “I need to understand why latency spiked” → OTel Event (inspect span waterfall)
- “I need to compare model versions” → Custom Event (FACET by model_version)
- “I need to reproduce a user’s exact experience” → OTel Event (full trace context)
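The routing in the guide above can be captured in two small helpers over stdlib logging; the function names and attributes here are illustrative, following the patterns used throughout this post:

```python
import logging

logger = logging.getLogger("llm.app")

def emit_diagnostic(event_name, trace_id, span_id, **attrs):
    """OTel-style event: semantically named and trace-correlated,
    for drilling into a single request."""
    logger.info(f"[{event_name}]", extra={
        "event_name": event_name,
        "trace_id": trace_id,
        "span_id": span_id,
        **attrs,
    })

def emit_analytics(event_type, **attrs):
    """Custom Event: a flat analytics row, promoted to an NRDB table
    via the newrelic.event.type attribute."""
    logger.info(f"[{event_type}]", extra={
        "newrelic.event.type": event_type,
        **attrs,
    })
```

Centralizing the split this way keeps call sites honest: anything diagnostic must supply trace context, and anything analytical gets a dedicated event type by construction.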
Final Takeaway
OpenTelemetry Events give engineers trace-aligned, structured diagnostics - the context needed to understand why something happened at the request level.
New Relic Custom Events give teams analytics-ready, business-level insights - the aggregate view needed to spot trends, set alerts, and make data-driven decisions.
The “why” is simple: debug fast, improve faster. Instrument with OTel Events for depth. Promote to Custom Events for breadth. Use both, and your observability practice covers the full spectrum from incident response to continuous improvement.
