Carbonite Log Signals through OpenTelemetry#

Overview#

The Carbonite observability system provides integration between the traditional Carbonite logging system and OpenTelemetry (OTel) log signals. This integration allows log messages emitted through the standard CARB_LOG_*() and OMNI_LOG_*() macros to be automatically captured, transformed, and transmitted to configured OpenTelemetry Protocol (OTLP) endpoints for ingestion by observability backends. This unified approach to logging enables log aggregation, querying, and correlation with other telemetry signals such as traces and metrics.

The omni.observability.core library and omni.observability-otel.plugin work together to provide this functionality. The core library manages the OpenTelemetry SDK lifecycle and configuration, while the plugin provides the integration layer that registers a log listener with the Carbonite logging system. Once registered, this listener intercepts log messages and transforms them into OTel log signals before forwarding them to the configured exporter.

There is minimal performance overhead for this integration when properly configured. Log messages are filtered based on severity and channel settings before being transformed, and the actual transmission to OTLP endpoints happens asynchronously in most configurations. This ensures that the observability instrumentation does not significantly impact application performance.

What is a Log Signal?#

In the OpenTelemetry context, a log signal is a structured telemetry signal that represents a single log entry emitted by an application or service. Unlike traditional text-based logs that consist primarily of formatted message strings, OTel log signals are structured records that include both the log message body and a rich set of metadata attributes. This structured approach makes log data machine-readable and enables querying, filtering, and correlation capabilities in observability backends.

An OTel log signal typically includes the following components:

  • Body: The formatted log message text. This is the human-readable message that would traditionally appear in a log file.

  • Severity: A numeric severity level and associated text label (e.g., “INFO”, “WARN”, “ERROR”, “FATAL”) indicating the importance of the log message. OpenTelemetry defines severity levels from 1 to 24 with four sub-levels for each major category.

  • Timestamp: The time when the log message was emitted. This may include multiple timestamp values depending on the logging system’s capabilities.

  • Attributes: A set of key/value pairs that provide context about where, when, and why the log message was emitted. Standard attributes include source file path, function name, line number, process ID, thread ID, and more.

  • Resources: Immutable resource attributes that apply to all telemetry signals from the application, such as service name, service version, deployment environment, and session ID.

  • Trace Context: Optional trace-parent ID that associates the log message with an active distributed trace span, enabling correlation between logs and traces.

The structured nature of OTel log signals enables several important capabilities that traditional text logs lack. Observability backends can efficiently index and query log data based on any attribute. Logs can be automatically correlated with traces and metrics that share the same trace context. Aggregation and statistical analysis can be performed across large volumes of log data. Filtering and alerting rules can be defined based on specific attribute values rather than brittle text pattern matching.
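
As a concrete illustration, the sketch below models these components as a plain C++ struct. This is conceptual only; the actual OpenTelemetry data model (and the types used by the OTel SDK) is considerably richer.

    // Conceptual sketch of an OTel log signal's components (not an OTel SDK type).
    #include <cstdint>
    #include <map>
    #include <string>

    struct LogSignalSketch
    {
        std::string body;                              // formatted, human-readable message text
        int32_t severityNumber = 0;                    // 1-24 on the OTel severity scale
        std::string severityText;                      // "INFO", "WARN", "ERROR", ...
        uint64_t timestampNs = 0;                      // emission time
        std::map<std::string, std::string> attributes; // e.g. code.file.path, thread.id
        std::map<std::string, std::string> resource;   // e.g. service.name, session.id
        std::string traceParent;                       // optional W3C trace-parent, if a span is active
    };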

Why Collect Log Messages?#

Collecting and centralizing log messages from applications and services is a fundamental practice in modern software operations and observability. Logs provide a chronological record of events and state changes that occurred during application execution, making them invaluable for debugging issues, understanding system behavior, and maintaining operational awareness.

The primary benefits of collecting log messages include:

  • Debugging and Troubleshooting: When an error or unexpected behavior occurs in production, log messages provide crucial context about what the application was doing leading up to the issue. Stack traces, error messages, and informational logs help developers understand the sequence of events and identify root causes. Without collected logs, diagnosing production issues often requires reproducing problems locally, which may be impossible for environment-specific or rare issues.

  • Operational Monitoring: Log messages provide real-time insight into application health and behavior. Warning and error messages can trigger alerts to notify operators of problems before they escalate. Informational messages can track the progress of long-running operations or important state transitions. By collecting logs centrally, operations teams can monitor multiple services simultaneously and quickly identify which components are experiencing issues.

  • Security and Compliance: Many security and compliance frameworks require detailed audit trails of system activity. Collected logs can record user actions, authentication attempts, data access patterns, and configuration changes. This audit trail is essential for security investigations, regulatory compliance, and forensic analysis after security incidents.

  • Performance Analysis: While dedicated metrics are better suited for quantitative performance monitoring, log messages can provide qualitative context about performance characteristics. Logs can identify which operations are slow, which code paths are being executed, and where bottlenecks might be occurring. Combined with distributed tracing, log messages help paint a complete picture of application performance.

  • Usage Analytics: Log messages can track how users interact with an application, which features are being used, and where users encounter problems. This information guides product development priorities and helps identify usability issues that may not surface through other telemetry signals.

  • Historical Analysis: Collected logs create a historical record of application behavior over time. This enables trend analysis, capacity planning, and retrospective investigations of past issues. The ability to query historical logs is invaluable when investigating problems that occurred hours or days ago.

  • Correlation with Other Signals: When log messages are collected alongside traces and metrics through an integrated observability platform, they can be automatically correlated to provide a comprehensive view of system behavior. A spike in error metrics can be immediately investigated by viewing the associated error logs. A slow trace span can be examined by reviewing logs emitted during that span’s execution. This correlation accelerates root cause analysis and reduces mean time to resolution.

In the context of OpenTelemetry, collected log signals gain additional value from their structured format and automatic integration with the broader observability ecosystem. The consistent attribute schema across logs, traces, and metrics enables powerful cross-signal queries and visualizations that would be difficult or impossible with traditional unstructured logs.

Carbonite Logging System Integration#

The integration between the Carbonite logging system and OpenTelemetry is implemented through a log listener registered with the carb::logging::ILogging interface. This listener, implemented in the OtelLogger class in omni.observability-otel.plugin, intercepts log messages as they flow through the Carbonite logging system and transforms them into OTel log signals.

How the Integration Works#

When the omni.observability-otel.plugin loads and starts up, it performs the following initialization sequence:

  1. Acquire Interfaces: The plugin acquires the IObservabilityProvider interface from the omni.observability.core library and, if available, the carb::settings::ISettings interface.

  2. Configure OTel: The plugin creates an OtelSettings implementation that reads configuration from both environment variables and Carbonite settings. It then initializes the IObservabilityProvider with this configuration, which in turn initializes the OpenTelemetry SDK with the appropriate log exporters, processors, and resource attributes.

  3. Register Log Listener: The plugin creates an instance of the OtelLogger class and registers it with the Carbonite logging system by calling carb::logging::getLogging()->addLogger(&g_logger). This log listener is only registered if the core library has successfully initialized and configured log signals for export; once registered, all subsequent log messages are routed through the OTel listener (a simplified sketch of this registration step follows this list).

  4. Apply Settings: The plugin subscribes to change notifications for certain settings under the /observability/ branch, allowing dynamic reconfiguration of log filtering and channel settings at runtime.
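
The registration in step 3 amounts to handing a carb::logging::Logger instance to the Carbonite logging system. The simplified sketch below illustrates the shape of that step; the actual OtelLogger class in omni.observability-otel.plugin is more involved, and the handler wiring shown here is assumed for illustration only.

    // Simplified sketch of step 3 above; not the plugin's actual code.
    #include <carb/logging/ILogging.h>
    #include <carb/logging/Log.h>

    // Assumed for illustration: the plugin's logger object whose message handler
    // forwards Carbonite log messages into the OTel logging pipeline.
    extern carb::logging::Logger g_logger;

    void registerOtelLogListener()
    {
        carb::logging::ILogging* logging = carb::logging::getLogging();
        if (logging)
        {
            // From this point on, every message that passes the core /log/level and
            // /log/channels/ filters is also delivered to the OTel log listener.
            logging->addLogger(&g_logger);
        }
    }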

Once registered, the OtelLogger listener receives every log message emitted through the Carbonite logging system. The listener’s handleMessage() method is called synchronously in the context of the logging thread, and it performs the following operations:

  1. Filter by Level and Channel: The listener first checks if the message’s severity level meets the configured emit level for the message’s channel. If the message level is below the threshold, the message is discarded immediately without further processing. This filtering is critical for performance as it prevents unnecessary work for log messages that won’t be transmitted anyway.

  2. Acquire Logger Provider: The listener retrieves the global OTel LoggerProvider singleton. If no provider is available (e.g., OTel is disabled), the message is silently discarded.

  3. Get Channel-Specific Logger: The listener requests a logger instance for the specific channel (source) name. OTel uses this channel name as the instrumentation scope, which allows the backend to distinguish between log messages from different components.

  4. Transform and Emit: The listener transforms the Carbonite LogMessage structure into an OTel log record by mapping severity levels, extracting attributes, and formatting the message body. It then calls EmitLogRecord() to send the log signal through the OTel pipeline.

The integration is designed to be transparent to application code. Existing code that uses CARB_LOG_*() or OMNI_LOG_*() macros requires no modifications to have its log messages collected through OpenTelemetry. The only requirement is that the omni.observability-otel.plugin be loaded before any log messages are emitted that should be collected.
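
For example, ordinary logging code such as the following needs no OTel-specific changes; once the plugin is loaded, these messages are also delivered to the OTel log listener (the function and messages here are illustrative):

    // Ordinary Carbonite logging; no OTel-specific code is needed.
    #include <carb/logging/Log.h>

    void loadAsset(const char* path)
    {
        if (!path)
        {
            // Error-level messages pass the default "warn" emit level and are exported.
            CARB_LOG_ERROR("asset path was null");
            return;
        }

        // Exported as an OTel log signal only if "info" passes the configured emit level.
        CARB_LOG_INFO("loading asset '%s'", path);
    }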

Performance Considerations#

The log listener implementation is designed to minimize performance impact, but there are several considerations to keep in mind:

  • Synchronous Filtering: Channel and level filtering happens synchronously in the logging thread before the message is passed to OTel. This filtering is very fast (simple integer comparison) and prevents unnecessary work for filtered messages.

  • Synchronous vs Asynchronous (Batch) Export: By default, log signals are queued asynchronously and exported in batches by a background thread. This prevents blocking the logging thread while waiting for network I/O. However, synchronous export can be enabled via the /observability/logs/synchronous setting if immediate transmission is required (e.g., for debugging). Synchronous export will block the logging thread until the log signal is transmitted, which can significantly impact performance.

  • String Lifetime: For the OTel ‘OStream’ exporter, the OTel SDK accepts string attributes as string_view references, which means the strings must remain valid until the log record is actually emitted. The current implementation only supports synchronous logging for this reason. Batch logging would require copying all string data or using an arena allocator to ensure strings remain valid until export completes. This has been fixed in version 1.21 and later of the OpenTelemetry SDK. In general, however, temporary strings or strings with a very short lifetime should not be used in a call to logger->EmitLogRecord() (part of the OTel API). Doing so would lead to undefined behavior. For example, logger->EmitLogRecord(..., std::string(param), ...); cannot guarantee the lifetime of the string beyond the return from the EmitLogRecord() call.

  • Queue Capacity: The log exporter uses a bounded queue to store log records before batching and export. If log messages are generated faster than they can be exported, the queue may fill up and cause log records to be dropped. The queue size can be configured via the /observability/logs/maxQueueSize setting (see Configuration).

Enabling and Disabling the OTel Log Listener#

The OTel log listener can be controlled through several configuration mechanisms. The listener is automatically enabled when the omni.observability-otel.plugin loads, assuming OpenTelemetry is not globally disabled. However, various settings and environment variables control whether log signals are actually exported.

Global OTel Disable#

The most direct way to disable all OpenTelemetry functionality including log collection is through the global disable setting:

  • Environment Variable: Set OTEL_SDK_DISABLED=true before the application starts.

  • Carbonite Setting: Set /observability/disabled to true in configuration files, on the command line, or programmatically.

When OTel is globally disabled, the log listener is not registered with the logging system. Log messages will not be received or transformed in any way in this case. This approach has minimal performance impact as the observability core library detects the disabled state early and short-circuits all operations.
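
Programmatically, the Carbonite setting can be applied through carb::settings::ISettings, as in this hedged sketch. For the listener never to be registered, the setting typically needs to be in place before the observability plugins start.

    // Hedged sketch: setting /observability/disabled programmatically. Assumes the
    // Carbonite framework is initialized and ISettings is available.
    #include <carb/Framework.h>
    #include <carb/settings/ISettings.h>

    void disableObservability()
    {
        carb::Framework* framework = carb::getFramework();
        auto* settings = framework ? framework->tryAcquireInterface<carb::settings::ISettings>() : nullptr;
        if (settings)
        {
            // Equivalent to OTEL_SDK_DISABLED=true; apply before the observability
            // plugins start so the log listener is never registered.
            settings->setBool("/observability/disabled", true);
        }
    }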

For more information on this setting, see the Configuration documentation.

Controlling Log Export#

Even when OTel is globally enabled, log export can be independently controlled:

  • Environment Variable: Set OTEL_LOGS_EXPORTER to an empty string or omit it entirely to disable log export.

  • Carbonite Setting: Set /observability/logs/exporter to an empty string to disable log export.

Valid exporter values include:

  • "otlp": Export logs via OTLP protocol to a configured endpoint (default if not specified).

  • "file": Write logs to a local file (for debugging purposes).

  • "console": Write logs to stdout/stderr (for debugging purposes).

  • "" (empty string): Disable log export entirely.

When the log exporter is disabled or set to an empty string, the log listener will not be registered and no log messages will be processed, transformed, or exported.

For more information on exporter configuration, see the Configuration documentation.

Unloading the Plugin#

The most complete way to disable the OTel log listener is to prevent the omni.observability-otel.plugin from loading at all. This can be done by removing the plugin from the pluginsLoaded list in application configuration files. When the plugin is not loaded, the log listener is never registered and all log messages flow through the normal Carbonite logging channels without any OpenTelemetry processing.

Runtime Considerations#

The log listener registration is performed during plugin startup and persists for the lifetime of the plugin. There is currently no mechanism to dynamically unregister the listener at runtime without unloading the entire plugin. However, the filtering and export settings can be modified at runtime through the settings system, allowing dynamic control over which messages are collected and where they are sent.

Filtering Log Messages#

The OTel log listener provides sophisticated filtering capabilities to control which log messages are transformed and exported. Filtering is essential for managing data volume, controlling costs, and ensuring that only relevant log messages are transmitted to observability backends. Three primary configuration options control log filtering: the base emit level, per-channel emit levels, and the option to adopt existing logging channel settings.

Base Emit Level#

The /observability/logs/emitLevel setting establishes the minimum severity level for log messages to be exported through OTel. This setting acts as a global threshold that applies to all log channels unless overridden by channel-specific settings.

  • Setting Path: /observability/logs/emitLevel.

  • Valid Values: Either an integer severity level or a string severity name (“verbose”, “info”, “warn”, “error”, “fatal”).

  • Default Value: "warn" (level 0 in Carbonite’s severity scale).

The base emit level can be set higher or lower than the core Carbonite logging system’s level (configured via /log/level). However, there is an important interaction to understand:

  • If the OTel emit level is lower than the core logging level, messages below the core logging level will never reach the OTel listener because they are filtered out earlier in the logging pipeline. The effective emit level is the maximum of the two settings.

  • If the OTel emit level is higher than the core logging level, the core logging system will emit the messages but the OTel listener will filter some of them out. This, for example, allows you to send verbose messages to local log files while only transmitting error-level messages to remote observability backends.

For more information on this setting, see the Configuration documentation.

Example Configuration:

[observability.logs]
    emitLevel = "error"  # Only emit error and fatal messages via OTel

Channel-Specific Emit Levels#

For more fine-grained control, you can specify different emit levels for specific log channels using the /observability/logs/channels/ settings branch. Each key in this branch represents a channel name pattern, and the value is the emit level for matching channels.

  • Setting Path: /observability/logs/channels/<pattern>.

  • Pattern Matching: The channel name pattern supports wildcards:

    • * matches zero or more characters.

    • ? matches exactly one character.

  • Valid Values: Same as base emit level (integer or string severity name).

Channel-specific settings override the base emit level for matching channels. If multiple patterns match a given channel name, the first matching pattern takes precedence.

For more information on this setting, see the Configuration documentation.

Example Configuration:

[observability.logs]
    emitLevel = "warn"  # Default emit level

    [observability.logs.channels]
        "omni.kit.*" = "error"        # Only errors from kit channels
        "omni.telemetry.*" = "info"   # Info and above from telemetry
        "my.debug.channel" = "verbose" # Everything from this debug channel

In this example:

  • Most channels use the default “warn” level.

  • Channels starting with “omni.kit.” only emit errors.

  • Channels starting with “omni.telemetry.” emit info, warn, error, and fatal.

  • The specific “my.debug.channel” emits all messages including verbose.

Adopting Logging Channel Settings#

To simplify configuration, the OTel log listener can automatically adopt the channel settings from the main Carbonite logging system. This is controlled by the /observability/logs/adoptLoggingChannelSettings setting.

  • Setting Path: /observability/logs/adoptLoggingChannelSettings.

  • Valid Values: Boolean (true or false).

  • Default Value: false.

When this setting is enabled (true), the OTel log listener reads all channel patterns and levels from the /log/channels/ settings branch at startup and uses them as the initial channel filters. This means any channel-specific log levels configured for the core logging system will also apply to OTel log export.

The adoption happens only at initialization time. After startup, the OTel channel settings become independent and can be modified without affecting the core logging channels, and vice versa. Any channel-specific settings explicitly defined in /observability/logs/channels/ will override the adopted settings even if /observability/logs/adoptLoggingChannelSettings is true.

For more information on this setting, see the Configuration documentation.

Example Configuration:

[log.channels]
    "omni.kit.app" = "verbose"
    "omni.telemetry" = "info"

[observability.logs]
    adoptLoggingChannelSettings = true  # Adopt the above channel settings
    emitLevel = "warn"                   # Default for channels not specified above

    [observability.logs.channels]
        "omni.kit.app" = "error"  # Override the adopted "verbose" setting for OTel export

In this example:

  • The core logging system emits verbose messages from “omni.kit.app”.

  • OTel initially adopts this setting but then immediately overrides it to only emit errors.

  • The “omni.telemetry” channel setting is adopted and not overridden, so info and above are emitted via OTel.

Filtering Performance#

The filtering implementation is highly optimized to minimize overhead. Channel name matching is performed using efficient string comparison and pattern matching algorithms. The effective emit level for each channel is cached, so most log messages only require a single integer comparison to determine if they should be processed. The filtering happens before any string allocation, attribute extraction, or OTel API calls, ensuring that filtered messages have minimal performance impact.
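
The sketch below illustrates the general idea: cache the effective emit level per channel after resolving it once through simple wildcard matching. It is not the plugin's actual implementation, which also handles thread safety and runtime settings changes.

    // Illustrative sketch only: per-channel emit-level caching with '*'/'?' wildcards.
    #include <cstdint>
    #include <string>
    #include <unordered_map>
    #include <utility>
    #include <vector>

    // '*' matches zero or more characters, '?' matches exactly one.
    static bool wildcardMatch(const char* pattern, const char* name)
    {
        if (*pattern == '\0')
            return *name == '\0';
        if (*pattern == '*')
            return wildcardMatch(pattern + 1, name) || (*name != '\0' && wildcardMatch(pattern, name + 1));
        if (*name != '\0' && (*pattern == '?' || *pattern == *name))
            return wildcardMatch(pattern + 1, name + 1);
        return false;
    }

    class ChannelFilter
    {
    public:
        // 'patterns' stand in for /observability/logs/channels/, 'baseLevel' for
        // /observability/logs/emitLevel (levels use Carbonite's -2..2 scale here).
        ChannelFilter(int32_t baseLevel, std::vector<std::pair<std::string, int32_t>> patterns)
            : m_baseLevel(baseLevel), m_patterns(std::move(patterns))
        {
        }

        // After the first lookup for a channel, this reduces to a hash lookup and
        // a single integer comparison.
        bool shouldEmit(const std::string& channel, int32_t messageLevel)
        {
            auto it = m_cache.find(channel);
            if (it == m_cache.end())
            {
                int32_t level = m_baseLevel;
                for (const auto& entry : m_patterns) // first matching pattern wins
                {
                    if (wildcardMatch(entry.first.c_str(), channel.c_str()))
                    {
                        level = entry.second;
                        break;
                    }
                }
                it = m_cache.emplace(channel, level).first;
            }
            return messageLevel >= it->second;
        }

    private:
        int32_t m_baseLevel;
        std::vector<std::pair<std::string, int32_t>> m_patterns;
        std::unordered_map<std::string, int32_t> m_cache;
    };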

Log Message Translation to OTel Format#

When a Carbonite log message passes the filtering checks and is ready to be exported via OTel, the OtelLogger listener transforms the carb::logging::LogMessage structure into an OpenTelemetry log record. This translation maps Carbonite’s logging concepts to the corresponding OTel semantic conventions while preserving all available context and metadata.

Severity Level Mapping#

Carbonite uses a five-level severity system with numeric values ranging from -2 to 2. OpenTelemetry uses a more granular 24-level system with four sub-levels for each major severity category. The translation maps Carbonite levels to the primary OTel severity levels:

Carbonite Level   Numeric Value   OTel Severity   OTel Numeric Range
Verbose           -2              TRACE           1-4
Info              -1              INFO            9-12
Warn              0               WARN            13-16
Error             1               ERROR           17-20
Fatal             2               FATAL           21-24

The translation is performed by the convertCarbLogLevelToOtelSeverity() utility function. Both the numeric severity value and the text severity name are included in the exported log record, allowing observability backends to filter and display logs based on either representation.
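
A minimal sketch of this mapping, using the OTel C++ API's Severity enum, is shown below. The actual convertCarbLogLevelToOtelSeverity() helper may differ in detail.

    // Maps Carbonite's -2..2 severity scale onto the base OTel severity values.
    #include <cstdint>
    #include <opentelemetry/logs/severity.h>

    inline opentelemetry::logs::Severity toOtelSeverity(int32_t carbLevel)
    {
        switch (carbLevel)
        {
            case -2: return opentelemetry::logs::Severity::kTrace; // Verbose -> TRACE (1)
            case -1: return opentelemetry::logs::Severity::kInfo;  // Info    -> INFO  (9)
            case 0:  return opentelemetry::logs::Severity::kWarn;  // Warn    -> WARN  (13)
            case 1:  return opentelemetry::logs::Severity::kError; // Error   -> ERROR (17)
            case 2:  return opentelemetry::logs::Severity::kFatal; // Fatal   -> FATAL (21)
            default: return opentelemetry::logs::Severity::kInvalid;
        }
    }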

Message Body#

The primary log message text becomes the OTel log record’s body field. In Carbonite, this is the formatted string that was passed to the CARB_LOG_*() or OMNI_LOG_*() macro after all formatting substitutions have been applied. The message body is the human-readable content that would traditionally appear in a text log file.

OTel represents the body as a generic AnyValue that can contain various data types. In the Carbonite integration, the body is always a string value containing the formatted message text. No additional parsing or transformation is performed on the message content.

Standard Attributes#

The OTel log record includes a rich set of attributes derived from the Carbonite LogMessage structure. These attributes follow the OpenTelemetry Semantic Conventions where applicable:

  • code.file.path: The full path to the source file that emitted the log message. This maps directly to the fileName field from the Carbonite log message. If the file name is not available, this attribute is set to an empty string to avoid null pointer issues.

  • code.line.number: The line number within the source file where the log message was emitted. This corresponds to the lineNumber field and is always included as an integer value.

  • code.function.name: The name of the function or method that emitted the log message. This maps to the functionName field. If the function name is not available, this attribute is set to an empty string.

  • process.pid: The process ID of the process that emitted the log message. This allows correlation of log messages when multiple processes are logging to the same backend.

  • thread.id: The thread ID of the thread that emitted the log message. Note that this attribute is currently in “development” status in the OTel semantic conventions, but is widely supported by observability backends. This enables correlation of log messages from specific threads and helps identify thread-specific issues.

Omniverse-Specific Attributes#

In addition to the standard OTel attributes, the Carbonite integration includes several custom attributes that carry Omniverse-specific metadata. These attributes use the omni. prefix to distinguish them from standard OTel attributes:

  • omni.trace.parent.id: The W3C trace-parent ID string if the log message was emitted within the context of an active distributed trace span through the OmniTrace library. This enables automatic correlation between log messages and traces in observability backends. If no trace context is active, this attribute contains an empty string. The trace-parent ID format follows the W3C Trace Context specification and includes the trace ID, span ID, and trace flags.

  • omni.timestamp: The high-resolution timestamp from the Carbonite logging system, expressed in nanoseconds. This provides more precise timing information than the standard OTel timestamp field, which may have lower resolution depending on the platform. The value is the timestampNs field converted to an integer count.

  • omni.extra_fields.global: A string containing global extra fields that were attached to the log message. Extra fields are a Carbonite logging feature that allows attaching arbitrary key/value pairs to log messages for additional context. Global extra fields are set at the process level and apply to all subsequent log messages. The string format is defined by the Carbonite logging system.

  • omni.extra_fields.thread: A string containing thread-local extra fields that were attached to the log message. Thread extra fields are set for a specific thread and only apply to log messages emitted from that thread. The string format is defined by the Carbonite logging system.

Resource Attributes#

In addition to the per-message attributes described above, every OTel log signal automatically includes resource attributes that describe the execution environment. These resource attributes are configured during IObservabilityProvider initialization and remain constant for the lifetime of the process. Common resource attributes include:

  • service.name: The name of the service or application, typically configured via OTEL_SERVICE_NAME or /observability/serviceName.

  • service.version: The version of the service or application.

  • session.id: A unique identifier for this execution session, used to correlate all telemetry signals from the same process lifetime.

  • deployment.environment: The deployment environment (e.g., “production”, “staging”, “development”).

  • telemetry.sdk.name and telemetry.sdk.version: Information about the OpenTelemetry SDK implementation.

These resource attributes are automatically included with every log signal and do not need to be explicitly added by the log listener. For more information on configuring resource attributes, see the Configuration documentation.

Instrumentation Scope#

Each OTel log record includes an instrumentation scope that identifies which component of the application emitted the log message. In the Carbonite integration, the instrumentation scope is set based on the log message’s channel (source) name.

When requesting a logger from the LoggerProvider, the listener passes:

  • Name: The channel/source name from the log message (e.g., “omni.kit.app”, “carb.filesystem”).

  • Version: Currently hardcoded to “1.0.0”.

  • Schema URL: Not currently set.

This allows observability backends to filter and group log messages by their source channel, making it easy to focus on logs from specific components or plugins.
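
The sketch below condenses the translation steps into a single function using the OTel C++ API. It is illustrative only: the exact GetLogger() and EmitLogRecord() overloads vary between OTel SDK versions, and the plugin's real implementation carries the full attribute set described above.

    // Illustrative only; not the plugin's actual OtelLogger implementation.
    #include <cstdint>
    #include <map>
    #include <string>

    #include <opentelemetry/common/attribute_value.h>
    #include <opentelemetry/logs/provider.h>
    #include <opentelemetry/logs/severity.h>

    void emitAsOtelLogSignal(const char* channel,
                             opentelemetry::logs::Severity severity,
                             const char* body,
                             const char* file,
                             int32_t line,
                             const char* function)
    {
        // The channel name becomes the instrumentation scope name; the scope
        // version is currently hardcoded to "1.0.0". The real listener also bails
        // out early when no usable logger provider is configured.
        auto provider = opentelemetry::logs::Provider::GetLoggerProvider();
        auto logger = provider->GetLogger(channel, channel, "1.0.0");

        // A subset of the standard attributes described above; empty strings are
        // used when source information is unavailable.
        std::map<std::string, opentelemetry::common::AttributeValue> attributes = {
            { "code.file.path", file ? file : "" },
            { "code.line.number", line },
            { "code.function.name", function ? function : "" },
        };

        logger->EmitLogRecord(severity, body, attributes);
    }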

String Lifetime and Safety#

An important implementation detail of the translation is that the OTel SDK accepts most attributes as string_view references rather than copying the strings immediately. This means the original strings must remain valid until the log record is actually exported (which may happen asynchronously). This has been fixed in version 1.21 of the OpenTelemetry SDK.

Log Message Emission Pipeline#

Understanding the complete journey of a log message from emission to ingestion helps with debugging configuration issues and optimizing performance. This section describes the expected behavior and sequence of events when a log message flows through the system.

Local Application Pipeline#

The journey begins when application code invokes a logging macro such as CARB_LOG_ERROR() or OMNI_LOG_WARN():

  1. Core Logging System: The Carbonite logging system receives the message and applies its own filtering rules based on the /log/level and /log/channels/ settings. If the message passes these filters, it is formatted and dispatched to all registered log listeners, including the default console/file loggers and the OTel log listener.

  2. OTel Log Listener: The OtelLogger::handleMessage() method is invoked synchronously in the calling thread. This method:

    • Determines the effective emit level for the message’s channel.

    • Compares the message severity against this threshold.

    • Returns immediately if the message should be filtered.

    • Otherwise, proceeds with transformation and emission.

  3. Channel Filtering: The listener queries the effective emit level for the message’s channel by checking:

    • Channel-specific settings in /observability/logs/channels/ (matching wildcards if necessary).

    • The base emit level in /observability/logs/emitLevel if no channel-specific setting matches.

    • Returns the determined threshold value.

  4. Logger Acquisition: If the message passes filtering, the listener:

    • Retrieves the global OTel LoggerProvider singleton.

    • Requests a logger instance for the message’s channel name as the instrumentation scope.

    • Returns immediately if no logger provider is available (e.g., OTel is disabled).

  5. Message Transformation: The listener transforms the Carbonite log message into OTel format:

    • Maps the Carbonite severity level to OTel severity.

    • Extracts all available attributes from the LogMessage structure.

    • Creates the attributes map with both standard and custom attributes.

  6. Emit to OTel: The listener calls logger->EmitLogRecord() with the transformed data. This hands the log record to the OTel SDK’s logging pipeline.

  7. OTel Processing Pipeline: The OTel SDK processes the log record through its configured pipeline:

    • Resource attributes are automatically attached.

    • The active trace context (if any) is captured and associated with the log record.

    • The log record is passed to the configured log processor (typically a batch processor).

  8. Batch Processor Queuing: The batch processor (if configured) adds the log record to its internal queue:

    • The log record is stored in a bounded queue with size configured by /observability/logs/maxQueueSize.

    • If the queue is full, the log record may be dropped (depending on drop policy).

    • Queuing is typically very fast (lock-free or low-contention).

  9. Background Batch Export: A background worker thread in the OTel SDK (see the batch-options sketch after this list):

    • Wakes up periodically according to /observability/logs/scheduleDelay (default 5 seconds).

    • Collects up to /observability/logs/maxExportBatchSize log records from the queue (default 512).

    • Passes the batch to the configured log exporter.
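
The sketch below shows how the batch-related settings in steps 8 and 9 correspond to the OTel C++ SDK's batch processor options. The field names are the SDK's; how the core library actually wires them up is not shown here, and the queue size used is only a placeholder.

    // Rough correspondence between the settings above and the OTel C++ SDK's
    // batch log record processor options.
    #include <chrono>
    #include <opentelemetry/sdk/logs/batch_log_record_processor_options.h>

    opentelemetry::sdk::logs::BatchLogRecordProcessorOptions makeBatchOptions()
    {
        opentelemetry::sdk::logs::BatchLogRecordProcessorOptions options;
        options.max_queue_size = 2048;                                   // /observability/logs/maxQueueSize (placeholder value)
        options.schedule_delay_millis = std::chrono::milliseconds(5000); // /observability/logs/scheduleDelay (5 s default)
        options.max_export_batch_size = 512;                             // /observability/logs/maxExportBatchSize (512 default)
        return options;
    }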

Synchronous vs Asynchronous Export#

The pipeline described above uses asynchronous batching, which is the default and recommended configuration. However, synchronous export can be enabled via the /observability/logs/synchronous setting for debugging purposes. Note that synchronous export should not be used in production environments since it can have significant performance implications.

Asynchronous (Batch) Export (default):

  • Log records are queued and exported in batches by a background thread.

  • The logging thread returns immediately after queuing the log record.

  • Minimal impact on application performance.

  • Log records may be lost if the application crashes before they are exported.

  • Bounded queue may drop log records if the export rate cannot keep up with the emission rate.

Synchronous Export:

  • Log records are exported immediately in the calling thread.

  • The logging thread blocks until the export completes (including network I/O).

  • Guarantees that log records are exported before the logging call returns.

  • Significant performance impact due to blocking on network I/O.

  • Useful for debugging or scenarios where log delivery must be guaranteed.

For more information on configuring synchronous export, see the Configuration documentation.

Exporter-Specific Behavior#

The final stage of the pipeline depends on the configured exporter type:

OTLP Exporter:

  • The exporter serializes the batch of log records into the OTLP wire format.

  • The serialization format depends on the protocol setting (gRPC, HTTP/protobuf, HTTP/JSON).

  • The batch is transmitted to the configured endpoint via HTTP POST or gRPC call.

  • If the transmission succeeds (2xx status code), the batch is acknowledged and the log records are discarded.

  • If the transmission fails, the behavior depends on the retry policy (typically the records are dropped).

File Exporter:

  • The exporter formats the log records as JSON (one record per line).

  • Each log record is written to the configured file path.

  • File operations are performed synchronously. There is no asynchronous option.

  • If the file cannot be written, error messages are logged but log records are not queued for retry.

Console Exporter:

  • The exporter formats the log records as human-readable text.

  • Each log record is written to stdout or stderr.

  • Console output is performed synchronously.

  • Useful for local debugging and development.

For more information on configuring exporters, see the Configuration documentation.

Network Transmission to OTLP Endpoint#

When using the OTLP exporter, log records are transmitted over the network to a configured endpoint. The endpoint must implement the OTLP specification for accepting log data.

HTTP/Protobuf (recommended, default):

  • Log records are serialized using Protocol Buffers.

  • The batch is transmitted via HTTP POST to the endpoint URL.

  • The Content-Type header is set to application/x-protobuf.

  • The endpoint must respond with a 2xx status code to acknowledge receipt.

  • Connection reuse and HTTP keep-alive optimize performance for repeated batches.

HTTP/JSON:

  • Log records are serialized as JSON.

  • The batch is transmitted via HTTP POST to the endpoint URL.

  • The Content-Type header is set to application/json.

  • Less efficient than protobuf but easier to debug and inspect.

  • Useful for development and testing.

gRPC:

  • Log records are serialized using Protocol Buffers.

  • The batch is transmitted via gRPC call to the endpoint.

  • Provides more sophisticated connection management and flow control.

  • May be required by some observability backends.

The endpoint URL is configured via OTEL_EXPORTER_OTLP_LOGS_ENDPOINT or /observability/logs/otlp/endpoint. If no logs-specific endpoint is set, the general OTEL_EXPORTER_OTLP_ENDPOINT is used as a fallback. For more information, see the Configuration documentation.
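
Internally, this endpoint configuration corresponds roughly to constructing an OTLP HTTP log exporter pointed at the chosen URL, as in this hedged sketch using the OTel C++ SDK factories (the endpoint shown is just an example, and header/class names may vary slightly between SDK versions):

    // Hedged sketch of what the endpoint configuration amounts to inside the SDK.
    #include <memory>
    #include <opentelemetry/exporters/otlp/otlp_http_log_record_exporter_factory.h>
    #include <opentelemetry/exporters/otlp/otlp_http_log_record_exporter_options.h>
    #include <opentelemetry/sdk/logs/exporter.h>

    std::unique_ptr<opentelemetry::sdk::logs::LogRecordExporter> makeOtlpLogExporter()
    {
        opentelemetry::exporter::otlp::OtlpHttpLogRecordExporterOptions options;
        // Mirrors OTEL_EXPORTER_OTLP_LOGS_ENDPOINT / /observability/logs/otlp/endpoint.
        options.url = "http://localhost:4318/v1/logs";
        return opentelemetry::exporter::otlp::OtlpHttpLogRecordExporterFactory::Create(options);
    }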

Error Handling and Retries#

The current OTel SDK implementation has limited retry logic for failed exports:

  • If a batch export fails (network error, endpoint unavailable, non-2xx response), the log records in that batch are typically dropped immediately.

  • There is no automatic retry or persistent queue for failed exports.

  • Errors during export are logged via the OTel SDK’s internal error handler (configured via OTEL_LOG_LEVEL).

For production deployments where log delivery must be more reliable, consider using an OpenTelemetry Collector as an intermediary (described in the next section).

Shutdown and Flush#

When the application shuts down, the observability system performs a final flush to ensure that queued log records are exported:

  1. The IObservabilityProvider::finish() method is called (typically via omni::observability::State destructor or plugin shutdown).

  2. The OTel SDK is instructed to shut down, which triggers a flush of all pipelines.

  3. The log batch processor exports any remaining queued log records.

  4. The system waits up to /observability/logs/shutdownFlushTimeout milliseconds for the final export to complete.

  5. If the timeout expires, any remaining log records are discarded and the shutdown proceeds.

The shutdown flush timeout defaults to 20 seconds. This can be adjusted if you have a large volume of queued log records or slow network conditions. For more information, see the Configuration documentation.

OpenTelemetry Collector Integration#

The OpenTelemetry Collector is a standalone application that can receive, process, and forward telemetry signals between applications and observability backends. While not required for basic log collection, the Collector provides several important capabilities for production deployments.

What is the OTel Collector?#

The OpenTelemetry Collector acts as a telemetry proxy, sitting between your application and your observability backend. Instead of configuring your application to send logs directly to a backend like Grafana, DataDog, or Azure Monitor, you configure it to send logs to a local or nearby Collector instance. The Collector then handles forwarding the data to one or more backends.

The Collector is a separate process (not a library) that must be deployed and managed independently of your application. It can run on the same machine as your application (localhost deployment), on a centralized server in your datacenter, or in a cloud environment. Multiple applications can send telemetry to a single Collector instance, which then aggregates and forwards the data.

Key capabilities of the Collector include:

  • Protocol Translation: The Collector can receive telemetry in one format (e.g., OTLP) and export it in a different format (e.g., Prometheus, Jaeger, vendor-specific formats). This allows your application to use standard OTLP while integrating with backends that don’t natively support OTLP.

  • Batching and Compression: The Collector can batch small telemetry signals into larger exports and apply compression to reduce network bandwidth usage. This is especially valuable for high-volume applications where per-signal network overhead would be significant.

  • Buffering and Retry: The Collector maintains persistent queues for telemetry data and automatically retries failed exports. If an observability backend is temporarily unavailable, the Collector will buffer the data and retry delivery once the backend comes back online. This provides much more reliable delivery than direct application-to-backend transmission.

  • Filtering and Sampling: The Collector can filter telemetry signals based on attributes, severity, or other criteria. It can also apply sampling strategies to reduce data volume while maintaining statistical representativeness. This allows you to implement organization-wide telemetry policies without modifying application code.

  • Enrichment: The Collector can add additional attributes to telemetry signals, such as environment labels, datacenter locations, or cluster identifiers. This enrichment happens transparently to the application and ensures consistent metadata across all telemetry.

  • Multi-Backend Export: A single Collector instance can export telemetry to multiple backends simultaneously. This is useful for sending data to both a production observability system and a testing environment, or for migrating between observability vendors without downtime.

  • Security and Authentication: The Collector can handle authentication and credential management for observability backends, keeping sensitive credentials out of application configuration. It can also enforce TLS/SSL for secure transmission of telemetry data.

How the Collector Integrates with Carbonite Logging#

To use the Collector with Carbonite log signals, you configure your application to send OTLP log data to the Collector’s endpoint instead of directly to your observability backend:

  1. Deploy the Collector: The Collector application must be running and accessible from your application. For local development, you can run the Collector on localhost:4318 (HTTP) or localhost:4317 (gRPC). For production deployments, the Collector typically runs as a service on a known hostname or IP address.

  2. Configure the Collector: The Collector uses a YAML configuration file that defines:

    • Receivers: Which protocols and endpoints the Collector listens on (e.g., OTLP HTTP on port 4318).

    • Processors: What transformations or filtering to apply to telemetry signals.

    • Exporters: Where to send the processed telemetry (e.g., a Grafana Loki endpoint).

    • Pipelines: How receivers, processors, and exporters are connected together.

    Example Collector configuration for receiving logs:

    receivers:
      otlp:
        protocols:
          http:
            endpoint: 0.0.0.0:4318
          grpc:
            endpoint: 0.0.0.0:4317
    
    processors:
      batch:
        timeout: 10s
        send_batch_size: 1024
    
    exporters:
      otlphttp:
        endpoint: https://your-backend.example.com/v1/logs
        headers:
          Authorization: "Bearer YOUR_API_KEY"
    
    service:
      pipelines:
        logs:
          receivers: [otlp]
          processors: [batch]
          exporters: [otlphttp]
    
  3. Configure Your Application: Set the OTLP logs endpoint to point to the Collector:

    export OTEL_LOGS_EXPORTER=otlp
    export OTEL_EXPORTER_OTLP_LOGS_PROTOCOL=http/protobuf
    export OTEL_EXPORTER_OTLP_LOGS_ENDPOINT=http://localhost:4318/v1/logs
    

    Or in a configuration file:

    [observability.logs]
        exporter = "otlp"
        [observability.logs.otlp]
            protocol = "http/protobuf"
            endpoint = "http://localhost:4318/v1/logs"
    
  4. Log Flow: With this configuration, the log emission pipeline changes:

    • Your application emits CARB_LOG_*() or OMNI_LOG_*() messages.

    • The OTel log listener transforms them into OTel log signals.

    • The OTLP exporter sends the log signals to the Collector at http://localhost:4318/v1/logs.

    • The Collector receives the log signals through its OTLP receiver.

    • The Collector’s batch processor aggregates the log signals.

    • The Collector’s OTLP HTTP exporter sends batches to the final observability backend.

    • The backend ingests and stores the log signals for querying and analysis.

Benefits of Using a Collector#

There are several reasons you might want to use a Collector rather than sending logs directly from your application to an observability backend:

  • Reliability: The Collector provides retry logic and persistent buffering that the application-side OTel SDK lacks. If your observability backend is temporarily unavailable, the Collector will buffer log signals and retry delivery, preventing data loss.

  • Performance: By sending log signals to a nearby Collector (e.g., on localhost or the same datacenter), network latency is minimized and transmission is more reliable. The Collector then handles the slower, more reliable transmission to remote backends.

  • Centralized Configuration: Telemetry policies such as sampling rates, filtering rules, and export destinations can be managed centrally in the Collector rather than requiring configuration changes across all applications.

  • Backend Migration: If you need to change observability backends, you can reconfigure the Collector without touching application deployments. The applications continue sending OTLP to the Collector, and the Collector handles the new export format and destination.

  • Multi-Tenancy: A single Collector can route telemetry from multiple applications to different backends based on attributes or other criteria. This is useful in shared infrastructure where different teams use different observability tools.

Managed Collector Instances#

Some deployment environments provide managed Collector instances as a service. For example, Kubernetes environments might deploy a Collector as a DaemonSet that runs on each node, with applications configured to send telemetry to localhost. Cloud providers might offer Collector-as-a-Service where you send telemetry to a managed endpoint without deploying your own Collector instances.

The omni.observability.collector component provides capabilities for automatically launching and managing a Collector instance alongside Kit-based applications, though this is typically disabled by default. The expectation is that production deployments will use centrally-managed Collector infrastructure rather than application-managed instances.

For more information about Collector configuration and deployment, refer to the OpenTelemetry Collector documentation.


For configuration reference and detailed settings documentation, see Configuration. For definitions of observability terminology, see Glossary. For information on metrics and tracing, see Metrics and the Overview.