Observability Glossary of Terms
Overview
The field of observability uses many terms to describe its various aspects. In general, observability covers telemetry events, logs, distributed traces, metrics, and crash reporting. Each of these categories can be broken down further into several subcategories. Together they give developers visibility into how their application or service is performing and how it is being used. This information can be used to investigate and fix bugs and to make informed decisions when prioritizing new software features and deprecations.
Below are some common terms used to describe the various aspects of each category of observability. Some terms are common to multiple categories, while others apply only to a single category. The terms listed here are grouped by the major observability categories. Some of these definitions include parts that may be specific to Omniverse or OpenTelemetry implementations, but for the most part these are common industry terms.
Within each group, the defined terms are sorted alphabetically.
Telemetry
Common
Attribute: A single key/value pair that is associated with a telemetry signal. Any OTel telemetry signal can have an arbitrary number of attributes dynamically attached to it when emitted. Note that including attributes with a signal can also have side effects. For example, attributes attached to a metric effectively split the instrument’s counter(s) for every combination of attribute key/value pairs provided, which can cause the total number of exported metrics to grow very quickly if care is not taken. Similarly, attributes added to log messages can unintentionally associate PII or intellectual property with those messages, so care must be taken to never include any kind of PII in log message attributes.

Collector: An application, separate from the app that emits telemetry signals, that collects, processes, and forwards those signals to a configured endpoint. The collector app acts as an entity-in-the-middle between the emitting app and the ingestion endpoint. When a collector app is used, the emitting app is configured to export its telemetry signals to the collector app, and the collector app is configured to emit to an ingestion endpoint. The collector app can also be configured to perform processing and transformation operations on each telemetry signal before transmitting it to the ingestion endpoint. The collector app is not an ingestion endpoint and should not be used as a replacement for full end-to-end testing of a telemetry pipeline.

Column: On the data endpoint side, a column refers to the data for a given field in a message across a full data table. This data table can be visualized as a grid where each row is a single telemetry signal in full, and each column is the data for a given field across all ingested signals. These tables can often be treated as sparse tables where a given column is only filled in for a given row if that specific signal contained that column’s field name.

Consent: When collecting telemetry data, seeking consent from the user to collect their personal information is always necessary. This may take the form of language in a license agreement that outlines what kinds of data are collected and states that agreeing to the license indicates consent to the collection. It could also take the form of an explicit notification that asks the user whether they agree to the data collection. When data is collected anonymously, user notification is still necessary, but the anonymous data may be collected on an opt-out basis. Regardless of what data is collected and how, user consent and notification are always required to avoid data privacy misuse claims.

Data Backend: A system or application running on a remote server whose purpose is to store, process, and query data from a collection of telemetry signals that have been ingested from one or more sessions of an app or service. The data backend is often an application such as Grafana, DataDog, Azure Monitor, etc. Its primary purposes are to store the collected data and to allow data scientists to query and analyze it.

Endpoint: A URL that represents a data endpoint capable of ingesting telemetry signals from an OpenTelemetry application. This URL must be configured to receive data using OTLP, which is usually done in settings specific to that data backend application. At most one endpoint may be specified per telemetry signal type with the OTel SDK. A single common endpoint may also be given as a fallback for any signal type that does not have its own endpoint configured.
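As a brief illustration of the Attribute entry above, the sketch below attaches attributes to a span and to a metric counter using the upstream opentelemetry-cpp API. The component, span, instrument, and attribute names are hypothetical, and the exact overloads available can vary between opentelemetry-cpp versions.

```cpp
#include "opentelemetry/metrics/provider.h"
#include "opentelemetry/trace/provider.h"

namespace metrics_api = opentelemetry::metrics;
namespace trace_api   = opentelemetry::trace;

void recordAssetLoad()
{
    // Attributes on a span become key/value pairs on the exported trace signal.
    auto tracer = trace_api::Provider::GetTracerProvider()->GetTracer("example.component");
    auto span   = tracer->StartSpan("asset.load");
    span->SetAttribute("asset.format", "usd");
    span->SetAttribute("cache.hit", true);
    span->End();

    // Attributes on a metric split the instrument: every unique combination of
    // attribute values is exported as its own series, so high-cardinality values
    // (user IDs, full paths, etc) should never be used here.  A real component
    // would create the meter and counter once and reuse them.
    auto meter   = metrics_api::Provider::GetMeterProvider()->GetMeter("example.component");
    auto counter = meter->CreateUInt64Counter("asset.loads");
    counter->Add(1, {{"asset.format", "usd"}, {"cache.hit", true}});
}
```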
Exporter: A configurable handler object that manages emitting messages from either the instrumented application or from a collector app. Every telemetry signal type must be configured with an exporter in order for its data to be emitted from an application. All exporter configurations default to “otlp”, which expects an endpoint to be defined that the signal’s data is sent to using OTLP.

Field: A field is the name of the key in a key/value pair in an emitted telemetry signal. Some fields in telemetry signals are mandatory, while others are added automatically by the OTel SDK or an OTel collector app. Once ingested, the key of each field becomes the name of a column in the storage table on the data backend.

General Data Protection Regulation (GDPR): The GDPR is a set of laws in the European Union that governs how personal data must be managed and the rights that individual users have to request or delete their own personal information collected by any entity operating within the EU. Any product or service that collects data from even a single EU citizen must respect the GDPR rules. Failure to follow these rules can result in significant fines of up to 4% of the entity’s gross income.

Instrumentation: Codebase instrumentation involves adding code to emit telemetry signals (ie: logs, metrics, traces) as needed using information that is available in code. Instrumentation is not automatic in any way. If a metric, log, or trace needs to be emitted, code must be explicitly added to support it.

OpenTelemetry: The industry standard open source telemetry management and ingestion protocol. This includes specifications for how both the server and client sides should behave when emitting or ingesting telemetry signals. SDKs implementing the client side are available for many programming languages and platforms. Several major server-side data ingestion and management software platforms also support OTLP data ingestion.

OpenTelemetry API: A C++ header-only implementation of the OpenTelemetry signal output API. Using this API does not require the use of the full OpenTelemetry SDK in all modules that need to emit data. As long as one module in the process is linked to the full OpenTelemetry SDK, this API can be used to emit signals.

OpenTelemetry SDK: A C++ SDK that includes a large set of libraries and headers covering the full OpenTelemetry SDK. The SDK must be linked to at least one module in the process to allow all of its global singleton objects to be accessible to the C++ OTel API. In the Omniverse case, the OTel SDK libraries are linked into the omni.observability.core library.

OTel: A shorthand for “OpenTelemetry”.

OTLP: The OpenTelemetry Protocol specification. This describes the protocol to be used for telemetry signal ingestion by an OpenTelemetry compliant data backend, and specifies how signals are transmitted, how errors are handled, and how signals are decoded.

Personally Identifiable Information (PII): PII is any specific piece or pieces of information that can be used to determine who a given user is or where they are located. Having PII present in collected data can be useful in tracking user behavior and application/service usage, but it also comes with the responsibility of following the various data privacy laws around the world (ie: GDPR).

Protocol: A set of rules and structures that defines how communication between machines or entities must proceed. A protocol covers topics such as which communication methods are allowed, how messages are formed and interpreted, and the expected order of communications in any conversation.
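To make the Endpoint and Exporter entries more concrete, the sketch below wires an OTLP/HTTP trace exporter pointed at a hypothetical ingestion URL into a tracer provider using opentelemetry-cpp SDK factory helpers. The URL is a placeholder, factory class names and signatures differ between opentelemetry-cpp releases, and an Omniverse app would normally rely on omni.observability.core to perform this setup rather than doing it by hand.

```cpp
#include "opentelemetry/exporters/otlp/otlp_http_exporter_factory.h"
#include "opentelemetry/exporters/otlp/otlp_http_exporter_options.h"
#include "opentelemetry/sdk/trace/simple_processor_factory.h"
#include "opentelemetry/sdk/trace/tracer_provider_factory.h"
#include "opentelemetry/trace/provider.h"

namespace otlp      = opentelemetry::exporter::otlp;
namespace trace_sdk = opentelemetry::sdk::trace;
namespace trace_api = opentelemetry::trace;

void initTracing()
{
    // The endpoint is the URL of an OTLP-capable ingestion service (or of a
    // collector app standing in front of it).  This URL is a placeholder.
    otlp::OtlpHttpExporterOptions options;
    options.url = "https://telemetry.example.com/v1/traces";

    // The exporter handles emitting each span; the processor decides when to
    // hand spans to the exporter (a batch processor is typical in production).
    auto exporter  = otlp::OtlpHttpExporterFactory::Create(options);
    auto processor = trace_sdk::SimpleSpanProcessorFactory::Create(std::move(exporter));
    std::shared_ptr<trace_api::TracerProvider> provider =
        trace_sdk::TracerProviderFactory::Create(std::move(processor));

    // Make this provider the process-global one so the header-only OTel API in
    // other modules can reach it.
    trace_api::Provider::SetTracerProvider(provider);
}
```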
Resources: Each OpenTelemetry telemetry signal automatically has a block associated with it called resources. This block is an immutable set of attributes that are treated as static for the duration of the session. The key/value pairs in the resources block must be set when OpenTelemetry is first initialized and should not be modified for the remainder of the session. The resources block typically includes static information such as a session ID, process information, service name (if any), and the OpenTelemetry implementation and SDK version information.

Row: On the data endpoint side, a row refers to all of the data associated with a single telemetry signal. This set of data values for the row is separated into individual columns based on each field name in the table the row’s data is stored in. This data table can be visualized as a grid where each row is a single telemetry signal in full, and each column is the data for a given field across all ingested signals. These tables can often be treated as sparse tables where a given column is only filled in for a given row if that specific signal contained that column’s field name.

Service: A service is a software application or function that is hosted in a cloud environment and is accessible for direct use by outside customers. Some services may be provided free of charge while others may require a paid license or valid account to use. A service may be as simple as returning a single value, or as complex as rendering a complete animated 3D scene. Services are often accessed through technologies such as REST APIs or gRPC.

Telemetry Signal: A telemetry signal represents a single emitted unit of information. This could take the form of a log message, trace span, metric value, crash report, or event record. The signal is emitted by an application using a specific data format (OTLP in this case) and is potentially ingested by a data endpoint.

Trace Propagation: Trace propagation refers to the process of continuing or modifying a trace span across multiple components, services, or applications. A trace may be started and finished within the context of a single process, or it may be passed to, adopted by, and continued in a separate process. A trace-parent ID is used to provide a unique identifier of both the trace operation overall and the currently active span in the trace. A trace can be unique to a single thread in a process or associated with all threads depending on how and where the trace was started. Trace propagation also involves attaching the active trace-parent ID to every other telemetry signal that is emitted by the application or service. This allows those other telemetry signals to be correlated with a given trace span to allow for better data and behavioral analysis of the application or service.
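The Trace Propagation entry can be illustrated with the W3C trace-context propagator from opentelemetry-cpp. The sketch below injects the active trace context into a plain std::map standing in for outgoing HTTP headers; the MapCarrier type and the header map are illustrative only, and the exact propagator wiring may differ between SDK versions.

```cpp
#include <map>
#include <string>

#include "opentelemetry/context/propagation/text_map_propagator.h"
#include "opentelemetry/context/runtime_context.h"
#include "opentelemetry/trace/propagation/http_trace_context.h"

// A minimal carrier that writes propagation headers ("traceparent",
// "tracestate", etc) into a std::map acting as outgoing HTTP headers.
class MapCarrier : public opentelemetry::context::propagation::TextMapCarrier
{
public:
    explicit MapCarrier(std::map<std::string, std::string>& headers) : headers_(headers) {}

    opentelemetry::nostd::string_view Get(opentelemetry::nostd::string_view key) const noexcept override
    {
        auto it = headers_.find(std::string(key.data(), key.size()));
        return it == headers_.end() ? opentelemetry::nostd::string_view()
                                    : opentelemetry::nostd::string_view(it->second);
    }

    void Set(opentelemetry::nostd::string_view key, opentelemetry::nostd::string_view value) noexcept override
    {
        headers_[std::string(key.data(), key.size())] = std::string(value.data(), value.size());
    }

private:
    std::map<std::string, std::string>& headers_;
};

void attachTraceParent(std::map<std::string, std::string>& outgoingHeaders)
{
    // Writes the active span's trace-parent ID into the headers so a downstream
    // service can adopt and continue the same trace.
    MapCarrier carrier(outgoingHeaders);
    opentelemetry::trace::propagation::HttpTraceContext propagator;
    propagator.Inject(carrier, opentelemetry::context::RuntimeContext::GetCurrent());
}
```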
Events
Event Signal: An event signal is a record of a specific operation or piece of activity having occurred in an application or service. An event signal is provided in a structured format (ie: JSON) with zero or more event-specific attributes attached. An event could indicate that a certain operation either started or stopped, provide information about the application, service, platform, or hardware it is being run on, or note that an action has been triggered by a user. An event signal’s contents often go beyond what a simple log message could provide. A structured format is used so that the ingested data is more readily accessible and processable by machines. Emitting an event requires a schema that outlines exactly which information is expected in a given event and how it is expected to be structured. The schema also provides a way to validate that the ingested event signal is as expected and contains the correct information. In Omniverse Kit apps and services, a telemetry event signal is emitted using omni.structuredlog.plugin through a generated helper class and macros. In terms of OpenTelemetry, an event signal is emitted as a log signal that has an extra set of attributes specific to the event being emitted.
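In Omniverse the schema-generated structured-log macros handle this, but purely to illustrate the OpenTelemetry side of the Event Signal entry, the sketch below emits an event-like log record whose attributes carry the structured payload, using the opentelemetry-cpp logs API. The event name and attribute keys are made up, and the exact log record setter surface can vary between SDK versions.

```cpp
#include "opentelemetry/logs/provider.h"
#include "opentelemetry/logs/severity.h"

namespace logs_api = opentelemetry::logs;

void emitStartupEvent()
{
    auto logger = logs_api::Provider::GetLoggerProvider()->GetLogger("example.events");

    // The event is emitted as a log record; its attributes carry the
    // structured, schema-defined fields of the event.
    auto record = logger->CreateLogRecord();
    record->SetSeverity(logs_api::Severity::kInfo);
    record->SetBody("app.startup");                 // event name as the body
    record->SetAttribute("app.version", "1.2.3");   // hypothetical event fields
    record->SetAttribute("gpu.count", 2);
    logger->EmitLogRecord(std::move(record));
}
```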
Logs
Log Metadata: Log metadata is a set of extra attributes associated with any given log message. These extra attributes indicate where in code the log message was emitted (ie: file, function, line number, module, source or channel name, thread and process ID, etc), when it was emitted (ie: timestamp(s)), and how it is to be categorized (ie: severity level, active trace-parent ID, etc).

Message Body: A message body is the formatted log message text that is associated with a given log signal. Under OTLP, a log signal must have a field named body that contains this formatted text. All other fields in the log message are metadata values that indicate where and when the log message was emitted. The message body is often not machine interpreted, but is rather used for display to a user in a retrieved log for a given application/service session.

Severity: A log message’s severity level is a number or label that indicates how serious the log message is. Under Carbonite’s logging system the supported severity levels are verbose (-2), info (-1), warn (0), error (1), and fatal (2). Under OpenTelemetry, the same level labels are supported with debug instead of verbose, plus the addition of trace. The related severity numbers range from 1 to 24, with each severity level having four separate sub-levels. The log severity number and text are included in each log signal that is emitted.
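As a small illustration of the Severity entry, the sketch below maps Carbonite’s numeric log levels (as listed above) onto the OpenTelemetry severity enumeration from the opentelemetry-cpp logs API. Since Carbonite has no trace level and OTel has no verbose level, verbose is mapped onto debug here; this mapping is an assumption, not a documented Carbonite rule.

```cpp
#include "opentelemetry/logs/severity.h"

// Maps a Carbonite log level (verbose = -2, info = -1, warn = 0, error = 1,
// fatal = 2) onto the OpenTelemetry severity enumeration.
opentelemetry::logs::Severity toOtelSeverity(int carbLevel)
{
    using opentelemetry::logs::Severity;
    switch (carbLevel)
    {
        case -2: return Severity::kDebug;  // verbose has no direct OTel equivalent
        case -1: return Severity::kInfo;
        case 0:  return Severity::kWarn;
        case 1:  return Severity::kError;
        case 2:  return Severity::kFatal;
        default: return Severity::kInfo;
    }
}
```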
Metrics
Counter: A monotonically increasing instrument that only records positive values and never decreases. A counter is used to track metrics such as total requests processed, total errors encountered, or total bytes sent. The counter value starts at zero when the instrument is first created and can only increase from there. In OpenTelemetry, counters are created through a meter object and can be synchronous (updated directly in code when an event occurs) or asynchronous (read periodically via a callback function). When exporting metrics, the counter value is typically aggregated and the cumulative sum is reported to the data backend. Care must be taken with counter attributes as each unique combination of attribute key/value pairs effectively creates a separate counter instance.

Export Interval: The time period in milliseconds between the starts of successive metric exports to the configured endpoint. This interval determines how frequently the metric reader collects data from all registered instruments and sends the aggregated metrics to the data backend. In OpenTelemetry, the default export interval is 60,000 milliseconds (60 seconds), but this can be configured to a shorter or longer period depending on the application’s needs. A shorter export interval provides more real-time metric data but increases network traffic and processing overhead. A longer interval reduces overhead but may miss short-lived metric spikes or changes. In Carbonite’s observability implementation, the export interval can be configured through settings or environment variables specific to the metric exporter being used.

Export Timeout: The maximum amount of time in milliseconds that the metric exporter will wait for an export operation to complete before considering it failed. If the data backend doesn’t acknowledge receipt of the metrics within this timeout period, the export operation is aborted and an error may be logged. This prevents the application from hanging indefinitely if the network connection is slow or the endpoint is unresponsive. In OpenTelemetry, the default export timeout is 30,000 milliseconds (30 seconds). Setting an appropriate timeout value balances the need to handle slow network conditions against the risk of prematurely canceling legitimate but slow exports. When an export times out, the metrics for that export cycle may be lost or may be retried depending on the exporter configuration. This value must always be less than or equal to the export interval time.

Gauge: An instrument that represents a single numerical value that can arbitrarily increase or decrease over time. Unlike counters which only go up, gauges can fluctuate in either direction and represent instantaneous measurements. Common examples include current memory usage, active connection count, CPU temperature, or queue depth. In OpenTelemetry, gauges are typically implemented as observable instruments where a callback function is registered to provide the current value when metrics are being collected. The gauge reports the most recent measurement at export time rather than aggregating values over the export interval. Gauges are useful for tracking resource utilization and system state that naturally varies over time. In Carbonite’s observability system, gauges can be created through the meter API to track runtime metrics that reflect current application state.

Histogram: An instrument that records a distribution of values by grouping measurements into configurable buckets or bins. Histograms are ideal for tracking metrics where you need to understand not just the average but also the distribution of values, such as request latencies, response sizes, or processing times. In OpenTelemetry, when a value is recorded to a histogram, it’s placed into predefined buckets (e.g., 0-10ms, 10-50ms, 50-100ms, 100-500ms, 500ms+) and the count for that bucket is incremented. When exported, the histogram includes the count of values in each bucket, the total count of all measurements, and often the sum of all values. This allows the data backend to calculate percentiles, medians, and other statistical measures. The bucket boundaries can be configured when creating the histogram instrument. Histograms provide more insight than simple averages but also consume more storage space per metric due to the per-bucket counters.

Instrument: A named object that is used to record measurements for a specific metric. An instrument is created through a meter object and is identified by a unique name within that meter’s scope. Each instrument has a specific type (counter, gauge, histogram, up/down counter, etc.) that determines how measurements are recorded and aggregated. Instruments can be either synchronous or asynchronous (observable). Synchronous instruments are updated directly in application code at the point where the measurement occurs, while asynchronous instruments are read periodically via callback functions. In OpenTelemetry, creating an instrument is a lightweight operation and instruments are typically created once during initialization and reused throughout the application’s lifetime. Each instrument can have attributes attached to recorded measurements, and each unique combination of attribute values effectively creates a separate metric instance. Instruments are the building blocks of the metrics system and provide the API for applications to record observability data.

Meter: A factory object that is used to create and manage metric instruments within a specific scope or component. In OpenTelemetry, a meter is obtained from a meter provider and is typically associated with a particular library, module, or instrumentation package through a name and optional version. The meter acts as a namespace for instruments, allowing different components to create instruments with the same name without conflicts. When creating instruments, the meter’s identifying information (name, version, schema URL) becomes part of the instrument’s metadata and is included with exported metrics. This allows data backends to differentiate between metrics from different components even if they share the same instrument names. In Carbonite’s observability implementation, meters are typically created once per plugin or component during initialization and stored for the lifetime of that component. The meter interface provides methods for creating all types of metric instruments including counters, histograms, gauges, and observable instruments.
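A minimal sketch of the Counter, Histogram, and Meter entries using the opentelemetry-cpp metrics API is shown below. The instrument names and units are hypothetical, and the exact overloads available can vary by SDK version; as noted above, instruments are created once and reused rather than recreated per measurement.

```cpp
#include <cstdint>

#include "opentelemetry/context/runtime_context.h"
#include "opentelemetry/metrics/provider.h"
#include "opentelemetry/nostd/unique_ptr.h"

namespace metrics_api = opentelemetry::metrics;

// Instruments are created from a meter and kept for the life of the component.
struct RequestMetrics
{
    RequestMetrics()
    {
        auto meter = metrics_api::Provider::GetMeterProvider()->GetMeter("example.requests", "1.0.0");
        requestCount   = meter->CreateUInt64Counter("requests.count", "Total requests processed", "{request}");
        requestLatency = meter->CreateDoubleHistogram("requests.latency", "Request latency", "ms");
    }

    void record(double latencyMs, bool succeeded)
    {
        // Counter: monotonically increasing cumulative total.
        requestCount->Add(1, {{"succeeded", succeeded}});

        // Histogram: each measurement lands in a bucket; the backend can later
        // derive percentiles and other statistics from the per-bucket counts.
        requestLatency->Record(latencyMs, {{"succeeded", succeeded}},
                               opentelemetry::context::RuntimeContext::GetCurrent());
    }

    opentelemetry::nostd::unique_ptr<metrics_api::Counter<uint64_t>> requestCount;
    opentelemetry::nostd::unique_ptr<metrics_api::Histogram<double>> requestLatency;
};
```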
Metric: A numeric measurement captured over time that reflects some aspect of an application’s or service’s behavior or state. Metrics are one of the three core pillars of observability (along with logs and traces) and are used to track quantitative data such as request rates, error counts, resource utilization, processing durations, and business-specific measurements. In OpenTelemetry, metrics are emitted as structured signals containing the instrument name, measurement value(s), timestamp, attributes, and metadata about the meter that created the instrument. Metrics are collected periodically according to the configured export interval and sent to a data backend where they can be aggregated, analyzed, and visualized. Unlike logs, which provide detailed event information, metrics provide high-level numerical trends and patterns that are efficient to collect, store, and query at scale. Metrics are essential for monitoring application health, detecting anomalies, capacity planning, and understanding long-term trends. The OTel metrics system supports various instrument types to accommodate different measurement patterns and use cases.

Observable Instrument: An asynchronous instrument type where the application registers a callback function that is invoked by the metrics system to retrieve the current measurement value(s). Unlike synchronous instruments where the application actively calls methods to record values, observable instruments are passive: the metrics system pulls values by calling the registered callback when it is time to export metrics. Observable instruments are ideal for measurements that represent current state rather than discrete events, such as current memory usage, active request count, or queue depth. In OpenTelemetry, observable instruments include observable counters, observable up/down counters, and observable gauges. The callback function is called once per export interval and should return the current value without performing expensive operations or blocking. Observable instruments reduce overhead by eliminating the need to continuously update values in application code; the value is only queried when needed for export. Care must be taken to ensure callback functions execute quickly to avoid delaying metric exports.

Raw Metric: A metric value or measurement that has been recorded but not yet aggregated, processed, or exported. Raw metrics represent individual data points captured by an instrument before the metric reader applies aggregation operations. In OpenTelemetry’s metrics pipeline, raw measurements flow from instruments to views (which select and configure aggregation), then to the metric reader (which applies the aggregation), and finally to the exporter (which formats and transmits the aggregated data). The raw metric data includes the measurement value, any attributes attached at recording time, and a timestamp. Depending on the instrument type and configured aggregation, raw metrics may be summed (for counters), averaged (for gauges), bucketed (for histograms), or aggregated in other ways before export. Most applications and data backends work with aggregated metrics rather than raw values, as aggregation reduces data volume and provides more meaningful insights over time. However, understanding the concept of raw metrics is important for troubleshooting why aggregated values may differ from expectations.

Up/Down Counter: An instrument whose cumulative total can both increase and decrease as positive or negative deltas are recorded. Unlike a regular counter which only increases, an up/down counter tracks a quantity that can both grow and shrink, such as the number of active connections, items in a queue, or currently executing tasks. The value starts at zero and changes by positive or negative deltas as events occur. In OpenTelemetry, up/down counters are useful for tracking resources that are acquired and released, where you want to know the current count rather than just total allocations. When a resource is acquired, a positive value is added to the counter; when it is released, a negative value is added to decrease the counter. Up/down counters can be synchronous (updated directly when events occur) or asynchronous/observable (read via callback). The counter value is aggregated and reported at export time, showing the net change over the export interval. This differs from a gauge, which reports instantaneous values rather than cumulative changes.
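To round out the metrics terms, the sketch below creates an up/down counter and an observable gauge with a callback through the opentelemetry-cpp API. The queue being measured is a stand-in for real application state, the instrument names are hypothetical, and observable-instrument callback signatures have changed between SDK versions.

```cpp
#include <atomic>
#include <cstdint>

#include "opentelemetry/metrics/provider.h"

namespace metrics_api = opentelemetry::metrics;
namespace nostd       = opentelemetry::nostd;

// Hypothetical application state measured by the gauge callback.
static std::atomic<int64_t> g_queueDepth{0};

// Called by the metrics system once per export interval; it should only report
// the current value and return quickly.
static void observeQueueDepth(metrics_api::ObserverResult result, void* /*state*/)
{
    if (nostd::holds_alternative<nostd::shared_ptr<metrics_api::ObserverResultT<int64_t>>>(result))
    {
        nostd::get<nostd::shared_ptr<metrics_api::ObserverResultT<int64_t>>>(result)
            ->Observe(g_queueDepth.load());
    }
}

void initQueueMetrics()
{
    auto meter = metrics_api::Provider::GetMeterProvider()->GetMeter("example.queue");

    // Up/down counter: the cumulative total grows and shrinks as items are
    // added to and removed from the queue.  Kept alive here as a static; real
    // code would store it with the owning component.
    static auto activeItems = meter->CreateInt64UpDownCounter("queue.active_items");
    activeItems->Add(1);   // item enqueued
    activeItems->Add(-1);  // item dequeued

    // Observable gauge: the registered callback is pulled at export time
    // instead of values being pushed from application code.
    static auto depthGauge = meter->CreateInt64ObservableGauge("queue.depth");
    depthGauge->AddCallback(observeQueueDepth, nullptr);
}
```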
Distributed Traces
Baggage: A mechanism for propagating arbitrary key/value pairs alongside trace context across service and process boundaries. Baggage allows applications to attach application-specific data that should follow a request as it flows through a distributed system, even when crossing process or network boundaries. Unlike span attributes, which are attached to specific spans, baggage is carried in the trace context and is accessible to all code executing within that context, including child spans and downstream services. In OpenTelemetry, baggage is propagated via standard HTTP headers or other protocol-specific mechanisms along with the trace-parent ID. Common uses for baggage include carrying user IDs, request IDs, feature flags, or routing information that multiple services need to access. However, baggage should be used sparingly as it adds overhead to every request; all baggage entries are transmitted with every trace propagation operation. Care must also be taken to never include PII or sensitive data in baggage as it may be logged or exposed in various places throughout the system.

Span: A named, timed operation that represents a single unit of work within a trace. A span records the start time, end time (or duration), and metadata about a specific operation such as a function call, API request, database query, or any other discrete task. Spans are the building blocks of distributed traces. Each span has a unique span ID and references a parent span ID (except for root spans, which have no parent), forming a tree or DAG structure that represents the complete execution flow. In OpenTelemetry, spans can have attributes attached that provide additional context about the operation, such as HTTP status codes, query parameters, error messages, or custom application data. Spans also track their status (OK, Error, Unset) and can contain events that mark specific points in time during the span’s lifetime. In Carbonite’s observability implementation, spans are created through the tracer API and can be nested to represent parent-child relationships between operations. When a span ends, it is recorded with all its accumulated data and eventually exported to a data backend as part of a complete trace.

Trace: A distributed trace represents the complete journey of a request or operation as it flows through an application or across multiple services in a distributed system. A trace consists of one or more spans organized in a parent-child hierarchy that shows the relationships between operations and their relative timing. The trace starts with a root span representing the initial operation and may include child spans for sub-operations, database calls, external API requests, and other work performed to complete the request. Each trace has a unique trace ID that is shared by all spans within that trace, allowing the complete operation flow to be reconstructed and visualized. In OpenTelemetry, traces are automatically propagated across process and service boundaries through context propagation mechanisms, allowing a single logical operation to be tracked across multiple physical systems. Distributed traces are essential for understanding system behavior, identifying performance bottlenecks, debugging errors in complex workflows, and analyzing dependencies between services. The trace data is collected from all participating services and sent to a data backend where it can be queried, visualized, and analyzed to understand the end-to-end behavior of distributed operations.
Trace Context: The contextual information that ties together all operations within a single distributed trace. The trace context includes the trace ID (identifying the overall trace), span ID (identifying the current span), trace flags (control flags for sampling and other behaviors), and optionally the trace state (additional vendor-specific data). In OpenTelemetry, trace context is propagated both within a process (through context objects passed between functions) and across process boundaries (through protocol-specific headers such as the HTTP traceparent and tracestate headers). The trace context is what enables distributed tracing: by propagating the context from service to service, all operations related to a single request can be correlated even when they occur in different processes, on different machines, or in different services. When a new span is created, it inherits the trace context from its parent, maintaining the trace ID while generating a new unique span ID. The W3C Trace Context specification standardizes how trace context is represented and propagated, ensuring interoperability between different observability tools and implementations. In Carbonite, trace context is automatically managed by the OpenTelemetry SDK and is accessible through the tracer API for cases where manual context propagation is needed.

Trace-Parent ID: A unique identifier that encodes both the trace ID and the current span ID, used for propagating trace context across boundaries. The trace-parent ID follows the W3C Trace Context specification format and consists of several components: a version number, the 128-bit trace ID (in hexadecimal), the 64-bit parent span ID (in hexadecimal), and trace flags indicating sampling and other control information. For example, a trace-parent might look like 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01, where 00 is the version, the first long hex string is the trace ID, the second hex string is the span ID, and 01 indicates the trace is sampled. The trace-parent ID is propagated through HTTP headers, gRPC metadata, or other protocol-specific mechanisms when a request crosses service boundaries. When a service receives a request with a trace-parent ID, it extracts the context and creates child spans that maintain the same trace ID but generate new span IDs, preserving the trace lineage. In OpenTelemetry, the trace-parent ID is automatically injected into outgoing requests and extracted from incoming requests by context propagators. This enables automatic distributed tracing without requiring manual correlation IDs or custom headers. The trace-parent ID is also attached to other telemetry signals (logs, metrics, events) emitted during the span’s lifetime, allowing all observability data to be correlated back to the originating trace.
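As an illustration of the Span and Trace entries, the sketch below creates a parent span and a nested child span through the opentelemetry-cpp tracer API. The operation names are hypothetical, and application code would typically go through the tracing helpers provided by its own observability layer rather than calling the OTel API directly.

```cpp
#include "opentelemetry/trace/provider.h"

namespace trace_api = opentelemetry::trace;

void openScene()
{
    auto tracer = trace_api::Provider::GetTracerProvider()->GetTracer("example.scene");

    // Parent span for the whole operation.  WithActiveSpan() makes it the
    // active span on this thread so children pick it up automatically.
    auto parent      = tracer->StartSpan("scene.open");
    auto parentScope = tracer->WithActiveSpan(parent);

    {
        // Child span: shares the parent's trace ID but gets its own span ID.
        auto child      = tracer->StartSpan("scene.parse");
        auto childScope = tracer->WithActiveSpan(child);
        child->SetAttribute("scene.format", "usd");
        child->End();
    }

    parent->SetStatus(trace_api::StatusCode::kOk);
    parent->End();
}
```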
Crash Reporting
Crash Dump: A binary file containing a snapshot of an application’s memory, thread, and CPU state at the moment it crashed or encountered a fatal error. The crash dump captures the contents of process memory including the call stack, thread states, register values, some heap memory, and other runtime information that can be analyzed to determine why the crash occurred. The size and detail level of crash dumps can be configured: a minidump contains only essential information like stacks and register states, while a full dump contains the complete process memory space. In Carbonite, crash dumps are generated by the crash reporter plugin when a fatal error occurs. These dumps can be analyzed using debuggers like WinDbg, Visual Studio, or GDB, or uploaded to crash analysis services that can automatically extract and symbolicate stack traces. Crash dumps are invaluable for debugging issues that only occur in production or are difficult to reproduce, as they preserve the exact state of the application at the failure point. However, crash dumps may contain sensitive data from memory, so care must be taken when sharing or uploading them to ensure PII and intellectual property are protected.

Crash Report: A structured document that contains information about an application crash including the stack trace, system information, exception details, and contextual metadata about what the application was doing when it crashed. Unlike a crash dump, which is a raw memory snapshot, a crash report is a processed and formatted summary designed to be human-readable and machine-parseable. In Carbonite’s crash reporting system, crash reports are generated automatically when a fatal error or unhandled exception occurs. The report typically includes the exception type and message, the call stack showing the sequence of function calls leading to the crash, information about the operating system and hardware, the application version and build details, loaded modules and libraries, and custom metadata that was registered by the application such as user actions, settings values, or application state. Crash reports can be displayed to the user, written to log files, or transmitted to a crash reporting backend service for aggregation and analysis. When submitted to a backend, crash reports are often grouped by similar stack traces to identify common failure patterns and prioritize fixes. The crash reporting interface in Carbonite allows applications to register custom metadata providers that add application-specific context to crash reports, making it easier to diagnose issues and understand the circumstances that led to the crash.

Metadata: Additional contextual information that is attached to a crash report to provide insight into the application’s state and environment at the time of the crash. Crash metadata includes both automatically collected system information and custom application-specific data. Standard metadata typically includes the operating system version and architecture, CPU information, available and used memory, the application name, version, and build number, the timestamp when the crash occurred, and the user’s locale and language settings. In Carbonite’s crash reporting system, applications can register custom metadata providers to add domain-specific information such as the currently loaded document or file, recent user actions or commands executed, relevant settings or configuration values, active plugins or extensions, network connectivity state, or any other application state that might help diagnose the crash. Metadata is collected at the time of the crash, included in the crash report, and associated with any crash dump files. Good metadata is crucial for understanding and reproducing crashes, especially for issues that only occur under specific configurations or usage patterns. When designing metadata for crash reports, care must be taken to exclude any PII or sensitive information, to keep the data volume reasonable, and to focus on information that will actually be useful for debugging rather than including everything possible.
Stack Trace: A report of the active stack frames at the moment an exception occurred or a crash happened, showing the sequence of function or method calls that led to the error. Each frame in the stack trace represents a function that was executing, along with the location in the source code (file name, line number) and the instruction address in memory. The stack trace is ordered from the most recent call (where the error occurred) at the top to the initial entry point of the thread at the bottom, allowing developers to trace the execution path that led to the failure. In Carbonite and OpenTelemetry implementations, stack traces are captured and included in crash reports and can also be attached to error spans in distributed traces. A symbolicated stack trace includes human-readable function names and source locations, while an unsymbolicated trace contains only raw memory addresses that must be resolved using debug symbols from the original binary. Symbolicating a stack trace requires access to the debug symbol packages for each module found on the call stack; these symbols can be retrieved from a symbol server or provided locally for lookup by the debugger or crash analysis tool. Stack traces may include frames from multiple threads if the crash occurred in a multi-threaded application, with each thread’s stack shown separately. The call stack is one of the most critical pieces of information for debugging crashes, as it pinpoints exactly where the error occurred and how the code reached that point, making it possible to understand the root cause even without reproducing the issue.
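As a hedged example of the crash Metadata entry, the sketch below attaches a couple of custom metadata values by writing them into a settings branch that the Carbonite crash reporter is commonly configured to watch. The /crashreporter/data path and the specific keys shown are assumptions based on that convention rather than a guaranteed API; consult the crash reporter documentation for the exact metadata mechanism in a given Carbonite version.

```cpp
#include <carb/InterfaceUtils.h>
#include <carb/settings/ISettings.h>

// Assumed convention: values written under /crashreporter/data/<key> are picked
// up by the crash reporter plugin and attached to crash reports as metadata.
// The keys and values below are hypothetical examples.
void registerCrashMetadata()
{
    carb::settings::ISettings* settings = carb::getCachedInterface<carb::settings::ISettings>();
    if (!settings)
        return;

    settings->setString("/crashreporter/data/activeDocument", "untitled.usd");
    settings->setString("/crashreporter/data/lastUserAction", "open_stage");
}
```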