omni.kit.test

Python asyncio-centric testing system.

To create a test, derive from omni.kit.test.AsyncTestCase and add a method that starts with test_, as in unittest. The method can be either async or a regular one.

import omni.kit.test

class MyTest(omni.kit.test.AsyncTestCase):
    async def setUp(self):
        pass

    async def tearDown(self):
        pass

    # Actual test, notice it is "async" function, so "await" can be used if needed
    async def test_hello(self):
        self.assertEqual(10, 10)

The test class must be defined in the “tests” submodule of your public extension module. For example, if your extension.toml defines:

[[python.module]]
name = "omni.foo"

then omni.foo.tests.MyTest should be the path to your test. The test system will automatically discover and import the omni.foo.tests module. Using the tests submodule of your extension module is the recommended way to organize tests. It keeps tests together with the extension, but not too tightly coupled with the actual module they test, so that they can import the module by absolute path (e.g. import omni.foo) and test it the way users will see it.

Refer to the omni.example.hello extension for the simplest example of an extension with a Python test.

Settings

For the settings, refer to the extension.toml file:

[core]
reloadable = false
order = -1000

[package]
title = "Testing System"
category = "Internal"


[dependencies]
"omni.kit.async_engine" = {}
"omni.kit.loop" = {}
"omni.kit.pip_archive" = {}

[[python.module]]
name = "omni.kit.test"

[[native.plugin]]
path = "bin/*.plugin"

[settings]

# Wait a few updates (to allow all extensions to load), run tests to completion, and quit.
exts."omni.kit.test".runTestsAndQuit = false

# Do not quit after running. Failures won't be communicated via the app return value; use for debugging only.
exts."omni.kit.test".doNotQuit = false

# Wait a few updates (to allow all extensions to load), print all tests to stdout, and quit
exts."omni.kit.test".printTestsAndQuit = false

# Filter which tests to run: Python's fnmatch is used. Use `*`, `?`, etc. Test ids look like [module].[class].[method].
# E.g.: "omni.client.tests.test_client.TestClient.test_list_async"
exts."omni.kit.test".includeTests = []
exts."omni.kit.test".excludeTests = []
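Since the filters use Python's standard fnmatch, their behavior can be previewed outside of Kit. The rule below for combining include and exclude lists is a reasonable assumption, not the verified implementation:

```python
from fnmatch import fnmatch

test_id = "omni.client.tests.test_client.TestClient.test_list_async"

def should_run(test_id, include, exclude):
    """A test runs if it matches any include pattern (an empty include list
    means include everything) and matches no exclude pattern."""
    included = not include or any(fnmatch(test_id, p) for p in include)
    excluded = any(fnmatch(test_id, p) for p in exclude)
    return included and not excluded

print(should_run(test_id, ["*test_list*"], []))   # True
print(should_run(test_id, [], ["*TestClient*"]))  # False
```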

# To select a subset of tests to run, specify a string with wildcards to filter tests. `includeTests` and `excludeTests`
# above are used by the extension testing system to select tests from the extension in question. For user-side filtering
# (narrowing the tests further within one extension), use this one: (Shortcut: `-f`)
exts."omni.kit.test".runTestsFilter = ""

# Path to a playlist of tests to run. Use it to replay tests in a specific order or to run a subset of tests.
# Note: each test run generates a playlist file named `exttest_<extension_name>_<run_count>.log`
exts."omni.kit.test".runTestsFromFile = ""

# Path to output test data (logs, crash dumps, profile traces). Relative to CWD.
exts."omni.kit.test".testExtOutputPath = "${omni_data}/_testoutput"

# Delete all content from output path before each test run
exts."omni.kit.test".testExtCleanOutputPath = false

# App used for extension tests
exts."omni.kit.test".testExtApp = "${kit}/apps/omni.app.test_ext.kit"

# Select test process ([[test]] entries) to run by name. (Shortcut: `-n`)
exts."omni.kit.test".testExtTestNameFilter = ""

# If the passed extension is a kit file, it will replace `testExtApp` if true. If false, treat the kit file as a regular
# extension, allowing a kit file to run in the context of another app (kit file)
exts."omni.kit.test".testExtUseKitFileAsApp = true

# Run a UI with the list of python tests instead of auto-running them. Useful for running single tests. Shortcut: `--dev`
exts."omni.kit.test".testExtUIMode = false

# Extra args to pass to extension test process cmd
exts."omni.kit.test".testExtArgs = []

# Max number of ext test processes to run at the same time; if < 0, use the CPU count.
exts."omni.kit.test".testExtMaxParallelProcesses = 1

# Capture profile trace for extension test process
exts."omni.kit.test".testExtEnableProfiler = false

# Read the REPO_TEST_CONTEXT env var for 'changed_files'. If found, decide whether the test needs to run.
exts."omni.kit.test".testExtCodeChangeAnalyzerEnabled = true

# Default extension test timeout (can be overridden by extensions), in seconds:
exts."omni.kit.test".testExtDefaultTimeout = 300

# Maximum timeout in seconds (applied if not 0), enforced after any per-extension override:
exts."omni.kit.test".testExtMaxTimeout = 0

# Set to true to shuffle tests in random order before running them
exts."omni.kit.test".testExtRandomOrder = false

# Test sampling factor [0.0 to 1.0]; set to 1.0 to execute all tests (no sampling).
# Can be overridden per extension like this:
# [[test]]
# samplingFactor = 0.5
exts."omni.kit.test".testExtSamplingFactor = 1.0
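A seeded sampling pass might look like the following sketch (the actual selection logic in omni.kit.test may differ):

```python
import random

def sample_tests(tests, factor, seed=-1):
    """Pick roughly `factor` of the tests; a seed >= 0 makes the subset
    reproducible, mirroring testExtSamplingSeed."""
    if factor >= 1.0:
        return list(tests)
    rng = random.Random(seed) if seed >= 0 else random.Random()
    count = max(1, round(len(tests) * factor))
    return sorted(rng.sample(tests, count))

tests = ["test_%02d" % i for i in range(10)]
subset = sample_tests(tests, 0.5, seed=42)
assert len(subset) == 5
assert sample_tests(tests, 0.5, seed=42) == subset  # same seed, same subset
```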

# Tests Sampling Context, choices are:
# any               -> always run with tests sampling
# local             -> only run tests sampling locally
# ci                -> only run tests sampling on CI
exts."omni.kit.test".testExtSamplingContext = "ci"

# Test sampling seed; use the same seed to get the same test sampling. Ignored if < 0
exts."omni.kit.test".testExtSamplingSeed = -1

# Global override to allow/disallow all tests sampling
exts."omni.kit.test".useSampling = true

# Extension test retry strategy, choices are:
# no-retry            -> run once
# retry-on-failure    -> run up to N times, stop at first success (N = testExtMaxTestRunCount)
# iterations          -> run N times (N = testExtMaxTestRunCount)
# rerun-until-failure -> run up to N times, stop at first failure (N = testExtMaxTestRunCount)
exts."omni.kit.test".testExtRetryStrategy = "no-retry"

# Maximum test run count when using iterations/retry-on-failure retry strategies
exts."omni.kit.test".testExtMaxTestRunCount = 1
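The four strategies can be sketched as a small loop (an illustration; `run_once` is a stand-in for executing one test process):

```python
def run_with_strategy(run_once, strategy, max_runs):
    """Sketch of the retry strategies; `run_once()` returns True on success."""
    results = []
    runs = 1 if strategy == "no-retry" else max_runs
    for _ in range(runs):
        ok = run_once()
        results.append(ok)
        if strategy == "retry-on-failure" and ok:
            break  # stop at first success
        if strategy == "rerun-until-failure" and not ok:
            break  # stop at first failure
    return results

# A test that fails twice, then passes:
outcomes = iter([False, False, True])
results = run_with_strategy(lambda: next(outcomes), "retry-on-failure", 5)
print(results)  # [False, False, True]
```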

# By default, don't run "unreliable" tests. If set to 1, run ONLY "unreliable" tests. If set to 2, run both.
exts."omni.kit.test".testExtRunUnreliableTests = 0

# Run Flaky tests
exts."omni.kit.test".testExtRunFlakyTests = false

# Number of days of data to query
exts."omni.kit.test".flakyTestsQueryDays = 30

# Sync and include registry extensions for test run
exts."omni.kit.test".testExtUseRegistry = false

# Match versions by string comparison instead of semver rules. E.g. if `true`, `omni.foo-1.2` will match `omni.foo-1.2.1`,
# but not `omni.foo-1.3.0`.
exts."omni.kit.test".testExtMatchVersionAsString = true

# Name for type of test run, used when building test id
exts."omni.kit.test".testExtTestType = ""

# Name of test bucket
exts."omni.kit.test".testExtTestBucket = ""

# Separate execution mode to generate test report from previous runs
exts."omni.kit.test".testExtGenerateReport = false

# Support for code coverage, the report will be generated after running all tests
exts."omni.kit.test".testExtGenerateCoverageReport = false

# List of extensions to run tests on. Runs a separate process with a single extension test for each of them. Wildcards can
# be used in both `testExts` and `excludeExts`.
exts."omni.kit.test".testExts = []
exts."omni.kit.test".excludeExts = [
    # NOTE: Look into repo.toml -> `[[repo_test.suites.pythontests.group]]` for the list of tests that runs
]

# Do not print stdout when ext test passes (TC service messages are still printed)
exts."omni.kit.test".testExtTrimStdoutOnSuccess = false

# Stdout parse fail patterns. Includes and excludes.
exts."omni.kit.test".stdoutFailPatterns.include = ["*[error]*", "*[fatal]*"]
exts."omni.kit.test".stdoutFailPatterns.exclude = [
    "*Leaking graphics objects*",  # Exclude graphics leaks until fixed
    "*leaking memory. Missing call to destroyResourceBindingSignature*",
    "*[carb.launcher.plugin] [parent]: timed out waiting for the child process*", # CC-507 carb.launcher intermittent error
    "*[carb.launcher.plugin] failed to fork the child process*", # CC-507 carb.launcher intermittent error
    "*[rtx.optixdenoising.plugin] [Optix] [DiskCacheDatabase] Failed to prepare statement: file is not a database*", # OM-50198 intermittent error, tests still pass
    "*[rtx.optixdenoising.plugin] [Optix] [WARNING] Error when configuring the database.*", # OM-50198 intermittent error, tests still pass
]
# Remove excluded messages from stdout (replace them with a generic message to avoid confusion when people search for [error] etc.)
exts."omni.kit.test".stdoutFailPatterns.trimExcludedMessages = true
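The include/exclude classification of stdout lines can be sketched as follows. Treating '*' as the only wildcard is a simplifying assumption here, since the documented patterns are all of the simple "*text*" form:

```python
include = ["*[error]*", "*[fatal]*"]
exclude = ["*Leaking graphics objects*"]

def _matches(line, pattern):
    # Patterns here are of the "*text*" form, so treat '*' as the only
    # wildcard and everything else (brackets included) literally.
    return pattern.strip("*") in line

def line_fails(line):
    """A line fails the run if it matches an include pattern and no exclude
    pattern (a sketch of the documented behavior)."""
    if not any(_matches(line, p) for p in include):
        return False
    return not any(_matches(line, p) for p in exclude)

print(line_fails("2024-01-01 [error] something broke"))   # True
print(line_fails("[error] Leaking graphics objects: 3"))  # False
print(line_fails("all good"))                             # False
```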

# Enables or disables Python test coverage
# Note that enabling coverage for an extension will prevent debugging it
# Use the [[test]] section in the extension configuration to enable/disable it for a selected extension
# Ex:
# [[test]]
# pyCoverageEnabled = false
exts."omni.kit.test".pyCoverageEnabled = false

# Code coverage threshold in percent: above or equal is good, below needs more test coverage.
# As a general guideline for extension coverage, 60% is acceptable, 75% is commendable and 90% is exemplary.
# To change this option for a selected extension, use the [[test]] section in the extension configuration
# Ex:
# [[test]]
# pyCoverageThreshold = 50
exts."omni.kit.test".pyCoverageThreshold = 60

# Filter flags to include extension modules and dependencies in the coverage filter
# Use the [[test]] section in the extension configuration to modify them for a selected extension
# Ex:
# [[test]]
# pyCoverageIncludeDependencies = false
exts."omni.kit.test".pyCoverageIncludeModules = true
exts."omni.kit.test".pyCoverageIncludeDependencies = true
exts."omni.kit.test".pyCoverageIncludeTestDependencies = false

# Sets a custom filter for paths/modules to collect coverage data from Python tests
# NOTE: if this option is set then the filter flags mentioned above will be ignored
# To change this option for a selected extension, use the [[test]] section in the extension configuration
# Ex:
# [[test]]
# pyCoverageFilter = ["some_module", "some_other_module", "some_path"]

# exts."omni.kit.test".pyCoverageFilter = []

# Omit files or paths from coverage. File name patterns follow typical shell syntax: * matches any number of characters and ? matches a single character.
# NOTE: To avoid side effects, all patterns should start from the root folder of the extension, for example omni/kit/test/file.py
#
# Ex:
# [[test]]
# pyCoverageOmit = ["omni/kit/test/path/to/some_file.py", "omni/kit/test/all_files_under_path/*"]

# exts."omni.kit.test".pyCoverageOmit = []

# Sets output formats for the coverage report. Possible values are "stdout", "json", "html"
# If nothing is set, defaults to "json"
exts."omni.kit.test".pyCoverageFormats = ["json", "html"]

# If this flag is true, previously collected Python coverage data will be loaded from the coverage
# output dir and used when generating the coverage report, producing a single combined JSON report
# file instead of several files
exts."omni.kit.test".pyCoverageCombinedReport = true

# Waiver for extensions with no tests
# Use [[test]] section in the extension configuration to set the reason
# Ex:
# [[test]]
# waiver = "Reason why the extension contains no test"

exts."omni.kit.test".testLibraries = []


# Test the testing config itself, passing various test settings
[[test]]
args = ["--/extra_arg_passed/param=123"]
stdoutFailPatterns.exclude = [
    "*message will not fail*",
]
pythonTests.exclude = [
    "*test_that_is_excluded*",
    "*test_test_other_settings*",
]
pythonTests.unreliable = [
    "*test_that_is_unreliable*"
]

# This extension has some tests, but many of its files have 0 measured coverage because of a chicken-and-egg
# problem: it is hard to measure coverage of the system that starts coverage (and testing) itself. Also, being the
# test system, most of it is naturally exercised by running all other tests, hence the lowered threshold.
pyCoverageOmit = [
    "omni/kit/test/__init__.py",
    "omni/kit/test/async_unittest.py",
    "omni/kit/test/code_change_analyzer.py",
    "omni/kit/test/crash_process.py",
    "omni/kit/test/ext_test_generator.py",
    "omni/kit/test/ext_utils.py",
    "omni/kit/test/exttests.py",
    "omni/kit/test/flaky.py",
    "omni/kit/test/nvdf.py",
    "omni/kit/test/reporter.py",
    "omni/kit/test/teamcity.py",
    "omni/kit/test/test_coverage.py",
    "omni/kit/test/test_populators.py",
    "omni/kit/test/test_reporters.py",
    "omni/kit/test/unittests.py",
    "omni/kit/test/utils.py",
]
pyCoverageThreshold = 40
samplingFactor = 1.0  # No test sampling for this extension


# Multiple test configs are supported; useful for running different tests and/or the same tests with different arguments.
[[test]]
name="another-test-config"
args = ["--/extra_arg_passed/param=456"]
pythonTests.include = [
    "omni.kit.test.tests.test_kit_test.TestKitTest.test_test_other_settings*"
]
pyCoverageEnabled = false

These configs can be used to filter tests, run them automatically, and quit.

API Reference

class omni.kit.test.AsyncTestCase(methodName='runTest')

Base class for all async test cases.

Derive from it to make your tests auto-discoverable. Test methods must start with the test_ prefix.

Test cases allow for generation and/or adaptation of tests at runtime. See testing_exts_python.md for more details.

fail_on_log_error = False
async run(result=None)
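Outside of Kit, the same async test shape can be exercised with the standard library's unittest.IsolatedAsyncioTestCase, which AsyncTestCase resembles (an analogy, not the actual base class):

```python
import asyncio
import io
import unittest

class MyAsyncTest(unittest.IsolatedAsyncioTestCase):
    async def asyncSetUp(self):
        self.value = 10

    async def test_hello(self):
        await asyncio.sleep(0)  # awaiting works inside the test body
        self.assertEqual(self.value, 10)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(MyAsyncTest)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
print(result.wasSuccessful())  # True
```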
class omni.kit.test.AsyncTestCaseFailOnLogError(methodName='runTest')

Test case which automatically subscribes to logging events and fails if any errors were produced during the test.

This class exists for backward compatibility; you can also just change the value of fail_on_log_error.

fail_on_log_error = True
class omni.kit.test.AsyncTestSuite(tests=())

A test suite is a composite test consisting of a number of TestCases.

For use, create an instance of TestSuite, then add test case instances. When all tests have been added, the suite can be passed to a test runner, such as TextTestRunner. It will run the individual test cases in the order in which they were added, aggregating the results. When subclassing, do not forget to call the base class constructor.

async run(result, debug=False)
class omni.kit.test.ExtTest(ext_id: str, ext_info: Item, test_config: Dict, test_id: str, is_parallel_run: bool, run_context: TestRunContext, test_app: TestApp, valid=True)
get_cmd() str
on_fail(fail_message)
on_finish(test_result)
on_start()
class omni.kit.test.ExtTestResult
class omni.kit.test.PyCoverageCollector

Initializes code coverage collection and saves collected data at Python interpreter exit

class PyCoverageSettings
shutdown(_=None)
startup()
class omni.kit.test.TestPopulateAll

Implementation of the TestPopulator that returns a list of all tests known to Kit

get_tests(call_when_done: callable)

Populate the internal list of raw tests and then call the provided function when it has been done. The callable takes one optional boolean ‘canceled’ that is only True if the test retrieval was not done.

class omni.kit.test.TestPopulateDisabled

Implementation of the TestPopulator that returns a list of all tests disabled by their extension.toml file

get_tests(call_when_done: callable)

Populate the internal list of raw tests and then call the provided function when it has been done. The callable takes one optional boolean ‘canceled’ that is only True if the test retrieval was not done.

class omni.kit.test.TestPopulator(name: str, description: str)

Base class for the objects used to populate the initial list of tests, before filtering.

destroy()

Opportunity to clean up any allocated resources

abstract get_tests(call_when_done: callable)

Populate the internal list of raw tests and then call the provided function when it has been done. The callable takes one optional boolean ‘canceled’ that is only True if the test retrieval was not done.

class omni.kit.test.TestRunStatus(value)

An enumeration.

FAILED = 3
PASSED = 2
RUNNING = 1
UNKNOWN = 0
omni.kit.test.add_test_status_report_cb(callback: Callable[[str, TestRunStatus, Any], None])

Add callback to be called when tests start, fail, pass.

omni.kit.test.decompose_test_list(test_list: list[str]) tuple[list[unittest.case.TestCase], set[str], set[str], collections.defaultdict[str, set[str]]]

Read in the given log file and return the list of tests that were run, in the order in which they were run.

TODO: Move this outside the core omni.kit.test area as it requires external knowledge

If any modules containing the tests in the log are not currently available then they are reported for the user to intervene and most likely enable the owning extensions.

Parameters

test_list – List of tests to decompose and find modules and extensions for

Returns

Tuple of (tests, not_found, extensions, modules) gleaned from the log file

tests: List of unittest.TestCase for all tests named in the log file, in the order they appeared
not_found: Names of tests whose location could not be determined, or that did not exist
extensions: Names of extensions containing modules that look like they contain tests from “not_found”
modules: Map of extension to list of modules where the extension is enabled but the module potentially containing the tests from “not_found” has not been imported

omni.kit.test.extension_from_test_name(test_name: str, module_map: Dict[str, Tuple[str, bool]]) tuple[str, bool, str, bool] | None

Given a test name, return None if the extension couldn’t be inferred from the name, otherwise a tuple containing the name of the owning extension, a boolean indicating if it is currently enabled, a string indicating in which Python module the test was found, and a boolean indicating if that module is currently imported, or None if it was not.

Parameters
  • test_name – Full name of the test to look up

  • module_map – Module to extension mapping. Passed in for sharing as it’s expensive to compute.

The algorithm walks backwards from the full name to find the maximum-length Python import module known to be part of an extension that is part of the test name. It does this because the exact import paths can be nested or not nested. e.g. omni.kit.window.tests is not part of omni.kit.window

Extracting the extension from the test name is a little tricky but all of the information is available. Here is how it attempts to decompose a sample test name

omni.graph.nodes.tests.tests_for_samples.TestsForSamples.test_for_sample
+--------------+ +---+ +---------------+ +-------------+ +-------------+
Import path      |     |                 |               |
                 Testing subdirectory    |               |
                       |                 |               |
                       Test File         Test Class      Test Name

Each extension has a list of import paths of Python modules it explicitly defines, and in addition it will add implicit imports for .tests and .ogn.tests submodules that are not explicitly listed in the extension dictionary.

With this structure the user could have done any of these imports:

import omni.graph.nodes
import omni.graph.nodes.tests
import omni.graph.nodes.test_for_samples

Each nested one may or may not have been exposed by the parent so it is important to do a greedy match.

This is how the process of decoding works for this test:

Split the test name on "."
    ["omni", "graph", "nodes", "tests", "tests_for_samples", "TestsForSamples", "test_for_sample"]
Starting at the entire list, recursively remove one element until a match in the module dictionary is found
    Fail: "omni.graph.nodes.tests.tests_for_samples.TestsForSamples.test_for_sample"
    Fail: "omni.graph.nodes.tests.tests_for_samples.TestsForSamples"
    Fail: "omni.graph.nodes.tests.tests_for_samples"
    Succeed: "omni.graph.nodes.tests"
If no success, or if sys.modules does not contain the found module:
    Return the extension id, enabled state, and None for the module
Else:
    Check the module recursively for exposed attributes with the rest of the names. In this example:
        file_object = getattr(module, "tests_for_samples")
        class_object = getattr(file_object, "TestsForSamples")
        test_object = getattr(class_object, "test_for_sample")
        If test_object is valid:
            Return the extension id, enabled state, and the found module
        Else:
            Return the extension id, enabled state, and None for the module
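The greedy matching step above can be sketched as follows (with `known_modules` standing in for the module-to-extension map):

```python
def find_owning_module(test_name, known_modules):
    """Greedy match: drop trailing components until a known module is found.
    `known_modules` stands in for the module-to-extension map."""
    parts = test_name.split(".")
    for end in range(len(parts), 0, -1):
        candidate = ".".join(parts[:end])
        if candidate in known_modules:
            return candidate
    return None

known = {"omni.graph.nodes", "omni.graph.nodes.tests"}
found = find_owning_module(
    "omni.graph.nodes.tests.tests_for_samples.TestsForSamples.test_for_sample",
    known,
)
print(found)  # omni.graph.nodes.tests
```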
omni.kit.test.find_disabled_tests() list[unittest.case.TestCase]

Scan the existing tests and the extension.toml to find all tests that are currently disabled

omni.kit.test.generate_report()

After running tests, this function will generate an HTML report, post to nvdf, and publish artifacts

omni.kit.test.get_global_test_output_path()

Get the global extension test output path. It is shared by all extensions.

omni.kit.test.get_module_to_extension_map() Dict[str, Tuple[str, bool]]

Returns a dictionary mapping the names of Python modules in an extension to (OwningExtension, EnabledState), e.g. for this extension it would contain {“omni.kit.test”: (“omni.kit.test”, True)}. It will be expanded to include the implicit test modules added by the test management.

omni.kit.test.get_setting(path, default=None)
omni.kit.test.get_test_output_path()

Get the local extension test output path. It is unique for each extension test process.

omni.kit.test.get_tests(tests_filter='') List

Default function to get all current tests.

It gets tests from all enabled extensions and also applies the include and exclude settings to filter them

Parameters

tests_filter (str) – Additional filter string to apply on list of tests.

Returns

List of tests.

omni.kit.test.get_tests_from_modules(modules, log=False)

Return the list of tests registered or dynamically discovered from the list of modules

omni.kit.test.get_tests_to_remove_from_modules(modules, log=False)

Return the list of tests to be removed when a module is unloaded. This includes all tests registered or dynamically discovered from the list of modules and their .tests or .ogn.tests submodules. Keeping this separate from get_tests_from_modules() allows the import of all three related modules, while preventing duplication of their tests when all extension module tests are requested.

Parameters

modules – List of modules to

omni.kit.test.omni_test_registry(*args, **kwargs)

The decorator for Python tests. NOTE: currently passing in the test uuid as a kwarg ‘guid’

omni.kit.test.remove_from_dynamic_test_cache(module_root)

Remove the tests dynamically added to the given module directory (via “scan_for_test_modules”) from the dynamic test cache

omni.kit.test.remove_test_status_report_cb(callback: Callable[[str, TestRunStatus, Any], None])

Remove a callback previously added to be called when tests start, fail, or pass.

omni.kit.test.run_ext_tests(test_exts, on_finish_fn=None, on_status_report_fn=None, exclude_exts=[])
omni.kit.test.run_tests(tests=None, on_finish_fn=None, on_status_report_fn=None)
omni.kit.test.shutdown_ext_tests()
omni.kit.test.test_only_extension_dependencies(ext_id: str) set[str]

Returns a set of extensions with test-only dependencies on the given one. Not currently used, as dynamically enabling random extensions is not yet stable enough to use here.