Set Up OmniGraph Tests
Much of the testing for a node can be set up through the .ogn file’s test section; however, there are a number of situations that require more detailed setup or more flexible checking than the automatically generated tests can provide. For such tests you will want to hook into Kit’s testing system.
Some examples of these situations are when you need to check for attributes whose values may be any one of a set of allowed results rather than a fixed value, when you need to check node or attribute information beyond just the current value, when you need to call utility scripts to set up desired configurations, or when your results depend on some external condition such as the graph state.
Described here are some best practices for writing such tests. This is only meant to describe setting up Python regression tests that use the Kit extensions to the Python unittest module. It is not meant to describe setting up C++ unit tests.
Note
For clarity, a lot of recommended coding practices, such as adding docstrings to all classes, functions, and modules, or checking for unexpected exceptions in order to provide better error messages, are not followed here. Please do use them when you write your actual tests, though.
Locating The Tests
The Kit extension system uses an automatic module recognition algorithm to detect directories in which test cases may be found. In particular it looks for .tests submodules. So if your Python module is named omni.graph.foo it will check the contents of the module omni.graph.foo.tests, if it exists, and attempt to find files containing classes derived from unittest.TestCase, or the Kit version omni.kit.test.AsyncTestCase.
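Depending on your Kit version, the tests submodule may also need to opt in to this scanning through its __init__.py. A minimal sketch, assuming the omni.kit.test convention of a scan_for_test_modules flag; check the omni.kit.test documentation for the exact requirements of your version:

# python/tests/__init__.py - opt in to automatic discovery of test_*.py files (assumed convention)
scan_for_test_modules = True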
The usual way of structuring extensions provides a directory structure that looks like this:
omni.graph.foo/
    python/
        tests/
            test_some_stuff.py
The tests/ subdirectory would be linked into the build using these lines in your premake5.lua file inside the python project definition:
add_files("python/tests", "python/tests/*.py")
repo_build.prebuild_link {
{ "python/tests", ogn.python_tests_target_path },
}
This creates a link that exposes a .tests submodule for your extension.
The files containing tests should all begin with the prefix test_.
Creating A Test Class
OmniGraph tests have some shared setUp and tearDown operations, so the easiest way to set up your test class is to derive it from the test case class that implements them:
import omni.graph.core.tests as ogts

class TestsForMe(ogts.OmniGraphTestCase):
    pass
This will ensure your tests are part of the Kit regression test system. The parent class will define some temporary settings as required by OmniGraph, and will clear the scene when the test is done so as not to influence the results of the test after it (barring any other side effects the test itself causes of course).
Tip
Although the name of the class is not significant, it’s helpful to prefix it with Tests to make it easy to identify.
Specialized SetUp And TearDown
If you have other setUp or tearDown operations you wish to perform, you do so in the usual Pythonic manner:
import omni.graph.core.tests as ogts

class TestsForMe(ogts.OmniGraphTestCase):
    async def setUp(self):
        await super().setUp()
        do_my_setup()

    async def tearDown(self):
        do_my_teardown()
        await super().tearDown()
Note
The tests are async, so both they and the setUp/tearDown functions will be “awaited” when running. This was done to facilitate easier access to some of the Kit async functions, though normally you will want to ensure your test steps run sequentially.
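For example, because the test bodies are awaited you can wait for Kit to process update frames before making your checks. Here is a minimal sketch; next_update_async is part of the standard omni.kit.app API, while check_my_result is a hypothetical helper standing in for your real assertions:

import omni.kit.app
import omni.graph.core.tests as ogts

class TestsThatWaitForUpdates(ogts.OmniGraphTestCase):
    async def test_after_updates(self):
        # Let Kit process a couple of update frames before checking results
        for _ in range(2):
            await omni.kit.app.get_app().next_update_async()
        self.assertTrue(check_my_result())  # hypothetical check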
Adding Tests
Tests are added in the usual way for the Python unittest framework, by creating a function with the prefix test_. As the tests are all awaited, your functions should be async.
import omni.graph.core.tests as ogts

class TestsForMe(ogts.OmniGraphTestCase):
    async def setUp(self):
        await super().setUp()
        do_my_setup()

    async def tearDown(self):
        do_my_teardown()
        await super().tearDown()

    async def test_1(self):
        self.assertTrue(run_first_test())

    async def test_2(self):
        self.assertTrue(run_second_test())
How you divide your test bodies is up to you. You’ll want to balance the slower performance of repetitive setup against the isolation of specific test conditions.
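For example, if several tests exercise the same graph configuration you can build it once in setUp rather than repeating the construction in every test body. This is only a sketch; build_my_graph, check_first_condition, and check_second_condition are hypothetical helpers:

import omni.graph.core.tests as ogts

class TestsSharingSetup(ogts.OmniGraphTestCase):
    async def setUp(self):
        await super().setUp()
        # The comparatively slow graph construction is shared by every test method
        self.graph = build_my_graph()  # hypothetical helper

    async def test_first_condition(self):
        self.assertTrue(check_first_condition(self.graph))  # hypothetical check

    async def test_second_condition(self):
        self.assertTrue(check_second_condition(self.graph))  # hypothetical check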
Your best friend in setting up test conditions is the og.Controller class. It provides a lot of what you will need for setting up and inspecting your graph, nodes, and attributes.
Here is a simple example that creates an add node with one input set to a constant value and the other supplied from another node, and then tests that the results are correct over a set of inputs. It uses several concepts from the controller to illustrate its use.
import omni.graph.core as og
import omni.graph.core.tests as ogts

class TestsForMe(ogts.OmniGraphTestCase):
    async def test_add(self):
        keys = og.Controller.Keys
        (graph, nodes, _, _) = og.Controller.edit("/TestGraph", {
            keys.CREATE_NODES: [
                ("Add", "omni.graph.nodes.Add"),
                ("ConstInt", "omni.graph.nodes.ConstantInt"),
            ],
            keys.CONNECT: ("ConstInt.inputs:value", "Add.inputs:a"),
            keys.SET_VALUES: [("ConstInt.inputs:value", 3), ("Add.inputs:b", {"type": "int", "value": 1})]
        })
        # Get controllers attached to the attributes since they will be accessed in a loop
        b_view = og.Controller(attribute=og.Controller.attribute("inputs:b", nodes[0]))
        sum_view = og.Controller(attribute=og.Controller.attribute("outputs:sum", nodes[0]))
        # Test configurations are pairs of (input_b_value, output_sum_expected)
        test_configurations = [
            ({"type": "int", "value": 1}, 4),
            ({"type": "int", "value": -3}, 0),
            ({"type": "int", "value": 1000000}, 1000003),
        ]
        for b_value, sum_expected in test_configurations:
            b_view.set(b_value)
            # Before checking computed values you must ensure the graph has evaluated
            await og.Controller.evaluate(graph)
            self.assertAlmostEqual(sum_expected, sum_view.get())
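When the computed result can legitimately be any one of several allowed values, as mentioned at the top of this page, the standard unittest assertions cover that case as well. The following lines are a minimal sketch that could be appended to the body of test_add above; the set of allowed values is purely illustrative:

        # Accept any one of a set of legal results instead of a single fixed value
        await og.Controller.evaluate(graph)
        allowed_sums = [0, 4, 1000003]  # illustrative values only
        self.assertIn(sum_view.get(), allowed_sums)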
Expected Errors
When writing tests, it can be desirable to test error conditions. However, this may cause errors to be displayed to the console, which can cause tests to fail when running in batch mode.
One way to have the testing system ignore these errors is to prepend additional text to the error line.
# write to the console, skipping the newline to prepend the expected error message
print("[Ignore this error/warning] ", end ="")
self.assertFalse(self.function_that_causes_error())
Then, in your package’s extension.toml file, tell the test system to ignore error output when the prepended text appears.
[[test]]
stdoutFailPatterns.exclude = [
    # Exclude messages which say they should be ignored
    "*Ignore this error/warning*",
]
This allows the test to specify which errors should be ignored, without suppressing every error with the same output across the entire package or disabling all error checking for your test class.
Executing Your Tests
Now that your tests are visible to Kit, they will be run automatically by TeamCity. To run them yourself locally you have two options.
Batch Running Tests
All tests registered in the manner described above will be added to a batch file that runs all tests in your extension. You can find this file at $BUILD/tests-omni.graph.foo.{bat|sh}. Executing this file will run your tests in a minimal Kit configuration. (It basically loads your extension and all dependent extensions, including the test manager.) Look up the documentation on omni.kit.test for more information on how the tests can be configured.
Test Runner
Kit’s Test Runner window is a handy way to interactively run one or more of your tests at a finer granularity through a UI. By default, however, none of the tests are added to the window, so you must add a line like this to your user configuration file, usually in ~/Documents/Kit/shared/user.toml, to specify which tests it should load:
exts."omni.kit.test".includeTests = ["omni.graph.foo.*"]
Debugging Your Tests
Whether you’re tracking down a bug or engaging in test-driven development, eventually you will end up in a situation where you need to debug your tests.
One of the best tools is to use the script editor and the UI in Kit to inspect and manipulate your test scene. While the normal OmniGraph test case class deletes the scene at the end of the test, you can make a temporary change to instead use a variation of the test case that does not, so that you can examine the failing scene.
import omni.graph.core.tests as ogts

class TestsForMe(ogts.OmniGraphTestCaseNoClear):
    pass
Running Test Batch Files With Attached Debuggers
You’re probably familiar with the debugging extensions omni.kit.debug.python and omni.kit.debug.vscode, with which you can attach debuggers to running versions of Kit.
If you are running the test batch file, however, you have to ensure that these extensions are running as part of the test environment. To do so you just need to add these flags to your invocation of the test .bat file:
> $BUILD/tests-omni.graph.foo.bat --enable omni.kit.debug.vscode --enable omni.kit.debug.python
This will enable the extra extensions required to attach the debuggers while the test scripts are running. Of course you still have to manually attach them from your IDE in the same way you usually do.
Note
As of this writing you might see two kit.exe processes when attaching a C++ debugger to a test script. The safest thing to do is attach to both of them.
The .bat files also support the -d flag, which makes them wait for the debugger to attach before executing. If you’re running a debug build this probably won’t be necessary, as the startup time is ample for attaching a debugger.