Executor#
Fully qualified name: omni::graph::exec::unstable::Executor
Structs#
- Info
Structure passed to the traversal algorithm collecting all necessary data for easy access.
-
template<typename ExecNode, typename ExecStrategy, typename ExecNodeData, typename Scheduler, typename SchedulingStrategy, typename ExecutorInterface = IExecutor>
class Executor : public omni::graph::exec::unstable::Implements<IExecutor># Easily configurable omni::graph::exec::unstable::IExecutor implementation providing necessary tools for most common executor types.
The omni::graph::exec::unstable::Executor class traverses parts of the execution graph, generating tasks for each node visited. One of the core concepts of EF is that each omni::graph::exec::unstable::INodeGraphDef specifies the omni::graph::exec::unstable::IExecutor that should be used to execute the subgraph it defines. This allows each omni::graph::exec::unstable::INodeGraphDef to control a host of strategies for how its subgraph is executed. Some of the strategies are as follows:
If a node should be scheduled. For example, the executor may decide to prune parts of the graph based on the result of a previous execution (i.e. conditional execution). An executor may detect part of the graph does not need to be computed because a previous execution’s results are still valid (i.e. caching). An executor may also employ strategies such as executing a node once all of its parents have completed or executing the node as soon as any of the parents have executed.
How nodes are scheduled. When an executor visits a node, the executor may choose to execute the computational logic in the node’s definition immediately. Alternatively, it can delegate the execution to a scheduler. Working with the scheduler, an executor is able to provide execution strategies such as:
Defer execution of the node to a later time.
Execute the node in parallel with other nodes in the graph.
Ensure the node is the only node executing at the moment (e.g. “isolated” tasks).
Execute the node on a specified thread (e.g. the thread that started executing the graph).
Where nodes are scheduled. An executor can work with a resource scheduler to determine where to execute a node. This includes deciding the best GPU on a multi-GPU system to execute a GPU node. Likewise, executors can consult data center aware schedulers to schedule nodes on remote machines.
The amount of work to be scheduled. When visiting a node, an executor can create any number of tasks to accomplish the node’s computation. These tasks are able to dynamically create additional work/tasks that the executor is able to track.
Executors and schedulers work together to produce, schedule, and execute tasks on behalf of the node. Executors determine which nodes should be visited and generate appropriate work (i.e. tasks). Said differently, executor objects “interpret” the graph based on the behavioral logic encapsulated in the executor. Schedulers collect tasks, possibly concurrently from many executor objects, and map the tasks to hardware resources for execution.
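This division of labor can be illustrated with a minimal, self-contained sketch. All of the types below (MockSerialScheduler, MockNode, executeGraph) are hypothetical stand-ins invented for illustration, not the real EF API: the executor role walks the nodes and decides what work to generate, while a SerialScheduler-like mock merely collects the tasks it is handed and runs them.

```cpp
#include <functional>
#include <queue>
#include <utility>
#include <vector>

// Hypothetical stand-ins for EF concepts; names are illustrative only.
using Task = std::function<void()>;

// A scheduler only collects tasks and maps them to execution resources.
// This mock runs everything serially on the calling thread.
class MockSerialScheduler {
public:
    void submit(Task t) { m_queue.push(std::move(t)); }
    void runUntilEmpty() {
        while (!m_queue.empty()) {
            Task t = std::move(m_queue.front());
            m_queue.pop();
            t(); // a running task may submit further tasks
        }
    }
private:
    std::queue<Task> m_queue;
};

struct MockNode {
    int id;
    bool enabled; // the executor "interprets" this flag when visiting
};

// The executor decides *which* nodes produce work; the scheduler decides
// when and where that work actually runs.
inline std::vector<int> executeGraph(const std::vector<MockNode>& nodes,
                                     MockSerialScheduler& scheduler) {
    std::vector<int> executed;
    for (const MockNode& n : nodes) {
        if (!n.enabled)
            continue; // e.g. conditional execution: prune this node
        scheduler.submit([&executed, id = n.id] { executed.push_back(id); });
    }
    scheduler.runUntilEmpty(); // block until outstanding work finishes
    return executed;
}
```

Swapping `MockSerialScheduler` for a thread-pool-backed scheduler would change where the tasks run without touching the traversal logic, which is the separation the paragraph above describes.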
This omni::graph::exec::unstable::Executor template contains several parameters that allow the user to control the strategies above.
Node Type to be Traversed
The ExecNode parameter defines the interface used to communicate with nodes. This will usually be omni::graph::exec::unstable::INode or a subclass thereof.
Work Generation Strategy
ExecStrategy defines when/if/what work should be generated when visiting a node. EF provides several implementations of this strategy:
omni::graph::exec::unstable::ExecutionVisit - Generates work after all parent nodes have been executed.
omni::graph::exec::unstable::ExecutionVisitWithCacheCheck - Generates work after all parents have been visited and either a parent has successfully executed or the node has been explicitly marked for execution (i.e. dirty).
Users are able to define their own execution strategies. For example, OmniGraph defines custom work generation strategies for its various graph types (e.g. pull, push, etc.).
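A custom work generation strategy can be sketched as follows. The types and the `tryVisit` interface here are hypothetical and heavily simplified (the real ExecutionVisit strategies operate on EF's ExecutionTask, Info, and node data); the sketch only shows the policy an ExecutionVisitWithCacheCheck-like strategy encodes: generate work once all parents have been visited and the node is dirty.

```cpp
// Simplified per-node state a strategy might consult (hypothetical).
struct MockNodeState {
    bool dirty = true;      // explicitly marked for execution
    int parentsVisited = 0; // how many parents have been seen so far
    int parentCount = 0;    // total number of parents
};

// Sketch of a cache-checking visit policy: generate work only once all
// parents have been visited AND the node needs to recompute.
struct MockCacheCheckVisit {
    // Returns true if visiting this node should generate work now.
    static bool tryVisit(MockNodeState& state) {
        ++state.parentsVisited;
        if (state.parentsVisited < state.parentCount)
            return false;   // wait for the remaining parents
        return state.dirty; // cached result still valid -> skip the node
    }
};
```

A plain ExecutionVisit-like policy would be the same sketch without the `dirty` check: it generates work as soon as the last parent has been visited.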
Transient Execution Data
Executing a graph definition may require transient data to implement the executor’s work generation strategy. For example, when executing parents in parallel, transient data is needed to atomically count the number of parents that have executed to avoid a node incorrectly executing multiple times.
ExecNodeData is a struct the user can define to store this transient data. Each node in the graph definition is assigned an ExecNodeData. This transient data type is usually tied to ExecStrategy but can also be utilized by the other parameters in this template.
EF provides the omni::graph::exec::unstable::ExecutionNodeData struct to work with EF’s built-in execution strategies.
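The parent-counting example above can be sketched with a hypothetical ExecNodeData-like struct (illustrative only, not EF's ExecutionNodeData): an atomic countdown ensures that even when parents finish on different threads, exactly one of them observes the count reaching zero and triggers the child's execution.

```cpp
#include <atomic>

// Hypothetical transient per-node data, in the spirit of ExecNodeData.
// It is valid only for the duration of one graph execution.
struct MockExecNodeData {
    std::atomic<int> parentsRemaining{0};
};

// Called by each parent when it finishes executing. Returns true for
// exactly one caller: the one whose decrement took the count to zero
// and which should therefore generate work for the child node.
inline bool onParentDone(MockExecNodeData& data) {
    return data.parentsRemaining.fetch_sub(1, std::memory_order_acq_rel) == 1;
}
```

Because `fetch_sub` returns the previous value, the comparison against 1 is race-free: only the last finishing parent sees it, no matter the interleaving.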
Scheduler
The executor’s job is to traverse a graph definition, generating appropriate work as nodes are visited. That work is given to a scheduler, whose job it is to dispatch the work. The benefit of a scheduler is that it can have a holistic view of the system, across multiple running executors, and efficiently dispatch the work to proper hardware resources.
The Scheduler parameter defines the scheduler to be used. EF defines the omni::graph::exec::unstable::SerialScheduler, which executes tasks serially. In practice, more advanced schedulers are available. For example, omni.kit.exec.core defines the ParallelSpawner scheduler (based on Intel’s Threading Building Blocks), which is able to run tasks in parallel.
Scheduling Strategy
The SchedulingStrategy parameter provides an omni::graph::exec::unstable::SchedulingInfo for each generated task. omni::graph::exec::unstable::SchedulingInfo is an enum that outlines scheduling constraints for a task (e.g. must run serially, must run on the thread that started graph execution, etc.).
EF’s omni::graph::exec::unstable::DefaultSchedulingStrategy calls the definition’s omni::graph::exec::unstable::IDef::getSchedulingInfo() to get the definition’s preferred strategy. However, users may choose to override the definition’s preferred strategy with a custom SchedulingStrategy, for example, forcing all definitions to run serially to ease debugging of the execution graph.
Virtual Methods
In addition to the template parameters, users may choose to override one of omni::graph::exec::unstable::Executor’s virtual methods. These methods are:
omni::graph::exec::unstable::IExecutor::execute_abi(): This method begins execution of the node provided in the constructor. Since this node is usually a root node, this method simply calls omni::graph::exec::unstable::IExecutor::continueExecute_abi() to execute nodes beyond the root node and then calls the scheduler’s getStatus() method, a blocking call that waits for outstanding work to finish.
omni::graph::exec::unstable::IExecutor::continueExecute_abi(): This method is called after each node executes. Its job is to continue executing the nodes downstream of the executed node. By default, this method uses the work generation strategy (i.e. ExecStrategy) on each of the node’s children.
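The relationship between these two methods can be sketched as follows. The types are hypothetical (the real methods return Status, take ExecutionTask pointers, and hand work to a scheduler); the sketch only shows the control flow: execution starts at the root, and after each node runs, traversal continues into its children.

```cpp
#include <vector>

// Hypothetical node: a payload plus downstream children (not EF's INode).
struct MockExecNode {
    int id;
    std::vector<MockExecNode*> children;
};

class MockExecutor {
public:
    explicit MockExecutor(MockExecNode* root) : m_root(root) {}

    // execute(): run the root node, then walk downstream via continueExecute().
    std::vector<int> execute() {
        m_visited.clear();
        runNode(*m_root);
        return m_visited;
    }
private:
    // continueExecute(): called after a node runs; applies the work
    // generation strategy to each child (here: unconditionally run it).
    void continueExecute(MockExecNode& justRan) {
        for (MockExecNode* child : justRan.children)
            runNode(*child);
    }
    void runNode(MockExecNode& node) {
        m_visited.push_back(node.id); // stand-in for the node's computation
        continueExecute(node);
    }
    MockExecNode* m_root;
    std::vector<int> m_visited;
};
```

A real executor must also handle nodes with multiple parents (see the transient-data discussion above) rather than naively recursing into every child.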
Miscellaneous
The lifetime of an executor is short. Executors exist only while executing their owning graph definition. All transient data stored in ExecNodeData is valid only during this lifetime.
See Execution Concepts for an in-depth guide on how this object is used during execution.
See Creating an Executor for a guide on creating a customized executor for your graph definition.
Subclassed by omni::graph::exec::unstable::detail::ExecutorSingleNode
Public Functions
- inline SchedulingInfo getSchedulingInfo(const ExecutionTask &task)#
Scheduling constraint to use when dispatching given task.
-
inline const ExecutionPath &getPath() const#
Execution path to node instantiating graph def associated with this executor.
-
inline IExecutionContext *getContext() const#
Execution context.
-
inline void acquire() noexcept#
Increments the object’s reference count.
Objects may have multiple reference counts (e.g. one per interface implemented). As such, it is important that you call omni::core::IObject::release() on the same pointer from which you called omni::core::IObject::acquire().
Do not directly use this method, rather use omni::core::ObjectPtr, which will manage calling omni::core::IObject::acquire() and omni::core::IObject::release() for you.
- Thread Safety
This method is thread safe.
-
inline void release() noexcept#
Decrements the object’s reference count.
Most implementations will destroy the object if the reference count reaches 0 (though this is not a requirement).
Objects may have multiple reference counts (e.g. one per interface implemented). As such, it is important that you call omni::core::IObject::release() on the same pointer from which you called omni::core::IObject::acquire().
Do not directly use this method, rather use omni::core::ObjectPtr, which will manage calling omni::core::IObject::acquire() and omni::core::IObject::release() for you.
- Thread Safety
This method is thread safe.
-
inline uint32_t getUseCount() noexcept#
Returns the number of different instances (this included) referencing the current object.
- Thread Safety
This method is thread safe.
-
inline void *castWithoutAcquire(omni::core::TypeId id) noexcept#
See omni::graph::exec::unstable::IBase_abi::castWithoutAcquire_abi.
Public Static Functions
- static inline ThisExecutorPtr create(omni::core::ObjectParam<ITopology> toExecute, const ExecutionTask &thisTask)#
Factory method for this executor.
Protected Functions
-
Executor() = delete#
Default constructor is removed.
- inline explicit Executor(ITopology *toExecute, const ExecutionTask &currentTask)#
Constructor used by factory method.
- Parameters:
toExecute – Graph topology used to generate the work.
currentTask – Task causing this execution. Used to generate execution path.
-
inline Status execute_abi() noexcept override#
Main execution method. Called once by each node instantiating the same graph definition.
- inline Status continueExecute_abi(ExecutionTask *currentTask)#
Implementation of the base class method to generate additional work after the given task has executed but before it has completed.
- inline Status schedule_abi(IScheduleFunction *fn, SchedulingInfo schedInfo)#
Implementation of base class schedule method available for work generation outside of traversal loop.
-
template<typename S = Scheduler>
inline Status scheduleInternal(ExecutionTask &&newTask, typename std::enable_if_t<!is_deferred<S>::value>* = nullptr)#
Scheduling spawner of a task generated by traversal implementation.
-
template<typename S = Scheduler>
inline Status scheduleInternal(ExecutionTask &&newTask, typename std::enable_if_t<is_deferred<S>::value>* = nullptr)#
Deferred scheduling spawner of a task generated by traversal implementation.
-
template<typename S = Scheduler>
inline Status scheduleExternal(IScheduleFunction *fn, SchedulingInfo schedInfo, typename std::enable_if_t<!is_deferred<S>::value>* = nullptr)#
Scheduling spawner of a task generated by currently running task.
-
template<typename S = Scheduler>
inline Status scheduleExternal(IScheduleFunction *fn, SchedulingInfo schedInfo, typename std::enable_if_t<is_deferred<S>::value>* = nullptr)#
Deferred scheduling spawner of a task generated by currently running task.
-
inline void acquire_abi() noexcept override#
Increments the object’s reference count.
Objects may have multiple reference counts (e.g. one per interface implemented). As such, it is important that you call omni::core::IObject::release() on the same pointer from which you called omni::core::IObject::acquire().
Do not directly use this method, rather use omni::core::ObjectPtr, which will manage calling omni::core::IObject::acquire() and omni::core::IObject::release() for you.
- Thread Safety
This method is thread safe.
-
inline void release_abi() noexcept override#
Decrements the object’s reference count.
Most implementations will destroy the object if the reference count reaches 0 (though this is not a requirement).
Objects may have multiple reference counts (e.g. one per interface implemented). As such, it is important that you call omni::core::IObject::release() on the same pointer from which you called omni::core::IObject::acquire().
Do not directly use this method, rather use omni::core::ObjectPtr, which will manage calling omni::core::IObject::acquire() and omni::core::IObject::release() for you.
- Thread Safety
This method is thread safe.
-
inline uint32_t getUseCount_abi() noexcept override#
Returns the number of different instances (this included) referencing the current object.
-
inline void *cast_abi(omni::core::TypeId id) noexcept override#
Returns a pointer to the interface defined by the given type id if this object implements the type id’s interface.
Objects can support multiple interfaces, even interfaces that are in different inheritance chains.
The returned object will have omni::core::IObject::acquire() called on it before it is returned, meaning it is up to the caller to call omni::core::IObject::release() on the returned pointer.
The returned pointer can be safely reinterpret_cast<> to the type id’s C++ class. For example, “omni.windowing.IWindow” can be cast to omni::windowing::IWindow.
Do not directly use this method, rather use a wrapper function like omni::core::cast() or omni::core::ObjectPtr::as().
- Thread Safety
This method is thread safe.
- inline void *castWithoutAcquire_abi(omni::core::TypeId id) noexcept override#
Casts this object to the type described by the given id.
Returns nullptr if the cast was not successful.
Unlike omni::core::IObject::cast(), this casting method does not call omni::core::IObject::acquire().
- Thread Safety
This method is thread safe.
Protected Attributes
-
ExecutionPath m_path#
Execution path helping discover state associated with current instance of the graph.
-
ExecutionTask m_task#
Task starting the execution.
-
NodeDataArray m_nodeData#
Storage for per node custom data.
-
struct Info#
Structure passed to the traversal algorithm collecting all necessary data for easy access.
Public Functions
- inline Info(Executor *executor, const ExecutionTask &task, INode *next)#
Constructor.
-
inline IExecutionContext *getContext()#
Returns the current context (i.e. IExecutionContext) in which the Executor is executing.
-
inline Stamp getExecutionStamp()#
Returns the current context’s execution stamp/version (i.e. IExecutionContext::getExecutionStamp()).
-
inline const ExecutionPath &getUpstreamPath() const#
Returns the upstream path of the node that is currently being processed.
-
inline Status schedule(ExecutionTask &&newTask)#
Schedules the given task.
- inline SchedulingInfo getSchedulingInfo(const ExecutionTask &task)#
Returns the given task’s SchedulingInfo.
-
inline Status continueExecute(ExecutionTask &currentTask)#
Tells the Executor to process the given task/node’s children. This allows it to generate additional work after the given task has executed.
Public Members
-
const ExecutionTask &currentTask#
The node currently being processed.