# Profiling
Kit-based applications come bundled with a profiler interface to instrument your code, for both C++ and Python. Multiple profiler backend implementations are supported:
- NVTX
- ChromeTrace
- Tracy
## Easy Start
1. Enable the `omni.kit.profiler.window` extension.
2. Press <kbd>F5</kbd> to start and stop profiling.
3. Press <kbd>F8</kbd> to open the profiler window.
All traces are saved into one folder (which can be found in the *Browse* section of the profiler window). They can be viewed with either **Tracy** or **Chrome** (by navigating to `chrome://tracing`).
> **Note**: Both <kbd>F5</kbd> and <kbd>F8</kbd> only work while the `omni.kit.profiler.window` extension is enabled.
## Profiling Backends
### Chrome Trace
Run the Kit-based application using the following settings to produce a trace file named `mytrace.gz` in the directory where the executable is located:
```console
kit.exe [your_configuration] \
--/app/profilerBackend="cpu" \
--/app/profileFromStart=1 \
--/plugins/carb.profiler-cpu.plugin/saveProfile=1 \
--/plugins/carb.profiler-cpu.plugin/compressProfile=1 \
--/plugins/carb.profiler-cpu.plugin/filePath="mytrace.gz"
```
Then, using the *Google Chrome* browser, navigate to `chrome://tracing` to open a trace file and explore areas of interest.
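A trace in this format is plain JSON (gzip-compressed here): an array of events, where complete zones use phase `"X"` with a microsecond `dur` field. As a rough illustration — a standalone script, not part of Kit — such a file can be summarized programmatically:

```python
import gzip
import json

def summarize_trace(path):
    """Load a gzipped Chrome trace and tally total duration (us) per zone name."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        data = json.load(f)
    # Traces are either a bare event list or {"traceEvents": [...]}.
    events = data["traceEvents"] if isinstance(data, dict) else data
    totals = {}
    for ev in events:
        if ev.get("ph") == "X":  # complete event: carries a "dur" in microseconds
            totals[ev["name"]] = totals.get(ev["name"], 0) + ev.get("dur", 0)
    return totals
```

Running `summarize_trace("mytrace.gz")` on the trace produced above would list the hottest zones by accumulated time.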
### Tracy
#### On Demand
1. Enable the `omni.kit.profiler.tracy` extension.
2. Select **Profiling->Tracy->Launch and Connect** from the menu.
Note: The `omni.kit.profiler.tracy` extension contains the currently supported version of Tracy (v0.9.1), which can also be downloaded from [GitHub](https://github.com/wolfpld/tracy/releases/tag/v0.9.1).
#### From Startup
Run the Kit-based application using the following settings:
```console
kit.exe [your_configuration] \
--/app/profilerBackend="tracy" \
--/app/profileFromStart=true
```
Run Tracy and click the “Connect” button to start capturing profiling events.
You can also convert a Chrome trace profile to the Tracy format using the `import-chrome.exe` tool that it provides. There is a helper tool to do that in `repo_kit_tools`; it downloads **Tracy** from packman and opens any of these 3 formats:
```console
repo tracetools tracy mytrace.gz
repo tracetools tracy mytrace.json
repo tracetools tracy mytrace.tracy
```
## Multiplexer
You can enable multiple profiler backends at the same time.
Run the Kit-based application using the following settings:
```console
kit.exe [your_configuration] \
--/app/profilerBackend=[cpu,tracy] \
{other_settings_specific_to_either}
```
The multiplexer profiler will automatically detect any IProfiler implementations that are loaded afterwards, for example as part of an extension.
If the `--/app/profilerBackend` setting is empty, the multiplexer profiler will be used as the default, with the cpu profiler behind it.
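Conceptually, the multiplexer is a thin fan-out layer: every profiling event it receives is forwarded to each registered backend, including backends registered after startup. A self-contained sketch of that pattern (illustrative names only, not the Carbonite `IProfiler` ABI):

```python
class MultiplexerProfiler:
    """Fans profiling events out to every registered backend profiler."""

    def __init__(self, backends=()):
        self._backends = list(backends)

    def register(self, backend):
        # Backends can appear later, e.g. when an extension loads its profiler.
        self._backends.append(backend)

    def begin_zone(self, name):
        for backend in self._backends:
            backend.begin_zone(name)

    def end_zone(self, name):
        for backend in self._backends:
            backend.end_zone(name)
```

With this structure, a `cpu` and a `tracy` backend would each see the identical stream of zone events, which is what lets both trace files be captured from a single run.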
## Instrumenting Code
To instrument C++ code, use the macros from the Carbonite Profiler, e.g.:
```cpp
#include <carb/profiler/Profile.h>

constexpr const uint64_t kProfilerMask = 1;

void myfunc()
{
    CARB_PROFILE_ZONE(kProfilerMask, "My C++ function");
    // Do hard work here.
    // [...]
}
```
For Python code, use the Carbonite Profiler bindings:
```python
import carb.profiler

# Using the decorator version:
@carb.profiler.profile
def foo():
    pass

# Using explicit begin/end statements:
def my_func():
    carb.profiler.begin(1, "My Python function")
    # Do hard work here.
    # [...]
    carb.profiler.end(1)
```
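The decorator form is shorthand for the explicit begin/end pair shown above. A self-contained sketch of that equivalence, with a hypothetical stub standing in for `carb.profiler` (the real module provides `begin`, `end`, and `profile`):

```python
import functools

class StubProfiler:
    """Stand-in for carb.profiler that records zone begin/end calls."""
    def __init__(self):
        self.events = []
    def begin(self, mask, name):
        self.events.append(("begin", name))
    def end(self, mask):
        self.events.append(("end",))

profiler = StubProfiler()

def profile(func):
    """Rough equivalent of the carb.profiler.profile decorator."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        profiler.begin(1, func.__qualname__)
        try:
            return func(*args, **kwargs)
        finally:
            profiler.end(1)  # end() runs even if the function raises
    return wrapper

@profile
def foo():
    return 42
```

The `try`/`finally` matters: a zone opened by `begin` should always be closed, otherwise an exception in the wrapped function would leave an unbalanced zone in the trace.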
## Automatic Python Profiler: omni.kit.profile_python
Python offers a `sys.setprofile()` method to profile all function calls. Kit-based applications come with an extension that hooks into it automatically and reports all events to `carb.profiler`. Since this profiling method has an impact on the runtime performance of the application, it is disabled by default.
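The underlying mechanism is standard Python: `sys.setprofile()` installs a callback that fires on every function call and return, which is also why leaving it enabled slows the application down. A self-contained illustration of that hook:

```python
import sys

def trace_calls(root):
    """Run root() with a profile hook installed; collect (event, func_name) pairs."""
    events = []

    def hook(frame, event, arg):
        # Ignore 'c_call'/'c_return' events from builtins; keep Python-level ones.
        if event in ("call", "return"):
            events.append((event, frame.f_code.co_name))

    sys.setprofile(hook)
    try:
        root()
    finally:
        sys.setprofile(None)  # always uninstall the hook
    return events

def inner():
    return 1

def outer():
    return inner()
```

Calling `trace_calls(outer)` records the call and return of both `outer` and `inner` — exactly the per-call stream that `omni.kit.profile_python` forwards to `carb.profiler`.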
## Profiling Startup Time
Kit includes a handy shell script to profile app startup time: `profile_startup.bat`.
It runs an app with profiling enabled, quits, and opens the trace in **Tracy**. Pass the path to the app kit file and other arguments to it. E.g.:
```
profile_startup.bat path/to/omni.app.full.kit --/foo/bar=123
```
To enable python profiling, pass `--enable omni.kit.profile_python`:
```
profile_startup.bat path/to/omni.app.full.kit --enable omni.kit.profile_python
```
# omni/avreality/rain/IPuddleBaker.h
File members:
- [omni/avreality/rain/IPuddleBaker.h](#i-puddle-baker-8h)
```cpp
// Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
//
// NVIDIA CORPORATION and its licensors retain all intellectual property
// and proprietary rights in and to this software, related documentation
// and any modifications thereto. Any use, reproduction, disclosure or
// distribution of this software and related documentation without an express
// license agreement from NVIDIA CORPORATION is strictly prohibited.
//
#pragma once
#include <carb/Interface.h>
#include <carb/Types.h>
#include <gsl/span>
#include <cstdint>
#include <tuple>
namespace omni::avreality::rain {

struct IPuddleBaker {
    CARB_PLUGIN_INTERFACE("omni::avreality::rain::IPuddleBaker", 1, 0)

    void(CARB_ABI* bake)(const char* const textureName,
                         carb::Uint2 textureDims,
                         carb::Float2 regionMin,
                         carb::Float2 regionMax,
                         std::size_t puddleCount,
                         const carb::Float2* puddlesPositions,
                         const float* puddlesRadii,
                         const float* puddlesDepths);

    void(CARB_ABI* assignShadersAccumulationMapTextureNames)(
        gsl::span<std::tuple<const char* const, const char* const>> shaderPathsAndTextureNames,
        const char* const usdContextName);
};

} // namespace omni::avreality::rain
```
# omni/avreality/rain/IWetnessController.h
File members:
- [omni/avreality/rain/IWetnessController.h](#i-wetness-controller-8h)
```cpp
// Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
//
// NVIDIA CORPORATION and its licensors retain all intellectual property
// and proprietary rights in and to this software, related documentation
// and any modifications thereto. Any use, reproduction, disclosure or
// distribution of this software and related documentation without an express
// license agreement from NVIDIA CORPORATION is strictly prohibited.
//
#pragma once
#include <carb/Interface.h>
#include <carb/Types.h>
namespace omni::avreality::rain {

struct IWetnessController {
    CARB_PLUGIN_INTERFACE("omni::avreality::rain::IWetnessController", 1, 0)

    void(CARB_ABI* applyGlobalWetnessState)(bool wetnessState, const char* usdContextName);
    void(CARB_ABI* applyGlobalWetness)(float wetness, const char* usdContextName);
    void(CARB_ABI* applyGlobalPorosity)(float porosity, const char* usdContextName);
    void(CARB_ABI* applyGlobalPorosityScale)(float porosityScale, const char* usdContextName);
    void(CARB_ABI* applyGlobalWaterAlbedo)(carb::ColorRgb waterAlbedo, const char* usdContextName);
    void(CARB_ABI* applyGlobalWaterTransparency)(float waterTransparency, const char* usdContextName);
    void(CARB_ABI* applyGlobalWaterAccumulation)(float waterAccumulation, const char* usdContextName);
    void(CARB_ABI* applyGlobalWaterAccumulationScale)(float waterAccumulationScale, const char* usdContextName);
    void(CARB_ABI* loadShadersParameters)(const char* usdContextName);
};

} // namespace omni::avreality::rain
```
# omni/avreality/rain/PuddleBaker.h
File members:
- [omni/avreality/rain/PuddleBaker.h](#puddle-baker-8h)
```cpp
// Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
//
// NVIDIA CORPORATION and its licensors retain all intellectual property
// and proprietary rights in and to this software, related documentation
// and any modifications thereto. Any use, reproduction, disclosure or
// distribution of this software and related documentation without an express
// license agreement from NVIDIA CORPORATION is strictly prohibited.
//
#pragma once
#include "IPuddleBaker.h"
#include <carb/Defines.h>
#include <carb/InterfaceUtils.h>
#include <carb/Types.h>
#include <pxr/usd/sdf/path.h>
// clang-format off
#include <omni/usd/UsdContextIncludes.h>
#include <omni/usd/UsdContext.h>
// clang-format on
#include <gsl/span>
#include <string>
#include <tuple>
#include <vector>
namespace omni::avreality::rain {

class PuddleBaker {
    IPuddleBaker* m_puddleBaker = nullptr;

public:
    PuddleBaker() = delete;

    CARB_ALWAYS_INLINE static void bake(std::string textureName,
                                        carb::Uint2 textureDims,
                                        carb::Float2 regionMin,
                                        carb::Float2 regionMax,
                                        gsl::span<const carb::Float2> puddlesPositions,
                                        gsl::span<const float> puddlesRadii,
                                        gsl::span<const float> puddlesDepths)
    {
        CARB_ASSERT(puddlesPositions.size() == puddlesRadii.size());
        CARB_ASSERT(puddlesPositions.size() == puddlesDepths.size());
        carb::getCachedInterface<IPuddleBaker>()->bake(textureName.c_str(), textureDims, regionMin, regionMax,
                                                       puddlesPositions.size(), puddlesPositions.data(),
                                                       puddlesRadii.data(), puddlesDepths.data());
    }

    CARB_ALWAYS_INLINE static void bake(std::string textureName,
                                        carb::Float2 regionMin,
                                        carb::Float2 regionMax,
                                        gsl::span<carb::Float2> puddlesPositions,
                                        gsl::span<float> puddlesRadii,
                                        gsl::span<float> puddlesDepths)
    {
        bake(textureName, {1024u, 1024u}, regionMin, regionMax, puddlesPositions, puddlesRadii, puddlesDepths);
    }

    CARB_ALWAYS_INLINE static void assignShadersAccumulationMapTextureNames(
        gsl::span<std::tuple<const pxr::SdfPath, const std::string>> shaderPathsAndTextureNames,
        omni::usd::UsdContext* usdContext = nullptr)
    {
        std::string usdContextName = usdContext ? usdContext->getName() : "";

        std::vector<std::string> shaderPathsAsString;
        std::vector<std::tuple<const char* const, const char* const>> shaderPathsAndTextureNames_c;
        shaderPathsAsString.reserve(shaderPathsAndTextureNames.size());
        shaderPathsAndTextureNames_c.reserve(shaderPathsAndTextureNames.size());

        for (auto&& sptn : shaderPathsAndTextureNames)
        {
            shaderPathsAsString.push_back(std::get<0>(sptn).GetAsString());
            shaderPathsAndTextureNames_c.push_back({ shaderPathsAsString.back().c_str(), std::get<1>(sptn).c_str() });
        }

        carb::getCachedInterface<IPuddleBaker>()->assignShadersAccumulationMapTextureNames(
            gsl::span<std::tuple<const char* const, const char* const>>(
                shaderPathsAndTextureNames_c.data(), shaderPathsAndTextureNames_c.size()),
            usdContextName.c_str());
    }
};

} // namespace omni::avreality::rain
```
# omni/avreality/rain/WetnessController.h
File members:
- [omni/avreality/rain/WetnessController.h](#wetness-controller-8h)
```cpp
// Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
//
// NVIDIA CORPORATION and its licensors retain all intellectual property
// and proprietary rights in and to this software, related documentation
// and any modifications thereto. Any use, reproduction, disclosure or
// distribution of this software and related documentation without an express
// license agreement from NVIDIA CORPORATION is strictly prohibited.
//
#pragma once
#include "IWetnessController.h"
#include <carb/Defines.h>
#include <carb/InterfaceUtils.h>
// clang-format off
#include <omni/usd/UsdContextIncludes.h>
#include <omni/usd/UsdContext.h>
// clang-format on
namespace omni::avreality::rain {

class WetnessController {
    IWetnessController* m_wetnessController = nullptr;
    omni::usd::UsdContext* m_usdContext = nullptr;

public:
    CARB_ALWAYS_INLINE WetnessController(omni::usd::UsdContext* usdContext = nullptr)
        : m_wetnessController(carb::getCachedInterface<IWetnessController>()),
          m_usdContext(usdContext ? usdContext : omni::usd::UsdContext::getContext())
    {
    }

    CARB_ALWAYS_INLINE ~WetnessController()
    {
        m_wetnessController = nullptr;
    }

    CARB_ALWAYS_INLINE void applyGlobalWetnessState(bool state)
    {
        m_wetnessController->applyGlobalWetnessState(state, m_usdContext->getName().c_str());
    }

    CARB_ALWAYS_INLINE void applyGlobalWetness(float wetness)
    {
        m_wetnessController->applyGlobalWetness(wetness, m_usdContext->getName().c_str());
    }

    CARB_ALWAYS_INLINE void applyGlobalPorosity(float porosity)
    {
        m_wetnessController->applyGlobalPorosity(porosity, m_usdContext->getName().c_str());
    }

    CARB_ALWAYS_INLINE void applyGlobalPorosityScale(float porosityScale)
    {
        m_wetnessController->applyGlobalPorosityScale(porosityScale, m_usdContext->getName().c_str());
    }

    CARB_ALWAYS_INLINE void applyGlobalWaterAlbedo(carb::ColorRgb waterAlbedo)
    {
        m_wetnessController->applyGlobalWaterAlbedo(waterAlbedo, m_usdContext->getName().c_str());
    }

    CARB_ALWAYS_INLINE void applyGlobalWaterTransparency(float waterTransparency)
    {
        m_wetnessController->applyGlobalWaterTransparency(waterTransparency, m_usdContext->getName().c_str());
    }

    CARB_ALWAYS_INLINE void applyGlobalWaterAccumulation(float waterAccumulation)
    {
        m_wetnessController->applyGlobalWaterAccumulation(waterAccumulation, m_usdContext->getName().c_str());
    }

    CARB_ALWAYS_INLINE void applyGlobalWaterAccumulationScale(float accumulationScale)
    {
        m_wetnessController->applyGlobalWaterAccumulationScale(accumulationScale, m_usdContext->getName().c_str());
    }

    CARB_ALWAYS_INLINE static void loadShadersParameters(omni::usd::UsdContext* usdContext = nullptr)
    {
        std::string usdContextName = usdContext ? usdContext->getName() : "";
        carb::getCachedInterface<IWetnessController>()->loadShadersParameters(usdContextName.c_str());
    }
};

} // namespace omni::avreality::rain
```
# Property Panel
## OmniGraph Attribute Values
Here is a property panel for a simple node type. The upper `OmniGraph Node` section is shown as open. This is the area where OmniGraph-specific values can be modified. In the generic configuration these are all of the `Attribute`s on the selected `Node`. It supports a variety of data types.
Here you can see the data type for `Element` is a token with a specific set of allowed values, set as unchanging parameters to the node. The node also has two inputs: `Deadzone`, a floating point value, and `Gamepad ID`, an integer value. On the output side, the values shown are computed by the node and cannot be edited: a boolean `isPressed` and a floating point `value`. All `Attribute`s have tooltips that describe what they mean when you hover over them.
## Raw USD Properties
The lower section, `Raw USD Properties`, shows roughly the same data in the same form as you would use for editing a regular `USD Prim`. It will mostly contain the same values as the `OmniGraph Node` section, as well as any additional properties that are stored on the `Prim` but which are not part of the node's attributes.
Some nodes are set up to store data only with `Fabric`. Those attributes will not show up in the `Raw USD Properties` section as they will not be part of the USD data.
## Extending The Panel
Each individual node type can provide some code that will extend the capabilities of the property panel for any node of that type. Here is the property panel for the script node, which has extended the normal features of the panel to allow for adding and removing dynamic attributes that will be used in the node, and for accessing a library of code snippets that perform common tasks in the node.
# Publish a Package
The ultimate goal is to transform your hard work into a product that others can realize value from. The publishing process helps you to achieve this by relocating the Packaged Build to a location suitable for installation or deployment. Omniverse offers a wide array of endpoints including integrated ZIP files, Cloud systems, Streaming systems, our Launcher system, Git repositories, customized CMS, and more. Although the functional content of a package can be consistent across platforms, the delivery and installation methods may vary. This section guides you through the process of publishing your project to reach your intended audience.
End users of your custom App or Extension must accept the NVIDIA Omniverse License Agreement.
It’s important to keep in mind that you can publish your project to multiple destinations as long as you can meet the requirements for each of them. Some might require particular Package steps, others may require that you create a Package for a particular platform. In any case, it is important to note that you can create your development workflow around generating multiple Packages intended for multiple publishing destinations.
## Direct Deployment
- Package Types: Thin Package, Fat Package
- Project Types: Apps, Services, Connectors, Extensions
- Platforms: Windows, Linux
Instructions to Install Packages Directly to End User
## Launcher
- Package Types: Thin Package, Fat Package
- Project Types: Apps, Services, Connectors, Extensions
- Platforms: Windows, Linux
NVIDIA Omniverse™ Launcher is the method that most Omniverse Projects are distributed to end-users. Launcher not only allows for distribution of NVIDIA generated applications and extensions, but also third-party apps and extensions. Launcher also can be used for on-prem distribution, enterprise solutions, and more. You can even test your packages on a local version of Launcher to see how they will be presented to your customers.
Learn more about publishing to Omniverse Launcher
---
## Omniverse Cloud
- Package Types: Fat Package
- Project Types: Apps
- Platforms: Linux
After building a Launcher Package using the Fat Package, you can then also distribute via Omniverse Cloud (OVC). With OVC, users can stream their apps to devices with nothing more than an internet connection.
Learn about publishing to OVC via the Omniverse Cloud Guide
# Publishing Extensions
Extensions are published to the registry to be used by downstream apps and extensions.
[Kit documentation: Publishing](extensions_advanced.html#kit-ext-publishing) covers how to do it manually with the command line or UI. However, we suggest automating that process in CI.
Extensions are published using the `repo publish_exts` tool that comes with Kit. The `[repo_publish_exts]` section of `repo.toml` lists which extensions to publish. E.g.:
```toml
[repo_publish_exts]
# Extensions to publish, include and exclude among those discovered by kit. Wildcards are supported.
exts.include = [
"omni.foo.bar",
]
exts.exclude = []
```
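Selection with wildcards amounts to glob-style matching over the discovered extension names, with excludes taking precedence over includes. A sketch of the idea (illustrative only, not the actual `repo publish_exts` implementation):

```python
from fnmatch import fnmatch

def select_extensions(discovered, include, exclude):
    """Pick extensions matching any include pattern and no exclude pattern."""
    def matches(name, patterns):
        return any(fnmatch(name, pattern) for pattern in patterns)
    return [name for name in discovered
            if matches(name, include) and not matches(name, exclude)]
```

So `exts.include = ["omni.foo.*"]` would select every discovered extension under the `omni.foo` prefix, minus anything listed in `exts.exclude`.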
Typically, CI scripts are set up to run `repo publish_exts -c release` (and `debug`) on every green commit to master, after builds and tests pass. That publishes any new extension versions; for versions that were already published, nothing happens, so the version number needs to be incremented for publishing to have any effect.
You can test publishing locally with a “dry” run using the `-n` flag:
```console
repo publish_exts -c release -n
```
It is important to remember that some extensions (typically C++, native) have a separate package per platform, so we need to run publishing separately on each platform and publish for each configuration (`debug` and `release`). This is especially important to satisfy all required dependencies for downstream consumers.
## Publish Verification
The extension system verifies extensions before publishing. It checks basic things like the extension icon being present, that the changelog is correct, that a name and description field are present, etc. Those checks are recommended, but not required. You can control them with a setting for `repo_publish_exts`:
```toml
[repo_publish_exts]
publish_verification = false
```
To only run the verification step, without publishing, use the `--verify` flag:
```console
repo publish_exts -c release --verify
```
It is recommended to run the verification step as part of build, to catch issues early.
## Other Publish Tool Settings
As with any repo tool, to find other available settings for the publish tool, look into its `repo_tools.toml` file. Since it comes with Kit, this file is a part of the `kit-sdk` package and can be found at:
```
_build/$platform/$config/kit/dev/repo_tools.toml
```
pxr.DestructionSchema.Classes.md | # pxr.DestructionSchema Classes
## Classes Summary:
| Class Name | Description |
|------------|-------------|
| [DestructibleBaseAPI](pxr.DestructionSchema/pxr.DestructionSchema.DestructibleBaseAPI.html) | DestructibleBaseAPI |
| [DestructibleBondAPI](pxr.DestructionSchema/pxr.DestructionSchema.DestructibleBondAPI.html) | DestructibleBondAPI |
| [DestructibleChunkAPI](pxr.DestructionSchema/pxr.DestructionSchema.DestructibleChunkAPI.html) | DestructibleChunkAPI |
| [DestructibleInstAPI](pxr.DestructionSchema/pxr.DestructionSchema.DestructibleInstAPI.html) | DestructibleInstAPI |
| [Tokens](pxr.DestructionSchema/pxr.DestructionSchema.Tokens.html) | Tokens |
# DestructibleBaseAPI
## Methods
- **Apply**
- **CreateSupportDepthAttr**
- **Get**
- **GetSchemaAttributeNames**
- **GetSupportDepthAttr**
- **__init__**
- Raises an exception. This class cannot be instantiated from Python.
# DestructibleBondAPI
## Methods
- **Apply**
- **CreateAreaAttr**
- **CreateAttachedRel**
- **CreateCentroidAttr**
- **CreateNormalAttr**
- **CreateUnbreakableAttr**
- **Get**
- **GetAreaAttr**
- **GetAttachedRel**
- **GetCentroidAttr**
| Method | Description |
|------------------------|------------|
| GetNormalAttr | |
| GetSchemaAttributeNames| |
| GetUnbreakableAttr | |
| __init__ | Raises an exception. This class cannot be instantiated from Python |
## __init__
Raises an exception
This class cannot be instantiated from Python
# DestructibleChunkAPI
## DestructibleChunkAPI
- Bases: `pxr.Usd.APISchemaBase`
### Methods
- `Apply`
- `CreateCentroidAttr`
- `CreateParentChunkRel`
- `CreateVolumeAttr`
- `Get`
- `GetCentroidAttr`
- `GetParentChunkRel`
- `GetSchemaAttributeNames`
- `GetVolumeAttr`
- `__init__`
Raises an exception This class cannot be instantiated from Python
__init__()
Raises an exception
This class cannot be instantiated from Python
# DestructibleInstAPI
## Methods
- **Apply**
- **CreateBaseRel**
- **CreateEnabledAttr**
- **CreateStressGravityEnabledAttr**
- **CreateStressRotationEnabledAttr**
- **Get**
- **GetBaseRel**
- **GetEnabledAttr**
- **GetSchemaAttributeNames**
- **GetStressGravityEnabledAttr**
| Attribute/Method | Description |
|-----------|------|
| GetStressRotationEnabledAttr | |
| __init__ | Raises an exception. This class cannot be instantiated from Python |
## __init__
Raises an exception
This class cannot be instantiated from Python
# pxr.DestructionSchema
## Classes Summary:
- **DestructibleBaseAPI**
- **DestructibleBondAPI**
- **DestructibleChunkAPI**
- **DestructibleInstAPI**
- **Tokens**
# Tokens
## Class: pxr.DestructionSchema.Tokens
Bases: `Boost.Python.instance`
### Methods
- `__init__`: Raises an exception This class cannot be instantiated from Python
### Attributes
- `destructArea`
- `destructAttached`
- `destructBase`
- `destructCentroid`
- `destructEnabled`
- `destructNormal`
- `destructParentChunk`
- `destructStressGravityEnabled`
- `destructStressRotationEnabled`
- `destructSupportDepth`
- `destructUnbreakable`
- `destructVolume`
__init__()
Raises an exception. This class cannot be instantiated from Python.
# Python Module Index
## o
- **omni**
- [omni.asset_validator.core](api.html#module-omni.asset_validator.core) *(Windows-x86_64, Linux-x86_64)*
- [omni.asset_validator.core.tests](testApi.html#module-omni.asset_validator.core.tests) *(Windows-x86_64, Linux-x86_64)*
- [omni.asset_validator.ui](api.html#module-omni.asset_validator.ui) *(Windows-x86_64, Linux-x86_64)*
# Overview — Kit Extension Template C++ 1.0.0 documentation
## Overview
An example Python extension that can be used as a reference/template for creating new extensions.
Demonstrates how to create a Python module that will startup / shutdown along with the extension.
Also demonstrates how to expose Python functions so that they can be called from other extensions.
## Python Usage Examples
### Defining Extensions
```python
# When this extension is enabled, any class that derives from 'omni.ext.IExt'
# declared in the top level module (see 'python.modules' of 'extension.toml')
# will be instantiated and 'on_startup(ext_id)' called. When the extension is
# later disabled, a matching 'on_shutdown()' call will be made on the object.
class ExamplePythonHelloWorldExtension(omni.ext.IExt):
# ext_id can be used to query the extension manager for additional information about
# this extension, for example the location of this extension in the local filesystem.
def on_startup(self, ext_id):
print(f"ExamplePythonHelloWorldExtension starting up (ext_id: {ext_id}).")
def on_shutdown(self):
print(f"ExamplePythonHelloWorldExtension shutting down.")
```
# Omniverse USD Resolver Python API
## omni.usd_resolver
### Class omni.usd_resolver.Event
Members:
- RESOLVING
- READING
- WRITING
#### Property name
### Class omni.usd_resolver.EventState
Members:
- STARTED
- SUCCESS
- FAILURE
#### Property name
### Function omni.usd_resolver.get_version()
Get the version of USD Resolver being used.
Returns: Returns a human readable version string.
### Function omni.usd_resolver.register_event_callback(callback: Callable)
```python
omni.usd_resolver.register_event_callback(
callback: Callable[[omni.usd_resolver.Event, omni.usd_resolver.EventState, int], None],
subscription: omni.usd_resolver.Subscription = None
) -> omni.usd_resolver.Subscription
```
Register a function that will be called any time something interesting happens.
**Parameters**
- **callback** – Callback to be called with the event.
**Returns**
- Subscription Object. Callback will be unregistered once subscription is released.
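The subscription object follows a common pattern: the callback stays registered only while the subscription is held, and releasing it unregisters the callback. A self-contained sketch of the pattern (illustrative, not the actual omni.usd_resolver implementation):

```python
class Subscription:
    """Unregisters its callback when released (or garbage-collected)."""
    def __init__(self, registry, key):
        self._registry = registry
        self._key = key

    def release(self):
        if self._registry is not None:
            self._registry.pop(self._key, None)
            self._registry = None  # make release() idempotent

    def __del__(self):
        self.release()

class EventSource:
    def __init__(self):
        self._callbacks = {}
        self._next_key = 0

    def register_event_callback(self, callback):
        key = self._next_key
        self._next_key += 1
        self._callbacks[key] = callback
        return Subscription(self._callbacks, key)

    def emit(self, event):
        for callback in list(self._callbacks.values()):
            callback(event)
```

This design means a caller cannot forget to unregister: dropping the last reference to the subscription is enough.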
```python
omni.usd_resolver.set_checkpoint_message(message: str) -> None
```
Set the message to be used for atomic checkpoints created when saving files.
**Args:**
- message (str): Checkpoint message.
```python
omni.usd_resolver.set_mdl_builtins(arg0: List[str]) -> None
```
Set a list of built-in MDLs.
Resolving an MDL in this list will return immediately rather than performing a full resolution.
# Python Nodes and Scripts
While the core part of OmniGraph is built in C++ there are Python bindings and scripts built on top of it to make it easier to work with.
## Importing
The Python interface is exposed in a consistent way so that you can easily find any of the OmniGraph scripting information. Any script can start with this simple import, in the spirit of how popular packages such as `numpy` and `pandas` work:
```python
import omni.graph.core as og
```
Using this module you can access internal documentation through the usual Python mechanisms:
```python
help(og)
```
## Bindings
The first level of support is the [Python Bindings](ext_omni_graph) which provide a wrapper on top of the C++ ABI of the OmniGraph core. These have been made available from the same import to make use of all OmniGraph functionality consistent. When you are programming in Python you really don’t need to be aware of the C++ underpinnings.
The bound functions have all of the same documentation available at runtime so they can be inspected in the same way you would work with any regular Python scripts. For the most part the bindings follow the C++ ABI closely, with the minor exception of using the standard PEP8 naming conventions rather than the established C++ naming conventions.
For example a C++ ABI function `getAttributeType` will be named `get_attribute_type` in the Python binding. This was primarily done to deemphasize the boundary between Python and C++ so that Python writers can stick with Python conventions and C++ writers can stick with C++ conventions.
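The renaming is mechanical: each interior uppercase letter in the C++ name becomes an underscore followed by its lowercase form. A quick sketch of that convention (illustrative, not code from the bindings generator):

```python
import re

def to_snake_case(camel):
    """Convert a camelCase ABI name to the PEP 8 name used by the Python bindings."""
    # Insert an underscore before every uppercase letter except a leading one,
    # then lowercase the whole string.
    return re.sub(r"(?<!^)([A-Z])", r"_\1", camel).lower()
```

For instance, `to_snake_case("getAttributeType")` yields `get_attribute_type`, matching the example above.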
As the Python world doesn’t have the C++ concept of an interface definition there is no separation between the objects providing the functionality and the objects storing the implementation and data. For example in C++ you would have an `omni::graph::core::AttributeObj` which contains a handle to the internal data and a reference to the `omni::graph::core::IAttribute` interface definition. In Python you only have the `og.Attribute` object which encapsulates both.
## The One Thing You Need
A lot of the imported submodules and functions available are used internally by generated code and you won’t have much occasion to use them. You can explore the internal documentation to see if any look useful to you. The naming was intentionally made verbose so that it is easier to discover functionality.
One class deserves special attention though, as it is quite useful for interacting with OmniGraph.
```python
import omni.graph.core as og
controller = og.Controller()
```
The `Controller` class provides functionality for you to change the graph topology, or modify and inspect values within the graph.
Take a tour of how to use the controller with this how-to documentation.
# Scripting
## Overview
Kit Core comes with a python interpreter and a scripting system. It is mainly used to write extensions in python, but it can also be used to make simple python scripts. Each extension can register a set of script search folders. Scripts can be run via command line or API.
## How to add a script folder
Multiple ways:
1. Use the `/app/python/scriptFolders` setting. As with any setting it can be changed in the core config, in the app config, via the command line or at runtime using settings API.
2. Use `IAppScripting` API, e.g. `carb::getCachedInterface<omni::kit::IApp>()->getPythonScripting()->addSearchScriptFolder("myfolder")`.
3. Specify in the extension.toml, in the `[[python.scriptFolder]]` section:
```toml
[[python.scriptFolder]]
path = "scripts"
```
## How to run a script
1. Command line:
Example:
```
> kit.exe --exec "some_script.py arg1 arg2" --exec "open_stage"
```
2. Settings:
Example:
```
> kit.exe --/app/exec/0="some_script.py arg1"
```
3. API:
C++ Example:
```cpp
carb::getCachedInterface<omni::kit::IApp>()->getPythonScripting()->executeFile("script.py")
```
Python Example:
```python
omni.kit.app.get_app_interface().get_python_scripting().execute_file("script.py")
```
See `omni.kit.app.IAppScripting.execute_file()` for more details.
> **Note**: The script file extension (`.py`) can be omitted.
# Action Graph Quickstart
In this tutorial, you use OmniGraph in Omniverse USD Composer to move a mesh in response to a key press.
> [!note]
> While you use Omniverse USD Composer in this tutorial, you can follow similar steps to achieve the same results in other Omniverse Apps.
## Before You Begin
While this is an introductory-level tutorial on OmniGraph, we recommend you complete Introduction to OmniGraph first. That tutorial provides more detailed information about some of the same steps that you complete here.
## Load the OmniGraph Extensions
First, load the Action Graph Bundle extension into Omniverse USD Composer.
## Prepare Your Scene
Next, prepare your empty scene by adding a torus.
## Create a New Action Graph
Create a new Action Graph so you can trigger action in response to a particular event.
## Use an On Keyboard Input Node
Search for the **On Keyboard Input** node, drag it into the editor, and set its **Key In** to `J`:
**On Keyboard Input** is an event-source node. When the user presses the J key, the **outputs:pressed** attribute is enabled. You use this attribute to trigger an action later in this tutorial.
## Use a Write Prim Attribute Node
Drag the torus prim from the **Stage** to the Action Graph editor, and select **Write Attribute**:
Then, set its **Attribute Name** to `xformOp:translate`:
This means you want to translate, or move, the torus.
## Use a Constant Point3d Node
Search for the **Constant Point3d** node and drag it into the editor:
Notice that the **X**, **Y**, and **Z** input values are all set to `0.0` in the **Property** panel. This points your constant point at the origin of the scene.
## Wire up the Nodes
Now, it’s time to wire up your nodes and direct the flow of execution.
### Move the Torus to the Origin
Click and drag the **Constant Point3d** node’s **Value** pin to the **Write Prim Attribute** node’s **Value** pin:
This takes the constant value, a 3-tuple representing the origin of the scene, and writes it to the torus’s `xformOp:translate` attribute. In other words, it moves the torus to the origin.
### Move the Torus on Key Press
Next, click and drag the **On Keyboard Input** node’s **Pressed** pin to the **Write Prim Attribute** node’s **Exec In** pin:
This moves the torus to the origin when the user presses the J key.
#### Technical Detail
The execution evaluator works by following node connections downstream and computing the nodes that it encounters until there are no more connections to follow.
In this case, the evaluator executes the network downstream from **outputs:pressed** attribute, whose next node is the **Write Prim Attribute** node. Before that can be computed, though, the evaluator evaluates its upstream data-dependency: the **Constant Point3d** node.
Finally, the **Write Prim Attribute** node is computed, which sets `Torus.xformOp:translate` to `(0, 0, 0)`, the origin.
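As a rough illustration, the push evaluation described above can be sketched in plain Python. The `Node` class and `evaluate()` function here are hypothetical stand-ins, not the actual OmniGraph API:

```python
# Minimal sketch of push evaluation: follow execution connections
# downstream, resolving each node's upstream data dependencies first.
# The Node class and evaluate() are illustrative only, not the real
# OmniGraph implementation.

class Node:
    def __init__(self, name, compute):
        self.name = name
        self.compute = compute      # callable invoked when the node runs
        self.data_deps = []         # upstream data-dependency nodes
        self.exec_next = []         # downstream execution connections

def evaluate(node, visited=None):
    visited = set() if visited is None else visited
    if node.name in visited:
        return
    visited.add(node.name)
    for dep in node.data_deps:      # evaluate upstream data first
        evaluate(dep, visited)
    node.compute(node)
    for nxt in node.exec_next:      # then continue downstream
        evaluate(nxt, visited)

order = []
constant = Node("Constant Point3d", lambda n: order.append(n.name))
write = Node("Write Prim Attribute", lambda n: order.append(n.name))
pressed = Node("On Keyboard Input", lambda n: order.append(n.name))
write.data_deps.append(constant)    # Value pin: data dependency
pressed.exec_next.append(write)     # Pressed -> Exec In: execution flow

evaluate(pressed)
print(order)  # ['On Keyboard Input', 'Constant Point3d', 'Write Prim Attribute']
```

Note how the **Constant Point3d** node is computed before **Write Prim Attribute**, mirroring the evaluation order described above.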
## Review Your Work
Click **Play** in your scene’s toolbar, move the torus away from the origin, and press J:
The torus snaps back to the origin.
## Alternate Actions with a Flip Flop Node
Next, make this network a little more interesting by cycling the location of the Torus on key press.
### Use a Flip Flop Node
Search for the **Flip Flop** node, drag it into the editor:
The **Flip Flop** node alternates activating one of two downstream networks every time it’s computed. You use this to cycle through two behaviors.
Right-click and disconnect the existing node connections to prepare to use your **Flip Flop** node.
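Conceptually, the alternation is just a toggle. A minimal Python sketch (illustrative only, not the node's actual implementation):

```python
# Illustrative sketch of Flip Flop behavior: each execution triggers
# the "Execute A" and "Execute B" outputs alternately.
class FlipFlop:
    def __init__(self):
        self._next_is_a = True

    def execute(self):
        branch = "A" if self._next_is_a else "B"
        self._next_is_a = not self._next_is_a
        return branch  # in the graph, this enables Execute A or Execute B

node = FlipFlop()
print([node.execute() for _ in range(4)])  # ['A', 'B', 'A', 'B']
```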
### Duplicate Your Constant and Write Prim Nodes
Use Ctrl-D to duplicate your **Constant Point3d** and **Write Prim Attribute** nodes:
Rearrange the nodes so that you can easily see them.
#### Tip
You can marquee-select or Ctrl-select both nodes to duplicate them simultaneously.
### Change the Input Values for the New Constant Point3d Node
Set the **Y** input value to `250.0` for your new **Constant Point3d** node:
### Wire up the Flip Flop Node
Wire up the **Flip Flop** node to evaluate one **Write Prim Attribute** node for **Execute A** and the other node for **Execute B**:
### Test Your Flip Flop Node
Click **Play** in your scene’s toolbar. Every time you press J, the torus will alternate between the origin and 250 units in the positive direction on the y-axis:
> Cycling the torus's position.
## Common Problems and Caveats
If you run into problems or have questions, read through this section to, hopefully, find an explanation:
| Common Problem/Question | Explanation |
|-------------------------|-------------|
| You can’t connect to a node input due to incompatible types. | Remove all connections from the target node and reconnect. When extended types are resolved, the node has to be disconnected to reset the types. |
| It’s set up correctly, but isn’t working. | Check the **Console** panel for error or warning messages, try saving and reloading the scene, and ensure you’ve loaded the Action Graph extensions bundle. |
# Omni.VDB
Omni.VDB is the central place for handling VDB volumes in Omniverse. It supports encoding and decoding of the entire VDB family (OpenVDB/NanoVDB/NeuralVDB) as well as volume manipulation such as boolean operations, mesh conversion, and filtering. It provides a range of APIs including C++, Python, and OmniGraph.
# Redirections To The Node Library Documentation
## Note
These are only needed until all extensions are migrated, at which point the links will all reference the local node documentation.
## omni.graph.docs.ogn_attribute_types
## omni.graph.docs.ogn_attribute_roles
# Release Notes
## Current Release
### 1.9.11
**Release Date:** April 2024
### Fixed
- Fixed an issue where Launcher was minimized to the system tray instead of exiting when users clicked the Exit option in the user settings menu.
- Fixed a race condition that could cause settings reset. [OM-118568]
- Fixed gallery positioning for content packs. [OM-118695]
- Fixed beta banner positioning on the Exchange tab. [OM-119105]
- Fixed an issue on the Hub page settings that caused showing “infinity” in disk chart for Linux. [HUB-965]
- Fixed cache size max validations on Hub page settings tab. [OM-119136]
- Fixed cache size decimal points validations on Hub page settings tab. [OM-119335]
- Fixed Hub Total Disk Space chart to not allow available disk space to become negative. [HUB-966]
- Fixed an issue on the Hub page settings that caused cache size not to be displayed. [HUB-960]
- Fixed an issue on the Hub page settings preventing editing Cleanup Threshold. [OM-119137]
- Fixed Hub page settings chart drive/mount detection size based on cache path. [HUB-970]
- Replaced the Omniverse Beta license agreement text with the NVIDIA License and added a license agreement link in the About dialog. [OM-120991]
## All Release Notes
- [1.9.8](release-notes/1_9_8.html)
- [1.9.11](release-notes/1_9_11.html)
- [Fixed](release-notes/1_9_11.html#fixed)
- [1.9.10](release-notes/1_9_10.html)
- [1.8.7](release-notes/1_8_7.html)
- [1.8.2](release-notes/1_8_2.html)
- [1.8.11](release-notes/1_8_11.html)
- [1.7.1](release-notes/1_7_1.html)
- [1.6.10](release-notes/1_6_1.html)
- [1.5.7](release-notes/1_5_7.html)
- [1.5.5](release-notes/1_5_5.html)
- [1.5.4](release-notes/1_5_4.html)
- [1.5.3](release-notes/1_5_3.html)
- [1.5.1](release-notes/1_5_1.html)
- [1.4.0](release-notes/1_4_0.html)
- 1.3.4
- 1.3.3
- 1.2.8
- 1.1.2
- 1.0.50
- 1.0.42
- 1.00.48
# Release Notes
## [0.1.9] - 06/01/2023
- **OM-100100** - [CAD Converter] job definition for Farm
- **OM-100490** - [CAD Converter] Add progress reporting for CAD Converter(s)
- Upgrade to Kit-105 and Python 3.10.
- Change the request entry point.
- Add basic material support.
> **Known limitations**: POST requests of the same CAD file twice in the same process will return “exit -1” due to name conflict. There is an open ticket.
## [0.1.8] - 06/01/2023
## [0.1.7] - 04/05/2023
- Add config file support for tessellation control and instancing.
- **OM-89303**: Add response model.
- **OM-87528**: Fix filename bug that was replacing illegal USD characters with “_” in the filenames.
## [0.1.6] - 03/07/2023
- Initial Alpha release
# Releasing Carbonite
Two types of releases are performed:
- Sprint releases, which are made at the end of each 2-week long sprint
- Patch releases, which can be made at any time and include only fixes for a previous release
For every release, a git tag of the form `v{version-number}` is created on the appropriate commit. This will trigger an automatic promotion of the build artifacts previously published from that commit into “officially versioned” artifacts. For example, if the commit SHA started with `baadf00d`, the originally published artifact would include this information, along with branch name and CI system used to make the artifact. After the tag has been made the version will simply be the `{version-number}` specified in the git tag. This creates a clear distinction between in-development builds and official releases. Official releases are the only artifacts that will have the short form version (no hash, no build number, etc). Please understand that all that information is still available inside the artifact, if needed.
Sprint releases are always made from the default branch (main/master). These releases will never have a patch number set, they will end with a `.0`.
Patch releases are made to deliver bug fixes. They must always have a patch number. A patch to the latest release can be delivered from the default branch if only bug fixes have been added to that branch. Otherwise a branch must be created from the version tag which was originally created for the release that needs fixing. This branch should be named `patch/{version}`. Note that you do not include the patch number in the `{version}` used to name the branch. This allows us to release multiple successive patches to the same sprint release from that branch. Each patch release will increment the patch number and have an associated git tag that includes the full version number (including patch number).
Carbonite follows a romantic versioning scheme (in contrast to a semantic versioning scheme). It is inspired by the NVIDIA GPU driver and is one-hundred based. We will only use the patch number (fractional in this scheme) for patching, so every sprint release we will bump the 100-based number. This means that officially released artifacts from sprint release will take the form of h.0 (where h is a number between 100 and 999) while officially released artifacts from a patch release will take the form of h.n where n > 0. Every sprint release will increment h by one. We envision moving Carbonite to semantic versioning in the future, once we have fully converted to Omniverse Native Interfaces.
Once a release has been made, the version number in the `VERSION` file must be increased if further development is planned on that branch. If the next release is a sprint release then the hundred-based number is simply incremented by one. If the patch number was set, it is removed. On a patch branch, the patch number is incremented by one. Note that incrementing on a patch branch post release is **only needed if further patches are planned**.
You will have noticed in the previous paragraph that there was a mention of the possibility that the patch number was set on the default branch. You may be wondering how that is possible. The scenario is as follows. The development team hasn’t submitted any new feature work in the default branch, only bug fixes. A request comes in to urgently patch the latest sprint release with these bug fixes. In that case it makes the most sense to deploy the patch from the default branch by simply adjusting the `VERSION` file to match. That being said, if there is a strict requirement from customers to only include a single bug fix and no other bug fixes this would not work and we would follow the regular flow. That is, we would create a patch branch from the release tag, set the `VERSION` for a patch release and cherry-pick the fix to that branch.
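The bumping rules above can be summarized with a small helper. This is a hypothetical sketch (in practice the `VERSION` file is edited directly, and the version 147 below is an arbitrary example):

```python
def next_version(version: str, patch_release: bool) -> str:
    """Compute the next Carbonite version under the scheme described above.

    Sprint releases bump the hundred-based number and end in .0;
    patch releases increment (or introduce) the patch number.
    Hypothetical helper, not part of the actual release tooling.
    """
    parts = [int(p) for p in version.split(".")]
    hundred = parts[0]
    patch = parts[1] if len(parts) > 1 else 0
    if patch_release:
        return f"{hundred}.{patch + 1}"
    return f"{hundred + 1}.0"

print(next_version("147.0", patch_release=False))  # 148.0 (next sprint release)
print(next_version("147.0", patch_release=True))   # 147.1 (first patch)
print(next_version("147.1", patch_release=True))   # 147.2 (follow-up patch)
```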
### Note
The above applies to `CHANGES.md` as well. The next release version is used as the heading of a new section at the top of the file immediately after a release. The top section in `CHANGES.md` must always have the same version number as in `VERSION`. Once we have the capability to automatically generate release notes (from MR/commit messages) the version number in `VERSION` will become the single source of truth.
# FAQ
## General Questions
### What is RTX Remix and what sets it apart from other modding tools?
> RTX Remix is a cutting-edge modding platform built on NVIDIA Omniverse. What sets it apart is its ability to capture game scenes, enhance materials using generative AI tools, and efficiently create impressive RTX remasters incorporating path tracing, DLSS, and Reflex. You can even plug in modern assets that feature physically accurate materials and incredible fidelity, into game engines that originally only supported simple color textures and assets of basic polygonal detail.
>
> Beyond its core capabilities, what makes RTX Remix special is that it offers modders a chance to remaster the graphics of a wide variety of games with the same exact workflow and tool–a rarity in modding. Experts who have mastered traditional modding tools can use them in tandem with RTX Remix to make even more impressive mods.
### Can you explain the role of generative AI Texture Tools in RTX Remix and how they enhance materials automatically?
> RTX Remix offers generative AI Texture Tools to automatically enhance textures from classic games. The AI network has been trained on a wide variety of textures, and can analyze them to identify the material properties they are meant to possess. It will generate roughness and normal maps to simulate realistic materials, and upscale the pixel count of textures by 4X, ensuring that the remastered content not only looks stunning but also retains the essence of the original game.
### How does RTX Remix utilize ray tracing in the creation of remastered content?
> RTX Remix integrates full ray tracing technology (also known as path tracing) to simulate the behavior of light in a virtual environment. This results in highly realistic lighting effects, reflections, and shadows, significantly enhancing the visual appeal of the remastered content. The use of ray tracing in RTX Remix contributes to creating a more immersive and visually captivating gaming experience. It is also easier to remaster and author scenes with full ray tracing as it’s easier to relight a game when lights behave realistically.
### What role does DLSS play in RTX Remix, and how does it impact performance?
> NVIDIA DLSS 3 utilizes deep learning algorithms to perform DLSS Frame Generation and DLSS Super Resolution, boosting performance while maintaining high-quality visuals. It is an essential technology to enable full ray tracing, also known as path tracing (the most realistic light simulation available), which is used in state of the art games and blockbuster movies. With DLSS 3, powerful RTX GPUs can render jaw dropping visuals without tradeoffs to smoothness or image quality.
### Can RTX Remix be used with any game, or is it limited to specific titles?
> RTX Remix is designed to support a wide range of games, though its level of compatibility may vary depending on the complexity of the game’s assets and engine. RTX Remix works best with DirectX 8 and 9 games with fixed function pipelines, such as Call of Duty 2, Hitman 2: Silent Assassin, Garry’s Mod, Freedom Fighters, Need for Speed Underground 2, and Vampire: The Masquerade – Bloodlines; head to the community compatibility list on ModDB to see which games are compatible. Download any game’s rtx.conf config file and the RTX Remix runtime version it works with, and you are ready to get going with your mod.
>
> Game compatibility will expand over time, in part as NVIDIA publishes more feature-rich versions of the RTX Remix Runtime (the component that handles how RTX Remix hooks to games). In part, compatibility will also improve thanks to the community; in April, we released the RTX Remix Runtime in open source, making it easy for the community to contribute code that can improve RTX Remix’s functionality with a range of games.
### I’ve added a RTX Remix runtime and a RTX.conf file from ModDB next to my game and RTX Remix won’t properly hook to it–any suggestions?
> Many classic games do not use a fixed function pipeline for how they render everything, and therefore may struggle to work with RTX Remix. To better understand compatibility, we encourage modders to read about it within our RTX Remix Text Guide. In some cases, a game may not work because its rendering techniques are too modern or primitive. Alternatively, it may not work because it requires unorthodox steps–for example, a wrapper, a unique file structure that allows the RTX Remix Runtime to hook to the game properly, or a different version of the game.
>
> We recommend modders check ModDB’s community resources, which include a compatibility table, rtx.conf file with the proper config settings for each game, as well as any unique steps users have documented that are required to run a particular game well. These resources will continue to improve as they are updated by the community.
### Can RTX Remix remaster a game with new lighting and textures in a single click?
> RTX Remix is not a “one button” solution to remastering. While producing a mod with full ray tracing in RTX Remix is relatively straightforward, if the game assets are not upgraded to possess physically accurate materials, the mod will not likely look right; the likelihood is many textures will look uniformly shiny or matte.
>
> PBR assets with physically accurate materials react properly to realistic lighting. Glass reflects the world with clear detail, while laminate wood flooring has rough, coarse reflections. And stone, though without visible reflections, is still capable of bouncing light and having an effect on the scene. Without taking advantage of PBR, the modder is not fully taking advantage of full ray tracing.
>
> Generative AI Texture Tools can help you get started with converting legacy textures to physically accurate materials. But the most impressive RTX Remix projects (like Portal With RTX, Portal: Prelude RTX and Half Life 2 RTX: An RTX Remix Project) are chock full of lovingly hand made high quality assets with enormous polygonal counts and realistic materials. The best Blender artists will revel in being able to bring their carefully crafted assets into games without compromising on their visuals.
>
> The most ambitious mods also see modders customize and add new lights to each scene to account for how the game now looks with realistic lighting and shadowing. This relighting step can allow for all the advantages of path traced lights while preserving the look of the original game.
>
> When used alongside traditional modding tools, like Valve’s Hammer Editor, RTX Remix can make mods even more spectacular. Modders can reinvent particle systems and redesign aspects of the game RTX Remix can not interface with– for example the level design, the physics, and in-game AI.
### How does RTX Remix simplify the process of capturing game assets for modders?
> RTX Remix streamlines asset capturing by providing an intuitive interface that allows modders to easily capture a game scene and the game assets, before converting them to OpenUSD. The software simplifies the often complex task of capturing models, textures, and other elements, making it accessible to modders.
### What are some notable examples or success stories of games remastered using RTX Remix?
> RTX Remix has been employed in the remastering of several games, including NVIDIA’s own Portal With RTX, as well as the community made Portal: Prelude RTX and the under development Half-Life 2 RTX: An RTX Remix Project. Each remaster has been visually stunning and immersive. If you would like to see more examples of what RTX Remix can do, we encourage you to check out the RTX Remix Discord.
### How user-friendly is RTX Remix for modders with varying levels of experience?
> RTX Remix is designed for experienced modders. The intuitive interface, coupled with step-by-step guides and tutorials, ensures that modders can navigate and utilize the software effectively, unlocking the potential for creative expression. We recommend modders participate in the RTX Remix Discord community to collaborate and learn from one another.
### How much time can I expect to spend with RTX Remix to make a mod?
> It’s truly up to the modder. Some modders will lean heavily on AI and produce mods that feature full ray tracing and replaced textures, but no modifications to geometry and meshes. Others will spend months crafting the perfect remaster, complete with assets that feature 20, 30, or in the case of Half Life 2 RTX: An RTX Remix Project, sometimes 70 times the polygonal detail of assets in the original game.
>
> RTX Remix is a huge time saver, as it takes the need away to juggle dozens of tools to mod a single game’s visuals. You don’t need to be skilled in reverse engineering games to inject full ray tracing into an RTX Remix mod. And in a fully ray traced game where lighting is simulated realistically, it is much easier to reauthor and relight a game as every light behaves just as you expect.
>
> All of that saved time can be spent on leveling up other aspects of a mod or bringing a mod to market sooner.
### How do I give feedback to NVIDIA about my experience with RTX Remix?
> Please share any feedback with us on our RTX Remix GitHub. Simply follow the steps below:
> 1. Go to this NVIDIAGameWorks RTX Remix page
> 2. Click the green “New Issue” button
> 3. Select the bug template (Runtime, Documentation, Toolkit, Feature Request) and click “Get Started”
> 4. Fill out the template, add as many details as possible, include files and screenshots
> 5. Click the green “Submit new issue” button
> We will develop RTX Remix with close attention paid to the issues documented there.
Need to leave feedback about the RTX Remix Documentation? Click here
# Formats — rtx_remix 2024.3.0 documentation
## Formats
Remix utilizes Omniverse’s standard USD (for scenes) and MDL (for materials) file formats. Most content used in Omniverse needs to be converted into one of those standard formats so that your file can be used universally among the applications being used within the platform. You can view the Omniverse Format documentation to read further details about file formats and format conversions.
### Asset Converter
Apps in Omniverse are loaded with the Asset Converter extension. With it, users can convert models into USD using the Asset Converter service. Below is a list of formats it can convert to USD.
| Extension | Format | Description |
|-----------|--------|-------------|
| .fbx | Autodesk FBX Interchange File | Common 3D model saved in the Autodesk Filmbox format |
| .obj | Object File Format | Common 3D Model format |
| .gltf | GL Transmission Format File | Common 3D Scene Description |
| .lxo | Foundry MODO 3D Image Format | Foundry MODO is a type of software used for rendering, 3D modeling, and animation. |
## Materials
NVIDIA has developed a custom schema in USD to represent material assignments and specify material parameters. In Omniverse, these specialized USD’s get an extension change to .MDL signifying that it is represented in NVIDIA’s open-source MDL (Material Definition Language).
### DL Texture Formats Accepted
MDL Materials throughout Omniverse can accept texture files in the following formats.
| Extension | Format | Description |
|-----------|--------|-------------|
| .bmp | Bitmap Image File | Common image format developed by Microsoft. |
| .dds | DirectDraw Surface | Microsoft DirectX format for textures and environments. |
| .gif | Graphical Interchange Format File | Common color constrained lossless web format developed by CompuServe. |
| .hdr | High Dynamic Range Image File | High Dynamic Range format developed by Industrial Light and Magic. |
| .pgm | Portable Gray Map | Files that store grayscale 2D images. Each pixel within the image contains only one or two bytes of information (8 or 16 bits). |
| .jpg | Joint Photographic Experts Group | Common “lossy” compressed graphic format. |
| .pic | PICtor raster image format | DOS imaging standard mainly used by Graphics Animation System for Professionals (GRASP) and Pictor Paint. |
| .png | Portable Network Graphics File | Common “lossless” compressed graphics format. |
| .ppm | Portable Pixel Map | Netpbm color raster format that stores 2D images with RGB values per pixel. |
## USD File Formats
Universal Scene Description (USD) is a versatile framework designed to encode data that can be scaled, organized hierarchically, and sampled over time. Its primary purpose is to facilitate the exchange and enhancement of data among different digital content creation applications.
| Extension | Format | Description |
|-----------|--------|-------------|
| .usd | Universal Scene Description (Binary) | This is the standard binary or ASCII file format for USD. It stores the 3D scene and asset data in a compact, binary form, making it efficient for storage and processing. |
| .usda | Universal Scene Description (ASCII) | This format stores USD data in a human-readable, ASCII text format. It's primarily used for debugging and as a reference because it's easier for humans to read and modify. However, it's less efficient in terms of file size and loading speed compared to the binary format. |
| .usdc | Universal Scene Description (Crate) | This is a binary format for USD, but it's optimized for high-performance data storage and retrieval. .usdc files are typically used as the primary format for asset storage and production pipelines, as they offer faster loading and saving times compared to .usd files. |
Need to leave feedback about the RTX Remix Documentation? Click here
# Glossary of Terms
## .exe file
- An “.exe” file, short for “executable,” is a common file extension used in Windows operating systems and some other computing environments. An executable file contains a program or application that can be run or executed by a computer’s operating system.
## .usd
- Universal Scene Description (Binary) This is the standard binary or ASCII file format for USD. It stores the 3D scene and asset data in a compact, binary form, making it efficient for storage and processing.
## .usda
- Universal Scene Description (ASCII) This format stores USD data in a human-readable, ASCII text format. It’s primarily used for debugging and as a reference because it’s easier for humans to read and modify. However, it’s less efficient in terms of file size and loading speed compared to the binary format.
## .usdc
- Universal Scene Description (Crate) This is a binary format for USD, but it’s optimized for high-performance data storage and retrieval. .usdc files are typically used as the primary format for asset storage and production pipelines, as they offer faster loading and saving times compared to .usd files.
## .usdz
- Universal Scene Description (ZIP) A compressed container file in the ZIP structure that can contain both geometry and texture information.
## A
### Anisotropic Roughness
- Refers to a property that describes surface roughness in a way that varies based on the direction of measurement. Unlike isotropic roughness, which is uniform in all directions, anisotropic roughness implies that the micro-surface irregularities on an object’s surface have a directional preference.
### Anisotropy
- Refers to a property of materials or surfaces that causes them to exhibit different reflective or shading characteristics in different directions. Anisotropic materials have a direction-dependent behavior, meaning they can appear shinier or more reflective in one direction while exhibiting different properties in other directions.
## B
### Barycentric Coordinates
- A set of coordinates used to describe the position of a point within a triangle or other convex polygon. These coordinates are defined relative to the vertices of the polygon and are useful for various operations in graphics rendering, including interpolation and texture mapping. Barycentric coordinates are represented as a set of weights for each vertex of the polygon.
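For illustration, the barycentric coordinates of a 2D point in a triangle can be computed from signed areas (a generic sketch, not tied to any particular renderer):

```python
# Barycentric coordinates (w0, w1, w2) of point p in triangle (a, b, c),
# computed in 2D from signed areas. p == w0*a + w1*b + w2*c, the weights
# sum to 1, and all weights are non-negative iff p is inside the triangle.

def barycentric(p, a, b, c):
    def doubled_area(o, u, v):  # 2x the signed area of triangle (o, u, v)
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
    total = doubled_area(a, b, c)
    w0 = doubled_area(p, b, c) / total  # weight of vertex a
    w1 = doubled_area(p, c, a) / total  # weight of vertex b
    w2 = doubled_area(p, a, b) / total  # weight of vertex c
    return w0, w1, w2

# The centroid of a triangle has equal weights (1/3, 1/3, 1/3).
print(barycentric((1.0, 1.0), (0.0, 0.0), (3.0, 0.0), (0.0, 3.0)))
```

The same weights are what a renderer uses to interpolate per-vertex attributes, such as normals or texture coordinates, across a triangle.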
### Bitangent
- A vector that is perpendicular to both the surface normal and the tangent vector of a 3D surface at a particular point. The bitangent vector is typically used in advanced shading techniques, such as normal mapping and bump mapping, to complete a local coordinate system known as the tangent space.
### Blue Noise
- Refers to a type of noise pattern that has unique properties, making it particularly useful for various applications, including texture mapping, sampling, and anti-aliasing. Blue noise is characterized by a distribution of points or values in such a way that they are more evenly spaced and have a more perceptually uniform distribution of energy in the high-frequency spectrum, especially in the blue part of the spectrum.
## C
### Composite Output
- Refers to the final image that is generated by combining and blending various graphical elements or layers together. This process typically involves taking multiple rendered images, often with transparency information, and compositing them into a single cohesive image that represents the final scene as it will be displayed to the viewer.
### Cone Radius
- In ray-cone techniques, the radius of a cone that expands around a ray as it extends from the camera into the scene. The cone radius approximates the ray’s footprint and is used for texture filtering and level-of-detail selection.
## D
### DDS File
- DDS stands for “DirectDraw Surface,” and it is a file format commonly used in computer graphics and game development. DDS files are specifically designed for storing and efficiently accessing texture and image data.
### Diffuse Albedo
- Refers to the inherent color or reflectance of a surface when it interacts with and scatters incoming light uniformly in all directions. It represents the base color or the color of a surface under diffuse lighting conditions, meaning when there are no specular highlights or reflections.
### Disocclusion
- Refers to the process of determining what is visible and what is not in a 3D scene from a given viewpoint. It primarily relates to the handling of objects, surfaces, or portions of objects that were previously hidden or occluded by other objects but have become visible due to changes in the camera’s position or orientation.
### DLSS
- NVIDIA DLSS (Deep Learning Super Sampling) is a neural graphics technology that multiplies performance using AI to create entirely new frames and display higher resolution through image reconstruction—all while delivering best-in-class image quality and responsiveness.
## E
### Emissive Radiance
- Refers to the radiant energy emitted or radiated from a surface or object in a 3D scene. It represents the light or color that a surface emits as opposed to reflecting or scattering light like most materials.
### Exposure Histogram
- A graphical representation that provides a visual summary of the distribution of pixel brightness or luminance values in an image. It shows how many pixels fall into different brightness or exposure levels, typically displayed as a histogram chart.
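As a rough sketch of the underlying data (generic code, not any particular renderer's implementation; the bin count and luminance range are arbitrary example values), an exposure histogram bins pixels by log-luminance:

```python
import math

# Sketch: bin RGB pixels by log2 luminance, the data an exposure
# histogram visualizes (and that auto-exposure often operates on).

def luminance(rgb):
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b  # Rec. 709 weights

def exposure_histogram(pixels, bins=8, lum_min=1e-4, lum_max=16.0):
    counts = [0] * bins
    log_min, log_max = math.log2(lum_min), math.log2(lum_max)
    for p in pixels:
        lum = min(max(luminance(p), lum_min), lum_max)  # clamp to range
        t = (math.log2(lum) - log_min) / (log_max - log_min)  # map to 0..1
        counts[min(int(t * bins), bins - 1)] += 1
    return counts

pixels = [(0.0, 0.0, 0.0), (0.18, 0.18, 0.18), (1.0, 1.0, 1.0), (8.0, 8.0, 8.0)]
print(exposure_histogram(pixels))  # dark pixels land in low bins, bright in high
```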
## F
### Froxel
- A portmanteau of “fragment” and “voxel” used in computer graphics. It represents a small 3D volume element or pixel-sized voxel within a three-dimensional space. Froxels are typically used in volume rendering and ray tracing techniques to sample and process data within a 3D volume, similar to how pixels sample a 2D image.
## H
### HitT
- Short for “hit T”: the parametric distance *t* along a ray at which its closest intersection with scene geometry occurs. Hit distances are used, for example, by denoisers to guide filtering.
## I
### Inf/Nan Check
- Refers to a process or technique used to identify and handle numerical values that are either infinite (Inf) or not-a-number (NaN) during rendering or computation.
### Interpolated Normal
- A normal vector calculated for a specific point on a 3D surface by interpolating or blending the normals of nearby vertices. These interpolated normals are used to determine how light interacts with the surface and are crucial for achieving smooth shading and realistic lighting effects.
### Isotropic Roughness
- Refers to a property that describes the degree of micro-surface irregularities or roughness on a 3D object’s surface in a uniform and non-directional manner. This roughness affects how light scatters and interacts with the surface, leading to diffuse reflections.
## L
### Local Tonemapper Luminance Output
- Refers to the output of a local tonemapping process that adjusts the brightness and contrast of an image on a pixel-by-pixel basis. This adjustment is based on the luminance or brightness values of the pixels in the image.
## M
### Material Type
- Refers to a classification or categorization of the physical properties and visual characteristics of surfaces or materials used in 3D scenes. Material types are used to describe how light interacts with a particular surface and how it should be shaded and rendered. Common material types might include: Diffuse Materials, Specular Materials, Translucent Materials, and Emissive Materials.
## N
### Normals
- Perpendicular vectors on the surfaces of 3D objects. They define the orientation of surfaces, play a key role in lighting calculations, and enable smooth shading by interpolating across polygon surfaces. Normals are fundamental for simulating how light interacts with objects and achieving realistic lighting and shading effects in 3D scenes.
### NRD
- NVIDIA Real-Time Denoisers (NRD) is a spatio-temporal, API-agnostic denoising library that’s designed to work with low ray-per-pixel signals. It uses input signals and environmental conditions to deliver results comparable to ground-truth images.
## O
### Octahedron-normal vectors
- A method of encoding normals by projecting them onto an octahedron, folding it, and unwrapping it onto a single square, which gives uniform properties for value distribution and low encoding and decoding costs.
### Opacity
- Refers to the degree to which an object or part of an object is transparent or allows light to pass through. It is a fundamental property used to control the visibility and transparency of 3D objects and their components within a rendered scene.
## P
### Pixel Checkerboard
- A technique used to analyze and visualize the distribution of pixel shading workloads across the screen or image. It involves rendering a checkerboard pattern over a scene, where each square of the checkerboard represents a pixel. The color of each square can indicate the complexity or computational workload of the corresponding pixel.
### Primary Depth
- Refers to the depth information associated with the primary rays in a ray tracing pipeline.
### Primary Ray Bounces
- Refers to the first set of rays cast from the camera or viewer into a 3D scene during the ray tracing process. These primary rays are used to determine which objects or surfaces in the scene are visible from the camera’s perspective.
### Primary Specular Albedo
- Refers to the albedo or reflectance value that represents the color and intensity of the specular reflections on a surface when using ray tracing or other rendering techniques. It specifically relates to how a surface reflects light from direct light sources, such as point lights or directional lights.
### Primitive Index
- Refers to a unique identifier or index associated with a primitive in a rendering or graphics pipeline. Primitives are fundamental geometric shapes or elements used in computer graphics to construct more complex scenes and objects. These primitives can include points, lines, triangles, and more. Each primitive is assigned a unique index that allows the GPU to process them individually or in specific groups as required by the rendering algorithm.
## R
### ReBLUR
- A denoiser based on the idea of self-stabilizing, recurrent blurring. It’s designed to work with diffuse and specular signals generated with low ray budgets. In fact, ReBLUR supports checkerboard rendering, producing reasonable results when casting just half a ray per pixel.
### ReLAX
- A variant of SVGF optimized for denoising ray-traced specular and diffuse signals generated by NVIDIA RTX™ Direct Illumination (RTXDI). ReLAX offers substantial improvements to image quality and performance over stock SVGF. Not only does ReLAX preserve lighting details produced by massive RTXDI light counts, it also yields better temporal stability and remains responsive to changing lighting conditions.
### ReSTIR Direct Illumination
- ReSTIR or spatiotemporal reservoir resampling samples one-bounce direct lighting from many lights without needing to maintain complex data. ReSTIR DI samples all primary lighting and shadows in the screen space 65x faster than the previous state of the art solution (RIS or resampled importance sampling). This screen space light sampling solution is capable of virtually unlimited lights with a few (1-4) rays per pixel.
### ReSTIR Global Illumination
- ReSTIR GI resamples multi-bounce indirect lighting paths. At a single sample per pixel every frame, this solution achieves a mean-square error (MSE) improvement greater than 10x. In conjunction with a denoiser, this offers high quality path tracing at real-time frame rates.
### RTX
- RTX represents real-time ray tracing, where the calculations required for ray tracing are performed in real-time, enabling lifelike graphics and dynamic lighting effects in video games and other applications.
### RTXDI
- (RTX Direct Illumination) Generates millions of fully ray traced dynamic lights creating photorealistic lighting of night and indoor scenes that require computing shadows from 100,000s to millions of area lights. No more baking, no more hero lights. Unlock unrestrained creativity even with limited ray-per-pixel counts. When integrated with RTXGI and NVIDIA Real-Time Denoiser (NRD), scenes benefit from breathtaking and scalable ray-traced illumination and crisp denoised images, regardless of whether the environment is indoor or outdoor, in the day or night.
### RTXGI
- (RTX Global Illumination) Multi-bounce indirect light without bake times, light leaks, or expensive per-frame costs. RTX Global Illumination (RTXGI) is a scalable solution that powers infinite bounce lighting in real time, even with strict frame budgets. Accelerate content creation to the speed of light with real-time in-engine lighting updates, and enjoy broad hardware support on all DirectX Raytracing (DXR)-enabled GPUs. RTXGI was built to be paired with RTX Direct Illumination (RTXDI) to create fully ray-traced scenes with an unrestrained count of dynamic light sources.
## S
### Screen-Space Motion Vector
- (SSMV) is a data representation that captures the motion of objects or pixels between consecutive frames within the screen space. SSMVs are used in various rendering techniques, such as motion blur and temporal anti-aliasing, to simulate realistic motion effects.
### Secondary Ray Bounces
- Refers to the rays that are cast after the primary rays during the ray tracing process. Primary rays are initially shot from the camera or viewer into the 3D scene to determine which objects or surfaces are visible. When these primary rays hit a reflective or refractive surface, secondary rays are generated to simulate additional lighting effects, such as reflections and refractions.
### Secondary Specular Albedo
- Refers to the albedo or reflectance value that represents the color and intensity of the secondary, or indirect, specular reflections on a surface when using ray tracing or other advanced rendering techniques.
### Shading Normal
- A vector that represents the orientation or facing direction of a surface at a particular point on a 3D object. Unlike the purely geometric (face) normal, the shading normal may be interpolated or perturbed (e.g. by a normal map). It is used in shading calculations to determine how light interacts with the surface and how the surface should be illuminated or shaded.
### SIGMA
- SIGMA is a fast shadow denoiser. It supports shadows from any type of light sources, like the sun and local lights. SIGMA relies more on physically based spatial filtering than temporal filtering, offering minimal temporal lag.
### Stochastic Texture Filtering (STF)
- often referred to as stochastic texture synthesis or stochastic texture generation, is a technique used in computer graphics and computer vision to create realistic and natural-looking textures. The term “stochastic” refers to randomness or unpredictability, and in this context, it involves introducing controlled randomness into the generation of textures to make them appear more natural and less repetitive.
## T
### Tangent
- Refers to a vector that lies in the plane of a 3D surface and is perpendicular to the surface’s normal vector. Tangent vectors are used in techniques like normal mapping and bump mapping to simulate fine surface details and enhance the realism.
### Texture Coordinates
- Texture coordinates specify which part of a 2D texture map should be applied to each point on the 3D model’s surface. This mapping allows for realistic and detailed texturing of 3D objects, enabling the application of textures like color, bump maps, normal maps, and more. By specifying texture coordinates for each vertex, the graphics hardware can interpolate and apply the corresponding texture data smoothly across the entire surface, creating the illusion of complex surface properties and details on the 3D object when rendered.
### Thin Film Thickness
- Refers to the measurement of the thickness of a thin film or layer of material that is applied to a surface. This concept is often used in computer graphics and rendering to simulate the interaction of light with thin layers of materials, such as oil films on water or soap bubbles.
### Tonemapping
- A technique of converting high dynamic range (HDR) images or scenes into low dynamic range (LDR) images that can be displayed while preserving as much visual detail and realism as possible.
### Triangle Normal
- These normals help determine how light rays are reflected or refracted off the triangle’s surface, affecting the appearance of the triangle in the rendered image.
## U
### UDIM
- Stands for U DIMension. UDIM is based on a tile system where each tile is a different texture in the overall UDIM texture array; each tile consists of its own UV space (0-1, 1-2, 2-3) and has its own image assigned to that tile.
## V
### Vertex Color
- Refers to color information associated with individual vertices (corner points) of a 3D model or object. Each vertex can have a specific color value assigned to it, which is typically represented as a combination of red, green, and blue (RGB) values.
### Virtual Motion Vector
- The motion of objects between frames. These motion vectors help reduce data redundancy and improve compression efficiency by describing how objects move from one frame to another.
Need to leave feedback about the RTX Remix Documentation? Click here | 17,324 |
remix-index.md | # Index
## Introduction
- [How Does It Work](#how-does-it-work)
## Requirements
- [Technical Requirements](#technical-requirements)
- [Requirements For Modders](#requirements-for-modders)
- [RTX Remix Runtime Requirements for Developers](#rtx-remix-runtime-requirements-for-developers)
## Compatibility
- [Defining Compatibility](#defining-compatibility)
- [Fixed Function Pipelines](#fixed-function-pipelines)
- [DirectX Versions](#directx-versions)
- [ModDB Compatibility Table](#moddb-compatibility-table)
- [Rules of Thumb](#rules-of-thumb)
- [Publish Date](#publish-date)
- [Graphics API version](#graphics-api-version)
- [Supported GPU](#supported-gpu)
- [So Is My Game Content Being Processed by Remix?](#so-is-my-game-content-being-processed-by-remix)
- [Why are Shaders Hard to Path Trace?](#why-are-shaders-hard-to-path-trace)
## Installation Guide
- [Install the RTX Remix Runtime](#install-the-rtx-remix-runtime)
- [Install the Runtime from the App Files](#install-the-runtime-from-the-app-files)
- [Install the Runtime from the Omniverse Launcher](#install-the-runtime-from-the-omniverse-launcher)
- [Install the Runtime from GitHub](#install-the-runtime-from-github)
- Install the RTX Remix Toolkit
- Install the RTX Remix from the Omniverse Launcher
- How to Remaster Using RTX Remix
- RTX Remix Runtime User Guide
- RTX Remix Toolkit User Guide
- Formats
- Asset Converter
- Materials
- DL Texture Formats Accepted
- USD File Formats
- Changelog
- RTX Remix Release Notes (4/30/2024)
- RTX Remix Toolkit Release 2024.3.0
- Features
- Quality of Life Improvements
- Bug Fixes
- RTX Remix Runtime 0.5
- Features
- Quality of Life Improvements
- Compatibility Improvements
- Bug Fixes
- Full changelog
- [Unreleased]
- Added
- Changed
- Fixed
- Removed
- [2024.3.0]
- Added
- Changed
- Fixed
- Removed
- [2024.3.0-RC.3]
- Added
- Changed
- Fixed
- Removed
- [2024.3.0-RC.2]
- Added
- Changed
- Fixed
- Removed
- [2024.3.0-RC.1]
- Added
- Changed
- Fixed
- Removed
- [Removed](#id17)
- [2024.2.1](#id18)
- [Added:](#id19)
- [Fixed:](#id20)
- [Known Issues](#known-issues)
- [How to Report an Issue](#how-to-report-an-issue)
- Glossary of Terms
- FAQ
- Contributing to RTX Remix
- Index
remix-installation.md | # Installation Guide
RTX Remix consists of two components - the **RTX Remix Runtime** and the **RTX Remix Toolkit**. The **RTX Remix Runtime**, which is open source, injects the path tracing into the game and bridges the gap between the original game’s renderer and the RTX Toolkit. The **RTX Remix Toolkit** allows you to modify captures created via the RTX Runtime, ingest assets, and make scene changes. Both are required to fully remaster a game end-to-end.
## Install the RTX Remix Runtime
### Install the Runtime from the App Files
If you’ve downloaded the RTX Remix Toolkit, you have access to the RTX Remix Runtime. Simply navigate to the contents of the runtime folder to discover it:
Navigate to this folder:
```
C:\Users\<USERNAME>\AppData\Local\ov\pkg\<rtx-remix-XXX.X.X>\deps\remix_runtime\runtime
```
**NOTE:** You may need to make hidden files visible in order to see the `AppData` folder. To do this, select the hamburger menu in the file explorer > Show > Hidden Files
### Install the Runtime from the Omniverse Launcher
If you’re having trouble finding this folder, you can also do it through the Omniverse Launcher:
1. Click on the hamburger menu next to “Launch.”
2. Select “Settings.”
3. Click the folder icon.
From there, you will find yourself nearly at the runtime folder; all you have to do is go to:
```
deps\remix_runtime\runtime
```
### Install the Runtime from GitHub
Alternatively, you can download the latest version of the RTX Remix Runtime through GitHub via this link: [github.com/NVIDIAGameWorks/rtx-remix](https://github.com/NVIDIAGameWorks/rtx-remix/releases/).
This version includes the **Runtime Bridge** and the **DXVK-Remix** applications required to run the Runtime.
When you download RTX Remix Runtime, you should get a zip file with the necessary components to prepare a supported game for RTX Remix. Unzipping the file, you should see a folder structure like the following:
```
remix-0.4.0/
|--- d3d9.dll <-- Bridge interposer
|--- ...
\--- .trex/
|--- NvRemixBridge.exe
|--- d3d9.dll <-- Remix Renderer/DXVK-Remix
\--- ...
```
Once you have the files on your computer, you’ll need to copy them alongside your game executables following the instructions in the section [Setup RTX Remix Runtime with your Game](howto/learning-runtimesetup.html).
We also host the RTX Remix Bridge and DXVK-Remix files separately through GitHub, and update them with experimental changes before they are ready to be packaged as part of an official RTX Remix Runtime release. If you would like to access those files, feel free to check them out below:
1. For the Bridge Application: bridge-remix.
2. For the DXVK-Remix Application: dxvk-remix.
## Install the RTX Remix Toolkit
1. Go to the NVIDIA™ RTX Remix website.
2. Follow the instructions for Installation.
### Install the RTX Remix from the Omniverse Launcher
1. Follow the instructions on how to Install the NVIDIA Omniverse Platform here: Install NVIDIA Omniverse.
2. In Omniverse Launcher, under the Exchange Tab, search for “**RTX Remix**”.
3. Select RTX Remix Application, (ensure that the dropdown next to the install button displays the latest release or the release you wish to download) and select the green “**INSTALL**” button to install the application.
4. After the application has been installed, the green “**INSTALL**” button will change into a grayish-white “**LAUNCH**” button. Click the LAUNCH button to launch the **RTX Remix** application.
resolver-details.md | # OmniUsdResolver Details
ArResolver was originally designed so a single studio could hook up its asset management system to USD. For the most part, this was a fairly trivial process for a studio to set up. Very rarely did one studio’s assets need to work with a different studio’s assets in USD. As USD has been incorporated into more industries, the need to consume assets from multiple asset management systems became a necessity. The second iteration of ArResolver, Ar 2.0, addressed this requirement by implementing the notion of Primary and URI Resolvers. This separation allowed multiple ArResolver plugins to live in the same USD environment, and Ar would dispatch resolves to the correct ArResolver plugin. Even though the API between Ar 1.0 and Ar 2.0 is quite different, the underlying concepts are similar. The following sections describe these concepts and how they apply to the different versions of Ar.
## Asset Paths
Asset Paths are a familiar part of scene description that are commonly authored in USD. They are a special type of string which indicates to USD that they need to be identified and located by asset resolution. The documentation describes what an [Asset](https://openusd.org/release/glossary.html#asset) means in USD along with the Asset Path which represents it.
### SdfFileFormat Arguments
SdfFileFormat plugins are a powerful concept in USD but are beyond the scope of this documentation. What is important for SdfFileFormat plugins in regards to ArResolver are the SdfFileFormat arguments that can be authored as part of the Asset Path. These SdfFileFormat arguments are important to the underlying SdfFileFormat plugin that will be used to load the actual Asset. There are two important aspects to an Asset Path authored with SdfFileFormat arguments:
1. An ArResolver plugin must respect incoming Asset Paths that may, or may not, have SdfFileFormat arguments. These SdfFileFormat arguments are usually seen when creating the Asset Identifier from the Asset Path (CreateIdentifier / AnchorRelativePath) or computing the Resolved Path (Resolve / ResolveWithAssetInfo)
2. The SdfFileFormat arguments are tied to the identity of the Asset which means that the same Asset Path with different SdfFileFormat arguments is a completely different Asset.
> Examples of Asset Paths that contain SdfFileFormat arguments:
> - @./foo/bar/asset.sff:SDF_FORMAT_ARGS:arg1=baz@
> - @./foo/bar/asset.sff:SDF_FORMAT_ARGS:arg1=boo@
The **OmniUsdResolver** will maintain the **SdfFileFormat** arguments when creating the **Asset Identifier** from the **Asset Path**. So it should be expected that the returned **Asset Identifier** can contain the **SdfFileFormat** arguments. However, the **SdfFileFormat** arguments will be stripped out when computing and returning the **Resolved Path**.
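The behavior described above (arguments stay with the identifier but are stripped from the resolved path) can be sketched with a small helper. This is an illustrative sketch, not OmniUsdResolver code: the `split_format_args` name is ours, and the `&` separator between multiple arguments is an assumption.

```python
# Illustrative sketch only: `split_format_args` is a hypothetical helper, and
# the "&" separator between multiple arguments is an assumption, not real API.
ARGS_DELIMITER = ":SDF_FORMAT_ARGS:"

def split_format_args(asset_path):
    """Return (path, args); the args belong to the identifier, not the resolved path."""
    if ARGS_DELIMITER not in asset_path:
        return asset_path, {}
    path, _, raw_args = asset_path.partition(ARGS_DELIMITER)
    args = dict(pair.split("=", 1) for pair in raw_args.split("&") if pair)
    return path, args

# The same path with different arguments identifies two different Assets:
a = split_format_args("./foo/bar/asset.sff:SDF_FORMAT_ARGS:arg1=baz")
b = split_format_args("./foo/bar/asset.sff:SDF_FORMAT_ARGS:arg1=boo")
```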
### Asset Identifiers
Asset Identifiers are not explicitly referred to throughout USD but they are important for uniquely identifying an Asset. In most cases, an **Asset Path** does not point to a specific Asset and requires additional information so the Asset can be resolved. For example, a relative file path authored as an **Asset Path** requires the containing Layer **Asset Identifier** so it can be properly anchored and resolved. The result returned from **CreateIdentifier** / **AnchorRelativePath** is usually the **Asset Identifier** computed from the **Asset Path** and anchoring **Asset Identifier**.
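The anchoring step can be illustrated with a simplified, file-path-only sketch. This is not the real **CreateIdentifier** / **AnchorRelativePath** implementation; it only shows the idea of anchoring a relative Asset Path to the directory of the containing layer’s identifier.

```python
import posixpath

# Simplified sketch of CreateIdentifier-style anchoring (not the real resolver):
# relative asset paths are anchored to the directory of the containing layer's
# identifier and normalized; other paths pass through unchanged.
def create_identifier(asset_path, anchor_identifier):
    if asset_path.startswith(("./", "../")):
        anchored = posixpath.join(posixpath.dirname(anchor_identifier), asset_path)
        return posixpath.normpath(anchored)
    return asset_path

identifier = create_identifier("../shared/material.usd",
                               "/project/scenes/scene.usd")
```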
Ar 2.0 introduced new API around creating Asset Identifiers for new Assets. **CreateIdentifierForNewAsset** is a place for an **ArResolver** plugin to perform any sort of initialization when an Asset is about to be created. The initialization performed in **CreateIdentifierForNewAsset** is completely up to the plugin and can be as simple, or as complex, as necessary for the new Asset. **OmniUsdResolver (Ar 2.0)** does not perform any special initialization in **CreateIdentifierForNewAsset** and is functionally equivalent to **CreateIdentifier**.
### Resolved Paths
Resolved Paths are the computed result when an **Asset Identifier** is resolved. In Ar 2.0 Resolved Paths are explicitly typed as **ArResolvedPath** but are really a wrapper around a normal **std::string**. The explicit **ArResolvedPath** type helps inform APIs what to expect about the Asset. When an API specifies an **ArResolvedPath** it indicates to the caller, or implementer, that the Asset is expected to have already gone through Asset Resolution.
> The lack of an explicit **Resolved Path** type like **ArResolvedPath** in Ar 1.0 made it difficult for an **ArResolver** implementation to know the state of an incoming path. Everything was just a **std::string** that could be either an **Asset Path**, **Asset Identifier**, or **Resolved Path**. The explicit type in Ar 2.0 really helped clarify the expectation of an incoming or outgoing path.
In a similar manner to **CreateIdentifierForNewAsset**, Ar 2.0 introduced **ResolveForNewAsset**. In most cases, the normal call to **Resolve** / **ResolveWithAssetInfo** would perform some sort of existence check on the Asset to return the **Resolved Path** successfully. But for new Assets it’s quite common that they might not exist, as they are still in the process of being created, but need to resolve to some different result than the Asset Identifier. The new **CreateIdentifierForNewAsset** / **ResolveForNewAsset** API allows for an **ArResolver** plugin to completely handle the creation of new Assets. **OmniUsdResolver (Ar 2.0)** does not do any existence checking but does make sure to completely resolve the URL, performing any necessary normalization.
### Search Paths
A concept that Pixar uses for its own Asset Resolution purposes is Search Paths. Search Paths are a special type of **Asset Path** that require a method of “searching” to find the actual Asset. They require an **Asset Path** to be authored in a special way, plus configuration that determines where these Search Paths will be searched for. The syntax to author a Search Path as an **Asset Path** is similar to a normal relative file path; it just requires that the **Asset Path** is authored **without** a `./` or `../` prefix. The other requirement is a list of paths for the Search Path to be “searched” against, which need to be set on the **ArResolver**. The method to set these paths is specific to the **ArResolver** implementation.
The **ArDefaultResolver** allows for these paths to be set from an environment variable (*PXR_AR_DEFAULT_SEARCH_PATH*) or creating the **ArDefaultResolverContext** directly.
> **Example of Asset Paths authored as Search Paths:**
> @vehicles/vehicle_a/asset.usd@
>
> **Set of paths that Search Paths will be configured to search against:**
> - /fast-storage-server/assets/
> - /normal-storage-server/assets/
> - /slow-storage-server/assets/
>
> **Search Path** *vehicles/vehicle_a/asset.usd* **will be searched for in the following order:**
> - /fast-storage-server/assets/vehicles/vehicle_a/asset.usd
> - /normal-storage-server/assets/vehicles/vehicle_a/asset.usd
> - /slow-storage-server/assets/vehicles/vehicle_a/asset.usd
**OmniUsdResolver** also supports Search Paths indirectly through the **Omniverse Client Library**. The paths used for “searching” are set explicitly by calling omniClientAddDefaultSearchPath (C++) or omni.client.add_default_search_path (Python).
The **OmniUsdResolver** will respect these configured paths when resolving a Search Path.
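The search order shown above can be sketched as a small resolver loop. This is illustrative only (the function name and the `exists` callback are ours, not Ar API): each configured prefix is tried in order, and the first candidate that exists wins.

```python
import posixpath

# Illustrative sketch (names are ours, not Ar API): resolve a Search Path by
# trying each configured prefix in order; the first existing candidate wins.
def resolve_search_path(search_path, configured_paths, exists):
    for prefix in configured_paths:
        candidate = posixpath.join(prefix, search_path)
        if exists(candidate):
            return candidate
    return ""  # unresolved

# Mimicking the example above: the asset only exists on the "normal" server.
available = {"/normal-storage-server/assets/vehicles/vehicle_a/asset.usd"}
configured = [
    "/fast-storage-server/assets/",
    "/normal-storage-server/assets/",
    "/slow-storage-server/assets/",
]
resolved = resolve_search_path("vehicles/vehicle_a/asset.usd",
                               configured, available.__contains__)
```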
## Look Here First Strategy
Search Paths were a formal concept in Ar 1.0 that required **ArResolver** implementations to acknowledge them regardless of whether they were supported. To make matters worse, the **Sdf** library in USD also had some of its own logic to handle a “Look Here First” approach for **Search Paths**. This “Look Here First” strategy would simply treat the **Search Path** as a normal relative file path and create an **Asset Identifier** from the **SdfLayer** containing the Search Path. This anchored **Asset Identifier** would then be resolved to determine existence, and if it did exist the “searching” was done.
For the most part the “Look Here First” strategy behaved as one would expect, since Search Paths have the appearance of a relative file path, but there were a couple of problems with it:
1. Asset Resolution was not entirely handled by the underlying **ArResolver**. The “Look Here First” resolution step was done in **Sdf** while the “searching” was handled in **ArResolver**. To do this the **ArResolver** API required methods for determining if an **Asset Path** was indeed a Search Path, regardless of whether they were supported.
2. It assumes a file-based asset management system being hosted on a really fast file server where latency isn’t too much of a concern. For cloud-based asset management systems latency is a much larger issue.
> **Going back to the example above with a Search Path of:**
> @vehicles/vehicle_a/asset.usd@
>
> **Authored in the following SdfLayer:**
> omniverse://server-a/scenes/scene.usd
>
> **Set of paths that Search Paths will be configured to search against:**
> - /fast-storage-server/assets/
> - /normal-storage-server/assets/
> - /slow-storage-server/assets/
>
> **The actual set of paths that Search Path** *vehicles/vehicle_a/asset.usd* **will be searched for:**
> - **omniverse://server-a/scenes/vehicles/vehicle_a/asset.usd**
> - /fast-storage-server/assets/vehicles/vehicle_a/asset.usd
> - /normal-storage-server/assets/vehicles/vehicle_a/asset.usd
> - /slow-storage-server/assets/vehicles/vehicle_a/asset.usd
**This “Look Here First” strategy is the core issue with MDL Paths for Omniverse in USD which has led to performance problems, bugs and confusion.**
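A hypothetical sketch of how the candidate list changes under this strategy (our names, not USD API): the layer-anchored path is consulted before any configured search prefix, which is exactly the extra lookup that causes trouble for remote layers.

```python
import posixpath

# Hypothetical sketch of the "Look Here First" candidate ordering (our names,
# not USD API): the search path is first anchored to the authoring layer, and
# only afterwards checked against the configured search prefixes.
def look_here_first_candidates(search_path, layer_identifier, configured_paths):
    anchored = posixpath.join(posixpath.dirname(layer_identifier), search_path)
    return [anchored] + [posixpath.join(p, search_path) for p in configured_paths]

candidates = look_here_first_candidates(
    "vehicles/vehicle_a/asset.usd",
    "omniverse://server-a/scenes/scene.usd",
    ["/fast-storage-server/assets/", "/normal-storage-server/assets/"],
)
# candidates[0] is the layer-relative path, i.e. the extra, often-failing
# lookup against the remote server described above.
```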
## MDL Paths
Before getting into how **OmniUsdResolver** works with both MDL Paths and Search Paths to ensure that everything resolves correctly, it is worth walking through a concrete example:
> Example of a core MDL module authored as a Search Path:
> ```
> @nvidia/core_definitions.mdl@
> ```
> Authored in the following SdfLayer:
> ```
> omniverse://server-a/scenes/vehicles/vehicle_a/asset.usd
> ```
> Set of paths that Search Paths will be configured to search against:
> ```
> /local-server/share/ov/pkgs/omni_core_materials/mdl/core/mdl/
> /local-server/share/ov/pkgs/omni_core_materials/mdl/core/Volume/
> /local-server/share/ov/pkgs/omni_core_materials/mdl/core/VRay/
> /local-server/share/ov/pkgs/omni_core_materials/mdl/core/Ue4/
> /local-server/share/ov/pkgs/omni_core_materials/mdl/core/Base/
> /local-server/share/ov/pkgs/omni_core_materials/mdl/rtx/iray/
> /local-server/share/ov/pkgs/omni_core_materials/mdl/rtx/
> ```
> The actual set of paths that MDL Path nvidia/core_definitions.mdl will be searched for:
> ```
> omniverse://server-a/scenes/vehicles/vehicle_a/nvidia/core_definitions.mdl
> /local-server/share/ov/pkgs/omni_core_materials/mdl/core/mdl/nvidia/core_definitions.mdl
> /local-server/share/ov/pkgs/omni_core_materials/mdl/core/Volume/nvidia/core_definitions.mdl
> /local-server/share/ov/pkgs/omni_core_materials/mdl/core/VRay/nvidia/core_definitions.mdl
> /local-server/share/ov/pkgs/omni_core_materials/mdl/core/Ue4/nvidia/core_definitions.mdl
> /local-server/share/ov/pkgs/omni_core_materials/mdl/core/Base/nvidia/core_definitions.mdl
> /local-server/share/ov/pkgs/omni_core_materials/mdl/rtx/iray/nvidia/core_definitions.mdl
> /local-server/share/ov/pkgs/omni_core_materials/mdl/rtx/nvidia/core_definitions.mdl
> ```
On the surface, everything with how the core MDL module nvidia/core_definitions.mdl is authored in USD seems fine. It’s a normal Search Path that uses the configured paths from the omni_core_materials package to search for the correct core module on disk. However, the fundamental problem is the Look Here First strategy that will always search for the core MDL module relative to where the Search Path is authored. In the example above this would be omniverse://server-a/scenes/vehicles/vehicle_a/nvidia/core_definitions.mdl which will fail to resolve as core MDL modules are intended to be a part of a library shared across applications. Now, why is this failed resolve such an issue?
1. In order to resolve omniverse://server-a/scenes/vehicles/vehicle_a/nvidia/core_definitions.mdl the Nucleus server hosted at server-a needs to be consulted to determine existence. This will introduce latency, based on proximity to the server, for something that will always fail to resolve.
2. OmniUsdResolver does not cache failed resolves which means that every core MDL module authored will be impacted by latency. The latency is based on the SdfLayer where the core MDL module is authored, for an SdfLayer that is hosted on a cloud-based asset management system like Nucleus this latency can really impact performance.
> OmniUsdResolver does not cache failed resolves as there is not a good way to determine cache invalidation. Doing so can lead to lots of undesirable issues such as restarting the process so a previous resolve can be recomputed. If failed resolves need to be cached, calling code can use ArResolverScopedCache to control the cache lifetime which will respect any failed resolves.
3. The number of materials using core MDL modules in a composed USD stage can be large. With a cloud-based asset management system the number of requests can flood the server causing slow-down on the server itself.
Now that there is a better description of the problem between MDL Paths and Search Paths, it’s a good time to look at how OmniUsdResolver handles it.
### OmniUsdResolver MDL Path Strategy
The way that OmniUsdResolver (Ar 1.0) must optimize for MDL Paths is much more involved than how OmniUsdResolver (Ar 2.0) needs to handle it. The reason for this is that the Look Here First strategy for Search Paths is codified in Sdf for Ar 1.0. In Ar 2.0, Sdf no longer makes that requirement and it’s up to the ArResolver implementation to enforce that or not. The focus will be on OmniUsdResolver (Ar 1.0) to describe the solution then compare that with how it has been improved in Ar 2.0 with OmniUsdResolver (Ar 2.0).
To optimize MDL Paths in OmniUsdResolver (Ar 1.0) we have the following requirements:
1. Core MDL modules should not be resolved relative to the SdfLayer they are authored in.
> From the example, completely eliminate the resolve call for omniverse://server-a/scenes/vehicles/vehicle_a/nvidia/core_definitions.mdl
2. Core MDL modules should resolve according to the configured list of paths to be “searched”.
- User-defined MDL modules authored as Search Paths (no `./` or `../` prefix) should still use the **Look Here First** strategy for backwards compatibility.
> This requirement may be dropped in the future as Asset Validation can update old Assets to correct these paths to normal file relative paths.
- MDL modules authored as normal file relative paths (prefixed with `./` or `../` ) should be anchored to the **SdfLayer** they are authored in.
- Avoid making changes to **Sdf** specific to MDL modules
To satisfy the first and third requirements, **OmniUsdResolver (Ar 1.0)** needs to be bootstrapped with the core MDL modules that should not be resolved relative to **SdfLayer** they are authored in. There are two ways that this can be done:
> Unfortunately, there is not a better way to do this as core MDL modules can be added, removed or even versioned as a whole from the package that hosts them. If the third requirement from above can be dropped this will no longer be necessary
1. By explicitly setting the list of MDL modules paths in **omniUsdResolverSetMdlBuiltins** which is declared in **OmniUsdResolver.h**
2. Through the environment variable `OMNI_USD_RESOLVER_MDL_BUILTIN_PATHS` which are the comma-separated MDL module paths.
> The explicit call to **omniUsdResolverSetMdlBuiltins** takes priority over the environment variable.
With the core MDL modules bootstrapped, **OmniUsdResolver (Ar 1.0)** uses these paths in **AnchorRelativePath**, **IsSearchPath**, and **IsRelativePath** to quickly determine if the incoming **Asset Path** matches one of these core MDL module paths. So when **Sdf** calls **AnchorRelativePath** with a core MDL module path it will return the path as-is, meaning that **Sdf** has no way to anchor a core MDL module path from the **SdfLayer** it is authored in. **IsSearchPath** will always return **false** when called with a core MDL module path but **true** when called with a user-defined MDL module path. This is to ensure that the third requirement from above works with the **Look Here First** strategy. Finally, **IsRelativePath** will also return **false** for core MDL modules paths to prevent any normalization in **Sdf**.
As convoluted as the logic is, the thing to remember is this: **OmniUsdResolver (Ar 1.0) needs to ensure that the core MDL module paths are returned as-is until ResolveWithAssetInfo is called**. **ResolveWithAssetInfo** is the process that will compute the **Search Path** against the configured list of paths to be “searched”. At this point the absolute path to the MDL module, wherever it is, should be returned.
**OmniUsdResolver (Ar 2.0)** greatly simplifies this process. First, **Sdf** no longer applies the **Look Here First** strategy, that is completely handled in **ArDefaultResolver**. This means that we only need to check if something is a core MDL module path in **CreateIdentifier**. If it is a core MDL module path **OmniUsdResolver (Ar 2.0)** will just return it as-is. **Sdf** will then use that as the **Asset Identifier** and resolve it as needed.
> If needed, all this logic can be turned on, or off, by setting the environment variable `OMNI_USD_RESOLVER_MDL_BUILTIN_BYPASS` to a truth-like, or false-like, value, e.g.:
- OMNI_USD_RESOLVER_MDL_BUILTIN_BYPASS=1
- OMNI_USD_RESOLVER_MDL_BUILTIN_BYPASS=0
- OMNI_USD_RESOLVER_MDL_BUILTIN_BYPASS=ON
- OMNI_USD_RESOLVER_MDL_BUILTIN_BYPASS=OFF
- OMNI_USD_RESOLVER_MDL_BUILTIN_BYPASS=TRUE
- OMNI_USD_RESOLVER_MDL_BUILTIN_BYPASS=FALSE
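As an illustration, the two environment variables might be set like this before launching a Kit-based application. The module names below are assumptions for the example, not an authoritative list of core MDL modules:

```shell
# Illustrative module names; the real core module list ships with the MDL package
export OMNI_USD_RESOLVER_MDL_BUILTIN_PATHS="OmniPBR.mdl,OmniGlass.mdl"
# Toggle the core MDL module handling described above (truth-like value)
export OMNI_USD_RESOLVER_MDL_BUILTIN_BYPASS=1
```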
## Troubleshooting MDL Paths
Due to all the complexity between **Search Paths** and **MDL Paths** problems do arise and it’s not always clear where the problem might be. It might be on the USD side when resolving the **MDL Path** or it might be on the MDL side where the **Search Paths** are configured. Either way, there are a couple easy things to do to at least see where the problem might be.
First would be to make sure the **Search Paths** are configured properly:
```python
import omni.client
# get all the configured search paths from omni.client
search_paths = omni.client.get_default_search_paths()
# print out the list of paths that will be searched when resolving a MDL Path
for search_path in search_paths:
print(search_path)
```
The output list of paths gives a starting point to see if the **MDL Path** is somewhere in those directories. If the **MDL Path** is not in any of those paths, it’s pretty safe to assume that the problem is on the configuration side of MDL.
Now if the **MDL Path** is in one of those paths the problem is more than likely on the USD side. That is still a pretty large space which could be narrowed down further. The next step would be to see if the **MDL Path** is being resolved correctly from **ArResolver**:
```python
from pxr import Ar
# assuming that we are running in an Omniverse Kit-based Application
# we get the UsdStage that the MDL Path is authored on
import omni.usd
stage = omni.usd.get_context().get_stage()
anchor_path = stage.GetRootLayer().resolvedPath
# get the configured ArResolver instance
resolver = Ar.GetResolver()
# create the asset identifier for the MDL Path
# we'll use the OmniPBR.mdl module as an example
asset_path = "OmniPBR.mdl"
asset_id = resolver.CreateIdentifier(asset_path, anchor_path)
# check to see if the MDL Path has been anchored or not
print(asset_id)
# verify that the MDL Path can be resolved
resolved_path = resolver.Resolve(asset_id)
print(resolved_path)
```
If everything looks to be behaving correctly (the **asset_id** is not anchored and the **resolved_path** is correct), the problem is probably not with resolving the **MDL Path**. The problem could be related to a load-order issue where rendering code resolves the **MDL Path** before all the **Search Paths** are configured. Regardless, it helps narrow down where the problem might be and possibly which teams to engage with.
Another way to observe what might be happening is to enable some **TfDebug** flags for **OmniUsdResolver**. Specifically, the **OMNI_USD_RESOLVER** flag outputs a lot of general information when **OmniUsdResolver** is invoked. The messages it writes to the console show the input it receives along with the output it produces. Sometimes enabling this **TfDebug** flag will quickly point out the issue.
See **Troubleshooting** for more details about **TfDebug** flags.
## Package-Relative Paths
Package-Relative Paths are a special form of **Asset Paths** that reference an Asset within another Asset. These paths are special in the fact that they only have meaning within an Asset whose underlying **SdfFileFormat** plugin supports packages. The most common **SdfFileFormat** plugin that supports packages is the **UsdzFileFormat** plugin.
In most cases, developers don’t need to deal with Package-Relative Paths. **Sdf** does the heavy-lifting when loading an Asset that is represented by a **SdfFileFormat** plugin which supports packages. Conversely, when saving the Asset utility functions are usually provided so the Asset is properly packaged. **UsdZipFileWriter** in **UsdUtils** is a perfect example of this.
> There is no limit to the level of nesting that a Package-Relative Path can represent. But directly authoring and parsing of these paths should be avoided. It’s encouraged to use the utility functions in **Ar** to handle any sort of interaction with these paths. See **ArIsPackageRelativePath**, **ArJoinPackageRelativePath**, **ArSplitPackageRelativePathOuter** and **ArSplitPackageRelativePathInner** in **<pxr/usd/ar/packageUtils.h>**
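To make the path form concrete, a usdz Package-Relative Path encloses the packaged Asset in square brackets (e.g. `set.usdz[chair.usd]`). Here is a small pure-Python sketch of splitting the outermost package path; this is an illustration, not the Ar implementation, and real USD code should prefer the utilities in **<pxr/usd/ar/packageUtils.h>**:

```python
def split_package_relative_path_outer(path: str):
    """Split "outer.usdz[inner/asset.usd]" into its outer and inner parts.

    Sketch of what ArSplitPackageRelativePathOuter does; paths without a
    package portion are returned unchanged with an empty inner part.
    """
    if path.endswith("]") and "[" in path:
        outer, inner = path.split("[", 1)
        return outer, inner[:-1]
    return path, ""

# the inner part may itself be package-relative when packages are nested
outer, inner = split_package_relative_path_outer(
    "/scenes/set.usdz[props/chair.usdz[chair.usd]]"
)
# outer == "/scenes/set.usdz", inner == "props/chair.usdz[chair.usd]"
```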
## Reading Assets
Reading Assets is a very important role for an **ArResolver** plugin. When an **ArResolver** plugin returns a **Resolved Path**, that path may, or may not, point to a file on disk. Regardless, the Asset pointed to by the **Resolved Path** needs to be read into memory in order for USD to load the Asset. **ArResolver** provides API for opening that **Resolved Path** via **OpenAsset**, which returns a handle, so to speak, to an **ArAsset**. The **ArAsset** abstraction is the API that USD will use for reading that Asset into memory.
> The API around reading Assets in **ArResolver** is mostly the same from Ar 1.0 to Ar 2.0, so no special distinction will be made between **OmniUsdResolver (Ar 1.0)** and **OmniUsdResolver (Ar 2.0)**.
**OmniUsdResolver** supports both file paths and URLs which can live on Nucleus or HTTP. How the Asset will be opened for reading depends on what the **Resolved Path** points to. When the **Resolved Path** points to a normal file path, **OmniUsdResolver** will open the Asset for reading with **ArFilesystemAsset**. But when the **Resolved Path** points to a URL, hosted on either Nucleus or HTTP, **OmniUsdResolver** will use **OmniUsdAsset** to read the Asset. Ultimately, the caller to **OpenAsset** will read the Asset in the same way since both **ArFilesystemAsset** and **OmniUsdAsset** are implemented via the **ArAsset** abstraction.
**OmniUsdAsset** provides efficient reading of Assets hosted on Nucleus or HTTP. To optimize performance with USD, **OmniUsdAsset** will download the content of the Asset to a file on disk and return a memory-mapped buffer for that file. The local file on disk serves multiple purposes:

1. As a caching mechanism so reads of the same Asset are not re-downloaded
2. OS-level support for memory-mapped files
3. Reduced traffic to Nucleus or HTTP with subsequent reads of the Asset

A trivial example for reading an Asset hosted on Nucleus would be:
> The **ArResolver** API for reading (**OpenAsset**) and writing (**OpenAssetForWrite**) Assets is only available in C++.
```cpp
ArResolver& resolver = ArGetResolver();
const std::string assetId = "omniverse://server-a/scenes/vehicles/vehicle_a/asset_metadata.dat";
std::shared_ptr<ArAsset> asset = resolver.OpenAsset(resolver.Resolve(assetId));
if (asset) {
    // allocate a writable buffer to read the Asset data into
    const size_t numBytesToRead = asset->GetSize();
    std::unique_ptr<char[]> buffer(new char[numBytesToRead]);
    size_t numBytes = asset->Read(buffer.get(), numBytesToRead, 0);
    if (numBytes == 0) {
        TF_RUNTIME_ERROR("Failed to read asset");
        return;
    }
    // buffer should now contain numBytes chars read from the resolved path
}
```
## Opening UsdStage
Now that there is a better understanding of the concepts that apply to Asset Resolution in USD, its a good time to look
at the various APIs that go into opening a UsdStage, eg.
<cite>
UsdStage::Open()
:
```mermaid
sequenceDiagram
autonumber
actor Alice
Alice->>UsdStage: Open(assetPath)
UsdStage->>SdfLayer: FindOrOpen(assetPath)
SdfLayer->>ArResolver: ArGetResolver()
ArResolver-->>SdfLayer: resolver
SdfLayer->>ArResolver: CreateIdentifier(assetPath)
ArResolver-->>SdfLayer: assetIdentifier
SdfLayer->>ArResolver: Resolve(assetIdentifier)
ArResolver-->>SdfLayer: resolvedPath
SdfLayer->>SdfFileFormat: FindByExtension(resolvedPath)
SdfFileFormat->>ArResolver: GetExtension(resolvedPath)
ArResolver-->>SdfFileFormat: extension
SdfFileFormat-->>SdfLayer: fileFormat
loop TryToFindLayer
SdfLayer->>SdfLayer: Check for Matching Layer
end
SdfLayer-->>UsdStage: layer
UsdStage-->>Alice: stage
SdfLayer->>SdfFileFormat: fileFormat->NewLayer(assetIdentifier, resolvedPath)
SdfFileFormat-->>SdfLayer: layer
SdfLayer->>SdfFileFormat: fileFormat->Read(layer, resolvedPath)
SdfFileFormat->>ArResolver: resolver->OpenAsset(resolvedPath)
ArResolver-->>SdfFileFormat: asset
SdfFileFormat-->>SdfLayer: layer
SdfLayer-->>UsdStage: layer
UsdStage->>ArResolver: CreateDefaultContextForAsset(assetIdentifier) | CreateDefaultContext()
ArResolver-->>UsdStage: resolverContext
UsdStage->>ArResolver: BindContext(resolverContext)
UsdStage-->>Alice: stage
```
The diagram above is not exhaustive in the sense of showing all the different edge cases and internal APIs called. But it shows the sequence of events and the main collaboration between the **Usd**, **Sdf**, and **Ar** APIs.
## Writing Assets
Writing Assets is also a very important role for an **ArResolver** plugin but was not easily done in Ar 1.0. Ar 2.0 acknowledged this deficiency and added direct support to the **ArResolver** API. **OpenAssetForWrite** and **ArWritableAsset** are the equivalent APIs for writes as **OpenAsset** and **ArAsset** are for reads. **OmniUsdResolver (Ar 2.0)** provides the **OmniUsdWritableAsset** implementation of **ArWritableAsset** to write Assets to Nucleus (we don’t support writing to HTTP). Similar to **OpenAsset**, **OpenAssetForWrite** will use **ArFilesystemWritableAsset** for **Resolved Paths** that point to a normal file path.
To keep things fast and efficient, **OmniUsdWritableAsset** does not write directly to the remote host. Instead, a temporary file will be opened for writing when **OpenAssetForWrite** is called. All subsequent writes through **OmniUsdWritableAsset** will write to this temporary file, which is then moved to the remote host when the Asset is closed via **Close**. The process and API are quite simple for writing Assets and are a welcome addition to support content that is hosted remotely on services such as Nucleus.
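The write-through-a-temporary-file pattern can be sketched generically in Python. This is an illustration of the approach, not the actual OmniUsdWritableAsset implementation, and the class name is made up:

```python
import os
import tempfile

class TempBackedWritableAsset:
    """Write to a local temp file; publish to the destination on close.

    Mirrors the pattern described above: writes stay fast and local, and
    the result is moved to its final location only when the asset closes.
    """

    def __init__(self, final_path: str):
        self._final_path = final_path
        fd, self._tmp_path = tempfile.mkstemp()
        self._file = os.fdopen(fd, "r+b")

    def write(self, data: bytes, offset: int) -> int:
        # seek to the requested offset and return the number of bytes written
        self._file.seek(offset)
        return self._file.write(data)

    def close(self) -> bool:
        self._file.close()
        # the real resolver uploads/moves the temp file to the remote host here
        os.replace(self._tmp_path, self._final_path)
        return True
```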
> **OmniUsdResolver (Ar 1.0)** did support writing Assets to Nucleus but it was a very round-about way to do so. Writes were handled by redirecting most file formats through **OmniUsdWrapperFileFormat**. See **OmniUsdWrapperFileFormat Overview** for details on how **OmniUsdWrapperFileFormat** works.
An area where writing Assets deviates from reading Assets is with checking for write permission. It’s not uncommon to lock an Asset to prevent accidental writes. The **ArResolver** API exposes a method that an **ArResolver** plugin can implement to properly check write permissions on an Asset before any writes take place. **CanWriteAssetToPath** / **CanWriteLayerToPath** are implemented in **OmniUsdResolver** to check write permissions on an Asset and optionally report back any reason why an Asset can not be written to.
> **OmniUsdResolver** tries to be as robust as possible when checking write permissions on an Asset, handling edge cases such as trying to write a file to a channel or writing a file underneath a directory that has been locked. The reason why a write can not occur for a given Asset can be obtained by the caller.
A trivial example for writing an Asset to Nucleus would be:
> The **ArResolver** API for reading (**OpenAsset**) and writing (**OpenAssetForWrite**) Assets are only available in C++.
```cpp
ArResolver& resolver = ArGetResolver();
const std::string assetId = "omniverse://server-a/scenes/vehicles/vehicle_a/asset_metadata.dat";
// assume we are writing to a new Asset
const ArResolvedPath resolvedPath = resolver.ResolveForNewAsset(assetId);
// before writing any data check that we have permission to write the Asset
std::string reason;
if (!resolver.CanWriteAssetToPath(resolvedPath, &reason)) {
    TF_RUNTIME_ERROR("Unable to write asset %s, reason: %s",
                     resolvedPath.GetPathString().c_str(), reason.c_str());
    return;
}
// create a buffer that contains the data we want to write
const size_t bufferSize = 4096;
std::unique_ptr<char[]> buffer(new char[bufferSize]);
// put some data into the buffer
const std::string data = "some asset data";
memcpy(buffer.get(), data.c_str(), data.length());
// open the asset for writing
auto writableAsset = resolver.OpenAssetForWrite(resolvedPath, ArResolver::WriteMode::Replace);
if (writableAsset) {
    // write the data from our buffer to the Asset
    const size_t numBytes = writableAsset->Write(buffer.get(), data.length(), 0);
    if (numBytes == 0) {
        TF_RUNTIME_ERROR("Failed to write asset");
        return;
    }
    // close out the asset to indicate that all data has been written
    bool success = writableAsset->Close();
    if (!success) {
        TF_RUNTIME_ERROR("Failed to close asset");
        return;
    }
}
```
## Creating UsdStage
Similar to what was examined with [Reading Assets](#reading-assets), it’s also a good point to look at the different APIs that come into play when dealing with writes. For example when calling `UsdStage::CreateNew()`:
```mermaid
sequenceDiagram
autonumber
actor Bob
Bob->>UsdStage: CreateNew(assetPath)
UsdStage->>SdfLayer: CreateNew(assetPath)
SdfLayer->>ArResolver: ArGetResolver()
ArResolver-->>SdfLayer: resolver
SdfLayer->>ArResolver: CreateIdentifierForNewAsset(assetPath)
ArResolver-->>SdfLayer: assetIdentifier
SdfLayer->>ArResolver: ResolveForNewAsset(assetIdentifier)
ArResolver-->>SdfLayer: resolvedPath
SdfLayer->>SdfFileFormat: FindByExtension(resolvedPath)
SdfFileFormat->>ArResolver: GetExtension(resolvedPath)
ArResolver-->>SdfFileFormat: extension
SdfFileFormat-->>SdfLayer: fileFormat
loop IsPackageOrPackagedLayer
SdfLayer->>SdfLayer: Prevent Creating Package Layers
end
SdfLayer-->>UsdStage: Null Layer (IsPackage)
UsdStage-->>Bob: Null Stage
SdfLayer->>SdfFileFormat: NewLayer(assetIdentifier, resolvedPath)
SdfFileFormat-->>SdfLayer: layer
SdfLayer->>ArResolver: CanWriteAssetToPath(resolvedPath)
SdfLayer->>SdfFileFormat: WriteToFile(layer, resolvedPath)
SdfFileFormat->>ArResolver: OpenAssetForWrite(resolvedPath)
ArResolver-->>SdfFileFormat: asset
SdfFileFormat-->>SdfLayer: layer
SdfLayer-->>UsdStage: layer
UsdStage->>ArResolver: CreateDefaultContextForAsset(assetIdentifier) | CreateDefaultContext()
ArResolver-->>UsdStage: resolverContext
UsdStage->>ArResolver: BindContext(resolverContext)
UsdStage-->>Bob: stage
```
If compared against opening a **UsdStage**, the call sequence for creating a **UsdStage** isn't that different. The main APIs are still **Usd**, **Sdf**, and **Ar**, but there are a few differences. Specifically, the calls to **CreateIdentifierForNewAsset** / **ResolveForNewAsset** and the use of the **OpenAssetForWrite** / **ArWritableAsset** APIs.
## Initialization
The initialization of the **ArResolver** system is not overly involved, but it is a common source of problems. If USD tries to load an Asset that requires a specific **ArResolver** plugin and that plugin can not be found, USD has no way to load the Asset. Most of the time this is related to the underlying Plugin System in USD and the nature of how it loads plugins in **PlugRegistry**. But before getting into all that, it's good to understand the different pieces that come into play for the **ArResolver** system.
## Primary Resolvers
Primary resolvers are in charge of dealing with the bulk of asset resolution within Ar. There can be only one Primary resolver active for a given USD environment. If no suitable Primary resolver can be found, Ar will use its own **ArDefaultResolver** as the Primary resolver.
In the first iteration of **ArResolver** (Ar 1.0), any implementation would be considered a Primary resolver. If your **ArResolver** plugin supported multiple URI schemes, as is the case with **OmniUsdResolver**, there was nothing preventing that. But even if your **ArResolver** was very specific and only supported a single URI scheme, it would still need to be configured as a Primary resolver. This caused problems when multiple resolver implementations needed to coexist in the same USD environment. Ar 2.0 added support for URI resolvers to address this problem.
## URI Resolvers
Much as the name implies, URI Resolvers are **ArResolver** plugins that support specific URI schemes. These types of resolvers are configured in plugInfo.json to specify the URI scheme(s) they support. Internally, Ar will inspect the asset path for a scheme and dispatch the calls to the corresponding URI resolver. If no matching URI Resolver can be found (e.g. the asset path is a normal file path), the Primary resolver will be used. This change in Ar 2.0 made support for multiple resolvers much easier: an **ArResolver** plugin can just specify the URI schemes it supports, and it will work alongside any other **ArResolver** plugin. It's important to point out that a lot of **ArResolver** plugins in Ar 1.0 supported multiple URI schemes but were developed as Primary Resolvers. For instance, **OmniUsdResolver** is a Primary Resolver but supports the “omniverse://”, “omni://”, “https://”, and “file://” URI schemes. This is an important fact when configuring multiple resolvers in the same USD environment.
### URI Resolver Support in OmniUsdResolver (Ar 1.0)
As an intermediate step in the transition to Ar 2.0, **OmniUsdResolver (Ar 1.0)** has limited support for URI Resolvers. The API for Ar 2.0 is quite different, but the underlying Plugin System that Ar uses to load different **ArResolver** plugins is the same. Because of this, **OmniUsdResolver (Ar 1.0)** inspects the **PlugRegistry** for other **ArResolver** plugins that have been defined in the environment. For all **ArResolver** plugins that declare a “uriSchemes” field in their plugInfo.json, **OmniUsdResolver (Ar 1.0)** will keep a mapping of the URI scheme to the actual loaded plugin. When the various parts of the **OmniUsdResolver (Ar 1.0)** API are invoked, the mapping of **URI Resolvers** will be checked first, and if a matching **URI Resolver** is found that call will be dispatched to the corresponding **URI Resolver**.
> To support **URI Resolvers** in Ar 1.0, **OmniUsdResolver** must be set as the **Preferred Resolver** in the environment. During the initialization of **ArResolver**, Ar 1.0 makes no distinction between **Primary Resolvers** and **URI Resolvers**.
## Package Resolvers
Package Resolvers are a less common form of Asset Resolution in USD, but they are very important for certain **SdfFileFormat** plugins. The most well-known **SdfFileFormat** plugin that requires a Package Resolver is the **UsdzFileFormat** plugin. The **UsdzFileFormat** is an uncompressed archive of Assets laid out according to the Zip file specification. This allows multiple Assets to be packaged together into a single .usdz file that is easy to transport.
Unlike the public API to **ArResolver**, which can be easily obtained by calling **ArGetResolver()**, there is no public access to Package Resolvers. A Package Resolver only has meaning to the **SdfFileFormat** plugin that it is associated with. For this reason, **ArResolver** will handle the appropriate calls during Asset Resolution to the corresponding Package Resolver.
> Package Resolvers are associated with their corresponding **SdfFileFormat** plugin via extension. The extensions are declared through their plugInfo.json.
## Preferred Resolver
Configuration for multiple **ArResolver** plugins can be done in a couple different ways. The important thing to remember is that one **ArResolver** will need to serve as the Primary Resolver. At a minimum, the **ArDefaultResolver** will be the Primary Resolver if no **ArResolver** plugin is suitable. In an environment with multiple **ArResolver** plugins there is no direct way to set the Primary Resolver; one can only “hint” at what plugin should serve as the Primary Resolver. This “hint” can be set via **ArSetPreferredResolver()** with the type name of the **ArResolver** plugin that is declared in plugInfo.json. If **ArSetPreferredResolver()** is called multiple times, the Primary Resolver set last will be used.
> The type name used in **ArSetPreferredResolver()** should match the name used for defining the **ArResolver** **TfType**. For example, with **OmniUsdResolver** we define the **TfType** with **AR_DEFINE_RESOLVER(OmniUsdResolver, ArResolver)**. As such, the **OmniUsdResolver** must be hinted at with **ArSetPreferredResolver(“OmniUsdResolver”)**.
In Ar 2.0, an **ArResolver** plugin is determined to be a Primary Resolver by the absence of a “uriSchemes” field in its plugInfo.json. In an environment with multiple **ArResolver** plugins that can be a Primary Resolver, the one chosen will be determined in one of two ways:
1. By the last **ArResolver** plugin specified through **ArSetPreferredResolver()** before the first call to **ArGetResolver()**. So it's important that **ArSetPreferredResolver()** is called during application startup.
> **ArSetPreferredResolver()** can only be set with an **ArResolver** plugin that satisfies the Primary Resolver requirement. So any **ArResolver** plugin that declares a “uriSchemes” field in its plugInfo.json can not be set as the Primary Resolver.
2. If no **ArResolver** plugin is specified through **ArSetPreferredResolver()**, it will be determined by the first one found in the alphabetically sorted list from **ArGetAvailableResolvers()**.
## Bootstrapping ArResolver
The call to **ArGetResolver()** is the entry point to any Asset Resolution in USD. Even libraries within USD that sit above the Ar library initialize **ArResolver** in this way. But as simple as it may seem, **ArGetResolver()** does perform a lot of work to initialize the **ArResolver** instance that it returns. It requires the following steps to initialize properly:
1. Load all Primary Resolver plugins and identify which plugin will serve as the Primary Resolver. See [Preferred Resolver](#preferred-resolver) for how this is determined.
2. Fall back to the **ArDefaultResolver** if no Primary Resolver satisfies the criteria.
3. Set the identified Primary Resolver as the underlying resolver for the entire process.
4. Load all URI Resolver plugins, uniquely mapping their declared “uriSchemes” to each loaded plugin.
5. Initialize all Package Resolvers, indexed according to the supported extensions.
Once all these plugins have been loaded and a Primary Resolver identified, an **ArResolver** instance can be returned from **ArGetResolver()**. At this point, the Asset Resolution system has been initialized and USD can begin resolving Assets.
## Problems with Initialization
A common problem is that **any ArResolver plugins registered with PlugRegistry after this first call to ArGetResolver will not be loaded**. The reason that it needs to load all these plugins upon the first call to `ArGetResolver()` is due to the nature of how `PlugRegistry` loads plugins. The first time that a plugin type needs to be loaded `PlugRegistry` will load all plugins, and any dependencies those plugins may have, derived from that type. Once those plugins are loaded any plugins registered with `PlugRegistry`, deriving from the same plugin type, will not be recognized. **This is a problem with all plugins not just ArResolver plugins**! `UsdSchemaRegistry` and `SdfFileFormatRegistry` have the same issue. So, it’s **really** important to be very mindful of calling `ArGetResolver()` within application startup.
## Troubleshooting
Identifying an initialization problem with `ArResolver` can be challenging. The problem can be as simple as some recently added startup code making a call to `ArGetResolver()` to inspect Assets. Any `ArResolver` plugins registered after this new startup code will just fail to load the Assets they support. When this happens it’s hard to identify the exact problem as it will seem that the `ArResolver` plugins themselves have an issue. The best way to identify a problem with initialization for `ArResolver` is to use `TfDebug` flags that are provided for both `ArResolver` and `OmniUsdResolver`.
> Without going into too much detail, `TfDebug` flags are a great way to turn on additional diagnostics at run-time. Depending on their usage within a library, `TfDebug` flags can provide a lot of information without going through the, sometimes lengthy, process of setting up a debug environment and stepping through code. They can be turned on via an environment variable or Python.
`ArResolver` provides one `TfDebug` flag, **AR_RESOLVER_INIT**, which writes additional information to the console when `ArGetResolver()` is called. This lists information like the Primary Resolvers, URI Resolvers, and Package Resolvers discovered, the Preferred Resolver that was set, and whether a Primary Resolver was found that matches it. The information that the `TfDebug` flag **AR_RESOLVER_INIT** outputs is extremely helpful for understanding how the returned `ArResolver` instance from `ArGetResolver()` was initialized. For a lot of initialization problems it becomes clear that an expected `ArResolver` plugin wasn't discovered or that there was a problem loading the plugin. If an `ArResolver` plugin wasn't discovered, it means that it needs to be registered earlier in the startup process or any startup code calling `ArGetResolver()` might need to be deferred.
In a similar manner, `OmniUsdResolver` also provides multiple `TfDebug` flags to troubleshoot resolves. These aren't as useful for initialization problems but can really help identify problems with resolving Assets. The following `TfDebug` flags are available for `OmniUsdResolver`:
| TfDebug Flag | Description |
|--------------|-------------|
| OMNI_USD_RESOLVER | OmniUsdResolver general resolve information |
| OMNI_USD_RESOLVER_CONTEXT | OmniUsdResolver Context information |
| OMNI_USD_RESOLVER_MDL | OmniUsdResolver MDL specific resolve information |
| OMNI_USD_RESOLVER_ASSET | OmniUsdResolver asset read / write information |
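These flags, and the **AR_RESOLVER_INIT** flag above, can be enabled through the `TF_DEBUG` environment variable before launching a process; the particular flag combination below is just an illustration:

```shell
# Space-separated list of TfDebug symbols to enable for child processes
export TF_DEBUG="AR_RESOLVER_INIT OMNI_USD_RESOLVER OMNI_USD_RESOLVER_MDL"
```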
# OmniUsdResolver Overview
## OmniUsdResolver
The main entry point to resolving assets within Omniverse for USD. `OmniUsdResolver` is transparently integrated into USD by deriving from and implementing the ArResolver API. It is not intended to be invoked directly but rather through the `PlugRegistry` system within USD which is declared via its “plugInfo.json” metadata. Any calls to deal with resolving assets should go through the ArResolver API which can be obtained via `ArGetResolver()`.
`OmniUsdResolver` supports a wide variety of USD versions along with different “flavors” for those versions. This range of builds for so many versions of USD requires support for both ArResolver 1.0 and ArResolver 2.0 APIs. The version of ArResolver to use is determined at build-time, via the USD dependency, and will appropriately build the corresponding `OmniUsdResolver`. Documentation specific to a particular ArResolver version of `OmniUsdResolver` will be suffixed by the ArResolver version. For example, details about `OmniUsdResolver` for Ar 2.0 will be `OmniUsdResolver (Ar 2.0)`.
## ResolverHelper
The ArResolver API is quite different between Ar 1.0 and Ar 2.0 versions. But the main concepts of creating an identifier and resolving assets are mostly similar for the underlying implementation. For this reason the `ResolverHelper` was added as a simple utility class that provides shared functions for both `OmniUsdResolver (Ar 1.0)` and `OmniUsdResolver (Ar 2.0)`.
## OmniUsdResolverCache
The `OmniUsdResolverCache` is a very simple key-value store caching mechanism that can be used to cache scoped areas of code where assets are frequently resolved. Internally, the `OmniUsdResolverCache` uses a `tbb::concurrent_hash_map` to cache the resolved results to ensure thread-safety. This should allow the `ArResolverScopedCache` to be used across threads. As with most things in Ar, the `OmniUsdResolverCache` is created indirectly through the `ArResolverScopedCache` which uses RAII to control lifetime of the cache.
> It is the responsibility of the caller to scope caching accordingly. Without explicitly creating a `ArResolverScopedCache`, `OmniUsdResolver` will not cache its computed results, although `client-library` may perform some of its own caching. This is important for cache misses as the results of those failed resolves will be cached within the duration of the `ArResolverScopedCache` lifetime. The benefit of scoped caches is that the caller controls cache invalidation.
## OmniUsdResolver (Ar 2.0)
The `OmniUsdResolver` implementation that implements the Ar 2.0 `ArResolver` abstraction. It is intended to be used as a primary `ArResolver` but it can also be configured as a URI `ArResolver`. Since the `OmniUsdResolver` ultimately calls `client-library` it supports all protocols that `client-library` supports. Currently, the following protocols are supported:
1. omniverse://, omni:// (Nucleus)
2. http://, https:// (Web, with extra support for S3)
3. file:// (File URI)
4. POSIX (Linux file paths)
5. Windows (Windows file paths)
## OmniUsdResolverContext
**OmniUsdResolverContext** is an **ArResolverContext** that simply stores the base asset path. This base asset path is usually, but not always, the root layer associated with the **UsdStage** that bound the context. The **OmniUsdResolverContext** is intentionally kept pretty bare-bones to prevent assets from resolving differently based on the bound context. Contextual information stored within a **ArResolverContext** is not persisted within scene description which can make it difficult to reproduce the same asset without the exact same context.
## OmniUsdAsset
Implements the **ArAsset** interface which is required for reading the binary data from resolved Nucleus assets. An instance is obtained via **OpenAsset()** on the **ArResolver** abstraction, or more succinctly **ArGetResolver().OpenAsset()**.
## OmniUsdWritableAsset
Implements the **ArWritableAsset** interface which is required for writing binary data to resolved Nucleus assets. An instance is obtained via **OpenAssetForWrite()** on the **ArResolver** abstraction, or more succinctly **ArGetResolver().OpenAssetForWrite()**.
## OmniUsdResolver (Ar 1.0)
The **OmniUsdResolver** implementation that implements the Ar 1.0 **ArResolver** abstraction. Since the **OmniUsdResolver** ultimately calls **client-library** it supports all protocols that **client-library** supports.
The first iteration of the Ar API, Ar 1.0, was initially developed to only separate Pixar-specific implementation details from USD. This resulted in an API that was not ideal for all different types of asset-management systems. As such, most **ArResolver** Ar 1.0 implementations required their own level of “hacks” to properly hook up the underlying asset-management system. **OmniUsdResolver (Ar 1.0)** also suffers from this, and it is visible with things like **OmniUsdWrapperFileFormat** and MDL Paths.
## OmniUsdWrapperFileFormat
A **SdfFileFormat** plugin whose sole purpose is to “wrap” other **SdfFileFormat** plugins to fix the process of reading / writing USD **SdfLayer** to / from Omniverse.
This **SdfFileFormat** plugin is only used for **OmniUsdResolver (Ar 1.0)**. Since the same “plugInfo.json” is used to declare the **OmniUsdResolver** plugin for Ar 1.0 and Ar 2.0, the **OmniUsdWrapperFileFormat** will be accessible by **PlugRegistry** for Ar 2.0 builds of **usd-resolver** but will not be used. | 5,610 |
REST_API.md | # REST API
The service has two `HTTP POST` endpoints - Request and Handle.
## Request
Use this endpoint to send requests to the service.
| Property | Type | Description |
|----------|------|-------------|
| source_url | string | The USD stage to process, or a folder containing usd files. |
| destination_url | string | The location to write the optimized stage. It can be a fully specified path or a relative filename scheme. If only a scheme is specified it will be placed in the same directory as the source. It can contain special tokens from the source URL; these include `{stem}` (source filename without extension) and `{suffix}` (file extension including ‘.’). For example, given a `source_url` of `file://foo/bar.usd`, if you want to save `bar.optimized.usd` in the same folder then you would specify `{stem}.optimized{suffix}`. Default behavior is to export a flattened result (see `export` below). To overwrite the source and any modified layers in place (with checkpoints), specify this destination_url so that it resolves the same as the source, e.g. `{stem}{suffix}`. For this special case the `export` flag is ignored. |
| recurse | boolean | If a folder is provided as a source, process all files in the folder. |
| command | string | Application or command to be executed by the job. |
| config_url | string | Url of the scene optimizer JSON config that describes the optimization stack of processes to run. It can also contain tokens from the source_url; for example, if you are processing `file://foo/bar.usd` and wish to use a sidecar config `bar.json` you could configure this as `{stem}.json`. |
| export | boolean | If true (default), export the optimized result as a flattened stage. Otherwise, perform a regular save_as. This flag is ignored if the source_url and destination_url resolve to the same location, in which case the source and its referenced layers will be overwritten (with checkpoints if supported). |
| sync | boolean | If true, process the request immediately and don’t return until it is complete. Otherwise, add the job to the internal work queue (default). |
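The `{stem}` and `{suffix}` token expansion described for `destination_url` can be sketched in Python. This is purely illustrative — the service performs this resolution internally, and the function name here is invented:

```python
from pathlib import PurePosixPath

def resolve_destination(source_url: str, destination_scheme: str) -> str:
    """Expand {stem}/{suffix} tokens against the source URL's filename."""
    # Split off the protocol prefix (e.g. "file://", "omniverse://") if present
    path = PurePosixPath(source_url.split("://", 1)[-1])
    name = destination_scheme.format(stem=path.stem, suffix=path.suffix)
    # A bare filename scheme resolves into the same directory as the source
    return str(path.with_name(name))

print(resolve_destination("file://foo/bar.usd", "{stem}.optimized{suffix}"))
# foo/bar.optimized.usd
```

Note that a scheme of `{stem}{suffix}` resolves to the source itself, which triggers the overwrite-in-place behavior described above.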
### Response
| Property | Type | Description |
|----------|------|-------------|
| status | string | Response message indicating whether the request was computed successfully. |
## Handle
Handle for events from the Nucleus Transport service.
| Property | Type | Description |
|----------|------|-------------|
| status | str | Status, OK if this is a specific event, otherwise an indicator of the state of the connection or interaction |
| ts | dict | A dictionary of timestamps |
| event | str | Event type, e.g. full, create, delete, rename, copy, etc. |
| entry | dict | Data about the event including path, branch, timestamps, etc (see below). |
#### Example of “entry” dict as provided by Nucleus transport service.
```json
{
"status": "OK",
"ts": {
"omni_server_out_ts": 1669143047098863
},
"entries": [
{
"path": "/example.usda"
}
]
}
``` | 3,304 |
Roadmap.md | # Roadmap
As is usual in software development we have more ideas than time to implement them. This is a collection of such ideas that we would like to be able to implement. Addressing these will be prioritized as resources and interests dictate.
- When a USD file containing OmniGraph nodes is loaded, automatically load the extensions in which those nodes are implemented
- Replace the Node Description Editor with something more modern that has functionality on parity with .ogn files
- Add support for arrays of bundles
- Add support for arrays of strings
- Add the ability for bundle attributes to contain “any” or “union” type attributes
- Add the ability to put “any” or “union” type attributes in bundles
- Allow implementations of a node that work on both CPU and GPU
- Add the ability to easily create Python node type definitions at runtime
## Pending Deprecations
As described in [OmniGraph Versioning](#omnigraph-versioning) we attempt to provide backward compatibility for as long as we can. That notwithstanding it will always be a smoother experience for you if you migrate away from using deprecated functionality at your earliest convenience to ensure maximum compatibility. | 1,187 |
RunningOneScript.md | # Running Minimal Kit With OmniGraph
The Kit application at its core is a basic framework for plugging in extensions with a common communication method. We can take advantage of this to run a script that works with OmniGraph without pulling in the entire Kit overhead.
## Running With Python Support
The most user-friendly approach to running a minimal version of Kit with OmniGraph is to make use of the `omni.graph` extension, which adds Python bindings and scripts to the OmniGraph core.
Your extension must have a dependency on `omni.graph` using these lines in your `extension.toml` file:
```toml
[dependencies]
"omni.graph" = {}
```
Let’s say your extension, `omni.my.extension`, has a single node type `MyNodeType` that when executed will take a directory path name as input and will print out the number of files and total file size of every file in that directory. This is a script that will set up and execute an OmniGraph that creates a node that will display that information for the Omniverse Cache directory.
```python
import carb
import omni.graph.core as og
import omni.usd
# This is needed to initialize the OmniGraph backing
omni.usd.get_context().new_stage()
# Get the cache directory to be examined
cache_directory = carb.tokens.get_tokens_interface().resolve("${omni_cache}")
# Create a graph with the node that will print out the cache directory contents
_ = og.Controller.edit("/CacheGraph", {
og.Controller.Keys.CREATE_NODES: ("MyNode", "omni.my.extension.MyNodeType"),
og.Controller.Keys.SET_VALUES: ("MyNode.inputs:directory", cache_directory)
})
# Evaluate the node to complete the operation
og.Controller.evaluate_sync()
```
If this script is saved in the file `showCache.py` then you run this from your Kit executable directory:
```sh
$ ./kit.exe --enable omni.my.extension --exec showCache.py
C:/Kit/cache contained 123 files with a total size of 456,789 bytes
```
> **Note**: Running with only `omni.graph` enabled will work, but it is just a framework and has no nodes of its own to execute. That is why you must enable your own extension. You might also want to enable other extensions such as `omni.graph.nodes` or `omni.graph.action` if you want to access the standard set of OmniGraph nodes.
## Running With Just The Core
If you have an extension that uses the C++ ABI to create and manipulate an OmniGraph you can run Kit with only your extension enabled, executing a script that will trigger the code you wish to execute.
Your extension must have a dependency on `omni.graph.core` using these lines in your `extension.toml` file:
```toml
[dependencies]
"omni.graph.core" = {}
```
You can then run your own script `setUpOmniGraphAndEvaluate.py` that executes your C++ code to create and evaluate the graph in a way similar to the above, but using the C++ ABI, with the same command line:
```sh
> ./kit.exe --enable omni.my.extension --exec setUpOmniGraphAndEvaluate.py
``` | 2,998 |
RuntimeInitialize.md | # Initializing Attributes to Non-Default Values
Normally you will specify attribute default values in your .ogn files and the attributes will be given those values when the node is created. Occasionally, you may wish to provide different default values for your attributes based on some condition that can only be ascertained at runtime. This document describes the current best-practice for achieving that goal, in both C++ and Python nodes.
As of this writing the database cannot be accessed during the node’s `initialize()` method, so providing new values in there will not work. (If in the future that changes this document will be updated to explain how to use it instead.)
In fact, the only time the data is guaranteed to be available to use or set is in the node’s `compute()` method, so that will be used to set up a delayed initialization of attribute values.
The general approach to this initialization will be to use a boolean state value in the node to determine whether the attribute or attributes have been given their initial values or not when the `compute()` method is called. It’s also possible that the attribute would have been given a value directly so that also must be considered when managing the boolean state value.
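Stripped of the OmniGraph specifics, the state-flag pattern looks like this in plain Python. It is purely illustrative — the class and method names below are not part of any OmniGraph API:

```python
import random

class DeferredDefault:
    """Compute a runtime default on first compute() unless a value was set externally."""

    def __init__(self):
        self._initialized = False
        self.number = 0

    def set_number(self, value: int) -> None:
        # An external write counts as initialization, so compute() must not overwrite it
        self.number = value
        self._initialized = True

    def compute(self) -> int:
        if not self._initialized:
            self.number = random.randint(0, 100)  # the runtime-determined default
            self._initialized = True
        return self.number

node = DeferredDefault()
node.set_number(7)
print(node.compute())  # 7 -- the externally set value is preserved
```

The C++ and Python node examples below apply exactly this structure, with the flag stored in per-instance state.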
For these examples this node definition will be used:
```json
{
"RandomInt": {
"version": 1,
"description": "Holds an integer with a random integer number.",
"outputs": {
"number": {
"description": "The random integer",
"type": "int"
}
},
"state": {
"$comment": "This section exists solely to inform the scheduler that there is internal state information."
}
}
}
```
> For the Python node assume the addition of the line `"language": "python"`.
## Initializing Values In C++ Nodes
In the [internal state tutorial](../../../omni.graph.tutorials/1.26.1/tutorial18.html#ogn-tutorial-state) you can see that a way to add state information to a C++ node is to make some class members. When the owning graph is instantiated, we can have that state information shared amongst all graph instances (for each copy of our node type - a “node instance”), or/and have that information linked to each individual graph instance for that node instance.
In our example here, we’ll add a boolean class member to tell us if the node is initialized or not, for each graph instance, and check it in the **compute()** method.
```cpp
#include <OgnRandomIntDatabase.h>

#include <cstdlib>
class OgnRandomInt
{
    // Deferred-initialization flag held in per-instance state
    bool m_initialized{ false };

public:
static bool compute(OgnRandomIntDatabase& db)
{
auto& state = db.perInstanceState<OgnRandomInt>();
if (!state.m_initialized)
{
db.outputs.number() = std::rand();
state.m_initialized = true;
}
return true;
}
};
```
If you know your attribute will never be set from the outside then that is sufficient; however, usually there is no guarantee that some script or UI has not set the attribute value. Fortunately the node can monitor that using the **registerValueChangedCallback()** ABI function on the attribute. It can be set up in the node’s **initialize()** method. The callback will need to iterate through all graph instances in order to set up the state information. If the graph happens to not be instantiated, it still runs on its “default” instance, so it is safe to reset at least one state.
Putting this in with the above code you get this:
```cpp
#include <OgnRandomIntDatabase.h>

#include <cstdlib>

class OgnRandomInt
{
    // Deferred-initialization flag held in per-instance state
    bool m_initialized{ false };

public:
    static bool compute(OgnRandomIntDatabase& db)
    {
        auto& state = db.perInstanceState<OgnRandomInt>();
        if (!state.m_initialized)
        {
            db.outputs.number() = std::rand();
            state.m_initialized = true;
        }
        // Normally you would have other things to do as part of the compute as well...
        return true;
    }

    static void attributeChanged(const AttributeObj& attrObj, const void*)
    {
        // The output was set externally, so the per-instance state must be marked
        // as initialized here so that compute() does not overwrite the new value.
        // As described above, iterate the state of every graph instance (at least
        // the default instance) and set its m_initialized flag to true.
    }

    static void initialize(const GraphContextObj&, const NodeObj& nodeObj)
    {
        AttributeObj attrObj = nodeObj.iNode->getAttributeByToken(nodeObj, outputs::number.m_token);
        attrObj.iAttribute->registerValueChangedCallback(attrObj, attributeChanged, true);
    }
};

REGISTER_OGN_NODE()
```
## Initializing Values In Python Nodes
In the internal state tutorial you can see that the way to add state information to a Python node is to create a static **internal_state** method. We’ll create a simple class with a boolean class member to tell us if the node is initialized or not, and check it in the **compute()** method. Be aware that in Python, there is no notion of a shared state between all graph instances. A distinct state object is created for each graph instance for that node copy, or “node instance” in the graph.
```python
from dataclasses import dataclass
from random import randint
class OgnRandomInt:
@dataclass
class State:
initialized: bool = False
@staticmethod
def internal_state() -> State:
return OgnRandomInt.State()
@staticmethod
def compute(db) -> bool:
if not db.per_instance_state.initialized:
db.outputs.number = randint(-0x7fffffff, 0x7fffffff)
db.per_instance_state.initialized = True
# Normally you would have other things to do as part of the compute as well...
return True
```
If you know your attribute will never be set from the outside then that is sufficient. Unfortunately the Python API does not yet have a method of getting a callback when an attribute value has changed so for now this is all you can do. | 5,621 |
SaveVDB.md | # Save VDB
Saves VDB data from a memory buffer to a file.
## Installation
To use this node enable `omni.volume_nodes` in the Extension Manager.
## Inputs
| Name | Type | Description | Default |
|------|------|-------------|---------|
| Asset Path (inputs:assetPath) | token | Path to VDB file to save. | |
| Compression Mode (inputs:compressionMode) | token | The compression mode to use when encoding (allowedTokens: None, Blosc, Zip). | None |
| Data (inputs:data) | uint[] | Data to save to file in NanoVDB or OpenVDB memory format. | [] |
| Exec In (inputs:execIn) | execution | Input execution | None |
## Outputs
| Name | Type | Description | Default |
|------|------|-------------|---------|
## Metadata
| Name | Value |
|--------------|------------------------|
| Unique ID | omni.volume.SaveVDB |
| Version | 1 |
| Extension | omni.volume_nodes |
| Has State? | True |
| Implementation Language | C++ |
| Default Memory Type | cpu |
| Generated Code Exclusions | tests |
| tags | VDB |
| uiName | Save VDB |
| __tokens | {"none": "None", "blosc": "Blosc", "zip": "Zip"} |
| Categories | Omni Volume |
| Generated Class Name | SaveVDBDatabase |
| Python Module | omni.volume_nodes | | 1,781 |
scene-optimizer-service_overview.md | # Scene Optimizer Service
The Scene Optimizer Service uses the [Scene Optimizer Extension](https://docs.omniverse.nvidia.com/prod_extensions/prod_extensions/ext_scene-optimizer.html) (omni.kit.services.scene.optimizer) to optimize USD files to improve performance.
Example use cases for the service:
- **Automated UV generation for CAD models**
- **Optimizing scenes for runtime interactivity**
- **Optimizing scenes for memory efficiency**
- **Point Cloud Partitioning**
The service will take a USD file from a Nucleus location, use a predefined configuration file with desired optimization operations and create a new optimized USD file in the same Nucleus directory.
## Configuration
The service is primarily configured via a json file which describes the operation stack which should be executed on the scene.
### Generating Config Files
The file can be written by hand, but the easier way is to use the Scene Optimizer Kit extension to generate the file. The optimization steps can be defined in the Scene Optimizer UI, then saved to a JSON file.
See details on how to generate the JSON file from the [Scene Optimizer Kit extension](https://docs.omniverse.nvidia.com/prod_extensions/prod_extensions/ext_scene-optimizer/user-manual.html).
### Example JSON Configuration
This sample json will run the following operations:
- optimize materials - deduplicate
- deduplicate geometry - instanceable reference
- prune leaf xforms
```json
[
{
"operation": "optimizeMaterials",
"materialPrimPaths": [],
"optimizeMaterialsMode": 0
},
{
"operation": "deduplicateGeometry",
"instanceableReference": true
},
{
"operation": "pruneLeafXforms"
}
]
```
A configuration json file needs to be in the same top level directory when calling this as a service from Nucleus. | 1,839 |
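Before submitting a request, a config file like the sample above can be sanity-checked locally. The helper below is a hedged sketch (`validate_config` is not part of the service, and the operation names are only the examples shown on this page, not the full supported set):

```python
import json

# Example operation names taken from the sample config above (not exhaustive)
KNOWN_OPERATIONS = {"optimizeMaterials", "deduplicateGeometry", "pruneLeafXforms"}

def validate_config(text):
    """Return the ordered operation names, raising ValueError on malformed input."""
    stack = json.loads(text)
    if not isinstance(stack, list):
        raise ValueError("config must be a JSON array of operations")
    ops = []
    for entry in stack:
        if not isinstance(entry, dict):
            raise ValueError("each operation must be a JSON object")
        op = entry.get("operation")
        if op not in KNOWN_OPERATIONS:
            raise ValueError("unknown operation: %r" % (op,))
        ops.append(op)
    return ops

print(validate_config('[{"operation": "pruneLeafXforms"}]'))  # ['pruneLeafXforms']
```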
Scene.md | # Scene
## Camera
SceneUI is the framework built on top and tightly integrated with `omni.ui`. It uses `omni.ui` inputs and basically supports everything `omni.ui` supports, like Python bindings, properties, callbacks, and async workflow.
SceneView is the `omni.ui` widget that renders all the SceneUI items. It can be a part of the `omni.ui` layout or an overlay of `omni.ui` interface. It’s the entry point of SceneUI.
SceneView determines the position and configuration of the camera and has projection and view matrices.
```python
# Projection matrix
proj = [1.7, 0, 0, 0, 0, 3, 0, 0, 0, 0, -1, -1, 0, 0, -2, 0]
# Move camera
rotation = sc.Matrix44.get_rotation_matrix(30, 50, 0, True)
transl = sc.Matrix44.get_translation_matrix(0, 0, -6)
view = transl * rotation
scene_view = sc.SceneView(
sc.CameraModel(proj, view),
aspect_ratio_policy=sc.AspectRatioPolicy.PRESERVE_ASPECT_FIT,
height=200
)
with scene_view.scene:
# Edges of cube
sc.Line([-1, -1, -1], [1, -1, -1])
sc.Line([-1, 1, -1], [1, 1, -1])
sc.Line([-1, -1, 1], [1, -1, 1])
```
## Screen
To get position and view matrices, SceneView queries the model component. It represents either the data transferred from the external backend or the data that the model holds. The user can reimplement the model to manage the user input or get the camera directly from the renderer or any other back end.
The SceneView model is required to return two float arrays, `projection` and `view` of size 16, which are the camera matrices.
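Each 16-float array is a flattened 4x4 matrix; assuming the row-major, translation-in-last-row layout suggested by the example matrices on this page, a translation matrix in that layout can be built by hand (plain Python, independent of `sc.Matrix44`):

```python
def translation_matrix(tx, ty, tz):
    """Flattened 4x4 translation matrix in the 16-float layout SceneView expects."""
    return [1, 0, 0, 0,
            0, 1, 0, 0,
            0, 0, 1, 0,
            tx, ty, tz, 1]

view = translation_matrix(0, 0, -8)
assert len(view) == 16 and view[12:15] == [0, 0, -8]
```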
`ui.Screen` is designed to simplify tracking the user input to control the camera position. `ui.Screen` represents the rectangle always placed in the front of the camera. It doesn’t produce any visible shape, but it interacts with the user input.
You can change the camera orientation in the example below by dragging the mouse cursor left and right.
```python
class CameraModel(sc.AbstractManipulatorModel):
def __init__(self):
super().__init__()
self._angle = 0
def append_angle(self, delta: float):
self._angle += delta * 100
# Inform SceneView that view matrix is changed
self._item_changed("view")
def get_as_floats(self, item):
"""Called by SceneView to get projection and view matrices"""
if item == self.get_item("projection"):
# Projection matrix
            return [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, -1, -1, 0, 0, -2, 0]
if item == self.get_item("view"):
# Move camera
rotation = sc.Matrix44.get_rotation_matrix(30, self._angle, 0, True)
            transl = sc.Matrix44.get_translation_matrix(0, 0, -8)
            view = transl * rotation
            return [view[i] for i in range(16)]


def on_mouse_dragged(sender):
    # Change the model's angle according to mouse x offset
    mouse_moved = sender.gesture_payload.mouse_moved[0]
    sender.scene_view.model.append_angle(mouse_moved)


with sc.SceneView(CameraModel(), height=200).scene:
    # Camera control
    sc.Screen(gesture=sc.DragGesture(on_changed_fn=on_mouse_dragged))
    # Edges of cube
    sc.Line([-1, -1, -1], [1, -1, -1])
    sc.Line([-1, 1, -1], [1, 1, -1])
    sc.Line([-1, -1, 1], [1, -1, 1])
    sc.Line([-1, 1, 1], [1, 1, 1])
    sc.Line([-1, -1, -1], [-1, 1, -1])
    sc.Line([1, -1, -1], [1, 1, -1])
    sc.Line([-1, -1, 1], [-1, 1, 1])
    sc.Line([1, -1, 1], [1, 1, 1])
    sc.Line([-1, -1, -1], [-1, -1, 1])
    sc.Line([-1, 1, -1], [-1, 1, 1])
    sc.Line([1, -1, -1], [1, -1, 1])
    sc.Line([1, 1, -1], [1, 1, 1])
```
scenegraph_get.md | # Getting the USDRT Scenegraph API
## Inside Kit
Starting with Kit 104, you can load the **USDRT Scenegraph API** extension in Kit from the Extension Manager.
Once it’s loaded, you can use the Python API directly in Kit, or from another extension that declares a dependency on the USDRT Scenegraph API extension.
Similarly, you can use the C++ API from a Kit extension by declaring an extension dependency to the USDRT Scenegraph API extension, and adding this path to the “includedirs” section of your premake file:
```lua
"%{target_deps}/usdrt/include"
```
The Kit extension manager will handle the plugin loading for you.
## Outside Kit
There is no standalone release of USDRT publicly available at this time. Kit extensions are the best way to leverage USDRT within Omniverse. | 788 |
scenegraph_use.md | # USDRT Scenegraph API Usage
## About
The USDRT API is intended as a pin-compatible replacement for the USD API, with a goal of enabling low-cost transitions for existing USD-centric codebases to leverage the performance and replication features of Fabric. This is the vision:
### USD
```cpp
#include <pxr/base/vt/array.h>
#include <pxr/base/tf/token.h>
#include <pxr/usd/sdf/path.h>
#include <pxr/usd/usd/stage.h>
#include <pxr/usd/usd/prim.h>
#include <pxr/usd/usd/attribute.h>
PXR_NAMESPACE_USING_DIRECTIVE
UsdStageRefPtr stage = UsdStage::Open("./data/usd/tests/cornell.usda");
UsdPrim prim = stage->GetPrimAtPath(SdfPath("/Cornell_Box/Root/White_Wall_Back"));
UsdAttribute attr = prim.GetAttribute(TfToken("faceVertexIndices"));
VtArray<int> arrayResult;
attr.Get(&arrayResult);
CHECK(arrayResult.size() == 4);
CHECK(arrayResult[0] == 1);
CHECK(arrayResult[1] == 3);
```
### USDRT
```cpp
#include <usdrt/scenegraph/base/vt/array.h>
#include <usdrt/scenegraph/base/tf/token.h>
#include <usdrt/scenegraph/usd/sdf/path.h>
#include <usdrt/scenegraph/usd/usd/stage.h>
#include <usdrt/scenegraph/usd/usd/prim.h>
#include <usdrt/scenegraph/usd/usd/attribute.h>
#include <usdrt/scenegraph/usd/usdGeom/tokens.h>
using namespace usdrt;
UsdStageRefPtr stage = UsdStage::Open("./data/usd/tests/cornell.usda");
UsdPrim prim = stage->GetPrimAtPath(SdfPath("/Cornell_Box/Root/White_Wall_Back"));
UsdAttribute attr = prim.GetAttribute(UsdGeomTokens->faceVertexIndices);
VtArray<int> arrayResult;
attr.Get(&arrayResult);
CHECK(arrayResult.size() == 4);
CHECK(arrayResult[0] == 1);
CHECK(arrayResult[1] == 3);
```
Note that only the include lines and namespace directives were modified. In the USD example code above, included files are from pxr and we are using the Pixar namespace directive, so the data is read from USD. In the USDRT example code, the include files are from usdrt and using the usdrt namespace directive, so the data is accessed from Fabric.
### API Status
#### Core USD
With the Kit 104 release, a minimal subset of the USD API is available. Significant portions of these classes have been implemented:
- UsdStage
- UsdPrim
- UsdAttribute
- UsdRelationship
- UsdPrimRange
- UsdTimeCode
- SdfPath
- SdfValueTypeName
- TfToken
- VtArray
- GfHalf
- GfMatrix(3/4)(d/f/h)
- GfQuat(d/f/h)
- GfVec(2/3/4)(d/f/h/i)
- GfRange(1/2/3)(d/f)
- GfRect2i
Specific details can be accessed via the C++ API Docs link. Because the API matches USD, the rest of this document will review how USDRT interacts with Fabric, the current limitations, and future development plans.
#### Schema classes
Support for schema classes in USDRT is currently in development and will be added in an upcoming release.
#### Rt schemas and classes
Functionality specific to USDRT, Fabric, and Omniverse is added in a new library, `Rt`. Schemas and support classes include:
- RtXformable (see Working with OmniHydra Transforms)
### How USDRT Interacts with Fabric
USDRT does not replace USD’s composition engine or sidestep the need for USD stages. Under the hood, the USDRT scenegraph plugin still creates or maintains a USD stage in order to populate Fabric. However, many USDRT operations read or write Fabric directly, which allows many USD bottlenecks to be avoided. This also provides a way to interact with Fabric using a familiar USD API, including Python bindings.
![scenegraph API diagram](_images/rt_scenegraph_overview.png)
#### Stages
There are (currently) two conceptual models for working with a stage in USDRT:
- Open or create a new stage, and automatically add a SimStageWithHistory for the stage
  - `usdrt::UsdStage::Open`
  - `usdrt::UsdStage::CreateNew`
  - `usdrt::UsdStage::CreateInMemory`
- Create a USDRT stage representing a pre-existing USD Stage and SimStageWithHistory (or create a SimStageWithHistory for a USD stage if one doesn’t exist)
  - `usdrt::UsdStage::Attach`
The stage returned by the open/create APIs behaves in the same way that USD stages do - when the last reference to the `UsdStageRefPtr` is dropped, the underlying USD stage and the associated SimStageWithHistory are cleaned up.
A stage created with the Attach method does not attempt any cleanup when the last reference to the `UsdStageRefPtr` is destroyed. It is assumed that because a USD Stage and SimStageWithHistory already existed at `usdrt::UsdStage` creation time, they are owned by someone else and should persist beyond the lifetime of the `UsdStageRefPtr`.
## Fabric population
In the Kit 104 release, USDRT takes a naive approach to loading data from a USD stage into Fabric. USDRT will evolve over time to support additional models of loading data into Fabric, using the USD stage as a fallback for data in some cases, and synchronizing changes in the underlying USD stage into Fabric.
Fabric is lazily populated with USD data at the point where any `usdrt::UsdPrim` object is created:
- `usdrt::UsdStage::GetPrimAtPath` adds the returned prim to Fabric if it is not already stored in Fabric
- `usdrt::UsdStage::Traverse` and `usdrt::UsdPrimRange` add the returned prims discovered during traversal if they are not already stored in Fabric
- `usdrt::UsdStage::DefinePrim` creates a prim directly in Fabric, and does not add it to USD
The properties that are populated into Fabric for the prim are the properties with any authored opinion in USD. Attributes that only have fallback values are currently **not** populated into Fabric. Additionally, the property values stored in Fabric are only those for the default value of the property - Fabric does not currently support USD timesamples.
Creating new properties on a USDRT prim will only add the property to Fabric - the new property is not created on the USD stage:
- `usdrt::UsdPrim::CreateAttribute`
- `usdrt::UsdPrim::CreateRelationship`
Querying properties on a `usdrt::UsdPrim` only gives visibility into properties that are present in Fabric - properties not in Fabric will return a value as though the property does not exist.
- `usdrt::UsdPrim::HasAttribute`
- `usdrt::UsdPrim::GetAttribute`
- `usdrt::UsdPrim::GetAttributes`
- `usdrt::UsdPrim::HasRelationship`
- `usdrt::UsdPrim::GetRelationship`
- `usdrt::UsdPrim::GetRelationships`
It should be noted that `usdrt::UsdStage::DefinePrim` will create an otherwise empty prim in Fabric. Properties of interest need to be subsequently created for the prim using `usdrt::UsdPrim::CreateAttribute` and `usdrt::UsdPrim::CreateRelationship`. This will evolve over time as we consider the roles of schemas in USDRT and Fabric data population strategies.
## Writing back to USD
As of the Kit 104 release, there are two methods for writing data in Fabric back to the USD stage using USDRT:
- `usdrt::UsdStage::WriteToStage()`
  - WriteToStage will write any modified properties in Fabric back to the EditTarget on the underlying USD stage, as long as those prims and properties already exist on the USD stage.
- `usdrt::UsdStage::WriteToLayer(const std::string& filePath)`
  - WriteToLayer will write all prims and properties in Fabric to a layer that is not part of the underlying USD stage. This is useful for exporting Fabric data as USD.
Additional support for writing data back to the USD stage will be added in subsequent releases of USDRT.
## A note on transforms
OmniHydra (the Omniverse Scene Delegate that extends UsdImaging) can read prim transform data directly from Fabric. This enables very fast visualization of simulation data, since all simulation results are written to and read from Fabric.
The USDRT API provides a schema class for querying and manipulating this transform data, `usdrt::RtXformable`. For more information, see Working with OmniHydra Transforms.
## Prim traversal
Like USD, stage traversals leverage the PrimRange class, invoked by either:
1. `usdrt::UsdStage::Traverse`, or
2. the `usdrt::UsdPrimRange` constructor
There are two important notes, as of the Kit 104 release:
1. PrimRanges that access a prim for the first time will cause that prim and its attributes to be loaded into Fabric
2. Prims that are defined only in Fabric (via `usdrt::UsdStage::DefinePrim`) will not appear in PrimRange results. Currently only prims that exist on the `pxr::UsdStage` will be returned by a `usdrt::UsdPrimRange`, in the order defined by the `pxr::UsdStage`
This approach will evolve over time as development proceeds on USDRT.
### Accessing property values
As of the Kit 104 release, the USDRT API reads and writes property values from Fabric exclusively, using the StageReaderWriter. There is currently no “passthrough” support to USD, although there may be in the future.
The `Get()` and `Set()` APIs implement value-templated access only - VtValue is currently not supported.
```cpp
UsdAttribute attr = prim.GetAttribute(UsdGeomTokens->doubleSided);
bool result = false;
attr.Get(&result, 0.0);
CHECK(result);
```
Array-typed properties use `usdrt::VtArray` to access values in Fabric.
```cpp
attr = prim.GetAttribute(UsdGeomTokens->faceVertexIndices);
VtArray<int> arrayResult;
attr.Get(&arrayResult, 0.0);
CHECK(arrayResult.size() == 4);
CHECK(arrayResult.IsFabricData());
CHECK(arrayResult[0] == 1);
CHECK(arrayResult[1] == 3);
```
`usdrt::VtArray` has similar properties to `pxr::VtArray`, in that it is copy-on-write and copy-on-non-const-access, with one important exception. A `usdrt::VtArray` that is populated from a call to `usdrt::UsdAttribute::Get` is created in a state such that it is attached to the Fabric data it represents. Modifying the VtArray in this attached state will modify the array data directly in Fabric. The `usdrt::VtArray::IsFabricData` API indicates whether the VtArray is in this attached state, and `usdrt::VtArray::DetachFromSource` will make an instance-local copy of the array data so that further modifications to the VtArray will not write to Fabric. This avoids unnecessary data copying and gives developers an efficient way to modify array-typed data in Fabric.
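To make those attach/detach semantics concrete, here is a toy Python model. It is illustrative only — `ToyVtArray` is not a real USDRT class; the real behavior is demonstrated in the C++ example that follows:

```python
class ToyVtArray:
    """Toy model of usdrt::VtArray attach/detach semantics (not a real API)."""

    def __init__(self, backing):
        self._data = backing      # shared "Fabric" storage
        self._attached = True

    def is_fabric_data(self):
        return self._attached

    def detach_from_source(self):
        if self._attached:
            self._data = list(self._data)  # localize a copy of the data
            self._attached = False

    def __getitem__(self, i):
        return self._data[i]

    def __setitem__(self, i, value):
        self._data[i] = value  # while attached, this writes through to "Fabric"

fabric = [1, 3, 0, 2]          # pretend Fabric storage
arr = ToyVtArray(fabric)
arr[3] = 9
print(fabric[3])               # 9 -- attached: writes go straight to Fabric
arr.detach_from_source()
arr[3] = 5
print(fabric[3], arr[3])       # 9 5 -- detached: further writes stay local
```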
```cpp
// prefetch a prim into Fabric
UsdStageRefPtr stage = UsdStage::Open("./data/usd/tests/cornell.usda");
UsdPrim prim = stage->GetPrimAtPath(SdfPath("/Cornell_Box/Root/Cornell_Box1_LP/White_Wall_Back"));
// Get array from Fabric
omni::fabric::StageReaderWriter sip(stage->GetStageReaderWriterId());
gsl::span<int> fabricArray =
    sip.getArrayAttribute<int>(omni::fabric::Path("/Cornell_Box/Root/Cornell_Box1_LP/White_Wall_Back"),
                               omni::fabric::Token("faceVertexIndices"));

// Get VtArray from USDRT
VtArray<int> fromFabric;
UsdAttribute attr = prim.GetAttribute(UsdGeomTokens->faceVertexIndices);
attr.Get(&fromFabric);
CHECK(fromFabric.IsFabricData());
CHECK(fromFabric[1] == 3);

// modification in VtArray attached to Fabric modifies Fabric
fromFabric[3] = 9;
CHECK(fromFabric.IsFabricData());
CHECK(fromFabric[3] == 9);
CHECK(fabricArray[3] == 9);

// detach from Fabric to localize array
fromFabric.DetachFromSource();
fromFabric[3] = 5;
CHECK(!fromFabric.IsFabricData());
CHECK(fromFabric[3] == 5);
CHECK(fabricArray[3] == 9);

fabricArray[3] = 12;
CHECK(fromFabric[3] == 5);
CHECK(fabricArray[3] == 12);
```
SchedulingHints.md | # Scheduling Hints for OG Nodes
With the Execution Framework (EF) enabled by default in Kit 105, its current integration with OmniGraph (OG) now provides OG node developers an avenue for quickly boosting their nodal compute performance with minimal code overhead. This is done by adding so-called *scheduling hints* inside `.ogn` definitions, which allow the EF to more optimally *schedule* the marked nodes for evaluation.
## Scheduling Hints Overview
In order to properly leverage scheduling hints to improve OG node computational performance, it helps to be aware of the exact types of scheduling dispatches that the EF currently supports.
Adding a scheduling hint to a node is as simple as placing a `"scheduling"` property in the node’s `.ogn` file, followed by at least one of the following arguments:
- `"threadsafe"`: Indicates that this node can be executed in *parallel* with other `"threadsafe"` (and serially-scheduled) nodes. In an `.ogn` file this hint would look like this:
```json
{
"scheduling": ["threadsafe"]
}
```
- `"usd-write"`: Indicates that this node writes data out to the global USD stage, which can lead to trouble if other nodes also attempt reading from/writing to stage at the same time. For this reason, nodes with this hint are executed in *isolation*, i.e., no other nodes can run concurrently until the `"usd-write"` node’s compute method has finished.
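The hint is declared the same way as `"threadsafe"`, in the node's `.ogn` file:

```json
{
    "scheduling": ["usd-write"]
}
```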
Bundle-related thread-safety ultimately depends on the underlying `DataModel` (and corresponding mutex safeguards). Thread-efficiency, on the other hand, is only assured if, in addition to the previous requirements, all of the bundle-related methods that a node's `compute` calls into ultimately invoke `DataModel` functions that only require a *reader* mutex to be acquired (thereby allowing multiple threads to call on them concurrently).
- Nodes that utilize the GPU(s) via CUDA don't inherently require a specific type of scheduling hint. Implementation details will determine whether or not the node implementation is thread-safe (an example of which is shown later - see OgnDeformer1_GPU).
## Code Examples
Presented below are a variety of node implementations, most of which also exist in various OG extensions; the ones that have been conjured up specifically for this documentation are marked as such for clarity's sake. Alongside each definition is a discussion on how one could analyze the given code, using the methods described in previous sections, to reasonably deduce what scheduling behavior would best suit each one. With any luck, this will help further elucidate the process behind applying scheduling hints to nodes and showcase some of the similarities shared between nodes in each category to make future identification a bit easier.
Note that for brevity some of the examples have been trimmed to just show the node's core compute logic.
## Parallel-Scheduled Node Examples
### OgnTutorialTupleArrays
```C++
// OgnTutorialTupleArrays.cpp (core compute logic; reconstructed sketch --
// the full listing lives in the omni.graph.tutorials extension)
static bool compute(OgnTutorialTupleArraysDatabase& db)
{
    // Take the dot product of each pair of float[3] elements from the
    // two input arrays, writing the results to the output array.
    const auto& a = db.inputs.a();
    const auto& b = db.inputs.b();
    auto result = db.outputs.result();
    result.resize(a.size());
    for (size_t i = 0; i < a.size(); i++)
    {
        result[i] = a[i][0] * b[i][0] + a[i][1] * b[i][1] + a[i][2] * b[i][2];
    }
    return true;
}
```
Following the flowchart steps:
1. Does the node write to the USD stage? → **No**.
2. Does the node utilize bundles in any capacity? → **No**.
3. Does the node write to any other external/shared data containers (not including the USD stage and the node’s direct outputs)? → **No**.
4. Is the node implemented using Python/Warp? → **No**.
5. Does the node load/unload any extensions? → **No**.
Conclusion: Schedule this node in parallel with the `"threadsafe"` hint.
### OgnDeformer1_GPU
```C++
// OgnDeformer1_GPU.cpp
#include "OgnDeformer1_GPUDatabase.h"
namespace omni
{
namespace graph
{
namespace examples
{
extern "C" void deformer1W(outputs::points_t outputPoints,
inputs::points_t points,
inputs::multiplier_t multiplier,
inputs::wavelength_t wavelength,
size_t numPoints);
class OgnDeformer1_GPU
{
public:
static bool compute(OgnDeformer1_GPUDatabase& db)
{
size_t numberOfPoints = db.inputs.points.size();
db.outputs.points.resize(numberOfPoints);
if (numberOfPoints == 0)
{
return true;
}
deformer1W(db.outputs.points(), db.inputs.points(), db.inputs.multiplier(), db.inputs.wavelength(), numberOfPoints);
return true;
}
};
REGISTER_OGN_NODE()
}
}
}
```
```C++
// OgnDeformer1_GPU.cu
#include <OgnDeformer1_GPUDatabase.h>
__global__ void deformer1(outputs::points_t outputArray, inputs::points_t inputArray,
inputs::multiplier_t multiplier, inputs::wavelength_t wavelength, size_t numPoints)
{
int i = blockIdx.x*blockDim.x + threadIdx.x;
if (numPoints <= i) return;
const float3* points = *inputArray;
float3* outputPoints = *outputArray;
float width = *wavelength;
float height = 10.0f * (*multiplier);
float freq = 10.0f;
float3 point = points[i];
float tx = freq * (point.x - width) / width;
float ty = 1.5f * freq * (point.y - width) / width;
point.z += height * (sin(tx) + cos(ty));
outputPoints[i] = point;
}
extern "C"
void deformer1W(outputs::points_t outputArray, inputs::points_t inputArray,
inputs::multiplier_t multiplier, inputs::wavelength_t wavelength, size_t numPoints)
{
const int nt = 256;
const int nb = (numPoints + nt - 1) / nt;
deformer1<<<nb, nt>>>(outputArray, inputArray, multiplier, wavelength, numPoints);
}
```
Following the flowchart steps:
1. Does the node write to the USD stage? → **No**.
2. Does the node utilize bundles in any capacity? → **No**.
3. Does the node write to any other external/shared data containers (not including the USD stage and the node’s direct outputs)? → **No**.
4. Is the node implemented using Python/Warp? → **No**.
5. Does the node load/unload any extensions? → **No**.
Conclusion: Schedule this node in parallel with the `"threadsafe"` hint.
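The reason the kernel is thread-safe is visible in its indexing: each CUDA thread reads and writes only its own element `i`. The per-point math can be checked on the CPU with a direct Python transcription (an illustrative sketch, not part of the extension):

```python
import math

def deform(points, multiplier, wavelength):
    """Apply the same sine/cosine displacement as the deformer1 kernel, per point."""
    width = wavelength
    height = 10.0 * multiplier
    freq = 10.0
    out = []
    for (x, y, z) in points:
        tx = freq * (x - width) / width
        ty = 1.5 * freq * (y - width) / width
        out.append((x, y, z + height * (math.sin(tx) + math.cos(ty))))
    return out

# A point sitting at (width, width) is displaced by height * (sin 0 + cos 0) = height:
deform([(1.0, 1.0, 0.0)], 1.0, 1.0)  # -> [(1.0, 1.0, 10.0)]
```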
### OgnWritePrimAttribute
```c++
// OgnWritePrimAttribute.cpp
static bool compute(OgnWritePrimAttributeDatabase& db)
{
if (!db.inputs.value().resolved())
return true;
NodeObj nodeObj = db.abi_node();
GraphObj graphObj = nodeObj.iNode->getGraph(nodeObj);
auto& instance = db.internalState<OgnWritePrimAttribute>();
if (!instance.m_correctlySetup)
{
instance.setup(nodeObj, graphObj, db.getInstanceIndex());
}
else
{
auto path = db.inputs.usePath() ? db.inputs.primPath().token : db.stringToToken(db.inputs.prim.path()).token;
if (path != instance.m_destPathToken.token || db.inputs.name() != instance.m_destAttrib)
instance.setup(nodeObj, graphObj, db.getInstanceIndex());
}
if (instance.m_correctlySetup)
{
copyAttributeDataToPrim(db.abi_context(),
instance.m_destPath,
instance.m_destAttrib,
nodeObj,
inputs::value.m_token,
db.getInstanceIndex(),
true,
db.inputs.usdWriteBack());
db.outputs.execOut() = kExecutionAttributeStateEnabled;
return true;
}
return false;
}
// PrimCommon.cpp
// Helper to copy data from our attribute to the target prim
void copyAttributeDataToPrim(const GraphContextObj& context,
PathC destPath,
TokenC destName,
const NodeObj& srcNode,
TokenC srcName,
InstanceIndex instanceIndex,
bool allowDisconnected,
bool usdWriteBack)
{
AttributeObj inputAttr = srcNode.iNode->getAttributeByToken(srcNode, srcName);
// Implementation details have been nixed for brevity's sake...
ConstAttributeDataHandle inputHandle = inputAttr.iAttribute->getConstAttributeDataHandle(inputAttr, instanceIndex);
copyAttributeData(context, destPath, destName, srcNode, inputHandle, usdWriteBack);
}
// Helper to copy data from an input to a destination attribute
void copyAttributeData(GraphContextObj const& context,
    PathC destPath,
    TokenC destName,
    NodeObj const& srcNode,
    ConstAttributeDataHandle const& inputHandle,
    bool const usdWriteBack)
{
AttributeDataHandle const dstHandle{ AttrKey{ destPath.path, destName.token } };
context.iBundle->copyAttribute(context, BundleHandle{ destPath.path }, destName, inputHandle);
if (usdWriteBack)
{
context.iContext->registerForUSDWriteBack(context, (BundleHandle)destPath.path, destName);
}
else
{
// Implementation details have been nixed for brevity's sake...
}
}
```
Following the flowchart steps:
1. Does the node write to the USD stage? → **Yes**, it writes some attribute data out to a target USD prim.
2. Is the node utilizing the `registerForUSDWriteBack` method to do so? → **Yes**, the `compute` method calls into `copyAttributeDataToPrim`, which calls into `copyAttributeData`, which finally calls into `registerForUSDWriteBack`.
3. Does the node utilize bundles in any capacity? → **No**.
4. Does the node write to any other external/shared data containers (not including the USD stage and the node’s direct outputs)? → **No**.
5. Is the node implemented using Python/Warp? → **No**.
6. Does the node load/unload any extensions? → **No**.
Conclusion: Add both the `"usd-write"` and `"threadsafe"` scheduling hints to this node; the latter will take precedence and ensure that the node can be executed in parallel, while the former simply lets observers know that the node writes to USD.
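The reason `registerForUSDWriteBack` keeps this node parallel-safe is that it defers the actual stage write to a later, serialized point in the frame. A toy model of that defer-and-flush pattern (hypothetical names, not the OG API) looks like this:

```python
import threading

class WriteBackQueue:
    """Collect attribute writes during (possibly parallel) computes; apply them later serially."""

    def __init__(self):
        self._pending = []
        self._lock = threading.Lock()

    def register(self, prim_path, attr_name, value):
        # Safe to call from parallel computes: only the queue is touched, not the stage.
        with self._lock:
            self._pending.append((prim_path, attr_name, value))

    def flush(self, stage):
        # Called once, serially, after graph evaluation; the stage sees all writes here.
        with self._lock:
            for prim_path, attr_name, value in self._pending:
                stage[(prim_path, attr_name)] = value
            self._pending.clear()

queue = WriteBackQueue()
stage = {}  # stand-in for the USD stage
queue.register("/World/Xform", "translate", (0.0, 1.0, 0.0))
queue.flush(stage)
# stage[("/World/Xform", "translate")] is now (0.0, 1.0, 0.0)
```

Because the stage is only mutated inside `flush`, the registration calls never contend with readers of the stage, mirroring how write-back lets `"usd-write"` nodes coexist with parallel scheduling.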
### OgnArrayLength
```c++
// OgnArrayLength.cpp
// Outputs the length of a specified array attribute in an input prim,
// or 1 if the attribute is not an array attribute.
static bool compute(OgnArrayLengthDatabase& db)
{
auto bundledAttribute = db.inputs.data().attributeByName(db.inputs.attrName());
db.outputs.length() = bundledAttribute.isValid() ? bundledAttribute.size() : 0;
return true;
}
```
Following the flowchart steps:
1. Does the node write to the USD stage? → **No**.
2. Does the node utilize bundles in any capacity? → **Yes**, the node ingests a bundle as part of its input.
   - Are the bundles operated on using strictly methods from the database ABI? → **Yes, Database ABI Only**; the bundle is only accessed via `db.inputs.data()`, and the only function invoked on it is `attributeByName`.
   - Do all of the `DataModel` methods underlying the operations being performed on the bundles allow for multiple reader thread access? → **Yes**, `attributeByName` eventually invokes `DataModel::commonGetAttribute`, which only requires a reader mutex lock to utilize.
3. Does the node write to any other external/shared data containers (not including the USD stage and the node’s direct outputs)? → **No**.
4. Is the node implemented using Python/Warp? → **No**.
5. Does the node load/unload any extensions? → **No**.
Conclusion: Schedule this node in parallel with the `"threadsafe"` hint.
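The compute above is a pure read: it looks up one attribute by name and reports its element count. Stripped of the OG types, the same logic is just (illustrative sketch):

```python
def array_length(bundle, attr_name):
    """Return the element count of the named attribute, or 0 if it is absent."""
    attr = bundle.get(attr_name)
    return len(attr) if attr is not None else 0

# Reads like this never mutate the bundle, which is why many threads can
# perform them concurrently under a shared reader lock.
array_length({"points": [1, 2, 3]}, "points")   # -> 3
array_length({"points": [1, 2, 3]}, "normals")  # -> 0
```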
## Serially-Scheduled Node Examples
### OgnSharedDataWrite (Made-up!)
```C++
// SharedState.h
#pragma once
#include <mutex>
static std::mutex s_mutex;
static int s_counter = 0;
```
```C++
// OgnSharedDataWrite.cpp
#include <OgnSharedDataWriteDatabase.h>
#include "../include/omni/graph/madeup/SharedState.h"
#include <chrono>
#include <thread>
namespace omni
{
namespace graph
{
namespace madeup
{
class OgnSharedDataWrite
{
public:
static bool compute(OgnSharedDataWriteDatabase& db)
{
// Mutex lock prevents other threads from accessing the shared
// integer counter at the same time. Also sleep to simulate
// complex compute.
std::lock_guard<std::mutex> guard(s_mutex);
std::this_thread::sleep_for(std::chrono::seconds(2));
++s_counter;
return true;
}
};
REGISTER_OGN_NODE()
}
}
}
```
Following the flowchart steps:
1. Does the node write to the USD stage? → **No**.
2. Does the node utilize bundles in any capacity? → **No**.
3. Does the node write to any other external/shared data containers (not including the USD stage and the node’s direct outputs)? → **Yes**, this node attempts to increment a shared static integer counter.
4. Are these write operations being performed in a thread-safe manner? → **Yes**, mutex locks are used to ensure that only one thread at a time has access to `s_counter`.
5. Are these write operations being performed in a thread-efficient manner? → **No**; if multiple `OgnSharedDataWrite` nodes were to be executed in separate parallel threads, each thread would have to wait for its turn to access `s_counter` thanks to the mutex lock. The resultant graph evaluation behavior would thus be similar to a serial scheduling pattern, except for the fact that extra overhead would also be incurred from needlessly utilizing multiple threads for the computes.
Conclusion: Schedule this node in serial (done by default when no scheduling hint is specified in the `.ogn`).
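The thread-efficiency problem described above is easy to reproduce outside of OmniGraph. The sketch below (plain Python, not OG code) runs four lock-guarded "computes" on separate threads and still pays the full serial cost:

```python
import threading
import time

counter = 0
lock = threading.Lock()

def compute():
    # Mirror OgnSharedDataWrite: take the lock, simulate work, bump the counter.
    global counter
    with lock:
        time.sleep(0.05)
        counter += 1

threads = [threading.Thread(target=compute) for _ in range(4)]
start = time.perf_counter()
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start
# The lock serializes the four computes, so elapsed is roughly 4 * 0.05 s --
# no better than a serial schedule, plus the overhead of spawning threads.
```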
### OgnTutorialComplexDataPy
```python
# OgnTutorialComplexDataPy.py
def compute(db) -> bool:
"""
Multiply a float array by a float[3] to yield a float[3] array, using the point3f role.
Practically speaking the data in the role-based attributes is no different than the underlying raw data
types. The role only helps you understand what the intention behind the data is, e.g. to differentiate
surface normals and colours, both of which might have float[3] types.
"""
# Verify that the output array was correctly set up to have a "point" role
assert db.role.outputs.a_productArray == db.ROLE_POINT
multiplier = db.inputs.a_vectorMultiplier
input_array = db.inputs.a_inputArray
input_array_size = len(db.inputs.a_inputArray)
# The output array should have the same number of elements as the input array.
# Setting the size informs fabric that when it retrieves the data it should allocate this much space.
db.outputs.a_productArray_size = input_array_size
# The assertions illustrate the type of data that should have been received for inputs and set for outputs
assert isinstance(multiplier, numpy.ndarray) # numpy.ndarray is the underlying type of tuples
assert multiplier.shape == (3,)
assert isinstance(input_array, numpy.ndarray) # numpy.ndarray is the underlying type of simple arrays
assert input_array.shape == (input_array_size,)
# If the input array is empty then the output is empty and does not need any computing
if input_array.shape[0] == 0:
db.outputs.a_productArray = []
assert db.outputs.a_productArray.shape == (0, 3)
return True
# numpy has a nice little method for replicating the multiplier vector the number of times required
# by the size of the input array.
# e.g. numpy.tile( [1, 2], (3, 1) ) yields [[1, 2], [1, 2], [1, 2]]
product = numpy.tile(multiplier, (input_array_size, 1))
# Multiply each of the tiled vectors by the corresponding constant in the input array
for i in range(0, product.shape[0]):
product[i] = product[i] * input_array[i]
db.outputs.a_productArray = product
# Make sure the correct type of array was produced
assert db.outputs.a_productArray.shape == (input_array_size, 3)
return True
```
Following the flowchart steps:
1. Does the node write to the USD stage? → **No**.
2. Does the node utilize bundles in any capacity? → **No**.
3. Does the node write to any other external/shared data containers (not including the USD stage and the node’s direct outputs)? → **No**.
4. Is the node implemented using Python/Warp? → **Yes**, this node is implemented using Python.
Conclusion: Schedule this node in serial (done by default when no scheduling hint is specified in the `.ogn`).
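Setting the scheduling question aside, the node's core array math is straightforward; here is a dependency-free sketch of the tile-and-multiply step, using plain lists instead of numpy for illustration:

```python
def multiply_tuple_array(multiplier, scalars):
    """Scale the 3-component multiplier by each scalar, one output row per input element."""
    return [[component * scalar for component in multiplier] for scalar in scalars]

multiply_tuple_array([1.0, 2.0, 3.0], [2.0, 10.0])
# -> [[2.0, 4.0, 6.0], [10.0, 20.0, 30.0]]
```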
## Isolate-Scheduled Node Examples
### OgnPrimDeformer1
```C++
// OgnPrimDeformer1.cpp
static bool compute(OgnPrimDeformer1Database& db)
{
const auto& contextObj = db.abi_context();
const auto& nodeObj = db.abi_node();
const IGraphContext& iContext = *contextObj.iContext;
NodeContextHandle node = nodeObj.nodeContextHandle;
static const Token inputMeshName("inputMesh");
static const Token outputMeshName("outputMesh");
static const Token pointsName("points");
// Get input bundle.
omni::fabric::PathC const inputBundlePath = iContext.getInputTarget(contextObj, node, inputMeshName, db.getInstanceIndex());
ogn::BundleContents<ogn::kOgnInput, ogn::kAny> const inputBundle{ contextObj, inputBundlePath };
ConstBundleHandle inputMesh = inputBundle.abi_bundleHandle();
// Make output bundle from input prim.
BundleHandle outputMesh = iContext.copyBundleContentsIntoOutput(contextObj, node, outputMeshName, inputMesh, db.getInstanceIndex());
AttributeDataHandle outputPointsAttr = getAttributeW(contextObj, outputMesh, pointsName);
Float3* const* pointsArray = getDataW<Float3*>(contextObj, outputPointsAttr);
if (!pointsArray)
{
return true;
}
Float3* points = *pointsArray;
size_t pointCount = getElementCount(contextObj, outputPointsAttr);
static const Token multiplierName("multiplier");
ConstAttributeDataHandle multiplierAttr = getAttributeR(contextObj, node, multiplierName, db.getInstanceIndex());
const float* pMultiplier = getDataR<float>(contextObj, multiplierAttr);
}
```
float multiplier = pMultiplier ? *pMultiplier : 1.0f;
static const Token wavelengthName("wavelength");
ConstAttributeDataHandle wavelengthAttr = getAttributeR(contextObj, node, wavelengthName, db.getInstanceIndex());
const float* pWavelength = getDataR<float>(contextObj, wavelengthAttr);
float wavelength = pWavelength ? *pWavelength : 1.0f;
float width = wavelength;
float height = 10.0f * multiplier;
float freq = 10.0f;
for (uint32_t i = 0; i < pointCount; i++)
{
carb::Float3 point = points[i];
float tx = freq * (point.x - width) / width;
float ty = 1.5f * freq * (point.y - width) / width;
point.z += height * (sin(tx) + cos(ty));
points[i] = point;
}
return true;
- Does the node write to any other external/shared data containers (not including the USD stage and the node’s direct outputs)? → **Yes**, this node attempts to directly write to the “points” output attribute inside of the output bundle, which itself is stored in a bucket in Fabric (which is external to the node’s scope), via a writable data pointer.
- Are these write operations being performed in a thread-safe manner? → **No**, there are no protections in place for writing over the output bundle’s “points” attribute in this node; there could be another node executing in another thread, for example, that tries accessing the data in the same attribute on the same bundle while it’s being written into by this node.
Conclusion: Schedule this node in isolation with the `"usd-write"` hint.
### OgnSetPrimActive
```cpp
// OgnSetPrimActive.cpp
static bool compute(OgnSetPrimActiveDatabase& db)
{
const auto& primPath = db.inputs.prim();
if (pxr::SdfPath::IsValidPathString(primPath))
{
// Find our stage
const GraphContextObj& context = db.abi_context();
long stageId = context.iContext->getStageId(context);
auto stage = pxr::UsdUtilsStageCache::Get().Find(pxr::UsdStageCache::Id::FromLongInt(stageId));
if (!stage)
{
db.logError("Could not find USD stage %ld", stageId);
return false;
}
pxr::UsdPrim targetPrim = stage->GetPrimAtPath(pxr::SdfPath(primPath));
if (!targetPrim)
{
db.logError("Could not find prim \"%s\" in USD stage", primPath.data());
return false;
}
return targetPrim.SetActive(db.inputs.active());
}
return true;
}
```
Following the flowchart steps:
1. Does the node write to the USD stage? → **Yes**, it binds some user-specified material to an input prim, i.e. `targetPrim.SetActive(db.inputs.active())`.
2. Is the node utilizing the `registerForUSDWriteBack` method to do so? → **No**.
Conclusion: Schedule this node in isolation with the `"usd-write"` hint.
### OgnWritePrimMaterial
```cpp
// OgnWritePrimMaterial.cpp
static bool compute(OgnWritePrimMaterialDatabase& db)
{
const auto& primPath = db.inputs.primPath();
if (!PXR_NS::SdfPath::IsValidPathString(primPath)) {
db.logError("Invalid prim path");
return false;
}
const auto& materialPath = db.inputs.materialPath();
// Find our stage
const GraphContextObj& context = db.abi_context();
long stageId = context.iContext->getStageId(context);
PXR_NS::UsdStagePtr stage = pxr::UsdUtilsStageCache::Get().Find(pxr::UsdStageCache::Id::FromLongInt(stageId));
if (!stage)
{
db.logError("Could not find USD stage %ld", stageId);
return false;
}
PXR_NS::UsdPrim prim = stage->GetPrimAtPath(PXR_NS::SdfPath(primPath));
if (!prim) {
db.logError("Could not find USD prim");
return false;
}
PXR_NS::UsdShadeMaterialBindingAPI materialBinding(prim);
PXR_NS::UsdPrim materialPrim = stage->GetPrimAtPath(PXR_NS::SdfPath(materialPath));
if (!materialPrim) {
db.logError("Could not find USD material");
return false;
}
PXR_NS::UsdShadeMaterial material(materialPrim);
if (!materialBinding.Bind(material)) {
db.logError("Could not bind USD material to USD prim");
return false;
}
db.outputs.execOut() = kExecutionAttributeStateEnabled;
return true;
}
```
Following the flowchart steps:
1. Does the node write to the USD stage? → **Yes**, it binds some user-specified material to an input prim, i.e. `materialBinding.Bind(material)`.
2. Is the node utilizing the `registerForUSDWriteBack` method to do so? → **No**.
Conclusion: Schedule this node in isolation with the
## Testing for Node Thread Safety
While the aforementioned decision flowchart and examples may be helpful in correctly identifying the best scheduling behavior for a variety of nodes, it’s unfortunately not a catch-all solution. Perhaps a node implements some behavior that isn’t covered in this documentation (very possible given the large scope of what a node is allowed to do), which could leave a developer unsure of how to best approach the issue. The workflow discussed thus far also does little for detecting potential future regressions where a node’s behavior changes significantly-enough that it loses compatibility with its given scheduling hint.
In both cases one would benefit greatly from having targeted “scheduling behavior” tests, whether it be to experimentally determine how a node should be scheduled or to simply add greater code coverage to ensure that future node alterations remain congenial with the manner in which they are scheduled for execution.
Typically these tests tend to focus around verifying the thread-safety status of a node, especially since:
1. Developers stand to gain significant performance boosts if a node can be flipped to concurrent evaluation.
2. Particularly nasty breakages can occur when the EF attempts to execute not-thread-safe nodes in parallel; the opposite scenario, i.e. thread-safe nodes being evaluated serially and/or in isolation, may drag performance down unnecessarily but won’t lead to the code base crashing, hanging, etc. entirely.
While developers are more than welcome to write their own specialized thread-safety tests, a few different tools have already been developed to automate parts of the relevant test-development process; these tools will be discussed below in a bit more detail.
### Autogenerated Tests from `.ogn` Constructs
Whenever a node’s `.ogn` file has the `"threadsafe"` scheduling hint and a user-defined test construct (see the ogn_user_guide for more information on adding unit tests directly to a node’s `.ogn`), a Python threading test called `test_thread_safety` will automatically be generated. This threading test will create multiple copies of the test graph setup(s) specified in the `.ogn` and execute them many times to look for potential race conditions. This is perhaps the easiest way to begin verifying that a node is thread-safe and/or establishing test coverage for thread-safe nodes, but it does come with some major limitations.
For starters, the `.ogn` `test` construct itself is quite constricted in terms of the features one might like to use when designing complete unit tests. While this is partly by design (more complex tests should be written as separate external Python scripts), it’s still important to keep in mind that actions such as:
- Specifying graph execution behavior (e.g. `push` vs. `dirty push` vs. `execution` vs. …; The autogenerated test currently defaults to running all nodes on a `push` graph).
- Dynamically adding/removing node attributes, connecting/disconnecting nodes from one another, adding/removing nodes after the test graph has been initially populated.
- Creating multiple test graphs at once.
- etc.
are not (presently) possible to exercise in `.ogn` `test` constructs; the resultant restriction in scope on the number of possible behaviors that these `test` scenes can pick up on means that a node may exhibit thread-safety issues which simply don’t/cannot be discovered via the autogenerated threading test.
On the other hand, these limitations sometimes have the opposite effect and lead to an autogenerated thread-safety test failing despite the fact that the underlying node is perfectly capable of being safely executed concurrently. One such situation that’s popped up a few times in the past involved nodes with non-deterministic/stateful behavior (i.e. their internal state depends on the number of times that they’ve been evaluated). Take, for example, the `OgnCounter` Action Graph node, which increments its internal counter by one every time it’s executed. If a developer added the following test construct to its `.ogn` and left out the `"threadsafe"` scheduling hint:
```json
{
"tests" : [
{
"outputs:count": 1,
"state_set:count": 0,
"inputs:execIn": 1
}
]
}
```
then the test would pass with no issue - the node gets executed once, resulting in the counter going up by one. If the `"threadsafe"` scheduling hint was included, however, the autogenerated threading test would likely fail due to the non-deterministic nature of the node’s state.
"threadsafe"
hint is then added back in, however, the resultant autogenerated threading test will actually
**fail**. This is because a portion of the test involves executing each graph instance multiple times before checking the final condition. Each graph evaluation leads to the node’s internal counter being incremented by one, so if the threading test runs each graph 10 times, then the instanced nodes’ final output counts will be 10, and not 1 as is expected in the “
```
outputs:count
```
attribute, hence the failure. One could thus be misled to believe that
```
OgnCounter
```
is not thread-safe, when the exact opposite is true.
In general, it’s important to remember that these autogenerated thread-safety tests, although convenient, are not applicable to every node (as they’re currently designed) and do not
**always**
yield correct results; treat them as an extra source of reassurance when determining what scheduling behavior will best suite a given node rather than ironclad proof that a node is (or isn’t) thread-safe.
## ThreadsafetyTestUtils
The
```
ThreadsafetyTestUtils
```
class (located in the
```
omni.graph
```
extension) provides a set of utility functions and decorators that allow one to (relatively) easily convert a typical test coroutine for a node in some external Python script into a fully-fledged thread-safety test. Similar to the autogenerated threading tests from above, this is accomplished behind-the-hood by essentially “duplicating” the given test script (or more specifically the test graph created by the script), executing them (the test graphs) concurrently, and checking whether each test instance passes. If they don’t, then there might exist some threading issues with some of the nodes in the test graph. The process for doing so is perhaps best shown with an example:
### “Regular” Async, Non-Threaded Version of a Node Unit Test:
```python
import omni.graph.core as og
import omni.graph.core.tests as ogts
class TestClass(ogts.OmniGraphTestCase):
"""Test Class"""
TEST_GRAPH_PATH = "/World/TestGraph"
keys = og.Controller.Keys
async def setUp(self):
"""Set up test environment, to be torn down when done"""
await super().setUp()
async def test_forloop_node(self):
"""Test ForLoop node"""
context = omni.usd.get_context()
stage = context.get_stage()
# Create a prim + add an attribute to it.
prim = stage.DefinePrim("/World/TestPrim")
prim.CreateAttribute("val1", Sdf.ValueTypeNames.Int2, False).Set(Gf.Vec2i(1, 1))
# Instance a test graph setup.
graph_path = self.TEST_GRAPH_PATH
og.Controller.create_graph({"graph_path": graph_path, "evaluator_name": "execution"})
(,, _, _, _, _, write_node, finish_counter) = og.Controller.edit(
graph_path,
{
self.keys.CREATE_NODES: [
("OnTick", "omni.graph.action.OnTick"),
("Const", "omni.graph.nodes.ConstantInt2"),
("StopNum", "omni.graph.nodes.ConstantInt"),
("Add", "omni.graph.nodes.Add"),
("Branch", "omni.graph.action.Branch"),
("For", "omni.graph.action.ForLoop"),
("Write1", "omni.graph.nodes.WritePrimAttribute"),
("FinishCounter", "omni.graph.action.Counter"),
],
self.keys.SET_VALUES: [
("OnTick.inputs:onlyPlayback", False),
("Const.inputs:value", [1, 2]),
],
}
)
```python
(("StopNum.inputs:value", 3),
("Write1.inputs:name", "val1"),
("Write1.inputs:primPath", "/World/TestPrim"),
("Write1.inputs:usePath", True),
("Branch.inputs:condition", True),
),
self.keys.CONNECT: [
("OnTick.outputs:tick", "For.inputs:execIn"),
("StopNum.inputs:value", "For.inputs:stop"),
("For.outputs:loopBody", "Branch.inputs:execIn"),
("For.outputs:finished", "FinishCounter.inputs:execIn"),
("Branch.outputs:execTrue", "Write1.inputs:execIn"),
("For.outputs:value", "Add.inputs:a"),
("Const.inputs:value", "Add.inputs:b"),
("Add.outputs:sum", "Write1.inputs:value"),
],
},
)
# Evaluate the graph.
await og.Controller.evaluate()
self.assertListEqual([3, 4], list(stage.GetAttributeAtPath("/World/TestPrim.val1").Get()))
self.assertEqual(3, write_node.get_compute_count())
self.assertEqual(1, finish_counter.get_compute_count())
# Remove the prim from the stage at the very end of the test, when it's no longer needed.
stage.RemovePrim("/World/TestPrim")
```
# Threaded Version of a Node Unit Test:
```python
import omni.graph.core as og
from omni.graph.core import ThreadsafetyTestUtils
import omni.graph.core.tests as ogts
class TestClass(ogts.OmniGraphTestCase):
"""Test Class"""
TEST_GRAPH_PATH = "/World/TestGraph"
keys = og.Controller.Keys
async def setUp(self):
"""Set up test environment, to be torn down when done"""
await super().setUp()
@ThreadsafetyTestUtils.make_threading_test
def test_forloop_node(self, test_instance_id: int = 0):
"""Test ForLoop node"""
context = omni.usd.get_context()
stage = context.get_stage()
# Since we want to use the same prim across all graph instances in the
# thread-safety test, we add it to the threading cache like so:
prim = ThreadsafetyTestUtils.add_to_threading_cache(test_instance_id, stage.DefinePrim("/World/TestPrim"))
# We only want to add this new attribute to the prim once at the start of the
# threading test.
ThreadsafetyTestUtils.single_evaluation_first_test_instance(
test_instance_id,
lambda: prim.CreateAttribute("val1", Sdf.ValueTypeNames.Int2, False).Set(Gf.Vec2i(1, 1))
)
# Instance a test graph setup. Note that we append the graph path with the test_instance_id
# so that the graph can be uniquely identified in the thread-safety test!
graph_path = self.TEST_GRAPH_PATH + str(test_instance_id)
og.Controller.create_graph({"graph_path": graph_path, "evaluator_name": "execution"})
(,, _, _, _, _, write_node, finish_counter), _, _ = og.Controller.edit(
```
graph_path,
{
self.keys.CREATE_NODES: [
("OnTick", "omni.graph.action.OnTick"),
("Const", "omni.graph.nodes.ConstantInt2"),
("StopNum", "omni.graph.nodes.ConstantInt"),
("Add", "omni.graph.nodes.Add"),
("Branch", "omni.graph.action.Branch"),
("For", "omni.graph.action.ForLoop"),
("Write1", "omni.graph.nodes.WritePrimAttribute"),
("FinishCounter", "omni.graph.action.Counter"),
],
self.keys.SET_VALUES: [
("OnTick.inputs:onlyPlayback", False),
("Const.inputs:value", [1, 2]),
("StopNum.inputs:value", 3),
("Write1.inputs:name", "val1"),
("Write1.inputs:primPath", "/World/TestPrim"),
("Write1.inputs:usePath", True),
("Branch.inputs:condition", True),
],
self.keys.CONNECT: [
("OnTick.outputs:tick", "For.inputs:execIn"),
("StopNum.inputs:value", "For.inputs:stop"),
("For.outputs:loopBody", "Branch.inputs:execIn"),
("For.outputs:finished", "FinishCounter.inputs:execIn"),
("Branch.outputs:execTrue", "Write1.inputs:execIn"),
("For.outputs:value", "Add.inputs:a"),
("Const.inputs:value", "Add.inputs:b"),
("Add.outputs:sum", "Write1.inputs:value"),
],
},
)
# Evaluate the graph(s). Yielding to wait for compute to happen across
# all graph instances before continuing the test.
yield ThreadsafetyTestUtils.EVALUATION_ALL_GRAPHS
self.assertListEqual([3, 4], list(stage.GetAttributeAtPath("/World/TestPrim.val1").Get()))
self.assertEqual(3, write_node.get_compute_count())
self.assertEqual(1, finish_counter.get_compute_count())
# Remove the prim from the stage at the very end of
# the test, when it's no longer needed.
ThreadsafetyTestUtils.single_evaluation_last_test_instance(
test_instance_id,
lambda: stage.RemovePrim("/World/TestPrim")
)
```
To summarize, converting from the former to the latter involves:
- Adding the `make_threading_test` decorator to the top of the test method one wishes to convert.
- Removing the `async` keyword from the test.
- Adding a `test_instance_id` argument to the test method and using it to identify objects that need to remain unique in each instanced test (e.g. the test graph path, which is the most common use for `test_instance_id`).
- Replacing all `await og.Controller.evaluate()` calls with `yield ThreadsafetyTestUtils.EVALUATION_ALL_GRAPHS`.
- Replacing all `await omni.kit.app.get_app().next_update_async()` calls with `yield ThreadsafetyTestUtils.EVALUATION_WAIT_FRAME`.
- Executing any code that needs to occur only once at the start of the test via the `ThreadsafetyTestUtils.single_evaluation_first_test_instance()` method.
- Executing any code that needs to occur only once at the end of the test via the `ThreadsafetyTestUtils.single_evaluation_last_test_instance()` method.
- Adding any objects/variables that need to persist/remain the same across all test instances to an internal cache using the `ThreadsafetyTestUtils.add_to_threading_cache()` method.
Finally, note that threaded tests which are converted in this manner can also be easily configured to become serial tests once more by just swapping the `make_threading_test` decorator with `make_serial_test` (rather than having to rewrite the entire chunk of code to resemble the “`async` version”). | 54,104 |
Sdf.md | # Sdf module
Summary: The Sdf (Scene Description Foundation) library provides the foundation for serializing scene description, along with the primitive abstractions for interacting with it.
## Classes:
- **AngularUnit**
- [pxr.Sdf.AngularUnit](#pxr.Sdf.AngularUnit)
- **AssetPath**
- [pxr.Sdf.AssetPath](#pxr.Sdf.AssetPath)
- Contains an asset path and an optional resolved path.
- **AssetPathArray**
- [pxr.Sdf.AssetPathArray](#pxr.Sdf.AssetPathArray)
- An array of type SdfAssetPath.
- **AttributeSpec**
- [pxr.Sdf.AttributeSpec](#pxr.Sdf.AttributeSpec)
- A subclass of SdfPropertySpec that holds typed data.
- **AuthoringError**
- [pxr.Sdf.AuthoringError](#pxr.Sdf.AuthoringError)
- **BatchNamespaceEdit**
- [pxr.Sdf.BatchNamespaceEdit](#pxr.Sdf.BatchNamespaceEdit)
- A description of an arbitrarily complex namespace edit.
- **ChangeBlock**
- [pxr.Sdf.ChangeBlock](#pxr.Sdf.ChangeBlock)
- **DANGER DANGER DANGER**
- **ChildrenView_Sdf_AttributeChildPolicy_SdfAttributeViewPredicate**
- [pxr.Sdf.ChildrenView_Sdf_AttributeChildPolicy_SdfAttributeViewPredicate](#pxr.Sdf.ChildrenView_Sdf_AttributeChildPolicy_SdfAttributeViewPredicate)
- **ChildrenView_Sdf_AttributeChildPolicy_SdfChildrenViewTrivialPredicate_SdfHandle_SdfAttributeSpec___**
- [pxr.Sdf.ChildrenView_Sdf_AttributeChildPolicy_SdfChildrenViewTrivialPredicate_SdfHandle_SdfAttributeSpec___](#pxr.Sdf.ChildrenView_Sdf_AttributeChildPolicy_SdfChildrenViewTrivialPredicate_SdfHandle_SdfAttributeSpec___)
- **ChildrenView_Sdf_PrimChildPolicy_SdfChildrenViewTrivialPredicate_SdfHandle_SdfPrimSpec___**
- **ChildrenView_Sdf_PropertyChildPolicy_SdfChildrenViewTrivialPredicate_SdfHandle_SdfPropertySpec___**
- **ChildrenView_Sdf_RelationshipChildPolicy_SdfRelationshipViewPredicate**
- **ChildrenView_Sdf_VariantChildPolicy_SdfChildrenViewTrivialPredicate_SdfHandle_SdfVariantSpec___**
- **ChildrenView_Sdf_VariantSetChildPolicy_SdfChildrenViewTrivialPredicate_SdfHandle_SdfVariantSetSpec___**
- **CleanupEnabler**
- An RAII class which, when an instance is alive, enables scheduling of automatic cleanup of SdfLayers.
- **DimensionlessUnit**
- **FastUpdateList**
- **FileFormat**
- Base class for file format implementations.
- **Int64ListOp**
- **IntListOp**
- **Layer**
- A scene description container that can combine with other such containers to form simple component assets, and successively larger aggregates.
- **LayerOffset**
- Represents a time offset and scale between layers.
- **LayerTree**
- A SdfLayerTree is an immutable tree structure representing a sublayer stack and its recursive structure.
- **LengthUnit**
- **ListEditorProxy_SdfNameKeyPolicy**
- **ListEditorProxy_SdfPathKeyPolicy**
- **ListEditorProxy_SdfPayloadTypePolicy**
- **ListEditorProxy_SdfReferenceTypePolicy**
- **ListOpType**
- **ListProxy_SdfNameKeyPolicy**
- **ListProxy_SdfNameTokenKeyPolicy**
- **ListProxy_SdfPathKeyPolicy**
- **ListProxy_SdfPayloadTypePolicy**
- **ListProxy_SdfReferenceTypePolicy**
- **ListProxy_SdfSubLayerTypePolicy**
- **MapEditProxy_VtDictionary**
- **MapEditProxy_map_SdfPath_SdfPath_less_SdfPath__allocator_pair_SdfPath_const__SdfPath_____**
- **MapEditProxy_map_string_string_less_string__allocator_pair_stringconst__string_____**
- **NamespaceEdit**
- **NamespaceEditDetail**
- **Notice**
- **Path**
- **PathArray**
- **PathListOp**
- **Payload**
- Represents a payload and all its meta data.
- **PayloadListOp**
- **Permission**
- **PrimSpec**
- Represents a prim description in an SdfLayer object.
- **PropertySpec**
- Base class for SdfAttributeSpec and SdfRelationshipSpec.
- **PseudoRootSpec**
- **Reference**
- Represents a reference and all its meta data.
- **ReferenceListOp**
- **RelationshipSpec**
- A property that contains a reference to one or more SdfPrimSpec instances.
- **Spec**
- Base class for all Sdf spec classes.
- **SpecType**
- **Specifier**
- **StringListOp**
- **TimeCode**
- Value type that represents a time code.
- **TimeCodeArray**
- An array of type SdfTimeCode.
- **TokenListOp**
- **UInt64ListOp**
- **UIntListOp**
- **UnregisteredValue**
| Name | Description |
| --- | --- |
| UnregisteredValueListOp | |
| ValueBlock | A special value type that can be used to explicitly author an opinion for an attribute's default value or time sample value that represents having no value. |
| ValueRoleNames | |
| ValueTypeName | Represents a value type name, i.e. an attribute's type name. |
| ValueTypeNames | |
| Variability | |
| VariantSetSpec | Represents a coherent set of alternate representations for part of a scene. |
| VariantSpec | Represents a single variant in a variant set. |
**Functions:**
| Name | Description |
| --- | --- |
| Find (layerFileName, scenePath) | layerFileName: string scenePath: Path |
### pxr.Sdf.AngularUnit

**Methods:**

| Name | Description |
| --- | --- |
| GetValueFromName | |

**Attributes:**

| Name | Description |
| --- | --- |
| allValues | |
### pxr.Sdf.AssetPath
Contains an asset path and an optional resolved path. Asset paths may contain non-control UTF-8 encoded characters. Specifically, U+0000..U+001F (C0 controls), U+007F (delete), and U+0080..U+009F (C1 controls) are disallowed. Attempts to construct asset paths with such characters will issue a TfError and produce the default-constructed empty asset path.
**Attributes:**
- **path**
- **resolvedPath** (str)
- Return the resolved asset path, if any. Note that SdfAssetPath carries a resolved path only if its creator passed one to the constructor; SdfAssetPath never performs resolution itself.
### pxr.Sdf.AssetPathArray
An array of type SdfAssetPath.
### pxr.Sdf.AttributeSpec
A subclass of SdfPropertySpec that holds typed data. Attributes are typed data containers that can optionally hold any and all of the following:
- A single default value.
- An array of knot values describing how the value varies over time.
- A dictionary of posed values, indexed by name.

The values contained in an attribute must all be of the same type. In the Python API the `typeName` property holds the attribute type. In the C++ API, you can get the attribute type using the GetTypeName() method. In addition, all values, including all knot values, must be the same shape. For information on shapes, see the VtShape class reference in the C++ documentation.
**Methods:**
- **ClearColorSpace**()
- Clears the colorSpace metadata value set on this attribute.
- **HasColorSpace**()
- Returns true if this attribute has a colorSpace value authored.
**Attributes:**
| Name | Description |
| --- | --- |
| ConnectionPathsKey | |
| DefaultValueKey | |
| DisplayUnitKey | |
| allowedTokens | The allowed value tokens for this property. |
| colorSpace | The color-space in which the attribute value is authored. |
| connectionPathList | A PathListEditor for the attribute's connection paths. |
| displayUnit | The display unit for this attribute. |
| expired | |
| roleName | The roleName for this attribute's typeName. |
| typeName | The typename of this attribute. |
| valueType | The value type of this attribute. |
### ClearColorSpace()
Clears the colorSpace metadata value set on this attribute.
### HasColorSpace()
Returns true if this attribute has a colorSpace value authored.
### ConnectionPathsKey = 'connectionPaths'
### DefaultValueKey = 'default'
### DisplayUnitKey = 'displayUnit'
### allowedTokens
**property** `allowedTokens`
> The allowed value tokens for this property
### colorSpace
**property** `colorSpace`
> The color-space in which the attribute value is authored.
### connectionPathList
**property** `connectionPathList`
> A PathListEditor for the attribute’s connection paths.
> The list of the connection paths for this attribute may be modified with this PathListEditor.
> A PathListEditor may express a list either as an explicit value or as a set of list editing operations. See GdListEditor for more information.
### displayUnit
**property** `displayUnit`
> The display unit for this attribute.
### expired
**property** `expired`
### roleName
**property** `roleName`
> The roleName for this attribute’s typeName.
### typeName
**property** `typeName`
> The typename of this attribute.
### valueType
**property** `valueType`
> The value type of this attribute.
### pxr.Sdf.AuthoringError

**Methods:**

| Name | Description |
| --- | --- |
| GetValueFromName | |

**Attributes:**

| Name | Description |
| --- | --- |
| allValues | |
## pxr.Sdf.BatchNamespaceEdit
A description of an arbitrarily complex namespace edit.
A `SdfBatchNamespaceEdit` object describes zero or more namespace edits. Various types providing a namespace will allow the edits to be applied in a single operation and also allow testing if this will work.
Clients are encouraged to group several edits into one object because that may allow more efficient processing of the edits. If, for example, you need to reparent several prims it may be faster to add all of the reparents to a single `SdfBatchNamespaceEdit` and apply them at once than to apply each separately.
Objects that allow applying edits are free to apply the edits in any way and any order they see fit but they should guarantee that the resulting namespace will be as if each edit was applied one at a time in the order they were added.
Note that the above rule permits skipping edits that have no effect or generate a non-final state. For example, if renaming A to B then to C we could just rename A to C. This means notices may be elided. However, implementations must not elide notices that contain information about any edit that clients must be able to know but otherwise cannot determine.
**Methods:**
| Method | Description |
| --- | --- |
| `Add(edit)` | Add a namespace edit. |
| `Process(processedEdits, hasObjectAtPath, ...)` | Validate the edits and generate a possibly more efficient edit sequence. |
**Attributes:**
| Attribute | Description |
| --- | --- |
| `edits` | list[SdfNamespaceEdit] |
### Add

`Add(edit) -> None`

Add a namespace edit.

**Parameters:**
- **edit** (NamespaceEdit)

`Add(currentPath, newPath, index) -> None`

Add a namespace edit.

**Parameters:**
- **currentPath** (NamespaceEdit.Path)
- **newPath** (NamespaceEdit.Path)
- **index** (NamespaceEdit.Index)
### Process
Process(processedEdits, hasObjectAtPath, canEdit, details, fixBackpointers) → bool
Validate the edits and generate a possibly more efficient edit sequence.
Edits are treated as if they were performed one at a time in sequence, therefore each edit occurs in the namespace resulting from all previous edits.
Editing the descendants of the object in each edit is implied. If an object is removed then the new path will be empty. If an object is removed after being otherwise edited, the other edits will be processed and included in `processedEdits` followed by the removal. This allows clients to fixup references to point to the object’s final location prior to removal.
This function needs help to determine if edits are allowed. The callbacks provide that help. `hasObjectAtPath` returns `true` iff there’s an object at the given path. This path will be in the original namespace not any intermediate or final namespace. `canEdit` returns `true` iff the object at the current path can be namespace edited to the new path, ignoring whether an object already exists at the new path. Both paths are in the original namespace. If it returns `false` it should set the string to the reason why the edit isn’t allowed. It should not write either path to the string.
If `hasObjectAtPath` is invalid then this assumes objects exist where they should and don't exist where they shouldn't. Use this with care. If `canEdit` is invalid then it's assumed all edits are valid.
If `fixBackpointers` is `true` then target/connection paths are expected to be in the intermediate namespace resulting from all previous edits. If `false` and any current or new path contains a target or connection path that has been edited then this will generate an error.
This method returns `true` if the edits are allowed and sets `processedEdits` to a new edit sequence at least as efficient as the input sequence. If not allowed it returns `false` and appends reasons why not to `details`.
Parameters:
- **processedEdits** (list[SdfNamespaceEdit]) –
- **hasObjectAtPath** (HasObjectAtPath) –
- **canEdit** (CanEdit) –
- **details** (list[SdfNamespaceEditDetail]) –
- **fixBackpointers** (bool) –
### edits

**property** `edits: list[SdfNamespaceEdit]`

Returns the edits.
## pxr.Sdf.ChangeBlock
**DANGER DANGER DANGER**
Please make sure you have read and fully understand the issues below before using a changeblock! They are very easy to use in an unsafe way that could make the system crash or corrupt data. If you have any questions, please contact the USD team, who would be happy to help!
SdfChangeBlock provides a way to group a round of related changes to scene description in order to process them more efficiently.
Normally, Sdf sends notification immediately as changes are made so that downstream representations like UsdStage can update accordingly.
However, sometimes it can be advantageous to group a series of Sdf changes into a batch so that they can be processed more efficiently, with a single round of change processing. An example might be when setting many avar values on a model at the same time.
Opening a changeblock tells Sdf to delay sending notification about changes until the outermost changeblock is exited. Until then, Sdf internally queues up the notification it needs to send.
It is **not** safe to use Usd or other downstream API while a changeblock is open!! This is because those derived representations will not have had a chance to update while the changeblock is open. Not only will their view of the world be stale, it could be unsafe to even make queries from, since they may be holding onto expired handles to Sdf objects that no longer exist. If you need to make a bunch of changes to scene description, the best approach is to build a list of necessary changes that can be performed directly via the Sdf API, then submit those all inside a changeblock without talking to any downstream modules. For example, this is how many mutators in Usd that operate on more than one field or Spec work.
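The deferred-notification pattern that SdfChangeBlock implements can be sketched in plain Python. This is a hypothetical model (the names `ChangeBlock`, `notify`, `_queued`, and `_delivered` are illustrative only, not the USD implementation): notifications queue while any block is open and flush in one batch when the outermost block exits.

```python
class ChangeBlock:
    """Sketch of nested change-block semantics: notifications queue up
    while any block is open and flush when the outermost block exits."""

    _depth = 0        # nesting depth of currently open blocks
    _queued = []      # notifications delayed while a block is open
    _delivered = []   # what downstream listeners actually receive

    def __enter__(self):
        ChangeBlock._depth += 1
        return self

    def __exit__(self, *exc):
        ChangeBlock._depth -= 1
        if ChangeBlock._depth == 0:
            # Outermost block closed: deliver everything as one batch.
            ChangeBlock._delivered.extend(ChangeBlock._queued)
            ChangeBlock._queued.clear()

    @classmethod
    def notify(cls, change):
        if cls._depth > 0:
            cls._queued.append(change)      # delayed inside a block
        else:
            cls._delivered.append(change)   # immediate outside a block

with ChangeBlock():
    ChangeBlock.notify("set avar 1")
    with ChangeBlock():                     # nested blocks extend the batch
        ChangeBlock.notify("set avar 2")
    assert ChangeBlock._delivered == []     # nothing delivered yet
# Both notifications arrive together after the outermost block closes.
```

This also illustrates why querying downstream state inside a block is unsafe: until the flush, listeners have seen none of the queued changes.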
## pxr.Sdf.ChildrenView_Sdf_AttributeChildPolicy_SdfAttributeViewPredicate
**Classes:**
| Class Name | Description |
|------------|-------------|
| ChildrenView_Sdf_AttributeChildPolicy_SdfAttributeViewPredicate_Iterator | |
| ChildrenView_Sdf_AttributeChildPolicy_SdfAttributeViewPredicate_KeyIterator | |
| ChildrenView_Sdf_AttributeChildPolicy_SdfAttributeViewPredicate_ValueIterator | |
**Methods:**
| Method Name | Description |
|-------------|-------------|
| get | |
| index | |
| items | |
| keys | |
| values | |
### ChildrenView_Sdf_AttributeChildPolicy_SdfAttributeViewPredicate_Iterator
### ChildrenView_Sdf_AttributeChildPolicy_SdfAttributeViewPredicate_KeyIterator
### ChildrenView_Sdf_AttributeChildPolicy_SdfAttributeViewPredicate_ValueIterator
### get
### index
### items
### keys
### values
### ChildrenView_Sdf_AttributeChildPolicy_SdfChildrenViewTrivialPredicate_SdfHandle_SdfAttributeSpec___

**Classes:**

- ChildrenView_Sdf_AttributeChildPolicy_SdfChildrenViewTrivialPredicate_SdfHandle_SdfAttributeSpec____Iterator
- ChildrenView_Sdf_AttributeChildPolicy_SdfChildrenViewTrivialPredicate_SdfHandle_SdfAttributeSpec____KeyIterator
- ChildrenView_Sdf_AttributeChildPolicy_SdfChildrenViewTrivialPredicate_SdfHandle_SdfAttributeSpec____ValueIterator

**Methods:**

- `get`
- `index`
- `items`
- `keys`
- `values`
## pxr.Sdf.ChildrenView_Sdf_PrimChildPolicy_SdfChildrenViewTrivialPredicate_SdfHandle_SdfPrimSpec___
### Classes:
- **ChildrenView_Sdf_PrimChildPolicy_SdfChildrenViewTrivialPredicate_SdfHandle_SdfPrimSpec____Iterator**
- **ChildrenView_Sdf_PrimChildPolicy_SdfChildrenViewTrivialPredicate_SdfHandle_SdfPrimSpec____KeyIterator**
- **ChildrenView_Sdf_PrimChildPolicy_SdfChildrenViewTrivialPredicate_SdfHandle_SdfPrimSpec____ValueIterator**
### Methods:
- **get**
- **index**
- **items**
- **keys**
- **values**
## pxr.Sdf.ChildrenView_Sdf_PropertyChildPolicy_SdfChildrenViewTrivialPredicate_SdfHandle_SdfPropertySpec___

**Classes:**

- **ChildrenView_Sdf_PropertyChildPolicy_SdfChildrenViewTrivialPredicate_SdfHandle_SdfPropertySpec____Iterator**
- **ChildrenView_Sdf_PropertyChildPolicy_SdfChildrenViewTrivialPredicate_SdfHandle_SdfPropertySpec____KeyIterator**
- **ChildrenView_Sdf_PropertyChildPolicy_SdfChildrenViewTrivialPredicate_SdfHandle_SdfPropertySpec____ValueIterator**

**Methods:**

- `get`
- `index`
- `items`
- `keys`
- `values`
### ChildrenView_Sdf_PropertyChildPolicy_SdfChildrenViewTrivialPredicate_SdfHandle_SdfPropertySpec____Iterator
### ChildrenView_Sdf_PropertyChildPolicy_SdfChildrenViewTrivialPredicate_SdfHandle_SdfPropertySpec____KeyIterator
### ChildrenView_Sdf_PropertyChildPolicy_SdfChildrenViewTrivialPredicate_SdfHandle_SdfPropertySpec____ValueIterator
### get
### index
### items
### keys
### values
### ChildrenView_Sdf_RelationshipChildPolicy_SdfRelationshipViewPredicate
**Classes:**
| Class | Description |
|-------|-------------|
| ChildrenView_Sdf_RelationshipChildPolicy_SdfRelationshipViewPredicate_Iterator | |
| ChildrenView_Sdf_RelationshipChildPolicy_SdfRelationshipViewPredicate_KeyIterator | |
| ChildrenView_Sdf_RelationshipChildPolicy_SdfRelationshipViewPredicate_ValueIterator | |
**Methods:**
| Method | Description |
|--------|-------------|
| get | |
| index | |
| items | |
| keys | |
| values | |
### Classes:
- **ChildrenView_Sdf_RelationshipChildPolicy_SdfRelationshipViewPredicate_Iterator**
- **ChildrenView_Sdf_RelationshipChildPolicy_SdfRelationshipViewPredicate_KeyIterator**
- **ChildrenView_Sdf_RelationshipChildPolicy_SdfRelationshipViewPredicate_ValueIterator**
### Methods:
- **get()**
- **index()**
- **items()**
- **keys()**
- **values()**
### Classes:
- **ChildrenView_Sdf_VariantChildPolicy_SdfChildrenViewTrivialPredicate_SdfHandle_SdfVariantSpec___**
- **ChildrenView_Sdf_VariantChildPolicy_SdfChildrenViewTrivialPredicate_SdfHandle_SdfVariantSpec____Iterator**
- **ChildrenView_Sdf_VariantChildPolicy_SdfChildrenViewTrivialPredicate_SdfHandle_SdfVariantSpec____KeyIterator**
- **ChildrenView_Sdf_VariantChildPolicy_SdfChildrenViewTrivialPredicate_SdfHandle_SdfVariantSpec____ValueIterator**
## Methods:
- **get**
- **index**
- **items**
- **keys**
- **values**
### ChildrenView_Sdf_VariantChildPolicy_SdfChildrenViewTrivialPredicate_SdfHandle_SdfVariantSpec____Iterator
### ChildrenView_Sdf_VariantChildPolicy_SdfChildrenViewTrivialPredicate_SdfHandle_SdfVariantSpec____KeyIterator
### ChildrenView_Sdf_VariantChildPolicy_SdfChildrenViewTrivialPredicate_SdfHandle_SdfVariantSpec____ValueIterator
### get
### index
### items
### keys
### values
## pxr.Sdf.ChildrenView_Sdf_VariantSetChildPolicy_SdfChildrenViewTrivialPredicate_SdfHandle_SdfVariantSetSpec___

**Classes:**

- ChildrenView_Sdf_VariantSetChildPolicy_SdfChildrenViewTrivialPredicate_SdfHandle_SdfVariantSetSpec____Iterator
- ChildrenView_Sdf_VariantSetChildPolicy_SdfChildrenViewTrivialPredicate_SdfHandle_SdfVariantSetSpec____KeyIterator
- ChildrenView_Sdf_VariantSetChildPolicy_SdfChildrenViewTrivialPredicate_SdfHandle_SdfVariantSetSpec____ValueIterator

**Methods:**

- `get`
- `index`
- `items`
- `keys`
- `values`
### ChildrenView_Sdf_VariantSetChildPolicy_SdfChildrenViewTrivialPredicate_SdfHandle_SdfVariantSetSpec____Iterator
### ChildrenView_Sdf_VariantSetChildPolicy_SdfChildrenViewTrivialPredicate_SdfHandle_SdfVariantSetSpec____KeyIterator
### ChildrenView_Sdf_VariantSetChildPolicy_SdfChildrenViewTrivialPredicate_SdfHandle_SdfVariantSetSpec____ValueIterator
### get()
### index()
### items()
### keys()
### values()
## pxr.Sdf.CleanupEnabler
An RAII class which, when an instance is alive, enables scheduling of automatic cleanup of SdfLayers.
Any affected specs which no longer contribute to the scene will be removed when the last SdfCleanupEnabler instance goes out of scope. Note that for this purpose, SdfPropertySpecs are removed if they have only required fields (see SdfPropertySpecs::HasOnlyRequiredFields), but only if the property spec itself was affected by an edit that left it with only required fields. This will have the effect of uninstantiating on-demand attributes. For example, if its parent prim was affected by an edit that left it otherwise inert, it will not be removed if it contains an SdfPropertySpec with only required fields, but if the property spec itself is edited leaving it with only required fields, it will be removed, potentially uninstantiating it if it’s an on-demand property.
SdfCleanupEnablers are accessible in both C++ and Python.
SdfCleanupEnabler can be used in the following manner:
```cpp
{
SdfCleanupEnabler enabler;
// Perform any action that might otherwise leave inert specs around,
// such as removing info from properties or prims, or removing name
// children. i.e:
primSpec->ClearInfo(SdfFieldKeys->Default);
// When enabler goes out of scope on the next line, primSpec will
// be removed if it has been left as an empty over.
}
```
## pxr.Sdf.DimensionlessUnit
**Methods:**
- `GetValueFromName`
**Attributes:**
- `allValues`
```python
static GetValueFromName()
```
```python
allValues = (Sdf.DimensionlessUnitPercent, Sdf.DimensionlessUnitDefault)
```
### pxr.Sdf.FastUpdateList
**Classes:**
- `FastUpdate`
**Attributes:**
- `fastUpdates`
- `hasCompositionDependents`
```python
class FastUpdate
```
**Attributes:**
- `path`
- `value`
```python
property path
```
```python
property value
```
### pxr.Sdf.FileFormat
Base class for file format implementations.
**Classes:**
- Tokens
**Methods:**
- CanRead(file)
- Returns true if `file` can be read by this format.
- FindAllFileFormatExtensions() -> set[str]
- FindByExtension(path, target) -> FileFormat
- FindById(formatId) -> FileFormat
- GetFileExtension(s) -> str
- GetFileExtensions()
- Returns a list of extensions that this format supports.
- IsPackage()
- Returns true if this file format is a package containing other assets.
- IsSupportedExtension(extension)
- Returns true if `extension` matches one of the extensions returned by GetFileExtensions.
## Attributes:
| Attribute | Description |
|-----------|-------------|
| expired | True if this object has expired, False otherwise. |
| fileCookie | str |
| formatId | str |
| primaryFileExtension | str |
| target | str |
## Tokens
### Attributes:
| Attribute | Description |
|-----------|-------------|
| TargetArg | |
## TargetArg = 'target'
## CanRead(file) -> bool
Returns true if `file` can be read by this format.
**Parameters**
- **file** (str) –
## FindAllFileFormatExtensions() -> set[str]
classmethod FindAllFileFormatExtensions() -> set[str]
Returns a set containing the extension(s) corresponding to all registered file formats.
## FindByExtension(path, target) -> FileFormat
classmethod FindByExtension(path, target) -> FileFormat
Returns the file format instance that supports the extension for `path`.

If a format with a matching extension is not found, this returns a null file format pointer.

An extension may be handled by multiple file formats, but each with a different target. In such cases, if no `target` is specified, the file format that is registered as the primary plugin will be returned. Otherwise, the file format whose target matches `target` will be returned.

**Parameters**

- **path** (str) –
- **target** (str) –

FindByExtension(path, args) -> FileFormat

Returns a file format instance that supports the extension for `path` and whose target matches one of those specified by the given `args`.

If the `args` specify no target, then the file format that is registered as the primary plugin will be returned. If a format with a matching extension is not found, this returns a null file format pointer.

**Parameters**

- **path** (str) –
- **args** (FileFormatArguments) –

## FindById(formatId) -> FileFormat

classmethod FindById(formatId) -> FileFormat

Returns the file format instance with the specified `formatId` identifier.

If a format with a matching identifier is not found, this returns a null file format pointer.

**Parameters**

- **formatId** (str) –

## GetFileExtension(s) -> str

classmethod GetFileExtension(s) -> str

Returns the file extension for path or file name `s`, without the leading dot character.

**Parameters**

- **s** (str) –

## GetFileExtensions() -> list[str]

Returns a list of extensions that this format supports.

## IsPackage() -> bool

Returns true if this file format is a package containing other assets.
### pxr.Sdf.FileFormat.IsSupportedExtension
- **Description**: Returns true if `extension` matches one of the extensions returned by GetFileExtensions.
- **Parameters**:
- **extension** (str) –
### pxr.Sdf.FileFormat.expired
- **Description**: True if this object has expired, False otherwise.
### pxr.Sdf.FileFormat.fileCookie
- **Description**: Returns the cookie to be used when writing files with this format.
- **Type**: type
### pxr.Sdf.FileFormat.formatId
- **Description**: Returns the format identifier.
- **Type**: type
### pxr.Sdf.FileFormat.primaryFileExtension
- **Description**: Returns the primary file extension for this format. This is the extension that is reported for layers using this file format.
- **Type**: type
### pxr.Sdf.FileFormat.target
- **Description**: Returns the target for this file format.
- **Type**: type
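GetFileExtension, described above, returns the extension without the leading dot. A plain-Python sketch of that behavior (the helper name `get_file_extension` is hypothetical; the real method may apply additional rules, e.g. for package paths):

```python
import os

def get_file_extension(path):
    """Sketch: return the extension of a path or file name without
    the leading dot, or an empty string if there is none."""
    ext = os.path.splitext(path)[1]
    return ext[1:] if ext else ""

# Extension of a layer path, without the dot:
print(get_file_extension("shot/layer.usda"))  # -> usda
```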
### pxr.Sdf.Int64ListOp
- **Methods**:
- **ApplyOperations**
- **Clear**
- **ClearAndMakeExplicit**
- **Create**
- **CreateExplicit**
- **GetAddedOrExplicitItems**
- **HasItem**
- **Attributes**:
- **addedItems**
- **appendedItems**
- **deletedItems**
- **explicitItems**
- **isExplicit**
- **orderedItems**
- **prependedItems**
### pxr.Sdf.Int64ListOp.HasItem
- **Method:** HasItem()
### pxr.Sdf.Int64ListOp.addedItems
- **Property:** addedItems
### pxr.Sdf.Int64ListOp.appendedItems
- **Property:** appendedItems
### pxr.Sdf.Int64ListOp.deletedItems
- **Property:** deletedItems
### pxr.Sdf.Int64ListOp.explicitItems
- **Property:** explicitItems
### pxr.Sdf.Int64ListOp.isExplicit
- **Property:** isExplicit
### pxr.Sdf.Int64ListOp.orderedItems
- **Property:** orderedItems
### pxr.Sdf.Int64ListOp.prependedItems
- **Property:** prependedItems
### pxr.Sdf.IntListOp
- **Class:** pxr.Sdf.IntListOp
- **Methods:**
- ApplyOperations
- Clear
- ClearAndMakeExplicit
- Create
- CreateExplicit
- GetAddedOrExplicitItems
- HasItem
### Attributes:
- **addedItems**
- **appendedItems**
- **deletedItems**
- **explicitItems**
- **isExplicit**
- **orderedItems**
- **prependedItems**
### Methods:
- **ApplyOperations**()
- **Clear**()
- **ClearAndMakeExplicit**()
- **Create**()
- **CreateExplicit**()
- **GetAddedOrExplicitItems**()
- **HasItem**()
### Properties:
- **addedItems**
### pxr.Sdf.IntListOp.appendedItems
- **Property**: appendedItems
### pxr.Sdf.IntListOp.deletedItems
- **Property**: deletedItems
### pxr.Sdf.IntListOp.explicitItems
- **Property**: explicitItems
### pxr.Sdf.IntListOp.isExplicit
- **Property**: isExplicit
### pxr.Sdf.IntListOp.orderedItems
- **Property**: orderedItems
### pxr.Sdf.IntListOp.prependedItems
- **Property**: prependedItems
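The list-op fields above (`explicitItems`, `deletedItems`, `prependedItems`, `appendedItems`) combine by a well-defined application order. A simplified plain-Python sketch of that composition, not the USD implementation (`apply_list_op` is a hypothetical name, and the real SdfListOp also supports ordered and legacy added items):

```python
def apply_list_op(base, explicit=None, deleted=(), prepended=(), appended=()):
    """Sketch of simplified SdfListOp application: an explicit list
    replaces the input outright; otherwise deletes are removed, then
    prepends go in front and appends at the back (without duplicates)."""
    if explicit is not None:
        return list(explicit)                  # explicit mode wins outright
    result = [x for x in base if x not in deleted]
    result = [x for x in prepended if x not in result] + result
    result += [x for x in appended if x not in result]
    return result

# Non-explicit ops edit the incoming (weaker) list in place:
print(apply_list_op([1, 2, 3], deleted=[2], appended=[4]))  # -> [1, 3, 4]
# Explicit items ignore the incoming list entirely:
print(apply_list_op([1, 2, 3], explicit=[9]))               # -> [9]
```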
### pxr.Sdf.Layer
- **Class**: pxr.Sdf.Layer
- A scene description container that can combine with other such containers to form simple component assets, and successively larger aggregates. The contents of an SdfLayer adhere to the SdfData data model. A layer can be ephemeral, or be an asset accessed and serialized through the ArAsset and ArResolver interfaces.
- The SdfLayer class provides a consistent API for accessing and serializing scene description, using any data store provided by Ar plugins. Sdf itself provides a UTF-8 text format for layers identified by the ".sdf" identifier extension, but via the SdfFileFormat abstraction, allows downstream modules and plugins to adapt arbitrary data formats to the SdfData/SdfLayer model.
- The FindOrOpen() method returns a new SdfLayer object with scene description from any supported asset format. Once read, a layer remembers which asset it was read from. The Save() method saves the layer back out to the original asset. You can use the Export() method to write the layer to a different location. You can use the GetIdentifier() method to get the layer’s Id or GetRealPath() to get the resolved, full URI.
- Layers can have a timeCode range (startTimeCode and endTimeCode). This range represents the suggested playback range, but has no impact on the extent of the animation data that may be stored in the layer. The metadatum "timeCodesPerSecond" is used to annotate how the time ordinate for samples contained in the file scales to seconds. For example, if timeCodesPerSecond is 24, then a sample at time ordinate 24 should be viewed exactly one second after the sample at time ordinate 0.
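The timeCodesPerSecond scaling described above is plain arithmetic; a minimal sketch (the helper name `time_code_to_seconds` is hypothetical, not part of the Sdf API):

```python
def time_code_to_seconds(time_code, time_codes_per_second=24.0):
    """Convert a layer time ordinate to seconds using the layer's
    timeCodesPerSecond metadata (24 in the example above)."""
    return time_code / time_codes_per_second

# With timeCodesPerSecond = 24, ordinate 24 is one second after ordinate 0:
print(time_code_to_seconds(24, 24.0))  # -> 1.0
```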
#### Classes:
- DetachedLayerRules
#### Methods:
- **AddToMutedLayers**: classmethod AddToMutedLayers(mutedPath) -> None
- **Apply**: Performs a batch of namespace edits.
- **ApplyRootPrimOrder**: Reorders the given list of prim names according to the reorder rootPrims statement for this layer.
| Function Name | Description |
|---------------|-------------|
| Clear | Clears the layer of all content. |
| ClearColorConfiguration | Clears the color configuration metadata authored in this layer. |
| ClearColorManagementSystem | Clears the 'colorManagementSystem' metadata authored in this layer. |
| ClearCustomLayerData | Clears out the CustomLayerData dictionary associated with this layer. |
| ClearDefaultPrim | Clear the default prim metadata for this layer. |
| ClearEndTimeCode | Clear the endTimeCode opinion. |
| ClearFramePrecision | Clear the framePrecision opinion. |
| ClearFramesPerSecond | Clear the framesPerSecond opinion. |
| ClearOwner | Clear the owner opinion. |
| ClearSessionOwner | |
| ClearStartTimeCode | Clear the startTimeCode opinion. |
| ClearTimeCodesPerSecond | Clear the timeCodesPerSecond opinion. |
| ComputeAbsolutePath | Returns the path to the asset specified by `assetPath` using this layer to anchor the path if necessary. |
| CreateAnonymous | **classmethod** CreateAnonymous(tag, args) -> Layer |
| CreateIdentifier | **classmethod** CreateIdentifier(layerPath, arguments) -> str |
CreateNew
classmethod CreateNew(identifier, args) -> Layer
DumpLayerInfo
Debug helper to examine content of the current layer registry and the asset/real path of all layers in the registry.
EraseTimeSample(path, time)
param path
Export(filename, comment, args)
Exports this layer to a file.
ExportToString
Returns the string representation of the layer.
Find(filename)
filename : string
FindOrOpen
classmethod FindOrOpen(identifier, args) -> Layer
FindOrOpenRelativeToLayer
classmethod FindOrOpenRelativeToLayer(anchor, identifier, args) -> Layer
FindRelativeToLayer
Returns the open layer with the given filename, or None.
GetAssetInfo()
Returns resolve information from the last time the layer identifier was resolved.
GetAssetName()
Returns the asset name associated with this layer.
GetAttributeAtPath(path)
Returns an attribute at the given path.
GetBracketingTimeSamples(time, tLower, tUpper)
param time
GetBracketingTimeSamplesForPath(path, time, ...)
param path
GetCompositionAssetDependencies()
Return paths of all assets this layer depends on due to composition fields.
GetDetachedLayerRules() -> DetachedLayerRules
classmethod GetDetachedLayerRules() -> DetachedLayerRules
GetDisplayName()
Returns the layer's display name.
GetDisplayNameFromIdentifier(identifier) -> str
classmethod GetDisplayNameFromIdentifier(identifier) -> str
GetExternalAssetDependencies()
Returns a set of resolved paths to all external asset dependencies the layer needs to generate its contents.
GetExternalReferences()
Return a list of asset paths for this layer.
GetFileFormat()
Returns the file format used by this layer.
GetFileFormatArguments()
Returns the file format-specific arguments used during the construction of this layer.
GetLoadedLayers()
Return list of loaded layers.
GetMutedLayers()
Return list of muted layers.
GetNumTimeSamplesForPath(path)
param path
GetObjectAtPath(path)
Returns the object at the given path.
GetPrimAtPath(path)
Returns the prim at the given path.
GetPropertyAtPath(path)
Returns a property at the given path.
GetRelationshipAtPath(path)
Returns a relationship at the given path.
HasColorConfiguration()
Returns true if color configuration metadata is set in this layer.
HasColorManagementSystem()
Returns true if colorManagementSystem metadata is set in this layer.
HasCustomLayerData()
Returns true if CustomLayerData is authored on the layer.
HasDefaultPrim()
Return true if the default prim metadata is set in this layer.
HasEndTimeCode()
Returns true if the layer has an endTimeCode opinion.
HasFramePrecision()
Returns true if the layer has a frame precision opinion.
HasFramesPerSecond()
Returns true if the layer has a frames per second opinion.
HasOwner()
Returns true if the layer has an owner opinion.
HasSessionOwner()
Returns true if the layer has a session owner opinion.
HasStartTimeCode()
Returns true if the layer has a startTimeCode opinion.
HasTimeCodesPerSecond()
Returns true if the layer has a timeCodesPerSecond opinion.
Import(layerPath)
Imports the content of the given layer path, replacing the content of the current layer.
ImportFromString(string)
Reads this layer from the given string.
IsAnonymousLayerIdentifier(identifier) -> bool
IsDetached()
Returns true if this layer is detached from its serialized data store, false otherwise.
IsIncludedByDetachedLayerRules
classmethod IsIncludedByDetachedLayerRules(identifier) -> bool
IsMuted
classmethod IsMuted() -> bool
ListAllTimeSamples()
ListTimeSamplesForPath(path)
param path
New
classmethod New(fileFormat, identifier, args) -> Layer
OpenAsAnonymous
classmethod OpenAsAnonymous(layerPath, metadataOnly, tag) -> Layer
QueryTimeSample(path, time, value)
param path
Reload(force)
Reloads the layer from its persistent representation.
ReloadLayers
classmethod ReloadLayers(layers, force) -> bool
RemoveFromMutedLayers(mutedPath)
classmethod RemoveFromMutedLayers(mutedPath) -> None
RemoveInertSceneDescription()
Removes all scene description in this layer that does not affect the scene.
Save(force)
Returns true if successful, false if an error occurred.
ScheduleRemoveIfInert(spec)
Schedules `spec` to be removed if it no longer affects the scene when the last change block is closed, or now if there are no change blocks.
SetDetachedLayerRules
classmethod SetDetachedLayerRules(mask) -> None
SetMuted(muted)
Mutes the current layer if `muted` is `true`, and unmutes it otherwise.
SetPermissionToEdit(allow)
Sets permission to edit.
SetPermissionToSave(allow)
Sets permission to save.
SetTimeSample(path, time, value)
param path
SplitIdentifier
classmethod SplitIdentifier(identifier, layerPath, arguments) -> bool
StreamsData()
Returns true if this layer streams data from its serialized data store on demand, false otherwise.
TransferContent(layer)
Copies the content of the given layer into this layer.
Traverse(path, func)
param path
UpdateAssetInfo()
Update layer asset information.
UpdateCompositionAssetDependency(...)
Updates the asset path of a composition dependency in this layer.
UpdateExternalReference(oldAssetPath, ...)
Deprecated
Attributes:
ColorConfigurationKey
ColorManagementSystemKey
CommentKey
DocumentationKey
EndFrameKey
EndTimeCodeKey
FramePrecisionKey
FramesPerSecondKey
HasOwnedSubLayers
OwnerKey
SessionOwnerKey
StartFrameKey
StartTimeCodeKey
TimeCodesPerSecondKey
anonymous
colorConfiguration
colorManagementSystem
comment
customLayerData
The customLayerData dictionary associated with this layer.
```python
defaultPrim
```
The layer's default reference target token.
```python
dirty
```
bool
```python
documentation
```
The layer's documentation string.
```python
empty
```
bool
```python
endTimeCode
```
The end timeCode of this layer.
```python
expired
```
True if this object has expired, False otherwise.
```python
externalReferences
```
Return unique list of asset paths of external references for given layer.
```python
fileExtension
```
The layer's file extension.
```python
framePrecision
```
The number of digits of precision used in times in this layer.
```python
framesPerSecond
```
The frames per second used in this layer.
```python
hasOwnedSubLayers
```
Whether this layer's sub layers are expected to have owners.
```python
identifier
```
The layer's identifier.
```python
owner
```
The owner of this layer.
```python
permissionToEdit
```
Return true if permitted to be edited (modified), false otherwise.
```python
permissionToSave
```
Return true if permitted to be saved, false otherwise.
```python
pseudoRoot
```
The pseudo-root of the layer.
| Property | Description |
|----------|-------------|
| realPath | The layer's resolved path. |
| repositoryPath | The layer's associated repository path |
| resolvedPath | The layer's resolved path. |
| rootPrimOrder | Get/set the list of root prim names for this layer's 'reorder rootPrims' statement. |
| rootPrims | The root prims of this layer, as an ordered dictionary. |
| sessionOwner | The session owner of this layer. |
| startTimeCode | The start timeCode of this layer. |
| subLayerOffsets | The sublayer offsets of this layer, as a list. |
| subLayerPaths | The sublayer paths of this layer, as a list. |
| timeCodesPerSecond | The timeCodes per second used in this layer. |
| version | The layer's version. |
### DetachedLayerRules
**Methods:**
- Exclude
- GetExcluded
- GetIncluded
- Include
- IncludeAll
- IncludedAll
- IsIncluded

### Exclude()
### GetExcluded()
### GetIncluded()
### Include()
### IncludeAll()
### IncludedAll()
### IsIncluded()
### AddToMutedLayers

classmethod AddToMutedLayers(mutedPath) -> None

Add the specified path to the muted layers set.

**Parameters**

- **mutedPath** (str) –

### Apply

Apply(arg1) -> bool

Performs a batch of namespace edits.

Returns `true` on success and `false` on failure. On failure, no namespace edits will have occurred.

**Parameters**

- **arg1** (BatchNamespaceEdit) –
### ApplyRootPrimOrder
**ApplyRootPrimOrder**(vec) → None
Reorders the given list of prim names according to the reorder rootPrims statement for this layer.
This routine employs the standard list editing operations for ordered items in a ListEditor.
**Parameters**
- **vec** (list [str]) –
### CanApply
**CanApply**(arg1, details) → NamespaceEditDetail.Result
Check if a batch of namespace edits will succeed.
This returns `SdfNamespaceEditDetail::Okay` if they will succeed as a batch, `SdfNamespaceEditDetail::Unbatched` if the edits will succeed but will be applied unbatched, and `SdfNamespaceEditDetail::Error` if they will not succeed. No edits will be performed in any case.
If `details` is not `None` and the method does not return `Okay` then details about the problems will be appended to `details`. A problem may cause the method to return early, so `details` may not list every problem.
Note that Sdf does not track backpointers so it’s unable to fix up targets/connections to namespace edited objects. Clients must fix those to prevent them from falling off. In addition, this method will report failure if any relational attribute with a target to a namespace edited object is subsequently edited (in the same batch). Clients should perform edits on relational attributes first.
Clients may wish to report unbatch details to the user to confirm that the edits should be applied unbatched. This will give the user a chance to correct any problems that cause batching to fail and try again.
**Parameters**
- **arg1** (BatchNamespaceEdit) –
- **details** (list [SdfNamespaceEditDetail]) –
### Clear
**Clear**() → None
Clears the layer of all content.
This restores the layer to a state as if it had just been created with CreateNew(). This operation is Undo-able.
The fileName and whether journaling is enabled are not affected by this method.
### ClearColorConfiguration
**ClearColorConfiguration**() → None
Clears the color configuration metadata authored in this layer.
This operation does not affect other properties of the layer.
See HasColorConfiguration(), SetColorConfiguration().
### ClearColorManagementSystem
**ClearColorManagementSystem**() → None
Clears the 'colorManagementSystem' metadata authored in this layer.
See HasColorManagementSystem(), SetColorManagementSystem().
### ClearCustomLayerData
Clears out the CustomLayerData dictionary associated with this layer.
### ClearDefaultPrim
Clear the default prim metadata for this layer.
See GetDefaultPrim() and SetDefaultPrim() .
### ClearEndTimeCode
Clear the endTimeCode opinion.
### ClearFramePrecision
Clear the framePrecision opinion.
### ClearFramesPerSecond
Clear the framesPerSecond opinion.
### ClearOwner
Clear the owner opinion.
### ClearSessionOwner
### ClearStartTimeCode
```
Clear the startTimeCode opinion.
```
### ClearTimeCodesPerSecond
```
Clear the timeCodesPerSecond opinion.
```
### ComputeAbsolutePath
**ComputeAbsolutePath**(assetPath) -> str
Returns the path to the asset specified by `assetPath` using this layer to anchor the path if necessary.
Returns `assetPath` if it’s empty or an anonymous layer identifier.
This method can be used on asset paths that are authored in this layer to create new asset paths that can be copied to other layers. These new asset paths should refer to the same assets as the original asset paths. For example, if the underlying ArResolver is filesystem-based and `assetPath` is a relative filesystem path, this method might return the absolute filesystem path using this layer’s location as the anchor.
The returned path should in general not be assumed to be an absolute filesystem path or any other specific form. It is "absolute" in that it should resolve to the same asset regardless of what layer it’s authored in.
**Parameters**
- **assetPath** (str) –
### CreateAnonymous
**classmethod** CreateAnonymous(tag, args) -> Layer
Creates a new anonymous layer with an optional `tag`.
An anonymous layer is a layer with a system assigned identifier, that cannot be saved to disk via Save(). Anonymous layers have an identifier, but no real path or other asset information fields.
Anonymous layers may be tagged, which can be done to aid debugging subsystems that make use of anonymous layers. The tag becomes the display name of an anonymous layer, and is also included in the generated identifier. Untagged anonymous layers have an empty display name.
Additional arguments may be supplied via the `args` parameter. These arguments may control behavior specific to the layer’s file format.
**Parameters**
- **tag** (str) –
- **args** (FileFormatArguments) –
CreateAnonymous(tag, format, args) -> Layer
Create an anonymous layer with a specific `format`.
**Parameters**
- **tag** (str) –
- **format** (FileFormat) –
- **args** (FileFormatArguments) –
### CreateIdentifier
**classmethod** CreateIdentifier(layerPath, arguments) -> str
Joins the given layer path and arguments into an identifier.
**Parameters**
- **layerPath** (str) –
- **arguments** (FileFormatArguments) –
### CreateNew
**classmethod** CreateNew(identifier, args) -> Layer
Creates a new empty layer with the given identifier.
Additional arguments may be supplied via the `args` parameter. These arguments may control behavior specific to the layer’s file format.
**Parameters**
- **identifier** (str) –
- **args** (FileFormatArguments) –
---
CreateNew(fileFormat, identifier, args) -> Layer
Creates a new empty layer with the given identifier for a given file format class.
This function has the same behavior as the other CreateNew function, but uses the explicitly-specified `fileFormat` instead of attempting to discern the format from `identifier`.
**Parameters**
- **fileFormat** (FileFormat) –
- **identifier** (str) –
- **args** (FileFormatArguments) –
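A sketch of creating a file-backed layer (assuming the `pxr` module is available; the file name `shot.usda` is illustrative):

```python
import os
import tempfile

from pxr import Sdf

# Create a new layer backed by a file on disk, author a prim, and save.
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "shot.usda")

layer = Sdf.Layer.CreateNew(path)
Sdf.CreatePrimInLayer(layer, "/Root")
layer.Save()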
### DumpLayerInfo
Debug helper to examine content of the current layer registry and the asset/real path of all layers in the registry.
### EraseTimeSample
EraseTimeSample(path, time) -> None
**Parameters**
- **path** (Path) –
- **time** (float) –
### pxr.Sdf.Layer.Export
Exports this layer to a file.
Returns `true` if successful, `false` if an error occurred.
If `comment` is not empty, the layer gets exported with the given comment. Additional arguments may be supplied via the `args` parameter. These arguments may control behavior specific to the exported layer’s file format.
Note that the file name or comment of the original layer is not updated. This only saves a copy of the layer to the given filename. Subsequent calls to Save() will still save the layer to its previously remembered file name.
#### Parameters
- **filename** (str) –
- **comment** (str) –
- **args** (FileFormatArguments) –
### pxr.Sdf.Layer.ExportToString
Returns the string representation of the layer.
### pxr.Sdf.Layer.Find
**classmethod** Find(filename) -> Layer
Returns the open layer with the given filename, or None. Note that this is a static class method.
### pxr.Sdf.Layer.FindOrOpen
classmethod FindOrOpen(identifier, args) -> Layer
Return an existing layer with the given `identifier` and `args`, or else load it.
If the layer can’t be found or loaded, an error is posted and a null layer is returned.
Arguments in `args` will override any arguments specified in `identifier`.
#### Parameters
- **identifier** (str) –
- **args** (FileFormatArguments) –
### pxr.Sdf.Layer.FindOrOpenRelativeToLayer
classmethod FindOrOpenRelativeToLayer(anchor, identifier, args) -> Layer
Return an existing layer with the given `identifier` and `args`, or else load it.
If the layer can’t be found or loaded, an error is posted and a null layer is returned.
Arguments in `args` will override any arguments specified in `identifier`.
or else load it.
The given `identifier` will be resolved relative to the `anchor` layer. If the layer can’t be found or loaded, an error is posted and a null layer is returned.
If the `anchor` layer is invalid, issues a coding error and returns a null handle.
Arguments in `args` will override any arguments specified in `identifier`.
### Parameters
- **anchor** (Layer) –
- **identifier** (str) –
- **args** (FileFormatArguments) –
#### FindRelativeToLayer()
Returns the open layer with the given filename, or None. If the filename is a relative path then it’s found relative to the given layer. Note that this is a static class method.
#### GetAssetInfo()
Returns resolve information from the last time the layer identifier was resolved.
#### GetAssetName()
Returns the asset name associated with this layer.
#### GetAttributeAtPath(path)
Returns an attribute at the given `path`.
Returns `None` if there is no attribute at `path`. This is simply a more specifically typed version of `GetObjectAtPath()`.
##### Parameters
- **path** (Path) –
#### GetBracketingTimeSamples(time, tLower, tUpper)
Returns whether the given time has bracketing time samples.
**Parameters**
- **time** (`float`) –
- **tLower** (`float`) –
- **tUpper** (`float`) –
### GetBracketingTimeSamplesForPath
```python
GetBracketingTimeSamplesForPath(path, time, tLower, tUpper) -> bool
```
Returns whether the given time has bracketing time samples for the given `path`.
**Parameters**
- **path** (`Path`) –
- **time** (`float`) –
- **tLower** (`float`) –
- **tUpper** (`float`) –
### GetCompositionAssetDependencies
```python
GetCompositionAssetDependencies() -> set[str]
```
Return paths of all assets this layer depends on due to composition fields.
This includes the paths of all layers referred to by reference, payload, and sublayer fields in this layer. This function only returns direct composition dependencies of this layer, i.e. it does not recurse to find composition dependencies from its dependent layer assets.
### GetDetachedLayerRules
```python
@classmethod
GetDetachedLayerRules() -> DetachedLayerRules
```
Returns the current rules for the detached layer set.
### GetDisplayName
```python
GetDisplayName() -> str
```
Returns the layer’s display name.
The display name is the base filename of the identifier.
### GetDisplayNameFromIdentifier
```python
@classmethod
GetDisplayNameFromIdentifier(identifier) -> str
```
Returns the display name for the given `identifier`, using the same rules as GetDisplayName.
#### Parameters
- **identifier** (`str`) –
### GetExternalAssetDependencies
Returns a set of resolved paths to all external asset dependencies the layer needs to generate its contents.
These are additional asset dependencies that are determined by the layer’s file format and will be consulted during Reload() when determining if the layer needs to be reloaded. This specifically does not include dependencies related to composition, i.e., this will not include assets from references, payloads, and sublayers.
### GetExternalReferences
Return a list of asset paths for this layer.
### GetFileFormat
Returns the file format used by this layer.
### GetFileFormatArguments
Returns the file format-specific arguments used during the construction of this layer.
### GetLoadedLayers
Return list of loaded layers.
### GetMutedLayers
Return list of muted layers.
### GetNumTimeSamplesForPath
Parameters:
- path (Path) –
### GetObjectAtPath
**GetObjectAtPath**(path) → Spec
Returns the object at the given path.
There is no distinction between an absolute and relative path at the SdfLayer level.
Returns `None` if there is no object at `path`.
**Parameters**
- **path** (Path) –
### GetPrimAtPath
**GetPrimAtPath**(path)
Returns the prim at the given `path`.
Returns `None` if there is no prim at `path`. This is simply a more specifically typed version of `GetObjectAtPath()`.
**Parameters**
- **path** (Path) –
### GetPropertyAtPath
**GetPropertyAtPath**(path)
Returns a property at the given `path`.
Returns `None` if there is no property at `path`. This is simply a more specifically typed version of `GetObjectAtPath()`.
**Parameters**
- **path** (Path) –
### GetRelationshipAtPath
**GetRelationshipAtPath**(path)
Returns a relationship at the given `path`.
Returns `None` if there is no relationship at `path`. This is simply a more specifically typed version of `GetObjectAtPath()`.
**Parameters**
- **path** (Path) –
### HasColorConfiguration
**HasColorConfiguration**() → bool
Returns true if color configuration metadata is set in this layer.
### HasColorManagementSystem()
Returns true if colorManagementSystem metadata is set in this layer.
See GetColorManagementSystem(), SetColorManagementSystem().
### HasCustomLayerData()
Returns true if CustomLayerData is authored on the layer.
### HasDefaultPrim()
Return true if the default prim metadata is set in this layer.
See GetDefaultPrim() and SetDefaultPrim() .
### HasEndTimeCode()
Returns true if the layer has an endTimeCode opinion.
### HasFramePrecision()
Returns true if the layer has a frames precision opinion.
### HasFramesPerSecond()
Returns true if the layer has a frames per second opinion.
### HasOwner()
Returns true if the layer has an owner opinion.
### HasSessionOwner()
Returns true if the layer has a session owner opinion.
### HasStartTimeCode()
Returns true if the layer has a startTimeCode opinion.
### HasTimeCodesPerSecond()
Returns true if the layer has a timeCodesPerSecond opinion.
### pxr.Sdf.Layer.Import
Imports the content of the given layer path, replacing the content of the current layer.
Note: If the layer path is the same as the current layer’s real path, no action is taken (and a warning occurs). For this case use Reload().
Parameters:
- **layerPath** (str) –
### pxr.Sdf.Layer.ImportFromString
Reads this layer from the given string.
Returns `true` if successful, otherwise returns `false`.
Parameters:
- **string** (str) –
### pxr.Sdf.Layer.IsAnonymousLayerIdentifier
**classmethod** IsAnonymousLayerIdentifier(identifier) -> bool
Returns true if the `identifier` is an anonymous layer unique identifier.
Parameters:
- **identifier** (str) –
### pxr.Sdf.Layer.IsDetached
Returns true if this layer is detached from its serialized data store, false otherwise.
Detached layers are isolated from external changes to their serialized data.
### pxr.Sdf.Layer.IsIncludedByDetachedLayerRules
**classmethod** IsIncludedByDetachedLayerRules(identifier) -> bool
Returns whether the given layer identifier is included in the current rules for the detached layer set.
This is equivalent to GetDetachedLayerRules().IsIncluded(identifier).
Parameters:
- **identifier** (str) –
### pxr.Sdf.Layer.IsMuted
**classmethod** IsMuted() -> bool
Returns `true` if the layer is muted, otherwise `false`.
---
IsMuted(path) -> bool
Returns `true` if the specified layer path is muted.
Parameters:
- **path** (str) –
<dl class="py method">
<dt class="sig sig-object py" id="pxr.Sdf.Layer.ListAllTimeSamples">
<span class="sig-name descname">
<span class="pre">
ListAllTimeSamples
<span class="sig-paren">
(
<span class="sig-paren">
)
<span class="sig-return">
<span class="sig-return-icon">
→
<span class="sig-return-typehint">
<span class="pre">
set
<span class="p">
<span class="pre">
[
<span class="pre">
float
<span class="p">
<span class="pre">
]
<a class="headerlink" href="#pxr.Sdf.Layer.ListAllTimeSamples" title="Permalink to this definition">
<dd>
<dl class="py method">
<dt class="sig sig-object py" id="pxr.Sdf.Layer.ListTimeSamplesForPath">
<span class="sig-name descname">
<span class="pre">
ListTimeSamplesForPath
<span class="sig-paren">
(
<em class="sig-param">
<span class="n">
<span class="pre">
path
<span class="sig-paren">
)
<span class="sig-return">
<span class="sig-return-icon">
→
<span class="sig-return-typehint">
<span class="pre">
set
<span class="p">
<span class="pre">
[
<span class="pre">
float
<span class="p">
<span class="pre">
]
<a class="headerlink" href="#pxr.Sdf.Layer.ListTimeSamplesForPath" title="Permalink to this definition">
<dd>
<dl class="field-list simple">
<dt class="field-odd">
Parameters
<dd class="field-odd">
<p>
<strong>
path
(
<a class="reference internal" href="#pxr.Sdf.Path" title="pxr.Sdf.Path">
<em>
Path
) –
<dl class="py method">
<dt class="sig sig-object py" id="pxr.Sdf.Layer.New">
<em class="property">
<span class="pre">
static
<span class="w">
<span class="sig-name descname">
<span class="pre">
New
<span class="sig-paren">
(
<span class="sig-paren">
)
<a class="headerlink" href="#pxr.Sdf.Layer.New" title="Permalink to this definition">
<dd>
<p>
<strong>
classmethod
New(fileFormat, identifier, args) -> Layer
<p>
Creates a new empty layer with the given identifier for a given file
format class.
<p>
The new layer will not be dirty and will not be saved.
<p>
Additional arguments may be supplied via the
<code class="docutils literal notranslate">
<span class="pre">
args
parameter. These
arguments may control behavior specific to the layer’s file format.
<dl class="field-list simple">
<dt class="field-odd">
Parameters
<dd class="field-odd">
<ul class="simple">
<li>
<p>
<strong>
fileFormat
(
<a class="reference internal" href="#pxr.Sdf.FileFormat" title="pxr.Sdf.FileFormat">
<em>
FileFormat
) –
<li>
<p>
<strong>
identifier
(
<em>
str
) –
<li>
<p>
<strong>
args
(
<em>
FileFormatArguments
) –
<dl class="py method">
<dt class="sig sig-object py" id="pxr.Sdf.Layer.OpenAsAnonymous">
<em class="property">
<span class="pre">
static
<span class="w">
<span class="sig-name descname">
<span class="pre">
OpenAsAnonymous
<span class="sig-paren">
(
<span class="sig-paren">
)
<a class="headerlink" href="#pxr.Sdf.Layer.OpenAsAnonymous" title="Permalink to this definition">
<dd>
<p>
<strong>
classmethod
OpenAsAnonymous(layerPath, metadataOnly, tag) -> Layer
<p>
Load the given layer from disk as a new anonymous layer.
<p>
If the layer can’t be found or loaded, an error is posted and a null
layer is returned.
<p>
The anonymous layer does not retain any knowledge of the backing file
on the filesystem.
<p>
<code class="docutils literal notranslate">
<span class="pre">
metadataOnly
is a flag that asks for only the layer metadata to be
read in, which can be much faster if that is all that is required.
Note that this is just a hint: some FileFormat readers may disregard
this flag and still fully populate the layer contents.
<p>
An optional
<code class="docutils literal notranslate">
<span class="pre">
tag
may be specified. See CreateAnonymous for details.
<dl class="field-list simple">
<dt class="field-odd">
Parameters
<dd class="field-odd">
<ul class="simple">
<li>
<p>
<strong>
layerPath
(
<em>
str
) –
<li>
<p>
<strong>
metadataOnly
(
<em>
bool
) –
<li>
<p>
<strong>
tag
(
<em>
str
) –
### QueryTimeSample
QueryTimeSample(path, time, value) -> bool
**Parameters**
- **path** (Path) –
- **time** (float) –
- **value** (VtValue) –
QueryTimeSample(path, time, value) -> bool
**Parameters**
- **path** (Path) –
- **time** (float) –
- **value** (SdfAbstractDataValue) –
QueryTimeSample(path, time, data) -> bool
**Parameters**
- **path** (Path) –
- **time** (float) –
- **data** (T) –
### Reload
Reloads the layer from its persistent representation.
This restores the layer to a state as if it had just been created with FindOrOpen(). This operation is Undo-able.
The fileName and whether journaling is enabled are not affected by this method.
When called with force = false (the default), Reload attempts to avoid reloading layers that have not changed on disk. It does so by comparing the file’s modification time (mtime) to when the file was loaded. If the layer has unsaved modifications, this mechanism is not used, and the layer is reloaded from disk. If the layer has any external asset dependencies their modification state will also be consulted when determining if the layer needs to be reloaded.
Passing true to the `force` parameter overrides this behavior, forcing the layer to be reloaded from disk regardless of whether it has changed.
**Parameters**
- **force** (bool) –
### ReloadLayers
**classmethod** ReloadLayers(layers, force) -> bool
Reloads the specified layers.
Returns `false` if one or more layers failed to reload.
See `Reload()` for a description of the `force` flag.
**Parameters**
- **layers** (`set[Layer]`) –
- **force** (`bool`) –
### RemoveFromMutedLayers
**classmethod** RemoveFromMutedLayers(mutedPath) -> None
Remove the specified path from the muted layers set.
**Parameters**
- **mutedPath** (`str`) –
### RemoveInertSceneDescription
**RemoveInertSceneDescription**() → None
Removes all scene description in this layer that does not affect the scene.
This method walks the layer namespace hierarchy and removes any prims that are not contributing any opinions.
### Save
**Save**(force) → bool
Saves the layer to its persistent representation.
Returns `true` if successful, `false` if an error occurred.
Returns `false` if the layer has no remembered file name or the layer type cannot be saved. The layer will not be overwritten if the file exists and the layer is not dirty unless `force` is true.
**Parameters**
- **force** (`bool`) –
### ScheduleRemoveIfInert
**ScheduleRemoveIfInert**(spec) → None
Cause `spec` to be removed if it no longer affects the scene when the last change block is closed, or now if there are no change blocks.
**Parameters**
- **spec** (`Spec`) –
### SetDetachedLayerRules
**classmethod** SetDetachedLayerRules(mask) -> None
Sets the rules specifying detached layers.
Newly-created or opened layers whose identifiers are included in rules will be opened as detached layers. Existing layers that are now included or no longer included will be reloaded. Any unsaved modifications to those layers will be lost.
This function is not thread-safe. It may not be run concurrently with any other functions that open, close, or read from any layers.
The detached layer rules are initially set to exclude all layers. This may be overridden by setting the environment variables SDF_LAYER_INCLUDE_DETACHED and SDF_LAYER_EXCLUDE_DETACHED to specify the initial set of include and exclude patterns in the rules. These variables can be set to a comma-delimited list of patterns. SDF_LAYER_INCLUDE_DETACHED may also be set to "*" to include all layers. Note that these environment variables only set the initial state of the detached layer rules; these values may be overwritten by subsequent calls to this function.
See SdfLayer::DetachedLayerRules::IsIncluded for details on how the rules are applied to layer identifiers.
**Parameters**
- **mask** (DetachedLayerRules) –
### SetMuted
**SetMuted**(muted) → None
Mutes the current layer if `muted` is true, and unmutes it otherwise.
**Parameters**
- **muted** (bool) –
### SetPermissionToEdit
**SetPermissionToEdit**(allow) → None
Sets permission to edit.
**Parameters**
- **allow** (bool) –
### SetPermissionToSave
**SetPermissionToSave**(allow) → None
Sets permission to save.
**Parameters**
- **allow** (bool) –
### SetTimeSample
**SetTimeSample**(path, time, value) → None
**Parameters**
- **path** (Path) –
- **time** (float) –
- **value** (VtValue) –
SetTimeSample(path, time, value) -> None
**Parameters**
- **path** (Path) –
- **time** (float) –
- **value** (SdfAbstractDataConstValue) –
SetTimeSample(path, time, value) -> None
**Parameters**
- **path** (Path) –
- **time** (float) –
- **value** (T) –
### SplitIdentifier
**classmethod** SplitIdentifier(identifier, layerPath, arguments) -> bool
Splits the given layer identifier into its constituent layer path and arguments.
**Parameters**
- **identifier** (str) –
- **layerPath** (str) –
- **arguments** (FileFormatArguments) –
### StreamsData
**StreamsData**() -> bool
Returns true if this layer streams data from its serialized data store on demand, false otherwise.
### TransferContent
**TransferContent**(layer) -> None
Copies the content of the given layer into this layer.
Source layer is unmodified.
**Parameters**
- **layer** (Layer) –
### Traverse
**Traverse**(path, func) -> None
**Parameters**
- **path** (`Path`) –
- **func** (`TraversalFunction`) –
### UpdateAssetInfo()
Update layer asset information.
Calling this method re-resolves the layer identifier, which updates asset information such as the layer’s resolved path and other asset info. This may be used to update the layer after external changes to the underlying asset system.
### UpdateCompositionAssetDependency(oldAssetPath, newAssetPath)
Updates the asset path of a composition dependency in this layer.
If `newAssetPath` is supplied, the update works as "rename", updating any occurrence of `oldAssetPath` to `newAssetPath` in all reference, payload, and sublayer fields.
If `newAssetPath` is not given, this update behaves as a "delete", removing all occurrences of `oldAssetPath` from all reference, payload, and sublayer fields.
**Parameters**
- **oldAssetPath** (`str`) –
- **newAssetPath** (`str`) –
### UpdateExternalReference(oldAssetPath, newAssetPath)
Deprecated. Use UpdateCompositionAssetDependency instead.
**Parameters**
- **oldAssetPath** (`str`) –
- **newAssetPath** (`str`) –
ColorConfigurationKey = 'colorConfiguration'
ColorManagementSystemKey = 'colorManagementSystem'
CommentKey = 'comment'
DocumentationKey = 'documentation'
EndFrameKey = 'endFrame'
EndTimeCodeKey = 'endTimeCode'
FramePrecisionKey = 'framePrecision'
FramesPerSecondKey = 'framesPerSecond'
HasOwnedSubLayers = 'hasOwnedSubLayers'
OwnerKey = 'owner'
SessionOwnerKey = 'sessionOwner'
StartFrameKey = 'startFrame'
StartTimeCodeKey = 'startTimeCode'
TimeCodesPerSecondKey = 'timeCodesPerSecond'
<dl class="py property">
<dt class="sig sig-object py" id="pxr.Sdf.Layer.timeCodesPerSecondKey">
<em class="property">
<span class="pre">property
<span class="w">
<span class="sig-name descname">
<span class="pre">'timeCodesPerSecond'
<dd>
<dl class="py property">
<dt class="sig sig-object py" id="pxr.Sdf.Layer.anonymous">
<em class="property">
<span class="pre">property
<span class="w">
<span class="sig-name descname">
<span class="pre">anonymous
<dd>
<p>bool
<p>Returns true if this layer is an anonymous layer.
<dl class="field-list simple">
<dt class="field-odd">Type
<dd class="field-odd">
<p>type
<dl class="py property">
<dt class="sig sig-object py" id="pxr.Sdf.Layer.colorConfiguration">
<em class="property">
<span class="pre">property
<span class="w">
<span class="sig-name descname">
<span class="pre">colorConfiguration
<dd>
<p>The color configuration asset-path of this layer.
<dl class="py property">
<dt class="sig sig-object py" id="pxr.Sdf.Layer.colorManagementSystem">
<em class="property">
<span class="pre">property
<span class="w">
<span class="sig-name descname">
<span class="pre">colorManagementSystem
<dd>
<p>The name of the color management system used to interpret the colorConfiguration asset.
<dl class="py property">
<dt class="sig sig-object py" id="pxr.Sdf.Layer.comment">
<em class="property">
<span class="pre">property
<span class="w">
<span class="sig-name descname">
<span class="pre">comment
<dd>
<p>The layer’s comment string.
<dl class="py property">
<dt class="sig sig-object py" id="pxr.Sdf.Layer.customLayerData">
<em class="property">
<span class="pre">property
<span class="w">
<span class="sig-name descname">
<span class="pre">customLayerData
<dd>
<p>The customLayerData dictionary associated with this layer.
<dl class="py property">
<dt class="sig sig-object py" id="pxr.Sdf.Layer.defaultPrim">
<em class="property">
<span class="pre">property
<span class="w">
<span class="sig-name descname">
<span class="pre">defaultPrim
<dd>
<p>The layer’s default reference target token.
<dl class="py property">
<dt class="sig sig-object py" id="pxr.Sdf.Layer.dirty">
<em class="property">
<span class="pre">property
<span class="w">
<span class="sig-name descname">
<span class="pre">dirty
<dd>
<p>bool
<p>Returns true if the layer is dirty, i.e., has changed from its persistent representation.
<dl class="field-list simple">
<dt class="field-odd">Type
<dd class="field-odd">
<p>type
<dl class="py property">
<dt class="sig sig-object py" id="pxr.Sdf.Layer.documentation">
<em class="property">
<span class="pre">property
<span class="w">
<span class="sig-name descname">
<span class="pre">documentation
<dd>
<p>The layer’s documentation string.
<dl class="py property">
<dt class="sig sig-object py" id="pxr.Sdf.Layer.empty">
<em class="property">
<span class="pre">property
<span class="w">
<span class="sig-name descname">
<span class="pre">empty
<dd>
<p>bool
<p>Returns whether this layer has no significant data.
<dl class="field-list simple">
<dt class="field-odd">Type
<dd class="field-odd">
<p>type
<dl class="py property">
<dt class="sig sig-object py" id="pxr.Sdf.Layer.endTimeCode">
<em class="property">
<span class="pre">property
<span class="w">
<span class="sig-name descname">
<span class="pre">endTimeCode
<dd>
<p>The end timeCode of this layer.
<p>The end timeCode of a layer is not a hard limit, but is more of a hint. A layer’s time-varying content is not limited to the timeCode range of the layer.
<dl class="py property">
<dt class="sig sig-object py" id="pxr.Sdf.Layer.expired">
<em class="property">
<span class="pre">property
<span class="w">
<span class="sig-name descname">
<span class="pre">expired
<dd>
<p>True if this object has expired, False otherwise.
### externalReferences
Return unique list of asset paths of external references for given layer.
### fileExtension
The layer’s file extension.
### framePrecision
The number of digits of precision used in times in this layer.
### framesPerSecond
The frames per second used in this layer.
### hasOwnedSubLayers
Whether this layer’s sub layers are expected to have owners.
### identifier
The layer’s identifier.
### owner
The owner of this layer.
### permissionToEdit
Return true if permitted to be edited (modified), false otherwise.
### permissionToSave
Return true if permitted to be saved, false otherwise.
### pseudoRoot
The pseudo-root of the layer.
### realPath
The layer’s resolved path.
### repositoryPath
The layer’s associated repository path
### resolvedPath
The layer’s resolved path.
### rootPrimOrder
Get/set the list of root prim names for this layer’s ‘reorder rootPrims’ statement.
### rootPrims
The root prims of this layer, as an ordered dictionary.
The prims may be accessed by index or by name.
Although this property claims it is read only, you can modify the contents of this dictionary to add, remove, or reorder the contents.
### sessionOwner
The session owner of this layer. Only intended for use with session layers.
### startTimeCode
The start timeCode of this layer.
The start timeCode of a layer is not a hard limit, but is
more of a hint. A layer’s time-varying content is not limited to
the timeCode range of the layer.
### subLayerOffsets
The sublayer offsets of this layer, as a list. Although this property is claimed to be read only, you can modify the contents of this list by assigning new layer offsets to specific indices.
### subLayerPaths
The sublayer paths of this layer, as a list. Although this property is claimed to be read only, you can modify the contents of this list.
### timeCodesPerSecond
The timeCodes per second used in this layer.
### version
The layer’s version.
## Class
### LayerOffset
Represents a time offset and scale between layers.
The SdfLayerOffset class is an affine transform, providing both a
scale and a translate. It supports vector algebra semantics for
composing SdfLayerOffsets together via multiplication. The
SdfLayerOffset class is unitless: it does not refer to seconds or
frames.
For example, suppose layer A uses layer B, with an offset of X: when
bringing animation from B into A, you first apply the scale of X, and
then the offset. Suppose you have a scale of 2 and an offset of 24:
first multiply B’s frame numbers by 2, and then add 24. The animation
from B as seen in A will take twice as long and start 24 frames later.
Offsets are typically used in either sublayers or prim references. For
more information, see the SetSubLayerOffset() method of the SdfLayer
class (the subLayerOffsets property in Python), as well as the
SetReference() and GetReferenceLayerOffset() methods (the latter is
the referenceLayerOffset property in Python) of the SdfPrimSpec class.
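The worked example above (scale 2, offset 24) can be sketched in plain Python. This is an illustration of the affine behavior described, not the SdfLayerOffset implementation:

```python
# Plain-Python sketch of the affine mapping described above; an illustration,
# not the SdfLayerOffset implementation.
def apply_offset(time, offset, scale):
    # A frame authored in layer B appears in layer A at scale * time + offset.
    return scale * time + offset

def inverse(offset, scale):
    # The inverse transform maps A's frames back to B's frames.
    return (-offset / scale, 1.0 / scale)

# With a scale of 2 and an offset of 24, frame 10 in B appears at frame 44 in A.
print(apply_offset(10.0, 24.0, 2.0))  # 44.0

# Applying the inverse recovers the original frame.
inv_off, inv_scale = inverse(24.0, 2.0)
print(apply_offset(44.0, inv_off, inv_scale))  # 10.0
```

Composing two offsets via multiplication, as the class supports, amounts to composing two such affine maps.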
**Methods:**
- **GetInverse()** - Gets the inverse offset, which performs the opposite transformation.
- **IsIdentity()** - Returns `true` if this is an identity transformation, with an offset of 0.0 and a scale of 1.0.
**Attributes:**
| Key | Description |
|-----------|-------------|
| `offset` | The time offset. |
| `scale` | The time scale factor. |
```python
def GetInverse():
→ LayerOffset
```
Gets the inverse offset, which performs the opposite transformation.
```python
def IsIdentity():
→ bool
```
Returns `true` if this is an identity transformation, with an offset of 0.0 and a scale of 1.0.
**offset**
- Type: float
- Sets the time offset.
- Returns the time offset.
**scale**
- Type: float
- Sets the time scale factor.
- Returns the time scale factor.
**class pxr.Sdf.LayerTree**
- A SdfLayerTree is an immutable tree structure representing a sublayer stack and its recursive structure.
- Layers can have sublayers, which can in turn have sublayers of their own. Clients that want to represent that hierarchical structure in memory can build a SdfLayerTree for that purpose.
- We use TfRefPtr<SdfLayerTree> as handles to LayerTrees, as a simple way to pass them around as immutable trees without worrying about lifetime.
**Attributes:**
| Key | Description |
|--------------|--------------------------------------------------|
| `childTrees` | list[SdfLayerTreeHandle] |
| `expired` | True if this object has expired, False otherwise |
| `layer` | Layer |
| `offset` | LayerOffset |
**property childTrees**
- Type: list[SdfLayerTreeHandle]
- Returns the children of this tree node.
**property expired**
- True if this object has expired, False otherwise.
**property layer**
- Type: Layer
- Returns the layer handle this tree node represents.
**property offset**
- Type: LayerOffset
- Returns the cumulative layer offset from the root of the tree.
**class pxr.Sdf.LengthUnit**
**Methods:**
- **GetValueFromName** (static)
**Attributes:**
- **allValues** = (Sdf.LengthUnitMillimeter, Sdf.LengthUnitCentimeter, Sdf.LengthUnitDecimeter, Sdf.LengthUnitMeter, Sdf.LengthUnitKilometer, Sdf.LengthUnitInch, Sdf.LengthUnitFoot, Sdf.LengthUnitYard, Sdf.LengthUnitMile)
## ListEditorProxy_SdfNameKeyPolicy
**Methods:**
| Method | Description |
| --- | --- |
| `Add` | |
| `Append` | |
| `ApplyEditsToList` | |
| `ClearEdits` | |
| `ClearEditsAndMakeExplicit` | |
| `ContainsItemEdit` | |
| `CopyItems` | |
| `Erase` | |
| `GetAddedOrExplicitItems` | |
| `ModifyItemEdits` | |
| `Prepend` | |
| `Remove` | |
| `RemoveItemEdits` | |
| `ReplaceItemEdits` | |
**Attributes:**
| Attribute | Description |
| --- | --- |
| `addedItems` | |
| `appendedItems` | |
| `deletedItems` | |
| `explicitItems` | |
| `isExpired` | |
| `isExplicit` | |
| `isOrderedOnly` | |
| `orderedItems` | |
| `prependedItems` | |
### Add
### Append
### ApplyEditsToList
### ClearEdits
### ClearEditsAndMakeExplicit
### ContainsItemEdit
### CopyItems
### Erase
### GetAddedOrExplicitItems()
### ModifyItemEdits()
### Prepend()
### Remove()
### RemoveItemEdits()
### ReplaceItemEdits()
### property addedItems
### property appendedItems
### property deletedItems
### property explicitItems
### property isExpired
### property isExplicit
### property isOrderedOnly
### property orderedItems
### property prependedItems
<em class="property">
<span class="pre">
property
<span class="w">
<span class="sig-name descname">
<span class="pre">
prependedItems
<dd>
<dl class="py class">
<dt class="sig sig-object py" id="pxr.Sdf.ListEditorProxy_SdfPathKeyPolicy">
<em class="property">
<span class="pre">
class
<span class="w">
<span class="sig-prename descclassname">
<span class="pre">
pxr.Sdf.
<span class="sig-name descname">
<span class="pre">
ListEditorProxy_SdfPathKeyPolicy
<dd>
<p>
<strong>
Methods:
<table class="autosummary longtable docutils align-default">
<colgroup>
<col style="width: 10%"/>
<col style="width: 90%"/>
<tbody>
<tr class="row-odd">
<td>
<p>
<code>
Add
<td>
<p>
<tr class="row-even">
<td>
<p>
<code>
Append
<td>
<p>
<tr class="row-odd">
<td>
<p>
<code>
ApplyEditsToList
<td>
<p>
<tr class="row-even">
<td>
<p>
<code>
ClearEdits
<td>
<p>
<tr class="row-odd">
<td>
<p>
<code>
ClearEditsAndMakeExplicit
<td>
<p>
<tr class="row-even">
<td>
<p>
<code>
ContainsItemEdit
<td>
<p>
<tr class="row-odd">
<td>
<p>
<code>
CopyItems
<td>
<p>
<tr class="row-even">
<td>
<p>
<code>
Erase
<td>
<p>
<tr class="row-odd">
<td>
<p>
<code>
GetAddedOrExplicitItems
<td>
<p>
<tr class="row-even">
<td>
<p>
<code>
ModifyItemEdits
<td>
<p>
<tr class="row-odd">
<td>
<p>
<code>
Prepend
<td>
<p>
<tr class="row-even">
<td>
<p>
<code>
Remove
<td>
<p>
<tr class="row-odd">
<td>
<p>
<code>
RemoveItemEdits
<td>
<p>
<tr class="row-even">
<td>
<p>
<code>
ReplaceItemEdits
<td>
<p>
<strong>Attributes:
| Attribute | Description |
|--------------------------|-------------|
| `addedItems` | |
| `appendedItems` | |
| `deletedItems` | |
| `explicitItems` | |
| `isExpired` | |
| `isExplicit` | |
| `isOrderedOnly` | |
| `orderedItems` | |
| `prependedItems` | |
### Add()
### Append()
### ApplyEditsToList()
### ClearEdits()
### ClearEditsAndMakeExplicit()
### ContainsItemEdit()
### CopyItems()
### Erase()
### GetAddedOrExplicitItems()
### ModifyItemEdits()
### Prepend()
### Remove()
### RemoveItemEdits()
### ReplaceItemEdits()
### property addedItems
### property appendedItems
### property deletedItems
### property explicitItems
### property isExpired
### property isExplicit
### property isOrderedOnly
### property orderedItems
### property prependedItems
### pxr.Sdf.ListEditorProxy_SdfPayloadTypePolicy
**Methods:**
- Add
- Append
- ApplyEditsToList
- ClearEdits
- ClearEditsAndMakeExplicit
- ContainsItemEdit
- CopyItems
- Erase
- GetAddedOrExplicitItems
- ModifyItemEdits
- Prepend
- Remove
- RemoveItemEdits
- ReplaceItemEdits
**Attributes:**
- addedItems
- appendedItems
- deletedItems
- explicitItems
- isExpired
- isExplicit
- isOrderedOnly
- orderedItems
- prependedItems
### Add
### Append
### ApplyEditsToList
### ClearEdits
### ClearEditsAndMakeExplicit
### ContainsItemEdit
### CopyItems
### Erase
### GetAddedOrExplicitItems
### ModifyItemEdits
### Prepend
### Remove
### RemoveItemEdits
### ReplaceItemEdits
### addedItems
### appendedItems
### deletedItems
### explicitItems
### isExpired
### isExplicit
### isOrderedOnly
### orderedItems
### prependedItems
### pxr.Sdf.ListEditorProxy_SdfReferenceTypePolicy
#### Methods:
- **Add**
- **Append**
- **ApplyEditsToList**
- **ClearEdits**
- **ClearEditsAndMakeExplicit**
- **ContainsItemEdit**
- **CopyItems**
- **Erase**
- **GetAddedOrExplicitItems**
- **ModifyItemEdits**
- **Prepend**
- **Remove**
- **RemoveItemEdits**
- **ReplaceItemEdits**
**Attributes:**
- addedItems
- appendedItems
- deletedItems
- explicitItems
- isExpired
- isExplicit
- isOrderedOnly
- orderedItems
- prependedItems
## Add()
## Append()
## ApplyEditsToList()
## ClearEdits()
## ClearEditsAndMakeExplicit()
## ContainsItemEdit()
## CopyItems()
## Erase()
## GetAddedOrExplicitItems()
## ModifyItemEdits()
## Prepend()
## Remove()
## RemoveItemEdits()
## ReplaceItemEdits()
## property addedItems
## property appendedItems
## property deletedItems
## property deletedItems
## property explicitItems
## property isExpired
## property isExplicit
## property isOrderedOnly
## property orderedItems
## property prependedItems
## Class: pxr.Sdf.ListOpType
### Methods
- **GetValueFromName**
### Attributes
- **allValues**
- Values: Sdf.ListOpTypeExplicit, Sdf.ListOpTypeAdded, Sdf.ListOpTypePrepended, Sdf.ListOpTypeAppended, Sdf.ListOpTypeDeleted, Sdf.ListOpTypeOrdered
## ListProxy_SdfNameKeyPolicy
### Methods:
| Method | Description |
| --- | --- |
| `ApplyEditsToList` | |
| `ApplyList` | |
| `append` | |
| `clear` | |
| `copy` | |
| `count` | |
| `index` | |
| `insert` | |
| `remove` | |
| `replace` | |
### Attributes:
| Attribute | Description |
| --- | --- |
| `expired` | |
### Detailed Method Descriptions:
#### `ApplyEditsToList`
#### `ApplyList`
#### `append`
#### `clear`
#### `copy`
#### `count`
#### `index`
#### `insert`
#### `remove`
#### `replace`
#### property `expired`
### Classes:
| Class | Description |
|-------|-------------|
| `pxr.Sdf.ListProxy_SdfNameTokenKeyPolicy` | |
## ListProxy_SdfNameTokenKeyPolicy
### Methods:
| Method | Description |
|--------|-------------|
| `ApplyEditsToList` | |
| `ApplyList` | |
| `append` | |
| `clear` | |
| `copy` | |
| `count` | |
| `index` | |
| `insert` | |
| `remove` | |
| `replace` | |
### Properties:
| Property | Description |
|----------|-------------|
| `expired` | |
#### ApplyEditsToList
#### ApplyList
#### append
#### clear
#### copy
#### count
#### index
#### insert
#### remove
#### replace
#### property expired
## ListProxy_SdfPathKeyPolicy
### Methods:
| Method | Description |
| --- | --- |
| `ApplyEditsToList` | |
| `ApplyList` | |
| `append` | |
| `clear` | |
| `copy` | |
| `count` | |
| `index` | |
| `insert` | |
| `remove` | |
| `replace` | |
### Attributes:
| Attribute | Description |
| --- | --- |
| `expired` | |
### Detailed Method Descriptions
#### ApplyEditsToList
#### ApplyList
#### append
#### clear
#### copy
#### count
#### index
#### insert
#### remove
#### replace
#### property expired
### class pxr.Sdf.ListProxy_SdfPayloadTypePolicy
**Methods:**
- ApplyEditsToList
- ApplyList
- append
- clear
- copy
- count
- index
- insert
- remove
- replace
**Attributes:**
- expired
**ApplyEditsToList**
**ApplyList**
**append**
**clear**
**copy**
**count**
**index**
**insert**
**remove**
**replace**
**expired**
### ListProxy_SdfReferenceTypePolicy
**Methods:**
| Method | Description |
|--------|-------------|
| `ApplyEditsToList` | |
| `ApplyList` | |
| `append` | |
| `clear` | |
| `copy` | |
| `count` | |
| `index` | |
| `insert` | |
| `remove` | |
| `replace` | |
**Attributes:**
| Attribute | Description |
|-----------|-------------|
| `expired` | |
#### ApplyEditsToList
#### ApplyList
#### append
#### clear
## Methods:
- `ApplyEditsToList`
- `ApplyList`
- `append`
- `clear`
- `copy`
- `count`
- `index`
- `insert`
- `remove`
- `replace`
## Properties:
- `expired`
## ApplyEditsToList()
## ApplyList()
## append()
## clear()
## copy()
## count()
## index()
## insert()
## remove()
## replace()
## expired (property)
### Classes:
- `MapEditProxy_VtDictionary_Iterator`
- `MapEditProxy_VtDictionary_KeyIterator`
- `MapEditProxy_VtDictionary_ValueIterator`
### Methods:
- `clear`
- `copy`
- `get`
- `items`
- `keys`
- `pop`
- `popitem`
- `setdefault`
- `update`
- `values`
### Attributes:
- `expired`
### MapEditProxy_VtDictionary_Iterator
### MapEditProxy_VtDictionary_KeyIterator
### MapEditProxy_VtDictionary_ValueIterator
### clear()
### copy()
### get()
### items()
### keys()
### pop()
### popitem()
### setdefault()
### update()
### values()
### property expired
### MapEditProxy_map_SdfPath_SdfPath_less_SdfPath__allocator_pair_SdfPath_const__SdfPath_____
## Classes:
| Class | Description |
| --- | --- |
| **MapEditProxy_map_SdfPath_SdfPath_less_SdfPath__allocator_pair_SdfPath_const__SdfPath______Iterator** | |
| **MapEditProxy_map_SdfPath_SdfPath_less_SdfPath__allocator_pair_SdfPath_const__SdfPath______KeyIterator** | |
| **MapEditProxy_map_SdfPath_SdfPath_less_SdfPath__allocator_pair_SdfPath_const__SdfPath______ValueIterator** | |
## Methods:
| Method | Description |
| --- | --- |
| **clear** | |
| **copy** | |
| **get** | |
| **items** | |
| **keys** | |
| **pop** | |
| **popitem** | |
| **setdefault** | |
| **update** | |
| **values** | |
### Attributes:
- **expired**
### Methods:
- **clear()**
- **copy()**
- **get()**
- **items()**
- **keys()**
- **pop()**
- **popitem()**
- **setdefault()**
- **update()**
- **values()**
**Attributes:**
| Attribute | Description |
| --------- | ----------- |
| expired | |
**Classes:**
- **MapEditProxy_map_string_string_less_string__allocator_pair_stringconst__string______Iterator**
- **MapEditProxy_map_string_string_less_string__allocator_pair_stringconst__string______KeyIterator**
- **MapEditProxy_map_string_string_less_string__allocator_pair_stringconst__string______ValueIterator**
**Methods:**
- **clear()**
- **copy()**
### pxr.Sdf.MapEditProxy Methods
#### get
```python
get()
```
#### items
```python
items()
```
#### keys
```python
keys()
```
#### pop
```python
pop()
```
#### popitem
```python
popitem()
```
#### setdefault
```python
setdefault()
```
#### update
```python
update()
```
#### values
```python
values()
```
### pxr.Sdf.MapEditProxy Property
#### expired
```python
property expired
```
### pxr.Sdf.NamespaceEdit
A single namespace edit. It supports renaming, reparenting, reparenting with a rename, reordering, and removal.
**Methods:**
- **Remove**
```python
classmethod Remove(currentPath) -> This
```
- **Rename**
```python
classmethod Rename(currentPath, name) -> This
```
- **Reorder**
```python
classmethod Reorder(currentPath, index) -> This
```
- **Reparent**
```python
classmethod Reparent(currentPath, newParentPath, index) -> This
```
- **ReparentAndRename**
```python
classmethod ReparentAndRename(currentPath, newParentPath, name, index) -> This
```
**Attributes:**
- **atEnd**
- **currentPath**
- **index**
- **newPath**
- **same**
**classmethod** Remove(currentPath) -> This
Returns a namespace edit that removes the object at `currentPath`.
Parameters:
- **currentPath** (Path) –
**classmethod** Rename(currentPath, name) -> This
Returns a namespace edit that renames the prim or property at `currentPath` to `name`.
Parameters:
- **currentPath** (Path) –
- **name** (str) –
**classmethod** Reorder(currentPath, index) -> This
Returns a namespace edit that moves the prim or property at `currentPath` to index `index`.
Parameters:
- **currentPath** (Path) –
- **index** (Index) –
**classmethod** Reparent(currentPath, newParentPath, index) -> This
Returns a namespace edit to reparent the prim or property at `currentPath` to be under `newParentPath` at index `index`.
Parameters:
- **currentPath** (Path) –
- **newParentPath** (Path) –
- **index** (Index) –
**classmethod** ReparentAndRename(currentPath, newParentPath, name, index) -> This
Returns a namespace edit to reparent the prim or property at `currentPath` to be under `newParentPath` at index `index` with the name `name`.
Parameters:
- **currentPath** (Path) –
- **newParentPath** (Path) –
- **name** (str) –
- **index** (Index) –
**property** atEnd
**property** currentPath
**property** index
<em class="property">
<span class="pre">
property
<span class="w">
<span class="sig-name descname">
<span class="pre">
newPath
<dd>
<dl class="py attribute">
<dt class="sig sig-object py" id="pxr.Sdf.NamespaceEdit.same">
<span class="sig-name descname">
<span class="pre">
same
<em class="property">
<span class="w">
<span class="p">
<span class="pre">
=
<span class="w">
<span class="pre">
-2
<dd>
<dl class="py class">
<dt class="sig sig-object py" id="pxr.Sdf.NamespaceEditDetail">
<em class="property">
<span class="pre">
class
<span class="w">
<span class="sig-prename descclassname">
<span class="pre">
pxr.Sdf.
<span class="sig-name descname">
<span class="pre">
NamespaceEditDetail
<dd>
<p>Detailed information about a namespace edit.
<p><strong>Classes:
<table>
<colgroup>
<col style="width: 10%"/>
<col style="width: 90%"/>
<tbody>
<tr class="row-odd">
<td>
<p><code>Result
<td>
<p>Validity of an edit.
<p><strong>Attributes:
<table>
<colgroup>
<col style="width: 10%"/>
<col style="width: 90%"/>
<tbody>
<tr class="row-odd">
<td>
<p><code>Error
<td>
<p>
<tr class="row-even">
<td>
<p><code>Okay
<td>
<p>
<tr class="row-odd">
<td>
<p><code>Unbatched
<td>
<p>
<tr class="row-even">
<td>
<p><code>edit
<td>
<p>
<tr class="row-odd">
<td>
<p><code>reason
<td>
<p>
<tr class="row-even">
<td>
<p><code>result
<td>
<p>
<dl class="py class">
<dt class="sig sig-object py" id="pxr.Sdf.NamespaceEditDetail.Result">
<em class="property">
<span class="pre">
class
<span class="w">
<span class="sig-name descname">
<span class="pre">
Result
<dd>
<p>Validity of an edit.
<p><strong>Methods:
<table>
<colgroup>
<col style="width: 10%"/>
<col style="width: 90%"/>
<tbody>
<tr class="row-odd">
<td>
<p><code>GetValueFromName
<td>
<p>
<p><strong>Attributes:
<table>
<colgroup>
<col style="width: 10%"/>
<col style="width: 90%"/>
<tbody>
<tr class="row-odd">
<td>
<p><code>allValues
<td>
<p>
<dl class="py method">
<dt class="sig sig-object py" id="pxr.Sdf.NamespaceEditDetail.Result.GetValueFromName">
<em class="property">
<span class="pre">
static
<span class="w">
<span class="sig-name descname">
<span class="pre">
GetValueFromName
### pxr.Sdf.NamespaceEditDetail.Result.GetValueFromName
### pxr.Sdf.NamespaceEditDetail.Result.allValues
allValues = (Sdf.NamespaceEditDetail.Error, Sdf.NamespaceEditDetail.Unbatched, Sdf.NamespaceEditDetail.Okay)
### pxr.Sdf.NamespaceEditDetail.Error
Error = Sdf.NamespaceEditDetail.Error
### pxr.Sdf.NamespaceEditDetail.Okay
Okay = Sdf.NamespaceEditDetail.Okay
### pxr.Sdf.NamespaceEditDetail.Unbatched
Unbatched = Sdf.NamespaceEditDetail.Unbatched
### pxr.Sdf.NamespaceEditDetail.edit
property edit
### pxr.Sdf.NamespaceEditDetail.reason
property reason
### pxr.Sdf.NamespaceEditDetail.result
property result
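The edit kinds above (rename, reparent) each map a current path to a new path. A plain-Python sketch of what the two most common edits express, on path strings (an illustration only, not the Sdf implementation, which validates edits and applies them in batches):

```python
# Plain-Python sketch of what Rename and Reparent edits express, on path
# strings; an illustration, not the Sdf implementation.
def rename(current_path, name):
    # Rename keeps the parent namespace and changes the final component.
    parent, _, _ = current_path.rpartition("/")
    return parent + "/" + name

def reparent(current_path, new_parent_path):
    # Reparent keeps the final component and moves it under a new parent.
    _, _, leaf = current_path.rpartition("/")
    return new_parent_path.rstrip("/") + "/" + leaf

print(rename("/World/OldName", "NewName"))       # /World/NewName
print(reparent("/World/Child", "/Other/Group"))  # /Other/Group/Child
```

ReparentAndRename combines both mappings, and the `index` parameter additionally controls where the object lands in its new parent's order.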
### pxr.Sdf.Notice
Wrapper class for Sdf notices.
**Classes:**
- Base
- LayerDidReloadContent
- LayerDidReplaceContent
- LayerDirtinessChanged
- LayerIdentifierDidChange
- LayerInfoDidChange
- LayerMutenessChanged
- LayersDidChange
- LayersDidChangeSentPerLayer
### Base
### LayerDidReloadContent
### LayerDidReplaceContent
### LayerDirtinessChanged
### LayerIdentifierDidChange
**Attributes:**
| Attribute | Description |
|-----------|-------------|
| newIdentifier | |
| oldIdentifier | |
#### newIdentifier
#### oldIdentifier
### LayerInfoDidChange
**Methods:**
| Method | Description |
|--------|-------------|
| `key` | |
### LayerMutenessChanged
**Attributes:**
- layerPath
- wasMuted
### LayersDidChange
**Methods:**
- GetLayers
- GetSerialNumber
### LayersDidChangeSentPerLayer
**Methods:**
- GetLayers
- GetSerialNumber
### GetLayers
### GetSerialNumber
### pxr.Sdf.Path
**class** pxr.Sdf.Path
A path value used to locate objects in layers or scenegraphs.
#### Overview
SdfPath is used in several ways:
- As a storage key for addressing and accessing values held in a SdfLayer
- As a namespace identity for scenegraph objects
- As a way to refer to other scenegraph objects through relative paths
The paths represented by an SdfPath class may be either relative or absolute. Relative paths are relative to the prim object that contains them (that is, if an SdfRelationshipSpec target is relative, it is relative to the SdfPrimSpec object that owns the SdfRelationshipSpec object).
SdfPath objects can be readily created from and converted back to strings, but as SdfPath objects, they have behaviors that make it easy and efficient to work with them. The SdfPath class provides a full range of methods for manipulating scene paths by appending a namespace child, appending a relationship target, getting the parent path, and so on. Since the SdfPath class uses a node-based representation internally, you should use the editing functions rather than converting to and from strings if possible.
#### Path Syntax
Like a filesystem path, an SdfPath is conceptually just a sequence of path components. Unlike a filesystem path, each component has a type, and the type is indicated by the syntax.
Two separators are used between parts of a path. A slash (“/”) following an identifier is used to introduce a namespace child. A period (“.”) following an identifier is used to introduce a property. A property may also have several non-sequential colons (‘:’) in its name to provide a rudimentary namespace within properties but may not end or begin with a colon.
A leading slash in the string representation of an SdfPath object indicates an absolute path. Two adjacent periods indicate the parent namespace.
Brackets ("[" and "]") are used to indicate relationship target paths for relational attributes.
The first part in a path is assumed to be a namespace child unless it is preceded by a period. That means:
- `/Foo` is an absolute path specifying the root prim Foo.
- `/Foo/Bar` is an absolute path specifying namespace child Bar of root prim Foo.
- `/Foo/Bar.baz` is an absolute path specifying property `baz` of namespace child Bar of root prim Foo.
- `Foo` is a relative path specifying namespace child Foo of the current prim.
- `Foo/Bar` is a relative path specifying namespace child Bar of namespace child Foo of the current prim.
- `Foo/Bar.baz` is a relative path specifying property `baz` of namespace child Bar of namespace child Foo of the current prim.
- `.foo` is a relative path specifying the property `foo` of the current prim.
- `/Foo.bar[/Foo.baz].attrib` is a relational attribute path. The relationship `/Foo.bar` has a relational attribute `attrib` for its relationship target `/Foo.baz`.
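The two-separator rule above can be sketched in a few lines of plain Python. This is an illustration of the syntax only, not SdfPath, and it ignores relational-attribute brackets and variant selections:

```python
# Minimal sketch of the syntax rules above: '/' introduces a namespace child,
# '.' introduces a property. Illustration only, not the SdfPath implementation;
# brackets and variant selections are not handled.
def split_path(path):
    absolute = path.startswith("/")
    body, _, prop = path.lstrip("/").partition(".")
    prims = [p for p in body.split("/") if p]
    return absolute, prims, prop or None

print(split_path("/Foo/Bar.baz"))  # (True, ['Foo', 'Bar'], 'baz')
print(split_path("Foo/Bar"))       # (False, ['Foo', 'Bar'], None)
print(split_path(".foo"))          # (False, [], 'foo')
```

In real code you should construct `Sdf.Path` objects and use their editing methods rather than parsing strings, as the overview notes.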
## A Note on Thread-Safety
SdfPath is strongly thread-safe, in the sense that zero additional synchronization is required between threads creating or using SdfPath values. Just like TfToken, SdfPath values are immutable. Internally, SdfPath uses a global prefix tree to efficiently share representations of paths, and provide fast equality/hashing operations, but modifications to this table are internally synchronized. Consequently, as with TfToken, for best performance it is important to minimize the number of values created (since it requires synchronized access to this table) or copied (since it requires atomic ref-counting operations).
### Classes:
- `AncestorsRange`
### Methods:
- `AppendChild(childName)` - Creates a path by appending an element for `childName` to this path.
- `AppendElementString(element)` - Creates a path by extracting and appending an element from the given ascii element encoding.
- `AppendExpression()` - Creates a path by appending an expression element.
- `AppendMapper(targetPath)` - Creates a path by appending a mapper element for `targetPath`.
- `AppendMapperArg(argName)` - Creates a path by appending an element for `argName`.
- `AppendPath(newSuffix)` - Creates a path by appending a given relative path to this path.
- `AppendProperty(propName)` - Creates a path by appending an element for `propName` to this path.
- `AppendRelationalAttribute(attrName)` - Creates a path by appending an element for `attrName` to this path.
- `AppendTarget(targetPath)` - Creates a path by appending an element for `targetPath`.
- `AppendVariantSelection(variantSet, variant)` - Creates a path by appending an element for `variantSet` and `variant` to this path.
- `ContainsPrimVariantSelection()` - Returns whether the path or any of its parent paths identifies a variant selection for a prim.
- `ContainsPropertyElements()` - Return true if this path contains any property elements, false otherwise.
- `ContainsTargetPath()` - Return true if this path is or has a prefix that's a target path or a mapper path.
- `FindLongestPrefix`
- `FindLongestStrictPrefix`
- `FindPrefixedRange`
- `GetAbsoluteRootOrPrimPath()` - Creates a path by stripping all properties and relational attributes from this path, leaving the path to the containing prim.
- `GetAllTargetPathsRecursively(result)` - Returns all the relationship target or connection target paths contained in this path, and recursively all the target paths contained in those target paths in reverse depth-first order.
- `GetAncestorsRange()` - Return a range for iterating over the ancestors of this path.
- `GetCommonPrefix(path)` - Returns a path with maximal length that is a prefix path of both this path and `path`.
- `GetConciseRelativePaths(paths) -> list[SdfPath]` - **classmethod**
- `GetParentPath()` - Return the path that identifies this path's namespace parent.
- `GetPrefixes` - Returns the prefix paths of this path.
- `GetPrimOrPrimVariantSelectionPath()` - Creates a path by stripping all relational attributes, targets, and properties, leaving the nearest path for which `IsPrimOrPrimVariantSelectionPath()` returns true.
- `GetPrimPath()` - Creates a path by stripping all relational attributes, targets, properties, and variant selections from the leafmost prim path, leaving the nearest path for which `IsPrimPath()` returns true.
- `GetVariantSelection()` - Returns the variant selection for this path, if this is a variant selection path.
- `HasPrefix(prefix)` - Return true if both this path and prefix are not the empty path and this path has prefix as a prefix.
- `IsAbsolutePath()` - Returns whether the path is absolute.
- `IsAbsoluteRootOrPrimPath()` - Returns whether the path identifies a prim or the absolute root.
- `IsAbsoluteRootPath()` - Return true if this path is the AbsoluteRootPath().
- `IsExpressionPath()` - Returns whether the path identifies a connection expression.
- `IsMapperArgPath()` - Returns whether the path identifies a connection mapper arg.
- `IsMapperPath()` - Returns whether the path identifies a connection mapper.
- `IsNamespacedPropertyPath()` - Returns whether the path identifies a namespaced property.
- `IsPrimPath()` - Returns whether the path identifies a prim.
- `IsPrimPropertyPath()` - Returns whether the path identifies a prim's property.
- `IsPrimVariantSelectionPath()` - Returns whether the path identifies a variant selection for a prim.
- `IsPropertyPath()`
- Returns whether the path identifies a property.
- `IsRelationalAttributePath()`
- Returns whether the path identifies a relational attribute.
- `IsRootPrimPath()`
- Returns whether the path identifies a root prim.
- `IsTargetPath()`
- Returns whether the path identifies a relationship or connection target.
- `IsValidIdentifier(name) -> bool`
- **classmethod**
- `IsValidNamespacedIdentifier(name) -> bool`
- **classmethod**
- `IsValidPathString(pathString, errMsg) -> bool`
- **classmethod**
- `JoinIdentifier(names) -> str`
- **classmethod**
- `MakeAbsolutePath(anchor)`
- Returns the absolute form of this path using `anchor` as the relative basis.
- `MakeRelativePath(anchor)`
- Returns the relative form of this path using `anchor` as the relative basis.
- `RemoveAncestorPaths(paths) -> None`
- **classmethod**
- `RemoveCommonSuffix(otherPath, stopAtRootPrim)`
- Find and remove the longest common suffix from two paths.
- `RemoveDescendentPaths(paths) -> None`
- **classmethod**
- `ReplaceName(newName)`
  - Return a copy of this path with its final component changed to `newName`.
- `ReplacePrefix(oldPrefix, newPrefix, ...)`
  - Returns a path with all occurrences of the prefix path `oldPrefix` replaced with the prefix path `newPrefix`.
- `ReplaceTargetPath(newTargetPath)`
  - Replaces the relational attribute's target path.
- `StripAllVariantSelections()`
  - Create a path by stripping all variant selections from all components of this path, leaving a path with no embedded variant selections.
- `StripNamespace(name) -> str`
  - **classmethod**
- `StripPrefixNamespace(name, matchNamespace) -> tuple[str, bool]`
  - **classmethod**
- `TokenizeIdentifier(name) -> list[str]`
  - **classmethod**
**Attributes:**
- `absoluteIndicator`
- `absoluteRootPath`
- `childDelimiter`
- `elementString` - The string representation of the terminal component of this path.
- `emptyPath`
- `expressionIndicator`
- `isEmpty` - bool
- `mapperArgDelimiter`
- `mapperIndicator`
- `name` - The name of the prim, property or relational attribute identified by the path.
- `namespaceDelimiter`
- `parentPathElement`
- `pathElementCount` - The number of path elements in this path.
- `pathString` - The string representation of this path.
- `propertyDelimiter`
- `reflexiveRelativePath`
- `relationshipTargetEnd`
- `relationshipTargetStart`
- `targetPath` - The relational attribute target path for this path.
### class AncestorsRange
**Methods:**
- `GetPath()`
AppendChild(childName) → Path
Creates a path by appending an element for `childName` to this path.
This path must be a prim path, the AbsoluteRootPath or the ReflexiveRelativePath.
**Parameters**
* **childName** (`str`) –
AppendElementString(element) → Path
Creates a path by extracting and appending an element from the given ascii element encoding.
Attempting to append a root or empty path (or malformed path) or attempting to append to the EmptyPath will raise an error and return the EmptyPath.
May also fail and return EmptyPath if this path’s type cannot possess a child of the type encoded in `element`.
**Parameters**
* **element** (`str`) –
AppendExpression() → Path
Creates a path by appending an expression element.
This path must be a prim property or relational attribute path.
AppendMapper(targetPath) → Path
Creates a path by appending a mapper element for `targetPath`.
This path must be a prim property or relational attribute path.
**Parameters**
* **targetPath** (`Path`) –
AppendMapperArg(argName) → Path
Creates a path by appending an element for `argName`.
This path must be a mapper path.
**Parameters**
* **argName** (`str`) –
Creates a path by appending a new suffix to this path.
This path must be a prim path, the AbsoluteRootPath or the ReflexiveRelativePath.
**Parameters**
* **newSuffix** (`str`) –
### AppendPath
Creates a path by appending a given relative path to this path.
If the newSuffix is a prim path, then this path must be a prim path or a root path.
If the newSuffix is a prim property path, then this path must be a prim path or the ReflexiveRelativePath.
#### Parameters
- **newSuffix** (`Path`) –
### AppendProperty
Creates a path by appending an element for `propName` to this path.
This path must be a prim path or the ReflexiveRelativePath.
#### Parameters
- **propName** (`str`) –
### AppendRelationalAttribute
Creates a path by appending an element for `attrName` to this path.
This path must be a target path.
#### Parameters
- **attrName** (`str`) –
### AppendTarget
Creates a path by appending an element for `targetPath`.
This path must be a prim property or relational attribute path.
#### Parameters
- **targetPath** (`Path`) –
### AppendVariantSelection
Creates a path by appending an element for `variantSet` and `variant` to this path.
This path must be a prim path.
#### Parameters
- **variantSet** (`str`) –
- **variant** (`str`) –
### ContainsPrimVariantSelection
Returns whether the path or any of its parent paths identifies a variant selection for a prim.
### ContainsPropertyElements
Return true if this path contains any property elements, false otherwise.
A false return indicates a prim-like path, specifically a root path, a prim path, or a prim variant selection path. A true return indicates a property-like path: a prim property path, a target path, a relational attribute path, etc.
### ContainsTargetPath
Return true if this path is or has a prefix that’s a target path or a mapper path.
### GetAbsoluteRootOrPrimPath
Creates a path by stripping all properties and relational attributes from this path, leaving the path to the containing prim.
If the path is already a prim or absolute root path, the same path is returned.
### GetAllTargetPathsRecursively
Returns all the relationship target or connection target paths contained in this path, and recursively all the target paths contained in those target paths in reverse depth-first order.
For example, given the path `/A/B.a[/C/D.a[/E/F.a]].a[/A/B.a[/C/D.a]]`, this method produces: `/A/B.a[/C/D.a]`, `/C/D.a`, `/C/D.a[/E/F.a]`, `/E/F.a`.
**Parameters**
- **result** (`list[SdfPath]`) –
## GetAncestorsRange
Return a range for iterating over the ancestors of this path.
The range provides iteration over the prefixes of a path, ordered from longest to shortest (the opposite of the order of the prefixes returned by GetPrefixes).
## GetCommonPrefix
Returns a path with maximal length that is a prefix path of both this path and `path`.
### Parameters
- **path** (**Path**) –
## GetConciseRelativePaths
**classmethod** GetConciseRelativePaths(paths) -> list[SdfPath]
Given some vector of paths, get a vector of concise unambiguous relative paths.
GetConciseRelativePaths requires a vector of absolute paths. It finds a set of relative paths such that each relative path is unique.
### Parameters
- **paths** (**list** [**SdfPath**]) –
## GetParentPath
Return the path that identifies this path’s namespace parent.
For a prim path (like `/foo/bar`), return the prim’s parent’s path (`/foo`). For a prim property path (like `/foo/bar.property`), return the prim’s path (`/foo/bar`). For a target path (like `/foo/bar.property[/target]`), return the property path (`/foo/bar.property`). For a mapper path (like `/foo/bar.property.mapper[/target]`), return the property path (`/foo/bar.property`). For a relational attribute path (like `/foo/bar.property[/target].relAttr`), return the relationship target’s path (`/foo/bar.property[/target]`). For a prim variant selection path (like `/foo/bar{var=sel}`), return the prim path (`/foo/bar`). For a root prim path (like `/rootPrim`), return AbsoluteRootPath() (`/`). For a single element relative prim path (like `relativePrim`), return ReflexiveRelativePath() (`.`). For ReflexiveRelativePath(), return the relative parent path (`..`).
Note that the parent path of a relative parent path (`..`) is a relative grandparent path (`../..`). Use caution writing loops that walk to parent paths, since relative paths have infinitely many ancestors. To more safely traverse ancestor paths, consider iterating over an SdfPathAncestorsRange instead, as returned by GetAncestorsRange().
## GetPrefixes
Returns the prefix paths of this path.
## GetPrimOrPrimVariantSelectionPath
Creates a path by stripping all relational attributes, targets, and properties, leaving the nearest path for which IsPrimOrPrimVariantSelectionPath() returns true.
See GetPrimPath also.
If the path is already a prim or a prim variant selection path, the same path is returned.
## GetPrimPath
Creates a path by stripping all relational attributes, targets, properties, and variant selections from the leafmost prim path, leaving the nearest path for which IsPrimPath() returns true.
See GetPrimOrPrimVariantSelectionPath also.
If the path is already a prim path, the same path is returned.
### GetVariantSelection
Returns the variant selection for this path, if this is a variant selection path.
Returns a pair of empty strings if this path is not a variant selection path.
### HasPrefix
Return true if both this path and `prefix` are not the empty path and this path has `prefix` as a prefix.
Return false otherwise.
#### Parameters
- **prefix** (`Path`) –
### IsAbsolutePath
Returns whether the path is absolute.
### IsAbsoluteRootOrPrimPath
Returns whether the path identifies a prim or the absolute root.
### IsAbsoluteRootPath
Return true if this path is the AbsoluteRootPath().
### IsExpressionPath
Returns whether the path identifies a connection expression.
### IsMapperArgPath
Returns whether the path identifies a connection mapper arg.
### IsMapperPath
Returns whether the path identifies a connection mapper.
### IsNamespacedPropertyPath
Returns whether the path identifies a namespaced property.
A namespaced property has a colon embedded in its name.
### IsPrimPath
Returns whether the path identifies a prim.
### IsPrimPropertyPath
Returns whether the path identifies a prim’s property.
A relational attribute is not a prim property.
### IsPrimVariantSelectionPath
Returns whether the path identifies a variant selection for a prim.
### IsPropertyPath
Returns whether the path identifies a property.
A relational attribute is considered to be a property, so this method will return true for relational attributes as well as properties of prims.
### IsRelationalAttributePath
Returns whether the path identifies a relational attribute.
If this is true, IsPropertyPath() will also be true.
### IsRootPrimPath
Returns whether the path identifies a root prim.
The path must be absolute and have a single element (for example `/foo`).
### IsTargetPath
Returns whether the path identifies a relationship or connection target.
### IsValidIdentifier
**classmethod** IsValidIdentifier(name) -> bool
Returns whether `name` is a legal identifier for any path component.
- **Parameters**
- **name** (str) –
### IsValidNamespacedIdentifier
**classmethod** IsValidNamespacedIdentifier(name) -> bool
Returns whether `name` is a legal namespaced identifier.
This returns `true` if IsValidIdentifier() does.
- **Parameters**
- **name** (str) –
### IsValidPathString
**classmethod** IsValidPathString(pathString, errMsg) -> bool
Return true if `pathString` is a valid path string, meaning that passing the string to the SdfPath constructor will result in a valid, non-empty SdfPath.
Otherwise, return false and if `errMsg` is not None, set the pointed-to string to the parse error.
- **Parameters**
- **pathString** (str) –
- **errMsg** (str) –
### JoinIdentifier
**classmethod** JoinIdentifier(names) -> str
Join `names` into a single identifier using the namespace delimiter.
Any empty strings present in `names` are ignored when joining.
- **Parameters**
- **names** (str) –
**Parameters**
- **names** (`list[str]`) –
---
### JoinIdentifier(names) -> str
Join **names** into a single identifier using the namespace delimiter.
Any empty strings present in **names** are ignored when joining.
---
### Parameters
**names** (**list** **[** **TfToken** **]**) –
---
### JoinIdentifier(lhs, rhs) -> str
Join **lhs** and **rhs** into a single identifier using the namespace delimiter.
Returns **lhs** if **rhs** is empty and vice versa. Returns an empty string if both **lhs** and **rhs** are empty.
---
### Parameters
- **lhs** (**str**) –
- **rhs** (**str**) –
---
### MakeAbsolutePath(anchor) -> Path
Returns the absolute form of this path using **anchor** as the relative basis.
**anchor** must be an absolute prim path.
If this path is a relative path, resolve it using **anchor** as the relative basis.
If this path is already an absolute path, just return a copy.
### Parameters
**anchor** (**Path**) –
### MakeRelativePath(anchor) -> Path
Returns the relative form of this path using `anchor` as the relative basis.
`anchor` must be an absolute prim path.
If this path is an absolute path, return the corresponding relative path that is relative to the absolute path given by `anchor`.
If this path is a relative path, return the optimal relative path to the absolute path given by `anchor`. (The optimal relative path from a given prim path is the relative path with the least leading dot-dots.)
**Parameters**
**anchor** (`Path`) –
### RemoveAncestorPaths
**classmethod** RemoveAncestorPaths(paths) -> None
Remove all elements of `paths` that prefix other elements in `paths`.
As a side-effect, the result is left in sorted order.
**Parameters**
**paths** (`list[SdfPath]`) –
### RemoveCommonSuffix
RemoveCommonSuffix(otherPath, stopAtRootPrim) -> tuple[Path, Path]
Find and remove the longest common suffix from two paths.
Returns this path and `otherPath` with the longest common suffix removed (first and second, respectively). If the two paths have no common suffix then the paths are returned as-is. If the paths are equal then this returns empty paths for relative paths and absolute roots for absolute paths. The paths need not be the same length.
If `stopAtRootPrim` is `true` then neither returned path will be the root path. That, in turn, means that some common suffixes will not be removed. For example, if `stopAtRootPrim` is `true` then the paths /A/B and /B will be returned as is. Were it `false` then the result would be /A and /. Similarly paths /A/B/C and /B/C would return /A/B and /B if `stopAtRootPrim` is `true` but /A and / if it’s `false`.
**Parameters**
- **otherPath** (`Path`) –
- **stopAtRootPrim** (`bool`) –
### RemoveDescendentPaths
**classmethod** RemoveDescendentPaths(paths) -> None
Remove all elements of `paths` that are prefixed by other elements in `paths`.
As a side-effect, the result is left in sorted order.
**Parameters**
- **paths** (`list[SdfPath]`) –
### ReplaceName
Return a copy of this path with its final component changed to `newName`.
This path must be a prim or property path.
This method is shorthand for path.GetParentPath().AppendChild(newName) for prim paths, path.GetParentPath().AppendProperty(newName) for prim property paths, and path.GetParentPath().AppendRelationalAttribute(newName) for relational attribute paths.
Note that only the final path component is ever changed. If the name of the final path component appears elsewhere in the path, it will not be modified.
Some examples:
- ReplaceName('/chars/MeridaGroup','AngusGroup') -> '/chars/AngusGroup'
- ReplaceName('/Merida.tx','ty') -> '/Merida.ty'
- ReplaceName('/Merida.tx[targ].tx','ty') -> '/Merida.tx[targ].ty'
**Parameters**
- **newName** (`str`) –
### ReplacePrefix
Returns a path with all occurrences of the prefix path `oldPrefix` replaced with the prefix path `newPrefix`.
If fixTargetPaths is true, any embedded target paths will also have their paths replaced. This is the default.
If this is not a target, relational attribute, or mapper path, this will do zero or one path prefix replacements; otherwise, the number of replacements can be greater than one.
**Parameters**
- **oldPrefix** (`Path`) –
- **newPrefix** (`Path`) –
- **fixTargetPaths** (`bool`) –
### ReplaceTargetPath
Returns a path with its target path replaced with `newTargetPath`.
**Parameters**
- **newTargetPath** (`Path`) –
For a relational attribute path, this replaces the relational attribute’s target path; the path must be a relational attribute path.
### StripAllVariantSelections
Create a path by stripping all variant selections from all components of this path, leaving a path with no embedded variant selections.
### StripNamespace
**classmethod** StripNamespace(name) -> str
Returns `name` stripped of any namespaces.
This does not check the validity of the name; it just attempts to remove anything that looks like a namespace.
#### Parameters
- **name** (`str`) –
### StripPrefixNamespace
**classmethod** StripPrefixNamespace(name, matchNamespace) -> tuple[str, bool]
Returns (`name`, `true`) where `name` is stripped of the prefix specified by `matchNamespace` if `name` indeed starts with `matchNamespace`.
Returns (`name`, `false`) otherwise, with `name` unmodified.
This function deals with both the case where `matchNamespace` contains the trailing namespace delimiter ’:’ or not.
#### Parameters
- **name** (`str`) –
- **matchNamespace** (`str`) –
### TokenizeIdentifier
**classmethod** TokenizeIdentifier(name) -> list[str]
Tokenizes `name` by the namespace delimiter.
Returns the empty vector if `name` is not a valid namespaced identifier.
**Parameters**
- **name** (`str`) –
**absoluteIndicator** = '/'
**absoluteRootPath** = Sdf.Path('/')
**childDelimiter** = '/'
**elementString**
The string representation of the terminal component of this path. This path can be reconstructed via thisPath.GetParentPath().AppendElementString(thisPath.element). None of absoluteRootPath, reflexiveRelativePath, nor emptyPath possess the above quality; their .elementString is the empty string.
**emptyPath** = Sdf.Path.emptyPath
**expressionIndicator** = 'expression'
**isEmpty**
- **Type**: bool
- Returns true if this is the empty path ( SdfPath::EmptyPath() ).
**mapperArgDelimiter** = '.'
**mapperIndicator** = 'mapper'
**name**
The name of the prim, property or relational attribute identified by the path.
## Attributes and Properties
### namespaceDelimiter
= ':'
### parentPathElement
= '..'
### pathElementCount
The number of path elements in this path.
### pathString
The string representation of this path.
### propertyDelimiter
= '.'
### reflexiveRelativePath
= Sdf.Path('.')
### relationshipTargetEnd
= ']'
### relationshipTargetStart
= '['
### targetPath
The relational attribute target path for this path.
EmptyPath if this is not a relational attribute path.
## Classes
### PathArray
`class pxr.Sdf.PathArray`
An array of type SdfPath.
### PathListOp
`class pxr.Sdf.PathListOp`
**Methods:**
- `ApplyOperations()`
- `Clear()`
- `ClearAndMakeExplicit()`
- `Create()`
- `CreateExplicit()`
- `GetAddedOrExplicitItems()`
- `HasItem()`
### Attributes
- `addedItems`
- `appendedItems`
- `deletedItems`
- `explicitItems`
- `isExplicit`
- `orderedItems`
- `prependedItems`
### pxr.Sdf.PathListOp.Create
- **Method**: `static Create()`
### pxr.Sdf.PathListOp.CreateExplicit
- **Method**: `static CreateExplicit()`
### pxr.Sdf.PathListOp.GetAddedOrExplicitItems
- **Method**: `GetAddedOrExplicitItems()`
### pxr.Sdf.PathListOp.HasItem
- **Method**: `HasItem()`
### pxr.Sdf.PathListOp.addedItems
- **Property**: `property addedItems`
### pxr.Sdf.PathListOp.appendedItems
- **Property**: `property appendedItems`
### pxr.Sdf.PathListOp.deletedItems
- **Property**: `property deletedItems`
### pxr.Sdf.PathListOp.explicitItems
- **Property**: `property explicitItems`
### pxr.Sdf.PathListOp.isExplicit
- **Property**: `property isExplicit`
### pxr.Sdf.PathListOp.orderedItems
- **Property**: `property orderedItems`
### pxr.Sdf.PathListOp.prependedItems
- **Property**: `property prependedItems`
### pxr.Sdf.Payload
- **Class**: `pxr.Sdf.Payload`
- **Description**: Represents a payload and all its meta data. A payload represents a prim reference to an external layer. A payload is similar to a prim reference (see SdfReference) with the major difference that payloads are explicitly loaded by the user. Unloaded payloads represent a boundary that lazy composition and system behaviors will not traverse across, providing a user-visible way to manage the working set of the scene.
- **Attributes**:
  - `assetPath`
  - `layerOffset`
  - `primPath`
### pxr.Sdf.Payload.assetPath
`property assetPath`
Sets a new asset path for the layer the payload uses.
See SdfAssetPath for what characters are valid in `assetPath`.
Type: str
Returns the asset path of the layer that the payload uses.
### pxr.Sdf.Payload.layerOffset
`property layerOffset`
Sets a new layer offset.
Type: LayerOffset
Returns the layer offset associated with the payload.
### pxr.Sdf.Payload.primPath
`property primPath`
Sets a new prim path for the prim that the payload uses.
Type: Path
Returns the scene path of the prim for the payload.
### pxr.Sdf.PayloadListOp
`class pxr.Sdf.PayloadListOp`
**Methods:**
- `ApplyOperations()`
- `Clear()`
- `ClearAndMakeExplicit()`
- `Create()`
- `CreateExplicit()`
- `GetAddedOrExplicitItems()`
- `HasItem()`
**Attributes:**
- `addedItems`
- `appendedItems`
- `deletedItems`
- `explicitItems`
- `isExplicit`
- `orderedItems`
- `prependedItems`
## Class: pxr.Sdf.Permission
### Methods
- **GetValueFromName**
### Attributes
- **allValues**
## Class: pxr.Sdf.PrimSpec
Represents a prim description in an SdfLayer object.
Every SdfPrimSpec object is defined in a layer. It is identified by its path (SdfPath class) in the namespace hierarchy of its layer. SdfPrimSpecs can be created using the New() method as children of either the containing SdfLayer itself (for "root level" prims), or as children of other SdfPrimSpec objects to extend a hierarchy. The helper function SdfCreatePrimInLayer() can be used to quickly create a hierarchy of primSpecs.
SdfPrimSpec objects have properties of two general types: attributes (containing values) and relationships (different types of connections to other prims and attributes). Attributes are represented by the SdfAttributeSpec class and relationships by the SdfRelationshipSpec class. Each prim has its own namespace of properties. Properties are stored and accessed by their name.
SdfPrimSpec objects have a typeName, permission restriction, and they reference and inherit prim paths. Permission restrictions control which other layers may refer to, or express opinions about a prim. See the SdfPermission class for more information.
> - Insert doc about references and inherits here.
> - Should have validate... methods for name, children, properties
**Methods:**
| Method Name | Description |
|-------------|-------------|
| ApplyNameChildrenOrder(vec) | Reorders the given list of child names according to the reorder nameChildren statement for this prim. |
| ApplyPropertyOrder(vec) | Reorders the given list of property names according to the reorder properties statement for this prim. |
| BlockVariantSelection(variantSetName) | Blocks the variant selected for the given variant set by setting the variant selection to empty. |
| CanSetName(newName, whyNot) | Returns true if setting the prim spec's name to `newName` will succeed. |
| ClearActive() | Removes the active opinion in this prim spec if there is one. |
| ClearInstanceable() | Clears the value for the prim's instanceable flag. |
| ClearKind() | Remove the kind opinion from this prim spec if there is one. |
| ClearPayloadList() | Clears the payloads for this prim. |
| ClearReferenceList() | Clears the references for this prim. |
| GetAttributeAtPath(path) | Returns an attribute given its `path`. |
| GetObjectAtPath(path) | Returns a prim or property given its namespace path. |
| GetPrimAtPath(path) | Returns a prim given its `path`. |
| GetPropertyAtPath(path) | Returns a property given its `path`. |
| `GetRelationshipAtPath(path)` | Returns a relationship given its path. |
| `GetVariantNames(name)` | Returns list of variant names for the given variant set. |
| `HasActive()` | Returns true if this prim spec has an opinion about active. |
| `HasInstanceable()` | Returns true if this prim spec has a value authored for its instanceable flag, false otherwise. |
| `HasKind()` | Returns true if this prim spec has an opinion about kind. |
| `RemoveProperty(property)` | Removes the property. |
**Attributes:**
| Attribute | Description |
|-----------|-------------|
| `ActiveKey` | |
| `AnyTypeToken` | |
| `CommentKey` | |
| `CustomDataKey` | |
| `DocumentationKey` | |
| `HiddenKey` | |
| `InheritPathsKey` | |
| `KindKey` | |
| `PayloadKey` | |
| `PermissionKey` | |
| `PrefixKey` | |
| `PrefixSubstitutionsKey` | |
| `PrimOrderKey` | |
| `PropertyOrderKey` | |
| `ReferencesKey` | |
| `RelocatesKey` | |
| `SpecializesKey` | |
| `SpecifierKey` | |
| `SymmetricPeerKey` | |
| `SymmetryArgumentsKey` | |
| `SymmetryFunctionKey` | |
| `TypeNameKey` | |
| `VariantSelectionKey` | |
| `VariantSetNamesKey` | |
| `active` | Whether this prim spec is active. |
| `assetInfo` | Returns the asset info dictionary for this prim. |
| `attributes` | The attributes of this prim, as an ordered dictionary. |
- **comment**: The prim's comment string.
- **customData**: The custom data for this prim.
- **documentation**: The prim's documentation string.
- **expired**:
- **hasPayloads**: Returns true if this prim has payloads set.
- **hasReferences**: Returns true if this prim has references set.
- **hidden**: Whether this prim spec will be hidden in browsers.
- **inheritPathList**: A PathListEditor for the prim's inherit paths.
- **instanceable**: Whether this prim spec is flagged as instanceable.
- **kind**: What kind of model this prim spec represents, if any.
- **name**: The prim's name.
- **nameChildren**: The prim name children of this prim, as an ordered dictionary.
- **nameChildrenOrder**: Get/set the list of child names for this prim's 'reorder nameChildren' statement.
- **nameParent**: The name parent of this prim.
- **nameRoot**: The name pseudo-root of this prim.
- **payloadList**: A PayloadListEditor for the prim's payloads.
- **permission**: The prim's permission restriction.
- `prefix` - The prim's prefix.
- `prefixSubstitutions` - Dictionary of prefix substitutions.
- `properties` - The properties of this prim, as an ordered dictionary.
- `propertyOrder` - Get/set the list of property names for this prim's 'reorder properties' statement.
- `realNameParent` - The name parent of this prim.
- `referenceList` - A ReferenceListEditor for the prim's references.
- `relationships` - The relationships of this prim, as an ordered dictionary.
- `relocates` - An editing proxy for the prim's map of relocation paths.
- `specializesList` - A PathListEditor for the prim's specializes.
- `specifier` - The prim's specifier (SpecifierDef or SpecifierOver).
- `suffix` - The prim's suffix.
- `suffixSubstitutions` - Dictionary of suffix substitutions.
- `symmetricPeer` - The prim's symmetric peer.
- `symmetryArguments` - Dictionary with prim symmetry arguments.
- `symmetryFunction` - The prim's symmetry function.
- `typeName` - The type of this prim.
- `variantSelections` - Dictionary whose keys are variant set names and whose values are the variants chosen for each set.
- `variantSetNameList` - A StringListEditor for the names of the variant sets for this prim.
- `variantSets` - The VariantSetSpecs for this prim indexed by name.
### ApplyNameChildrenOrder
```python
ApplyNameChildrenOrder(vec) → None
```
Reorders the given list of child names according to the reorder nameChildren statement for this prim.
This routine employs the standard list editing operation for ordered items in a ListEditor.
**Parameters**
- **vec** (list[str]) –
### ApplyPropertyOrder
```python
ApplyPropertyOrder(vec) → None
```
Reorders the given list of property names according to the reorder properties statement for this prim.
This routine employs the standard list editing operation for ordered items in a ListEditor.
**Parameters**
- **vec** (list[str]) –
### BlockVariantSelection
```python
BlockVariantSelection(variantSetName) → None
```
Blocks the variant selected for the given variant set by setting the variant selection to empty.
**Parameters**
- **variantSetName** (str) –
### CanSetName
```python
CanSetName(newName, whyNot) → bool
```
Returns true if setting the prim spec’s name to `newName` will succeed.
Returns false if it won’t, and sets `whyNot` with a string describing why not.
**Parameters**
- **newName** (str) –
- **whyNot** (str) –
### ClearActive()
- Removes the active opinion in this prim spec if there is one.
### ClearInstanceable()
- Clears the value for the prim’s instanceable flag.
### ClearKind()
- Remove the kind opinion from this prim spec if there is one.
### ClearPayloadList()
- Clears the payloads for this prim.
### ClearReferenceList()
- Clears the references for this prim.
### GetAttributeAtPath(path)
- Returns an attribute given its `path`.
- Returns invalid handle if there is no attribute at `path`. This is simply a more specifically typed version of GetObjectAtPath.
- **Parameters**
- **path** (`Path`) –
### GetObjectAtPath(path)
- Returns a prim or property given its namespace path.
- If path is relative then it will be interpreted as relative to this prim. If it is absolute then it will be interpreted as absolute in this prim’s layer. The return type can be either PrimSpecPtr or PropertySpecPtr.
### GetPrimAtPath
Returns a prim given its `path`.
Returns invalid handle if there is no prim at `path`. This is simply a more specifically typed version of GetObjectAtPath.
#### Parameters
- **path** (`Path`) –
### GetPropertyAtPath
Returns a property given its `path`.
Returns invalid handle if there is no property at `path`. This is simply a more specifically typed version of GetObjectAtPath.
#### Parameters
- **path** (`Path`) –
### GetRelationshipAtPath
Returns a relationship given its `path`.
Returns invalid handle if there is no relationship at `path`. This is simply a more specifically typed version of GetObjectAtPath.
#### Parameters
- **path** (`Path`) –
### GetVariantNames
Returns list of variant names for the given variant set.
#### Parameters
- **name** (`str`) –
### HasActive
Returns true if this prim spec has an opinion about active.
### HasInstanceable
Returns true if this prim spec has a value authored for its instanceable flag, false otherwise.
### HasKind
Returns true if this prim spec has an opinion about kind.
### RemoveProperty
Removes the property.
#### Parameters
- **property** (`PropertySpec`) –
### ActiveKey
= 'active'
### AnyTypeToken
= '__AnyType__'
### CommentKey
= 'comment'
### CustomDataKey
= 'customData'
### DocumentationKey
= 'documentation'
### HiddenKey
= 'hidden'
### InheritPathsKey
= 'inheritPaths'
### KindKey
= 'kind'
### PayloadKey
= 'payload'
### PermissionKey
= 'permission'
### PrefixKey
= 'prefix'
### PrefixSubstitutionsKey
= 'prefixSubstitutions'
### PrimOrderKey
= 'primOrder'
### PropertyOrderKey
= 'propertyOrder'
### ReferencesKey
= 'references'
### RelocatesKey
= 'relocates'
### SpecializesKey
= 'specializes'
### SpecifierKey
= 'specifier'
### SymmetricPeerKey
= 'symmetricPeer'
### SymmetryArgumentsKey
= 'symmetryArguments'
### SymmetryFunctionKey
= 'symmetryFunction'
### TypeNameKey
= 'typeName'
### VariantSelectionKey
= 'variantSelection'
### VariantSetNamesKey
= 'variantSetNames'
### active
Whether this prim spec is active.
The default value is true.
### assetInfo
Returns the asset info dictionary for this prim.
The default value is an empty dictionary.
The asset info dictionary is used to annotate prims representing the root-prims of assets (generally organized as models) with various data related to asset management. For example, asset name, root layer identifier, asset version etc.
### attributes
The attributes of this prim, as an ordered dictionary.
### comment
The prim's comment string.
### customData
The custom data for this prim.
The default value for custom data is an empty dictionary.
Custom data is for use by plugins or other non-tools supplied extensions that need to be able to store data attached to arbitrary scene objects. Note that if the only objects you want to store data on are prims, using custom attributes is probably a better choice. But if you need to possibly store this data on attributes or relationships or as annotations on reference arcs, then custom data is an appropriate choice.
### documentation
The prim's documentation string.
### expired
### hasPayloads
Returns true if this prim has payloads set.
### hasReferences
Returns true if this prim has references set.
### hidden
Whether this prim spec will be hidden in browsers.
The default value is false.
### inheritPathList
A PathListEditor for the prim’s inherit paths.
The list of the inherit paths for this prim may be modified with this PathListEditor.
A PathListEditor may express a list either as an explicit value or as a set of list editing operations. See PathListEditor for more information.
### instanceable
Whether this prim spec is flagged as instanceable.
The default value is false.
### kind
What kind of model this prim spec represents, if any.
The default value is an empty string.
### name
The prim’s name.
### nameChildren
The prim name children of this prim, as an ordered dictionary.
Note that although this property is described as being read-only, you can modify the contents to add, remove, or reorder children.
### nameChildrenOrder
Get/set the list of child names for this prim’s ‘reorder nameChildren’ statement.
### nameParent
The name parent of this prim.
### nameRoot
The name pseudo-root of this prim.
### payloadList
A PayloadListEditor for the prim’s payloads.
The list of the payloads for this prim may be modified with this PayloadListEditor.
A PayloadListEditor may express a list either as an explicit value or as a set of list editing operations. See PayloadListEditor for more information.
### permission
The prim's permission restriction.
The default value is SdfPermissionPublic.
### prefix
The prim's prefix.
### prefixSubstitutions
Dictionary of prefix substitutions.
### properties
The properties of this prim, as an ordered dictionary.
Note that although this property is described as being read-only, you can modify the contents to add, remove, or reorder properties.
### propertyOrder
Get/set the list of property names for this prim's 'reorder properties' statement.
### realNameParent
The name parent of this prim.
### referenceList
A ReferenceListEditor for the prim's references.
The list of the references for this prim may be modified with this ReferenceListEditor.
A ReferenceListEditor may express a list either as an explicit value or as a set of list editing operations. See ReferenceListEditor for more information.
### relationships
The relationships of this prim, as an ordered dictionary.
### relocates
An editing proxy for the prim's map of relocation paths.
The map of source-to-target paths specifying namespace relocation may be set or cleared whole, or individual map entries may be added, removed, or edited.
### specializesList
A PathListEditor for the prim's specializes.
The list of the specializes for this prim may be modified with this PathListEditor.
A PathListEditor may express a list either as an explicit value or as a set of list editing operations. See PathListEditor for more information.
### specifier
The prim's specifier (SpecifierDef or SpecifierOver).
The default value is SpecifierOver.
### suffix
The prim's suffix.
### suffixSubstitutions
Dictionary of suffix substitutions.
### symmetricPeer
The prim's symmetric peer.
### symmetryArguments
Dictionary with prim symmetry arguments.
Although this property is marked read-only, you can modify the contents to add, change, and clear symmetry arguments.
### symmetryFunction
The prim's symmetry function.
### typeName
The type of this prim.
### variantSelections
Dictionary whose keys are variant set names and whose values are the variants chosen for each set.
Although this property is marked read-only, you can modify the contents to add, change, and clear variants.
### variantSetNameList
A StringListEditor for the names of the variant sets for this prim.
The list of the names of the variant sets of this prim may be modified with this StringListEditor.
A StringListEditor may express a list either as an explicit value or as a set of list editing operations. See StringListEditor for more information.
Although this property is marked as read-only, the returned object is modifiable.
### variantSets
The VariantSetSpecs for this prim indexed by name.
Although this property is marked as read-only, you can modify the contents to remove variant sets. New variant sets are created by creating them with the prim as the owner.
### pxr.Sdf.PropertySpec
Base class for SdfAttributeSpec and SdfRelationshipSpec.
Scene Spec Attributes (SdfAttributeSpec) and Relationships (SdfRelationshipSpec) are the basic properties that make up Scene Spec Prims (SdfPrimSpec). They share many qualities and can sometimes be treated uniformly. The common qualities are provided by this base class.
NOTE: Do not use Python reserved words and keywords as attribute names. This will cause attribute resolution to fail.
**Methods:**
| Method | Description |
| --- | --- |
| `ClearDefaultValue` | Clear the attribute's default value. |
| `HasDefaultValue` | Check if the attribute has a default value. |
### Attributes:
- **AssetInfoKey**
- **CommentKey**
- **CustomDataKey**
- **CustomKey**
- **DisplayGroupKey**
- **DisplayNameKey**
- **DocumentationKey**
- **HiddenKey**
- **PermissionKey**
- **PrefixKey**
- **SymmetricPeerKey**
- **SymmetryArgumentsKey**
- **SymmetryFunctionKey**
| Property | Description |
|----------|-------------|
| `assetInfo` | Returns the asset info dictionary for this property. |
| `comment` | A comment describing the property. |
| `custom` | Whether this property spec declares a custom attribute. |
| `customData` | The property's custom data. |
| `default` | The default value of this property. |
| `displayGroup` | DisplayGroup for the property. |
| `displayName` | DisplayName for the property. |
| `documentation` | Documentation for the property. |
| `expired` | |
| `hasOnlyRequiredFields` | Indicates whether this spec has any significant data other than just what is necessary for instantiation. |
| `hidden` | Whether this property will be hidden in browsers. |
| `name` | The name of the property. |
| `owner` | The owner of this property. |
| `permission` | The property's permission restriction. |
| `prefix` | Prefix for the property. |
| `symmetricPeer` | The property's symmetric peer. |
| `symmetryArguments` | Dictionary with property symmetry arguments. |
| `symmetryFunction` | The property's symmetry function. |
| `variability` | Returns the variability of the property. |
### ClearDefaultValue
Clear the attribute’s default value.
### HasDefaultValue
Returns true if a default value is set for this attribute.
### AssetInfoKey
= 'assetInfo'
### CommentKey
= 'comment'
### CustomDataKey
= 'customData'
### CustomKey
= 'custom'
### DisplayGroupKey
= 'displayGroup'
### DisplayNameKey
= 'displayName'
### DocumentationKey
= 'documentation'
### HiddenKey
= 'hidden'
### PermissionKey
= 'permission'
### PrefixKey
= 'prefix'
### SymmetricPeerKey
= 'symmetricPeer'
### SymmetryArgumentsKey
= 'symmetryArguments'
### SymmetryFunctionKey
= 'symmetryFunction'
### assetInfo
Returns the asset info dictionary for this property.
The default value is an empty dictionary.
The asset info dictionary is used to annotate SdfAssetPath-valued attributes pointing to the root-prims of assets (generally organized as models) with various data related to asset management. For example, asset name, root layer identifier, asset version etc.
Note: It is only valid to author assetInfo on attributes that are of type SdfAssetPath.
### comment
A comment describing the property.
### custom
Whether this property spec declares a custom attribute.
### customData
The property's custom data.
The default value for custom data is an empty dictionary.
Custom data is for use by plugins or other non-tools supplied extensions that need to be able to store data attached to arbitrary scene objects. Note that if the only objects you want to store data on are prims, using custom attributes is probably a better choice. But if you need to possibly store this data on attributes or relationships or as annotations on reference arcs, then custom data is an appropriate choice.
### default
The default value of this property.
### displayGroup
DisplayGroup for the property.
### displayName
DisplayName for the property.
### documentation
Documentation for the property.
### PropertySpec.expired
- **Description:**
### PropertySpec.hasOnlyRequiredFields
- **Description:** Indicates whether this spec has any significant data other than just what is necessary for instantiation.
- This is a less strict version of isInert, returning True if the spec contains as much as the type and name.
### PropertySpec.hidden
- **Description:** Whether this property will be hidden in browsers.
### PropertySpec.name
- **Description:** The name of the property.
### PropertySpec.owner
- **Description:** The owner of this property. Either a relationship or a prim.
### PropertySpec.permission
- **Description:** The property’s permission restriction.
### PropertySpec.prefix
- **Description:** Prefix for the property.
### PropertySpec.symmetricPeer
- **Description:** The property’s symmetric peer.
### PropertySpec.symmetryArguments
- **Description:** Dictionary with property symmetry arguments.
- Although this property is marked read-only, you can modify the contents to add, change, and clear symmetry arguments.
### PropertySpec.symmetryFunction
- **Description:** The property’s symmetry function.
### PropertySpec.variability
- **Description:** Returns the variability of the property.
- An attribute’s variability may be Varying, Uniform, Config or Computed.
- For an attribute, the default is Varying, for a relationship the default is Uniform.
- Varying relationships may be directly authored ‘animating’ targetpaths over time.
- Varying attributes may be directly authored, animated and affected on by Actions. They are the most flexible.
- Uniform attributes may be authored only with non-animated values (default values). They cannot be affected by Actions, but they can be connected to other Uniform attributes.
- Config attributes are the same as Uniform except that a Prim can choose to alter its collection of built-in properties based on the values of its Config attributes.
- Computed attributes may not be authored in scene description. Prims determine the values of their Computed attributes through Prim-specific computation. They may not be connected.
### pxr.Sdf.PseudoRootSpec
**Attributes:**
| Attribute | Description |
|-----------|-------------|
| expired | |
### pxr.Sdf.Reference
Represents a reference and all its meta data.
A reference is expressed on a prim in a given layer and it identifies a prim in a layer stack. All opinions in the namespace hierarchy under the referenced prim will be composed with the opinions in the namespace hierarchy under the referencing prim.
The asset path specifies the layer stack being referenced. If this asset path is non-empty, this reference is considered an 'external' reference to the layer stack rooted at the specified layer. If this is empty, this reference is considered an 'internal' reference to the layer stack containing (but not necessarily rooted at) the layer where the reference is authored.
The prim path specifies the prim in the referenced layer stack from which opinions will be composed. If this prim path is empty, it will be considered a reference to the default prim specified in the root layer of the referenced layer stack — see SdfLayer::GetDefaultPrim.
The meta data for a reference is its layer offset and custom data. The layer offset is an affine transformation applied to all anim splines in the referenced prim’s namespace hierarchy, see SdfLayerOffset for details. Custom data is for use by plugins or other non-tools supplied extensions that need to be able to store data associated with references.
**Methods:**
| Method | Description |
|--------------|-----------------------------------------------------------------------------|
| IsInternal() | Returns `true` in the case of an internal reference. |
**Attributes:**
| Attribute | Description |
|-------------|-------------|
| assetPath | None |
| customData | None |
| layerOffset | None |
| primPath | None |
### pxr.Sdf.Reference.IsInternal
Returns `true` in the case of an internal reference.
An internal reference is a reference with an empty asset path.
### pxr.Sdf.Reference.assetPath
**property** (type: str)
Returns the asset path to the root layer of the referenced layer stack. This will be empty in the case of an internal reference.
When set, specifies the asset path for the root layer of the referenced layer stack. This may be set to an empty string to specify an internal reference. See SdfAssetPath for what characters are valid in assetPath. If assetPath contains invalid characters, an error is issued and this reference's asset path is set to the empty asset path.
### pxr.Sdf.Reference.customData
**property** (type: VtDictionary)
Returns the custom data associated with the reference.
When set, replaces the custom data associated with the reference. Setting an individual custom data entry to an empty value removes that entry.
### pxr.Sdf.Reference.layerOffset
**property** (type: LayerOffset)
Returns the layer offset associated with the reference. When set, specifies a new layer offset.
### pxr.Sdf.Reference.primPath
**property** (type: Path)
Returns the path of the referenced prim. This will be empty if the referenced prim is the default prim specified in the referenced layer stack.
When set, specifies the path of the referenced prim. This may be set to an empty path to specify a reference to the default prim in the referenced layer stack.
### pxr.Sdf.ReferenceListOp
**Methods:**
- `ApplyOperations()`
- `Clear()`
- `ClearAndMakeExplicit()`
- `Create()`
- `CreateExplicit()`
- `GetAddedOrExplicitItems()`
- `HasItem()`
**Attributes:**
- `addedItems`
- `appendedItems`
- `deletedItems`
- `explicitItems`
- `isExplicit`
- `orderedItems`
- `prependedItems`
### pxr.Sdf.ReferenceListOp.addedItems
### pxr.Sdf.ReferenceListOp.appendedItems
### pxr.Sdf.ReferenceListOp.deletedItems
### pxr.Sdf.ReferenceListOp.explicitItems
### pxr.Sdf.ReferenceListOp.isExplicit
### pxr.Sdf.ReferenceListOp.orderedItems
### pxr.Sdf.ReferenceListOp.prependedItems
### pxr.Sdf.RelationshipSpec
A property that contains a reference to one or more SdfPrimSpec instances.
A relationship may refer to one or more target prims or attributes. All targets of a single relationship are considered to be playing the same role. Note that `role` does not imply that the target prims or attributes are of the same `type`.
Relationships may be annotated with relational attributes. Relational attributes are named SdfAttributeSpec objects containing values that describe the relationship. For example, point weights are commonly expressed as relational attributes.
**Methods:**
- `RemoveTargetPath(path, preserveTargetOrder)`: Removes the specified target path.
- `ReplaceTargetPath(oldPath, newPath)`: Updates the specified target path.
**Attributes:**
| Attribute | Description |
|-----------|-------------|
| `TargetsKey` | |
| `expired` | |
| `noLoadHint` | Whether the target must be loaded to load the prim this relationship is attached to. |
| `targetPathList` | A PathListEditor for the relationship's target paths. |
### RemoveTargetPath
```python
RemoveTargetPath(path, preserveTargetOrder) -> None
```
Removes the specified target path.
Removes the given target path and any relational attributes for the given target path. If `preserveTargetOrder` is `true`, Erase() is called on the list editor instead of RemoveItemEdits(). This preserves the ordered items list.
**Parameters**
- **path** (Path) –
- **preserveTargetOrder** (bool) –
### ReplaceTargetPath
```python
ReplaceTargetPath(oldPath, newPath) -> None
```
Updates the specified target path.
Replaces the path given by `oldPath` with the one specified by `newPath`. Relational attributes are updated if necessary.
**Parameters**
- **oldPath** (Path) –
- **newPath** (Path) –
### TargetsKey
```python
TargetsKey = 'targetPaths'
```
### expired
```python
property expired
```
### noLoadHint
```python
property noLoadHint
```
whether the target must be loaded to load the prim this relationship is attached to.
### targetPathList
```python
property targetPathList
```
A PathListEditor for the relationship's target paths.
The list of the target paths for this relationship may be modified with this PathListEditor.
A PathListEditor may express a list either as an explicit value or as a set of list editing operations. See PathListEditor for more information.
## pxr.Sdf.Spec
Base class for all Sdf spec classes.
**Methods:**
| Method | Description |
| --- | --- |
| ClearInfo(key) | Clears the value for scene spec info with the given key. |
| GetAsText() | |
| GetFallbackForInfo(key) | key : string |
| GetInfo(key) | Gets the value for the given metadata key. |
| GetMetaDataDisplayGroup(key) | Returns this metadata key's displayGroup. |
| GetMetaDataInfoKeys() | Returns the list of metadata info keys for this object. |
| GetTypeForInfo(key) | key : string |
| HasInfo(key) | key : string |
| IsInert() | Indicates whether this spec has any significant data. |
| ListInfoKeys() | Returns the full list of info keys currently set on this object. |
| SetInfo(key, value) | Sets the value for the given metadata key. |
| SetInfoDictionaryValue(dictionaryKey, ...) | Sets the value for entryKey to value within the dictionary with the given metadata key dictionaryKey. |
**Attributes:**
| Attribute | Description |
|------------|------------------------------------------------------------|
| expired | |
| isInert | Indicates whether this spec has any significant data. |
| layer | The owning layer. |
| path | The absolute scene path. |
### ClearInfo
```python
ClearInfo(key)
```
key : string
Clears the value for scene spec info with the given key. After calling this, HasInfo() will return false. To make HasInfo() return true, set a value for that scene spec info.
### GetAsText
```python
GetAsText()
```
### GetFallbackForInfo
```python
GetFallbackForInfo(key)
```
key : string
Returns the fallback value for the given key.
### GetInfo
```python
GetInfo(key)
```
Gets the value for the given metadata key.
This is interim API which is likely to change. Only editors with an immediate specific need (like the Inspector) should use this API.
Parameters:
- **key** (str) –
### GetMetaDataDisplayGroup
```python
GetMetaDataDisplayGroup(key)
```
Returns this metadata key’s displayGroup.
Parameters:
- **key** (str) –
### GetMetaDataInfoKeys
```python
GetMetaDataInfoKeys()
```
Returns the list of metadata info keys for this object.
This is not the complete list of keys, it is only those that should be considered to be metadata by inspectors or other presentation UI.
This is interim API which is likely to change. Only editors with an immediate specific need (like the Inspector) should use this API.
## GetTypeForInfo
- **Parameters**:
- key : string
- **Returns**: The type of value for the given key.
## HasInfo
- **Parameters**:
- key : string
- **Returns**: bool
- **Description**: Returns whether there is a setting for the scene spec info with the given key.
- **Note**: When asked for a value for one of its scene spec info, a valid value will always be returned. But if this API returns false for a scene spec info, the value of that info will be the defined default value.
- **Future Consideration**: This may change such that it is an error to ask for a value when there is none.
- **ComposedLayer Note**: When dealing with a composedLayer, it is not necessary to worry about whether a scene spec info ‘has a value’ because the composed layer will always have a valid value, even if it is the default.
- **Additional Info**: A spec may or may not have an expressed value for some of its scene spec info.
## IsInert
- **Description**: Indicates whether this spec has any significant data. If ignoreChildren is true, child scenegraph objects will be ignored.
## ListInfoKeys
- **Returns**: list[str]
- **Description**: Returns the full list of info keys currently set on this object. This does not include fields that represent names of children.
## SetInfo
- **Parameters**:
- key : string
- value : VtValue
- **Returns**: None
- **Description**: Sets the value for the given metadata key.
- **Error Handling**: It is an error to pass a value that is not the correct type for that given key.
- **API Stability**: This is interim API which is likely to change. Only editors with an immediate specific need (like the Inspector) should use this API.
## SetInfoDictionaryValue
- **Parameters**:
- dictionaryKey : string
- entryKey : string
- value : VtValue
- **Returns**: None
- **Description**: Sets the value for `entryKey` to `value` within the dictionary with the given metadata key `dictionaryKey`.
### expired
property expired
### isInert
property isInert
Indicates whether this spec has any significant data. This is for backwards compatibility, use IsInert instead.
Compatibility note: prior to presto 1.9, isInert (then isEmpty) was true for otherwise inert PrimSpecs with inert inherits, references, or variant sets. isInert is now false in such conditions.
### layer
property layer
The owning layer.
### path
property path
The absolute scene path.
### SpecType
class pxr.Sdf.SpecType
Methods:
- **GetValueFromName**
Attributes:
- **allValues**
#### GetValueFromName
static GetValueFromName()
#### allValues
allValues = (Sdf.SpecTypeUnknown, Sdf.SpecTypeAttribute, Sdf.SpecTypeConnection, Sdf.SpecTypeExpression, Sdf.SpecTypeMapper, Sdf.SpecTypeMapperArg, Sdf.SpecTypePrim, Sdf.SpecTypePseudoRoot, Sdf.SpecTypeRelationship, Sdf.SpecTypeRelationshipTarget, Sdf.SpecTypeVariant, Sdf.SpecTypeVariantSet)
### pxr.Sdf.Specifier
**Methods:**
- GetValueFromName
**Attributes:**
- allValues
### pxr.Sdf.Specifier.GetValueFromName
- static GetValueFromName()
### pxr.Sdf.Specifier.allValues
- allValues = (Sdf.SpecifierDef, Sdf.SpecifierOver, Sdf.SpecifierClass)
### pxr.Sdf.StringListOp
**Methods:**
- ApplyOperations
- Clear
- ClearAndMakeExplicit
- Create
- CreateExplicit
- GetAddedOrExplicitItems
- HasItem
**Attributes:**
- addedItems
- `appendedItems`
- `deletedItems`
- `explicitItems`
- `isExplicit`
- `orderedItems`
- `prependedItems`
### ApplyOperations()
### Clear()
### ClearAndMakeExplicit()
### Create()
### CreateExplicit()
### GetAddedOrExplicitItems()
### HasItem()
### addedItems
### appendedItems
### deletedItems
### explicitItems
### isExplicit
### orderedItems
### prependedItems
### pxr.Sdf.TimeCode
Value type that represents a time code. It’s equivalent to a double type value but is used to indicate that this value should be resolved by any time-based value resolution.
**Methods:**
- GetValue: Return the time value.
### GetValue() → float
Return the time value.
### pxr.Sdf.TimeCodeArray
An array of type SdfTimeCode.
### pxr.Sdf.TokenListOp
**Methods:**
- ApplyOperations
- Clear
- ClearAndMakeExplicit
- Create
- CreateExplicit
- GetAddedOrExplicitItems
- HasItem
**Attributes:**
- `addedItems`
- `appendedItems`
- `deletedItems`
- `explicitItems`
- `isExplicit`
- `orderedItems`
- `prependedItems`
### ApplyOperations()
### Clear()
### ClearAndMakeExplicit()
### Create()
### CreateExplicit()
### GetAddedOrExplicitItems()
### HasItem()
### pxr.Sdf.UInt64ListOp
**Methods:**
- ApplyOperations
- Clear
- ClearAndMakeExplicit
- Create
- CreateExplicit
- GetAddedOrExplicitItems
- HasItem
**Attributes:**
- `addedItems`
- `appendedItems`
- `deletedItems`
- `explicitItems`
- `isExplicit`
- `orderedItems`
- `prependedItems`
### ApplyOperations()
### Clear()
### ClearAndMakeExplicit()
### Create()
### CreateExplicit()
### GetAddedOrExplicitItems()
### HasItem()
### addedItems
### pxr.Sdf.UIntListOp
**Methods:**
- ApplyOperations
- Clear
- ClearAndMakeExplicit
- Create
- CreateExplicit
- GetAddedOrExplicitItems
- HasItem
**Attributes:**
- `addedItems`
- `appendedItems`
- `deletedItems`
- `explicitItems`
- `isExplicit`
- `orderedItems`
- `prependedItems`
### ApplyOperations()
### Clear()
### ClearAndMakeExplicit()
### Create()
### CreateExplicit()
### GetAddedOrExplicitItems()
### HasItem()
### addedItems
### appendedItems
### deletedItems
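The ListOp classes documented above (StringListOp, TokenListOp, UInt64ListOp, UIntListOp, and so on) all share the same composition semantics: an explicit list replaces the input entirely, while the deleted, prepended, and appended item fields edit a weaker list incrementally. The sketch below is a minimal pure-Python model of those semantics — it is not the pxr API, and `apply_list_op` is a hypothetical helper name; in pxr you would set `explicitItems`, `deletedItems`, `prependedItems`, and `appendedItems` on a list op and call `ApplyOperations`.

```python
# Illustrative pure-Python model of SdfListOp composition semantics.
# NOT the pxr.Sdf API: apply_list_op is a hypothetical helper used
# only to show how the list-op fields combine.

def apply_list_op(existing, *, explicit=None, deleted=(), prepended=(), appended=()):
    """Apply list-op fields to an existing (weaker) item list."""
    if explicit is not None:
        # An explicit list op replaces the weaker list entirely.
        return list(explicit)
    # Remove deleted items first.
    result = [i for i in existing if i not in deleted]
    # Items being prepended or appended are moved, not duplicated.
    result = [i for i in result if i not in prepended and i not in appended]
    return list(prepended) + result + list(appended)

items = apply_list_op(["a", "b", "c"], deleted=["b"], prepended=["x"], appended=["c"])
print(items)  # → ['x', 'a', 'c']
```

Note how `"c"` is moved to the end rather than duplicated: an item named in `appendedItems` that already exists in the weaker list is repositioned.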
## pxr.Sdf.UnregisteredValue
Stores a representation of the value for an unregistered metadata field encountered during text layer parsing.
This provides the ability to serialize this data to a layer, as well as limited inspection and editing capabilities (e.g., moving this data to a different spec or field) even when the data type of the value isn’t known.
### Attributes:
| Attribute | Description |
|-----------|-------------|
| `value` | VtValue |
### value Property
- **Type:** VtValue
- **Description:** Returns the wrapped VtValue specified in the constructor.
## pxr.Sdf.UnregisteredValueListOp
### Methods:
| Method | Description |
|---------------------------------|-------------|
| `ApplyOperations` | |
| `Clear` | |
| `ClearAndMakeExplicit` | |
| `Create` | |
| `CreateExplicit` | |
| `GetAddedOrExplicitItems`       | |
| `HasItem`                       | |
### Attributes:
| Attribute | Description |
|---------------------------------|-------------|
| `addedItems`                    | |
| `appendedItems`                 | |
| `deletedItems`                  | |
| `explicitItems`                 | |
| `isExplicit`                    | |
| `orderedItems`                  | |
| `prependedItems`                | |
### ApplyOperations()
### Clear()
### ClearAndMakeExplicit()
### Create()
### CreateExplicit()
### GetAddedOrExplicitItems()
### HasItem()
### addedItems
### appendedItems
### deletedItems
### explicitItems
### isExplicit
### orderedItems
### prependedItems
### pxr.Sdf.ValueBlock
A special value type that can be used to explicitly author an opinion for an attribute’s default value or time sample value that represents having no value. Note that this is different from not having a value authored.
One could author such a value in two ways.
```cpp
attribute->SetDefaultValue(VtValue(SdfValueBlock()));
...
layer->SetTimeSample(attribute->GetPath(), 101, VtValue(SdfValueBlock()));
```
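The distinction the paragraph above draws — an authored opinion that means "no value" versus no opinion at all — can be sketched in plain Python. This is an illustrative model, not the pxr API: `ValueBlock` and `resolve` below are hypothetical stand-ins for `Sdf.ValueBlock` and USD's value resolution.

```python
# Illustrative model (NOT the pxr API) of how a value block differs
# from having no authored value during value resolution.
class ValueBlock:
    """Sentinel: an authored opinion meaning 'no value'."""

def resolve(opinions):
    """Return the strongest opinion; a block hides weaker values."""
    for value in opinions:            # iterate strongest-first
        if isinstance(value, ValueBlock):
            return None               # explicitly blocked: no value
        if value is not None:
            return value              # None here means 'no opinion'
    return None

print(resolve([None, 3.0]))          # 3.0  (weaker opinion shows through)
print(resolve([ValueBlock(), 3.0]))  # None (block hides the weaker value)
```

A stronger layer with no opinion lets a weaker value through, while a stronger layer that authors a block suppresses it.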
### pxr.Sdf.ValueRoleNames
**Attributes:**
| Attribute | Description |
|-------------------|-------------|
| Color             | |
| EdgeIndex         | |
| FaceIndex         | |
| Frame             | |
| Normal            | |
| Point             | |
| PointIndex        | |
| TextureCoordinate | |
| Transform         | |
| Vector            | |
### Color
= 'Color'
### EdgeIndex
= 'EdgeIndex'
### FaceIndex
= 'FaceIndex'
### Frame
= 'Frame'
### Normal
= 'Normal'
### Point
= 'Point'
### PointIndex
= 'PointIndex'
### TextureCoordinate
= 'TextureCoordinate'
### Transform
= 'Transform'
### Vector
= 'Vector'
### pxr.Sdf.ValueTypeName
Represents a value type name, i.e. an attribute’s type name. Usually, a value type name associates a string with a TfType and an optional role, along with additional metadata. A schema registers all known value type names and may register multiple names for the same TfType and role pair. All name strings for a given pair are collectively called its aliases.
A value type name may also represent just a name string, without a TfType, role or other metadata. This is currently used exclusively to unserialize and re-serialize an attribute’s type name where that name is not known to the schema.
Because value type names can have aliases and those aliases may change in the future, clients should avoid using the value type name’s string representation except to report human readable messages and when serializing. Clients can look up a value type name by string using SdfSchemaBase::FindType() and shouldn’t otherwise need the string. Aliases compare equal, even if registered by different schemas.
**Attributes:**
- aliasesAsStrings
- arrayType: ValueTypeName
- cppTypeName
- defaultUnit: Enum
- defaultValue: VtValue
- isArray: bool
- isScalar: bool
- role: str
- scalarType: ValueTypeName
- type: TfType
### aliasesAsStrings
property
### arrayType
- **Type:** ValueTypeName
- **Description:** Returns the array version of this type name if it’s a scalar type name; otherwise returns this type name. If there is no array type name, this returns the invalid type name.
### cppTypeName
property
### defaultUnit
- **Type:** Enum
- **Description:** Returns the default unit enum for the type.
### defaultValue
- **Type:** VtValue
- **Description:** Returns the default value for the type.
### isArray
- **Type:** bool
- **Description:** Returns `true` iff this type is an array. The invalid type is considered neither scalar nor array.
### isScalar
- **Type:** bool
- **Description:** Returns `true` iff this type is a scalar. The invalid type is considered neither scalar nor array.
### role
- **Type:** str
- **Description:** Returns the type’s role.
### scalarType
- **Type:** ValueTypeName
- **Description:** Returns the scalar version of this type name if it’s an array type name; otherwise returns this type name. If there is no scalar type name, this returns the invalid type name.
### type
- **Type:** TfType
- **Description:** Returns the `TfType` of the type.
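The scalarType/arrayType pairing described above can be illustrated with the type-name strings themselves: in Sdf, an array type name is the scalar name with a `[]` suffix (e.g. `float` and `float[]`). The sketch below is a pure-Python model of that naming convention only — `array_type` and `scalar_type` are hypothetical helpers, not the pxr API, which returns full ValueTypeName objects rather than strings.

```python
# Illustrative sketch (NOT the pxr API): models the string-level
# relationship between a scalar type name and its array counterpart,
# mirroring the scalarType/arrayType properties described above.

def array_type(name: str) -> str:
    """Array version of a type name ('float' -> 'float[]')."""
    return name if name.endswith("[]") else name + "[]"

def scalar_type(name: str) -> str:
    """Scalar version of a type name ('float[]' -> 'float')."""
    return name[:-2] if name.endswith("[]") else name

print(array_type("float"))     # float[]
print(scalar_type("float[]"))  # float
```

Both helpers are idempotent on names that are already in the requested form, matching the documented behavior of returning "this type name" when no conversion applies.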
### pxr.Sdf.ValueTypeNames
**Methods:**
- Find
**Attributes:**
- Asset
- AssetArray
- Bool
- BoolArray
- Color3d
- Color3dArray
- Color3f
- Color3fArray
- Color3h
- Color3hArray
- Color4d
- Color4dArray
- Color4f
- Color4fArray
- Color4h
- Color4hArray
- Double
- Double2
- Double2Array
- Double3
- Double3Array
- Double4
- Double4Array
- DoubleArray
- Float
- Float2
- Float2Array
- Float3
- Float3Array
- Float4
- Float4Array
- FloatArray
- Frame4d
- Frame4dArray
- Half
- Half2
- Half2Array
- Half3
- Half3Array
- Half4
- Half4Array
- HalfArray
- Int
- Int2
- Int2Array
- Int3
- Int3Array
- Int4
- Int4Array
- Int64
- Int64Array
- IntArray
- Matrix2d
- Matrix2dArray
- Matrix3d
- Matrix3dArray
- Matrix4d
- Matrix4dArray
- Normal3d
- Normal3dArray
- Normal3f
- Normal3fArray
- Normal3h
- Normal3hArray
- Point3d
- Point3dArray
- Point3f
- Point3fArray
- Point3h
- Point3hArray
- Quatd
- QuatdArray
- Quatf
- QuatfArray
- Quath
- QuathArray
- String
- StringArray
- TexCoord2d
- TexCoord2dArray
- TexCoord2f
- TexCoord2fArray
- TexCoord2h
- TexCoord2hArray
- TexCoord3d
- TexCoord3dArray
- TexCoord3f
- TexCoord3fArray
- TexCoord3h
- TexCoord3hArray
- TimeCode
- TimeCodeArray
- Token
- TokenArray
- UChar
- UCharArray
- UInt
- UInt64
- UInt64Array
- UIntArray
- Vector3d
- Vector3dArray
- Vector3f
- Vector3fArray
- Vector3h
- Vector3hArray
### Find
### Asset
= `<pxr.Sdf.ValueTypeName object>`
### AssetArray
= `<pxr.Sdf.ValueTypeName object>`
### Bool
= `<pxr.Sdf.ValueTypeName object>`
### BoolArray
= `<pxr.Sdf.ValueTypeName object>`
### Color3d
= `<pxr.Sdf.ValueTypeName object>`
### Color3dArray
= `<pxr.Sdf.ValueTypeName object>`
### Color3f
= `<pxr.Sdf.ValueTypeName object>`
### Color3fArray
= `<pxr.Sdf.ValueTypeName object>`
### Color3h
= `<pxr.Sdf.ValueTypeName object>`
### Color3hArray
= `<pxr.Sdf.ValueTypeName object>`
### Color4d
= `<pxr.Sdf.ValueTypeName object>`
### Color4dArray
= `<pxr.Sdf.ValueTypeName object>`
### Color4f
= `<pxr.Sdf.ValueTypeName object>`
### Color4fArray
= `<pxr.Sdf.ValueTypeName object>`
### Color4h
= `<pxr.Sdf.ValueTypeName object>`
### Color4hArray
= `<pxr.Sdf.ValueTypeName object>`
### Double
= `<pxr.Sdf.ValueTypeName object>`
### Double2
= `<pxr.Sdf.ValueTypeName object>`
### Double2Array
= `<pxr.Sdf.ValueTypeName object>`
### Double3
= `<pxr.Sdf.ValueTypeName object>`
### Double3Array
= `<pxr.Sdf.ValueTypeName object>`
### Double4
= `<pxr.Sdf.ValueTypeName object>`
### Double4Array
= `<pxr.Sdf.ValueTypeName object>`
### DoubleArray
= `<pxr.Sdf.ValueTypeName object>`
### Float
= `<pxr.Sdf.ValueTypeName object>`
### Float2
= `<pxr.Sdf.ValueTypeName object>`
### Float2Array
= `<pxr.Sdf.ValueTypeName object>`
### Float3
= `<pxr.Sdf.ValueTypeName object>`
### Float3Array
= `<pxr.Sdf.ValueTypeName object>`
### Float4
= `<pxr.Sdf.ValueTypeName object>`
### Float4Array
= `<pxr.Sdf.ValueTypeName object>`
### FloatArray
= `<pxr.Sdf.ValueTypeName object>`
### Frame4d
= `<pxr.Sdf.ValueTypeName object>`
### Frame4dArray
= `<pxr.Sdf.ValueTypeName object>`
### Half
= `<pxr.Sdf.ValueTypeName object>`
### Half2
= `<pxr.Sdf.ValueTypeName object>`
### Half2Array
= `<pxr.Sdf.ValueTypeName object>`
### Half3
= `<pxr.Sdf.ValueTypeName object>`
### Half3Array
= `<pxr.Sdf.ValueTypeName object>`
### Half4
= `<pxr.Sdf.ValueTypeName object>`
### Half4Array
= `<pxr.Sdf.ValueTypeName object>`
### HalfArray
= `<pxr.Sdf.ValueTypeName object>`
### Int
= `<pxr.Sdf.ValueTypeName object>`
### Int2
= `<pxr.Sdf.ValueTypeName object>`
### Int2Array
= `<pxr.Sdf.ValueTypeName object>`
### Int3
= `<pxr.Sdf.ValueTypeName object>`
### Int3Array
= `<pxr.Sdf.ValueTypeName object>`
### Int4
= `<pxr.Sdf.ValueTypeName object>`
### Int4Array
= `<pxr.Sdf.ValueTypeName object>`
### Int64
= `<pxr.Sdf.ValueTypeName object>`
### Int64Array
= `<pxr.Sdf.ValueTypeName object>`
### IntArray
= `<pxr.Sdf.ValueTypeName object>`
### Matrix2d
= `<pxr.Sdf.ValueTypeName object>`
### Matrix2dArray
= `<pxr.Sdf.ValueTypeName object>`
### Matrix3d
= `<pxr.Sdf.ValueTypeName object>`
### Matrix3dArray
= `<pxr.Sdf.ValueTypeName object>`
### Matrix4d
= `<pxr.Sdf.ValueTypeName object>`
### Matrix4dArray
= `<pxr.Sdf.ValueTypeName object>`
### Normal3d
= `<pxr.Sdf.ValueTypeName object>`
### Normal3dArray
= `<pxr.Sdf.ValueTypeName object>`
### Normal3f
= `<pxr.Sdf.ValueTypeName object>`
### Normal3fArray
= `<pxr.Sdf.ValueTypeName object>`
### Normal3h
= `<pxr.Sdf.ValueTypeName object>`
### Normal3hArray
= `<pxr.Sdf.ValueTypeName object>`
### Point3d
= `<pxr.Sdf.ValueTypeName object>`
### Point3dArray
= `<pxr.Sdf.ValueTypeName object>`
### Point3f
= `<pxr.Sdf.ValueTypeName object>`
### Point3fArray
= `<pxr.Sdf.ValueTypeName object>`
### Point3h
= `<pxr.Sdf.ValueTypeName object>`
### Point3hArray
= `<pxr.Sdf.ValueTypeName object>`
### Quatd
= `<pxr.Sdf.ValueTypeName object>`
### QuatdArray
= `<pxr.Sdf.ValueTypeName object>`
### Quatf
= `<pxr.Sdf.ValueTypeName object>`
### QuatfArray
= `<pxr.Sdf.ValueTypeName object>`
### Quath
= `<pxr.Sdf.ValueTypeName object>`
### QuathArray
= `<pxr.Sdf.ValueTypeName object>`
### String
= `<pxr.Sdf.ValueTypeName object>`
### StringArray
= `<pxr.Sdf.ValueTypeName object>`
### TexCoord2d
= `<pxr.Sdf.ValueTypeName object>`
### TexCoord2dArray
= `<pxr.Sdf.ValueTypeName object>`
### TexCoord2f
= `<pxr.Sdf.ValueTypeName object>`
### TexCoord2fArray
= `<pxr.Sdf.ValueTypeName object>`
### TexCoord2h
= `<pxr.Sdf.ValueTypeName object>`
### TexCoord2hArray
= `<pxr.Sdf.ValueTypeName object>`
### TexCoord3d
= `<pxr.Sdf.ValueTypeName object>`
### TexCoord3dArray
= `<pxr.Sdf.ValueTypeName object>`
### TexCoord3f
= `<pxr.Sdf.ValueTypeName object>`
### TexCoord3fArray
= `<pxr.Sdf.ValueTypeName object>`
### TexCoord3h
= `<pxr.Sdf.ValueTypeName object>`
### TexCoord3hArray
= `<pxr.Sdf.ValueTypeName object>`
### TimeCode
= `<pxr.Sdf.ValueTypeName object>`
### TimeCodeArray
= `<pxr.Sdf.ValueTypeName object>`
### Token
= `<pxr.Sdf.ValueTypeName object>`
### TokenArray
= `<pxr.Sdf.ValueTypeName object>`
### UChar
= `<pxr.Sdf.ValueTypeName object>`
### UCharArray
= `<pxr.Sdf.ValueTypeName object>`
```
```markdown
### pxr.Sdf.ValueTypeNames.UInt
```
```markdown
### pxr.Sdf.ValueTypeNames.UInt64
```
```markdown
### pxr.Sdf.ValueTypeNames.UInt64Array
```
```markdown
### pxr.Sdf.ValueTypeNames.UIntArray
```
```markdown
### pxr.Sdf.ValueTypeNames.Vector3d
```
```markdown
### pxr.Sdf.ValueTypeNames.Vector3dArray
```
```markdown
### pxr.Sdf.ValueTypeNames.Vector3f
```
```markdown
### pxr.Sdf.ValueTypeNames.Vector3fArray
```
<em class="property">
<span class="w">
<span class="p">
<span class="pre">
=
<span class="w">
<span class="pre">
<pxr.Sdf.ValueTypeName
<span class="pre">
object>
<dd>
<dl class="py attribute">
<dt class="sig sig-object py" id="pxr.Sdf.ValueTypeNames.Vector3h">
<span class="sig-name descname">
<span class="pre">
Vector3h
<em class="property">
<span class="w">
<span class="p">
<span class="pre">
=
<span class="w">
<span class="pre">
<pxr.Sdf.ValueTypeName
<span class="pre">
object>
<dd>
<dl class="py attribute">
<dt class="sig sig-object py" id="pxr.Sdf.ValueTypeNames.Vector3hArray">
<span class="sig-name descname">
<span class="pre">
Vector3hArray
<em class="property">
<span class="w">
<span class="p">
<span class="pre">
=
<span class="w">
<span class="pre">
<pxr.Sdf.ValueTypeName
<span class="pre">
object>
<dd>
### pxr.Sdf.Variability

**Methods:**

| Method | Description |
| --- | --- |
| `GetValueFromName` | |

**Attributes:**

| Attribute | Description |
| --- | --- |
| `allValues` | |

#### GetValueFromName

*static* `GetValueFromName()`

#### allValues

`allValues = (Sdf.VariabilityVarying, Sdf.VariabilityUniform)`
### pxr.Sdf.VariantSetSpec

Represents a coherent set of alternate representations for part of a scene.

An SdfPrimSpec object may contain one or more named SdfVariantSetSpec objects that define variations on the prim.

An SdfVariantSetSpec object contains one or more named SdfVariantSpec objects. It may also define the name of one of its variants to be used by default.

When a prim references another prim, the referencing prim may specify one of the variants from each of the variant sets of the target prim. The chosen variant from each set (or the default variant from those sets that the referencing prim does not explicitly specify) is composited over the target prim, and then the referencing prim is composited over the result.

**Methods:**

| Method | Description |
| --- | --- |
| `RemoveVariant(variant)` | Removes `variant` from the list of variants. |

**Attributes:**

| Attribute | Description |
| --- | --- |
| `expired` | |
| `name` | The variant set's name. |
| `owner` | The prim that this variant set belongs to. |
| `variantList` | The variants in this variant set as a list. |
| `variants` | The variants in this variant set as a dict. |
### RemoveVariant
```python
RemoveVariant(variant) -> None
```
Removes `variant` from the list of variants.
If the variant set does not currently own `variant`, no action is taken.
**Parameters**
- **variant** (`VariantSpec`) –
### expired
```python
property expired
```
### name
```python
property name
```
The variant set’s name.
### owner
```python
property owner
```
The prim that this variant set belongs to.
### variantList
```python
property variantList
```
The variants in this variant set as a list.
### variants
```python
property variants
```
The variants in this variant set as a dict.
### pxr.Sdf.VariantSpec
Represents a single variant in a variant set.
A variant contains a prim. This prim is the root prim of the variant.
SdfVariantSpecs are value objects. This means they are immutable once created and they are passed by copy-in APIs. To change a variant spec, you make a new one and replace the existing one.
**Methods:**
| Method Name | Description |
|-------------|-------------|
| `GetVariantNames(name)` | Returns list of variant names for the given variant set. |
**Attributes:**
| Attribute Name | Description |
|----------------|-------------|
| `expired` | |
| `name` | The variant's name. |
| `owner` | The variant set that this variant belongs to. |
| `primSpec` | The root prim of this variant. |
| `variantSets` | SdfVariantSetsProxy |
### GetVariantNames(name)
Returns list of variant names for the given variant set.
**Parameters**
- **name** (str) –
### expired
### name
The variant’s name.
### owner
The variant set that this variant belongs to.
### primSpec
The root prim of this variant.
### variantSets
SdfVariantSetsProxy
Returns the nested variant sets.
The result maps variant set names to variant sets. Variant sets may be removed through the proxy.
### pxr.Sdf.Find

`pxr.Sdf.Find(layerFileName, scenePath) -> object`

**Parameters**
- **layerFileName** (string) –
- **scenePath** (Path) –

If given a single string argument, returns the menv layer with the given filename. If given two arguments (a string and a Path), finds the menv layer with the given filename and returns the scene object within it at the given path.
Sdr.md | # Sdr module
Summary: The Sdr (Shader Definition Registry) is a specialized version of Ndr for Shaders.
## Python bindings for libSdr
**Classes:**
- **NodeContext**
- **NodeMetadata**
- **NodeRole**
- **PropertyMetadata**
- **PropertyRole**
- **PropertyTypes**
- **Registry**
- The shading-specialized version of `NdrRegistry`.
- **ShaderNode**
- A specialized version of `NdrNode` which holds shading information.
- **ShaderNodeList**
- **ShaderProperty**
  - A specialized version of `NdrProperty` which holds shading information.

### pxr.Sdr.NodeContext
**Attributes:**
- `Displacement`
- `Light`
- `LightFilter`
- `Pattern`
- `PixelFilter`
- `SampleFilter`
- `Surface`
- `Volume`
- **Displacement**: = 'displacement'
- **Light**: = 'light'
- **LightFilter**: = 'lightFilter'
- **Pattern**: = 'pattern'
- **PixelFilter**: = 'pixelFilter'
### pxr.Sdr.NodeMetadata

**Attributes:**
| Attribute | Description |
|----------------------------|-------------|
| Category | |
| Departments | |
| Help | |
| ImplementationName | |
| Label | |
| Pages | |
| Primvars | |
| Role | |
| SdrDefinitionNameFallbackPrefix | |
| SdrUsdEncodingVersion | |
| Target | |
### Category
- **Property**: 'category'
### Departments
- **Property**: 'departments'
### Help
- **Property**: 'help'
### ImplementationName
- **Property**: '__SDR__implementationName'
### Label
- **Property**: 'label'
### Pages
- **Property**: 'pages'
### Primvars
- **Property**: 'primvars'
### Role
- **Property**: 'role'
### SdrDefinitionNameFallbackPrefix
- **Property**: 'sdrDefinitionNameFallbackPrefix'
### SdrUsdEncodingVersion
- **Property**: 'sdrUsdEncodingVersion'
### Target
- **Property**: '__SDR__target'
### pxr.Sdr.NodeRole
### Attributes:
| | |
| ---- | --------------------------------------------------------------- |
| Field | [Field](#pxr.Sdr.NodeRole.Field) |
| Math | [Math](#pxr.Sdr.NodeRole.Math) |
| Primvar | [Primvar](#pxr.Sdr.NodeRole.Primvar) |
| Texture | [Texture](#pxr.Sdr.NodeRole.Texture) |
#### Field
```python
Field = 'field'
```
#### Math
```python
Math = 'math'
```
#### Primvar
```python
Primvar = 'primvar'
```
#### Texture
```python
Texture = 'texture'
```
### pxr.Sdr.PropertyMetadata

**Attributes:**
| | |
| ---- | --------------------------------------------------------------- |
| Colorspace | [Colorspace](#pxr.Sdr.PropertyMetadata.Colorspace) |
| Connectable | [Connectable](#pxr.Sdr.PropertyMetadata.Connectable) |
| DefaultInput | [DefaultInput](#pxr.Sdr.PropertyMetadata.DefaultInput) |
| Help | [Help](#pxr.Sdr.PropertyMetadata.Help) |
| Hints | [Hints](#pxr.Sdr.PropertyMetadata.Hints) |
| ImplementationName | [ImplementationName](#pxr.Sdr.PropertyMetadata.ImplementationName) |
| IsAssetIdentifier | [IsAssetIdentifier](#pxr.Sdr.PropertyMetadata.IsAssetIdentifier) |
| IsDynamicArray | [IsDynamicArray](#pxr.Sdr.PropertyMetadata.IsDynamicArray) |
| Label | [Label](#pxr.Sdr.PropertyMetadata.Label) |
| Options | [Options](#pxr.Sdr.PropertyMetadata.Options) |
| Page | [Page](#pxr.Sdr.PropertyMetadata.Page) |
| RenderType | [RenderType](#pxr.Sdr.PropertyMetadata.RenderType) |
| Role | [Role](#pxr.Sdr.PropertyMetadata.Role) |
| SdrUsdDefinitionType | [SdrUsdDefinitionType](#pxr.Sdr.PropertyMetadata.SdrUsdDefinitionType) |
| Target | [Target](#pxr.Sdr.PropertyMetadata.Target) |
| ValidConnectionTypes | [ValidConnectionTypes](#pxr.Sdr.PropertyMetadata.ValidConnectionTypes) |
| VstructConditionalExpr | [VstructConditionalExpr](#pxr.Sdr.PropertyMetadata.VstructConditionalExpr) |
| VstructMemberName | [VstructMemberName](#pxr.Sdr.PropertyMetadata.VstructMemberName) |
| VstructMemberOf | [VstructMemberOf](#pxr.Sdr.PropertyMetadata.VstructMemberOf) |
| Widget | [Widget](#pxr.Sdr.PropertyMetadata.Widget) |

Colorspace
Connectable
DefaultInput = '__SDR__defaultinput'
Help = 'help'
Hints = 'hints'
ImplementationName = '__SDR__implementationName'
IsAssetIdentifier = '__SDR__isAssetIdentifier'
IsDynamicArray = 'isDynamicArray'
Label = 'label'
Options = 'options'
Page = 'page'
RenderType = 'renderType'
Role = 'role'
SdrUsdDefinitionType = 'sdrUsdDefinitionType'
### pxr.Sdr.PropertyRole

**Attributes:**

| Attribute | Description |
|-----------|-------------|
| `None` | |
### pxr.Sdr.PropertyTypes

**Attributes:**

| Attribute | Description |
|-----------|-------------|
| `Color` | |
| `Color4` | |
| `Float` | |
| `Int` | |
| `Matrix` | |
| `Normal` | |
| `Point` | |
| `String` | |
| `Struct` | |
| `Terminal` | |
| `Unknown` | |
| `Vector` | |
| `Vstruct` | |
Color = 'color'
Color4 = 'color4'
Float = 'float'
Int = 'int'
Matrix = 'matrix'
Normal = 'normal'
Point = 'point'
String = 'string'
Struct = 'struct'
Terminal = 'terminal'
Unknown = 'unknown'
Vector = 'vector'
Vstruct = 'vstruct'
### pxr.Sdr.Registry

The shading-specialized version of `NdrRegistry`.
**Methods:**
- `GetShaderNodeByIdentifier(identifier, typePriority)`
  Exactly like `NdrRegistry::GetNodeByIdentifier()`, but returns a `SdrShaderNode` pointer instead of a `NdrNode` pointer.
- `GetShaderNodeByIdentifierAndType(identifier, nodeType)`
  Exactly like `NdrRegistry::GetNodeByIdentifierAndType()`, but returns a `SdrShaderNode` pointer instead of a `NdrNode` pointer.
- `GetShaderNodeByName(name, typePriority, filter)`
  Exactly like `NdrRegistry::GetNodeByName()`, but returns a `SdrShaderNode` pointer instead of a `NdrNode` pointer.
- `GetShaderNodeByNameAndType(name, nodeType, filter)`
  Exactly like `NdrRegistry::GetNodeByNameAndType()`, but returns a `SdrShaderNode` pointer instead of a `NdrNode` pointer.
- `GetShaderNodeFromAsset(shaderAsset, metadata, subIdentifier, sourceType)`
  Wrapper method for `NdrRegistry::GetNodeFromAsset()`.
- `GetShaderNodeFromSourceCode(sourceCode, sourceType, metadata)`
  Wrapper method for `NdrRegistry::GetNodeFromSourceCode()`.
- `GetShaderNodesByFamily(family, filter)`
  Exactly like `NdrRegistry::GetNodesByFamily()`, but returns a vector of `SdrShaderNode` pointers instead of a vector of `NdrNode` pointers.
- `GetShaderNodesByIdentifier(identifier)`
  Exactly like `NdrRegistry::GetNodesByIdentifier()`, but returns a vector of `SdrShaderNode` pointers instead of a vector of `NdrNode` pointers.
- `GetShaderNodesByName(name, filter)`
  Exactly like `NdrRegistry::GetNodesByName()`, but returns a vector of `SdrShaderNode` pointers instead of a vector of `NdrNode` pointers.

**Attributes:**

- `expired`
  True if this object has expired, False otherwise.
### GetShaderNodeByIdentifier
```python
GetShaderNodeByIdentifier(identifier, typePriority) -> ShaderNode
```
Exactly like `NdrRegistry::GetNodeByIdentifier()`, but returns a `SdrShaderNode` pointer instead of a `NdrNode` pointer.
**Parameters**
- **identifier** (NdrIdentifier) –
- **typePriority** (NdrTokenVec) –
### GetShaderNodeByIdentifierAndType
```python
GetShaderNodeByIdentifierAndType(identifier, nodeType) -> ShaderNode
```
Exactly like `NdrRegistry::GetNodeByIdentifierAndType()`, but returns a `SdrShaderNode` pointer instead of a `NdrNode` pointer.
**Parameters**
- **identifier** (NdrIdentifier) –
- **nodeType** (str) –
### GetShaderNodeByName
```python
GetShaderNodeByName(name, typePriority, filter) -> ShaderNode
```
Exactly like `NdrRegistry::GetNodeByName()`, but returns a `SdrShaderNode` pointer instead of a `NdrNode` pointer.
**Parameters**
- **name** (str) –
- **typePriority** (NdrTokenVec) –
- **filter** (VersionFilter) –
### GetShaderNodeByNameAndType

```python
GetShaderNodeByNameAndType(name, nodeType, filter) -> ShaderNode
```

Exactly like `NdrRegistry::GetNodeByNameAndType()`, but returns a `SdrShaderNode` pointer instead of a `NdrNode` pointer.

**Parameters**
- **name** (`str`) –
- **nodeType** (`str`) –
- **filter** (`VersionFilter`) –
### GetShaderNodeFromAsset

```python
GetShaderNodeFromAsset(shaderAsset, metadata, subIdentifier, sourceType) -> ShaderNode
```

Wrapper method for `NdrRegistry::GetNodeFromAsset()`.

Returns a valid `SdrShaderNode` pointer upon success.

**Parameters**
- **shaderAsset** (`AssetPath`) –
- **metadata** (`NdrTokenMap`) –
- **subIdentifier** (`str`) –
- **sourceType** (`str`) –
### GetShaderNodeFromSourceCode

```python
GetShaderNodeFromSourceCode(sourceCode, sourceType, metadata) -> ShaderNode
```

Wrapper method for `NdrRegistry::GetNodeFromSourceCode()`.

Returns a valid `SdrShaderNode` pointer upon success.

**Parameters**
- **sourceCode** (`str`) –
- **sourceType** (`str`) –
- **metadata** (`NdrTokenMap`) –
### GetShaderNodesByFamily

`GetShaderNodesByFamily(family, filter) -> SdrShaderNodePtrVec`

Exactly like `NdrRegistry::GetNodesByFamily()`, but returns a vector of `SdrShaderNode` pointers instead of a vector of `NdrNode` pointers.

**Parameters**
- **family** (`str`) –
- **filter** (`VersionFilter`) –

### GetShaderNodesByIdentifier

`GetShaderNodesByIdentifier(identifier) -> SdrShaderNodePtrVec`

Exactly like `NdrRegistry::GetNodesByIdentifier()`, but returns a vector of `SdrShaderNode` pointers instead of a vector of `NdrNode` pointers.

**Parameters**
- **identifier** (`NdrIdentifier`) –

### GetShaderNodesByName

`GetShaderNodesByName(name, filter) -> SdrShaderNodePtrVec`

Exactly like `NdrRegistry::GetNodesByName()`, but returns a vector of `SdrShaderNode` pointers instead of a vector of `NdrNode` pointers.

**Parameters**
- **name** (`str`) –
- **filter** (`VersionFilter`) –

### expired

`property expired`

True if this object has expired, False otherwise.
### pxr.Sdr.ShaderNode

A specialized version of `NdrNode` which holds shading information.

**Methods:**
- **GetAdditionalPrimvarProperties()**
- The list of string input properties whose values provide the names of additional primvars consumed by this node.
- **GetAllVstructNames()**
- Gets all vstructs that are present in the shader.
- **GetAssetIdentifierInputNames()**
- Returns the list of all inputs that are tagged as asset identifier inputs.
- **GetCategory()**
- The category assigned to this node, if any.
- **GetDefaultInput()**
- Returns the first shader input that is tagged as the default input.
- **GetDepartments()**
- The departments this node is associated with, if any.
- **GetHelp()**
- The help message assigned to this node, if any.
- **GetImplementationName()**
- Returns the implementation name of this node.
- **GetLabel()**
- The label assigned to this node, if any.
- **GetPages()**
- Gets the pages on which the node's properties reside (an aggregate of the unique `SdrShaderProperty::GetPage()` values for all of the node's properties).
- **GetPrimvars()**
- The list of primvars this node knows it requires / uses.
- **GetPropertyNamesForPage(pageName)**
- Gets the names of the properties on a certain page (one that was returned by `GetPages()`).
- **GetRole()**
- Returns the role of this node.
- **GetShaderInput(inputName)**
- Get a shader input property by name.
- **GetShaderOutput(outputName)**
- Get a shader output property by name.
### GetAdditionalPrimvarProperties

`GetAdditionalPrimvarProperties() -> NdrTokenVec`

The list of string input properties whose values provide the names of additional primvars consumed by this node.

For example, this may return a token named `varname`.
### GetAllVstructNames

`GetAllVstructNames() -> NdrTokenVec`

Gets all vstructs that are present in the shader.
### GetAssetIdentifierInputNames

`GetAssetIdentifierInputNames() -> NdrTokenVec`

Returns the list of all inputs that are tagged as asset identifier inputs.
### GetCategory

`GetCategory() -> str`

The category assigned to this node, if any.

Distinct from the family returned from `GetFamily()`.
### GetDefaultInput

`GetDefaultInput() -> ShaderProperty`

Returns the first shader input that is tagged as the default input.

A default input and its value can be used to acquire a fallback value for a node when the node is considered 'disabled' or otherwise incapable of producing an output value.
### GetDepartments

`GetDepartments() -> NdrTokenVec`

The departments this node is associated with, if any.
### GetHelp

`GetHelp() -> str`

The help message assigned to this node, if any.
### GetImplementationName

`GetImplementationName() -> str`

Returns the implementation name of this node.

The name of the node is how to refer to the node in shader networks. The label is how to present this node to users. The implementation name is the name of the function (or something) this node represents in the implementation. Any client using the implementation **must** call this method to get the correct name; using `getName()` is not correct.
### GetLabel

`GetLabel() -> str`

The label assigned to this node, if any.

Distinct from the name returned from `GetName()`. In the context of a UI, the label value might be used as the display name for the node instead of the name.
### GetPages

`GetPages() -> NdrTokenVec`

Gets the pages on which the node's properties reside (an aggregate of the unique `SdrShaderProperty::GetPage()` values for all of the node's properties).

Nodes themselves do not reside on pages. In an example scenario, properties might be divided into two pages, 'Simple' and 'Advanced'.
### GetPrimvars

`GetPrimvars() -> NdrTokenVec`

The list of primvars this node knows it requires / uses.

For example, a shader node may require the 'normals' primvar to function correctly. Additionally, user-specified primvars may have been authored on the node. These can be queried via `GetAdditionalPrimvarProperties()`. Together, `GetPrimvars()` and `GetAdditionalPrimvarProperties()` provide the complete list of primvar requirements for the node.
### GetPropertyNamesForPage

`GetPropertyNamesForPage(pageName) -> NdrTokenVec`

Gets the names of the properties on a certain page (one that was returned by `GetPages()`).

To get properties that are not assigned to a page, an empty string can be used for `pageName`.

**Parameters**
- **pageName** (str) –
### GetRole

`GetRole() -> str`

Returns the role of this node.

This is used to annotate the role that the shader node plays inside a shader network. We can tag certain shaders to indicate their role within a shading network. We currently tag primvar reading nodes, texture reading nodes and nodes that access volume fields (like extinction or scattering). This is done to identify resources used by a shading network.
### GetShaderInput

`GetShaderInput(inputName) -> ShaderProperty`

Get a shader input property by name. `nullptr` is returned if an input with the given name does not exist.

**Parameters**
- **inputName** (str) –
### GetShaderOutput

`GetShaderOutput(outputName) -> ShaderProperty`

Get a shader output property by name. `nullptr` is returned if an output with the given name does not exist.

**Parameters**
- **outputName** (str) –
### pxr.Sdr.ShaderNodeList

**Methods:**

- `append`
- `extend`
### pxr.Sdr.ShaderProperty

A specialized version of `NdrProperty` which holds shading information.

**Methods:**

| Method Name | Description |
| --- | --- |
| GetDefaultValueAsSdfType() | Accessor for the default value corresponding to the SdfValueTypeName returned by GetTypeAsSdfType. Note that this is different than GetDefaultValue, which returns the default value associated with the SdrPropertyType and may differ from the SdfValueTypeName, for example when sdrUsdDefinitionType metadata is specified for a sdr property. |
| GetHelp() | The help message assigned to this property, if any. |
| GetHints() | Any UI hints that are associated with this property. |
| GetImplementationName() | Returns the implementation name of this property. |
| GetLabel() | The label assigned to this property, if any. |
| GetOptions() | If the property has a set of valid values that are pre-determined, this will return the valid option names and corresponding string values (if the option was specified with a value). |
| GetPage() | The page (group), e.g. "Advanced", this property appears on, if any. |
| GetVStructConditionalExpr() | If this field is part of a vstruct, this is the conditional expression. |
| GetVStructMemberName() | If this field is part of a vstruct, this is its name in the struct. |
| GetVStructMemberOf() | If this field is part of a vstruct, this is the name of the struct. |
| GetValidConnectionTypes() | Gets the list of valid connection types for this property. |
| GetWidget() | The widget "hint" that indicates the widget that can best display the type of data contained in this property, if any. |
| IsAssetIdentifier() | Determines if the value held by this property is an asset identifier (e.g., a file path); the logic for this is left up to the parser. |
| IsDefaultInput() | Determines if the value held by this property is the default input for this node. |
| IsVStruct() | Returns true if the field is the head of a vstruct. |
| IsVStructMember() | Returns true if this field is part of a vstruct. |
### GetHelp
```python
GetHelp() -> str
```
The help message assigned to this property, if any.
### GetHints
```python
GetHints() -> NdrTokenMap
```
Any UI "hints" that are associated with this property.
"Hints" are simple key/value pairs.
### GetImplementationName
```python
GetImplementationName() -> str
```
Returns the implementation name of this property.
The name of the property is how to refer to the property in shader networks. The label is how to present this property to users. The implementation name is the name of the parameter this property represents in the implementation. Any client using the implementation **must** call this method to get the correct name; using `getName()` is not correct.
### GetLabel
```python
GetLabel() -> str
```
The label assigned to this property, if any.
Distinct from the name returned from `GetName()`. In the context of a UI, the label value might be used as the display name for the property instead of the name.
### GetOptions
```python
GetOptions() -> NdrOptionVec
```
If the property has a set of valid values that are pre-determined, this will return the valid option names and corresponding string values (if the option was specified with a value).
### GetPage
```python
GetPage() -> str
```
The page (group), e.g., "Advanced", this property appears on, if any.
Note that the page for a shader property can be nested, delimited by ":", representing the hierarchy of sub-pages a property is defined in.
### GetVStructConditionalExpr
```python
GetVStructConditionalExpr() -> str
```
If this field is part of a vstruct, this is the conditional expression.
### GetVStructMemberName
```python
GetVStructMemberName() -> str
```
If this field is part of a vstruct, this is its name in the struct.
### GetVStructMemberOf
```python
GetVStructMemberOf() -> str
```
If this field is part of a vstruct, this is the name of the struct.

### GetValidConnectionTypes

```python
GetValidConnectionTypes() -> NdrTokenVec
```

Gets the list of valid connection types for this property.

This value comes from shader metadata, and may not be specified. The value from `NdrProperty::GetType()` can be used as a fallback, or you can use the connectability test in `CanConnectTo()`.

### GetWidget

```python
GetWidget() -> str
```

The widget "hint" that indicates the widget that can best display the type of data contained in this property, if any.

Examples of this value could include "number", "slider", etc.

### IsAssetIdentifier

```python
IsAssetIdentifier() -> bool
```

Determines if the value held by this property is an asset identifier (e.g. a file path); the logic for this is left up to the parser.

Note: The type returned from `GetTypeAsSdfType()` will be `Asset` if this method returns `true` (even though its true underlying data type is string).

### IsDefaultInput

```python
IsDefaultInput() -> bool
```

Determines if the value held by this property is the default input for this node.

### IsVStruct

```python
IsVStruct() -> bool
```

Returns true if the field is the head of a vstruct.

### IsVStructMember

```python
IsVStructMember() -> bool
```

Returns true if this field is part of a vstruct.
# Search — Omni.VDB 0.5.2 documentation
# Settings — Omniverse Kit 1.1.2 documentation
## Settings
**Extension:** omni.kit.widget.settings-1.1.2
**Documentation Generated:** May 08, 2024

This extension doesn't own any settings of its own, but it uses the following:
- `/ext/omni.kit.window.property/checkboxAlignment`: can be "left" or "right"
- `/ext/omni.kit.window.property/labelAlignment`: can be "left" or "right"
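If you need these settings in an app, they can be supplied through the app's `.kit` file. A hypothetical fragment is shown below; the `[settings]` TOML table mapping to the `/ext/...` setting path is the standard Kit convention, but the values chosen here are purely illustrative:

```toml
[settings.ext."omni.kit.window.property"]
checkboxAlignment = "right"
labelAlignment = "left"
```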
# Settings Editor
## OmniGraph Settings Editor
The **OmniGraph** settings editor provides a method of modifying settings which affect the **OmniGraph** appearance and evaluation.
This shows the menu through which you can access the settings editor. It opens up a window with a navigation panel on the left side and the selected group of settings on the right. The settings modify some simple behaviors in the operation of OmniGraph.
### Settings Table
| Setting | Allowed Values | Meaning |
|---------|----------------|---------|
| Update mesh points to Hydra | true, false | If true then write back mesh data to USD |
| Default graph evaluator | dirty_push, push, execution | The type of evaluator to use when a graph is created without specifying one: "push" = Push Graph, "execution" = Action Graph, and "dirty_push" = Lazy Graph |
| Enabled deprecated Node.pathChanged callback | true, false | If true then enable callbacks registered through INode::registerPathChangedCallback. This was deprecated in favor of listening directly to USD notices |
| Escalate all deprecation warnings to be errors | true, false | If true then whenever deprecated functionality is used an error will be issued, rather than the usual warning |
# Setup WebRTC Streaming

Enabling this extension applies settings appropriate for streaming the app with WebRTC: it enables the `omni.kit.livestream.webrtc` extension, sets the window size to 1920x1080, and sets frame pacing to 60 fps.
# Setting up a new Repo
## 1. Duplicate the template (kit-template repo)
- Fork [https://gitlab-master.nvidia.com/omniverse/kit-extensions/kit-template](https://gitlab-master.nvidia.com/omniverse/kit-extensions/kit-template) into your own space (e.g., https://gitlab-master.nvidia.com/your_username) using the button at the top right of the screen. The result should look like below:
- Rename both the project and its path to be what you want. The project name change is shown below. To change the path, go to **Settings** > **Advanced** > **Change Path**.
- Transfer the project back to the kit-extensions group with its new name. The transfer option is available under **Settings** > **Advanced**.
- To transfer you need "Maintainer" permissions on the namespace (kit-extensions). Ask someone on Slack for help.
- Note that the new project still has a fork relationship with kit-template, you can break fork relationship in project settings.
## 2. Change project name
- There are several places you need to change the project name:
- **repo.toml**:
```toml
# Repository Name.
name = "kit-my-funny-project"
```
- **PACKAGE-INFO.yaml** change name and url
- Each extension has a **repository** field for the repo URL. Don't forget to update it for your extensions.
- Search for **kit-template** in VSCode across whole project to make sure nothing else requires renaming.
- You can build, test and push that change:
- `> build.bat -r`
- `> repo.bat test`
## 3. Duplicate the TeamCity project
- All TeamCity (TC) configs are stored in the repo under the **.teamcity** folder, so you only need to setup the root project with correct VCS url and everything else will show up in TC UI upon first build. It is easier to just copy this project, change VCS settings and run a build to achieve the same result:
1. Go to
[https://teamcity.nvidia.com/project/Omniverse_KitExtensions_KitTemplate?mode=builds](https://teamcity.nvidia.com/project/Omniverse_KitExtensions_KitTemplate?mode=builds)
2. Click *View Project Settings* button on the upper right-hand corner of the page -> From the *Actions* menu on the upper right-hand corner of the page -> Select *Copy project…*. We recommend using a namespace on TC matches one on gitlab. E.g. `Omniverse/kit-extensions/kit-my-funny-project`. You can uncheck the *Copy build configurations’ build counters* checkbox. Click the *Copy* button on the dialog to perform the copy operation.
3. Disable VCS settings sync (otherwise you can’t edit VCS root): *View Project Settings* -> *Synchronization disabled* -> *Apply*
4. Go back into the *View Project Settings* -> *VCS Roots* and click to edit previous `gitlab-master-omniverse-kit-extensions-kit-template`. Change *VCS root name* and *Fetch URL* (replace `kit-template` with your `kit-my-funny-project`).
5. Enable VCS setting sync: *View Project Settings* -> *Synchronization enabled*. Make sure:
- Proper VCS root is selected (edited in the previous step)
- Settings format: Kotlin
- Allow editing project settings: off
- Show settings changes in builds: on
- When build starts: use settings from VCS
6. Go into *Build and validation* job and run to make sure all works as expected.
## 4. What’s next?
Your new extension project is ready to use. It still builds the same demo extensions that were in `kit-template`. So first step would be to rename or remove them. Create your own extensions and update the application kit file in `[source/apps]` to load them, update `[repo_publish_exts]` of `repo.toml` to publish them.
## Extra: Gitlab Settings
It is recommended to make sure approved MRs are required to work on a new repo. Settings are not preserved after fork. Recommended settings to change:
- *General* -> *Merge request approvals* -> *Eligible users* -> *Approvals required* -> 1
- *General* -> *Merge request approvals* -> *Approval settings*:
- Prevent approval by author: on
- Prevent approvals by users who add commits: on
- Prevent editing approval rules in merge requests: on
- Require user password to approve: off
- Remove all approvals when commits are added to the source branch: off
- *General* -> *Merge requests* -> *Squash commits when merging* -> Encourage
- *Repository* -> *Protected branches* -> for `master` branch (and `release/*` in the future) -> *Allowed to merge* -> "Developers + Maintainers"
# Shades
Shades provide multiple named color palettes with the ability to switch between them at runtime. For example, an app could offer several UI themes that users can switch between while using the app.
The shade can be defined with the following code:
```python
cl.shade(cl("#FF6600"), red=cl("#0000FF"), green=cl("#66FF00"))
```
It can be assigned to the color style. The shade can then be switched globally with the following command:
```python
cl.set_shade("red")
```
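Conceptually, a shade is just a default value plus named overrides, selected by a single global shade name. The pure-Python mimic below (not the actual omni.ui implementation, which handles this natively in `cl.shade`/`cl.set_shade`) illustrates the resolution rule:

```python
# A pure-Python mimic of the shade mechanism (NOT the omni.ui implementation):
# a shade stores a default value plus named overrides, and a globally selected
# shade name decides which value is resolved.
class Shade:
    def __init__(self, default, **named):
        self.default = default
        self.named = named

    def resolve(self, shade_name):
        # fall back to the default when the current shade has no override
        return self.named.get(shade_name, self.default)

_current_shade = ""

def set_shade(name=""):
    global _current_shade
    _current_shade = name

color = Shade("#FF6600", red="#0000FF", green="#66FF00")
set_shade("red")
print(color.resolve(_current_shade))  # -> #0000FF
set_shade("")  # back to the default palette
print(color.resolve(_current_shade))  # -> #FF6600
```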
## Example
```python
from omni.ui import color as cl
from omni.ui import constant as fl
from functools import partial


def set_color(color):
    cl.example_color = color


def set_width(value):
    fl.example_width = value


cl.example_color = cl.green
fl.example_width = 1.0

with ui.HStack(height=100, spacing=5):
    with ui.ZStack():
        ui.Rectangle(
            style={
                "background_color": cl.shade(
                    "aqua",
                    orange=cl.orange,
                    another=cl.example_color,
                    transparent=cl(0, 0, 0, 0),
                    black=cl.black,
                ),
                "border_width": fl.shade(1, orange=4, another=8),
                "border_radius": fl.one,
                "border_color": cl.black,
            },
        )
        with ui.HStack():
            ui.Spacer()
            ui.Label(
                "ui.Rectangle(\n"
                "\tstyle={\n"
                '\t\t"background_color":\n'
                "\t\t\tcl.shade(\n"
                '\t\t\t\t"aqua",\n'
                "\t\t\t\torange=cl(1, 0.5, 0),\n"
                "\t\t\t\tanother=cl.example_color),\n"
                '\t\t"border_width":\n'
                "\t\t\tfl.shade(1, orange=4, another=8)})",
                style={"color": cl.black, "margin": 15},
                width=0,
            )
            ui.Spacer()
    with ui.ZStack():
        ui.Rectangle(
            style={
                "background_color": cl.example_color,
                "border_width": fl.example_width,
                "border_radius": fl.one,
                "border_color": cl.black,
            }
        )
        with ui.HStack():
            ui.Spacer()
            ui.Label(
                "ui.Rectangle(\n"
                "\tstyle={\n"
                '\t\t"background_color": cl.example_color,\n'
                '\t\t"border_width": fl.example_width})',
                style={"color": cl.black, "margin": 15},
                width=0,
            )
            ui.Spacer()

with ui.VStack(style={"Button": {"background_color": cl("097EFF")}}):
    ui.Label("Click the following buttons to change the shader of the left rectangle")
    with ui.HStack():
        ui.Button("cl.set_shade()", clicked_fn=partial(cl.set_shade, ""))
        ui.Button('cl.set_shade("orange")', clicked_fn=partial(cl.set_shade, "orange"))
        ui.Button('cl.set_shade("another")', clicked_fn=partial(cl.set_shade, "another"))
    ui.Label("Click the following buttons to change the border width of the right rectangle")
    with ui.HStack():
        ui.Button("fl.example_width = 1", clicked_fn=partial(set_width, 1))
        ui.Button("fl.example_width = 4", clicked_fn=partial(set_width, 4))
    ui.Label("Click the following buttons to change the background color of both rectangles")
    with ui.HStack():
        ui.Button('cl.example_color = "green"', clicked_fn=partial(set_color, "green"))
        ui.Button("cl.example_color = cl(0.8)", clicked_fn=partial(set_color, cl(0.8)))
```
```python
from omni.ui import color as cl
from omni.ui.url_utils import url
from functools import partial
def set_url(url_path: str):
url.example_url = url_path
walk = "resources/icons/Nav_Walkmode.png"
fly = "resources/icons/Nav_Flymode.png"
url.example_url = walk
with ui.HStack(height=100, spacing=5):
with ui.ZStack():
ui.Image(height=100, style={"image_url": url.example_url})
with ui.HStack():
ui.Spacer()
ui.Label(
'ui.Image(\n\tstyle={"image_url": cl.example_url})\n',
style={"color": cl.black, "font_size": 12, "margin": 15},
width=0,
)
ui.Spacer()
with ui.ZStack():
ui.ImageWithProvider(
height=100,
style={
"image_url": url.shade(
"resources/icons/Move_local_64.png",
another="resources/icons/Move_64.png",
orange="resources/icons/Rotate_local_64.png",
)
}
)
with ui.HStack():
ui.Spacer()
ui.Label(
"ui.ImageWithProvider(\n"
"\tstyle={\n"
'\t\t"image_url":\n'
"\t\t\tst.shade(\n"
'\t\t\t\t"Move_local_64.png",\n'
'\t\t\t\tanother="Move_64.png")})\n',
style={"color": cl.black, "font_size": 12, "margin": 15},
width=0,
)
ui.Spacer()
with ui.HStack():
# buttons to change the url for the image
with ui.VStack():
ui.Button("url.example_url = Nav_Walkmode.png", clicked_fn=partial(set_url, walk))
ui.Button("url.example_url = Nav_Flymode.png", clicked_fn=partial(set_url, fly))
# buttons to switch between shades to a different image
with ui.VStack():
ui.Button("ui.set_shade()", clicked_fn=partial(ui.set_shade, ""))
ui.Button('ui.set_shade("another")', clicked_fn=partial(ui.set_shade, "another"))
```
# Shapes
Shape defines the 3D graphics-related item that is directly transformable. It is the base class for all “geometric primitives”, which encodes several per-primitive graphics-related properties.
## Line
Line is the simplest shape that represents a straight line. It has two points, color, and thickness.
```python
sc.Line([-0.5,-0.5,0], [0.5, 0.5, 0], color=cl.green, thickness=5)
```
## Curve
Curve is a shape drawn with multiple line segments that has bends or turns in it. There are two supported curve types: linear and cubic. `sc.Curve` draws a cubic curve by default and can be switched to linear with `curve_type=sc.Curve.CurveType.LINEAR`. `tessellation` controls the smoothness of the curve: the higher the value, the smoother the curve, at a higher computational cost. The curve also has positions, colors, and thicknesses, which are all array-typed properties, meaning per-vertex properties are supported. This feature needs further development with ImGui support.
```python
with scene_view.scene:
# linear curve
with sc.Transform(transform=sc.Matrix44.get_translation_matrix(-4, 0, 0)):
sc.Curve(
[[0.5, -0.7, 0], [0.1, 0.6, 0], [2.0, 0.6, 0], [3.5, -0.7, 0]],
thicknesses=[1.0],
colors=[cl.red],
curve_type=sc.Curve.CurveType.LINEAR,
)
# corresponding cubic curve
with sc.Transform(transform=sc.Matrix44.get_translation_matrix(0, 0, 0)):
sc.Curve(
[[0.5, -0.7, 0], [0.1, 0.6, 0], [2.0, 0.6, 0], [3.5, -0.7, 0]],
thicknesses=[1.0],
colors=[cl.red],
)
    # cubic curve with higher tessellation for a smoother result
    # (the control points here are assumed to match the examples above,
    # since part of this example was lost in extraction)
    with sc.Transform(transform=sc.Matrix44.get_translation_matrix(4, 0, 0)):
        sc.Curve(
            [[0.5, -0.7, 0], [0.1, 0.6, 0], [2.0, 0.6, 0], [3.5, -0.7, 0]],
            thicknesses=[3.0],
            colors=[cl.blue],
            tessellation=9,
        )
```
## Rectangle

Rectangle is a shape with four sides and four corners. The corners are all right angles.

```python
sc.Rectangle(color=cl.green)
```

It's also possible to draw the Rectangle as an outline by enabling the `wireframe` property:

```python
sc.Rectangle(2, 1, thickness=5, wireframe=True)
```
## Arc

Two radii of a circle and the arc between them. It can also be drawn as a wireframe.
## Image
A rectangle with an image on it. It can read raster and vector graphics format and supports `http://` and `omniverse://` paths.
## Points
The list of points in 3d space. Points may have different sizes and different colors.
```python
point_count = 36
points = []
sizes = []
colors = []
for i in range(point_count):
weight = i / point_count
angle = 2.0 * math.pi * weight
points.append(
[math.cos(angle), math.sin(angle), 0]
)
colors.append([weight, 1 - weight, 1, 1])
sizes.append(6 * (weight + 1.0 / point_count))
sc.Points(points, colors=colors, sizes=sizes)
```
## PolygonMesh
Encodes a mesh. Meshes are defined as points connected to edges and faces. Each face is defined by a list of face vertices `vertex_indices` using indices into the point `positions` array. `vertex_counts` provides the number of points at each face. This is the minimal requirement to construct the mesh.
```python
point_count = 36
# Form the mesh data
points = []
vertex_indices = []
sizes = []
colors = []
for i in range(point_count):
    weight = i / point_count
    angle = 2.0 * math.pi * weight
    # the rest of the loop body was lost in extraction; it is assumed to
    # build a circle of points, as in the Points example above
    points.append([math.cos(angle), math.sin(angle), 0])
    colors.append([weight, 1 - weight, 1, 1])
    vertex_indices.append(i)

# Draw the mesh
sc.PolygonMesh(
    points, colors, [point_count], vertex_indices
)
```
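The arrays passed to `sc.PolygonMesh` must be mutually consistent: every face consumes `vertex_counts[f]` entries of `vertex_indices`, and every index must address a valid point. A small stand-alone checker makes the rule explicit (the helper name `validate_mesh` is ours, not part of `omni.ui.scene`):

```python
def validate_mesh(positions, vertex_counts, vertex_indices):
    """Return True when the face data is consistent with the point data."""
    # each face uses vertex_counts[f] entries of vertex_indices, so the totals match
    if sum(vertex_counts) != len(vertex_indices):
        return False
    # every index must point into the positions array
    return all(0 <= i < len(positions) for i in vertex_indices)

# a single quad face over four points is valid
assert validate_mesh([(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)], [4], [0, 1, 2, 3])
# a face that claims two vertices but supplies only one index is not
assert not validate_mesh([(0, 0, 0)], [2], [0])
```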
## TexturedMesh
Encodes a polygonal mesh with free-form textures. Meshes are defined the same as PolygonMesh. It supports both ImageProvider and URL. Basically it’s PolygonMesh with the ability to use images. Users can provide either sourceUrl or imageProvider, just as sc.Image as the source of the texture. And `uvs` provides how the texture is applied to the mesh.
> **Note**: In Kit 105 UVs are specified with the V-coordinate flipped, while Kit 106 moves to specifying UVs in the same "space" as USD. In 105.1 there is a transitional property `legacy_flipped_v` that can be provided to the TexturedMesh constructor to internally handle the conversion, but specifying UV coordinates with `legacy_flipped_v=True` has a negative performance impact.
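The conversion between the two UV conventions only touches the V coordinate. A small stand-alone sketch (the helper name `flip_v` is ours, for illustration only):

```python
# Convert UVs between the legacy flipped-V convention (Kit 105) and
# USD-style UVs (Kit 106): only the V coordinate changes.
def flip_v(uvs):
    return [(u, 1.0 - v) for (u, v) in uvs]

legacy_uvs = [(0, 0), (0, 1), (1, 1), (1, 0)]
usd_uvs = flip_v(legacy_uvs)
print(usd_uvs)  # -> [(0, 1.0), (0, 0.0), (1, 0.0), (1, 1.0)]
```

Note that the conversion is its own inverse: flipping twice returns the original coordinates.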
```python
from omni.ui import scene as sc
from pathlib import Path
EXT_PATH = f"{Path(__file__).parent.parent.parent.parent.parent}"
scene_view = sc.SceneView(
aspect_ratio_policy=sc.AspectRatioPolicy.PRESERVE_ASPECT_FIT,
height=150,
)
with scene_view.scene:
point_count = 4
# Form the mesh data
points = [(-1, -1, 0), (1, -1, 0), (-1, 1, 0), (1, 1, 0)]
vertex_indices = [0, 2, 3, 1]
colors = [[0, 1, 0, 1], [0, 1, 0, 1], [0, 1, 0, 1], [0, 1, 0, 1]]
uvs = [(0, 0), (0, 1), (1, 1), (1, 0)]
# Draw the mesh
filename = f"{EXT_PATH}/data/main_ov_logo_square.png"
sc.TexturedMesh(filename, uvs, points, colors, [point_count], vertex_indices, legacy_flipped_v=False)
```
## Label
Defines a standard label for user interface items. The text size is always in the screen space and oriented to the camera. It supports `omni.ui` alignment.
```python
# NOTE: the original example code was partially lost in extraction; this is a
# minimal sketch reassembled from the surviving arguments. The label text and
# alignment are placeholders.
sc.Label(
    "Label",
    alignment=ui.Alignment.CENTER,
    color=cl("#76b900"),
    size=50,
)
```
# Overview — Omniverse Kit 1.7.1 documentation
**Extension:** omni.kit.widget.toolbar-1.7.1
**Documentation Generated:** May 08, 2024

## Overview

`omni.kit.widget.toolbar` provides a toolbar-like widget that can host sub-widgets such as `ToolButton` within itself. While the toolbar widget can be placed inside any UI frame, in a common Kit app setting it is organized within `omni.kit.window.toolbar` to create the Main Toolbar in Kit. There can be only one instance of such a toolbar widget globally.

The structure of a horizontal Toolbar looks like this:
```mermaid
graph TD
subgraph toolbar_window["Toolbar Window (omni.kit.window.toolbar)"]
subgraph toolbar_widget["Toolbar Widget (omni.kit.widget.toolbar)"]
direction TB
subgraph widget_group1[WidgetGroup 1]
omni_ui_widget1(omni.ui.Widget 1)
end
subgraph widget_group2[WidgetGroup 2]
direction TB
omni_ui_widget2(omni.ui.Widget 2)
omni_ui_widget3(omni.ui.Widget 3)
end
subgraph widget_group3[WidgetGroup 3]
direction TB
omni_ui_widget4(omni.ui.Widget 4)
omni_ui_widget5(omni.ui.Widget 5)
end
end
end
```
### Toolbar Widget

The `Toolbar` class is both the registry and the UI frame that hosts `WidgetGroup` instances.

For examples of how to put a `WidgetGroup` on a `Toolbar`, please check the Toolbar Usage Examples.

### WidgetGroup

A widget group is a collection of one or more `omni.ui` widgets to be placed as associated entries on the toolbar. You can inherit this class and override various of its methods to create your custom widget group.

For an example of how to create a `WidgetGroup`, please check the WidgetGroup Usage Example.

### SimpleToolButton

`SimpleToolButton` serves both as an example and as a util class to demonstrate how a `WidgetGroup` with one `omni.ui.ToolButton` can be created. Check its documentation and source code for details.

For examples of how to use `SimpleToolButton`, please check the SimpleToolButton Usage Examples.
## User Guide
* Toolbar Usage Examples
* WidgetGroup Usage Example
* CHANGELOG
# OGN Reference Guide
This is a detailed guide to the syntax of the .ogn file. All of the keywords supported are described in detail, and a simplified JSON schema file is provided for reference.
Each of the described elements contains an example of its use in a .ogn file to illustrate the syntax. For a more detailed guide on how each of the elements are used see the OGN User Guide.
## Contents
- [OGN Reference Guide](#ogn-reference-guide)
- [Basic Structure](#basic-structure)
- [Comments](#comments)
- [Node Property Keywords](#node-property-keywords)
- [Attribute Dictionaries](#attribute-dictionaries)
- [Attribute Property Keywords](#attribute-property-keywords)
- [Attribute Properties](#attribute-properties)
- [Test Definitions](#test-definitions)
- [CPU/GPU Data Switch](#cpu-gpu-data-switch)
- [Extended Type Test Data](#extended-type-test-data)
- [State Test Data](#state-test-data)
- [Test Graph Setup](#test-graph-setup)
- [External Test File](#external-test-file)
- [Simplified Schema](#simplified-schema)
### Node Level Keywords vs Attribute Level Keywords
| Node Level Keywords | Attribute Level Keywords |
|---------------------|--------------------------|
| description | description |
| exclude | default |
| icon | deprecated |
| memoryType | memoryType |
| cudaPointers | metadata |
| metadata | maximum |
| singleton | minimum |
| tags | optional |
| tokens | type |
| uiName | uiName |
| version | uiType |
| language | unvalidated |
| scheduling | |
| categories | |
## Basic Structure
See the [Naming Conventions](../Conventions.html#omnigraph-naming-conventions) for guidance on how to name your files, nodes, and attributes.
```json
{
"NodeName": {
"NODE_PROPERTY": "NODE_PROPERTY_VALUE",
"inputs": "ATTRIBUTE_DICTIONARY",
"outputs": "ATTRIBUTE_DICTIONARY",
"state": "ATTRIBUTE_DICTIONARY",
"tests": "TEST_DATA"
}
}
```
The [NODE_PROPERTY](ogn-node-property-keywords) values are keywords recognized at the node level. The values in `NODE_PROPERTY_VALUE` will vary based on the specific keyword to which they pertain.
The [ATTRIBUTE_DICTIONARY](ogn-attribute-dictionaries) sections contain all of the information required to define the subset of attributes, each containing a set of [Attribute Property Keywords](ogn-attribute-property-keywords) that describe the attribute.
Lastly the [TEST_DATA](ogn-test-data) section contains information required to construct one or more Python tests.
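Putting these pieces together, a minimal hypothetical .ogn description using this structure might look like the following; the node name, attribute names, and values are invented for illustration:

```json
{
    "ExampleAdd": {
        "description": "Adds two numbers together",
        "version": 1,
        "inputs": {
            "a": { "type": "float", "description": "First addend", "default": 0.0 },
            "b": { "type": "float", "description": "Second addend", "default": 0.0 }
        },
        "outputs": {
            "sum": { "type": "float", "description": "The value a + b" }
        }
    }
}
```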
## Comments
JSON files do not have a syntax for adding comments, however in order to allow for adding descriptions or disabled values to a .ogn file the leading character “$” will treat the key in any key/value pair as a comment. So while
```json
"description": "Hello"
```

will be treated as a value to be added to the node definition,

```json
"$description": "Hello"
```

will be ignored and not parsed.

Comments can appear pretty much anywhere in your file. They are used extensively in the Walkthrough Tutorial Nodes to describe the file contents.
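The effect of the "$" convention can be sketched in a few lines of plain Python; this mirrors the behavior described above but is not the actual .ogn parser:

```python
import json

raw = json.loads('{"$description": "a comment, ignored", "description": "Hello"}')
# keys starting with "$" are treated as comments and dropped before interpretation
parsed = {key: value for key, value in raw.items() if not key.startswith("$")}
print(parsed)  # -> {'description': 'Hello'}
```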
```json
{
"Node": {
"$comment": "This node is like a box of chocolates - you never know what you're gonna get",
"description": [
"This node is part of the OmniGraph node writing examples.",
"It is structured to include node and attribute information illustrating the .ogn format"
],
"version": 1,
"exclude": [
"c++",
"docs",
"icon",
"python",
"template",
"tests",
"usd"
]
}
}
```
## Node Property Keywords
These are the elements that can appear in the `NODE_PROPERTY` section. The values they describe pertain to the node type as a whole.
### description
The `description` key value is required on all nodes and will be used in the generated documentation of the node. You can embed reStructuredText code in the string to be rendered in the final node documentation, though it will appear as-is in internal documentation such as Python docstrings.
The value can be a string or a list of strings. If it is a list, they will be concatenated as appropriate in the locations they are used. (Linefeeds preserved in Python docstrings, turned into a single space for text documentation, prepended with comment directives in code…)
> Tip
> This mandatory string should inform users exactly what function the node performs, as concisely as possible.
```json
{
"Node": {
"$comment": "This node is like a box of chocolates - you never know what you're gonna get",
"description": [
"This node is part of the OmniGraph node writing examples.",
"It is structured to include node and attribute information illustrating the .ogn format"
],
"version": 1,
"exclude": [
"c++",
"docs",
"icon",
"python",
"template",
"tests",
"usd"
]
}
}
```
### version
The integer value `version` defines the version number of the current node definition. It is up to the node writer how to manage the encoding of version levels in the integer value. (For example a node might encode a major version of 3, a minor version of 6, and a patch version of 12 in two digit groups as the integer 30612, or it might simply use monotonic increasing values for versions 1, 2, 3…)
> Tip
> This mandatory value can be anything but by convention should start at 1.
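The two-digit-group encoding from the example (major 3, minor 6, patch 12 becoming 30612) can be sketched as follows. The helper names are illustrative, since the .ogn format only stores the final integer and leaves the encoding scheme up to the node writer:

```python
def encode_version(major, minor, patch):
    # two decimal digits per group: MMmmpp
    return major * 10000 + minor * 100 + patch

def decode_version(version):
    return version // 10000, (version // 100) % 100, version % 100

assert encode_version(3, 6, 12) == 30612
assert decode_version(30612) == (3, 6, 12)
```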
### exclude
Some node types will not be interested in all generated files, e.g. if the node is a Python node it will not need the C++ interface. Any of the generated files can be skipped by including it in a list of strings whose key is `exclude`. Here is a node which excludes all generated output, something you might do if you are developing the description of a new node and just want the node syntax to validate without generating code.
Legal values to include in the exclusion list are **“c++”**, **“docs”**, **“icon”**, **“python”**, **“template”**, **“tests”**, or **“usd”**, in any combination.
> Note
> C++ is automatically excluded when the implementation language is Python, however when the implementation language is C++ there will still be a Python interface class generated for convenience. It will have less functionality than for nodes implemented in Python and is mainly intended to provide an easy interface to the node from Python scripts.
### categories
Categories provide a way to group similar node types, mostly so that they can be managed easier in the UI.
```json
{
"description": "This is some kind of math array conversion node",
"categories": "math:array,math:conversion"
}
```
For a more detailed example see the *Node Categories* “how-to”.
### cudaPointers
Usually when the memory type is set to *cuda* or *any* the CUDA memory pointers for array types are returned as a GPU pointer to GPU data, so when passing the data to CUDA code you have to pass pointers-to-pointers, since the CPU code cannot dereference them. Sometimes it is more efficient to just pass the GPU pointer directly though, pointed at by a CPU pointer. (It’s still a pointer to allow for null values.) You can do this by specifying *“cpu”* as your *cudaPointers* property.
```json
{
"metadata": {
"author": "Bertram P. Knowedrighter"
},
"cudaPointers": "cpu",
"uiName": "OmniGraph Example Node"
}
```
> Note
> You can also specify *"cuda"* for this value, although as it is the default this has no effect.
### metadata
Node types can have key/value style metadata attached to them by adding a dictionary of them using the *metadata* property. The key and value are any arbitrary string, though it’s a good idea to avoid keywords starting with underscore (_) as they may have special meaning to the graph. Lists of strings can also be used as metadata values, though they will be transformed into a single comma-separated string.
A simple example of useful metadata is a human readable format for your node type name. UI code can then read the consistently named metadata to provide a better name in any interface requiring node type selection. In the example the keyword *author* is used.
```json
{
"memoryType": "cuda",
"metadata": {
"author": "Bertram P. Knowedrighter"
},
"tokens": "apple"
}
```
Tip
===
There are several hardcoded metadata values, described throughout this guide. The keywords under which they are stored are available as constants for consistency; in Python they can be found in the `og.MetadataKeys` object, and in C++ in the file `omni/graph/core/ogn/Database.h`.
scheduling
==========
A string or list of string values that represent information for the scheduler on how nodes of this type may be safely scheduled. The string values are fixed, and say specific things about the kind of data the node accesses when computing.
```json
{
"version": 1,
"scheduling": ["global-write", "usd"],
"language": "python"
}
```
The strings accepted as values in the .ogn file are described below (extracted directly from the code):
```python
class SchedulingHints:
"""Class managing the scheduling hints.
The keywords are case-independent during parsing, specified in lower case here for easy checking.
When there is a -read and -write variant only one of them should be specified at a time:
no suffix: The item in question is accessed for both read and write
-read suffix: The item in question is accessed only for reading
-write suffix: The item in question is accessed only for writing
These class static values list the possible values for the "scheduling" lists in the .ogn file.
# Set when the node accesses other global data, i.e. data stored outside of the node, including the data
# on other nodes.
GLOBAL_DATA = "global"
GLOBAL_DATA_READ = "global-read"
GLOBAL_DATA_WRITE = "global-write"
# Set when a node accesses static data, i.e. data shared among all nodes of the same type
STATIC_DATA = "static"
STATIC_DATA_READ = "static-read"
STATIC_DATA_WRITE = "static-write"
# Set when the node is a threadsafe function, i.e. it can be scheduled in parallel with any other nodes, including
# nodes of the same type. This flag is not compatible with the topology hints that aren't read-only.
THREADSAFE = "threadsafe"
# Set when the node accesses the graph topology, e.g. connections, attributes, or nodes
TOPOLOGY = "topology"
TOPOLOGY_READ = "topology-read"
TOPOLOGY_WRITE = "topology-write"
# Set when the node accesses the USD stage data (for read-only, write-only, or both read and write)
USD = "usd"
USD_READ = "usd-read"
USD_WRITE = "usd-write"
# Set when the scheduling of the node compute may be modified from the evaluator default.
COMPUTERULE_DEFAULT = "compute-default"
COMPUTERULE_ON_REQUEST = "compute-on-request"
# Set when the node author wishes to specify the purity of the computations that a node does.
# A "pure" node is one that has no side effects in its initialize, compute, and/or release
# methods (no mutation of data that is shared/can be accessed outside of the node scope, no
# dependencies on external data apart from its inputs that could influence execution results).
# In other words, pure nodes are deterministic in that they will always produce the same output
# attribute values for a given set of input attribute values, and do not access, rely on, or
# otherwise mutate data external to the node's scope.
PURE = "pure"
"""
```
singleton
=========
singleton is metadata with special meaning to the node type, so as a shortcut it can also be specified as its own keyword at the node level. The meaning is the same: associate a piece of metadata with the node type. This metadata indicates that only a single node of this type may be instantiated in a graph or its child graphs. The value is specified as a boolean, though it is stored as the string "1". (If the boolean is false then nothing is stored, as that is the default.)
tags
====
tags is a very common piece of metadata, so as a shortcut it can also be specified as its own keyword at the node level. The meaning is the same: associate a piece of metadata with the node type. This metadata can be used by the UI to better organize sets of nodes into common groups.
Tip
===
Tags can be either a single string, a comma-separated string, or a list of strings. They will all be represented as a comma-separated string in the metadata.
tokens
======
Token types are more efficient than string types for comparison, and are fairly common, so the .ogn file provides this shortcut to predefine tokens for use in your node implementation code.

The simplest method of adding tokens is to add a single token string. If you have multiple tokens you can instead specify a list; the lookup in your code is the same either way. Lastly, if a token value is not a legal C++ or Python variable name, you can specify tokens in a dictionary, where the key is the name through which it will be accessed and the value is the actual token string.

See the OGN User Guide for information on how to access the different sets of tokens in your code.
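As a sketch (the token names and values here are illustrative, not from any shipped node): a single token is declared as `"tokens": "apple"`, multiple tokens as `"tokens": ["red", "green", "blue"]`, and tokens whose values are not legal variable names via the dictionary form:

```json
{
    "tokens": {
        "lessThan": "<",
        "greaterThan": ">",
        "notEqual": "!="
    }
}
```

With the dictionary form the values are accessed through the safe names, e.g. `db.tokens.lessThan`.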
uiName
======
uiName is a very common piece of metadata, so as a shortcut it can also be specified as its own keyword at the node level. The meaning is the same: associate a piece of metadata with the node type. This metadata can be used by the UI to present a more human-readable name for the node type.
```json
{
"tags": "fruit,example,chocolate",
"uiName": "OmniGraph Example Node"
}
```
> Tip
> Unlike the actual name, the uiName has no formatting or uniqueness requirements. Choose a name that will make its function obvious to a user selecting it from a list.
### Attribute Dictionaries
Each of the three attribute sections, denoted by the keywords `inputs`, `outputs`, and `state`, contains a list of the attributes in that location and their properties.
- **inputs**
- Attributes that are read-only within the node’s compute function. These form the collection of data used to run the node’s computation algorithm.
- **outputs**
- Attributes whose values are generated as part of the computation algorithm. Until the node computes their values they will be undefined. This data is passed on to other nodes in the graph, or made available for inspection.
- **state**
- Attributes that persist between one evaluation and the next. They are both readable and writable. The primary difference between `state` attributes and `output` attributes is that when you set the value on a `state` attribute that value is guaranteed to be there the next time the node computes. Its data is entirely owned by the node.
```json
{
"uiName": "OmniGraph Example Node",
"inputs": {},
"outputs": {},
"state": {}
}
```
> Note
> If there are no attributes of a specific location then that section can simply be omitted.
### Attribute Property Keywords
The top level keyword of the attribute is always its unique name. It is namespaced within the section in which it resides and need only be unique within that section. For example, the attribute `mesh` can appear in both the `inputs` and `outputs` sections, where it will be named `inputs:mesh` and `outputs:mesh` respectively.
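A minimal sketch of this namespacing (the attribute names and types here are chosen for illustration, not taken from a shipped node):

```json
{
    "inputs": {
        "mesh": {
            "description": "Mesh to be deformed",
            "type": "bundle"
        }
    },
    "outputs": {
        "mesh": {
            "description": "The deformed mesh",
            "type": "bundle"
        }
    }
}
```

In code and on the node these are addressed as `inputs:mesh` and `outputs:mesh`.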
#### Attribute Properties
Like the outer node level, each of the attributes has a set of mandatory and optional properties.
- **description**
- As with the node, the `description` field is a multi-line description of the attribute, optionally with reStructuredText formatting. The description should contain enough information for the user to know how that attribute will be used (as an input), computed (as an output), or updated (as state).
> Tip
> This mandatory string should inform users exactly what data the attribute contains, as concisely as possible.
```json
{
"inputs": {
"numberOfLimbs": {
"description": "The number of limbs present in the generated character",
"type": "int"
}
}
}
```
- **type**
- The `type` property is one of several hard-coded values that specify what type of data the attribute contains. Not all type combinations are supported yet; run `generate_node.py --help` to see the currently supported list of attribute types. For a full list of supported types and the data types they generate see [Attribute Data Types](#ogn-attribute-types).
> Tip
> This field is mandatory, and will help determine what type of interface is generated for the node.
```json
"type": "int",
"default": 4,
```
```json
"type": "int",
"default": 4,
"optional": true,
```
### default
The `default` property on inputs contains the value of the attribute that will be used when the user has not explicitly set a value or provided an incoming connection to it. For outputs the default value is optional and will only be used when the node compute method cannot be run.
The value type of the `default` property will be the JSON version of the type of data, shown in Attribute Data Types.
### Tip
Although input attributes should all have a default, concrete data types need not have a default set if the intent is for them to have their natural default. It will be assigned to them automatically. e.g. 0 for “int”, [0.0, 0.0, 0.0] for “float[3]”, false for “bool”, and “[]” for any array types.
### Warning
Some attribute types, such as “any” and “bundle”, have no well-defined data types and cannot have a default set.
### deprecated
The `deprecated` property is used to indicate that the attribute is being phased out and should no longer be used. The value of the property is a string or array of strings providing users with information on how they should change their graphs to accommodate the eventual removal of the attribute.
```json
"inputs": {
"offset": {
"description": "Value to be added to the result, after 'scale' has been applied.",
"type": "float",
"default": 0.0,
"optional": true,
"deprecated": "Use 'minValue' instead."
},
"scale": {
"description": "Value to multiply the result by, before 'offset' has been applied.",
"type": "float",
"default": 1.0,
"optional": true,
"deprecated": [
"Use 'maxValue' instead.",
"To reproduce the same behavior as before, set 'maxValue' to 'scale' + 'offset'."
]
}
}
```
### optional
The `optional` property is used to tell the node whether the attribute’s value needs to be present in order for the compute function to run. If it is set to `true` then the value is not checked before calling compute. The default value `false` will not call the compute function if the attribute does not have a valid value.
```json
"default": 4,
"optional": true,
"memoryType": "cpu",
```
### memoryType
By default every attribute in a node will use the `memoryType` defined at the node level. It’s possible for attributes to override that choice by adding that same keyword in the attribute properties.
Here’s an example of an attribute that overrides the node level memory type to force the attribute onto the CPU. You might do this to keep cheap POD values on the CPU while the expensive data arrays go directly to the GPU.
```json
"optional": true,
"memoryType": "cpu",
"minimum": 2,
```
### minimum/maximum
When specified, these properties represent the minimum and maximum allowable value for the attribute. For arrays the values are applicable to every array element. For tuples the values will themselves be tuples with the same size.
```json
{
"memoryType": "cpu",
"minimum": 2,
"maximum": 8,
"metadata": {
}
}
```
> **Note**
> These properties are only valid for the numeric attribute types, including tuples and arrays. At present they are not applied at runtime, only used to validate test and default values within the .ogn file; in the future they may be enforced, so it is always a good idea to specify values here when applicable.
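The element-wise rule above can be sketched in Python. This mirrors the validation behavior described for scalars, tuples, and arrays; it is not the node generator's actual code:

```python
def in_range(value, minimum, maximum):
    """Check a candidate default or test value against .ogn minimum/maximum bounds."""
    if isinstance(value, (list, tuple)):
        if isinstance(minimum, (list, tuple)):
            # Tuple bounds: compared component-wise against same-sized bound tuples
            return all(in_range(v, lo, hi) for v, lo, hi in zip(value, minimum, maximum))
        # Array values: every element must satisfy the same scalar bounds
        return all(in_range(v, minimum, maximum) for v in value)
    return minimum <= value <= maximum

print(in_range(4, 2, 8))                              # True
print(in_range([3.0, 9.0], [0.0, 0.0], [5.0, 5.0]))   # False: second component exceeds its maximum
```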
### metadata
Attributes can also have key/value style metadata attached to them by adding a dictionary of them using the `metadata` property. The key and value are any arbitrary string, though it’s a good idea to avoid keywords starting with underscore (`_`) as they may have special meaning to the graph. Lists of strings can also be used as metadata values, though they will be transformed into a single comma-separated string.
```json
{
"maximum": 8,
"metadata": {
"disclaimer": "There is no distinction between 8-limbed spiders and 8-limbed Octopi"
},
"uiName": "Number Of Limbs"
}
```
There are a number of attribute metadata keys with special meanings:

- **allowedTokens** — Used only for attributes of type `token`; contains the list of values that token is allowed to take:
```json
{
"inputs": {
"operator": {
"type": "token",
"description": "The mathematical operator to apply",
"metadata": {
"allowedTokens": ["lt", "gt", "ne"]
}
}
}
}
```
Sometimes you may wish to have special characters in the list of allowed tokens. The generated code uses the token name for easy access to its values so in these cases you will have to also supply a corresponding safe name for the token value through which the generated code will access it.
```json
{
"inputs": {
"operator": {
"type": "token",
"description": "The mathematical operator to apply",
"metadata": {
"allowedTokens": {
"lt": "<",
"gt": ">",
"ne": "!="
}
}
}
}
}
```
In both cases you would access the token values through the database members `db.tokens.lt`, `db.tokens.gt`, and `db.tokens.ne`.
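Inside a Python node's `compute()` the generated token constants compare directly against the incoming token value. A minimal stand-in sketch (the `db` layout below imitates, but is not, the generated OGN database):

```python
from types import SimpleNamespace

def compute(db):
    # Token comparisons use the generated constants rather than raw strings
    op = db.inputs.operator
    if op == db.tokens.lt:
        db.outputs.result = db.inputs.a < db.inputs.b
    elif op == db.tokens.gt:
        db.outputs.result = db.inputs.a > db.inputs.b
    else:  # db.tokens.ne
        db.outputs.result = db.inputs.a != db.inputs.b
    return True

# Stand-in database so the logic can run outside of OmniGraph
db = SimpleNamespace(
    tokens=SimpleNamespace(lt="<", gt=">", ne="!="),
    inputs=SimpleNamespace(operator="<", a=1, b=2),
    outputs=SimpleNamespace(result=None),
)
compute(db)
print(db.outputs.result)  # True, since 1 < 2
```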
When you have safe names specified you can also choose to set the default value using the safe name rather than the literal value of the token. You can also specify the `allowedTokens` metadata at the main keyword level.
```json
{
    "operator": {
        "type": "token",
        "description": "The mathematical operator to apply",
        "default": "<",
        "allowedTokens": {
            "lt": "<",
            "gt": ">",
            "ne": "!="
        }
    },
    "operator2": {
        "type": "token",
        "description": "The mathematical operator to apply, defaulted by name",
        "default": "lt",
        "allowedTokens": {
            "lt": "<",
            "gt": ">",
            "ne": "!="
        }
    }
}
```
- **allowMultiInputs** — Set to "1" to let an input attribute accept more than one incoming connection:

```json
{
    "inputs": {
        "primsBundle": {
            "type": "bundle",
            "description": [
                "The bundle(s) of multiple prims to be written back."
            ],
            "metadata": {
                "allowMultiInputs": "1"
            }
        }
    }
}
```
- **hidden** — Set to "true" to hide the attribute in UI displays:

```json
{
    "inputs": {
        "appearanceHint": {
            "description": "Hint to shader writer of how the limbs should be colored.",
            "type": "string",
            "optional": true,
            "metadata": {
                "hidden": "true"
            }
        }
    }
}
```
- **internal** — Marks attributes that are for internal use only:

```json
{
    "inputs": {
        "debugFlags": {
            "description": "Debugging flags. For internal use only.",
            "type": "int",
            "optional": true,
            "metadata": {
                "internal": "true"
            }
        }
    }
}
```
- **literalOnly** — Set to "1" for attributes whose values may only be set directly, not driven through a connection:

```json
{
    "inputs": {
        "keyIn": {
            "type": "token",
            "description": "The key to trigger the downstream execution",
            "metadata": {
                "literalOnly": "1"
            }
        }
    }
}
```
- **outputOnly** — Set to "1" for an input that is meant to be set directly and then connected onward as a source for other inputs:

```json
{
    "inputs": {
        "defaultSpeed": {
            "type": "double",
            "description": [
                "Default speed for interpolations. By connecting all 'Interpolate To' nodes to",
                "this the user can set it in just one place."
            ],
            "metadata": {
                "outputOnly": "1"
            }
        }
    }
}
```
- **uiName** — A human-readable name for the attribute:

```json
{
    "metadata": {
        "disclaimer": "There is no distinction between 8-limbed spiders and 8-limbed Octopi",
        "uiName": "Number Of Limbs"
    }
}
```
- **uiType** — A hint that the UI should use special handling to display the attribute, e.g. `color` for a color picker:

```json
{
    "metadata": {
        "uiType": "color"
    }
}
```
### unvalidated
Some attributes are only used under certain conditions, so validating their values before every compute would be wasted effort. Marking such attributes as `unvalidated` excludes them from the normal validity checks made before calling `compute()`. The difference is that these attributes will always exist; they just may not have valid data when the compute is invoked. For such attributes, the onus is on the node writer to check their validity if they do end up being used for the compute.
```json
{
"inputs": {
"useInput1": {
"type": "bool",
"description": "If true then output the first input"
},
"firstInput": {
"type": "any",
"description": "First attribute to be checked",
"unvalidated": true
},
"secondInput": {
"type": "any",
"description": "Second attribute to be checked",
"unvalidated": true
}
}
}
```
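A sketch of the corresponding `compute()` logic: the node writer, not the framework, decides which unvalidated input is safe to read. The `db` object and the `result` output below are stand-ins for illustration, not the generated OGN API:

```python
from types import SimpleNamespace

def compute(db):
    # Only the selected input is examined; checking its validity is the writer's job
    chosen = db.inputs.firstInput if db.inputs.useInput1 else db.inputs.secondInput
    if chosen is None:  # stand-in for "attribute has no valid data"
        return False    # skip the compute rather than read invalid data
    db.outputs.result = chosen
    return True

db = SimpleNamespace(
    inputs=SimpleNamespace(useInput1=True, firstInput=42, secondInput=None),
    outputs=SimpleNamespace(result=None),
)
print(compute(db))        # True
print(db.outputs.result)  # 42
```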
### Test Definitions
The node generator is also capable of automatically generating some unit tests on the operation of the node’s algorithm through the **tests** property. This property contains a list of dictionaries, where each entry in the list is a dictionary of either `attribute name : attribute value` (if a test file is not being utilized) or `path to node in test scene : dictionary of attribute values` key-value pairs.
The test runs by either setting all of the input attributes to their corresponding values in the dictionary or leveraging a user-specified test scene in which all input connections/values have already been set, executing the node’s compute algorithm, and then comparing the computed values of the outputs against their corresponding values in the dictionary.
> **Note**
> When input attributes do not have a value in the tests, their default is used. When output attributes do not have a value in the test, they are not checked against the computed result.
There are a few methods of specifying test data, each of which revolves around whether or not a test scene is to be used for a given test run. Between each format, there are equivalencies in the specifics so you can use the one(s) that makes the most sense for your particular test cases. They can coexist if you have different types of test data.
The first method is using a single dictionary to specify any non-default attribute values (without an external test scene).
```json
{
"tests": [
{
"inputs:firstAddend": 1,
"inputs:secondAddend": 2,
"outputs:sum": 3
},
{
"inputs:secondAddend": 5,
"outputs:sum": 5
},
{
"inputs": {
"t0": 40320,
"t1": -109584,
"t2": 118124
}
}
]
}
```
This example shows test cases to exercise a simple node that adds two integers together. The first test says *if the node has inputs 1 and 2 the output should be 3* and the second one says *if the node has an input of 5 and a default valued input the output should be 5* (the defaults have been set to 0).
For a more complex test, you can specify the data involved by location instead of all in a single dictionary, as the third test above does for an 8-dimensional polynomial solver. Attribute values can also be given with an explicit type, as in this variation of the addition test:
```json
{
"inputs:firstAddend": {
"float": 1.5
},
"inputs:secondAddend": {
"float": 2.5
},
"outputs:sum": 4.0
}
```
```json
{
"state_set:counter": 9,
"state_get:counter": 10
},
{
"state_set:counter": 5,
"state:counter": 6
},
{
"state_set": {
"counter": 9
},
"state_get": {
"counter": 10
}
},
{
"state_set": {
"counter": 5
},
"state": {
"counter": 6
}
}
```
### State Test Data
In the special case of tests that need to exercise state data extra syntax is added to differentiate between values that should be set on state attributes before the test starts and values that are checked on them after the test is completed. The special suffix `_set` is appended to the state namespace to signify that a value is to be initialized before a test runs. You may optionally add the suffix `_get` to the state namespace to clarify which values are to be checked after the test runs but that is the default so it is not necessary.
### Test Graph Setup
For more complex situations you may need more than just a single node to test code paths properly. For these situations there is a pre-test setup section you can add, in the form of the `Controller.edit` function parameters. Only the creation directives are accepted, not the destructive ones such as `disconnections`.
These creation directives are all executed by the test script before it starts to run, providing you with a known starting graph configuration consisting of any nodes, prims, and connections needed to run the test.
```json
{
    "inputs": {
        "firstAddend": 1
    },
    "outputs": {
        "sum": 3
    },
    "setup": {
        "create_nodes": [
            ["TestNode", "omni.graph.examples.Node"]
        ],
        "create_prims": [
            ["Prim1", {"attrInt": ["int", 2]}]
        ],
        "connect": [
            ["Prim1", "attrInt", "TestNode", "inputs:secondAddend"]
        ]
    }
}
```
```json
{
"inputs:firstAddend": 10,
"outputs:sum": 12
}
```
### Note
If you define the graph in this way then the first node in your `create_nodes` directives must refer to the node being tested. If no setup is specified then a single node of the type being tested will be the sole contents of the stage when running the test.
### External Test File
Some nodes may require large amounts of data and/or graph set-up before they can be properly tested, which can be time-consuming to set up via the test construct code alone. For such situations it can be more efficient to simply load an external file (.usd or .usda format) that already describes the entire test scene.
```json
{
"file": "TestFileForMyNode.usda",
"/World/MyGraph/MyNode": {
"outputs": {
"outputAttr0": "ExpectedValue0",
"outputAttr1": {
"type": "token",
"value": "ExpectedValue1"
}
},
"outputs:outputAttr2": "ExpectedValue2",
"state": {
"stateAttr0": "ExpectedState0",
"stateAttr1": "ExpectedState1"
},
"state:stateAttr2": "ExpectedState2"
},
"/World/MyGraph/AnotherNodeInTheSameTestScene.outputs": {
"result": 5,
"intermediateResult": {
"type": "float",
"value": 2.5
}
},
"/World/MyGraph/AnotherNodeInTheSameTestScene.state": {
"intermediateState": 1.5
},
"/World/MyGraph/MyNode.outputs:outputAttr3": "ExpectedValue2",
"/World/MyGraph/MyNode.outputs:outputAttr4": {
"type": "token",
"value": "ExpectedValue3"
}
}
```
The test file locations can be specified either as an absolute path or as a path relative to the .ogn file. Note that the formatting in this test is different from the previous examples; because the test file in question may contain many different nodes whose output conditions we would like to check after evaluation, we need to add an extra layer of wrapping to relate the nodes being tested to the attributes that need to be checked. More specifically, there are four general allowed key-value patterns for creating per-node test constructs when utilizing an external test file:
## Testing Node Attributes
### Node path as the key
The key is the path to the node in the test scene, while the value is a dictionary mapping either an attribute namespace (`outputs`, `state`) to attribute name: expected value pairs, or a namespaced attribute name directly to its expected value:
```json
{
    "tests": [
        {
            "file": "TestFileForMyNode.usda",
            "/World/MyGraph/MyNode": {
                "outputs": {
                    "outputAttr0": "ExpectedValue0",
                    "outputAttr1": {
                        "type": "token",
                        "value": "ExpectedValue1"
                    }
                },
                "outputs:outputAttr2": "ExpectedValue2",
                "state": {
                    "stateAttr0": "ExpectedState0",
                    "stateAttr1": "ExpectedState1"
                },
                "state:stateAttr2": "ExpectedState2"
            }
        }
    ]
}
```
### Shorthand forms for attribute checks
The key is a combination of the path to the node in the test scene and attribute namespace, while the value is a key-value pair of attribute name: expected value for all attributes in the namespace that require verification:
```json
{
    "tests": [
        {
            "file": "TestFileForMyNode.usda",
            "/World/MyGraph/AnotherNodeInTheSameTestScene.outputs": {
                "result": 5,
                "intermediateResult": {
                    "type": "float",
                    "value": 2.5
                }
            },
            "/World/MyGraph/AnotherNodeInTheSameTestScene.state": {
                "intermediateState": 1.5
            }
        }
    ]
}
```
The key is a combination of the path to the node in the test scene, attribute namespace, and attribute name that we want to test, while the value is the expected attribute value after graph evaluation(s).
```json
{
    "tests": [
        {
            "file": "TestFileForMyNode.usda",
            "/World/MyGraph/MyNode.outputs:outputAttr3": "ExpectedValue2",
            "/World/MyGraph/MyNode.outputs:outputAttr4": {
                "type": "token",
                "value": "ExpectedValue3"
            }
        }
    ]
}
```
```
```json
{
    "$schema": "http://json-schema.org/schema#",
    "type": "object",
    "title": "OmniGraph Compute Node Interface Description",
    "description": "Contains a description of the interfaces available on one or more OmniGraph Compute Nodes.",
    "$comments": "If any dictionary keyword begins with a '$' it will be treated as a comment",
    "definitions": {
        "commentType": {
            "description": "Pattern to allow $comment to be used with any data to annotate the file",
            "type": ["array", "boolean", "integer", "number", "object", "string"]
        },
        "languageType": {
            "description": "Values recognized as valid node language types",
            "pattern": "^(cpp|c\\+\\+|cu|cuda|py|python)$"
        },
        "tags": {
            "description": "Single value or list of values to use as tags on the node",
            "type": ["array", "string"]
        },
        "schedulingHints": {
            "description": "Values recognized as valid scheduling hints",
            "pattern": "^(global|global-read|global-write|static|static-read|static-write|threadsafe|topology|topology-read|topology-write|usd|usd-read|usd-write|compute-default|compute-on-request|pure)$"
        },
        "color": {
            "description": "RGBA Color specification",
            "oneOf": [
                { "pattern": "^#[0-9a-fA-F]{8}$" },
                { "type": "array", "items": { "type": "integer", "minItems": 4, "maxItems": 4 } }
            ]
        },
        "iconDictionary": {
            "description": "Long form specifying icon properties by keyword",
            "type": "object",
            "properties": {
                "path": { "type": "string" },
                "color": { "$ref": "#/definitions/color" },
                "backgroundColor": { "$ref": "#/definitions/color" },
                "borderColor": { "$ref": "#/definitions/color" }
            }
        },
        "icon": {
            "description": "Single string path or dictionary of detailed information to override icon appearance",
            "oneOf": [
                { "type": "string" },
                { "$ref": "#/definitions/iconDictionary" }
            ]
        },
        "attributeValue": {
            "description": "Simplified match for attribute values specified in the file (no type validation)",
            "type": ["array", "boolean", "integer", "number", "string"]
        },
        "attributeValueType": {
            "description": "Types of data recognized as valid attribute data in JSON form",
            "type": ["array", "boolean", "integer", "number", "string"]
        },
        "typedAttributeValue": {
            "description": "An attribute value that includes type information",
            "type": "object",
            "properties": {
                "type": { "$ref": "#/definitions/attributePattern" },
                "value": { "$ref": "#/definitions/attributeValueType" }
            }
        },
        "maybeTypedAttributeValue": {
            "description": "An attribute value that may be typed",
            "oneOf": [
                { "$ref": "#/definitions/attributeValueType" },
                { "$ref": "#/definitions/typedAttributeValue" }
            ]
        },
        "$subtypes": { "what": "Patterns for matching the attribute type declaration" },
        "simpleType": {
            "description": "Matches an attribute type pattern with no component counts or arrays",
            "pattern": "^[^\\[\\]]+$"
        },
        "componentType": {
            "description": "Matches an attribute type pattern with a simple value with a component count",
            "pattern": "^[^\\[\\]]+\\[[0-9]{1,3}\\]$"
        },
        "arrayType": {
            "description": "Matches an attribute type pattern for an array of simple values",
            "pattern": "^[^\\[\\]]+\\[\\]$"
        },
        "arrayOfArraysType": {
            "description": "Matches an attribute type pattern for an array of arrays of simple values",
            "pattern": "^[^\\[\\]]+\\[\\]\\[\\]$"
        },
        "componentArrayType": {
            "description": "Matches an attribute type pattern for an array of components",
            "pattern": "^[^\\[\\]]+\\[[0-9]{1,3}\\]\\[\\]$"
        },
        "componentArrayOfArraysType": {
            "description": "Matches an attribute type pattern for an array of arrays of components",
            "pattern": "^[^\\[\\]]+\\[[0-9]{1,3}\\]\\[\\]\\[\\]$"
        },
        "attributeTypeName": {
            "description": "Simple attribute type name portion of the pattern",
            "pattern": "^(any|bool|bundle|double|execution|float|half|int|int64|path|string|target|timecode|token|uchar|uint|uint64)"
        },
        "attributeTypeNameWithRoles": {
            "description": "Simple attribute type name portion of the pattern including role-based attributes",
            "pattern": "^(any|bool|bundle|double|execution|float|half|int|int64|objectId|path|string|target|timecode|token|uchar|uint|uint64|((color|normal|point|quat|texcoord|vector)(d|f|h))|(matrix(d|f))|frame|transform)"
        },
        "numericAttributeTypeName": {
            "description": "Numeric attribute types supporting min/max values",
            "pattern": "^(double|float|half|int|int64|timecode|uchar|uint|uint64)"
        },
        "numericAttributeTypeNameWithRoles": {
            "description": "Numeric and role-based attribute types supporting min/max values",
            "pattern": "^(double|float|half|int|int64|timecode|uchar|uint|uint64|((color|normal|point|quat|texcoord|vector)(d|f|h))|(matrix(d|f))|frame|transform)"
        },
        "$allTypes": { "what": "Patterns for matching all valid attribute types and their component or array extensions" },
        "attributePatternSimple": {
            "description": "Simple attribute types, no components or arrays",
            "allOf": [
                { "$ref": "#/definitions/attributeTypeName" },
                { "$ref": "#/definitions/simpleType" }
            ]
        },
        "attributePatternComponent": {
            "description": "Simple attribute types with a non-zero component count, no arrays",
            "allOf": [
                { "$ref": "#/definitions/attributeTypeNameWithRoles" },
                { "$ref": "#/definitions/componentType" }
            ]
        },
        "attributePatternArray": {
            "description": "Array attribute types with no components",
            "allOf": [
                { "$ref": "#/definitions/attributeTypeName" },
                { "$ref": "#/definitions/arrayType" }
            ]
        },
        "attributePatternArrayOfArrays": {
            "description": "Array of arrays of attribute types with no components",
            "allOf": [
                { "$ref": "#/definitions/attributeTypeName" },
                { "$ref": "#/definitions/arrayOfArraysType" }
            ]
        },
        "attributePatternComponentArray": {
            "description": "Array attribute types with a non-zero component count",
            "allOf": [
                { "$ref": "#/definitions/attributeTypeNameWithRoles" },
                { "$ref": "#/definitions/componentArrayType" }
            ]
        },
        "attributePatternComponentArrayOfArrays": {
            "description": "Array of arrays of attribute types with a non-zero component count",
            "allOf": [
                { "$ref": "#/definitions/attributeTypeNameWithRoles" },
                { "$ref": "#/definitions/componentArrayOfArraysType" }
            ]
        },
        "attributePattern": {
            "description": "Match all of the simple types, plus an optional component count, and optional array type",
            "oneOf": [
                { "$ref": "#/definitions/attributePatternSimple" },
                { "$ref": "#/definitions/attributePatternComponent" },
                { "$ref": "#/definitions/attributePatternArray" },
                { "$ref": "#/definitions/attributePatternArrayOfArrays" },
                { "$ref": "#/definitions/attributePatternComponentArray" },
                { "$ref": "#/definitions/attributePatternComponentArrayOfArrays" }
            ]
        },
        "$numericTypes": { "what": "Patterns for recognizing the numeric types, for special handling" },
        "numericAttributePatternSimple": {
            "description": "Numeric attribute types, no components or arrays",
            "allOf": [
                { "$ref": "#/definitions/numericAttributeTypeName" },
                { "$ref": "#/definitions/simpleType" }
            ]
        },
        "numericAttributePatternComponent": {
            "description": "Numeric attribute types with a non-zero component count, no arrays",
            "allOf": [
                { "$ref": "#/definitions/numericAttributeTypeNameWithRoles" },
                { "$ref": "#/definitions/componentType" }
            ]
        },
        "numericAttributePatternArray": {
            "description": "Array of numeric attribute types with no components",
            "allOf": [
                { "$ref": "#/definitions/numericAttributeTypeName" },
                { "$ref": "#/definitions/arrayType" }
            ]
        },
        "numericAttributePatternArrayOfArrays": {
            "description": "Array of arrays of numeric attribute types with no components",
            "allOf": [
                { "$ref": "#/definitions/numericAttributeTypeName" },
                { "$ref": "#/definitions/arrayOfArraysType" }
            ]
        },
        "numericAttributePatternComponentArray": {
            "description": "Array of numeric attribute types with a non-zero component count",
            "allOf": [
                { "$ref": "#/definitions/numericAttributeTypeNameWithRoles" },
                { "$ref": "#/definitions/componentArrayType" }
            ]
        },
        "numericAttributePatternComponentArrayOfArrays": {
            "description": "Array of arrays of numeric attribute types with a non-zero component count",
            "allOf": [
                { "$ref": "#/definitions/numericAttributeTypeNameWithRoles" },
                { "$ref": "#/definitions/componentArrayOfArraysType" }
            ]
        },
        "numericAttributePattern": {
            "description": "Match all of the numeric types, plus an optional component count, and optional array type",
            "oneOf": [
                { "$ref": "#/definitions/numericAttributePatternSimple" },
                { "$ref": "#/definitions/numericAttributePatternComponent" },
                { "$ref": "#/definitions/numericAttributePatternArray" },
                { "$ref": "#/definitions/numericAttributePatternArrayOfArrays" },
                { "$ref": "#/definitions/numericAttributePatternComponentArray" },
                { "$ref": "#/definitions/numericAttributePatternComponentArrayOfArrays" }
            ]
        },
        "memoryTypeName": {
            "description": "Node or attribute-level specification of the memory location",
            "type": "string",
            "pattern": "^(cpu|cuda|any)$"
        },
        "metadata": {
            "description": "Key/Value pairs to be stored with the node or attribute type definition",
            "type": "object",
            "additionalProperties": false,
            "patternProperties": {
                "^\\$": { "$ref": "#/definitions/commentType" },
                "^[^\\$].*": { "type": "string" }
            }
        },
        "attribute": {
            "type": "object",
            "description": "A single attribute on a node",
            "required": ["description", "type"],
            "properties": {
                "description": {
                    "type": ["string", "array"],
                    "items": { "type": "string" }
                },
                "array": { "type": "boolean", "default": false },
                "optional": { "type": "boolean", "default": false },
                "unvalidated": { "type": "boolean", "default": false },
                "memoryType": { "$ref": "#/definitions/memoryTypeName" },
                "metadata": { "$ref": "#/definitions/metadata" },
                "type": { "$ref": "#/definitions/attributePattern" }
},
"uiName": {
"type": "string"
}
},
"patternProperties": {
"^\\$": {
"$ref": "#/definitions/commentType"
}
},
"$extraProperties": "Collection of properties that are conditional on attribute types or other values",
"allOf": [
{
"$attributeDefaults": "Define attribute default types, where omission is okay if attribute is optional or an output",
"if": {
"required": ["default"]
},
"then": {
"$comment": "If the default is provided then validate its type",
"$ref": "#/definitions/attributeValue"
}
}
]
}
}
```
284 "description": "A minimum value is only supported on certain attribute types",
285 "if": {
286 "required": ["minimum"]
287 },
288 "then": {
289 "$comment": "If the minimum is provided then validate its type",
290 "$ref": "#/definitions/attributeValue"
291 },
292 },
293 {
294 "description": "A maximum value is only supported on certain attribute types",
295 "if": {
296 "required": ["maximum"]
297 },
298 "then": {
299 "$comment": "If the maximum is provided then validate its type",
300 "$ref": "#/definitions/attributeValue"
301 },
302 }
303 ],
304 },
305 "attributes": {
306 "type": "object",
307 "default": {},
308 "description": "A subset of attributes on a node",
309 "$comment": "Attribute names are alphanumeric with underscores, not starting with a number, allowing periods and colons as separators",
310 "additionalProperties": false,
311 "patternProperties": {
312 "^[A-Za-z_][A-Za-z0-9_.:]*$": { "$ref": "#/definitions/attribute" },
313 "^\\$": { "$ref": "#/definitions/commentType" }
314 }
315 },
316 "tests": {
317 "description": "Tests consist of a list of objects containing either direct values for input and output attributes, or a file path to an external test scene followed by a list of objects containing the expected output and state values to check for a user-specified node (as long as it exists in the test scene)",
318 "type": "array",
319 "items": {
320 "description": "Attribute values can either be namespaces as inputs:X/outputs:X/state:X or in objects with keys inputs/outputs/state. When a file string is specified, attribute values can be correlated to a path to a node in the scene concatenated with the attribute as nodePath.outputs:X/nodePath.state:X, objects with keys nodePath.outputs whose values are the attribute name + value, and/or objects with keys nodePath whose values are objects with keys outputs/state (similar to the previous case without test files)",
321 "type": "object",
322 "patternProperties": {
323 "description": { "$ref": "#/definitions/commentType" },
324 "^inputs$": {
325 "type": "object",
326 "description": "Alternative way of specifying inputs without namespacing; not compatible with file strings located in the same test"
327 }
328 }
329 }
330 }
"patternProperties": {
"^[A-Za-z_][A-Za-z0-9_.:]*$": { "$ref": "#/definitions/maybeTypedAttributeValue" }
}
"^outputs$": {
"type": "object",
"description": "Alternative way of specifying outputs without namespacing",
"patternProperties": {
"^[A-Za-z_][A-Za-z0-9_.:]*$": { "$ref": "#/definitions/maybeTypedAttributeValue" }
}
}
"^state$": {
"type": "object",
"description": "Alternative way of specifying state without namespacing",
"patternProperties": {
"^[A-Za-z_][A-Za-z0-9_.:]*$": { "$ref": "#/definitions/maybeTypedAttributeValue" }
}
}
"^setup$": {
"type": "object",
"description": "Detailed graph setup prior to a test; not compatible with file strings located in the same test",
"properties": {
"nodes": { "type": "array" },
"connections": { "type": "array" },
"prims": { "type": "array" },
"values": { "type": "object" }
}
}
"^inputs:[A-Za-z_][A-Za-z0-9_.:]*$": { "$ref": "#/definitions/maybeTypedAttributeValue" },
"^outputs:[A-Za-z_][A-Za-z0-9_.:]*$": { "$ref": "#/definitions/maybeTypedAttributeValue" },
"^state:[A-Za-z_][A-Za-z0-9_.:]*$": { "$ref": "#/definitions/maybeTypedAttributeValue" },
"^\\$": { "$ref": "#/definitions/commentType" },
"^file$": {
"type": "string",
"description": "Name of the (optional) test file to use; not compatible with inputs and/or setup objects in the same test"
}
"^[A-Za-z_][A-Za-z0-9_\\\/._]$": {
"type": "object"
}
365. "description": "Path to an arbitrary node in the test scene specified by the file name. Only allowed if the file property is defined for the given test",
366. "patternProperties": {
367. "^outputs$": { "$ref": "#/definitions/tests/items/patternProperties/^outputs$" },
368. "^state$": { "$ref": "#/definitions/tests/items/patternProperties/^state$" },
369. "^outputs:[A-Za-z_][A-Za-z0-9_.:]*$": { "$ref": "#/definitions/maybeTypedAttributeValue" },
370. "^state:[A-Za-z_][A-Za-z0-9_.:]*$": { "$ref": "#/definitions/maybeTypedAttributeValue" }
371. }
372. },
373. "^[A-Za-z_][A-Za-z0-9_\\\/._].outputs$": {
374. "type": "object",
375. "description": "Path to an arbitrary node in the test scene concatenated with the \"outputs\" attribute namespace. Only allowed if the file property is defined for the given test",
376. "patternProperties": {
377. "^[A-Za-z_][A-Za-z0-9_.:]*$": { "$ref": "#/definitions/maybeTypedAttributeValue" }
378. }
379. },
380. "^[A-Za-z_][A-Za-z0-9_\\\/._].state$": {
381. "type": "object",
382. "description": "Path to an arbitrary node in the test scene concatenated with the \"state\" attribute namespace. Only allowed if the file property is defined for the given test",
383. "patternProperties": {
384. "^[A-Za-z_][A-Za-z0-9_.:]*$": { "$ref": "#/definitions/maybeTypedAttributeValue" }
385. }
386. },
387. "^[A-Za-z_][A-Za-z0-9_\\\/._].outputs:[A-Za-z_][A-Za-z0-9_.:]*$": { "$ref": "#/definitions/maybeTypedAttributeValue" },
388. "^[A-Za-z_][A-Za-z0-9_\\\/._].state:[A-Za-z_][A-Za-z0-9_.:]*$": { "$ref": "#/definitions/maybeTypedAttributeValue" }
389. }
390. },
391. },
392. "node": {
393. "type": "object",
394. "description": "Referenced schema for describing an OmniGraph Compute Node",
395. "required": ["description"],
396. "additionalProperties": false,
397. "patternProperties": {
398. "categories": { "$ref": "#/definitions/categoriesName" },
399. "categoryType": { "$ref": "#/definitions/categoryTypeName" },
400. "description": { "type": ["string", "array"], "items": { "type": "string" } }
401. }
402. }
```json
{
"interfaces": {
"node": {
"exclude": {
"type": ["string", "array"],
"items": {
"type": "string"
}
},
"language": {
"$ref": "#/definitions/languageType"
},
"inputs": {
"$ref": "#/definitions/attributes"
},
"outputs": {
"$ref": "#/definitions/attributes"
},
"state": {
"$ref": "#/definitions/attributes"
},
"tests": {
"$ref": "#/definitions/tests"
},
"version": {
"type": "integer",
"default": 1
},
"metadata": {
"$ref": "#/definitions/metadata"
},
"memoryType": {
"$ref": "#/definitions/memoryTypeName"
},
"scheduling": {
"oneOf": [
{
"type": "string"
},
{
"$ref": "#/definitions/schedulingHints"
}
]
},
"uiName": {
"type": "string"
},
"icon": {
"$ref": "#/definitions/icon"
},
"tags": {
"$ref": "#/definitions/tags"
},
"^\\$": {
"$ref": "#/definitions/commentType"
}
}
},
"$limitation": "A valid file must have at least one node interface definition",
"minProperties": 1,
"additionalProperties": false,
"patternProperties": {
"^[A-Za-z_][A-Za-z0-9_]*$": {
"$ref": "#/definitions/node"
},
"^\\$": {
"$ref": "#/definitions/commentType"
}
}
}
``` | 57,344 |
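As a quick sanity check, the regular expressions in the schema above can be exercised directly with Python's `re` module. This sketch validates candidate attribute names against the `^[A-Za-z_][A-Za-z0-9_.:]*$` pattern from the `attributes` definition, and memory locations against `^(cpu|cuda|any)$` from `memoryTypeName`:

```python
import re

# Patterns copied verbatim from the schema definitions above.
ATTRIBUTE_NAME = re.compile(r"^[A-Za-z_][A-Za-z0-9_.:]*$")
MEMORY_TYPE = re.compile(r"^(cpu|cuda|any)$")


def is_valid_attribute_name(name: str) -> bool:
    """Alphanumeric with underscores, not starting with a number; '.' and ':' as separators."""
    return ATTRIBUTE_NAME.match(name) is not None


def is_valid_memory_type(location: str) -> bool:
    """Node- or attribute-level memory location must be cpu, cuda, or any."""
    return MEMORY_TYPE.match(location) is not None
```

For example, `inputs:value` and `state.counter_1` are accepted as attribute names, while a name starting with a digit is rejected.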
sliders.md | # Fields and Sliders
## Common Styling for Fields and Sliders
Here is a list of common styles you can customize on Fields and Sliders:
> background_color (color): the background color of the field or slider
> border_color (color): the border color if the field or slider background has a border
> border_radius (float): the border radius if the user wants to round the field or slider
> border_width (float): the border width if the field or slider background has a border
> padding (float): the distance between the text and the border of the field or slider
> font_size (float): the size of the text in the field or slider
## Field
There are fields for string, float and int models.
Except the common style for Fields and Sliders, here is a list of styles you can customize on Field:
> color (color): the color of the text
> background_selected_color (color): the background color of the selected text
### StringField
The StringField widget is a one-line text editor. A field allows the user to enter and edit a single line of plain text. It’s implemented using the model-delegate-view pattern and uses AbstractValueModel as the central component of the system.
The following example demonstrates how to connect a StringField and a Label. You can type anything into the StringField.
```python
from omni.ui import color as cl
field_style = {
"Field": {
"background_color": cl(0.8),
"border_color": cl.blue,
"background_selected_color": cl.yellow,
"border_radius": 5,
"border_width": 1,
"color": cl.red,
"font_size": 20.0,
"padding": 5,
},
"Field:pressed": {"background_color": cl.white, "border_color": cl.green, "border_width": 2, "padding": 8},
}
def setText(label, text):
"""Sets text on the label"""
# This function exists because lambda cannot contain assignment
label.text = f"You wrote '{text}'"
```
```python
with ui.HStack():
field = ui.StringField(style=field_style)
ui.Spacer(width=5)
label = ui.Label("", name="text")
field.model.add_value_changed_fn(lambda m, label=label: setText(label, m.get_value_as_string()))
ui.Spacer(width=10)
```
The following example demonstrates that the CheckBox's model determines the content of the Field. Editing the string field value also updates the value of the CheckBox. The field can only hold one of two values, either 'True' or 'False', because the model only supports those two possibilities.
```python
from omni.ui import color as cl
with ui.HStack():
field = ui.StringField(width=100, style={"background_color": cl.black})
checkbox = ui.CheckBox(width=0)
field.model = checkbox.model
```
In this example, the field can contain anything because the model accepts any string. The model returns a bool for the checkbox, and the checkbox is unchecked when the string is empty or 'False'.
```python
from omni.ui import color as cl
with ui.HStack():
field = ui.StringField(width=100, style={"background_color": cl.black})
checkbox = ui.CheckBox(width=0)
checkbox.model = field.model
```
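The conversion rule described above can be sketched in plain Python. This is only an illustration of the model behavior, not omni.ui's actual implementation: the checkbox reads the shared string model through a boolean accessor that treats an empty string or 'False' as unchecked:

```python
class IllustrativeStringModel:
    """Minimal stand-in for a string value model (illustrative only)."""

    def __init__(self, value: str = ""):
        self._value = value

    def set_value(self, value) -> None:
        self._value = str(value)

    def get_value_as_string(self) -> str:
        return self._value

    def get_value_as_bool(self) -> bool:
        # The checkbox is unchecked when the string is empty or 'False'.
        return self._value not in ("", "False")


checked = IllustrativeStringModel("anything").get_value_as_bool()  # any other string reads as True
```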
The Field widget doesn't keep the data itself, due to the model-delegate-view pattern. However, there are two ways to track the state of the widget: the first is to re-implement AbstractValueModel; the second is to use the callbacks of the model. Here is a minimal example of callbacks. When you start editing the field, you will see "Editing is started", and when you finish editing by pressing Enter, you will see "Editing is finished".
```python
def on_value(label):
label.text = "Value is changed"
def on_begin(label):
label.text = "Editing is started"
def on_end(label):
label.text = "Editing is finished"
label = ui.Label("Nothing happened", name="text")
model = ui.StringField().model
model.add_value_changed_fn(lambda m, l=label: on_value(l))
model.add_begin_edit_fn(lambda m, l=label: on_begin(l))
model.add_end_edit_fn(lambda m, l=label: on_end(l))
```
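Under the model-delegate-view pattern the widget subscribes to the model rather than storing state itself. The following pure-Python sketch (an illustration of the mechanics behind `add_value_changed_fn`, `add_begin_edit_fn`, and `add_end_edit_fn`, not omni.ui source) shows how a model can notify its subscribers:

```python
class MiniValueModel:
    """Illustrative value model that notifies subscribers, similar in spirit to AbstractValueModel."""

    def __init__(self, value=""):
        self._value = value
        self._value_changed_fns = []
        self._begin_edit_fns = []
        self._end_edit_fns = []

    def add_value_changed_fn(self, fn):
        self._value_changed_fns.append(fn)

    def add_begin_edit_fn(self, fn):
        self._begin_edit_fns.append(fn)

    def add_end_edit_fn(self, fn):
        self._end_edit_fns.append(fn)

    def begin_edit(self):
        for fn in self._begin_edit_fns:
            fn(self)

    def end_edit(self):
        for fn in self._end_edit_fns:
            fn(self)

    def set_value(self, value):
        if value != self._value:
            self._value = value
            # Tell every subscriber that the model changed
            for fn in self._value_changed_fns:
                fn(self)

    def get_value_as_string(self):
        return str(self._value)


# A label-like object tracking the state, as in the example above
state = {"text": "Nothing happened"}
model = MiniValueModel()
model.add_begin_edit_fn(lambda m: state.update(text="Editing is started"))
model.add_value_changed_fn(lambda m: state.update(text="Value is changed"))
model.add_end_edit_fn(lambda m: state.update(text="Editing is finished"))

model.begin_edit()
model.set_value("hello")
model.end_edit()
```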
## Multiline StringField
Property `multiline` of `StringField` allows users to press enter and create a new line. It’s possible to finish editing with Ctrl-Enter.
```python
from omni.ui import color as cl

field_style = {
    "Field": {
        "background_color": cl(0.8),
        "color": cl.black,
    },
    "Field:pressed": {"background_color": cl(0.8)},
}
with ui.Frame(style=field_style, height=200):
model = ui.SimpleStringModel("hello \nworld \n")
field = ui.StringField(model, multiline=True)
```
## FloatField and IntField
The following example shows how string field, float field and int field interact with each other. All three fields share the same default FloatModel:
```python
with ui.HStack(spacing=5):
ui.Label("FloatField")
ui.Label("IntField")
ui.Label("StringField")
with ui.HStack(spacing=5):
left = ui.FloatField()
center = ui.IntField()
right = ui.StringField()
center.model = left.model
right.model = left.model
ui.Spacer(height=5)
```
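Because all three widgets reference one model, each field is just a differently typed view of the same value. A plain-Python sketch of that idea (illustrative only, not the actual omni.ui FloatModel):

```python
class SharedFloatModel:
    """One value, exposed through float/int/string accessors, so several views stay in sync."""

    def __init__(self, value=0.0):
        self._value = float(value)

    def set_value(self, value):
        self._value = float(value)

    def get_value_as_float(self):
        return self._value

    def get_value_as_int(self):
        return int(self._value)

    def get_value_as_string(self):
        return str(self._value)


model = SharedFloatModel()
model.set_value(3.7)  # e.g. typed into the FloatField; all views read the same value
```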
## MultiField
MultiField widget groups the widgets that have multiple similar widgets to represent each item in the model. It’s handy to use them for arrays and multi-component data like float3, matrix, and color.
MultiField uses `Field` as the Type Selector. Therefore, the list of styles you can customize on MultiField is the same as for Field.
### MultiIntField
Each field value can be changed by editing:
```python
from omni.ui import color as cl
field_style = {
"Field": {
"background_color": cl(0.8),
"border_color": cl.blue,
"border_radius": 5,
"border_width": 1,
"color": cl.red,
"font_size": 20.0,
"padding": 5,
},
"Field:pressed": {
"background_color": cl.white,
"border_color": cl.green,
"border_width": 2,
"padding": 8,
},
}
ui.MultiIntField(0, 0, 0, 0, style=field_style)
```
### MultiFloatField
Use MultiFloatField to construct a matrix field:
```python
args = [1.0 if i % 5 == 0 else 0.0 for i in range(16)]
ui.MultiFloatField(*args, width=ui.Percent(50), h_spacing=5, v_spacing=2)
```
### MultiFloatDragField
Each field value can be changed by dragging:
```python
ui.MultiFloatDragField(0.0, 0.0, 0.0, 0.0)
```
## Sliders
The Sliders behave like traditional sliders that can be dragged and snapped to where you click. The value of the slider may or may not be shown on the slider, but it cannot be edited directly by clicking.
Except the common style for Fields and Sliders, here is a list of styles you can customize on Sliders:
> color (color): the color of the text
> secondary_color (color): the color of the handle in `ui.SliderDrawMode.HANDLE` draw_mode or the background color of the left portion of the slider in `ui.SliderDrawMode.DRAG` draw_mode
> secondary_selected_color (color): the color of the handle when selected, not useful when the draw_mode is FILLED since there is no handle drawn.
> draw_mode (enum): defines how the slider handle is drawn. There are three types of draw_mode.
- ui.SliderDrawMode.HANDLE: draw the handle as a knob at the slider position
- ui.SliderDrawMode.DRAG: the same as `ui.SliderDrawMode.HANDLE` for now
- ui.SliderDrawMode.FILLED: the handle is eventually the boundary between the `secondary_color` and `background_color`
Sliders with different draw_mode:
```python
from omni.ui import color as cl
with ui.VStack(spacing=5):
ui.FloatSlider(style={"background_color": cl(0.8), "secondary_color": cl(0.6), "color": cl(0.1), "draw_mode": ui.SliderDrawMode.HANDLE})
ui.FloatSlider(style={"background_color": cl(0.8), "secondary_color": cl(0.6), "color": cl(0.1), "draw_mode": ui.SliderDrawMode.DRAG})
ui.FloatSlider(style={"background_color": cl(0.8), "secondary_color": cl(0.6), "color": cl(0.1), "draw_mode": ui.SliderDrawMode.FILLED})
```
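In `FILLED` mode the handle is just the boundary between `secondary_color` and `background_color`, so its position is the normalized value within the slider range. A small sketch of that mapping (assumed behavior, for illustration):

```python
def fill_boundary(value, min_value, max_value):
    """Return the normalized [0, 1] position of the secondary/background color boundary."""
    fraction = (value - min_value) / (max_value - min_value)
    return min(1.0, max(0.0, fraction))  # clamp for values outside the slider range
```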
### FloatSlider
Default slider whose range is between 0 to 1:
```python
ui.FloatSlider()
```
With defined Min/Max whose range is between min to max:
```python
ui.FloatSlider(min=0, max=10)
```
With defined Min/Max from the model. Notice the model allows the value range between 0 to 100, but the FloatSlider has a more strict range between 0 to 10.
```python
model = ui.SimpleFloatModel(1.0, min=0, max=100)
# The widget range is stricter than the model range
ui.FloatSlider(model, min=0, max=10)
```
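The two ranges compose: the model accepts values within its own permissive range, while the slider only lets the user drag within the stricter widget range. A pure-Python sketch of that layering (an illustrative assumption, not omni.ui code):

```python
def clamp(value, low, high):
    return max(low, min(high, value))


class BoundedFloatModel:
    """Model with its own range; assumed here to clamp values it is given."""

    def __init__(self, value, min_value, max_value):
        self.min, self.max = min_value, max_value
        self._value = clamp(value, min_value, max_value)

    def set_value(self, value):
        self._value = clamp(value, self.min, self.max)

    def get_value_as_float(self):
        return self._value


class StricterSliderView:
    """Widget-level range further restricts what the user can drag to."""

    def __init__(self, model, min_value, max_value):
        self.model = model
        self.min, self.max = min_value, max_value

    def drag_to(self, value):
        # User interaction is limited by the widget range before reaching the model
        self.model.set_value(clamp(value, self.min, self.max))


model = BoundedFloatModel(1.0, 0, 100)
slider = StricterSliderView(model, 0, 10)
slider.drag_to(50)   # dragging past the widget max stops at 10
model.set_value(50)  # but the model itself still accepts values up to 100
```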
```python
from omni.ui import color as cl
with ui.HStack(width=200):
ui.Spacer(width=20)
with ui.VStack():
ui.Spacer(height=5)
ui.FloatSlider(
min=-180,
max=180,
style={
"color": cl.blue,
"background_color": cl(0.8),
"draw_mode": ui.SliderDrawMode.HANDLE,
"secondary_color": cl.red,
"secondary_selected_color": cl.green,
"font_size": 20,
"border_width": 3,
"border_color": cl.black,
"border_radius": 10,
"padding": 10,
})
ui.Spacer(height=5)
ui.Spacer(width=20)
```
The example above shows a slider with styles and rounded corners; a FILLED-mode slider can be styled the same way. The next example uses a transparent text color, with the value driven by a separate float field:
```python
from omni.ui import color as cl
with ui.HStack():
# a separate float field
field = ui.FloatField(height=15, width=50)
# a slider using field's model
ui.FloatSlider(
min=0,
max=20,
step=0.25,
model=field.model,
style={
"color": cl.transparent,
"background_color": cl(0.3),
"draw_mode": ui.SliderDrawMode.HANDLE
}
)
# default value
field.model.set_value(12.0)
```
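The `step` attribute snaps dragged values to multiples of the step from the minimum. A sketch of that quantization (assumed behavior, for illustration):

```python
def snap_to_step(value, min_value, max_value, step):
    """Quantize a dragged value to the nearest step multiple, clamped to the slider range."""
    steps = round((value - min_value) / step)
    snapped = min_value + steps * step
    return max(min_value, min(max_value, snapped))
```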
This produces a slider with a transparent value text. Notice the use of the `step` attribute.
### IntSlider
Default slider whose range is between 0 to 100:
```python
ui.IntSlider()
```
With defined Min/Max whose range is between min to max. Note that the handle width is much wider.
```python
ui.IntSlider(min=0, max=20)
```
With style:
```python
from omni.ui import color as cl
with ui.HStack(width=200):
ui.Spacer(width=20)
with ui.VStack():
ui.Spacer(height=5)
ui.IntSlider(
min=0,
max=20,
style={
"background_color": cl("#BBFFBB"),
"color": cl.purple,
"draw_mode": ui.SliderDrawMode.HANDLE,
"secondary_color": cl.green,
"secondary_selected_color": cl.red,
"font_size": 14.0,
"border_width": 3,
"border_color": cl.green,
"padding": 5,
}
).model.set_value(4)
ui.Spacer(height=5)
ui.Spacer(width=20)
```
## Drags
The Drags are very similar to Sliders, but behave more like Fields: you can double-click to edit the value, but they can also be dragged to increase or decrease the value.
Apart from the common styles for Fields and Sliders, Drags support the same additional styles as Sliders.
### FloatDrag
- Default float drag whose range is -inf and +inf
- With defined Min/Max whose range is between min to max:
```python
ui.FloatDrag(min=-10, max=10, step=0.1)
```
- With styles and rounded shape:
```python
from omni.ui import color as cl
with ui.HStack(width=200):
ui.Spacer(width=20)
with ui.VStack():
ui.Spacer(height=5)
ui.FloatDrag(
min=-180,
max=180,
style={
"color": cl.blue,
"background_color": cl(0.8),
"secondary_color": cl.red,
"font_size": 20,
"border_width": 3,
"border_color": cl.black,
"border_radius": 10,
"padding": 10,
}
)
ui.Spacer(height=5)
ui.Spacer(width=20)
```
### IntDrag
- Default int drag whose range is -inf and +inf
- With defined Min/Max whose range is between min to max:
```python
ui.IntDrag(min=-10, max=10)
```
- With styles and rounded slider:
```python
from omni.ui import color as cl
with ui.HStack(width=200):
ui.Spacer(width=20)
with ui.VStack():
ui.Spacer(height=5)
ui.IntDrag(
min=-180,
max=180,
style={
"color": cl.blue,
"background_color": cl(0.8),
"secondary_color": cl.purple,
"font_size": 20,
"border_width": 4,
"border_color": cl.black,
"border_radius": 20,
"padding": 5,
}
)
ui.Spacer(height=5)
ui.Spacer(width=20)
```
## ProgressBar
A ProgressBar is a widget that indicates the progress of an operation.
Except the common style for Fields and Sliders, here is a list of styles you can customize on ProgressBar:
> color (color): the color of the filled portion of the bar, indicating the progress value as a portion of the overall value
> secondary_color (color): the color of the text indicating the progress value
In the following example, it shows how to use ProgressBar and override the style of the overlay text.
```python
from omni.ui import color as cl
class CustomProgressValueModel(ui.AbstractValueModel):
"""An example of custom float model that can be used for progress bar"""
def __init__(self, value: float):
super().__init__()
self._value = value
def set_value(self, value):
"""Reimplemented set"""
try:
value = float(value)
except ValueError:
value = None
if value != self._value:
# Tell the widget that the model is changed
self._value = value
self._value_changed()
def get_value_as_float(self):
return self._value
def get_value_as_string(self):
return "Custom Overlay"
with ui.VStack(spacing=5):
# Create ProgressBar
first = ui.ProgressBar()
# Range is [0.0, 1.0]
first.model.set_value(0.5)
second = ui.ProgressBar()
second.model.set_value(1.0)
# Overrides the overlay of ProgressBar
model = CustomProgressValueModel(0.8)
third = ui.ProgressBar(model)
third.model.set_value(0.1)
# Styling its color
fourth = ui.ProgressBar(style={"color": cl("#0000dd")})
fourth.model.set_value(0.3)
# Styling its border width
ui.ProgressBar(style={"border_width": 2, "border_color": cl("#dd0000"), "color": cl("#0000dd")}).model.set_value(0.7)
# Styling its border radius
ui.ProgressBar(style={"border_radius": 100, "color": cl("#0000dd")}).model.set_value(0.6)
# Styling its background color
ui.ProgressBar(style={"border_radius": 10, "background_color": cl("#0000dd")}).model.set_value(0.6)
# Styling the text color
ui.ProgressBar(style={"ProgressBar": {"border_radius": 30, "secondary_color": cl("#00dddd"), "font_size": 20}}).model.set_value(0.6)
# Two progress bars in a row with padding
with ui.HStack():
ui.ProgressBar(style={"color": cl("#0000dd"), "padding": 100}).model.set_value(1.0)
ui.ProgressBar().model.set_value(0.0)
```
## Tooltip
All widgets can be augmented with a tooltip, which takes two forms: either a simple text label, or a callback that builds arbitrary widgets, set with the `tooltip_fn=` argument or with `widget.set_tooltip_fn()`. You can create a tooltip for any widget.
Except the common style for Fields and Sliders, here is a list of styles you can customize on Tooltip:
> color (color): the color of the text of the tooltip.
> margin_width (float): the width distance between the tooltip content and the parent widget defined boundary
> margin_height (float): the height distance between the tooltip content and the parent widget defined boundary
Here is a simple label tooltip with style when you hover over a button:
```python
from omni.ui import color as cl
tooltip_style = {
"Tooltip": {
"background_color": cl("#DDDD00"),
"color": cl(0.2),
"padding": 10,
"border_width": 3,
"border_color": cl.red,
"font_size": 20,
"border_radius": 10
}
}
ui.Button("Simple Label Tooltip", name="tooltip", width=200, tooltip="I am a text ToolTip", style=tooltip_style)
```
You can instead provide a callback function as the tooltip, where you can create and lay out any types of widgets you like. This makes the tooltip very illustrative, since it can contain an Image, Field, Label, and so on.
```python
from omni.ui import color as cl
def create_tooltip():
with ui.VStack(width=200, style=tooltip_style):
with ui.HStack():
ui.Label("Fancy tooltip", width=150)
ui.IntField().model.set_value(12)
ui.Line(height=2, style={"color":cl.white})
with ui.HStack():
ui.Label("Anything is possible", width=150)
ui.StringField().model.set_value("you bet")
image_source = "resources/desktop-icons/omniverse_512.png"
ui.Image(
image_source,
width=200,
height=200,
alignment=ui.Alignment.CENTER,
style={"margin": 0},
)
tooltip_style = {
"Tooltip": {
"background_color": cl(0.2),
"border_width": 2,
"border_radius": 5,
"margin_width": 5,
"margin_height": 10
}
}
ui.Button("Callback function Tooltip", width=200, style=tooltip_style, tooltip_fn=create_tooltip)
```
You can define a fixed position for the tooltip:
```python
ui.Button("Fixed-position Tooltip", width=200, tooltip="Hello World", tooltip_offset_y=22)
```
You can also define a random position for the tooltip:
```python
import random
button = ui.Button("Random-position Tooltip", width=200, tooltip_offset_y=22)
def create_tooltip(button=button):
button.tooltip_offset_x = random.randint(0, 200)
ui.Label("Hello World")
button.set_tooltip_fn(create_tooltip)
``` | 18,169 |
sounds-great-how-do-i-use-it_Overview.md | # Overview
## Our Mission
To serve as a framework that democratizes and instills composability into computations in Omniverse and NVIDIA while ensuring performance, hyper-scalability, and seamless integration with USD.
## What Is OmniGraph, and Why Do I Want It?
To a user, OmniGraph is a visual scripting language that provides the ability to implement actions and reactions in an otherwise static Omniverse world.
To a developer, OmniGraph is a framework that provides the ability to unify computation models that include such diverse approaches as traditional animation timelines, event driven state machines, inferencing algorithms with DNNs, NeRF scene generators, and training artificial intelligence algorithms.
To everyone, OmniGraph provides a scalable architecture where a description of a set of computations in graph form can perform well on an individual machine and make use of the full power of a multiple-node data center without changing the representation of the graph.
History has shown that the most flexible systems are also the most useful so OmniGraph has been architected from the ground up to provide a compute framework where clients can define their own functionality and have it interact with every other component as though they were built for each other. The framework is flexible enough to address all of the above range of use cases and more, as well as handling a large range of compute granularity. From a sub-millisecond math operation to an hours-long training session of a neural network the framework connects them all.
To find out more about the bits and pieces that comprise OmniGraph have a look at the [Core Concepts](#omnigraph-core-concepts). If you’ve been exposed to graph-based programs such as Maya, Houdini, 3D Studio Max, or Blender then a lot of the concepts will be very familiar.
## Sounds Great, How Do I Use It?
As you might suspect from the wide ranges of uses for OmniGraph there are an equally wide range of places for you to start depending on your expertise level and what you are trying to do. Here is a set of questions that will help you find your starting point:
| Question | Suggestion |
|----------|------------|
| I’m still not sure what this OmniGraph thing is | Have a look at the detailed [Core Concepts](#omnigraph-core-concepts) for a better idea of the pieces involved, or watch the [Getting Started Tutorials](#). |
| I’m a user running a Kit-based app and just want to use what’s already there | The [User Interfaces](#omnigraph-user-interfaces) guide will give you a good feel for how to find the OmniGraph features in the app. In particular, to create and wire up nodes in a graph you’ll be interested in the [Graph Editors](#omnigraph-graph-editors). If you have a little bit of Python programming knowledge you can make use of the **Script Node** to write your own node behaviors without the need to write a full-blown node. |
| I’m a user with some Python knowledge and want to build a graph through scripts | We’ve provided a “one-stop” class that handles most of the operations you might want to perform with OmniGraph. The **Controller Class** amalgamates several different facets of OmniGraph manipulation in a single class, including **topology changes**, **flexible object lookup**, **node structure changes**, and **attribute value reading and writing**. |
| I’m a Python programmer and I’d like to write my own nodes | If you just want to try out some simple ideas to see how they work then you can write a node directly in the application by using the general **Python script node**. For script-based node creation, the **AutoNode decorators** let you quickly turn a class or function into a first-class node. Once you’re ready for something more serious you can move on to creating an entire **Python node** from start to finish. |
| I’m a C++ or CUDA programmer and I’d like to write my own nodes | As a C++ programmer we can assume you are familiar with the concept of a build system, compiler, and so forth so you won’t be daunted to know that you’ll have to use those in order to write your own node. The details specific to development of OmniGraph nodes in a Kit environment can be found in **Creating C++ Nodes**. If you really want to get down to the “bare metal” of C++ development then you can peruse through the details of our **C++ ABI**, which all code, including the generated OmniGraph node code, uses in order to maximize compatibility across platforms and versions. |
| I’d like to know what the different options are for writing nodes | If you want a quick start to writing Python nodes you can start with **AutoNode - Node Type Definition**. |
| I have the basics - now I would like to see a bunch of nodes as examples | For instructional purposes there is a set of curated **Walkthrough Tutorial Nodes** that illustrate methods of accessing various types of OmniGraph data, with accompanying explanations. If you just want to see what others have written you may find the online node library reference interesting. |
| I’ve heard about Action Graphs and want to know what they are | You can start with this **Action Graph** overview. There you will find some links to tutorials and demonstrations that use the **Action Graph**. |
| I’d like to look through the How-To guides to see what’s there | You can peruse through the current set of task-based instructions in the **OmniGraph How-To Guide**. Check back often as these will be updated regularly with new workflows. |
| Got any FAQs for me? | The OmniGraph How-To Guide contains very detailed instructions on specific tasks, but if you just have a simple question you can check the Frequently Asked Questions page to see if it is answered there. You can also try the search bar on this page if your question has a very specific term in it. Searching for a generic term like **Attribute** will get you so many results that you will have a hard time finding your answer, but something more specific like **Hydra** will give much better results. |
String.md | # Omni String
## Why omni::string?
Strings are frequently used in Carbonite and Omniverse interfaces, however there is not an ABI safe string implementation available for interfaces to use. This leads to `char*` and `size` parameters being used, which are both error prone and cumbersome to use. For example:
```cpp
// This creates a URL from the pieces provided.
// "bufferSize" is an in-out parameter; set it to the size of the buffer before calling this function
// and it will be set to the actual size when the function returns. If the size required is more
// than the size provided, this function returns null, otherwise it returns 'buffer'.
char* omniClientMakeUrl(struct OmniClientUrl const* url, char* buffer, size_t* bufferSize);
```
This interface requires a confusing combination of `char*` and `size_t*` parameters. It requires users to preallocate a character array which may or may not be large enough, and it may then return a `nullptr`, which could lead to program crashes if users don't check the return value. With `omni::string`, this interface could be simplified to:
```cpp
// This creates a URL from the pieces provided.
omni::string omniClientMakeUrl(struct OmniClientUrl const* url);
```
## Why not std::string?
Why go through the trouble of implementing and maintaining our own string class instead of using `std::string`? There are two major reasons that `std::string` is not suitable for use in Omniverse interfaces. First, `std::string` is not ABI safe. The ABI of `std::string` was broken in C++11, and is not guaranteed to not be broken again in the future. Second, `std::string` is not safe to pass across DLL boundaries due to memory allocation. A `std::string` that is created with allocated storage in one module should not be passed to another module which may free the memory.
For these reasons, `omni::string` was created to be the ABI-safe, DLL-boundary-safe string class in Omniverse.
## Details
### Small String Optimization
`omni::string` provides small string optimization (SSO), so that an allocation is not required for small strings. In `omni::string`, strings up to 31 characters long will be stored in the string’s internal buffer rather than in allocated storage. Note that `omni::string` is still only 32 bytes in size, so users do not have to pay a penalty in stack space to get this optimization. Profiling typical workflows indicates that close to 50% of strings used in Kit will be able to take advantage of being small string optimized.
### No Templates
Unlike the Standard Library which provides a templated `std::basic_string<CharT, Traits, Allocator>` class and then typedefs `std::string`, `omni::string` has no template arguments. This was done primarily for ABI safety and complexity concerns. Omniverse only uses UTF-8 characters, so the `CharT` template option was removed because `char` is the only type that should be used with `omni::string`. Similarly then, there was no need for a `Traits` type. Finally, to maintain DLL boundary safety, `omni::string` uses the allocation methods provided by `Memory`, so the `Allocator` template parameter was also omitted.
A template parameter for changing the size of the SSO buffer was also originally considered, but it was ultimately rejected. Such a parameter would allow the ABI of an interface to be broken simply by changing the template argument, which could lead to difficult-to-diagnose issues. It also led to a more complex implementation, and it was decided that the benefits did not outweigh the costs.
### std::string-compatible Interface
`omni::string` provides an interface as close to that of `std::string` as possible. All member functions and non-member functions provided by `std::string` up to C++20 are available for `omni::string`.
This should make it easy and familiar to use. There are, however, a few subtle differences.
First, in C++17, an overload was added to most functions that took a generic `T` parameter and implicitly converted it to a `std::string_view`, and performed the operation with that `std::string_view`. `omni::string` does not currently implement these overloads because Carbonite headers must be compatible with C++14, which does not have the `std::string_view` type. These functions can be added in the future.
Second, most functions in `std::string` became `constexpr` in C++20. `omni::string` makes an effort to make as many functions as possible `constexpr`, but it cannot match `std::string` in this regard. `std::string` takes advantage of C++20 features that allow transient allocations via `operator new` to be `constexpr`, which allows any `std::string` method that may grow the string, and thus trigger a reallocation, to be `constexpr`. Because `omni::string` uses the `Memory` management functions, which are not `constexpr`, it is unable to make functions that may grow the string `constexpr`.
## Python Bindings
`omni::string` is only intended to be used in C++, and therefore will not have Python bindings generated for the class. Python users should continue to use Python’s native strings. Omni.bind will be extended to automatically convert Python strings to `omni::string` for interfaces that use `omni::string` and have Python bindings. This will allow Python users to seamlessly interact with interfaces using `omni::string` without requiring bindings for the `omni::string` class itself. | 5,406 |
structhoops_exchange_c_a_d_converter_spec.md | # hoopsExchangeCADConverterSpec
Defined in [8fa04669143f4cb0/_build/target-deps/hoops_exchange_cad_converter_release/hoops_exchange_cad_converter/include/hoops_reader/CADConverterSpec.h](8fa04669143f4cb0/_build/target-deps/hoops_exchange_cad_converter_release/hoops_exchange_cad_converter/include/hoops_reader/CADConverterSpec.h)
Default configuration settings for HOOPS Exchange CAD.
## Fields
- **Param sConfigFilePath (str)**: Configuration file path
- **Param sFilePathIn (str)**: Path to input model head file
- **Param sFilePathOut (str)**: Path to the output usd file
- **Param sFolderPathOut (str)**: Path to output destination folder
- **Param sFileNameOut (str)**: Name of output USD file (the head file only in the case of multiple output files)
- **Param bInstancing (bool)**: If true, enable instancing
- **Param bGlobalXforms (bool)**: When instancing = false, this flag controls whether globalXforms are composited. If false local transforms are applied
- **Param bConvertHidden (bool)**: If false, ignore hidden or non-visible parts during conversion. If true, convert all parts and maintain visibility settings.
- **Param bOptimize (bool)**: Run UJITSO optimization on completed usd files
- **Param bDedup (bool)**: Deduplicate mesh vertices and normals (welds mesh)
- **Param iTessLOD (int)**: Tessellation Level of Detail (LOD) presets: 0=kA3DTessLODExtraLow, 1=kA3DTessLODLow, 2=kA3DTessLODMedium, 3=kA3DTessLODHigh, 4=kA3DTessLODExtraHigh
- **Param bAccurateSurfaceCurvatures (bool)**: Consider surface curvature to control triangles elongation direction
- **Param sUsdSuffix (UsdSuffix)**: Save file(s) in following formats: “usd”, “usda”, “usdc”
- **Param bUseNormals (bool)**: If true then we pass normals to USD. if false, then we do not.
- **Param bUseMaterials (bool)**: If true then use specified modes of materials. if false, then use only basic display colors
- **Param bReportProgress (bool)**: If true then we report import/export progress
- **Param bReportProgressFreq (int)**: Progress reporting frequency in Hz.
- **Param bUseCurrentStage (bool)**: Use currently opened USD.
### Public Functions

- `inline void parseArgs(const pxr::SdfLayer::FileFormatArguments &args)`
- `inline pxr::SdfLayer::FileFormatArguments toArgs() const`
### Public Members

- `std::string sConfigFilePath = ""`
- `std::string sFilePathIn = ""`
- `std::string sFilePathOut = ""`
- `std::string sFolderPathOut = ""`
- `std::string sFileNameOut = ""`
- `bool bInstancing = true`
- `bool bGlobalXforms = false`
- `bool bConvertHidden = false`
- `bool bOptimize = true`
- `bool bDedup = true`
- `int iTessLOD = 2`
- `bool bAccurateSurfaceCurvatures = true`
- `std::string sUsdSuffix = "usd"`
- `bool bUseNormals = true`
- `bool bUseMaterials = true`
- `bool bReportProgress = true`
- `double bReportProgressFreq = 4.0`
- `bool bUseCurrentStage = false`
| 7,025 |
structhoops_exchange_c_a_d_converter_spec_description.md | # hoopsExchangeCADConverterSpecDescription
Defined in [8fa04669143f4cb0/_build/target-deps/hoops_exchange_cad_converter_release/hoops_exchange_cad_converter/include/hoops_reader/CADConverterSpec.h](#c-a-d-converter-spec-8h)
```cpp
struct hoopsExchangeCADConverterSpecDescription {
    const char *classDescription = "HoopsExchangeCADConverter Options Class.";
    const char *sConfigFilePath = "(str): Configuration file path";
    const char *sFilePathIn = "(str): Path to input model head file";
    const char *sFilePathOut = "(str): Path to output usd file";
    const char *sFolderPathOut = "(str): Path to output destination folder";
    const char *sFileNameOut = "(str): Name of output USD file (the head file only in the case of multiple output files)";
    const char *bInstancing = "(bool): If true, enable instancing";
    const char *bGlobalXforms = "(bool): When instancing = false, this flag controls whether globalXforms are composited. If false local transforms are applied";
    const char *bConvertHidden = "(bool): If false, ignore hidden or non-visible parts during conversion. If true, convert all parts and maintain visibility settings";
    const char *bOptimize = "(bool): Run UJITSO optimization on completed usd files";
    const char *bDedup = "(bool): Deduplicate mesh vertices and normals (welds mesh)";
    const char *iTessLOD = "(int): Tessellation Level of Detail (LOD) presets: 0=kA3DTessLODExtraLow, 1=kA3DTessLODLow, 2=kA3DTessLODMedium, 3=kA3DTessLODHigh, 4=kA3DTessLODExtraHigh";
    const char *bAccurateSurfaceCurvatures = "(bool): Consider surface curvature to control triangles elongation direction";
    const char *sUsdSuffix = "(UsdSuffix): Save file(s) in following formats: 'usd', 'usda', 'usdc'";
    const char *bUseNormals = "(bool): If true then we pass normals to USD. if false, then we do not.";
    const char *bUseMaterials = "(bool): If true then use specified modes of materials. if false, then use only basic display colors";
    const char *bReportProgress = "(bool): If true then we report import/export progress";
    const char *bReportProgressFreq = "(int): Progress reporting frequency in Hz.";
    const char *bUseCurrentStage = "(bool): Use currently opened USD.";
};
```
| 7,432 |
structs.md | # Structs
## Structs
- **hoopsExchangeCADConverterSpec**: Default configuration settings for HOOPS Exchange CAD.
- **hoopsExchangeCADConverterSpecDescription** | 161 |
style_Overview.md | # Overview — Omniverse Kit 2.25.15 documentation
## Overview
The Omniverse UI Framework is the UI toolkit for creating beautiful and flexible graphical user interfaces in Kit extensions. It provides a list of basic UI elements as well as a layout system which allows users to create a visually rich user interface. Widgets are mostly a combination of the basic shapes, images, or text. They are provided as stepping stones for an interactive and dynamic user interface that receives user input, triggers callbacks, and creates data models. The widgets follow the Model-Delegate-View (MDV) pattern, which highlights a separation between the data and the display logic. Users can find all the omni::ui properties and APIs for each UI element, in both C++ and Python, in this documentation.
### Example
To demonstrate how each UI element works, how to use layouts to construct more complicated widgets, and how MDV works in our UI system, we provide an interactive extension called `omni.example.ui`. Users can easily enable it through the Kit extension registry. It provides extensive interactive and live examples that users can explore to understand how Omniverse UI works.
### Style
To build customized widgets and serve different user needs, all the Omniverse UI elements support style overrides to make these widgets visually pleasant and functionally indicative of user interactions. Each widget has its own style attributes to tweak, and they all follow the same syntax rules. We provide an interactive extension called `omni.kit.documentation.ui.style`. Just like `omni.example.ui`, users can enable it through the Kit extension registry. It lists all the supported style attributes for each UI element and provides interactive examples where users can explore the very flexible styling system that allows deep customization of the final look of the application.
styling.md | # The Style Sheet Syntax
omni.ui Style Sheet rules are almost identical to those of HTML CSS. It applies to the style of all omni ui elements.
Style sheets consist of a sequence of style rules. A style rule is made up of a selector and a declaration. The selector specifies which widgets are affected by the rule. The declaration specifies which properties should be set on the widget. For example:
```python
with ui.VStack(width=0, style={"Button": {"background_color": cl("#097eff")}}):
ui.Button("Style Example")
```
In the above style rule, Button is the selector, and {"background_color": cl("#097eff")} is the declaration. The rule specifies that Button should use blue as its background color.
## Selector
There are three types of selectors: type selector, name selector, and state selector. They are structured as:

`Type Selector::Name Selector:State Selector`

e.g. `Button::okButton:hovered`
### Type Selector
`Button` is the type selector; it matches the `ui.Button` type.
### Name Selector
`okButton` is the name selector, which selects all Button instances whose object name is okButton. It is separated from the type selector by `::`.
### State Selector
`hovered` is the state selector, which matches all Button instances whose state is hovered. It is separated from the other selectors by `:`.
When type, name, and state selectors are used together, the rule defines the style of all Button-typed instances named `okButton` that are hovered, while `Button:hovered` defines the style of all Button-typed instances that are hovered.
These are the states recognized by omni.ui:
- hovered : the mouse in the widget area
- pressed : the mouse is pressing in the widget area
- selected : the widget is selected
- disabled : the widget is disabled
- checked : the widget is checked
- drop : the rectangle is accepting a drop. For example, style = {"Rectangle:drop" : {"background_color": cl.blue}} meaning if the drop is acceptable, the rectangle is blue.
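Putting the three selector kinds together, one style dictionary can mix them freely. The sketch below uses plain `0xAABBGGRR` integer colors so it stands alone; in Kit you would normally build these with `omni.ui.color` (`cl`) as in the examples that follow, and the widget name `okButton` is just an illustration:

```python
# Hypothetical style dict combining type, name, and state selectors.
style = {
    "Button": {"background_color": 0xFF303030},                    # type: every Button
    "Button::okButton": {"background_color": 0xFFFF7E09},          # type + name
    "Button::okButton:hovered": {"background_color": 0xFFFFAE5A},  # type + name + state
    "Button:pressed": {"background_color": 0xFF995400},            # type + state: any pressed Button
}
```

More specific rules win, so a hovered Button named `okButton` gets the `Button::okButton:hovered` color rather than the generic `Button` one.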
## Style Override
### Omit the Selector
It’s possible to omit the selector and override the property in all the widget types.
In this example, the style is set to VStack. The style will be propagated to all the widgets in VStack including VStack itself. Since only `background_color` is in the style, only the widgets which have `background_color` as the style will have the background color set. For VStack and Label which don’t have `background_color`, the style is ignored. Button and FloatField get the blue background color.
```python
from omni.ui import color as cl
with ui.VStack(width=400, style={"background_color": cl("#097eff")}, spacing=5):
ui.Button("One")
ui.Button("Two")
ui.FloatField()
ui.Label("Label doesn't have background_color style")
```
### Style overridden with name and state selector
In this example, we set the "Button" style for all the buttons, then override different buttons with name-selector styles, e.g. "Button::one" and "Button::two". Furthermore, we also set different styles for Button::one when pressed or hovered, e.g. "Button::one:hovered" and "Button::one:pressed", which override the "Button::one" style when the button is pressed or hovered.
```python
from omni.ui import color as cl
style1 = {
"Button": {"border_width": 0.5, "border_radius": 0.0, "margin": 5.0, "padding": 5.0},
"Button::one": {
"background_color": cl("#097eff"),
"background_gradient_color": cl("#6db2fa"),
"border_color": cl("#1d76fd"),
},
"Button.Label::one": {"color": cl.white},
"Button::one:hovered": {"background_color": cl("#006eff"), "background_gradient_color": cl("#5aaeff")},
"Button::one:pressed": {"background_color": cl("#6db2fa"), "background_gradient_color": cl("#097eff")},
"Button::two": {"background_color": cl.white, "border_color": cl("#B1B1B1")},
"Button.Label::two": {"color": cl("#272727")},
"Button::three:hovered": {
"background_color": cl("#006eff"),
"background_gradient_color": cl("#5aaeff"),
"border_color": cl("#1d76fd"),
},
"Button::four:pressed": {
"background_color": cl("#6db2fa"),
"background_gradient_color": cl("#097eff"),
"border_color": cl("#1d76fd"),
},
}
with ui.HStack(style=style1):
ui.Button("One", name="one")
ui.Button("Two", name="two")
ui.Button("Three", name="three")
ui.Button("Four", name="four")
ui.Button("Five", name="five")
```
### Style override to different levels of the widgets
It’s possible to assign any style override to any level of the widgets. It can be assigned to both parents and children at the same time.
In this example, we have style_system which will be propagated to all buttons, but buttons with its own style will override the style_system.
```python
from omni.ui import color as cl
style_system = {
"Button": {
"background_color": cl("#E1E1E1"),
"border_color": cl("#ADADAD"),
"border_width": 0.5,
"border_radius": 3.0,
"margin": 5.0,
"padding": 5.0,
},
"Button.Label": {
"color": cl.black,
},
"Button:hovered": {
"background_color": cl("#e5f1fb"),
"border_color": cl("#0078d7"),
},
"Button:pressed": {
"background_color": cl("#cce4f7"),
"border_color": cl("#005499"),
"border_width": 1.0
},
}
with ui.HStack(style=style_system):
ui.Button("One")
ui.Button("Two", style={"color": cl("#AAAAAA")})
ui.Button("Three", style={"background_color": cl("#097eff"), "background_gradient_color": cl("#6db2fa")})
ui.Button(
"Four", style={":hovered": {"background_color": cl("#006eff"), "background_gradient_color": cl("#5aaeff")}}
)
ui.Button(
"Five",
style={"Button:pressed": {"background_color": cl("#6db2fa"), "background_gradient_color": cl("#097eff")}},
)
```
## Customize the selector type using style_type_name_override
What if the user has a customized widget which is not a standard omni.ui one — how should its type selector be defined? In this case, we can use `style_type_name_override` to override the type name.

The `name` attribute still provides the name selector, and state selectors can be added as usual.

Another use case is when we have a giant list of the same typed widgets, for example `Button`, but some of the Buttons are in the main window and some are in a pop-up window, and we want to differentiate them for easy look-up. Instead of giving all of them the same type selector `Button` and distinguishing them only by name selectors, we can override the type name for the main window buttons as `WindowButton` and the pop-up window buttons as `PopupButton`. This groups the style sheet into categories and makes restyling and debugging much easier.
Here is an example where we use `style_type_name_override` to override the style type name.
```python
from omni.ui import color as cl
style = {
    "WindowButton::one": {"background_color": cl("#006eff")},
    "WindowButton::one:hovered": {"background_color": cl("#006eff"), "background_gradient_color": cl("#FFAEFF")},
    "PopupButton::two": {"background_color": cl("#6db2fa")},
    "PopupButton::two:hovered": {"background_color": cl("#6db2fa"), "background_gradient_color": cl("#097eff")},
}
```
```python
with ui.HStack(width=400, style=style, spacing=5):
ui.Button("Open", style_type_name_override="WindowButton", name="one")
ui.Button("Save", style_type_name_override="PopupButton", name="two")
```
## Default style override
From the above examples, we know that if we want to propagate a style to all children, we just need to set the style on the parent widget, but this rule doesn't apply to windows. A style set on a window will not propagate to its widgets. If we want to propagate the style to ui.Window and its widgets, we should set the default style with:
```python
ui.style.default = {
"background_color": cl.blue,
"border_radius": 10,
"border_width": 5,
"border_color": cl.red,
}
```
## Debug Color
All shapes or widgets can be styled to use a debug color that enables you to visualize their frame. It is very useful when debugging complicated ui layout with overlaps.
Here we use red as the debug_color to indicate the label widget:
```python
from omni.ui import color as cl
style = {"background_color": cl("#DDDD00"), "color": cl.white, "debug_color": cl("#FF000055")}
ui.Label("Label with Debug", width=200, style=style)
```
If several widgets are adjacent, we can use `debug_color` in the `hovered` state to differentiate a widget from the others.
```python
style = {
"Label": {"padding": 3, "background_color": cl("#DDDD00"), "color": cl.white},
"Label:hovered": {"debug_color": cl("#00FFFF55")},
}
with ui.HStack(width=500, style=style):
ui.Label("Label 1", width=50)
ui.Label("Label 2")
ui.Label("Label 3", width=100, alignment=ui.Alignment.CENTER)
ui.Spacer()
ui.Label("Label 3", width=50)
```
## Visibility
This property holds whether the shape or widget is visible. Invisible shape or widget is not rendered, and it doesn’t take part in the layout. The layout skips it.
A button that makes all buttons invisible, and another button brings them all back.

```python
buttons = []

def set_visible(value):
    # Invisible widgets are not rendered and take no part in the layout
    for button in buttons:
        button.visible = value

with ui.HStack():
    for i in range(3):
        button = ui.Button(f"Button {i}")
        buttons.append(button)
    ui.Button("Invisible", clicked_fn=lambda: set_visible(False))
    ui.Button("Visible", clicked_fn=lambda: set_visible(True))
```
subclass-preferencebuilder-to-customize-the-ui_Overview.md | # Overview — Omniverse Kit 1.5.1 documentation
## Overview
omni.kit.window.preferences has a window where users customize settings.
NOTE: This document refers to pages; a page is a `PreferenceBuilder` subclass that has a name and builds the UI. See `MyExtensionPreferences` below.
## Adding preferences to your extension
To create your own preference’s pane follow the steps below:
### 1. Register hooks/callbacks so the page can be added/removed from omni.kit.window.preferences as required
```python
def on_startup(self):
    ....
    manager = omni.kit.app.get_app().get_extension_manager()
    self._preferences_page = None
    self._hooks = []
    # Register hooks/callbacks so page can be added/removed from omni.kit.window.preferences as required
    # keep copy of manager.subscribe_to_extension_enable so it doesn't get garbage collected
    self._hooks.append(
        manager.subscribe_to_extension_enable(
            on_enable_fn=lambda _: self._register_page(),
            on_disable_fn=lambda _: self._unregister_page(),
            ext_name="omni.kit.window.preferences",
            hook_name="my.extension omni.kit.window.preferences listener",
        )
    )
    ....

def on_shutdown(self):
    ....
    self._unregister_page()
    ....
```

```python
def _register_page(self):
    try:
        from omni.kit.window.preferences import register_page
        from .my_extension_page import MyExtensionPreferences

        self._preferences_page = register_page(MyExtensionPreferences())
    except ModuleNotFoundError:
        pass

def _unregister_page(self):
    if self._preferences_page:
        try:
            import omni.kit.window.preferences

            omni.kit.window.preferences.unregister_page(self._preferences_page)
            self._preferences_page = None
        except ModuleNotFoundError:
            pass
```
### 2. Define settings in toml
extension.toml
```toml
[settings]
persistent.exts."my.extension".mySettingFLOAT = 1.0
persistent.exts."my.extension".mySettingINT = 1
persistent.exts."my.extension".mySettingBOOL = true
persistent.exts."my.extension".mySettingSTRING = "my string"
persistent.exts."my.extension".mySettingCOLOR3 = [0.25, 0.5, 0.75]
persistent.exts."my.extension".mySettingDOUBLE3 = [2.5, 3.5, 4.5]
persistent.exts."my.extension".mySettingINT2 = [1, 2]
persistent.exts."my.extension".mySettingDOUBLE2 = [1.25, 1.65]
persistent.exts."my.extension".mySettingASSET = "${kit}/exts/my.extension/icons/kit.png"
persistent.exts."my.extension".mySettingCombo1 = "hovercraft"
persistent.exts."my.extension".mySettingCombo2 = 1
```
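These toml keys surface at runtime as settings paths. Assuming `PERSISTENT_SETTINGS_PREFIX` is `"/persistent"` (in Kit it is imported from `omni.kit.window.preferences` rather than hard-coded), the mapping can be sketched as:

```python
# Sketch of how a toml settings key maps to a runtime settings path.
# PERSISTENT_SETTINGS_PREFIX value is assumed here for illustration.
PERSISTENT_SETTINGS_PREFIX = "/persistent"

def setting_path(ext_name: str, key: str) -> str:
    # persistent.exts."my.extension".mySettingFLOAT -> /persistent/exts/my.extension/mySettingFLOAT
    return f"{PERSISTENT_SETTINGS_PREFIX}/exts/{ext_name}/{key}"

print(setting_path("my.extension", "mySettingFLOAT"))
# -> /persistent/exts/my.extension/mySettingFLOAT
```

These are the same paths the page code below builds when it concatenates `PERSISTENT_SETTINGS_PREFIX + "/exts/my.extension/..."`.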
### 3. Subclass PreferenceBuilder to customize the UI
my_extension_page.py
```python
import carb.settings
import omni.kit.app
import omni.ui as ui
from omni.kit.window.preferences import PreferenceBuilder, show_file_importer, SettingType, PERSISTENT_SETTINGS_PREFIX
class MyExtensionPreferences(PreferenceBuilder):
pass
```
```python
def __init__(self):
super().__init__("My Custom Extension")
# update on setting change, this is required as setting could be changed via script or other extension
def on_change(item, event_type):
if event_type == carb.settings.ChangeEventType.CHANGED:
omni.kit.window.preferences.rebuild_pages()
self._update_setting = omni.kit.app.SettingChangeSubscription(PERSISTENT_SETTINGS_PREFIX + "/exts/my.extension/mySettingBOOL", on_change)
def build(self):
combo_list = ["my", "hovercraft", "is", "full", "of", "eels"]
with ui.VStack(height=0):
with self.add_frame("My Custom Extension"):
with ui.VStack():
self.create_setting_widget("My FLOAT Setting", PERSISTENT_SETTINGS_PREFIX + "/exts/my.extension/mySettingFLOAT", SettingType.FLOAT)
self.create_setting_widget("My INT Setting", PERSISTENT_SETTINGS_PREFIX + "/exts/my.extension/mySettingINT", SettingType.INT)
self.create_setting_widget("My BOOL Setting", PERSISTENT_SETTINGS_PREFIX + "/exts/my.extension/mySettingBOOL", SettingType.BOOL)
self.create_setting_widget("My STRING Setting", PERSISTENT_SETTINGS_PREFIX + "/exts/my.extension/mySettingSTRING", SettingType.STRING)
self.create_setting_widget("My COLOR3 Setting", PERSISTENT_SETTINGS_PREFIX + "/exts/my.extension/mySettingCOLOR3", SettingType.COLOR3)
self.create_setting_widget("My DOUBLE3 Setting", PERSISTENT_SETTINGS_PREFIX + "/exts/my.extension/mySettingDOUBLE3", SettingType.DOUBLE3)
self.create_setting_widget("My INT2 Setting", PERSISTENT_SETTINGS_PREFIX + "/exts/my.extension/mySettingINT2", SettingType.INT2)
self.create_setting_widget("My DOUBLE2 Setting", PERSISTENT_SETTINGS_PREFIX + "/exts/my.extension/mySettingDOUBLE2", SettingType.DOUBLE2)
self.create_setting_widget("My ASSET Setting", PERSISTENT_SETTINGS_PREFIX + "/exts/my.extension/mySettingASSET", SettingType.ASSET)
self.create_setting_widget_combo("My COMBO Setting 1", PERSISTENT_SETTINGS_PREFIX + "/exts/my.extension/mySettingCombo1", combo_list)
self.create_setting_widget_combo("My COMBO Setting 2", PERSISTENT_SETTINGS_PREFIX + "/exts/my.extension/mySettingCombo2", combo_list, setting_is_index=True)
```
Unlike `mySettingCombo1`, this call passes `setting_is_index=True`, so the combo box stores the index of the selected item in the setting rather than its string value.
**Functions:**
- `subscribe_to_extension_enable` adds a callback for `on_enable_fn` / `on_disable_fn`.
- `_register_page` registers the new page with omni.kit.window.preferences.
- `_unregister_page` removes the page from omni.kit.window.preferences.
- `MyExtension` defines the new page named “My Custom Extension”, which will appear in the list of names on the left-hand side of the omni.kit.window.preferences window.
**NOTES:**
- Multiple extensions can add the same page name like “My Custom Extension” and only one “My Custom Extension” will appear in the list of names on the left-hand side of the omni.kit.window.preferences window, but all pages will be shown when selecting the page.
- The build function can build any UI wanted and isn’t restricted to `self.create_xxx` functions.
- `mySettingCombo1` uses a combobox whose setting is stored as a string.
- `mySettingCombo2` uses a combobox whose setting is stored as an integer index.
**My Custom Extension page will look like this**
*(screenshot omitted)*
# Technical Overview
The `usd-resolver` repository is a set of core USD plugins that interact with Omniverse Nucleus through `client-library`. The plugins in this repository do share a few common utilities when interacting with `client-library` but they are in-fact separate plugins.
## OmniUsdResolver
`OmniUsdResolver` is an `ArResolver` plugin used to resolve assets within Omniverse. With `usd-resolver` supporting numerous different flavors and versions of USD, `OmniUsdResolver` has both a Ar 1.0 implementation and a Ar 2.0 implementation. In this documentation `OmniUsdResolver (Ar 1.0)` will refer to the Ar 1.0 implementation and `OmniUsdResolver (Ar 2.0)` will refer to the Ar 2.0 implementation.
## OmniUsdWrapperFileFormat
A `SdfFileFormat` plugin whose sole purpose is to “wrap” other `SdfFileFormat` plugins to fix the process of reading / writing USD layers to / from Omniverse.
> This `SdfFileFormat` plugin is only used for `OmniUsdResolver (Ar 1.0)`. Since the same “plugInfo.json” is used to declare the `OmniUsdResolver` plugin for Ar 1.0 and Ar 2.0, the `OmniUsdWrapperFileFormat` will be accessible by `PlugRegistry` for Ar 2.0 builds of `usd-resolver` but will not be used.
## OmniUsdLiveFileFormat
The `OmniUsdLiveFileFormat` is a `SdfFileFormat` plugin that represents “live” layers. Internally, this iteration of the plugin is referred to as “live v1”.
## OmniUsdLiveFileFormat (Multi-threaded)
This `OmniUsdLiveFileFormat` is also a `SdfFileFormat` plugin that represents “live” layers. As the title suggests, this iteration of the plugin was re-written to improve performance in multi-threaded environments and is referred to as “live v2”.
# Technical Requirements
## Driver Versions
| Driver Version Support | Windows | Linux |
|------------------------|-------------------------------------------------------------------------|------------------------------------------------------------------------|
| Recommended | 528.24 (GameReady, Studio, RTX/Quadro), 528.33 (Grid/vGPU) | 525.85.05 (GameReady, Studio, RTX/Quadro), 525.85.12 (Grid/vGPU) |
| Minimum | 473.47 | 470.121 |
| Unsupported | 495.0 up to 512.59, 525 up to 526.91 | 495.0 up to 510.58, 515.0 up to 515.17 |
## NVIDIA RTX GPU Recommendations for Professional Workstation Users
For recommended specification for Omniverse Workstation users, please view this table on the Non-Virtualized Topology page.
## NVIDIA RTX GPU Recommendations for Studio Users
Recommended specification for Omniverse Studio users.
| Minimum | Recommended | Ultimate |
|---------|-------------|----------|
| RTX Enabled GPU with 10GB | RTX Enabled GPU with 24 to 48GB | Multiple RTX Enabled GPUs with 48GB |
## Suggested Minimums by Product
### Nucleus
For Nucleus Workstation and the Enterprise Nucleus Server, please refer to this page
### Apps
| Product | Supported Operating Systems | Min CPU: (intel/amd) | Min Ram | Min GPU | Min Disk |
|---------|-----------------------------|----------------------|---------|---------|----------|
| Kit | - Windows 10/11<br>- Ubuntu 20.04/22.04<br>- CentOS 7 | - Intel i7 Gen 5<br>- AMD Ryzen | 16GB | GeForce RTX 3070 | 250GB |
| USD Presenter<br>USD Composer<br>USD Explorer | - Windows 10/11<br>- Ubuntu 20.04/22.04<br>- CentOS 7 | - Intel i7 Gen 5<br>- AMD Ryzen | 16GB | GeForce RTX 3070 | 250GB |
| Streaming Client | - Windows 10/11<br>- Ubuntu 20.04/22.04<br>- CentOS 7 | - Intel i3 Gen 7<br>- AMD Ryzen | 2GB | N/A | 1GB |
| Isaac Sim | - Windows 10/11<br>- Ubuntu 20.04/22.04 | - Intel i7 Gen 7<br>- AMD Ryzen | 32GB | GeForce RTX 3070 | Data Footprint + 20% |
| XR | - Meta Quest 2<br>- HTC Vive Pro<br>- Windows 10/11<br>- Ubuntu | - Intel i9 Gen 12<br>- AMD Ryzen TR Gen 3 | 64GB | - GeForce RTX 3090<br>- Quadro A6000 | 1TB |
Note: These specs are minimum suggested requirements. Faster and more robust GPUs and/or CPUs, additional memory (RAM) and/or additional disk space will positively benefit Omniverse performance.
# Test a Build
Omniverse provides tooling and automation making testing easier and more efficient. Use the system’s built-in methods to generate UNIT TESTS for extensions, run automated INTEGRATION TESTS for your applications, and perform PERFORMANCE TESTS to ensure your project runs as efficiently as possible.
- Testing Extensions with Python - Kit Manual : Python Testing.
- Testing Extensions with C++ - Kit Manual : C++ Testing.
- Service Testing Tutorial - Unit testing for a viewport capture Service.
## Logging
Logging, an essential tool for tracking Project activities, offers a detailed, sequential record of events occurring within your Project. This record helps you assess how your Project is performing.
- Kit Manual : Logging - Provides a brief overview of utility functions in Kit so you can create log entries during runtime.
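As a rough sketch of the leveled-logging pattern these utilities provide (inside a Kit extension you would call `carb.log_info` / `carb.log_warn` / `carb.log_error`), the snippet below uses the standard library's `logging` module as a stand-in so it runs anywhere; the logger name `my.extension` is a hypothetical placeholder:

```python
import logging

# Stand-in for Kit's carb logging helpers; outside Kit, the stdlib
# logging module plays the same role of creating leveled runtime log entries.
logging.basicConfig(format="[%(levelname)s] %(name)s: %(message)s")
log = logging.getLogger("my.extension")  # hypothetical extension name
log.setLevel(logging.INFO)

log.info("extension started")        # roughly carb.log_info(...)
log.warning("falling back to CPU")   # roughly carb.log_warn(...)
log.error("failed to open stage")    # roughly carb.log_error(...)
```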
## Profiling
Profiling is an analysis technique used to evaluate the runtime behavior of a software program. It entails gathering data on aspects such as time distribution across code segments, frequency of function usage, number of rendered “frames,” memory allocation, etc. This data serves to identify potential performance bottlenecks, memory leaks, and other factors potentially affecting the efficiency or stability of the program.
The Omniverse Platform provides comprehensive profiling support, enabling the thorough examination of your project’s frame rate, resource usage, stability, among other aspects.
For more information on profiling, refer to the following resources:
- Kit Manual : Profiling Guide
# Testing
## Folder Structure and Naming
Everything test related is in the `source/tests` folder.
`source/tests/test.unit` contains all unit tests which compile in the `test.unit` executable.
All tests are grouped by the interface/namespace they are testing. For instance, `source/tests/test.unit/tasking` contains all tests for `carb.tasking.plugin` while `source/tests/test.unit/framework` contains all framework tests.
Each unit test source file must start with `Test` prefix, e.g. `tests/test.unit/tasking/TestTasking.cpp`.
Generally you want to test against the interface. If the interface has multiple plugins implementing it you can iterate over all of them in the test initialization code, without changing the test itself or with only small changes (like separate shaders for different graphics backends).
If you want to write tests against a particular implementation and it is not convenient anymore to keep them in the same folder the naming guideline is to add impl name: `/tests/test.unit/graphics.vulkan`.
If you need to create special plugins for your testing they should be put into: `source/tests/plugins/`.
## The Testing Framework
We use [doctest](https://github.com/onqtam/doctest) as our testing framework of choice. While some parts of its functionality will be covered in this guide, it is recommended you read the official [tutorial](https://github.com/onqtam/doctest/blob/master/doc/markdown/tutorial.md). This will for instance help you understand the concept of `SECTION()` and how it can be used to structure tests. This [CppCon 2017 talk](https://www.youtube.com/watch?v=eH1CxEC29l8) is also useful.
## Writing Tests
The typical unit test can look like this (from `framework/TestFileSystem.cpp`):
```cpp
#if CARB_PLATFORM_WINDOWS
const char* kFileName = "carb.dll";
#else
const char* kFileName = "libcarb.so";
#endif
TEST_CASE("paths can be checked for existence"
          "[framework][filesystem]"
          "[component=carbonite][owner=adent][priority=mandatory]")
{
    FrameworkScoped f;
    FileSystem* fs = f->getFileSystem();
    REQUIRE(fs);

    SECTION("carb (relative) path exists")
    {
        CHECK(fs->exists(kFileName));
    }
    SECTION("app (absolute) path exists")
    {
        CHECK(fs->exists(f->getAppPath()));
    }
    SECTION("made up path doesn't exist")
    {
        CHECK(fs->exists("doesn't exist") == false);
    }
}
```
The general flow is to first get the framework, using the `FrameworkScoped` utility from `common/TestHelpers`. Then get the interface you need to test against, write your tests and clean up.
Framework config can be used to control which plugins to load (at least to avoid loading overhead). It’s up to the test writer how to organize initialization code, the only important thing is to clean up everything after the test is done. In order to not have to write the same setup and teardown code over and over you can create a C++ object using the RAII pattern, like this:
```c++
class AutoTempDir
{
public:
AutoTempDir()
{
FileSystem* fs = getFramework()->getFileSystem();
bool res = fs->makeTempDirectory(m_path, sizeof(m_path));
REQUIRE(res);
}
~AutoTempDir()
{
FileSystem* fs = getFramework()->getFileSystem();
bool res = fs->removeDirectory(m_path);
CHECK(res);
}
const char* getPath()
{
return m_path;
}
};
TEST_CASE("temp directory", "[framework][filesystem]", "[component=carbonite][owner=ncournia][priority=mandatory]")
{
FrameworkScoped f;
FileSystem* fs = f->getFileSystem();
REQUIRE(fs);
SECTION("create and remove")
{
AutoTempDir autoTempDir;
SECTION("while creating empty file inside")
{
std::string path = autoTempDir.getPath() + std::string("/empty.txt");
File* file = fs->openFileToWrite(path.c_str());
REQUIRE(file);
fs->closeFile(file);
}
        SECTION("while creating empty utf8 file inside")
        {
            std::string path = autoTempDir.getPath() + "/" + std::string(kUtf8FileName);
            File* file = fs->openFileToWrite(path.c_str());
            REQUIRE(file);
            fs->closeFile(file);
        }
    }
}
```
Every test must be tagged at least with the `[my interface name]` tag, where “my interface name” is equal to the folder name. You can add additional tags to support even more elaborate filtering. Keep in mind that these tags are primarily there for people to focus their testing on a specific area of the code. We will explain later how to control this.
## Running Tests
Once compiled all tests can be run with the `test.unit` executable. For instance on Windows release it is placed here:
> _build\windows-x86_64\release\test.unit.exe
Command line examples:
Run all tests and output errors:
```
test.unit
```
Help can be found with the `-h` flag or by reading the official documentation:
```
test.unit -h
```
Run only tests tagged with `[tasking]`:
```bash
# Linux
test.unit -tc='*[tasking]*'
# Windows
test.unit.exe -tc=*[tasking]*
```
Run all tests, excluding tests tagged with `[tasking]`:
```bash
test.unit -tce=*[tasking]*
```
Run only tests tagged with `[framework]` or `[tasking]`:
```bash
test.unit -tc=*[framework]*,*[tasking]*
```
Run only tests which name starts with `Acquire*`:
```bash
test.unit -tc=Acquire*
```
Include successful tests in the output:
```bash
test.unit -s
```
Prints a list of all test cases:
```bash
test.unit --list-test-cases
```
Enable Carbonite error logging:
```bash
# -g or --carb-log
test.unit -g
```
Enable Carbonite logging and set log level to verbose:
```bash
test.unit -g --carb-log-level=-2
```
Use Debug Heap as provided by VC C runtime (Windows only).
```bash
test.unit --debug-heap
```
# Testing Extensions with C++
For information on testing extensions with Python, look [here](#testing-extensions-python).
## Doctest
`omni.kit.test` has a **doctest** library as a runner for C++ tests.
Further, refer to Carbonite’s Testing.md to learn about using the Doctest testing system.
For a list of doctest command-line arguments, you can use `--help` on the command line for `test.unit`:
```console
test.unit.exe --help
```
Omniverse adds additional options, but there is also an online reference for the standard Doctest command-line options.
## Set up testable libraries
To write C++ tests, you first must have created a shared library with tests to be loaded:
```lua
project_ext_tests(ext, "omni.appwindow.tests")
add_files("impl", "tests.cpp")
```
`tests.cpp/AppWindowTests.cpp`:
```cpp
#include <carb/BindingsUtils.h>
#include <doctest/doctest.h>
CARB_BINDINGS("omni.appwindow.tests")
TEST_SUITE("some test suite") {
TEST_CASE("test case success") {
CHECK(5 == 5);
}
}
```
Next specify in the test section to load this library:
```toml
[[test]]
cppTests.libraries = [
"bin/${lib_prefix}omni.appwindow.tests${lib_ext}",
]
```
Run tests the same way with `_build\windows-x86_64\release\tests-[extension_name].bat`.
`omni.kit.test` then:
1. loads the library
2. tests are registered automatically in **doctest**
3. runs both **C++ tests** (via **doctest**) and **Python tests**
In this setup C++ and python tests will run in the same process. A separate `[[test]]` entry can be created to run separate processes.
To run only subset of tests `-f` option can be used:
```
_build\windows-x86_64\release\tests-omni.appwindow.bat -f foo
```
All arguments after `--` are passed to the **tested process**. To pass arguments on from the tested process to **doctest**, another `--` must be used. So to pass arguments to doctest, `--` must be specified twice, like so:
```
_build\windows-x86_64\release\tests-omni.appwindow.bat -- -- -tc=*foo* -s
```
When using the `test.unit.exe` workflow instead, check below.
## Running a single test case
In order to run a single test case, use the -tc flag (short for –test-case) with wildcard filters:
```console
_build\windows-x86_64\release\tests-omni.appwindow.bat -- -- -tc="*[rtx]*"
```
Commas can be used to add multiple filters:
```console
_build\windows-x86_64\release\tests-omni.appwindow.bat -- -- -tc="*[rtx]*,*[graphics]*"
```
## Unit Tests
Some tests are written using an older approach. Carbonite is used directly without kit and all the required plugins are manually loaded. To run those tests use:
```
_build\windows-x86_64\release\test.unit.exe
```
## Image comparison with a golden image
Some graphics tests allow you to compare visual tests with a golden image. This can be done by creating an instance of `ImageComparison` class.
Each ImageComparison descriptor requires a unique GUID, which must be accompanied by the equivalent string version in the C++ source as a comment for easy lookup.
Defining a test case:
```cpp
ImageComparisonDesc desc =
{
{ 0x2ae3d60e, 0xbc3b, 0x48b6, { 0xa8, 0x67, 0xe0, 0xa0, 0x7c, 0xaa, 0x9e, 0xd0 } }, // 2AE3D60E-BC3B-48B6-A867-E0A07CAA9ED0
"Depth-stencil-16bit",
ComparisonMetric::eAbsoluteError,
kBackBufferWidth,
kBackBufferHeight
};
// Create Image comparison
imageComparison = new ImageComparison();
// register the test case (only once)
status = imageComparison->registerTestCase(&desc);
REQUIRE(status);
```
### Regression testing of an executable with a golden image:
- This is supported by any executable that uses carb.renderer (e.g. omniverse-kit or rtx.example), in which it can capture and dump a frame.
- NVF is not yet supported.
```cpp
// 1- run an executable that supports CaptureFrame
std::string execPath;
std::string cmdLine;
ImageComparison::makeCaptureFrameCmdLine(500, // Captures frame number 500
&desc, // ImageComparisonDesc desc
    "kit",    // Executable's name
    execPath, // Returns the executable path needed for executeCommand()
    cmdLine); // Returns command line arguments needed for executeCommand()
// 2- Append any command line arguments you need to cmdLine with proper spaces
cmdLine += " --/rtx/debugView='normal'";
// 3- Run the application with a limited time-out
status = executeCommand(execPath, cmdLine, kExecuteTimeoutMs);
REQUIRE(status);
// 4- compare the golden image with the dumped output of the captured frame (located at $Grapehene$/outputs)
float result = 0.0f;
CHECK(m_captureFrame->compareResult(&desc, result) == ImageComparisonStatus::eSuccess);
CHECK(result <= Approx(kMaxMeanSqrError)); // With ComparisonMetric::eMeanErrorSquared
```

You can also compare directly against the back buffer with a `CaptureFrame` instance:

```cpp
// 1- Create an instance of CaptureFrame and initialize it
captureFrame = new CaptureFrame(m_gEnv->graphics, m_gEnv->device);
captureFrame->initializeCaptureFrame(RenderOpTest::kBackBufferWidth, RenderOpTest::kBackBufferHeight);
// 2- Render something
// 3- copy BackBuffer to CaptureFrame
captureFrame->copyBackBufferToHostBuffer(commandList, backBufferTexture);
// 4- Submit commands and wait to finish
// 5- compare the golden image with the BackBuffer (or dump it into the disk $Grapehene$/outputs)
float result = 0.0f;
CHECK(imageComparison->compareResult(&desc, captureFrame->readBufferData(true), captureFrame->getBufferSize(), result) == ImageComparisonStatus::eSuccess);
CHECK(result == Approx(0.0f));
// compareResult() also allows you to dump the BackBuffer into outputs folder on the disk.
```

Command-line examples:

```console
// Example: regression testing of OmniverseKit executable
test.unit.exe -tc="*[omniverse-kit][rtx]*"
// Example: regression testing of visual rendering tests
test.unit.exe --carb-golden -tc="*[graphics]*,*[visual]*"
// Example: regression testing of visual rendering tests that fail our acceptable threshold
test.unit.exe --carb-golden-failure -tc="*[graphics]*,*[visual]*"
```

Verify and view the golden image that is added to the outputs folder in Omniverse Kit’s repo.
## Data Management
- `data\golden` is a folder for `git lfs` data.
- Open a merge request with git-lfs data changes.
## Troubleshooting
- All unit tests crash on textures or shaders:
- You must have `git lfs` installed and initialize it.
- Check files in `data` folder, and open them in a text editor. You should not see any URL or hash as content.
- Install the latest driver (refer to readme.md)
- executeCommand() fails:
- A possible crash or assert in release mode.
- A crash or hang during exit.
- Time-out reached. Note that any assert dialog box in release mode may cause time-out.
- compareResult() fails:
- Rendering is broken, or a regression is introduced beyond the threshold.
- outputs folder is empty for tests with a tag of [executable]:
- A regression caused the app to fail.
## How to use asset folder for storing data
- Perform the following command to delete the existing `_build\asset.override` folder. That folder must be gone before proceeding further.
```console
.\assets.bat clean
```
- Stage assets. It copies data from `assets` to `assets.override`.
```console
.\assets.bat stage
```
- Modify any data under `asset.override`. Do NOT modify `assets` folder.
- Upload and publish a new asset package:
```console
.\assets.bat publish
```
- Rebuild to download the new assets and run the test to verify:
```console
.\build.bat --rebuild
```
- Open a merge request with new assets.packman.xml changes.
## Skipping Vulkan or Direct3D 12 graphics tests
- In order to skip running a specific backend for graphical tests, use `--carb-no-vulkan` or `--carb-no-d3d12`.
```console
test.unit.exe --carb-no-vulkan
```
# Testing Extensions with Python
This guide covers the practical part of testing extensions with Python. Both for extensions developed in the `kit` repo and outside.
For information on testing extensions with C++ / doctest, look [here](#testing-extensions-cplusplus), although there is some overlap, because it can be preferable to test C++ code from python bindings.
The *Kit Sdk* includes the `omni.kit.test` extension and a set of build scripts (in `premake5-public.lua` file) to run extension tests.
It supports two types of testing:
* python tests (`unittest` with `async/await` support)
* c++ tests (`doctest`)
It is generally preferred to test `C++` code from python using bindings where feasible. In this way, the bindings are also tested, and that promotes writing bindings to your `C++` code. Most of this guide focuses on python tests, but there is a `C++` section at the very end.
## Adding Extension Test: Build Scripts
If your extension’s `premake5.lua` file defines the extension project in usual way:
```lua
local ext = get_current_extension_info()
project_ext(ext)
```
It should already have corresponding bat/sh files generated in the `_build` folder, e.g.: `_build\windows-x86_64\release\tests-[extension_name].bat`
Even if you haven’t written any actual tests, it is already useful. It is a startup/shutdown test, that verifies that all extension dependencies are correct, python imports are working, and that it can start and exit without any errors.
An empty extension test entry is already an important one. Wrong or missing dependencies are a source of many future issues. Extensions are often developed in the context of certain apps and have implicit expectations. When used in other apps they do not work. Or when the extension load order randomly changes and other extensions you implicitly depend on start to load after you.
## How does it work?
You can look inside Kit’s `premake5-public.lua` file to find the details on how it happens, follow the `function project_ext(ext, args)`.
If you look inside that shell script, it basically runs an empty Kit with the `omni.kit.test` extension enabled and passes it your extension. That will run the **test system process**, which in turn will run another, **tested process**, which is basically an empty Kit running your extension's tests.
**Test system process** prints each command it uses to spawn a new process. You can copy that command and use exactly the same command for debugging purposes.
You may ask why we spawn a process, which spawns another process? And both have omni.kit.test? Many reasons:
1. **Test system process** monitors **tested process**:
- It can kill it in case of timeout.
- Reads return code. If != 0 indicates test failure.
- Reads stdout/stderr for error messages.
2. **Test system process** reads extension.toml of the tested extension in advance. That allows us to specify test settings, cmd args, etc.
3. It can run many extension tests in parallel.
4. It can download extensions from the registry before testing.
omni.kit.test has separate modules for both **test system process** (exttests.py) and **tested process** (unittests.py).
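The monitoring duties above can be sketched in plain Python (an illustration only, not the actual `exttests.py` implementation): spawn the tested process, enforce a timeout, and treat a nonzero return code as failure.

```python
import subprocess
import sys

def run_monitored(cmd, timeout_sec):
    """Spawn a tested process; fail on timeout or nonzero return code."""
    try:
        proc = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout_sec)
    except subprocess.TimeoutExpired:
        # The test system kills the tested process on timeout.
        return False, "<killed: timeout>"
    # Return code != 0 indicates test failure; stdout/stderr can then be
    # scanned for error messages.
    return proc.returncode == 0, proc.stdout + proc.stderr

ok, output = run_monitored([sys.executable, "-c", "print('tests passed')"], timeout_sec=30)
```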
## Writing First Test
**Tested process** runs with omni.kit.test which has the unittests module. It is a wrapper on top of python’s standard unittest framework.
It adds support for async/await usage in tests, allowing test methods to be async and run across many updates/frames. For instance, a test can at any point call `await omni.kit.app.get_app().next_update_async()` and thus yield test code execution until the next update.
All the methods in the python standard unittest can be used normally (like self.assertEqual etc).
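For illustration, the same async test style can be reproduced outside Kit with only the standard library (`unittest.IsolatedAsyncioTestCase`); in a real Kit test, `await omni.kit.app.get_app().next_update_async()` would take the place of the `asyncio.sleep(0)` stand-in:

```python
import asyncio
import unittest

class ExampleAsyncTest(unittest.IsolatedAsyncioTestCase):
    async def test_yields_then_asserts(self):
        # Stand-in for yielding until the next Kit app update/frame.
        await asyncio.sleep(0)
        # Standard unittest assertions work as usual.
        self.assertEqual(2 + 2, 4)
```

Running it through the normal `unittest` runner executes the coroutine to completion.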
If your extension for instance is defined:
```toml
[[python.module]]
name = "omni.foo"
```
Your tests must go into the omni.foo.tests module. The testing framework will try to import tests submodule for every python module, in order to discover them. This has a few benefits:
1. It only gets imported in the test run. Thus it can use test-only dependencies or run more expensive code. There is no need to depend on omni.kit.test everywhere.
2. It gets imported after other python modules. This way public modules can be imported and used in tests as if the tests are written externally (but still inside of an extension). In this way, the public API can be tested.
An actual test code can look like this:
```python
# NOTE:
# omni.kit.test - python's standard library unittest module with additional wrapping to add support for async/await tests
# For most things refer to the unittest docs: https://docs.python.org/3/library/unittest.html
import omni.kit.test
# Import extension python module we are testing with absolute import path, as if we are an external user (i.e. a different extension)
import example.python_ext
class Test(omni.kit.test.AsyncTestCase):
# Before running each test
async def setUp(self):
pass
# After running each test
async def tearDown(self):
pass
# Actual test, notice it is an "async" function, so "await" can be used if needed
async def test_hello_public_function(self):
result = example.python_ext.some_public_function(4)
self.assertEqual(result, 256)
```
All the concepts here are from the standard `unittest` framework. Test methods start with `test_`. You need to inherit a base test class, which will be created, `setUp()` will be called before each test, `tearDown()` after. Everything can be `async` or “sync”.
## Test Settings
### Test Settings: Basics
The `[[test]]` section of `extension.toml` allows control of how to run a test process. We aim to make that configuration empty and for the defaults to be reasonable. We also strive to make tests run as close as possible to a real-usage run, making test environments the same as production.
However, you can specify which errors to ignore, what additional dependencies to bring, change the timeout, pass extra args etc. All the details are in Kit’s Extensions doc.
Below is an example, it shows how to:
- add extra arguments
- add test only dependencies (extensions)
- change timeout
- include and exclude error messages from failing tests
```toml
[[test]]
args = ["--/some/setting=1"]
dependencies = ["omni.kit.capture"]
timeout = 666
stdoutFailPatterns.include = ["*[error]*", "*[fatal]*"]
stdoutFailPatterns.exclude = [
"*Leaking graphics objects*", # Exclude graphics leaks until fixed
]
```
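One plausible reading of these include/exclude rules, sketched with `fnmatch`-style wildcards (the real matcher in `omni.kit.test` may differ; note `[` / `]` are special to `fnmatch`, so plain `*error*` patterns are used in this sketch):

```python
import fnmatch

def line_fails_test(line, include, exclude):
    """A stdout line fails the test if it matches an include pattern
    and is not excused by an exclude pattern."""
    hit = any(fnmatch.fnmatch(line, pat) for pat in include)
    excused = any(fnmatch.fnmatch(line, pat) for pat in exclude)
    return hit and not excused

include = ["*error*", "*fatal*"]
exclude = ["*Leaking graphics objects*"]
```

With these lists, `"[error] device lost"` fails the test, while the known leak message is excused by the exclude pattern.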
### Test Settings: Where to look for python tests?
By default **test system process** (`exttests.py`) reads all `[[python.module]]` entries from the tested extension and searches for tests in each of them. You can override it by explicitly setting where to look for tests:
```toml
[[test]]
pythonTests.include = ["omni.foo.*"]
pythonTests.exclude = ["omni.foo.bar.*"]
```
This is useful if you want to bring tests from other extensions. Especially, when testing apps.
### Test Settings: Multiple test processes
**Each `[[test]]` entry is a new process**. Thus by default, each extension will run one test process to run all the python tests for that extension.
When adding multiple entries they must be named to distinguish them (in artifacts, logs, etc):
```toml
[[test]]
name = "basic"
pythonTests.include = [ "omni.foo.*basic*" ]
[[test]]
name = "other"
pythonTests.include = [ "omni.foo.*other*" ]
```
To select which process to run: pass `-n [wildcard]`, where `[wildcard]` is a name.
### Test Settings: Disable test on certain platform
Any setting in `extension.toml` can be set per platform, using filters. Read more about them in “Extensions” doc. For example, to disable tests on Windows, the `enabled` setting can be overridden:
```toml
[[test]]
name = "some_test"
"filter:platform"."windows-x86_64".enabled = false
```
## Running Your Test
To run your test just call the shell script described above: `_build\windows-x86_64\release\tests-[extension_name].bat`.
### Run subset of tests
Pass `-f [wildcard]`, where `[wildcard]` is a name of the test or part of the name. `*` are supported:
```
>_build\windows-x86_64\release\tests-example.python_ext.bat -f "public_function"
```
### Run subset of tests using Tests Sampling
Pass `--/exts/omni.kit.test/testExtSamplingFactor=N`, where `N` is a number between 0.0 and 1.0, 0.0 meaning no tests will be run and 1.0 all tests will run and 0.5 means 50% of tests will run. To have this behavior on local build you need an additional parameter:
```
>_build\windows-x86_64\release\tests-example.python_ext.bat --/exts/omni.kit.test/testExtSamplingFactor=0.5 --/exts/omni.kit.test/testExtSamplingContext='local'
```
### Run tests from a file
To run tests from a file use `--/exts/omni.kit.test/runTestsFromFile=''` with the name of the file to read. Note: each time the tests run a playlist will be generated, useful to replay tests in a specific order or a subset of tests.
```
>_build\windows-x86_64\release\tests-example.python_ext.bat --/exts/omni.kit.test/runTestsFromFile='C:/dev/ov/kit/exttest_omni_foo_playlist_1.log'
```
### Retry Strategy
There are 4 supported retry strategies:
1. `no-retry` -> run tests once
2. `retry-on-failure` -> run up to N times, stop at first success (N = `testExtMaxTestRunCount`)
3. `iterations` -> run tests N times (N = `testExtMaxTestRunCount`)
4. `rerun-until-failure` -> run up to N times, stop at first failure (N = `testExtMaxTestRunCount`)
For example to retry tests up to 3 times (if a flaky test occurs) use this command:
```
>_build\windows-x86_64\release\tests-example.python_ext.bat --/exts/omni.kit.test/testExtRetryStrategy='retry-on-failure' --/exts/omni.kit.test/testExtMaxTestRunCount=3
```
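The four strategies can be sketched as a small loop (illustrative only; `run_test` is any callable returning `True` on success, and `n` plays the role of `testExtMaxTestRunCount`):

```python
def run_with_strategy(run_test, strategy, n):
    """Run a test callable under one of the four retry strategies."""
    results = []
    runs = 1 if strategy == "no-retry" else n
    for _ in range(runs):
        ok = run_test()
        results.append(ok)
        if strategy == "retry-on-failure" and ok:
            break  # stop at first success
        if strategy == "rerun-until-failure" and not ok:
            break  # stop at first failure
    return results

# A flaky test that fails once, then passes:
attempts = iter([False, True, True])
flaky = lambda: next(attempts)
```

Under `retry-on-failure` with `n=3`, the flaky test above runs twice and the run is counted a success.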
### Developing Tests
Pass `--dev` or `--/exts/omni.kit.test/testExtUIMode=1`. That will start a window with a list of tests instead of immediately running them. Here you can select tests to run. Change code, the extension hot reloads, run again. E.g.:

```
>_build\windows-x86_64\release\tests-example.python_ext.bat --dev
```

Note that this test run environment is a bit different. Extra extensions required to render a basic UI are enabled.
## Tests Code Coverage (Python)
Pass `--coverage`. That will run your tests and produce a coverage report at the end (HTML format):

```
>_build\windows-x86_64\release\tests-example.python_ext.bat --coverage
```

The output will look like this:

```bash
Generating a Test Report...
> Coverage for example.python_ext:default is 49.8%
> Full report available here C:/dev/ov/kit/kit/_testoutput/test_report/index.html
```

The HTML file will have 3 tabs. The coverage tab will display the coverage per file. Click on a filename to see the actual code coverage for that file.

Based on the Google Code Coverage Best Practices, the general guideline for extension coverage is defined as: 60% is acceptable, 75% is commendable and 90% is exemplary.
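That guideline can be written down directly (thresholds taken from the sentence above; the tier names are just the doc's adjectives):

```python
def coverage_rating(percent):
    """Map a coverage percentage to the guideline tiers above."""
    if percent >= 90:
        return "exemplary"
    if percent >= 75:
        return "commendable"
    if percent >= 60:
        return "acceptable"
    return "below guideline"
```

For instance, the 49.8% figure from the example report output would rate as "below guideline".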
The settings to modify the Coverage behavior are found in the
```code
<span class="pre">
extension.toml
file of
```code
<span class="pre">
omni.kit.test
, for example
```code
<span class="pre">
pyCoverageThreshold
to modify the threshold and filter flags like
```code
<span class="pre">
pyCoverageIncludeDependencies
to modify the filtering.
Note: the python code coverage is done with Coverage.py. If you need to exclude code from Coverage you can consult this section.
The python code coverage is done with Coverage.py. If you need to exclude code from Coverage you can consult this section.
For example, any line with a comment `# pragma: no cover` is excluded. If that line introduces a clause, for example an `if` clause, or a function or class definition, then the entire clause is also excluded.

```python
if example:  # pragma: no cover
    print("this line and the `if example:` branch will be excluded")
print("this line not excluded")
```
## Debugging Coverage
If you see strange coverage results, the easiest way to understand what is going on is to modify `test_coverage.py` from `omni.kit.test`. In `def startup(self)`, comment out the `source=self._settings.filter` line and also remove all items in `self._coverage.config.disable_warnings`. Coverage will run without any filter and will report all warnings, giving more insights. The list of warnings can be seen in the Coverage.py documentation.
## Disabling a Python test
Use decorators from the `unittest` module, e.g.:

```python
@unittest.skip("Fails on Linux now, to be fixed")  # OM-12345
async def test_create_delete(self):
    ...
```
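If the skip should depend on the environment, `unittest` also provides conditional decorators. A small self-contained sketch (the test names here are made up for illustration):

```python
import sys
import unittest


class ExampleTests(unittest.TestCase):
    # Skip only when the condition holds; otherwise the test runs normally.
    @unittest.skipIf(sys.platform.startswith("linux"), "Fails on Linux now, to be fixed")
    def test_platform_sensitive(self):
        self.assertTrue(True)

    # Recorded as an expected failure rather than an error while a bug is open.
    @unittest.expectedFailure
    def test_known_bug(self):
        self.assertEqual(1, 2)
```

`skipUnless` and `skipIf` keep the skip reason visible in the test report, which is usually preferable to commenting a test out.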
## Pass extra cmd args to the test
To pass extra arguments for debugging purposes (for permanent arguments use the `[[test]]` config part): all arguments after `--` will be passed to the test process, e.g.:

```
> _build\windows-x86_64\release\tests-[extension_name].bat -- -v
```
### Choose an app to run tests in
All tests run in a context of an app, which by default is an empty app: `${kit}/apps/omni.app.test_ext.kit`. You can instead pass your own kit file, where you can define any extra settings.
In this kit file you can change the testing environment and enable some debug settings or extensions. The `omni.app.test_ext_kit_sdk.kit` app kit comes with a few useful settings commented out.
### Test Output
For each test process, `omni.kit.test` provides a directory it can write test outputs to (logs, images, etc):
```python
import omni.kit.test
output_dir = omni.kit.test.get_test_output_path()
```
Or using `test_output` token:
```python
output_dir = carb.tokens.get_tokens_interface().resolve("${test_output}")
```
When running on CI this folder becomes a build artifact.
### Python debugger
To enable the Python debugger you can use the `omni.kit.debug.python` extension. One way is to uncomment it in `omni.app.test_ext.kit`:
```toml
# "omni.kit.debug.python" = {}
```
You can use VSCode to attach a python debugger. Look into `omni.kit.debug.python` `extension.toml` for more settings, and check the FAQ section for a walkthrough.
### Wait for the debugger to attach
If you want to attach a debugger, you can run with the `-d` flag. When Kit runs with `-d`, it stops and waits for a debugger to attach, which can also be skipped. Since we run 2 processes, you likely want to attach to the second one and skip the first. E.g.:
```bash
λ _build\windows-x86_64\release\tests-example.python_ext.bat -d
[omni.kit.app] Waiting for debugger to attach, press any key to skip... [pid: 19052]
[Info] [carb] Logging to file: C:/projects/extensions/kit-template/_build/windows-x86_64/release//logs/Kit/kit/103.0/kit_20211018_160436.log
Test output path: C:\projects\extensions\kit-template\_testoutput
Running 1 Extension Test(s).
|||||||||||||||||||||||||||||| [EXTENSION TEST START: example.python_ext-0.2.1] ||||||||||||||||||||||||||||
>>> running process: C:\projects\extensions\kit-template\_build\windows-x86_64\release\kit\kit.exe ${kit}/apps/omni.app.test_ext.kit --enable example.python_ext-0.2.1 --/log/flushStandardStreamOutput=1 --/app/name=exttest_example.python_ext-0.2.1 --/log/file='C:\projects\extensions\kit-template\_testoutput/exttest_example.python_ext-0.2.1/exttest_example.python_ext-0.2.1_2021-10-18T16-04-37.log' --/crashreporter/dumpDir='C:\projects\extensions\kit-template\_testoutput/exttest_example.python_ext-0.2.1' --/plugins/carb.profiler-cpu.plugin/saveProfile=1 --/plugins/carb.profiler-cpu.plugin/compressProfile=1 --/app/profileFromStart=1 --/plugins/carb.profiler-cpu.plugin/filePath='C:\projects\extensions\kit-template\_testoutput/exttest_example.python_ext-0.2.1/ct_exttest_example.python_ext-0.2.1_2021-10-18T16-04-37.gz' --ext-folder c:/projects/extensions/kit-template/_build/windows-x86_64/release/kit/extsPhysics --ext-folder c:/projects/extensions/kit-converters/_build/windows-x86_64/release/exts --ext-folder C:/projects/extensions/kit-template/_build/windows-x86_64/release/exts --ext-folder C:/projects/extensions/kit-template/_build/windows-x86_64/release/apps --enable omni.kit.test --/exts/omni.kit.test/runTestsAndQuit=true --/exts/omni.kit.test/includeTests/0='example.python_ext.*' --portable-root C:\projects\extensions\kit-template\_build\windows-x86_64\release\/ -d
|| [omni.kit.app] Waiting for debugger to attach, press any key to skip... [pid: 22940]
```
# Marking tests as unreliable
It is often the case that certain tests fail randomly, with some probability. That can block CI/CD pipelines and lowers trust in the TC state.
In that case:
1. Create a ticket with `Kit:UnreliableTests` label
2. Mark a test as unreliable and leave the ticket number in the comment
Unreliable tests do not run as part of the regular CI pipeline. They run in a separate nightly TC job.
There are 2 ways to mark a test as unreliable:
1. Mark whole test process as unreliable:
```toml
[[test]]
unreliable = true
```
2. Mark specific python tests as unreliable:
```toml
[[test]]
pythonTests.unreliable = ["*test_name"]
```
# Running unreliable tests
To run unreliable tests (and only them) pass `--/exts/omni.kit.test/testExtRunUnreliableTests=1` to the test runner:
```
> _build\windows-x86_64\release\tests-example.python_ext.bat --/exts/omni.kit.test/testExtRunUnreliableTests=1
```
# Listing tests
To list tests without running them, pass `--/exts/omni.kit.test/printTestsAndQuit=1`. That will still take some time to start the tested extension. It is a limitation of the testing system that it can’t find tests without setting up a python environment:
```
> _build\windows-x86_64\release\tests-example.python_ext.bat --/exts/omni.kit.test/printTestsAndQuit=1
```
Look for lines like:
```bash
|| =========================================
|| Printing All Tests (count: 6):
|| =========================================
|| omni.kit.commands.tests.test_commands.TestCommands.test_callbacks
|| omni.kit.commands.tests.test_commands.TestCommands.test_command_parameters
|| omni.kit.commands.tests.test_commands.TestCommands.test_commands
|| omni.kit.commands.tests.test_commands.TestCommands.test_error
|| omni.kit.commands.tests.test_commands.TestCommands.test_group
|| omni.kit.commands.tests.test_commands.TestCommands.test_multiple_calls
```
Accordingly, to list unreliable tests add `--/exts/omni.kit.test/testExtRunUnreliableTests=1`:
```
> _build\windows-x86_64\release\tests-example.python_ext.bat --/exts/omni.kit.test/testExtRunUnreliableTests=1 --/exts/omni.kit.test/printTestsAndQuit=1
```
# repo_test: Running All Tests
To run all tests in the repo we use the `repo_test` repo tool, which is yet another process that runs before anything else. It globs all the files according to the `repo.toml` `[repo_test]` section configuration and runs them.
It is one entry point to run all sorts of tests. Different kinds of tests are grouped into *suites*. By default, it will run one *suite*, but you can select which suite to run.
```
##omni.kit.test[append, foo, bah]
```
Note that if a pragma fails for any reason (the syntax is incorrect; you try to append to a value that was previously set), it will be silently ignored.
## omni.kit.ui_test: Writing UI tests
Many extensions build various windows and widgets using `omni.ui`. The best way to test them is by simulating user interactions with the UI. For that `omni.kit.ui_test` extension can be used.
`omni.kit.ui_test` provides a way to query UI elements and interact with them. To start add test dependency to this extension:
```toml
[[test]]
dependencies = [
"omni.kit.ui_test",
]
```
Now you can import and use it in tests. Example:
```python
import omni.kit.ui_test as ui_test
async def test_button(self):
# Find a button
button = ui_test.find("Nice Window//Frame/**/Button[*]")
# button is a reference, actual omni.ui.Widget can be accessed:
print(type(button.widget)) # <class 'omni.ui._ui.Button'>
# Click on button
await button.click()
```
Refer to `omni.kit.ui_test` documentation for more examples and API.
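To illustrate the flavor of those query strings only, here is a tiny pure-Python sketch of glob-style matching over widget paths. This is not the real `omni.kit.ui_test` matcher (which also supports `//` and index queries like `Button[*]`); the widget names and single-`/` separators below are assumptions for the demo:

```python
from fnmatch import fnmatch


def match_path(pattern_parts, path_parts):
    """Glob-style match where '**' matches any number of intermediate segments."""
    if not pattern_parts:
        return not path_parts
    head, rest = pattern_parts[0], pattern_parts[1:]
    if head == "**":
        # '**' may consume zero or more path segments.
        return any(match_path(rest, path_parts[i:]) for i in range(len(path_parts) + 1))
    return bool(path_parts) and fnmatch(path_parts[0], head) and match_path(rest, path_parts[1:])


# Hypothetical widget paths, similar in spirit to ui_test queries:
widgets = [
    "Nice Window/Frame/VStack/Button_0",
    "Nice Window/Frame/VStack/HStack/Button_1",
    "Other Window/Frame/Label_0",
]
pattern = "Nice Window/Frame/**/Button_*".split("/")
matches = [w for w in widgets if match_path(pattern, w.split("/"))]
```

The recursive `**` handling is what lets a query find a button regardless of how deeply it is nested under the frame.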
## (Advanced) Generating new tests or adapting discovered tests at runtime
Python unit tests are discovered at runtime. We introduced a way to adapt and/or extend the list of tests
by implementing a custom `omni.kit.test.AsyncTestCase` class with a `def generate_extra_tests(self)` method.
This method allows:
- changes to discovered test case by mutating `self` instance
- generation of new test cases by returning a list of them
In general, this method is preferred when the same set of tests needs to be validated with multiple different
configurations. For example, when developing new subsystems while maintaining the old ones.
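As a rough plain-`unittest` analog (not the actual `omni.kit.test` API), generating one test per configuration can be sketched like this; the configuration names and the checked computation are hypothetical:

```python
import unittest

# Hypothetical configurations that should all produce the same result.
CONFIGS = {"old_backend": 2, "new_backend": 2}


def _make_test(config_name, expected):
    def test(self):
        # Stand-in for running the real computation under the given configuration.
        self.assertEqual(1 + 1, expected, f"config {config_name} disagrees")
    return test


class ConfigMatrixTests(unittest.TestCase):
    pass


# Attach one generated test method per configuration at import time,
# so each configuration shows up as its own named test in reports.
for name, expected in CONFIGS.items():
    setattr(ConfigMatrixTests, f"test_{name}", _make_test(name, expected))
```

Generating named methods (rather than looping inside one test) keeps per-configuration failures individually visible.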
tests_Overview.md | # Overview
This extension is the gold-standard for an extension that contains only OmniGraph Python nodes without a build process to create the generated OmniGraph files. They will be generated at run-time when the extension is enabled.
## The Files
To use this template first copy the entire directory into a location that is visible to the extension manager, such as `Documents/Kit/shared/exts`. You will end up with this directory structure. The highlighted lines should be renamed to match your extension, or removed if you do not want to use them.
```text
omni.graph.template.no_build/
config/
extension.toml
data/
icon.svg
preview.png
docs/
CHANGELOG.md
Overview.md
README.md
directory.txt
ogn/
nodes.json
omni/
graph/
template/
no_build/
__init__.py
_impl/
__init__.py
extension.py
nodes/
OgnTemplateNodeNoBuildPy.ogn
OgnTemplateNodeNoBuildPy.py
tests/
__init__.py
test_api.py
test_omni_graph_template_no_build.py
```
By convention the Python files are structured in a directory tree that matches a namespace corresponding to the extension name, in this case `omni/graph/template/no_build/`, which corresponds to the extension name *omni.graph.template.no_build*. You’ll want to modify this to match your own extension’s name.
The file `ogn/nodes.json` was manually written, usually being a byproduct of the build process. It contains a JSON list of all nodes implemented in this extension with the description, version, extension owner, and implementation language for each node. It is used in the extension window as a preview of nodes in the extension so it is a good idea to provide this file with your extension, though not mandatory.
The convention of having implementation details of a module in the `_impl/` subdirectory is to make it clear to the user that they should not be directly accessing anything in that directory, only what is exposed in the `__init__.py`.
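Since `ogn/nodes.json` is hand-written in this template, a small script can help keep it consistent with your nodes. The sketch below assumes a schema based only on the fields described above (description, version, owner, language); the exact keys and values used by the real file are assumptions here:

```python
import json

# Hypothetical node metadata gathered from the extension's .ogn files.
nodes = {
    "omni.graph.template.no_build.TemplateNodeNoBuildPy": {
        "description": "Example node implemented without a build process",
        "version": 1,
        "extension": "omni.graph.template.no_build",
        "language": "Python",
    }
}

# Serialize in a stable, readable form suitable for checking into the repo.
nodes_json = json.dumps({"nodes": nodes}, indent=4, sort_keys=True)
```

Keeping the file sorted and indented makes diffs small when node metadata changes.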
## The Configuration
Every extension requires a `config/extension.toml` file with metadata describing the extension to the extension management system. Below is the annotated version of this file, where the highlighted lines are the ones you should change to match your own extension.
```toml
# Main extension description values
[package]
# The current extension version number - uses [Semantic Versioning](https://semver.org/spec/v2.0.0.html)
version = "2.3.1"
# The title of the extension that will appear in the extension window
# Longer description of the extension
# Authors/owners of the extension - usually an email by convention
# Category under which the extension will be organized
# Location of the main README file describing the extension for extension developers
# Location of the main CHANGELOG file describing the modifications made to the extension during development
# Location of the repository in which the extension's source can be found
# Keywords to help identify the extension when searching
# Image that shows up in the preview pane of the extension window
# Image that shows up in the navigation pane of the extension window - can be a .png, .jpg, or .svg
# Specifying this ensures that the extension is always published for the matching version of the Kit SDK
# Specify the minimum level for support
# Main module for the Python interface. This is how the module will be imported.
[[python.module]]
name = "omni.graph.template.no_build"
# Watch the .ogn files for hot reloading. Only useful during development as after delivery files cannot be changed.
[fswatcher.patterns]
include = ["*.ogn", "*.py"]
exclude = ["Ogn*Database.py"]
# Other extensions that need to load in order for this one to work
[dependencies]
"omni.graph" = {} # For basic functionality and node registration
"omni.graph.tools" = {} # For node type code generation
# Main pages published as part of documentation. (Only if you build and publish your documentation.)
[documentation]
pages = [
"docs/Overview.md",
"docs/CHANGELOG.md",
]
# Some extensions are only needed when writing tests, including those automatically generated from a .ogn file.
# Having special test-only dependencies lets you avoid introducing a dependency on the test environment when only
# using the functionality.
[[test]]
dependencies = [
"omni.kit.test" # Brings in the Kit testing framework
]
```
Contained in this file are references to the icon file in `data/icon.svg` and the preview image in `data/preview.png` which control how your extension appears in the extension manager. You will want to customize those.
## Documentation
Everything in the `docs/` subdirectory is considered documentation for the extension.
- **README.md**
The contents of this file appear in the extension manager window so you will want to customize it. The location of this file is configured in the `extension.toml` file as the **readme** value.
- **CHANGELOG.md**
It is good practice to keep track of changes to your extension so that users know what is available. The location of this file is configured in the `extension.toml` file as the **changelog** value.
- **Overview.md**
This file is not usually required when not running a build process; in particular it is only used as part of documentation building and can be deleted.
- **directory.txt**
This file can be deleted as it is specific to these instructions.
# The Node Type Definitions
You define a new node type using two files, examples of which are in the `nodes/` subdirectory. Tailor the definition of your node types for your computations. Start with the OmniGraph User Guide for information on how to configure your own definitions.
# Tests
While completely optional it’s always a good idea to add a few tests for your node to ensure that it works as you intend it and continues to work when you make changes to it.
The sample tests in the `tests/` subdirectory show you how you can integrate with the Kit testing framework to easily run tests on nodes built from your node type definition.
That’s all there is to creating a simple node type! You can now open your app, enable the new extension, and your sample node type will be available to use within OmniGraph.
> **Note**: Although development is faster without a build process you are sacrificing discoverability of your node type. There will be no automated test or documentation generation, and your node types will not be visible in the extension manager. They will, however, still be visible in the OmniGraph editor windows. There will also be a small one-time performance price as the node type definitions will be generated the first time your extension is enabled.
Tf.md | # Tf module
Summary: The Tf (Tools Foundations) module.
## Tf – Tools Foundation
### Exceptions:
| Class Name | Description |
|------------|-------------|
| `CppException` | |
| `ErrorException(*args)` | |
### Classes:
| Class Name | Description |
|------------|-------------|
| `CallContext` | |
| `Debug` | |
| `DiagnosticType` | |
| `Enum` | |
| `Error` | |
| `MallocTag` | |
| `NamedTemporaryFile([suffix, prefix, dir, text])` | A named temporary file which keeps the internal file handle closed. |
| `Notice` | |
| Class Name | Description |
| --- | --- |
| PyModuleWasLoaded | A TfNotice that is sent when a script module is loaded. |
| RefPtrTracker | Provides tracking of TfRefPtr objects to particular objects. |
| ScopeDescription | This class is used to provide high-level descriptions about scopes of execution that could possibly block, or to provide relevant information about high-level action that would be useful in a crash report. |
| ScriptModuleLoader | Provides low-level facilities for shared modules with script bindings to register themselves with their dependences, and provides a mechanism whereby those script modules will be loaded when necessary. |
| Singleton | |
| StatusObject | |
| Stopwatch | |
| TemplateString | |
| Tf_PyEnumWrapper | |
| Tf_TestAnnotatedBoolResult | |
| Tf_TestPyContainerConversions | |
| Tf_TestPyOptional | |
| Type | TfType represents a dynamic runtime type. |
| Warning | |
**Functions:**
| Function Name | Description |
| --- | --- |
| Fatal(msg) | Raise a fatal error to the Tf Diagnostic system. |
| GetCodeLocation(framesUp) | Returns a tuple (moduleName, functionName, fileName, lineNo). |
| PrepareModule(module, result) | PrepareModule(module, result) -- Prepare an extension module at import time. |
| `PreparePythonModule([moduleName])` | Prepare an extension module at import time. |
| `RaiseCodingError(msg)` | Raise a coding error to the Tf Diagnostic system. |
| `RaiseRuntimeError(msg)` | Raise a runtime error to the Tf Diagnostic system. |
| `Status(msg[, verbose])` | Issues a status update to the Tf diagnostic system. |
| `Warn(msg[, template])` | Issue a warning via the TfDiagnostic system. |
| `WindowsImportWrapper()` | |
### pxr.Tf.CppException
**exception**
### pxr.Tf.ErrorException
**exception**(*args*)
### pxr.Tf.CallContext
**class**
**Attributes:**
| Attribute | Type |
|-----------|------|
| `file` | char |
| `function` | char |
| `line` | int |
| `prettyFunction` | char |
#### pxr.Tf.CallContext.file
**property**
**Type:** char

#### pxr.Tf.CallContext.function
**property**
**Type:** char

#### pxr.Tf.CallContext.line
**property**
**Type:** int

#### pxr.Tf.CallContext.prettyFunction
**property**
**Type:** char
### pxr.Tf.Debug
**Methods:**
- **GetDebugSymbolDescription**: classmethod GetDebugSymbolDescription(name) -> str
- Get a description for the specified debug symbol.
- Parameters:
- **name** (str) –
- **GetDebugSymbolDescriptions**: classmethod GetDebugSymbolDescriptions() -> str
- **GetDebugSymbolNames**: classmethod GetDebugSymbolNames() -> list[str]
- **IsDebugSymbolNameEnabled**: classmethod IsDebugSymbolNameEnabled(name) -> bool
- **SetDebugSymbolsByName**: classmethod SetDebugSymbolsByName(pattern, value) -> list[str]
- **SetOutputFile**: classmethod SetOutputFile(file) -> None
## GetDebugSymbolDescriptions
- **classmethod** GetDebugSymbolDescriptions() -> str
- Get a description of all debug symbols and their purpose.
- A single string describing all registered debug symbols along with short descriptions is returned.
## GetDebugSymbolNames
- **classmethod** GetDebugSymbolNames() -> list[str]
- Get a listing of all debug symbols.
## IsDebugSymbolNameEnabled
- **classmethod** IsDebugSymbolNameEnabled(name) -> bool
- True if the specified debug symbol is set.
- **Parameters**
- **name** (str) –
## SetDebugSymbolsByName
- **classmethod** SetDebugSymbolsByName(pattern, value) -> list[str]
- Set registered debug symbols matching `pattern` to `value`.
- All registered debug symbols matching `pattern` are set to `value`. The only matching is an exact match with `pattern`, or if `pattern` ends with an’*’as is otherwise a prefix of a debug symbols. The names of all debug symbols set by this call are returned as a vector.
- **Parameters**
- **pattern** (str) –
- **value** (bool) –
## SetOutputFile
- **classmethod** SetOutputFile(file) -> None
- Direct debug output to either stdout or stderr.
- Note that `file` MUST be either stdout or stderr. If not, issue an error and do nothing. Debug output is issued to stdout by default. If the environment variable TF_DEBUG_OUTPUT_FILE is set to’stderr’, then output is issued to stderr by default.
- **Parameters**
- **file** (FILE) –
### pxr.Tf.DiagnosticType

**Methods:**
- `GetValueFromName`

**Attributes:**
- `allValues` = (Tf.TF_DIAGNOSTIC_CODING_ERROR_TYPE, Tf.TF_DIAGNOSTIC_FATAL_CODING_ERROR_TYPE, Tf.TF_DIAGNOSTIC_RUNTIME_ERROR_TYPE, Tf.TF_DIAGNOSTIC_FATAL_ERROR_TYPE, Tf.TF_DIAGNOSTIC_NONFATAL_ERROR_TYPE, Tf.TF_DIAGNOSTIC_WARNING_TYPE, Tf.TF_DIAGNOSTIC_STATUS_TYPE, Tf.TF_APPLICATION_EXIT_TYPE)

### pxr.Tf.Enum

**Methods:**

#### GetValueFromFullName

```python
classmethod GetValueFromFullName(fullname, foundIt) -> Enum
```

Returns the enumerated value for a fully-qualified name. This takes a fully-qualified enumerated value name (e.g., "Season::WINTER") and returns the associated value. If there is no such name, this returns -1. Since -1 can sometimes be a valid value, the foundIt flag pointer, if not None, is set to true if the name was found and false otherwise. Also, since this is not a templated function, it has to return a generic value type, so we use TfEnum.

**Parameters**
- **fullname** (str) –
- **foundIt** (bool) –
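As a plain-Python analog of this lookup (not the pxr binding itself), resolving a fully-qualified name such as "Season::WINTER" can be sketched with the standard `enum` module; the `SCOPES` registry below is an invented stand-in for Tf's type registration:

```python
import enum


class Season(enum.Enum):
    WINTER = 0
    SPRING = 1


# Invented registry mapping scope names to enum classes.
SCOPES = {"Season": Season}


def get_value_from_full_name(fullname):
    """Return (value, found_it), mirroring the foundIt out-parameter."""
    scope, _, name = fullname.partition("::")
    enum_cls = SCOPES.get(scope)
    if enum_cls is None or name not in enum_cls.__members__:
        # -1 plus an explicit flag, since -1 alone can be a valid value.
        return -1, False
    return enum_cls[name], True
```

The separate boolean mirrors why `foundIt` exists: a sentinel return value alone cannot distinguish "not found" from a legitimate -1.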
### pxr.Tf.Error

**Classes:**
- `Mark`
**Attributes:**
- `errorCode` – The error code posted for this error.
- `errorCodeString` – The error code posted for this error, as a string.

#### pxr.Tf.Error.Mark

**Methods:**
- `Clear`
- `GetErrors` – A list of the errors held by this mark.
- `IsClean`
- `SetMark`
### pxr.Tf.MallocTag

**Classes:**
- `CallTree`

**Methods:**
### GetCallStacks
### GetCallTree
```python
classmethod
GetCallTree(tree, skipRepeated) -> bool
```
### GetMaxTotalBytes
```python
classmethod
GetMaxTotalBytes() -> int
```
### GetTotalBytes
```python
classmethod
GetTotalBytes() -> int
```
### Initialize
```python
classmethod
Initialize(errMsg) -> bool
```
### IsInitialized
```python
classmethod
IsInitialized() -> bool
```
### SetCapturedMallocStacksMatchList
```python
classmethod
SetCapturedMallocStacksMatchList(matchList) -> None
```
### SetDebugMatchList
```python
classmethod
SetDebugMatchList(matchList) -> None
```
#### pxr.Tf.MallocTag.CallTree

**Classes:**
- `CallSite`
- `PathNode`

**Methods:**
- `GetCallSites`
- `GetPrettyPrintString`
- `GetRoot`
- `LogReport`
- `Report`

##### pxr.Tf.MallocTag.CallTree.CallSite

**Attributes:**
- `nBytes`
- `name`

##### pxr.Tf.MallocTag.CallTree.PathNode

**Methods:**
- `GetChildren`

**Attributes:**
- `nAllocations`
- `nBytes`
- `nBytesDirect`
- `siteName`

### GetCallTree

```python
classmethod GetCallTree(tree, skipRepeated) -> bool
```

Return a snapshot of memory usage.

Returns a snapshot by writing into `*tree`. See the `CallTree` structure for documentation. If `Initialize()` has not been called, `*tree` is set to a rather blank structure (empty vectors, empty strings, zero in all integral fields) and false is returned; otherwise, `*tree` is set with the contents of the current memory snapshot and true is returned. It is fine to call this function on the same `*tree` instance; each call simply overwrites the data from the last call. If `skipRepeated` is true, then any repeated callsite is skipped. See the `CallTree` documentation for more details.
### Parameters
- **tree** (CallTree) –
- **skipRepeated** (bool) –
### GetMaxTotalBytes
```python
classmethod GetMaxTotalBytes() -> int
```
Return the maximum total number of bytes that have ever been allocated at one time. This is simply the maximum value of GetTotalBytes() since Initialize() was called.
### GetTotalBytes
```python
classmethod GetTotalBytes() -> int
```
Return total number of allocated bytes. The current total memory that has been allocated and not freed is returned. Memory allocated before calling Initialize() is not accounted for.
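The relationship between the two counters can be illustrated with a toy tracker (pure Python, not the MallocTag implementation): the maximum is simply the high-water mark of the running total since initialization.

```python
class ToyAllocationTracker:
    """Illustrates total vs. max-total bookkeeping; not the real MallocTag."""

    def __init__(self):
        self._total = 0
        self._max_total = 0

    def allocate(self, n_bytes):
        self._total += n_bytes
        # The max only moves when the running total sets a new record.
        self._max_total = max(self._max_total, self._total)

    def free(self, n_bytes):
        self._total -= n_bytes

    def get_total_bytes(self):
        return self._total

    def get_max_total_bytes(self):
        return self._max_total


tracker = ToyAllocationTracker()
tracker.allocate(100)
tracker.allocate(50)   # total 150, max 150
tracker.free(120)      # total 30, max stays 150
```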
### Initialize
```python
classmethod Initialize(errMsg) -> bool
```
Initialize the memory tagging system. This function returns true if the memory tagging system can be successfully initialized or it has already been initialized. Otherwise, \*errMsg is set with an explanation for the failure. Until the system is initialized, the various memory reporting calls will indicate that no memory has been allocated. Note also that memory allocated prior to calling Initialize() is not tracked i.e. all data refers to allocations that happen subsequent to calling Initialize().
#### Parameters
- **errMsg** (str) –
### IsInitialized
```python
classmethod IsInitialized() -> bool
```
Return true if the tagging system is active. If Initialize() has been successfully called, this function returns true.
### SetCapturedMallocStacksMatchList
```python
classmethod SetCapturedMallocStacksMatchList(matchList) -> None
```
Sets the tags to trace.
When memory is allocated for any tag that matches `matchList`, a stack trace is recorded. When that memory is released, the stack trace is discarded. Clients can call `GetCapturedMallocStacks()` to get a list of all recorded stack traces. This is useful for finding leaks.
Traces recorded for any tag that will no longer be matched are discarded by this call. Traces recorded for tags that continue to be matched are retained.
`matchList` is a comma, tab or newline separated list of malloc tag names. The names can have internal spaces but leading and trailing spaces are stripped. If a name ends in `*`, then the suffix is wildcarded. A name can have a leading `-` or `+` to prevent or allow a match. Each name is considered in order and later matches override earlier matches. For example, `Csd*, -CsdScene::_Populate*, +CsdScene::_PopulatePrimCacheLocal` matches any malloc tag starting with `Csd` but nothing starting with `CsdScene::_Populate` except `CsdScene::_PopulatePrimCacheLocal`. Use the empty string to disable stack capturing.
**Parameters:**
- **matchList** (`str`) –
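The match-list grammar described above (comma/tab/newline separated names, a trailing `*` wildcard, `+`/`-` prefixes, later entries overriding earlier ones) can be sketched in pure Python. This is an illustration of the documented rules, not the Tf implementation:

```python
def matches_tag(match_list, tag):
    """Apply the documented match-list rules; later entries override earlier ones."""
    result = False
    for raw in match_list.replace("\t", ",").replace("\n", ",").split(","):
        name = raw.strip()
        if not name:
            continue  # the empty string matches nothing (capturing disabled)
        allow = True
        if name[0] in "+-":
            allow = name[0] == "+"
            name = name[1:].strip()
        # A trailing '*' wildcards the suffix; otherwise require an exact match.
        hit = tag.startswith(name[:-1]) if name.endswith("*") else tag == name
        if hit:
            result = allow
    return result


# The example from the description above:
rules = "Csd*, -CsdScene::_Populate*, +CsdScene::_PopulatePrimCacheLocal"
```

Because each entry overwrites the result of earlier hits, order matters: broad allows come first, then narrower denies, then exact re-allows.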
### SetDebugMatchList
```python
classmethod SetDebugMatchList(matchList) -> None
```
Sets the tags to trap in the debugger.
When memory is allocated or freed for any tag that matches `matchList`, the debugger trap is invoked. If a debugger is attached, the program will stop in the debugger, otherwise the program will continue to run. See `ArchDebuggerTrap()` and `ArchDebuggerWait()`.
`matchList` is a comma, tab or newline separated list of malloc tag names. The names can have internal spaces but leading and trailing spaces are stripped. If a name ends in `*`, then the suffix is wildcarded. A name can have a leading `-` or `+` to prevent or allow a match. Each name is considered in order and later matches override earlier matches. For example, `Csd*, -CsdScene::_Populate*, +CsdScene::_PopulatePrimCacheLocal` matches any malloc tag starting with `Csd` but nothing starting with `CsdScene::_Populate` except `CsdScene::_PopulatePrimCacheLocal`. Use the empty string to disable debugging traps.
**Parameters:**
- **matchList** (`str`) –
### NamedTemporaryFile
```python
class pxr.Tf.NamedTemporaryFile(suffix='', prefix='', dir=None, text=False)
```
A named temporary file which keeps the internal file handle closed. A class which constructs a temporary file (that isn’t open) on `__enter__`, provides its name as an attribute, and deletes it on `__exit__`.
Note: The constructor args for this object match those of python’s `tempfile.mkstemp()` function, and will have the same effect on the underlying file created.
**Attributes:**
- **name** (`str`) – The path of the temporary file created.
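Since the constructor args match `tempfile.mkstemp()`, the behavior can be sketched with the standard library alone. This is a hypothetical analog, not the Tf implementation:

```python
import os
import tempfile

class ClosedNamedTemporaryFile:
    """Rough stdlib analog of Tf.NamedTemporaryFile: constructs a temp
    file whose internal handle is kept closed, exposes its path as the
    .name attribute, and deletes the file on __exit__."""

    def __init__(self, suffix="", prefix="", dir=None, text=False):
        # Same argument order and meaning as tempfile.mkstemp().
        self._args = (suffix, prefix, dir, text)

    def __enter__(self):
        fd, self.name = tempfile.mkstemp(*self._args)
        os.close(fd)  # keep the internal file handle closed
        return self

    def __exit__(self, *exc):
        os.remove(self.name)
        return False
```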
### Notice

```python
class pxr.Tf.Notice
```

**Classes:**

- **Listener** – Represents the Notice connection between senders and receivers of notices.

**Methods:**

- **Register(noticeType, callback, sender)**
- **RegisterGlobally(noticeType, callback)**
- **Send(sender)**
- **SendGlobally()**
### Notice.Listener

```python
class pxr.Tf.Notice.Listener
```

Represents the Notice connection between senders and receivers of notices. When a Listener object expires the connection is broken. You can also use the Revoke() function to break the connection. A Listener object is returned from the Register() and RegisterGlobally() functions.

**Methods:**

- **Revoke()** – Revoke interest by a notice listener. This function revokes interest in the particular notice type and call-back method that its Listener object was registered for.
#### Register

```python
static Register(noticeType, callback, sender) -> Listener
```

Register a listener as being interested in a TfNotice type from a specific sender. The notice listener will get the sender as an argument. Registration of interest in a notice class N automatically registers interest in all classes derived from N. When a notice of appropriate type is received, the listening object’s member-function method is called with the notice. To reverse the registration, call Revoke() on the Listener object returned by this call.

**Parameters:**
- **noticeType** (`Tf.Notice`) –
- **callback** (`function`) –
- **sender** (`object`) –
#### RegisterGlobally

```python
static RegisterGlobally(noticeType, callback) -> Listener
```

Register a listener as being interested in a TfNotice type from any sender. The notice listener does not get the sender as an argument.

**Parameters:**
- **noticeType** (`Tf.Notice`) –
- **callback** (`function`) –
#### Send

```python
Send(sender)
```

Deliver the notice to interested listeners, returning the number of interested listeners. This is the recommended form of Send. It takes the sender as an argument. Listeners that registered for the given sender AND listeners that registered globally will get the notice.

**Parameters:**
- **sender** (`object`) –
#### SendGlobally

```python
SendGlobally()
```

Deliver the notice to interested listeners. For most clients it is recommended to use the Send(sender) version of Send() rather than this one. Clients that use this form of Send will prevent listeners from being able to register to receive notices based on the sender of the notice. ONLY listeners that registered globally will get the notice.
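The per-sender versus global delivery rules can be sketched with a minimal pure-Python observer. This is a hypothetical stand-in for the Tf machinery, not its implementation:

```python
class MiniNotice:
    """Sketch of TfNotice-style delivery: per-sender and global listeners.

    Using isinstance() for the type check mirrors the documented rule that
    registering for a notice class N also registers for classes derived
    from N."""

    _listeners = []  # entries of (notice_type, callback, sender_or_None)

    @classmethod
    def Register(cls, notice_type, callback, sender):
        entry = (notice_type, callback, sender)
        cls._listeners.append(entry)
        return entry  # stands in for the Listener handle

    @classmethod
    def RegisterGlobally(cls, notice_type, callback):
        return cls.Register(notice_type, callback, None)

    @classmethod
    def Revoke(cls, listener):
        cls._listeners.remove(listener)

    def Send(self, sender):
        """Deliver to listeners registered for this sender AND global
        listeners; return the number of interested listeners."""
        count = 0
        for ntype, cb, s in list(self._listeners):
            if isinstance(self, ntype) and (s is None or s is sender):
                cb(self, sender)
                count += 1
        return count

    def SendGlobally(self):
        """Deliver ONLY to globally registered listeners."""
        count = 0
        for ntype, cb, s in list(self._listeners):
            if isinstance(self, ntype) and s is None:
                cb(self, None)
                count += 1
        return count
```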
### PyModuleWasLoaded

```python
class pxr.Tf.PyModuleWasLoaded
```

A TfNotice that is sent when a script module is loaded. Since many modules may be loaded at once, listeners are encouraged to defer work triggered by this notice to the end of an application iteration. This, of course, is good practice in general.

**Methods:**

- **name()** – Return the name of the module that was loaded.

#### name

```python
name() -> str
```

Return the name of the module that was loaded.
### RefPtrTracker

```python
class pxr.Tf.RefPtrTracker
```

Provides tracking of TfRefPtr objects to particular objects.

Clients can enable, at compile time, tracking of `TfRefPtr` objects that point to particular instances of classes derived from `TfRefBase`. This is useful if you have a ref counted object with a ref count that should’ve gone to zero but didn’t. This tracker can tell you every `TfRefPtr` that’s holding the `TfRefBase` and a stack trace where it was created or last assigned to.

Clients can get a report of all watched instances and how many `TfRefPtr` objects are holding them using `ReportAllWatchedCounts()` (in python use `Tf.RefPtrTracker().GetAllWatchedCountsReport()`). You can see all of the stack traces using `ReportAllTraces()` (in python use `Tf.RefPtrTracker().GetAllTracesReport()`).

Clients will typically enable tracking using code like this:

```cpp
class MyRefBaseType;
typedef TfRefPtr<MyRefBaseType> MyRefBaseTypeRefPtr;

TF_DECLARE_REFPTR_TRACK(MyRefBaseType);

class MyRefBaseType {
    ...
public:
    static bool _ShouldWatch(const MyRefBaseType*);
    ...
};

TF_DEFINE_REFPTR_TRACK(MyRefBaseType, MyRefBaseType::_ShouldWatch);
```

Note that the `TF_DECLARE_REFPTR_TRACK()` macro must be invoked before any use of the `MyRefBaseTypeRefPtr` type.

The `MyRefBaseType::_ShouldWatch()` function returns `true` if the given instance of `MyRefBaseType` should be tracked. You can also use `TfRefPtrTracker::WatchAll()` to watch every instance (but that might use a lot of memory and time).

If you have a base type, `B`, and a derived type, `D`, and you hold instances of `D` in a `TfRefPtr<B>` (i.e. a pointer to the base) then you must track both type `B` and type `D`. But you can use `TfRefPtrTracker::WatchNone()` when tracking `B` if you’re not interested in instances of `B`.

**Methods:**

| Method | Description |
| --- | --- |
| `GetAllTracesReport` | |
| `GetAllWatchedCountsReport` | |
| `GetTracesReportForWatched` | |

**Attributes:**

| Attribute | Description |
| --- | --- |
| `expired` | True if this object has expired, False otherwise. |
## pxr.Tf.RefPtrTracker.GetAllWatchedCountsReport
## pxr.Tf.RefPtrTracker.GetTracesReportForWatched
## pxr.Tf.RefPtrTracker.expired
- **Description**: True if this object has expired, False otherwise.
## pxr.Tf.ScopeDescription
- **Description**: This class is used to provide high-level descriptions about scopes of execution that could possibly block, or to provide relevant information about high-level action that would be useful in a crash report.
- **Performance Note**: This class is reasonably fast to use, especially if the message strings are not dynamically created, however it should not be used in very highly performance sensitive contexts. The cost to push & pop is essentially a TLS lookup plus a couple of atomic operations.
### Methods:
- **SetDescription(description)**: Replace the description stack entry for this scope description.
- **Parameters**:
- **description** (str)
- **Description**: Caller guarantees that the string `description` lives at least as long as this TfScopeDescription object.
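The push-and-pop behavior described above (a TLS lookup plus bookkeeping per scope) can be sketched as a thread-local stack in Python. This is a hypothetical analog, not the Tf implementation:

```python
import threading

_scopes = threading.local()

class ScopeDescription:
    """Sketch of TfScopeDescription: a thread-local stack of strings
    describing what the current scope of execution is doing."""

    def __init__(self, description):
        self.description = description

    def __enter__(self):
        if getattr(_scopes, "stack", None) is None:
            _scopes.stack = []
        _scopes.stack.append(self.description)
        return self

    def __exit__(self, *exc):
        _scopes.stack.pop()
        return False

    def SetDescription(self, description):
        # In this sketch, only valid for the innermost active scope.
        self.description = description
        _scopes.stack[-1] = description

def current_scope_descriptions():
    """Return the descriptions for the calling thread, outermost first."""
    return list(getattr(_scopes, "stack", []))
```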
## pxr.Tf.ScriptModuleLoader
- **Description**: Provides low-level facilities for shared modules with script bindings to register themselves with their dependencies, and provides a mechanism whereby those script modules will be loaded when necessary. Currently, this is when one of our script modules is loaded, when TfPyInitialize is called, and when Plug opens shared modules.
- **Usage Note**: Generally, user code will not make use of this.
### Methods
- **GetModuleNames()**
- Return a list of all currently known modules in a valid dependency order.
- **GetModulesDict()**
- Return a python dict containing all currently known modules under their canonical names.
- **WriteDotFile(file)**
- Write a graphviz dot-file for the dependency graph of all currently known modules to `file`.
- Parameters:
- **file** (str) –
### Attributes
- **expired**
- True if this object has expired, False otherwise.
### Classes
- **pxr.Tf.Singleton**
- **pxr.Tf.StatusObject**
- **pxr.Tf.Stopwatch**
## Stopwatch Methods:
| Method | Description |
| --- | --- |
| `AddFrom(t)` | Adds the accumulated time and sample count from `t` into the `TfStopwatch`. |
| `Reset()` | Resets the accumulated time and the sample count to zero. |
| `Start()` | Record the current time for use by the next `Stop()` call. |
| `Stop()` | Increases the accumulated time stored in the `TfStopwatch`. |
## Stopwatch Attributes:
| Attribute | Description |
| --- | --- |
| `microseconds` | int |
| `milliseconds` | int |
| `nanoseconds` | int |
| `sampleCount` | int |
| `seconds` | float |
## AddFrom Method
**Description:**
Adds the accumulated time and sample count from `t` into the `TfStopwatch`.
**Example:**
`t2.AddFrom(t1)` will add `t1`'s time and sample count into `t2`.
**Parameters:**
- `t` (Stopwatch) –
### Reset

Resets the accumulated time and the sample count to zero.

### Start

Record the current time for use by the next `Stop()` call.

The `Start()` function records the current time. A subsequent call to `Start()` before a call to `Stop()` simply records a later current time, but does not change the accumulated time of the `TfStopwatch`.

### Stop

Increases the accumulated time stored in the `TfStopwatch`.

The `Stop()` function increases the accumulated time by the duration between the current time and the last time recorded by a `Start()` call. A subsequent call to `Stop()` before another call to `Start()` will therefore double-count time and throw off the results.

A `TfStopwatch` also counts the number of samples it has taken. The "sample count" is simply the number of times that `Stop()` has been called.
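The accumulate-and-count semantics above can be sketched with `time.perf_counter()`. This is a minimal analog for illustration, not the TfStopwatch implementation:

```python
import time

class Stopwatch:
    """Minimal analog of TfStopwatch."""

    def __init__(self):
        self.Reset()

    def Reset(self):
        self._accum = 0.0
        self.sampleCount = 0
        self._start = None

    def Start(self):
        # A second Start() before Stop() simply records a later time.
        self._start = time.perf_counter()

    def Stop(self):
        # Accumulate the duration since the last Start() and count a sample.
        self._accum += time.perf_counter() - self._start
        self.sampleCount += 1

    def AddFrom(self, other):
        """Add other's accumulated time and sample count into this one."""
        self._accum += other._accum
        self.sampleCount += other.sampleCount

    @property
    def seconds(self):
        return self._accum
```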
### property microseconds

int

Return the accumulated time in microseconds.

Note that 45 minutes will overflow a 32-bit counter, so take care to save the result in an `int64_t`, and not a regular `int` or `long`.

### property milliseconds

int

Return the accumulated time in milliseconds.
### property nanoseconds

int

Return the accumulated time in nanoseconds.

Note that this number can easily overflow a 32-bit counter, so take care to save the result in an `int64_t`, and not a regular `int` or `long`.
### property sampleCount

int

Return the current sample count.

The sample count, which is simply the number of calls to `Stop()` since creation or a call to `Reset()`, is useful for computing average running times of a repeated task.
### property seconds

float

Return the accumulated time in seconds as a `double`.
### TemplateString
class pxr.Tf.TemplateString
**Methods:**
| Method | Description |
|--------|-------------|
| `GetEmptyMapping()` | Returns an empty mapping for the current template. |
| `GetParseErrors()` | Returns any error messages generated during template parsing. |
| `SafeSubstitute(arg1)` | Like Substitute(), except that if placeholders are missing from the mapping, instead of raising a coding error, the original placeholder will appear in the resulting string intact. |
| `Substitute(arg1)` | Performs the template substitution, returning a new string. |
**Attributes:**
| Attribute | Type |
|-----------|------|
| `template` | str |
| `valid` | bool |
#### GetEmptyMapping
Returns an empty mapping for the current template.
This method first calls IsValid to ensure that the template is valid.
### GetParseErrors
Returns any error messages generated during template parsing.
### SafeSubstitute
Like Substitute(), except that if placeholders are missing from the mapping, instead of raising a coding error, the original placeholder will appear in the resulting string intact.
**Parameters**
- **arg1** (Mapping) –
### Substitute
Performs the template substitution, returning a new string.
The mapping contains keys which match the placeholders in the template. If a placeholder is found for which no mapping is present, a coding error is raised.
**Parameters**
- **arg1** (Mapping) –
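`Tf.TemplateString` mirrors the semantics of Python's standard `string.Template`, which makes for a convenient stdlib illustration of `Substitute` versus `SafeSubstitute`:

```python
from string import Template

t = Template("Hello, $name! Your id is $id.")

# Substitute-style: every placeholder must be present in the mapping,
# otherwise an error is raised.
full = t.substitute({"name": "Ada", "id": 7})

# SafeSubstitute-style: missing placeholders are left intact in the
# result instead of raising an error.
partial = t.safe_substitute({"name": "Ada"})
```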
### template

str

Returns the template source string supplied to the constructor.

### valid

bool

Returns true if the current template is well formed. Empty templates are valid.
### Tf_PyEnumWrapper
**Attributes:**
- displayName
- fullName
- name
- value
#### property displayName

#### property fullName

#### property name

#### property value

### Tf_TestAnnotatedBoolResult

```python
class pxr.Tf.Tf_TestAnnotatedBoolResult
```
**Attributes:**
| Attribute | Description |
|-----------|-------------|
| `annotation` | |
#### property annotation

### Tf_TestPyContainerConversions

```python
class pxr.Tf.Tf_TestPyContainerConversions
```
**Methods:**
| Method | Description |
|--------|-------------|
| `GetPairTimesTwo()` | |
| `GetTokens()` | |
| `GetVectorTimesTwo()` | |
```python
static GetPairTimesTwo()
```
```python
static GetTokens()
```
### pxr.Tf.Tf_TestPyOptional
**Methods:**
- **TakesOptional**
- **TestOptionalChar**
- **TestOptionalDouble**
- **TestOptionalFloat**
- **TestOptionalInt**
- **TestOptionalLong**
- **TestOptionalShort**
- **TestOptionalString**
- **TestOptionalStringVector**
- **TestOptionalUChar**
- **TestOptionalUInt**
- **TestOptionalULong**
- **TestOptionalUShort**
All of the methods listed above (`TakesOptional` and the `TestOptional*` family) are static methods.

### Type

```python
class pxr.Tf.Type
```

TfType represents a dynamic runtime type. TfTypes are created and discovered at runtime, rather than compile time.
Features:
- unique typename
- safe across DSO boundaries
- can represent C++ types, pure Python types, or Python subclasses of wrapped C++ types
- lightweight value semantics you can copy and default construct TfType, unlike std::type_info.
- totally ordered can use as a std::map key
Methods:
- AddAlias(base, name) -> None
- Define() -> Type
- Find() -> Type
- FindByName(name) -> Type
- FindDerivedByName(name) -> Type
- GetAliases(derivedType) -> Returns a vector of the aliases registered for the derivedType under this, the base type.
- GetAllAncestorTypes(result) -> Build a vector of all ancestor types inherited by this type.
- GetAllDerivedTypes(result) -> Return the set of all types derived (directly or indirectly) from this type.
- GetRoot() -> Type
- IsA(queryType) -> Return true if this type is the same as or derived from queryType
Attributes:

| Attribute | Type |
| --- | --- |
| Unknown | - |
| baseTypes | list[Type] |
| derivedTypes | - |
| isEnumType | bool |
| isPlainOldDataType | bool |
| isUnknown | bool |
| pythonClass | TfPyObjWrapper |
| sizeof | int |
| typeName | str |
### AddAlias
```python
classmethod AddAlias(base, name) -> None
```
Add an alias name for this type under the given base type.
Aliases are similar to typedefs in C++: they provide an alternate name for a type. The alias is defined with respect to the given `base` type. Aliases must be unique with respect to both other aliases beneath that base type and names of derived types of that base.
**Parameters**
- **base** (`Type`) –
- **name** (`str`) –
```python
AddAlias(name) -> None
```
Add an alias for DERIVED beneath BASE.
This is a convenience method, that declares both DERIVED and BASE as TfTypes before adding the alias.
**Parameters**
- **name** (`str`) –
### Define
```python
classmethod Define() -> Type
```
Define a TfType with the given C++ type T and C++ base types B.
Each of the base types will be declared (but not defined) as TfTypes if they have not already been.
The typeName of the created TfType will be the canonical demangled RTTI type name, as defined by GetCanonicalTypeName().
It is an error to attempt to define a type that has already been defined.
```python
Define() -> Type
```
Define a TfType with the given C++ type T and no bases.
See the other Define() template for more details.
C++ does not allow default template arguments for function templates, so we provide this separate definition for the case of no bases.
## Find Method

### Description

**classmethod** Find() -> Type

Retrieve the `TfType` corresponding to type `T`.

The type `T` must have been declared or defined in the type system or the `TfType` corresponding to an unknown type is returned.

See also: IsUnknown()

---

Find(obj) -> Type

Retrieve the `TfType` corresponding to `obj`.

The `TfType` corresponding to the actual object represented by `obj` is returned; this may not be the object returned by `TfType::Find<T>()` if `T` is a polymorphic type.

This works for Python subclasses of the C++ type `T` as well, as long as `T` has been wrapped using TfPyPolymorphic.

Of course, the object’s type must have been declared or defined in the type system or the `TfType` corresponding to an unknown type is returned.

See also: IsUnknown()

#### Parameters
- **obj** (`T`) –

---

Find(t) -> Type

Retrieve the `TfType` corresponding to an object with the given `type_info`.

#### Parameters
- **t** (`type_info`) –
## FindByName Method
### Description
**classmethod** FindByName(name) -> Type
Retrieve the `TfType` corresponding to the given `name`.
Every type defined in the TfType system has a unique, implementation independent name. In addition, aliases can be added to identify a type underneath a specific base type; see TfType::AddAlias(). The given name will first be tried as an alias under the root type, and subsequently as a typename.
This method is equivalent to:
```python
TfType::GetRoot().FindDerivedByName(name)
```
For any object `obj`,
```python
Find(obj) == FindByName( Find(obj).GetTypeName() )
```
#### Parameters
- **name** (`str`) –
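The alias-then-typename lookup order described above can be sketched with a toy registry. This is a hypothetical helper for illustration, not the Tf API:

```python
class TypeRegistry:
    """Sketch of TfType name lookup: a given name is first tried as an
    alias, then as a canonical typename; unknown names map to a
    sentinel 'Unknown' type."""

    def __init__(self):
        self._by_name = {}
        self._aliases = {}  # alias name -> canonical typename

    def Define(self, typename):
        self._by_name[typename] = typename
        return typename

    def AddAlias(self, name, typename):
        # Alias must be unique under the base type; this sketch assumes
        # a single (root) namespace for simplicity.
        self._aliases[name] = typename

    def FindByName(self, name):
        canonical = self._aliases.get(name, name)
        return self._by_name.get(canonical, "Unknown")
```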
## FindDerivedByName Method
### Description
**classmethod** FindDerivedByName(name) -> Type
Retrieve the `TfType` that derives from this type and has the given alias or typename.
AddAlias
#### Parameters
- **name** (`str`) –
---

FindDerivedByName(name) -> Type

Retrieve the `TfType` that derives from BASE and has the given alias or typename.

This is a convenience method.

#### Parameters
- **name** (`str`) –
<dt class="sig sig-object py" id="pxr.Tf.Type.GetAliases">
<span class="sig-name descname">
<span class="pre">
GetAliases
<span class="sig-paren">
(
<em class="sig-param">
<span class="n">
<span class="pre">
derivedType
<span class="sig-paren">
)
<span class="sig-return">
<span class="sig-return-icon">
→
<span class="sig-return-typehint">
<span class="pre">
list
<span class="p">
<span class="pre">
[
<span class="pre">
str
<span class="p">
<span class="pre">
]
<a class="headerlink" href="#pxr.Tf.Type.GetAliases" title="Permalink to this definition">
<dd>
<p>
Returns a vector of the aliases registered for the derivedType under this, the base type.
<p>
AddAlias()
<dl class="field-list simple">
<dt class="field-odd">
Parameters
<dd class="field-odd">
<p>
<strong>
derivedType
(<em>Type
## GetAllAncestorTypes Method

### Description

GetAllAncestorTypes(result) -> None

Build a vector of all ancestor types inherited by this type.

The starting type is itself included, as the first element of the results vector.

Types are given in "C3" resolution order, as used for new-style classes starting in Python 2.3. This algorithm is more complicated than a simple depth-first traversal of base classes, in order to prevent some subtle errors with multiple inheritance. See the references below for more background.

This can be expensive; consider caching the results. TfType does not cache this itself since it is not needed internally.

References:

- Guido van Rossum. "Unifying types and classes in Python 2.2: Method resolution order."
- Barrett, Cassels, Haahr, Moon, Playford, Withington. "A Monotonic Superclass Linearization for Dylan." OOPSLA 96.

#### Parameters
- **result** (`list`) –
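The C3 order mentioned above is the same method-resolution order Python itself uses for classes, so plain Python can illustrate the resulting ancestor ordering:

```python
# Diamond inheritance: D derives from B and C, which both derive from A.
class A: pass
class B(A): pass
class C(A): pass
class D(B, C): pass

# C3 linearization: the starting class comes first, each class precedes
# its bases, and B precedes C because of the declaration order in D(B, C).
ancestors = [cls.__name__ for cls in D.mro()]
```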
## GetAllDerivedTypes Method

### Description

GetAllDerivedTypes(result) -> None

Return the set of all types derived (directly or indirectly) from this type.

#### Parameters
- **result** (`set`) –
## GetRoot Method

### Description

**classmethod** GetRoot() -> Type

Return the root type of the type hierarchy.

All known types derive (directly or indirectly) from the root. If a type is specified with no bases, it is implicitly considered to derive from the root type.
## IsA Method

### Description

IsA(queryType) -> bool

Return true if this type is the same as or derived from `queryType`. If `queryType` is unknown, this always returns `false`.

#### Parameters
- **queryType** (`Type`) –

---

IsA() -> bool

Return true if this type is the same as or derived from T. This is equivalent to:

```python
IsA(Find<T>())
```
Unknown = Tf.Type.Unknown

### property baseTypes

list[Type]

Return a vector of types from which this type was derived.

### property derivedTypes

### property isEnumType

bool

Return true if this is an enum type.

### property isPlainOldDataType

bool

Return true if this is a plain old data type, as defined by C++.

### property isUnknown

bool

Return true if this is the unknown type, representing a type unknown to the TfType system. The unknown type does not derive from the root type, or any other type.

### property pythonClass

TfPyObjWrapper

Return the Python class object for this type. If this type is unknown or has not yet had a Python class defined, this will return `None`, as an empty `TfPyObjWrapper`.

See also: DefinePythonClass()
### property sizeof

int

Return the size required to hold an instance of this type on the stack (does not include any heap allocated memory the instance uses).

This is what the C++ sizeof operator returns for the type, so this value is not very useful for Python types (it will always be sizeof(boost::python::object)).

### property typeName

str

Return the machine-independent name for this type. This name is specified when the TfType is declared.

See also: Declare()
### class Warning

### function Fatal

Raise a fatal error to the Tf Diagnostic system.

### function GetCodeLocation

Returns a tuple (moduleName, functionName, fileName, lineNo).

To trace the current location of python execution, use GetCodeLocation(). By default, the information is returned at the current stack-frame; thus:

```python
info = GetCodeLocation()
```

will return information about the line that GetCodeLocation() was called from. One can write:

```python
def genericDebugFacility():
    info = GetCodeLocation(1)
    # print out data

def someCode():
    ...
    if bad:
        genericDebugFacility()
```

and genericDebugFacility() will get information associated with its caller, i.e. the function someCode().

### function PrepareModule

PrepareModule(module, result) – Prepare an extension module at import time. Generally, this should only be called by the `__init__.py` script for a module upon loading a boost python module (generally `_libName.so`).

### function PreparePythonModule

PreparePythonModule(moduleName=None) – Prepare a Python module at import time.
## pxr.Tf.PreparePythonModule
Prepare an extension module at import time. This will import the Python module associated with the caller’s module (e.g. ‘_tf’ for ‘pxr.Tf’) or the module with the specified moduleName and copy its contents into the caller’s local namespace.
Generally, this should only be called by the __init__.py script for a module upon loading a boost python module (generally ‘_libName.so’).
## pxr.Tf.RaiseCodingError
Raise a coding error to the Tf Diagnostic system.
## pxr.Tf.RaiseRuntimeError
Raise a runtime error to the Tf Diagnostic system.
## pxr.Tf.Status
Issues a status update to the Tf diagnostic system.
If verbose is True (the default) then information about where in the code the status update was issued from is included.
## pxr.Tf.Warn
Issue a warning via the TfDiagnostic system.
At this time, template is ignored.
## pxr.Tf.WindowsImportWrapper | 46,538 |
the-telemetry-transmitter_KitTelemetry.md | # Configuring the `omni.kit.telemetry` Extension
## Overview
The `omni.kit.telemetry` extension is responsible for a few major tasks. These largely occur in the background and require no direct interaction from the rest of the app. All of this behavior occurs during the startup of the extension automatically. The major tasks that occur during extension startup are:
1. Launch the telemetry transmitter app. This app is shipped with the extension and is responsible for parsing, validating, and transmitting all structured log messages produced by the app. Only the specific messages that have been approved and validated will be transmitted. More on this below.
2. Collect system information and emit structured log messages and crash reporter metadata values for it. The collected system information includes CPU, memory, OS, GPU, and display information. Only information about the capabilities of the system is collected, never any user specific information.
3. Emit various startup events. This includes events that identify the run environment being used (ie: cloud/enterprise/individual, cloud node/cluster name, etc), the name and version of the app, the various session IDs (ie: telemetry, launcher, cloud, etc), and the point at which the app is ready for the user to interact with it.
4. Provide interfaces that allow some limited access to information about the session. The `omni::telemetry::ITelemetry` and `omni::telemetry::ITelemetry2` interfaces can be used to access this information. These interfaces are read-only for the most part.
Once the extension has successfully started up, it is generally not interacted with again for the duration of the app’s session.
## The Telemetry Transmitter
The telemetry transmitter is a separate app that is bundled with the `omni.kit.telemetry` extension and launched during the extension’s startup. For the most part its configuration is automatic, but it can be influenced by passing specific settings to the Kit based app itself. In general, any settings under the `/telemetry/` settings branch are passed directly on to the transmitter when it is launched, though some of them may be slightly adjusted or extended depending on the launch mode. The transmitter process also inherits any settings under the `/log/` settings branch (with a few exceptions) and the `/structuredLog/extraFields/` settings branch.
In almost all cases, the transmitter process will be unique in the system. At any given time, only a single instance of the transmitter process will be running. If another instance of the transmitter is launched while another one is running, the new instance will immediately exit. This single instance of the transmitter will however handle events produced by all Kit based apps, even if multiple apps are running simultaneously. This limitation can be overcome by specifying a new launch guard name with the `/telemetry/launchGuardName` setting, but is not recommended without also including additional configuration changes for the transmitter such as the log folder to be scanned. Having multiple transmitters running simultaneously could result in duplicate messages being sent and more contention on accessing log files.
When the transmitter is successfully launched, it keeps track of how many Kit based apps have attempted to launch it and continues to run until all of those apps have exited. This is true regardless of how each Kit based app exits - whether through a normal exit, a crash, or termination by the user. The transmitter will only exit early in two cases: if it detects that another instance is already running, or if it detects that the user has not given any consent to transmit data. In the latter case, the transmitter exits because it has no job to perform without user consent.
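A single-instance guard of the kind described above is commonly implemented with an exclusively created lock file keyed by the guard name. The sketch below is a hypothetical illustration of that pattern only; it is not the transmitter's actual mechanism (which also reference-counts the launching apps):

```python
import os
import tempfile

class SingleInstanceGuard:
    """Exclusively create a guard file named after the launch guard; if
    the file already exists, another instance holds the guard and the
    new instance should immediately exit."""

    def __init__(self, guard_name):
        self.path = os.path.join(tempfile.gettempdir(), guard_name + ".lock")
        self.acquired = False

    def acquire(self):
        try:
            # O_EXCL makes creation atomic: it fails if the file exists.
            fd = os.open(self.path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            os.close(fd)
            self.acquired = True
        except FileExistsError:
            self.acquired = False
        return self.acquired

    def release(self):
        if self.acquired:
            os.remove(self.path)
            self.acquired = False
```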
When the transmitter is run with authentication enabled (ie: the `/telemetry/transmitter/0/authenticate=true` or `/telemetry/authenticate=true` settings), it requires a way to deliver the authentication token to it. This is usually provided by downloading a JSON file from a certain configurable URL. The authentication token may arrive with an expiry time. The transmitter will request a renewed authentication token only once the expiry time has passed. The authentication token is never stored locally in a file by the transmitter. If the transmitter is unable to acquire an authentication token for any reason (ie: URL not available, attempt to download the token failed or was rejected, etc), that endpoint in the transmitter will simply pause its event processing queue until a valid authentication token can be acquired.
When the transmitter starts up, it performs the following checks:
- Reads the current privacy consent settings for the user. These settings are found in the `privacy.toml` file that the Kit based app loaded on startup. By default this file is located in `~/.nvidia-omniverse/config/privacy.toml` but can be relocated for a session using the `/structuredLog/privacySettingsFile` setting.
- Loads its configuration settings and builds all the requested transmission profiles. The same set of parsed, validated events can be sent to multiple endpoints if the transmitter is configured to do so.
- Downloads the appropriate approved schemas package for the current telemetry mode. Each schema in the package is then loaded and validated. Information about each event in each schema is then stored internally.
- Parses out the extra fields passed to it. Each of the named extra fields will be added to each validated message before it is transmitted.
- In newer versions of the transmitter (v0.5.0 and later), the list of current schema IDs is downloaded and parsed if running in ‘open endpoint’ mode (ie: authentication is off and the `schemaid` extra field is passed on to it). This is used to set the latest value for the `schemaid` field.
- Outputs its startup settings to its log file. Depending on how the Kit based app is launched, this log file defaults to either `${kit}/logs/` or `~/.nvidia-omniverse/logs/`. The default name for the log file is `omni.telemetry.transmitter.log`.
While the transmitter is running, it repeatedly performs the following operations:
- Scans the log directory for new structured log messages. If no new messages are found, the transmitter will sleep for one minute (by default) before trying again.
- All new messages that are found are then validated against the set of loaded events. Any message that fails validation (ie: not formatted correctly or its event type isn’t present in the approved events list) will simply be dropped and not transmitted.
- Send the set of new approved, validated events to each of the requested endpoints. The transmitter will remove any endpoint that repeatedly fails to be contacted but continue doing its job for all other endpoints. If all endpoints are removed, the transmitter will simply exit.
- Update the progress tags for each endpoint in each log file to indicate how far into the log file it has successfully processed and transmitted. If the transmitter exits and the log files persist, the next run will simply pick up where it left off.
- Check whether the transmitter should exit. This can occur if all of the launching Kit based apps have exited or if all endpoints have been removed due to them being unreachable.
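The scan-and-validate behavior described above can be sketched in plain Python. This is an illustrative analogue only, not the transmitter's actual code; the event type names and message shape are hypothetical.

```python
import json

# Hypothetical set of event types loaded from an approved-schemas package.
APPROVED_EVENTS = {"com.nvidia.kit.app_start", "com.nvidia.kit.app_exit"}

def validate_messages(raw_lines):
    """Drop messages that are malformed or whose event type is not approved."""
    approved = []
    for line in raw_lines:
        try:
            message = json.loads(line)
        except json.JSONDecodeError:
            continue  # not formatted correctly -> dropped, never transmitted
        if message.get("type") not in APPROVED_EVENTS:
            continue  # event type not in the approved list -> dropped
        approved.append(message)
    return approved

log_lines = [
    '{"type": "com.nvidia.kit.app_start", "data": {}}',
    'not json at all',
    '{"type": "com.example.unapproved", "data": {}}',
]
print(len(validate_messages(log_lines)))  # only the first message survives
```

Only messages that both parse and match an approved event type would ever be forwarded to an endpoint.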
## Anonymous Data Mode
An anonymous data mode is also supported for Omniverse telemetry. This guarantees that all user information is cleared out, if loaded, very early on startup. Enabling this also enables open endpoint usage, and sets the transmitter to ‘production’ mode. All consent levels will also be enabled once a random user ID is chosen for the session. This mode is enabled using the `/telemetry/enableAnonymousData` setting (boolean).
## Configuration Options Available to the `omni.kit.telemetry` Extension
The `omni.kit.telemetry` will do its best to automatically detect the mode that it should run in. However, sometimes an app can be run in a setting where the correct mode cannot be accurately detected. In these cases the extension will just fall back to its default mode. The current mode can be explicitly chosen using the `/telemetry/mode` setting. However, some choices of mode (ie: ‘test’) may not function properly without the correct build of the extension and transmitter. The extension can run in the following modes:
- `Production`: Only transmits events that are approved for public users. Internal-only events will only be emitted to local log files and will not be transmitted anywhere. The default transmission endpoint is Kratos (public). This is the default mode.
- `Developer`: Transmits events that are approved for both public users and internal users. The default transmission endpoints are Kratos (public) and NVDF (internal only).
- `Test`: Sends only locally defined test events. This mode is typically only used for early iterative testing purposes during development. This mode in the transmitter allows locally defined schemas to be provided. The default transmission endpoints are Kratos (public) and NVDF (internal only).
The extension also detects the ‘run environment’ it is in as best it can. This detection cannot be overridden by a setting. The current run environment can be retrieved with the `omni::telemetry::ITelemetry2::getRunEnvironment()` function (C++) or the `omni.telemetry.ITelemetry2().run_environment` property (python). The following run environments are detected and supported:
- **Individual**: This is the default mode. This launches the transmitter in its default mode as well (ie: `production` unless otherwise specified). If consent is given, all generated and approved telemetry events will be sent to both Kratos (public) and NVDF (internal only). This mode requires that the user be logged into the Omniverse Launcher app since it provides the authentication information that the public data endpoint requires. If the Omniverse Launcher is not running, data transmission will just be paused until the Launcher app is running. This mode is chosen only if no others are detected. This run environment is typically picked for individual users who install their Omniverse apps through the desktop Omniverse Launcher app. This run environment is referred to as “OVI”.
- **Cloud**: This launches the transmitter in ‘cloud’ mode. In this mode the final output from the transmitter is not sent anywhere, but rather written to a local file on disk. The intent is that another log consumer service will monitor for changes on this log file and consume events as they become available. This allows more control over which data is ingested and how that data is ingested. This run environment is typically launched through the Omniverse Cloud cockpit web portal and is referred to as “OVC”.
- **Enterprise**: This launches the transmitter in ‘enterprise’ mode. In this mode, data is sent to an open endpoint data collector. No authentication is needed in this mode. The data coming in does however get validated before storing. This run environment is typically detected when using the Omniverse Enterprise Launcher app to install or launch the Kit based app. This run environment is referred to as “OVE”.
Many of the structured logging and telemetry settings that come from the Carbonite components of the telemetry system also affect how the `omni.kit.telemetry` extension starts up. Some of the more useful settings that affect this are listed below. Other settings listed in the above Carbonite documentation can be referred to for additional information.
The following settings can control the startup behavior of the `omni.kit.telemetry` extension, the transmitter launch, and structured logging for the app:
- Settings used for configuring the transmitter to use an open endpoint:
- `/structuredLog/privacySettingsFile`: Sets the location of the privacy settings TOML file. This setting should only be used when configuring an app in a container to use a special privacy settings file instead of the default one. The default location and name for this file is `~/.nvidia-omniverse/config/privacy.toml`. This setting is undefined by default.
- `/telemetry/openTestEndpointUrl`: Sets the URL to use as the test mode open endpoint URL for the transmitter.
- `/telemetry/openEndpointUrl`: Sets the URL to use as the dev or production mode open endpoint URL for the transmitter.
- `/telemetry/enterpriseOpenTestEndpointUrl`: Sets the URL to use as the test mode open endpoint URL for OVE for the transmitter.
- `/telemetry/enterpriseOpenEndpointUrl`: Sets the URL to use as the dev or production mode open endpoint URL for OVE for the transmitter.
- `/telemetry/useOpenEndpoint`: Boolean value to explicitly launch the transmitter in ‘open endpoint’ mode. This defaults to `false`.
- `/telemetry/enableAnonymousData`: Boolean value to override several other telemetry, privacy, and endpoint settings. This defaults to `false`.
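Settings like these are passed on the command line in the `--/path=value` form used throughout this guide. As a sketch, the flag strings can be assembled like this; the helper function and the chosen settings are hypothetical.

```python
def settings_args(settings):
    """Turn a dict of Kit settings into --/path=value command-line arguments."""
    args = []
    for path, value in settings.items():
        if isinstance(value, bool):
            # Kit accepts true/false for boolean settings.
            value = "true" if value else "false"
        args.append(f"--{path}={value}")
    return args

# Hypothetical launch enabling anonymous data mode.
argv = ["kit.exe"] + settings_args({
    "/telemetry/enableAnonymousData": True,
})
print(argv)  # ['kit.exe', '--/telemetry/enableAnonymousData=true']
```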
1. **Extension Startup**:
- `omni.kit.telemetry` is initialized when the extension starts up.
2. **Logging Control Settings**:
- `/telemetry/log/level`: Sets the logging level passed to the transmitter. This defaults to `warning`.
- `/telemetry/log/file`: Sets the logging output filename passed to the transmitter. This defaults to `omni.telemetry.transmitter.log` in the structured log system’s log directory (default: `~/.nvidia-omniverse/logs/`).
- Any other `/log/` settings, except `/log/enableStandardStreamOutput`, `/log/file`, and `/log/level`, are inherited by the transmitter.
- Any settings under `/structuredLog/extraFields/` are passed along to the transmitter unmodified.
- Any settings under `/telemetry/` are passed along to the transmitter unmodified.
- The `/structuredLog/privacySettingsFile` setting is passed along to the transmitter if specified.
- The `/structuredLog/logDirectory` setting is passed on to the transmitter if explicitly given.
- `/telemetry/testLogFile`: Specifies the path to a special log file for additional transmitter information. This defaults to disabling the test log.
3. **Telemetry Destination Control Settings**:
- `/telemetry/enableNVDF`: Controls whether the NVDF endpoint is added to the transmitter during launch in OVI run environments. Enabled by default.
- `/telemetry/nvdfTestEndpoint`: Specifies whether the 'test' or 'production' NVDF endpoint should be used. This defaults to `false`.
- `/telemetry/endpoint`: Overrides the default public endpoint used in the transmitter. This defaults to an empty string.
- `/telemetry/cloudLogEndpoint`: Allows overriding the default endpoint for OVC. This defaults to `file:///${omni_logs}/kit.transmitter.out.log`. Note that the server name must be `localhost` or left blank.
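For the OVC case above, where `/telemetry/cloudLogEndpoint` defaults to a local file, a log consumer service reads events from that file as they appear. A minimal sketch of such a consumer, assuming newline-delimited JSON events (the actual file format is not specified here):

```python
import json
import os
import tempfile

def consume_transmitter_log(path):
    """Read newline-delimited JSON events written by the transmitter in cloud mode."""
    events = []
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                events.append(json.loads(line))
    return events

# Stand-in for ${omni_logs}/kit.transmitter.out.log with two hypothetical events.
fd, path = tempfile.mkstemp(suffix=".log")
with os.fdopen(fd, "w", encoding="utf-8") as f:
    f.write('{"type": "com.nvidia.kit.app_start"}\n')
    f.write('{"type": "com.nvidia.kit.app_exit"}\n')

events_read = consume_transmitter_log(path)
os.remove(path)
print(len(events_read))  # 2
```

A real consumer would watch the file for changes and remember its read offset rather than re-reading from the start.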
# Configuration Settings
## File System Access Settings
- **Localhost Access**: When accessing files on localhost (or when no host is specified), the absolute file path on the given host system is used. On Windows, this might look like `file:///c:/path/to/file.txt`. On POSIX systems, since absolute paths always start with a slash ('/'), it might look like `file:////path/to/file.txt`.
## Extension Startup Behavior Settings
- **Serial Startup**: The `/exts/omni.kit.telemetry/skipDeferredStartup` setting allows the extension's startup tasks to run serially instead of in parallel. This is useful for unit testing to ensure all startup tasks complete before tests run. It defaults to `false`.
## OVC Run Environment Settings
- **Cluster Name**: The `/cloud/cluster` setting specifies the name of the cluster the session will run on. It defaults to an empty string.
- **Node Name**: The `/cloud/node` setting specifies the name of the node the session will run on. It defaults to an empty string.
- **Telemetry Extra Fields**: The `/telemetry/extraFieldsToAdd` setting specifies which extra fields under `/structuredLog/extraFields/` should be added to each message by the transmitter. It should be a comma-separated list of key names and defaults to an empty string.
- **Run Environment**: The `/telemetry/runEnvironment` setting specifies the run environment detected by the `omni.kit.telemetry` extension. This is automatically passed to the telemetry transmitter in open-endpoint mode.
# Crash Reporter Metadata
The `omni.kit.telemetry` extension manages several crash reporter metadata values:
- **Environment Name**: Originally set by Kit-kernel, it can be modified by `omni.kit.telemetry` if left at `default`. It will then be replaced by the current detected run environment, which can be `Individual`, `Enterprise`, or `Cloud`.
- **Run Environment**: Contains the current detected run environment, which can be `Individual`, `Enterprise`, or `Cloud`.
- **External Build**: Set to `true` if the current Kit app is run by an external user or is not detected as an internal-only session. Set to `false` if an internal user or session is detected.
- **Launcher Session ID**: Set to the session ID for the launcher if the OVI launcher app is currently running in the system.
- `cloudPodSessionId`: If in the OVC run environment, this will contain the cloud session ID.
- `cpuName`: The friendly name of the system’s main CPU.
- `cpuId`: The internal ID of the system’s main CPU.
- `cpuVendor`: The name of the system’s main CPU vendor.
- `osName`: The friendly name of the operating system.
- `osDistro`: The distribution name of the operating system.
- `osVersion`: The detailed version number or code of the operating system.
- `primaryDisplayRes`: The resolution of the system’s primary display (if any).
- `desktopSize`: The size of the entire system desktop for the current user.
- `desktopOrigin`: The top-left origin point of the desktop window. On some systems this may just be (0, 0), but others such as Windows allow for negative origin points.
- `displayCount`: The number of attached displays (if any).
- `displayRes_<n>`: The current resolution in pixels of the n-th display.
- `gpu_<n>`: The name of the n-th GPU attached to the system.
- `gpuVRAM_<n>`: The amount of video memory the n-th GPU attached to the system has.
- `gpuDriver_<n>`: The active driver version for the n-th GPU attached to the system. | 18,283 |
tokens.md | # Tokens
Kit supports tokens to make the configuration more flexible. They take the form of `${token}`. They are implemented in `carb.tokens` Carbonite plugin.
Most of the tokens in settings are resolved when configuration is loaded. Some settings are set later and for those, it is each extension’s responsibility to resolve tokens in them.
Tokens are most often used to build various filesystem paths. The following is a list of commonly used tokens that are always available:
## App Tokens
- `${app_name}` - Application name, e.g.: `Create`, `View`, `omni.create` (`/app/name` setting, otherwise name of kit file)
- `${app_filename}` - Application kit filename, e.g.: `omni.create`, `omni.app.mini`
- `${app_version}` - Application version, e.g.: `2022.3.0-rc.5` (`/app/version` setting, otherwise version in kit file)
- `${app_version_short}` - Application major.minor version, e.g. `2022.3` (version of the app from kit file)
- `${kit_version}` - Kit version, e.g.: `105.0+master.123.b1255276.tc`
- `${kit_version_short}` - Kit major.minor version, e.g.: `105.0`
- `${kit_git_hash}` - Kit git hash, e.g.: `b1255276`
When running without an app, e.g. `kit.exe --enable [ext]`, then `${app_name}` becomes `kit` and app version equals to the kit version.
# Environment Variable Tokens
Tokens can be used to read environment variables:
- **${env:VAR_NAME}** - Environment variable, e.g.:
**${env:USERPROFILE}**
# Path Tokens
There are a few important folders that Kit provides for extensions to read and write to. While they may look similar, there are conceptual differences between them.
For each of them, there are a few tokens. They point to app-specific and Omniverse-wide versions of a folder. They are also influenced by whether Kit is running in “portable mode” (developer mode, `--portable`) or not.
For some token **${folder}** that represents a folder named **[FOLDER NAME]** it will look like this:
- **${folder}** - Kit app specific version:
- portable mode: **[PORTABLE ROOT]/[FOLDER NAME]/Kit/${app_name}/${app_version_short}**.
- non-portable mode: **[SYSTEM PATH]/[FOLDER NAME]/Kit/${app_name}/${app_version_short}**.
- **${omni_folder}** - Omniverse wide version:
- portable mode: **[PORTABLE ROOT]/[FOLDER NAME]**.
- non-portable mode: **[SYSTEM PATH]/[FOLDER NAME]**.
- **${omni_global_folder}** - Omniverse wide version, that is not influenced by portable mode:
- portable mode: **[SYSTEM PATH]/[FOLDER NAME]**.
- non-portable mode: **[SYSTEM PATH]/[FOLDER NAME]**.
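As an illustration of how such folder tokens expand, here is a stand-in resolver in plain Python. It mimics only the `${token}` substitution; the token values are example assumptions, and real code should use `carb.tokens` instead.

```python
import re

# Hypothetical token values for a non-portable Windows session.
TOKENS = {
    "app_name": "omni.create",
    "app_version_short": "2022.3",
    "data": "C:/Users/user/AppData/Local/ov/data/Kit/omni.create/2022.3",
}

def resolve(text, tokens=TOKENS):
    """Replace every ${token} occurrence with its value, as carb.tokens would."""
    return re.sub(r"\$\{(\w+)\}", lambda m: tokens[m.group(1)], text)

print(resolve("${data}/user.config.json"))
# C:/Users/user/AppData/Local/ov/data/Kit/omni.create/2022.3/user.config.json
```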
# Data Folder
The data folder is a per-user system folder to store persistent data. This folder is different for every OS user.
Data folder is where an application can write anything that must reliably persist between sessions. For example, user settings are stored there.
- **${data}** - kit app specific version, e.g.: **C:/Users/[user]/AppData/Local/ov/data/Kit/${app_name}/${app_version_short}**.
- **${omni_data}** - Omniverse wide version, e.g.: **C:/Users/[user]/AppData/Local/ov/data**.
- **${omni_global_data}** - Omniverse wide version, that is not influenced by portable mode.
# Program data
The program data folder is a global system folder to store persistent data. This system folder is shared by all OS users.
Otherwise it can be used the same way as the data folder.
- **${app_program_data}** - kit app specific version, e.g.: **C:/ProgramData/NVIDIA Corporation/kit/${app_name}**.
- **${shared_program_data}** - Kit wide version, e.g.: **C:/ProgramData/NVIDIA Corporation/kit**.
- **${omni_program_data}** - System wide version, e.g.: **C:/ProgramData**.
## Documents folder
The documents folder is a system folder to store the user's data. Typically it is like a user home directory, where the user can store anything. For example, it is the default location when picking where to save a stage.
- `${app_documents}` - kit app specific version, e.g.:
```
C:/Users/[user]/Documents/Kit/apps/${app_name}
```
- `${shared_documents}` - kit wide version, e.g.:
```
C:/Users/[user]/Documents/Kit/shared
```
- `${omni_documents}` - Omniverse wide version, e.g.:
```
C:/Users/[user]/Documents/Kit
```
## Cache folder
The cache folder is a system folder to be used for caching. It can be cleaned up between runs (usually it is not), and an application should be able to rebuild the cache if it is missing.
- `[KIT VERSION SHORT]` - Kit major.minor version, like
```
105.0
```
- `[KIT GIT HASH]` - Kit git hash, like
```
a1b2c4d4
```
- `${cache}` - kit specific version, e.g.:
```
C:/Users/[user]/AppData/Local/ov/cache/Kit/[KIT VERSION SHORT]/[KIT GIT HASH]
```
- `${omni_cache}` - Omniverse wide version, e.g.:
```
C:/Users/[user]/AppData/Local/ov/cache
```
- `${omni_global_cache}` - Omniverse wide version, that is not influenced by portable mode.
## Logs folder
System folder to store logs.
- `${logs}` - kit app specific version, e.g.:
```
C:/Users/[user]/.nvidia-omniverse/logs/Kit/${app_name}/${app_version_short}
```
- `${omni_logs}` - Omniverse wide version, e.g.:
```
C:/Users/[user]/.nvidia-omniverse/logs
```
- `${omni_global_logs}` - Omniverse wide version, that is not influenced by portable mode:
## Config folder
System folder where Omniverse config `omniverse.toml` is read from:
- `${omni_config}` - Omniverse wide version, e.g.:
```
C:/Users/[user]/.nvidia-omniverse/config
```
- `${omni_global_config}` - Omniverse wide version, that is not influenced by portable mode:
## Temporary folder
The temporary folder is provided by the OS and is cleaned between runs.
- `${temp}` - e.g.:
```
C:/Users/[user]/AppData/Local/Temp/xplw.0
```
## Other useful paths
- `${kit}` - path to the Kit folder, where the Kit executable is (it is not always the same executable as was used to run currently, because someone could run from `python.exe`).
- `${app}` - path to the app; if loaded with `--merge-config` that will be a folder where this config is.
- `${python}` - path to the python interpreter executable.
## Platform tokens
- `${config}` - whether `debug` or `release` build is running.
- `${platform}` - target platform Kit is running on, e.g. `windows-x86_64`.
- `${lib_ext}` - `.dll` on Windows, `.so` on Linux, `.dylib` on Mac OS.
- `${lib_prefix}` - empty on Windows, `lib` on Linux and Mac OS.
- `${bindings_ext}` - `.pyd` on Windows, `.so` on Linux and Mac OS.
- `${exe_ext}` - `.exe` on Windows, empty on Linux and Mac OS.
- `${shell_ext}` - `.bat` on Windows, `.sh` on Linux and Mac OS.
## Extension tokens
Each extension sets a token with the extension name and extension folder path. See [Extension Tokens](#ext-tokens).
## Overriding Tokens
Some tokens can be overridden by using `/app/tokens` setting namespace. E.g.: `--/app/tokens/data="C:/data"`.
## Checking Token Values
Kit logs all tokens in INFO log level, search for `Tokens:`. Either look in a log file or run with `-v`.
You can also print all tokens using settings:
```python
import carb.settings
settings = carb.settings.get_settings()
print(settings.get("/app/tokens"))
```
## Resolving your path
To make your path (or string) support tokens you must resolve it before using it, like this:
```cpp
path = carb::tokens::resolveString(carb::getCachedInterface<carb::tokens::ITokens>(), path);
```
```python
import carb.tokens
path = carb.tokens.get_tokens_interface().resolve(path)
```
Toolkit.md | # Toolkit
The OmniGraph toolkit window provides a variety of inspection and debugging tools that are mostly of interest to low level developers and node writers. It is constantly evolving to provide the tools most relevant to developers, both internal and external.
## Memory
The top section shows information on memory usage. The memory use must be updated manually as it changes constantly. The values shown in this section are only a rough guide, as not all memory is tracked internally. It is more useful as a tool to see changes in memory usage as a session proceeds, to flag potential memory leaks.
## Inspection
This section has functions that inspect internal data in OmniGraph, implemented via the `omni.inspect` extension. Each of the functions has a “?” icon through which you can get details on what it outputs and what each of the flags appearing beside it represent.
| Feature | Description |
|------------------------|-----------------------------------------------------------------------------|
| Dump Fabric | Presents a JSON format view of the data OmniGraph uses to link to the underlying Fabric data used for computation |
| Dump Graph | Presents a JSON format view of the data known to the currently existing OmniGraphs |
| Dump Registry | Presents a JSON format dictionary of all registered node types. It’s worth noting here that nodes that are implemented in Python will be in a sub-list labeled `PythonNode`. |
| Dump Categories | Presents a JSON format list of all of the currently known node type categories |
| Extensions | Extracts information regarding the known extensions downstream of `omni.graph.core` and presents it in a JSON format that can be processed to understand where OmniGraph features are being used |
| Extension Processing Timing | Displays the timing information gathered when OmniGraph notices that a new extension has been enabled and it processes the extension to find and register any existing OmniGraph nodes and tests. |
## Scene Manipulation
This section contains a number of features that show and manipulate high level contents of the current stage.
| Feature | Description |
|------------------------|-----------------------------------------------------------------------------|
| Extensions Used By Nodes | List the current node types in use, grouped by the extension in which they were defined |
| Node Extensions Missing | List a guess at extensions not currently enabled, but which define existing nodes. Until the extensions are enabled such nodes will not operate correctly |
| Nodes In All Extensions | List every node type known to OmniGraph, grouped by the extension in which they were defined |
| Generate .ogn From Selected Node | Take the attributes defined on the currently selected node and create a basic .ogn file that describes it. This can be used for a raw Prim with properties defined through USD as well as an existing OmniGraph node with extra attributes added dynamically. |
## Debug output
For most Inspection operations and some Scene Manipulation operations an output will be produced. This output will be shown in this section, as well as being copied into the global Python variable `omni.graph.core.DEBUG_DATA`. The section also provides a copy feature when the right mouse button is pressed, so that you can paste the data into an external editor.
Here is how you might process the JSON data that the Dump Graph button provides as output:
```python
import json
import omni.graph.core as og
graph_data = json.loads(og.DEBUG_DATA)
for graph_path, graph_info in graph_data.items():
print(f"Showing graph information at {graph_path}")
print(f" Evaluator Type = {graph_info['Evaluator']}")
print(f" Node Count = {len(graph_info['Nodes'])}")
```
Trace.md | # Trace module
## Summary
The Trace module provides performance tracking utility classes for counting, timing, measuring, recording, and reporting events.
## Module Overview
Trace – Utilities for counting and recording events.
### Classes
| Class | Description |
|-------|-------------|
| `AggregateNode` | A representation of a call tree. |
| `Collector` | This is a singleton class that records TraceEvent instances and populates TraceCollection instances. |
| `Reporter` | This class converts streams of TraceEvent objects into call trees which can then be used as a data source to a GUI or written out to a file. |
### Functions
| Function | Description |
|----------|-------------|
| `TraceFunction(obj)` | A decorator that enables tracing the function that it decorates. |
| `TraceMethod(obj)` | A convenience. |
| `TraceScope(label)` | A context manager that calls BeginEvent on the global collector on enter and EndEvent on exit. |
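The `TraceScope` behavior (BeginEvent on enter, EndEvent on exit) can be illustrated with a pure-Python analogue that records events into a list instead of using the real collector:

```python
from contextlib import contextmanager

events = []  # stand-in for the global collector's event stream

@contextmanager
def trace_scope(label):
    """Record a begin event on enter and an end event on exit, like Trace.TraceScope."""
    events.append(("begin", label))
    try:
        yield
    finally:
        events.append(("end", label))

# Nested scopes produce properly paired begin/end events.
with trace_scope("LoadStage"):
    with trace_scope("ParseLayer"):
        pass

print(events)
# [('begin', 'LoadStage'), ('begin', 'ParseLayer'), ('end', 'ParseLayer'), ('end', 'LoadStage')]
```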
### Detailed Class Description
#### AggregateNode
A representation of a call tree. Each node represents one or more calls that occurred in the trace. Multiple calls to a child node are aggregated into one node.
**Attributes:**
| Property | Type | Description |
|----------|------|-------------|
| children | list[TraceAggregateNodePtr] | |
| count | int | Returns the call count of this node. |
| exclusiveCount | int | Returns the exclusive count. |
| exclusiveTime | TimeStamp | Returns the time spent in this node but not its children. |
| expanded | bool | Returns whether this node is expanded in a gui. |
| expired | bool | True if this object has expired, False otherwise. |
| id | Id | Returns the node's id. |
| inclusiveTime | TimeStamp | Returns the total time of this node and its children. |
| key | str | Returns the node's key. |
### expanded
- **Type**: bool
- **Description**: Returns whether this node is expanded in a gui. Setting it sets whether or not this node is expanded in a gui.
### expired
- **Description**: True if this object has expired, False otherwise.
### id
- **Type**: Id
- **Description**: Returns the node's id.
### inclusiveTime
- **Type**: TimeStamp
- **Description**: Returns the total time of this node and its children.
### key
- **Type**: str
- **Description**: Returns the node's key.
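The timing attributes above are related: exclusive time is inclusive time minus the inclusive time of the children. A minimal stand-in class (not the pxr class) demonstrates the relationship:

```python
class Node:
    """Minimal stand-in for Trace.AggregateNode timing semantics."""

    def __init__(self, key, inclusive_time, children=None):
        self.key = key
        self.inclusive_time = inclusive_time
        self.children = children or []

    @property
    def exclusive_time(self):
        # Time spent in this node but not in its children.
        return self.inclusive_time - sum(c.inclusive_time for c in self.children)

root = Node("main", 100.0, [Node("load", 60.0), Node("render", 25.0)])
print(root.exclusive_time)  # 15.0
```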
### Collector
- **Description**: This is a singleton class that records TraceEvent instances and populates TraceCollection instances. All public methods of TraceCollector are safe to call from any thread.
- **Methods**:
- **BeginEvent(key)**: Record a begin event with *key* if Category is enabled.
- **BeginEventAtTime(key, ms)**: Record a begin event with *key* at a specified time if Category is enabled.
- **Clear()**: Clear all pending events from the collector.
- **EndEvent(key)**: Record an end event with *key* if Category is enabled.
- **EndEventAtTime(key, ms)**: Record an end event with *key* at a specified time if Category is enabled.
- **GetLabel()**: Return the label associated with this collector.
**Attributes:**
- `enabled` (bool) – Whether collection of events is enabled for `DefaultCategory`.
- `expired` – True if this object has expired, False otherwise.
- `pythonTracingEnabled` (bool) – Whether automatic tracing of all python scopes is enabled.
### BeginEvent
```python
BeginEvent(key) -> TimeStamp
```
Record a begin event with **key** if `Category` is enabled.
A matching end event is expected some time in the future.
If the key is known at compile time `BeginScope` and `Scope` methods are preferred because they have lower overhead.
Returns the TimeStamp of the TraceEvent, or 0 if the collector is disabled.
**Parameters:**
- **key** (Key) –
### BeginEventAtTime
```python
BeginEventAtTime(key, ms) -> None
```
Record a begin event with **key** at a specified time if `Category` is enabled.
This version of the method allows the passing of a specific number of elapsed milliseconds, **ms**, to use for this event. This method is used for testing and debugging code.
**Parameters:**
- **key** (Key) –
- **ms** (float) –
### Clear
```python
Clear() -> None
```
Clear all pending events from the collector.
No TraceCollection will be made for these events.
### EndEvent
```python
EndEvent(key) -> TimeStamp
```
Record an end event with **key** if `Category` is enabled.
A matching begin event must have preceded this end event.
If the key is known at compile time the `EndScope` and `Scope` methods are preferred because they have lower overhead.
Returns the TimeStamp of the TraceEvent, or 0 if the collector is disabled.
**Parameters:**
- **key** (Key) –
### EndEventAtTime
```python
EndEventAtTime(key, ms) -> None
```
Record an end event with **key** at a specified time if `Category` is enabled.
This version of the method allows the passing of a specific number of elapsed milliseconds, **ms**, to use for this event. This method is used for testing and debugging code.
**Parameters:**
- **key** (Key) –
- **ms** (float) –
### GetLabel
```python
GetLabel() -> str
```
Return the label associated with this collector.
### enabled
- **Type**: bool
- **Description**: Returns whether collection of events is enabled for `DefaultCategory`. Setting it enables or disables collection of events for `DefaultCategory`.
### expired
- **Description**: True if this object has expired, False otherwise.
### pythonTracingEnabled
- **Type**: bool
- **Description**: Returns whether automatic tracing of all python scopes is enabled. Setting it enables or disables automatic tracing of all python scopes.
#### Reporter
This class converts streams of TraceEvent objects into call trees which can then be used as a data source to a GUI or written out to a file.
### pxr.Trace.Reporter
This class converts streams of TraceEvent objects into call trees which can then be used as a data source to a GUI or written out to a file.
**Methods:**
- **ClearTree()**
Clears event tree and counters.
- **GetLabel()**
Return the label associated with this reporter.
- **Report(s, iterationCount)**
Generates a report to the ostream `s`, dividing all times by `iterationCount`.
- **ReportChromeTracing(s)**
Generates a timeline trace report suitable for viewing in Chrome's trace viewer.
- **ReportChromeTracingToFile()**
(No description provided)
- **ReportTimes(s)**
Generates a report of the times to the ostream `s`.
- **UpdateTraceTrees()**
This fully re-builds the event and aggregate trees from whatever the current collection holds.
**Attributes:**
- **aggregateTreeRoot**
AggregateNode
- **expired**
True if this object has expired, False otherwise.
- **foldRecursiveCalls**
bool
- **globalReporter**
(No description provided)
- **groupByFunction**
bool
- **shouldAdjustForOverheadAndNoise**
None
### ClearTree
Clears event tree and counters.
### GetLabel
Return the label associated with this reporter.
### Report
Generates a report to the ostream `s`, dividing all times by `iterationCount`.
**Parameters**
- **s** (`ostream`) –
- **iterationCount** (`int`) –
### ReportChromeTracing
Generates a timeline trace report suitable for viewing in Chrome’s trace viewer.
**Parameters**
- **s** (`ostream`) –
### ReportChromeTracingToFile
### ReportTimes
Generates a report of the times to the ostream `s`.
**Parameters**
- **s** (`ostream`) –
### UpdateTraceTrees
This fully re-builds the event and aggregate trees from whatever the current collection holds.
It is OK to call this multiple times in case the collection gets appended to in between.
If we want to have multiple reporters per collector, this will need to be changed so that all reporters reporting on a collector update their respective trees.
### aggregateTreeRoot
- **Type**: AggregateNode
- **Description**: Returns the root node of the aggregated call tree.
### expired
- **Description**: True if this object has expired, False otherwise.
### foldRecursiveCalls
- **Type**: bool
- **Description**: Returns the current setting for recursion folding for stack trace event reporting.
- **Details**:
- When stack trace event reporting, this sets whether or not recursive calls are folded in the output.
- Recursion folding is useful when the stacks contain deep recursive structures.
### globalReporter
- **Description**: GlobalReporter
### groupByFunction
- **Type**: bool
- **Description**: Returns the current group-by-function state.
- **Details**:
- This affects only stack trace event reporting.
- If true then all events in a function are grouped together otherwise events are split out by address.
### shouldAdjustForOverheadAndNoise
- **Description**: Set whether or not the reporter should adjust scope times for overhead and noise.
### TraceFunction
- **Description**: A decorator that enables tracing the function that it decorates. If you decorate with ‘TraceFunction’ the function will be traced in the global collector.
### TraceMethod
- **Description**: A convenience. Same as TraceFunction but changes the recorded label to use the term ‘method’ rather than ‘function’.
### TraceScope
- **Description**: TraceScope(label) – A context manager that calls BeginEvent on the global collector on enter and EndEvent on exit.
--- | 13,058 |
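The begin/end pairing that `TraceScope` guarantees can be sketched in plain Python. The `Collector` class below is a toy recorder used only for illustration, not the real `pxr.Trace.Collector`:

```python
from contextlib import contextmanager

class Collector:
    """Toy event recorder standing in for the real collector."""
    def __init__(self):
        self.events = []
    def BeginEvent(self, key):
        self.events.append(("begin", key))
    def EndEvent(self, key):
        self.events.append(("end", key))

@contextmanager
def trace_scope(collector, label):
    # Begin/end events stay paired even if the body raises.
    collector.BeginEvent(label)
    try:
        yield
    finally:
        collector.EndEvent(label)

collector = Collector()
with trace_scope(collector, "Solve"):
    pass
print(collector.events)  # [('begin', 'Solve'), ('end', 'Solve')]
```

Because the end event is emitted in a `finally` block, the matching-begin/end invariant that the collector expects holds even when the traced scope exits via an exception.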
Transcoding.Functions.md | # Transcoding Functions
## Functions Summary:
| Function |
|----------|
| [decodeASCIIIdentifier](#) |
| [encodeASCIIPath](#) |
| [encodeUTF8Identifier](#) |
| [encodeUTF8Path](#) | | 183 |
troubleshooting_OniWalkthrough.md | # Creating a New Omniverse Native Interface
## Warning
A common misconception is that Omniverse Native Interfaces can be changed over time, but this is **not correct**. Once created and released, Omniverse Native Interfaces are **immutable**. To add functionality, an Interface can be inherited into a new Interface that adds functionality, or a new separate Interface can be created. The process of doing this is also described in [Extending an Omniverse Native Interface](#extending-oni-interfaces-label).
## Warning
Omniverse Native Interfaces are Beta software and can be difficult to use. Active development by the Carbonite team has paused. If other contributors wish to develop improvements, the Carbonite team is willing to evaluate Merge Requests.
[Carbonite Interfaces](#carbonite-interfaces) are actively supported.
Setting up the projects and definitions of a new interface can be a daunting prospect. This will be clarified by following the steps below. Below we will walk through the creation of a set of interfaces in a (pointless) plugin called `omni.meals`. These don’t do anything particularly useful, but should at least be instructional.
## Project Definitions
The first step is to create a new set of projects in a premake file (ie: `premake5.lua`). There will typically be three new projects added for any new Omniverse interface - the interface generator project, the C++ implementation project for the interface, and the python bindings project for the interface.
### Interface Generator Project
The interface generator project definition typically looks along the lines of the listing below. This project is responsible for performing the code generation tasks by running each listed header through `omni.bind`. The resulting header files will be generated at the listed locations. Any time one of the C++ interface headers is modified, this project will regenerate the other headers automatically. It is important that this project be dependent on `omni.core.interfaces` and that all other projects for the new set of interfaces be dependent on this project. Note that if python bindings aren’t needed for this interface, the `py=` parts of each line can be omitted.
```lua
project "omni.meals.interfaces"
location (workspaceDir.."/%{prj.name}")
omnibind {
{ file="include/omni/meals/IBreakfast.h", api="include/omni/meals/IBreakfast.gen.h", py="source/bindings/python/omni.meals/PyIBreakfast.gen.h" },
{ file="include/omni/meals/ILunch.h", api="include/omni/meals/ILunch.gen.h", py="source/bindings/python/omni.meals/PyILunch.gen.h" },
{ file="include/omni/meals/IDinner.h", api="include/omni/meals/IDinner.gen.h", py="source/bindings/python/omni.meals/PyIDinner.gen.h" },
        -- add one more line for each other interface header in the project.
    }
    dependson { "omni.core.interfaces" }
```
Note that calling the `omnibind` function will implicitly give your project the “StaticLib” kind under premake. For this reason it is a good idea to keep the code generation projects separate from other projects that depend on them. However, it is possible to make an `omnibind` call inside a project that also builds other code (ie: in cases where only one project depends on the interface’s generated code). The only fallout from it will be that the project’s ‘kind’ will have to be reset after the `omnibind` call(s) if it is not intended to be a static library.
### C++ Implementation Project
The C++ implementation project definition looks very similar to any other C++ plugin project in Carbonite. This simply defines the important folders for the plugin’s implementation files, any dependent projects that need to be built first, and any additional platform specific SDKs, includes, build settings, etc. This should look similar to the listing below at its simplest. Initially the project does not need any implementation files. All of the .cpp files will be added later.
```lua
project "omni.meals.plugin"
define_plugin { ifaces = "include/omni/meals", impl = "source/plugins/omni.meals" }
dependson { "omni.meals.interfaces" }
```
### Python Bindings Project
If needed, the python bindings project defines the location and source files for the generated python bindings. Note that `omni.bind` does not generate a .cpp that creates and exports the bindings. Instead it generates a header file that contains a set of inlined helper functions that define the bindings. It is the responsibility of the project implementor to call each of those inlined helpers inside a `PYBIND11_MODULE(_moduleName, m)` block somewhere in the project. This is left up to the implementor so that they can also add extra custom members or values to the bindings if needed. Each generated inline helper function will return a `pybind11` class or enum object that can have other symbols, functions, values, or documentation added to them as needed.
```lua
project "omni.meals.python"
define_bindings_python {
name = "_meals", -- must match the module name in the PYBIND11_MODULE() block.
folder = "source/bindings/python/omni.meals",
namespace = "omni/meals"
}
dependson { "omni.meals.interfaces" }
```
### Creating the C++ Interface Header(s)
Once the projects have been added, all of the C++ header files that were listed in the `omni.meals.interfaces` project need to be added to the tree and filled in. The headers that need to be created are specific to your new interface. Continuing with our example here, these headers should be created:
```c++
// file 'include/omni/meals/ILunch.h'
#pragma once
#include <omni/core/IObject.h>
namespace omni
{
namespace meals
{
// we must always forward declare each interface that will be referenced here.
class ILunch;
// the interface's name must end in '_abi'.
class ILunch_abi : public omni::core::Inherits<omni::core::IObject, OMNI_TYPE_ID("omni.meals.ILunch")>
{
protected: // all ABI functions must always be 'protected' and must end in '_abi'.
virtual bool isTime_abi() noexcept = 0;
virtual void prepare_abi(OMNI_ATTR("c_str, in") const char* dish) noexcept = 0;
virtual void eat_abi(OMNI_ATTR("c_str, in") const char* dish) noexcept = 0;
};
} // namespace meals
} // namespace omni
```
```c++
// include the generated header and declare the API interface. Note that this must be
// done at the global scope.
#define OMNI_BIND_INCLUDE_INTERFACE_DECL
#include "ILunch.gen.h"
// this is the API version of the interface that code will call into. Custom members and
// helpers may also be added to this interface API as needed, but this API object may not
// hold any additional data members.
class omni::meals::ILunch : public omni::core::Generated<omni::meals::ILunch_abi>
{
};
#define OMNI_BIND_INCLUDE_INTERFACE_IMPL
#include "ILunch.gen.h"
```
For the purposes of this example, we’ll assume that the other two headers look the same except that ‘lunch’ is replaced
with either ‘breakfast’ or ‘dinner’ (appropriately capitalized too). The actual interfaces themselves are not
important here, just the process for getting them created and building. Also note that for brevity in this example,
the documentation for each of the ABI functions has been omitted here. All ABI functions must be documented
appropriately.
The other two interface headers (`include/omni/meals/IBreakfast.h` and `include/omni/meals/IDinner.h`) should be very similar to `include/omni/meals/ILunch.h` and are left as an exercise for the reader.
## Generating Code
After adding the new projects and the C++ interface declaration header(s) to your tree, the initial code generation
step needs to be run. Follow these steps:
1. Run the build once. This will likely fail due to the C++ implementation and python projects not having any source files in them yet. However, it should at least generate the headers. Running the build twice will also work around any errors generated about attempting to include a missing header file. The full build can be shortcut by simply pre-building the tree then building only the `omni.meals.interfaces` project in MSVC or VSCode.
2. Verify that all expected header files are appropriately generated in the correct location(s). This includes the C++ headers and the python bindings header.
## Adding Python Bindings
For many uses, adding the python bindings is trivial. Simply add a new .cpp file to the python bindings project folder with the same base name as the generated header (ie: `PyIMeals.cpp`). While this naming is not a strict requirement, it does keep things easy to find.
Implement the C++ source file for the bindings by creating a pybind11 module and calling the generated helper
functions in it:
```c++
#include <omni/python/PyBind.h>
#include <omni/meals/IBreakfast.h>
#include <omni/meals/ILunch.h>
#include <omni/meals/IDinner.h>
#include "PyIBreakfast.gen.h"
#include "PyILunch.gen.h"
#include "PyIDinner.gen.h"
OMNI_PYTHON_GLOBALS("omni.meals-pyd", "Python bindings for omni.meals.")
PYBIND11_MODULE(_meals, m)
{
bindIBreakfast(m);
bindILunch(m);
bindIDinner(m);
}
```
This bindings module should now be able to build on its own and produce the appropriate bindings. At this point
there shouldn’t be any link warnings or errors in the python module. The C++ implementation project however will
still have some errors due to a missing implementation.
Each of the generated inline helper functions will return a pybind11 class or enum object that can also be used
to add other custom symbols, functions, values, or documentation to if needed.
## Adding a C++ Implementation
The C++ implementation project is the last one to fill in. This requires a few files - a module startup
implementation, a header to share common internal declarations, and at least one implementation file for each
interface being defined. Note that having these files separated is not a strict requirement, but is good
practice in general. If needed, both the interface implementation and the module startup code could exist in
a single file.
### Shared Internal Header File
Since multiple source files in this example will likely be referring to the same set of classes and types,
they must be declared in a common internal header file. Even if each implementation source file were to be
completely self contained, there would still need to be at the very least a creator helper function that
can be referenced from the module startup source file.
```c++
// file 'source/plugins/omni.meals/Meals.h'.
#pragma once
#include <omni/core/Omni.h>
#include <omni/meals/IBreakfast.h>
#include <omni/meals/ILunch.h>
```
```cpp
#include <omni/meals/IDinner.h>
namespace omni
{
namespace meals
{
class Breakfast : public omni::core::Implements<omni::meals::IBreakfast>
{
public:
Breakfast();
~Breakfast();
// ... other internal declarations here ...
protected: // all ABI functions must always be overridden.
bool isTime_abi() noexcept override;
void prepare_abi(const char* dish) noexcept override;
void eat_abi(const char* dish) noexcept override;
private:
bool _needToast();
bool m_withToast;
};
// ... repeat for other internal implementation class declarations here ...
} // namespace meals
} // namespace omni
```
In the above example, each class is implemented separately internally. This is perfectly acceptable. However, all related objects may also be implemented with a single internal class if it makes logical sense or saves on code duplication. In this case, merging the implementations into a single class does not work since all of the interfaces need to implement the same three methods. However, if all the interfaces being implemented have mutually exclusive function names, a single class that simply inherits from all of the interfaces could be used. The only modification to the above example would be to add multiple API class names to the omni::core::Implements invocation in the class declaration.
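The naming constraint can be sketched in plain Python, which has an analogous multiple-inheritance rule. The `IMenu`/`IKitchen` interfaces below are hypothetical stand-ins and are unrelated to the real `omni::core::Implements` machinery:

```python
from abc import ABC, abstractmethod

class IMenu(ABC):
    @abstractmethod
    def list_dishes(self): ...

class IKitchen(ABC):
    @abstractmethod
    def cook(self, dish): ...

class Restaurant(IMenu, IKitchen):
    # The two interfaces declare disjoint method names, so one class
    # can satisfy both. IBreakfast/ILunch/IDinner each declare the same
    # three methods, which is why they need separate classes instead.
    def list_dishes(self):
        return ["soup", "toast"]
    def cook(self, dish):
        return f"cooked {dish}"

r = Restaurant()
print(r.list_dishes())  # ['soup', 'toast']
print(r.cook("soup"))   # cooked soup
```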
### Module Startup Source
The task of the module startup source file is to define startup and shutdown helper functions, define any additional callbacks such as ‘on started’ and ‘can unload’, and define the module exports table. These can be done with code along these lines:
```cpp
// file 'source/plugins/omni.meals/Interfaces.cpp'.
#include "Meals.h"

OMNI_MODULE_GLOBALS("omni.meals.plugin", "plain text brief omni.meals plugin description");

namespace // anonymous namespace to avoid unnecessary exports.
{

omni::core::Result onLoad(const omni::core::InterfaceImplementation** out, uint32_t* outCount)
{
    // clang-format off
    static const char* breakfastInterfaces[] = { "omni.meals.IBreakfast" };
    static const char* lunchInterfaces[] = { "omni.meals.ILunch" };
    static const char* dinnerInterfaces[] = { "omni.meals.IDinner" };
    static omni::core::InterfaceImplementation impls[] =
    {
        {
            "omni.meals.breakfast",
            []() { return static_cast<omni::core::IObject*>(new Breakfast); },
            1, // version
            breakfastInterfaces, CARB_COUNTOF32(breakfastInterfaces)
        },
        {
            "omni.meals.lunch",
            []() { return static_cast<omni::core::IObject*>(new Lunch); },
            1, // version
            lunchInterfaces, CARB_COUNTOF32(lunchInterfaces)
        },
        {
            "omni.meals.dinner",
            []() { return static_cast<omni::core::IObject*>(new Dinner); },
            1, // version
            dinnerInterfaces, CARB_COUNTOF32(dinnerInterfaces)
        },
    };
    // clang-format on

    *out = impls;
    *outCount = CARB_COUNTOF32(impls);
    return omni::core::kResultSuccess;
}

void onStarted()
{
    // ... do necessary one-time startup tasks ...
}

bool onCanUnload()
{
    // ... return true if unloading the module is safe ...
}

void onUnload()
{
    // ... do necessary one-time shutdown tasks ...
}

} // anonymous namespace

OMNI_MODULE_API omni::core::Result omniModuleGetExports(omni::core::ModuleExports* out)
{
    OMNI_MODULE_SET_EXPORTS(out);
    OMNI_MODULE_ON_MODULE_LOAD(out, onLoad);
    OMNI_MODULE_ON_MODULE_STARTED(out, onStarted);
    OMNI_MODULE_ON_MODULE_CAN_UNLOAD(out, onCanUnload);
    OMNI_MODULE_ON_MODULE_UNLOAD(out, onUnload);

    // the following two lines are needed for Carbonite interface interop. This includes any implicit use of
    // other Carbonite interfaces such as logging, assertions, or acquiring other interfaces. If no Carbonite
    // interface functionality is used, these can be omitted.
    OMNI_MODULE_REQUIRE_CARB_CLIENT_NAME(out);
    OMNI_MODULE_REQUIRE_CARB_FRAMEWORK(out);

    return omni::core::kResultSuccess;
}
```
### C++ Interface Implementation Files
All that remains is to add the actual implementation file(s) for your new interface(s). These should not export
anything, but should just provide the required functionality for the external ABI. The details of the implementation
will be left as an exercise here since they are specific to the particular interface being defined.
## Loading an Omniverse Interface Module
An Omniverse interface module can be loaded as any other Carbonite plugin would be loaded. This includes searching
for and loading wildcard modules during framework startup, or loading a specific library directly. Once loaded, the
interfaces offered in the library should be registered with the core type factory automatically. The various
objects offered in the library can then be created using the
`omni::core::createType<>()` helper template.
## Troubleshooting
When creating a new Omniverse interface and all of its related projects, there are some common problems that can come
up. This section looks to address some of those potential problems:
- "warning
- G041F212F:
class
template
specialization
of
'Generated'
not
in
a
namespace
enclosing
'core'
is
a
Microsoft
extension
[-Wmicrosoft-template]
: this warning on MSVC is caused by including the generated C++ header from inside
the namespace of the object(s) being declared. The generated header should always be included from the global
namespace scope level.
- when moving an interface declaration from one header to another or renaming a header, `omni.bind` will error out on the first build because it wants to process the previous API implementation first to check for changes. It will
report that the deleted interface no longer exists. However, in this case the new header will still be generated
correctly and the rest of the project will still build successfully. Running the build again, including the
`omni.bind` step, will succeed. Since an existing ABI should never be removed, the only legitimate use case for
this situation is in changes during initial development or in splitting up a header that originally declared multiple
interfaces.
- structs that are passed into omni interface methods as parameters may not have any members with default initializers.
If a member of the struct has a default initializer, `omni.bind` will give an error stating that the use of the struct is not ABI safe. This is because the struct no longer has a trivial layout when it has a default initializer
and is therefore not ABI safe.
tutorial.md | # How to use Slang node in OmniGraph
## Video Tutorial
Video Tutorial on configuring simple Slang node in the action graph.
The example code from the tutorial is listed [here](#demo-code).
## Slang Function
Slang node, when dropped into a push graph, has only an input attribute for Code token and Instance count, while the node dropped into an action graph also has execution **Exec In** and **Exec Out** pins. The Slang function’s code can be previewed in the *Slang Code Editor* by clicking **Edit** in the node’s Property window next to the **Code** attribute. Node attributes for variables used in the function can be added or removed via dedicated buttons in the *Add and Remove Attributes* section in the top part of the Property window.
## Code
The code attribute should always contain at least an empty `compute` function.
```hlsl
void compute(uint instanceId)
{
}
```
!!! important
Defining non-static global variables in a Slang function is not allowed. The user needs to create a node attribute to declare a resource buffer.
## Instance Count
Slang node’s **Instance count** attribute input specifies how many times the function will be executed and the size of an output array. The `uint instanceId` parameter of the main *compute* function then defines the position in the output array where the computed result can be stored (an [example](#arrays-code) of the usage).
!!! important
Arrays are zero-indexed and bounds-checked, which means that if an array access is out of bounds, the value at index zero is returned. An empty array thus always contains at least one element with a zero value by default.
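This fallback behavior can be modeled in ordinary Python. The sketch below illustrates the rule described above; it is not Slang code and `safe_read` is a hypothetical name:

```python
def safe_read(array, index):
    """Model of Slang-node array reads: clamp out-of-bounds to index 0."""
    if not array:                 # an empty array behaves like [0]
        return 0
    if 0 <= index < len(array):
        return array[index]
    return array[0]               # out of bounds -> value at index zero

print(safe_read([10, 20, 30], 1))   # 20
print(safe_read([10, 20, 30], 99))  # 10 (clamped to index 0)
print(safe_read([], 5))             # 0
```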
## Node Attributes
User-added node attributes can be used in a Slang function as variables. Attributes come in three kinds: *input*, *output*, and *state*. Input attributes are read-only, while output and state attributes can be accessed for both reads and writes. A conversion table between OGN, USD, and Slang data types can be found on a separate page.
Slang Node Data Types
Execution attributes can be added or removed arbitrarily. The constants that can be passed to the output execution attributes are:
```
EXEC_OUT_DISABLED
EXEC_OUT_ENABLED
EXEC_OUT_ENABLED_AND_PUSH
EXEC_OUT_LATENT_PUSH
EXEC_OUT_LATENT_FINISH
```
---
To create a node from the demo scene shown in a tutorial video, follow the next steps:
1. Drop a node called **Slang Function** in the Action Graph
2. In the Property window of the node, click *Add* button
3. Create **input** attribute of type *double* and name it *time*
4. Create **output** attribute of type *double3* and name it *position*
5. Continue with opening the Slang Code Editor
---
## Slang Code Editor
To use a node attribute as a variable in the code, the attribute name has to be adjusted by a simple pattern. Please refer to the *Variables from Attributes Syntax* section to see examples of this pattern.
An example of a compute function from the tutorial can be copy pasted into the editor and compiled by hitting **Save & Compile**. The green tick next to the button indicates that the compilation was successful. Any errors, in case of failed compilation, are displayed in the Console window and a red cross appears next to the compile button. Handling compilation errors is described further in the text.
```hlsl
void compute(uint instanceId)
{
double time = inputs_time_get();
double k = 0.1f;
double r = 100.f + 10.f * sin(time);
double x = 100.f * sin(k * time);
double y = 100.f * cos(k * time);
    double3 pos = double3(x, 50.f, y);
outputs_position_set(pos);
outputs_execOut_set(EXEC_OUT_ENABLED);
}
```
> **Note**
> Slang nodes authored in **Create 2022.3.0** use deprecated type `uint64_t` for `token` attributes. By hitting **Upgrade** button in the bottom right of the Code editor, the node will be upgraded to the latest version and user needs to replace `uint64_t` types for tokens in their code by a new Slang `Token` type.
## Slang Settings
**Use Slang LLVM**
> Slang code is compiled by the Slang LLVM compiler. The library is included in the extension and the compiler is used to compile Slang code by default. When turned off in **Slang Settings**, Slang looks for a default system C++ compiler instead.
**Multithreaded Compilation**
> Each node is compiled on its own thread, which accelerates the initialization of multiple Slang nodes in the Stage during scene load. Only relevant when Slang LLVM is turned off.
**Show Generated Code**
> This toggle shows a separate editor tab after switching to **Generated Slang Code** where the user can preview what functions are auto-generated from the node’s attribute before the actual code compilation is run.
> Also, when a compilation error occurs, the line number refers to a position in the generated code not in the **Compute Function** tab.
The code can be edited and compiled even while the OmniGraph is running. The Slang node won’t be updated and does not output any data in case of failed compilation.
## Slang Functions Examples
> *Reading and writing arrays*
> ```hlsl
> void compute(uint instanceId)
> {
> float3 pos = inputs_pointsIn_get(instanceId);
>
> float time = float(inputs_time_get());
>
> pos.x += 10.f * cos(time);
> pos.z += 10.f * sin(time);
>
> outputs_pointsOut_set(instanceId, pos);
>
> outputs_execOut_set(EXEC_OUT_ENABLED);
> }
> ``` | 5,359 |
tutorial1.md | # Tutorial 1 - Trivial Node
The simplest possible node is one that implements only the mandatory fields in a node. These are the “version” and “description” fields.
The existence of the file `OgnTutorialEmpty.svg` will automatically install this icon into the build directory and add its path to the node type’s metadata. The installed file will be named after the node type, not the class type, so it will be installed at the path `$BUILD/exts/omni.graph.tutorials/ogn/icons/Empty.svg`.
## OgnTutorialEmpty.ogn
The `.ogn` file containing the implementation of a node named “omni.graph.tutorials.Empty”, in its first version, with a simple description.
```json
{
"Empty": {
"version": 1,
"categories": "tutorials",
"scheduling": ["threadsafe"],
"description": [
"This is a tutorial node. It does absolutely nothing and is only meant to ",
"serve as an example to use for setting up your build."
],
"metadata": {
"uiName": "Tutorial Node: No Attributes"
}
}
}
```
## OgnTutorialEmpty.cpp
The `.cpp` file contains the minimum necessary implementation of the node class, which contains only the empty compute method. It contains a detailed description of the necessary code components.
```cpp
// Copyright (c) 2020-2024, NVIDIA CORPORATION. All rights reserved.
//
// NVIDIA CORPORATION and its licensors retain all intellectual property
// and proprietary rights in and to this software, related documentation
// and any modifications thereto. Any use, reproduction, disclosure or
// distribution of this software and related documentation without an express
// license agreement from NVIDIA CORPORATION is strictly prohibited.
//
// ============================================================
// Note the name of the generated include file, taken from the name of the file with "Database.h" appended.
// The build system adds the include path that picks this file up from the generated file location.
#include <OgnTutorialEmptyDatabase.h>
// ============================================================
// The class name should match the file name to avoid confusion as the file name will be used
// as a base for the names in the generated interface.
//
class OgnTutorialEmpty
{
public:
// ------------------------------------------------------------
// Note the name of the generated computation database, the same as the name of the include file,
// auto-generated by appending "Database" to the file name.
//
static bool compute(OgnTutorialEmptyDatabase&)
{
// This node correctly does nothing, but it must return true to indicate a successful compute.
//
return true;
}
};
// ============================================================
// Now that the node has been defined it can be registered for use. This registration takes care of
// automatic registration of the node when the extension loads and deregistration when it unloads.
REGISTER_OGN_NODE()
``` | 2,986 |
tutorial10.md | # Tutorial 10 - Simple Data Node in Python
The simple data node creates one input attribute and one output attribute of each of the simple types, where “simple” refers to data types that have a single component and are not arrays. (e.g. “float” is simple, “float[3]” is not, nor is “float[]”). See also [Tutorial 2 - Simple Data Node](tutorial2.html#ogn-tutorial-simpledata) for a similar example in C++.
## Automatic Python Node Registration
By implementing the standard Carbonite extension interface in Python, OmniGraph will know to recursively scan your Python import path, import all Python node files it finds, and register those nodes. It will also deregister those nodes when the extension shuts down. Here is an example of the directory structure for an extension with a single node in it. (For extensions that have a `premake5.lua` build script this will be in the build directory. For standalone extensions it is in your source directory.)
```
omni.my.extension/
omni/
my/
extension/
nodes/
OgnMyNode.ogn
OgnMyNode.py
```
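The scan-and-import behavior described above can be sketched with the standard library. The function name `import_node_files` and the `Ogn*.py` filename filter are illustrative assumptions, not OmniGraph's actual loader:

```python
import importlib.util
import pathlib
import tempfile

def import_node_files(root):
    """Walk root recursively, import every Ogn*.py file, return the modules."""
    modules = []
    for path in sorted(pathlib.Path(root).rglob("Ogn*.py")):
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        modules.append(module)
    return modules

# Demonstrate with a throwaway directory holding one fake node file.
with tempfile.TemporaryDirectory() as root:
    nodes = pathlib.Path(root) / "nodes"
    nodes.mkdir()
    (nodes / "OgnMyNode.py").write_text("NODE_NAME = 'omni.my.extension.MyNode'\n")
    loaded = import_node_files(root)
    print([m.NODE_NAME for m in loaded])  # ['omni.my.extension.MyNode']
```

A real registration pass would additionally hand each imported module to the OmniGraph node registry; this sketch only shows the discovery and import half.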
## OgnTutorialSimpleDataPy.ogn
The `ogn` file shows the implementation of a node named “omni.graph.tutorials.SimpleDataPy”, which has one input and one output attribute of each simple type.
```json
{
"SimpleDataPy": {
"version": 1,
"categories": "tutorials",
"description": [
"This is a tutorial node. It creates both an input and output attribute of every simple ",
"supported data type. The values are modified in a simple way so that the compute modifies values. ",
"It is the same as node omni.graph.tutorials.SimpleData, except it is implemented in Python instead of C++."
],
"language": "python",
"$iconOverride": [
"If for some reason the default name or colors for the icon are inappropriate you can override like this.",
"This gives an alternative icon path, a shape color of Green, a border color of Red, and a background",
"color of half-opaque Blue. If you just want to override the icon path then instead of a dictionary you",
"can just use the string path, as in \"icon\": \"Tutorial10Icon.svg\"."
],
    "icon": {
      "path": "Tutorial10Icon.svg",
      "color": "#FF00FF00",
      "borderColor": [255, 0, 0, 255],
      "backgroundColor": "#7FFF0000"
    },
    "metadata": {
      "uiName": "Tutorial Python Node: Attributes With Simple Data"
    },
    "inputs": {
      "a_bool": {
        "type": "bool",
        "metadata": {
          "uiName": "Simple Boolean Input"
        },
        "description": ["This is an attribute of type boolean"],
        "default": true,
        "$optional": "When this is set there is no checking for validity before calling compute()",
        "optional": true
      },
      "a_half": {
        "type": "half",
        "description": ["This is an attribute of type 16 bit float"],
        "$comment": "0 is used as the decimal portion due to reduced precision of this type",
        "default": 0.0
      },
      "a_int": {
        "type": "int",
        "description": ["This is an attribute of type 32 bit integer"],
        "default": 0
      },
      "a_int64": {
        "type": "int64",
        "description": ["This is an attribute of type 64 bit integer"],
        "default": 0
      },
      "a_float": {
        "type": "float",
        "description": ["This is an attribute of type 32 bit floating point"],
        "default": 0
      },
      "a_double": {
        "type": "double",
        "description": ["This is an attribute of type 64 bit floating point"],
        "default": 0
      },
      "a_path": {
        "type": "path",
        "description": ["This is an attribute of type path"]
      },
"a_string": {
"type": "string",
"description": ["This is an attribute of type string"],
"default": "helloString"
},
"a_token": {
"type": "token",
"description": ["This is an attribute of type interned string with fast comparison and hashing"],
"default": "helloToken"
},
"a_objectId": {
"type": "objectId",
"description": ["This is an attribute of type objectId"],
"default": 0
},
"a_uchar": {
"type": "uchar",
"description": ["This is an attribute of type unsigned 8 bit integer"],
"default": 0
},
"a_uint": {
"type": "uint",
"description": ["This is an attribute of type unsigned 32 bit integer"],
"default": 0
},
"a_uint64": {
"type": "uint64",
"description": ["This is an attribute of type unsigned 64 bit integer"],
"default": 0
},
"a_constant_input": {
"type": "int",
"description": ["This is an input attribute whose value can be set but can only be connected as a source."],
"metadata": {
"outputOnly": "1"
}
      }
    },
    "outputs": {
"a_bool": {
"type": "bool",
"description": ["This is a computed attribute of type boolean"]
},
"a_half": {
"type": "half",
"description": ["This is a computed attribute of type 16 bit float"]
      },
"a_int": {
"type": "int",
"description": ["This is a computed attribute of type 32 bit integer"]
},
"a_int64": {
"type": "int64",
"description": ["This is a computed attribute of type 64 bit integer"]
},
"a_float": {
"type": "float",
"description": ["This is a computed attribute of type 32 bit floating point"]
},
"a_double": {
"type": "double",
"description": ["This is a computed attribute of type 64 bit floating point"]
},
"a_path": {
"type": "path",
"description": ["This is a computed attribute of type path"],
"default": "/Child"
},
"a_string": {
"type": "string",
"description": ["This is a computed attribute of type string"],
"default": "This string is empty"
},
"a_token": {
"type": "token",
"description": ["This is a computed attribute of type interned string with fast comparison and hashing"]
},
"a_objectId": {
"type": "objectId",
"description": ["This is a computed attribute of type objectId"]
},
"a_uchar": {
"type": "uchar",
"description": ["This is a computed attribute of type unsigned 8 bit integer"]
},
"a_uint": {
"type": "uint",
"description": ["This is a computed attribute of type unsigned 32 bit integer"]
},
"a_uint64": {
"type": "uint64",
"description": ["This is a computed attribute of type unsigned 64 bit integer"]
},
"a_nodeTypeUiName": {
"type": "string",
        "description": "Computed attribute containing the UI name of this node type"
      },
      "a_a_boolUiName": {
        "type": "string",
        "description": "Computed attribute containing the UI name of input a_bool"
      }
    },
    "tests": [
      {
        "$comment": [
          "Each test has a description of the test and a set of input and output values. ",
          "The test runs by setting all of the specified inputs on the node to their values, ",
          "running the compute, then comparing the computed outputs against the values ",
          "specified in the test. Only the inputs in the list are set; others will use their ",
          "default values. Only the outputs in the list are checked; others are ignored."
        ],
        "description": "Check that false becomes true",
        "inputs:a_bool": false,
        "outputs:a_bool": true
      },
      {
        "$comment": "This is a more verbose format of test data that provides a different grouping of values",
        "description": "Check that true becomes false",
        "inputs": {
          "a_bool": true
        },
        "outputs": {
          "a_bool": false,
          "a_a_boolUiName": "Simple Boolean Input",
          "a_nodeTypeUiName": "Tutorial Python Node: Attributes With Simple Data"
        }
      },
      {
        "$comment": "Make sure the path append does the right thing",
        "inputs:a_path": "/World/Domination",
        "outputs:a_path": "/World/Domination/Child"
      },
      {
        "$comment": "Even though these computations are all independent they can be checked in a single test.",
        "description": "Check all attributes against their computed values",
        "inputs:a_bool": false,
        "outputs:a_bool": true,
        "inputs:a_double": 1.1,
        "outputs:a_double": 2.1,
        "inputs:a_float": 3.3,
        "outputs:a_float": 4.3,
        "inputs:a_half": 5.0,
        "outputs:a_half": 6.0,
        "inputs:a_int": 7,
        "outputs:a_int": 8,
        "inputs:a_int64": 9,
        "outputs:a_int64": 10
      },
{
"inputs:a_token": "helloToken",
"outputs:a_token": "worldToken"
},
{
"inputs:a_string": "helloString",
"outputs:a_string": "worldString"
},
{
"inputs:a_objectId": 10,
"outputs:a_objectId": 11
},
{
"inputs:a_uchar": 11,
"outputs:a_uchar": 12
},
{
"inputs:a_uint": 13,
"outputs:a_uint": 14
},
{
"inputs:a_uint64": 15,
"outputs:a_uint64": 16
}
    ]
  }
}
```
## OgnTutorialSimpleDataPy.py
The py file contains the implementation of the compute method, which modifies each of the inputs in a simple way to create outputs that have different values.
```python
"""
Implementation of the Python node accessing all of the simple data types.
This class exercises access to the DataModel through the generated database class for all simple data types.
It implements the same algorithm as the C++ node OgnTutorialSimpleData.cpp
"""
import omni.graph.tools.ogn as ogn
class OgnTutorialSimpleDataPy:
"""Exercise the simple data types through a Python OmniGraph node"""
@staticmethod
def compute(db) -> bool:
"""Perform a trivial computation on all of the simple data types to make testing easy"""
# Inside the database the contained object "inputs" holds the data references for all input attributes and the
# contained object "outputs" holds the data references for all output attributes.
# Each of the attribute accessors are named for the name of the attribute, with the ":" replaced by "_".
# The colon is used in USD as a convention for creating namespaces so it's safe to replace it without
# modifying the meaning. The "inputs:" and "outputs:" prefixes in the generated attributes are matched
# by the container names.
# For example attribute "inputs:translate:x" would be accessible as "db.inputs.translate_x" and attribute
# "outputs:matrix" would be accessible as "db.outputs.matrix".
# The "compute" of this method modifies each attribute in a subtle way so that a test can be written
# to verify the operation of the node. See the .ogn file for a description of tests.
db.outputs.a_bool = not db.inputs.a_bool
db.outputs.a_half = 1.0 + db.inputs.a_half
db.outputs.a_int = 1 + db.inputs.a_int
db.outputs.a_int64 = 1 + db.inputs.a_int64
db.outputs.a_double = 1.0 + db.inputs.a_double
db.outputs.a_float = 1.0 + db.inputs.a_float
db.outputs.a_uchar = 1 + db.inputs.a_uchar
db.outputs.a_uint = 1 + db.inputs.a_uint
db.outputs.a_uint64 = 1 + db.inputs.a_uint64
db.outputs.a_string = db.inputs.a_string.replace("hello", "world")
db.outputs.a_objectId = 1 + db.inputs.a_objectId
# The token interface is made available in the database as well, for convenience.
# By calling "db.token" you can look up the token ID of a given string.
if db.inputs.a_token == "helloToken":
db.outputs.a_token = "worldToken"
# Path just gets a new child named "Child".
# In the implementation the string is manipulated directly, as it does not care if the SdfPath is valid or
# not. If you want to manipulate it using the pxr.Sdf.Path API this is how you could do it:
#
# from pxr import Sdf
        #     input_path = Sdf.Path(db.inputs.a_path)
        #     if input_path.IsValid():
        #         db.outputs.a_path = input_path.AppendChild("Child").GetString()
#
db.outputs.a_path = db.inputs.a_path + "/Child"
# To access the metadata you have to go out to the ABI, though the hardcoded metadata tags are in the
# OmniGraph Python namespace
assert db.node.get_attribute("inputs:a_bool").get_metadata(ogn.MetadataKeys.UI_NAME) == "Simple Boolean Input"
# You can also use the database interface to get the same data
db.outputs.a_nodeTypeUiName = db.get_metadata(ogn.MetadataKeys.UI_NAME)
db.outputs.a_a_boolUiName = db.get_metadata(ogn.MetadataKeys.UI_NAME, db.attributes.inputs.a_bool)
return True
```
Note how the attribute values are available through the `OgnTutorialSimpleDataPyDatabase` class. The generated interface creates access methods for every attribute, named for the attribute itself. They are all implemented as Python properties, where inputs only have get methods and outputs have both get and set methods.
## Pythonic Attribute Data
Three subsections are created in the generated database class. The main section implements the node type ABI methods and uses introspection on your node class to call any versions of the ABI methods you have defined (see later tutorials for examples of how this works).
The other two subsections are classes containing attribute access properties for inputs and outputs. For naming consistency the class members are called `inputs` and `outputs`. For example, you can access the value of the input attribute named `foo` by referencing `db.inputs.foo`.
## Pythonic Attribute Access
In the USD file the attribute names are automatically namespaced as `inputs:FOO` or `outputs:BAR`. In the Python interface the colon is illegal so the contained classes above are used to make use of the dot-separated equivalent, as `inputs.FOO` or `outputs.BAR`.
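As an illustration of that mapping, here is a runnable sketch using a simple stand-in object instead of the real generated database class (the real `db` is generated from the .ogn file by OmniGraph): USD attribute `inputs:a_int` becomes `db.inputs.a_int`, and `outputs:a_int` becomes `db.outputs.a_int`.

```python
# Minimal stand-in for the generated database class, for illustration only --
# in a real node OmniGraph constructs and passes the db object itself.
from types import SimpleNamespace

def compute(db) -> bool:
    # USD attribute "inputs:a_int"  -> db.inputs.a_int
    # USD attribute "outputs:a_int" -> db.outputs.a_int
    db.outputs.a_int = db.inputs.a_int + 1
    return True

db = SimpleNamespace(inputs=SimpleNamespace(a_int=7), outputs=SimpleNamespace())
compute(db)
print(db.outputs.a_int)  # 8
```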
While the underlying data types are stored in their exact form there is conversion when they are passed back to Python as Python has a more limited set of data types, though they all have compatible ranges. For this class, these are the types the properties provide:
| Database Property | Returned Type |
|-------------------|---------------|
| inputs.a_bool | bool |
| inputs.a_half | float |
| inputs.a_int | int |
| inputs.a_int64 | int |
| inputs.a_float | float |
| inputs.a_double | float |
| inputs.a_token | str |
| outputs.a_bool | bool |
| outputs.a_half | float |
| outputs.a_int | int |
| outputs.a_int64 | int |
| outputs.a_float | float |
| outputs.a_double | float |
| outputs.a_token | str |
The data returned are all references to the real data in the Fabric, our managed memory store, pointing to the correct location at evaluation time.
## Python Helpers

A few helpers are provided in the database class definition to help make coding with it more natural.

### Python Logging

Two helper functions are provided in the database class to give more information when the compute method of a node has failed. Both take a formatted string describing the problem:

- `log_error(message)`
- `log_warning(message)`

### Direct Pythonic ABI Access

All of the generated database classes provide access to the underlying *INodeType* ABI for the rare situations where you need to call the ABI directly. There is the graph evaluation context member, `db.abi_context`, and the OmniGraph node member, `db.abi_node`.
# Tutorial 11 - Complex Data Node in Python
This node fills in the remainder of the (CPU for now) data types available through Python. It combines the progressive introduction in C++ of:
- Tutorial 4 - Tuple Data Node
- Tutorial 5 - Array Data Node
- Tutorial 6 - Array of Tuples
- Tutorial 7 - Role-Based Data Node
Rather than providing an exhaustive set of attribute types, one representative type is chosen from each of the aforementioned categories. See the section [Pythonic Complex Attribute Type Access](#pythonic-complex-attribute-type-access) for details on how to access the representative types.
## OgnTutorialComplexDataPy.ogn
The `ogn` file shows the implementation of a node named “omni.graph.tutorials.ComplexDataPy”, which has one input and one output attribute of each complex (arrays, tuples, roles) type.
```json
{
"ComplexDataPy": {
"version": 1,
"categories": "tutorials",
"description": [
"This is a tutorial node written in Python. It will compute the point3f array by multiplying",
"each element of the float array by the three element vector in the multiplier."
],
"language": "python",
"metadata": {
"uiName": "Tutorial Python Node: Attributes With Arrays of Tuples"
},
    "inputs": {
      "a_inputArray": {
        "description": "Input array",
        "type": "float[]",
        "default": []
      },
      "a_vectorMultiplier": {
        "description": "Vector multiplier",
        "type": "pointf[3]",
        "default": [1.0, 1.0, 1.0]
      }
    },
    "outputs": {
      "a_productArray": {
        "description": "Output array",
        "type": "pointf[3][]",
        "default": []
      },
      "a_tokenArray": {
        "description": "String representations of the input array",
        "type": "token[]"
      }
    },
"tests": [
{
"$comment": "Always a good idea to test the edge cases, here an empty/default input",
"outputs:a_productArray": []
},
{
"$comment": "Multiplication of a float[5] by float[3] yielding a point3f[5], equivalent to float[3][5]",
"inputs:a_inputArray": [1.0, 2.0, 3.0, 4.0, 5.0],
"inputs:a_vectorMultiplier": [6.0, 7.0, 8.0],
"outputs:a_productArray": [
[6.0, 7.0, 8.0],
[12.0, 14.0, 16.0],
[18.0, 21.0, 24.0],
[24.0, 28.0, 32.0],
[30.0, 35.0, 40.0]
],
"outputs:a_tokenArray": ["1.0", "2.0", "3.0", "4.0", "5.0"]
}
]
  }
}
```
## OgnTutorialComplexDataPy.py
The py file contains the implementation of the compute method, which modifies each of the inputs in a simple way to create outputs that have different values.
```python
"""
Implementation of a node handling complex attribute data
"""
# This class exercises access to the DataModel through the generated database class for a representative set of
# complex data types, including tuples, arrays, arrays of tuples, and role-based attributes. More details on
# individual type definitions can be found in the earlier C++ tutorial nodes where each of those types are
# explored in detail.
# Any Python node with array attributes will receive its data wrapped in a numpy array for efficiency.
# Unlike C++ includes, a Python import is not transitive so this has to be explicitly imported here.
import numpy
import omni.graph.core as og
class OgnTutorialComplexDataPy:
"""Exercise a sample of complex data types through a Python OmniGraph node"""
    @staticmethod
    def compute(db) -> bool:
        # (the body of the compute is elided in this excerpt)
        return True
```
Note how the attribute values are available through the
`OgnTutorialComplexDataPyDatabase` class. The generated interface creates access methods for every attribute, named for the attribute itself. They are all implemented as Python properties, where inputs only have get methods and outputs have both get and set methods.
## Pythonic Complex Attribute Type Access
Complex data in Python takes advantage of the numpy library to handle arrays so you should always include this line at the top of your node if you have array data:
```python
import numpy
```
| Database Property | Representative Type | Returned Type |
|-------------------|---------------------|---------------|
| inputs.a_float3 | Tuple | [float, float, float] |
| inputs.a_floatArray | Array | numpy.ndarray[float, 1] |
| inputs.a_point3Array | Role-Based | numpy.ndarray[float, 3] |
As with simple data, the values returned are all references to the real data in the Fabric, our managed memory store, pointing to the correct location at evaluation time.
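For reference, the node's multiplication algorithm can be sketched with plain numpy outside of OmniGraph, using the values from the test data in the `.ogn` file above (in the real compute, `db.inputs.a_inputArray` and `db.inputs.a_vectorMultiplier` arrive as arrays of this kind):

```python
import numpy as np

# Each scalar in the input array multiplies the 3-component vector,
# producing one point3f per input element -- shape (5, 3).
input_array = np.array([1.0, 2.0, 3.0, 4.0, 5.0], dtype=np.float32)
multiplier = np.array([6.0, 7.0, 8.0], dtype=np.float32)
product = input_array[:, None] * multiplier[None, :]
print(product[1])  # [12. 14. 16.]
```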
## Python Role Information
The attribute roles can be checked in Python similar to C++ by using the `role()` method on the generated database class.
```python
def compute(db) -> bool:
"""Run my algorithm"""
if db.role(db.outputs.a_pointArray) == db.ROLE_POINT:
print("Hey, I did get the correct role")
```
| Python Role | Attribute Types |
|-------------|-----------------|
| ROLE_COLOR | colord, colorf, colorh |
| ROLE_FRAME | frame |
| ROLE_NORMAL | normald, normalf, normalh |
| ROLE_POSITION | positiond, positionf, positionh |
| ROLE_QUATERNION | quatd, quatf, quath |
| ROLE_TEXCOORD | texcoordd, texcoordf, texcoordh |
| ROLE_TIMECODE | timecode |
| ROLE_TRANSFORM | transform |
| ROLE_VECTOR | vectord, vectorf, vectorh |
# Tutorial 12 - Python ABI Override Node
Although the .ogn format creates an easy-to-use interface to the ABI of the OmniGraph node and the associated data model, there may be cases where you want to override the ABI to perform special processing.
## OgnTutorialABIPy.ogn
The `ogn` file shows the implementation of a node named “omni.graph.tutorials.AbiPy”, in its first version, with a simple description.
```json
{
"AbiPy": {
"version": 1,
"categories": ["tutorials", {"internal:abiPy": "Internal nodes that override the Python ABI functions"}],
"language": "python",
"description": ["This tutorial node shows how to override ABI methods on your Python node.", "The algorithm of the node converts an RGB color into HSV components."],
"metadata": {
"uiName": "Tutorial Python Node: ABI Overrides"
},
"inputs": {
"color": {
"type": "colord[3]",
"description": ["The color to be converted"],
"default": [0.0, 0.0, 0.0],
"metadata": {
"$comment": "Metadata is key/value pairs associated with the attribute type.",
"$specialNames": "Kit may recognize specific keys. 'uiName' is a human readable version of the attribute name",
"uiName": "Color To Convert",
"multipleValues": ["value1", "value2", "value3"]
}
}
}
}
}
```
```python
# The error isn't recovered, to prevent proliferation of inconsistent calls. The exception is
# thrown to help with debugging. (As this is an example the exception is caught and ignored here.)
with suppress(og.OmniGraphError):
og.Controller.set(output_hue_attr, h)
og.Controller.set(output_saturation_attr, s)
og.Controller.set(output_value_attr, v)
#
# For comparison, here is the same algorithm implemented using "compute(db)"
#
# def compute(db) -> bool:
# (db.outputs.h, db.outputs.s, db.outputs.v) = colorsys.rgb_to_hsv(*db.inputs.color)
return True
# ----------------------------------------------------------------------
@staticmethod
def get_node_type() -> str:
"""
Rarely overridden
This should almost never be overridden as the auto-generated code will handle the name
"""
carb.log_info("Python ABI override of get_node_type")
return "omni.graph.tutorials.AbiPy"
# ----------------------------------------------------------------------
@staticmethod
def initialize(graph_context, node):
"""
Occasionally overridden
This method might be overridden to set up initial conditions when a node of this type is created.
        Note that overriding this puts the onus on the node writer to set up initial conditions such as
attribute default values and metadata.
When a node is created this will be called
"""
carb.log_info("Python ABI override of initialize")
# There is no default behavior on initialize so nothing else is needed for this tutorial to function
# ----------------------------------------------------------------------
@staticmethod
def initialize_type(node_type) -> bool:
"""
Rarely overridden
This method might be overridden to set up initial conditions when a node type is registered.
Note that overriding this puts the onus on the node writer to initialize the attributes and metadata.
By returning "True" the function is requesting that the attributes and metadata be initialized upon return,
otherwise the caller will assume that this override has already done that.
"""
carb.log_info("Python ABI override of initialize_type")
return True
# ----------------------------------------------------------------------
@staticmethod
def release(node):
"""
Occasionally overridden
After a node is removed it will get a release call where anything set up in initialize() can be torn down
"""
carb.log_info("Python ABI override of release")
# There is no default behavior on release so nothing else is needed for this tutorial to function
# ----------------------------------------------------------------------
@staticmethod
def update_node_version(graph_context, node, old_version: int, new_version: int):
"""
Occasionally overridden
        This is something you do want to override when you have more than one version of your node.
In it you would translate attribute information from older versions into the current one.
"""
carb.log_info(f"Python ABI override of update_node_version from {old_version} to {new_version}")
```
```python
# There is no default behavior on update_node_version so nothing else is needed for this tutorial to function
return old_version < new_version
# ----------------------------------------------------------------------
@staticmethod
def on_connection_type_resolve(node):
"""
Occasionally overridden
        When there is a connection change to this node which results in an extended type attribute being
        automatically resolved, this callback gives the node a chance to resolve other extended type
        attributes. For example a generic 'Increment' node can resolve its output to an int only after its
        input has been resolved to an int. Attribute types are resolved using
        omni.graph.core.Attribute.set_resolved_type(), or utility functions such as
        og.resolve_fully_coupled().
"""
carb.log_info("Python ABI override of on_connection_type_resolve")
# There is no default behavior for on_connection_type_resolve so nothing else is needed for this
# tutorial to function
```
## Metadata Attached To Attributes
This file introduces the *metadata* keyword for attributes, whose value is a dictionary of key/value pairs associated with the attribute in which it appears, and which may be extracted using the ABI metadata functions. These are not persisted in any files and so must be set either in the .ogn file or in an override of the **initialize()** method in the node definition.
# Tutorial 13 - Python State Node
This node illustrates how you can use internal state information, so long as you inform OmniGraph that you are doing so in order for it to make more intelligent execution scheduling decisions.
## OgnTutorialStatePy.ogn
The `.ogn` file contains the implementation of a node named “omni.graph.tutorials.StatePy”, with an empty `state` section to inform OmniGraph of its intention to compute using internal state information.
```json
{
"StatePy": {
"version": 1,
"categories": "tutorials",
"description": [
"This is a tutorial node. It makes use of internal state information",
"to continuously increment an output."
],
"language": "python",
"metadata": {
"uiName": "Tutorial Python Node: Internal States"
},
"state": {
"$comment": "The existence of this state section, even if it contains no attributes, means there is internal state that is entirely managed by the node"
},
"inputs": {
"overrideValue": {
"type": "int64",
"description": "Value to use instead of the monotonically increasing internal one when 'override' is true",
"default": 0
},
"override": {
"type": "bool",
"description": "When true get the output from the overrideValue, otherwise use the internal value",
"default": false
}
    },
"outputs": {
"monotonic": {
"type": "int64",
"description": "Monotonically increasing output, set by internal state information",
"default": 0
}
},
"tests": [
{
"inputs:overrideValue": 5,
"inputs:override": true,
"outputs:monotonic": 5
}
    ]
  }
}
```
## OgnTutorialStatePy.py
The `.py` file contains the compute method and the internal state information used to run the algorithm.
By overriding the special method `internal_state` you can define an object that will contain per-node and per-graph-instance data, that you can manage yourself. It will not be visible to OmniGraph. That data can be accessed with the db passed in to the compute function, through the `per_instance_state` property.
```python
"""
Implementation of a Python node that uses internal state information to compute outputs.
There are two types of state information in use here:
- OgnTutorialStatePy.step (per-class state information)
This is inherently dangerous in a multi-threaded multi-hardware evaluation so
it must be used with care. In this case the value is only used when a node is created, which for now is a safe
single-threaded operation
- per-node state information.
"""
class OgnTutorialStatePyInternalState:
"""Convenience class for maintaining per-node state information"""
def __init__(self):
"""Instantiate the per-node state information.
Note: For convenience, per-node state data is maintained as members of this class, imposing the minor
restriction of having no parameters allowed in this constructor.
The presence of the "state" section in the .ogn node has flagged to omnigraph the fact that this node will
be managing some per-node state data.
"""
# Start all nodes with a monotonic increment value of 0
self.increment_value = 0
# Get this node's internal step value from the per-class state information
self.node_step = OgnTutorialStatePy.step
# Update the per-class state information for the next node
OgnTutorialStatePy.step += 1
def update_state(self):
"""Helper function to update the node's internal state based on the previous values and the per-class state"""
self.increment_value += self.node_step
class OgnTutorialStatePy:
"""Use internal node state information in addition to inputs"""
# This is a simplified bit of internal per-class state information. In real applications this would be a complex
# structure, potentially keyed off of combinations of inputs or real time information.
#
# This value increases for each node and indicates the value at which a node's own internal state value increments.
# e.g. the first instance of this node type will increment its state value by 1, the second instance of it by 2,
# and so on...
step = 1
# Defining this method, in conjunction with adding the "state" section in the .ogn file, tells OmniGraph that you
# intend to maintain opaque internal state information on your node. OmniGraph will ensure that your node is not
# scheduled for evaluation in such a way that it would compromise the thread-safety of your node due to this state
# information, however you are responsible for updating the values and/or maintaining your own dirty bits when
# required.
@staticmethod
def internal_state():
"""Returns an object that will contain per-node state information"""
return OgnTutorialStatePyInternalState()
@staticmethod
def compute(db) -> bool:
"""Compute the output based on inputs and internal state"""
# This illustrates how internal state and inputs can be used in conjunction. The inputs can be used
# to divert to a different computation path.
if db.inputs.override:
db.outputs.monotonic = db.inputs.overrideValue
else:
# OmniGraph ensures that the database contains the correct internal state information for the node
# being evaluated. Beyond that it has no knowledge of the data within that state.
db.outputs.monotonic = db.per_instance_state.increment_value
# Update the node's internal state data for the next evaluation.
db.per_instance_state.update_state()
return True
```
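Outside of OmniGraph, the per-instance-state pattern above can be emulated with a small stand-in (illustrative only; in the real runtime OmniGraph calls `internal_state()` once per node instance and supplies `db.per_instance_state` itself):

```python
from types import SimpleNamespace

class InternalState:
    """Stand-in for the object returned by internal_state()"""
    def __init__(self):
        self.increment_value = 0

    def update_state(self, step=1):
        self.increment_value += step

def compute(db) -> bool:
    # Output the current state value, then advance it for the next evaluation
    db.outputs.monotonic = db.per_instance_state.increment_value
    db.per_instance_state.update_state()
    return True

db = SimpleNamespace(outputs=SimpleNamespace(), per_instance_state=InternalState())
for _ in range(3):
    compute(db)
print(db.outputs.monotonic)  # 2
```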
# Tutorial 14 - Defaults
While most inputs are required to have default values it’s not strictly necessary to provide explicit values for those defaults. If a default is required and not specified then it will get a default value equal to an empty value. See the table at the bottom for what is considered an “empty” value for each type of attribute.
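As a rough guide, the implied "empty" defaults for a few common types look like this (an assumed, illustrative subset based on each type's natural zero value; the table referenced above is the authoritative list):

```python
# Assumed "empty" values used when an input attribute omits its default
# (illustrative subset only, not the full table).
empty_defaults = {
    "bool": False,
    "int": 0,
    "float": 0.0,
    "string": "",
    "token": "",
    "float[]": [],
}
print(empty_defaults["float"])  # 0.0
```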
## OgnTutorialDefaults.ogn
The `ogn` file shows the implementation of a node named “omni.graph.tutorials.Defaults”, which has sample inputs of several types without default values and matching outputs.
```json
{
"Defaults": {
"version": 1,
"categories": "tutorials",
"scheduling": ["threadsafe"],
"description": [
"This is a tutorial node. It will move the values of inputs to corresponding outputs.",
"Inputs all have unspecified, and therefore empty, default values."
],
"metadata": {
"uiName": "Tutorial Node: Defaults"
},
"inputs": {
"a_bool": {
"type": "bool",
"description": ["This is an attribute of type boolean"]
},
"a_half": {
"type": "half",
"description": ["This is an attribute of type 16 bit floating point"]
},
"a_int": {
"type": "int",
"description": ["This is an attribute of type 32 bit integer"]
},
"a_int64": {
"type": "int64",
"description": ["This is an attribute of type 64 bit integer"]
      },
"a_float": {
"type": "float",
"description": ["This is an attribute of type 32 bit floating point"]
},
"a_double": {
"type": "double",
"description": ["This is an attribute of type 64 bit floating point"]
},
"a_string": {
"type": "string",
"description": ["This is an attribute of type string"]
},
"a_token": {
"type": "token",
"description": ["This is an attribute of type interned string with fast comparison and hashing"]
},
"a_uchar": {
"type": "uchar",
"description": ["This is an attribute of type unsigned 8 bit integer"]
},
"a_uint": {
"type": "uint",
"description": ["This is an attribute of type unsigned 32 bit integer"]
},
"a_uint64": {
"type": "uint64",
"description": ["This is an attribute of type unsigned 64 bit integer"]
},
"a_int2": {
"type": "int[2]",
"description": ["This is an attribute of type 2-tuple of integers"]
},
"a_matrix": {
"type": "matrixd[2]",
"description": ["This is an attribute of type 2x2 matrix"]
},
"a_array": {
"type": "float[]",
"description": ["This is an attribute of type array of floats"]
},
"outputs": {
"a_bool": {
"type": "bool",
"description": ["This is a computed attribute of type boolean"]
}
}
}
```
```json
{
    "outputs": {
        "a_half": {
            "type": "half",
            "description": ["This is a computed attribute of type 16 bit floating point"]
        },
        "a_int": {
            "type": "int",
            "description": ["This is a computed attribute of type 32 bit integer"]
        },
        "a_int64": {
            "type": "int64",
            "description": ["This is a computed attribute of type 64 bit integer"]
        },
        "a_float": {
            "type": "float",
            "description": ["This is a computed attribute of type 32 bit floating point"]
        },
        "a_double": {
            "type": "double",
            "description": ["This is a computed attribute of type 64 bit floating point"]
        },
        "a_string": {
            "type": "string",
            "description": ["This is a computed attribute of type string"]
        },
        "a_token": {
            "type": "token",
            "description": ["This is a computed attribute of type interned string with fast comparison and hashing"]
        },
        "a_uchar": {
            "type": "uchar",
            "description": ["This is a computed attribute of type unsigned 8 bit integer"]
        },
        "a_uint": {
            "type": "uint",
            "description": ["This is a computed attribute of type unsigned 32 bit integer"]
        },
        "a_uint64": {
            "type": "uint64",
            "description": ["This is a computed attribute of type unsigned 64 bit integer"]
        },
        "a_int2": {
            "type": "int[2]",
            "description": ["This is a computed attribute of type 2-tuple of integers"]
        },
        "a_matrix": {
            "type": "matrixd[2]",
            "description": ["This is a computed attribute of type 2x2 matrix"]
        },
        "a_array": {
            "type": "float[]",
            "description": ["This is a computed attribute of type array of floats"]
        }
    },
    "tests": [
        {
            "outputs:a_bool": false,
            "outputs:a_double": 0.0,
            "outputs:a_float": 0.0,
            "outputs:a_half": 0.0,
            "outputs:a_int": 0,
            "outputs:a_int64": 0,
            "outputs:a_string": "",
            "outputs:a_token": "",
            "outputs:a_uchar": 0,
            "outputs:a_uint": 0,
            "outputs:a_uint64": 0,
            "outputs:a_int2": [0, 0],
            "outputs:a_matrix": [[1.0, 0.0], [0.0, 1.0]],
            "outputs:a_array": []
        }
    ]
}
```
## OgnTutorialDefaults.cpp
The *cpp* file contains the implementation of the compute method, which copies the input values over to the corresponding outputs. All values should be the empty defaults.
## Empty Values For Attribute Types

The empty values for each of the attribute types are defined below. Having no default specified in the .ogn file for any of them is equivalent to defining a default of the given value.
| Type Name | Empty Default |
|-----------|---------------|
| bool | False |
| double | 0.0 |
| float | 0.0 |
| half | 0.0 |
| int | 0 |
| int64 | 0 |
| string | "" |
| token | "" |
| uchar | 0 |
| uint | 0 |
| uint64 | 0 |
> All attributes that are array types have empty defaults equal to the empty array []
> **Note**: All tuple types have empty defaults equal to a tuple of the correct count, each member containing the empty value for the base type, e.g. a float[2] has the empty default [0.0, 0.0]. Matrix types are the exception: their empty default is the identity matrix, so a matrixd[2] defaults to [[1.0, 0.0], [0.0, 1.0]], as seen in the test values above.
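As a cross-check of the table and notes above, the empty-default rules can be sketched in a few lines of Python. This helper is purely illustrative (not part of the OmniGraph API) and deliberately handles only the simple, tuple, and array cases; matrix types, whose empty default is the identity matrix, are left out:

```python
# Illustrative helper reproducing the "empty default" rules above; not part of
# the OmniGraph API. Matrix types (identity default) are deliberately excluded.
EMPTY_DEFAULTS = {
    "bool": False,
    "double": 0.0,
    "float": 0.0,
    "half": 0.0,
    "int": 0,
    "int64": 0,
    "string": "",
    "token": "",
    "uchar": 0,
    "uint": 0,
    "uint64": 0,
}

def empty_default(type_name: str):
    """Return the implied default for an attribute with no explicit default."""
    if type_name.endswith("[]"):
        return []  # every array type defaults to the empty array
    base, _, count = type_name.partition("[")
    if count:  # tuple type such as "int[2]": one empty base value per component
        return [EMPTY_DEFAULTS[base]] * int(count.rstrip("]"))
    return EMPTY_DEFAULTS[type_name]

print(empty_default("int[2]"))  # [0, 0]
```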
# Tutorial 15 - Bundle Manipulation
Attribute bundles are a construct that packages up groups of attributes into a single entity that can be passed around the graph. Some advantages of a bundle are that they greatly simplify graph connections, only requiring a single connection between nodes rather than dozens or even hundreds, and they do not require static definition of the data they contain so it can change as the evaluation of the nodes dictate. The only disadvantage is that the node writer is responsible for analyzing the contents of the bundle and deciding what to do with them.
## OgnTutorialBundles.ogn
The `ogn` file shows the implementation of a node named “omni.graph.tutorials.BundleManipulation”, which has some bundles as inputs and outputs. It’s called “manipulation” as the focus of this tutorial node is on operations applied directly to the bundle itself, as opposed to on the data on the attributes contained within the bundles. See future tutorials for information on how to deal with that.
```json
{
"BundleManipulation": {
"version": 1,
"categories": "tutorials",
"description": [
"This is a tutorial node. It exercises functionality for the manipulation of bundle",
"attribute contents."
],
"uiName": "Tutorial Node: Bundle Manipulation",
"inputs": {
"fullBundle": {
"type": "bundle",
"description": [
"Bundle whose contents are passed to the output in their entirety"
],
"metadata": {
"uiName": "Full Bundle"
}
},
"filteredBundle": {
"type": "bundle",
"description": [
"Bundle whose contents are filtered before being added to the output"
],
"uiName": "Filtered Bundle"
},
"filters": {
"type": "token[]"
}
}
}
}
```

```json
{
    "inputs": {
        "fullBundle": {
            "type": "bundle",
            "description": "This is the full bundle of data to be processed."
        },
        "filters": {
            "type": "token[]",
            "description": [
                "List of filter names to be applied to the filteredBundle. Any filter name",
                "appearing in this list will be applied to members of that bundle and only those",
                "passing all filters will be added to the output bundle. Legal filter values are",
                "'big' (arrays of size > 10), 'x' (attributes whose name contains the letter x),",
                "and 'int' (attributes whose base type is integer)."
            ],
            "default": []
        }
    },
    "outputs": {
        "combinedBundle": {
            "type": "bundle",
            "description": [
                "This is the union of fullBundle and filtered members of the filteredBundle."
            ]
        }
    }
}
```
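The three filters described above ('big', 'x', and 'int') can be sketched in plain Python. This is a mock of the node's selection logic only, using dicts in place of real bundled attributes:

```python
# Plain-Python mock of the three filters described above; bundle attributes are
# represented as dicts rather than real OmniGraph bundled attributes.
INTEGER_BASE_TYPES = {"int", "uint", "int64", "uint64"}

def passes_filters(attribute: dict, filters: list) -> bool:
    """Return True if the attribute survives every active filter."""
    if "int" in filters and attribute["base_type"] in INTEGER_BASE_TYPES:
        return False  # 'int' filter removes attributes with an integral base type
    if "x" in filters and "x" in attribute["name"]:
        return False  # 'x' filter removes attributes whose name contains an x
    value = attribute.get("value")
    if "big" in filters and isinstance(value, list) and len(value) > 10:
        return False  # 'big' filter removes arrays with more than 10 elements
    return True

bundle = [
    {"name": "points", "base_type": "float", "value": [0.0] * 20},
    {"name": "index", "base_type": "int", "value": 3},
    {"name": "xAxis", "base_type": "float", "value": 1.0},
    {"name": "weight", "base_type": "float", "value": 0.5},
]
kept = [a["name"] for a in bundle if passes_filters(a, ["big", "x", "int"])]
print(kept)  # ['weight']
```

Only `weight` survives: `points` is a big array, `index` has an integral base type, and `xAxis` contains the letter x.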
## OgnTutorialBundles.cpp
The cpp file contains the implementation of the compute method. It exercises each of the available bundle manipulation functions.
```c++
// Copyright (c) 2020-2024, NVIDIA CORPORATION. All rights reserved.
//
// NVIDIA CORPORATION and its licensors retain all intellectual property
// and proprietary rights in and to this software, related documentation
// and any modifications thereto. Any use, reproduction, disclosure or
// distribution of this software and related documentation without an express
// license agreement from NVIDIA CORPORATION is strictly prohibited.
//
#include <OgnTutorialBundlesDatabase.h>
using omni::graph::core::BaseDataType;
using omni::graph::core::NameToken;
using omni::graph::core::Type;
namespace {
// Tokens to use for checking filter names
static NameToken s_filterBigArrays;
static NameToken s_filterNameX;
static NameToken s_filterTypeInt;
}
class OgnTutorialBundles {
public:
// Overriding the initialize method allows caching of the name tokens which will avoid string comparisons
// at evaluation time.
static void initialize(const GraphContextObj& contextObj, const NodeObj&) {
s_filterBigArrays = contextObj.iToken->getHandle("big");
s_filterNameX = contextObj.iToken->getHandle("x");
s_filterTypeInt = contextObj.iToken->getHandle("int");
}
static bool compute(OgnTutorialBundlesDatabase& db) {
// Implementation of compute method
}
};
```
```cpp
{
// Bundle attributes are extracted from the database in the same way as any other attribute.
// The only difference is that a different interface is provided, suited to bundle manipulation.
const auto& fullBundle = db.inputs.fullBundle();
const auto& filteredBundle = db.inputs.filteredBundle();
const auto& filters = db.inputs.filters();
auto& outputBundle = db.outputs.combinedBundle();
// The first thing this node does is to copy the contents of the fullBundle to the output bundle.
// operator=() has been defined on bundles to make this a one-step operation. Note that this completely
// replaces any previous bundle contents. If you wish to append another bundle then you would use:
// outputBundle.insertBundle(fullBundle);
outputBundle = fullBundle;
// Set some booleans that determine which filters to apply
bool filterBigArrays{ false };
bool filterNameX{ false };
bool filterTypeInt{ false };
for (const auto& filterToken : filters)
{
if (filterToken == s_filterBigArrays)
{
filterBigArrays = true;
}
else if (filterToken == s_filterNameX)
{
filterNameX = true;
}
else if (filterToken == s_filterTypeInt)
{
filterTypeInt = true;
}
else
{
db.logWarning("Unrecognized filter name '%s'", db.tokenToString(filterToken));
}
}
    // The bundle object has an iterator for looping over the attributes within it
for (const auto& bundledAttribute : filteredBundle)
{
// The two main accessors for the bundled attribute provide the name and type information
NameToken name = bundledAttribute.name();
Type type = bundledAttribute.type();
        // Check each of the filters to see which attributes are to be skipped
        if (filterTypeInt)
        {
            if ((type.baseType == BaseDataType::eInt) || (type.baseType == BaseDataType::eUInt) ||
                (type.baseType == BaseDataType::eInt64) || (type.baseType == BaseDataType::eUInt64))
            {
                continue;
            }
        }
        if (filterNameX)
        {
            std::string nameString(db.tokenToString(name));
            if (nameString.find('x') != std::string::npos)
            {
                continue;
            }
        }
        if (filterBigArrays)
        {
            // A simple utility method on the bundled attribute provides access to array size
            if (bundledAttribute.size() > 10)
            {
                continue;
            }
        }

        // All filters have been passed so the attribute is eligible to be copied onto the output.
        outputBundle.insertAttribute(bundledAttribute);
    }
    return true;
}
};

REGISTER_OGN_NODE()
```
## OgnTutorialBundlesPy.py
The *py* file duplicates the functionality in the *cpp* file, except that it is implemented in Python.
```python
"""
Implementation of the Python node accessing attributes through the bundle in which they are contained.
"""
import omni.graph.core as og
# Types recognized by the integer filter
_INTEGER_TYPES = [og.BaseDataType.INT, og.BaseDataType.UINT, og.BaseDataType.INT64, og.BaseDataType.UINT64]
class OgnTutorialBundlesPy:
"""Exercise the bundled data types through a Python OmniGraph node"""
@staticmethod
def compute(db) -> bool:
"""Implements the same algorithm as the C++ node OgnTutorialBundles.cpp"""
full_bundle = db.inputs.fullBundle
```
```cpp
// The bundle attribute is extracted from the database in exactly the same way as any other attribute.
const auto& inputBundle = db.inputs.myBundle();
// Output and state bundles are the same, except not const
auto& outputBundle = db.outputs.myBundle();
// The size of a bundle is the number of attributes it contains
auto bundleAttributeCount = inputBundle.size();
// Full bundles can be copied using the assignment operator
outputBundle = inputBundle;
```

## Bundle Notes
Bundles are implemented in USD as “virtual primitives”. That is, while regular attributes appear in a USD file as attributes on a primitive, a bundle appears as a nested primitive with no members.
## Naming Convention

Attributes can and do contain namespaces to make them easier to work with. For example, `outputs:operations` is the namespaced name for the output attribute `operations`. However, as USD does not allow colons in the names of the primitives used for implementing attribute bundles, they are replaced by underscores, viz. `outputs_operations`.
## Bundled Attribute Manipulation Methods

There are a few methods for manipulating the bundle contents, independent of the actual data inside. The actual implementation of these methods may change over time; however, the usage should remain the same.
### The Bundle As A Whole
The bundle attribute is extracted from the database in exactly the same way as any other attribute.
### Accessing Attributes By Name
The attribute names should be cached somewhere as a token for fast access.
```cpp
static const NameToken normalsName = db.stringToToken("normals");
```
```cpp
// Then it's a call into the bundle to find an attribute with matching name.
// Names are unique so there is at most one match, and bundled attributes do not have the usual attribute
// namespace prefixes "inputs:", "outputs:", or "state:"
const auto& inputBundle = db.inputs.myBundle();
auto normals = inputBundle.attributeByName(normalsName);
if (normals.isValid())
{
// If the attribute is not found in the bundle then isValid() will return false.
}
```
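The same look-up-then-validate pattern can be mocked in plain Python, with a dict standing in for the bundle (the names here are illustrative only, not the real API):

```python
# Dict-based stand-in for bundle.attributeByName(): a missing name yields None,
# playing the role of the isValid() == false case above. Illustrative only.
bundle = {"normals": [0.0, 1.0, 0.0], "points": [[0.0, 0.0, 0.0]]}

def attribute_by_name(bundle: dict, name: str):
    """Names are unique within a bundle, so there is at most one match."""
    return bundle.get(name)

normals = attribute_by_name(bundle, "normals")
if normals is not None:
    print("found normals:", normals)

# No attribute with this name, so the lookup reports "invalid" (None).
assert attribute_by_name(bundle, "colors") is None
```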
```cpp
// Once an attribute has been extracted from a bundle a copy of it can be added to a writable bundle.
const auto& inputBundle = db.inputs.myBundle();
auto& outputBundle = db.outputs.myBundle();
auto normals = inputBundle.attributeByName(normalsToken);
if (normals.isValid())
{
// Clear the contents of stale data first since it will not be reused here.
outputBundle.clear();
// The attribute wrapper knows how to insert a copy into a bundle
outputBundle.insertAttribute(normals);
}
```
```cpp
// The range-based for loop provides a method for iterating over the bundle contents.
const auto& inputBundle = db.inputs.myBundle();
for (const auto& bundledAttribute : inputBundle)
{
// Type information is available from a bundled attribute, consisting of a structure defined in
// include/omni/graph/core/Type.h
auto type = bundledAttribute.type();
// The type has four pieces, the first is the basic data type...
assert( type.baseType == BaseDataType::eFloat );
// .. the second is the role, if any
assert( type.role == AttributeRole::eNormal );
// .. the third is the number of tuple components (e.g. 3 for float[3] types)
assert( type.componentCount == 3 );
// .. the last is the array depth, either 0 or 1
assert( type.arrayDepth == 0 );
}
```
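The four-part type decomposition shown in the comments above can be modeled with a tiny Python structure. The field names mirror the C++ `Type` members, but this is only an illustration, not the real binding:

```python
from collections import namedtuple

# Illustrative model of the four components of omni::graph::core::Type.
Type = namedtuple("Type", ["base_type", "role", "component_count", "array_depth"])

# A normals attribute: float base type, normal role, three tuple components,
# and array depth 0 (not an array).
normals_type = Type(base_type="float", role="normal", component_count=3, array_depth=0)
print(normals_type)

# An array of 2x2 double matrices would carry array_depth == 1.
matrix_array_type = Type("double", "matrix", 4, 1)
print(matrix_array_type.array_depth)  # 1
```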
# Tutorial 16 - Bundle Data
Attribute bundles are a construct that packages up groups of attributes into a single entity that can be passed around the graph. These attributes have all of the same properties as a regular attribute; you just have to go through an extra step to access their values. This node illustrates how to break open a bundle to access and modify values in the bundled attributes.
## OgnTutorialBundleData.ogn
The `ogn` file shows the implementation of a node named “omni.graph.tutorials.BundleData”, which has one input bundle and one output bundle.
```json
{
"BundleData": {
"version": 1,
"categories": "tutorials",
"description": [
"This is a tutorial node. It exercises functionality for access of data within",
"bundle attributes."
],
"metadata": {
"uiName": "Tutorial Node: Bundle Data"
},
"inputs": {
"bundle": {
"type": "bundle",
"description": [
"Bundle whose contents are modified for passing to the output"
],
"metadata": {
"uiName": "Input Bundle"
}
}
},
"outputs": {
"bundle": {
"type": "bundle",
"description": [
"This is the bundle with values of known types doubled."
]
}
}
}
}
```
## OgnTutorialBundleData.cpp
The `cpp` file contains the implementation of the compute method. It accesses any attributes in the bundle that have integral base types and doubles the values of those attributes.
```c++
// Copyright (c) 2021-2024, NVIDIA CORPORATION. All rights reserved.
//
// NVIDIA CORPORATION and its licensors retain all intellectual property
// and proprietary rights in and to this software, related documentation
// and any modifications thereto. Any use, reproduction, disclosure or
// distribution of this software and related documentation without an express
// license agreement from NVIDIA CORPORATION is strictly prohibited.
//
#include <OgnTutorialBundleDataDatabase.h>
using omni::graph::core::BaseDataType;
using omni::graph::core::NameToken;
using omni::graph::core::Type;
namespace
{
// The function of this node is to double the values for all bundled attributes
// with integral types, including tuples and arrays.
//
// The parameter is a ogn::RuntimeAttribute<kOgnOutput, ogn::kCpu>, which is the type of data returned when iterating
// over an output bundle on the CPU.
// It contains a description of the attribute within the bundle and access to the attribute's data.
// BundledInput is a similar type, which is what you get when iterating over an input bundle. The main difference
// between the two is the ability to modify the attribute or its data.
template <typename POD>
bool doubleSimple(ogn::RuntimeAttribute<ogn::kOgnOutput, ogn::kCpu>& bundledAttribute)
{
// When an attribute is cast to the wrong type (e.g. an integer attribute is extracted with a float
// template parameter on the get<,>() method) a nullptr is returned. That can be used to determine
// the attribute type. You can also use the bundledAttribute.type() method to access the full type
// information and select a code path using that.
auto podValue = bundledAttribute.get<POD>();
if (podValue)
{
*podValue *= 2;
return true;
}
return false;
};
// Array and tuple data has iterator capabilities for easy access to individual elements
template <typename CppType>
bool doubleArray(ogn::RuntimeAttribute<ogn::kOgnOutput, ogn::kCpu>& bundledAttribute)
{
// Strings and paths look like uint8_t[] but are not, so don't process them
if ((bundledAttribute.type().role == AttributeRole::eText) || (bundledAttribute.type().role == AttributeRole::ePath))
{
return false;
}
auto arrayValue = bundledAttribute.get<CppType>();
```
```cpp
if (arrayValue)
{
for (auto& arrayElement : *arrayValue)
{
arrayElement *= 2;
}
return true;
}
return false;
};
// Tuple arrays must have nested iteration
template <typename CppType, size_t tupleSize>
bool doubleTupleArray(ogn::RuntimeAttribute<ogn::kOgnOutput, ogn::kCpu>& bundledAttribute)
{
auto tupleArrayValue = bundledAttribute.get<CppType>();
if (tupleArrayValue)
{
for (auto& arrayElement : *tupleArrayValue)
{
for (size_t i = 0; i < tupleSize; ++i)
{
arrayElement[i] *= 2;
}
}
return true;
}
return false;
};
}
class OgnTutorialBundleData
{
public:
static bool compute(OgnTutorialBundleDataDatabase& db)
{
// Bundle attributes are extracted from the database in the same way as any other attribute.
// The only difference is that a different interface class is provided, suited to bundle manipulation.
const auto& inputBundle = db.inputs.bundle();
auto& outputBundle = db.outputs.bundle();
// This loop processes all integer typed attributes in the bundle
for (const auto& bundledAttribute : inputBundle)
{
auto podValue = bundledAttribute.get<int>();
if (podValue)
{
// Found an integer typed attribute
}
}
}
};
```
```cpp
            }
        }

        // Copying the entire bundle is more efficient than adding one member at a time. As the output bundle
        // will have the same attributes as the input, even though the values will be different, this is the best
        // approach. If the attribute lists were different then you would copy or create the individual attributes
        // as required.
        outputBundle = inputBundle;

        // Now walk the bundle to look for types to be modified
        for (auto& bundledAttribute : outputBundle)
        {
            // This shows how using a templated function can simplify handling of several different bundled
            // attribute types. The data types for each of the POD attributes is fully explained in the documentation
            // page titled "Attribute Data Types". The list of available POD data types is:
            //
            //     bool
            //     double
            //     float
            //     pxr::GfHalf
            //     int
            //     int64_t
            //     NameToken
            //     uint8_t
            //     uint32_t
            //     uint64_t
            //
            if (doubleSimple<int>(bundledAttribute))
                continue;
            if (doubleSimple<int64_t>(bundledAttribute))
                continue;
            if (doubleSimple<uint8_t>(bundledAttribute))
                continue;
            if (doubleSimple<uint32_t>(bundledAttribute))
                continue;
            if (doubleSimple<uint64_t>(bundledAttribute))
                continue;

            // Plain ints are the only integral types supporting tuples. Double those here
            if (doubleArray<int[2]>(bundledAttribute))
                continue;
            if (doubleArray<int[3]>(bundledAttribute))
                continue;
            if (doubleArray<int[4]>(bundledAttribute))
                continue;

            // Arrays are looped differently than tuples so they are also handled differently
            if (doubleArray<int[]>(bundledAttribute))
                continue;
            if (doubleArray<int64_t[]>(bundledAttribute))
                continue;
            if (doubleArray<uint8_t[]>(bundledAttribute))
                continue;
```
```cpp
// As the attribute data types are only known at runtime you must perform a type-specific cast
// to get the data out in its native form.
const auto& inputBundle = db.inputs.bundle();
// Note the "const" here, to ensure we are not inadvertently modifying the input data.
const auto weight = inputBundle.attributeByName(weightToken);
const float* weightValue = weight.value<float>();
// nullptr return means the data is not of the requested type
assert( nullptr == weight.value<int>() );
```

## Bundled Attribute Data Manipulation Methods
These are the methods for accessing the data that the bundled attributes encapsulate. In regular attributes the code generated from the .ogn file provides accessors with predetermined data types. The data types of attributes within bundles are unknown until compute time so it is up to the node writer to explicitly cast to the correct data type.
### Extracting Bundled Attribute Data - Simple Types
For reference, simple types, tuple types, array types, tuple array types, and role types are all described in omni.graph.docs.ogn_attribute_types. However, unlike normal attributes, bundled attributes are always accessed as their raw native data types. For example, instead of `pxr::GfVec3f` you will access with `float[3]`, which can always be cast to the explicit types if desired.
> Note
> One exception to the type casting is tokens. In normal attributes you retrieve tokens as `NameToken`. Due to certain compiler restrictions the bundled attributes will be retrieved as the helper type `OgnToken`, which is castable to `NameToken` for subsequent use.
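In Python terms, the wrong-type cast returning `nullptr` behaves like a lookup returning `None`. The sketch below uses a hypothetical `get_as` helper (not a real API) to mirror the assertion in the C++ snippet above:

```python
# Hypothetical stand-in for the value<T>() cast above: requesting the wrong
# type yields None, the Python analogue of the nullptr return.
def get_as(value, requested_type):
    return value if isinstance(value, requested_type) else None

weight = 0.5  # pretend this float came out of a bundled attribute
assert get_as(weight, float) == 0.5  # matching type: the data is returned
assert get_as(weight, int) is None   # wrong type: None, like nullptr
```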
### Extracting Bundled Attribute Data - Tuple Types
The tuple data types can be accessed in exactly the same way as simple data types, with the proper cast.
## OgnTutorialBundleDataPy.py

The *py* file duplicates the functionality of the *cpp* file in Python.

```python
"""
Implementation of the Python node accessing attributes through the bundle in which they are contained.
"""
import numpy as np
import omni.graph.core as og
# Types recognized by the integer filter
_INTEGER_TYPES = [og.BaseDataType.INT, og.BaseDataType.UINT, og.BaseDataType.INT64, og.BaseDataType.UINT64]
class OgnTutorialBundleDataPy:
"""Exercise the bundled data types through a Python OmniGraph node"""
@staticmethod
def compute(db) -> bool:
"""Implements the same algorithm as the C++ node OgnTutorialBundleData.cpp
As Python is so much more flexible it doubles values on any attribute type that can handle it, unlike
the C++ node which only operates on integer types
"""
input_bundle = db.inputs.bundle
output_bundle = db.outputs.bundle
# This does a copy of the full bundle contents from the input bundle to the output bundle so that the
# output data can be modified directly.
output_bundle.bundle = input_bundle
# The "attributes" member is a list that can be iterated. The members of the list do not contain real
# og.Attribute objects, which must always exist, they are wrappers on og.AttributeData objects, which can
# come and go at runtime.
for bundled_attribute in output_bundle.attributes:
attribute_type = bundled_attribute.type
# The C++ node only operates on integer types; that filter is shown commented out
# here because this Python version handles any type that supports doubling.
# if attribute_type.base_type not in _INTEGER_TYPES:
#     continue
# This operation does the right thing on all compatible types, unlike the C++ equivalent where it
# requires special handling for each variation of the data types it can handle.
if attribute_type.base_type == og.BaseDataType.TOKEN:
if attribute_type.array_depth > 0:
bundled_attribute.value = [f"{element}{element}" for element in bundled_attribute.value]
else:
bundled_attribute.value = f"{bundled_attribute.value}{bundled_attribute.value}"
elif attribute_type.role in [og.AttributeRole.TEXT, og.AttributeRole.PATH]:
bundled_attribute.value = f"{bundled_attribute.value}{bundled_attribute.value}"
else:
try:
bundled_attribute.value = np.multiply(bundled_attribute.value, 2)
except TypeError:
db.log_error(f"This node does not handle data of type {attribute_type.get_type_name()}")
        return True
```
# Tutorial 17 - Python State Attributes Node
This node illustrates how you can use state attributes. These are attributes that are not meant to be connected to other nodes as they maintain a node’s internal state, persistent from one evaluation to the next.
As they are persistent, care must be taken that they be initialized properly. This can take the form of a reset flag, as seen on this node, state flag values with known defaults that describe the validity of the state attribute data, or using a checksum on inputs, among other possibilities.
State attributes can be both read and written, like output attributes. The presence of state attributes will also inform the evaluators on what type of parallel scheduling is appropriate.
These attributes provide a similar functionality to those found in Tutorial 13 - Python State Node, except that being node attributes the structure is visible to the outside world, making it easier to construct UI and visualizers for it.
## OgnTutorialStateAttributesPy.ogn
The `.ogn` file containing the implementation of a node named “omni.graph.tutorials.StateAttributesPy”, with a couple of state attributes that both read and write values during the compute.
```json
{
"StateAttributesPy": {
"version": 1,
"categories": "tutorials",
"description": [
"This is a tutorial node. It exercises state attributes to remember data from one execution to the next."
],
"language": "python",
"metadata": {
"uiName": "Tutorial Python Node: State Attributes"
},
"inputs": { "ignored": { "type": "bool", "description": "Ignore me" } },
"state": {
"reset": {
"type": "bool",
"description": [
"If true then the inputs are ignored and outputs are set to default values, then this",
"flag is set to false for subsequent executions."
]
}
}
}
}
```
```json
{
"reset": {
"type": "bool",
"description": "A boolean flag to reset the monotonically increasing output value to 0"
},
"monotonic": {
"type": "int",
"description": "The monotonically increasing output value, reset to 0 when the reset value is true"
},
"$externalTest": [
"This test infrastructure does not yet support multiple successive executions, so",
"an external test script called 'test_tutorial_state_attributes_py.py' is used for that case.",
"This test illustrates how a state value can be initialized before the test runs and checked after",
"the test runs by using the _get and _set suffix on the state namespace. Having no suffix defaults to",
"_get, where the expected state value is checked after running."
],
"tests": [
{
"state_set:monotonic": 7,
"state_get:reset": false,
"state:monotonic": 8
}
]
}
```
## OgnTutorialStateAttributesPy.py
The `.py` file contains the compute method that uses the state attributes to run the algorithm.
```python
"""
Implementation of a Python node that uses state attributes to maintain information between evaluations.
The node has a simple monotonically increasing state output "monotonic" that can be reset by setting the state
attribute "reset" to true.
"""
class OgnTutorialStateAttributesPy:
"""Use internal node state information in addition to inputs"""
@staticmethod
def compute(db) -> bool:
"""Compute the output based on inputs and internal state"""
# State attributes are the only ones guaranteed to remember and modify their values.
# Care must be taken to set proper defaults when the node initializes, otherwise you could be
# starting in an unknown state.
if db.state.reset:
db.state.monotonic = 0
db.state.reset = False
else:
db.state.monotonic = db.state.monotonic + 1
return True
```
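Stripped of the OmniGraph database object, the reset/monotonic pattern in this compute method reduces to a few lines of plain Python. This is an illustrative model only, exercising the same evaluation sequence as the test script:

```python
# Minimal stand-alone model of the node's state behaviour: a monotonically
# increasing counter plus a reset flag that self-clears after one evaluation.
class StateAttributes:
    def __init__(self):
        self.reset = True   # start in the reset state so 'monotonic' is well defined
        self.monotonic = 0

    def compute(self):
        if self.reset:
            self.monotonic = 0
            self.reset = False
        else:
            self.monotonic += 1
        return True

state = StateAttributes()
state.compute()                              # first evaluation resets to 0
assert (state.reset, state.monotonic) == (False, 0)
state.compute()
state.compute()
assert state.monotonic == 2                  # incremented once per evaluation
state.reset = True
state.compute()
assert state.monotonic == 0                  # reset honoured again
```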
## Test Script
The .ogn test infrastructure currently only supports single evaluation, which will not be sufficient to test state
attribute manipulations. This test script runs multiple evaluations and verifies that the state information is
updated as expected after each evaluation.
```python
"""
Tests for the omnigraph.tutorial.stateAttributesPy node
"""
import omni.graph.core as og
import omni.graph.core.tests as ogts
class TestTutorialStateAttributesPy(ogts.OmniGraphTestCase):
""".ogn tests only run once while state tests require multiple evaluations, handled here"""
async def test_tutorial_state_attributes_py_node(self):
"""Test basic operation of the Python tutorial node containing state attributes"""
keys = og.Controller.Keys
```
```python
(_ , [state_node], _ , _) = og.Controller.edit(
"/TestGraph",
{
keys.CREATE_NODES: ("StateNode", "omni.graph.tutorials.StateAttributesPy"),
keys.SET_VALUES: ("StateNode.state:reset", True),
},
)
await og.Controller.evaluate()
reset_attr = og.Controller.attribute("state:reset", state_node)
monotonic_attr = og.Controller.attribute("state:monotonic", state_node)
self.assertEqual(og.Controller.get(reset_attr), False, "Reset attribute set back to False")
self.assertEqual(og.Controller.get(monotonic_attr), 0, "Monotonic attribute reset to start")
await og.Controller.evaluate()
self.assertEqual(og.Controller.get(reset_attr), False, "Reset attribute still False")
self.assertEqual(og.Controller.get(monotonic_attr), 1, "Monotonic attribute incremented once")
await og.Controller.evaluate()
self.assertEqual(og.Controller.get(monotonic_attr), 2, "Monotonic attribute incremented twice")
og.Controller.set(reset_attr, True)
await og.Controller.evaluate()
self.assertEqual(og.Controller.get(reset_attr), False, "Reset again set back to False")
self.assertEqual(og.Controller.get(monotonic_attr), 0, "Monotonic attribute again reset to start")
```
# Tutorial 18 - Node With Internal State
This node illustrates how you can use internal state information, so long as you inform OmniGraph that you are doing so in order for it to make more intelligent execution scheduling decisions.
The advantage of using internal state data rather than state attributes is that the data can be in any structure you choose, not just those supported by OmniGraph. The disadvantage is that being opaque, none of the generic UI will be able to show information about that data.
An internal state can be associated with every graph instance of that node instance (db.perInstanceState), or a single state shared by all graph instances of that node instance can be used instead (db.sharedState).
Notes to the reader:
- A “node instance” refers to each individual node of a given type used in a graph. For example, you can have many instances of the node “Add” in a given graph. Each copy of this “Add” node is a node instance.
- A “graph instance” refers to the data associated to a prim when this graph is applied to it (prim known as the “graph target”)
## OgnTutorialState.ogn
The `.ogn` file containing the implementation of a node named "omni.graph.tutorials.State". Unlike Python nodes with internal state, the C++ nodes do not require an empty "state" section, as the presence of state information is inferred from the data members in the node implementation class (i.e. `mIncrementValue` in this node).
```json
{
"State": {
"version": 1,
"categories": "tutorials",
"scheduling": ["threadsafe"],
"description": [
"This is a tutorial node. It makes use of internal state information",
"to continuously increment an output."
],
"metadata": {
"uiName": "Tutorial Node: Internal States"
},
"inputs": {
"overrideValue": {
"type": "int64",
"description": "Value to use instead of the monotonically increasing internal one when 'override' is true",
"default": 0,
"metadata": {
"uiName": "Override Value"
}
}
}
}
}
```
```json
{
"type": "OgnTutorialState",
"inputs": {
"overrideValue": {
"type": "int64",
"description": "The value to override the internal state with",
"default": 0,
"metadata": {
"uiName": "Override Value"
}
},
"override": {
"type": "bool",
"description": "When true get the output from the overrideValue, otherwise use the internal value",
"default": false,
"metadata": {
"uiName": "Enable Override"
}
},
"shared": {
"type": "bool",
"description": "Whether to use the state shared by all graph instances for this node, or a per graph-instance state",
"default": false,
"metadata": {
"uiName": "Enable using a shared state amongst graph instances"
}
}
},
"outputs": {
"monotonic": {
"type": "int64",
"description": "Monotonically increasing output, set by internal state information",
"default": 0,
"metadata": {
"uiName": "State-Based Output"
}
}
},
"tests": [
{
"inputs:overrideValue": 555,
"inputs:override": true,
"outputs:monotonic": 555
}
],
"$tests": "State tests are better done by a script that can control how many times a node executes."
}
```
## OgnTutorialState.cpp
The `.cpp` file contains the compute method and the internal state information used to run the algorithm.
By adding non-static class members to your node, OmniGraph will know to instantiate a unique instance of your node for every evaluation context, letting you use those members as state data. The data in the node is invisible to OmniGraph as a whole and persists between evaluations of the node. Note that using the node class itself as the internal state is recommended, but not mandatory: an internal state can be an instance of any C++ class of your choosing. This is particularly useful when using the sharedState as well as the perInstanceState, since the two can be different C++ classes.
```cpp
// Copyright (c) 2020-2024, NVIDIA CORPORATION. All rights reserved.
//
// NVIDIA CORPORATION and its licensors retain all intellectual property
// and proprietary rights in and to this software, related documentation
// and any modifications thereto. Any use, reproduction, disclosure or
// distribution of this software and related documentation without an express
// license agreement from NVIDIA CORPORATION is strictly prohibited.
//
#include <OgnTutorialStateDatabase.h>
#include <atomic>
// Implementation of a C++ OmniGraph node that uses internal state information to compute outputs.
class OgnTutorialState
{
// ------------------------------------------------------------
// We can use the node class (or another type) to hold some state members, that can be attached to every instance
    // of that nodetype in a graph (ie. each time a copy of this node is added to a graph, also called a "node
    // instance"). If the graph itself is instantiated (some graph targets have been added), every "graph instance" will
    // have its own state for this "node instance". An object shared amongst all graph instances can be associated to
    // each node instance, and accessed through the database class member "db.sharedState<>()". Another object can be
    // associated to each graph instance (for each node instance), and accessed through the database class member
    // "db.perInstanceState<>()".

    // Start all nodes with a monotonic increment value of 0
    size_t mIncrementValue{ 0 };

    // ------------------------------------------------------------
    // You can also define node-type static data, although you are responsible for dealing with any
    // thread-safety issues that may arise from accessing it from multiple threads (or multiple hardware)
    // at the same time. In this case it is a single value that is read and incremented so an atomic
    // variable is sufficient. In real applications this would be a complex structure, potentially keyed off
    // of combinations of inputs or real time information, requiring more stringent locking.
    // This value increases for each node and indicates the value from which a node's own internal state value
    // increments. e.g. the first instance of this node type will start its state value at 1, the second instance at 2,
    // and so on...
    static std::atomic<size_t> sStartingValue;

public:
    // You can add some per-instance initialization/release code in your constructor/destructor.
    // The initInstance and releaseInstance callbacks can be implemented as well if some more demanding
    // setup/shutdown work is required.
    OgnTutorialState()
    {
    }
    ~OgnTutorialState()
    {
    }

    // Helper function to update the node's internal state based on the previous values and the per-class state.
    // You could also do this in-place in the compute() function; pulling it out here makes the state manipulation
    // more explicit.
    void updateState()
    {
        mIncrementValue += 1;
    }

    // When this method is implemented by the node class, it gets called by the framework
    // whenever an instance is added to the graph
    static void initInstance(const NodeObj& node, GraphInstanceID instanceId);

    // When this method is implemented by the node class, it gets called by the framework
    // whenever an instance is removed from the graph
    static void releaseInstance(const NodeObj& node, GraphInstanceID instanceId);

    // When this method is implemented by the node class, it gets called by the framework
    // before the node gets removed from the graph
    static void release(const NodeObj& node);

public:
    // Compute the output based on inputs and internal state
    static bool compute(OgnTutorialStateDatabase& db);
};

//////////////////////////////////////////////////////////////////////////
// The shared state can be another object
class OgnTutorialSharedState : public OgnTutorialState
{
public:
    // Adds a member to the shared state to track the number of live initiated instances
    size_t mInstanceCount{ 0 };
};


//////////////////////////////////////////////////////////////////////////
/// IMPLEMENTATIONS

std::atomic<size_t> OgnTutorialState::sStartingValue{ 0 };

//--------------------------------------------------------------------------------------
void OgnTutorialState::initInstance(NodeObj const& node, GraphInstanceID instanceId)
{
    OgnTutorialState& state = OgnTutorialStateDatabase::sPerInstanceState<OgnTutorialState>(node, instanceId);
    state.mIncrementValue = sStartingValue;
    sStartingValue += 100;

    OgnTutorialSharedState& sharedState = OgnTutorialStateDatabase::sSharedState<OgnTutorialSharedState>(node);
    sharedState.mInstanceCount++;
}

//--------------------------------------------------------------------------------------
void OgnTutorialState::releaseInstance(NodeObj const& node, GraphInstanceID instanceId)
{
    OgnTutorialSharedState& sharedState = OgnTutorialStateDatabase::sSharedState<OgnTutorialSharedState>(node);
    sharedState.mInstanceCount--;
}

//--------------------------------------------------------------------------------------
void OgnTutorialState::release(NodeObj const& node)
{
    OgnTutorialSharedState& sharedState = OgnTutorialStateDatabase::sSharedState<OgnTutorialSharedState>(node);
    if (sharedState.mInstanceCount != 0)
    {
        throw std::runtime_error("Releasing the node while some instances are still alive");
    }
}

//--------------------------------------------------------------------------------------
bool OgnTutorialState::compute(OgnTutorialStateDatabase& db)
{
    // This illustrates how internal state and inputs can be used in conjunction. The inputs can be used
    // to divert to a different computation path.
    if (db.inputs.override())
    {
        db.outputs.monotonic() = db.inputs.overrideValue();
    }
    else
    {
        // OmniGraph ensures that the database contains the correct internal state information for the node
        // being evaluated. Beyond that it has no knowledge of the data within that state.
        // This node can access either the object shared by all graph instances ("db.sharedState"),
        // or a distinct one that is allocated per graph instance.
        OgnTutorialState& state =
            db.inputs.shared() ? db.sharedState<OgnTutorialSharedState>() : db.perInstanceState<OgnTutorialState>();
        db.outputs.monotonic() = state.mIncrementValue;
        // Update the node's internal state data for the next evaluation.
        state.updateState();
    }
    return true;
}

REGISTER_OGN_NODE()
```
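The algorithm above can be modeled outside OmniGraph in a few lines of plain Python (illustrative names; the real node is instantiated and evaluated by the framework): each new instance starts its counter 100 higher than the previous one, and each compute either forwards the override value or emits and then bumps the internal counter.

```python
# Plain-Python model of OgnTutorialState's algorithm (names are illustrative).
class TutorialState:
    starting_value = 0  # models the static std::atomic<size_t> sStartingValue

    def __init__(self):
        # models initInstance(): seed the counter, then advance the shared seed
        self.increment_value = TutorialState.starting_value
        TutorialState.starting_value += 100

    def compute(self, override=False, override_value=0):
        if override:
            return override_value
        result = self.increment_value
        self.increment_value += 1  # models updateState()
        return result

first, second = TutorialState(), TutorialState()
assert (first.compute(), first.compute()) == (0, 1)  # monotonic per instance
assert second.compute() == 100                       # second instance starts 100 higher
assert first.compute(override=True, override_value=555) == 555
```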
tutorial19.md | # Tutorial 19 - Extended Attribute Types
Extended attribute types are so-named because they extend the types of data an attribute can accept from one type to several types. Extended attributes come in two flavours. The _any_ type is the most flexible. It allows a connection with any other attribute type:
```json
{
"inputs": {
"myAnyAttribute": {
"description": "Accepts an incoming connection from any type of attribute",
"type": "any"
}
}
}
```
The union type, represented as an array of type names, allows a connection from a limited subset of attribute types. Here’s one that can connect to attributes of type _float[3]_ and _double[3]_:
```json
{
"inputs": {
"myUnionAttribute": {
"description": "Accepts an incoming connection from attributes with a vector of a 3-tuple of numbers",
"type": ["float[3]", "double[3]"]
}
}
}
```
> Note
> “union” is not an actual type name, as the type names are specified by a list. It is just the nomenclature used for the set of all attributes that can be specified in this way. More details about union types can be found in [omni.graph.docs.ogn_attribute_types](Redirects.html#zzogn-attribute-types).
As you will see in the code examples, the value extracted from the database for such attributes has to be checked for the actual resolved data type. Until an extended attribute is connected its data type will be unresolved and it will not have a value. For this reason _”default”_ values are not allowed on extended attributes.
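The resolution behavior can be sketched in plain Python (a toy model, not the OmniGraph API): until a connection resolves the type, the attribute has no resolved type or value, and a union rejects connections outside its list.

```python
# Toy model of an extended attribute whose type resolves on connection.
class ExtendedAttribute:
    def __init__(self, allowed_types):
        self.allowed_types = allowed_types  # e.g. ["float[3]", "double[3]"], or None for "any"
        self.resolved_type = None           # unresolved until connected

    def connect(self, source_type):
        if self.allowed_types is not None and source_type not in self.allowed_types:
            raise TypeError(f"{source_type} is not in the union {self.allowed_types}")
        self.resolved_type = source_type

union_attr = ExtendedAttribute(["float[3]", "double[3]"])
assert union_attr.resolved_type is None  # unresolved, so no value (and no default allowed)
union_attr.connect("float[3]")
assert union_attr.resolved_type == "float[3]"

any_attr = ExtendedAttribute(None)       # "any" accepts every type
any_attr.connect("token")
assert any_attr.resolved_type == "token"

try:
    ExtendedAttribute(["float[3]"]).connect("token")
    assert False, "union should reject types outside its list"
except TypeError:
    pass
```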
## OgnTutorialExtendedTypes.ogn
The `ogn` file shows the implementation of a node named “omni.graph.tutorials.ExtendedTypes”, which has inputs and outputs with the extended attribute types.
```json
{
"ExtendedTypes": {
"version": 1,
"categories": "tutorials",
"scheduling": ["threadsafe"],
"description": [
"This is a tutorial node. It exercises functionality for the manipulation of the extended",
"attribute types."
]
}
}
```
```json
{
"uiName": "Tutorial Node: Extended Attribute Types",
"inputs": {
"floatOrToken": {
"$comment": [
"Support for a union of types is noted by putting a list into the attribute type.",
"Each element of the list must be a legal attribute type from the supported type list."
],
"type": ["float", "token"],
"description": "Attribute that can either be a float value or a token value",
"uiName": "Float Or Token",
"unvalidated": true
},
"toNegate": {
"$comment": "An example showing that array and tuple types are also legal members of a union.",
"type": ["bool[]", "float[]"],
"description": "Attribute that can either be an array of booleans or an array of floats",
"uiName": "To Negate",
"unvalidated": true
},
"tuple": {
"$comment": "Tuple types are also allowed, implemented as 'any' to show similarities",
"type": "any",
"description": "Variable size/type tuple values",
"uiName": "Tuple Values",
"unvalidated": true
},
"flexible": {
"$comment": "You don't even have to have the same shape of data in a union",
"type": ["float[3][]", "token"],
"description": "Flexible data type input",
"uiName": "Flexible Values",
"unvalidated": true
}
},
"outputs": {
"doubledResult": {
"type": "any",
"description": [
"If the input 'simpleInput' is a float this is 2x the value.",
"If it is a token this contains the input token repeated twice."
],
"uiName": "Doubled Input Value",
"unvalidated": true
},
"negatedResult": {
"type": ["bool[]", "float[]"],
"description": "Result of negating the data from the 'toNegate' input",
"uiName": "Negated Result",
"unvalidated": true
}
}
}
```
## OgnTutorialExtendedTypes.cpp
The `.cpp` file contains the compute method, organized as one helper lambda per extended-attribute pairing, along with the type resolution callback.
```cpp
auto typeError = [&](const char* message, const Type& type1, const Type& type2)
{
db.logError("%s (%s -> %s)", message, getOgnTypeName(type1).c_str(), getOgnTypeName(type2).c_str());
};
auto computeSimpleValues = [&]()
{
// ====================================================================================================
// Compute for the union types that resolve to simple values.
// Accepted value types are floats and tokens. As these were the only types specified in the union
// definition the node does not have to worry about other numeric types, such as int or double.
// The node can decide what the meaning of an attempt to compute with unresolved types is.
// For this particular node they are treated as silent success.
const auto& floatOrToken = db.inputs.floatOrToken();
auto& doubledResult = db.outputs.doubledResult();
if (floatOrToken.resolved() && doubledResult.resolved())
{
// Check for an exact type match for the input and output
if (floatOrToken.type() != doubledResult.type())
{
// Mismatched types are possible, and result in no compute
typeWarning("Simple resolved types do not match", floatOrToken.type(), doubledResult.type());
return false;
}
// When extracting extended types the templated get<> method returns an object that contains the cast
// data. It can be cast to a boolean for quick checks for matching types.
//
            // Note: The single "=" in these if statements is intentional. It facilitates one-line set-and-test
            // of the typed values.
//
if (auto floatValue = floatOrToken.get<float>())
{
// Once the existence of the cast type is verified it can be dereferenced to get at the raw data,
// whose types are described in the tutorial on bundled data.
if (auto doubledValue = doubledResult.get<float>())
{
*doubledValue = *floatValue * 2.0f;
}
            else
            {
```
```c++
// This could be an assert because it should never happen. The types were confirmed above to
// match, so they should have cast to the same types without incident.
                typeError("Simple types were matched as float then failed to cast properly", floatOrToken.type(), doubledResult.type());
return false;
}
}
else if (auto tokenValue = floatOrToken.get<OgnToken>())
{
if (auto doubledValue = doubledResult.get<OgnToken>())
{
std::string inputString{ db.tokenToString(*tokenValue) };
inputString += inputString;
*doubledValue = db.stringToToken(inputString.c_str());
}
else
{
// This could be an assert because it should never happen. The types were confirmed above to
// match, so they should have cast to the same types without incident.
                typeError("Simple types were matched as token then failed to cast properly", floatOrToken.type(), doubledResult.type());
return false;
}
}
else
{
// As Union types are supposed to restrict the data types being passed in to the declared types
// any unrecognized types are an error, not a warning.
            typeError("Simple types resolved to unknown types", floatOrToken.type(), doubledResult.type());
return false;
}
}
else
{
// Unresolved types are reasonable, resulting in no compute
return true;
}
return true;
};
auto computeArrayValues = [&]()
{
// ====================================================================================================
// Compute for the union types that resolve to arrays.
// Accepted value types are arrays of bool or arrays of float, which are extracted as interfaces to
// those values so that resizing can happen transparently through the fabric.
//
// These interfaces are similar to what you've seen in regular array attributes - they support resize(),
// operator[], and range-based for loops.
```
```c++
//
const auto& toNegate = db.inputs.toNegate();
auto& negatedResult = db.outputs.negatedResult();
if (toNegate.resolved() && negatedResult.resolved())
{
// Check for an exact type match for the input and output
if (toNegate.type() != negatedResult.type())
{
// Mismatched types are possible, and result in no compute
typeWarning("Array resolved types do not match", toNegate.type(), negatedResult.type());
return false;
}
// Extended types can be any legal attribute type. Here the types in the extended attribute can be
// either an array of booleans or an array of integers.
if (auto boolArray = toNegate.get<bool[]>())
{
auto valueAsBoolArray = negatedResult.get<bool[]>();
if (valueAsBoolArray)
{
valueAsBoolArray.resize(boolArray->size());
size_t index{ 0 };
for (auto& value : *boolArray)
{
(*valueAsBoolArray)[index++] = !value;
}
}
else
{
// This could be an assert because it should never happen. The types were confirmed above to
// match, so they should have cast to the same types without incident.
typeError("Array types were matched as bool[] then failed to cast properly", toNegate.type(),
negatedResult.type());
return false;
}
}
else if (auto floatArray = toNegate.get<float[]>())
{
auto valueAsFloatArray = negatedResult.get<float[]>();
```
```cpp
if (valueAsFloatArray)
{
valueAsFloatArray.resize(floatArray->size());
size_t index{ 0 };
for (auto& value : *floatArray)
{
(*valueAsFloatArray)[index++] = -value;
}
}
else
{
// This could be an assert because it should never happen. The types were confirmed above to
// match, so they should have cast to the same types without incident.
typeError("Array types were matched as float[] then failed to cast properly", toNegate.type(), negatedResult.type());
return false;
            }
        }
        else
        {
            // As Union types are supposed to restrict the data types being passed in to the declared types
            // any unrecognized types are an error, not a warning.
            typeError("Array type not recognized", toNegate.type(), negatedResult.type());
            return false;
        }
    }
    else
    {
        // Unresolved types are reasonable, resulting in no compute
        return true;
    }
    return true;
};
auto computeTupleValues = [&]()
{
// ====================================================================================================
// Compute for the "any" types that only handle tuple values. In practice you'd only use "any" when the
// type of data you handle is unrestricted. This is more an illustration to show how in practical use the
// two types of attribute are accessed exactly the same way, the only difference is restrictions that the
// OmniGraph system will put on potential connections.
//
// For simplicity this node will treat unrecognized type as a warning with success.
// Full commentary and error checking is elided as it will be the same as for the above examples.
// The algorithm for tuple values is a component-wise negation.
const auto& tupleInput = db.inputs.tuple();
auto& tupleOutput = db.outputs.tuple();
if (tupleInput.resolved() && tupleOutput.resolved())
{
```
```cpp
if (tupleInput.type() != tupleOutput.type())
{
typeWarning("Tuple resolved types do not match", tupleInput.type(), tupleOutput.type());
return false;
}
// This node will only recognize the float[3] and int[2] cases, to illustrate that tuple count and
// base type are both flexible.
if (auto float3Input = tupleInput.get<float[3]>()){
if (auto float3Output = tupleOutput.get<float[3]>()){
(*float3Output)[0] = -(*float3Input)[0];
(*float3Output)[1] = -(*float3Input)[1];
(*float3Output)[2] = -(*float3Input)[2];
}
} else if (auto int2Input = tupleInput.get<int[2]>()){
if (auto int2Output = tupleOutput.get<int[2]>()){
(*int2Output)[0] = -(*int2Input)[0];
(*int2Output)[1] = -(*int2Input)[1];
}
} else {
// As "any" types are not restricted in their data types but this node is only handling two of
// them an unrecognized type is just unimplemented code.
typeWarning("Unimplemented type combination", tupleInput.type(), tupleOutput.type());
return true;
}
```
```cpp
        }
        return true;
    };

    auto computeFlexibleValues = [&]()
    {
        // ====================================================================================================
        // Complex union type that handles both simple values and an array of tuples. It illustrates how the
        // data types in a union do not have to be related in any way.
        //
        // Full commentary and error checking is elided as it will be the same as for the above examples.
        // The algorithm for tuple array values is to negate everything in the float3 array values, and to reverse
        // the string for string values.
        const auto& flexibleInput = db.inputs.flexible();
        auto& flexibleOutput = db.outputs.flexible();

        if (flexibleInput.resolved() && flexibleOutput.resolved())
        {
            if (flexibleInput.type() != flexibleOutput.type())
            {
                typeWarning("Flexible resolved types do not match", flexibleInput.type(), flexibleOutput.type());
                return false;
            }

            // Arrays of tuples are handled with the same interface as with normal attributes.
            if (auto float3ArrayInput = flexibleInput.get<float[][3]>()){
                if (auto float3ArrayOutput = flexibleOutput.get<float[][3]>()){
                    size_t itemCount = float3ArrayInput.size();
                    float3ArrayOutput.resize(itemCount);
                    for (size_t index = 0; index < itemCount; index++)
                    {
                        (*float3ArrayOutput)[index][0] = -(*float3ArrayInput)[index][0];
                        (*float3ArrayOutput)[index][1] = -(*float3ArrayInput)[index][1];
                        (*float3ArrayOutput)[index][2] = -(*float3ArrayInput)[index][2];
                    }
                }
            }
```
```cpp
            else if (auto tokenInput = flexibleInput.get<OgnToken>())
            {
                if (auto tokenOutput = flexibleOutput.get<OgnToken>())
                {
                    std::string toReverse{ db.tokenToString(*tokenInput) };
                    std::reverse(toReverse.begin(), toReverse.end());
                    *tokenOutput = db.stringToToken(toReverse.c_str());
                }
            }
            else
            {
                typeError("Unrecognized type combination", flexibleInput.type(), flexibleOutput.type());
                return false;
            }
        }
        else
        {
            // Unresolved types are reasonable, resulting in no compute
            return true;
        }
        return true;
    };
```
```cpp
// This approach lets either section fail while still computing the other.
    bool computedOne = computeSimpleValues();
    computedOne = computeArrayValues() || computedOne;
    computedOne = computeTupleValues() || computedOne;
    computedOne = computeFlexibleValues() || computedOne;
    if (!computedOne)
    {
        db.logWarning("None of the inputs had resolved type, resulting in no compute");
    }
    return computedOne;
```
```cpp
    static void onConnectionTypeResolve(const NodeObj& nodeObj)
    {
        // The attribute types resolve in pairs
        AttributeObj pairs[][2]{ { nodeObj.iNode->getAttributeByToken(nodeObj, inputs::floatOrToken.token()),
                                   nodeObj.iNode->getAttributeByToken(nodeObj, outputs::doubledResult.token()) },
                                 { nodeObj.iNode->getAttributeByToken(nodeObj, inputs::toNegate.token()),
                                   nodeObj.iNode->getAttributeByToken(nodeObj, outputs::negatedResult.token()) },
                                 { nodeObj.iNode->getAttributeByToken(nodeObj, inputs::tuple.token()),
                                   nodeObj.iNode->getAttributeByToken(nodeObj, outputs::tuple.token()) },
                                 { nodeObj.iNode->getAttributeByToken(nodeObj, inputs::flexible.token()),
                                   nodeObj.iNode->getAttributeByToken(nodeObj, outputs::flexible.token()) } };
        for (auto& pair : pairs)
        {
            nodeObj.iNode->resolveCoupledAttributes(nodeObj, &pair[0], 2);
        }
    }
};
```
Information on the raw types extracted from the extended type values can be seen in Tutorial 16 - Bundle Data.
## OgnTutorialExtendedTypesPy.py
This is a Python version of the above C++ node with exactly the same set of attributes and the same algorithm. It shows the parallels between manipulating extended attribute types in both languages. (The `.ogn` file is omitted for brevity, being identical to the previous one save for the addition of a `"language": "python"` property.)
```python
"""
Implementation of the Python node accessing attributes whose type is determined at runtime.
This class exercises access to the DataModel through the generated database class for all simple data types.
"""
import omni.graph.core as og
# Hardcode each of the expected types for easy comparison
FLOAT_TYPE = og.Type(og.BaseDataType.FLOAT)
TOKEN_TYPE = og.Type(og.BaseDataType.TOKEN)
BOOL_ARRAY_TYPE = og.Type(og.BaseDataType.BOOL, array_depth=1)
FLOAT_ARRAY_TYPE = og.Type(og.BaseDataType.FLOAT, array_depth=1)
FLOAT3_TYPE = og.Type(og.BaseDataType.FLOAT, tuple_count=3)
INT2_TYPE = og.Type(og.BaseDataType.INT, tuple_count=2)
FLOAT3_ARRAY_TYPE = og.Type(og.BaseDataType.FLOAT, tuple_count=3, array_depth=1)


class OgnTutorialExtendedTypesPy:
    """Exercise the runtime data types through a Python OmniGraph node"""

    @staticmethod
    def compute(db) -> bool:
        """Implements the same algorithm as the C++ node OgnTutorialExtendedTypes.cpp.

        It follows the same code pattern for easier comparison, though in practice you would probably code Python
        nodes differently from C++ nodes to take advantage of the strengths of each language.
        """

        def __compare_resolved_types(input_attribute, output_attribute) -> og.Type:
            """Returns the resolved type if they are the same, outputs a warning and returns None otherwise"""
            resolved_input_type = input_attribute.type
            resolved_output_type = output_attribute.type
            if resolved_input_type != resolved_output_type:
                db.log_warn(f"Resolved types do not match {resolved_input_type} -> {resolved_output_type}")
                return None
            return resolved_input_type if resolved_input_type.base_type != og.BaseDataType.UNKNOWN else None

        # ---------------------------------------------------------------------------------------------------
        def _compute_simple_values():
            """Perform the first algorithm on the simple input data types"""

            # Unlike C++ code the Python types are flexible so you must check the data types to do the right thing.
            # This works out better when the operation is the same as you don't even have to check the data type. In
            # this case the "doubling" operation is slightly different for floats and tokens.
            resolved_type = __compare_resolved_types(db.inputs.floatOrToken, db.outputs.doubledResult)
            if resolved_type == FLOAT_TYPE:
                db.outputs.doubledResult.value = db.inputs.floatOrToken.value * 2.0
            elif resolved_type == TOKEN_TYPE:
                db.outputs.doubledResult.value = db.inputs.floatOrToken.value + db.inputs.floatOrToken.value

            # A Pythonic way to do the same thing by just applying an operation and checking for compatibility is:
            # try:
            #     db.outputs.doubledResult = db.inputs.floatOrToken * 2.0
            # except TypeError:
            #     # Gets in here for token types since multiplying string by float is not legal
            #     db.outputs.doubledResult = db.inputs.floatOrToken + db.inputs.floatOrToken

            return True

        # ---------------------------------------------------------------------------------------------------
        def _compute_array_values():
            """Perform the second algorithm on the array input data types"""

            resolved_type = __compare_resolved_types(db.inputs.toNegate, db.outputs.negatedResult)
            if resolved_type == BOOL_ARRAY_TYPE:
                db.outputs.negatedResult.value = [not value for value in db.inputs.toNegate.value]
            elif resolved_type == FLOAT_ARRAY_TYPE:
                db.outputs.negatedResult.value = [-value for value in db.inputs.toNegate.value]

            return True

        # ---------------------------------------------------------------------------------------------------
        def _compute_tuple_values():
            """Perform the third algorithm on the 'any' data types"""

            resolved_type = __compare_resolved_types(db.inputs.tuple, db.outputs.tuple)
            # Notice how, since the operation is applied the same for both recognized types, the
            # same code can handle both of them.
            if resolved_type in (FLOAT3_TYPE, INT2_TYPE):
                db.outputs.tuple.value = tuple(-x for x in db.inputs.tuple.value)
            # An unresolved type is a temporary state and okay, resolving to unsupported types means the graph is in
            # an unsupported configuration that needs to be corrected.
            elif resolved_type is not None:
                type_name = resolved_type.get_type_name()
                db.log_error(f"Only float[3] and int[2] types are supported by this node, not {type_name}")
                return False

            return True

        # ---------------------------------------------------------------------------------------------------
        def _compute_flexible_values():
            """Perform the fourth algorithm on the multi-shape data types"""

            resolved_type = __compare_resolved_types(db.inputs.flexible, db.outputs.flexible)
            if resolved_type == FLOAT3_ARRAY_TYPE:
                db.outputs.flexible.value = [(-x, -y, -z) for (x, y, z) in db.inputs.flexible.value]
            elif resolved_type == TOKEN_TYPE:
                db.outputs.flexible.value = db.inputs.flexible.value[::-1]

            return True

        # ---------------------------------------------------------------------------------------------------
        compute_success = _compute_simple_values()
        compute_success = _compute_array_values() and compute_success
        compute_success = _compute_tuple_values() and compute_success
        compute_success = _compute_flexible_values() and compute_success

        # ---------------------------------------------------------------------------------------------------
        # As Python has a much more flexible typing system it can do things in a few lines that require a lot
        # more in C++. One such example is the ability to add two arbitrary data types. Here is an example of
        # how, using "any" type inputs "a", and "b", with an "any" type output "result" you can generically
        # add two elements without explicitly checking the type, failing only when Python cannot support
        # the operation.
        #
        # try:
        #     db.outputs.result = db.inputs.a + db.inputs.b
        #     return True
        # except TypeError:
        #     a_type = inputs.a.type().get_type_name()
        #     b_type = inputs.b.type().get_type_name()
        #     db.log_error(f"Cannot add attributes of type {a_type} and {b_type}")
        #     return False

        return True

    @staticmethod
    def on_connection_type_resolve(node) -> None:
        # There are 4 sets of type-coupled attributes in this node, meaning that the base_type of the attributes
        # must be the same for the node to function as designed.
        # 1. floatOrToken <-> doubledResult
        # 2. toNegate <-> negatedResult
        # 3. tuple <-> tuple
        # 4. flexible <-> flexible
        #
        # The following code uses a helper function to resolve the attribute types of the coupled pairs. Note that
        # without this logic a chain of extended-attribute connections may result in a non-functional graph, due to
        # the requirement that types be resolved before graph evaluation, and the ambiguity of the graph without knowing
        # how the types are related.
        og.resolve_fully_coupled(
            [node.get_attribute("inputs:floatOrToken"), node.get_attribute("outputs:doubledResult")]
        )
        og.resolve_fully_coupled([node.get_attribute("inputs:toNegate"), node.get_attribute("outputs:negatedResult")])
        og.resolve_fully_coupled([node.get_attribute("inputs:tuple"), node.get_attribute("outputs:tuple")])
        og.resolve_fully_coupled([node.get_attribute("inputs:flexible"), node.get_attribute("outputs:flexible")])
```
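The generic-addition pattern described in the closing comment of `compute` relies only on ordinary Python semantics, so it can be verified outside OmniGraph (a sketch with illustrative names, not the node's API):

```python
def add_any(a, b):
    """Generic add in the spirit of the node's closing comment:
    succeed wherever Python defines '+', report failure otherwise."""
    try:
        return True, a + b
    except TypeError:
        return False, None

assert add_any(1.5, 2.5) == (True, 4.0)    # numeric addition
assert add_any("ab", "cd") == (True, "abcd")  # token-style concatenation
assert add_any("ab", 3.0) == (False, None)    # str + float raises TypeError
```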
tutorial2.md | # Tutorial 2 - Simple Data Node
The simple data node creates one input attribute and one output attribute of each of the simple types, where “simple” refers to data types that have a single component and are not arrays. (e.g. “float” is simple, “float[3]” is not, nor is “float[]”). See also [Tutorial 10 - Simple Data Node in Python](#ogn-tutorial-simpledatapy) for a similar example in Python.
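The distinction between simple and non-simple types can be captured by a one-line check (plain-Python illustration; `is_simple` is not part of the OmniGraph API):

```python
def is_simple(type_name: str) -> bool:
    # "Simple" per this tutorial: a single component and not an array,
    # e.g. "float" but not "float[3]" or "float[]".
    return "[" not in type_name

assert is_simple("float")
assert not is_simple("float[3]")
assert not is_simple("float[]")
```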
## OgnTutorialSimpleData.ogn
The `ogn` file shows the implementation of a node named “omni.graph.tutorials.SimpleData”, which has one input and one output attribute of each simple type.
```json
{
    "SimpleData": {
        "version": 1,
        "categories": "tutorials",
        "scheduling": ["threadsafe"],
        "description": [
            "This is a tutorial node. It creates both an input and output attribute of every simple supported data type. The values are modified in a simple way so that the compute modifies values."
        ],
        "$uiNameMetadata": "The value of the 'uiName' metadata can also be expressed at the top level as well",
        "uiName": "Tutorial Node: Attributes With Simple Data",
        "inputs": {
            "a_bool": {
                "type": "bool",
                "metadata": {
                    "$comment": "Metadata can also be added at the attribute level",
                    "uiName": "Sample Boolean Input"
                },
                "description": ["This is an attribute of type boolean"],
                "default": true
            },
            "a_half": {
                "type": "half",
                "$uiNameMetadata": "Like the node uiName metadata, the attribute uiName metadata also has a shortform",
                "uiName": "Sample Half Precision Input",
                "description": ["This is an attribute of type 16 bit float"],
                "$comment": "0 is used as the decimal portion due to reduced precision of this type",
                "default": 0.0
            },
            "a_int": {
                "type": "int",
                "description": ["This is an attribute of type 32 bit integer"],
                "default": 0
            },
            "a_int64": {
                "type": "int64",
                "description": ["This is an attribute of type 64 bit integer"],
                "default": 0
            },
            "a_float": {
                "type": "float",
                "description": ["This is an attribute of type 32 bit floating point"],
                "default": 0
            },
            "a_double": {
                "type": "double",
                "description": ["This is an attribute of type 64 bit floating point"],
                "default": 0
            },
            "a_token": {
                "type": "token",
                "description": ["This is an attribute of type interned string with fast comparison and hashing"],
                "default": "helloToken"
            },
            "a_path": {
                "type": "path",
                "description": ["This is an attribute of type path"],
                "default": ""
            },
            "a_string": {
                "type": "string",
                "description": ["This is an attribute of type string"],
                "default": "helloString"
            },
            "a_objectId": {
                "type": "objectId",
                "description": ["This is an attribute of type objectId"],
                "default": 0
            },
            "unsigned:a_uchar": {
                "type": "uchar",
                "description": ["This is an attribute of type unsigned 8 bit integer"],
                "default": 0
            },
            "unsigned:a_uint": {
                "type": "uint",
                "description": ["This is an attribute of type unsigned 32 bit integer"],
                "default": 0
            },
            "unsigned:a_uint64": {
                "type": "uint64",
                "description": ["This is an attribute of type unsigned 64 bit integer"],
                "default": 0
            },
            "a_constant_input": {
                "type": "int",
                "description": ["This is an input attribute whose value can be set but can only be connected as a source."],
                "metadata": {
                    "outputOnly": "1"
                }
            }
        },
        "outputs": {
            "a_bool": {
                "type": "bool",
                "uiName": "Sample Boolean Output",
                "description": ["This is a computed attribute of type boolean"],
                "default": false
            },
            "a_half": {
                "type": "half",
                "uiName": "Sample Half Precision Output",
                "description": ["This is a computed attribute of type 16 bit float"],
                "default": 1.0
            },
            "a_int": {
                "type": "int",
                "description": ["This is a computed attribute of type 32 bit integer"],
                "default": 2
            },
            "a_int64": {
                "type": "int64",
                "description": ["This is a computed attribute of type 64 bit integer"],
                "default": 3
            },
            "a_float": {
                "type": "float",
                "description": ["This is a computed attribute of type 32 bit floating point"],
                "default": 4.0
            },
            "a_double": {
                "type": "double",
                "description": ["This is a computed attribute of type 64 bit floating point"],
                "default": 5.0
            },
            "a_token": {
                "type": "token",
                "description": ["This is a computed attribute of type interned string with fast comparison and hashing"],
                "default": "six"
            },
            "a_path": {
                "type": "path",
                "description": ["This is a computed attribute of type path"],
                "default": "/"
            },
            "a_string": {
                "type": "string",
                "description": ["This is a computed attribute of type string"],
                "default": "seven"
            },
            "a_objectId": {
                "type": "objectId",
                "description": ["This is a computed attribute of type objectId"],
                "default": 8
            },
            "unsigned:a_uchar": {
                "type": "uchar",
                "description": ["This is a computed attribute of type unsigned 8 bit integer"],
                "default": 9
            },
            "unsigned:a_uint": {
                "type": "uint",
                "description": ["This is a computed attribute of type unsigned 32 bit integer"],
                "default": 10
            },
            "unsigned:a_uint64": {
                "type": "uint64",
                "description": ["This is a computed attribute of type unsigned 64 bit integer"],
                "default": 11
            }
        },
        "tests": [
            {
                "$comment": [
                    "specified in the test. Only the inputs in the list are set; others will use their ",
                    "default values. Only the outputs in the list are checked; others are ignored."
                ],
                "description": "Check that false becomes true",
                "inputs:a_bool": false,
                "outputs:a_bool": true
            },
            {
                "$comment": "This is a more verbose format of test data that provides a different grouping of values",
                "description": "Check that true becomes false",
                "inputs": {
                    "a_bool": true
                },
                "outputs": {
                    "a_bool": false
                }
            },
            {
                "$comment": "Even though these computations are all independent they can be checked in a single test.",
                "description": "Check all attributes against their expected values",
                "inputs:a_bool": false, "outputs:a_bool": true,
                "inputs:a_double": 1.1, "outputs:a_double": 2.1,
                "inputs:a_float": 3.3, "outputs:a_float": 4.3,
                "inputs:a_half": 5.0, "outputs:a_half": 6.0,
                "inputs:a_int": 7, "outputs:a_int": 8,
                "inputs:a_int64": 9, "outputs:a_int64": 10,
                "inputs:a_token": "helloToken", "outputs:a_token": "worldToken",
                "inputs:a_string": "helloString", "outputs:a_string": "worldString",
                "inputs:a_objectId": 5, "outputs:a_objectId": 6,
                "inputs:unsigned:a_uchar": 11, "outputs:unsigned:a_uchar": 12,
                "inputs:unsigned:a_uint": 13, "outputs:unsigned:a_uint": 14,
                "inputs:unsigned:a_uint64": 15, "outputs:unsigned:a_uint64": 16
            },
            {
                "$comment": "Make sure embedded quotes in a string function correctly",
                "inputs:a_token": "hello'Token", "outputs:a_token": "world'Token",
                "inputs:a_string": "hello\"String", "outputs:a_string": "world\"String"
            },
            {
                "$comment": "Make sure the path append does the right thing",
                "inputs:a_path": "/World/Domination",
                "outputs:a_path": "/World/Domination/Child"
            },
            {
                "$comment": "Check that strings and tokens get correct defaults",
                "outputs:a_token": "worldToken",
                "outputs:a_string": "worldString"
            }
        ]
    }
}
```
## OgnTutorialSimpleData.cpp
The `cpp` file contains the implementation of the compute method, which modifies each of the inputs in a simple way to create outputs that have different values.
```c++
// Copyright (c) 2020-2024, NVIDIA CORPORATION. All rights reserved.
//
// NVIDIA CORPORATION and its licensors retain all intellectual property
// and proprietary rights in and to this software, related documentation
// and any modifications thereto. Any use, reproduction, disclosure or
// distribution of this software and related documentation without an express
// license agreement from NVIDIA CORPORATION is strictly prohibited.
//
#include <OgnTutorialSimpleDataDatabase.h>

#include <string>

// Even though the path is stored as a string this tutorial will use the SdfPath API to manipulate it
#include <pxr/usd/sdf/path.h>

// This class exercises access to the DataModel through the generated database class for all simple data types.
// It's a good practice to namespace your nodes, so that they are guaranteed to be unique. Using this practice
// you can shorten your class names as well. This class could have equally been named "OgnSimpleData", since
// the "Tutorial" part of it is just another incarnation of the namespace.
namespace omni
{
namespace graph
{
namespace core
{
namespace tutorial
{

class OgnTutorialSimpleData
{
public:
    static bool compute(OgnTutorialSimpleDataDatabase& db)
    {
        // Inside the database the contained object "inputs" holds the data references for all input attributes
        // and the contained object "outputs" holds the data references for all output attributes.
        // Each of the attribute accessors is named for the name of the attribute, with the ":" replaced by "_".
        // The colon is used in USD as a convention for creating namespaces so it's safe to replace it without
        // modifying the meaning. The "inputs:" and "outputs:" prefixes in the generated attributes are matched
        // by the container names.
        //
        // For example attribute "inputs:translate:x" would be accessible as "db.inputs.translate_x" and attribute
        // "outputs:matrix" would be accessible as "db.outputs.matrix".
        //
        // The "compute" of this method modifies each attribute in a subtle way so that a test can be written
        // to verify the operation of the node. See the .ogn file for a description of the tests.
        db.outputs.a_bool() = !db.inputs.a_bool();
        db.outputs.a_half() = 1.0f + db.inputs.a_half();
        db.outputs.a_int() = 1 + db.inputs.a_int();
        db.outputs.a_int64() = 1 + db.inputs.a_int64();
        db.outputs.a_double() = 1.0 + db.inputs.a_double();
        db.outputs.a_float() = 1.0f + db.inputs.a_float();
        db.outputs.a_objectId() = 1 + db.inputs.a_objectId();

        // The namespace separator ":" has special meaning in C++ so it is replaced by "_" when it appears in names.
        // Attribute "outputs:unsigned:a_uchar" becomes "outputs.unsigned_a_uchar".
        db.outputs.unsigned_a_uchar() = 1 + db.inputs.unsigned_a_uchar();
        db.outputs.unsigned_a_uint() = 1 + db.inputs.unsigned_a_uint();
        db.outputs.unsigned_a_uint64() = 1 + db.inputs.unsigned_a_uint64();

        // Internally the string type is more akin to a std::string_view, not available until C++17.
        // The data is a pair of (const char*, size_t), but the interface provided through the accessor is
        // castable to a std::string.
        //
        // This code shows the recommended way to use it, extracting inputs into a std::string for manipulation
        // and then assigning outputs from the results. Using the referenced object directly could cause a lot of
        // unnecessary fabric allocations. (i.e. avoid auto& outputStringView = db.outputs.a_string())
        std::string outputString(db.inputs.a_string());
        if (outputString.length() > 0)
        {
            auto foundStringAt = outputString.find("hello");
            if (foundStringAt != std::string::npos)
            {
                outputString.replace(foundStringAt, 5, "world");
            }
            db.outputs.a_string() = outputString;
        }
        else
        {
            db.outputs.a_string() = "";
        }

        // The token interface is made available in the database as well, for convenience.
        // By calling "db.stringToToken()" you can look up the token ID of a given string.
        // There is also a symmetrical "db.tokenToString()" for going the other way.
        std::string outputTokenString = db.tokenToString(db.inputs.a_token());

        // ... (the remainder of the token manipulation and the path manipulation are elided from this excerpt) ...

        // The underlying ABI node object is available through the database for direct ABI calls,
        // such as this check for the objectId metadata on one of the attributes.
        const NodeObj& nodeObj = db.abi_node();
        auto objectIdAttributeObj = nodeObj.iNode->getAttribute(nodeObj, "inputs:a_objectId");
        auto objectIdMetadata = objectIdAttributeObj.iAttribute->getMetadata(objectIdAttributeObj, kOgnMetadataObjectId);
        if (not objectIdMetadata)
        {
            db.logError("Found unexpected object ID");
            return false;
        }

        return true;
    }
};

// namespaces are closed after the registration macro, to ensure the correct class is registered
REGISTER_OGN_NODE()

} // namespace tutorial
} // namespace core
} // namespace graph
} // namespace omni
```
Note how the attribute values are available through the OgnTutorialSimpleDataDatabase class. The generated interface creates access methods for every attribute, named for the attribute itself. Inputs will be returned as const references, outputs will be returned as non-const references.
### Attribute Data
Two types of attribute data are created, which help with ease of access and of use - the attribute name lookup information, and the attribute type definition.
Attribute data is accessed via a name-based lookup. This is not particularly efficient, so to facilitate this process the attribute name is translated into a fast access token. In addition, the information about the attribute’s type and default value is constant for all nodes of the same type so that is stored as well, in static data.
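As a rough illustration of why this is fast, the core of such an interning scheme can be sketched in standalone C++. The `TokenTable` class and its method names below are invented for illustration only, not part of the OmniGraph API:

```cpp
#include <cassert>
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical interning table: each distinct string maps to a stable integer token.
class TokenTable
{
public:
    using Token = size_t;

    // Return the existing token for a string, or mint a new one on first sight.
    Token stringToToken(const std::string& s)
    {
        auto inserted = m_ids.emplace(s, m_strings.size());
        if (inserted.second)
            m_strings.push_back(s);  // first time we see this string: remember it
        return inserted.first->second;
    }

    // Reverse lookup: the token is an index into the stored strings.
    const std::string& tokenToString(Token t) const { return m_strings[t]; }

private:
    std::unordered_map<std::string, Token> m_ids;
    std::vector<std::string> m_strings;
};
```

Because each distinct string is stored exactly once, repeated lookups return the same integer ID, so every later comparison is a single integer compare rather than a character-by-character string compare.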
Normally you would use an `auto` declaration for attribute types. Sometimes you want to pass around attribute data so it is helpful to have access to the attribute’s data type. In the generated code a `using namespace` is set up to provide a very simple syntax for accessing the attribute’s metadata from within the node:
```cpp
std::cout << "Attribute name is " << inputs::a_bool.m_name << std::endl;
std::cout << "Attribute type is " << inputs::a_bool.m_dataType << std::endl;
extern "C" void processAttribute(inputs::a_bool_t& value);
// Equivalent to extern "C" void processAttribute(bool& value);
```
### Attribute Data Access
The attributes are automatically namespaced with `inputs` and `outputs`. In the USD file the attribute names will appear as `inputs:XXX` or `outputs:XXX`. In the C++ interface the colon is illegal so a contained struct is used to make use of the period equivalent, as `inputs.XXX` or `outputs.XXX`.
The minimum information provided by these wrapper classes is a reference to the underlying data, accessed by `operator()`. For this class, these are the types it provides:
| Database Function | Returned Type |
|-------------------|---------------|
| inputs.a_bool() | const bool& |
| inputs.a_half() | const pxr::GfHalf& |
| inputs.a_int() | const int& |
| inputs.a_int64() | const int64_t& |
| inputs.a_float() | const float& |
| inputs.a_double() | const double& |
| inputs.a_path() | const std::string& |
| inputs.a_string() | const std::string& |
| inputs.a_token() | const NameToken& |
| outputs.a_bool() | bool& |
| outputs.a_half() | pxr::GfHalf& |
| outputs.a_int() | int& |
| outputs.a_int64() | int64_t& |
| outputs.a_float() | float& |
| outputs.a_double() | double& |
| outputs.a_string() | std::string& |
| outputs.a_token() | NameToken& |
The data returned are all references to the real data in Fabric, our managed memory store, pointing at the correct location at evaluation time.
Note how input attributes return `const` data while output attributes do not. This reinforces the restriction that input data should never be written to, as it would cause graph synchronization problems.
The type `pxr::GfHalf` is an implementation of a 16-bit floating point value, though any other may also be used with a runtime cast of the value. `omni::graph::core::NameToken` is a simple token through which a unique string can be looked up at runtime.
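The const-input/mutable-output split can be mimicked in a small standalone sketch. All names here are invented; the real generated database wires its references into Fabric rather than into plain members:

```cpp
#include <cassert>

// Minimal sketch of the accessor pattern: inputs hand out const references,
// outputs hand out mutable references into shared backing storage.
struct MiniDatabase
{
    bool inputBool = true;   // backing store; in OmniGraph this lives in Fabric
    bool outputBool = false;

    struct Inputs
    {
        MiniDatabase* m_db;
        const bool& a_bool() const { return m_db->inputBool; }  // read-only view
    };
    struct Outputs
    {
        MiniDatabase* m_db;
        bool& a_bool() { return m_db->outputBool; }  // writable reference
    };

    Inputs inputs{ this };
    Outputs outputs{ this };
};

// A compute in the style of the tutorial node: the output is the negated input.
bool compute(MiniDatabase& db)
{
    db.outputs.a_bool() = !db.inputs.a_bool();
    // db.inputs.a_bool() = false;  // would not compile: the input reference is const
    return true;
}
```

The commented-out line shows the point of the design: writes to inputs are rejected at compile time, which is what prevents the graph synchronization problems described above.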
## Helpers
A few helpers are provided in the database class definition to help make coding with it more natural.
### initializeType
Function signature:
```
static void initializeType(const NodeTypeObj& nodeTypeObj)
```
This is an implementation of the ABI function that is called once for each node type, initializing such things as its mandatory attributes and their default values.
### validate
Function signature:
```
bool validate()
```
If any of the mandatory attributes do not have values then the generated code will exit early with an error message and not actually call the node's compute method.
### token
Function signature:
```
NameToken token(const char* tokenName)
```
Provides a simple conversion from a string to the unique token representing that string, for fast comparison of strings and for use with attributes whose data type is `token`.
### Compute Status Logging
Two helper functions are provided in the database class to give more information when the compute method of a node has failed. Both take printf-like variable sets of parameters.
```
void logError(Args...)
```
is used when the compute has run into some inconsistent or unexpected data, such as two input arrays that are supposed to have the same size but do not, like the normals and vertexes on a mesh.
```
void logWarning(Args...)
```
can be used when the compute has hit an unusual case but can still provide a consistent output for it, for example the deformation of an empty mesh would result in an empty mesh and a warning since that is not a typical use for the node.
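A standalone sketch of this printf-style logging pattern follows, assuming nothing about the real implementation beyond its variadic signature; the `Logger` class and its message format are invented:

```cpp
#include <cstdarg>
#include <cstdio>
#include <string>

// Sketch of printf-style logging helpers in the spirit of db.logError/db.logWarning.
// The real database routes messages into OmniGraph's error reporting instead.
class Logger
{
public:
    void logError(const char* fmt, ...)
    {
        va_list args;
        va_start(args, fmt);
        format("Error", fmt, args);
        va_end(args);
    }

    void logWarning(const char* fmt, ...)
    {
        va_list args;
        va_start(args, fmt);
        format("Warning", fmt, args);
        va_end(args);
    }

    const std::string& lastMessage() const { return m_last; }

private:
    // Expand the printf-style format into a severity-prefixed message.
    void format(const char* severity, const char* fmt, va_list args)
    {
        char buffer[512];
        vsnprintf(buffer, sizeof(buffer), fmt, args);
        m_last = std::string(severity) + ": " + buffer;
    }

    std::string m_last;
};
```

The variadic forwarding through `va_list` is what lets a single formatting routine back both severities, which is presumably why the real helpers share a printf-like signature.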
### typedefs
Although not part of the database class per se, a typedef alias is created for every attribute so that you can use its type directly without knowing the detailed type; a midway point between exact types and `auto`. The main use for such types might be passing attribute data between functions.
Here are the corresponding typedef names for each of the attributes:
| Typedef Alias | Actual Type |
|---------------|-------------|
| inputs::a_bool_t | const bool& |
| inputs::a_half_t | const pxr::GfHalf& |
| inputs::a_int_t | const int& |
| inputs::a_int64_t | const int64_t& |
| inputs::a_float_t | const float& |
| inputs::a_double_t | const double& |
| inputs::a_token_t | const NameToken& |
| outputs::a_bool_t | bool& |
| outputs::a_half_t | pxr::GfHalf& |
| outputs::a_int_t | int& |
| outputs::a_int64_t | int64_t& |
| outputs::a_float_t | float& |
| outputs::a_double_t | double& |
| outputs::a_token_t | NameToken& |
Notice the similarity between this table and the one above. The typedef name is formed by adding the extension `_t` to the attribute accessor name, similar to C++ standard type naming conventions. The typedef should always correspond to the return value of the attribute’s `operator()`.
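A standalone sketch of how such aliases get used when passing attribute data between functions; the namespaces and the alias definitions here are hand-written stand-ins for what the generated code would provide:

```cpp
#include <cassert>

// Stand-ins for the generated per-attribute aliases: the namespace name matches the
// attribute container, and the _t alias names the exact accessor return type.
namespace inputs
{
using a_int_t = const int&;  // what inputs.a_int() returns
}
namespace outputs
{
using a_int_t = int&;  // what outputs.a_int() returns
}

// A helper that operates on attribute data without spelling out the concrete types,
// yet without falling back to a template or auto.
void increment(inputs::a_int_t in, outputs::a_int_t out)
{
    out = in + 1;  // writes through the mutable output reference
}
```

Because the alias tracks whatever `operator()` returns, a helper written this way keeps compiling even if the attribute's representation changes in a later code-generator version.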
### Direct ABI Access
All of the generated database classes provide access to the underlying `INodeType` ABI for those rare situations where you want to access the ABI directly. There are two methods provided, which correspond to the objects passed in to the ABI compute method.
- Context function signature `const GraphContextObj& abi_context() const` for accessing the underlying OmniGraph evaluation context and its interface.
- Node function signature `const NodeObj& abi_node() const` for accessing the underlying OmniGraph node object and its interface.
In addition, the attribute ABI objects are extracted into a shared structure so that they can be accessed in a manner similar to the attribute data. For example `db.attributes.inputs.a_bool()` returns the `AttributeObj` that refers to the input attribute named `a_bool`. It can be used to directly call ABI functions when required, though again it should be emphasized that this will be a rare occurrence - all of the common operations can be performed more easily using the database interfaces.
### Node Computation Tests
The “tests” section of the .ogn file contains a list of tests that take on two general forms. The first consists of a description and attribute values, both inputs and outputs, that will be used for the test, while the second contains the name of an external test scene to use along with various paths to nodes within the scene that are coupled with output attribute values that need to be checked.
The test runs by either setting all of the named input attributes to their values or loading the specified test scene, running the compute, and then comparing the resulting output attribute values against those specified by the test.
For example, to test the computation of the boolean attribute (whose output is the negation of the input) one could either specify two test values to use, or point to a test scene containing the node (along with any other necessary machinery that sets up the test) plus the expected output value.
The “description” field is optional, though highly recommended to aid in debugging which tests are failing. Any unspecified inputs take their default value, and any unspecified outputs do not get checked after the compute.
Note that the `file` attribute can be specified either as an absolute path or as a path relative to the `.ogn` file. For example, a test scene given as the bare filename "FileWithPredefinedTestSetup.usda" should be located next to the `.ogn` file, i.e. both reside in the same directory.
Various abbreviated syntaxes exist for writing out test objects, most of which revolve around in-lining node attribute names with their attribute namespaces (e.g., "inputs" and "outputs") and node paths, thus reducing the number of objects required to define a test.
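The set-inputs/compute/check-outputs cycle described above can be sketched as a standalone harness. The types below are invented for illustration; the real test generator emits Python tests that run against the OmniGraph runtime:

```cpp
#include <cassert>
#include <map>
#include <string>

// Illustrative value set: attribute name -> value, restricted to bool for brevity.
using Values = std::map<std::string, bool>;

// A stand-in node: negates "a_bool", like the tutorial node does.
Values computeNode(const Values& inputs)
{
    Values outputs;
    outputs["a_bool"] = !inputs.at("a_bool");
    return outputs;
}

// Run one test: unspecified inputs take their defaults, only listed outputs are checked.
bool runTest(const Values& testInputs, const Values& expectedOutputs, const Values& defaults)
{
    Values inputs = defaults;           // unspecified inputs use their default values
    for (const auto& kv : testInputs)   // specified inputs override the defaults
        inputs[kv.first] = kv.second;

    Values outputs = computeNode(inputs);
    for (const auto& kv : expectedOutputs)  // outputs not listed are ignored
        if (outputs.at(kv.first) != kv.second)
            return false;
    return true;
}
```

The second test case in the usage below exercises the default-value rule: with no inputs listed, the default `true` flows through the compute and the checked output is `false`.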
# Tutorial 20 - Tokens
Tokens are a method of providing fast access to strings that have fixed contents. All strings with the same contents can be translated into the same shared token. Token comparison is as fast as integer comparisons, rather than the more expensive string comparisons you would need for a general string.
One example of where they are useful is in having a fixed set of allowable values for an input string. For example you might choose a color channel by selecting from the names “red”, “green”, and “blue”, or you might know that a mesh bundle’s contents always use the attribute names “normals”, “points”, and “faces”.
Tokens can be accessed through the database methods `tokenToString()` and `stringToToken()`. Using the `tokens` keyword in a .ogn file merely provides a shortcut to always having certain tokens available. In the color case then if you have a token input containing the color your comparison code changes from this:
```c++
const auto& colorToken = db.inputs.colorToken();
if (colorToken == db.stringToToken("red"))
{
// do red stuff
}
else if (colorToken == db.stringToToken("green"))
{
// do green stuff
}
else if (colorToken == db.stringToToken("blue"))
{
// do blue stuff
}
```
to this, which has much faster comparison times:
```c++
const auto& colorToken = db.inputs.colorToken();
if (colorToken == db.tokens.red)
{
// do red stuff
}
else if (colorToken == db.tokens.green)
{
// do green stuff
}
else if (colorToken == db.tokens.blue)
{
// do blue stuff
}
```
In Python there isn’t a first-class object that is a token but the same token access is provided for consistency:
```python
color_token = db.inputs.colorToken
if color_token == db.tokens.red:
# do red stuff
elif color_token == db.tokens.green:
# do green stuff
elif color_token == db.tokens.blue:
# do blue stuff
```
## OgnTutorialTokens.ogn
The `ogn` file shows the implementation of a node named "omni.graph.tutorials.Tokens", which contains some hardcoded tokens to use in the compute method.
```json
{
"Tokens": {
"version": 1,
"categories": "tutorials",
"scheduling": ["threadsafe"],
"description": [
"This is a tutorial node. It exercises the feature of providing hardcoded token values",
"in the database after a node type has been initialized. It sets output booleans to the",
"truth value of whether corresponding inputs appear in the hardcoded token list."
],
"uiName": "Tutorial Node: Tokens",
"inputs": {
"valuesToCheck": {
"type": "token[]",
"description": "Array of tokens that are to be checked"
}
},
"outputs": {
"isColor": {
"type": "bool[]",
"description": "True values if the corresponding input value appears in the token list"
}
},
"$comment": [
"The tokens can be a list or a dictionary. If a list then the token string is also the name of the",
"variable in the database through which they can be accessed. If a dictionary then the key is the",
"name of the access variable and the value is the actual token string. Use a list if your token values",
"are all legal variable names in your node's implementation language (C++ or Python)."
],
"tokens": ["red", "green", "blue"],
"tests": [
{
"inputs:valuesToCheck": ["red", "Red", "magenta", "green", "cyan", "blue", "yellow"],
"outputs:isColor": [true, false, false, true, false, true, false]
}
]
}
}
```
## OgnTutorialTokens.cpp
The `cpp` file contains the implementation of the compute method. It illustrates how to access the hardcoded tokens to avoid writing the boilerplate code yourself.
```c++
// Copyright (c) 2021-2024, NVIDIA CORPORATION. All rights reserved.
//
// NVIDIA CORPORATION and its licensors retain all intellectual property
// and proprietary rights in and to this software, related documentation
// and any modifications thereto. Any use, reproduction, disclosure or
// distribution of this software and related documentation without an express
// license agreement from NVIDIA CORPORATION is strictly prohibited.
//
#include <OgnTutorialTokensDatabase.h>
class OgnTutorialTokens
{
public:
static bool compute(OgnTutorialTokensDatabase& db)
{
const auto& valuesToCheck = db.inputs.valuesToCheck();
auto& isColor = db.outputs.isColor();
size_t len = valuesToCheck.size();
isColor.resize(len);
if (len == 0)
{
return true;
}
// Walk the list of inputs, setting the corresponding output to true if and only if the input is in
// the list of allowable tokens.
for (size_t index = 0; index < len; index++)
{
const auto& inputValue = valuesToCheck[index];
// When the database is available you can use it to access the token values directly. When it is not
// you can access them statically (e.g. OgnTutorialTokensDatabase.token.red)
if ((inputValue == db.tokens.red) || (inputValue == db.tokens.green) || (inputValue == db.tokens.blue))
{
isColor[index] = true;
}
else
{
isColor[index] = false;
}
}
return true;
}
};
REGISTER_OGN_NODE()
```

## OgnTutorialTokensPy.py
The `py` file contains the implementation of the compute method in Python. The .ogn file is the same as the above, except for the addition of the implementation language key `"language": "python"`. The compute follows the same algorithm as the `cpp` equivalent.
```python
"""
Implementation of a node handling hardcoded tokens. Tokens are a fixed set of strings, usually used for things
like keywords and enum names. In C++ tokens are more efficient than strings for lookup as they are represented
as a single long integer. The Python access methods are set up the same way, though at present there is no
differentiation between strings and tokens in Python code.
"""
class OgnTutorialTokensPy:
"""Exercise access to hardcoded tokens"""
@staticmethod
def compute(db) -> bool:
"""
Run through a list of input tokens and set booleans in a corresponding output array indicating if the
token appears in the list of hardcoded color names.
"""
values_to_check = db.inputs.valuesToCheck
# When assigning the entire array the size does not have to be set in advance
db.outputs.isColor = [value in [db.tokens.red, db.tokens.green, db.tokens.blue] for value in values_to_check]
return True
```
# Tutorial 21 - Adding Bundled Attributes
Sometimes instead of simply copying data from an input or input bundle into an output bundle you might want to construct a bundle from some other criteria. For example a bundle construction node could take in an array of names and attribute types and output a bundle consisting of those attributes with some default values.
The bundle accessor provides a simple method that can accomplish this task. Adding a new attribute is as simple as providing those two values to the bundle for every attribute you wish to add.
There is also a complementary function to remove named bundle attributes.
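To make the add/remove mechanics concrete before diving into the real interfaces, here is a standalone toy model of a bundle as a runtime-composed attribute collection. The `ToyBundle` class is invented for illustration; the real bundle interface also owns the attribute data and full type information, not just names:

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

// Toy model of a bundle: a mapping from attribute name to its type name,
// assembled and modified at runtime rather than fixed by the node definition.
class ToyBundle
{
public:
    void clear() { m_attributes.clear(); }

    // Adding an attribute only needs its name and a type description.
    void addAttribute(const std::string& name, const std::string& typeName)
    {
        m_attributes[name] = typeName;
    }

    // The complementary removal, by name; removing a missing name is a no-op.
    void removeAttribute(const std::string& name) { m_attributes.erase(name); }

    bool hasAttribute(const std::string& name) const { return m_attributes.count(name) != 0; }

    size_t size() const { return m_attributes.size(); }

private:
    std::unordered_map<std::string, std::string> m_attributes;
};
```

This mirrors the construction-node idea above: given parallel arrays of names and type descriptions, a loop of `addAttribute` calls assembles the bundle, and a second list drives `removeAttribute`.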
## OgnTutorialBundleAddAttributes.ogn
The `ogn` file shows the implementation of a node named "omni.graph.tutorials.BundleAddAttributes", whose inputs describe the attributes to be added to or removed from its output bundle.
```json
{
    "BundleAddAttributes": {
        "description": [
            "This is a tutorial node. It exercises functionality for adding and removing attributes on",
            "output bundles."
        ],
        "version": 1,
        "categories": "tutorials",
        "uiName": "Tutorial Node: Bundle Add Attributes",
        "inputs": {
            "typesToAdd": {
                "type": "token[]",
                "description": [
                    "List of type descriptions to add to the bundle. The strings in this list correspond to the",
                    "strings that represent the attribute types in the .ogn file (e.g. float[3][], colord[3], bool)"
                ],
                "uiName": "Attribute Types To Add"
            },
            "addedAttributeNames": {
                "type": "token[]",
                "description": [
                    "Names for the attribute types to be added. The size of this array must match the size",
                    "of the 'typesToAdd' array to be legal."
                ]
            },
            "removedAttributeNames": {
                "type": "token[]",
                "description": "Names for the attributes to be removed. Non-existent attributes will be ignored."
            },
            "useBatchedAPI": {
                "type": "bool",
                "description": "Controls whether or not to use batched APIs for adding/removing attributes"
            }
        },
        "outputs": {
            "bundle": {
                "type": "bundle",
                "description": [
                    "This is the bundle with all attributes added by compute."
                ],
                "uiName": "Constructed Bundle"
            }
        }
    }
}
```
## OgnTutorialBundleAddAttributes.cpp
The `cpp` file contains the implementation of the compute method. It accesses the attribute descriptions on the inputs and creates a bundle with attributes matching those descriptions as its output.
```cpp
// Copyright (c) 2021-2024, NVIDIA CORPORATION. All rights reserved.
//
// NVIDIA CORPORATION and its licensors retain all intellectual property
// and proprietary rights in and to this software, related documentation
// and any modifications thereto. Any use, reproduction, disclosure or
// distribution of this software and related documentation without an express
// license agreement from NVIDIA CORPORATION is strictly prohibited.
//
#include <omni/graph/core/IAttributeType.h>

#include <OgnTutorialBundleAddAttributesDatabase.h>

#include <vector>

using omni::graph::core::Type;

class OgnTutorialBundleAddAttributes
{
public:
    static bool compute(OgnTutorialBundleAddAttributesDatabase& db)
    {
        const auto& attributeTypeNames = db.inputs.typesToAdd();
        const auto& attributeNames = db.inputs.addedAttributeNames();
        const auto& useBatchedAPI = db.inputs.useBatchedAPI();
        auto& outputBundle = db.outputs.bundle();

        // The usual error checking. Being diligent about checking the data ensures you will have an easier time
        // debugging the graph if anything goes wrong.
        if (attributeTypeNames.size() != attributeNames.size())
        {
            db.logWarning("Number of attribute types (%zu) does not match number of attribute names (%zu)",
                          attributeTypeNames.size(), attributeNames.size());
            return false;
        }

        // Make sure the bundle is starting from empty
        outputBundle.clear();

        if (useBatchedAPI)
        {
            // unfortunately we need to build a vector of the types
            auto typeNameIt = std::begin(attributeTypeNames);
            std::vector<Type> types;
            types.reserve(attributeTypeNames.size());
            for (; typeNameIt != std::end(attributeTypeNames); ++typeNameIt)
            {
                auto typeName = *typeNameIt;
                types.emplace_back(db.typeFromName(typeName));
            }
            outputBundle.addAttributes(attributeTypeNames.size(), attributeNames.data(), types.data());

            // Remove attributes from the bundle that were already added. This is a somewhat contrived operation
            // that allows testing of both adding and removal within a simple environment.
            if (db.inputs.removedAttributeNames().size())
            {
                outputBundle.removeAttributes(
                    db.inputs.removedAttributeNames().size(), db.inputs.removedAttributeNames().data());
            }
        }
        else
        {
            // Since the two arrays are the same size a dual loop can be used to walk them in pairs
            auto typeNameIt = std::begin(attributeTypeNames);
            auto attributeNameIt = std::begin(attributeNames);
            for (; typeNameIt != std::end(attributeTypeNames) && attributeNameIt != std::end(attributeNames);
                 ++typeNameIt, ++attributeNameIt)
            {
                auto typeName = *typeNameIt;
                auto attributeName = *attributeNameIt;
                Type newType = db.typeFromName(typeName);
                // Ignore the output since for this example there will not be any values set on the new attribute
                (void)outputBundle.addAttribute(attributeName, newType);
            }

            // Remove attributes from the bundle that were already added. This is a somewhat contrived operation
            // that allows testing of both adding and removal within a simple environment.
            for (const auto& toRemove : db.inputs.removedAttributeNames())
            {
                outputBundle.removeAttribute(toRemove);
            }
        }
        return true;
    }
};

REGISTER_OGN_NODE()
```
## OgnTutorialBundleAddAttributesPy.py
The `py` file contains the same algorithm as the C++ node, with only the implementation language being different.
```python
"""
Implementation of the Python node adding attributes with a given description to an output bundle.
"""
import omni.graph.core as og
class OgnTutorialBundleAddAttributesPy:
"""Exercise the bundled data types through a Python OmniGraph node"""
@staticmethod
def compute(db) -> bool:
"""Implements the same algorithm as the C++ node OgnTutorialBundleAddAttributes.cpp using the Python
bindings to the bundle method"""
# Start with an empty output bundle.
output_bundle = db.outputs.bundle
output_bundle.clear()
if db.inputs.useBatchedAPI:
attr_types = [og.AttributeType.type_from_ogn_type_name(type_name) for type_name in db.inputs.typesToAdd]
output_bundle.add_attributes(attr_types, db.inputs.addedAttributeNames)
output_bundle.remove_attributes(db.inputs.removedAttributeNames)
else:
for attribute_type_name, attribute_name in zip(db.inputs.typesToAdd, db.inputs.addedAttributeNames):
attribute_type = og.AttributeType.type_from_ogn_type_name(attribute_type_name)
output_bundle.insert((attribute_type, attribute_name))
# Remove attributes from the bundle that were already added. This is a somewhat contrived operation that
# allows testing of both adding and removal within a simple environment.
for attribute_name in db.inputs.removedAttributeNames:
output_bundle.remove(attribute_name)
return True
``` | 8,182 |
tutorial22.md | # Tutorial 22 - Bundles On The GPU
Bundles are not exactly data themselves; they are a representation of a collection of attributes whose composition is determined at runtime. As such, they always live on the CPU. However, the attributes they encapsulate have the same flexibility as other attributes to live on the CPU, on the GPU, or to have their location decided at runtime.
For that reason it's convenient to use the same "cpu", "cuda", and "any" memory types for the bundle attributes, with a slightly different interpretation.
- **cpu**: all attributes in the bundle will be on the CPU
- **cuda**: all attributes in the bundle will be on the GPU
- **any**: either some attributes in the bundle are on the CPU and some are on the GPU, or that decision will be made at runtime
For example, if you had a bundle of attributes consisting of a large array of points and a boolean that controls the type of operation to perform on them, it makes sense to leave the boolean on the CPU and move the points to the GPU for more efficient processing.
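Concretely, the per-point dot product that this tutorial's node computes over the bundled `points` attributes can be sketched with plain NumPy arrays standing in for the bundle data (the values here are illustrative, not part of the node; this is the same `einsum` the Python node later in this tutorial uses):

```python
import numpy as np

# Two bundles each carrying a "points" attribute of type pointf[3][] (illustrative values)
cpu_points = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
gpu_points = np.array([[7.0, 8.0, 9.0], [10.0, 11.0, 12.0]])

# One dot product per point pair: multiply componentwise, then sum over the tuple axis
dot_products = np.einsum("ij,ij->i", cpu_points, gpu_points)
```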
## OgnTutorialCpuGpuBundles.ogn
The `ogn` file shows the implementation of a node named “omni.graph.tutorials.CpuGpuBundles” with an input bundle on the CPU, an input bundle on the GPU, and an output bundle whose memory location is decided at runtime by a boolean.
```json
{
"CpuGpuBundles": {
"version": 1,
"categories": "tutorials",
"description": [
"This is a tutorial node. It exercises functionality for accessing data in bundles that",
"are on the GPU as well as bundles whose CPU/GPU location is decided at runtime. The",
"compute looks for bundled attributes named 'points' and, if they are found, computes",
"their dot products. If the bundle on the output contains an integer array type named",
"'dotProducts' then the results are placed there, otherwise a new attribute of that name and",
"type is created on the output bundle to hold the results.",
"This node is identical to OgnTutorialCpuGpuBundlesPy.ogn, except it is implemented in C++."
],
"tags": ["tutorial", "bundle", "gpu"],
"tokens": ["points", "dotProducts"],
"uiName": "Tutorial Node: CPU/GPU Bundles",
"inputs": {
...
}
}
}
```
```json
{
"inputs": {
"cpuBundle": {
"type": "bundle",
"description": "Input bundle whose data always lives on the CPU",
"uiName": "CPU Input Bundle"
},
"gpuBundle": {
"type": "bundle",
"memoryType": "cuda",
"description": "Input bundle whose data always lives on the GPU",
"uiName": "GPU Input Bundle"
},
"gpu": {
"type": "bool",
"description": "If true then copy gpuBundle onto the output, otherwise copy cpuBundle",
"uiName": "Results To GPU"
}
},
"outputs": {
"cpuGpuBundle": {
"type": "bundle",
"memoryType": "any",
"description": [
"This is the bundle with the merged data. If the 'gpu' attribute is set to true then this",
"bundle's contents will be entirely on the GPU, otherwise they will be on the CPU."
],
"uiName": "Constructed Bundle"
}
}
}
```
# OgnTutorialCpuGpuBundles.cpp
The cpp file contains the implementation of the compute method. It creates a merged bundle in either the CPU or GPU based on the input boolean and runs an algorithm on the output location.
```c++
// Copyright (c) 2021-2024, NVIDIA CORPORATION. All rights reserved.
//
// NVIDIA CORPORATION and its licensors retain all intellectual property
// and proprietary rights in and to this software, related documentation
// and any modifications thereto. Any use, reproduction, disclosure or
// distribution of this software and related documentation without an express
// license agreement from NVIDIA CORPORATION is strictly prohibited.
//
#include <OgnTutorialCpuGpuBundlesDatabase.h>
extern "C" void cpuGpuDotProductCPU(float const (*p1)[3], float const (*p2)[3], float *, size_t);
extern "C" void cpuGpuDotProductGPU(float const (**p1)[3], float const (**p2)[3], float **, size_t);
namespace omni
{
namespace graph
{
namespace tutorials
{
}
}
}
```
```c++
class OgnTutorialCpuGpuBundles
{
public:
    static bool compute(OgnTutorialCpuGpuBundlesDatabase& db)
    {
        const auto& gpu = db.inputs.gpu();
        // Bundles are merely abstract representations of a collection of attributes so you do not have to do anything
        // different when they are marked for GPU, or ANY memory location.
        const auto& cpuBundle = db.inputs.cpuBundle();
        const auto& gpuBundle = db.inputs.gpuBundle();
        auto& outputBundle = db.outputs.cpuGpuBundle();

        // Assign the correct destination bundle to the output based on the gpu flag
        if (gpu)
        {
            outputBundle = gpuBundle;
        }
        else
        {
            outputBundle = cpuBundle;
        }

        // Get the attribute references. They're the same whether the bundles are on the CPU or GPU
        const auto pointsCpuAttribute = cpuBundle.attributeByName(db.tokens.points);
        const auto pointsGpuAttribute = gpuBundle.attributeByName(db.tokens.points);
        auto dotProductAttribute = outputBundle.attributeByName(db.tokens.dotProducts);
        if (!dotProductAttribute.isValid())
        {
            dotProductAttribute = outputBundle.addAttribute(db.tokens.dotProducts, Type(BaseDataType::eFloat, 1, 1));
        }

        // Find the bundle contents to be processed
        if (gpu)
        {
            const auto points1 = pointsCpuAttribute.getGpu<float[][3]>();
            const auto points2 = pointsGpuAttribute.get<float[][3]>();
            auto dotProducts = dotProductAttribute.getGpu<float[]>();
```
```cpp
if (!points1) {
db.logWarning("Skipping compute - No valid float[3][] attribute named '%s' on the CPU bundle",
db.tokenToString(db.tokens.points));
return false;
}
if (!points2) {
db.logWarning("Skipping compute - No valid float[3][] attribute named '%s' on the GPU bundle",
db.tokenToString(db.tokens.points));
return false;
}
if (points1.size() != points2.size()) {
db.logWarning("Skipping compute - Point arrays are different sizes (%zu and %zu)", points1.size(),
points2.size());
return false;
}
dotProducts.resize(points1.size());
if (!dotProducts) {
db.logWarning("Skipping compute - No valid float[] attribute named '%s' on the output bundle",
db.tokenToString(db.tokens.dotProducts));
return false;
}
    cpuGpuDotProductGPU(points1(), points2(), dotProducts(), points1.size());
}
else {
const auto points1 = pointsCpuAttribute.get<float[][3]>();
const auto points2 = pointsGpuAttribute.getCpu<float[][3]>();
auto dotProducts = dotProductAttribute.getCpu<float[]>();
if (!points1) {
db.logWarning("Skipping compute - No valid float[3][] attribute named '%s' on the CPU bundle",
db.tokenToString(db.tokens.points));
return false;
}
if (!points2) {
// ...
}
}
```
```cpp
{
db.logWarning("Skipping compute - No valid float[3][] attribute named '%s' on the GPU bundle",
db.tokenToString(db.tokens.points));
return false;
}
if (points1.size() != points2.size())
{
db.logWarning("Skipping compute - Point arrays are different sizes (%zu and %zu)", points1.size(),
points2.size());
return false;
}
dotProducts.resize(points1.size());
if (!dotProducts)
{
db.logWarning("Skipping compute - No valid dot product attribute on the output bundle");
return false;
}
cpuGpuDotProductCPU(points1->data(), points2->data(), dotProducts->data(), points1.size());
return true;
}
};
```
```python
"""Implementation of the Python node accessing attributes whose memory location is determined at runtime."""
import numpy as np
import omni.graph.core as og
# Types to check on bundled attributes
FLOAT_ARRAY_TYPE = og.Type(og.BaseDataType.FLOAT, array_depth=1)
FLOAT3_ARRAY_TYPE = og.Type(og.BaseDataType.FLOAT, tuple_count=3, array_depth=1, role=og.AttributeRole.POSITION)
class OgnTutorialCpuGpuBundlesPy:
"""Exercise bundle members through a Python OmniGraph node"""
@staticmethod
def compute(db) -> bool:
"""Implements the same algorithm as the C++ node OgnTutorialCpuGpuBundles.cpp.
It follows the same code pattern for easier comparison, though in practice you would probably code Python
nodes differently from C++ nodes to take advantage of the strengths of each language.
"""
if db.inputs.gpu:
# Invalid data yields no compute
if not db.inputs.gpuBundle.valid:
```
```python
return True
db.outputs.cpuGpuBundle = db.inputs.gpuBundle
else:
if not db.inputs.cpuBundle.valid:
return True
db.outputs.cpuGpuBundle = db.inputs.cpuBundle
# Find and verify the attributes containing the points
cpu_points = db.inputs.cpuBundle.attribute_by_name(db.tokens.points)
if cpu_points.type != FLOAT3_ARRAY_TYPE:
db.log_warning(
f"Skipping compute - No valid float[3][] attribute named '{db.tokens.points}' on the CPU bundle"
)
return False
gpu_points = db.inputs.gpuBundle.attribute_by_name(db.tokens.points)
if gpu_points.type != FLOAT3_ARRAY_TYPE:
db.log_warning(
f"Skipping compute - No valid float[3][] attribute named '{db.tokens.points}' on the GPU bundle"
)
return False
# If the attribute is not already on the output bundle then add it
dot_product = db.outputs.cpuGpuBundle.attribute_by_name(db.tokens.dotProducts)
if dot_product is None:
dot_product = db.outputs.cpuGpuBundle.insert((og.Type(og.BaseDataType.FLOAT, array_depth=1), "dotProducts"))
elif dot_product.type != FLOAT_ARRAY_TYPE:
# Python types do not use a cast to find out if they are the correct type so explicitly check it instead
db.log_warning(
f"Skipping compute - No valid float[] attribute named '{db.tokens.dotProducts}' on the output bundle"
)
return False
# Set the size to what is required for the dot product calculation
dot_product.size = cpu_points.size
# Use the correct data access based on whether the output is supposed to be on the GPU or not
if db.inputs.gpu:
# The second line is how the values would be extracted if Python supported GPU data extraction.
# When it does this tutorial will be updated
dot_product.cpu_value = np.einsum("ij,ij->i", cpu_points.value, gpu_points.cpu_value)
# dot_product.gpu_value = np.einsum("ij,ij->i", cpu_points.gpu_value, gpu_points.value)
else:
dot_product.cpu_value = np.einsum("ij,ij->i", cpu_points.value, gpu_points.cpu_value)
return True
``` | 11,029 |
tutorial23.md | # Tutorial 23 - Extended Attributes On The GPU
Extended attributes are no different from other types of attributes with respect to where their memory is located. The difference is that there is a slightly different API for accessing their data, as illustrated by these examples.
This node also illustrates the new concept of having a node create an ABI function override that handles the runtime type resolution of extended attribute types. In this case, when either of the two input attributes or the output attribute becomes resolved, the other two attributes are resolved to the same type, if possible.
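As a rough illustration of what coupled resolution means (a toy model, not the OmniGraph ABI — in the node the actual work is done by the `resolveCoupledAttributes` call shown later in this file):

```python
def resolve_coupled(types):
    """Toy model of coupled type resolution: once any member of the group has a
    resolved type, every unresolved member (None) adopts that same type.
    Returns None if members were already resolved to conflicting types."""
    resolved = {t for t in types if t is not None}
    if len(resolved) > 1:
        return None  # conflicting resolutions cannot be coupled
    if not resolved:
        return list(types)  # nothing resolved yet; leave everything open
    common = next(iter(resolved))
    return [common] * len(types)
```

For this node the coupled group is (`inputs:cpuData`, `inputs:gpuData`, `outputs:cpuGpuSum`), so connecting a `pointf[3][]` source to either input resolves all three.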
## OgnTutorialCpuGpuExtended.ogn
The `ogn` file shows the implementation of a node named “omni.graph.tutorials.CpuGpuExtended” with an input `'any'` attribute on the CPU, an input `'any'` attribute on the GPU, and an output whose memory location is decided at runtime by a boolean.
```json
{
"CpuGpuExtended": {
"version": 1,
"categories": "tutorials",
"description": [
"This is a tutorial node. It exercises functionality for accessing data in extended attributes that",
"are on the GPU as well as those whose CPU/GPU location is decided at runtime. The compute",
"adds the two inputs 'gpuData' and 'cpuData' together, placing the result in `cpuGpuSum`, whose",
"memory location is determined by the 'gpu' flag.",
"This node is identical to OgnTutorialCpuGpuExtendedPy.ogn, except it is implemented in C++."
],
"tags": ["tutorial", "extended", "gpu"],
"uiName": "Tutorial Node: CPU/GPU Extended Attributes",
"inputs": {
"cpuData": {
"type": "any",
"description": "Input attribute whose data always lives on the CPU",
"uiName": "CPU Input Attribute"
},
"gpuData": {
"type": "any",
"description": "Input attribute whose data always lives on the GPU",
"uiName": "GPU Input Attribute"
}
}
}
}
```
```json
        "gpuData": {
            "type": "any",
            "memoryType": "cuda",
            "description": "Input attribute whose data always lives on the GPU",
            "uiName": "GPU Input Attribute"
        },
        "gpu": {
            "type": "bool",
            "description": "If true then put the sum on the GPU, otherwise put it on the CPU",
            "uiName": "Results To GPU"
        }
    },
    "outputs": {
        "cpuGpuSum": {
            "type": "any",
            "memoryType": "any",
            "description": [
                "This is the attribute with the selected data. If the 'gpu' attribute is set to true then this",
                "attribute's contents will be entirely on the GPU, otherwise it will be on the CPU."
            ],
            "uiName": "Sum"
        }
    },
    "tests": [
        {
            "inputs:cpuData": {
                "type": "pointf[3][]",
                "value": [
                    [ 1.0, 2.0, 3.0 ],
                    [ 4.0, 5.0, 6.0 ]
                ]
            },
            "inputs:gpuData": {
                "type": "pointf[3][]",
                "value": [
                    [ 7.0, 8.0, 9.0 ],
                    [ 10.0, 11.0, 12.0 ]
                ]
            },
            "inputs:gpu": false,
            "outputs:cpuGpuSum": {
                "type": "pointf[3][]",
                "value": [
                    [ 8.0, 10.0, 12.0 ],
                    [ 14.0, 16.0, 18.0 ]
                ]
            }
        },
```
```json
{
"inputs:cpuData": {
"type": "pointf[3][]",
"value": [
[4.0, 5.0, 6.0]
]
},
"inputs:gpuData": {
"type": "pointf[3][]",
"value": [
[7.0, 8.0, 9.0]
]
},
"inputs:gpu": true,
"outputs:cpuGpuSum": {
"type": "pointf[3][]",
"value": [
[11.0, 13.0, 15.0]
]
}
}
```
## OgnTutorialCpuGpuExtended.cpp
The cpp file contains the implementation of the compute method. It sums two inputs on either the CPU or GPU based on the input boolean. For simplicity only the **float[3][]** attribute type is processed, with all others resulting in a compute failure.
```c++
// Copyright (c) 2021-2024, NVIDIA CORPORATION. All rights reserved.
//
// NVIDIA CORPORATION and its licensors retain all intellectual property
// and proprietary rights in and to this software, related documentation
// and any modifications thereto. Any use, reproduction, disclosure or
// distribution of this software and related documentation without an express
// license agreement from NVIDIA CORPORATION is strictly prohibited.
//
#include <OgnTutorialCpuGpuExtendedDatabase.h>
extern "C" void cpuGpuSumCPU(float const (*p1)[3], float const (*p2)[3], float (*sums)[3], size_t);
extern "C" void cpuGpuSumGPU(float const (**p1)[3], float const (**p2)[3], float (**sums)[3]);
namespace omni
{
namespace graph
{
namespace tutorials
{
// Only the pointf[3][] type is accepted. Make a shortcut to the type information describing it for comparison
core::Type acceptedType(core::BaseDataType::eFloat, 3, 1, core::AttributeRole::ePosition);
// Template for reporting type inconsistencies. The data types are different for the attributes but the checks are
// the same so this avoids duplication
template <typename P1, typename P2, typename P3>
```
```cpp
bool verifyDataTypes(OgnTutorialCpuGpuExtendedDatabase& db, const P1& points1, const P2& points2, P3& sums, const char* type)
{
if (!points1)
{
db.logWarning("Skipping compute - The %s attribute was not a valid pointf[3][]", type);
}
else if (!points2)
{
db.logWarning("Skipping compute - The %s attribute was not a valid pointf[3][]", type);
}
else if (!sums)
{
db.logWarning("Skipping compute - The %s output attribute was not a valid pointf[3][]", type);
}
else if (points1.size() != points2.size())
{
db.logWarning(
"Skipping compute - Point arrays are different sizes (%zu and %zu)", points1.size(), points2.size());
}
else
{
sums.resize(points1.size());
return true;
}
return false;
}
class OgnTutorialCpuGpuExtended
{
public:
static bool compute(OgnTutorialCpuGpuExtendedDatabase& db)
{
if (!db.sharedState<OgnTutorialCpuGpuExtended>().m_allAttributesResolved)
{
db.logWarning("All types are not yet resolved. Cannot run the compute.");
return false;
}
const auto& gpu = db.inputs.gpu();
const auto cpuData = db.inputs.cpuData();
const auto gpuData = db.inputs.gpuData();
```
```cpp
auto cpuGpuSum = db.outputs.cpuGpuSum();
if ((cpuData.type() != acceptedType) || (gpuData.type() != acceptedType) || (cpuGpuSum.type() != acceptedType))
{
db.logWarning("Skipping compute - All of the attributes do not have the accepted resolved type pointf[3][]");
return false;
}
if (gpu)
{
// Computation on the GPU has been requested so get the GPU versions of the attribute data
const auto points1 = cpuData.getGpu<float[][3]>();
const auto points2 = gpuData.get<float[][3]>();
auto sums = cpuGpuSum.getGpu<float[][3]>();
if (!verifyDataTypes(db, points1, points2, sums, "GPU"))
{
return false;
}
cpuGpuSumGPU(points1(), points2(), sums());
}
else
{
// Computation on the CPU has been requested so get the CPU versions of the attribute data
const auto points1 = cpuData.get<float[][3]>();
const auto points2 = gpuData.getCpu<float[][3]>();
auto sums = cpuGpuSum.getCpu<float[][3]>();
if (!verifyDataTypes(db, points1, points2, sums, "CPU"))
{
return false;
}
cpuGpuSumCPU(points1->data(), points2->data(), sums->data(), points1.size());
}
return true;
```
```cpp
    static void onConnectionTypeResolve(const NodeObj& nodeObj)
    {
        // If any one type is resolved the others should resolve to the same type. Calling this helper function
        // makes that happen automatically. If it returns false then the resolution failed for some reason. The
        // node's user data, which is just a copy of this class, is used to keep track of the resolution state so
        // that the compute method can quickly exit when the types are not resolved.
        AttributeObj attributes[3]{ nodeObj.iNode->getAttributeByToken(nodeObj, inputs::cpuData.token()),
                                    nodeObj.iNode->getAttributeByToken(nodeObj, inputs::gpuData.token()),
                                    nodeObj.iNode->getAttributeByToken(nodeObj, outputs::cpuGpuSum.token()) };
        auto& state = OgnTutorialCpuGpuExtendedDatabase::sSharedState<OgnTutorialCpuGpuExtended>(nodeObj);
        state.m_allAttributesResolved = nodeObj.iNode->resolveCoupledAttributes(nodeObj, attributes, 3);
    }
};

REGISTER_OGN_NODE()

} // namespace tutorials
} // namespace graph
} // namespace omni
```
## OgnTutorialCpuGpuExtendedPy.py
The `py` file contains the same algorithm as the C++ node, with only the implementation language being different.
```python
"""
Implementation of the Python node accessing extended attributes whose memory location is determined at runtime.
"""
import omni.graph.core as og
# Only one type of data is handled by the compute - pointf[3][]
POINT_ARRAY_TYPE = og.Type(og.BaseDataType.FLOAT, tuple_count=3, array_depth=1, role=og.AttributeRole.POSITION)
class OgnTutorialCpuGpuExtendedPy:
"""Exercise GPU access for extended attributes through a Python OmniGraph node"""
@staticmethod
def compute(db) -> bool:
"""Implements the same algorithm as the C++ node OgnTutorialCpuGpuExtended.cpp.
It follows the same code pattern for easier comparison, though in practice you would probably code Python
nodes differently from C++ nodes to take advantage of the strengths of each language.
"""
# Find and verify the attributes containing the points
if db.attributes.inputs.cpuData.get_resolved_type() != POINT_ARRAY_TYPE:
db.log_warning("Skipping compute - CPU attribute type did not resolve to pointf[3][]")
return False
if db.attributes.inputs.gpuData.get_resolved_type() != POINT_ARRAY_TYPE:
db.log_warning("Skipping compute - GPU attribute type did not resolve to pointf[3][]")
            return False
def on_connection_type_resolve(node: og.Node) -> None:
"""Whenever any of the inputs or the output get a resolved type the others should get the same resolution"""
attribs = [
node.get_attribute("inputs:cpuData"),
node.get_attribute("inputs:gpuData"),
node.get_attribute("outputs:cpuGpuSum"),
]
og.resolve_fully_coupled(attribs)
```
tutorial24.md | # Tutorial 24 - Overridden Types
By default the code generator will provide POD types for simple data and USD types for tuple data (e.g. `float` and `pxr::GfVec3f`). Sometimes you may have your own favorite math library and want to use its data types directly rather than constantly using a `reinterpret_cast` on the attribute values. To facilitate this, JSON data containing type overrides for one or more of the attribute types may be provided, so that the generated code uses those types directly.
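The reason such overrides (and the `reinterpret_cast` alternative) are safe is layout compatibility: `carb::Float3` and `pxr::GfVec3f` are both just three contiguous 32-bit floats. A sketch of that idea using Python's `ctypes` — the `CarbFloat3` struct here is an illustrative stand-in, not the real Carbonite type:

```python
import ctypes

class CarbFloat3(ctypes.Structure):
    """Stand-in for carb::Float3: three contiguous 32-bit floats."""
    _fields_ = [("x", ctypes.c_float), ("y", ctypes.c_float), ("z", ctypes.c_float)]

# Raw float[3] storage, as the attribute data would sit in memory
raw = (ctypes.c_float * 3)(1.0, 2.0, 3.0)

# Viewing the same 12 bytes through the struct type changes the API, not the data -
# which is what swapping the generated accessor type accomplishes in the C++ code.
viewed = CarbFloat3.from_buffer(raw)
```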
## OgnTutorialOverrideType.ogn
The `ogn` file shows the implementation of a node named “omni.graph.tutorials.OverrideType”, which has one input and one output attribute that use an overridden type for `float[3]`.
```json
{
"OverrideType": {
"version": 1,
"categories": "tutorials",
"scheduling": ["threadsafe"],
"description": [
"This is a tutorial node. It has an input and output of type float[3], an input and output of type",
"double[3], and a type override specification that lets the node use Carbonite types for the generated",
"data on the float[3] attributes only. Ordinarily all of the types would be defined in a separate",
"configuration file so that it can be shared for a project. In that case the type definition build flag",
"would also be used so that this information does not have to be embedded in every .ogn file in",
"the project. It is placed directly in the file here solely for instructional purposes.",
" The compute is just a rotation of components from x->y, y->z, and z->x, for each input type."
],
"uiName": "Tutorial Node: Overriding C++ Data Types",
"inputs": {
"typedData": {
"type": "float[3]",
"uiName": "Input value with a modified float type",
"description": "The value to rotate"
}
}
}
}
```
```json
{
"inputs": {
"typedData": {
"type": "double[3]",
"uiName": "Input value with a standard double type",
"description": "The value to rotate"
}
},
"outputs": {
"typedData": {
"type": "float[3]",
"uiName": "Output value with a modified float type",
"description": "The rotated version of inputs::typedData"
},
"data": {
"type": "double[3]",
"uiName": "Output value with a standard double type",
"description": "The rotated version of inputs::data"
}
},
"$typeDefinitionDescription": "This redefines the generated output type for the float[3] type only.",
"typeDefinitions": {
"c++": {
"float[3]": ["carb::Float3", ["carb/Types.h"]]
}
},
"tests": [
{
"inputs:data": [1.0, 2.0, 3.0],
"outputs:data": [2.0, 3.0, 1.0]
}
]
}
```
# OgnTutorialOverrideType.cpp
The cpp file contains the implementation of the compute method. The default type implementation would have a return type of `pxr::GfVec3f` but this one uses the override type of `carb::Float3`.
```cpp
// Copyright (c) 2021-2024, NVIDIA CORPORATION. All rights reserved.
//
// NVIDIA CORPORATION and its licensors retain all intellectual property
// and proprietary rights in and to this software, related documentation
// and any modifications thereto. Any use, reproduction, disclosure or
// distribution of this software and related documentation without an express
// license agreement from NVIDIA CORPORATION is strictly prohibited.
//
#include <OgnTutorialOverrideTypeDatabase.h>
// The include files required by the type definition override is in the database definition so it does not have to
// be directly included here.
namespace omni
{
namespace graph
{
namespace core
{
namespace tutorial
{
class OgnTutorialOverrideType
{
public:
    static bool compute(OgnTutorialOverrideTypeDatabase& db)
    {
// Usually you would use an "auto" declaration. The type is explicit here to show how the override has
// changed the generated type.
const carb::Float3& inputTypedData = db.inputs.typedData();
carb::Float3& outputTypedData = db.outputs.typedData();
const pxr::GfVec3d& inputStandardData = db.inputs.data();
pxr::GfVec3d& outputStandardData = db.outputs.data();
// Rearrange the components as a simple way to verify that compute happened
outputTypedData.x = inputTypedData.y;
outputTypedData.y = inputTypedData.z;
outputTypedData.z = inputTypedData.x;
outputStandardData.Set(inputStandardData[1], inputStandardData[2], inputStandardData[0]);
return true;
}
};
// namespaces are closed after the registration macro, to ensure the correct class is registered
REGISTER_OGN_NODE()
} // namespace tutorial
} // namespace core
} // namespace graph
} // namespace omni
``` | 4,957 |
tutorial25.md | # Tutorial 25 - Dynamic Attributes
A dynamic attribute is like any other attribute on a node, except that it is added at runtime rather than being part of the .ogn specification. These are added through the ABI function `INode::createAttribute` and removed from the node through the ABI function `INode::removeAttribute`.
Once a dynamic attribute is added it can be accessed through the same ABI and script functions as regular attributes.
> **Warning**
> While the Python node database is able to handle the dynamic attributes through the same interface as regular attributes (e.g. `db.inputs.dynAttr`), the C++ node database is not yet similarly flexible and access to dynamic attribute values must be done directly through the ABI calls.
## OgnTutorialDynamicAttributes.ogn
The `ogn` file shows the implementation of a node named “omni.graph.tutorials.DynamicAttributes”, which has a simple float input and output.
```json
{
"DynamicAttributes": {
"version": 1,
"categories": "tutorials",
"scheduling": ["threadsafe"],
"description": [
"This is a C++ node that exercises the ability to add and remove database attribute",
"accessors for dynamic attributes. When the dynamic attribute is added the property will exist",
"and be able to get/set the attribute values. When it does not the property will not exist.",
"The dynamic attribute names are found in the tokens below. If neither exist then the input",
"value is copied to the output directly. If 'firstBit' exists then the 'firstBit'th bit of the input",
"is x-ored for the copy. If 'secondBit' exists then the 'secondBit'th bit of the input is x-ored",
"for the copy. (Recall bitwise match xor(0,0)=0, xor(0,1)=1, xor(1,0)=1, and xor(1,1)=0.)",
"For example, if 'firstBit' is present and set to 1 then the bitmask will be b0010, where bit 1 is set.",
"If the input is 7, or b0111, then the xor operation will flip bit 1, yielding b0101, or 5 as the result.",
"If on the next run 'secondBit' is also present and set to 2 then its bitmask will be b0100, where bit",
"2 is set. The input of 7 (b0111) flips bit 1 because firstBit=1 and flips bit 2 because",
"secondBit=2, yielding a final result of 1 (b0001)."
]
}
}
```
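The bit-flipping behavior described above can be sketched independently of the node (hypothetical helper, mirroring the description's worked example):

```python
def apply_bit_flips(value, first_bit=None, second_bit=None):
    """Flip the numbered bit of value for each dynamic attribute that is present,
    mirroring the node description: a bit number must be in [0, 31] to apply."""
    result = value
    for bit in (first_bit, second_bit):
        if bit is not None and 0 <= bit <= 31:
            result ^= 1 << bit  # xor flips exactly that bit
    return result
```

With the description's numbers: input 7 (b0111) with firstBit=1 gives 5 (b0101); adding secondBit=2 then gives 1 (b0001).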
```c++
const auto firstBitPtr = getDataR<uint32_t>(
db.abi_context(), firstBit.iAttribute->getConstAttributeDataHandle(firstBit, db.getInstanceIndex()));
if (firstBitPtr)
{
if (0 <= *firstBitPtr && *firstBitPtr <= 31)
{
rawOutput ^= 1 << *firstBitPtr;
}
else
{
db.logWarning("Could not xor bit %ud. Must be in [0, 31]", *firstBitPtr);
}
}
else
{
db.logError("Could not retrieve the data for firstBit");
}
if (iNode->getAttributeExists(db.abi_node(), db.tokenToString(db.tokens.secondBit)))
{
AttributeObj secondBit = iNode->getAttributeByToken(db.abi_node(), db.tokens.secondBit);
// secondBit will invert the bit with its number, if present
const auto secondBitPtr = getDataR<uint32_t>(
db.abi_context(), secondBit.iAttribute->getConstAttributeDataHandle(secondBit, db.getInstanceIndex()));
if (secondBitPtr)
{
if (0 <= *secondBitPtr && *secondBitPtr <= 31)
{
rawOutput ^= 1 << *secondBitPtr;
}
else
{
db.logWarning("Could not xor bit %ud. Must be in [0, 31]", *secondBitPtr);
}
}
else
{
db.logError("Could not retrieve the data for secondBit");
}
}
```
```cpp
{
if (iNode->getAttributeExists(db.abi_node(), db.tokenToString(db.tokens.invert)))
{
AttributeObj invert = iNode->getAttributeByToken(db.abi_node(), db.tokens.invert);
// invert will invert the bits, if the role is set and the attribute access is correct
const auto invertPtr = getDataR<double>(
db.abi_context(), invert.iAttribute->getConstAttributeDataHandle(invert, db.getInstanceIndex()));
if (invertPtr)
{
// Verify that the invert attribute has the (random) correct role before applying it
if (invert.iAttribute->getResolvedType(invert).role == AttributeRole::eTimeCode)
{
rawOutput ^= 0xffffffff;
}
}
else
{
db.logError("Could not retrieve the data for invert");
}
}
}
// Set the modified result onto the output as usual
db.outputs.result() = rawOutput;
return true;
};
REGISTER_OGN_NODE()
} // namespace tutorials
} // namespace graph
} // namespace omni
```
```python
"""Implementation of the node OgnTutorialDynamicAttributesPy.ogn"""
from contextlib import suppress
from operator import xor
import omni.graph.core as og
class OgnTutorialDynamicAttributesPy:
@staticmethod
def compute(db) -> bool:
"""Compute the output based on the input and the presence or absence of dynamic attributes"""
raw_output = db.inputs.value
# The suppression of the AttributeError will just skip this section of code if the dynamic attribute
# is not present
with suppress(AttributeError):
# firstBit will invert the bit with its number, if present.
if 0 <= db.inputs.firstBit <= 31:
raw_output = xor(raw_output, 2**db.inputs.firstBit)
```
When the node is deleted the dynamic attribute will also be deleted, and the attribute will be stored in the USD file.
If you want to remove the attribute from the node at any time you would use this function:
```python
@classmethod
def remove_attribute(obj, *args, **kwargs) -> bool: # noqa: N804,PLC0202,PLE0202
"""Removes an existing dynamic attribute from a node.
This function can be called either from the class or using an instantiated object. The first argument is
positional, being either the class or object. All others are by keyword and optional, defaulting to the value
set in the constructor in the object context and the function defaults in the class context.
Args:
obj: Either cls or self depending on how the function was called
attribute: Reference to the attribute to be removed
node: If the attribute reference is a string the node is used to find the attribute to be removed
undoable: If True the operation is added to the undo queue, else it is done immediately and forgotten
Raises:
OmniGraphError: if the attribute was not found or could not be removed
"""
```
The second optional parameter is only needed when the attribute is passed as a string. When passing an
`og.Attribute` the node is already known, being part of the attribute.
```python
import omni.graph.core as og
new_attr = og.Controller.create_attribute("/World/MyNode", "newInput", "float[3]")
# When passing the attribute the node is not necessary
og.Controller.remove_attribute(new_attr)
# However if you don't have the attribute available you can still use the name, noting that the
# namespace must be present.
# og.Controller.remove_attribute("inputs:newInput", "/World/MyNode")
```
### Adding More Information
While the attribute name and type are sufficient to unambiguously create it there is other information you can add
that would normally be present in the .ogn file. It’s a good idea to add some of the basic metadata for the UI.
```python
import omni.graph.core as og
new_attr = og.Controller.create_attribute("/World/MyNode", "newInput", "vectorf[3]")
new_attr.set_metadata(og.MetadataKeys.DESCRIPTION, "This is a new input with a vector in it")
new_attr.set_metadata(og.MetadataKeys.UI_NAME, "Input Vector")
```
While dynamic attributes don’t have default values you can do the equivalent by setting a value as soon as you
create the attribute:
```python
import omni.graph.core as og
new_attr = og.Controller.create_attribute("/World/MyNode", "newInput", "vectorf[3]")
og.Controller.set(new_attr, [1.0, 2.0, 3.0])
```
This default value can also be changed at any time (even when the attribute is already connected):
```python
new_attr.set_default([1.0, 0.0, 0.0])
``` | 8,063 |
tutorial26.md | # Tutorial 26 - Generic Math Node
This tutorial demonstrates how to compose nodes that perform mathematical operations in Python using numpy. Using numpy has the advantage that it is API-compatible with cuNumeric. As demonstrated in the Extended Attributes tutorial, generic math nodes use extended attributes to allow inputs and outputs of arbitrary numeric types, specified using the "numerics" keyword.
```json
{
"inputs": {
        "myNumericAttribute": {
"description": "Accepts an incoming connection from any type of numeric value",
"type": ["numerics"]
}
}
}
```
## OgnTutorialGenericMathNode.ogn
The `ogn` file shows the implementation of a node named “omni.graph.tutorials.GenericMathNode”, which takes inputs of any numeric types and performs a multiplication.
```json
{
"GenericMathNode": {
"description": [
"This is a tutorial node. It is functionally equivalent to the built-in Multiply node,",
"but written in python as a practical demonstration of using extended attributes to ",
"write math nodes that work with any numeric types, including arrays and tuples."
],
"version": 1,
"language": "python",
"uiName": "Tutorial Python Node: Generic Math Node",
"categories": "tutorials",
"inputs": {
"a": {
"type": ["numerics"],
"description": "First number to multiply",
"uiName": "A"
},
"b": {
"type": ["numerics"],
"description": "Second number to multiply",
"uiName": "B"
}
}
}
}
```
These tests exercise the multiplication with several combinations of resolved input types:
```json
[
    {
        "inputs:a": { "type": "double", "value": 2 },
        "inputs:b": { "type": "double", "value": 3 },
        "outputs:product": { "type": "double", "value": 6 }
    },
    {
        "inputs:a": { "type": "double[2]", "value": [1.0, 42.0] },
        "inputs:b": { "type": "double[2]", "value": [2.0, 1.0] },
        "outputs:product": { "type": "double[2]", "value": [2.0, 42.0] }
    },
    {
        "inputs:a": { "type": "double[]", "value": [1.0, 42.0] },
        "inputs:b": { "type": "double", "value": 2.0 },
        "outputs:product": { "type": "double[]", "value": [2.0, 84.0] }
    },
    {
        "inputs:a": { "type": "double[2][]", "value": [[10, 5], [1, 1]] },
        "inputs:b": { "type": "double[2]", "value": [5, 5] },
        "outputs:product": { "type": "double[2][]", "value": [[50, 25], [5, 5]] }
    }
]
```
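The node relies on numpy broadcasting to make these mixed-shape multiplications work, and one case needs special handling: multiplying an array of vectors by an array of scalars, where a trailing axis must be added so each scalar pairs with its own vector. A standalone sketch of that reshaping (plain numpy, independent of OmniGraph):

```python
import numpy as np

# An array of two 2-vectors (the "double[2][]" case) and one scalar per vector
vectors = np.array([[1.0, 2.0], [3.0, 4.0]])
scalars = np.array([10.0, 100.0])

# A naive np.multiply(vectors, scalars) would broadcast the scalars across the
# wrong axis, pairing each scalar with a column. Adding a trailing axis turns
# the scalars into shape (2, 1) so each one multiplies its whole vector.
result = np.multiply(vectors, scalars[:, np.newaxis])
```

Here `result` is `[[10, 20], [300, 400]]`: each vector was scaled by its own scalar.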
## OgnTutorialGenericMathNode.py
The `py` file contains the implementation of the node. It takes two numeric inputs and performs a multiplication, demonstrating how to handle cases where the inputs are both numeric types but vary in precision, format or dimension.
```python
import numpy as np
import omni.graph.core as og
# Mappings of possible numpy dtypes from the result data type and back
dtype_from_basetype = {
    og.BaseDataType.INT: np.int32,
    og.BaseDataType.INT64: np.int64,
    og.BaseDataType.HALF: np.float16,
    og.BaseDataType.FLOAT: np.float32,
    og.BaseDataType.DOUBLE: np.float64,
}

supported_basetypes = [
    og.BaseDataType.INT,
    og.BaseDataType.INT64,
    og.BaseDataType.HALF,
    og.BaseDataType.FLOAT,
    og.BaseDataType.DOUBLE,
]

basetype_resolution_table = [
    [0, 1, 3, 3, 4],  # Int
    [1, 1, 4, 4, 4],  # Int64
    [3, 4, 2, 3, 4],  # Half
    [3, 4, 3, 3, 4],  # Float
    [4, 4, 4, 4, 4],  # Double
]


class OgnTutorialGenericMathNode:
    """Node to multiply two values of any type"""

    @staticmethod
    def compute(db) -> bool:
        """Compute the product of two values, if the types are all resolved.

        When the types are not compatible for multiplication, or the result type is not compatible with the
        resolved output type, the method will log an error and fail
        """
        try:
            # To support multiplying an array of vectors by an array of scalars we need to broadcast the scalars to
            # match the shape of the vector array, and we will convert the result to whatever the result is resolved to
            atype = db.inputs.a.type
            btype = db.inputs.b.type
            rtype = db.outputs.product.type

            result_dtype = dtype_from_basetype.get(rtype.base_type, None)

            # Use numpy to perform the multiplication in order to automatically handle both scalar and array types
            # and automatically convert to the resolved output type
            if atype.array_depth > 0 and btype.array_depth > 0 and btype.tuple_count < atype.tuple_count:
                r = np.multiply(db.inputs.a.value, db.inputs.b.value[:, np.newaxis], dtype=result_dtype)
            else:
                r = np.multiply(db.inputs.a.value, db.inputs.b.value, dtype=result_dtype)

            db.outputs.product.value = r
        except TypeError as error:
            db.log_error(f"Multiplication could not be performed: {error}")
            return False
        return True

    @staticmethod
    def on_connection_type_resolve(node) -> None:
        """Resolves the type of the output based on the types of the inputs"""
        atype = node.get_attribute("inputs:a").get_resolved_type()
        btype = node.get_attribute("inputs:b").get_resolved_type()
        productattr = node.get_attribute("outputs:product")
        producttype = productattr.get_resolved_type()

        # The output type can only be inferred when both input types are resolved
        if (
            atype.base_type != og.BaseDataType.UNKNOWN
            and btype.base_type != og.BaseDataType.UNKNOWN
            and producttype.base_type == og.BaseDataType.UNKNOWN
        ):
            # Resolve the base type using the lookup table
            base_type = og.BaseDataType.DOUBLE
            a_index = supported_basetypes.index(atype.base_type)
            b_index = supported_basetypes.index(btype.base_type)
            if a_index >= 0 and b_index >= 0:
                base_type = supported_basetypes[basetype_resolution_table[a_index][b_index]]
            productattr.set_resolved_type(
                og.Type(base_type, max(atype.tuple_count, btype.tuple_count), max(atype.array_depth, btype.array_depth))
            )
```
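The `basetype_resolution_table` encodes, for each pair of input base types, the index of the base type the product should resolve to. A plain-Python sketch of the lookup (type-name strings stand in for the `og.BaseDataType` constants, so no OmniGraph is required):

```python
# Order matches supported_basetypes in the node: int, int64, half, float, double
SUPPORTED_BASETYPES = ["int", "int64", "half", "float", "double"]

BASETYPE_RESOLUTION_TABLE = [
    [0, 1, 3, 3, 4],  # int
    [1, 1, 4, 4, 4],  # int64
    [3, 4, 2, 3, 4],  # half
    [3, 4, 3, 3, 4],  # float
    [4, 4, 4, 4, 4],  # double
]


def resolve_base_type(a: str, b: str) -> str:
    """Look up the base type that multiplying an 'a' by a 'b' resolves to."""
    a_index = SUPPORTED_BASETYPES.index(a)
    b_index = SUPPORTED_BASETYPES.index(b)
    return SUPPORTED_BASETYPES[BASETYPE_RESOLUTION_TABLE[a_index][b_index]]


# For example an int multiplied by a half is widened to a float
print(resolve_base_type("int", "half"))
```

The table always widens toward the type that can represent both operands, which is why the double row and column contain only the index of double.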
tutorial27.md | # Tutorial 27 - GPU Data Node with CPU Array Pointers
The GPU data node illustrates the alternative method of extracting array data from the GPU by returning a CPU pointer to the GPU array. Normally the node returns a GPU pointer to an array of GPU pointers, optimized for future use in parallel processing of GPU array data. By returning a CPU pointer to the array you can use host-side processing to dereference the pointers.
## OgnTutorialCudaDataCpu.ogn
The `ogn` file shows the implementation of a node named “omni.graph.tutorials.CudaCpuArrays”, which has an input and an output of type `float[3][]`, along with the special keyword to indicate that the pointer to the CUDA arrays should be in CPU space.
```json
{
"CudaCpuArrays": {
"version": 1,
"memoryType": "cuda",
"cudaPointers": "cpu",
"description": [
"This is a tutorial node. It illustrates the alternative method of extracting pointers to GPU array data",
"in which the pointer returned is a CPU pointer and can be dereferenced on the CPU side. Without the",
"cudaPointers value set that pointer would be a GPU pointer to an array of GPU pointers and could",
"only be dereferenced on the device."
],
"metadata": {
"uiName": "Tutorial Node: Attributes With CUDA Array Pointers In Cpu Memory"
},
"categories": "tutorials",
"inputs": {
"points": {
"type": "float[3][]",
"memoryType": "any",
"description": ["Array of points to be moved"],
"default": []
},
"multiplier": {
"type": "float",
"memoryType": "any",
"description": ["Multiplier for the points"],
"default": 1.0
}
},
"outputs": {
"transformedPoints": {
"type": "float[3][]",
"memoryType": "any",
"description": ["Transformed array of points"],
"default": []
}
}
}
}
```
```json
            "type": "float[3]",
            "description": ["Amplitude of the expansion for the input points"],
            "default": [1.0, 1.0, 1.0]
        }
    },
    "outputs": {
        "points": {
            "type": "float[3][]",
            "description": ["Final positions of points"]
        }
    },
    "tests": [
        {
            "inputs:multiplier": [1.0, 2.0, 3.0],
            "inputs:points": [[1.0, 1.0, 1.0], [2.0, 2.0, 2.0], [3.0, 3.0, 3.0]],
            "outputs:points": [[1.0, 2.0, 3.0], [2.0, 4.0, 6.0], [3.0, 6.0, 9.0]]
        }
    ]
}
```
## OgnTutorialCudaDataCpu.cpp
The `cpp` file contains the implementation of the compute method, which in turn calls the CUDA algorithm.
```cpp
// Copyright (c) 2020-2024, NVIDIA CORPORATION. All rights reserved.
//
// NVIDIA CORPORATION and its licensors retain all intellectual property
// and proprietary rights in and to this software, related documentation
// and any modifications thereto. Any use, reproduction, disclosure or
// distribution of this software and related documentation without an express
// license agreement from NVIDIA CORPORATION is strictly prohibited.
//
#include <omni/graph/core/GpuArray.h>
#include <OgnTutorialCudaDataCpuDatabase.h>
// This function exercises referencing of array data on the GPU.
// The CUDA code takes its own "float3" data types, which are castable equivalents to the generated GfVec3f.
// The GpuArray/ConstGpuArray wrappers isolate the GPU pointers from the CPU code.
extern "C" void applyDeformationCpuToGpu(pxr::GfVec3f* outputPoints,
const pxr::GfVec3f* inputPoints,
const pxr::GfVec3f* multiplier,
size_t numberOfPoints);
// This node runs a couple of algorithms on the GPU, while accessing parameters from the CPU
class OgnTutorialCudaDataCpu
{
public:
static bool compute(OgnTutorialCudaDataCpuDatabase& db)
{
        size_t numberOfPoints = db.inputs.points.size();
        db.outputs.points.resize(numberOfPoints);
        if (numberOfPoints > 0)
        {
            // The main point to note here is how the pointer can be dereferenced on the CPU side, whereas normally
            // you would have to send it to the GPU for dereferencing. (The long term purpose of the latter is to make
            // it more efficient to handle arrays-of-arrays on the GPU, however since that is not yet implemented
            // we can get away with a single dereference here.)
            applyDeformationCpuToGpu(
                *db.outputs.points, *db.inputs.points.gpu(), db.inputs.multiplier(), numberOfPoints);

            // Just as a test now also reference the points as CPU data to ensure the value casts correctly
            const float* pointsAsCpu = reinterpret_cast<const float*>(db.inputs.points.cpu().data());
            if (!pointsAsCpu)
            {
                db.logWarning("Points could not be copied to the CPU");
                return false;
            }
        }
        return true;
    }
};
```
## OgnTutorialCudaDataCpu_CUDA.cu
The `cu` file contains the implementation of the algorithm on the GPU using CUDA.
```cpp
// Copyright (c) 2020-2021, NVIDIA CORPORATION. All rights reserved.
//
// NVIDIA CORPORATION and its licensors retain all intellectual property
// and proprietary rights in and to this software, related documentation
// and any modifications thereto. Any use, reproduction, disclosure or
// distribution of this software and related documentation without an express
// license agreement from NVIDIA CORPORATION is strictly prohibited.
//
#include <OgnTutorialCudaDataCpuDatabase.h>
// ======================================================================
// Apply the multiplier to "inputPoints" to yield "outputPoints"
__global__ void applyDeformationCUDA(
float3* outputPoints,
const float3* inputPoints,
const float3* multiplier,
size_t numberOfPoints
)
{
// Make sure the current evaluation block is in range of the available points
int currentPointIndex = blockIdx.x * blockDim.x + threadIdx.x;
if (numberOfPoints <= currentPointIndex) return;

    // Apply the multiplier to the current points
    outputPoints[currentPointIndex].x = inputPoints[currentPointIndex].x * multiplier->x;
    outputPoints[currentPointIndex].y = inputPoints[currentPointIndex].y * multiplier->y;
    outputPoints[currentPointIndex].z = inputPoints[currentPointIndex].z * multiplier->z;
}

// CPU interface called from OgnTutorialCudaDataCpu::compute, used to launch the GPU workers.
// "numberOfPoints" is technically redundant since it's equal to inputPoints.size(), however that call
// is only available on the __device__ (GPU) side and the value is needed in the calculation of the
// number of blocks required here on the __host__ (CPU) side so it's better to pass it in.
extern "C" void applyDeformationCpuToGpu(
    float3* outputPoints,
    const float3* inputPoints,
    const float3* multiplier,
    size_t numberOfPoints
)
{
    // Split the work into 256 threads, an arbitrary number that could be more precisely tuned when necessary
    const int numberOfThreads = 256;
    // Block count is the number of groups of "numberOfThreads" threads needed to cover all of the points
    const int numberOfBlocks = (numberOfPoints + numberOfThreads - 1) / numberOfThreads;

    // Launch the GPU deformation using the calculated number of threads and blocks
    applyDeformationCUDA<<<numberOfBlocks, numberOfThreads>>>(outputPoints, inputPoints, multiplier, numberOfPoints);
}
```
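The launch configuration in `applyDeformationCpuToGpu` is the standard integer ceiling division: enough blocks of `numberOfThreads` threads to cover every point, with out-of-range threads filtered by the bounds check at the top of the kernel. A quick sketch of that arithmetic in Python:

```python
def blocks_needed(number_of_points: int, threads_per_block: int = 256) -> int:
    """Smallest block count so that blocks * threads covers every point."""
    return (number_of_points + threads_per_block - 1) // threads_per_block


# 257 points need two blocks of 256 threads; threads 257-511 of the second
# block fail the kernel's range check and return immediately
print(blocks_needed(257))
```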
## OgnTutorialCudaDataCpuPy.py
The `py` file contains the implementation of the compute method, which for this example doesn't actually compute anything, as extra extension support (e.g. a Python -> CUDA compiler) would be required for Python to run on the GPU.
```python
"""
Implementation of the Python node accessing CUDA attributes in a way that accesses the GPU arrays with a CPU pointer.
No actual computation is done here as the tutorial nodes are not set up to handle GPU computation.
"""
import ctypes
import omni.graph.core as og
# Only one type of data is handled by the compute - pointf[3][]
POINT_ARRAY_TYPE = og.Type(og.BaseDataType.FLOAT, tuple_count=3, array_depth=1, role=og.AttributeRole.POSITION)
def get_address(attr: og.Attribute) -> int:
    """Returns the contents of the memory the attribute points to"""
    if attr.memory == 0:
        return 0
    ptr_type = ctypes.POINTER(ctypes.c_size_t)
    ptr = ctypes.cast(attr.memory, ptr_type)
    return ptr.contents.value


class OgnTutorialCudaDataCpuPy:
    """Exercise GPU access for extended attributes through a Python OmniGraph node"""

    @staticmethod
    def compute(db) -> bool:
        """Accesses the CUDA data, which for arrays exists as CPU memory pointers to GPU pointer arrays.

        No compute is done here.
        """
        # Put accessors into local variables for convenience
        input_points = db.inputs.points
        multiplier = db.inputs.multiplier

        # Set the size to what is required for the multiplication - this can be done without accessing GPU data.
        # Notice that since this is CPU pointers to GPU data the size has to be taken from the data type description
        # rather than the usual method of taking len(input_points).
        db.outputs.points_size = input_points.dtype.size

        # After changing the size the memory isn't allocated immediately. Allocation is delayed until you
        # request access to the data, which is what this line will do.
        output_points = db.outputs.points

        # This is a separate test to add a points attribute to the output bundle to show how, when a bundle has
        # CPU pointers to the GPU data, that information propagates to its children.
        # Start with an empty output bundle.
        output_bundle = db.outputs.outBundle
        output_bundle.clear()
        output_bundle.add_attributes([og.Type(og.BaseDataType.FLOAT, 3, 1)], ["points"])
        bundle_attr = output_bundle.attribute_by_name("points")
        # As for the main attributes, setting the bundle member size readies a buffer of the given size on the GPU
        bundle_attr.size = input_points.dtype.size

        # The output cannot be written to here through the normal assignment mechanisms, e.g. the typical step of
        # copying input points to the output points, as the data is not accessible on the GPU through Python directly.
        # Instead you can access the GPU memory pointers through the attribute values and send them to CUDA code,
        # either generated from the Python code or accessed through something like pybind wrappers.
        print("Locations in CUDA() should be in GPU memory space")
        print(f"    CPU Location for reference = {hex(id(db))}", flush=True)
        print(f"    Input points are {input_points} at CUDA({hex(get_address(input_points))})", flush=True)
        print(f"    Multiplier is CUDA({multiplier})", flush=True)
        print(f"    Output points are {output_points} at CUDA({hex(get_address(output_points))})", flush=True)
        print(f"    Bundle {bundle_attr.gpu_value} at CUDA({hex(get_address(bundle_attr.gpu_value))})", flush=True)
        return True
```
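The `get_address` helper is nothing more than a `ctypes` pointer dereference of the attribute's `memory` field. The same pattern can be exercised with plain `ctypes` objects; the `storage` value below is a stand-in for a real attribute's memory, not part of the OmniGraph API:

```python
import ctypes

# Stand-in for attr.memory: the address of a size_t holding some pointer value
storage = ctypes.c_size_t(0xDEADBEEF)
memory = ctypes.addressof(storage)

# Dereference exactly the way get_address does: cast the raw address to a
# pointer-to-size_t, then read the pointed-to value
ptr = ctypes.cast(memory, ctypes.POINTER(ctypes.c_size_t))
value = ptr.contents.value  # 0xDEADBEEF
```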
tutorial28.md | # Tutorial 28 - Node with simple OGN computeVectorized
This tutorial demonstrates how to compose a node that implements a very simple `computeVectorized` function. It shows how to access the data using the different available methods.
## OgnTutorialVectorizedPassthrough.ogn
The `ogn` file shows the implementation of a node named "omni.graph.tutorials.TutorialVectorizedPassThrough", which takes a floating point input and simply copies it to its output.
```json
{
"TutorialVectorizedPassThrough": {
"version": 1,
"description": "Simple passthrough node that copy its input to its output in a vectorized way",
"categories": "tutorials",
"uiName": "Tutorial Node: Vectorized Passthrough",
"inputs": {
"value": {
"type": "float",
"description": "input value"
}
},
"outputs": {
"value": {
"type": "float",
"description": "output value"
}
},
"tests": [
{ "inputs:value": 1, "outputs:value": 1 },
{ "inputs:value": 2, "outputs:value": 2 },
{ "inputs:value": 3, "outputs:value": 3 }
]
}
}
```
```json
{
"inputs:value": 4,
"outputs:value": 4
}
```
## OgnTutorialVectorizedPassthrough.cpp
The `cpp` file contains the implementation of the node. It takes a floating point input and simply copies it to its output, demonstrating how to handle a vectorized compute. It shows what the implementation would be for a regular compute function, and the different ways it could implement a computeVectorized function.
- method #1: by switching the entire database to the next instance, while performing the computation in a loop
- method #2: by directly indexing attributes for the right instance in a loop
- method #3: by retrieving the raw data, and working directly with it
```cpp
// Copyright (c) 2023-2024, NVIDIA CORPORATION. All rights reserved.
//
// NVIDIA CORPORATION and its licensors retain all intellectual property
// and proprietary rights in and to this software, related documentation
// and any modifications thereto. Any use, reproduction, disclosure or
// distribution of this software and related documentation without an express
// license agreement from NVIDIA CORPORATION is strictly prohibited.
//
#include <OgnTutorialVectorizedPassthroughDatabase.h>
// The method used by the computation to perform the passthrough
// 0 = simple copy, no vectorization
#define TUTO_PASSTHROUGH_METHOD_SIMPLE 0
// 1 = vectorized copy by moving the entire database to the next instance
#define TUTO_PASSTHROUGH_METHOD_DB 1
// 2 = vectorized copy by indexing the instance directly per attribute
#define TUTO_PASSTHROUGH_METHOD_ATTR 2
// 3 = vectorized copy using raw data
#define TUTO_PASSTHROUGH_METHOD_RAW 3
// By default, use the most efficient method
#define TUTO_PASSTHROUGH_METHOD TUTO_PASSTHROUGH_METHOD_RAW
// This node perform a copy of its input to its output
class OgnTutorialVectorizedPassthrough
{
public:
#if TUTO_PASSTHROUGH_METHOD == TUTO_PASSTHROUGH_METHOD_SIMPLE
// begin-regular
static bool compute(OgnTutorialVectorizedPassthroughDatabase& db)
{
db.outputs.value() = db.inputs.value();
return true;
}
// end-regular
#elif TUTO_PASSTHROUGH_METHOD == TUTO_PASSTHROUGH_METHOD_DB
// begin-db
static size_t computeVectorized(OgnTutorialVectorizedPassthroughDatabase& db, size_t count)
{
for (size_t idx = 0; idx < count; ++idx)
{
db.outputs.value() = db.inputs.value();
db.moveToNextInstance();
}
return count;
}
// end-db
#elif TUTO_PASSTHROUGH_METHOD == TUTO_PASSTHROUGH_METHOD_ATTR
// begin-attr
static size_t computeVectorized(OgnTutorialVectorizedPassthroughDatabase& db, size_t count)
    {
        for (size_t idx = 0; idx < count; ++idx)
            db.outputs.value(idx) = db.inputs.value(idx);
        return count;
    }
    // end-attr
#elif TUTO_PASSTHROUGH_METHOD == TUTO_PASSTHROUGH_METHOD_RAW
    // begin-raw
    static size_t computeVectorized(OgnTutorialVectorizedPassthroughDatabase& db, size_t count)
    {
        auto spanIn = db.inputs.value.vectorized(count);
        auto spanOut = db.outputs.value.vectorized(count);

        memcpy(spanOut.data(), spanIn.data(), std::min(spanIn.size_bytes(), spanOut.size_bytes()));

        return count;
    }
    // end-raw
#endif
};

REGISTER_OGN_NODE()
```
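All the vectorized variants perform the same work, copying `count` input instances to `count` outputs, and differ only in how the instance data is addressed. A plain-Python sketch of the two extremes, with lists standing in for the attribute spans:

```python
def compute_indexed(inputs, outputs, count):
    """Like method #2: address each instance explicitly in a loop."""
    for idx in range(count):
        outputs[idx] = inputs[idx]
    return count


def compute_raw(inputs, outputs, count):
    """Like method #3: one bulk copy over the whole contiguous span."""
    outputs[:count] = inputs[:count]
    return count


values = [1.0, 2.0, 3.0, 4.0]
out_indexed = [0.0] * 4
out_raw = [0.0] * 4
compute_indexed(values, out_indexed, 4)
compute_raw(values, out_raw, 4)
```

The raw variant is the most efficient in the C++ node for the same reason as here: when the per-instance data is contiguous, a single `memcpy` over the span avoids any per-instance accessor overhead.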
tutorial29.md | # Tutorial 29 - Node with simple ABI computeVectorized
This tutorial demonstrates how to compose a node that implements a very simple `computeVectorized` function directly through the ABI. It shows how to access the data using the different available methods.
## OgnTutorialVectorizedABIPassThrough.cpp
The `cpp` file contains the implementation of the node. It takes a floating point input and simply copies it to its output, demonstrating how to handle a vectorized compute. It shows what the implementation would be for a regular `compute` function, and the different ways it could implement a `computeVectorized` function.
- method #1: by indexing attribute retrieval ABI function directly in a loop
- method #2: by mutating the attribute data handle in a loop
- method #3: by retrieving the raw data, and working directly with it
```cpp
// Copyright (c) 2023-2024, NVIDIA CORPORATION. All rights reserved.
//
// NVIDIA CORPORATION and its licensors retain all intellectual property
// and proprietary rights in and to this software, related documentation
// and any modifications thereto. Any use, reproduction, disclosure or
// distribution of this software and related documentation without an express
// license agreement from NVIDIA CORPORATION is strictly prohibited.
//
#include <OgnTutorialVectorizedABIPassthroughDatabase.h>
// The method used by the computation to perform the passthrough
// 0 = simple copy, no vectorization
#define TUTO_ABI_PASSTHROUGH_METHOD_SIMPLE 0
// 1 = vectorized copy by indexing the instance directly per attribute
#define TUTO_ABI_PASSTHROUGH_METHOD_ATTR 1
// 2 = vectorized copy by mutating attribute data handles
#define TUTO_ABI_PASSTHROUGH_METHOD_MUTATE 2
// 3 = vectorized copy using raw data: this is the most efficient method
#define TUTO_ABI_PASSTHROUGH_METHOD_RAW 3
// By default here, use the least popular method to make sure it remains tested
#define TUTO_ABI_PASSTHROUGH_METHOD TUTO_ABI_PASSTHROUGH_METHOD_ATTR
// This node perform a copy of its input to its output
class OgnTutorialVectorizedABIPassthrough
{
public:
#if TUTO_ABI_PASSTHROUGH_METHOD == TUTO_ABI_PASSTHROUGH_METHOD_SIMPLE
// begin-regular
static bool compute(GraphContextObj const& contextObj, NodeObj const& nodeObj)
{
NodeContextHandle nodeHandle = nodeObj.nodeContextHandle;

        auto inputValueAttr = getAttributeR(contextObj, nodeHandle, Token("inputs:value"), kAccordingToContextIndex);
        const float* inputValue = getDataR<float>(contextObj, inputValueAttr);

        auto outputValueAttr = getAttributeW(contextObj, nodeHandle, Token("outputs:value"), kAccordingToContextIndex);
        float* outputValue = getDataW<float>(contextObj, outputValueAttr);

        if (inputValue && outputValue)
        {
            *outputValue = *inputValue;
            return true;
        }

        return false;
    }
    // end-regular
#elif TUTO_ABI_PASSTHROUGH_METHOD == TUTO_ABI_PASSTHROUGH_METHOD_ATTR
    // begin-attr
    static size_t computeVectorized(GraphContextObj const& contextObj, NodeObj const& nodeObj, size_t count)
    {
        GraphContextObj const* contexts = nullptr;
        NodeObj const* nodes = nullptr;

        // When using auto instancing, similar graphs can get merged together and computed vectorized.
        // In such a case, each instance represents a different node in a different graph.
        // Accessing the data either through the provided node or through the actual auto-instance node would work
        // properly, but any other ABI call requiring the node would need to provide the proper node. While not
        // necessary in this context, do the work of using the proper auto-instance node in order to demonstrate
        // how to use it.
        size_t handleCount = nodeObj.iNode->getAutoInstances(nodeObj, contexts, nodes);

        auto nodeHandle = [&](InstanceIndex index) -> NodeContextHandle
        { return nodes[handleCount == 1 ? 0 : index.index].nodeContextHandle; };

        auto context = [&](InstanceIndex index) -> GraphContextObj
        { return contexts[handleCount == 1 ? 0 : index.index]; };

        size_t ret = 0;
        const float* inputValue{ nullptr };
        float* outputValue{ nullptr };
        auto inToken = Token("inputs:value");
        auto outToken = Token("outputs:value");

        for (InstanceIndex idx{ 0 }; idx < InstanceIndex{ count }; ++idx)
        {
            auto inputValueAttr = getAttributeR(context(idx), nodeHandle(idx), inToken, idx);
            inputValue = getDataR<float>(context(idx), inputValueAttr);

            auto outputValueAttr = getAttributeW(context(idx), nodeHandle(idx), outToken, idx);
            outputValue = getDataW<float>(context(idx), outputValueAttr);

            if (inputValue && outputValue)
            {
                *outputValue = *inputValue;
                ++ret;
            }
        }

        return ret;
    }
    // end-attr
#elif TUTO_ABI_PASSTHROUGH_METHOD == TUTO_ABI_PASSTHROUGH_METHOD_MUTATE
    // begin-mutate
    static size_t computeVectorized(GraphContextObj const& contextObj, NodeObj const& nodeObj, size_t count)
    {
        NodeContextHandle nodeHandle = nodeObj.nodeContextHandle;

        size_t ret = 0;
        const float* inputValue{ nullptr };
        float* outputValue{ nullptr };
        auto inputValueAttr = getAttributeR(contextObj, nodeHandle, Token("inputs:value"), kAccordingToContextIndex);
        auto outputValueAttr = getAttributeW(contextObj, nodeHandle, Token("outputs:value"), kAccordingToContextIndex);
        // ... (the remainder of the mutate loop and the raw variant are not included in this excerpt)
    }
    // end-mutate
#endif
};

REGISTER_OGN_NODE()
```