Supported Storage Systems#

WRAPP supports multiple storage systems. This section covers the supported URL formats, configuration options, and authentication methods for each.

Supported Storage and URL Formats#

WRAPP supports URLs to Nucleus servers, S3 buckets, Azure containers/blobs, and the local file system:

S3

Data on S3 can be accessed using https://...cloudfront.net, s3://..., or https://...amazonaws.com URLs. WRAPP uses boto3 to access S3 for s3://... and https://...amazonaws.com URLs; https://...cloudfront.net URLs are opened via the client library. For details on authentication, please refer to the S3 Authentication section.

Warning

Plain HTTP is insecure and not recommended for public S3/CDN objects; use HTTPS. HTTP may be used only for local/test endpoints (e.g., MinIO at http://localhost:9000).

WRAPP is primarily tested with S3 general purpose buckets without any additional features enabled.

To use AWS Transfer Acceleration, set the environment variable WRAPP_ENABLE_S3_ACCELERATION to true. This is a global switch: every bucket accessed must have Transfer Acceleration enabled in its settings. Transfer Acceleration can incur additional charges on the AWS account, so test with representative large payloads before enabling it.
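For example, to switch acceleration on for the current shell session:

```shell
# Global switch: applies to every bucket WRAPP accesses in this session.
export WRAPP_ENABLE_S3_ACCELERATION=true
```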

To fine-tune S3 multipart upload performance, the following environment variables can be configured:

  • WRAPP_S3_MULTIPART_THRESHOLD: Minimum file size (in bytes) before multipart uploads are used. Default is 5 MiB (5242880).

  • WRAPP_S3_MULTIPART_CHUNKSIZE: Size (in bytes) of each part/chunk in a multipart upload. Default is 5 MiB (5242880).

  • WRAPP_S3_MAX_CONCURRENCY: Maximum number of threads/connections used for multipart uploads. Default is 50.

Example usage:

export WRAPP_S3_MULTIPART_THRESHOLD=10485760   # 10 MiB
export WRAPP_S3_MULTIPART_CHUNKSIZE=10485760   # 10 MiB
export WRAPP_S3_MAX_CONCURRENCY=32

Azure

Data on Azure Blob Storage can be accessed using https://<account>.blob.core.windows.net/<container>/... URLs. For details on authentication, please refer to the Azure authentication section.

Local file system

Data on the local file system can be accessed using file://localhost/... or file:///... URLs. Any URL or path that has no scheme is interpreted as a file path, so you can specify file:local_folder or local_folder to address a local directory.

Nucleus Servers

Data on Nucleus servers can be accessed using omniverse://... URLs. You can authenticate either interactively or with credentials provided on the command line; please refer to the Nucleus Authentication section for details.

Authentication#

Nucleus Authentication#

By default, WRAPP does not allow interactive authentication. It can be enabled with the --interactive-auth command line parameter, which may open a browser window to support single sign-on workflows. Successful connections are cached, so no further authentication is required for subsequent commands.
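As a sketch, a one-off command with interactive authentication enabled might look like this (omniverse://my-server is a placeholder server name):

```shell
# May open a browser window for single sign-on; the session is cached afterwards.
wrapp list-repo omniverse://my-server --interactive-auth
```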

It is also possible to provide credentials with the --auth parameter. The credentials take the form of a comma-separated triplet, consisting of

  1. The server URL. This needs to start with omniverse:// and must match the server name as used in the URLs that target the server.

  2. The username. This can be a regular username, or the special name $omni-api-token when the third item is an API token rather than a password.

  3. The password for that user, or the API token generated for a single sign-on user.

As an example, this is how to specify a wrapp command authenticating against a localhost workstation with the default username and password:

wrapp list-repo omniverse://localhost --auth omniverse://localhost,omniverse,omniverse

and this is how you would use an API token stored in an environment variable on Windows (See API Tokens in the Nucleus documentation):

wrapp list-repo omniverse://staging.nvidia.com/staging_remote/beta_packages --auth omniverse://staging.nvidia.com,$omni-api-token,%STAGING_TOKEN%

On Linux, remember to escape (or single-quote) the $ in $omni-api-token so the shell does not expand it, and reference the environment variable as $STAGING_TOKEN rather than %STAGING_TOKEN%.
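For example, one way to build the triplet safely on Linux is to single-quote the literal parts (STAGING_TOKEN is assigned a placeholder value here; in practice it would be exported beforehand):

```shell
# Single quotes keep the shell from expanding the literal $omni-api-token;
# closing the quote lets $STAGING_TOKEN expand normally.
STAGING_TOKEN="example-token"   # placeholder value for illustration
AUTH='omniverse://staging.nvidia.com,$omni-api-token,'"$STAGING_TOKEN"
```

The resulting value can then be passed as --auth "$AUTH".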

For Python API usage, authentication can be configured programmatically using wrapp.initialize().

Azure Authentication#

WRAPP accesses Azure Blob Storage using the azure-storage-blob library. Authentication is done through the standard Azure SDK mechanisms. For details, please refer to the Azure Storage authentication documentation.

It is also possible to provide credentials for Azure through the CLI --auth parameter or the WRAPP_AUTH environment variable using the format server_url,connection_string:

wrapp list-repo https://myaccount.blob.core.windows.net/mycontainer/ --auth https://myaccount.blob.core.windows.net,DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=...
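The same credentials can be supplied via the environment instead of the command line (the connection string is truncated here, as above):

```shell
# server_url,connection_string pair, picked up in place of --auth.
export WRAPP_AUTH="https://myaccount.blob.core.windows.net,DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=..."
```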

S3 Authentication#

When WRAPP directly accesses S3 via boto3, authentication and other configuration are handled through the standard boto3 mechanisms. For details, please refer to the boto3 documentation for credentials, and configuration via environment variables and config files.
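For instance, one of the standard boto3 mechanisms is static credentials in environment variables (the values below are placeholders):

```shell
# Standard boto3/AWS environment variables; WRAPP inherits them via boto3.
export AWS_ACCESS_KEY_ID="AKIA..."        # placeholder
export AWS_SECRET_ACCESS_KEY="..."        # placeholder
export AWS_DEFAULT_REGION="us-east-1"
```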

It is also possible to provide credentials for S3 through the CLI --auth parameter or the WRAPP_AUTH environment variable; please see the command line help for details.

Storage Semantics and Empty Folders#

Different storage systems handle folders differently, which affects how WRAPP operations behave:

Object storage (S3, Azure Blob Storage)

Folders are implicit — they exist only as prefixes in object keys. When all files in a “folder” are deleted, the folder automatically ceases to exist. There is no concept of an empty folder.

File system-based storage (local file system, Nucleus servers)

Folders are first-class citizens — they exist independently of the files they contain. Empty folders can exist and persist after all files within them are removed.

This distinction has practical implications:

  • Package creation: WRAPP only captures files, not folders. Empty folders in the source are not included in packages. If you need an empty folder in your package, add a placeholder file (e.g., .keep) to it.

  • Uninstall and delete operations: On file system-based storage, WRAPP removes files but does not automatically remove the folders that contained them. Empty folders may remain and need to be cleaned up manually if desired.
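For example, before creating a package you could seed an otherwise-empty folder with a placeholder so it survives packaging (my_source and .keep are just illustrative names):

```shell
# An empty folder would be skipped during package creation;
# the placeholder file makes it part of the package contents.
mkdir -p my_source/empty_folder
touch my_source/empty_folder/.keep
```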