Supported Storage Systems#
WRAPP supports multiple storage systems. This section covers the supported URL formats, configuration options, and authentication methods for each.
Supported Storage and URL Formats#
WRAPP supports URLs to Nucleus servers, S3 buckets, Azure containers/blobs, Google Cloud Storage buckets, and the local file system:
- S3
Data on S3 can be accessed using `https://...cloudfront.net`, `s3://...` or `https://...amazonaws.com` URLs. WRAPP uses boto3 to access S3 when using `s3://...` or `https://...amazonaws.com` URLs. `https://...cloudfront.net` URLs are opened via the client library. For details on authentication, please refer to the S3 authentication section.
Warning: Plain HTTP is insecure and not recommended for public S3/CDN objects; use HTTPS. HTTP may be used only for local/test endpoints (e.g., MinIO at `http://localhost:9000`).
WRAPP is primarily tested with S3 general purpose buckets without any additional features enabled.
To use AWS Transfer Acceleration, set the environment variable `WRAPP_ENABLE_S3_ACCELERATION` to `true`. This is a global switch, and every bucket accessed must have Transfer Acceleration enabled in its settings. Transfer Acceleration may incur additional costs on the AWS account, so test before enabling it for large payloads.
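As a configuration sketch, enabling the switch looks like this (the variable name and value are the ones documented above):

```shell
# Enable AWS Transfer Acceleration globally for WRAPP.
# All buckets accessed must also have acceleration enabled on the bucket itself.
export WRAPP_ENABLE_S3_ACCELERATION=true
```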
To fine-tune S3 multipart upload performance, the following environment variables can be configured:
- `WRAPP_S3_MULTIPART_THRESHOLD`: Minimum file size (in bytes) before multipart uploads are used. Default is 5 MiB (`5242880`).
- `WRAPP_S3_MULTIPART_CHUNKSIZE`: Size (in bytes) of each part/chunk in a multipart upload. Default is 5 MiB (`5242880`).
- `WRAPP_S3_MAX_CONCURRENCY`: Maximum number of threads/connections used for multipart uploads. Default is `50`.
Example usage:
export WRAPP_S3_MULTIPART_THRESHOLD=10485760  # 10 MiB
export WRAPP_S3_MULTIPART_CHUNKSIZE=10485760  # 10 MiB
export WRAPP_S3_MAX_CONCURRENCY=32
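To see what these knobs imply, the sketch below computes how many parts a multipart upload would use for a given file size at a 10 MiB chunk size. This is illustrative shell arithmetic, not a WRAPP command:

```shell
# Number of parts for a multipart upload: ceil(file_size / chunk_size)
CHUNKSIZE=10485760      # 10 MiB, a candidate WRAPP_S3_MULTIPART_CHUNKSIZE value
FILESIZE=1073741824     # 1 GiB example payload
PARTS=$(( (FILESIZE + CHUNKSIZE - 1) / CHUNKSIZE ))
echo "$PARTS"           # 103 parts
```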
- Azure
Data on Azure Blob Storage can be accessed using `https://<account>.blob.core.windows.net/<container>/...` URLs. For details on authentication, please refer to the Azure authentication section.
- Google Cloud Storage
Data on Google Cloud Storage can be accessed using `gs://bucket/...` or `https://storage.googleapis.com/bucket/...` URLs. WRAPP uses the `google-cloud-storage` library to access GCS. Install the required dependency with:
pip install omni_wrapp_minimal[gcs]
For details on authentication, please refer to the GCS authentication section.
- Local file system
Data on the local file system can be accessed using `file://localhost/...` or `file:///...` URLs. Any path without a scheme is interpreted as a file path, so you can simply specify `local_folder`. WRAPP also accepts `file:local_folder` and treats it as a local directory.
- Nucleus Servers
Data on Nucleus servers can be accessed using `omniverse://...` URLs. It is possible to authenticate either interactively or with credentials provided on the command line; please refer to the Nucleus authentication section for details.
Authentication#
Storage API Authentication#
For Storage API discovery mode, set WRAPP_STORAGE_API_DISCOVERY_URL to the discovery endpoint.
WRAPP will call /api/v1/services and, if required, authenticate against the discovery service.
Interactive authentication can be enabled using --interactive-auth. In this mode, WRAPP starts
an OpenID browser flow and listens on http://127.0.0.1:35101/openid for the callback.
It is also possible to provide a bearer token explicitly with --auth (or WRAPP_AUTH) using
the format `server_url,bearer_token`, where `server_url` must match the discovery server URL:
wrapp list-repo file-storage-id://repo --auth https://my-discovery.example.com,<TOKEN>
For non-interactive machine-to-machine authentication (CI/CD, service accounts), use the OAuth2
client credentials flow by providing a client_id and client_secret:
wrapp list-repo file-storage-id://repo --auth https://my-discovery.example.com,<CLIENT_ID>,<CLIENT_SECRET>
WRAPP will use these credentials to obtain an access token from the discovery service’s token
endpoint using grant_type=client_credentials. The token is automatically refreshed before
expiry using the same credentials.
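A sketch of assembling the client-credentials `--auth`/`WRAPP_AUTH` value from its three comma-separated parts (all values below are placeholders):

```shell
# Placeholder credentials; the real values come from your identity provider.
DISCOVERY_URL="https://my-discovery.example.com"
CLIENT_ID="ci-service-account"
CLIENT_SECRET="s3cr3t"
export WRAPP_AUTH="${DISCOVERY_URL},${CLIENT_ID},${CLIENT_SECRET}"
echo "$WRAPP_AUTH"
```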
To select a specific OAuth client profile from discovery /api/v1/auth-config, use
WRAPP_STORAGE_API_AUTH_CLIENT_NAME. If not set, WRAPP falls back to
OMNI_STORAGE_CLIENT_NAME, then client_library, then wrapp.
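For example, to pin both the discovery endpoint and the profile name explicitly (the URL is a placeholder; `client_library` is one of the documented fallback names):

```shell
export WRAPP_STORAGE_API_DISCOVERY_URL="https://my-discovery.example.com"
export WRAPP_STORAGE_API_AUTH_CLIENT_NAME="client_library"
```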
WRAPP stores refresh tokens in the system keyring and reuses them on subsequent runs.
The keyring service name can be overridden with WRAPP_STORAGE_API_KEYRING_SERVICE
(default: wrapp.storageapi.discovery).
Authentication source precedence is:
1. manual bearer token from `--auth` or `WRAPP_AUTH`
2. client credentials from `--auth` or `WRAPP_AUTH` (`client_id` + `client_secret`)
3. refresh token from keyring
4. interactive auth flow (when `--interactive-auth` is enabled)
If keyring access fails (for example on a host without a keyring backend), WRAPP logs a warning and continues without persisted-token reuse.
When --interactive-auth is not enabled and authentication is required, WRAPP fails with guidance
to either provide --auth/WRAPP_AUTH or enable interactive authentication.
The discovery server must expose OAuth client configuration for one of the configured profile names
in /api/v1/auth-config.
Nucleus Authentication#
By default WRAPP does not allow for interactive authentication. Interactive authentication can be enabled using
the --interactive-auth command line parameter, which might open a browser window to allow for single sign-on workflows.
Successful connections are cached, so no further authentication is required when running subsequent commands.
It is also possible to provide credentials with the --auth parameter. The credentials
need to be in the form of a comma-separated triplet, consisting of:
1. The server URL. This needs to start with `omniverse://` and must match the server name as used in the URLs that target the server.
2. The username. This can be a regular username, or the special name `$omni-api-token` when the third item is an API token and not a password.
3. The password for that user, or the API token generated for a single sign-on user.
As an example, this is how to specify a wrapp command authenticating against a localhost workstation with the default username and password:
wrapp list-repo omniverse://localhost --auth omniverse://localhost,omniverse,omniverse
and this is how you would use an API token stored in an environment variable on Windows (See API Tokens in the Nucleus documentation):
wrapp list-repo omniverse://staging.nvidia.com/staging_remote/beta_packages --auth omniverse://staging.nvidia.com,$omni-api-token,%STAGING_TOKEN%
On Linux, don't forget to escape the `$`.
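A sketch of the Linux quoting pitfall: the special username `$omni-api-token` must be escaped (or single-quoted) so the shell does not expand it as a variable. The token value here is a placeholder:

```shell
STAGING_TOKEN="demo-token"   # placeholder; normally your real API token
# Escape the $ so the literal username $omni-api-token is passed through:
AUTH="omniverse://staging.nvidia.com,\$omni-api-token,${STAGING_TOKEN}"
echo "$AUTH"
```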
For Python API usage, authentication can be configured programmatically using wrapp.initialize().
Azure Authentication#
WRAPP accesses Azure Blob Storage using the azure-storage-blob library. Authentication is done through
the standard Azure SDK mechanisms. For details, please refer to the
Azure Storage authentication documentation.
It is also possible to provide credentials for Azure through the CLI --auth parameter or the
WRAPP_AUTH environment variable using the format `server_url,connection_string`:
wrapp list-repo https://myaccount.blob.core.windows.net/mycontainer/ --auth https://myaccount.blob.core.windows.net,DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=...
Azure Transfer Tuning#
The Azure backend exposes Azure SDK transfer settings for both uploads and downloads:
- `WRAPP_AZURE_UPLOAD_MAX_CONCURRENCY` (default `1`)
- `WRAPP_AZURE_UPLOAD_MAX_SINGLE_PUT_SIZE` (default `67108864` bytes, 64 MiB)
- `WRAPP_AZURE_UPLOAD_MAX_BLOCK_SIZE` (default `4194304` bytes, 4 MiB)
- `WRAPP_AZURE_DOWNLOAD_MAX_CONCURRENCY` (default `1`)
- `WRAPP_AZURE_DOWNLOAD_MAX_SINGLE_GET_SIZE` (default `33554432` bytes, 32 MiB)
- `WRAPP_AZURE_DOWNLOAD_MAX_CHUNK_GET_SIZE` (default `4194304` bytes, 4 MiB)
Values must be positive integers. Invalid or non-positive values are logged and clamped to a safe fallback.
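For example, raising upload parallelism and block size for large blobs might look like this (the values are illustrative, not recommendations), with a quick arithmetic check of how many 8 MiB blocks a 1 GiB blob would need:

```shell
export WRAPP_AZURE_UPLOAD_MAX_CONCURRENCY=8
export WRAPP_AZURE_UPLOAD_MAX_BLOCK_SIZE=8388608   # 8 MiB
# Illustrative arithmetic: blocks needed for a 1 GiB blob at this block size
BLOCKS=$(( 1073741824 / 8388608 ))
echo "$BLOCKS"   # 128
```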
S3 Authentication#
When WRAPP directly accesses S3 via boto3, authentication and other configuration is done through the standard boto3 mechanisms. For details, please refer to the boto3 documentation for credentials, and configuration via environment variables and config files.
It is also possible to provide credentials for S3 through the CLI --auth parameter or the
WRAPP_AUTH environment variable - please see the command line help for details.
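As a sketch of the standard boto3 environment-variable route (these are boto3's own variables, not WRAPP-specific; all values are placeholders):

```shell
# Standard boto3 credential environment variables (placeholder values)
export AWS_ACCESS_KEY_ID="EXAMPLEKEYID"
export AWS_SECRET_ACCESS_KEY="examplesecret"
export AWS_DEFAULT_REGION="us-east-1"
# Or select a named profile from ~/.aws/credentials instead:
# export AWS_PROFILE="my-profile"
```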
GCS Authentication#
WRAPP accesses Google Cloud Storage using the google-cloud-storage library (installed via the
[gcs] extra).
Credential Resolution Order#
WRAPP resolves GCS credentials in the following order. The first source that provides valid credentials wins; later sources are not consulted.
1. Explicit credentials via `--auth` or `WRAPP_AUTH`: The format is `server_url,credentials_file_or_json`. The second value can be a path to a service-account JSON key file or an inline JSON string. Omit the second value to fall back to later sources:
wrapp list-repo gs://my-bucket --auth gs://my-bucket,/path/to/service-account.json
2. WRAPP-specific environment variables:
- `WRAPP_GCS_CREDENTIALS_FILE`: Path to a service-account JSON key file.
- `WRAPP_GCS_PROJECT`: Google Cloud project ID (optional, used alongside credentials).
If `WRAPP_GCS_CREDENTIALS_FILE` is set, WRAPP uses that file directly and does not probe Application Default Credentials.
3. Google Application Default Credentials (ADC): If no explicit credentials are provided, WRAPP calls `google.auth.default()`, which checks the following sub-sources in order:
a. `GOOGLE_APPLICATION_CREDENTIALS` environment variable: path to any supported credential file (service-account key, external-account config, etc.).
b. gcloud CLI default credentials: created by running `gcloud auth application-default login`, stored in `~/.config/gcloud/application_default_credentials.json` (Linux/macOS) or `%APPDATA%\gcloud\application_default_credentials.json` (Windows).
c. GCE metadata server: available on Compute Engine, Cloud Run, GKE, etc. Disabled by default in WRAPP (see GCE Metadata Server and NO_GCE_CHECK below).
4. Anonymous credentials (fallback): If none of the above produce credentials, WRAPP falls back to anonymous access and logs a warning. Operations against private buckets will fail with a permission error.
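The precedence can be sketched as a first-non-empty check. This is illustrative only, not how WRAPP is implemented:

```shell
# Pick the first configured source, mirroring the documented precedence.
WRAPP_AUTH=""                                  # 1. explicit --auth/WRAPP_AUTH
WRAPP_GCS_CREDENTIALS_FILE="/path/to/sa.json"  # 2. WRAPP-specific env var
GOOGLE_APPLICATION_CREDENTIALS=""              # 3a. ADC env var
if [ -n "$WRAPP_AUTH" ]; then SRC="explicit-auth"
elif [ -n "$WRAPP_GCS_CREDENTIALS_FILE" ]; then SRC="wrapp-env"
elif [ -n "$GOOGLE_APPLICATION_CREDENTIALS" ]; then SRC="adc-env"
else SRC="anonymous"; fi
echo "$SRC"   # wrapp-env
```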
Using Lesser-Privileged Credentials#
If your default credentials (e.g. from gcloud auth application-default login) have broad
permissions and you want to run a specific WRAPP command with reduced privileges, you can
override credentials for a single invocation. Because the resolution order above is
strictly prioritised, any higher-priority source overrides lower ones:
# Override with a read-only service account key via --auth (highest priority)
wrapp list-repo gs://my-bucket --auth gs://my-bucket,/path/to/readonly-sa.json
# Or point GOOGLE_APPLICATION_CREDENTIALS to a restricted key for one command
GOOGLE_APPLICATION_CREDENTIALS=/path/to/readonly-sa.json wrapp list-repo gs://my-bucket
# Or use the WRAPP-specific env var
WRAPP_GCS_CREDENTIALS_FILE=/path/to/readonly-sa.json wrapp list-repo gs://my-bucket
GCE Metadata Server and NO_GCE_CHECK#
As the last step of Application Default Credentials discovery (step 3c), the Google auth library
attempts to contact the GCE metadata server at http://metadata.google.internal/. On machines
that are not running on Google Compute Engine this request hangs for several seconds before
timing out.
To avoid this, WRAPP sets NO_GCE_CHECK=true by default. This skips only the metadata server
probe; all other credential sources (steps 1–3b) work normally.
- Running on Compute Engine / GKE / Cloud Run? Set `NO_GCE_CHECK=false` in your environment so that metadata-based credentials are discovered.
- Already set `NO_GCE_CHECK` in your environment? WRAPP does not overwrite an existing value; your setting is preserved.
- GCS modules already imported before you use WRAPP? Set `NO_GCE_CHECK=true` before importing any GCS module to avoid the metadata server probe.
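The "does not overwrite an existing value" behavior corresponds to a default-if-unset assignment, sketched here:

```shell
NO_GCE_CHECK=false           # pretend the user already set it (e.g. on GKE)
: "${NO_GCE_CHECK:=true}"    # default-if-unset; the existing value wins
export NO_GCE_CHECK
echo "$NO_GCE_CHECK"         # false
```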
Troubleshooting#
- “No GCS credentials found … Using anonymous credentials”
No credentials were discovered at any level. The most common fix is to run:
gcloud auth application-default login
or set `WRAPP_GCS_CREDENTIALS_FILE` to point to a service-account key file.
- "Permission denied" / 403 on a private bucket
Credentials were found but lack the required IAM permissions on the target bucket or objects. Verify that the authenticated identity has at least `roles/storage.objectViewer` (for reads) or `roles/storage.objectAdmin` (for writes).
- Slow startup (~10 s delay before any GCS operation)
`NO_GCE_CHECK` may have been explicitly set to `false` while running outside of GCE. Remove the variable or set it to `true`:
export NO_GCE_CHECK=true
Additional Environment Variables#
- `WRAPP_GCS_ENDPOINT`: Override the GCS API endpoint (useful for local testing with fake-gcs-server). When set, anonymous credentials are used automatically if no other credentials are configured.
- `WRAPP_GCS_EXTRA_RETRIES`: Maximum number of retry attempts for upload/download operations (default: `3`).
- `WRAPP_GCS_RETRY_WAIT_SECONDS`: Base delay in seconds between retries with exponential back-off (default: `1.0`).
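Assuming a conventional doubling schedule (the exact schedule WRAPP uses is not specified here), the waits for three retries at the default base of 1 second would be:

```shell
BASE=1   # WRAPP_GCS_RETRY_WAIT_SECONDS default
# Assumed schedule: wait = BASE * 2^(attempt-1) for attempts 1..3
WAITS=""
for ATTEMPT in 1 2 3; do
  WAITS="$WAITS $(( BASE << (ATTEMPT - 1) ))"
done
WAITS="${WAITS# }"
echo "$WAITS"   # 1 2 4
```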
Storage Semantics and Empty Folders#
Different storage systems handle folders differently, which affects how WRAPP operations behave:
- Object storage (S3, Azure Blob Storage, Google Cloud Storage)
Folders are implicit — they exist only as prefixes in object keys. When all files in a “folder” are deleted, the folder automatically ceases to exist. There is no concept of an empty folder.
- File system-based storage (local file system, Nucleus servers)
Folders are first-class citizens — they exist independently of the files they contain. Empty folders can exist and persist after all files within them are removed.
This distinction has practical implications:
- Package creation: WRAPP only captures files, not folders. Empty folders in the source are not included in packages. If you need an empty folder in your package, add a placeholder file (e.g., `.keep`) to it.
- Uninstall and delete operations: On file system-based storage, WRAPP removes files but does not automatically remove the folders that contained them. Empty folders may remain and need to be cleaned up manually if desired.
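A sketch of the `.keep` placeholder trick, staged in a temporary directory: the otherwise-empty folder then contains a file and therefore survives packaging.

```shell
ROOT=$(mktemp -d)
mkdir -p "$ROOT/my_asset/empty_dir"
touch "$ROOT/my_asset/empty_dir/.keep"   # placeholder so the folder is captured
ls -A "$ROOT/my_asset/empty_dir"         # .keep
```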