Deploy Nucleus on Amazon EC2

[Image: cloud_aws_header.jpg]


This guide focuses on getting NVIDIA Omniverse Enterprise Nucleus configured and running on Amazon EC2. We outline the requirements for an NVIDIA Omniverse Nucleus deployment and dive deep into the technical steps for getting Nucleus successfully running within your AWS account.

Why AWS?

There are numerous reasons to deploy Nucleus on the AWS Global Cloud Infrastructure. With AWS, you can connect distributed users all over the globe. AWS' security, identity, and access management controls allow you to retain complete control over your data. And with the variety of compute instance types and storage solutions AWS offers, you can right-size your infrastructure and fine-tune performance as needed.

Services used: Amazon Route 53, Amazon EC2, Amazon EBS, AWS Certificate Manager, Amazon VPC

Solution Overview

The following steps outline a solution that implements the basic components of a Nucleus deployment. To handle communication from end users, an Amazon Elastic Compute Cloud (Amazon EC2) instance configured as an NGINX reverse proxy is deployed in a public subnet. The reverse proxy accepts TLS traffic using a TLS certificate from AWS Certificate Manager. Typically, this component would be an Elastic Load Balancer (ELB), but Nucleus requires path rewrites in the request, which are not currently supported by an ELB.

The Nucleus Server is an Amazon EC2 instance deployed to a private subnet that only accepts traffic from the reverse proxy subnet. The Nucleus Server is running the Nucleus Enterprise Stack, which is deployed as a Docker Compose Stack. Thus, Nucleus will need a NAT Gateway and Internet Gateway to communicate with the NVIDIA Container Registry.

Prerequisites

To follow along, you will need an AWS account with permissions to create the resources described, the AWS CLI and AWS CDK installed and configured on your local machine, and access to the Enterprise Nucleus Server archive provided with an NVIDIA Omniverse Enterprise subscription.

Deploying Omniverse Nucleus on Amazon EC2

Register a Domain and Create a Hosted Zone with Route 53

First, you will need a Hosted Zone and a Domain for the Nucleus Server. Amazon Route 53 allows you to register a domain, such as my-omniverse.com, and create a subdomain, such as nucleus.my-omniverse.com, for the Nucleus Server. Registering a domain involves communication with the domain registrar, so it is best to do this step manually and then reference the Hosted Zone ID, created by Route 53, in the subsequent configuration steps.

Visit this page for more information on registering a domain and creating a hosted zone: Registering a new domain.

Configure the CDK Stack

Next, you will configure a CDK stack with basic resources for the Nucleus deployment.

  1. Open a terminal and create a project folder for your CDK app. The name of the folder will become the application name. For this procedure nucleus-app is the name used.

  2. Change directory into the folder created in Step 1 and initialize your CDK app with the following command:

cdk init sample-app --language=typescript

Now your project structure should be the following:

nucleus-app/
    bin/
        nucleus-app.ts
    lib/
        nucleus-app-stack.ts
    ... additional config files ...

nucleus-app.ts is the main entry point for the app and the file that subsequent CDK commands will reference. When viewing this file, you can see it imports lib/nucleus-app-stack.ts, which is where you’ll put custom code for your deployment.

  3. Run a basic “Hello World” test.

Deploy the starter CDK stack with cdk deploy. This will produce a basic stack and ensure your CLI and CDK are properly configured.

  4. Set default account and Region environment values.

Open bin/nucleus-app.ts and set the default account and Region environment (env) values. The contents of the file should look like the following:

#!/usr/bin/env node
import * as cdk from 'aws-cdk-lib';
import { NucleusAppStack } from '../lib/nucleus-app-stack';

const app = new cdk.App();
new NucleusAppStack(app, 'NucleusAppStack', {
   env: { account: process.env.CDK_DEFAULT_ACCOUNT,
   region: process.env.CDK_DEFAULT_REGION },
});
  5. Remove sample resources.

Open lib/nucleus-app-stack.ts and remove the sample Amazon Simple Queue Service (SQS) and Amazon Simple Notification Service (SNS) resources. Your file should now look like the following:

import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';

export class NucleusAppStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // custom resources go here
  }
}
  6. Add the below CDK libraries, as these are required in subsequent steps.

import * as route53 from 'aws-cdk-lib/aws-route53'
import { Tags, CfnOutput } from 'aws-cdk-lib'
import * as s3 from 'aws-cdk-lib/aws-s3'
import * as ec2 from 'aws-cdk-lib/aws-ec2'
import * as iam from 'aws-cdk-lib/aws-iam'
import * as acm from 'aws-cdk-lib/aws-certificatemanager'

Define Stack Resources

Next, you will define custom infrastructure resources required for the deployment. Code samples in this section need to be added inside the constructor of the NucleusAppStack class.

  1. Create an S3 bucket for artifacts.

First, create a simple S3 bucket that will be used to transfer artifacts from our local client to EC2 instances. As per security best practices, enable encryption, enforce SSL, and block public access. Then create an AWS Identity and Access Management (IAM) policy that allows listing the bucket and getting objects from it. This policy will be attached to our EC2 instance profile role.

const artifactsBucket = new s3.Bucket(this, 'artifactsBucket', {
    encryption: s3.BucketEncryption.S3_MANAGED,
    enforceSSL: true,
    blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,
});

const getBucketObjectsPolicy = new iam.PolicyDocument({
  statements: [
    new iam.PolicyStatement({
      actions: [
        "s3:GetObject",
        "s3:ListBucket",
      ],
      resources: [
        `${artifactsBucket.bucketArn}`,
        `${artifactsBucket.bucketArn}/*`
      ]
    })
  ]
});
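
For reference, the policy document above resolves to plain JSON at deploy time. Below is a minimal Python sketch of the equivalent document; the bucket ARN is a placeholder, since CDK generates the real one:

```python
import json

# Placeholder ARN for illustration; CDK generates the real bucket ARN at deploy time.
bucket_arn = "arn:aws:s3:::example-artifacts-bucket"

get_bucket_objects_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        # The bare bucket ARN covers s3:ListBucket; the /* form covers s3:GetObject on objects.
        "Resource": [bucket_arn, f"{bucket_arn}/*"],
    }],
}

print(json.dumps(get_bucket_objects_policy, indent=2))
```

Note that both resource forms are needed: s3:ListBucket applies to the bucket itself, while s3:GetObject applies to the objects within it.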
  2. Add an Amazon Virtual Private Cloud (VPC) configuration.

Specify the private subnet that contains the NAT gateway with a route to the internet. Then provision two security groups: one for the proxy server and one for the Nucleus server.

const eip = new ec2.CfnEIP(this, 'natGatewayElasticIP', {
  domain: 'vpc'
});

const vpc = new ec2.Vpc(this, 'nucleusVpc', {
  cidr: '10.0.0.0/20',
  natGateways: 1,
  subnetConfiguration: [{
    name: 'publicSubnetNatGateway',
    subnetType: ec2.SubnetType.PUBLIC,
    cidrMask: 24
  }, {
     name: 'privateSubnet',
     subnetType: ec2.SubnetType.PRIVATE_WITH_NAT,
     cidrMask: 24
  }],
  natGatewayProvider: ec2.NatProvider.gateway({
      eipAllocationIds: [eip.attrAllocationId]
  }),
});

const proxySG = new ec2.SecurityGroup(this, 'reverseProxySG', {
  vpc: vpc,
  allowAllOutbound: true,
  description: "Reverse Proxy Security Group"
});


const nucleusSG = new ec2.SecurityGroup(this, 'nucleusSG', {
  vpc: vpc,
  allowAllOutbound: true,
  description: "Nucleus Server Security Group"
});
  3. Add security group ingress rules.

Configure the proxy and Nucleus security groups to allow traffic on the required ports. The Nucleus security group only allows traffic from the proxy security group, while the proxy security group allows traffic from a specific CIDR range. Set this to a range you will use to connect to the server. For example, take the IP address of the client machine you plan to connect from and append a network mask to form the CIDR range. For this solution, the recommended network mask is /32.

const allowedCidrRange = 'ip-address/network-mask'

proxySG.addIngressRule(ec2.Peer.ipv4(allowedCidrRange), ec2.Port.tcp(443), "HTTPS Traffic");
proxySG.addIngressRule(ec2.Peer.ipv4(allowedCidrRange), ec2.Port.tcp(3180), "Auth login");
proxySG.addIngressRule(ec2.Peer.ipv4(allowedCidrRange), ec2.Port.tcp(3100), "Auth Service");
proxySG.addIngressRule(ec2.Peer.ipv4(allowedCidrRange), ec2.Port.tcp(3333), "Discovery Service");
proxySG.addIngressRule(ec2.Peer.ipv4(allowedCidrRange), ec2.Port.tcp(3030), "LFT");
proxySG.addIngressRule(ec2.Peer.ipv4(allowedCidrRange), ec2.Port.tcp(3019), "Core API");
proxySG.addIngressRule(ec2.Peer.ipv4(allowedCidrRange), ec2.Port.tcp(3020), "Tagging Service");
proxySG.addIngressRule(ec2.Peer.ipv4(allowedCidrRange), ec2.Port.tcp(3400), "Search Service");

const proxySGId = proxySG.securityGroupId

nucleusSG.addIngressRule(ec2.Peer.securityGroupId(proxySGId), ec2.Port.tcp(8080), "HTTP Traffic");
nucleusSG.addIngressRule(ec2.Peer.securityGroupId(proxySGId), ec2.Port.tcp(3180), "Auth login");
nucleusSG.addIngressRule(ec2.Peer.securityGroupId(proxySGId), ec2.Port.tcp(3100), "Auth Service");
nucleusSG.addIngressRule(ec2.Peer.securityGroupId(proxySGId), ec2.Port.tcp(3333), "Discovery Service");
nucleusSG.addIngressRule(ec2.Peer.securityGroupId(proxySGId), ec2.Port.tcp(3030), "LFT");
nucleusSG.addIngressRule(ec2.Peer.securityGroupId(proxySGId), ec2.Port.tcp(3019), "Core API");
nucleusSG.addIngressRule(ec2.Peer.securityGroupId(proxySGId), ec2.Port.tcp(3020), "Tagging Service");
nucleusSG.addIngressRule(ec2.Peer.securityGroupId(proxySGId), ec2.Port.tcp(3400), "Search Service");
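
Before deploying, it can help to sanity-check the CIDR range you chose; a /32 mask admits exactly one address. A small sketch using Python's standard ipaddress module (the IP below is a documentation-range placeholder):

```python
import ipaddress

# Documentation-range IP standing in for your client machine's public address.
allowed_cidr_range = "203.0.113.7/32"

# ip_network raises ValueError if the string is not a valid network.
network = ipaddress.ip_network(allowed_cidr_range)
print(network.num_addresses)  # a /32 network contains exactly one address
```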
  4. Add a TLS certificate and set the domain from Step 1 for validation.

The root-domain variable must be set to the domain registered with the Route 53 hosted zone from Step 1.

const rootDomain = 'root-domain';
const fullDomain = 'nucleus.' + rootDomain;
const hostedZone = route53.HostedZone.fromLookup(this, 'PublicHostedZone', {
  domainName: rootDomain
});

const certificate = new acm.Certificate(this, 'PublicCertificate', {
  domainName: rootDomain,
  subjectAlternativeNames: [`*.${rootDomain}`],
  validation: acm.CertificateValidation.fromDns(hostedZone),
});

Note

Currently there is no additional management of this CNAME record, meaning that when you no longer require it, you'll have to remove it manually from your Route 53 Hosted Zone.

  5. Add Reverse Proxy Resources.

For the reverse proxy, configure it with Nitro Enclaves enabled. Nitro Enclaves provides isolated compute environments to protect and securely process highly sensitive data; in this case, that data is our TLS certificate. In addition, Nitro Enclaves integrates with AWS Certificate Manager, which means ACM can automatically handle rotation of the certificate. For more information, see the AWS Nitro Enclaves User Guide.

Starting from the AWS Certificate Manager for Nitro Enclaves AMI, create a c5.xlarge instance with 32 GB of storage. The c5.xlarge was chosen as one of the smallest available instance types supported by the Nitro Enclaves AMI. Configure a basic instance role with the AmazonSSMManagedInstanceCore policy. This allows you to connect to the instance with SSM and avoid opening the instance to SSH traffic over the internet.

Finally, attach a “dummy” IAM policy to the reverse proxy. This is an empty policy which will get updated with the configuration scripts.

If your Region is not in the list below, review the AMI listing on the AWS Marketplace or the AWS Documentation for finding the correct AMI ID.

// AWS Certificate Manager for Nitro Enclaves AMI
const proxyServerAMI = new ec2.GenericLinuxImage({
  'us-west-1': 'ami-0213075968e811ea7',    //california
  'us-west-2': 'ami-01c4415fd6c2f0927',    //oregon
  'us-east-1': 'ami-00d96e5ee00daa484',    //virginia
  'us-east-2': 'ami-020ea706ac260de21',    //ohio
  'ca-central-1': 'ami-096dd1150b96b6125', //canada
  'eu-central-1': 'ami-06a2b19f6b97762cb', //frankfurt
  'eu-west-1': 'ami-069e205c9dea19322',    //ireland
  'eu-west-2': 'ami-069b79a2d7d0d9408'     //london
})

const proxyInstanceRole = new iam.Role(this, 'proxyInstanceRole', {
  assumedBy: new iam.ServicePrincipal('ec2.amazonaws.com'),
  description: 'EC2 Instance Role',
  managedPolicies: [
      iam.ManagedPolicy.fromAwsManagedPolicyName('AmazonSSMManagedInstanceCore')
  ],
  inlinePolicies: {
    getBucketObjectsPolicy: getBucketObjectsPolicy
  }
});

const cfnInstanceProfile = new iam.CfnInstanceProfile(this, 'proxyInstanceProfile', {
  roles: [proxyInstanceRole.roleName],
});

// using CfnInstance because it exposes enclaveOptions
const cfnInstance = new ec2.CfnInstance(this, 'reverseProxyServer',  {
  blockDeviceMappings: [{
      deviceName: '/dev/xvda',
      ebs: {
        encrypted: true,
        volumeSize: 32,
      }
    }],
  enclaveOptions: {
      enabled: true,
  },
  imageId: proxyServerAMI.getImage(this).imageId,
  instanceType: 'c5.xlarge',
  securityGroupIds: [proxySG.securityGroupId],
  subnetId: vpc.selectSubnets({subnetGroupName: 'publicSubnetNatGateway'}).subnetIds[0],
  tags: [{
      key: 'Name',
      value: 'Nucleus-ReverseProxy',
    }],
  iamInstanceProfile: cfnInstanceProfile.ref
})

new route53.CnameRecord(this, `CnameApiRecord`, {
  recordName: fullDomain,
  zone: hostedZone,
  domainName: cfnInstance.attrPublicDnsName,
});


const revProxyCertAssociationPolicy = new iam.ManagedPolicy(this, 'revProxyCertAssociationPolicy', {
    statements: [
    new iam.PolicyStatement({
      actions: ["s3:GetObject"], resources: ["*"]
    })
  ]
})
proxyInstanceRole.addManagedPolicy(revProxyCertAssociationPolicy)
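
If you ever need the same region-to-AMI lookup outside of CDK (for example, in a helper script), it can be expressed directly. The sketch below copies a subset of the IDs from the proxy AMI table above:

```python
# Partial region-to-AMI map for the Nitro Enclaves proxy AMI (IDs from the table above).
PROXY_AMIS = {
    "us-west-1": "ami-0213075968e811ea7",
    "us-west-2": "ami-01c4415fd6c2f0927",
    "us-east-1": "ami-00d96e5ee00daa484",
    "us-east-2": "ami-020ea706ac260de21",
}

def ami_for_region(region: str) -> str:
    """Return the proxy AMI ID for a region, or fail with a pointer to the Marketplace."""
    try:
        return PROXY_AMIS[region]
    except KeyError:
        raise ValueError(f"No AMI listed for {region}; check the AWS Marketplace listing.")

print(ami_for_region("us-west-2"))
```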
  6. Add Nucleus Server Resources.

Next, configure the Nucleus Server. Start with the Ubuntu 20.04 LTS AMI and c5.4xlarge as the instance type. C5 instances are optimized for compute-intensive workloads and deliver cost-effective performance at a low price-to-compute ratio. The instance has 16 vCPUs and 32 GB of memory. An Amazon Elastic Block Store (EBS) volume with 512 GB of storage is attached to the instance. These specs were chosen to be sufficiently large for a proof of concept.

The instance user data script is configured to install Docker, Docker Compose, and the AWS CLI.

const nucleusServerAMI = new ec2.GenericLinuxImage({
  'us-west-1': 'ami-0dc5e9ff792ec08e3',    //california
  'us-west-2': 'ami-0ee8244746ec5d6d4',    //oregon
  'us-east-1': 'ami-09d56f8956ab235b3',    //virginia
  'us-east-2': 'ami-0aeb7c931a5a61206',    //ohio
  'ca-central-1': 'ami-0fb99f22ad0184043', //canada
  'eu-central-1': 'ami-015c25ad8763b2f11', //frankfurt
  'eu-west-1': 'ami-00c90dbdc12232b58',    //ireland
  'eu-west-2': 'ami-0a244485e2e4ffd03'     //london
})

const nucleusEbsVolume: ec2.BlockDevice = {
  deviceName: '/dev/sda1',
  volume: ec2.BlockDeviceVolume.ebs(512, {
      encrypted: true,
  })
};

const nucleusInstanceRole = new iam.Role(this, 'nucleusInstanceRole', {
  assumedBy: new iam.ServicePrincipal('ec2.amazonaws.com'),
  description: 'EC2 Instance Role',
  managedPolicies: [
      iam.ManagedPolicy.fromAwsManagedPolicyName('AmazonSSMManagedInstanceCore')
  ],
  inlinePolicies: {
    getBucketObjectsPolicy: getBucketObjectsPolicy
  }
});

const nucleusUserData = `#!/bin/bash
sudo apt-get update

# docker
sudo apt-get -y install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get -y update
sudo apt-get -y install docker-ce docker-ce-cli containerd.io

# docker compose
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

# aws cli
sudo curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
sudo apt-get -y install unzip
sudo unzip awscliv2.zip
sudo ./aws/install
sudo rm awscliv2.zip
sudo rm -rf ./aws
`

const nucleusServerInstance = new ec2.Instance(this, "NucleusServer", {
  instanceType: new ec2.InstanceType("c5.4xlarge"),
  machineImage: nucleusServerAMI,
  blockDevices: [nucleusEbsVolume],
  vpc: vpc,
  role: nucleusInstanceRole,
  securityGroup:  nucleusSG,
  userData: ec2.UserData.custom(nucleusUserData),
  vpcSubnets: vpc.selectSubnets({subnetGroupName: 'privateSubnet'}),
  detailedMonitoring: true,
});

Tags.of(nucleusServerInstance).add("Name", "Nucleus-Server");
  7. Configure stack outputs, adding output values so they can easily be referenced later.

    new CfnOutput(this, 'region', {
        value: this.region
    }).overrideLogicalId('region');
    
    new CfnOutput(this, 'artifactsBucketName', {
        value: artifactsBucket.bucketName
    }).overrideLogicalId('artifactsBucketName');
    
    new CfnOutput(this, 'tlsCertifcateArn', {
        value: certificate.certificateArn
    }).overrideLogicalId('tlsCertifcateArn');
    
    new CfnOutput(this, 'proxyInstanceRoleArn', {
        value: proxyInstanceRole.roleArn
    }).overrideLogicalId('proxyInstanceRoleArn');
    
    new CfnOutput(this, 'proxyCertAssociationPolicyArn', {
        value: revProxyCertAssociationPolicy.managedPolicyArn
    }).overrideLogicalId('proxyCertAssociationPolicyArn');
    
    new CfnOutput(this, 'nucleusServerPrivateDnsName', {
        value: nucleusServerInstance.instancePrivateDnsName
    }).overrideLogicalId('nucleusServerPrivateDnsName');
    
    new CfnOutput(this, 'domain', {
        value: fullDomain
    }).overrideLogicalId('domain');
    
  8. Deploy the stack.

cdk deploy

Once this completes, you will have the basic resources required; next, you will configure them. If you encounter the following CDK deploy error:

[Error at /NucleusAppStack] Found zones: [] for dns:DOMAIN, privateZone:undefined,vpcId:undefined, but wanted exactly 1 zone

Check that you have specified the correct domain and that your hosted zone exists in the Route 53 console under Hosted zones.

  9. Make note of the stack output values. You'll use them later:

    NucleusAppStack.artifactsBucketName = nucleusappstack-artifactsbucket…
    NucleusAppStack.domain = nucleus.my-omniverse.com
    NucleusAppStack.nucleusServerPrivateDnsName = ip-...us-west-2.compute.internal
    NucleusAppStack.proxyCertAssociationPolicyArn = arn:aws:iam::...:policy/...
    NucleusAppStack.proxyInstanceRoleArn = arn:aws:iam::...:role/...
    NucleusAppStack.region = ...
    NucleusAppStack.tlsCertifcateArn = arn:aws:acm:...:...:certificate/...
    
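
Rather than copying these by hand, cdk deploy can also write them to disk with the --outputs-file flag (cdk deploy --outputs-file outputs.json), and a short helper can read them back. The values below are placeholders mirroring the outputs above:

```python
import json

# Placeholder sample of what `cdk deploy --outputs-file outputs.json` writes:
# a JSON object keyed by stack name, then by output logical ID.
outputs_json = '''
{
  "NucleusAppStack": {
    "region": "us-west-2",
    "domain": "nucleus.my-omniverse.com",
    "artifactsBucketName": "nucleusappstack-artifactsbucket-example"
  }
}
'''

outputs = json.loads(outputs_json)["NucleusAppStack"]
print(outputs["domain"])
```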

Configure the Reverse Proxy Server

  1. Associate Enclave Certificate with Proxy instance IAM Role.

The first step on the reverse proxy is to associate your certificate with the IAM role that the Nitro Enclave uses. Replace tls-certificate-arn, proxy-instance-role-arn, proxy-cert-association-policy-arn, and region in the script below with the stack output values from above.

The following script was written in Python 3.9. If you have issues with conflicting Python versions, it's recommended that you set up a local virtualenv. For more information, see the Python Tutorial: Virtual Environments and Packages.

import boto3
import json

CERT_ARN = 'tls-certificate-arn'
ROLE_ARN = 'proxy-instance-role-arn'
ROLE_POLICY_ARN = 'proxy-cert-association-policy-arn'
REGION = 'region'

ec2_client = boto3.client('ec2')
iam_client = boto3.client('iam')
iam_rsrc = boto3.resource('iam')

response = ec2_client.associate_enclave_certificate_iam_role(
    CertificateArn=CERT_ARN,
    RoleArn=ROLE_ARN
)

print(response)

bucket = response['CertificateS3BucketName']
s3object = response['CertificateS3ObjectKey']
kmskeyid = response['EncryptionKmsKeyId']

# update policy with association resources
policy = iam_rsrc.Policy(ROLE_POLICY_ARN)
policyJson = policy.default_version.document
cur_version = policy.default_version_id

policyJson['Statement'] = [
{
    "Effect": "Allow",
    "Action": ["s3:GetObject"],
    "Resource": [f"arn:aws:s3:::{bucket}/*"]
},{
    "Sid": "VisualEditor0",
    "Effect": "Allow",
    "Action": ["kms:Decrypt"],
    "Resource": f"arn:aws:kms:{REGION}:*:key/{kmskeyid}"
},{
    "Effect": "Allow",
    "Action": "iam:GetRole",
    "Resource": ROLE_ARN
}]

response = iam_client.create_policy_version(
    PolicyArn = ROLE_POLICY_ARN,
    PolicyDocument= json.dumps(policyJson),
    SetAsDefault= True
)

print(response)

response = iam_client.delete_policy_version(
    PolicyArn = ROLE_POLICY_ARN,
    VersionId = cur_version
)

print(response)

This script associates an IAM role with an AWS Certificate Manager (ACM) certificate. This enables the certificate to be used by the ACM for Nitro Enclaves application inside an enclave. For more information, see Certificate Manager for Nitro Enclaves in the AWS Nitro Enclaves User Guide. The script then updates the IAM role policy with permissions to get its own role, download the certificate, and decrypt it.

Save the script to a file and run it from the terminal:

python ./associate_enclave_cert.py

  2. Configure NGINX (nginx.conf).

NVIDIA provides a sample NGINX config for the Nucleus deployment. It is packaged within their provided archive file. At the time of this writing, the latest available is: nucleus-stack-2022.1.0+tag-2022.1.0.gitlab.3983146.613004ac.tar.gz

Open the archive and look for: ssl/nginx.ingress.router.conf

This file needs to be updated and then placed at /etc/nginx/nginx.conf on the reverse proxy instance.

First, you need to update the config as outlined in the AWS Certificate Manager for Nitro Enclaves guide: Nitro Enclaves application: AWS Certificate Manager for Nitro Enclaves.

At the top of the file, in the main context add the following:

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

ssl_engine pkcs11;

After the line, # Configure your SSL options as required by your security practices, add the below snippet:

ssl_protocols TLSv1.2;
ssl_session_cache shared:SSL:1m;
ssl_session_timeout 10m;
ssl_prefer_server_ciphers on;

# Set this to the stanza path configured in /etc/nitro_enclaves/acm.yaml
include "/etc/pki/nginx/nginx-acm.conf";

Next, update the config file with the Nucleus Server private DNS address and the fully qualified domain for your server. Replace instances of my-ssl-nucleus.my-company.com with your domain. Then, replace instances of BASE_STACK_IP_OR_HOST with the nucleusServerPrivateDnsName from the stack outputs above.
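
These substitutions can also be scripted instead of edited by hand. Below is a minimal sketch; the config fragment and hostnames are illustrative placeholders, not the actual NVIDIA-provided file:

```python
# Illustrative fragment only; the real file is ssl/nginx.ingress.router.conf
# from the NVIDIA archive. Hostnames below are placeholders for your own values.
conf = (
    "server_name my-ssl-nucleus.my-company.com;\n"
    "# upstream host placeholder used throughout the sample config\n"
    "proxy_pass_host BASE_STACK_IP_OR_HOST;\n"
)

conf = conf.replace("my-ssl-nucleus.my-company.com", "nucleus.my-omniverse.com")
conf = conf.replace("BASE_STACK_IP_OR_HOST", "ip-10-0-1-23.us-west-2.compute.internal")
print(conf)
```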

  3. Copy the .conf file to S3.

aws s3 cp ssl/nginx.ingress.router.conf  s3://artifactBucketName/nginx.conf

  4. Connect to the Proxy Instance. From your web browser, navigate to the EC2 Dashboard in the AWS Console, select the Nucleus-ReverseProxy instance, and click the Connect button.

Select the Session Manager tab, then click the Connect button.

  5. In the terminal, copy the nginx.conf file from S3 to /etc/nginx/.

    sudo aws s3 cp s3://artifactBucketName/nginx.conf ./nginx.conf
    sudo mv ./nginx.conf /etc/nginx/nginx.conf
    
  6. While still in the Proxy Server terminal, rename the sample ACM for Nitro Enclaves configuration file from /etc/nitro_enclaves/acm.example.yaml to /etc/nitro_enclaves/acm.yaml.

  7. Update the acm.yaml certificate_arn value.

Using your preferred text editor, open /etc/nitro_enclaves/acm.yaml. In the ACM section, update certificate_arn with the ARN of the certificate from our stack. This is the tls-certificate-arn from the stack outputs above. Save and close the file.

  8. Start NGINX.

    sudo systemctl start nitro-enclaves-acm.service
    sudo systemctl enable nitro-enclaves-acm
    
  9. Confirm the server is accepting TLS requests to your domain.

    curl https://nucleus.my-omniverse.com

You’ll see a generic HTML template as output.

Configure Nucleus Server

The following is derived from the Nucleus Server deployment documentation. Additional information is available by visiting: Enterprise Nucleus Server Quick Start Tips.

  1. From your local computer using the AWS CLI, copy the Nucleus Stack archive to S3.

    aws s3 cp  ./nucleus-stack-2022.1.0+tag-2022.1.0.gitlab.3983146.613004ac.tar.gz \
    s3://artifactBucketName/nucleus-stack-2022.1.0+tag-2022.1.0.gitlab.3983146.613004ac.tar.gz
    
  2. Connect to the Nucleus Server with EC2 Session Manager. With your web browser, navigate to the EC2 Dashboard in the AWS Console, select the Nucleus-Server instances, click the Connect button, and then click the Connect button again on the Session Manager tab.

  3. In the Nucleus-Server terminal, change directory to the home directory, and then copy the Nucleus Stack from S3.

    cd ~
    aws s3 cp s3://artifactBucketName/nucleus-stack-2022.1.0+tag-2022.1.0.gitlab.3983146.613004ac.tar.gz \
    ./nucleus-stack-2022.1.0+tag-2022.1.0.gitlab.3983146.613004ac.tar.gz
    
  4. Unpack the archive to an appropriate directory, then cd into that directory.

    omniverse_root=/opt/ove
    sudo mkdir -p $omniverse_root
    sudo tar xzvf  nucleus-stack-2022.1.0+tag-2022.1.0.gitlab.3983146.613004ac.tar.gz -C  $omniverse_root --strip-components=1
    cd ${omniverse_root}/base_stack
    
  5. Update nucleus-stack.env.

With your preferred text editor, review the nucleus-stack.env file. It is recommended that you review this file in its entirety. You will use this file to confirm that you accept the NVIDIA Omniverse End User License Agreement.

Then update the following nucleus-stack.env variables as needed:

ACCEPT_EULA             Review the notes in the .env file for more information
SECURITY_REVIEWED       Review the notes in the .env file for more information
SERVER_IP_OR_HOST       Set to the Nucleus Server private DNS hostname
SSL_INGRESS_HOST        Set as fully qualified domain name (e.g., nucleus.my-omniverse.com)
MASTER_PASSWORD         Omniverse master user password
SERVICE_PASSWORD        Omniverse service user password
INSTANCE_NAME           Omniverse instance name
DATA_ROOT               Omniverse data root directory
WEB_PORT                NVIDIA recommends that you set this to 8080; this is also what nginx.conf is configured to expect
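
If you script your deployment, these edits can be applied programmatically as well. A minimal sketch of rewriting KEY=value lines in a .env-style file follows; the keys and values shown are illustrative, not the full nucleus-stack.env:

```python
def set_env_vars(text: str, updates: dict) -> str:
    """Rewrite KEY=value lines in .env-style text, leaving other lines intact."""
    result = []
    for line in text.splitlines():
        key = line.split("=", 1)[0].strip()
        if "=" in line and not line.lstrip().startswith("#") and key in updates:
            result.append(f"{key}={updates[key]}")
        else:
            result.append(line)
    return "\n".join(result)

# Illustrative fragment of nucleus-stack.env (not the full file).
env_text = "ACCEPT_EULA=0\nWEB_PORT=80\nINSTANCE_NAME=my_nucleus"
updated = set_env_vars(env_text, {"ACCEPT_EULA": "1", "WEB_PORT": "8080"})
print(updated)
```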
  6. Generate secrets required for authentication.

The following is required because you are not using SSO integration at this time. See the security notes in nucleus-stack.env for more information.

sudo chmod +x ./generate-sample-insecure-secrets.sh
sudo ./generate-sample-insecure-secrets.sh
  7. Pull the Nucleus Docker images.

    sudo docker-compose --env-file ${omniverse_root}/base_stack/nucleus-stack.env  -f ${omniverse_root}/base_stack/nucleus-stack-ssl.yml pull

  8. Start the Nucleus stack.

    sudo docker-compose --env-file ${omniverse_root}/base_stack/nucleus-stack.env  -f ${omniverse_root}/base_stack/nucleus-stack-ssl.yml up -d

Usage

On your local machine, test a connection to your Nucleus Server by pointing your web browser to the domain you specified in the .env file. You should be greeted with the following login dialog:

[Image: Nucleus login dialog (cloud_aws_login.png)]

Here you can use the Master or Service Username and Password configured in the nucleus-stack.env, or click Create Account. You’ll then be presented with a Navigator view of your Nucleus Server content.

[Image: Nucleus Navigator view (cloud_aws_nucleus_server.png)]

Cleanup

  1. Disassociate the Nitro Enclave certificate by running the disassociate_enclave_cert.py script.

    import boto3

    CERT_ARN = "tlsCertifcateArn"
    ROLE_ARN = "proxyInstanceRoleArn"

    ec2_client = boto3.client('ec2')

    response = ec2_client.disassociate_enclave_certificate_iam_role(
        CertificateArn=CERT_ARN,
        RoleArn=ROLE_ARN
    )
    print(response)
    
  2. Delete the stack by running cdk destroy from the nucleus-app application folder.

Additional Reading

AWS Services

  • Amazon EC2, secure and resizable compute capacity for virtually any workload

  • Amazon Route 53, a reliable and cost-effective way to route end users to Internet applications

  • Amazon S3, object storage built to retrieve any amount of data from anywhere

  • Amazon EBS, easy to use, high performance block storage at any scale

  • AWS Certificate Manager for Nitro Enclaves, public and private TLS certificates with web servers running on Amazon EC2 instances