In this article I will demonstrate how to deploy vCloud Director 9.7 with three cells. In my example I use the vCD appliance (.ova) and the internal PostgreSQL database, as shown below:

Prerequisites
Before we get started, there are some preparations you need to do yourself:
- Check the vCloud Director 9.7 release notes here
- Download the vCloud Director binaries here
- Set up an NFS share for vCloud Director
- Fill in the table below:
| Object | Comment | Example |
| --- | --- | --- |
| NTP Server | I use the AD servers | dc01.vblog.local |
| Root password | Minimum 8 characters; at least 1 uppercase, 1 lowercase, 1 numeric and 1 special character; only visible ASCII characters (including space) | VeryS3cur3! |
| NFS mount for transfer file location | Format: Target:/Path | NFS01:/vcd-nfs |
| vCloud DB password | Use a different password than the root password | S@feD@tabas3! |
| Admin User Name | User name | administrator |
| Admin Full Name | Full name | vCD Admin |
| Admin User Password | Use a different password than the root password | SecureVCDn0w! |
| Admin user email address | This address will receive warnings and alerts | your@mail.com |
| Default gateway | The default gateway for this subnet | 192.168.100.254 |
| Domain name | Use the AD domain here | vblog.local |
| Domain search path | Use the AD domain here | vblog.local |
| Domain name servers | Enter the DNS servers, comma separated | 192.168.100.1, 192.168.100.2 |
| eth0 Network IP Address | IP address for eth0 (purpose: GUI and API access) | 192.168.100.10 |
| eth0 Network Netmask | Subnet mask for eth0 | 255.255.255.0 |
| eth1 Network IP Address | IP address for eth1 (purpose: internal traffic) | 192.168.100.11 |
| eth1 Network Netmask | Subnet mask for eth1 | 255.255.255.0 |
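Since setting up the NFS share is listed as a prerequisite, here is what a minimal export could look like on a Linux NFS server. This is an illustrative sketch, not a VMware requirement: the share name and subnet come from the example table above, and the export options are common defaults that you may want to tune for your environment.

```
# /etc/exports on the NFS server (hypothetical example)
# Every vCD cell needs read/write access to the transfer share.
/vcd-nfs  192.168.100.0/24(rw,sync,no_subtree_check,no_root_squash)
```

After editing /etc/exports, run exportfs -ra on the NFS server to apply the change.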
Deploy the first vCloud Director 9.7 appliance
When you’re done with the prerequisites, you’re ready to get started with the deployment of the vCloud appliances!

Start the Deploy OVF template dialog:

VMware’s statement on small and large deployment configurations:
The large vCloud Director primary appliance size is suitable for production systems, while the small is suitable for lab or test systems. After the deployment, you can reconfigure the size of the appliance.

The appliance will now be deployed. After a few minutes, depending on your underlying storage system, you can power on the appliance.
Hint: if the appliance is not deployed, you probably took too long to enter all the settings and the deployment session expired. The solution is to be a bit quicker next time 😉
Hint 2: if the appliance is deployed successfully and you want to check the log files afterwards, you can find them here:

| Log | Location |
| --- | --- |
| Firstboot log | /opt/vmware/var/log/firstboot |
| vCD setup log | /opt/vmware/var/log/vcd/setupvcd.log |
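To skim both logs in one go over SSH, a small loop helps. This is my own convenience snippet (using the paths from the table above), not a VMware tool:

```shell
#!/bin/sh
# Print the last lines of each post-deployment log, skipping files
# that do not (yet) exist.
show_vcd_logs() {
    for f in "$@"; do
        if [ -f "$f" ]; then
            echo "== $f =="
            tail -n 20 "$f"
        fi
    done
}

show_vcd_logs /opt/vmware/var/log/firstboot /opt/vmware/var/log/vcd/setupvcd.log
```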
Open your browser and navigate to the eth0 IP address of vCloud Director.
It can take up to five minutes for the UI to become available in your browser.
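If you'd rather not refresh the browser by hand, a small polling loop can report when the UI starts answering. The helper below is a hypothetical convenience script, not part of the vCD tooling; the URL, retry count and delay are example values:

```shell
#!/bin/sh
# Hypothetical helper: poll a URL until it answers over HTTPS, or give up.
# Usage: wait_for_ui <url> [tries] [delay_seconds]
wait_for_ui() {
    url="$1"; tries="${2:-30}"; delay="${3:-10}"
    i=0
    while [ "$i" -lt "$tries" ]; do
        # -k because the appliance ships with a self-signed certificate
        if curl -k -s -o /dev/null "$url"; then
            echo "UP"
            return 0
        fi
        i=$((i + 1))
        sleep "$delay"
    done
    echo "TIMEOUT"
    return 1
}

# Example, using the eth0 address from the table earlier in this article:
# wait_for_ui "https://192.168.100.10" 30 10
```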


Add the second & third appliance
Before you start deploying additional cells in your environment, check the NFS share. After you've deployed the first cell, the transfer share should already contain data written by that cell, including the responses.properties file that the additional cells will use during their configuration.

The deployment steps for vcd-cell02 and vcd-cell03 are the same as the steps for vcd-cell01, except for the points below:
Use the same initial root password on all your cells!


When the deployment is finished, connect to the eth1 IP address of vcd-cell02 and vcd-cell03 via PuTTY (or any SSH client) and run the following commands:
service vmware-vcd stop
/opt/vmware/vcloud-director/bin/configure -r /opt/vmware/vcloud-director/data/transfer/responses.properties
The results will look like this:

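Before running the configure -r step on the extra cells, it can save a failed run to first verify that the responses file is actually readable over the NFS mount. This check is my own sketch, not part of the vCD scripts; the path is the same one passed to configure -r above:

```shell
#!/bin/sh
# Hypothetical pre-flight check: is responses.properties readable on this cell?
check_responses() {
    if [ -r "$1" ]; then
        echo "FOUND"
    else
        echo "MISSING"
    fi
}

check_responses /opt/vmware/vcloud-director/data/transfer/responses.properties
```

If this prints MISSING, verify that the NFS transfer share is mounted before you run configure -r.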
After you have completed these steps for both cells, navigate to the appliance management UI of any of the cells (https://&lt;eth0-IP&gt;:5480).

Log on to the Flash or HTML5 client via the browser and add a vCenter Server, NSX Manager and other good stuff.
If you add a vCenter Server or change settings on one of the cells, your settings are automatically replicated to the other cells.
You must use different subnets for the two Ethernet interfaces; that is the best practice. See VMware KB article 76559: https://kb.vmware.com/s/article/76559