Part 4: Recovering a VM from replicated backup

Anirudha | Sun, 02/09/2020 - 16:28

In the previous post, we replicated data from AWS S3 to an on-prem Objects cluster. In this post, we will simulate some data loss and recover the lost data from the replicated backup.

Data Loss and Recovery from Backup:

  • Delete the data from the AWS s3://backup-vpx-vms bucket, so that nothing remains on AWS and we can be sure the restore pulls data from the Objects cluster.

         (Note: this is for test purposes only; do not try this on actual data.)


In the screenshot, one pane shows that the AWS s3://backup-vpx-vms bucket is empty (all the data has been deleted), while the other pane is connected to the Objects cluster, which still holds all the replicated data.
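The same check can also be done from the command line. Below is a minimal sketch using the AWS CLI; the Objects endpoint URL and the profile name are placeholders and will differ in your environment.

# Empty the bucket on AWS S3 (test data only!)
aws s3 rm s3://backup-vpx-vms --recursive

# Confirm the AWS-side bucket is now empty
aws s3 ls s3://backup-vpx-vms

# Confirm the replicated copy is still present on the on-prem Objects cluster
# (point the CLI at the Objects endpoint, using credentials created for that cluster)
aws s3 ls s3://backup-vpx-vms --endpoint-url https://objects.example.local --profile objects-cluster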

  • Delete scalcia-project-vpx from the source cluster where the VM was residing. In my case the source cluster is a Nutanix AHV cluster, so I logged into one of the CVMs and deleted the VM using acli.

nutanix@NTNX-18SM52390002-A-CVM::~$ acli


<acropolis> vm.get scalcia-project-vpx
scalcia-project-vpx {
  config {
    agent_vm: False
    allow_live_migrate: True
    disk_list {
      addr {
        bus: "ide"
        index: 0
      }
      cdrom: True
      device_uuid: "47615a0f-efc4-4756-bdb2-baf826d22e9c"
      empty: True
    }
    disk_list {
      addr {
        bus: "scsi"
        index: 0
      }
      container_id: 15
      container_uuid: "1db1dc64-1dd1-4d98-9efb-6d3abf1bb04e"
      device_uuid: "39db4b09-e7c1-48c8-91dc-bb3edc7fcca0"
      naa_id: "naa.6506b8d08cb905caeab0e43bd2754f4c"
      source_vmdisk_uuid: "e0c59c57-c07c-4c6d-9f7c-585234f5791f"
      vmdisk_size: 21474836480
      vmdisk_uuid: "0dc73e11-ffb3-46ef-acc7-7fb967d54ac9"
    }
    hwclock_timezone: "America/Los_Angeles"
    machine_type: "pc"
    memory_mb: 1024
    name: "scalcia-project-vpx"
    nic_list {
      ip_address: "192.168.0.12"
      mac_addr: "50:6b:8d:d0:76:d3"
      network_name: "vlan.scalcia"
      network_type: "kNativeNetwork"
      network_uuid: "ebf263d3-3588-4f5b-b91a-0803fcaf034c"
      uuid: "9feb56eb-e170-42c3-9efd-4403e95f2f0a"
    }
    num_cores_per_vcpu: 1
    num_threads_per_core: 1
    num_vcpus: 1
    num_vnuma_nodes: 0
    source_vm_uuid: "e588ae1b-9825-45cc-9843-7c2fb33453ed"
    vga_console: True
    vm_type: "kGuestVM"
  }
  host_name: "scalcia-ahv-1"
  host_uuid: "3da538df-7647-4e44-a7dd-008bb5551296"
  logical_timestamp: 2
  state: "kOn"
  uuid: "59e089c6-f2fa-4dc8-bfc6-2fc630fbe985"
}
  • vm.get returns all the info about the VM. Let's delete it now.

<acropolis> vm.delete scalcia-project-vpx
Delete 1 VMs? (yes/no) yes
scalcia-project-vpx: complete

  • Verify that the VM is permanently gone:

<acropolis> vm.get scalcia-project-vpx
Unknown name: scalcia-project-vpx
<acropolis>

 

Initiate Restore Workflow from Commvault:

  • From the Commvault CommCell Console -> navigate to the Subclient -> click “Browse and Restore”.


  • Select the VM and then an available backup -> click the “Recover All Selected” button.


  • Proceed through the wizard for an in-place restore of the VM.


  • Check the restore progress in the CommCell Job Controller tab:


 

On completion, let's check whether the VM has been recovered:

  • SSH to one of the CVMs and enter the acli prompt.

<acropolis> vm.get scalcia-project-vpx
scalcia-project-vpx {
  config {
    allow_live_migrate: True
    disk_list {
      addr {
        bus: "ide"
        index: 0
      }
      cdrom: True
      device_uuid: "05f03bd2-84ac-407c-83a0-a8743f05be4d"
      empty: True
    }
    disk_list {
      addr {
        bus: "scsi"
        index: 0
      }
      container_id: 15
      container_uuid: "1db1dc64-1dd1-4d98-9efb-6d3abf1bb04e"
      device_uuid: "d31bb1df-2a4a-4dd2-bad7-95b886c80fd9"
      naa_id: "naa.6506b8db06ba6ccdc7c6a554fdda2683"
      source_vmdisk_uuid: "6b6c646a-03e9-4316-b547-54561a4eaf1b"
      vmdisk_size: 21474836480
      vmdisk_uuid: "609fac19-96ae-465b-bec2-0260c1892d47"
    }
    machine_type: "pc"
    memory_mb: 1024
    name: "scalcia-project-vpx"
    num_cores_per_vcpu: 1
    num_threads_per_core: 1
    num_vcpus: 1
    num_vnuma_nodes: 0
    vga_console: True
    vm_type: "kGuestVM"
  }
  logical_timestamp: 2
  state: "kOff"
  uuid: "3fd8c301-0528-449d-b43a-16e49abb41fa"
}
<acropolis>
  • Power on the VM and check that all the data is intact.
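Comparing this output with the original vm.get above, the restored VM comes back powered off (state: "kOff") with a new UUID and new disk identifiers, and the config no longer lists a nic_list, so the NIC may need to be re-attached before use. A minimal sketch of these last steps from the CVM shell, assuming the original vlan.scalcia network is still available:

acli vm.nic_create scalcia-project-vpx network=vlan.scalcia   # re-attach the NIC (only if it is missing after restore)
acli vm.on scalcia-project-vpx                                # power the VM back on
acli vm.get scalcia-project-vpx                               # confirm state is now "kOn"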

Simple and easy :-)