
rscan command for hypervisor


The mini-design of the rscan command for hypervisors

Background

Currently, the rscan command does not support retrieving kvm guest information from the kvmhost/hypervisor, and cannot write the kvm guest information into the xCAT database when required.

This design extends the rscan command to scan a kvmhost and retrieve its virtual machine list.

Supported options:

-w write kvm guest information that is not yet in the xCAT tables, and output conflict information if a conflict occurs.
-u update kvm guest entries that already exist in the xCAT tables.
-n write kvm guest information that is not yet in the xCAT tables.

The interface details

a) Command: rscan [noderange]

The output will look like:

type   name   hypervisor   id      cpu     memory     nic     disk
kvm    kvm1   kvmhost1     41      2       2048       br0     /install/vms/kvm1.hda.qcow2
kvm    kvm2   kvmhost1     21      2       2048       br0     /install/vms/kvm2.hda.qcow2
kvm    kvm1   kvmhost2     43      2       2048       br0     /install/vms/kvm1.hda.qcow2
kvm    kvm2   kvmhost2             2       2048       br0     /install/vms/kvm2.hda.qcow2

b) Command: rscan [noderange] -w or rscan [noderange] -u or rscan [noderange] -n

The output will look like the example below; in addition, the command writes or updates the kvm guest information in the xCAT database tables.

type   name   hypervisor   id      cpu     memory     nic     disk
kvm    kvm1   kvmhost1     41      2       2048       br0     /install/vms/kvm1.hda.qcow2
kvm    kvm2   kvmhost1     21      2       2048       br0     /install/vms/kvm2.hda.qcow2
kvm    kvm1   kvmhost2     43      2       2048       br0     /install/vms/kvm1.hda.qcow2
kvm    kvm2   kvmhost2             2       2048       br0     /install/vms/kvm2.hda.qcow2

Here are some scenarios that need to be handled.

  • Command: rscan [noderange] -w

Scenario 1

If the xCAT database tables do not have kvm1 guest information, but the rscan command finds that kvmhost1 has a kvm1 guest and kvmhost2 has another kvm1 guest, which kvm1 guest information should be written to the database?

Solution:

Write the kvm1 guest information found last into the database and output the conflict information.

Scenario 2

If the xCAT database tables already have kvm1 guest information and this kvm1 belongs to kvmhost1, but the rscan command finds that kvmhost2 has another kvm1 guest, should the kvm1 guest information from kvmhost1 be overwritten by the kvm1 guest information from kvmhost2?

Solution:

Do not overwrite the kvm1 guest information from kvmhost1; output the conflict information.
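
A minimal sketch of the -w decision flow described in the two scenarios above. The %db_vm hash (existing vm table rows keyed by guest name), the @scanned list (guests found on the kvmhosts in scan order), and the write_vm_entry()/report_conflict() helpers are hypothetical placeholders used only to illustrate the logic, not the final implementation.

my %seen;    # guest name => kvmhost whose information was written during this scan
foreach my $guest (@scanned) {
    my $name = $guest->{name};
    if (exists $db_vm{$name}) {
        # Scenario 2: the guest is already in the xCAT database; keep the existing
        # entry and report a conflict if the scanned copy comes from another kvmhost.
        if ($db_vm{$name}->{host} ne $guest->{hypervisor}) {
            report_conflict($name, $db_vm{$name}->{host}, $guest->{hypervisor});
        }
        next;
    }
    # Scenario 1: the same guest name was already found on an earlier kvmhost in
    # this scan; report the conflict, then let the guest found last win.
    if (exists $seen{$name}) {
        report_conflict($name, $seen{$name}, $guest->{hypervisor});
    }
    $seen{$name} = $guest->{hypervisor};
    write_vm_entry($guest);    # writes (or overwrites) the vm table row for this guest
}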

  • Command: rscan [noderange] -u

Scenario 1

If the xCAT database tables do not have kvm1 guest information, but the rscan command finds that kvmhost1 has a kvm1 guest and kvmhost2 has another kvm1 guest, what should happen?

Solution:

Don't write the kvm1 guest information to the xCAT database tables.

Scenario 2

If the xCAT database tables already have kvm1 guest information, but the rscan command finds that kvmhost1 has a kvm1 guest and kvmhost2 has another kvm1 guest, how do we decide which kvm1 guest information to use for the update?

Solution:

If the kvm1 guest information in the xCAT database tables has the same name and vmhost as the kvm1 guest information from kvmhost1, update it with the kvm1 guest information from kvmhost1.
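
A minimal sketch of this -u matching rule, assuming the same hypothetical @scanned list as above and an illustrative update_vm_entry() helper; the xCAT::Table calls follow the API already used elsewhere in kvm.pm, but this is not the final implementation.

my $vmtab = xCAT::Table->new('vm', -create => 0);
foreach my $guest (@scanned) {
    my $ent = $vmtab ? $vmtab->getNodeAttribs($guest->{name}, ['host']) : undef;
    # Update only when the guest already has a vm table row and its vmhost (host
    # column) matches the kvmhost this guest was scanned from.
    next unless $ent and $ent->{host} and $ent->{host} eq $guest->{hypervisor};
    update_vm_entry($guest);    # refresh cpu/memory/nic/disk for the matching row
}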

The implementation details

a) Add rscan command definition in kvm.pm

We need to add the rscan command definition to kvm.pm so that xcatd will dispatch this command to kvm.pm.

  • add the rscan command to the handled_commands() function

xcatd consults the nodehm table in xCAT; the value of the mgt column determines that kvm.pm handles the command for a node.

sub handled_commands {
...
    rscan    => 'nodehm:mgt=kvm',    # dispatch rscan to kvm.pm for nodes whose nodehm.mgt is kvm
...
}
  • add an rscan() dispatch branch to the guestcmd() function

This enables every node in the noderange to be handled by the rscan() function.

sub guestcmd {
...
      } elsif ($command eq "rscan") {
          return rscan($node, @args);
...
}

b) Add rscan command in process_request() function

Build the hyphash entry for every node in the noderange.

sub process_request {
...
    unless ($command eq 'lsvm' or $command eq 'rscan') {
        xCAT::VMCommon::grab_table_data($noderange, $confdata, $callback);
        my $kvmdatatab = xCAT::Table->new("kvm_nodedata", -create => 0); #grab any pertinent pre-existing xml
        if ($kvmdatatab) {
            $confdata->{kvmnodedata} = $kvmdatatab->getNodesAttribs($noderange, [qw/xml/]);
        } else {
            $confdata->{kvmnodedata} = {};
        }
    }
...
    if ($command eq 'lsvm' or $command eq 'rscan') {    #command intended for hypervisors, not guests
        foreach (@$noderange) { $hyphash{$_}->{nodes}->{$_} = 1; }
    } else {
        foreach (keys %{ $confdata->{vm} }) {
            if ($confdata->{vm}->{$_}->[0]->{host}) {
                $hyphash{ $confdata->{vm}->{$_}->[0]->{host} }->{nodes}->{$_} = 1;
            } else {
                $orphans{$_} = 1;
            }
        }
    }
...
}

c) Add the rscan() function to kvm.pm

Libvirt is the open source infrastructure that provides low-level virtualization capabilities for most available hypervisors, including KVM, Xen, VMware, and IBM PowerVM. Libvirt offers different means of access, from the virsh command line to low-level APIs for many programming languages. (For Perl, see the Sys::Virt API reference at http://search.cpan.org/dist/Sys-Virt/.)

  • require Sys::Virt

Connect to the virtualization host identified by a URI.

$hypconn= Sys::Virt->new(uri=>"qemu+ssh://root@".$_."/system?no_tty=1");

Return a list of all domains currently known to $hypconn, whether running or shut off.

@doms = $hypconn->list_all_domains();
  • require Sys::Virt::Domain

Returns an XML document containing a complete description of the domain's configuration.

$currxml=$dom->get_xml_description();
  • get information from xml file

Get the type, name, id, cpu, memory, nic, and disk information from the XML document.

my $parser = XML::LibXML->new();    # create the XML::LibXML parser used below
my $domain = $parser->parse_string($currxml);
my $type = $domain->findnodes("/domain")->[0]->getAttribute("type");
my $name = $domain->findnodes("/domain/name")->[0]->to_literal;
my $id = $domain->findnodes("/domain")->[0]->getAttribute("id");
my $vmcpus = $domain->findnodes("/domain/vcpu")->[0]->to_literal;
my $mem = $domain->findnodes("/domain/memory")->[0]->to_literal;
my $vmnics = $domain->findnodes("/domain/devices/interface/source")->[0]->getAttribute("bridge");
my $vmstorage = $domain->findnodes("/domain/devices/disk/source")->[0]->getAttribute("file");
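
A hedged sketch of how these pieces could fit together in a per-hypervisor scan helper. The scan_hypervisor() name, the returned hash layout, and the simplified error handling are illustrative assumptions, not the final kvm.pm implementation; the Sys::Virt and XML::LibXML calls are the ones shown above.

use Sys::Virt;
use XML::LibXML;

sub scan_hypervisor {
    my ($hyp) = @_;
    # Connect to the kvmhost over ssh, as in the snippet above.
    my $hypconn = Sys::Virt->new(uri => "qemu+ssh://root@" . $hyp . "/system?no_tty=1");
    my $parser  = XML::LibXML->new();
    my @rows;
    foreach my $dom ($hypconn->list_all_domains()) {    # running and shut-off guests
        my $domain = $parser->parse_string($dom->get_xml_description());
        push @rows, {
            type       => $domain->findnodes("/domain")->[0]->getAttribute("type"),
            name       => $domain->findnodes("/domain/name")->[0]->to_literal,
            hypervisor => $hyp,
            id         => $domain->findnodes("/domain")->[0]->getAttribute("id"),
            cpu        => $domain->findnodes("/domain/vcpu")->[0]->to_literal,
            memory     => $domain->findnodes("/domain/memory")->[0]->to_literal,
            nic        => $domain->findnodes("/domain/devices/interface/source")->[0]->getAttribute("bridge"),
            disk       => $domain->findnodes("/domain/devices/disk/source")->[0]->getAttribute("file"),
        };
    }
    return @rows;    # one hash per guest, matching the output columns shown earlier
}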
