
add test cases of volume hot-plug for VM #857

Merged: 3 commits, Jul 18, 2023

Conversation

lanfon72 (Member) commented Jun 11, 2023

Changes

  • ADD [apiclient] vm.add_volume to hot-plug volumes
  • ADD [apiclient] vm.remove_volume to hot-unplug volumes
  • ADD TestHotPlugVolume for volume hot-plug/unplug
    • new test cases: test_add, test_remove
    • new fixtures: small_volume
  • DEL unused fixtures: virtctl, kubeconfig_file
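The new apiclient helpers themselves are not shown in this excerpt. As a rough illustration only, a hot-plug request body in the style of KubeVirt's `addvolume` subresource might be built like this; every field name below is an assumption, not the PR's actual API:

```python
def build_add_volume_opts(vol_name, pvc_name, bus="scsi"):
    """Build an AddVolumeOptions-style payload for hot-plugging a volume.

    Field names are modeled on KubeVirt's `addvolume` subresource and are
    assumptions; the PR's vm.add_volume may use a different shape.
    """
    return {
        "name": vol_name,
        "disk": {"name": vol_name, "disk": {"bus": bus}},
        "volumeSource": {"persistentVolumeClaim": {"claimName": pvc_name}},
    }

# Hypothetical usage with illustrative names:
opts = build_add_volume_opts("hotplug-disk", "hotplug-disk-pvc")
```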

@lanfon72 lanfon72 marked this pull request as ready for review June 12, 2023 12:42
@lanfon72 lanfon72 requested a review from a team June 12, 2023 12:43
lanfon72 (Member Author)

Tested on harvester-runtests#320


yield vol_name, size

code, data = api_client.volumes.delete(vol_name)
Contributor

Do we need to check that the delete has completed, to avoid affecting later test cases?

lanfon72 (Member Author) commented Jun 13, 2023

I think it should be fine: the volume name is unique and no other test case will use it.
And we expect the volume-deletion functionality itself to be covered by other test cases.
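For reference, if one did want to wait for the deletion to finish as the reviewer suggests, a minimal polling helper could look like the sketch below. It is hypothetical: `get_volume` stands in for `api_client.volumes.get`, and the assumption that a deleted volume returns HTTP 404 is not confirmed by this PR.

```python
from datetime import datetime, timedelta
from time import sleep

def wait_deleted(get_volume, vol_name, timeout=60, interval=0.01):
    """Poll until the volume is gone (HTTP 404) or the timeout expires."""
    endtime = datetime.now() + timedelta(seconds=timeout)
    while endtime > datetime.now():
        code, _ = get_volume(vol_name)
        if code == 404:  # volume no longer exists
            return True
        sleep(interval)
    return False

# Demo with a fake client: the volume "disappears" on the third poll.
calls = {"n": 0}
def fake_get(name):
    calls["n"] += 1
    return (404, {}) if calls["n"] > 2 else (200, {"metadata": {"name": name}})

deleted = wait_deleted(fake_get, "vol-test")
```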

khushboo-rancher (Collaborator) left a comment

In general looks good. Some improvements.

Comment on lines +1617 to +1634
    code, data = api_client.vms.start(unique_vm_name)
    endtime = datetime.now() + timedelta(seconds=wait_timeout)
    while endtime > datetime.now():
        code, data = api_client.vms.get_status(unique_vm_name)
        if 200 == code:
            phase = data.get('status', {}).get('phase')
            conds = data.get('status', {}).get('conditions', [{}])
            if ("Running" == phase
                    and "AgentConnected" == conds[-1].get('type')
                    and data['status'].get('interfaces')):
                break
        sleep(3)
    else:
        raise AssertionError(
            f"Failed to Start VM({unique_vm_name}) with errors:\n"
            f"Status: {data.get('status')}\n"
            f"API Status({code}): {data}"
        )
Collaborator

I feel we should have a running_vm fixture instead of writing VM-start code in the test, since many future test cases will need a running VM to start with.
Could you help to implement that?
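For illustration, the wait loop above could be extracted into a plain helper that a `running_vm` fixture (depending on the existing `stopped_vm`) might then call. The sketch below is a hypothetical version: `get_status` stands in for `api_client.vms.get_status`, and a fake callable is used so the example is self-contained.

```python
from datetime import datetime, timedelta
from time import sleep

def wait_vm_running(get_status, vm_name, timeout=60, interval=0.01):
    """Return (code, data) once the VM reports Running with the guest agent
    connected and interfaces populated, mirroring the in-test wait loop."""
    endtime = datetime.now() + timedelta(seconds=timeout)
    code, data = None, {}
    while endtime > datetime.now():
        code, data = get_status(vm_name)
        if code == 200:
            status = data.get('status', {})
            conds = status.get('conditions', [{}])
            if (status.get('phase') == "Running"
                    and conds[-1].get('type') == "AgentConnected"
                    and status.get('interfaces')):
                return code, data
        sleep(interval)
    raise AssertionError(f"Failed to start VM({vm_name}): {code} {data}")

# Demo with a fake status endpoint that becomes ready on the second poll.
polls = {"n": 0}
def fake_status(name):
    polls["n"] += 1
    if polls["n"] < 2:
        return 200, {"status": {"phase": "Starting"}}
    return 200, {"status": {"phase": "Running",
                            "conditions": [{"type": "AgentConnected"}],
                            "interfaces": [{"ipAddress": "10.0.0.5"}]}}

code, data = wait_vm_running(fake_status, "demo-vm")
```

A `@pytest.fixture` named `running_vm` could wrap this helper, start the VM from `stopped_vm`, and yield the status data.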

Member Author

We need to start the VM because we leverage the existing stopped_vm fixture, which is already defined; stopped_vm is more useful when we are going to update the VM, especially its cloud-config.

Of course we can add another running_vm fixture, but as you mentioned in the next comment, it would add more lines rather than just checking that the VM has started.

It is a trade-off; here I would prefer to reuse the existing fixture and start the VM, to make sure the VM has just started and is accessible.

Collaborator

  1. Can you elaborate on how running_vm would add more lines? I'm assuming a running_vm fixture would contain the checks that the VM started successfully.

  2. running_vm could be useful, quick, and convenient for other test cases.

Member Author

Two lines for the function name and decorator, one line for the return, plus the same lines of code as now. Isn't that the minimum needed to create the fixture and make it depend on stopped_vm?

Comment on lines 1642 to 1654
    with host_shell.login(host_ip, jumphost=True) as h:
        vm_sh = vm_shell(ssh_user, pkey=pri_key)
        endtime = datetime.now() + timedelta(seconds=wait_timeout)
        while endtime > datetime.now():
            try:
                vm_sh.connect(vm_ip, jumphost=h.client)
            except ChannelException as e:
                login_ex = e
                sleep(3)
            else:
                break
        else:
            raise AssertionError(f"Unable to login to VM {unique_vm_name}") from login_ex
Collaborator

Same with checking the login to the VM: we should have a common method, so that we can use it in multiple places and keep the tests simple.

Member Author

I remember we discussed this before. Turning those things into common functions leads us back toward the structure of the legacy code, where test cases look like:

def test_something(fixture_a, fixture_b):
    utils.create_something()
    # ... etc

When we add new tests that use utils.create_something, those tests become dependent on the common function. When one test fails, we may need to fix code in utils.create_something, which then affects the other test cases, because they all depend on it. (That is what the legacy code did.)

Of course I totally agree that repeated code is not good for automation, but currently it keeps test cases independent of each other, and we can read every step inside a test case without referring to other functions. I am still thinking about whether there is a better way to organize this repeated code in the future while keeping readability and clarity.

Or can you demonstrate a showcase of how to organize them into a common function while avoiding results like the legacy code, or Longhorn's pile of "from common import ..." imports?

Collaborator

It's not necessary to create a common file and put the common methods there, but you can at least create the common methods needed within a module.

For example, in the test test_add I see two attempts to log in to a VM; we could avoid writing the same lines of code again by having a login_to_vm method. Repeated code is hard to maintain: if something changes, you need to modify the same thing in multiple places.
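A login_to_vm-style common method could factor out just the retry loop. The sketch below is hypothetical: it takes any connect callable rather than the test's host_shell/vm_shell objects, and paramiko's ChannelException is replaced by a stand-in class so the example is self-contained.

```python
from datetime import datetime, timedelta
from time import sleep

class ChannelException(Exception):
    """Stand-in for the paramiko exception raised on a failed SSH channel."""

def retry_connect(connect, timeout=60, interval=0.01):
    """Retry connect() until it succeeds or the timeout expires,
    re-raising the last connection error on failure."""
    endtime = datetime.now() + timedelta(seconds=timeout)
    last_exc = None
    while endtime > datetime.now():
        try:
            return connect()
        except ChannelException as exc:
            last_exc = exc
            sleep(interval)
    raise AssertionError("Unable to login to VM") from last_exc

# Demo: a connection that fails twice, then succeeds.
attempts = {"n": 0}
def flaky_connect():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ChannelException("not ready")
    return "connected"

result = retry_connect(flaky_connect)
```

In the tests, the callable could close over `vm_sh.connect(vm_ip, jumphost=h.client)`, keeping the SSH details in the test body while sharing only the retry logic.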

Member Author

I don't think it needs emphasizing that a test case is mostly read when diagnosing whether a failure is a real bug or needs an implementation improvement; otherwise it will not be modified. So keeping tests decoupled from other functions should take higher priority than the line count.

But yeah, it is just my opinion, like #840, so let's follow your design.

Collaborator

@lanfon72 It's not about your opinion or mine; I recommend we follow what is best for Harvester testing. The goal of these tests is to have good coverage and nicely written cases that are easy to read and maintain.
I think the discussion here is not working. Let's discuss this offline.

Member Author

I think we have different concepts of automation testing. What I care about in a test case is:

  1. Test isolation
  2. Readability, including test reports
  3. Accuracy and stability

I explained, when I migrated the legacy code to the current implementation, that one of its problems was that the legacy code called shared functions, which made the tests not isolated. And I said I don't have a better way to reduce the repeated code for now.

In my opinion, in automation testing repeated code is not harmful; shared functions are.
I will not use words like good, easy, or nice, because they don't explain anything.

> I think the discussion here is not working. Let's discuss this offline.

If I can't persuade you not to do this, I will just follow your instructions.
I don't think discussing it offline will solve the problem. Likewise, #840 is not closed; in my experience it is unnecessary and should be closed, and we did discuss it offline. Same with #363: I explained in the issue, but yeah, let's follow your instructions.

Comment on lines 1675 to 1688
    # Login to VM to verify the volume is hot-plugged
    with host_shell.login(host_ip, jumphost=True) as h:
        vm_sh = vm_shell(ssh_user, pkey=pri_key)
        endtime = datetime.now() + timedelta(seconds=wait_timeout)
        while endtime > datetime.now():
            try:
                vm_sh.connect(vm_ip, jumphost=h.client)
            except ChannelException as e:
                login_ex = e
                sleep(3)
            else:
                break
        else:
            raise AssertionError(f"Unable to login to VM {unique_vm_name}") from login_ex
Collaborator

As mentioned above, this repeats the login code; we can replace it with a common method.

@lanfon72 lanfon72 force-pushed the e2e branch 2 times, most recently from d4c2a4f to 832af6d Compare July 3, 2023 12:07
khushboo-rancher (Collaborator) left a comment

LGTM

@khushboo-rancher khushboo-rancher merged commit eb68dcf into harvester:main Jul 18, 2023
2 checks passed