The ALT framework is designed to test logic that spans multiple services/endpoints which might use different technologies and protocols. It does so by executing scenarios (1 scenario = 1 complete use case), which are composed of actions (e.g. 1 action could be a call to and the response from a service). The framework supports the definition of different action types in a YAML format, e.g. specifying which endpoints have to be called with which parameters, as well as defining validation rules to be applied to the responses. It also supports detailed reporting of the test results. For more detailed documentation, check out the wiki!
- simple definition of reusable REST, MQTT, WS, etc. action templates
- ...
- using Variables across multiple actions
- validation of response payload from a REST action
- ...
- automatic retries of REST requests
- automatic reconnections of WS sessions
- ...
- filtering of incoming MQTT & WS messages
- publishing & listening to protobuf messages on an MQTT broker
First of all, you have to define the actions (see Actions) which should be invoked. Let's say we want to test the creation of new users in our system. For that, we need two actions: one for creating a new user and one for retrieving the created user and checking whether its attributes were stored correctly:
```yaml
# create-new-user.yaml
type: REST
service: https://reqres.in/api
endpoint: /users
method: POST
headers:
  Content-Type: application/json
data:
  name: James
  job: Agent
responseValidation:
  - "res.data.name === 'James'"
```
```yaml
# query-user.yaml
type: REST
service: https://reqres.in/api
endpoint: /users/2
method: GET
responseValidation:
  - "res.data.first_name === 'Janet'"
```
In order to execute those actions, we need a 'playbook' that defines which actions should be executed in which order: this is exactly what a scenario (see Scenarios) is made for:
description: "testing User-API of the system"
actions:
- name: create-new-user
- name: query-user
Scenarios also allow you to reuse existing actions while overriding individual properties:
description: "testing if the 'job' property is being saved correctly"
actions:
- name: create-new-user
data:
name: Steve
job: Teacher
responseValidation:
- "res.data.job === Teacher"
Now, to run our scenarios, we basically have two options: either using plain JS or via a custom-built Docker image.

First of all, we need to install the ALT dependency:

```bash
npm i -s @maibornwolff/alt-core-js
```
And then simply call the runScenario main entrypoint, providing the path to our scenario as well as to the actions directory:
```js
const ALT = require('@maibornwolff/alt-core-js');
ALT.runScenario('src/scenarios/s1-my-first-scenario.yaml', 'src/actions');
```
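If you want to execute every scenario in a directory, a small wrapper around runScenario will do. This is just a sketch using Node's built-in fs module, not an official API of the framework:

```js
const fs = require('fs');
const path = require('path');
const ALT = require('@maibornwolff/alt-core-js');

const SCENARIO_DIR = 'src/scenarios';

// invoke runScenario once for every scenario file found in the directory
fs.readdirSync(SCENARIO_DIR)
    .filter(file => file.endsWith('.yaml'))
    .forEach(file =>
        ALT.runScenario(path.join(SCENARIO_DIR, file), 'src/actions'),
    );
```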
There is a special runner image available (see Docker Hub) which already contains the core framework and also an invocation script for running a particular scenario. You can either use it raw or as a runner image on CI platforms like e.g. GitLab.
```bash
# -v `pwd`/src:/src                     mounts the scenarios' & actions' root directory
# -e ALT_SRC=/src                       declares the mounted path as the resource directory
# -v `pwd`/output:/alt-runner-app/out   output directory where .log files and diagrams
#                                       will be saved after the execution
# runScenario <name>                    run command with the scenario name as input param
docker run \
    -v `pwd`/src:/src \
    -e ALT_SRC=/src \
    -v `pwd`/output:/alt-runner-app/out \
    maibornwolff/alt-runner-image:latest \
    runScenario s1-my-first-scenario.yaml
```
```yaml
run-my-scenario:
  stage: test
  image: maibornwolff/alt-runner-image:latest
  script:
    - export ALT_SRC=$(pwd)/src # directory path containing the ./scenarios & ./actions directories
    - runScenario s1-my-first-scenario.yaml # execution script available inside the container: 'runScenario'
  when: manual
```
...
todo
During the execution there are 2 kinds of logging: basic information about which Scenario/Action is being executed is logged to the console, while detailed logs containing the Actions' parameters, results, and stack traces are written to files stored under `out/`: each scenario logs into its own `.log` file!
The framework can also automatically create sequence diagrams from the given scenario definition; these are saved in `out/` as well.
```bash
$ npm install   # install the dependencies
$ tsc           # compile the TypeScript sources
$ npm test      # run the test suite
```
...