The ExSce Management is an approach to store, query and generate test scenarios for ROS-based multi-robot systems. Provenance data about test scenarios and their executions is modeled using PROV and stored in a property graph. Runtime information is obtained from recorded bag files. Metamorphic testing is used to generate new scenarios and to validate that the system meets its requirements.
At its core, provenance describes the historical ownership of an object. In computer science, provenance refers to the lineage of data, that is, how data is created and used. In the ExSce workbench, provenance allows us to answer questions such as “Which scenario was used as a basis to generate scenario X?”, “Which data was used to compute this metric?”, “How many runs of scenario X pass the acceptance criteria?”, and “What would be the acceptance criteria for a new scenario with this specification?”. Provenance describes agents, such as robots, and activities, such as executions or “runs”, that produce, influence or deliver an entity, such as an object or data. In short, the ExSce workbench conforms to the W3C PROV specification [1] as a means to describe scenarios, their executions and their corresponding acceptance criteria; to describe what data was collected and how it was used for the acceptance criteria on each run; and as part of the process to verify and validate that a system meets its requirements.
Here, we provide a small overview of the PROV model (cf. Figure 1). There are three main concepts: Entities represent physical, digital or conceptual things; Activities represent actions or processes that generate or change entities; and Agents are responsible for or play a role in activities. The basic relationships between these concepts describe how activities use entities or generate new ones, and which agent was responsible (wasAssociatedWith) for an activity. wasAttributedTo describes which agent played a role in the activity that generated that entity, and wasDerivedFrom describes that the existence or properties of an entity are due to another entity. A Plan is a special type of entity that describes the steps an agent followed while performing an activity. Figure 2 below shows a simple example of PROV models in the context of the ExSce Management.
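The provenance questions listed earlier can be answered by traversing these relationships. The following sketch stores a handful of PROV-style statements as edges of an in-memory graph and follows them with plain Python; the node identifiers and the flat triple representation are invented for illustration and do not reflect the actual ExSce property-graph store or its query language.

```python
# Minimal in-memory stand-in for the property graph that stores PROV data.
# All node names below are hypothetical; only the relation names come from PROV.
edges = [
    # (subject, relation, object)
    ("scenario_X", "wasDerivedFrom", "scenario_base"),
    ("metric_dist", "wasGeneratedBy", "run_1"),
    ("run_1", "used", "bagfile_1"),
    ("run_1", "wasAssociatedWith", "robot_1"),
]

def neighbours(node, relation):
    """Follow all `relation` edges starting at `node`."""
    return [o for s, r, o in edges if s == node and r == relation]

# "Which scenario was used as a basis to generate scenario X?"
print(neighbours("scenario_X", "wasDerivedFrom"))   # -> ['scenario_base']

# "Which data was used to compute this metric?" -- two hops:
runs = neighbours("metric_dist", "wasGeneratedBy")
data = [d for run in runs for d in neighbours(run, "used")]
print(data)                                         # -> ['bagfile_1']
```

In the actual workbench these traversals would be expressed as queries against the property graph database rather than list comprehensions.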
One of the challenges of testing (autonomous) robotic systems lies in the difficulty of defining whether the result of a test is correct or not [2]. A test oracle is the procedure that determines whether the behaviour of the System Under Test (SUT) is correct, in principle by comparing an “expected outcome” with the observed one. However, for complex software, it is challenging to predict what the expected outcome for a given input should be [3]; e.g. how long a robot should take to complete a task depends on many factors. Furthermore, changes to the test inputs, even slight variations, can greatly influence the outcome.
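To see why a fixed expected outcome makes a brittle oracle, the toy sketch below simulates a task duration with run-to-run variation (all numbers and the noise model are invented for illustration): an oracle that checks for an exact value fails on virtually every run, even though each run is perfectly acceptable.

```python
import random

random.seed(42)  # fixed seed so the example is reproducible

def simulated_task_duration() -> float:
    """Stand-in for one run of a robot task: a nominal 30 s
    plus run-to-run variation (sensor noise, scheduling, ...)."""
    return 30.0 + random.uniform(-0.5, 0.5)

durations = [simulated_task_duration() for _ in range(5)]

# A naive oracle that demands one exact expected value is brittle:
exact_oracle_passes = [d == 30.0 for d in durations]
print(exact_oracle_passes)   # -> [False, False, False, False, False]
```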
Metamorphic testing [4] is a testing technique that leverages relationships between the outputs of a system, instead of fully formalising its input-output behaviour. This is particularly well-suited for robotic systems, which are often treated as a black box when it comes to testing. As a simple example, consider a robot and a task that consists of navigating from waypoint A to waypoint B. A metamorphic relation, illustrated in Figure 3, consists of an input transformation and an output relation. An input transformation, such as reversing the order of the waypoints, generates a new test case. An output relation compares the outputs of this pair of test cases, e.g. the robot travels (roughly) the same distance when it navigates from A to B as from B to A. In metamorphic testing, a test oracle validates that the output relation holds for a pair of test cases: rather than specifying that the robot should travel exactly 5 m from A to B, which might also depend on and vary because of robot parameters or properties (e.g. its kinematics), the test oracle checks that travelling from A to B and vice versa covers (roughly) the same distance. Moreover, due to non-determinism and noise, robotic systems can exploit metamorphic relations even without user-defined input transformations, since each run will have slightly different “inputs” (e.g. due to sensor noise); assuming we want the robot to behave the same regardless of those variations, we can define the output relation as an invariant, e.g. the distance to obstacles should always remain above some limit.
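The navigation example above can be sketched in a few lines. The distance function, noise model and tolerance below are all invented for illustration; in practice the travelled distances would come from recorded runs (e.g. bag files), not a closed-form path length.

```python
import math
import random

random.seed(7)

def travelled_distance(start, goal):
    """Stand-in for executing a navigation run and measuring the
    distance travelled: straight-line distance plus up to 10% detour."""
    straight = math.dist(start, goal)
    return straight * random.uniform(1.0, 1.1)

def reverse_waypoints(start, goal):
    """Input transformation: swap the order of the waypoints."""
    return goal, start

def outputs_roughly_equal(d1, d2, rel_tol=0.15):
    """Output relation: both runs cover (roughly) the same distance."""
    return math.isclose(d1, d2, rel_tol=rel_tol)

A, B = (0.0, 0.0), (3.0, 4.0)             # waypoints 5 m apart
d_forward = travelled_distance(A, B)                        # source test case
d_backward = travelled_distance(*reverse_waypoints(A, B))   # follow-up test case

# The metamorphic oracle checks the relation, not an exact expected value:
print(outputs_roughly_equal(d_forward, d_backward))   # -> True
```

Note that the oracle never needs to know the "correct" distance for either run; it only checks that the two outputs stand in the expected relation.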
[1] “PROV Model Primer.” https://www.w3.org/TR/prov-primer/.
[2] A. Afzal, C. L. Goues, M. Hilton, and C. S. Timperley, “A Study on Challenges of Testing Robotic Systems,” in 2020 IEEE 13th International Conference on Software Testing, Validation and Verification (ICST), Oct. 2020, pp. 96–107. doi: 10.1109/ICST46399.2020.00020.
[3] E. T. Barr, M. Harman, P. McMinn, M. Shahbaz, and S. Yoo, “The Oracle Problem in Software Testing: A Survey,” IEEE Transactions on Software Engineering, vol. 41, no. 5, pp. 507–525, May 2015, doi: 10.1109/TSE.2014.2372785.
[4] S. Segura, G. Fraser, A. B. Sanchez, and A. Ruiz-Cortes, “A Survey on Metamorphic Testing,” IEEE Transactions on Software Engineering, vol. 42, no. 9, pp. 805–824, Sep. 2016, doi: 10.1109/TSE.2016.2532875.