Commit 40c8faf9 authored by Jutta Schnabel

adding Testing strategy

parent 88fc7925
@@ -11,23 +11,45 @@ Topics:
# Open data sets and formats
## Particle event tables
### Data generation
* Level 2 data processed using km3pipe
* Selection of relevant parameters
* Adding metadata descriptions
* Adding provenance data
* Pipeline writer to different formats (csv, hdf5); see the pipeline sketch below
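A minimal sketch of such a pipeline, assuming km3pipe's `Pipeline`/`Module` API and its HDF5 pump/sink modules; the blob key, column list and file names are illustrative placeholders, not the actual production configuration:

```python
import km3pipe as kp

class SelectParameters(kp.Module):
    """Keep only the columns relevant for the open data release."""

    def configure(self):
        # hypothetical default column list; the real selection is analysis-specific
        self.columns = self.get("columns", default=["energy", "zenith", "azimuth"])

    def process(self, blob):
        table = blob["EventInfo"]                 # assumed blob key, depends on the input file
        blob["EventInfo"] = table[self.columns]   # numpy multi-field column selection
        return blob

pipe = kp.Pipeline()
pipe.attach(kp.io.HDF5Pump, filename="level2.h5")     # read Level 2 data
pipe.attach(SelectParameters)
pipe.attach(kp.io.HDF5Sink, filename="open_data.h5")  # write the selected table
pipe.drain()
```

A csv writer would replace the last module; metadata and provenance records are attached alongside the table in the same pass.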
### Data description
**Scientific use**
* Astrophysics
* Neutrino oscillation
**Metadata**
-> Define Metadata here (table)
* Provenance information
* Parameter descriptions
* Data taking metadata
### Technical specification
**Format**
* Flat table
* Column description metadata
#### Output format
-> formats, which metadata where
hdf5 (see the format sketch below)
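To illustrate the "flat table plus column-description metadata" layout in hdf5, a minimal sketch with `h5py`; the dataset name, columns, descriptions and units are invented placeholders:

```python
import h5py
import numpy as np

# illustrative flat event table; real releases define the columns centrally
events = np.array(
    [(12.3, 0.42, 1.57), (45.1, 1.10, 0.33)],
    dtype=[("energy", "f8"), ("zenith", "f8"), ("azimuth", "f8")],
)

with h5py.File("open_data.h5", "w") as f:
    dset = f.create_dataset("events", data=events)
    # column descriptions live next to the table as HDF5 attributes
    dset.attrs["energy"] = "reconstructed neutrino energy [GeV]"
    dset.attrs["zenith"] = "reconstructed zenith angle [rad]"
    dset.attrs["azimuth"] = "reconstructed azimuth angle [rad]"
    # file-level metadata, e.g. provenance and data-taking information
    f.attrs["provenance"] = "produced with km3pipe (placeholder)"
```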
**Interfaces**
*
## Multimessenger alerts
### Data generation
### Data description
**Scientific use**
@@ -35,6 +57,7 @@ Topics:
**Metadata**
### Technical specification
**Format**
---
Title: Manuals introduction
Author: Jutta
status: dump
---
* no copying, but pointing to the documentation

systematics:
- external manuals
  - per platform
  - per software
  - per workflow
- internal manuals
  - providing data and software: templates
  - publishing data -> procedures
  - operating the system
  - change procedures
---
Title: Testing of the System
Author: Susanne, Jutta
status: draft
---
Testing overview: https://www.guru99.com/system-testing.html
# Tests of the Open Science System
Based on [this testing tutorial](https://www.guru99.com/software-testing.html), the following testing scheme has been implemented for the Open Science System, ranging from automated testing at the software development level to manual testing by a test user to verify system usability and acceptance.
## Automated software testing
### Unit testing
[Unit testing](https://www.guru99.com/unit-testing-guide.html) tests the core functionality of individual code units and functions. Unit tests are part of every software project used in the KM3NeT Open Science system that is not third-party software. They are written by the developers, evaluated automatically, and run as part of the continuous integration pipeline in GitLab. Publication of the software and integration into the open science system require a high degree of code coverage by these automated tests. In the current setup, the workflow tool *km3pipe*, the Django-based *open data center*, the astropy-based *km3astro*, the metadata tool *kmeta* and the Python client *openkm3* all implement unit testing.
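A minimal example of the style of such a test with pytest (the helper function here is an invented stand-in, not actual KM3NeT code):

```python
# test_selection.py -- run with `pytest`
import pytest

def select_columns(row, columns):
    """Toy parameter-selection helper used to illustrate the test style."""
    return {key: row[key] for key in columns}

def test_select_columns_keeps_requested_keys():
    row = {"energy": 12.3, "zenith": 0.42, "quality": 0.9}
    assert select_columns(row, ["energy"]) == {"energy": 12.3}

def test_select_columns_raises_on_missing_key():
    with pytest.raises(KeyError):
        select_columns({"energy": 12.3}, ["zenith"])
```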
### Integration testing
The software design is tested in [integration testing](https://www.guru99.com/integration-testing.html) as part of the automated procedure. Test cases include reading test data, exercising complex functions of the software, and verifying the interplay of its components by running tests on various subsets and groups of modules. The software packages mentioned above also include integration tests in their automated test cases.
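A sketch of such an integration test, here checking that a small table written to hdf5 can be read back with its metadata intact (the file layout and names are assumptions, not the actual KM3NeT test suite):

```python
# test_roundtrip.py -- integration-style test using pytest's tmp_path fixture
import h5py
import numpy as np

def test_event_table_roundtrips(tmp_path):
    path = tmp_path / "events.h5"
    events = np.array([(12.3,)], dtype=[("energy", "f8")])
    # write the table together with a column description
    with h5py.File(path, "w") as f:
        dset = f.create_dataset("events", data=events)
        dset.attrs["energy"] = "reconstructed energy [GeV]"
    # read it back and check that data and metadata survived
    with h5py.File(path, "r") as f:
        assert f["events"]["energy"][0] == 12.3
        assert "GeV" in f["events"].attrs["energy"]
```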
### System testing
The interoperability of the software components is evaluated in [system testing](https://www.guru99.com/system-testing.html). Workflows relevant for testing are defined as example workflows for the documentation and provided e.g. as Jupyter notebooks, but they also serve to verify the functioning of the full workflow. Part of this testing is performed manually, as new test cases can be defined during acceptance testing which then lead to new system test workflows.
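As an example, a notebook-style system test could exercise the full chain from the data catalogue to a first analysis step. This sketch assumes the `KM3Store` interface of *openkm3*; the resource identifier is a placeholder:

```python
# system-test workflow, runnable as a Jupyter notebook cell
from openkm3.store import KM3Store

store = KM3Store()                       # connect to the KM3NeT open data center
store.list()                             # browse the catalogue of available resources
resource = store.get("example_dataset")  # placeholder identifier
# downstream analysis steps on `resource` would follow here
```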
## Manual testing of the system
In addition to the automated testing, a manual testing procedure has been performed on the full system by an external tester. It comprised the following steps:
* Introduction of the test user to the open science system by pointing to the documentation of the open science portal
* Definition of test cases by the tester, including example workflows and additional tests of partial functions
* Addition of further test cases by the developers
* Testing and reporting by the tester through a [standard test protocol](https://www.guru99.com/test-case.html); an example entry is given below
* Implementation of changes and recommendations from the test results
The tests were implemented at the time of writing of this deliverable, and documentation of the testing procedure can be made available on demand. Similar manual tests will be performed when introducing major architectural changes or new types of open data to the open data system.
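An illustrative (invented) entry in such a test protocol:

| Test case ID | Description | Test steps | Expected result | Status |
| --- | --- | --- | --- | --- |
| OS-07 | Retrieve an event table via *openkm3* | install the client, fetch a dataset, inspect the columns | table loads with column descriptions and units | pass |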