1. Scicos Tests
Contents
2. Non-regression tests
2.1. Purpose
Perform tests on full diagrams - rather than on the elementary blocks covered by unitary tests - and check that the current behavior is consistent with reference results obtained on a prior version of the software.
A non-regression test is considered "passed" if results are 100% identical, or if any differences can be fully explained by intentional changes in the code.
2.2. Location
SCI/modules/scicos/tests/nonreg_tests/
2.3. Naming conventions
No special naming convention has been established. Diagrams can be saved under any appropriate and meaningful name, further referred to as <DIAGRAM_NAME>.
Due to differences in results for a given Scilab release on Windows and Unix platforms, two different sets of files have to be generated and compared when doing non-regression tests. Corresponding files carry a special <OS> suffix, equal to either win or unix.
2.4. File structure
- To be automatically recognized and launched as non-regression tests, diagrams must be saved under the root directory of the Scicos non-regression tests (cf. Location above).
2.5. Needed files
A comprehensive set of files for one test must include the following items:
Filename | Description | Example
<DIAGRAM_NAME>.cos | Diagram | rossler.cos
<DIAGRAM_NAME>.<OS>.out.ref | Simulation output [reference] | rossler.win.out.ref, rossler.unix.out.ref
<DIAGRAM_NAME>.<OS>.log.ref | Console log [reference] | rossler.win.log.ref, rossler.unix.log.ref
<DIAGRAM_NAME>.<OS>.err.ref | Error log [reference] | rossler.win.err.ref, rossler.unix.err.ref
The following files are automatically generated before or during non-regression tests:
Filename | Description | Example
<DIAGRAM_NAME>.test | Script used to launch the test in a separate background Scilab process | rossler.test
<DIAGRAM_NAME>.<OS>.out | Simulation output | rossler.win.out, rossler.unix.out
<DIAGRAM_NAME>.<OS>.log | Console log | rossler.win.log, rossler.unix.log
<DIAGRAM_NAME>.<OS>.err | Error log | rossler.win.err, rossler.unix.err
2.5.1. Diagram
<DIAGRAM_NAME>.cos
The diagram itself contains all the parameters needed to run the simulation (final time, solver settings, etc.), as well as a context if need be.
Remark: as opposed to unitary tests, the context can be defined directly INSIDE the diagram. It is externalized in unitary tests to allow automatic processing of close - yet different - contexts applied to the same base diagram. Here there is no need for different contexts, because each non-regression test is supposed to be unique.
2.5.2. Simulation output
<DIAGRAM_NAME>.<OS>.out <DIAGRAM_NAME>.<OS>.out.ref
Each non-regression diagram contains a Write to output file block (found in the Sinks palette). It generates text files that can be easily compared to check consistency in the results between two different Scilab/Scicos versions. Some remarks:
Diagrams should contain EXACTLY ONE Write to output file block. Including no block would mean no output file and thus no possible comparison. On the other hand, more than one block would require appending an additional index to an already long filename.
Special Write to output file block settings:
Output format: (7(e22.15,1x))
This format is chosen with regard to %eps being equal to about 2.220e-16 in Scilab.
- 16 significant digits cannot guarantee that two outputs are "equal": the last digit may differ between the two outputs while the absolute difference is still strictly less than %eps.
- 15 significant digits is thus the best compromise: the slightest difference on the last digit ensures that the two outputs are considered different (as 1e-15 > 2.220e-16), and simulations can be considered equal if no difference is noticed after several thousand steps.
Before launching any simulation, a function checks whether the output format matches the right pattern and automatically fixes it in case of mismatch.

Output file name: <DIAGRAM_NAME>.<OS>.out
The output file name should be directly derived from the diagram name. This convention allows automatic processing by the various Scilab scripts dealing with tests.
Before launching any simulation, a function checks whether the output filename matches the right pattern. It loops through the diagram, looking for the only "Write to output file" block available, and fixes its settings automatically. Thanks to this, the test designer does not have to set the filename manually.
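The reasoning about 15 significant digits can be checked directly in the Scilab console; this is a minimal sketch, not part of the test suite:

```scilab
// %eps is the machine epsilon, approximately 2.220e-16 in Scilab.
// A difference on the 15th significant digit is at least 1e-15,
// which is strictly greater than %eps, so it can never be mistaken
// for mere round-off noise.
disp(%eps);          // approximately 2.220D-16
disp(1e-15 > %eps);  // T
```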
2.5.3. Console output
<DIAGRAM_NAME>.<OS>.log <DIAGRAM_NAME>.<OS>.log.ref
These files are used to log the console output throughout the simulation process. The console can be used to display debug data to facilitate further comparisons and debug sessions.
2.5.4. Error log
<DIAGRAM_NAME>.<OS>.err <DIAGRAM_NAME>.<OS>.err.ref
These files result from the redirection of the standard error output to a file.
2.6. How to launch the non-regression tests?
Use the following instructions to launch every test in the Scicos non-regression suite:
-->cd SCI/modules/scicos/tests/nonreg_tests;
-->exec('scicos_nonreg.sci');
-->scicos_nonreg();
001/029 - [scicos] constant.....................................passed : Output and reference are equal
002/029 - [scicos] delay_anal...................................failed : Output and reference are NOT equal
003/029 - [scicos] disease......................................failed : Output and reference are NOT equal
[...]
028/029 - [scicos] threshold....................................failed : Output and reference are NOT equal
029/029 - [scicos] transferfcn..................................failed : Output and reference are NOT equal
--------------------------------------------------------------------------
Summary

tests    29 - 100.0 %
passed    2 -   6.9 %
failed   27 -  93.1 %
skipped   0 -   0.0 %
--------------------------------------------------------------------------
Details

TEST : [scicos] delay_anal
failed : Output and reference are NOT equal
Compare the following files for more details:
- /home/vaylet/dev/scilab-5.0/modules/scicos/tests/nonreg_tests/delay_anal.unix.out
- /home/vaylet/dev/scilab-5.0/modules/scicos/tests/nonreg_tests/delay_anal.unix.out.ref

TEST : [scicos] disease
failed : Output and reference are NOT equal
Compare the following files for more details:
- /home/vaylet/dev/scilab-5.0/modules/scicos/tests/nonreg_tests/disease.unix.out
- /home/vaylet/dev/scilab-5.0/modules/scicos/tests/nonreg_tests/disease.unix.out.ref

[...]

TEST : [scicos] threshold
failed : Output and reference are NOT equal
Compare the following files for more details:
- /home/vaylet/dev/scilab-5.0/modules/scicos/tests/nonreg_tests/threshold.unix.out
- /home/vaylet/dev/scilab-5.0/modules/scicos/tests/nonreg_tests/threshold.unix.out.ref

TEST : [scicos] transferfcn
failed : Output and reference are NOT equal
Compare the following files for more details:
- /home/vaylet/dev/scilab-5.0/modules/scicos/tests/nonreg_tests/transferfcn.unix.out
- /home/vaylet/dev/scilab-5.0/modules/scicos/tests/nonreg_tests/transferfcn.unix.out.ref
--------------------------------------------------------------------------
-->
You can also choose to run only a subset of all available tests using instructions like:
--> scicos_nonreg('constant'); // run only the 'constant' test
or
--> scicos_nonreg(['constant', 'disease', 'threshold']); // run only the 'constant', 'disease' and 'threshold' tests
2.7. How to use generated files to debug errors or inconsistencies?
If one or several tests fail, the files generated during the simulation can provide useful debug information. The following pairs of files can easily be compared in a visual diff tool such as meld or kdiff3 under Linux, or WinMerge under Windows:

<DIAGRAM_NAME>.<OS>.out / <DIAGRAM_NAME>.<OS>.out.ref
These files should be the first ones to be compared: they make it easy to notice where and when the first differences or errors occur, and whether they propagate or not.

<DIAGRAM_NAME>.<OS>.log / <DIAGRAM_NAME>.<OS>.log.ref
Console logs can give useful debug information and hints, especially thanks to the display of two important structures, scs_m and Info:
- scs_m, obtained after a call to the load function, contains all the details about the architecture and the internals of the diagram.
- Info, an initially empty list updated by the scicos_simulate function, provides everything concerning the simulation state.

<DIAGRAM_NAME>.<OS>.err / <DIAGRAM_NAME>.<OS>.err.ref
Error logs can also be useful when a simulation could not be run or a fatal error occurred.
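The scs_m and Info structures mentioned above come from the standard simulation workflow. A hedged sketch of what a generated test script does (exact arguments and diagram name are illustrative, and may vary between Scilab versions):

```scilab
// Load the diagram: this defines scs_m, the diagram structure
// holding blocks, links, and simulation parameters.
load('rossler.cos');

// Run the simulation headlessly. Info starts as an empty list and
// is updated by scicos_simulate with the simulation state, so it
// can be passed back in to continue or restart a simulation.
Info = list();
Info = scicos_simulate(scs_m, Info);
```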
2.8. How to add a new test to the existing suite?
Adding a new test to the suite is relatively easy:
1. Build your diagram as if it were a general-purpose one. You can even include scopes for debug purposes: simulations are launched in No Window mode (-nw), so scopes stay hidden during the non-regression process.
2. Add a Write to output file block (from the Sinks palette) to the diagram, feeding it with all the outputs you want to log for comparison between the two versions under test. Use a Mux block (from the Branching palette) to concatenate signals when needed, and do not forget to set the right Input size setting in the Write to output file parameters.
3. Just in case, launch the simulation manually using the "Simulate/Run" menu item to check that everything works correctly (then do not forget to delete the file generated by the Write to output file block, to keep a clean environment).
3. Unitary tests
3.1. Purpose
Perform tests on elementary blocks and check that the current behavior is consistent with reference results obtained on a prior version of the software.
A unitary test is considered "passed" if results are 100% identical, or if any differences can be fully explained by intentional changes in the code.
3.2. Location
SCI/modules/scicos_blocks/tests/unit_tests/
3.3. Naming conventions
Test names SHOULD be primarily based on the name of the computing function of the tested block:
<BLOCK_NAME> = name of the computing function of the tested block (as found in SCI/modules/scicos_blocks/src/)
Example: GAINBLK for the GAIN block (in the Linear palette), matching GAINBLK.c in SCI/modules/scicos_blocks/src/c/
- It is possible to build several different diagrams to test a single block:
<DIAGRAM_NAME> = <BLOCK_NAME>_<IDX>
A 3-digit index allows up to 999 different diagrams for a single block.
Example: GAINBLK_001, GAINBLK_023
- To avoid building the same diagram again and again when only a block setting changes, it is possible to define different contexts for a given diagram. This way, one or more parameters can vary in an automated way:
<TEST_NAME> = <DIAGRAM_NAME>_<IDX>
A 3-digit index allows up to 999 different contexts for a single diagram.
Example: GAINBLK_001_001, GAINBLK_023_009
Due to differences in results for a given Scilab release on Windows and Unix platforms, two different sets of files have to be generated and compared when doing unitary tests. Corresponding files carry a special <OS> suffix, equal to either win or unix.
3.4. File Structure
Each palette has a related subfolder in SCI/modules/scicos_blocks/tests/unit_tests/. This layout brings:
- A tidier organization (fewer files in more folders)
- The ability for users to launch a smaller subset of tests (for example, a chosen set of palettes, using their names)
3.5. Needed files
A comprehensive set of files for one test must include the following items:
Filename | Description | Example
<DIAGRAM_NAME>.cos | Diagram | GAINBLK_001.cos
<TEST_NAME>.in | Simulation input | GAINBLK_001_001.in
<TEST_NAME>.<OS>.out.ref | Simulation output [reference] | GAINBLK_001_001.win.out.ref, GAINBLK_001_001.unix.out.ref
<TEST_NAME>.<OS>.log.ref | Console log [reference] | GAINBLK_001_001.win.log.ref, GAINBLK_001_001.unix.log.ref
<TEST_NAME>.<OS>.err.ref | Error log [reference] | GAINBLK_001_001.win.err.ref, GAINBLK_001_001.unix.err.ref
The following items are optional:
Filename | Description | Example
<TEST_NAME>.cxt | Context definition | GAINBLK_001_001.cxt
<TEST_NAME>_pre.sce | Pre-simulation script | GAINBLK_001_001_pre.sce
<TEST_NAME>_post.sce | Post-simulation script | GAINBLK_001_001_post.sce
The following files are automatically generated before or during unitary tests:
Filename | Description | Example
<TEST_NAME>.test | Script used to launch the test in a separate background Scilab process | GAINBLK_001_001.test
<TEST_NAME>.<OS>.out | Simulation output | GAINBLK_001_001.win.out, GAINBLK_001_001.unix.out
<TEST_NAME>.<OS>.log | Console log | GAINBLK_001_001.win.log, GAINBLK_001_001.unix.log
3.5.1. Diagram
<TEST_NAME>.cos
The diagram itself contains all the parameters needed to run the simulation (final time, solver settings, etc.). However, a context can be (optionally) defined in an external file to supply custom block settings (see below).
3.5.2. Simulation context
<TEST_NAME>.cxt
The context should be defined in a text file containing name/value pairs that define variables. This file should be human-readable so that the user can modify it at any time.
@TODO: use a better and easier way to store this kind of data (XML?)
@REMARK: Beware of matrices (text files may not be a convenient way to input and store them)
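Pending a better storage format, a context file is simply a short list of Scilab-readable assignments. A purely hypothetical example (the variable name K and its value are not taken from the test suite):

```scilab
// GAINBLK_001_001.cxt -- hypothetical context for a gain test:
// each line defines one variable used by the diagram's block settings.
K = 3.5;
```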
3.5.3. Pre- and post-simulation scripts
<TEST_NAME>_pre.sce
<TEST_NAME>_post.sce
If present, these scripts are launched respectively right before and right after the simulation.
For instance, they can be used to set workspace variables needed by From Workspace blocks, or to display the contents of variables generated by To Workspace blocks and allow visual comparison. However, no graphical function like plot should be used (except maybe for debug purposes), because test results are text-based comparisons only!
@TODO: not yet implemented
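Although this feature is not yet implemented, the intended usage can be sketched. Everything below is hypothetical, including the variable names:

```scilab
// GAINBLK_001_001_pre.sce -- hypothetical pre-simulation script:
// define workspace data the diagram needs before simulation starts.
t = (0:0.1:10)';
u = sin(t);   // assumed input signal for a From Workspace block

// GAINBLK_001_001_post.sce -- hypothetical post-simulation script:
// display results for text-based comparison (no plot calls, since
// test results are compared as text only).
disp(u);
```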
3.5.4. Simulation input
<TEST_NAME>.in
Some simulations may need a custom input imported with a Read from File block. The filename should be directly derived from the diagram name and thus set to <TEST_NAME>.in. This convention allows automatic processing by the various Scilab scripts dealing with unit tests.
Before launching any simulation, a function checks whether the input filename matches the right pattern. It loops through the diagram, looking for the only Read from File block available, and fixes its settings automatically. Thanks to this, the test designer does not have to set the filename manually.
3.5.5. Simulation output
<TEST_NAME>.<OS>.out <TEST_NAME>.<OS>.out.ref
Each unitary test diagram contains a Write to output file block (found in the Sinks palette). It generates text files that can easily be compared to check consistency in the results between two different Scilab/Scicos versions. Some remarks:
Diagrams should contain EXACTLY ONE Write to output file block. Including no block would mean no output file and thus no possible comparison. On the other hand, more than one block would require appending an additional index to an already long filename.
Special Write to output file block settings:
Output format: (7(e22.15,1x))
This format is chosen with regard to %eps being equal to about 2.220e-16 in Scilab.
- 16 significant digits cannot guarantee that two outputs are "equal": the last digit may differ between the two outputs while the absolute difference is still strictly less than %eps.
- 15 significant digits is thus the best compromise: the slightest difference on the last digit ensures that the two outputs are considered different (as 1e-15 > 2.220e-16), and simulations can be considered equal if no difference is noticed after several thousand steps.
Before launching any simulation, a function checks whether the output format matches the right pattern and automatically fixes it in case of mismatch.

Output file name: <TEST_NAME>.<OS>.out
The output file name should be directly derived from the test name. This convention allows automatic processing by the various Scilab scripts dealing with tests.
Before launching any simulation, a function checks whether the output filename matches the right pattern. It loops through the diagram, looking for the only "Write to output file" block available, and fixes its settings automatically. Thanks to this, the test designer does not have to set the filename manually.
3.5.6. Console output
<TEST_NAME>.<OS>.log <TEST_NAME>.<OS>.log.ref
These files are used to log the console output throughout the simulation process. The console can be used to display debug data to facilitate further comparisons and debug sessions.
3.5.7. Error log
<TEST_NAME>.<OS>.err <TEST_NAME>.<OS>.err.ref
These files result from the redirection of the standard error output to a file.
3.6. How to launch the unitary tests?
Use the following instructions to launch every test in the Scicos unitary suite:
-->cd SCI/modules/scicos_blocks/tests/unit_tests;
-->exec('scicos_unitary.sci');
-->scicos_unitary();
[...]
-->
You can also choose to run only a subset of all available tests using instructions like:
--> scicos_unitary('Linear'); // run only the 'Linear' palette
3.7. How to use generated files to debug errors or inconsistencies?
If one or several tests fail, the files generated during the simulation can provide useful debug information. The following pairs of files can easily be compared in a visual diff tool such as meld or kdiff3 under Linux, or WinMerge under Windows:

<TEST_NAME>.<OS>.out / <TEST_NAME>.<OS>.out.ref
These files should be the first ones to be compared: they make it easy to notice where and when the first differences or errors occur, and whether they propagate or not.

<TEST_NAME>.<OS>.log / <TEST_NAME>.<OS>.log.ref
Console logs can give useful debug information and hints, especially thanks to the display of two important structures, scs_m and Info:
- scs_m, obtained after a call to the load function, contains all the details about the architecture and the internals of the diagram.
- Info, an initially empty list updated by the scicos_simulate function, provides everything concerning the simulation state.

<TEST_NAME>.<OS>.err / <TEST_NAME>.<OS>.err.ref
Error logs can also be useful when a simulation could not be run or a fatal error occurred.
3.8. How to add a new test to the existing suite?
Adding a new test to the suite is relatively easy:
1. Build your diagram as if it were a general-purpose one. You can even include scopes for debug purposes: simulations are launched in No Window mode (-nw), so scopes stay hidden during the test process.
2. Add a Write to output file block (from the Sinks palette) to the diagram, feeding it with all the outputs you want to log for comparison between the two versions under test. Use a Mux block (from the Branching palette) to concatenate signals when needed, and do not forget to set the right Input size setting in the Write to output file parameters.
3. Just in case, launch the simulation manually using the "Simulate/Run" menu item to check that everything works correctly (then do not forget to delete the file generated by the Write to output file block, to keep a clean environment).