path: root/scripts/lib/resulttool
2019-03-18  resulttool/manualexecution: Output the right test case id  (Mazliana)
We found that manualexecution does not capture the test suite value correctly when there is more than one test suite in the test cases. After verification we found that we should retrieve the full test case value <test_module.test_suite.test_case> from the oeqa/manual/ json file rather than splitting it into separate test_suite and test_case variables.

Signed-off-by: Mazliana <mazliana.mohamad@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
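A minimal sketch of the idea, assuming a hypothetical manual-test JSON layout in which each entry already carries the fully qualified dotted id (the field names here are illustrative, not the actual oeqa schema):

    import json

    def collect_testcase_ids(manual_json_path):
        """Return fully qualified test case ids such as
        '<test_module>.<test_suite>.<test_case>' without splitting them."""
        with open(manual_json_path) as f:
            testcases = json.load(f)
        # Assumed layout: a list of entries whose 'id' field already holds
        # the full dotted name; keep it verbatim rather than re-deriving
        # test_suite / test_case pieces from it.
        return [entry['id'] for entry in testcases]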
2019-03-18  resulttool/report: Enable roll-up report for a commit  (Yeoh Ee Peng)
Enable rolling up all the test results belonging to a commit and providing a roll-up report.

Signed-off-by: Yeoh Ee Peng <ee.peng.yeoh@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2019-03-07  scripts/resulttool: Enable manual result store and regression  (Yeoh Ee Peng)
To enable storing the testresults.json file from manualexecution, add the layers metadata to the configuration and add a "manual" map to resultutils.store_map. To enable regression analysis for manual results, add a "manual" map to resultutils.regression_map. Also add the compulsory configurations ('MACHINE', 'IMAGE_BASENAME') to manualexecution.

Signed-off-by: Yeoh Ee Peng <ee.peng.yeoh@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
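A sketch of what such mapping entries might look like; the keys listed here are illustrative assumptions, and the real definitions live in resultutils.py:

    # Illustrative only: mapping tables keyed by test type, listing which
    # configuration fields drive the storage layout and regression pairing.
    store_map = {
        "oeselftest": ['TEST_TYPE', 'TESTSERIES', 'MACHINE'],
        "manual":     ['TEST_TYPE', 'TEST_MODULE', 'MACHINE', 'IMAGE_BASENAME'],
    }

    regression_map = {
        "oeselftest": ['TEST_TYPE', 'MACHINE'],
        "manual":     ['TEST_TYPE', 'TEST_MODULE', 'IMAGE_BASENAME', 'MACHINE'],
    }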
2019-02-28  resulttool/regression: Ensure regression results are sorted  (Yeoh Ee Peng)
Sort the regression results to provide a more readable report.

Signed-off-by: Yeoh Ee Peng <ee.peng.yeoh@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2019-02-28  resulttool/store: Fix missing variable causing testresult corruption  (Richard Purdie)
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2019-02-27  resulttool/report: Ensure ptest results are sorted  (Richard Purdie)
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2019-02-27  resulttool/report: Ensure test suites with no results show up on the report  (Richard Purdie)
ptest suites with no results don't show up on the reports even though we have a duration for them. Fix this so the fact that they report no tests is visible.

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2019-02-27  resulttool/report: Handle missing metadata sections more cleanly  (Richard Purdie)
Currently some older results files cause the code to give tracebacks. Handle these missing sections more cleanly.

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
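One way to handle such missing sections defensively; a sketch under assumed field names, not the actual report code:

    def get_test_statuses(result_entry):
        # Older results files may lack the 'result' section or a per-test
        # 'status' field entirely; fall back to defaults instead of raising.
        statuses = {}
        for testcase, data in result_entry.get('result', {}).items():
            statuses[testcase] = data.get('status', 'UNKNOWN')
        return statuses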
2019-02-27  resulttool/store: Handle results files for multiple revisions  (Richard Purdie)
Currently we can't store results if the results files span multiple different build revisions. Remove this limitation by iterating over the revisions.

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
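A minimal sketch of the iteration idea, grouping result sets by their recorded revision before storing each group; the configuration key and the store callback are assumptions for illustration only:

    from collections import defaultdict

    def store_by_revision(results, store_one_revision):
        # Group result entries by the build revision recorded in their
        # configuration, then store each revision's group separately.
        by_revision = defaultdict(dict)
        for name, entry in results.items():
            revision = entry.get('configuration', {}).get('COMMIT', 'unknown')
            by_revision[revision][name] = entry
        for revision, group in by_revision.items():
            store_one_revision(revision, group)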
2019-02-25  resulttool/resultutils: Avoids tracebacks for missing logs  (Richard Purdie)
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2019-02-21  resulttool: Improvements to allow integration into the autobuilder  (Richard Purdie)
This is a combined patch of the various tweaks and improvements I made to resulttool:

* Avoid subprocess.run() as it's a Python 3.6 feature and we have autobuilder workers with 3.5.
* Avoid Python keywords as variable names.
* Simplify dict accesses using .get().
* Rename resultsutils -> resultutils to match the resultstool -> resulttool rename.
* Formalise the handling of "file_name" as "TESTSERIES", which the code will now add into the json configuration data, based on the directory name, if it's not present (see the sketch after this list).
* When there are no failed test cases, print something saying so instead of an empty table.
* Tweak the table headers in the report to be more readable (reference "Test Series" instead of file_id and "ID" instead of results_id).
* Improve/simplify the maximum string length handling.
* Merge the counts and percentage data into one table in the report, since printing two reports of the same data confuses the reader.
* Remove the confusing header in the regression report.
* Show matches, then regressions, then unmatched runs in the regression report; also remove chatty, unneeded output.
* Try harder to "pair" up matching configurations to reduce noise in the regression report.
* Abstract the "mapping" table concept used for pairing in the regression code into general code in resultutils.
* Create multiple mappings for results analysis, results storage and 'flattening' results data in a merge.
* Simplify the merge command to take a source and a destination, letting the destination be a directory or a file, removing the need for an output directory parameter.
* Add the 'IMAGE_PKGTYPE' and 'DISTRO' config options to the regression mappings.
* Have the store command place the testresults files in a layout derived from the mapping, making commits into the git repository for results storage more useful for simple comparison purposes.
* Set the oe-git-archive tag format appropriately for oeqa results storage (and simplify the commit messages closer to their defaults).
* Fix oe-git-archive to use the commit/branch data from the results file.
* Clean up the command option help to match the other changes.
* Follow the model of git branch/tag processing used by oe-build-perf-report and use that to read the data using git show, avoiding a branch change.
* Add a ptest summary to the report command.
* Update the tests to match the above changes.

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
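As an illustration of the "TESTSERIES" formalisation mentioned above, a sketch only; the helper name is hypothetical and the real logic lives in resultutils.py:

    import os

    def append_testseries(configuration, results_path):
        # If the json configuration data does not already carry a TESTSERIES
        # value, derive one from the directory the results file was found in.
        if 'TESTSERIES' not in configuration:
            configuration['TESTSERIES'] = os.path.basename(os.path.dirname(results_path))
        return configuration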
2019-02-21  scripts/resulttool: enable manual execution and result creation  (Mazliana)
Integrated the "manualexecution" operation into the resulttool scripts. The manual execution script is a helper script to execute all manual test cases from the command line; it presents the user guideline steps and the expected results. The last step asks the user to provide the execution result, with passed/failed/blocked/skipped as the input options. The result given is written to testresults.json, including any error log entered by the user and the configuration, if any. The test result json file is created using the OEQA library.

The configuration part is keyed in manually by the user. The script lets the user specify how many configurations they want to add, and for each one they define the required configuration name and value pair. From a QA perspective, "configuration" means the test environments and parameters used during QA setup before testing can be carried out. Examples of configurations: the image used for boot up, the host machine distro, the poky configuration, etc. The purpose of adding the configuration is to standardize the output test result format between automated and manual execution.

To use these scripts, first source the oe environment, then run the entry point script to see the help:

    $ resulttool

To execute manual test cases, run:

    $ resulttool manualexecution <manualjsonfile>

By default, testresults.json is stored in <build_dir>/tmp/log/manual/

[YOCTO #12651]

Signed-off-by: Mazliana <mazliana.mohamad@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
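A minimal sketch of the prompt-and-record loop described above; the names and prompts are illustrative, not the actual manualexecution implementation:

    def run_manual_case(case_id):
        # Ask the tester for a verdict on one manual test case and record it
        # in the per-test-case shape used by testresults.json.
        valid = {'p': 'PASSED', 'f': 'FAILED', 'b': 'BLOCKED', 's': 'SKIPPED'}
        answer = ''
        while answer not in valid:
            answer = input('Result for %s [p/f/b/s]: ' % case_id).strip().lower()
        entry = {'status': valid[answer], 'log': ''}
        if entry['status'] == 'FAILED':
            entry['log'] = input('Optional failure log: ')
        return {case_id: entry}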
2019-02-21  resulttool: enable merge, store, report and regression analysis  (Yeoh Ee Peng)
OEQA outputs test results into json files and these files are archived by the Autobuilder during QA releases. For example, each oe-selftest run by the Autobuilder for a different host distro generates a testresults.json file. These scripts were developed as test result tools to manage these testresults.json files.

Using the "store" operation, the user can store multiple testresults.json files as well as the pre-configured directories used to hold those files.

Using the "merge" operation, the user can merge multiple testresults.json files into a target file.

Using the "report" operation, the user can view the test result summary for all available testresults.json files inside an ordinary directory or a git repository.

Using the "regression-file" operation, the user can perform regression analysis on the testresults.json files specified. Using the "regression-dir" and "regression-git" operations, the user can perform regression analysis on a directory or a git repository respectively.

These resulttool operations expect the testresults.json file to use the json format below.

    {
        "<testresult_1>": {
            "configuration": {
                "<config_name_1>": "<config_value_1>",
                "<config_name_2>": "<config_value_2>",
                ...
                "<config_name_n>": "<config_value_n>",
            },
            "result": {
                "<testcase_namespace_1>": {
                    "status": "<PASSED or FAILED or ERROR or SKIPPED>",
                    "log": "<failure or error logging>"
                },
                "<testcase_namespace_2>": {
                    "status": "<PASSED or FAILED or ERROR or SKIPPED>",
                    "log": "<failure or error logging>"
                },
                ...
                "<testcase_namespace_n>": {
                    "status": "<PASSED or FAILED or ERROR or SKIPPED>",
                    "log": "<failure or error logging>"
                },
            }
        },
        ...
        "<testresult_n>": {
            "configuration": {
                "<config_name_1>": "<config_value_1>",
                "<config_name_2>": "<config_value_2>",
                ...
                "<config_name_n>": "<config_value_n>",
            },
            "result": {
                "<testcase_namespace_1>": {
                    "status": "<PASSED or FAILED or ERROR or SKIPPED>",
                    "log": "<failure or error logging>"
                },
                "<testcase_namespace_2>": {
                    "status": "<PASSED or FAILED or ERROR or SKIPPED>",
                    "log": "<failure or error logging>"
                },
                ...
                "<testcase_namespace_n>": {
                    "status": "<PASSED or FAILED or ERROR or SKIPPED>",
                    "log": "<failure or error logging>"
                },
            }
        },
    }

To use these scripts, first source the oe environment, then run the entry point script to see the help:

    $ resulttool

To store test results from oeqa automated tests:

    $ resulttool store <source_dir> <git_branch>

To merge multiple testresults.json files:

    $ resulttool merge <base_result_file> <target_result_file>

To produce a test report:

    $ resulttool report <source_dir>

To perform regression analysis on two result files:

    $ resulttool regression-file <base_result_file> <target_result_file>

To perform regression analysis on two directories:

    $ resulttool regression-dir <base_result_dir> <target_result_dir>

To perform regression analysis between two git branches:

    $ resulttool regression-git <source_dir> <base_branch> <target_branch>

[YOCTO #13012]
[YOCTO #12654]

Signed-off-by: Yeoh Ee Peng <ee.peng.yeoh@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
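A small sketch of consuming a testresults.json file in the format above, counting test case statuses across every result set; illustrative only, the real parsing lives in resultutils.py:

    import json
    from collections import Counter

    def summarise_results(path):
        # Walk the "<testresult>" -> "result" -> "<testcase>" layout shown
        # above and tally PASSED/FAILED/ERROR/SKIPPED statuses.
        with open(path) as f:
            testresults = json.load(f)
        counts = Counter()
        for result_set in testresults.values():
            for testcase, data in result_set.get('result', {}).items():
                counts[data.get('status', 'UNKNOWN')] += 1
        return counts

    # Example usage:
    # print(summarise_results('testresults.json'))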