path: root/meta/lib/oeqa/runtime/cases/ptest.py
Age | Commit message | Author
2018-12-06 | oeqa/runtime/ptest: Inject results+logs into stored json results file | Richard Purdie
This allows the ptest results from ptest-runner, run in an image, to be transferred over to the resulting json results output. Each test is given a pass/skip/fail so individual results can be monitored, and the raw log output from ptest-runner is also dumped into the results json file, since this makes after-the-fact debugging much easier. Currently the log output is not split up per test, but that would make a good future enhancement.

I attempted to implement this as python subTests; however, that failed as the output was too confusing, subTests don't support any kind of log output handling, subTest successes aren't logged, and it was making things far more complex than they needed to be.

We mark ptest-runner as "EXPECTEDFAILURE" since it's unlikely every ptest will currently pass and we don't want that to fail the whole image test run. It's assumed there would be later analysis of the json output to determine regressions. We do have to change the test runner code so that 'unexpectedsuccess' is not a failure.

Also, the test names are manipulated to replace spaces and brackets with "_", with any duplicate occurrences truncated.

(From OE-Core rev: a13e088942e2a3c3521e98954a394e61a15234e8)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
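A minimal sketch of the two ideas described above (the helper names, json keys and status strings here are illustrative assumptions, not the actual code in ptest.py or the real results-file schema):

    import json
    import re

    def sanitize_test_name(name):
        # Replace spaces and brackets with "_", then collapse runs of "_"
        # so duplicate replacements are truncated to a single one.
        cleaned = re.sub(r"[ \(\)\[\]]", "_", name)
        return re.sub(r"_+", "_", cleaned)

    def build_ptest_results(per_test_status, raw_log):
        # One pass/skip/fail entry per ptest, plus the full ptest-runner
        # log so failures can be debugged after the fact.
        statuses = {sanitize_test_name(n): s for n, s in per_test_status.items()}
        return {"ptest_status": statuses, "ptest_rawlog": raw_log}

    block = build_ptest_results({"glib-2.0 (setup)": "PASS",
                                 "busybox": "FAIL"},
                                "ptest-runner log output ...")
    print(json.dumps(block, indent=2))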
2018-01-05 | runtime/cases/ptest.py: fail when ptests fail on target | Alexander Kanavin
That's the whole point, isn't it? Previously this test case succeeded even if some of the underlying on-target tests failed; the only way to find out that anything was wrong was to manually inspect the logs.

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
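A minimal sketch of the failure check, assuming a parsed mapping of ptest name to status (a hypothetical structure; the parsing and assertion in ptest.py may differ):

    def assert_no_ptest_failures(results):
        # results: mapping of ptest (suite) name -> "PASS"/"FAIL"/"SKIP".
        failed = sorted(name for name, status in results.items() if status == "FAIL")
        assert not failed, "Failed ptests: %s" % ", ".join(failed)

    # Passes silently here; raises AssertionError if any entry is "FAIL".
    assert_no_ptest_failures({"zlib": "PASS", "openssl": "SKIP"})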
2018-01-05 | runtime/cases/ptest.py: do not require ptest-pkgs in IMAGE_FEATURES; run only when ptest-runner is available | Alexander Kanavin
Previously the test would execute only when all available ptests for packages in the image were installed; some of those tests may be broken, never finish, take a very long time, or simply be irrelevant to a user who wants to check the ptests of only a few specific packages and does so by listing them explicitly via IMAGE_INSTALL_append or similar.

The presence of ptest-runner means there is at least one ptest package installed, as ptest packages pull it in via a class dependency; ptest-runner is not generally installed otherwise.

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
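A minimal sketch of that gating logic, assuming an OEQA-style target object whose run() returns an (exit status, output) tuple; the FakeTarget class is purely illustrative and not part of ptest.py:

    class FakeTarget:
        # Stand-in for the OEQA runtime target; run() mimics the usual
        # (exit status, output) return convention.
        def run(self, cmd):
            return (0, "/usr/bin/ptest-runner")

    def ptest_runner_available(target):
        # ptest-runner being installed implies at least one ptest package
        # is present, since ptest packages pull it in via a class dependency.
        status, _output = target.run("which ptest-runner")
        return status == 0

    if not ptest_runner_available(FakeTarget()):
        print("Skipping ptest run: ptest-runner not installed on target")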
2017-08-23 | runtime/cases/_ptest.py: rename it to ptest.py | Robert Yang
It works now.

[YOCTO #11547]

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>