Some tests, such as lttng-tools, are marginal and time out on the autobuilder
with the current 300s default. Increase the timeout to avoid this noise in the
ptest failures list.
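As an illustration, the change amounts to passing a larger timeout when the
test drives ptest-runner on the target; the value and call site below are
assumptions for the sketch, not the actual patch:

    from oeqa.runtime.case import OERuntimeTestCase

    class PtestRunnerTest(OERuntimeTestCase):

        def test_ptestrunner(self):
            # Use a timeout comfortably above the old 300s default so that
            # marginal ptests such as lttng-tools can finish.
            status, output = self.target.run('ptest-runner', timeout=450)
            self.assertEqual(status, 0, msg="ptest-runner failed:\n%s" % output)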
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 5fb902a52e35130af6b0735a087c709daa35655f)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
This allows ptest regressions to be spotted without turning them into hard
ptest failures (which would first require full ptest stability).
Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
The output is created in the current directory, because the json content
has no defined absolute path to WORKDIR as it would under bitbake.
Signed-off-by: Andrej Valek <andrej.valek@siemens.com>
Signed-off-by: Peter Marko <peter.marko@siemens.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
In multilib build configs, libraries can be installed in /usr/lib{32,64,x32},
so use libdir to specify the correct ptest directory in addition to the
default /usr/lib.
[YOCTO #12604]
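A minimal sketch of the idea, assuming libdir comes from the build's test
data (the helper name is hypothetical):

    def ptest_dirs(libdir):
        """Return candidate ptest base directories: the default /usr/lib
        plus the multilib libdir (e.g. /usr/lib64) when it differs."""
        dirs = ['/usr/lib']
        if libdir and libdir not in dirs:
            dirs.append(libdir)
        return dirs

    # e.g. ptest_dirs('/usr/lib64') -> ['/usr/lib', '/usr/lib64']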
Signed-off-by: Aníbal Limón <anibal.limon@linaro.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Add the missing 'import os' statement to the oeqa runtime ptest.py.
Signed-off-by: Stefan Kral <sk@typedivision.de>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
This can be useful with a more specific, targeted and robust set of ptest
packages; the benefit is that ptest regressions are caught as they happen
and can be more easily traced to the changes that caused them.
The existing AB ptest image is still expected to fail; my observation of
the AB runs is that the full set of ptests is not robust enough
(particularly around socket/network related tests) and sporadically fails
in random places. This could probably be addressed by making ptests exclusive
to a worker (i.e. no other workload runs at the same time as the ptests).
Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Currently, if a ptest produces neither PASS nor FAIL but simply errors out,
this is not caught or reported; I think some ptests may have silently
regressed because of this.
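A minimal sketch of the kind of check this implies, assuming the parsed
results map each section to a dict that only carries an 'exitcode' entry
when ptest-runner recorded an error exit (the key name is an assumption):

    def errored_sections(sections):
        """Return the ptest sections that errored out rather than
        producing PASS/FAIL results."""
        return [name for name, data in sections.items()
                if 'exitcode' in data]

    # e.g. errored_sections({'glib2': {'exitcode': '1'}, 'zlib': {}})
    # -> ['glib2']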
Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Since commit d6065f136f6d ("oeqa/logparser: Various misc cleanups"),
7b17274c30c6 in poky, the ptest OEQA test has been unable to detect failures
in any of the test results.
The reason is that the test result string changed from 'fail' to
'FAILED': the original mapping was removed as part of that commit, but
the code here still tries to match against the old string, resulting in
no matches, i.e. everything is treated as successful even when it
shouldn't be.
Update the OEQA ptest test to actually work again and report a failure
if there was one.
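A minimal sketch of the corrected matching, assuming the parser returns
results keyed section -> {testname: status} with statuses such as
'PASSED'/'FAILED'/'SKIPPED':

    def failed_ptests(results):
        """Collect the ptest subtests the log parser marked as FAILED."""
        failed = {}
        for section, tests in results.items():
            bad = [name for name, status in tests.items()
                   if status == 'FAILED']  # previously matched 'fail'
            if bad:
                failed[section] = bad
        return failed

    # e.g. failed_ptests({'dummytest': {'check_True_is_True': 'FAILED',
    #                                   'test_basic': 'FAILED'}})
    # -> {'dummytest': ['check_True_is_True', 'test_basic']}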
Note that the ptest test is marked as @expectedfail, so even though
this test now starts to fail again, the overall OEQA test result is
not affected - but at least the overall OEQA test summary reflects
the correct status again.
In other words:
RESULTS:
RESULTS - ping.PingTest.test_ping: PASSED (0.26s)
RESULTS - ptest.PtestRunnerTest.test_ptestrunner: PASSED (4.05s)
RESULTS - ssh.SSHTest.test_ssh: PASSED (0.60s)
SUMMARY:
image-debug () - Ran 3 tests in 4.937s
correctly changes to:
AssertionError: Failed ptests:
{'dummytest': ['check_True_is_True', 'test_basic']}
RESULTS:
RESULTS - ping.PingTest.test_ping: PASSED (0.24s)
RESULTS - ssh.SSHTest.test_ssh: PASSED (0.56s)
RESULTS - ptest.PtestRunnerTest.test_ptestrunner: EXPECTEDFAIL (4.13s)
SUMMARY:
image-debug () - Ran 3 tests in 4.937s
instead and we see a summary of the ptest subtests that failed.
Signed-off-by: André Draszik <git@andred.net>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
This adds SPDX license headers in place of the wide assortment of things
currently in our script headers. We default to GPL-2.0-only, except for the
oeqa code, which was for the most part clearly submitted and marked as MIT,
and some scripts which had the "or later" GPL versioning.
The patch also drops other obsolete bits of file headers where they were
encountered, such as editor modelines, obsolete maintainer information or
the phrase "All rights reserved", which is now obsolete and not required in
copyright headers (in this case it's actually confusing for licensing, as
all rights were not reserved).
More work is needed for OE-Core but this takes care of the bulk of the scripts
and meta/lib directories.
The top level LICENSE files are tweaked to match the new structure and the
SPDX naming.
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
These IDs refer to Testopia, which we're no longer using. We now use the test
names to definitively reference tests, so the IDs can be dropped, along with
their supporting code.
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Currently, processes being killed by the OOM killer may not be spotted by
ptest-runner. After the tests complete, check the logs and report any such
kills. This ensures the user is aware of OOM conditions affecting the
ptest results.
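A sketch of the check, assuming OOM kills show up in the target's kernel
log as "Killed process" lines (the exact pattern and command are
assumptions):

    def oom_failures(target):
        """Return an error message if the target's kernel log shows the
        OOM killer fired; target is an oeqa runtime target whose run()
        returns (status, output)."""
        status, output = target.run('dmesg | grep -i "killed process"')
        if output:
            return "ERROR: Processes were killed by the OOM Killer:\n%s\n" % output
        return ""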
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Get rid of further unneeded code complications:
* value mappings where we could just use the values directly
* ftools, when we can easily write files ourselves
* test result status filtering we don't use
* variables overwriting module imports
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Merge the results handling into the ptest log parser as a separate
method.
Drop the weird "pass.skip.fail." prefix to the results filename; it's
just bizarre.
Drop the code turning a list into a regex and then searching the regex for
an item; "x in y" is perfectly capable (see the sketch below).
Use a dict, sort the keys as needed and drop the list sorting code.
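For illustration, the regex simplification mentioned above amounts to the
following (the values are made up):

    import re

    statuses = ['PASS', 'FAIL', 'SKIP']

    # Dropped pattern: build a regex out of the list, then search it.
    found = bool(re.search('|'.join(statuses), 'FAIL'))

    # Replacement: plain membership testing does the same job.
    found = 'FAIL' in statuses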
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Allow parsing of the ptest duration, exit code and timeout keywords
from the logs, returning data for each section.
Also include the logs broken out per section.
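A sketch of the per-section markers involved, based on ptest-runner's log
format (the BEGIN:/END:/DURATION: lines are real markers; the exact ERROR
and TIMEOUT patterns are assumptions):

    import re

    section_regexes = {
        'begin':    re.compile(r"^BEGIN: .*/(.+)/ptest"),
        'end':      re.compile(r"^END: .*/(.+)/ptest"),
        'duration': re.compile(r"^DURATION: (.+)"),
        'exitcode': re.compile(r"^ERROR: Exit status is (.+)"),
        'timeout':  re.compile(r"^TIMEOUT: .*"),
    }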
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Now that we have a dedicated ptest parser, merge in the remaining
ptest-specific pieces to further clarify and simplify the code, getting to
a point where we can consider extending/enhancing it.
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Rename the parser to be ptest specific and apply some further cleanups
to the code to simplify and clarify what it's doing.
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
logparser is only used by ptest. It's slightly overcomplicated, as it was
intended to be reusable but wasn't. Simplify it, as a dedicated parser is
likely to be more readable and maintainable.
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Some tests end up without a section; avoid tracebacks from trying to use
None as a string in that case.
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Add the OEHasPackage decorator to a variety of tests so they determine
automatically if they should run against a given image.
To ensure tests can do this we need to move target operations such
as scp commands into the tests and out of the class startup/teardown.
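Usage is along these lines (a sketch; the import path follows the current
oeqa layout):

    from oeqa.runtime.case import OERuntimeTestCase
    from oeqa.runtime.decorator.package import OEHasPackage

    class PtestRunnerTest(OERuntimeTestCase):

        # Skipped automatically unless the image contains ptest-runner.
        @OEHasPackage(['ptest-runner'])
        def test_ptestrunner(self):
            status, output = self.target.run('ptest-runner')
            self.assertEqual(status, 0, msg=output)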
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
This allows the ptest results from ptest-runner, run in an image, to be
transferred over to the resulting json results output.
Each test is given a pass/skip/fail so individual results can be monitored,
and the raw log output from ptest-runner is also dumped into the results
json file, as this makes after-the-fact debugging much easier.
Currently the log output is not split up per test, but that would make a
good future enhancement.
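The shape of the json payload is roughly as follows (the key names are
illustrative assumptions, not the exact ones used; sanitize_name is the
name cleanup sketched further down):

    def ptest_extras(results, raw_log, sanitize_name):
        """Flatten parsed ptest results (section -> {test: status}) into
        the flat key/value form used by the json test results."""
        extras = {}
        for section, tests in results.items():
            for name, status in tests.items():
                key = 'ptestresult.%s.%s' % (section, sanitize_name(name))
                extras[key] = {'status': status}
        extras['ptestresult.rawlogs'] = {'log': raw_log}
        return extras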
I attempted to implement this as python subTests; however, that failed as
the output was too confusing: subTests don't support any kind of log
output handling, subTest successes aren't logged, and it was making things
far more complex than they needed to be.
We mark ptest-runner as "EXPECTEDFAILURE" since it's unlikely every ptest
will pass currently and we don't want that to fail the whole image test run.
It's assumed there would be later analysis of the json output to determine
regressions. We do have to change the test runner code so that
'unexpectedsuccess' is not a failure.
Also, the test names are manipulated to remove spaces and brackets, with
"_" used as a replacement and any duplicate occurrences truncated (sketched
below).
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
That's the whole point, isn't it? Previously this testcase succeeded
even if some of the underlying on-target tests failed; the only way
to find out whether anything was wrong was to manually inspect the logs.
Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Only run the test when ptest-runner is available.
Previously the test would execute only when all available ptests
for packages in the image were installed; some of those tests may
be broken, never finish, take a very long time, or simply be irrelevant
to a user who wants to check the ptests of only a few specific packages
and does so by listing them explicitly via IMAGE_INSTALL_append or similar.
The presence of ptest-runner means there is at least one ptest package
installed, as ptest packages pull it in via a class dependency;
ptest-runner is not generally installed otherwise.
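A minimal sketch of the gating check (the exact command may differ):

    def ptest_runner_available(target):
        """True if ptest-runner is installed on the target; its presence
        implies at least one ptest package pulled it in."""
        status, _ = target.run('which ptest-runner')
        return status == 0

    # The runtime test can then call self.skipTest(...) when this
    # returns False.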
Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
It works now.
[YOCTO #11547]
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>