
twister: ztest: short test case names on --no-detailed-test-id

Extend the `--no-detailed-test-id` command line option: in addition to its
current behavior of excluding the configuration path prefix from a test
Suite name, also don't prefix each Ztest Case name with its Scenario name.

For example: a 'kernel.common.timing' Scenario name, the same Suite name,
and a 'sleep.usleep' test Case (where 'sleep' is the Ztest suite name
and 'usleep' is the Ztest test name).

This way both TestSuite and TestCase names follow the same principle:
no parent object name prefix.

There is no information loss in Twister reports with this naming:
a TestSuite is the container object for its TestCases, and the TestSuite
keeps its configuration path as a property.
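The naming rule above can be sketched as a small standalone helper mirroring the `TestSuite.compose_case_name_()` logic this commit adds to `scripts/pylib/twister/twisterlib/testsuite.py`; the free-function form and parameter names here are illustrative, not the exact Twister API:

```python
def compose_case_name(scenario_id: str, tc_name: str, detailed_test_id: bool) -> str:
    """Compose a Twister TestCase name.

    With --detailed-test-id (the default), a Ztest case name is prefixed
    with its Test Scenario identifier; with --no-detailed-test-id only the
    Ztest '<suite>.<test>' components remain.
    """
    return f"{scenario_id}.{tc_name}" if detailed_test_id else tc_name


# Default (detailed) naming keeps the Scenario prefix:
print(compose_case_name("kernel.common.timing", "sleep.usleep", True))
# --no-detailed-test-id drops it, leaving only the Ztest components:
print(compose_case_name("kernel.common.timing", "sleep.usleep", False))
```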

Signed-off-by: Dmitrii Golovanov <dmitrii.golovanov@intel.com>
Branch: pull/82958/head
Author: Dmitrii Golovanov (7 months ago), committed by Benjamin Cabé
Commit: b69a8d1deb
Changed files (lines changed):
  1. doc/develop/test/twister.rst (18)
  2. doc/releases/release-notes-4.1.rst (9)
  3. scripts/pylib/twister/twisterlib/environment.py (34)
  4. scripts/pylib/twister/twisterlib/harness.py (4)
  5. scripts/pylib/twister/twisterlib/runner.py (22)
  6. scripts/pylib/twister/twisterlib/testinstance.py (5)
  7. scripts/pylib/twister/twisterlib/testplan.py (14)
  8. scripts/tests/twister/test_harness.py (60)
  9. scripts/tests/twister/test_runner.py (57)
  10. scripts/tests/twister_blackbox/test_data/tests/dummy/agnostic/group1/subgroup1/test_data.yaml (1)
  11. scripts/tests/twister_blackbox/test_data/tests/dummy/agnostic/group1/subgroup2/test_data.yaml (1)
  12. scripts/tests/twister_blackbox/test_data/tests/dummy/agnostic/group2/test_data.yaml (4)
  13. scripts/tests/twister_blackbox/test_output.py (8)
  14. scripts/tests/twister_blackbox/test_printouts.py (219)

doc/develop/test/twister.rst (18)

@@ -261,8 +261,7 @@ A Test Suite is a collection of Test Cases which are intended to be used to test
a software program to ensure it meets certain requirements. The Test Cases in a
Test Suite are either related or meant to be executed together.
The name of each Test Scenario needs to be unique in the context of the overall
test application and has to follow basic rules:
Test Scenario, Test Suite, and Test Case names must follow to these basic rules:
#. The format of the Test Scenario identifier shall be a string without any spaces or
special characters (allowed characters: alphanumeric and [\_=]) consisting
@@ -272,7 +271,8 @@ test application and has to follow basic rules:
subsection names delimited with a dot (``.``). For example, a test scenario
that covers semaphores in the kernel shall start with ``kernel.semaphore``.
#. All Test Scenario identifiers within a ``testcase.yaml`` file need to be unique.
#. All Test Scenario identifiers within a Test Configuration (``testcase.yaml`` file)
need to be unique.
For example a ``testcase.yaml`` file covering semaphores in the kernel can have:
* ``kernel.semaphore``: For general semaphore tests
@@ -295,6 +295,18 @@ test application and has to follow basic rules:
Test Case name, for example: ``debug.coredump.logging_backend``.
The ``--no-detailed-test-id`` command line option modifies the above rules in this way:
#. A Test Suite name has only ``<Test Scenario identifier>`` component.
Its Application Project path can be found in ``twister.json`` report as ``path:`` property.
#. With short Test Suite names in this mode, all corresponding Test Scenario names
must be unique for the Twister execution scope.
#. **Ztest** Test Case names have only Ztest components ``<Ztest suite name>.<Ztest test name>``.
Its parent Test Suite name equals to the corresponding Test Scenario identifier.
The following is an example test configuration with a few options that are
explained in this document.

doc/releases/release-notes-4.1.rst (9)

@@ -118,6 +118,15 @@ Build system and Infrastructure
them can use the :zephyr_file:`scripts/utils/twister_to_list.py` script to
automatically migrate Twister configuration files.
* Twister
* Test Case names for Ztest now include Ztest suite name, so the resulting identifier has
three sections and looks like: ``<test_scenario_name>.<ztest_suite_name>.<ztest_name>``.
These extended identifiers are used in log output, twister.json and testplan.json,
as well as for ``--sub-test`` command line parameters (:github:`80088`).
* The ``--no-detailed-test-id`` command line option also shortens Ztest Test Case names excluding
its Test Scenario name prefix which is the same as the parent Test Suite id (:github:`82302`).
Drivers and Sensors
*******************

scripts/pylib/twister/twisterlib/environment.py (34)

@@ -148,12 +148,10 @@ Artificially long but functional example:
test_plan_report_xor.add_argument("--list-tests", action="store_true",
help="""List of all sub-test functions recursively found in
all --testsuite-root arguments. Note different sub-tests can share
the same test scenario identifier (section.subsection)
and come from different directories.
The output is flattened and reports --sub-test names only,
not their directories. For instance net.socket.getaddrinfo_ok
and net.socket.fd_set belong to different directories.
all --testsuite-root arguments. The output is flattened and reports detailed
sub-test names without their directories.
Note: sub-test names can share the same test scenario identifier prefix
(section.subsection) even if they are from different test projects.
""")
test_plan_report_xor.add_argument("--test-tree", action="store_true",
@@ -264,9 +262,11 @@ Artificially long but functional example:
functions. Sub-tests are named by:
'section.subsection_in_testcase_yaml.ztest_suite.ztest_without_test_prefix'.
Example_1: 'kernel.fifo.fifo_api_1cpu.fifo_loop' where 'kernel.fifo' is a test scenario
name (section.subsection) and 'fifo_api_1cpu.fifo_loop' is
a Ztest suite_name.test_name identificator.
name (section.subsection) and 'fifo_api_1cpu.fifo_loop' is a Ztest 'suite_name.test_name'.
Example_2: 'debug.coredump.logging_backend' is a standalone test scenario name.
Note: This selection mechanism works only for Ztest suite and test function names in
the source files which are not generated by macro-substitutions.
Note: With --no-detailed-test-id use only Ztest names without scenario name.
""")
parser.add_argument(
@@ -578,15 +578,21 @@ structure in the main Zephyr tree: boards/<vendor>/<board_name>/""")
parser.add_argument(
'--detailed-test-id', action='store_true',
help="Include paths to tests' locations in tests' names. Names will follow "
"PATH_TO_TEST/SCENARIO_NAME schema "
"e.g. samples/hello_world/sample.basic.helloworld")
help="Compose each test Suite name from its configuration path (relative to root) and "
"the appropriate Scenario name using PATH_TO_TEST_CONFIG/SCENARIO_NAME schema. "
"Also (for Ztest only), prefix each test Case name with its Scenario name. "
"For example: 'kernel.common.timing' Scenario with test Suite name "
"'tests/kernel/sleep/kernel.common.timing' and 'kernel.common.timing.sleep.usleep' "
"test Case (where 'sleep' is its Ztest suite name and 'usleep' is Ztest test name.")
parser.add_argument(
"--no-detailed-test-id", dest='detailed_test_id', action="store_false",
help="Don't put paths into tests' names. "
"With this arg a test name will be a scenario name "
"e.g. sample.basic.helloworld.")
help="Don't prefix each test Suite name with its configuration path, "
"so it is the same as the appropriate Scenario name. "
"Also (for Ztest only), don't prefix each Ztest Case name with its Scenario name. "
"For example: 'kernel.common.timing' Scenario name, the same Suite name, "
"and 'sleep.usleep' test Case (where 'sleep' is its Ztest suite name "
"and 'usleep' is Ztest test name.")
# Include paths in names by default.
parser.set_defaults(detailed_test_id=True)

scripts/pylib/twister/twisterlib/harness.py (4)

@@ -770,7 +770,7 @@ class Test(Harness):
for ts_name_ in ts_names:
if self.started_suites[ts_name_]['count'] < (0 if phase == 'TS_SUM' else 1):
continue
tc_fq_id = f"{self.id}.{ts_name_}.{tc_name}"
tc_fq_id = self.instance.compose_case_name(f"{ts_name_}.{tc_name}")
if tc := self.instance.get_case_by_name(tc_fq_id):
if self.trace:
logger.debug(f"On {phase}: Ztest case '{tc_name}' matched to '{tc_fq_id}")
@@ -779,7 +779,7 @@ class Test(Harness):
f"On {phase}: Ztest case '{tc_name}' is not known"
f" in {self.started_suites} running suite(s)."
)
tc_id = f"{self.id}.{tc_name}"
tc_id = self.instance.compose_case_name(tc_name)
return self.instance.get_case_or_create(tc_id)
def start_suite(self, suite_name):

scripts/pylib/twister/twisterlib/runner.py (22)

@@ -1186,12 +1186,8 @@ class ProjectBuilder(FilterBuilder):
return symbol_name
def determine_testcases(self, results):
yaml_testsuite_name = self.instance.testsuite.id
logger.debug(f"Determine test cases for test suite: {yaml_testsuite_name}")
logger.debug(f"Determine test cases for test suite: {self.instance.testsuite.id}")
logger.debug(
f"Test instance {self.instance.name} already has {len(self.instance.testcases)} cases."
)
new_ztest_unit_test_regex = re.compile(r"z_ztest_unit_test__([^\s]+?)__([^\s]*)")
detected_cases = []
@@ -1220,9 +1216,14 @@ class ProjectBuilder(FilterBuilder):
f"not present in: {self.instance.testsuite.ztest_suite_names}"
)
test_func_name = m_[2].replace("test_", "", 1)
testcase_id = f"{yaml_testsuite_name}.{new_ztest_suite}.{test_func_name}"
testcase_id = self.instance.compose_case_name(
f"{new_ztest_suite}.{test_func_name}"
)
detected_cases.append(testcase_id)
logger.debug(
f"Test instance {self.instance.name} already has {len(self.instance.testcases)} cases."
)
if detected_cases:
logger.debug(f"Detected Ztest cases: [{', '.join(detected_cases)}] in {elf_file}")
tc_keeper = {
@@ -1232,16 +1233,17 @@ class ProjectBuilder(FilterBuilder):
self.instance.testcases.clear()
self.instance.testsuite.testcases.clear()
# When the old regex-based test case collection is fully deprecated,
# this will be the sole place where test cases get added to the test instance.
# Then we can further include the new_ztest_suite info in the testcase_id.
for testcase_id in detected_cases:
testcase = self.instance.add_testcase(name=testcase_id)
self.instance.testsuite.add_testcase(name=testcase_id)
# Keep previous statuses and reasons
tc_info = tc_keeper.get(testcase_id, {})
if not tc_info and self.trace:
# Also happens when Ztest uses macroses, eg. DEFINE_TEST_VARIANT
logger.debug(f"Ztest case '{testcase_id}' discovered for "
f"'{self.instance.testsuite.source_dir_rel}' "
f"with {list(tc_keeper)}")
testcase.status = tc_info.get('status', TwisterStatus.NONE)
testcase.reason = tc_info.get('reason')

scripts/pylib/twister/twisterlib/testinstance.py (5)

@@ -1,6 +1,6 @@
# vim: set syntax=python ts=4 :
#
# Copyright (c) 2018-2022 Intel Corporation
# Copyright (c) 2018-2024 Intel Corporation
# Copyright 2022 NXP
# Copyright (c) 2024 Arm Limited (or its affiliates). All rights reserved.
#
@@ -173,6 +173,9 @@ class TestInstance:
def __lt__(self, other):
return self.name < other.name
def compose_case_name(self, tc_name) -> str:
return self.testsuite.compose_case_name(tc_name)
def set_case_status_by_name(self, name, status, reason=None):
tc = self.get_case_or_create(name)
tc.status = status

scripts/pylib/twister/twisterlib/testplan.py (14)

@@ -1,7 +1,7 @@
#!/usr/bin/env python3
# vim: set syntax=python ts=4 :
#
# Copyright (c) 2018 Intel Corporation
# Copyright (c) 2018-2024 Intel Corporation
# Copyright (c) 2024 Arm Limited (or its affiliates). All rights reserved.
#
# SPDX-License-Identifier: Apache-2.0
@@ -346,9 +346,13 @@ class TestPlan:
def report(self):
if self.options.test_tree:
if not self.options.detailed_test_id:
logger.info("Test tree is always shown with detailed test-id.")
self.report_test_tree()
return 0
elif self.options.list_tests:
if not self.options.detailed_test_id:
logger.info("Test list is always shown with detailed test-id.")
self.report_test_list()
return 0
elif self.options.list_tags:
@@ -551,18 +555,18 @@ class TestPlan:
for _, ts in self.testsuites.items():
if ts.tags.intersection(tag_filter):
for case in ts.testcases:
testcases.append(case.name)
testcases.append(case.detailed_name)
else:
for _, ts in self.testsuites.items():
for case in ts.testcases:
testcases.append(case.name)
testcases.append(case.detailed_name)
if exclude_tag := self.options.exclude_tag:
for _, ts in self.testsuites.items():
if ts.tags.intersection(exclude_tag):
for case in ts.testcases:
if case.name in testcases:
testcases.remove(case.name)
if case.detailed_name in testcases:
testcases.remove(case.detailed_name)
return testcases
def add_testsuites(self, testsuite_filter=None):

scripts/pylib/twister/twisterlib/testsuite.py (25)

@@ -386,6 +386,10 @@ class TestCase(DisablePyTestCollectionMixin):
self.output = ""
self.freeform = False
@property
def detailed_name(self) -> str:
return TestSuite.get_case_name_(self.testsuite, self.name, detailed=True)
@property
def status(self) -> TwisterStatus:
return self._status
@@ -477,20 +481,31 @@ class TestSuite(DisablePyTestCollectionMixin):
'Harness config error: console harness defined without a configuration.'
)
@staticmethod
def get_case_name_(test_suite, tc_name, detailed=True) -> str:
return f"{test_suite.id}.{tc_name}" \
if test_suite and detailed and not test_suite.detailed_test_id else f"{tc_name}"
@staticmethod
def compose_case_name_(test_suite, tc_name) -> str:
return f"{test_suite.id}.{tc_name}" \
if test_suite and test_suite.detailed_test_id else f"{tc_name}"
def compose_case_name(self, tc_name) -> str:
return self.compose_case_name_(self, tc_name)
def add_subcases(self, data, parsed_subcases=None, suite_names=None):
testcases = data.get("testcases", [])
if testcases:
for tc in testcases:
self.add_testcase(name=f"{self.id}.{tc}")
self.add_testcase(name=self.compose_case_name(tc))
else:
if not parsed_subcases:
self.add_testcase(self.id, freeform=True)
else:
# only add each testcase once
for sub in set(parsed_subcases):
name = f"{self.id}.{sub}"
self.add_testcase(name)
for tc in set(parsed_subcases):
self.add_testcase(name=self.compose_case_name(tc))
if suite_names:
self.ztest_suite_names = suite_names

scripts/tests/twister/test_harness.py (60)

@@ -30,6 +30,7 @@ from twisterlib.harness import (
Test,
)
from twisterlib.statuses import TwisterStatus
from twisterlib.testsuite import TestSuite
from twisterlib.testinstance import TestInstance
GTEST_START_STATE = " RUN "
@@ -594,6 +595,7 @@ def test_get_harness(name):
TEST_DATA_7 = [
(
True,
"",
"Running TESTSUITE suite_name",
["suite_name"],
@@ -604,16 +606,18 @@ TEST_DATA_7 = [
TwisterStatus.NONE,
),
(
True,
"On TC_START: Ztest case 'testcase' is not known in {} running suite(s)",
"START - test_testcase",
[],
{},
{ 'test_id.testcase': { 'count': 1 } },
{ 'dummy.test_id.testcase': { 'count': 1 } },
TwisterStatus.STARTED,
True,
TwisterStatus.NONE
),
(
True,
"On TC_END: Ztest case 'example' is not known in {} running suite(s)",
"PASS - test_example in 0 seconds",
[],
@@ -624,6 +628,7 @@ TEST_DATA_7 = [
TwisterStatus.NONE,
),
(
True,
"On TC_END: Ztest case 'example' is not known in {} running suite(s)",
"SKIP - test_example in 0 seconds",
[],
@@ -634,6 +639,7 @@ TEST_DATA_7 = [
TwisterStatus.NONE,
),
(
True,
"On TC_END: Ztest case 'example' is not known in {} running suite(s)",
"FAIL - test_example in 0 seconds",
[],
@@ -644,21 +650,34 @@ TEST_DATA_7 = [
TwisterStatus.NONE,
),
(
"not a ztest and no state for test_id",
True,
"not a ztest and no state for dummy.test_id",
"START - test_testcase",
[],
{},
{ 'test_id.testcase': { 'count': 1 } },
{ 'dummy.test_id.testcase': { 'count': 1 } },
TwisterStatus.PASS,
False,
TwisterStatus.PASS,
),
(
"not a ztest and no state for test_id",
False,
"not a ztest and no state for dummy.test_id",
"START - test_testcase",
[],
{},
{ 'test_id.testcase': { 'count': 1 } },
{ 'testcase': { 'count': 1 } },
TwisterStatus.PASS,
False,
TwisterStatus.PASS,
),
(
True,
"not a ztest and no state for dummy.test_id",
"START - test_testcase",
[],
{},
{ 'dummy.test_id.testcase': { 'count': 1 } },
TwisterStatus.FAIL,
False,
TwisterStatus.FAIL,
@@ -667,12 +686,12 @@ TEST_DATA_7 = [
@pytest.mark.parametrize(
"exp_out, line, exp_suite_name, exp_started_suites, exp_started_cases, exp_status, ztest, state",
"detailed_id, exp_out, line, exp_suite_name, exp_started_suites, exp_started_cases, exp_status, ztest, state",
TEST_DATA_7,
ids=["testsuite", "testcase", "pass", "skip", "failed", "ztest pass", "ztest fail"],
ids=["testsuite", "testcase", "pass", "skip", "failed", "ztest pass", "ztest pass short id", "ztest fail"],
)
def test_test_handle(
tmp_path, caplog, exp_out, line,
tmp_path, caplog, detailed_id, exp_out, line,
exp_suite_name, exp_started_suites, exp_started_cases,
exp_status, ztest, state
):
@@ -682,24 +701,27 @@ def test_test_handle(
mock_platform.name = "mock_platform"
mock_platform.normalized_name = "mock_platform"
mock_testsuite = mock.Mock(id="id", testcases=[])
mock_testsuite.name = "mock_testsuite"
mock_testsuite = mock.Mock(id="dummy.test_id", testcases=[])
mock_testsuite.name = "dummy_suite/dummy.test_id"
mock_testsuite.harness_config = {}
mock_testsuite.ztest_suite_names = []
outdir = tmp_path / "gtest_out"
outdir.mkdir()
instance = TestInstance(
testsuite=mock_testsuite, platform=mock_platform, outdir=outdir
)
mock_testsuite.detailed_test_id = detailed_id
mock_testsuite.source_dir_rel = "dummy_suite"
mock_testsuite.compose_case_name.return_value = TestSuite.compose_case_name_(mock_testsuite, "testcase")
outdir = tmp_path / "ztest_out"
with mock.patch('twisterlib.testsuite.TestSuite.get_unique', return_value="dummy_suite"):
instance = TestInstance(
testsuite=mock_testsuite, platform=mock_platform, outdir=outdir
)
test_obj = Test()
test_obj.configure(instance)
test_obj.id = "test_id"
test_obj.id = "dummy.test_id"
test_obj.ztest = ztest
test_obj.status = state
test_obj.id = "test_id"
test_obj.started_cases = {}
# Act
test_obj.handle(line)

scripts/tests/twister/test_runner.py (57)

@@ -1562,17 +1562,31 @@ def test_projectbuilder_process(
TESTDATA_7 = [
(
True,
[
'z_ztest_unit_test__dummy_suite1_name__dummy_test_name1',
'z_ztest_unit_test__dummy_suite2_name__test_dummy_name2',
'no match'
],
[
'dummy.test_id.dummy_suite1_name.dummy_name1',
'dummy.test_id.dummy_suite2_name.dummy_name2'
]
),
(
False,
[
'z_ztest_unit_test__dummy_suite1_name__dummy_test_name1',
'z_ztest_unit_test__dummy_suite2_name__test_dummy_name2',
'no match'
],
[
('dummy_id.dummy_suite1_name.dummy_name1'),
('dummy_id.dummy_suite2_name.dummy_name2')
'dummy_suite1_name.dummy_name1',
'dummy_suite2_name.dummy_name2'
]
),
(
True,
[
'z_ztest_unit_test__dummy_suite2_name__test_dummy_name2',
'z_ztest_unit_test__bad_suite3_name_no_test',
@@ -1583,27 +1597,48 @@ TESTDATA_7 = [
'_ZN15foobarnamespaceL54z_ztest_unit_test__dummy_suite3_name__test_dummy_name6E',
],
[
('dummy_id.dummy_suite2_name.dummy_name2'),
('dummy_id.dummy_suite3_name.dummy_name4'),
('dummy_id.dummy_suite3_name.bad_name1E'),
('dummy_id.dummy_suite3_name.dummy_name5'),
('dummy_id.dummy_suite3_name.dummy_name6'),
'dummy.test_id.dummy_suite2_name.dummy_name2',
'dummy.test_id.dummy_suite3_name.dummy_name4',
'dummy.test_id.dummy_suite3_name.bad_name1E',
'dummy.test_id.dummy_suite3_name.dummy_name5',
'dummy.test_id.dummy_suite3_name.dummy_name6',
]
),
(
True,
[
'z_ztest_unit_test__dummy_suite2_name__test_dummy_name2',
'z_ztest_unit_test__bad_suite3_name_no_test',
'_ZN12_GLOBAL__N_1L54z_ztest_unit_test__dummy_suite3_name__test_dummy_name4E',
'_ZN12_GLOBAL__N_1L54z_ztest_unit_test__dummy_suite3_name__test_bad_name1E',
'_ZN12_GLOBAL__N_1L51z_ztest_unit_test_dummy_suite3_name__test_bad_name2E',
'_ZN12_GLOBAL__N_1L54z_ztest_unit_test__dummy_suite3_name__test_dummy_name5E',
'_ZN15foobarnamespaceL54z_ztest_unit_test__dummy_suite3_name__test_dummy_name6E',
],
[
'dummy_suite2_name.dummy_name2',
'dummy_suite3_name.dummy_name4',
'dummy_suite3_name.bad_name1E',
'dummy_suite3_name.dummy_name5',
'dummy_suite3_name.dummy_name6',
]
),
(
True,
['no match'],
[]
),
]
@pytest.mark.parametrize(
'symbols_names, added_tcs',
'detailed_id, symbols_names, added_tcs',
TESTDATA_7,
ids=['two hits, one miss', 'demangle', 'nothing']
ids=['two hits, one miss', 'two hits short id', 'demangle', 'demangle short id', 'nothing']
)
def test_projectbuilder_determine_testcases(
mocked_jobserver,
mocked_env,
detailed_id,
symbols_names,
added_tcs
):
@@ -1621,8 +1656,10 @@ def test_projectbuilder_determine_testcases(
instance_mock = mock.Mock()
instance_mock.testcases = []
instance_mock.testsuite.id = 'dummy_id'
instance_mock.testsuite.id = 'dummy.test_id'
instance_mock.testsuite.ztest_suite_names = []
instance_mock.testsuite.detailed_test_id = detailed_id
instance_mock.compose_case_name = mock.Mock(side_effect=iter(added_tcs))
pb = ProjectBuilder(instance_mock, mocked_env, mocked_jobserver)

scripts/tests/twister_blackbox/test_data/tests/dummy/agnostic/group1/subgroup1/test_data.yaml (1)

@@ -9,3 +9,4 @@ tests:
tags:
- agnostic
- subgrouped
- odd

scripts/tests/twister_blackbox/test_data/tests/dummy/agnostic/group1/subgroup2/test_data.yaml (1)

@@ -10,3 +10,4 @@ tests:
tags:
- agnostic
- subgrouped
- even

scripts/tests/twister_blackbox/test_data/tests/dummy/agnostic/group2/test_data.yaml (4)

@@ -6,4 +6,6 @@ tests:
- qemu_x86_64
integration_platforms:
- native_sim
tags: agnostic
tags:
- agnostic
- even

scripts/tests/twister_blackbox/test_output.py (8)

@@ -14,6 +14,7 @@ import pytest
import sys
import json
# pylint: disable=no-name-in-module
from conftest import ZEPHYR_BASE, TEST_DATA, testsuite_filename_mock, clear_log_in_test
from twisterlib.testplan import TestPlan
@@ -74,7 +75,12 @@ class TestOutput:
assert len(filtered_j) > 0, "No dummy tests found."
expected_start = os.path.relpath(TEST_DATA, ZEPHYR_BASE) if expect_paths else 'dummy.'
assert all([testsuite.startswith(expected_start)for _, testsuite, _ in filtered_j])
assert all([testsuite.startswith(expected_start) for _, testsuite, _ in filtered_j])
if expect_paths:
assert all([(tc_name.count('.') > 1) for _, _, tc_name in filtered_j])
else:
assert all([(tc_name.count('.') == 1) for _, _, tc_name in filtered_j])
def test_inline_logs(self, out_path):
test_platforms = ['qemu_x86', 'intel_adl_crb']

scripts/tests/twister_blackbox/test_printouts.py (219)

@@ -29,7 +29,7 @@ class TestPrintOuts:
TESTDATA_1 = [
(
os.path.join(TEST_DATA, 'tests', 'dummy', 'agnostic'),
['agnostic', 'subgrouped']
['agnostic', 'subgrouped', 'even', 'odd']
),
(
os.path.join(TEST_DATA, 'tests', 'dummy', 'device'),
@@ -47,13 +47,100 @@ class TestPrintOuts:
'dummy.agnostic.group2.a2_tests.assert2',
'dummy.agnostic.group2.a3_tests.assert1',
'dummy.agnostic.group2.a2_tests.assert3'
]
],
'--no-detailed-test-id',
''
),
(
os.path.join(TEST_DATA, 'tests', 'dummy', 'agnostic'),
[
'dummy.agnostic.group1.subgroup2.a1_2_tests.assert',
'dummy.agnostic.group2.a2_tests.assert1',
'dummy.agnostic.group2.a2_tests.assert2',
'dummy.agnostic.group2.a3_tests.assert1',
'dummy.agnostic.group2.a2_tests.assert3'
],
'--no-detailed-test-id',
'odd'
),
(
os.path.join(TEST_DATA, 'tests', 'dummy', 'agnostic'),
[],
'--no-detailed-test-id',
'odd even'
),
(
os.path.join(TEST_DATA, 'tests', 'dummy', 'agnostic'),
[
'dummy.agnostic.group1.subgroup1.a1_1_tests.assert',
'dummy.agnostic.group1.subgroup2.a1_2_tests.assert',
'dummy.agnostic.group2.a2_tests.assert1',
'dummy.agnostic.group2.a2_tests.assert2',
'dummy.agnostic.group2.a3_tests.assert1',
'dummy.agnostic.group2.a2_tests.assert3'
],
'--no-detailed-test-id',
'unknown_tag'
),
(
os.path.join(TEST_DATA, 'tests', 'dummy', 'agnostic'),
[
'dummy.agnostic.group1.subgroup1.a1_1_tests.assert',
'dummy.agnostic.group1.subgroup2.a1_2_tests.assert',
'dummy.agnostic.group2.a2_tests.assert1',
'dummy.agnostic.group2.a2_tests.assert2',
'dummy.agnostic.group2.a3_tests.assert1',
'dummy.agnostic.group2.a2_tests.assert3'
],
'--detailed-test-id',
''
),
(
os.path.join(TEST_DATA, 'tests', 'dummy', 'agnostic'),
[
'dummy.agnostic.group1.subgroup2.a1_2_tests.assert',
'dummy.agnostic.group2.a2_tests.assert1',
'dummy.agnostic.group2.a2_tests.assert2',
'dummy.agnostic.group2.a3_tests.assert1',
'dummy.agnostic.group2.a2_tests.assert3'
],
'--detailed-test-id',
'odd'
),
(
os.path.join(TEST_DATA, 'tests', 'dummy', 'agnostic'),
[],
'--detailed-test-id',
'odd even'
),
(
os.path.join(TEST_DATA, 'tests', 'dummy', 'agnostic'),
[
'dummy.agnostic.group1.subgroup1.a1_1_tests.assert',
'dummy.agnostic.group1.subgroup2.a1_2_tests.assert',
'dummy.agnostic.group2.a2_tests.assert1',
'dummy.agnostic.group2.a2_tests.assert2',
'dummy.agnostic.group2.a3_tests.assert1',
'dummy.agnostic.group2.a2_tests.assert3'
],
'--detailed-test-id',
'unknown_tag'
),
(
os.path.join(TEST_DATA, 'tests', 'dummy', 'device'),
[
'dummy.device.group.d_tests.assert'
]
],
'--no-detailed-test-id',
''
),
(
os.path.join(TEST_DATA, 'tests', 'dummy', 'device'),
[
'dummy.device.group.d_tests.assert'
],
'--detailed-test-id',
''
),
]
@@ -70,7 +157,79 @@ class TestPrintOuts:
' ├── dummy.agnostic.group2.a2_tests.assert1\n' \
' ├── dummy.agnostic.group2.a2_tests.assert2\n' \
' ├── dummy.agnostic.group2.a2_tests.assert3\n' \
' └── dummy.agnostic.group2.a3_tests.assert1\n'
' └── dummy.agnostic.group2.a3_tests.assert1\n',
'--no-detailed-test-id',
''
),
(
os.path.join(TEST_DATA, 'tests', 'dummy', 'agnostic'),
'Testsuite\n' \
'├── Samples\n' \
'└── Tests\n' \
' └── dummy\n' \
' └── agnostic\n' \
' ├── dummy.agnostic.group1.subgroup2.a1_2_tests.assert\n' \
' ├── dummy.agnostic.group2.a2_tests.assert1\n' \
' ├── dummy.agnostic.group2.a2_tests.assert2\n' \
' ├── dummy.agnostic.group2.a2_tests.assert3\n' \
' └── dummy.agnostic.group2.a3_tests.assert1\n',
'--no-detailed-test-id',
'odd'
),
(
os.path.join(TEST_DATA, 'tests', 'dummy', 'agnostic'),
'Testsuite\n' \
'├── Samples\n' \
'└── Tests\n' \
' └── dummy\n' \
' └── agnostic\n' \
' ├── dummy.agnostic.group1.subgroup1.a1_1_tests.assert\n' \
' ├── dummy.agnostic.group1.subgroup2.a1_2_tests.assert\n' \
' ├── dummy.agnostic.group2.a2_tests.assert1\n' \
' ├── dummy.agnostic.group2.a2_tests.assert2\n' \
' ├── dummy.agnostic.group2.a2_tests.assert3\n' \
' └── dummy.agnostic.group2.a3_tests.assert1\n',
'--detailed-test-id',
''
),
(
os.path.join(TEST_DATA, 'tests', 'dummy', 'agnostic'),
'Testsuite\n' \
'├── Samples\n' \
'└── Tests\n' \
' └── dummy\n' \
' └── agnostic\n' \
' ├── dummy.agnostic.group1.subgroup2.a1_2_tests.assert\n' \
' ├── dummy.agnostic.group2.a2_tests.assert1\n' \
' ├── dummy.agnostic.group2.a2_tests.assert2\n' \
' ├── dummy.agnostic.group2.a2_tests.assert3\n' \
' └── dummy.agnostic.group2.a3_tests.assert1\n',
'--detailed-test-id',
'odd'
),
(
os.path.join(TEST_DATA, 'tests', 'dummy', 'agnostic'),
'Testsuite\n' \
'├── Samples\n' \
'└── Tests\n',
'--detailed-test-id',
'odd even'
),
(
os.path.join(TEST_DATA, 'tests', 'dummy', 'agnostic'),
'Testsuite\n' \
'├── Samples\n' \
'└── Tests\n' \
' └── dummy\n' \
' └── agnostic\n' \
' ├── dummy.agnostic.group1.subgroup1.a1_1_tests.assert\n' \
' ├── dummy.agnostic.group1.subgroup2.a1_2_tests.assert\n' \
' ├── dummy.agnostic.group2.a2_tests.assert1\n' \
' ├── dummy.agnostic.group2.a2_tests.assert2\n' \
' ├── dummy.agnostic.group2.a2_tests.assert3\n' \
' └── dummy.agnostic.group2.a3_tests.assert1\n',
'--detailed-test-id',
'unknown_tag'
),
(
os.path.join(TEST_DATA, 'tests', 'dummy', 'device'),
@@ -79,7 +238,20 @@ class TestPrintOuts:
'└── Tests\n'
' └── dummy\n'
' └── device\n'
' └── dummy.device.group.d_tests.assert\n'
' └── dummy.device.group.d_tests.assert\n',
'--no-detailed-test-id',
''
),
(
os.path.join(TEST_DATA, 'tests', 'dummy', 'device'),
'Testsuite\n'
'├── Samples\n'
'└── Tests\n'
' └── dummy\n'
' └── device\n'
' └── dummy.device.group.d_tests.assert\n',
'--detailed-test-id',
''
),
]
@@ -128,15 +300,25 @@ class TestPrintOuts:
assert str(sys_exit.value) == '0'
@pytest.mark.parametrize(
'test_path, expected',
'test_path, expected, detailed_id, exclude_tags',
TESTDATA_2,
ids=[
'tests/dummy/agnostic',
'tests/dummy/agnostic no_detailed_id',
'tests/dummy/agnostic no_detailed_id excl_tag',
'tests/dummy/agnostic no_detailed_id excl_all_tags',
'tests/dummy/agnostic no_detailed_id no_excl_tag',
'tests/dummy/agnostic detailed_id',
'tests/dummy/agnostic detailed_id excl_tag',
'tests/dummy/agnostic detailed_id excl_all_tags',
'tests/dummy/agnostic detailed_id no_excl_tag',
'tests/dummy/device',
'tests/dummy/device detailed_id',
]
)
def test_list_tests(self, capfd, out_path, test_path, expected):
args = ['--outdir', out_path, '-T', test_path, '--list-tests']
def test_list_tests(self, capfd, out_path, test_path, expected, detailed_id, exclude_tags):
args = ['--outdir', out_path, '-T', test_path, '--list-tests', detailed_id]
for tag in exclude_tags.split():
args += ['--exclude-tag', tag]
with mock.patch.object(sys, 'argv', [sys.argv[0]] + args), \
pytest.raises(SystemExit) as sys_exit:
@@ -147,7 +329,8 @@ class TestPrintOuts:
sys.stderr.write(err)
printed_tests = [test.strip() for test in out.split('- ')[1:]]
printed_tests[-1] = printed_tests[-1].split('\n')[0]
if printed_tests:
printed_tests[-1] = printed_tests[-1].split('\n')[0]
assert all([test in printed_tests for test in expected])
assert all([test in expected for test in printed_tests])
@@ -155,15 +338,23 @@ class TestPrintOuts:
assert str(sys_exit.value) == '0'
@pytest.mark.parametrize(
'test_path, expected',
'test_path, expected, detailed_id, exclude_tags',
TESTDATA_3,
ids=[
'tests/dummy/agnostic',
'tests/dummy/agnostic no_detailed_id',
'tests/dummy/agnostic no_detailed_id excl_tag',
'tests/dummy/agnostic detailed_id',
'tests/dummy/agnostic detailed_id excl_tag',
'tests/dummy/agnostic detailed_id excl_all_tags',
'tests/dummy/agnostic detailed_id no_excl_tag',
'tests/dummy/device',
'tests/dummy/device detailed_id',
]
)
def test_tree(self, capfd, out_path, test_path, expected):
args = ['--outdir', out_path, '-T', test_path, '--test-tree']
def test_tree(self, capfd, out_path, test_path, expected, detailed_id, exclude_tags):
args = ['--outdir', out_path, '-T', test_path, '--test-tree', detailed_id]
for tag in exclude_tags.split():
args += ['--exclude-tag', tag]
with mock.patch.object(sys, 'argv', [sys.argv[0]] + args), \
pytest.raises(SystemExit) as sys_exit:
