[Notes] [Git][BuildStream/buildstream][tristan/use-tox] 6 commits: tests/loader/junctions.py: Use Result checking APIs instead of manually



Tristan Van Berkom pushed to branch tristan/use-tox at BuildStream / buildstream

Commits:

6 changed files:

  • .gitlab-ci.yml
  • CONTRIBUTING.rst
  • MANIFEST.in
  • tests/loader/junctions.py
  • tests/loader/junctions/invalid/missing-element.bst
  • tox.ini

Changes:

  • .gitlab-ci.yml
    @@ -13,7 +13,7 @@ stages:
     variables:
       PYTEST_ADDOPTS: "--color=yes"
       INTEGRATION_CACHE: "${CI_PROJECT_DIR}/cache/integration-cache"
    -  TEST_COMMAND: 'python3 setup.py test --index-url invalid://uri --addopts --integration'
    +  TEST_COMMAND: 'tox -- --integration'
     
     #####################################################
     #                  Prepare stage                    #
    @@ -72,6 +72,13 @@ source_dist:
       - cd dist && ./unpack.sh
       - cd buildstream
     
    +  # Install tox
    +  - pip3 install tox
    +  - pip3 install tox-venv
    +
    +  # Install venv
    +  - apt-get install python3-venv || true
    +
       script:
       - useradd -Um buildstream
       - chown -R buildstream:buildstream .
    @@ -134,6 +141,10 @@ tests-unix:
         - dnf mark install fuse-libs
         - dnf erase -y bubblewrap ostree
     
    +    # Install tox
    +    - pip3 install tox
    +    - pip3 install tox-venv
    +
         # Since the unix platform is required to run as root, no user change required
         - ${TEST_COMMAND}
     
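
    The gist of the change: the CI now installs tox (plus the tox-venv
    plugin and, where apt-get exists, python3-venv) and runs the suite
    through it. Everything after the `--` in the new TEST_COMMAND is
    forwarded by tox to pytest via {posargs} in the new tox.ini. A
    minimal sketch of what the new steps amount to, assuming pip3 and
    tox are on PATH as on the CI images (not an excerpt from the
    repository):

        # A sketch mirroring the new CI steps on a local machine;
        # assumes pip3 / tox are available, as on the CI images.
        import subprocess

        # The CI installs tox and the tox-venv plugin first
        subprocess.run(["pip3", "install", "tox", "tox-venv"], check=True)

        # Equivalent of the new TEST_COMMAND: everything after `--`
        # reaches pytest through {posargs} in tox.ini (shown below)
        subprocess.run(["tox", "--", "--integration"], check=True)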
    

  • CONTRIBUTING.rst
    @@ -1478,48 +1478,76 @@ Don't get lost in the docs if you don't need to, follow existing examples instead
     
     Running tests
     ~~~~~~~~~~~~~
    -To run the tests, just type::
    +We use `tox <https://tox.readthedocs.org/>`_ as a frontend to run the tests which
    +are implemented using `pytest <https://pytest.org/>`_. To run the tests, simply
    +navigate to the toplevel directory of your buildstream checkout and run::
     
    -  ./setup.py test
    +  tox
     
    -At the toplevel.
    +By default, the test suite will be run against every supported python version
    +found on your host. If you have multiple python versions installed, you may
    +want to run tests against only one version and you can do that using the ``-e``
    +option when running tox::
     
    -When debugging a test, it can be desirable to see the stdout
    -and stderr generated by a test, to do this use the ``--addopts``
    -function to feed arguments to pytest as such::
    +  tox -e py37
     
    -  ./setup.py test --addopts -s
    +The output of all failing tests will always be printed in the summary, but
    +if you want to observe the stdout and stderr generated by a passing test,
    +you can pass the ``-s`` option to pytest as such::
    +
    +  tox -- -s
    +
    +.. tip::
    +
    +   The ``-s`` option is `a pytest option <https://docs.pytest.org/latest/usage.html>`_.
    +
    +   Any options specified before the ``--`` separator are consumed by ``tox``,
    +   and any options after the ``--`` separator will be passed along to pytest.
     
     You can always abort on the first failure by running::
     
    -  ./setup.py test --addopts -x
    +  tox -- -x
     
     If you want to run a specific test or a group of tests, you
     can specify a prefix to match. E.g. if you want to run all of
     the frontend tests you can do::
     
    -  ./setup.py test --addopts 'tests/frontend/'
    +  tox -- tests/frontend/
     
     Specific tests can be chosen by using the :: delimiter after the test module.
     If you wanted to run the test_build_track test within frontend/buildtrack.py you could do::
     
    -  ./setup.py test --addopts 'tests/frontend/buildtrack.py::test_build_track'
    +  tox -- tests/frontend/buildtrack.py::test_build_track
     
     We also have a set of slow integration tests that are disabled by
     default - you will notice most of them marked with SKIP in the pytest
     output. To run them, you can use::
     
    -  ./setup.py test --addopts '--integration'
    +  tox -- --integration
     
     By default, buildstream also runs pylint on all files. Should you want
     to run just pylint (these checks are a lot faster), you can do so
     with::
     
    -  ./setup.py test --addopts '-m pylint'
    +  tox -- -m pylint
     
     Alternatively, any IDE plugin that uses pytest should automatically
     detect the ``.pylintrc`` in the project's root directory.
     
    +.. note::
    +
    +   While using ``tox`` is practical for developers running tests in
    +   more predictable execution environments, it is still possible to
    +   execute the test suite against a specific installation environment
    +   using pytest directly::
    +
    +     ./setup.py test
    +
    +   Specific options can be passed to ``pytest`` using the ``--addopts``
    +   option::
    +
    +     ./setup.py test --addopts 'tests/frontend/buildtrack.py::test_build_track'
    +
     
     Adding tests
     ~~~~~~~~~~~~
    
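    To make the ``-s`` behaviour described in the new text concrete,
    here is a hypothetical test (not one from the suite) whose output
    pytest captures by default:

        # Hypothetical test: pytest captures stdout from passing tests,
        # so this print only shows up when capture is disabled, e.g.
        # with `tox -- -s`.
        def test_prints_something():
            print("only visible when pytest runs with -s")
            assert True
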

  • MANIFEST.in
    @@ -5,6 +5,7 @@ include CONTRIBUTING.rst
     include MAINTAINERS
     include NEWS
     include README.rst
    +include tox.ini
     
     # Documentation package includes
     include doc/Makefile

  • tests/loader/junctions.py
    @@ -3,7 +3,7 @@ import pytest
     import shutil
     
     from buildstream import _yaml, ElementError
    -from buildstream._exceptions import LoadError, LoadErrorReason
    +from buildstream._exceptions import ErrorDomain, LoadErrorReason
     from tests.testutils import cli, create_repo
     from tests.testutils.site import HAVE_GIT
     
    @@ -38,9 +38,9 @@ def test_simple_build(cli, tmpdir, datafiles):
     
         # Build, checkout
         result = cli.run(project=project, args=['build', 'target.bst'])
    -    assert result.exit_code == 0
    +    result.assert_success()
         result = cli.run(project=project, args=['checkout', 'target.bst', checkoutdir])
    -    assert result.exit_code == 0
    +    result.assert_success()
     
         # Check that the checkout contains the expected files from both projects
         assert(os.path.exists(os.path.join(checkoutdir, 'base.txt')))
    @@ -54,7 +54,7 @@ def test_build_of_same_junction_used_twice(cli, tmpdir, datafiles):
         # Check we can build a project that contains the same junction
         # that is used twice, but named differently
         result = cli.run(project=project, args=['build', 'target.bst'])
    -    assert result.exit_code == 0
    +    result.assert_success()
     
     
     @pytest.mark.datafiles(DATA_DIR)
    @@ -69,9 +69,9 @@ def test_nested_simple(cli, tmpdir, datafiles):
     
         # Build, checkout
         result = cli.run(project=project, args=['build', 'target.bst'])
    -    assert result.exit_code == 0
    +    result.assert_success()
         result = cli.run(project=project, args=['checkout', 'target.bst', checkoutdir])
    -    assert result.exit_code == 0
    +    result.assert_success()
     
         # Check that the checkout contains the expected files from all subprojects
         assert(os.path.exists(os.path.join(checkoutdir, 'base.txt')))
    @@ -93,9 +93,9 @@ def test_nested_double(cli, tmpdir, datafiles):
     
         # Build, checkout
         result = cli.run(project=project, args=['build', 'target.bst'])
    -    assert result.exit_code == 0
    +    result.assert_success()
         result = cli.run(project=project, args=['checkout', 'target.bst', checkoutdir])
    -    assert result.exit_code == 0
    +    result.assert_success()
     
         # Check that the checkout contains the expected files from all subprojects
         assert(os.path.exists(os.path.join(checkoutdir, 'base.txt')))
    @@ -115,45 +115,46 @@ def test_nested_conflict(cli, datafiles):
         copy_subprojects(project, datafiles, ['foo', 'bar'])
     
         result = cli.run(project=project, args=['build', 'target.bst'])
    -    assert result.exit_code != 0
    -    assert result.exception
    -    assert isinstance(result.exception, LoadError)
    -    assert result.exception.reason == LoadErrorReason.CONFLICTING_JUNCTION
    +    result.assert_main_error(ErrorDomain.LOAD, LoadErrorReason.CONFLICTING_JUNCTION)
     
     
    +# Test that we error correctly when the junction element itself is missing
     @pytest.mark.datafiles(DATA_DIR)
    -def test_invalid_missing(cli, datafiles):
    +def test_missing_junction(cli, datafiles):
         project = os.path.join(str(datafiles), 'invalid')
     
         result = cli.run(project=project, args=['build', 'missing.bst'])
    -    assert result.exit_code != 0
    -    assert result.exception
    -    assert isinstance(result.exception, LoadError)
    -    assert result.exception.reason == LoadErrorReason.MISSING_FILE
    +    result.assert_main_error(ErrorDomain.LOAD, LoadErrorReason.MISSING_FILE)
     
     
    +# Test that we error correctly when an element is not found in the subproject
    +@pytest.mark.datafiles(DATA_DIR)
    +def test_missing_subproject_element(cli, datafiles):
    +    project = os.path.join(str(datafiles), 'invalid')
    +    copy_subprojects(project, datafiles, ['base'])
    +
    +    result = cli.run(project=project, args=['build', 'missing-element.bst'])
    +    result.assert_main_error(ErrorDomain.LOAD, LoadErrorReason.MISSING_FILE)
    +
    +
    +# Test that we error correctly when a junction itself has dependencies
     @pytest.mark.datafiles(DATA_DIR)
     def test_invalid_with_deps(cli, datafiles):
         project = os.path.join(str(datafiles), 'invalid')
         copy_subprojects(project, datafiles, ['base'])
     
         result = cli.run(project=project, args=['build', 'junction-with-deps.bst'])
    -    assert result.exit_code != 0
    -    assert result.exception
    -    assert isinstance(result.exception, ElementError)
    -    assert result.exception.reason == 'element-forbidden-depends'
    +    result.assert_main_error(ErrorDomain.ELEMENT, 'element-forbidden-depends')
     
     
    +# Test that we error correctly when a junction is directly depended on
     @pytest.mark.datafiles(DATA_DIR)
     def test_invalid_junction_dep(cli, datafiles):
         project = os.path.join(str(datafiles), 'invalid')
         copy_subprojects(project, datafiles, ['base'])
     
         result = cli.run(project=project, args=['build', 'junction-dep.bst'])
    -    assert result.exit_code != 0
    -    assert result.exception
    -    assert isinstance(result.exception, LoadError)
    -    assert result.exception.reason == LoadErrorReason.INVALID_DATA
    +    result.assert_main_error(ErrorDomain.LOAD, LoadErrorReason.INVALID_DATA)
     
     
     @pytest.mark.datafiles(DATA_DIR)
    @@ -165,9 +166,9 @@ def test_options_default(cli, tmpdir, datafiles):
     
         # Build, checkout
         result = cli.run(project=project, args=['build', 'target.bst'])
    -    assert result.exit_code == 0
    +    result.assert_success()
         result = cli.run(project=project, args=['checkout', 'target.bst', checkoutdir])
    -    assert result.exit_code == 0
    +    result.assert_success()
     
         assert(os.path.exists(os.path.join(checkoutdir, 'pony.txt')))
         assert(not os.path.exists(os.path.join(checkoutdir, 'horsy.txt')))
    @@ -182,9 +183,9 @@ def test_options(cli, tmpdir, datafiles):
     
         # Build, checkout
         result = cli.run(project=project, args=['build', 'target.bst'])
    -    assert result.exit_code == 0
    +    result.assert_success()
         result = cli.run(project=project, args=['checkout', 'target.bst', checkoutdir])
    -    assert result.exit_code == 0
    +    result.assert_success()
     
         assert(not os.path.exists(os.path.join(checkoutdir, 'pony.txt')))
         assert(os.path.exists(os.path.join(checkoutdir, 'horsy.txt')))
    @@ -199,9 +200,9 @@ def test_options_inherit(cli, tmpdir, datafiles):
     
         # Build, checkout
         result = cli.run(project=project, args=['build', 'target.bst'])
    -    assert result.exit_code == 0
    +    result.assert_success()
         result = cli.run(project=project, args=['checkout', 'target.bst', checkoutdir])
    -    assert result.exit_code == 0
    +    result.assert_success()
     
         assert(not os.path.exists(os.path.join(checkoutdir, 'pony.txt')))
         assert(os.path.exists(os.path.join(checkoutdir, 'horsy.txt')))
    @@ -228,14 +229,11 @@ def test_git_show(cli, tmpdir, datafiles):
     
         # Verify that bst show does not implicitly fetch subproject
         result = cli.run(project=project, args=['show', 'target.bst'])
    -    assert result.exit_code != 0
    -    assert result.exception
    -    assert isinstance(result.exception, LoadError)
    -    assert result.exception.reason == LoadErrorReason.SUBPROJECT_FETCH_NEEDED
    +    result.assert_main_error(ErrorDomain.LOAD, LoadErrorReason.SUBPROJECT_FETCH_NEEDED)
     
         # Explicitly fetch subproject
         result = cli.run(project=project, args=['source', 'fetch', 'base.bst'])
    -    assert result.exit_code == 0
    +    result.assert_success()
     
         # Check that bst show succeeds now and the pipeline includes the subproject element
         element_list = cli.get_pipeline(project, ['target.bst'])
    @@ -263,9 +261,9 @@ def test_git_build(cli, tmpdir, datafiles):
     
         # Build (with implicit fetch of subproject), checkout
         result = cli.run(project=project, args=['build', 'target.bst'])
    -    assert result.exit_code == 0
    +    result.assert_success()
         result = cli.run(project=project, args=['checkout', 'target.bst', checkoutdir])
    -    assert result.exit_code == 0
    +    result.assert_success()
     
         # Check that the checkout contains the expected files from both projects
         assert(os.path.exists(os.path.join(checkoutdir, 'base.txt')))
    @@ -304,9 +302,9 @@ def test_build_git_cross_junction_names(cli, tmpdir, datafiles):
     
         # Build (with implicit fetch of subproject), checkout
         result = cli.run(project=project, args=['build', 'base.bst:target.bst'])
    -    assert result.exit_code == 0
    +    result.assert_success()
         result = cli.run(project=project, args=['checkout', 'base.bst:target.bst', checkoutdir])
    -    assert result.exit_code == 0
    +    result.assert_success()
     
         # Check that the checkout contains the expected files from both projects
         assert(os.path.exists(os.path.join(checkoutdir, 'base.txt')))

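    For context on the Result checking APIs adopted here: judging only
    from the assertions they replace in this diff, the helpers fold the
    exit-code and exception checks into a single call. A rough sketch of
    what they plausibly encapsulate (the real helpers live in
    tests/testutils and may differ):

        # Sketch inferred from the replaced assertions, not the actual code
        class Result:
            def __init__(self, exit_code, exception=None):
                self.exit_code = exit_code
                self.exception = exception

            def assert_success(self):
                # Replaces: assert result.exit_code == 0
                assert self.exit_code == 0
                assert self.exception is None

            def assert_main_error(self, error_domain, error_reason):
                # Replaces the four-line exit code / exception type /
                # reason check with a single domain + reason assertion
                assert self.exit_code != 0
                assert self.exception is not None
                assert self.exception.domain == error_domain
                assert self.exception.reason == error_reason
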
  • tests/loader/junctions/invalid/missing-element.bst
    @@ -0,0 +1,9 @@
    +# This refers to the `foo.bst` element through
    +# the `base.bst` junction. The `base.bst` junction
    +# exists but the `foo.bst` element does not exist
    +# in the subproject.
    +#
    +kind: stack
    +depends:
    +- junction: base.bst
    +  filename: foo.bst

  • tox.ini
    @@ -0,0 +1,38 @@
    +#
    +# Configuration file for running tests under tox
    +#
    +[tox]
    +minversion = 3.4.0
    +requires = tox-venv
    +envlist = py{35,36,37}
    +skip_missing_interpreters = true
    +
    +[testenv]
    +sitepackages=True
    +deps =
    +    coverage == 4.4.0
    +    pep8
    +    pylint
    +    pytest >= 3.9
    +    pytest-cov >= 2.5.0
    +    pytest-datafiles >= 2.0
    +    pytest-env
    +    pytest-pep8
    +    pytest-pylint
    +    pytest-xdist
    +    pytest-timeout
    +    pyftpdlib
    +    setuptools
    +    psutil
    +    ruamel.yaml >= 0.15.41, < 0.15.52
    +    pluginbase
    +    Click >= 7.0
    +    jinja2 >= 2.10
    +    protobuf >= 3.5
    +    grpcio >= 1.10
    +
    +passenv =
    +    GI_TYPELIB_PATH
    +
    +commands =
    +    pytest {posargs}

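    One detail worth noting in the commands line: `{posargs}` is where
    tox substitutes everything it receives after the `--` separator,
    which is what makes invocations like `tox -- -x tests/frontend/`
    work. A toy model of that substitution, illustrative only since tox
    actually keeps the arguments as a list rather than a joined string:

        # Toy model of tox's {posargs} substitution in `commands`
        def expand(template: str, posargs: list) -> str:
            return template.replace("{posargs}", " ".join(posargs))

        assert expand("pytest {posargs}", ["--integration"]) == "pytest --integration"
        assert expand("pytest {posargs}", ["-x", "tests/frontend/"]) == "pytest -x tests/frontend/"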

