Azure SDK for Python - Engineering System
There are various tests currently enabled in the Azure pipelines for the Python SDK, and some of them are enabled only for nightly CI checks. We also run static analysis tools to verify code completeness, security, and lint compliance.
Check the contributing guide for an intro to tox. For a deeper dive into the tooling that enables the CI checks below, and additional detail on reproducing builds locally, please refer to the azure-sdk-tools README.md.
As a contributor, you will see the build jobs run in two modes: Nightly Scheduled and Pull Request. These utilize the same build definition, except that the nightly builds run additional, deeper checks that run for a bit longer.
Example PR build:
- Analyze: tox envs run during the Analyze job.
- Test <platform>_<pyversion>: runs PR/Nightly tox envs, depending on context.
Targeting a specific package at build queue time
In both public and internal projects, all builds allow a filter to be introduced at build time to narrow the set of packages built/tested.
- Click Run New on your target build.
- Before clicking run against main or your target commit, click Variables and add a variable. Add the variable BuildTargetingString with a valid glob string as its value, as shown in the example below.
  - For example, setting the filter string azure-mgmt-* will narrow a build to only management packages. A value of azure-keyvault-secrets will result in building only that specific package.
- Once it's set, run the build!
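For instance, a queue-time variable targeting only the Key Vault libraries (a hypothetical glob; any valid glob string works) would look like:

BuildTargetingString=azure-keyvault-*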
Skipping a tox test environment at build queue time
All build definitions allow a choice at queue time as to which tox environments actually run during the test phase.
- Find your target service internal build.
- Click Run New.
- Before clicking run against main or your target commit, click Variables and add a variable named Run.ToxCustomEnvs. The value should be a comma-separated list of tox environments that you want to run in the test phase.
- Once it's set, run the build!
The default set of tox environments is whl, sdist, depends, latestdependency, mindependency; setting Run.ToxCustomEnvs narrows that set.
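For example, a queue-time variable that runs only the wheel and sdist environments (a hypothetical narrowing; any subset of the valid environments works) would be:

Run.ToxCustomEnvs=whl,sdist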
Any combination of valid tox environments will work. Reference either this document or the file present at eng/tox/tox.ini to find what options are available.
Skipping entire sections of builds
In certain cases, release engineers may want to disable APIView checks prior to releasing. Engineers who need this capability should first clear it with their lead, then set the following build-time variable.
- Create a variable named Skip.CreateApiReview
- Set the variable value to true
This is the most useful skip, but the following skip variables are also supported. Set the variable value to true for each of the variables below; an example follows the list.
- Skip.Analyze - Skip the analyze job entirely.
- Skip.Test - Skip the test jobs entirely.
- Skip.TestConda - Skip the conda test jobs entirely.
- Skip.ApiStubGen - Entirely omits API stub generation within the build job.
- Skip.VerifySdist - Omit the twine check of source distributions in the build job.
- Skip.VerifyWhl - Omit the twine check of wheels in the build job.
- Skip.Bandit - Omit bandit checks in the analyze job.
- Skip.Pylint - Omit linting checks in the analyze job.
- Skip.VerifyTypes - Omit the VerifyTypes check in the analyze job.
- Skip.Pyright - Omit the Pyright check in the analyze job.
- Skip.BreakingChanges - Don't verify if a changeset includes breaking changes.
- Skip.MyPy - Omit mypy checks in the analyze job.
- Skip.AnalyzeDependencies - Omit the 'Analyze Dependencies' step in the analyze job.
- Skip.VerifyDependencies - Omit checking that a package's dependencies are on PyPI before releasing.
- Skip.KeywordCheck - Omit checking that a package's keywords are correctly formulated before releasing.
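For example, to skip both linting and mypy in a single run (a hypothetical combination; any of the variables above can be set the same way), add these queue-time variables:

Skip.Pylint=true
Skip.MyPy=true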
The pyproject.toml
Starting with this PR, which checks apply to which packages is now established in a pyproject.toml, right next to each package's setup.py. This not only allows devs to fine-tune which checks are applied at the package level, but also seriously reduces confusion as to which checks apply when.
We default to enabling most of our checks like pylint, mypy, etc. Due to that, most pyproject.toml settings will likely be disabling checks.
Here's an example:
# from sdk/core/azure-servicemanagement-legacy/pyproject.toml, which is a legacy package
# as a result, all of these checks are disabled
[tool.azure-sdk-build]
mypy = false
type_check_samples = false
verifytypes = false
pyright = false
pylint = false
black = false
sphinx = false
If a package does not yet have a pyproject.toml, creating one with just the section [tool.azure-sdk-build] will do no harm to the release of the package in question.
Environment variables important to CI
There are a few differences from a standard local invocation of tox <env>. Primarily, these differences adjust the checks to be friendly to parallel invocation. These adjustments are necessary to prevent random CI crashes.
| Environment Variable | Effect on Build |
|---|---|
| TF_BUILD | EngSys uses the presence of any value in this variable as the bit indicating "in CI" or not. The primary effect of this is that all relative dev dependencies will be prebuilt prior to running the tox environments. |
| PREBUILT_WHEEL_DIR | Setting this environment variable means that instead of generating a fresh wheel or sdist to test, tox will look in this directory for the targeted package. |
| PIP_INDEX_URL | Standard pip environment variable. During nightly alpha builds, this environment variable is set to a public dev feed. |
The various tooling abstracted by the environments within eng/tox/tox.ini takes the above variables into account automatically.
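As a rough sketch, the same behavior can be mimicked locally by exporting these variables before a tox invocation (assuming a wheel for the target package has already been built; the directory path below is a placeholder):

TF_BUILD=true PREBUILT_WHEEL_DIR=/path/to/prebuilt/wheels tox run -e whl -c ../../../eng/tox/tox.ini --root .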
Atomic Overrides
Packages with the classifier Development Status :: 7 - Inactive are not built by default, and as such normal checks like mypy and pylint are also not run against them. Older "core" packages like azure-common and azure-servicemanagement-legacy are present, but excluded from the build due to this restriction.
Additionally, packages with the pyproject.toml option ci_enabled = false will skip normal checks and tests. This is used for packages that are not yet compliant with certain CI checks. If ci_enabled = false is present in the package's pyproject.toml, it will be blocked from releasing until it is removed and all required CI checks pass.
To temporarily override this restriction, a dev need only set the queue-time variable ENABLE_PACKAGE_NAME. The - in package names should be replaced by an _, as that is how the environment variable will be set on the actual CI machine anyway.
ENABLE_AZURE_COMMON=true
ENABLE_AZURE_SERVICEMANAGEMENT_LEGACY=true
This same methodology also applies to individual checks that run during various phases of CI. Developers can use a queue-time variable of the format PACKAGE_NAME_CHECK=true/false. The name that you should use is based on the tox environment that the check refers to. Here are a few examples of enabling/disabling checks:
- AZURE_SERVICEBUS_PYRIGHT=true <-- enable a check that normally is disabled in pyproject.toml
- AZURE_CORE_PYLINT=false <-- disable a check that normally runs
Enable test logging in CI pipelines
You can enable test logging in a pipeline by setting the queue-time variable PYTEST_LOG_LEVEL to the desired logging level. For example:
PYTEST_LOG_LEVEL=INFO
This also works locally with tox by setting the PYTEST_LOG_LEVEL environment variable, as in the example below.
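A minimal local invocation (assuming the whl environment; any tox environment works the same way):

PYTEST_LOG_LEVEL=DEBUG tox run -e whl -c ../../../eng/tox/tox.ini --root .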
Note that if you want DEBUG-level logging with sensitive information unredacted in the test logs, then you still must pass logging_enable=True into the client(s) being used in tests.
Analyze Checks
The Analyze job in both the nightly CI and the pull request validation pipeline runs a set of static analysis checks using external and internal tools. Following is the list of these static analysis checks.
MyPy
MyPy is a static analysis tool that runs type checking of a Python package. Following are the steps to run MyPy locally for a specific package:
- Go to the root of the package
- Execute the following command:
tox run -e mypy -c ../../../eng/tox/tox.ini --root .
Pyright
Pyright is a static analysis tool that runs type checking of a Python package. Following are the steps to run pyright locally for a specific package:
- Go to the root of the package
- Execute the following command:
tox run -e pyright -c ../../../eng/tox/tox.ini --root .
Verifytypes
Verifytypes is a feature of pyright that checks the type completeness of a Python package. Following are the steps to run verifytypes locally for a specific package:
- Go to the root of the package
- Execute the following command:
tox run -e verifytypes -c ../../../eng/tox/tox.ini --root .
Pylint
Pylint is a static analysis tool that runs lint checking; it is automatically run on all PRs. Following are the steps to run pylint locally for a specific package:
- Go to the root of the package.
- Execute the following command:
tox run -e pylint -c ../../../eng/tox/tox.ini --root .
Note that the pylint environment is configured to run against the earliest supported Python version. This means that users must have Python 3.7 installed on their machine to run this check locally.
Sphinx and docstring checker
Sphinx is the preferred documentation builder for Python libraries. The documentation is always built and attached to each PR build. Sphinx is configured to fail if docstrings are invalid, helping to ensure the resulting documentation will be of high quality. Following are the steps to run sphinx locally for a specific package with strict docstring checking:
- Go to the root of the package.
- Execute the following command:
tox run -e sphinx -c ../../../eng/tox/tox.ini --root .
Bandit
Bandit is a static security analysis tool. This check is triggered for all Azure SDK packages as part of the analyze job. Following are the steps to run the Bandit tool locally for a specific package:
- Go to the package root directory.
- Execute the following command:
tox run -e bandit -c ../../../eng/tox/tox.ini --root .
ApiStubGen
ApiStubGen is an internal tool used to create API stubs to help review public APIs in our SDK packages using APIViewTool. This tool also has some built-in lint checks available, and the purpose of having this step in the analyze job is to ensure that code changes do not impact the stubbing process, and to allow more custom lint checks to be added in the future.
black
black is an opinionated code formatter for Python source code.
Opt-in to formatting validation
Make the following change to your project's ci.yml:
extends:
template: ../../eng/pipelines/templates/stages/archetype-sdk-client.yml
parameters:
...
ValidateFormatting: true
...
Running locally
- Go to the package root directory.
- Execute the following command:
tox run -e black -c ../../../eng/tox/tox.ini -- .
Tip: You can provide any arguments that black accepts after the --. Example: tox run -e black -c ../../../eng/tox/tox.ini -- path/to/file.py
Change log verification
Change log verification is added to ensure the package has a valid change log for the current version. Guidelines to properly maintain the change log are documented here.
PR Validation Checks
Each pull request runs various tests using pytest, in addition to all the checks mentioned above in the analyze job. Pull request validation performs 3 different types of tests: whl, sdist, and depends. The following section explains the purpose of each of these tests and how to execute them locally. All pull requests are validated on multiple Python versions across different platforms. Find the test matrix below.
| Python Version | Platform |
|---|---|
| 2.7 | Linux |
| 3.5 | Windows |
| 3.8 | Linux |
PR validation tox test environments
Tests are executed using tox environments, and following are the tox test names that are part of pull request validation:
whl
This test installs the wheel of the package being tested and runs all test cases in the package using pytest. Following is the command to run this test environment locally:
- Go to the package root folder on a command line
- Run the following command
tox run -e whl -c ../../../eng/tox/tox.ini --root .
sdist
This test installs the sdist of the package being tested and runs all test cases in the package using pytest. Following is the command to run this test environment locally:
- Go to the package root folder on a command line
- Run the following command
tox run -e sdist -c ../../../eng/tox/tox.ini --root .
depends
The depends check ensures all modules in a target package can be successfully imported. Actually installing and importing will verify that all package requirements are properly set in setup.py and that the __all__ set for the package is properly defined. This test installs the package and its required packages, then executes from <package-root-namespace> import *. For example, for azure-core, the following would be invoked: from azure.core import *.
Following is the command to run this test environment locally:
- Go to the package root folder on a command line
- Run the following command
tox run -e depends -c ../../../eng/tox/tox.ini --root .
Nightly CI Checks
Nightly continuous integration checks run all tests mentioned above in the Analyze and Pull request checks, in addition to multiple other tests. Nightly CI checks run on all Python versions that are supported by Azure SDK packages, across multiple platforms.
The nightly CI check runs the following additional tests to exercise the dependencies between a package being developed and released packages, ensuring backward compatibility. Following is an explanation of why we need dependency tests to ensure backward compatibility.
Imagine a situation where package XYZ requires another package ABC, and as per the package requirement of XYZ, it should work with any version of ABC between 1.0 and 2.0.
Package XYZ requires package ABC
As a developer of package XYZ, we need to ensure that our package works fine with all versions of ABC, as long as they are within the package requirement specification.
Another scenario is where a regression test (reverse dependency test) is required. Let's take the same example above and assume we are developers of package ABC, which is taken as a required package by another package XYZ.
Package ABC is required by package XYZ
As a developer of ABC, we need to ensure that any new change in ABC does not break the use of XYZ, and hence ensure backward compatibility.
Let's take a few Azure SDK packages instead of dummy names to explain this in a context we are more familiar with.
Most of the Azure SDK packages require azure-core, and this requirement is within a range; for example, azure-storage-blob requires azure-core >1.0.0, <2.0.0. So any new change in azure-storage-blob needs to work fine with all versions of azure-core within that range.
Similarly, any new version of azure-core needs to ensure that it is still compatible with all released package versions that take azure-core as a required package.
It is a lot of combinations if we need to run tests against all released versions within the range of the requirement specification. In order to reduce the test matrix while still ensuring quality, we currently run the tests using the oldest and latest released packages and skip any versions in between.
Following are the additional tests we run during nightly CI checks.
Latest Dependency Test
This test makes sure that a package being developed works fine with the latest released version of a required Azure SDK package, as long as there is a released version that satisfies the requirement specification. The workflow of this test is as follows:
- Identify whether any Azure SDK package is marked as a required package in the setup.py of the current package being tested. Note: any dependency mentioned only in dev_requirements is not considered when identifying dependencies.
- Identify the latest released version of the required Azure SDK package on PyPI
- Install the latest released version of the required package, instead of the dev dependency on the package in the code repo
- Install the current package that is being tested
- Run pytest on all test cases in the current package
The tox name of this test is latestdependency, and the steps to manually run this test locally are as follows:
- Go to the package root, e.g. azure-storage-blob or azure-identity
- Run the following command
tox run -e latestdependency -c ../../../eng/tox/tox.ini --root .
Minimum Dependency Test
This test makes sure that a package being developed works fine with the oldest released version of a required Azure SDK package, as long as there is a released version that satisfies the requirement specification. The workflow of this test is as follows:
- Identify whether any Azure SDK package is marked as a required package in the setup.py of the current package being tested. Note: any dependency mentioned only in dev_requirements is not considered when identifying dependencies.
- Identify the oldest released version of the required Azure SDK package on PyPI
- Install the oldest released version of the required package, instead of the dev dependency on the package in the code repo
- Install the current package that is being tested
- Run pytest on all test cases in the current package
The tox name of this test is mindependency, and the steps to manually run this test locally are as follows:
- Go to the package root, e.g. azure-storage-blob or azure-identity
- Run the following command
tox run -e mindependency -c ../../../eng/tox/tox.ini --root .
Regression Test
As mentioned earlier, the regression test, or reverse dependency test, is added to avoid a regression scenario for customers when a new change is made in a package that is required by other packages. Currently we have only a few Azure SDK packages that are added as required packages by other Azure SDK packages. As of now, the list of these required packages is:
azure-core
azure-eventhub
azure-storage-blob
Our regression framework automatically finds any such package that is added as a required package, so this list is not hardcoded.
We have two different sets of regression tests to verify regression scenarios against the oldest and latest released dependent packages:
- Regression using the latest released dependent package
- Regression using the oldest released dependent package
One main difference between regression tests and the forward dependency tests (latestdependency and mindependency) is which test cases are executed. While forward dependency tests execute the test cases in the current code repo, regression tests execute the tests that were part of the repo at the time of the dependent package's release. To make it clearer, let's look at an example.
Let's assume that we are testing regression for azure-core, and that this test is for regression against the latest released dependent packages. The test will identify all packages that take azure-core as a required package and find the latest released version of those packages. The test framework installs the azure-core currently being developed along with the latest released dependent package, and runs the test cases in the dependent package (e.g. azure-identity) that were part of the repo at the time that dependent package was released.
The workflow of this test is as follows when running regression for an SDK package:
- Identify any packages that take the package currently being tested as a required package
- Find the latest and oldest released versions of each dependent package from PyPI
- Install the in-development version of the package we are testing regression for, e.g. azure-core
- Check out the release tag of the dependent package from GitHub
- Install the latest/oldest version of the dependent package, e.g. azure-identity
- Run the test cases within the dependent package from the checked-out branch
Steps to manually run the regression test locally:
- Run the below command from your git code repo to generate the wheel of the package being developed. Currently the regression test requires a prebuilt wheel.
./scripts/devops_tasks/build_packages.py --service=<service-name> -d <out-folder>
- Run the below command to start the regression test locally
./scripts/devops_tasks/test_regression.py azure-* --service=<service-name> --whl-dir=<out-folder given in the previous step>
The following variable can be set at queue time in order to run these additional tests. By default, regression tests execute only on scheduled runs.
Regression test
Variable name: Run.Regression
Value: true
Autorest Automation
This check will automatically create PRs with updated generated code whenever autorest has made an update that results in a change to the generated code for a package.
Opt-in to autorest automation
Make the following change to your project's ci.yml:
extends:
template: ../../eng/pipelines/templates/stages/archetype-sdk-client.yml
parameters:
...
VerifyAutorest: true
...
Running locally
To run autorest automation locally, run the following command from the root of the azure-sdk-for-python repo:
azure-sdk-for-python> python scripts/devops_tasks/verify_autorest.py --service_directory <your_service_directory>
Nightly Live Checks
There are additional checks that run in live tests.
Running Samples
Samples for a library can be run as part of the nightly checks. To opt in to the check, edit the tests.yml file to look like:
stages:
- template: ../../eng/pipelines/templates/stages/archetype-sdk-tests.yml
parameters:
...
MatrixReplace:
- TestSamples=.*/true