Security/Projects/Bandit

Project Moved
Please note that Bandit is no longer maintained under OpenStack and has been moved to the Python Code Quality Authority:

https://github.com/PyCQA/bandit

All patches and issues should be raised on the PyCQA GitHub repository.

Overview
Bandit is a security linter for Python source code, utilizing the ast module from the Python standard library.

The ast module is used to convert source code into a parsed tree of Python syntax nodes. Bandit allows users to define custom tests that are performed against those nodes. At the completion of testing, a report is generated that lists security issues identified within the target source code.
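The general approach can be sketched with the standard library alone. The snippet below is a minimal illustration of walking a parsed syntax tree and flagging a suspicious call; it is not Bandit's actual plugin API, just the underlying idea (the `find_calls` helper and the sample source are hypothetical):

```python
import ast

SOURCE = """
import pickle

def load(blob):
    return pickle.loads(blob)  # deserializing untrusted data
"""

def find_calls(source, module, func):
    """Walk the parsed syntax tree and report line numbers of calls to module.func."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and isinstance(node.func.value, ast.Name)
                and node.func.value.id == module
                and node.func.attr == func):
            findings.append(node.lineno)
    return findings

# Report every call to pickle.loads() in the sample source
print(find_calls(SOURCE, "pickle", "loads"))
```

Bandit's real tests work against the same kind of syntax nodes, but are registered as plugins and carry metadata such as severity and confidence.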

Bandit is currently a stand-alone tool which can be downloaded by end-users and run against arbitrary source code. As it matures and is proven to be useful, we see it being a possible addition to OpenStack CI gate tests with non-voting and eventually voting capabilities.

Bandit can be obtained by cloning the repository at https://git.openstack.org/openstack/bandit.git. The README.rst file contains documentation regarding installation, usage, and configuration.

There is a video of Bandit being presented at the OpenStack 2015 Summit in Vancouver.

Gate Testing with Bandit
Bandit can help maintain the security of OpenStack projects when it's used as a gate test. Projects such as Keystone have created a gate test which runs Bandit to ensure that common security code mistakes are not introduced when code is modified. There are now two ways to set up a Bandit gate; the steps for both are covered below.

Full Run Bandit Gate
This option works well for projects that already have clean Bandit runs. To set up a full run Bandit gate for an OpenStack project, follow these steps:


 * 1) Add bandit (the package name on PyPI) to the test-requirements.txt file. See global-requirements.txt and copy the bandit line. While running Bandit only actually requires the bandit package, it's easiest for now to keep it with the rest of the test requirements. This file lists the requirements for creating the virtual environment Bandit runs in, and in most projects it is updated automatically by the OpenStack proposal bot.
 * 2) Bandit needs to be in two tox environments: a bandit environment that the Bandit team uses for integration tests, and the pep8 environment. See Keystone's tox.ini for an example.

The following is a good starting point:

# B105-B107: hardcoded password checks - likely to generate false positives in a gate environment
# B404: import subprocess - not necessarily a security issue; this plugin is mainly used for penetration testing workflows
# B603,B606: process without shell - not necessarily a security issue; this plugin is mainly used for penetration testing workflows
# B607: start process with a partial path - this should be a project-level decision
bandit -r project -x tests -s B105,B106,B107,B404,B603,B606,B607

Test this by running tox -e pep8. Initially, there will likely be several other tests that fail. Exclude these and work on fixing them in separate commits.
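For context, the excluded checks above target constructs like the following. The snippet is an illustrative sketch, not taken from any real project; in test code such patterns are usually benign, which is why they tend to produce false positives in a gate:

```python
import subprocess

# Harmless here, but Bandit reports patterns like these because
# they *can* indicate real issues in production code.

PASSWORD = "not-a-real-secret"  # B105: hardcoded password string

def list_directory():
    # B603/B607: spawning a process without a shell, using a partial
    # executable path that is resolved via PATH at runtime.
    return subprocess.run(["ls"], capture_output=True, text=True).returncode
```

Whether these checks are worth enabling depends on how much subprocess use and test fixture data a project carries.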

If you have any questions or comments please contact tmcpeak or tkelsey in #openstack-security on Freenode IRC.

Bandit Baseline Gate
This is the best option for projects which may have some legacy Bandit findings. Just because a project has some pre-existing security issues doesn't mean Bandit can't help prevent new ones! To set up a Bandit Baseline gate for an OpenStack project, follow these steps:


 * 1) Decide on the appropriate tests to run; not every test supported by Bandit will be a good fit for every project. The bandit command line arguments -s and -t can be used to filter the set of tests that run.
 * 2) (optional) Add a Bandit config for your project. If you need to configure specific test parameters, in addition to switching tests on or off wholesale, then a Bandit config may be needed. Bandit ships with a tool, 'bandit-config-generator', that can help generate a config file. This config is completely optional and is only needed if the defaults for specific tests are not sufficient.
 * 3) Add "bandit" (the package name on PyPI) to the test-requirements.txt file. While running Bandit only actually requires the bandit package, it's easiest for now to keep it with the rest of the test requirements. This file lists the requirements for creating the virtual environment Bandit runs in, and in most projects it is updated automatically by the OpenStack proposal bot.
 * 4) Add tox environments to run the Bandit baseline. To do this we'll add two targets:
 * 5) a standalone "codesec" tox target, which is useful for developers to check their changes
 * 6) a "linters" tox target. By adding a linters target, we extend our linters run to include Bandit in addition to the normal flake8 tests.
 * 7) In both cases the arguments for 'bandit-baseline' should be identical to what you would pass to Bandit. For example, if you created a config file above, specify it with '-c myconfig.yaml'. In the example above we run 'bandit-baseline -r bandit -ll -ii', which tells Bandit (and bandit-baseline) to scan the 'bandit' directory recursively and report only issues of medium or higher severity and confidence.
 * 8) Change the OpenStack infra zuul/layout.yaml for your project to use 'python-jobs-linters' instead of 'python-jobs', as was done for the Bandit project itself (see the zuul layout change example). This enables your project to use the 'linters' target in your python-jobs gate instead of just flake8. Note: doing this gives you a voting Bandit gate, so make sure you're comfortable with Bandit and how it works before making the change.
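The '-ll -ii' filtering used in step 7 can be sketched conceptually as follows. This is a hypothetical illustration of the semantics (the `filter_findings` helper and the sample data are invented for this sketch, not Bandit internals): a finding must meet both the severity and the confidence threshold to be reported.

```python
# Ranked levels, as used by Bandit's severity/confidence model.
LEVELS = {"LOW": 1, "MEDIUM": 2, "HIGH": 3}

def filter_findings(findings, min_severity="MEDIUM", min_confidence="MEDIUM"):
    """Keep only findings at or above the given severity AND confidence."""
    return [f for f in findings
            if LEVELS[f["severity"]] >= LEVELS[min_severity]
            and LEVELS[f["confidence"]] >= LEVELS[min_confidence]]

findings = [
    {"test_id": "B105", "severity": "LOW", "confidence": "MEDIUM"},
    {"test_id": "B602", "severity": "HIGH", "confidence": "HIGH"},
]

# Only the HIGH/HIGH finding survives a medium+ filter
print(filter_findings(findings))
```

Raising both thresholds this way is what keeps a baseline gate quiet on legacy code while still catching serious new findings.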

Per-task Configurations
By default Bandit runs all plugins that it finds in the plugins directory. While this may be useful for a thorough manual review, for use cases such as automation and gate testing it is probably overkill. When it is desirable to create specific sets of tests for specific tasks, create a config file for each task. This file can control the list of tests to run as well as tune any parameters relevant to those tests. Once a custom config has been created, point Bandit at it with the "-c" command line option. Another way to control which tests run is to use the -t and -s command line options to select a set of tests; this is normally more convenient than a full-blown config when test parameters don't need refining.
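The -t/-s selection semantics can be sketched as follows. This is a hypothetical illustration of how inclusion and skipping combine (the `select_tests` helper and the sample test IDs are invented for the sketch), not Bandit's actual implementation:

```python
# Sample test IDs standing in for Bandit's full plugin set.
ALL_TESTS = {"B101", "B105", "B404", "B602"}

def select_tests(all_tests, include=None, skip=None):
    """Sketch of -t/-s filtering: -t picks an explicit set of tests,
    -s then removes tests from whatever set was selected."""
    selected = set(include) if include else set(all_tests)
    return selected - set(skip or ())

# Equivalent of 'bandit -s B105,B404': run everything except two tests
print(sorted(select_tests(ALL_TESTS, skip=["B105", "B404"])))
```

The key point is that -s subtracts from the selection, so it composes with -t: you can include a broad group of tests and then skip the few that don't fit the project.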