Fuel/Code Review Rules
Revision as of 02:28, 3 June 2014

Why do we need code review?

Code review is essential to a sustainable software development process. It serves the following goals:

  1. Confirm that the proposed code change is suitable for its purpose: it solves the stated problem and follows the agreed architecture.
  2. Analyze the external impact of the code change: it doesn't introduce new issues and doesn't reduce the stability or performance of the whole solution.
  3. Enforce code quality standards: the proposed change is well structured, well documented, and consistent with the coding style of the surrounding code.
  4. Spread knowledge about architecture and implementation details, and keep other developers up to date with the current state of the code base.

Submitting your code for review

  1. Make sure your commit message describes the proposed code change adequately and is in line with the commit message guidelines.
  2. Make sure your code improves the overall quality of the surrounding code: avoid quick and dirty workarounds and follow the code quality standards.
  3. Remember that it is your responsibility to get other developers to review and approve your commit. Be proactive and responsive, use IRC and the mailing list to draw core developers' attention to your review requests, and try to address all review comments promptly.
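The commit message guidelines are covered in their own section below; purely as an illustration, a message following the common Gerrit/OpenStack conventions (short summary line, blank line, body wrapped at 72 characters, bug reference, Change-Id added by the commit hook) might look like the following. All details here are invented for the example:

    Add disk partitioning data to node agent payload

    The agent previously reported only CPU and memory facts, so the
    orchestrator could not plan partitioning. Collect block device
    facts from ohai and include them in the POST request to the
    Nailgun API.

    Closes-Bug: #<bug number>
    Change-Id: <added automatically by the Gerrit commit hook>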

Reviewing other people's code

  1. Be nice. Frame your comments as helpful suggestions.
  2. Before the review, find out why the code is written: what is the exact problem/feature this code is written for?
  3. Understand the use cases for the code, data flow.
    • Example: review request for node agent. Use cases: agent is ran from rc.local at boot time, it checks nailgun url from cmdline and does POST request to API, posting the data obtained from ohai.
  4. Are these use cases addressed in the code?
    • Example: check that agent will be run at the boot time, that it collects data and does POST request
  5. Make sure the code does not violate the documented architecture. Changes to architecture should be discussed in blueprints and on the mailing list.
    • Example: Deployment orchestrator lacks some data about node, which could be easily obtained from Nailgun API, and junior developer uses GET request to Nailgun API to get the data.
    • Reviewer comment could be: "Orchestrator obtains all the data via AMQP, and it should not get any data from Nailgun API. Orchestrator must work with no use of Nailgun, so my suggestion is to extend Nailgun code so required data is sent over AMQP, along with other data."
  6. Take a piece of paper and write down negative scenarios. Does the code provided handles all of them? Is it written in the way to handle even possible errors, not mandatory happening right now?
    • Example: agent - What if ohai doesn't provide some data? What if POST request fails due to temporary network connectivity issues? What if object is already created in backend, and POST returns error? Could it be that agent starts before network connectivity is established? Any case that url is not in cmdline?
  7. Make sure unit tests are present and cover:
    • use cases defined in the blueprint or bug
    • negative scenarios
    • If there are other requirements, such as performance/scalability/threading-safety, check for specific tests too.
  8. Check if integration test is present or planned when required. Good reason to ask for integration test if new code affects RPC call to deployment orchestrator: new data provided or the new method call is created.
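To make the checklist concrete, here is a minimal Python sketch of the node agent use case used in the examples above, written to survive the negative scenarios a reviewer should ask about (missing URL in the cmdline, network not yet up at boot, object already created in the backend). The function names and the `url=` cmdline format are illustrative assumptions, not the real Fuel agent code:

```python
# Hypothetical node agent helpers; names and formats are assumptions
# for illustration, not the actual Fuel implementation.
import json
import time
import urllib.error
import urllib.request


def parse_nailgun_url(cmdline):
    """Extract url=... from the kernel cmdline; return None if absent."""
    for token in cmdline.split():
        if token.startswith("url="):
            return token[len("url="):] or None
    return None  # reviewer scenario: URL may be missing from the cmdline


def post_with_retry(url, data, attempts=3, delay=1.0,
                    opener=urllib.request.urlopen):
    """POST JSON data, retrying on transient network errors.

    A 409 response means the object already exists in the backend,
    which the agent treats as success rather than a failure.
    """
    body = json.dumps(data).encode("utf-8")
    request = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"})
    for attempt in range(attempts):
        try:
            with opener(request) as response:
                return response.status
        except urllib.error.HTTPError as exc:
            if exc.code == 409:  # already created: not a failure
                return exc.code
            raise
        except urllib.error.URLError:
            if attempt == attempts - 1:
                raise  # connectivity never came up; give up loudly
            time.sleep(delay)  # network may not be up yet at boot time
```

When reviewing code like this, walk each negative scenario from the checklist against a specific branch: the missing-URL case maps to the `return None`, the boot-time connectivity case to the retry loop, and the already-created case to the 409 branch.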
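For item 7, a sketch of the unit-test coverage a reviewer should look for: one test per use case from the blueprint or bug, plus tests for the negative scenarios. The helper below handles the "what if ohai doesn't provide some data?" scenario; it and its field names are hypothetical, for illustration only:

```python
# Hypothetical payload builder and the tests a reviewer would expect;
# field names are assumptions, not the real Fuel agent schema.
import unittest


def node_payload(ohai):
    """Build the POST payload from ohai facts, tolerating missing keys."""
    return {
        "mac": ohai.get("macaddress"),
        "os_platform": ohai.get("platform", "unknown"),
        "memory_total": ohai.get("memory", {}).get("total"),
    }


class TestNodePayload(unittest.TestCase):
    def test_use_case_full_facts(self):
        # positive use case: ohai provided everything the blueprint needs
        facts = {"macaddress": "aa:bb:cc:dd:ee:ff",
                 "platform": "ubuntu",
                 "memory": {"total": "2048kB"}}
        payload = node_payload(facts)
        self.assertEqual(payload["mac"], "aa:bb:cc:dd:ee:ff")
        self.assertEqual(payload["os_platform"], "ubuntu")
        self.assertEqual(payload["memory_total"], "2048kB")

    def test_negative_missing_facts(self):
        # negative scenario: ohai returned no data at all
        payload = node_payload({})
        self.assertIsNone(payload["mac"])
        self.assertEqual(payload["os_platform"], "unknown")
        self.assertIsNone(payload["memory_total"])
```

A review comment asking for tests can point at exactly this structure: "the blueprint lists use case X and the checklist lists negative scenario Y; please add a test for each."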

Commit message guidelines

Code quality standards