User talk:Péter Hegedűs

= OpenStack Quality Improvement Plan =

Who are we?

We are a small team at the Software Engineering Department of the University of Szeged who contribute to OpenStack. The members of the team are Béla Vancsics, Gábor Antal, Ferenc Horváth, Viktor Varga, Alex Szarka and Péter Hegedűs. Our research and industrial focus is software quality in every aspect. We have experience in writing static code analyzers for various languages, in static and dynamic testing methods, and more. For the list of our references, see: http://www.sed.inf.u-szeged.hu/softwarequality.

What do we plan to do?

We are targeting the hardening of OpenStack from the quality point of view. Higher technical quality results in greater flexibility, enhanced reliability and stability, and, most importantly, much more effective maintainability.

Besides our years of expertise in software quality assurance, we use various static analysis techniques and tools as the basis of our work. We apply open-source tools (e.g. SonarQube) as well as our own Python static analyzer (SourceMeter). These tools periodically analyze the source code of the OpenStack modules and produce a list of coding rule violations, object-oriented metrics and code duplications. By constantly monitoring the code base of the targeted OpenStack modules, these tools help us determine where to focus our attention. The latest results produced by this tool chain are publicly available at http://openqa.sed.hu/dashboard/index/1. We hope the site will be frequently visited by developers who are also interested in fixing quality-related issues. This can help organize the maintenance of OpenStack, leading to a more convenient and improved development process in the long term.
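As a rough illustration of the kind of coding-rule check such tools perform, the sketch below uses Python's standard ast module to flag overly long functions. The 50-line threshold and the long_functions helper are hypothetical choices for this example, not part of SonarQube or SourceMeter.

```python
import ast

# Hypothetical threshold; real analyzers make limits like this configurable.
MAX_FUNCTION_LINES = 50

def long_functions(source: str):
    """Return (name, line_count) for each function exceeding the threshold."""
    tree = ast.parse(source)
    hits = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # end_lineno is available on Python 3.8+ AST nodes.
            length = node.end_lineno - node.lineno + 1
            if length > MAX_FUNCTION_LINES:
                hits.append((node.name, length))
    return hits
```

A real rule set covers many more checks (naming, complexity, dead code), but each one follows this same pattern: walk the syntax tree and report nodes that violate a configured limit.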

We process the large amount of available data into actual fixes as follows. First, we select the results that might indicate major quality issues, such as code parts with extreme metric values, components with a large number of rule violations, or large clones (i.e. copy-pasted code parts). Then every selected entry is manually checked to verify whether it is a true hit. Even among the true hits, we omit those of minor relevance and keep only the ones indicating real quality issues. From the remaining hits we create fixes by inspecting the affected code parts and performing suitable refactorings.
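The selection step above can be sketched as a simple threshold filter over analyzer findings. The record format, the thresholds, and the entity names below are illustrative assumptions for this sketch, not real analyzer output.

```python
# Illustrative findings; a real tool export would carry many more fields.
findings = [
    {"kind": "metric", "entity": "module_a.py:SomeManager", "value": 120},
    {"kind": "violation", "entity": "module_b.py", "count": 3},
    {"kind": "clone", "entity": "module_c.py", "lines": 85},
]

# Assumed per-kind thresholds for "extreme" values; tuned per project in practice.
THRESHOLDS = {"metric": 100, "violation": 20, "clone": 50}

def select_candidates(findings):
    """Keep only entries whose magnitude exceeds the per-kind threshold;
    these are the ones that go on to manual verification."""
    selected = []
    for f in findings:
        magnitude = f.get("value") or f.get("count") or f.get("lines")
        if magnitude > THRESHOLDS[f["kind"]]:
            selected.append(f)
    return selected
```

The manual verification and refactoring steps that follow cannot be automated this way; the filter only keeps the reviewer workload manageable.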

Several patches created this way are already in the review system. Moreover, some of our quality-related fixes (e.g. https://review.openstack.org/#/c/234170/, https://review.openstack.org/#/c/232020/) have already passed review and been merged.

Our intention is to keep up this process and continuously provide patches targeting internal quality improvements, besides the usual activities like bug fixes. We hope to get support from the community and that our efforts help make OpenStack a more reliable, more stable, and long-lived project. You can read more technical details here.

Why do we do it?

As a first step, we thought we could do what we do best, which is improving quality. Although we have since built extensive knowledge of OpenStack development, we still consider quality improvement one of our main tasks besides regular implementation work.