
MaturityAssessmentSurveyResults

As of December 2014, the maturity assessment task force has finalised the list of quality attributes, metrics, and scales that will be the foundation of the maturity assessment process. We held several meetings, conference calls, and exchanges on the mailing list to make it an open, transparent, and collaborative effort. As a final step toward an established common agreement, a survey was published that ran from the beginning of December 2014 to mid-January 2015. Here are the results.

Participants

  • 7 people answered the survey. Not a lot, but many people told us they had already expressed their ideas and feedback in the various meetings held before.
  • The experience of participants ranges from 2 to 27 years, with an average of 15 years. They are all considered medium to highly knowledgeable in software quality -- which is probably why they took the time to answer the survey, too.
  • Domains of activity include IT services, research, cross industry and aeronautics.

Characterising the maturity assessment

  • 50% think it represents the Likelihood of the project to maintain a healthy evolution,
  • 50% think it has a larger scope (Likelihood of the project to maintain a healthy evolution, Likelihood of the produced software to be of high quality, Likelihood of the project to be safely used).

Usages of the maturity assessment

Summary of mentioned usages, in order of importance:

  1. Ability to assess COTS,
  2. Promoting best practices and quality in software projects,
  3. Providing guidelines and reducing effort for quality evaluation.

See reference [1] for more detailed answers.

Automation of the maturity assessment process

Importance of, and reasons for, automating the maturity assessment process:

  1. High importance. Apply it at a large scale, on dozens of projects.
  2. High Importance. Manual gathering of metrics is not reliable.

Interest of results

Why results of the maturity assessment are interesting, in order of priority:

  1. Suitability of projects for industrial usage (is the project mature enough for a given set of maturity requirements)
  2. Benchmarking (how mature is this project in comparison with other projects in a wider ecosystem)
  3. Evaluation of projects by themselves (to assess whether they are mature or not)

The quality model

Completeness of axes

100% of participants state that the 3 defined axes (ecosystem, process, product) are enough.

Comments:

  • Maybe add visibility (note: visibility exists in the model, but is inactive because no reliable measure has been found).
  • Add references to ISO/CMM to better understand attributes and check completeness.

Quality attributes

New attributes:

  • Requirements (several mentions)
  • Documentation
  • Reviews

Not for now, but would be nice someday:

  • Security (process, product)
  • Scalability (product): multicore support, distribution, flat vs exponential duration algorithms, etc.
  • Availability of professional services (ecosystem)
  • Use of continuous integration / test (process)
  • Multiplatform support / portability (product)

Comments on existing attributes:

  • Reusability does not take into account dependency issues such as artifacts with high fan-in/fan-out or cyclic dependencies between packages.
  • The model doesn't take into account evolutionary aspects. For example, the evolution of the number of code committers over time is more important than the raw number of committers for detecting maturity problems (see the sketch below).
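A minimal sketch of what such an evolution-oriented metric could look like, in Python, assuming monthly committer counts are already extracted from the SCM (the data and the way it is aggregated here are illustrative, not part of the model):

  # Minimal sketch: detect a shrinking committer base from monthly counts.
  # The numbers below are purely illustrative; a real implementation would
  # pull them from the SCM data source used by the assessment.
  monthly_committers = {"2014-07": 9, "2014-08": 8, "2014-09": 7,
                        "2014-10": 6, "2014-11": 5, "2014-12": 4}

  def committer_trend(counts):
      """Average month-to-month change in the number of committers."""
      values = [counts[month] for month in sorted(counts)]
      deltas = [later - earlier for earlier, later in zip(values, values[1:])]
      return sum(deltas) / len(deltas)

  trend = committer_trend(monthly_committers)
  latest = monthly_committers[max(monthly_committers)]
  # A raw count of 4 committers says little on its own; a steadily negative
  # trend is the signal that maturity may be at risk.
  print(f"latest committer count: {latest}, average monthly change: {trend:+.1f}")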

Mapping quality attributes and metrics

Seems OK for most participants.

Comments:

  • The number of tests should be made relative to complexity, not code size [2].
  • Test Management metrics are relevant to the product only, not the process [3].
  • Review the ratio system: in some cases using SLOC as a denominator is not good; the comment rate is such an example [4].
  • Not convinced of the usefulness of marketplace indicators.

Number of levels for the scale (5)

  1. Appropriate for most participants (85%)
  2. Too small (15%)

Data sources

New data sources that should be considered:

  • Requirements
  • Integration and validation tests
  • Documents/Plans (specification, design, verification, integration, development).
  • Forums (for projects that use them rather than mailing lists)

Data sources relevant for maturity assessment (in order of preference):

  • ITS, Tests
  • SCM
  • Reviews, PMI, Analysis
  • MLS
  • Documentation, Licensing

Other tools

Tools suggested (for code analysis):

  • Checkstyle
  • Simian for code duplication
  • Frama-C for C, Astrée.

Dashboard / Visualisation

  • Very useful
  • Need more graphs
  • Should be able to analyse and display information about multiple releases of the project.

References

[1] Full answers for usages of the maturity model:

  • Performing benchmarks with external applications.
  • It would allow reducing the manual effort for quality estimation.
  • Could be used as guidance and an indicator for all PolarSys projects, and my company delivers services around PolarSys.
  • It will make it possible to efficiently follow the health and readiness of PolarSys technologies. As such, it is a key element for selecting technological components, specific releases of these components, or for deciding to invest in a given technology.
  • To promote quality and to build/maintain reputation
  • Ability to assess COTS
  • Improving practices in software projects

[2] The metric "number of tests relative to code size" uses the SLOC metric as denominator. For me, comparing tests with code size is very risky: the number of tests needed depends on the complexity, not on the number of lines of code, and the complexity per line of code depends on the nature of the project, which makes it difficult to evaluate the impact on the model. Maybe comparing tests with global complexity makes more sense.
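As an illustration of the alternative suggested in [2], here is a minimal sketch, in Python, of a test count normalised by global cyclomatic complexity rather than by SLOC (the figures and field names are hypothetical):

  # Minimal sketch: normalise the number of tests by global cyclomatic
  # complexity instead of by SLOC. All figures are hypothetical.
  project = {"tests": 250, "sloc": 40000, "cyclomatic_complexity": 5200}

  tests_per_ksloc = 1000 * project["tests"] / project["sloc"]
  tests_per_complexity = project["tests"] / project["cyclomatic_complexity"]

  # The first ratio penalises verbose but simple code; the second follows
  # the intuition that testing effort should track decision points.
  print(f"tests per kSLOC: {tests_per_ksloc:.1f}")
  print(f"tests per unit of complexity: {tests_per_complexity:.3f}")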

[3] Every metric in test management corresponds to product quality (reliability), not to process quality (according to the SIG stability characteristic). You can have a mature test process with a test plan, test campaigns, test reviews, and test reports, but still not a great coverage, number of tests, etc.; it depends on the available effort. Better examples of metrics would be the percentage of releases having a test report, a test campaign or failed tests, or the completeness of a test campaign (all bugs/issues found during the campaign have been fixed or assigned to another release/campaign).
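A minimal sketch, in Python, of the kind of process-oriented metrics suggested in [3]; the release records and field names are made up for illustration:

  # Minimal sketch: process-oriented test metrics computed per release.
  # The release records below are invented for illustration.
  releases = [
      {"name": "1.0", "has_test_report": True,  "issues_found": 12, "issues_closed_or_deferred": 12},
      {"name": "1.1", "has_test_report": False, "issues_found": 7,  "issues_closed_or_deferred": 5},
      {"name": "1.2", "has_test_report": True,  "issues_found": 9,  "issues_closed_or_deferred": 9},
  ]

  report_ratio = sum(1 for r in releases if r["has_test_report"]) / len(releases)

  # A campaign is "complete" when every issue it found was fixed or assigned
  # to a later release/campaign.
  complete = sum(1 for r in releases
                 if r["issues_closed_or_deferred"] >= r["issues_found"])
  completeness_ratio = complete / len(releases)

  print(f"releases with a test report: {report_ratio:.0%}")
  print(f"complete test campaigns: {completeness_ratio:.0%}")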

[4] Globally, to scale metrics according to project size, the model uses the number of LOC, which is not suited to all metrics/rules, and high-maturity code areas will hide low-maturity code areas. For example, take the comment rate metric: you can get a good score with a mix of heavily documented methods and poorly documented methods. However, this instability may correspond to two different kinds of developers: one with a high maturity and another with a very low maturity in documentation. Maybe counting the methods with a comment rate lower than a threshold and dividing by the number of methods would be more precise. Another metric is the percentage of documented methods. SonarQube can identify these kinds of methods easily.
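A minimal sketch, in Python, of the threshold-based variant proposed in [4], assuming per-method comment rates are already available (the 10% threshold and the data are illustrative):

  # Minimal sketch: percentage of methods whose comment rate falls below a
  # threshold, instead of one project-wide comment rate. Data is illustrative.
  method_comment_rates = [0.35, 0.02, 0.40, 0.00, 0.28, 0.05, 0.31]
  THRESHOLD = 0.10  # hypothetical cut-off for "poorly documented"

  poorly_documented = sum(1 for rate in method_comment_rates if rate < THRESHOLD)
  ratio = poorly_documented / len(method_comment_rates)

  # A project-wide average would hide that roughly 43% of these methods are
  # barely documented; the per-method ratio exposes the low-maturity areas.
  print(f"methods below the comment-rate threshold: {ratio:.0%}")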
