The Costs, Accuracy, and Uses of Software Benchmarks


What is called “historical benchmark data” for software projects has proven to be incomplete and incorrect unless the data is validated by interviews with project teams. The most common omissions are unpaid overtime, management effort, all forms of client effort, and the work of part-time specialists such as architects, business analysts, technical writers, function point counters, and integration specialists.
Quality data is even less complete: it routinely omits all defects found prior to testing, and sometimes even test defects go unrecorded. The following table shows that an average “software benchmark” contains only about 37% of the full and true effort expended on a typical software project:
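To make the 37% figure concrete, here is a minimal sketch of how omitting whole effort categories shrinks a benchmark to a fraction of true project effort. The category names and hour counts below are hypothetical assumptions chosen for illustration, not the article's actual table:

```python
# Hypothetical effort breakdown (hours); only coding hours are
# assumed to reach the benchmark, the rest are commonly omitted.
true_effort = {
    "coding": 370,                 # usually recorded
    "unpaid_overtime": 120,        # commonly omitted
    "management": 150,             # commonly omitted
    "client_effort": 180,          # commonly omitted
    "part_time_specialists": 180,  # commonly omitted
}

recorded_categories = {"coding"}

total = sum(true_effort.values())
reported = sum(h for cat, h in true_effort.items()
               if cat in recorded_categories)
coverage = reported / total

print(f"reported {reported}h of {total}h "
      f"({coverage:.0%} of true effort)")
```

With these illustrative numbers the benchmark captures 370 of 1,000 hours, i.e. 37% coverage, which is why validating the data through project-team interviews matters.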

