So-called “historical benchmark data” for software projects has proven to be incomplete and inaccurate unless the data is validated through interviews with the project teams. The most common omissions are unpaid overtime, management effort, all forms of client effort, and the work of part-time specialists such as architects, business analysts, technical writers, function point counters, and integration specialists.
Quality data is even less reliable: it routinely omits all defects found prior to testing, and sometimes even defects found during testing go unrecorded. The following table shows that an average software “benchmark” captures only about 37% of the full and true effort expended on a typical software project:
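To make the 37% figure concrete, the sketch below uses a purely hypothetical effort breakdown (the category names follow the omissions listed above, but the staff-hour values are invented for illustration and are not from the report). It shows how a benchmark that records only core developer effort, while omitting overtime, management, client, and specialist work, can end up reflecting roughly 37% of the true total:

```python
# Hypothetical effort breakdown in staff-hours for one project.
# These numbers are illustrative assumptions, not data from the report.
full_effort = {
    "developer effort (recorded)": 3700,
    "unpaid overtime": 1200,
    "management effort": 1500,
    "client effort": 1600,
    "part-time specialists": 2000,
}

# A typical benchmark captures only the recorded developer effort.
recorded = full_effort["developer effort (recorded)"]
total = sum(full_effort.values())

coverage = recorded / total
print(f"Benchmark captures {coverage:.0%} of true effort")
# → Benchmark captures 37% of true effort
```

The point of the arithmetic is that each omitted category is individually modest, yet together they can dwarf the recorded effort, which is why validation interviews with the project team are needed before the data can be trusted.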
FULL REPORT: 26 PAGES