
According to Deloitte’s research, over 60% of companies have experienced project failure. The top reasons for such failures include poorly defined scopes, inadequate risk management, lack of detail in project planning, inappropriate monitoring of execution, and poorly managed project expectations.
In our experience, it is critical to adhere to software quality standards from the initial stage. Meanwhile, at later stages such as User Acceptance Testing (UAT) or Production Rollout, issues may occur because the project’s non-functional requirements (NFRs) were poorly defined or missing altogether. Software development providers seldom give these standards and requirements due consideration. As a result, projects fail because of inconsistent planning, data silos, and the lack of proper execution monitoring.
We value every engagement with our customers and prospects, which is why we have prepared several useful checklists you can apply when validating potential vendors.
DICEUS has a consistent methodology for planning and executing software product development and uses the quality models outlined in ISO/IEC 25010. Below is a comprehensive checklist for defining non-functional requirements (NFRs) that we use internally during regular audits.
Feel free to use the lists, and reach out to our experts for professional consulting on your business case.
Project initiation and project execution monitoring
Usability
Defined as the application owner’s intention to increase or extend the functionality of the software after it is deployed; this should be planned from the beginning, as it influences choices made during the design, development, testing, and deployment of the system.
Defined as how the application should be accessed and what restrictions and rules may apply.
Defined as the extent to which software components should be designed in such a way that they can be used in applications other than those for which they were initially developed.
Defined and provided as guidelines on time management for different time zones and data filtering.
Defined and provided as a list of all the languages the system should support, including localization parameters (such as number formatting, dates, and currency); see the short formatting sketch after this checklist.
Defined and provided as specifications on all the major issues (cultural aspects, translations, colors, possibly offensive lexicon, and others).
Legal requirements
Defined as packaging and distribution procedures, including the usage of .NET UIs and external components (such as DevExpress and Infragistics), their versions, and packaging. Any future need to deploy to a cloud environment and any lock-in to specific cloud vendors should be validated, and cloud-neutral installation and rollout options should be confirmed with the customer.
Defined as the ease with which the software can be installed on all necessary platforms, and the platforms on which it is expected to run.
Defined as the ability to support customized reports, provide online report viewing as well as printed reports, offer report scheduling, support various report formats and styles, and produce reports in HTML and PDF.
Defined as the coding or construction of the system: required standards, data conversion/migration needs, implementation languages, policies for database integrity, resource limits, and operating environments.
Serviceability
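To make the time-zone and localization items above more tangible, here is a minimal Python sketch, assuming Python 3.9+ and the third-party Babel library; the locales, time zones, and function names are illustrative examples rather than a prescribed implementation. It simply renders the same timestamp and amount under every locale and time zone the requirements list, which is a quick way to confirm that formatting expectations were actually written down.

```python
# Minimal sketch: previewing locale and time-zone requirements explicitly.
# Assumes Python 3.9+ (zoneinfo) and the Babel library (pip install babel);
# the locales and time zones below are examples only.
from datetime import datetime
from zoneinfo import ZoneInfo

from babel.dates import format_datetime
from babel.numbers import format_currency

# Example NFR inputs: every supported locale and time zone should be
# enumerated in the requirements, not discovered during UAT.
SUPPORTED_LOCALES = ["en_US", "de_DE", "uk_UA"]
SUPPORTED_TIMEZONES = ["America/New_York", "Europe/Berlin", "Europe/Kyiv"]

def preview_formats(amount: float, currency: str) -> None:
    """Print the same timestamp and amount in every supported locale/zone."""
    for tz_name in SUPPORTED_TIMEZONES:
        now = datetime.now(tz=ZoneInfo(tz_name))
        for loc in SUPPORTED_LOCALES:
            stamp = format_datetime(now, locale=loc)
            money = format_currency(amount, currency, locale=loc)
            print(f"{tz_name:>18} | {loc} | {stamp} | {money}")

if __name__ == "__main__":
    preview_formats(1299.99, "EUR")
```

Running such a preview during project initiation surfaces missing locale data long before UAT.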
Ultimately, project execution monitoring is critical to project success. We measure success as a confluence of key performance indicators (KPIs) and critical success factors (CSFs), together with the specific quality metrics allocated to the project. In our experience, we recommend that all our partners select and use a few metrics that best match their project type and scope. Below are the ten metrics we use most frequently within the SDLC; a short calculation sketch follows the table.
| Metric | Definition | Calculation |
| --- | --- | --- |
| Schedule Variance (SV) | Any difference between the scheduled completion of an activity and its actual completion | ((Actual calendar days – Planned calendar days) + Start variance) / Planned calendar days x 100 |
| Effort Variance (EV) | Difference between the planned effort and the effort actually required to undertake the task | (Actual effort – Planned effort) / Planned effort x 100 |
| Size Variance (SV) | Difference between the estimated size of the project and its actual size, normally measured in KLOC or FP | (Actual size – Estimated size) / Estimated size x 100 |
| Requirement Stability Index (RSI) | Visibility into the magnitude and impact of requirements changes | (1 – (Number of changed + Number of deleted + Number of added) / Total number of initial requirements) x 100 |
| Productivity (project) | Output from a related process for a unit of input | Actual project size / Actual effort expended in the project |
| Productivity (test case preparation) | Test cases prepared for a unit of effort | Actual number of test cases / Actual effort expended in test case preparation |
| Productivity (defect detection) | Defects found for a unit of review and testing effort | Actual number of defects (review + testing) / Actual effort spent on review and testing |
| Productivity (defect fixation) | Defects fixed for a unit of effort | Actual number of defects fixed / Actual effort spent on defect fixation |
| Schedule Variance for a phase | The deviation between the planned and actual schedule for a phase within the project | ((Actual calendar days for a phase – Planned calendar days for a phase) + Start variance for a phase) / Planned calendar days for a phase x 100 |
| Effort Variance for a phase | The deviation between the planned and actual effort for a phase within the project | (Actual effort for a phase – Planned effort for a phase) / Planned effort for a phase x 100 |
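As a quick illustration of how the variance formulas above are applied, here is a minimal Python sketch; the function names and sample numbers are invented for this example and do not represent our internal tooling.

```python
# Minimal sketch of the variance and stability formulas from the table above.
# All function and variable names are illustrative; plug in your own
# project-tracking data.

def schedule_variance(actual_days: float, planned_days: float,
                      start_variance_days: float = 0.0) -> float:
    """SV, % = ((Actual - Planned) + Start variance) / Planned x 100."""
    return ((actual_days - planned_days) + start_variance_days) / planned_days * 100

def effort_variance(actual_effort: float, planned_effort: float) -> float:
    """EV, % = (Actual effort - Planned effort) / Planned effort x 100."""
    return (actual_effort - planned_effort) / planned_effort * 100

def requirement_stability_index(changed: int, deleted: int, added: int,
                                initial_requirements: int) -> float:
    """RSI, % = (1 - (changed + deleted + added) / initial requirements) x 100."""
    churn = (changed + deleted + added) / initial_requirements
    return (1 - churn) * 100

if __name__ == "__main__":
    # Example: a phase planned for 40 days and 320 person-hours that actually
    # took 46 days and 360 person-hours, starting 2 days late.
    print(f"Schedule variance: {schedule_variance(46, 40, 2):.1f}%")   # 20.0%
    print(f"Effort variance:   {effort_variance(360, 320):.1f}%")      # 12.5%
    # 120 initial requirements with 6 changed, 2 deleted, and 4 added.
    print(f"RSI:               {requirement_stability_index(6, 2, 4, 120):.1f}%")  # 90.0%
```

The thresholds that count as acceptable variance or stability depend on the project type and scope, which is why we recommend agreeing on them with the vendor at project initiation.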