Project initiation and project execution monitoring: Essential checklists
According to Deloitte’s research, over 60% of companies have experienced project failure. The top reasons for such failures include poorly defined scope, inadequate risk management, a lack of detail in project planning, poor monitoring of execution, and unrealistic project expectations.
In our experience, it is critical to adhere to software quality standards from the initial stage. In later stages, such as User Acceptance Testing (UAT) or Production Rollout, issues may occur because the project's non-functional requirements (NFRs) were poorly defined or missing altogether. Software development providers seldom consider these standards and requirements. As a result, projects fail because of inconsistent planning, data silos, and a lack of proper execution monitoring.
We value every cooperation with our customers and prospects, so we have prepared some useful checklists to use when you need to validate your potential vendors.
DICEUS has a consistent methodology for planning and executing software product development and uses all the quality models outlined in ISO/IEC 25010. Below is a comprehensive checklist for defining non-functional requirements (NFRs), which we use internally during regular audits.
Feel free to use these lists, and reach out to our experts for professional consulting on your business case.
Project planning: NFRs checklist
Reliability – How often does the system experience critical failures?
Defined as the probability that the product will operate without failure for a specified number of users, transactions, or a specified period of time.
Provided as test results and a forecast indicating the level of confidence that the product will meet the requirements.
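One common way to turn such a probability into a concrete number is the exponential failure model, which the checklist does not mandate; the MTBF figure below is purely illustrative.

```python
import math

def reliability(hours: float, mtbf_hours: float) -> float:
    """Probability of failure-free operation for `hours`, assuming an
    exponential failure model with the given mean time between failures."""
    return math.exp(-hours / mtbf_hours)

# With an assumed MTBF of 1,000 hours, the chance of surviving a 24-hour window:
print(round(reliability(24, 1000), 3))  # → 0.976
```

A forecast of this kind, backed by test results, is what the "provided as" clause above asks a vendor to deliver.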
Performance – How fast does the system return results?
Defined as all the workloads, scenarios, and resources related to a system event, i.e., one or more use cases and/or scenarios involved in a dynamic situation, e.g., a “busy hour” during which the maximum processor load is expected.
Provided as a list of system events prioritized by expected utilization and the criticality of use case scenarios.
Availability – What proportion of time is the system available to users, accounting for downtime?
Defined by the planning of power/network infrastructure management, including a backup strategy.
Provided as a probability or a percentage of time during which the system is accessible for operation over a given period, e.g., the system may be available 98% of the time during a month.
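The percentage above follows directly from total time and downtime; a minimal sketch (the 720-hour month and 14.4 hours of downtime are illustrative numbers chosen to reproduce the 98% example):

```python
def availability_percent(total_hours: float, downtime_hours: float) -> float:
    """Availability as the share of time the system is operational."""
    return (total_hours - downtime_hours) / total_hours * 100

# A 720-hour month with 14.4 hours of downtime matches the 98% example above.
print(round(availability_percent(720, 14.4), 2))  # → 98.0
```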
Usability – How hard is it to use the product?
Defined as usability criteria assuming the evaluation of learnability, efficiency, memorability, errors, and satisfaction.
Provided as a specific version of guidelines for UI and UX.
Security – How is the data protected against unauthorized access and malware attacks?
Defined as specific risks and threats that you need your system to be protected from.
Provided as privacy requirements defined as protection of sensitive information; physical requirements defined as physical protection of the system; access requirements defined as account types/groups and their access rights.
Flexibility – How likely is it that the system will need additional functionality?
Defined as the application owner's intention to increase or extend the functionality of the software after it is deployed, which should be planned from the beginning; it influences choices made during the design, development, testing, and deployment of the system.
Scalability – How much will the system’s performance change with higher workloads?
Defined as how the system is expected to scale up.
Provided as the highest workloads under which the system will still meet the performance requirements.
Maintainability – How much time is needed to fix the solution or one of its components?
Defined and verified by a DevOps Architect with input from the Solution Architect.
Provided as the probability of a fix within a given period of time.
Connectivity – What access levels should be provided?
Defined as how the application should be accessed and what restrictions and rules could apply.
Data accessibility and storage – Which data will be archived, how long will it be kept, and what happens to the data at the end of the retention period?
Defined as a data retention strategy in conjunction with performance (old data cannot be served with the same performance as data in an online database).
Provided as a data retention policy.
Documentation – What documentation will be provided?
Defined as a list of documents that should be provided to the customer.
Provided as installation documentation, patching/upgrade documentation, architecture/infrastructure documentation, and operational documentation, including online help and a user guide.
Reusability – In what forms will software development assets be reused?
Defined as the extent to which software components should be designed so that they can be used in applications other than those for which they were initially developed.
Time/Date management – In what time zones will the product be used, and how will the data be filtered?
Defined and provided as guidelines on time management for different time zones and data filtering.
Languages – What languages will the system support?
Defined and provided as a list of all the languages that the system should support, including localization parameters (like formatting, date, currency).
Cultural, religious, and political requirements
Defined and provided as specifications on all the major issues (cultural aspects, translations, colors, possibly offensive lexicon, and other aspects).
Legal requirements – What legal agreements will be signed? What regulations should be followed?
Defined as a list of legal requirements.
An agreement between the parties on the legal context of the data. Check that no production data is available to the vendor, not only in the production environment but also in UAT and SIT/DEV environments.
A vendor should immediately raise any issues in such a context to senior architects and management.
Auditability
Defined as the audited data (flows/content).
Data retention rules (see data accessibility and storage).
Installability – How should deployment be implemented?
Defined as packaging and distribution procedures, including the versions and packaging of .NET UIs and external components (such as DevExpress and Infragistics). Any future need for deployment to a cloud environment, and any lock-in to specific cloud vendors, should be validated; cloud-neutral installation and rollout options should also be confirmed by the customer.
Portability – How easily can the software be installed and run on different platforms?
Defined as the ease with which the software can be installed on all necessary platforms, and the platforms on which it is expected to run.
Reporting – How will reporting be handled?
Defined as the ability to support customized reports, provide online report viewing as well as printed reports, offer report scheduling, support various report formats and styles, and produce reports in HTML and PDF.
Implementation – How should the software be implemented?
Defined as the coding or construction of a system: required standards, data conversion/migration needs, implementation languages, policies for database integrity, resource limits, and operating environments.
Software Configuration Management (SCM) – How will change requests and changes be tracked?
Defined as SCM standards.
Provided as a list of the tool(s) and methods for managing and tracking changes to application software as it evolves through the development lifecycle.
Capacity requirements should include the following:
Number of local users – Estimated number of main office end-users.
Number of remote users – Estimated number of regional or other remote end-users.
Estimated number of concurrent users – Estimated number of concurrent application users; this can be a range but should at minimum define the expected maximum.
Estimated transaction volumes (average) – The average number of transactions to be processed over a given period.
Peak processing period – Description of the peak processing period (e.g., month-end, daily 8-9 AM).
Estimated transaction volumes (peak) – The peak number of transactions to be processed over a given period.
Disk space (application server) – Estimated disk space requirements for application libraries.
Disk space (application database) – Estimated disk space requirements for application databases (Development, Test, Production, and Production Maintenance).
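A capacity requirements document can be captured as a simple structured record; the sketch below uses hypothetical field names and illustrative values, not figures from any real project.

```python
from dataclasses import dataclass

@dataclass
class CapacityRequirements:
    # Hypothetical field names; adapt them to your own template.
    local_users: int            # estimated main office end-users
    remote_users: int           # estimated regional / other remote end-users
    concurrent_users_max: int   # expected maximum of concurrent users
    tx_volume_avg: int          # average transactions per period
    tx_volume_peak: int         # peak transactions per period
    peak_period: str            # e.g. "month-end", "daily 8-9 AM"
    disk_gb_app_server: float   # application libraries
    disk_gb_databases: float    # Dev/Test/Prod/Prod-maintenance databases

# Illustrative values only.
reqs = CapacityRequirements(
    local_users=150, remote_users=400, concurrent_users_max=120,
    tx_volume_avg=5_000, tx_volume_peak=20_000,
    peak_period="daily 8-9 AM",
    disk_gb_app_server=20.0, disk_gb_databases=500.0,
)
print(reqs.concurrent_users_max)  # → 120
```

Recording the checklist this way makes it easy to diff capacity assumptions between project phases.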
Serviceability
Defined as support behavior in case of problems: response timings, whether issues are handled automatically or manually, and the tools/logs used for detecting them.
Provided as a Disaster Recovery plan, recovery time, etc.
Project execution monitoring: Metrics checklist
Ultimately, project execution monitoring is critical to project success. We measure success as a confluence of key performance indicators (KPIs) and critical success factors (CSFs), along with the specific quality metrics allocated for the project. In our experience, we recommend that our partners select and use a few metrics that best match their project type and scope. Below are the 10 most frequently used metrics we leverage within the SDLC.
Schedule Variance (SV) – Any difference between the scheduled completion of an activity and the actual completion.
Calculated as ((Actual calendar days – Planned calendar days) + Start variance) / Planned calendar days x 100.
Effort Variance (EV) – The difference between the planned effort and the effort actually required to undertake the task.
Calculated as (Actual effort – Planned effort) / Planned effort x 100.
Size Variance (SV) – The difference between the estimated and the actual size of the project, normally in KLOC or FP.
Calculated as (Actual size – Estimated size) / Estimated size x 100.
Requirement Stability Index (RSI) – Visibility into the magnitude and impact of requirements changes.
Calculated as (1 – (Number of changed + Number of deleted + Number of added) / Total number of initial requirements) x 100.
Productivity (project) – Output from a related process per unit of input.
Calculated as Actual project size / Actual effort expended in the project.
Productivity (test case preparation) – The rate at which test cases are prepared.
Calculated as Actual number of test cases / Actual effort expended in test case preparation.
Productivity (defect detection) – The rate at which defects are found.
Calculated as Actual number of defects (review + testing) / Actual effort spent on review and testing.
Productivity (defect fixation) – The rate at which defects are fixed.
Calculated as Actual number of defects fixed / Actual effort spent on defect fixation.
Schedule Variance for a phase – The deviation between planned and actual schedules for the phases within a project.
Calculated as (Actual calendar days for a phase – Planned calendar days for the phase + Start variance for the phase) / Planned calendar days for the phase x 100.
Effort Variance for a phase – The deviation between planned and actual effort for various phases within the project.
Calculated as (Actual effort for a phase – Planned effort for the phase) / Planned effort for the phase x 100.
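As a quick sanity check, the variance and stability formulas above translate directly into code; the input numbers below are illustrative, not from a real project.

```python
def schedule_variance(actual_days: float, planned_days: float,
                      start_variance: float = 0.0) -> float:
    """((Actual - Planned) + Start variance) / Planned x 100."""
    return ((actual_days - planned_days) + start_variance) / planned_days * 100

def effort_variance(actual_effort: float, planned_effort: float) -> float:
    """(Actual - Planned) / Planned x 100."""
    return (actual_effort - planned_effort) / planned_effort * 100

def requirement_stability_index(changed: int, deleted: int, added: int,
                                initial_total: int) -> float:
    """(1 - (changed + deleted + added) / initial requirements) x 100."""
    return (1 - (changed + deleted + added) / initial_total) * 100

# Illustrative: a 50-day plan that took 55 days after a 2-day late start.
print(round(schedule_variance(55, 50, start_variance=2), 1))  # → 14.0
print(round(effort_variance(120, 100), 1))                    # → 20.0
print(round(requirement_stability_index(5, 2, 3, 100), 1))    # → 90.0
```

A positive variance means the project overran its plan; an RSI close to 100 means the requirements stayed stable.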