
What metrics can you use to verify the quality of software?

 

Software quality describes how accurately the product fits the project objectives, the client’s needs and the general requirements. Usually, the completion of all functional requirements is the bare minimum for a product to be considered done. The code behind these functional requirements must then be verified by inspecting it for bugs. The way the developers organise their work and check the code matters here, as it helps in understanding the quality of the code, monitoring project status and creating quality models that can serve as a benchmark for future code.

Equally important as testing the code is the evaluation of non-functional requirements such as the UX, UI and many other categories. Doing so is vital, as no amount of bug testing by the internal team can replace the feedback from a client who has just downloaded the app. One must, however, remember that extrinsic factors, such as the timeline in which the product was delivered and the degree of satisfaction of the client, users and the team, play a role in measuring software quality as well.

Why is measuring software quality important?

Ever heard the phrase “If you can’t measure it, you can’t improve it”? Well, it is very applicable in software development, where the focus is on continuous improvement and the delivery of a world-class product.

  1. Helps to reduce development costs – measuring the quality of the software is a great way of receiving objective data on the software the team is creating, the work of the QA team and so on. Doing so from the very early stages of the project is important for maintaining quality throughout the software development life cycle and provides the team with feedback on what they need to improve. Such monitoring and optimisation helps to reduce development costs for the client.
  2. Helps track team effectiveness – knowing how much time it takes to fix a bug can give an indication of the team’s effectiveness and help the client decide which bugs need to be fixed if the team is to meet a set deadline.
  3. Aids in planning future functionalities – gathering data on software quality and the development process helps with planning future deadlines, estimating how many hours need to be devoted to implementing a functionality and how much it would cost.
  4. Increases client confidence – a culture of quality assurance creates confidence in a developer’s skills in the minds of the customer and even the end users. This is one reason why we have no qualms about downloading apps from well-known brands such as Google and Facebook, compared with more obscure ones.


Bugs can be expensive. Image sourced from deepsource.io

8 metrics to verify software quality

Which metric you use to measure the quality of software depends on the software and its expected outcome, as there are too many to choose from. In my opinion, such metrics should be chosen carefully for the attribute you want tested, and they should cover the basics: time spent on the project, software defects, the size of the product and the effort the team put into the project.

These are some of the most popular measures of software quality you can track on your project for basic insights:

  1. Code churn – a very popular metric that measures the amount of code that is deleted, added or edited in the repository over time. What is important here is to interpret the data correctly: a large code churn value can be expected in a new feature. The same in older code, however, can be a hint of trouble, as it may mean the developers are focusing too much on fixing technical debt instead of working on new functionalities (see the Git-based sketch after this list).

    Image sourced from Pullrequest

  2. Lines of code (LOC) – as the name suggests, this metric refers to the number of lines of code in a functionality and is a rough indicator of how complex or efficient the code is. It can also be used as a company limit on how many lines of code can be written for a functionality, to make sure it is not overly complicated. Generally, counting lines of code as a measure of programmer productivity is not good practice, as it can become a reason to write unnecessarily complicated code.
  3. Number of bugs or defects per KLOC (1,000 lines of code) – also called defect density, this metric helps predict the number of defects that may arise based on previously collected data. Tracking it is important, as a high number of critical bugs per KLOC may be a signal to focus on tests and slow down on implementing new features (a quick calculation sketch follows this list).
  4. Lead time – an indicator of how long it takes an idea to go from concept to development and deployment. It takes into account every step of the software development life cycle, such as design, development and testing, and is a good measure of how a project is progressing. This data can then be used to help plan and estimate releases and future work.
  5. Burndown chart – a chart that tracks the amount of work remaining against the time left to complete it. With time on the horizontal axis, the burndown chart shows the work that is already done (bug fixes, for example), new work that appears in the meantime (newly discovered bugs, for example) and the rate at which the work is being completed. Such charts, in our experience, help to plan the end date of a project with better accuracy and to monitor the efficiency of the team.

    Image sourced from Atlassian

  6. Release confidence – software quality can be measured beyond quantitative methods by taking in qualitative feedback, such as the confidence developers have in their work. Doing so is easy with a board where developers can mark the product features that they feel can be shipped soon with a high degree of confidence.
  7. % crash-free users – monitoring the crash rate of an app upon its release is important to ensure the app is functioning as it should. The crash rate should ideally stay under 1%; anything higher may mean trouble and should be investigated, rectified and pushed out via an update (the calculation sketch after this list shows how the percentage is derived).
  8. Customer satisfaction – one of the easiest and best ways to receive customer feedback on an app is through its reviews on the App Store and Play Store. Qualitative feedback such as this should be gathered, sliced, diced, analysed and investigated, as it can yield very good insight into the app and its functionalities. Pushing updates based on such feedback will help you be seen as an organisation that listens to its users and will increase product uptake.
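
Code churn itself is usually reported by repository analytics tools, but a rough estimate can be pulled straight from Git history. The short Python sketch below is only an illustration of the idea: it assumes it is run inside a local Git repository with `git` installed, and it simply sums the lines added and deleted over the last 30 days from `git log --numstat`.

```python
# Rough code-churn estimate from Git history (illustrative sketch, not a
# definitive implementation). Assumes git is installed and the script is
# run inside the repository being measured.
import subprocess


def code_churn(since: str = "30 days ago") -> int:
    """Sum of lines added and deleted across all commits since `since`."""
    log = subprocess.run(
        ["git", "log", f"--since={since}", "--numstat", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout
    churn = 0
    for line in log.splitlines():
        parts = line.split("\t")  # numstat format: added<TAB>deleted<TAB>file
        if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
            churn += int(parts[0]) + int(parts[1])
    return churn


if __name__ == "__main__":
    print(f"Approximate code churn over the last 30 days: {code_churn()} lines")
```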
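
And to make the arithmetic behind defect density and crash-free users concrete, here is a minimal calculation sketch. The function names and every number in it are made up purely for illustration.

```python
# Illustrative calculations only - all figures below are hypothetical.

def defect_density(bugs_found: int, lines_of_code: int) -> float:
    """Defects per KLOC: number of bugs divided by thousands of lines of code."""
    return bugs_found / (lines_of_code / 1000)


def crash_free_users(total_users: int, users_who_crashed: int) -> float:
    """Percentage of users who never experienced a crash."""
    return (1 - users_who_crashed / total_users) * 100


# 45 bugs found in a 60,000-line codebase -> 0.75 defects per KLOC
print(defect_density(45, 60_000))
# 150 out of 20,000 users hit a crash -> 99.25% crash-free users
print(crash_free_users(20_000, 150))
```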

Conclusion

With a variety of metrics available for measuring software quality, it can be difficult to pick the right one, which is why we recommend using a range of them to see the full picture. Just remember, the metrics you opt for should reflect what is important for the project – for example, if a deadline is approaching, focus on monitoring the burndown chart or lead time to gauge whether the deadline is feasible. If providing the most reliable product is what matters most, crash rate and user reviews are a better fit for your needs.


Usability Testing: the Key to Design Validation

 

Our partners hire us to bring their ideas to life, and design validation helps us make sure the end product meets expectations. One key way we validate our designs is by performing usability tests: the process of letting potential users try out the product and share their thoughts. Such active involvement from potential end users at different stages of product development helps us keep the product user-centred and not get ahead of ourselves. This, in turn, keeps the project’s velocity in check and helps us reduce the number of issues that might arise as the project is nearing its end.

What’s the difference between regular QA testing and usability testing?

QA testing is performed on different versions of a product to find bugs by testers who have a good amount of knowledge of the app and its functionalities. Usability testing, on the other hand, is more concerned with the design intuitiveness of the product and is done with users who have no prior exposure to it. Such testing is paramount to the success of an end product, as a fully functioning app that creates confusion among its users will not last long.

Why is usability testing important when developing a product?

1. It lets you avoid design changes that might drive users away – remember Instagram’s horizontal scrolling, which was reversed within a few hours of the update?

2. It helps decide which idea is best when the team has many ideas on the table. Such testing is a great opportunity for designers to learn what works and what doesn’t, and it will shape the way they approach future designs.

3. It highlights user expectations of functionalities and allows developers to create user-centric apps.

4. Iterative tests help pick the best options of wording, icons or fonts.


At what stage of a product should usability testing be done?

It is important to start usability tests in the early stages of the design process, when it is easy to make changes. Another round of testing should then be done when the design is nearing completion, as it allows for the testing of a prototype similar to the end product. We also encourage our partners to conduct usability tests after the launch of an app, as interaction data from the end users is invaluable in improving an already great product.

The type of usability testing depends on many factors, such as the project, its level of progress, the costs and the resources that can be deployed for testing. What you need to know, however, is that most prototype usability testing is divided between low-fidelity and high-fidelity tests. Low-fidelity tests are helpful when testing ideas, as the prototypes do not need to be fully developed. The prototypes for high-fidelity tests, on the other hand, need to resemble the end product very closely and require more resources.

It is also wise to know what you are looking for through usability testing. Behavioural testing can provide quantitative measurements, such as the time between taps in an app or eye tracking. Attitudinal testing, on the other hand, concentrates on the needs and attitudes of users and involves qualitative data collection via surveys and interviews.

What is the process for conducting such usability tests? 

The first stage of usability testing is deciding on the type of testing required and the measures by which success will be defined. A prototype with the functionalities you’d like to test must then be prepared, alongside a scenario of the behaviour expected of the participants. It’s also important to create a test environment that reflects the environment the end product will be used in.

In the second stage you pick and invite a sample of the population your end product will be used by. Remember that the size and the quality of the sample should be a good representation of the end users. For example, designing a website for children requires the participation of children of different ages, due to the cognitive differences between age groups. Participants should also be reminded that the test is there to validate the designs and that no score is given for sticking to the scenario drawn up in stage one.

The third stage is where the data from stage two is analysed. The feedback gathered here can reveal issues with something as specific as the order of buttons, or give rise to a completely new functionality that would make the product’s user experience better.


To summarise, usability testing is pivotal to validating the designs made for software products. Skipping it can produce an app that looks and feels great in the eyes of the designers but is found difficult to use by the most important stakeholder: its end users. An app with an unvalidated design is a recipe for disaster, as a bad UX can kill any app and damage your brand reputation. It will stain the trust your users have in the brand and hurt the uptake of any future software products as well.