Evaluation techniques for interactive systems

Krishan Shamod
4 min read · Apr 17, 2022


When we build a product, we need to make sure the product or design meets its requirements. The process of assessing the design and testing the system against those requirements is called evaluation.

Rather than being a single phase, evaluation should take place throughout the design life cycle. Early evaluation is important because it lets us identify and fix problems with the system before they become costly to correct.

Goals of evaluation

👉 Assess system functionality and usability

The system’s functionality is important because it must meet the requirements of the users’ tasks. Evaluation at this level may measure the user’s performance with the system to assess how effectively it supports the task.

👉 Assess the effect of the interface on user

It is also essential to consider the user’s experience of the interaction and how the system affects them. This includes factors such as how easy the system is to learn, how usable it is, and how satisfied users are with it.

👉 Identify problems related to both the functionality and usability

These could be aspects of the design that, when used in their intended context, cause unexpected results or confuse users. This goal concerns both the functionality and the usability of the design.

Evaluation through expert analysis

Ideally, the first evaluation of a system should take place before any implementation work begins. A number of methods have been proposed for evaluating interactive systems through expert analysis. These are flexible approaches because they can be applied at any point in the development process. Here are some expert-based evaluation techniques.

👉 Cognitive Walkthrough

This technique was originally proposed by Polson and colleagues as an attempt to introduce psychological theory into the informal and subjective walkthrough technique. Its primary goal is to determine how easy a system is to learn, focusing specifically on learning through exploration.

👉 Heuristic Evaluation

This method was developed by Jakob Nielsen and Rolf Molich. It structures the critique of a system using a set of relatively simple and general heuristics. The basic idea is that several evaluators independently critique the system, and the usability problems they find are then combined.
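
As a rough illustration, the sketch below (in Python) shows how the independent findings of several evaluators might be pooled and ranked by average severity. The problem descriptions, severity ratings, and helper function are hypothetical examples, not part of Nielsen and Molich’s method itself.

```python
from collections import defaultdict

# Hypothetical severity ratings (0 = not a problem ... 4 = usability catastrophe)
# collected independently from three evaluators for the problems they each found.
evaluator_reports = [
    {"No undo on delete": 4, "Inconsistent button labels": 2},
    {"No undo on delete": 3, "Error messages use jargon": 3},
    {"Inconsistent button labels": 1, "Error messages use jargon": 4},
]

def merge_reports(reports):
    """Pool the problems found by all evaluators and average their severities."""
    ratings = defaultdict(list)
    for report in reports:
        for problem, severity in report.items():
            ratings[problem].append(severity)
    merged = {p: sum(s) / len(s) for p, s in ratings.items()}
    # Most severe problems first, so they can be fixed first.
    return sorted(merged.items(), key=lambda item: item[1], reverse=True)

for problem, avg_severity in merge_reports(evaluator_reports):
    print(f"{problem}: average severity {avg_severity:.1f}")
```

Ranking the merged list this way helps the team decide which problems to fix first, since different evaluators typically find different subsets of the problems.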

👉 Review based Evaluation

Review-based evaluation relies on experimental results and empirical evidence from the literature (for instance, from psychology or HCI) to support or refute aspects of the user interface design.

Evaluation through user participation

User participation in evaluation tends to occur in the later stages of development, when there is at least a working prototype of the system in place. Techniques involving users include empirical or experimental methods, observational methods, query techniques, and methods that use physiological monitoring, such as eye tracking and measures of heart rate and skin conductance.

👉 Styles of evaluation

There are two distinct styles of evaluation.

Laboratory Study: In this type of evaluation study, users are taken out of their normal work environment to take part in controlled tests, often in a specialist usability laboratory.

Field Study: This type of evaluation takes the designer or evaluator out into the user’s work environment in order to observe the system in action.

👉 Empirical methods: experimental evaluation

This method provides empirical evidence to support a particular claim or hypothesis. The evaluator chooses a hypothesis to test before proceeding, and any changes in the behavioural measures are attributed to the different experimental conditions. A few factors are critical to the reliability of an experiment, including the participants, the variables, the hypothesis, and the experimental design.
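
As a minimal sketch of this idea, the snippet below compares task-completion times under two hypothetical interface conditions with an independent-samples t-test, assuming SciPy is available. The numbers are invented purely for illustration; a real experiment would also need to control for participant and task effects.

```python
from scipy import stats

# Hypothetical task-completion times (seconds) for two interface conditions.
# Hypothesis: the redesigned interface (B) lets users complete the task faster than A.
condition_a = [41.2, 39.8, 45.1, 38.0, 44.3, 40.7]
condition_b = [33.5, 36.1, 31.9, 35.4, 30.8, 34.2]

# Independent-samples t-test: is the difference between conditions
# larger than we would expect from chance variation alone?
t_stat, p_value = stats.ttest_ind(condition_a, condition_b)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant at the 5% level.")
else:
    print("No significant difference detected.")
```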

👉 Observational techniques

The most common and powerful way to gather information about the actual use of a system is to observe users interacting with it. Users are asked to perform a set of predetermined tasks while the evaluator watches and records their behaviour. Observation is most effective when it takes place in the users’ own environment while they perform their typical activities. Think aloud, cooperative evaluation, and protocol analysis are a few examples of such techniques.
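
As a small, hypothetical sketch of how observations might be captured for later protocol analysis, the snippet below logs timestamped events during a session so they can be reviewed alongside think-aloud recordings. The class and event descriptions are illustrative assumptions, not a tool described in the article.

```python
import time

class ObservationLog:
    """Record timestamped events observed during a user's task session."""

    def __init__(self, participant_id):
        self.participant_id = participant_id
        self.start = time.time()
        self.events = []

    def record(self, event):
        # Store elapsed time so the log can be aligned with audio or video.
        self.events.append((round(time.time() - self.start, 1), event))

    def report(self):
        print(f"Participant {self.participant_id}")
        for elapsed, event in self.events:
            print(f"  {elapsed:6.1f}s  {event}")

# Hypothetical session: the evaluator notes what the user does and says.
log = ObservationLog("P03")
log.record("opens search page")
log.record('says: "I am not sure which field is for the date"')
log.record("submits query with empty date field")
log.report()
```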

👉 Query techniques

This method involves asking the user directly about the interface. Query techniques can reveal a great deal about how a user perceives a system. There are two main approaches.

Interviews: The analyst questions the user one-to-one, usually following a prepared set of questions about the user’s experience with the design.

Questionnaires: Users answer a fixed set of questions about their preferences and opinions of the design, and the analyst draws conclusions from the responses.
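
One widely used fixed questionnaire is the System Usability Scale (SUS); it is not mentioned in the article, but it illustrates how questionnaire responses can be turned into a single usability figure. The sketch below computes the standard 0–100 SUS score from one respondent’s ten answers on a 1–5 scale; the answers themselves are invented.

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from ten 1-5 responses."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten answers on a 1-5 scale")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd-numbered items are positively worded, even-numbered negatively.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Hypothetical answers from one participant.
print(sus_score([4, 2, 5, 1, 4, 2, 4, 1, 5, 2]))  # -> 85.0
```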

👉 Evaluation through monitoring physiological responses

This approach allows evaluators not only to observe more clearly what users do when interacting with a system but also to measure how they feel about it, using techniques such as eye tracking and other physiological measurements.
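
For example, raw eye-tracker output is a stream of gaze samples that usually has to be reduced to fixations before it says much about attention. The sketch below is a simplified dispersion-threshold (I-DT-style) fixation detector; the thresholds and sample data are illustrative assumptions, not values from the article.

```python
def detect_fixations(samples, max_dispersion=25, min_samples=5):
    """Simplified dispersion-threshold (I-DT) fixation detection.

    samples: list of (x, y) gaze points recorded at a fixed sampling rate.
    A fixation is a run of at least `min_samples` points whose spread
    (x-range + y-range, in pixels) stays within `max_dispersion`.
    Returns fixation centroids with their lengths in samples.
    """
    def dispersion(window):
        xs, ys = zip(*window)
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    fixations, i = [], 0
    while i + min_samples <= len(samples):
        j = i + min_samples
        if dispersion(samples[i:j]) <= max_dispersion:
            # Grow the window while the points stay tightly clustered.
            while j < len(samples) and dispersion(samples[i:j + 1]) <= max_dispersion:
                j += 1
            window = samples[i:j]
            xs, ys = zip(*window)
            fixations.append((sum(xs) / len(xs), sum(ys) / len(ys), len(window)))
            i = j
        else:
            i += 1
    return fixations

# Hypothetical gaze samples: a fixation, a saccade, then a second fixation.
gaze = [(100, 100), (102, 99), (101, 103), (99, 101), (100, 102),
        (300, 250), (305, 260),
        (400, 400), (401, 402), (399, 398), (402, 401), (400, 400)]
print(detect_fixations(gaze))
```

Counting and locating the resulting fixations (for example, how long users dwell on a particular control) is one way to quantify where attention goes during a task.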
