
Usability Problems of a Smart Conference Room's Control System

[Screenshot of the UI]

Problem Definition

The Computer Engineering department of Sharif University of Technology has a smart conference room. The system includes capabilities such as controlling the appliances, managing various modes and themes, and adjusting the automation level. The conference room is controlled through a software application that can be installed on tablets and laptops. The current application, however, has some usability problems, and its management team is planning to identify and fix them. As my project for the Introduction to Human-Computer Interaction course at Sharif University of Technology, I tried to identify and fix the usability problems of this user interface.

General Information

School: Sharif University of Technology

Course Name: Introduction to Human-Computer Interaction

Project Duration: February 2018 to June 2018

Team: Working alone with advice and mentoring from a professor

Tools: PowerPoint, Google Forms

Skillset: Usability Testing, Heuristic Evaluation, Cognitive Walkthrough

Project Process

The following diagram shows the process we followed in this project.

Define & Ideate (Objectives • Solutions) → Evaluation (Evaluation Process) → Data Analysis (Data Analysis • Results)

Phase 1: Define & Ideate

Objectives • Solutions

Objectives

  • Improving the usability of the user interface that controls and manages the smart conference room at Sharif University of Technology.

  • Evaluating how many participants are needed to achieve a 95% problem discovery rate.

Solutions

Heuristic evaluation is a usability engineering method for identifying usability problems in a user interface design. It requires a small number of expert evaluators and a set of heuristics, usually chosen based on the system being tested. Heuristic evaluation matches the objectives of our project: it lets us examine the interface and identify the usability issues that violate our chosen heuristics. Moreover, because only a small number of evaluators is needed, the project is feasible within the time frame of the HCI course.

In the following slideshow, you can find some screenshots from the user interface being evaluated.

Phase 2: Evaluation

Evaluation Process

Conducting a heuristic evaluation involves three main steps.

Planning: Objectives • Tasks • Heuristics • Evaluators • Methods & Instruments

Executing: Briefing Sessions • Individual Evaluation

Reviewing: Data Analysis • Results

Step 1: Planning

Objectives

  • What are the main usability issues in the prototype of the smart conference room management system?

  • How many participants are needed to achieve a 95% problem discovery rate?

Tasks

To ensure that our evaluation covers all the functionalities of the system, we need to design user tasks that guide users through all the features of the user interface. It is also very important that the tasks be clear and understandable. Here is a subset of the user tasks we used in our evaluation.

  1. I want to leave the house and enable the outside house mode.

  2. I want to enable the rule of disconnecting high-consumption devices.

  3. In security mode, I want to change the light's automation level to "fully automatic".

  4. In relaxing mode, I want to enable the rule of turning on the lights.

  5. In outside house mode, I want to edit the rule of identifying unknown people and add turning on the TV to its events.

  6. In outside house mode, I want to edit the rule of identifying unknown people and add calling emergency to its actions.

  7. I want to create a new travel mode.

  8. And so on.

Heuristics

We decided to use 8 of Jakob Nielsen's 10 usability heuristics, the ones that match the goals of our project and the characteristics of the user interface.

  1. Visibility of system status

  2. Match between system and the real world

  3. User control and freedom

  4. Consistency and standards

  5. Error prevention

  6. Recognition rather than recall

  7. Flexibility and efficiency of use

  8. Aesthetic and minimalist design

Evaluators

We recruited three evaluators to conduct the heuristic evaluation. All of them are undergraduate students majoring in computer engineering at Sharif University of Technology and are familiar with heuristic evaluation. Our evaluators are between 20 and 24 years old.

Methods & Instruments

All the evaluation sessions were held in the actual conference room at the Computer Engineering department of Sharif University of Technology, on a tablet with the software installed. We asked our participants to think aloud during the experiments so that the researchers could record their thoughts and feelings. We used a mobile phone to record all the sessions.

Step 2: Executing

Briefing sessions

At the beginning of the experiment, participants were given an introduction to the system they were evaluating and the goals of our study. We explained how they would evaluate the system and record their findings in evaluation forms. We also made sure that participants felt comfortable in the prepared environment and had everything they needed.

Individual evaluation

A set of predefined, structured tasks was given to each evaluator. Each evaluator conducted the evaluation separately from the others; after one evaluator finished, the next one started. One researcher was responsible for taking notes during the experiments, and the second researcher helped participants with any questions they had.

Evaluators were asked to fill out an online evaluation form recording the violated heuristic, a description of the issue, a severity rating on a five-point scale, and any other comments.
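
To make the structure of these form entries concrete, the following minimal Python sketch shows one way a single finding could be represented; the field names are illustrative assumptions, not the actual schema of our Google Form.

    from dataclasses import dataclass

    # Hypothetical record of one finding from the evaluation form.
    # Field names are illustrative; the actual form differed.
    @dataclass
    class Finding:
        evaluator_id: str        # anonymized evaluator identifier
        violated_heuristic: str  # one of the 8 Nielsen heuristics we used
        issue: str               # free-text description of the problem
        severity: int            # five-point scale (1 = cosmetic, 5 = critical)
        comments: str = ""       # any other remarks from the evaluator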

Step 3: Reviewing

This step is explained in the next phase.

Phase 3: Data Analysis

Data Analysis • Results

Data Analysis

By analyzing the data recorded by the evaluators and researchers, together with the recorded videos, we extracted a list of the most common usability problems. Five of these usability issues are described below.

👎 Usability Issue #1

No feedback is given to users when a task is completed on the system.

Violated Heuristic

Visibility of system status

3 Evaluators

👎 Usability Issue #2

Clickable titles are not recognizable as clickable

Violated Heuristic

Aesthetics & Minimalist Design

2 Evaluators

👎 Usability Issue #3

The functionality of the + and - icons is vague

Violated Heuristic

Match between system and the real world

2 Evaluators

👎 Usability Issue #4

Clickable items such as icons and texts are placed very close to each other

Violated Heuristic

Error prevention

Aesthetic & minimalist design

3 Evaluators

👎 Usability Issue #5

The color palette used in the user interface is not appropriate

Violated Heuristic

Aesthetic & minimalist design

2 Evaluators
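
As a rough illustration of how evaluator counts like those above could be produced, here is a minimal Python sketch that tallies, for each reported issue, how many distinct evaluators mentioned it. It assumes the hypothetical Finding records from the earlier sketch, and that duplicate reports of the same issue have already been merged by hand.

    from collections import defaultdict

    def tally_issues(findings):
        """Count how many distinct evaluators reported each issue."""
        evaluators_per_issue = defaultdict(set)
        for finding in findings:
            evaluators_per_issue[finding.issue].add(finding.evaluator_id)
        # Most commonly reported issues first.
        return sorted(evaluators_per_issue.items(),
                      key=lambda item: len(item[1]),
                      reverse=True)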

We used the following formula to estimate the total number of usability issues in the user interface and how many participants are needed to achieve a 95% problem discovery rate.

[Calculations]
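
A minimal sketch of this calculation, assuming the standard Nielsen-Landauer problem-discovery model Found(i) = N(1 - (1 - λ)^i), where N is the total number of problems in the interface, λ is the probability that a single evaluator finds a given problem, and i is the number of evaluators. The value λ = 0.35 below is a figure commonly cited in the literature, used purely for illustration; the λ estimated in this project may have differed.

    import math

    def problems_found(n_total, lam, i):
        # Expected number of problems found by i evaluators:
        # Found(i) = N * (1 - (1 - lam)**i)
        return n_total * (1 - (1 - lam) ** i)

    def evaluators_needed(lam, rate=0.95):
        # Smallest i such that 1 - (1 - lam)**i >= rate.
        return math.ceil(math.log(1 - rate) / math.log(1 - lam))

    lam = 0.35   # illustrative value, not the project's own estimate
    found = 8    # problems our three evaluators actually found
    # Invert the model to estimate the total number of problems.
    n_total = found / (1 - (1 - lam) ** 3)
    print(f"Estimated total problems: {n_total:.1f}")
    print(f"Evaluators for 95% discovery: {evaluators_needed(lam)}")

With these illustrative inputs the model estimates roughly 11 problems in total, in line with the figure reported under Results; the number of evaluators required depends strongly on the λ estimated from the actual data.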

Results

We found 8 usability issues in our evaluation. Using the above formula, we estimated that 11 usability problems need to be discovered to reach a 95% problem discovery rate, which means 3 usability problems in the user interface are still unknown. To find these additional problems, we would need at most 13 new evaluators (16 evaluators in total).

Constraints

#1  Time and cost limitations

I completed this project as a student in the HCI course, over a timeframe of 4-5 months, without financial support. These limitations constrained recruiting participants and collecting large data sets.

With more time and budget, we could have provided design recommendations and redesigned the system. After the redesign, we could have conducted other evaluation methods to check whether the objectives were achieved.

Reflection

#1  Sources of bias

During this project, I learned about different biases that might affect the results of a study. For example, the following biases exist in our project:

  • Experimenter Bias. In this study, the experimenters observed each other's experiments, which might have biased the results.

  • Sampling Bias. The evaluators are students of the HCI course, which introduces sampling bias.

  • Subject Bias. The experiments were conducted in the afternoon, and evaluators started after attending multiple classes. They might therefore have been tired and unable to concentrate.

  • Environmental Bias. The experiments were conducted at the Computer Engineering department, and environmental factors such as temperature, light, and tables were not adjusted appropriately.

#2  Data triangulation

The project showed us that a single source of data may not be informative enough to make decisions about usability problems. In our case, we considered data gathered from two sources: the recorded videos and the researchers' notes from observations. Had we considered only one of these, we could easily have been misled. For example, in the recorded videos, one of the evaluators had difficulty finding a button, so at first glance we thought we needed to redesign or relocate our buttons. From our notes, however, we found that the evaluator was distracted by another evaluator who was also present in the room. This information could not be gleaned from the videos alone and could have misled our decisions.
