
RAA: Using Design Patterns in Heuristic Evaluation

RAA stands for: Research Article Analysis

Paper discussed:

Botella, F., Gallud, J. A., & Tesoriero, R. (2011). Using Interaction Patterns in Heuristic Evaluation. In A. Marcus (Ed.), Design, User Experience, and Usability. Theory, Methods, Tools and Practice (Vol. 6769, pp. 23-32). Berlin, Heidelberg: Springer Berlin Heidelberg. Retrieved from http://www.springerlink.com/content/t346743643602746/

1. Purpose of the research:

The paper proposes a method for using interaction patterns in heuristic evaluation, both to streamline the evaluation process and to improve its output by providing concrete redesign advice.

2. Methods:

The authors reviewed heuristic evaluation as a usability inspection method and the widespread use of design patterns in interface design, and proposed mapping Nielsen’s heuristics to subsets of design patterns from van Welie’s library. The approach seeks a correspondence between each heuristic and one or more design patterns, as shown below. The paper also claims that a refined correlation will emerge after several evaluation cycles.

Mapping of heuristics and design patterns

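Conceptually, the paper's method amounts to a lookup table from a violated heuristic to candidate redesign patterns. The sketch below illustrates this idea only; the heuristic names are Nielsen's and the pattern names come from van Welie's library, but these particular groupings are illustrative assumptions, not the paper's actual mapping:

```python
# Hypothetical sketch of the heuristic -> design-pattern mapping idea.
# The specific groupings below are invented for illustration; they are
# NOT the mapping published in the paper.
HEURISTIC_TO_PATTERNS = {
    "Visibility of system status": ["Progress Indicator", "Processing Page"],
    "Match between system and the real world": ["Metaphor", "Wizard"],
    "User control and freedom": ["Undo", "Cancel"],
    "Error prevention": ["Input Error Message", "Constrained Input"],
}

def suggest_patterns(heuristic):
    """Return candidate redesign patterns for a violated heuristic."""
    return HEURISTIC_TO_PATTERNS.get(heuristic, [])

# An evaluator who flags a "User control and freedom" problem is pointed
# straight at concrete patterns to consider in the redesign.
print(suggest_patterns("User control and freedom"))
```

The open question the post raises still applies to this sketch: whether routing through the mapping adds anything over searching the pattern library directly.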

3. Main Findings:

A case study applying the proposed method to the heuristic evaluation of a university website shows that it makes it easier to find direct solutions for the usability problems evaluators identify.

4. Analysis:

After re-reading Nielsen’s procedure for conducting heuristic evaluation, I realized that this paper is trying to fill the gap Nielsen described: “Heuristic evaluation does not provide a systematic way to generate fixes to the usability problems or a way to assess the probable quality of any redesigns”. I appreciate the authors’ effort in mapping Nielsen’s heuristics to design patterns to offer a systematic solution to that problem, but I think combining them this way feels a little forced.

First of all, seeking redesign solutions in successful examples is common sense, and using a pattern library as a reference could be a good idea (and may already be widely practiced). But I am not sure how practical and efficient the mapping method proposed in this paper is as an evaluation step. After all, Nielsen’s heuristics are subjective, van Welie’s pattern categories are subjective, and the mapping between them is subjective; it might be hard and unnatural to force people to understand and accept the mapping. At the very least, when I looked at the mapping figure, I could not directly and clearly see the relationship between the left column and the right column. Given a specific usability problem, I would say it is more straightforward to look up corresponding examples directly in the pattern library.

Anyway, I will try to test this idea when I do my own heuristic evaluation, with questions like these: Is the mapping a redundant step? Would it be easier to locate solutions directly in the pattern library without first mapping to their categories?


RAA: Customize your heuristics?

Paper discussed:

Kientz, J.A., Choe, E.K., Birch, B., Maharaj, R., Fonville, A., Glasson, C. & Mundt, J. (2010) Heuristic evaluation of persuasive health technologies. Proceeding of the 1st ACM International Health Informatics Symposium (IHI ’10), 555-564. doi: 10.1145/1882992.1883084.

1. Purpose of the research:

The researchers developed a set of 10 heuristics intended to find problems in persuasive technologies, and compared them with Nielsen’s heuristics to see whether specially designed heuristics are more helpful for evaluating persuasive technologies.

2. Methods:

2.1 How to Define Heuristics

The research group first reviewed related literature and compiled a master list of all usability guidelines and heuristics for persuasive technologies. They then narrowed down the list by combining similar guidelines, prioritizing them, and discussing them in a process similar to affinity diagramming, ultimately arriving at 10 heuristics. A list of exactly 10 heuristics enables evaluators to focus on the most important aspects and also allows the researchers to make a direct comparison with Nielsen’s 10 heuristics.

The 10 heuristics are as follows (you can find an explanation of each in the paper):

Appropriate Functionality; Not Irritating or Embarrassing; Protect Users’ Privacy; Use of Positive Motivation Strategies; Usable and Aesthetically Appealing Design; Accuracy of Information; Appropriate Time and Place; Visibility of User’s Status; Customizability; and Educate Users.

As you may notice, these heuristics have some overlap with Nielsen’s. This was intentional and necessary, because Nielsen’s list reflects fundamental usability principles.

2.2 How to Conduct Evaluation

The researchers chose two web-based applications to evaluate: Mindbloom and MyPyramid Blast Off. The former is a website designed to track users’ progress toward life goals, including health goals; the latter is an online game aimed at educating children about healthy food choices. These two examples of persuasive technologies were chosen because any evaluator could access them easily from anywhere with an internet connection.

The researchers also recruited 10 evaluators, among whom were graduate students in HCI-related programs, one game designer, and one web coordinator. The evaluators were randomly assigned to two groups, experimental and control, which evaluated the applications using the new heuristics and Nielsen’s heuristics, respectively. The evaluation process essentially followed Nielsen’s instructions, except that the researchers, rather than the evaluators, assigned the severity ratings afterwards.

3. Main Findings:

There were several interesting findings.

The researchers claimed that the specially designed heuristics uncovered more severe issues, found severe issues more frequently, and identified more issues useful for improving the persuasive aspects of the evaluated interfaces.

What is more interesting to me is that the first two heuristics in the list had the highest number of issues attributed to them, a pattern consistent in both the experimental and control groups. This finding suggests that the order in which heuristics are given to evaluators might influence their findings. We could therefore intentionally randomize the order, or place the heuristics in order of importance, to get better data.
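If order effects are a concern, one cheap mitigation is to give each evaluator the same heuristics in an independently shuffled order. A minimal sketch (the heuristic list is the paper's; seeding by evaluator id is my own assumption, chosen so each ordering is reproducible for later analysis):

```python
import random

# The 10 heuristics for persuasive technologies from Kientz et al. (2010).
HEURISTICS = [
    "Appropriate Functionality",
    "Not Irritating or Embarrassing",
    "Protect Users' Privacy",
    "Use of Positive Motivation Strategies",
    "Usable and Aesthetically Appealing Design",
    "Accuracy of Information",
    "Appropriate Time and Place",
    "Visibility of User's Status",
    "Customizability",
    "Educate Users",
]

def order_for_evaluator(evaluator_id):
    """Return the full heuristic list in a per-evaluator random order.

    Seeding the RNG with the evaluator id makes each ordering
    reproducible, so position effects can be checked afterwards.
    """
    rng = random.Random(evaluator_id)
    shuffled = HEURISTICS[:]  # copy, so the master list is not mutated
    rng.shuffle(shuffled)
    return shuffled

# Each evaluator sees every heuristic, just in a different order.
for evaluator_id in range(3):
    print(evaluator_id, order_for_evaluator(evaluator_id)[:2])
```

Comparing where issues cluster across these shuffled orders would show whether the "first two heuristics" effect is about list position or about the heuristics themselves.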

4. Analysis:

I will give 3 out of 5 points to this paper. Though the findings are kind of interesting, and valuable for evaluation of persuasive technologies, I doubt its scientific significance for the following reasons:

First, as the authors stated at the beginning of the paper, designing specialized heuristics was already a trend in the usability evaluation field. What this paper did was duplicate the research procedure of similar papers and apply it to the evaluation of persuasive technologies.

What’s more, I saw too much manipulation by the researchers in the experiment. For example, since the recruited evaluators were not expert-level (average self-rated experience of 2.3, on a scale from 1 = no experience to 4 = very experienced), the researchers had to re-group the issues found during evaluation and assign the severity ratings themselves. Without any verbalized comments from an observer (see also Nielsen’s instructions), it is hard to tell how much subjective opinion was introduced in this process, which might skew the final conclusions.

Generally, I don’t doubt the overall conclusion of this paper, and I agree that specialized heuristics can facilitate the evaluation of specific applications better than Nielsen’s heuristics. However, for a research paper analyzed with statistics, I would like to see more rigorous control of the experiment.

All About Designs — Equation Editor in Word

As we discussed before, a good design should take a user-centered approach; in other words, don’t make your users think — let them use the product with ease. A bad design does the opposite:

Equation Editor in Microsoft Word

I am not using the newest version of Microsoft Word (mine is the 2008 version for Mac), and I have noticed that this has already been improved somewhat since, but it is still worth discussing as a case study.

Say I was working on my document and needed to insert an equation. The first thing to do is find where the function is. I started, naturally, with the Insert menu, but could not find an option called “Equation”, which was the search goal in my mind. Then I googled it and, thanks to the many people posting the same question, I learned it should be under “Object”. That doesn’t make much sense to me, since anything inserted could count as an “object”. I am not even sure I will remember where to find it next time.

Next, I clicked the Equation option under “Object”, typed in an equation, and was ready to insert it. But I was alarmed to find no button saying anything like “Insert it!”. What was I supposed to do now? Copy and paste the equation from this editor window, look for other buttons in the toolbar, or just close it? Would I lose the equation I had typed if I closed it? With so many question marks in my head, I decided to try my luck and just close it to see what would happen. To my surprise, the equation was sitting in the right place in the document waiting for me! After all this confusion and surprise, only one question remained: why not just give me a simple “Insert” button?

Equation editor view

Judged against Jakob Nielsen’s ten usability heuristics, this design violates several principles.

First, visibility of system status. An “Insert” button with immediate feedback (the equation appearing in the text) would let the user know what is going on. Alternatively, a live-updating display in the document would reassure the user that nothing will be lost on closing the editor, since the equation is inserted as it is typed.

Second, recognition rather than recall. In other words, make things visible! The designer should make the equation-insertion function visible to users, rather than testing their imagination and requiring them to remember the path if they want to use the function again.