In this video we are going to introduce a technique called Heuristic Evaluation. As we talked about at the beginning of the course, there are lots of different ways to evaluate software. The one you might be most familiar with is empirical methods, where, with some level of formality, you have actual people trying out your software. It’s also possible to use formal methods, where you build a model of how people behave in a particular situation,
and that enables you to predict how different user interfaces will work. Or, if you can’t build a closed-form formal model,
you can also try out your interface with simulation and automated tests that can detect usability bugs and ineffective designs.
This works especially well for low-level stuff; it’s harder to do for higher-level stuff. And what we’re going to talk about today is critique-based approaches, where people give you feedback directly, based on their expertise or a set of heuristics. As any of you who have ever taken an art or design class know, peer critique can be an incredibly effective form of feedback, and it can help you make your designs even better.
You can get peer critique at really any stage of your design process, but I’d like to highlight a couple of points where I think it can be particularly valuable. First, it’s really valuable to get peer critique before user testing, because that helps you avoid wasting your users on problems that peers would catch anyway. You want to focus the valuable resource of user testing on things that other people wouldn’t be able to pick up on. The rich qualitative feedback that peer critique provides
can also be really valuable before redesigning your application, because what it can do is it can show you what parts of your app you probably want to keep, and what are other parts that are more problematic and deserve redesign. Third, sometimes, you know there are problems,
and you need data to be able to convince other stakeholders to make the changes. And peer critique can be a great way, especially if it’s structured, to be able to get the feedback that you need, to make the changes that you know need to happen. And lastly, this kind of structured peer critique can be really valuable before releasing software, because it helps you do a final sanding of the entire design, and smooth out any rough edges. As with most types of evaluation, it’s usually helpful to begin with a clear goal, even if what you ultimately learn is completely unexpected.
And so, what we’re going to talk about today is a particular technique called Heuristic Evaluation. Heuristic Evaluation was created by Jakob Nielsen and colleagues, about twenty years ago now. And the goal of Heuristic Evaluation is to be able to find usability problems in the design. I first learned about Heuristic Evaluation
when I TA’d James Landay’s Intro to HCI course, and I’ve been using it and teaching it ever since. It’s a really valuable technique because it lets you get feedback really quickly and it’s a high bang-for-the-buck strategy.
And the slides that I have here are based on James’ slides for this course, and the materials are all available on Jakob Nielsen’s website. The basic idea of heuristic evaluation is that you provide a set of people — often other stakeholders on the design team or outside design experts — with a set of heuristics or principles,
and they’re going to use those to look for problems in your design. Each of them is first going to do this independently, walking through a variety of tasks with your design and looking for these bugs. You’ll see that different evaluators find different problems. Then, only at the end of the process, they’re going to get back together and talk about what they found. And this “independent first, gather afterwards”
is how you get a “wisdom of crowds” benefit from having multiple evaluators.
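As a toy illustration of this “independent first, aggregate afterwards” process, here’s a minimal Python sketch. The evaluator names, heuristic labels, and findings below are all invented for the example; the point is only that each evaluator’s list is recorded separately and the results are pooled afterwards.

```python
# Sketch of aggregating independent heuristic-evaluation findings.
# All names and findings here are hypothetical examples.

# Each evaluator walks through the tasks alone and records problems
# as (heuristic violated, description) pairs.
findings = {
    "evaluator_1": {("visibility of system status", "no progress bar on upload"),
                    ("user control and freedom", "no undo after delete")},
    "evaluator_2": {("user control and freedom", "no undo after delete"),
                    ("consistency and standards", "two different labels for Save")},
    "evaluator_3": {("error prevention", "destructive action has no confirmation"),
                    ("visibility of system status", "no progress bar on upload")},
}

# Only after everyone has finished do we pool the results. The union is
# larger than any individual list: that is the "wisdom of crowds" benefit.
all_problems = set().union(*findings.values())

for evaluator, problems in sorted(findings.items()):
    print(f"{evaluator} found {len(problems)} problems")
print(f"Combined, the team found {len(all_problems)} distinct problems")
```

Notice that each evaluator found only two problems, but the team as a whole surfaced four distinct ones, with some overlap between evaluators — which is exactly why the independent pass comes before the discussion.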