Don't Listen to Users and 4 Other Myths About Usability Testing
I've conducted around 50 usability studies at Icons8. While learning my craft, I read plenty of articles on the topic. What I found is that some practices are blindly borrowed from laboratory research, while others are heavily misinterpreted. And some are downright dehumanizing.
In this article, I'd like to debunk some of these modern usability myths by comparing them with my own experience.
Myth #1 Do Not Listen to Users
Whoever said that was… Jakob Nielsen, one of the pioneers of usability research. Thus the first rule of usability was born. Naturally, getting feedback based on real interaction with a product is paramount, but that doesn't mean usability specialists shouldn't listen to people at all. The question is, when? Sure enough, this question is answered in the very same article where this first rule was coined.
"When should you collect orientation information from users? Lone after they have used a design and have a real feeling for how advantageously it supports them."
What can you gain by listening to people that stays hidden while they are doing tasks? When I was studying our icon request service:
- people coped with all the tasks perfectly well
- people liked the new, modern design
Everything was fine, except for one thing. After finishing the tasks, they asked, "What's this whole interface about?" After I heard more or less the same question from several participants, I got goosebumps. Thus, the notorious 47% story was born.
Summary: listen to people after they are done with tasks. There are at least 47 reasons to do that.
Myth #2 A Hundred Opinions Are Better Than Five
Whoever said that 100 people are better than 5 has never been stuck on the subway. To paraphrase, the myth is that "automated large-scale tests are better than live interviews." Automated tests are a good tool, but they are not superior.
First, it takes real skill and practice to interpret big chunks of data in the right way.
Suppose I have an egg farm. If I got a report that 10% of our eggs are cracked, what should I do?
a) raise the number of chickens to cover the shortage
b) focus on the safety of existing chickens to reduce losses
c) fire my cousin
Big data makes us very confident but doesn't save us from ambiguous interpretations. In personal interviews, just one participant is enough to shatter my inner world, leaving me in constant doubt about everything I thought I knew, and, sooner or later, make me more objective.
I'm not encouraging you to interview every chicken to solve the farm problem (3-4 would be plenty). I was making a point – automated test results can be interpreted in a more ambiguous manner than results from live interviews.
Second, one-on-one interviews give so much unique information if you really listen to people.
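For readers wondering why a handful of sessions is so often called enough, here is a minimal sketch of the problem-discovery model popularized by Nielsen and Landauer. The 31% per-participant discovery rate is their reported average, not something from this article, so treat it as an assumption your own product may not match.

```python
# A minimal sketch of the Nielsen/Landauer problem-discovery model that is
# commonly cited to justify small-sample usability tests.
# Assumption: each participant uncovers roughly 31% of the usability problems
# (the average Nielsen reported); the rate for your product may differ.

def share_of_problems_found(users: int, discovery_rate: float = 0.31) -> float:
    """Expected share of usability problems uncovered after `users` sessions."""
    return 1 - (1 - discovery_rate) ** users

if __name__ == "__main__":
    for n in (1, 3, 5, 10, 15):
        print(f"{n:2d} users -> ~{share_of_problems_found(n):.0%} of problems found")
```

Under those assumptions, three to five sessions already surface the majority of the problems, which is roughly the point the chicken-farm example is making.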
Summary: Automated tests and live interviews serve different purposes; neither is superior to the other – they are complementary.
Myth #3 Do Not Change the Script
If you tear up your notes 2 minutes before a presentation, it's called courage. If you edit the script while shooting a movie, it's called a unique vision. But if you change the script during a usability interview, it's called "WTF is he doing?"
Why are UX people so afraid of changing the script along the way? Because the results will not be objective, they say. You need 5 people to do exactly the same thing to draw reasonably sound conclusions about the interface being tested.
While I agree on getting the most objective feedback possible, I disagree that the wording of your script should be carved in stone. In one of my studies, I watched people devotedly ignore one button they all saw from the very start. If I hadn't come up with an additional question, "Did you notice this button before the task?", I would never have understood the real reason behind such strange behavior.
Another argument for having a rigid script is that it adds comfort for everyone – the participant feels like the interviewer knows what they are doing, and the interviewer feels like they know what they are doing. What can I say? Genuine information is more important than comfort for UX professionals.
Summary: Having a script is very handy, but don't be afraid of adding to it along the way; just make sure it doesn't disrupt the overall flow of the research.
Myth #4 Don't Talk to Participants
A typical lab setup is a chair, a table, and a participant sitting at the computer, doing tasks. Cameras and sensors track everything – eye movements, facial expressions, body language. And no one around. The goal is to exclude the experimenter's influence on the participant, leaving them one on one with the product being tested.
For some reason, several authors advise creating this very lab atmosphere every time, even in more conversational Skype interviews. "Don't talk with users, listen." While I agree that researchers should listen more than talk, they don't have to treat their participants like Wolverine in his early lab days.
Even if you are silent, in another room or whatnot, people know they are being watched. You don't even have to do anything; the experimenter role is your default one. And when you take on the role of an experimenter, these are some possible effects:
- participants see you as a figure of authority and try to please you with their answers and actions, thus bending the truth
- they avoid admitting mistakes in the UI because they are afraid of looking silly in your eyes
- they adopt a behavior model that is different from how they really use your product (at the office, at home)
So take a friendlier role. I've always liked making small talk with participants. A few jokes (preferably funny ones), a few side questions, a relaxed tone, and people are more willing to share everything that's going on in their mind. This way you get valuable data about the product, things that might otherwise stay outside a participant's comfort zone.
Summary: Don't be too talkative, but try to create an atmosphere that is as authentic as possible.
Myth #5 Common Sense Isn't Enough
Usability is a topic everybody has an opinion about, from your boss to your unemployed cousin. That doesn't mean there are no real usability professionals. There are quite a few things usability professionals may be familiar with:
- social psychology
- behavioural psychology
- design basics
- communication skills
- information management
- a lot of practical experience
Such a specialist is quite costly, so many companies take another approach: they just assign this role to someone. And if that someone has common sense, it pays off.
If you don't have a fully fledged usability specialist in your pocket, but desperately want to test your product, why wouldn't you take the same approach and become that person?
A foundational design principle is that 80% of people will use 20% of the product – its "core." Define the core and start with a few tasks touching it. Then watch how people do them. Never give hints; just wait patiently until they ask for your help. This way you'll see the real problems and benefit from the research much more than by simply following someone's subjective opinion.
Summary: Experienced professionals will benefit everyone, but having someone with common sense is a good start.
Conclusion
The usability myths I addressed in this article are mostly aimed at eliminating the "human factor," at making all usability studies as neutral as possible. From one viewpoint, that's correct – in real life, there is no experimenter standing between a product and the person using it. Imagine if every time you used a new shower, someone came up with a notepad and asked how the water was.
But by trying to "hide" the experimenter from the experience, the opposite happens – people behave in a way they would never behave in real life. The best way to eliminate the experimenter is to observe the "user" in their natural environment – a full-scale surveillance program – and I doubt the usability industry would be the first to get their hands on it.
So, what's left? Creating a friendly, trusting atmosphere during research. Not eliminating the human element, but adding to it. Don't pretend you're not there. At least, if we're talking about a non-laboratory environment, like Skype sessions. But who knows how the same approach would work in a laboratory, too?
By the way, here's one of my very first usability studies: Drag and Drop vs. Click – Are They Rivals?
I'm not the only one who uses unusual approaches in his work – meet Ivan, founder of Icons8, and his Blind Method approach to hiring new people.
About the author:
Andrew started at Icons8 as a usability specialist, conducting interviews and usability surveys. He desperately wanted to share his findings with our professional community and started writing insightful and funny (sometimes both) stories for our blog.
Source: https://blog.icons8.com/articles/dont-listen-to-users-and-4-other-myths-about-usability-testing/