Week 11 – Usability Tests and Evaluator Bias

November 6, 2011

Friess, Erin. “Discourse Variations Between Usability Tests and Usability Reports.” Journal of Usability Studies 6, no. 3 (2011): 102-16.


The article “Discourse Variations Between Usability Tests and Usability Reports” documents research that uses discourse analysis techniques to compare the language used by end users with the language used in the evaluators’ oral reports. Friess conducted five rounds of formative testing involving three pairs of team members, all novices at conducting usability tests; each pair was assigned a participant. A team of raters read the transcripts, watched the video recordings, and “determined if any of the issues mentioned in the oral reports but not mentioned by the usability participant could be reasonably assumed by the actions of the usability participant in the think-aloud usability test” (105). Of the findings reported, 83.9% had some basis in the usability testing; however, only 65% of these were accurate findings, while 34.6% were potentially inaccurate (106). Data for both the accurate and the potentially inaccurate findings came from participant sound bites and from evaluator interpretation. The discussion portion of Friess’s article comments on the gatekeeper role of the evaluators and how this powerful role could explain the differences between the language used by end users and the language used in the evaluators’ oral reports. Friess offers four possible explanations:


  • Confirmation Bias in Oral Reports – The evaluators appeared to seek out confirmation for issues they had previously identified (110)
  • Bias in What’s Omitted in the Usability Reports – The evaluators at no time presented findings to the group that ran counter to a claim they had made at a previous meeting (111)
  • Biases in Client Desires – The evaluators did not mention a specific participant desire for an index because the client had already specified that including an index was not an option (112)
  • Poor Interpretation Skills – The evaluators were inexperienced with this kind of study and therefore clung to the few pieces of data they understood well (sound-bite data) (113)


Honestly, I found this article somewhat surprising. It bothers me that so many evaluators will orally communicate information about participants’ tests without actually referencing the transcript or notes of the test itself. Relying solely on memory, particularly when specific biases might be at stake, has proven faulty time and time again. For example, my client for my STEM brief is currently studying the consistency of positive flashbulb memories over time. Her research indicates that as time passes, individuals maintain a strong belief in the veracity of their memories even as the consistency of those memories actually decreases. Even though team members conducting usability tests do not experience flashbulb memories while observing participants, there is still an element of unreliability when they present information from memory rather than from documented data. Learning to interpret results accurately and without bias is a crucial component of good practice in technical communication. This study reminds me that certain parameters need to be in place even in the semi-informal environment of do-it-yourself usability practice. How could conducting experiments in this fashion hurt the credibility of technical communication as a field? What are some safeguards to put into place to avoid such biases and discrepancies in future do-it-yourself usability testing?


Week 11 – User-Centeredness in Computer Documentation

November 6, 2011

Johnson, Bob. “User-Centeredness, Situatedness, and Designing the Media of Computer Documentation.” In ACM Eighth International Conference on Systems Documentation, 55-61, 1990.


Johnson’s article on user-centeredness in designing computer documentation begins with an emphasis on the danger of such a philosophy.  He suggests that the phrase “user-centeredness” could become “at best, empty rhetoric, and, at worst . . . could serve to undermine the humanitarian goals of a user-centered ethic” (55).  His dual purpose in writing the article is to:

1) focus on a clear understanding of what user-centeredness means in regard to the user’s situation, and

2) discuss how to design for the different media of computer documentation.

Johnson argues that “the ineffectiveness of systems lies in the miscalculations and poor planning of the designers” rather than in the competency of the consumers (56). Much documentation is written to reflect “what the designer views as the important components” instead of taking the true user into account (56). Regarding a text-centered approach as an option for good document design, Johnson comments that its chief drawback is a focus on “how well readers comprehend and follow printed text,” which can limit the document’s effectiveness (57). Centering on the user’s situation, by contrast, focuses attention on the user and the user’s environment. The user-centered view then continues outward by analyzing the tasks and actions of the user, the user’s activity with the medium, and the design of the documentation (57-58). Additionally, Johnson suggests representing the rhetorical framework of user-centered documentation (users, writers, and task/action) within the broader scope of global contexts and situations, thus “giv[ing] computer documentation a broader . . . and more relevant structure” (59).

Reading this article jogged mental connections between several pieces we have read so far this semester. Johnson’s emphasis on the ineffectiveness of some document designs reminded me of Cooper’s “Designing for Pleasure”; like Cooper, Johnson argues that when the specific user is “far removed from the central concerns of the system design, [the user] is left with the task of reconstructing the entire system into his or her own image of what has been passed on from the system image” (56). Additionally, I found it important to remember that in technical communication today, much of our work will be formatted for web-based or screen-based viewing; factors such as “eye strain, impatience, poor resolutions, etc. all play a role in the difficulties of reading the computer screen” (57). Therefore, the way content is managed for a website directly affects whether users can easily “browse, access, skim and jump from screen to screen” or whether the content requires them to read large chunks of text for extended periods of time. How would a user-centered approach to computer documentation be effective in specific cultural situations that have technology constraints?