Barnum, Carol M., and Laura A. Palmer. “More Than a Feeling: Understanding the Desirability Factor in User Experience.” In CHI 2010, 4703-15. Atlanta, Georgia, 2010.
In the article “More Than a Feeling,” Barnum and Palmer discuss ways to measure the “desirability” factor in usability studies rather than relying solely on the sometimes-problematic use of post-test questionnaires. Microsoft developed a set of 118 product reaction cards, which gave participants a wider range of options for expressing feedback about products. Barnum and Palmer incorporate the product reaction cards into their studies “to know what users felt and . . . to add an element of methodological plurality to [their] studies” (4706). The case studies included:
- Computer Network Monitoring Application
- Destination Teaching Website
- Comparative Evaluation of Hotel Reservation Process
- Hotel Group Study on Fee-based Loyalty Program Enrollment
- Call Center Application
These cases, though drawn from various industries, produced similar results with the product reaction cards: thematic groupings of words and repeated word selections occurred often throughout the case studies. Though Barnum and Palmer found that the cards offered a helpful additional angle on participant feedback, they caution that “the cards should not be used as the sole means of getting participant feedback” and that such methods “work best when used along other satisfaction survey instruments or when used as a baseline for comparison in iterative studies” (4715).
This article raises the crucial question of how to test a document’s effectiveness. Indeed, putting material in front of participants is the only true way to test whether the document’s goal is being accomplished, and technical communicators need to understand how this process works. Revising documents to meet the needs of the audience is an essential part of the writing process; likewise, learning how to interpret feedback from participant studies, and knowing what to do with the results, will be essential for any technical communicator. What are the potential downsides of relying entirely on product reaction cards rather than diversifying the methods of collecting participant feedback?