Week 12 – Content Strategy for the Web

November 12, 2011

Halvorson, Kristina. Content Strategy for the Web. New Riders Press, 2009, 5-42, 147-72.


Halvorson begins “Content Strategy for the Web” with five suggestions to improve web content.  These include: 

  • Do less, not more – less content is easier to manage and is more user-friendly
  • Figure out what you have and where it’s coming from – conduct a content audit (a rough inventory sketch follows this list)
  • Learn how to listen – find out from your customers what their true needs are
  • Put someone in charge – establish an editor-in-chief to maintain the content
  • Start asking, Why? – develop clear reasons for delivering content online (5-12)
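
As a concrete illustration of the content-audit suggestion, the sketch below builds the skeleton of a content inventory – one row per page with its URL, title, and rough word count.  This is not from Halvorson; the function names and CSV columns are my own assumptions, and a real audit would also track things like owners, last-updated dates, and content types.

    import csv
    from html.parser import HTMLParser
    from urllib.request import urlopen

    class PageSummary(HTMLParser):
        """Collect the <title> text and a rough word count for one page."""
        def __init__(self):
            super().__init__()
            self.title = ""
            self.words = 0
            self._in_title = False
        def handle_starttag(self, tag, attrs):
            if tag == "title":
                self._in_title = True
        def handle_endtag(self, tag):
            if tag == "title":
                self._in_title = False
        def handle_data(self, data):
            if self._in_title:
                self.title += data.strip()
            self.words += len(data.split())

    def build_inventory(urls, out_path="content_inventory.csv"):
        """Write one row per page: the starting point of a content audit."""
        with open(out_path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["url", "title", "approx_word_count"])
            for url in urls:
                page = PageSummary()
                page.feed(urlopen(url).read().decode("utf-8", errors="ignore"))
                writer.writerow([url, page.title, page.words])

    # build_inventory(["https://example.com/", "https://example.com/about"])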

One of the biggest problems with content strategies is that in most organizations, no one actually owns the content – this causes priorities to clash and compromises to be made (20).  Someone needs to be “in charge” of balancing all of the different priorities across different parts of the organization in order for the content to actually be effective.  Halvorson goes on to emphasize that our standards for content are far too low.  Too often, more time and effort are invested in the flashy design and interface of a website, leaving content as a last-minute afterthought. 

Halvorson defines content strategy as a “holistic, well-considered plan for obtaining a specific goal or result” through the use of “text, data, graphics, video, audio, [and] countless [online] tools” (32).  A true content strategy plans for content’s creation, delivery, and governance (33).  To determine what angle the strategy should take, web analytics must come into the mix in order to provide you with “hard data about how content efforts have impact on your business’s bottom line” (147); a rough sketch of this kind of analytics rollup follows the list below.  One of the most difficult aspects of implementing a strategy is doing so with a clear maintenance plan in place.  This can involve both individual and team roles, including web editor-in-chief, web editor, web writer, search engine optimization strategist, reviewers, and approvers (160-161).  Some of the best companies that treat web content as a business asset driving their success online include: 

  • Wells Fargo
  • IBM.com
  • REI
  • Mint.com
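
To make Halvorson’s “hard data” point a bit more concrete, here is a minimal sketch of the kind of analytics rollup a content strategist might run on a page-level export.  It is purely illustrative: the file name, column names, and section logic are all assumptions, not anything prescribed in the book.

    import csv
    from collections import defaultdict

    def summarize_sections(path="analytics_export.csv"):
        """Roll a page-level analytics export up by site section and surface
        high-traffic, low-converting content.  Column names (url, pageviews,
        conversions) are assumed for this sketch."""
        totals = defaultdict(lambda: {"pageviews": 0, "conversions": 0})
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                section = row["url"].strip("/").split("/")[0] or "home"
                totals[section]["pageviews"] += int(row["pageviews"])
                totals[section]["conversions"] += int(row["conversions"])
        # Sections with lots of views but few conversions are candidates for
        # revision or removal ("do less, not more").
        for section, t in sorted(totals.items(), key=lambda kv: -kv[1]["pageviews"]):
            rate = t["conversions"] / t["pageviews"] if t["pageviews"] else 0.0
            print(f"{section:<20} views={t['pageviews']:>7} conversion rate={rate:.2%}")

    # summarize_sections()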

This whole balance of learning how to listen to actual, relevant needs while filtering out the immediate wants of an organization is a fine-tuned skill that can make the difference between an effective website and one that comes across as desperate to incorporate every new idea that happens to pop into someone’s head.  Along this same line of thought, it is important to look to the needs of the customer in addition to the desires of the organization as a whole; however, “just because an employee or a customer asks for something does not mean it should be automatically delivered upon” (11).  For example, in last week’s reading by Friess, participants in a study indicated that they wanted an index to help navigate the manual they were testing.  However, the organization was adamant that it was not going to pay to include an index – it was non-negotiable.  Eventually, the index did work its way into the final product, but only because the organization realized its essential benefit to the participants in the study. 

The other aspect of content strategy that really clicked with me was the emphasis on clear business objectives and goals for an organization.  In curriculum planning, I often see too many teachers get hung up on a particular novel that they just can’t bear to part with or a new technique or teaching strategy that they just have to incorporate into the classroom; often, ideas and methods are implemented without pausing to ask “Why?”  Sure, new techniques and newer technology can help with classroom engagement and variety in presenting information, but if they fail to contribute to the overall goal of teaching a concept or getting a skill across, how effective can they be?  The same premise is true for delivering effective web content.  Without a clear goal and purpose in mind, a website can be ineffective and fail to achieve its desired results despite its shiny new homepage and flashy icons.  What qualities and skills are necessary in an individual to oversee the creation and execution of an effective content strategy for an organization?


Week 11 – Usability Tests and Evaluator Bias

November 6, 2011

Friess, Erin. “Discourse Variations Between Usability Tests and Usability Reports.” Journal of Usability Studies 6, no. 3 (2011): 102-16.


The article “Discourse Variations Between Usability Tests and Usability Reports” documents research using discourse analysis techniques to compare the language used by end users to the language used in the evaluators’ oral reports.  Friess conducted five rounds of formative testing involving three pairs of team members who were all novices at conducting usability tests; each pair was assigned a participant.  A team of raters read the transcripts, watched the video recordings, and “determined if any of the issues mentioned in the oral reports but not mentioned by the usability participant could be reasonably assumed by the actions of the usability participant in the think-aloud usability test” (105).  From there, 83.9% of the findings had some basis in the usability testing; however, 65% of these were accurate findings and 34.6% were potentially inaccurate findings (106).  The data for both the accurate and the inaccurate findings came from sound bites and from evaluator interpretation.  The discussion portion of Friess’s article comments on the gatekeeper role of the evaluators and how this powerful role could explain why differences exist between the language used by end users and the language used in the evaluators’ oral reports.  Four possible explanations include the following:


  • Confirmation Bias in Oral Reports – The evaluators appeared to seek out confirmation for issues they had previously identified (110)
  • Bias in What’s Omitted in the Usability Reports – The evaluators at no time presented findings to the group that ran counter to a claim the evaluators had made at a previous meeting (111)
  • Biases in Client Desires – The evaluators did not mention a specific participant desire for an index because the client had already specified that including an index was not an option (112)
  • Poor Interpretation Skills – The evaluators were inexperienced with this kind of study and therefore clung to the few pieces of data that they understood well (sound-bite data) (113)


Honestly, I found this article to be somewhat surprising.  It bothers me that so many evaluators will orally communicate information about the participants’ tests without actually referencing the transcript or notes of the test itself.  Relying solely on memory, particularly when specific biases might be at stake, has proven faulty time and time again.  For example, my client for my STEM brief is currently studying the consistency of positive flashbulb memories over time.  Her research indicates that as time passes, individuals continue to believe strongly in the veracity of their memories even as the consistency of those memories actually decreases.  Even though the team members conducting the tests do not experience flashbulb memory while monitoring or observing the participants, there is still an element of unreliability when it comes to presenting information from their memories rather than from documented data.  The whole process of learning to interpret results accurately and without bias is a crucial component of good practice in technical communication.  This study reminds me that certain parameters need to be in place even in the semi-informal testing environment of do-it-yourself usability practice.  How could conducting experiments in this fashion hurt the credibility of technical communication as a field?  What are some safeguards to put into place to avoid such biases and discrepancies in the future with do-it-yourself usability testing?


Week 11 – Understanding the Desirability Factor in User Experience

November 6, 2011

Barnum, Carol M., and Laura A. Palmer. “More Than a Feeling: Understanding the Desirability Factor in User Experience.” In CHI 2010, 4703-15. Atlanta, Georgia, 2010.

In the article “More Than a Feeling,” Barnum and Palmer discuss ways to measure the “desirability” factor in studies instead of relying solely on the sometimes-problematic use of post-test questionnaires.  Microsoft created and used a set of 118 product reaction cards, which gave participants a wider breadth of options for expressing feedback about products.  Barnum and Palmer incorporate the product reaction cards into their studies “to know what users felt and . . . to add an element of methodological plurality to [their] studies” (4706).  Some of the case studies included:

  • Computer Network Monitoring Application
  • Destination Teaching Website
  • Comparative Evaluation of Hotel Reservation Process
  • Hotel Group Study on Fee-based Loyalty Program Enrollment
  • Call Center Application

These cases, though they fall under various industries, produced similar results in terms of the product reaction cards.  Thematic groupings of words as well as repeated word selection occurred often throughout the case studies.  Though Barnum and Palmer felt that using the cards was helpful in gaining a different angle on participant feedback, they believe that “the cards should not be used as the sole means of getting participant feedback” and that such methods “work best when used along [with] other satisfaction survey instruments or when used as a baseline for comparison in iterative studies” (4715).
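
As a small, purely hypothetical illustration of what “repeated word selection” might look like in practice, the sketch below tallies card choices across a few invented participants and computes the share of picks falling into an assumed positive theme; none of the words or numbers come from Barnum and Palmer’s cases.

    from collections import Counter

    # Invented selections from the 118-card set; not data from the article.
    selections = {
        "P1": ["clean", "usable", "time-consuming"],
        "P2": ["usable", "straightforward", "clean"],
        "P3": ["confusing", "usable", "busy"],
    }

    POSITIVE_THEME = {"clean", "usable", "straightforward"}  # assumed grouping

    # Repeated word selection: which cards came up most often across participants?
    counts = Counter(word for picks in selections.values() for word in picks)
    print("Most frequently chosen cards:", counts.most_common(3))

    # Thematic grouping: what share of all picks fell into the positive theme?
    positive = sum(n for word, n in counts.items() if word in POSITIVE_THEME)
    print(f"Positive-theme share of all selections: {positive / sum(counts.values()):.0%}")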

This article brought up the crucial question of how to test a document’s effectiveness.  Indeed, actually putting material in front of participants is the only true way to test whether or not the goal of the document is being accomplished.  This is crucial for technical communicators because we need to understand how this process works.  Revising and changing documents to meet the needs of the audience is an essential part of the writing and creating process; additionally, learning how to interpret feedback from participant studies and then knowing what to do with the results will be essential for any technical communicator.  What are the potential downsides of relying completely on product reaction cards rather than diversifying the methods of collecting participant feedback?