Week 11 – Understanding the Desirability Factor in User Experience

November 6, 2011

Barnum, Carol M., and Laura A. Palmer. “More Than a Feeling: Understanding the Desirability Factor in User Experience.” In CHI 2010, 4703-15. Atlanta, Georgia, 2010.

In the article “More Than a Feeling,” Barnum and Palmer discuss ways to measure the “desirability” factor in usability studies instead of relying solely on the sometimes-problematic post-test questionnaire.  Microsoft created a set of 118 product reaction cards, which gave participants a wider range of options for expressing feedback about products.  Barnum and Palmer incorporate the product reaction cards into their studies “to know what users felt and . . . to add an element of methodological plurality to [their] studies” (4706).  Some of the case studies included:

  • Computer Network Monitoring Application
  • Destination Teaching Website
  • Comparative Evaluation of Hotel Reservation Process
  • Hotel Group Study on Fee-based Loyalty Program Enrollment
  • Call Center Application

These cases, though they fall under various industries, produced similar results in terms of the product reaction cards.  Thematic groupings of words as well as repeated word selection occurred often throughout the case studies.  Though Barnum and Palmer felt that using the cards was helpful in gaining a different angle on participant feedback, they believe that “the cards should not be used as the sole means of getting participant feedback” and that such methods “work best when used along[side] other satisfaction survey instruments or when used as a baseline for comparison in iterative studies” (4715).

This article brought up a crucial question: how do we test a document’s effectiveness?  Indeed, putting material in front of participants is the only true way to test whether or not the goal of the document is being accomplished, and technical communicators need to understand how this process works.  Revising and changing documents to meet the needs of the audience is an essential part of the writing and creating process; additionally, learning how to interpret feedback from participant studies, and knowing what to do with the results, will be essential for any technical communicator.  What are the potential downsides of relying completely on product reaction cards rather than diversifying the methods of collecting participant feedback?


Week 11 – User-Centeredness in Computer Documentation

November 6, 2011

Johnson, Bob. “User-Centeredness, Situatedness, and Designing the Media of Computer Documentation.” In ACM Eighth International Conference on Systems Documentation, 55-61, 1990.


Johnson’s article on user-centeredness in designing computer documentation begins with an emphasis on the danger of such a philosophy.  He suggests that the phrase “user-centeredness” could become “at best, empty rhetoric, and, at worst . . . could serve to undermine the humanitarian goals of a user-centered ethic” (55).  His dual purpose in writing the article is to:

1) focus on a clear understanding of what user-centeredness means in regard to computer documentation, and

2) discuss how to design for the different media of computer documentation.

Johnson argues that “the ineffectiveness of systems lies in the miscalculations and poor planning of the designers” rather than in the competency of the consumers (56).  Much documentation is written to reflect “what the designer views as the important components” instead of taking the true user into account (56).  Regarding a text-centered approach as an option for good document design, Johnson comments that its chief drawback is a focus on “how well readers comprehend and follow printed text,” which can limit the document’s effectiveness (57).  Centering on the user’s situation, by contrast, focuses attention on the user and the user’s environment.  The user-centered view continues outward by then analyzing the tasks and actions of the user, the user’s activity with the medium, and the design of the documentation (57-58).  Additionally, Johnson suggests representing the rhetorical framework of user-centered documentation (users, writers, and task/action) within the broader scope of global contexts and situations, thus “giv[ing] computer documentation a broader . . . and more relevant structure” (59).

Reading this article jogged mental connections between several pieces we have read so far this semester.  His emphasis on the ineffectiveness of some document designs reminded me of Cooper’s “Designing for Pleasure”; like Cooper, Johnson argues that when the specific user is “far removed from the central concerns of the system design, [the user] is left with the task of reconstructing the entire system into his or her own image of what has been passed on from the system image” (56).  Additionally, I found it important to remember that in today’s age of technical communication, much of our work will be formatted for web-based or screen-based viewing; factors such as “eye strain, impatience, poor resolutions, etc. all play a role in the difficulties of reading the computer screen” (57).  Therefore, the way content is managed for a website is directly correlated with whether users can easily “browse, access, skim and jump from screen to screen” or whether the content requires them to read large chunks of text for extended periods of time.  How would a user-centered approach to computer documentation be effective in specific cultural situations that have technology constraints?

Week 10 – Refining Wikipedia through Collaboration

October 30, 2011

Liu, Jun, and Sudha Ram. “Who Does What: Collaboration Patterns in the Wikipedia and Their Impact on Article Quality.” ACM Transactions on Management Information Systems 2, no. 2 (2011): 11:1-11:23.

The article “Who Does What: Collaboration Patterns in the Wikipedia and Their Impact on Article Quality” identifies the main critiques of Wikipedia and presents Liu and Ram’s research addressing those issues.  The main questions Liu and Ram address are 1) Why do Wikipedia articles vary widely in quality? and 2) How can the quality of Wikipedia articles be improved?  Because Wikipedia is easy to edit, any person can edit an article; however, not all editors edit the same way or with the same intensity (2).  Liu and Ram investigated Wikipedia’s article assessment project as a starting place for determining various degrees of quality.  The criteria for assessment include: 1) well-written, 2) comprehensive, 3) well-researched and verifiable, 4) neutral, 5) stable, 6) compliance with Wikipedia style guidelines, 7) appropriate images and copyright status, and 8) appropriate length and focus (4).  To study the relationship between collaboration and quality, Liu and Ram selected articles that had been rated by these criteria as the basis for their study.  Their methods involved creating categories of how contributors edited articles; from that data, Liu and Ram then identified collaboration patterns.  They concluded that “article quality depends on different types of contributors, that is, the roles they play, and the way they collaborate” (16).  Additionally, Liu and Ram believe that improving Wikipedia article quality is possible if software tools are developed to help contributors make the decision to include references, links, and support for their edits.  These software tools should “nudge contributors to assume different roles and support self-justification and self-policing” as well as “motivate the contributors to revisit the article, review their inserted sentences, and respond to other contributors’ modifications” (20).

The main point of this article hones in on the positive results that can emerge when collaboration is built into the creation and editing of Wikipedia articles.  Liu and Ram’s research supports the hypothesis that the better-written and better-referenced articles are constructed by multiple contributors who justify their added content and changes and who respond to other contributors.  This same concept could easily be transferred to the field of technical communication, where such collaboration continues to be rare.  Additionally, this article clearly articulates why Wikipedia continues to receive mixed reviews as a credible source of information.  I think Liu and Ram articulate this best when they state, “It is unreasonable to simply assume that Wikipedia is completely reliable or unreliable” (2).  If practices such as the methods suggested by Liu and Ram are implemented, could Wikipedia shed its current status of being viewed as neither completely reliable nor completely unreliable?  If so, would it change the way people use the medium?

Week 10 – Exploring Social Media

October 30, 2011

Singleton, Meredith, and Lisa Meloncon. “A Social Media Primer for Technical Communicators.” Intercom, June 2011, 7-9.

Porter, Alan J. “Tweet Me This…” Intercom, June 2011, 10-13.

Molisani, Jack. “Creating a 3D Model of the Content Management Lifecycle.” Intercom, June 2011, 14-18.


Singleton and Meloncon’s article, “A Social Media Primer for Technical Communicators,” focuses on how social media is relevant for technical communicators.  Indeed, they argue that “information can no longer only be provided as downloadable, static documents . . . and should now include forums, email options, and opportunities to message a help technician” (7).  This reminds me of how the UNT Library page has recently been updated to include several of these options.  Because of social media, the method and medium of communicating information are changing rapidly, and technical communicators need to be up-to-date on how to adapt.  Singleton and Meloncon address this challenge by instructing technical communicators to 1) understand the social media landscape, 2) build a strategy, 3) know your audience’s preferences, 4) interact, and 5) evaluate and adjust.

Focusing specifically on Twitter as a communication tool, Alan Porter defines Twitter as “a communication tool of the moment” and emphasizes that “as professional communicators, we should . . . be in a position to use it to communicate not only among ourselves but also . . . with our customers” (10).  Porter defends Twitter as a means of communication that can raise a person’s profile in a particular community as well as help share information and knowledge (11).  However, Porter does emphasize that there are appropriate ways to communicate via Twitter.  His recommendations include keeping separate accounts for work and private use, deciding ahead of time what content you will or will not ever discuss on Twitter, and remembering that it is not a requirement to follow everyone on Twitter who follows you.  He emphasizes that “it’s what you post and the way you interact that is important” (12).  I also appreciated how Porter mentions the responsibilities of participating on Twitter, which include being a gatekeeper, being responsive, and being friendly.

Molisani’s article takes a different angle from the first two, discussing the process of developing a diagram of the content management lifecycle.  The beginning of his article reminded me of the “feedback” aspect discussed in the previous two articles, given his initial steps in developing the diagram.  Though he began with a very basic model of two adjoining lifecycles, the final result was a far more complex 3D model of a coffee pot depicting input and output (coffee beans to a coffee beverage); the stages of planning, developing, and deploying (layers of the pot); strategic planning and project management (coffee pot handle); and various localized content (coffee mugs) (18).  He emphasizes that the journey into social media needs this level of forethought – that good technical communicators need to “respond to market changes by asking . . . customers what they wanted and changing” to meet those needs (18).

All three authors reminded me of the complexity of learning to operate in the realm of social media.  It is easy for me to simply wave social media away as being a time-waster, a replacement for real-life relationships, the reason why people have such short attention spans, or yet another example of information-overload at its finest.  However, these articles brought up some key aspects of social media in terms of practical, helpful, and ethical applications with which I need to familiarize myself. 

On the flip side, I also couldn’t help but wonder if there is a downside to investing so much time and energy into social media.  We’ve discussed in class how appealing to an audience is a tricky line to walk, namely because we risk insulting the audience by attempting to appeal to a specific demographic.  Is it possible, then, that appealing to audience needs through social media could backfire?  Or is this truly the direction we should take in order to stay current with customers?

Week 10 – New New Media

October 30, 2011

Levinson, Paul. “Why ‘New New’ Media?” In New New Media, 1-16. New York: Allyn & Bacon, 2009.

In his chapter “Why ‘New New’ Media?”, Levinson discusses the widespread implications of understanding not just the differences between old media and new media but also the differences between new media and new new media.  Levinson begins by defining the five most prominent principles of new new media:

  • You Can’t Fake Being Nonprofessional – authors are not working for a newspaper or broadcast medium
  • Choose Your Medium – people can decide which medium they prefer which complements their specific talents
  • You Get What You Don’t Pay For – media are free to the consumer and sometimes for the producer
  • Competitive and Mutually Catalytic – media are competitive and they simultaneously support each other
  • More Than Search Engines and Email – new new media are different because they allow users to customize options, create content, and add specific applications (2-3)

One key element present in both new media and new new media is that “it give[s] users the same control of when and where to get text, sound and audio-visual content” (3).  Within the scope of new new media, there are multiple categories.  Levinson acknowledges that overlap occurs between categories; however, he defines these categories “based on the services they provide and the way they provide them” (5).  These categories include 1) Print, Audio, Audio-Visual, Photographic, 2) News, 3) Social Media, 4) General vs. Specific Systems, 5) Politics and Entertainment, 6) New New Media and Governmental Control, 7) Microblogging and Blogging, and 8) Hardware vs. Software.  Levinson also discusses how hardware such as the iPhone has propelled these systems and made possible the speed with which they have become available (8).

Levinson then catalogs his own involvement with new new media.  This list of “achievements” establishes his knowledge of the topic and builds his credibility as one who understands the field.  Indeed, since joining Facebook in 2004, Levinson has created a MySpace account complete with blog posts, uploaded video segments of television appearances to YouTube, contributed to Wikipedia articles, created three podcasts, begun an independent blog on Infinite Regress, joined Digg, signed up with Twitter, and joined Second Life (9-10).  He then explains how he organized his book based on the “order of importance of the new new media in the 2008-2009 world, followed by several chapters that address across-the-board issues pertinent to all new media” (11).

I found Levinson’s article interesting and insightful, given that I am only somewhat familiar with the new new media he discusses.  In fact, I have joined only two of the new new media he mentions, but I am a frequent user of (or familiar with) almost all of them.  The section toward the end of his article that discusses “The Dark Side of New New Media” resonated with me, particularly because of a recent incident at Lovejoy High School.  Cyberbullying recently drove a boy at the school to attempt suicide.  He is currently in the hospital, barely emerging from a coma, and yet students at the high school are continuing to post disgusting, hateful content on his Facebook page.  One student even posted a YouTube video this weekend titled “How to Commit Suicide.”  We need to be cognizant of such horrific events because new new media is not going away any time soon.  Therefore, we have the responsibility as technical communicators to use new new media in appropriate, professional, and ethical ways to achieve a positive purpose.  How can we engage effectively with new new media while setting positive examples for the younger generation of new new media users?  Additionally, how do we communicate the ethics of new new media in a practical way?

Week 9 – Writing Web Content

October 24, 2011

Redish, Janice (Ginny). “Content! Content! Content!” In Letting Go of the Words: Writing Web Content That Works, 1-9. Morgan Kaufmann/Elsevier, 2007.

———. “Writing Information, Not Documents.” In Letting Go of the Words: Writing Web Content That Works, 69-92. Morgan Kaufmann/Elsevier, 2007.


Redish’s chapter “Content! Content! Content!” introduces her book by focusing on how to write content for the web; she emphasizes that the majority of web users have a specific goal in mind when they approach the web and that we as writers need to construct web content in a way that helps them meet that goal.  Redish begins by emphasizing how most people “skim and scan” on the web in order to satisfy the goal that brought them to the web in the first place (2).  Good writing on the web is conversational, answers people’s questions, and lets people “grab and go” (5).  She also promotes the idea that good writing for the web is about writing and design rather than technology, offers good examples, and is inherently user-centered in its design.

Chapter 5 of Redish’s book, “Writing Information, Not Documents,” focuses on three issues: 1) breaking up large documents, 2) deciding how much to put on a web page, and 3) PDF – yes or no? (69).  Redish differentiates between information organized by topic and information organized within a book.  She gives the example of how books make sense in the “world of paper” but how, on the web, “a separate page for each topic makes more sense than a book of many topics” (71).  Writers can accomplish this by breaking web content into topics and subtopics by time or sequence, by task, by people (specific members of the audience), by types of information, or by questions people ask.

Redish strongly discourages placing large amounts of information on a web page in a way that requires users to scroll indefinitely to the end of the information.  Overloading site visitors is a sure way of guaranteeing that they do not return to that website again.  Other considerations for a web designer include download time and the question of whether (or how much) users will want to print (84).  Finally, Redish details the positives and negatives of including PDFs among web documents.  She emphasizes that PDF is appropriate when the main purpose of the document is mass distribution.  However, the general population would benefit far more from a well-designed web page than from a PDF for numerous reasons.  Reasons not to use a PDF emerge when readers don’t want the whole document, when people want to read from the screen, and when the audience is not comfortable with PDF files or with downloading software (87-88).

I found this chapter especially relevant in regard to some of the first pieces we read in this class by Giovanna and O’Keefe.  Both authors mentioned how technical writers of “the future” need to be able to do far more than simply “be good writers.”  Redish structures her entire chapter around this idea of taking far more into consideration when writing for the web than when writing for paper distribution.  I believe that the skills of knowing how and why to design web content in specific ways for specific audiences are ones technical writers must continuously learn and modify as technology changes.  Being adept at taking a document and making it acceptable and usable on the web entails a skill set that all technical communicators need.

It is becoming more common for people to browse the web on their iPhones and iPads rather than on a larger screen (laptop or computer monitor).  How could this shift change the way we approach web design?


Week 9 – Chronotopes as Memory Schemata

October 24, 2011

Keunen, Bart. “Bakhtin, Genre Formation, and the Cognitive Turn: Chronotopes as Memory Schemata.” CLCWeb: Comparative Literature and Culture 2, no. 2 (2000). http://docs.lib.purdue.edu/clcweb/vol2/iss2/2.


In his essay on chronotopes as memory schemata, Keunen proposes to link Bakhtin’s chronotope essay with “cognitive-theoretical frames of reference,” which Keunen defines as schema theories (2).  Keunen begins by dissecting how Bakhtin’s work with chronotopes – “cognitive invariants used by writers and readers in order to structure historically and textually divergent semantic elements” – contributes to the way in which people identify genres and motifs in literature (2).  Keunen proposes to explore how both “superstructural schemata” and “action schemata” can be linked to various functions of chronotopes, namely the genological and motivic functions of chronotopes (3).

Bakhtin defines his mental structures as chronotopes, which he then claims are determined by historical stereotypes such as the “adventure chronotope,” the “idyllic chronotope,” the “folkloric chronotope,” and the “chronotope of the Bildungsroman” (3).  Keunen then differentiates this approach from the influence of Russian Formalist criticism, which emphasizes the procedural approach to knowledge in literary criticism; this theory holds that “the units of this knowledge are no longer linguistic units but pragmatic elements . . . called ‘genres’” (5).  This shift allows Bakhtin to “no longer put the emphasis of critical analysis on the narrative action . . . but on the chronotopic construction that the writers and readers associate with a text” (5).  According to Keunen, Bakhtin’s work supports genological chronotopes as superstructural memory schemata through a “stereotypical sequence of spatial setting and invariant series of time segments” (i.e., how the aspects of the plot line up with events by means of time-markers) (6).  Additionally, Bakhtin’s work supports the notion of motivic chronotopes as action schemata by “enabling the reader to concretize and even to reproduce the genological language schemata [Bakhtin] associates with a specific motif” (i.e., how literary motifs trigger the reader’s prior knowledge of something not explicitly mentioned in a text) (9).

Keunen closes with a call for further “narratological” research (i.e., dealing with the distinction between spatial and chronotopical levels in a text) and “historiographical” research (i.e., how to align science fiction with world models of text) (13).

Keunen’s incredibly dense essay presents a deeply scientific view of literary criticism.  This is interesting because of the assumption that literary criticism would normally lean toward the “arts” end of the arts-sciences spectrum; Keunen, however, focuses on cognition and various forms of schemata to emphasize how reader interpretation of a text is inherently scientific.  In terms of relating Keunen’s and Bakhtin’s ideas to technical communication, I initially had trouble seeing the connection.  However, I have concluded that it is important for a writer to remember that readers bring their memory schemata to the table every time they read a piece of text, whether that be a page from a classic novel or a table in a procedural manual.  As writers in technical communication, we must take into consideration “the interaction between a world model and a concrete text” whenever we put pen to paper – or fingers to a keyboard.

Should constructs such as “genre” and “motif” be taken into consideration when writing for a globalized audience?  What concerns could arise if such factors are not taken into consideration?