Week 10 – Refining Wikipedia through Collaboration

October 30, 2011

Liu, Jun, and Sudha Ram. "Who Does What: Collaboration Patterns in the Wikipedia and Their Impact on Article Quality." ACM Transactions on Management Information Systems 2, no. 2 (2011): 11:1–11:23.

The article "Who Does What: Collaboration Patterns in the Wikipedia and Their Impact on Article Quality" identifies the main critiques of Wikipedia and presents Liu and Ram's research addressing these issues.  The main questions Liu and Ram address are 1) Why do Wikipedia articles vary widely in quality? and 2) How can the quality of Wikipedia articles be improved?  Because Wikipedia is easy to edit, an article can be edited by any person; however, not all editors edit the same way or with the same intensity (2).  Liu and Ram's research used Wikipedia's article assessment project as a starting place for determining various degrees of quality.  The criteria for assessment include: 1) well-written, 2) comprehensive, 3) well-researched and verifiable, 4) neutral, 5) stable, 6) compliance with Wikipedia style guidelines, 7) appropriate images and copyright status, and 8) appropriate length and focus (4).  To study the relationship between collaboration and quality, Liu and Ram selected articles which had been rated by these criteria as the basis for their study.  Their methods involved creating categories of how contributors edited articles; from that data, Liu and Ram then identified collaboration patterns.  They concluded that "article quality depends on different types of contributors, that is, the roles they play, and the way they collaborate" (16).  Additionally, Liu and Ram argue that improving Wikipedia article quality is possible if software tools are developed to help contributors make the decision to include references, links, and support for their edits.  These software tools should "nudge contributors to assume different roles and support self-justification and self-policing" as well as "motivate the contributors to revisit the article, review their inserted sentences, and respond to other contributors' modifications" (20).

The main point of this article hones in on the positive results which can emerge when collaboration is built into the creation and editing of Wikipedia articles.  Liu and Ram's research supports the hypothesis that the better-written and better-referenced articles are constructed by multiple contributors who justify their added content and changes and who respond to other contributors.  This same concept could easily be transferred to the field of technical communication, where collaboration continues to be rare.  Additionally, this article clearly articulates why Wikipedia continues to receive mixed reviews as a credible source of information.  I think that Liu and Ram articulate this best when they state, "It is unreasonable to simply assume that Wikipedia is completely reliable or unreliable" (2).  If practices such as the methods suggested by Liu and Ram are implemented, could Wikipedia change its current status as neither completely reliable nor unreliable?  If so, would it change the way people use the medium?


Week 10 – Exploring Social Media

October 30, 2011

Singleton, Meredith, and Lisa Meloncon. "A Social Media Primer for Technical Communicators." Intercom, June 2011, 7-9.

Porter, Alan J. “Tweet Me This…” Intercom, June 2011, 10-13.

Molisani, Jack. "Creating a 3D Model of the Content Management Lifecycle." Intercom, June 2011, 14-18.


Singleton and Meloncon's article, "A Social Media Primer for Technical Communicators," focuses on how social media is relevant for technical communicators.  Indeed, they argue that "information can no longer only be provided as downloadable, static documents . . . and should now include forums, email options, and opportunities to message a help technician" (7).  This reminds me of how the UNT Library page has recently been updated to include several of these options.  Because of social media, the method and medium of communicating information are changing rapidly, and technical communicators need to be up to date on how to adapt.  Singleton and Meloncon address this challenge by instructing technical communicators to 1) understand the social media landscape, 2) build a strategy, 3) know their audience's preferences, 4) interact, and 5) evaluate and adjust.

Focusing specifically on Twitter as a communication tool, Alan Porter defines Twitter as "a communication tool of the moment" and emphasizes that "as professional communicators, we should . . . be in a position to use it to communicate not only among ourselves but also . . . with our customers" (10).  Porter defends Twitter as a means of communication that can raise a person's profile in a particular community as well as help share information and knowledge (11).  However, Porter does emphasize that there are appropriate ways of communicating via Twitter.  His recommendations include having separate accounts for work and private use, making decisions ahead of time about content you will or will not ever discuss on Twitter, and remembering that it is not a requirement to follow everyone on Twitter who follows you.  He emphasizes that "it's what you post and the way you interact that is important" (12).  I also appreciated how Porter mentions the responsibilities of participating on Twitter, which include being a gatekeeper, being responsive, and being friendly.

Molisani's article took a different spin from the first two, as he discusses the process of developing a diagram of the content management lifecycle.  The beginning of his article reminded me of the "feedback" aspect discussed in the previous two articles, given his initial steps of developing the diagram.  Though he began with a very basic model of two adjoining lifecycles, the final result was a far more complex 3D model of a coffee pot depicting input and output (coffee beans to a coffee beverage); the stages of planning, developing, and deploying (layers of the pot); strategic planning and project management (the coffee pot handle); and various localized content (coffee mugs) (18).  He emphasizes that the journey into social media needs this level of forethought, and that good technical communicators need to "respond to market changes by asking . . . customers what they wanted and changing" to meet those needs (18).

All three authors reminded me of the complexity of learning to operate in the realm of social media.  It is easy for me to simply wave social media away as being a time-waster, a replacement for real-life relationships, the reason why people have such short attention spans, or yet another example of information-overload at its finest.  However, these articles brought up some key aspects of social media in terms of practical, helpful, and ethical applications with which I need to familiarize myself. 

On the flip side, I also couldn't help but wonder if there is a downside to investing so much time and energy into social media.  We've discussed in class how appealing to an audience is a tricky line to walk, namely because we risk insulting the audience by attempting to appeal to a specific demographic.  Therefore, is it possible that appealing to audience needs through social media could backfire?  Or is this truly the direction we should take in order to stay current with customers?

Week 10 – New New Media

October 30, 2011

Levinson, Paul. "Why 'New New' Media?" In New New Media, 1-16. New York: Allyn & Bacon, 2009.

In "Why 'New New' Media?," Paul Levinson discusses the widespread implications of understanding the differences not just between old media and new media but also between new media and new new media.  Levinson begins by defining the five most prominent principles of new new media, which include:

  • You Can’t Fake Being Nonprofessional – authors are not working for a newspaper or broadcast medium
  • Choose Your Medium – people can decide which medium they prefer which complements their specific talents
  • You Get What You Don’t Pay For – media are free to the consumer and sometimes for the producer
  • Competitive and Mutually Catalytic – media are competitive and they simultaneously support each other
  • More Than Search Engines and Email – new new media are different because they allow users to customize options, create content, and add specific applications (2-3)

One key element present in both new media and new new media is that "it give[s] users the same control of when and where to get text, sound and audio-visual content" (3).  Within the scope of new new media, there are multiple categories.  Levinson acknowledges that overlap occurs between categories; however, he defines these categories "based on the services they provide and the way they provide them" (5).  These categories include 1) Print, Audio, Audio-Visual, Photographic, 2) News, 3) Social Media, 4) General vs. Specific Systems, 5) Politics and Entertainment, 6) New New Media and Governmental Control, 7) Microblogging and Blogging, and 8) Hardware vs. Software.  Levinson also discusses how hardware such as the iPhone has propelled and made possible the speed with which these systems have become available (8).

Levinson then catalogs his own involvement with new new media.  This list of "achievements" establishes his knowledge of the topic and builds his credibility as one who understands the field.  Indeed, since joining Facebook in 2004, Levinson has created a MySpace account complete with blog posts, uploaded video segments of television appearances to YouTube, contributed to Wikipedia articles, created three podcasts, begun an independent blog on Infinite Regress, joined Digg, signed up with Twitter, and joined Second Life (9-10).  He then explains how he organized his book based on the "order of importance of the new new media in the 2008-2009 world, followed by several chapters that address across-the-board issues pertinent to all new media" (11).

I found Levinson's article interesting and insightful, given that I am only somewhat familiar with the new new media he discusses.  In fact, I have joined only two of the new new media he mentions, but I am a frequent user of (or familiar with) almost all of them.  The section toward the end of his article discussing "The Dark Side of New New Media" resonated with me, particularly because of a recent incident at Lovejoy High School.  Apparently, cyberbullying recently drove a boy at the school to attempt suicide.  He is currently in the hospital, barely emerging from a coma, and yet students at the high school are continuing to post disgusting, hateful content on his Facebook page.  One student even posted a YouTube video this weekend titled "How to Commit Suicide."  We need to be cognizant of such horrific events because new new media is not going away any time soon.  Therefore, we have the responsibility as technical communicators to use new new media in appropriate, professional, and ethical ways to achieve a positive purpose.  How can we engage effectively with new new media while setting positive examples for the younger generation of new new media users?  Additionally, how do we communicate the ethics of new new media in a practical way?

Week 9 – Writing Web Content

October 24, 2011

Redish, Janice (Ginny). "Content! Content! Content!" In Letting Go of the Words: Writing Web Content That Works, 1-9. Morgan Kaufmann/Elsevier, 2007.

———. "Writing Information, Not Documents." In Letting Go of the Words: Writing Web Content That Works, 69-92. Morgan Kaufmann/Elsevier, 2007.


Redish's chapter "Content! Content! Content!" introduces her book by focusing on how to write content for the web; she emphasizes that the majority of web users approach the web with a specific goal in mind and that we as writers need to construct web content in a way that helps them meet that goal.  Redish begins by noting that most people "skim and scan" on the web in order to satisfy the goal which brought them there in the first place (2).  Good writing on the web is conversational, answers people's questions, and lets people "grab and go" (5).  She also promotes the idea that good writing for the web is about writing and design rather than technology, offers good examples, and is inherently user-centered in terms of design.

Chapter 5 of Redish's book, "Writing Information, Not Documents," focuses on the three issues of 1) breaking up large documents, 2) deciding how much to put on a web page, and 3) PDF – yes or no? (69).  Redish differentiates between information organized by topic and information organized within a book.  She gives the example of how books make sense in the "world of paper" but how on the web, "a separate page for each topic makes more sense than a book of many topics" (71).  Writers can accomplish this by breaking web content into topics and subtopics by time or sequence, by task, by people (specific members of the audience), by types of information, or by questions people ask.

Redish highly discourages placing large amounts of information on a web page, which requires users to scroll indefinitely to the end of the information.  Overloading site visitors is a sure way of guaranteeing that they do not return to that website again.  Other considerations for a web designer include download time and the question of whether or not users will want to print (or how much they will want to print) (84).  Finally, Redish details the positives and negatives of including PDFs in web documents.  She emphasizes that PDF is appropriate when the main purpose of the document is mass distribution.  However, the general population would benefit far more from a well-designed web page than from a PDF for numerous reasons.  Reasons not to use a PDF arise when readers don't want the whole document, when people want to read from the screen, and when the audience is not comfortable with PDF files or with downloading software (87-88).

I found this chapter especially relevant in regard to some of the first pieces we read in this class by Giovanna and O'Keefe.  Both authors mentioned how technical writers of "the future" need to be able to do far more than simply "be good writers."  Redish structures her entire chapter around this idea of taking far more into consideration when writing for the web versus writing for paper distribution.  I believe that the skill of knowing how and why to design web content in specific ways for specific audiences is essential for technical writers to continuously learn and refine as technology continues to change.  Being adept at taking a document and making it acceptable and usable on the web entails a skill set that all technical communicators need.

It is becoming more common for people to browse the web on their iPhones and iPads rather than on a larger screen (laptop or computer monitor).  How could this shift change the way we approach web design?


Week 9 – Chronotopes as Memory Schemata

October 24, 2011

Keunen, Bart. "Bakhtin, Genre Formation, and the Cognitive Turn: Chronotopes as Memory Schemata." CLCWeb: Comparative Literature and Culture 2, no. 2 (2000). http://docs.lib.purdue.edu/clcweb/vol2/iss2/2.


In his essay on chronotopes as memory schemata, Keunen proposes to link Bakhtin's chronotope essay with "cognitive-theoretical frames of reference," which Keunen defines as schema theories (2).  Keunen begins by dissecting how Bakhtin's work with chronotopes – "cognitive invariants used by writers and readers in order to structure historically and textually divergent semantic elements" – contributes to the way in which people identify genres and motifs in literature (2).  Keunen states that he will explore how both "superstructural schemata" and "action schemata" can be linked to various functions of chronotopes, namely the genological functions of chronotopes and motifs (3).

Bakhtin defines his mental structures as chronotopes, which he claims to be determined by historical stereotypes such as the "adventure chronotope," the "idyllic chronotope," the "folkloric chronotope," and the "chronotope of the Bildungsroman" (3).  Keunen then contrasts this approach with the influence of Russian Formalist criticism, which emphasizes the procedural approach to knowledge in literary criticism; this theory holds that "the units of this knowledge are no longer linguistic units but pragmatic elements . . . called 'genres'" (5).  This shift allows Bakhtin to "no longer put the emphasis of critical analysis on the narrative action . . . but on the chronotopic construction that the writers and readers associate with a text" (5).  According to Keunen, Bakhtin's work supports genological chronotopes as superstructural memory schemata through a "stereotypical sequence of spatial setting and invariant series of time segments" (i.e., how the aspects of the plot line up with events by means of time-markers) (6).  Additionally, Bakhtin's work supports the notion of motivic chronotopes as action schemata by "enabling the reader to concretize and even to reproduce the genological language schemata [Bakhtin] associates with a specific motif" (i.e., how literary motifs trigger the reader's prior knowledge of something not explicitly mentioned in a text) (9).

Keunen closes with a call for further "narratological" research (i.e., dealing with the distinction between spatial and chronotopical levels in a text) and "historiographical" research (i.e., how to align science fiction with the world models of a text) (13).

Keunen's incredibly dense essay presents a deeply scientific view of literary criticism.  This is interesting because of the assumption that literary criticism would normally lean toward the "arts" end of the arts-sciences spectrum; however, Keunen focuses on cognition and various forms of schemata to emphasize how a reader's interpretation of a text is inherently scientific.  In terms of relating Keunen's and Bakhtin's ideas to technical communication, I initially had trouble seeing the connection.  However, I have concluded that it is important for a writer to remember that readers bring their memory schemata to the table every time they read a piece of text, whether that be a page from a classic novel or a table in a procedural manual.  As writers in technical communication, we must take into consideration "the interaction between a world model and a concrete text" whenever we put pen to paper – or fingers to a keyboard.

Should constructs such as “genre” and “motif” be taken into consideration when writing for a globalized audience?  What concerns could arise if such factors are not taken into consideration?

Week 8 – Cruel Pies

October 17, 2011

Dragga, Sam, and Dan Voss. "Cruel Pies: The Inhumanity of Technical Illustrations." Technical Communication 48, no. 3 (2001): 265-74.


In "Cruel Pies" by Dragga and Voss, the crux of the article is "the ethics of visual display" (265).  The majority of definitions related to the ethics of visual communication revolve around "distortion," "deception," "the lie factor," and "lying graphics" (265).  However, discussing the ethics of visual communication in these terms "is useful and important, but insufficient" because it neglects "studying and developing a variety of techniques that will bring humanity to technical illustrations" (266).  This "humanity" aspect rings true to Carolyn Miller's humanistic approach to technical communication.  While Miller relied on "the verbal component of communication to carry the entire weight of the humanistic orientation," Dragga and Voss seek to apply the humanistic approach to visuals as well (266).

The main focus of "Cruel Pies" is the insensitivity to human fatalities in visual displays.  Dragga and Voss examine pie graphs depicting human fatalities in the fishing industry, column graphs of human fatalities from bedding fires, and column graphs of baby walker-related injuries (268-269).  They argue that "nowhere are the statistics given the humanity of flesh and blood . . . [and] offer only a pitiless depiction of human misery" (269).  To solve this ethical dilemma, Dragga and Voss advocate incorporating photographs, iconography, or cartoons into visuals to serve as reminders of the human subject matter without resorting to morbid or overly graphic displays.

At the end of the article, Dragga and Voss acknowledge that some people feel that incorporating such images into a graphic display "would be unnecessary, unscientific, and distracting," to which the authors reply: "Yes, it would.  And that's the whole point" (272).  They feel that technical communicators hold the responsibility of presenting information accurately and clearly while continuing to uphold the ethics of visual communication.

I had a difficult time with this article because I constantly felt like I was standing on both sides of the issue.  On one hand, I completely agree that people tend to read statistics, particularly regarding human fatalities, with a certain degree of apathy because the information is simply presented as numbers and figures on a page.  However, I think the reason for this is not as simple as Dragga and Voss make it appear.  Usually, people only have a sympathetic, emotional reaction to statistics if there is an element that makes them intensely personal.  For example, if I were reading statistics and viewing a graph on school shootings in the United States, I would perceive the statistics in a matter-of-fact way regardless of whether there were an image of a gun placed next to the graph.  However, if I were reading statistics specifically relevant to Columbine High School, I would have an emotional reaction regardless of whether the information were presented in standard bar graphs or in bar graphs accompanied by a picture.  This is because I have a cousin who was a student at Columbine and was in the cafeteria the day of the shooting; my reaction to the numbers would have more to do with my personal connection to the incident than with how the information itself was displayed on paper.

Though I think the idea of incorporating a humanistic aspect into visuals is important, I feel that it runs the risk of coming across as trite or (even worse) insensitive for different reasons.  How can we incorporate elements of humanity back into graphs without creating additional problems?

Week 8 – Nightingale’s Visual and Verbal Rhetoric

October 17, 2011

Brasseur, Lee. “Florence Nightingale’s Visual Rhetoric in the Rose Diagrams.” Technical Communication Quarterly 14, no. 2 (2005): 161-82.


In her essay, Lee Brasseur argues that Florence Nightingale's success in persuading the British government to institute reform of sanitary conditions in military hospitals was largely based on Nightingale's use of visual and verbal rhetoric.  After working as a nurse and administrator at the front of the Crimean War, Nightingale wrote a recommendation for the Royal Commission after months of "researching, consulting officials, interviewing hospital workers, and poring over varying accounts of mortality" in order to clearly state which reforms should be made.  Because the government took no action on her recommendations, Nightingale published an annex to her report which 1) refuted the objections of doctors who disagreed with her statistics and 2) convinced the government to implement reforms (164).

Nightingale's annex systematically addressed the objections raised by the doctors by producing tables and figures complemented by descriptions.  Brasseur highlights the effectiveness of Nightingale's visual rhetoric by focusing on her three "rose diagrams."  The first rose diagram compared Manchester mortality to Crimean War mortality by placing two graphs side by side to show the substantial difference in mortality rates.  It was especially effective because "the circular shape of the diagram [was] well-suited to showing the progression of the war in a time-based genre . . . like . . . [a] clock" (171).  Nightingale's second rose diagram detailed "which portion of the mortality data for that month could be allotted to each cause of death," thereby "help[ing] the reader understand the reasons for death" (171).  Additionally, Nightingale color-coded the different causes of death in blue, black, and red.  The third rose diagram included both comparative and progressive arguments: progressive, by indicating the date when reforms were initiated, and comparative, by showing the difference between London's military hospitals and hospitals at the military front (173-174).  Brasseur emphasizes Nightingale's verbal rhetoric as well, showing how she combines a "straightforward, factual, and concise approach" with a strong emotional appeal in order to "influence the emotional response of her audience and to make the numbers in the diagram come alive" (176).  Her strategies effectively persuaded the government to establish four subcommissions to carry out her reforms.

This essay sheds light on the misconception that rhetoric is purely an "arts and humanities" construct that simply adds creative wordplay to a text; rather, Brasseur presents Nightingale's annex to her field report as an example of how rhetoric can both clarify information and persuade an audience.  I found the article especially interesting in light of our class discussion last week about how technical communicators often do a poor job of interpreting graphs and figures for their audience.  Nightingale's first failed attempt to elicit reform resonated with me.  I was struck by how she readily jumped at the opportunity to revise and refine her argument by presenting the same information in a different way in order to appeal to her audience and achieve the desired response.  This act of refining the interpretation of results is paramount to our work as technical writers; indeed, learning to accurately interpret results and communicate those ideas to an audience is a highly desired skill in the world of work.  What is another instance in which an audience might need information clarified through visuals and enhanced by descriptive text?