Week 14 – Readers’ Response to Certification

November 28, 2011

Cuddihy, Kevin. “A Monumental Day Dawns for Technical Communicators: Certification!” http://notebook.stc.org/a-monumental-day-dawns-for-technical-communicators-certification.

 

The reader comments on Cuddihy’s post about certification for the technical communication field include both enthusiastic approval and vehement opposition. Those who approve applaud various aspects of the process because certification:

  • Shores up the business proposition of belonging to STC
  • Differentiates true professionals in the field from those who just have “a knack for writing”
  • Adds credibility to STC membership and to the technical communication profession
  • Helps raise technical communicators’ visibility within a company
  • Provides an opportunity for people who are new to the field to gain credibility if their portfolios are lacking

Many readers, however, are far less enthusiastic about the prospect of implementing a required technical communication certification. They oppose certification because it:

  • Will only become relevant when hiring managers actually understand the skills and details of the certification
  • Focuses on individual work (which is difficult to identify in a climate of collaborative projects)
  • Requires portfolios of work (which is impossible for some people to produce due to non-disclosure agreements)
  • Could exclude a large number of quality writers due to the requirements of the certification
  • Might encourage people to earn a certification in place of earning a technical communication Master’s degree
  • Could be wielded by “ignorant HR departments” as a screen against otherwise qualified writers
  • Does not allow for a “grandfathering” period for practitioners who have been in the field for an extensive amount of time
  • Only has a “three-year shelf-life” before mandatory renewal
  • Could cause people with a solid reputation to have a hard time being hired if they do not become certified
  • Might mark people who are experts in niche areas as unqualified

Looking at the reader comments, it appears that far more people have concerns about the certification requirements than offer wholehearted approval. I know that for educators in public schools, seeking multiple certifications comes with the territory of being a teacher. This can be either positive or detrimental depending on the current circumstances of a school district. For example, during the budget cuts in education last spring, several thousand teachers suffered due to Reduction in Force (RIF). Administrators for each school were given a list of qualifications to assess all teachers in the school to see which teachers would stay and which ones would be let go. At one PISD school, some of the qualifications that the administrators looked at included: years of teaching experience, years of teaching experience at a particular school, number and type of degrees, number of positive/negative reviews from classroom walk-throughs, and number of different certifications. I knew a teacher at that school who had taught for 15+ years with impeccable reviews. She had her undergraduate degree and her teacher certification for K-4; however, she was RIFed because she had only one certification, whereas many of the younger teachers were certified in two or even three different areas of expertise (ESL, Special Ed, grades 4-8, etc.). Though the fields of education and technical communication are not directly comparable, the premise remains similar: placing a strong emphasis on new (or additional) certifications has the potential to put the jobs of excellent professionals in jeopardy. How could the STC reevaluate the grandfathering aspect of certification to accommodate these concerns?


Week 14 – Certification Will (or Will Not) Work

November 28, 2011

Hart, Geoffrey J.S. “Why Certification by STC Won’t Work.” Intercom, July/August 2008, 11-13.

Rosenberg, Nad. “Certification – Why We Need to Begin.” Intercom, July/August 2008, 11-12.

Both Hart and Rosenberg address whether STC should implement required certification for technical communicators. Hart openly admits from the start that his argument is “clearly one-sided” and that his article focuses on the drawbacks of certification, namely the issue of whether employers will eventually pay more for a certified communicator (1). If STC members are the only people who recognize the value of a TC certification, then the certification will be highly ineffective. Another drawback to certification stems from the reality that technical communication is a highly subjective field which contains (according to Hart) “no universally accepted ‘best practices’ . . . [because] often, a technical communicator can choose from several solutions to . . . solve a problem” (2). Additionally, Hart notes that BELS already has its own certification system and worries that STC certification “appears to be reinventing the wheel” (2). He concludes by asserting that even if STC certification were implemented, grandfathering presents multiple obstacles in determining how to assess a candidate’s years of experience without diluting the certification program’s value.

On the flip side of the certification issue, Rosenberg boldly supports a move toward implementing certification standards. He begins by citing technical communication certification programs offered in Europe and in India and questions why we (in America) have yet to follow suit. Like Hart, Rosenberg emphasizes the monetary perspective, quoting Judith Hale: “The driver behind most certifications is economic, whether this fact is stated or not” (1). Some of the benefits certification presents include providing evidence of competency when hiring or evaluating technical communicators, establishing quality assurance in hiring, and giving prospective employees a way to demonstrate their interest and commitment (not to mention competency) in the field. However, Rosenberg concedes that there will be a “chicken-and-the-egg” scenario until the financial details can be worked out. Additionally, he admits (like Hart) that establishing a set Body of Knowledge is a daunting task considering the scope of the field.

As a new practitioner in this field, I understand that developing an educated opinion about STC certification is imperative to my future success. At this point, I can see the value in portions of both arguments. For example, I believe that establishing a set BoK is unrealistic because of the diversity of the technical communication field. However, I also find that certification as a means of establishing myself as knowledgeable and competent could be extremely helpful when it comes to being hired, simply because I don’t have decades of work experience under my belt; in other words, I would probably be a better candidate for embracing the certification than a woman who has been in the profession for 25 years.

Additionally, I wonder if my perspective on this issue will change as I begin to work in the field and see how the politics of these organizations actually play out. As Rosenberg says at the start of his article, “Times have changed – and my opinion along with them.” How will the technical communication community’s response to certification change as time passes and as the details of the certification process actually get fleshed out?


Week 13 – Situating Learning to Write

November 21, 2011

Freedman, Aviva, and Christine Adam. “Write Where You Are: Situating Learning to Write in University and Workplace Settings.” In Transitions: Writing in Academic and Workplace Settings, edited by Patrick Dias and Anthony Paré. Cresskill, NJ: Hampton Press, Inc., 2000.

 

The research completed by Freedman and Adam in “Write Where You Are” explores two types of situated learning described as “facilitated performance” and “attenuated authentic participation”.  The students in the facilitated performance group were undergraduates enrolled in a finance course; the novices in the attenuated authentic participation group were graduate students involved in full-time internships with government agencies. 

The students in the finance class learned discipline-specific writing. Freedman and Adam found that the purpose and the goal of writing in this academic setting were geared almost entirely to “the learner and to the learner’s learning” (38). The instructor would scaffold the students’ learning by modeling appropriate approaches to writing and asking numerous questions to help students look at information in particular ways. Additionally, the students learned through collaborative performance, writing papers together.

The interns, however, had a much different experience as they engaged in true workplace writing.  One key difference was that “no conscious attention [was] paid to the learner’s learning; all attention is directed to the task at hand and its successful completion” (45).  Because the experiences of the internship were not carefully structured and sequenced like a course curriculum, many of the interns had trouble adjusting to a less-structured environment.  This is primarily because the interns “did not necessarily recognize the opportunities for learning in the new setting because they [were] used to the way they learned in the old setting” (51). 

As a result of this unfamiliar setting, many of the interns experienced adverse emotions during the course of the internship. I found it interesting that Freedman and Adam asserted that these feelings of “disjuncture, anxiety, or displacement” experienced by the interns “are inevitable, given the differing nature of the institutions, and not signs of student or school failure” (56). This conclusion was especially surprising to me given the previous article I read by Johns about adjusting teaching techniques to better prepare students for the workplace. Rather than agreeing with Johns, Freedman and Adam appear to almost shrug and claim that the workplace and the university are two discrete institutions; therefore, differences in student reactions to these drastically different environments are not something to be remedied. I know that to some extent I struggled with this idea when I went from writing lesson plans during my education courses at UNT to actually planning lessons for my students at the high school. The level of supervision, the length and style of the documents, and the practicality of incorporating methods into the classroom were drastically different between my student-teaching experience and actually running my own classroom. Additionally, the differences in logistics and expectations between school districts are even more pronounced; I was unprepared for the sharp difference between theoretically discussing classroom procedures and actually having to dismiss an unruly student from my class.

How can professors structure classes to make the transition from the classroom into the workplace smoother?


Week 13 – The File Cabinet Has a Sex Life

November 21, 2011

Johns, Lee Clark. “The File Cabinet Has a Sex Life: Insights of a Professional Writing Consultant.” In Worlds of Writing: Teaching and Learning in Discourse Communities of Work, edited by Carolyn B. Matalene. New York: Random House, 1989.

 

In “The File Cabinet Has a Sex Life,” Johns discusses the nature of the writing models that dictate the way documents are written in the workplace. He identifies the four “parents” of this problem as the academy, the profession, the organization, and the supervisory review, and details how each “parent” contributes to the models most people use to write.

Because people learn to write in school, it makes sense that writers default to an academic writing style modeled on the essay or research paper. As a result, people tend to write “long, rambling reports” of information; for example, in the banking industry, it is not uncommon for writers to break the text into a “short introduction, background, a story about the company’s operations and financial position, etc., until they reach the conclusion about credit risks involved in the loan” (156). Such writing is ineffective for helping senior executives make important decisions about the organization as a whole.

Professional discourse also has roots in academia; however, some models arise from specific professional requirements. Johns gives three examples of professional discourse communities: the patent application (emphasis on the claims section as the “bottom line”), the internal audit report (establishes information of primary interest to the accountant before information of primary interest to management), and the certification report (used by a judge in deciding whether or not a juvenile offender should be certified to stand trial as an adult).

Additionally, the organization model (one-page memo or corporate style sheets) and the supervisory review (supervisors who dictate requirements based on personal preference or company tradition) also explain where these “models” of writing originate. 

Ultimately, people write within these parameters or with a specific tone and style because “they think they are supposed to” (181). Johns challenges these old, dated styles of writing by urging everyone to “clean out the file cabinet” in order to “replace archaic models with superior descendants” (183). He claims that teachers need to focus on testing students’ problem-solving skills and should create assignments that duplicate the context of the workplace by having students “writ[e] for real audiences, for real purposes” (183). The academic world needs to duplicate the workplace environment and revisit the traditional approaches to scholarly papers. In the workplace, companies should “evaluate the supervisory review process . . . to improve it” (185). Finally, writing consultants should think of themselves as “change agents” and accept their roles as being central to a company’s success (185).

Johns’s frustration with the tendency for writers to automatically default to traditional writing models reminded me of Popken’s article on the dissemination of the résumé in textbooks over the years. The idea of following models simply for lack of any other point of reference harkened back to the textbook methodology Popken disparages in his article; and, much like Popken, Johns advocates a move away from these standard, formulaic methods of writing within the frameworks of certain genres.

Additionally, I was intrigued by Johns’s emphasis on how people have been steeped from an early age in the models of writing taught in English classes:  personal essays, arguments on public issues, literary analyses, and term papers (155).  As an English teacher, I constantly battle with this issue myself.  Even as I’ve graded stacks of poetry analysis essays this weekend, I’ve thought to myself, “Sure, these kids can write a stellar analysis of William Blake . . . but will they be at a complete loss when it comes to writing for ‘the real world’?” As someone who became a teacher with the aspiration to equip adolescents for success in my classroom, in their academic career, and in their future professions, I struggle with the impracticality of some of the course curriculum offered in the public school system; I struggle with the knowledge that the skills I teach my students (though helpful with building a strong work ethic and developing critical thinking processes) will do little to shape their writing for anyone other than an academic audience.  Therefore, I appreciated Johns’s section titled “The Challenge for Teachers” which emphasized how the classroom should duplicate the context of the workplace.  Though teaching students to write letters and memos is not part of the high school curriculum, I do believe that teaching collaborative writing, engaging students in the writing/review process, and having students pay a penalty for sloppy or poor writing would be highly beneficial.  What are some other ways to incorporate the broader needs of students into the current high school curriculum?  Additionally, what are methods for helping bring change to the writing styles of other institutions as well?


Week 12 – Awesome Web Analytics

November 12, 2011

Kaushik, Avinash. “The Awesome World of Clickstream Analysis: Metrics.” In Web Analytics 2.0: The Art of Online Accountability and Science of Customer Centricity, 38-73. Sybex, 2009.

 

Avinash Kaushik outlines the ins and outs of metrics and key performance indicators in web analytics. He defines a metric as a quantitative measurement of statistics describing events or trends on a website, while a key performance indicator is a metric that helps you understand how you are doing against your objectives (37).

Kaushik explains that the experience of someone coming to your website and spending some time browsing around before leaving is commonly called a session, visit, visitor, or some other label (38). The emphasis of this metric is the time aspect. Computing Unique Visitors, by contrast, involves the web analytics tool trying to approximate the number of distinct people who come to your website. This gets confusing because the tool often counts the same visitor more than once, creating faulty data. Kaushik asserts that because of this faulty duplication, there are only two visitor metrics worth assessing in web analytics: Visits and Absolute Unique Visitors (43).
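
To make the distinction concrete, here is a minimal sketch of how a tool might count Visits and Unique Visitors from a raw page-view log. The 30-minute inactivity timeout, the data layout, and the cookie IDs are my own assumptions for illustration, not Kaushik’s:

```python
from datetime import datetime, timedelta

# Hypothetical page-view log: (visitor_id, timestamp), assumed sorted by time.
pageviews = [
    ("cookie_A", datetime(2011, 11, 1, 9, 0)),
    ("cookie_A", datetime(2011, 11, 1, 9, 5)),
    ("cookie_B", datetime(2011, 11, 1, 9, 10)),
    ("cookie_A", datetime(2011, 11, 1, 14, 0)),  # same person returning later
]

SESSION_TIMEOUT = timedelta(minutes=30)  # assumed inactivity cutoff

visits = 0
last_seen = {}  # visitor_id -> timestamp of that visitor's most recent page view
for visitor, ts in pageviews:
    # A new visit (session) starts when a visitor appears for the first time
    # or returns after the inactivity timeout.
    if visitor not in last_seen or ts - last_seen[visitor] > SESSION_TIMEOUT:
        visits += 1
    last_seen[visitor] = ts

unique_visitors = len({visitor for visitor, _ in pageviews})

print(visits)           # 3 visits (sessions)
print(unique_visitors)  # 2 unique visitors
```

Note that if the same person showed up under two different cookies, they would be double-counted here, which is exactly the duplication problem Kaushik warns about.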

Time on Page and Time on Site are designed to measure the time that visitors spend on an individual page and the time spent on the site during a visit or session (44). However, the web analytics tool is unable to calculate how long a visitor spent on the last page of the site because the second time stamp is missing. The challenge, therefore, is knowing when the exit from that last page happened.
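
A rough sketch (again my own illustration with invented pages and timestamps, not code from the book) shows why the exit page never gets a measurable duration:

```python
from datetime import datetime

# One visitor's session: (page, timestamp of the page view). Invented data.
session = [
    ("/home",     datetime(2011, 11, 1, 9, 0, 0)),
    ("/products", datetime(2011, 11, 1, 9, 1, 30)),
    ("/checkout", datetime(2011, 11, 1, 9, 4, 0)),  # exit page
]

time_on_page = {}
for (page, ts), (_, next_ts) in zip(session, session[1:]):
    # Time on a page = the next page view's timestamp minus this one's.
    time_on_page[page] = (next_ts - ts).total_seconds()

# The exit page ("/checkout") never gets a second timestamp, so the tool
# cannot time it; Time on Site here only covers the first two pages.
time_on_site = sum(time_on_page.values())

print(time_on_page)  # {'/home': 90.0, '/products': 150.0}
print(time_on_site)  # 240.0 seconds
```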

Kaushik’s favorite web metric is the Bounce Rate, which measures the percentage of sessions on your website with only one page view – meaning that a person came to the web page and left without giving the website even one click. He prefers Bounce Rate to Exit Rate because Exit Rate simply records how many people left your website from a certain page. The problem with this is that everyone who enters a website eventually has to leave – “their exit from a page is no indication of the greatness, or lack thereof, of that particular page!” (54).
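
As a quick worked example with made-up session counts, Bounce Rate is just single-page-view sessions divided by all sessions:

```python
# Invented data: number of page views recorded in each session.
sessions = [1, 4, 1, 2, 7, 1, 3]

bounces = sum(1 for page_views in sessions if page_views == 1)
bounce_rate = bounces / len(sessions)

print(f"{bounce_rate:.1%}")  # 42.9% of sessions were single-page-view bounces
```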

The Conversion Rate metric receives the most attention because it measures what comes out of the website. Expressed as a percentage, the Conversion Rate is defined as “Outcomes divided by Unique Visitors” (55). Kaushik believes that most customer behavior is pan-session (that is, spread across multiple sessions), which means that most customers making real-world purchases will come to the website, check elsewhere, allow time to pass, and then return to the website to complete the purchase (56).
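
Using that definition, a minimal worked example (the totals are invented) might look like this:

```python
# Invented monthly totals pulled from an analytics report.
outcomes = 320            # e.g., completed orders
unique_visitors = 12_800  # Absolute Unique Visitors for the same period

conversion_rate = outcomes / unique_visitors
print(f"{conversion_rate:.2%}")  # 2.50%
```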

Engagement as a metric is difficult to measure because it is impossible to derive the kind of visitor Engagement (positive/negative) from the degree of Engagement (58). Indeed, it would be far more beneficial to use other forms of measurement (such as surveys or response cards) to measure the degree to which a visitor was engaged.

Because of all the options with web metrics, it is important to use the ones that fit the four attributes of effective metrics: uncomplex, relevant, timely, and instantly useful. Additionally, taking time to customize the analytics reporting interface saves time and energy because it results in “a single clear view [to] help understand performance better and take action” (68).

Web analytics is something that I’ve never really understood until now. I can definitely see the value not only of using web analytics but also of having the insight to know which metrics would be most beneficial for the needs and goals of a company or organization. Again, the whole idea of keeping the bigger goal in mind comes into play with understanding and interpreting such data. Without truly understanding what all the numbers mean, faulty reports could result (such as reliance on Daily, Weekly, or Monthly Unique Visitors rather than Absolute Unique Visitors). Additionally, misinterpreting web analytics could result in fixing something that’s not actually broken. For example, someone who relied too heavily on the Exit Rate metric might think that a particular web page was ineffective because of the high percentage of visitors who left from that page. However, closer analysis could reveal that visitors left from that page because it was the last page in a series of online check-out steps for making a purchase. Using common sense as well as giving serious attention to what the metrics are actually measuring is essential for delivering accurate results. What are the implications of failing to correctly interpret web analytics results?


Week 12 – Content Strategy for the Web

November 12, 2011

Halvorson, Kristina. Content Strategy for the Web, 5-42, 147-72. New Riders Press, 2009.

 

Halvorson begins “Content Strategy for the Web” with five suggestions to improve web content.  These include: 

  • Do less, not more – less content is easier to manage and is more user-friendly
  • Figure out what you have and where it’s coming from – conduct a content audit
  • Learn how to listen – find out from your customers what their true needs are
  • Put someone in charge – establish an editor-in-chief to maintain the content
  • Start asking, Why? – develop clear reasons for delivering content online (5-12)

One of the biggest problems with content strategy is that in most organizations, no one actually owns the content – this causes priorities to clash and compromises to be made (20). Someone needs to be “in charge” of balancing all of the different priorities across the organization in order for the content to actually be effective. Halvorson goes on to emphasize that our standards for content are really low. Too often, more time and effort are invested in the flashy design and interface workings of a website, leaving content as a last-minute afterthought.

Halvorson defines content strategy as a “holistic, well-considered plan for obtaining a specific goal or result” through the use of “text, data, graphics, video, audio, [and] countless [online] tools” (32). A true content strategy plans for content’s creation, delivery, and governance (33). In order to determine what angle to take on this strategy plan, web analytics must come into the mix to provide “hard data about how content efforts have impact on your business’s bottom line” (147). One of the most difficult aspects of implementing a plan is to do so with a clear maintenance plan in place. This can include both individual and team responsibilities, including web editor-in-chief, web editor, web writer, search engine optimization strategist, reviewers, and approvers (160-161). Some of the best examples of companies that treat web content as a business asset driving their online success include:

  • Wells Fargo
  • IBM.com
  • REI
  • Mint.com

This whole balance of learning how to listen to actual, relevant needs and filtering out the immediate wants of an organization is a fine-tuned skill that can make the difference between an effective website and one that comes across as desperate to incorporate every new idea that happens to pop into someone’s head.  Along this same line of thought, it is important to look to the needs of the customer in addition to the desires of the organization as a whole; however, “just because an employee or a customer asks for something does not mean it should be automatically delivered upon” (11).  For example, in the reading for last week by Friess, participants in a study indicated that they wanted an index to help navigate the manual the participants were testing.  However, the organization was adamant that they were not going to pay to include an index – it was non-negotiable.  Eventually, the index did work its way into the final product but only because the organization realized its essential benefit to the participants in the study. 

The other aspect of content strategy that really clicked with me was the emphasis on clear business objectives and goals for an organization. In curriculum planning, I often see too many teachers get hung up on a particular novel that they just can’t bear to part with or on a new technique or teaching strategy that they simply must incorporate into the classroom; often, ideas and methods are implemented without pausing to ask “Why?” Sure, new techniques and newer technology can be helpful for classroom engagement and for variety in presenting information, but if they fail to contribute to the overall goal of teaching a concept or getting a skill across, how effective can they be? The same premise is true for delivering effective web content. Without a clear goal and purpose in mind, a website can be ineffective and fail to achieve its desired results despite its shiny new homepage and flashy icons. What qualities and skills are necessary in an individual who oversees the creation and execution of an effective content strategy for an organization?


Week 11 – Usability Tests and Evaluator Bias

November 6, 2011

Friess, Erin. “Discourse Variations Between Usability Tests and Usability Reports.” Journal of Usability Studies 6, no. 3 (2011): 102-16.

 

The article “Discourse Variations Between Usability Tests and Usability Reports” documents research that uses discourse analysis techniques to compare the language used by end users to the language used in the evaluators’ oral reports. Friess conducted five rounds of formative testing involving three pairs of team members, all novices in conducting usability tests; each pair was assigned a participant. A team of raters read the transcripts, watched the video recordings, and “determined if any of the issues mentioned in the oral reports but not mentioned by the usability participant could be reasonably assumed by the actions of the usability participant in the think-aloud usability test” (105). From there, 83.9% of the findings had some basis in the usability testing; however, 65% of these were accurate findings and 34.6% were potentially inaccurate findings (106). The data for both the accurate and the inaccurate findings came from sound bites and from evaluator interpretation. The discussion portion of Friess’s article comments on the gatekeeper role of the evaluators and how this powerful role could explain why differences exist between the language used by end users and the language used in evaluators’ oral reports. Four possible explanations include the following:

 

  • Confirmation Bias in Oral Reports – The evaluators appeared to seek out confirmation for issues they had previously identified (110)
  • Bias in What’s Omitted in the Usability Reports – The evaluators at no time presented findings to the group that ran counter to a claim the evaluators had made at a previous meeting (111)
  • Biases in Client Desires – The evaluators did not mention a specific participant desire for an index because the client had already specified that including an index was not an option (112)
  • Poor Interpretation Skills – The evaluators were not well-experienced in this kind of study and therefore clung to the few pieces of data that they understood well (sound-bite data) (113)

 

Honestly, I found this article to be somewhat surprising. It bothers me that so many evaluators will orally communicate information about the participants’ tests without actually referencing the transcript or notes of the test itself. Relying solely on memory, particularly when specific biases might be at stake, has proven faulty time and time again. For example, my client for my STEM brief is currently studying the consistency of positive flashbulb memories over time. Her research indicates that as time passes, individuals continue to have a strong belief in the veracity of their memories even as the consistency of those memories actually decreases. Even though the team members conducting the tests do not experience flashbulb memory while monitoring or observing the participants, there is still an element of unreliability when it comes to presenting information from memory rather than from documented data. The whole process of learning to interpret results accurately and without bias is a crucial component of good practice in technical communication. This study reminds me that certain parameters need to be in place even in the semi-informal testing environment of do-it-yourself usability practice. How could conducting experiments in this fashion hurt the credibility of technical communication as a field? What are some safeguards to put into place to avoid such biases and discrepancies in future do-it-yourself usability tests?


Week 11 – Understanding the Desirability Factor in User Experience

November 6, 2011

Barnum, Carol M., and Laura A. Palmer. “More Than a Feeling: Understanding the Desirability Factor in User Experience.” In CHI 2010, 4703-15. Atlanta, Georgia, 2010.

In the article “More Than a Feeling,” Barnum and Palmer discuss ways to measure the “desirability” factor in studies instead of relying solely on the sometimes-problematic use of post-test questionnaires.  Microsoft created and used 118 product reaction cards which gave participants a wider breadth of options for how to express feedback regarding products.  Barnum and Palmer incorporate the product reaction cards into their studies “to know what users felt and . . . to add an element of methodological plurality to [their] studies” (4706).  Some of the case studies included:

  • Computer Network Monitoring Application
  • Destination Teaching Website
  • Comparative Evaluation of Hotel Reservation Process
  • Hotel Group Study on Fee-based Loyalty Program Enrollment
  • Call Center Application

These cases, though they fall under various industries, produced similar results in terms of the product reaction cards. Thematic groupings of words as well as repeated word selection occurred often throughout the case studies. Though Barnum and Palmer felt that using the cards was helpful in determining a different angle on participant feedback, they believe that “the cards should not be used as the sole means of getting participant feedback” and that such methods “work best when used along other satisfaction survey instruments or when used as a baseline for comparison in iterative studies” (4715).

This article brought up the crucial question of how to test a document’s effectiveness. Indeed, actually subjecting material to participant testing is the only true way to determine whether or not the goal of the document is being accomplished. This is crucial for technical communicators because we need to understand how this process works. Revising and changing documents to meet the needs of the audience is an essential part of the writing and creating process; additionally, learning how to interpret feedback from participant studies and then knowing what to do with such results will be essential for any technical communicator. What are the potential downsides of relying completely on product reaction cards rather than diversifying the methods of collecting participant feedback?


Week 11 – User-Centeredness in Computer Documentation

November 6, 2011

Johnson, Bob. “User-Centeredness, Situatedness, and Designing the Media of Computer Documentation.” In ACM Eighth International Conference on Systems Documentation, 55-61, 1990.

 

Johnson’s article on user-centeredness in designing computer documentation begins with an emphasis on the danger of such a philosophy.  He suggests that the phrase “user-centeredness” could become “at best, empty rhetoric, and, at worst . . . could serve to undermine the humanitarian goals of a user-centered ethic” (55).  His dual purpose in writing the article is to:

1) focus on a clear understanding of what user-centeredness means in regard to computer documentation, and

2) discuss how to design for the different media of computer documentation.

Johnson argues that “the ineffectiveness of systems lies in the miscalculations and poor planning of the designers” rather than being a reflection on the competency of the consumers (56). Much documentation is written to reflect “what the designer views as the important components” instead of taking the true user into account (56). With regard to a text-centered approach to document design, Johnson comments that the chief drawback is that the approach focuses on “how well readers comprehend and follow printed text,” which can limit the document’s effectiveness (57). Centering on the user’s situation, by contrast, directs attention to the user and the user’s environment. The user-centered view then continues outward by analyzing the tasks and actions of the user, the user’s activity with the medium, and the design of the documentation (57-58). Additionally, Johnson suggests representing the rhetorical framework of user-centered documentation (users, writers, and task/action) within the broader scope of global contexts and situations, thus “giv[ing] computer documentation a broader . . . and more relevant structure” (59).

Reading this article jogged mental connections between several pieces we have read so far this semester. Johnson’s emphasis on the ineffectiveness of some document designs reminded me of Cooper’s “Designing for Pleasure”; like Cooper, Johnson argues that when the specific user is “far removed from the central concerns of the system design, [the user] is left with the task of reconstructing the entire system into his or her own image of what has been passed on from the system image” (56). Additionally, I found it important to remember that in today’s age of technical communication, much of our work will be formatted for web-based or screen-based viewing; factors such as “eye strain, impatience, poor resolutions, etc. all play a role in the difficulties of reading the computer screen” (57). Therefore, the way content is managed for a website directly determines whether users can easily “browse, access, skim and jump from screen to screen” or whether the content requires them to read large chunks of text for extended periods of time. How would a user-centered approach to computer documentation remain effective in specific cultural situations that have technology constraints?


Week 10 – Refining Wikipedia through Collaboration

October 30, 2011

Liu, Jun, and Sudha Ram. “Who Does What: Collaboration Patterns in the Wikipedia and Their Impact on Article Quality.” ACM Transactions on Management Information Systems 2, no. 2 (2011): 11:1-11:23.

The article “Who Does What: Collaboration Patterns in the Wikipedia and Their Impact on Article Quality” identifies the main critiques of Wikipedia and presents research by Liu and Ram that addresses these issues. The main questions Liu and Ram address are 1) Why do Wikipedia articles vary widely in quality? and 2) How can the quality of Wikipedia articles be improved? Because Wikipedia is easy to edit, an article can be edited by any person; however, not all editors edit the same way or with the same intensity (2). Liu and Ram used Wikipedia’s article assessment project as a starting place for determining various degrees of quality. The criteria for assessment include: 1) well-written, 2) comprehensive, 3) well-researched and verifiable, 4) neutral, 5) stable, 6) compliance with Wikipedia style guidelines, 7) appropriate images and copyright status, and 8) appropriate length and focus (4). To study the relationship between collaboration and quality, Liu and Ram selected articles that had been rated by these criteria as the basis for their study. Their methods involved creating categories of how contributors edited articles; from that data, Liu and Ram then identified collaboration patterns. They concluded that “article quality depends on different types of contributors, that is, the roles they play, and the way they collaborate” (16). Additionally, Liu and Ram believe that improving Wikipedia article quality is possible if software tools are developed to help contributors make the decision to include references, links, and support for their edits. These software tools should “nudge contributors to assume different roles and support self-justification and self-policing” as well as “motivate the contributors to revisit the article, review their inserted sentences, and respond to other contributors’ modifications” (20).
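
As a loose illustration of the kind of analysis Liu and Ram describe – tagging each revision of an article with the type of work it performs and then looking at the mix of roles – here is a small sketch; the action labels and the revision data are invented stand-ins, not Liu and Ram’s actual taxonomy or dataset:

```python
from collections import Counter

# Invented revision log for one article: (contributor, action) pairs.
revisions = [
    ("alice", "add_sentence"),
    ("bob",   "add_reference"),
    ("alice", "add_link"),
    ("carol", "revert_vandalism"),
    ("bob",   "copyedit"),
    ("alice", "add_reference"),
]

# Mix of edit types across the article as a whole.
action_mix = Counter(action for _, action in revisions)

# Per-contributor profile: which kinds of work each editor tends to do.
contributor_roles = {}
for contributor, action in revisions:
    contributor_roles.setdefault(contributor, Counter())[action] += 1

print(action_mix)
print(contributor_roles["alice"])
```

Comparing these role mixes across articles of different assessed quality is, roughly, how one would look for the collaboration patterns the authors link to better articles.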

The main point of this article hones in on the positive results that can emerge when collaboration is built into the creation and editing of Wikipedia articles. Liu and Ram’s research definitely supports the hypothesis that the better-written and better-referenced articles are constructed by multiple contributors who justify their added content and changes and who respond to other contributors. This same concept could easily transfer to the field of technical communication, where collaboration continues to be rare. Additionally, this article clearly articulates why Wikipedia continues to receive mixed reviews as a credible source of information. I think that Liu and Ram articulate this best when they state, “It is unreasonable to simply assume that Wikipedia is completely reliable or unreliable” (2). If practices such as the methods suggested by Liu and Ram are implemented, could Wikipedia change its current status of being viewed as neither completely reliable nor completely unreliable? If so, would it change the way people use the medium?