Friday, December 21, 2007

Regarding the Programming Succubus

In my usual stumbling around on teh intarwebs, I came across this post on "the Programming Succubus" with regard to experimental game design. It's an intriguing concept. Since I'm not a game designer myself, the principles Benmergui lists aren't directly applicable to me, but I certainly empathize with the concept of succubi in project development. Personally, I have to deal with the incubi of sequential ordering and design.

Those pretty much sum up my problems with the Noel Heikkinen Project; I found I had to finish one thing before I went on to the next, and I had to do it all in order. The design for all nodes had to be established before I could write, and the writing had to be completed linearly, even though each node stands individually--linearity is not required. The design demon, requiring perfection, meant that I spent literally hours on the swirl motif before I could even move on to photograph selection. Hours which, coincidentally, I didn't have. By the time I got to the writing phase, I was under such a tight time constraint and so stressed out that I literally couldn't write anymore. And it's a problem I could have avoided, too; I had most of the text written out mentally months ago. I just couldn't write it until I saw what its layout would be.

Is this really a problem? Maybe only if you're also a victim of the incubus of chronic procrastination and are trying to do relatively in-depth work in a short period of time. I envy people with the internal organization to keep things on schedule. Even deadlines barely work for me anymore.

Tuesday, December 18, 2007

Hero: It's a nice-boy notion that the real world's gonna destroy

A friend and I went to see Beowulf tonight (in 3-D, for what it's worth). I'm glad I saw it--but the gladness comes more from an academic standpoint than from any real enjoyment. You feel immersed in the movie, but more because it's an immersive spectacle than because it's a good, well-rounded story.

But then, I'm a purist.

Some of the changes actually weren't bad. The original Beowulf doesn't follow the Aristotelean plot arc quite to my personal satisfaction--that Grendel's mother is changed significantly makes for a better story, for example, though the ending is depressing as all heck. No--the thing that bothered me was how all of the Christology was treated. It seemed like every few minutes there was a new subtle poke at Christianity. The unfinished tapestry illustrating Christ's ascension. The abusive nature of the sole worshipper of the "new Roman god." The burning cross on Beowulf's funeral pyre. The movie's message is apparent: there's no such thing as a savior, and those who claim to be one probably caused the problems they eventually solved in the first place. If we're lucky, we might magically manage to avoid falling into the same trap as our predecessors, but probably not.

Should I have expected anything less from a screenplay that was co-authored by one of the co-authors of Good Omens? I suppose not. But I'd rather they had left the Christian stuff out entirely. The movie just leaves you with a feeling of utter despair; in fact, the demons have a distinct upper hand. Compare it to the end of the poem itself--Beowulf isn't strong enough to kill the dragon by himself, but with the help of his companion, he manages it. Yes, Beowulf dies; but good triumphs over evil, and there's a new hero in town.

Actually, I think that's at least part of the message of the original: heroes are the ones who can fulfill justice when another no longer can. Beowulf fulfills the social obligations Hrothgar can't at the beginning (form of: extracting the wergild, or the reparations for various people's deaths, out of Grendel), and eventually Wiglaf takes over for Beowulf. Point: even heroes need a redeemer, but the movie never gives them one. In fact, the only social obligation fulfilled is the wergild granted to Grendel's mother, form of: a new child to replace the dead one, thereby creating the monster of a consequence the next hero has to fight.

How utterly hopeless, right? Sounds like the sort of vicious cycle only a hero can break. Too bad there aren't any.

Monday, December 17, 2007

Another intriguing idea

A middle school teacher is apparently using a Twitter account to have his (and other!) students collaboratively write a story. I have to wonder what the purpose is, of course; is it simply collaboration for the sake of collaboration, or is it a lesson on the Aristotelean story arc? ("Sure, you can have a woman and baby run screaming--as long as you do something with it.") My friends and I used to play a storytelling game called "and then," which is similar in concept; I promise you, though, our stories were incredibly silly. That this teacher thinks he can corral something worthwhile out of this may be overly optimistic. It will be interesting to see how it all turns out.

It's fascinating, though. Dig deeper into mrmayo.org (the teacher's blog/projects page) and you see a lot of internet-based learning. They're doing digital children's books. They're using Skype to talk to people like Susan Linn, head of the Campaign for a Commercial-Free Childhood--a conversation I'm pretty sure led to the class's current reading: Animal Farm. He's got his students blogging as opposed to keeping reading journals. They've got a book on Lulu. Digital integration into the classroom? That's crazy talk. I think a lot of teachers at all levels (including the collegiate) could learn from him.

Point of Amusement

I love how this article from the Lansing State Journal about a website doesn't actually state the URL anywhere in the article itself--and the side link a) isn't hyperlinked, and b) doesn't copy/paste well. I had assumed it was a convention: if writing for a print medium (e.g., a newspaper), one ought always to cite the URL for all but the most common of sites right after the site's name, so the reader can cite the site on sight.

(...cite the site on sight, lest they incite the.... uhm... ok, gonna go now.)

Friday, December 14, 2007

An update? Craziness!

Two bits of news:

(1) I just registered wasabijane.com, so hopefully soon I'll start blogging over there. I'd been needing a portfolio site for a while. The 100 MB MSU offers just isn't enough if you do anything fancier than plain HTML, and anyway, my own domain should be a bit more permanent. After all, what kind of digital rhetor doesn't have her own domain?

(2) Projects kicked my butt this week. I don't even have the server space (yet) to post one of my projects (ironically, the one that's just a paper)--my server space won't be activated for a day or two. However, my other course has all students' projects online, and they're worth checking out. This includes mine, of course, but I'm not direct-linking it here yet--long story that I'll explain in a few days. Just realize the version that's up is actually a draft. A near-final draft, but a draft.

Monday, December 03, 2007

Barriers to entry

I had the opportunity today to participate in an online conversation hosted by one of my pastors. It was... educational. There were technology problems, for one thing; on my end, figuring out the software was more difficult than it should have been, and at one point the pastor's line went dead (not the best of circumstances, considering that he was the one hosting the discussion).

On a personal level, I'm not a phone person. At all. In general, I'm a much more eloquent writer than speaker. Actually, Moses' plea that he is slow of speech has always resonated with me; as a child, I actually had to go to speech therapy because (a) I couldn't say the letter "r," and (b) I tended to pause a lot and fill in gaps with multiple "umms" and "uhs." This has, of course, impacted my life in a number of ways, but that's a different blog post. For now, suffice it to say that any technology that requires speech is less than pleasant. I am in no way cognitively impaired, and chances are nobody realizes I had a speech impediment, but using a technology dependent on orality is still a barrier to entry for me.

Chances are, I'm in the minority on that one. Still, it's an interesting thought. As my pastor tries to build a collaborative site, he has to consider what people use. One of my favorite stories from Tapscott's Wikinomics is the Geek Squad CEO's attempt to build a wiki. Despite the tech-geek nature of Geek Squad members, the wiki never really took off. It turned out that members were already collaborating elsewhere--in an MMO, in fact.

So, of course, the question for my pastor is, what technologies are users of his new site already using to collaborate? It may be, in his case, that I'm in the minority in hating phones; the phone might be the best option for him. Still, it's interesting to ponder from an accessibility standpoint. Is universality an impossibility?

Saturday, November 10, 2007

A Case of Questionable Rhetoric

I find this Onion article--"Female Serial Killer has to Work Twice as Hard to Achieve Notoriety"--to be in less than stellar taste. It's not the article itself; this is the Onion, after all--it's the fact that they chose to locate the killer in Michigan. To be fair, they put their fictionalized killer in Otsego, which is just north of Kalamazoo on the west side, but considering Lansing is actually in the early stages of trying a suspected serial killer, the timing and location are... less than wonderful. I suppose it could be worse; they could have set it in mid-Michigan, or they could have put a satirized shooting spree in a northern Wisconsin town, but still, a little bit of forethought would have been nice.

Monday, November 05, 2007

I'm not sure what to think of this

This email came through the university listserv today:

Google, Microsoft, and other companies are offering, at no charge, an
increasingly diverse set of online, web-based software tools, many of which
provide standard "office" functions such as document and slide deck
production and management, spreadsheeting and communications, including
e-mail. These tools run on company (vendor) servers, and user files and
other content are stored on company systems as well. The no-cost option is
attractive to many University units, faculty and staff. However, due to the
terms of use and business models being applied to these tools, they MAY NOT
be used in the conduct of University work.


To illustrate one concern, the terms to which you agree when you use Google
Apps (http://www.google.com/accounts/TOS?hl=en) include the following; which
compromises the University's intellectual property rights and the security of University records:


"You [user] retain copyright and any other rights you already hold in
Content which you submit, post or display on or through, the Services. BY
SUBMITTING, POSTING OR DISPLAYING THE CONTENT YOU GIVE GOOGLE A PERPETUAL, IRREVOCABLE, WORLDWIDE, ROYALTY-FREE, AND NON-EXCLUSIVE LICENSE TO REPRODUCE, ADAPT, MODIFY, TRANSLATE, PUBLISH, PUBLICLY PERFORM, PUBLICLY DISPLAY AND DISTRIBUTE ANY CONTENT WHICH YOU SUBMIT, POST OR DISPLAY ON OR THROUGH, THE SERVICES. You agree that this license INCLUDES A RIGHT FOR GOOGLE TO MAKE SUCH CONTENT AVAILABLE TO OTHER COMPANIES, ORGANIZATIONS OR INDIVIDUALS WITH WHOM GOOGLE HAS RELATIONSHIPS for the provision of syndicated services, and to use such Content in connection with the provision of those services." (emphasis added)


Furthermore, neither Google nor Microsoft makes any promise to back up content, secure confidential or proprietary content, or provide free service for any particular period of time. If they change their minds about the business utility of this model, it may simply go away one day.


Contract terms like these do not provide appropriate protection for University business records or documents. If software tools like these are
used in instruction, they may compromise student intellectual property
protections. They may also adversely affect commercialization or
publication of intellectual property created by faculty. Because
individuals frequently do not know that they are creating University
business records, including student records, in the course of doing their
daily work at MSU, ANY use of these sorts of online tools must be avoided.
Individuals or units contemplating use of any software tool or online
service should carefully read and consider the terms of use, and seek
appropriate legal or other review from University administrative offices
when terms of use are incompatible with University policy or exceed the
relevant administrator's authority.

Now, I'm a big fan of Google and have used Google Docs for several things, both school-related and personal. I'm also distinctly not a fan of the university's webmail system (128 MB limit? Seriously?). I do, however, understand the university's concerns. Are they being neo-Luddites? I'm not sure. I'll have to do some more thinking about this.

Wednesday, October 31, 2007

Westboro Baptist Church Loses Lawsuit

Oh, thank God. Westboro's practices in general would make an interesting rhetorical case study (perhaps a future blog entry on my part), but for now, suffice it to say that even if their logic is reasonable according to Scripture, their methods are not. It's as if they've stuck with the Old Testament model of spirituality and completely ignored the whole need for love thing.

I also have an interesting history with Calvinism (long story), so I was amused to see that Westboro claims five-point Calvinism as its theology. It would be interesting to see if Westboro's actions are, in fact, reflective of Calvinism taken to its logical extreme.

Sunday, October 07, 2007

Moby rocks, and not just for his music.

Someone recently sent Moby a YouTube link to one of his own videos. It's unclear why. Maybe the person just wanted to let Moby know that it was up and possibly violating copyright or something. But Moby's response was to post the YouTube link himself. A musician? Who is actually harnessing the power of the internets? Awesome!

see original post

Also, see Moby on Net Neutrality

Wednesday, October 03, 2007

As part of a deal to fix the recent Michigan budget crisis, the Senate passed a bill cutting future teachers' benefits. Meanwhile, as the Lansing State Journal points out, they left their own benefits intact.

Actually, the article leaves me feeling vaguely optimistic. I mean, it sucks for all my teacher friends--and I know quite a few people in education. Three of them in particular may get burned by this--one's doing long-term subbing in Pennsylvania, one's in North Carolina, and one's on a missions trip in Albania--if they ever want to come back to Michigan, which is where their families are, they'll be a little bit screwed. Naturally, it irritates me that the legislature would hurt the public schools before it hurts itself. On the other hand, I'm outraged because I'm now aware of the issue--and I wouldn't be aware of the problem if senators hadn't pointed it out to the newspaper. Maybe there's hope for change.

Monday, September 24, 2007

Rethinking the Complementarian Wife

I had lunch with my mother and sister-in-law today. Both my dad and my brother are in the process of looking for new jobs, and both women expressed, with the certainty of faith, their belief that the men would soon find jobs, because "the husband is supposed to be the breadmaker of the family."

I was somewhat surprised to discover that I disagree with this sentiment.

I should back up. If you can't tell, I come from a pretty conservative background, and actually have inherited this position. I actually self-identify as Baptist. It's not an entirely true picture of my theology, but it's the closest picture that most people understand. And I'm a complementarian, which mostly means that, if I ever marry, I expect that certain details of husbandly and wifely roles will hold true.

Case in point: my sister-in-law is two months pregnant with her first child. As I believe that the wife is better suited to stay with the child, and that the child's physical/psychological/spiritual wellbeing is usually better off when the mother stays at home through his or her early development, I certainly expect my brother to get a more stable job than his current one. In effect, my brother will, by the grace of God, take on the role of the breadmaker of the family, a role which he and his wife currently share.

For Mom and Dad, however, this is a different matter. They're empty-nesters. Mom doesn't have to stay home to take care of her children. Dad is hunting for a new job so that he can take early retirement from his current one (if he doesn't find one, he just won't retire). In their circumstances, there's no reason that Mom shouldn't be the one earning the primary income (other than the fact that she may then have to give up certain leisure pursuits).

It's even scriptural. Consider the Proverbs 31 woman. She was the one who kept her family fed and clothed. She took charge of the charity contributions. She even kept up with the housework (though the litany of servants certainly helped with that). This freed her husband up to do Important Political Things of a nature that isn't entirely clear to me, other than that said work likely involved sandals. Point: the woman was the one who kept everyone fed and everything running. And if there could be such things as Active Business Women who were Not Also Harlots in those days, certainly it means that a middle-aged woman can get a job to pay the bills while her husband tries to figure out what God has planned. Or a young woman with young children can run a freelance business out of her house to at least supplement her husband's income. And you know what else? The Scriptures talk about caring for the widow and the orphan, and that's certainly important, but in modern America, there are other social justice issues of more importance than taking care of women fully capable of getting jobs.

Friday, September 14, 2007

Waiting for the Next Bubble to Burst

I'll never understand Web 2.0 companies. To be honest, they've always seemed like the hipster children of the dot-com bubble--as if the brains behind Flooz and Kibu got together over drinks and decided to have another go. Take, say, Twitter. It's hard to fit even two full sentences into a post, and yet somehow it's reasonably successful--or at least, "all the cool Web 2.0 kids are doing it." What's their corporate model? Whence comes their funding?

I bring this up because I just came across spinsnap, which touts itself as "channel-surfing the internet." First, this concept seems familiar--I could be wrong, but I feel as if I came across something very similar a few years back. It's an interesting concept and is clearly ad-supported, but I cannot for the life of me figure out how this website will stay afloat. The sites they include on their randomized list don't seem to be sponsored--that is, anyone can suggest any site, though I'm reasonably certain they're trying to keep porn out of it. (Or not--adult friend finder just came up.) Really, though, what is this? "Hey, I have an awesome idea! Let's get ad sponsors to help keep very bored people entertained! We'll model it after Google's 'I'm feeling lucky!' function, only without even giving it the guidance of a search word!" It just doesn't seem sustainable.

To spinsnap's credit, they have an interesting implementation--you can search by websites, by music, or by people. Music seems particularly interesting to me. Really, this tirade is against Web 2.0 corporations in general. I honestly wouldn't be surprised if there's another bubble burst in the next year or two, as companies realize their services aren't really all that useful and as they discover that their costs outweigh their profits, and therefore start quietly pulling their services away. I have little doubt that this movement will be replaced by one that will ultimately prove equally unstable, of course, but one can hope.

Tuesday, September 11, 2007

The Continuing Decline of Memory

By now, probably everyone's seen the Miss Teen South Carolina video (if not, it's easily found on YouTube). A few weeks ago, some people I was hanging out with were watching and critiquing both her clip and some of the responses to it.

Now let me start by saying that I actually feel sorry for the girl. She made it quite far in a beauty pageant--an accomplishment worth boasting about--and she had a proverbial blonde moment. Pretty much everyone has those. She just had the misfortune of having hers on national television in the YouTube generation. (I hope she has a sense of humor about it all.) This isn't even about Web 2.0, the viral spread of the clip, or the netizens having their say.

No, my friends, this is about rhetoric.

As we were discussing the video, one of the guys pointed out that he couldn't blame Miss Teen SC for being taken aback, though certainly he would have had a marginally better response: "You've got to be kidding me! Where did that statistic come from?" After all, one can hardly expect toddlers and Alzheimer's patients to be able to identify anything on a map.

Later, it struck me that there was more to it than that. We're living in the Information Age, right? Quite simply, we don't need to be able to identify something on the world map because we have Google to do it for us.

See, this is where it gets back to the rhetorical canons. The classical rhetoricians listed Memory as one of them--i.e., the rhetor had better know what he or she is talking about, and had better know it well. Think Homer, who most likely composed his epics orally, or the African griots, the oral historians. They trained their "memory muscles" so successfully that their oral records are arguably as accurate as most written ones in Western culture. Even Plato complained that writing would degrade the memory. The Internet's just the next iteration of memory "destroying" tools.

But really, is this a problem? According to Pew Internet and American Life, in February 2007 47% of American adults had broadband at home. For those who don't have it, Wi-Fi's available in coffee shops, libraries, schools, and even bars. Ultimately, Internet access will likely be as ubiquitous as the television set, the CD player, the telephone, ad infinitum. Sure, there will always be have-nots, but probably most Americans will either have it, have relatively easy access to it, or be under a self-imposed ban.

That's not even touching on mobile devices, like cell phones. Although such things are perhaps less easy to use for internet access than, say, a laptop, they're still useful when, say, some random person holds a gun up to your head and threatens your life if you can't identify the U.S. on the map. Google will still pull through.

So this raises the question: is memory still even useful as a rhetorical canon? I can't truly answer that question, but I would vaguely argue that it isn't, or at least that its usefulness is reduced. It may be time to define the rhetorical canons of the new technology. I'm sure new definitions are out there... if I remember to look.

Tuesday, August 14, 2007

Web 2.0 vs. University Outreach in the Product Development Cycle

Note: If this post seems out of context, it's because it is. It was written for a screen capture for a section intro for this project:

http://msu.edu/~eldredl/web2/intro.swf



Much of UOE's work--especially that of their Communication and Information Technologies unit--is on product development and dissemination. Products, of course, take many forms--Web sites like that for MSU's Year of Arts and Culture; events like the annual World Usability Day; publications like the Engaged Scholar magazine. The question under consideration here is how UOE as a whole and CIT in particular can learn from the Web 2.0 product development and dissemination cycle.

Tuesday, July 24, 2007

Jing

I've been playing around with Jing lately. It's a pretty neat tool--it doesn't record videos with their sound (so no saving YouTube clips, for example), but you can make short screen-capture videos as you write. The really interesting thing is that the videos are ultimately saved in .swf format--that is, Flash.

I love Flash. In terms of, say, web site design, it may not be the most usable thing in the world, but I actually understand how Flash works--its thought process, if you will.

And the neat thing about it? I may be able to actually go in there and modify it to my needs! If it's primarily a Flash-based product, that's an API I actually understand! Mashup land, here I come!

Sunday, June 17, 2007

Journal Reviews: Towards a Better Understanding of Why All Journal Articles Must Have a Colon and Include the Word "Toward" in the Title

In this blog post, I will discuss some pet peeves about writing, including the conventions used in the very title. Why are such things the rule, anyway? Somewhere, I'm sure, there's been a rhetorical analysis of why these conventions have been adopted. Wherever it is, I'd like to see it.

Actually, all journal writing bothers me. In all of my training--until now--I was taught that, for example, using a phrase like "In this paper, we will discuss X" was just plain poor writing. Show, don't tell. The nearest we came to stating a purpose outright was the thesis statement, usually some sort of arguable point which we would then verify through a literary analysis (e.g., "Quentin is an archetype of all female characters in Faulkner's works"--no, don't ask me to back that statement up).

It's a different genre, I suppose. Or--is it? Is a scholarly article really all that different from a literary analysis? The purpose of both is remarkably similar. Both have their theses--whether on heresthetics and TC, or Faulkner's misogyny--they just use different sorts of evidence. Literary analysis relies heavily on the primary text, as opposed to secondary texts (i.e., other people's research) or empirical research. When, then, did the conventions break? Is this just another example of the bitter battle between English and Professional Communications departments?

Before I've finished this program, I'd like to write one TC paper in the style of a lit analysis, just to see if it can be done.

Friday, June 15, 2007

Fridays this semester are work-at-home days, and I think it's really good for me. I'm slowly learning what it means to be what Ellen Kossek would call an integrator--perhaps I'm a product of a generation trained to be ADD, but I seem to be happiest when I focus on tasks for short amounts of time and constantly switch between them. It explains why I'm a compulsive email/blog checker at work; at home, it means that I'll read a few chapters of whatever text I've been assigned, then go play a few rounds of Katamari Damacy, or take an "internal processing" nap, or even simply pull some meat out of the freezer to thaw for the night's dinner.

You can get away with this behavior at home. It's not so possible at work.

This is why I'm posting this at around 3:00 AM: I've been bouncing around between a freelance editing job and the goofing-off that similarly characterizes my online existence. Tomorrow, I can sleep in. Right now I can be productive and not feel guilty about not waking up early in the morning. I do wonder if I'm simply a product of my generation; my crowd is probably the first that grew up with the ability to use a computer for both work and play. Maybe it's a bad habit to interrupt an editing job halfway through to go read web comic archives for a bit. Sometimes I wonder if I'm not diligent enough, though heaven knows I can be, given enough pressure (form of: Deadlines). I wonder, though: what form will the CEOs of my generation take? One can only hope for the MacBook- and beanbag-chair-laden offices of TEH FUTUAR.

...but that's not where corporations are now, is it?

Wednesday, June 13, 2007

Here's one for the (nonexistent) Digital Ethics and Electronic Publications center: The Church of England is suing Sony for using the Manchester Cathedral as a backdrop for a shootout between humans and aliens. Isn't that just splendid?

As a theologian, I can somewhat understand their concerns. After all, the concept of the existence of aliens doesn't really mesh with most serious Christians' views of theology, though personally, I don't think I'd have a serious crisis of faith if life on other planets were discovered. If aliens exist, they're also part of God's creation.

In this case, the theology is far outweighed by every other part of me. For one thing, the Church will undoubtedly lose respect with the gaming generation. People who take offense at this sort of thing, especially so publicly, are loathsome to me. If Christians got up in arms every time anything offended them, then we'd never even be able to speak to nonbelievers. The first time somebody said "Geez," for example, we'd be washing that poor person's mouth out with soap. I mean, look at what happened to Chuck Swindoll. I know God will work good out of this, but right now, I can only see the situation as harmful to the gospel.

Then there's the question of whether Sony actually did anything unethical in the first place. I'd argue not, and I certainly hope their lawyers both (a) agree with me and (b) win. The cathedral is a historic building. It would be like the U.S. Government suing Hollywood for blowing up the White House in the movie Independence Day, or the Department of Homeland Security retroactively arresting anyone who ever intentionally crashed a plane into the World Trade Center in a flight simulation game for an act of terrorism. The game is not telling people to go on a killing spree in yon local church. It is not even stating--at least, directly--that God does not exist. It is a game, a work of fiction. Although I would not deny that there is a subtle rhetoric to any story, Sony was well within its rights to use the building. It's not like it's a trademark violation.

Heresthetics; authorship

Just read an interesting article on technical communication and heresthetics ("structuring the world so that you can win"). Essentially, TCers need to be able to maneuver through their employer's politics; it's very much not user-centered, ultimately. Can heresthetics be aligned with user-centered design? Granted, the concept can be applied to user-advocacy: if one wishes to speak for her user, one must first know how to best communicate the user's need to her employer.

A model of heresthetics has, according to Riker (qtd. by the authors), four constituents:

  1. People--who are the people directly involved (upward, lateral, downward)? What are their values/tastes? What action can/will they take?
  2. Alternatives--Person A wants one thing, and Person B wants another. Does A push on B or let B go? If A pushes, what method does A use to get beyond B? (Mentally, I see this as a flowchart--see the sketch after this list.)
  3. Dimensions--things like time and money may also be factors. What are these other factors?
  4. Methods of choosing--does the problem lie in a democratic or oligarchic setting?
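
Since I picture it as a flowchart anyway, here's a rough fill-in-the-blanks sketch of the model in Python. The structure, the field names, and the tech-writer scenario are entirely my own illustration--nothing beyond the four labels comes from Riker or from Moore and Kreth--but it shows how the constituents hang together:

    # A minimal sketch of Riker's four constituents as a checklist/data structure.
    # Everything here (names, fields, the example scenario) is my own invention
    # for illustration, not something taken from the article.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Person:
        name: str
        relation: str              # "upward", "lateral", or "downward"
        values: List[str]          # what they care about
        likely_actions: List[str]  # what they can or will do

    @dataclass
    class HerestheticAnalysis:
        people: List[Person] = field(default_factory=list)     # 1. who is involved
        alternatives: List[str] = field(default_factory=list)  # 2. push, or let it go?
        dimensions: List[str] = field(default_factory=list)    # 3. time, money, etc.
        choosing_method: str = "unknown"                       # 4. democratic or oligarchic?

    # Hypothetical example: a tech writer deciding whether to push for a usability test.
    analysis = HerestheticAnalysis(
        people=[Person("project manager", "upward", ["the schedule"], ["veto extra testing"])],
        alternatives=["push for a usability test", "let it go and ship the draft"],
        dimensions=["two weeks to release", "no participant budget"],
        choosing_method="oligarchic",
    )
    print(analysis)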

Ultimately, the authors push for a non-utopian view of TC. TCers tend to think that, with enough knowledge of rhetoric, they can, in fact, change the world, or at least the documents they're designing. True? Perhaps--but as the authors point out, that utopian view assumes that all TCers are, in fact, inherently good and that all people are changeable. While I wish this were true, I'm enough of a Calvinist to disbelieve it.

Source: Moore, P., & Kreth, M. (2005). From wordsmith to communication strategist: Heresthetic and political maneuvering in technical communication. Technical Communication 52(3): 302-323.

As an aside, the authors also critiqued an article of Hart-Davidson's. Although I haven't read said article, and in fact have barely even interacted with Hart-Davidson (or Bill, as he's known here in the R&W department at Michigan State University), my emotional reaction was interesting, to say the least. In The Darker Side of the Renaissance (2003), Mignolo talks about the European conceptualization of the Book as the ultimate authority in contrast to the indigenous rhetorical traditions. I've inherited this more than I know; of course the Bible is still my personal ultimate authority, but I've found myself having a hard time disagreeing with any print materials unless I know for a fact that they are blatantly wrong. I've granted Bill authorship, for example, though I've never read his work; to see a critique thereof throws me into a bit of mental turmoil. Bill's an author, so he cannot be wrong.

It doesn't help that my background is literature, in which no such binary exists in terms of the text itself. The biggest debates even remotely similar to such issues have more to do with a question of authorship and manuscript authority (Did Shakespeare write Romeo and Juliet, or did Bacon? Is manuscript X, since it's older than Y, therefore more "correct," even though more copies of Y exist?). Then there are the cases of deliberate authorial deceit (see Danielewski's House of Leaves or Goldman's The Princess Bride, for example), but I've dealt with those elsewhere.

Slack, Miller, and Doak don't deal with this issue. Perhaps Foucault does; I've yet to read his work, so I can't say. Perhaps they consider this unnecessary. After all, TC still hasn't shaken off positivism completely--it's the rhetor's responsibility to be as accurate as possible with the information in order to reveal truth, or as close to it as possible. But, again, this brings us back to utopianism. Do we assume that all clearly written TC products are as true and accurate as possible, given the context? (Ah, relativism creeps in.) Do we assume that the TCer/rhetor didn't use a document as a means to his own end? And do we assume that the audience cares enough to discern what in an article they agree with and what they disagree with?

Interesting. My aside was longer than my actual commentary. Welcome to my stream-of-consciousness.