Teach the Tech or Teach the Tools?
An annotated bibliography

With the annotated citations below I am attempting to capture at least some of the published conversation from the past decade or so around an issue which has proven rather durable in both "new media" composition pedagogy and professional writing. It can be seen as a classic example of computing's religious wars (though rather more muted in academia than among the populace of general practitioners); or as the old tussle between scientific and romantic perspectives carried over into web design; or simply as a specific case of the ubiquitous question, for tool-using creatures, of how abstract we may properly make our tools — when we can allow them to be black boxes, and when their surfaces must be transparent and their inner workings revealed to their users.

That debate is the question of whether content creators for the web should have a thorough understanding of the technologies they use — HTML, CSS, and so forth — or whether it's acceptable for them to deal with those technologies (the "tech") indirectly and even in blissful ignorance of the details of their specifications and implementations, through higher-level software (the "tools") which presents a more abstracted view, often an ostensibly WYSIWYG one, of the material. I've framed this here primarily as a pedagogical question (hence "teach the tech", etc.), to give it a practical gloss rather than an ethical one — that is, I want to avoid letting the discussion degenerate into contention over the sins, real or imagined, of established practitioners. Casting it as a pedagogical choice lets us see it in terms of future options rather than present errors. But in reality this makes little difference to the main issue, because practitioners learn their craft somehow, and presumably we want pedagogy to reflect what we believe are best practices.

Of course, as with all religious wars, there is not likely to be a single satisfactory answer. There are any number of possible pat responses ("horses for courses", for example, or "the proof of the pudding is in the eating" — the right approach depends on contingencies of the situation, or the approach can only be justified by the outcome), but those lead to little useful insight. Similarly, there will be those who take a hard stance somewhere in the field and defend it against all comers; while that often does result in some interesting ideas, it moves us no closer to better action. Thus, having reviewed these sources, I now believe it is important to maintain a reflective and critical stance toward any position on a question like this one; to both hold some (hopefully informed and sophisticated) beliefs about what makes for good web pedagogy and practice, and to continually review those beliefs in each new situation. At the same time, we must remember that continual ideological vigilance is impossible and undesirable, and that all of us must work at multiple levels of abstraction. No one can consider simultaneously and continuously all of the interwoven technologies that go into building, serving, retrieving, and rendering web content — to say nothing of imagining and interpreting it. And by the same token, no web content is ever entirely original; we do not mint our own bits.

Summary of the Conversation

Early in this project I posted a query to the TechRhet listserv, asking about sources on this question that subscribers particularly liked or felt should not be missed. At least one person noted that it would likely be hard to find academics arguing for the "teach the tools" position. And indeed I found far more arguments for the tech side. There are likely several reasons for this. One is that the tools are, at first glance, the easier option; certainly they let students produce their first work faster, they provide more integrated assistance, they offer instructors consistency, and they're more visually appealing to most users. Thus they largely make their own argument and there's little reason for an instructor to advocate on their behalf. More than that, though, academics are professional knowers of things, and what we might call the "code layer" of web documents is an ideal object of academic knowledge. It sits beneath the visible surface and determines much of that surface; it follows its own obscure rules; it takes the form of an argot for initiates. It's the hermeneutic filling in the web-design cake.

Nonetheless, some academics do admit — often somewhat guiltily — that exigencies have forced them to rely on tools in the classroom. Sometimes that is because the instructors themselves lack the leisure to become experts in the technology. In other cases, it's because employment-oriented students insist on learning whatever is popular in the workplace, or because instructors believe that training students for the workplace is the correct ethical choice. (I hardly need note that this is a question of long tradition itself.) Or the course schedule may simply not afford the time required to teach this material to students, many of whom are likely to find it rather alien and difficult.

It's worth noting, I think, that tool-based production is certainly in demand in the industry, as is training for it. The Association for Computing Machinery recently noted that its introductory course on Dreamweaver — likely the best-known WYSIWYG web authoring tool — is the ACM's most popular offering in its "Graphic Design" series ("ACM Member Technical Interest Service" email newsletter, February 2011). Since the ACM is a technical organization that caters to coders and scientists, this is not insignificant; it's likely that many ACM members, perhaps a majority, would disdain Dreamweaver and other WYSIWYG tools on principle.

Bibliography

Albers, Michael J. “The Technical Editor and Document Databases: What the Future May Hold.” Technical Communication Quarterly 9.2 (2000): 191.

Albers is concerned with changes in the role of the technical editor that follow from a shift from monolithic documents to dynamic documents assembled from a content fragment database. This would seem tangential to the tools/tech debate, and indeed the debate does not appear in his argument until a few paragraphs into the final section, on pedagogical recommendations.

There, Albers inserts the following comment: "The rapid changes inherent in the XML and intranet tool market prohibit the attempt to teach tools, which has always been a bad idea anyway" (202). This is more or less an aside, not directly connected to anything in its immediate context. Nor does Albers return to the idea.

What's interesting here is how this interjection simply appears as an unsupported claim in the midst of a series of pedagogical recommendations, the rest of which are quite focused and follow directly from Albers' preceding discussion. It's symptomatic of the strong views some authors hold on the question of whether to teach tooling or underlying technology.

Applen, J D. “Technical Communication, Knowledge Management, and XML.” Technical Communication 49.3 (2002): 301.

Much of Applen's article is essentially theoretical, arguing for a perspective on knowledge management built from the sociology of science and sociological theories of knowledge organization. Based on this theory, however, Applen champions "XML code" (including cognate technologies such as XML DTDs, XLink, and XPointer) as ideal for the knowledge-management tasks that are bound up in technical communication. While Applen does not specifically address the tools-versus-tech issue, this proposal only works if technical communicators deal intimately with XML technology. Claims like "the very nature of XML allows technical communicators to think critically about knowledge" (311) imply that a key epistemological benefit is attached to knowing and working with the core technologies, rather than dealing exclusively with tools that abstract them.
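Though Applen stays at the level of theory, the point is easy to see in a fragment of my own devising (not Applen's): even trivial XML forces its author to make knowledge-organization decisions that a word processor never surfaces.

    <!-- Illustrative only; the element names here are my invention, not
         Applen's. Choosing them is itself a knowledge claim: it decides
         that "symptom" and "resolution" are the categories worth making
         explicit and machine-readable, before any question of display
         arises. -->
    <support-case product="WidgetPro">
      <symptom>Application hangs on startup.</symptom>
      <resolution approved="yes">Delete the stale lock file.</resolution>
    </support-case>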

Arola, Kristin L. “Mindful Design: An Anishinaabe Approach to Multimedia Production.” 2 May 2011 n. pag.

In contrast to her 2010 "Design of Web 2.0" (q.v.), Arola made something of an argument against the strict-tech position in this presentation. It was a nuanced argument and presented as a suggestion rather than a thesis (and was only a small portion of a longer composite discussion); and it was not necessarily an argument for the use of tools to replace understanding of the technology. Nonetheless, it stands out both as a rare example of an expert in the area making some allowance for the use of black-box tooling, and as a complication and elaboration of her pro-tech position in the earlier work.

WYSIWYG vs. Code

In a relatively short interlude between two longer segments, Arola presented a copy of an IM conversation she'd had recently with her husband, as he helped her troubleshoot some issues with a page she was creating under some time pressure. She titled the interlude "wysiwyg vs. code", which is more or less an alternative form of the dichotomy this bibliography attempts to capture. And that's interesting, because from the content of the conversation that tension is not entirely apparent.

In the course of the conversation, Arola notes that some aspects of the page are not working correctly (link mouseovers, for example) despite the fact that the page "validates", that is, it has been programmatically found to comply with (some version of) the HTML and other standards. A validator, while a "tool" in the generic sense, is typically used by white-box content creators: coders, those who believe in the importance of knowing the technology. Her husband corrects a usage (self-closing A tags) which validates but does not work (for reasons sketched below), but he also makes changes (removing empty paragraphs used for vertical spacing) which affect neither the correctness nor the appearance of the page; the empty paragraphs are simply a practice that web purists disapprove of.
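Since the presentation doesn't reproduce Arola's markup, here is a hypothetical reconstruction of both issues, assuming an XHTML page served as ordinary HTML (the usual arrangement at the time):

    <!-- Validates as XHTML, but an HTML parser reads <a ... /> as an
         opening tag that is never closed, so following content gets
         swallowed into the link and hover behavior breaks: -->
    <a id="fig1" />

    <!-- The correction: an explicit closing tag. -->
    <a id="fig1"></a>

    <!-- The kluge: empty paragraphs used purely for vertical spacing.
         This validates and renders, but purists prefer a CSS margin: -->
    <p></p>
    <p></p>
    <p style="margin-top: 3em;">Next section ...</p>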

In other words, Arola's narrative in this segment is that despite her technical competence, she has deviated from the path of pure tech and employed a kluge, for which she is gently chided by her husband, here the voice of the pro-tech hard line.

Git 'er done

It is at this point, in Arola's subsequent reflection on her text, that the WYSIWYG tools reference becomes clear. Arola suggests that the coder ethos, know-the-tech ideological purity, can be an obstacle to simply creating and publishing content. And while that obstacle may be productive for many people, it can also be oppressive for those with more-limited access and resources. For such creators, or for creators who need to publish under otherwise less-than-ideal conditions, the reigning imperative may be to "git 'er done", as Arola puts it, and worry about correctness later.

Equally importantly, the endorsement of "right" and "wrong" ways to create viable content threatens to turn a technological regime into a moral one, and foreclose on the possibilities of other ways of conceiving of and using that technology. If there can indeed be an "Anishinaabe approach to design" on the web, we don't want to impede it by making a shibboleth of spacing paragraphs.

–––. “The Design of Web 2.0: The Rise of the Template, The Fall of Design.” Computers and Composition 27.1 (2010): 4-14. 8 May 2011. <http://www.sciencedirect.com/science/article/B6W49-4Y8G1PH-2/2/3510a4a077f5d3aad056d3630d6420b0>.

Arola's position here is that excessive reliance on templates for creating web content, which she sees as particularly common with "Web 2.0" sites that build pages by inserting content fragments into templates and dynamically generated structures, reduces opportunities for design. The form/content split has always been a theme of markup languages, at first because they separated content creation from the rendering and presentation stages of the toolchain, and later as an explicit design principle; but, Arola argues, the increasing incorporation of layout templates and content-marshalling software in newer web applications has tended "to render form standard and invisible". To put it another way, Web 2.0 encourages us to let our tools dictate the conventions of our genres. Arola characterizes this in various ways, such as a shift from "homepage" to "post" as the quintessential personal web contribution.

Design, not Engineering

This is an interesting tech-over-tools argument because it does not rely primarily on any of the usual claims or warrants. Arola does not found her position on technical correctness, browser compatibility, consistency for the sake of machine processing or alternative use (e.g., accessibility), or even engineering elegance and ideological purity (often present, if not so often acknowledged, among the warrants for the pro-tech side).

Instead, what is at stake here for Arola is design affordance. This is an aesthetic argument, and a rhetorical one, and a pedagogical one; and beneath all those, it is a question of creative freedom and individual choice, attributes that the cheerleaders of "new media" universally espouse.

Breuch, Lee-Ann Kastman. “Thinking Critically About Technological Literacy: Developing a Framework To Guide Computer Pedagogy in Technical Communication.” Technical Communication Quarterly 11.3 (2002): 267-288.

Breuch frames her discussion of "technological literacy" by defining it in terms of the tools-versus-technology debate. In her first paragraph, she describes a conversation at an academic/industry colloquium: "Interestingly enough, after some discussion industry partners unanimously agreed that there were no specific 'tools' that students should learn ... [but] they collectively voiced the expectation that students understand technologies and have the aptitude to learn them quickly" (267-268). It is this latter qualification that Breuch glosses as "technological literacy".

Insufficient but Necessary

In elaborating this point, Breuch cautions against a pedagogical overemphasis on tools, which she sees as arising from an early understanding of technological literacy as performance, or "how to use a computer". Besides the practical problems associated with a tools-oriented pedagogy — tool skills are narrow, and it's infeasible to keep updating labs with the latest tools — she agrees with a number of other scholars that such an approach tends to present technology in a neutral and uncritical fashion: "many scholars argue that a tools-based stance is dangerous because it suggests that technology is not problematic and does not require the involvement of instructors, students, or communicators employing the technology" (271).

On the other hand, Breuch sees some tool instruction as essential. She reports that industry partners want students to have some familiarity with tools so they know what kinds of facilities are available, and concludes that students should be exposed to at least one software package of each of several types, including "Web authoring" (271). In this sense, while Breuch argues against a strongly tool-oriented pedagogy, she does explicitly advocate for limited teaching of tools — which is more than she has to say about teaching underlying web technologies such as HTML. Teaching the tools may be insufficient, in Breuch's view, but it is also apparently necessary.

Beyond Tools

But this is only a small part of Breuch's vision for technology-literacy pedagogy. Far more important in her view is teaching the context of technology: political, social, cultural, economic, and other valences, seen through a critical gaze. As she puts it after reviewing the literature, "scholarship suggests we should consider context a great deal" (273). And among the questions she suggests for such a critical review of technological context is that of which tools are used and why, which in effect could bring the tools/tech debate into the classroom. Similarly, she suggests that questions of how technology affects our reading and writing practices and how we communicate should be part of the curriculum.

Breuch's final and broadest recommendation is that all of the perspectives she describes be brought together in concert in formulating pedagogy for computer-oriented technical communication. "I break with scholars who promote one issue of technological literacy over others", she writes (276); while this may sound so reasonable as to be hardly worth mentioning, a moment's reflection shows that it is indeed a break with many of the other participants in this debate. And by declaring "pedagogy must drive technology" (278, emphasis in original), rather than the other way around, she to some extent implicitly critiques the very basis of debates like tools versus technology.

Ching, Kory. “WYSIWYG vs. XHTML/CSS.” Teaching Writing in a Digital Age 23 Feb. 2011. 27 May 2011. <http://twinada.wordpress.com/2011/02/23/wysiwyg-vs-xhtmlcss/>.

This blog post discusses the TechRhet listserv conversation that I started while working on this bibliography. The sources Ching mentions are ones I have included here, but his glosses of them, and particularly the ensuing discussion, are interesting. In the latter, Cheryl Haynes wonders about course requirements, and Cynthia Carter Ching notes that Learning Sciences "went through this exact same discussion ... two decades ago" and describes some aspects of that debate.

Cook, Kelli Cargile. “Layered Literacies: A Theoretical Frame for Technical Communication Pedagogy.” Technical Communication Quarterly 11.1 (2002): 5-29.

Though this widely cited essay appeared before the tools/tech debate was particularly prominent, and does not directly address it, it was cited as formative by a couple of the respondents to my initial query on the TechRhet list. The "six layered literacies" around which Cook formulates her pedagogy form a schema that emphasizes teaching technical communicators multiple skills from multiple perspectives; Cook's position is that technical communication requires a wide range of skills, and no one approach is sufficient. Among the six is "technological literacy", about which Cook notes that it "strives to advance students beyond knowledge of software applications" (13). Cook expands on this point by describing technical communicators as mediators of technology, with roles in documentation, usability, and so forth. It's possible, though, to extrapolate from her position to conclude that technical communicators should understand the technologies that underlie those applications.

Delagrange, Susan H. “When Revision Is Redesign: Key Questions for Digital Scholarship.” Kairos 14.1 (2009): n. pag. 27 May 2011. <http://kairos.technorhetoric.net/14.1/inventio/delagrange/>.

Here Delagrange reflects on an older Kairos piece ("Wunderkammer, Cornell, and the Visual Canon of Arrangement", 2007) she wrote as a Flash application. In particular, she describes how she proceeded after receiving the initial "revise and resubmit" response from the editors.

For my purposes here, the relevant portion is the "Code" section, where Delagrange considers the question "What are the affordances and constraints of learning and writing underlying code when designing the visual and conceptual interface of a multimedia project?". (Delagrange begins each of her sections with a question.) She begins: "I want to argue for writing code", opposing it explicitly to WYSIWYG software, and suggesting that working at the level of code is necessary to produce "an optimal user experience" and to "fit the design to the rhetorical argument".

In Favor of Friction

Delagrange challenges what she calls (following Nancy Kaplan, q.v.) "frictionless computing", where software provides a surface interface for some activity that attempts to relieve users of the need to develop new interpretive skills. She claims that "real learning requires significant cognitive engagement", that literacy skills cannot be developed without effort, and that such skills are necessary to adapt to technological change and important for producing new-media work.

She notes that while working on the redesign, she often asked whether "the default design choice [was] the best rhetorical or aesthetic solution", or "merely adequate". This echoes a classic argument in the debate over black-box document-production software: that it reduces the creator's options to whatever the software creator thought important (or convenient, etc.).

Coding as Inventing

For Delagrange, writing code is a research and design activity, "a practice of intellectual inquiry". It is fundamental to the process of invention when working in computer-based media. Creating a new-media work at the level of code is not a matter of a creative stage followed by an implementation stage; the work continues to be shaped as the author writes code.

Furthermore, Delagrange notes the social and cultural aspects of creating software, both in its creation (for example in collaborating with other experts) and in its consumption. She cites Lev Manovich's caution that to ignore the details of software is to deal "only with its effects rather than the causes".

Though Delagrange is writing here about Flash and ActionScript, the same theory should apply rather directly to HTML, CSS, and other standard web technologies.

Fortune, Ron, and Jim Kalmbach. “Letter from the Guest Editors.” Computers and Composition 16.3 (1999): 319-324. 28 May 2011. <http://www.sciencedirect.com/science/article/pii/S8755461599000134>.

In their introduction to this special issue of Computers and Composition on programming in the composition classroom, Fortune and Kalmbach begin by suggesting that there is a place in writing instruction for learning about relevant core IT technologies. The contributors here take up the question of programming in the writing classroom in various ways, from "teachers doing coding to create positive learning environments ... to involving students in the coding themselves" (322). The former, of course, does not necessarily imply any "teach the tech" aspect, except for those who intend to become teachers themselves, but the latter is clearly a call to incorporate the study of basic computing technologies in writing curricula. And while programming and HTML/CSS markup are distinct activities, it is not much of a theoretical leap from teaching one to teaching the other.

Fortune and Kalmbach suggest that the contributors, and other like-minded rhetoric and/or composition teachers, have come to feel programming is important in various ways. One is the conception of programming as rhetorical, as concerned with logic and syntax, with the conjoining of concepts and the arrangement of ideas. The authors see this as increasing with object-oriented programming, hypermedia authoring, and HTML; again this suggests that their arguments could easily apply to the question I'm investigating here.

Another argument they advance is that HTML has rhetorical affordances, and so should come under rhetorical study. "HTML coding on the World Wide Web has made both the rhetoric of code and the impact of that rhetoric on the effectiveness of web pages" apparent to the authors of those pages, Fortune and Kalmbach declare (322). For tech comm, this means learning HTML at the "code" level is a matter of learning rhetorical techne.

To be completely frank, Fortune and Kalmbach, and people they cite, are not very technically accurate when they discuss programming (and particularly not on the history and nature of programming prior to, say, 1990); but they have useful insights about some of the ways programming can be rhetorical in their own historical moment.

Franci, Luke. “What Does ‘WYSIWYG’ Stand For? @jefflin Explains.” Twitpic. 8 May 2011. <http://twitpic.com/4ur3as>.

Luke Franci's photo of a slide from a Jeff Lin presentation is a typical example of the hostility toward WYSIWYG tools in the tech community.

Gresham, Morgan. “The New Frontier: Conquering the World Wild Web by Mule.” Computers and Composition 16.3 (1999): 395-407. 28 May 2011. <http://www.sciencedirect.com/science/article/pii/S8755461599000183>.

Gresham expands on a concern that many writing instructors voice in casual discussion: how do we prevent the writing classroom from becoming solely about the technology? Or, as Gresham puts it, how do we avoid "the danger that our use of technology will not be balanced with our writing goals" (405)? Nonetheless, Gresham makes an argument for having students interact directly with HTML code: "teaching HTML coding ... could allow me the opportunity to examine those gaps present when technologies are not seamless and when the conventions that underlie online communications are not transparent" (397). Of course, in 1999 there were relatively few choices for "transparent" HTML creation tools, but Gresham's argument remains relevant.

What's particularly interesting about Gresham's piece is that it arises specifically from the experience of teaching web-page creation in a class forced to use outdated and barely-capable hardware and software (the "mule" of his title). He suggests that the tensions of technological problems illuminate theoretical ones.

Gurak, Laura J, and Ann Hill Duin. “The Impact of the Internet and Digital Technologies on Teaching and Research in Technical Communication.” Technical Communication Quarterly 13.2 (2004): 187-198.

This essay is primarily concerned with the relationship between academia and industry, and how it affects and should affect the teaching of technical communication. But Gurak and Duin do have a brief discussion of problems with tool-oriented pedagogy:

[W]e used to say that it was acceptable to teach a specific tool and expect students to then learn the tool of choice when they got on the job. However, our programs cannot afford to become comfortable with one product ... [W]e must work harder to keep up (or lead!) in the classroom by offering courses that provide both the theoretical overview of an issue (theories of single sourcing and tag languages, for example) and also the hands-on training on tools that help students understand industry standards. (189)

This is interesting in this context for a couple of reasons. First, it takes a somewhat stronger line against the "representative tools" argument sometimes used to justify tool-oriented pedagogy, even though it does not go entirely over to the tech-oriented side. Second, the authors contrast tool skills with "theories", including those of "tag languages", without suggesting that students could actually use the latter! There is an odd reluctance here to simply call for practical experience in core web technologies.

Haefner, Joel. “The Politics of the Code.” Computers and Composition 16.3 (1999): 325-339. 28 May 2011. <http://www.sciencedirect.com/science/article/pii/S8755461599000146>.

Haefner offers a cultural and political critique of programming languages and of techniques such as structured programming. Frankly, I find his argument highly dubious: technically suspect, historically underinformed (at least judging from the explicit evidence Haefner offers), and theoretically uncompelling. But it is an interesting position nonetheless.

Certainly Haefner is broadly correct in the sense that early and prolonged US (and to a lesser extent UK) dominance of commercial data processing has lasting cultural-imperialist effects, seen for example in the still-incomplete deployment of localization and internationalization features in commercial software. And these issues are inherent in some (though by no means all) programming languages. Whether they are inherent in programming practices, either at the time the article was written or now (dominant programming practices having changed substantially in the interval), is quite another question.

Close Reading Code?

Haefner is least successful when philosophizing on the nature of program source code, as in his attempt to "translate" Shakespeare to C. This is not a meaningful exercise, and the argument that emerges from it is not productive. The "simultaneous dichotomy" of "To be or not to be?" can't exist in his toy program not because the C language restricts him to a Boolean logic, as Haefner would have it, but because existential problems are not members of the conceptual domain of computer software. That dichotomy doesn't exist in a copy of Hamlet, either; it exists in the reader's mind.

Haefner also shows an unfortunate tendency to leap from correlation to causation, particularly in his analysis of structured programming. For example, he sees similarities in corporate management structure and program structure, and concludes the latter must derive from, or at least endorse, the former; but it is just as possible that both are independent applications of the same organizing strategy.

Perhaps the greatest flaw in Haefner's argument, though, is his suggestion that the essences of programming languages and structured programming that he purports to have revealed somehow contaminate the use of computer applications in writing classrooms, which (he claims) would otherwise endorse different cognitive approaches. This is an old argument, with such illustrious predecessors as Audre Lorde's "master's tools" philosophy; but it is a bold and contested position, and one that needs rather more support than Haefner gives it here.

Practical Problems

Though Haefner's theoretical position (however interesting) is untenable, he is more successful at pointing to practical problems with software for writing, for example the myriad features (such as they are) of Microsoft Word that are unhelpful in the typical writing classroom (333-334). He points out that customizing software often isn't a viable option for writing labs, since many students will typically use the same copy of the software, or lose their custom settings each time they log out. And though he commits another theoretical error (due in large part to his reliance on the work of Ted Nelson, a "visionary" not overly concerned with accuracy) in seeing a lack of support for revision history as inherent in hierarchical filesystems, he's correct that typical commercial word-processing software does not handle revision history well, if at all. And if, like many of the authors I've listed in this bibliography, he overstates the liberatory powers of hypertext, he is correct in noting its potential for simplifying collaborative work, for example.

Despite the problems with this piece, Haefner makes a number of good points. More than that, it's interesting as an early attempt by a writing scholar to come to grips with the technologies and practices of programming and their theoretical implications for users of software. Today, nascent fields such as software studies and critical code studies are introducing more trenchant, more accurate, more technically informed theories of programming into our discussions; but in 1999 work like this was rare and daring.

Hart-Davidson, William. “On Writing, Technical Communication, and Information Technology: The Core Competencies of Technical Communication.” Technical Communication 48.2 (2001): 145.

In a special issue where several of the authors lament a focus on tools in the technical-communication workplace, literature, etc., and the rapid turnover in tools for technical communication, Hart-Davidson offers an interesting innovation. The question for him is not to what extent technical communicators should be consuming tools, or what sort of tools they should be consuming, but whether they should be actively involved in producing those tools. (Hart-Davidson's answer, unsurprisingly, is yes.)

Theory and Practice

Perhaps surprisingly, this practical conclusion arises from a call for theory — specifically, a theory of technical communication and its "orientation to our work with information technology", with the specific goal of "mak[ing] leadership in the IT field seem reasonable, possible, and desirable" (146). This is in large part a rhetorical move: it establishes grounds for recognizing the roles technical communication plays in knowledge and data management, decision making, and so forth. It is also a call to demystify concepts such as "talent" which are impossible to measure, and to make "the core expertise of technical communication explicit" (147).

From this starting point, Hart-Davidson adopts some of Derrida's central ideas to show, first, that writing is an information technology, and thus that technical communicators are already technological experts; and second, that theory need not be abstruse, difficult, and disconnected from everyday practice.

Technical Gardens

Hart-Davidson ends with an extended metaphor of "gardening" in the domain of technical-communication technologies. Treating the collection of technologies and information tasks as an ecology, he suggests that technical communicators can move between technologies per se and their applications in the workplace, improving both in the process. The most desirable outcome of such a stance is that workplaces would develop their own IT suited to their particular situations, and technical communicators would be an intimate part of that process.

Tools and Tech Stewardship

This might not seem, at first blush, to be especially relevant to the tools-versus-tech argument; if anything, it appears to take a more or less holistic view of both core technologies and the tools constructed atop them as co-participants in the IT ecology.

I would suggest, though, that for technical communicators to really participate in the creation of IT tools, they'll need to be conversant in the core technologies involved. Furthermore, I believe that if we are to have a rigorous theory of technical communication under a regime of complex information technologies, that theory cannot be satisfied with black-box tools; it will have to inquire into their inner workings, which again implies an understanding of core technology.

Hart-Davidson, William, Victoria Moore, and Joshua Porter. “Modeling Flexible Document Structures with XML Schema: Rhetorical Objects and Rhetorical Metadata.” 1 Jan. 2003. <http://replay.web.archive.org/20030426201230/http://www.rpi.edu/~hartdw/ro.whitepaper.pdf>.

This whitepaper (unfortunately now somewhat difficult to find; as the URL above indicates, I had to resort to the Internet Archive's Wayback Machine to locate a copy) does not explicitly address the tools-versus-tech debate. What it does do is offer a compelling argument against focusing exclusively on tools, because the work it describes, creating an XML schema informed by rhetorical theory, requires an understanding of the underlying technology.

Tools that abstract away from the underlying technology limit their users to the domain of those abstractions. Ideally those tools offer other affordances and efficiencies beyond those available by manipulating the core technologies directly; that's the benefit that justifies the tools' cost. But there will always be innovative projects like this one which begin by positing new uses of the technologies, and consequently cannot be expressed in tools that did not anticipate such uses when their abstractions were formulated.
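The whitepaper's own schema is not reproduced here, but a hypothetical fragment in its spirit may clarify what positing new uses looks like in practice; the element and attribute names below are my invention, not Hart-Davidson, Moore, and Porter's.

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Hypothetical sketch of a schema that models rhetorical rather
         than typographic structure; names invented for illustration. -->
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
      <!-- A "rhetorical object": an argument unit, not a layout unit. -->
      <xs:element name="argument">
        <xs:complexType>
          <xs:sequence>
            <xs:element name="claim" type="xs:string"/>
            <xs:element name="evidence" type="xs:string" maxOccurs="unbounded"/>
          </xs:sequence>
          <!-- "Rhetorical metadata": whom the unit addresses. -->
          <xs:attribute name="audience" type="xs:string"/>
        </xs:complexType>
      </xs:element>
    </xs:schema>

No WYSIWYG page editor built around visual formatting could express, let alone enforce, a content model like this; that is the sense in which the project presupposes direct work with the technology.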

Hartley, Cecilia, Ellen Schendel, and Michael R. Neal. “Writing (online) Spaces: Composing Webware in Perl.” Computers and Composition 16.3 (1999): 359-370. 28 May 2011. <http://www.sciencedirect.com/science/article/pii/S875546159900016X>.

Another example of an essay which focuses not so much on the question of teaching web technologies as on their employment by instructors; but it seems to me this necessarily implies a case for teaching them as well, for the benefit of future instructors and, by extension, of students who may enter other professions where the same arguments apply.

Hartley et alia offer three major points in favor of using web technology. They support them through the example of a web site they created named Writing Spaces, which is not just static HTML (and related) content, but a web application written primarily in Perl. Creating such a site is significantly more involved than simply composing static pages, so the authors were more or less compelled to gain technical knowledge and use web technologies directly; their options for employing black-box tools were limited, particularly in the late 1990s when the site was created. That necessity, in turn, meant they became more informed about the technology and learned how it fit with their theories about the importance of critical examination of technology.

The three arguments the authors advance are not unique to this piece, but they are solid and clearly formulated. One is simply that instructors who teach technology should understand it; a second, complementary goal is to gain sufficient understanding to offer robust critiques of those technologies, and in particular of the ideologies that underlie them. The third, which might be seen as following from those two, is to use that understanding to create web technologies that embody the theories writing instructors advocate.

As a final note, I appreciated the authors' description of their software-writing practice. They studied the Perl programming language enough to gain some literacy in it, and learned basic concepts of web application design. Then they found free-for-use Perl scripts online, customized them for their purposes, and integrated them into an application. This bricolage is reminiscent not only of much professional software development (which makes heavy use of free and commercial components) but also of contemporary ideas about writing.

Hawk, Byron. “Toward a Post-Technê-Or, Inventing Pedagogies for Professional Writing.” Technical Communication Quarterly 13.4 (2004): 371-392.

Hawk does not discuss web tools or technologies in this piece. What he presents here is a theory of technê, and specifically what he calls "post-technê", which he defines as "the combination of technique, the technical, technology, and technê that is grounded in posthumanism" (389). Hawk's pedagogical proposal is to rethink the relationships between human actors and their technological milieu and teach students to approach each rhetorical situation as an occasion for investigation, critical reflection, and invention.

As such, Hawk's theory points to one way of escaping dichotomies such as tools/technologies; in his scheme, both core web technologies and packaged tools that abstract from those technologies possess the aspects he calls "technique" (which includes strategies and methods), the "technical" (whatever is "both ordered and complex"), "technology" (abstractions and their relations), and technê (the "combination of art and technology in productive knowledge"). As noted above, it is the combination of these four aspects in a posthumanist regime that forms Hawk's post-technê (389).

Ideally, a rhetor following Hawk's model would be able to deploy web technologies and tools in a kairotic fashion, as appropriate for each situation, and do so in terms of a philosophically robust understanding of how those constituents interact in the "constellation" of a rhetorical response.

Hayhoe, George F. “What Do Technical Communicators Need to Know?” Technical Communication 47.2 (2000): 151.

In this editorial, Hayhoe points to one of the issues that is often invoked in the tools/tech debate. On the one hand, he begins by dismissing fluency with specific tools from the category of "essential knowledge that all technical communicators must possess"; on the other, he admits that job listings "seldom list any other specific knowledge as a prerequisite" (151). Thus some instructors may advocate teaching tools purely as a pragmatic matter: their students need certain competencies to be employable.

Hayhoe later admits that tool knowledge, while "less important than communication skills and knowledge of subject domains", is still a necessary skill (152). At the same time, "we should not let the tools define us or distract us" (153), simply because knowledge of them is easier to measure. Ultimately, this piece is a mild argument for teaching tool skills, but one that qualifies that area as subordinate to what Slattery refers to as "higher-order competencies".

Kaplan, Nancy. “Knowing Practice: A More Complex View of New Media Literacy.” L’Aquila, Italy, 2001. <http://iat.ubalt.edu/kaplan/ssgrr01.pdf>.

Kaplan makes an unusual and telling argument against what she calls "invisible computers, applications, and interfaces" (1), those designed to reduce the cognitive demands they put on users. She proposes that reducing users' cognitive load prevents them from developing the skills they need to learn and adapt to new technologies in the future.

She opposes this argument to a trend she sees in prominent usability theorists such as Donald Norman and Jakob Nielsen, who have argued for "invisible" or "frictionless" "appliance" computing, where IT is incorporated into devices designed to minimize the effort users have to make in order to accomplish specific tasks; and in the work of Brenda Laurel and Janet Murray, who make a similar argument for black-boxing technologies, though from an aesthetic rather than a usability perspective.

Kaplan suggests, though, that while there may be advantages to transparency or a "perfect fit" (2) between written language and readers, it is not certain those advantages apply to new media, and particularly not to the processes of creating new media. For new media, she believes, literacy includes both reception and production, in multiple modes. Whereas print technology offers only a narrow range of affordances for human action, and thus those affordances can safely be black-boxed, new media offer a vaster and ever-growing range, and any attempt to reduce those to a small set of abstractions will necessarily greatly limit both the understanding and capability of users.

Kaplan applies this theoretical stance to the question of courseware, explaining that web-course and learning-management systems like Blackboard cost educators the opportunity for innovation. She points out that such commercial applications "prevent everyone from penetrating their mysteries, from understanding the ways code constrains what they can do within the system" (3). And this is not just an issue for educators, she says: "That which is unproblematic or routinized to the point of invisibility fails to provide grounds for learning" (3).

In acknowledging the pragmatic arguments in favor of tool use, but challenging the theoretical ones underpinning their value, Kaplan makes one of the strongest arguments to date in favor of teaching technological details rather than simply surface tools.

Kolosseus, Beverly, Dan Bauer, and Stephen A. Bernhardt. “From Writer to Designer: Modeling Composing Processes in a Hypertext Environment.” Technical Communication Quarterly 4.1 (1995): 79-93.

This relatively early piece (the Web had only been around for about five years when it was written; the W3C had only just been formed) discusses hypertext composition using a now-obsolete proprietary system. For my purposes, what's interesting about the article is its emphasis on the different and far more complex process of composing hypertext documents, as compared to writing linear prose. Kolosseus et al. emphasize that hypertext composition is not like using a conventional word processor. We see here that even in 1995, technical communicators were aware of the complexities of hypertext authoring and the deficiencies of the word-processing WYSIWYG model for it.

Kramer, Robert. “Single Source in Practice: IBM’s SGML Toolset and the Writer as Technologist, Problem Solver, and Editor.” Technical Communication 50.3 (2003): 328.

Kramer's focus is not the question of knowing core technologies or possessing fluency with tools that abstract them. If anything, his argument is that in a single-sourcing environment, technical writers need both kinds of knowledge; drawing on IBM's documentation production as an extended example, he argues that under single sourcing the writer becomes even more of a technologist, and must be an expert user of a wide range of technologies and tools, many of which are not directly related to producing written content.

But in the kind of environment Kramer describes, tools of the WYSIWYG variety, at least, have little place, in part because of the increased distance between content production and document layout. "Semantic" markup languages like the IBM SGML toolset Kramer describes, or the various DITA-based approaches, are the antithesis of black-box WYSIWYG tools: authors don't deal with formatting at all. While it's certainly possible to have GUI authoring tools for semantic-style markup that spare authors from working with the actual markup code, users of such tools would still be working with a vocabulary and document model that refer to document elements, not appearance.
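The contrast is easy to see in miniature. The first fragment below is presentational markup, where the author controls appearance; the second is semantic, DITA-style markup, where the author names what things are and the publishing toolchain decides how they look. (The DITA task vocabulary is shown only for illustration; Kramer's IBM toolset used its own SGML vocabulary.)

    <!-- Presentational: the author makes formatting decisions inline. -->
    <p><b>Step 1.</b> Press the power button.</p>

    <!-- Semantic (DITA-style): no formatting appears anywhere. -->
    <task id="power-on">
      <title>Turning the device on</title>
      <taskbody>
        <steps>
          <step><cmd>Press the power button.</cmd></step>
        </steps>
      </taskbody>
    </task>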

In this sense, Kramer's description of working in a single-source environment is an argument for teaching markup technology as a fait accompli: writers who work in such environments will have no choice but to understand the core technologies.

Krause, Steven D. “Teachers Learning (Not Teaching) HTML With Students: An Experimental Lesson Plan for Introducing Web Authoring Into Writing Classes.” Readerly/Writerly Texts 7.1 (1999): 113-128. <http://www.readerly-writerlytexts.com/RW_FW_99_Index.htm>.

Another early tangential contributor to (or precursor of) the debate, this piece by Krause is of interest mostly because it suggests some advantages to both students and instructors in learning HTML. As such it lays some of the groundwork that appears in later arguments in favor of learning core web technologies.

Lanier, Clinton R. “Analysis of the Skills Called for by Technical Communication Employers in Recruitment Postings.” Technical Communication 56.1 (2009): 51.

A survey of job postings for technical writers shows that employers want them to have technical skills, and are not particularly interested in knowledge of specific tools. This is a moderate pragmatic argument in favor of teaching the tech rather than the tools.

Lowe, Charlie. “Re: [techrhet] Do HTML Authors Need to Know HTML?” 20 Feb. 2011 n. pag.

As with the Slattery email (q.v.), this message was in response to my TechRhet query regarding teaching core web technologies. It's notable as one of the relatively rare contributions to the debate that argue for teaching more-abstract tools, at least in some circumstances.

Lowe distinguishes between technically-oriented and design-oriented student populations, and claims that for the latter knowledge of markup languages is a secondary concern: "it is the principle of markup and styling that is the important outcome I seek, not so much the ability to build websites with HTML/CSS". While he believes some familiarity with core technologies is important for understanding the basic design principles of web documents, most students, he says, will be working with content-management systems and other more-abstracted toolchains.

It's interesting to contrast this with Lowe's contemporaneous "The Future of the Book" blog post (q.v.), which makes almost the opposite argument. In both cases, though, Lowe's position arises from an assessment of the available tools and which ones a technical communicator is likely to use, at this particular moment in the development of web technologies. In other words, these contrasting positions both arise from a highly pragmatic, results-oriented view.

–––. “The Future of the Book: Time to Learn Some HTML/CSS.” Kairosnews 18 Feb. 2011. 8 May 2011. <http://kairosnews.org/the-future-of-the-book-html-css>.

In this short blog post, Lowe argues that because ebooks are expected to become the dominant book publishing format, and because the EPUB (electronic publication) standard relies on HTML for content, CSS for formatting, and XML for metadata, technical writers in particular will need to be fluent in those technologies. Lowe claims that existing tools do not handle EPUB well, and while they may be sufficient for ebooks with simple formatting requirements "like a novel", they are not suitable for technical publishing.
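For concreteness, here is an abbreviated sketch of how those three technologies meet in an EPUB package document (the file names are conventional rather than mandated, and the required container file and table of contents are omitted):

    <!-- content.opf: metadata is XML (Dublin Core); the manifest points
         to XHTML content files styled by ordinary CSS. -->
    <package xmlns="http://www.idpf.org/2007/opf" version="2.0"
             unique-identifier="bookid">
      <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
        <dc:title>Example Technical Manual</dc:title>
        <dc:language>en</dc:language>
        <dc:identifier id="bookid">urn:uuid:example-0000</dc:identifier>
      </metadata>
      <manifest>
        <item id="ch1" href="chapter1.xhtml" media-type="application/xhtml+xml"/>
        <item id="css" href="style.css" media-type="text/css"/>
      </manifest>
      <spine>
        <itemref idref="ch1"/>
      </spine>
    </package>

A writer who can read a fragment like this can diagnose the formatting problems Lowe describes; one who knows only an export button cannot.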

Mauriello, Nicholas, Gian S. Pagnucci, and Tammy Winner. “Reading Between the Code: The Teaching of HTML and the Displacement of Writing Instruction.” Computers and Composition 16.3 (1999): 409-419. 28 May 2011. <http://www.sciencedirect.com/science/article/pii/S8755461599000201>.

As their title suggests, Mauriello et al. belong to that group of compositionists who are concerned that teaching web technology in the writing classroom diverts time and attention from traditional writing instruction. They describe their experiences in attempting to identify and address these issues in a collaborative teaching support and research group. Besides the obvious problem of limited time in a composition course, and the concomitant problem of trying to add web technology or any other additional subject to the syllabus, they offer some specific objections.

One is that HTML is "difficult to learn". While it's easy to object to this sort of generalization (what subjects are easy to learn? what factors make a subject easy or difficult?), there is some justice to it. Certainly the workflow, failure modes, and reward structure of constructing software (which, for these purposes, would include writing HTML, even though it's not programming in a strict sense) are substantially different from those for writing prose. Students without programming experience are likely to find their introduction to it a frustrating experience.

Their second major objection is that teaching HTML threatens disciplinary boundaries. This is a common and, again, fairly obvious complaint, and one that I admit I don't have a great deal of sympathy for. Are writing technologies not within the purview of writing classes? Still, I understand that there are institutional consequences for troubling the configuration of the disciplines, and that faculty members need to recognize and negotiate them.

The authors also suggest that students are uneasy, and find it harder to learn, when instructors employ experimental pedagogical techniques or introduce subjects they are themselves unsure of. And, they claim, because web work is usually public (the authors don't distinguish between web sites on the Internet and those confined to institutional networks), this problem is compounded by the potential for outside criticism.

They spend some time discussing the capabilities and potential of web-authoring tools, about which they're rather ambivalent. Early on they declare that "At present, to publish on the Web requires using HTML code" (411), which seems to dismiss tools as an option. But they go on to discuss template-driven web sites such as Tripod and Geocities (413-414), and HTML converters and WYSIWYG tools (414-415). They describe issues with all of these, and describe one solution which frankly, twelve years later, seems decidedly backward: students would write papers using conventional word processors and hand them in as softcopy, and instructors would convert them to HTML and publish them on a class web site (416). Though the authors claim both students and instructors like the results of this process, it was too much work for instructors, and some felt it was "disempowering" for students.

Ultimately the authors do not come out strongly in favor of teaching tools rather than technology. This was the result not of a theoretical stand in favor of teaching the tech but of a practical one: the group could not find a workable approach to an HTML writing curriculum that did not involve students writing HTML directly.

Beyond that, the authors call for special Web-based sections of writing courses; for institutional support for such courses; for more research on teaching HTML; and for a greater focus on collaboration among writing instructors and other faculty to deal with the challenges of Web writing.

O’Sullivan, Mary F. “Worlds Within Which We Teach: Issues for Designing World Wide Web Course Material.” Technical Communication Quarterly 8.1 (1999): 61.

Though O'Sullivan's topic here is approaches to developing online courseware, not the question of what to teach technical communicators, her piece is a good example of the classic tech-over-tools argument. She looks at various "course-in-a-box" packages, and while she notes various useful aspects — instructors can create courseware quickly; non-technical instructors may actually have better control over a site created with course-in-a-box software than if they have to work with university IT — she ultimately concludes that "creating online instructional sites by hand ... is preferable to using course-in-a-box software" because the latter is constrained, and that course-creation "software should be seen only as a stepping stone" (70).

Rainey, Kenneth T, Roy K Turner, and David Dayton. “Do Curricula Correspond to Managerial Expectations? Core Competencies for Technical Communicators.” Technical Communication 52.3 (2005): 323-352.

Another article that falls into the "preparing for the job" category, "Core Competencies" surveys both technical-communication managers in the workplace and curricula from undergraduate programs, to see how well the two match.

Many of the skills revealed by the study are catholic and equally applicable to nearly any approach to teaching web authoring, such as the ability to collaborate or to write clearly. The authors list both the ability to learn new technologies (which others have suggested argues against overemphasizing specific tools in the classroom) and "skills in using technologies" (323) as desirable, which would seem to be one vote for each side; the latter is only a "secondary competency", but the former is a weaker claim, so they largely balance out. They do later conclude that "the ability to adapt to new situations and to learn new software quickly is far more important than knowledge of specific software packages" (333), but this is not the same as an endorsement for teaching web technologies.

While there's no strong argument for either side in this piece, the results of the study might be taken as a recommendation for a balanced approach, so that students get some exposure to popular tools but also enough understanding of underlying technologies to be flexible and quick to learn new tools.

Slattery, Shaun. “Re: [techrhet] Do HTML Authors Need to Know HTML?” 19 Feb. 2011 n. pag.

In response to my query on the TechRhet list regarding this topic, Slattery explained that while he is largely in agreement with Karl Stolley (q.v.), he is also "firmly in the both/and camp". Referring to his own "Technical Writing as Textual Coordination" (q.v.), Slattery suggests that technical communicators need to learn various "genres of software", and WYSIWYG HTML tools are among those genres. Thus there is a practical need to be acquainted with them, even if from a theoretical perspective they're somewhat lacking.

Slattery also notes in passing that because students and professional technical communicators often need to use content-management systems and template-based website software, teaching such systems in conjunction with web technologies and theory can better enable students to respond critically to them.

–––. “Technical Writing as Textual Coordination: An Argument for the Value of Writers’ Skill with Information Technology.” Technical Communication 52.3 (2005): 353.

Slattery does not explicitly address the question of tools versus tech here, but he indirectly argues against pursuing that debate, or at least against pursuing it too far.

Slattery describes an empirical study of technical communicators, with a view to understanding the "relationship between tool skill and ... higher-order competencies" such as cognitive skills and domain knowledge (353, emphasis in original). For Slattery's purposes here, technical knowledge of markup languages and facility with document-production applications are both types of "tools" in this relationship; he finds his subjects are required to use "a wide variety of IT" in various combinations in the course of their work.

This could be seen as an unwelcome conflation of core technologies and black-box tools that (depending on one's view) either obscure or instrumentalize them, or simply as neglecting the question of which is the royal road to technical document creation. But Slattery's final call to "help students develop a repertoire of information technologies and critical processes for selecting and using IT" (359) suggests the best approach might be to teach both tools and tech, and the critical insight to determine when to use each. Whether that's practical, of course, is another question.

Stephenson, Lynda Rutledge. “Road Trip.” Kairos 15.2 (2011): n. pag. 8 May 2011. <http://kairos.technorhetoric.net/15.2/praxis/stephenson/index.html>.

In "Road Map", the introductory autobiographical narrative process reflection for this piece, Stephenson explains how she developed her web-content creation process; largely self-taught, she was unaware of the existence of WYSIWYG tools. But citing Kristin Arola's "The Design of Web 2.0" (q.v.), she suggests that there is value in creating content directly with code. Though Stephenson claims she "is not nor ever will be a computer programmer, new media designer, or technology professor", she feels knowing some technical details is both intellectually and creatively rewarding.

Stolley, Karl. “Re: [techrhet] Do HTML Authors Need to Know HTML?” 19 Feb. 2011. N. pag.

Stolley's reply to the TECHRHET discussion (see also Ching, Lowe, Slattery) makes a few interesting points. First, Stolley suggests that little of the published work on the subject will advocate a tools-based approach. (While I did find some sources for that side, it's true that the majority favor tech.) He believes the pro-WYSIWYG position in particular is more often voiced in casual conversation, or left implicit in classroom practice, than argued in print.

Stolley also notes that the argument he most often encounters in favor of teaching tools is that instructors don't have time to learn to write code. He claims that his experience is the opposite — that keeping up with changing tools was too time-consuming, particularly for an activity that did not provide any other research benefit.

–––. “The Lo-Fi Manifesto.” Kairos 12.3 (2008): n. pag. 21 Feb. 2011. <http://kairos.technorhetoric.net/12.3/topoi/stolley/>.

Stolley argues that "digital scholars" should create online work that is "free and open source" and "software- and device-independent". In particular, he argues against creating online texts using proprietary formats. Further, digital scholars should be able to talk about "the intricacies and methods of digital production". He extends this to a pedagogical argument: students should be taught these same values and knowledges.

The model he advocates revolves around "lo-fi production technologies", which are distinguished by being compatible with a wide range of tools. For example, where feasible, they should use plain-text file formats, so that their products are human-readable and can be viewed and edited with any plain-text editing software. Dense media (audio, video, etc.) that can't reasonably be encoded as plain text should use open formats.

Stolley is careful to note that lo-fi technologies can produce "hi-fi" results, so expressive power isn't lost.

Emphasize the Source

For most of the piece Stolley is not directly concerned with the question of how deeply technical an HTML author should be, though his argument presupposes at least a theoretical understanding of technical issues. At times, however, he does touch on the question, as when he suggests pedagogy should "emphasize the *source* in 'free and open source'".

The LOFI Schema

He augments his general appeal for lo-fi production with a more specific theoretical schema represented by the acronym "LOFI": Lossless, Open, Flexible, and In(ter)dependent. The middle two terms should be familiar to practitioners, but the first and last may need some explanation. By "lossless" Stolley means production technologies that do not need to alter their constituent elements (text, images, etc.) but instead "orchestrate" them for the audience; an example is how HTML documents include images by reference to the original files, rather than by incorporating them into an opaque structure (as, for example, Microsoft Word does). Similarly, his "in(ter)dependent" refers to the affordances such orchestration gives users, who can decompose a lo-fi production into its constituent elements, treating them differently or reusing them for new purposes.
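
A concrete illustration (mine, not Stolley's) of what "lossless" orchestration means in markup: an HTML document brings in an image by pointing at a separate file, which remains intact and independently usable. File names here are purely illustrative.

    <!-- The document only references the image file; the file itself is
         never altered or absorbed, and can be edited, replaced, or reused
         on its own. -->
    <img src="images/figure-1.png" alt="Diagram of the production process">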

Minimal Manifesto

"The Lo-Fi Manifesto" ends with its eponymous position statement - a set of six principles. (The presentation of the manifesto exemplifies Stolley's position: it uses scripting to expand each point into a longer discussion when, and if, the reader clicks on a claim. If the user has scripting disabled, or is using a UA that doesn't support it, the document degrades gracefully, by always showing the additional content. This combination would be difficult to achieve using proprietary tools.)

Of the six, the first three are directly relevant to the question of "teaching the tech":

1. Software is a poor organizing principle for digital production.
2. Digital literacy should reach beyond the limits of software.
3. Discourse should not be trapped by production technologies.

In different ways, all three recommend a turn away from tools that enfold, and thus inevitably limit, the possibilities offered by the technologies of digital discourse.
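
As promised above, here is a minimal sketch of the expand-on-demand pattern the manifesto itself uses. This is my reconstruction, not Stolley's actual markup: the full discussion is present in the HTML, and the script, when it runs, hides each discussion until the reader asks for it.

    <ol class="manifesto">
      <li>
        <a href="#p1" class="principle">Software is a poor organizing
          principle for digital production.</a>
        <div id="p1" class="discussion">Longer discussion of the claim...</div>
      </li>
      <!-- ...five more principles... -->
    </ol>
    <script>
      // With scripting available, hide each discussion and toggle it when
      // its principle is clicked; without scripting, nothing here runs and
      // the full text simply remains visible.
      document.querySelectorAll('.discussion').forEach(function (d) {
        d.style.display = 'none';
      });
      document.querySelectorAll('a.principle').forEach(function (a) {
        a.addEventListener('click', function (e) {
          e.preventDefault();
          var d = document.querySelector(a.getAttribute('href'));
          d.style.display = (d.style.display === 'none') ? '' : 'none';
        });
      });
    </script>

The point, in Stolley's terms, is in(ter)dependence: the full text never depends on the script that presents it.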

Williams, Joe D. “The Implications of Single Sourcing for Technical Communicators.” Technical Communication 50.3 (2003): 321.

This is a literature review, and so is chiefly concerned with surveying the field rather than offering an original perspective. And the survey-article form doesn't give Williams much room to go into specific issues in depth; since his focus is single sourcing, he only touches on some of the issues directly related to the question of tools and technologies.

Points for Both Sides

Nonetheless, those issues do come up several times. In reviewing Ann Rockley's work, for example, Williams describes a transition from "document-oriented to object-oriented" that also seems to imply a shift away from WYSIWYG tools, as it moves from "desktop publishing software" to "XML and a CMS" (321-322). It's not clear, though, that understanding core technologies such as XML is necessarily part of following this shift; the association may be a historical accident, in that technology-abstracting authoring software had not yet (in 2003) caught up with the technologies used for single sourcing. And Williams moves next to an approach described by Kurt Ament which, he says, "can be used with the most commonly used desktop publishing tools" (322). So at this point, and indeed for much of Williams' article, there's no clear advantage to either tools or technological knowledge.

It's also worth noting that in reviewing Kevin Dick's XML: A Manager's Guide, Williams repeats without questioning Dick's claim that XML "requires a core set of skilled [software] developers with a thorough understanding of its mechanics". In fact, Williams elaborates on the idea, claiming that such developers will primarily be responsible for "creating a shared mental model and vocabulary" (323). It seems odd that Williams does not suggest this as a role technical communicators could play; it's almost as if, at this point, he does not consider core technology a logical area for writers to be skilled in.

The Technological Turn

At this point, however, Williams turns to then-current articles on single sourcing specifically within technical communication, and there he finds a discernible turn toward technological expertise. He briefly cites Filipp Sapienza's recommendation that teachers and practitioners become familiar with XML, for example. Most significantly, the final piece Williams reviews is the 2002 whitepaper "Modeling flexible document structures with XML schema" by Hart-Davidson, Moore, and Porter (q.v.), from which he derives his final recommendation: "If educators and practitioners of technical communication were to refine such rhetorically grounded 'homegrown' single sourcing solutions as Hart-Davidson, Moore, and Porter describe, I believe that many of the benefits of single sourcing and content management could be realized" (326). Since the approach described in "Modeling flexible document structures" is thoroughly technological (an XML schema informed by rhetorical theory), it's hard to read this as anything but Williams endorsing technological expertise as the royal road to dealing with the imperatives of single sourcing.

Appendix: Implementation Details

This section is purely for nerds who want to know how this page was constructed. Others may go do something productive.

This annotated bibliography page was constructed from materials exported from my Zotero database, processed by a PHP script. Zotero currently has no support for producing proper annotated bibliographies (one of its most-requested features, but apparently difficult to implement because of architectural infelicities). I cobbled this system together so that I could still use Zotero for the research and for writing the annotations. It's not fancy and makes no effort to be user-friendly (but then, neither do I), but for this project it worked.

Here's the process and tool chain, reduced to its essentials:

1. Enter the sources, and write the annotations, in Zotero.
2. Export the relevant materials from the Zotero database.
3. Run the export through the PHP script, which generates the HTML for this page.
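
For the curious, the transformation in step 3 is roughly of the following shape. This is a hedged sketch, not the actual script: the real export format, field names, and file names are not reproduced here, so everything below is illustrative.

    <?php
    // Sketch only: read a hypothetical CSV export with 'citation' and
    // 'annotation' columns and emit HTML entries shaped like this page's.
    // Field names and file names are illustrative, not the real system's.
    $in = fopen('bibliography-export.csv', 'r');
    $columns = fgetcsv($in);                     // first row: column names
    while (($row = fgetcsv($in)) !== false) {
        $entry = array_combine($columns, $row);
        printf("<h3>%s</h3>\n<div class=\"annotation\">%s</div>\n",
            htmlspecialchars($entry['citation']),
            htmlspecialchars($entry['annotation']));
    }
    fclose($in);

The one design choice that survives the simplification: the bibliographic data lives in Zotero, and the page is regenerated rather than edited by hand.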

And that's how we do things at ideoplast.org. Or at least that's how we do things for this particular page. I've built other bibliography systems for other classes that are also hosted on this site, including the MultiBib application and the ad hoc tool chain used to produce my "Used Without Permission" improprietography. No hobgoblin consistency here.

You can download the sources for this system, if you want them for some unguessable reason. My usual license (do whatever you want, blah blah blah) applies.