Separating Thought from Labour — A Design Knowledge Production Demands


In knowledge production, thought demands continuous attention; labour admits division, deferral, and automation. Tools that fail to honour this asymmetry ── that interleave thinking and procedural work in time and space ── erode productivity in measurable ways. This article traces the proposition through cognitive-science findings and several historical instances. The central case study is the WYSIWYG editor, whose design principle of “what you see is what you get” carries a hidden cost: a constant low-level coupling of thought to formatting work.

The argument is that the cost of a writer’s tools is not measured only in licence fees but in the cognitive resources the tool consumes from the writing itself.

Attention Cannot Run Two Tasks at Once

The discovery that human attention does not fully parallelise across cognitive tasks is a cornerstone of twentieth-century cognitive psychology. Donald Broadbent’s filter theory (1958)[2] formulated attention as a single channel. Daniel Kahneman’s capacity theory (1973)[3] reframed it as a finite resource pool. Harold Pashler’s work on the psychological refractory period (1994)[4] demonstrated a central-processing bottleneck: inputs arrive in parallel but processing is serial.

In 1977, Richard Shiffrin and Walter Schneider distinguished controlled processing from automatic processing[5]:

| Property | Controlled | Automatic |
| --- | --- | --- |
| Attention required | Yes | No |
| Conscious | Yes | No |
| Interferes with concurrent tasks | Yes | Largely no |
| Acquired | Immediately | Through extensive training |

Two simultaneous controlled-processing tasks interfere with each other. If one is automatised, interference with the other drops sharply ── one can hold a conversation while cycling, but conversation falters when navigating an unfamiliar route in real time.

For knowledge production, the relevant inference is direct: thinking itself is, in principle, controlled processing. Constructing a new concept, validating a chain of reasoning, integrating multiple perspectives ── none of these can be automatised. Much of the surrounding labour, however, can be moved out of the controlled-processing layer through training, design, or automation. Typing, file management, layout adjustment, format conformance ── these belong to the labour side of the dichotomy.

The design question, accordingly, is whether the labour is being run as controlled processing in concurrence with thinking ── and thereby interfering with it.

Separating Capture from Organisation ── Umesao’s Cards and Luhmann’s Slip Box

The temporal separation of thinking from labour in knowledge work has a long tradition as practitioner knowledge. The Japanese exemplar is The Technique of Intellectual Production (1969) by the cultural anthropologist Umesao Tadao, published as an Iwanami Shinsho paperback[6].

The Kyoto University card method at the heart of Umesao’s proposal has its core in a temporal separation:

  1. Capture each thought immediately on a card, without thinking about organisation
  2. Organise later, by physically rearranging the cards as a batch operation

The temporal split produces three effects:

  • The flow of thinking is not interrupted by the controlled processing of “now organise this”
  • The organising work can be batched into its own dedicated time block
  • The physical rearrangement surfaces associations not visible during initial capture

Umesao formulated this as practitioner knowledge before cognitive psychology had developed the theoretical apparatus. With hindsight, the method aligns with the Shiffrin & Schneider framework: capture and organisation each demand controlled processing, and running them concurrently makes them interfere with each other.

The German sociologist Niklas Luhmann (1927–1998) reached the same proposition independently. The Zettelkasten he began in 1951 as a young researcher grew over a lifetime to roughly 90,000 cards and supported the production of 50+ books and several hundred papers[7][8].

The core of Luhmann’s card method:

  1. No formal classification at capture ── one idea per card, sequential numbering only, no fixed taxonomy
  2. Links added later ── references to other card numbers are written in afterwards as the network of knowledge accumulates density over time
  3. Unexpected connections generate thought ── crossings between ideas surface arguments and chapter structures invisible at the moment of writing
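A minimal sketch of the same discipline in code (a hypothetical model; Luhmann’s actual numbering used branching alphanumeric ids, simplified here to plain sequential numbers): capture assigns only the next number, and links between cards are written in afterwards.

```python
class Zettelkasten:
    """Toy model of a slip box: sequential ids at capture,
    cross-references added later as the network densifies."""

    def __init__(self):
        self.notes: dict[int, str] = {}
        self.links: dict[int, set[int]] = {}
        self._next = 1

    def add(self, text: str) -> int:
        # Capture: no taxonomy, no filing decision -- just the next number.
        nid = self._next
        self.notes[nid] = text
        self.links[nid] = set()
        self._next += 1
        return nid

    def link(self, a: int, b: int) -> None:
        # Organisation happens later, as references between card numbers.
        self.links[a].add(b)
        self.links[b].add(a)

    def neighbours(self, nid: int) -> list[str]:
        # Following links surfaces connections invisible at capture time.
        return [self.notes[n] for n in sorted(self.links[nid])]

zk = Zettelkasten()
a = zk.add("Controlled processing is serial")
b = zk.add("WYSIWYG interleaves layout decisions with writing")
zk.link(a, b)
```

As in the card-box case, `add` makes no classification decision; the thinking-time cost of organisation is zero at capture and is paid later, in batch, through `link`.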

In his 1981 essay “Kommunikation mit Zettelkästen”[7], Luhmann described the card box as a communication partner that sustained much of his writing productivity.

The temporal separation of capture-time decisions from organising-time decisions is identical in design philosophy to Umesao’s cards ── two implementations of the same proposition, arrived at independently across geographic and cultural distance. Sönke Ahrens’s How to Take Smart Notes (2017)[9] reintroduced the Luhmann method to the English-speaking audience, contributing to the popularity of digital implementations like Roam Research and Obsidian.

The same design pattern appears in David Allen’s Getting Things Done (2001)[10], where the separation of capture from processing is the central methodology. At capture time, no judgement is made about the contents; everything goes to an inbox. At processing time, classification, delegation, deferral, or execution is decided. What to think about and how to handle it are separated in time.

Separating Content from Presentation ── Structured Authoring as the Antithesis

Looking at the act of writing itself, another separation comes into focus: content vs presentation.

Word processors of the WYSIWYG family display a near-final visual approximation of the output as one types. The convenience masks an unusual demand on attention ── layout judgements run continuously alongside thinking:

  • Is this heading the right size?
  • Is the spacing between paragraphs about right?
  • Are the bullet markers aligned?
  • Is the figure positioning interfering with the body flow?

All of these are formal decisions that demand controlled processing. The writer is, in effect, trickle-feeding attention away from argument construction toward layout management.

Donald Knuth’s TeX system, in development since 1978, made the opposite design choice[11]. The author writes markup ── structural tags such as “this is a heading”, “this is mathematics”, “this is a quotation” ── and the typesetting engine decides the layout. Leslie Lamport’s LaTeX (1986)[12] refined the separation further, establishing structured authoring as the standard for academic publishing.

The author specifies structure; the machine specifies layout. This is the substantive content of structured authoring as a design principle.
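The principle can be sketched in a few lines (hypothetical role names and styling rules, not TeX’s actual machinery): the author’s document carries only structural roles, and a single, separate renderer owns every presentation decision.

```python
# The author writes structure only: a sequence of (role, text) pairs.
document = [
    ("heading", "Separating Thought from Labour"),
    ("para", "Thought demands continuous attention."),
    ("quote", "They take away control over structure."),
]

# The machine owns presentation: one place maps roles to layout decisions.
STYLES = {
    "heading": lambda t: t.upper(),       # e.g. render headings in capitals
    "para":    lambda t: t,               # body text passes through
    "quote":   lambda t: "    " + t,      # e.g. indent quotations
}

def typeset(doc):
    # Changing STYLES restyles every document at once;
    # the author's source text never changes.
    return "\n".join(STYLES[role](text) for role, text in doc)

print(typeset(document))
```

No layout judgement ever interrupts the writing of `document`; every such judgement lives in `STYLES`, where it is made once, in batch, for all documents.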

The economist Allin Cottrell, in his well-known 1999 essay “Word Processors: Stupid and Inefficient”[13], translated this point for a general readership. Cottrell diagnosed Microsoft Word as an editor that drives the user’s attention to surface presentation rather than to structure, and judged the design inefficient and stupid for the knowledge-worker case:

Word processors are designed to give the user the impression of “what you see is what you get” (WYSIWYG). They give the user the appearance of having control over presentation. In return, they take away control over structure[13].

The author gives up structure to gain the appearance of presentation control. That this trade is happening is what writers typically fail to notice ── the heart of Cottrell’s argument.

John Gruber’s Markdown (2004)[14] brought the philosophy of structured authoring down to plain, typeable symbols, accelerating the lineage of lightweight markup languages (AsciiDoc, reStructuredText, Org-mode). The articles on this site are themselves written in Markdown and submitted to WordPress as structure ── what the writer sees is text; what the reader sees is the typeset article. The same separation pattern.

Separating Drafting from Revising

Decomposing the writer’s labour one more level surfaces another separation: drafting vs revising.

Linda Flower, in her 1979 paper “Writer-Based Prose”[15], proposed that inexperienced writers fail to distinguish writer-based prose from reader-based prose and try to produce both at once ── unsuccessfully. Flower’s prescription is two-staged:

  1. First, draft writer-based: let what is being thought flow through unrevised
  2. Then, as a separate operation, translate to reader-based as the act of revising

In Shiffrin & Schneider terms, this temporally separates content generation from form adjustment. Run together, both interfere; the result is two low-quality outputs. The everyday observation that editing while writing degrades quality finds its theoretical articulation here.

Peter Elbow’s Writing Without Teachers (1973)[16] proposed freewriting along the same axis, and Anne Lamott’s Bird by Bird (1994)[17] famously gave us the encouragement of “shitty first drafts” ── do not edit the first draft as you write it; edit afterwards. Three different practitioners, in different decades, articulating the same proposition: do not interleave drafting and revising.

A Common Design Principle

These three separations ── capture / organisation, content / presentation, drafting / revising ── look like distinct concerns at the surface but share a single design principle:

| Separation | Thought side | Labour side |
| --- | --- | --- |
| Capture / organisation | Initial idea | Classification, rearrangement |
| Content / presentation | Argument, structure | Typography, placement, spacing |
| Drafting / revising | Content generation | Form adjustment, reader-orientation |

In each case the thought side demands continuous controlled processing; the labour side admits batching, deferral, and automation. Run concurrently, both come out lower-quality. Run with a separation, both improve.

The implication for tool design is direct: knowledge-production tools should not impose simultaneous processing of thought and labour on the writer. The boundary between thought and labour should be made explicit, and the writer should be able to consciously choose which mode is currently running.

This is exactly the matter of mediation as the foundational essay Relative, Not Deterministic uses the term[1] ── a tool, as the thing standing in between, prescribes where the writer’s attention goes, what is foregrounded and what recedes. A WYSIWYG editor foregrounds control over presentation at the cost of receding control over structure; a structured-authoring editor mediates the other way. Which mediation you choose is which attention economy you choose. Hold onto the single point ── the tool is not neutral ── and the question shifts from “is this convenient?” to “how do I want my thinking mediated?”


Read in 2026, the proposition has acquired a new register.

Modern AI writing assistants are designed to suggest completions, edits, and alternative phrasings in immediate response to the writer’s input. The convenience is real; the cost, in the Shiffrin & Schneider frame, is a new source of interference. Evaluation, selection, and acceptance/rejection decisions are now interleaved continuously into the flow of thought. This reproduces, at a different layer, the structural form of WYSIWYG layout interruption.

The problem is not that AI is bad, nor that completion suggestions are bad. The problem is a design that runs thinking and evaluation in continuous parallel. The writer becomes habituated to the interleaving and stops noticing that the boundary between I am thinking and I am evaluating has been dissolved.

The next generation of knowledge-production tools probably needs explicit, switchable boundaries for AI generation and evaluation features. The writer enters a thinking mode with evaluation off, and a revising mode with evaluation on, with conscious gear-changes between them. This sits in the same design lineage as Vim’s modal editing (insert vs normal mode) and the IDE convention of explicitly distinguishing running from paused during a debug session.
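A toy sketch of such an explicit boundary (a hypothetical design, not any shipping editor’s API): suggestions can only appear in revising mode, and the mode switch is a deliberate act rather than an ambient default.

```python
from enum import Enum

class Mode(Enum):
    THINKING = "thinking"   # AI evaluation off: nothing interleaves with drafting
    REVISING = "revising"   # AI evaluation on: suggestions are welcome here

class Editor:
    """Toy model of a modal AI writing surface, in the spirit of
    Vim's insert/normal distinction."""

    def __init__(self):
        self.mode = Mode.THINKING
        self.buffer: list[str] = []
        self.suggestions: list[str] = []

    def switch(self, mode: Mode) -> None:
        # The gear-change is conscious and explicit -- never implicit.
        self.mode = mode
        self.suggestions.clear()

    def type_text(self, text: str) -> None:
        self.buffer.append(text)
        if self.mode is Mode.REVISING:
            # Evaluation runs only in revising mode; thinking mode
            # never injects accept/reject decisions into the flow.
            self.suggestions.append(f"consider rephrasing: {text!r}")

ed = Editor()
ed.type_text("first draft, unevaluated")
ed.switch(Mode.REVISING)
ed.type_text("second pass")
```

The design choice worth noticing is that the interference source is gated by state the writer controls, not by a setting buried in preferences: the boundary between “I am thinking” and “I am evaluating” is reified as `Mode`.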

This is the systems-track lineage on the dlab site (the Japanese companion essays are linked below; English versions are forthcoming). 書くことは考えることか — 思考と道具の境界 (Writing as Thinking) traced the genealogy of thought-constituting tools (Umesao, Feynman, Clark & Chalmers, Shiffrin & Schneider). LuaTeX を Docker でリモート実行する (Running LuaTeX in Docker) and 記録を標準化する (Standardising Records) treat the assembly of those thinking environments as computational infrastructure. The present article belongs to the same lineage and confirms that a single proposition ── separate thought from labour ── runs through cognitive science, typesetting, and writing theory together. All of these are domain-specific concretions of the stance declared in Relative, Not Deterministic: treat the tool as a mediator, not as the correct procedure.

Choosing one’s tools is, ultimately, choosing one’s attention economy ── that is, choosing which mediation to take on. Whether to keep the continuity of thinking with tools designed for it, or to live with tools that quietly braid thought and labour together, comes back to the writing ── and, over a long arc, to the quality of intellectual production itself.

References


  1. For the operational definitions and the theoretical treatment of mediation / différance / the V&V asymmetry, see the footnote in the foundational essay Relative, Not Deterministic, and the author’s Zenodo preprint series (Letter version DOI: 10.5281/zenodo.20096463). 

  2. Broadbent, Donald E. Perception and Communication. Pergamon Press, 1958. The classic statement of the filter theory of attention. 

  3. Kahneman, Daniel. Attention and Effort. Prentice-Hall, 1973. Capacity theory of attention as a finite resource pool. 

  4. Pashler, Harold. “Dual-Task Interference in Simple Tasks: Data and Theory.” Psychological Bulletin, vol. 116, no. 2, 1994, pp. 220–244. 

  5. Shiffrin, R. M. and W. Schneider. “Controlled and Automatic Human Information Processing: II. Perceptual Learning, Automatic Attending, and a General Theory.” Psychological Review, vol. 84, no. 2, 1977, pp. 127–190. 

  6. 梅棹忠夫『知的生産の技術』岩波新書、1969 年 (Umesao Tadao, The Technique of Intellectual Production, Iwanami Shinsho, 1969). 

  7. Luhmann, Niklas. “Kommunikation mit Zettelkästen. Ein Erfahrungsbericht.” In Universität als Milieu: Kleine Schriften, edited by André Kieserling, Haux, 1992, pp. 53–61. Originally written 1981. Luhmann’s own systematic account of his Zettelkasten practice. 

  8. Schmidt, Johannes F. K. “Niklas Luhmann’s Card Index: Thinking Tool, Communicative Partner, Publication Machine.” In Forgetting Machines: Knowledge Management Evolution in Early Modern Europe, edited by Alberto Cevolini, Brill, 2018, pp. 289–311. 

  9. Ahrens, Sönke. How to Take Smart Notes: One Simple Technique to Boost Writing, Learning and Thinking. CreateSpace, 2017. 

  10. Allen, David. Getting Things Done: The Art of Stress-Free Productivity. Viking, 2001. 

  11. Knuth, Donald E. The TeXbook. Addison-Wesley, 1984. 

  12. Lamport, Leslie. LaTeX: A Document Preparation System. Addison-Wesley, 1986. 

  13. Cottrell, Allin. “Word Processors: Stupid and Inefficient.” 1999. Wake Forest University. https://users.phhp.ufl.edu/rlh/idh2931/wordprocessors.html 

  14. Gruber, John. “Markdown.” Daring Fireball, 2004. https://daringfireball.net/projects/markdown/ 

  15. Flower, Linda. “Writer-Based Prose: A Cognitive Basis for Problems in Writing.” College English, vol. 41, no. 1, 1979, pp. 19–37. 

  16. Elbow, Peter. Writing Without Teachers. Oxford University Press, 1973. 

  17. Lamott, Anne. Bird by Bird: Some Instructions on Writing and Life. Pantheon, 1994.