TUIs Are an Abomination
The terminal UI is back, and people are acting like this is progress.
Claude Code, Codex-style workflows, agent shells, prompt consoles, chat panes full of tool calls and logs: this is increasingly being sold as the natural interface for AI. It is not. It is a temporary hack that the industry is flattering into a philosophy.
Terminal UIs are not the future of AI. They are what happens when engineers build a powerful new engine and then wrap it in the fastest interface they know how to ship.
And if we are serious about AI adoption beyond engineers entertaining other engineers, we should say it clearly: TUIs are an abomination.
Not because terminals are useless. They are incredibly useful. For deterministic systems. For expert operators. For tools where commands and outputs map cleanly to user intent.
But AI systems are not grep. They are not Git. They are not a compiler.
They are probabilistic, stateful, asynchronous systems that inspect files, call tools, retry, branch, maintain context, and increasingly act over time. Compressing all of that into a terminal transcript or a chat log is not elegant minimalism. It is interface failure.
We are mistaking engineering convenience for design truth
The reason chatbots and terminals dominate AI is not that they are the best interfaces. It is that they are the cheapest.
If your model emits text, the fastest wrapper is a text box. If your agent produces tool calls and logs, the fastest wrapper is a stream. If your early adopters are developers, the safest aesthetic is a terminal.
So that is what gets built.
Then, because it shipped first, people start mistaking it for the natural shape of the category. A shortcut hardens into doctrine. A stopgap becomes dogma.
Suddenly we are told that chat is the native interface for AI. That terminals are superior because they “stay out of the way.” That raw logs equal transparency.
No. They are simply the easiest thing to expose.
This is builder capture. Engineers have wrapped AI in the interfaces they themselves are comfortable with, and then acted as though everyone else should adapt.
That is not product design. That is gatekeeping with better branding.
Chat worked for single-shot generation. That world is over.
When AI was mainly used to generate a paragraph, rewrite an email, summarise a document, or spit out a function, chat was fine.
Prompt in. Answer out.
But AI is no longer a single-shot generator. It is becoming a process engine.
It searches, edits, retries, plans, calls tools, spawns subtasks, remembers state, evaluates outputs, and loops through decisions over time.
That changes everything.
A conversation is linear. Agentic work is not.
A chat transcript is chronological. Real work is causal.
A terminal scroll shows what happened in order. Users need to understand what happened because of what.
That is the break. Once AI became multi-step, stateful, and asynchronous, chat stopped being a natural interface and became a lossy compression format for a much richer system.
The problem is cognitive load
A transcript is not the same as an interface.
Human-computer interaction has known for decades that good interfaces reduce cognitive load by making state visible, supporting recognition over recall, and letting users manipulate structures directly instead of reconstructing them from memory.
Chatbots and TUIs do the opposite when the task becomes complex.
They force users to remember:
- what the system already tried
- which instruction caused which change
- what failed and why
- which outputs are stable versus provisional
- where a bad assumption entered the chain
- what can be safely replayed
- what depends on what
A long transcript is not transparency. It is a tax on memory.
Showing me 1,500 lines of agent logs is not a gift. It is the UX equivalent of dumping engine oil on the dashboard and claiming I now have better observability.
Sequence is not comprehension. Verbosity is not clarity. Exhaust is not explanation.
Code editing makes the failure obvious
Nowhere is this more absurd than in AI coding tools.
An agent reads files, edits multiple modules, runs tests, retries failed approaches, introduces regressions, repairs them, changes strategy halfway through, and gradually builds up thousands of lines of code.
And how is all of that usually surfaced?
A terminal transcript. A chat narration. A diff.
Maybe, if you are lucky, a list of touched files.
This is insane.
We are taking graph-shaped work and forcing it through interfaces designed for conversation.
The right interface for agentic coding is not “a better chat window.” It is something like a temporal change graph.
Imagine the agent’s work surfaced as a visible graph pinned directly to the codebase:
- nodes for edits, tests, retries, plans, and decisions
- branches for alternate approaches
- annotations showing which instruction triggered what
- visible dependencies between upstream assumptions and downstream changes
- the ability to fork, replay, replace, or suppress specific branches without losing unrelated work
Now the user is no longer reading a monologue. They are navigating a structure.
That matters because graph-shaped interfaces create leverage. We already know this from Git commit graphs, CI/CD DAGs, data lineage systems, notebook execution graphs, and event-sourced architectures. These systems become powerful because they make causality visible.
AI interfaces today do the opposite. More autonomy, less inspectability. More capability, less intelligibility.
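To make the graph idea concrete, here is a minimal sketch of a change graph as a data structure. Everything in it, the node kinds, field names, and the suppress operation, is an illustrative assumption, not any shipping tool's API: just enough to show that causality queries and branch-level intervention fall out of the structure almost for free.

```python
# A minimal sketch of a "temporal change graph" for agent work.
# Parents encode causes, not chronology.
from dataclasses import dataclass, field

@dataclass
class Node:
    id: str
    kind: str                # "edit" | "test" | "retry" | "plan" | "decision"
    instruction: str         # which user instruction triggered this work
    parents: list = field(default_factory=list)  # upstream causes
    suppressed: bool = False

class ChangeGraph:
    def __init__(self):
        self.nodes: dict[str, Node] = {}

    def add(self, node: Node):
        self.nodes[node.id] = node

    def causes_of(self, node_id: str) -> set:
        """Walk upstream: everything this change depends on."""
        seen, stack = set(), [node_id]
        while stack:
            current = self.nodes[stack.pop()]
            for parent in current.parents:
                if parent not in seen:
                    seen.add(parent)
                    stack.append(parent)
        return seen

    def suppress_branch(self, root_id: str):
        """Hide one approach and everything downstream of it,
        leaving unrelated work untouched."""
        for node in self.nodes.values():
            if node.id == root_id or root_id in self.causes_of(node.id):
                node.suppressed = True

# One instruction's work forms a causal chain; unrelated work stays apart.
g = ChangeGraph()
g.add(Node("plan1", "plan", "add caching"))
g.add(Node("edit1", "edit", "add caching", parents=["plan1"]))
g.add(Node("test1", "test", "add caching", parents=["edit1"]))
g.add(Node("edit2", "edit", "fix typo"))  # unrelated branch

g.suppress_branch("edit1")  # kills edit1 and test1; edit2 is untouched
```

A flat transcript can only answer "what happened at step N". This structure answers "what caused this" and "what breaks if I remove it", which is the whole argument.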
The terminal is the wrong metaphor
The terminal is a superb interface for issuing commands to tools.
It is a terrible default interface for supervising semi-autonomous processes.
Those are different jobs.
Once a system becomes probabilistic, branching, revisable, and long-running, the user no longer just needs input and output. They need:
- state
- provenance
- dependency visibility
- branch comparison
- safe intervention points
- rollback
- replay
- scoped detail
A terminal gives you almost none of this cleanly. It gives you a river of text and asks you to pretend that chronology is understanding.
It is not.
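For a concrete sense of what rollback and replay buy you, here is a minimal sketch in the spirit of the event-sourced architectures mentioned earlier: the transcript becomes an append-only log, state is derived by folding over it, and rolling back is just replaying a shorter prefix. The event shapes and actions are illustrative assumptions.

```python
# A minimal event-sourcing sketch: state is a fold over the log,
# so rollback and replay come for free. Action names are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    step: int
    action: str      # e.g. "edit" or "revert"
    target: str      # e.g. a file path

def replay(events, upto: int) -> dict:
    """Rebuild workspace state from the log, stopping after step `upto`.
    Rolling back to step N is just replay(events, N)."""
    state = {}
    for e in events:
        if e.step > upto:
            break
        if e.action == "edit":
            state[e.target] = f"edited@{e.step}"
        elif e.action == "revert":
            state.pop(e.target, None)
    return state

log = [
    Event(1, "edit", "a.py"),
    Event(2, "edit", "b.py"),
    Event(3, "revert", "a.py"),
]
# replay(log, 2) shows the workspace as it was before the revert;
# replay(log, 3) shows it after. Same log, different vantage points.
```

A terminal shows you only the final scroll. Deriving state from the log gives you every intermediate state, safe intervention points, and reproducibility, which is exactly the list above.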
This will inhibit AI adoption
This is not just an aesthetic complaint. It is an adoption problem.
If using AI means becoming a prompt babysitter, transcript archaeologist, and terminal operator, adoption will plateau. Not because the models are weak, but because the interaction cost is ridiculous.
Most people do not want to operate the machine. They want the outcome.
Most businesses do not want a clever shell. They want:
- auditability
- visibility
- approvals
- rollback
- reproducibility
- governance
- workflow-level control
Most creative users do not want to interrogate a stream of tool calls. They want canvases, timelines, layers, and variants.
Most knowledge workers do not want to manage a branching task through a chatbot. They want structures that capture assumptions, alternatives, and consequences.
In other words: AI adoption stalls when the interface forces users to think like operators of the model instead of beneficiaries of the system.
Engineers see the terminal and think power. Everyone else sees the terminal and thinks homework.
Separate the engine from the UX
This is the shift the industry needs.
The model is not the product. The agent loop is not the interface. The transcript is not the experience.
We need to separate the engines from the UX so that interface design can evolve independently.
Different kinds of AI work will need different kinds of surfaces:
- code needs provenance, graphs, symbol-aware history, and replay
- research needs claim maps, evidence chains, and competing hypotheses
- operations needs queues, approvals, exceptions, and intervention points
- creative work needs branching artefacts, canvases, and layered state
None of these are “just chat.” None of these are improved by terminal theatre.
The winners in AI will not simply build better models. They will build better instruments for thought.
The terminal revival is a failure of imagination
The return of the terminal as AI’s prestige interface is not evidence of maturity. It is evidence that the industry has, temporarily, run out of imagination.
It is what happens when builders confuse the easiest thing to expose with the right thing to design.
So yes, TUIs are an abomination.
Not because text is bad. Not because terminals should disappear. Not because engineers are evil.
They are an abomination because they are being elevated far beyond their proper role. They are scaffolding pretending to be architecture. A compatibility layer being sold as a civilisation layer.
Chat got us through the demo era. The terminal got us through the operator era.
But if AI is going to become a real medium for work, thought, and creation, we need to stop worshipping the shell and start designing for the actual shape of the work.
That is where UX for AI has to go next.