Coding with AI from the phone

March 2026

This blog was built on a phone. Not prototyped, not sketched out — actually built. The repository cloned, the files written, the commit pushed, all from a screen I normally use to check the weather and argue about films.

I had Claude Code open in a browser session. I typed what I wanted in plain English: clone the repo, create this folder structure, make the homepage look like this, use these fonts, these colours. It did it. I reviewed the output, said yes or not quite, and we moved forward.

The constraint wasn't the AI. It was the keyboard.

Typing long commands on a phone is miserable. But typing sentences is fine — it's what phones are for. Describing intent in prose rather than syntax turns out to suit the medium well. I wasn't writing code. I was directing it.

What the process looked like

I started with a private GitHub repo and a rough idea: a static magazine blog, no framework, plain HTML and CSS, deployed through Cloudflare Pages. I told Claude that. It asked a few clarifying questions, then got to work.

The first pass produced an index.html with the layout, typography, and colour palette I'd described. I asked for adjustments — tighten the line height, make the heading larger on mobile, add a footer with specific links. Each round-trip was a sentence or two from me and a file edit from Claude.

When I wanted to lock in the conventions so future sessions wouldn't drift, I asked it to write an agents.md — a file that tells any AI agent working in this repo what the rules are. Don't change the font stack. Don't introduce frameworks. Always push to main. It wrote that too.
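I haven't reproduced the real file here, but an agents.md in that spirit can be very short — a sketch of what mine roughly says, with illustrative wording rather than the exact contents:

```markdown
# agents.md

Rules for any AI agent working in this repository:

- This is a static site: plain HTML and CSS only. No frameworks, no build step.
- Do not change the font stack or the colour palette.
- Keep the magazine layout conventions already in index.html.
- Commit with a short descriptive message and push directly to main.
```

The value isn't the format — it's that the rules live in the repo, so a fresh session starts with the same constraints instead of rediscovering them.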

What surprised me

The thing I expected to be hard — the coding — was easy. The thing I didn't expect to matter — how I phrased things — turned out to be everything. Vague instructions produced technically correct but tonally wrong results. Specific ones, even short ones, landed well.

"Magazine-style" meant nothing on its own. "Centred layout, 720px max-width, DM Serif Display for headings, warm off-white background, burnt orange accent" meant something. The gap between those two descriptions is just thinking clearly about what you want.
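That second description translates almost line for line into CSS. A rough sketch — the hex values are my guesses at "warm off-white" and "burnt orange", not the site's actual palette:

```css
/* Centred magazine column, per the description above */
body {
  max-width: 720px;
  margin: 0 auto;
  background: #faf6ef;   /* warm off-white (illustrative value) */
}

h1, h2, h3 {
  font-family: "DM Serif Display", serif;
}

a {
  color: #c2531b;        /* burnt orange accent (illustrative value) */
}
```

A description that specific is already most of the stylesheet; the AI just has to transcribe it.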

AI doesn't lower the bar for taste. It just removes the excuse that you couldn't execute it.

If you know what you want and can say it plainly, the distance between intention and result collapses. If you don't know what you want, you'll get something — but it won't feel like yours.

On doing this from a phone specifically

There's something useful about the constraint. A laptop invites tinkering — opening another tab, pulling up the inspector, fiddling with a pixel here and there. A phone doesn't. You describe, you review, you decide. The feedback loop is coarser but faster.

I also found myself writing more carefully, because retyping is painful. Short, precise instructions. No rambling. That discipline carried through into the output.

The result is this site. It took less time than I expected and more thought than I assumed. Which is, I think, how it should work.