Using LLMs at Oxide

Bryan Cantrill:

volunteering that an LLM has been used to generate work product is to implicitly distance oneself from the responsibility for the content

The argument here is that LLMs are useful insofar as they promote and reinforce your values. If they don’t do that, don’t use them.

we must be careful to not use LLMs in such a way as to undermine the trust that we have in one another

Extra credit: writing is a vessel for establishing trust. LLMs undermine that.

Finally, LLM-generated prose undermines a social contract of sorts: absent LLMs, it is presumed that of the reader and the writer, it is the writer who has undertaken the greater intellectual exertion. (That is, it is more work to write than to read!) For the reader, this is important: should they struggle with an idea, they can reasonably assume that the writer themselves understands it — and the least a reader can do is labor to make sense of it.

If, however, prose is LLM-generated, this social contract is ripped up: a reader cannot assume that the writer understands their ideas, because the writer may not have so much as read the output of the LLM they tasked with writing it. If one is lucky, the resulting errors are LLM hallucinations: obviously wrong and quickly discarded. If one is unlucky, however, the result is a kind of LLM-induced cognitive dissonance: a puzzle in which the pieces don’t fit because there is in fact no puzzle at all. This can leave a reader frustrated: why should they spend more time reading prose than the writer spent writing it?