4 Comments
Nick:

How do you think about Simon Willison's definition of an "agent" as just "tools in a loop with a goal"? His framing, in essence, is that an "agent" isn't a digital employee; it's a control loop. On that basis, do we actually have examples of autonomous agents converting labour budgets into high-margin software revenue, and is it realistic to expect this to happen in a broad and meaningful sense? Or will it simply be part of the ongoing extension of software development automation — removing some routine, clearly defined tasks for the average developer, but not actually replacing labour or creating a material shift in capital allocation for most companies?

Will S Johnston:

Yesterday I spent an hour using Gemini (any LLM would have worked) to draft a business plan and slide deck. It's amazing to me how many times it produced errors that I could not get it to correct. I use Gemini every day for problem solving and answering questions, but it still shocks me that serious problems emerge once complexity and iteration are involved. I had the same problem last year when I was using Claude for coding: it could generate code for a given problem, but as soon as I started iterating it would fail, and fail, and then fail again. I thought this would have been addressed by now. Am I alone?

Marshall Kirkpatrick:

It will be interesting to see whether expanding from "humans in the loop" to "computers in the group," à la ChatGPT's group chat, affects enterprise adoption and productivity. It seems possible that will complicate the firm/individual binary of the analysis.

Terry Cook:

Regarding your comment on diffusion models: you state that you're using 6% as the crossover from early adopters to the early majority. Could you explain your reasoning for this 6% figure?
