
An LLM is not a Rubber Duck

This originally began as one lengthy post, but I’ve broken it into multiple, more digestible posts. This is part 7.


Harvard’s CS50 course is widely lauded as a great intro for nascent devs, and its team has created a companion LLM product for it called CS50.ai, which is presented as a tool for “rubber duck debugging.”

This may seem superficially correct, but if you have ever done actual rubber duck debugging, you might be able to detect why it is NOT that. From the Wikipedia article, which I think gives a great summation:

Rubber duck debugging (or rubberducking) is a debugging technique in software engineering. A programmer explains their code, step by step, in natural language—either aloud or in writing—to reveal mistakes and misunderstandings.

More specifically:

Programmers often discover solutions while explaining a problem to someone else, even to people with no programming knowledge. Describing the code, and comparing to what it actually does, exposes inconsistencies. Explaining a subject also forces the programmer to look at it from new perspectives and can provide a deeper understanding. The programmer explaining their solution to an inanimate object (such as a rubber duck) has the advantage of not requiring another human, but also works better than thinking aloud without an audience.

I want to specifically call attention to:

  • “explaining to an inanimate object”
  • “describing the code … exposes inconsistencies”

A critical aspect of this is that the duck does not speak back to you!

Using an LLM as a pair-programming partner is another topic entirely, but for this case? Not a rubber duck.

Why is Rubber Ducking helpful?

The benefit of rubber ducking comes from three steps:

  1. By presuming no knowledge on the listener’s part, you are forced to distill your understanding into real words that can be verbalized, forcing abstractions to become concrete.
  2. By receiving no feedback, you are also forced to consider how that information is received by the listener, acting as their proxy, which means you are now re-ingesting your own understanding of the problem, creating a feedback loop.
  3. By re-ingesting the information in that feedback loop, you then do the comparison yourself, identifying gaps and inconsistencies in your understanding and driving yourself towards the solution (a small worked example follows this list).
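
To make that comparison step concrete, here is a minimal, hypothetical Python sketch (the function name and the bug are invented for illustration). The duck script is “this returns the average of the last n readings”; narrating that against the code is what exposes the inconsistency:

```python
def rolling_average(readings: list[float], n: int) -> float:
    """Return the average of the last n readings."""
    # Duck script: "I take the last n readings and average them."
    # But this slice takes the FIRST n readings -- the words and the
    # code disagree, and that gap is what the feedback loop surfaces.
    window = readings[:n]  # the fix, once noticed: readings[-n:]
    return sum(window) / len(window)
```

No listener supplied that fix; comparing what I said the code does against what it actually does did.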

This is similar to Jungian psychological principles about mental pluralism or the cognitive “senate” in parts work. You gain the most benefit by embodying all the roles.

When using an LLM, you perform only the first step (expressing the problem in plain language) and skip the latter two (receiving and synthesizing that information yourself).

I completely understand why someone might think that using an LLM as a rubber duck is an improvement – again, superficially it seems like “wouldn’t it be better if the duck could give you feedback?” But you will become stronger and more competent by embodying that full feedback loop yourself.