Discussion about this post

GN

Always brilliant, Kendra. This is exactly what I find when pointing AI at code (as I'm doing right now because I have a cold and my partner has abandoned me for the day... humph).

Your experience was with a PRD, but the same pattern shows up in direct engineering work and in architecture solutions. It's the point about the output always sounding authoritative that's the key thing, and it's a bit scary.

Uncertain conclusions are presented with the same confidence as facts. Until the AI gets "better", or you have a second AI acting as a judge to flag where it might be making assumptions, the person working with the AI needs some relevant knowledge to spot when it's filling a gap with a plausible-sounding "guess".

It always came back to asking it questions because something had a "smell" that triggered me.

But the bits where it was good: taking a different git project, parsing it to understand the logic, and converting that into a Mermaid sequence diagram so I could understand and use the API correctly, and then feeding it my client API logs. That stuff is a lifesaver.
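To give a rough sense of that workflow, a sequence diagram generated from a parsed codebase might look something like the sketch below. The endpoints and participants here are hypothetical placeholders, not from any real project:

```mermaid
sequenceDiagram
    %% Hypothetical API flow reconstructed from another project's code
    participant Client
    participant API
    participant DB as Database
    Client->>API: POST /auth/token (credentials)
    API-->>Client: 200 OK (access token)
    Client->>API: GET /resource (Bearer token)
    API->>DB: query resource
    DB-->>API: rows
    API-->>Client: 200 OK (JSON payload)
```

Comparing a diagram like this against your own client's API logs makes it much easier to spot where your calls diverge from the flow the code actually expects.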


