AI hallucinations have been causing real problems in design research workflows, and I don’t see enough discussion about them in this community.
Two specific examples from my own recent work:
Asked an AI assistant for statistics on accessibility adoption rates in digital product design. Got confident numbers with plausible-sounding source citations. Spent 30 minutes tracking down those sources. Two of them didn’t exist. One existed but the statistic cited was not in it.
Asked for examples of brands that had successfully used a specific design approach. Got a mix of real and fabricated case studies. The fabricated ones were indistinguishable from the real ones in how they were presented.
The problem is the confidence. Wrong information delivered tentatively is catchable. Wrong information delivered with the same authority as correct information is dangerous, especially when you’re presenting research to clients or in academic contexts.
My current rule: never use AI-generated statistics or case studies without primary source verification. AI is good for synthesis of known information. It’s unreliable for specific claims.
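One thing that can help with the fabricated-citation case: if the tool gives you DOIs, you can batch-check whether they even exist before spending any reading time. This is only a first pass, it won’t catch a real paper being misquoted (my third source above), and it’s a minimal sketch assuming Python with the requests library and the public Crossref API. The DOIs below are placeholders, not real citations.

```python
import requests

def doi_exists(doi: str, timeout: float = 10.0) -> bool:
    """Return True if the DOI is registered with Crossref.

    Note: DOIs registered with other agencies (e.g. DataCite) won't show up here.
    """
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=timeout)
    return resp.status_code == 200

# Placeholder DOIs standing in for citations pulled from an AI-generated answer
claimed_dois = ["10.1000/placeholder-1", "10.1000/placeholder-2"]

for doi in claimed_dois:
    status = "registered" if doi_exists(doi) else "not found, flag for manual check"
    print(f"{doi}: {status}")
```

Even when a DOI resolves, I’d still open the source and check the actual claim; this only filters out citations that were never real to begin with.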
How are others handling this in their research workflows?