Thank you, Rosemary. I completely agree, Simon.
In MSK radiology, especially with unexpected findings, the risk is even higher. Subtle cases are often recognized correctly by only a few experienced radiologists. If flawed examples enter teaching sets or AI training data, the error doesn’t just persist – it amplifies.
And isn’t this a systemic problem? The new knowledge we gain every day – for example, here in OCAD – can’t be integrated into an FDA-certified “finished” AI model.
So while we keep learning and adapting, the model stays static, slowly drifting away from reality.
I have to admit that ChatGPT has refined and polished my text.
Volker
