This is actually one of my particular shibboleths: the claim that the advancement of mechanistic machine learning leads to things "no human could ever understand".
There's a difference between the products of a process being unexpected and the products not being understandable. Take Google's neural-net-generated "slug-dog" pictures. Certainly, the researchers wouldn't have guessed that outcome in advance, but given that outcome, understanding it at both a gestalt level ("there were a lot of pugs in the training set") and at a technical, specific level (the cumulative decision functions involved) isn't even the hard part.
Broadly speaking, I think this cliche is related to a very common (and, if some primate studies are right, possibly innate) flaw in intuitive estimates of probability and its consequences. For example, any particular ordering of a deck of cards(*) has an equal and astronomically low probability of occurring, and you can't predict it in advance. But once you shuffle the deck, you hold one of those orderings, and it's no more alien than any other result (the quick sketch below puts numbers on just how low that probability is).
(*) Or a position in chess, Go, etc. Or particular formations of sentences, word choices, and so on.
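To put "astronomically low" in concrete terms, here's a quick back-of-the-envelope in Python; the 52! arithmetic is standard combinatorics, not anything from the original comment:

    import math
    import random

    # Number of distinct orderings of a standard 52-card deck: 52!
    orderings = math.factorial(52)
    print(f"52! = {orderings:.3e}")  # ~8.066e+67 possible orderings

    # A fair shuffle assigns every one of those orderings equal probability
    print(f"P(any specific ordering) = {1 / orderings:.3e}")  # ~1.240e-68

    # Yet every shuffle trivially produces one of them:
    deck = list(range(52))
    random.shuffle(deck)  # this exact sequence had probability ~1.24e-68

Whatever sequence you just dealt was, in advance, about as improbable as an event can get; actually holding it in your hand doesn't make it mysterious.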