As any AI researcher can tell you, this is utter nonsense. Humans have no idea how the human brain, or any other brain, works, so we can hardly teach a machine how brains work. At best, Google is programming (not teaching) a computer to mimic human conversation under highly constrained circumstances. And the methods used have nothing to do with true cognition.
AI hype aimed at the public has grown progressively more strident in recent years, misleading laypeople into believing researchers are much further along than they really are — by orders of magnitude. I'd love to see legitimate AI researchers condemn this kind of hucksterism.
The National Institute of Standards and Technology (NIST) Tattoo Recognition Technology Challenge Workshop challenged industry and academia to work toward developing automated image-based tattoo matching technology. Participating organizations used an FBI-supplied dataset of thousands of tattoo images drawn from government databases. They were challenged to develop methods for detecting a tattoo in an image; identifying visually similar or related tattoos from different subjects; matching the same tattoo on the same subject over time; locating a small region of interest contained in a larger image; and identifying a tattoo from a visually similar image, such as a sketch or scanned print.
He shared a number of the bizarre "cards" his program had come up with, replete with their properly fantastical names ("Shring the Artist," "Mided Hied Parira's Scepter") and freshly invented abilities ("fuseback"). Players devoured—and cheered—the results.