Recently, as anyone who has managed to find this post is likely to know, a Google engineer was placed on leave after raising concerns that the company may have created a sentient artificial intelligence (AI) called LaMDA (Language Model for Dialogue Applications). This story made waves in the popular press, with many people outside the field wondering if we had at last created the sci-fi holy grail of AI: a living machine. Meanwhile, those involved in cognitive science or AI research were quick to point out the myriad ways LaMDA fell short of what we might call “sentience.” Eventually, enough popular press articles came out to firmly establish that anyone who thought there might be a modicum of sentience in LaMDA was a fool, the victim of their own credulity for wanting to anthropomorphize anything capable of language. That previous sentence has 23 hyperlinks to different articles with various angles describing why having a sentience conversation
Medical applications for Artificial Intelligence (AI) and Deep Learning (DL) have drawn a lot of attention from investors, the media, and the public at large. Whether helping us better interpret radiological images, identify potentially harmful drug interactions, discover new therapeutic targets, or simply organize medical information, AI and DL are beginning to impact real care given to real patients. Because these systems learn from examining data, rather than being programmed to follow specific logic, it is often challenging to understand why they make the decisions they do. Furthermore, recent results suggest the role of AI in medicine is transitioning to a new phase, in which AI enables us to do more than merely solve known problems in an automated way. AI is enabling us to solve “xenocomplex” problems: problems so difficult that humans have a hard time identifying a solution, or even fully articulating the problem itself. To fully translate these capabilities into better o