
AI, Medicine, and Xenocomplexity: Beyond Human-Understandable Data

Medical applications of Artificial Intelligence (AI) and Deep Learning (DL) have drawn considerable attention from investors, the media, and the public at large. Whether helping us better interpret radiological images, identify potentially harmful drug interactions, discover new therapeutic targets, or simply organize medical information, AI and DL are beginning to affect real care given to real patients. Because these systems learn by examining data, rather than being programmed to follow specific logic, it is often challenging to understand why they make the decisions they do. Furthermore, recent results suggest the role of AI in medicine is entering a new phase, in which AI enables us to do more than merely solve known problems in an automated way. AI is beginning to solve “xenocomplex” problems: problems so difficult that humans struggle to identify a solution, or even to fully articulate the problem itself. To translate these capabilities into better outcomes, we must recognize and come to terms with the ways in which AI may be able to solve problems we don’t fully understand.
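The distinction between learning from data and following programmed logic can be made concrete with a toy sketch. The snippet below (purely illustrative; the data and the `train_perceptron` helper are invented for this example, not taken from any medical system) trains a single perceptron on labeled examples. Note that nowhere do we write the decision rule itself; the model infers it from the examples, which is exactly why explaining *why* a trained model decides as it does can be hard.

```python
# A minimal sketch of "learning from data": the perceptron is never told
# the rule; it adjusts its weights by error correction on labeled examples.
# All names and data here are hypothetical and for illustration only.

def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights and a bias from (features, label) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred  # -1, 0, or +1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Toy "dataset": flag only the cases where both markers are elevated.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data]
print(preds)  # the learned weights reproduce the labels: [0, 0, 0, 1]
```

Even in this two-weight example, the "explanation" of a prediction is just a set of learned numbers; scale that up to millions of weights in a deep network and the interpretability problem discussed above becomes apparent.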

Primer: What Is Deep Learning?

Xenocomplexity and Challenging Domains

Xenocomplexity, AI, and Medicine

Should We Let AI Diagnose Xenocomplex Medical Issues?

