Software That Knows You

The other day a friend asked me to define AI in as few words as possible.

I blurted out, "AI is a subset of software that can think, reason, and learn."

There are probably a few issues with this definition, but I'm ok with it. 

Another way of describing AI is to call it software that can make its own mistakes. Traditional software only makes a mistake when the person coding it makes a mistake. That's not AI. When the AI runs with something on its own, it becomes much more powerful, and mistakes become more likely. Ironically, there’s great power that comes from the ability to make mistakes.

Microsoft Excel doesn't make mistakes. It's not AI. It doesn't look at your formula and give you its best guess. It either calculates it correctly or it gives you an error message.

With some web services, this distinction isn't so clear. Take recommendation engines as an example. You might think that the videos YouTube recommends to you are coming from some magical AI, but they're not. Benedict Evans made a relevant point about this in his newsletter this week:

"YouTube never knew what was in the video, Instagram didn’t know what was in the picture, and Amazon didn’t know what the SKU was: they each had metadata written by people, and they can look at the social graph around them (“people who liked this liked that”), but they can’t look at the thing itself."

Said simply, YouTube doesn't know you, and it hasn't watched the video it's recommending. It's just matching the tags on videos you've watched against the tags on videos watched by people with similar viewing histories, and tying those matches to your account. You might not like the video it recommends, but that's not AI thinking and making a mistake. That's an engineer writing an imperfect algorithm. Evans continues:

"How far do LLMs change this — how far do they mean that YouTube can watch all the videos and know what they are and why people watched, not which upload they watched, and Amazon can know what people bought, not what SKU they bought? And how does that change what we buy, and what gets created?" 
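To make the "tag matching, not understanding" point concrete, here's a minimal sketch of the kind of recommender described above. It never looks at the videos themselves, only at human-written tags; the video IDs and tags are invented for illustration, not anything YouTube actually uses:

```python
from collections import Counter

def recommend(watched, catalog, top_n=3):
    """Score each unwatched video by how many tags it shares
    with the videos the user has already watched."""
    seen_tags = Counter(tag for vid in watched for tag in catalog[vid])
    scores = {
        vid: sum(seen_tags[t] for t in tags)
        for vid, tags in catalog.items()
        if vid not in watched
    }
    # Highest tag overlap first; the content itself is never examined.
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

catalog = {
    "v1": {"cooking", "italian"},
    "v2": {"cooking", "baking"},
    "v3": {"travel", "italian"},
    "v4": {"gaming"},
}
print(recommend({"v1"}, catalog))  # videos sharing tags with v1 rank first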

The promise of LLMs is that they actually will start to know what they're recommending, which is a hard concept to get your head around, but it's particularly exciting for healthcare technology. Traditional clinical decision support (CDS) tools do something very similar to what YouTube does: they match what they know about you against patterns from patients with similar inputs. CDS tools don't know the patients they're supporting. They haven't "watched the video" of you.

The exciting part of AI in decision support is that an LLM can begin to know you really, really well. By incorporating all kinds of factors (your emails, texts, calendar, conversations with doctors, medical history, social and demographic attributes, wearable data, etc.), it can start to know who you are orders of magnitude better than your doctor could in a 15-minute visit, or even a dozen 15-minute visits. It can better assess your health, make more accurate recommendations, and, perhaps most importantly, pull the right highly customized levers to maximize positive behavior change.

The real innovation and step forward with LLMs in healthcare won't come from more software with more accurate algorithms and better tagging that recommends a better treatment. It'll come from the fact that the AI knows your whole story. It's like YouTube watching every video before making a recommendation. The potential is hard to fathom.