When Novo Nordisk was developing its digital concierge chatbot Sophia, it noticed something: late-night spikes in online visits to Novo’s Cornerstones for Care website.
The team realized that behind those 11 p.m.-to-2 a.m. spikes were people who finally had time for self-care. Maybe they could focus at last after putting the kids to bed, or maybe they were up late worrying about a recent diagnosis.
The result for Novo? A chatbot meant to create a more empathetic and human experience outside call center hours, while also helping people get information quickly, Amy West, Novo Nordisk’s head of U.S. digital strategy, said during Fierce AI Week’s recent pharma marketing session.
Novo’s chatbot has become a model within the industry since the company debuted it back in 2018. Part of the reason Sophia gets good reviews is timing and relevance, West said.
“You’re introducing a technology where the person already is and based on what they’re trying to do. It’s a seamless integration to the existing contextual experience,” she said.
But while many companies may be looking to follow in Novo’s footsteps, not everyone understands what’s needed to pull off a chatbot successfully, Brendan Gallagher, Publicis Health’s chief connected health officer and another panel member, said during the event.
“It depends on the organization’s commitment to it,” he said. “Novo’s commitment to Sophia has been admirable. But we see a lot of brands trying to do one-offs on their own—and they’re not comfortable actually deploying an algorithm that’s willing to learn, or even comfortable collecting the data this platform is capable of collecting.”
But how do bots, when seeking to drive a more human experience, avoid bias—gender, cultural, language and otherwise—that could alienate patients?
Bias has been recognized as a factor for years, West said, dating back to the days when mostly male programmers wrote software for audiences that included women. On the language side, Sophia is available in Spanish, in a version built from the ground up for Spanish-preferring audiences.
As Gallagher noted, bias is actually a problem AI can help solve. Ideally, the bias can be programmed out when developing the AI by getting it to recognize a lack of diversity.
“The thing we can do as we’re building these is build in mechanisms so that it recognizes it’s not seeing enough language variation, or it’s not seeing enough geographic differentiation in terms of where it’s getting its data,” he said. “… Make sure to build in that awareness of data diversity.”
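The "awareness of data diversity" Gallagher describes can be sketched as a simple audit step run over training data before (or while) a model learns from it. This is only an illustrative sketch, not Novo's or Publicis Health's actual tooling; the record format, field names and 5% threshold are all hypothetical.

```python
from collections import Counter

def diversity_report(records, field, min_share=0.05):
    """Flag categories whose share of the training data falls below
    min_share, signaling the model is not seeing enough variation
    along this dimension (e.g. language or geography).

    Hypothetical helper for illustration only.
    """
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {cat: n / total for cat, n in counts.items()
            if n / total < min_share}

# Hypothetical training records labeled with language and region.
records = (
    [{"lang": "en", "region": "northeast"}] * 90
    + [{"lang": "es", "region": "southwest"}] * 3
    + [{"lang": "en", "region": "midwest"}] * 7
)

# Spanish makes up only 3% of the data, so it gets flagged.
print(diversity_report(records, "lang"))
```

A check like this could run on a schedule as new conversation data arrives, alerting the team when a language or region drifts below its threshold rather than waiting for biased behavior to surface in production.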