
AI/HI Blending: Conversation Design

Meet Our Speaker

Celene Osiecka

Senior Director @[24]7.ai

Celene has been designing conversational interfaces using emerging technologies like chatbots, AI, natural language, speech recognition, and machine learning for the last fifteen years, leading over 500 conversational interface deployments in the financial, telecommunications, travel, retail, and education industries. With a background in psychology and HCI, she currently leads a team of conversational designers across three countries that designs innovative, ground-breaking conversational interfaces.

https://www.247.ai/

Event Q&A

Out of these scenarios, which one drives the most empathy? Or which have you seen come closest to the user and create the best experience? The second question is: when it comes to AI systems, what kinds of algorithms are you using for the bots you're building?

Celene: That's a good question. I can answer the second question first because it's easier: they're proprietary. In our company we built our own models; we have a data science team, and our products build some of it. They are statistical language-based models, and we also have rules-based models, depending on the technology. Right now we use all of our own in-house technology. We can integrate with other partners, like Watson, but we use all of our in-house stuff.
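
As a rough illustration of that mix of rules-based and statistical models, here is a minimal sketch in Python. The rules, intent names, and the model's predict() interface are all assumptions for illustration, not [24]7.ai's proprietary system.

```python
# Hypothetical sketch of a rules-plus-statistical hybrid: deterministic
# rules take precedence, and a trained statistical model is the fallback.
# The rules, intent names, and model interface are assumptions.
import re
from typing import Optional

RULES = [
    (re.compile(r"\b(reset|forgot)\b.*\b(password|pin)\b", re.I), "reset_password"),
    (re.compile(r"\bpay(ment)?\b.*\bbill\b", re.I), "pay_bill"),
]

def rule_based_intent(utterance: str) -> Optional[str]:
    for pattern, intent in RULES:
        if pattern.search(utterance):
            return intent
    return None

def classify(utterance: str, statistical_model) -> str:
    # Rules win when they fire; otherwise defer to the trained model
    # (e.g. a statistical language-model classifier).
    intent = rule_based_intent(utterance)
    if intent is not None:
        return intent
    return statistical_model.predict(utterance)
```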

And the other question is interesting, and something I didn't talk about that we also experimented with in the bot supervision model. On the empathy part, I think that one probably gives you the most empathy. The other ones give more of the better experience, but they're not trying to emulate a human as much; they're more trying to emulate the intelligence, not so much the empathy. We're not trying to replace empathy. But in the bot supervision model, instead of the supervisor just coming in and controlling the bot like a puppet in the back, they can actually intervene and write the bot responses, which is kind of cool, without the user necessarily knowing it's a human. So let's say the user is really mad, they're swearing, and we're all ready to escalate; that's the point where we would typically escalate today. To calm it down, there might be an opportunity for the supervisor to come in, actually write the bot response, and then send it. That's probably how you would design it.

But again, you have to be careful, because as soon as you start showing that the bot could be more human-like (and this is what our team discusses all the time), users are going to think it's a human and start to talk to it like a human. And we know there are challenges right now with people talking to bots exactly like humans. It's helpful for users to have the mental model that it's a bot, because then they kind of talk to it like it's a dumb thing, which helps our recognition. But as soon as it starts to seem intelligent, like a human rather than a bot, they're going to just dump their life story on you and expect you to figure it out. So it becomes more dependent on the human that way. But maybe for now, because our technology is a little lacking, we have to have a lot of supervision in the experiences. Again, the cool thing about it is that the bot can learn from all of the responses and the triggers that the humans are intervening with. So I imagine that dissipating over time: you're going to have a lot of heavy-handed human intervention at first, but eventually, maybe in a year or two after you optimize it, it's going to be automated.
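
A minimal sketch of that supervision loop, assuming illustrative thresholds, field names, and a supervisor callback (this is not a real product API):

```python
# Hypothetical sketch of the bot supervision pattern described above:
# the bot drafts every reply, a human supervisor can rewrite the draft
# before it goes out (still appearing as the bot), and each intervention
# is logged so the bot can later be retrained on the corrections.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Draft:
    user_message: str
    bot_reply: str
    confidence: float  # bot's confidence in its own reply, 0..1
    sentiment: float   # user sentiment, -1 (angry) to 1 (happy)

intervention_log: List[Tuple[str, str, str]] = []

def needs_supervision(d: Draft) -> bool:
    # Intervene when the user is angry or the bot is unsure: roughly
    # the point where today you would simply escalate to a human.
    return d.sentiment < -0.5 or d.confidence < 0.6

def send_reply(d: Draft, supervisor_rewrite: Callable[[Draft], str]) -> str:
    if needs_supervision(d):
        human_reply = supervisor_rewrite(d)  # the supervisor types the reply
        # Log (user message, bot draft, human rewrite) as training data,
        # so the interventions can dissipate over time.
        intervention_log.append((d.user_message, d.bot_reply, human_reply))
        return human_reply  # sent as the bot, not as a visible agent
    return d.bot_reply
```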

What are the best use cases that we can apply chatbots to?

Celene: We started off with our bot as just navigation. It was just pointing people to places on the website, because they already have self-serve channels. And that works, right? It works for the web, and it even works for mobile, a little bit less, because on mobile you take the user away out of the bot experience. So that was a challenge, but it's still fine if they know how to minimize and bring up the bot again. But with the use case of building self-serve transactional journeys into the bot, we're starting to move away from the web. Like when I talked about async messaging: a lot of customers now are not going to the website, they're googling the phone number. Imagine they Google the phone number for your telco. They'll see the phone number, and then Apple Business Chat will come up with a little suggestion saying, "Chat with me." So you say yes, I'll chat with you, and that's where your bot lives now. You're not even touching the website; they're totally bypassing your website and going right into Apple. That's where Apple and Google are pushing people right now, and as a user, it's kind of where I want to go. I do a lot of my chatting with my telco, Rogers, through Facebook. That's where I go. I don't really like to go to their website, because it's clunky and messy, but I know if I just talk to them on Facebook, I'm going to get somebody. So I feel like users are pushing us in that direction, and the big tech giants are too. So that's one reason why you potentially want to do everything contained.

And then there are the security concerns and things like that. That's something we as a group of conversation designers are going to have to design for: assuring people of the security, sometimes in legalese, sometimes in some kind of notification at the beginning, but we have to notify them. I also think people will become more comfortable with the security when they start seeing things like that, especially when you do things through Google and Apple. Maybe they'll like it more, maybe they'll like it less. But that's one reason for doing that.

As for intents and use cases, we do both: we do FAQ intents and we do transactional intents. The transactional ones are the things that need payment, an API lookup, or some kind of personal information. But there are some FAQs that don't. For telcos, there's probably a common list of 200 or so that we've done; we can talk about it after, too, if you want to message me. Those are the ones we've seen are typically common, and it's usually a blend of both FAQ and personal.
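
A minimal sketch of that FAQ-versus-transactional routing, with illustrative intent names, handlers, and session object (all assumptions, not [24]7.ai's actual product):

```python
# Hypothetical sketch of the FAQ-vs-transactional split described above:
# FAQ intents can be answered from static content, while transactional
# intents need authentication plus an API lookup or a payment step.

FAQ_ANSWERS = {
    "roaming_rates": "Roaming rates are listed on our international page.",
    "store_hours": "Most stores are open 9am to 9pm; check your local store.",
}

TRANSACTIONAL_INTENTS = {"pay_bill", "check_balance", "change_plan"}

def start_authentication(session) -> str:
    return "Before we continue, I need to verify your identity."

def call_backend_api(intent: str, session) -> str:
    return f"(result of the {intent} lookup for this customer)"

def escalate_to_agent(session) -> str:
    return "Let me connect you with an agent who can help."

def handle(intent: str, session) -> str:
    if intent in FAQ_ANSWERS:
        # No personal data needed: answer straight from content.
        return FAQ_ANSWERS[intent]
    if intent in TRANSACTIONAL_INTENTS:
        # Payment / API lookup / personal info: verify the user first.
        if not getattr(session, "authenticated", False):
            return start_authentication(session)
        return call_backend_api(intent, session)
    return escalate_to_agent(session)
```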

How can we integrate the element of humor? My company is also trying to do that, so I'm curious, how can you do that?

Celene: That's a really good concept, right? And it's interesting, because our teams talk about it too, about persona development and how your users will accept it, and which verticals make sense for humor and which don't. We just had an example today; I'll tell you. We have a social model that runs in the background of our bots, and it tends to be a little greedy; sometimes it interjects when it shouldn't. Somebody wrote this response, and I won't name the client, but it was a standard response, probably in there for one of our internal deployments, about a degree. So you would ask the bot, "What degree do you have?" and the answer was, "Oh, well, I bought mine on eBay because I liked the frame. I got it from South Hampton International Technology," or something like that. So the acronym spelled an expletive. And the client saw this social model come up because somebody asked a legitimate question about their degree. They said, "Oh, how do I get my such-and-such degree?" and it came up. That's a bad experience. So humor has to be done in the right way, or else it's horrible, like that answer to this poor person's question about getting a degree.

So these humor models are great, but they have to be very, very restrictive. You have to make sure it's only coming up when the confidence is so high that yes, they actually want a joke. Imagine they said something like, "This bot is a joke," and the bot responds, "I'll tell you a joke." Bad, bad, bad. So you have to be really sure that they're asking for a joke and not expressing negative sentiment. So I would like to use humor, but you have to be careful with it. Number two, you have to make sure your brand is okay with it. We've found that in a lot of verticals, like financial services, people don't like jokes when they're dealing with money. If I'm talking to my bank, I don't want to hear a joke, because it makes me feel nervous. But when you're talking to a telco, maybe a little more; with retail, definitely a lot more. So it depends on the vertical you're designing for. And always user test it, try it out, and make sure it's accurate and firing at the right time in the right place. I love making bots more human-like because of it; you just have to do it in the right way.
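
A minimal sketch of the kind of restrictive gate she describes, with assumed thresholds and an assumed vertical list:

```python
# Hypothetical sketch of gating a social/humor model: the joke fires
# only when intent confidence is very high, the user's sentiment is
# not negative, and the vertical tolerates humor. The point of the
# sentiment check is exactly the failure mode above: "this bot is a
# joke" should register as a complaint, not as a joke request.

HUMOR_FRIENDLY_VERTICALS = {"retail", "telco"}  # not financial services

def should_tell_joke(intent: str, confidence: float,
                     sentiment: float, vertical: str) -> bool:
    return (
        intent == "request_joke"
        and confidence >= 0.95   # "confidence has to be so high"
        and sentiment >= 0.0     # negative sentiment must not trigger it
        and vertical in HUMOR_FRIENDLY_VERTICALS
    )
```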

Do you work only with companies that have humans available to them for this sort of integrated approach? Or do businesses worry about making it scalable, depending on how many inquiries they get, and that kind of thing? How do you account for that?

Celene: You know, it's really interesting: most of our clients haven't deployed the approaches I mentioned yet, because they're still kind of new; not experimental so much as just not as common. The highway model I mentioned is typically what gets designed today, just because of the other process elements you have to put in place to enable the rest. Our product does enable it; you just have to train agents on that process. But yeah, we do deal with clients that have the human capital model, where they have chat agents or voice agents or something like that. I'd say most of our enterprise clients have that in place, whether it's our agents or other agents doing it. But some medium business clients just have email support or something like that, so then our escalation model just becomes giving out an email or a phone number, where it's not connected, there's no back and forth; it is just that one highway model. That's typically what happens with maybe 60% of clients: we give an email, we give a phone number. And with the rest, the bigger clients, we have that truly seamless escalation strategy between bots and humans.
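
A minimal sketch of the two escalation paths just described, with assumed field and method names:

```python
# Hypothetical sketch of the two escalation strategies: a seamless
# in-channel handoff when the client staffs live agents, otherwise
# the one-way "highway" fallback of handing out an email address or
# phone number.
from dataclasses import dataclass
from typing import Optional

@dataclass
class EscalationConfig:
    has_live_agents: bool
    support_email: Optional[str] = None
    support_phone: Optional[str] = None

def escalate(config: EscalationConfig, session) -> str:
    if config.has_live_agents:
        # Seamless handoff: an agent joins the same conversation,
        # with the bot transcript as context.
        session.transfer_to_agent()
        return "Connecting you with an agent now. One moment."
    # One-way referral out of the channel (the "highway" model).
    contact = config.support_email or config.support_phone
    return f"Please reach our support team at {contact}."
```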

There was an example of a new product that is a bot that monitors sales calls. So salespeople are speaking to humans on the phone, and a bot is kind of whispering secrets into the salesperson's ear: notifications that read the sentiment of the humans on the call and make suggestions to help close the sale. It just seems like there are all kinds of risky, ethically complicated potential use cases here. I was wondering if there are other examples within the relationships you described that would be a real challenge, or need specific design considerations, to make sure that what we're building is an ethically conscious product.

Celene: That's interesting. I would say that probably fits in with augmentation, the last one, because we do that somewhat; not in that exact context, but it's the same idea: you're augmenting the salesperson, making them sound better or more polished. So I think that fits in with the augmentation piece, but it is a different way. Whereas I'm thinking more of how the agent can provide the best answer, and the user would know. Typically we always say, you know, "we're recording your conversation" and so on with the agent, so they kind of already know. But yeah, that use case of using it without the user knowing becomes problematic. In that case, I think the best way to do it from an ethical perspective is to treat it like you would an agent use case: say, "this is the salesperson, and we're going to be recording your call for quality," or whatever it is, so that they know. Only then do I think it would be ethical. If you're doing recording and bot transcription and bot analysis without them knowing, yeah, that's dicey, I think, for sure.

A question about bot supervision. Do you believe the ideal way to implement this is to simply have every agent be a bot supervisor, with it as just another channel for things to come into the agents? Or would you have people dedicated to that channel?

Celene: That's interesting. I think it might depend on your agents' abilities. I would think typically all agents would have the skill set to do this, but maybe not; maybe they're not great at detecting intent themselves. But then I don't know if they should be an agent.

Followup: The reality of the situation is that most implementations of this kind of thing would start from a pre-existing setup: an existing organization taking an existing process, adding a new channel, and diverting and deflecting calls, or, you know, helping users. So these people would already be handling customer service or tech support or any of these channels that are adding bots and AI. So I would hope that the agents would be able to handle these already.

Celene: That's what I'm thinking. I think it would be the same group of agents. I mean, that's how we've pitched it and deployed it currently: it is the same group of agents that would be handling the calls. The staffing strategy might just become a bit different, because they're spending time, maybe less time, maybe more, supervising rather than answering questions.

Followup: I was thinking also from a continual-improvement perspective: would these bot supervisors be the people who then go in and refine things, to make sure that wherever the bot got stuck is immediately dealt with? Or is the segregation of duties, I guess, a little bit worse that way?

Celene: That's true, right? Yeah, you'd have to balance it, because even in the async model we're noticing that the staffing for agents is different, because an agent can handle more asynchronous conversations than synchronous ones. So they're able to do that. But we also noticed that you don't want to blend synchronous and asynchronous agents together, because it's a different process. So it might be similar here: you might actually have to separate the supervisors from the actual agents, because the process may be different, and even just the little bit of time it takes for them to shift their thought process from one mode to the other might be enough to cause inefficiencies for the customer. So actually, the more I think about it, I'm leaning toward probably separating them.

Do you have any books related to this topic that you might recommend if we wanted to look into this more?

Celene: We're kind of building this model ourselves, because I think companies that have both the AI capabilities and the human capital, the agent component, are rare. So yeah, we're kind of writing it ourselves. If there are any books out there that anybody else knows of, I'd love to know about them and research them. But it's still, I think, pretty new. There are probably lots of books around the existing model, around when to escalate, when that point is in a case. But the back and forth? There are not a lot of companies that have both and can do the back and forth. So I think that's why it just hasn't come up as much yet.

If the conversational assistant is the actual product itself, what kind of team would you advise having? In the beginning, mind you; these are, you know, bootstrapped companies. So who do you need on the team to start off?

Celene: I started with an FAQ bot back in 2005; I think at that time we had 10 people in our whole company, so I totally know where startups are coming from, and I love them, I love starting from there, and then you can build these big things that get acquired by the big giants, which is kind of cool. So, setting the product team aside (you still need your product people to build the product for you), for the deployment team, if you can build a bot that's self-serve enough that it doesn't require developers, then even if you're helping the client through the process, you only need a project manager and what we called a knowledge base analyst, or bot analyst, or whatever. That person is a designer, an optimization expert, a little bit of a data scientist, a little bit of all of those things. If you can get people who are broad enough and train them, which is what we had to do, you can get them at a relatively low cost. We quiz them on a certain type of logic; they have to think in a certain way. Once you can determine they can think that way, you can train them on design, data science, metrics, even a little bit of project management. A project manager and an analyst are what we used to have, and they built everything with just those two people.

Curated Resources By Voice Tech Global

Presentation Slides by Speaker

20200924-conversation_design_AI%3AHI_presentation

Articles

4 Methods to Blend AI and Human Agents in Your Contact Center by 247.ai

AI and Humans Must Join Forces to Deliver Superior Customer Experience by Patrick Nguyen

Enhancing artificial intelligence with human insights

A webinar on findings from Opus Research about blending AI with human agents in call centres.

 

