How to create an intelligent call center with Microsoft Azure AI | Azure AI Essentials

Microsoft Azure

(cheerful music) - Welcome back to Azure AI Essentials.

Today we're going to take a look at how you can add language and speech capabilities to your applications with Azure Cognitive Services.

We're going to focus on a call center scenario, since this is a common use case, but the concepts you'll learn here will apply to many other scenarios as well.

Organizations across industries rely on call centers to help provide excellent customer service.

These call center interactions can be a source of rich information about customer needs, employee performance, and more that's largely untapped.

Call centers today often sample just a fraction of calls to understand agent performance, and customer insights are often classified manually by agents during or after the call, which can be inefficient and error-prone.

With AI, organizations are increasingly building intelligent call centers, which automate processes and enable them to gain learnings from every customer call.

This can help reduce costs, identify areas of improvement, and improve customer experience.

This diagram represents the main components of an intelligent call center built on Azure AI.

Over the next few minutes, I'll walk you through some basics of Azure's Speech and Language Cognitive Services.

The first step in creating an intelligent call center solution is typically to implement automated transcription of calls, which can be done using Azure's Speech service.

This is a helpful first step for scenarios like compliance audits, associate training, gaining insights to improve products, and understanding customer satisfaction.

Call centers often come with their own set of challenges to address when it comes to converting speech to text.

A combination of low-quality phone signals, substantial background noise, and a wide variety of languages and dialects can result in audio that's difficult for even a human to understand.

In addition, company and industry-specific vocabulary can be difficult for a model to transcribe accurately.

There are numerous ways to connect call center audio to Azure, and the specifics depend on the environment.

Here, let's assume you have a way to access the call center audio, either as recordings or in real time.

Azure's pre-built speech-to-text models are trained on a variety of telephony data and are accurate for transcribing call center data, as well as other types of audio.
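To make that concrete, here's a minimal sketch of transcribing a recorded call with the Speech SDK for Python; the key, region, and file name are placeholders for your own Speech resource and audio.

```python
# pip install azure-cognitiveservices-speech
import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials -- substitute the key and region of your own Speech resource.
speech_config = speechsdk.SpeechConfig(subscription="YOUR_SPEECH_KEY", region="YOUR_REGION")

# Transcribe a single recorded call from a WAV file; for live calls you would
# feed audio through a stream or the default microphone instead.
audio_config = speechsdk.audio.AudioConfig(filename="call_recording.wav")
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)

result = recognizer.recognize_once()
if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print("Transcript:", result.text)
```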

The models can also be easily customized to handle different acoustic conditions and domain-specific terminology.

In this next example, imagine we have a call center in the healthcare industry, and we want to train a model using our own data.

This will allow us to increase the accuracy of the speech transcriptions by taking medical vocabulary into account.

I've already taken the steps of deploying a Speech resource and setting up a Custom Speech project.

Starting on the data tab in the Speech Studio, you can upload data to train the custom speech model.

This data is used to tune the provided baseline models to recognize your domain-specific vocabulary.

The custom model will be trained using audio data along with human-labeled transcriptions.

Notice the use of the term "Tenex" in these examples.

This is a medication name.

Keep in mind that each audio file used as a training example must be under 60 seconds long.

For the best results, the total amount of audio with human-labeled transcriptions should be between 10 and 1,000 hours.

Once we've uploaded the data, we can use the training tab to train our model.

From here we can also see which data has been used to train the model.

Here, we've already run an evaluation of our new model against the baseline.

Notice the significant improvement in performance between our custom model and baseline model.

We can click in to learn more and see that the baseline model has trouble recognizing the medical term, "Tenex," but our custom model now recognizes this accurately.

Continue to test and refine your model until you've reached the desired level of accuracy, and then deploy a custom endpoint to use your model in your application.

If you prefer a code-first approach, there's also an option to use the Speech SDK, which is available in many programming languages.
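As a sketch of that code-first path, pointing the Python Speech SDK at a deployed Custom Speech model takes one extra line: setting the endpoint ID on the speech configuration. The key, region, endpoint ID, and file name below are placeholders.

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_SPEECH_KEY", region="YOUR_REGION")
# Route recognition through the deployed custom model instead of the baseline model.
speech_config.endpoint_id = "YOUR_CUSTOM_ENDPOINT_ID"

audio_config = speechsdk.audio.AudioConfig(filename="healthcare_call.wav")
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
print(recognizer.recognize_once().text)
```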

You can also use the Speech service to translate audio to different languages in real time, and to convert text to speech, which can be helpful if there's a virtual agent component to your call center.
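Speech translation follows the same pattern as transcription; here's a minimal sketch that recognizes English call audio and translates it to Spanish, again with placeholder credentials and file name.

```python
import azure.cognitiveservices.speech as speechsdk

translation_config = speechsdk.translation.SpeechTranslationConfig(
    subscription="YOUR_SPEECH_KEY", region="YOUR_REGION")
translation_config.speech_recognition_language = "en-US"
translation_config.add_target_language("es")  # translate English audio into Spanish

audio_config = speechsdk.audio.AudioConfig(filename="call_recording.wav")
recognizer = speechsdk.translation.TranslationRecognizer(
    translation_config=translation_config, audio_config=audio_config)

result = recognizer.recognize_once()
if result.reason == speechsdk.ResultReason.TranslatedSpeech:
    print("Recognized:", result.text)
    print("Spanish:", result.translations["es"])
```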

Once the audio has been transcribed and translated, the next step is often to analyze that text to extract insights.

For organizations with many customer service calls, it can become overwhelming to analyze them all to spot trends such as commonly reported issues.

With Azure's Language Cognitive Services, this analysis can be automated, cutting down on the time needed to surface these insights.

Text Analytics has pre-built natural language processing models that can take in call transcriptions and analyze them for sentiment, extract key phrases, and more.

Here, we call the Text Analytics API and pull out key phrases, like names of products, as well as analyze for positive, negative, or neutral customer sentiment.
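In code, that call might look like the following sketch, using the Text Analytics client library for Python with a placeholder endpoint, key, and sample transcript.

```python
# pip install azure-ai-textanalytics
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://YOUR_RESOURCE.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("YOUR_LANGUAGE_KEY"))

transcripts = ["My dishwasher is broken. How fast can I receive a new one?"]

# Sentiment analysis labels each document positive, negative, neutral, or mixed.
for doc in client.analyze_sentiment(transcripts):
    print("Sentiment:", doc.sentiment, doc.confidence_scores)

# Key phrase extraction surfaces terms such as product names mentioned on the call.
for doc in client.extract_key_phrases(transcripts):
    print("Key phrases:", doc.key_phrases)
```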

With Azure's Language Understanding service, you can easily train natural language understanding models to help support your call center by enabling computers to accept language as input, understand what's being said, and act accordingly.

For a call center, you can train a model to understand and categorize calls in a way that's specific to your business, which can help filter calls and direct customers to get appropriate help more quickly.

For example, a retail company may want to categorize calls based on categories like design or pricing, and use that information to bring in the right team to help solve the customer's issue.

You can build two types of language models. The first is classifier models, which help categorize data.

In the call center, this may be useful to classify calls to get a sense of top customer issues.

The second type is extractor models, which extract key information from the text.

For example, you could use these models to extract and understand customer sentiment over time.

These models are often used together to enable you to get the most appropriate extractions.

Training a custom model can be done visually, through the Language Understanding service's UI-based experience, or through an API or SDK.

Here, we're building a classifier model in the Language portal.

An intent represents a task or action the user wants to perform.

For each intent, you'll train the model by providing some examples of what customers might say.

These examples are called "utterances." An example utterance might be, "My dishwasher is broken. How fast can I receive a new one?" The output would be the intent of "speed."

Language Understanding uses a technique called "machine teaching," which enables you to teach a computer the same way you'd teach a person.

This is an iterative process that uses fewer labels than traditional machine-learning techniques, and allows you to effectively build and maintain custom models.

To train your model, you can upload a labeled dataset or label it manually using the online UI, which has an interface that allows you to easily label utterances from your training dataset.

A good guideline to follow is to use 70% of your data to train the model, and use the other 30% to test it.
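If you're preparing that split yourself, a simple hypothetical helper like this one does the job; the utterances and labels below are made up for illustration.

```python
import random

def split_utterances(labeled_utterances, train_fraction=0.7, seed=42):
    """Shuffle labeled utterances and split them into train and test sets."""
    shuffled = labeled_utterances[:]
    random.Random(seed).shuffle(shuffled)
    cutoff = int(len(shuffled) * train_fraction)
    return shuffled[:cutoff], shuffled[cutoff:]

labeled = [
    ("My dishwasher is broken. How fast can I receive a new one?", "speed"),
    ("Can you match the price I saw on another site?", "pricing"),
    ("The handle arrived cracked. Who designed this?", "design"),
]
train_set, test_set = split_utterances(labeled)
```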

Click "train" to train your model, publish your Language Understanding app, and then you can test the trained model for accuracy.

Given a test utterance, the model will return an intent with a score between zero and one, indicating how confident the model is that its prediction is correct.
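As a sketch, querying a published Language Understanding (LUIS) app's prediction endpoint over REST looks roughly like this; the resource endpoint, app ID, and key are placeholders for your own deployment.

```python
# pip install requests
import requests

endpoint = "https://YOUR_RESOURCE.cognitiveservices.azure.com"  # placeholder
app_id = "YOUR_LUIS_APP_ID"                                     # placeholder
prediction_key = "YOUR_PREDICTION_KEY"                          # placeholder

resp = requests.get(
    f"{endpoint}/luis/prediction/v3.0/apps/{app_id}/slots/production/predict",
    params={"subscription-key": prediction_key,
            "query": "My dishwasher is broken. How fast can I receive a new one?"})
prediction = resp.json()["prediction"]
top_intent = prediction["topIntent"]
print(top_intent, prediction["intents"][top_intent]["score"])  # e.g. speed 0.97
```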

Now, these insights can be aggregated and visualized using a business intelligence tool, to help inform business decisions.

These visualizations were built using custom Python scripts in Power BI.
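Inside a Power BI Python visual, the fields you drag in arrive as a pandas DataFrame named dataset. As a rough sketch (the intent and call_count column names are assumptions for illustration), a bar chart of call volume per predicted intent might look like this:

```python
# Runs inside a Power BI Python visual, where the selected fields are exposed
# as a pandas DataFrame named `dataset`. Column names here are assumptions.
import matplotlib.pyplot as plt

counts = dataset.groupby("intent")["call_count"].sum().sort_values(ascending=False)
counts.plot(kind="bar", title="Calls by predicted intent")
plt.xlabel("Intent")
plt.ylabel("Number of calls")
plt.tight_layout()
plt.show()
```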

A dashboard like this makes it easier to understand how to resolve common issues, reduce future support call volumes, and improve customer service.

As we just demonstrated, this process is helpful for post-call analytics.

But the same process can also be done in real-time, for an agent assist scenario, where real-time insights are provided to call center agents during their customer calls.

Imagine a solution that recommends the next best action, or pulls up relevant documents.

For example, if the customer is asking a question about a particular order, the system can retrieve the order details, saving time for the agent.

Modernizing a call center is just one of the ways Speech and Language AI capabilities can be applied to solve business challenges.

And this general pattern we've discussed today can be applied to any scenario where you want to make sense of conversations, including medical consultations, sales calls, business meetings, and more.

Thanks for tuning in, and we'll see you next time on Azure AI Essentials.

(lively music)
