
Managing the Boundaries of Artificial Intelligence


ChatGPT Offers Benefits to Contact Centers While Balancing Ethics

By Kolby Harvey, PhD
Head of Content and Conversational Strategist
Humach

The wave of robust artificial intelligence tools exploding onto the scene is being greeted by contact center providers with a mix of awe, trepidation, and fear. The initial popularity of ChatGPT, DALL-E, and Midjourney is pushing forward-thinking BPO providers to recognize the importance of keeping a human in the loop of every customer interaction.

My work with Humach entails a mixture of the technical and the creative to improve the customer experience, so it’s hard not to feel a twinge of fright when popular products can write and debug code or generate stylized images from a text description.

Even with the assurances Humach's business philosophy brings, the reality is this: we can't put the genie back in the bottle. However, we do have a say in how this technology is deployed to protect and enhance the customer experience. We also have a responsibility to ensure existing and future A.I. technologies work for the common good, and to prevent bad actors from using these tools in nefarious ways.

Concerns and Responsibilities of Artificial Intelligence

A Misinformation Superhighway

While the speed at which ChatGPT and others can generate readable, competent-sounding text boggles the mind, the content of its answers often contains inaccuracies. As Mike Pearl pointed out in a recent piece for Mashable, ChatGPT "can blurt out basic common sense, but when asked to be logical or factual, it's on the low side of average." For example, when asked to identify the second-largest country in Central America, OpenAI's digital agent returned a plausible, albeit false, answer (Guatemala) in place of the correct one (Honduras).

When it comes to longer prompts, like an essay or blog post, ChatGPT can deliver instantly, but again, the quality of the response leaves something to be desired. In preparation for this blog post, I requested a 700-word blog about ChatGPT’s potential negative effects on artists. What I received was the caliber of work a disengaged first-year college student might throw together the night before an assignment’s due date—competent prose masking facile observations and a lack of expertise. Other users have described instances of the bot generating essays on “films that were never made by someone who never existed,” and, more disturbingly, “fake news articles attributed to real journalists,” complete with invented citations.

However, our efforts to address the challenges A.I. technologies present don't have to start from square one. The work of the Stanford Cyber Policy Center's Working Group on Platform Scale is a good place to start. In a 2020 white paper and again at Stanford's 2021 Human-Centered AI (HAI) Conference, the group argued for the creation of a competitive market of middleware that would "give users control over what they see rather than leaving it up to a nontransparent algorithm that is being used by the platform."
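To make the middleware idea concrete, here is a minimal, hypothetical sketch. The `RankingPolicy` type and the sample policies are my own invention for illustration, not part of the working group's proposal; the point is only that the ranking step becomes a swappable, user-chosen component instead of an opaque platform default:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Post:
    author: str
    text: str
    engagement_score: float  # the platform's opaque metric
    verified_source: bool

# A "middleware" policy is just a user-selected ranking function.
RankingPolicy = Callable[[list[Post]], list[Post]]

def platform_default(posts: list[Post]) -> list[Post]:
    # The nontransparent status quo: maximize engagement.
    return sorted(posts, key=lambda p: p.engagement_score, reverse=True)

def verified_first(posts: list[Post]) -> list[Post]:
    # One alternative a user might choose: prefer verified sources.
    return sorted(posts, key=lambda p: (not p.verified_source, -p.engagement_score))

def render_feed(posts: list[Post], policy: RankingPolicy) -> list[Post]:
    # The platform supplies candidates; the user's middleware decides the order.
    return policy(posts)
```

In this toy model, switching from `platform_default` to `verified_first` is the user's call, which is precisely the kind of control the proposal envisions.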

Bias and Accountability

Despite our best intentions, bias often worms its way into our technologies. There is nothing in ChatGPT that a human did not, in one way or another, contribute. While it's clear OpenAI (ChatGPT's developer) has invested considerable work in eliminating harmful responses, it has not been 100% successful. The bias we impose on our data is often unconscious, which makes it that much harder to catch.

Thankfully, another 2021 HAI speaker outlined ways to combat and root out bias in our data. Deb Raji, a Mozilla Foundation fellow and doctoral candidate at UC Berkeley, advocates for independent auditor access to increase accountability for those making and distributing A.I. technologies. As Raji points out, internal audits conducted by A.I. companies "have not been reliable sources of information about the effectiveness of their own systems." Impartial, third-party auditors are essential, according to Raji, as they "provide concrete evidence focused on the concerns of an affected population."

Data Rights and Protecting Creativity

The last of the larger concerns centers on A.I.'s impact on art and creativity. For writers, artists, and other producers of culture, the propagation of ChatGPT et al. poses threats both financial and ethical.

For starters, the ways in which the developers of A.I. technologies acquire data for training models are dubious at best. Whether you've posted Nintendo fan art on Tumblr or responded to a thread on a message board, Drs. Hanlin Li and Nick Vincent point out that "you've done free work for tech companies, because downloading all this content from the web is how their artificial intelligence systems learn about the world." Our content, regardless of what shape it takes, is a precious resource, yet few of us receive compensation for it, even when models trained on that content go on to generate massive amounts of money.

Drs. Li and Vincent outline a few ways to leverage one's own content, including "individuals banding together to withhold, 'poison,' or redirect data" (direct action); lawsuits and other legal actions; making demands of technology companies as consumers, creators, and workers (market action); and the formation of data cooperatives (regulatory action).

Divya Siddarth, another 2021 HAI participant and a social technologist with Microsoft, advocates for that last approach, arguing that while a few players have a monopoly on nearly all personal data, there's no reason for the benefits to be so one-sided. Data cooperatives, which she describes as "intermediary fiduciaries who would negotiate with companies and other entities to establish guidelines around the use of our shared data; set limits on who can view, store, use, or buy it; and route the benefits back to us: in dollar form, in-kind, or through recognition and access," could redistribute the wealth, so to speak.

None of these approaches, however, addresses a longstanding condition that new artificial intelligence technology throws into sharp relief: the contempt for and dismissal of human creativity, the least important, most important thing there is. ChatGPT and the rest are not creative, and they do not create. Rather, they generate, rearranging pieces we've already given them into legible responses. I truly cannot fathom why someone would want to generate a story or poem (beyond testing for novelty's sake) with software they had no hand in making. To then call the result art and attempt to publish it in a magazine seems even more delusional, yet people did exactly that, flooding literary journals with bot-generated stories to the point that one of the most prominent science-fiction journals (among others) had to close submissions for an unspecified period and move to a solicitation-only model. For as long as those doors stay closed, new writers will have one less way to break into the industry.

A Balanced Approach

As it turns out, Humach itself is an example of the potential of A.I. technology to improve CX with every interaction. We firmly believe in humans working alongside machines. It's right there in our name. For ChatGPT, customer support is, I'd argue, the perfect use case. The promise of automation lies in its ability to shoulder our greatest and most tedious burdens, freeing us to focus on what matters. For workers, this could mean a reduction in the most rote aspects of the workday as well as a new collection of tools that could increase efficiency and reduce frustration (this goes for customers too). Outside of the workplace, automation should leave us with more time and energy for personal and creative pursuits. It's critical we utilize new technologies for the greater good rather than for making a quick buck. One practical pattern, sketched below, is letting software answer the routine questions while anything uncertain escalates to a person.
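Here is a toy illustration of that humans-plus-machines philosophy. The intents, confidence threshold, and function names are hypothetical, not a description of Humach's actual stack; the idea is simply that the bot resolves rote requests on its own and hands anything uncertain to a person, draft answer attached:

```python
from dataclasses import dataclass

@dataclass
class BotReply:
    text: str
    confidence: float  # model's self-reported certainty, 0.0 to 1.0
    intent: str

# Intents a bot can safely resolve on its own (hypothetical list).
ROTE_INTENTS = {"order_status", "store_hours", "password_reset"}
CONFIDENCE_FLOOR = 0.85  # below this, a human takes over

def route(reply: BotReply) -> str:
    """Decide whether the bot answers or a human agent steps in."""
    if reply.intent in ROTE_INTENTS and reply.confidence >= CONFIDENCE_FLOOR:
        return f"BOT: {reply.text}"
    # Escalate: the agent sees the bot's draft, not a blank screen.
    return f"HUMAN AGENT (bot draft attached): {reply.text}"

print(route(BotReply("Your order shipped Tuesday.", 0.93, "order_status")))
print(route(BotReply("You may be eligible for a refund.", 0.55, "billing_dispute")))
```

The escalation path is the whole point: automation absorbs the rote work, and the human stays in the loop for everything that matters.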

Improving the Contact Center

While contact center work has never been easy, providing a top-tier multichannel customer experience means our agents face the challenges of the digital age while navigating multiple communication platforms. We design our AI-powered contact center solutions to augment and improve agent work, but never replace it, a quality we also look for in our partner organizations.

Case in point: Talkdesk, an organization with whom we've enjoyed a long partnership in customer service and thought leadership in the contact center industry, one that only gets better with age. In February, Talkdesk unveiled a game-changing, ChatGPT-powered feature for contact center agents, coming soon to its platform: automatic summaries.

While auto-summaries have existed in one form or another for some time, ChatGPT's specific skillset makes it a perfect choice for condensing information into legible, digestible pieces that human agents can then proofread for inaccuracies. Not only does this feature free up time for contact center agents on every single call, but these auto-summaries could also be passed between agents when a caller must be transferred or needs to switch channels. ChatGPT's impressive capacity for parsing language and responding in friendly, conversational language could mean the dawn of a new era for both contact centers and self-service customer support.
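For a sense of how small such a summary step can be, here is a minimal sketch using OpenAI's public chat completions API. The prompt wording, model choice, and review step are my assumptions for illustration, not Talkdesk's actual implementation:

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_call(transcript: str) -> str:
    """Condense a call transcript into a short handoff summary.

    The output is a draft: a human agent proofreads it for
    inaccuracies before it travels with the customer to the
    next agent or channel.
    """
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumption; any chat model works here
        messages=[
            {"role": "system",
             "content": "Summarize this contact center call in 3-4 "
                        "sentences: the customer's issue, steps taken, "
                        "and any promised follow-up."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

# draft = summarize_call(raw_transcript)
# -> reviewed by the agent, then attached to the ticket on transfer
```

Keeping the agent's proofread in the flow is what turns a plausible-sounding summary into a trustworthy one.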
