As the AI race heats up, no business wants to be left behind – and doing things properly will yield even bigger benefits
The AI era is upon us, with what seem like new advances every week pushing the technology to new heights. Between Google, OpenAI, Microsoft and a raft of other companies, developments that can ease the way we live and work are more accessible than ever before. It’s little wonder, then, that businesses are starting to consider how best to integrate AI into their processes to reap the benefits.
But thinking before acting is vital in such a fast-moving space. The first-mover advantage that businesses seek out can quickly be negated by the regulatory risks of irresponsible use of AI.
“Lots of companies talk about AI, but only a few of them can talk about responsible AI,” says Vikash Khatri, senior vice-president for artificial intelligence at Afiniti, which provides AI that pairs customers and contact-centre agents based on how well they are likely to interact. “Yet, it’s vital that responsibility be front of mind when considering any deployment of AI – the risks of not considering that are too great.”
Think fast, act slower
In such a fast-moving and competitive environment, the responsible use of AI often comes second to the race for market share. The history of AI, says Khatri, is one of companies developing tools that harness the power of big data sets without fully considering their impact on society. Widely used AI tools are trained by trawling the internet and gleaning information from what is found online, which can replicate and amplify our societal biases. Another problem is that AI-generated content is often ill-suited to the specific needs businesses have when deploying AI.
“If I’m a broadband provider in the UK, as opposed to a health insurance company in the US, there’s a specific way that I communicate with my customer,” says Khatri. “With respect to the generative AI technology that’s receiving so much attention, it’s important that the AI models being used are trained on the company’s own data, rather than relying solely on generic, third-party data. That way, the organisation remains compliant with global data regulation and the AI models generate content that aligns with the company’s unique approach to its customers.”
A customer service chatbot trained on the way users interact with one another on social media, Khatri points out, could quickly turn poisonous rather than supportive, lobbing insults instead of offering advice.
“At Afiniti, we use responsible AI design to make those moments of human connection more valuable,” says Khatri. “That in turn produces better outcomes for customers, customer service agents and companies alike. One way we do this is by training our AI models only with the data we need, and we continuously monitor them so our customers and their customers get the results they want, while being protected from bias or other discriminatory outcomes.”
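To make the data-minimisation idea concrete, here is a minimal sketch in Python of what ‘training only with the data we need’ can look like in practice. The field names are hypothetical illustrations, not Afiniti’s actual pipeline: the point is simply that personal identifiers are dropped before any model training takes place.

```python
import pandas as pd

# Hypothetical field names -- in practice these would come from a
# data-governance review, not be hard-coded like this
FEATURES_NEEDED = ["interaction_history", "product_tier", "contact_channel"]

def minimise_training_data(df: pd.DataFrame) -> pd.DataFrame:
    """Keep only the columns the model actually needs, so names, emails
    and other personal data never reach the training step."""
    keep = [c for c in FEATURES_NEEDED if c in df.columns]
    dropped = [c for c in df.columns if c not in keep]
    print(f"Dropping {len(dropped)} unneeded column(s): {dropped}")
    return df[keep].copy()
```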
It’s not just the risk of alienating customers that should be at the forefront of a business leader’s mind when considering how to roll out AI within their organisation and to their clients. Regulation is on the horizon for AI, and is likely to bring specific requirements for how data is fed into models that are used to give AI its ‘brain’, and how AI is used to handle customer interactions.
Caution avoids consequences
“Before you even start to develop or deploy AI, you must be cognisant of the regulatory landscape,” says Kristin Johnston, associate general counsel for artificial intelligence, privacy and security at Afiniti. “This means examining your governance structure around data compliance to get your house in order first.”
AI regulation is complex and constantly changing, and a patchwork of laws across the globe can make it hard for businesses to comply. For example, businesses operating in Europe have different requirements from those with customers in the US, while the UK’s data protection regulation is likely to soon diverge from the European Union’s.
The magnitude of the task of responsibly deploying AI is something most businesses have yet to fully grasp, fears Johnston. “A lot of companies haven’t built out a governance process specifically around AI,” she says. To do so properly, Johnston says, it’s important first to agree on definitions of ‘AI’ and ‘machine learning’, then to identify how AI is being used within the organisation based on those definitions, and finally to construct a responsible AI programme accordingly, so that all employees are aligned.
AI is set to become so ubiquitous that external services feeding into your company may use it too. Google, for instance, has introduced generative AI-powered aids for drafting documents and slide decks in its cloud-software suite, which your employees could soon find themselves using without realising it. And if people in your company aren’t sure what AI is, or even whether they’re using it, you can’t be confident your approach to AI is responsible.
Root and branch reform
Johnston stresses that a clearly understood definition of AI within your company is the basis of any AI governance programme. She recommends considering the definition of ‘AI systems’ in the artificial intelligence risk management framework published by the National Institute of Standards and Technology (NIST) in the US as a working definition.
“Making sure everyone is aligned is critical, because you want to check for any use of AI throughout your organisation,” she says. “Any protocol worth its salt needs to be able to categorically define who is using AI tools, when they’re using them, what data they’re using and what the limitations of the tools are. It’s also important to ensure AI tools are being used in a way that respects privacy and intellectual property, given the mounting legal actions against some generative AI tools by those who believe their data was used to train the models that power such platforms.”
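In practice, a protocol like the one Johnston describes can begin as a simple structured register. The sketch below is a hypothetical illustration rather than a prescribed format: it records who uses each AI tool, since when, on what data and with what known limitations, so every use of AI across the organisation can be enumerated and flagged for privacy and intellectual-property review.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIToolRecord:
    """One entry in an organisation-wide AI register (hypothetical schema)."""
    tool_name: str            # which AI tool is in use
    owner_team: str           # who is using it
    in_use_since: date        # when they started using it
    data_sources: list[str]   # what data it touches
    known_limitations: list[str] = field(default_factory=list)
    privacy_reviewed: bool = False
    ip_reviewed: bool = False

registry: list[AIToolRecord] = [
    AIToolRecord(
        tool_name="doc-drafting-assistant",
        owner_team="Marketing",
        in_use_since=date(2023, 1, 15),
        data_sources=["public web content", "internal style guide"],
        known_limitations=["may hallucinate facts", "unclear training provenance"],
    ),
]

# Flag every registered tool that has not cleared privacy or IP review
for rec in registry:
    if not (rec.privacy_reviewed and rec.ip_reviewed):
        print(f"REVIEW NEEDED: {rec.tool_name} ({rec.owner_team})")
```

Even a lightweight register like this gives a governance team one place to check that no tool enters use before its data sources and limitations are documented.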
Making sure responsibility is front and centre of any AI deployment is vital because it avoids headaches in the long run. Not only can the irresponsible use of AI lead to trouble, but generative AI’s tendency to ‘hallucinate’ content (in other words, to generate untrue responses) could lead to even bigger trouble in the court of public opinion for spreading disinformation. Yet fewer than 20% of executives say their organisation’s actions around AI ethics live up to their stated principles on AI. By putting a robust responsible AI programme in place, companies can avoid the pitfalls that come with leaping headfirst into the promise of AI without considering its drawbacks. “We’re very mindful about ethical and responsible use of data,” says Johnston. “Responsible AI should be a priority for organisations globally.”
Responsibly transform your business with AI at afiniti.com.
Originally published in The Times Future of Data and AI report on March 22, 2023.
With AI playing such an important role in digital transformation, ensuring it is used in smart and safe ways is critical
Digital and data-led advancement is at the heart of today’s business transformation, and AI, which increasingly helps organisations deliver innovative services, is key to this evolution. So, when Google made headlines recently after an engineer claimed an AI chatbot had become sentient, shockwaves rippled through the global business community.
Whether sentient AI truly exists or not is a distraction. What this hyped news story highlights is the increasing need for responsible guardrails around artificial intelligence to ensure the technology successfully drives business transformation. If there are any questions looming over trust in the use of AI, then corporations will find it difficult to offer the next level of digital-first services and get customer buy-in.
“The debate over a self-conscious AI shows there are still many anxieties about the use of artificial intelligence. If we want AI to synthesise large amounts of data, identify patterns, make decisions and continuously improve, then it has to be ethical, accountable and fully understood by all,” says Caroline O’Brien, chief data officer and head of product at Afiniti, which provides AI that pairs customers and contact centre agents based on how well they are likely to interact.
“Every business now needs a responsible AI practice. Good governance is vital, not only for an organisation, but for its vendors, partners and suppliers. Anyone who is using AI needs to have best practices embedded into their enterprise as it transforms. This is the only way to promote trust in the use of artificial intelligence in business.”
As more companies become digital-first, the challenge is likely to grow. Industry-wide funding for AI is expected to increase in 2022: a third of technology and service provider organisations with plans for AI aim to invest US$1m or more in the technology in the next two years, according to Gartner, a business technology research and consulting firm. Such funding is increasing at a moderate to fast pace as organisations create new products and services, expand their customer base and generate new revenue streams.
“We are increasingly seeing AI used in call centres to connect callers with agents. It is also being used to provide prompts to agents that help them have better conversations with customers. More intelligent chatbots are coming to the fore as well. We will see many more business processes and human decision-making complemented with intelligent machine-based services,” says the CDO of Afiniti, whose technology serves several of the largest Fortune 100 companies across industries, including telecoms, healthcare, and insurance.
“As AI’s use grows organically, it is harder for organisations to get a handle on the extent to which artificial intelligence is influencing business decision-making. Managing risk and promoting accountability is therefore becoming more important, especially as AI is increasingly used not just for frontline customer engagement, but throughout the customer journey. Having a clear understanding of how AI is being leveraged and what the impacts are in their own businesses is a huge priority for our clients right now.”
With more regulation on the horizon, the need to act now is crucial. The EU’s proposed Artificial Intelligence Act addresses AI systems specifically and will include a universal obligation to inform customers when they are dealing with an AI. In the UK, policies in this field are still under development, while a patchwork of federal and state-level proposals is emerging in the US.
The core aim is to foster trust in AI and avoid a consumer backlash. Enterprises, especially those operating in multiple jurisdictions, that don’t start building a robust approach to responsible AI risk damage to their reputations and bottom lines if issues come to light in this arena.
This increase in regulation comes as artificial intelligence is embedded ever deeper into the enterprise, touching many more organisational tasks. Customer insights, user experience and process improvement, for example, are three areas where AI will increasingly benefit customer service organisations in the near future, according to research from Gartner.
“We measure how our AI affects customers and employees, enabling us to understand both the business value that AI creates and its impact on people. Using AI responsibly should include testing for potential bias and correcting it where it is found,” says O’Brien from Afiniti, which has more than 400 patents and whose AI has, to date, been involved in more than 1 billion conversations between customers and call centre agents.
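One common form such bias testing can take is comparing outcome rates across customer groups and flagging gaps above a tolerance. The sketch below is illustrative only; the metric and the 5% threshold are assumptions chosen for the example, not Afiniti’s published method.

```python
from collections import defaultdict

def outcome_rate_by_group(records: list[dict]) -> dict[str, float]:
    """Share of positive outcomes per group (e.g. resolved calls)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        positives[r["group"]] += int(r["outcome"])
    return {g: positives[g] / totals[g] for g in totals}

def check_disparity(records: list[dict], tolerance: float = 0.05) -> None:
    """Flag the run for review if the best-to-worst group gap exceeds tolerance."""
    rates = outcome_rate_by_group(records)
    gap = max(rates.values()) - min(rates.values())
    status = "OK" if gap <= tolerance else "INVESTIGATE"
    summary = {g: round(v, 3) for g, v in rates.items()}
    print(f"rates={summary} gap={gap:.3f} -> {status}")

# Toy data: each record is one customer interaction
interactions = [
    {"group": "A", "outcome": 1}, {"group": "A", "outcome": 1},
    {"group": "A", "outcome": 0}, {"group": "B", "outcome": 1},
    {"group": "B", "outcome": 0}, {"group": "B", "outcome": 0},
]
check_disparity(interactions)  # gap of ~0.33 is well above the 0.05 tolerance
```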
“From a responsible AI perspective, it’s important to make sure that we have these protections in place. Context also matters. AI is being used in many different parts of the customer journey, so knowing where it is being used, when to use it, when not to use it, and where it is going to have the most impact in a positive way is extremely important.”
Many businesses, especially large enterprises, have also implemented AI across multiple business units, from marketing and sales to logistics and business development. Building a cohesive picture of how AI is used, and how it can be leveraged responsibly, across the whole organisation is challenging. Yet companies will need a ‘single pane of glass’ view of their AI if they are to comply with new regulations.
“From conversational bots to intelligent recommendations for customer service agents, the list of how AI can be used grows longer by the day. We increasingly need to account for all these use cases, how they influence decisions and the impact they have, or the industry will start to see rising distrust from consumers,” says O’Brien.
“Having a holistic, responsible AI practice in place addresses this issue of trust. As businesses transform, we need observability, accountability and explainability for all AI uses, all of which drive customer trust. This is what Google’s sentient AI story is really highlighting.”
Responsibly transform your business with AI at afiniti.com.
Originally published in The Times Business Transformation Special Report on June 30, 2022.