The results of our survey of more than 1,300 business leaders and 3,000 consumers globally suggest that establishing trust in products and experiences that leverage AI, digital technologies, and data not only meets consumer expectations but also could promote growth. The research indicates that organizations that are best positioned to build digital trust are also more likely than others to see annual growth rates of at least 10 percent on their top and bottom lines. However, only a small contingent of companies surveyed are set to deliver. The research suggests what these companies are doing differently.
A majority of consumers believe that the companies they do business with provide the foundational elements of digital trust, which we define as confidence in an organization to protect consumer data, enact effective cybersecurity, offer trustworthy AI-powered products and services, and provide transparency around AI and data usage. However, most companies aren’t putting themselves in a position to live up to consumers’ expectations.
Consumers value digital trust
Consumers report that digital trust truly matters—and many will take their business elsewhere when companies don’t deliver it.
Consumers believe that companies establish a moderate degree of digital trust
When it comes to how organizations are performing on digital trust, consumers express a surprisingly high degree of confidence in AI-powered products and services compared with products that rely mostly on humans. They exhibit a more moderate level of confidence that the companies they do business with are protecting their data. For organizations, this suggests that digital trust is largely theirs to lose.
More than two-thirds of consumers say that they trust products or services that rely mostly on AI the same as, or more than, products that rely mostly on people (Exhibit 1). The most frequent online shoppers, consumers in Asia–Pacific, and Gen Z respondents globally express the most faith in AI-powered products and services, frequently reporting that they trust products relying on AI more than those relying largely on people—41 percent, 49 percent, and 44 percent, respectively.
However, these survey results could be influenced, at least in part, by the fact that consumers may not always understand when they are interacting with AI. Although voice-assistant home devices (for example, Amazon’s Alexa, Apple’s Siri, or Google Home) frequently rely on AI systems, only 62 percent of respondents say that it is likely that they are interacting with AI when they ask one of these devices to play a song.
While 59 percent of consumers think that, in general, companies care more about profiting from their data than protecting it, most respondents have confidence in the companies they choose to do business with. Seventy percent of consumers express at least a moderate degree of confidence that the companies they buy products and services from are protecting their data.
And the data suggest that a majority of consumers believe that the businesses they interact with are being transparent—at least about their AI and data privacy policies. Sixty-seven percent of consumers have confidence in their ability to find information about company data privacy policies, and a smaller majority, 54 percent, are confident that they can find company AI policies.
Most businesses are failing to protect against digital risks
Our research shows that companies have an abundance of confidence in their ability to establish digital trust. Nearly 90 percent believe that they are at least somewhat effective at mitigating digital risks, and a similar proportion report that they are taking a proactive approach to risk mitigation (for example, employing controls to prevent exploitation of a digital vulnerability rather than reacting only after the vulnerability has been exploited). Of the nearly three-quarters of companies reporting that they have codified policies on data ethics conduct (meaning those that detail, for example, how to handle sensitive data and provide transparency on data collection practices beyond legally required disclosures) and the 60 percent with codified AI ethics policies, almost every respondent reports at least a moderate degree of confidence that employees are following those policies.
However, the data show that this assuredness is largely unfounded. Less than a quarter of executives report that their organizations are actively mitigating a variety of digital risks across most of their organizations, such as those posed by AI models, data retention and quality, and lack of talent diversity. Cybersecurity risk was mitigated most often, though only by 41 percent of respondents’ organizations (Exhibit 2).
Given this disconnect between assumed and actual risk coverage, it is likely no surprise that 57 percent of executives report that their organizations suffered at least one material data breach in the past three years (Exhibit 3). Further, many of these breaches resulted in financial loss (42 percent of the time), customer attrition (38 percent), or other consequences.
Similarly, 55 percent of executives report that their organizations experienced an incident in which active AI (for example, in use in an application) produced outputs that were biased, incorrect, or inconsistent with the organization’s values. Only a little over half of these AI errors were publicized. These mishaps, too, frequently resulted in consequences, most often employees’ loss of confidence in using AI (38 percent of the time) and financial losses (37 percent).
Advanced industries—including aerospace, advanced electronics, automotive and assembly, and semiconductors—reported both AI incidents and data breaches most often, with 71 percent and 65 percent reporting them, respectively. Business, legal, and professional services reported material AI malfunctions least often (49 percent), and telecom, media, and tech companies reported data breaches least often (55 percent). By region, AI and data incidents were reported most by respondents at organizations in Asia–Pacific (64 percent) and least by those in North America (41 percent reported data breaches, and 35 percent reported AI incidents).
The survey results suggest that delivering on digital trust could provide significant benefits beyond satisfying consumer expectations. Leaders in digital trust are more likely to see revenue and EBIT growth of at least 10 percent annually.
Digital-trust leaders lose less and grow more
We define digital-trust leaders as companies whose employees follow codified data, AI, and general ethics policies and that engage in at least half of the best practices for AI, data, and cybersecurity that we asked about. These companies outperform their peers in both loss prevention and business growth.
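The leader criterion above can be expressed as a simple classification rule. The sketch below is illustrative only: the field names and the list of surveyed practices are hypothetical stand-ins, since the article does not enumerate them; only the two conditions (all three ethics policies followed, and at least half of surveyed best practices adopted) come from the definition.

```python
# Hypothetical encoding of the digital-trust-leader definition.
# Field names and practice lists are assumptions for illustration.
from dataclasses import dataclass
from typing import List


@dataclass
class CompanySurveyResponse:
    follows_data_ethics_policy: bool
    follows_ai_ethics_policy: bool
    follows_general_ethics_policy: bool
    practices_adopted: List[str]   # best practices the company engages in
    practices_surveyed: List[str]  # full list of practices asked about


def is_digital_trust_leader(r: CompanySurveyResponse) -> bool:
    """A company is a leader if employees follow all codified ethics
    policies AND it adopts at least half of the surveyed best practices."""
    follows_all_policies = (
        r.follows_data_ethics_policy
        and r.follows_ai_ethics_policy
        and r.follows_general_ethics_policy
    )
    adoption_rate = len(r.practices_adopted) / len(r.practices_surveyed)
    return follows_all_policies and adoption_rate >= 0.5
```

Under this rule, a company that follows every policy but adopts only, say, two of six surveyed practices would not qualify, matching the "at least half" threshold in the definition.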
What digital-trust leaders do differently
A look at the practices of digital-trust leaders shows that their success starts with goal setting. First, they simply set more goals—leaders in digital trust set twice as many trust-building goals (six) as all other organizations. They are also more likely to focus on value-driving goals—particularly, strengthening existing customer relationships and acquiring new customers by building trust and developing competitive advantage through faster recovery from industry-wide disruptions (Exhibit 4).
As digital-trust leaders pursue these goals, they are more likely to mitigate every single digital risk we asked about, from the most obvious, such as cybersecurity, to the less so, such as those associated with cloud configuration and migration (Exhibit 5).
And while, by definition, digital-trust leaders engage in at least half of all the AI, data, and cybersecurity practices we asked about, they are also about twice as likely to engage in any—and every—single one (Exhibit 6).
About the research
The data for this article were obtained through two global online surveys: one answered by business leaders, the other by consumers. Both were conducted from April to May 2022. The business leader survey included responses from 1,333 senior business executives (one-third of whom were CEOs) across 27 industries in 20 countries, including Australia, Brazil, Colombia, Germany, India, Indonesia, Pakistan, Singapore, Spain, the United Kingdom, and the United States. The consumer survey included responses from 3,073 adults from the same countries. The data were adjusted to better fit the survey sample to population estimates within each country using age and gender weights globally and, in the United States only, by weighting for region, income, and ethnicity.
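The weighting adjustment described above is a standard post-stratification technique: each respondent receives a weight equal to their demographic cell's population share divided by that cell's sample share. The sketch below is a minimal illustration of that calculation; the cell labels and population shares are invented for the example and are not drawn from the survey itself.

```python
# Minimal post-stratification weighting sketch. Cell definitions and
# population shares here are illustrative, not the survey's actual values.
from collections import Counter
from typing import Dict, List


def poststratification_weights(
    sample_cells: List[str], population_shares: Dict[str, float]
) -> List[float]:
    """Return one weight per respondent so that the weighted sample
    matches the population distribution over demographic cells
    (for example, age-by-gender cells)."""
    n = len(sample_cells)
    sample_counts = Counter(sample_cells)
    return [
        population_shares[cell] / (sample_counts[cell] / n)
        for cell in sample_cells
    ]


# Example: the sample over-represents cell "A"; the population is 50/50.
cells = ["A", "A", "A", "B"]
weights = poststratification_weights(cells, {"A": 0.5, "B": 0.5})
# Each "A" respondent is down-weighted (~0.667) and the lone "B"
# respondent is up-weighted (2.0), restoring the 50/50 population split.
```

After weighting, the weighted share of each cell equals its population share, which is the "better fit to population estimates" the methodology note describes.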