Study: 73% of consumers trust content created by generative AI

Artificial Intelligence (AI) has been the buzzword of 2023, with many exploring generative AI tools. In tandem, however, conversations have arisen about the safety of these tools as machines get smarter. Surprisingly, a recent survey found that 70% of Singaporean consumers, compared to 73% of consumers globally, trust content created by generative AI. This spans many aspects of life, from financial planning and medical diagnoses to relationship advice.

It was also found that 70% of consumers globally use generative AI tools to seek recommendations for new products and services, and that 65% of Singaporean consumers, compared to 64% globally, are open to making purchases based on these recommendations.

These were the results of Capgemini Research Institute’s latest report, titled ‘Why consumers love generative AI’, which explores how consumers globally are using generative AI applications and how the technology could be the key to accelerating society’s digital future.

The survey found that consumers who use generative AI frequently are most satisfied with chatbot, gaming, and search use cases. However, generative AI platforms are also being used for personal, day-to-day activities, and consumers seem to trust AI for these activities as well.

Over half of the respondents (53%) trust generative AI to assist with financial planning, while 67% of consumers globally noted that they could benefit from receiving medical diagnoses and advice from generative AI. Another 63% indicated that they are excited by the prospect of generative AI aiding more accurate and efficient drug discovery.

Additionally, two-thirds (66%) of consumers surveyed revealed that they would be willing to seek advice from generative AI on personal relationships or life and career plans, with Baby Boomers (70%) the age group most likely to use it for this purpose.

With this increase in trust, it was also found that 43% of consumers are keen for organisations to implement generative AI throughout customer interactions, and half of consumers are excited by the highly immersive and interactive experiences that this technology can enable. There is, therefore, a good opportunity for businesses to implement AI in their systems: generative AI is already a go-to for 70% of consumers when seeking recommendations for new products and services, and the majority (64%) are open to making purchases based on these recommendations.

While a significant majority appears to trust the technology, there is much talk about the potential for cyberattacks and deepfakes, as well as concerns regarding the ethics and misuse of generative AI, yet consumers seem largely unconcerned. Capgemini Research Institute noted that consumer awareness of the ethical concerns around generative AI is low, with just 33% worried about copyright issues and even fewer (27%) worried about the use of generative AI algorithms to copy competitors’ product designs or formulas.

It also noted that almost half (49%) of consumers remain unconcerned by the prospect of generative AI being used to create fake news stories, and just 34% of respondents are concerned about phishing attacks. 

“The awareness of generative AI amongst consumers globally is remarkable, and the rate of adoption has been massive, yet the understanding of how this technology works and the associated risks is still very low,” noted Niraj Parihar, the CEO of the insights and data global business line and member of the group executive committee at Capgemini.

“Whilst regulation is critical, business and technology partners also have an important role to play in providing education and enforcing the safeguards that address concerns around the ethics and misuse of generative AI. Generative AI is not ‘intelligent’ in itself; the intelligence stems from the human experts who these tools will assist and support. The key to success therefore, as with any AI, is the safeguards that humans build around them to guarantee the quality of its output,” he said.

The survey comes shortly after OpenAI CEO Sam Altman testified before US lawmakers last month about the risks artificial intelligence (AI) poses and why heavier regulation is needed amidst ethical, legal and national security concerns.

Speaking to the Senate Judiciary subcommittee, Altman, the man behind ChatGPT, noted that AI systems have become incredibly powerful, but that as the technology advances, more people are getting anxious about how it could change the way we live.

He noted that he and his team at OpenAI were concerned about this too. To mitigate the risks, Altman proposed forming a US-based or global agency or committee that would be able to license these AI systems, ensure compliance with safety standards, and have the authority to revoke licenses.
