
Generative AI is being used by a large majority of British teenagers, according to new research.


A recent study by UK media regulator Ofcom found that almost 80 percent of UK teenagers have already used generative AI tools and services.

Specifically, the study found that:

Four in five (79%) online teenagers aged 13-17 have used generative AI tools and services, with a significant minority of younger children aged 7-12 also using the technology (40%).


Adult internet users aged 16 and over are comparatively more reluctant: only 31% have used generative AI. Of the 69% who have never used the technology, almost a quarter (24%) have no idea what it is.

Snapchat's My AI and OpenAI's ChatGPT in the lead

According to the study, Snapchat's My AI is the most popular generative AI tool among kids and teens, used by half (51%) of online 7-17-year-olds. Online teenage girls are the most avid users (75%).

Among internet users aged 16+, ChatGPT is the most used generative AI service (23%). Among internet users aged 7-17, boys are more avid users of ChatGPT than girls (34% vs. 14%).

Internet users aged 16+ use generative AI in a variety of ways: 58% for fun, 33% for work, and 25% for study. The most popular activities are chatting and exploring AI possibilities (48%), followed by searching for information (36%) and seeking advice (22%). Creative applications include writing text such as poems or song lyrics (20%), creating images (20%), videos (9%) and audio (4%). 11% use generative AI for programming tasks.

Gen Z is among the early adopters of generative AI

Yih-Choung Teh, director of strategy and research at Ofcom, said the adoption of new technologies is "second nature to Gen Z". But he also acknowledged concerns about the risks of AI.


The study echoes a trend previously seen in schools and universities: in July, it was revealed that more than 40 percent of UK universities had launched investigations into students for cheating using AI chatbots such as ChatGPT. Since December 2022, there have been nearly 400 investigations into students across 48 institutions.

AI tools could be affected by new online safety regulations

Ofcom stressed that some AI tools will fall within the scope of the new online safety legislation and that it will look at how companies proactively assess the safety risks of their products and take effective measures to protect users from potential harm.

A few days ago, the UK's National Cyber Security Centre and the US Cybersecurity and Infrastructure Security Agency jointly published the first global cybersecurity guidelines for AI to help developers make informed security decisions. The "Guidelines for Secure AI System Development" are divided into four main areas: secure design, secure development, secure deployment, and secure operation and maintenance. Seventeen other countries have pledged their support for the new guidelines.

The AI Safety Summit was also held in the UK in November. Representatives from many countries, including Germany, signed a joint declaration for greater cooperation in the development and regulation of artificial intelligence. The "Bletchley Declaration" highlights the opportunities and risks of AI technology and aims to develop a common, scientific, evidence-based understanding of those risks. The signatories, which include Brazil, Canada, the US, Kenya, Saudi Arabia, and China, also aim to develop risk-based policies to ensure safety in the face of these risks.

Summary
  • Research from UK media regulator Ofcom shows that 79% of UK online teenagers (13-17) and 40% of younger children (7-12) use generative AI tools and services, while adults (16+) are more reluctant (31%).
  • Snapchat's My AI is the most popular generative AI tool among children and young people, while ChatGPT is most used by internet users aged 16 and over.
  • Ofcom notes that some AI tools could be affected by new cybersecurity regulations, and is looking at how companies are proactively assessing security risks and taking steps to protect users.