Artificial intelligence and adolescent well-being

Article:

AI tools can significantly enhance student learning and development. For learning, AI can assist in brainstorming, creating, organizing, summarizing, and synthesizing information, and can offer resources and solutions for challenging problems. Research indicates that these capabilities make it easier for students to understand and retain key concepts. In terms of cognitive development, AI promotes growth through advanced questioning techniques that can stimulate critical thinking, scaffolding that provides step-by-step guidance, and adaptive learning that delivers personalized feedback. If teachers have the skills to leverage AI in appropriate ways and ensure that its use does not override processes necessary for students to learn, these tools can encourage students to explore concepts more deeply, build skills progressively, and develop complex interpersonal skills.

However, it’s crucial for students to be aware of AI’s limitations. AI-generated summaries may not always be accurate, and students must not become overly dependent on AI, which could impede the development of their own knowledge and skills. Additionally, AI may miss subtle verbal and nonverbal cues that convey important emphases and messages. To maximize AI’s benefits, students should actively question and challenge AI-generated content and use AI tools to supplement rather than replace existing strategies and pedagogical approaches. This necessitates active learning, in which students interact with information and construct their own knowledge, an approach that research links to better academic outcomes.

Limit access to and engagement with harmful and inaccurate content

As noted in APA’s recent video content recommendations, exposure to harmful content is associated with increased risk of anxiety, depression, and other mental health problems. Adolescents who are routinely exposed to violence and graphic content may become desensitized to it or traumatized by it, and such exposure can normalize harmful behaviors and attitudes.

Research also suggests that repeated exposure to misinformation makes it more likely to be believed and contributes to its spreading. This repeated exposure may hinder analytic thinking skills and make adolescents even more susceptible to misinformation.

  • Developers of AI systems accessible to youth should use robust protections to prevent and mitigate youth’s exposure to harmful content. This content includes but is not limited to material that is inappropriate for their age, dangerous, illegal, or biased and/or discriminatory, or that may trigger imitative harmful behavior among vulnerable youth.
  • User reporting and feedback systems should be in place to allow adolescents and caregivers to customize content restrictions based on their specific needs and sensitivities.
  • Educational resources should be provided to help adolescents and their caregivers recognize and avoid harmful content and to understand the associated risks of engaging with AI tools.
  • Collaboration with mental health professionals, educators, and psychologists is essential to ensure content filtering mechanisms are effective and appropriate.

Accuracy of health information is especially important

Accurate health information is especially crucial for adolescents because they are in a critical stage of physical and psychological development. Data show that young people often seek out health information online. Misinformation or incomplete information can lead to harmful behaviors, misdiagnoses, and delayed or incorrect treatment, among other negative possibilities, which can have serious impacts on well-being.

  • AI systems that provide health-related information or recommendations to youth, including those using generative or interactive AI, should ensure the accuracy and reliability of health content and/or provide explicit and repeated warnings that the information may not be scientifically accurate. This includes recognizing that information from publications that purport to offer empirically based content, or from self-identified authoritative sources, varies significantly in quality and accuracy and should not be weighted equally in the training of AI models.
  • AI systems should include clear disclaimers to prominently and clearly warn young users that AI-generated information is not a substitute for professional health advice, diagnosis, or treatment, and that relying on unverified AI-generated health information is ill-advised.
  • AI platforms should provide resources and reminders to adolescents to contact a human (e.g., an educator, school counselor, pediatrician, or other authority) or validated resource to verify the information obtained online and to ensure proper next steps.
  • Parents and educators should continually remind adolescents that the content they find online and from AI may be inaccurate, may be designed to persuade, and could be harmful.

Protect adolescents’ data privacy

AI systems that collect or process data from adolescents must prioritize their privacy and well-being over commercial profit. This requires maximizing transparency and user control and minimizing potential harm associated with data collection, use, misuse, and manipulation. Platforms should limit the use of adolescents’ data for targeted advertising, personalized marketing that exploits their immature brain development, the sale of user data to third parties, or any purpose beyond that for which it was explicitly collected.

Transparency in data collection and usage, presented in a clear, comprehensible, and user-centered manner, along with obtaining informed consent from users and caregivers, is essential. Furthermore, stakeholders must recognize that data collected by AI, including biometric and neural information from emerging technologies, can provide insights into mental states and cognitive processes. AI systems must safeguard this sensitive information and uphold adolescents’ basic right to privacy.

Protect likenesses of youth

The misuse of adolescents’ likenesses (e.g., images, voices) can lead to the creation and dissemination of harmful content, including cyberhate, cyberbullying, and sexually abusive material such as “deepfakes” and nonconsensual explicit images. These practices can have severe psychological and emotional impacts on young individuals, including increased risk of depression, anxiety, and suicide-related behaviors.

  • AI platforms and systems must implement stringent restrictions on the use of youths’ likenesses to prevent the creation and dissemination of harmful content. These restrictions must cover both content uploaded to AI platforms and content generated by them. Mechanisms for monitoring compliance and enforcing these restrictions should be established to ensure adherence.
  • Parents, caregivers, and educators should teach youth about the dangers of posting images online and about strategies for responding when they encounter images of their peers or themselves that may be disturbing, inappropriate, or illegal.
  • Educators should consider policies to manage the creation and proliferation of hateful AI-generated content in schools.

Empower parents and caregivers

Parents and caregivers play a vital role in guiding and protecting adolescents as they navigate AI technologies. However, they often have limited time or capacity to learn about the age appropriateness, safety, prevalence, and potential risks and benefits of these technologies.

  • Industry stakeholders, policymakers, educators, psychologists, and other health professionals should collaborate to develop and implement readily accessible, user-friendly resources that provide clear guidance on the age appropriateness, safety, and potential risks and benefits of AI technologies accessible to youth, as well as guidance on how best to talk with young people about AI. These resources should extend beyond simple ratings and incorporate detailed explanations of data collection practices, algorithmic biases, and the potential for manipulative or addictive design elements.
  • These resources should include customizable, accessible parental control settings and interactive tutorials on identifying and mitigating online risks. They should be analogous to existing rating systems for movies, video games, and music, offering a concise and easily understandable way for adults to make informed decisions about their children’s AI interactions without requiring extensive individual research. Crucially, these materials must be regularly updated to reflect the rapidly evolving AI landscape, and they should be paired with default settings and parental tools that empower caregivers to easily set parameters for their adolescent’s use of AI-based technologies and to maintain visibility into potentially harmful interactions.

Implement comprehensive AI literacy education

AI literacy is essential for adolescents and those who support and educate them to navigate the increasingly AI-driven world. Understanding AI’s workings, benefits, limitations, and risks is crucial for making informed decisions and using AI responsibly. This education must equip young people with the knowledge and skills to understand what AI is, how it works, its potential benefits and limitations, privacy concerns around personal data, and the risks of overreliance. Crucially, this education must include a specific focus on algorithmic bias: how biases can be embedded in AI systems due to skewed training data, flawed model design, or unrepresentative development and testing teams. Young users should understand how these biases can lead to incomplete or inaccurate information that perpetuates myths, untruths, and/or antiquated beliefs. These biases can even lead to discriminatory or inequitable information, particularly regarding vulnerable groups. Education should include tips on how to critically evaluate AI-generated outputs and interactions to identify and challenge potential bias. The overall goal is to empower youth to use AI safely, responsibly, critically, and ethically. A multipronged, multistakeholder approach is necessary.

  • Educators should integrate AI literacy into core curricula, spanning computer science, social studies, and ethics courses; provide teacher training on AI concepts, algorithmic bias, and responsible AI use; offer hands-on learning experiences with AI tools and platforms, emphasizing critical evaluation of AI-generated content; and facilitate discussions on the ethical implications of AI, including privacy, data security, transparency, possible bias, and potential societal impacts.
  • Policymakers should develop national- and state-level guidelines for AI literacy education, allocate funding for research and development of AI literacy resources and teacher training programs, enact legislation that mandates age-appropriate AI literacy education in schools, and promote public awareness campaigns about AI’s potential risks and benefits.
  • Technology developers should create transparent and accessible explanations of AI algorithms and data collection practices; develop educational tools and resources that help users understand how AI systems work, including explanations of algorithmic bias; collaborate with educators to develop age-appropriate AI literacy curricula; incorporate bias detection and mitigation tools into AI platforms; and provide simple, easy-to-use mechanisms for users to report suspected bias.
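The mechanism by which skewed training data produces biased outputs, as described above, can be made concrete with a deliberately simplified sketch. All data here is invented, and the “model” is a toy (it simply memorizes the majority outcome per group), but it shows how a group that is scarce or unrepresented in training data ends up with systematically distorted predictions:

```python
from collections import Counter

# Invented toy data: (group, outcome) pairs. Group "B" is badly
# underrepresented, and its few examples happen to skew toward one outcome.
training_data = (
    [("A", "positive")] * 50 + [("A", "negative")] * 40 +
    [("B", "negative")] * 9 + [("B", "positive")] * 1
)

def train_per_group_majority(data):
    """A trivial 'model': memorize the majority outcome for each group."""
    by_group = {}
    for group, outcome in data:
        by_group.setdefault(group, Counter())[outcome] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

model = train_per_group_majority(training_data)
# The model now predicts "negative" for every member of group B, not
# because that reflects the real population, but because the training
# sample for B was tiny and unrepresentative.
print(model)
```

Real AI systems are vastly more complex, but the failure mode scales: whatever patterns dominate the training data, including its gaps and imbalances, are reproduced in the outputs, which is why critically evaluating AI-generated content matters.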

Prioritize and fund rigorous scientific investigation of AI’s impact on adolescent development

To comprehensively understand the complex interplay between AI technologies and adolescent well-being, a significant and sustained investment in scientific research is imperative. This necessitates:

  • Longitudinal studies: Funding for extended longitudinal research to track the developmental trajectories of adolescents interacting with AI over time, using research designs that enable the identification of causal relationships and long-term effects;
  • Diverse population studies: Expanded research to include younger children and marginalized and vulnerable populations, ensuring that findings are generalizable while addressing the unique vulnerabilities of certain groups;
  • Data accessibility and transparency: Development and implementation of mechanisms for independent scientists to access relevant data, including data held by technology companies, to facilitate thorough and unbiased examination of the associations between AI use and adolescent development. This includes data pertaining to algorithmic functions, content moderation, and user engagement metrics;
  • Interdisciplinary collaboration: Fostering collaboration among psychologists, neuroscientists, computer scientists, ethicists, educators, public health experts, youth, and parents/caregivers to develop a comprehensive understanding of the multifaceted impacts of AI.

Source: An APA health advisory https://www.apa.org/topics/artificial-intelligence-machine-learning/health-advisory-ai-adolescent-well-being