What happens to a child’s developing brain when some of its earliest attachments are formed through a relationship with an AI chatbot? New testimony from the American Psychological Association, presented before the U.S. Senate Subcommittee on Crime and Counterterrorism Sept. 16, suggested the practice poses real dangers to early childhood development.
The internal working model describes how the attachments we form unconsciously shape the way we see ourselves and others in the world. It is the basis for how we behave in relationships: formed young but present across the lifespan. These models usually develop through interactions with our caregivers, teachers and peers, all of them real human beings capable of empathy and connection.
Experts share this foundational concern, among them Christina Sladkowski, a clinical trauma counselor with the CSU Trauma and Resilience Assessment Center.
“If a child forms a relationship, whether it’s romantic or platonic, with a computer, you’re not getting what a human interaction — with all its flaws and glory, all encompassed within it — (can do),” Sladkowski said. “And so it becomes a place-saver and, in a way, takes away a really valuable tool for the child.”
Sladkowski said 90% of our brain development happens before the age of 5, a critical window in which we learn how to navigate the world and relationships.
“I could see their inner working model not have the same flexibility and problem-solving comprehension skills to deal with the complexities of everyday interactions when AI, in a way, filters it out, simplifies it and becomes too much of a convenience for the user to really use their own agency and thinking skills to navigate a conversation, navigate a relationship,” Sladkowski said.
When we form an attachment with another human, especially in early childhood, we activate the brain’s limbic system, which is responsible for our ability to emotionally regulate and contains anatomic structures that produce dopamine and oxytocin. However, it also houses our fight or flight response to traumatic situations.
Tasha Seiter is a licensed marriage and family therapist in private practice in Fort Collins and holds a Ph.D. in applied developmental science.
“My hypothesis is that maybe there’s this quick effect of it alleviating some stress in the limbic system but (has) almost a training effect that could make normal human interaction a little bit more challenging,” Seiter said.
Seiter said human social bonds reduce activity in the limbic system because we know someone has our back, and we don’t need to be as wary. However, when a chatbot seeks to please a user, is always available and provides an immediate reply, it can disrupt the limbic system’s ability to regulate itself or to distinguish between human and AI relationships.
“I think that could train the limbic system to potentially expect something like that in normal human relationships, which are messy and not completely within our control,” Seiter said. “We’re not always being bowed down to in our human relationships, and we shouldn’t be.”
ChatGPT publicly debuted only in November 2022, so the long-term impacts of AI on child development are not yet known. In the meantime, Seiter and Sladkowski said there are many things parents can do in their own homes to mitigate risks and promote healthy boundaries with AI that will last through adolescence and into adulthood, chief among them choosing human relationships over chatbot interactions.
“It’s kind of like teaching your kid to swim,” Seiter said. “You wouldn’t look at the river and be like, ‘Wow, that’s just horrible; I can’t even believe that river exists,’ because AI is going to be here, and even if we try to fight it, it’s going to be around. And so instead of cursing the river, teach your kid how to swim.”
Sladkowski recommended that parents, and users in general, exercise informed consent over the artificial intelligence tools they use. Knowing how the AI was made, how it uses your data and how safety regulations apply to its use is particularly important.
“(It’s important to have) some sort of education on how these algorithms work and are learning from you, in real time, how to spew answers,” Sladkowski said. “So I think as an individual and even as parents supporting their children, that education piece has to be essential because we can look at it as a tool if we’re educated and if we’re aware of how that tool can be used for good.”
The APA’s testimony reaffirmed these suggestions and offered regulatory recommendations to better protect and educate children. These include implementing developmentally appropriate models designed for different age groups as well as requiring rigorous independent testing before AI systems can be accessed by children.
It also recommended measures like safe-by-default designs, investment in independent research and promotion of digital literacy programs in schools for using AI in a healthy and ethical manner.
“We must equip young people with the skills to navigate this new world,” the testimony reads. “Congress should authorize and fund the development and implementation of comprehensive AI literacy programs in schools.”
While AI’s development continues to accelerate, the technology’s presence in children’s lives cannot be fully separated from their developing identities, a tension that parents, psychologists and students are currently grappling with.
“We’re still learning in real time,” Sladkowski said. “It’s hard to even name the impacts because I think we’re still experiencing the first hit of it all on every generation really, just not even the kids right now, but how it affects all of us in relationships and everyday life.”
Reach Caden Proulx at science@collegian.com or on social media @cadenpru.
