
I have blogged about artificial intelligence’s (AI) potential effects on closeness for years. When I publish these posts, I run the risk of being called an AI doomer. Likewise, I am aware of this label every time I lecture on AI. “AI doomer” is a dismissive term for people who worry about the possible negative effects of artificial intelligence. This labeling is what bothers me the most. Not because it hurts my feelings (although it is offensive), but because of the effect it has on all of us.
The doomer label shuts down conversations we need
When we label someone an AI doomer, we do more than disagree with their assessment. We dismiss them as unthinking, irrationally negative, and dramatic. In a culture that values positive thinking and optimism, being called a doomer carries real social costs. It signals that you are the problem, not the issue you are raising.
And that’s dangerous, because it prevents us from having honest conversations about real risks at a time when those conversations are most needed. Some people with legitimate concerns simply stop talking. They see what happened to others who spoke up, decide it’s not worth the social cost, and keep quiet. When they go silent, we lose their perspective, experience, and questions. We also risk becoming collectively clueless about the challenges ahead.
I see a similar dynamic in my clinical practice. In couples therapy, when people are afraid to voice concerns because doing so makes them seem negative or difficult, their partners remain in denial about the problem. As a result, problems grow and relationships deteriorate. The same destructive process can play out at the societal level.
Why do we reach for this label?
As a clinical psychologist, I recognize the doomer label as a defense mechanism. When people feel helpless about a problem that doesn’t seem to have a good solution, the worry can become unbearable. The brain seeks relief, and dismissing the person raising the alarm provides immediate comfort. If the speaker is just being irrational, just a doomer, then you don’t actually have to engage with the uncomfortable information. You don’t have to feel scared or helpless.
Calling someone a doomer soothes us in the moment. But defense mechanisms, while protecting us from immediate discomfort, can keep us from solving real problems. The anxiety goes away. The danger does not.
What do we lose when we dismiss concern?
Think about what would happen if we routinely dismissed people who raise alarms in other domains. If we had called the people who warned us about the danger of wildfires doomers, we would have lost more homes, people, and forests. If we had dismissed concerns about guns in schools as fatalistic talk, we wouldn’t have lockdown drills or any measures to protect children. What if we had called the people who demanded safety regulations at nuclear power plants doomers? I could go on. In each of these cases, the people who expressed concern were not needlessly negative. They were realistic about the risks and tried to prevent harm. And we are all better off because we listened, even when it was uncomfortable to consider the risks we faced.
Stakes with AI
We are in the midst of a massive technological shift that is already reshaping how people connect, how they seek support, and how they understand intimacy and relationships. Research shows that people, especially young people, are turning to AI for companionship. Some find it more loving and satisfying than human interaction.
While I am concerned about what this means for human connection, I am not saying that AI should be banned or that the technology is bad. I am saying we need to think carefully about what we’re building and what we’re losing in the process. I am saying we need to have tough conversations about preserving human intimacy and wisdom (if you value them, as I do) in a world where algorithms increasingly shape our thoughts, actions, and goals.
But we’ll have fewer of these critical discussions if everyone who raises these concerns is labeled a doomer. We’ll simply sleepwalk into the future that technology creates without ever asking whether we want that future.
A better way forward
So here’s what I’d like to suggest: Before you dismiss someone’s concerns about AI as doomerism, ask yourself what you’re defending against. Are their concerns really unfounded? Or does it feel unbearable to sit with the uncertainty? Do you disagree with their analysis, or are you trying to quiet your own anxiety?
I’m not asking anyone to embrace despair. I’m asking that we have honest conversations about the technologies that are changing how we connect, how we think, and who we become. Those conversations won’t happen if we keep shooting the messengers.
The future of human intimacy and connection is being decided right now, in real time, by the choices being made in technology companies and in our daily behaviors. We can be part of that decision, or we can dismiss everyone who raises concerns as a doomer and wake up one day in a world we didn’t choose.
