Why Social Media Algorithms Are a Public Health Issue Now
The debate about social media and mental health has been running for a decade. The research has now caught up with it, and the picture is sharper than it used to be.
The harm is not social media use broadly. It is specific: heavy algorithmic feed consumption, particularly among adolescent girls, correlates meaningfully with depression, anxiety, and disordered eating. The correlation survives controls for pre-existing conditions and reverse causality in the most rigorous studies now available. It is not a proven causal chain in every case, but it is strong enough that the “no evidence of harm” position is no longer defensible.
The mechanism is the algorithm, not the content specifically. Platforms are optimized for engagement, and engagement is highest when emotional arousal is highest. Anger, anxiety, social comparison, and outrage keep people scrolling longer than contentment does. The algorithm was not designed to harm teenagers; it was designed to maximize time on platform. The harm is a byproduct of that optimization applied to developing brains during the most socially sensitive period of their lives.
What makes this a public health issue rather than a parenting issue is scale and information asymmetry. Parents do not know what the algorithm is serving their children, cannot access the engagement data platforms hold, and are making decisions with partial information against systems designed by teams of engineers to be maximally compelling.
Legislative responses, such as age verification requirements and algorithm transparency mandates, are moving through multiple jurisdictions. They are imperfect and often easy to circumvent. But the alternative, treating this as a private consumer choice problem, is no longer consistent with what the evidence shows.
The question has moved from “is there harm” to “who is responsible for it.”