Introduction
The paper examines the controversy surrounding social media algorithms and their role in creating filter bubbles that harm adolescent mental health. Recent events highlight the issue’s urgency: a New York Times article from March 25, 2026, reported a landmark trial verdict holding a major social media platform liable for contributing to youth mental health crises through addictive algorithmic designs, a case that underscores society’s growing recognition of platform accountability. The thesis of this analysis is that social media recommendation algorithms are not neutral technical systems but socially constructed tools that impose measurable psychological harm on youth by amplifying addictive engagement patterns and limiting informational diversity, and that platform designers, policymakers, and educators must intervene through structural reforms and digital literacy education to protect adolescent well-being. The discussion applies Wiebe Bijker’s Social Construction of Technology (SCOT) framework to reveal how these algorithms embed political choices, and Mark Ackerman’s concept of the social-technical gap to explain why technical fixes alone fall short. The paper draws on scholarly sources to build the argument and concludes with policy recommendations.
Background on the Controversy and Connection to Course Lens
Social media recommendation algorithms operate by analyzing user data to curate personalized content feeds on platforms such as TikTok, Instagram, and YouTube. These systems prioritize content that maximizes user engagement, a design that tends to produce filter bubbles and echo chambers. Filter bubbles refer to the algorithmic isolation of users within ideologically homogeneous content, while echo chambers amplify similar viewpoints, reducing exposure to diverse perspectives. Algorithms achieve this by leveraging metrics such as likes, shares, and watch time, creating dopamine reward loops that encourage prolonged interaction. Users receive instant gratification from tailored content, which triggers neurochemical responses similar to those in gambling or substance use.
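To make the engagement logic described above concrete, the following minimal sketch shows how a ranking function might score candidate posts purely on predicted engagement signals. It is an illustrative assumption, not any platform’s actual code; the field names, weights, and the familiarity boost are invented for exposition.

```python
# Illustrative sketch only: field names, weights, and the familiarity boost are
# assumptions for exposition, not any platform's real ranking code.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_watch_seconds: float  # model's estimate of how long this user will watch
    predicted_like_prob: float      # estimated probability the user likes the post
    predicted_share_prob: float     # estimated probability the user shares the post
    similarity_to_history: float    # 0..1, closeness to what the user already consumes

def engagement_score(post: Post) -> float:
    """Score a candidate post purely on expected engagement signals."""
    base = (
        0.5 * post.predicted_watch_seconds / 60.0
        + 0.3 * post.predicted_like_prob
        + 0.2 * post.predicted_share_prob
    )
    # Content similar to past consumption gets a further boost, which is one
    # simple way a feed can drift toward a filter bubble.
    return base * (1.0 + post.similarity_to_history)

def rank_feed(candidates: list[Post], k: int = 10) -> list[Post]:
    """Return the top-k posts by engagement score; nothing here rewards diversity."""
    return sorted(candidates, key=engagement_score, reverse=True)[:k]
```

Because nothing in such a scoring function rewards novelty or viewpoint diversity, content resembling what a user already engages with keeps winning, which is the structural basis of the filter bubbles described above.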
This setup connects directly to the Social Construction of Technology (SCOT) framework outlined by Bijker (2008). SCOT posits that technologies do not carry fixed, intrinsic meanings but acquire meaning through social construction by relevant social groups. Interpretive flexibility allows different groups to assign varied meanings to the same artifact, while technological frames guide how groups perceive and develop technologies. In the case of social media algorithms, engineers and advertisers form key relevant social groups. These groups construct algorithms with frames emphasizing profit-driven engagement metrics over user well-being. Bijker emphasizes that artifacts embed political choices, meaning algorithms reflect the values of dominant groups, such as corporate interests in data monetization, rather than broader societal needs.
Evidence from Ahmmad et al. (2025) documents how these algorithmic systems structurally amplify ideological homogeneity among youth. Their systematic review shows that recommender systems limit viewpoint diversity by prioritizing sensational or confirmatory content, constraining youth agency through opaque operations. Adolescents often lack the tools to navigate these systems effectively, leading to reduced informational diversity. Furthermore, Masri-zada et al. (2025) detail the neurological and mental health impacts. They explain that dopaminergic reward loops foster compulsive use, exacerbating conditions such as ADHD, depression, anxiety, and body dysmorphia, and intensifying harms such as cyberbullying. For instance, constant exposure to idealized images on Instagram contributes to body image issues, while TikTok’s short-form videos promote attention fragmentation. These harms affect adolescents disproportionately because their developing brains are more susceptible to addictive patterns. The controversy arises because platforms claim neutrality, yet their designs prioritize engagement at the expense of mental health, as seen in rising rates of youth anxiety linked to social media use.
The SCOT lens reveals that these problems stem from social construction rather than technological inevitability. Relevant social groups like platform engineers interpret algorithms as tools for optimization, while advertisers view them as revenue engines. This construction marginalizes adolescents, who experience the harms but lack influence in design processes. Bijker’s framework highlights the politics of artifacts, where algorithms encode choices that favor economic gains over psychological safety. In connection with course readings on the information society, the controversy illustrates how digital technologies shape social realities, often reinforcing inequalities in access to diverse information.
The Social, Moral, and Political Side: Applying SCOT and the Social-Technical Gap
The analysis takes the position that social media algorithms inflict moral and political harms by design, not accident, and society must address this through a critical lens. Bijker’s SCOT framework demonstrates that these algorithms result from specific social constructions, where harms emerge from the exclusion of affected groups in technological development. Relevant social groups, such as engagement-optimization engineers and advertisers, dominate the technological frames, prioritizing metrics like time spent on platform over mental health considerations. This setup embeds political choices that favor corporate profits, leading to addictive patterns that exploit youth vulnerabilities. For example, algorithms amplify content that triggers emotional responses, creating cycles of engagement that limit exposure to diverse ideas and foster psychological distress.
Ackerman’s (2000) concept of the social-technical gap strengthens this argument by explaining why purely technical solutions fail to mitigate these harms. The gap describes the mismatch between complex social requirements and the limitations of technical feasibility. Human social life involves nuance, flexibility, and context-dependency that current algorithmic systems cannot fully capture. Content moderation tools, for instance, rely on automated filters that struggle with subtle cultural contexts or evolving social norms. Ackerman argues that technologies often oversimplify social dynamics, leading to incomplete solutions. In social media, this gap manifests when algorithms attempt to curb harmful content but inadvertently reinforce filter bubbles by misinterpreting user preferences. Technical fixes like algorithm tweaks cannot bridge the gap because they ignore the socially constructed nature of the problem.
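A toy example can make the gap tangible. The sketch below implements a deliberately naive keyword-based moderation filter; the blocked-word list and test comments are assumptions invented for illustration, not a description of any platform’s moderation pipeline. It shows how a rule that ignores context produces both false positives and false negatives, which is exactly the kind of mismatch Ackerman describes.

```python
# A deliberately naive moderation filter, written only to illustrate the
# social-technical gap. The blocked terms and example comments are invented.

BLOCKED_TERMS = {"kill", "hate", "loser"}

def naive_filter(comment: str) -> bool:
    """Flag a comment if it contains any blocked term, ignoring all context."""
    words = {word.strip(".,!?").lower() for word in comment.split()}
    return bool(words & BLOCKED_TERMS)

# Context the rule cannot see:
print(naive_filter("I could kill for a slice of pizza"))  # True: harmless idiom flagged
print(naive_filter("Nobody would miss you if you left"))  # False: veiled bullying passes
```

Richer classifiers narrow this gap but do not close it, because the judgment of what counts as harmful remains social, contested, and context-dependent.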
Ahmmad et al. (2025) support this view by showing that youth exhibit only partial agency in navigating algorithmic feeds. Opaque systems obscure how recommendations form, leaving adolescents trapped in echo chambers that homogenize their worldviews. This constraint raises moral concerns, as platforms profit from youth engagement while externalizing psychological costs. From a political perspective, the exclusion of adolescents as a relevant social group in SCOT terms represents a power imbalance. Bijker (2008) notes that artifacts carry politics, and here, algorithms enforce a form of control that limits informational diversity, potentially stifling democratic discourse among young people.
The moral dimension emerges in the measurable harms to adolescent well-being. Masri-zada et al. (2025) provide evidence of neurobiological effects, including heightened anxiety from cyberbullying amplified by algorithmic curation. Platforms’ designs conflate engagement with value, ignoring how this harms developing minds. The paper therefore rejects the view of algorithms as neutral; they impose socially constructed burdens that demand accountability. Ackerman’s gap underscores that no amount of technical refinement will suffice without addressing social nuances, such as the need for inclusive design processes. Overall, this position evaluates the controversy as a failure of moral and political responsibility, where dominant groups shape technologies to the detriment of vulnerable users.
Prescription and Recommendations
To address the controversy, a multi-pronged approach combines structural reforms with educational and therapeutic interventions. First, platform governance reform requires algorithmic transparency and auditability. Policymakers should mandate disclosures of how recommendation systems function, allowing external audits to identify biases. Ahmmad et al. (2025) point to the European Union’s Digital Services Act, which imposes accountability on platforms for systemic risks, including youth harms, as a model. Implementing similar regulations globally would force companies to prioritize well-being in design, broadening the relevant social groups in SCOT terms to include regulators and youth advocates.
Second, youth-centered digital literacy programs belong in school curricula. Educators must teach adolescents to recognize filter bubbles and critically evaluate algorithmic content. Ahmmad et al. (2025) and Masri-zada et al. (2025) endorse school-based social-emotional learning (SEL) programs that build resilience against addictive patterns. These initiatives would empower youth with skills to diversify their information sources, countering the agency constraints noted in the literature.
Third, recommender systems need redesign to incorporate adolescent well-being as a core criterion. Drawing on Bijker’s SCOT, platform designers should expand technological frames to involve diverse groups, such as mental health experts and young users, in development. This shift would embed values like informational diversity, reducing the amplification of harmful content.
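As a sketch of what such a redesign could mean in practice, the function below re-ranks an engagement-ordered feed so that topics already shown are progressively discounted. The greedy re-ranking step, the topic labeling, and the 0.4 weight are assumptions made for illustration; they indicate one possible direction, not a validated design or any platform’s actual mechanism.

```python
# Hypothetical re-ranking sketch: trades a little predicted engagement for topic
# diversity. The weight and the topic_of labeling are assumptions for illustration.
from collections import Counter

def rerank_for_diversity(posts, topic_of, diversity_weight=0.4):
    """Greedily rebuild a feed, discounting topics that are already over-represented.

    `posts` is assumed to arrive ordered best-first by an engagement score, and
    `topic_of(post)` is assumed to return a coarse topic label for each post.
    """
    feed, shown_topics = [], Counter()
    remaining = list(posts)
    while remaining:
        # Higher ordinal rank is better; repeated topics pay a growing penalty.
        best = max(
            remaining,
            key=lambda p: (len(remaining) - remaining.index(p))
            - diversity_weight * shown_topics[topic_of(p)],
        )
        feed.append(best)
        shown_topics[topic_of(best)] += 1
        remaining.remove(best)
    return feed
```

Even a small adjustment like this changes what the artifact values: the ranking now encodes informational diversity alongside engagement, which is precisely the kind of frame expansion SCOT describes.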
Fourth, for cases where harm occurs, parental guidance and cognitive behavioral therapy (CBT) interventions offer support. Masri-zada et al. (2025) recommend digital detox strategies and CBT to manage compulsive use, alongside parental controls that limit exposure. Schools and communities can facilitate these through workshops.
Ackerman (2000) reminds readers that the social-technical gap means no perfect technical fix exists; combining structural reforms with social mechanisms therefore offers more comprehensive protection. These recommendations address the thesis by intervening in the social construction of algorithms, promoting a balanced information society.
Conclusion
The controversy over social media algorithms is not just about screen time but about the values embedded in socially constructed systems. The analysis shows these systems harm adolescent mental health through addictive designs and limited informational diversity, as explained by SCOT and the social-technical gap. The path forward demands accountability from designers, policymakers, and educators via reforms and education. Recognizing platforms as constructed artifacts enables society to reshape them for well-being, fostering a healthier information environment for youth.
References
- Ackerman, M. S. (2000). The intellectual challenge of CSCW: The gap between social requirements and technical feasibility. Human-Computer Interaction, 15(2-3), 179-203.
- Ahmmad, M., Shahzad, K., Iqbal, A., & Latif, M. (2025). Trap of social media algorithms: A systematic review of research on filter bubbles, echo chambers, and their impact on youth. Societies, 15(8), 301.
- Bijker, W. E. (2008). Technology, social construction of. In The international encyclopedia of communication (pp. 1-10). Wiley-Blackwell.
- Masri-zada, T., et al. (2025). The impact of social media & technology on child and adolescent mental health. Journal of Psychiatry and Psychiatric Disorders, 9(2), 111-130.
- The New York Times. (2026, March 25). Social media trial verdict highlights platform liability for youth harms.

