Teen Testifies Tech Engineered Her Addiction
A Los Angeles jury is hearing a bellwether trial in which a 20-year-old woman is suing major social media companies, alleging that the companies deliberately designed platform features to encourage addictive use and that those features harmed her mental health while she was a minor.
The plaintiff testified that she began using YouTube at about age 6 without an account, created a YouTube account at about age 8 by entering a false age, uploaded roughly 200 videos before turning 10, and later turned to Instagram, which some testimony described her beginning to use at ages 9–10. She described long, compulsive periods of use: checking notifications during school and at night, sometimes going to the bathroom to look at her phone, and being kept on YouTube for hours by an autoplay feature that suggested related videos. She said a steady stream of Instagram notifications prompted frequent phone checking into the night and that fear of missing out kept her constantly engaged. Court exhibits included a nearly 35-foot-long banner of her Instagram photos and childhood posts, as well as usage records showing that she once spent more than 16 hours on Instagram in a single day. She said heavy use worsened her anxiety, depression, body-image problems and self-harm, and that attempts to limit her use or separate from the apps were unsuccessful; she also described using multiple accounts and services to boost likes and create an appearance of popularity.
A treating therapist testified about clinical notes documenting social phobia, body dysmorphia, problems at home and school, bullying on social media, and that the plaintiff’s mood was closely tied to her social media activity during treatment. The therapist also noted that “social media addiction” is not an official diagnosis in the Diagnostic and Statistical Manual of Mental Disorders and did not formally diagnose a social media addiction; the therapist said social media likely contributed to, but did not solely cause, the plaintiff’s anxiety.
Meta (owner of Instagram) and Google-owned YouTube are named defendants in the trial; TikTok and Snap previously reached settlements with the plaintiff before this bellwether trial. The companies deny the allegations. Meta and YouTube have said they implemented measures intended to protect young users, including age-appropriate account options and parental controls. Meta has argued that other factors, including alleged abuse and neglect at home cited in court filings, contributed to the plaintiff’s mental-health struggles and that some challenges predated social media use. Defense attorneys emphasized those prior difficulties and portrayed the platforms as coping mechanisms for the plaintiff.
High-profile witnesses have testified or been called, including Meta CEO Mark Zuckerberg, Instagram head Adam Mosseri and YouTube engineering vice president Cristos Goodrow. While defending their platforms’ efforts to protect minors, Mosseri and Zuckerberg acknowledged the difficulty of distinguishing problematic use from clinical addiction and of verifying users’ ages. Goodrow described a shift in YouTube’s measurement language away from maximizing time toward “time well spent.” Defense attorneys noted the therapist was not presented as an expert on social media’s effects on the brain.
Plaintiffs’ lawyers and youth advocates have focused legal arguments on specific platform features they say are exploitative of children and adolescents: algorithmically generated endless feeds, reward mechanics that encourage continued use, persistent notifications, cosmetic filters linked to body-image concerns, and weak age verification and parental controls. The central legal question for the jury is whether platform design was a substantial factor in the plaintiff’s mental-health struggles.
The outcome of this bellwether trial could affect more than 1,500 consolidated lawsuits nationwide that make similar claims and may influence potential regulation and industry practices. The trial is expected to continue for several weeks.
Real Value Analysis
Summary judgment: the article mostly reports testimony and lawsuit details and gives little practical help. Below I break that down point by point against the criteria you asked for.
Actionable information
The article provides almost no actionable steps a typical reader can use right away. It reports ages, behaviors, features (autoplay, notifications, filters), and legal positions, but it does not give clear instructions for parents, teens, educators, or clinicians about what to do next. The mentions of parental controls and age-appropriate options are generic and not explained, so a reader cannot reliably follow them from the article alone. In short: the piece describes a problem and companies’ denials, but it does not offer concrete, usable guidance.
Educational depth
The article is largely descriptive and anecdotal. It explains what was alleged (use beginning in childhood, autoplay, notifications, long hours, emotional reactions) and what defendants argue (other factors, preexisting problems, product safeguards). But it does not explain mechanisms in any meaningful way. It does not analyze how specific features influence attention or mental health, does not cite research, and does not explain how clinicians diagnose behavioral addiction or the limits of that diagnosis. No statistics or study methods are presented or interpreted. So it teaches only surface facts and offers little in the way of causes, systems, or evidence that would help a reader understand underlying mechanisms.
Personal relevance
The subject is potentially relevant to many readers—parents of children using social apps, young people, educators, and clinicians—because it concerns youth mental health and widely used platforms. However, the article focuses on one plaintiff’s testimony and legal arguments rather than practical implications for these groups. The relevance is therefore indirect: it may affect policy or corporate behavior if the verdict matters, but it does not provide immediate, personalized guidance for everyday decisions about safety, health, or money.
Public service function
The article performs a basic public-service function by informing readers about an important legal case and the claims being litigated. But it stops short of offering safety guidance, warnings, or resources that would help the public act responsibly now. It does not suggest what parents could do, how schools might respond, or where to find clinical help for youth struggling with social-media–related anxiety. That lack reduces its practical public-service value.
Practical advice quality
There is almost no practical advice. The only implicit guidance is that platforms have features and controls and that companies claim to have implemented age-gating and parental controls, but without specifics on what those controls are or how to use them. Any tips a reader might try to follow would require independent research. Thus, the article’s practical usefulness is minimal.
Long-term impact
The article could be important long-term as precedent if the trial changes regulation or corporate practices; it provides background on that potential. But for individuals planning ahead—setting family rules, seeking treatment, or choosing platforms—the article offers no durable tools or habit-change strategies. It documents a legal moment more than it helps people avoid similar harms or prepare for future developments.
Emotional and psychological impact
The narrative is likely to provoke concern, especially among parents, because it recounts a young person’s distress. The article does not provide calming context, coping strategies, or referral information, so it risks creating alarm without equipping readers to respond. That makes the piece more likely to increase anxiety than to enable constructive action.
Clickbait or sensationalism
The article centers on a “landmark” lawsuit and dramatic testimony, which is newsworthy; it does not appear to rely on hyperbolic language beyond the inherent drama of a courtroom account. However, it emphasizes emotive details (long hours, body dysmorphia, therapist notes) without offering supporting research or broader context, which can create a sensational feel even if the piece is not intentional clickbait.
Missed opportunities
The article missed several chances to be more useful. It could have explained what specific parental controls and age-verification tools exist and how to use them; summarized scientific consensus on social media and youth mental health; clarified how clinicians assess problematic use versus diagnosable addiction; provided resources for parents or teens (hotlines, organizations, accessible guidance); or suggested steps schools and pediatricians could take. It could also have offered concrete ways for readers to evaluate platform safety features rather than just reporting that companies claim to have them.
Practical, usable guidance the article omitted
If you are a parent, start by setting clear, consistent limits on device access that are practical for your family. Pick simple rules you can enforce, such as phone-free meals and a fixed device curfew that requires phones to be left in a common place overnight. Use the device’s built-in screen-time settings or a router-level schedule to enforce limits so you are not the only enforcer. When choosing safety settings, look for age-based account options, strict privacy settings, and notification controls; turn off nonessential push notifications so the app is less likely to pull attention repeatedly. If a child is checking devices at school or in class, speak with the school about policies and consistent expectations during school hours. If you notice changes in mood, sleep, grades, or social withdrawal, document what you observe (when it started, what changed, how often) and bring those notes to a pediatrician or mental-health professional; clear documentation helps clinicians evaluate causes and rule out other contributors.
If you are a teen or young adult worried about your own use, try small experiments you can measure. For one week, silence or disable notifications for one or two apps and note how often you open them compared with the previous week. Try designating “no-phone” times like during meals or before bedtime and track whether sleep or mood improves. Use built-in app timers to set short daily limits and make them hard to override for at least a week. If these self-tests show large negative effects on your life or persistent anxiety when separated from your phone, consider asking a trusted adult or a clinician for help.
If you are evaluating platforms or tools, compare independent privacy and safety reviews rather than relying on company statements. Look for clear, easily accessible settings for age-restriction, parental controls, notification management, and content filters. Prefer platforms that explain their algorithms and offer controls over recommended content or autoplay, and prioritize services that have robust reporting and moderation options. Simple skepticism is useful: treat marketing claims about “age-appropriate experiences” as prompts to check the actual settings available rather than as proof of safety.
If you are a clinician, teacher, or school administrator, include digital habits in routine health or counseling screenings. Ask about time spent on apps, sleep disruption, cyberbullying, and emotional reactions to social-media feedback. Encourage brief behavior-based interventions (notification reduction, scheduled device-free times) before labeling a behavior as an addiction, and use observation and brief screening tools to decide when specialty mental-health referral is needed.
These steps do not require special data or legal rulings and can be applied immediately to reduce harm, improve sleep and focus, and give families and clinicians clearer information to act on.
Bias analysis
"landmark lawsuit accusing major social media companies of deliberately designing their platforms to be addictive."
This phrase frames the case as historic and states the companies "deliberately" designed addicting features. It helps the plaintiffs by making the claim sound certain and momentous. The wording treats intent as a fact rather than an allegation, which can push readers toward believing guilt before proof. It hides that "deliberately" is an interpretation, not a proven fact in the text.
"Kaley testified that she began using YouTube at age 6 without an account, initially watching videos about online games and lip balm collections, and that she created a YouTube account at age 8 by entering a false age."
The phrase lists young ages and "false age" to stress early exposure. It nudges readers to blame the platforms for a child’s long history of use. It helps the narrative that companies are responsible for harms that started in childhood, and it leaves out context such as parental supervision or other influences.
"she felt upset when videos received few views or when subscribers were lost."
This wording emphasizes emotional reaction to platform metrics and frames those metrics as causally linked to harm. It supports the claim that features caused distress by highlighting feelings tied to engagement. It omits alternative reasons a child might feel upset, making one cause seem primary.
"being kept on YouTube for hours by the autoplay feature that played suggested related videos."
The phrase "kept on YouTube" and blaming "autoplay" assigns agency to the feature rather than the user. It makes the platform sound like it traps users. That wording benefits the plaintiff narrative that design features control behavior and hides any possible user choice or parental control.
"Testimony describes Kaley turning to Instagram to increase subscriber counts, experiencing a steady stream of notifications that prompted frequent phone checking into the night, and spending more than 16 hours on Instagram on at least one occasion."
This sentence piles vivid details—late-night checking, "steady stream," "more than 16 hours"—to create a sense of chronic, extreme use. The strong words push emotion and support the addiction claim. It does not present counter-evidence or context for the long usage episodes.
"fear of missing out kept her constantly engaged and that she reacted strongly when her mother took away her phone."
The phrase "kept her constantly engaged" and "reacted strongly" attributes ongoing compulsion to social media and emphasizes family conflict. It frames the technology as the driving force of behavior and highlights drama to sway readers. It does not show other causes of family tension or coping problems.
"Kaley’s former therapist, Victoria Burke, testified about clinical notes documenting social phobia, body dysmorphia, problems at home and school, and bullying on social media."
This wording presents clinical notes as evidence linking social media to serious conditions. It lends authority by naming a therapist but does not include a diagnosis of "social media addiction." It helps the plaintiff by implying a medical basis while the text later notes the therapist did not diagnose addiction.
"Defense attorneys highlighted that Burke did not diagnose a social media addiction and noted that she was not presented as an expert on social media’s effects on the brain."
This sentence shows the defense counterargument and points out limits of the therapist's role. It balances the earlier clinical claim by stating what was not proven. The wording is neutral but compresses the defense into a brief rebuttal, which may reduce the rebuttal's perceived weight compared to the earlier detailed harms.
"Meta and YouTube are named as defendants in the case, while Snapchat and TikTok previously reached settlements with the plaintiffs."
Stating that some companies settled and others are defendants suggests a pattern of liability without giving details about settlement terms or reasons. It nudges readers to infer wrongdoing by association. The wording benefits the plaintiffs’ narrative by implying industry culpability.
"Meta and YouTube deny the allegations and say they have implemented measures intended to protect young users, including age-appropriate account options and parental controls."
This phrasing uses "deny" and "say they have implemented" which places company responses as claims rather than facts. It gives the companies space to defend themselves but also subtly distances the reader from accepting those measures as effective. The text does not evaluate those measures, leaving their real impact unclear.
"Meta has argued that other factors contributed to Kaley’s mental health struggles and that evidence will show those challenges predated social media use."
This sentence conveys the defense position that other causes existed and predating issues matter. It fairly presents alternative explanations but frames them as an argument rather than established fact. The wording keeps causation unresolved, which is appropriate given the text.
"Facebook founder and Meta CEO Mark Zuckerberg and Instagram head Adam Mosseri previously testified in the trial, addressing age restrictions, engagement features, filters, and the distinction Mosseri drew between clinical addiction and problematic use."
Including high-profile names emphasizes the case’s significance and may lend credibility to the defense by showing senior executives testified. The clause about Mosseri distinguishing "clinical addiction and problematic use" signals a semantic narrowing that undercuts the addiction claim. The text uses these names to balance narratives, but that balance may shift reader attention toward corporate responses.
"The case is the first of more than 1,500 similar lawsuits nationwide to go before a jury and could set a precedent affecting how tech companies are held responsible for product design."
This sentence frames the lawsuit as precedent-setting and widespread. Saying "more than 1,500 similar lawsuits" amplifies scale and raises stakes, which can increase public concern. The phrasing suggests a broad industry problem and benefits the plaintiffs’ broader movement without detailing differences among cases.
Emotion Resonance Analysis
The passage conveys several emotions, both explicit and implicit, with clear effects on the reader. Foremost is distress, shown in Kaley’s testimony about feeling “upset when videos received few views,” reacting strongly when her mother took away her phone, checking notifications at school and in the bathroom, and spending many hours on platforms; the language and specific behaviors portray a strong, persistent unease that frames her experience as harmful. This distress is reinforced by descriptions of anxiety and depression linked to social media use, and by the therapist’s notes of social phobia and body dysmorphia; those clinical terms intensify the seriousness of the emotional harm and serve to generate sympathy and concern in the reader.
Fear and urgency appear in phrases like “fear of missing out” and “constant engagement,” which emphasize a compulsive need to stay connected; the depiction of notifications prompting frequent phone checks “into the night” gives the fear a strong, ongoing character that prompts the reader to worry about harm to well-being and healthy routines.
Shame and vulnerability surface in references to body image problems and bullying on social media; mentioning body dysmorphia and being bullied creates a sense of humiliation and hurt that deepens the reader’s empathy for Kaley and adds moral weight to the claims against the platforms.
Frustration and accusation are present in the overall framing of a “landmark lawsuit accusing major social media companies of deliberately designing their platforms to be addictive” and in defendants denying allegations; this adversarial language conveys anger and blame toward the companies, shaping the reader’s perception of responsibility and possible wrongdoing. 
Defensive calm and minimization come through in the defendants’ responses—saying they “deny the allegations,” noting measures like parental controls, and arguing other factors contributed to mental health struggles; this tone reduces perceived culpability and aims to build trust in the companies’ intentions, but its measured nature also invites skepticism when contrasted with the vivid personal testimony. Authority and credibility are invoked through references to the therapist’s clinical notes, the testimony of company leaders like Mark Zuckerberg and Adam Mosseri, and the fact that the case is the first of many similar lawsuits; these elements lend weight to both sides, heightening the stakes and guiding the reader to treat the issue as significant and precedent-setting.
The emotions steer the reader’s reaction by creating a narrative of personal harm (eliciting sympathy and alarm) while simultaneously presenting institutional pushback (eliciting doubt or the need for critical judgment).
The writer uses several persuasive techniques to increase emotional impact. The inclusion of a personal, detailed story about Kaley’s early, intense engagement with platforms makes the abstract claim concrete and emotionally relatable. Clinical terms and therapist testimony add authoritative weight and make the harm seem medically significant rather than merely anecdotal. Contrasting vivid behavioral examples (checking phones at school, spending “more than 16 hours” on Instagram) with the companies’ measured denials amplifies tension and encourages readers to side with the more emotionally charged account. Framing the case as “landmark” and the first to go before a jury is a rhetorical escalation that makes the stakes feel higher. Word choices favor emotionally loaded descriptions—“addictive,” “upset,” “fear,” “bullying,” “body dysmorphia”—over neutral phrasing, and details about age and early exposure intensify a sense of vulnerability. 
These devices focus attention on personal harm, invite moral judgment, and steer readers toward concern and support for regulatory or legal action, while the inclusion of the defendants’ responses provides limited balance that may temper but not erase the emotional momentum toward sympathy for the plaintiff.

