No More Than a BAND-AID: Instagram’s New Teen Accounts
Technology Policy #117 | By: Allie Amato | September 25, 2024
Featured Photo: www.fastcompany.com
__________________________________
Nearly one in four Instagram users under the age of 16 has reportedly had a “bad experience” on the platform, witnessing racism, bigotry, and antisemitism firsthand. Even worse, more than 25% of teen users between 13 and 15 years old have received unwanted sexual advances on Instagram. This is according to research conducted by Arturo Bejar, a former Facebook employee turned whistleblower. These statistics are neither new nor novel; the Wall Street Journal published them more than a year ago. Nor is Bejar Meta’s first whistleblower to call out the tech giant’s “see no evil, hear no evil” approach to protecting kids on its various platforms.
However, this information is worth revisiting in light of Meta’s rollout last week of its new Teen Accounts. What appears to be the company taking accountability for its previous inaction is unfortunately a small patch on a wholly flawed system. While the company promises built-in protections, the new feature is riddled with holes that do little to mitigate the central issue. Kids can’t get a vodka soda at the bar because that’s legally an adult space serving a federally regulated adult product. Yes, even those protections have holes, but what matters is that they make it that much more difficult for kids to gain access. So shouldn’t the same go for social media, namely Instagram: an easily accessible, arguably adult space, dominated by adult products, that has proven exceedingly damaging to the psyche and well-being of our youth?
ANALYSIS:
It remains to be seen if and how Instagram Teen Accounts will help with the harassment, discrimination, and diminished self-worth that plague Instagram’s young users. Even a cursory glance at the new feature reveals glaring flaws. Firstly, teens under 16 need a guardian’s permission to disable Instagram’s new built-in protections, but this leaves those 16 and older with virtually no safeguards. They may be approaching legal adulthood, but older teens are still young and impressionable, with minds that are still developing. Secondly, there are seemingly no protections in place to stop adults from catfishing as teens. The language is also vague on how Meta plans to stop teens from pretending to be older than they are. There’s mention of a new technology that will help identify teens, but testing hasn’t even begun on that feature, so it’s unclear when these issues will be properly addressed. Thirdly, and most importantly, Instagram passes off the responsibility of managing and supervising these accounts to parents. A well-meaning guardian may take advantage of the features out of genuine concern, but what about those who don’t have their kids’ best interests at heart? According to the CDC, at least one in seven children in the United States has experienced neglect or abuse. We must think of these especially vulnerable children, whose guardians may simply ignore the feature or, more dangerously, wield it as another tool for carrying out their cruelty.
There are already some policies in place meant to safeguard children from the harms of the World Wide Web, such as the Children’s Internet Protection Act (CIPA). The problem is that technology is ever-evolving, which makes it difficult to pin down violators with outdated regulations. For example, CIPA protects children only in certain school and library settings, imposing restrictions solely on institutions that receive discounts for Internet access through the federal E-rate program. Meanwhile, social media companies are able to mostly dodge legal repercussions thanks to a law known as Section 230, which shields tech industry titans from being held responsible for user-generated content. Late last October, more than 40 states took Meta to task, suing the company for what they say was deliberate action to make its platforms addictive, knowingly fueling the youth mental health crisis. It’s clear, though, that more comprehensive action is needed.
This is where the Kids Online Safety Act comes in. Its sponsor, Senator Richard Blumenthal, has staunchly opposed Meta’s practices, accusing the company of hiding evidence of the “harms that they knew was credible.” The bill has been making its way through Congress over the past few months and was just advanced by the House. It puts the responsibility squarely on social media companies to prevent harm to minors, calling for frequent public reports of foreseeable risks on the platforms and the establishment of the Kids Online Safety Council. Both sides of the aisle agree that social media has become a public health issue, but opinions vary on how to tackle it. If made into law, the act could change the landscape of the internet for kids and force tech companies to take meaningful action to repair the damage they’ve already done.
ENGAGEMENT RESOURCES:
- Internet Crimes Against Children (ICAC) Task Force Program is a national network of task forces, representing thousands of federal, state, and local law enforcement and prosecutorial agencies, developed in response to the rise of online sexual abuse.
- Family Online Safety Institute (FOSI) is an international, non-profit organization which seeks to make the online world safer for families through public policy, industry best practices, and good digital parenting.
- Protect Us Kids (PUK) equips young people in marginalized and rural communities worldwide with essential life-saving skills to safely navigate the online world, minimizing their risk of being targeted by child predators and exploiters.
Stay in the know with the latest updates from our reporters by subscribing to the U.S. Resist News Weekly Newsletter. We depend on support from readers like you to aid in protecting fearless independent journalism, so please consider donating to keep democracy alive today!