
Perspective: New York's Social Media Initiative Marks a Wise Beginning

A proposed New York bill targeting addictive social media algorithms is a smart approach, writes Kara Alaimo. While it won't solve every problem, it will give children and their parents meaningful benefits, she contends.


It's a wise idea because algorithms are currently shrouded in mystery. We don't fully understand how social media companies choose content for minors, but we do know they have a financial interest in showing them posts that are detrimental to young people and to our society as a whole. There is also evidence that algorithms sometimes promote harmful content to children. The proposed legislation would make it easier for kids, ideally with guidance from their parents, to choose and consume positive content. It isn't a complete solution, however, because children could still be exposed to harmful advertisements.

As Roger McNamee, an early Facebook investor, points out in "Zucked: Waking Up to the Facebook Catastrophe," social networks thrive on provoking strong emotions, particularly outrage, because emotionally charged users spend more time on these platforms. The more time spent on these platforms, the more money the social networks make from ads.

However, this practice isn't good for children's mental health, and it deepens the divisions in our country. We are living in the "age of grievance," an era of extreme partisanship in which the focus is on differences and disputes rather than on seeking common ground.

Moreover, the content that algorithms promote to children can be hazardous. A woman I interviewed for my new book developed an eating disorder as a teenager after a handstand picture she posted on Instagram was shared by a so-called “fitspo” (“fitness inspiration”) page. She was then exposed to more and more toxic “fitspo” content.

Studies show that algorithms do sometimes serve up this harmful content. When UK researchers created TikTok accounts and searched for content commonly sought by young men, the amount of misogynistic content promoted to the accounts increased fourfold within five days. TikTok responded by saying the report "does not reflect how real people experience TikTok."

Similarly, in 2022, researchers at the Center for Countering Digital Hate created accounts pretending to be 13-year-olds and viewed and liked content about mental health and body image. Within minutes, the accounts were being served videos about eating disorders and suicide. TikTok said the study didn't accurately represent user experiences and introduced new measures to avoid promoting harmful posts on weight loss and dieting.

The issue is that we don't know how algorithms select the content they promote to children. Social media companies guard their algorithm programming as proprietary information, a crucial part of what differentiates their apps.

If algorithms could no longer determine what children see, children would have more control. They would see content only from accounts they choose to follow, in the order it is posted. Of course, some children might choose to follow extremely toxic accounts, but others could select accounts about topics they care about or healthy interests they might pursue as future careers. Schools should implement programs that teach students how to find and follow accounts that empower them, and parents should work with their children to select healthy accounts to follow.

It's worth noting that even if social networks couldn't use algorithms to determine what they show children, they would still control the ads young people see, and those ads could still be harmful. For instance, the Center for Countering Digital Hate's accounts were shown ads for tummy tuck surgery and weight-loss drinks. Lawmakers should therefore also consider how to prevent social media companies from showing harmful ads to children.

Nevertheless, this legislation would give children the means to curate healthier feeds. With guidance and support from their parents, it would be a positive step toward making social networks safer for young people.

As a parent, I'd prefer to let my children control the content they see online rather than letting social media companies dictate it. I'd then support them in making intelligent choices.

Kara Alaimo

