Safe Spaces Are an Answer to the Ever-More-Hostile Internet
If Facebook, YouTube and Twitter won’t police online
content, there are new services and apps to do it
By Christopher Mims Dec. 3, 2017 8:00 a.m. ET
America is finally waking up to the fact that the
internet is an increasingly hostile and unsafe place to do business, hang out
or share with friends.
The epicenter of the problem tends to be the big social media
networks—specifically Twitter, YouTube and Facebook, where Russian bots, fake
news, creepy ad tracking, political polarization, sketchy videos and oh so many
internet trolls can be found. The ad revenue is still pouring in, but it's no
wonder people are limiting what they share and how they interact online.
Evan Spiegel, chief executive of Snap Inc., creator of
Snapchat, articulated it well—if self-servingly—this week in a post: “The
combination of social and media has yielded incredible business results, but
has ultimately undermined our relationships with our friends and our
relationships with the media.” Mr. Spiegel unveiled a redesign of Snapchat that
separates professional content from personal sharing.
A new breed of apps and services takes this to heart.
Having learned from the tech giants’ mistakes, they are emerging as islands in
the internet storm—I call them “safe spaces.” They filter content without
asking for personal data and without lulling us into the cycle of mindless
engagement that mostly rewards advertisers. Smaller in scope, they are teams of
people assisted by algorithms—not the other way around.
Here are a few that exemplify this trend:
Jellies
App developer Ken Yarmosh is also the father of four
children under the age of 6. They inspired him to build Jellies, a children's
video app that avoids many of the problems plaguing the dominant service,
YouTube Kids.
Costing $5 a month, Jellies has no ads. All the videos,
which stream from other sites such as YouTube, are screened by two different
editors. Algorithms help them find content, but what appears in each of the
app’s age-graded channels is ultimately decided by a human, Mr. Yarmosh says.
Jellies has only about 3,000 videos—far from the essentially infinite depth of
YouTube—but that’s not the point, he says. Children should only consume so much
content, after all.
Soon after the launch of Jellies in October, sketchiness
within YouTube Kids came to light—from insipid unboxing videos that feel like
barely masked toy ads, to cartoons that recast Mickey, Peppa Pig and other
beloved characters in disturbing ways.
What many parents had assumed was a completely safe space
had been bungled by Big Tech. YouTube has said only a tiny fraction of videos
have been removed from the site for being inappropriate, but it's clear that
screening content on the site through automatic filters, user feedback and
human review simply hasn't been enough.
Neverthink
A millennial-focused site, Neverthink has “channels” of
video content scoured daily from all over the internet and streamed directly
from their sources (typically YouTube) by a team of 15 editors. You can’t skip
videos. It’s exactly like cable TV, except the content is very internet—short
clips, fast cuts, lots of first-person narrative.
Younger viewers who never knew television find it
revelatory, says Aviv Junno, Neverthink's co-founder. "For them, not having
this choice of what to click on next means so much less stress,” he says. “To
have all that choice means I’ll always keep looking, and that’s why people get
hooked on scrolling endlessly and swiping.”
Like Jellies, Finland-based Neverthink uses a number of
tools and algorithms to help surface content for its human editors, but Mr.
Junno contrasts his company’s approach with that of the internet giants.
“Content that goes viral on Facebook is not necessarily
what you need to know or is interesting or valuable to you, but it is the most
engaging, so it keeps you on the platform,” Mr. Junno says. It’s the same on
YouTube, he adds, where the algorithms may have priorities other than giving
you the “best” content—like what you’re most likely to click on, or what will
be appropriate for advertisers.
A Facebook spokesman said, “Whether posts on Facebook are
authentic, informative, entertaining, and ultimately meaningful are the
principles that guide how News Feed ranking works.” He continued, “It’s a
mistake to ignore the fact that the overwhelming majority of stories people
share on News Feed every day aren’t meant to go viral—but are meaningful
stories people connect with their friends about.”
A spokesman for YouTube parent Google, a unit of Alphabet
Inc., said search results and recommendations aren’t determined by whether a
video is monetized or not.
Otto Radio and NPR One
Human curation is how Otto Radio, launched in 2014,
ensures that the podcasts it surfaces are of the highest quality, says CEO
Stanley Yuan. The app offers a mix of breaking news, entertainment and
information. Mr.
Yuan says that while algorithms help Otto classify content, they can’t yet make
good quality judgments.
The human and machine “editors” of NPR One, which is
owned by National Public Radio Inc. but includes content from a variety of
sources, operate in a similar way: Humans decide what content everyone will
hear on the service, while the algorithms personalize the content.
One of its goals is to avoid “filter bubbles,” the
tendency of algorithms to continue feeding us things we’ll like, rather than
viewpoints that might expand our horizons, says Tamar Charney, NPR One’s
managing editor.
Snapchat
Snapchat is still primarily a messaging platform, owned
by a $14 billion public company that makes money through advertising, but it's
savvy of Mr. Spiegel to try to align it with this trend.
Like other giants, Snapchat can’t pay humans to vet every
piece of content that it surfaces. But clips that go into Snapchat’s news
stories are fact-checked, a company spokeswoman says. Though Snapchat’s
overhaul includes algorithmically curated news stories, all of them will be
seen by a human before they go live.
Messaging apps can still be conduits for fake news, but
Snapchat's lack of viral sharing mechanisms could make that less likely. Mr. Spiegel
said content choices won’t be determined by your friends’ interests, but by
your own—short-circuiting one route by which people are drawn into extremist
content on Facebook.
Whether these safe spaces can stay in business, let alone
challenge Big Tech, depends on their ability to attract users. TBH, a teen
social survey app that only lets you say nice things, did draw a large
audience—and was consequently snapped up by Facebook, which clearly understands
this trend.
Will people choose to spend time in safe spaces, rather
than in the addictive, endlessly scrolling services that currently dominate?
That will require not more technology, but a potentially slow and difficult
cultural transformation.