Facebook may have laid some of the early groundwork to launch its social network in China, but the U.S. company’s chances of making a dent in the world’s most populous country remain remote.
A New York Times report that Facebook is developing a system that could censor information to appease the Chinese government is the talk of the tech industry right now. The timing couldn’t be worse: domestically, Facebook is under pressure for failing to adequately manage the influence of fake news on the U.S. election, yet here it is seemingly prepared to quash legitimate information on user timelines to kowtow to the Chinese government and further its interests in a country of 1.3 billion people.
Facebook’s China conundrum hasn’t changed much since its IPO in 2012, when it admitted it may not ever find a way into the country.
Recently, however, CEO Mark Zuckerberg reportedly justified the development of censorship software, telling staff that “it’s better for Facebook to be a part of enabling conversation, even if it’s not yet the full conversation.” That triggered a number of departures, according to the New York Times, but the truth about China is that it would take a huge effort from Facebook to be relevant to the general conversation in the first place.
Even if Zuckerberg — who has made little effort to hide his interest in doing business in China — sold out and agreed to censorship in exchange for being unblocked, Facebook has a major challenge in finding a place to sit within the nation’s already-developed social media ecosystem.
Sticking to its roots won’t cut it because Facebook-style social networking has already failed in China.
Renren, the company widely labeled as ‘China’s Facebook,’ has long since pivoted. Initial promise saw Renren attract investment from SoftBank in its early days, and before its U.S. IPO in April 2011, the company claimed 160 million users.
That NYSE listing raised $743 million, but the share price has fallen from a first-day close of $18.01 to just $1.81 today. These days Renren’s service is barely used and the company is more notable for its investment deals, which include stakes in a mortgage lender and a delivery service. That investment business and its social video platform are being spun out of the company to give them room to breathe, such is the decline of the core service and ‘traditional’ social networks in China.
Renren and lesser rivals like Kaixin withered because they missed mobile; the hugely popular messaging app WeChat didn’t, and now it is king.
WeChat’s dominance has been clear for a long while — I wrote as much back in 2013 — and today it has 846 million monthly active users, the majority of whom are in China. It is also a critical part of parent firm Tencent’s mobile monetization strategy. More to the point, for Facebook, is that it occupies the space that Facebook is aiming for in China — and then some.
Messaging apps have taken a huge bite out of social networks, nowhere more so than in China, where you frequently notice people out and about in public using WeChat groups, or holding their phone to their face to use the push-to-talk ‘walkie talkie’ feature to communicate.
But WeChat goes beyond messaging. It is the internet, and more.
It includes a Facebook-style timeline feed from friends (Moments); consumers can connect with branded accounts as they do with Facebook Pages; there’s a payment system, shopping, banking and appointments; and a new feature now enables developers to build their own apps for the messaging platform, thereby disintermediating official app stores.
WeChat is essentially the mobile portal for Chinese consumers, as A16z partner Connie Chan put it, while Twitter-like Weibo covers social with 297 million MAUs, so it is hard to see what new tricks Facebook can bring to the party.
Then there’s the fact that, in China, your Western brand means very little.
Just ask Uber CEO Travis Kalanick, who agreed to sell his company’s China business to rival Didi. Facebook’s global appeal is muted in China. Apple and the iPhone are thriving in China as exceptions to the norm, but Facebook doesn’t have that same brand gravitas.
The average person in China has no immediate need for Facebook. Sure, you can connect with people who are overseas but, at this point, people who would find Facebook useful to connect with friends or family overseas almost certainly already use it via a VPN. Facebook’s ad buying service estimates that the social network has an audience of around 2.1 million users in China, a tiny portion of the country’s reported 710 million internet users.
Zuckerberg’s burning desire for China seems to be the catalyst for the development of the censorship tool, which the Times report stressed may not ever be deployed, but Facebook should tread very carefully here. Compromising on free speech can only lose it friends in the West, and the chances of any kind of success in China are very slim, which by extension could negatively impact its stock price.
Sticking to its existing strategy of serving advertising customers in China that want to reach a global audience is a better bet but, even then, working with state-run publications — as Facebook does — throws up plenty of issues around media manipulation and fake news.
Featured Image: Marcio Jose Sanchez/AP
Facebook wants to be unbanned in China, so it’s built a censorship tool that could hide posts about prohibited topics from people in China, according to The New York Times‘ Mike Isaac. Rather than censor posts itself, Facebook would potentially provide the tool to a third-party in China such as a local partner company that could use it to prevent users in China from seeing content that breaks the government’s rules.
While China could unlock huge numbers of users and substantial ad revenue for Facebook, the censorship tool could also be used to enable human rights abuses. If China could track which local users are trying to protest or bad-mouth the government, they could face persecution.
Perhaps that’s why The New York Times says several Facebook staffers who worked on the product have left the company. So far, there are no signs that Facebook has offered the tool to Chinese authorities. We don’t have details on the specifics of how it would work. It’s apparently only one of several ideas Facebook has explored for getting access to China, and they might never be launched.
But the existence of the tool brings up strong concerns about what’s best and safest for Chinese citizens.
Mark Zuckerberg has held in the past that some Facebook access could benefit them. The New York Times reports that at an internal Q&A about its intentions in China, Zuckerberg said, “It’s better for Facebook to be a part of enabling conversation, even if it’s not yet the full conversation.”
That mirrors Facebook’s stance about internet access, where it’s pushed the idea that limited free access to the web is better than none at all for those who can’t afford it. Facebook already allows Chinese companies to buy ads that run in places where it isn’t banned.
In a statement to TechCrunch, a Facebook spokesperson wrote: “We have long said that we are interested in China, and are spending time understanding and learning more about the country. However, we have not made any decision on our approach to China. Our focus right now is on helping Chinese businesses and developers expand to new markets outside China by using our ad platform.”
Over time, the interpersonal connection via Facebook could strengthen communities who might be able to organize and protest the government outside of the app. Yet the censorship tool’s potential to be used to round up dissidents looms over any long-term benefit for citizens, or profit for Facebook.
As Facebook’s advertising business continues to grow globally, the social network is also streamlining operations in certain markets. TechCrunch has learned that Facebook is consolidating sales operations in Europe focused on small and medium businesses. In the process, it has cut 30 jobs, including in Hamburg, according to a source among those cut.
The cuts affect “support” staff who were contracted through a third party.
Thirty is a small proportion of Facebook’s overall employee count of 15,724 (as of the end of September), but the move is notable for a company that has not made a regular practice of laying people off.
It’s not clear what the subtext might be for this latest cut, if any.
In a statement, Facebook said: “Facebook’s support to Small and Medium Businesses in Germany remains unchanged. To the contrary, over the past year, we have been ramping up our SMB efforts across Germany. This included founding the industry initiative ‘Digital Durchstarten‘, which will continue in 2017. The next event in Münster will be taking place tomorrow.”
From what we understand, the company’s SMB director in Europe, Middle East and Africa, Stefanos Loukakos, who is based out of Facebook’s global headquarters in Dublin, made the cuts to consolidate SMB operations across fewer locations.
Our source said that the Hamburg office covered Facebook’s SMB business, selling ads in Facebook and Instagram in German-speaking markets (Germany, Austria and Switzerland); as well as Turkey and Israel — regions that will now be covered from SMB sales offices in Dublin and Lisbon.
There will still be SMB activity in Hamburg and across Germany, where Facebook does some marketing outreach to SMBs in the form of online content and events.
Other small rounds of layoffs this year have pointed to bigger thematic changes at the company. They included around 40 people going in the wake of Facebook restructuring parts of its ad-tech business, specifically at LiveRail, which has since been shut down.
There were also reportedly around 15-18 contractors let go who had been working on Facebook’s Trending team, a group tasked with curating news for Facebook. News has been a problematic area for the social network for a while, but it’s come into the spotlight especially this year, as many have accused Facebook of being a haven for disseminating fake news. Facebook’s still trying to fix this.
Turning back to today’s news, in September, Facebook announced that it had hit 4 million advertisers on its platform, and while it does not break out specific numbers or the performance of specific regions, it’s been long understood that small and medium businesses form a large part of that base, both in the U.S. and internationally.
But over the years, there have been some tense moments between Facebook and SMBs, as Facebook has sought to build out more of its paid ad products over organic reach (that is, unpaid distribution) on the platform.
More generally, Facebook has been shuttering parts of its ad business that are seeing less activity. Just last week, it closed down the ad-serving part of its Atlas platform to focus on Atlas’s measurement tools.
In the German market, Facebook has not been a stranger to regulatory heat, specifically from the country’s data protection watchdog. In March, it became the subject of an antitrust privacy probe, and in September Germany was the first country in Europe to order Facebook to stop tapping into data from WhatsApp, the messaging app owned by Facebook. (That’s now extended to all of Europe.)
Story updated with quote from Facebook and further detail about nature of cuts — contractors versus full-time staff.
Featured Image: Sean Gallup/Getty Images
If you thought you heard the last on fake news, you were sadly mistaken.
A Stanford study found that the majority of middle school students can’t tell the difference between real news and fake news. In fact, 82 percent couldn’t distinguish between a real news story on a website and a “sponsored content” post.
Of the 8,704 students studied (ranging in age from middle school to college level), four in ten high-school students believed that the region near Japan’s Fukushima nuclear plant was toxic after seeing an unsourced photo of deformed daisies coupled with a headline about the Japanese area. The photo, keep in mind, had no source or location attribution. Meanwhile, two out of every three middle-schoolers were fooled by an article on financial preparedness penned by a bank executive.
It seems that those surveyed in the study were judging the validity of news on Twitter based on the amount of detail in the tweet and whether or not a large photo was attached, rather than on the source of the tweet.
The WSJ, which first reported on the study, says that a big part of solving this problem among young people comes down to education, both at school and at home.
But with 62 percent of U.S. adults getting the majority of their news from social media, the responsibility for this issue also lies with the social media organizations themselves, such as Facebook and Twitter.
Both Google and Facebook have taken steps toward thwarting the fake news onslaught, including banning fake news organizations from their ad networks. Facebook’s Mark Zuckerberg also posted a number of responses to the issue on Facebook, and outlined concrete steps toward stopping the spread of fake news on the platform.
That said, the fallout from fake news is not as minor as Zuck originally stated in his first response on Facebook, where he mentioned that less than 1 percent of news on Facebook is fake.
Even in minuscule amounts, fake news has a much greater ability to spread quickly and be consumed by many, given the salacious nature of the headlines themselves. Pair that with the fact that most adults get their news from social media and most young people can’t tell the difference, and you can see just how problematic this issue is.
Hopefully, steps toward stopping fake news come swiftly and effectively. But until then, it’s important for parents to be diligent in teaching their kids how to determine the difference between a sourced news report and a salacious headline with no evidence behind it.
Featured Image: Nationaal Archief/J.D. Noske/Flickr
Say what you will about the merits of Facebook’s Internet.org and Free Basics — it’s pretty cool that they’re building a huge, solar-powered, laser-shooting drone to deliver it. But a “structural failure” that occurred on the Aquila’s first test flight may be more serious than Facebook made it out to be: The National Transportation Safety Board is conducting an investigation, Bloomberg reports. The NTSB confirmed this and provided further details.
Facebook wrote about its tests (which occurred on June 28) in July, listing several things they were looking at, learned and so on. Under the “Real-world conditions” bullet point, the blog post admits things weren’t entirely nominal:
We are still analyzing the results of the extended test, including a structural failure we experienced just before landing. We hope to share more details on this and other structural tests in the future.
They didn’t, possibly because of the NTSB investigation, but Facebook did issue a statement today emphasizing the positive outcomes of the test:
We were happy with the successful first test flight and were able to verify several performance models and components including aerodynamics, batteries, control systems and crew training, with no major unexpected results.
Really, it was too much to hope that nothing would go wrong on the first full-scale test of an enormous, experimental aircraft design. A source close to the project told TechCrunch that some damage was expected, since the Aquila isn’t actually designed for repeat takeoffs and landings (it has skids, not landing gear), and also because the day was windier than expected. The failure occurred just a few seconds before landing from the craft’s 90-minute flight, the source said.
It’s the NTSB’s prerogative to investigate any airborne troubles like this, and clearly it decided to do so in this case, perhaps because of the high-profile nature of the test and aircraft. But the NTSB wouldn’t get involved if a screw dropped off: a representative explained that it investigates when aircraft weighing 300 pounds or more cause death or serious injury, or incur “substantial damage” — defined as damage that “compromises the airworthiness of the aircraft.”
That said, if the Aquila had nose-dived into the ground, caught fire or sustained some other high-profile damage, that likely would have come out by now. A full report is expected in a month or two, at which point we’ll have more details — but considering the scope of the project and pride evinced by Facebook in the Aquila’s development, it seemed reasonable to, well, clip its wings a little bit.
(This article has been substantially updated from its original form.)
We just witnessed, to paraphrase John Oliver, the 2016 U.S. Presidential Election Dumpster Fire F**ktacular. Much of it took place on social media, and much is being written of late about how Facebook and Twitter have changed virtually everything that we’ve come to see as a normal part of an election cycle: the consumption of daily news and facts (or non-facts).
Remember 2012, when we called it the “Twitter Election” due to Obama’s skill in exploiting the fledgling platform? It seems almost quaint now. 2016 will be remembered as the election where the power of Twitter was revealed in full as a platform that can enable an individual with little formal organization to bypass media institutions and speak directly to a populace all the way to the presidency.
On Sunday’s 60 Minutes interview, Trump could hardly contain his self-satisfaction regarding how well Twitter and Facebook had served him.
Why was exploiting Twitter such an advantage for Trump? Because Twitter is an unusually powerful media platform, providing an unprecedented ability to reach anyone in the world, with far less friction than ever before.
That ability points to a deep responsibility for Twitter and Facebook, among others, to acknowledge their roles as arbiters of information that can have historical implications. And it’s possible Twitter and Facebook aren’t up to the task — there is a need for a responsible social network, and if they can’t do it, someone else has to.
The power of smush
Consider how much has changed in nine short years. Until 2007, if you wanted to build an audience and sell your ideas to millions of viewers, you needed to raise money to build a cable network or a printing office, hire a sales force to sell ads or hire a bunch of Columbia journalism or USC film grads to produce content. You needed to buy cameras and pay business affairs people and all manner of other things.
Twitter has given anyone and everyone a direct voice to the world by smushing together three things that in traditional media have been mostly separate: distribution, or a platform to connect the world (formerly Comcast or Dish; today the completely commoditized mobile phone and internet); application (formerly the 30-minute video-on-a-screen-plus-ads model; today the pixels that comprise an app like Twitter or Facebook); and content (formerly Rachel Maddow’s show; today tweets and Facebook posts).
The implications of this are profound. First, Twitter owns its own distribution. Because it’s a network, every new user makes the platform that much more essential, whether you’re one of the Monthly Active Users that Twitter is so often maligned for not adding fast enough — or whether you’re one of the hundreds of millions of people who are consuming news elsewhere about what’s happening on Twitter. Traditional media amplification of Twitter content doesn’t get Wall Street excited, but it nonetheless solidifies Twitter’s place in the world. Prior purveyors of the pipes to consumers (like cable companies) had neither the leverage nor the capabilities to play the role of content arbiter with subscribers.
Next, because the application is owned by Twitter, the company has as much power in determining how people consume and create that content as the earliest creators of movie cameras, network television and production houses had in the 1930s when TV was just becoming a thing.
We take these now-“traditional” formats as gospel, but they were merely invented by normal people in charge of the early medium, and they persist to this day. Similarly, Twitter’s founders (and the communities they enable), by intentionally designing the user experience that we all consume, shape what gets out of our phones and into our brains. More than ever before, the medium shapes the message.
And finally there’s the content: free, live, visceral, formatted specifically for this application and ready to deliver 140 characters, photos, videos, Vines (RIP), etc. of noteworthy and not-so-noteworthy stuff right to your phone, computer or (Apple) TV.
So new platforms unprecedentedly combine Comcast, Sony and MTV into a single, powerful platform controlled by one team. Gatekeepers are removed at every step, and anyone can participate. That’s why owning a share of Twitter (or Facebook, Snapchat or Instagram for that matter) is like owning a share of an entire media ecosystem, indeed like owning the whole cable industry, not merely a single company or player within it.
I don’t want (or need) my MTV
Historically, every new media platform since Gutenberg invented movable type (books, newspapers, magazines, radio and television) has started broad, with the first major media attempting to bring as many people as possible to the new platform (like the original “Big 3” broadcast networks). Then, over time, once people are used to the new platform, verticalization occurs to super-serve certain demographics or constituents, like Nickelodeon serving kids or MTV serving teens. People self-select into the various cable channels, but the major broadcast networks remain to serve a mainstream audience a more diverse set of perspectives.
But with social media you have an added dynamic: personalization. No one person’s Facebook or Twitter feed looks like anyone else’s. There’s little room for an MTV to come and super-serve the teen demo on Twitter: Teens usually do it themselves by creating a follower graph that serves their needs. If they can’t find what they’re looking for, they switch applications entirely (hence, in part, the rise of Snapchat).
This means that whether users actively want access to a very limited stream of information, or if they simply engage more with a certain kind of content, they end up with a far narrower information stream.
My friends from my poor hometown in South Texas see a very different Facebook News Feed than I do, even without active curation, because they engage with only a certain kind of story and, in turn, Facebook reinforces that interest. While Twitter’s real-time stream invites some amount of surprise and diversity into the experience, it’s nothing like knowing that we all saw the same nightly CBS/NBC/ABC news show, regardless of geography.
So Twitter and Facebook today are wholly owned news and information platforms, praying mainly at the altar of increased engagement, with personalized, increasingly limited information streams, no embedded gatekeepers and completely open participation.
The 2016 election and Trump’s victory in part show how powerful this democratization can be, because when one masters Twitter, one can impact public discourse as much or more than can the highest-rated TV network. For example, Fox News has a 1-2 million total viewer count on its best day. Trump’s Twitter account reaches 15.3 million people every time he says something. This kind of reach makes a platform like Twitter very hard to unseat.
But when we rely on the community of users and followers to be the gatekeepers, to take the place of all those trained and experienced folks who have made up the media institutions that we’ve relied on for so long, we can also see what’s missing — and the consequences of omission are severe. As many have noted, conversation about the election was deeply marred by the rapid sharing of falsehoods and misinformation, with no filter. Because most people today get their news from social networks, this is deeply troubling.
The platforms, Twitter and Facebook among them, have to take responsibility — because claiming neutrality at this stage, hiding behind the “technology company” label, when they represent such platform-level power, is absurd.
Could emerging entrepreneurs create a new social platform that combines the expression, creativity and easy trading of social currency that we see from Facebook and Twitter, but with an eye to more thoughtful discourse? Something that captures the clear need we have as citizens to develop and express identities around news and information, but with built-in means to edit bad information and non-constructive conversation?
Even given the enormous network effects inherent in the major social media platforms, creating a new one isn’t quite as ludicrous as it may sound. If Facebook and Twitter are, indeed, new whole-cloth media platforms, then we’ve witnessed the creation of no less than seven such global platforms with more than 100 million users (and in some cases, a billion or more users) in the last decade alone: Facebook, Twitter, Instagram, Snapchat, WhatsApp, Telegram and Pinterest. And I’m not even including the massive Asian platforms, such as WeChat and LINE.
When big challenges present themselves, we look to both the established players and emerging entrepreneurs to take big risks to make things better. I’m excited to see how this moment in time galvanizes entrepreneurial attention on making social networks work in our new world.
Facebook has followed Google’s lead by trumpeting plans to expand its presence in the UK — despite ongoing uncertainty over the impact of this summer’s Brexit vote for the country to leave the European Union.
Speaking at the annual CBI conference in London today, Facebook’s Nicola Mendelsohn, VP EMEA, announced plans for the social network to increase its UK headcount by 50 per cent by the end of 2017, and open a new HQ in the country.
Mendelsohn said the aim is to grow headcount from 1,000 to 1,500 by then — with “many” of the new jobs touted as “high skilled engineering jobs”.
“We came to London in 2007 with just a handful of people, by the end of next year we will have opened a new HQ and plan to employ 1,500 people. Many of those new roles will be high skilled engineering jobs as the UK is home to our largest engineering base outside of the US and is where we have developed new products like Workplace,” she said, also noting the company’s presence in Somerset — where its Aquila facility is working on designing and building solar power unmanned planes to bring connectivity to remote regions.
It’s not clear exactly what proportion of the additional jobs would be engineering roles vs other jobs such as sales. We asked but the company declined to provide any further details.
Facebook’s announcement of an intention to increase UK headcount follows Google’s UK-focused publicity last week when the company re-announced a long planned expansion of its London campus — couching the move as a continued commitment to the UK in spite of Brexit.
Reporters were told that the capacity of Google’s new London HQ is 7,000 vs the 4,000 of its current building — with the implication being the company could employ 3,000 more staff in the UK by 2020. Assuming, that is, business conditions in the UK prove favorable — with CEO Sundar Pichai talking about the ‘absolute’ importance of open borders and free movement for skilled migrants. Two things that, absolutely, cannot be guaranteed, given the UK’s impending Brexit. So quite how many of those potential 3,000 additional Google UK jobs end up existing remains to be seen — like so many things affected by Brexit.
Facebook’s UK expansion plans don’t mention any specific caveats or conditions for the company to grow headcount in the country. But in related PR it also makes a point of referencing its mission to “make the world more open and connected”. Which reads like a not-so-subtle argument for the UK government to push for a ‘soft Brexit’, rather than the tough on immigration rhetoric of the hard Brexiteers.
Especially as a “plan” to add an additional 500 jobs is in no way an irreversible guarantee. So again, it remains to be seen how many of the extra Facebook jobs survive the looming Brexit negotiations.
UK Prime Minister Theresa May has said she intends to trigger the start of the two-year negotiation process to leave the EU by the end of March 2017.
Also speaking at the CBI conference today, the Prime Minister announced a series of business-friendly measures aimed at pouring some emollient oil on the troubled waters of Brexit — including a government funding boost for R&D worth £2BN per year by 2020; and a review of the UK’s corporate tax rate, suggesting it could move to substantially cut the rate below the current 20 per cent. (Albeit, such a move could in fact complicate the UK’s Brexit negotiations — given it would likely be viewed as a hostile move by EU governments.)
Also on the table: a possible boost for R&D tax credits to further support businesses conducting research in the UK.
May also announced a new Industrial Strategy Challenge Fund, overseen by UK Research and Innovation and funded by some of the £2BN R&D boost — aimed at supporting the commercialization of what the government is dubbing “priority technologies”, such as robotics, biotechnology and AI.
Other emerging fields that could benefit from the new fund’s support include medical technology, satellites, advanced materials manufacturing and “other areas where the UK has a proven scientific strength and there is a significant economic opportunity for commercialisation”.
Featured Image: Sean Gallup/Getty Images
Facebook’s issues with viral false news reports dominated headlines this week, so naturally it came up as a key topic of discussion when I spoke to TechCrunch’s special projects editor and internet culture reporter Jordan Crook on this week’s episode. The sheer scope of the issue became very apparent as we talked things through.
We also cover the union of FanDuel and DraftKings into a single online fantasy sports betting platform powerhouse, since Jordan’s a big fan of fantasy sports (I’ll stick to just LOTR-style fantasy, thanks very much). The issue isn’t really whether the two pairing up is better for either; it’s the nature of the business model itself, and whether there isn’t something ethically unsettling about the whole proposition.
Fair warning: this is a pretty heavy episode, because we’re all still feeling a little raw after the U.S. election. But it’s honest, which is more than you can say for a lot of headlines that got plenty of shares during the election.
You can listen via the stream embedded above, or check us out and subscribe on iTunes (and leave a review), or in your podcast player of choice.
Facebook’s fake news problem persists, CEO Mark Zuckerberg acknowledged last night.
He’d been dismissive about the reach of misinformation on Facebook, saying that false news accounted for less than one percent of all the posts on the social media network. But a slew of media reports this week have demonstrated that, although fake posts may not make up the bulk of the content on Facebook, they spread like wildfire — and Facebook has a responsibility to address it.
“We’ve made significant progress, but there is more work to be done,” Zuckerberg wrote, outlining several ways to address what he called a technically and philosophically complicated problem. He proposed stronger machine learning to detect misinformation, easier user reporting and content warnings for fake stories, while noting that Facebook has already taken action to eliminate fake news sites from its ad program.
The firestorm over misinformation on Facebook began with a particularly outrageous headline: “FBI Agent Suspected in Hillary Email Leaks Found Dead.”
The false story led to accusations that Facebook had tipped the election in Donald Trump’s favor by turning a blind eye to the flood of fake stories trending on its platform. The story, which ran just days before the election on a site for a made-up publication called the Denver Guardian, claimed that Clinton plotted the murders of an imaginary FBI agent and his imaginary wife, then tried to cover them up as an act of domestic violence. It was shared more than 568,000 times.
The Denver Guardian story caused a crisis at Facebook, and it hasn’t gone away. Last night, the story appeared yet again in a friend’s newsfeed. “BREAKING,” the post blared. “FBI AGENT & HIS WIFE FOUND DEAD After Being ACCUSED of LEAKING HILLARY’s EMAILS.” This time, the story was hosted by a site called Viral Liberty. Beneath the headline was a button encouraging Facebook users to share the story, and according to Facebook’s own data, it had been shared 127,680 times.
Facebook isn’t alone. Google and Twitter grapple with similar problems and have mistakenly allowed fake stories to rise to prominence as well. And although stories about the rise of fake news online have focused primarily on pro-Trump propaganda, the sharing-without-reading epidemic exists in liberal circles too — several of my Facebook friends recently shared an article by the New Yorker‘s satirist Andy Borowitz titled “Trump Confirms That He Just Googled Obamacare” as if it were fact, celebrating in their posts that Trump may not dismantle the Affordable Care Act after all his campaign promises to the contrary.
But, as the hub where 44 percent of Americans read their news, Facebook bears a unique responsibility to address the problem. According to former Facebook employees and contractors, the company struggles with fake news because its culture prioritizes engineering over everything else and because it failed to build its news apparatus to recognize and prioritize reliable sources.
Facebook’s media troubles began this spring, when a contractor on its Trending Topics team told Gizmodo that the site was biased against conservative media outlets. To escape allegations of bias, Facebook fired the team of journalists who vetted and wrote Trending Topics blurbs and turned the feature over to an algorithm, which quickly began promoting fake stories from sites designed to churn out incendiary election stories and convert them into quick cash.
It’s not a surprise that Trending Topics went so wrong, so quickly — according to Adam Schrader, a former writer for Trending Topics, the tool pulled its hashtagged titles from Wikipedia, a source with its own struggles with the truth.
“The topics would pop up into the review tool by name, with no description. It was generated from a Wikipedia topic ID, essentially. If a Wikipedia topic was frequently discussed in the news or Facebook, it would pop up into the review tool,” Schrader explained.
From there, he and the other Trending Topics writers would scan through news stories and Facebook posts to determine why the topic was trending. Part of the job was to determine whether the story was true — in Facebook’s jargon, to determine whether a “real world event” had occurred. If the story was real, the writer would then draft a short description and choose an article to feature. If the topic didn’t have a Wikipedia page yet, the writers had the ability to override the tool and write their own title for the post.
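In rough pseudocode, the workflow Schrader describes could be sketched like this. All names, fields, and the `review` function here are hypothetical illustrations of the reported process, not Facebook’s actual internal tooling:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrendingCandidate:
    """A topic surfaced into the review tool, keyed by a Wikipedia topic ID."""
    wikipedia_topic_id: str
    title: str                      # generated from the Wikipedia page name
    real_world_event: bool = False  # set by a human reviewer, in Facebook's jargon
    blurb: Optional[str] = None
    featured_article_url: Optional[str] = None

def review(candidate, is_real_event, blurb=None, article_url=None,
           override_title=None):
    """A human reviewer confirms a 'real world event' occurred, drafts a short
    description, and chooses an article to feature. Topics that fail the
    real-world-event check are dropped rather than published."""
    if not is_real_event:
        return None
    candidate.real_world_event = True
    candidate.blurb = blurb
    candidate.featured_article_url = article_url
    if override_title:  # writers could override titles for topics without a Wikipedia page
        candidate.title = override_title
    return candidate
```

The key point of the sketch is the human gate: removing the `review` step leaves only the Wikipedia-driven surfacing, which is exactly the exploitable bot the article describes.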
Human intervention was necessary at several steps of the process — and it’s easy to see how Trending Topics broke down when humans were removed from the system. Without a journalist to determine whether a “real world event” had occurred and to choose a reputable news story to feature in the Topic, Facebook’s algorithm is barely more than a Wikipedia-scraping bot, susceptible to exploitation by fake news sites.
But the idea of using editorial judgement made Facebook executives uncomfortable, and ultimately Schrader and his co-workers lost their jobs.
“[Facebook] and Google and everyone else have been hiding behind mathematics. They’re allergic to becoming a media company. They don’t want to deal with it,” former Facebook product manager and author of Chaos Monkeys Antonio Garcia-Martinez told TechCrunch. “An engineering-first culture is completely antithetical to a media company.”
Of course, Facebook doesn’t want to be a media company. Facebook would say it’s a technology company, with no editorial voice. Now that the Trending editors are gone, the only content Facebook produces is code.
But Facebook is a media company, Garcia-Martinez and Schrader argue.
“Facebook, whether it says it is or it isn’t, is a media company. They have an obligation to provide legit information,” Schrader told me. “They should take actions that make their product cleaner and better for people who use Facebook as a news consumption tool.”
Garcia-Martinez agreed. “The New York Times has a front page editor, who arranges the front page. That’s what New York Times readers read every day — what the front page editor chooses for them. Now Mark Zuckerberg is the front page editor of every newspaper in the world. He has the job but he doesn’t want it,” he said.
Zuckerberg is resistant to this role, writing last night that he preferred to leave complex decisions about the accuracy of Facebook content in the hands of his users. “We do not want to be arbiters of truth ourselves, but instead rely on our community and trusted third parties,” he wrote. “We have relied on our community to help us understand what is fake and what is not. Anyone on Facebook can report any link as false, and we use signals from those reports along with a number of others — like people sharing links to myth-busting sites such as Snopes — to understand which stories we can confidently classify as misinformation.”
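Zuckerberg’s description amounts to combining weighted crowd signals into a classification. A minimal sketch of that idea follows; the weights, threshold, and function names are invented for illustration and bear no relation to Facebook’s real system:

```python
def misinformation_score(user_reports, shares, mythbust_link_shares,
                         weight_reports=1.0, weight_mythbust=5.0):
    """Combine crowd signals into a rough score: the rate at which sharers
    report a link as false, plus extra weight when people respond by sharing
    links to fact-checking sites such as Snopes."""
    if shares == 0:
        return 0.0
    report_rate = user_reports / shares
    mythbust_rate = mythbust_link_shares / shares
    return weight_reports * report_rate + weight_mythbust * mythbust_rate

def classify(score, threshold=0.15):
    """Only stories whose combined signals clear a threshold are labeled."""
    return "likely misinformation" if score >= threshold else "unclassified"
```

Even this toy version shows the weakness the article goes on to describe: the signals all come from the same crowd that is sharing the story in the first place.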
However, Facebook’s reliance on crowd-sourced truth from its users and from sites like Wikipedia will only take the company halfway to the truth. Zuckerberg also acknowledges that Facebook can and should do more.
Change the algorithm
“There’s definitely things Facebook could do to, if not solve the problem, at least mitigate it,” Garcia-Martinez said, highlighting his former work on ad quality and the massive moderation system Facebook uses to remove images and posts that violate its community guidelines.
To cut back on misinformation, he explains, “You could effectively change distribution at the algorithmic level so they don’t get the engagement that they do.”
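Changing distribution without outright removal might look something like the following sketch: demote flagged stories in ranking rather than delete them. The field names and penalty factor are hypothetical:

```python
def rank_for_feed(stories):
    """Demote, rather than delete, stories flagged as probable misinformation:
    multiply their engagement-based rank by a penalty so they surface less
    often in News Feed. (Illustrative only; not Facebook's ranking code.)"""
    MISINFO_PENALTY = 0.1

    def adjusted(story):
        score = story["engagement_score"]
        if story.get("flagged_as_misinformation"):
            score *= MISINFO_PENALTY
        return score

    return sorted(stories, key=adjusted, reverse=True)
```

The design choice here mirrors the trade-off Garcia-Martinez raises: downranking avoids hard censorship, but it is also invisible to the publisher, which is what makes the opacity problem below so thorny.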
This kind of technical solution is most likely to get traction in Facebook’s engineering-first culture, and Zuckerberg says the work is already underway. “The most important thing we can do is improve our ability to classify misinformation. This means better technical systems to detect what people will flag as false before they do it themselves,” he wrote.
This kind of algorithmic tweaking is already popular at Google and other major companies as a way to moderate content. But, in pursuing a strictly technical response, Facebook risks becoming an opaque censor. Legitimate content can vanish into the void, and when users protest, the only response they’re likely to get is, “Oops, there was some kind of error in the algorithm.”
Zuckerberg is rightfully wary of this. “We need to be careful not to discourage sharing of opinions or to mistakenly restrict accurate content,” he said.
Improve the user interface
Mike Caulfield, the director of blended and networked learning at Washington State University Vancouver, has critiqued Facebook’s misinformation problem. He writes that sharing fake news on Facebook isn’t a passive act — rather, it trains us to believe the things we share are true.
“Early Facebook trained you to remember birthdays and share photos, and to some extent this trained you to be a better person, or in any case the sort of person you desired to be,” Caulfield said, adding:
The process that Facebook currently encourages, on the other hand, of looking at these short cards of news stories and forcing you to immediately decide whether to support or not support them trains people to be extremists. It takes a moment of ambivalence or nuance, and by design pushes the reader to go deeper into their support for whatever theory or argument they are staring at. When you consider that people are being trained in this way by Facebook for hours each day, that should scare the living daylights out of you.
When users look at articles in their News Feed today, Caulfield notes, they see prompts encouraging them to Like, Share, Comment — but nothing suggesting that they Read.
Caulfield suggests that Facebook place more emphasis on the domain name of the news source, rather than solely focusing on the name of the friend who shares the story. Facebook could also improve by driving readers to actually engage with the stories rather than simply reacting to them without reading, but as Caulfield notes, Facebook’s business model is all about keeping you locked into News Feed and not exiting to other sites.
Caulfield’s suggestions for an overhaul of the way articles appear in News Feed are powerful, but Facebook is more likely to make small tweaks than major changes. A compromise might be to label or flag fake news as such when it appears in the News Feed, and Zuckerberg says this is a strategy Facebook is considering.
“We are exploring labeling stories that have been flagged as false by third parties or our community, and showing warnings when people read or share them,” he said.
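The flow Zuckerberg describes, a warning interstitial between the share button and the actual share, could be sketched as follows. The structure and field names are assumptions, not Facebook’s implementation:

```python
def share_flow(story, user_confirms):
    """If a story has been flagged as false by third parties or the community,
    show a warning before sharing; the share only completes if the user
    confirms past the warning. Unflagged stories share immediately."""
    if story.get("flagged_as_false"):
        if not user_confirms:
            return {"shared": False, "warning_shown": True}
        return {"shared": True, "warning_shown": True}
    return {"shared": True, "warning_shown": False}
```

Note that in this model the user can always share anyway; the label adds friction and context without blocking speech, which is presumably why it appeals to a company wary of being an arbiter of truth.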
It’s a strategy that sources tell me is being considered not just at Facebook but at other social networks as well. Still, risk-averse tech giants are hesitant to slap a “FAKE” label on a news story. What if they get it wrong? And what about stories like Borowitz’s satire — should the story be called out as false, or merely a joke? And what if a news story from a legitimate publisher turns out to contain inaccuracies? Facebook, Google, Twitter, and others will be painted into a corner, forced to decide what percentage of the information in a story can be false before it’s blocked, downgraded, or marked with a warning label.
Fact-checking Instant Articles
Like the fight against spam, clickbait, and other undesirable content, the war against misinformation on platforms like Google and Facebook is a game of whack-a-mole. But both companies have built their own interfaces for news — Accelerated Mobile Pages and Instant Articles — and they could proactively counter fake stories in those spaces.
AMP and Instant Articles are open platforms, so fake news publishers are welcome to join and distribute their content. But the companies’ control over these spaces gives them an opportunity to detect fake news early.
Google and Facebook both have a unique opportunity to fact-check within AMP and Instant Articles — they could place annotations over certain parts of a news story in the style of News Genius to point out inaccuracies, or include links to other articles offering counterpoints and fact-checks.
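An annotation of that kind is, at bottom, a verdict and a source anchored to a span of article text. A minimal sketch, with all names and the inline-marker format invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class FactCheckAnnotation:
    """A News Genius-style note anchored to a span of the article body."""
    start: int       # character offset where the annotated span begins
    end: int         # character offset where it ends
    verdict: str     # e.g. "false", "disputed", "needs context"
    source_url: str  # link to the fact-check or counterpoint

def annotate(article_text, annotations):
    """Render each annotated span followed by an inline marker pointing to
    the fact-check. A real implementation would render an overlay instead."""
    out, cursor = [], 0
    for a in sorted(annotations, key=lambda a: a.start):
        out.append(article_text[cursor:a.end])
        out.append(f" [{a.verdict}: {a.source_url}]")
        cursor = a.end
    out.append(article_text[cursor:])
    return "".join(out)
```

Because the platforms control the rendering layer in AMP and Instant Articles, they could attach such annotations without the publisher’s cooperation — which is precisely what makes the approach both powerful and politically fraught.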
Zuckerberg wasn’t clear about what third-party verification of the news on Facebook would look like, saying only, “There are many respected fact checking organizations and, while we have reached out to some, we plan to learn from many more.”
Bringing third-party vetting back into the picture means a return to the kind of human oversight Facebook had in its Trending Topics team. Although Facebook has made clear it wants to leave complex decisions up to its algorithms, the plummeting quality of Trending Topics makes it clear that the algorithm isn’t ready yet.
“I don’t think Trending ever had a problem with fake news or biases necessarily, before the Gizmodo article or after. All the problems were after the team was let go,” Schrader said, noting that Facebook intended to incorporate machine learning into Trending Topics but needed human input to guide and train the algorithm.
Engineers working on machine learning have told me they estimate it would take a dedicated team more than a year to train an algorithm to properly do the work Facebook is attempting with Trending Topics.
Appoint a public editor
Zuckerberg did acknowledge that perhaps Facebook can learn something from journalists like Schrader after all. “We will continue to work with journalists and others in the news industry to get their input, in particular, to better understand their fact checking systems and learn from them,” he said.
But the media certainly isn’t perfect. Sometimes we get our facts wrong, and the results can range from comical to disastrous. In 2004, the New York Times issued a statement questioning its own reporting on several factually inaccurate stories that spurred the war in Iraq. Just as journalists sometimes make mistakes, so will Facebook. And when that happens, Facebook should address the errors.
“In a small back door sort of way, it will adopt some of the protocols of a media company,” Garcia-Martinez says of Facebook. One suggestion: “Get a public editor like the New York Times.”
The public editor serves as a liaison between a paper and its readers, and provides answers about the reporting and what could have been done better.
In his late-night Facebook posts, Zuckerberg has already somewhat assumed this role. But an individual with more independence could help Facebook learn and grow.
“They are going to get a lot better about this business of editorship,” Garcia-Martinez predicts. “When the stakes are American democracy, saying, ‘We’re not a media company,’ is not good enough.”
Featured Image: Bryce Durbin/TechCrunch