Facebook’s new video app hits Samsung Smart TVs

Facebook’s new video app is now available on Samsung Smart TVs – which makes Samsung the first platform to feature the social network’s latest application. Facebook had announced earlier this month that it was soon releasing a video app aimed at connected TVs and other media players, like Apple TV and Amazon Fire TV, but didn’t unveil a specific launch date for any of those platforms.

According to Samsung, the Facebook video app for TV is now available on Samsung Smart TVs, including its 2017 QLED TV lineup and all of its 2015, 2016 and 2017 Smart TV models. Samsung is the only TV manufacturer supporting the Facebook app at launch, the company also notes.

The app itself allows users to sign into their own Facebook accounts, then view the videos shared by friends or Pages they follow, as well as top videos and others recommended to them based on their interests.

You only have to authenticate with Facebook once – after installing the app and launching it for the first time, Samsung says.

News of Facebook’s app for the TV’s big screen was originally detailed by Facebook’s VP of Partnerships Dan Rose at the CODE Media conference. The move represents Facebook’s attempt to better compete with video-sharing rival YouTube as a place where creators want to upload and share their work, as well as a place where those videos can be more easily discovered.

The company also said at the time that videos in the News Feed would now play with the sound on, unless your device was set to silent. And the company said that it would no longer crop vertical video, to better compete with Snapchat. A picture-in-picture mode was introduced, as well, allowing you to watch a video by pulling it out to the side of your feed, as you continued to scroll.

More recently, Facebook announced it would test ads in the middle of videos – something that could help publishers make more money from their Facebook videos. This is an area where Facebook is still challenged, as compared with YouTube.

Combined, these changes are designed to make video a more central part of Facebook’s social network, which is often still used more for things like text updates, sharing links, and posting photos. But Facebook’s focus on video has been expanding, thanks to its launch of live streaming, and other advanced video tools. This includes those that allow creators more control over how their content is shared, such as Rights Manager, its version of YouTube’s Content ID, introduced at last year’s Facebook F8 conference.

With an app for watching video on TVs, Facebook gains another means of capturing users’ attention, not to mention ad dollars.

The company has not yet said when the video app will launch on Apple TV and Amazon Fire TV, but given Samsung’s news, that may not be far off.

Facebook’s video app is available now via the Samsung Smart Hub.

Facebook digs into mobile infrastructure in Uganda as TIP commits $170M to startups

While Google is using MWC to show off some of its advances in native apps on mobile devices — specifically in chat apps — the world’s biggest chat app company is doing something completely different. Today, Facebook announced that it is building a 770-kilometer (500-mile) fiber backhaul network in Uganda, in partnership with India’s Airtel and wholesale provider BCS, carriers that both have networking businesses in the country, and that its Telecom Infra Project is leading the charge to invest $170 million in telecoms infrastructure startups.

Alongside this, the company is also making headway on its other efforts to play a bigger role in the infrastructure behind how people connect to the internet (and specifically to Facebook) through its Telecom Infra Project. Facebook’s own Voyager optical networking transponder is now being deployed and tested by the carriers Telia and Orange in Europe.

Facebook said it expects the Uganda project — which will see “tens of millions of investment” from Facebook — to cover access for more than 3 million people (that’s not how many will use it, but how many can potentially be covered). As a backhaul network, the purpose will be to provide more capacity to wireless carriers’ base stations so that they can offer 3G and 4G mobile data services (in many places in the developing world, carriers still can offer no more than 2G or 2.5G).

The Voyager project, meanwhile, is one of a number of updates from the TIP, which was created by Facebook last year but (like Facebook’s other connectivity project, Internet.org) counts a number of other members — in this case, more than 450, including large and small regional carriers; equipment and software vendors like Intel and Microsoft; and more.

Other news from the TIP today included the announcement that it will commit $170 million to invest in startups that are building or working on telecom infrastructure solutions. This, in my opinion, is an interesting development, considering how much of the recent period of startup development and funding has been focused on software solutions.

Facebook and the TIP are not revealing too many details yet on which companies are the recipients of this funding — we have asked and will update as we learn more — but it notes that the other contributors to that $170 million pot include Atlantic Bridge, Capital Enterprise, Downing Ventures, Entrepreneur First, Episode 1 Ventures, IP Group plc, Oxford Sciences Innovation and Touchstone Innovations, along with other investors, incubators and institutions.


“We believe this focused investment direction from these innovative investors will bring new infrastructure solutions to the industry,” Facebook said in today’s announcement. During a meeting at MWC, Facebook VP Jay Parikh offered a few more details on how Facebook is involved in the fund. “Facebook is not actually investing in that in terms of actual money. That’s the VCs. We are lending our expertise in mentoring, we help them understand how to do hackathons, how to build out their space, we will offer any expertise we can if they decide to use our open source hardware and software.” He added that the company is essentially helping to bring the knowledge it gained from running its production environment at scale, and its culture, to these centers. “It’s more sustainable this way,” he noted.

To that end, there are also two new “acceleration centers” being launched in the UK, spearheaded by BT, for carriers and Facebook to consider and deploy infrastructure solutions from startups in the field. This is on top of a first center that Facebook launched in South Korea last year with SK Telecom. You can read more about TIP’s other projects, which are largely in the very technical, piloting phase of networking technology, here.


Network connectivity, and Facebook’s “mission to connect the world,” have been longstanding side themes for the social networking company, whose bread and butter continues to be advertising on its social network, which includes Facebook, but also Messenger, WhatsApp and Instagram.

Whereas Facebook usage is nearly ubiquitous in regions like North America and Western Europe, in developing markets, especially in places where the infrastructure is lacking for good internet access, it’s less used, and so Facebook’s connectivity efforts are in part a way of creating the right circumstances to attract more business.

But those efforts, while having the overtly charitable and good goal of bridging the digital divide, have had very mixed results up to now. Internet.org — the project where Facebook has partnered with several other companies to provide essentially “free” mobile internet in selected countries — backfired when it got blocked in India over net neutrality concerns (specifically that Facebook’s initiative was helping Facebook more than anyone else). It has still managed to connect 40 million people through the initiative, which has continued to expand.

Parikh noted in today’s press conference that the company is currently focused on the Express Wifi project in India and that we should “stay tuned” for any further announcements.

And a test of its Aquila drone, a “plane” that beams down internet access, ended in a crash caused by a structural failure.

And while today’s news is about how Facebook appears to be focusing more on building the exact physical infrastructure that it has said in the past was too costly to deploy, it’s also continuing to explore further wireless options, such as its plan to offer access in Africa via satellite. That plan faced a setback when Facebook’s first satellite was destroyed when SpaceX’s rocket exploded last year. Parikh, however, said the company remains interested in satellites, which he believes are the best solution for remote areas (and potentially a complementary technology to its Aquila drone efforts).

The Internet.org situation in India shows how governments, businesses and the general public are indeed raising questions about what the full benefits or detriments are of companies like Facebook getting more involved in areas like connectivity. These are questions that will continue to be raised as Facebook provides ever-loftier presentations of its vision. Meanwhile, on a more basic level, there are ongoing questions of just how beneficial more connectivity is without better understanding of what’s being shared. The rise of fake news, for example, coupled with freshly minted surfers, is a scary prospect.

Featured Image: Ken Banks/Flickr UNDER A CC BY 2.0 LICENSE

Chat app Line doubles its stake in Snapchat clone Snow

Ahead of Snap’s eagerly awaited IPO, Line, one of the U.S. tech IPO highlights of 2016, has doubled down on sister service and Snapchat clone Snow.

Line and Snow both share the same parent company — Naver — but the two firms have increasingly become financially entwined. Line bought up 25 percent of Snow last September for $45 million, and now it is nearly doubling its equity to 48.6 percent, according to a filing. The deal values Snow at around $207 million (235 billion KRW), according to a filing in Japanese, up from 200 billion KRW ($177 million) in September.

Line went public in a dual U.S.-Japan IPO that raised $1.1 billion. Shares initially popped 50 percent on the firm’s Tokyo debut, but there’s been little to celebrate since then. Line saw user numbers dip for the first time in January — despite a record year of revenue in 2016 — while its userbase is increasingly limited to four countries: Japan, Indonesia, Taiwan and Thailand. Expanding its presence is an area where it feels that increased collaboration with Snow could bear fruit.

As part of this new deal, Line has agreed to give its photo app business to Snow — that includes selfie app B612, its core Line Camera app, food-focused Foodie, and its makeup preview app Looks — in order to “consolidate and improve the efficiency” of the services.

Line said in another filing that it has been working closely with Snow since its September investment, but that the competitive nature of photo and video apps means this deal to combine many of their resources will help both Snow and the Line apps grow.

Snow reached 40-50 million active users in January, with the app particularly popular in Japan, Korea and China. (Its rising popularity triggered investment interest from Facebook, which was rebuffed.) While Line hopes it may be able to tap into that success to boost its core chat app, it said that the camera apps have proven popular in markets like China, Japan, Vietnam, Indonesia, Brazil and Mexico. Given that each ties back into Line — although they can be used without a Line account — it is also betting that it can reverse its globalization struggles by giving these apps more freedom to grow via this deal.

Investors certainly seemed bullish. Line’s Tokyo share price reached 3,895 JPY at close on Friday, when Line made the announcement. That was up from 3,660 JPY at the start of the day. However, the share price dropped more than two percent when the market reopened on Monday.

Facebook’s mobile prodigy launches video charades game

Michael Sayman was just 17 when Facebook hired him, but he’d already built 5 apps. Now 7 years after his first launch, the Facebook product manager has just released Show and Tell, which turns selfies and visual communication into a game. You’re given an emotion to act out, you send the video to friends and they try to guess what you’re feeling.

“I made 4 Snaps about 3 years ago and it was like the same thing, except with 4 pictures instead of videos. And so I thought it would be fun to make a video version,” Sayman tells me.


When I asked the now-20-year-old if Facebook helped and if it’s cool with him making his own apps on the side, he tells me Show and Tell is “not connected to Facebook,” and that “as long as they don’t compete, it’s fine.”

Sayman last launched Facebook’s high school-only app experiment Lifestage in August. But with Show and Tell, he says, “I made it during the weekends . . . on a separate laptop, so it was ok 👍. Of course, I had to go make sure it was all good with the conflicts team before putting it anywhere.”


Sayman, now 20, pictured on the left, was just 17 when he was hired by Facebook, as shown on the right

The game trades on the same immutable fact as the ALS Ice Bucket Challenge that set off the meteoric rise of video on Facebook: Certain activities make everyone look funny. Like pretending to be a snake, or mimicking the “disgust” emoji. And the game is inherently viral, as it pushes you to send your charades to friends. It’s already gunning for revenue with interstitial ads.

Games built atop mobile video have a big opportunity. While there are plenty of open-ended platforms like YouTube, Snapchat, Instagram and Houseparty, one of the core problems is that most people don’t know what to do on camera. Some apps have tried to solve that with crazy selfie filters and stickers that divert the attention from your real face. Show and Tell lets you respond to a prompt rather than creating from scratch.


There are plenty of other ways this medium could be explored. Video-dating apps where everyone answers a question, Q&A apps for sharing knowledge or games that detect your movement on camera. Show and Tell would have to get pretty big for Sayman to quit his lucrative day job. But this experiment in mobile could also teach Sayman what tricks to bake into Facebook’s next standalone app.

Facebook’s new profile photo flags and Zuck’s idea of ‘community’

Mark Zuckerberg believes we should be “coming together not just as cities or nations, but also as a global community.” That’s why Facebook’s latest feature feels a bit confusing.

Facebook has added nearly 200 flags to its Profile Frames feature, which lets you overlay imagery filters atop your profile photo. Facebook first launched profile frames for sports teams in 2015, and started letting people submit their own frames last year.


But today’s push of flags, many for individual countries, seems simultaneously to align with Zuckerberg’s idea of finding your community on Facebook and to contradict the view of the world as a unified global community. If users are proudly waving their country’s flag all over Facebook, it might make them appear even more foreign to users from elsewhere.

It’s this “us versus them” ideology that Zuckerberg rails against in his 5,000-word manifesto, but that is somewhat propelled by these profile flags.

While this might be a minor launch meant to just be fun and patriotic, it outlines the potential concerns with Facebook’s leader taking an outright stand on world issues. Rather than simply maximizing for user engagement, shareholder value and its basic mission to connect people, it must also weigh whether product changes align with its new mission of a safe, inclusive, informed global community.

We need world leaders, including tech CEOs, to stand up for justice and safety for everyone in these dire days of Trump. But that push for the greater good could complicate the facts of running their businesses.

Facebook tests ad breaks in all types of videos, giving creators a 55% cut

Facebook today announced it has begun testing ad breaks that interrupt on-demand video, using a small set of partners who will earn a 55 percent ad revenue share while Facebook keeps 45 percent. That could change the way creators make video content so they tease viewers enough to sit through the ads, while luring more producers to Facebook.

On-demand video publishers will get to select where in their video they want to insert an ad break, but it must be at least 20 seconds in and at least 2 minutes apart. Recode reported last month that ad breaks were coming.

Facebook’s Audience Network for showing ads in other apps now lets all publishers host in-stream video ads, after testing them this year.

Facebook is also expanding its existing test of ad breaks in Live videos that it announced in August. Now Pages and profiles in the U.S. that have at least 2,000 followers and reached at least 300 concurrent viewers in one of their recent Live videos are eligible to insert ad breaks.

After at least 4 minutes of broadcasting to at least 300 concurrent viewers, they’ll see a money-sign alert reading “You can take an ad break” alongside real-time comments on their video. Tapping that initiates an ad break of up to 20 seconds, and creators can take additional ad breaks every 5 minutes.
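Taken together, the announced rules for on-demand and Live ad breaks can be sketched as a few simple checks. This is a rough model of the constraints as described in the announcement only — the function and parameter names are hypothetical, not Facebook’s actual implementation:

```python
def creator_payout_cents(ad_revenue_cents):
    """Creators keep 55% of ad break revenue; Facebook keeps 45%.
    Integer cents avoid floating-point rounding."""
    return ad_revenue_cents * 55 // 100

def valid_on_demand_breaks(break_times_sec):
    """On-demand rules: each break must start at least 20 seconds into
    the video, and consecutive breaks must be at least 2 minutes apart."""
    times = sorted(break_times_sec)
    if any(t < 20 for t in times):
        return False
    return all(later - earlier >= 120
               for earlier, later in zip(times, times[1:]))

def live_break_available(minutes_broadcast, concurrent_viewers, followers,
                         minutes_since_last_break=None):
    """Live rules: Pages/profiles need 2,000+ followers and 300+
    concurrent viewers; the first break becomes available after 4
    minutes of broadcasting, then one every 5 minutes."""
    if followers < 2000 or concurrent_viewers < 300:
        return False
    if minutes_since_last_break is None:  # no break taken yet
        return minutes_broadcast >= 4
    return minutes_since_last_break >= 5
```

Under this reading, a break placed 10 seconds into an on-demand video would be rejected, while breaks at 30 seconds and 3 minutes would both pass.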

Now both live broadcasters and recorded content creators on Facebook will earn a share of ad revenue from their viewers, creating an open monetization platform that could persuade creators to choose Facebook Live.

Ad breaks could actually make it easier for Live creators to be on camera, because if they need to take a quick breather, adjust their hair or switch settings, they can do it off camera. The ad breaks can include vertical video, a further sign of Facebook invading Snapchat’s domain.

And Facebook’s plan takes all the work out of monetization, because its team handles all the ad sales and accounting. Outside of big news and entertainment publishers, many of the web’s top video creators are teens and young adults shooting from their bedrooms and desperate to turn their hobby into a profession.

That’s why YouTube, which pays them, has been the clubhouse for these videographers. Now they have good reason to put their content on Facebook beyond the virality, even if it cannibalizes their YouTube view counts. And Facebook’s ad breaks might lure Live broadcasters away from competitor Periscope, which has only begun doing big sponsorship deals with celebrities. Facebook was already doing one-off deals with big publishers to get them using Live, but now Facebook’s incentive system is available to a much wider range of broadcasters.

Previously, Facebook only showed video ads as either related videos after you watched one purposefully, or as distinct ad units in the feed. Now it can earn cash directly from the more than 100 million hours of video people watch on its platform, and that stat was from a year ago, before Facebook’s continued rise as a video host. Facebook video consumption could also rise beyond its home on mobile with the company’s launch of video viewing apps for TV set-top boxes, though for now it’s not showing ads there.

The big concern here, though, is that video makers will purposefully delay the best parts of videos until after ad breaks, making them much less watchable. Currently, creators try to cram the flashiest moments of their content in the first few seconds to catch people’s eyes while they’re scrolling the feed, giving people what they want up front.

Now creators might instead use the first 20 seconds of videos to build suspense to a cliffhanger, insert an ad break and then put the meat of the video after they’ve already earned their cut. Along with the switch from videos autoplaying silently to having sound on by default, the whole Facebook video creation playbook will have to change.

Facebook’s VP of partnerships Nick Grudin tells TechCrunch, “Whether on Facebook or off, we’re committed to continuing to work with our partners to develop new monetization products and ad formats for digital video. It’s early days, but today’s updates are a step towards this goal.”

Together, these initiatives could let Facebook further boost the cash it earns from the same amount of News Feed space. If Facebook can lure the best content onto its platform, users will end up sitting through more lucrative ads than if they were just scrolling past photo ads in the feed.

Don’t trust Facebook’s shifting line on controversy

Would you tell Facebook you’re happy to see all the bared flesh it can show you? And that the more gratuitous violence it pumps into your News Feed the better?

Finding out where a person’s ‘line’ lies on viewing controversial types of content is now on Facebook’s product roadmap — explicitly stated by CEO Mark Zuckerberg in a lengthy blog post last week, not-so-humbly entitled ‘Building a global community‘.

Make no mistake, this is a huge shift from the one-size-fits-all ‘community standards’ Facebook has peddled for years — crashing into controversies of its own when, for example, it disappeared an iconic Vietnam war photograph of a naked child fleeing a napalm attack.

In last week’s wordy essay — in which Zuckerberg generally tries to promote the grandiose notion that Facebook’s future role is to be the glue holding the fabric of global society together, even as he fails to flag the obvious paradox: that technology which helps amplify misinformation and prejudice might not be so great for social cohesion after all — the Facebook CEO sketches out an impending change to community standards that will see the site actively ask users to set a ‘personal tolerance threshold’ for viewing various types of less-than-vanilla content.

On this Zuckerberg writes:

The idea is to give everyone in the community options for how they would like to set the content policy for themselves. Where is your line on nudity? On violence? On graphic content? On profanity? What you decide will be your personal settings. We will periodically ask you these questions to increase participation and so you don’t need to dig around to find them. For those who don’t make a decision, the default will be whatever the majority of people in your region selected, like a referendum. Of course you will always be free to update your personal settings anytime.

With a broader range of controls, content will only be taken down if it is more objectionable than the most permissive options allow. Within that range, content should simply not be shown to anyone whose personal controls suggest they would not want to see it, or at least they should see a warning first. Although we will still block content based on standards and local laws, our hope is that this system of personal controls and democratic referenda should minimize restrictions on what we can share.
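The scheme Zuckerberg sketches — a personal threshold, a regional majority default for those who never set one, and removal only beyond the most permissive option — can be modeled roughly as follows. The 0–3 rating scale and every name here are my assumptions for illustration, not anything Facebook has published:

```python
from statistics import median

def effective_threshold(user_setting, regional_settings):
    """A user's own setting if they chose one, otherwise the regional
    majority -- modeled here as the median of explicit settings."""
    if user_setting is not None:
        return user_setting
    return median(regional_settings)

def decide(content_rating, user_setting, regional_settings, max_allowed=3):
    """Return 'remove', 'warn' or 'show' for one piece of content, rated
    0 (vanilla) to 3 (most graphic). Content beyond what the most
    permissive option allows is taken down for everyone; within range,
    it sits behind a warning for users whose threshold is lower."""
    if content_rating > max_allowed:
        return "remove"
    if content_rating > effective_threshold(user_setting, regional_settings):
        return "warn"  # or simply not shown at all
    return "show"
```

In this model, a user who never set a threshold, in a region whose explicit settings are [0, 1, 1], would see rating-2 content only behind a warning — exactly the ‘default to the regional majority’ behavior the quote describes.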

A following paragraph caveats that Facebook’s in-house AI does not currently have the ability to automatically identify every type of (potentially) problematic content. Though the engineer in Zuck is apparently keeping the flame of possibility alive — by declining to state the obvious: that understanding the entire spectrum of possible human controversies would require a truly super-intelligent AI.

(Meanwhile, Facebook’s in-house algorithms have shown themselves to be hopeless at being able to correctly ID some pretty bald-faced fakery. And he’s leaning on third party fact-checking organizations — who do employ actual humans to separate truth and lies — to help fight the spread of Fake News on the platform, so set your expectations accordingly… )

“It’s worth noting that major advances in AI are required to understand text, photos and videos to judge whether they contain hate speech, graphic violence, sexually explicit content, and more. At our current pace of research, we hope to begin handling some of these cases in 2017, but others will not be possible for many years,” is how Zuck frames Facebook’s challenge here.

The problem is that this — and indeed much else in the ~5,000-word post — is mostly misdirection.

The issue is not whether Facebook will be able to do what he suggests is its ultimate AI-powered goal (i.e. scan all user-shared content for problems; categorize everything accurately across a range of measures; and then dish up exactly the stuff each user wants to see in order to keep them fully engaged on Facebook, and save Facebook from any more content removal controversies) — rather the point is Facebook is going to be asking users to explicitly give it even more personal data.

Data that is necessarily highly sensitive in nature — given that the community governance issue he’s flagging here relates to controversial content. Nudity, violence, profanity, hate speech, and so on.

Yet Facebook remains an advertising business. It profiles all its users, and even tracks non-users‘ web browsing habits, continually harvesting digital usage signals to feed its ad targeting algorithms. So the obvious question is whether or not any additional data Facebook gathers from users via a ‘content threshold setting’ will become another input for fleshing out its user profiles for helping it target ads.

We asked Facebook whether it intends to use data provided by users responding to content settings-related questions in future for ad targeting purposes but the company declined to comment further on Zuckerberg’s post.

You might also wonder whether, given the scale of Facebook’s tracking systems and machine learning algorithms, couldn’t it essentially infer individuals’ likely tolerance for controversial content? Why does it need to ask at all?

And isn’t it also odd that Zuckerberg didn’t suggest an engineering solution for managing controversial content, given, for example, that he’s been so intent on pursuing an engineering solution to the problem of Fake News? Why doesn’t he talk about how AI might also rise to the complex challenge of figuring out personal content tastes without offending people?

“To some extent they probably can already make a very educated, very good guess at [the types of content people are okay seeing],” argues Eerke Boiten, senior lecturer in computer science at the University of Kent. “But… telling Facebook explicitly what your line in the sand is on different categories of content is in itself giving Facebook a whole lot of quite high level information that they can use for profiling again.

“Not only could they derive that information from what they already have but it would also help them to fine-tune the information they already have. It works in two directions. It reinforces the profiling, and could be deduced from profiling in the first place.”

“It’s checking their inferred data is accurate,” agrees Paul Bernal, law lecturer at the University of East Anglia. “It’s almost testing their algorithms. ‘We reckon this about you, this is what you say — and this is why we’ve got it wrong’. It can actually, effectively be improving their ability to determine information on people.”

Bernal also makes the point that there could be a difference, in data protection law terms, if Facebook users are directly handing over personal information about content tolerances to Facebook (i.e. when it asks them to tell it) vs such personal information being inferred by Facebook’s indirect tracking of their usage of its platform.

“In data protection terms there is at least some question if they derive information — for example sexuality from our shopping habits — whether that brings into play all of the sensitive personal data rules. If we’ve given it consensually then it’s clearer that they have permission. So again they may be trying to head off issues,” he suggests. “I do see this as being another data-grab, and I do see this as being another way of enriching the value of their own data and testing their algorithms.”

“I’m not on Facebook and this makes it even clearer to me why I’m not on Facebook because it seems to me in particular this is increasing risks and increasing our vulnerability at a time when we should be doing exactly the opposite,” adds Bernal.

Facebook users are able to request to see some of the personal data Facebook holds on them. But, as Boiten points out, this list is by no means complete. “What they give you back is not the full information they have on you,” he tells TechCrunch. “Because some of the tracking they are doing is really more sophisticated than that. I am absolutely, 100 per cent certain that they are hiding stuff in there. They don’t give you the full information even if you ask for it.

“A very simple example of that is that they memorize your search history within Facebook. Even if you delete your Facebook search history it still autocompletes on the basis of your past searches. So I have no doubt whatsoever that Facebook knows more than they are letting on… There remains a complete lack of transparency.”

So it at least seems fair that Facebook could take a shot at inferring users’ content thresholds, based on the mass of personal data it holds on individuals. (One anecdotal example: I recall once seeing a notification float into my News Feed that a Facebook friend had liked a page called “Rough sex”, which would seem to be just the sort of relevant preference signal Facebook could use to auto-determine the types of content thresholds Zuckerberg is talking about, at least for users who have shared enough such signals with it.)

But of course if it did that Facebook would be diving headfirst into some very controversial territory. And underlining exactly how much it knows about you, dear user — and that might come across as especially creepy when paired with a News Feed that’s injecting graphic content into your eyeballs because it thinks that’s what you want to see.

“Given the level at which they’re profiling we shouldn’t tell them anymore,” says Boiten, when asked whether people should feel okay feeding Facebook info about their ‘line in the sand’ — pointing to another controversy that arose last year when it emerged Facebook’s ad capabilities could be used to actively exclude or include people with specific ethnic affinities (aka ‘racial profiling’).

“If they make the advances in understanding of natural language content — the AI slant that Zuckerberg’s [blog post] promises, probably unrealistically, but nevertheless — if they get that sort of advantage then blimey they’re going to know an awful lot more than they already do,” he adds.

“You can bet that they’re going to be profiling people based on their standards settings in this way,” adds Bernal. “That’s how it works, and then they aggregate it and they’ll be using it — I bet — to target their advertising and so on. It is more total information management. The more they can get, and the more granular those personal controls get the more information they’re picking up.

“And I do think it’s disingenuous in that Zuckerberg’s post is not mentioning any of this.”

While it’s not yet clear exactly how (or when) these content settings will be implemented, the structure sketched out by Zuckerberg already looks pretty problematic — given that Facebook users who do not want to share any additional sensitive signals with the ad-targeting giant will be forced to tolerate their peers’ predilections.

Which immediately puts pressure on users to confess their content likes/dislikes to Facebook in order to avoid this ‘hell is other people’s tastes’ bind — i.e. in order to not be subject to the preferences of a local median. And to avoid being tainted by association of the types of content showing up (or not showing up) in their News Feed. After all, the Facebook News Feed is inherently individual — so there’s a risk of the character of the content in a user’s Feed being assumed to be a reflection of their personal tastes.

So by not telling Facebook anything about your content thresholds you’re put into a default corner of telling Facebook you’re okay with whatever the regional average is okay with, content-wise. And that may be the opposite of okay for you.

“I think there’s another little trap here that they’ve done before,” continues Bernal. “When you make controls granular it looks as if you’re giving people control — but actually people generally get bored and don’t bother changing anything. So you can say you’ve given people control, and now it’s all much better — but in general they don’t use it. The few people who do are the few people who would understand it and get round it anyway. It will be very interesting to see to what extent people actually use it.”

Such a majority rule system could also be at risk of being gamed by — let’s say — mischievous 4Channers banding together and working to get graphic boundaries opened up in a region where more conservative standards are the norm.

“I can see people gaming this kind of system — in the way that all kinds of online polls and referenda are gamed, somebody will work out the way to get the systems set the way they want… There are all kinds of possibilities,” argues Bernal. “There’s also a danger of ‘community leaders’ taking some degree of control; recommending people particular settings. I’m wary of Zuckerberg ending up doing this so you have standards for particular kinds of people, so you ‘choose’ the standards that someone else has effectively chosen for you.”

A lot will depend on the implementation of the content controls, certainly. But consider how easily Facebook’s trending news section — not to mention its News Feed in general — has been shown to be vulnerable to manipulation (by purveyors of clickbait, fake news etc). That suggests there could well be a risk of content settings being turned on their head, ending up causing more offense than they were intended to prevent.

Another point Bernal makes is that shifting some of the responsibility for the types of content being shown onto users implicitly shifts some of the blame away from Facebook when controversies inexorably arise. So, basically: see something you don’t like in your News Feed in future? Well, that’s YOUR fault now! Either you didn’t set your Facebook content settings correctly. Or you didn’t set any at all… Tsk!

In other words, Facebook gets to deflect objections to the type of content its algorithms are shunting into the News Feeds of users all over the world as a ‘settings configuration’ issue — sidestepping having to address the more systemic and fundamental flaw embedded into the design of the Facebook product: aka the filter bubble issue.

Facebook has long been accused of encouraging a narrowing of personal perspective via its user-engagement focused content hierarchies. And Zuckerberg’s blog post has a fair amount of fuzzy thinking on filter bubbles, as you might expect from the chief of an engagement-algorithm-driven content distribution machine. But — for all his talk of “building global community” — he offers no clear fix for how Facebook can help break users out of the AI-enabled, navel-gazing circles its business model creates.

Yet a very simple fix for this does exist — which would be to make a chronological News Feed of friends’ posts the default, rather than the current algorithmically powered one. Facebook users can manually switch to a chronological feed but the option is tricky to find, and clearly actively discouraged: the choice gets reset back to the AI Feed either per session or very soon after. In short, the choice barely exists.

The root problem here of course is that Facebook’s business benefits massively from algorithmically engaged users. So there’s zero chance Zuck is going to announce it’s abandoning such a lucrative and (thus far) scalable default. So his solitary claim in the essay to “worry” about fake news and filter bubbles rings very hollow indeed.

Indeed, there is also a risk that giving users controls over controversial content could exacerbate the filter bubble effect further. Because a user who can effectively dial down all controversy to zero is probably not going to be encountering news about conflict in Syria, say. It’s going to be a lot easier for them to live inside a padded Facebook stream populated with cute photos of babies and kittens. News? What news? Awwww, how purdy!

And while that might make a pleasing experience for individuals who want to disengage from wider global realities, it’s reductive for society as a whole if lots of people start retreating into rose-tinted filter bubbles. (Dialing up hateful content, should that also be possible via the future Facebook content filters, would also obviously likely have a deleterious and divisive societal impact.)

The point is, giving people easy opt-outs for types of content that might push them outside their comfort zone — and force them to confront unfamiliar ideas or encounter a different or difficult perspective — just offers a self-enabled filter bubble, on top of the algorithmic one Facebook users are already pushed inside by the platform’s default settings.

This issue is of rising importance given how many users Facebook has, and how the massively dominant platform has been shown to be increasingly cannibalizing traditional news media; becoming a place people go to get news generally, not just to learn what their friends are up to.

And remember, all this stuff is being discussed in a post where Zuckerberg is seeking to position Facebook as the platform to glue the world together in a “global community” — and at a fractious moment in history. Which would imply giving users the ability to access perspectives far-flung from their own, rather than helping people retreat into reductive digital comfort zones. A multitude of disconnected filter bubbles certainly does not have the ring of ‘global community’ to me.

Another glaring omission in Zuckerberg’s writing is the risk of Facebook’s cache of highly personal (and likely increasingly sensitive) data being misused by overreaching governments seeking to clamp down on particular groups within society.

It’s especially strange for a US CEO to stay silent on this at this point in time, given how social media searches by US customs agents have ramped up following President Trump’s Executive Order on immigration last month. There have also been suggestions that foreigners wanting to enter the US could be forced to hand over their social media passwords to US border agents in future. All of which has very clear and very alarming implications for Facebook users and their Facebook data.

Yet the threat posed to Facebook users by government agencies appropriating accounts to enable highly intrusive, warrantless searches — and presumably go on phishing expeditions for incriminating content, perhaps, in future, as a matter of course for all foreigners traveling to the US — apparently does not merit public consideration by Facebook’s CEO.

Instead, Zuckerberg is calling for more user data, and for increased use of Facebook.

While clearly such calls are driven by the commercial imperatives of his business, the essay is couched as a humanitarian manifesto. So those calls seem either willfully ignorant or recklessly disingenuous.

I’ll leave the last word to Bernal: “The idea that we concentrate all our stuff in one place — both in one online place (i.e. Facebook) and one physical place (i.e. our smartphones), puts us at greater risk when we have governments who are likely to take advantage of those risks. And are actually looking at doing things that will be putting us under pressure. So I think we need to be looking at diversifying, rather than looking at one particular route in.

“Anyone who’s got any sense is not going to be doing anything that’s even slightly risky on Facebook,” he adds. “And should be looking for alternatives. Because while the border guards may know about Facebook and Twitter they’re not going to know about the more obscure systems, and they’re not going to be able to get access to them. So now is actually the time for us to be saying let’s do less Facebook, not more Facebook.”

Featured Image: Bryce Durbin/TechCrunch

Facebook updates Analytics for Apps with improved segmentation and domain-level reporting

Facebook has launched an update to Analytics for Apps, its cross-platform analytics platform for developers who want to track how their users engage with their sites, bots and apps. The promise of Analytics for Apps is that it allows developers to better understand their audience and then use this data to better engage them through push and in-app notifications.

As Facebook announced today, Analytics for Apps users have integrated the service into one million apps, websites and bots since its launch in 2015. Bots are obviously a rather new phenomenon, but the company added support for them into the service about three months ago.

With today’s update, the company is adding two major new features to the service that will make it more useful for the product managers and marketers that may use it inside a company. The first is the ability to compare two customer segments side-by-side. Say you want to see how customers in one country are using a service compared to another country. That wasn’t very easy to do in earlier versions of the service but it now only takes a few clicks to see this data. You can also split data by which device people used, their demographics, etc.

The second new feature focuses on giving users more insight into where their customers actually come from. The service could already track users across desktop and mobile, but it now also reports referring domains, so you can see which sites are sending users your way.

Analytics for Apps is far from the only player in this market, of course. Its competitors include heavyweights like Google Analytics and Google’s Firebase Analytics service as well as Mixpanel and, depending on the use case, Leanplum and others. Given the social networking service at the core of its business, Facebook, of course, promises that it can give its users better insights into their customers’ demographics than its competitors.

Featured Image: Sean Gallup/Getty Images

Learn how to design for privacy and security from Facebook’s Benjamin Strahs

Freshly launched startups often don’t have the funding for a fully formed security team, but a data breach or a privacy overreach can be deadly for a new company. That’s why Facebook security engineer Benjamin Strahs is joining TechCrunch at our D.C. meetup and pitch-off this week: He’ll offer advice to founders about how to bootstrap a secure culture at their companies.

Facebook is a social media company, not a security firm — but, considering the wealth of personal data it holds, security has to be a consideration for everything Facebook does. Facebook has over the past few years rolled out encrypted messaging, secure browsing and new account authentication and recovery methods to make sure users’ data stays safe.

Facebook also routinely tests its own systems for vulnerabilities and invites the public to do the same through its bug bounty program.

But smaller companies don’t always have the financial or engineering resources for new privacy features and security programs — which is why Strahs encourages founders to use open-source frameworks and centralize their risk so they can address it more easily.

Strahs has led education initiatives for his non-technical co-workers, teaching them how to recognize phishing schemes and other suspicious behavior. It’s not just about securing your infrastructure — you have to make sure your employees understand how to keep themselves secure and how to protect user data.

Our conversation with Strahs comes at a time when security and privacy are more urgent than ever — digital surveillance is being debated in court, massive breaches are coming to light and privacy policy changes are spurring backlash. It will be a great discussion, and we hope you’ll join us at the meetup! Get your tickets today.

What Zuck’s letter didn’t say

He might not want to run for office any time soon, but Mark Zuckerberg has perfected the time-honored political art of talking a lot without saying anything.

In a sprawling letter consisting mostly of feel-good mumbo jumbo and a light sprinkling of feature ideas, the Facebook visionary laid out 5,700 words’ worth of nonspecific stuff that sounds nice. Like fellow Facebook feel-gooder Sheryl Sandberg, Zuck was sure to drive home the warm/fuzzy message of his ad-dollar factory, hitting isolated human interest anecdotes with more frequency than the evening news.

While his desire for corporate self-reflection is mildly admirable in an industry loath to introspect, all of those words don’t manage to add up to the sum of their parts. (Seriously, that’s kind of a lot of words.) Of course, Zuck is right that Facebook does have a uniquely huge opportunity to do novel things at massive scale. But what can you really pull off if your number one goal is to avoid rocking a boat full of nearly two billion people?

For a snapshot of what the letter discussed, we broke down a few topics by number of mentions:

  • Social infrastructure: 15
  • Global: 26
  • Politics/Political: 10
  • Trump: 0
  • Harassment: 1
  • Fake news: 1
  • Hoax: 3
  • Propaganda: 1
  • Instagram: 0
  • WhatsApp: 3
  • Encryption: 2

Zuck’s greater good

  • First off, the word “Trump” appears zero times. Zuckerberg’s apolitical treatise predictably makes no mention of the U.S. president, opting instead for palatable allusions to a climate of polarized viewpoints and the looming threat and/or promise of globalization. In lieu of enumerating specific policies, Zuck goes big here. Unfortunately, it’s more “undergrad sociology term paper big” than “compelling big ideas big.”
  • In the letter, Zuckerberg indulges in some weird exceptionalism around this moment in time, abdicating responsibility for Facebook’s role in creating the moment we’re living through. Facebook’s modus operandi has always been to take credit for anything good accomplished on its network and dismiss anything bad as an external phenomenon. This strategy proves advantageous time and time again.
  • Zuck believes in something he calls “our collective values and common humanity,” which appears to be shorthand for a watered-down kind of politics that isn’t about anything at all. He opines about “the vast scale of people’s intrinsic goodness aggregated across our community” without doing much to acknowledge humanity’s aggregate badness.
  • Do the good actions made possible by the platform (Red Cross donations, volunteer mobilization) cancel out the bad (harassment, neo-Nazi affinity groups)? Zuck doesn’t ask this question, nor does he answer it.



  • Facebook wants to expand the power of its predictive safety features using artificial intelligence. That could affect everything from disaster relief to suicide prevention to terrorism.
  • He noted the importance of “protecting individual security and liberty” but did not immediately offer a solution to the risks inherent in this model beyond touting Facebook’s adoption of end-to-end encryption in its chat apps. Encryption will do little to curtail the privacy risk inherent in AI, facial recognition software and other kinds of algorithms that can predict user identity and behavior.

Fake news

  • Zuckerberg suggests that providing a “range of perspectives” will combat fake news and sensational content over time, but this appears to naively assume users have some inherent drive to seek the truth. Most social science suggests that people pursue information that confirms their existing biases, time and time again. Facebook does not have a solution for how to incentivize users to behave otherwise.
  • Even if we’re friends with people like ourselves, Zuckerberg thinks that Facebook feeds display “more diverse content” than newspapers or broadcast news. That’s a big claim, one that seems to embrace Facebook’s identity as a media company, and it’s not backed up by anything at all.
  • Facebook explains that its approach “will focus less on banning misinformation, and more on surfacing additional perspectives and information.” For fear of backlash, Facebook will sit this one out, pretty much.
  • Zuckerberg thinks the real problem is polarization across not only social media but also “groups and communities, including companies, classrooms and juries,” which he clumsily dismisses as “usually unrelated to politics.” Basically, Facebook will reflect the systemic inequities found elsewhere in society and it shouldn’t really be expected to do otherwise.
  • Zuck “[wants] to emphasize that the vast majority of conversations on Facebook are social, not ideological.” By design, so are the vast majority of conversations Facebook has about Facebook. The company continues to be terrified of appearing politically or ideologically aligned.


In one of the only substantive bits, Zuckerberg owns up to some of his platform’s recent shortcomings:

“In the last year, the complexity of the issues we’ve seen has outstripped our existing processes for governing the community. We saw this in errors taking down newsworthy videos related to Black Lives Matter and police violence, and in removing the historical Terror of War photo from Vietnam. We’ve seen this in misclassifying hate speech in political debates in both directions — taking down accounts and content that should be left up and leaving up content that was hateful and should be taken down. Both the number of issues and their cultural importance has increased recently.”

You had us right up until that last sentence.

Facebook’s raison d’être

Sure, it’s nice that Facebook wants to do Good Things, but it’d be nicer if the company didn’t beat us over the head with its own ostensible selflessness. It’s hard to find inspiration in Facebook’s grand global mission when the fact of the matter is that it stands to make a lot of money by expanding into countries it has yet to conquer. This remains the obvious undercurrent behind its global mission to nobly connect everyone everywhere.

This should go without saying, but a lot of users (and plenty of reporters) seem unduly charmed by Facebook’s humanitarian overtures. Facebook engages in a lot of strategic philanthropy, but that doesn’t make its mission philanthropic. Its mission is making money. It’s okay to say that!

While Zuckerberg’s letter is not particularly profound, his message is clear: Facebook, the great equivocator, is truly for everyone. The lowest common denominator of digital socialization works extraordinarily well for no one in particular. By rallying around a nebulous notion of a greater good that flows through humanity at scale (miraculously alienating no one in the process!), Facebook can be some things to all people — and that’s really been its true mission all along.

Perhaps we should all put aside our differences and join hands on the next quarterly earnings call?

Featured Image: Flickr under a CC BY 2.0 license