The NTSB is investigating the ‘structural failure’ of Facebook’s Aquila internet drone

Say what you will about the merits of Facebook’s Internet.org and Free Basics — it’s pretty cool that they’re building a huge, solar-powered, laser-shooting drone to deliver it. But a “structural failure” that occurred on the Aquila’s first test flight may be more serious than Facebook made it out to be: The National Transportation Safety Board is conducting an investigation, Bloomberg reports. The NTSB confirmed this and provided further details.

Facebook wrote about the tests (which occurred on June 28) in July, listing what it was evaluating, what it learned and so on. Under the “Real-world conditions” bullet point, the blog post admits things weren’t entirely nominal:

We are still analyzing the results of the extended test, including a structural failure we experienced just before landing. We hope to share more details on this and other structural tests in the future.

They didn’t, possibly because of the NTSB investigation, but Facebook did issue a statement today emphasizing the positive outcomes of the test:

We were happy with the successful first test flight and were able to verify several performance models and components including aerodynamics, batteries, control systems and crew training, with no major unexpected results.

Really, it was too much to hope that nothing would go wrong on the first full-scale test of an enormous, experimental aircraft design. A source close to the project told TechCrunch that some damage was expected, since the Aquila isn’t actually designed for repeat takeoffs and landings (it has skids, not landing gear), and also because the day was windier than expected. The failure occurred just a few seconds before landing from the craft’s 90-minute flight, the source said.

It’s the NTSB’s prerogative to investigate any airborne troubles like this, and clearly it decided to do so in this case, perhaps because of the high-profile nature of the test and aircraft. But the NTSB wouldn’t get involved if a screw dropped off: a representative explained that it investigates when aircraft weighing 300 pounds or more cause death or serious injury, or incur “substantial damage” — defined as damage that “compromises the airworthiness of the aircraft.”

That said, if the Aquila had nose-dived into the ground, caught fire or sustained some other high-profile damage, that likely would have come out by now. A full report is expected in a month or two, at which point we’ll have more details — but considering the scope of the project and pride evinced by Facebook in the Aquila’s development, it seemed reasonable to, well, clip its wings a little bit.

(This article has been substantially updated from its original form.)

Why now, more than ever, we need a Twitter that works

Rishi Garg, Crunch Network Contributor

Rishi Garg is a partner at Mayfield focusing on the consumer sector. He was VP of Corporate Development and Strategy at Twitter and ran similar teams at Square and MTV Networks.

We just witnessed, to paraphrase John Oliver, the 2016 U.S. Presidential Election Dumpster Fire F**ktacular. Much of it took place on social media, and much is being written of late about how Facebook and Twitter have changed virtually everything that we’ve come to see as a normal part of an election cycle: the consumption of daily news and facts (or non-facts).

Remember 2012, when we called it the “Twitter Election” due to Obama’s skill in exploiting the fledgling platform? It seems almost quaint now. 2016 will be remembered as the election where the power of Twitter was revealed in full as a platform that can enable an individual with little formal organization to bypass media institutions and speak directly to a populace all the way to the presidency.

In Sunday’s 60 Minutes interview, Trump could hardly contain his self-satisfaction regarding how well Twitter and Facebook had served him.

Why was exploiting Twitter such an advantage for Trump? Because Twitter is an unusually powerful media platform, providing an unprecedented ability to reach anyone in the world, with far less friction than ever before.

That ability points to a deep responsibility for Twitter and Facebook, among others, to acknowledge their roles as arbiters of information that can have historical implications. And it’s possible Twitter and Facebook aren’t up to the task — there is a need for a responsible social network, and if they can’t do it, someone else has to.

The power of smush

Consider how much has changed in nine short years. Until 2007, if you wanted to build an audience and sell your ideas to millions of viewers, you needed to raise money to build a cable network or a printing office, hire a sales force to sell ads or hire a bunch of Columbia journalism or USC film grads to produce content. You needed to buy cameras and pay business affairs people and all manner of other things.

Not anymore.

Twitter has given anyone and everyone a direct voice to the world by smushing together three things that in traditional media have been mostly separate: distribution, or a platform to connect the world (formerly Comcast or Dish; today the completely commoditized mobile phone and internet); application (formerly the 30-minute video-on-a-screen-plus-ads model; today the pixels that comprise an app like Twitter or Facebook); and content (formerly Rachel Maddow’s show; today tweets and Facebook posts).

The implications of this are profound. First, Twitter owns its own distribution. Because it’s a network, every new user makes the platform that much more essential, whether you’re one of the Monthly Active Users that Twitter is so often maligned for not adding fast enough — or whether you’re one of the hundreds of millions of people who are consuming news elsewhere about what’s happening on Twitter. Traditional media amplification of Twitter content doesn’t get Wall Street excited, but it nonetheless solidifies Twitter’s place in the world. Prior purveyors of the pipes to consumers (like cable companies) had neither the leverage nor the capabilities to play the role of content arbiter with subscribers.

Next, because the application is owned by Twitter, the company has as much power in determining how people consume and create that content as the earliest creators of movie cameras, network television and production houses had in the 1930s when TV was just becoming a thing.

We take these now-“traditional” formats as gospel, but they were merely invented by normal people in charge of the early medium, and they persist to this day. Similarly, Twitter’s founders (and the communities they enable), by intentionally designing the user experience that we all consume, shape what gets out of our phones and into our brains. More than ever before, the medium shapes the message.

And finally there’s the content: free, live, visceral, formatted specifically for this application and ready to deliver 140 characters, photos, videos, Vines (RIP), etc. of noteworthy and not-so-noteworthy stuff right to your phone, computer or (Apple) TV.

So, for the first time, these new platforms combine Comcast, Sony and MTV into a single, powerful platform controlled by one team. Gatekeepers are removed at every step, and anyone can participate. That’s why owning a share of Twitter (or Facebook, Snapchat or Instagram for that matter) is like owning a share of an entire media ecosystem, indeed like owning the whole cable industry, not merely a single company or player within it.

I don’t want (or need) my MTV

Historically, every new media platform since Gutenberg invented movable type (books, newspapers, magazines, radio and television) has followed the same arc: the first major media start broad, attempting to bring as many people as possible to the new platform (like the original “Big 3” broadcast networks). Then, over time, once people are used to the platform, verticalization occurs to super-serve certain demographics or constituencies, like Nickelodeon serving kids or MTV serving teens. People self-select into the various cable channels, but the major broadcast networks remain to serve a mainstream audience a more diverse set of perspectives.

But with social media you have an added dynamic: personalization. No one person’s Facebook or Twitter feed looks like anyone else’s. There’s little room for an MTV to come and super-serve the teen demo on Twitter: Teens usually do it themselves by creating a follower graph that serves their needs. If they can’t find what they’re looking for, they switch applications entirely (hence, in part, the rise of Snapchat).

This means that whether users actively want access to a very limited stream of information, or if they simply engage more with a certain kind of content, they end up with a far narrower information stream.

My friends from my poor hometown in South Texas see a very different Facebook News Feed than I do, even without active curation, because they engage with only a certain kind of story and, in turn, Facebook reinforces that interest. While Twitter’s real-time stream invites some amount of surprise and diversity into the experience, it’s nothing like knowing that we all saw the same nightly CBS/NBC/ABC news show, regardless of geography.

Taming the bird

So Twitter and Facebook today are wholly owned news and information platforms, praying mainly at the altar of increased engagement, with personalized, increasingly limited information streams, no embedded gatekeepers and completely open participation.

The 2016 election and Trump’s victory in part show how powerful this democratization can be, because someone who masters Twitter can influence public discourse as much as or more than the highest-rated TV network. Fox News draws 1-2 million total viewers on its best day; Trump’s Twitter account reaches 15.3 million people every time he says something. This kind of reach makes a platform like Twitter very hard to unseat.

But when we rely on the community of users and followers to be the gatekeepers, to take the place of all those trained and experienced folks who have made up the media institutions that we’ve relied on for so long, we can also see what’s missing — and the consequences of omission are severe. As many have noted, conversation about the election was deeply marred by the rapid, unfiltered sharing of falsehoods and misinformation. Because most people today get their news from social networks, this is deeply troubling.

The platforms, Twitter and Facebook among them, have to take responsibility — because claiming neutrality at this stage, hiding behind the “technology company” label, when they represent such platform-level power, is absurd. It comes down to two things:

  1. Trust and safety remain Twitter’s Achilles’ heel. Limiting the worst parts of Twitter doesn’t limit free speech; it enables it by providing a safe space for dialogue. Twitter knows it needs to get this right; now that the role of Twitter as a veritable utility is self-evident, let’s hope they do it. Del Harvey’s announcement Tuesday is progress, but it’s far from adequate, essentially requiring the targets of abuse to turn the other cheek. There’s much more work to do here.
  2. Simple means, built into the core product, of promoting truth and validation: Besides an active retweet, there’s hardly a way to indicate that a piece of content is false, or to signal a question about the content’s validity. Ben Thompson at Stratechery in his excellent Election Day post stated well how the problem is even worse on Facebook than it is today on Twitter:

“…the News Feed algorithm is a big reason why Facebook Squashed Twitter. Giving people what they want to see will always draw more attention than making them work for it, in rather the same way that making up news is cheaper and more profitable than actually reporting the truth.

And yet it is Twitter that has reaffirmed itself as the most powerful antidote to Facebook’s algorithm: misinformation certainly spreads via a tweet, but truth follows unusually quickly; thanks to the power of retweets and quoted tweets, both are far more inescapable than they are on Facebook.”

A new social platform?

Could emerging entrepreneurs create a new social platform that combines the expression, creativity and easy trading of social currency that we see from Facebook and Twitter, but with an eye to more thoughtful discourse? Something that captures the clear need we have as citizens to develop and express identities around news and information, but with built-in means to edit bad information and non-constructive conversation?

Even given the enormous network effects inherent in the major social media platforms, creating a new one isn’t quite as ludicrous as it may sound. If Facebook and Twitter are, indeed, new whole-cloth media platforms, then we’ve witnessed the creation of no less than seven such global platforms with more than 100 million users (and in some cases, a billion or more users) in the last decade alone: Facebook, Twitter, Instagram, Snapchat, WhatsApp, Telegram and Pinterest. And I’m not even including the massive Asian platforms, such as WeChat and LINE.

When big challenges present themselves, we look to both the established players and emerging entrepreneurs to take big risks to make things better. I’m excited to see how this moment in time galvanizes entrepreneurial attention on making social networks work in our new world.

Facebook plans to boost UK headcount by 50% as gov’t signals corporate tax rate cut

Facebook has followed Google’s lead by trumpeting plans to expand its presence in the UK — despite ongoing uncertainty over the impact of this summer’s Brexit vote for the country to leave the European Union.

Speaking at the annual CBI conference in London today, Facebook’s Nicola Mendelsohn, VP EMEA, announced plans for the social network to increase its UK headcount by 50 per cent by the end of 2017, and open a new HQ in the country.

Mendelsohn said the aim is to grow headcount from 1,000 to 1,500 by then — with “many” of the new jobs touted as “high skilled engineering jobs”.

“We came to London in 2007 with just a handful of people, by the end of next year we will have opened a new HQ and plan to employ 1,500 people. Many of those new roles will be high skilled engineering jobs as the UK is home to our largest engineering base outside of the US and is where we have developed new products like Workplace,” she said, also noting the company’s presence in Somerset — where its Aquila facility is working on designing and building solar-powered unmanned planes to bring connectivity to remote regions.

It’s not clear exactly what proportion of the additional jobs would be engineering roles vs other jobs such as sales. We asked but the company declined to provide any further details.

Facebook’s announcement of an intention to increase UK headcount follows Google’s UK-focused publicity last week when the company re-announced a long planned expansion of its London campus — couching the move as a continued commitment to the UK in spite of Brexit.

Reporters were told that the capacity of Google’s new London HQ is 7,000 vs the 4,000 of its current building — with the implication being the company could employ 3,000 more staff in the UK by 2020. Assuming, that is, business conditions in the UK prove favorable — with CEO Sundar Pichai talking about the ‘absolute’ importance of open borders and free movement for skilled migrants. Two things that, absolutely, cannot be guaranteed, given the UK’s impending Brexit. So quite how many of those potential 3,000 additional Google UK jobs end up existing remains to be seen — like so many things affected by Brexit.

Facebook’s UK expansion plans don’t mention any specific caveats or conditions for the company to grow headcount in the country. But in related PR it also makes a point of referencing its mission to “make the world more open and connected”. Which reads like a not-so-subtle argument for the UK government to push for a ‘soft Brexit’, rather than the tough on immigration rhetoric of the hard Brexiteers.

Especially as a “plan” to add an additional 500 jobs is in no way an irreversible guarantee. So again, it remains to be seen how many of the extra Facebook jobs survive the looming Brexit negotiations.

UK Prime Minister Theresa May has said she intends to trigger the start of the two-year negotiation process to leave the EU by the end of March 2017.

Also speaking at the CBI conference today, the Prime Minister announced a series of business-friendly measures aimed at pouring some oil on the troubled waters of Brexit — including a government funding boost for R&D worth £2BN per year by 2020; and a review of the UK’s corporate tax rate, suggesting it could move to substantially cut the rate below the current 20 per cent. (Albeit, such a move could in fact complicate the UK’s Brexit negotiations — given it would likely be viewed as a hostile move by EU governments.)

Also on the table: a possible boost for R&D tax credits to further support businesses conducting research in the UK.

May also announced a new Industrial Strategy Challenge Fund, overseen by UK Research and Innovation and funded by some of the £2BN R&D boost — aimed at supporting the commercialization of what the government is dubbing “priority technologies”, such as robotics, biotechnology and AI.

Other emerging fields that could benefit from the new fund’s support include medical technology, satellites, advanced materials manufacturing and “other areas where the UK has a proven scientific strength and there is a significant economic opportunity for commercialisation”.

Facebook’s fake news problem and fantasy sports: Listen to TCBC 9 with Jordan Crook

Facebook’s issues with viral false news reports dominated headlines this week, so naturally it came up as a key topic of discussion when I spoke to TechCrunch’s special projects editor and internet culture reporter Jordan Crook on this week’s episode. Talking things through made the sheer scope of the problem very apparent.

We also cover the union of FanDuel and DraftKings into a single online fantasy sports betting platform powerhouse, since Jordan’s a big fan of fantasy sports (I’ll stick to just LOTR-style fantasy, thanks very much). The issue isn’t really whether pairing up is better for either company; it’s the nature of the business model itself, and whether there’s something ethically unsettling about the whole proposition.

Fair warning: this is a pretty heavy episode, because we’re all still feeling a little raw after the U.S. election. But it’s honest, which is more than you can say for a lot of headlines that got plenty of shares during the election.

You can listen via the stream embedded above, or check us out and subscribe on iTunes (and leave a review), or in your podcast player of choice.

Zuckerberg reveals plans to address misinformation on Facebook

Facebook’s fake news problem persists, CEO Mark Zuckerberg acknowledged last night.

He’d been dismissive about the reach of misinformation on Facebook, saying that false news accounted for less than one percent of all the posts on the social media network. But a slew of media reports this week have demonstrated that, although fake posts may not make up the bulk of the content on Facebook, they spread like wildfire — and Facebook has a responsibility to address it.

“We’ve made significant progress, but there is more work to be done,” Zuckerberg wrote, outlining several ways to address what he called a technically and philosophically complicated problem. He proposed stronger machine learning to detect misinformation, easier user reporting and content warnings for fake stories, while noting that Facebook has already taken action to eliminate fake news sites from its ad program.

The firestorm over misinformation on Facebook began with a particularly outrageous headline: “FBI Agent Suspected in Hillary Email Leaks Found Dead.”

The false story led to accusations that Facebook had tipped the election in Donald Trump’s favor by turning a blind eye to the flood of fake stories trending on its platform. The story, which ran just days before the election on a site for a made-up publication called Denver Guardian, suggests that Clinton plotted the murders of an imaginary agent and his imaginary wife, then tried to cover it up as an act of domestic violence. It was shared more than 568,000 times.

The Denver Guardian story caused a crisis at Facebook, and it hasn’t gone away. Last night, the story appeared yet again in a friend’s News Feed. “BREAKING,” the post blared. “FBI AGENT & HIS WIFE FOUND DEAD After Being ACCUSED of LEAKING HILLARY’s EMAILS.” This time, the story was hosted by a site called Viral Liberty. Beneath the headline is a button encouraging Facebook users to share the story, and according to Facebook’s own data, it’s been shared 127,680 times.

Facebook isn’t alone. Google and Twitter grapple with similar problems and have mistakenly allowed fake stories to rise to prominence as well. And although stories about the rise of fake news online have focused primarily on pro-Trump propaganda, the sharing-without-reading epidemic exists in liberal circles too — several of my Facebook friends recently shared an article by the New Yorker‘s satirist Andy Borowitz titled “Trump Confirms That He Just Googled Obamacare” as if it were fact, celebrating in their posts that Trump may not dismantle the Affordable Care Act after all his campaign promises to the contrary.

But, as the hub where 44 percent of Americans read their news, Facebook bears a unique responsibility to address the problem. According to former Facebook employees and contractors, the company struggles with fake news because its culture prioritizes engineering over everything else and because it failed to build its news apparatus to recognize and prioritize reliable sources.

Facebook’s media troubles began this spring, when a contractor on its Trending Topics team told Gizmodo that the site was biased against conservative media outlets. To escape allegations of bias, Facebook fired the team of journalists who vetted and wrote Trending Topics blurbs and turned the feature over to an algorithm, which quickly began promoting fake stories from sites designed to churn out incendiary election stories and convert them into quick cash.

It’s not a surprise that Trending Topics went so wrong, so quickly — according to Adam Schrader, a former writer for Trending Topics, the tool pulled its hashtagged titles from Wikipedia, a source with its own struggles with the truth.

Mark Zuckerberg is the front page editor of every newspaper in the world.

— Antonio Garcia-Martinez

“The topics would pop up into the review tool by name, with no description. It was generated from a Wikipedia topic ID, essentially. If a Wikipedia topic was frequently discussed in the news or Facebook, it would pop up into the review tool,” Schrader explained.

From there, he and the other Trending Topics writers would scan through news stories and Facebook posts to determine why the topic was trending. Part of the job was to determine whether the story was true — in Facebook’s jargon, to determine whether a “real world event” had occurred. If the story was real, the writer would then draft a short description and choose an article to feature. If the topic didn’t have a Wikipedia page yet, the writers had the ability to override the tool and write their own title for the post.

Human intervention was necessary at several steps of the process — and it’s easy to see how Trending Topics broke down when humans were removed from the system. Without a journalist to determine whether a “real world event” had occurred and to choose a reputable news story to feature in the Topic, Facebook’s algorithm is barely more than a Wikipedia-scraping bot, susceptible to exploitation by fake news sites.
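The workflow Schrader describes — topics surfacing by Wikipedia ID, a human confirming a “real world event,” then writing the blurb — can be sketched as a simple review pipeline. This is a hypothetical reconstruction for illustration; the class and function names and the trending threshold are invented, not Facebook’s internal tooling:

```python
# Hypothetical sketch of a Trending Topics-style review queue, as described
# above: topics surface by Wikipedia topic ID with no description, and a
# human reviewer must confirm a "real world event" occurred before the
# topic is published with a title and a featured article.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Topic:
    wikipedia_id: str          # the only identifier the review tool surfaces
    mention_count: int         # how often the topic appears in news and posts

@dataclass
class PublishedTopic:
    wikipedia_id: str
    title: str                 # written by the human reviewer
    featured_article: str      # reputable story chosen by the reviewer

def review_queue(topics: list[Topic],
                 is_real_world_event: Callable[[Topic], bool],
                 write_blurb: Callable[[Topic], PublishedTopic],
                 threshold: int = 1000) -> list[PublishedTopic]:
    """Only frequently mentioned topics enter review; only topics a human
    confirms as real events get published."""
    published = []
    for topic in topics:
        if topic.mention_count < threshold:
            continue                      # not trending enough to review
        if not is_real_world_event(topic):
            continue                      # hoax or fabricated story: dropped
        published.append(write_blurb(topic))
    return published
```

Remove the `is_real_world_event` check, as Facebook effectively did when it fired the review team, and the pipeline publishes whatever trends — which is how the Wikipedia-scraping bot behavior described above comes about.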

But the idea of using editorial judgement made Facebook executives uncomfortable, and ultimately Schrader and his co-workers lost their jobs.

“[Facebook] and Google and everyone else have been hiding behind mathematics. They’re allergic to becoming a media company. They don’t want to deal with it,” former Facebook product manager and author of Chaos Monkeys Antonio Garcia-Martinez told TechCrunch. “An engineering-first culture is completely antithetical to a media company.”

Of course, Facebook doesn’t want to be a media company. Facebook would say it’s a technology company, with no editorial voice. Now that the Trending editors are gone, the only content Facebook produces is code.

But Facebook is a media company, Garcia-Martinez and Schrader argue.

“Facebook, whether it says it is or it isn’t, is a media company. They have an obligation to provide legit information,” Schrader told me. “They should take actions that make their product cleaner and better for people who use Facebook as a news consumption tool.”

Garcia-Martinez agreed. “The New York Times has a front page editor, who arranges the front page. That’s what New York Times readers read every day — what the front page editor chooses for them. Now Mark Zuckerberg is the front page editor of every newspaper in the world. He has the job but he doesn’t want it,” he said.

Zuckerberg is resistant to this role, writing last night that he preferred to leave complex decisions about the accuracy of Facebook content in the hands of his users. “We do not want to be arbiters of truth ourselves, but instead rely on our community and trusted third parties,” he wrote. “We have relied on our community to help us understand what is fake and what is not. Anyone on Facebook can report any link as false, and we use signals from those reports along with a number of others — like people sharing links to myth-busting sites such as Snopes — to understand which stories we can confidently classify as misinformation.”

However, Facebook’s reliance on crowd-sourced truth from its users and from sites like Wikipedia will only take the company halfway to the truth. Zuckerberg also acknowledges that Facebook can and should do more.
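The aggregation Zuckerberg describes — combining user reports with other signals, like shares linking to myth-busting sites — can be sketched roughly as follows. This is a hypothetical illustration, not Facebook’s actual system; the function names, signal weights and threshold are all invented:

```python
# Hypothetical aggregation of crowd-sourced misinformation signals, loosely
# following the description above: user "false" reports plus shares that
# link to myth-busting sites such as Snopes. Weights and threshold are
# invented for illustration only.

def misinformation_score(total_shares: int,
                         false_reports: int,
                         debunk_link_shares: int) -> float:
    """Return a score in [0, 1]; higher means more likely misinformation."""
    if total_shares == 0:
        return 0.0
    report_rate = false_reports / total_shares
    debunk_rate = debunk_link_shares / total_shares
    # Weight debunk-link shares more heavily: sharing a Snopes rebuttal is
    # a stronger signal than a one-click "false" report.
    score = 0.4 * report_rate + 0.6 * debunk_rate
    return min(score, 1.0)

def classify_as_misinformation(total_shares: int,
                               false_reports: int,
                               debunk_link_shares: int,
                               threshold: float = 0.3) -> bool:
    """Stories above the threshold are confidently classified as fake."""
    return misinformation_score(total_shares, false_reports,
                                debunk_link_shares) >= threshold
```

A real system would of course use many more signals and learned weights rather than hand-picked constants, but the shape of the problem — turning noisy, crowd-sourced reports into a confidence score — is the same.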

Change the algorithm

“There’s definitely things Facebook could do to, if not solve the problem, at least mitigate it,” Garcia-Martinez said, highlighting his former work on ad quality and the massive moderation system Facebook uses to remove images and posts that violate its community guidelines.

To cut back on misinformation, he explains, “You could effectively change distribution at the algorithmic level so they don’t get the engagement that they do.”
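The kind of distribution change Garcia-Martinez describes can be sketched simply: flagged stories are not removed, but their ranking score is discounted so they surface less and earn less engagement. This is an illustrative sketch only; the `Story` structure and the penalty factor are invented, not Facebook’s actual ranking model:

```python
# Hypothetical sketch of "changing distribution at the algorithmic level":
# stories flagged as likely misinformation stay on the platform but have
# their ranking score multiplied by a penalty, so they appear lower in the
# feed. The penalty factor is invented for illustration.

from dataclasses import dataclass

@dataclass
class Story:
    url: str
    engagement_score: float    # the feed's usual ranking signal
    flagged: bool              # classified as likely misinformation

def rank_feed(stories: list[Story], penalty: float = 0.2) -> list[Story]:
    """Sort stories by engagement, discounting flagged ones by `penalty`."""
    def effective_score(s: Story) -> float:
        return s.engagement_score * (penalty if s.flagged else 1.0)
    return sorted(stories, key=effective_score, reverse=True)
```

The appeal of this approach, as the article notes, is that it fits an engineering-first culture: no one has to rule on truth directly, only adjust a score. Its danger is the same opacity discussed below the quote.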

This kind of technical solution is most likely to get traction in Facebook’s engineering-first culture, and Zuckerberg says the work is already underway. “The most important thing we can do is improve our ability to classify misinformation. This means better technical systems to detect what people will flag as false before they do it themselves,” he wrote.

This kind of algorithmic tweaking is already popular at Google and other major companies as a way to moderate content. But, in pursuing a strictly technical response, Facebook risks becoming an opaque censor. Legitimate content can vanish into the void, and when users protest, the only response they’re likely to get is, “Oops, there was some kind of error in the algorithm.”

Zuckerberg is rightfully wary of this. “We need to be careful not to discourage sharing of opinions or to mistakenly restrict accurate content,” he said.

Improve the user interface

Mike Caulfield, the director of blended and networked learning at Washington State University Vancouver, has critiqued Facebook’s misinformation problem. He writes that sharing fake news on Facebook isn’t a passive act — rather, it trains us to believe the things we share are true.

“Early Facebook trained you to remember birthdays and share photos, and to some extent this trained you to be a better person, or in any case the sort of person you desired to be,” Caulfield said, adding:

The process that Facebook currently encourages, on the other hand, of looking at these short cards of news stories and forcing you to immediately decide whether to support or not support them trains people to be extremists. It takes a moment of ambivalence or nuance, and by design pushes the reader to go deeper into their support for whatever theory or argument they are staring at. When you consider that people are being trained in this way by Facebook for hours each day, that should scare the living daylights out of you.

When users look at articles in their News Feed today, Caulfield notes, they see prompts encouraging them to Like, Share, Comment — but nothing suggesting that they Read.

Caulfield suggests that Facebook place more emphasis on the domain name of the news source, rather than solely focusing on the name of the friend who shares the story. Facebook could also improve by driving readers to actually engage with the stories rather than simply reacting to them without reading, but as Caulfield notes, Facebook’s business model is all about keeping you locked into News Feed and not exiting to other sites.

Caulfield’s suggestions for an overhaul of the way articles appear in News Feed are powerful, but Facebook is more likely to make small tweaks than major changes. A compromise might be to label or flag fake news as such when it appears in the News Feed, and Zuckerberg says this is a strategy Facebook is considering.

“We are exploring labeling stories that have been flagged as false by third parties or our community, and showing warnings when people read or share them,” he said.

It’s a strategy that sources tell me is being considered not just at Facebook but at other social networks, but risk-averse tech giants are hesitant to slap a “FAKE” label on a news story. What if they get it wrong? And what about stories like Borowitz’s satire — should the story be called out as false, or merely a joke? And what if a news story from a legitimate publisher turns out to contain inaccuracies? Facebook, Google, Twitter, and others will be painted into a corner, forced to decide what percentage of the information in a story can be false before it’s blocked, downgraded, or marked with a warning label.

Fact-checking Instant Articles

Like the fight against spam, clickbait, and other undesirable content, the war against misinformation on platforms like Google and Facebook is a game of whack-a-mole. But both companies have built their own interfaces for news — Accelerated Mobile Pages and Instant Articles — and they could proactively counter fake stories in those spaces.

AMP and Instant Articles are open platforms, so fake news publishers are welcome to join and distribute their content. But the companies’ control over these spaces gives them an opportunity to detect fake news early.

Google and Facebook both have a unique opportunity to fact-check within AMP and Instant Articles — they could place annotations over certain parts of a news story in the style of News Genius to point out inaccuracies, or include links to other articles offering counterpoints and fact-checks.

Zuckerberg wasn’t clear about what third-party verification of the news on Facebook would look like, saying only, “There are many respected fact checking organizations and, while we have reached out to some, we plan to learn from many more.”

Bringing third-party vetting back into the picture means a return to the kind of human oversight Facebook had in its Trending Topics team. Although Facebook has made clear it wants to leave complex decisions up to its algorithms, the plummeting quality of Trending Topics makes it clear that the algorithm isn’t ready yet.

“I don’t think Trending ever had a problem with fake news or biases necessarily, before the Gizmodo article or after. All the problems were after the team was let go,” Schrader said, noting that Facebook intended to incorporate machine learning into Trending Topics but needed human input to guide and train the algorithm.

Engineers working on machine learning have told me they estimate it would take a dedicated team more than a year to train an algorithm to properly do the work Facebook is attempting with Trending Topics.

Appoint a public editor

Zuckerberg did acknowledge that perhaps Facebook can learn something from journalists like Schrader after all. “We will continue to work with journalists and others in the news industry to get their input, in particular, to better understand their fact checking systems and learn from them,” he said.

But the media certainly isn’t perfect. Sometimes we get our facts wrong, and the results can range from comical to disastrous. In 2004, The New York Times published a note questioning its own reporting on several factually inaccurate stories that helped build the case for the war in Iraq. As journalists sometimes make mistakes, so will Facebook. And when that happens, Facebook should address the errors.

“In a small back door sort of way, it will adopt some of the protocols of a media company,” Garcia-Martinez says of Facebook. One suggestion: “Get a public editor like the New York Times.”

The public editor serves as a liaison between a paper and its readers, and provides answers about the reporting and what could have been done better.

In his late-night Facebook posts, Zuckerberg has already somewhat assumed this role. But an individual with more independence could help Facebook learn and grow.

“They are going to get a lot better about this business of editorship,” Garcia-Martinez predicts. “When the stakes are American democracy, saying, ‘We’re not a media company,’ is not good enough.”

Facebook authorizes a $6B stock buyback

Facebook today said it is authorizing a $6 billion stock buyback that will go into effect in the first quarter of next year.

Facebook in its last earnings call also said that its growth would likely slow as a result of the company reaching its maximum advertising load. While Facebook has historically grown at a very fast clip, the company is now in a position where it needs to find additional ways to create value for investors beyond simply expanding its user base and gathering more eyeballs to put ads in front of.

For Facebook, keeping control of the company doesn’t seem to be an issue. Earlier this year, Facebook issued a new class of stock that essentially keeps Mark Zuckerberg in control, enabling him to outmaneuver any heavy pressure from Wall Street. That means Facebook can continue to make long-term plays, even if that is to the chagrin of some industry watchers and investors. Share repurchases, however, can be useful for reducing the number of outstanding shares.

Companies can authorize a share repurchase for a number of reasons. For one, it represents an opportunity for the company to return value to shareholders (it could also issue a dividend), and investors may be agitating for the company to do something to build goodwill with Wall Street. A near-term buyback can also buy Facebook time, keeping pressure off the company while it invests in longer-term plays, such as growing its other platforms like Instagram and WhatsApp and investing in virtual reality.

In fact, Facebook pretty much spells that out in its filing with the SEC:

“The timing and actual number of shares repurchased will depend on a variety of factors, including price, general business and market conditions, and alternative investment opportunities,” the company said in its filing. “The program will be executed consistent with the Company’s capital allocation strategy of prioritizing investment to grow the business over the long term.”

In some cases, share buybacks may even be the result of pressure from investors. Around 2013, for example, activist investor Carl Icahn pressured Apple to buy back more shares in an effort to return value to shareholders. Facebook shares are up around 9 percent year-over-year, but the stock hasn’t seen the kind of growth it posted historically (shares are up almost 60 percent over the past two years).

Facebook has been sitting on a large cash pile. With $26 billion sitting in the bank, investors may be impatient with the company’s use of that cash even while it invests heavily in growth and research and development. This is a perpetual optics issue with Apple, which has amassed a cash pile of more than $200 billion.

Facebook, which saw its shares decline after its most recent earnings, may have an opportunity to kill two birds with one stone as it picks shares back up at a lower price. There’s no specific schedule for the buyback, so the company can essentially repurchase shares whenever it wants. Following the announcement, shares of Facebook were up around 2 percent, though they have since given back those gains and are now largely unchanged.

In a separate filing, the company also said that its chief accounting officer, Jas Athwal, would be leaving after nearly nine years at the company.

Featured Image: Justin Sullivan/Getty Images

Weekly Roundup: Facebook’s fake news, MacBook Pro reviewed, first human CRISPR-ed

Facebook’s fake news frenzy continued as the company came under scrutiny for its debated influence on the U.S. election, LinkedIn was blocked in Russia and a person was treated with CRISPR technology for the first time. Also, the human species has about 1,000 years left according to Stephen Hawking. But do we deserve survival? The existence of Coca-Cola’s selfie bottle points toward no.

1. Mark Zuckerberg published a response to accusations that fake news on Facebook influenced the outcome of the U.S. election. The Facebook CEO claims that at least 99 percent of news content on Facebook was “authentic.” However, many still argue that Facebook has locked users inside of an echo chamber. 

2. Apple and U.S. auto sales could suffer a setback if President-elect Donald Trump takes action on his pre-election comments about global trade. Back in September, Trump said he would impose a 45 percent tariff on imports from China. And now the country is threatening to squeeze iPhone sales if a trade war comes to be. 

MacBook Pro

3. A full four years after the last major upgrade, the new MacBook Pro is finally here. It’s slimmer and lighter than its predecessor, and has a new Touch Bar and a larger trackpad.

4. Chinese scientists injected a human being with cells genetically edited using CRISPR-Cas9 technology. This is the first time CRISPR has been used on a fully formed adult human, and scientists are hoping that this will help their patient fend off a deadly type of lung cancer.

5. Snap Inc. appears to be moving forward in its plans to go public early next year. The company reportedly filed confidentially for its massive IPO. Snap is already targeting as much as $1 billion in revenue for 2017. It has 150 million daily active users and has rapidly become one of the most enticing new advertising platforms for marketers. Snapchat also continued selling its Spectacles glasses to the public in the most millennial way possible — through pop-up vending machines across California and in Oklahoma. But they didn’t give them away to techies. 

6. Microsoft and the Linux community often felt like they were at war with each other in the past. But this week, Microsoft, one of the biggest open-source contributors, joined the Linux Foundation as a high-paying Platinum member.

7. Shareholders approved Tesla’s acquisition of SolarCity in an important hurdle for the deal. Tesla expects the transaction to close in the coming days. Overall, the acquisition is pushed forward by Elon Musk’s vision of a unified sustainable energy track.

Jason Robins of DraftKings

8. It was confirmed that fantasy sports sites DraftKings and FanDuel are merging into one company in what will be a dual-operating structure. DraftKings CEO Jason Robins will become CEO of the newly combined company and FanDuel CEO Nigel Eccles will become Chairman of the Board.

9. WhatsApp is on its way to becoming the global multi-platform FaceTime. The Facebook-owned communication app launched video calling for everyone.

10. A red-hot new startup called Hustle announced it’s raised $3 million led by Social Capital. The text-distribution tool’s goal is to let organizers quickly start individual, personalized conversations with huge groups of supporters. It has already been used by Hillary Clinton and Bernie Sanders.

11. Samsung is gunning to increase its focus on connected cars as it announced plans to buy auto and audio product maker Harman in an $8 billion all-cash deal.

12. LinkedIn was officially blocked in Russia after the social network failed to transfer Russian user data to servers located in the country. This violates a law instituted in Russia requiring all online sites to store personal data on national servers.

Featured Image: Eric Risberg/AP

Like by smiling? Facebook acquires emotion detection startup FacioMetrics

Facebook could one day build facial gesture controls for its app thanks to the acquisition of a Carnegie Mellon University spinoff company called FacioMetrics. The startup made an app called IntraFace that could detect seven different emotions in people’s faces, but it’s been removed from the app stores.

The acquisition aligns with a surprising nugget of information Facebook slipped into a 32-bullet point briefing sent to TechCrunch this month. Regarding its plans for applying its artificial intelligence research, Facebook wrote (emphasis mine):

“Future applications of deep learning platform on mobile: Gesture-based controls, recognize facial expressions and perform related actions”

It’s not hard to imagine Facebook one day employing FacioMetrics’ tech and its own AI to let you add a Like or one of its Wow/Haha/Angry/Sad emoji reactions by showing that emotion with your face.

That’s probably a long way off, though.

For now, Facebook tells me it will use FacioMetrics to enhance its Snapchat selfie Lens-style augmented reality face masks that are making their way into its videos and Live broadcasts:

“How people share and communicate is changing and things like masks and other effects allow people to express themselves in fun and creative ways. We’re excited to welcome the Faciometrics team who will help bring more fun effects to photos and videos and build even more engaging sharing experiences on Facebook.”

FacioMetrics’ app can detect happiness in this person’s face

There are already some Facebook and Snapchat selfie masks that react to you opening your mouth or raising your eyebrows. FacioMetrics’ tech could add tons of new ways to trigger animated effects in your videos.

Facebook is playing catch-up to Snapchat in the AR game, and it could use all the talent it can buy. The social giant wouldn’t disclose the price it paid for FacioMetrics, but the startup’s founder Fernando De la Torre, an associate research professor at robots-and-self-driving-car college Carnegie Mellon, wrote:

We started FacioMetrics to respond to the increasing interest and demand for facial image analysis — with all kinds of applications including augmented/virtual reality, animation, audience reaction measurement, and others. We began our research at Carnegie Mellon University developing state-of-the-art computer vision and machine learning algorithms for facial image analysis. Over time, we have successfully developed and integrated this cutting-edge technology into battery-friendly and efficient mobile applications, and also created new applications of this technology.

Now, we’re taking a big step forward by joining the team at Facebook.

The Greensburg Tribune Review spotted the acquisition, and reports that De la Torre’s research and app could be used to spot drowsy drivers, automatically analyze focus groups, detect depression and improve avatars in video games. That last part could come in handy, because Facebook’s Oculus division is also working on making life-like avatars that convey emotions via “VR emoji.” For example, shaking your fist in the air inside Oculus would make your avatar’s face turn angry.

If Facebook wants to be the home for all our sentimental social content, teaching computers to understand our emotions could definitely come in handy.

Kaspersky eyes launching a real-time back-up service for social media leavers

Veteran security firm Kaspersky Lab is looking at launching a real-time encrypted back-up service for social media users to store their data outside the walled garden of Facebook, Instagram, Twitter and Google+.

At this point it’s testing how much appetite there might be for such a service — launching a website with details of the potential app, which it’s calling FFForget, where people can sign up to express interest and provide feedback on the sorts of features they’d like to see. Any beta launch would not be before early 2017, it says.

Kaspersky’s thesis is that while social media remains wildly popular there are also widespread feelings of disaffection with how social giants such as Facebook monopolize people’s time and attention, and how they lock users in by merit of holding both their friendship networks and their personal data.

Kaspersky commissioned its own research on the topic, polling more than 4,800 web users across nine languages, and says a majority of respondents (nearly 78 percent) reported they had already considered ditching social networks, while more than one-third (39 percent) reckon they are wasting time on them.

The poll was conducted before the current controversy about the impact on democracy of social networks helping to spread misinformation online — but that could stoke further disaffection with tech platforms which algorithmically filter content to maximize user engagement and ad clicks.

FFForget is envisioned as a fix for the data lock-in issue — by providing a mechanism to free users of major social networking platforms from the fear of leaving their information behind should they decide to pull the plug and close their accounts. Kaspersky found around one-fifth (21 percent) of survey respondents worry about losing things like their photos should they quit a network.

A lot more respondents (62 percent) were concerned about losing contact with friends. Although, for some of these social platforms, you could argue the friendship networks in question involve a lot of weak bonds — when you consider how sprawling these sorts of quasi-public social networks tend to be, and how many users can be seen pruning connections on such services (rather than seeking to preserve every person they ever added), versus, say, the relative intimacy of mobile messaging apps.

However, the public-following element of a network like Instagram is obviously also a draw for those posting content in the hopes of building an audience. As ever with social networks, it’s a case of horses for courses — and the FFForget service clearly won’t appeal to everyone.

It’s worth noting that some social networks do already let users export data. Twitter, for example, has an option in settings for users to “request your archive” — which creates a downloadable file of all your tweets. However, it’s a manually triggered process that can take as long as a few days before the file is available.

Facebook also lets users download an archive of their info — but, again, it’s necessary to manually request this. FFForget’s paid subscription service, by contrast, would back up data in real time, letting a user close an account immediately, should they wish, without also having to jettison their data.

The premium version of the service (envisaged as costing $1.99 per month) would also offer a content browsing and organizing interface, plus a real-time sync API that lets third-party apps and services of the user’s choosing access their social network content, again freeing up some of that locked-in social media data for alternative uses.

A freemium version is also planned, with fewer features, and what’s billed as “basic encryption” versus the premium service’s “military-grade encryption of your choice, with DES, AES, Blowfish.”

A spokeswoman for Kaspersky confirmed data backed up via the freemium version would still be client-side encrypted — so there’s no intention of Kaspersky trying to data-mine users’ social networking archives. On the contrary, the service is billed as having “no tracking, no ads.”

As well as positioning FFForget as a fix for social network fatigue, Kaspersky touts it as a way to safeguard social network data against an account being closed or taken over by hackers.

Facebook overhauls ad metrics, admits 4 bugs and errors led to misreported numbers

Facebook has been posting big gains on the back of advertising this year, but it looks like not all is well in the world of ad metrics on the social network. Today the company admitted that it has discovered bugs and errors in its systems that led to misreported numbers across four products, including Instant Articles, video and Page Insights.

While coming clean on the bugs and errors, Facebook also said that it was putting several new measures into place both to fix those and bring in outside groups to provide more measurement to advertising and other clients, including the creation of a Measurement Council and more third-party verification.

The news is coming out at a key time for the company. Facebook has already been coming under pressure over accusations that it influenced the U.S. election by showing people too much “fake news” — posts created to look like factual content that were in fact made specifically to skew opinion and drive more clicks. And in September it had already admitted that a miscalculation led to inflated video view counts. Shares of Facebook were down nearly 3 percent in pre-market trading. We’ll continue to monitor this to see whether there is a bigger effect.

The news Facebook released today comprises two parts: the changes it’s putting in place, and the disclosure of the problems.

Facebook said that it uncovered bugs and other reporting errors in four products: Page Insights, its video product, Instant Articles and referrals in Analytics for Apps. For context, there are 220 metrics that Facebook counts across its platform.

It notes that one of its Pages dashboards, the summary number for 7-day or 28-day organic page reach, was miscalculated as a simple sum of daily reach instead of de-duplicating repeat visitors over those periods. It also notes that “the vast majority of reach data in the Page Insights dashboard was unaffected, including all the graphs, daily and historical reach, per-post reach, exported and API reach data, and all data on the Reach tab.” The de-duplicated 7-day summary in the overview dashboard will be 33 percent lower on average, and the 28-day summary 55 percent lower. The bug has been live since May and will be fixed in the next few weeks; it does not affect paid reach, the company added.

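The difference between the two calculations is easy to illustrate. Here is a minimal, hypothetical sketch (the visitor sets are invented, and this is not Facebook’s actual code): summing each day’s reach double-counts anyone who visits a Page on more than one day, while de-duplicating over the whole period counts each person once.

```python
# Hypothetical sketch of the reach bug: summing each day's reach
# double-counts people who visit a Page on more than one day.
daily_visitors = [
    {"ann", "bob", "carol"},   # day 1
    {"bob", "carol", "dave"},  # day 2
    {"ann", "dave"},           # day 3
]

# Buggy summary: a simple sum of the daily reach figures.
buggy_reach = sum(len(day) for day in daily_visitors)  # 8

# Fixed summary: de-duplicate repeat visitors over the period.
fixed_reach = len(set().union(*daily_visitors))        # 4

print(buggy_reach, fixed_reach)
```

In this toy example the simple sum overstates reach by a factor of two, which is why the corrected 7- and 28-day summaries come in substantially lower.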
It also said that it had been undercounting metrics for completed, or 100 percent, video views — because sometimes the audio plays out longer than the video does. It notes that this could mean up to a 35 percent increase in video watches at 100 percent.
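One way such an undercount can arise, sketched here with invented durations and a hypothetical completion check (not Facebook’s actual logic): if a view is judged complete against the full media length, a clip whose audio track outlasts its video track can never register a 100 percent view even when the viewer saw every frame.

```python
# Hypothetical sketch of the video-completion bug: judging completion
# against the full media length undercounts clips whose audio track
# runs longer than the video track.
video_len = 30.0   # seconds of picture
audio_len = 33.0   # audio outlasts the video
watched = 31.0     # the viewer saw every frame

media_len = max(video_len, audio_len)
buggy_complete = watched >= media_len   # False: audio is still playing
fixed_complete = watched >= video_len   # True: the video was fully watched

print(buggy_complete, fixed_complete)
```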

Meanwhile, Facebook said it had made a calculation error in Instant Articles, over-reporting by between 7 percent and 8 percent since August of 2015. “We were calculating the average across a histogram of time spent, instead of reflecting the total time spent reading an article divided by its total views. We have now fixed this issue,” Facebook said.
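The two averaging methods diverge whenever the histogram buckets hold very different numbers of views. A toy example (figures invented and exaggerated for clarity; the real discrepancy was only 7 to 8 percent):

```python
# Hypothetical sketch of the time-spent bug: averaging a histogram's
# bucket values gives every bucket equal weight, no matter how many
# article views actually fall into it.
# seconds spent reading -> number of views in that bucket
histogram = {10: 700, 30: 200, 120: 100}

# Buggy average: mean of the bucket values, ignoring view counts.
buggy_avg = sum(histogram) / len(histogram)   # 160 / 3, about 53.3

# Fixed average: total time spent divided by total views.
total_time = sum(sec * views for sec, views in histogram.items())
total_views = sum(histogram.values())
fixed_avg = total_time / total_views          # 25000 / 1000 = 25.0

print(buggy_avg, fixed_avg)
```

Because most views are short, the unweighted average skews toward the rare long reads and over-reports time spent.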

Lastly, Facebook said that it had been miscounting referrals in Analytics for Apps by about 6 percent for the most frequent users: it was counting not only clicks that went directly to apps and websites, but also clicks on posts via apps and websites, including clicks to view media. Other referral measurements were unaffected, it said.

While four problems in a pool of 220 may not seem like a lot, it’s a sign of how Facebook’s platform is getting increasingly complex. But given what a force Facebook is today in online advertising, there is also an important need for transparency and trust for those who are putting their content (and ad spend) onto the platform. And some of this is overdue, considering that Facebook has broken new ground around a whole new set of products and parameters in the digital ad market. To that end, Facebook also today announced some changes in how it’s approaching measurement.

For starters, Facebook said that it will be widening the pool of third-party companies that it works with to measure traffic and engagement on its platform beyond the small group that it works with today, which includes comScore, Moat, Nielsen and Integral Ad Science (IAS), in response to requests from partners for more independent measurement, specifically around time ads are viewed.

This will also include more work with existing partners; Nielsen, for example, will monitor video and Facebook Live content and incorporate it into its wider social media dashboard.

It said that it will also create a new Measurement Council — comprised of advertising clients and measurement companies — to look at how it will continue to evolve this going forward, part of a bigger plan to communicate more about its metrics publicly.

This will also include overhauling the language it uses to describe metrics, clearer calculations, more categorization and better definitions of what Facebook is measuring.

More to come.