Google hid 2015 to March 2018 Google+ data breach from public...


Google Exposed User Data, Feared Repercussions of Disclosing to Public

Google opted not to disclose to users its discovery of a bug that gave outside developers access to private data. It found no evidence of misuse.

By Douglas MacMillan and Robert McMillan Updated Oct. 8, 2018 2:45 p.m. ET

Google exposed the private data of hundreds of thousands of users of the Google+ social network and then opted not to disclose the issue this past spring, in part because of fears that doing so would draw regulatory scrutiny and cause reputational damage, according to people briefed on the incident and documents reviewed by The Wall Street Journal.

As part of its response to the incident, the Alphabet Inc. unit announced a sweeping set of data privacy measures that include permanently shutting down all consumer functionality of Google+. Monday’s move effectively puts the final nail in the coffin of a product that was launched in 2011 to challenge Facebook Inc. and is widely seen as one of Google’s biggest failures.

A software glitch in the social site gave outside developers potential access to private Google+ profile data between 2015 and March 2018, when internal investigators discovered and fixed the issue, according to the documents and people briefed on the incident. A memo prepared by Google’s legal and policy staff, shared with senior executives and reviewed by the Journal, warned that disclosing the incident would likely trigger “immediate regulatory interest” and invite comparisons to Facebook’s leak of user information to data firm Cambridge Analytica.

Chief Executive Sundar Pichai was briefed on the plan not to notify users after an internal committee had reached that decision, the people said.

The closure of Google+ is part of a broader review of privacy practices by Google that has determined the company needs tighter controls on several major products, the people said. In its announcement Monday, the company said it is curtailing the access it gives outside developers to user data on Android smartphones and Gmail.

The episode involving Google+, which hasn’t been previously reported, shows the company’s concerted efforts to avoid public scrutiny of how it handles user information, particularly at a time when regulators and consumer privacy groups are leading a charge to hold tech giants accountable for the vast power they wield over the personal data of billions of people.

Social Bug

How a software glitch allowed app developers to potentially access Google+ user data

User A signs up to Google+ and fills out profile fields: name, employer, job title, gender, birth date and relationship status.

User A goes into privacy settings to make profile data viewable only to certain friends on Google+, including User B.

User B signs up for an app which asks the user to log in using Google+ credentials. The user gives the app permission to access profile information.

The app developer collects data on User B. Because of the software glitch, the developer can also collect User A’s private profile data.

Google discovered and fixed the glitch in March 2018. It found no evidence of misuse of data.

Sources: People briefed on the incident and documents reviewed by The Wall Street Journal
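The four steps above can be sketched as a minimal data model. Everything here is hypothetical and for illustration only — the class, function names and fields are invented, not Google's actual code — but it shows how an API that skips a visibility check exposes a friend's private profile to a developer:

```python
# Hypothetical sketch of the reported Google+ flaw: when an app reads a
# consenting user's connections, the buggy API returns friends' profile
# fields without honoring those friends' own visibility settings.

from dataclasses import dataclass, field

@dataclass
class Profile:
    name: str
    fields: dict                                   # e.g. {"employer": ..., "birthday": ...}
    visible_to: set = field(default_factory=set)   # principals allowed to view ("public" or user IDs)
    friends: list = field(default_factory=list)

def fetch_connections_buggy(user: Profile, app_authorized: bool):
    """What the bug allowed: the app sees every friend's full profile."""
    if not app_authorized:
        return []
    return [(f.name, f.fields) for f in user.friends]   # no visibility check

def fetch_connections_fixed(user: Profile, app_authorized: bool):
    """Post-fix behavior: only friends who made their profile public are returned."""
    if not app_authorized:
        return []
    return [(f.name, f.fields) for f in user.friends if "public" in f.visible_to]

# User A shares profile data only with User B; User B then authorizes an app.
user_a = Profile("User A", {"employer": "Acme", "birthday": "1980-01-01"},
                 visible_to={"user_b"})
user_b = Profile("User B", {"employer": "Globex"}, visible_to={"public"},
                 friends=[user_a])

leaked = fetch_connections_buggy(user_b, app_authorized=True)   # includes User A's private fields
safe = fetch_connections_fixed(user_b, app_authorized=True)     # excludes User A entirely
```

The developer only ever interacted with User B, yet the buggy path hands over User A's nonpublic data — which is the pattern the sidebar describes.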

The snafu threatens to give Google a black eye on privacy after public assurances that it was less susceptible to data gaffes like those that have befallen Facebook. It may also complicate Google’s attempts to stave off unfavorable regulation in Washington. Mr. Pichai recently agreed to testify before Congress in the coming weeks.

“Whenever user data may have been affected, we go beyond our legal requirements and apply several criteria focused on our users in determining whether to provide notice,” a Google spokesman said in a statement.

In weighing whether to disclose the incident, the company considered “whether we could accurately identify the users to inform, whether there was any evidence of misuse, and whether there were any actions a developer or user could take in response,” he said. “None of these thresholds were met here.”

The internal memo from legal and policy staff says the company has no evidence that any outside developers misused the data but acknowledges it has no way of knowing for sure. The profile data that was exposed included full names, email addresses, birth dates, gender, profile photos, places lived, occupation and relationship status; it didn’t include phone numbers, email messages, timeline posts, direct messages or any other type of communication data, one of the people said.

Google makes user data available to outside developers through more than 130 different public channels known as application programming interfaces, or APIs. These tools usually require a user’s permission to access any information, but they can be misused by unscrupulous actors posing as app developers to gain access to sensitive personal data.

A privacy task force formed inside Google, code-named Project Strobe, has in recent months conducted a companywide audit of the company’s APIs, according to the people briefed on the process. The group is made up of more than 100 engineers, product managers and lawyers, the people said.

In a blog post on Monday, Google said it plans to clamp down on the data it provides outside developers through APIs. The company will stop letting most outside developers gain access to SMS messaging data, call log data and some forms of contact data on Android phones, and Gmail will only permit a small number of developers to continue building add-ons for the email service, the company said.

Google faced pressure to rein in developer access to Gmail earlier this year, after a Wall Street Journal examination found that developers commonly use free email apps to hook users into giving up access to their inboxes without clearly stating what data they collect. In some cases, employees at these app companies have read people’s actual emails to improve their software algorithms.

The coming changes are evidence of a larger rethinking of data privacy at Google, which has in the past placed relatively few restrictions on how external apps access users’ data, provided those users give permission. Restricting access to APIs will hurt some developers who have been helping Google build a universe of useful apps.

The Google+ data problem, discovered as part of the Strobe audit, was the result of a flaw in an API Google created to help app developers access an array of profile and contact information about the people who sign up to use their apps, as well as the people they are connected to on Google+. When a user grants a developer permission, any data that user entered into a Google+ profile can be collected by the developer.

In March of this year, Google discovered that Google+ also permitted developers to retrieve the data of some users who never intended to share it publicly, according to the memo and two people briefed on the matter. Because of a bug in the API, developers could collect the profile data of their users’ friends even if that data was explicitly marked nonpublic in Google’s privacy settings, the people said.
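Because the exposed data was field-level profile information that users had explicitly marked nonpublic, the fix amounts to enforcing a per-field visibility check that the buggy path skipped. The sketch below is an assumption about the general pattern, with invented field names and ACL model, not Google's implementation:

```python
# Hypothetical per-field visibility filter of the kind the buggy API
# bypassed. A third-party app reading a friend's profile should only
# receive fields that friend made public; the reported flaw returned
# nonpublic fields as well.

def fields_for_app(profile_fields, acl):
    """Return only the fields whose ACL includes 'public'."""
    return {name: value for name, value in profile_fields.items()
            if "public" in acl.get(name, set())}

profile = {"name": "User A", "birthday": "1980-01-01",
           "employer": "Acme", "relationship": "married"}
acl = {"name": {"public"},        # shared with everyone
       "birthday": {"user_b"},    # shared only with User B
       "employer": {"user_b"},
       "relationship": {"user_b"}}

# Correct behavior: the developer's app sees only the public field.
# Under the bug, all four fields came back to the developer.
app_view = fields_for_app(profile, acl)
```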

During a two-week period in late March, Google ran tests to determine the impact of the bug, one of the people said. It found that 496,951 users who had shared private profile data with a friend could have had that data accessed by an outside developer, the person said. Some of the individuals whose data was exposed to potential misuse included paying users of G Suite, a set of productivity tools including Google Docs and Drive, the person said. G Suite customers include businesses, schools and governments.

Because the company kept a limited set of activity logs, it was unable to determine which users were affected and what types of data may potentially have been improperly collected, the two people briefed on the matter said. The bug had existed since 2015, and it is unclear whether a larger number of users may have been affected over that time.

Google believes up to 438 applications had access to the unauthorized Google+ data, the people said. Strobe investigators, after testing some of the apps and checking to see if any of the developers had previous complaints against them, determined none of the developers looked suspicious, the people said. The company’s ability to determine what was done with the data was limited because the company doesn’t have “audit rights” over its developers, the memo said. The company didn’t call or visit with any of the developers, the people said.

The question of whether to notify users went before Google’s Privacy and Data Protection Office, a council of top product executives who oversee key decisions relating to privacy, the people said.

Internal lawyers advised that Google wasn’t legally required to disclose the incident to the public, the people said. Because the company didn’t know which developers may have obtained what data, the group also didn’t believe notifying users would give any actionable benefit to the end users, the people said.

The memo from legal and policy staff wasn’t a factor in the decision, said a person familiar with the process, but reflected internal disagreements over how to handle the matter.

The document shows Google officials knew that disclosure could have serious ramifications. Revealing the incident would likely result “in us coming into the spotlight alongside or even instead of Facebook despite having stayed under the radar throughout the Cambridge Analytica scandal,” the memo said. It “almost guarantees Sundar will testify before Congress.”

A range of factors go into determining whether a company must notify users of a potential data breach. There is no federal breach notification law in the U.S., so companies must navigate a patchwork of state laws with differing standards, said Al Saikali, a lawyer with Shook, Hardy & Bacon LLP. He isn’t affiliated with any of the parties.

While many companies wouldn’t notify users if a name and birth date were accessed, some firms would, Mr. Saikali said. Some firms notify users even when it is unclear that the data in question was accessed, he said. “Fifty percent of the cases I work on are judgment calls,” he said. “Only about half the time do you get conclusive evidence that says that this bad guy did access information.”

Europe’s General Data Protection Regulation, which went into effect in May of this year, requires companies to notify regulators of breaches within 72 hours, under threat of a maximum fine of 2% of world-wide revenue. The information potentially leaked via Google’s API would constitute personal information under GDPR, but because the problem was discovered in March, it wouldn’t have been covered under the European regulation, Mr. Saikali said.

Google could also face class-action lawsuits over its decision not to disclose the incident, Mr. Saikali said. “The story here that the plaintiffs will tell is that Google knew something here and hid it. That by itself is enough to make the lawyers salivate,” he said.

In its contracts with paid users of G Suite apps, Google tells customers it will notify them about any incidents involving their data “promptly and without undue delay” and will “promptly take reasonable steps to minimize harm.” That requirement may not apply to Google+ profile data, however, even if it belonged to a G Suite customer.

—Newley Purnell contributed to this article.
