Sunday, June 26, 2016

Google, Facebook quietly move toward automatic blocking of extremist videos


By Joseph Menn and Dustin Volz      June 24, 2016

SAN FRANCISCO/WASHINGTON (Reuters) - Some of the web’s biggest destinations for watching videos have quietly started using automation to remove extremist content from their sites, according to two people familiar with the process.

The move is a major step forward for internet companies that are eager to eradicate violent propaganda from their sites and are under pressure to do so from governments around the world as attacks by extremists proliferate, from Syria to Belgium and the United States.

YouTube and Facebook are among the sites deploying systems to block or rapidly take down Islamic State videos and other similar material, the sources said.

The technology was originally developed to identify and remove copyright-protected content on video sites. It looks for "hashes," a type of unique digital fingerprint that internet companies automatically assign to specific videos, allowing all content with matching fingerprints to be removed rapidly.

Such a system would catch attempts to repost content already identified as unacceptable, but would not automatically block videos that have not been seen before.
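In broad strokes, that re-upload check can be sketched in a few lines. The Python below is purely illustrative: the banned-fingerprint set and the use of a SHA-256 digest are assumptions for demonstration, while production systems reportedly rely on perceptual fingerprints that survive re-encoding and minor edits rather than exact cryptographic hashes.

```python
import hashlib

# Hypothetical database of fingerprints for videos already ruled
# unacceptable (the real databases and their contents are not public).
BANNED_HASHES = {
    # SHA-256 digest of the bytes b"test", standing in for a known clip
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(video_bytes: bytes) -> str:
    """Reduce a file to a fixed-length digest. Real systems use
    perceptual hashes so re-encoded copies still match; an exact
    cryptographic hash like this one only catches identical bytes."""
    return hashlib.sha256(video_bytes).hexdigest()

def should_block(video_bytes: bytes) -> bool:
    """Re-uploads of known content match the database; a video never
    seen before produces a new digest and passes through."""
    return fingerprint(video_bytes) in BANNED_HASHES
```

As the article notes, such a check only stops repostings of material already flagged; brand-new videos produce unknown fingerprints and still require human or other review.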

The companies would not confirm that they are using the method or talk about how it might be employed, but numerous people familiar with the technology said that posted videos could be checked against a database of banned content to identify new postings of, say, a beheading or a lecture inciting violence.

The two sources would not discuss how much human work goes into reviewing videos identified as matches or near-matches by the technology. They also would not say how videos in the databases were initially identified as extremist.

Use of the new technology is likely to be refined over time as internet companies continue to discuss the issue internally and with competitors and other interested parties.

In late April, amid pressure from U.S. President Barack Obama and other U.S. and European leaders concerned about online radicalization, internet companies including Alphabet Inc's YouTube, Twitter Inc, Facebook Inc and CloudFlare held a call to discuss options, including a content-blocking system put forward by the private Counter Extremism Project, according to one person on the call and three who were briefed on what was discussed.

The discussions underscored the central but difficult role some of the world's most influential companies now play in addressing issues such as terrorism, free speech and the lines between government and corporate authority.

None of the companies at this point has embraced the anti-extremist group's system, and they have typically been wary of outside intervention in how their sites should be policed.

“It’s a little bit different than copyright or child pornography, where things are very clearly illegal,” said Seamus Hughes, deputy director of George Washington University’s Program on Extremism.

Extremist content exists on a spectrum, Hughes said, and different web companies draw the line in different places.

Until now, most companies have relied mainly on users to flag content that violates their terms of service, and many still do. Flagged material is then individually reviewed by human editors, who delete postings found to be in violation.

The companies now using automation are not publicly discussing it, two sources said, in part out of concern that terrorists might learn how to manipulate their systems or that repressive regimes might insist the technology be used to censor opponents.

“There's no upside in these companies talking about it,” said Matthew Prince, chief executive of content distribution company CloudFlare. “Why would they brag about censorship?”

The two people familiar with the still-evolving industry practice confirmed it to Reuters after the Counter Extremism Project publicly described its content-blocking system for the first time last week and urged the big internet companies to adopt it.

WARY OF OUTSIDE SOLUTION

The April call was led by Facebook's head of global policy management, Monika Bickert, sources with knowledge of the call said. On it, Facebook presented options for discussion, according to one participant, including the one proposed by the non-profit Counter Extremism Project.

The anti-extremism group was founded by, among others, Frances Townsend, who advised former president George W. Bush on homeland security, and Mark Wallace, who was deputy campaign manager for the Bush 2004 re-election campaign.

Three sources with knowledge of the April call said that companies expressed wariness of letting an outside group decide what defined unacceptable content.

Other alternatives raised on the call included establishing a new industry-controlled nonprofit or expanding an existing industry-controlled nonprofit. All the options discussed involved hashing technology.

The model for an industry-funded organization might be the nonprofit National Center for Missing and Exploited Children, which identifies known child pornography images using a system known as PhotoDNA. The system is licensed for free by Microsoft Corp.

Microsoft announced in May it was providing funding and technical support to Dartmouth College computer scientist Hany Farid, who works with the Counter Extremism Project and helped develop PhotoDNA, "to develop a technology to help stakeholders identify copies of patently terrorist content."

Facebook’s Bickert agreed with some of the concerns voiced during the call about the Counter Extremism Project's proposal, two people familiar with the events said. She declined to comment publicly on the call or on Facebook's efforts, except to note in a statement that Facebook is “exploring with others in industry ways we can collaboratively work to remove content that violates our policies against terrorism.”

In recent weeks, one source said, Facebook has sent out a survey to other companies soliciting their opinions on different options for industry collaboration on the issue.

William Fitzgerald, a spokesman for Alphabet's Google unit, which owns YouTube, also declined to comment on the call or about the company's automated efforts to police content.

A Twitter spokesman said the company was still evaluating the Counter Extremism Project's proposal and had "not yet taken a position."

A former Google employee said people there had long debated what the company should do with its Content ID system beyond thwarting copyright violations and sharing revenue with creators. Google's content-matching system is older and far more sophisticated than Facebook's, according to people familiar with both.

Lisa Monaco, senior adviser to the U.S. president on counterterrorism, said in a statement that the White House welcomed initiatives that seek to help companies “better respond to the threat posed by terrorists’ activities online.”

(Reporting by Joseph Menn in San Francisco and Dustin Volz in Washington; Additional reporting by Yasmeen Abutaleb and Jim Finkle; Editing by Jonathan Weber and Bill Rigby)


Wednesday, June 22, 2016

Goodbye, Password. Banks Opt to Scan Fingers and Faces Instead.

By MICHAEL CORKERY JUNE 21, 2016

The banking password may be about to expire — forever.

Some of the nation’s largest banks, acknowledging that traditional passwords are either too cumbersome or no longer secure, are increasingly using fingerprints, facial scans and other types of biometrics to safeguard accounts.

Millions of customers at Bank of America, JPMorgan Chase and Wells Fargo routinely use fingerprints to log into their bank accounts through their mobile phones. This feature, which some of the largest banks have introduced in the last few months, is enabling a huge share of American banking customers to verify their identities with biometrics. And millions more are expected to opt in as more phones incorporate fingerprint scans.

Other uses of biometrics are also coming online. Wells Fargo lets some customers scan their eyes with their mobile phones to log into corporate accounts and wire millions of dollars. Citigroup can help verify 800,000 of its credit card customers by their voices. USAA, which provides insurance and banking services to members of the military and their families, identifies some of its customers through their facial contours.

Some of the moves reflect concern that so many hundreds of millions of email addresses, phone numbers, Social Security numbers and other personal identifiers have fallen into the hands of criminals, rendering those identifiers increasingly ineffective at protecting accounts. And while thieves could eventually find ways to steal biometric data, banks are convinced they offer more protection.

“We believe the password is dying,” said Tom Shaw, vice president for enterprise financial crimes management at USAA, which is based in San Antonio. “We realized we have to get away from personal identification information because of the growing number of data breaches.”

Long regarded as the stuff of science fiction, biometrics have been tested by big banks for decades, but have only recently become sufficiently accurate and cost effective to use in a big way. It has taken a great deal of trial and error: With many of the early prototypes, a facial scan could be foiled by bad lighting, and voice recognition could be scuttled by background noise or laryngitis.

Before smartphones became ubiquitous, there was an even bigger obstacle: To capture a finger image or scan an eyeball, a bank would have to pay to distribute the necessary technology to tens of millions of customers. A few tried, but their efforts were costly and short-lived.

Today, the equation has changed. Many models of the iPhone have touch pads that can scan fingerprints. The cameras and microphones on many mobile devices are so powerful that they can record the minute details needed to create a biometric ID.

The smartphones also provide an extra layer of security: Many biometric features will only work when used on the specific phone that belongs to the bank account holder.

“If you have your phone and you are authenticating with your fingerprint, it is very likely you,” said Samir Nanavati, a longtime biometrics expert and a founder of Twin Mill, a security software and consulting firm.

The trade-off, of course, is that in the quest for security and convenience, customers are handing over marks of their unique physical identities. After all, it is easy to change a compromised password. But a fingerprint must last forever.

Some bank executives say customers often ask whether their biometric information will become part of a private database, akin to what the F.B.I. keeps.

The banks themselves are not keeping caches of actual fingerprints or eye patterns. Rather, the banks are creating and storing what they call templates — or what amount to long, hard-to-predict numerical sequences — based on a scan of a person’s fingerprint or eyeballs.
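As a rough illustration of how such a template might be used at login, the sketch below reduces a scan to a short feature vector and accepts a fresh scan only if it falls within a tolerance of the enrolled template. Everything here is a hypothetical simplification: the vector, the distance metric and the threshold are invented, and real biometric matchers use far more elaborate feature extraction.

```python
import math

# Hypothetical enrolled template: a numeric sequence derived from the
# enrollment scan. The raw fingerprint image itself is never stored.
enrolled_template = [0.12, 0.87, 0.45, 0.33, 0.91]

def distance(a: list, b: list) -> float:
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def matches(fresh_scan: list, template: list, tolerance: float = 0.1) -> bool:
    """Two scans of the same finger are never bit-for-bit identical,
    so matching allows a small tolerance rather than exact equality."""
    return distance(fresh_scan, template) <= tolerance

# A second scan of the same finger lands close to the template;
# a different finger falls well outside the tolerance.
same_finger = [0.11, 0.88, 0.44, 0.34, 0.90]
other_finger = [0.70, 0.20, 0.95, 0.05, 0.10]
```

The tolerance is the key design choice: too tight and legitimate customers get locked out; too loose and an impostor's scan may slip through.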

It is possible that the thieves could use the biometric templates to steal money, but the banks say they have worked to develop additional safeguards. With some voice authentication systems, banks use certain prompts to prove it is a living customer and not a recording. Many eye scans require customers to blink or move their eyes to prevent a thief from using a photo to gain access.

Wells Fargo has been working with EyeVerify, a start-up in Kansas City, Mo., to develop its eye scan feature, which is being tested with a small group of corporate customers. The technology creates a map of the veins in the whites of an eye.

To log into an account, a customer taps open a Wells Fargo app on a smartphone. When prompted, the customer’s eyes are lined up with a pair of yellow circles on the phone screen. If they match, the customer — typically a chief financial officer or other top executive — gains instant access to the account and can start moving money or conducting other transactions.

Wells Fargo executives said the eye scan could eventually offer an alternative to the authentication system used for corporate accounts, which involves physical tokens that generate numeric pass codes every few seconds. Although generally considered secure, these tokens can be a hassle to carry around.

For now, Wells Fargo is offering eye scans — among the most foolproof biometric technologies, according to security experts — only to select corporate customers, for whom the stakes are arguably higher because there is potentially so much money involved.

“It is harder to take someone’s eyeball than someone’s user ID and password,” said Steve Ellis, who leads Wells Fargo’s innovation group that worked on developing the eye scan authentication. The bank also made an investment in EyeVerify.

Instead of eye scans, Bank of America has embraced fingerprints. Since it began offering the option in September, about 33 percent of the bank’s 20 million mobile banking customers have started using a fingertip to get into their accounts.

There are limits, though, on how far an average retail customer can proceed through the banking process without a password.

For example, JPMorgan Chase customers can gain access to their bank accounts with their fingerprints, but have to use a traditional password to transfer money.

Still, the speed and accuracy of the banks’ biometric capabilities are especially notable because they are emerging from an industry known for its antiquated system of tellers and branches and endless reams of paperwork.

Wells Fargo’s eye scan technology, for example, worked so quickly that the developers had to slow it down by a few seconds so customers knew it had actually registered their identities.

It takes only about 40 seconds to capture enough information about a customer’s vocal patterns to create a voice imprint that can be used as a form of identification, according to Andrew S. Keen, director of program management for Global Consumer Operations at Citigroup. Once a print is established, it can reduce the time that customers spend identifying themselves to a call center representative.

Many financial firms emphasize the convenience of biometrics, but USAA is one of the few that highlights the effectiveness of these technologies at thwarting thieves.

Since USAA began offering biometric authentication early last year, more than 1.7 million customers have been accessing their accounts using either their fingerprints, voices or facial scans.

“We can’t rely on personal identification information any longer,” said Mr. Shaw. “We believe we have to rely on biometrics.”

A version of this article appears in print on June 22, 2016, on page A1 of the New York edition with the headline: Bye, Password. Now a Fingertip Gets Clients In.


Proposals to curb online speech viewed as threat to open internet

By Yasmeen Abutaleb and Alastair Sharp
June 21, 2016

SAN FRANCISCO/ TORONTO (Reuters) - At least a dozen countries are considering or have enacted laws restricting online speech, a trend that is alarming policymakers and others who see the internet as a valuable medium for debate and expression.

Such curbs are called out as a threat to the open internet in a report on internet governance set to be released today at an Organization for Economic Cooperation and Development meeting in Cancun, Mexico.

The report, reviewed by Reuters, warns of dangers for the global internet, including intrusive surveillance, rising cybercrime and fragmentation as governments exert control of online content.

It was prepared by the London-based Chatham House think tank and the Centre for International Governance Innovation, founded by former BlackBerry Ltd co-chief Jim Balsillie.

China and Iran long have restricted online speech. Now limitations are under discussion in countries that have had a more open approach to speech, including Brazil, Malaysia, Pakistan, Bolivia, Kenya and Nigeria.

Advocates said some of the proposals would criminalize conversations online that otherwise would be protected under the countries' constitutions. Some use broad language to outlaw online postings that "disturb the public order" or "convey false statements" - formulations that could enable crackdowns on political speech, critics said.

"Free expression is one of the foundational elements of the internet," said Michael Chertoff, former U.S. secretary of Homeland Security and a co-author of the internet governance report. "It shouldn't be protecting the political interests of the ruling party or something of that sort."

Turkey and Thailand also have cracked down on online speech, and a number of developing world countries have unplugged social media sites altogether during elections and other sensitive moments. In the U.S. as well, some have called for restrictions on internet communications.

Speech limitations create business and ethical conflicts for companies like Facebook Inc, Twitter Inc and Alphabet Inc's Google, platforms for debate and political organizing.

"This is the next evolution of political suppression," said Richard Forno, assistant director of the University of Maryland, Baltimore County Center for Cybersecurity. "Technology facilitates freedom of expression, and politicians don't like that."

"FIGHTING DELINQUENCY"

Tanzania and Ethiopia have passed laws restricting online speech. In other countries, including Pakistan, Brazil, Bolivia and Kenya, proposals are under discussion or under legislative consideration, according to a review of laws by Reuters and reports by internet activist groups.

In Bolivia, President Evo Morales earlier this year said that the country needs to "regulate the social networks." A bill has been drafted and is ready for introduction in the legislature, said Leonardo Loza, head of one of Bolivia's coca growers unions, a supporter of the proposal.

"It is aimed at educating and disciplining people, particularly young Bolivians, and fighting delinquency on social networks," Loza said. "Freedom of expression can't be lying to the people or insulting citizens and politicians."

A bill in Pakistan would allow the government to block internet content to protect the "integrity, security or defense" of the state. The legislation, which has passed a vote in Pakistan's lower house of parliament, is supposed to target terrorism, but critics said the language is broad.

It comes after Pakistan blocked YouTube in 2012 when a video it deemed inflammatory sparked protests across the country and much of the Muslim world.

Earlier this year, YouTube, which is owned by Google, agreed to launch a local version of its site in the country. But now, the internet report said, the Pakistan Telecommunications Authority can ask the company to remove any material it finds offensive.

COMPANIES IN THE CROSSFIRE

U.S. internet companies have faced mounting pressure in recent years to restrict content. Companies' terms of service lay out what users can and cannot post, and they said they apply a single standard globally. They aim to comply with local laws, but often confront demands to remove even legal content.


Senate Falls 1 Vote Short of Giving FBI Access to Browser Histories Without Court Order

Privacy advocates brace for another vote, say it's time to flood Senate offices with phone calls.
By Steven Nelson | Staff Writer June 22, 2016, at 1:25 p.m.

Privacy-minded senators on Wednesday blocked an amendment that would give the FBI power to take internet records, including browser histories and email metadata, without a court order. But the victory may be fleeting.

Just one vote kept the measure from clearing a 60-vote procedural hurdle, and political arm-twisting may soon result in a second vote. Senate Majority Leader Mitch McConnell, R-Ky., switched his vote to "no" to allow reconsideration in the near future. That made the final tally 58-38, with four senators not voting.

Critics of the proposed expansion of the FBI's ability to demand records with national security letters, or NSLs, are urging opponents to flood their senators with calls. There were some unexpected "yes" votes, such as Sen. Ted Cruz, R-Texas, whom they hope to flip, since some of the four senators who did not vote are viewed as tougher sells.

"It's obviously a good thing that this didn't move forward in the Senate," says Neema Singh Guliani, legislative counsel at the American Civil Liberties Union. "This would be an expansion of the Patriot Act and a very substantial one that would allow the FBI to get what many people consider their most sensitive information."

"It's important that the public contact their senators and say, 'We don't want this expansion of the Patriot Act,'" she says. "There were a lot of members who voted in favor who you wouldn't expect. This is a situation where you could see a lot of pressure on members to change their votes, which is why it's important the public understands the stakes here."

The amendment would allow the FBI to use NSLs to force companies to turn over “electronic communications transactional records,” or ECTRs, when it claims they are relevant to an investigation into terrorism or espionage. NSLs are administrative subpoenas that don’t require court approval and often come with a gag order.

Critics say the FBI already can get ECTR records if it convinces a judge there's good cause or if there's an emergency and it seeks retroactive court review.

“When most people hear ECTR, they go, ‘What’s an ECTR?’ And of course they do," says Robyn Greene, policy counsel at the Open Technology Institute. "ECTRs are not records that people are familiar with. When you send an email or go to a website, you think about the content you are sending or receiving, not that there's a trail you are leaving that if the government accessed would reveal your entire digital fingerprint."

Greene, who opposes the amendment, says "they were beat, but they may try again."

The surveillance-enhancing amendment is part of a third attempt to get the NSL expansion through the Senate. In the first attempt, the Senate intelligence committee attached it to the annual intelligence authorization bill behind closed doors, over a lone "no" vote from Sen. Ron Wyden, D-Ore.; that underlying bill hasn't been considered by the full Senate. A second attempt effectively killed a bill that would have required warrants for U.S. emails, once it became clear there were enough votes to attach the NSL measure in the Senate Judiciary Committee.

On Tuesday, a prominent supporter of the legislation, Sen. John Cornyn, R-Texas, said the authority could have helped the FBI apprehend Orlando mass murderer Omar Mateen, whom the agency had twice investigated in the years before he killed 49 people at a gay nightclub on June 12 in the worst mass shooting in modern U.S. history. Cornyn said it might have revealed Mateen’s email contacts and shown that he was watching sermons posted online by radical cleric Anwar al-Awlaki.

Wyden argued on the Senate floor Wednesday that the amendment violates the Fourth Amendment’s protections and that it’s unnecessary because the surveillance-reforming USA Freedom Act enacted last year “allows the FBI to demand all of these records in an emergency and then go get court approval after the fact. So unless you’re opposed to court oversight, even after the fact, there’s no need to support this amendment.”

One of the amendment’s sponsors, Sen. Richard Burr, R-N.C., said there’s no evidence the authority would have prevented the Orlando shooting. He said “this is simply to provide law enforcement with tools to fulfill their mission, which is to keep America safe” and that an NSL would shorten what can be a monthlong process of requesting permission from the Foreign Intelligence Surveillance Court to a “one-day process” – something Wyden said was untrue in emergencies, given the Freedom Act provision.

The Justice Department's Office of Legal Counsel concluded in 2008 that current law does not authorize obtaining ECTR records with an NSL, though the FBI had been requesting them anyway. Burr said most companies nonetheless provided the records to the FBI until 2010, “when a general counsel in one company decided to buck the system.”

Despite the Justice Department opinion, the FBI was still demanding ECTR records as late as 2013, as indicated in an NSL published by Yahoo! this month. The letter came to light after Freedom Act reforms required the FBI to review open-ended gag orders and determine whether they are still needed.

Nicholas Merrill, the owner of the now-defunct Calyx Internet Access, says the expansion should be resisted. In November he became one of the first people given court permission to speak freely about receiving an NSL for customer information – a demand he had fought since 2004.

“The fact that the government is now attempting to legalize the demand for electronic communications transactional records that they demanded of me back in 2004 is a tacit admission that what they did with all of the roughly 500,000 NSLs issued since then was illegal and over-reaching,” Merrill says.

Merrill says the FBI has attempted to avoid court review whenever possible by dropping NSL requests, such as happened in his case.

“The reason the government getting access to electronic communications transactional records is bad is because they paint a vivid picture of First Amendment-protected speech and association online, without requiring any particularized suspicion of wrongdoing,” he says. “In other words, national security letters are used to go on fishing expeditions.”

Whether the measure can pass the House of Representatives is unclear. Since whistleblower Edward Snowden’s 2013 disclosures about mass surveillance, the House has been more deferential to privacy pushes – but following the Orlando shooting, momentum may have stalled, with House members rejecting an effort to ban “backdoor” NSA surveillance after passing the amendment in 2015 and 2014.

“It's a question of constitutional values, and of checks and balances on executive authority,” says Steven Aftergood, a government secrecy expert at the Federation of American Scientists.

“There are all kinds of intrusive law enforcement techniques, including warrantless search and seizure, that might be useful in reducing crime,” he says. “But the constitutional path is to require checks and balances on their use. If more authority is needed, fine – but only with a corresponding increase in external oversight and accountability. The NSL legislation does not provide for that.”

Spokesmen for McConnell and Cornyn did not immediately respond to requests for comment on the status of the NSL amendment effort.


The New Censorship
How did Google become the internet’s censor and master manipulator, blocking access to millions of websites?
By Robert Epstein | Contributor
June 22, 2016, at 9:00 a.m.
Google, Inc., isn't just the world's biggest purveyor of information; it is also the world's biggest censor.
The company maintains at least nine different blacklists that impact our lives, generally without input or authority from any outside advisory group, industry association or government agency. Google is not the only company suppressing content on the internet. Reddit has frequently been accused of banning postings on specific topics, and a recent report suggests that Facebook has been deleting conservative news stories from its newsfeed, a practice that might have a significant effect on public opinion – even on voting. Google, though, is currently the biggest bully on the block.
When Google's employees or algorithms decide to block our access to information about a news item, political candidate or business, opinions and votes can shift, reputations can be ruined and businesses can crash and burn. Because online censorship is entirely unregulated at the moment, victims have little or no recourse when they have been harmed.
Eventually, authorities will almost certainly have to step in, just as they did when credit bureaus were regulated in 1970. The alternative would be to allow a large corporation to wield an especially destructive kind of power that should be exercised with great restraint and should belong only to the public: the power to shame or exclude.
If Google were just another mom-and-pop shop with a sign saying "we reserve the right to refuse service to anyone," that would be one thing. But as the golden gateway to all knowledge, Google has rapidly become an essential in people's lives – nearly as essential as air or water. We don't let public utilities make arbitrary and secretive decisions about denying people services; we shouldn't let Google do so either.
Let's start with the most trivial blacklist and work our way up. I'll save the biggest and baddest – one the public knows virtually nothing about but that gives Google an almost obscene amount of power over our economic well-being – until last.
1. The autocomplete blacklist. This is a list of words and phrases that are excluded from the autocomplete feature in Google's search bar. The search bar instantly suggests multiple search options when you type words such as "democracy" or "watermelon," but it freezes when you type profanities, and, at times, it has frozen when people typed words like "torrent," "bisexual" and "penis." At this writing, it's freezing when I type "clitoris." The autocomplete blacklist can also be used to protect or discredit political candidates. As recently reported, at the moment autocomplete shows you "Ted" (for former GOP presidential candidate Ted Cruz) when you type "lying," but it will not show you "Hillary" when you type "crooked" – not even, on my computer, anyway, when you type "crooked hill." (The nicknames for Clinton and Cruz coined by Donald Trump, of course.) If you add the "a," so you've got "crooked hilla," you get the very odd suggestion "crooked Hillary Bernie." When you type "crooked" on Bing, "crooked Hillary" pops up instantly. Google's list of forbidden terms varies by region and individual, so "clitoris" might work for you. (Can you resist checking?)
2. The Google Maps blacklist. This list is a little more creepy, and if you are concerned about your privacy, it might be a good list to be on. The cameras of Google Earth and Google Maps have photographed your home for all to see. If you don't like that, "just move," Google's former CEO Eric Schmidt said. Google also maintains a list of properties it either blacks out or blurs out in its images. Some are probably military installations, some the residences of wealthy people, and some – well, who knows? Martian pre-invasion enclaves? Google doesn't say.
3. The YouTube blacklist. YouTube, which is owned by Google, allows users to flag inappropriate videos, at which point Google censors weigh in and sometimes remove them, but not, according to a recent report by Gizmodo, with any great consistency – except perhaps when it comes to politics. Consistent with the company's strong and open support for liberal political candidates, Google employees seem far more apt to ban politically conservative videos than liberal ones. In December 2015, singer Susan Bartholomew sued YouTube for removing her openly pro-life music video, but I can find no instances of pro-choice music being removed. YouTube also sometimes acquiesces to the censorship demands of foreign governments. Most recently, in return for overturning a three-year ban on YouTube in Pakistan, it agreed to allow Pakistan's government to determine which videos it can and cannot post.
4. The Google account blacklist. A couple of years ago, Google consolidated a number of its products – Gmail, Google Docs, Google+, YouTube, Google Wallet and others – so you can access all of them through your one Google account. If you somehow violate Google's vague and intimidating terms of service agreement, you will join the ever-growing list of people who are shut out of their accounts, which means you'll lose access to all of these interconnected products. Because virtually no one has ever read this lengthy, legalistic agreement, however, people are shocked when they're shut out, in part because Google reserves the right to "stop providing Services to you … at any time." And because Google, one of the largest and richest companies in the world, has no customer service department, getting reinstated can be difficult. (Given, however, that all of these services gather personal information about you to sell to advertisers, losing one's Google account has been judged by some to be a blessing in disguise.)
5. The Google News blacklist. If a librarian were caught trashing all the liberal newspapers before people could read them, he or she might get in a heap o' trouble. What happens when most of the librarians in the world have been replaced by a single company? Google is now the largest news aggregator in the world, tracking tens of thousands of news sources in more than thirty languages and recently adding thousands of small, local news sources to its inventory. It also selectively bans news sources as it pleases. In 2006, Google was accused of excluding conservative news sources that generated stories critical of Islam, and the company has also been accused of banning individual columnists and competing companies from its news feed. In December 2014, facing a new law in Spain that would have charged Google for scraping content from Spanish news sources (which, after all, have to pay to prepare their news), Google suddenly withdrew its news service from Spain, which led to an immediate drop in traffic to Spanish news stories. That drop in traffic is the problem: When a large aggregator bans you from its service, fewer people find your news stories, which means opinions will shift away from those you support. Selective blacklisting of news sources is a powerful way of promoting a political, religious or moral agenda, with no one the wiser.
6. The Google AdWords blacklist. Now things get creepier. More than 70 percent of Google's $80 billion in annual revenue comes from its AdWords advertising service, which it implemented in 2000 by infringing on a similar system already patented by Overture Services. The way it works is simple: Businesses worldwide bid on the right to use certain keywords in short text ads that link to their websites (those text ads are the AdWords); when people click on the links, those businesses pay Google. These ads appear on Google.com and other Google websites and are also interwoven into the content of more than a million non-Google websites – Google's "Display Network." The problem here is that if a Google executive decides your business or industry doesn't meet its moral standards, it bans you from AdWords; these days, with Google's reach so large, that can quickly put you out of business. In 2011, Google blacklisted an Irish political group that defended sex workers but did not provide them; after a protest, the company eventually backed down.
In May 2016, Google blacklisted an entire industry – companies providing high-interest "payday" loans. As always, the company billed this dramatic move as an exercise in social responsibility, failing to note that it is a major investor in LendUp.com, which is in the same industry; if Google fails to blacklist LendUp (it's too early to tell), the industry ban might turn out to have been more of an anticompetitive move than one of conscience. That kind of hypocrisy has turned up before in AdWords activities. Whereas Google takes a moral stand, for example, in banning ads from companies promising quick weight loss, in 2011, Google forfeited a whopping $500 million to the U.S. Justice Department for having knowingly allowed Canadian drug companies to sell drugs illegally in the U.S. for years through the AdWords system, and several state attorneys general believe that Google has continued to engage in similar practices since 2011; investigations are ongoing.
7. The Google AdSense blacklist. If your website has been approved by AdWords, you are eligible to sign up for Google AdSense, a system in which Google places ads for various products and services on your website. When people click on those ads, Google pays you. If you are good at driving traffic to your website, you can make millions of dollars a year running AdSense ads – all without having any products or services of your own. Meanwhile, Google makes a net profit by charging the companies behind the ads for bringing them customers; this accounts for about 18 percent of Google's income. Here, too, there is scandal: In April 2014, in two posts on PasteBin.com, someone claiming to be a former Google employee working in their AdSense department alleged the department engaged in a regular practice of dumping AdSense customers just before Google was scheduled to pay them. To this day, no one knows whether the person behind the posts was legit, but one thing is clear: Since that time, real lawsuits filed by real companies have, according to WebProNews, been "piling up" against Google, alleging the companies were unaccountably dumped at the last minute by AdSense just before large payments were due, in some cases payments as high as $500,000.
Google's dominance in search is why businesses large and small live in constant "fear of Google," as Mathias Dopfner, CEO of Axel Springer, the largest publishing conglomerate in Europe, put it in an open letter to Eric Schmidt in 2014. According to Dopfner, when Google made one of its frequent adjustments to its search algorithm, one of his company's subsidiaries dropped dramatically in the search rankings and lost 70 percent of its traffic within a few days. Even worse than the vagaries of the adjustments, however, are the dire consequences that follow when Google employees somehow conclude you have violated their "guidelines": You either get banished to the rarely visited Netherlands of search pages beyond the first page (90 percent of all clicks go to links on that first page) or completely removed from the index. In 2011, Google took a "manual action" of a "corrective" nature against retailer J.C. Penney – punishment for Penney's alleged use of a legal SEO technique called "link building" that many companies employ to try to boost their rankings in Google's search results. Penney was demoted 60 positions or more in the rankings.
Search ranking manipulations of this sort don't just ruin businesses; they also affect people's opinions, attitudes, beliefs and behavior, as my research on the Search Engine Manipulation Effect has demonstrated. Fortunately, definitive information about Google's punishment programs is likely to turn up over the next year or two thanks to legal challenges the company is facing. In 2014, a Florida company called e-Ventures Worldwide filed a lawsuit against Google for "completely removing almost every website" associated with the company from its search rankings. When the company's lawyers tried to get internal documents relevant to Google's actions through typical litigation discovery procedures, Google refused to comply. In July 2015, a judge ruled that Google had to honor e-Ventures' discovery requests, and that case is now moving forward. More significantly, in April 2016, the Fifth Circuit Court of Appeals ruled that the attorney general of Mississippi – supported in his efforts by the attorneys general of 40 other states – has the right to proceed with broad discovery requests in his own investigations into Google's secretive and often arbitrary practices.
Substitute "ogle" for "rt," and you get "Google," which is every bit as powerful as Gort but with a much better public relations department – so good, in fact, that you are probably unaware that on Jan. 31, 2009, Google blocked access to virtually the entire internet. And, as if not to be outdone by a 1951 science fiction movie, it did so for 40 minutes. Impossible, you say. Why would do-no-evil Google do such an apocalyptic thing, and, for that matter, how, technically, could a single company block access to more than 100 million websites?

The answer has to do with the dark and murky world of website blacklists – ever-changing lists of websites that contain malicious software that might infect or damage people's computers. There are many such lists – even tools, such as blacklistalert.org, that scan multiple blacklists to see if your IP address is on any of them. Some lists are kind of mickey-mouse – repositories where people submit the names or IP addresses of suspect sites. Others, usually maintained by security companies that help protect other companies, are more high-tech, relying on "crawlers" – computer programs that continuously comb the internet.
When Google's search engine shows you a search result for a site it has quarantined, you see warnings such as, "The site ahead contains malware" or "This site may harm your computer" on the search result. That's useful information if that website actually contains malware, either because the website was set up by bad guys or because a legitimate site was infected with malware by hackers. But Google's crawlers often make mistakes, blacklisting websites that have merely been "hijacked" – meaning the website itself isn't dangerous; rather, accessing it through the search engine forwards you to a malicious site. My own website, http://drrobertepstein.com, was hijacked in this way in early 2012. Accessing the website directly wasn't dangerous, but trying to access it through the Google search engine forwarded users to a malicious website in Nigeria. When this happens, Google not only warns you about the infected website on its search engine (which makes sense), it also blocks you from accessing the website directly through multiple browsers – even non-Google browsers. (Hmm. Now that's odd. I'll get back to that point shortly.)
The mistakes are just one problem. The bigger problem is that even though it takes only a fraction of a second for a crawler to list you, after your site has been cleaned up Google's crawlers sometimes take days or even weeks to delist you – long enough to threaten the existence of some businesses. This is quite bizarre considering how rapidly automated online systems operate these days. Within seconds after you pay for a plane ticket online, your seat is booked, your credit card is charged, your receipt is displayed and a confirmation email shows up in your inbox – a complex series of events involving multiple computers controlled by at least three or four separate companies. But when you inform Google's automated blacklist system that your website is now clean, you are simply advised to check back occasionally to see if any action has been taken. To get delisted after your website has been repaired, you either have to struggle with the company's online Webmaster tools, which are far from friendly, or you have to hire a security expert to do so – typically for a fee ranging between $1,000 and $10,000. No expert, however, can speed up the mysterious delisting process; the best he or she can do is set it in motion.

So far, all I've told you is that Google's crawlers scan the internet, sometimes find what appear to be suspect websites and put those websites on a quarantine list. That information is then conveyed to users through the search engine. So far so good, except of course for the mistakes and the delisting problem; one might even say that Google is performing a public service, which is how some people who are familiar with the quarantine list defend it. But I also mentioned that Google somehow blocks people from accessing websites directly through multiple browsers. How on earth could it do that?
How could Google block you when you are trying to access a website using Safari, an Apple product, or Firefox, a browser maintained by Mozilla, the self-proclaimed "nonprofit defender of the free and open internet"?
Have you figured it out yet? The scam is as simple as it is brilliant: When a browser queries Google's quarantine list, it has just shared information with Google. With Chrome and Android, you are always giving up information to Google, but you are also doing so even if you are using non-Google browsers. That is where the money is – more information about search activity kindly provided by competing browser companies. How much information is shared will depend on the particular deal the browser company has with Google. In a maximum information deal, Google will learn the identity of the user; in a minimum information deal, Google will still learn which websites people want to visit – valuable data when one is in the business of ranking websites. Google can also charge fees for access to its quarantine list, of course, but that's not where the real gold is.
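To make the mechanism concrete, here is a minimal sketch – not Google's actual protocol or code – of how a Safe-Browsing-style lookup can work. In the widely documented hash-prefix scheme, the browser holds a locally cached set of short hash prefixes of quarantined URLs; only when a visited URL's hash matches a cached prefix does the browser contact the server for full hashes, and that contact is the moment information about the user's browsing leaves the browser. All URLs and the quarantine list below are hypothetical, for illustration only.

```python
import hashlib

PREFIX_LEN = 4  # bytes; short prefixes keep the local cache small

def url_hash(url: str) -> bytes:
    """Full SHA-256 digest of a (pre-canonicalized) URL."""
    return hashlib.sha256(url.encode("utf-8")).digest()

def build_local_prefixes(quarantined_urls):
    """Simulate the prefix set a browser would cache locally."""
    return {url_hash(u)[:PREFIX_LEN] for u in quarantined_urls}

def needs_server_lookup(url: str, local_prefixes) -> bool:
    """True if the prefix matches, i.e. the browser must now ask the
    list provider for full hashes - revealing interest in this URL."""
    return url_hash(url)[:PREFIX_LEN] in local_prefixes

# Hypothetical quarantine list, for demonstration only.
quarantine = ["http://malicious.example/payload"]
prefixes = build_local_prefixes(quarantine)

print(needs_server_lookup("http://malicious.example/payload", prefixes))  # True
print(needs_server_lookup("https://example.org/", prefixes))
```

The design choice worth noticing is the one the author alludes to: the prefix scheme means most page loads never touch the provider's servers, but every prefix match – and any fuller-information arrangement a browser vendor agrees to – sends a signal back to whoever maintains the list.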
Google's mysterious and self-serving practice of blacklisting is one of many reasons Google should be regulated, just as phone companies and credit bureaus are. The E.U.'s recent antitrust actions against Google, the recently leaked FTC staff report about Google's biased search rankings, President Obama's call for regulating internet service providers – all have merit, but they overlook another danger. No one company, which is accountable to its shareholders but not to the general public, should have the power to instantly put another company out of business or block access to any website in the world. How frequently Google acts irresponsibly is beside the point; it has the ability to do so, which means that in a matter of seconds any of Google's 37,000 employees with the right passwords or skills could laser a business or political candidate into oblivion or even freeze much of the world's economy.


Europe's robots to become 'electronic persons' under draft plan

By Georgina Prodhan June 21, 2016

MUNICH, Germany (Reuters) - Europe's growing army of robot workers could be classed as "electronic persons" and their owners made liable to pay social security for them if the European Union adopts a draft plan to address the realities of a new industrial revolution.

Robots are being deployed in ever-greater numbers in factories and also taking on tasks such as personal care or surgery, raising fears over unemployment, wealth inequality and alienation.

Their growing intelligence, pervasiveness and autonomy require rethinking everything from taxation to legal liability, a draft European Parliament motion, dated May 31, suggests.

Some robots are even taking on a human form. Visitors to the world's biggest travel show in March were greeted by a lifelike robot developed by Japan's Toshiba and were helped by another made by France's Aldebaran Robotics.

However, Germany's VDMA, which represents companies such as automation giant Siemens and robot maker Kuka, says the proposals are too complicated and too early.

German robotics and automation turnover rose 7 percent to 12.2 billion euros ($13.8 billion) last year and the country is keen to keep its edge in the latest industrial technology. Kuka is the target of a takeover bid by China's Midea.

The draft motion called on the European Commission to consider "that at least the most sophisticated autonomous robots could be established as having the status of electronic persons with specific rights and obligations".

It also suggested the creation of a register for smart autonomous robots, which would link each one to funds established to cover its legal liabilities.

Patrick Schwarzkopf, managing director of the VDMA's robotic and automation department, said: "That we would create a legal framework with electronic persons - that's something that could happen in 50 years but not in 10 years."

"We think it would be very bureaucratic and would stunt the development of robotics," he told reporters at the Automatica robotics trade fair in Munich, while acknowledging that a legal framework for self-driving cars would be needed soon.

The report added that robotics and artificial intelligence may result in a large part of the work now done by humans being taken over by robots, raising concerns about the future of employment and the viability of social security systems.

The draft motion, drawn up by the European Parliament's committee on legal affairs, also said organizations should have to declare savings they made in social security contributions by using robotics instead of people, for tax purposes.

Schwarzkopf said there was no proven correlation between increasing robot density and unemployment, pointing out that the number of employees in the German automotive industry rose by 13 percent between 2010 and 2015, while industrial robot stock in the industry rose 17 percent in the same period.

The motion faces an uphill battle to win backing from the various political blocs in the European Parliament. Even if it did get enough support to pass, it would be a non-binding resolution, as the Parliament lacks the authority to propose legislation.

(Additional reporting by Alissa de Carbonnel in Brussels; Editing by Alexander Smith)