Saturday, February 28, 2015

China censorship sweep deletes more than 60,000 Internet accounts

Reuters – Fri, 27 Feb, 2015

BEIJING (Reuters) - Some of China's largest Internet companies deleted more than 60,000 online accounts because their names did not conform to regulations due to take effect on Sunday, the top Internet regulator said.

Alibaba Group Holding Ltd, Tencent Holdings Ltd, Baidu Inc, Sina Corp affiliate Weibo Corp and other companies deleted the accounts in a cull aimed at "rectifying" online names, the Cyberspace Administration of China (CAC) said.

The reasons for removal included accounts that were misleading, spread rumors, had links to terrorism, or involved violence, pornography and other violations, the CAC said in a statement on its website late on Thursday.

The purge is a notable step toward China's government locking down control over people's Internet account names, something censors have struggled to do in the past despite numerous attempts to introduce controls.

These failed attempts have included trying to force users to register for online services using their real names.

The new regulations, which take effect on March 1 and will also target real-name registration, were issued by the CAC, which was formed last year and given power over all online content, something previously divided between various state ministries.

"Previously, the real-name registration system hasn't really been enforced," said Rogier Creemers, a researcher on Chinese media law at the University of Oxford. "These rules essentially impose a uniform and consolidated system for all online services requiring accounts."

The measure also reflects China's tightening control of the Internet, which has accelerated since President Xi Jinping took power in early 2013.

Weibo, the country's biggest microblog platform, will comply with the regulations and has a dedicated team to handle illegal information, including account names, a spokesman told Reuters.

E-commerce giant Alibaba declined to comment beyond highlighting a section of the CAC's statement on Alibaba's efforts to set up a team to handle account name issues. Tencent, China's biggest social networking and gaming company, and search leader Baidu were not available for immediate comment.

Among the accounts removed were those purporting to belong to state agencies, state media organizations and the East Turkestan Islamic Movement, said the CAC. China has blamed ETIM for violent attacks, but experts and rights groups have cast doubt on its existence as a cohesive group.

China operates one of the world's most sophisticated online censorship mechanisms, known as the Great Firewall. Censors keep a grip on what can be published online, particularly content seen as potentially undermining the ruling Communist Party.

(Reporting by Paul Carsten; Editing by Robert Birsel)


Chicago Business Files Lawsuit Over Negative Yelp Review


February 27, 2015 9:05 PM

(STMW) – A North Park neighborhood business is suing a husband and wife for writing a negative review on Yelp, accusing the couple of libel.

Zwick Window Shade Co., 3336 W. Foster, takes issue with a Feb. 6, 2014 review where a customer claims she and her husband never received a set of blinds ordered from Zwick, according to the lawsuit filed Friday in Cook County Circuit Court.

“Still waiting on blinds ordered nearly 4 months ago that were supposed to take 3 to 6 weeks,” the reviewer wrote. “My husband and I like to support small, family-run businesses when doing home improvement projects.”

The two-star review states Zwick’s service was excellent, but the delay in receiving the product was frustrating.

More than four months after the Oct. 16, 2013 purchase, Zwick delivered and installed the blinds, the suit said.

But the company claims in the suit that, because the blinds were eventually delivered, the Yelp review is making “misrepresentations about the alleged non-delivery of the blinds” and is essentially false.

“In the event that your [Yelp] statement is not removed within five days from the date of this letter, my client intends to pursue all remedies available to him at law and equity,” according to a letter sent by Zwick’s attorney a year after the blinds were installed. “In addition, you must not publicly publish this event in the future.”

Despite Zwick’s efforts, the review remains on Yelp.

The two-count lawsuit claims libel and seeks at least $50,000 in damages and that the review be taken down.

Calls made to Zwick and its attorney were not immediately returned Friday evening.



Thursday, February 26, 2015

What it means: The FCC's net neutrality vote

In addition to expected legal challenges, experts say a profusion of private networks will emerge

By Matt Hamblen
Computerworld | Feb 26, 2015 1:22 PM PT

Net neutrality has been debated for a decade, but the Federal Communications Commission's historic vote on Thursday signals only the beginning of further battles and likely lawsuits.

At issue is how best to keep the Internet open and neutral to all while still giving Internet service providers sufficient incentive to expand their networks to serve more customers and to support an exploding array of data-hungry applications as futuristic as holographic videoconferencing used for home-based medical exams.

The FCC voted 3-to-2 to adopt a series of sweeping changes, including three open Internet conduct rules that bar broadband providers, both wired and wireless, from blocking Internet traffic, from throttling it, and from taking payments to prioritize content and services over their networks.

The vote followed party lines, with Democrats Tom Wheeler, the chairman, and commissioners Mignon Clyburn and Jessica Rosenworcel voting in favor. Republican commissioners Ajit Pai and Michael O'Rielly dissented.

The main legal instrument used by the FCC to put these rules in place comes through Title II of the Communications Act of 1934. Many cable, phone and wireless providers, including prominent voices at AT&T and Verizon, objected to the use of Title II, saying the rules would subject them to arduous and costly reviews -- the same as other utilities like phone service -- that will detract from their investments in growth and expansion.

Opponents also predict that future FCC commissioners will try to impose tariffs and fee-setting regulations on Internet providers, although the newly adopted rules don't explicitly say so. An FCC summary states that broadband providers "shall not be subject to utility-style rate regulation." (The full ruling of the FCC is not expected for weeks.)

Title II supporters included FCC Chairman Tom Wheeler, President Obama and many of the 4 million people, public interest groups and companies who submitted comments to the FCC on the issue. These supporters maintain that reclassifying broadband providers under Title II puts broadband, appropriately, in the category of other utilities, akin to a 21st century version of electricity or telephone service. The FCC outlined these so-called "Bright Line Rules" and other details in an online fact sheet and a press release.

Where it started

The primary motivation for Wheeler to propose these Open Internet rules was a U.S. Court of Appeals decision on Jan. 14, 2014, in the now-famous Verizon v. FCC case, which vacated existing FCC rules that prevented Internet blocking and unreasonable discrimination. Those earlier FCC rules had stemmed from two oft-cited FCC decisions: In 2005, the FCC found Madison River Communications had blocked Vonage Voice over IP services to some customers; in 2008, the FCC said Comcast was arbitrarily throttling BitTorrent traffic in violation of FCC principles. The FCC has outlined some of this history on its website.

Other provisions of new FCC rules

Also in its vote, the FCC decided to use Section 706 of the Telecommunications Act of 1996 to supplement Title II in its adoption of Open Internet rules. Section 706 was specifically cited by the Court of Appeals in the 2014 Verizon case as giving the commission an independent grant of authority to support such rules.

By using both Section 706 and Title II to invoke new rules, FCC senior officials have said they are employing a "tailored" approach to Open Internet enforcement that will withstand the inevitable lawsuits threatened by multiple ISPs. Title II allows the FCC to use a broad "just and reasonable" standard in its regulation of Internet providers.

One area that is sure to stir up controversy and lawsuits is how the FCC uses its Title II "just and reasonable" standard to act on complaints by so-called "edge" companies, such as Netflix, that connect their services to Internet providers like AT&T, Comcast and other broadband providers. For example, an edge provider could complain to the FCC that its Internet capacity was unreasonably limited by an Internet provider, opening up an FCC inquiry and possible ruling.

Title II also allows competitors to an Internet provider in a community to access the same utility poles and underground conduits, in hopes of boosting the deployment of new broadband networks.

What some supporters and opponents say

Supporting the new FCC order are a range of public interest groups that point to the Internet as the primary medium of free speech today.

In congratulating the grassroots movement that spurred the FCC's action, U.S. Sen. Ed Markey, D-Mass., called the effort a 21st-century battle where supporters acted as modern-day Paul Reveres. "You have sounded the alarm and called us to arms ... to advocate for this new set of rules," Markey said in a conference call with reporters. "This revolution was not only televised but it was tweeted ... around the world."

The new rules will help protect the economy and are as important as keeping our air, water and roads safe, Markey added. "Reclassifying under Title II is a major victory for consumers," he said.

During the FCC hearing, Tim Berners-Lee, inventor of the World Wide Web, gave his support in a pre-recorded video statement to the Title II reclassification as the means to keep "permissionless innovation" alive on the Web.

Etsy CEO Chad Dickerson also testified about the value of keeping an unrestricted Internet to support businesses like Etsy, a peer-to-peer e-commerce Web site that supports sales by online artists and designers. In an emotional highlight, Dickerson read a letter from a woman identified only as Nancy from California who had been injured in a traffic accident and was relying on sales of her goods from her chair at home via Etsy. "My dream is alive and viable because of the free Internet," she said.

Producer and writer Veena Sud described at the hearing how her video series The Killing had been canceled on AMC television but gained renewed life with online streaming over Netflix. Such online series are able to promote more programming competition and bring more women into executive video production roles, she said.

Malkia Cyril, director of the Center for Media Justice, said the FCC order helps keep the Internet open to protect civil rights and promote fair policing in cities such as Ferguson, Mo., as well as fair wages in workplaces. "The Internet is where movements like Black Lives Matter are born and where dissent is protected and where underserved communities can plead our cause," she said in a conference call.

Like many other supporters, Cyril argued that enforcement of the FCC's rules will matter as much as the creation of the new rules.

Opponents to the FCC's new rules, meanwhile, have thrown out a wide range of objections.

Most critics call the FCC rules an over-regulation of a thriving Internet industry. A group of congressional Republicans had urged the FCC to delay its vote, while U.S. Rep. Marsha Blackburn, R-Tenn., described reliance on Title II as an outdated, 1930s-style utility regulation.

Picking up on that theme, Verizon posted a blog entry shortly after the FCC vote that was composed in old-fashioned Morse code and titled, "FCC's 'Throwback Thursday' Move Imposes 1930s Rules on the Internet." (The post includes a translation of the press release with a 1934 dateline and the typeface of a manual typewriter.)

Blackburn also argued that in the future, the FCC could impose rate regulation on Internet providers.

Pai and O'Rielly, the Republican FCC commissioners, both hammered on the likelihood of future FCC rate regulation on Internet service providers. Pai predicted $11 billion each year in new fees would be imposed on consumers. O'Rielly argued that the "just and reasonable" review clause can be interpreted to grant the FCC the ability to regulate rates set by Internet providers.

To rebut their point, former FCC Commissioner Michael Copps, now with Common Cause, said that "regulation of rates is not what's being contemplated" by the FCC. He pointed out that the FCC has the potential to spur expansion of high speed, affordable broadband without controlling rates set by Internet providers. Measures the FCC can use to spur expansion include supporting municipal broadband by pre-empting state laws (as the FCC did in a separate 3-2 vote affecting communities in Tennessee and North Carolina) and slowing down industry mergers that can reduce the number of broadband competitors.

AT&T Chairman Randall Stephenson has emerged as one of the most vocal opponents of the new FCC rules. He has appeared many times in recent months on television programs to point out that litigation against the FCC's rules could last up to three years before real clarity emerges on how the regulation will work and what Internet providers will be allowed to do. Discussions are underway as to whether individual companies like AT&T and others would file separate suits or would join together in a massive lawsuit, he said recently.

A large number of Internet providers, including some small municipal broadband authorities, have argued that they already adhere to open Internet rules, making the new FCC rules unnecessary.

"There's not a shred of evidence that this [vote] is necessary," O'Reilly said. Meanwhile, Pai said that both the BitTorrent and Vonage VoIP infringement cases happened years ago and did not constitute much harm, especially in light of the proliferation of Internet services in recent years.

That's the same position that Verizon and other service providers have taken.

"What has been and will remain constant before, during and after the existence of any regulations is Verizon's commitment to an open Internet that provides consumers with competitive broadband choices and Internet access when, where and how they want," said Michael Glover, Verizon senior vice president for public policy.

Questions over FCC's interconnection oversight

One area of the new rules that is ripe for attack will be how the FCC deals with heavy traffic on public networks. The FCC will now prohibit paid prioritization of traffic, as in a case where an Internet provider allows, for a fee, an edge provider or other company to have a fast lane for its data-heavy video service. While fast lanes are out, the FCC will still allow an Internet provider to conduct "reasonable network management" that recognizes the need for broadband providers to manage the technical and engineering aspects of their networks.

(Of note: All of the FCC's Title II oversight applies to public Internet services and not data services that use private pipes, such as VoIP from a cable service or a dedicated heart-monitoring service. However, the FCC will still keep tabs on these kinds of services through new transparency rules on Internet providers to make sure such services don't undermine Open Internet rules.)

Some Internet providers and other businesses have said that prohibiting paid prioritization while still allowing reasonable network management will create a murky area for the FCC in an era with new technology such as Software Defined Networks (SDNs). For example, what if an Internet provider creates an SDN over a fiber cable normally used for the public Internet and then charges an edge provider a fee for using that fast lane SDN?

As a result of the FCC rules, some analysts predict that Internet providers will be forced to create a profusion of private fast-lane networks of all varieties for customers willing to pay a premium to push out fat content, especially byte-rich video, such as the real-time holographic video now on the technology horizon.

Private networks and reserved private pipes are already a reality, of course, but there are many quasi-public-private networks where a conflict is expected to arise.

For example, Gartner analyst Akshay Sharma posed the question of whether a doctor in surgery waiting for a critical MRI image to be sent over a public network would have the right to network prioritization over other users on the same network accessing games on BitTorrent. Likewise, in January, the FCC chairman was asked in a public forum at the International CES trade show if pornography on the Internet should be treated equally with medical records. Wheeler didn't answer directly, but repeatedly said the "just and reasonable" standard would apply.

There's not likely to be much public discussion of any of these what-if scenarios, and only a lawsuit resulting from a particular dispute between an edge content provider and an Internet provider is likely to have much bearing.

The FCC has already allowed choke points on telephone networks for network management, Sharma noted. For example, when a radio station offers a prize and callers flood the phone lines, there is network management technology in place that still allows 911 calls to go through.
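
Sharma's example is, at bottom, a priority rule: ordinary calls are admitted only while capacity remains, while emergency calls always get through. A minimal sketch of that idea in Python, with invented capacities and call types used purely for illustration, might look like this:

```python
CAPACITY = 3  # trunk lines still free during the call flood (made-up number)

def admit(call_type, connected):
    """Ordinary calls connect only while capacity remains; an emergency
    call always connects, preempting an ordinary call if it has to."""
    if call_type == "911":
        if len(connected) >= CAPACITY:
            connected.pop()          # drop one ordinary call to make room
        connected.append(call_type)
        return True
    if len(connected) < CAPACITY:
        connected.append(call_type)
        return True
    return False                     # busy signal

connected = []
for caller in ["contest"] * 10 + ["911"]:   # radio-contest flood, then a 911 call
    admit(caller, connected)
print(connected)  # ['contest', 'contest', '911'] -- the 911 call gets through
```
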
If the FCC does indirectly force creation of more paid, private networks for heavy traffic users, the emergence of SDN and other technologies will create gray areas, at least in a legal sense, if not a technological one.

"The problem you will have is trying to define a public network from what is a private one," said Derek Peterson, chief technology officer for Boingo Wireless, which provides Wi-Fi access to more than 1 million hotspots globally.

Peterson said it is reasonable for the FCC to prohibit paid network prioritization because an Internet provider could hurt one business while helping another on a network link. "An ISP (Internet Service Provider) could sit there and say, 'I don't like that retailer,' which could be bad for it," Peterson said in an interview.

"It's going to be interesting to see how crazy the FCC gets and how technology providers work around rules to deliver the services they need to deliver," he added. "It will be interesting to see how the FCC balances all that and if they are successful at all."



Human head transplant 'only two years away'...

Frankenstein-style human head transplant 'could happen in two years'

Italian surgeon claims procedure to graft a living person’s head on to a donor body will soon be ready

By Agency

9:37AM GMT 26 Feb 2015

The first human head transplant could take place in just two years, it was reported on Thursday.

Italian surgeon Sergio Canavero, from the Turin Advanced Neuromodulation Group, claims the Frankenstein-style procedure to graft a living person’s head on to a donor body will soon be ready.

The breakthrough surgery is being pioneered to help extend the lives of people who have suffered degeneration of the muscles and nerves or those who have advanced cancer.

The New Scientist reported Dr Canavero plans to announce the project at the American Academy of Neurological and Orthopaedic Surgeons conference in Annapolis, Maryland, in June.

Dr Canavero published a paper this month in the journal Surgical Neurology International describing the technique he would use.

The recipient's head and the donor body would be cooled at the start of the procedure to extend the time that cells can survive without oxygen.

Tissue around the neck would be dissected and major blood vessels would be joined using tiny tubes.

The spinal cords would then be cut and the recipient's head moved on to the donor body. The ends of the spinal cord would be fused together using a chemical called polyethylene glycol, which encourages fat within cell membranes to mesh.

After this, the person would be put into a coma for around four weeks to prevent them moving while they heal.

Dr Canavero said he would expect the patient to be able to move and feel their face when they awoke, to speak with the same voice and to walk within a year.

Dr Canavero first proposed the surgery in 2013, but at the time other experts dismissed the idea.

He told the New Scientist: "If society doesn't want it, I won't do it. But if people don't want it, in the US or Europe, that doesn't mean it won't be done somewhere else.

"I'm trying to go about this the right way, but before going to the moon, you want to make sure people will follow you."

The first successful head transplant - involving moving the head of one monkey on to another - was carried out in 1970 at Case Western Reserve University in Cleveland, US.

The monkey lived for nine days, but its immune system rejected the head.


Google's Quiet Dominance Over The 'Ad Tech' Industry

Opinion 2/26/2015 @ 6:00AM


Guest post written by Allen Grunes

Mr. Grunes, a former attorney with the Department of Justice’s Antitrust Division, is cofounder of the Data Competition Institute.

A few months ago, display advertising on the Internet mysteriously vanished for more than an hour. On more than 55,000 websites such as BuzzFeed and Forbes, spaces that usually display advertisements went blank. It turned out that Google’s behemoth online advertising platform, DoubleClick, was to blame. The DoubleClick ad server had crashed, disrupting the entire infrastructure by which advertisers buy billions of dollars of ads across millions of websites.

Think about it: In an era of global competition, one company’s network crash broke the Internet.

The crash was a stark reminder of how an established player like Google has quietly achieved dominance over the so-called “ad tech” industry, the multi-billion dollar economic backbone that supports the vast majority of “free” content and services online.

The exponential growth of the Internet over the past decade is the direct result of advertising that enables Internet publishers to monetize their content. Ad tech makes this possible by helping companies track, serve and measure the effectiveness of ads online. Without a healthy, competitive ad tech industry, much of the free online content we use every day would go behind paywalls or disappear altogether.

Google’s efforts to dominate ad tech and squeeze out competitors are bringing that potentially ominous future closer to reality.

Piece by piece, Google has assembled a dominant position at each critical juncture in the complex ad tech infrastructure. Google imposes restrictions on publishers and advertisers alike to force them into committing exclusively to Google’s end-to-end ad tech pipeline, making it more difficult for smaller companies to retain their footing or for new innovative companies to enter. Industry observers warned the Federal Trade Commission about this prospect several years ago, and the agency vowed to keep a close watch. But the FTC has failed to rein in Google’s aggressive behavior, at least so far.

Internet advertising has exploded in recent years, driven primarily by technologies that allow advertisers to target ads precisely and in real time to specific groups (e.g., 30-year-old men in Massachusetts driving Toyotas) regardless of what sites members of those groups are visiting.

It used to be that advertisers connected directly with online “content publishers,” a catch-all phrase for any website that has space to sell advertising. Those advertisers would decide which publishers’ space to buy and what audiences to reach, much as an advertiser calls a particular newspaper directly to place an ad. Now, advertisers bid to have their ads served to particular users or in response to specific keywords, with ad tech companies playing match-maker between publishers and advertisers. Winning advertisers’ ads are then served on content publishers’ websites, videos or apps, and advertisers review and optimize their campaigns based on data received from their ad tech tools. All of this takes place within a few milliseconds.
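
The sequence described above can be pictured as a toy auction. The Python below is a simplified, hypothetical model of real-time bidding; the names, prices and bidder logic are invented for illustration and do not correspond to any exchange’s actual API.

```python
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class BidRequest:
    """What an exchange sends to bidders when a page with ad space loads."""
    user_segment: str        # e.g. "30-year-old man, Massachusetts, drives a Toyota"
    site: str

@dataclass
class Bid:
    advertiser: str
    price_cpm: float         # offered price per thousand impressions
    creative_url: str

def run_auction(request: BidRequest, bidders) -> Optional[Bid]:
    """Collect bids from demand-side platforms and pick the highest.
    Real exchanges complete this loop in a few milliseconds."""
    bids = [b for b in (dsp(request) for dsp in bidders) if b is not None]
    return max(bids, key=lambda b: b.price_cpm) if bids else None

# Toy demand-side platforms: one targets a narrow segment, one bids on everything.
def toyota_dsp(request):
    if "Toyota" in request.user_segment:
        return Bid("toyota_dealer", 4.50, "https://example.com/ad1")
    return None

def generic_dsp(request):
    return Bid("generic_brand", 2.00, "https://example.com/ad2")

if __name__ == "__main__":
    start = time.perf_counter()
    winner = run_auction(
        BidRequest("30-year-old man, Massachusetts, drives a Toyota", "example-news.com"),
        [toyota_dsp, generic_dsp],
    )
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(winner.advertiser, f"won; auction took {elapsed_ms:.3f} ms")
```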

Broadly, there are six critical ad tech markets, each of which is essential to the functioning of this type of real-time advertising: Ad Networks, Ad Exchanges, Demand-Side Platforms (DSP), Supply-Side Platforms (SSP), Ad Servers and Analytics Platforms. The details of each are a subject for a longer, in-depth look at how the ad-tech market works, but suffice it to say that Google is now the largest and/or dominant player in each. Google is the world’s leading analytics platform and controls some of the leading ad networks: AdSense, Google Display Network and AdMob. Google’s DoubleClick is the largest ad exchange in the market. To round it out, Google’s DoubleClick Bid Manager is the largest DSP.

When the FTC closed its investigation of Google’s acquisition of DoubleClick in 2007, it vowed to monitor Google’s behavior in the affected ad tech markets, saying: “We want to be clear, however, that we will closely watch these markets and, should Google engage in unlawful tying or other anticompetitive conduct, the Commission intends to act quickly.” Since then, widespread concerns have been raised about Google’s engagement in a range of exclusionary conduct, including the very same tying that the FTC had said it would move quickly to address.

Google now locks in publishers and advertisers at both ends. It ties services that advertisers or publishers do not want to those that they need, pressuring them to use Google-only services all the way up and down the pipeline. For example, Google ties its dominant ad exchange to its publisher ad-serving tools, and it pressures advertisers to use its DSP to buy ads on its exchange.

Another example of Google’s brazen conduct is its new “enhanced dynamic allocation” feature, which gives Google’s own exchange a “first look” at the advertising opportunities spit out by its dominant DoubleClick ad server. Google tells publishers that dynamic allocation is a “risk-free way” to “maximize revenue” by comparing bids on DoubleClick with those on other exchanges. In truth, Google compares other exchanges’ average bids with the highest bids on its own exchange. Through this sleight of hand, Google steers publishers into its own exchange, maximizing its profits at publishers’ expense.
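
The alleged “average versus highest” comparison is easiest to see with numbers. The figures below are invented purely to show the arithmetic; they illustrate the article’s allegation, not Google’s actual implementation.

```python
# Hypothetical bids (CPM, dollars) for a single ad impression.
rival_exchange_bids = [3.00, 5.00, 7.00]   # a competing exchange's recent bids
own_exchange_bids   = [2.00, 4.00, 6.00]   # the ad server's own exchange

rival_benchmark = sum(rival_exchange_bids) / len(rival_exchange_bids)  # average: 5.00
own_benchmark   = max(own_exchange_bids)                               # highest: 6.00

# Comparing the rival's *average* against your own *best* bid steers the
# impression to your own exchange even though the rival's top bid (7.00)
# would have paid the publisher more.
winner = "own exchange" if own_benchmark >= rival_benchmark else "rival exchange"
print(winner)  # -> own exchange
```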

Google also blocks competitors by requiring exclusive agreements and preventing cross-platform interoperability. Publishers are barred from using third-party tools to manage their data or to maximize revenue by comparing bids on different exchanges. The result is that publishers commit their inventory exclusively to Google rather than incur the additional expense of “multi-homing.”

The FTC vowed to police Google’s behavior in the booming “ad tech” industry. It needs to live up to that commitment and start taking action.


Wednesday, February 25, 2015

Google computer mimics human brain; learns from new experience...

Is playing 'Space Invaders' a milestone in artificial intelligence?

Researchers with Google's DeepMind project created a computer loosely based on brain architecture that mastered computer games -- such as Space Invaders -- without any knowledge of their rules.

By Geoffrey Mohan 

Computers have beaten humans at chess and "Jeopardy!," and now they can master old Atari games such as "Space Invaders" or "Breakout" without knowing anything about their rules or strategies.

Playing Atari 2600 games from the 1980s may seem a bit "Back to the Future," but researchers with Google's DeepMind project say they have taken a small but crucial step toward a general learning machine that can mimic the way human brains learn from new experience.

Unlike the Watson and Deep Blue computers that beat "Jeopardy!" and chess champions with intensive programming specific to those games, the Deep-Q Network built its winning strategies from keystrokes up, through trial and error and constant reprocessing of feedback to find winning strategies.

“The ultimate goal is to build smart, general-purpose [learning] machines. We’re many decades off from doing that," said artificial intelligence researcher Demis Hassabis, coauthor of the study published online Wednesday in the journal Nature. "But I do think this is the first significant rung of the ladder that we’re on."

The Deep-Q Network computer, developed by the London-based Google DeepMind, played 49 old-school Atari games, scoring "at or better than human level" on 29 of them, according to the study.

The algorithm approach, based loosely on the architecture of human neural networks, could eventually be applied to any complex and multidimensional task requiring a series of decisions, according to the researchers.

The algorithms employed in this type of machine learning depart strongly from approaches that rely on a computer's ability to weigh stunning amounts of inputs and outcomes and choose programmed models to "explain" the data. Those approaches, known as supervised learning, required artful tailoring of algorithms around specific problems, such as a chess game.

The computer instead relies on random exploration of keystrokes bolstered by human-like reinforcement learning, where a reward essentially takes the place of such supervision.

“In supervised learning, there’s a teacher that says what the right answer was," said study coauthor David Silver. "In reinforcement learning, there is no teacher. No one says what the right action was, and the system needs to discover by trial and error what the correct action or sequence of actions was that led to the best possible desired outcome.”
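
Silver's distinction can be made concrete with a tiny tabular Q-learning loop. This is a generic reinforcement-learning sketch on a toy problem, not the paper's Deep Q-Network, which replaces the table with a convolutional network trained on screen pixels; the corridor task and constants below are invented for illustration.

```python
import random

# Toy corridor: states 0..4; reaching state 4 pays a reward of +1. Actions: 0 = left, 1 = right.
N_STATES, ACTIONS, GOAL = 5, (0, 1), 4
alpha, gamma, epsilon = 0.1, 0.9, 0.1        # learning rate, discount, exploration rate
Q = [[0.0, 0.0] for _ in range(N_STATES)]    # value estimates, learned from reward alone

def step(state, action):
    nxt = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

for episode in range(500):
    s, done = 0, False
    while not done:
        # No teacher labels the "right" action; the agent sometimes explores at random...
        a = random.choice(ACTIONS) if random.random() < epsilon else max(ACTIONS, key=lambda x: Q[s][x])
        s2, reward, done = step(s, a)
        # ...and the reward signal alone nudges the value estimate (the Q-learning update).
        Q[s][a] += alpha * (reward + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print(Q)  # after training, moving right scores higher than left in every non-terminal state
```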

The computer "learned" over the course of several weeks of training, in hundreds of trials, based only on the video pixels of the game -- the equivalent of a human looking at screens and manipulating a cursor without reading any instructions, according to the study.

Over the course of that training, the computer built up progressively more abstract representations of the data in ways similar to human neural networks, according to the study.
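
The "video pixels only" input can be pictured with the kind of preprocessing the paper describes: frames are shrunk to small grayscale images, and a short stack of recent frames is fed to the network so it can see motion. The sketch below is a rough, assumed version of that pipeline; the exact sizes and the crude downsampling are illustrative, not the paper's code.

```python
import numpy as np

def preprocess(frame_rgb: np.ndarray) -> np.ndarray:
    """Turn one 210x160 RGB Atari frame into a small grayscale image."""
    gray = frame_rgb.mean(axis=2)            # crude luminance
    small = gray[::2, ::2]                   # naive 2x downsample -> 105x80
    return (small / 255.0).astype(np.float32)

def stack_frames(recent: list) -> np.ndarray:
    """Stack the last four frames so the network can infer velocity and motion."""
    return np.stack(recent[-4:], axis=0)     # shape (4, 105, 80)

# Random arrays standing in for emulator output.
frames = [preprocess(np.random.randint(0, 256, (210, 160, 3), dtype=np.uint8)) for _ in range(4)]
state = stack_frames(frames)
print(state.shape)   # (4, 105, 80) -- the agent's whole view of the game, built from pixels alone
```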

There was nothing about the learning algorithms, however, that was specific to Atari, or to video games for that matter, the researchers said.

The computer eventually figured out such insider gaming strategies as carving a tunnel through the bricks in "Breakout" to reach the back of the wall. And it found a few tricks that were unknown to the programmers, such as keeping a submarine hovering just below the surface of the ocean in "Seaquest."

The computer's limits, however, became evident in the games at which it failed, sometimes spectacularly. It was miserable at "Montezuma's Revenge," and performed nearly as poorly at "Ms. Pac-Man." That's because those games also require more sophisticated exploration, planning and complex route-finding, said coauthor Volodymyr Mnih.

And though the computer may be able to match the video-gaming proficiency of a 1980s teenager, its overall "intelligence" hardly reaches that of a pre-verbal toddler. It cannot build conceptual or abstract knowledge, doesn't find novel solutions and can get stuck trying to exploit its accumulated knowledge rather than abandoning it and resorting to random exploration, as humans do.

“It’s mastering and understanding the construction of these games, but we wouldn’t say yet that it’s building conceptual knowledge, or abstract knowledge," said Hassabis.

The researchers chose the Atari 2600 platform in part because it offered an engineering sweet spot -- not too easy and not too hard. They plan to move into the 1990s, toward 3-D games involving complex environments, such as the "Grand Theft Auto" franchise. That milestone could come within five years, said Hassabis.

“With a few tweaks, it should be able to drive a real car,” Hassabis said.

DeepMind was formed in 2010 by Hassabis, Shane Legg and Mustafa Suleyman, and received funding from Tesla Motors' Elon Musk and Facebook investor Peter Thiel, among others. It was purchased by Google last year for a reported $650 million. Hassabis, a chess prodigy and game designer, met Legg, an algorithm specialist, while studying at the Gatsby Computational Neuroscience Unit at University College London. Suleyman, an entrepreneur who dropped out of Oxford University, is a partner in Reos, a conflict-resolution consulting group.



Mind-Controlled Drone Scientists Work On Groundbreaking Flight


2/25/2015 @ 7:47AM

A company has successfully flown a mind-controlled drone, a step that its scientists say will lead to passenger-carrying airplanes steered only by pilots’ brains.

In a rather stunning demonstration yesterday, Portuguese business Tekever fitted a special cap to a pilot to measure his brain activity, allowing him to steer a drone through a mission in the sky using his thoughts alone.

The company’s eventual target for the drone technology is applying it to pilots flying private and commercial aircraft using their minds alone, but it acknowledges there is a lot of work ahead.

For yesterday’s test demonstration, in order to steer the drone, pilot Nuno Lourenço focused entirely on simple thoughts within set formats, which he learned during extensive training. This means the drone received clear signals from his brain waves that it could process quickly.

“This is an amazing, high-risk and high-payoff project,” Tekever chief operating officer Ricardo Mendes said at the launch. The project needs extensive further technology development, he explained, but added that it “represents the beginning of a tremendous step change in the aviation field, empowering pilots and de-risking missions”.

Key benefits include allowing pilots to focus on the many advanced in-flight processes while controlling the aircraft itself more simply, which the company describes as akin to how professional sportspeople focus on the tactical aspects of play without worrying about maintaining basic skills.

The system works by a pilot wearing a special cap that can measure his or her brain waves, and it is programmed with highly complex algorithms to counteract any confused or unhelpful thoughts from the pilot that could cause a crash. Tekever is conducting the project with technology research center the Champalimaud Foundation and software business Eagle Science, in collaboration with the Technical University of Munich, Germany.
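
Tekever has not published implementation details, but the idea of "set formats" plus filtering out confused thoughts can be sketched as a toy pipeline: classify a window of brain-wave features into one of a few known commands, and forward a command only when the classifier is confident. Everything below (the feature vector, the linear classifier, the threshold, the command set) is hypothetical and invented for illustration; it is not Tekever's system.

```python
import numpy as np

COMMANDS = ["hold", "climb", "descend", "turn_left", "turn_right"]
CONFIDENCE_THRESHOLD = 0.8   # below this, treat the signal as "confused" and change nothing

def classify(features: np.ndarray, weights: np.ndarray):
    """Toy linear classifier over a window of EEG-derived features.
    Returns (command, confidence) from a softmax over command scores."""
    scores = weights @ features
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    best = int(probs.argmax())
    return COMMANDS[best], float(probs[best])

def next_command(features, weights, current="hold"):
    """Only change the drone's command when the classifier is confident;
    otherwise keep the previous (safe) command."""
    command, confidence = classify(features, weights)
    return command if confidence >= CONFIDENCE_THRESHOLD else current

# Random stand-ins for trained weights and one window of features.
rng = np.random.default_rng(0)
weights = rng.normal(size=(len(COMMANDS), 16))
window = rng.normal(size=16)
print(next_command(window, weights))
```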

Tekever is convinced the technology will eventually lead to manned commercial aircraft being flown simply by pilots’ thoughts, and even suggests the system could allow people to drive cars the same way. It believes the technology can also be used in advanced prosthetic limbs, allowing people with severe disabilities to move with their thoughts.

The system has already been tested in a four-seat, twin-engine, propeller-driven airplane simulator, and the company will work on running it in the real thing.

The technology will need extensive development before it is safe enough in such circumstances, and regulators would be unlikely to allow it unless proven to be extremely safe. But the concept is feasible, and perhaps even likely in the long run.