Thursday, July 21, 2016

Google Sprints Ahead in AI Building Blocks, Leaving Rivals Wary

By Jack Clark July 21, 2016 — 3:00 AM PDT

There’s a high-stakes race under way in Silicon Valley to develop software that makes it easy to weave artificial intelligence technology into almost everything, and Google has sprinted into the lead.

Google computer scientists including Jeff Dean and Greg Corrado built software called TensorFlow, which simplifies the programming of key systems that underpin artificial intelligence. That helps Google make its products smarter and more responsive. It’s important for other companies too because the software makes it dramatically easier to create computer programs that learn and improve automatically. What’s more, Google gives it away.
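
To give a sense of what that simplification looks like in practice, here is a minimal sketch using the high-level Keras API bundled with current TensorFlow releases (not the 2016-era interface the article describes); the random data and layer sizes are purely illustrative.

```python
# A few lines are enough to define and train a small learning model.
# The data here are synthetic and the layer sizes arbitrary; this only
# illustrates the style of programming TensorFlow makes easy.
import numpy as np
import tensorflow as tf

x = np.random.rand(1000, 20).astype("float32")   # 1,000 examples, 20 features
y = (x.sum(axis=1) > 10).astype("float32")       # a made-up binary label

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=5, batch_size=32, verbose=0)   # the model learns from examples
print(model.evaluate(x, y, verbose=0))
```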

But for some competitors, there’s a big downside to adopting Google’s standard. Using TensorFlow will help Google recruit more AI experts by training them on the same tool it uses internally, spotting their code, and hiring the best contributors. It could also let the search-engine provider exert outsize influence over the burgeoning AI ecosystem. If the internet giant dominates in this field, it could gain an advantage in the fast-growing cloud-computing business, turning the popularity of its software into real revenue.

"It’s the next big area, and people are worried Google’s going to own the show," said Ed Lazowska, a computer science professor at the University of Washington who has served on the technical advisory board of Microsoft Corp.’s research lab. "There is a network effect, and it’s a really excellent system."

Google initially used TensorFlow internally for products like its Inbox and Photos apps. The company made it available for free in November. Technology companies like Microsoft Corp., Amazon.com Inc. and Samsung Electronics Co. rushed to give away their own versions, hoping to get the most outside developers using their standards.

The company that wins will benefit from the collective efforts of thousands of developers using, but also updating and improving, its system. That’s an advantage when it comes time to make money from the new asset. Whoever has the most popular software will have the best chance of creating commercial cloud services for AI because potential customers will already know how to use it.

Amazon and Samsung declined to comment. Microsoft did not respond to requests for comment on Wednesday.

Success in these types of open-source projects sometimes yields big rewards. Google released Android for free in 2008, and it’s now the most widely used mobile operating system with over 400,000 developers and more than a billion users. Google generates billions of dollars a year from ads shown on Android devices and the cut it gets from revenue app developers make through the operating system.

‘Linux Level’

Since its release, TensorFlow has become the most popular AI programming project on the code-sharing service GitHub, leapfrogging well-regarded systems created by universities and corporate rivals, according to data gathered by Bloomberg.

On launch day, TensorFlow had around 3,000 "stars" on GitHub, meaning that number of programmers had bookmarked the code, indicating interest. As of July 13, it had 27,873. Two other popular AI software projects, Theano and Torch, have less than a fifth of that following. In 2014, Torch was the leader. A Microsoft tool called CNTK, released for free in January, and Amazon’s free DSSTNE, which rolled out in May, have so far failed to dent Google’s lead much.

Linux, an open-source operating system launched in 1991, now helps run everything from supercomputers to phones to airplanes and helped turn Red Hat Inc. into a $13 billion enterprise software company. Linux has 33,967 stars on GitHub. "It’s kind of crazy," said Dean, a top Google engineer and one of the main developers of TensorFlow. "We’re almost to Linux level."

‘Green Field’

Google will soon begin generating revenue from this lead. It plans to offer a version of TensorFlow that runs on its Google Cloud Platform service, letting people and businesses pay to run their AI software in Google’s data centers.

Google made the software free so it could give the community a useful tool "and everyone could standardize on that," said Corrado, a senior research scientist at Google. "In a giant green field, trying to build a fence around the next blade of grass is really absurdist. It’s really better to help everybody run into that field."

That openness, and continual Google updates, have lured developers like computer-vision startup Matroid, which rewrote its software to work with TensorFlow after building on another free AI tool called Caffe. Kindred, a robotics startup, made a similar switch.

Looking Elsewhere

Not everyone is so keen. As TensorFlow’s usage grows, some companies are realizing an increasingly important part of the technology toolkit is controlled by Google, and they don’t want to exacerbate that trend.

They’re "skeptical about using a language backed by another large company," said Soumith Chintala, a Facebook Inc. artificial intelligence researcher and one of the people behind Torch.

The unease stems from the fact that Google can tweak TensorFlow to suit its own purposes, he said. If the company changes the software too much, then other companies that have adopted it will need to make a copy of the software and rewrite it to suit their own needs -- an expensive and time-consuming process known as forking.

That’s led some to look elsewhere. Skymind, which makes free AI software, has had more than five customers tell it they are wary of using TensorFlow, said CEO Chris Nicholson. He declined to name any of the companies, citing non-disclosure agreements.

Since TensorFlow launched, designers of other AI programming projects have been inundated with queries from companies that don’t want to rely on Google. Several reached out to the creators of Theano, developed mainly at the University of Montreal, to see if they can donate resources to the project, according to Yoshua Bengio, a professor who leads AI research at the school.

The same happened with Torch, said Chintala. Facebook does much of its AI research with Torch, and Chintala helps guide development of the project in his spare time. He and other backers moved Torch into a non-profit organization called SPI Inc. in May to make it easier for more people to work on the language and donate to it.

‘Counterpoint’

"One of the reasons we want to stick with Torch is to create a strong counterpoint" to TensorFlow, said Clement Farabet, who helped develop Torch. He now works at Twitter Inc., which uses Torch to run AI systems that analyze images and select tweets people may want to read. It’s better for the community if there’s a choice of AI software, he said.

Google could solve some of these problems by donating TensorFlow to a neutral third-party, said Bengio, who has discussed his ideas with the company. This structure could "provide neutral software for all," he said.

Google has no plans to do that, but it’s open to letting outside people have a say in what code gets merged into the main software, said Jason Freidenfelds, a spokesman for Google.

Google’s strategy may be dictated by past failings, said Reza Zadeh, chief executive officer of Matroid, who worked at Google a decade ago. Back then, Google developed the Google File System and MapReduce to store and analyze lots of data. It published research papers on them, but no code. Some employees at Yahoo! Inc. used the research to create Hadoop, technology that underpins public company Hortonworks Inc. and larger private rival Cloudera Inc.

"They’ve learned from that," said Zadeh.


Pepper robot gets new job selling insurance


8:48 pm, July 21, 2016

The Yomiuri Shimbun

A major life insurance company will deploy humanoid robots nationwide this autumn, using them to wait on customers at its offices and sending them out on sales calls.

Meiji Yasuda Life Insurance Co. has announced plans to deploy 100 Pepper robots, made by SoftBank Group Corp., at its 80 branches in October. Pepper will explain insurance products and services, and accompany sales people on their rounds.

This will make Meiji Yasuda the largest deployer of humanoid robots in the financial industry.

Pepper will explain comparatively simple, reasonably priced insurance products in customer service areas at branch offices. The robots also will attend to visitors at insurance seminars held by the company, and accompany Meiji Yasuda salespeople on visits to other companies to promote insurance products.

Heightened security arrangements have made it more difficult to sell insurance to companies, and Meiji Yasuda is hoping to draw attention through the use of the robots. The company plans to redecorate 60 of its shop fronts across the nation and position Pepper robots in customer service booths by fiscal 2017.



Wednesday, July 20, 2016

Drone can hit 70 mph, has replaceable legs

The Teal drone can hit 70 mph, so it's a good thing it has replaceable legs
July 20, 2016 6:00 AM PDT

Teal is more than just another quadcopter: It's a platform.

As it stands in 2016, consumers can pick out a ready-to-fly drone for aerial photos and video or for racing or just to fly casually. Teal is meant to appeal to all of these buyers, regardless of skill level, and eventually to commercial pilots, too.

Behind Teal -- the company and the drone -- is 18-year-old George Matus, who has been flying quads since he was 11 and built his first one at 14. The drone is the result of an evolving list of dream features he's been making since then.

The quad can fly at up to 70 mph (112 kph) in winds of up to 40 mph (64 kph). It's weatherproof, can be controlled with an iOS or Android device or a regular radio controller, and is small enough to slip into a backpack. In front is an electronically stabilized 13-megapixel camera that can record video at 4K resolution.

Teal is also modular, and that doesn't only mean removing the battery. Each arm can be popped on and off, as can the drone's top section. With other drones, if you were to break one of the prop arms you would have to send the whole thing in for repair. With Teal you'll be able to easily replace it on your own. Plus, this opens the possibility for specialized arms for specific tasks. Teal is also currently planning to release modules for the top section including thermal imaging, obstacle avoidance (something it currently can't do on its own) and a secondary camera for first-person-view racing.

Here's where it gets even more interesting, though. Inside Teal is an Nvidia TX1 computer with an octa-core processor to handle machine learning and artificial intelligence technologies. The idea is that the modular design, the powerful hardware running the drone's Teal OS, and an available SDK make it a platform that can be developed for both consumer and commercial uses.

For the moment the drone is targeted at consumers and will have three apps available at launch: one for flight control, another for a Follow-Me mode for automatic subject tracking and a racing application so you can compete against other Teal pilots. Matus hopes that once an app store has been built and grows, licensing of the platform to other hardware manufacturers will soon follow.

The biggest downsides we see are the same things we see with a lot of drones: battery life and price. Teal has a 1,800mAh lithium polymer battery that will provide around 10 minutes of flight time. That's shorter than what larger camera drones offer, but in line with most racing drones. Teal should be releasing extended batteries at some point after launch, too.

The other issue is that Teal is a newcomer, and at $1,299 the unit is not cheap. Shipping is also a ways off: the earliest units will ship right before Christmas 2016, while the rest of the orders placed by August 15 should ship by early 2017. And that's if all goes according to plan.

The company is accepting preorders on the Teal Drones site, and you won't be charged until the drone ships.


Tuesday, July 19, 2016

Mercedes' autonomous Future Bus just drove through Amsterdam


BY NICK JAYNES July 18, 2016

Autonomy isn't just for cars; Mercedes-Benz has created a self-driving city bus, too.

Mercedes-Benz revealed its latest creation on Monday morning. Called the Future Bus, it's the first city bus that can drive autonomously.

Mercedes did more than just unveil the futuristic vehicle. It also sent it on a 12-mile route through the streets of Amsterdam.

The bus uses Mercedes’ latest autonomous driving system, called CityPilot. Like HighwayPilot, which allows the company’s semi trucks to drive more safely and efficiently down freeways, CityPilot enables buses to drive partially autonomously in specially marked bus lanes at up to 43 mph. All of this is achieved with a human driver onboard to monitor for safety.

Future Bus can do more than just drive in special lanes. It can also arrive at bus stops, pass through tunnels, communicate with traffic signals, and brake for obstacles and pedestrians.

Unlike HighwayPilot, which Mercedes aims to send into production vehicles by 2020, the German automaker doesn't intend to send Future Bus' CityPilot system into production in its complete form. Instead, it will implement portions of the system — like driving to and away from bus stops — into its city buses. Additionally, Mercedes wants to use semi-autonomous tech to improve the efficiency of its zero-emissions powertrains.

Future Bus does more than demo some production-intended technology; it also shows how Mercedes envisions a more comfortable, tech-heavy public transit of the future.

Specifically, in the "lounge" portion of the bus, riders can wirelessly charge their smartphones through inductive charging pads as well as check information on large displays.

There’s no word as to whether future Mercedes buses will include such distinctive exterior styling to match the underlying tech. But here’s hoping.


This Guy Trains Computers to Find Future Criminals


Richard Berk says his algorithms take the bias out of criminal justice. But could they make it worse?

by Joshua Brustein July 18, 2016

When historians look back at the turmoil over prejudice and policing in the U.S. over the past few years, they’re unlikely to dwell on the case of Eric Loomis. Police in La Crosse, Wis., arrested Loomis in February 2013 for driving a car that was used in a drive-by shooting. He had been arrested a dozen times before. Loomis took a plea, and was sentenced to six years in prison plus five years of probation.

The episode was unremarkable compared with the deaths of Philando Castile and Alton Sterling at the hands of police, which were captured on camera and distributed widely online. But Loomis’s story marks an important point in a quieter debate over the role of fairness and technology in policing. Before his sentence, the judge in the case received an automatically generated risk score that determined Loomis was likely to commit violent crimes in the future.

Risk scores, generated by algorithms, are an increasingly common factor in sentencing. Computers crunch data—arrests, type of crime committed, and demographic information—and a risk rating is generated. The idea is to create a guide that’s less likely to be subject to unconscious biases, the mood of a judge, or other human shortcomings. Similar tools are used to decide which blocks police officers should patrol, where to put inmates in prison, and who to let out on parole. Supporters of these tools claim they’ll help solve historical inequities, but their critics say they have the potential to aggravate them, by hiding old prejudices under the veneer of computerized precision. Some people see them as a sterilized version of what brought protesters into the streets at Black Lives Matter rallies.

Loomis is a surprising fulcrum in this controversy: He’s a white man. But when Loomis challenged the state’s use of a risk score in his sentence, he cited many of the fundamental criticisms of the tools: that they’re too mysterious to be used in court, that they punish people for the crimes of others, and that they hold your demographics against you. Last week the Wisconsin Supreme Court ruled against Loomis, but the decision validated some of his core claims. The case, say legal experts, could serve as a jumping-off point for legal challenges questioning the constitutionality of these kinds of techniques.

To understand the algorithms being used all over the country, it’s good to talk to Richard Berk. He’s been writing them for decades (though he didn’t write the tool that created Loomis’s risk score). Berk, a professor at the University of Pennsylvania, is a shortish, bald guy, whose solid stature and I-dare-you-to-disagree-with-me demeanor might lead people to mistake him for an ex-cop. In fact, he’s a career statistician.

His tools have been used by prisons to determine which inmates to place in restrictive settings; by parole departments to choose how closely to supervise people being released from prison; and by police officers to predict whether people arrested for domestic violence will re-offend. He once created an algorithm that would tell the Occupational Safety and Health Administration which workplaces were likely to commit safety violations, but says the agency never used it for anything. Starting this fall, the state of Pennsylvania plans to run a pilot program using Berk’s system in sentencing decisions.

As his work has been put into use across the country, Berk’s academic pursuits have become progressively fantastical. He’s currently working on an algorithm that he says will be able to predict at the time of someone’s birth how likely she is to commit a crime by the time she turns 18. The only limit to applications like this, in Berk’s mind, is the data he can find to feed into them.

“The policy position that is taken is that it’s much more dangerous to release Darth Vader than it is to incarcerate Luke Skywalker”

This kind of talk makes people uncomfortable, something Berk was clearly aware of on a sunny Thursday morning in May as he headed into a conference in the basement of a campus building at Penn to play the role of least popular man in the room. He was scheduled to participate in the first panel of the day, which was essentially a referendum on his work. Berk settled into his chair and prepared for a spirited debate about whether what he does all day is good for society.

The moderator, a researcher named Sandra Mayson, took the podium. “This panel is the Minority Report panel,” she said, referring to the Tom Cruise movie where the government employs a trio of psychic mutants to identify future murderers, then arrests these “pre-criminals” before their offenses occur. The comparison is so common it’s become a kind of joke. “I use it too, occasionally, because there’s no way to avoid it," Berk said later.

For the next hour, the other members of the panel took turns questioning the scientific integrity, utility, and basic fairness of predictive techniques such as Berk’s. As it went on, he began to fidget in frustration. Berk leaned all the way back in his chair and crossed his hands over his stomach. He leaned all the way forward and flexed his fingers. He scribbled a few notes. He rested his chin in one hand like a bored teenager and stared off into space.

Eventually, the debate was too much for him: “Here’s what I, maybe hyperbolically, get out of this,” Berk said. “No data are any good, the criminal justice system sucks, and all the actors in the criminal justice system are biased by race and gender. If that’s the takeaway message, we might as well all go home. There’s nothing more to do.” The room tittered with awkward laughter.

Berk’s work on crime started in the late 1960s, when he was splitting his time between grad school and a social work job in Baltimore. The city exploded in violence following the assassination of Martin Luther King Jr. Berk’s graduate school thesis examined the looting patterns during the riots. “You couldn’t really be alive and sentient at that moment in time and not be concerned about what was going on in crime and justice,” he said. “Very much like today with the Ferguson stuff.”

In the mid-1990s, Berk began focusing on machine learning, where computers look for patterns in data sets too large for humans to sift through manually. To make a model, Berk inputs tens of thousands of profiles into a computer. Each one includes the data of someone who has been arrested, including how old they were when first arrested, what neighborhood they’re from, how long they’ve spent in jail, and so on. The data also contain information about who was re-arrested. The computer finds patterns, and those serve as the basis for predictions about which arrestees will re-offend.
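
The article doesn't reproduce Berk's code, but the workflow it describes maps onto a standard supervised-learning pipeline. The sketch below is a minimal illustration of that kind of re-arrest model; the file name and column names (such as age_first_arrest and rearrested) are hypothetical stand-ins, and Berk's actual features and tuning aren't public here.

```python
# Minimal sketch of a re-arrest prediction model of the kind described above.
# The CSV file and column names are hypothetical, not Berk's actual data.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

profiles = pd.read_csv("arrest_profiles.csv")   # tens of thousands of past cases

features = ["age_first_arrest", "num_prior_arrests", "months_in_jail", "neighborhood_code"]
X = profiles[features]
y = profiles["rearrested"]                      # 1 if the person was re-arrested, else 0

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# No causal theory of violence is supplied; the model just finds statistical
# patterns in the historical examples, which is exactly Berk's point.
model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit(X_train, y_train)

# A risk score for a new arrestee is the predicted probability of re-arrest.
risk_scores = model.predict_proba(X_test)[:, 1]
print(risk_scores[:5])
```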

To Berk, a big advantage of machine learning is that it eliminates the need to understand what causes someone to be violent. “For these problems, we don’t have good theory,” he said. Feed the computer enough data and it can figure it out on its own, without deciding on a philosophy of the origins of criminal proclivity. This is a seductive idea. But it’s also one that comes under criticism each time a supposedly neutral algorithm in any field produces worryingly non-neutral results. In one widely cited study, researchers showed that Google’s automated ad-serving software was more likely to show ads for high-paying jobs to men than to women. Another found that ads for arrest records show up more often when searching the web for distinctly black names than for white ones.

Computer scientists have a maxim, “Garbage in, garbage out.” In this case, the garbage would be decades of racial and socioeconomic disparities in the criminal justice system. Predictions about future crimes based on data about historical crime statistics have the potential to equate past patterns of policing with the predisposition of people in certain groups—mostly poor and nonwhite—to commit crimes.

Berk readily acknowledges this as a concern, then quickly dismisses it. Race isn’t an input in any of his systems, and he says his own research has shown his algorithms produce similar risk scores regardless of race. He also argues that the tools he creates aren’t used for punishment—more often they’re used, he said, to reverse long-running patterns of overly harsh sentencing, by identifying people whom judges and probation officers shouldn’t worry about.

Berk began working with Philadelphia’s Adult Probation and Parole Department in 2006. At the time, the city had a big murder problem and a small budget. There were a lot of people in the city’s probation and parole programs. City Hall wanted to know which people it truly needed to watch. Berk and a small team of researchers from the University of Pennsylvania wrote a model to identify which people were most likely to commit murder or attempted murder while on probation or parole. Berk generally works for free, and was never on Philadelphia’s payroll.

A common question, of course, is how accurate risk scores are. Berk says that in his own work, between 29 percent and 38 percent of predictions about whether someone is low-risk end up being wrong. But focusing on accuracy misses the point, he says. When it comes to crime, sometimes the best answers aren’t the most statistically precise ones. Just as weathermen err on the side of predicting rain because no one wants to get caught without an umbrella, court systems want technology that intentionally overpredicts the risk that any given individual will commit a crime. The same person could end up being described as either high-risk or not depending on where the government decides to set that line. “The policy position that is taken is that it’s much more dangerous to release Darth Vader than it is to incarcerate Luke Skywalker,” Berk said.

“Every mark of poverty serves as a risk factor”

Philadelphia’s plan was to offer cognitive behavioral therapy to the highest-risk people, and offset the costs by spending less money supervising everyone else. When Berk posed the Darth Vader question, the parole department initially determined it’d be 10 times worse, according to Geoffrey Barnes, who worked on the project. Berk figured that at that threshold the algorithm would name 8,000 to 9,000 people as potential pre-murderers. Officials realized they couldn’t afford to pay for that much therapy, and asked for a model that was less harsh. Berk’s team twisted the dials accordingly. “We’re intentionally making the model less accurate, but trying to make sure it produces the right kind of error when it does,” Barnes said.
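
To make that cost asymmetry concrete, here is a small back-of-the-envelope sketch (not Berk's code): if releasing a future violent offender is treated as ten times as costly as over-supervising a harmless one, the probability threshold for flagging someone as high-risk drops from 50 percent to roughly 9 percent, which is what intentionally erring in the "safe" direction amounts to.

```python
# Illustration of how an asymmetric cost ratio moves the decision threshold.
# The 10:1 ratio echoes the Philadelphia example above; the scores are made up.
def flag_high_risk(p_reoffend, cost_false_negative=10.0, cost_false_positive=1.0):
    # Flag when the expected cost of releasing exceeds the cost of supervising:
    #   p * cost_false_negative > (1 - p) * cost_false_positive
    threshold = cost_false_positive / (cost_false_positive + cost_false_negative)
    return p_reoffend > threshold, threshold

for p in (0.05, 0.10, 0.30, 0.60):
    flagged, threshold = flag_high_risk(p)
    print(f"p={p:.2f}  flagged={flagged}  (threshold ~ {threshold:.2f})")

# With a 10:1 cost ratio the threshold is about 0.09, so even a 10 percent
# predicted chance of violence lands someone in the high-risk group.
```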

The program later expanded to group everyone into high-, medium-, and low-risk populations, and the city significantly reduced how closely it watched parolees Berk’s system identified as low-risk. In a 2010 study, Berk and city officials reported that people who were given more lenient treatment were less likely to be arrested for violent crimes than people with similar risk scores who stayed with traditional parole or probation. People classified as high-risk were almost four times more likely to be charged with violent crimes.

Since then, Berk has created similar programs in Maryland’s and Pennsylvania’s statewide parole systems. In Pennsylvania, an internal analysis showed that between 2011 and 2014 about 15 percent of people who came up for parole received different decisions because of their risk scores. Those who were released during that period were significantly less likely to be re-arrested than those who had been released in years past. The conclusion: Berk’s software was helping the state make smarter decisions.

Laura Treaster, a spokeswoman for the state’s Board of Probation and Parole, says Pennsylvania isn’t sure how its risk scores are impacted by race. “This has not been analyzed yet,” she said. “However, it needs to be noted that parole is very different than sentencing. The board is not determining guilt or innocence. We are looking at risk.”

Sentencing, though, is the next frontier for Berk’s risk scores. And using algorithms to decide how long someone goes to jail is proving more controversial than using them to decide when to let people out early.

Wisconsin courts use Compas, a popular commercial tool made by a Michigan-based company called Northpointe. By the company’s account, the people it deems high-risk are re-arrested within two years in about 70 percent of cases. Part of Loomis’s challenge was specific to Northpointe’s practice of declining to share specific information about how its tool generates scores, citing competitive reasons. Not allowing a defendant to assess the evidence against him violated due process, he argued. (Berk shares the code for his systems, and criticizes commercial products such as Northpointe’s for not doing the same.)

As the court was considering Loomis’s appeal, the journalism website ProPublica published an investigation looking at 7,000 Compas risk scores in a single county in Florida over the course of 2013 and 2014. It found that black people were almost twice as likely as white people to be labeled high-risk, then not commit a crime, while it was much more common for white people who were labeled low-risk to re-offend than black people who received a low-risk score. Northpointe challenged the findings, saying ProPublica had miscategorized many risk scores and ignored results that didn’t support its thesis. Its analysis of the same data found no racial disparities.
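
ProPublica's published methodology comes down to comparing error rates across racial groups. A minimal sketch of that comparison, using hypothetical column names rather than the actual Broward County data set, looks like this:

```python
# Sketch of a false-positive / false-negative comparison by group.
# The file and columns ("race", "high_risk", "reoffended") are hypothetical
# stand-ins for the fields in ProPublica's published data.
import pandas as pd

scores = pd.read_csv("compas_scores.csv")

for race, group in scores.groupby("race"):
    did_not_reoffend = group[group["reoffended"] == 0]
    did_reoffend = group[group["reoffended"] == 1]
    # False positive: labeled high-risk but not re-arrested within two years.
    fpr = (did_not_reoffend["high_risk"] == 1).mean()
    # False negative: labeled low-risk but re-arrested within two years.
    fnr = (did_reoffend["high_risk"] == 0).mean()
    print(f"{race}: false positive rate {fpr:.2f}, false negative rate {fnr:.2f}")
```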

Even as it upheld Loomis’s sentence, the Wisconsin Supreme Court cited the research on race to raise concerns about the use of tools like Compas. Going forward, it requires risk scores to be accompanied by disclaimers about their nontransparent nature and various caveats about their conclusions. It also says they can’t be used as the determining factor in a sentencing decision. The decision was the first time that such a high court had signaled ambivalence about the use of risk scores in sentencing.

Sonja Starr, a professor at the University of Michigan’s law school and a prominent critic of risk assessment, thinks that Loomis’s case foreshadows stronger legal arguments to come. Loomis made a demographic argument, saying that Compas rated him as riskier because of his gender, reflecting the historical patterns of men being arrested at higher rates than women. But he didn’t frame it as an argument that Compas violated the Equal Protection Clause of the 14th Amendment, which allowed the court to sidestep the core issue.

Loomis also didn’t argue that the risk scores serve to discriminate against poor people. “That’s the part that seems to concern judges, that every mark of poverty serves as a risk factor,” Starr said. “We should very easily see more successful challenges in other cases.”

Officials in Pennsylvania, which has been slowly preparing to use risk assessment in sentencing for the past six years, are sensitive to these potential pitfalls. The state’s experience shows how tricky it is to create an algorithm through the public policy process. To come up with a politically palatable risk tool, Pennsylvania established a sentencing commission. It quickly rejected commercial products like Compas, saying they were too expensive and too mysterious, so the commission began creating its own system.

Race was discarded immediately as an input. But every other factor became a matter of debate. When the state initially wanted to include location, which it determined to be statistically useful in predicting who would re-offend, the Pennsylvania Association of Criminal Defense Lawyers argued that it was a proxy for race, given patterns of housing segregation. The commission eventually dropped the use of location. Also in question: the system’s use of arrests, instead of convictions, since it seems to punish people who live in communities that are policed more aggressively.

Berk argues that eliminating sensitive factors weakens the predictive power of the algorithms. “If you want me to do a totally race-neutral forecast, you’ve got to tell me what variables you’re going to allow me to use, and nobody can, because everything is confounded with race and gender,” he said.

Starr says this argument confuses the differing standards in academic research and the legal system. In social science, it can be useful to calculate the relative likelihood that members of certain groups will do certain things. But that doesn’t mean a specific person’s future should be calculated based on an analysis of population-wide crime stats, especially when the data set being used reflects decades of racial and socioeconomic disparities. It amounts to a computerized version of racial profiling, Starr argued. “If the variables aren’t appropriate, you shouldn’t be relying on them,” she said.

Late this spring, Berk traveled to Norway to meet with a group of researchers from the University of Oslo. The Norwegian government gathers an immense amount of information about the country’s citizens and connects each of them to a single identification file, presenting a tantalizing set of potential inputs.

Torbjørn Skardhamar, a professor at the university, was interested in exploring how he could use machine learning to make long-term predictions. He helped set up Berk’s visit. Norway has lagged behind the U.S. in using predictive analytics in criminal justice, and the men threw around a few ideas.

Berk wants to predict at the moment of birth whether people will commit a crime by their 18th birthday, based on factors such as environment and the history of a new child’s parents. This would be almost impossible in the U.S., given that much of a person’s biographical information is spread out across many agencies and subject to many restrictions. He’s not sure if it’s possible in Norway, either, and he acknowledges he also hasn’t completely thought through how best to use such information.

Caveats aside, this has the potential to be a capstone project of Berk’s career. It also takes all of the ethical and political questions and extends them to their logical conclusion. Even in the movie Minority Report, the government peered only hours into the future—not years. Skardhamar, who is new to these techniques, said he’s not afraid of making mistakes: They’re talking about them now, he said, so they can avoid future errors. “These are tricky questions,” he said, mulling all the ways the project could go wrong. “Making them explicit—that’s a good thing.”

Robot 'therapist' to improve productivity among TCM physicians


By Olivia Quay  Posted 18 Jul 2016 14:49 Updated 18 Jul 2016 16:49

SINGAPORE: A robot capable of giving targeted physiotherapy massages to relieve muscle strains and injuries has been developed by local start-up AiTreat, and was previewed on Monday (Jul 18).

The Expert Manipulative Massage Automation (EMMA) was created by Nanyang Technological University (NTU) graduate and AiTreat CEO Albert Zhang.

Mr Zhang, a Traditional Chinese Medicine (TCM) physician of five years, said: "TCM is competing with other industries for physically fit people who can learn TCM knowledge. It's pretty difficult. The salaries keep increasing. The clinics are not earning a lot of profit, even though the charge is not low any longer.

"So we hope, other than the solving the labour shortage, the robot can also bring scientific data. So, TCM (practitioners) can do research, can have scientific support to show to their patients what their condition is, what their improvement is, and what should be done in the future.

"We have designed EMMA as a clinically precise tool that can automatically carry out treatment for patients as prescribed by a physiotherapist or Chinese physician," he added.

“Our aim is not to replace the therapists who are skilled in sports massage and acupoint therapy, but to improve productivity by enabling one therapist to treat multiple patients with the help of our robots."

'EMMA' TO DO HEAVY-DUTY WORK

Physicians will continue to consult patients and perform physical check-ups - assessing what massages and methods are best - while leaving EMMA to the heavy-duty work.

"Basically, you build a work flow to tell the robot what to do. Then the robot will do the hard and tedious, time-consuming work," said Mr Zhang. "After that, the physician can do the manipulations and the acupuncture. If the physician has a lot of patients; if he's very popular, he can have an assistant to help them operate several robots at the same time, to serve more people and charge a lower price."

According to Mr Zhang, EMMA is probably the first such robot in the world developed specifically for use by TCM physicians and sports therapists.

EMMA has already been used to treat Singapore's national basketball team using acupoint therapy.

"There are different levels that the robot masseuse can do, so I think it will target the different needs, especially with team sports, like basketball, where it needs to cater to more than one athlete - all 12 of us," said national basketballer Leon Kwek. "So I think if we had a masseuse for one team, then he would be tired out by the time he reaches the seventh or eighth athlete. I think it helps reduce his workload, at the same time giving - in a way - equal treatment to every single one of us."

The robot comprises a single robotic arm, capable of highly articulated movements, and also has a 3D-stereoscopic camera for vision, and a customised, fully rotatable 3D-printed massage tip.

To ensure consistent quality of therapy, EMMA is equipped with sensors and diagnostic functions that will measure a patient's progress, as well as the exact stiffness of a particular muscle or tendon. These are uploaded to AiTreat’s proprietary cloud-based intelligence platform for analysis.

TREATING THE AGEING POPULATION

The robot has undergone trials at TCM clinic Kin Teck Tong since last week, where it has treated 50 patients. 

Said Ms Coco Zhang, Executive Director of Kin Teck Tong: “Like many developed countries, Singapore has the problem of an ageing population. Over the next decade, more people are going to suffer from physical ailments such as arthritis and will be seeking treatment.

“In our trials with the robot, the experience has been very good, as it can perform most treatments as well as our therapists,” she added.

Moving forward, AiTreat will focus on developing its second-generation robot, which is more compact and mobile, the start-up said. The team is also looking into using heated contact pads to better simulate human hands.

- CNA/xk



Monday, July 18, 2016

Tiny Hard Drive Uses Single Atoms to Store Data


It packs hundreds of times more information per square inch than the best currently available technologies, study says

By DANIELA HERNANDEZ July 18, 2016 11:00 a.m. ET

By manipulating the interactions between individual atoms, scientists report they have created a device that can pack hundreds of times more information per square inch than the best currently available data-storage technologies.

The working prototype is part of a decades-long attempt to shrink electronics down to the atomic level, a feat scientists believe would allow them to store information much more efficiently, in less space and more cheaply. By comparison, tech companies today build warehouse-sized data centers to store the billions of photos, videos and posts consumers upload to the internet daily. Corporations including International Business Machines Corp. and Hewlett Packard Enterprise Co. also have explored research to reduce such space needs.

The so-called atomic-scale memory, described in a paper published on Monday in the scientific journal Nature Nanotechnology, can hold one kilobyte, the equivalent of roughly a paragraph of text.

It may not sound “very impressive,” said Franz Himpsel, a professor emeritus of physics at the University of Wisconsin, Madison, who wasn’t involved in the study. But “I would call it a breakthrough.”

Most previous attempts at encoding information with atoms, including his own, managed roughly one byte, Dr. Himpsel said. And data could be stored only once. To store new information, the “disk” had to be re-formatted, like CD-Rs popular in the ’90s.

With the new device, “we can rewrite it as often as we like,” said Sander Otte, an experimental physicist at Delft University of Technology in the Netherlands and the lead author on the new paper.

The researchers first stored a portion of Charles Darwin’s “On the Origin of Species” on the device. They then replaced that with 160 words from a 1959 lecture by physicist Richard Feynman in which he imagined a world powered by devices running on atomic-scale memory.

To build their prototype, the scientists peppered a flat copper bed with about 60,000 chlorine atoms scattered at random, purposely leaving roughly 8,000 empty spaces among them. A mapping algorithm guided the tiny, copper-coated tip of a high-tech microscope to gently pull each chlorine atom to a predetermined location, creating a precise arrangement of atoms and neighboring “holes.”

The team also crafted a language for their device. The stored information is encoded in the patterns of holes between atoms. The atom-tugging needle reads them as ones and zeros, turning them into regular binary code.

The researchers marked up the grid with instructions that cued the software where it should direct the needle to write and read data. For instance, a three-hole diagonal line marked the end of a file.
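
The paper's exact encoding isn't reproduced in the article, but the basic idea, turning text into bits and laying those bits out as a grid of atom and hole positions with reserved marker patterns, can be sketched in a few lines. The row width and the all-zero end-of-file marker below are assumptions for illustration, not the actual layout or the three-hole diagonal marker used by the Delft team.

```python
# Toy illustration of storing text as rows of "atom" (1) and "hole" (0) sites.
# The 8-bit rows and the all-zero end-of-file marker are assumptions for this
# sketch, not the markers actually used in the Nature Nanotechnology paper.
def text_to_bit_rows(text, row_width=8):
    bits = []
    for ch in text:
        bits.extend(int(b) for b in format(ord(ch), "08b"))   # one byte per character
    while len(bits) % row_width:                               # pad the final row
        bits.append(0)
    rows = [bits[i:i + row_width] for i in range(0, len(bits), row_width)]
    rows.append([0] * row_width)                               # end-of-file marker row
    return rows

def bit_rows_to_text(rows):
    bits = [b for row in rows[:-1] for b in row]               # drop the marker row
    chars = [chr(int("".join(map(str, bits[i:i + 8])), 2)) for i in range(0, len(bits), 8)]
    return "".join(c for c in chars if c != "\x00")            # strip padding

rows = text_to_bit_rows("On the Origin of Species")
print(len(rows), "rows of", len(rows[0]), "sites")
print(bit_rows_to_text(rows))
```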

“That’s what I really love in this work,” said Elke Scheer, a nanoscientist at the University of Konstanz in Germany not involved with the study. “It’s not just physics. It’s also informatics.”

Writing the initial data to the device took about a week, though the rewriting process takes just a few hours, Dr. Otte said.

“It’s automated, so it’s 10 times faster than previous examples,” said Christopher Lutz, a staff scientist at IBM Research-Almaden in San Jose, Calif. Still, “this is very exploratory. It’s important not to see this one-kilobyte memory result as something that can be taken directly to a product.”

Reading the stored data is much too slow to have practical applications soon. Plus, the device is stable for only a few hours at extremely low temperatures. To be competitive with today’s hard drives, the memory would have to persist for years and work in warmer temperatures, said Victor Zhirnov, chief scientist at the Semiconductor Research Corp., a research consortium based in Durham, N.C.

When Dr. Otte’s team took the memory out of the extremely low-temperature environment in which it was built and stored, the information it held was lost. Next, his team will explore other metal surfaces as well as elements similar to, but heavier than, chlorine, to see if that improves the device’s stability.

“There’s many combinations to play with,” he said.

Write to Daniela Hernandez at daniela.hernandez@wsj.com