Tuesday, November 24, 2015

These 3 judges hold the fate of the Internet's net neutrality rules in their hands

By Brian Fung November 24 at 6:15 AM

Next week, a federal appeals court in Washington will hear one of its biggest cases of the year, one whose outcome will directly affect how Internet providers can alter your experience online.

At stake are the government's net neutrality rules banning telecom and cable companies from unfairly discriminating against new or potential rivals. If the U.S. Court of Appeals for the D.C. Circuit strikes those rules down, Internet providers could be granted more latitude to use their market power to favor preferred websites and control what services consumers can access from their smartphones, tablets and PCs.

Three judges from the D.C. Circuit have been named to hear the oral argument on Dec. 4. Much like the Supreme Court, the very makeup of this panel could subtly shape the course of events. What do we know about the judges? Are they familiar with the issues? How might they vote? Below, get briefly acquainted with each one ahead of the big day.

Judge Sri Srinivasan is a relative newcomer to the court, having been appointed by President Obama in 2013. His views on net neutrality and technology aren't clear, making him a bit of an enigma. But we do know this much: He's said to be a rising star. Srinivasan is reportedly on the Democratic Party's shortlist for Supreme Court nominees.

Getting there certainly wasn't easy. While Srinivasan was under consideration for the D.C. Circuit post, some liberals attempted to torpedo his nomination because of his past jobs. He'd previously been a lawyer in the George W. Bush administration and had represented clients such as Exxon in human-rights cases. Here's how Mother Jones described him in 2013:

At a time when Republican obstruction has ground the confirmation process to a halt, and the outspoken progressivism — or even mild progressivism — of prior Obama nominees has run into GOP filibusters, Srinivasan's unclear record offers Republicans few legitimate reasons to block him. It also means that liberals can't be sure that Srinivasan actually shares their views.

When it comes to net neutrality, that last point is just as true today as it was two years ago.

Judge Stephen F. Williams is a senior judge on the D.C. Circuit. Appointed by President Ronald Reagan in 1986, Williams is described by some court-watchers as skeptical of preemptive regulation when after-the-fact antitrust enforcement may suffice. He's written prolifically about regulation, particularly on environmental issues.

That makes Williams an incredibly interesting character. Looking back at some of his articles, it's clear Williams has a nuanced and complex relationship with his job. In "The Roots of Deference," a 1991 book review for the Yale Law Journal, Williams lays out a theory for how judges should interpret federal agency decisions that come before the courts.

This is significant because it's exactly the situation we're in now, with industry groups challenging the FCC's net neutrality rules as unlawful. It's the job of the D.C. Circuit to decide whether the FCC did, in fact, go too far. Though Williams might view regulation more skeptically in general, in 1991 he made a conservative argument for judicial restraint when federal agencies test out certain, possibly controversial, legal theories.

"An agency's caution in one domain may require it to extend itself in another, just as a stretch — going to the edge — in one may enable it to occupy safe territory in another," Williams wrote. He went on:

Courts have a duty in appropriate cases to curb agency lawlessness, and carrying out that duty contributes to sound governance. But just as masons building a cathedral should not supplant the architect, even though both are creating a work of art, a judge should not supplant the politician or administrator though all are seeking sound governance.

In short, it's unwise to make any hard-and-fast assumptions about how Williams is likely to rule in the net neutrality case.

Judge David S. Tatel's key credential here is that he authored the legal opinion that led to this current case. Appointed by President Bill Clinton, Tatel has the unusual distinction of enjoying skiing, marathoning and climbing mountains — while blind. Tatel has a background in civil rights and education law, and once served in the administration of President Jimmy Carter.

Tatel, along with two other judges, held in 2014 that the FCC misused its powers to impose net neutrality on Internet providers. But they never explicitly said what the FCC should do to get on the right side of the law. That has led to a furious debate over the court's ruling. Partisans on both sides say the court laid out a very clear road map for the FCC; it's just that each side disagrees on what that road map actually said.

That 2014 net neutrality case is known as Verizon v. FCC, and Tatel is the sole returning judge this time, drawing that much more attention to his role in the last round.

Because both sides are claiming to have properly interpreted Tatel's 2014 ruling, everyone's watching to see how Tatel himself will now view this case.

Much as we shouldn't read too much into Williams's conservative leanings, however, we shouldn't conclude that Tatel necessarily has any greater insight to offer on the case than either of his colleagues. Nor should we assume that Srinivasan will side with the FCC just because he's a Democratic appointee who stands to defend his position on the Supreme Court shortlist if he sides with the Obama administration.

That said, knowing the judges' backgrounds ahead of time helps put their questioning — and their decisions — into greater context, making it easier to understand it all later.

Monday, November 23, 2015

Two dozen Disney IT workers prepare to sue over foreign replacements

Increasingly, U.S. IT workers are alleging discrimination

By Patrick Thibodeau 
Computerworld | Nov 23, 2015 1:25 PM PT

At least 23 former Disney IT workers have filed complaints with the federal Equal Employment Opportunity Commission (EEOC) over the loss of their jobs to foreign replacements. This federal filing is a first step to filing a lawsuit alleging discrimination.

These employees are arguing that they are victims of national origin discrimination, a complaint increasingly raised by U.S. workers who have lost their jobs to foreign workers on H-1B and other temporary visas.

Sara Blackwell, the Florida attorney representing the workers, says Thursday is the EEOC filing deadline for Disney employees who were terminated on Jan. 30.

These employees are making discrimination claims with the EEOC under Title VII of the Civil Rights Act of 1964, citing in part "hostile treatment in forcing the Americans to train their replacements." The claims include discrimination based on national origin and age.

A Disney spokeswoman, Jacquee Wahler, in an email response to the EEOC claims, said: "We comply with all applicable employment laws. We are expanding our IT department and adding more jobs for U.S. IT workers."

Disney's layoff last January followed agreements with IT services contractors that use foreign labor, mostly from India. Some former Disney workers have begun to go public over the displacement process.

In the ongoing conflict over U.S. worker displacements, this may well be the largest number of people to take action in this manner. "I'm hoping that it signifies that American workers are being brave and standing up and doing something about it," said Blackwell.

The EEOC investigates claims of discrimination, and has the option of bringing its own lawsuit. The commission typically issues a right-to-sue letter, with the next step being a lawsuit.

Separately, Blackwell said, Disney workers are also claiming violation of Florida's discrimination laws. Employees have until Jan. 30, 2016 to file a state claim.

Tuesday, November 17, 2015

All Excuses Aside, Apple's Major Problem Is Tim Cook

11/15/2015 @ 9:30AM

Yes, I have heard all the good guys (the long-only crowd) make excuses for Apple’s terrible performance this year to date.

Yes, the market cap is too big. (It’s getting bigger for the likes of Amazon, Google/Alphabet, Microsoft et al. despite being “too” big as well.) That argument holds no water given what other companies have done this year despite their own giant market caps. There is no market-cap ceiling anywhere you look.

Yes, investors don’t understand Apple. (Maybe that’s one of the issues with Apple and an Apple-specific issue.)

Yes, they have this, that and the other coming down the pipeline. (So do other companies–no competitor is standing still.)

Yes, Tim Cook and Eddy Cue and Jony Ive said that this is Apple’s year, like they said last year was, and the year before (all the while selling tens of millions in stock options, if not hundreds of millions).

Yes, wait until next year for an Apple TV set (now forgotten).

Yes, wait until a few years for the iCar.

Yes, wait for China sales to kick in and then wait for India to pick up (they are picking up already–China sales up 90% YoY and India probably 2x that).

Foreign exchange is an issue–just wait till that headwind turns into a tailwind. (FX is an issue for everyone, and with the Fed on a rate-raising “jihad,” that FX problem could worsen before getting better.)

Wait until Angela Ahrendts’ magic kicks in. (Seriously? She got paid almost $75 million just for her first year alone, and since she joined Apple in May 2014 there has been nary a peep about what she has done in return for the company, except for the fact that she has been furiously selling her stock.)

Investors are mistakenly classifying Apple as a hardware company. (Then it’s the job of Apple’s management to clear the mistake.)

Q3 results were very good. (They were very good only given the amount of fear-mongering and negativity that went on ahead of the results.)

No one will deny that this has been a rocky year with very volatile markets, so I thought it would be a good idea to look at a few companies best associated with Apple from a cutting edge of technology, market cap and brand name awareness point of view, and compare price performance for 2015.

I thought we would take a look at several important price points: the price at the start of the year, the price at the height of the China scare (aka “the world is ending”), the pre-Q3-earnings price, the post-Q3-earnings price and the price as of last Friday’s (11/13/15) close.

Let’s start with Facebook, led by Mark Zuckerberg, CEO and founder. The company began the year at $78/share, went down to $72/share in our last crash in August, was trading at $104/share the day before Q3 earnings were released, popped to $110 the next day and closed at $104 this past Friday. (Market cap of Facebook is $294 billion.)

Next, let’s look at Microsoft, aka “Mr. Softee No More,” with Satya Nadella as the head honcho. The shares began the year at $47 per share, fell to $40 per share in the August crash, were at $48 per share pre-Q3 earnings, shot up to $53/share the next day and closed on Friday at $53/share. (Market cap is $422 billion.)

Next up is Google/Alphabet, led by Sundar Pichai/Larry Page/Sergey Brin as the top triumvirate, which began the year at $530 per share, was available for $594 at the August crash lows, traded at $681/share pre-Q3 earnings, popped to $719/share post the report and closed Friday at $735/share. (Market cap of Alphabet is $509 billion.)

Take Amazon, led by CEO and founder, Jeff Bezos, which began the year at $309/share, was buyable at $451/share in the August melt-down, at $564/share pre-Q3 earnings, $599/share the day after and $642/share this past Friday. (Amazon market cap is $301 billion.)

Then we have Apple, led by Tim Cook. Apple began the year at $109/share, hit a low of $92/share in the August crash, was at $115/share prior to Q3 (Apple FQ4) earnings, $119/share the day after and closed on Friday at $112/share. Yes, I know we got those whopping dividends the last three quarters. (Market cap is $626 billion.)

Heck, let’s even compare Priceline, led by Darren Huston (who?), which got absolutely hammered after its own Q3 earnings. The company began the year at $1142/share, hit a low of $1151/share in the August melt-down, made a huge run to $1450/share prior to Q3 earnings, was hammered down to $1311/share the next day and closed Friday at $1298/share. (Market cap is $65 billion.)

Finally, the Nasdaq began the year at 4727, hit a low of 4506 in the August crash, and closed at 4928 on Friday.
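For readers who want the year-to-date math behind these price points, here is a short Python sketch that reproduces the percentage moves implied by the figures quoted above (start of 2015 to the Friday 11/13/15 close, all numbers taken directly from the preceding paragraphs):

```python
# Year-to-date performance implied by the price points cited in the article:
# (start-of-2015 price, Friday 11/13/15 closing price) for each name.
prices = {
    "Facebook":  (78, 104),
    "Microsoft": (47, 53),
    "Alphabet":  (530, 735),
    "Amazon":    (309, 642),
    "Apple":     (109, 112),
    "Priceline": (1142, 1298),
    "Nasdaq":    (4727, 4928),
}

def ytd_change(start, end):
    """Percentage change from the start-of-year price to the Friday close."""
    return (end - start) / start * 100

# Print the names ranked from best to worst YTD performance.
for name, (start, end) in sorted(prices.items(),
                                 key=lambda kv: -ytd_change(*kv[1])):
    print(f"{name:10s} {ytd_change(start, end):+6.1f}%")
```

Run as written, this ranks Amazon at the top (up over 100 percent) and Apple at the bottom (up under 3 percent), which is the whole point of the comparison.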

So what do Jeff Bezos, Mark Zuckerberg, Larry Page & Sergey Brin have that Tim Cook does not? Let me count the ways for you:

All of these guys are absolute tech geniuses, Tim Cook included, but with one difference: Tim Cook is not a Wall Street-friendly CEO and does not, and cannot, impress Wall Street. Jeff Bezos is a master at convincing Wall Street that his “build it and they will come” approach is the way to go, and analysts and shareholders have been absolutely lapping it up. Even Zuck, young as he is, is able to convince shareholders and the Street of his “vision.” Of course, he has help from Sheryl Sandberg, who is no shrinking violet either. Finally, Google/Alphabet has CFO Ruth Porat, who has actually been a Wall Street executive (CFO of Morgan Stanley) and thus knows exactly what will get shareholders and analysts tripping over themselves to buy the stock and issue reports with strong-buy ratings and ever-higher price targets.

The others under-promise and over-deliver, aka UPOD, while thus far Tim Cook has done the opposite: over-promise and under-deliver, or OPUD.

The others have made brilliant hires in the last year or so, while Tim Cook has hired Angela Ahrendts (see above), who has certainly been very, very busy selling her stock. Google got Ruth Porat and recently promoted Sundar Pichai. Microsoft appointed Satya Nadella as the head honcho after stumbling and bumbling through the Steve Ballmer “hoot and holler” years. All these hires are seen as very shareholder- and Wall Street-friendly, and that is the kind of hire Tim Cook has been unable or unwilling to make. Of course, Tim Cook did give us Angela Ahrendts, lest we forget.

Of the above, Google/Alphabet, Facebook and Amazon still have active founders while Apple, Microsoft and Priceline do not, and the results are there for all to see. Of course, Steve Jobs’ premature passing was a blow not just to Apple but to fans of technology globally, and certainly no fault of Tim Cook’s. However, an employee of a company (like Tim Cook, Darren Huston et al.) just cannot be as passionate about repaying the faith shareholders are showing the company (by being shareholders in the first place) as a founder will be. Especially not “employees” who are being paid hundreds of millions of dollars to run a company, with compensation that is not 100% tied to share performance. My take: if they make shareholders money, the top managers make money. If not, then naught for them as well.

All the above companies have had issues with a stronger greenback, with a China slowdown, if there is one (heck, Facebook and Google don’t even operate directly in China), and with a schizophrenic stock market here at home thanks to our Federal Reserve’s crusade, yet they have still managed to perform very well, to say the least.

Look, Tim Cook might be an absolute Mahatma Gandhi of a human being, but he does not seem to be the right person to lead the biggest, and one of the most technologically savvy, companies in the world.

Can you imagine where Apple would be were it not for the biggest share buyback in corporate history?

I shudder to think.

So, until things change at Apple, or Tim Cook changes or shows us something meaningful, or maybe makes a meaningful Wall Street-savvy hire, the shares of the biggest and probably one of the top global brands in the world will more than likely continue to underperform. Meanwhile, Tim Cook and company will continue cashing in the tens and hundreds of millions of dollars’ worth of options that vest, and we shareholders will continue to stand by and watch passively.

Well, yours truly has chosen not to stand by or watch passively.

Until the next article, “may the trade always be in your favor.”

Sunday, November 15, 2015

Publishers underwhelmed with Apple News app
Friday, November 13, 2015 · 10:23 am

“When Time Inc. CEO Joe Ripp expressed frustration with his company’s performance on Apple News last week, his complaints apparently were just the tip of the iceberg,” Lucia Moses reports for Digiday. “Other publishing execs are unhappy about everything from the traffic they’re getting from the two-month-old news aggregation app to the user experience to the data Apple’s giving them.”

“As one publisher, who like others wouldn’t talk on the record for fear of jeopardizing their relationship with Apple, said, ‘The traffic is underwhelming,'” Moses reports. “Data is also a sticking point. Apple is providing weekly data reports including basics like the volume of page views and shares, but publishers want a dashboard that they can use to analyze data on demand, and more demographic data on users. To appeal to publishers, Apple was supposed to let them count the views toward their traffic and let publishers sell ads into the app. But publishers said Apple has been delayed in adding measurement firm comScore tags to the content.”

“There are execution issues, too. There are more than 70 publishers in the app, but only a few get featured at a time on the app’s promotional screen, so some could be getting a big traffic advantage over others,” Moses reports. “There are kinks in the user experience, too. Apple hasn’t provided ways to promote individual stories so they’re not all just in reverse-chronological order, as some had hoped. Two features of Apple News are its story personalization and recommendation, but the selections don’t seem especially personalized and the “related stories” section often contains other publishers’ version of the same story that the user clicked on, giving it a stale feeling. All this adds up to a feeling that Apple wasn’t ready for the app release.”

Read more in the full article here.

MacDailyNews Take: “Apple wasn’t ready for the app release.”


So far, Apple under Tim Cook badly botched the iPad 2 launch, completely botched the iMac release in 2012 (missing Christmas, no less), botched the Maps release beyond belief, then botched the Apple Watch release all to hell by launching with no supply (à la the iMac, so much for learning lessons), launched Apple Music with a horrendous UI and rampant usability issues, launched a wildly incomplete Apple TV without even providing simple basics like Apple Remote app compatibility, and just botched the release of the iPad Pro without having its Apple Pencil or its uninspired, poorly-reviewed so-called “Smart” Keyboard available for over a month.

Attention to detail, Tim. It means something. You should give it a try sometime.

Saturday, November 14, 2015

It’s Way Too Easy to Hack the Hospital -- Changing meds, stealing identities...
Firewalls and medical devices are extremely vulnerable, and everyone’s pointing fingers

By Monte Reel and Jordan Robertson | November 2015 from Bloomberg Businessweek

In the fall of 2013, Billy Rios flew from his home in California to Rochester, Minn., for an assignment at the Mayo Clinic, the largest integrated nonprofit medical group practice in the world. Rios is a “white hat” hacker, which means customers hire him to break into their own computers. His roster of clients has included the Pentagon, major defense contractors, Microsoft, Google, and some others he can’t talk about.

He’s tinkered with weapons systems, with aircraft components, and even with the electrical grid, hacking into the largest public utility district in Washington state to show officials how they might improve public safety. The Mayo Clinic job, in comparison, seemed pretty tame. He assumed he was going on a routine bug hunt, a week of solo work in clean and quiet rooms.

But when he showed up, he was surprised to find himself in a conference room full of familiar faces. The Mayo Clinic had assembled an all-star team of about a dozen computer jocks, investigators from some of the biggest cybersecurity firms in the country, as well as the kind of hackers who draw crowds at conferences such as Black Hat and Def Con. The researchers split into teams, and hospital officials presented them with about 40 different medical devices. Do your worst, the researchers were instructed. Hack whatever you can.

Like the printers, copiers, and office telephones used across all industries, many medical devices today are networked, running standard operating systems and living on the Internet just as laptops and smartphones do. Like the rest of the Internet of Things—devices that range from cars to garden sprinklers—they communicate with servers, and many can be controlled remotely. As quickly became apparent to Rios and the others, hospital administrators have a lot of reasons to fear hackers. For a full week, the group spent their days looking for backdoors into magnetic resonance imaging scanners, ultrasound equipment, ventilators, electroconvulsive therapy machines, and dozens of other contraptions. The teams gathered each evening inside the hospital to trade casualty reports.

“Every day, it was like every device on the menu got crushed,” Rios says. “It was all bad. Really, really bad.” The teams didn’t have time to dive deeply into the vulnerabilities they found, partly because they found so many—defenseless operating systems, generic passwords that couldn’t be changed, and so on.

The Mayo Clinic emerged from those sessions with a fresh set of security requirements for its medical device suppliers, requiring that each device be tested to meet standards before purchasing contracts were signed. Rios applauded the clinic, but he knew that only a few hospitals in the world had the resources and influence to pull that off, and he walked away from the job with an unshakable conviction: Sooner or later, hospitals would be hacked, and patients would be hurt. He’d gotten privileged glimpses into all sorts of sensitive industries, but hospitals seemed at least a decade behind the standard security curve.

“Someone is going to take it to the next level. They always do,” says Rios. “The second someone tries to do this, they’ll be able to do it. The only barrier is the goodwill of a stranger.”

Rios lives on a quiet street in Half Moon Bay, a town about 25 miles south of San Francisco, pressed against a rugged curl of coastline where scary, 50-foot waves attract the state’s gutsiest surfers. He’s 37, a former U.S. Marine and veteran of the war in Iraq. In the Marines, Rios worked in a signal intelligence unit and afterward took a position at the Defense Information Systems Agency. He practices jiu-jitsu, wanders the beach in board shorts, and shares his house with his wife, a 6-year-old daughter, and a 4-year-old son. His small home office is crowded with computers, a soldering station, and a slew of medical devices.

Shortly after flying home from the Mayo gig, Rios ordered his first device—a Hospira Symbiq infusion pump. He wasn’t targeting that particular manufacturer or model; he simply happened to find one posted on eBay for about $100. It was an odd feeling, putting it in his online shopping cart. Was buying one of these without some sort of license even legal? he wondered. Is it OK to crack this open?

Infusion pumps can be found in almost every hospital room, usually affixed to a metal stand next to the patient’s bed, automatically delivering intravenous drips, injectable drugs, or other fluids into a patient’s bloodstream. Hospira, a company that was bought by Pfizer this year, is a leading manufacturer of the devices, with several different models on the market. On the company’s website, an article explains that “smart pumps” are designed to improve patient safety by automating intravenous drug delivery, which it says accounts for 56 percent of all medication errors.

Rios connected his pump to a computer network, just as a hospital would, and discovered it was possible to remotely take over the machine and “press” the buttons on the device’s touchscreen, as if someone were standing right in front of it. He found that he could set the machine to dump an entire vial of medication into a patient. A doctor or nurse standing in front of the machine might be able to spot such a manipulation and stop the infusion before the entire vial empties, but a hospital staff member keeping an eye on the pump from a centralized monitoring station wouldn’t notice a thing, he says.
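The article does not describe the pump's actual protocol, but the class of flaw Rios found (remotely "pressing" a device's buttons over the network) typically comes down to a command service that requires no authentication at all. The sketch below is purely illustrative: the port number, the `PRESS` command syntax, and the function name are invented for this example and are not Hospira's real interface.

```python
# Hypothetical illustration of the CLASS of flaw described above: a device
# exposing an unauthenticated command channel. The port and command syntax
# are invented; this is NOT the real Hospira protocol.
import socket

def press_button(host, button, port=5000, timeout=3.0):
    """Send a made-up 'button press' command to an imaginary device service.

    Because there is no authentication step, anyone who can reach the port
    can drive the device's UI exactly as if standing in front of it.
    """
    with socket.create_connection((host, port), timeout=timeout) as conn:
        conn.sendall(f"PRESS {button}\n".encode())  # no credentials required
        return conn.recv(64).decode().strip()       # the device's acknowledgment
```

The point of the sketch is the absence of any login or signing step before the command is accepted; that is what lets an attacker on the hospital network act as if standing at the bedside.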

In the spring of 2014, Rios typed up his findings and sent them to the Department of Homeland Security’s Industrial Control Systems Cyber Emergency Response Team (ICS-CERT). In his report, he listed the vulnerabilities he had found and suggested that Hospira conduct further analysis to answer two questions: Could the same vulnerabilities exist in other Hospira devices? And what potential consequences could the flaws present for patients? DHS in turn contacted the Food and Drug Administration, which forwarded the report to Hospira. Months passed, and Rios got no response from the manufacturer and received no indication that government regulators planned to take action.

“The FDA seems to literally be waiting for someone to be killed before they can say, ‘OK, yeah, this is something we need to worry about,’ ” Rios says.

Rios is one of a small group of independent researchers who have targeted the medical device sector in recent years, exploiting the security flaws they’ve uncovered to dramatic effect. Jay Radcliffe, a researcher and a diabetic, appeared at the 2011 Def Con hacking conference to demonstrate how he could hijack his Medtronic insulin pump, manipulating it to deliver a potentially lethal dose. The following year, Barnaby Jack, a hacker from New Zealand, showed attendees at a conference in Australia how he could remotely hack a pacemaker to deliver a dangerous shock. In 2013, Jack died of a drug overdose one week before he was scheduled to attend Black Hat, where he promised to unveil a system that could pinpoint any wirelessly connected insulin pumps within a 300-foot radius, then alter the insulin doses they administered.

Such attacks angered device makers and hospital administrators, who say the staged hacks threatened to scare the public away from technologies that do far more good than harm. At an industry forum last year, a hospital IT administrator lost his temper, lashing out at Rios and other researchers for stoking hysteria when, in fact, not a single incident of patient harm has ever been attributed to lax cybersecurity in a medical device. “I appreciate you wanting to jump in,” Rick Hampton, wireless communications manager for Partners HealthCare System, said, “but frankly, some of the National Enquirer headlines that you guys create cause nothing but problems.” Another time, Rios was shouted at by device vendors on a conference call while dozens of industry executives and federal officials listened in. “It wasn’t just someone saying, ‘Hey, you suck,’ or something,” Rios remembers, “but truly, literally, screaming.”

“All their devices are getting compromised, all their systems are getting compromised,” he continues. “All their clinical applications are getting compromised—and no one cares. It’s just ridiculous, right? And anyone who tries to justify that it’s OK is not living in this world. They’re in a fantasyland.”

Last fall, analysts with TrapX Security, a firm based in San Mateo, Calif., began installing software in more than 60 hospitals to trace medical device hacks. TrapX created virtual replicas of specific medical devices and installed them as though they were online and running. To a hacker, the operating system of a fake CT scanner planted by TrapX would appear no different from the real thing. But unlike the real machines, the fake devices allowed TrapX to monitor the movements of the hackers across the hospital network. After six months, TrapX concluded that all of the hospitals contained medical devices that had been infected by malware.
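TrapX's decoys are, in essence, network honeypots. Its actual emulation software is proprietary and far richer, but the core idea can be sketched in a few lines: a listener that greets each connection with a device-like banner and records every probe for later analysis. Everything specific here (the port, the "DICOM-STORE" banner string) is invented for illustration.

```python
# Minimal honeypot sketch in the spirit of TrapX's decoy devices (illustrative
# only; the banner text and port are invented, not TrapX's or any vendor's).
import socket
import datetime

def run_decoy(host="0.0.0.0", port=2323,
              banner=b"DICOM-STORE v1.2 ready\r\n",
              max_conns=None, log=print):
    """Accept connections, greet each with a device-like banner, log the peer."""
    conns = 0
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        while max_conns is None or conns < max_conns:
            client, addr = srv.accept()
            with client:
                stamp = datetime.datetime.now().isoformat(timespec="seconds")
                log(f"{stamp} probe from {addr[0]}:{addr[1]}")  # evidence trail
                client.sendall(banner)  # impersonate a real device
            conns += 1
```

A real decoy would go much further (emulating a full operating system and letting the intruder roam so their movements can be traced), but even this stripped-down version captures the design: the fake device's only job is to be attacked and to remember who attacked it.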

In several cases, the hackers “spear phished” hospital staffers, luring them into opening e-mails that appeared to come from senders they knew, which infected hospital computers when they fell for the bait. In one case, hackers penetrated the computer at a nurses’ station, and from there the malware spread throughout the network, eventually slipping into radiological machines, blood gas analyzers, and other devices. Many of the machines ran on cheap, antiquated operating systems, such as Windows XP and even Windows 2000. The hospital’s antivirus protections quickly scrubbed the computer at the nurses’ station, but the medical devices weren’t so well guarded.

Many of the hospitals that participated in the study rely on the device manufacturers to maintain security on the machines, says Carl Wright, general manager for TrapX. That service is often sporadic, he says, and tends to be reactive rather than preventive. “These medical devices aren’t presenting any indication or warning to the provider that someone is attacking it, and they can’t defend themselves at all,” says Wright, who is a former information security officer for the U.S. military.

After hackers had compromised a medical device in a hospital, they lurked there, using the machine as a permanent base from which to probe the hospital network. Their goal, according to Wright, was to steal personal medical data.

A credit card is good only until its expiration date and becomes almost useless as soon as the owner notices that it has been stolen. Medical profiles often contain that same credit card information, as well as Social Security numbers, addresses, dates of birth, familial relationships, and medical histories—tools that can be used to establish false identities and lines of credit, to conduct insurance fraud, or even for blackmail. Simple credit card numbers often sell for less than $10 on the Web’s black market; medical profiles can fetch 10 times as much. For a hacker, it’s all about resale value.

The decoy devices that TrapX analysts set up in hospitals allowed them to observe hackers attempting to take medical records out of the hospitals through the infected devices. The trail, Wright says, led them to a server in Eastern Europe believed to be controlled by a known Russian criminal syndicate. Basically, they would log on from their control server in Eastern Europe to a blood gas analyzer; they’d then go from the BGA to a data source, pull the records back to the BGA, and then out. Wright says they were able to determine that hackers were taking data out through medical devices because, to take one example, they found patient data in a blood gas analyzer, where it wasn’t supposed to be.

In addition to the command-and-control malware that allowed the records to be swiped, TrapX also found a bug called Citadel, ransomware that’s designed to restrict a user’s access to his or her own files, which allows hackers to demand payment to restore that access. The researchers found no evidence suggesting the hackers had actually ransomed the machines, but its mere presence was unsettling. “That stuff is only used for one purpose,” Wright says.

Hospitals generally keep network breaches to themselves. Even so, scattered reports of disruptions caused by malware have surfaced. In 2011, the Gwinnett Medical Center in Lawrenceville, Ga., shut its doors to all non-emergency patients for three days after a virus crippled its computer system. Doctor’s offices in the U.S. and Australia have reported cases of cybercriminals encrypting patient databases and demanding ransom payments. Auditing firm KPMG released a survey in August that indicated 81 percent of health information technology executives said the computer systems at their workplaces had been compromised by a cyber attack within the past two years.

Watching all this, Rios grew anxious for federal regulators to pay attention to the vulnerabilities he’d found in the Hospira pump. In the summer of 2014 he sent reminders to the Department of Homeland Security, asking if Hospira had responded to his suggestions. According to an e-mail from DHS, the company was “not interested in verifying that other pumps are vulnerable.”

A few weeks after he received that message, an increasingly frustrated Rios found himself in a vulnerable position: immobilized in a hospital bed, utterly dependent upon, of all things, an infusion pump.

Late last July, Rios began snoring loudly, which interrupted his sleep enough that he went to a doctor, who discovered a polyp inside his nose, near the cerebral membrane. The polyp was removed—a simple outpatient procedure—but days later Rios developed a fever and noticed clear liquid leaking from his nose. Years before, he’d broken it, and the doctors thought the polyp had grown around scar tissue. When the polyp was removed, some of the scar tissue that had protected his brain casing must have been clipped, too. The clear liquid coming out of his nose was cerebral fluid.

He spent two weeks at Stanford Hospital, in a room filled with the kind of gadgetry he’d been breaking into. After a few dazed days in bed, he got his bearings and assessed his situation. His bed was plugged into a network jack. The pressure bands strapped around his legs, which periodically squeezed his calves to aid circulation, were also connected to a computer. He counted 16 networked devices in his room, and eight wireless access points. The most obvious of these was the CareFusion infusion pump, a brand he hadn’t looked into yet, that controlled the fluids that were pumped into his arm. “It wasn’t like I was going to turn to the doctor and say, ‘Don’t hook me up to that infusion pump!’ ” Rios recalls. “I needed that thing.”

He noticed that the other patient in his room, separated from him by a curtain, was connected to a Hospira pump. “I kept thinking, ‘Should I tell him?’ ” Rios says. He opted for silence.

When he was able to drag himself out of bed, Rios wheeled his infusion pump into the bathroom, where he gave it a good once-over. “I’m looking at the wireless card, pushing the buttons on it, seeing what menus I can get to,” he recalls. It only inflamed his concerns. “Whatever Wi-Fi password they’re using to let the pump join the network, I could get that off the pump pretty easily.”

In the hallway just outside his room, Rios found a computerized dispensary that stored medications in locked drawers. Doctors and nurses normally used coded identification badges to operate the machine. But Rios had examined the security system before, and he knew it had a built-in vulnerability: a hard-coded password that would allow him to “jackpot” every drawer in the cabinet. Such generic passwords are common in many medical devices, installed to allow service technicians to access their systems, and many of them cannot be changed. Rios and a partner had already alerted Homeland Security about those password vulnerabilities, and the agency had issued notices to vendors informing them of his findings. But nothing, at least at this hospital, had been done. In the hallway, he quickly discovered that all the medications in the device’s drawers could have been his for the taking. “They hadn’t patched it at this point, so I was testing some passwords on it, and I was like, ‘This s--- works!’ ”
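
The flaw Rios exploited is a pattern, not a one-off: a factory-set service credential that works on every unit shipped and that the hospital cannot change. A toy sketch (hypothetical code, not actual device firmware; the function and password names are invented for illustration) shows why such a credential defeats any per-site password policy:

```python
# Toy illustration of a hard-coded service credential, like the ones
# Rios found in automated medicine cabinets. The hospital can rotate
# its own password, but the baked-in technician password works on
# every unit ever shipped and cannot be changed.

SERVICE_PASSWORD = "svc1234"  # hypothetical factory-set backdoor

def unlock_drawer(entered_password: str, site_password: str) -> bool:
    """Return True if the entered password unlocks the drawer.

    `site_password` is the per-hospital credential, which staff can
    rotate; SERVICE_PASSWORD is compiled in and identical everywhere.
    """
    return entered_password in (site_password, SERVICE_PASSWORD)

# The hospital rotates its password after a scare...
assert unlock_drawer("fresh-password", site_password="fresh-password")
# ...but anyone who has learned the factory credential still gets in:
assert unlock_drawer("svc1234", site_password="fresh-password")
```

Because the service password is identical across the fleet, disclosing it once, say in a leaked service manual, compromises every installed device at once, which is why Rios could "jackpot" a cabinet he had never touched before.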

He didn’t touch any drugs, he says, but when he was released, he tried to turn up the heat on Hospira. He’d already told the federal government that he knew how to sabotage the pumps, but after he returned home he decided to make a video to show them how easily it could be done. He aimed the camera directly at the infusion pump’s touchscreen and demonstrated how he could remotely press the buttons, speeding through password protections, unlocking the infuser, and manipulating the machine at will. Then he wrote out sample computer code and sent it to the DHS and the FDA so they could test his work for themselves.

“We have to create videos and write real exploit code that could really kill somebody in order for anything to be taken seriously,” Rios says. “It’s not the right way.”

But it got the FDA’s attention. Finally, after more than a year of hectoring from Rios, the FDA in July issued an advisory urging hospitals to stop using the Hospira Symbiq infusion pump because it “could allow an unauthorized user to control the device and change the dosage the pump delivers.”

“It’s viewed as precedent-setting,” says Suzanne Schwartz, who coordinates cybersecurity initiatives for the FDA’s Center for Devices and Radiological Health. “It’s the first time we’ve called out a product specifically on a cybersecurity issue.”

“There have been no known breaches of a Hospira product in a clinical setting, and the company has worked with industry stakeholders to make sure that doesn’t happen,” says MacKay Jimeson, a spokesman for Pfizer.

The medical research community didn’t break out in celebration over the advisory. Hospira said that it would work with vendors to remedy any problems and that the Symbiq model was off the market. But the advisory was merely that: It didn’t force the company to fix the machines that were already in hospitals and clinics, and it didn’t require the company to prove that similar cybersecurity flaws didn’t also affect its other pump models. For some researchers, the advisory felt like a hollow victory.

“It was the moment we realized that the FDA really was a toothless dragon in this situation,” says Mike Ahmadi, a researcher active in the medical device sector.

The FDA’s challenge is a tricky one: to draft regulations that are specific enough to matter yet general enough to outlast threats that mutate and adapt much faster than the products the agency must certify. The agency finalized a set of guidelines last October that recommended—but didn’t require—that medical device manufacturers consider cybersecurity risks in their design and development phases and that they submit documentation to the agency identifying any potential risks they’ve discovered. But the onus doesn’t rest solely on manufacturers; Schwartz emphasizes that providers and regulators also need to address the challenge, which she calls one “of shared responsibility and shared ownership.”

Divvying up that responsibility is where things get messy. After the guidelines were published, the American Hospital Association sent a letter to the FDA saying health-care providers were happy to do their part, but it urged the agency to do more to “hold device manufacturers accountable for cybersecurity.” It said device vendors need to respond faster to vulnerabilities and patch problems when they occur. Device vendors, meanwhile, have pointed out that to be hacked, criminals first need to breach the firewalls at hospitals and clinics; so why was everyone talking about regulating the devices when the providers clearly needed to improve their network protections? Hospira, in a statement issued after the FDA advisory, labeled hospital firewalls and network security “the primary defense against tampering with medical devices” and said its own internal protections “add an additional layer of security.” Others have suggested that security researchers such as Rios are pressuring the industry to adopt security measures that might get in the way of patient care.

At a forum sponsored by the FDA to discuss the guidelines, an anesthesiologist from Massachusetts General Hospital in Boston used the example of automated medicine cabinets, like the one that Rios had cracked, to make this point. After Rios told the government about the password vulnerability, some hospitals began instituting fingerprint scans as a backup security measure. “Now, one usually wears gloves in the operating room,” Dr. Julian Goldman told those at the forum. Fumbling with those gloves, fiddling with the drawer, making sure no contaminated blood got near the exposed hands, yanking the gloves back on—it turned out to be a maddening hassle, he suggested, and a potentially dangerous waste of time. “I can tell you that it certainly brings it home when you suddenly need something,” Goldman said, “and as you’re turning around to reach for the drawers, you hear click-click-click-click, and they lock, just as you are reaching for the drawers to get access to a critical drug.”

Rios says he doesn’t care how manufacturers or hospitals fix the problem, so long as they do something. The Hospira saga convinced him that the only way for that to happen is to continue to pressure manufacturers, calling them out by name until they’re forced to pay attention. That automated medicine cabinet wasn’t the only device he’d found with a hard-coded password; along with research partner Terry McCorkle, Rios found the same vulnerability in about 300 different devices made by about 40 different companies. The names of those vendors weren’t released when the government issued its notice about the problem, and Rios says none of them has fixed the password problem. “What that shows me,” he says, “is that without pressure on a particular vendor, they’re not going to do anything.”

Since the FDA’s Hospira advisory was issued this July, boxes of medical devices have continued to arrive on Rios’s doorstep in Half Moon Bay, and they’ve crowded his office so much that he’s been forced to relocate some to his garage. No one is paying him to try to hack them, and no one is reimbursing his expenses. “I’ve been lucky, and I’ve done well, so it’s not that big of a deal for me to buy a $2,000 infusion pump and look at it whenever I have time,” he says.

For novice independent researchers, however, access to devices can be a forbidding barrier to work in this field. Infusion pumps are relatively affordable, but MRI machines, for example, cost hundreds of thousands of dollars, if not more. And radiological equipment requires a special license. To encourage more research on devices, Rios is trying to establish a lending library of medical equipment; he and a group of partners have begun lobbying hospitals for used devices, and they’re hoping to crowdsource the purchase of new ones.

The buzz that surrounded the Hospira advisory this year might have done more to attract new researchers to the field than anything Rios could do. Kevin Fu, a professor of engineering who oversees the Archimedes Research Center for Medical Device Security at the University of Michigan, has been investigating medical device security for more than a decade, and he’s never seen as much interest in the field as he’s noticed this year. “Every day I hear of another name I hadn’t heard before, somebody who hadn’t been doing anything with medical devices,” Fu says. “And out of the blue, they find some problems.”

On a sunny fall day in Half Moon Bay, Rios grabs an iced coffee at a Starbucks in the city center. He’s fresh off a week of work in Oklahoma—one of those assignments he can’t talk about—and he’s looking forward to some family time. Maybe in a spare moment, he’ll grab one of the devices in his office and see what flaws he can find inside it.

One of those machines is exerting a powerful pull on him, as if begging to be hacked. After he was released from the hospital last year, he surfed around online and found the same CareFusion pump that had been tethered to him for two weeks. It now sits near a filing cabinet in his office.

“It’s next,” Rios says.

Friday, November 13, 2015

Tor Project warns: Academics accused of helping FBI de-anonymize Internet users

By Andrew Blake - The Washington Times - Thursday, November 12, 2015

Researchers from Carnegie Mellon are being accused of helping the FBI exploit a vulnerability that allowed investigators to gather information on users of Tor, an online tool that allows individuals around the globe to browse the Internet anonymously.

Tor Project, the not-for-profit group behind the technology, said on Wednesday that academics from Carnegie Mellon University made “at least $1 million” by helping the FBI de-anonymize Tor users earlier this year during the course of a criminal investigation.

“Such action is a violation of our trust and basic guidelines for ethical research. We strongly support independent research on our software and network, but this attack crosses the crucial line between research and endangering innocent users,” Tor said in a statement.

Tor allows users to stay relatively anonymous online by routing Internet traffic through various nodes around the world, in turn making it difficult for eavesdroppers to see where users are located or the websites they visit. It’s popular among whistleblowers, journalists, human rights workers and law enforcement officials who use the tool to mask their online activity, as well as individuals in repressive regimes where access to online content is restricted by the government.
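
The layering Tor uses can be sketched in a few lines (a conceptual toy with no real cryptography; in actual Tor each layer is encrypted with a per-relay key negotiated during circuit setup, so only that relay can peel it):

```python
# Conceptual sketch of Tor-style onion routing: the client wraps its
# message in one layer per relay, and each relay can see only the next
# hop, never the whole path.

def wrap(message, path):
    """Wrap `message` in one layer per relay, innermost layer last."""
    packet = {"deliver": message}
    for relay in reversed(path):
        packet = {"for": relay, "payload": packet}
    return packet

def peel(packet):
    """A relay removes its own layer and forwards the inner payload."""
    return packet["for"], packet["payload"]

onion = wrap("GET example.com", ["guard", "middle", "exit"])
hop, onion = peel(onion)  # guard: knows the sender, not the destination
hop, onion = peel(onion)  # middle: knows neither endpoint
hop, onion = peel(onion)  # exit: knows the destination, not the sender
assert onion == {"deliver": "GET example.com"}
```

No single relay sees both endpoints, which is why de-anonymizing users generally requires an adversary who can observe or operate multiple points in the network at once, the kind of position the researchers' nodes reportedly gave them.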

Drug dealers and child pornographers also rely on the anonymity the technology provides, however, in order to operate on websites hosted on the Tor network — so-called “hidden services” where contraband can be bought, sold and bartered for without one’s real identity having to be revealed.

The latest discussion to concern law enforcement’s efforts to crack Tor erupted early on Wednesday when Vice’s Motherboard reported that court documents recently filed in the Western District of Washington revealed that investigators had identified an alleged drug dealer accused of selling narcotics through a hidden service, Silk Road 2.0, by way of a “university-based research institute that operated its own computers on the anonymous network” used by the online drug den.

Carnegie Mellon has yet to confirm it’s the “university-based research institute” named in court filings, but the attack as described shares overwhelming similarities with a presentation its researchers had planned to deliver at a hacking conference in August that ended up being nixed from the schedule at the last minute.

CERT/Carnegie Mellon researcher Alexander Volynkin had been scheduled to give a talk titled “You Don’t Have to be the NSA to Break Tor: Deanonymizing Users on a Budget” at Black Hat USA in Las Vegas. The presentation had planned to show that “a persistent adversary … can de-anonymize hundreds of thousands of Tor clients and thousands of hidden services within a couple of months [for] just under $3,000,” according to the synopsis.

“Apparently these researchers were paid by the FBI to attack hidden services users in a broad sweep, and then sift through their data to find people whom they could accuse of crimes,” Tor said in response to Motherboard’s report.

“I’d like to see the substantiation for their claim,” Ed Desautels, a public relations staffer at the school’s Software Engineering Institute, told WIRED this week in response to the allegations, adding that he was not personally aware of any payment made to CMU in exchange for the research, contrary to Tor’s claim of a $1 million reward.

Nevertheless, Tor has outright accused the school of aiding the authorities and said in a statement this week that the attack establishes a “troubling precedent.”

“Civil liberties are under attack if law enforcement believes it can circumvent the rules of evidence by outsourcing police work to universities. If academia uses ‘research’ as a stalking horse for privacy invasion, the entire enterprise of security research will fall into disrepute. Legitimate privacy researchers study many online systems, including social networks — if this kind of FBI attack by university proxy is accepted, no one will have meaningful 4th Amendment protections online and everyone is at risk,” it read in part.

The group added that it seems unlikely law enforcement obtained a warrant to execute the de-anonymizing process discovered by researchers “since it was not narrowly tailored to target criminals or criminal activity, but instead appears to have indiscriminately targeted many users at once.”

“We teach law enforcement agents that they can use Tor to do their investigations ethically, and we support such use of Tor — but the mere veneer of a law enforcement investigation cannot justify wholesale invasion of people’s privacy, and certainly cannot give it the color of ‘legitimate research,’ ” Tor said.

“Whatever academic security research should be in the 21st century, it certainly does not include ‘experiments’ for pay that indiscriminately endanger strangers without their knowledge or consent.”

Robots could steal 80 million US jobs: Bank of England

Alexandra Gibbs 6 Hours Ago

One central bank has some frightening predictions when it comes to job stability in the future.

Eighty million jobs in the United States are at risk of being taken over by robots in the next few decades, a Bank of England (BoE) official warned on Thursday.

With U.S. data showing total nonfarm employment at 142.6 million in October, that puts more than half of all American jobs at risk.

And the U.S. isn't the only country that would be at the mercy of the mechanical hands.

In a speech at the Trades Union Congress in London, the bank's chief economist, Andy Haldane, said that up to 15 million jobs in the U.K. were at risk of being lost to an age of machines, almost half of the country's current workforce.

To reach its conclusion, the Bank of England conducted a U.K. study that sorted occupations into three categories of automation probability: high, medium and low. It then measured the share of employment each category represented.

It based its study on research by Oxford professors Carl Benedikt Frey and Michael Osborne, who projected a similar shift in the U.S. workforce over the coming decades. The BoE's own predictions suggest these changes could materialize over the next 20 to 30 years.

Jobs with the highest probability of being taken over by a machine in the U.K. included administrative, production and clerical work. Haldane gave two contrasting examples: accountants face a 95 percent probability of losing their jobs to machines, while hairdressers face a lower risk, at 33 percent.
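
The study's method, assigning each occupation an automation probability, bucketing occupations by risk, then summing the employment share per bucket, can be sketched in a few lines. The occupation list and employment figures below are invented for illustration and are not the BoE's data; only the accountant and hairdresser probabilities come from the article:

```python
# Rough sketch of the BoE study's bucketing method. Employment figures
# are hypothetical; the 0.95 and 0.33 probabilities are the two
# examples Haldane cited (accountants and hairdressers).

occupations = {
    # occupation: (automation probability, workers employed)
    "accountant":  (0.95,   350_000),
    "clerical":    (0.80, 2_500_000),
    "production":  (0.75, 1_800_000),
    "hairdresser": (0.33,   280_000),
    "physician":   (0.05,   250_000),
}

def risk_bucket(probability: float) -> str:
    """Classify an automation probability as high, medium or low risk."""
    if probability >= 0.66:
        return "high"
    if probability >= 0.33:
        return "medium"
    return "low"

totals = {"high": 0, "medium": 0, "low": 0}
for probability, workers in occupations.values():
    totals[risk_bucket(probability)] += workers

workforce = sum(workers for _, workers in occupations.values())
for bucket, workers in totals.items():
    print(f"{bucket}: {workers / workforce:.0%} of employment")
```

The headline numbers (80 million U.S. jobs, 15 million U.K. jobs) come from applying this kind of tally to real occupational employment data; the cutoffs between buckets are a modeling choice, which is one reason Haldane conceded the projections "may be far too pessimistic."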

Because robots are more cost-effective than human workers over the long term, the lowest-wage jobs were also at very high risk of going to the machines.

However, Haldane did admit that these projections "may be far too pessimistic."

"The lessons of history are that rising real incomes have ridden to the rescue, boosting the demand for new goods from new industries requiring new workers," Haldane noted, adding that in the past, workers have moved up the income escalator by "skilling up," thereby staying one step ahead of the machine.

Haldane suggested humans may keep an edge over machines in jobs that require high-level reasoning, creativity and cognition, while AI (artificial intelligence) remains better suited to digital, data-driven problems.

The chief economist suggested that even if the study is accurate, a change in how society works may be under way. People may gravitate toward work in more tailored businesses, Haldane argued, adding that there are already early signs of a move toward more flexible working and temporary contracts.

"The smarter machines become, the greater the likelihood that the space remaining for uniquely-human skills could shrink further. Machines are already undertaking tasks which were unthinkable – if not unimaginable – a decade ago. Algorithms are rapidly learning not just to process and problem-solve, but to perceive and even emote."

Haldane isn't the only one speaking out against this threat.

Nobel Prize-winning economist Robert Shiller told CNBC in January that there's an "increasing fear of technology" in all its different forms, one that leaves open the question of what life and people will be like in 30 years.

Billionaire Jeff Greene echoed these comments on CNBC's Squawk Box on Thursday, saying that workers could go the same way the "horse and buggy" did – out of business – due to the "exponential growth of artificial intelligence."