Saturday, February 26, 2011
By Patrick Thibodeau
February 23, 2011 06:00 AM ET
Computerworld - Businesses are buying technology and lots of it, say some of the major enterprise vendors, including Hewlett-Packard, IBM and Dell. But consumers are holding back.
While sales of servers, storage and networking gear grew in double digits in the last quarter, consumer PC spending dipped into negative numbers for Dell and HP. IBM focuses on the business market.
In its latest quarterly report on Tuesday, HP said business hardware sales increased 22% from the same quarter a year ago. Dell last week said its large enterprise business was up 12%, and IBM last month said hardware revenue grew 21%.
However, the hardware gains are being dampened by consumer spending. The notable exception in the consumer market: Apple, which last month reported a revenue gain of 71%, thanks partly to sales of 7.33 million iPads in the last quarter.
HP said its personal systems group, which includes PC sales to consumers and businesses, declined 1%, despite 11% growth in PC spending by businesses. Dell said its consumer segment was down year over year by 8%, "relative to a strong Windows 7 launch last year."
Analysts see multiple forces impacting consumer PC buying at HP and Dell.
If you assembled all the things affecting PC sales at enterprise vendors into a word cloud it might look something like this: "Apple" would dominate in bold jumbo letters, and perhaps in similar-size letters would appear "economy," along with still-smaller names of the various Android tablets, and the names of vendors selling heavily discounted PCs.
HP's CEO, Leo Apotheker, didn't cite competition from alternative devices but instead blamed the economy, pointing to "continued softness" in the consumer PC market in explaining the results.
But some analysts were giving at least partial credit to Apple for the PC sales decline. "Independent business spending came back very strongly, but interest in and actual sales of the iPad have taken an impressive bite out of the consumer PC business," said Rob Enderle, an analyst at Enderle Group.
Similarly, Andrew Bartels, an analyst at Forrester Research, sees the impact of Apple at work. "Demand has been siphoned off from the PCs and is going into iPads and similar things," he said.
But Crawford Del Prete, an analyst at IDC, said weak consumer demand was the reason. While it is possible that the iPad affected demand at the low end of the PC market, he said he also believes that HP chose not to participate in some discounting that other PC suppliers offered.
That explains why operating profits improved for PCs, even as revenues were down, he said.
Bartels said the underlying business growth at HP may not be as strong as it seems but was helped by its purchase of 3Com last year. HP reported 30% growth in routing and switching, but overall business spending on technology is declining, he said.
IDC said the worldwide PC market continued to slow in the fourth quarter of 2010 due to the economy and media tablets. Global PC shipments rose "only a modest 2.7% year-on-year" in the last quarter.
Friday, February 25, 2011
NEW YORK – Google says it has tweaked the formulas steering its Internet search engine to take the rubbish out of its results. The overhaul is designed to lower the rankings of what Google deems "low-quality" sites.
That could be a veiled reference to such sites as Demand Media's eHow.com, which critics call online "content farms" — that is, sites producing cheap, abundant, mostly useless content that ranks high in search results.
Sites that produce original content or information that Google considers valuable are supposed to rank higher under the new system.
The change announced late Thursday affects about 12 percent of search requests in the U.S., or nearly one in every eight. Google Inc. said the new ranking rules eventually will be introduced in other parts of the world, too. The company tweaks its search algorithms, or formulas, hundreds of times a year, but it said many of the changes are so subtle that only a few people notice them. This latest change is "pretty big," the company said in a blog post.
"Google depends on the high-quality content created by wonderful websites around the world, and we do have a responsibility to encourage a healthy web ecosystem," Google fellow Amit Singhal and principal engineer Matt Cutts wrote in a blog post. "Therefore, it is important for high-quality sites to be rewarded, and that's exactly what this change does."
Demand Media, based in Santa Monica, assigns roughly 13,000 freelance writers to produce stories about frequently searched topics and then sells ads alongside the content at its own websites, including eHow.com and Livestrong.com, and about 375 other Internet destinations operated by its partners. Articles range from the likes of "How to Tie Shoelaces" to "How to Bake a Potato" and more. The company doesn't seem to agree with the "content farm" or "content mill" definition.
In a blog post, Demand Media's executive vice president of media and operations, Larry Fitzgibbon, said the company applauds changes that search engines make "to improve the consumer experience," and he said Demand Media focuses on creating "useful and original content."
He added that Demand Media "saw some content go up and some go down in Google search results."
"It's impossible to speculate how these or any changes made by Google impact any online business in the long term — but at this point in time, we haven't seen a material net impact on our Content & Media business," Fitzgibbon wrote.
Demand Media Inc.'s shares fell 54 cents, or 2.4 percent, to $22.06 in afternoon trading Friday.
Google's blog post: http://bit.ly/i3OOUx
Demand Media's blog post: http://bit.ly/fRqV5Z
Wednesday, February 23, 2011
By Thomas Claburn , InformationWeek
February 22, 2011 02:55 PM
Apple's new subscription rules and its decision to begin enforcing its longstanding iOS application requirement that apps use the company's In-App Purchase (IAP) system to sell content have again stirred up developer discontent.
Readability, a company that makes the popular iOS reading app of the same name, on Monday posted a complaint charging that Apple's rules will ruin its business model. Readability strips the ads and graphics out of published content and republishes the text in a more reader-friendly format. To mitigate the ire of publishers who might not appreciate the removal of ads from their text, Readability hands 70% of its subscription-based revenue back to publishers.
But under Apple's recently revised terms, subscriptions in iOS apps have to use Apple's purchasing system, which costs 30% of subscription revenue for the duration of the subscription.
"Readability's model is unique in that 70% of our service fees go directly to writers and publishers," wrote Richard Ziade, founding partner of design firm Arc90 and creator of Readability, in an open letter to Apple. "If we implemented In App Purchasing, your 30% cut drastically undermines a key premise of how Readability works. ... [W]e believe that your new policy smacks of greed."
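The arithmetic behind Readability's complaint is simple enough to sketch. The dollar amount below is hypothetical; the shares come from the article: 70% of revenue to publishers, and a 30% cut to Apple under In-App Purchase.

```python
# Toy accounting for a single subscription payment, in integer cents to
# avoid floating-point rounding. Shares per the article: 70% to publishers,
# 30% to Apple when the subscription is sold through In-App Purchase.
def remaining_cents(price_cents: int, publisher_share: float, apple_share: float) -> int:
    """Fraction of the payment left for the service after both cuts."""
    publisher_cut = round(price_cents * publisher_share)
    apple_cut = round(price_cents * apple_share)
    return price_cents - publisher_cut - apple_cut

PRICE = 500  # a hypothetical $5.00 monthly subscription

# Sold outside the App Store: 30 cents of every dollar runs the service.
print(remaining_cents(PRICE, 0.70, 0.00))  # 150

# Sold through In-App Purchase: Apple's 30% consumes the entire margin.
print(remaining_cents(PRICE, 0.70, 0.30))  # 0
```

Whatever one thinks of the rhetoric, the margin math is what makes Readability's model incompatible with the new terms.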
Social screenshot sharing service TinyGrab on Monday posted a similar complaint. Its business model involves selling subscriptions for its free app through its Web site.
TinyGrab sells accounts through PayPal because Apple doesn't provide the customer data necessary to open accounts, such as the app buyer's email address. The company also violates other Apple rules against unlocking functionality through mechanisms other than IAP, against subscriptions that expire, and against directing users to external purchase mechanisms on the Web.
"I'm sad to say that as of today we can no longer provide development support to iOS, officially, through the App Store," wrote TinyGrab project manager Chris Leydon in a blog post. "Until Apple loosens up on [its] restrictions we're ceasing all active development on TinyGrab for iPhone."
Such dissatisfaction echoes objections raised by Sony earlier this month when the company said Apple won't allow Sony's Reader for iPhone app to be distributed through the iTunes App Store.
The unhappiness with Apple's policies echoes complaints heard last year, when CEO Steve Jobs declared Flash and other third-party developer tools unfit for the iPhone. The outcry from companies like Adobe and individual developers, which found receptive ears in Washington, eventually prompted Apple to relax its restrictions.
Regulators are again paying attention. The Federal Trade Commission and Department of Justice are said to be conducting preliminary inquiries into Apple's subscription requirements. But the growing strength of Apple's rivals, particularly Google's Android platform, may make it harder for regulators to act.
Apple's rules have prompted Marco Arment, who developed Tumblr and Instapaper, to also suggest the company is being too greedy and to worry that various business models in iOS apps will be made untenable through the vagueness of Apple's requirements.
Apple CEO Steve Jobs appears to be paying attention. An email said to be from Jobs, sent in reply to a developer's concerns, suggests Apple is trying to apply its rules narrowly.
"We created subscriptions for publishing apps, not SaaS apps," the e-mail states. Apple did not immediately respond to a request to confirm the authenticity of the e-mail.
Nonetheless, SaaS apps that violate App Store guidelines unrelated to subscription selling rules shouldn't expect Apple's approval.
In the end, Apple may be within its rights to apply its rules as it sees fit. But Arment suggests Apple is hurting itself by pressing its rights as far as they can go.
"The discussion shouldn't be whether Apple can enforce this policy, but whether they should," he wrote. "And if you look at what this does to developer relations, big and small, it's easier to argue that this is likely to result in more harm than good to the iOS platform."
By Ralph Jennings, IDG News Service
February 21, 2011 05:48 AM ET
The world's top handset makers are meeting this week to finalize a version of an advanced mobile communication standard that would raise data transfer speeds to 1Gbps, an event organizer said on Monday.
About 800 people, from companies such as HTC, Nokia and Samsung Electronics, will agree on final terms for the Long-Term Evolution Advanced (LTE-Advanced) standard at a meeting of the 3GPP standards body in Taipei this week.
With speeds up to 1Gbps, the technology will be ideal for people who download audio-visual files onto their handhelds, said Feng Wen-sheng, wireless communications director with a lab under the event sponsor, Taiwan's government-funded Industrial Technology Research Institute.
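To put that peak figure in perspective, a back-of-the-envelope calculation (using decimal units, the raw theoretical link rate, and a hypothetical 700MB video file, so real-world times would be longer):

```python
def seconds_to_download(size_mb: float, link_gbps: float = 1.0) -> float:
    """Transfer time at the raw link rate, ignoring all protocol overhead."""
    bits = size_mb * 8 * 1_000_000          # megabytes -> bits, decimal units
    return bits / (link_gbps * 1_000_000_000)

print(seconds_to_download(700))  # 5.6 -- a 700MB video in under six seconds
```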
LTE-Advanced will also give machines another way to communicate with one another, for example allowing them to connect sensors detecting changes in air temperature that could signal a fire or a burglary and then passing messages to emergency personnel such as search and rescue teams, Feng said.
This type of data transfer is expected to help especially with earthquake relief.
"Mobile voice technology is pretty advanced already, so this time it's all about data transfers," Feng said. "We've been trying to get LTE-Advanced out there for some time, and in Taipei we expect to confirm a final version."
The International Telecommunication Union, a United Nations agency, has adopted LTE-Advanced and WiMax-derived WirelessMAN-Advanced standards for its IMT-Advanced program to define future mobile networks. It says both are substantial improvements over current wireless systems.
After Friday, the LTE-Advanced standard will be ready for manufacturers to design smartphones and network equipment, Feng said, as participants at this week's conference discuss patents and cross-license deals relating to the technology.
The IDG News Service is a Network World affiliate.
Friday, February 18, 2011
Thursday, February 17, 2011 | 6:48 p.m.
Updated at 9:22 a.m. on February 18.
The House passed an amendment Thursday that would bar the Federal Communications Commission from using any funding to implement the network-neutrality order it approved in December.
The amendment, approved on a 244-181 vote, was offered by Energy and Commerce Communications and Technology Subcommittee Chairman Greg Walden, R-Ore., to legislation that would fund government agencies for the rest of fiscal year 2011.
Walden and other critics of the FCC's net-neutrality order argue it will stifle innovation and investment in broadband. The order aims to bar broadband providers from discriminating against Internet content, services, or applications.
"If left unchallenged, this claim of authority would allow the FCC to regulate any matter it discussed in the national broadband plan," Walden said.
If the defunding effort fails, Republicans are pursuing a second route to try to block the FCC's open-Internet order. Walden and other Republicans in both the House and the Senate introduced on Wednesday a resolution of disapproval under the Congressional Review Act, which would give lawmakers a limited amount of time to try to block the FCC's net-neutrality rules.
Rep. Ed Markey, D-Mass., a senior Energy and Commerce member, argued that by voting for the amendment, "you give control to the Broadband Barons ... and then you will see an inevitable decline in innovation, in investment, in the private sector, in the new products, the new technology, the new applications, these new devices, which are basically invented by hundreds and thousands of smaller companies in our country."
President Obama, who supports the FCC's net neutrality order, has threatened to veto the spending measure if it cuts government programs too deeply. He was on the West Coast meeting with leaders from Google, Facebook, Twitter, Apple and other technology companies when the vote came in.
Thursday, February 17, 2011
February 17, 2011
Android Central has obtained two alleged product timelines for more than a dozen Dell tablets and smartphones.
In the mix we spotted next-gen Windows 7, Windows 8, and Google Android's next build, 2.4, launching within the next 12 months:
Codenamed "Wrigley," Dell plans to launch a smartphone in mid-July 2011 equipped with "Windows 7 Next Gen," a 1GHz processor, 4" WVGA screen, 8 megapixel camera, and 720p video recording.
Dell Hancock, a smartphone earmarked for September 2011, looks to be equipped with Android's "Ice Cream" operating system, announced only this week at Mobile World Congress. The OS looks to adopt Honeycomb's tablet-like capabilities for mobile phones.
Finally, a "Windows 8" tablet, codenamed Peju, is scheduled for January 2012. Up to now, Microsoft hasn't officially uttered the phrase "Windows 8." But the timing of Peju does complement a leak from Microsoft's Dutch blog in October 2010, estimating the launch of Windows 8 in 2012.
In the roadmaps you'll also notice a couple of already-launched products, such as the "gorgeous" Dell Venue Pro smartphone and the underwhelming Dell Streak 7 tablet.
The 83,990 sites that weren't hosting underage porn were stuck with the gigantic graphic seen here for days after the error was realized. Not exactly a trivial accusation, and an extremely damaging one for the sites, which were mostly personal and small-business pages. FreeDNS, the domain service behind the affected sites, was forced to comply with the takedown request by court order, but was clearly (and rightfully) pissed at the misuse of its system: "freedns.afraid.org has never allowed this type of abuse," it commented. At the moment, nobody has any idea how the tremendous screwup happened.
Surely, DoJ and DHS must be a little red in the face over the whole thing. Right? Right..? Nope. In a beaming statement released yesterday, Secretary of Homeland Security Janet Napolitano heroically explained that "Each year, far too many children fall prey to sexual predators and all too often, these heinous acts are recorded in photos and on video and released on the Internet. DHS is committed to working with our law enforcement partners to shut down websites that promote child pornography to protect these children from further victimization."
Which is great, really. Child pornography is vile, and the people responsible for it are the absolute scum of the internet. But if we allow the government to wield an online sledgehammer to protect kids, we need to be sure the person holding it isn't completely inept, and that the process whereby sites are smashed is a transparent one. When something goes wrong, especially this wrong, we need to know how it happened. It needs to, at the very least, be acknowledged. Child porn is horrible and damaging. Yes.
But so is wrongfully accusing 84,000 people of having a hand in it.
CHICAGO -- An IBM computer creamed two human champions on the television game show "Jeopardy!" today in a triumph of artificial intelligence.
"I for one welcome our new computer overlords," contestant Ken Jennings -- who holds the "Jeopardy!" record of 74 straight wins -- cheekily wrote on his answer screen at the conclusion of the much-hyped three-day showdown.
"Watson" -- named after Thomas Watson, the founder of the US technology giant -- made some funny flubs in the game, but prevailed by beating his human opponents to the buzzer again and again.
The final tally from the two games: Watson at $77,147, Jennings at $24,000 and $21,600 for reigning champion Brad Rutter -- who previously won a record $3.25 million on the quiz show.
"Watson is fast, knows a lot of stuff, and can really dominate a match," host Alex Trebek said at the opening of Wednesday's match.
Watson, which is not connected to the internet, plays the game by crunching through multiple algorithms at dizzying speed and attaching a percentage score to what it believes is the correct response.
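That description, many algorithms each rating a candidate answer, scores merged into a single confidence, and a decision about whether to buzz in, can be caricatured in a few lines. This is a toy sketch with invented scores, not IBM's actual DeepQA pipeline:

```python
# Toy model of confidence-scored question answering. Each candidate answer
# gets a score in [0, 1] from several independent scoring "algorithms";
# the scores are merged, and the machine buzzes only above a threshold.
def merged_confidence(scores):
    """Average the per-algorithm scores into one overall confidence."""
    return sum(scores) / len(scores)

def should_buzz(scores, threshold=0.5):
    """Buzz in only when the merged confidence clears the threshold."""
    return merged_confidence(scores) >= threshold

# Invented scores for two candidate answers to one clue.
candidates = {
    "United Airlines": [0.9, 0.8, 0.7],  # strong agreement across scorers
    "Delta":           [0.3, 0.2, 0.4],  # weak, scattered evidence
}
best = max(candidates, key=lambda a: merged_confidence(candidates[a]))
print(best, should_buzz(candidates[best]))  # United Airlines True
```

The real system weighs hundreds of evidence features rather than averaging three numbers, but the buzz-or-stay-silent threshold logic is the part visible to viewers.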
"Jeopardy!," which first aired on US television in 1964, tests a player's knowledge in a range of categories, from geography to politics to history to sports and entertainment.
In a twist on traditional game play, contestants are provided with clues and need to supply the questions.
The complex language of the brain-teasers meant Watson didn't merely have to have access to a vast database of information, it also had to understand what the clue meant.
One impressive display was when Watson answered "What is United Airlines" to the clue "Nearly 10 million Youtubers saw Dave Carroll's clip called this 'friendly skies' airline 'breaks guitars.'"
But a Final Jeopardy flub prompted one IBM engineer to wear a Toronto Blue Jays jacket to the second day of taping and Trebek to joke that he'd learned Toronto was a US city.
Watson had earlier answered "What is Toronto????" to the question: "Its largest airport is named for a WWII hero. Its second largest, for a WWII battle" under the category "US Cities."
Watson, which has been under development at IBM Research labs in New York since 2006, is the latest machine developed by IBM to challenge mankind.
In 1997, an IBM computer named "Deep Blue" defeated world chess champion Garry Kasparov in a six-game match.
Watson's success was a remarkable achievement and an historic moment for the field of artificial intelligence, said Oren Etzioni, a professor of computer science at the University of Washington.
"Jeopardy is a particularly difficult form of natural language because it's so open ended and it's so full of puns and quirky questions," he told AFP.
The next step is to see how this technology can be applied to problems with real economic and social impact.
"The day where robots will keep us as pets is still very far away," Etzioni said.
Wednesday, February 16, 2011
February 15, 2011
(CNN) -- The computers haven't proven to be our trivia overlords just yet. Give them at least until Wednesday.
An IBM supercomputer named Watson finished one round of the TV show "Jeopardy!" on Monday night tied with one of his human competitors and $3,000 ahead of the other.
The man vs. computer face-off won't be complete, however, until the final rounds of the extended trivia game show are aired on Tuesday and Wednesday.
IBM trumpets Watson, which has been in development for years and has the processing power of 2,800 "powerful computers," as a major advancement in machines' efforts to understand human language. The computer receives clues through digital texts and then buzzes in against the two other "Jeopardy!" contestants like any other player would. It juggles dozens of lines of reasoning at once and tries to arrive at a smart answer.
After getting off to a scary-good start, Watson did have a few stumbles. In one instance, it repeated an answer that another contestant, Ken Jennings, who won 74 "Jeopardy!" episodes in a row, had already tried.
"What is 1920s?" Watson said, sounding like a digitized Matthew Broderick.
"No," game-show host Alex Trebek replied. "Ken said that."
On many other clues, however, Watson was spot-on. After losing the first clue to Brad Rutter, another "Jeopardy!" champion, Watson jumped in on the second question.
Clue: "Iron fitting on the hoof of a horse or a card-dealing box in a casino."
Watson: "What is shoe?"
At the start of the show, Trebek went to some lengths to explain the origins of Watson -- IBM approached the show about the idea three years ago -- and how the computer actually works. That's partly because what you see on the "Jeopardy!" stage is somewhat misleading. It looks as if two humans are bookending a simple computer monitor, which appears to be just about as smart as they are. In reality, as Trebek explained, the bulk of Watson's computer power was stored in another building at an IBM lab in New York, where the show is being held for this special three-day competition.
After introducing Watson to studio applause, this is how Trebek explained the setup:
"Just as I expected," he said. "That was a very warm reception and I'm sure Watson would have appreciated the applause. Except for one thing: Watson can neither hear nor see. It will be receiving all of its information electronically.
"And as a matter of fact what you're looking at right now is not the real Watson. This is an avatar. This is a representation of Watson. Watson, of course, is a sophisticated computer system too big and too heavy to fit behind that lectern on our stage."
As for the stage version of Watson, his brain-face was represented by a digital Earth that swirled with ribbons of various colors while he thought about questions. As Trebek read the clues, a bar graph appeared at the bottom of the screen, showing the top three answers Watson was considering at that moment and how confident he was in those choices.
Sometimes the computer managed to be confident but still incorrect.
Here's the clue to the first question Watson got wrong:
"From the Latin for end, this is where trains can also originate."
Watson: "What is finis." Confidence level: 97%.
Trebek: "No. Ken?"
"What is terminus," Jennings answered correctly.
Before ending the evening tied with Rutter at $5,000 each, Watson had jumped out to an early lead at the first commercial break. At that point, Watson had $5,200 and his closest noncomputer counterpart had only $1,000. Several Twitter users were awed by the computer's smarts.
"Watson kinda freakin' me out. Big time," Michael Gartenberg, a tech analyst, wrote on his Twitter feed. Another person wrote: "Watson is almost scary. This is willld! These humans are no match for Watson's algorithms."
Trebek summed up the computer's mixed performance this way:
"So, what have we learned so far: Watson's very bright, very fast, but he had some weird little moments once in a while."
Then he teased the upcoming shows:
"And how many of those will we encounter tomorrow when we play double and final 'Jeopardy!'?"
February 14, 2011
Pros: Strong support for HTML5. Built-in Flash player and PDF reader.
Cons: Paranoids won't want to give Google another way to collect data about them.
Chrome Instant means your Web page is ready to read before you finish typing the address. This, its speed, minimalist design, and advanced support for HTML5 have deservedly been attracting more and more users to the browser.
Some of these many releases have brought new major features, such as bookmark syncing, a bookmark manager, a built-in PDF reader, and extensions, though others have just added speed, stability, and new standards support.
This latest version takes a page from Google search, with the remarkable Chrome Instant, as well as a page from IE9 beta, by including graphics hardware acceleration. Its fine design, compatibility, and especially the speed have impressed the Web community enough to make Chrome the fastest growing browser in terms of market share, recently passing ten percent.
Let's take a look at what makes this browser so special.
Even the setup process shows Chrome's commitment to speed: Just click the Install button on the Chrome Web page, and you'll have the new browser up and running in less than a minute, with no wizard to go through and no system restart. The browser's now available for Mac OS X and Linux, as well as Windows. In each platform the browser's up and running before you realize it, and it updates itself automatically in the background.
Built-in Flash and PDF Support
Chrome is the only browser to come with Adobe Flash built in, rather than requiring a separate (and annoying) installation. And not having to perform the frequent required updates of the Flash plugin separately is another boon—it updates automatically with the browser.
Chrome boasts a PDF reader as well, so you don't have to worry about installing any Adobe plugins for viewing specialized Web content. When you load a PDF, an intuitive toolbar shows when your mouse cursor is in the southeast vicinity of the browser window. From this, you can have the document fill the width of the window, show a full page, or zoom in and out.
By default, you can select text for cutting and pasting, but I couldn't copy and paste images. You can print the PDF as you would any Web page.
Minimalism has been a hallmark of Chrome since its first beta release. Tabs are above everything, and the only row below them holds the combined search/address bar, or "Omnibar." Optionally you can display bookmark links in a row below this. And the control buttons on the top-right of the browser window have been reduced to the absolute minimum: just one. Google has removed the Page icon and placed some of its functions under the Wrench choice. Some Page options have been combined into buttons on one line in the new menu, such as Cut, Copy, and Paste. I like what Google's done with the Zoom choice on the menu, adding plus and minus buttons that save you from having to fly out another submenu.
Chrome Instant
This is one of the niftiest things to be added to Chrome in a while. Start typing a Web address in the Omnibar, and before you're even done, a page from your history or a search result page is displayed below in the main browser window. I just type "PC," and PCMag.com is already loaded. The idea was first implemented in Google search's Instant feature, but I think it's even more useful in the browser than in search, where I usually ignore it and finish typing my query anyway: Most sites we visit, we've visited before, so having them ready to go before you even finish typing is a big speeder-upper.
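The core trick is straightforward: match the partially typed text against browsing history and surface the best hit before typing finishes. A minimal sketch of the idea, with invented history data and none of Chrome's real ranking signals:

```python
# Hypothetical browsing history, ordered by how often each site is visited.
history = ["pcmag.com", "pcworld.com", "google.com", "github.com"]

def instant_match(prefix, history):
    """Return the highest-ranked history entry starting with the typed prefix,
    or None when nothing in the history matches."""
    prefix = prefix.lower()
    for url in history:
        if url.startswith(prefix):
            return url
    return None

print(instant_match("pc", history))  # pcmag.com
```

Chrome's actual implementation blends history, bookmarks, and search suggestions with frequency and recency weighting, but the "two keystrokes are enough" experience comes from exactly this kind of prefix lookup.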
Chrome also still sports excellent tab implementation. Tabs are prominent at the top of the browser window, and you can drag them out to the desktop to create independent windows (and drag them back in later) or split them side by side à la Windows 7 Aero Snap.
Google has put considerable thought into its browser's new tab page, which shows thumbnails of your most-visited pages. I like that you can move the large thumbnails around and pin them in place, or remove those you don't want. You also now have a choice of list or thumbnail view, and you can display only recently closed tabs, only most visited pages, or neither.
For version 9, Google has added an Apps section to the new tab page, showing any Web apps you've installed, along with a link to the Chrome Web Store, but as with any section of the page, you can click an X to its right to turn it off. If you've synced Chrome on different computers (see below), the Apps section will be the same on all. For more on the store, check out the Chrome App Store section of my Hands On with Chrome OS. Any apps you've added on a Chrome OS machine will also appear in the browser on any other computer you log into Chrome on, and vice versa. But you're not likely to have a Chrome OS machine at this point.
Monday, February 14, 2011
Published: February 11, 2011
The Stuxnet software worm repeatedly sought to infect five industrial facilities in Iran over a 10-month period, a new report says, in what could be a clue into how it might have infected the Iranian uranium enrichment complex at Natanz.
The report, released Friday by Symantec, a computer security software firm, said there were three waves of attacks. Liam O Murchu, a security researcher at the firm, said his team was able to chart the path of the infection because of an unusual feature of the malware: Stuxnet recorded information on the location and type of each computer it infected.
Such information would allow the authors of Stuxnet to determine if they had successfully reached their intended target. By taking samples of Stuxnet they had collected from various computers, the researchers were able to build a model of the spread of the infection. They determined that 12,000 infections could be traced back to just five initial infection points.
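The tracing technique can be sketched in miniature. The data below is invented; the point is only that when each collected sample records the chain of machines it passed through, an analyst can walk every chain back to its first entry and count the distinct starting points:

```python
# Illustrative only: invented infection paths, not Symantec's real data.
# Each list is the history one collected sample recorded about itself,
# ordered from the first machine infected to the machine it was found on.
samples = [
    ["org-A", "host-2", "host-9"],
    ["org-A", "host-4"],
    ["org-B", "host-7", "host-8"],
    ["org-C", "host-1"],
]

def initial_infection_points(samples):
    """The first entry of each recorded path is where that chain began;
    the set of distinct first entries is the set of initial targets."""
    return {path[0] for path in samples}

roots = initial_infection_points(samples)
print(len(roots))  # 3 distinct initial infection points in this toy data
```

Applied to the roughly 12,000 real infections, this kind of walk is what let the researchers collapse the outbreak down to five initial infection points.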
Between June 2009 and May 2010, the program took aim at specific organizations in Iran on three occasions, Symantec researchers noted in an update of a research report the company published last year.
The Symantec team said it had collected five Internet domains that were linked to industrial organizations within Iran. They said because of the company's privacy policies, they would not disclose the domain names.
"All of the domains are involved in industrial processing," Mr. O Murchu said in an interview.
It is likely that a classified site like Natanz is not connected directly to the Internet. Therefore, an attacker might try to infect industrial organizations that would be likely to share information, and the malware, with Natanz.
At least three and possibly four versions of the program were probably written, and the researchers discovered that the first version had been completed just 12 hours before the first successful infection in June 2009.
The researchers speculated that the first step in the infection was either an infected e-mail sent to an intended victim or a hand-carried USB device that carried the attack code.
When international inspectors visited Natanz in late 2009, they found that almost 1,000 gas centrifuges had been taken offline, leading to speculation that the attack may have disabled a portion of the complex.
In April 2010, the attackers again tried to distribute the program. This time they exploited a new vulnerability in Windows-based computers that allowed infection via a USB device, and most likely inserted the program successfully that way at an unknown location inside Iran.
The Symantec researchers also said they had determined that the malware program carried two different attack modules aimed at different centrifuge arrays, but that one of them had been disabled.
Stuxnet first infected Windows-based industrial control computers while it hunted for particular types of equipment made by the Siemens Corporation. It was programmed to then damage a uranium centrifuge array by repeatedly speeding it up, while at the same time hiding its attack from the control computers by sending false information to displays that monitored the system.
The New York Times reported in January that Israel had built an elaborate test facility at a classified nuclear weapons site that contained a replica array of the Iranian uranium enrichment plant. Such a test site would have been necessary for the design of the attack software.
"We know the exact configuration of the system they were looking for," Mr. O Murchu said. "We know they were looking for a certain number of frequency converters. And each of those frequency converters controls a certain number of motors. And those numbers fit in with what you expect to see in an uranium enrichment facility."
By Lev Grossman
On Feb. 15, 1965, a diffident but self-possessed high school student named Raymond Kurzweil appeared as a guest on a game show called I've Got a Secret. He was introduced by the host, Steve Allen, then he played a short musical composition on a piano. The idea was that Kurzweil was hiding an unusual fact and the panelists — they included a comedian and a former Miss America — had to guess what it was.
On the show (you can find the clip on YouTube), the beauty queen did a good job of grilling Kurzweil, but the comedian got the win: the music was composed by a computer. Kurzweil got $200.
Kurzweil then demonstrated the computer, which he built himself—a desk-size affair with loudly clacking relays, hooked up to a typewriter. The panelists were pretty blasé about it; they were more impressed by Kurzweil's age than by anything he'd actually done. They were ready to move on to Mrs. Chester Loney of Rough and Ready, Calif., whose secret was that she'd been President Lyndon Johnson's first-grade teacher.
But Kurzweil would spend much of the rest of his career working out what his demonstration meant. Creating a work of art is one of those activities we reserve for humans and humans only. It's an act of self-expression; you're not supposed to be able to do it if you don't have a self. To see creativity, the exclusive domain of humans, usurped by a computer built by a 17-year-old is to watch a line blur that cannot be unblurred, the line between organic intelligence and artificial intelligence.
That was Kurzweil's real secret, and back in 1965 nobody guessed it. Maybe not even him, not yet. But now, 46 years later, Kurzweil believes that we're approaching a moment when computers will become intelligent, and not just intelligent but more intelligent than humans. When that happens, humanity — our bodies, our minds, our civilization — will be completely and irreversibly transformed. He believes that this moment is not only inevitable but imminent. According to his calculations, the end of human civilization as we know it is about 35 years away.
Computers are getting faster. Everybody knows that. Also, computers are getting faster faster — that is, the rate at which they're getting faster is increasing.
So if computers are getting so much faster, so incredibly fast, there might conceivably come a moment when they are capable of something comparable to human intelligence. Artificial intelligence. All that horsepower could be put in the service of emulating whatever it is our brains are doing when they create consciousness — not just doing arithmetic very quickly or composing piano music but also driving cars, writing books, making ethical decisions, appreciating fancy paintings, making witty observations at cocktail parties.
If you can swallow that idea, and Kurzweil and a lot of other very smart people can, then all bets are off. From that point on, there's no reason to think computers would stop getting more powerful. They would keep on developing until they were far more intelligent than we are. Their rate of development would also continue to increase, because they would take over their own development from their slower-thinking human creators. Imagine a computer scientist that was itself a super-intelligent computer. It would work incredibly quickly. It could draw on huge amounts of data effortlessly.
It wouldn't even take breaks to play Farmville.
Probably. It's impossible to predict the behavior of these smarter-than-human intelligences with which (with whom?) we might one day share the planet, because if you could, you'd be as smart as they would be.
But there are a lot of theories about it. Maybe we'll merge with them to become super-intelligent cyborgs, using computers to extend our intellectual abilities the same way that cars and planes extend our physical abilities.
Maybe the artificial intelligences will help us treat the effects of old age and prolong our life spans indefinitely. Maybe we'll scan our consciousnesses into computers and live inside them as software, forever, virtually. Maybe the computers will turn on humanity and annihilate us. The one thing all these theories have in common is the transformation of our species into something that is no longer recognizable as such to humanity circa 2011. This transformation has a name: the Singularity.
The difficult thing to keep sight of when you're talking about the Singularity is that even though it sounds like science fiction, it isn't, no more than a weather forecast is science fiction. It's not a fringe idea; it's a serious hypothesis about the future of life on Earth. There's an intellectual gag reflex that kicks in anytime you try to swallow an idea that involves super-intelligent immortal cyborgs, but suppress it if you can, because while the Singularity appears to be, on the face of it, preposterous, it's an idea that rewards sober, careful evaluation.
People are spending a lot of money trying to understand it. The three-year-old Singularity University, which offers inter-disciplinary courses of study for graduate students and executives, is hosted by NASA.
Google was a founding sponsor; its CEO and co-founder Larry Page spoke there last year. People are attracted to the Singularity for the shock value, like an intellectual freak show, but they stay because there's more to it than they expected. And of course, in the event that it turns out to be real, it will be the most important thing to happen to human beings since the invention of language.
The Singularity isn't a wholly new idea, just newish. In 1965 the British mathematician I.J. Good described something he called an "intelligence explosion":
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.
The word singularity is borrowed from astrophysics: it refers to a point in space-time — for example, inside a black hole — at which the rules of ordinary physics do not apply. In the 1980s the science-fiction novelist Vernor Vinge attached it to Good's intelligence-explosion scenario. At a NASA symposium in 1993, Vinge announced that "within 30 years, we will have the technological means to create super-human intelligence. Shortly after, the human era will be ended."
By that time Kurzweil was thinking about the Singularity too. He'd been busy since his appearance on I've Got a Secret. He'd made several fortunes as an engineer and inventor; he founded and then sold his first software company while he was still at MIT. He went on to build the first print-to-speech reading machine for the blind — Stevie Wonder was customer No. 1—and made innovations in a range of technical fields, including music synthesizers and speech recognition. He holds 39 patents and 19 honorary doctorates. In 1999 President Bill Clinton awarded him the National Medal of Technology.
But Kurzweil was also pursuing a parallel career as a futurist: he has been publishing his thoughts about the future of human and machine-kind for 20 years, most recently in The Singularity Is Near, which was a best seller when it came out in 2005. A documentary by the same name, starring Kurzweil, Tony Robbins and Alan Dershowitz, among others, was released in January. (Kurzweil is actually the subject of two current documentaries. The other one, less authorized but more informative, is called The Transcendent Man.) Bill Gates has called him "the best person I know at predicting the future of artificial intelligence."
In real life, the transcendent man is an unimposing figure who could pass for Woody Allen's even nerdier younger brother. Kurzweil grew up in Queens, N.Y., and you can still hear a trace of it in his voice. Now 62, he speaks with the soft, almost hypnotic calm of someone who gives 60 public lectures a year. As the Singularity's most visible champion, he has heard all the questions and faced down the incredulity many, many times before. He's good-natured about it. His manner is almost apologetic: I wish I could bring you less exciting news of the future, but I've looked at the numbers, and this is what they say, so what else can I tell you?
Kurzweil's interest in humanity's cyborganic destiny began about 1980 largely as a practical matter. He needed ways to measure and track the pace of technological progress. Even great inventions can fail if they arrive before their time, and he wanted to make sure that when he released his, the timing was right. "Even at that time, technology was moving quickly enough that the world was going to be different by the time you finished a project," he says. "So it's like skeet shooting—you can't shoot at the target." He knew about Moore's law, of course, which states that the number of transistors you can put on a microchip doubles about every two years. It's a surprisingly reliable rule of thumb. Kurzweil tried plotting a slightly different curve: the change over time in the amount of computing power, measured in MIPS (millions of instructions per second), that you can buy for $1,000.
As it turned out, Kurzweil's numbers looked a lot like Moore's. They doubled every couple of years. Drawn as graphs, they both made exponential curves, with their value increasing by multiples of two instead of by regular increments in a straight line. The curves held eerily steady, even when Kurzweil extended his curve backward through the decades of pretransistor computing technologies like relays and vacuum tubes, all the way back to 1900.
Kurzweil then ran the numbers on a whole bunch of other key technological indexes — the falling cost of manufacturing transistors, the rising clock speed of microprocessors, the plummeting price of dynamic RAM. He looked even further afield at trends in biotech and beyond—the falling cost of sequencing DNA and of wireless data service and the rising numbers of Internet hosts and nanotechnology patents. He kept finding the same thing: exponentially accelerating progress. "It's really amazing how smooth these trajectories are," he says. "Through thick and thin, war and peace, boom times and recessions." Kurzweil calls it the law of accelerating returns: technological progress happens exponentially, not linearly.
Then he extended the curves into the future, and the growth they predicted was so phenomenal, it created cognitive resistance in his mind. Exponential curves start slowly, then rocket skyward toward infinity. According to Kurzweil, we're not evolved to think in terms of exponential growth. "It's not intuitive. Our built-in predictors are linear. When we're trying to avoid an animal, we pick the linear prediction of where it's going to be in 20 seconds and what to do about it. That is actually hardwired in our brains."
Here's what the exponential curves told him. We will successfully reverse-engineer the human brain by the mid-2020s. By the end of that decade, computers will be capable of human-level intelligence. Kurzweil puts the date of the Singularity—never say he's not conservative—at 2045. In that year, he estimates, given the vast increases in computing power and the vast reductions in the cost of same, the quantity of artificial intelligence created will be about a billion times the sum of all the human intelligence that exists today.
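The extrapolation behind those dates is ordinary compound doubling. As a rough illustration (the starting value and the two-year doubling period here are assumptions for the sketch, not Kurzweil's exact data), compounding a fixed doubling time shows why an exponential forecast so quickly dwarfs any linear one:

```python
# Illustrative sketch of the "law of accelerating returns":
# a quantity that doubles every 2 years, projected over decades.
# Starting value and doubling period are assumed for illustration.

def exponential_projection(start, doubling_years, years):
    """Value after `years` of growth with a fixed doubling time."""
    return start * 2 ** (years / doubling_years)

# Computing power per $1,000, in arbitrary units, doubling every 2 years.
today = 1.0
for horizon in (10, 20, 35):
    factor = exponential_projection(today, 2, horizon) / today
    print(f"In {horizon} years: {factor:,.0f}x today's level")

# A linear trend adding one "unit" per year reaches only 35x in 35 years;
# the exponential curve reaches over 180,000x -- which is why linear
# intuition, as Kurzweil argues, badly underestimates such forecasts.
```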
The Singularity isn't just an idea; it attracts people, and those people feel a bond with one another. Together they form a movement, a subculture; Kurzweil calls it a community. Once you decide to take the Singularity seriously, you will find that you have become part of a small but intense and globally distributed hive of like-minded thinkers known as Singularitarians.
Not all of them are Kurzweilians, not by a long chalk. There's room inside Singularitarianism for considerable diversity of opinion about what the Singularity means and when and how it will or won't happen. But Singularitarians share a worldview. They think in terms of deep time, they believe in the power of technology to shape history, they have little interest in the conventional wisdom about anything, and they cannot believe you're walking around living your life and watching TV as if the artificial-intelligence revolution were not about to erupt and change absolutely everything. They have no fear of sounding ridiculous; your ordinary citizen's distaste for apparently absurd ideas is just an example of irrational bias, and Singularitarians have no truck with irrationality.
When you enter their mind-space you pass through an extreme gradient in worldview, a hard ontological shear that separates Singularitarians from the common run of humanity. Expect turbulence.
In addition to the Singularity University, which Kurzweil co-founded, there's also a Singularity Institute for Artificial Intelligence, based in San Francisco. It counts among its advisers Peter Thiel, a former CEO of PayPal and an early investor in Facebook. The institute holds an annual conference called the Singularity Summit. (Kurzweil co-founded that too.) Because of the highly interdisciplinary nature of Singularity theory, it attracts a diverse crowd. Artificial intelligence is the main event, but the sessions also cover the galloping progress of, among other fields, genetics and nanotechnology.
At the 2010 summit, which took place in August in San Francisco, there were not just computer scientists but also psychologists, neuroscientists, nanotechnologists, molecular biologists, a specialist in wearable computers, a professor of emergency medicine, an expert on cognition in gray parrots and the professional magician and debunker James "the Amazing" Randi. The atmosphere was a curious blend of Davos and UFO convention. Proponents of seasteading—the practice, so far mostly theoretical, of establishing politically autonomous floating communities in international waters—handed out pamphlets. An android chatted with visitors in one corner.
After artificial intelligence, the most talked-about topic at the 2010 summit was life extension. Biological boundaries that most people think of as permanent and inevitable Singularitarians see as merely intractable but solvable problems. Death is one of them. Old age is an illness like any other, and what do you do with illnesses? You cure them. Like a lot of Singularitarian ideas, it sounds funny at first, but the closer you get to it, the less funny it seems. It's not just wishful thinking; there's actual science going on here.
For example, it's well known that one cause of the physical degeneration associated with aging involves telomeres, which are segments of DNA found at the ends of chromosomes. Every time a cell divides, its telomeres get shorter, and once a cell runs out of telomeres, it can't reproduce anymore and dies. But there's an enzyme called telomerase that reverses this process; it's one of the reasons cancer cells live so long. So why not treat regular non-cancerous cells with telomerase? In November, researchers at Harvard Medical School announced in Nature that they had done just that. They administered telomerase to a group of mice suffering from age-related degeneration. The damage went away. The mice didn't just get better; they got younger.
Aubrey de Grey is one of the world's best-known life-extension researchers and a Singularity Summit veteran. A British biologist with a doctorate from Cambridge and a famously formidable beard, de Grey runs a foundation called SENS, or Strategies for Engineered Negligible Senescence. He views aging as a process of accumulating damage, which he has divided into seven categories, each of which he hopes to one day address using regenerative medicine. "People have begun to realize that the view of aging being something immutable—rather like the heat death of the universe—is simply ridiculous," he says. "It's just childish. The human body is a machine that has a bunch of functions, and it accumulates various types of damage as a side effect of the normal function of the machine. Therefore in principle that damage can be repaired periodically. This is why we have vintage cars. It's really just a matter of paying attention. The whole of medicine consists of messing about with what looks pretty inevitable until you figure out how to make it not inevitable."
Kurzweil takes life extension seriously too. His father, with whom he was very close, died of heart disease at 58. Kurzweil inherited his father's genetic predisposition; he also developed Type 2 diabetes when he was 35.
Working with Terry Grossman, a doctor who specializes in longevity medicine, Kurzweil has published two books on his own approach to life extension, which involves taking up to 200 pills and supplements a day. He says his diabetes is essentially cured, and although he's 62 years old from a chronological perspective, he estimates that his biological age is about 20 years younger.
But his goal differs slightly from de Grey's. For Kurzweil, it's not so much about staying healthy as long as possible; it's about staying alive until the Singularity. It's an attempted handoff. Once hyper-intelligent artificial intelligences arise, armed with advanced nanotechnology, they'll really be able to wrestle with the vastly complex, systemic problems associated with aging in humans. Alternatively, by then we'll be able to transfer our minds to sturdier vessels such as computers and robots. He and many other Singularitarians take seriously the proposition that many people who are alive today will wind up being functionally immortal.
It's an idea that's radical and ancient at the same time. In "Sailing to Byzantium," W.B. Yeats describes mankind's fleshly predicament as a soul fastened to a dying animal. Why not unfasten it and fasten it to an immortal robot instead? But Kurzweil finds that life extension produces even more resistance in his audiences than his exponential growth curves. "There are people who can accept computers being more intelligent than people," he says. "But the idea of significant changes to human longevity—that seems to be particularly controversial. People invested a lot of personal effort into certain philosophies dealing with the issue of life and death. I mean, that's the major reason we have religion."
Of course, a lot of people think the Singularity is nonsense — a fantasy, wishful thinking, a Silicon Valley version of the Evangelical story of the Rapture, spun by a man who earns his living making outrageous claims and backing them up with pseudoscience. Most of the serious critics focus on the question of whether a computer can truly become intelligent.
The entire field of artificial intelligence, or AI, is devoted to this question. But AI doesn't currently produce the kind of intelligence we associate with humans or even with talking computers in movies—HAL or C3PO or Data. Actual AIs tend to be able to master only one highly specific domain, like interpreting search queries or playing chess. They operate within an extremely specific frame of reference. They don't make conversation at parties. They're intelligent, but only if you define intelligence in a vanishingly narrow way. The kind of intelligence Kurzweil is talking about, which is called strong AI or artificial general intelligence, doesn't exist yet.
Why not? Obviously we're still waiting on all that exponentially growing computing power to get here. But it's also possible that there are things going on in our brains that can't be duplicated electronically no matter how many MIPS you throw at them. The neurochemical architecture that generates the ephemeral chaos we know as human consciousness may just be too complex and analog to replicate in digital silicon. The biologist Dennis Bray was one of the few voices of dissent at last summer's Singularity Summit.
"Although biological components act in ways that are comparable to those in electronic circuits," he argued, in a talk titled "What Cells Can Do That Robots Can't," "they are set apart by the huge number of different states they can adopt. Multiple biochemical processes create chemical modifications of protein molecules, further diversified by association with distinct structures at defined locations of a cell. The resulting combinatorial explosion of states endows living systems with an almost infinite capacity to store information regarding past and present conditions and a unique capacity to prepare for future events." That makes the ones and zeros that computers trade in look pretty crude.
Underlying the practical challenges are a host of philosophical ones. Suppose we did create a computer that talked and acted in a way that was indistinguishable from a human being—in other words, a computer that could pass the Turing test. (Very loosely speaking, such a computer would be able to pass as human in a blind test.) Would that mean that the computer was sentient, the way a human being is? Or would it just be an extremely sophisticated but essentially mechanical automaton without the mysterious spark of consciousness—a machine with no ghost in it? And how would we know?
Even if you grant that the Singularity is plausible, you're still staring at a thicket of unanswerable questions. If I can scan my consciousness into a computer, am I still me? What are the geopolitics and the socioeconomics of the Singularity? Who decides who gets to be immortal? Who draws the line between sentient and nonsentient? And as we approach immortality, omniscience and omnipotence, will our lives still have meaning? By beating death, will we have lost our essential humanity?
Kurzweil admits that there's a fundamental level of risk associated with the Singularity that's impossible to refine away, simply because we don't know what a highly advanced artificial intelligence, finding itself a newly created inhabitant of the planet Earth, would choose to do. It might not feel like competing with us for resources. One of the goals of the Singularity Institute is to make sure not just that artificial intelligence develops but also that the AI is friendly. You don't have to be a super-intelligent cyborg to understand that introducing a superior life-form into your own biosphere is a basic Darwinian error.
If the Singularity is coming, these questions are going to get answers whether we like it or not, and Kurzweil thinks that trying to put off the Singularity by banning technologies is not only impossible but also unethical and probably dangerous. "It would require a totalitarian system to implement such a ban," he says. "It wouldn't work. It would just drive these technologies underground, where the responsible scientists who we're counting on to create the defenses would not have easy access to the tools."
Kurzweil is an almost inhumanly patient and thorough debater. He relishes it. He's tireless in hunting down his critics so that he can respond to them, point by point, carefully and in detail.
Take the question of whether computers can replicate the biochemical complexity of an organic brain. Kurzweil yields no ground there whatsoever. He does not see any fundamental difference between flesh and silicon that would prevent the latter from thinking. He defies biologists to come up with a neurological mechanism that could not be modeled or at least matched in power and flexibility by software running on a computer. He refuses to fall on his knees before the mystery of the human brain. "Generally speaking," he says, "the core of a disagreement I'll have with a critic is, they'll say, Oh, Kurzweil is underestimating the complexity of reverse-engineering of the human brain or the complexity of biology. But I don't believe I'm underestimating the challenge. I think they're underestimating the power of exponential growth."
This position doesn't make Kurzweil an outlier, at least among Singularitarians. Plenty of people make more-extreme predictions. Since 2005 the neuroscientist Henry Markram has been running an ambitious initiative at the Brain Mind Institute of the Ecole Polytechnique in Lausanne, Switzerland. It's called the Blue Brain project, and it's an attempt to create a neuron-by-neuron simulation of a mammalian brain, using IBM's Blue Gene super-computer. So far, Markram's team has managed to simulate one neocortical column from a rat's brain, which contains about 10,000 neurons.
Markram has said that he hopes to have a complete virtual human brain up and running in 10 years. (Even Kurzweil sniffs at this. If it worked, he points out, you'd then have to educate the brain, and who knows how long that would take.)
By definition, the future beyond the Singularity is not knowable by our linear, chemical, animal brains, but Kurzweil is teeming with theories about it. He positively flogs himself to think bigger and bigger; you can see him kicking against the confines of his aging organic hardware. "When people look at the implications of ongoing exponential growth, it gets harder and harder to accept," he says. "So you get people who really accept, yes, things are progressing exponentially, but they fall off the horse at some point because the implications are too fantastic. I've tried to push myself to really look."
In Kurzweil's future, biotechnology and nanotechnology give us the power to manipulate our bodies and the world around us at will, at the molecular level. Progress hyperaccelerates, and every hour brings a century's worth of scientific breakthroughs. We ditch Darwin and take charge of our own evolution. The human genome becomes just so much code to be bug-tested and optimized and, if necessary, rewritten. Indefinite life extension becomes a reality; people die only if they choose to. Death loses its sting once and for all. Kurzweil hopes to bring his dead father back to life.
We can scan our consciousnesses into computers and enter a virtual existence or swap our bodies for immortal robots and light out for the edges of space as intergalactic godlings. Within a matter of centuries, human intelligence will have re-engineered and saturated all the matter in the universe. This is, Kurzweil believes, our destiny as a species.
Or it isn't. When the big questions get answered, a lot of the action will happen where no one can see it, deep inside the black silicon brains of the computers, which will either bloom bit by bit into conscious minds or just continue in ever more brilliant and powerful iterations of nonsentience.
But as for the minor questions, they're already being decided all around us and in plain sight. The more you read about the Singularity, the more you start to see it peeking out at you, coyly, from unexpected directions. Five years ago we didn't have 600 million humans carrying out their social lives over a single electronic network. Now we have Facebook. Five years ago you didn't see people double-checking what they were saying and where they were going, even as they were saying it and going there, using handheld network-enabled digital prosthetics. Now we have iPhones. Is it an unimaginable step to take the iPhones out of our hands and put them into our skulls?
Already 30,000 patients with Parkinson's disease have neural implants. Google is experimenting with computers that can drive cars. There are more than 2,000 robots fighting in Afghanistan alongside the human troops. This month a game show will once again figure in the history of artificial intelligence, but this time the computer will be the guest: an IBM super-computer nicknamed Watson will compete on Jeopardy! Watson runs on 90 servers and takes up an entire room, and in a practice match in January it finished ahead of two former champions, Ken Jennings and Brad Rutter. It got every question it answered right, but much more important, it didn't need help understanding the questions (or, strictly speaking, the answers), which were phrased in plain English. Watson isn't strong AI, but if strong AI happens, it will arrive gradually, bit by bit, and this will have been one of the bits.
A hundred years from now, Kurzweil and de Grey and the others could be the 22nd century's answer to the Founding Fathers — except unlike the Founding Fathers, they'll still be alive to get credit — or their ideas could look as hilariously retro and dated as Disney's Tomorrowland. Nothing gets old as fast as the future.
But even if they're dead wrong about the future, they're right about the present. They're taking the long view and looking at the big picture. You may reject every specific article of the Singularitarian charter, but you should admire Kurzweil for taking the future seriously. Singularitarianism is grounded in the idea that change is real and that humanity is in charge of its own fate and that history might not be as simple as one damn thing after another. Kurzweil likes to point out that your average cell phone is about a millionth the size of, a millionth the price of and a thousand times more powerful than the computer he had at MIT 40 years ago. Flip that forward 40 years and what does the world look like? If you really want to figure that out, you have to think very, very far outside the box. Or maybe you have to think further inside it than anyone ever has before.
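Kurzweil's cell-phone comparison implies a doubling time, which is easy to back out. A minimal sketch of the arithmetic (taking the millionfold figure as the round number it is, over roughly 40 years):

```python
import math

# Back out the implied doubling time from Kurzweil's round numbers:
# price/size improved by a factor of ~1,000,000 over ~40 years.
factor = 1_000_000
years = 40

doublings = math.log2(factor)          # about 19.9 doublings
doubling_time = years / doublings      # about 2 years per doubling
print(f"{doublings:.1f} doublings -> one every {doubling_time:.1f} years")

# Flipping the same rate forward 40 years implies another roughly
# millionfold change -- the arithmetic behind "think far outside the box."
```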
Friday, February 11, 2011
Google's Android platform continued its meteoric rise to the top of the smartphone shipments table in 4Q, beating Nokia's Symbian into second place.
Figures from Canalys reveal total shipments of smartphones running Google software (including Android, Tapas and OMS) hit 33.3 million during the quarter, giving the firm 32.9% of the total smartphone market. Nokia was only narrowly beaten though, with shipments of 31 million and a 30.6% share.
Android's share was boosted by strong sales of smartphones from HTC and Samsung, whose combined shipments generated 45% of Google's total.
Total smartphone shipments for the period grew 88.6% to 101.2 million, showing that the market has recovered from a tough 2009 when the global economic recession impacted sales.
Chris Jones, vice president and principal analyst at Canalys, said the speed of recovery demonstrates the "commitment and innovation" of handset vendors.
Consumers in the EMEA region purchased the most smartphones in 4Q, with shipments up 90% to 38.8 million; however, the US remained the largest single country, Canalys notes.
Jones says vendors must work hard to build on the success during 2011, noting that the next 12 months will be "highly competitive" as vendors seek to differentiate their devices with new technology including "dual-core processors, NFC and 3D displays."
Thursday, February 10, 2011
February 9, 2011: 3:47 PM ET
NEW YORK (CNNMoney) -- The U.S. Postal Service warned Wednesday that it may default on some of its financial obligations later this year after reporting yet another quarterly loss.
The USPS, a self-supporting government agency that receives no tax dollars, said it suffered a loss of $329 million in the first quarter of federal fiscal year 2011. That compared with a loss of $297 million a year earlier.
The agency has been suffering from an ongoing decline in mail volume, which has undercut revenues, while retiree health care costs have been straining its reserves.
Excluding costs related to retiree benefits and adjustments to workers' compensation liability, the Postal Service said its net income was $226 million in the first quarter, which ended Dec. 31.
Despite ongoing cost-cutting efforts, the USPS said it expects to have a cash shortfall this year and to hit its federally mandated borrowing limit by September, when the government's fiscal year ends.
All first-class stamps to be 'Forever' stamps
The agency said it will be forced to default on some of its financial obligations this year unless Congress changes a 2006 law requiring it to pay between $5.4 billion and $5.8 billion a year to prefund retiree health benefits.
"The Postal Service continues to seek changes in the law to enable a more flexible and sustainable business model," Patrick Donahoe, the Postmaster General, said in a statement. "We are eager to work with Congress and the administration to resolve these issues prior to the end of the fiscal year."
The Postal Service has taken a number of steps to increase revenue, including marketing initiatives and price increases. The agency raised rates an average of 3.6% in January.
It is also pursuing more dramatic changes. Last year, the USPS submitted a request to the Postal Regulatory Commission, which oversees the agency, to eliminate Saturday mail service. The commission has yet to respond to the request, but a spokesman said it is in the "final phase" of making its decision.
The USPS has also cut back on hours to save money. The agency expects to eliminate 40 million work hours this fiscal year as part of its cost-cutting plan.
However, the service is currently negotiating new contracts with the American Postal Workers Union and the National Rural Letter Carriers Association, which will probably object to cutting hours.
On the bright side, the Postal Service said improving economic conditions suggest the "worst of the precipitous volume decline during the recession is over." But mail volume continues to be anemic, rising only 1.5% in the first quarter as economic growth remains sluggish.
Wednesday, February 9, 2011
Google built on the buzz swirling around its forthcoming Android 3.0 operating system for tablets during an event showcasing the Honeycomb platform tailored for tablets. After a brief introduction at the Googleplex in Mountain View, Calif., Android lead Andy Rubin passed the torch to Hugo Barra, product management director for Android, and Chris Yerga, Android engineering director for cloud services. Barra whizzed through an array of demos using Motorola's soon-to-be-launched (as in late February, early March) Xoom tablet, showing off multitasking, widgets, application bars and several other perks that were introduced to developers via the Android 3.0 preview SDK last week. Yerga then relieved Barra to show off Google's new Android Market Website, a destination that will allow consumers to purchase applications, games and music on Android smartphones and tablets. In-application purchasing is also part of the mix, as you'll see here. Peruse the Honeycomb demos and the new Android application perks here in this eWEEK slide show.
Kenneth Olsen, the MIT engineer who co-founded Digital Equipment Corp., has died at the age of 84, according to local sources.
Massachusetts-based DEC was key in moving corporate computing away from sole reliance on mainframes. In today's era of notebooks, netbooks and tablets, it may not sound like much to talk about the shift from mainframes to bookcase-sized minicomputers. However, the rise of DEC was the critical first step for enterprise computing's move away from sole residence in the data center -- despite doubters who said those "small" machines couldn't handle serious tasks. It's fair to say that most IT professionals who were working in the 1980s came in contact with a DEC system, be it a PDP-11 or a VAX.
The company, the world's second-largest computer maker at its peak, helped put the Rte. 128 corridor west of Boston on the map as one of America's premier high-tech centers.
Olsen was known locally as an engineer's engineer, a man who believed if you built a quality product, it would have buyers. While DEC certainly wasn't without marketing prowess in its heyday -- it held part of its DECworld customer meeting one year on the QE II cruise ship -- the company in general took on the personality of its founder: substance over style.
Olsen, a multi-millionaire in an era where there were far fewer such wealthy entrepreneurs than there are today, was known to drive a modest car around town and do his own grocery shopping. I also recall him having a surprisingly plain office for a man who was leading a technology revolution; and being gracious enough to carve out time to speak with young local reporters like me as well as the national business press.
A former colleague at the MetroWest Daily News found an interview I did with Olsen 20+ years ago, in which he explained the value of his low-key style with the story of explorers racing to reach the South Pole.
Norwegian Roald Amundsen didn't talk about himself much but "always worried about the details," Olsen told me.
England's Robert Scott, on the other hand, was "show. Flair. Parties. Announcements. Bragging."
Which style won out? Amundsen arrived at the pole first and returned safely. Scott got there a month later but never made it back.
Olsen's conclusion: "No one is productive, no one is creative without extreme discipline."
Olsen became somewhat unfairly known for a quote in which he supposedly claimed he couldn't see any need for a computer in the home. Others argue those words were taken out of context, and that he was talking about a computer to run your home (turning lights on and off, etc.), an interpretation that makes sense to me.
Whatever the backstory on that, though, it's true that he missed the desktop computing revolution. Unlike, say, Steve Jobs, Olsen's entrepreneurial genius didn't transcend computing generations. And it's hard not to see irony in the fact that the PC/desktop industry did to DEC what DEC (and other minicomputer makers) did to mainframe companies like IBM. While DEC finally did attempt a desktop system -- anyone else remember the Rainbow? -- it was far too late and without the necessary marketing muscle behind it.
However, it's also fair to say that the pendulum is swinging back somewhat toward Olsen's vision of more centralized computing (compared to desktops, that is, not mainframes). We're not moving back to the days of VAX terminals; but the days of early stand-alone desktops are just as over as the minicomputing era.
It's asking a lot to expect one engineer to be ahead of multiple technology curves, especially one who came of age when the computer industry wasn't expected to move quite so quickly. Even though Digital Equipment Corp. is no longer around, Kenneth Olsen had a lasting impact on the computer industry.
Sharon Machlis is online managing editor at Computerworld. Her e-mail address is firstname.lastname@example.org. You can follow her on Twitter @sharon000, on Facebook or by subscribing to her RSS feeds:
Monday, February 7, 2011
Hackers have repeatedly penetrated the computer network of the company that runs the Nasdaq Stock Market during the past year, and federal investigators are trying to identify the perpetrators and their purpose, according to people familiar with the matter.
The exchange's trading platform - the part of the system that executes trades - wasn't compromised, these people said. However, it couldn't be determined which other parts of Nasdaq's computer network were accessed.
Investigators are considering a range of possible motives, including unlawful financial gain, theft of trade secrets and a national-security threat designed to damage the exchange.
The Nasdaq situation has set off alarms within the government because of the exchange's critical role, which officials rank alongside power companies and air-traffic-control operations as part of the nation's basic infrastructure. Other infrastructure components have been compromised in the past, including a case in which hackers planted potentially disruptive software programs in the U.S. electrical grid, according to current and former national-security officials.
"So far, [the perpetrators] appear to have just been looking around," said one person involved in the Nasdaq matter. Another person familiar with the case said the incidents were, for a computer network, the equivalent of someone sneaking into a house and walking around but, apparently so far, not taking or tampering with anything.
A spokesman for Nasdaq declined to comment.
A probe into the matter was initiated by the Secret Service and now includes the Federal Bureau of Investigation.
The mystery surrounding the hackers and their motives is worrying investigators, who remain unsure whether they have been able to plug all potential security gaps, especially since invaders typically seek new ways to breach systems.
The case involving New York-based Nasdaq OMX Group Inc. is part of what cyber-crime authorities see as a broader problem of hackers nosing around corporate computer networks, with varying degrees of success.
U.S. companies are a continual target, and sometimes their public websites are vandalized. It is rarer for perpetrators to penetrate internal systems.
Such breaches rarely come to light because companies fear that acknowledging them would alarm customers or encourage copycats.
Tom Kellermann, a former computer security official at the World Bank who now works at a firm called Core Security Technologies, said the most advanced hackers in the world are increasingly targeting financial institutions, particularly those involved in trading.
"Many sophisticated hackers don't immediately try to monetize the situation; they oftentimes do what's called local information gathering, almost like collecting intelligence, to ascertain what would be the best way in the long term to monetize their presence,'' he said.
People familiar with the Nasdaq matter said the Secret Service first began investigating last year. Investigators have informed White House officials of the case, according to the people familiar with the situation, who said that such a move is typical in hacking investigations, particularly in the early stages of the probes.
Authorities haven't yet been able to follow the trail to any specific individual or country. Those familiar with the case said that some evidence points toward Russia, but the person or people responsible could be almost anywhere, perhaps using computers in Russia merely as a conduit.
The case poses two concerns for authorities: preserving the stability and reliability of computerized trading, and ensuring that investors have full faith in that system.
Stock exchanges know they are frequently targets for hackers.
"We take any potential threat seriously and we are continually working to ensure that our systems operate at the highest levels of security and integrity," said Ray Pellecchia, a spokesman for NYSE Euronext, which operates the New York Stock Exchange.
He declined to discuss any specific instances of computer-hacking attempts against that exchange.
In 1999, hackers vandalized Nasdaq's publicly accessible website. In that incident, a group of hackers quickly claimed responsibility for defacing the site, as well as major media websites. Nasdaq officials at that time said the company's internal network wasn't affected.
Computer hacking is a problem for many countries. In recent years, U.S. authorities have dealt with cyberattacks linked to computers in Russia, China and Eastern Europe.
Hackers can use geography as a foil. Prosecutors said Albert Gonzalez, perhaps the most renowned hacker, perpetrated his biggest theft with help from computers in Eastern Europe even though he lived in Miami.
According to a 2009 federal indictment, he used computers located in the U.S., Latvia and Estonia, in a conspiracy that netted more than 100 million stolen credit-card numbers.
The case is considered the largest hacking crime in U.S. history. Mr. Gonzalez eventually pleaded guilty and was sentenced to 20 years in prison.
Write to Devlin Barrett at email@example.com
Thursday, February 3, 2011
NEW YORK - Netflix's streaming service, which has helped make the company the second-largest U.S. media subscription service and boosted the firm's market value, may finally get competition from Amazon.com.
Amazon, led by CEO Jeff Bezos, has long been rumored to be working on a streaming video service bundled with its Amazon Prime service, which gives users unlimited free two-day shipping for an annual subscription fee of $79. Over the weekend, a tech blog posted a screen shot of an ad that has since disappeared and that mentioned content from BBC America and PBS.
"Your Amazon Prime membership now includes unlimited, commercial-free, instant streaming of 5,000 movies and TV shows at no additional cost," the screen shot, which featured the film The Girl Who Kicked The Hornet's Nest, said.
"The link quickly disappeared, so we don't know if it was a real video service in progress, a test, or vaporware," Lazard Capital Markets analyst Barton Crockett said. "Still, the possibility that this is real is a provocative statement of how Amazon could become Netflix's first meaningful streaming competitor."
Indeed, the bundled offer "highlights the potential for Amazon to 'superset' Netflix, or offer Netflix's core streaming feature as part of a more valuable, broader package," Crockett argued. "Amazon Prime includes free shipping for purchases and costs $79 per year, versus a Netflix streaming-only sub at $95.88."
The renewed talk about a likely Amazon streaming offer comes after the e-tailer recently said it was acquiring full control of Lovefilm, which has been called the European version of Netflix.
But the timing of Amazon's streaming service bundle launch likely depends in part on how much access to major content it can negotiate. "We suspect Amazon Prime is not launching in the immediate future as the service description of 5,000 movies and TV shows does not appear to match up with the aforementioned content we saw," BTIG analyst Richard Greenfield said.
"This implies that Amazon is still working on its movie/TV content deals with all the majors."
By JENNIFER MARTINEZ | 2/2/11 12:53 PM EST Updated: 2/2/11 1:47 PM EST
The federal government has seized the Web addresses of ten websites that allegedly live stream sporting and pay-per-view events online, shutting them down just days before one of the biggest televised sporting events of the year: the Super Bowl.
The U.S. Attorney's Office of the Southern District of New York, working in conjunction with Immigration and Customs Enforcement, seized the Web addresses Tuesday. The seizure affidavit was unsealed Wednesday.
The websites, which include channelsurfing.net and Spain-based rojadirecta.org, were said to illegally provide access to content from the major professional sports organizations, namely the National Football League, National Basketball Association and the National Hockey League. The sites do not host the pirated sporting content themselves, but instead provide links to other websites where people can access it illegally.
Government officials argue that the sites are not only distributing pirated content illegally, but in the process, are also denting the revenues of the professional sports leagues and broadcasters as well as negatively impacting viewers.
"The illegal streaming of professional sporting events over the Internet deals a financial body blow to the leagues and broadcasters, who are forced to pass their losses off to fans by raising prices for tickets and pay-per-view events," said Preet Bharara, U.S. Attorney in Manhattan. "With the Super Bowl just days away, the seizures of these infringing websites reaffirm our commitment to working with our law enforcement partners to protect copyrighted material and put the people who steal it out of business."
Fans have increasingly abandoned watching sports games on their television sets, opting instead to watch them on their computers via the Web. The shift has jolted professional sports organizations, which are grappling with how to control the growing problem of sports games being streamed illegally online in real time. The organizations copyright the content of their games, from audio and video to text and images, and restrict others from distributing it without prior written approval.
But the government's action didn't provide a permanent solution to the problem. One of the sites that was shut down on Wednesday, ATDHE.net, has already reappeared at a new Web address, ATDHE.me.
The federal government launched a similar campaign last November that shut down 82 websites offering counterfeit goods and digital music and movie content. U.S. Attorney General Eric Holder and ICE Director John Morton had warned that the two agencies were committed to going after more websites that offer copyrighted content illegally.
Morton repeated that message during Wednesday's operation.
"This swift action by our Homeland Security Investigations New York special agents and analysts sends a clear message to website operators who mistakenly believe it's worth the risk to take copyrighted programming and portray it as their own," Morton said. "We will continue to aggressively investigate this type of crime with our law enforcement partners."
Tuesday, February 1, 2011
If you haven’t seen or heard something about Android 3.0 by now, you probably live under the coolest rock ever. Dubbed “Honeycomb”, the next in a long line of sugary-sweet names assigned alphabetically to Android releases, 3.0 has been called everything from “the Tablet Android” to “the iOS killer”. While I doubt either title will stick, the few details and demonstrations we have seen so far have been impressive. Its promise of software optimized for a tablet-comfortable UI has started Android down the road to what will eventually be its next major step.
Earlier this week, Google finally released the Software Development Kit for Honeycomb. The Android SDK gives us a definitive breakdown of the tools and features developers will need to bring to their apps when Honeycomb devices finally launch. I’ve spent a lot of time with the Android 3.0 SDK and its new features, and am now ready to provide an in-depth look at what to expect from Android 3.0.
The most striking change you will find in Honeycomb is the user interface. Place an Android 2.3 device (like the Nexus S) next to screenshots of Honeycomb and you can scarcely tell it’s the same OS. At first glance, Honeycomb favors deep, dark colors. This fits with most everything we have heard so far about white and other bright colors consuming more power than darks on AMOLED and SAMOLED screens, and it is the first of many changes Android is making to improve battery life. The UI design is referred to in the documentation as “holographic”, and I believe that refers to the added functionality of the widgets as well as the almost 3D way in which you are able to place them.
Widgets that contain multiple items, like a photo album or music player, have access to tools that allow you to “stack” the items within the widget, providing depth to the UI. Placing a widget on any of the five homescreens is also different, and in my opinion compares to what we’ve seen from the MIUI hackers. You are presented with a flexed view of all five homescreens above your widget examples, and are able to drag and drop to any of the homescreens immediately.
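For developers, that stacking effect corresponds to the new collection-style widget views in the 3.0 SDK. As a rough sketch (the id and attribute values here are illustrative, not from any shipping app), an app-widget layout might declare one of the new stacked views like this:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Hypothetical app-widget layout: StackView is one of the new
     collection views introduced in the Android 3.0 SDK, which
     renders its adapter items as a 3D stack of cards. -->
<StackView xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/photo_stack"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:loopViews="true" />
```

The widget's host app would then supply the items (photos, album art and so on) through an adapter, the same general pattern Android already uses for lists.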
This panel controls much more than widgets: it’s a control panel for much of the UI itself. Aside from widget placement, the panel also provides access to wallpapers and app shortcuts, making it the one-stop shop for personalizing the Honeycomb UI and replacing the “long press” popup seen in previous and current versions of Android. It’s easy to see how a major UI overhaul with such comfortable similarities will provide both a simple transition for existing users and a greatly simplified, powerful new UI for new users.
The gearheads will forever bicker about which OS handles multiple applications better. In my opinion, multitasking in a mobile environment needs to strike a delicate balance between ease-of-use and functionality. If accessing or navigating the multitasking features is difficult or not obvious, they won’t get used. The long press on the home button, the double tap in iOS, the “card” system in webOS: they all offer different ways of blending UI with functionality.
Alongside Honeycomb’s UI changes is a significant change to the way multitasking both looks and works. The existing method of accessing running or recently opened applications has grown to display in-app screens instead of just the app’s name and icon (though the screen is inactive, unlike what’s been seen on devices like the PlayBook).
On top of this redesign, additional functionality has been added to the Action Bar, or “top drawer”. The Action Bar will now also provide overflow app features as well as in-app functionality like menu options.
I’ll say it right now: 90% of virtual keyboards drive me crazy. iOS, Android, webOS, Windows Phone 7, I dislike them all. It’s really hard to make a phone keyboard that most people can use, and in my opinion that is even harder on a tablet, especially an Android tablet, given the extreme variance in possible screen sizes. I can’t speak to how functional the Honeycomb keyboard will be, but if the “new and improved” keyboard sticks to the design the Android team used for the stock Gingerbread keyboard (currently my favorite next to Swype), it should provide a pleasant user experience.
Combine that with the “enhanced” copy and paste, which also looks like the Gingerbread implementation. These changes will really be something users need to touch and play with in order to judge.
The Google Apps suite is arguably what makes Android the success it is. We’ve already seen the impressive changes to Google Maps, with the new vector-based 3D maps already showing up in several US cities. As that functionality expands at Google, Maps will become much faster even in low-bandwidth areas. To support the larger screens, the Camera, Gallery, Gmail and Browser apps have been adjusted as well.
The browser is still simply called “Browser”, even though it is less and less distinguishable from Google Chrome now that tabbed browsing and incognito windows have been added. The Camera and Gallery take advantage of the larger possible screens with a modified UI for adjusting menu settings.
Gmail, on the other hand, is (I hope) incomplete. The examples given for Gmail are completely at odds with the rest of the UI. Surrounding Gmail with bright clashing colors and lots of white made it hard to see the examples of the new drag-and-drop interface in Gmail’s two-pane setup. A two-pane setup is not alien to Google, which already has one for the iPad. It would seem that a similar design, optimized for Android, would have been acceptable. We’ve only got the pair of screenshots for now, so I will remain hopeful that Gmail is simply unfinished.
It’s clear that Google has set out to create a user experience greatly improved over existing versions of Android, but that’s not all Honeycomb is about. Stay tuned for our developer-focused look at Android 3.0!
Metered Internet usage (also called "Usage-Based Billing") is coming to Canada, and it's going to cost Internet users. While an advance guard of Canadians are expressing creative outrage at the prospect of having to pay inflated prices for Internet use charged by the gigabyte, the consequences probably haven't set in for most consumers. Now, however, independent Canadian ISPs are publishing their revised data plans, and they aren't pretty.
"Like our customers, and Canadian internet users everywhere, we are not happy with this new development," wrote the Ontario-based indie ISP TekSavvy in a recent e-mail message to its subscribers.
But like it or not, the Canadian Radio-television and Telecommunications Commission (CRTC) approved UBB for the incumbent carrier Bell Canada in September. Competitive ISPs, which connect to Canada's top telco for last-mile copper connections to customers, will also be metered by Bell. Even though the CRTC gave these ISPs a 15 percent discount this month (TekSavvy asked for 50 percent), it's still going to mean a real adjustment for consumers.
This is going to hurt
Starting on March 1, Ontario TekSavvy members who subscribed to the 5Mbps plan have a new usage cap of 25GB, "substantially down from the 200GB or unlimited deals TekSavvy was able to offer before the CRTC's decision to impose usage based billing," the message added.
By way of comparison, Comcast here in the United States has a 250GB data cap. Looks like lots of Canadians can kiss that kind of high ceiling goodbye. And going over will cost you: according to TekSavvy, the CRTC put data overage rates at CAN $1.90 per gigabyte for most of Canada, and $2.35 for the country's French-speaking region.
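To put those overage rates in perspective, here is a rough back-of-the-envelope sketch of the math. The function name is illustrative, and the pairing of a specific plan price with the 25GB cap is an assumption for the example, not TekSavvy's actual billing logic:

```python
def monthly_bill(usage_gb, base_price=31.95, cap_gb=25, overage_per_gb=1.90):
    """Plan price plus per-gigabyte overage for usage beyond the cap.

    Rates reflect the article's reported figures: a 25GB cap and
    CAN $1.90/GB overage for most of Canada.
    """
    overage_gb = max(0, usage_gb - cap_gb)
    return round(base_price + overage_gb * overage_per_gb, 2)

# A subscriber who uses 100GB in a month pays for 75GB of overage:
print(monthly_bill(100))  # 31.95 + 75 * 1.90 = 174.45
```

At $1.90 per gigabyte, even moderate streaming habits can make the overage line the largest item on the bill.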
Bottom line: no more unlimited buffet. TekSavvy users who bought the "High Speed Internet Premium" plan at $31.95 now get 175GB less per month.
"Extensive web surfing, sharing music, video streaming, downloading and playing games, online shopping and email," could put users over the 25GB cap, TekSavvy warns. Also, watch out "power users that use multiple computers, smartphones, and game consoles at the same time."
You need "protection"
Here's the "good" news: TekSavvy users can now buy "insurance," defined as "a recurring subscription fee that provides you with additional monthly usage." For Ontario it's $4.75 for 40GB of additional data (sorry, but unused data can't be rolled over to the next month).
There are also "usage vault" plans—payments made in advance for extra data. Consumers can buy vault data for $1.90/GB up to 300GB in any month.
Where once TekSavvy consumers could purchase High Speed Internet Premium at a monthly base usage of 200GB for $31.95 a month, now they can get about half of that data (if they buy two units of insurance) at $41.45 a month.
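Those figures check out arithmetically. A small sketch comparing the insurance route with raw per-gigabyte overage, using the prices reported above (variable and function names are illustrative):

```python
BASE_PRICE = 31.95       # High Speed Internet Premium, per month
INCLUDED_GB = 25         # new included usage under the revised plan
INSURANCE = (4.75, 40)   # price and added GB per "insurance" unit

def with_insurance(units):
    """Monthly cost and total usable data with N insurance units."""
    price, gb = INSURANCE
    return round(BASE_PRICE + units * price, 2), INCLUDED_GB + units * gb

def with_overage(usage_gb, rate=1.90):
    """Monthly cost if the same usage were billed as raw overage."""
    return round(BASE_PRICE + max(0, usage_gb - INCLUDED_GB) * rate, 2)

print(with_insurance(2))  # (41.45, 105): about half the old 200GB
print(with_overage(105))  # 183.95 for the same usage without insurance
```

So insurance is clearly the cheaper way to buy headroom, but even two units only gets a subscriber back to roughly half the data the same $31.95 plan included before the CRTC's decision.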