Thursday, May 1, 2014

Toward the electronic brain...


Brain-inspired circuit board 9000 times faster than an average PC

Bioengineers at Stanford University have developed microchips based on the human brain that are more energy efficient and up to 9000 times faster than the typical PC.



(Credit: Stanford University)

Simulating the human brain is one of the holy grails of computing, but it's extraordinarily difficult to do. Just last year, the longest simulation of brain activity to date was achieved on the fourth-most powerful computer in the world, Japan's K Computer, a machine with 705,024 processor cores running at over 10 petaflops. Using 82,944 of its processors, the simulation took 40 minutes to reproduce one second of brain activity across the equivalent of one per cent of the brain, around 10.4 trillion synapses.
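To put those figures in perspective, here is a quick back-of-the-envelope calculation, using only the numbers quoted above:

# Back-of-the-envelope, using only the figures quoted in the article.
simulated_seconds = 1               # one second of brain activity
wall_clock_seconds = 40 * 60        # 40 minutes of compute time
slowdown = wall_clock_seconds / simulated_seconds
print(f"Slowdown factor: {slowdown:.0f}x")    # 2400x slower than real time

simulated_synapses = 10.4e12        # roughly 1% of the brain
whole_brain_synapses = simulated_synapses / 0.01
print(f"Whole brain: about {whole_brain_synapses:.1e} synapses")   # ~1.0e+15

In other words, the world's fourth-fastest machine ran 2,400 times slower than biology, on one per cent of the problem.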
As for why it's important: if we could get a computer to operate with the power and speed of the human brain, we could build incredibly advanced robots, or prosthetic limbs that move with the speed and complexity of our own. It could also help us better understand how the brain actually works, which remains a largely mysterious subject.
A team of bioengineers at Stanford University has created a circuit board that can simulate the activity of one million neurons, around seven billion synaptic connections, in real time. About the size of an iPad, the board, called the Neurogrid, consists of 16 custom-designed "Neurocore" chips, fabricated with 15-year-old technology and laid out in a tree network. The design is unusual in that it uses analogue computation alongside digital communication.
"Analog computation constrains the number of distinct ion-channel populations that can be simulated—unlike digital computation, which simply takes longer to run bigger simulations," the Neurogrid website explains. "Digital communication constrains the number of synaptic connections that can be activated per second, unlike analog communication, which simply sums additional inputs onto the same wire. Working within these constraints, Neurogrid achieves its goal of simulating multiple cortical areas in real-time by making judicious choices."
Using analogue computation also keeps power requirements low: 100,000 times less power than a supercomputer, according to principal investigator Kwabena Boahen, who has been working on the Neurogrid since 2006. A supercomputer requires one million watts to simulate one million neurons, while the human brain uses just 20 watts for 100 billion neurons. Even so, the Neurogrid isn't quite as energy-efficient as the brain. "The human brain, with 80,000 times more neurons than Neurogrid, consumes only three times as much power," Boahen wrote in an article for the most recent issue of Proceedings of the IEEE.
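Those energy figures are easier to compare per neuron. The sketch below derives them strictly from the numbers quoted above; the Neurogrid line combines Boahen's "three times the power" and "80,000 times more neurons" ratios:

# Watts per neuron, derived only from the figures quoted in the article.
supercomputer = 1_000_000 / 1e6             # 1 MW for 1e6 neurons
brain = 20 / 100e9                          # 20 W for 1e11 neurons
neurogrid = (20 / 3) / (100e9 / 80_000)     # 1/3 the brain's power, 1/80,000 its neurons

for name, watts in [("Supercomputer", supercomputer),
                    ("Neurogrid", neurogrid),
                    ("Human brain", brain)]:
    print(f"{name:>13}: {watts:.1e} W per neuron")
# Supercomputer: 1.0e+00 W per neuron
#     Neurogrid: 5.3e-06 W per neuron
#   Human brain: 2.0e-10 W per neuron

By that reckoning, the Neurogrid sits roughly midway, on a logarithmic scale, between the supercomputer and the brain.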
Boahen and his team would like to see the Neurogrid developed for applications such as prosthetics — but first, the cost of creating the Neurogrid will need to be lowered. At the moment, a Neurogrid costs about US$40,000, but switching to modern manufacturing methods could reduce the price to as low as US$400, the team believes.

Friday, April 18, 2014

Robot journalists...


Ninety percent of the news could be written by computers by 2030.
Software is writing news stories with increasing frequency. In a recent example, an LA Times writer-bot wrote and posted a snippet about an earthquake three minutes after the event. The LA Times claims they were first to publish anything on the quake, and outside the USGS, they probably were.
The LA Times example isn’t special because it’s the first algorithm to write a story on a major news site. With the help of Chicago startup and robot writing firm, Narrative Science, algorithms have basically been passing the Turing test online for the last few years.
This is possible because some kinds of reporting are formulaic. You take a publicly available source, crunch it down to the highlights, and translate it for readers using a few boilerplate connectors. Hopefully, this makes it more digestible.
Indeed, Kristian Hammond, cofounder and CTO of Narrative Science, thinks some 90% of the news could be written by computers by 2030.
I imagine the computer populating a Venn diagram. In one circle, it adds hard data (earnings, sports stats, earthquake readings), in another, a selection of journalistic clichés—and where the two intersect, an article is born.
In truth, it’s a little more complicated than that. In engineering their software, Narrative worked with trained journalists to help the software determine an angle. For example, in the case of sports, the algorithm answers key questions like, “Who won the game and by how much? Was it a comeback or a blowout? Any heroics or notable stats?”
The program chooses an article template, strings together sentences, and spices them up with catch phrases: “It was a flawless day at the dish for the Giants.” The tone is colorfully prosaic, but human enough.
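A heavily simplified sketch of that template-and-stats pipeline might look like the following Python. This is a hypothetical toy, not Narrative Science's actual software; the function name and thresholds are invented for illustration.

# Hypothetical toy version of the template-and-stats approach described
# above -- not Narrative Science's actual software.
def game_recap(winner, loser, win_score, lose_score):
    """Pick an angle from the stats, then fill a sentence template."""
    margin = win_score - lose_score
    if margin >= 10:
        angle = "in a blowout"        # invented threshold, for illustration
    elif margin <= 2:
        angle = "in a nail-biter"
    else:
        angle = "comfortably"
    return f"The {winner} beat the {loser} {angle}, {win_score}-{lose_score}."

print(game_recap("Giants", "Dodgers", 9, 2))
# The Giants beat the Dodgers comfortably, 9-2.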
Early on, Narrative applied its algorithms to Little League baseball games. Participating parents would enter game stats into an iPhone app called GameChanger and the app would spit out written game summaries.
Since then, they’ve supplied content to major news sites. Forbes is open about its use of Narrative’s software, including an explanation in the article. The LA Times earthquake story, written by an algorithm created by one of their staff, included a disclaimer. But many more big sites anonymously use algorithms to write simple stories.
Narrative’s approach can be applied elsewhere too. The firm recently launched an app that works with Google Analytics to transform raw website metrics (traffic, sources, referrals, demographics) into accessible, natural language reports. These could be useful in any business, a kind of automated analyst to help make sense of big data sets.
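The same pattern works for metrics: pick out the salient numbers, compute the comparison, and pour them into a sentence. Again, this is a hypothetical sketch with made-up field names, not Narrative's product:

# Hypothetical sketch of metrics-to-prose reporting; the field names are
# invented, and the real product is far more sophisticated.
metrics = {"visits": 12450, "visits_prev": 10890, "top_source": "Google search"}

change = (metrics["visits"] - metrics["visits_prev"]) / metrics["visits_prev"]
direction = "rose" if change > 0 else "fell"

report = (f"Traffic {direction} {abs(change):.0%} this week to "
          f"{metrics['visits']:,} visits, driven mostly by {metrics['top_source']}.")
print(report)
# Traffic rose 14% this week to 12,450 visits, driven mostly by Google search.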
The software clearly has some native advantages over the typical human.
For example, the LA earthquake hit at 6:25am. I doubt many West Coast journalists were at their desk that early. And if they were, few would have cared to scoop what amounted to a pretty inconsequential earthquake. Even if someone had been on it—how many could have penned and published a typo-free article in three minutes?
Ken Schwencke, the journalist who created the algorithm, was awoken by the quake, rolled out of bed, found the article awaiting his approval—and simply hit “publish.”
If writers never had to compose a fifty-word earthquake report again, few would complain. Better to leave the short, dry, purely informational articles to the bots.
In the perennially cash-strapped news business, unpaid algorithms could add lots of cheap content while (hopefully) freeing human writers to focus on and improve the quality of more in-depth, nuanced pieces.
“The way we use it, it’s supplemental,” Schwencke told the Huffington Post. “It saves people a lot of time, and for certain types of stories, it gets the information out there in usually about as good a way as anybody else would. The way I see it is, it doesn’t eliminate anybody’s job as much as it makes everybody’s job more interesting.”
But Narrative isn’t satisfied with Little League write-ups and data reports.
Hammond doesn’t mince words. He believes a computer could write stories worthy of a Pulitzer Prize by 2017. Not only would such a robot writer be fast and ever-wakeful, prowling the exponentially growing deluge of online information—it would know enough of the subtleties of human language and logic to write compelling stories too.
And the software needn't be limited to the digital world. Such algorithms might one day find themselves a robot body, travel to war zones, and cover robot bullfights.
These robot-Hemingways might write existential think pieces that get to the heart (or emotional processor) of what it means to be a robot, and in the process, make us question what it means to be human—what sets us apart from the machines we make.
Photo credit: Silicon Republic
Source: Impactlab.net