AI, Biometrics, Robotics etc. 01 (Sep 14 - May 23)

Re: Robotics & Artificial Intelligence

Postby behappyalways » Fri Jan 29, 2016 5:56 pm

Google's artificial intelligence
decisively beats a top Go player
http://hk.apple.nextmedia.com/internati ... 9/19470964
The blood must run hot, the head stay cool, the bones stay hard
behappyalways
Millionaire Boss
 
Posts: 40264
Joined: Wed Oct 15, 2008 4:43 pm

Re: Robotics & Artificial Intelligence

Postby behappyalways » Thu Feb 25, 2016 10:31 am

Boston robot fights against pushing
http://www.bbc.com/news/technology-35648921

Re: Robotics & Artificial Intelligence

Postby behappyalways » Fri Mar 11, 2016 7:47 am

AlphaGo wins again with an unexpected move
South Korean Go champion: "I have nothing to say"
http://hk.apple.nextmedia.com/news/art/ ... 1457508587

Re: Robotics & Artificial Intelligence

Postby behappyalways » Sat Mar 12, 2016 8:26 pm

Artificial intelligence and Go

Showdown

Win or lose, a computer program’s contest against a professional Go player is another milestone in AI


UPDATE Mar 12th 2016: AlphaGo has won the third game against Lee Sedol, and has thus won the five-game match.

TWO-NIL to the computer. That was the score, as The Economist went to press, in the latest round of the battle between artificial intelligence (AI) and the naturally evolved sort. The field of honour is a Go board in Seoul, South Korea—a country that cedes to no one, least of all its neighbour Japan, the title of most Go-crazy place on the planet.

To the chagrin of many Japanese, who think of Go as theirs in the same way that the English think of cricket, the game’s best player is generally reckoned to be Lee Sedol, a South Korean. But not, perhaps, for much longer. Mr Lee is in the middle of a five-game series with AlphaGo, a computer program written by researchers at DeepMind, an AI software house in London that was bought by Google in 2014. And, though this is not an official championship series, as the scoreline shows, Mr Lee is losing.

Go is an ancient game—invented, legend has it, by the mythical First Emperor of China, for the instruction of his son. It is played all over East Asia, where it occupies roughly the same position as chess does in the West. It is popular with computer scientists, too.

For AI researchers in particular, the idea of cracking Go has become an obsession. Other games have fallen over the years—most notably when, in 1997, one of the best chess players in history, Garry Kasparov, lost to a machine called Deep Blue. Modern chess programs are better than any human. But compared with Go, teaching chess to computers is a doddle.

At first sight, this is odd. The rules of Go are simple and minimal. The players are Black and White, each provided with a bowl of stones of the appropriate colour. Black starts. Players take turns to place a stone on any unoccupied intersection of a 19x19 grid of vertical and horizontal lines. The aim is to use the stones to claim territory.

In the version being played by Mr Lee and AlphaGo each stone, and each surrounded intersection, is a point towards the final score. Stones surrounded by enemy stones are captured and removed. If an infinite loop of capture and recapture, known as Ko, becomes possible, a player is not allowed to recapture immediately, but must first play elsewhere. Play carries on until neither player wishes to continue.
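
The capture rule described above can be sketched in a few lines: a group of stones is removed when it has no adjacent empty intersections (its "liberties"). This is an illustrative sketch only, not code from any Go engine; the board encoding and function names are invented for the example.

```python
# Minimal illustration of the Go capture rule.
# 0 = empty, 1 = black, 2 = white. All names and layout are illustrative.

def neighbours(r, c, size):
    """Orthogonally adjacent intersections that lie on the board."""
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < size and 0 <= nc < size:
            yield nr, nc

def group_and_liberties(board, r, c):
    """Flood-fill the group containing (r, c); return (group, liberty count)."""
    colour = board[r][c]
    size = len(board)
    group, liberties, stack = set(), set(), [(r, c)]
    while stack:
        cur = stack.pop()
        if cur in group:
            continue
        group.add(cur)
        for nr, nc in neighbours(*cur, size):
            if board[nr][nc] == 0:
                liberties.add((nr, nc))
            elif board[nr][nc] == colour:
                stack.append((nr, nc))
    return group, len(liberties)

# A lone white stone surrounded on all four sides has no liberties:
# under the rule above it would be captured and removed.
board = [[0] * 5 for _ in range(5)]
board[2][2] = 2                      # white stone
for r, c in ((1, 2), (3, 2), (2, 1), (2, 3)):
    board[r][c] = 1                  # black stones surround it
group, libs = group_and_liberties(board, 2, 2)
```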

Go forth and multiply

This simplicity, though, is deceptive. In a truly simple game, like noughts and crosses, every possible outcome, all the way to the end of a game, can be calculated. This brute-force approach means a computer can always work out which move is the best in a given situation.
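
For a game as small as noughts and crosses, that exhaustive calculation fits in a few lines. The sketch below, a plain minimax search written for illustration, confirms the well-known result that the game is a draw under perfect play.

```python
# Exhaustive minimax for noughts and crosses (tic-tac-toe).
# Every line of play is explored to its end, so the returned value
# is exact: +1 means X wins, -1 means O wins, 0 means a draw.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),      # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),      # columns
         (0, 4, 8), (2, 4, 6)]                  # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Value of the position with best play by both sides."""
    win = winner(board)
    if win:
        return 1 if win == 'X' else -1
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0                       # board full: draw
    values = []
    for m in moves:
        board[m] = player
        values.append(minimax(board, 'O' if player == 'X' else 'X'))
        board[m] = None                # undo the move
    return max(values) if player == 'X' else min(values)

value = minimax([None] * 9, 'X')       # perfect play from an empty board
```

Running it from the empty board evaluates every reachable position, a few hundred thousand nodes, in well under a second; the same approach on Go would never terminate.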

The most complex game to be “solved” this way is draughts, in which around 10^20 (a hundred billion billion) different matches are possible. In 2007, after 18 years of effort, researchers announced that they had come up with a provably optimum strategy.

But a draughts board is only 8x8. A Go board’s size means that the number of games that can be played on it is enormous: a rough-and-ready guess gives around 10^170. Analogies fail when trying to describe such a number. It is nearly a hundred orders of magnitude more than the number of atoms in the observable universe, which is somewhere in the region of 10^80.

Any one of Go’s hundreds of turns has about 250 possible legal moves, a number called the branching factor. Choosing any of those will throw up another 250 possible moves, and so on until the game ends. As Demis Hassabis, one of DeepMind’s founders, observes, all this means that Go is impervious to attack by mathematical brute force.
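
A back-of-the-envelope check on those figures: with roughly 250 legal moves per turn, and games commonly estimated to run about 150 moves, the Go game tree has on the order of 250^150, about 10^360 paths; chess, with a branching factor of 35 and games of around 80 plies, comes in near 35^80, about 10^123. The game lengths used below are the usual rough estimates, not figures from the article.

```python
# Rough game-tree sizes from branching factor b and typical game length d:
# the tree has about b**d paths, so its order of magnitude is d * log10(b).
# The depths (150 for Go, 80 for chess) are conventional rough estimates.
import math

def tree_magnitude(branching, depth):
    """Power of ten of branching**depth, i.e. log10 of the tree size."""
    return depth * math.log10(branching)

go_paths = tree_magnitude(250, 150)     # ~360: about 10^360 paths for Go
chess_paths = tree_magnitude(35, 80)    # ~123: about 10^123 paths for chess
```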

But there is more to the game’s difficulty than that. The small board and comparatively restrictive rules of chess mean there are only around 10^47 different possible games, and its branching factor is just 35. Yet that is still, in practice, far too many for chess to be solved in the way that draughts has been.

Instead, chess programs filter their options as they go along, selecting promising-looking moves and reserving their number-crunching prowess for the simulation of the thousands of outcomes that flow from those chosen few. This is possible because chess has some built-in structure that helps a program understand whether or not a given position is a good one.

A knight is generally worth more than a pawn, for instance; a queen is worth more than either. (The standard values are three, one and nine respectively.)

Working out who is winning in Go is much harder, says Dr Hassabis. A stone’s value comes only from its location relative to the other stones on the board, which changes with every move. At the same time, small tactical decisions can have, as every Go player knows, huge strategic consequences later on. There is plenty of structure—Go players talk of features such as ladders, walls and false eyes—but these emerge organically from the rules, rather than being prescribed by them.

Since good players routinely beat bad ones, there are plainly strategies for doing well. But even the best players struggle to describe exactly what they are doing, says Miles Brundage, an AI researcher at Arizona State University. “Professional Go players talk a lot about general principles, or even intuition,” he says, “whereas if you talk to professional chess players they can often do a much better job of explaining exactly why they made a specific move.”

Intuition is all very well. But it is not much use when it comes to the hyper-literal job of programming a computer. Before AlphaGo came along, the best programs played at the level of a skilled amateur.

Go figure

AlphaGo uses some of the same technologies as those older programs. But its big idea is to combine them with new approaches that try to get the computer to develop its own intuition about how to play—to discover for itself the rules that human players understand but cannot explain.

It does that using a technique called deep learning, which lets computers work out, by repeatedly applying complicated statistics, how to extract general rules from masses of noisy data.

Deep learning requires two things: plenty of processing grunt and plenty of data to learn from. DeepMind trained its machine on a sample of 30m Go positions culled from online servers where amateurs and professionals gather to play. And by having AlphaGo play against another, slightly tweaked version of itself, more training data can be generated quickly.

Those data are fed into two deep-learning algorithms. One, called the policy network, is trained to imitate human play. After watching millions of games, it has learned to extract features, principles and rules of thumb. Its job during a game is to look at the board’s state and generate a handful of promising-looking moves for the second algorithm to consider.

This algorithm, called the value network, evaluates how strong a move is. The machine plays out the suggestions of the policy network, making moves and countermoves for the thousands of possible daughter games those suggestions could give rise to. Because Go is so complex, playing all conceivable games through to the end is impossible.

Instead, the value network looks at the likely state of the board several moves ahead and compares those states with examples it has seen before. The idea is to find the board state that looks, statistically speaking, most like the sorts of board states that have led to wins in the past. Together, the policy and value networks embody the Go-playing wisdom that human players accumulate over years of practice.

As Mr Brundage points out, brute force has not been banished entirely from DeepMind’s approach. Like many deep-learning systems, AlphaGo’s performance improves, at least up to a point, as more processing power is thrown at it. The version playing against Mr Lee uses 1,920 standard processor chips and 280 special ones developed originally to produce graphics for video games—a particularly demanding task.

At least part of the reason AlphaGo is so far ahead of the competition, says Mr Brundage, is that it runs on this more potent hardware. He also points out that there are still one or two hand-crafted features lurking in the code. These give the machine direct hints about what to do, rather than letting it work things out for itself.

Nevertheless, he says, AlphaGo’s self-taught approach is much closer to the way people play Go than Deep Blue’s is to the way they play chess.

One reason for the commercial and academic excitement around deep learning is that it has broad applications. The techniques employed in AlphaGo can be used to teach computers to recognise faces, translate between languages, show relevant advertisements to internet users or hunt for subatomic particles in data from atom-smashers.

Deep learning is thus a booming business. It powers the increasingly effective image- and voice-recognition abilities of computers, and firms such as Google, Facebook and Baidu are throwing money at it.

Deep learning is also, in Dr Hassabis’s view, essential to the quest to build a general artificial intelligence—in other words, one that displays the same sort of broad, fluid intelligence as a human being. A previous DeepMind paper, published in 2015, described how a computer had taught itself to play 49 classic Atari videogames—from “Space Invaders” to “Breakout”—simply by watching the screen, with no helpful hints (or even basic instructions) from its human overlords.

It ended up doing much better than any human player can. (In a nice coincidence, atari is also the name in Go for a stone or group of stones that is in peril of being captured.)

Games offer a convenient way to measure progress towards this general intelligence. Board games such as Go can be ranked in order of mathematical complexity. Video games span a range of difficulties, too. Space Invaders is a simple game, played on a low-resolution screen; for a computer to learn to play a modern video game would require it to interpret a picture much more subtle and complicated than some ugly-looking monsters descending a screen, and in pursuit of much less obvious goals than merely zapping them. One of DeepMind’s next objectives, Dr Hassabis says, is to build a machine that can learn to play any game of cards simply by watching videos of humans doing so.

Go tell the Spartans

For now, he reckons, general-purpose machine intelligence remains a long way off. The pattern-recognising abilities of deep-learning algorithms are impressive, but computers still lack many of the mental tools that humans take for granted. A big one is “transfer learning”, which is what AI researchers call reasoning by analogy.

This is the ability to take lessons learned in one domain and apply them to another. And machines like AlphaGo have no goals, and no more awareness of their own existence than does a word processor or a piece of accounting software.

In the short term, though, Dr Hassabis is optimistic. At a kiwon, or Go parlour, in Seoul, the day before the match, the 30 or so players present were almost unanimous in believing that the machine would fall short.

“Lee is a genius who is constantly creating new moves; what machine can replicate that?” asked one. At a pre-match press conference Mr Lee said he was confident he would win 5-0, or perhaps 4-1.

He was, plainly, wrong about that, although it is not over yet. “He’s a very good player,” said a diplomatic Dr Hassabis before the match. “But our internal tests say something different.” Even if Mr Lee does manage to pull off an improbable victory, though, humans are unlikely to stay on top for long.

As AlphaGo’s algorithms are tweaked, and as it gathers more data from which to learn, it is only going to get better. Asked whether there was a ceiling to its abilities, Dr Hassabis said he did not know: “If there is, we haven’t found it yet.”

Correction: An earlier version of this story suggested that 10^170 was the number of possible positions of stones on a Go board; in fact it is an estimate of the number of possible Go games. Sorry.

Source: The Economist

Re: Robotics & Artificial Intelligence

Postby winston » Sun Mar 13, 2016 8:09 am

Rise of Intelligent Machines

It’s not the answer that’s important. It’s the question. This is why intelligent machines are going to change the world as we know it.

With today’s technology we can ask questions we never knew the answer to before but intelligent machines will be able to ask questions we never thought to ask. Lending an additional perspective to the creative exercise will allow humanity to reach levels of technological innovation we have never imagined possible.

The digitization trend really took off with the internet and is accelerating at an exponential rate. Since digital is just about the only medium people now use, the pace of this evolution is likely to continue.

A big part of this story is the amount of data being created every day. In fact, ‘big’ data is becoming an understatement. Exabytes (billions of gigabytes) of data are created and copied every day. The value proposition in companies like Google, Apple and Facebook lies in the billions they have invested in sifting, collating, analysing and protecting the ocean of customer data their networks generate on a daily basis.

With such a rich seam of data they are teaching programs to do in fractions of a second what a legion of people would take years to do. More importantly, they are developing predictive capabilities, so they can tailor future products to an evolving client demographic before the need actually exists.

Creativity has always been a wholly human endeavour but that is rapidly changing. When asked to design a swingarm for racing motorcycles, Autodesk’s generative design software came up with a design that took inspiration from biology, with the product resembling a dog’s hip.

That’s not generally something you’d expect from a robot with no conception of art but it was the most efficient design possible given the parameters of the work. Another company is using the same software to 3-D print a bridge across one of Amsterdam’s canals in real time.

We’re getting to a point where Uber could be completely automated and we will all be able to relax while being chauffeured around by autonomous vehicles. Home construction could be left to a few robots with built-in 3-D printers that come up with the most efficient design based on your desires. Semi-intelligent software already writes time-sensitive stories for news bureaus like Bloomberg and Reuters.

Your home becomes fully automated so that each room is climate controlled based on whether it is currently occupied. We’re all going to have to up our game if we are going to keep up with the pace of innovation in what is likely to be a struggle to remain relevant in the workforce. Stay in school kids!

Machine learning is incredibly exciting because, as with our own children, once we unleash them on the world we have no real idea what will come out of their mouths. Most important of all, machines have always been wonderful facilitators of human ingenuity, and now they are supplementing it with their own.

The speed with which computations can now be done and the ease big data algorithms have in identifying patterns in the most dizzying web of information means the pace of technological innovation is only going to continue to accelerate. It’s going to be very exciting and I can’t wait to share it with you.

Source: Exponential Investor
It's all about "how much you made when you were right" & "how little you lost when you were wrong"
winston
Billionaire Boss
 
Posts: 111065
Joined: Wed May 07, 2008 9:28 am

Re: Robotics & Artificial Intelligence

Postby winston » Sat Apr 16, 2016 10:17 am

7 Companies That Are Doing Wonders With AI

From virtual assistants to chatbots and self-driving cars, artificial intelligence is powering some of the most exciting future technology

By Brad Moon

Source: Investor Place

http://investorplace.com/2016/04/artifi ... xGeXDB96M8

Re: Robotics & Artificial Intelligence

Postby winston » Wed May 25, 2016 8:07 am

3 Stocks to Buy to Win the Machine Learning Megatrend

These companies are setting themselves up to be leaders in the machine-learning space

By Laura Hoy

Machine learning has become one of the hottest industries for tech companies and consequently, a huge area of focus for many investors.

Because machine learning has so much potential in such a wide range of industries, traders would be wise to look for stocks to buy within the trend. With most big-name tech firms at least dabbling in machine learning somehow, there are several choices for those interested in adding it to their portfolio.

However, it’s important to invest in companies whose machine-learning efforts are setting them up for success as the technology develops.

Source: Investor Place

http://investorplace.com/2016/05/3-stoc ... 0TsK5F96M8

Re: Robotics & Artificial Intelligence

Postby winston » Fri Jun 24, 2016 7:02 pm

A Ground-Floor Opportunity in the Next Decade’s Most Disruptive Technology

By MICHAEL A. ROBINSON

Source: Money Morning

http://moneymorning.com/2016/06/24/a-gr ... echnology/

Re: Robotics & Artificial Intelligence

Postby behappyalways » Fri Jul 08, 2016 1:12 pm

Sucking robot arm wins Amazon Picking Challenge
http://www.bbc.com/news/technology-36702758

Re: Robotics & Artificial Intelligence

Postby behappyalways » Sun Sep 11, 2016 9:48 pm

Google’s DeepMind Achieves Speech-Generation Breakthrough
http://www.bloomberg.com/news/articles/ ... eakthrough
