Mine is, but it didn't use to be. I think there are two reasons for this change.
First, Geezerhood has progressively robbed me of more and more brain cells, so that relative to my phone I've become dumber. And although I still have a few cells left, they don't seem to want to work together as well as they used to.
But there is a second, far more interesting reason. Up until a few years ago I had a cell phone that looked like a Star Trek communicator. Although I tried hard, I could never get Scotty to beam me up, nor would the phone do any of those other nifty things the communicators did. I traded that old "flip" phone for a newer, sleeker model, but it was still only a phone, dumb as a post. Then I finally succumbed to the techno-bug and bought a "smart" phone that has been getting disquietingly more and more capable of what seems like intelligent behavior. In other words, it may be getting smarter and I'm...well, let's just say "not."
In my attempts to keep my remaining neurons firing, I like to keep up on current developments in information technology and the internet. This is a topic I used to teach, and I am still interested in it, especially as it impacts our daily lives and our society. One of the hot topics lately is artificial intelligence, or AI. There have been significant recent advances in this area that support my suspicions about my phone and all our other interfaces to the internet -- they are getting smarter.
The notion that computer systems could become truly intelligent has been around a long time, and the idea has gone through several cycles of optimistic over-hype and pessimistic disparagement. AI was legitimized as a serious field of research in the 1950s, led by an impressive array of mathematicians and computer programmers who made striking initial progress but then stalled when certain problems proved intractable, either because of the limits of the theoretical approaches available at the time or because existing computing power and storage were insufficient. Today, however, you have more computing power in your cell phone than existed in a room-sized computer in the early days of AI, and recent breakthroughs in how we think about what constitutes machine intelligence have led to real-world applications of AI that are all around us and growing rapidly in number and scope.
Wikipedia attributes the recent surge in AI successes to three factors: advanced statistical techniques (loosely known as deep learning), access to large amounts of data, and faster computers.
"[These have]...enabled advances in machine learning and perception. By the mid 2010s, machine learning applications were used throughout the world. In a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy champions, Brad Rutter and Ken Jennings, by a significant margin. The Kinect, which provides a 3D body–motion interface for the Xbox 360 and the Xbox One use algorithms that emerged from lengthy AI research as do intelligent personal assistants in smartphones" [emphasis added]. (Wikipedia, "Artificial Intelligence")One particularly instructive illustration of the power of current AI occurred in March of this year, when Google's AlphaGo program won 4 out of 5 games against champion GO player Lee Sedol. Computers have previously beaten humans at board games like checkers and chess, but they did so by brute force calculation of the potential outcome of each move. The complexity of GO, however, makes this nearly impossible, even with very fast computers -- it is said there are more possible positions in Go than there are atoms in the universe. Expert players have to rely more on an intuitive feel for the game at a higher intellectual level. As Demis Hassabis, one of AlphaGo's creators describes it, "Good positions look good. It seems to follow some kind of aesthetic. That’s why it has been such a fascinating game for thousands of years” (Wired, 5/16). And it is also why designing an AI system that could play Go well has been such a challenge -- it would have to incorporate human intellectual qualities that go beyond mere calculation.
So one very important aspect of AlphaGo's success is that it functions more like human intelligence than previous AI systems did. A second is that AlphaGo doesn't just apply rules or logic; it "learns" by being exposed to massive amounts of data from which it "distills" knowledge. According to Cade Metz of Wired magazine, AlphaGo's development team:
"...fed 30 million human Go moves into a deep neural network, a network of hardware and software that loosely mimics the web of neurons in the human brain. Neural networks are actually pretty common; Facebook uses them to tag faces in photos. Google uses them to identify commands spoken into Android smartphones. If you feed a neural net enough photos of your mom, it can learn to recognize her. Feed it enough speech, it can learn to recognize what you say. Feed it 30 million Go moves, it can learn to play Go" (Metz, 5/16).One particular move in the AlphaGo/Sedol match, #37 in game two, was particularly meaningful because it wasn't one of the moves AlphaGo had seen before and because the move was considered by many expert Go players to show an extraordinary level of "artificial insight" and mastery of the game.
"Move 37 wasn’t in that set of 30 million. So how did AlphaGo learn to play it? AlphaGo was making decisions based not on a set of rules its creators had encoded but on algorithms it had taught itself.AlphaGo knew—to the extent that it could “know” anything—that the move was a long shot. 'It knew that this was a move that professionals would not choose, and yet, as it started to search deeper and deeper, it was able to override that initial guide,' [developer] Silver says. AlphaGo had, in a sense, started to think on its own. It was making decisions based not on a set of rules its creators had encoded in its digital DNA but on algorithms it had taught itself. 'It really discovered this for itself, through its own process of introspection and analysis.' " (Metz, 5/16, my emphasis added.)This ability to go beyond a rote set of programmed instructions is one of the most important and significant qualities of the recent advances in AI -- with both positive and negative potential implications. On the positive side, it greatly enhances the power of AI systems to do all kinds of complex tasks for us. The neural net/machine learning approach that was used to develop AlphaGo is being applied to many other areas as well, including search engines, facial recognition, biometric scanning, robotics, speech recognition, robotic navigation and manipulation, data mining, control systems for self-driving cars, managing complex scheduling operations, etc.
But there may also be a dark side, because this type of AI ceases to be understandable and predictable by its creators. As Jason Tanz of Wired puts it, "...With machine learning, the engineer never knows precisely how the computer accomplishes its tasks. The neural network’s operations are largely opaque and inscrutable...When engineers do peer into a deep neural network, what they see is an ocean of math: a massive, multilayer set of calculus problems that—by constantly deriving the relationship between billions of data points—generate guesses about the world" (Tanz, 2016).
This loss of control has led to some serious warnings about the potential for dire future outcomes. For example, the brilliant physicist Stephen Hawking cautions: "One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand...Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all." Elon Musk, Bill Gates, Steve Wozniak, and many others share similar reservations, which led to a recent Open Letter in which they and over 100 other experts drew attention to this issue and called for efforts to lessen the probability that the technology will go awry: "We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do" (my emphasis).
Hmmm. Sounds good. But I'm not sure my phone is doing what I want it to do even NOW...
__________________________
Sources and Resources:
"Artificial Intelligence," Wikipedia.
"AlphaGo," DeepMind.
"What the AI Behind AlphaGo Can Teach Us About Being Human," Cade Metz, Wired, May 2016.
"Soon We Won't Program Computers. We'll Train Them Like Dogs," Jason Tanz, Wired, May 2016.
"Stephen Hawking, Elon Musk, and Bill Gates Warn About Artificial Intelligence," The Observer, 2015.
"Research Priorities for Robust and Beneficial Artificial Intelligence" (Open Letter), Future of Life Institute.
3 comments:
I think the phone is smarter and much more stupid too, it depends. My last phone got lost in a nature preserve and never came home. I thought it was gone forever, and then it made, probably with some help, a few attempts to call my friend Bob. Bob was never sure what it wanted. I put a wooden stake in its identity after that. I'll read the AI open letter later; it's been a vague concern, but I figured on being gone before it becomes too serious. I may have underestimated, as the Go story you re-tell indicates, hmmm. I may have something for you on cats soon, probably something you know about, but maybe with some new twists.
I've been writing an NCI grant proposal involving some advanced discourse-processing technology, and the recent strides are amazing. One small example: my colleague told me about as-yet-unpublished data in which advanced natural language processing technology, used to analyze undergraduate college admission essays, was a better predictor of success in college than SAT scores.
I sometimes imagine myself talking with Benjamin Franklin: "Each of us carries a device holding much of the information in the world's best libraries. Each of us can use these devices to communicate with more people than were alive in the whole world in 1776. We use them to send each other pictures of cats and insult perfect strangers."
Dick, I think you are still pretty smart! Your blog is evidence enough for me.... Fascinating stuff...
And please, go back and correct your second sentence, paragraph two...LOL