Quote:
Originally Posted by cheerfulgreek
AI systems are limited by what we currently know about the nature of intelligence
I think "AI" might be more accurately referred to as "APM," or "artificial pattern matching." AI systems aren't "intelligent," but in many ways they can now replicate the patterns of intelligence well enough to convince us otherwise, and that ability will certainly improve. Future AI will be so good at it that most people will likely believe they are seeing actual intelligence when it's just pattern matching.
Quote:
Originally Posted by cheerfulgreek
seems like all evidence suggests that human and machine intelligence are radically different
As for how the brain does what it does versus how AI does what it does, I'd agree it's radically different. But the end result, I think, is quite similar. Humans, as far as I understand, do much of our learning by recognizing patterns, with new or more complicated learning usually built on patterns we perceived earlier. That's very similar to AI, which is, in a sense, an extremely sophisticated pattern-matching calculator.
Quote:
Originally Posted by cheerfulgreek
Don’t you think it will be more problematic than beneficial, based on that?
Great question. Probably both more problematic and more beneficial. But I'm not sure which one will be more significant.
I guess in comparing possible end results... AI could make life significantly better: more productivity, more advances, more efficiency, more possibilities. But future AI could also result in disaster, or be used by people to cause one. From that perspective, maybe AI will be more problematic because of the potential for serious harm.
But that cat is out of the bag; there's no putting it back. Even regulation is probably too late. Those who are barred by regulation from developing new AI will simply lose out to those who press ahead with the technology.
Consider Google DeepMind's AlphaGo beating the world's top players in the game of Go, including Lee Sedol in 2016. AlphaGo was initially trained on records of actual games between people. Then the DeepMind researchers built AlphaGo Zero, which was given only the rules of Go and trained entirely by playing against itself. AlphaGo Zero went on to defeat the version of AlphaGo that beat Lee Sedol by 100 games to 0, and beat the stronger "Master" version 89 games to 11.
So there we have a game that an AI mastered against human players, only for another AI to defeat the first one completely.
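The self-play idea behind AlphaGo Zero can be illustrated with a toy sketch: a tabular learner for the simple game of Nim (players alternately take 1-3 stones; whoever takes the last stone wins), trained purely by playing against itself with no human game data. To be clear, this is only an illustration of the self-play concept under simplified assumptions; the real AlphaGo Zero uses deep neural networks and Monte Carlo tree search, not a lookup table, and the function names here (self_play_train, best_move) are hypothetical.

```python
import random

ACTIONS = (1, 2, 3)  # a player may take 1, 2, or 3 stones per turn


def self_play_train(pile=10, episodes=20000, alpha=0.1, eps=0.2, seed=0):
    """Learn Nim (last stone wins) purely by self-play.

    Q[(stones_left, take)] estimates the mover's chance of winning.
    Both "players" share the same table, so every game improves one agent.
    """
    rng = random.Random(seed)
    Q = {}
    for _ in range(episodes):
        state, history = pile, []
        while state > 0:
            legal = [a for a in ACTIONS if a <= state]
            if rng.random() < eps:                          # explore
                a = rng.choice(legal)
            else:                                           # exploit shared Q
                a = max(legal, key=lambda x: Q.get((state, x), 0.5))
            history.append((state, a))
            state -= a
        reward = 1.0  # the player who took the last stone won
        for s, a in reversed(history):                      # Monte Carlo update
            q = Q.get((s, a), 0.5)
            Q[(s, a)] = q + alpha * (reward - q)
            reward = 1.0 - reward                           # opponent's view
    return Q


def best_move(Q, stones_left):
    """Greedy policy read off the learned table."""
    legal = [a for a in ACTIONS if a <= stones_left]
    return max(legal, key=lambda a: Q.get((stones_left, a), 0.5))
```

With enough episodes the greedy policy should recover Nim's known optimal strategy (leave your opponent a multiple of four stones), having never seen a human game, which is the essence of what AlphaGo Zero did at vastly greater scale.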
Consider that militaries run war "games" for training and analyzing scenarios. The US military not only should be using AI, it must. If it doesn't, and an enemy or potential future enemy develops AI that can defeat all the human war-game "players" the way those AIs mastered Go, the side without that advantage probably loses. And it might be that the government that reaches AI superiority first is the one that wins it all.
Then there are governments that will turn massive data collection and AI development against their own people.
Lots of great benefits but also lots of very serious potential problems.