GreekChat.com Forums  

  #1  
08-29-2024, 01:32 PM
John
Administrator
 
Join Date: Aug 1999
Location: NJ, USA
Posts: 2,326
Quote:
Originally Posted by cheerfulgreek
AI systems are limited by what we currently know about the nature of intelligence
I think "AI" might be more correctly referred to as "APM" / "artificial pattern matching." AI systems aren't "intelligent" but they can in many ways now replicate the patterns of intelligence well enough to convince us of AI "intelligence" and that ability/feature of AI will certainly improve in the future. Future AI will be so good at doing that most people will likely believe that what they are seeing is actual intelligence when it's just pattern matching.

Quote:
Originally Posted by cheerfulgreek
seems like all evidence suggests that human and machine intelligence are radically different
With regard to how the brain does what it does versus how AI does what it does, I would probably agree that they're radically different. But the end result, I think, is quite similar. Humans, as far as I understand, do much of our learning from patterns we perceive, with new or more complicated learning usually built on patterns we perceived earlier. That's very similar to AI, in the sense that AI is an extremely sophisticated pattern-matching calculator (see the toy sketch below).
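
Just to make "pattern matching" concrete, here's a toy sketch. It is not any real AI system, and the training text and names are invented for illustration: it "learns" only by counting which word follows which in its data, then predicts the continuation it has seen most often.

Code:
# Toy illustration of "learning" as pattern matching (not a real AI system).
# The model counts which word follows which in its training text, then
# predicts the most frequent continuation it has seen.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat the cat ate the fish".split()

# Count word -> next-word patterns observed in the data.
follows = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))   # -> "cat" (the most frequent pattern)
print(predict_next("cat"))   # -> "sat" or "ate" (a tie in this tiny data set)

Real systems use vastly more data and far more sophisticated models, but the underlying move is still "predict from patterns seen before."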

Quote:
Originally Posted by cheerfulgreek
Don’t you think it will be more problematic than beneficial, based on that?
Great question. Probably both: more problematic and more beneficial. I'm just not sure which will be more significant.

I guess in comparing possible end results... AI could make life significantly better: more productive, more advanced, more efficient, with more possibilities. But future AI could also result in disaster, or be used by people to cause one. So from that perspective, maybe AI will be more problematic because of the potential for serious harm.

But that cat is out of the bag, and it's not going back in. Even if AI is regulated, it's probably too late. Those who are prevented by regulation from developing new AI will simply lose out to those who take advantage of the technology.

Consider Google DeepMind's AlphaGo beating world champion players at the game of Go. AlphaGo was trained in part on actual games between people. Then the DeepMind team built AlphaGo Zero, which was given only the rules of Go and trained itself, essentially by playing against itself for a period of time and mastering the game that way. AlphaGo Zero went on to defeat AlphaGo decisively; as I recall, it ultimately won 100 games to 0 against the earlier version, though I don't know the exact history of it.

So there's a game that an AI mastered against people, and then another AI completely defeated the first AI.
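
For what it's worth, the core self-play idea is simple enough to sketch. Below is a toy version on tic-tac-toe rather than Go (Go needs vastly more machinery), and it is not DeepMind's actual code; every name and number in it is made up for illustration. The program starts with only the rules, plays against itself, and nudges its estimate of each position it visited toward the final result.

Code:
# Toy sketch of self-play learning on tic-tac-toe (not DeepMind's code).
# The agent knows only the rules, plays itself, and updates a value table
# from game outcomes.
import random

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    """Return "X" or "O" if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

values = {}          # board state -> estimated value from X's point of view
EPSILON = 0.2        # how often to try a random move (exploration)
LEARNING_RATE = 0.1

def choose_move(board, player):
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if random.random() < EPSILON:
        return random.choice(moves)           # explore
    def score(m):
        nxt = board[:m] + player + board[m+1:]
        v = values.get(nxt, 0.0)
        return v if player == "X" else -v     # "O" prefers low X-value
    return max(moves, key=score)              # exploit learned patterns

def self_play_game():
    board, player, history = " " * 9, "X", []
    while winner(board) is None and " " in board:
        m = choose_move(board, player)
        board = board[:m] + player + board[m+1:]
        history.append(board)
        player = "O" if player == "X" else "X"
    result = {"X": 1.0, "O": -1.0, None: 0.0}[winner(board)]
    for state in history:                     # learn from the final outcome
        old = values.get(state, 0.0)
        values[state] = old + LEARNING_RATE * (result - old)

for _ in range(20000):
    self_play_game()
print("positions learned:", len(values))

Obviously AlphaGo Zero used deep neural networks and tree search rather than a lookup table, but the loop is the same in spirit: play yourself, learn from the outcome, repeat.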

Consider that the military runs war "games" for training and analyzing scenarios. The US military not only should be using AI, it must be. It has to, because if it isn't, and an enemy or potential future enemy develops AI that can defeat all the human war-game "players" the way those AIs did in the actual game of Go, then the side without that advantage probably loses. And it may be that the government which reaches AI superiority first is the one that wins it all.

Then there are governments that will use massive data collection & AI training/development against their own people.

Lots of great benefits but also lots of very serious potential problems.
__________________
John Hammell
Network Admin, GreekChat.com
  #2  
08-29-2024, 07:09 PM
cheerfulgreek
GreekChat Member
 
Join Date: Nov 2006
Location: Minnesota
Posts: 16,220
^^^ Oh wow! I hadn't looked at it from any of the perspectives you're looking at it from, John. So true, and you make a lot of sense. I think greed will be the downfall of it, though; I mean the other side of the benefits, the bad side. A lot of companies now launch AI teams because they're afraid of falling behind competitors, without fully knowing where or for what purpose they'll use AI. And then too, a lot of companies pretend to use AI when they don't, just to increase their chances of obtaining funding. That's the greed part, and that's the part I think will get worse. There's also a fair amount of general confusion about what AI can and can't do.

What’s interesting though is that we now use it a lot, daily, sometimes without even realizing it. Do you think it can or will get out of control? I mean, right now AI is completely under human control, but in the future, it might not be under our control anymore. Seems like eventually every single task is going to be done by AI.
__________________
Phi Sigma
Biological Sciences Honor Society
“Daisies that bring you joy are better than roses that bring you sorrow. If I had my life to live over, I'd pick more Daisies!”