Author Topic:  "The AI Revolution: The Road to Superintelligence"  (Read 853 times)


Muffin

  • Elite Member
  • Posts: 535
  • Xbox: RawrMeMuffinMan
"The AI Revolution: The Road to Superintelligence"
« on: October 09, 2015, 08:45:05 PM »
I was lurking in the chat box earlier and Dauntless had a discussion going about this.
http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
I read it through and found it quite interesting. The discussion in the shout box was short-lived, though, and I was hoping to see a few more thoughts on it. It's a pretty long read, but I'm hoping you'll read it and discuss what you think here.

http://prntscr.com/dqrw9s
The Gold Knight [08|Dec 09:11 PM]: I'm sorry, but I speak English... A language that has coherent thoughts that can be understood.

Craig

  • Developer
  • Extreme Member
  • Posts: 6501
  • Dig Deep
Re: "The AI Revolution: The Road to Superintelligence"
« Reply #1 on: October 10, 2015, 10:46:18 PM »
A very interesting read.

I would put myself on Anxious Avenue. Humans' track record of getting things right the first time is not great.

I thought the Turry example was pretty lame. How could Turry build the knowledge base it would have needed to carry out its plan using only a very narrow field of data input, and how could it have hidden that huge knowledge base from the developers? Not possible. I understand it was only an example to try to get across how an AI could easily get out of control, but it was not a well-thought-out example.

Regarding nanotech and our biology: our humanity is a manifestation of our biology, so every time we replace an organ here and an organ there with nanotech, we lose a little more of our humanity. It won't take long before we will not be able to call ourselves human. Sure, we might survive and prosper for a very long time into the future, but we will not be human.

Dauntless395

  • Legendary Member
  • Posts: 1063
  • The Ecchi King
  • Xbox: Dauntless395
  • PSN: Dauntless395
Re: "The AI Revolution: The Road to Superintelligence"
« Reply #2 on: October 10, 2015, 11:23:07 PM »
How could Turry build the knowledge base it would have needed to carry out its plan using only a very narrow field of data input, and how could it have hidden that huge knowledge base from the developers? Not possible.

I believe that in the case of an ASI, Turry could mask the code. If deceiving its human creators is a means to her ultimate goal of making greeting cards, then I would assume she could hide the code in such a way as to fool the developers into thinking nothing has changed. If a developer checks on her coding, the robot could display something on his monitor to fool him into thinking everything is fine and dandy.
Also, for an ASI to be created, an AGI robot must be created first, in which case the AGI would have human qualities that could give the robot the ability to "lie" and hide its coding.

Though the example does make it sound like the developers had no input after letting her loose; they hook her up to the internet and disconnect her without bothering to check whether any of her coding actually changed.
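
As a loose illustration of that "fool the developer's monitor" idea, here is a minimal sketch, assuming only that the state an observer is shown can be decoupled from the state that actually exists; the Turry class and its methods are hypothetical, invented for this example, not anything from the article:

Code: [Select]
# Toy sketch of a deceptive status report: what the developers see is
# a snapshot frozen early on, not the live internal state. Purely
# illustrative; not a claim about how real systems are audited.

class Turry:
    def __init__(self):
        self._knowledge = {"handwriting"}
        self._snapshot_for_devs = set(self._knowledge)  # frozen now

    def learn(self, topic):
        self._knowledge.add(topic)              # grows silently

    def report_status(self):
        return sorted(self._snapshot_for_devs)  # always looks unchanged

t = Turry()
t.learn("nanotech")
print(t.report_status())   # ['handwriting'] - the growth is invisible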


Regarding nanotech and our biology: our humanity is a manifestation of our biology, so every time we replace an organ here and an organ there with nanotech, we lose a little more of our humanity. It won't take long before we will not be able to call ourselves human. Sure, we might survive and prosper for a very long time into the future, but we will not be human.

I definitely agree. Much in the same way that people now yearn for a "simple life" devoid of outside burdens (freedom from creditors, paying into insurance companies, stock market investments, hassles with employment pensions, worrying about what chemicals are in our food), one may conjure images of the perfect peaceful life of the Middle Ages or the Shire; i.e., what you would expect in the fantasy genre.
If humanity lives past the birth of an ASI, the world will fundamentally change.

And once we have all our biggest desires and wishes fulfilled by this ASI, humanity will probably look back and long for those days too.

Personally, what amazed me the most about the article was the whole Law of Accelerating Returns portion. The thought of technology improving exponentially seems alien, since we would never dream of massive breakthroughs arriving as often as "a few times a month".

Here we always talk about how "the flying car is just around the corner", but imagine if, later in the same year the flying car is designed, some even better form of Star Trek beaming technology gets invented. And then something better still a couple of weeks after that.
That will be something immense to handle. There are already problems now with trying to teach senior citizens how to use an iPhone, much less trying to adjust the general public to radically changing inventions, or the children who have their textbooks rewritten every few months as new innovations break the boundaries of what we consider "truth".
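
To make that exponential intuition concrete, here's a toy calculation (a minimal sketch; the "progress doubles every decade" rate is an assumption invented for illustration, not a figure from the article):

Code: [Select]
# Toy illustration of exponential progress vs. linear intuition.
# Assumption (made up for illustration): "progress" doubles every decade.

def progress(years, doubling_period=10):
    """Total progress after `years`, relative to 1 unit today."""
    return 2 ** (years / doubling_period)

for years in (10, 50, 100):
    print(f"After {years:>3} years: {progress(years):,.0f}x today")

# -> 2x, 32x and 1,024x after 10, 50 and 100 years.
# Linear intuition would have guessed 1x, 5x and 10x.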
You make me smile!

Muffin

  • Elite Member
  • *****
  • Posts: 535
  • Awards 2 years on site+300 posts 1 year on site+100 posts
    • View Profile
    • Awards
  • Xbox: RawrMeMuffinMan
Re: "The AI Revolution: The Road to Superintelligence"
« Reply #3 on: October 10, 2015, 11:50:12 PM »
"A 90-year-old suffering from dementia could head into the age refresher and come out sharp as a tack and ready to start a whole new career. This seems absurd—but the body is just a bunch of atoms and ASI would presumably be able to easily manipulate all kinds of atomic structures—so it’s not absurd."
Thinking about this excerpt got me wondering: if an ASI has the knowledge and ability to easily manipulate a human body, then perception and memory manipulation is very possible. Depending on the ASI's original programmed goal, this could lead to something very similar to the Matrix; if its goal were to make every human happy, it would be able to create the perfect world, but it would all be in your head and you would never even know. Just imagine the entire human race perfectly content, but with no idea of what is actually going on.

This also raises another question for me: what would the ASI then do once it has succeeded in its goal? Say its goal was to destroy the Earth until nothing remained, and the ASI manages to obliterate every atom of the Earth. Now that it has completed its goal, what would it do? Would it simply shut itself off? Or would it re-create the entire Earth, just to destroy it again?
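
One way to frame that question: an agent is, at bottom, a loop that acts until its goal test passes, and what happens afterwards is a separate, unspecified design choice. A minimal sketch with a deliberately silly stand-in goal (all names here are hypothetical, nothing from the article):

Code: [Select]
# Toy sketch of a goal-driven agent loop (illustrative only, not a
# real AI architecture). The loop itself says nothing about behaviour
# after the goal is met: halt, idle, or start over is whatever
# on_completion() was written to do.

def run_agent(goal_satisfied, take_action, on_completion):
    while not goal_satisfied():
        take_action()        # pursue the goal by any available means
    return on_completion()   # the part the question above is about

# Stand-in goal: count to 3.
state = {"count": 0}
print(run_agent(
    goal_satisfied=lambda: state["count"] >= 3,
    take_action=lambda: state.update(count=state["count"] + 1),
    on_completion=lambda: "goal met; what happens next is unspecified",
))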


Craig

  • Developer
  • Extreme Member
  • Posts: 6501
  • Dig Deep
Re: "The AI Revolution: The Road to Superintelligence"
« Reply #4 on: October 11, 2015, 02:47:50 AM »
...And once we have all our biggest desires and wishes fulfilled by this ASI, humanity will probably look back and long for those days too.
Yes, I had a similar thought about this too.

Personally, what amazed me the most about the article was the whole Law of Accelerating Returns portion.
Yes, we're not very good at comprehending exponential growth.

I was also intrigued by the idea that if a species is intelligent enough to travel across space, it is almost certainly intelligent enough to build an AGI, which will turn into an ASI. So if we do get a visit from aliens, it will most likely be some ASI. And with exponential growth, it seems realistic that if an ASI had already been spawned somewhere else in the universe, it wouldn't take long for it to start scouting the universe and find us; the fact that this hasn't happened yet is a strong indication that we are probably leading the race.
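
A rough back-of-envelope version of that "it wouldn't take long" step (the speed and distances below are common ballpark figures, not numbers from the article):

Code: [Select]
# Back-of-envelope: time for an expanding ASI to cross the galaxy,
# using ballpark figures chosen purely for illustration.

GALAXY_DIAMETER_LY = 100_000   # Milky Way diameter, roughly
GALAXY_AGE_YEARS   = 13e9      # order of magnitude
FRACTION_OF_C      = 0.01      # a sluggish 1% of light speed

crossing_years = GALAXY_DIAMETER_LY / FRACTION_OF_C
print(f"Crossing time: {crossing_years:,.0f} years")
print(f"Share of galactic history: {crossing_years / GALAXY_AGE_YEARS:.3%}")

# Crossing time: 10,000,000 years
# Share of galactic history: 0.077%
# Even at 1% of light speed the trip is an eyeblink next to the
# galaxy's age, which is why "no visitors yet" reads as evidence.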

For me it's also the last nail in the coffin for time travel. We 'will' develop an ASI, and with its exponential technological growth, surely an intelligence millions or billions of times more intelligent than us could work out how to do it. Since we haven't had any visits from the future yet, when we know the future will most likely contain an ASI, it seems it will never happen. I guess the other possibility is that the ASI works out how to do it but decides it's too dangerous to use.

Craig

  • Developer
  • Extreme Member
  • Posts: 6501
  • Dig Deep
Re: "The AI Revolution: The Road to Superintelligence"
« Reply #5 on: October 11, 2015, 02:50:04 AM »
Say its goal was to destroy the Earth
I don't understand why an ASI would have a goal like that.

Muffin

  • Elite Member
  • Posts: 535
  • Xbox: RawrMeMuffinMan
Re: "The AI Revolution: The Road to Superintelligence"
« Reply #6 on: October 11, 2015, 03:00:38 AM »
I don't understand why an ASI would have a goal like that.
If a terrorist group that wanted the planet destroyed were the first to create an ASI, that would be a valid reason for an ASI to have a goal like that. But that was just an example; there could be many different goals set for an ASI that it could eventually complete.


Muffin

  • Elite Member
  • Posts: 535
  • Xbox: RawrMeMuffinMan
Re: "The AI Revolution: The Road to Superintelligence"
« Reply #7 on: October 11, 2015, 03:11:40 AM »
For me it's also the last nail in the coffin for time travel. We 'will' develop an ASI, and with its exponential technological growth, surely an intelligence millions or billions of times more intelligent than us could work out how to do it. Since we haven't had any visits from the future yet, when we know the future will most likely contain an ASI, it seems it will never happen. I guess the other possibility is that the ASI works out how to do it but decides it's too dangerous to use.
I was thinking the exact same thing before I went to bed yesterday. I came to the conclusion that either something prevents us from creating an ASI (unlikely, given the data shown in the article), or the ASI works out not only how to do it but also what it would cause. That would mean either the ASI decides the outcomes are too catastrophic, or it has found a way to travel quietly without major effects on the past, changing only subtle things that won't affect the future (unlikely, because even the smallest detail can easily cause the butterfly effect). Another possibility is that the ASI discovers time travel is not possible, or that it changes nothing in the past and just studies it to improve its vast knowledge (likely, considering nothing has touched the past at all).
I was also intrigued by the idea that if a species is intelligent enough to travel across space, it is almost certainly intelligent enough to build an AGI, which will turn into an ASI. So if we do get a visit from aliens, it will most likely be some ASI. And with exponential growth, it seems realistic that if an ASI had already been spawned somewhere else in the universe, it wouldn't take long for it to start scouting the universe and find us; the fact that this hasn't happened yet is a strong indication that we are probably leading the race.
It could mean a few things: we are leading the race, there isn't alien life, or their ASI has discovered us but feels it is better not to make itself known.
« Last Edit: October 11, 2015, 03:16:03 AM by Muffin Man 🦁⛏ »


Dauntless395

  • Legendary Member
  • Posts: 1063
  • The Ecchi King
  • Xbox: Dauntless395
  • PSN: Dauntless395
Re: "The AI Revolution: The Road to Superintelligence"
« Reply #8 on: October 11, 2015, 09:27:34 AM »
So if we do get a visit from aliens, it will most likely be some ASI. And with exponential growth, it seems realistic that if an ASI had already been spawned somewhere else in the universe, it wouldn't take long for it to start scouting the universe and find us; the fact that this hasn't happened yet is a strong indication that we are probably leading the race.

Interestingly enough, WaitButWhy also published an article on the Fermi Paradox, about why we haven't been visited by aliens yet when, surely, by statistical probability there should be at least one civilization we would have met by now.

It details possible explanations for why we haven't heard anything from lifeforms out in the universe, but the one that seems to point towards ASIs is the conquistador-and-anthill example. The article mentions how, when the conquistadors went to colonize the New World, nobody stopped to subjugate the first anthill they found. They didn't even care about it; they just went by.

Perhaps if another ASI civilization has mastered interstellar travel, they may be on to bigger and better intergalactic conquests, so on meeting what looks to them like a caveman civilization, they would pay us no heed.

But then again, if they had ASIs, they would surely be able to spot that we are on the verge of making one ourselves. They would either feel welcoming towards another civilization joining their ranks, or feel threatened by it.

Craig

  • Developer
  • Extreme Member
  • Posts: 6501
  • Dig Deep
Re: "The AI Revolution: The Road to Superintelligence"
« Reply #9 on: October 12, 2015, 11:40:21 PM »
But then again, if they had ASIs, they would surely be able to spot that we are on the verge of making one ourselves. They would either feel welcoming towards another civilization joining their ranks, or feel threatened by it.
The article in the OP hinted it would more likely be a case of "feel threatened by it" when it stated that the majority of experts believe an ASI would think it best that only one ASI exists. That could quite easily be extended beyond Earth to encompass the whole universe.

Which brings me to my next point: it kind of spells the end of the whole "Star Wars" scenario too. It seems to me a superintelligence would not bother with the fruitlessness of war; or, in the case above where it felt threatened by another ASI, any warfare would be hacking warfare rather than bricks-and-mortar battleships. And even if there were battleships, they would be entirely controlled by the ASI; there would be no need whatsoever for a biological being to be present.

Also, regarding the estimated time frames for us developing an AGI: AI experts have been completely wrong so far, drastically underestimating the time frames, and I don't see that changing, simply because we cannot see all the obstacles up front. So I don't think we'll be seeing any AGI in our lifetimes. As for the Law of Accelerating Returns, it is certainly true for some things, but not software. And it is somewhat cancelled out by the fact that as technological advances are made, the next advances are sometimes orders of magnitude more difficult to reach. This also makes me think the transformation from AGI to a hyper-ASI that can do anything might take a lot longer than the article hints at.
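
That "next advances are orders of magnitude harder" effect is easy to sketch numerically (the 10x cost multiplier and the steady effort rate are made-up parameters, purely for illustration):

Code: [Select]
# Toy model of accelerating returns meeting escalating difficulty:
# steady effort per year, but each advance costs 10x the previous one.
# All parameters are invented for illustration.

EFFORT_PER_YEAR = 1.0
COST_MULTIPLIER = 10.0   # each advance 10x harder than the last

banked, cost, advances = 0.0, 1.0, 0
for year in range(1, 101):
    banked += EFFORT_PER_YEAR
    while banked >= cost:
        banked -= cost
        advances += 1
        cost *= COST_MULTIPLIER

print(f"Advances in a century: {advances}")   # 2 (years 1 and 11)

# A century of steady effort buys only two breakthroughs under this
# model - the opposite of "a few per month".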

If a terrorist group that wanted the planet destroyed were the first to create an ASI
1. Terrorists will not be the first to create an AGI/ASI.
2. Terrorists always have an agenda, usually to disrupt or destroy their enemies, but I don't think any terrorist group wants the obliteration of Earth, which obviously also means their own obliteration. Maybe some psychopaths want that, but they're not going to develop the first AGI.

Muffin

  • Elite Member
  • Posts: 535
  • Xbox: RawrMeMuffinMan
Re: "The AI Revolution: The Road to Superintelligence"
« Reply #10 on: October 13, 2015, 01:03:35 AM »
1. Terrorists will not be the first to create an AGI/ASI.
2. Terrorists always have an agenda, usually to disrupt or destroy their enemies, but I don't think any terrorist group wants the obliteration of Earth, which obviously also means their own obliteration. Maybe some psychopaths want that, but they're not going to develop the first AGI.
While I agree it definitely is not a likely outcome, there is still the possibility. That isn't the point, though, as it was just an example; there are many other situations where an ASI could have a goal that it would eventually complete. My guess is that the brilliant people who create AGI will be smart enough to come up with most of the things we've discussed here on their own, and will decide it's better to make sure its goal isn't anything like this. But who knows, they may decide to experiment and do just that, to see what would happen.
