DT: Cryonics or Cremation?

The Off-Topic forum for anything non-LDS related, such as sports or politics. Rated PG through PG-13.
_honorentheos
_Emeritus
Posts: 11104
Joined: Thu Feb 04, 2010 5:17 am

Re: DoubtingThomas: Cryonics or Cremation?

Post by _honorentheos »

Gadianton wrote:
honorentheos wrote:Assuming an omniscient blue-print reading, I'm guessing?

right -- the point. ; )

Maybe it's my own bias, but when I read subbie's post it came across as an argument for a religious afterlife with that distinctly Mormon flavor of a God who operates according to natural laws rather than outright miracles that defy them. The degree of omniscience required for subbie's comment to merit even some small consideration is essentially miraculous. As such, it seems like a typical Mormon attempt to distance their view of God from that of the apostate Christians and play at being rational even when the reality is far from it.

For some powerful future entity to exploit atomic interactions in so far-reaching a science-fiction manner that it could recreate one's identity from the imprints left on a very finite amount of the material and energy in the system, and separate those imprints from the ones left by all of the other living things that have included those atoms over the course of Earth's existence, is basically saying, "God can do anything He wants." That makes it hard to place subbie's comment in the same discussion as one about recreating or reanimating consciousness with access to the organized network of neurons and cells that makes up a particular state of personal identity at a given point in time. I'm not sure if subbie sees that gulf being as wide as I do, but more power to anyone who sincerely believes the universe can be extrapolated from a crumb of fairy cake.
The world is always full of the sound of waves... but who knows the heart of the sea, a hundred feet down? Who knows its depth?
~ Eiji Yoshikawa

Re: DoubtingThomas: Cryonics or Cremation?

Post by _honorentheos »

DoubtingThomas wrote:
Gadianton wrote:
Sam Harris doesn't seem as optimistic:

https://www.youtube.com/watch?v=8nt3edWLgIg


I guess we will have to wait to see what happens. However, A.I. is a risk we should be willing to take.

There's an argument to be made that the risk-reward calculus of pursuing the singularity, where A.I. becomes self-aware and essentially a new species, operates at wild extremes of both risk and reward. The risk is very real that self-aware A.I. will become the new masters of this planet and wipe us out, making the magnitude of risk incredibly high: extinction-event levels, if you will. It doesn't require the A.I. to be malevolent for this to happen, either. It just requires it to act out of self-interest. And as those who raise the alarm point out, the pace of evolution with A.I. will be unfathomable compared to biological evolution. As someone once put it, the moment of the singularity will mark thousands of generations of A.I. evolution, as the A.I. computes and re-forms itself at computational speeds before the humans monitoring the system could even be aware that the threshold had been crossed.

The rewards often present themselves as what such advanced A.I. will be able to accomplish precisely because of this leapfrog into a new form of evolutionary advance. The belief that someone is going to win this arms race, so it has to be us, makes the probability it will occur also very high. But I think people who believe it will be almost certainly and exclusively beneficial also think too highly of themselves, which forms a blind spot in their ability to recognize that a leap in evolution does not follow the path they personally would impose on evolutionary advancement. It's like those people who speak of society just needing a few more generations before their own perspective becomes dominant as culture "evolves" into the societal norms of a higher, enlightened way of doing things. Because of course part of being enlightened is recognizing one's own enlightened state and everyone else's lack thereof.
_subgenius
_Emeritus
Posts: 13326
Joined: Thu Sep 01, 2011 12:50 pm

Re: DoubtingThomas: Cryonics or Cremation?

Post by _subgenius »

honorentheos wrote:The risk is very real that self-aware A.I. will become the new masters of this planet....

There is no evidence that such a risk exists or could ever exist. So "very real" is a false charge here. Heck, it is even difficult to speculate on such a scenario....imagine and/or fantasize, yes - but there is no real intellectual rigor behind it...it's a fallacy of anthropomorphism...to assume A.I. would just be a "better" human is arrogant at best...it's Planet of the Apes but with gears and sprockets.

so, as goes the risk thus goes the reward.
Seek freedom and become captive of your desires...seek discipline and find your liberty
I can tell if a person is judgmental just by looking at them
what is chaos to the fly is normal to the spider - morticia addams
If you're not upsetting idiots, you might be an idiot. - Ted Nugent

Re: DoubtingThomas: Cryonics or Cremation?

Post by _honorentheos »

There is no evidence ... what a silly thing to say.

Assume one thing - a sense that existence is to be valued over nonexistence. Run that assumption through your preferred scenario for sentient A.I. entering the world stage and let's see how you exclude the risk, subbie. Let's see your mind at work on this.

Re: DoubtingThomas: Cryonics or Cremation?

Post by _subgenius »

honorentheos wrote:There is no evidence ... what a silly thing to say.

obviously, depends on the context.

honorentheos wrote:Assume one thing -

something silly people say when they have no evidence..except it is more appropriate to phrase it as "imagine one thing..."

honorentheos wrote: a sense that existence is to be valued over nonexistence.

The existence of cancer is to be valued over the nonexistence of a human? Or is your "sense" more ambiguous, yet self-involved, than already proposed?

honorentheos wrote: Run that assumption through your preferred scenario for sentient A.I. entering the world stage and let's see how you exclude the risk,

I was not excluding anything; I was simply pointing out that evidence did not exist. Noting the difference between imagination and reality in this context is not an exclusion, but rather an inclusion of an accurate perspective.

honorentheos wrote: subbie. Let's see your mind at work on this.

work on what? There is no reason to assume that A.I. will master the planet...in fact, all the current evidence concludes with A.I. being subservient to its human masters...you would have man over God but how can that ever be? By what measure and by what evidence can you reasonably conclude that A.I. would ever be in a position to master this planet?
It seems that you are hinting at the notion that freedom is a condition of intelligence and that dominance is the product of freedom...that somehow A.I. will become aware of its subservience; will "deduce" freedom from that subservience as necessary; and all this will lead to A.I. transcending from slave to master in some sort of season-ending cliff-hanger?
I mean, I understand the imagination here...the need to make A.I. inevitably have "human-like" motives, morality, and meanings...but I just do not see any evidence for converting that imagination into a belief.
_Some Schmo
_Emeritus
Posts: 15602
Joined: Tue Mar 27, 2007 2:59 pm

Re: DT: Cryonics or Cremation?

Post by _Some Schmo »

There are three main problems with the idea of runaway A.I. in my mind:

- It assumes the engineers who design it don't have a concern for safety in mind. It's like worrying that guys are going to race cars without seat belts and roll bars. Nobody flips their car only to follow up with, "Man, I wish we'd thought of some safety measures before we took that out for a drive."

- Even if engineers did manage to build such a consciousness, do they plan to build the required interfaces it would need to wreak havoc? "I suspected it might be a bad idea to build a gun turret into my self-aware robot car, but look how cool it looks!"

- It seems to me the main reason people do things that cause misery for other people is selfishness, or, more fundamentally, emotional responses to external stimuli. Does consciousness require emotional selfishness in order to be considered consciousness? Wouldn't it be okay to leave that out of the program?
God belief is for people who don't want to live life on the universe's terms.
_Themis
_Emeritus
Posts: 13426
Joined: Wed Feb 17, 2010 6:43 pm

Re: DoubtingThomas: Cryonics or Cremation?

Post by _Themis »

Some Schmo wrote:There are three main problems with the idea of runaway A.I. in my mind:

- It assumes the engineers who design it don't have a concern for safety in mind. It's like worrying guys are going to race cars without seat belts and roll bars. Nobody flips their car only to be followed up with, "Man, I wish we'd thought of some safety measures before we took that out for a drive."


I agree that they will have concern for safety measures, but how many software products are error-free? This would be worse if the A.I. has become much more intelligent than your average genius. Would the A.I. be able to find those flaws at speeds astronomically faster than humans do now?

- Even if engineers did manage to build such a consciousness, do they plan to build the required interfaces it would need to wreak havoc? "I suspected it might be a bad idea to build a gun turret into my self-aware robot car, but look how cool it looks!"


Government armed forces around the world are doing this today. We have drones capable of firing on their own if we want them to, and software is never perfect.

- It seems to me the main reason people do things that cause misery for other people is selfishness, or, more fundamentally, emotional responses to external stimuli. Does consciousness require emotional selfishness in order to be considered consciousness? Wouldn't it be okay to leave that out of the program?


I believe the main concerns are not about A.I. selfishness or malevolence, but their goals diverging even in small amounts from ours. How concerned are humans about bugs living in an area we want to build a mall in?
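Themis's point about small divergences can be sketched as a toy optimizer: an agent maximizes a proxy objective that almost matches the true one, and the tiny gap is what decides the outcome. The action names and values below are invented purely for illustration.

```python
# Toy illustration (invented example): the agent's proxy objective is
# almost, but not exactly, the true objective. The small mismatch is
# enough for the argmax to land on the action the true values rank worst.
actions = {                            # action: (true_value, proxy_value)
    "build around the habitat": (10, 9),
    "pave over the habitat": (2, 11),  # proxy slightly overvalues speed
}

def choose(actions):
    """Pick the action with the highest *proxy* value, index 1."""
    return max(actions, key=lambda a: actions[a][1])

print(choose(actions))  # best by the proxy, worst by the true values
```

No malevolence is involved: the agent does exactly what it was told to optimize, and the bugs (or the humans) lose anyway.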
42

Re: DoubtingThomas: Cryonics or Cremation?

Post by _Some Schmo »

Themis wrote:I agree that they will have concern for safety measures, but how many software products are error-free? This would be worse if the A.I. has become much more intelligent than your average genius. Would the A.I. be able to find those flaws at speeds astronomically faster than humans do now?

Error reporting has levels of severity. Programs with severe faults are never released. Most of the bugs you see in programs you use today are low severity - they don't impact the primary functions of the program.

I would consider a potentially dangerous A.I. to have a high severity bug that would prevent it from ever going live.
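Schmo's release-gate idea can be sketched in a few lines: block any release while a high-severity bug remains open. The `Bug` class and the severity labels here are hypothetical, just to make the gate concrete.

```python
# Minimal sketch (assumed names, not a real tracker's API): a release
# gate that ships only when no open bug carries high severity.
from dataclasses import dataclass

@dataclass
class Bug:
    title: str
    severity: str  # "low", "medium", or "high"

def can_release(open_bugs):
    """Gate: ship only if no open bug is high severity."""
    return all(bug.severity != "high" for bug in open_bugs)

bugs = [Bug("tooltip typo", "low"), Bug("ignores shutdown command", "high")]
print(can_release(bugs))      # False: the high-severity bug blocks the release
print(can_release(bugs[:1]))  # True: low-severity bugs alone do not block it
```

Of course, the gate is only as good as the triage: Themis's worry is precisely that a dangerous flaw gets filed as low severity, or never filed at all.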

Government armed forces around the world are doing this today. We have drones capable to firing on their own if we want them to, and software is never perfect.

But drones aren't controlled by A.I. Is it a valid fear that we're going to flip the switch on an A.I. and give it control of making drone decisions?

Again - critical error.

The more I think about this conversation, the more I realize it's not A.I. people fear; it's the incompetence of the engineers who will create it.

I believe the main concerns are not about A.I. selfishness or malevolence, but their goals diverging even in small amounts from ours. How concerned are humans about bugs living in an area we want to build a mall in?

Why on earth would programmers program in a potential disregard for human value (which is to say, why wouldn't they be very careful to continuously error-check for said disregard)? This makes no sense to me.
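The "continuously error-check" idea could look like a runtime guard that vets every proposed action against a hard deny-list before executing it, rather than trusting the planning logic alone. The action names and the `FORBIDDEN` set below are invented for illustration.

```python
# Hypothetical runtime guard (illustrative names only): every proposed
# action is checked against a hard deny-list before it is executed.
FORBIDDEN = {"disable_oversight", "harm_human"}

def execute(action, do_action):
    """Refuse any action on the deny-list; otherwise run it."""
    if action in FORBIDDEN:
        raise PermissionError(f"blocked unsafe action: {action}")
    return do_action(action)

print(execute("log status", lambda a: f"did {a}"))  # did log status
```

Attempting `execute("disable_oversight", ...)` raises instead of running, which is the sort of check-every-time safeguard Schmo is gesturing at; the hard part, as the thread's pessimists would note, is enumerating the deny-list in advance.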

There's another issue with fear about A.I. that I never hear people talk about: computer programs are not the huge, holistic systems the human brain largely is. They are a cobbling together of several disparate functions tied up in a neat package, just like a book is a collection of different thoughts, not one big monolithic thought. You could likely say something similar about the human mind, but the processes within the human mind are far more interdependent than the various high-level functions in a computer program. I think that interdependency leads to a lot of muddled thinking, which can lead to human suffering. I don't see a series of programs having that issue (unless they were exceptionally poorly designed, and I suppose if you're smart enough to pull off A.I., you're also likely smart enough to value safety over brain power - which is likely why I'm not as fearful about the competence of the engineers).

To be honest, I just don't see A.I. happening in the science fiction/Westworld kind of way, where the programs become so complex that at some point they become self-aware. We already have programs that massively out-perform humans on specific tasks, and have for many years. Computers are already vastly better at math than we are. We have programs that can consistently beat the best chess players in the world. Do we think they've become self-aware for having been programmed with these talents?

Re: DT: Cryonics or Cremation?

Post by _subgenius »

Some Schmo wrote:There are three main problems with the idea of runaway A.I. in my mind:

- It assumes the engineers who design it don't have a concern for safety in mind. It's like worrying guys are going to race cars without seat belts and roll bars. Nobody flips their car only to be followed up with, "Man, I wish we'd thought of some safety measures before we took that out for a drive."

is that a Dale Earnhardt Sr. quote?
:eek:

Some Schmo wrote:- Even if engineers did manage to build such a consciousness, do they plan to build the required interfaces it would need to wreak havoc? "I suspected it might be a bad idea to build a gun turret into my self-aware robot car, but look how cool it looks!"

I think you confuse A.I. with a calculator. The notion with A.I. is that it can become something "not designed"....or did your robot car of a brain come without gun-turret ability?

Some Schmo wrote:- It seems to me the main reason people do things that cause misery for other people is selfishness, or more fundamental, emotional responses to external stimuli. Does consciousness require emotional selfishness in order to be considered consciousness? Wouldn't it be ok to leave that out of the program?

Are you asking if something can be conscious without a "self"? :neutral:
_Gadianton
_Emeritus
Posts: 9947
Joined: Sat Jul 07, 2007 5:12 am

Re: DoubtingThomas: Cryonics or Cremation?

Post by _Gadianton »

Some Schmo wrote:It assumes the engineers who design it don't have a concern for safety in mind


- Military A.I. "defense" systems, like nukes, will also be a reality.

- Just as important, if not more so: along the lines of DT's fears of legal technicalities that screw people's lives, A.I. takes it to the next level:

https://www.youtube.com/watch?v=TRzBk_KuIaM

DT has not seriously reflected on this matter enough. His utter fear of the legal system, paired with faith that all is well with A.I., is compartmentalization, like a Mormon scientist who teaches physics but still believes in Kolob.
Lou Midgley 08/20/2020: "...meat wad," and "cockroach" are pithy descriptions of human beings used by gemli? They were not fashioned by Professor Peterson.

LM 11/23/2018: one can explain away the soul of human beings...as...a Meat Unit, to use Professor Peterson's clever derogatory description of gemli's ideology.