The title is very misleading (I know that it is the title on Nature's own site). The pigs' brains were not revived; when stimulated, neurons were observed behaving as live neurons do, but the overall coordination of the brain was not reinstated. That is a long way from revival.
From the article:
> In this study, the researchers deliberately prevented the pig brains from regaining consciousness, by using chemicals to block neurons from firing.
> And last year, researchers at the University of California, San Diego, reported that ‘mini brains’ grown in a dish had spontaneously produced human-like brain waves for the first time.
Maybe the way is not that long!
That's disturbing. Rather than producing "human-like brain waves", we should stick with growing mini-brains with worm-like or insect-like brain waves for a very, very long time, until we master simpler neural systems. We know so little about the brain and consciousness that I think we should err on the side of caution. Imagine if, 100 years from now, we learned that these "mini brains" had some semblance of human consciousness that we weren't aware of. It would be horrifying.
But I also don't know how we can advance in neuroscience without taking some risks. So much knowledge and advancement requires breaking things apart and making mistakes. Maybe the risks are worth it, especially for something as important as knowledge of the brain and consciousness. In many ways, is there anything more important than that?
We don't have a definition of consciousness that allows us to test whether something is conscious or not, or at least I've never heard of one.
That is the problem the grandparent is getting at. We don't have a widely accepted definition, and therefore we should tread lightly rather than assume that we're nowhere near crossing that boundary.
We have a very poor understanding of consciousness and of when death occurs, and yet every day we harvest organs from living donors before allowing them to die. Why is so much caution demanded here, when we already consider something still very much unknown to be "solved enough"?
If having a poor understanding of a problem constitutes "solving it", then we could be reckless in any area and reap the consequences.
Why would caution not be demanded from a moral point of view? If morality places any value on the suffering of conscious beings, then the approach of "I don't know what will happen. Let's do this!" when conscious beings are the subject of an experiment is insane.
We also don't have a definition of God. We should tread lightly, as anything can make her angry.
Sorry for the sarcasm (I'm definitely not trying to be impolite) but it was honestly my best attempt to make my point.
That's right, and the previous poster recognizes that:
> I also don't know how we can advance in neuroscience without taking some risks
I replied because pointing out that we don't have a definition of something is unproductive in both directions: "we don't know what consciousness is, so let's drop precautions" is just as bad as "we don't know what consciousness is, so let's stop in place". It's quite understandable to me that experimentation in this area is disturbing to someone who doesn't know what model of consciousness was used to ensure that nothing immoral is being done. The answers provided in the article don't really address that.
I also don't agree that consciousness is a complete mystery. We can infer criteria based on various measures of similarity, although the one criterion given in the article: "brain-wide activity that would indicate the organs could be conscious" doesn't offer any explanation as to where the criterion comes from. I take this as a failure to explain and justify that the precautions taken to eliminate the risk were enough.
That's quite a reductionist approach, and I don't think it applies to this case. We have an understanding of consciousness by the very fact that we can experience it, i.e. there's a proof of existence already. We don't have the same understanding for God. If we somehow obtained evidence for a god and then subsequently found out we were causing that god intense suffering, then I think there would be a similar discussion.
> we have an understanding of consciousness by the very fact that we can experience it
Would you say that you have an understanding of capitalism just by existing in a capitalist society?
Would even an economist or a stockbroker have an "understanding of capitalism"? Or do they just know how to make predictions? If they predict incorrectly, would you say that their understanding is less than they had thought?
Would you say that you have an understanding of the English language just by writing it?
Would even a linguist or a writer have an "understanding of English"? Or do they just know how to string words together based on a set of rules? If they write a bad paper/novel, would you say that their understanding is less than they thought?
If an understanding of capitalism/language is "less than they thought" is it really "understanding" necessarily?
Perhaps a dead horse has been beaten, but I think my point is clear: complex activities/systems that we take part in every day do not imply knowledge of the fundamental elements of those activities/systems.
Our brains are highly skilled at pretending we have understanding.
"Perhaps consciousness arises when the brain's simulation of the world becomes so complex that it must include a model of itself" ( Richard Dawkins - The Selfish Gene )
Is consciousness different from self-awareness? Because we're already using an experimental protocol for testing self-awareness in animals.
You can't prove or disprove that I'm conscious in the same way that you can't prove or disprove that I experience the color red in the same way that you do.
I agree that precaution is good, but this depends on what the researcher deems 'horrifying'.
For example, if I learned that these "mini brains" had some semblance of human consciousness that we weren't aware of, my reaction would be 'that is interesting' rather than horrified or disturbed.
Not if you were the human consciousness.
But he isn't.
So basically we should leave the research to be done in China?
If you get important Western institutions to decry this research as unethical, it will just happen elsewhere. This research could be very beneficial: it could extend the time window within which someone can be revived, without becoming a vegetable, after their heart stops pumping blood. It makes sense to allow the research to continue where we at least have some control over it, especially considering that we are a long way off from creating conscious brains in jars (or other suitable containers), which is approximately where most people seem to draw the ethics line.
This... is not an excellent argument in itself. It can be used to justify anything.
So? We're not talking about selling administrative record-keeping equipment to goose-stepping despots. We're talking about medical research that in the medium term seems likely to prevent people from dying. We're nowhere near growing self-aware beings in a lab, but people are acting like that's what we're doing. We're so very far from that that it makes no sense to hamper research. To put it in perspective, this is like restricting the sale of capacitors because someone built a small rail gun and you're worried they're gonna shoot down the ISS next week.
Now that is a proper argument and I will let other more informed people argue in this thread.
> No. Although a person in a vegetative state is not considered to be conscious, the neurons in their brain are still firing and are capable of regulating sleep–wake cycles and heart and lung function, among other processes. The pig brains in the latest study carried out metabolic functions — such as those cells use to produce energy and remove waste — but their neurons didn’t fire unless researchers stimulated them individually.
In this case, while it does touch the question of supporting cells in vitro (BTW: how long do other cells live after an organism stops breathing? Can one revive them in the same way?), I don't see how it even remotely touches the topic of consciousness (or gives rise to any ethical dilemmas).
Title is definitely about luring people in to get views.
I believe now is a very good time to reread the book I Will Fear No Evil by Robert A. Heinlein.
https://en.wikipedia.org/wiki/I_Will_Fear_No_Evil
SPOILER:
Last paragraph of the plot summary. I found this paragraph particularly nutty (not necessarily in a bad way):
> Joan marries her lawyer, Jake Salomon, and moves her household and friends onto a boat. Jake has a massive rupture of a large blood vessel in his brain and dies, but his personality is saved and joins Smith and Eunice in Joan's head. She (Joan, Eunice and Jake) immigrates to the Moon to find a better future for her child. Once there, her body starts to reject her (Smith's) transplanted brain. She dies during childbirth.
I thought it one of the two worst books he ever wrote. (The other candidate being The Number of the Beast.)
Just posting this to discourage anyone from starting there and being turned off Heinlein for life. He's interesting enough that I tried even the stinkers.
Or perhaps look out for Choice Cuts by Pierre Boileau and Thomas Narcejac.
http://www.complete-review.com/reviews/trcrime/boileau3.htm
I was about to suggest Battle Angel Alita and Ghost in the Shell (the mangas), but yes, Heinlein has precedence. Iain Banks's Culture novels contain quite a bit of material on cross-body consciousness as well and what that means for individuals.
Heinlein stuff is generally not to be recommended. "I will fear no evil" is especially bad.
A bunch of rich old dudes and hot women.
Wow, I needed to stop reading that about halfway through. My mind goes to weird places when I think about consciousness in the abstract like that.
There's a form of meditation where you try and just be aware of awareness itself, and it can do similarly weird things.
It reminds me of https://www.earthexplodes.com/comics/200/ (NSFL?). It is probably what this experiment looks like from the pigs' perspective.
Similar to Black Mirror: Black Museum. And a few others in that season.
Wow. Greg Egan's Distress (chapter 1) in reality.
> All right. He's dead. Go ahead and talk to him.
"He" being a murder victim.
A lot of people are mentioning brain-in-vat type fiction, so I'll just name-drop the earliest example I know of (and my favorite) - "William and Mary" by Roald Dahl in 1959. I'd be curious if anyone can name an earlier example.
The most interesting question/answer is the last: "What does this mean for the possibility of cryonics?" "No one has tested whether the system would work on brains that have been frozen."
So that should be one of the next steps.
Until we have a vitrification process that doesn't do significant tissue damage, that will be difficult. It is being worked on though, as the ability to freeze organs would revolutionize transplants.
I thought that was demonstrated on animals last year, and the team had started looking for human volunteers?
https://news.ycombinator.com/item?id=16577627
Hadn't heard of that, but it appears they used a different sort of process, which will preserve some state, but only to a degree; it's not expected to preserve the organ such that it would be viable when thawed.
> It proved effective at preserving an entire brain to the nanometer level, including the connectome—the web of synapses that connect neurons.
> A connectome map could be the basis for re-creating a particular person’s consciousness, believes Ken Hayworth, a neuroscientist who is president of the Brain Preservation Foundation—the organization that, on March 13, recognized McIntyre and Fahy’s work with the prize for preserving the pig brain.
> There’s no expectation here that the preserved tissue can be actually brought back to life, as is the hope with Alcor-style cryonics. Instead, the idea is to retrieve information that’s present in the brain’s anatomical layout and molecular details.
While some may believe that the connectome alone is sufficient to define memory and consciousness, there are reasons to believe it's more complicated than that, so I have my doubts.
Most of the answers start with "we don't know".
That's how it is in cutting edge research. It is much better to say "we don't know" than to speculate, which is what a natural reaction of many other researchers would be.
For some time I've been thinking that bio-engineered primates would replace agricultural workers and other low-skill labor jobs.
Bio-mechanical brains might prove to be useful automation engines. Would you utilize a self-driving car that's powered by a bio-mechanical brain?
It seems quicker and easier to bring a primate brain to human-level intellect than to bring a machine there.
And before long, someone asks: why just automation engines? If you can get a bio-mechanical brain to drive a car, you can get it to understand math, C++, law and what not. All at a much lower cost than training a full human being... And before long someone will think: hey, how do we know this hasn't already happened? ...