"Although other members of the institute found the gloomy logician baffling and unapproachable, Einstein told people that he went to his office 'just to have the privilege of walking home with Kurt Gödel.'"

I know this post is aimed at laymen but this sentence is not correct.

"A bit more precisely, the Incompleteness Theorem shows that human beings can never formulate a correct and complete description of the set of natural numbers, {0, 1, 2, 3, . . .}."

The second order Peano Axioms are categorical and thus, up to isomorphism, the only model of this axiom system is the Natural numbers {0, 1, 2, 3, ...}. This is a complete system. What can't happen is a recursively enumerable axiomatic description of the Natural numbers that is complete.

Another way to get a complete description of the Natural numbers is to take the collection of all true statements of the Natural numbers and make that our axiomatic system. It's just not a useful axiomatic system but it is a complete description of the Natural numbers.

Thanks for pointing this out. Two questions: (1) Are there any resources you'd recommend reading to learn more about this? (2) If you are familiar, could you restate the Incompleteness theorem in your own words? Just looking to learn more here :)

Lots of reading is available to technical people who are willing. People elsewhere in this thread have mentioned Nagel and Newman, and also Smullyan. Here are two more:

1) Smith's Introduction to Gödel's Theorems (http://www.logicmatters.net/igt/) is a great book, with all the mathematics but willing to go into the philosophy.

2) Franzén's Gödel's Theorem: An Incomplete Guide to Its Use and Abuse (http://www.ams.org/notices/200703/rev-raatikainen.pdf) is enlightening in a different way.

I'll also mention a favorite author around here: http://www.scottaaronson.com/blog/?p=710

You're on a dangerous road. I asked these questions once, got a couple of books. Decided I needed to enrol on a couple of courses... 7 years later I had completed a math degree. :)

there's a tiny book called Gödel's Proof that is both short and easy to read. I picked it up in my quest to read everything Hofstadter (he wrote the foreword) and thought it made it pretty straightforward. any axiomatic system that's flexible/powerful enough to make some sort of statement about itself will succumb to this method for constructing a paradox - and so all such systems are either inconsistent (paradox is possible) or incomplete (lacks sufficient flexibility to make a statement about itself).

so by contraposition, self-reference is paradox? That's certainly true for the liar's paradox. But that shouldn't be mistaken for the actual statement: that some unprovable propositions follow from incomplete axioms, and that the axioms can't be completed.

At the beginning of the 20th century is when abstraction in mathematics really took off. Things started getting abstract quickly. There was also Russell's paradox and the work of Cantor that dealt with infinities. It became clear that math needed to be put on a rigorous, firm foundation. Set theories were developed and logic formalized.

One goal was to come up with a collection of axioms for the Natural numbers (capital letter to denote the Natural numbers we all know and love). What was wanted was a collection of axioms that is recursively enumerable. Think computable. The goal was to find a mechanistic process to check theorems and to prove new theorems - in some sense, to alleviate the field from human error. In modern language we'd say: to find a way to have a computer check/discover theorems in number theory.

There are two axiom systems for the Natural numbers. Both are Peano axioms: one system is first order and recursively enumerable; the other is second order and not recursively enumerable. There are infinitely many models of the first order Peano axioms. We call each such model the natural numbers (lower case) to signify it is a model of the first order axioms.

The Incompleteness Theorem: There are statements that are true in the Natural numbers (upper case!) that are not true in all models of the natural numbers. Furthermore, this will always be the case no matter what system of recursively enumerable axioms you have for describing the Natural numbers.

Consequence: The Natural numbers cannot be fully described by a nice set of axioms. Whatever recursively enumerable system of axioms you have to describe the Natural numbers will be insufficient to prove all true statements of the Natural numbers. Hence, such systems of axioms are incomplete.

One can always find a complete system of axioms by taking as the axioms the collection of all true statements. This isn't helpful because there would be no effective way to determine whether or not a statement is an axiom just by looking at it or comparing it to a finite set of axiom schemas. Such a collection of axioms is wholly impractical and not useful. But it is wrong to say that a complete axiom system cannot be found.

Some people falsely claim that the Incompleteness theorem says that there are statements of the Natural numbers that are neither provable nor disprovable. What the theorem says is that for a given recursively enumerable set of axioms that the Natural numbers are a model of, there will be statements of the Natural numbers that are true but not provable in that system. All true statements of the Natural numbers are provable in the second order system of axioms, but things get dicey from a logic point of view when working with the second order Peano axioms.
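Since the comment above keeps leaning on "recursively enumerable - think computable", here is a toy sketch of what that means. Everything in it is invented for illustration (these are not the real Peano axioms): the point is only that for a recursively enumerable axiom set, a program can mechanically check membership and mechanically list the members.

```python
from itertools import count

# Toy illustration (not real PA): the set of true addition facts
# "a+b=c" over the naturals. Membership is decidable (a mechanical
# check settles any candidate), and the set is recursively
# enumerable (a program can list every member eventually).

def is_axiom(stmt: str) -> bool:
    """Mechanically check whether a statement belongs to the toy set."""
    left, _, c = stmt.partition("=")
    a, _, b = left.partition("+")
    return int(a) + int(b) == int(c)

def enumerate_axioms():
    """List every member eventually, dovetailing over all (a, b)."""
    for n in count(0):
        for a in range(n + 1):
            b = n - a
            yield f"{a}+{b}={a + b}"

gen = enumerate_axioms()
print([next(gen) for _ in range(4)])  # ['0+0=0', '0+1=1', '1+0=1', '0+2=2']
print(is_axiom("2+3=5"))              # True
```

The contrast drawn in the comment is with axiom sets like "all true statements of arithmetic", for which no such `is_axiom` procedure can exist.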

I'm out of my depth but I found this interesting. Is it accurate?

"But Z2 is usually studied with first-order semantics, and in that context it is an effective theory of arithmetic subject to the incompleteness theorems. In particular, Z2 includes every axiom of PA, and it does include the second-order induction axiom, and it is still incomplete.

"Therefore, the well-known categoricity proof must not rely solely on the second-order induction axiom. It also relies on a change to an entirely different semantics, apart from the choice of axioms. It is only in the context of these special 'full' semantics that PA with the second-order induction axiom becomes categorical."

From: https://math.stackexchange.com/questions/617124/peano-arithm...

Yes. Mummert is far more qualified than I am on these matters. Here is a line from his reply that is important:

"So, even though Z2 with full second-order semantics is categorical, for any sound effective deductive system there are still true formulas of Z2 that are neither provable nor disprovable in that system."

The key is "sound and effective deductive system." Think computable, or a mechanistic process for deduction. The second order Peano axioms with second order semantics are not an effective deductive system.

In all proofs you have to start with (or end up with, depending on the direction your proofs go) axioms. With the second order Peano axioms under second order semantics, you can end up in a situation where you don't know if a given statement is an axiom! Making that determination can be quite hard.

Fascinating! How does the situation arise where you don't know if something is an axiom? Aren't axioms either members of a small agreed-on set, or based on an axiom schema?

Well, let's say I take as my axioms all true statements about the Natural numbers. Now take the Goldbach conjecture. Is it an axiom? No one knows, because no one knows whether it is true.

When you say "small agreed upon set" you are in essence talking about a recursively enumerable set of axioms. A collection of axioms that is "small" enough so that one could easily determine if a statement is an axiom.
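The Goldbach example can be made concrete with a short sketch (the function names here are made up for illustration): a program can verify the conjecture instance by instance, but no finite amount of such checking settles the universal statement, and hence whether it is an "axiom" of the all-true-statements system.

```python
def is_prime(n: int) -> bool:
    """Trial-division primality check."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_witness(n: int):
    """For even n >= 4, return primes (p, q) with p + q = n, else None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# Mechanical checking settles each even number individually...
assert all(goldbach_witness(n) for n in range(4, 2000, 2))
print(goldbach_witness(28))  # (5, 23)
# ...but no finite run tells us whether "every even n >= 4 is a sum
# of two primes" is true, so membership in the proposed axiom set
# cannot be decided by any such procedure.
```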

Thanks, that makes sense.

If you have time for another question: I'm still confused about how this applies to second-order arithmetic, since the Peano axioms are well-known and easily listed on a single piece of paper. What difficulty would there be in determining whether a statement is a second-order Peano axiom?

It seems particularly strange since there are apparently fewer axioms than in first-order Peano arithmetic (by replacing an axiom schema with a single induction axiom).

I'm surprised no one has mentioned anything about Rudy Rucker, whose blog this is from. Some credit him with authoring the first cyberpunk novel, which was titled 'Software.' It's the first in his 'Ware Tetralogy' (Software, Wetware, Realware, Freeware), which I'd certainly recommend checking out.

“Spaceland” was also a hoot. It’s his take on the classic “Flatland”.

Yes, and I highly recommend Turing & Burroughs. :)

“The illusion of the passage of time arises from the confusing of the given with the real. Passage of time arises because we think of occupying different realities. In fact, we occupy only different givens. There is only one reality.”

Is he saying that our brains exist over all time simultaneously but they "give" us a sequence of instants from which we perceive the illusion of passage of time ?

he's not making a statement about brains, more the nature of reality. it's a perspective that's gaining traction lately with the dataist/reality-is-information line of reasoning.

ah he's distinguishing between ontology and epistemology ? He's saying there's only one reality but our instantaneous measurements of it (the given - what we perceive with our brains) form the illusion of time

Yes, but the increase of entropy isn't an illusion.

the single vast space-time would already be locked in a state of maximal entropy, nothing moves, the illusion of time is just the universe recalling a memory.

No, but it's about our perception, not entropy.

We can perceive this moment in time, right now. We can also perceive moments in time which are not right now. To do this, use your imagination and explore your memories or fantasies. Therefore, there are many moments in time. However, it seems like only one moment in time is experienced simultaneously; this is the illusion.

This doesn't make much sense to me. Isn't memory (1) just a record of some other moments via our senses, and not the real moments, and (2) often false?

Memory is not just of senses; for example, we remember our thoughts. Our understanding of the current moment is also (in part) a record of our senses, and is also often false.

That's a beautiful story. I've been fascinated by Gödel since I read Gödel, Escher, Bach by Douglas Hofstadter back in the 1980s. I'm not a mathematician, so I can't plumb the depths of his work, but I gain a little more insight by reading articles like this one.

I remember reading the proof of his incompleteness theorem (or the portion of it that I had time to read between classes). It's surprisingly approachable. AFAIR, he came up with a way to encode proofs as symbol sequences. Proofs that actually exist have finite representations. He then proved the theorem with that somehow. (Granted, I read it over a decade ago...)

You're thinking of Gödel numbering, which is an elegant technique used extensively throughout that proof. He uses it to map logical symbols, statements, and entire proofs to natural numbers, and then proceeds to prove things about those statements and proofs by referring to them via their Gödel numbers.
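A minimal sketch of the prime-power flavor of Gödel numbering, assuming an invented symbol table (Gödel's actual coding is more elaborate, but the spirit is the same): each symbol's code becomes the exponent of a successive prime, so one natural number encodes a whole formula, and unique factorization lets you decode it again.

```python
def primes():
    """Generate 2, 3, 5, ... by trial division."""
    n = 2
    while True:
        if all(n % p for p in range(2, int(n ** 0.5) + 1)):
            yield n
        n += 1

# Hypothetical symbol codes for illustration only.
SYMBOLS = {"0": 1, "s": 2, "=": 3, "+": 4, "(": 5, ")": 6}

def godel_number(formula: str) -> int:
    """Encode symbol i of the formula as the exponent of the i-th prime."""
    g = 1
    for p, sym in zip(primes(), formula):
        g *= p ** SYMBOLS[sym]
    return g

def decode(g: int) -> str:
    """Recover the formula by reading off prime exponents."""
    inv = {v: k for k, v in SYMBOLS.items()}
    out = []
    for p in primes():
        if g == 1:
            break
        e = 0
        while g % p == 0:
            g //= p
            e += 1
        out.append(inv[e])
    return "".join(out)

g = godel_number("s0=0+s0")    # one number encodes the whole formula
print(decode(g) == "s0=0+s0")  # True
```

Because statements about numbers can now talk about (codes of) statements and proofs, the system gains the self-reference the proof needs.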

So does Gödel's incompleteness theorem apply to scientific knowledge? I mean, as I understand it, he proved that any useful logical system contains statements that can neither be proved true nor false. But does that apply to quantum mechanics, for example? Are there self-referential statements in QM?

It applies to any formal system powerful enough to encode the properties of the natural numbers.

Quantum mechanics, as a formal system, can encode the natural numbers, and thus can form a substrate for self-referential Gödel encoding. The same principles would apply to a quantum system, and to any other system capable of encoding the natural numbers.

Well, scientific hypotheses are tested through observations or, ideally, experiments. Maybe QM could encode arbitrary statements about natural numbers. But I can't imagine how one could test them through observations or experiments.

The question wasn't about reality, but about quantum physics as a formal system, which is very much a formal mathematical system that can exist separate from hypothesis testing and whatnot. The point is that as soon as we have any finite definition of a model for reality, incompleteness kicks in.

As I understand it, rather than "finite definition of a model for reality" it's really more a matter of having a single model. No one model can cover everything but multiple models can, right?

What would be the difference between multiple models and one model containing all of them? You can keep adding on to the model, or mixing and matching models, and you will never get one that is immune to Gödel. Either you have a model of physics with contradictions, or you have one in which there exist questions that cannot be proven true or false - in other words, undecidable questions.

Right. I can't imagine a single model for reality. That is, in the sense that you start the model running and it reproduces all events everywhere from the Big Bang to whatever happens eventually.

But models that generate results that can be tested, through observation and experiment? Sure. Maybe even arbitrarily integrated models.

However, there's uncertainty throughout. So models can't be deterministic. Certainly at the "ends", at the quantum level, and at the level of consciousness.

A model doesn't have to be deterministic, use finite quantities, or even be possible to execute in a simulation. Math often does not concern itself with such things. Undecidability is concerned with the set of rules themselves. Even if it is impossible to actually simulate, is there a finite set of physical laws which drive all things? If yes, and if you can construct the basic requirements for counting and making logical statements and such within that system, then you are subject to incompleteness.

That said, it's interesting to think about a universe with truly infinite rules. Each physical law could have minor exceptions caused by smaller more detailed phenomenon. Each time you would discover some new principle, it would reveal more yet unknown questions, a fractal of infinite knowledge to be refined and science to do. But I think most scientists hope for a finite set of underlying rules for reality.

I think it applies especially to scientific knowledge.

There is a connection there between Gödel and Popper.

Except inside a formal system, you can never prove that something is true - only that one explanation is better than another, in an endless pursuit of better explanations.

I'm not sure there is such thing as not-scientific knowledge, by the way.

> I'm not sure there is such thing as not-scientific knowledge, by the way.

I'm fully on-board with the overwhelming, world-changing effectiveness that the scientific method provides for distilling factual, empirical knowledge and truth.

Lately, however, I've been contemplating forms of knowledge and understanding that are more difficult to assess and validate -- things that might be typically described as wisdom or keen insight. Our scientific instruments can't provide observations that let us robustly verify such knowledge, but to me it seems very evident that it exists.

Some examples: What is important to building and maintaining strong relationships? How can one prepare for and handle personal hardship? If one finds themselves in a fortunate position with excess resources, what are good ways to use those resources to help others?

Science can help us with these questions, but humans have useful knowledge to bring to bear in answering those questions that can't yet be described within the framework of science.

Differentiating by quality or truthiness is ridiculously hard in such domains, but I don't think that is a valid reason for dismissing such things altogether.

Yes, that's what I was getting at. Insight, wisdom, etc.

Even so, one can also apply the scientific method to those sorts of knowledge. One can look at performance. Quality of relationships. Success at dealing with hardship. That's part of psychology. But it hasn't received enough attention, I think.

Right, Popper. I guess that's why they were adjacent on my university reading list ;) But I don't think of that as an expression of limitations in hypotheses as formal systems. Because you don't reject (or fail to reject) a hypothesis through logic and reasoning; you do that through observation and experiment. So it's rather a fundamental limitation in the scientific method.

Anyway, I get that they're similar. But I don't see them as the same, but rather complementary.

I also get what you say about knowledge. Scientific knowledge is what you get by studying external reality. But there is also knowledge that you get through introspection.

From what I hear from a colleague whose thesis is on alternative explanations for quantum field observations, the main assertion is that the claimed difficulties in observing quantum particles may simply not hold in light of newer theories with more explanatory power, such as pilot wave theory, which sheds more light on phenomena such as quantum entanglement than just saying "it's more probable, therefore it happens."

For further reading on Gödel, Rebecca Goldstein's Incompleteness is a great book. It is a biography of Gödel but also places his work in the context of the philosophical debates of the time.

He would have been interested in Ed Lorenz's work (of butterfly effect fame), I suppose. According to Tim Palmer:

"Despite being known for his pioneering work on chaotic unpredictability, the key discovery at the core of meteorologist Ed Lorenz's work is the link between space-time calculus and state-space fractal geometry. Indeed, properties of Lorenz's fractal invariant set relate space-time calculus to deep areas of mathematics such as Gödel's Incompleteness Theorem."

"Consider a point p in the three-dimensional Lorenz state space. Is there an algorithm for determining whether p belongs to IL? There are certainly large parts of state space which don’t contain any part of IL. However, suppose p was a point which ‘looked’ as if it might belong to IL. How would one establish whether this really is the case or not? If we could initialise the Lorenz equations at some point which was known to lie on IL, we could then run (1) forward to see if the trajectory passes through p. If the integration is terminated after any finite time and the trajectory still hasn’t passed through p, we can’t really deduce anything. We can’t be sure that if the integration was continued, it would pass through p at some future stage.

The Lorenz attractor provides a geometric illustration of the Gödel/Turing incompleteness theorems: not all problems in mathematics are solvable by algorithm. This linkage has been made rigorous by the following theorem [7]: so-called Halting Sets must have integral Hausdorff dimension. IL has fractional Hausdorff dimension - this is why it is called a fractal. Hence we can say that IL is formally non-computational. To be a bit more concrete, consider one of the classic undecidable problems of computing theory: the Post Correspondence Problem [46]. Dube [16] has shown that this problem is equivalent to asking whether a given line intersects the fractal invariant set of an iterated function system [4]. In general, non-computational problems can all be posed in this fractal geometric way."
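The Post Correspondence Problem mentioned in the quote is easy to state and, in general, undecidable. A brute-force sketch (my own toy search, not Dube's construction) shows the semidecidable half: a bounded search can confirm a solvable instance by exhibiting a matching index sequence, but a failed bounded search proves nothing.

```python
from itertools import product

def pcp_search(dominoes, max_len=8):
    """Try every index sequence up to max_len; return the first whose
    top and bottom concatenations match, else None (inconclusive)."""
    n = len(dominoes)
    for length in range(1, max_len + 1):
        for seq in product(range(n), repeat=length):
            top = "".join(dominoes[i][0] for i in seq)
            bottom = "".join(dominoes[i][1] for i in seq)
            if top == bottom:
                return seq
    return None  # maybe unsolvable, maybe the solution is just longer

# A classic solvable instance: tiles given as (top, bottom)
dominoes = [("a", "baa"), ("ab", "aa"), ("bba", "bb")]
print(pcp_search(dominoes))  # (2, 1, 2, 0): "bba"+"ab"+"bba"+"a" on top
                             # equals "bb"+"aa"+"bb"+"baa" on the bottom
```

No `max_len` is ever large enough to turn this search into a decision procedure; that is exactly the Gödel/Turing point the quoted passage ties to fractal geometry.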

I wonder what is meant by this question. In a very naive sense, the type of mathematics Gödel worked on (foundational?) wasn't really computation oriented or even formulaic, so the ability to perform lots of calculations very quickly probably wasn't of much use to him. Indeed, computers were available to him in later years.

A more interesting interpretation of the question (to me) is: what if he had access to computers used as mind amplification devices - for example, by using Mathematica or Maple to explore and visualize theorems? I'd imagine the benefit of this activity for someone like Gödel would be "not much". Computation and simulation have inherent limitations: precision and rounding errors in scientific computation, for one, and the fact that they can only model what we can imagine, for another. People such as Gödel, von Neumann, and their ilk knew this; they begat the era of computation we have today, with all the limitations that involved. Von Neumann in particular was famous for, when presented with a problem, telling you how to solve it.

What's new today that they may not have foreseen is the vast level of internetworking and human communication that arose from ubiquitous presence of computers and networks.

Something that strikes me about the present article is that Gödel, keen to see Rucker before he knew him, was not so keen to converse with him after their first encounter. One might think Gödel was not too impressed with Rucker - perhaps he found him boring and dull, for example.

"What's new today that they may not have foreseen is the vast level of internetworking and human communication that arose from ubiquitous presence of computers and networks."

In an alternate universe, Godel published his proofs in a paper on arXiv in 2002. However it was largely overlooked or dismissed as the work of a crank, so he went back to his day job at MSR.

A casual reference to the paper in a comment on Math Overflow five years later led to wider discussion and eventually to Scott Aaronson publicising it on his blog. Within a year the two theorems were accepted by the global mathematics community.

Since then, numerous blogs, subreddits, and Facebook posts have challenged the legitimacy of the proofs or claimed prior credit. Speculation persists on some parts of the internet that GCHQ or the NSA knew of the theorems as early as the 1950s.

Perhaps the idea of computers would have been more useful than the computers themselves. Several of the ideas in his proof are simpler if you can say "computable" rather than "recursive" or whatever. So perhaps if he had been thinking in these terms he could have done more work sooner.

The way things turned out, the work from this era defined what computers can and can't do, not the other way around - which would likely not satisfy many mathematicians.

He was right there at the Institute for Advanced Study (IAS) when they built the seminal IAS machine whose plans were distributed widely to help germinate computing around the world.

It reminds me of this article in the New Yorker about Gödel's and Einstein's talks. http://www.newyorker.com/magazine/2005/02/28/time-bandits-2

"Although other members of the institute found the gloomy logician baffling and unapproachable, Einstein told people that he went to his office 'just to have the privilege of walking home with Kurt Gödel.'"

I know this post is aimed at laymen but this sentence is not correct.

"A bit more precisely, the Incompleteness Theorem shows that human beings can never formulate a correct and complete description of the set of natural numbers, {0, 1, 2, 3, . . .}."

The second order Peano Axioms are categorical and thus, up to isomorphism, the only model for this axiom system are the Natural numbers {0, 1, 2, 3, ...}. This is a complete system. We can't happen is a recursively enumerable axiomatic description of the Natural numbers that is complete.

Another way to get a complete description of the Natural numbers is to take the collection of all true statements of the Natural numbers and make that our axiomatic system. It's just not a useful axiomatic system but it is a complete description of the Natural numbers.

Thanks for pointing this out. Two questions: (1) Are there any resources you'd recommend reading to learn more about this? (2) If you are familiar, could you restate the Incompleteness theorem in your own words? Just looking to learn more here :)

Lots of reading are available to technical people who are willing. People elsewhere in this thread have mentioned Nagel and Newman, and also Smullyan. Here are two more:

1) Smith's

Introduction to Godel's Theoremshttp://www.logicmatters.net/igt/ is a great book, with all the mathematics but willing to go into the philosophy.2) Franzen's

Gödel's Theorem: An Incomplete Guide to Its Use and Abusehttp://www.ams.org/notices/200703/rev-raatikainen.pdf is enlightening in a different way.I'll also mention a favorite author around here http://www.scottaaronson.com/blog/?p=710 .

You're on a dangerous road. I asked these questions once, got a couple of books. Decided I needed to enrol on a couple of courses... 7 years later I had completed a math degree. :)

really i will love love to hear your story.i have feeling that there is more to it ! wow , i am impressed.please what is your email?

there's a tiny book called Godel's Proof that is both short and easy to read. I picked it up in my quest to read everything Hofstadter (he wrote the foreword) and thought it made it pretty straight forward. any axiomatic system that's flexible/powerful enough to make some sort of statement about itself will succumb to this method for constructing a paradox - and so all such systems are either inconsistent (paradox is possible) or incomplete (lacks sufficient flexibility to make a statement about itself).

so by contraction, self-reference is paradox? That's certainly true for the liar's paradox. But that shouldn't be mistaken for the actual statement, that some unprovable propositions follow from incomplete axioms and that the axioms can't be completed.

At the beginning of the 20th century is when abstraction in mathematics really took off. Things started getting abstract quickly. There was also Russell's paradox and the work of Cantor that dealt with infinities. It became clear that math needed to be put on a rigorous, firm foundation. Set theories were developed and logic formalized.

One goal was to come up with a collection of axioms for the Natural numbers (capital letter to denote the Natural numbers we all know and love). What was wanted was a collection of axioms that are recursively enumerable. Think computable. The goal was to find a mechanistic process to check theorems and to prove new theorems. In some sense too alleviate the field from human error. In modern language we'd say to find a way to have a computer check/discover theorems in number theory.

There are two axiom systems for the Natural numbers. Both are Peano axioms and one system is first order and recursively enumerable. The other system is second order and not recursively enumerable. There are infinitely many models of the first order Peano axioms. For each such model we call them natural numbers (lower case) to signify they are a model of the first order axioms.

The Incompleteness Theorem: There are statements that are true in the Natural numbers (upper case!) that are not true in all models of the natural numbers. Furthermore, this will always be the case no matter what system of recursively enumerable axioms you have for describing the Natural numbers.

Consequence: The Natural numbers can not fully be described by a nice set of axioms. Whatever recursively enumerable system of axioms you have to describe the Natural numbers will be insufficient to prove all true statements of the Natural numbers. Hence, such systems of axioms are incomplete.

One can always find a complete system of axioms by taking as the collection of axioms the collection of all true statements. This isn't helpful because there would no effective way to determining whether or not a statement is an axiom just by looking at it or comparing it to a finite set of axiom schema. Such a collection of axioms is wholly impractical and not useful. But it is wrong to say that a complete axiom system can not be found.

Some people falsely claim that the Incompleteness theorem says that there are statements of the Natural numbers that are neither provable or disprovable. What the theorem says is that for a given recursively enumerable set of axioms that the Natural numbers are a model of there will be statements of the Natural numbers that are true but not provable in that system. All true statements of the Natural numbers are provable in the second order system of axioms but the things get dicey from a logic point of view when working with the second order Peano axioms.

I'm out of my depth but I found this interesting. Is it accurate?

"But Z2 is usually studied with first-order semantics, and in that context it is an effective theory of arithmetic subject to the incompleteness theorems. In particular, Z2 includes every axiom of PA, and it does include the second-order induction axiom, and it is still incomplete.

"Therefore, the well-known categoricity proof must not rely solely on the second-order induction axiom. It also relies on a change to an entirely different semantics, apart from the choice of axioms. It is only in the context of these special 'full' semantics that PA with the second-order induction axiom becomes categorical."

From: https://math.stackexchange.com/questions/617124/peano-arithm...

Yes. Mummert is far more qualified than I am on these matters. Here is a line from his reply that is important:

"So, even though Z2 with full second-order semantics is categorical, for any sound effective deductive system there are still true formulas of Z2 that are neither provable nor disprovable in that system."

The key is sound and effective deductive system. Think computable or mechanistic process for deduction. The second order Peano axioms with second order semantics are not and effective deduction system.

In all proofs you have to start with (or end up with depending on the direction your proofs go) axioms. In the second order Peano axioms with second order semantics you can end up in a situation where you don't know if a given statement is an axiom! Making that determination can be quite hard.

Fascinating! How does the situation arise where you don't know if something is an axiom? Aren't axioms either members of a small agreed-on set, or based on an axiom schema?

Well let's say I take as my axioms all true statements about the Natural numbers. Let's take the Goldbach conjecture. Is it an axiom? No one knows because no one knows if it is true or not.

When you say "small agreed upon set" you are in essence talking about a recursively enumerable set of axioms. A collection of axioms that is "small" enough so that one could easily determine if a statement is an axiom.

Thanks, that makes sense.

If you have time for another question, I'm still confused about how it applies to second-order arithmetic, though, since the Peano axioms are well-known and easily listed on a single piece of paper. What difficulty is there be in determining whether a statement is a second-order Peano axiom?

It seems particularly strange since there are apparently fewer axioms than in first-order Peano arithmetic (by replacing an axiom schema with a single induction axiom).

[1] https://math.stackexchange.com/questions/106635/why-does-the...

It's a bit of a stretch to say that True Arithmetic can be "formulated" by human beings.

I'm surprised no one has mentioned anything about Rudy Rucker, whose blog this is from. Some credit him with authoring the first cyber punk novel, which was titled 'Software.' It's the first in his 'Ware Tetralogy' (Software, Wetware, Realware, Freeware) which I'd certainly recommend checking out.

“Spaceland” was also a hoot. It’s his take on the classic “Flatland”.

Yes, and I highly recommend

Turing & Burroughs:)“The illusion of the passage of time arises from the confusing of the given with the real. Passage of time arises because we think of occupying different realities. In fact, we occupy only different givens. There is only one reality.”

Is he saying that our brains exist over all time simultaneously but they "give" us a sequence of instants from which we perceive the illusion of passage of time ?

he's not making a statement about brains, more the nature of reality. it's a perspective that's gaining traction lately with the dataist/reality-is-information line of reasoning.

ah he's distinguishing between ontology and epistemology ? He's saying there's only one reality but our instantaneous measurements of it (the given - what we perceive with our brains) form the illusion of time

Yes,

but the increase of entropy isn't an illusion

The single vast space-time would already be locked in a state of maximal entropy; nothing moves, and the illusion of time is just the universe recalling a memory.

^ This, exactly.

No, but it's about our perception not entropy.

Let go of your brain.

We can perceive this moment in time, right now. We can also perceive moments in time which are not right now. To do this, use your imagination and explore your memories or fantasies. Therefore, there are many moments in time. However, it seems like only one moment in time is experienced simultaneously; this is the illusion.

This doesn't make much sense to me. Isn't the memory (1) just a record of some other moments with our senses and not the real moments, and (2) often false?

Memory is not just of senses, for example we remember our thoughts. Our understanding of the current moment is also (in part) a record of our senses, and is also often false.

That's a beautiful story. I've been fascinated by Gödel since I read

Gödel, Escher, Bach by Douglas Hofstadter back in the 1980s. I'm not a mathematician, so I can't plumb the depths of his work, but I gain a little more insight by reading articles like this one.

I remember reading the proof of his incompleteness theorem (or the portion of it that I had time to between classes). It's surprisingly approachable. AFAIR, he came up with a way to encode proofs as symbol sequences. Proofs that actually exist have finite representations. He then proved the theorem with that somehow. (Granted, I read it over a decade ago...)

You're thinking of Gödel numbering which is an elegant technique used extensively throughout that proof. He uses it to map logical symbols, statements and entire proofs to natural numbers and then proceeds to prove things about those statements and proofs by referring to them via their Gödel numbers.
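For the curious, the prime-exponent trick behind Gödel numbering is easy to sketch in a few lines. This is a minimal toy version (the symbol codes below are made up for illustration; Gödel's actual assignment was different):

```python
def primes():
    """Yield 2, 3, 5, 7, ... by trial division (fine for short sequences)."""
    found = []
    n = 2
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def godel_number(symbols, code):
    """Encode symbols s1..sk as 2**code[s1] * 3**code[s2] * 5**code[s3] * ...
    Unique prime factorization makes the encoding invertible."""
    g = 1
    ps = primes()
    for s in symbols:
        g *= next(ps) ** code[s]
    return g

def godel_decode(g, code):
    """Recover the symbol sequence by factoring out each prime in turn."""
    inv = {v: k for k, v in code.items()}
    out = []
    ps = primes()
    while g > 1:
        p, e = next(ps), 0
        while g % p == 0:
            g, e = g // p, e + 1
        out.append(inv[e])
    return out

# Made-up codes for a few symbols:
code = {"0": 1, "s": 3, "=": 5}
g = godel_number(["s", "0", "=", "s", "0"], code)   # "s0 = s0", i.e. 1 = 1
assert godel_decode(g, code) == ["s", "0", "=", "s", "0"]
```

Since formulas become numbers, statements *about* formulas become statements about numbers, which is what lets arithmetic talk about itself.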

So does Gödel's incompleteness theorem apply to scientific knowledge? I mean, as I understand it, he proved that any useful logical system contains statements that can neither be proved true nor false. But does that apply to quantum mechanics, for example? Are there self-referential statements in QM?

It applies to any formal system powerful enough to encode the properties of the natural numbers.

Quantum mechanics, as a formal system, can encode the natural numbers, and thus can form a substrate for self-referential Gödel encoding. The same principles would apply to a quantum system, and to any other system capable of encoding the natural numbers.

Well, scientific hypotheses are tested through observations or, ideally, experiments. Maybe QM could encode arbitrary statements about natural numbers. But I can't imagine how one could test them through observations or experiments.

The question wasn't about reality, but about quantum physics as a formal system, which is very much a formal mathematical system that can exist separate from hypothesis testing and whatnot. The point is that as soon as we have any finite definition of a model for reality, incompleteness kicks in.

As I understand it, rather than "finite definition of a model for reality" it's really more a matter of having a single model. No one model can cover everything but multiple models can, right?

What would be the difference between multiple models and one model containing all of them? You can keep adding to the model, or mixing and matching models, and you will never get one that is immune to Gödel. Either you have a model of physics with contradictions, or you have one in which there exist questions that cannot be proven true or false; in other words, undecidable questions.

Right. I can't imagine a single model for reality. That is, in the sense that you start the model running and it reproduces all events everywhere from the Big Bang to whatever happens eventually.

But models that generate results that can be tested, through observation and experiment? Sure. Maybe even arbitrarily integrated models.

However, there's uncertainty throughout. So models can't be deterministic. Certainly at the "ends", at the quantum level, and at the level of consciousness.

A model doesn't have to be deterministic, use finite quantities, or even be possible to execute in a simulation. Math often does not concern itself with such things. Undecidability is concerned with the set of rules themselves. Even if it is impossible to actually simulate, is there a finite set of physical laws which drive all things? If yes, and if you can construct the basic requirements for counting and making logical statements and such within that system, then you are subject to incompleteness.

That said, it's interesting to think about a universe with truly infinite rules. Each physical law could have minor exceptions caused by smaller more detailed phenomenon. Each time you would discover some new principle, it would reveal more yet unknown questions, a fractal of infinite knowledge to be refined and science to do. But I think most scientists hope for a finite set of underlying rules for reality.

I think it applies specially to scientific knowledge.

There is a connexion there between Gödel and Popper.

Except that inside a formal system, you can never prove that something is true, only that one explanation is better than another, in an endless pursuit of better explanations.

I'm not sure there is such thing as not-scientific knowledge, by the way.

> I'm not sure there is such thing as not-scientific knowledge, by the way.

I'm fully on-board with the overwhelming, world-changing effectiveness that the scientific method provides for distilling factual, empirical knowledge and truth.

Lately, however, I've been contemplating forms of knowledge and understanding that are more difficult to assess and validate -- things that might be typically described as wisdom or keen insight. Our scientific instruments can't provide observations that let us robustly verify such knowledge, but to me it seems very evident that it exists.

Some examples: What is important to building and maintaining strong relationships? How can one prepare for and handle personal hardship? If one finds themselves in a fortunate position with excess resources, what are good ways to use those resources to help others?

Science can help us with these questions, but humans have useful knowledge to bring to bear in answering those questions that can't be yet described within the framework of science.

Differentiating by quality or truthiness is ridiculously hard in such domains, but I don't think that is a valid reason for dismissing such things altogether.

Yes, that's what I was getting at. Insight, wisdom, etc.

Even so, one can also apply the scientific method to those sorts of knowledge. One can look at performance. Quality of relationships. Success at dealing with hardship. That's part of psychology. But it hasn't received enough attention, I think.

Right, Popper. I guess that's why they were adjacent on my university reading list ;) But I don't think of that as an expression of limitations in hypotheses as formal systems. Because you don't reject an hypothesis (or not) through logic and reasoning. You do that through observation and experiment. So it's rather a fundamental limitation in the scientific method.

Anyway, I get that they're similar. But I don't see them as the same, but rather complementary.

I also get what you say about knowledge. Scientific knowledge is what you get by studying external reality. But there is also knowledge that you get through introspection.

From what I hear from a colleague whose thesis is on alternative explanations for the quantum field observations, the main assertion is that the difficulties in observing quantum particles may simply not hold in light of newer theories with more explanatory power, such as the pilot wave equation, which sheds more light on phenomena such as quantum entanglement than just saying "it's more probable, therefore it happens."

For further reading on Gödel, Rebecca Goldstein's

Incompleteness is a great book. It is a biography of Gödel but also places his work in the context of the philosophical debates of the time.

Gödel's incompleteness theorems are explored in many of Raymond Smullyan's books in a way that an inquiring high school student can understand.

I wonder what he would have come up with had he had access to computers

He would have been interested in Ed Lorenz' work (from the butterfly effect fame) I suppose. According to Tim Palmer:

"Despite being known for his pioneering work on chaotic unpredictability, the key discovery at the core of meteorologist Ed Lorenz's work is the link between space-time calculus and state-space fractal geometry. Indeed, properties of Lorenz's fractal invariant set relate space-time calculus to deep areas of mathematics such as Gödel's Incompleteness Theorem."

"Consider a point p in the three-dimensional Lorenz state space. Is there an algorithm for determining whether p belongs to IL? There are certainly large parts of state space which don’t contain any part of IL. However, suppose p was a point which ‘looked’ as if it might belong to IL. How would one establish whether this really is the case or not? If we could initialise the Lorenz equations at some point which was known to lie on IL, we could then run (1) forward to see if the trajectory passes through p. If the integration is terminated after any finite time and the trajectory still hasn’t passed through p, we can’t really deduce anything. We can’t be sure that if the integration was continued, it would pass through p at some future stage. The Lorenz attractor provides a geometric illustration of the Gödel/Turing incompleteness theorems: not all problems in mathematics are solvable by algorithm. This linkage has been made rigorous by the following theorem [7]: so-called Halting Sets must have integral Hausdorff dimension. IL has fractional Hausdorff dimension - this is why it is called a fractal. Hence we can say that IL is formally non-computational. To be a bit more concrete, consider one of the classic undecidable problems of computing theory: the Post Correspondence Problem [46]. Dube [16] has shown that this problem is equivalent to asking whether a given line intersects the fractal invariant set of an iterated function system [4]. In general, non-computational problems can all be posed in this fractal geometric way."

Source: Lorenz, Gödel and Penrose: New Perspectives on Determinism and Causality in Fundamental Physics https://arxiv.org/pdf/1309.2396.pdf
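The Post Correspondence Problem mentioned in the quote is easy to state concretely. A brute-force search sketch in Python (the tile set below is a standard textbook instance, and the depth bound is an arbitrary choice of mine): a "yes" answer can always be confirmed by such a finite search, but no finite bound can ever confirm a "no", which is exactly the semi-decidable flavor the quote is pointing at.

```python
from itertools import product

def pcp_search(tiles, max_len):
    """Brute-force the Post Correspondence Problem.

    tiles: list of (top, bottom) string pairs.  Tries every index
    sequence up to length max_len, looking for one whose concatenated
    tops equal its concatenated bottoms.  PCP is undecidable in general:
    a solution can be confirmed by finite search, but the absence of one
    never can, since no depth bound works for all instances.
    """
    for length in range(1, max_len + 1):
        for seq in product(range(len(tiles)), repeat=length):
            top = "".join(tiles[i][0] for i in seq)
            bottom = "".join(tiles[i][1] for i in seq)
            if top == bottom:
                return list(seq)
    return None  # nothing found within the bound -- NOT a proof of "no"

# A classic solvable instance:
tiles = [("a", "baa"), ("ab", "aa"), ("bba", "bb")]
print(pcp_search(tiles, max_len=4))  # [2, 1, 2, 0]: both rows spell "bbaabbbaa"
```

Dube's result cited above recasts exactly this kind of question as asking whether a line intersects a fractal invariant set.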

I wonder what is meant by this question. In a very naive sense of the question, the type of mathematics Gödel worked on (foundational?) wasn't really computation oriented or even formulaic, so the ability to perform lots of calculations very quickly probably wasn't of much use to him. Indeed, computers were available to him in later years.

A more interesting interpretation of the question (to me) is: what if he had access to computers used as mind-amplification devices, for example by using Mathematica or Maple to explore and visualize theorems and such? I'd imagine the benefit of this activity for someone like Gödel would be "not much". Computation and simulation have inherent limitations: precision and rounding errors in scientific computation, for one, and the fact that they can only model what we can imagine, for another. People such as Gödel, von Neumann, and their ilk knew this; they begat the era of computation we have today, and all the limitations that involved. Von Neumann in particular was famous for, when presented with your problem, telling you how to solve it.

What's new today that they may not have foreseen is the vast level of internetworking and human communication that arose from ubiquitous presence of computers and networks.

Something that strikes me about the present article is the fact that Gödel, keen to see Rucker before he knew him, was not so keen to converse with him after their first encounter. One might think Gödel was not too impressed with Rucker, maybe found him boring and dull for example.

"What's new today that they may not have foreseen is the vast level of internetworking and human communication that arose from ubiquitous presence of computers and networks."

In an alternate universe, Godel published his proofs in a paper on arXiv in 2002. However it was largely overlooked or dismissed as the work of a crank, so he went back to his day job at MSR.

A casual reference to the paper in a comment on Math Overflow five years later led to wider discussion and eventually to Scott Aaronson publicising it on his blog. Within a year the two theorems were accepted by the global mathematics community.

Since then, numerous blogs, subreddits, and Facebook posts have challenged the legitimacy of the proofs or claimed prior credit. Speculation persists on some parts of the internet that GCHQ or the NSA knew of the theorems as early as the 1950s.

Perhaps the idea of computers would have been more useful than the computers themselves. Several of the ideas in his proof are simpler if you can say "computable" rather than "recursive" or whatever. So perhaps if he had been thinking in these terms he could have done more work sooner.

The way things turned out, the work from this era defined what computers can and can't do, not the other way around, which would likely not satisfy many mathematicians.

He was right there at the Institute for Advanced Study (IAS) when they built the seminal IAS machine whose plans were distributed widely to help germinate computing around the world.

Aghhh who the ... picked the acid background of this page? cool

Looks like the Rock Paper Scissor cellular automaton: https://www.youtube.com/watch?v=lt9ihcg-bZc

He was quite insane. Dopamine is a hell of a drug.