joe_the_user 6 years ago

This seems like really exotic stuff.

It's one of those talks that seems to discuss things I know but then immediately jumps to theories and definitions I have never heard of. "Sequential algorithms were axiomatized in 2000" - this axiomatization was apparently performed by the author of the talk himself, my Googling says. Every link about the topics discussed in this talk seems to lead to other texts by this same author. That might not be a good sign, but maybe it's just a very hard-to-grasp realm.

Anyway, broadly, the Church-Turing thesis is not a theorem and cannot be proven true or false (so said my professors and Wikipedia). Effectively, it is a definition of computation that has so far been accepted because nothing fundamentally different from and more powerful than a Turing machine has been exhibited (there is a theorem that Turing machines, partial recursive functions and the lambda calculus are equivalent - and then there's the jump to "these are really general and I think that about covers it", but still, not a theorem, not provable).

My hunch is the author is mixing a "higher-order logic" version of Church-Turing into their argument. Which is basically "cheating" - switching to an effectively different question without making things clear. However, I simply don't know. This is basically opaque to my mere MA-level maths.

  • tom_mellior 6 years ago

    > the Church-Turing thesis is not a theorem and cannot be proven true or false

    This is not quite correct: It cannot be proven true. But it could be proven false with a counterexample. The speaker would simply have to point to a concrete algorithm that can be performed by a person with pencil and paper but not by a Turing machine (or vice versa).

    I haven't had the time to watch the video, but from the comments here it appears that the speaker did not provide such an example.

    • andrewla 6 years ago

      He does in fact provide at least one counterexample (around 18:55 in the video).

      One is the real-number GCD algorithm; basically the same as Euclid's standard GCD algorithm, but with real numbers. He seems to be arguing that since Turing machines cannot represent real numbers, they cannot compute this algorithm. On the other hand, it can be easily computed with pen and paper, and he gives an example of a ruler-and-compass algorithm for evaluating it.

      To me this seems a little wishy-washy; a Turing machine can approximate the problem to an arbitrary degree of precision, just as a pen-and-paper approach is only capable of executing to a finite degree of approximation. Still, an "idealized" ruler and compass does provide a framework for describing an algorithm which is not directly Turing computable.
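      The arbitrary-precision point is easy to make concrete. Here is a minimal sketch of my own (not from the talk; `isqrt` and `sqrt2Bits` are invented names): an approximator for sqrt(2) to any requested number of binary digits, using only integer arithmetic, so it runs on anything Turing-equivalent.

```haskell
-- Integer square root by Newton's method: the largest x with x*x <= n.
isqrt :: Integer -> Integer
isqrt n = go n
  where
    go x
      | x * x <= n = x
      | otherwise  = go ((x + n `div` x) `div` 2)

-- floor (2^k * sqrt 2): sqrt 2 to k binary digits after the point.
sqrt2Bits :: Int -> Integer
sqrt2Bits k = isqrt (2 * 4 ^ k)
```

      No single call produces "all of" sqrt(2), but neither can a pen-and-paper construction be read off to infinite precision.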

      • roywiggins 6 years ago

        Sure- if you have access to exact real numbers, you have something more powerful than a Turing machine. But there's no evidence that that's physically possible. The Church-Turing thesis is about actual computation that could happen in the world, not theoretical models, of which there are many that can perform hypercomputation.

      • SilasX 6 years ago

        Yeah, I agree, but from the opposite side: the human isn't really doing the real-number version either, because you can't represent a full real number. You're just doing an arbitrary approximation of the real number, which -- of course! -- Turing machines can do too.

      • blablabla123 6 years ago

        I mean, when talking about real numbers this way, we mean numbers that can only be represented as compositions of periodic digits, or worse, transcendental numbers which lack even periodicity. However, unless you do numerics, you probably choose a different representation, for instance as a series. This is precise, and there is a choice of computer algebra systems available for that.

        In fact, algebra has very deep connections to pen-and-paper geometry.

        Apart from that... due to the increased use of statistics, the postulates of CS might shift. One example being the CAP theorem.

        • palunon 6 years ago

          > we mean numbers that can only be represented as compositions of periodic digits, or worse, transcendental numbers which lack even periodicity

          Every number that has periodicity in its decimals is rational... √2 doesn't have periodic decimals.

          Transcendental numbers are non-algebraic numbers, i.e. they are not the root of any non-zero polynomial with rational coefficients. √2 is algebraic.

      • cultus 6 years ago

        Yeah, that's a nonsensical "proof."

        > Still, an "idealized" ruler and compass does provide a framework for describing an algorithm which is not directly Turing computable.

        You could describe such an algorithm, but you could not do it in this universe. This is called hypercomputation.

        "If non-computable inputs are permitted then non computable outputs are attainable"

        https://en.wikipedia.org/wiki/Hypercomputation

    • dooglius 6 years ago

      If theoretical physics can find a complete model of all particles and spacetime, and we can show that a Turing Machine can emulate it, then that would suffice.

      • tom_mellior 6 years ago

        If you can write down the Turing machine for anything, then a person with pencil and paper can simulate its execution. That direction doesn't appear to be the way to go. Despite the generality of what I wrote above, the only feasible counterexample would be one that people can do on paper but a Turing machine somehow can't.

        • dooglius 6 years ago

          >If you can write down the Turing machine for anything, then a person with pencil and paper can simulate its execution

          What you're describing is the reverse direction, simulating a TM with pencil and paper, and this is doable so long as you are allowed two stacks of paper to work with, and an unlimited supply of pencil/paper. The question of simulating TMs in the real world really doesn't have anything to do with the Church-Turing thesis, though.

          >Despite the generality of what I wrote above, the only feasible counterexample would be one that people can do on paper but a Turing machine somehow can't.

          If some dimensionless measurable physical constant were a non-computable number, then the Church-Turing thesis would be false. In fact, if we think that certain quantities are in some sense random or arbitrary, rather than something that could be written as an equation, we should expect the Church-Turing thesis to be false with probability 1.

          • tom_mellior 6 years ago

            > What you're describing is the reverse direction, simulating a TM with pencil and paper

            That's what I'm describing, but what is it the reverse of?

            You wrote: "If theoretical physics can find a complete model of all particles and spacetime, and we can show that a Turing Machine can emulate it", to which I added "then a person with pencil and paper can simulate its execution", so putting these together a person with pencil and paper can simulate your complete physics model. How does that suffice as a counterexample to the Church-Turing thesis?

          • ikeboy 6 years ago

            Only if it's measurable to infinite precision, and doesn't break down at e.g. Planck levels

            • dooglius 6 years ago

              Right, I don't think physics is advanced enough yet to say whether anything can be measured with infinite precision. In the case of distance, our current understanding of quantum physics says this can't be done (or rather, the notion of distance is not meaningful beyond a certain precision), but I don't think there is any known limitation on generating quantum states of the form a|0>+b|1> where a and b are non-computable, in which case measuring these states over time could give a (probabilistically) infinite measurement.

              • ikeboy 6 years ago

                But can you make infinitely many states with the same exact non-computable numbers?

                Also, once we're talking about the real world, entropy is finite and there are only so many computations you can do before you run out.

    • joe_the_user 6 years ago

      > This is not quite correct

      No, my original statement is absolutely correct.

      Church-Turing is an approach, a labeling, a thesis, not a theorem. Not being a theorem, it is not subject to proof.

      One could exhibit a plausible argument that Church-Turing is wrong but that argument would not be a proof. It would not involve a sequence of mathematical argumentation that is standard for those artifacts which we call proofs. Just no.

      Pointing to a concrete algorithm would not be a "proof". Maybe it would be very persuasive and the world would be persuaded to abandon Church-Turing. Still it would not be a proof.

      Edit: I will admit my claim here involves a "mathematicians' thesis" that proofs are mathematical objects rather than just extra-strong arguments. So be it. I believe the whole reason Church-Turing is labeled a "thesis" rather than a theorem is that its framers were making this distinction. And the distinction appears in the foundations of mathematics. Philosophers still argue for and against the continuum hypothesis even though it's well known not to be provable or disprovable - the argument is whether assuming its truth or falsehood produces good or bad mathematics. Etc.

    • otabdeveloper1 6 years ago

      > The speaker would simply have to point to a concrete algorithm that can be performed by a person with pencil and paper but not by a Turing machine (or vice versa).

      No, this is a circular definition. We define an "algorithm" as something that can be performed by a Turing machine. So the Church-Turing thesis is a trivial tautology. (Though of course a useless one.)

      • justinpombrio 6 years ago

        > No, this is a circular definition. We define an "algorithm" as something that can be performed by a Turing machine. So the Church-Turing thesis is a trivial tautology. (Though of course a useless one.)

        No, that's a terrible definition, and it isn't what computer scientists typically use, though it might be what one says if they're not careful. If someone says that "algorithm" means "what a Turing machine can compute", it's because they believe the Church-Turing thesis, which says that they're the same. If they're the same, then you can talk about them interchangeably, and "Turing machine computations" have a cleaner definition than "pen and paper algorithms", so it's convenient to talk about "Turing machine computations" as if that's the ground truth.

        But if the Church-Turing thesis is false, then they're not the same, and you should be very careful not to mix them up. And we need a word for "the sorts of things you can compute by hand by a set of well-defined steps", and that's what "algorithm" has always meant.

        EDIT: Note that I'm making a prediction. If a person told you that "algorithm" means "what a Turing machine can compute", ask them what "algorithm" means if the Church-Turing thesis is false. My prediction is that they'll use a different definition, based on what you can effectively compute by hand, like the "Effective method" Wikipedia article that my sibling posted.

      • roywiggins 6 years ago

        Not quite:

        https://en.wikipedia.org/wiki/Effective_method

        "- It consists of a finite number of exact, finite instructions.

        - When it is applied to a problem from its class:

          - It always finishes (terminates) after a finite number of steps.
        
          - It always produces a correct answer.
        
        - In principle, it can be done by a human without any aids except writing materials.

        - Its instructions need only to be followed rigorously to succeed. In other words, it requires no ingenuity to succeed."

        "Several independent efforts to give a formal characterization of effective calculability led to a variety of proposed definitions (general recursion, Turing machines, λ-calculus) that later were shown to be equivalent."

      • SilasX 6 years ago

        Yeah, I would refine that definition to "point to a concrete problem that can provably be solved by [some] humans, and provably not by a TM".

        Concretely, that problem might look something like:

        "Given N non-communicating humans, each of whom see the same 1000x1000 b/w grid. Emit the same answer as 99% of the humans to the question of whether the grid contains a Q."

        That would be a problem where, if you could prove that humans can do it, but TMs can't, you've refuted the CTT.

        Do I have that right?

      • thfuran 6 years ago

        That's a peculiar definition of algorithm.

    • lmm 6 years ago

      > The speaker would simply have to point to a concrete algorithm that can be performed by a person with pencil and paper but not by a Turing machine (or vice versa).

      Humans do all sorts of things that Turing machines don't (e.g. a Turing machine can't hold a pencil), so the question there is what constitutes an "algorithm" which comes right back to the definition of computation. Even an example that was obviously "computation" in the intuitive sense wouldn't make the Church-Turing definition "false", it would just mean that it defines something different from what we thought it defines.

  • joe_the_user 6 years ago

    OK, yeah, reading more[1], I think the author is indeed considering things like computation on real numbers where, for example, the value of the square root of two can only be approximated by a Turing machine but where a ruler-and-compass construction hypothetically can construct it exactly.

    So if you toss out the idea of computation involving digital computers, you can exhibit something not (exactly) Turing computable. If you have a chaotically behaving dynamic system and call it a computer, then you have something where computation becomes something in the eye of the beholder.

    I'm not sure what questions this answers but there you are.

    [1] https://www.researchgate.net/publication/221512843_What_Is_a...

    • fnrslvr 6 years ago

      The burden is on the specifier of the compass and straight-edge algorithm to explain a reasonable mechanism for carrying out the steps exactly, to infinite precision. Without this, the procedure can only be expected to give a reliable approximation of sqrt(2) to some finite precision, which a Turing machine can match. A similar observation applies to chaotic systems.

      In general, a chaotic system can only be said to "solve" a problem if it can be configured to reliably arrive at (an encoding of) the solution for a given instance, despite any chaotic behaviour exhibited by the system. It wouldn't be reasonable to, for example, claim that a Turing machine cannot solve the problem of predicting the behaviour of a chaotic system with some particular initial conditions, if you couldn't reasonably configure the chaotic system itself to do that either.

      • CJefferson 6 years ago

        By the same argument, doesn't that mean there can be no reliable Turing machine either, as quantum effects and random perturbations mean that eventually your machine will calculate some logical expression wrong (and that's before we talk about running out of tape)?

        • AstralStorm 6 years ago

          Not really - the machine can be built to exclude such probabilistic faults to any given redundancy level. Probabilistic algorithms still work on Turing machines.

          What a standard Turing machine cannot do is entangle and superpose states. (It can only simulate it at high and variable cost.)
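          For instance (a sketch of my own, not AstralStorm's): a classical program can simulate a single qubit by tracking its two amplitudes directly. This is exactly the "simulate at a cost" route - an n-qubit state needs 2^n amplitudes, which is where the high and variable cost comes from.

```haskell
-- One qubit a|0> + b|1>; real amplitudes only, for brevity.
type Qubit = (Double, Double)

-- Hadamard gate: the standard superposition-creating operation.
hadamard :: Qubit -> Qubit
hadamard (a, b) = ((a + b) / sqrt 2, (a - b) / sqrt 2)

-- Born-rule measurement probabilities for |0> and |1>.
probs :: Qubit -> (Double, Double)
probs (a, b) = (a * a, b * b)
```

          Applying `hadamard` to |0> gives measurement probabilities of one half each, the textbook fair quantum coin, computed entirely classically.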

          Remember that a Turing machine is an abstract thing. Building an ideal one is not possible. You can try though.

    • daveFNbuck 6 years ago

      A Turing machine can compute the square root of two exactly. It can't represent it exactly as a fixed point number, but that's not the only way to compute something. There's a pretty standard definition of computable numbers [1] and it includes the square root of two.

      [1] https://en.wikipedia.org/wiki/Computable_number

      • joe_the_user 6 years ago

        Sure, note I'm not trying to say the position being argued is right, I am simply trying to describe it as well as I can understand it.

        One might say that a "classical analogue" computer that "outputs" line segments could output the square root of two, but a digital computer could merely "represent" it.

        That could be pure sophistry, especially since such classical analogue computers never have truly infinite precision. But it seems like that's the sort of thing being claimed here.

        • wongarsu 6 years ago

          I think even the ability of a computer (even a non-digital one) to output the square root of two isn't easily asserted. Quantum theory doesn't play nice with infinite precision.

          - You can't output a voltage of sqrt(2) because you output a finite amount of electrons in each timeframe, so your resolution is limited.

          - We don't know if you can output a signal of length of sqrt(2) because we aren't sure yet whether time is infinitely divisible [1]

          - You might be able to position gears or other physical objects with infinite precision, but reading the value with a higher precision than one Planck length seems impossible [2]

          1: https://en.wikipedia.org/wiki/Chronon

          2: https://en.wikipedia.org/wiki/Planck_length

        • daveFNbuck 6 years ago

          I'm having a hard time understanding the difference between outputting something and representing it. Can you explain the difference, or is that just your best understanding of the argument here?

          • rocqua 6 years ago

            The point is that we can represent sqrt(2) symbolically (as I just did there) but it cannot be represented as a binary (or decimal) expansion.

            Computers can also work symbolically rather than based on some direct binary representation. I think equality and especially comparison become more difficult to determine, but it remains possible.
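            A small sketch of what "working symbolically" can look like (my illustration, not rocqua's; the names are invented): exact arithmetic in Q(√2), where a number is a pair of rationals a + b·√2, and equality and ordering are decided exactly, with no approximation of √2 anywhere.

```haskell
-- a + b*sqrt 2 with exact rational coefficients.
data QSqrt2 = QSqrt2 Rational Rational deriving (Eq, Show)

instance Num QSqrt2 where
  QSqrt2 a b + QSqrt2 c d = QSqrt2 (a + c) (b + d)
  QSqrt2 a b * QSqrt2 c d = QSqrt2 (a * c + 2 * b * d) (a * d + b * c)
  negate (QSqrt2 a b)     = QSqrt2 (negate a) (negate b)
  fromInteger n           = QSqrt2 (fromInteger n) 0
  abs x                   = if positive x then x else negate x
  signum x
    | positive x      = 1
    | x == QSqrt2 0 0 = 0
    | otherwise       = -1

-- Is a + b*sqrt 2 > 0? Decided exactly by comparing a^2 with 2*b^2.
positive :: QSqrt2 -> Bool
positive (QSqrt2 a b)
  | a >= 0 && b >= 0 = a > 0 || b > 0
  | a <= 0 && b <= 0 = False
  | a < 0            = 2 * b * b > a * a   -- here b > 0
  | otherwise        = a * a > 2 * b * b   -- here a > 0, b < 0

instance Ord QSqrt2 where
  compare x y
    | x == y           = EQ
    | positive (x - y) = GT
    | otherwise        = LT

sqrtTwo :: QSqrt2
sqrtTwo = QSqrt2 0 1
```

            With this, `sqrtTwo * sqrtTwo == 2` holds exactly, and comparisons like `sqrtTwo < 3/2` are decided without computing any digits. Ruler-and-compass-constructible lengths all live in towers of such quadratic extensions.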

            • vidarh 6 years ago

              But this is merely a product of the choice of representation.

              Here is a fully expanded representation: 1.

              Of course my choice of base is a bit exotic, but the point is that no matter what, we're working on symbols. We're just used to seeing some as more direct analogues of numbers than others.

            • daveFNbuck 6 years ago

              What's a direct binary representation? If that means having each digit explicitly written out, no model of computation could possibly do that in finite time. If it means having the ability to output arbitrary bits in finite time, Turing machines can do that.

          • obastani 6 years ago

            It's probably easier to give an example of a number that I can't represent symbolically. However, I can't give you a specific one, since if I uniquely define a number then that definition is itself a symbolic representation of the number. But here is a recipe: take a uniformly random real number on [0, 1]. Or, computationally speaking, uniformly sample an infinite number of Bernoulli random variables and write down the binary representation. Of course, this process takes infinite time, which suggests that such numbers are not representable.

            For a proof, consider the fact that every representable number, by definition (I know I haven't given a formal definition), is represented by some finite string. But the number of finite strings is countable. So the representable numbers form a measure-zero subset of [0, 1], and a random sample is representable with probability zero.

          • jacobush 6 years ago

            Or put differently:

            How long is that piece of string?

            a) Here, have a look at it. It's exactly that long.

            b) 4.51 inches

            Both answers are correct. "a" is correct by definition. "b" may be totally correct if we attach a precision to the answer, like +- 0.01 or whatever we find to be true.

            • grey-area 6 years ago

              How long is that piece of string?

              With how much precision can you measure it?

              With how much precision can you describe it?

              The answers are pretty similar for humans or Turing machines, IMO.

          • joe_the_user 6 years ago

            An output would be a physical extent in space (a line of some length, the turning of a dial, etc.); a representation would be a description.

            • daveFNbuck 6 years ago

              Under that definition, can Turing machines produce output at all? It seems that the Church-Turing thesis is about representations and not outputs.

              • roywiggins 6 years ago

                The original formulation of the machine just operates on a tape, the closest thing to an output it has is moving the tape into a particular state (such as writing down the digits of the calculated function on successive cells of the tape). But there's no "output" function to spit out a number. You can just tack that onto the specification if you want, but it doesn't change the power of the device really.

                • daveFNbuck 6 years ago

                  I'm specifically asking about joe_the_user's definition of output as being a physical object rather than a number.

      • SilasX 6 years ago

        I don't think that's technically correct; I think it would be more correct to say that, for a given natural-number precision N, a Turing machine can calculate sqrt(2) to precision N, and there is no upper bound on N beyond which a Turing machine can't do it anymore. That's not the same as computing "the exact square root of 2" (only Chuck Norris can do that, of course).

        • daveFNbuck 6 years ago

          The square root of two cannot be represented exactly in a finite decimal representation, but there are other ways to represent it exactly in finite space. We both know precisely which number we're talking about, despite using computers to represent it.

          • SilasX 6 years ago

            But "exactly representing it" just cashes out as "being able to answer questions to arbitrary precision about its value", and yep, TMs can do that do that, in finite space.

      • ghayes 6 years ago

        Doesn't the definition of computable numbers, from the referenced Wikipedia article, only state that they can be computed to a given nth digit? Thus, in the OP's example, the ruler-and-compass construction is exact whereas the Turing machine can only approximate the number (unless given infinite time, which I assume is out).

        • alew1 6 years ago

          We need a digital representation of real numbers. One possible choice is: a real number r is represented as the source code of a function which takes a natural number n to the nth digit of r. This seems no more like “cheating” (and to me, it is less like cheating) than to represent a real number r by a line segment of length r.
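          As a sketch of that choice (my own; the names are invented, and the function value itself stands in for "source code"): a real r is a function from a precision n to the first n decimal digits of r, i.e. floor(10^n · r).

```haskell
-- A real r as a digit-producing function: digits n = floor (10^n * r).
newtype RealFn = RealFn { digits :: Int -> Integer }

-- sqrt 2 in this representation, via an integer square root.
sqrt2Fn :: RealFn
sqrt2Fn = RealFn (\n -> isqrt (2 * 100 ^ n))
  where
    isqrt m = go m
      where
        go x
          | x * x <= m = x
          | otherwise  = go ((x + m `div` x) `div` 2)
```

          Asking for more digits is just calling the function with a larger n; the representation is finite even though the decimal expansion is not.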

          • dllthomas 6 years ago

            That can't represent most real numbers, although I don't know that we have actually established that physical lengths can.

          • OscarCunningham 6 years ago

            Both representations have the feature that it's impossible to check whether two real numbers are equal.

            • goldenkey 6 years ago

              Actually, with the physical objects you could use one to occlude the other and test for light passage.

              • empath75 6 years ago

                Not with infinite precision.

            • aeneasmackenzie 6 years ago

              Importantly, this is actually true of the real numbers.

              • OscarCunningham 6 years ago

                I don't think I believe in "actual" real numbers.

        • iitalics 6 years ago

          The catch is that you can't actually measure the ruler-and-compass construction with infinite precision. You can perform more operations on the construction and then measure later to verify that it produced the correct result, but so can a Turing machine acting on a mere "representation" of the number.

    • jonathanstrange 6 years ago

      Ah, okay, so it boils down to the question whether real numbers are physically realizable.

      There is a debate about that in the philosophy of physics and mathematics. AFAIK, according to the current state of physics real numbers cannot be realized within a finite amount of space, at least Gisin [1] recently argued that way and I've seen other arguments for this view before [2,3,4,5:167-170]. I also remember some argument based on thermodynamics but cannot find the reference.

      [1] https://arxiv.org/pdf/1803.06824.pdf [2] https://arxiv.org/abs/math/0411418 [3] https://news.ycombinator.com/item?id=14080024 [4] https://ptolemy.berkeley.edu/~eal/pnerd/blog/are-real-number... [5] https://www.sciencedirect.com/science/article/pii/S030439750...

      • gowld 6 years ago

        Are there any arguments in favor of "an uncountable subset of the real numbers is physically realizable"? That seems obviously false to me. The "debate" is one-sided; anyone who considers the problem comes to the same conclusion.

    • perl4ever 6 years ago

      "the value of square root of two can only be approximated by a Turing machine but where ruler-and-compass construction hypothetically can construct this exactly."

      Ruler and compass are made of atoms though. Or, if not atoms, Planck units or something.

      • lisper 6 years ago

        The square root of 2 can also be computed exactly by a Turing machine. The result is exactly √2.

        What, that's cheating you say? You wanted an exact decimal expansion? Well, you're right, a Turing machine can't produce that. But neither can a ruler and compass.

        • jfoutz 6 years ago

          I’m pretty sure a Turing machine has infinite tape. So maybe it can represent that. But you’re going to have to wait a moment for it to print.

          • kps 6 years ago

            Better to say a Turing machine has unbounded tape. ‘Infinite’ can lead to slippery equivocations.

          • MaxBarraclough 6 years ago

            Which leads us to a rather trivial conclusion: How many operations will it take to print the infinitely-many digits of the number? Infinity, of course.

            Same goes for printing 0 to an infinite number of decimal places.

          • lisper 6 years ago

            Actually, a TM tape is unbounded, not infinite. Not the same thing.

        • mcguire 6 years ago

          Think back to your geometry class. Mathematical geometry works with infinitely small points and has infinite precision.

          I suspect that the point is that geometry is uncomputable.

          • kaibee 6 years ago

            That seems to be the same thing as representing sqrt(2) as sqrt(2).

            • mcguire 6 years ago

              Yes, sort of.

              You can "compute" sqrt(2) by the geometric construction of a unit square and "drawing" the diagonal.

              You cannot do it physically, because marks on paper are not geometric primitives: points are not 0-dimensional; when drawn with a pencil, they have a physical size. A line on paper is not 1-dimensional, etc. But on the other hand, all physical computers are actually equivalent to finite state machines: they don't have infinite (or, unbounded) storage.

              I'm halfway through the video, and they are discussing this: the fundamental limit of Turing machines, etc., is that they cannot do an infinite (or unbounded) amount of work in a finite number of steps. As a result, geometric constructions a la Euclid are not computable.

            • lisper 6 years ago

              Yes, exactly my point.

    • remcob 6 years ago

      In some sense the ruler-and-compass constructions are even more computable than the natural numbers: any (first-order) statement in Euclidean geometry is either true or false, and there is an algorithm (on a Turing machine) that can decide it. The natural numbers don't have this property, due to Gödel incompleteness.

      In practice, anything you do with a ruler and compass has an equivalent representation in polynomials that is exactly computable.

    • Al-Khwarizmi 6 years ago

      Not an expert on the Church-Turing thesis, but I suppose that to find a counterexample, one has to be rigorous in defining the problem. And defining the problem includes defining the output.

      If the problem is "compute and output the decimal representation of the square root of two", then neither the human nor the Turing machine can do it in finite time, because that decimal representation is infinite. In fact, as far as I know this falls outside the scope of the Church-Turing thesis, because most definitions of algorithm require that it run in a finite number of steps, so there is no algorithm to do that.

      If the problem is "compute and output a symbolic representation of the square root of two" (in some symbol system where it is finite), then the human and the machine can both do it.

      If the problem is "compute and output a segment whose length is sqrt(2) times the length of this segment"... I don't think that's what computation is about. You might as well claim the Church-Turing thesis is false because a Turing machine cannot wave its hand.

      • andrewla 6 years ago

        The problem is that the Church-Turing thesis is not rigorously defined, and it was in the attempt to rigorously define the parameters that the research in the video was done.

        I think that qualitatively we can agree that the ruler-and-compass construction of the square root of two is an algorithm. It's a repeatable procedure that can be described in a finite set of steps, and will always yield a provably correct answer. Whereas "waving a hand" is not an algorithm, by the same qualitative definition.

        The speaker wrote a paper that proved the Church-Turing thesis for sequential algorithms, primarily by defining in strict terms what a sequential algorithm is. Ruler-and-compass constructions are not sequential algorithms by this definition, and the difficulty in specifying the parameters for a generalized "algorithm" is apparently what leads him to the conclusion that the Church-Turing thesis cannot be true in general.

    • tathougies 6 years ago

      Square root of two represented exactly in Haskell (Turing equivalent):

          newtype Real = Real { isGreaterThan :: Rational -> Bool }

          sqrt2 :: Real
          sqrt2 = Real (\r -> r < 0 || r * r < 2)

      Doing arithmetic on these numbers is a more complicated matter. But this representation is exact.
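      One way to see why the arithmetic is trickier (a sketch of my own, not tathougies'; `CutReal`, `dyadics` and `addCut` are invented names, and the cut is restated self-contained): x + y > r holds exactly when some rational q has x > q and y > r - q, so addition turns into a search over rationals that halts only when the answer is True.

```haskell
-- Self-contained restatement of the cut representation.
newtype CutReal = CutReal { isGreaterThan :: Rational -> Bool }

sqrt2 :: CutReal
sqrt2 = CutReal (\r -> r < 0 || r * r < 2)

-- An enumeration of dyadic rationals in [-4, 4], at ever finer grids.
dyadics :: [Rational]
dyadics =
  [ fromInteger k / 2 ^ n
  | n <- [0 ..] :: [Integer]
  , k <- [-4 * 2 ^ n .. 4 * 2 ^ n]
  ]

-- (x + y) > r  iff  there exists a rational q with x > q and y > r - q.
addCut :: CutReal -> CutReal -> CutReal
addCut x y = CutReal (\r ->
  any (\q -> isGreaterThan x q && isGreaterThan y (r - q)) dyadics)
```

      Queries whose answer is True terminate (for example, sqrt 2 + sqrt 2 > 5/2 is confirmed after finitely many candidates); queries whose answer is False search forever, which is one concrete sense in which the representation is exact but awkward to compute with.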

      • gowld 6 years ago

        perhaps for clarity write that as

            newtype Real = Supremum {
               isGreaterThan :: Rational -> Bool
            }
        
            sqrt2 :: Real
            sqrt2 = Supremum (\r -> r < 0 || r * r < 2)
      • kps 6 years ago

        √2 is a bad example for this purpose; there are only countably many algebraic numbers.

        • tathougies 6 years ago

          My representation is countable, but not because of the fact it uses the algebraic term 'r * r' in its definition of sqrt(2).

          My representation is based on Dedekind cuts, implemented in Haskell, which is based on the lambda calculus. There are countably many lambda terms, so I can only represent countably many real numbers. My representation can store all the computable numbers, but there are computable numbers that are not algebraic.

          For example, e is not algebraic, but it is computable. Here is a representation of e

              factorials :: [Rational]
              factorials = 1 : zipWith (*) [1..] factorials

              eApproxes :: [Rational]
              eApproxes = scanl (+) 0 (map recip factorials)

              e :: Real
              e = Real (\r -> any (r <) eApproxes)
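          The same partial-sum construction, transcribed into Python as a hypothetical sketch (with the usual caveat that the predicate only halts on the "yes" side):

```python
from fractions import Fraction
from itertools import count

def e_gt(r):
    # "e > r": the partial sums of sum(1/k!) climb toward e from below,
    # so some partial sum eventually exceeds any rational r < e.
    # Caution: this loops forever when r >= e (semi-decision only).
    total, fact = Fraction(0), 1
    for k in count(1):
        total += Fraction(1, fact)  # add 1/(k-1)!
        fact *= k
        if total > r:
            return True
```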
  • rocqua 6 years ago

    > Effectively, it is a definition of computation that has so far been accepted because nothing fundamentally different and more powerful than a Turing machine has been exhibited.

    Note that there is a commonly accepted extension that is more powerful than a Turing machine: Probabilistic Turing machines. I'm guessing the proof of equivalence generalizes to partial recursive functions and the lambda calculus if you add a 'random oracle'.

    An interesting counter-argument is that real random oracles don't exist, and that in reality, we can simulate probabilistic computation using Pseudo Random Number Generators. So:

    * True probabilistic computation isn't possible

    * And anyway, deterministic computation gets close enough to probabilistic computation through PRNGs

    Alternatively, one might argue one could build a 'real' random oracle by reading from a 'true' random source. E.g. Cosmic microwave background, atmospheric noise, quantum fluctuations or a video of lava lamps. There is a difference between a random source and a random oracle, but with sufficient storage, that can be patched.

    • OscarCunningham 6 years ago

      More powerful in what sense? The computable functions are the same for either type of machine.

    • yters 6 years ago

      Any PTM can be simulated by a DTM.
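      For instance, a DTM can recover a PTM's exact acceptance probability by brute-forcing every coin-flip sequence -- exponentially slower, but fully deterministic. A toy Python sketch (the "machine" here is just an illustrative predicate, not a formal model):

```python
from fractions import Fraction
from itertools import product

def acceptance_probability(machine, n_flips):
    # Enumerate all 2^n coin-flip sequences and count accepting runs:
    # the probabilistic behaviour is analyzed exactly, deterministically.
    accepting = sum(1 for flips in product((0, 1), repeat=n_flips)
                    if machine(flips))
    return Fraction(accepting, 2 ** n_flips)
```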

TimTheTinker 6 years ago

Overview pasted here:

————

The thesis asserts this: If an algorithm A computes a partial function f from natural numbers to natural numbers then f is partially recursive, i.e., the graph of f is recursively enumerable.

The thesis was formulated in the 1930s. The only algorithms at the time were sequential algorithms. Sequential algorithms were axiomatized in 2000. This axiomatization was used in 2008 to prove the thesis for sequential algorithms, i.e., for the case where A ranges over sequential algorithms.

These days, in addition to sequential algorithms, there are parallel algorithms, distributed algorithms, probabilistic algorithms, quantum algorithms, learning algorithms, etc.

The question of whether the thesis is true in full generality has been actively discussed since the 1960s. We argue that, in full generality, the thesis cannot possibly be true.

  • daveFNbuck 6 years ago

    > These days, in addition to sequential algorithms, there are parallel algorithms, distributed algorithms, probabilistic algorithms, quantum algorithms, learning algorithms, etc.

    These can all be converted into sequential algorithms with some overhead. Is there an actual proof in the video or just an argument that something that can't be made sequential must exist?

    • jonathanstrange 6 years ago

      A parallel OR will terminate even if one of the subclauses does not terminate, whereas sequential OR might not terminate in this case. So if you define an algorithm as a computation that terminates, parallel OR cannot be fully emulated by sequential OR.

      The problem is that, as far as I know, sequential algorithms form a uniform class, because equivalence between different sequential models, e.g. Turing machines and the untyped lambda calculus, can be proved. AFAIK, this is not possible for parallel models of computation; there are different models with different expressivity. (I'm not an expert on that, so take this with a grain of salt and please correct me if I'm wrong!)

      • overdrivetg 6 years ago

        Well... you can still execute the OR essentially via BFS vs DFS... Spend some "cycles" or steps executing one path, then the other, then keep alternating. Eventually you will hit the termination (if it exists).

        Or essentially, you can always simulate parallel algorithms on sequential hardware.

        Of course you are correct if the sequential OR is executed naively :-)

        ...or if you force side-effect free short-circuiting within the sequential OR execution. But then it wouldn't really be the same OR (as in the parallel case), now would it?

      • daveFNbuck 6 years ago

        What are you using the acronym OR to mean? Why can't you define a sequential machine that simulates it and terminates under the same conditions?

        For all the parallel modes of computation I've seen, you can pretty easily simulate them on a single machine by just keeping the full state of each machine and simulating each time unit 1 machine at a time. It's slow, but equivalence isn't about speed.
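        That round-robin interleaving can be sketched in Python, with generators standing in for machines (the yield-None-then-boolean convention is an illustration, not a formal machine model):

```python
def parallel_or(*computations):
    # Sequential round-robin simulation of a parallel OR: advance each
    # (possibly non-terminating) computation one step at a time, and
    # return True as soon as any branch answers True -- even if some
    # other branch would never terminate.
    active = list(computations)
    while active:
        for comp in list(active):
            try:
                step = next(comp)
            except StopIteration:
                active.remove(comp)
                continue
            if step is True:
                return True
            if step is False:
                active.remove(comp)
    return False

def diverges():
    while True:            # a branch that never terminates
        yield None

def true_after(n):
    for _ in range(n):     # a branch that answers True after n steps
        yield None
    yield True

def false_after(n):
    for _ in range(n):     # a branch that answers False after n steps
        yield None
    yield False
```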

        • overdrivetg 6 years ago

          Took me a minute as well - I think he means logical OR (vs logical AND, etc)

          ie IF (A OR B) THEN ...

        • jonathanstrange 6 years ago

          Yes, logical OR.

          • daveFNbuck 6 years ago

            Why couldn't you simulate that sequentially by simulating 1 step of each computation until one of them returns true or both have terminated?

            • jonathanstrange 6 years ago

              Oh man, you and overdrivetg are right, of course. I was silently assuming that A and B are atomic operations or inputs to an open system; otherwise it should be possible to emulate parallel OR.

              Now this casual remark during my work time is starting to bother me a lot, and I really hope some distinguished expert could jump into the discussion. We know that the pi calculus can embed the lambda calculus (and similar for other parallel models of computation), and I've always assumed that the opposite is not generally true - that you cannot express all parallel computations on Turing machines or in lambda calculus such that the same input yields the same output, at least not if you allow some of the computations not to terminate.

              Now I'm no longer sure about that and cannot find any foundational theorem that would prove this.

              Am I the victim of a prejudice? :o Where are the computer scientists when you need them?

              • daveFNbuck 6 years ago

                If you want an expert's opinion, I have a PhD in theoretical computer science. I'm not familiar with pi calculus, but parallel models of Turing machines can be fully simulated by standard Turing machines by just simulating a time step on each machine before moving on to the next time step.

                • jonathanstrange 6 years ago

                  Since merely having a PhD in theoretical computer science doesn't really count for being an expert in this area, do you happen to have a reference for that?

                  Don't get me wrong, I do believe you, but am looking for the relevant theoretical results about this.

                  • daveFNbuck 6 years ago

                    That's fair. I only mentioned it because you asked for a computer scientist.

                    One simple model of a parallel Turing machine is one with multiple tapes, each with its own read-write head. They all operate simultaneously based on the global state. This is a very general form of parallelism.

                    Wikipedia has an article [1] about them with some references for proofs that they're equivalent to single-tape Turing machines, but I encourage you to try to prove it yourself. It's fairly simple to prove, I'm pretty sure I've seen it as an undergraduate-level homework problem.

                    If you want a hint, try increasing the number of tape symbols so that you can encode the entire global state on one cell of the single-tape machine. It's a bit more work, but you can also show that two symbols are enough to simulate a Turing machine with any number of symbols.

                    [1] https://en.wikipedia.org/wiki/Multitape_Turing_machine
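                    To make the hint concrete, here is just the data-representation half in Python (not a full simulator): pack the tapes into one tape of per-track (symbol, head-marker) pairs, so a single scan recovers what every head sees.

```python
BLANK = "_"

def combine(tapes, heads):
    # One combined tape whose cells are k-tuples of (symbol, head-is-here)
    # "tracks" -- the enlarged-alphabet trick, sketched on plain lists.
    n = max(max(len(t) for t in tapes), max(heads) + 1)
    padded = [list(t) + [BLANK] * (n - len(t)) for t in tapes]
    return [tuple((padded[k][i], heads[k] == i) for k in range(len(tapes)))
            for i in range(n)]

def read_heads(combined):
    # A single left-to-right scan recovers what each simulated head sees.
    seen = {}
    for cell in combined:
        for k, (symbol, marked) in enumerate(cell):
            if marked:
                seen[k] = symbol
    return [seen[k] for k in sorted(seen)]
```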

                    • daveFNbuck 6 years ago

                      The proof is a little trickier than I remembered. There's a bit more housekeeping to handle that each head can be on a different cell of their respective tapes.

                      Try coming up with a proof for the case where each head moves in the same direction, so they're always all at the same position on their tapes, first. This can be extended to the general case by some annoying bookkeeping.

    • macleginn 6 years ago

      As was proved recently, there are quantum algorithms that cannot: https://www.quantamagazine.org/finally-a-problem-that-only-q...

      • atq2119 6 years ago

        First of all, the paper that this article is reporting on doesn't actually prove a separation between quantum and classical algorithms at all. It's an oracle separation result, which makes it an important result in complexity theory, but not a statement about actual classical or quantum algorithms.

        Second, the result is unrelated to the Church-Turing thesis in the first place, since it's only about relative efficiency, whereas the Church-Turing thesis is about computability.

        Every quantum algorithm can be simulated by a classical computer, so quantum computing cannot possibly disprove the Church-Turing thesis. However, there's a stronger variant of the Church-Turing thesis which also takes performance into account, and it's plausible that quantum computers can contradict this stronger version (although we're very far away from actually proving that).

  • olliej 6 years ago

    I’m still unclear on whether probabilistic algorithms are “algorithms” in the strict sense - in that definition an algorithm should have deterministic behavior, which a probabilistic method does not have. But let’s say your “random” is a deterministic prng- does that then make the PA a deterministic algorithm?

    Basically the distinction can be best illustrated with bozo/bogo sort. The definition many people use is:

    1. Randomly shuffle the input

    2. See if it is sorted, if not go to 1.

    The problem, as explained to me by an algorithms lecturer a long time ago, is that the algorithm is not guaranteed to finish, and moreover the same input may or may not result in it finishing on different runs.

    Per that lecturer a “correct” bozo sort is

    1. Generate a list of every permutation of the input

    2. Iterate the list until you find a sorted entry and return that.

    It has computable best case, worst case, and average case (all terrible). It is guaranteed that the same input will always produce the same output (terminate vs not terminate).

    Obviously you could optimize this if you had a shuffle algorithm guaranteed, by repeatedly shuffling its own output, to produce every ordering of the original input. Then you would get closer to: shuffle; check; repeat. But still note that there is no random involved :)
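    That deterministic variant is short enough to sketch in Python; itertools.permutations enumerates orderings in a fixed order, so the same input always yields the same output:

```python
from itertools import permutations

def deterministic_bozo_sort(xs):
    # Enumerate every permutation in a fixed order and return the first
    # sorted one: guaranteed to terminate, no randomness involved,
    # and (as discussed) all of best, worst, and average case are terrible.
    for perm in permutations(xs):
        if all(perm[i] <= perm[i + 1] for i in range(len(perm) - 1)):
            return list(perm)
```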

    • rocqua 6 years ago

      See [1], which is about the 'complexity class' Bounded-error Probabilistic Polynomial time. Similar to the P=NP conjecture, there is a BPP=P conjecture. Also consider [2], which concerns variations of the Church-Turing thesis. Notably, it mentions probabilistic computation.

      To answer your original question, indeed probabilistic algorithms are an extension of normal algorithms. One way to make the extension (and retain the property that algorithms give the same output on the same input) is to add a 'random oracle'.

      [1] https://en.wikipedia.org/wiki/BPP_(complexity) [2] https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis#V...

      • kiriakasis 6 years ago

        From a mathematical point of view, probabilistic algorithms simply manipulate probability distributions instead of values.

        The output is a random variable.
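        A minimal Python sketch of that view (names hypothetical): represent a distribution as a map from values to probabilities, and a "probabilistic" step becomes an ordinary deterministic function on such maps.

```python
from fractions import Fraction

def lift(f, dist):
    # Push a distribution through a function, deterministically:
    # the probabilistic step is just a map on value -> probability dicts.
    out = {}
    for value, p in dist.items():
        image = f(value)
        out[image] = out.get(image, Fraction(0)) + p
    return out

die = {k: Fraction(1, 6) for k in range(1, 7)}  # a fair d6 as a distribution
```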

    • rcfox 6 years ago

      Do you have a better source than "an anonymous lecturer once said..."? I've never heard such a strict definition of algorithm before.

      Randomized algorithms are often used to avoid falling into worst-case scenarios and improve run-time, like with quicksort, or to avoid settling in local minima like with simulated annealing.

      • olliej 6 years ago

        Ok per wikipedia (https://en.wikipedia.org/wiki/Algorithm)

        "Starting from an initial state and initial input (perhaps empty),[4] the instructions describe a computation that, when executed, proceeds through a finite[5] number of well-defined successive states, eventually producing "output"[6] and terminating at a final ending state"

        The "finite" bit I guess is the key thing, and I think it's what my lecturer was getting at - the canonical bozo sort is not finite: you could just run forever, as you might not /ever/ produce a correctly sorted version of the input. That article also points to "randomized algorithms", taking some random input (you could imagine the random input being an RNG seed, I guess?).

        That article does include Monte Carlo algorithms, which technically could not terminate (a Monte Carlo simulation of a bouncing photon could technically go forever), so I am left confused.

        The lecturer was Prof. Tadao Takaoka, who was great, and apparently (per google) was on the editorial board of the journal Algorithms (http://www.mdpi.org/algorithms/editors.htm)

        • rcfox 6 years ago

          I think your "correct" bozo sort example from earlier shows that there is a finite number of states (all of the permutations of the input array), just an unbounded number of state transitions.

          I think for something like this, you could do asymptotic analysis of the expected run-time. Assuming all elements are distinct, there are n! permutations, so each would have a 1/n! chance of being selected each iteration. (But it's rather late, and this is a poor medium for discussing math, and someone has probably already done this, so I'm not going to go any further.)

        • shoo 6 years ago

          small thought experiment: what if we decouple the issue of "finite" from the probabilistic aspect?

          Perhaps-Algorithm A:

            step 1: x := sample a value from a roll of a fair D6
            step 2: print x
           
          Is Perhaps-Algorithm A an algorithm?

          Can we argue that we're in a well defined state after step 1 has executed? We have some variable x bound to one value from {1, ..., 6}, with a known probability distribution, although we don't know which integer we've got.

          edit: on the other hand, perhaps we're just haggling over definitions here. It is definitely the case that non-terminating algorithms exist and can be useful, and that non-deterministic algorithms exist and can be useful. It might be the case that some powerful theory and analysis restricts attention to terminating, deterministic algorithms, but that doesn't mean that's all there is.
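          Replacing the die roll with a seeded PRNG turns Perhaps-Algorithm A into an ordinary deterministic function of its seed -- a toy Python sketch of the PRNG point discussed upthread:

```python
import random

def perhaps_algorithm_a(rng):
    # step 1: sample a "fair d6" from the supplied generator
    # step 2: return it
    # With the PRNG (seed) treated as part of the input, this is a
    # perfectly ordinary deterministic computation.
    return rng.randint(1, 6)
```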

          • olliej 6 years ago

            The problem is that because it’s random there is a chance it will never complete, so the same input alternates between halting and non-halting, so the average time is infinite. Which is clearly nonsensical, hence it isn’t an algorithm.

    • hyperion2010 6 years ago

      I think that the intuition here has to do with the dimensionality of the problem, in the sense that we have proved that in 3 dimensions a random walk will never return to it's starting point, but in two dimensions it will. Mapping all probabilistic algorithms into dimensional random walk space is probably extremely difficult though.

      • dcow 6 years ago

        > in the sense that we have proved that in 3 dimensions a random walk will never return to it’s (sic.) starting point

        We have? I believe the probability is ~34%. http://mathworld.wolfram.com/PolyasRandomWalkConstants.html

        • hyperion2010 6 years ago

          Sorry, I had it reversed in my head. For 1 and 2 dimensions they will return to the starting point with probability 1. For 3 dimensions it is simply not probability 1. Obviously a weaker result.

    • rdlecler1 6 years ago

      Technically speaking, even physical computers have error rates and are, therefore, probabilistic. With large distributions, your mean approaches your expected value -- the same kind of analytical result you get from boolean logic. Similarly, biological systems run on probabilistic algorithms, and the fact that two twins starting from the same DNA can be so remarkably similar after millions of steps is a testament to the power of probabilistic algorithms.

  • berti 6 years ago

    > We argue that, in full generality, the thesis cannot possibly be true.

    If you could post a text summary of this part, we'd be very grateful.

  • hyperion2010 6 years ago

    Without having looked at the details, why do I feel like all of the other types of algorithms should be able to be shown to be equivalent to the sequential case under the assumption of local realism? That is to say, if you don't consider that an algorithm must be implemented in this universe then I suspect that many things are possible. Am I missing something here?

    • sometime 6 years ago

      Conversely, those algorithms which cannot be converted to sequential form possibly only run on exotic Turing machines, like ones with infinite tapes or infinite parallelism, and those likely don't have a physical equivalent, because it looks like our universe only supports finite computation. In other words, the algorithms which violate Church-Turing might not be interesting at all, or only for analysis in an idealized setting.

      • hyperion2010 6 years ago

        Indeed. I had a similar thought, which initially seemed profound. "Maybe this means that the universe can encode the results of non sequential algorithms but no one can ever decode them. Douglas Adams was right all along!" Of course if you can't decode anything then in theory the universe is currently running an infinite number of undecodable simulations of itself right now. Related piece from Scott Aaronson [0].

        0. https://www.scottaaronson.com/blog/?p=3327

rain1 6 years ago

This is pretty misleading and click-baity! The formulation they use is quite esoteric and not what people usually take the term to mean. The Church-Turing Thesis really just states that Turing completeness is the strongest type of computational class we can expect to build (finite approximations of) in our universe. And this does seem to be true. I hope the "exciting" title of the talk they chose doesn't cause a lot of people to misunderstand this.

abdulhaq 6 years ago

A Turing machine can compute root 2 but it takes an infinite amount of time to do so. A ruler and compass can do it near instantaneously, but it takes forever to measure it.

  • comboy 6 years ago

    It also takes an infinitely precise ruler and other tech which at some level of precision is not in any way more trivial than an infinite amount of time.

    You can compute as many digits as you want, but when trying to measure it, at some point Heisenberg (or Planck to be more precise) will stop you.
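    The "compute as many digits as you want" half is easy to make concrete with exact integer arithmetic; a small Python sketch:

```python
from math import isqrt

def sqrt2_digits(n):
    # The digit string "1" followed by n decimal digits of sqrt(2),
    # via the exact integer square root of 2 * 10^(2n) -- no rounding,
    # no measurement, just more computation per digit.
    return str(isqrt(2 * 10 ** (2 * n)))
```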

avodonosov 6 years ago

This talk is Yuri Gurevich's answer to Peter Shor's (famous for Shor's algorithm) comment [1] regarding a previous Gurevich work. In that previous work, Dershowitz and Gurevich prove the Church-Turing thesis formally, using an axiomatisation of what an "algorithm" is, which captures the meaning assumed in Church and Turing's time.

But Shor complains that their axiomatisation does not include probabilistic or quantum algorithms.

In this talk Gurevich argues that, in the full generality of the term "algorithm", the thesis cannot be true.

1 - https://cstheory.stackexchange.com/questions/88/what-would-i...

Verdex_3 6 years ago

There is a series of talks by Yuri Gurevich on Channel 9 where he talks about algorithms. While I haven't watched this particular video (yet), I did watch the other series a few times (3 videos IIRC). Anyway, in those videos Yuri seems to basically be saying that he doesn't think all algorithms from now to eternity will always be representable with Turing machines. Which seems a bit more reasonable than the title here (Church-Turing cannot be true). It's the inverse of the old adage that when an old scientist says something is possible he's probably right, and when he says something is impossible he's probably wrong - i.e. saying Church-Turing will never be irrelevant is hedging your bets against human progress.

All that being said, I don't buy what Yuri is selling. Even quantum computing can be simulated with a classical computer (with exponential slowdown), so I'm not seeing what's going to turn out to be more fundamental, at least in a theoretical sense. Practically, people might not care (yes, technically you can simulate that graphics card in lambda calculus, but practically you would never do that, and the people building these things probably don't care about the Church-Turing thesis), but the effective computability definition feels pretty effective. Either way, I'm not going to lose too much sleep over it. At the end of the day I'm more interested in looking for ways to be more effective, and I'm not really interested in whether or not what I'm doing fits into the current definition of effective computability.

mwilcox 6 years ago

What about the Church–Turing–Deutsch principle? https://en.wikipedia.org/wiki/Church%E2%80%93Turing%E2%80%93...

  • tlb 6 years ago

    That one's pretty aspirational. At present, we have no idea how to build a supercomputer that simulates exactly even a cubic nanometer-femtosecond of space. It's tempting to believe that we might someday figure out that physics is digital, but there is no workable theory for how.

    • kowdermeister 6 years ago

      Not sure what your definition of simulation is, but there is already one done on a supercomputer:

      https://www.youtube.com/watch?v=J3xLuZNKhlY

      • yorwba 6 years ago

        That's a box of ≈21 cubic femtometers and the time resolution is one yoctosecond. The difference between that and a cubic nanometer-femtosecond is more than 25 orders of magnitude. And that's just simulating quantum chromodynamics. Add in the other parts of the Standard Model and scaling the simulation gets even more difficult.
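        The scale gap is easy to check with back-of-the-envelope arithmetic (volumes in cubic meters, times in seconds; the ≈21 fm³ box at yoctosecond resolution is as stated above):

```python
# ~21 cubic femtometers at yoctosecond resolution, versus a
# cubic nanometer simulated for one femtosecond
box_volume = 21 * (1e-15) ** 3       # m^3
target_volume = (1e-9) ** 3          # m^3
time_ratio = 1e-15 / 1e-24           # femtosecond / yoctosecond
gap = (target_volume / box_volume) * time_ratio  # combined scale factor
```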

        • kowdermeister 6 years ago

          I'm pretty happy we can even simulate a box of ≈21 cubic femtometers :) Maybe they should turn RTX on.

scythe 6 years ago

The talk begins:

ANNOUNCER: So it's my distinct pleasure, and privilege, to host Yuri for a talk on a topic I have discussed with him since I was even a student: models of computation. And through the years, we have interacted in various ways, on both fundamentals of models of computation and then abstract state machine applications, and at this point, it's also probably the last talk that Yuri will give at this institution, as he is moving to Michigan next week. But I have no doubts that there will be many opportunities to interact with Yuri in the future. I hear that there's a birthday party coming up in a couple of years, and it was always a pleasure to participate in these events. So, without more ado, please go ahead.

YURI: Thank you very much. It's a pleasure to return for a day.

[First slide appears]

>My scholarly friend Nachum Dershowitz and I published a long paper

>"A natural axiomatization of computability and proof of Church's thesis", Bulletin of Symbolic Logic, 2008

>where we proved Church's thesis for classical algorithms.

>Here classical is not the counterpart to quantum as is common in quantum literature. By classical algorithms we mean traditional algorithms from antiquity to Turing's time, also known as sequential algorithms.

>The claim that the Church-Turing thesis cannot be true in general is implicit in:

>"What is an Algorithm?" SOFSEM 2012, Springer LNCS 7147 (2012)

>The main goal of this talk is to clarify things and argue for the claim.

YURI: In 2008 Nachum Dershowitz, my very learned scholarly friend and myself, we wrote this paper, we proved Church's thesis, and Turing's thesis, but on very technical-- they are slight differences, it's a technicality -- for classical algorithms. Now what do we mean by classical algorithms? Now quantum people hijack the word "classical" -- it became to mean non-quantum, but our meaning was classical in the sense: algorithms from time immemorial, certainly, until 1950s, 1960s. So, this is the algorithm that Church and Turing had in mind when they put forward their theses. So, another name for these classical algorithms was traditional algorithms, or sequential. I will call them mostly sequential, because the first sort of non-classical, non-traditional algorithms were parallel, and so there was this -- there was traditional or non-sequential. You can generalize, and generalize, then it's not completely obvious why it should be even true.

YURI: I had this argument implicitly published years ago, but as you see:

[Second slide appears]

>There is a problem with long scholarly papers: it takes time to read them, and there is a tendency to skim.

>Peter Shor: The Dershowitz-Gurevich paper says nothing about probabilistic or quantum computation. It does write down a set of axioms about computation, and prove the Church-Turing thesis assuming those axioms. However, we're left with justifying the axioms. Neither probabilistic nor quantum computation is covered by these axioms (they admit this for probabilistic computation, and do not mention quantum computation at all), so it's quite clear to me these axioms are false in the real world, even though the Church-Turing thesis is probably true.

>Nov, 22, 2010. https://cstheory.stackexchange.com/questions/what-would-it-m...

>The great hero of quantum computing has not read the paper.

===========================================================

I hope this will help you decide whether or not to make time to watch the video, although I agree it would be helpful to have a complete transcription available.

avodonosov 6 years ago

FWIW, Yuri Gurevich is famous for the Abstract State Machines formalism.

stevebmark 6 years ago

Why would you share this? It's incomprehensible.

olliej 6 years ago

Are there Paper/slides anywhere?

I shall repeat my complaint about AV only material on HN: not everyone (wants to/can/has time to) watch and listen to 10 minutes of reading spread out to 50 minutes.

It continues to frustrate me.

  • marssaxman 6 years ago

    Seriously! I don't understand why people waste so much effort presenting information in such a high-overhead, low-bandwidth format. Wouldn't it be easier to just post the notes and skip all the AV folderol?

    • titzer 6 years ago

      An echo of the Youtube effect; it's less work for the producer. Just turn on the cameras and start babbling, no editing, annotating, summarizing, or other further work required.

      (tbc not faulting the speaker here).

  • azhenley 6 years ago

    Would be cool if a web app existed that could take a video and produce not only a transcript but also a series of intelligently selected images from the video, basically producing a full document that stands alone.

    • olliej 6 years ago

      I feel like transcribe + pull out slides would be a generally useful thing for conferences. Even a specialized piece of software that has the original slides and just records when slide changes happen and inserts the appropriate slide into the transcription.

      I’m convinced this would help conference videos, which frequently have stretches where just the slide is shown (WWDC videos have this a lot - frequently the speaker isn’t being shown). There should clearly be a standardized video codec for exactly this ;)

    • Rainymood 6 years ago

      Pay me and I'll do it for you. If there is enough demand I'll look for ways to automate it.

  • hinkley 6 years ago

    My coworkers are currently on this same bent and it drives me mad.

    They’d rather make a video than write an architecture document. And what happens when someone has a question or complaint about the middle of the video? How do you come back in a week to review the second to last part?

    Learn to take screenshots and do some light typesetting. Spell it out for people (in my case, they are literally getting paid to do the work).

  • gabrielhn 6 years ago

    Listen at 2x speed and skip ahead?

    • olliej 6 years ago

      Doesn’t work for people who can’t listen.

      Doesn’t work for people who can’t see.

      Doesn’t work for people in environments where the above two are essentially true.

      25 minutes is still longer than 10 minutes (transcript, slides would be even shorter although less informative), and I was being conservative about reading speed.

      • avodonosov 6 years ago

        People who can't see would be unable to read as well

        • RugnirViking 6 years ago

          however, if it were written in text, they can read it by using a screen reader, which is very established technology by now.

          This is not (as) available for a video if at least some component of the lecture requires being able to see

    • syrrim 6 years ago

      >listen at 2x speed

      I'm trying 1.5x, but then when he is speaking it is too fast

      >skip ahead

      I skipped the introduction, but the standstill periods are too short, and you have no way of knowing when something interesting will happen, so skipping ahead entails missing things. This is where a copy of the slides is particularly useful: you can identify those slides you don't comprehend, or of particular interest, and sit through the speaking just for these portions.

      • olliej 6 years ago

        Listening to things fast is a skill - you can do it but it requires practice. If you know any blind/vision impaired people who use screen readers you should ask them to use speaker for a bit while they’re browsing/navigating etc.

        It is absolutely incredible how fast some of them have the audio going. (It also makes any pausing or lag in your application excruciating. It turns out 10ms being “noticeable” is more a factor of how you interact with an application.)

        • mrob 6 years ago

          They've practiced with their specific text-to-speech program. The more familiar you are with a voice the faster you can understand it. The skill won't completely transfer to different speakers.

          • olliej 6 years ago

            For their standard reading speed that's true (but it's still an amazing thing to hear, and I think more devs should communicate with blind people just so they can grasp the sort of latencies involved).

            I would expect that the skill does translate to them being able to understand arbitrary speakers sped up more than a regular person could manage. I’d be curious if any HN readers using screen readers could confirm/tell me I’m talking nonsense :)

            • antt 6 years ago

              >I would expect that the skill does translate to them being able to understand arbitrary speakers sped up more than a regular person could manage. I’d be curious if any HN readers using screen readers could confirm/tell me I’m talking nonsense :)

              I use a text to speech program at ~850wpm and I also watch lectures sped up to x4 regular speed. So yes, it does generalize.

          • gugagore 6 years ago

            I think it may also be the case that (old-school, non-concatenated but generated) synthesized speech, with practice, is more intelligible at higher speeds than natural speech. The sounds are more distinct and consistent.

            • olliej 6 years ago

              The speech synthesis for screen readers is heavily tailored for that job (I remember the emphasis put on the new VoiceOver voices on Mac many years ago; they apparently were noticeably superior for people who used them as screen readers at the high speed settings)

    • drb91 6 years ago

      How do you scan forward for the interesting stuff? How do you search for a quote a friend sent you? What if you’re deaf? Or blind and can’t read the slides?

      Just transcribe it and include the slides! If you can afford the speaker you can afford to make it accessible.

  • dragonwriter 6 years ago

    Yeah, speed aside, audio-video is much better for conveying emotion than information.

starpilot 6 years ago

I understood all of it.

austincheney 6 years ago

I am so not a math person. From a math perspective perhaps this is important. From a computer perspective it completely misses the point.

This work from Alonzo Church gave us https://en.m.wikipedia.org/wiki/Lambda_calculus which in turn provides recursive function abstraction and thus lexical scope. While this model of computation is older than OOP it currently seems to be having a modern renaissance in its influence of language design.

  • austincheney 6 years ago

    This many downvotes without any explanation is a true Reddit moment.

    • espeed 6 years ago

      NB: Better to refrain from saying stuff like "it completely misses the point" unless you are certain you understand the point beyond the level presented.