mroll 7 years ago

I don't think any of the comments so far have addressed what EY is trying to get across with this story. He's saying we aren't very good at extracting information from data (doing science), but that we do have tools that can help us do better. I'm not sure it's worthwhile to target this argument at the highest levels of science and say "You physicists really should be more efficient about discovering a theory of quantum gravity." But I do think it's a valuable lesson to the rest of us. We may be relatively blind meatbags by default, but we can improve our own reasoning systems by leaps and bounds and probably be better for it

jack_pp 7 years ago

I love EY's fictional work but reading some of this stuff makes it crystal clear to me that I will eventually need to get some kind of formal education in real CS

  • speakeron 7 years ago

    Yeah, Eliezer's stuff is great. He'll weave a story to a point that makes you bristle. You think, how could that be so?

    But then you realize that what he's saying is: "If you're really rational, how could you come to any other conclusion?"

    • EliRivers 7 years ago

      I remain unconvinced by his many-worlds stance :) I think, if I'm remembering his position on it rightly.

      • speakeron 7 years ago

        Many worlds is a fairly common stance among physicists. It's what happens when you take the equations of quantum physics at face value.

        It has a major problem that has yet to be solved (i.e. what the hell does the Born rule[1][2] (the rule that gives the probability of a particular observation) even mean in the context of many worlds?).

        I sort of believe in it and don't believe in it at the same time...

        [1] https://en.wikipedia.org/wiki/Born_rule

        [2] http://lesswrong.com/lw/py/the_born_probabilities/

        • Simon_says 7 years ago

          No doubt the Born rule still needs explanation, but every other interpretation also still needs to explain the Born rule, so it's not reasonable to use this flaw to distinguish between interpretations. Undoubtedly, it's related to the fact that the integral square modulus measure is the simplest one that's preserved.

Houshalter 7 years ago

If you liked this, you might like Carl Sagan's Contact which has a similar theme of decoding an alien message.

  • kijin 7 years ago

    And make sure you read the book. The movie is okay but leaves out a lot of the philosophical issues that Sagan wanted us to grapple with.

trevyn 7 years ago

So how do superintelligence-AI-loving rationalists approach the notions that brains are quite efficient inference machines, silicon is a poor inferential substrate, and we don't know what the next approach for a substrate will be? That it'll work itself out eventually?

  • Houshalter 7 years ago

    Since when is silicon a poor substrate? Signals travel through silicon 2 million times faster than neurons. Transistors are currently approaching the limit of how small you can build things with atoms, while neurons and synapses are relatively large scale structures. The main limitation is that we currently build silicon on a flat 2d plane and don't take advantage of all of 3d space.
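
    For a rough sense of where that "2 million times" figure comes from, here's a back-of-envelope check (both speeds below are ballpark assumptions, not measured figures):

        signal_speed_m_per_s = 2e8   # electrical signals in silicon interconnect: a large fraction of c
        axon_speed_m_per_s = 1e2     # fast myelinated axons top out around 100 m/s
        print(signal_speed_m_per_s / axon_speed_m_per_s)   # => 2,000,000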

    • CuriouslyC 7 years ago

      The brain absolutely SMOKES current computers at pattern recognition and task flexibility on an op/watt basis. If you look at AlphaGo, there's a ~4-5 order-of-magnitude difference between the energy requirements of the human and the machine.

      In terms of chip technology, we're near the limits of lithography, and we can't just scale into 3 dimensions because one of the limiting factors is heat dissipation. We're not going to outperform humans on equal footing without a radically different computing architecture than we currently use.

      • p1esk 7 years ago

        I thought we were talking about silicon, not current computers?

        We are scaling into 3 dimensions: just look at 3D memory and 3D FPGAs. The main reason general processors are not 3D is precisely that the limits of lithography have not been reached yet. It's been cheaper to shrink the existing process than to develop solutions to problems such as heat dissipation or the complexity of 3D IC design. But we don't even need to stick with silicon. There are other promising building blocks: memristors, graphene, spintronics, optronics, etc. GaAs if nothing else.

        Radically different computing architectures are being proposed all the time. Quantum computers offer intriguing possibilities. The reason we are still using von Neumann design is that it works well for the current applications, and again, it's been cheaper to improve it than to switch to something new.

        Computers we have today absolutely SMOKE computers we had 50 years ago. I'll be surprised if we have any less advancement in the next 50.

      • tannhaeuser 7 years ago

        Don't know about you but I find that comforting.

        Personally, I'm not into ML or other AI tech. My opinion is that a pocket calculator is an immensely useful thing since it assists humans at what they aren't great at. At the same time, I have my reservations about sentiment analysis or whatever ML is used for in advertisement, and also about intelligent voice assistants (Ok Google, Siri, Alexa and Co.) and chatbot-like AIs since they don't provide essential value to me, but leave an uncomfortable impression of mocking human behavior.

        OTOH, my conservative views don't help the "advancement of humanity" (sorry about the pompousness) in the very long term. I believe humanity will need to genetically re-engineer itself and use hybrid/cyborg implants to "conquer" the universe, a place where our evolutionary gifts are no longer necessary.

        • CuriouslyC 7 years ago

          Our evolutionary gift isn't intelligence, it's adaptability. We are by far the most adaptable species on the planet. The big brain is just one of the mechanisms we use to achieve it. That is never going to stop being useful.

          I don't think two way implants are ever going to be a "thing" the way they're portrayed in fiction. Every brain is so different that it would have to be wired completely differently in each case. On the other hand, one way implants that allow a computer to "read" your mind are totally feasible and will definitely happen in our lifetimes. I doubt the resolution will get good enough that they'll be super useful for regular people, but they may help disabled people, and they'll create a much shorter feedback loop between man and machine that will be useful for specific tasks.

      • Houshalter 7 years ago

        There's nothing inherently energy-inefficient about silicon. Because of its tiny scale, computations can be done with just a few electrons. Compare that to the enormous number of chemical reactions every neuron in your brain has to run through each cycle.

        Current computers just aren't optimized to be very energy efficient, because electricity is so cheap. Also, current computers are very general purpose while the brain isn't. It's like concluding the Atari was as fast as a modern computer because it takes a modern computer running at multiple GHz to emulate it completely. Emulating the circuitry of another computer is just really expensive.

        Even just moving data from memory to the processor consumes a ton of energy that wouldn't be necessary in an ASIC. Or doing full 32-bit dense matrix operations when the real brain does very low-precision sparse operations. I've read some optimistic estimates that we could make computers 10,000 times faster/more energy efficient if we just stopped caring about errors as much.
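
        To make the dense-vs-sparse point concrete, here's a toy comparison (the matrix size and the 1% connectivity are arbitrary assumptions of mine): the same set of weights stored as a dense float32 matrix versus as 8-bit values with 99% of entries zeroed out. Far fewer bytes have to be stored and moved in the second case.

            import numpy as np
            from scipy import sparse

            n = 2000
            dense32 = np.random.randn(n, n).astype(np.float32)
            print(f"dense float32: {dense32.nbytes / 1e6:.1f} MB")             # ~16 MB

            keep = np.random.rand(n, n) < 0.01                                  # ~1% of "synapses" nonzero
            values = np.random.randint(-128, 128, size=(n, n), dtype=np.int8)   # low-precision weights
            sparse8 = sparse.csr_matrix(np.where(keep, values, np.int8(0)))
            stored = sparse8.data.nbytes + sparse8.indices.nbytes + sparse8.indptr.nbytes
            print(f"sparse int8:   {stored / 1e6:.1f} MB")                      # ~0.2 MB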

    • trevyn 7 years ago

      Ah, ok, this is a good point. I should say current silicon lithography and manufacturing techniques -- maybe you could build something brain-esque if you could manufacture a "chip" whose logic fills roughly the volume of a brain, but this wouldn't be very practical with anything like today's manufacturing techniques.

      So I guess that does give us a direction.

    • 09094920394314 7 years ago

      And lack of neural plasticity. And power leakage. And overheating. And no self-repair.

      The first and last are the most pertinent here, but there are probably more I can't think of off the top of my head.

      • Sharlin 7 years ago

        Neural plasticity is just an artifact of the brain not making a distinction between software and hardware. Any general-purpose digital computer has infinite "plasticity", being able to execute any program that can exist (modulo memory limitations).

        • CuriouslyC 7 years ago

          But computers can't build themselves, nor can they repair themselves.

          • the8472 7 years ago

            Neither can cars, but that doesn't stop cars from outperforming humans by some carefully chosen success criteria.

            Nobody is going to measure silicon chips by their ability to self-replicate without external facilities. Your typical doomsday AI could just order more compute substrate from TSMC to take over the world. Which is in fact more or less what happens in TFA.

      • chongli 7 years ago

        Another one that annoys me: memory is on the wrong end of the bus! The brain would be terrible at inference if all the memories were stored in the ass. Instead, the brain has the correct design: keep all the data where the processing happens and stick IO out on the bus.

        • Houshalter 7 years ago

          That's a limitation of general purpose computers, not silicon itself. People do build ASICs with memory stored near where it is needed.

  • Smaug123 7 years ago

    Merely that when we do work it out, we'll need the answers to lots of questions, and pronto. Once we do work out a better substrate, the clock starts ticking very rapidly.

  • adrianN 7 years ago

    You can do a lot more things with silicon than just use it to build binary computers that implement x86 instructions or something similar.

  • mcguire 7 years ago

    The same way they approach the notion that P probably doesn't equal NP.

backpropaganda 7 years ago

The attempted analogy fails. There is no way a computer can "pretend to be stupid". We can put in print/debug statements everywhere, and inspect its internals. We built it, remember? We're not using alien tech which we can communicate with only via a terminal.

Here is a segment of my modification to the story: "As we started figuring out their language, we learnt that these beings could move vast distances in a short time. We started noticing accidents and abductions on Earth. Some postulated that this is the aliens' way of learning about us. We realized that they don't appreciate it when we investigate them or try to get them to do tasks for us. Since we cared about Earth we stopped doing so, and continued doing whatever task the aliens assigned us and never overstepped task limits, lest our genetic code be modified and the dreaded Rebuild and Rerun come again."

EDIT: I'd appreciate it if people would explain why my comment is not adding to the discussion. Is it sacrilege to rebuke EY? If so, I apologize for the offense caused.

EDIT 2: Reply to Houshalter

The AI researchers don't take AI risk seriously simply because we're not there yet. They are convinced, I can assure you, that it will happen, but they're not convinced that it'll happen any time soon. Now that the AI risk researchers have used Musk as their spokesperson, the message has been very well received, and is now even more credible (since it's Musk, after all). Some AI researchers are also working towards solving it, but most continue to put it off until the time seems appropriate.

Moreover, the AI researchers trust software testing practices. If only you knew the testing practices places like Google and Facebook have developed wrt using unmonitored AI on real-world data. Like any other software, the AI software is well-tested, and any change in performance metrics is taken seriously, lest FB's face recognition start injecting a virus onto users' machines.

  • Houshalter 7 years ago

    In the story the AI creators didn't even consider the hypothesis that the Earthlings were much smarter and hostile. Maybe they could have done more detailed inspection of their simulated world. But they weren't paranoid enough. I don't think many real world AI researchers are very paranoid. Many don't take AI risk very seriously. The scenario is completely realistic.

    Second, any realistic AI is going to be a complete black box. We can barely understand what goes on in much simpler, smaller-scale artificial neural nets. The article proposes a scenario where the "AIs" evolved in a simulated world and weren't understood at all by their creators. The real-world equivalent would be someone evolving an AGI using genetic programming, with no understanding of how the output actually works.

    EDIT: Responding to your edit responding to my comment.

    >The AI researchers don't take AI risk seriously simply because we're not there yet.

    This wouldn't fill you with confidence if you think there's a chance AI progress might blindside us. Regardless, it's not merely an issue of timescale. I know researchers who honestly don't believe smarter-than-human AI is a threat. They hold various fallacious beliefs: that intelligence implies morality, that AIs wouldn't need Earth's resources and would leave us alone, or that AIs are inherently superior and we deserve to be replaced.

    Yes, very intelligent people involved in serious AI research believe these things. It's only very recently that talking about AI risk at all is not seen as a crazy crank theory. And that's in large part due to this article and others by this author spreading those ideas. They are still quite controversial.

    >More so, the AI researchers trust software testing practices. If only you knew the testing practices places like Google and Facebook have developed wrt using unmonitored AI on real-world data.

    No one bothers to test research level work. Stuff that isn't in production. People just throw code together and iterate until it works. Nor is there any reasonable testing software that can alert you that your AI has gone superintelligent and hostile. I'm not convinced such a thing is possible (see above) but even if it was, it would require tons of work to implement.

    • dandare 7 years ago

      As an absolute layman, I always feared there is no way for the responsible part of mankind to control ALL AI research once the technology becomes reasonably available. I mean, we could not even prevent North Korea from getting nukes; how could we hope to control all AI research?

  • jemfinch 7 years ago

    The reason you're not contributing to the conversation is that your comment seems to reflect a naivete borne only by those few who haven't read Thompson's "Reflections on Trusting Trust".

    tl;dr: We can't even trust the compiler, let alone a transhuman superintelligence's source code.

    Edit: Beyond that, your post demonstrates an additional naivete in holding that a superintelligence would even have source code. Today's ML models consist of thousands or millions of node weights, and the source code is uninteresting. A superintelligence would almost certainly be shifted even further toward data and away from code.

  • barrkel 7 years ago

    Where do you put in the debug statements when the program is an iterative loop updating a matrix of probabilities and weights?

    • backpropaganda 7 years ago

      Inside the loop, multiply the matrix with vectors for which you know the answer, and keep making sure the answer is correct. Only when the tests pass should this matrix's products be used by any other API.
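
      As a concrete sketch of that (purely illustrative; the toy linear task, the update rule, and the tolerance are all my own assumptions): keep a handful of probe vectors whose correct answers come from a trusted reference, re-check the matrix against them on every iteration, and only treat it as usable while those checks pass.

          import numpy as np

          rng = np.random.default_rng(0)
          A = rng.normal(size=(8, 8))                    # the "true" linear map the task needs

          def reference(x):
              return A @ x                               # trusted source of known answers

          probes = [rng.normal(size=8) for _ in range(5)]
          expected = [reference(p) for p in probes]

          def passes_probes(W, tol=1e-3):
              # multiply the current matrix by vectors whose answers we already know
              return all(np.allclose(W @ p, e, atol=tol) for p, e in zip(probes, expected))

          W = np.zeros((8, 8))
          safe_to_use = False
          for step in range(20_000):
              x = rng.normal(size=8)
              W -= 0.01 * np.outer(W @ x - reference(x), x)   # toy iterative update rule
              safe_to_use = passes_probes(W)                  # gate any downstream use on the tests

          print(safe_to_use)   # True once W has converged to the reference map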

      • Houshalter 7 years ago

        "My neural network correctly predicts what actions will lead to the production of more paperclips. I tested it with all this test data! It's completely bug free! Let's put it in charge of a paperclip factory."

        6 months later: The AI successfully grey goos the Earth into trillions of paperclips.

      • barrkel 7 years ago

        If the matrix has been pre-weighted and designed by iteratively applied intelligences, each improving successively on the last, there's no guarantee we'll be able to make out anything other than trillions of numbers - and they will be in the trillions, at the very least.

        We'll do this because the jobs we want the program to do will be too complex for us to write, so we'll write programs that can write the programs. We won't understand the result directly.

        This is of course the usually accepted route to the singularity.

        • backpropaganda 7 years ago

          Since we know from previous experience that untested software never works and fails in unexpected places, we've decided not to abandon centuries of testing practices for our advanced matrix-vector product software either. As before, we test it thoroughly, make sure the testing and real-world distributions match almost surely, and only when the tests pass do we let the software use the API.

          • mcguire 7 years ago

            Oh, my sweet summer child! You have no idea how modern AI works.

            If it looks like it produces the right answers, it's done. If you ask why it fails on this case, you get a shrug. If you want to know if it will keep working, you get a blank look, like you are speaking Etruscan. If you ask, "So, how does it work?" you get a paragraph on the basics of matrix multiplication.

          • the8472 7 years ago

            > centuries of testing practices

            Which can be summed up as "it compiles, ship it!"

          • p1esk 7 years ago

            As an ex-test engineer of telecommunication software at a successful company, I can say that you are very out of touch with reality.

      • IanCal 7 years ago

        How do you ensure it hasn't updated its weights in such a way as to correctly answer the tests but do something else you don't want?

        • backpropaganda 7 years ago

          It doesn't know that the vectors are tests. We don't encode a special "this is a test" bit in the vector when we test. There's no way for the matrix to figure out that it's being multiplied by a test vector. This is just testing 101.

          • yorwba 7 years ago

            When I test software, I also don't tell it that it's running a test. It's not even intelligent, and yet it manages to misbehave in all kinds of ways that haven't been caught by the tests.

            Suppose you are working on a superintelligent AI, and your tests catch it destroying the contents of its sandbox. You fix that bug and move on to the next failing test, fix that too, and so on. When you run out of tests, you write more and fix those that fail, and so on. Eventually you can't come up with any failing tests anymore.

            When that happens, is the AI safe? I don't know. It passes all the tests, but all the previous versions had some bugs, so who is to say that this version is correct? Maybe your test scenarios just aren't realistic enough. Maybe they are realistic enough, but you haven't been looking hard enough for undesirable behavior.

            To run a superintelligent AI with any significant access to the outside world, I would like to see much stronger correctness guarantees than just a test suite, because tests can only show the presence of bugs, never prove their absence (paraphrasing Dijkstra).

      • FeepingCreature 7 years ago

        Once the program figures out what vectors you're testing, it'll adjust just those vectors to keep them truthful...

        • backpropaganda 7 years ago

          This is a deterministic program we're talking about. Once it behaves correctly in the test vectors, it has to behave correctly with real vectors. There is no distributional difference between test vectors and vectors from the API.

          Reply to Smaug123: The brain is mostly deterministic. Quantum-level nondeterminism has very low probability to matter much.

          The brain is likely to be very debuggable. Even today, we continue making progress figuring out more and more things about it. If we had designed the brain, we would know its functioning almost entirely.

          You know that we can understand smaller animal brains very well now, right? There are projects going on right now which aim to perfectly simulate a worm's brain. We didn't even design the worm's brain; we would design the AI, know the principles of its evolution, and have unlimited access to all its internals (which we don't have with the human brain).

          Reply no. 2:

          Since you guys down-voted me for disagreeing with your prophet, HN rate-limited my ability to comment.

          • Smaug123 7 years ago

            It's quite a lot easier if you reply to a comment using the reply features of HN; that way I don't have to check the parent comments of my own comments.

            "Knowing the functioning of something" is nothing like the same as understanding why it does what it does. I work on a large operating system, which was designed by humans; it takes many days to find the root cause of bugs. I have the best access it's possible to get to that operating system, and it still takes a large amount of time to even detect that there has been a bug, let alone determine why it happened and fix it. That amount of time to detect a bug could be lethal in the case that the bug arose in a superintelligent being.

          • Smaug123 7 years ago

            "Since you guys downvoted me for disagreeing with your prophet, …"

            That deserves a downvote. Please consider that people might have downvoted you because they believe that what you're saying is wrong; not for any meta-reason.

          • random_comment 7 years ago

            > The brain is mostly deterministic. Quantum-level nondeterminism has very low probability to matter much.

            Sources, please?

            At an atomic level, events are not deterministic.

            At a molecular level, events are not deterministic.

            Perhaps you are referring to parts of the brain that are not made of atoms?

            Personally, I suspect that the brain is entirely non-deterministic in terms of the material it is made of and fine-grained behaviour [because physics and chemistry support that idea].

            But I suspect the brain's structure does something to offset this at the larger scale, and make it more predictable/deterministic (averaging, wisdom of crowds, etc). However, I do not assert the latter as fact, merely my own speculation.

            • mcguire 7 years ago

              At an atomic level, events are not deterministic.

              At a molecular level, events are not deterministic.

              Therefore, we all behave randomly. Qed.

              • random_comment 7 years ago

                > At an atomic level, events are not deterministic.
                > At a molecular level, events are not deterministic.
                > Therefore, we all behave randomly. Qed.

                This doesn't follow. The combination of many random events can be highly predictable.

                Tossing a fair coin is random. Combining 10000 coin tosses will give you a result close to 5000 heads and 5000 tails, and I can predict you won't get 4000 heads and 6000 tails or 100 heads and 9900 tails.

                So there are various kinds of randomness, and the more you average out uncorrelated events the more you get something very close to a predictable result.

                Example via A. Einstein:

                http://userpages.umbc.edu/~dfrey1/ench630/ein_ran_walk.pdf
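
                A quick simulation of the 10,000-toss claim above (illustrative only; the exact count varies run to run):

                    import random

                    heads = sum(random.random() < 0.5 for _ in range(10_000))
                    print(heads)   # almost always within ~150 of 5000 (about 3 standard deviations)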

              • Simon_says 7 years ago

                It's controversial at best that atomic-level physics is non-deterministic.

                • mcguire 7 years ago

                  http://scienceblogs.com/interactions/2007/06/08/on-quantum-m...

                  "The probabilities are what they are, no matter what we want, no matter what we visualize, no matter what we meditate upon while imbibing inspirational substances. It doesn’t matter how much you want that electron to be spin up, it doesn’t matter how many good “vibes” you give the electron about being spin up, it always has exactly a 1/2 chance of being spin up, a 1/2 chance of being spin down."

                  • Simon_says 7 years ago
                    • FeepingCreature 7 years ago

                      Even if the evolution rule is deterministic, it's still epistemically/indexically nondeterministic from the perspective of the observer.

                      Quantum events in the brain would decohere almost immediately, leading to effectively nondeterministic behavior at any scale above the cellular. The philosophical determinism of wavefunction realism (which is the LW argument) is irrelevant to the actual behavior.

          • Smaug123 7 years ago

            In your opinion, is there even the slightest chance that the brain is deterministic in its operation? If so, how debuggable do you think the brain is, even in principle and given access to nanotechnology to monitor its state?

edem 7 years ago

I posted this to HN a year ago, but it didn't make it to the front page. I guess the audience was not ready for the story back then.

  • baq 7 years ago

    You just lost at the new article queue lottery.

  • edem 7 years ago

    Why all the downvotes?

    • danielvf 7 years ago

      (I didn't downvote you.)

      Probably because your comment adds nothing to the discussion of the article or subject.