kurtisc 5 years ago

Memristors are so much more interesting than the transistor replacements they're sometimes touted as. It feels as if they've been willed into existence just to make the mathematics of electronics more complete.

  • twtw 5 years ago

    While interesting, I don't see how they make circuit theory more complete.

    > The ideal memristor was initially and abstractly formulated as the flux–charge relationship dΦ = M dq (Eq. 2), which, in order to be invertible, must have M defined either in terms of Φ, or q, or it must be a constant. The latter case coincides with the resistor, while if M is defined in terms of Φ ... [emphasis mine]

    From my reading, this means if M is constant (like R,L,C are for "standard" components) it's just a resistor - not the miraculous "missing fourth."

    Curious to get other perspectives on this.

    • ecarrick 5 years ago

      The key is the second 'or' in: "in terms of Φ, or q, or it must be a constant". You're correct that with a constant M the flux is linear in charge, and therefore the device is a resistor. But M doesn't have to be constant: if it's defined in terms of Φ or q, the relationship is no longer linear and the device behaves differently from a resistor - similar, but not the same. Hence the 'ristor' in the name.

      To answer your question: memristors cover the cases where M is not constant and the flux-charge relationship isn't linear, tying up the loose end that resistors leave in the theory.
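
      If it helps, here's a toy numerical sketch (my own, not from the paper; the linear M(q) and all parameter values are made up) of a charge-controlled memristor driven by a sine-wave current. With M constant the last line is just Ohm's law; with M(q) the voltage depends on the charge history, which is the "memory" part:

        # Toy charge-controlled memristor: M(q) grows linearly with the
        # accumulated charge q (clipped to [0, q_max]). All values made up.
        import math

        R_on, R_off, q_max = 100.0, 16e3, 1e-4   # hypothetical device parameters
        dt, q = 1e-4, q_max / 2                  # time step, initial charge state

        for step in range(200):
            t = step * dt
            i = 1e-3 * math.sin(2 * math.pi * 50 * t)   # 50 Hz, 1 mA drive current
            q = min(max(q + i * dt, 0.0), q_max)        # q(t) = integral of i dt
            M = R_on + (R_off - R_on) * q / q_max       # state-dependent memristance
            v = M * i                                   # dPhi = M dq  =>  v = M(q) * i
            if step % 40 == 0:
                print(f"t={t:.4f}s  i={i*1e3:+.3f}mA  M={M:8.1f}ohm  v={v:+.3f}V")

      Sweep i(t) back and forth and you trace out the pinched hysteresis loop in the v-i plane instead of a resistor's straight line.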

  • shanxS 5 years ago

    I read eq. 1 and then eq. 2, and it almost felt like eq. 2 was somehow meant to be - maybe my mind was trying to draw on symmetry, but I didn't quite understand how.

  • agumonkey 5 years ago

    Heard that a few times indeed. Symmetry.

amelius 5 years ago

Looks like they are kind of expensive:

> Now you can buy one. Actually, you can buy eight in a 16-pin DIP package. It will, reportedly, cost $240 for the 16-pin DIP. That’s only $30 per memristor, and it’s the first time you can buy them.

From: https://hackaday.com/2015/07/02/new-part-day-memristors/

shanxS 5 years ago

Interesting quotes (to me) from the paper:

> Device variability (Section 4) and volatility (Section 4.2) were mentioned as current challenges for most applications based on memristive memories. This is in contrast with biological systems, which are not built in clean rooms, and it is hard to believe that evolution exploits ideal systems in reference to its own architecture design.

> For example, in the current digital computer, computation is carried out using strings of symbols that bear no physical relationship to the quantities of interest, while, in analog computers, the latter are proportional to the physical quantities used in the computation.

macawfish 5 years ago

I'm curious what any machine learning aficionados think about this.

  • Balgair 5 years ago

    They're not really 'machine learning' in the way it's used today: algos on giant n-tensors. It's more 'machine learning' in the bio sense: the leg moves up when I smack it with a hammer; keep smacking and it stops moving up so much.

    We've been able to make that sort of 'learning' device with digital electronics for a while now - heck, with analog electronics too. It's just that, via memristors, we've lowered the component count to near the minimum. Memristors have the same physics as a biological synapse (really waving hands here), and the 'learning' is now down to the same part count and energy scales. Actually, the energy usage is the most important thing. A computer takes a lot of watts to run something like a good chess program, but a brain takes ~1/100th the power to do the same thing. The power requirements of memristors are likely the most 'breakthrough' thing about them.
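
    To make the hammer/leg picture concrete, here's a toy sketch (my own illustration with made-up rates - not a real device model) that treats a memristor's conductance G as a synaptic weight: each stimulus pulse depresses it a bit, and it partially recovers between pulses, so the 'reflex' habituates:

      # Toy habituation: conductance G acts like a synaptic weight.
      G, G_min, G_max = 1.0, 0.1, 1.0
      depress, recover = 0.30, 0.05   # made-up per-pulse depression/recovery rates

      for pulse in range(1, 11):
          print(f"smack {pulse:2d}: response = {G:.2f}")   # 'leg kick' ~ G * stimulus
          G = max(G_min, G * (1 - depress))                # pulse depresses conductance
          G = min(G_max, G + recover * (G_max - G))        # partial recovery before next pulse

    The printed response drops with each smack - the 'stops moving up so much' behavior, with no algorithm involved, just device state.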

    • macawfish 5 years ago

      Do you think that some of the topologies being learned about and refined in digital machine learning research could eventually be implemented using these analog components?

      Or do those digital networks rely too much on very explicitly defined, specific activation functions that would be difficult to tease out of memristors?

      (I'd be willing to guess that it's not really an either/or situation... That much of what's being learned about digital neural nets will be useful in building analog ones, but that the messiness of electrical engineering will require some new tricks to get useful machine learning happening on the memristors.)

      • Balgair 5 years ago

        Yeah, it's not really either/or; it's more a question of how many decades you have.

        Memristive 'learning' is analog, by definition. Implementing modern ML algos on it is a very long way off, if they're at all useful by then. Think of it as trying to invent a modern image classification system in 1957, when your 'transistors' cost $5 apiece and are actually vacuum tubes. Oh, and things like compilers aren't even dreamed of yet. It's gonna happen, but after you retire.

        Specifically, you can't assume infinite impedance in the system; there are no 'bits'. You have to characterize the whole circuit around the 'memory' part and can't partition it off like you would with a normal transistor. After all, memristors kind of are a sneak circuit.

        That said, there are no bit-length issues in analog designs, so in theory they should be continuous/infinite in their 'resolution'. Generally, memristive learning should have near-infinite 'fidelity' but will likely take a lot of time to train. Think of this type of memristive learning as a vinyl record: the sound is incredible, nearly perfect, but not very portable and hard to reproduce. Current ML stuff is more like an mp3 file: not a perfect representation of the sound, but small, cheap, and good enough.

        So, in this view, running analytics on the mp3 files is fine: you can use ML algos on them, point some Python at them, and Bob's your uncle. But trying to do the same thing on vinyl isn't going to happen. Sure, the math is the same-ish, but the practicalities are totally different; you'd need an army of undergrads just to sort the records and set them on physical turntables. That's the level of difference we're talking about here.

        • macawfish 5 years ago

          Interesting! Your vinyl analogy is helpful in a few ways. For one, training memristive networks wouldn't be parallelizable in the same ways that training state-of-the-art digital networks can be. Right now, you can't just instantly spin up thousands of instances of some memristive hardware without a lot of physical infrastructure. Of course, if we start seeing programmable circuits with vast numbers of programmable memristors, maybe this will change.

          Also: I would have assumed (naively) that the memristive networks could be "faster", but now I'm realizing that this is going to be a function of how quickly the memristors transition. There are likely to be some complex timing issues in building analog neural networks that behave in remotely predictable ways. I could imagine that errors might compound in much stranger (more chaotic) ways than they would in digital networks, where "error" is pretty tame.
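
          (A toy way to see the difference, with made-up numbers: push a value through 20 gain stages, where the "analog" path picks up a little random gain noise at every stage and the "digital" path just gets rounded to 8 bits at every stage.)

            # Toy error-compounding sketch: analog noise vs. digital rounding.
            import random

            random.seed(0)
            x_analog = x_digital = 0.5
            for stage in range(20):
                x_analog *= 1.01 * (1 + random.gauss(0, 0.02))   # 2% gain noise per stage
                x_digital = round(x_digital * 1.01 * 256) / 256  # 8-bit rounding per stage
            print(f"analog: {x_analog:.4f}  digital: {x_digital:.4f}  ideal: {0.5 * 1.01**20:.4f}")

          The digital error stays bounded by the quantization step, while the analog drift compounds multiplicatively through every stage.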

          I imagine that the "real-time", asynchronous, and chaotic nature of analog neural networks could be seen, by the right person, in the right light, probably in the future, as a feature (not a bug). But I realize now that we're decades away from that.

          My hunch is that the early practical research will be done on FPGAs with onboard memristors.

  • Junk_Collector 5 years ago

    I suppose you might say that, "there is nothing new under the sun."

    Memristor functions have been implementable as active circuits for a long time, and as digital logic functions as well. The really nifty thing about modern memristors is that manufacturing technologies have reached the point where we can implement passive memristance components. But they don't add anything fundamentally new to the field of machine learning, except perhaps the potential for some more efficient ASICs.

    Attaching machine learning to something is a good way to get published right now because it is a hype field.