avaku 2 months ago

Thanks for posting this. This looks like something genuinely new. Going to look into it.

  • DoctorOetker 2 months ago

    I haven't read the referenced 2017 paper yet, but mapping the training data to noise (Gaussian and/or other) is exactly what the RevNet paper does, with the advantage of deterministic reversibility, such that the trained RevNet is also generative (without having to run gradient descent for each generated image).
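
    Roughly, the reversible trick looks like this (a toy NumPy sketch of one additive coupling step, the building block that RevNet-style reversible nets stack; all names and shapes here are made up for illustration, not from the paper):

        import numpy as np

        # Toy additive coupling step: split the input in half and transform
        # only one half. The inverse is then exact, so encoding data -> noise
        # and decoding noise -> data are both a single deterministic pass.
        rng = np.random.default_rng(0)
        W = rng.normal(size=(2, 2))        # stand-in "network" inside the coupling
        f = lambda h: np.tanh(h @ W)       # f can be anything; invertibility is free

        def forward(x1, x2):               # data -> latent ("noise") space
            return x1, x2 + f(x1)

        def inverse(z1, z2):               # latent -> data, no gradient descent
            return z1, z2 - f(z1)

        # Exact round trip on a "data" batch:
        x1, x2 = rng.normal(size=(4, 2)), rng.normal(size=(4, 2))
        z1, z2 = forward(x1, x2)
        r1, r2 = inverse(z1, z2)
        assert np.allclose(x1, r1) and np.allclose(x2, r2)

        # Generation: sample Gaussian latents and invert them in one pass.
        g1, g2 = inverse(rng.normal(size=(4, 2)), rng.normal(size=(4, 2)))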

    • dtjohnnyb 2 months ago

      The intro to the paper has a nice comparison to other similar methods (generative and non-generative), and the blog post linked in this article by inFERENCe, https://www.inference.vc/unsupervised-learning-by-predicting..., has a nice comparison at the end to different unsupervised methods and where this method adds novelty (or doesn't!)

      • DoctorOetker 2 months ago

        >has a nice comparison at the end to different unsupervised methods

        I don't see the comparisons at the end of the inFERENCe link?

  • smittywerben 2 months ago

    > looks like something genuinely new

    I fear those words

a008t 2 months ago

Can someone please ELI5 what this does and why/where it can be useful?

  • tempodox 2 months ago

    Don't trust any machine learning algorithm that you haven't faked yourself. You can make random noise mean anything you want.

  • cgearhart 2 months ago

    His stated purpose was image compression (although I didn’t see evidence that it worked). If the model that encodes the distribution is smaller than your image, then you can send the small set of model parameters (instead of the image) and use the model to reconstruct the target image.
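
    The back-of-the-envelope version of that tradeoff (all sizes below are illustrative, not numbers from the post):

        # Compare sending the raw image vs. the parameters of a model that
        # can regenerate it; the scheme only pays off if the parameter
        # payload is smaller than the image itself. Sizes are hypothetical.
        image_bytes = 512 * 512 * 3            # e.g. a 512x512 RGB image
        n_params = 100_000                     # hypothetical decoder size
        param_bytes = n_params * 4             # float32 weights

        print(f"image: {image_bytes} B, params: {param_bytes} B")
        print("send params" if param_bytes < image_bytes else "send the image")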