DevOfNull 15 days ago

If anyone wants to make their own: the free e-book Ray Tracing Gems II [1] covers real-time GPU ray tracing with modern APIs and hardware acceleration, and has a chapter on spectral rendering (Chapter 42: Efficient Spectral Rendering on the GPU for Predictive Rendering).

[1] https://www.realtimerendering.com/raytracinggems/rtg2/

  • superb_dev 15 days ago

    I would also recommend Ray Tracing in One Weekend [1] if you want a very quick introduction, and Physically Based Rendering [2] if you want to then go really in depth.

    [1] https://raytracing.github.io/

    [2] https://pbrt.org/

    • jplusequalt 15 days ago

      Highly recommend Physically Based Rendering. PBRT is to the study of path tracers what Gravity is to the field of general relativity: wholly encompassing and rigorous.

  • kqr 15 days ago

    I can't take the time to fiddle with ray tracing (however much I'd want to!), but I skimmed the first half of that book, and alias sampling is a very elegant technique. Wish I had known about it earlier! It is useful in a far broader context than graphics.
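
    In case it helps anyone: here's a minimal sketch of Walker's alias method in Python, good for sampling any discrete distribution (wavelength bins, lights, whatever). Build is O(n), each sample is O(1); the weights below are arbitrary illustrative values:

        import random

        def build_alias_table(weights):
            # Walker/Vose alias method: O(n) build, O(1) sampling.
            n = len(weights)
            total = sum(weights)
            prob = [w * n / total for w in weights]   # scaled so the mean is 1
            alias = [0] * n
            small = [i for i, p in enumerate(prob) if p < 1.0]
            large = [i for i, p in enumerate(prob) if p >= 1.0]
            while small and large:
                s, l = small.pop(), large.pop()
                alias[s] = l                  # l's excess probability tops up s
                prob[l] -= 1.0 - prob[s]
                (small if prob[l] < 1.0 else large).append(l)
            for i in small + large:           # leftovers are 1.0 up to float error
                prob[i] = 1.0
            return prob, alias

        def alias_sample(prob, alias):
            i = random.randrange(len(prob))
            return i if random.random() < prob[i] else alias[i]

        # e.g. picking wavelength bins in proportion to a light's spectral power
        prob, alias = build_alias_table([0.1, 0.4, 0.3, 0.2])
        counts = [0, 0, 0, 0]
        for _ in range(100_000):
            counts[alias_sample(prob, alias)] += 1
        print(counts)   # roughly 10k / 40k / 30k / 20k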

dagmx 15 days ago

Some examples of spectral ray tracers:

Mitsuba is an open source research renderer with lots of cool features like differentiable rendering. https://www.mitsuba-renderer.org/

Maxwell has two spectral modes of varying accuracy. The more complex method is often used for optics. https://maxwellrender.com/

Manuka by Wētā FX is spectral and has been used in several feature films: https://dl.acm.org/doi/10.1145/3182161 and https://www.wetafx.co.nz/research-and-tech/technology/manuka

  • alkonaut 15 days ago

    If you want to look under the hood and find the big production ones too complex, I can recommend peeking at this one. It's a great example of just the bare minimum of a spectral path tracer: https://github.com/TomCrypto/Lambda

  • pixelpoet 15 days ago

    There's also Indigo Renderer and the open source LuxRender:

    https://indigorenderer.com

    https://luxcorerender.org

    • dagmx 15 days ago

      Note that Lux isn’t spectral anymore. The older version is though.

      • pixelpoet 14 days ago

        Funny, I added the original implementation of spectral rendering to Lux and then they took it out :D

        • nextaccountic 13 days ago

          Why did they remove it?

          • pixelpoet 13 days ago

            I have no idea (it wasn't exactly the same project, LuxCore vs Lux), and PBRT, on which Lux was based, later moved to full spectral rendering.

            I guess they feel they'd rather reduce register pressure a bit than support it.

lwander 15 days ago

Author here. Waking up to seeing this on the front page with all the wonderful comments made my day! Thank you for sharing and reading

zokier 15 days ago

Spectral rendering is imho a good example of how ray tracing in itself is not the end-game for rendering; it's more of a starting point. Occasionally I see the sentiment that with real-time ray tracing, rendering is a solved problem, but imho that's far from the truth.

Afaik most spectral rendering systems do not do (thin-film) interference or other wave-based effects, so that is another frontier. Reality has a surprising amount of detail.
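
To give a flavor of that frontier, here's a rough sketch of single-layer thin-film interference at normal incidence (the standard Airy-style reflectance formula); the film index and thickness are made-up illustrative values:

    import cmath

    def thin_film_reflectance(wavelength_nm, n_film=1.38, d_nm=300.0, n_sub=1.5):
        # Single film on a substrate at normal incidence:
        #   r = (r12 + r23*e^(2i*delta)) / (1 + r12*r23*e^(2i*delta))
        # where delta = 2*pi*n_film*d / wavelength is the phase picked up
        # crossing the film. Interference lives in that phase term.
        n_air = 1.0
        r12 = (n_air - n_film) / (n_air + n_film)   # air -> film Fresnel coefficient
        r23 = (n_film - n_sub) / (n_film + n_sub)   # film -> substrate
        delta = 2 * cmath.pi * n_film * d_nm / wavelength_nm
        phase = cmath.exp(2j * delta)
        r = (r12 + r23 * phase) / (1 + r12 * r23 * phase)
        return abs(r) ** 2

    # Reflectance oscillates with wavelength -- the source of soap-bubble colors.
    for wl in range(400, 701, 50):
        print(wl, round(thin_film_reflectance(wl), 4))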

  • Klaster_1 15 days ago

    The closer rendering comes to underlying physical principles, the more game engines will become world-simulation engines. Various engine parts commonly seen today will converge towards a common point where, for example, we'll observe less distinction between the physics and rendering layers. I wonder if this trend can be traced to some degree even today. Several orders of magnitude of compute growth later, we'll look upon current abstractions the same way we look at the 30-year-old state of the art: shaped by the technical limitations of yesteryear, and obvious in hindsight. Love the perspective this puts things into.

    • zokier 15 days ago

      I disagree. The goal for most games is not to simulate the real world accurately, but to be a piece of entertainment (or artwork). That sets different requirements than just "world simulation", both from mechanics point of view and from graphics point of view. So engines will for a long time be able to differentiate on how they facilitate such things; expressivity is a tough nut to crack and real-world physics gets you only so far.

      Even photorealism is a shifting target, as it turns out that photography itself diverges from reality; there is this trend of having games and movies look "cinematic" in a way that is not exactly realistic, or at least not how things appear to the human eye. But how scenes appear to human eyes is also a tricky question, as humans are not just simple mechanical cameras.

      • z3phyr 14 days ago

        Gameplay is directly influenced by the "feel" of the world. I am not strictly talking about photorealism, but (1) how the world reacts to input and (2) how consistent it is.

        Physics is not about real-world accuracy, but about how consistently stuff interacts (and its side effects, like illumination) in the virtual world. There will come a time when the physics engine becomes the rendering engine, just because there are infinite gameplay possibilities in such a craft.

  • Cthulhu_ 15 days ago

    It's like a fractal: the closer you look, the more details you notice affecting what you see. We're creeping toward 100% physically accurate rendering, but we'll probably never get to 100%; instead we'll just keep adding fractions.

sudosysgen 15 days ago

A great spectral ray tracing engine is LuxRender: https://luxcorerender.org/ (the older one, that is; the newer LuxCore renderer does not have full spectral support)

Beyond the effects shown here, there are other benefits to spectral rendering: if done using light tracing, it allows you to change the color, spectrum, and intensity of light sources after the fact. It also makes indirect lighting much more accurate in many scenes.

6mian 15 days ago

If you want to play with a ray tracing implementation, it's surprisingly easy to write one yourself. There's a great free book (https://raytracing.github.io/books/RayTracingInOneWeekend.ht...) or, if you know a bit of Unity, a very nice GPU-based tutorial (https://medium.com/@jcowles/gpu-ray-tracing-in-one-weekend-3...). The Unity version is easier to tinker with, because you have a scene preview and other GUI that makes moving the camera around so much easier. There are many implementations based on these sources if you don't want to write one from scratch, although doing so is definitely worth it.

I spent some great time playing with the base implementation: making the rays act as particles* that bend their path toward/away from objects, making them "remember" the last angle of bounce and use it on the next material hit, etc. Most of the results looked bad, but I still got some intuition about what I was looking at. Moving the camera by a notch was also very helpful.

A lot of fun, great for a small recreational programming project.

* Unless there's an intersection with an object: set the maximum length of the ray to some small amount, then shoot many rays around from that point, and for each hit apply something similar to the gravity equation. Of course this is slow and just an approximation, but it's easy, and you can implement a "black hole" type of object that will bend light in the scene (rough sketch below).
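
A toy sketch of one way to do the "black hole" part in Python: march the ray in short segments and nudge its direction toward the hole each step. Not real relativity, just an inverse-square pull; all constants (mass, step size, horizon radius) are made up for illustration:

    import math

    def norm(v):
        l = math.sqrt(sum(c * c for c in v))
        return tuple(c / l for c in v)

    def bend_ray(origin, direction, hole, mass=0.5, step=0.05, max_steps=400):
        # March the ray in short segments, bending its direction toward a
        # point mass each step with an inverse-square pull.
        pos, d = origin, norm(direction)
        for _ in range(max_steps):
            to_hole = tuple(h - p for h, p in zip(hole, pos))
            r2 = sum(c * c for c in to_hole)
            if r2 < 0.01:                    # swallowed by the "event horizon"
                return None
            pull = mass * step / r2          # inverse-square deflection strength
            unit = norm(to_hole)
            d = norm(tuple(dc + pull * uc for dc, uc in zip(d, unit)))
            pos = tuple(p + step * dc for p, dc in zip(pos, d))
            # ...test scene intersections along this short segment here...
        return pos, d                        # escaped; shade with the environment

    print(bend_ray((0.0, 0.0, -3.0), (0.05, 0.0, 1.0), hole=(0.0, 0.0, 0.0)))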

  • kragen 15 days ago

    when i wrote my very first ray tracer it didn't take me an entire weekend; it's about four pages of c that i wrote in one night

    http://canonical.org/~kragen/sw/aspmisc/my-very-first-raytra...

    since then i've written raytracers in clojure and lua and a raymarcher in js; they can be very small and simple

    last night i was looking at Spongy by mentor/TBC https://www.pouet.net/prod.php?which=53871 which is a fractal animation raytracer with fog in 65 machine instructions. the ms-dos executable is 128 bytes

    i think it's easy to get overwhelmed by how stunning raytraced images look and decide that the algorithms and data structures to generate them must be very difficult, but actually they're very simple, at least if you already know about three-dimensional vectors. i feel like sdf raymarching is even simpler than the traditional whitted-style raytracer, because it replaces most of the hairy math needed to solve for precise intersections with scene geometry with very simple successive approximation algorithms
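
    for example, here's a minimal sphere-tracing loop in python; the scene is a single unit sphere and all the constants are arbitrary:

        import math

        def scene_sdf(p):
            # signed distance to a unit sphere at the origin
            return math.sqrt(sum(c * c for c in p)) - 1.0

        def raymarch(origin, direction, max_steps=100, eps=1e-4, far=100.0):
            # sphere tracing: step forward by the distance to the nearest
            # surface; when that distance gets tiny, we've hit something
            t = 0.0
            for _ in range(max_steps):
                p = tuple(o + t * d for o, d in zip(origin, direction))
                dist = scene_sdf(p)
                if dist < eps:
                    return t          # hit
                t += dist
                if t > far:
                    break
            return None               # miss

        print(raymarch((0.0, 0.0, -3.0), (0.0, 0.0, 1.0)))  # ~2.0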

    the very smallest raytracers like spongy and Oscar Toledo G.'s bootsector raytracer https://github.com/nanochess/RayTracer are often a bit harder to understand than slightly bigger ones, because you have to use a lot of tricks to get that small, and the tricks are harder to understand than a dumber piece of code would be

    • dahart 15 days ago

      > when i wrote my very first ray tracer it didn't take me an entire weekend

      It’s just a catchy title. You can implement the book in an hour or two, if you’re uncurious, or a month if you like reading the research first. Also maybe there are meaningful differences in the feature set such that it’s better not to try to compare the time taken? The Ray Tracing in One Weekend book does start the reader off with a pretty strong footing in physically based rendering, and includes global illumination, dielectric materials, and depth of field. It also spends a lot of time building an extensible and robust foundation that can scale to a more serious renderer.

DiogenesKynikos 15 days ago

If you want to go all the way, you have to track not only the wavelength of each ray, but also its polarization and phase. The situations in which these properties actually matter for human perception are rare (e.g., thin films and diffraction gratings), but they exist.
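
For a sense of what tracking polarization involves: polarized ray tracers typically replace each scalar radiance with a 4-element Stokes vector, and each scattering event multiplies it by the material's 4x4 Mueller matrix. A minimal sketch; the ideal-linear-polarizer matrix is the textbook one, everything else here is illustrative:

    import math

    def linear_polarizer(theta):
        # Mueller matrix of an ideal linear polarizer at angle theta
        c, s = math.cos(2 * theta), math.sin(2 * theta)
        return [[0.5,     0.5 * c,     0.5 * s,     0.0],
                [0.5 * c, 0.5 * c * c, 0.5 * c * s, 0.0],
                [0.5 * s, 0.5 * c * s, 0.5 * s * s, 0.0],
                [0.0,     0.0,         0.0,         0.0]]

    def apply(mueller, stokes):
        return [sum(m * s for m, s in zip(row, stokes)) for row in mueller]

    # unpolarized light (I, Q, U, V) = (1, 0, 0, 0) through two crossed polarizers
    s = [1.0, 0.0, 0.0, 0.0]
    s = apply(linear_polarizer(0.0), s)           # half the power survives
    s = apply(linear_polarizer(math.pi / 2), s)   # crossed: everything blocked
    print([round(x, 6) for x in s])               # intensity ~0.0, as per Malus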

NBJack 15 days ago

The beauty of mathematics and physics in action. I wonder if some of the tweaks made for the sake of beauty could be useful in other kinds of visualization.

It also reminds me of a time when I was copying code from a book to make polyphonic music on an Apple II. I definitely got something wrong, because when I ran it, instead of harsh noise I ended up with an eerily beautiful pattern of tones. Whatever happy accident I made fascinated me.

mncharity 14 days ago

Perhaps create hyperspectral (>>3 channels) images? I was exploring using them to better teach color to kids by emphasizing spectra: showing the spectrum of the pixel under the mouse in an image [1], for example, to reinforce the association between colors and their spectra. But hyperspectral images are rare, and the cameras are traditionally expensive [2]. So how about synthetic hyperspectral images?

Perhaps a very-low-res in-browser renderer might be fast enough for interactively playing with lighting and materials? And perhaps render points of view for anomalous color vision, "cataract lens removed, can see UV" humans, dichromat non-primate mammals (mice/dogs), and perhaps tetrachromat zebrafish (a sketch of the observer-swapping idea below).

[1] http://www.ok.sc.e.titech.ac.jp/res/MSI/MSIdata31.html

[2] An inexpensive multispectral camera using time-multiplexed narrow-band illumination: https://ubicomplab.cs.washington.edu/publications/hypercam/
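
The nice part is that once you have per-pixel spectra, rendering for a different observer is just re-integrating against that observer's sensitivity curves. A toy sketch, with made-up Gaussian sensitivities standing in for measured cone fundamentals:

    import math

    # a pixel's spectrum: power sampled at 10 nm steps, 400-700 nm (made up)
    wavelengths = list(range(400, 701, 10))
    spectrum = [math.exp(-((wl - 580) / 40.0) ** 2) for wl in wavelengths]

    def gaussian_response(center, width=40.0):
        # stand-in for a real cone sensitivity curve
        return [math.exp(-((wl - center) / width) ** 2) for wl in wavelengths]

    def integrate(spectrum, response):
        # observer channel value: spectrum x sensitivity, 10 nm bin width
        return sum(s * r for s, r in zip(spectrum, response)) * 10

    # trichromat: three cone-like curves; dichromat (e.g. a dog): only two
    trichromat = [gaussian_response(c) for c in (560, 530, 420)]
    dichromat = [gaussian_response(c) for c in (555, 430)]

    print([round(integrate(spectrum, r), 1) for r in trichromat])
    print([round(integrate(spectrum, r), 1) for r in dichromat])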

  • sudosysgen 14 days ago

    It's possible to implement this efficiently using light tracing: the final value in the image is the (possibly transformed) sum of contributions from each light source, and since you have the spectrum of each light source, you can recover the spectrum of the pixel.

    Until you encounter significant dispersion or thin-film effects, that is; then you need to sample wavelengths for each path, so it becomes (even more of) an approximation.

    • Etherlord87 14 days ago

      This won't work, because intermediary objects filter the spectrum of the source light. Also, in some scenes so many lights contribute to a single pixel that it's cheaper to store the entire spectrum per pixel. Consider how the sky is a huge light that you can't store as a single light source, because different areas of the sky contribute differently; effectively, one sky light is equivalent to millions of point lights.

      • sudosysgen 12 days ago

        This actually does work and is implemented in a few renderers.

        You can obviate the issue of having too many lights by combining them into light groups. As for area lights with a different spectrum at different locations: you simply can't change the distribution after the fact.

        In the tracing pass, you can store the shading interactions, then apply them at the end, after the shading pass, across the spectrum of your light, onto brightness values that you save for each light (group).

        Changing the spectrum of the light after the fact, however, is going to be an approximation; changing the brightness is not.
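
        Concretely, the bookkeeping might look something like this sketch (one stored brightness per light group per pixel; a fully spectral variant would store one value per wavelength bin per group, and all names here are made up):

            width, height = 640, 480
            num_groups = 4   # lights combined into groups

            # tracing pass output: per pixel, one accumulated throughput
            # per light group, independent of the light's spectrum
            buffers = [[0.0] * num_groups for _ in range(width * height)]

            def splat(pixel, group, throughput):
                buffers[pixel][group] += throughput

            # after the fact: re-weight groups (dial a light up or down)
            # without retracing anything
            def resolve(pixel, group_gain):
                return sum(b * g for b, g in zip(buffers[pixel], group_gain))

            splat(0, 2, 0.8)
            print(resolve(0, [1.0, 1.0, 0.5, 1.0]))   # group 2 dimmed to half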

geon 15 days ago

I was thinking of implementing refraction in my distribution raytracer (stochastic, not parallel). https://en.m.wikipedia.org/wiki/Distributed_ray_tracing

I would randomly sample a frequency, calculate its color, and use it to modulate the ray color. I would have to scale the result by 3 to account for the pure refracted color being 1/3 of the brightness.
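
Something like this sketch, perhaps; the hat functions are a crude stand-in for proper CIE color-matching curves, and the pdf is uniform over 400-700 nm:

    import random

    def sample_wavelength():
        # uniform over the visible range; pdf = 1 / (700 - 400)
        return random.uniform(400.0, 700.0)

    def wavelength_to_rgb(wl):
        # crude hat functions standing in for real CIE color matching
        def hat(center, width=80.0):
            return max(0.0, 1.0 - abs(wl - center) / width)
        return (hat(610), hat(550), hat(465))

    def refracted_ray_color(wl, throughput):
        # each single-wavelength sample lights up roughly one of the three
        # channels on average, hence the compensating factor of 3
        rgb = wavelength_to_rgb(wl)
        return tuple(3.0 * throughput * c for c in rgb)

    print(refracted_ray_color(sample_wavelength(), throughput=0.9))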

  • dahart 15 days ago

    I think the term “distribution ray tracing” was a bit of a mid-point on the timeline of evolution from Whitted ray tracing to today’s path tracing? IIRC distribution ray tracing came from one of Rob Cook’s Siggraph papers. It’s probably worth moving toward path tracing as a more general and unified concept, and also because when googling & researching, it’ll be easier to find tips & tricks.

    Yes when combining spectral rendering with refraction, you’ll need to pick a frequency by sampling the distribution. This can get tricky in general, good to build it in incremental steps. True of reflections as well, but up to you whether you want to have frequency-dependent materials in both cases. There are still reasons to use spectral even if you choose to use simplified materials.

spacecadet 15 days ago

Lovely. Not sure if the author would agree... There was much to love and hate about the nascent "new aesthetic" movement, but this demonstrates the best of that genre.

dantondwa 15 days ago

I’d love to see more about the artworks the author shares at the end. The idea of creating renders of realities where light works differently from ours is fascinating.

raytopia 15 days ago

Does anyone know if someone has attempted real-time spectral rendering? I've tried finding information before but have never had any luck.

  • dagmx 15 days ago

    Real-time is unfortunately a sort of vague term.

    If you mean raster rendering pipelines, then I don't believe it's possible, because the nature of the GPU pipelines precludes it. You'd likely need to make use of compute shaders to create it, at which point you've just written a path tracer anyway.

    If you mean a path tracer, then real-time becomes wholly dependent on what your parameters are. With a small enough resolution, Mitsuba with Dr.JIT could theoretically render frames after the first one quickly enough to be considered real-time.

    However, the reality is that even in film, with offline rendering, very few studios find the gains of spectral rendering to be worth the effort. Outside of Wētā with Manuka, nobody else really uses spectral rendering. Animal Logic did for The LEGO Movie, but solely for lens flares.

    The workflow change to make things work with a spectral renderer, and the very subtle differences, are just not worth the large increase in render time.

Rayhem 14 days ago

I would be very interested in seeing an error comparison of this technique against something like a full-wave PDE or integral equation solver. Demonstrating that the (fast) raytracing solution converges on the (slow) full-wave solution as e.g. the number of rays goes up is a very powerful statement.

uoaei 15 days ago

Are there any good resources for spectral ray tracing for other frequencies of light, e.g. radio frequencies?

  • itishappy 15 days ago

    It's the same thing! What are you trying to do with it?

    One thing you'll run into is that there isn't a natural frequency-response curve for non-visible light, so you need to invent your own frequency-to-RGB function (false color).
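
    For example, something as simple as mapping the band of interest onto the hue wheel; the FM-radio band here is just an arbitrary choice:

        import colorsys

        def false_color(freq_hz, f_min=88e6, f_max=108e6):
            # map a frequency band onto visible hues, low end violet, high end red
            t = (freq_hz - f_min) / (f_max - f_min)   # 0..1 across the band
            t = min(max(t, 0.0), 1.0)
            return colorsys.hsv_to_rgb(0.75 * (1.0 - t), 1.0, 1.0)

        print(false_color(100e6))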

    Another thing is that radio waves have much longer wavelengths than visible light, so diffractive effects tend to be a lot more important, and ray tracing (spectral or otherwise) doesn't handle them well. Modeling diffraction is typically done using something like FDTD (see the 1D sketch after the link below).

    https://en.wikipedia.org/wiki/Finite-difference_time-domain_...
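
    The 1D version of the Yee scheme is tiny, for what it's worth. This is free space in normalized units with a Courant factor of 0.5; real solvers add materials, 2D/3D grids, and absorbing boundaries:

        import math

        n, steps = 200, 150
        ez = [0.0] * n    # electric field
        hy = [0.0] * n    # magnetic field

        for t in range(steps):
            # leapfrog update: H from the curl of E, then E from the curl of H
            for i in range(n - 1):
                hy[i] += 0.5 * (ez[i + 1] - ez[i])
            ez[100] += math.exp(-((t - 30) / 10.0) ** 2)   # soft Gaussian source
            for i in range(1, n):
                ez[i] += 0.5 * (hy[i] - hy[i - 1])

        print(max(ez), min(ez))   # two pulses propagating out from the source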

  • magicalhippo 15 days ago

    What are the applications you have in mind?

    I'm no RF guy, but I imagine you'll quickly have to care about regimes where the wavelike properties of EM radiation dominate, in which case ray tracing is not the right tool for the job.

philsnow 15 days ago

> I’ve been curious what happens when some of the laws dictating how light moves are deliberately broken, building cameras out of code in a universe just a little unlike our own. Working with the richness of the full spectrum of light, spectral ray tracing has allowed me to break the rules governing light transport in otherworldly ways.

This reminds me of diagnosing bugs while writing my own raytracer, and attempting to map the buggy output to weird/contrived/silly alternative physics

pseudosavant 14 days ago

I'd love to understand the performance implications of modeling a spectral distribution instead of an RGB pixel for ray tracing.

  • dahart 14 days ago

    There’s more than one way to implement spectral rendering, and thus multiple different trade offs you can make. Spectral in general is a trade for higher color & material accuracy at the cost of compute time.

    All else being equal, if you carry a fixed-size power spectrum of more than 3 elements along with each ray, instead of an rgb triple, then you really might see up to an n/3 perf hit. For example, using 6 wavelength channels can be up to twice as slow as an rgb renderer. Whether you actually experience the n/3 slowdown depends on how much time you spend shading versus tracing the ray, i.e., traversing the BVH. Shading will be slowed down by spectral, but scene traversal won't, so check Amdahl's law.
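
    Back-of-envelope, with f as the fraction of frame time spent shading (the numbers below are assumptions, not measurements):

        def spectral_slowdown(shading_fraction, channels, rgb_channels=3):
            # Amdahl-style estimate: only the shading part of the frame
            # scales with the channel count; traversal is unaffected
            f = shading_fraction
            return (1 - f) + f * channels / rgb_channels

        print(spectral_slowdown(0.5, 6))   # 50% shading, 6 channels: 1.5x, not 2x
        print(spectral_slowdown(1.0, 6))   # all shading: the full n/3 = 2x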

    Problem is, all else is never equal. Spectral math comes with spectral materials that are more compute intensive, and fancier color sampling and integration utilities. Plus often additional color conversions for input and output.

    Another way to implement spectral rendering is to carry only a single wavelength per ray path, like a photon does, and ensure the lights’ wavelengths are sampled adequately. This makes a single ray faster than an rgb ray, but it adds a new dimension to your integral, which means new/extra noise, and so takes longer to converge, probably more than 3x longer.

Hammershaft 15 days ago

That animated artwork at the end is incredible. Thank you for the technical write up and the artwork!