spankalee 14 days ago

For cross-framework usage it'd be great if instead of being a render callback, this was distributed as an HTML custom element. Then you could easily use it in any declarative template system, even React/JSX.
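
Something like this, as a rough sketch (I'm assuming a hypothetical framework-agnostic renderWaveform(canvas, samples) export here, not the package's actual API):

  // Wrap a plain-JS render function in a custom element. "renderWaveform"
  // is hypothetical; substitute whatever the vanilla entrypoint exposes.
  import { renderWaveform } from "webgpu-waveform";

  class WaveformElement extends HTMLElement {
    static observedAttributes = ["src"];
    private canvas = document.createElement("canvas");

    connectedCallback() {
      this.appendChild(this.canvas);
    }

    async attributeChangedCallback(_name: string, _old: string | null, src: string) {
      // Fetch and decode the audio, then hand one channel to the renderer.
      const response = await fetch(src);
      const audio = await new AudioContext().decodeAudioData(await response.arrayBuffer());
      renderWaveform(this.canvas, audio.getChannelData(0));
    }
  }
  customElements.define("webgpu-waveform", WaveformElement);

Then <webgpu-waveform src="clip.wav"></webgpu-waveform> works in any template system, React included.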

I tried to make a web component here: https://lit.dev/playground/#gist=8cf935c3869bf4790653cd6fadf...

But the webgpu-waveform module isn't loading. Looks like you have a hard-coded import of "../../react@18.2.0/index.js", which isn't going to be a portable import even for React users. Non-React users shouldn't have to install React just to get the module to load.

Could you make two entrypoints, one for React, the other for plain JS?
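
Roughly like this in package.json (a sketch, not your actual file layout), with React as an optional peer dependency:

  {
    "exports": {
      ".": "./dist/index.js",
      "./react": "./dist/react.js"
    },
    "peerDependencies": { "react": ">=18" },
    "peerDependenciesMeta": { "react": { "optional": true } }
  }

Plain-JS users would then import "webgpu-waveform" and React users "webgpu-waveform/react", and only the latter pulls in React.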

  • aylmao 14 days ago

    ah, I made the dependency optional, but since it all goes through an index.ts it still attempts to load React, true.

    I'll find a way around this, or maybe split the package into a React version and a vanilla version. Thanks for the feedback! I think you're right, a custom element would be nice to have.

  • llamaLord 14 days ago

    I mean, in a pinch you can just run a build to compile it down to a single JS bundle with zero external dependencies and then mount it into your app.

    Not ideal, but works.

    • spankalee 14 days ago

      Build systems are not available everywhere, like in many online playgrounds. There's no reason not to have a browser-loadable package by default these days.

PaulDavisThe1st 14 days ago

As a side note: Ardour doesn't use a GPU for waveform rendering, and along the way we discovered that "find the max + min values for this chunk of samples" is more or less the most expensive operation in the prepare-to-render step (so much so that we cache these values on disk). We got a notable performance improvement from using SIMD instruction sets for this computation.
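
For reference, the scalar shape of that computation is roughly the following (Ardour's real code is C++ with SIMD; this TypeScript sketch only shows what gets vectorised and cached):

  // One (min, max) pair per chunk of samples: the hot prepare-to-render loop.
  function chunkPeaks(samples: Float32Array, chunkSize: number): Array<[number, number]> {
    const peaks: Array<[number, number]> = [];
    for (let start = 0; start < samples.length; start += chunkSize) {
      let lo = Infinity, hi = -Infinity;
      const end = Math.min(start + chunkSize, samples.length);
      for (let i = start; i < end; i++) {
        if (samples[i] < lo) lo = samples[i];
        if (samples[i] > hi) hi = samples[i];
      }
      peaks.push([lo, hi]);
    }
    return peaks;
  }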

  • armadsen 14 days ago

    I’ve seen the same in my own waveform rendering systems over the years. SIMD comes in handy for a lot of audio-related things, of course.

xipix 14 days ago

Why is it that every audio waveform renderer looks super aliased? I also recently made a fast audio waveform renderer using the best bits of Wasm and WebGL to create an animatable waveform that looks a bit less 1990s: https://play.google.com/store/apps/details?id=com.parabolare...

Happy to open/share the code, just haven't had time.

PS (yours didn't work for me in Safari)

  • MadDemon 14 days ago

    Since the number of pixels on the x-axis of the canvas is typically much smaller than the number of samples, you have to reduce the number of samples. This is very prone to aliasing if not done right, especially if you end up just drawing a line between the points. I found that a good way to avoid aliasing is to take the min/max values for each chunk of samples and then fill the area in between, rather than drawing lines between points. If you zoom in to the point where each window covers only a few samples, this converges to the same result as just drawing lines between the samples. You can test it out by uploading audio files to our audio-to-midi web demo: https://samplab.com/audio-to-midi
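
    In code, the idea is roughly this (a sketch of the technique, not Samplab's actual implementation):

      // One filled vertical span per pixel column, from the chunk's minimum
      // sample to its maximum, instead of many line segments.
      function drawMinMax(ctx: CanvasRenderingContext2D, samples: Float32Array) {
        const { width, height } = ctx.canvas;
        const samplesPerPixel = samples.length / width;
        ctx.clearRect(0, 0, width, height);
        for (let x = 0; x < width; x++) {
          const from = Math.floor(x * samplesPerPixel);
          const to = Math.max(from + 1, Math.floor((x + 1) * samplesPerPixel));
          let lo = Infinity, hi = -Infinity;
          for (let i = from; i < to && i < samples.length; i++) {
            if (samples[i] < lo) lo = samples[i];
            if (samples[i] > hi) hi = samples[i];
          }
          // Map amplitude in [-1, 1] to canvas rows and fill the span. As the
          // window shrinks to a few samples, this converges to a line plot.
          const yTop = ((1 - hi) / 2) * height;
          const yBot = ((1 - lo) / 2) * height;
          ctx.fillRect(x, yTop, 1, Math.max(1, yBot - yTop));
        }
      }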

    • xipix 14 days ago

      My point is that everyone seems to draw waveforms using only two colours. Inevitably, this results in aliasing.

      In the extreme we have the "Spotify" visualisations of vertical bars with gaps in between. I believe this is popular because it looks slightly better than a solid waveform lump with an aliased edge.

      To avoid aliasing you need to use more than two pixel colours.

      • chrisjj 13 days ago

        > To avoid aliasing you need to use more than two pixel colours.

        The aliasing in question is of the audio, not the pixels, so more colours do not help.

        • xipix 13 days ago

          Agreed, there is aliasing of the audio. But also aliasing in the way that the waveform is rendered.

          Consider: would you draw a line, or a circle using only a foreground and a background pixel colour?

          • chrisjj 13 days ago

            > would you draw a line, or a circle using only a foreground and a background pixel colour?

            That's 99% simple pixelation, not aliasing. And far less of a problem than the true aliasing in question.

            • xipix 13 days ago

              Mathematically it's aliasing. And it's fixed by antialiasing. I'll bet everything on the screen you're looking at right now is antialiased. Would be nice if audio waveform visualisations were too.

              • chrisjj 13 days ago

                > Mathematically it is aliasing.

                Not by e.g. Wikipedia.

                https://en.m.wikipedia.org/wiki/Aliasing

                • kroltan 13 days ago

                  Sure it is, it's the first thing said right after the title and widgets:

                  > This article is about aliasing in signal processing, including computer graphics.

                  In computer graphics, the relevant aliasing is spatial aliasing, in fact mentioned in the article: the signal is the fundamental shape (such as a font glyph or a triangle mesh or whatever), and the samples are the pixels.

                  In the specific application of a waveform, a typical "CD quality" audio file has 44.1 thousand samples per second and, say, 16 bits per sample. If we want to display the waveform of one second of audio horizontally on an entire standard low-density full HD computer screen, we have 1920 samples into which to fit our 1 second of audio data, and 1080 levels with which to paint the amplitude.

                  Putting it into signal processing terms, the signal frequency here is 44.1kHz and the sampling frequency is 1.92kHz. Do you see how aliasing applies now? We want to represent f_Signal / f_Sample = 22.96875 samples of audio with 1 sample.

                  In practice you get an even worse ratio, because we usually want more than 1 second of waveform to be visible on a region that isn't the entire screen.

                  • chrisjj 13 days ago

                    > the signal is the fundamental shape (such as a font glyph or a triangle mesh or whatever)

                    No. The signal components being aliased are frequencies e.g. repeating patterns.

                    "aliasing is the overlapping of frequency components resulting from a sample rate below the Nyquist rate."

                    That is why the example is a brick wall and the result is moiré banding. Nothing like your shapes and jaggies.

                    What you've mistaken for aliasing is simply pixellation.

                    • kroltan 13 days ago

                      These are the same thing. A shape with a solid boundary is a signal with a discontinuous step: if you Fourier it, it has infinitely many nonzero terms, so you can't represent it exactly with any finite number of frequencies, and therefore with any finite number of samples.

                      In the case of Moiré patterns in pictures, we have lines in the real world that have to be represented by pixels sampled below the Nyquist rate of those lines. The Moiré effect in pictures is just the interference pattern caused by this aliasing.

                      If you look at just a column of the image, and imagine the signal as the brightness varying over the Y coordinates, you can picture the mortar as an occasional regular pulse; when your sampling rate (the pixel density) isn't enough, you get aliasing: you skip over, or overrepresent, the mortar-to-brick ratio, variably along the signal.

                      https://imgur.com/a/BiZcxG5

                      Now if you look at the graph in that picture, doesn't that look awfully similar to what happens if you try to sample an audio file at an inferior rate for display purposes?

                      In fact, try it right now: download Audacity, go to Generate > Tone, click OK with whatever settings (it's fine), press Shift+Z to go down to sample-level zoom, then start zooming out. Eventually, you'll see some interesting patterns, which are exactly the sort of aliasing caused by resampling that I'm talking about:

                      https://i.imgur.com/bX2IFp8.png

                      • chrisjj 12 days ago

                        > you'll see some interesting patterns, which are exactly the sort of aliasing caused by resampling I'm talking about

                        I see this and agree. This is true aliasing. I believe this is the OP's "super aliased" look.

                        However I disagree that this is "the same thing" as the jaggies that can be avoided by more colours.

                        This cannot be avoided by more colours. It can be avoided only by an increased resampling rate.

                        • kroltan 11 days ago

                          How do we add more colours (besides just picking a random colour, which wouldn't be helpful)?

                          By sampling the signal more often ("multi-sample anti-aliasing"), also known as increasing the resampling rate, and then representing the result with a wider bit depth: not just 1 bit of "yes/no", but multiple bits forming a colour/opacity. We already have more than 1 bit per pixel available for this.

                          I'll give you that this is "anti-aliasing", not "not having aliasing in the first place", but the Fourier argument above is the reason why in computer graphics we practically always have to "settle for" AA instead.
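
                          Concretely, something like this (a sketch of the idea, not any particular renderer's code):

                            // Split each pixel column into sub-columns, take each sub-column's
                            // [min, max] span, and accumulate how much of every vertical pixel
                            // those spans cover. Coverage becomes alpha: more than two colours.
                            function columnAlpha(samples: Float32Array, start: number,
                                                 samplesPerPixel: number, heightPx: number,
                                                 sub = 4): Float32Array {
                              const alpha = new Float32Array(heightPx);
                              const step = samplesPerPixel / sub;
                              for (let s = 0; s < sub; s++) {
                                let lo = Infinity, hi = -Infinity;
                                const from = Math.floor(start + s * step);
                                const to = Math.min(start + (s + 1) * step, samples.length);
                                for (let i = from; i < to; i++) {
                                  lo = Math.min(lo, samples[i]);
                                  hi = Math.max(hi, samples[i]);
                                }
                                const top = ((1 - hi) / 2) * heightPx; // [-1, 1] -> pixel rows
                                const bot = ((1 - lo) / 2) * heightPx;
                                for (let y = 0; y < heightPx; y++) {
                                  const overlap = Math.min(y + 1, bot) - Math.max(y, top);
                                  alpha[y] += Math.max(0, Math.min(1, overlap)) / sub;
                                }
                              }
                              return alpha;
                            }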

      • breakfastduck 13 days ago

        It's much easier to read the information with the aliasing.

    • chrisjj 13 days ago

      > Since the number of pixels on the x-axis of the canvas is typically much smaller than the number of samples

      That's not the cause. The cause is simply that too few of the samples are plotted.

      Your min/max solution succeeds by ensuring all significant samples are plotted.

      • MadDemon 13 days ago

        If your canvas has, say, 2000 pixels on the x-axis and you're trying to plot one second of 44.1kHz audio, you'll end up with more than 20 samples per pixel. You can then either reduce the number of samples or draw multiple lines within that pixel. Both approaches can result in aliasing. OP's approach seems to just draw lines between every sample, i.e. the second option. If you change the "Scale" in the example, you can clearly see how peaks appear and disappear due to aliasing (especially between 200 and 400 frames/px).

        • chrisjj 13 days ago

          > You can then either reduce the number of samples or draw multiple lines within that pixel. Both approaches can result in aliasing.

          I would disagree that the latter can result in the "super aliased" look in question.

          Drawing one line per sample leaves very little aliasing.

          • MadDemon 13 days ago

            > Drawing one line per sample leaves very little aliasing.

            It's definitely gonna look better than just skipping samples, but you're also gonna draw a lot of unnecessary lines.

            • chrisjj 12 days ago

              Unnecessary lines are trivially avoided by the proposed min/max optimisation.

      • xipix 13 days ago

        The min/max solution doesn't prevent aliasing. Consider that if you do manage to avoid aliasing, you'll be rendering something visually akin to a downscaled version of a fully-detailed, high-resolution plot of the waveform.

        • MadDemon 13 days ago

          It renders a waveform that looks very similar to what you'd get if you drew a line between all points. If you have 100 samples per pixel and you draw lines between all of them, you'll essentially end up with a filled area that goes from the lowest sample to the highest. So it's practically the same thing as just taking the min and max and filling everything in between. The advantage is that you avoid all those lines in between. If you zoom in, the signal changes very smoothly, without those aliasing effects where peaks appear and disappear. The web demo currently doesn't allow zooming in, so you can't test it there, but if you download the whole Samplab app (https://samplab.com/download), you can try out what it looks like when zooming in.

          • chrisjj 12 days ago

            A great explanation.

        • chrisjj 13 days ago

          Min/max does prevent the "super aliased" result in question. I'd agree it leaves a little aliasing.

    • wiz21c 13 days ago

      why not low-pass filter / decimate the waveform to remove aliasing?

      • MadDemon 13 days ago

        You have to downsample by a lot (easily 10-100x), which would remove a lot of the information in the signal. If you low-pass filter, you essentially remove all the peaks and the signal looks much "quieter" than it actually is.

        • chrisjj 13 days ago

          Agreed. In the worst case, the audio is simply a high-frequency tone - which the LPF removes, leaving the plot showing silence.

          • wiz21c 12 days ago

            ahhh didn't see that one. Makes perfect sense! Thx for explaining.

  • aylmao 14 days ago

    Oh this looks great, I'd love to have a look if you have time to share it. In my case, I'm just not very experienced with graphics. I'm sure with some tweaking it could look better, but I thought this was good enough for a v1.

    Re Safari: Unfortunately WebGPU support is still WIP. I'll add a notice on the site haha. https://caniuse.com/webgpu

    • xipix 14 days ago

      Thanks. I picked a fixed number of horizontal pixels per second for an internally cached image. I think it was 30 pixels/second so that scrolling at 1x playback rate would be smooth on 60/90/120Hz screens. So it can't zoom into the waveform like yours, but I don't need it to zoom.

      There are two parts: first the C++/wasm that analyses the audio and generates a bitmap whose width is 30*duration and which is not very tall. It effectively draws a line from each audio sample to the next, with alpha, but this can be optimised like crazy, to the point where the time it takes is unnoticeable.

      The second part is the WebGL that takes the bitmap (as a texture) and renders it to the canvas. In principle this could be rotated, spun in 3D, mapped to a sphere or given any other crazy GPU manipulation. WebGL isn't too painful and works everywhere I tried, including the Android webview in my Bungee Timewarp app.
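
      The upload side of part two is basically this (a sketch; the quad and shader setup are omitted, and the names are mine):

        // Upload the pre-rendered waveform bitmap as a (likely non-power-of-two)
        // WebGL texture; the quad's texture coordinates then select the visible
        // window, which is what makes scrolling cheap.
        function uploadWaveform(gl: WebGLRenderingContext, bitmap: ImageBitmap): WebGLTexture {
          const tex = gl.createTexture()!;
          gl.bindTexture(gl.TEXTURE_2D, tex);
          gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, bitmap);
          // NPOT textures in WebGL 1 need clamping and no mipmaps.
          gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
          gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
          gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
          return tex;
        }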

  • tgv 14 days ago

    In Safari, you can enable it under Develop > Feature flags.

  • animuchan 14 days ago

    This is beautiful! I'd love to learn from the code.

  • emursebrian 13 days ago

    WebGPU only works in Chromium-based browsers.

SarahC_ 14 days ago

No appropriate GPUAdapter found.

I've turned it on in chrome://flags:

>WebGPU Developer Features Enables web applications to access WebGPU features intended only for use during development. – Mac, Windows, Linux, ChromeOS, Android, Fuchsia, Lacros #enable-webgpu-developer-features

Chrome is up to date: Version 123.0.6312.123 (Official Build) (64-bit)

RTX2060

  • FL33TW00D 14 days ago

    Doesn't work on Linux without more flags as of today.

  • aylmao 14 days ago

    Thanks for the detailed info!

Lockal 14 days ago

This needs a screenshot on GitHub; otherwise all I see is "No appropriate GPUAdapter found". Was WebGPU actually justified?

  • aylmao 14 days ago

    This and browser support info. Currently WebGPU is WIP in Safari and FF.

    https://caniuse.com/webgpu

    If you're using Chrome and getting this error, do share more info.

    As for justification, it was justified. Rendering waveforms is O(n) on pixels, and each pixel can be rendered independently. I'm sure one could get creative with caching images at different resolutions to implement zooming efficiently, but otherwise CPU rendering is noticeably slow.

    • Lockal 13 days ago

      I used the latest Chrome on an Android smartphone. I kind of expected WebGPU not to work there. As I understand it, WebGPU requires a compute-capable GPU, whereas WebGL falls back to software rendering via SwiftShader.

      • pjmlp 13 days ago

        It should work perfectly fine on Android 12 and later devices.

emadda 14 days ago

I did something similar with regular DOM nodes and css flex here:

https://bigwav.app

Every vertical line is a DOM element.

I was considering moving to GPU rendering, as the DOM approach can take a long time to re-render on changes.

  • aylmao 14 days ago

    The resulting style looks really cool. This would definitely be doable with the GPU. A halfway step could be to draw rects on a canvas: it should be faster than the DOM approach, but less complicated than writing a GPU shader.
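
    Something like this for the halfway step (a sketch; "peaks" is assumed to be pre-reduced to one normalised value per bar):

      // One fillRect per bar on a 2D canvas: faster than thousands of DOM
      // nodes, less involved than a shader.
      function drawBars(ctx: CanvasRenderingContext2D, peaks: Float32Array,
                        barWidth = 2, gap = 1) {
        const { width, height } = ctx.canvas;
        ctx.clearRect(0, 0, width, height);
        for (let i = 0, x = 0; i < peaks.length && x < width; i++, x += barWidth + gap) {
          const h = Math.max(1, peaks[i] * height); // peaks normalised to 0..1
          ctx.fillRect(x, (height - h) / 2, barWidth, h);
        }
      }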

serial_dev 14 days ago

Feedback: it would be great if, in the interactive demo on the example site, the waveform offset could be changed by scrolling/swiping left and right, and the scale by zooming in and out.

  • aylmao 14 days ago

    I think you're right. That's the way I use it in the app I built this for:

    https://twitter.com/aykev/status/1780000204122460484

    It's more code, so I said "later". I also wanted the first example to be easy to read so people wouldn't get lost in the source. But since it's such a common use-case I think I'll add it too. Thanks for the feedback!

emursebrian 13 days ago

It looks nice. If WebGPU is adopted by more browsers, it will probably be pretty useful for doing real-time visualizations of audio.

In Emurse, we render waveform and pitch plots of audio recordings using canvas. It's pretty fast, but ours are a bit lower-resolution. The most resource-intensive part of the operation is actually processing the audio.

lovegrenoble 14 days ago

No appropriate GPUAdapter found

  • knowaveragejoe 14 days ago

    I get the same on an M2 MacBook Air. Wonder if it's platform-specific.

    • dhosek 14 days ago

      Also on M3 MBP. I thought it might be a Safari limitation, but I try to avoid running Chrome as much as possible so I don’t know.

      • aylmao 14 days ago

        On my M1 MBP, Safari 17.4.1, it straight up doesn't work. Can I Use does say Safari only supports WebGPU in TP and behind a flag: https://caniuse.com/?search=webgpu

        Perhaps a Safari TP bug? I'd appreciate some browser version info so I can dig deeper.

        • teki_one 14 days ago

          Works for me on Safari 17.4 (Sonoma 14.4.1, M2)

          • aylmao 13 days ago

            Interesting. I'm assuming Apple is doing some sort of A/B test then, and WebGPU must be enabled for some people but not others. Huh.

            • dhosek 13 days ago

              It’s behind a flag. Enable the Develop menu if you haven’t, go to Develop|Feature Flags and you’ll find the setting to enable WebGPU (which is listed as “testing”).

  • littlestymaar 14 days ago

    Chromium Linux gets the same error here.

PaulDavisThe1st 14 days ago

> WebGPU not supported in this browser.

Pretty current Firefox on Linux. Oh well.

  • sigseg1v 14 days ago

    Looks like Firefox needs a `dom.webgpu.enabled` flag set in `about:config` to enable WebGPU support

    • K5EiS 14 days ago

      I'm just getting a "nil webgpu context" after setting dom.webgpu.enabled.

      • spuz 14 days ago

        Same here with Firefox on Windows.

pininja 14 days ago

Very cool! How do you like WebGPU compared to WebGL?

  • aylmao 14 days ago

    I prefer it! It's definitely more verbose, but as someone who doesn't usually do graphics, I find it fits my mental model a little better.

    I tried to build this using WebGL before WebGPU was publicly available, but WebGL is very tailored to certain workflows and I didn't make much progress. It really felt like I was using the wrong tool for the job. Maybe I'm just too much of a noob... though I did manage to build it with WebGPU after all haha.

webprofusion 14 days ago

Cool, but the example waveform looks wrong? It doesn't appear to oscillate evenly, as audio normally would.

8mobile 13 days ago

Hi, could you please show an image on your GitHub repo? Thank you!
