zanny 5 years ago

A lot of people are talking along the lines of "oh AMD is nice but... Nvidia".

No. In 2019, all AMD GPUs from this decade support OpenGL through 4.5 and support Vulkan; the one thing they still really don't have is a great OpenCL situation (ROCm is out of tree on every distro and still only supports parts of 2.0).

For gaming though, there's no reason not to get an AMD GPU. They are at near performance parity with Nvidia relative to their Windows performance, they work with the inbuilt drivers on every distro out of the box, and the only footgun to watch out for is that new hardware generally takes one Mesa feature-release cycle to stabilize after launch. You even get hardware-accelerated H.264 encoding and decoding (and VPx on some chips) via VA-API. All on top of the fundamental point that they are much more freedom-respecting than Nvidia.

Stop giving Nvidia your money so they can screw you over with it. CUDA, their RTX crap, G-Sync, PhysX, Nvidia "Gameworks", and much more are all anti-competitive, monopolistic, exploitative, user-hostile tactics meant to screw over competition and customers alike. Nvidia is one of the most reprehensible companies out there, among peers like Oracle. AMD isn't a selfless angel of a company, but when their products are competitive, and in many ways better (such as supporting Wayland), stop giving such a hostile business your money.

  • dragontamer 5 years ago

    > No. In 2019, all AMD GPUs from this decade support OpenGL through 4.5 and support Vulkan; the one thing they still really don't have is a great OpenCL situation (ROCm is out of tree on every distro and still only supports parts of 2.0).

    To be fair, only Intel has good OpenCL 2.0+ support. NVidia isn't really pushing OpenCL and AMD's ROCm OpenCL driver is still a bit unstable.

    AMD's OpenCL 2.0 support has always been poor. The Windows OpenCL 2.0 stack didn't have a debugger, for example; it really wasn't a good development environment at all. Only OpenCL 1.2 had a decent debugger, profilers, and analysis tools.

    Frankly, it seems like OpenCL 2.0 as a whole is a failure. All the code I've personally seen is OpenCL 1.2. Intel is pushing ICC / autovectorization, NVidia is pushing CUDA, and AMD was pushing HSA, and now maybe HCC / HIP / ROCm. No company wants to champion OpenCL 2.0.

    Video games are written with Vulkan shaders, DirectX shaders, or GLSL. They exist independently of the dedicated compute (CUDA / OpenCL) world.

    CUDA, OpenMP 4.5 device offload, and ROCm / HIP / HCC are the compute solutions that seem to have the best chances in the future... since OpenCL 2.0 is just kinda blah. AMD's ROCm stack still needs more development, but it does seem to be improving steadily.

    ----------

    I know everyone wants to just hold hands, sing Kumbaya and load SYCL -> OpenCL -> SPIR-V neutral code on everyone's GPUs, but that's just not how the world works right now. And I have my doubts that it will ever be a valid path forward.

    The hidden message is that CUDA -> Clang -> NVidia is a stack partially shared with HCC -> Clang -> AMD. So LLVM seems to be the common denominator.

    • pjmlp 5 years ago

      Khronos learned too late that the world has moved on and the community wanted to use something other than C to program their GPGPUs.

      And they are doing it again with Vulkan, with a large majority looking for higher-level wrappers instead of dealing directly with its API and its ever-increasing number of extensions.

      • geezerjay 5 years ago

        > and the community wanted to use something other than C to program their GPGPUs.

        This assertion makes no sense. The whole reason these APIs are specified in C is that once a C API is available, it's trivial to develop bindings in any conceivable language. You can't do that if you opt for a flavor of the month.

        Furthermore, GPGPU applications are performance-driven, and C is unbeatable in this domain. Thus, by providing a standard API in C and by enabling implementations to be written in C, you avoid forcing end users to pay a performance tax just because someone had an irrational fear of C.

        • dagw 5 years ago

          > Furthermore, GPGPU applications are performance-driven, and C is unbeatable in this domain.

          One of the reasons CUDA beat OpenCL was that enough people preferred C++ or Fortran to C and CUDA was happy to accommodate them while OpenCL wasn't.

          • geezerjay 5 years ago

            > One of the reasons CUDA beat OpenCL was that enough people preferred C++ or Fortran to C and CUDA was happy to accommodate them while OpenCL wasn't.

            Nonsense. We're talking about an API. C provides a bare-bones interface that doesn't impose any performance penalty, and just because the API is C, nothing stops anyone from implementing the core functionality in the best tool they can find.

            That's like complaining that an air conditioner doesn't cool the room as well as others just because it uses a type F electrical socket instead of a type C plug.

            • twtw 5 years ago

              > We're talking about an API

              I don't think we are. We are talking about languages.

              In CUDA, compute kernels are written in a language called "CUDA C++." In OpenCL 1.2, compute kernels are written in a language called "OpenCL C." Somebody else probably could have implemented (and likely did implement) a compiler for their own version of C++ for OpenCL kernels, but the point 'pjmlp was making is that the standard platform did not enable C++ to be used for kernels until long after it was available in CUDA.
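
              To make that concrete, here's the same trivial kernel in both dialects (an illustrative sketch, not code from either spec):

                // OpenCL C (1.2) version -- plain C, handed to the driver as
                // a string at runtime:
                //
                //   __kernel void saxpy(float a, __global const float *x,
                //                       __global float *y) {
                //       int i = get_global_id(0);
                //       y[i] = a * x[i] + y[i];
                //   }

                // CUDA C++ version -- compiled ahead of time, alongside the
                // host code:
                __global__ void saxpy(float a, const float *x, float *y) {
                    int i = blockIdx.x * blockDim.x + threadIdx.x;
                    y[i] = a * x[i] + y[i];
                }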

            • pjmlp 5 years ago

              Proper APIs can be defined via IDLs; C is not needed at all.

        • pjmlp 5 years ago

          Tell that to Khronos, which is now forced to support HLSL so that most studios will bother to port their shaders.

          Or to NVidia, which created a C++ binding for Vulkan instead of plain C, or AMD, which had to create higher-level bindings before the CAD industry would even bother to look into Vulkan.

          This blind advocacy for straight C APIs will turn Vulkan into another OpenCL.

        • bitwize 5 years ago

          Yeah, except in actual high-performance computing, almost nobody uses C. It's all C++ and Fortran.

        • dragontamer 5 years ago

          > Furthermore, GPGPU applications are performance-driven, and C is unbeatable in this domain.

          CUDA is clearly performance-driven, and it is a more mature C++ model.

          Template functions are a type-safe way to build different (but similar) compute kernels. It's far easier to use C++ templates, constexpr, and the like to generate constant code than to use C-based macros.

          In practice, CUDA C++ beats OpenCL C in performance. There's a reason why it is so popular, despite being proprietary and locked down to one company.
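
          As a sketch of that point: one templated CUDA kernel stamps out compiler-checked variants that OpenCL C could only approximate with macro tricks (illustrative code, not a benchmark):

            // One definition, many type-safe instantiations.
            template <typename T>
            __global__ void scale(T *data, T factor, int n) {
                int i = blockIdx.x * blockDim.x + threadIdx.x;
                if (i < n) data[i] *= factor;
            }

            // Each instantiation is checked by the compiler, unlike a #define:
            //   scale<float><<<blocks, 256>>>(d_floats, 2.0f, n);
            //   scale<double><<<blocks, 256>>>(d_doubles, 0.5, n);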

      • dragontamer 5 years ago

        Honestly, I think I can live with the C thing to program GPGPUs.

        The real issue IMO was the split source. CUDA's single-source model (and HCC's) means all your structs and classes work across the GPU and CPU.

        If you have a complex data structure to pass data between the CPU and GPU, you can share all your code on CUDA (or AMD's HCC). But in OpenCL, you have to write a C / C++ / Python version of it, and then rewrite an OpenCL C version of it.
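
        A minimal sketch of what single-source buys you (hypothetical struct, not anyone's real code):

          // One definition, visible to both the host and device compilers,
          // so the memory layout can never drift out of sync.
          struct Particle {
              float pos[3];
              float vel[3];
          };

          __global__ void step(Particle *p, float dt, int n) {
              int i = blockIdx.x * blockDim.x + threadIdx.x;
              if (i < n)
                  for (int k = 0; k < 3; ++k)
                      p[i].pos[k] += p[i].vel[k] * dt;
          }

          // Under OpenCL, the same struct has to be redeclared, field by
          // field, inside the kernel string passed to
          // clCreateProgramWithSource, and kept in sync by hand.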

        OpenCL C is driven by this interpreter / runtime compiler thingy, which just causes issues in practice. The compiler is embedded into the device driver.

        Since AMD's OpenCL compiler is buggy, this means that different versions of AMD's drivers will segfault on different sets of code. As in, your single OpenCL program may work on 19.1.1 AMD Drivers, but it may segfault on version 18.7.2.

        The single-source, compile-ahead-of-time methodology means that compiler bugs stay in developer land. IIRC, NVidia CUDA also had some bugs, but you can just rewrite your code to handle them (or upgrade your developers' compilers when the fix becomes available).

        That's simply not possible with OpenCL's model.

  • bufferoverflow 5 years ago

    RTX isn't crap. It's an innovation. Yes, it's in its early stages, but nevertheless. True reflections alone will result in new game mechanics. It will accelerate traditional 3D rendering.

    • dangerbird2 5 years ago

      At the very least, it will make it easier to implement ambient occlusion and view-space reflections that don't look weird.

  • alanaktion 5 years ago

    I love AMD's innovation in this space, but for high-end gaming Nvidia is still destroying them in raw performance. RTX and G-Sync are definitely stupid, though they are adding limited FreeSync compatibility to recent cards now.

    If AMD made something that'd beat my 1080 Ti for a reasonable price, I'd definitely buy it. I certainly don't like Nvidia's Linux drivers, but the majority of my non-IGPU needs are Windows-based, so it's not as much of an issue. If I exclusively used Linux on my high-end PCs, I'd likely be more willing to lose some raw performance to go with AMD.

    • dragontamer 5 years ago

      Well, the Radeon VII looks like it is around the 1080 Ti / 2080 for $699.

      I think the main issue with AMD is that their compute drivers are clearly behind NVidia's. However, their ROCm development is now on Github, so we can publicly see releases and various development actions. AMD has been active on Github, so the drivers are clearly improving.

      But I think it is surprising to see just how far behind they are. ROCm is rewriting OpenCL from scratch; HIP / HCC / etc. are built on top of C++ AMP but otherwise seem to be built from scratch as well. As such, there are still major issues like "ROCm / OpenCL doesn't work with Blender 2.79 yet".

      And since ROCm / OpenCL is a different compiler, it has different performance characteristics compared to AMDGPU-PRO (the old OpenCL compiler). So code that worked quickly on AMDGPU-PRO (ex: LuxRender) may work slowly on ROCm / OpenCL (or worst case: not at all, due to compiler errors or whatnot).

      EDIT: And the documentation... NVidia offers extremely good documentation. Not only a complete CUDA guide, but a "performance" guide, documented latencies on various instructions (not like Agner Fog level, but useful to understand which instructions are faster than others), etc. etc. AMD used to have an "OpenCL Optimization Guide" with similar information, but it hasn't been updated since the 7970.

      EDIT: AMD's Vega ISA documentation is lovely though. But it's a bit too low-level, and while it gives a great idea of how the GPU executes at an assembly level, it doesn't really have much about how OpenCL relates to it, or optimization tips for that matter. There are certainly nifty features, like DPP or the ds_permute instructions, which could probably be used in a bitonic sort or something, but there's almost no "OpenCL-level" guide to how to use those instructions. (Aside from https://gpuopen.com/amd-gcn-assembly-cross-lane-operations/. That's basically the best you've got.)
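
      For what it's worth, the CUDA-side analogue of those cross-lane instructions is the warp shuffle; one compare-exchange step of the kind a bitonic network composes looks roughly like this (a sketch, not tuned code):

        // One butterfly compare-exchange across lanes via __shfl_xor_sync.
        // A full 32-element bitonic sort composes stages of these steps.
        __global__ void butterfly_step(float *data, int mask) {
            int lane = threadIdx.x & 31;
            float v = data[threadIdx.x];
            float other = __shfl_xor_sync(0xffffffffu, v, mask); // lane ^ mask
            bool lower = (lane & mask) == 0;        // lower lane of the pair
            v = (lower == (v > other)) ? other : v; // lower keeps min, upper max
            data[threadIdx.x] = v;
        }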

      That's just the reality of the situation right now for anyone looking into AMD Compute. I'm hopeful that the situation will change as AMD works on fixing bugs and developing (there have been a LOT of development items pushed to their Github repo in the past year). But there's just so much software to be written to have AMD catch up to NVidia. Not just code, but also documentation of their GPUs.

      • keldaris 5 years ago

        From my perspective (computational physics, not machine learning) the situation with GPU compute is very simple. If you are fine writing everything from scratch and won't need the CUDA ecosystem (which is really all there is for good sparse matrix, linear algebra, etc. support), write OpenCL 1.2 (or even GLSL if it's a visualization-heavy code with relatively simple compute) and buy whatever gets you the best compute/$ at that time. Otherwise - and this probably includes most people in this space - you have no choice but to keep using CUDA. There is just no meaningful compute ecosystem for AMD GPUs, sadly.

        I'm still very much looking forward to the Radeon VII due to the memory bandwidth, since I'm currently working on bandwidth-constrained CFD simulations. But that's a specific use case and I write most things from scratch anyway.

        • dragontamer 5 years ago

          AMD's hardware is stupid-good from a compute perspective. Vega 64 is $399, but renders Blender (on AMDGPU-PRO drivers) incredibly fast, like 2080 or 1080 Ti level. That's basically the main use case I bought a Vega for (which is why I'm very disappointed in ROCm's current bug that breaks Blender).

          If you really can use those 500 GB/s HBM2 stacks + 10+ TFLOPS of compute, the Vega is absolutely a monster, at a far cheaper price than the 2080.

          I really wonder why video game FPS numbers are so much better on NVidia. The compute power is clearly there, but it just doesn't show in FPS tests.

          ---

          Anyway, my custom code tests are an attempt to build a custom constraint solver for a particular game AI I'm writing. Constraint solvers share similarities with relational databases (in particular, the relational join operator), which have been accelerated on GPUs before.

          So I too am a bit fortunate that my specific use cases actually enable me to try ROCm. But any "popular" thing (deep learning, matrix multiplications, etc.) benefits so heavily from CUDA's ecosystem that it's hard to say no to NVidia these days. CUDA is just more mature, with more libraries that help the programmer.

          AMD's system is still "some assembly required", especially if you run into a compiler bug or care about performance... (gotta study up on that Vega ISA...) And unfortunately, GPU assembly language is a fair bit more mysterious than CPU assembly language. But I expect any decent low-level programmer to figure it out eventually...

          • microcolonel 5 years ago

            I agree, and I'd add that the VII is probably going to be a lot better. There are some pretty big benefits to the open drivers as well (which can be used for OpenGL even if you use the AMDGPU-PRO OpenCL, which is probably wise if OpenCL is what you want to do).

            As one example, I have a recurring task that runs on my GPU in the background, and I sleep next to the computer that does it. Since I don't want it to be too noisy, and it is acceptable for it to take longer to run while I'm asleep, I have a cron job that changes the power cap through sysfs at night to a more reasonable 45W (and at those levels it's much more efficient anyhow, especially with my tuned voltages).
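
            For the curious: amdgpu exposes the cap as a writable hwmon file, in microwatts. A rough sketch of what that cron job boils down to (the exact hwmon path is an assumption and varies per machine):

              #include <fstream>

              int main() {
                  // power1_cap is in microwatts; the hwmon index differs per
                  // system, so check /sys/class/drm/card0/device/hwmon/ first.
                  std::ofstream cap(
                      "/sys/class/drm/card0/device/hwmon/hwmon0/power1_cap");
                  cap << 45 * 1000 * 1000;  // 45 W
                  return cap.good() ? 0 : 1;
              }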

            > I really wonder why video game FPS numbers are so much better on NVidia. The compute power is clearly there, but it just doesn't show in FPS tests.

            Drivers are hard, and AMD has sorta just been getting around to doing them well. The Mesa OpenGL drivers are usually faster than AMDGPU-PRO at OpenGL, and RADV is often faster than AMDGPU-PRO Vulkan (and AMDVLK).

            I've been hoping these last few years that AMD would try to ship Mesa on Windows (i.e., add a state tracker for the low level APIs underlying D3D), and save themselves the effort. As far as I can tell, there is no IP issue preventing them from doing that (including if they have to ship a proprietary version with some code they don't own). There still seems to be low-hanging fruit in Mesa, but the performance is already usually better.

      • Rychard 5 years ago

        I bought my 1080 Ti just over a year ago (December 2017) from Newegg for $750. (Newegg item N82E16814126186)

        I'm glad AMD is finally catching up, but a savings of only $51 an entire year later doesn't exactly sound like a particularly great deal to me.

        • dragontamer 5 years ago

          Welcome to the end of Moore's Law. 7nm is as expensive per transistor as 14nm was. Sure, you gained double the density, but the chips cost twice as much to make. So you only get improved performance per watt; cost per transistor stayed equal in this 7nm generation.
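
          Worked out: if a 7nm wafer costs ~2x what a comparable 14nm wafer did but packs ~2x the transistors, cost per transistor is (2 x cost) / (2 x transistors), i.e. unchanged. The density buys you performance per watt, not cheaper chips.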

          NVidia's 2080 (roughly equivalent to the 1080 Ti) is also $699 to $799, depending on which model you get. It's the nature of how the process nodes work now.

          -----------

          Rumor is that the lower-end of the market will get price/performance upgrades, as maybe small-7nm chips will have enough yield to actually give cost-savings. But that's a bit of "hopes and dreams" leaking in, as opposed to any hard data.

          For now, it is clear that 300mm^2 7nm chips (like the Radeon VII) are going to be costly. Probably due to low yields, but it's hard to know for sure. Note that Zen 2 and Apple chips are all at around 100mm^2 or so (which seems to indicate that yields are fine for small chips... but even then, Apple's phones definitely increased in price as they hit 7nm).

  • twtw 5 years ago

    Your rhetoric is a bit out of date. I have no problem with AMD (their hardware is good!), but I don't think your presentation is accurate.

    > CUDA

    Implemented by AMD too, under the moniker "HIP."

    > their RTX crap

    Vendor-independent via DXR, soon to be available through Vulkan as well.

    > G-Sync

    Nvidia GPUs now work with FreeSync monitors.

    > PhysX

    Open source, as of December.

    I want to feel good about AMD, but they have thus far failed to build a stable platform around their GPGPU stack(s). Nvidia has done a pretty good job with making CUDA a stable platform for SW development, however anti-competitive you might think it is.

    • AnthonyMouse 5 years ago

      The reason they're doing that now is that AMD is forcing them to. nVidia's goal was to lock everyone in, which kind of worked, but everybody hates it. And AMD has been successful enough with their open alternatives that it created a significant risk that people could switch to those instead.

      It's forcing nVidia's hand. If they try to keep everything proprietary and the market starts to shift to the open alternatives because everybody hates the lock-in, everyone who adopted their solutions to begin with and now has to throw them out and start over will be displeased. So their only hope is to be just open enough that people continue to use their stuff. But they still suck -- look at the state of the open source drivers for their hardware.

      And they should hardly be rewarded for doing the wrong thing for as long as the market would bear. Forgive them after they fall below 40% market share.

  • swoorup 5 years ago

    I regret purchasing Nvidia; I run Linux for day-to-day usage. X frequently has issues, even with a compositor. My Intel-only laptop is buttery smooth with sway, while I am forced to use X just because Nvidia doesn't support Wayland properly.

    Point is, if you are a user of open source software, stay away from Nvidia products.

    • AsyncAwait 5 years ago

      Yes, the NVidia Wayland situation is bizarre, with them forcing EGLStreams on people when nobody wants to use them, etc.

      I can confirm that besides Intel, AMD plays nicely with Wayland too, am typing this from a Wayland GNOME session using the open-source AMDGPU drivers.

  • brandonjm 5 years ago

    > their RTX crap

    I was hyped about RTX when it came out, and I'm all for more performant raytracing; however, my hype has seriously waned since the RTX release due to the lack of support. Until we have more games that support it, I'm inclined to agree that there is no reason to include it when deciding between NVIDIA and AMD at this time. I would argue that by the time raytracing in gaming is more widespread, there will be a more open and accessible solution, likely supported (or built) by AMD.

    • softawre 5 years ago

      Fair enough. But when the choice is a 2080 for $699 with RTX versus AMD's new 2080 competitor without RTX at the same price, or no AMD card at all, it's not a hard choice.

  • idonotknowwhy 5 years ago

    > For gaming though, there's no reason not to get an AMD GPU.

    Cemu, yuzu, and other CPU-intensive emulators only support OpenGL. The Mesa drivers are much faster (you can get 40% more performance out of Cemu by running it on Linux via Wine), but this doesn't help Windows AMD users.

  • deelowe 5 years ago

    What's wrong with RTX (other than it being a bleeding-edge tech that isn't well supported yet)?

    • zepearl 5 years ago

      I'm a "n00b" in this sector, so pls. feel free to correct me:

      1) RTX is meant to be linked to "raytracing".

      2a) Raytracing in general computes a scene by computing how photons are affected by matter - e.g. a full reflection by an absolutely smooth, non-absorbing surface, or a partial reflection & path divergence caused by liquids, etc.

      2b) In the simulation, the photon that "bounces off" a surface is then "rebounced" by another surface and so on, and this creates a picture similar to the one we are used to seeing in the real world.

      3) RTX maybe cuts the whole "rebouncing" and generation of photons a bit short, meaning that there isn't really any new next-gen technology; it's just a bit more processing power available to do some additional parallel, short/semi-pure raytracing work, which does not hold up when your scene is complex and needs many reflections "rebounded" many times.

      Again: this is just my initial understanding.

  • glglwty 5 years ago

    > they work with the inbuilt drivers on every distro out of the box

    No, they don't work with any distro that uses linux-libre.

    • sandov 5 years ago

      What gpu works on linux-libre?

      • glglwty 5 years ago

        Many old GPUs store their firmware in non-volatile memory, so the OS doesn't need to load it. Such GPUs work with linux-libre as long as there is a free driver.

      • zanny 5 years ago

        Pre-900-series Nvidia GPUs would, I think, since Nouveau wasn't forced to use signed Nvidia firmware until the 900 series.

  • TomVDB 5 years ago

    > CUDA, their RTX crap, G-Sync, PhysX, Nvidia "Gameworks"

    Do you have similar issues with the vast majority of software companies out there that don't open source their product?

    Whether it's Microsoft, SAS, Wolfram Research, or Adobe; Synopsys, Autodesk, or Blizzard; or any mom-and-pop software company that solves some niche problem. It's all closed source and "anti-competitive".

    Are they just as monopolist, exploitative and user-hostile?

    If not, what's the difference? Just the fact that they sell hardware along with their software?

    • bgorman 5 years ago

      Proprietary drivers are the worst kind of proprietary software. If the vendor goes bankrupt or simply chooses to stop supporting your product, you have no path forward to update your operating system. Why use a Canon printer when HP has open source drivers? Why use Nvidia when AMD has drivers in the kernel that will never be removed? Old Nvidia cards like the 8800 GTX have only poor reverse-engineered support on recent kernels. By contrast, if Microsoft abandons Word, I could easily get it working with virtualization or possibly through shims like Wine. Nvidia knows that their stance is problematic, and for the embedded auto industry they actually support the open source drivers for their Tegra hardware.

      • TomVDB 5 years ago

        > Why use Nvidia when AMD has drivers in the kernel that will never be removed?

        It's perfectly reasonable to be concerned about a vendor going bankrupt and choosing not to support their product. If that's the issue, just don't buy their products.

        If one thinks that CUDA functionality is really that important, then isn't that proof that it's valuable IP? Nobody should be forced to open up valuable IP. I don't think it's reasonable to call somebody scum of the earth because they value and don't want to give away what they created.

        • sandov 5 years ago

          Nobody talked about forcing Nvidia to open up their IP. Their product is just bad from the driver point of view, and we don't want to buy it.

          Also, I don't think that someone is bad for making use of crappy laws such as IP laws, but I don't think we want to have that discussion here.

tombert 5 years ago

I can't speak for anyone else, but because AMD has been opening up their drivers, the laptop I purchased six months ago was AMD-based.

I haven't done any kind of elaborate benchmarks, but as someone who runs Linux full-time, I want to support companies that make my life a bit easier.

That said, I have had some issues with my computer having weird graphical glitches and then crashing... I don't know if that's the driver's fault, but I never had this with my NVidia or Intel cards...

  • dpwm 5 years ago

    In October 2016 I built an AMD-based desktop with an integrated GPU. I was seriously impressed by the out-of-the-box support in Linux compared with the support for their earlier chipsets.

    I seem to remember at around the same time that the Intel open source drivers went through a number of regressions.

    In the past I've had really bad experiences with ATI's GPUs. My 2016 experience would certainly allay my fears about buying AMD.

  • tatref 5 years ago

    I supported AMD for several years because of this... Then I got tired of the low quality of both the open source and the closed drivers. Crashes, glitches in movies, flickering...

    Some time ago, I bought an Nvidia. It works like a charm with the closed driver on Linux and Windows. I do mainly games on Linux/Windows, some GPGPU (machine learning with TensorFlow), and the usual stuff. I couldn't be happier... except if it were open source ;-)

    • AsyncAwait 5 years ago

      I purchased an AMD laptop precisely because I got tired of having to deal with NVidia, either their glitchy proprietary driver breaking GNOME every so often, or Optimus being a pain via Bumblebee.

      Since switching to the AMD laptop, my experience has been smooth. My only worry is the upgrade path; there aren't that many high-performance AMD laptops out there, but my next purchase is definitely AMD.

      • beezischillin 5 years ago

        If the new 3rd-generation Ryzen stuff ends up being as good as it looks, then I'd say we're in for a good year as far as mobile AMD hardware is concerned. There's a good chance that you have nothing to worry about as far as upgrade paths are concerned.

        • AsyncAwait 5 years ago

          I hope so, but my current laptop has a desktop-class Ryzen 7 1700 in it and their mobile offerings usually come with half as many cores, but I guess we'll see what 3rd-gen Ryzen has to offer.

          • sitkack 5 years ago

            Which laptop is this?

      • Thaxll 5 years ago

        Well, Nvidia with closed source drivers is better than AMD with open source ones. Having an open source driver means nothing in terms of driver quality; what matters is how many people are working on it, and for Nvidia there are many more people working on it, because Nvidia is very popular outside of gaming on Linux.

    • briffle 5 years ago

      I had the opposite experience. I had an older Nvidia card. The binary driver would work great, then an update to the Nvidia driver would come out, and on the next reboot I got to have lots of fun trying to get it repaired without a GUI. Half the time, just uninstalling/reinstalling the older version would fix it.

      • dpwm 5 years ago

        IIRC Nvidia have dropped support for entire chipsets during seemingly minor updates to their Linux driver. This happened to me with an older card.

        I think I ended up bisecting the changelogs to find that they had dropped support two versions back.

      • cr0sh 5 years ago

        For a long time I was on Ubuntu 14.04 LTS with the NVidia driver, and every update was a pain; inevitably, when I rebooted, I'd be at a console, having to fix the damn thing so the updated driver would work.

        After a couple of rounds of this, what I saw happening (and I am not saying this was it in your case) was that my X config file was being replaced/updated, which really just broke everything. So I got in the habit of always making a backup of that file. Usually, when dropped at the console, I could just back up the new config file there, copy my old file over, and everything would work perfectly on restart.

        Except this last time (a few months ago) - but it was inevitable it would happen, and it was entirely my fault.

        I had a need a couple of years back to use the latest C++11-capable version of gcc - but 14.04 LTS didn't have it available, and there weren't any backports. So I decided to "wing it" from scratch, compiling a new version.

        Then I found myself in dependency hell - which I also got past through a variety of updates for my Ubuntu, or via download and install, etc. It was a complete mess, but in the end I got it working...

        ...until I tried to update - the entire update system was fairly broken, so no moving forward from 14.04 LTS.

        But I thought I could do NVidia's latest proprietary driver - and it needed the compiler and other parts (for what reason I don't know) and it died a horrible death, leaving me with no good options for the driver. I had to fall back to the open source nouveau driver (yuck) just to get my desktop back. But things were pretty well hosed.

        Fortunately my OS was on a separate partition and drive, so I bit the bullet and did a reinstall and upgrade (to Budgie Desktop 18.04 LTS), and vowed to never do any hand-compile-and-install stuff again (next time I need such a thing, it's going to be in a VM or containerized).

        • foxylad 5 years ago

          I've often wondered about putting the whole of /etc into a git repository, so any changes (either by software updates or by myself) would be visible and reversible. Does anyone do this, and if not, why not?

    • tyfon 5 years ago

      I have a GTX 670 in my main computer and I'm going Threadripper/ATI card next for sure.

      I had issues with the proprietary Nvidia drivers just today and it took me about an hour to fix (Arch Linux).

      My attitude was the same as yours, but it has shifted in the last year or so. Maybe just because I want that sick CPU :)

    • IdiocyInAction 5 years ago

      The NVidia driver is a crapshoot though. I have massive problems with it on my desktop and none at all on my laptop. I have had more consistently good experiences with AMD.

    • mackal 5 years ago

      My experience with AMD open drivers has been great. I had an HD 6850 before, and my new system has an RX 580; both have worked great. I had a few issues with the 6850 early on, but nothing as bad as what you've described.

    • shmerl 5 years ago

      Nvidia Optimus is a horror story on laptops if you are using Linux.

    • heroprotagonist 5 years ago

      I thought it was interesting how they began forcing Windows users through a login page to associate their identity, plus a continuous background telemetry service, but not Linux users. The telemetry service needs to be running just to check for updates to the gaming card drivers.

      At least, I personally didn't have that experience on Linux with the proprietary driver when the Windows gaming rig did. They likely determined it wasn't worth the effort, or that Linux users would be more likely to lash back at the intrusion. Or it could have been a later target still on their plan...

      Have they changed this and forced their telemetry collection onto the Linux desktop yet?

      I doubt they'd make it a prerequisite for installing drivers for their ML or workstation cards. I'm a little more worried about their gaming/desktop cards.

      • llukas 5 years ago

        You need to log in for GeForce Experience, not for the driver. GFE is optional.

        • heroprotagonist 5 years ago

          Except it's not accessible to 90% of users because it's only delivered as a single package that installs both of them in combination. If you decline their telemetry service, it does not install the driver.

          If you want to install just the driver, you have to extract the driver from their executable and manually handle updates, or use a third-party package manager like Chocolatey.

          Most people won't know this is possible. For the vast majority, the trade-off is trading unknown bits of their information for security and stability updates.

          Should that really be the default state of things?

        • ab5tract 5 years ago

          I have tried several times to do exactly that and have yet to succeed.

    • tatref 5 years ago

      I'm replying to all the answers here.

      It seems that everyone has a different experience on the subject! One clarification: I had an ATI card (laptop) from 2005 to 2010, then an ATI (desktop) from 2010 to 2017.

  • zanny 5 years ago

    I've been exclusively buying AMD discrete graphics for the better part of a decade since they started developing their open driver.

    Between a 4850, 7870, 290, and 580, I've been a satisfied customer. It was rough waters in 2012, but nowadays it's flawless.

    On the CPU side though... I do want to make a big upgrade this year. I've had a 4770k since release, and 8 cores sounds pretty juicy. But AMD still has their proprietary PSP, and unlike with the Intel ME, I have no way to disable it. While it sucks giving Intel money when they have been no help in disabling their government backdoor into my computer, the fact that the community has disabled it (assuming me_cleaner will work properly on whatever Intel CPUs come out this year) makes me lean towards buying another Intel chip. Not because I want to support them, but because AMD hasn't given me an alternative yet.

  • Symmetry 5 years ago

    The sad truth is that it takes a while for the open source drivers to solidify after any new release, so going open source means not getting the newest hardware.

    Well, Intel actually seems to get their open source driver support in far enough ahead of time these days that you can just load the newest Ubuntu with them and it works, but that wasn't the case 5 years ago.

    • jcastro 5 years ago

      AMD's almost there: you can just install 18.10 and Steam and go gaming with nothing else to install. 18.04, not so much, but once distros start shipping with newer kernels/Mesa it should sort itself out.

    • sandov 5 years ago

      Most people don't care about having the newest hardware. The only cases that come to my mind are hardcore PC gamers, people who edit video, and people doing deep learning.

      • zepearl 5 years ago

        Maybe this is true for desktop PCs, but in my case I always had to be very careful when buying a new laptop/notebook (which basically always had the newest HW because it just came with it).

    • gcb0 5 years ago

      True. And still, Intel drivers are a landmine. Even the latest ones shipped today generate FIFO warnings because of the bad code. Not to mention Intel "graphics cards" are a glorified floating-point co-processor and the "drivers" just rebrand Mesa's software renderer as intelgfx or something :) Man, everything about Intel lately feels like a joke once you remove the marketing lacquer... it's almost as if they are the IBM of old.

      Anyway, AMD has actual graphics cards, and the quality of the drivers they open source is usually very good.

      • mattst88 5 years ago

        > floating-point co-processor and the "drivers" just rebrand Mesa's software renderer as intelgfx

        That is hilariously false. I work on the drivers. They're open source. Go take a look.

  • Jnr 5 years ago

    I built a new media box last year and thought I would try the AMD Ryzen 2400G APU. Turns out it was a bad choice. The graphics drivers are extremely unstable and crash frequently; the kernel bug tracker has reports from multiple people with info on how to reproduce the crashes, but those cases are still in status "NEW" almost a year later. And while their Windows benchmarks are decent, anything 3D-related on Linux is a glitching disaster with poor performance.

    My friend bought the latest Intel NUC with AMD Vega graphics and he could not even get Linux to boot on it.

    Meanwhile I have an Nvidia GTX 970 in my desktop PC and everything works fine; even G-Sync works. I have used Nvidia cards with Linux for 10 years now and I have not had any issues like I'm having with AMD.

    To me it seems like AMD dumping their drivers as open source is a call for help with maintaining them, rather than an act of friendliness to the community.

  • chrisper 5 years ago

    Yet I still can't read the CPU core temperatures of my Ryzen 2600 on Linux (5.0-rc2). I'm not sure what's going on, but I think next time I'll go with Intel again. I read this may be related to some NDA by AMD, but I'm not sure.

    • opencl 5 years ago

      Temperature monitoring for the first gen Ryzen chips has been in the kernel for about a year. The patch for second gen is there but narrowly missed the 5.0 release window, so it should land in the first release after 5.0.

      Intel is a bit faster about this, but it's not like they have perfect day-1 kernel support either. For example, temperature monitoring drivers for both first-gen Ryzen and Coffee Lake landed in 4.15. At the time, Ryzen was almost a year old and Coffee Lake was about half a year old.

      http://lkml.iu.edu/hypermail/linux/kernel/1811.0/01237.html?...
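
      Once the k10temp support for your chip is in the kernel, reading it is just a sysfs read. A minimal sketch (the hwmon index is machine-specific; pick the one whose "name" file says k10temp):

        #include <fstream>
        #include <iostream>

        int main() {
            // k10temp reports millidegrees Celsius through the hwmon interface.
            std::ifstream f("/sys/class/hwmon/hwmon0/temp1_input");
            long millideg = 0;
            if (f >> millideg)
                std::cout << millideg / 1000.0 << " C\n";
            else
                std::cerr << "no reading (wrong hwmon index or missing driver)\n";
            return 0;
        }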

      • silotis 5 years ago

        Based on personal experience temperature monitoring on Coffee Lake worked just fine with 4.14. I'm pretty sure there hasn't been a breaking change in Intel's interface for this since at least Sandy Bridge.

      • chrisper 5 years ago

        Thanks. It's not easy to find information like this. How did you find it?

        • opencl 5 years ago

          I just Googled "ryzen temperature monitoring linux".

          I've had a Ryzen 1700 since they first came out so I was following the Linux support for that fairly closely at the time, mostly by browsing Phoronix fairly frequently. Thankfully after a few months I didn't need to because things were working pretty well by then, other than the thermal monitoring that took quite a while to land. Linux and brand new hardware rarely seem to get along well regardless of the vendor unfortunately.

    • vardump 5 years ago

      Perhaps someone could just reverse engineer what the Windows driver does to get the values.

      It could be a nice way to learn a tool like Binary Ninja or IDA (pro).

      It's probably some DeviceIoControl call. Once you know the call's 12-bit (I think?) function ID from the userland code, it should be relatively straightforward to follow the driver code to see where it ends up and what hardware registers it uses.
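
      Something like this is the shape of the call you'd be hunting for while reversing (the device name and IOCTL code below are entirely made up; finding the real ones is the whole exercise):

        #include <windows.h>
        #include <cstdio>

        int main() {
            // Hypothetical device object exposed by the vendor driver.
            HANDLE h = CreateFileA("\\\\.\\HypotheticalAmdSmu",
                                   GENERIC_READ | GENERIC_WRITE, 0, nullptr,
                                   OPEN_EXISTING, 0, nullptr);
            if (h == INVALID_HANDLE_VALUE) return 1;

            DWORD raw = 0, got = 0;
            // The 12-bit function number sits in bits 2..13 of the IOCTL
            // code; this particular value is a placeholder.
            DeviceIoControl(h, 0x80002000, nullptr, 0,
                            &raw, sizeof raw, &got, nullptr);
            printf("raw value: %lu\n", raw);
            CloseHandle(h);
            return 0;
        }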

      Or maybe it's just something through ACPI?

    • wstuartcl 5 years ago

      Just when AMD looks like it will be the safe bet on CPUs for the next few years, you bolt? =)

      • chrisper 5 years ago

        It's not like I'll upgrade any time soon. A lot can happen in all those years! I'll see.

mrweasel 5 years ago

Sadly I constantly hear people say that you should get an Nvidia card for both Linux and FreeBSD, because the drivers are better. While I'm sure that Nvidia's drivers are good, it's kinda sad that the attitude is that AMD is a better friend of the open source community, but yeah, we're going with Nvidia.

I get why, you have stuff to do and Nvidia performs better, but it's still a little annoying.

OpenBSD seems to be the only open source operating system that suggests that you get an AMD card (or use Intel integrated graphics).

  • marcosdumay 5 years ago

    If you are OK with a proprietary driver and all the headaches that come with it, NVidia drivers are better for some 3-4 years after you buy a high-end GPU, or 1-2 years after you buy a low-end one. After this time, NVidia drops support and you are stuck with a free driver that is much worse than the AMD ones (through no fault of the free driver developers, mind you, but because NVidia makes their life hard).

    Personally, I do prefer to avoid the headaches from day 1, so it's AMD or Intel.

    • zepearl 5 years ago

      I use an nVidia GeForce 8200 (released in ~2007) in my media center and I don't have problems with drivers (v340.xxx legacy series: latest update is v340.107 https://www.nvidia.com/Download/driverResults.aspx/135161/en... ) on Arch Linux.

      Same thing with a GTX 760 (currently using v390.42) on Gentoo: I kept that server running for many weeks at a time and never had crashes or weird things happening while doing GPU mining, playing games, or just having the GPU idle while using the CPU.

      Setting things up was easy as well: I always just replaced the old card with the new one and that was it.

      • gcb0 5 years ago

        The post you're replying to probably means GL support in terms of features and performance. A server and a media center will do fine even without proprietary drivers... well, until someone updates the color stuff in V4L again.

        • zepearl 5 years ago

          Maybe? But then even AMD/ATI probably doesn't keep on tweaking their drivers more than nVidia to improve performance & OpenGL compliance for all their 12-year-old HW... (but I might be wrong).

          >>a server and a media center will do fine even without proprietary drivers...

          Honestly: not sure (never tried). My big question mark involves GPU frequency scaling (maybe now an issue only with "Nouveau"?) when using video filters (e.g. framerate sync and/or nice upscaling filters).

    • llukas 5 years ago

      What are you smoking? Tesla and Fermi architectures had ~8 years of support.

      https://www.anandtech.com/show/12624/nvidia-moves-fermi-to-l...

      • simcop2387 5 years ago

        So I haven't looked at this in a while, but the usual way it's happened with the Linux drivers for Nvidia cards is that they stop updates for X11, the Linux kernel, and other parts of the system, even if they maintain the driver itself, keeping it otherwise up to date feature- and bug-wise. This means that you'll be stuck at older kernel versions and older Linux distro versions without a way to move forward on the proprietary driver.

      • jpk 5 years ago

        Leading with an accusation of substance abuse isn't civil or productive. Your comment may well be factually correct, but your tone here is inappropriate.

        https://news.ycombinator.com/newsguidelines.html

        • gamblor956 5 years ago

          It's not an accusation of drug use.

          "What are you smoking" is a US colloquialism that means that one's statement is so nonsensical that the most logical explanation is that the speaker was smoking marijuana at the time the statement was made (because the implied alternative is that the speaker is crazy or quite daft).

          • jpk 5 years ago

            I am well aware of the colloquialism. My point is that it violates the site's comment guidelines.

            'When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."'

            This comment can be shortened to eliminate the first sentence, which is needlessly rude.

          • majewsky 5 years ago

            Please keep in mind though that this is not strictly a US site. To me (as an experienced, but non-native speaker) "what are you smoking" does not sound tongue-in-cheek at all.

            • gamblor956 5 years ago

              Please keep in mind that this site is operated by a US VC, on US hardware and software, for a US audience residing in a few tech-centric US cities.

              US colloquialisms are perfectly appropriate on this site.

              • AsyncAwait 5 years ago

                > Please keep in mind that this site is operated by a US VC, on US hardware and software, for a US audience residing in a few tech-centric US cities.

                And funded by a Brit :-)

                Honestly, that's not how today's world works. Sites like Reddit and HN obviously have an international audience. It is not meant for people from tech-centric U.S. cities, but rather to be a sort of sales channel for YC, which BTW has well-known companies founded by non-Americans on its list, such as Gitlab, Stripe etc.

                > on US hardware and software

                Really? Are you sure there's not a line of free/open-source code in there that's written by a non-American? It runs on Linux, I presume, whose creator is Finnish, and there are many non-U.S. contributors. If there's NGINX somewhere in the pipeline, now you've got some Kazakhstani/Russian code in there. If any part was ever touched by a JetBrains IDE, more evil Russians were involved, etc.

                Please keep the nationalistic rhetoric down.

                P.S. Did you know that paper is a Chinese technology? Yeah, the same Chinese who keep the U.S. down by hyping climate change! I hope you're not using it! \s

                • yellowapple 5 years ago

                  "It runs on Linux, I presume, who's creator is Finnish"

                  Who currently lives and works in the United States, last I checked.

                  Not to say I disagree with your point, of course; just pointing out the irony of condemning someone for putting undue emphasis on a particular nation's contributions to something while inadvertently doing the same in the process.

            • jqt 5 years ago

              The translated version (of course) of "what are you smoking" is used and known in my own language too. It surprises me that it's not as universal as I thought it would be.

          • dcosson 5 years ago

            You don't think it's a little offensive and uncalled for to tell someone that it is inconceivable that anyone could be as stupid as they are?

        • yellowapple 5 years ago

          You must be smoking some real good stuff if you think "What are you smoking?" is meant to be a literal accusation of substance abuse.

          • jpk 5 years ago

            I am well aware of the colloquialism. My point is that it violates the site's comment guidelines.

            'When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."'

            This comment can be shortened to eliminate the first sentence, which is needlessly rude.

            • yellowapple 5 years ago

              I don't disagree. That's not the objection your original response presented, though; rather, your comment implied a literal interpretation of "What are you smoking?", at which my own comment poked fun with a variation on that colloquialism.

  • Athas 5 years ago

    I strongly recommend AMD. After a long time of using NVIDIA due to their superior Linux drivers, I got myself a Vega 64 when I built a new desktop computer last year. After so many years of driver woes, it was almost unreal having accelerated 3D graphics work out-of-the-box after installing Fedora. For my personal machines, I don't see myself going back to NVIDIA anytime soon.

    • gcb0 5 years ago

      I am in the same boat, but instead of a Vega you can get last generation's best of the best for under $170! 60+% of the performance for under 1/3 of the price.

      • Athas 5 years ago

        Yeah, the Vega 64 (like most high end gear) is probably not worth the expense. I had a surplus of money at the time and I do GPU programming research, so I could justify it. I also hear that while it has nice compute performance, it's not as impressive for games.

  • chme 5 years ago

    You mean the proprietary Nvidia driver?

    Because (no offense to the nouveau guys, you do a great job, all things considered) the open source one is in pretty bad shape, and that is completely the fault of Nvidia.

    Having to separately compile their driver (which does not conform to kernel standards) delivers a subpar experience for Linux users and taints the kernel. If they don't care about Linux users enough to help develop the open source driver, then I don't see any reason for buying their hardware. I'd rather spend it on hardware manufacturers that try to support me as a customer.

    • jjrh 5 years ago

      I think the guy above was talking about the proprietary driver. While I have tried to be a fan of AMD/ATI because of their stance on open source, for a long time AMD's proprietary drivers sucked and had all sorts of problems, and the open ones were noticeably slower. For a lot of people there wasn't really much choice.

      Sounds like things are much better these days.

  • nas 5 years ago

    After years of having AMD video cards, I recently got a high-end Nvidia card, a GeForce GTX 1070 Ti. For 3D gaming, Nvidia cards were clearly better on performance vs. price. However, dealing with the proprietary driver is just a pain in the ass. With an AMD card, the open source driver just worked. With Nvidia, I have to keep re-installing the proprietary driver. Sometimes it doesn't compile with the current kernel (e.g. Debian testing). Finding the correct driver to download and install is a pain (try using the 'links' on Nvidia's site).

    If I build another PC, I'm going to very strongly prefer an AMD card.

    • yellowapple 5 years ago

      Yep. I was on Nvidia+Intel exclusively for a long time, but I bit the bullet and went with AMD graphics for my most recent laptop purchase, and was blown away by it being reasonably performant out-of-the-box. My most recent desktop build was consequently AMD-only (Threadripper+Radeon); worked like a charm. Can't say I'm all that tempted to go back.

  • floatboth 5 years ago

    FreeBSD is definitely better with AMD. No one should be recommending Nvidia in 2019.

    There is unfortunately a little boot issue currently (amdgpu conflicts with the EFI framebuffer, so when booting with UEFI you have to turn efifb off, resulting in no display after the bootloader and until the GPU driver loads). But other than that, I'm very happy with amdgpu & Mesa.

  • kllrnohj 5 years ago

    I've gone through 3 different desktops and 4 different Nvidia GPUs in my Linux workstation over the last ~8 years, and Nvidia's proprietary driver has never once been what I'd describe as "good".

    There's regular display corruption and flickering on the desktop, most significantly on the second display.

    All I need/want is basic 2D composition of the desktop, so maybe their 3D acceleration works better. But there doesn't seem to be any reason to put up with the crappy re-compilation path just to get kinda OK desktop performance with a heap of bugs.

  • bjoli 5 years ago

    I have other values. I don't need the absolute best performance, and I'd rather spend money on a product from an OSS-friendly company. AMD is probably what will finally make me abandon Intel integrated graphics for my Linux computers.

  • pimeys 5 years ago

    Just built my new work workstation with a 2970WX and an RX 590. This is my first AMD workstation since the Thunderbird era, and everything has worked just great with Arch Linux using the open source drivers.

    And my god it is fast...

    • jqt 5 years ago

      If you think that card is fast using the Linux drivers, wait until you try it with the Windows ones.

      • pimeys 5 years ago

        The card is just to get two DisplayPort outputs and open source drivers. The CPU is for work.

  • crazysim 5 years ago

    What about Linus giving the finger to Nvidia? I'm pretty sure that's a suggestion of some sort.

    • dijit 5 years ago

      They seem to be difficult to work with as a company, but their desktop GPUs mostly "just work" on Linux/FreeBSD - so for the user they seem to be better.

  • shmerl 5 years ago

    > Sadly I constantly hear people say that you should get an Nvidia card for both Linux and FreeBSD, because the drivers are better.

    Not in recent years. The trend has been positive for AMD and negative for Nvidia, at least among Linux users, for quite some time already.

    See: https://www.gamingonlinux.com/index.php?module=statistics&vi...

    • blihp 5 years ago

      Not in all use cases (yet)... they still don't have a very compelling GPU compute story on Linux. There are reasons they've only made a minor dent so far, and compute is one of them. As soon as they fix that, I'll be happy to switch my systems, as I do like the direction AMD has been going the last couple of years on the driver front, but they still have more work to do before it's viable for me.

      • shmerl 5 years ago

        > they still don't have a very compelling GPU compute story on Linux.

        AMD's OpenCL story has been rather flaky until recently, but supposedly ROCm is the most complete and open option there is now. You can also use Vulkan for compute purposes, as far as I know. And Vulkan support isn't behind Nvidia in any way.

        Some further Vulkan+OpenCL interop is still planned by Khronos.

        See https://github.com/KhronosGroup/Vulkan-Ecosystem/issues/42#i...

  • zepearl 5 years ago

    I did have an AMD card many (~10?) years ago, but I ended up replacing it with an nVidia because A) it was very noisy and B) Linux support was bad. I've used nVidia ever since and never had problems with many types of setups and hardware, so I never switched back.

    I will reevaluate both brands for the next switch. I would like to use AMD because of its friendliness/efforts towards open-source, but it has to perform (not "top" but at least "good") & be stable & be silent.

    • RussianCow 5 years ago

      Loudness has more to do with each individual card and how the manufacturer built/designed it than it does with AMD/Nvidia. Read GPU reviews online and you will see that, even within the same model (e.g. RX 580), some cards are known for being unreasonably loud while others are near silent.

      • zepearl 5 years ago

        You're being vague: you assume that I did not research what I decided to buy at that time, and you claim huge differences in loudness between brands of the same chip without posting proof.

        • RussianCow 5 years ago

          I didn't assume anything; I just wanted to point this out to clarify that fan noise isn't just about the GPU chip itself. As for proof, there are lots of sites (Tom's Hardware, AnandTech, etc.) that compare different brands of cards by noise level; search for reviews of whatever card you're looking at and you're likely to find a chart comparing them. I've also personally experienced some rather large differences by switching brands of the same card, with both AMD and Nvidia.

          • zepearl 5 years ago

            >>I just wanted to point this out...

            "this" what?

            >>As for proof, there are lots of sites..

            Pls. post the direct links to the specific pages that allow us to see such a direct comparison or to at least see data which can be compared.

  • pjmlp 5 years ago

    I bought a netbook with an AMD APU, exactly because I wanted to support AMD and their open source efforts.

    The payback I got was a downgrade of the available GPGPU features; I now need the Windows drivers to actually make use of the DirectX 11 capabilities and accelerated video decoding, as the radeon driver for the APU only covers a subset of what fglrx was capable of.

    So yeah, what is the point of supporting AMD again?

  • BlackLotus89 5 years ago

    Nouveau developers have actually suggested getting AMD cards multiple times (at least to me). I also went with AMD for my gaming rig. The only reason for me to choose Nvidia is for compute (OpenCL/CUDA).

  • BoysenberryPi 5 years ago

    AMD has the worst open source drivers for Linux. Pretty sure Ryzen APUs still aren't even properly supported. I would love to use AMD on everything, truly would. They are the best for a tight budget but the support just isn't there.

    • green7ea 5 years ago

      I'm using a first-gen Ryzen with an RX 560 on different Linux distros (Ubuntu, Arch, and Void) and it works great, especially for compiling and gaming.

      • BoysenberryPi 5 years ago

        I'm talking about the APUs using integrated graphics. If you are using an APU with a discrete graphics card you defeat the purpose of buying an APU.

    • chme 5 years ago

      You seem to live in an alternative universe from me.

      Worst open source driver? In my universe that might be Lima (no offense to the devs; I think the freedreno, etnaviv, and VideoCore drivers are a bit better, and I don't think there is a PowerVR open source driver yet).

vorpalhex 5 years ago

I'm glad AMD has consistently put in work to keep their drivers available to the Linux community, even if sometimes it's been less than perfect. I really hope that Nvidia eventually open sources its drivers too.

  • xvilka 5 years ago

    When hell freezes over. Their management seems stuck in an aggressive anti-FOSS stance forever. Shame on every developer who works for them - it is like working for evil.

    • beatgammit 5 years ago

      I keep wondering where Oracle and Nvidia come up with their developers. I don't want to work at either because of their complete disregard for FOSS, and most developers I know have the same opinion.

      • pjmlp 5 years ago

        Because not everyone is religious about FOSS and there are lots of interesting things to work on.

      • mj_olnir 5 years ago

        Salary.

        Remember, Comcast also has software developers.

        • kkarakk 5 years ago

          also there are a lot of developers globally with looser morals because of different cultural values. in my experience anyone from a -stan country will make whatever you want as long as you give them a decent salary and a visa

          • SmellyGeekBoy 5 years ago

            Aside from the racism - are you suggesting that working on closed source software is immoral!?

            • kkarakk 5 years ago

              closed source is the default. and where is the racism? you don't even get ethics lectures if you're educated as an engineer in a third world country. anecdotally i was educated from a pretty good private college and even there the ethics professor herself said you don't really get to make any ethics decisions as an engineer in india. your only choice is quitting and that is a really tough choice

  • chme 5 years ago

    Nvidia will probably never do that, since they don't see themselves as just a hardware manufacturer.

    They want to bind customers to their hardware and get them to buy newer hardware versions at a high frequency, and there seems to be no better way to do that than via software they can artificially age if need be, and lock down to prevent alternative solutions.

notus 5 years ago

It seems like this repo has existed for over a year, with no commits within the past couple of days; I'm not sure what the discussion is supposed to be about when there is just a link to a repo.

  • boudin 5 years ago

    Indeed, AMDVLK was part of the AMDGPU-PRO driver first and was open sourced in December 2017, so it's not new.

    • Macha 5 years ago

      I guess it's in the context of the Proton/Linux gaming discussions earlier.

joshuarubin 5 years ago

Nvidia refuses to support GBM for Wayland and instead came out with a completely different buffer API, EGLStreams. This is pretty arrogant. As I use sway, which doesn't support Nvidia, I chose an AMD Vega 64, and it works great.

https://drewdevault.com/2017/10/26/Fuck-you-nvidia.html
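
For context, GBM is Mesa's generic buffer management API, which Wayland compositors like sway use to allocate the buffers they composite. A rough sketch of such an allocation (C++, error handling trimmed; the /dev/dri/card0 path is an assumption and varies by system):

    #include <fcntl.h>
    #include <unistd.h>
    #include <gbm.h>
    #include <cstdio>

    int main() {
        // Open the DRM device node (path is system-dependent).
        int fd = open("/dev/dri/card0", O_RDWR);
        if (fd < 0) return 1;

        // Create a GBM device and allocate a buffer object usable
        // for both rendering and scanout.
        struct gbm_device *gbm = gbm_create_device(fd);
        struct gbm_bo *bo = gbm_bo_create(gbm, 1920, 1080,
                                          GBM_FORMAT_XRGB8888,
                                          GBM_BO_USE_SCANOUT | GBM_BO_USE_RENDERING);
        if (bo) {
            printf("allocated buffer, stride = %u\n", gbm_bo_get_stride(bo));
            gbm_bo_destroy(bo);
        }

        gbm_device_destroy(gbm);
        close(fd);
        return 0;
    }

EGLStreams replaces exactly this kind of explicit buffer handling with an opaque stream object, which is why compositors would have to implement a second, Nvidia-only code path to support it.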

  • beatgammit 5 years ago

    How do you like sway? I'm still on an Nvidia GPU, but I might upgrade to an AMD GPU this year, and I've been hesitant to get back into tiling window managers until I can use one on Wayland.

    • joshuarubin 5 years ago

      Love it. That said, third-party application support for Wayland is pretty poor. Chrom(ium|e) can’t even do accelerated video under XWayland (so everything is choppy, even on my Threadripper system). Firefox is better, but still uses XWayland. Wayland support is available in Firefox nightlies, but it _really_ isn’t ready for daily use yet. For the most part everything works fine, but I’m anxious for native Wayland support from many apps (Electron apps in particular).

turblety 5 years ago

So does anyone know if AMD's Vulkan GPUs are now fully open sourced? I.e., can we build everything from source (firmware, drivers, apps) and then use it without having to trust any blobs?

  • bayindirh 5 years ago

    AMD beta tester here.

    AMD separated the HDCP- and DRM-related silicon from the video acceleration units some time ago in order to be able to open source their GPUs completely, sans the NDA-bound stuff. Even this is a very big and generous step from them for the Linux community.

    I'm sure the firmware contains some highly proprietary and revealing information about their secret sauce, so they won't be able to open it even if they want to.

    • shmerl 5 years ago

      There is no secret sauce in the firmware, but it has HDCP garbage (which is causing it to remain a blob). AMD could provide one without HDCP as an open option, but I suppose they didn't see enough demand for it.

      • bayindirh 5 years ago

        I think it also has the clocks, memory settings, thermal thresholds, and other hardware tuning parameters, so opening the firmware may also lead to many, many fried boards.

        Also, if core enablement and configuration is done in the firmware, some vendors may find themselves in a hard position, since they may be selling crippled GPUs as lower-spec cards.

        Last but not least, if folks enable faulty CUs in their cards and see the faults, they may create some (albeit unjustified) noise on "teh internets", which will come back as bad press.

        So, while the firmware is good for research and educational purposes, it's also a Pandora's box IMHO.

        • shmerl 5 years ago

          You can fry the hardware even now if you set the fan curve incorrectly or create some other power management mess. Opening up the firmware doesn't really change that.

          In the end, opening it up isn't any worse than opening up the kernel driver to begin with (you could apply similar arguments to that), and AMD was OK with that. From what I've heard, DRM is really the main issue here. As usual, the media lobby poisoned the technology for us.

          • bayindirh 5 years ago

            I think you won't be able to override the emergency shutdown thresholds in the firmware.

            I'm a big free and open software advocate. I primarily use free and open source software, and I try to open every line of code I write. I'd like to see the firmware out in the open like the drivers. I just wanted to share my understanding of the hardware; if my comments sounded otherwise, I'm sorry, my bad.

            • shmerl 5 years ago

              I've heard from AMD's Linux engineers that they supported the idea of opening up the firmware, and that the opposition to it wasn't based on the concerns you listed, but was primarily driven by DRM (and the need to split the firmware into two variants, which is extra effort).

              • bayindirh 5 years ago

                That's possible, and I'd love to see the firmware source code and play with it, TBH. I also like AMD because of the efforts they make to open themselves up as much as they can.

                BTW, I'm not employed by AMD or ATI. I was just one of the independent members back when the GPU driver beta testing was closed to outsiders.

                DRM always complicates things, but it always gets broken in the end. Also, it's always a crippling pain.

    • novaRom 5 years ago

      What HDL does AMD use - Verilog, like Nvidia, or SystemC, like Intel?

      • bayindirh 5 years ago

        Sorry, that's beyond my knowledge.

  • boudin 5 years ago

    The firmware isn't open source; the kernel and userland drivers are. There's also RADV, an alternative to AMDVLK that was created while AMDVLK was closed source. Both are maintained.
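
    Since both can be installed side by side (each ships an ICD manifest that the Vulkan loader reads), a quick way to check which one you're actually using is to enumerate the physical devices and print their names; RADV identifies itself there. A minimal C++ sketch, assuming the Vulkan headers and loader are installed:

        #include <vulkan/vulkan.h>
        #include <cstdio>
        #include <vector>

        int main() {
            // A bare instance is enough; no extensions or layers needed.
            VkInstanceCreateInfo ci = {};
            ci.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
            VkInstance instance;
            if (vkCreateInstance(&ci, nullptr, &instance) != VK_SUCCESS) return 1;

            // Standard two-call pattern: query the count, then fill the list.
            uint32_t count = 0;
            vkEnumeratePhysicalDevices(instance, &count, nullptr);
            std::vector<VkPhysicalDevice> devs(count);
            vkEnumeratePhysicalDevices(instance, &count, devs.data());

            for (VkPhysicalDevice d : devs) {
                VkPhysicalDeviceProperties props;
                vkGetPhysicalDeviceProperties(d, &props);
                // RADV reports itself in the name, e.g. "AMD RADV VEGA10".
                printf("%s\n", props.deviceName);
            }

            vkDestroyInstance(instance, nullptr);
            return 0;
        }

    You can also point the loader at one driver explicitly by setting VK_ICD_FILENAMES to that driver's manifest, which makes comparing the two implementations fairly painless.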

    • makomk 5 years ago

      One of the obnoxious consequences of the firmware not being open source is that, even though both RADV and AMDVLK support GPUs as far back as the 7xxx series, the AMDGPU driver required to use them can't be enabled by default. The released firmware for the UVD video decode unit doesn't support the framebuffer addresses used by AMDGPU, meaning that it can't support hardware video decode until they release an up-to-date firmware: https://lists.freedesktop.org/archives/amd-gfx/2017-November...

      So far users have been waiting for over a year and heard only silence. I suspect, given their attitude to older hardware, the required firmware will never be released.

    • zepearl 5 years ago

      But is it correct to say that the firmware used under Linux is the same one that is used under Windows (and maybe macOS as well)... or not?

      (My personal understanding is that the "firmware" does the low-level stuff and the OS drivers provide the abstraction through their functions, but I might be terribly wrong.)

      • woodrowbarlow 5 years ago

        yes, the firmware is the same regardless of your OS. think of it like using an open-source client for slack: it still hits the API of a closed blob (the slack servers).

        • zepearl 5 years ago

          Thx!

          Then, if performance is "worse" or "better" on a specific OS, it's just because the code of the open-source part (kernel and/or userland programs) and/or the app (game/application/whatever) is not written as well as on the other OSes, right?

          Indirectly asking as well: even if the firmware is always the same one, is there no "part" of the firmware that is dedicated to only a specific OS?

          • woodrowbarlow 5 years ago

            performance differences come down to how the driver utilizes the firmware, or how the applications utilize the driver. there may be some other factors, like how the OS manages memory buffers.

            it's unlikely that the firmware has OS-specific code. it's more likely that the firmware exposes functionality that happens to be taken advantage of by one OS' drivers more than it is by another OS' drivers -- perhaps in part because of differences in driver execution models on different kernels. or sometimes (as with nvidia) because a proprietary closed-source windows driver was written by the company with access to private documentation of all the firmware's features while the community-written linux OSS driver was written with incomplete knowledge of the firmware's features derived from reverse engineering.

          • boudin 5 years ago

            Performance-wise, yes; logically the difference would come from the driver part.

            As for the firmware, it's hard to know; I think only someone from AMD can answer this. It's technically possible that part of a firmware is used only by the Windows driver, for example. That said, I have no idea if that's the case...

  • monocasa 5 years ago

    Some of the firmware has been reverse engineered, with community-supplied patches, but that's not supported at all by AMD.

    https://github.com/fail0verflow/radeon-tools/tree/master/f32

    What's interesting is that there's no clear reason for the firmware not to be opened, other than that it's probably a huge amount of work to document. It looks like an in-house design, so there shouldn't be IP issues, and it's pretty mundane stuff, so there shouldn't be trade secret issues either. : /

  • vardump 5 years ago

    I think open source GPU firmware is very unlikely to happen for any high-performance part in the near-to-medium-term future.

cr0sh 5 years ago

I'd love to try AMD video cards again, but what's recently held me back is that I sometimes play around with stuff like Tensorflow and other ML libraries.

They all seem to be geared toward CUDA, which of course is an NVidia-only thing.

I've never really looked deeply into it, but are there performant options, close to CUDA, that would allow me or others to use such ML libraries on AMD GPUs?
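
From what I gather, the closest analogue is AMD's HIP (part of the ROCm stack), which deliberately mirrors the CUDA API so that kernels port almost mechanically, and there's a ROCm port of TensorFlow. A rough sketch of what HIP code looks like, assuming ROCm and its hipcc compiler are installed:

    #include <hip/hip_runtime.h>
    #include <cstdio>

    // SAXPY kernel -- the syntax is identical to its CUDA counterpart.
    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        float *x, *y;
        hipMalloc((void **)&x, n * sizeof(float));  // cf. cudaMalloc
        hipMalloc((void **)&y, n * sizeof(float));
        // ... copy host data in with hipMemcpy (cf. cudaMemcpy) ...

        // Launch with 256 threads per block (cf. CUDA's <<<...>>> syntax).
        hipLaunchKernelGGL(saxpy, dim3((n + 255) / 256), dim3(256), 0, 0,
                           n, 2.0f, x, y);
        hipDeviceSynchronize();

        hipFree(x);
        hipFree(y);
        return 0;
    }

No idea how close the performance gets in practice for ML workloads, though; the library and tooling support seems to be the bigger gap.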

novaRom 5 years ago

I recently moved my home PC to the latest AMD APU (the latest Athlon). No CPU fan, because it's passively cooled, and no proprietary closed-source binary blobs anymore, because the AMD open source drivers work out of the box (latest Ubuntu).

newnewpdro 5 years ago

The more people support AMD by buying their hardware, the better the drivers will become. Obviously we should support the more open of the two options; it's not like AMD can't deliver satisfactory hardware.

  • ab5tract 5 years ago

    “But mah FPS!!” I have never understood the relative lack of loyalty amongst nerds. Demand more FLOSS, but buy NVIDIA for a few frames per second. Use Chrome instead of Firefox for a few milliseconds on render. Rejoice at clang while maintaining that GCC never did anything good for anybody.

    The whole point of libre software is that the choice of what you use matters, but I rarely see ethical considerations trump hot rodding.

novaRom 5 years ago

Nvidia will be forced to do the same pretty soon. The real threat to this duopoly (AMD, Nvidia) will come from Asia. Look at what's happening with SoCs in mobile phones in general and project it onto all the different types of silicon, including dedicated accelerators.

charliebrownau 5 years ago

I will be giving Linux another go with an R9 380 and kernel 5.1 later this year.

I gave Linux a go for 2 months in 2018, across 11 different distros; in the end I reverted back to Windows 7.

AMDGPU-PRO driver:

* Best FPS, 70-80 fps in most games

* No control panel / settings centre

* Only worked with the first screen (27" HDMI + 19" VGA)

* No WattMan

Open source driver:

* Worked on both screens

* No HDMI audio

* Sub-60 fps

* No control panel for settings