userbinator 6 years ago

This is sad. Very sad. It means "PCs" are only going to become more closed and proprietary than ever before, at an even faster rate.

"Legacy" is a good thing. It means simple, stable interfaces that have been around for long enough to be mature, well-understood, and relied upon by everyone. Some examples: A PS/2 mouse or keyboard doesn't require a full USB stack in order to work, nor can it exploit the system by becoming a different USB device and sending malformed packets. RS232 is simple enough that it's the de-facto "standard input/output" on embedded platforms. LPT/IEEE1284 gives you 8 bits of GP(I)O with ultra-low latency (microseconds vs milliseconds for USB.)

It's disgusting to see the "security" argument brought up again here, when it's all about corporate control over the users. To them, it's a "security risk" to not have secure boot. To the users, it's freedom.

The "advantage" of "smaller code size" is complete FUD. I'm pretty sure the ROM images have gotten much bigger since UEFI became the norm, composed of bloated compiler output instead of the beautifully efficient handwritten Asm that BIOSes used to be written in.

No DOS requirements for pre-OS validation/tools (try UEFI Shell or Python)

DOS is simple, which is why it remains so common for BIOS flashing, testing, etc. Replacing that with the horrendous bloat of UEFI Shell or Python!?!? Yuck!!

Linus Torvalds' famous rant is well worth reading: http://yarchive.net/comp/linux/efi.html

tl;dr: You can pry 7c00h and INT xx from my cold, dead hands.

  • Rusky 6 years ago

    Secure boot doesn't mean the user isn't allowed to replace the bootloader or kernel, and it is more secure.

    UEFI doesn't require USB or secure boot or a compiled language or Python or proprietary code.

    It would be nice to have something simpler than UEFI to replace the legacy PC boot process, but the point IMO is that we want to replace the legacy PC boot process. It's not as straightforward as "simple, stable interfaces that have been around for long enough to be mature, well-understood, and relied upon by everyone." It's a de-facto standard with a lot of room for things to go wrong, and they do.

    • userbinator 6 years ago

      Secure boot doesn't mean the user isn't allowed to replace the bootloader or kernel

      Unfortunately that's what it will likely turn out to be in practice.

      It's a de-facto standard with a lot of room for things to go wrong, and they do.

      ...as opposed to an even more complex standard? The saying about "known unknowns" and "unknown unknowns" comes to mind... there is literally tons of documentation out there about all the "legacy" devices in a PC, but far less about the newer proprietary bits.

      • johncolanduoni 6 years ago

        > Unfortunately that's what it will likely turn out to be in practice.

        Even Microsoft specifically requires hardware marked as "Windows compatible" on x86 to allow user-installed keys. I'd like some actual reasoning for "likely".

        • Santosh83 6 years ago

          Okay so how is an 'average' computer user supposed to install a Linux OS that they want (or another OS) on their PC without reading up on MoK keys and UEFI and delving into the key management interface?

          Yes, you can install the few 'blessed' distros like Ubuntu, Fedora or OpenSuSE, but what about distro X? Thus far, an average user need only pop in a CD to install Manjaro or FreeBSD or FreeDOS. Now...? Install their own keys? Sign their kernels and modules?

          This reminds me of all the anti-piracy measures. Sure, you can theoretically circumvent all the DRM. But is it practical for 90% of users? No.

          Sure, DRM is theoretically perfectly fine, as is Secure Boot. As is even ME. The theory is always fine. How it turns out in practice, not so much. We should be asking ourselves if this marked disparity is really just inadvertent.

          • Rusky 6 years ago

            Woah, wait. DRM and the ME are not fine in theory. Secure boot is.

            If the practice of installing your own keys is too hard, that's something we can fix. We can't fix DRM or ME without just removing them.

            • betterunix2 6 years ago

              Secure Boot is part of an effort to harden DRM on PCs; this is something Microsoft and Intel have pushed for over a decade, especially now that they have seen how much money is being made on mobile devices (where DRM was baked in from the beginning). There is not much benefit to users, because bootloader malware was never a major threat and malware remains a constant threat for users despite years of Secure Boot deployment (just the other day my mother-in-law asked us to deal with a ransomware'd surface pro). The real point of Secure Boot is to create a system where users do not have the freedom to install arbitrary software i.e. where all software must be installed from an app store, where it can be vetted to ensure that it does not allow users to do things like breaking DRM systems or blocking ads.

            • Santosh83 6 years ago

              Yup, that was mainly my point. The UI for easily installing your own keys needs to be more accessible to the average user. Also we need a Let's Encrypt-like non-profit, standardised signing body that can sign off all OSes, including Microsoft's, and not let the latter get a choke-hold on this vital area.

              I have to disagree with you. DRM and ME are both fine theoretically, the way I see it. Of course that's my point. Only theoretically. The way they're used in reality is far from okay, but that's not the fault of the idea in and of itself. It's just that industries and other powerful interests have twisted it into being their tool, instead of serving the users and artists.

            • digi_owl 6 years ago

              The line between the two is thin and ill focused...

          • johncolanduoni 6 years ago

            That’s my whole point; what is the justification for thinking the “practice” for Secure Boot is going to be so dystopian? I’ve been hearing the same thing about TPMs for ten years, and that never materialized. Considering even the specifically DRM-oriented features like trusted path still aren’t being used often by software that uses DRM, I think it’s jumping at shadows to cast everything with secure in the name as a DRM solution waiting in the wings.

          • Avery3R 6 years ago

            Just like you had to go into the setting menu to enable booting by usb, you go into the settings menu to disable secure boot

            • Santosh83 6 years ago

              It strikes me that this option was intended as a compromise for a certain transition period, after which it might be removed completely, at least on consumer boards. Besides, the point is to be able to use Secure Boot without having to play by Microsoft's rules and restrictions.

              Why not have an equivalent of Let's Encrypt for secure boot, i.e., a non-profit foundation or a cooperative committee that distributes the master key and allows everyone (including MS) to get their kernels signed? That would be the best of both worlds. Instead we're moving farther away from it gradually.

            • digi_owl 6 years ago

              Found getting into the settings menu on UEFI was an adventure all its own.

              Nope, hitting a key during boot does not do it.

              You have to hold a key while hitting reboot in Windows to get a menu that can trigger the UEFI settings.

              More and more I feel we are going through a generational change in the PC business, and I do not like what I am seeing.

              • janc_ 6 years ago

                You can disable fast boot permanently (and something like it existed on some pre-UEFI BIOS firmwares too, so it's nothing new).

              • Avery3R 6 years ago

                Hitting a key during boot works for me. Check your mobo's manual

                • digi_owl 6 years ago

                  I did, they mentioned a key, it didn't work.

        • betterunix2 6 years ago

          For now Microsoft insists on that, probably because they are afraid of antitrust litigation otherwise. The problem is that Microsoft can change their policy at any time; they already have a totally different policy for ARM.

        • SquareWheel 6 years ago

          I know that was true for Windows 8 approved PCs, but didn't they remove that requirement for Windows 10?

          • Santosh83 6 years ago

            And why exactly are Microsoft being allowed to dictate restrictions upon general purpose PCs manufactured by others? Why do other OS distributors need approval from Microsoft before their OS images can be used under Secure Boot, without the user having to install their own keys? We need to overcome these big usability hurdles, or PCs are going to become just glorified smartphones needing 'jailbreaks' before long. The writing is on the wall for those who can see it, IMO.

            • cwyers 6 years ago

              Microsoft dictates what a general purpose PC manufactured by others has to do to be compatible with Windows; vendors that don't care about being able to run Windows can refuse to abide by anything Microsoft says. (Apple doesn't meet Microsoft's standards, for instance.) Linux is an operating system designed to run on computers that meet Microsoft's standards. A Linux distribution could go the Apple route and define a different set of standards a computer has to meet to run that distro, so that only computers made to run that distro could run Linux. ChromeOS is a Linux that has done this.

              The commodity PC platform is basically a historical accident, where IBM used commodity off-the-shelf hardware, and licensed MS-DOS but neglected to get an exclusive license. So anyone that bought the right hardware and a license to MS-DOS could provide an IBM-compatible PC. Before the ascendance of the IBM compatible platform, you had manufacturers of general purpose PCs building incompatible machines. Now we have a market dominated by compatible machines. There's no ISO standard for general-purpose PCs that makes this happen, or any other independent standard. Windows Hardware Certification is the standard for commodity PCs.

              EDIT: To clarify, this is only true regarding x86. Linux on MIPS/POWER/etc. is conforming to some other standard, not one set by Microsoft.

              • digi_owl 6 years ago

                This ignores the one piece that was IBM proprietary, the BIOS, that they tried to defend strongly.

                But back then they didn't have software patents, and Compaq was able to prove that their BIOS was a clean room re-implementation. And from that point the cat was out of the bag.

                Never mind that IBM after that tried to go proprietary with the PS/2, and it bombed badly.

              • revmoo 6 years ago

                > Linux is an operating system designed to run on computers that meet Microsoft's standards.

                No, it isn't.

        • digi_owl 6 years ago

          I seem to recall they initially did not, and there was a massive outcry about it, so they relented.

          Sounds like one of those things that they will try to sneak back in every few years, and if the masses miss it just once it's game over.

      • tscs37 6 years ago

        >Unfortunately that's what it will likely turn out to be in practice.

        It should be mentioned that part of the Microsoft Certification requirements for UEFI is that you can install your own keys (which I did and do recommend);

        """ On non-ARM systems, the platform MUST implement the ability for a physically present user to select between two Secure Boot modes in firmware setup: "Custom" and "Standard". [...] It shall be possible for a physically present user to use the Custom Mode firmware setup option to modify the contents of the Secure Boot signature databases and the PK. [...] """

        So unless vendors are willing to give up the Microsoft Windows Certified badge and the ability to preinstall Windows, they will offer those options.
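        By the way, enrolling your own keys on a board in Custom Mode / setup mode and signing a kernel boils down to something like this (a rough sketch using efitools and sbsigntools; the file names are placeholders, and you would do the same dance for PK and KEK):

          # sketch only: make a db key, enroll it, sign a kernel (efitools + sbsigntools)
          openssl req -new -x509 -newkey rsa:2048 -nodes -days 3650 \
              -subj "/CN=my db key/" -keyout db.key -out db.crt
          cert-to-efi-sig-list -g "$(uuidgen)" db.crt db.esl
          sign-efi-sig-list -k KEK.key -c KEK.crt db db.esl db.auth   # assumes you control the KEK
          efi-updatevar -a -f db.auth db                              # append to the db variable
          sbsign --key db.key --cert db.crt --output vmlinuz.signed vmlinuz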

        >but far less [documentation] about the newer proprietary bits.

        http://wiki.phoenix.com/wiki/index.php/Category:UEFI_2.0 http://wiki.phoenix.com/wiki/index.php/Category:UEFI_2.1 http://wiki.phoenix.com/wiki/index.php/Category:UEFI_2.2

        I'm currently using the above links to write a small kernel, which compared to using BIOS is rather easy since it provides a very convenient trampoline to get basic services in the kernel setup without hassle.
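        To give a feel for how little you need to get going, the entry point of such a kernel stub looks roughly like this (a gnu-efi style sketch; a real loader still has to fetch the memory map and call ExitBootServices before taking over):

          /* minimal sketch of a UEFI application entry point (gnu-efi style) */
          #include <efi.h>
          #include <efilib.h>

          EFI_STATUS EFIAPI efi_main(EFI_HANDLE image, EFI_SYSTEM_TABLE *st)
          {
              InitializeLib(image, st);
              st->ConOut->OutputString(st->ConOut, L"loading kernel...\r\n");

              /* the firmware already provides console, block and file protocols here;
                 a real loader reads its kernel via the Simple File System protocol,
                 calls st->BootServices->GetMemoryMap() and then ExitBootServices()   */
              return EFI_SUCCESS;
          }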

        The legacy devices out there are usually very poorly documented too. The OSDev.org material on PS/2 keyboard IO runs to several dozen pages explaining how to properly drive the keyboard controller, or a floppy drive, or a harddisk. And it's still largely incomplete and only covers the most common cases.

        • bitwize 6 years ago

          It should be mentioned that as of Windows 10, Microsoft removed the requirement to allow you to install your own keys.

          • tscs37 6 years ago

            No, the current requirements still include this. On ARM platforms it has been removed for ages now.

            The quote in my previous comment has been directly pulled from the UEFI requirements on the microsoft website.

    • betterunix2 6 years ago

      "Secure boot doesn't mean the user isn't allowed to replace the bootloader or kernel,"

      Unless you're on ARM, right? What makes you think Microsoft will always be so kind to x86 users? What makes you think there will never be pressure from the music/movie/gaming industries to make PCs "more secure" by locking down bootloaders etc?

  • bearbearbear 6 years ago

    That's how they'll take away our last civil right.

    By protecting us from freedom. For our safety.

    They sure protected the hell out of Syria and Iraq.

  • rasz 6 years ago

    >LPT/IEEE1284 gives you 8 bits of GP(I)O with ultra-low latency (microseconds vs milliseconds for USB.)

    USB is also microseconds (hundreds of them, but still)

    • vardump 6 years ago

      No idea why you were downvoted. A USB high-speed microframe is 125 us. If the device is the only one on that USB controller, latency can be just that.

_kp6z 6 years ago

UEFI is a great example of second system syndrome. In the BIOS world, more featured boot loaders were chained in early (think pxelinux, grub, *BSD loaders etc) so even though it was primitive it wasn't the typical UX. UEFI is almost trollingly bad, the worst amalgamation of low level firmware, boot loader UX, and OS->firmware runtime services I've seen when contrasted to OpenFirmware, uBoot, petitboot.

OpenFirmware was a much more elegant technology sitting around for the lifetime of modern x86 but intel had to be different.

I like the direction IBM is going with OpenPOWER.. petitboot/kexec by default https://sthbrx.github.io/blog/2016/05/13/tell-me-about-petit... although all the firmware sources are on github so you could do whatever the heck you want. It makes intel look positively oppressive.

  • wolfgke 6 years ago

    > OpenFirmware was a much more elegant technology sitting around for the lifetime of modern x86 but intel had to be different.

    Let's rather look at the reasoning of the UEFI developers for why they have a different opinion on Open Firmware (page 5 of http://www.uefi.org/sites/default/files/resources/A_Tale_of_...):

    "Another effort that is similar in its role to the PC/AT BIOS-based platform occurred with Open Firmware (OF) [4] and the Common Hardware Reference Platform (CHRP). CHRP was a set of common platform designs and hardware specifications meant to allow for interoperable designs among PowerPC (PowerPC) platform builders. Open Firmware, also known as IEEE-1275 [4], has issues in that it is an interpreted, stack-based byte-code. Very few programmers are facile in Forth programming, and it is renowned as being “write-once/understand-never”, and having poor performance, and non-standard tools. Also, Open Firmware has a device tree essentially duplicating ACPI static tables. As such, the lack of Forth programmers, prevalence of ACPI, and the fact that UEFI uses standard tools and works alongside ACPI — versus instead-of — helped spell Open Firmware’s lack of growth. In fact, it is used on SPARC and PowerPC, but it is unsuitable for high-volume machines and thus prevent it from making the leap from niche server & workstation markets."

    • _kp6z 6 years ago

      That is almost entirely FUD, every Mac before the intel switch had OpenFirmware. FCode was elegant in that it is cross platform, and the option ROMs can work across multiple uarchs. The only valid point is that Forth is obscure. I don't know how much that matters in reality due to how obscure working at that level is itself, as an OS dev you're working on the device tree and runtime services in a language like C. But I will grant benefit of the doubt since I'm an OS dev and not an option ROM dev.. and say, well, Petitboot is the logical endgame as the industry has made Linux uber alles and you're going to be writing drivers for it anyway.

      • snuxoll 6 years ago

        Also, talking about the OF device tree duplicating functionality in ACPI like it's a bad thing. ACPI is a huge shitshow, if transitioning from legacy BIOS to something other than UEFI happened maybe we could have killed it off and done something better.

        With that said, UEFI isn't TERRIBLE - the complexity is hard to overlook, and for all the benefits boot services (among other things) can bring, they're rarely used by anybody but Apple.

        • wolfgke 6 years ago

          > With that said, UEFI isn't TERRIBLE - the complexity is hard to overlook

          UEFI is not actually that complex (though not a jewel of tininess). There are lots of features that are optional. So in principle it is possible to build a quite small UEFI implementation if desired. The problem is that most mainboard vendors deliver very bloated implementations. I actually can understand their reasoning: they only want to (barely) support one implementation. If a feature is left out, there will be customers that complain. On the other hand, if they leave it in, it "suffices" to add an option in the UEFI configuration tool that enables the user to disable the feature.

          • justinjlynn 6 years ago

            well, to be fair, they generally buy implementations from third party vendors. Those vendors throw things in for competitive advantage because some other vendor footed the bill for implementation and "why maintain multiple builds or a kconfig file". It's a bit of a crap spiral. If the implementations were open source with a strong consulting component for integration you'd probably see less non-removable bloat.

            • wolfgke 6 years ago

              > If the implementations were open source with a strong consulting component for integration you'd probably see less non-removable bloat.

              I believe just about all UEFI implementations are based on the open source (though not copyleft) TianoCore implementation.

              • justinjlynn 6 years ago

                That's probably true. This likely means that almost nothing is upstreamed, so it may as well be proprietary software once it flows through to the manufacturers and us. Copyleft is an incredibly important concept and this is a good illustration of why "restricted" freedom for some can result in greater freedom for end users and developers. Freedom from proprietary software is just as important as the freedom to create it.

      • monocasa 6 years ago

        Also, if it was really an issue, you could just compile something else to FCode. Pretty much every non-native language is being compiled first to a stack language that looks a lot like a shitty version of Forth as the IR.

        • wolfgke 6 years ago

          > Pretty much every non-native language is being compiled first to a stack language that looks a lot like a shitty version of Forth as the IR.

          It is not clear what you mean by that. I assume you want to hint that the CLR and the Java Runtime use a stack-based instruction set and are common compilation targets, which is true.

          But there are lots of other virtual machines that provide a runtime that are not stack-based. For example the implementation of the Lua 5.0 runtime is register-based. Another example of a register-based virtual machine is Parrot.

          • fasquoika 6 years ago

            To be fair, just about all of the most widely used bytecode VMs are stack-based. The JVM, CLR, CPython and YARV are all stack based. You're basically describing the exceptions

            • wolfgke 6 years ago

              > To be fair, just about all of the most widely used bytecode VMs are stack-based. The JVM, CLR, CPython and YARV are all stack based. You're basically describing the exceptions

              Be very cautious with this kind of statement. Dalvik (that is used on Android), LLVM (Low Level Virtual Machine) and Erlang's virtual machine are also register-based (I just forgot about them when writing the above post). So I would rather claim it is a 50-50. I could write something about the advantages and disadvantages of both approaches (stack-based vs register-based), but this would become off-topic.

              • exikyut 6 years ago

                > I could write something about the advantages and disadvantages of both approaches (stack-based vs register-based), but this would become off-topic.

                But still very interesting. And this subthread has been discussing Forth, so I argue that at least an overview discussion isn't out of order.

                As I understand it, stack-based software and hardware architectures have been shown to have poor performance when compared to traditional register-based approaches (I specifically read somewhere that even hardware has been shown to be suboptimal, but I unfortunately don't remember where, and the source didn't qualify if the stack architecture in question was pipelined or had other enhancements or if it was just a dumb basic 70s/80s design). In any case, modern (pipelined) register hardware doesn't particularly like the L1/L2/Letc thrashing that stack dancing creates.

                Obviously with registers you have a bunch of switches that have theoretically O(0) access time regardless of execution machine state. At least that's how I mentally model it, at least superficially; I presume 0 becomes 0.nnnnnnnn for fluctuating values of nnnn depending on pipeline filled-ness and other implementation details. (And I use floating-point here just as an aesthetic representation of hard-to-quantify derivations from theoretical perfect efficiency.)

                I feel like there's gotta be some kind of cool middle ground between stack-like architectures and register architectures that nobody's found yet, but I do wonder if I'm just barking up an impossible tree. Or is there any research being done in these areas?

                The main problem I see with stack architectures is that it's effectively impossible to optimize (ie build a pipeline) for. Because if all you're dealing with is the top $however_many_things_the_current_word_pops_and_pushes items on the stack (which, to clarify, the hardware can't even know because that information is locked away in the implementation of the currently-executing word), well... you're in an impossible situation. For want of a better way to express it, the attention span of the implementation is inefficiently small.

                Anyway, this is some of how I see it. What are your thoughts?

                • standupstandup 6 years ago

                  If you'd like to learn about a CPU arch that is neither register nor stack based, watch the videos for the Mill and in particular the belt.

                  • exikyut 6 years ago

                    Ooh, alright then. Thanks.

                • wolfgke 6 years ago

                  > > I could write something about the advantages and disadvantages of both approaches (stack-based vs register-based), but this would become off-topic.

                  > But still very interesting. And this subthread has been discussing Forth, so I argue that at least an overview discussion isn't out of order.

                  I think this whole topic is more complicated than the points you gave.

                  I start with an example: The JVM takes the stack-based instructions and JIT-compiles (in principle AOT-compiling is also possible) them into (register-based) native instructions of the CPU. For this of course lots of transformations etc. have to be done. So executing the stack-based VM instructions naively one after each other fits the CPU badly, but this does not matter - thanks to modern JIT compilers, which transform the code completely.

                  One clear advantage of stack-based VM instruction sets is that there are many fewer "parameters to decide about". If you work register-based:

                  - How many registers? 8? 16? 32? 256 (i.e. a large number that can nevertheless be reached by real, though artificial programs)? "Near infinite" (say 2^31 or 2^32)?

                  - What register sizes? ARM has only 32 bit (and in AArch64 64 bit) integer registers. x86 has 8, 16, 32 and 64 bit registers.

                  - Should one allow interpreting the lower bits of some large register as a smaller one? What shall happen if we load some constant into such a subregister: will the upper bits be unchanged (if done on a real CPU this can lead to a pipeline stall), zero-extended or sign-extended?

                  - What about floating point registers: should one be able to interpret them as integers (encouraged on SSE/AVX (x86), but discouraged on ARM)?

                  - If we consider SIMD registers to be useful: What size?

                  - Do we want 2-operand or 3-operand instructions: 2-operand instructions have the advantage that the graph coloring problem that is used for allocating CPU registers can be solved in polynomial time, since the graph is chordal. This is also (before AVX) the preferred instruction format for x86. 3-operand instructions have the disadvantage that the graph coloring problem is NP-hard, so that in practice heuristics are often used. 3-operand instructions are common on RISC and RISC-inspired processors (e.g. ARM A32, A64 instruction set; note however that I think T32 uses 2-operand instructions).

                  As I pointed out, this really large design space forces you to make lots of inconvenient decisions. I think this was a problem of the Parrot VM, which introduced lots and lots of different instructions. So if you want to keep the VM portable over lots of architectures, a stack-based approach is more convenient (I don't claim "better"). This was - I believe - one reason why the Java bytecode was designed stack-based.

                  On the other hand, if you do it the right way, register-based code tends to be more compact and is simpler to transform into machine code. These are surely central reasons why a register-based implementation was chosen for Dalvik (Android) and the Lua VM.

                  Conversely, to run stack-based code fast you typically have to do a lot more transformations on the code - which one would love to avoid, in particular on embedded/small systems. So in some sense one can argue that a register-instruction based VM is the more low-level approach to designing VMs - many more decisions to make (which ones are best tends to depend on the primary CPUs that you want to target), but fewer code transformations to do in the runtime.
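                  To make the contrast concrete, here is a toy example (my own, not taken from any particular VM): the expression a + b*c as stack-machine bytecode next to 3-operand register code, plus the kind of trivial dispatch loop the stack form allows:

                    /* stack form:          register form (3-operand):
                     *   push a               mul r3, r1, r2   ; r1=b, r2=c
                     *   push b               add r0, r3, r4   ; r4=a
                     *   push c
                     *   mul
                     *   add                                                  */
                    enum { PUSH, MUL, ADD, HALT };

                    static int run(const int *code, const int *consts) {
                        int stack[16], sp = 0;
                        for (int pc = 0;; pc++) {
                            switch (code[pc]) {
                            case PUSH: stack[sp++] = consts[code[++pc]]; break;
                            case MUL:  sp--; stack[sp-1] *= stack[sp];   break;
                            case ADD:  sp--; stack[sp-1] += stack[sp];   break;
                            case HALT: return stack[sp-1];
                            }
                        }
                    }

                  The stack encoding mentions no register numbers at all - which is the "fewer decisions" point - while the register form needs fewer instructions and is closer to what a JIT has to emit anyway.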

                  • exikyut 6 years ago

                    > I think this whole topic is more complicated than the points you gave.

                    And now I've read your reply I agree. Thanks very much for the additional considerations.

                    I had no idea the JVM applies analysis to the stack VM state to turn it into register-based code. I realize it's a JIT, but I never really thought through the implications.

                    Regarding register vs stack - don't stack based systems also have to decide about stack-item size? I'm not sure how this works in practice but surely size considerations get factored in at some point.

                    Regarding the 2/3-operand instruction problem, this is very interesting but I must admit I need to do some reading about graph theory. I do very vaguely understand it but, for example, https://en.wikipedia.org/wiki/Graph_theory doesn't mention the word "chord" anywhere.

                    This indeed is a complex problem, and thanks very much for illustrating the difficulty.

                  • comex 6 years ago

                    I don’t see how a stack-based architecture avoids the need to think about register sizes. You just get equivalent questions, like - what sizes of integers do you have primitive ops for? do you have separate pop/swap/drop/etc. for every possible size, and if not, what’s the standard size of a stack item?

                    • wolfgke 6 years ago

                      > I don’t see how a stack-based architecture avoids the need to think about register sizes. You just get equivalent questions, like - what sizes of integers do you have primitive ops for? do you have separate pop/swap/drop/etc. for every possible size, and if not, what’s the standard size of a stack item?

                      This is correct. But these are still fewer decisions, e.g.

                      - no number of registers,

                      - no reinterpretation of parts of registers,

                      - SIMD fits the stack-based model rather badly,

                      - stack-based VM instruction sets are typically of the kind "take two values from the top of the stack, do something with them and push the result back" (very inspired by Forth - but I don't know much Forth), see for example the Java bytecode instructions (https://en.wikipedia.org/wiki/Java_bytecode_instruction_list...) or CIL instructions (https://en.wikipedia.org/wiki/List_of_CIL_instructions and http://download.microsoft.com/download/7/3/3/733ad403-90b2-4...), so no worry about 2-operand vs 3-operand instructions.

              • monocasa 6 years ago

                I said non-native, sort of thinking of LLVM. And to be fair, Dalvik bytecode is compiled from JVM bytecode, so a stack based language was used as an IR in that language stack.

          • foota 6 years ago

            I believe that many compilers have an intermediate step that looks like a stack based language.

            • wolfgke 6 years ago

              > I believe that many compilers have an intermediate step that looks like a stack based language.

              I admit that in the '80s many compilers used something reminiscent of a stack based language for the code generation phase. The reason is that it is rather easy to write a code generator for an accumulator machine (do it as an exercise if you want).

              But this is not what typical modern compilers look like. Just to give one point of evidence beforehand: modern processors have many more general purpose registers (x86-64 has 16 and AArch64 has 32, though some have reserved purposes). So such code would waste those possibilities.

              A typical modern compiler looks like this (I am aware that your preferred compiler might have additional or somewhat different stages, but the following is typical):

              Frontend (very language-dependent):

                - Tokenize input
                - Parse input (or report syntactical errors) to generate parse tree
                - Generate Abstract Syntax Tree (AST)
                - Do semantic analysis/type checking
                - Transform program into Static single assignment form (SSA form)
              
              Middleend (not much dependent on language or machine)

                - Do optimizations on SSA code (e.g. liveness of variables/dead code elimination, constant propagation, peephole optimizations, perhaps replace whole algorithms)
              
              Backend (very machine-dependent)

                - Do machine-dependent optimizations
                - Do register allocation
                - Generate machine-dependent code
              
              As one can see: a stack-based intermediate step does not appear here (and is to my knowledge uncommon) - instead transformations on code in SSA form are common.
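              A tiny made-up example of what that SSA step looks like (every variable is assigned exactly once; merges become phi functions):

                /* source:             SSA form:
                 *   x = a + b;          x1 = a0 + b0
                 *   x = x * 2;          x2 = x1 * 2
                 *   if (p) x = 0;       x3 = 0              (taken branch)
                 *   y = x + 1;          x4 = phi(x2, x3)
                 *                       y1 = x4 + 1                         */
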
            • jcranmer 6 years ago

              Compiler IRs are often based on reflecting expression DAGs rather than manipulating a finite register set or an infinite stack.

              • exikyut 6 years ago

                Manipulating an infinite stack sounds like a really big world of pain.

                I realize you're only ever poking around near the top, but if the system has the chance to grow infinitely, well, it will.

      • feelin_googley 6 years ago

        fgen(1) FCode tokenizer

        ftp://ftp.netbsd.org/pub/NetBSD/NetBSD-release-8/src/usr.bin/fgen/Makefile ftp://ftp.netbsd.org/pub/NetBSD/NetBSD-release-8/src/usr.bin/fgen/fgen.1 ftp://ftp.netbsd.org/pub/NetBSD/NetBSD-release-8/src/usr.bin/fgen/fgen.h ftp://ftp.netbsd.org/pub/NetBSD/NetBSD-release-8/src/usr.bin/fgen/fgen.l

    • gioele 6 years ago

      > Open Firmware, also known as IEEE-1275, has issues in that it is an interpreted, stack-based byte-code. Very few programmers are facile in Forth programming

      And ACPI is based on an interpreted, non stack-based bytecode, AML. [1]

      There are one or two orders of magnitude more Forth programmers than AML/ASL programmers.

      [1] http://wiki.osdev.org/AML

      • kjkjhkjhkjhdsa 6 years ago

        Precisely.

        AML requires freaking garbage collection in the interpreter and has to deal with a zillion complex data types. AML is so crazily obscure and ugly with its 4 character identifier limit. It's all bad and ugly, and the people responsible for this monster should be forced to write AML interpreters in AML forever until they die.

    • kabdib 6 years ago

      I hate FORTH as a general purpose environment, and I've seen a lot of FORTH-based train wrecks written by FORTH fanatics. But FORTH really, really shines in the boot environment, it's a fantastic tool for setting up hardware and gluing things together.

      As for the arguments that it's got poor performance, I'd like to point out that the BIOS / UEFI based systems on most of the systems I work with take minutes to POST, tens of minutes for the bigger machines, and I think it's a clear case of not looking critically at their own work.

      • jandrese 6 years ago

        My favorite is waiting 8-10 minutes for UEFI to finish whatever the hell it is doing to have exactly one second to hit the appropriate F key to get into the boot options.

        • exikyut 6 years ago

          On servers, I presume?

          • userbinator 6 years ago

            That's probably due to a very thorough memory test, and it's not particular to UEFI either --- I have worked with older servers, with regular BIOS, that do the same thing. There may be an option to disable it.

            • kabdib 6 years ago

              Memory tests are a big part of it, but definitely not the only really slow component.

              (The 10+ minute boot time on our bigger servers already is the fast version of the memory test. I've never turned on the more exhaustive one).

              • exikyut 6 years ago

                Yeow!

                What are the other tests? I mean, there's the CPU, memory, the disks, PCI... surely you don't have SCSI, option ROMs should be quick... I'm genuinely curious.

                What sort of server is it, for reference? (Well, I'm really just wondering how much RAM it has.)

                And now I'm wondering how long the longer test takes!

                • jandrese 6 years ago

                  IBM x3550 M3s were really bad about this. They aren't crazy big servers--dual CPU, 32GB RAM, just a couple of disks. UEFI boot took forever for no discernible reason.

            • exikyut 6 years ago

              Makes sense. I should hope there is such an option, for OS/boot debugging etc!!

      • rootw0rm 6 years ago

        UEFI + hardware RAID can be especially annoying.

    • wmf 6 years ago

      That's pretty revisionist considering that OF predates ACPI. Just like all other stories of path dependence, Intel backed themselves into a corner and now they're stuck there forever.

      • wolfgke 6 years ago

        > Just like all other stories of path dependence, Intel backed themselves into a corner and now they're stuck there forever.

        Beside the "well-known" implementations of UEFI for x86-32, IA-64 (Itanium) and X64-64, there also exist official UEFI implementations for ARM AArch32 and ARM AArch64. According to https://en.wikipedia.org/w/index.php?title=Unified_Extensibl... there also exist inofficial implementations for POWERPC64, MIPS and RISC-V.

        One consumer example where UEFI on ARM processors was used was the ARM-based Windows RT laptops (they were not very successful in the market). Much more importantly an UEFI implementation is required for ARM's "Server Base Boot Requirements (SBBR) - System Software on ARM® Platforms" standard:

        > http://infocenter.arm.com/help/topic/com.arm.doc.den0044b/DE...

        (read section 3 and appendices A to D). So just about every ARM server uses UEFI firmware.

        • _kp6z 6 years ago

          Going back to the intel FUD paper, if you consider volume shipment as success, as in their document, then the U-Boot-looking things with FDTs are the most successful paradigm by orders of magnitude.

          The kind of systems using FDTs aren't going to switch I don't think. I have to imagine that ARM64 UEFI was done for marketing and by clueless product management. The CPUs and integration of ARM servers have been so pathetic, nobody was buying, and they were fumbling around at this level while ignoring the elephant in the room. TX2 and Centriq are the first realistic implementations for general purpose servers, and now they are unfortunately saddled with UEFI and ACPI.

          This was a rare 2-3 decades kind of mistake. Not a lot of software gets that privilege.

    • raverbashing 6 years ago

      You know what else is a stack-based byte code? JVM

      They could have had an easier to read language on top of that VM (could be Python/Lua or something else)

    • xorgar831 6 years ago

      Fair enough, but why is there no standard shell or alternative interpreter?

      • wolfgke 6 years ago

        > Fair enough, but why is there no standard shell or alternative interpreter?

        There actually exists the UEFI Shell, but not every UEFI implementation has it built in. For example the UEFI implementation that Intel provides for the Minnowboard Max (now EOL) and Minnowboard Turbot does provide an UEFI Shell.

        If the producer of the mainboard/laptop has not built in the UEFI Shell, there still exist options to start an UEFI Shell binary as an ordinary UEFI application (just as a bootloader etc. is also just an UEFI application):

        > https://superuser.com/a/1057594/62591
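        Concretely it boils down to something like this (sketch; device and paths are just examples, Shell.efi comes from e.g. an EDK2/TianoCore release):

          # sketch: drop the shell on the ESP and point a boot entry at it
          cp Shell.efi /boot/efi/EFI/tools/shell.efi
          efibootmgr -c -d /dev/sda -p 1 -L "UEFI Shell" -l '\EFI\tools\shell.efi'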

        In principle it should even be possible to write an alternative shell for UEFI and start it this way, but I am not aware that someone has written one.

  • drewg123 6 years ago

    I recently suffered through a firmware update done via the UEFI command line, and it was incredibly painful. Every time I need to use the UEFI command line (normally for a firmware update) I want to scream.

    BTW, don't forget about DEC's "SRM" firmware, which was quite nice. It was simpler than Open Firmware, and made a hell of a lot more sense than UEFI. After almost 15 years away from DEC Alphas, I still remember most of the commands. SRM's "lfu" beats the hell out of EFI updaters I've used. And I fondly recall that you could break into the SRM and look at the current value of registers, and even (if you were running DEC Unix) force a crashdump. Kind of like ddb on FreeBSD, but built into the firmware.

    • wolfgke 6 years ago

      > I recently suffered through a firmware update done via the UEFI command line, and it was incredibly painful. Every time I need to use the UEFI command line (normally for a firmware update) I want to scream.

      What was so painful about updating the firmware via the UEFI Shell? I do/did it all the time this way for the diverse Minnowboard variants and cannot claim that it was painful.

      • drewg123 6 years ago

        In general, because the EFI shell sucks and was designed by committee. On a machine with 36 disk drives, it was not easy to find the drive with the UEFI partition that held the firmware update. That's why I mentioned SRM -- the firmware updater built into SRM would try all the drives attached to the system.

  • pera 6 years ago

    Yes! I wish more people knew how cool and powerful OpenBoot was, it even had an interactive debugger that included a Forth decompiler. I spent so much time playing with it years ago on my old SPARCstation... I would love to see something like that included in some new SBC :)

    • coling 6 years ago

      I agree. It was very empowering to be able to hit stop-a and get a backtrace with the kernel, user code, etc.

      • tinus_hn 6 years ago

        Also so interesting to disconnect your terminal and see your server hang until someone tells it to continue

  • mjevans 6 years ago

    I've been meaning to write this up more properly for a long time, and this reminds me of my pains in this area.

    This is more of an off-the-cuff draft of my ideas and opinions in this area.

    To start with, it would be REALLY helpful if there were a physical mode switch for maintenance tasks. (On desktops I think it should be like an 'ignition key', only with the key being a generic shape that could be substituted by a flat-head screwdriver or any custom shape someone might buy as a keychain thing at a gas station/convenience store. For thinner portable devices maybe an SD card where all the data pins are shorted to ground or a similar data cable. Whatever it is, the standard should be dead simple and generic. )

    Lacking an above maintenance mode, the system should assume it's in such a mode by default (current state of affairs).

    When in that maintenance mode the basic firmware should:

    >> If /only/ bootable media with previously authorized keys is present and the last boot attempt was more than 5 min* (configurable time per boot profile) ago: attempt to boot.

    >> If the last boot was less than that time, follow secondary boot options (might be a local checking stub, might be a URI to offer a list of PXE style blobs from, etc * * ).

    >> If bootable media is present, but has a 'signing key' not in the local approved cache...

    >>>> Prompt the user with a description of the issuing key, the fingerprint of said key, cross-reference it against previously installed authorities to see what their opinion of the key is.

    >>>> Allow the operator to install the indicated key and/or CA (and online cross-check / maintenance image locations) in to the trusted keys storage area.

    In all cases the end user should be the highest authority and trusted operator.

    The operator SHOULD be able to remove (or at least disable) (even all) pre-installed CA/etc.

    The operator MUST be able to add any additional authority they designate.

    For TPM/DRM things, operator added authorities /may/ maintain strong signing verification for loaded code (or not), and the 'strong signing' flag would only indicate if the path was trusted or not.

    The way TPM/DRM would actually work is to verify the integrity from the ground up, in its side channel. The host system MUST also be able to operate in a mode where that side channel is completely disabled, or replaced with a non-DRM-trusted side channel (corporations might want to use such equipment for remote support operations).

    * * This is intended to be an 'advanced' system checking and repair utility. It should do something like collect a list of 'core' things that it thinks are installed, and check that against the online repository. It would operate by downloading lists of files (per installed thing and update set), then compile an in-memory database of those items and their signatures in at least two hash algorithms and verify that the local storage matches. If there's a difference it should offer to download the expected files instead, and ask the user if they want to upload the different, possibly infected, files.

  • mtgx 6 years ago

    > but intel had to be different

    Or maybe they used it for the same reason they built the ME.

hd4 6 years ago

Sorry to go straight off-topic, but if anyone from AMD is reading, please remove the PSP from future machines and/or give us the ability to remove it from existing ones. This is a golden opportunity (with the Intel ME having been shown to be broken recently https://news.ycombinator.com/item?id=15656931 ) you really shouldn't pass up.

  • roblabla 6 years ago

    They already said they wouldn't open source it (after suggesting they might). I doubt they'll ever think about removing it.

    [0]: https://news.ycombinator.com/item?id=14803373

    • hd4 6 years ago

      Companies have been known to be astute and flexible when it comes to spotting opportunities especially when their competitors are making very publicized mistakes. It remains to be seen whether AMD is one of those.

    • monocasa 6 years ago

      Being open source doesn't really matter; it's having access to the keys. And they're never going to give those up because it would break the chain of trust for their DRM.

      • phkahler 6 years ago

        I don't want their DRM or their DRMed content. I'll gladly buy a computer for my own uses that doesn't have those "features".

  • monocasa 6 years ago

    They're not going to. The PSP has a hand in DRM (it runs 'trustlets' in a container), and DRM can only work through obscurity currently.

    • the8472 6 years ago

      > or give us the ability to remove it from existing machines

      DRM would still work by default.

    • kobeya 6 years ago

      And DRM is necessary to AMD’s bottom line because...?

      • jandrese 6 years ago

        People won't buy AMD chips if they can't play Blu-rays or Netflix or use commercial software?

        It's not a thing today, but it's not hard seeing the world go that way in the future once the legacy BIOS uses are finally aged out.

        • kobeya 6 years ago

          Blu-rays and Netflix work just fine on my old laptop that doesn't support secure enclaves.

          The onus is on us to make sure that a future where we are locked out of the devices we own does not come to pass.

          • wolfgke 6 years ago

            > Blu-rays and Netflix work just fine on my old laptop that doesn't support secure enclaves.

            4k Bluray requires Kaby Lake or newer (I don't know whether it works on Ryzen or not) and HDMI 2.0a with HDCP 2.2.

            • Freak_NL 6 years ago

              We're not even getting anything over 720p on Linux.

              1080p requires Internet Explorer, Edge, or Safari on Windows or MacOS¹.

              4k in the webapp requires Edge in addition to the hardware requirements wolfgke lists.

              1: https://help.netflix.com/en/node/23742

            • anthk 6 years ago

              Hah, and people still calling me a pirate for watching 4K MKV movies... under a SandyBridge.

            • deniska 6 years ago

              * Laughs in pirate

          • monocasa 6 years ago

            PSP is different than secure enclaves.

            • johncolanduoni 6 years ago

              This is my biggest frustration with all of this stuff. Every hardware feature with a word like "security" in it is now treated as tantamount to a new Intel ME or "trusted path" implementation.

              • yuhong 6 years ago

                My favorite is how PSP is confused with DASH, the thing that actually does remote management.

            • kobeya 6 years ago

              Um, PSP implements a secure enclave capability, so no that’s not correct?

              • monocasa 6 years ago

                Secure Enclaves on AMD and Intel typically refers to SME/SEV and SGX.

        • dingo_bat 6 years ago

          > It's not a thing today, but it's not hard seeing the world go that way in the future once the legacy BIOS uses are finally aged out.

          Agreed. Just look at the root situation in android. Banking apps, netflix and even snapchat won't work on a rooted phone. It's absolutely bullshit, but it's normal now. Same can and will happen for PCs.

  • dvddgld 6 years ago

    I assume that if this was on the cards at all they would've done it earlier. Let's keep our fingers crossed though, it could make a big difference in the direction of technology (and thus the world) from here on out.

  • ajross 6 years ago

    Not that I disagree, but you realize that point about ME & PSP (which are separate controllers not part of the main boot CPU) isn't particularly relevant to the linked article, which is about removing the 16 bit BIOS boot path from future firmware? By the time you're running real mode IBM PC boot code, the ME & PSP have long since come alive and sorted things out for boot configuration.

  • craftyguy 6 years ago

    Why would they care? The folks asking for this are a tiny (tiny) minority of the market.

    • snuxoll 6 years ago

      It's a huge potential win for cloud and government contracts. Amazon, Google, Microsoft and the US Government probably aren't too keen on having ring -3 code running in their datacenters waiting to be exploited somehow.

      • microcolonel 6 years ago

        > Microsoft and the US Government probably aren't too keen on having ring -3 code running in their datacenters waiting to be exploited somehow.

        Which is why they have a special configuration for that, sold to them by Intel.

        • Mindless2112 6 years ago

          Source?

          coreboot was initially developed at LANL, a US Government laboratory. It's worth considering why that might be.

          • jabl 6 years ago

            Search for Intel HAP (High Assurance Program). It's a magic killswitch that disables the ME, provided officially only to three-letter agencies and such, of course.

            As for coreboot, keep in mind that it was originally developed long before the ME was a thing. IIRC the original motivation was that BIOS sucks. Which it does, sure, but that's orthogonal to the ME.

    • erikj 6 years ago

      Google is asking for that (or at least trying to remove Intel ME from their servers in the labs) AFAIK.

forapurpose 6 years ago

Helping to secure the boot process is valuable. My main concern about UEFI is end-user control, and one way UEFI reduces end-user control is through complexity.

Think of what someone new to IT had to learn 20 years ago about bootup and compare it to now. UEFI is far more complex than BIOS; IIRC the spec is over 2,000 pages. I'm a professional and I've spent significant time looking into UEFI, and I don't feel that I really grok the whole subsystem. What is some kid going to do? And add to that all the other 'new' subsystems in the boot process, from TPM to ME to AMT to TXT ... someone would need to be a full-time 'boot specialist' to grasp it all, keep up with them, and understand how they interoperate.

ChuckMcM 6 years ago

I find this the inevitable "next step" in the move toward appliance mode that these machines are making. And like tranches of collateralized debt obligations, it's harder and harder to figure out exactly what you're buying into when you decide to get one of these machines. On slow days I try to imagine what I would really "like" here. One of the things I would like would be a stance on surveillance that would protect me from commercial surveillance that I had not explicitly agreed to, and a way for individuals to take action against people who violated that stance.

keiyakins 6 years ago

How about 'there's no good replacement yet'? All that the firmware should be doing is making sure the hardware is intact enough to load the next stage, then finding, loading, verifying (at the very least, checking signatures for accidental modification, and ideally checking the keys against ones that require physically moving a jumper or dip switch to change), and executing the first on-disk stage. MAYBE initializing a very basic text mode so it can show errors, MAYBE a halfassed keyboard handler to let you help it out if it gets confused.

Granted, this does mean it has to have some sort of bus it can use to talk to various storage devices, but even this should be as simple as possible - you can switch to a more complex but faster one later, you just need to be able to load a couple kilobytes.

cjensen 6 years ago

There sure was a lot of "people use legacy for specific reasons" in there, but they don't mention what those reasons might be -- except for Linux support.

Kinda hard to fix the "last mile" when you don't know anything about the last mile.

  • josteink 6 years ago

    > they don't mention what those reasons might be -- except for Linux support.

    I UEFI boot all my Linux systems.

    What FUD are you on about?

    • cjensen 6 years ago

      If you Read The Fine Article, it says that people disable UEFI for Linux. And, like you, they point out that secure boot can be disabled instead.

      I pointed out that they list zero additional reasons people give for disabling UEFI, so it is hard to evaluate how to fix the "last mile" for those cases.

      Calling that FUD when it is literally in the article is unkind at best. I shouldn't have to reiterate stuff already said in the article in order to avoid hurting the feelings of Linux users who haven't.

      • josteink 6 years ago

        > And, like you, they point out that secure boot can be disabled instead.

        Why would I do that? All Linux distros I use work fine with Secure Boot.

    • e12e 6 years ago

      Actually, with all the industry support for Linux, I'd be more concerned with minix (ironically, as the Intel ME runs minix), free/net/openbsd, haiku, new experimental oses (like new stuff written in rust), new hypervisors (like xen, but new projects).

      Of course, as long as "Linux support" means "load a signed, extensible, Turing-complete bootloader that can do anything" - people running another OS can continue to do that, with no help from the hw in securing their FDE encryption keys...

    • thinkloop 6 years ago

      Pls HN, don't downvote straightforward questions

  • volume 6 years ago

    "people use legacy for specific reasons" was for people like me. I have a note in our company wiki to disable UEFI because it broke our PXE deployments.

    Now from this PDF I am forced to harness the power of Google to understand why.

    • volume 6 years ago

      Ah ha - I had been configuring syslinux to serve out pxelinux.0 only and not pxelinux.efi. Could they have just thrown that one-liner into one of the slides?

      http://www.syslinux.org/wiki/index.php?title=PXELINUX#UEFI
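      In case it helps anyone else: the usual trick is to hand out a different boot file based on the client architecture option, e.g. with dnsmasq as the (proxy)DHCP server - a rough sketch, the file names depend on your syslinux layout:

        # rough sketch for dnsmasq: pick the boot file via DHCP option 93 (client architecture)
        dhcp-match=set:efi-x86_64,option:client-arch,7
        dhcp-boot=tag:efi-x86_64,efi64/syslinux.efi
        dhcp-boot=tag:!efi-x86_64,bios/pxelinux.0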

      edit: on 2nd thought I need to try this out to verify it end to end. Maybe there's some other step.

      Or maybe this is yet another reason to embrace AWS. I get to declutter the UEFI/BIOS topic out of my mind.

  • dfox 6 years ago

    There are end users, and then there are PC/motherboard manufacturers, who usually have a large collection of DOS based tools (very often composed from layers upon layers of horrible hacks) for testing and/or initial setup.

  • eikenberry 6 years ago

    I haven't switched yet because I keep forgetting to create an ESP partition when I upgrade.

jgowdy 6 years ago

Everyone is bitching about how UEFI makes them not the owner of their computer anymore, as if they’re using an open source BIOS and ever had any firmware layer control over their PC.

The fact that you can argue that with BIOS you have control means you don’t really understand Intel ME and the other firmware components that actually run your PC. If you want an open platform, Intel and AMD aren’t it. If you want a modern functional platform, Intel and AMD are it.

This is why I have such high hopes for RISC-V.

  • sigjuice 6 years ago

    I don't understand how RISC-V will lead to open platforms. Vendors of RISC-V SoC's will most likely include their own Intel ME type bullshit.

Gonzih 6 years ago

I run Linux in legacy mode on my 4 machines. UEFI for me was extra complexity without any gain. Sad to see that UEFI is going to be the only option in the future.

  • BenjiWiebe 6 years ago

    I've been booting with the kernel as an EFI stub, and I love it. I can copy a new compiled kernel to the right directory, and reboot into it. No grub involved. And to edit the boot menu, or just change what I'll boot into next, is the handy efibootmgr tool.

    • TheDong 6 years ago

      Unfortunately, that means you don't have an initrd, so you can't do LUKS encryption of your root, support certain filesystems as root (like ZFS, mdadm RAID), or various other things.

      efistub kernels may work for some people, but they're not a generic solution.

      • ronsor 6 years ago

        That's totally false. Just yesterday I booted a kernel using the EFI stub and was able to use an initrd.

      • yrro 6 years ago

        You can't stick an initrd on the end of an efistub kernel?

      • exikyut 6 years ago

        Wait. Couldn't you compile the initrd into the kernel?

0x0 6 years ago

Does this mean they're also looking to rip out 16bit real mode support entirely from their x86/x86_64 CPUs?

  • dfox 6 years ago

    There isn't much reason to do so. The additional resources needed to support the 16-bit modes probably don't exceed a few hundred transistors of logic and a bunch of words of microcode. The data path is completely the same; there are only a few control differences (loads to selector registers set offset=value<<4; limit=0xffff; size=16bit instead of consulting descriptor tables).

    On the other hand, an x86_64 CPU could be significantly simplified by ripping out both the 16-bit and 32-bit modes, and thus all the selector/descriptor segmentation logic, and by replacing the IDT with something simpler and less general.
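
    To make those control differences concrete, here's a rough C model of a selector load in each mode (just a sketch - the struct and function names are made up, not from any real implementation):

      #include <stdint.h>

      /* Hypothetical model of a segment register's hidden ("shadow") part. */
      struct shadow_desc {
          uint32_t base;    /* the "offset" above */
          uint32_t limit;
          int      size32;  /* default operand/address size */
      };

      /* Real mode: no table lookup at all, just a shift and two constants. */
      void load_selector_real(struct shadow_desc *d, uint16_t sel) {
          d->base   = (uint32_t)sel << 4;  /* offset = value << 4 */
          d->limit  = 0xFFFF;
          d->size32 = 0;
      }

      /* Protected mode: the same operation instead walks the GDT/LDT,
         checks DPL against CPL, and only then fills in the shadow fields. */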

    • jcranmer 6 years ago

      The x86 processor basically has 4 modes: real mode, virtual 8086, protected, and 64-bit mode. There's no real distinction between the protected 16-bit and 32-bit modes, it's just a bit (a few, actually) in the descriptor saying what the default size is for operands. The 64-bit mode is truly distinct however; note that you can't access rax from 32-bit or 16-bit code, but you can access eax from 16-bit code.

      It's worth pointing out that x86-64 still retains segments, although in a far more limited capacity (fs and gs are set up as a simple linear base and are used mostly for thread-local addresses).
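
      For example, on a typical x86-64 Linux toolchain a C thread-local ends up addressed relative to %fs (a small sketch; the exact output depends on the compiler and the TLS model it picks):

        /* Build with e.g.: gcc -O2 -S tls.c  and look for "%fs:" in tls.s */
        __thread int per_thread_counter;

        int bump(void) {
            /* typically compiles to something like:
               movl %fs:per_thread_counter@tpoff, %eax */
            return ++per_thread_counter;
        }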

      • dfox 6 years ago

        I would say that there are five modes; the additional one is compatibility mode, as it is sufficiently different from both 64b mode and protected mode.

        To summarize:

        - (un)real mode: selector loads directly set shadow offset, IDT is array of CS:IP pairs (unreal mode is jargon for real mode that somehow has "impossible" values in shadow descriptors, notably the state of i386 after reset)

        - protected mode (CR0.PE=1): selector loads consult LDT/GDT, call gates are supported, IDT is descriptor table

        - vm86 mode (EFLAGS.VM=1): selector loads directly set shadow register, IDT is descriptor table

        - 64b mode (EFER.LMA=1, CS.L=1): shadow offsets and limits are ignored except FS/GS, selector loads do not check permission flags. IDT is descriptor table. Only 32b call gates are allowed and redefined to have extended format with 64b offset. EFLAGS.VM has no effect.

        - compatibility mode (EFER.LMA=1, CS.L=0): segmentation is enabled, selector loads work as in protected mode. IDT is descriptor table. Only 64b call gates supported. EFLAGS.VM has no effect.
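
        Expressed as code, picking between those five modes comes down to a handful of bits (a simplified sketch that ignores SMM and treats the flags as plain booleans):

          /* Which of the five architectural modes are we in? (simplified) */
          const char *cpu_mode(int cr0_pe, int eflags_vm, int efer_lma, int cs_l) {
              if (!cr0_pe)   return "(un)real mode";
              if (efer_lma)  return cs_l ? "64b mode" : "compatibility mode";
              if (eflags_vm) return "vm86 mode";
              return "protected mode";
          }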

        • wolfgke 6 years ago

          > I would say that there are five modes, the additional one is compatibility mode as it is sufficiently different from both 64b mode and protected mode.

          I would tend to add SMM (which runs in ring -2) to the list. Also (here I am not aware of the details) what about the mode that hypervisors run in (e.g. ring -1)?

          • dfox 6 years ago

            The concept of rings is a popular simplification of how the i386 protection works and is mostly orthogonal to these five architectural modes; also, the negative rings are purely abstract and have nothing to do with how the hardware really works.

            i286 protected-mode protection works by comparing privilege levels (~rings, 0-3) of various things on certain operations (mainly I/O and loading selector registers). For direct programmed I/O to be allowed, EFLAGS.IOPL has to be >= CS.DPL (also called CPL, for Current or Code); for a descriptor to be accessible, its DPL has to be >= CPL; gate descriptors are a special case that allows CPL to change by doing a CALL FAR through them (JMP FAR is AFAIK also possible but not especially useful).

            i386 extends this with another layer in the form of paging; amd64 long mode works only with paging enabled.

            SMM is then another layer on top of that, which is like having an additional external MMU (and in fact the SMI logic usually resided in the chipset and even today is conceptually external to the CPU). The architectural mode after entry to the SMI handler is essentially normal real mode and can be changed by the handler into any desired mode, with one slight caveat: RSM unconditionally returns from SMM, restoring the saved state regardless of what the handler left behind.

            There are various extensions to i386 for hardware virtualization, all of which work by allowing the creation of a process that perceives itself as running with CPL=0 while that is not really true (in terms of rings it is more like the hypervisor runs in ring 0 and the guest kernel in ring 0.5 or something like that). This involves faking and duplicating some architectural state for the guest process at the hardware level; the faked parts are necessary to make this possible, while the duplication is needed for performance.

            These various mechanisms work at once, mostly without regard to the state of the other layers. Obviously there are some combinations that do not make much sense or are not directly achievable, but the only combination that is explicitly forbidden is long mode without paging (as EFER.LMA, which actually controls it, is forced by hardware to be EFER.LME & CR0.PG).
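
            Reduced to code, the two checks above look roughly like this (a sketch only; real descriptor checks also involve RPL, conforming segments and more):

              /* Privilege levels: numerically lower = more privileged (0..3). */
              int io_allowed(unsigned cpl, unsigned iopl) {
                  return cpl <= iopl;               /* direct IN/OUT permitted */
              }

              int data_descriptor_accessible(unsigned cpl, unsigned rpl, unsigned dpl) {
                  return dpl >= cpl && dpl >= rpl;  /* DPL >= max(CPL, RPL) */
              }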

            • wolfgke 6 years ago

              Thanks for this overview.

    • 21 6 years ago

      According to Wikipedia Windows 64 bit runs 32 bit apps by switching back to 32 bit mode.

      > which switches the processor hardware from its 64-bit mode to compatibility mode when it becomes necessary to execute a 32-bit thread, and then handles the switch back to 64-bit mode.

      https://en.wikipedia.org/wiki/WoW64

      • dfox 6 years ago

        "Compatibility mode" is sub-mode of long mode which has limited support for 16/32b i386-style memory segmentation (notably there is no support for vm86 code segments and gate descriptors have different format and can only reference 64bit "segments").

        It is clear that AMD's intention here was to implement just enough segmentation in long mode to allow compatibility with segment-aware user-space code (e.g. mixed 16/32b Windows userspace), but there does not seem to be any OS that actually uses 16bit descriptors in long mode (probably because the mechanism is significantly different from i386's 16/32 compatibility). For compatibility with "normal" flat-address-space user-space code the whole mechanism could be significantly simpler: have the same "segmentation" behavior as 64b mode, have a 32b EIP pushed on the stack by CALL/RET, and trap on anything that tries to load a new selector (i.e. JMP/CALL/RET FAR, MOV to a segment register). Even the REX prefix could be supported in such a "limited compatibility mode", and we would not have to invent things like the x32 ABI (which is to some extent the same idea, although with different motivations, i.e. run new, ABI-incompatible 32b code in 64b mode to gain access to the new registers and instructions while having 32b pointers and thus a reduced memory footprint).
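
        Incidentally, the x32 idea is visible from plain C: the same source gets 32-bit pointers under gcc's -mx32 and 64-bit pointers under -m64 (assuming a toolchain and kernel built with x32 support, which not every distro ships):

          #include <stdio.h>

          int main(void) {
              /* gcc -m64  file.c : prints "8 8" (normal x86-64 LP64 ABI)    */
              /* gcc -mx32 file.c : prints "4 4" (64b mode, 32-bit pointers) */
              printf("%zu %zu\n", sizeof(void *), sizeof(long));
              return 0;
          }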

      • kobeya 6 years ago

        I don't think this is correct. IIRC moving to 64-bit mode is a one-way street. Rather, there is a virtual 32-bit compatibility mode provided in the 64-bit environment, and that is probably what is being referred to here.

  • 0xcde4c3db 6 years ago

    I think they'd get pushback on that from users running legacy DOS/Win3.x/Win9x stuff in VMs. Most of those aren't so performance-critical that they really need to run as VT-x guests, but Intel has had multiple run-ins with the perils of "buy our new chip because it will make your applications... uh... slower".

  • maxharris 6 years ago

    I would happily pay more for a computer that lacks support for this mode.

    • 0x0 6 years ago

      Why?

      • maxharris 6 years ago

        The real advantage to cutting legacy is that it frees up engineers to work on the right problems - current and future needs - instead of verifying that the long tail of ancient stuff still works on every new thing built.

        What reason is there for legacy support in the hardware and OS when we have all sorts of emulation options already?

        • userbinator 6 years ago

          is that it frees up engineers to work on the right problems - current and future needs

          ...you mean the problems that they themselves introduced by, for example, trying to create this ridiculous complexity of UEFI? That sounds like a make-work situation to me.

          What reason is there for legacy support in the hardware and OS when we have all sorts of emulation options already?

          "emulation" is never 100%.

          • maxharris 6 years ago

            Nothing lasts forever. Intel architecture will die someday. Hopefully cleaning up a bit will help it live a little longer.

      • maxharris 6 years ago

        I want fewer knobs, switches, modes. Focusing on the future and breaking sharply with the past is a feature to me.

      • raverbashing 6 years ago

        Because it's a piece of hw that's only used for 1s before your OS boots.

        Wonder why ARM consumes much less than x86? That's one (small) reason why.

        • jcranmer 6 years ago

          The power consumption of the real-address 8086 mode in x86 is probably less than the difference between, say, Cannon Lake and Skylake. What you would end up simplifying is mostly caught up as a few bit-sense lines in the decoder to control some sign-extend/zero-extend units, and these lines aren't being switched. There's no impact outside of the instruction decode/fetch and the load/store units, and these are only minor impacts to what's not really a major power draw in the first place.

          I will also point out that ARM also has similar 16-bit/32-bit control logic in its ISA: Thumb. So ARM is duplicating this bit of power draw anyways.

          • raverbashing 6 years ago

            > There's no impact outside of the instruction decode/fetch and the load/store units

            But the issue is not only in the processor, it's in having an entire PC-XT available in the hardware so that the BIOS and boot loader can boot up like it's 30 years ago. (A20 emulation, anyone? OK, Haswell removed it. Daisy-chained interrupt controllers? DMA controllers from the time ISA buses were all the rage?)

            Thumb does not deal with a 16-bit "mode"; it's just an instruction set with 16-bit instructions, but each one corresponds to a 32-bit one. It's really not comparable to 16-bit mode on x86.

        • kllrnohj 6 years ago

          > Wonder why ARM consumes much less than x86? That's one (small) reason why.

          Consumes much less what than x86?

          Power? Quad A57s @ 1.9GHz hit the 7 watt mark. You want performance, you pay for it in power. At the end of the day the ISA doesn't actually change any of that.

          Die size? Sure, but "high end" ARM chips still have pathetic 512KB or 1MB L2 caches with no L3. Pretty much the same as above - you want performance, you pay for it in transistors. ISA doesn't change the end result much.

          If you want a real fight of ARM vs. X86 just look at the server market where ARM has attempted to challenge Intel. But Xeons have not just better overall performance, but also better performance/watt.

          It's all about who can make the best transistor, not which ISA is the most elegant or simplest or any of that. Those days are long, long gone. The complexity cost of legacy or "bloated" instructions/features is such an insignificant amount of the total transistors it just doesn't matter to perf or power.

          • lorenzhs 6 years ago

            I don't know the exact numbers off the top of my head, but Apple's A-series chips have a lot of cache, something around 8MB in the last generations IIRC.

            • kllrnohj 6 years ago

              Apple's A-series chips aren't small, though. The A10 (14nm) was 125 mm², about the same size as the mobile quad-core Intel 14nm parts.

              • lorenzhs 6 years ago

                True, the Apple A-series chips are large, but the A10 was built on TSMC's 16nm process, not 14nm. Even TSMC's 10nm process (used for the A11) is barely smaller than Intel's 14nm, these numbers contain a lot of marketing as you surely know: https://en.wikipedia.org/wiki/14_nanometer#Comparison_of_pro... and https://en.wikipedia.org/wiki/10_nanometer#10_nm_process_nod...

                The A11 is only 88mm², with 8MB of cache, compared to the Skylake quad-core i7 CPUs with 122mm² and the same amount of cache (using Intel's 14nm process).

                The Snapdragon 820 (Samsung 14nm process) also clocked in at 144mm² with only 1.5 MB of cache. Apple's chips aren't the only large ARM chips.

        • monocasa 6 years ago

          I'd be surprised if 16bit support uses a noticeable amount of power in a running system.

          • wolfgke 6 years ago

            > I'd be surprised if 16bit support uses a noticeable amount of power in a running system.

            For 16 bit you have to strongly distinguish between 16-bit real mode (what DOS ran in by default), 16-bit protected mode (what Windows 3.x and partly Windows 95 ran in) and virtual 8086 mode (which was, for example, used under Windows 95 to run DOS applications). Also, I think SMM (System Management Mode) runs 16-bit code as well.

            Next: in 32-bit mode, nearly every instruction can be prefixed with 0x67, the address-size prefix, which lets it use 16-bit addressing from within 32-bit mode. Except for some very obscure applications that surely does not make sense, but it shows how deeply ingrained 16-bit support is in the x86 instruction set. Similarly, if you prepend the 0x66 operand-size prefix to many classic x86 instructions in 32-bit mode, they will do what the instruction would do in 16-bit mode (except for using 32-bit addressing where appropriate; but I just explained how that can be changed, too). This is exactly how instructions that use the 16-bit registers are encoded.

            So 16-bit support is deeply ingrained into x86 and cannot simply be removed. And I would bet there are customers who would be really enraged if Intel or AMD just removed some of the 16-bit modes, because they have some exotic application that depends on them.
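
            To make that concrete, here are the raw encodings (hand-assembled, so worth double-checking with a disassembler) - the same opcode bytes switch between 32-bit and 16-bit behaviour purely based on the prefixes, which is why the 16-bit paths can't be dropped without breaking ordinary 32-bit code:

              /* Decoded while running in 32-bit protected mode: */
              unsigned char mov_eax_ebx[] = { 0x89, 0xD8 };        /* mov eax, ebx     */
              unsigned char mov_ax_bx[]   = { 0x66, 0x89, 0xD8 };  /* mov ax, bx       */
              unsigned char mov_eax_mem[] = { 0x8B, 0x00 };        /* mov eax, [eax]   */
              unsigned char mov_eax_m16[] = { 0x67, 0x8B, 0x00 };  /* mov eax, [bx+si] */
              /* 0x66 = operand-size override, 0x67 = address-size override. */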

sanbor 6 years ago

Floppy disks used to have a tab to lock them (write-protect, i.e. read-only mode). The whole secure boot thing could be solved by having a switch/tab that you would have to flip when you want to make changes to your bootloader. This would fix the secure boot issue and let us use our beloved BIOS.

In different jobs I have heard coworkers spend several hours dealing with UEFI in order to be able to install Linux or enable multiboot. As a final user I don't see any added value except for secure boot. As others have pointed out here, if malware has enough access to rewrite your bootloader, it could probably do a lot worse, like installing a keylogger, a fake browser, encrypting/stealing your data, etc.

UEFI looks to me like Intel ME: a tech that might make sense for enterprise, servers, etc., but not for the final user.

Looking forward to leaving Intel behind and using more open alternatives, but I know that it is very hard to get the same computing power/features on other platforms.

upofadown 6 years ago

What currently deployed attacks does secure boot prevent? We hear a lot about how secure boot is going to improve security, but not a lot about exactly how it is going to do that. If some sort of malware has the ability to rewrite the relatively tiny boot stuff, it also has the ability to write anywhere in the operating system.

Aardwolf 6 years ago

So does uefi support multi boot or not?

  • 0xcde4c3db 6 years ago

    The standard supports multiple bootloaders, but specific implementations may be locked down to a single one. Even on systems that support multiple bootloaders, it's pretty common to bypass the built-in menu in favor of a more configurable chainloader like GRUB or rEFInd.

    • snuxoll 6 years ago

      I mean, basically every UEFI implementation I've worked with supports any number of boot entries - but I'm forced to hit a key during startup to even get to the menu to select one. All it would take is an option to present a selection menu every boot (unless the operating system has asked to boot a specific one during a reboot) and I would set my grub timeout to like 2 seconds with no menu by default.

  • josteink 6 years ago

    > So does uefi support multi boot or not?

    UEFI was pretty much designed to solve the multi-boot issue in a clean way, where different OSes don't step on each other's toes and create issues.

    This has worked fine from day one, even from a single physical volume.

    • Celarnor 6 years ago

      It works fine except when it doesn't, which is most of the time (either secureboot, or the EFI boot manager, or the Windows 8+ "fast startup" thing that can turn itself on during an update and sets the EFI next-boot option, permanently overriding any GRUB selection until you turn it back off from inside Windows..)

      Compared to how hassle-free running multiple operating systems is via legacy boot, UEFI feels incredibly backward. It feels unfinished, is fragile, and breaks far too easily for something so critical.

      I'm not looking forward to it being taken away. Guess I'll have to ditch my other OSes and do my important work in a VM.

      • josteink 6 years ago

        > It works fine except when it doesn't

        That can be said about anything.

        > which is most of the time

        Not to come off snarky, but citation needed.

        > either secureboot, or the EFI boot manager

        What issues do they cause? Really?

        > Windows 8+ "fast startup" thing that can turn itself on during an update and sets the EFI next-boot option

        Clearly annoying, but easily fixed.

        I'd rather have this once a year than Windows deciding it has the right to write to my MBR, and overwrite GRUB, causing my Linux installation to be unbootable without a recovery-disk.

        > Compared to how hassle-free multiple operating systems is to do via legacy boot

        That may be true for multi-volume scenarios. For systems with one volume only, however (like most laptops), it's absolutely the opposite.

        > It feels unfinished, is fragile and breaks far too easily for something so critical

        My experience is quite the opposite. With UEFI I feel I can fearlessly dual/multi boot several operating systems without fear that one OS is going to mess up another one.

        And I know that if something happens once a leap year or so, EFI has proper tooling so it's easy to fix, unlike black-boxed MBR bootloaders and the hacks involved with chain-loading different OSes on top of them.

        • Aardwolf 6 years ago

          So from what I understand from these replies:

          1. Windows will decide to undo your boot settings or grub every now and then? How can it do that, and can Linux do it too? If UEFI is "secure", is there no way to securely prevent windows from changing it?

          2. UEFI does NOT support multiboot on multiple volumes? You cannot install different OSes on different disks? What if you put multiple SSD's and HDD's in a desktop computer, can you install a different OS on each and boot to any with UEFI? What about booting from USB sticks?

          • josteink 6 years ago

            1. Any OS can update which UEFI boot-target is the default one (this is what the parent poster complains about). It should however not interfere with another OS's config or files. Windows does not touch GRUB (or the other way around). That messy, unsafe and unreliable approach is reserved for legacy BIOS boot.

            2. UEFI supports clean and safe separation between boot-targets, across single volumes, multiple volumes and network targets, with cryptographic verification if desired (Secure Boot). UEFI supports everything traditional BIOS boot supports and more. The same cannot be said the other way around. It also boots straight into long mode, meaning you don't have to implement the x86 mode-golf (and similar lessons in ancient history) at all in your OS loader.

            UEFI is rather overengineered, but it’s clearly a better approach with better support for modern use-cases built into the core design.

            Think of UEFI as Grub built into your machine. With UEFI your individual OS’s bootloader should no longer need to handle/be aware of multi-booting. They should only boot themselves.
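
            To see what those boot-targets actually are, on a Linux system you can read the BootOrder variable straight out of NVRAM via efivarfs (a hedged sketch; the path and the 4-byte attribute header are how efivarfs exposes variables on typical kernels):

              #include <stdio.h>
              #include <stdint.h>

              int main(void) {
                  /* BootOrder under the EFI global-variable GUID */
                  const char *p = "/sys/firmware/efi/efivars/"
                                  "BootOrder-8be4df61-93ca-11d2-aa0d-00e098032b8c";
                  FILE *f = fopen(p, "rb");
                  if (!f) { perror("fopen"); return 1; }

                  uint32_t attrs;   /* efivarfs prepends 4 attribute bytes */
                  uint16_t entry;
                  if (fread(&attrs, sizeof attrs, 1, f) == 1)
                      while (fread(&entry, sizeof entry, 1, f) == 1)
                          printf("Boot%04X\n", (unsigned)entry);  /* same names efibootmgr shows */
                  fclose(f);
                  return 0;
              }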

            • Aardwolf 6 years ago

              What did you mean in the previous reply with "That may be true for multi-volume scenario"?

              Did it mean multi volume is NOT hassle-free with UEFI?

              Thanks

              • josteink 6 years ago

                UEFI has no issues with multi-volume setups. They are hassle-free.

                My focus on single-volume setups was really because this is where traditional legacy BIOS boot gets messy and unreliable if you want to set up dual/multi-booting, and where UEFI is objectively superior in every way.

        • Celarnor 6 years ago

          > What issues do they cause? Really?

          Biggest one by far is the super-ambiguous "boot failed" with nothing else while trying to get back into Windows after turning off Fastboot (see step 7 in my other post). Once you've got that, you can fix it for the next boot with bootrec /FixBoot & /RebuildBCD, but it'll keep coming back every time it's restarted. The only failsafe fix I've ever found that keeps it from coming back is to move everything, including GRUB itself, onto separate disks. At that point it cooperates, at least until Windows thoughtfully turns Fastboot on for you again.

          A close second is Secure Boot's "invalid signature detected". You get the same message when you try to chainload off a non-present drive or one that isn't bootable, but if you make the non-booting disk the only one, suddenly it'll start working again. Usually a grub2 reprobe/reinstall will fix this, but it's really annoying. The dual-OS desktops at work all get this upon reboot whenever we lose power.

          > Clearly annoying, but easily fixed.

          Sure, until it resets itself at the next update, suddenly won't work when you turn it back off and you have to do everything all over again...

          > I'd rather have this once a year than Windows deciding it has the right to write to my MBR, and overwrite GRUB, causing my Linux installation to be unbootable without a recovery-disk.

          Only there's no risk of your linux installation being completely hosed in this scenario, like there is for the Windows environment with UEFI thrown in. Instead of starting over from scratch/backups, you just have to mount your drives, re-probe and install GRUB. That takes maybe a few minutes. Also, in a multi-volume system it can be completely avoided by disconnecting your dedicated GRUB disk for windows updates; the same can't save you from Windows declaring itself king of the computer and suddenly turning on UEFI settings that you don't want.

          > That may be true for multi-volume scenarios. For systems with one volume only however (like most laptops), it's absolute the opposite.

          It's worse in single-volume scenarios, since you can get completely screwed. In a multi-volume environment, UEFI's fragility is somewhat compensated for by the fact that you can boot off the boot sectors of the individual drives when it breaks.

          > My experience is quite the opposite. With UEFI I feel I can fearlessly dual/multi boot several operating systems without fear that one OS is going to mess up another one.

          Huh. Yeah, that's ... pretty much the exact opposite of the experience I and everyone I know has had. You're honestly the first person I've seen that's been a fan.

          • josteink 6 years ago

            I've used separate EFI system partitions for Windows and Linux since, as you point out, Windows may not be an entirely good citizen all the time here.

            At first I tried to force all OSes to use the same EFI partition (because why should they need different ones?), and yes, I now recall having some issues back then.

            After that I tried letting Windows have its own EFI partition. That was also the default setup created when installing Windows last, so it really didn't take any effort.

            And that has worked flawlessly for me. You may want to give it a shot.

            Cheers.

  • zootboy 6 years ago

    Yes, UEFI supports multiboot.

    • tripletmass 6 years ago

      Meh, unsigned! No good without UEFI Class 3+ (where 0 looks like a legacy BIOS, 3 is signed, 3+ is signed/enforcing) per the OP article.

      New Intel site (uefi.org still OK) uefidk.com/develop/development-kit/ is...down? [People not making a machine key, so they can get back to setup mode...no mention of signing:) https://communities.intel.com/thread/118403?wapkw=uefi+signi... https://www-ssl.intel.com/content/www/us/en/security/practic... --2013 video showing unleaded dual-boot Linux Foundation has wboot for signing, it mentions... UEFI (seemed to) promise to offer: suspend/hibernate in each OS, seamless secure power mode switching from each OS, multiple default displays and diagnostics...

hsivonen 6 years ago

Last I tried, more than 2 years ago, Ubuntu was able to install on LVM over LUKS over software RAID and boot from it in BIOS mode, but the same setup in UEFI mode completed the install and then didn't boot.

I wonder if that has been fixed.

  • kobeya 6 years ago

    Usually it's because the installer mucks up the crypttab, or doesn't install the mdadm module, or something. And yes, installing with BIOS usually gets it right. But it can be made to work with UEFI.

  • josteink 6 years ago

    I’ve UEFI booted Ubuntu from a Linux soft raid without issues. Everything just worked.

    Don’t quote me on it, but I’m pretty sure that was Ubuntu 14.04.

agumonkey 6 years ago

Meaning I will migrate all my machines to coreboot.

jokoon 6 years ago

I recently disabled secure boot so I could install linux...

I was quite amazed I had to do that to be able to install a system I wanted...

shmerl 6 years ago

Why can't there be normal open source UEFI? No one seems to be pushing to get rid of all these blobs.

lousken 6 years ago

> Some users blame UEFI or Secure Boot whenever something doesn’t work

and what else to blame?

  • josteink 6 years ago

    The words of a true detective.

    Like there's nothing else in the giant mess which is the x86 architecture that can break.

bobcallme 6 years ago

My main concern with this is the fact that if Secure Boot is forced (UEFI Class 3/+) users might not be able to change or manage trusted keys. It is quite sad that every few years we have to yet again have this conversation or defend the right to freely install software on machines that we bought or own.

  • zanny 6 years ago

    Because you don't own them. Because you never actually owned them. You've never been told how your chipset works, or how your CPU's instruction translation works, or what that extra chip with system-level access above ring 0 does, how to modify it, or how to disable it - until people randomly found the magic bit to turn it off.

    You don't know what your hard drive firmware does or how secure it actually is. You don't know what the controllers on your RAM sticks are doing. You have no way to find out, either, because it's all proprietary.

    It doesn't even matter whether you are able to modify the firmware on RAM or in a hard drive; it takes far more work than almost any individual can manage to even try to verify the firmware on any of these devices, regardless of whether you have the purported source code or not.

    It's all magic, it's all black boxes, and you have no control over any of it, which is why companies regularly volunteer to go one step further all the time - they already have power over your hardware, so what's a bit more amidst everything else being an obscured secret?

    • kobeya 6 years ago

      I certainly do legally own the chunk of semiconductor metal and plastic I am writing this on. I don’t appreciate the hyperbole.

      • emn13 6 years ago

        The physical silicon is almost worthless. What's valuable is how you can use it; and there too I suspect "you don't own it" is hyperbole - but perhaps not entirely, given all IP involved. What's certainly not hyperbole is that you are not in control and that this lack of control can be a feature to others (e.g. DRM). Whether you legally would be allowed to mod your chip to "unlock" it is moot if you simply don't have that ability.

        • kobeya 6 years ago

          It's one thing to lack the knowledge to use the device. It's another to lack the authority, if said knowledge could be acquired or reverse engineered. If I own the device, I should be allowed to send whatever control bits I want.

          I understand the DMCA and related laws change this, but that is a flaw of our legal system that needs to be corrected.

        • hennsen 6 years ago

          And exactly that ability being removed or minimized was bobcallme‘s point!

  • izacus 6 years ago

    > My main concern with this is the fact that if Secure Boot is forced (UEFI Class 3/+) users might not be able to change or manage trusted keys. It is quite sad that every few years we have to yet again have this conversation or defend the right to freely install software on machines that we bought or own.

    Most of this pretty much comes from the copyright industry and DRM push. After all, if you can freely install kernel drivers and software, they can't do their DRM stuff.

  • mtgx 6 years ago

    I wonder if Linux machines would start getting more traction then, as everyone wouldn't just have the excuse that they might as well buy a Windows machine and then get to use either Windows or Linux if they want.

    Post-2020 a greater number of people may actually be forced to choose Linux. I'd rather we weren't forced to validate this theory of mine, but I'm cautiously optimistic about it.

    • wmf 6 years ago

      Yeah, every time you buy a Windows PC and install Linux, you're telling the market you want Windows and you're starving the Linux ecosystem of revenue that could be used for better hardware support.

      • joe_the_user 6 years ago

        Unless you can buy a laptop with no OS for less than I can buy a laptop with Windows, buying from the Linux ecosystem is going to be essentially an act of charity.

        • fulafel 6 years ago

          You frequently can, at least for Lenovo/Dell/Asus laptops.

        • wmf 6 years ago

          IIRC Dell is now charging less for Linux than Windows.

          • eat_veggies 6 years ago

            Do you have a link? I'm in the market for a new laptop right now and a cheaper Dell with Linux installed sounds optimal.

      • Tangy92 6 years ago

        What market though? How do you buy Linux?

        • wmf 6 years ago

          Dell, System76, Purism, etc.

  • MichaelBurge 6 years ago

    A quick google search shows that Linux has between 3% and 7% of the desktop marketshare. It seems like an unwise business decision to make AMD the only possible choice for 5% of heavy purchasers, who also help with server purchasing decisions at IT jobs.

    • closeparen 6 years ago

      Are companies where technical people have sway over IT purchasing decisions worth anything to suppliers? I'd assume all the margin is in deals made between salespeople and MBA CTOs on golf courses. Look at the landing page for the server/networking division of any major player in the space - that's clearly not pitched at engineers.

      Hell, this may even be why AWS is so popular: the landing page actually explains to engineers what their services even are.

    • wmf 6 years ago

      Linux (the distros that people actually use) works fine with secure boot. Also, if MS mandates secure boot that means AMD will also use it.

      • vetinari 6 years ago

        These distros also either do not enforce signing of kernel modules, making Secure Boot moot (Ubuntu), or they enforce valid signatures on modules but make it easy to enroll your own Machine Owner Key (Fedora), so your custom-built kernel modules for VirtualBox, Nvidia, ZFS or whatever else you need will still work.

      • crdoconnor 6 years ago

        It still seems to fail whenever I install those distros until I turn it off.

  • sliverstorm 6 years ago

    Techies seem to understand the value of a chained secure boot on Android, where shady vendors in East Asia routinely try to sneak malware onto the phones they sell.

    What is different about PC? How do you defend against such attacks without a secure boot chain & a fixed trusted key set?

    • khedoros1 6 years ago

      I'm happy with the Android device as long as I can unlock the bootloader and flash my own OS. I'm happy with the PC as long as security settings don't stop me from installing my own OS. If I'm prevented, then I don't care about the security arguments; the device isn't useful to me.

      My solution there is to avoid buying from vendors that I consider shady.

    • keiyakins 6 years ago

      Honestly, a fixed trusted keyset with a jumper you can use to switch to a programmable keyset. Ideally with a jumper to set that in programmable or read-only mode.

      That makes it extremely obvious when something wants to mess with it, without actually preventing you if that's actually what you want.

      • digi_owl 6 years ago

        Something akin to the ChromeOS developer switch?

  • colemickens 6 years ago

    A concern that has been repeated for 5+ years at this point, and one that has proven to be almost completely unsubstantiated by any real world devices.

    Don't get me wrong, the day I can't install Linux on my laptops, I'll grab the pitchforks, but the claims that this is going to happen almost exclusively hinge on some boogeyman ideology related to Microsoft that is fully unconvincing to me.

    edit: Please, tell me a single consumer laptop that doesn't allow key enrollment or complete Secure Boot disablement, otherwise, please stop with the FUD.

    • mtgx 6 years ago

      Wasn't the previous backlash what got Microsoft to allow Verisign and others to make Secure Boot keys, too? Wasn't it originally only Microsoft the one that could do that, and it's why everyone freaked out that they may act their usual monopolistic-self and deny Linux distros the keys?

    • joe_the_user 6 years ago

      I may not be especially competent but I had significant difficulty installing Linux the last time tried.

      Thankfully it was possible but there are now significant hurdles. That's problematic. Whether it gets worse is unknown but the situation now is bad if anyone wants Linux to be freely available.

      • colemickens 6 years ago

        Can you elaborate on the "significant hurdles"? I've heard this before, but it never comes with any specifics - examples, error messages, forum posts, nothing. If nothing else, if there is seriously a manufacturer that is botching their implementation, please tell the community so that we can avoid them.

        I've installed Linux on dozens of laptops. Here are the "significant hurdles":

        1. Literally none, because mainstream distros are signed and boot under Secure Boot

        2. Literally a single toggle in the BIOS to disable SecureBoot. (Enrolling user keys is more steps, but optional)

        And then since most people really mean "UEFI" or other miscellaneous compatibility problems when they say SecureBoot:

        3. Literally a single toggle in the BIOS to enable CSM mode for distros/install media that don't support UEFI.

        4. Laptops that ship with the SSD in RAID mode (only very, very new kernels ship with support for it). I've only heard of one laptop that didn't allow it to be toggled and it was fixed within two weeks.

        • Celarnor 6 years ago

          Can't really speak as to issues with single-OS installs. Those usually go alright. The problems I always run into arise when you try to go and install another OS alongside what you've already got (usually Windows, but I have seen the same kinds of UEFI issues with OS X+Linux and OS X+Windows+Linux). With legacy boot, this is relatively easy and goes something like this:

          1) Resize existing filesystem, setup new partitions

          2) New OS installation

          3) Install GRUB over MBR

          4) Boot whatever you want. Enjoy your multi-OS computer. Install more operating systems, whatever. GRUB has everything it needs to figure it out and edit /boot accordingly.

          With UEFI, it goes like this:

          1) Resize existing filesystem, setup new partitions

          2) New OS installation

          3) Install GRUB2 over MBR

          4) Realize you can't boot into new OS because of UEFI's "next boot" flag used by Windows 8+'s Fastboot/Fast startup option.

          5) Realize you can't get into old OS either because secureboot is still broken for multi-OS environments.

          6) Mess around with Windows Recovery Essentials. or an OS X Recovery System. Restore original boot sector with bootrec/Disk Utilities. Disable fast startup in BIOS.

          7) Hopefully boot back into windows again. Sometimes the original Windows installation doesn't like the altered UEFI settings and you can't. When that happens I resort to re-installing each OS to separate hard disks, each with other OS' drive disconnected during installation. Then add another disk for GRUB2. Re-do GRUB2 probing/installation. Then...

          8) Disable fastboot in windows, cause UEFI is dumb and will let windows turn it back on. Probably mess around in bcdedit.

          9) Load linux installation media again. Redo GRUB2 OS probing and installation, unless they're all fresh installs from step 7.

          10) Hope everything works right at this point.

          11) Wait for updates for your original OS to break everything, permanently lock you into booting just the original OS because of the terrible idea that was nextboot. If everything's on separate drives, you might be able to boot them independently from BIOS, and just have to disable settings again. Or not, because of the same "hey, this is a different computer, I'm not going to boot" type of stuff that can arise in 7.

          12) Start again at step 4...

        • smichel17 6 years ago

          I observed a friend's computer that shipped with no hotkey to access the BIOS settings or boot menu. In order to get into the computer to boot to a usb drive for installing Linux, you had to either accept the Windows license agreement or disassemble the computer and pull the hard drive.

          • digi_owl 6 years ago

            The laptop I am typing this on supposedly had a "BIOS" key combo, but I never got it to work.

            What I had to do was hold a key (Shift, I believe) while clicking Restart in Windows, and then I would get a menu that could take me to the firmware menu...

        • joe_the_user 6 years ago

          I mucked about in the BIOS quite a bit and had to create a custom boot CD.

          I'm not super familiar with BIOS settings, and for all I remember the five hours I spent might have added up to just finding "literally a single toggle in the BIOS" - but so what?

          If you think this isn't a serious impediment to most users, you know less about the average user than I know about BIOS configuration.

          Argue "this is the way to do it" all you want. But arguing things into easiness is inherently ridiculous.

    • Nullabillity 6 years ago

      Look at Microsoft's certification requirements for ARM devices. This will be a problem much sooner than you think...

    • LordKano 6 years ago

      When that day comes, it'll be too late.

      You'll be the frog who was slow boiled in the pot.

dingo_bat 6 years ago

Can anybody point me to a list of problems with "legacy" BIOS that UEFI fixes? My PCs have had no problem booting in the decade I've been using them. They boot windows, ubuntu, arch, basically whatever I want them to boot. What's the big problem?

  • betterunix2 6 years ago

    Well, consider the situation Blizzard is in with WoW: people who want to cheat have resorted to writing rootkits to defeat the DRM system. Locking down the bootloader will help prevent users like yourself from defeating DRM, thus making the platform more secure for entertainment companies.

    Sorry, did you think the improvements were for users?

    • dingo_bat 6 years ago

      > Sorry, did you think the improvements was for users?

      Well, funny but sad.

riveravaldez 6 years ago

Still waiting to hear about any real reason to go with UEFI instead of BIOS, beyond Restricted Boot and all that monopolistic, anti-FLOSS corporate crap...

maxharris 6 years ago

It's about time this happened! I spent the last decade using computers free of that BIOS crud (various Macs and now a Surface Pro), and all I can say is, "good riddance!"

  • Thriptic 6 years ago

    Surface pro has a BIOS interface, it's just stupidly inaccessible. Their design also means that you can't easily install your own copy of windows and are instead stuck pulling down their stupid recovery images whenever you want to format. I'm very annoyed that they decided to go this direction personally.

    • rgsteele 6 years ago

      >Surface pro has a BIOS interface, it's just stupidly inaccessible

      I think you misunderstand. Surface Pro supports only UEFI, not legacy BIOS. What you are referring to as a "BIOS interface" is in fact the UEFI firmware settings. If you're having issues accessing the settings, this article may be helpful:

      https://support.microsoft.com/en-ca/help/4023531/surface-usi...

      I'm not sure what you mean by "install your own copy of windows" but you can certainly boot from USB by following the instructions here:

      https://support.microsoft.com/en-ca/help/4023511

    • Spooky23 6 years ago

      They did that to protect the platform against stupid enterprises.

      • Thriptic 6 years ago

        What do you mean?

        • Spooky23 6 years ago

          Office 365 taught them how much corporate testing and imaging hurt their products.

          In one case I’m familiar with, they were getting pummeled over poor O365 user experience in 2015 because the customer was refusing to upgrade past Office 2007.

          Everything they’ve done in the last 5 years is about pushing out software faster and making traditional enterprise IT too expensive.

          • exikyut 6 years ago

            Suddenly the push behind cloud makes a lot more sense. Interesting to know.

          • Thriptic 6 years ago

            This makes sense, thank you!