antirez 5 years ago

It's extremely hard to agree with Linus on that. One problem in his argument is that he believes everybody has a kernel hacker mindset: most of today's developers don't care about environment reproducibility at the architecture level. The second problem is that he believes every kind of development is as platform-sensitive as kernel hacking, and he even gives the example of Perl scripts. The reality is that one year ago I started the effort to support ARM as a primary architecture for Redis, and all I had to do was fix the unaligned accesses, which are anyway almost entirely handled in ARM64, and mostly handled in ARM >= v7 as well if I remember correctly, except for a subset of instructions (double-word loads/stores). Other than that, Redis, which happens to be a low-level piece of code, just worked on ARM, with all the tests passing and no stability problems at all. I can't relate to what Linus says there. If a low-level piece of code written in C, developed for many years without caring about ARM, just worked almost out of the box, I can't see a Ruby or Node application failing once uploaded to an ARM server.
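For illustration, here is a minimal sketch (the function names are hypothetical, not taken from Redis) of this kind of unaligned-access fix: replacing a cast-based load with memcpy, which still compiles to a plain load on x86 and stays safe on ARM cores that fault or penalize unaligned accesses.

```c
#include <stdint.h>
#include <string.h>

/* Risky on strict-alignment targets: buf + off may not be 8-byte
 * aligned, so this cast-and-dereference is undefined behavior and
 * can trap on some ARM cores, while silently working on x86. */
uint64_t load_u64_cast(const unsigned char *buf, size_t off) {
    return *(const uint64_t *)(buf + off);
}

/* Portable fix: memcpy has no alignment requirement, and compilers
 * lower it to a single load where the hardware allows it. */
uint64_t load_u64_portable(const unsigned char *buf, size_t off) {
    uint64_t v;
    memcpy(&v, buf + off, sizeof v);
    return v;
}
```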

  • Androider 5 years ago

    Node and Ruby applications do fail on ARM though, when it comes to native libraries and extensions. And now your whole distro is different than your development machine, which adds complexity.

    Do I really want to be debugging why node-gyp fails to compile scrypt on the ARM distro on the new Amazon A1 ARM instance (which it did in my case)? And if I solve that, what about the other 2451 dependencies? Let's pessimistically say there's a 1% failure rate, I'll be stuck doing that forever! Nah, I'll just go back to my comfy x86 instance, life's short and there's much code to write :)

    I think I'll side with Linus on this one. I saw first-hand how non-existent the x86 Android market was, despite Intel pouring megabucks into the project. If the developers don't run the same platform, it's not going to happen, no matter how great the cross platform story is in theory. Even if it's as simple as checking a box during upload that "yes, this game can run on X86", a huge chunk of the developers will simply never do that.

    • tlb 5 years ago

      I'm suffering through this now -- I have a custom C++ Node extension that needs to run on both x64 and ARM. The ARM CPU is onboard a mobile robot, where I care about power draw.

      The good news is that Clang can cross-compile fairly easily. Much better than gcc.

      The bad news is that there are a surprising number of missing libraries on Ubuntu/ARM64. For example, Eigen3. And although the code is fairly compatible, there's some extra cognitive load in learning to debug on ARM. For example, calling a virtual function on a deleted object crashes differently.

      I'm willing to put up with it for ARM's advantages in battery-powered applications, but I wouldn't just to save a few bucks on cloud servers.

      • sriram_sun 5 years ago

        How about using CMake to download and cross-compile the 3rd party dependencies? I've worked on a couple of robotics applications (C++). I wouldn't depend on Ubuntu for packages. You'll lose the flexibility to choose package versions, apply patches etc.

      • bonzini 5 years ago

        > The good news is that Clang can cross-compile fairly easily. Much better than gcc.

        How so? The complications in cross compiling come from setting up all the system libraries, not the compiler and linker.

        • Gibbon1 5 years ago

          I remember getting annoyed needing a Windows driver that was part of an open source package. That needed autotools to build. I got annoyed trying to make it compile under MSYS/MinGW. So I built a gcc cross-compiler under Linux. Compiled the damn driver and it just worked(tm).

        • rvp-x 5 years ago

          By default it supports a variety of targets; you don't need to set it up.

          Setting up the system libraries is relatively easy: make a copy of the target you're building for (assuming it has a setup for compilation) and use --sysroot=/path/to/target.

      • cozzyd 5 years ago

        Eigen3 is header-only, isn't it?

        • snovv_crash 5 years ago

          Yes, but only once you run the `configure` so that it chooses the right inline assembly blocks for your system.

    • Androider 5 years ago

      On a related note, if Apple does switch to ARM chips for their laptops, that will make mainstream ARM server-side development more viable than any cross-platform story ever can. Or kill the Mac desktop. One or the other :)

      • kakwa_ 5 years ago

        It will most likely kill apple laptops as a developer platform.

        Unless you are deeply disconnected from the hardware, CPU architecture does matter. Most developers using MacBooks that I know have VMs for either Windows or Linux work.

        It might be conceivable to use the ARM port of <Insert your Linux distribution here>, or the ARM version of Windows 10. But it would also require a good desktop virtualization solution for ARM. If Apple releases its own solution, or creates a partnership with, say, VMware to release something at the same time an ARM MacBook comes out, it might work, but barely, and I'm not sure developers are at the top of Apple's priority list. In the end, with the Linux subsystem in Windows, choosing Windows as a development platform might make more sense.

        As a side note, if Apple switches to ARM, I foresee a lot of cringing. The switch from PPC to Intel was annoying for a lot of Apple users at the time, but there were a lot of goodies (Boot Camp, better battery life, better performance, VMs) that kind of made up for it. Basically, nothing was lost in terms of possibilities, a few were actually gained, and only the transition was a bit painful. With a switch to ARM, you may gain a little in terms of battery life, but with the MacBook Pro already at 8+ hours (~a work day) I'm not sure it's a game changer; at best you will stay the same in terms of performance, and you will lose compatibility with other OSes.

        • Moto7451 5 years ago

          I think kill is too strong. Certainly some developers will need to be on an Intel chip, but not all. How many developers use their laptops as a dumb SSH terminal? While some C extensions to scripting languages will need some love, the majority of major interpreted or VM driven languages work already.

          My feeling is it will be net zero as far as ARM servers are concerned until the hardware is made and is viable. Perhaps Apple ARM laptops will help with marketing ARM as a viable option, but we already develop on OS X in order to deploy to Linux without any great calling for OS X servers.

          Cloud server “hardware” has also drifted from what you see in real hardware. There are numerous times in my career I’ve had to explain to developers of all experience levels that their SSD is several orders of magnitude faster than the tiny EBS volume they’re reading from and writing to.

          In short, I think architecture mismatch just isn’t that important to most Web App oriented companies. My girlfriend works at a travel industry giant and they’re at the opposite end, putting IBM mainframes to good use. They don’t have much use for Macs and most of their developers seem to be on Windows instead of anything resembling what they’re deploying on. For the segment of our industry that does care, they’ll have options and will choose them regardless of what Apple, Google, and Microsoft do with ARM, Intel, Power, MIPS and other architectures.

          • kakwa_ 5 years ago

            Maybe it would be interesting to look at the past.

            Was the PowerBook as heavily used as a developer laptop as the MacBook is today?

            (or Power Mac vs desktop PC as desktops were more common at the time).

            I was not in the industry at the time (2000 - 2006) so I don't know the answer.

            • cschep 5 years ago

              While the Intel switch helped (at least I thought it was great), the big deal was that OS X was a tremendously usable Unix on amazing laptop hardware.

              I'm not sure the architecture mattered as much as that did.

              • mbreese 5 years ago

                Agreed. I switched when there were still G4 PPC laptops just because OS X was a usable Unix with good hardware. The switch to Intel was good, but it wasn’t because I struggled with the architecture. It was for the more powerful CPUs and battery life.

          • tracker1 5 years ago

            Sorry, but on Windows, Mac and Linux, x86 Docker is a huge part of my workflow... I'm having enough trouble guiding people out of Windows-only as it stands; ARM is too big a pill to swallow in the near future. There are still enough bad parts in scripting environments at the edges (I'm a heavy Node user; scss and sqlite are the big two touch points).

        • manicdee 5 years ago

          I can imagine a neat divide here between “MacBook” (based on ARM, 15 hour battery life) and MacBook Pro (based on Intel i7, 6 hour battery life, optimised for VMs).

        • sixothree 5 years ago

          Maybe WOA will be a thing then too....

      • mullingitover 5 years ago

        I consider myself a pretty average Mac user, and I've already been turned off by the last couple rounds of Macs that Apple has shipped. Messing up the one remaining upside, x86 compatibility, would be the straw that broke the camel's back. They still only have single-digit market share in desktop computing; this could be the death blow for their platform.

        • kevin_thibedeau 5 years ago

          I would expect them to pursue a dual processor strategy first. The bulk of the OS can run on ARM and power apps can remain on x86.

          • mariovisic 5 years ago

            Sounds about right. They actually already have this setup: the T2 chip in recent Macs contains an ARM processor which handles some tasks like disk encryption. It could be possible that future OS X versions will leverage that processor for more general purposes.

          • copperx 5 years ago

            I can't imagine that's good for battery life.

        • riffraff 5 years ago

          But what if the switch to arm comes with a lot more battery life and great performance?

          Not all Mac users are devs.

          • jecxjo 5 years ago

            I wouldn't imagine you'd get a lot of performance boost from the change. You'll see battery life gains, but that assumes they aren't looking to run a crazy number of cores to make it compete with x86. And the only way that massive core counts help is if the software is designed to utilize them correctly.

            It's not that all users are devs. It's that all devs might not be able to make their software work well under that environment.

            • egypturnash 5 years ago

              Wow, I have five hundred cores, now it’s no longer a big deal that (insert cpu-hogging Electron app) is constantly maxing out four of them!

            • yoz-y 5 years ago

              The current crop of Apple A-series chips runs circles around almost all the Intel chips they put in their laptops, at a fraction of the TDP.

              I think you will see a lot of performance boost after switching to ARM. If they start on the "low end" then a MacBook will be practically on par with a MacBook Pro. This might not be useful at first for native development, but I am quite sure that macOS, iOS and web development will be very much possible on these machines - the three domains that Apple cares most about.

            • koffiezet 5 years ago

              Knowing Apple, they would just go for the 'even lighter' approach, and insert a battery half the size of the current-ones...

              A battery lifetime of 8 or 12 hours is plenty, and going beyond that isn't that much of a marketable strategy, unless it has to become 24h+ or something. A lower weight approach however would also mean a lower BOM for Apple, and more profit, while being able to shout "1/3rd lighter!" - and that's an easy sell :)

            • coldtea 5 years ago

              >And the only way that massive core counts help is if the software is designed to utilize them correctly.

              That's for servers and scientific software (and perhaps 3D and such).

              For regular devs the massive core count helps even with non-optimized apps, because unlike the above use cases, we run lots of apps at the same time (and each can have its own core).

        • lrem 5 years ago

          I'm on a 2013 retina, because nothing in the meantime offered incentive to switch. I'm wondering how a switch to ARM would affect that.

          • ct520 5 years ago

            They can make an ultra-slim butterfly keyboard that feels like cement when you type on it

      • josephg 5 years ago

        When that happens we’ll also see a big push to add ARM support to all the native nodejs modules that are out there. (And I assume Ruby, Python, Go, etc packages).

        Linus’s prediction is based on the premise that everyone will continue to use x86 for development. But that’s probably not going to be the case for long. Multiple sources have leaked the rumour that Apple will release an ARM MacBook next year [1]. And I wouldn’t be surprised if Microsoft has an ARM Surface Book in the wings too.

        [1] https://www.macrumors.com/2019/02/21/apple-custom-arm-based-...

        • alias_neo 5 years ago

          Developers don't develop on Surface books, and Macbooks are in low percentages.

          The majority of people in the world writing code are using x86 PCs and Microsoft and Apple aren't about to change that with any *Book.

          Linus' premise that everyone will continue to use x86 for development is because they will.

          There's no incentive for companies or individuals to go switch out all of that x86 hardware sitting on desks and in racks with ARM alternatives which will offer them lower performance than their already slightly aged hardware at initially higher costs.

          I can foresee _some_ interest in ARM cloud, and I don't think it'll be the issue Linus claims at higher-than-systems-level, but I absolutely would bet on x86 going nowhere in the human-software interface space in the foreseeable future.

          • josephg 5 years ago

            > Macbooks are in low percentages

            For some reason Macbooks seem disproportionately represented amongst web developers. All the agencies I know in Sydney and Melbourne are full of macbooks.

            > There's no incentive for companies or individuals to go switch out all of that x86 hardware sitting on desks and in racks with ARM alternatives which will offer them lower performance than their already slightly aged hardware at initially higher costs.

            Uh, why are you assuming ARM laptops will have lower performance and a higher cost compared to x86 equivalents? The ARM CPU in the iPad pro already outperforms the high end intel chips in macbook pros in some tests. And how long do you think Apple will continue to sell intel-based macbooks once they have ARM laptops on the market? Maybe they'll keep selling intel laptops for a year or two, but I doubt they'll keep refreshing them when new intel processors come out. When Apple moved from powerpc to intel they didn't keep releasing new powerpc based laptops.

            Once web development shops start buying laptops with ARM chips, it will be a huge hassle if random nodejs modules don't build & work on ARM. At that point I expect most compatibility issues will be fixed, and that will in turn make deploying nodejs apps on ARM a more reasonable choice.

            Obviously we'll see, and this is all speculation for all of us. But I think it's a reasonable path for ARM chips to invade the desktop.

            • alias_neo 5 years ago

              I'm not assuming per se, I'm guesstimating, basing it on my understanding of x86 and ARM. I graduated in Electronic Engineering from a university department whose alumni include Sir Robin Saxby; they pushed ARM hard, and I have a fairly good understanding of where it's at architecturally compared to x86.

              Apple have 100% control over every part of their hardware and software from the get go, so it's inevitable they perform excellently on that hardware; they can optimise their code to death, and increment the hardware where it can be improved upon.

              Web developers make up a fairly small proportion of the developers I've ever worked with; I have worked for software houses where the web just isn't a thing for us other than for our own marketing. None of these people run Macs, they all run PCs, and these PCs don't have the same control over their hardware/software process that brings about the kind of "excellent" result you see from an iPad. They'll be relying on Microsoft to get Windows optimised, but Microsoft will be working with dozens, even hundreds of partners; Apple works with one, itself.

              I suspect, also, that they'll be more expensive because of all the new development the manufacturers have to put into releasing these new ARM laptops. Microsoft will have to put extra work into Windows, which will cost money, and finally those of us who run Linux will end up with something that hasn't had the benefit of the decades of desktop development that x86 has had, and thus worse performance, at least in the beginning.

              I could imagine a laptop equivalent of big.LITTLE, where you have x86 cores for the real grunt work and ARM cores for power saving, but I don't see pure ARM in the workstation space.

              It'll be an interesting time, but based on my own experience, I'm betting on Linus with this one and I don't see myself or my colleagues or my workplace moving to ARM anywhere outside of non-laptop-portables any time soon.

            • Black-Plaid 5 years ago

              Agreed, at the last 5 software companies I've worked at, the only people without Macs were the sales people.

            • IWeldMelons 5 years ago

              Yeah, well, I live in one of the ex-USSR countries. And guess what - there are no Macs here whatsoever. I'd suspect that x86 is the prevalent platform in China and India, the dominant players in the outsourcing markets. So, no, most of development is done on Intel machines.

            • tracker1 5 years ago

              This is true, but a huge part of that is VMs with linux or windows, and for me x86 docker workflows that go to x86 servers. It'll be years for any real transition imho.

              It took 4 years of concerted effort to get most node things working right in windows.

      • rbanffy 5 years ago

        If that happens, a lot of low level programming will be done ARM-first. A lot of Swift and Objective-C code will be built, tested and run primarily on ARM.

      • vbezhenar 5 years ago

        I don’t think that it would kill the desktop, but I’m sure that unless ARM is much faster, developers will use x86 Macs for a long time.

        • nicoburns 5 years ago

          Apple's latest iPad processors are competitive with low-end laptop x86 processors. And they have a much stricter power budget than a laptop. If Apple wants to go this route, then they probably have the capability to build the chips to support it.

          • jecxjo 5 years ago

            The part you're discounting is just how resource intensive desktop apps are and how much optimization goes into iOS apps.

            To really see the benefit of changing they would need to add a lot of cores, and then cross their fingers that 3rd party app developers know how to do true multi-core development.

            • nicoburns 5 years ago

              I don't know about that. A-series chips are competitive with low-end Intel in single-threaded performance. And they keep getting faster each year.

              If that team were to design a bigger core aimed at laptops, then I wouldn't be surprised if they could make it competitive.

    • pcwalton 5 years ago

      > I think I'll side with Linus on this one. I saw first-hand how non-existent the x86 Android market was, despite Intel pouring megabucks into the project.

      Doesn't that refute Linus' argument, not strengthen it? Almost all Android developers develop on x86. Intel thought, as Linus apparently does, that this would drive adoption of x86 on phones. It didn't.

      Intel even got competitive in power efficiency and it wasn't enough to save them. In fact, I remember folks on HN predicting the imminent death of ARM all the way up to Intel throwing in the towel.

      I think Linus is wrong here. His argument made sense in the '90s, but it's 2019. The ISA just doesn't matter that much anymore.

      • admax88q 5 years ago

        Phones are different than servers though. The primary customer of a phone is Joe Somebody who doesn't know or care about architectures, only battery life and cost. Well ARM wins there.

        The primary customer of servers is developers who care less about cost and more about time to market.

        • stickfigure 5 years ago

          I'm a developer deploying code to JVMs running in a PaaS (Google App Engine). I don't know or care what the architecture is.

          • alias_neo 5 years ago

            Indeed you might not, but the person who wrote your JVM does, and the person who wrote the system it runs on does, and the person who wrote GAE does...

            That single instance you're running on already took half a dozen or so systems developers (and more) before it got to you, so in your example you're the minority.

            It's because of the work they've done, that you can not care about the architecture you're running on, not in spite of them.

            • stickfigure 5 years ago

              Sure, but every time you move down the stack a level, you shrink the network effect by several orders of magnitude.

              Linus' argument is that x86 stays on top because everyone is developing with x86 at home. It's much less convincing to argue that x86 will stay on top because the people writing JVMs use x86 at home. There just aren't that many of them, and if they get paid to write ARM, they write ARM.

              • alias_neo 5 years ago

                Indeed, but by "at home" he means in the office too (he says as much), and I don't see offices doing this unless they have a real incentive to throw out the hardware they've invested in. Perhaps in the not-so-near future, when they inevitably have to replace it all due to failure, the ARM stuff will have a chance to take some share.

          • matchagaucho 5 years ago

            Likewise... most of my code runs on Lambda JVMs now.

            If AWS switches to running JVMs on ARM, and passes the cost savings onto me, I'd be in no position to argue.

            • ct520 5 years ago

              JOKE OF THE DAY. And passes the savings onto me. Oh man that’s a good one.

              • Wowfunhappy 5 years ago

                Well, if ARM servers are cheaper for Amazon to run, they're going to want to incentivize customers to switch to ARM. Either by passing on some of the cost savings (even if it's only 5%), or by making the x86 option more expensive.

                In the second case, Amazon is still "passing on the cost savings" in a sense, it's just that now they take a higher profit regardless.

              • matchagaucho 5 years ago

                As the spot history charts depict, AWS pricing continues to drop.

                To break through any floor requires a disruptive change in architecture (CPU or otherwise).

                https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-sp...

                • chmod775 5 years ago

                  AWS is still 10x-100x more expensive than renting bare-metal unmetered servers and running everything yourself, so I don't think the actual hardware factors too much into their pricing.

                  • stickfigure 5 years ago

                    more expensive than...running everything yourself

                    Only if you value your time at zero.

                    • chmod775 5 years ago

                      This is so utterly untrue and directly related to what Linus was talking about.

                      Those bare-metal servers are basically 1:1 what you are developing on.

                      I can install an instance of my application on them in minutes.

                      It's AWS that takes significantly more time to set up and learn.

                      Most people using AWS are spending big bucks on an 'automatically scaling' architecture (that never just works) that will cost them many thousands of dollars a month, which they could have comfortably fit on a 30-buck dedicated server.

                      You can pay a dedicated system administrator to run your server (let's not kid ourselves, you probably just need one server) and still save money compared to AWS.

                      With AWS you're not only paying Amazon, you're probably also paying someone who will spend most of his time just making sure your application fits into that thing.

                      Take my use-case for example: I can run my entire site on about 8 dedicated servers + other stuff that costs me ~600-700 euros a month.

                      Those just work month after month (rarely have to do anything).

                      Just my >400TB of traffic would cost me 16,000 bucks / month on AWS. I could scale to the whole US population for that money if I spent it on dedicated servers instead and just ran them myself.

                      • matchagaucho 5 years ago

                        The fixed capex of 8 servers is not comparable to the opex of 8 servers provisioned for peak.

                        If bandwidth is your highest cost, that's a completely separate problem that likely requires a CDN. Neither x86 nor ARM is going to reduce that cost.

                        • qes 5 years ago

                          My situation is similar to what chmod775 describes.

                          We serve 200+ TB/month, and no we didn't just forget to use a CDN ◔_◔ Those cost money, too.

                          For us, cloud is about double - $10k/month more - than dedicated boxes in a data center. I've run the same system in both types of environments for years at a time.

                          For us, cloud is basically burning money for a little bit of additional safety net and easy access to additional services that don't offer enough advantage over the basics to be worth taking on the extra service dependency. It's also occasional failures behind black boxes that you can't even begin to diagnose without a standing support contract that costs hundreds or more a month. Super fun.

                          High bandwidth and steady compute needs is not generally a good fit for cloud services.

                        • chmod775 5 years ago

                          Most CDNs are more expensive at 400TB/month than just serving content yourself.

                          And no Cloudflare's cheap plans are not an option, they'll kick you out.

        • Wowfunhappy 5 years ago

          To add on to this: developers don't get to choose what architecture a customer's phone uses. If they could, perhaps they would choose x86.

          For servers, developers are often the customers of their own software.

        • pjmlp 5 years ago

          When I deploy to PaaS or serverless cloud instances, I couldn't care less whether they are running on OS xyz, a hypervisor or bare metal.

    • ozim 5 years ago

      I don't have production experience with ARM unfortunately, but the Raspberry Pi is huge... Linux on the desktop sucks if you have a random laptop or something like that, but on specific hardware like the RPi, everything works for my needs. I have node.js, .net core, python and loads of software that just works for me on ARM. Not to mention I have a Synology with an ARM processor. Making servers is a lot easier than making consumer-grade laptops or desktops. I agree with antirez: there is so much space to try out stuff on cheap ARM with the RPi and other SBCs that it's just going to roll over x86, because ARM is going to be ubiquitous thanks to phones and SBCs. That is why x86 won against SPARC and PowerPC, it was just in more places.

      • jecxjo 5 years ago

        I ran Pis at home for a bunch of services and I agree it did a great job. But when you put actual loads on it, the device craters because they are so underpowered. This is where THE issue is going to be. To get the speeds you expect out of server hardware, it's not just about making a 64-core ARM. Single-core ARM vs single-core x86 has an obvious winner. So you need to make node and .net core and python and everyone else really push their limits on using multiple cores without developers knowing about it.

        But that is just the first step. You then need developers who write applications on top of those languages to be multi-core aware and design their applications to fully use the huge number of cores. At that point you'll lose a lot of your power efficiency because you'll need a lot more hardware running to do the same tasks. You'll also need developers who know how to think in an extremely multi-core way to get the extra performance boost.

        • copperx 5 years ago

          Why are you declaring winners when you're comparing RPis to full powered x86 CPUs?

          RPis are built to a price and don't have the best CPUs that ARM can offer. A better comparison would be Apple's A chips.

        • vbezhenar 5 years ago

          Well, servers typically care about throughput and not latency. So if your ARM server will process 10000 requests per second with each request taking 100 ms and your x64 server will process 8000 requests per second with each request taking 80 ms for the same price, ARM will be preferred. There are exceptions, of course, but generally server workload is an ideal case for multi-threaded applications, because each request is isolated. That's why server CPU's usually have a lot of cores but their frequency is pretty low.

        • ct520 5 years ago

          Nah bro I just hand it to my paas and the magical unicorn make it work awesomely quick and faster then competition dollar for dollar eye roll

    • tybit 5 years ago

      I think it will be interesting to see how this plays out in different ecosystems. I’d hazard a guess that ecosystems like Go, JVM and .NET will fare much better in an arm world, compared to languages that more commonly pull in native binaries.

    • titanix2 5 years ago

      > what about the other 2451 dependencies

      Not sure if this is sarcasm or not, but if your project has that many dependencies, no wonder it is hard to port anywhere.

      • ummonk 5 years ago

        You must be new to the mess that is the node package ecosystem...

    • ams6110 5 years ago

      > now your whole distro is different than your development machine

      Can you not develop on an ARM emulator? Or just buy an ARM machine for dev work?

      • Androider 5 years ago

        Of course I can, but the question is why would I go out of my way to do any of that?

        I was interested in trying the state of server-side ARM for my mostly-interpreted language, and I pretty much immediately found that it doesn't Just Work. I had a vision of spending many hours searching, creating and +1:ing GitHub issues and tracking discussions around "why package X doesn't work on ARM", with the developers saying at best "happy to accept patches" (which btw is also the mantra for "why package X doesn't work on Windows", and why you don't want to develop with Node on Windows to this day despite all of Microsoft's ecosystem work). Nope, not worth it.

        I'm not interested in supporting ARM just for its own sake. A 30% discount on the cloud instances is also not nearly enough for me or my team of developers to be spending any significant amount of time on this, solving problems unrelated to our core business.

        Let's see again in a few years. Of course, if ARM development machines become mainstream by way of Apple, then the calculation changes completely.

        • acqq 5 years ago

          > if ARM development machines become mainstream by way of Apple, then the calculation changes completely.

          That's the biggest chance for ARM: having notebooks/desktops that are as good or even better for most potential users.

          • copperx 5 years ago

            Chance for ARM? How can you say that with a straight face when ARM is the undisputed winner in the greatest market of all?

            • acqq 5 years ago

              The chance for ARM to really get to be used for all purposes (desktop, server) in the context of the OP. The context of the discussion matters.

      • Wowfunhappy 5 years ago

        Aside from the fact that emulation is slow (and thus more annoying to test), now you have to contend with emulator bugs. Is your software crashing because your code is bugged or because the emulator is bugged? Or worse: your software may only be working because of an emulator bug.

        • erik_seaberg 5 years ago

          Or your software may be working because the emulator was correct where your hardware stepping may be wrong (e.g., FDIV).

    • intellix 5 years ago

      node-gyp has caused me headaches everywhere

  • jdietrich 5 years ago

    >It's extremely hard to agree with Linus on that.

    It's very easy to disagree with him, because the server market doesn't work the way he thinks it does.

    Google, Amazon, Microsoft and Facebook collectively purchased 31% of all the servers sold in 2018. The market for server hardware is dominated by a handful of hyperscale operators. The "long tail" is made up of a few dozen companies like SAP, Oracle, Alibaba and Tencent, with the rest of the market practically representing a rounding error.

    These customers are extraordinarily sensitive to performance-per-watt; for their core services, they can readily afford to employ thousands of engineers at SV wages to eke out small efficiency improvements. They aren't buying Xeon chips on the open market - they're telling Intel what kind of chips they need, Intel are building those chips and everyone else gets whatever is left over. If someone else has a better architecture and can deliver meaningful efficiency savings, they'll throw a couple of billion dollars in their direction without blinking.

    This is not theoretical - Google are on the third generation of the TPU, Amazon are making their own ARM-based Graviton chips and Facebook now have a silicon design team led by the guy who designed the Pixel Visual Core. It's looking increasingly certain that Apple are moving to ARM on the desktop, which further undermines the "develop at home" argument.

    ARM won't win the server space, because nobody will win the server space. With Moore's Law grinding to a halt, the future of computing clearly involves an increasing number of specialised architectures and instruction sets. When you're spending billions of dollars a year on server hardware and using as much electricity as a small country, using a range of application-specific processors becomes a no-brainer.

    • sasavilic 5 years ago

      All of this will be useless if there are no customers interested in the new platform. There is a big difference between making an existing, widely used platform more efficient and offering a new, efficient platform in which only a few customers are interested.

      • jdietrich 5 years ago

        "Run your code on our boxes" is a very, very small subset of cloud services. Does anyone other than Facebook care what instruction set they're using to ingest images? Does anyone other than Amazon care what instruction set they're using to serve S3 requests? Does anyone other than Google care what architecture they're using to crawl the web or serve ads or do something creepy with neural nets?

        • sasavilic 5 years ago

          I don't get it. Having your software able to run on two different platforms means that the software needs to be tested twice and maintained twice. Your architecture decision might be optimal for one platform but not for the other, so you have to change your development process, have a test env for both platforms, etc. You can't just cross-compile and hope it works.

          All of this costs money, in terms of either having more developers/testers or having a longer development time. So, in order to justify this investment, the second platform must be way cheaper in order to cover the costs of the extra developers/development time. And if there is such a huge difference and the second platform works great, then why keep supporting the first platform anyway? Ditch it, and you will save yourself some money.

          You could be an ISV, but again, your software will be more expensive if you need to support two different platforms. Which means that your customers must be willing to pay for it. Which brings us to the same conclusion: unless there is a big saving from running software on the alternative platform, nobody will care.

          • jdietrich 5 years ago

            >I don't get it.

            Google's data centers collectively use more electricity than the state of Rhode Island, or about the same as the entire country of Costa Rica. Their electricity consumption has doubled in the last four years. At average wholesale prices, their annual electricity bill would be about a billion dollars. ARM isn't dramatically more efficient than x86 in most applications, but specialised ASICs can be orders of magnitude more efficient.

        • namirez 5 years ago

          I don't know about the standard offerings on cloud platforms, but people who do some sort of scientific computing care a great deal about the architecture and performance. As a C++ programmer, I'm constantly profiling my code to find bottlenecks and optimize the code. Sometimes I even care whether the CPU supports, say, AVX or some specific instructions.
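          As a rough illustration of that kind of concern, here is a minimal sketch (the kernels are hypothetical stand-ins) of a runtime feature check that GCC and Clang expose on x86; code like this is one reason the exact CPU, not just the ISA family, can matter:

          ```c
          #include <stdio.h>

          /* Hypothetical kernels standing in for a hand-tuned path and a portable fallback. */
          static void scale_avx2(float *v, int n)   { for (int i = 0; i < n; i++) v[i] *= 2.0f; }
          static void scale_scalar(float *v, int n) { for (int i = 0; i < n; i++) v[i] *= 2.0f; }

          /* Runtime dispatch: take the tuned path only when the CPU actually has AVX2. */
          static void scale(float *v, int n) {
              if (__builtin_cpu_supports("avx2"))
                  scale_avx2(v, n);
              else
                  scale_scalar(v, n);
          }

          int main(void) {
              float v[4] = {1, 2, 3, 4};
              scale(v, 4);
              printf("%f\n", v[0]);
              return 0;
          }
          ```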

          • jdietrich 5 years ago

            From my comment: "the future of computing clearly involves an increasing number of specialised architectures and instruction sets".

            I'm not saying that nobody cares about the choice of architecture, I'm saying that major tech companies with vast quantities of servers are beginning to develop their own silicon with custom architectures and custom instruction sets, precisely because that's vastly more efficient than using a general-purpose architecture that happens to be popular in the wider software ecosystem. The fact that nobody else uses that special-purpose architecture is unimportant, because it is economically viable for them to invest in tooling and training to write and port software for these weird chips.

      • solidasparagus 5 years ago

        The largest users of cloud VMs are internal customers - cloud services and the business that the cloud providers spun out of. If ARM is cheaper to run, the savings for Amazon/AWS could be astronomical. That will generate more than enough internal customer demand to make offering ARM servers worthwhile.

    • ksec 5 years ago

      >The "long tail" is made up of a few dozen companies like SAP, Oracle, Alibaba and Tencent, with the rest of the market practically representing a rounding error.

      Are Alibaba and Tencent really in the long tail? I believe Tencent could be, since it is about a third of the size of Alibaba, but if I remember correctly Alibaba will overtake Google by 2019 (they had ~90% growth in 2018), and in 2018 they were already close to matching Google's cloud revenue.

      I wonder if OVH is also big in the list. And Apple? Surely the server back end to service 900M iPhone users can't be small. How do they compare to, say, Google in server purchase terms?

    • ct520 5 years ago

      Thank you for your comment. Well thought out IMO and appreciate this feedback as being a semi stock holder

  • iveqy 5 years ago

    I once had a bug developing an IP phone. Connecting to one server worked, but once on site with the customer, connecting to their server didn't work, even though the servers were identical.

    It turned out that the power supply on their server was malfunctioning, sometimes delivering too little power. Especially when taking the code paths my IP phone triggered.

    The server software was built in PHP. It's not often you start looking for a bug in PHP but end up switching a capacitor in the PSU.

    My point is that even if you write in PHP, PHP is running C libraries, which are running ASM, which is running on hardware, and every part of this chain is important. There's no such thing as "works everywhere", it's just "has a very high chance of working everywhere".

    (off-topic) thanks for the sds library. I'm a heavy user of it.

    • antirez 5 years ago

      Yes, but the platform is just one of the unknowns at the lower level. If the tooling is fine, the C compiler is very unlikely to emit things that will run in a different way; it is much more common to see software breaking because of higher-level parts that don't have anything to do with the platform, like the libc version, a kernel upgrade, ...

      About SDS: glad it is useful for you!

    • noah256 5 years ago

      I’d be really interested to hear more of this story! How you isolated the problem down to the level of the PSU is going deeper into the machine than I’ve personally been, so this story could be a great teacher.

      • iveqy 5 years ago

        It sounds way cooler than it was. For some reason I opened up the server box, and during a reboot I saw the LED on the motherboard flickering in a way I didn't expect. So we tried changing the PSU and then everything worked.

  • mark-r 5 years ago

    Don't you see that his answer has nothing to do with a hacker mindset? It's an assertion that making your development and production environments as close as possible will save you from unexpected grief, coupled with an observation that this has driven server architectures historically. Especially with subtle problems like performance issues. I find it a very sensible conclusion.

    Of course it didn't hurt that x86 quickly became the price/performance leader for servers, but he makes a good case that this will continue for at least the near future.

    • kstrauser 5 years ago

      The NetBSD people vehemently disagree. By ensuring your software works on various architectures, you expose subtle bugs in the ones you actually care about. Lots of 32-bit x86 code was improved during the migration to 64-bit, not because the move created new bugs, but because existing ones (i.e. code that relied on undefined behavior) couldn't get away with it anymore.
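      A minimal, hypothetical illustration of the kind of latent bug that migration exposed: code that stuffs a pointer into an int happens to round-trip on a 32-bit platform, but silently truncates once pointers are 64 bits wide.

      ```c
      #include <stdint.h>
      #include <stdio.h>

      /* Worked by accident on ILP32: int and pointers were both 32 bits. */
      void broken_roundtrip(void *p) {
          int handle = (int)(intptr_t)p;            /* truncates on LP64 */
          void *back = (void *)(intptr_t)handle;
          printf("same pointer? %d\n", p == back);  /* may print 0 on 64-bit */
      }

      /* Fix: carry pointers in a pointer-sized integer type. */
      void portable_roundtrip(void *p) {
          intptr_t handle = (intptr_t)p;
          void *back = (void *)handle;
          printf("same pointer? %d\n", p == back);  /* always 1 */
      }
      ```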

      • admax88q 5 years ago

        Well nobody uses NetBSD so...

        Sure portability increases code quality, but at what cost to time to market which seems to be the primary concern for most developers these days?

        • supermatt 5 years ago

          NetBSD (and NetBSD code) is used pretty much everywhere. The internet pretty much runs on it.

          • kstrauser 5 years ago

            That might be a bit oversold. I love the BSDs, but I'd think that by now Linux in all its forms would surely heavily outweigh NetBSD.

            • agumonkey 5 years ago

              I would love to read a recent survey, if someone knows of one.

          • wtracy 5 years ago

            I couldn't name a major corporation that uses NetBSD on their servers or routers. (Yahoo used to use FreeBSD servers, but even they migrated to Linux.)

            Is there a major router vendor or something else that uses NetBSD in a big way?

          • jfrn 5 years ago

            Citation or seriously some actual examples sorely needed for this statement.

      • mehrdadn 5 years ago

        I wouldn't call them bugs. If the binaries worked correctly on x86 due to compiler-specific guarantees, then the code wasn't buggy. It just wasn't written for a generic C or C++ compiler.

        • pertymcpert 5 years ago

          Undefined behavior is not a compiler specific guarantee. UB can change based on almost random factors, especially between newer releases of the same compiler. They are bugs, they were just masked.

          • anfilt 5 years ago

            This honestly depends on what undefined behavior we are talking about. Sometimes it will be guaranteed to behave a certain way on a given compiler. A few will also be the same across compilers if you're compiling for the same architecture.

            However, I do agree that cross-compiling is good for finding bugs like this. And really, if we are letting the compiler or architecture define undefined behavior, I find it better to break out the inline assembly. It's explicit that the code is platform-dependent, and it avoids the issue of a subtle change in the future causing it to break.

            Although it's usually possible to express what you're attempting in C without issue, and I only ever find myself doing such a thing if there is a good reason to use a platform-specific feature. Generally, relying on how a compiler handles uninitialized memory and similar is not what I call a compelling platform-specific feature. Cross-compiling is good in that regard because it forces everyone working on a project to avoid those things.

            • steveklabnik 5 years ago

              > This honestly depends on what undefined behavior we are talking about. Sometimes it will be guaranteed to behave a certain way on a compiler.

              That is implementation defined, not undefined, behavior.

              • atq2119 5 years ago

                That's at least unnecessarily splitting hairs and possibly missing the point, considering that some compilers allow you to turn undefined behaviour into implementation-defined behaviour using an option. -fwrapv comes to mind.
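                A minimal sketch of the gap being discussed (not from any particular codebase): signed overflow is undefined by the standard, so a compiler may assume it cannot happen and fold the check away, while -fwrapv makes wraparound defined and keeps the naive check meaningful.

                ```c
                /* With default flags, GCC/Clang may optimize this to 'return 1',
                 * because signed overflow is undefined behavior and is assumed
                 * never to occur. With -fwrapv, overflow wraps (two's complement),
                 * so stays_positive(INT_MAX) really returns 0. */
                int stays_positive(int x) {
                    return x + 1 > x;
                }
                ```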

              • anfilt 5 years ago

                Undefined as per the spec. That does not mean it does not have a certain behavior on a given implementation.

                The spec also does mention implementation-defined behavior. However, undefined things still need to be handled.

                • doughboy23 5 years ago

                  Not really. Undefined means that no purposeful, explicit behavior for handling has to occur even within a specific implementation, which means things can blow up just by changing some compiler settings or minor things in the environment (or even randomly at runtime). E.g., running out of bounds of an array in C is a perfect example of undefined behavior: there is no guarantee on what occurs from run to run. Yes, obviously time doesn't stop dead and something happens, but I think that stretches any meaningful definition of "handled".

                  True, undefined behavior can be implementation-defined, but that is not a requirement, and it usually is not.

              • anfilt 5 years ago

                Undefined as per the spec.

          • mehrdadn 5 years ago

            If the compiler defines a behavior for some UB, then it's no longer UB. It's been defined for your implementation. It might still be undefined for another implementation but that doesn't mean your code is buggy on the first one.

            • pertymcpert 5 years ago

              No, it does not. It's still UB. UB is defined by the standard, not by your compiler's implementation. Certain behaviors may be implementation defined by the standard, those can be defined by your compiler.

              But if the standard says it's UB, it's UB. End of story.

              • mehrdadn 5 years ago

                Where/how do you obtain such confidence in something so wrong? The standard not only doesn't prohibit the implementation from defining something it leaves undefined (surely you don't think even possible behavior becomes invalid as soon as it is documented??), it in fact explicitly permits this possibility to occur -- I suppose to emphasize just how nuts the notion of some kind of 'enforced unpredictability' is:

                > Permissible undefined behavior ranges from ignoring the situation completely with unpredictable results, to behaving during translation or program execution in a documented manner...

        • asveikau 5 years ago

          In my experience with porting stuff, sometimes the bugs exposed by ports are not along the lines of "always works on x86, always fails on ARM". In a lot of cases it fails on both, with different frequencies, but maybe the assumptions are broken sooner or more often on another platform.

        • wtracy 5 years ago

          There's a world of difference between working correctly on x86 and appearing to work correctly on x86. Sometimes the difference has serious security implications.

          • erik_seaberg 5 years ago

            If a program manages to avoid the entire maze of deathtraps, the C standard calls it strictly conforming. I doubt anything commonly used today could qualify.

      • geff82 5 years ago

        Even on NetBSD, my old love, you can not take your program from the x86 machine, pack it up and then run it on ARM. You will have to cross-compile and hope it works.

      • vbezhenar 5 years ago

        Many projects don’t care about subtle bugs. They need to deliver features in time. Bugs are acceptable.

      • jeffdavis 5 years ago

        Debugging on different platforms is great. But when it comes to deployment, you probably want to choose the one you know the best, and that's probably your dev platform.

    • mongol 5 years ago

      Question is: when will development not occur locally at all? Is it possible that in the near future you will actually develop directly in the cloud, on your own development instance? When this happens, the CPU architecture of your laptop is irrelevant. It will just be a window to the cloud.

    • copperx 5 years ago

      Well, unless you're hacking on kernel code, making your production environment exactly like your development one is trivial. Just develop remotely. This isn't a part of Linus's calculus because, for him, developing remotely is unthinkable.

  • mattnewport 5 years ago

    A low-level piece of code written in C seems likely to have fewer portability issues than something sitting on top of many layers of abstraction. The thorny problems that show up when deploying in an environment that's not identical to the development environment are often the result of unexpected interactions in the stack of dependencies. This is why containerization is a thing.

    Platform portability issues have gotten easier with better adherence to standards, and where you have largely the same code running across different ISAs (and no endianness issues between x86 and Arm), but the popularity of things like Docker suggests many devs do care about reproducible production environments.

  • 013a 5 years ago

    It seems likely that, simply, times have changed. There was a time when being on the same platform as the deployment environment was super important, but nowadays the tooling has gotten so much better that it matters a lot less. The proportion of people still writing C code on a day to day basis has dropped... well to pretty much a rounding error.

    The bigger issue is really that ARM servers aren't that much cheaper than x86 servers today, and it's very likely a lot of that difference in cost is just Intel's synthetic market advantage, which would disappear if ARM actually started becoming a threat (which has already started happening due to AMD becoming a threat). Phoronix did a synthetic benchmark of AWS's A1 instances versus the C5 Intel and C5A AMD instances [1]; they're nothing special at all, even with price taken into account.

    Maybe that'll change in the future, but now that AMD is in a competitive state, that's pushing Intel into high gear, and it's hard to say that ARM will have any effect on the server market in the short term.

    [1] https://www.phoronix.com/scan.php?page=article&item=ec2-grav...

    • WorldMaker 5 years ago

      > There was a time when being on the same platform as the deployment environment was super important

      Which is also interesting because there was a time before that when being on the same platform as the deployment environment was sometimes considered nigh impossible, such as the early days of the "microcomputer" revolution, when a lot of software was written on big iron mainframes to run on much more constrained devices (C64, Apple II, etc). It's interesting to compare the IDEs and architectures of that era and how much cross-compilation has always happened. There doesn't seem to be a lot of computing history where the machine used to build the software was the same machine intended to run the software; it's the modern PC era that seems the unique inflection point where so much of our software is built and run on the same architectures.

      (A lot of the modern tools such as VMs like the JVM and CLR are because of the dreams and imaginations of those developers that directly experienced those earlier eras.)

      It's interesting how that tide shifts from time to time, and we so easily forget what that was like, forget to notice the high water marks of previous generations. (Even as we take advantage of it in other ways, we cross-compile to mobile and IoT devices today we'd have no way to run IDEs on, and would rather not try to run compilers directly on them.)

      • rbanffy 5 years ago

        I know some software was written on minis to run on 8-bit computers, but I have a hard time imagining that as the norm. My Apple II dev rig was two computers, one running development tools and one to test. There were two because running my software on the development machine wasn't possible without rebooting, and loading all the tools took 30 seconds - a painful eternity in Apple II terms.

        • pjmlp 5 years ago

          It was quite common in the game industry.

          As confirmed by multiple interviews in RetroGaming Magazine, almost every indie that managed to get enough pounds to carry on with their dream invested in such a setup when they started going big.

          • rbanffy 5 years ago

            For consoles, it's natural - they don't have any self-hosted development tools and the machine you write your code with is largely irrelevant. Early adopters also benefit from the maturity of the tools in other platforms for the time before native tools are developed.

            This may be more common in game studios, but was not mainstream in other segments.

            • pjmlp 5 years ago

              It was quite common on C64, Amstrad CPC and ZX Spectrum.

              Games were developed on bigger systems, and uploaded into them via the expansion ports.

    • vbezhenar 5 years ago

      I write Java, but I seriously doubt that ARM has a comparable JVM; it’s probably slow compared to x86. Cross-platform in theory, not so much in practice.

      • MaxBarraclough 5 years ago

        My understanding is that there's a pretty good proprietary JVM for ARM (optimising JIT and all), but that the FOSS stuff (including OpenJDK) is well behind, and as you say, can be expected to perform nowhere near as well as the AMD64 counterpart.

        > Cross platform in theory, not so much in practice.

        Optimistic that the OpenJDK folks would rise to the challenge if there was anything to play for. Writing a serious optimising JIT for modern ARM CPUs would doubtless be no small task, but wouldn't be breaking the mould. I believe it's a similar situation for RISC-V, currently.

        Googles But wait, there's more! 'Graal'! Shiny new JIT engines are on the way, and ARM support is in there. Hopefully they'll perform well. [0] [1]

        [0] https://github.com/oracle/graal/issues/632

        [1] https://github.com/oracle/graal/pulls?utf8=%E2%9C%93&q=is%3A...

      • carlob 5 years ago

        Wait, aren't virtually all Android apps written in Java?

        • vbezhenar 5 years ago

          Android uses Dalvik, not JVM. Language is Java, standard library is mostly Java-compatible, but runtime is different. And I'm talking about server loads, I don't think that Dalvik is very good for those tasks (but I might be wrong, it's an interesting question).

      • ZiiS 5 years ago

        Remembers Jazelle DBX with a wry smile.

        • MaxBarraclough 5 years ago

          Falls squarely into the 'cute but pointless' category.

          Java is intended to be used by optimising JVMs. Java bytecode is rarely optimised -- that's left to the JIT. Using the Jazelle approach, where is the compiler optimisation step supposed to occur? Nowhere, of course! You'd be far better off with a decent conventional ARM JVM.

          If you're on a system so lightweight that this isn't an option, well, you probably shouldn't have gone with Java. (Remember Java ME?)

          [Not that I've ever actually worked with this stuff, mind.]

    • user5994461 5 years ago

      It's still the case that environments should be as close as possible. It's easier to achieve now because the number of environments has shrunk significantly.

      Nowadays you will be running on a CentOS/Debian server or a Windows desktop, on an AMD64 compatible CPU. Not so long ago, there were tens of Unix and Linux variants with significant differences. It was impossible to support half of them.

    • antirez 5 years ago

      > but nowadays the tooling has gotten so much better that it matters a lot less

      I think that that's the point. Portability to platforms with strong tooling and a usage base, even in a different sector, is OK and safe. The problem is when you try to do something like x86 -> Itanium or the like, which could take some time to stabilize.

  • sangnoir 5 years ago

    I don't think you are really disagreeing with Linus - he's not saying ARM is not viable - he is saying it will not win. With your current setup (cross-compiling), are your ARM executables more performant than x86? Or do they have any other advantage at all over x86? Without an advantage, ARM can't possibly win.

    Having a cheap, viable ARM-native development platform drastically increases the chances of ARM-only killer apps existing; this would be an advantage over the currently dominant x86 (just as there were Windows-only and Linux-only killer apps that cemented their ascent). However, if everyone is cross-compiling due to the cost, it means ARM will always be a secondary platform (at most) - it can't win by being the Windows Phone of platforms.

    [edited for clarity]

    • bryanlarsen 5 years ago

      He's not saying it won't win, either. He's just saying that for it to win, it needs a viable dev platform. Which, if you reverse cause and effect, is blatantly obvious.

      If ARM comes anywhere close to viable enough to be "winning", there will be a good market for dev platforms, and somebody will step in and fill the need. Heck, some are even arguing here that the Pine64 already meets that need.

      • mntmoss 5 years ago

        I'm definitely on the side of ARM (and RISC-V and other new architectures, for that matter) getting "wins", because the modern environment is displaying the signs of a low-layer shakeup:

        * New systems languages with promising levels of adoption

        * Stabilization and commodification of the existing platforms, weakening lock-in effects

        * Emphasis on virtualization in contemporary server architectures

        * "The browser is the OS" reaching its natural conclusion in the form of WASM, driving web devs towards systems languages

        All of that produces an environment where development could become much more portable in a relatively short timeframe. It's the high friction of the full-service, C-based, multitasking development ecosystem that keeps systems development centralized within a few megaprojects like Linux. But what is actually needed for development is leaner than that, and the project of making these lower layers modernized, portable, and available to virtualization will have the inevitable effect of uprooting the tooling from its existing hardware dependencies, even when no one of the resulting environments does "as much" as a Linux box. The classic disruption story.

      • jeffdavis 5 years ago

        It's not simple cause and effect. Servers cause compatible dev boxes and dev boxes cause compatible servers.

      • copperx 5 years ago

        For applications not involving system code, a viable development platform is instantly available by developing remotely.

        Why is this discussion so fixated on having a local development environment? It's 2019.

    • rbanffy 5 years ago

      We have to define what "winning" means here. Google uses POWER9 and specialized GPU-like chips for some workloads. All cloud providers can gain from being able to offer products that perform better or have lower prices than would be possible with x86.

      Right now ARM probably outnumbers x86 in number of machines running Linux by a very large margin. In my backpack there is one x86 machine and two ARM ones, and that doesn't count the one that's in my hand.

      It all depends on what chips become available at what price. All cloud providers do lots of hardware design for their own metal. If they tell you that their next data center will be primarily ARM, they create a market for a million-unit run of whatever CPU they choose.

      • retzkek 5 years ago

        > Google uses POWER9 and specialized GPU-like chips

        As do the current top two supercomputers, Summit and Sierra, among others.

        • rbanffy 5 years ago

          Cray's XC-50 uses ARM CPUs.

    • jbottoms 5 years ago

      I have to agree with LT, but not on technical issues. This is a question of business issues. You can look at many facets of different cases, but they are all distilled into "Path Dependent Behavior". Sometimes it is called "Baby Duckling Syndrome", but the point is that the leader in a market segment is much better equipped to respond and outpace competitors.

  • jdsully 5 years ago

    Redis is a popular program, so it makes sense to spend the time to port it. But what you're missing is the long tail: thousands of little programs scattered around - or edge cases in bigger ones.

    Working on a non-x86 platform makes you a second class citizen, you will experience issues that others have already ironed out on x86. Software has a long tail of niche code not actively maintained but still heavily used. It doesn't make sense to switch to ARM there.

  • otterley 5 years ago

    I agree with you. For many users of the cloud -- especially serverless application developers -- the actual hardware on which their software runs is a black box. In fact, if the serverless application consists solely of scripts or precompiled bytecode and doesn't contain architecture-specific binary code, it could likely run on arbitrary hardware with any supported ISA and users wouldn't know the difference.

  • devit 5 years ago

    I think it ultimately depends on how much better ARM/RISC-V's price/performance ends up being than x86's.

    If it's not much better then people will not switch due to these small annoyances, and there doesn't seem to be any fundamental reason for it being much better (Intel and AMD are perfectly capable of producing top-performing x86 CPUs, and the architecture should not matter much).

  • readittwice 5 years ago

    I guess the situation is similar to containers and static linking, which got popular for server deployments quite recently. They let you develop, test and deploy your application with its dependencies in the exact same version - the exact same binary. Although usually this works just fine even with minor differences in the dependencies, a lot of developers deem it worthwhile to preclude such issues. If you now use ARM in production, you can't use the same binary file anymore.

    There might be different implementations depending on the architecture in some library you use. Also even with higher-level languages like Java it is possible to observe ISA differences: e.g. memory ordering.
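
    A minimal sketch of that memory-ordering point (illustrative code, not from the comment above): with relaxed atomics, the classic "message passing" pattern can expose reordering on ARM's weaker memory model that x86's stronger ordering happens to hide in practice.

      /* mp.c - "message passing" litmus test with relaxed atomics.
         On ARM the consumer may observe ready == 1 while data is still 0,
         because the two relaxed stores (or the two loads) can be reordered;
         x86-TSO preserves store-store and load-load order, so in practice
         this outcome is essentially never seen on x86. */
      #include <pthread.h>
      #include <stdatomic.h>
      #include <stdio.h>

      static atomic_int data  = 0;
      static atomic_int ready = 0;

      static void *producer(void *arg) {
          (void)arg;
          atomic_store_explicit(&data, 42, memory_order_relaxed);
          atomic_store_explicit(&ready, 1, memory_order_relaxed); /* would need "release" to be correct */
          return NULL;
      }

      static void *consumer(void *arg) {
          (void)arg;
          if (atomic_load_explicit(&ready, memory_order_relaxed) == 1 &&
              atomic_load_explicit(&data,  memory_order_relaxed) == 0) {
              puts("observed the reordering"); /* possible on ARM, essentially never on x86 */
          }
          return NULL;
      }

      int main(void) {
          pthread_t p, c;
          pthread_create(&p, NULL, producer, NULL);
          pthread_create(&c, NULL, consumer, NULL);
          pthread_join(p, NULL);
          pthread_join(c, NULL);
          return 0;
      }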

  • AnimalMuppet 5 years ago

    I'm with Linus on this one.

    I had the pleasure (?) of working on a C/C++ codebase that compiled on Windows and ten different flavors of Unix. It was all "portable", but all over the place there was stuff like

      #if defined AIX || defined OSF1
      short var;
      #else
      int var;
      #endif
    
    And to get it right, you had to compile it on all the platforms and fix all the errors (and preferably all the warnings).

    Yeah, cross platform is never as simple as same platform.

    • wahern 5 years ago

      How many years ago was that? POSIX compliance has come a long way and most of the proprietary vendors (the ones with all the corner cases) are gone. These days not only do platforms like AIX and Solaris have far fewer corner cases, they're even adopting Linux and GNU extensions wholesale. Anyhow, most people can ignore these altogether. Portability between Linux and the BSDs is much easier. macOS is the biggest outlier in terms of corner cases yet in many ways the best supported, thanks to the popularity of Homebrew.

      C++ is a different matter, but C++ portability is a headache even if you stay on Linux. Likewise, trying to maintain OS-level portability of monolithic codebases between Windows and Unix is a fool's errand, which is why Windows Subsystem for Linux (WSL) is likely to only get better.

      • AnimalMuppet 5 years ago

        How long ago? 15 years - and it was at least a decade-old code base. But I had to take some new code that was something like Windows-and-Sun-only, and port it to run on all of the other architectures.

    • DerekL 5 years ago

      Yes, cross-platform development is more work, but that example you gave is the wrong way to do it. It's better to abstract the differences using typedefs, functions, macros, etc., and keep the platform switches in a few isolated places in the code.
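
      A minimal sketch of that approach (illustrative, reusing the AIX/OSF1 macros from the example above): the #ifdefs live in one header, and the rest of the code only ever sees the portable alias.

        /* platform.h - the only file that knows about platforms */
        #if defined AIX || defined OSF1
        typedef short counter_t;   /* these systems want the narrow type */
        #else
        typedef int counter_t;
        #endif

        /* everywhere else: no platform checks, just the portable alias */
        counter_t var;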

    • musicale 5 years ago

      Single platform cross-development (e.g. iOS) is a lot easier.

  • kakwa_ 5 years ago

    It's likely that ARM will find a place in the cloud. But, at least at first, it will be in some specific parts of a cloud offering.

    I don't see ARM displacing x86 on VM offerings like EC2 any time soon; an ARM offering will exist (in fact it already does), but it will remain a small portion.

    However, some parts of a cloud offering are completely abstracted from the hardware: DNS, object stores, load balancers, queues, CDNs... For these, from the point of view of a developer, CPU architecture doesn't matter at all, and if the cloud provider finds it more interesting to use ARM (maybe with some custom extensions), it will probably switch.

    From there, it can gradually go to services where architecture kind of matters, but not necessarily, like serverless, or Postgres/MySQL as a service.

    And while it grows, ARM CPUs will improve for other use cases, and maybe overtake X86 VMs.

    The other possibility is a massive cost reduction, like 3 to 4 times cheaper for equal performance, but that's not really the case right now. Also, given all the wasted money I've seen on AWS ("app is leaking memory? just use a 64GB instance"), I'm not sure it's a good enough incentive. However, we are specialists at being penny wise and pound foolish.

  • jecxjo 5 years ago

    I think more people need to have a kernel hacker mindset. We are getting to the point where everyone is working so high level that they just assume you can swap out the Distro/Kernel/Arch and it will not only "just work" but will work exactly the same way as your home system. With simple applications (not attempting to optimize or really push the limits of your system) you might not run into issues initially, but when you try to push things you'll inevitably run into problems.

    The fact that we're talking about ARM makes this even more important. You're having to compete against x86, which requires increased core counts, a lot more optimization, and potentially even redesigns of your software to make your higher-level environment be perceived as equal to x86. Businesses will need this: your boss will ask if ARM is as fast as x86, and they won't care to quibble about technological differences if you can't just get the same output speeds as their old, trusted hardware. There is only so much your language can do to cover your butt. At some point you'll have to be aware of your environment to compete.

  • mcv 5 years ago

    I'm a web developer, and for years now I've been developing on either Mac or Windows, but deploying to Linux servers. Or possibly totally different kinds of servers. I don't care much about the architecture of the server as long as it runs my code. Give me Apache, JVM, node, and the necessary build and deploy tools, and there's nothing I can't run on it.

  • shitgoose 5 years ago

    Linus said "the cross-development model is so relatively painful". What you described is a relatively painful effort of chasing the word alignments. Wouldn't it be better not to worry about alignments at all and spend your time on something more productive? I think this is his point - if non-productive effort can be avoided, people will avoid it.

  • coldtea 5 years ago

    It might be easy to do that in a well written single C/C++ codebase like Redis.

    It's not simple nor something devs will ever want or care to do in a big web app with several binary dependencies.

    Just consider that a single Node app's binary deps could trivially include the entirety of Chrome itself, not just in the form of Node's v8 engine, but e.g. as the PDF rendering "headless chrome" wrapper Puppeteer.

    And that's just the tip of the iceberg, add DBs, extensions, Python backend scripts, etc etc, and few will bother.

  • musicale 5 years ago

    "I can pretty much guarantee that as long as everybody does cross-development, the platform won't be all that stable. Or successful."

    iOS seems like a huge counterexample (as you note.)

    • Androider 5 years ago

      When developing for iOS you typically interact with the iOS simulator on your desktop, which natively compiles your app against x86 versions of the mobile frameworks. True native iOS development is pretty rare, and more painful. Overall, iOS development is a delightful experience because there's a singular hardware target and Apple pretty much nails the execution.

      For Android development, on the other hand, you don't have a good simulator, and the out-of-the-box dev experience relies on an x86 emulator of the ARM environment. In practice this means that in your day-to-day Android development, you're running the compile-run-test cycle by looking at your actual ARM device all the time, because the emulator is dogshit. I wouldn't really call it cross-platform development in any traditional sense, it's more like remote development, and a bad experience.

      • kllrnohj 5 years ago

        > For Android development, on the other hand, you don't have a good simulator, and the out-of-the-box dev experience relies on an x86 emulator of the ARM environment. In practice this means that in your day-to-day Android development, you're running the compile-run-test cycle by looking at your actual ARM device all the time, because the emulator is dogshit. I wouldn't really call it cross-platform development in any traditional sense, it's more like remote development, and a bad experience.

        This hasn't been true for years. The emulator shipping with Android Studio uses an x86-based image, and it's very, very fast as a result.

        Android's emulator even has quite a few more features than iOS's simulator, such as mock camera scenes so you can even develop apps that rely on the camera on the emulator.

        If anything these days the Android emulator soundly trumps the iOS simulator on all interesting metrics except maybe RAM usage. But, critically to Linus' argument, they both use the same architecture as the development machine.

      • pcwalton 5 years ago

        > When developing for iOS you typically interact with the iOS simulator on your desktop, which natively compiles your app against x86 versions of the mobile frameworks.

        And the fact that the "develop on x86, test on ARM" workflow works so smoothly on iOS is strong evidence that Linus is wrong.

        • Androider 5 years ago

          Who's going to make the "develop on x86, deploy on server-side ARM" experience smooth? It certainly isn't today. Who has that kind of control of the entire stack top to bottom? Amazon is the only one that comes to mind... but I wouldn't bet on it.

          • pcwalton 5 years ago

            I think it's a smooth enough experience, yours notwithstanding. It's just that there aren't many server-side ARM options available, so we don't have much experience.

        • MBCook 5 years ago

          Is it? Or is it evidence that Apple has worked REALLY DAMN HARD to make it work decently?

          I’ve certainly heard of bugs that the simulator doesn’t reproduce because it’s not ARM.

          • pcwalton 5 years ago

            > I’ve certainly heard of bugs that the simulator doesn’t reproduce because it’s not ARM.

            And that isn't enough to get people to demand an ARM emulator. In fact, Android developers hate the ARM emulator and prefer the x86 simulator—more evidence against Linus' assertion.

            • Androider 5 years ago

              Hi, I've done Android development professionally for many years, over multiple apps. I don't know anyone who uses the x86 simulator for anything except out of curiosity to check it out every couple of years if it's still completely worthless. Android developers develop with an ARM phone attached by USB, and it's still an abysmal experience compared to iOS.

        • pjmlp 5 years ago

          And to top that, Apple also has their own customization of LLVM bitcode, for more binary neutral deployments.

  • asadkn 5 years ago

    Depends on general confidence level and how mission critical your deployed software is. For instance, I will always prefer to run PHP, Ruby or Python on Linux servers. But on the client side, I have faith in Electron Apps to run cross-platform without issues.

    I wouldn't be comfortable with an underlying architecture change to ARM for years to come, and the decision to use it would be based on the general consensus on reliability that follows.

    • antirez 5 years ago

      I would more comfortably run my code developed on x86/Linux on ARM/Linux than on x86/FreeBSD, for instance... Platform is just one unknown, and not the worst one if the tooling is good IMHO. Consider that complex software under Linux/ARM now has a ten-year history at least, with numbers (mobile) that are not approached by any other thing on earth.

  • supermatt 5 years ago

    I think that they don't care until they do. I think it's awesome that you ported Redis to ARM, but that is your software. If my node/rails app has modules with native libs unsupported on ARM, do I fix all those modules that "almost" work, replace them with other ones that already work, or just deploy to an environment I already know works? And once I've hit that issue once, will I even try it again?

  • Solar19 5 years ago

    Good work on Redis. Ruby, however, strikes me as pretty fragile. They can't get it running on Windows, officially, and the third-party Windows installer is hit or miss. Ruby also seems to depend on a specific compiler, gcc. I wouldn't be surprised if it has trouble on ARM.

  • sipos 5 years ago

    On the other hand, it's easier to not have to think about it, even when it doesn't matter 99% of the time, and run the exact same container as you tested locally on the server.

    I feel like this shouldn't matter really, but people are amazingly lazy/developer time valued highly.

  • fmajid 5 years ago

    Linus' point is that there are hardly any developer-class ARM machines available, just RaspberryPi-class SBCs that use mobile SOCs with the performance of a 2012-vintage smartphone, nothing with the grunt of a Qualcomm Centriq or Cavium ThunderX.

  • computerex 5 years ago

    > most today's developers don't care about environment reproducibility at architecture level.

    Source for this? Seems like pure speculation.

  • pennaMan 5 years ago

    >one year ago I started the effort to support ARM as a primary architecture for Redis

    I'm looking forward to embedding Redis in my Android app :)

  • boris 5 years ago

    What hardware do you use for development and for testing? Can you elaborate on your setup, ARM-wise?

  • jfrn 5 years ago

    Redis is not a very large scale piece of software or software system.

  • think_free 5 years ago

    "I often find black-and-white people a bit stupid, truth be told" - Linus Torvalds, 2005

    I think maybe Linus T. is getting old, out of touch, and closed-minded, and I think we should be open to change and care less about every random thought he blows off.

    Quote source: https://www.linux.com/news/linus-compares-linux-and-bsds

    • Chyzwar 5 years ago

      At least he does not create accounts to shit on someone.

xoa 5 years ago

It's of course impossible not to respect Linus' opinion and first hand experience in this space, but doesn't this whole post completely ignore the 100 ton blue whale in the room? Namely smartphones. That's an entire enormous segment of the industry and it's nearly 100% (or entirely 100%?) literally develop-on-x86-deploy-on-ARM. Smartphones also fit

>"This isn't rocket science. This isn't some made up story. This is literally what happened"

right? I mean, I can see arguing that going up into the cloud is different in some ways than going down to smartphones (although the high end ones are now going to outperform plenty of old dev machines in burst power). There are certainly differences in scaling and such. But the maturity of the tech for cross development of high level software isn't the same as it was in that era either. And if we're talking about bottom-to-top revolutions, embedded and smartphones seem to be at a lower level and much higher volume than PCs.

Finally there is clearly an upcoming disruptive fusion event coming due to wearable displays. When "mobile" and "PC" gets merged, it certainly looks like ARM is in a strongly competitive position for some big players, and having more powerful stuff up the stack will matter to them as well.

None of which is to say he won't be right at least in the short term, but it still is kind of odd to not even see it addressed at all, not even a handwave.

  • flohofwoe 5 years ago

    Smartphones are a good argument for both views IMHO. Native development (as in native machine code executables) on Android is still a terrible experience even though they had a decade to fix it. It's much better on Apple platforms, maybe because they actually cared about developer-experience and native code is a "first class citizen" there.

    It goes beyond the different instruction set of course and most of the time this is indeed mostly irrelevant (unless you've arrived at processor-specific optimizations), but the "develop on the same platform you are running on" still has the least painful workflow IMHO.

    I wouldn't mind an ARM-based Mac though ;)

    • pjc50 5 years ago

      I think the ARM-based Macs are inevitable, although it might be called "iPad Pro Developer Edition".

      This is Jeff Atwood's argument: https://blog.codinghorror.com/the-tablet-turning-point/ ; Apple tablet performance at Javascript is now catching up to and exceeding desktop performance. Apple have also sunk a lot of money into developing their own processor line, and they have experience in force-migrating all their customers between architectures. At some point you might not be able to buy an Intel-based Apple laptop any more. Given the immense brand loyalty among web developers, they are likely to shrug and carry on .. and start demanding ARM servers with high Javascript performance.

      Interestingly there's also https://stackoverflow.com/questions/50966676/why-do-arm-chip... . See also on HN front page https://www.axios.com/apple-macbook-arm-chips-ea93c38a-d40a-... "Apple's move to ARM-based Macs creates uncertainty"

      (BTW the link is now slashdotted, I am using https://web.archive.org/web/20190222120214/https://www.realw... )

      • SomeHacker44 5 years ago

        I just wonder if Apple can design laptop chips that perform well (per watt) at 45W TDP or desktop chips at 2-3x that and with multiple sockets. If not, then what’s the point?

        I won’t move to an ARM Mac, personally. I will move to Windows or Linux on x86, for all the reasons Linus gives and also for games. Sorry, but an ARM Mac may finally push me where crappy keyboards and useless anti-typist touch bars have not quite managed to.

    • pjmlp 5 years ago

      Native (NDK) development on Android is hard on purpose, as a means to increase the platform security and target multiple SOCs.

      NDK level programming is explicitly only allowed for scenarios where ART JIT/AOT still isn't up to the job like Vulkan/real time audio/machine learning, or to integrate C and C++ from other platforms.

      In fact, with each Android release, the NDK gets further clamped down.

      I would like a better NDK experience, in view of iOS and UWP capabilities; on the other hand, I do understand the security point of view.

      • flohofwoe 5 years ago

        Yeah, right, just like the PS3 was intentionally hard to develop for, to keep away the rabble. That worked out really great (at least Sony did a complete 180 and made the PS4 SDK a great development environment).

        https://www.tomshardware.co.uk/sony-playstation-ps3-develope...

        As long as Android allows running native code via JNI, the security concerns are moot anyway. If they were really concerned about security, they would fix their development tools (just like Apple did by integrating clang ASAN and UBSAN right into the Xcode UI).
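
        A minimal sketch of the sanitizer point (a made-up toy program): built with something like clang -g -fsanitize=address,undefined, the out-of-bounds write below is reported loudly at runtime instead of silently corrupting memory.

          /* oob.c - compile with: clang -g -fsanitize=address,undefined oob.c */
          #include <stdlib.h>

          int main(void) {
              int *buf = malloc(4 * sizeof(int));
              buf[4] = 1;   /* heap-buffer-overflow: one element past the end */
              free(buf);
              return 0;
          }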

        • pjmlp 5 years ago
          • gmueckl 5 years ago

            One article is about enforcing the exclusive use of public APIs. The rest is about hardening the C/C++ code of AOSP. I do not see any "clamping down" here. What am I missing?

            • pjmlp 5 years ago

              Using SE Linux and seccomp to close down entry points to the Linux kernel.

              Since this work only started on Android 7, it is clamping down the free rein that existed before.

              • kllrnohj 5 years ago

                Except they allow nearly everything for regular Android apps since libc lets you access nearly every syscall.

                Nothing was meaningfully "clamped down" there. You can't directly syscall some obsolete syscalls anymore, and you can't syscall random numbers, but nearly any actual real syscall is still accessible and nothing indicates that it won't be.

                As long as libc can do it so can you, since you & libc are in the same security domain. Or anything else that an NDK library can do in your process, you can go poke at that syscall, too.

                It'd almost always be stupid to do that instead of going through the wrappers, but you technically can.
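
                A minimal sketch of what poking at a syscall directly looks like (illustrative only): bionic exposes the generic syscall() wrapper, so any syscall number that seccomp permits can be issued without going through the libc convenience functions.

                  /* raw_syscall.c - call gettid(2) by number instead of via a wrapper */
                  #include <stdio.h>
                  #include <sys/syscall.h>
                  #include <unistd.h>

                  int main(void) {
                      long tid = syscall(__NR_gettid); /* same result as gettid() */
                      printf("tid = %ld\n", tid);
                      return 0;
                  }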

                • pjmlp 5 years ago

                  Android uses bionic.

                  • kllrnohj 5 years ago

                    Yes, and..? Bionic is Android's libc. libc is just the name of the C standard library, not any particularly C standard library.

                    You might be confused and thinking of glibc, which is a particular libc implementation.

                    • pjmlp 5 years ago

                      And as such it is only required to expose ISO C functions.

              • gmueckl 5 years ago

                Looking at this list, the blocked syscalls do not seem to be too bad:

                https://github.com/aosp-mirror/platform_bionic/blob/master/l...

                This is mostly setgid/setuid, mount point and system clock related stuff. Except for syslog and chroot, I see no syscalls that you should be using in a user process anyway.

                So technically, this is clamping down Android, but it seems like a pretty reasonable restriction and far from a heavy handed approach.

    • dkarl 5 years ago

      I bet Apple's experience moving from PowerPC to X86 gave them a leg up as well, and in both cases (PowerPC/X86, MacOS/iOS) they had the power to force developers to cross-develop to maintain access to their platform. Nobody is in a position to force server-side developers to switch to ARM.

      • geerlingguy 5 years ago

        Don’t forget 680X0 to PowerPC.

      • WorldMaker 5 years ago

        From the other perspective too, Google clearly seemed to want the flexibility to change the details of Android architecture on a "whim", seeming to settle on the Linux kernel at the last minute and expecting to support both ARM and x86 and whatever else they felt they wanted. Google's focus on the JVM/Dalvik and making Native hard in Android seems quite intentional, forcing developers to cross-develop in a different way by obfuscating as much code as possible into a virtual machine that they could 100% control abstracted from underlying architecture and even kernel.

    • jayd16 5 years ago

      What do you think can be improved in Android? Unlike iOS, Android is actually running on diverse hardware. All iOS devices are ARM, whereas Android will also run on x86. That alone makes it more of a hassle.

      • flohofwoe 5 years ago

        The command line C/C++ toolchain is fine, at least now where this is basically reduced to clang and libc++.

        The problem is basically everything else:

        - The ever changing build systems. And every new "improvement" is actually worse than before (I think currently it is some weird mix of cmake and Gradle, unless they changed that yet again).

        - Creating a complete APK from the native DLL outside Gradle and Android Studio is arcane magic. But both Android Studio and Gradle are extremely frustrating tools to use.

        - The Java / C interop requires way too much boilerplate (see the sketch after this list).

        - Debugging native code is still hit and miss (it's improved with using Android Studio as a standalone debugger, but still too much work to setup).

        - The Android SDK only works with an outdated JDK/JRE version; if the system has the latest Java version, it spews very obscure error messages during the install process, and nothing works afterward (if it needs a specific JDK version, why doesn't it embed the right one?).
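
        A minimal sketch of the interop boilerplate mentioned above (package, class and method names are made up): even one trivial native function means hand-writing a mangled export like this, plus the matching Java-side native declaration and System.loadLibrary call.

          /* hello_jni.c - a single trivial native method; the Java side still needs
             a matching `native String getGreeting();` declaration in
             com.example.demo.NativeBridge plus a System.loadLibrary call. */
          #include <jni.h>

          JNIEXPORT jstring JNICALL
          Java_com_example_demo_NativeBridge_getGreeting(JNIEnv *env, jobject thiz) {
              (void)thiz; /* unused receiver */
              return (*env)->NewStringUTF(env, "hello from C");
          }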

        The Android NDK team should have a look at the emscripten SDK, which solves a much more exotic problem than combining C and Java. Emscripten has a compiler wrapper (emcc) which is called like the command line C compiler, but creates a complete HTML+WASM+JS "program". A lot of problems with the NDK and build system could be solved if it would provide a compiler wrapper like emcc which produces a complete APK (and not just a .so file) instead of relying on some obscure magic to do that (and all the command line tools which can do this outside gradle are "technically" deprecated).

        ...hrmpf, and now that I recalled all the problems with Android development I'm grumpy again, thanks ;)

      • MBCook 5 years ago

        Is it? I was under the impression x86 had basically failed and everything was ARM. Or is MIPS or something else reasonably popular?

        • kllrnohj 5 years ago

          MIPS support has been officially dropped but x86 is still very alive - mostly due to the emulator these days, though, but some Android TV hardware was using it for a while, too.

    • swiley 5 years ago

      Native development sucking on android is mostly an android problem (and to some extent a Qualcomm problem since their smartphone SOCs don’t support anything else.)

  • vesinisa 5 years ago

    He did address the so-called 100-ton blue whale at the end:

    > End result: cross-development is mainly done for platforms that are so weak as to make it pointless to develop on them. Nobody does native development in the embedded space. But whenever the target is powerful enough to support native development, there's a huge pressure to do it that way, because the cross-development model is so relatively painful.

    • justaaron 5 years ago

      Except that developing for ARM SBCs natively is normal these days. Even the lowly Raspberry Pi encourages you to plug in an HDMI monitor, a USB mouse and keyboard, and boot into Raspbian, where things like Wiring-Pi further extend your cross-development reach (LOL), while you develop the code for your peripheral directly on the computer that's going to run it. It's a bit of a Matryoshka doll in that I have both Propeller chip and FPGA "hats" for my Pi and use both the Propeller IDE and the Icestorm toolchain to natively cross-develop for ACTUALLY embedded devices, as the ARM device is the main computer already and not the embedded device anymore lol

  • sitkack 5 years ago

    Linus is mostly wrong except for HPC. Very few dev pipelines for folks result in native executables.

    The vast majority of code is delivered as either source (Python, Ruby, etc.) or bytecode (JVM, Scala, etc.).

    And the Xeon-class machines folks deploy to in data center envs are a world apart from their MacBooks.

    These truths are true for Linus, but not for the majority of devs.

    Even for those creating native binaries, this is done through CI/CD pipelines. I have worked in multi-arch envs: Windows NT 4 on MIPS/Alpha/x86, iOS, Linux on ARM. The issues are overblown.

    • bayindirh 5 years ago

      --- I accidentally deleted this comment, so, I've re-written it. ---

      Disclaimer: I'm a HPC system administrator in a relatively big academic supercomputer center. I also develop scientific applications to run on these clusters.

      > Linus is mostly wrong except for HPC. Very few dev pipelines for folks result in native executables. The vast majority of code is delivered as either source (Python, Ruby, etc.) or bytecode (JVM, Scala, etc.).

      Scientific applications targeted for HPC environments contain the most hardcore CPU optimizations. They are compiled according to CPU architecture and the code inside is duplicated and optimized for different processor families in some cases. Python is run with PyPy with optimized C bindings, JVM is generally used in UI or some very old applications. Scala is generally used in industrial applications.

      > And the Xeon-class machines folks deploy to in data center envs are a world apart from their MacBooks.

      No, they aren't. Xeon servers generally have more memory bandwidth and more resiliency checks (ECC, platform checks, etc.). Considering the MacBook Pro has a same-generation CPU as your Xeon server, at a relatively close frequency, per-core performance will be very similar. There won't be special instructions, frequency enhancing gimmicks, or different instruction latencies. If you optimize well, you can get the same server performance from your laptop. Your server will scale better, and will be much more resilient in the end, but the differences end there.

      > Even for those creating native binaries, this is done through CI/CD pipelines.

      Cross compilation is a nice black box which can add behavioral differences to your code that you cannot test in-house, especially if you're doing leading/cutting edge optimizations at the source code level.

      • unilynx 5 years ago

        Isn't turbo boost an issue when comparing/profiling? My experience with a video generation/encoding run of about 30 sec was that my MacBook outperformed the server Xeons... if left to cool down for a few minutes between test runs. Otherwise a test run of 30 seconds would suddenly jump up to over a minute.

        The Xeons though always took about 40 seconds... but were consistent in that runtime (and were able to do more of the same runs in parallel without losing performance).

        I always attributed that to turbo boost.

        • bayindirh 5 years ago

          > Isn't turbo boost an issue when comparing/profiling?

          No. In HPC world, profiling is not always done over "timing". Instead, tools like perf are used to see CPU saturation, instruction hit/retire/miss ratios. Same for cache hits and misses. For more detailed analysis, tools like Intel Parallel Studio or its open source equivalents are used. Timings are also used, but for scaling and "feasibility" tests to test whether the runtime is acceptable for that kind of job.

          OTOH, In a healthy system room environment, server's cooling system and system room temperature should keep the server's temperature stable. This means your timings shouldn't deviate too much. If lots of cores are idle, you can expect a lot of turbo boost. For higher core utilization, you should expect no turbo boost, but no throttling. If timings start to deviate too much, Intel's powertop can help.

          > My experience with a video generation/encoding run of about 30 sec was that my MacBook outperformed the server Xeons...

          If the CPUs are from the same family and the speeds are comparable, your servers may have turbo boost disabled.

          > Otherwise a test run of 30 seconds would suddenly jump up to over a minute.

          This seems like thermal throttling due to overheating.

          > The Xeons though always took about 40 seconds... but were consistent in that runtime (and were able to do more of the same runs in parallel without losing performance).

          Servers have many options for fine-tuning CPU frequency response and limits. The servers may have turbo boost disabled, or if you saturate all the cores, turbo boost is also disabled due to the in-package thermal budget.

          If you have any more questions, I'd do my best to answer.

      • sitkack 5 years ago

        I am not sure we are disagreeing on much, but the 4 core i7 in my dev MacBook is a whole lot different than the dual socket, 56 core machines we run on.

        Optimizations that need to happen don't happen locally; they get tuned on a node in the cluster. Look at all the work Goto has done on GotoBLAS.

        • bayindirh 5 years ago

          We agree on HPC; however, I also agree with Linus about non-HPC loads. Software and developers are always more expensive than hardware, but scaling beyond a certain point in hardware (number of servers, or the GPUs you need) drives the hardware and maintenance cost up, hence the difference becomes negligible, or the maintenance becomes unsustainable. This is why everyone is trying to run everything faster with the same power budget. In the end, after a certain point, everyone wants to run native code at the backend to reap the power of the hardware they have. This is why I think Linus is right about ARM. That's not to say I'm not supporting them, but they need to be able to run some desktops or "daily driver" computers which support development. Java's motto was write once, run everywhere, which was not enough to stop the migration to x86. Behavioral uniformity is peace of mind, and that is a very big deal TBH.

          What I wanted to say is, unless the code you are writing consists of interdependent threads and the minimum thread count is higher than your laptop, you can do 99% of the optimization on your laptop. On the other hand, if the job is single threaded or the threads are independent, the performance you obtain in your laptop per core is very similar to the performance you get on the server.

          For BLAS stuff I use Eigen, hence I don't have experience with xBLAS and libFLAME, sorry.

          From a hardware perspective, a laptop and a server is not that different. Just some different controllers and resiliency features.

    • paulmd 5 years ago

      Even in a bytecode language, there is no guarantee that an application is write-once run-everywhere. I converted a small app that was running on Windows with Oracle JDK to run on Linux with OpenJDK and it was not plug-and-play. It was close, but there were a few errors particularly surrounding path resolution (and yes, the Windows application was already using Unix-style paths, this was actually a difference in how paths were resolved). Similarly, there are small differences between Tomcat and Jetty and so on.

      This wasn't showstopping by any means, but it did take a couple of hours to tweak it until it ran properly, and this was just a small webapp not really doing anything exceptional.

      Our main line-of-business app (on Java) runs on SPARC/Solaris in production, so we have on-premises test servers so we can test this... and yes, there have been quite a few instances where we identified significant performance anomalies between developer machines running x86/Windows and our Sparc/Solaris test environment, and had to go rewrite some troublesome functions.

      • pjmlp 5 years ago

        The same can happen on the same hardware just by switching versions of the same toolchain.

        So Linus' position is a bit of a straw man.

        • paulmd 5 years ago

          Correct, we need to stabilize all these factors in order to ensure stable, bug free deployment. A Good Post.

          Oh, you meant that just because there is one other thing that you might slip up and forget to control for, we shouldn't bother trying to control anything? No, wait, that's actually A Very Bad Opinion.

    • rkangel 5 years ago

      He calls that out though "even if you're only running perl scripts". It's not the cross-compilation that's a factor, it's wanting the environment to be as similar as possible.

      Even if your code is Java bytecode, that's still running on a different build of the JVM, on a different build of the OS (possibly a different OS). There is opportunity for different errors to crop up. They might be rare, but they'll be surprising and costly when they happen exactly because of that.

      • MaxBarraclough 5 years ago

        The question then is how successful the JVM is. I think you're underestimating it. Torvalds' attitude is certainly justified regarding plenty of other types of software though -- just building a C++ project on a different distro can be a pain.

        Someone else [0] points out that Java (in the right context at least) is so successful in isolating the developer from the underlying platform, that it isn't a problem if the developer isn't even permitted to know what OS/hardware their code will run on.

        Could they accidentally write code that depends on some quirk of the underlying platform? I think it's not that likely. Nowhere near as likely as in C/C++, where portability is a considerable uphill battle that takes skill and attention on the part of the developer.

        > They might be rare, but they'll be surprising and costly when they happen exactly because of that.

        Ok, but you can say the same for routine software updates. It's a question of degree.

        [0] https://news.ycombinator.com/item?id=19229224

    • disiplus 5 years ago

      he just said its not worth it and as a developer if i could chose that i develop on the platform that will run my code i will chose it even if its slightly more expensive. granted that on both i could be same level of productive.

      we had those problems when developing in scripting language on windows a code that will run on linux because at some point we needed something that called native and would make us problems with different behavior. after some of that experience we tried to get everybody the same environment that is close to what will run in production.

      • TechieKid 5 years ago

        Usage of punctuation would make your attempt at communication more likely to serve its purpose.

        • disiplus 5 years ago

          thx, noted. was on my phone and wanted to reply quickly.

  • pjmlp 5 years ago

    Mainframes have been the pioneers of using bytecode as a distribution format, with the CPUs being microcoded for the specific bytecode set (e.g. Xerox PARC / Burroughs), or having JIT/AOT compilation at deployment time like IBM i and IBM z (aka OS/400, OS/390).

    So while Linus' opinion is to be respected, mainframes, and the increasing use on smartphones, smartwatches and GPS devices of bytecode distribution formats with compilation to native code at deployment time, show another trend.

  • CoolGuySteve 5 years ago

    Ya it's kind of weird he talks about how stuffing a beige box PC in the corner was the impetus for X86 servers. But the modern day equivalent of that is either a cheap $5/month VPS, a RaspberryPi, or an OpenWRT router, any of which could compile/run ARM code.

    I think fundamentally, the error he's making is comparing the current market to the late 90s/early 2000s market. Back then a RISC Unix machine cost thousands of dollars. It was cost prohibitive to give one to each dev/admin. Nowadays a RISC Linux PC is $5.

    • syn0byte 5 years ago

      Actually, you make the best case for ARM servers of anyone in this thread.

      The starving college kid in a Helsinki dorm working on his EE degree can't afford 600-1000 dollars for another Laptop/Desktop to experiment with. A 35 dollar ARM SBC and a monitor that doubles as his TV is right in his price range...

      That doesn't invalidate his point. He's just saying that is basically what needs to happen for ARM servers to start taking off. The next step is for companies to start deploying ARM workstations. That part still seems to be a good way off; MS abandoning their Windows ARM port didn't help the cause.

      • einr 5 years ago

        The starving college kid in a Helsinki dorm working on his EE degree can't afford 600-1000 dollars for another Laptop/Desktop to experiment with. A 35 dollar ARM SBC and a monitor that doubles as his TV is right in his price range...

        35 dollars will buy you an oldish x86 beige box that will absolutely flat out murder a Raspberry Pi performance-wise. Cheap, fast hardware is not a problem anymore.

  • geezerjay 5 years ago

    > but doesn't this whole post completely ignore the 100 ton blue whale in the room? Namely smartphones.

    This is patently false. Mobile developers do test their apps on smartphones, even though Google and Apple offer VMs. You'd be hard pressed to find a mobile app software house that doesn't have a dozen or so smartphones available to their developers to test and deploy on the real thing.

    • nicoburns 5 years ago

      Surely this would be the same for server software: If prod was running on ARM, then you'd probably have your CI server running ARM too. But that wouldn't stop you developing on x86 if that was what was convenient.

      • geezerjay 5 years ago

        > Surely this would be the same for server software: If prod was running on ARM, then you'd probably have your CI server running ARM too.

        CI/CD is already too far ahead in the pipeline to be useful here. CI is only a stage where you ensure that whatever you've developed passes the tests you already devised; by that point you have already tested and are convinced that nothing breaks.

        The type of testing that Linus Torvalds referred to is much earlier in the pipeline. He is referring to the ability to fire up a debugger and check if/when something is not working as expected. Developers do not want to deal with bugs that are only triggered somewhere in a CI/CD pipeline and that they cannot reproduce on their own machines.

    • kalleboo 5 years ago

      So you're saying there are already plenty of ARM devices out there to do testing on?

      • geezerjay 5 years ago

        No, I'm saying that mobile development is also a clear example that developers do want to develop for platforms that they actually can test, which was the point that Linus Torvalds made.

  • seanalltogether 5 years ago

    > That's an entire enormous segment of the industry and it's nearly 100% (or entirely 100%?) literally develop-on-x86-deploy-on-ARM.

    I'm not sure I agree with this. My coding environment is on x86, and I build on x86, but my run/debug cycle is on ARM. No one is really encouraged to test on the simulator even though it's available; you are almost entirely expected to test on your actual ARM device, run it, and see the results of your work.

    Linus is making the argument that people want their release builds to run in the same environment as their daily test builds, and I don't see smartphone development as an exception to that rule.

  • chongli 5 years ago

    When "mobile" and "PC" gets merged

    I don't see this happening. PCs are tools for getting real work done. Mobiles are mostly communication and entertainment devices.

    I like to fall back on this Steve Jobs quote, employing a car/truck metaphor for computers:

    When we were an agrarian nation, all cars were trucks, because that's what you needed on the farm. But as vehicles started to be used in the urban centers, cars got more popular … PCs are going to be like trucks. They're still going to be around, they're still going to have a lot of value, but they're going to be used by one out of X people.

    • wil421 5 years ago

      I’ve always thought PCs will become business workstations, meaning you use them for office work but everything else will be "cars", as your quote put it. Internet browsing, social media, viewing/editing photos, and the like will be done on some mobile device. Windows is already the de facto business workstation and I don’t see it going away.

      There’s already a whole generation or two who will likely have little to no experience with PCs.

      • pjmlp 5 years ago

        Yep. I have mostly used laptops since 2000 and went full laptop around 2006.

        With 2-in-1s and tablet docking stations, the desktop case will be fully covered.

        Surface, Samsung DeX, ...

    • pjc50 5 years ago

      > Mobiles are mostly communication

      Communication is also work, especially as you go up the management value chain. I think maybe people should refer to the thing that PCs do and mobiles don't as "typing".

      • chongli 5 years ago

        It isn't just the keyboard which PCs hold as an advantage, it's the mouse as well. There are a lot of tasks that workers do on PCs with a mouse that can't be done reliably with a touchscreen.

        Maybe an iPad Pro with its stylus could perform a lot of those mouse-driven tasks, but using the stylus for long periods of time is going to be exhausting and injury-prone. By using a mouse your arm can rest comfortably and allow you to work for long periods of time with minimal effort and no strain.

        • pjmlp 5 years ago

          Android and Windows support mice on tablets.

          • chongli 5 years ago

            Doesn't really matter. Mice are not first class peripherals for those mobile applications.

            Desktop and mobile OSes should remain separate. You don't go around hauling fully loaded semi trailers with a car.

            • WorldMaker 5 years ago

              Touch friendly is mouse friendly.

              We've known about Fitts's Law since the dawn of the GUI and have decades of study on it. It's not any more efficient to need to "headshot" everything you need in an application 100% of the time; in fact it is often rather the opposite: it gets in the way of actual efficiency.

              Mousing through most "mobile" applications is great, whether "first class" or not.

              Desktop and mobile OSes don't need to remain separate, and it's really past time that a lot of super-cramped "desktop apps" got the death they deserved for their decades old RSI problems, accessibility issues, and garbage UX.

              • chongli 5 years ago

                Touch friendly is mouse friendly.

                It's friendly but it's not space efficient. For applications with a huge number of features, a touch UI can't handle them. Touch screens don't have right click, so you can't get context menus.

                It's more than that, though. A touch screen UI for the iPhone makes zero sense on a 32" display. I'd much rather have a true multiwindow, multitasking operating system than that. Really, I wouldn't use a 32" iOS device at all. That's probably why Apple doesn't make them.

                • WorldMaker 5 years ago

                  > It's friendly but it's not space efficient.

                  User studies from the dawn of the GUI continue to harp that user efficiency is inversely correlated to space efficiency. It doesn't matter if an application can show a million details to the individual pixel level if the user can't process a million details or even recognize individual pixels.

                  > Touch screens don't have right click, so you can't get context menus.

                  You don't need "right click" for context menus.

                  Touch applications have supported long-press for years as context menu. Not to mention that macOS has always been that way traditionally because Apple never liked two+ button mice.

                  Then there are touch applications that have explored more interesting variations of context menus, such as slide gestures and something of a return to relevance of pie menus (it's dumb that those never took dominance in the mouse world, and probably proof again that mice are too accurate for their own good when it comes to real efficiency over easy inefficiency).

                  > I'd much rather have a true multiwindow, multitasking operating system

                  Those have never been mutually exclusive from touch friendly. It's not touch friendliness that keeps touch/mobile OSes from being "true multiwindow/multitasking", it's other factors in play such as hardware limitations and the fact that tiling window managers and "one thing at a time" are better user experiences more often than not, and iOS if anything in particular wants to be an "easy user experience" more than an OS.

                  (I use touch all the time on Windows in true multiwindow/multitasking scenarios. It absolutely isn't mutually exclusive.)

                • pjmlp 5 years ago

                  Which is why not only do Android and Windows tablets/2-in-1s support mice, they also can be docked to proper screens.

                  • chongli 5 years ago

                    Sure they can, but why bother? When I use Windows, I use real Windows applications with desktop UIs. The touch UI mobile apps are a joke on a desktop monitor.

                    • pjmlp 5 years ago

                      Surface is real Windows.

      • dasil003 5 years ago

        Very few people are primarily messaging as their job. Even outside developers, designers and other creatives, the majority of people work on some mix of spreadsheets, presentations and traditional docs on a daily basis. I guess you can do a little bit of word processing on a phone but it gets ugly pretty fast.

      • swiley 5 years ago

        It’s definitely a different kind of work.

        In general, smartphone software is built to discourage creative work and focus on either reading or communicating.

        • thinkmassive 5 years ago

          I would expand “reading” to “consumption” because mobile devices are frequently used for audio and video in addition to reading (which is probably more “browsing” than long-form reading).

    • xoa 5 years ago

      I'm genuinely very sorry for missing this comment (9 hours ago as I write this) because I think it's a really important and interesting next area of development. Since this article is still front page though, I hope I'm not too late to have some discussion here particularly since none of the other replies have taken the analysis approach I do.

      If we're trying to predict the future, I think one effective approach to try to not be trapped in the present paradigm is to try to extrapolate from foundations of physics and biology that we can count on remaining constant over the considered period. Trying to really get down to the most fundamental question of end user computing, I think it's arguable that the core is "how do we do IO between the human brain and a CPU?" With improving technology, effectively everything else ultimately falls out of the solution to creating a two-way bridge between those two systems. The primary natural information channel to the human brain is our visual system with audio as secondary and minimal use of touch, and the primary general purpose output we've found are our hands and sometimes feet, with voice now an ever more solid secondary and gestures/eye movements very niche. Short of transhumanism (direct bioelectric links say) those inputs/outputs define the limits of our information and control channels to computers, and the most defining of all is the visual input.

      Up until now, the screen has defined much of the rest, and a lot of a computer can be thought of as "a screen, and then supporting stuff depending on the size of the screen." A really big screen is just not portable at all, so the "supporting stuff" can also be not portable which means expansive space, power, and thermal limits as well as having the screen itself able to be modularized (but even desktop AIOs can pack fairly heavy duty hardware). Human input devices can also be modularized. Get into the largest portable screen size and now the supporting gear must be attached, though it can still have its own space separate from the screen. But already the screen is defining how big that space is and we're losing modularity. That's notebooks. Going more portable than that, we immediately move to "screen with stuff on the back as thin and light as feasible" for all subsequent designs, be it tablets, smartphones, or watches. The screen directly dictates how much physical space is available and in turn how much power and how much room to dissipate heat. And that covers nearly the entire modern direct user computing market.

      Wearable displays, capping out at direct retinal projection, represent a "screen" that can hit the limits of human visual acuity while also being mobile, omnipresent, and modularized. I'm really actually kind of surprised that more people don't seem to think this represents a pretty seismic change. If we literally have the exact same maximalized (no further improvements possible) visual interface device everywhere, and the supporting compute/memory/storage/networking hardware need not be integrated, how will that not result in dramatic changes? It's hard to see how "Mobile" and "PC" won't blur in that case. Yeah, entering your local LAN or sitting at your desk may seamlessly result in new access and additional power becoming available as a standalone box(es) with hundreds of watts/kilowatts becomes directly available vs the TDP that can be handled by your belt or watches or whatever form mobile support hardware takes when it no longer is constrained to "back of slab", but the interfaces don't need to necessarily change. Interfaces seem like they'll depend more on human output options than input, but that seems likely to see major changes with WDs too, because it will also no longer be stuck in integrated form factor.

      WDs definitely look like they're getting into the initial steeper part of the S-curve at last. Retinal projection has been demoed, as well as improvements in other wearables. We're not talking next year, I don't think, or even necessarily the year after, but it certainly feels like we're getting into territory where it wouldn't be a total shock either. And initial efforts like always will no doubt be expensive and have compromises, but refinement will be driven pretty hard like always too. I don't think the disruptive potential can possibly be ignored; nobody should have forgotten what happened at the last few such inflection points.

      > I don't see this happening. PCs are tools for getting real work done. Mobiles are mostly communication and entertainment devices.

      This line of reasoning though is fantastically unconvincing. Heck even ignoring the real work mobiles are absolutely being used for, and given the context of this article, I pretty much heard what you said repeated word for word in the 90s except that it was "SGI and Sun systems are tools for getting real work done, PCs are mostly communication and entertainment devices".

  • snowwrestler 5 years ago

    Maybe the interesting part of the smartphone ARM story is the degree to which Apple has used custom silicon to optimize speed and power for their own specific workloads and software.

    Why couldn't ARM-based servers do the same thing? I understand why a generic ARM-based CPU might not win against a generic x86 CPU at running cross-compiled code in Linux. But what if the server has a custom ARM-based chip that is a component of a toolchain that is optimized for that code, all the way down to the processor?

    Imagine a cloud service where instead of selecting a Linux distro for your application servers, you select cloud server images based on what type of code you're running--which, behind the scenes, are handing off (all or part of) the workload to optimized silicon.

    I don't have the technical chops to detail how this would work. But I think my understanding of Apple's chip success is correct: that they customize their silicon for the specific hardware and software they plan to sell. They can do that because they own the entire stack.

    I think if any company is going to do that in the server space, it would have to be the big cloud owners. No one else would have the scale to afford the investment and realize the gains, and control of the full stack from hardware to software to networking. And sure enough, that is who is embarking on custom chip projects:

    https://www.thestreet.com/opinion/why-tech-giants-are-design...

    So, maybe the result won't be simply "ARM beats x86," but rather "a forest of custom-purpose silicon designs collectively beat x86, and ARM helped grow the forest."

  • scraft 5 years ago

    Not disagreeing, but answering the question of whether all phones are ARM: no, there's Intel too. Source: had to add Intel build support for our Android SKUs to run on said phones. Some Unity stats from about 6 months ago indicated:

        ARMv7: 98.1%
        Intel x86: 1.7%
    
    I think a lot of the Intel stuff has been discontinued, not sure what is actively being developed outside of ARM right now.

    • hawski 5 years ago

      I'm wondering how much of this 1.7% are x86 Chromebooks as they support Android apps for some time.

    • _underfl0w_ 5 years ago

      This is strange. Maybe like a Windows phone? Where does it get those device metrics? < 2% makes me think it might just be that Androidx86 emulator project.

      • pjc50 5 years ago

        I have an Asus Zenfone 2, complete with its "Intel Inside" logo. Not particularly special, but when I bought it, it met the criteria of "gorilla glass + pokemon go for below $150". Nice case too.

      • lagadu 5 years ago

        I remember some android phones using intel atom cpus, IIRC made by asus.

      • robterrell 5 years ago

        There really are android phones running Intel atom CPUs. Can’t recall models off the top of my head but adding x86 support was a thing we had to do.

      • ableal 5 years ago

        There were a few x86 tablets dual booting Android and Windows, mostly from Chinese brands.

  • derefr 5 years ago

    > nearly 100% (or entirely 100%?) literally develop-on-x86-deploy-on-ARM

    There may be people somewhere doing Android/ChromeOS/Fuchsia development on ARM Chromebooks, following the Google model of using a mostly cloud-based toolchain together with a local IDE. There’s none of this happening inside Google itself, though, yet—but that’s just because Google issues devs Pixelbooks, and they’re x86 (for now.)

    But, since Pixelbooks (and ChromeOS devices in general) just run web and Android software (plus a few system-level virtualization programs like Crouton) there’s nothing stopping them from spontaneously switching any given Chromebook to ARM in a model revision. So, as soon as there’s an ARM chip worth putting in a laptop, expect the Pixelbook to have it, and therefore expect instant adoption of “native development on ARM” by a decent chunk of Googlers. It could happen Real Soon Now (hint hint.)

  • pkphilip 5 years ago

    Actually, Linus does not ignore the smartphone space at all. In fact he refers to it by pointing out that people are likely to ONLY use cross compiling if the deployment is to an embedded device (which a smartphone is), because native development on the embedded device may not be possible.

    <quote>End result: cross-development is mainly done for platforms that are so weak as to make it pointless to develop on them. Nobody does native development in the embedded space. But whenever the target is powerful enough to support native development, there's a huge pressure to do it that way, because the cross-development model is so relatively painful.</quote>

  • Fnoord 5 years ago

    Good point: going by Linus' argument, it can still happen after ARM is king of the client market. Well, they're well on their way to doing that, with every client except the desktop being ARM, while Intel is having trouble with 7 nm.

  • rst 5 years ago

    He's also apparently assuming that ARM-based Chromebooks will never be a useful developer environment. I wouldn't take that bet -- a lot of the newer ones will support Linux VMs out of the box well enough to support at least a half-decent development environment (via Crostini). (You can get a Pixelbook with 8GB RAM and a 512GB SSD, if you're wondering about storage space. And while Crostini still has issues with in-VM driver support for things like the Chromebook's own audio and camera, that's stuff that server software wouldn't use much anyway.)

    Between that and the much-rumored ARM Macs, this could turn pretty quickly...

    • bryanlarsen 5 years ago

      I don't think he's assuming any such thing. His argument is simply that ARM can't win until it has a reasonable dev box. He makes no speculation about if/when such a box is coming.

    • tbyehl 5 years ago

      Pixelbook is i5 / i7 tho.

      I have an Acer R13 w/ MediaTek ARM SoC. It's alright, better than the comparables with Intel N-series CPUs, but it ain't no i5.

    • softshellack 5 years ago

      Pixelbook is not ARM. There are lesser-powered, arm-based, crostini-enabled chromebooks. But nothing a developer would want to use yet.

  • disiplus 5 years ago

    but with smartphones you don't have a choice. so it's different.

    • deng 5 years ago

      > but with smartphones you don't have a choice. so it's different.

      Exactly. Linus' point is that Arm has no real advantage in the server space to compensate for the problems with cross-development. That's completely different for smartphones, which is why Arm won that space.

      • pjc50 5 years ago

        When Apple replace Intel with ARM in their laptops, that goes away too.

        (See my argument elsewhere in this thread)

        • pritambaral 5 years ago

          The argument isn't "same instruction set". The argument is "same development & deployment environment", by the logic of which the Apple argument fails because not many people deploy to Apple servers.

          • comex 5 years ago

            So you run a Linux VM, just as lots of Mac-using developers do today. But the instruction set of the VM has to match the instruction set of the host, unless you’re in the mood for slow emulation.

            • pritambaral 5 years ago

              > So you run a Linux VM, just as lots of Mac-using developers do today

              I hear far more make do with just homebrew.

              > unless you’re in the mood for slow emulation

              I run an embedded OS (made for a quad-core ARM Cortex-A53 board) on both Real Hardware and on my ThinkPad (via systemd-nspawn & qemu-arm). I found (and confirmed via benchmarks) the latter to be much faster than the former — across all three of compute, memory, and disk access.
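
              For anyone wanting to try the same trick, the moving parts are just qemu's user-mode emulation registered via binfmt_misc plus a foreign-arch root filesystem. A rough sketch for a Debian-ish x86_64 host (the package names and rootfs path here are illustrative, not my exact setup):

                  # user-mode qemu + binfmt registration, plus systemd-nspawn itself
                  sudo apt install qemu-user-static systemd-container
                  # run a command (or a shell) inside an aarch64 rootfs
                  sudo systemd-nspawn -D ./arm64-rootfs uname -m   # prints aarch64 if binfmt is wired up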

        • mattnewport 5 years ago

          Apple is a small percentage of the laptop market.

          • MBCook 5 years ago

            But their market share with developers is much higher than in the general consumer market.

            • mattnewport 5 years ago

              Possibly, it does seem that way for web dev at least. There's plenty of programmers out there (the majority?) not doing web dev and never touching Macs however. In a 20 year game development career I've never had cause to use a Mac for work purposes. Perhaps the share of developers using Macs as their primary development machines exceeds their 10% market share of laptops but I doubt it's a majority.

    • Tehnix 5 years ago

      I’m not sure if this is what you were implying, but I don’t know of any x86 processors that can compete with the Arm processors that are in use, on power consumption to performance ratio. Take e.g. Apple’s A12, which competes with their MacBooks in performance, and assuredly draws much less power.

      • kllrnohj 5 years ago

        You haven't been paying attention. In order to go faster ARM started using more power. A lot more power.

        Turns out power usage was never an ARM vs. x86 thing, it was purely a "how fast do you want to go" thing. ARM started at the "very slow" end of the spectrum which made it a good fit for mobile initially since x86 didn't have anything on the "very slow" end of things. By being very slow it was very low power. But then the push to make ARM fast happened, and now ARM is every bit as power hungry as x86 at comparable performance levels.

        The power cost is for performance. The actual instruction set is a rounding error.

      • deng 5 years ago

        > I don’t know of any x86 processors that can compete with the Arm processors that are in use, on power consumption to performance ratio

        Not anymore, but there was a time when x86 was (barely) able to compete in that area and there were some x86-based smartphones and tablets. But it was too little too late: x86 already was a niche. Developers absolutely had to support ARM, but x86 was optional, so many apps were not available for x86, and that was pretty much it for those devices.

      • bryanlarsen 5 years ago

        "assuredly draw much less power."

        The Macbook uses a 14nm 4.5W m3 with 1.5 billion transistors.

        The iPad uses a 7nm 12W A12X with 10 billion transistors.

    • xoa 5 years ago

      > but with smartphones you don't have a choice.

      You don't have much choice now, sure, but it's not as if there weren't any efforts at x86 smartphones (like the ZenFone). Nor is it as if there wasn't a long run-up of phones leading to the modern smartphone either. And even then, how is this not directly relevant to the case of x86?

      I mean, we're directly doing a comparison to the RISC/MIPS/etc era yeah? Couldn't back then someone say "well but with PC you don't have a choice, so it's different"? x86 got heavy traction on the back of WinTel, then moved up to bigger iron, which didn't really fight hard in the lower end lower margin space. Does there really seem to be no deja vu with that vs ARM gaining heavy traction in iOS/Android/embedded then moving up to PCs and servers, where Intel/AMD didn't really play in the lower end lower margin space? There was a period with plenty of choice in servers, but then x86 won.

      And again it's not as if someone can't come up with compelling arguments; x86 has some real moats even beyond pure performance. There is enormously more legacy software for x86 for example, and the ISA for it will be under legal protection for a long time to come which complicates running it on ARM. But it's hard to say how much that matters in much of the cloud space, particularly if we're imagining 5-10 years further down the line. The x86 takeover didn't happen overnight either, and the first efforts were certainly haphazard. But momentum and sheer volume matter. It just seems like something that needs to be addressed at any rate, more deeply than you have and certainly more than Linus did.

      • AstralStorm 5 years ago

        Most legacy software needs an emulator anyway. X86-64 and OS libraries are not sufficiently similar.

        Try running a Windows 95 era application on Windows 10. You can even have problems with Windows XP era stuff.

        And server space in general does not do legacy without keeping everything intact. The only real issue is lack of ARM developer PCs.

        • smush 5 years ago

          Windows does make a huge attempt to make stuff backwards compatible, however. I run a copy of Cardfile copied from Windows NT4 on Windows 10 just fine.

  • lunchables 5 years ago

    I don't think that in any way contradicts his position

    >And the only way that changes is if you end up saying "look, you can deploy more cheaply on an ARM box, and here's the development box you can do your work on".

    Sure, as soon as these merge and you have a development platform as productive as a desktop computer that allows you to natively build for ARM, then absolutely, it could displace x86. And maybe when (if) the two platforms really merge that could be a real possibility.

  • gwbas1c 5 years ago

    And, most importantly: Many server-side languages don't compile to x86 / x64! They are either interpreted, or compile to bytecode!

    And speaking of x64...

    > It's why x86 won. Do you really think the world has changed radically?

    No, x86 is losing to x64. And at some point another instruction set will supplant x64.

    • nly 5 years ago

      x86 didn't "lose" to x86-64, it just extended the 20 year compatibility story that goes all the way back to the 386.

      Intel tried "another instruction set" (Itanium) and nearly lost the market to AMD (AMD64)

      • wmu 5 years ago

        Intel also had the KNC instruction set (AVX512-like) on Xeon Phi (these CPUs were available on PCI cards). They abandoned it in favour of good old x86. One of the important factors was the difficulty related to tooling, especially the necessity of cross-compilation.

  • syrrim 5 years ago

    He's shuffling smartphones in under embedded I believe.

    His thesis is that if you want a platform to take off, start shipping developer boxes of the platform. So mobile and pc will merge when and only when you can do all your development on a mobile platform.

  • NicoJuicy 5 years ago

    Java should work in everything and iOS only needs to support their own hardware.

    I don't think ARM can rule with Java ( that already supports it) and Swift/c ( limited hardware).

  • vbezhenar 5 years ago

    You don’t have a choice with smartphones. So it’s not very relevant. I will write for Itanium if necessary. But given a choice, x86 wins.

  • robot 5 years ago

    power matters a lot on a phone but not that much on the server

    • pjmlp 5 years ago

      Sure it does, someone has to pay the heating bill.

enragedcacti 5 years ago

I disagree with Linus for a couple of reasons. The main one being that not every service in a product needs to be running ARM for it to be useful. There is nothing preventing heterogeneous solutions in the cloud, and if third parties vet their code on ARM then deploying your DBMS on ARM and your web server on x86 (or whatever services most of your business logic is in) is totally valid for cost or performance reasons.

Secondly, it seems likely that there will be ARM MacBooks by 2020, that kind of instant market penetration for arm in the dev space might mean the exact opposite of what he's saying; why would I deploy on x86 when all of my development is on my ARM machine already?

  • rythie 5 years ago

    Your second point is just reinforcing what he said. If MacBooks move to ARM, they will have solved the developer piece. Since so many developers use Apple stuff, ARM will be native and servers would follow.

    In the 90s you could see a shift from people having a SPARC workstation in their office for service development, to just using an x86 PC with Linux or Windows on it. Then after a while, developing for Linux/x86 and deploying to Solaris/SPARC made little sense, so you just put it on Linux/x86 in the end. The thing is, maybe 95% of things just work and you spend all your time sorting out the other 5%.

    • tinus_hn 5 years ago

      At least for now, the ‘move to ARM’ is just wishful thinking by developers who think they can shovel phone apps onto the desktop. Unfortunately phone apps on the desktop stink so that’s not going to be a big thing.

      • scarface74 5 years ago

        Moving to ARM has nothing to do with moving phone apps to desktops.

        Almost all iOS apps already run natively on x86 today. When developers use the iPhone simulator they are running natively compiled apps linked against an x86 version of the iOS framework. If all Apple wanted to do is allow iOS apps to run without any usability changes on Macs that would be easy (and ugly).

        There are already x86 based Android devices.

    • stormking 5 years ago

      > Since so many developers use Apple stuff, ARM will be native and servers would follow.

      I think you overestimate the significance of some Hipster Developers. And I type that on a Macbook Pro as well.

  • stunt 5 years ago

    > Secondly, it seems likely that there will be ARM MacBooks by 2020, that kind of instant market penetration for arm in the dev space might mean the exact opposite of what he's saying; why would I deploy on x86 when all of my development is on my ARM machine already?

    Well, that is not the exact opposite of what he's saying! He actually mentioned that. That is merely one of his points and it is true.

    Linus said: "Without a development platform, ARM in the server space is never going to make it."

  • Spooky23 5 years ago

    Apple hasn’t touched the Unix parts of MacOS in a long time. I’d assume whatever they ship will be some sort of hybrid that may not make developers happy.

    • pjmlp 5 years ago

      UNIX is not a synonym for developers.

      Developers on the Apple eco-system care very little for those UNIX parts.

  • tylerl 5 years ago

    Yeah... 2019 is the Year of ARM On The Desktop. As is 2018, 2020, 2021, 2022...

    It's totally happening for reals this time. Just look at .. like .. Raspberry Pi...

  • reacharavindh 5 years ago

    It would be genius of Apple to simply own this phenomenon.

    Want an iPod/iPhone like phenomenal opportunity?

    Go all in on ARM chips and ship out MacBooks with next gen performance. Alongside, develop server grade ARM chips and make it easy for the army of devs wielding the shiny Macs to deploy straight to it. Impress with performance - you get to take the cake and eat it too.

    I doubt an operations guy like Tim Cook will understand such a strategy.

justaaron 5 years ago

...except that ARM laptops and desktops are the next wave: ARM already dominates mobile devices and tablets, so it's sneaking up on Linus from behind.

Additionally, Linux is the server platform, abstracting the ISA to a large degree, particularly with regard to what most people consider web application development, tooling, etc. Docker and Kubernetes serve as this abstraction layer for many, as well.

This all will matter in the coming rise of Risc-V devices, as this ISA begins to eat parts of the ARM empire.

Linus would know a thing or two about challenging x86 (Transmeta), and this probably shows in the emotional wounds on display here. I don't think his fear holds for all time, nor for the coming decade.

  • yalogin 5 years ago

    He is right though, from a historical perspective. All he is saying is x86 won over the server space because it's where people develop. If the ARM development platform becomes a thing, then he will be proven right again.

    I have a slightly different take. Arm on the server has a chance now because cloud and thin clients are increasingly common and so even your "at home" machine, as he calls it, could be in the cloud, the same arm machine used for deployment.

  • linuxftw 5 years ago

    From the post:

    > I can pretty much guarantee that as long as everybody does cross-development, the platform won't be all that stable.

    He's saying that until ARM platform arrives on desktop, it's unlikely to be successful in the server market. He's not making any assumptions about whether or not PCs are going to turn to ARM.

chubot 5 years ago

I feel this is why Ubuntu is one of the most popular server distros.

Ubuntu is pretty unsuited for servers IMO. It's just complex and error-prone, and the packages aren't always server-quality.

But it's great for the desktop IMO, because Canonical actually tests against real hardware for you.

So the fact that people use Ubuntu for desktop/laptop development makes it popular in the cloud, which I've always felt was unfortunate.

You could do some extra work to test your web app on a more minimal distro or on a BSD, but why bother? Ubuntu works to a degree, so you save that step. Same with x86.

  • doublepg23 5 years ago

    Can you expand on this?

    I agree that I use Ubuntu on server because I use it on the desktop, that makes sense.

    However, when I installed Ubuntu Server 18.04 recently it was delightful. There was this simple feature they added which automatically pulled down my ssh key from my GitHub account. During the install it asked me to enable the semi-recent kernel livepatching (and old-fashioned unattended-upgrades), and it even suggested plexmediaserver as a snap package, which was my goal.

    Of course installing actual servers is a bit archaic in the age of docker, but it made me feel like I made the right choice of OS.

    • chubot 5 years ago

      I guess I'm making an argument around predictability and stability. To be honest I haven't used Ubuntu as a server in several years. Maybe they have improved things.

      But I think they can mostly only improve things "on top", not the foundations. Patching over problems by adding layers on top generally isn't great for stability, and is bad for debuggability.

      I'm comparing Ubuntu to BSDs, where there's actually a manual, and files are put in consistent places. (as far as I understand, I have less experience with them.)

      It's also most of the same reasons that Docker switched its default image from Ubuntu to Alpine some years ago. Alpine is just smaller and makes more sense. Ubuntu does a lot but it's also sprawling and inconsistent.

      Off the top of my head:

      - Starting services is a weird mix of init scripts and Upstart. And now they're switching to systemd. When they switch they don't update all the packages. There are some weird compatibility shims.

      - When you apt-get install apache2, it actually starts the daemon. (This is a problem with both Debian and Ubuntu.) This is bad from a security perspective. A lot of people don't know what's running on their systems in practice and what ports are open, and then they have to set up an extra firewall, which is more complexity. (One way to opt out of the auto-start is sketched after this list.)

      - the file system is generally a mess, e.g. /etc. There seem to be multiple locations for everything, e.g. bash completion scripts. I think this is a function of the packages being old and patched over.
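
      As a sketch of one way around the apache2 auto-start behaviour mentioned above: Debian and Ubuntu consult /usr/sbin/policy-rc.d during package operations, and an exit code of 101 tells them not to start services (the paths and package name here are just illustrative):

          # deny service starts during package install/upgrade (101 = action forbidden)
          printf '#!/bin/sh\nexit 101\n' | sudo tee /usr/sbin/policy-rc.d
          sudo chmod +x /usr/sbin/policy-rc.d
          sudo apt-get install apache2   # installs, but does not start the daemon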

      In general the documentation feels scattered and incomplete. The common practice seems to be googling stuff and pasting things in random files until it works. And then automating that with a Docker container.

      That's not traditionally how servers were administered pre-Google :) System administration knowledge / quality seems to have taken a nosedive with the rise of the cloud and cheap hosting. I'm not saying that it's all bad, but it's a downside.

      The great thing about Debian and Ubuntu is that there is so much software packaged for it. I think that's generally the reason that people use it. There is a network effect in the package ecosystem.

      • e12e 5 years ago

        Fwiw Ubuntu server has gotten a lot better than it was. Initially it was more like a little bloated, better tested, sometimes oddly configured "Debian testing" - now it's more of a proper Debian derived distro, with a fairly decent "server" version.

        You should no longer have to expect problems doing an in place upgrade from lts to lts release of Ubuntu (which has been true for Debian stable as long as I can remember).

        As for running packaged versions of things like apache and them autostarting... I see your point, but I don't think it's a weakness - it's more of a difference of opinion.

        One thing canonical seem to be doing right (which I initially thought of as a bad case of NIH) is lxd/lxc, juju and zfs integration. I've yet to play with it seriously, but it does come with a lot of shiny stuff out of the box. That said - it appears (light) containers are winning the mindshare - and I can see how an email server as a container/appliance might be preferable to a custom scripted lxc "VM"/image - if you get the upstream supported container, you likely get some help in keeping the stateful data out of the container.

        The benefit/problem of lxc/lxd is that you can just keep working with the "VM"s as virtual servers.

        Anyway the end result is a lot like modern freebsd jails - in a good way.

  • Sammi 5 years ago

    Same reason Nodejs got popular. Same reason Electron is popular now. It's what people know and have at hand. People always follow the path of least resistance. This is a first principle.

deng 5 years ago

He is right. I'm still waiting for that ARM-based laptop with awesome battery life, which can run a proper OS with good speed and hence is not just a toy. Apple switching to ARM won't make much difference since I gather it will only run OSX, which will probably be pretty much iOS at that time. Yes, you will probably be able to run Linux on that thing, but almost nobody will actually do it because it makes much more sense to buy a cheaper x86-based machine instead. Yes, there are ARM-based laptops running Windows, and they are a joke. For instance, c't just tested the Lenovo Yoga C630 WOS, and it's a disaster, plain and simple.

  • AstralStorm 5 years ago

    If Apple makes such hardware, it will be just a matter of time to port Linux to it. Probably easier than getting it to run on yet another weird internal architecture of a smartphone.

    And you can run Linux on that Yoga. It's just Windows for ARM that is the disaster...

    • jammygit 5 years ago

      They won't be selling the hardware, and macbooks don't run linux very nicely lately

    • e12e 5 years ago

      > If Apple makes such hardware, it will be just a matter of time to port Linux to it.

      I'm guessing it'll be as easy to run Linux on an arm MacBook as it is to run Linux on an iPad.

  • epall 5 years ago

    I love my ARM-powered Chromebook. True, it can’t really do native development, but ARM clients aren’t just coming. They’re here. And wow does it have amazing battery life and an unbrickable OS.

tytso 5 years ago

War story time. File based encryption[1] (which used ext4 encryption) was supposed to have been released with Android M. It slipped by a full year because of this issue which Linus identified. At the time, we didn't have xfstests ported over so it could run in an Android/Bionic environment. So the only way I could test it was by taking the SOC kernel to which we had backported ext4 encryption, bashing it hard enough so it would compile on x86 again, all the while cursing the SOC vendor who had added "value added features" to their SOC kernel in a completely broken way such that the kernel would no longer build on x86, let alone boot on it, and then test the ext4 encryption feature backported to the Android kernel on x86 using kvm-xfstests (this was before gce-xfstests[2]). Unfortunately there was an ARM specific bug that we couldn't catch while testing on x86, and it showed up only when we were dogfooding the feature on two testers' phones. We couldn't reliably repro it on our development phones before the ZBB deadline. So we ultimately ended up having to pull the feature and wait a full calendar year for Android N before we could launch it.

As a result, a colleague developed android-xfstests[3] so we could actually do the necessary testing on an ARM platform. If we had this at the time when we were first trying to launch File Based Encryption for Android, it would have avoided a huge amount of hair pulling.

[1] https://source.android.com/security/encryption/file-based [2] https://thunk.org/gce-xfstests [3] https://thunk.org/android-xfstests

thecompilr 5 years ago

I have to strongly disagree with Linus here. While he got the history up to this point right, in the present it is very easy to develop for multiple platforms at once.

CI platforms make it super easy.

Cloudflare made a big effort to make all software run on x86 and aarch64, and it wasn't that hard at all. Most things today work out of the box.
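
One hedged sketch of what that looks like day to day, assuming Docker with buildx is available (the image name is made up):

  # build and push the same image for both architectures from a single x86 box
  docker buildx create --use
  docker buildx build --platform linux/amd64,linux/arm64 -t example/app:latest --push .
  # non-native targets are built via QEMU/binfmt emulation on the host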

Many things that don't work were not very robust to begin with, and fixing them to work on both platforms only made the software better as a whole.

Cloudflare could move to ARM servers on a day's notice, and would already have done so if Qualcomm hadn't closed the Centriq shop.

The only real advantage I see for Intel today is AVX512, which can't be beat for some workloads; SVE would be able to compete for some of those workloads, but not all. However, most workloads don't use vector processing instructions anyway. Still, I hope ARM improves their SIMD architecture further.

As for the ALU performance difference, it is very easy to bridge. As evidenced by Apple's CPUs and Qualcomm's Centriq, simply having four ALU units gets you to about the same ALU performance as Intel.

Branch prediction, cache performance and the interconnect are still much better on Intel processors. However, Intel's branch prediction performance has already peaked, so the difference can only get smaller with time.

  • jlouis 5 years ago

    The CI angle was my first bet as well.

    You build and test in the cloud nowadays. Heck, you might develop on MacOS, and deploy on Linux. There are places where it matters, SIMD processing for instance, or low-level kernel development. But if you are on any kind of VM system, or ecosystem with a good compiler history, you are likely to have a smooth ride.

    If, in addition, the price point of the ARM machines is at 2/3 of the x86-64 ones, you are in trouble. x86 had another important point up its sleeve: price. And I might expect this was a confounding variable in Linus' argument. Not only did you have the x86 at home. It was also like 1/8th of the price of an Alpha, PA-RISC or Sparc machine. And at the time, x86 was dog slow compared to Alpha, and its I/O was laughable compared to HPPA/Sun.

    If you have a sizeable invoice at AWS, and you can shave the EC2 price by 2/3rds, you are going to have a lot of wiggle room to make sure your system runs on that aarch64.

ken 5 years ago

> That's bullshit. If you develop on x86, then you're going to want to deploy on x86, because you'll be able to run what you test "at home" (and by "at home" I don't mean literally in your home, but in your work environment).

I know lots of developers who work on macOS, and deploy to Linux. Back when Mac was PowerPC, I knew lots of developers who worked on PowerPC, and deployed to x86. Or worked on 32-bit, and deployed to 64-bit. Or worked on single-core, and deployed to multi-core. Or green threads to kernel threads. Or big RAM to small RAM. Or the opposite of all of these.

This disaster he anticipates just doesn't seem to be a problem in practice. It's nice when your server is exactly the same as your workstation, but it's never been required. Even today, I have x86-64 on my desk and x86-64 on my server, but they're not the same CPUs, and they don't have the same features (or core count, or RAM, or OS, or ...).

> Which in turn means that cloud providers will end up making more money from their x86 side, which means that they'll prioritize it, and any ARM offerings will be secondary and probably relegated to the mindless dregs (maybe front-end, maybe just static html, that kind of stuff).

Are databases also "mindless dregs"? There's a lot of Postgres/MySQL/Mongo/... instances out there, and even if you're worried about the costs of porting, you only need to port those once (which AFAICT is already done for all of the major ones).

If my database service told me they could cut my price by moving my Postgres instance to ARM, I'd click that button in a heartbeat. I have zero fear that anything would be fishy due to it being a different architecture than my workstation.

> Do you really not understand? This isn't rocket science. This isn't some made up story. This is literally what happened, and what killed all the RISC vendors, and made x86 be the undisputed king of the hill of servers, to the point where everybody else is just a rounding error.

I've worked with many companies, and in every case, going with x86 was 100% because of price. We didn't avoid SPARC or POWER or Itanium servers because of endianness or any other technical reason. It was just a lot more expensive.

Likewise, we switched the servers to 64-bit when it was cost effective, not at the same time that our workstations switched.

solatic 5 years ago

I disagree with Linus here, in large part because modern system architectures are so distributed. There are so many parts of modern deployments which are not developed in-house anymore. Databases, organizational productivity software, monitoring systems... the list goes on.

So let's say the developers of PostgreSQL or Prometheus or JIRA or Mattermost or any one of numerous others were to come to their audience and say, "we support ARM 100%, you can run our software on ARM in production with no issues, and in fact it's the favored deployment model because it'll cut a significant slice off your operating costs." Does Linus seriously think that, what, customers aren't going to run ARM servers because they can't run the vendor's binary on their development machines?

Maybe there's a chicken-or-the-egg problem here. But just off the bat, PostgreSQL and Prometheus are both supported and built for ARM. There are non-official binaries available for many other projects. Developers are using cheap Raspberry PIs as local desktop development platforms. The more development the ARM ecosystem sees, the easier it'll be for enterprises to justify moving away from x86.
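
(A quick, hedged way to check which upstream images already ship ARM builds, assuming Docker is installed; manifest inspect may need the experimental CLI flag on older Docker versions, and the image names are just examples:)

  docker manifest inspect postgres | grep architecture        # lists amd64, arm64, ...
  docker manifest inspect prom/prometheus | grep architecture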

  • readittwice 5 years ago

    > Developers are using cheap Raspberry PIs as local desktop development platforms.

    IMHO this is still a problem: it is easy to get RPi-style devices, but for development a more powerful device would be great, and that is much harder to get. Sure, cross-compilation works but is usually tedious to set up and work with.

    I've had the chance to work on one of these powerful ARM servers for some time. I was connected via ssh and mostly working with tmux+vim, I could compile natively with all these available cores and memory - development was a breeze.

    Nevertheless there were some pain points compared to x86: Often no ARM64 binary packages, except for the packages provided by the distro. No precompiled binaries. Software wasn't working/building e.g. because Fedora on ARM64 uses 64K page size. perf didn't support as many performance counters as on x86. Sometimes I would've really liked to use rr (the reverse debugger) but it only supports x86. Pretty sure I've encountered more pain points that I don't remember and also I think I didn't discover all of them.

    Development is almost exclusively done on x86. So it is only natural that x86 is the most tested and optimized architecture. This wouldn't be a problem if your offering were much better/cheaper in some way, and although I think ARM servers can compete in certain areas, it doesn't seem to provide enough benefit to be worth the trouble. I agree with Linus that they should be focusing on developers and software and let the machines bubble up into the server segment.

    • StillBored 5 years ago

      BTW: Fedora switched back to 4k pages a while back. The 64k page thing is one of those areas where ARM should have an advantage over x86, but so much software fails to work with 64k pages that it's not yet worth it.

      So, give Fedora a try on aarch64 again; with F29, it's almost at 100% package parity with x86 once you remove packages specific to x86. ARM/Linaro/etc have been hard at work for the past few years getting the long tail of compilers/libraries working enough that all the higher level stuff has a chance to generally work.

      The perf thing is a problem because there are a lot of microarch specific counters, but frequently they don't get published by any particular CPU vendor. So, you're left in the dark about them. The best solution at the moment is to call your vendor and demand a more complete list.

    • icedchai 5 years ago

      There are more powerful ARM SBC boards out there. Take a look at Pine64.org for example. They have a Rock64 board that supports up to 4 gig of RAM with quad core CPUs. They are under $50.

      • readittwice 5 years ago

        I personally would like to see an ARM competitor to the Intel NUC. 4GB and 4 cores sounds great but might not be enough for development or certain benchmarks. Something like 16GB RAM and at least 8 fast cores with a fast SSD should be enough. It seems that you can only get ~50$ devices or >2000$ servers (if they even sell single ones). I guess we have to wait for Apple to sell a MacBook with their own cpu.

  • nabla9 5 years ago

    The reality of multiple systems is that their stacks are always slightly out of sync even with full support and commitment. The most used system is always updated and fixed first.

gwbas1c 5 years ago

> and how stupid your argument is

Didn't Linus recently go to sensitivity training or something? I think he missed out on the bigger picture.

It seems like his argument is more about shaming anyone who disagrees with him, instead of expressing an opinion.

  • nabla9 5 years ago

    Calling an argument people make stupid is not insensitive or offensive (in the "love people but hate their ideas" sense).

    People may be offended if they identify with their ideas too much, but that's oversensitivity on their part.

  • jzl 5 years ago

    Was about to say the same thing. Seems like he's slipped right back into his long-established form of discourse. It's always entertaining to read, I'll say that much.

majewsky 5 years ago

So RISC-V CPU designers should aim for the developer market then? First win the hearts and minds of developers, then everything else will come.

  • nostrademons 5 years ago

    No. It's unlikely that they'd be able to anyway - products (like, say, a Macbook Pro or ThinkPad) get targeted at developers. RISC-V is an instruction architecture, which gets used by a chip designer, which gets sold to a computer manufacturer, who does the actual marketing to end users, so they're about 4 links removed up the value chain.

    What they should do is make RISC-V useful for computing at the edge - applying computation to areas where it's previously been infeasible (eg. smartwatches, wearables, RFID, drones, vehicles, etc). Developers go where the end users are. If a new end-user market opens up that suddenly needs a lot of software, developers will buy the same ISA that their customers are using. And then there will be strong pressure to run the other software that developers write (eg. servers, as Linus notes) on the same ISA to prevent cross-development.

  • amq 5 years ago

    But if there is nothing to develop for... Chicken and egg problem.

    However, I think that Windows on ARM will soon become a real thing; this could bring the developers, and it will also help ARM devices spread among Linux users.

    Regarding what Linus wrote, I do not buy it. There is so much stuff that can run without cross-compilation: JS and other scripts, Java, C#. Also look at things like Android and iOS: you are already happily developing for ARM, and you can develop for servers with the same approach.

    • rjsw 5 years ago

      I do buy Linus' argument, you need to be able to do native development.

      My ARM64 development machine is a Pinebook, ARM32 a Cubietruck. Both have only 2GB of RAM and could really do with more. At least the Cubietruck has SATA so mine has a SSD.

      • scarface74 5 years ago

        iOS developers develop on x86 computers and deploy to ARM computers all of the time.

        But, most developers don’t need to do native development.

      • sam_lowry_ 5 years ago

        I am pretty happy to develop on ODROID-C2 as well.

        But we are a minority.

      • pjmlp 5 years ago

        With rich language runtimes, as app developer I don't care if my application instance runs on top of an OS, hypervisor or just plain bare metal.

        The only devs that need to care about the native part are the ones writing the glue between runtime and hardware.

        • rjsw 5 years ago

          Somebody has to maintain the runtimes, they are not typically set up to be cross-compilable, I'm doing JVM development on my ARM systems right now.

          • pjmlp 5 years ago

            Apparently you stopped reading at the first sentence.

            • rjsw 5 years ago

              I'm not working on "glue between runtime and hardware".

              • pjmlp 5 years ago

                If you are maintaining a JVM runtime for ARM or writing JNI stuff then it looks to me you are doing just that.

                Otherwise you completely lost me there.

    • deng 5 years ago

      > However, I think that Windows on ARM will soon become a real thing, this could bring the developers, it will also help the ARM devices spread among Linux users.

      Windows on ARM has been a real thing for years, and it is horrible. Nobody in his right mind would use this for development.

      • mgamache 5 years ago

        Windows ARM (Windows RT) has been a mess, but Microsoft is committing more resources to make proper Windows + ARM a real thing. Will it work? I don't know, but they are trying, probably because of Apple & AWS moves toward ARM.

      • WorldMaker 5 years ago

        Windows 10 on ARM has come a long way in recent builds. The SKUs have many fewer differences from x86/x64 Windows 10.

        On top of that, x86 emulation on it is now stable, including full Win32 emulation, and reports are coming in of even doing things like running Steam and old PC games on ARM devices.

      • pjc50 5 years ago

        Only because Microsoft deliberately chose to cripple it, a decision which they are gradually reversing.

  • platform 5 years ago

    Yes, very much so. Having C++, Go, Java, Rust, Python and F# IDEs running on RISC-V.

    Allowing developers to debug and experiment without relying on centralized hosting providers would be a huge boon to RISC-V.

    Early adopters will be developers, then their family/friends circle. Then there will be a RISC-V-first movement (mirroring the mobile-first movement), and there it will start...

    > So RISC-V CPU designers should aim for the developer market then? First win the hearts and minds of developers, then everything else will come.

  • epx 5 years ago

    Absolutely yes!

  • dkersten 5 years ago

    I guess if nobody wants to develop for it, then it becomes a more difficult business proposition too.

pdpi 5 years ago

The fact that my local CPU is a different ISA from the server is barely a consideration after I've already had to account for the fact that the database is local instead of remote, that I'm using PCIe-attached storage instead of a network-mounted volume, that my local workload is wildly different from the deployment environment's... Ultimately, cross-development is much more meaningful as a software problem than it is as a hardware problem — but, given the popularity of macOS as a development environment, it seems clear that we're happy developing on something unix-y and deploying to another brand of unix-y something-or-other.

Linus is then ignoring the single biggest factor in this equation: performance-per-watt. ARM offers better performance/watt numbers than anything Intel can provide right now. Recent offerings on the server space suggest ARM is managing to scale up without sacrificing too much on that front, and Intel doesn't seem to be keeping up.

If you're operating a data centre, power supply is the single hardest limit to your capacity to scale up. Putting up another building at a pre-existing site is cheaper and easier than finding a way to pipe more power into it, so it doesn't even matter that much that servers are individually less powerful.

Google is investing in ARM, Facebook is investing in ARM, and I can only imagine several other big players are looking into it too. I wouldn't write it off this easily.

walrus01 5 years ago

Linus is saying basically the same thing I said in HN comments some time ago. This was regarding the Qualcomm Centriq server processors. I asked where I can buy right now a standard ATX motherboard + CPU combo that I could mess around with at home. For any sort of reasonable price. The answer was deafening crickets.

As compared to how easy it is to buy a decent quality $119 motherboard + $179 AMD Ryzen quad core CPU.

Not only was it literally impossible to buy a motherboard + CPU for any reasonable price in a midtower desktop PC format, it was also near impossible to buy a 'cheap' 1RU server or similar, as compared to the $400 Dell R620 with two-socket, 8-core-per-socket older Xeons I could buy on eBay.

There is a very real chicken-or-egg problem with economies of scale. If the top ten Taiwanese based motherboard manufacturers don't think it will make money to build boards for it, they just won't. Super low quantity of 'evaluation' boards that you have to 'contact a sales rep' to acquire are not going to gain widespread adoption.

overgard 5 years ago

Wouldn't the potential counterargument be that ARM desktops/laptops could potentially still come about? Probably more likely in the laptop space. I mean, I think about something like the latest Raspberry Pi, and yeah, it's slow as heck right now but it seems like something like that could gain a lot more ground if it was higher priced but gave you performance like an iPhone X or latest Galaxy. (And btw: from what I've read the iPhone X costs about $400 in parts, and $110 of that is in the OLED screen, and then there's other costs like sensors that you might not need. So let's say you get the cost of that down to $250 and make the price like $500. That seems like a competitive machine.) We already saw Apple (successfully) transition from PowerPC to x86; it seems totally plausible that they could do the same thing with ARM at some point down the line, or someone could build a Raspberry Pi-style machine with a lot more horsepower.

throw2016 5 years ago

Lots of users mucked around with ARM boards years ago, until most realised this wasn't something ARM was interested in: there were no drivers, and those that existed weren't optimized for performance. It was all a hack, with open source developers being kicked like footballs from ARM to SoC vendors and back. Interest waned.

A NUC can be had for about $120 that sips power and has full hardware support for storage, pcie, graphics and is seamlessly compatible with the entire software ecosystem.

ARM is more comfortable working with vendors (SoC, router, NAS vendors) than supporting an open platform with access to optimized drivers and off-the-shelf parts. Thus the entire mobile ecosystem is closed and tightly controlled. Even early devboard makers like Odroid have moved to an Intel platform for their latest N1 dev board.

  • imtringued 5 years ago

    I've come to the conclusion that neither ARM nor the actual SoC vendors are interested in desktop or server hardware. The majority have canceled their server projects or sold them off.

lsllc 5 years ago

I build Go, Rust, C/C++ code for ARM/ARM64 (and MIPS) and it's completely transparent. 99.9% of the code just works, occasionally I need to worry about some details, but just once in a while.

Go definitely is the easiest, it's built in:

  GOOS=linux GOARCH=arm go build

Rust is almost as easy to cross build (as long as you have the right GCC toolchains installed!):

  cargo build --target aarch64-unknown-linux-musl
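
For completeness, the one-time Rust setup is roughly just adding the target and telling Cargo which cross-linker to use; a sketch assuming a Debian-ish host, where the package and linker names may vary and some targets want extra flags:

  rustup target add aarch64-unknown-linux-musl
  sudo apt install gcc-aarch64-linux-gnu        # provides a usable cross linker
  # in .cargo/config.toml:
  #   [target.aarch64-unknown-linux-musl]
  #   linker = "aarch64-linux-gnu-gcc"
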
Sooner or later, Intel's hegemony on the server CPU market will end and it'll likely be some combo of ARM and maybe RISC-V (not sure about POWER). Cloud vendors like AWS and Google are looking closely at the margins and, as the AWS a1.* [Graviton] shows, they're even willing to make their own CPUs if necessary.

  • ithkuil 5 years ago

    That's all fine and dandy for the software you build, but then if it happens that the easiest way to get some job done is to extend an existing docker image or using an existing docker image in a multistage build, you might trip over the fact that the authors of those other images didn't care enough about your arch of choice to cross compile their stuff.

    • lsllc 5 years ago

      I've built & run docker on arm and used it to build my own arm-based docker containers from scratch, but yeah, 99.9% of containers are intel only.

      Having said that, docker is perfectly usable on arm, just no one bothers to.

      • ithkuil 5 years ago

        Yeah docker was just an example of modern binary dependencies. We tend to forget that kind of pain since many modern languages strongly favour source dependencies (every user of a library ends up compiling it from sources).

gok 5 years ago

Native development is interesting to kernel developers much more so than the average server software developer.

Most server code is CPU architecture independent, and largely operating system independent. It's written in Python, Java, JavaScript, C#, PHP. Most of it is also already cross-developed on macOS or Windows, then deployed on Linux. The fact that all 3 run x86 doesn't make this much easier. Increasingly it's also targeting server-specialized hardware that isn't practical to carry with you (8 GPUs won't fit in a laptop, and Google won't even let you buy their TPUs)

If CPU-native development is important, that's actually a good sign for ARM servers. Laptops and desktops are increasingly going to be ARM powered over the next few years.

rvwaveren 5 years ago

Interesting theory. I wonder what will happen when MacBooks start shipping with >A12 CPUs [1]. Will we see more ARM-based servers then?

[1] https://en.wikipedia.org/wiki/Apple_A12X

  • Shivetya 5 years ago

    I am so not looking forward to Apple leaving the x86 space behind. I really do not want to wait years for software to catch up nor leave behind the options I have today for "an improved user experience" which is just walling the desktop off too.

  • abitoffthetop 5 years ago

    It literally won't matter, as Linus explained, because the infrastructure will be x86. This is what he's saying. Unless a new paradigm emerges on drastically different tech, ARM has too far to go to catch up and create an isomorphic environment where the code is created. A few tenths of a percent of mac developers won't make a jot of difference.

    • readittwice 5 years ago

      Not sure that's what he was saying. I understood that instead of targeting the server-market directly first, he recommends that it would be better to target the developer market first. And if it's popular there, it will naturally bubble up from there into the server market.

      Sure, it will take more than a few tenths of a percent of mac developers, but it won't need total domination like x86 either. But just imagine if Apple really just switches its MacBook (not even the Air or Pro ones) to ARM64: that would already be millions of users. A lot of devs will at least try to compile their app for that platform for the first time. Quite some developers will be curious and get one just to test their software on it. I am pretty sure this is going to help the adoption of ARM64 way more than a slightly improved CPU or another experimental support by a cloud provider.

      Personally I don't care too much about ARM64's success (although more competition would certainly be great), I am more rooting for RISC-V and I hope they follow a more developer-first strategy.

  • sheeshkebab 5 years ago

    I wonder whether Apple’s arm PCs will go more the way of chrome books and ms surface laptops. Although since people that buy these are looking for significant value, it’s hard to think Apple will be successful.

    And if Apple switches the pro line to ARM, they'll probably lose all non-iOS/macOS developers. So, good luck to them there.

johnklos 5 years ago

Linus has been wrong before, but here he is clearly and unambiguously wrong. You don't - you NEVER - develop to the details of an architecture. The only exception is when you're doing OS development. Since things that people want to run in the "cloud" are exceedingly rarely OS development, the idea that you'd care about the architecture is quite silly.

We have vendors ardently trying to get us to commit to their specific environments, like NVIDIA, but they've been dropping bricks on their feet by not actively discouraging software developers from targeting specific versions of CUDA. Anyone running TensorFlow knows how much this sucks. After a while, people are going to get fed up with the problems that come from a poor yet popular implementation and are going to start preferring less popular, yet less proprietary, more deployable solutions.

The idea that Intel and AMD can continue to keep the x86 platform performance advantageous forever is pretty ridiculous. We are already seeing Arm CPUs which are significantly better than x86 in performance per watt. How long will it be until we have high end consumer Arm and low end server Arm that completely overlap x86? One year? Two? Three? Possibly four?

x86 isn't over - that's not what's happening here. But it certainly hasn't "won" forever.

  • AnimalMuppet 5 years ago

    > You don't - you NEVER - develop to the details of an architecture.

    You don't try to develop to the details of an architecture. You even try not to. Then you try to run it on a different architecture, and you find (some of) the places you developed to your specific architecture even though you didn't mean to.

    And that's why Linus is right. It's easy to be architecture-specific by accident. It's really hard not to be. And it's going to take time and effort to go to a different architecture. In the real world, few people want to waste their time doing that.

amelius 5 years ago

> Some people think that "the cloud" means that the instruction set doesn't matter. Develop at home, deploy in the cloud. That's bullshit. If you develop on x86, then you're going to want to deploy on x86, because you'll be able to run what you test "at home" (and by "at home" I don't mean literally in your home, but in your work environment).

But developers can ssh to the cloud, and develop there, right?

  • oblio 5 years ago

    And he's wrong. In the enterprise, the vast majority of devs are on Windows, for Java for example. That doesn't mean that their deployment target is Windows; most of the time it's Linux. And the development differences, chances of bugs, and performance profiles vary far more between OSes than they do between CPU architectures.

    • MaulingMonkey 5 years ago

      I can install a reasonably performing x86 Linux VM on an x86 Windows host. If the same can be said for an ARM VM on an x86 host, please share how, I'd love to learn that trick.

      • oblio 5 years ago

        Most of those enterprise devs don't even get to touch the test/staging/prod environment, yet they still do their jobs.

        Regarding the VM, today I guess it would be a bit ugly through Qemu but tooling is always solved if there's a need.

        • MaulingMonkey 5 years ago

          > Most of those enterprise devs don't even get to touch the test/staging/prod environment, yet they still do their jobs.

          They can usually run at least some part of the stack locally. Maybe not the full thing, but at least part of it.

          > Regarding the VM, today I guess it would be a bit ugly through Qemu but tooling is always solved if there's a need.

          Tooling doesn't magically fix perf. Have you actually tried that specific setup? Searching online I get results like https://raspberrypi.stackexchange.com/a/12144 which aren't impressive, and I don't count as "reasonably performing".

          Those results also jibe with my experience - the meager demands of mobile apps are outrageously slow inside an ARM emulator running on x86 every time I've tried it, although I'll admit I haven't tried it on Qemu specifically. I can only imagine the horror of trying to run a full ARM server stack on x86 via emulator - perf is bad enough when it's native. For mobile, the first thing I do is fix the x86 builds (so I can use an x86 device emulator), buy a real ARM device out of my own pocket, and/or educate and/or fire my employer for their outrageously wasteful use of my time if they're serious about me and my coworkers using mobile ARM emulators on x86 for any serious length of time (that kind of waste of expensive resources like developer time can't possibly be a sustainable business decision.)

    • lunchables 5 years ago

      You're comparing a bytecode virtual machine to a processor microarchitecture. He's talking about why a particular hardware platform becomes dominant. You could say the same thing about any interpreted language, not just Java. How about python or javascript? Java didn't displace x86.

    • AnIdiotOnTheNet 5 years ago

      > That doesn't mean that their deployment target is Windows, most of the time it's Linux.

      I don't feel like that's true. I don't have data to say otherwise, but my impression for the majority of non-tech businesses I encounter is that they are very much still Windows shops, and still on-prem.

      • oblio 5 years ago

        I've worked for a few, consulted for others, have friends working for yet others. It's a mix. Java shops go Win/Lin, .NET generally is Win/Win, for now.

        • pjmlp 5 years ago

          There are also Java/.NET shops :)

    • porpoisely 5 years ago

      He isn't wrong since most linux servers run on x86 architecture and obviously most windows desktops run on x86 architecture.

      To your last point, if that is true, then what will the differences be when both the OS and CPU architectures are different? I suspect it will create even more headaches.

      I don't see ARM winning the server space anytime soon or ever considering how established and dominant x86 is.

  • AnIdiotOnTheNet 5 years ago

    > But developers can ssh to the cloud, and develop there, right?

    Then why do they insist on 32GB i9 laptops running macOS or Linux? One must assume that there is some practical purpose, unless one is rather cynical and believes that maybe they just like shinies.

    • Tor3 5 years ago

      .. develop remotely? I can, if I wish. I have an extremely high bandwidth link to servers. But there's still latency, and that makes all the difference. So in practice I always develop locally. I use Git and VNC and sshfs-mounted filesystems to access the remote, but my tools and editors are always local. I can build remotely, but not work remotely. And I still like to build locally all the same, during the actual development.

      • AnIdiotOnTheNet 5 years ago

        I find it hilarious that this is the developer attitude for their own work, but then they turn around and expect users to just suck it up and accept latency because cloud!

        • Tor3 5 years ago

          There's a difference between using a text editor remotely and doing just about anything else remotely. I can log in and build remotely, and execute remotely, use remote git servers, even run GUIs remotely (testing my application through VNC). But using a text editor remotely, with a few hundred millisecs rtt... now, that is very different.

    • scarface74 5 years ago

      There is nothing stopping you from developing on x86 and SSHing to a remote ARM computer for final validation. Cross compilation has been a thing for decades.
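
      Roughly, and assuming a Debian/Ubuntu cross toolchain plus a reachable ARM box (the host name "arm-box" here is made up), that workflow can be as small as:

          sudo apt install gcc-aarch64-linux-gnu              # cross toolchain on the x86 dev machine
          aarch64-linux-gnu-gcc -static -O2 -o app app.c      # build an ARM64 binary (static, so no target libs needed)
          scp app arm-box: && ssh arm-box ./app               # run it where it will actually deploy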

      • blattimwind 5 years ago

        Linus' point is literally that the pains associated with cross developing/compilation are what brought x86 to power and keeps it there.

        • scarface74 5 years ago

          On the level of Linux it might be hard, but thousands of iOS developers do it all of the time. On the application level, if you are just doing C, my experience with writing cross-platform C is that if it works on one platform and breaks on the other, there is probably a bug in your software that is only surfacing on one platform, usually from depending on undefined behavior or relying on implementation-defined behavior.

          But my “experience” with native cross platform programming is two decades old - x86 and whatever the old DEC VAX and Stratus VOS platforms were running.
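
          To make that concrete, here's a made-up example of implementation-defined behavior biting across architectures: plain char is signed on x86 Linux but unsigned on ARM Linux, so the classic getc-into-char loop behaves differently on the two.

              #include <cstdio>

              int main() {
                  std::FILE *f = std::fopen("data.bin", "rb");  // hypothetical input file
                  if (!f) return 1;
                  char c;                                       // signedness is implementation-defined
                  while ((c = std::fgetc(f)) != EOF) {          // on ARM, char is unsigned, so c never compares equal to EOF (-1): infinite loop
                      std::putchar(c);
                  }
                  std::fclose(f);
                  return 0;
              }

          On signed-char platforms the same line has the opposite bug (a legitimate 0xFF byte is mistaken for EOF); the portable fix is to keep the fgetc result in an int.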

  • akhilcacharya 5 years ago

    Does anybody actually enjoy doing that? At my company there are dozens of hacky ways to do that and none are seamless.

  • blattimwind 5 years ago

    "ssh and develop there" is so painful. Linus' cross-development argument pretty much applies 1:1.

  • MaulingMonkey 5 years ago

    Not without an internet connection, not in an isolated testing environment, not without jumping through cloud specific hoops to setup accounts, not without installing your initial ssh keys, not without getting IT signoff for access and/or to spin up more cloud resources with the related charges, not without extra IDE configuration (if even possible, depending on the cloud) or resorting to raw gdb, ...

    When I want to test C++ stuff on ARM, I connect an android device, because that's somehow less painful.

    • AnIdiotOnTheNet 5 years ago

      Do people not set up on-prem test and dev environments anymore?

jillesvangurp 5 years ago

ARM is pretty much universally used on mobile these days and also in a fair bit of low-end Windows and Chromebook laptops. With Apple rumored to launch ARM on the desktop and new OSS RISC-V chipsets becoming a thing, a lot of the arguments that Linus Torvalds is making ring more true looking backwards than forwards. Also, WASM is becoming a thing.

I don't think it's as black and white as wanting the same CPU and OS on your deployment platform and development platform. When it comes to mobile development, having those two be completely different is the norm. It's not much of a problem in practice. The differences between platforms with the same CPU architecture and OS from different vendors are a much bigger headache than the difference between each of those and the Windows or Mac + Intel laptops the software is being developed on. The difference between Windows and Macs still causes some headaches of course, but is mostly manageable. If it works on Linux it generally also works on Mac. And if it doesn't, that's a good sign of immaturity. All I'm saying here is that most mainstream technologies work across all three of those without much headache.

jlarocco 5 years ago

> I can pretty much guarantee that as long as everybody does cross-development, the platform won't be all that stable.

Even if that's technically true, there's no reason there can't be ARM workstations. There aren't many because they're still generally slower, but there's no fundamental reason (that I know of) why an ARM workstation can't work just as well as an x86 workstation in almost every case.

  • 51lver 5 years ago

    Does your boss have a niece that can fix your ARM bootloader? He probably has one that can fix your Windows bootloader.

    This stuff was "dark arts" that we committed to muscle memory in the 90s, and no normal mere mortal will ever learn why the safe mode menu existed, how to hotkey into a boot selection menu, or how to boot up a live disk to restore system files. None of that is possible with, say, u-boot. You need to know the kernel load addr, the partition layout, and a ton of crap that just isn't portable from one device to another, and that's AFTER you're done fighting the thing for root access!

    Arm workstations Just Won't Work the same way those super permissive x86 workstations did, and that will make them crap for developing on.

  • orojackson 5 years ago

    I mean, ARM is eating x86's lunch in the smartphone space.

    Are ARM CPUs more power efficient than x86 CPUs? That could be the deciding factor on whether it will take over.

  • numbsafari 5 years ago

    Intel is already anticipating Apple moving to ARM for “desktop”. Chromebooks already run a mix.

    He’ll be right until he is wrong.

    • cthalupa 5 years ago

      He'll still be right, even then. His core argument is that until there are ARM computers that developers are working on in some real quantity, ARM in the server space has no chance. If that happens, then by his argument they now have a chance.

      • yoz-y 5 years ago

        It seems that the title gives a different meaning to the words he actually said.

mangecoeur 5 years ago

Well all the kids are learning programming with ARM-based RPis so...

  • AnIdiotOnTheNet 5 years ago

    Yeah, but it seems unlikely that many of them are learning in languages or environments where the arch really makes any difference. If the dominant OS on RPi was RiscOS (a man can dream...) it might be a different story.

    • rbanffy 5 years ago

      And where does it? Almost nothing I write will get compiled down to native code. Linus is forgetting the immense volume of code that's written in Java, PHP, Python, C#, Ruby and dozens of other languages that rely on runtime interpreters.

      I agree I will not even think of optimizing my code to run on POWER, SPARC, or ARM and that I won't care where my software runs enough to create a market for these, but I don't care enough to keep an x86 market either.

      Having said that, it would be good for those who propose new architectures to ensure there is a ready supply of entry level machines for enthusiasts and early adopters. Seeding these groups will ensure their architectures and software is more thoroughly used and tested.

  • islon 5 years ago

    Are all these kids learning C or assembly? Because if they are learning Javascript, Python, etc. it doesn't matter in which processor their code is running.

    • joezydeco 5 years ago

      Exactly. That's what I don't get about this whole architecture argument. If I write a mess of Python and deploy on a local ARM SBC or drop it into a remote x86 webserver, what's the difference?

    • jbverschoor 5 years ago

      True... But the Apple ARM CPU has some things to speed up JavaScript. I guess that makes it a high-performance server CPU?

  • jerguismi 5 years ago

    Are they really? RPi is cool and all, but I doubt the market share of ARM development is that big. As a kid I got interested in programming because I wanted to develop games, and I would guess most youngsters still do. For that, the RPi is a shitty choice.

    • AstralStorm 5 years ago

      It is pretty big for smartphones and tablets. And embedded devices.

      Games on smartphones are perfectly viable - RPi is not good enough but oDroid or Google Pixel is.

      The only problem is developing on the same kind of machine you deploy to. For that, you need a full-fledged, quite powerful ARM laptop. A hacked Chromebook, for example. Otherwise, the emulator is way too slow.

  • pjmlp 5 years ago

    With most tutorials using Python.

  • mtgx 5 years ago

    That's probably the strongest counter-argument. New generations will learn programming on Arm and RISC-V (whether it's something like an iPad or a Raspberry Pi).

    Linus very much sounds like a grumpy old man saying "get off of my lawn!" (as he usually does).

    • deng 5 years ago

      Well, it's difficult to not get grumpy when you hear the same promises for many years. It's not like "ARM for servers" is something new. The exact same things were already promised over a decade ago, and somehow, it never really came to pass. And I think his reasons are spot on: aside from very specific workloads, it just did not make sense to buy ARM-based servers. They were usually not much cheaper, and they usually did not save much power, compared to x86-based machines. Combine that with cross-development, and you have a loser.

      • AstralStorm 5 years ago

        Exactly. Modern X86-64 chips got improved power saving, more cores to increase density at reduced power cost, much more powerful GPU for running compute.

        Plus, server space buyers are conservative, they want support of a big company behind their hardware. Someone the size of Cisco, Dell or HP. Or hardware that is used by Google or Microsoft.

        Combine that with anticompetitive long term agreements and you get the picture.

        Somehow network routers have a variety of architectures inside and nobody has a problem - because they have a bunch of reputable companies behind each of them. (Variants of MIPS, ARM, ARM64 and x86 are in the wild.)

alkonaut 5 years ago

I think this ignores the fact that these devs just write some js and push it to some cloud server and it uses APIs for storage on some other server.

The person running the infra for S3 or GitHub needs to think like Linus because they need to worry about that system level compatibility. But if I write an app using say node+s3 I trust that even though I develop on x86 I can run it under node on Arm.

  • judge2020 5 years ago

    I think he tries to address this but fails to acknowledge it elsewhere.

    > This is true even if what you mostly do is something ostensibly cross-platform like just run Perl scripts or whatever. Simply because you'll want to have as similar an environment as possible,

    I guess if you're doing stuff with low-level packages using node-gyp or C++ bindings then sure, but pure node-based workloads don't care about the architecture and, as you said, are expected to work everywhere.

_ph_ 5 years ago

I agree with Linus insofar as the old RISC vendors got killed because there were only a few, if any, offerings for consumers. Less so because of the code migration issues, but rather because the volume of processor production was too low to compete with Intel back then.

However, there is an elephant in the room today: smartphones. ARM processors are suddenly a huge market, allowing for higher R&D expenses. And basically everyone already owns an ARM-powered device, and most developers are actually developing for ARM today when they build apps.

So, and there I agree with Linus again, what is a bit missing is ARM-based desktop hardware. The best thing ARM could do to push their cloud processors would be to make affordable motherboards with their processors available to hobbyists and enthusiasts. But ARM is coming to the desktop anyway, be it through the ARM-based Windows laptops or that Apple builds an ARM-based Mac. And then the x86-architecture will come under severe pressure.

  • rbanffy 5 years ago

    It'd be great if Sun had a Niagara box priced like a PC when that chip was first launched. I bet our software would be running much better on multicore machines today. Maybe languages such as Rust would have been created sooner.

    I remember joking that writing code for the Xeon Phi would be useful because that was probably what a Core i15 would look like. Now we have Xeon Platinums with similar core/thread counts, but only on the very high end while 8-thread machines are mainstream with 12 being seemingly the next step.

    On ARM we have something interesting with the asymmetric cores, something Intel has only hinted they plan to pursue. Some tasks can go to slower cores that sip power while others may be better suited to beefy cores that can do speculative execution on deep pipelines and that could heat a small house.

stunt 5 years ago

When reading it, please keep in mind that the HN title is not part of his message or what he actually mentioned.

He is talking about cross-development issues in ARM ecosystem. He has valid points and good suggestions.

throw2016 5 years ago

It's as if Linus, Stallman and others created a free public pool where before only restricted private pools existed.

Now 20 years later the private pools have faded into the background and the public pool is a lake, but its inhabitants are disconnected from the past, only vaguely aware of the era of private pools, and couldn't care less how the public pool came to exist. Their reality takes for granted this huge public pool's existence, on which they can build and sail their boats, so Linus's arguments are unlikely to resonate.

Users will gladly wade into a private pool this time by cloud providers, and that suits closed vendors like Arm who maintain tight platform control - billions of phone SoCs with zero open drivers - just fine. It's like every generation has to make the same mistakes and go through the same cycle of pain, or it slips from generational memory.

bboreham 5 years ago

In the version of history I remember, x86 won because it was 2-5 times cheaper than RISC for the same performance, then in performance per Watt once that became the limiting factor.

All devs had SPARC machines and no intention to move to x86 on the desktop, until after Intel steamrollered the performance contest.

acomjean 5 years ago

Linus worked at a 'failed' Intel competitor, Transmeta, which was trying to make low-powered chips. Intel/AMD are tough competitors.

https://en.wikipedia.org/wiki/Transmeta

  • pickle-wizard 5 years ago

    I remember the Transmeta processor, they were neat. I only ever saw 1 Transmeta PC in the wild. IIRC it was a Sony sub notebook. Of course it was running in x86 mode and running Windows. I got to mess with it a bit and for the time it was a pretty good machine.

    • mempko 5 years ago

      I saw an insane amount of Transmeta PCs in Japan circa 2005 during the "netbook era" running Windows. It was quite a surprise.

  • rbanffy 5 years ago

    They still aimed to run x86 binary code through an emulator layer on a VLIW CPU.

blinkingled 5 years ago

He didn't even discuss every ARM board being a snowflake needing its own BSP, custom bootloader, OS distro, etc.

I am fine with x86 being successful - there are good reasons for it. I don't get why there's this ARM push in the PC and servers market. It's not making anything better.

  • kllrnohj 5 years ago

    > I don't get why there's this ARM push in the PC and servers market. It's not making anything better.

    What push? There's a rumor that Apple might make an ARM-based laptop, and there's a bit of dabbling with ARM-based servers, but there doesn't seem to be any real push for it. More like hedging bets if Intel continues to shit the bed on 10nm. Except AMD's Epyc looks to be the real contingency plan for that now anyway.

    I don't see any reason to expect Apple's motivations for using ARM (more control over hardware) would apply to anyone else, and Apple is the only one with an ARM CPU design that is even worth discussing in a laptop anyway.

    So... I wouldn't expect the other 90% of the laptop market to follow, much less any of the desktop/workstation market.

    • blinkingled 5 years ago

      Push might have been a little strong word - but I was referring to server vendors selling ARM based servers (HPE Moonshot), Amazon making ARM instances available, MS having Windows Server for ARM etc.

      Looks like it's just a plan B type thing or throw it out there and see what it does thing.

  • nfriedly 5 years ago

    Yea, ARM really needs some standardization around that area.

nailer 5 years ago

> And the only way that changes is if you end up saying "look, you can deploy more cheaply on an ARM box, and here's the development box you can do your work on".

Linus is overestimating the amount of native code being produced.

nickpsecurity 5 years ago

I thought the main reason x86 won was the tie-in with Windows PCs and servers. Microsoft took the market. The apps were written for x86. So, most servers are x86.

Intel and PC makers also charged less for hardware. I always wanted one of those RISC systems. They cost in the upper four to five figures. On the high end, NUMA machines cost six figures or more. People started building Beowulf clusters out of Intel boxes to knock a digit or two off the price. Clustering got big. So, it was cheaper on both the small and large end to run Intel.

Compatibility with dominant system and low cost. Maybe add that they kept optimizing for workloads like gaming on top of that.

swebs 5 years ago

>If you develop on x86, then you're going to want to deploy on x86, because you'll be able to run what you test "at home" (and by "at home" I don't mean literally in your home, but in your work environment) ... This is true even if what you mostly do is something ostensibly cross-platform like just run perl scripts or whatever. Simply because you'll want to have as similar an environment as possible

Eh, Android's existence is a pretty big counterpoint to that. And you could just run an ARM VM on your desktop if it's that important to you.
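
For what it's worth, a rough sketch of what running an ARM environment on an x86 desktop looks like with QEMU user-mode emulation wired into Docker (assuming a reasonably recent Docker; the binfmt helper image named here is the commonly used one, and it is emulation, so expect it to be slow):

    docker run --privileged --rm tonistiigi/binfmt --install arm64   # register qemu-user-static handlers with the kernel
    docker run --rm --platform linux/arm64 ubuntu uname -m           # runs an arm64 userland on the x86 host; prints "aarch64"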

jacknews 5 years ago

Except that doesn't quite explain why so many developers use MacBooks (OS X), yet deploy on Linux.

Personally I think ARM could be leapfrogged by RISC-V, though it will take longer than people predict.

  • yaantc 5 years ago

    > ARM could be leapfrogged by RISC-V

    For the low-end deeply embedded space, I could see RISC-V becoming a significant player in the next 5 years. ARM is relatively expensive there (it's cheap compared to Intel for servers, but at the low end it's more like the IBM or Microsoft of old), and there's less need for their ecosystem (most 3rd party software is open source). The Cortex-M parts are less flexible than some competitors in their configuration, and designing good small CPU IP is doable at the low end. So all the small custom designers (Andes, BA semi, Cortus...) can rally around RISC-V and have cheap and flexible designs, with a good shared software ecosystem. Today it's still one or the other.

    At the high-end it's very different. To be competitive one has to take advantage of the latest process nodes, and work well ahead of time with fabs and EDA vendors on the next node(s). This is highly labor intensive, and it takes deep pockets and a lot of resources. ARM is already there, and well entrenched. Getting enough money to replicate this on the RISC V side will take a loooooong while, if it ever happens. Unless a deep pocketed company decides to use RISC V and go their own way, but it seems very unlikely. Look at how even Qualcomm reduced its work on custom ARM cores for their high end, and now do tweaks on ARM designs.

    But personally I'm fine with this. It's mostly at the low end / embedded that I feel there's a need for more competition.

    • petra 5 years ago

      I agree that at the low end, at least in low-end IP cores, RISC-V could do well.

      But in microcontrollers ?

      To be meaningful there, they will have to convince one of the big western guys to bet on them.

      But why would any of them care enough to hurt their future relationship with ARM, a significant risk?

  • gvb 5 years ago

    Except...

    1. OS/X is Unix[tm] and Linux is a "Unix clone" so the operating systems work the same and (for purposes of this debate) both run on the x86 ISA.

    2. The first thing all the macbook developers do is install Homebrew or equivalent and install all the same packages that are installed in their linux deployment environment.

    At that point, their OS/X development environment is effectively indistinguishable from their linux deployment environment.

    • bunnycorn 5 years ago

      What's that? The 2000's.

      The first thing we install is docker and run everything containerized.

      Homebrew is for tools that we actually want to run locally, like ffmpeg or python, or node...

  • AnIdiotOnTheNet 5 years ago

    I'm curious why you believe that. Aside from the hype and the association with the word "open", I honestly can't think of anything interesting or important about RISC-V.

  • lugg 5 years ago

    The ops guys use Linux.

    • rbanffy 5 years ago

      After SSH was invented, I can use anything (my corporate laptop is a Mac, my personal ones are Linux-running x86's).

      To be fair, I was using anything when telnet was not considered suicidal.

Animats 5 years ago

AMD-64 won in the server space. At this point, x86 is just a legacy mode that's still in AMD-64 CPUs.

  • bunnycorn 5 years ago

    This. AMD64 killed Itanium, and that's why "x86" won the server space.

    • icedchai 5 years ago

      x86 was already on plenty of servers before Itanium. 64-bit hardly mattered until servers with >4 gig RAM became common in the mid 2000's.

dfabulich 5 years ago

Linus is right that ARM won't win "as long as everybody does cross-development." And that's why ARM will win.

Apple will ship MacBooks on ARM in 2020, which will have comparable performance with superior battery life. Windows-based laptop vendors will switch to ARM to catch up; the desktop market will follow the laptop market, as it already does.

ARM's performance-per-watt advantages already make it a compelling cross-compile target today; when everybody's developing on ARM, it will be a no-brainer.

zero4ll 5 years ago

ARM laptops and machines are here. Windows on a Raspberry Pi is going to make the ARM processor more appealing to the dev market, and once more people buy super cheap ARM processors, ARM will show up in servers. People will dev for ARM and want ARM servers. Netflix uses pfSense, which is BSD-based, and it runs like a champ on ARM; Netgate even makes an ARM-based unit for small offices that is amazing. ARM laptops also last for like 25 hours, so I think people will be switching.

k_bx 5 years ago

Same reason why Ubuntu became so popular on servers. Good point.

  • bunnycorn 5 years ago

    No, Ubuntu became popular on the servers because of the immense documentation, and better maintained repositories.

  • cm2187 5 years ago

    And the whole point of docker.

purplezooey 5 years ago

The mixing in of reader abuse is really off-putting. "do you really not understand?" etc. I know, I know, it's been discussed a thousand times, but still.

  • Tistel 5 years ago

    Yeah, he just sounds like an angry "get off my lawn!" guy. Even if he has good points, he loses you because it's all so rude. I don't understand why he has so much anger. He made a huge impact on the tech world, has a family/kids. I hope he made some loot off Linux. He needs to learn to meditate or something. Dude, just be happy. Hardly anyone cares about the CPU anymore (I have been writing code for > 20 years, 13 of which were low-level C/C++) outside of OSes and drivers. It's like 98% high-level, warm and fuzzy languages now. It's great.

  • timmytokyo 5 years ago

    I find that it makes him less persuasive. When a writer resorts to personal invective to make an argument the way Linus does, it's a tacit admission that he can't do it solely with evidence and sound logic. He's beating the reader over the head with insults to "win". In the process, he undermines the strong points he does make.

hawski 5 years ago

For this to happen we would have to use our phones as workstations - just connect a monitor and input devices. That's the current mission of Rob Landley with his Toybox project ¹. It's years away, but it will probably happen. That makes projects like the Librem 5, PinePhone and Pinebook crucial in the long run.

¹ http://landley.net/toybox/about.html - see the Why section

alexandernst 5 years ago

The vast majority of "cloud" users develop and deploy code written in languages that have "no idea" what architecture the CPU is running on. E.g., I code in Python/Django. Why would I care even a little bit about what set of instructions the CPU of my EC2 instance has? I don't. Because Python/Django will run exactly the same way on x86 and on ARM.

  • rbanffy 5 years ago

    Now imagine some non x86 vendor adds a couple instructions that make Python code miraculously faster. Wouldn't you want that server?

    • alexandernst 5 years ago

      Sure. But that's not my point. My point is that the entire argument Linus has about users wanting to test and deploy code on the same arch is flawed. I'd understand and agree with that argument if we were talking about developing and deploying C (which is what Linus is used to), but that's just not how "the cloud" is used.

armboy 5 years ago

I think the most important question here isn’t whether Arm will win servers.

The assumption there would be that the cloud market will continue to grow at its current rate and continue to be the valuable market it has been.

What scares intel most is that a new market will emerge for middle and high power compute spread more broadly, and this market will grow explosively faster than data center ever did.

The question is whether these new cores will bring about a new way of structuring the data center, where it is decentralized and spread out more instead of stuck in one building.

A good example is the infrastructure that must go into place to support 5G and self driving cars. Every lamppost will have a server on it.

Arm winning servers becomes irrelevant in that world. If the fog/edge/infrastructure combo blows up the way mobile did, Intel's little data center slice won't matter.

Developers will only adapt when there is a big new market to write software for. The data center by itself is not sufficient to motivate the change needed.

karakanb 5 years ago

Correct me if I am wrong, but doesn't this whole dev-prod hardware parity argument contradict the rise of Docker? I don't know much about ARM or CPU architectures in general, but I have never heard of companies testing their Dockerized applications with Docker on different CPU architectures just to make sure that the application works as expected; it may be because everything is on x86 already, but at this point Docker is assumed to be a cross-platform dependency that will be the same between different hosts / OS / architectures. What prevents an abstraction layer like this from being used when it comes to ARM, hence making the whole "If you develop on x86, then you're going to want to deploy on x86" argument incorrect?

  • Slartie 5 years ago

    x86 Docker containers will not run on ARM, even if you install Docker for ARM. Docker is an entirely useless layer to support cross-architecture deployment.

    Heck, even full-blown VMs (the real ones, that docker partially replaced) are mostly useless, for performance reasons.
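
    What Docker does offer is multi-arch images: a manifest list pointing at a separately built image per architecture, so each host pulls the one matching its CPU. A rough sketch with buildx (the image name is made up); note this doesn't contradict the point above, since the binaries inside are still per-architecture:

        docker buildx create --use
        docker buildx build --platform linux/amd64,linux/arm64 -t example.org/myapp:latest --push .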

    • karakanb 5 years ago

      The question was not specifically around Docker. What I wonder is, why wouldn't there be a separate layer that abstracts away these platform differences, just like Docker did for operating systems?

      • timmytokyo 5 years ago

        Like the operating system?

        • karakanb 5 years ago

          Exactly, but I am not well-versed in the challenges and potential differences that might surface at the kernel level, and since Linus thinks it is not applicable, I was wondering if there is / would be an equivalent in user space that would allow reproducibility across different architectures, something like architecture-independent containers maybe. I would like to believe that we have come to a position where we can consider the application and platform as completely independent entities, which would eliminate these concerns regarding dev/prod parity.

          All these questions are mostly out of my ignorance though, I am trying to understand why the architecture change would require a huge effort as long as the underlying stuff is well abstracted.

zackbloom 5 years ago

I have a fundamental question. Do people expect the move towards 'serverless' (this desire of some percentage of developers not to have to worry all that much about the low-level infrastructure their code is running on) to die? To accelerate?

It seems somewhat clear that some percentage of developers are interested in a fundamentally higher abstraction than what operating systems have classically provided. In a world where that's true not just for a small few but for most developers, it seems that something like processor architecture could be abstracted away. It's not like it's particularly easy to replicate serverless deployment environments on developer machines today, and yet it persists.

spamizbad 5 years ago

Linus is absolutely correct, despite how it feels like ARM should be winning in the datacenter.

If you want to win servers, you basically need to win developers too. So in the ghost world where ARM is dominating on the server, we're all writing code on ARM desktops.

bni 5 years ago

People will run their JavaScript on whatever architecture AWS provides for them. Does Linus really think the bulk of developers, the line-of-business developing kind, cares or even knows what CPU architecture is running beneath it all?

greymeister 5 years ago

The only disagreement I have with his statements is that I think the Itanic and blind vendor buy-in is what sunk non-x86 processors. amd64 is what actually brought x86 ubiquity while ia64 never reached any relative market share.

avodonosov 5 years ago

People developing on tablets (it's real, quote [0]: "Since 2015 I do most of my software development on an Android tablet. I am a bit fed up with PCs and notebooks, as they are tedious to handle and have too many mechanical parts. A tablet is more convenient, it can be held in one hand, and usually survives if coffee is spilt over it."), have ARM as their development environment, so if the Linus' argument is true, ARM has chances.

0 - https://picolisp.com/wiki/?TermuxPentiPicoLisp

Solar19 5 years ago

For a time Intel had PCs on a stick – it was like a large flash thumb drive, with a Celeron or something. (Maybe they still offer these.)

Are there any thumb drive ARM computers? (Preferably with 64-bit ARMv8 processors to match emerging server platforms.) With pluggable ARM mini-computers it might be easy to develop for ARM.

Raspberry Pi would be interesting, but they have old 32-bit processors and I'm not sure how well they interface with a normal computer. Servers are using 64-bit ARMv8.

Maybe smartphones would work as pluggable mini-computers? I heard that OnePlus devices were explicitly root-friendly, but I've never tried it.

reacweb 5 years ago

I am in the mindset that ARM servers are the future. I already have some sites on ARM. The issues with the IME give Intel a bad image. Monopolies are never good... But when Linus says: "And the only way that changes is if you end up saying "look, you can deploy more cheaply on an ARM box, and here's the development box you can do your work on"."

It matches my mindset. I am also looking for a good ARM laptop. I do not need anything fancy: 4GB of RAM (or better), 64GB SSD, wifi, bluetooth, USB (3 if possible), audio jack, good screen, usable keyboard, good touchpad.

pankajdoharey 5 years ago

Why do I feel he is dead wrong? The stability of a system is dependent on kernel writers more than on system builders. ARM will make it for a very important reason: the electric bill. More and more data centers are geared towards saving on their electric bills to be highly profitable, and many, like Apple and Google, are already using renewable sources. Also, if he isn't willing to support ARM for the server space, some other kernel will make it. Also, most x86 hosts these days run in virtualized environments; there is no difference whether it actually runs on x86 or ARM.

randomsearch 5 years ago

Companies care more about price than replicability.

Almost all the processor market is ARM already. Scale wins.

ARM desktops are likely if not inevitable.

X86 is not one unified platform.

Cloud hardware will become specialised and not like the hardware in your desktop.

ecabuk 5 years ago

> Why? Same exact reason, just on the software side. In both cases. Where did you find developers? You found them on Windows and on Linux, because that's what developers had access to. When those workloads grew up to be "real" workloads, they continued to be run on Windows and Linux, they weren't moved over to Unix platforms even if that would have been fairly easy in the Linux case.

There are lots of developers developing on macOS, so why is BSD server usage so low then?

wallflower 5 years ago

Lest we forget, ARM has problems with memory ordering not happening the way programmers might assume in certain cases.

https://preshing.com/20121019/this-is-why-they-call-it-a-wea...

https://news.ycombinator.com/item?id=4673458

  • jchb 5 years ago

    ARM doesn't "have a problem" with the ordering. Rather, those CPUs take advantage of the re-ordering allowed by the specified ordering constraints. The behaviour is entirely expected. The default option for those C++ atomic operations is the strongest constraint (memory_order_seq_cst) - a programmer who specifies a more relaxed constraint had better have a good reason for it.

    • AstralStorm 5 years ago

      Usually the problem is that it exposes latent software bugs hidden by x86's strong guarantees on memory access ordering and cache validity. Not that someone misused atomics, but rather that they don't use them at all and "it works".
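
      A minimal sketch of that kind of latent bug (strictly it's a data race, i.e. undefined behavior on every architecture, but x86's stronger ordering usually masks it while ARM makes it visible):

          #include <atomic>
          #include <cassert>
          #include <thread>

          int data = 0;
          std::atomic<bool> ready{false};

          void producer() {
              data = 42;                                     // plain store
              ready.store(true, std::memory_order_relaxed);  // relaxed: not ordered against the store above
          }

          void consumer() {
              while (!ready.load(std::memory_order_relaxed)) {}  // relaxed: not ordered against the read below
              assert(data == 42);  // can fail on ARM: ready may be seen as true while data is still stale
          }

          int main() {
              std::thread a(producer), b(consumer);
              a.join(); b.join();
          }

      Switching the store to memory_order_release and the load to memory_order_acquire (or just using the seq_cst default) makes it correct on both.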

cjiph 5 years ago

The vast majority of developers I know who are targeting the cloud do so using macOS or Windows and then deploy to servers running Linux. It is clearly not the case that you need the same environment “at home” as you have in the data center.

I do agree with the idea that a cross-compiled environment makes your target environment “more distant” and this is less efficient. However, I don't think that's at all what's limiting ARM from cracking the server market.

miohtama 5 years ago

In the early 2000s Java promised to be write once, run everywhere. At the time the contest was between Windows and Sun.

20 years later we have finally reached this point for business applications, be they in Java, Python, PHP or some other VM-based SDK.

The mainstream development future is in Lambdas and other such containerised technologies. If you can run Docker or similar at "home", you can also deploy. Our DevOps models have radically changed, and I'm not sure he realises this.

  • kazinator 5 years ago

    > In early 2000 Java promised to be write once, everywhere.

    Latter half of 1990's, by my recollection. The history is well-documented. JDK 1.0 shipped in 1996, and that's when The Java Language Specification was first published also. The "write once, run anywhere" started to be heard soon after.

  • pjmlp 5 years ago

    When I left C++ for managed languages, it was when the OS became irrelevant to me, with FFI being the only occasions where it still matters.

    As you mention lambdas take it to the next level, and in such future Linux might not even matter anymore, hence his point of view.

    • kazinator 5 years ago

      When I left C++ for managed languages, operating systems and C didn't stop being relevant; what stopped being relevant for me was the grotesque extensions of C dialect desperately trying to be a high level, managed language in increasingly irrational ways.

      • pjmlp 5 years ago

        C++ is still the best tool when my managed languages need some extra native help.

        Maybe it will eventually be replaced by Rust in my toolbox one day, but surely not by C.

        I left that in 1993, only using it instead of C++ when university professors required me to do so, and on my first job (until they also moved into C++/C#).

flying_sheep 5 years ago

However this does not apply to consumer products, e.g. iOS. When building an iOS app for the simulator, it runs natively as x86_64 on macOS. When building an iOS app for a device, it is compiled to ARM64. Cross-compilation happens every day in iOS development. And the process is 99.9% problem-free. The 0.1% of problems come from missing physical features (e.g. touch screen / gyroscope) rather than instruction set / system issues.

zwieback 5 years ago

Flashback to 1999: Corel Netwinder tried to bring Linux to an ARM developer box. Still have one in my drawer. I guess we're still working on it...

yaleman 5 years ago

Linus' post reminds me of the Slashdot thread where people were saying the iPod would never catch on.

Saying there's no ARM "at home" is closed-minded to say the least - it's already here - clusters of ARM boards are everywhere, laptops are rapidly gaining ground and I'm currently buying the parts for my first desktop-class ARM machine.

bunnycorn 5 years ago

This will be Linus' "640kb ought to be enough for everyone", except that Gates didn't actually say that, and Linus actually wrote this.

Someone please tell him what LLVM bytecode is; he probably doesn't know because he uses GCC. Apple recently showed how that can be leveraged at scale, with the Apple Watch's architecture change overnight.

ARM and RISCV are here.

  • jzl 5 years ago

    LLVM bytecode is very cool, and arguably evidence that a more generic system could be successful, but it is not considered to be instruction-set agnostic. Yes, it allowed Apple to tweak the cpu in new Apple Watches without requiring updated app submissions, which is great. But it would not support a transition from, for example, x86 to ARM. I've seen this topic discussed multiple times, best summary I could find is from a quora post:

    Everyone's first thought (including the other two answerers here!) was that this would allow Apple to change processors entirely: if they wanted to make an Intel iPhone, for instance, they could just tell the App Store to start converting apps' bitcode to Intel machine code instead of ARM machine code for the new phones, and every app would instantly be "updated" for the new processor.

    But in practice, this doesn't seem likely. Even with LLVM's unusually clean separation between frontend and backend, the frontend actually does know certain things about the processor it's targeting (mainly details about its memory layout, or special processor features it can use to make certain code faster). These things get baked into the bitcode, and in practice ensure you can't really mix-and-match frontends and backends as freely as you would like. So suddenly switching processors entirely probably isn't in the cards.

    What's more likely is that it will allow Apple to make smaller improvements to their processors and then roll them out to existing apps. ...

wyldfire 5 years ago

> as long as everybody does cross-development, the platform won't be all that stable.

I tend to agree that it will suffer. But you can get ARM chromebooks, a bunch of vendors offer ARM-based Windows laptops, and Apple is planning on leveraging their SoC on OS X computers. So it may not necessarily be cross-development for long.

nerdile 5 years ago

For massive scale SaaS and non-native-code PaaS hosts, cost and perceived Intel lock-in risk will be the factor that drives them to deploy and evaluate these servers side by side with their Intel hosts. And, yes, those devs will have access to this hardware in their offices.

cyberpanther 5 years ago

This is why, if Apple ever did a legitimate server offering with virtualization, they would kill it. How many devs develop on a Mac and deploy to Linux? And also now use VMs for containers. Imagine if macOS supported containers natively. They leave so much money on the table.

  • sfifs 5 years ago

    Very unlikely. "My way or the highway" is ingrained in their corporate DNA. Can't think of an attitude less suited for server-side work, where optimization for custom workloads makes a massive cost difference.

  • musicale 5 years ago

    Are you arguing for native macOS containers (like Windows containers but for the Xnu kernel rather than NT kernel?) or for macOS to somehow support Linux containers (and system calls) "natively," more like BSD's Linux system call emulation and Windows' so-called "Windows Subsystem for Linux?"

yoz-y 5 years ago

Besides talk about ARM Macs, there are already ARM Windows and Linux laptops here.

Also there is no need for anybody to "win" anything. ARM and Intel servers could be just used for different purposes and each find their own niche.

craftoman 5 years ago

Why is he still coding in Perl? Am I missing something? Shouldn't we upgrade to Python or something? Do I have to learn Perl and code x86 software on a 15" CRT monitor while running Slackware to get it?

lifeisstillgood 5 years ago

I remember SteveY explaining that Jeff Bezos has an enormous advantage as a CEO because he had been at the top for so long that he had points of view no one else got.

Torvalds seems to me in a similar position

uniformlyrandom 5 years ago

But if you run your dev environment in the same cloud, doesn’t it void this argument almost completely?

The only thing I would run locally would be the frontend, and it is web/mobile anyway.

tracker1 5 years ago

As mentioned in TFA... every time I've looked at ARM server offerings, it was always much more expensive than x86 options, and most of the time far less compute power.

reacharavindh 5 years ago

The corollary of what Linus says may be true for newcomers as well.

If you're selling ARM/RISC-V laptops, developers will not care because their prod apps are deployed on x86....

  • urmish 5 years ago

    typo - devastated -> developer

nfriedly 5 years ago

I just read something this morning that Apple is looking at switching Macs to ARM. So, if that happens, an ARM server will match what at least some developers are using.

jpeg_hero 5 years ago

> By: Linus Torvalds February 21, _2019_

> This is true even if what you mostly do is something ostensibly cross-platform like just run perl scripts or whatever.

Please tell me this is irony.

ypolito 5 years ago

My phone is more powerful than my previous decade server.

If smartphones came with display ports, many more people could have a PC experience without having a PC.

shepardrtc 5 years ago

Modern DevOps and CICD should be able to abstract this away. Devs can develop on whatever they want and then push to ARM or x86 servers.

  • the8472 5 years ago

    That's assuming you're not using any libraries that contain hand-optimized SIMD where one platform is better supported than the other.

    • shepardrtc 5 years ago

      Well, if https://github.com/xianyi/OpenBLAS is any indication, you can just recompile the library for ARM. A proper pipeline would recompile everything on the appropriate platform without any developer intervention.
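
      Roughly, and going by the project's cross-compilation notes, that rebuild is close to a one-liner (toolchain names here assume the Debian/Ubuntu aarch64 cross packages):

          make TARGET=ARMV8                                                                      # native build on an ARMv8 box
          make TARGET=ARMV8 CC=aarch64-linux-gnu-gcc FC=aarch64-linux-gnu-gfortran HOSTCC=gcc    # cross-compile from x86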

acd 5 years ago

If developers switch to ARM/RISC in their laptops, then ARM will win. ARM is also power efficient, and there is global warming.

jammygit 5 years ago

Honest question: why is it not viable to run an ARM virtual machine for your development purposes - is it not accurate enough?

  • snarfy 5 years ago

    Not fast enough. Performance is king during development.

bitwize 5 years ago

Linus will have major egg on his face when devs are issued 7nm ARM Macs that blow anything x86-based out of the water.

mbrodersen 5 years ago

Linus used to believe that his former employer would defeat Intel. He was wrong. And now he is wrong again.

crb002 5 years ago

Translated from Linus speak: ARM servers will gain adoption if there are more ARM laptops.

known 5 years ago

Please note that Linus is talking about SERVERS, not mobiles

magoon 5 years ago

His argument doesn’t hold up to iOS and Android development.

hyperman1 5 years ago

Does someone know why ARM servers try to behave like Intel servers?

ARM's forte is tons of lightweight cores, compared to Intel's small number of powerful cores. There are tons of small mom-and-pop websites that have short bursts of usage, followed by long stretches of calm. So why not give every site on a big server its own small core, say for 60 seconds? After that you shut the core down, zapping all caches etc...

Things like Meltdown and Spectre simply disappear if the attacking code runs on a different core, so such an architecture would be good at isolating security-sensitive data. In these GDPR times, it seems a good match for small sites, and one that Intel can't provide with their good but power-hungry cores.

qwerty456127 5 years ago

Ok. Perhaps ARM won't. But what about RISC-V? Aren't RISC-V laptops and servers going to start conquering the markets soon?

  • lunchables 5 years ago

    I really encourage you to read his entire post. There's a critical part of it where he says:

    > And the only way that changes is if you end up saying "look, you can deploy more cheaply on an ARM box, and here's the development box you can do your work on".

    So sure, if RISC-V laptops become inexpensive and accessible for developers so they can develop and target the same platforms, absolutely, it _could_ take over.

    • qwerty456127 5 years ago

      I've read it. And as far as I know, RISC-V systems are meant to be inexpensive.

titzer 5 years ago

> Some people think that "the cloud" means that the instruction set doesn't matter. Develop at home, deploy in the cloud.

> That's bullshit. If you develop on x86, then you're going to want to deploy on x86, because you'll be able to run what you test "at home" (and by "at home" I don't mean literally in your home, but in your work environment).

This is why having a compilation target that works the same way everywhere would be so valuable. We're some ways away from this, but I think WebAssembly offers hope here.
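
As a small illustration of that idea, a sketch assuming the Emscripten toolchain and Node.js are installed (file names are made up):

    emcc -O2 hello.c -o hello.js   # emits hello.js plus hello.wasm
    node hello.js                  # the same wasm module runs unchanged on an x86 laptop or an ARM server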

As for the rest of the article, it'd be best if we could have a discussion style where people don't preemptively paint people who disagree with them, or who have a different perspective, strategy, or end goal, as "idiotic" and "stupid", or posit that people have a "total disconnect to reality." This is known as poisoning the well. It does not advance the discussion. In fact it is specifically designed to limit and stop the discussion. The reality is that it simply makes people angry and polarized, amps up the stakes, and ultimately leads to a toxic culture.

Thankfully, I think HN has a better culture. Linus can do better, IMO.

  • jeremyjh 5 years ago

    > This is why having a compilation target that works the same way everywhere would be so valuable. We're some ways away from this, but I think WebAssembly offers hope here.

    Why does WebAssembly offer more hope than the JVM, or interpreted languages that we already have? It will still have to interop with native libraries to get work done.

    • titzer 5 years ago

      It's lower level than the JVM and more of an abstraction over hardware, so native libraries can be compiled directly to it and everything can interop on top of Wasm.

      • rat9988 5 years ago

        >>more of an abstraction over hardware

        Same can be said for jvm.

        • titzer 5 years ago

          The jvm is explicitly designed to run the Java language. It has Java's type system baked into the bytecode. It is not an abstraction over hardware. (it doesn't even have unsigned comparisons :()

  • porpoisely 5 years ago

    I disagree. Let passionate people on all sides have their say. I think it's more toxic to stifle passionate people.

    It's amazing how everyone from Steve Jobs to Bill Gates to Linus Torvalds is labeled as "toxic" and yet the "toxic" environment they created led to substantial advancements.

    And poisoning the well ( or any ad hominem derivatives ) doesn't stop discussion, it generally leads to more discussions - though often times more contentious and off topic. And though I agree that it can make people angry, polarized and amp up the stakes, those aren't necessarily bad things. Most of the time, it is actually a good thing and a basis for competition.

    Finally, I'd say HN has a different culture, not necessarily better. Also, what you are doing could be viewed as a form of shaming and virtue signaling. And at the end of the day, if you don't like linus's style of communication, you don't have to read or listen to it.

    I don't understand the mentality of "I don't like it so you should change".

    • titzer 5 years ago

      > It's amazing how everyone from Steve Jobs to Bill Gates to Linus Torvalds is labeled as "toxic"

      To be clear, there is a difference between toxic people and toxic behaviors. The former, I think, doesn't exist. There are people who often engage in toxic behavior, and those who do rarely. Pointing out toxic behavior is the first step to correcting it. And correcting it is in fact the goal of community guidelines, in order to establish a more inclusive culture.

      > Let passionate people on all sides have their say. I think it's more toxic to stifle passionate people.

      Let's be clear about what "toxic" means, and not let it degenerate to "loud and I don't like it." Toxic means that it actively damages open discussion, drives people away, and kills off conversations. It is the same sense as a toxic substance; kills.

      > And poisoning the well ( or any ad hominem derivatives ) doesn't stop discussion, it generally leads to more discussions

      Ok this is manifestly untrue. Please read (https://en.wikipedia.org/wiki/Poisoning_the_well). Poisoning the well is a form of preemptive insult to discourage opponents from taking a position by making it seem toxic. The fallacy is aptly named.

      > what you are doing could be viewed as a form of shaming

      Yep! I am happy to shame toxic behavior when it is clear.

      > and virtue signaling

      Maybe. I'd be happy to do it anonymously if you would prefer.

      > And at the end of the day, if you don't like Linus's style of communication, you don't have to read or listen to it.

      This is the very definition of suppressing conversation: sending people away who don't feel like putting up with insults. It's counterproductive and unnecessary.

      > I don't understand the mentality of "I don't like it so you should change".

      This isn't some he-said she-said situation. I am pointing out direct, unprovoked insults designed to stifle discussion and establish a particular point of view. As I have made abundantly clear, I find this completely unnecessary, and I pointed it out because I really think we can stop doing this if we're just consistent about it. We'll have better discussions with more diverse viewpoints, not just hotheads shouting at each other.

      • porpoisely 5 years ago

        When we are talking about toxic, I think we all understood it meant behavior. And the point of community guidelines isn't to make the community more inclusive; it's actually to make it more exclusive. Limiting speech and thought is an exclusive behavior, not an inclusive one. But I'm all for HN or any other platform having guidelines.

        I studied philosophy in college, so I don't need an explanation of what poisoning the well is. Accusing someone of "poisoning the well" is itself poisoning the well, and I don't want to get into the intricacies of ad hominems and logical fallacies. Many times, people misunderstand logical fallacies and use them themselves to stifle debate.

        Also, Linus wasn't having an argument or a debate. He was giving his opinion. He is allowed to say someone's argument is stupid: "how stupid your argument is." He didn't call people stupid; he called the argument stupid.

        Finally, ad hominems may or may not stifle discussion from the passionless or from people who don't care about the topic, but they never stifle discussion from passionate people or people who care about a topic. Every major debate, going back to religious debates or debates about science or slavery or civil rights or anything else, was "passionate". Can you imagine those debates being shut down because that's not what "polite company discusses"?

        And why would it matter whether you virtue signal anonymously or not? You are already anonymous, as HN is thankfully an anonymous forum. One thing HN is fairly good about (as far as I know) is anonymity.

        I don't believe in ad hominems or attacking people. But if people want to use harsh language to express ideas they are passionate about, I say go for it. The same goes for you. You seem passionate about the subject and I support your right to express it in whatever manner you choose. What I find ironic is that under the aegis of "inclusivity and encouraging discussion", you are advocating for exclusion and stifling Linus Torvalds' speech. But as they say, the road to hell is paved with good intentions. Certainly, you can see that you are doing precisely what you claim Torvalds is doing - stifling speech (or at least advocating for it).

        • titzer 5 years ago

          Well, we are going to disagree about this, so I won't drag this on forever. However I would like to be clear about a couple things.

          > And the point of community guidelines isn't to make the community more inclusive; it's actually to make it more exclusive.

          Again, mixing up behaviors and people. The whole point of community guidelines is to serve the community--to include and support a wide range of people, not a wide range of behaviors. Certain behaviors just flat out drive people away--that is the very definition of exclusiveness. Being "inclusive" of these toxic behaviors leads to an exclusive culture. The worst, most exclusive cultures are the ones without guidelines, full of bad behavior. Inclusiveness requires curation of behavioral guidelines. Let's not invert the sense of words when it's convenient for argumentation (e.g. calling community guidelines and behavioral standards "exclusiveness" because they discourage one type of bad behavior but encourage hundreds of other good ones).

          > What I find ironic is that under the aegis of "inclusivity and encouraging discussion", you are advocating for exclusion and stifling Linus Torvalds' speech.

          This seems to be the crux of the issue. First, it's an exaggeration to say that advocating against using insults and inflammatory language is "stifling" (see above). I actually want Linus to speak his mind--just do so without the anger channel. It's really fucking annoying to some people. Even calling ideas stupid is really fucking annoying to the people who have those ideas. But the worst part is, for every Linus there is, there are dozens, maybe hundreds of people who are going to read something like that and just silently leave. That's a sign of a bad culture. That's toxic right there. And those people who leave are the meek ones who normally wouldn't speak up because they don't want to get the firehose and spotlight pointed right at them. They don't want their ideas called stupid or idiotic or to be told they aren't dealing with reality. Those kinds of people can actually be very bright and have very different (and valuable!) perspectives. They're the kind of people who just disappear and you never notice. And I've met plenty of people like this--if you've ever managed people, you find out that this or that person is leaving, and it's because they actually really didn't like being around this group. It's a loss. Most people just don't notice, but their community gets a little worse each time that happens. So you gotta find soft ways to stop it.

          At any rate, generally your comments can be construed as a defense of people who really don't need any defense. That makes it even worse when the community explicitly stands up and defends loud, obnoxious, unnecessary behavior and lionizes these "hotheads". (To be clear, I am not suggesting you are explicitly doing that; it just has that ring to it.) Trust me, hotheads need no defense. They need no coddling or encouragement to keep mouthing off. Many hotheads will stick around and annihilate a community, perhaps unconsciously, because it works. They win. So don't defend them. Defending bad behavior is a death spiral, as it sends exactly the wrong message about inclusion, and that's doubly bad.

          • porpoisely 5 years ago

            It's semantics to say community guidelines serve to support a wide range of people and not behaviors. It has the same effect. It will drive away people who do not fit your definition of "good" behavior. It's no different than homophobic policies that said we don't discriminate against gays, we discriminate against "bad behavior". And certain behaviors can drive away people, but censorship definitely does.

            And I have to disagree with you about the worst communities. The worst communities are those with too many guidelines. Of course some guidelines are necessary, but mostly those involving harassment, not speech. The US is based on the idea of fewer guidelines; North Korea, China, Russia, etc. are based on "lots of guidelines". And the death spiral can go both ways, just as much toward totalitarianism as toward anarchy.

            Also, it's not an exaggeration to say that advocating against "insults and inflammatory language" is stifling; it's the definition of it. Anything can be considered insulting to anyone. That's why we have a principled understanding of free speech. The basis of free speech is that you have the right to offend. Otherwise, your claiming that the earth is round is offensive and inflammatory to a flat earther and grounds for censorship.

            You seem to think that just because I think someone should be allowed to say "an argument is stupid", I'm advocating for anarchy or harassment. I'm not. Also, "insults" aren't that insulting to everyone. Language that you find offensive isn't offensive to me. And I don't consider Linus's language to be offensive. But you do. And that's the point, isn't it? Everyone has a different upbringing and different opinions.

            Also, I'd stay away from the term "bad behavior", because that's the same terminology the Chinese government uses to crack down on its own citizens and oppress them. It's rather paternalistic and authoritarian, which reminds people of the worst form of nanny state.

            And one last thing: why do you care how Linus speaks? He is his own individual. Does he come to you and tell you how you should speak? I just find it the height of arrogance that you (a relative nobody, just like me) have the gall to tell someone like Linus how he should speak or behave. If you don't like it, just don't read what he says. That's what I find frustrating. Why do you feel like you get to tell others how to live their lives?

            At the end of the day, the people at HN can do what they want because it's their property. Regular users like you and me won't change anything, just like we aren't going to change Linus or the platform he uses to express his opinions. I wish I could have changed your mind, but I think I failed, so I'll just end it here too. I just find it strange that anyone on Hacker News would be demanding that Linus Torvalds, or anyone for that matter, be censored.

    • rat9988 5 years ago

      Let passionate people be passionate, but there is no reason to accept immature and toxic behavior. I won't answer the rest, because I believe your opinion is stupid :)

sonnyblarney 5 years ago

With serverless, language VMs, and other abstractions, I suggest this argument is going away.

People are not generally writing platform-specific anything these days; that would be the exception, not the rule.

That said, there are a lot of things that need to be done under the hood to facilitate 'our view on the world on ARM' ... and it might take time.

Once AWS starts to release a bunch of stuff on ARM - and the stacks we love and need are solid on them ... I suggest that could be the ARM wave that changes serverland.

glenrivard 5 years ago

The services in the cloud will be what moves to ARM and eventually RISC-V.

So things like Spanner or AWS Aurora.

Everything changes when you have huge cloud providers that have skin in the game to lower cost.

That's why you see things like TPUs already. We have barely even gotten started with the big cloud providers creating their own chips to drive down cost.

I do hope RISC-V continues to come along. It does have the momentum right now to get there.

microcolonel 5 years ago

While it's not strictly the case, I'll tend to agree that I have a big productivity advantage by sharing an architecture between my workstation and my server.

I want a RISC-V workstation, and if the counterpart on the server side existed, that'd probably be the change I'd make (if any).

blackflame7000 5 years ago

I always thought the Raspberry Pi was the exact device Linus claims doesn't exist.

Proven 5 years ago

And last time he thought Transmeta was a good idea.

YUMad 5 years ago

The Age of Linus ended last year. I am not taking him seriously after the SJW debacle.

He refused to switch to GPLv3 back in the day because 'it would be impossible to contact all contributors and get their permission', but last year he suddenly didn't need the same.

fersab 5 years ago

Man, he STILL sounds like an angry teenager...

  • rat9988 5 years ago

    He doesn't use any insults, but still doesn't know how to answer without being aggressive.