maddyboo 5 years ago

Just yesterday, I was trying to explain to my partner (who isn't a programmer) why I think open source software and hardware are so important. My argument is that if not enough core components of the industry-standard tech stack are open source, companies who develop solutions become more likely to restrict user freedom.

For example, Apple has been able to own nearly their entire iDevice stack from manufacturing to silicon to firmware to OS to ecosystem. They have very little incentive to interoperate with external organizations via open standards because they own many of the pieces which need to be interoperable. Thus, they can force users and developers to use their tooling, the applications they approve of, dictate what code will run on their platform, and how easily you can inspect, modify, or repair their products.

This is all to say, it is easy to imagine a future where all performance-competitive technology is built entirely upon proprietary, locked-down stacks, and where the complexity and specificity are such that independent people simply cannot create competitive solutions. It could be back to the days of the mainframe, but worse, where only the corporations who create the technology will have access to the knowledge and tools to experiment and innovate on a competitive scale.

Amazon wants developers to build solutions entirely on top of their platform using closed source core components. They also want to control the silicon their own platform runs on. In 10 years, what else will they own, how much will this affect the freedom offered by competitors, and what impact will it all have on our freedom to build cool shit?

  • PostOnce 5 years ago

    Dozens of small companies (ranging from a couple of people to a couple dozen) rely on me to decide and direct their tech stacks; I always suggest something other than Amazon even where Amazon is suggested first by them; the vast majority defer to my judgement.

    I think I'm not only doing them a service by avoiding lock-in to AWS specific "stuff", but also our industry and society at large by maintaining software diversity and openness. Often they also save a nontrivial amount of money by not using AWS (RDS has been particularly egregious).

    Just making suggestions (rather than demands or whatever) is very powerful, particularly if they respect you and there is some solid logic behind your suggestion.

    • greyskull 5 years ago

      I started my professional career in AWS so I have zero experience in non-cloud businesses (i.e the vast majority of software jobs), but I always wonder about the cost argument people make against AWS (or any major cloud provider).

      It's very easy to overspend, sure. But suppose we're considering how well you execute in terms of picking the right tools for your case and using the right cost-saving measures. In the optimal-to-average case, does AWS really lose out when you weigh the cost against the human time needed to match the feature set you get, e.g. hardware and software maintenance, fault tolerance, scaling, compliance, security, integrations, etc.? I'm asking out of ignorance here.

      I don't mean to say that every business needs a large feature set (the vast majority probably do not), but there's value in not having to do all of these things on your own. Or even having the freedom to go make an arbitrarily complex application, or expand what you already have, with relatively low barrier to entry.

      • rjzzleep 5 years ago

        You can rent a 16-32 core machine with 64GB of RAM for less than a hundred bucks a month on Hetzner. You can run SmartOS, Proxmox, danubecloud or whatever other solution there, and for some of them you should be able to use Terraform.

        There is a little extra upfront cost to set these things up, but once you have it, it's fine. You can do your ZFS snapshots to Hetzner's backup infrastructure.
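
        A minimal sketch of what that snapshot shipping could look like, assuming an off-site backup host you can reach over plain SSH (the dataset, host, and paths below are made-up placeholders):

            #!/usr/bin/env python3
            """Hedged sketch: snapshot a ZFS dataset and ship the stream to an
            off-site backup host over SSH. Dataset, host, and paths are placeholders."""
            import subprocess
            from datetime import datetime, timezone

            DATASET = "tank/vms"                              # hypothetical dataset
            BACKUP_HOST = "u12345@u12345.backup.example.com"  # hypothetical backup host
            BACKUP_DIR = "/backups/zfs"                       # hypothetical remote dir

            def backup():
                snap = f"{DATASET}@{datetime.now(timezone.utc):%Y%m%d-%H%M%S}"
                # 1. Take the snapshot locally.
                subprocess.run(["zfs", "snapshot", snap], check=True)
                # 2. Stream it to the backup host as a file (a full send here;
                #    incremental sends with `zfs send -i` are the obvious next step).
                remote_file = f"{BACKUP_DIR}/{snap.replace('/', '_')}.zfs"
                send = subprocess.Popen(["zfs", "send", snap], stdout=subprocess.PIPE)
                subprocess.run(["ssh", BACKUP_HOST, f"cat > {remote_file}"],
                               stdin=send.stdout, check=True)
                send.stdout.close()
                if send.wait() != 0:
                    raise RuntimeError("zfs send failed")

            if __name__ == "__main__":
                backup()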

        A lot of people try to throw all their stuff on Kubernetes; for that you can buy a few machines and provision them with Ansible.

        AWS devops engineers aren't exactly the cheapest engineers either, so there are a couple of solutions here, most of which are cheaper than their cloud alternatives.

        Contrary to the parent I actually don't recommend going that route, mostly because I want to deliver a project and then have them be able to manage it on their own. E.g. an ECS cluster is simple to manage and maintain IMHO. I do tell them about the options that they have and let them decide, though.

        • topkai22 5 years ago

          I do most of my work in large enterprises and there is definitely a point when you hit a complexity cliff managing your own systems. No one system causes this (major systems can often support a dedicated staff), but it looks like death by a thousand cuts when the HR interview system goes down after 7 years and no one is certain how to deal with its VMs/hardware/networking.

        • zjaffee 5 years ago

          I have to say, you're completely overlooking why AWS/GCP/Azure are good services in your answer. The value add from using these services is that you don't have to know how to run any of these things; building an SRE/devops/sysadmin organization is non-trivial for a team of developers with a background in application development. Spending an extra 100k a year on AWS is likely cheaper than the cost of building out a team to do all of these things for you, including as things scale.

          If you're going to have a team of people deploying all of your own middleware software anyways on VMs, that completely defeats the cost savings from using cloud these days. It's much more of a PaaS business than a pure IaaS one.

          • arbie 5 years ago

            I eagerly await CaaS (Cloud as a Service) so VPC, EC2 and EFS planning and deployment is automated.

        • castlecrasher2 5 years ago

          I also haven't done baremetal server management, but my first concern with your first line is: what do you do when your services expand, your dev team grows, and the org needs more capabilities like log management, data warehousing, analytics, or whatever else AWS offers?

          I'll admit this isn't always true (looking at you, Neptune), but I find AWS' key strength is the huge community, so most of its services have 1) decent documentation and 2) many users to refer to for technical questions, as well as for filling out roles.

      • zaarn 5 years ago

        AWS has a few pain points in the cost department; traffic especially is insanely expensive compared to other providers. Hetzner costs about $1 per TB of traffic. EC2 costs $88 for that amount of outgoing data, CloudFront $80, etc. That's not even accounting for the fact that Hetzner doesn't bill incoming and internal traffic, which AWS does.

        There is no way that the little work I need to put up a network on a Hetzner host is worth $80 per terabyte of traffic.

        I have regular traffic usage of between 800GB and 1.4TB each month; AWS would easily double my monthly bill on that single item alone.

        When you need a lot of CPU and RAM, AWS starts to get very expensive too. I have 128GB of RAM with 12 vCPUs and 2TB of storage for about $55/month; AWS has no comparable offering at anywhere near the same price. That is even including the hours I have to spend sysadmin'ing this specific box.
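
        A quick back-of-the-envelope check on the traffic numbers (these are the per-TB prices quoted in this thread, not necessarily current list prices):

            # Rough check using the per-TB prices quoted above (illustrative only).
            aws_per_tb = 88.0      # $ per TB of EC2 egress, as quoted
            hetzner_per_tb = 1.0   # $ per TB on Hetzner, as quoted

            for tb in (0.8, 1.4):  # the 800GB-1.4TB/month range mentioned above
                print(f"{tb:.1f} TB/month: AWS ~${tb * aws_per_tb:.0f}, "
                      f"Hetzner ~${tb * hetzner_per_tb:.2f}, "
                      f"ratio {aws_per_tb / hetzner_per_tb:.0f}x")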

        • SonicSoul 5 years ago

          This is a very interesting dynamic to me. Amazon seems to compete on price with EVERYONE on everything else, yet when it comes to AWS they’re so much more expensive than competition? Is this because their infrastructure is so much more expensive to run at 0 margins?

          • PedroBatista 5 years ago

            Their competition is mostly "old" guys who can easily and "cheaply" create and manage their own infrastructure.

            In other words, when they "educate" young people about "The Cloud" and how it is "The Best Way, the Only Way", they win, because after a few years people get used to AWS as a fact of life and young professionals don't know how to administer a Linux box anymore.

          • 013a 5 years ago

            Compare Amazon's (at least network) pricing to Google Cloud or Azure and it's roughly competitive.

            The scale of the networks these companies are building is unlike any of the novice-level stuff companies like Hetzner deploy. As one example, in one region AWS has deployed 5 Pbps of inter-AZ fiber. They build all of their own SDN switches/routers, all the way down to the chips now, for maximum performance and configuration with the AWS API. And don't forget, they maintain a GLOBAL private network; companies like Hetzner or DigitalOcean just peer to the internet to connect their "regions".

            I'll keep saying this on HN until people start listening: If you buy AWS then complain about price, you might as well go buy a Porsche then complain about the price as well. They're the best. Being the best is expensive.

            • zaarn 5 years ago

              But do you need AWS' custom "GLOBAL" private network? I bet most would be happy to set up a cheap network tunnel between DCs if needed. There are plenty of existing tools to do all that for free (other than the sysadmin work, and I bet in a lot of cases even then the bill comes out in favor of that).

              I'm not buying a Porsche and complaining about the price; I'm buying a car and complaining that it requires me to drive on roads made by the manufacturer, that none of the parts are properly standardized, and that some things are expensive for no good reason.

            • ernsheong 5 years ago

              GCP is a real contender for potentially less money. It’s better than AWS in some if not many verticals. So I wouldn’t say AWS is “the best” (definitely not in the UI and user-friendliness department)

              • zjaffee 5 years ago

                GCP is not nearly as good from my personal experience when it comes to support and reliability of services at scale. There are certainly some instances where GCP is easier to use, or they offer certain niche services that aren't exactly the same with AWS. Additionally given how close the costs are to begin with, it would very likely depend on your workloads as to which service was cheaper.

                Overall I'd say Azure is the only real competition to AWS as a product (where they are often the most willing to negotiate a big deal), but Google's open source efforts potentially pose a risk.

          • zaarn 5 years ago

            AWS still offers a very integrated experience and lots of people will simply pick AWS because of the brand name. Most people hosting SaaS or similar on the web don't actually need that many resources, so for 98% of startups, I'd bet that AWS isn't the largest entry on their monthly running costs.

        • StreamBright 5 years ago

          This is great, but how about adding reliability, availability, redundancy, SLAs and a few things more? How about needing a data warehouse, load balancers, SOC compliance? Not all businesses go for the cheapest option, and these decisions are rarely single-dimensional. AWS is an integrated solution for every aspect of an enterprise's IT needs. If you compare a tiny slice of it in a single-dimension comparison you can beat AWS, but once you get the full picture it gets harder, to the point where you can't really beat it.

          • zaarn 5 years ago

            Availability over this year has been about 99.9% for the network, if I need more I can book a server in a different DC and do failovers. There is an SLA you can optionally book.

            I don't require a data warehouse, load balancers (beyond HAProxy on the same host) or SOC Compliance at the moment, if I need those, I can build them.

            Not all businesses go for the cheapest option, but on the flip side, if Amazon costs 80x what other providers charge, the business will probably just find two sysadmins and contract them for the work.

            Even if you need something bigger, hardware load balancers are available; they're fairly cheap compared to AWS offerings and come with a built-in firewall. Incoming traffic remains free on these. Your ISP will probably peer much more cheaply than AWS or Hetzner if you're bringing enough money to justify it.

            I know several corporations that run their entire IT outside of AWS; some of them use in-house infrastructure and others scatter their usage across several cloud providers. AWS would likely increase their operating cost by 100x.

            • StreamBright 5 years ago

              I guess you just downplay the amount of money you spend on developing your own solutions. What does a load balancer do on the same host anyway?

              Where is this 80x-100x coming from? It would be impossible to create a 100 times cheaper solution because it would mean there is no margin left for the hosting provider.

              • zaarn 5 years ago

                Load balancers are useful for more than balancing load; you can use them to distribute traffic to different endpoints. They're fairly good at it.

                The 80x is on traffic costs alone; I've detailed that above: $80 for 1TB of traffic on AWS (not including incoming traffic costs) versus $1 for 1TB of traffic on Hetzner, where incoming traffic is free.

                A 16-core/64GB instance (m5.4xlarge) on AWS costs $360 monthly, a tiny bit cheaper if you pay upfront. I pay $50 a month for 16 cores/128GB/2TB. The 2TB of storage would have to be paid for extra on AWS. The m5a.4xlarge is a tad cheaper at $320 monthly, again not counting storage and bandwidth costs. I get double the RAM and lots of storage for less than 30% of the cost.

                So on traffic it's 80x cheaper, and the instance is barely 30% of the cost of AWS, and that's not counting the storage costs, which are very high on AWS compared to other providers (OVH, B2). And that only gets cheaper once you buy volume.

                Of course the number 80-100x doesn't apply to me personally since I'm running a fairly low-scale operation but this all starts to stack up once you go large. A colo is even cheaper than any of these options since you only pay for power used and for hardware once.
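
                To make the instance comparison concrete, using the monthly prices quoted above (they're the figures from this thread and will have drifted since):

                    # Sanity check of the instance comparison above, using the monthly
                    # prices quoted in this thread (illustrative, not current list prices).
                    hetzner_dedicated = 50.0                # $/month, 16 cores / 128GB / 2TB
                    aws_instances = {"m5.4xlarge": 360.0,   # $/month, 16 vCPU / 64GB, as quoted
                                     "m5a.4xlarge": 320.0}  # $/month, as quoted

                    for name, price in aws_instances.items():
                        print(f"The Hetzner box costs {hetzner_dedicated / price:.0%} of an {name} "
                              f"(before AWS storage and egress charges)")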

                • e12e 5 years ago

                  To be fair, you should probably account for one or two mirror servers on Hetzner for easier, lower-latency failover in case of hardware failure (assuming you're talking about dedicated servers, not Hetzner cloud). Of course with e.g. three servers, you might load balance across most of them during normal load; just make sure to have enough capacity left over to run with N fewer servers while you spin up a replacement and/or a disk is replaced, etc.

                  Just for a more apples-to-apples comparison.

                  I guess the reason people don't "see" the insane premium clouds place on bandwidth is that bandwidth scales up with (presumably paying) customers. So as long as you're not streaming 4k video... you're happy to let the cloud eat out of the bit of your profits that "scales up".

                  • jasonlotito 5 years ago

                    We are heavily invested in AWS. AWS is not cheaper, but that doesn't mean it's prohibitively expensive when you consider everything it offers. What people tend to ignore are those other things. For example, the parent talks about data transfer costs, but that's just one aspect of cost.

                    The real big cost in any organization is head count. And while a load balancer is not difficult to set up and maintain the first time, managing it becomes time-consuming in a large enough organization. Couple that with everything else...

                    If someone can replicate what AWS is doing at a lower cost, people would move to them. But there are few companies out there that come close. Bandwidth cost is generally not your biggest expense.

        • bluedonuts 5 years ago

          > That's not even accounting for the fact that Hetzner doesn't bill incoming and internal traffic, which AWS does.

          This is incorrect. AWS does not charge for incoming traffic nor does it charge for internal traffic within the same zone.

          • zaarn 5 years ago

            In an old project I had 2.8TB of transfer per month between DCs (i.e. zones in AWS). The hoster provided that service for free; AWS bills about $20 for it.

            Incoming traffic is charged on a few AWS services; not EC2, but some do charge for it.

            That doesn't really change the point though; AWS networking is orders of magnitude more expensive than competitors, and they bill for things that are accepted as part of the service elsewhere.

        • aarbor989 5 years ago

          Not disagreeing with your points, but internal traffic is free within the same AZ on AWS.

          • zaarn 5 years ago

            Yeah, but I don't know of any projects on AWS that would need internal traffic and wouldn't also need instances across AZs/regions.

      • marcosdumay 5 years ago

        For large datacenters, excepting labor, on-premises costs are usually an order of magnitude or two lower than Amazon's, so you can ignore all the details. Specifically on labor, computers don't need that much upkeep (unless you depend on flaky software, but then Amazon won't help you anyway), so it's a matter of having enough of them to justify hiring 2 people. If you do, you are better off keeping them on premises. Besides, adapting your software for Amazon isn't free either.

        For small datacenters, there are plenty of VPS, server rentals and colos around with prices an order of magnitude lower than Amazon. The labor cost of setting those up is comparable to the cost of adapting your software for Amazon, with the added benefit that you aren't locked in.

        AWS has a few unique value propositions. If you require stuff very different from the usual (e.g. a lot of disk with little CPU and RAM), it will let you personalize your system like no other place. If you have a very low mean resource usage with rare, incredibly large peaks, the scaling becomes interesting. But it is simply not a good choice for 90% of the folks out there. To go through your list: fault tolerance through Amazon is not that great, it needs a lot of setup (adapting your software) and adds enough complexity that I doubt it increases reliability; I don't know about compliance; about security, I only see problems; and integration is what AWS systems do worst, since network traffic is so expensive.

        • acdha 5 years ago

          > For large datacenters, excepting labor, on-premises costs are usually an order of magnitude or two lower than Amazon's, so you can ignore all the details.

          That's too big a caveat to breeze by without a huge justification. If you look at the full cost, that 1-2 orders of magnitude goes negative for many places, and the margin is much smaller for almost everyone else unless you have some combination of very high usage to amortize your staff costs over and/or the ability to change the problem (e.g. beating S3 by keeping fewer copies or using cold storage, having simpler or slower ops or security, etc.).

          It can be done but I’ve seen far more people achieve it through incomplete comparisons than actual measured TCO, especially when you consider the first outage where they’re down for a week longer because they learned something important about how data centers can fail.

      • oblio 5 years ago

        If you have a simple business or website, AWS is not cost-efficient compared to old-school shared hosting, and it's not even more convenient. Those hosts have slightly better control panels than the AWS Console, and for crying out loud, AWS doesn't even offer automatic backups for their VMs as part of their offering.

        • hrez 5 years ago

          They do now. See Snapshot Lifecycle Policy.
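
          For reference, a rough sketch of what setting one up could look like via boto3's Data Lifecycle Manager client (the account ID, role, tag, and schedule below are placeholders):

              # Hedged sketch: daily EBS snapshots of tagged volumes via DLM.
              # The role ARN and tag values are placeholders, not real resources.
              import boto3

              dlm = boto3.client("dlm")
              dlm.create_lifecycle_policy(
                  ExecutionRoleArn="arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole",
                  Description="Daily snapshots of volumes tagged Backup=true",
                  State="ENABLED",
                  PolicyDetails={
                      "ResourceTypes": ["VOLUME"],
                      "TargetTags": [{"Key": "Backup", "Value": "true"}],
                      "Schedules": [{
                          "Name": "DailySnapshots",
                          "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS",
                                         "Times": ["03:00"]},
                          "RetainRule": {"Count": 7},
                      }],
                  },
              )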

    • nova22033 5 years ago

      >Dozens of small companies (ranging from a couple of people to a couple dozen) rely on me to decide and direct their tech stacks; I always suggest something other than Amazon even where Amazon is suggested first by them; the vast majority defer to my judgement.

      so what do you recommend instead of Amazon?

      • foxhop 5 years ago

        I recommend Joyent (acquired by Samsung).

        They open source all their software (and make it easy to install and use). Their hypervisor operating system (SmartOS) rocks so hard. They also released Manta (S3 alternative object store).

        If Joyent ever changed direction, or shut down, I have an unlocked path forward with a community to help me continue on my own hardware. I can build my own cloud if push comes to shove.

      • 3pt14159 5 years ago

        I recommend DigitalOcean. Enough of the stuff you need, without the annoyances that come with dealing with AWS.

        That said, when meltdown / spectre was announced AWS was already patched. The rest of us on hipster VPS providers had to cross our fingers. One of the main things I dislike about per-minute billing for servers is that it's too easy for bad actors to cycle through if there is some sort of side channel attack.

        I remember when I first heard of the concept of VPSes. My initial reaction was "seems risky" but I was young and lots of older, smarter people assured me it was ok. Now I've grown used to them and all the luxury goodness they provide and I don't want to go back. It's like car ownership and finding out that all that cycling around town before you were old enough to get a license was what kept you looking fit.

        • jammygit 5 years ago

          The DO privacy policies and terms of service have some confusing clauses or missing promises. Other than that, DO is really nice to use

      • akudha 5 years ago

        I too am interested in this: alternatives, and the reasons for them. It isn't easy convincing suits; nobody is gonna get fired for suggesting AWS, so unless they are presented with a strong alternative, it isn't easy convincing managers not to go with AWS.

      • ernsheong 5 years ago

        GCP, Azure... I personally really like GCP, it’s optimized for simplicity and thus developer happiness.

    • dokalanyi 5 years ago

      What alternatives to AWS could save a non-trivial amount of money? Especially for RDS? I think we're getting to the point where servers cost quite a bit.

      • snaky 5 years ago

        https://www.hetzner.com/dedicated-rootserver/px61-nvme

        Subtract VAT from the price, add an HDD for the WAL, and set up PostgreSQL on a couple of them.

        • m_mueller 5 years ago

          Don't understand why you're being downvoted. Install Kubernetes and make your own cloud. It really depends on your workload whether that's better than managed cloud services with elastic scaling:

          * your software has a startup time that takes too long - cannot scale down easily

          * you have a constant base load with only moderate peaks

          * you'd rather run other background tasks at times with low load than scaling down - this way you get a bunch of "free" compute power you can use for other things

          • tasubotadas 5 years ago

            Installing Kubernetes is something you can only suggest if you haven't done it yourself. Getting it right is extremely non-trivial.

            • lmilcin 5 years ago

              Having set up a couple of clusters myself, professionally, I can say the setup is extremely easy relative to the functionality you are getting, compared to what it would take by traditional means (traditional = pre-cloud tech).

              What is complicated is that this absolutely does not absolve you from having to understand everything that is happening under the hood. If you feel Kubernetes replaces that requirement, you are doomed the first time a non-trivial issue happens.

            • m_mueller 5 years ago

              Could you share some insights on what you think is "extremely non-trivial"? In what way is Kubernetes harder than what's to be expected of a technology that orchestrates server-side software? Doesn't this depend more on the actual services you want to run than on Kubernetes itself? Obviously it won't reduce the complexity of what you want to run, but it makes deploying it pretty straightforward as far as I can tell.
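
              For what it's worth, here is roughly what a deployment looks like once a cluster exists, using the official Python client (the names and image are arbitrary examples):

                  # Rough illustration: once a cluster is up, a deployment is a short
                  # declarative description. Names and image below are arbitrary.
                  from kubernetes import client, config

                  config.load_kube_config()  # uses your local kubeconfig

                  labels = {"app": "hello-web"}
                  deployment = client.V1Deployment(
                      metadata=client.V1ObjectMeta(name="hello-web"),
                      spec=client.V1DeploymentSpec(
                          replicas=3,
                          selector=client.V1LabelSelector(match_labels=labels),
                          template=client.V1PodTemplateSpec(
                              metadata=client.V1ObjectMeta(labels=labels),
                              spec=client.V1PodSpec(containers=[
                                  client.V1Container(
                                      name="web",
                                      image="nginx:1.17",
                                      ports=[client.V1ContainerPort(container_port=80)],
                                  )
                              ]),
                          ),
                      ),
                  )
                  client.AppsV1Api().create_namespaced_deployment(namespace="default",
                                                                  body=deployment)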

        • etaioinshrdlu 5 years ago

          I have so far found Hetzner Dedicated servers to resist most forms of "infrastructure as code".

          I can't seem to even be able to restore one from an image...

          This applies only to dedicated ... but to me that's the interesting part of Hetzner.

          I might just use Ansible to provision Hetzner servers that I set up manually initially. The cost savings are there to make it worth the hassle.

          Wish there was a cleaner approach.

          • danmaz74 5 years ago

            I wouldn't suggest using that for the main DB, but for almost everything else Docker works perfectly fine on Hetzner.

            • etaioinshrdlu 5 years ago

              Right, but I'm concerned with the initial provisioning of the OS on which docker runs. It's not nothing even if the apps run in docker.

              • danmaz74 5 years ago

                In most cases, you can just provision the OS from a web interface. If you get a bigger machine where you need to configure the disks yourself, there is a pretty easy-to-use command-line tool they built which does most of the work.

                The only part I had to learn the hard way was configuring iptables to secure my servers against external attacks. Luckily, recent versions of Docker make it easy to keep iptables configured; at the beginning, that was almost a nightmare...

      • ksec 5 years ago

        I am hoping DO will up their game on that. [1] I am not sure how big the market is for only VMs + DBaaS, because from my limited scope that is like 90% of what I need. For the other 10% I am happy to have DNS, transactional email, registrar, and CDN all under different companies to avoid putting all my eggs in one basket. I do wish there were a UI to integrate all of it, though. (Yes, I know that is exactly like Heroku, but I am cheap; Heroku is already much more expensive than AWS, and I often wonder what if Heroku ran on top of DO.)

        [1] https://try.digitalocean.com/dbaas-beta/

      • ori_b 5 years ago

        > Especially for RDS?

        Run your own servers, instead of paying Amazon to do your system administration. At small sizes, Amazon is a cheap sysadmin. At scale, paying them as sysadmins is expensive.

        • StreamBright 5 years ago

          I just moved one of the largest (5+PB) data warehouses in Europe to AWS, and we saved 35% of a huge (1M+/year) budget while increasing reliability, availability and security. I am not sure why people think that AWS is expensive. Running on a traditional hosting provider was a nightmare, with constant downtimes because of issues outside of our control: cooling, networking, security, you name it. AWS makes these non-existent or very easy to tackle. Networking, for example: there are several teams of network engineers on call at AWS to handle routing issues etc., and you just get an email about it. With the previous vendor, we found out about the issue ourselves, tried to reach the vendor, had to convince them that there was an issue, and it took them 2 days to recover.

          I am planning to move more enterprise clients to AWS to save significant money on IT. AWS is most definitely a competitive option for this.

          • pritambaral 5 years ago

            You seem to be talking about EC2, or at least services available with EC2. Your parent was talking about RDS and sysadmins who manage Postgres on RDS.

        • mjlee 5 years ago

          Even at quite large sizes, RDS is a fantastic product. I think I'd opt to use it even with full time Database Administrators on staff.

      • ComputerGuru 5 years ago

        Azure is both comparable to and cheaper than AWS (and subjectively better).

        • chopin 5 years ago

          But that's trading lock-in to AWS for lock-in to Microsoft.

          • roman030 5 years ago

            Still kind of diversifying

        • nostrebored 5 years ago

          I've literally never heard this from anyone who's used both at an enterprise level. Personally, I recommended Azure rather than AWS a few years ago for a Docker-based microservice app, but the biggest drawback was the cost. What exactly are you running that's cheaper?

      • rbanffy 5 years ago

        The Google cloud solutions are very competitive in price with the AWS ones.

        • mijamo 5 years ago

          The lock-in is the same though.

          We are having A LOT of trouble due to lock-in with Google Cloud (particularly GAE and Datastore, but also Pub/Sub) and it is really not fun at all...

          • rbanffy 5 years ago

            You can always design around that. I know I'm mostly locked in when I deploy a classic GAE application or when I use the datastore (there is AppScale, if you need).

            If what you are running is a virtualized version of your physical DC infrastructure, you can probably deploy it anywhere with very little trouble.

        • agopaul 5 years ago

          Google doesn't seem to be serious about their managed DBMS solutions. The PostgreSQL version is still 9.6 even though 11 has been released, whereas AWS already has pg 11 available.

      • ernsheong 5 years ago

        Consider if most data in RDS should be in a data warehouse instead.

        Google’s BigQuery can help offset costs. Not to mention GCP is now a strong contender to AWS’s offerings.

  • simonh 5 years ago

    All of what you say is true and I completely agree it's important to have open solutions out there. What vertical solutions offer though is accountability.

    If there is a security issue in the iPhone everybody knows who owns the problem. If there are scammy apps in the iOS App Store - and there actually are - everyone knows who has the power and responsibility to clean that up.

    • metildaa 5 years ago

      Open source isn't a solution to the problem you bring up; proper maintenance processes are.

      A great example of this is the Debian package archive: with 51,000 packages in the archive, you know each package has a maintainer who has vetted it, and they will maintain it for the rest of the release (usually 2 or 3 years) even if its developers wander off or disappear.

      Key to this is the Debian Social Contract, which defines what is acceptable, and when maintainers should start ripping out malicious anti-features or reject malicious updates from the upstream project: https://www.debian.org/social_contract

      Comparatively, PyPI & npm are unmaintained dumping grounds of sketchy software, like copying code sight unseen off StackOverflow, but with the added risk of every update potentially being malicious.

      The lack of separate, objective maintainers for these package archives has caused a plethora of issues, from packages randomly disappearing, anti-features being added, to malicious code being embedded. This is a cultural issue around managing packages that the Free Software world mostly solved decades ago, yet Open Source communities like nodejs can't figure out these basic processes that prevent bad shit from happening to packages in their archive.

      • jacobush 5 years ago

        I agree with the specific examples and symptoms you cite, and yet I can't really see the dividing line being Open Source vs Free Software.

        Say, OpenBSD is thoroughly vetted and ("soft, permissive") Open Source. They also have a social contract. (Perhaps not in writing as much as in culture, I am not very familiar with the BSDs either technically or socially but I know the software is very well vetted and accounted for.) Or maybe I misunderstood the distinction you make.

    • amelius 5 years ago

      In my experience, if a company owns the entire supply chain then they are usually a (semi-)monopoly and don't care much about the actual problems you have, unless those also affect a large percentage of their customer base.

    • sametmax 5 years ago

      And how did that work out? Apple is still doing whatever they want, shutting down competition and stealing ideas, but never paying any consequences for their mistakes.

      So how is that any good for us?

  • ryacko 5 years ago

    Performance-competitive technology requires paying for its development. Electronics manufacturing is inherently heavy industry; nothing short of mandating FPGAs would tilt it any other way.

    The world we live in is built upon a series of choices, the most important of which is whether you take a higher-paying job or build what you desire.

    There will always be a lag between the latest product and, as in the case of Linux vs Unix, reverse engineering and developing a compatible product.

    • rbanffy 5 years ago

      > nothing short of mandating FPGAs would tilt it any other way.

      I can think of multiple alternatives that could be taken in isolation or combined.

      - Regulating the market and splitting foundries from their IP developers (this may not even be necessary at this point)

      - Funding development of a rich set of public-domain IP that could be used to build a common standard platform

      - Directing all government purchases to such public-domain standards

      - If necessary, funding capacity so that the foundry market remains competitive even in small runs.

      • pjc50 5 years ago

        US government (and other major purchaser) "second sourcing" regulations are a longstanding attempt to keep a competitive market in situations where it would otherwise collapse into a monopoly. That's how Intel came to share x86 with AMD in the first place.

        • rbanffy 5 years ago

          Yes, but I'm not sure it's still as influential as it was in the 80's and 90's

          • jplayer01 5 years ago

            I'm not sure what you mean. Without it, we wouldn't have AMD now, and the x86 CPU market would be utterly non-competitive and look entirely different than it does. Despite AMD not being competitive on performance in the past ~10ish years, they were still an alternative if Intel ever really dropped the ball or went off the rails with exploitative business practices. Without that counterweight, there would've been no limit to what Intel could've done.

            More needs to be done about proprietary silos like Apple, but second sourcing has, largely, done its job.

      • ryacko 5 years ago

        The nature of competition requires an advantage of some kind, or advancement won't be very fast. The limit of what I think the free market would allow is the government splitting funding for research and for production, and letting any contractor fulfill production orders without paying for licenses.

        It seems more likely that splitting foundries would be successful, with some sort of anti-trust mechanism to prevent a foundry and an IP developer from having too much favoritism (or advantage from scale).

  • reikonomusha 5 years ago

    For Apple, I find almost the opposite in terms of forced development. If I want to write a program on macOS, I can expect the porting effort to Linux to be simple if not trivial, thanks to UNIX.

    Compare that to Windows, which ostensibly exerts less control, but which I continue to find a massive pain to develop for.

    With that said, if macOS lost UNIX, I’d be done.

    • Meai 5 years ago

      You are speaking from inside the walled garden here; from the outside, the effort of porting to macOS is monumental. The naive way is: I have to buy hardware, learn new Apple-specific languages, learn a new OS, learn a new IDE, be compliant with their app store policies, distribution, etc. (which applies to Windows now too, I suppose).

      • reikonomusha 5 years ago

        I suppose I’m an outlier then. I just use normal UNIX tools. No Xcode, no IDE’s, no proprietary toolchains. Maybe my programs are too boring. :)

        There are some portability quirks for sure, though.

        • xnyan 5 years ago

          If you don't mind me asking: if you're not developing for iOS and don't need Xcode for App Store access, why are you on a Mac?

        • Koshkin 5 years ago

          Why not Linux, then? Why pay "the Apple tax"?

          • aoeusnth1 5 years ago

            (not the OP). I develop linux services on a Mac, connected remotely to a linux workstation. My company pays the Apple tax, and the hardware is nicer. Plus, my friends use iMessage.

            I tried using a company Linux device only to find that its graphics drivers weren't supported, the 4k scaling didn't work nicely without spending ~30 minutes looking up how to do it the hard way, and it didn't play nicely when connected to a normal 1080 monitor. I returned the device after (thankfully) it failed.

      • dkarras 5 years ago

        Except for "I have to buy hardware", that's... just not true. Even buying the hardware can be circumvented but let's stay strictly legal here.

        As OP said, it's UNIX, or at least extremely UNIX-y, so you don't have to learn Apple-specific languages. You can choose to do so for better integration with their system, vision, and UI idioms (and dare I say, native macOS applications are the BEST thought-out applications you'll ever have the pleasure of using, in terms of user experience). But whatever code you run on Linux will be trivially ported to macOS.

        Learning a new OS is... I mean it is a new thing to learn but it isn't like you didn't learn the OS you are using at some point. Unless you think that there should be one and only one OS for the entire universe that everyone uses, this is an irrelevant point.

        For the IDE, again, you don't have to, but you can choose to learn. Most popular IDEs natively work on macOS just fine, and you can just use your terminal like in any unix system, use your build scripts etc.

        For macOS, you don't have to be compliant with anything; you can distribute your apps the old-fashioned way. If you want to be in their dedicated store, though, yeah, they have rules. I think that makes sense.

        For iOS, that store is the only way to distribute software, and your points would make a bit more sense there. But I actually am on the fence about the merit of the walled-garden approach of iOS. The Android ecosystem is a cesspool of malware. I can deal with malware on my computer; my computer has the resources, etc., and computers are my job. But my mobile phone IMO has to run trusted code, and I will happily delegate some standards to a central body as long as they manage it sufficiently well. When I install an app, I want to be sure that there are reasonable protections around what it can and can't access on my phone. I take my phone with me everywhere; it knows A LOT about me. I can't disassemble all the binaries I use to make sure they aren't doing anything shady. Apple has automated tests for this stuff. I know it can't be bulletproof, but it is something. They are fast to respond to exploits and they manage to keep their platform secure.

        If a "free for all" device was popular, I'd still manage. I'd just have to be EXTRA EXTRA paranoid about what I install in my phone and it would decrease my productivity and quality of life quite a bit. That I can manage. But the whole ecosystem would be A LOT less secure. Not many people would exercise discipline. Viruses, malware, rootkits everywhere, billions of people carrying them in their pockets everywhere. My inclination right now is that that would be a worse deal right now for the world. While Centralisation and corporate power is something I generally despise, in this case (mobile OS, walled garden, corporate control over what apps can and can't do on their platform), I think can see the merit as long as they don't majorly screw it up.

    • adrianN 5 years ago

      Apple refuses to update their command line tools because of the license. The bash I have here is more than ten years old. It's only a matter of time before things diverge enough to make porting a major pain.

      • giancarlostoro 5 years ago

        I don't understand this; they already share some of the source to several OS components. Do you care to elaborate a bit on this? Is there some GPL clause or something that became too risky for Apple? How do BSD projects get by with the newer licenses? Aside from already being open source, it doesn't seem to affect their underlying license...

        • teddyh 5 years ago

          More details here:

          http://meta.ath0.com/2012/02/05/apples-great-gpl-purge/

          […]

          Anyway, the message is pretty obvious: Apple won’t ship anything that’s licensed under GPL v3 on OS X. Now, why is that?

          There are two big changes in GPL v3. The first is that it explicitly prohibits patent lawsuits against people for actually using the GPL-licensed software you ship. The second is that it carefully prevents TiVoization, locking down hardware so that people can’t actually run the software they want.

          So, which of those things are they planning for OS X, eh?

          I’m also intrigued to see how far they are prepared to go with this. They already annoyed and inconvenienced a lot of people with the Samba and GCC removal. Having wooed so many developers to the Mac in the last decade, are they really prepared to throw away all that goodwill by shipping obsolete tools and making it a pain in the ass to upgrade them?

        • sanxiyn 5 years ago

          Allegedly, Apple hates GPLv3.

          • sjwright 5 years ago

            And it’s worth remembering: Linus Torvalds hates GPLv3 as well.

            Meanwhile, updating gnu tools is pretty straightforward with homebrew, so the consequences of this view are minimal—for me, so far.

            • belorn 5 years ago

              Linus Torvalds has said in multiple talks that he does not hate GPLv3. He thinks limiting DRM (the TiVo case) in the license is something he does not want for the kernel, and he strongly disliked the mechanism where GPLv2 automatically upgrades to GPLv3 via "or any later version", since his view is that GPLv3 and GPLv2 are two very different licenses because of the DRM clause. During the DebConf talk, he said that describing GPLv3 as being similar to GPLv2 was immoral.

              Here is a direct quote "I actually think version 3 is a fine license" - Linus Torvalds

              The FSF views DRM as just being a physical version of a legal restriction, and they often cite laws that make it illegal to bypass DRM as proof that DRM is also a legal restriction. Thus in GPLv3 they treat the two as identical, and they don't see it as a change in how the GPL license works. Linus strongly disagrees with this.

              This is very different from the Apple case and I doubt anyone can find similar quotes from them.

          • Someone 5 years ago

            Their lawyers likely strongly advised against bundling GPLv3 software with their OS because there is a non-zero risk that some judge, somewhere, some day, will claim that requires them to release the source of all their software under GPLv3.

            I think that, if GPLv3 ever gets sufficiently tested in courts all around the world (which is highly unlikely) that stance could change.

            • jononor 5 years ago

              Why would GPLv3 be a substantially different risk than GPLv2?

              • rbanffy 5 years ago

                GPLv3 has anti-TiVoization and patent protection built in.

                • jononor 5 years ago

                  Yes, but that doesn't make it more likely that anyone would have to open source their apps or OS?

                  • rbanffy 5 years ago

                    No, but those fears weren't rational to begin with.

          • StreamBright 5 years ago

            Not only Apple but almost every enterprise.

          • tzakrajs 5 years ago

            You would hate it too if you had to open source the entirety of your OS or application because a GPLv3 COPYING file was floating about in there.

            • oblio 5 years ago

              When they included Bash and other GNU tools in their OS to draw developers to their platform, while said platform was dying, they didn't hate that...

      • akulbe 5 years ago

        I'm not an expert, so forgive me if I'm way off base here… but isn't this exactly the need that homebrew fills?

        • adrianN 5 years ago

          That's similar to saying that Cygwin or MSYS or MinGW or WSL is filling that need on Windows.

          • rbanffy 5 years ago

            The main difference is that, since Windows has no native Unix environment, all these tools are added functionality, while on the Mac they override built-in functionality and can make things go a bit weird.

            • pjmlp 5 years ago

              Just like it doesn't have a native Win32 environment either; rather, there are different personalities built on top of a common layer.

              The UNIX environment on Windows is just like how IBM and Unisys mainframes deal with it.

              • Koshkin 5 years ago

                Indeed. (Except that "WSL" should be read as "Windows Substitute for Linux.")

                • pjmlp 5 years ago

                  I don't remember FreeBSD or illumos calling their syscall compatibility layer for Linux Substitute.

                  • Koshkin 5 years ago

                    Nor were they called “services for Linux.”

                    • pjmlp 5 years ago

                      Nor are they called now, rather Subsystem.

                      I don't care what names marketing departments come up with.

                      • Boulth 5 years ago

                        Actually "for Linux" came from the legal department as Linux is a trademark and it's a common practice to indicate relationship with "for X" where X is a trademark (e.g. "Y for Twitter" instead for "Twitter Y" that would suggest close relationship, from the same company).

                        • pjmlp 5 years ago

                          Thanks for the clarification.

                • rhinoceraptor 5 years ago

                  It's more like a "Windows Subsystem for Linux Applications"

          • PeterisP 5 years ago

            The only problem with Cygwin and MSYS and MinGW was that they filled that need poorly.

            You could say that WSL is filling that need on Windows now; WSL certainly does remove some of the earlier obstacles/objections that made one want to avoid Windows and use Linux for development.

    • amelius 5 years ago

      > For Apple, I find almost the opposite in terms of forced development. If I want to write a program on macOS, I can expect the porting effort to Linux to be simple if not trivial, thanks to UNIX.

      Reminds me of the days when I had to buy Windows to test if my website worked on IE.

      In the case of Apple, I have to buy the hardware too!

    • pjmlp 5 years ago

      It's a historical accident of rebooting Copland with NeXTSTEP; UNIX wasn't even relevant for NeXT beyond bringing developers into the NeXTSTEP world.

      The real Apple developer culture is around Objective-C and Swift tooling, alongside the OS frameworks, none of which are related to UNIX in any form.

      In case you haven't been paying attention, the new network stack isn't even based on POSIX sockets.

    • Koshkin 5 years ago

      Yes, you can write a UNIX program that will run on macOS. You can even use Autotools and the X Window API. On the other hand, a typical macOS application is not "a UNIX program."

  • DavidNielsen 5 years ago

    In fairness to Apple, they are these days quite the open source company. Their main language is fully open source in every sense of the word. You can get large parts of their base OS under an acceptable license, as well as their stack, much of which is developed in the open or open sourced as Apple is able to. Of course they will keep back OSS code if it means making a big splash at a presentation, but I don't think that is a bad thing as such; who would deny them a bit of theatre?

    They also do a lot of tech blogging on their open source code; the Safari and WebKit teams especially have some excellent posts regularly.

    Sure they have proprietary magic in there but it is not as big a part of the pie as people imagine, and certainly less than in years past.

    The same is true for Microsoft, who now famously aims to be the biggest Open Source company in the world (having acquired GitHub, Xamarin and many others, as well as made partners and friends out of enemies of the past, to help them on that journey).

    In fact I can’t think of a single major company in the business which hasn’t embraced Open Source to some degree. I don’t think that the Apple strategy of controlling the entire experience means what it used to anymore. It doesn’t mean locking you in to just one way of doing things, we now know that we need the compiler, tools and stack to be available and truly free, as a bare minimum for this to work and experience shows that the more work we share, in general, the better an experience we can present to users and developer.

    Open Source has won, all these companies taking on designing their own chips, datacenters, OSes, languages and so on, they would not be possible without that commonly shared mass of work.

    Famously, FaceTime was supposed to be an open standard, until someone threatened to sue them for damages to the tune of X times infinity, which is a fairly large dollar amount for any given value of X.

    • rvp-x 5 years ago

      Neither of those are open companies. Stop listening to their marketing departments.

  • petra 5 years ago

    >> and what impact will it all have on our freedom to build cool shit?

    Considering how much the cloud and its many services have improved our ability to build stuff, and will further improve it, I don't think that our ability to build cool stuff will be more limited than before the cloud. Quite the opposite.

    But we'll need to pay more money to Amazon.

  • jillesvangurp 5 years ago

    Amazon is not the only game in town. They are competing not on cost but on feature set. However, for them, running their own chips is mainly a way to optimize cost. Open source hardware is going to be a key enabler for them. Right now both Apple and Amazon are still using ARM-based processors, which are not open sourced but are very common. The whole point of that is that it allows them to leverage open source compiler toolchains, open source kernels, etc. Replicating that stuff internally as a proprietary me-too style implementation is stupendously expensive. Neither Amazon nor Apple does that.

    Instead they roll their own chips optimized for their own use case. As open source chipsets based on e.g. RISC-V become more popular, tool support for them will become more popular, and they will become a natural choice for building custom hardware. Breaking apart the near-monopoly that Intel has had on this since the nineteen seventies is a good thing IMHO. Having a not-so-benevolent dictator (Intel) that has arguably been asleep at the wheel for a while now is slowing everybody down. This is what is driving people to do their own chips: they think they can do better.

    The flip side is that companies building their own custom chipsets need to maintain interoperability. If they diverge too much from what the rest of the world is doing, they risk isolating themselves at great cost because it makes integrating upstream changes harder and it requires modifying standard tool chains and maintaining those modifications. Creating your own chip is one thing. Forking, e.g. llvm or the linux kernel is another thing. You need some damn good reasons to do that and opt out of all the work others are doing continually on that. Some people do of course (e.g. Google forked the linux kernel a few years back) but it buys them a lot of hassle mostly and not a whole lot of differentiation. They seem to be gradually trying to get back to running main line kernels now.

    If Amazon, Apple, MS, Google, Nvidia, etc. each start doing their own chip designs, they'll either create a lot of work for themselves reinventing a lot of wheels or they get smart about collaborating on designs, code, and standardized tool chains. My guess is the latter is already happening and is exactly what enables this to begin with. Standard tool chains, open chip designs, an open market of chip manufacturers, etc. are what is enabling this. Embedded development was locked up in completely proprietary tool and hardware stacks for decades. That is now changing. You are describing the past few decades not the future.

  • baybal2 5 years ago

    > They have very little incentive to interoperate with external organizations via open standards because they own many of the pieces which need to be interoperable. Thus, they can force users and developers to use their tooling, the applications they approve of, dictate what code will run on their platform, and how easily you can inspect, modify, or repair their products.

    I myself am wondering why this is not yet the case.

    Say, TSMC opening up a "privilege tier" only to those companies willing to make their chips with DRM that checks executable signatures against their keys, with TSMC only issuing signatures for a non-insignificant amount of money.

    • pjc50 5 years ago

      TSMC are ""just"" a factory, they're not the ones with enough market power to do that and it would really annoy their customers. It's more people like Apple we're talking about here.

      • jsjohnst 5 years ago

        s/factory/foundry

        That said, I think saying they are “only in manufacturing physical chips” is grossly minimizing what TSMC brings to the table for a major chip designer.

  • zjaffee 5 years ago

    I have to disagree here, because breaking up vertically integrated monopolies is something governments have done before. Entire components being open source certainly reduces the barrier for further closed source components, but we can develop open-standards regulations that require certain middleware to be open and interoperable, to prevent monopolization.

  • teekert 5 years ago

    The positive view would be that the market always has enough demand for open source software and open hardware because, as you say, a closed, controlled market would make it impossibly hard to start as a new small player, and that would kill innovation. The coming year will be important and interesting in this regard.

  • godelmachine 5 years ago

    The thing you are trying to speak in favor of is Android.

    Google itself has admitted that Android is a mess, by virtue of the fact that they open sourced it in the first place.

    Good thing Apple kept everything proprietary. There's at least one technology stack I can place my trust in.

    • dexen 5 years ago

      The thing you are trying to speak in favor of is Apple Macintosh, with its proprietary OS long since replaced by POSIX-based and Mach-based MacOS X.

      We had beautiful gaming consoles, beautiful Macintoshes, and IBM mainframes, and beautiful Burroughs and Symbolics' Lisp Machines, and the Commodore C=64s.

      And yet here we are, the open ecosystem has won again, as it naturally does, based on the costs structure and information propagation.

      The consoles are PCs now. The mainframe of yesteryear is now a datacenter built from souped-up PCs. The mobile phones, at first dominated by proprietary OSes, are tilting ever more towards desktop-grade OSes. And the C=64s are dead and the demoscene is delivering the eulogy.

      • pjmlp 5 years ago

        MacOS being replaced by a POSIX-based and Mach-based OS is a historical accident.

        Had Steve Jobs been somewhere else, or had Jean-Louis Gassée won his bid, there wouldn't be a POSIX-based and Mach-based Mac OS X to talk about.

        Linux is losing to MIT-licensed OSes in the embedded space, where OEMs can have their cake and eat it too in what concerns FOSS.

        If Android does get replaced by Fuchsia, you will see how much open source you effectively get from mobile handset OEMs.

        • godelmachine 5 years ago

          >>If Android does get replaced by Fuchsia, you will see how much open source you effectively get from mobile handset OEMs.

          Right now, only Apple has the ability to control OS version releases uniformly across all of its devices.

    • jeswin 5 years ago

        > Google itself has admitted that Android is a mess, by virtue of the fact that they open sourced it in the first place.

      Can you provide a source for this claim?

  • hellisothers 5 years ago

    Apple isn’t forcing anybody to do anything. Not only do you have other options, they’re not even the biggest fish and they’re not trying to be.

  • hawski 5 years ago

    It means that it's time to break up those companies with antitrust laws. Apple, Google and Amazon should each be broken up into many separate companies.

  • ramijames 5 years ago

    I help run a blockchain tech company (https://get-scatter.com/) that is creating an OAuth-like stack that lets regular users access decentralized systems. A lot of the philosophy from FOSS is thriving in those communities, and for the same reasons: the web has become incredibly centralized. For many people it is entirely limited to access via the walled gardens of app stores and social platforms. Those of us who grew up in the 80s and 90s remember the web as a very different place and long for it. For those in the East, especially China, there is a very real problem of privacy and access which decentralized systems help alleviate.

    I'm all for FOSS in every way. I wish more people were. We are collectively painting ourselves into a corner by allowing these enormous corporations access to every detail of our lives. Computing should make us more free, not less.

40acres 5 years ago

Mike Tyson said: "Everyone has a plan until they get punched in the face." When it comes to semiconductors I'd say: "Everyone wants to make their own chips until they have to do so at scale". (Doesn't roll off the tongue as well!)

There is definitely a threat from Apple, Amazon, Google and especially China that puts Intel's market share in their sights, but making chips at scale is incredibly difficult. It's hard to see Amazon transitioning their AWS machines to Amazon-built chips, but if they demonstrate competence they'll certainly be able to squeeze more out of Intel.

  • jimbokun 5 years ago

    But these companies don't really make their own chips at scale, they just make their own chip designs, then contract out to a fab to actually manufacture them.

    And Apple is already at an incredible scale, considering every iOS device currently made is running on Apple designed chips.

  • Nokinside 5 years ago

    Intel is a microarchitecture + fab corporation. They do it all.

    1. TSMC (also GlobalFoundries) is fab only. They design the node for the process and the way to fabricate it.

    2. Then ARM joins with TSMC to develop the high-performance microarchitecture for their processor designs on TSMC's process.

    3. Then ARM licenses the microarchitecture designed for the new processes to Amazon, Apple, Qualcomm, who develop their own variants. Most of the processor microarchitecture is the same for the same manufacturing process.

    As a result, costs are shared to a large degree. Intel may still have some scale advantages from its integrated approach, but not as much as you might think.

    • dnautics 5 years ago

      My personal suspicion is that the integrated approach can eventually be a liability. If you have an integrated process/design house, process can count on design to work around its shortcomings and failures. By contrast, if you are process only, and multiple firms make designs for the process, you have to make your process robust, which means that your process staff is ready and has good practices down when it's time to shrink.

      ^^ Note that this is entirely baseless speculation.

      • pcnix 5 years ago

        What you speculate is actually happening to some extent. Intel's designs work around their fabrication quirks in order to achieve their performance, and this makes Intel unable to easily separate out their fabrication business to take up external contracts, or to change designs easily in order to use external fabricators.

      • Nokinside 5 years ago

        Intel has always been a process first, design second company. The company was founded by physicists and chemists. Their process has always been the best in the world until just recently. Intel brings in or buys design talent when needed, but their R&D in process technology is their strongest suit even today.

        • stcredzero 5 years ago

          Intel has always been a process first, design second company. The company was founded by physicists and chemists. Their process has always been the best in the world until just recently.

          So they had a particular advantage, and exploited the heck out of it, but now the potency of that advantage is largely gone?

          • n-gatedotcom 5 years ago

            I don't know what country you're in, but in cricket there's a concept of innings and scoring runs. There's this dude who averaged nearly 100 in every innings; most others average 50.

            Now think of the situation as him scoring a few knots. Is he old and retiring? Or is this just a slump in form? Nobody knows!

            I worked for a design team and we were proud of our material engineers.

            • stcredzero 5 years ago

              Back in about 1996, most of the profs were going on about how x86 would crumble under the weight of the ISA, and RISC was the future. One of my profs knew people at Intel, and talked of a roadmap they had for kicking butt for the next dozen years. Turns out, the road map was more or less right.

              Is there more roadmap?

              • tw04 5 years ago

                There's just no way that's true. Their roadmap in 1996 was moving everyone to ia64/itanium. That was an unmitigated disaster and they were forced to license x64 from AMD.

                If it weren't for their illegal activity (threats/bribes to partners) to stifle AMD's market penetration, the market would likely look very different today.

                • antod 5 years ago

                  > There's just no way that's true. Their roadmap in 1996 was moving everyone to ia64/itanium. That was an unmitigated disaster and they were forced to license x64 from AMD.

                  Yup, and their x86 backup plan (Netburst scaling all the way to 10GHz) was a dead end too.

                  • gpderetta 5 years ago

                    But their plan C (reviving the Pentium III architecture) worked perfectly.

                    We will have to see if they have a plan C now (plan B being yet another iteration of the Lake architecture with few changes).

                    • jplayer01 5 years ago

                      Their plan C was a complete fluke and only came together because Intel's Israeli team managed to put out Centrino. I don't think such a fluke is possible now that we're at the limits of process design and everything takes tens of billions of dollars and half a decade of lead time to implement.

                      • gpderetta 5 years ago

                        Having multiple competent design teams working on potentially competing products all the time is one of the strengths of Intel, I wouldn't call it a fluke.

                        Things do look dire right now, I agree.

                    • bogomipz 5 years ago

                      I'm not that up on Intel at the moment. Why are they stuck on more iterations of the Lake architecture with little changes?

                      What was the plan "A"?

                    • m_mueller 5 years ago

                      Doesn't it look like they're shifting to chiplets as well at the moment? Copying AMD might be their plan C, but it won't help if AMD can steam ahead with TSMC 7nm while Intel is locked to 14nm for a couple of years. That's going to hurt a lot.

                      • pitaj 5 years ago

                        TSMC's 7nm and Intel's 14nm are about the same in actual dimensions on silicon IIRC. The names for the processes are mostly fluff.

                        • gpderetta 5 years ago

                          AFAIK, supposedly TSMC 7nm and Intel 10nm are about equivalent, but with 10nm being in limbo, TSMC is ahead now.

                          • m_mueller 5 years ago

                            That’s also how I understand it, which seems to be supported by perf/watt numbers of Apple’s 2018 chips.

                • pjmlp 5 years ago

                  Likewise, if AMD weren't a thing, maybe this laptop would be running Itanium instead.

              • hermitdev 5 years ago

                Intel chips are RISC under the hood these days (and have been for a long while - a decade or more). They're CISC at the ASM layer, before the instructions are decoded into micro-ops and dispatched.

                • mcbain 5 years ago

                  The idea that Intel is “RISC under the hood” is too simplistic.

                  Instructions get decoded and some end up looking like RISC instructions, but there is so much macro- and micro-op fusion going on, as well as reordering, etc, that it is nothing like a RISC machine.

                  (The whole argument is kind of pointless anyway.)

                  • rbanffy 5 years ago

                    With all the optimizations going on, high performance RISC designs don't look like RISC designs anymore either. The ISA has very little to do with whatever the execution units actually see or execute.

                    • rocqua 5 years ago

                      It is baffling to me that byte-code is essentially a 'high level language' these days.

                      • rbanffy 5 years ago

                        And yet, when I first approached C, it was considered a "high level" language.

                    • bogomipz 5 years ago

                      Because the functional units are utilizing microcode? Or do you mean something else?

                • Oreb 5 years ago

                  Possibly stupid questions from someone completely ignorant about hardware:

                  If they didn’t care about backwards compatibility, would it be possible for them to release versions of their CPUs with _only_ the microcode layer? If yes, would a sufficiently good compiler be able to generate faster code for such a platform than for x86?

                  Alternatively, could Intel theoretically implement a new, non-x86 ASM layer that would decode down to better optimized microcode?

                  • lmm 5 years ago

                    > If they didn’t care about backwards compatibility, would it be possible for them to release versions of their CPUs with _only_ the microcode layer?

                    The microcode is the CPU firmware that turns x86 (or whatever) instructions into micro-ops. In theory if you knew about all the internals of your CPU you could upload your own microcode that would run some custom ISA (which could be straight micro-ops I guess).

                    > If yes, would a sufficiently good compiler be able to generate faster code for such a platform than for x86?

                    The most important concern for modern code performance is cache efficiency, so an x86-style instruction set actually leads to better performance than a vanilla RISC-style one (the complex instructions act as a de facto compression mechanism) - compare ARM Thumb.

                    Instruction sets that are more efficient than x86 are certainly possible, especially if you allowed the compiler to come up with a custom instruction set for your particular program and microcode to implement it. (It'd be like a slightly more efficient version of how the high-performance K programming language works: interpreted, but by an interpreter designed to be small enough to fit into L1 cache). But we're talking about a small difference; existing processors are designed to implement x86 efficiently, and there's been a huge amount of compiler work put into producing efficient x86.
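
                    To make the "instructions as compression" idea a bit more concrete, here is a toy sketch (Python, with a made-up instruction stream and made-up macro-op names) that greedily fuses the most common adjacent pair of operations into a new macro-op, byte-pair-encoding style; a compiler inventing a custom per-program ISA would be doing something loosely analogous, just far more carefully:

                        from collections import Counter

                        def compress(stream, rounds=3):
                            """Greedily fuse the most common adjacent pair into a macro-op."""
                            macros = {}
                            for i in range(rounds):
                                pairs = Counter(zip(stream, stream[1:]))
                                if not pairs:
                                    break
                                (a, b), count = pairs.most_common(1)[0]
                                if count < 2:
                                    break
                                name = f"macro{i}"  # hypothetical fused instruction
                                macros[name] = (a, b)
                                out, j = [], 0
                                while j < len(stream):
                                    if j + 1 < len(stream) and (stream[j], stream[j + 1]) == (a, b):
                                        out.append(name)
                                        j += 2
                                    else:
                                        out.append(stream[j])
                                        j += 1
                                stream = out
                            return stream, macros

                        # A made-up hot loop: load, add, store, repeated.
                        program = ["load", "add", "store"] * 3
                        small, macros = compress(program)
                        print(len(program), "->", len(small), macros)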

                • earenndil 5 years ago

                  CISC is still an asset rather than a liability, though, as it means you can fit more code into cache.

                  • chroma 5 years ago

                    I don't think that's an advantage these days. The bottleneck seems to be decoding instructions, and that's easier to parallelize if instructions are fixed width. Case in point: The big cores on Apple's A11 and A12 SoCs can decode 7 instructions per cycle. Intel's Skylake can do 5. Intel CPUs also have μop caches because decoding x86 is so expensive.

                    • jabl 5 years ago

                      Maybe the golden middle path is compressed RISC instructions, e.g. the RISC-V C extension, where the most commonly used instructions take 16 bits and the full 32-bit instructions are still available. Density is apparently slightly better than x86-64, while being easier to decode.

                      (Yes, I'm aware there's no high-performance RISC-V core available (yet) comparable to x86-64 or POWER, or even the higher-end ARM ones.)
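
                      For reference, the length of a standard RISC-V instruction can be read straight off its low bits, which is what keeps the variable-length encoding cheap to decode. A minimal sketch, covering only the 16-bit/32-bit split that the C extension introduces (longer experimental encodings are ignored):

                          def rv_instruction_length(first_halfword: int) -> int:
                              """Byte length of a RISC-V instruction, from its first 16 bits.

                              Per the spec: if the lowest two bits are not 0b11, it is a 16-bit
                              compressed instruction; otherwise it is a standard 32-bit one.
                              """
                              return 2 if (first_halfword & 0b11) != 0b11 else 4

                          assert rv_instruction_length(0x4501) == 2   # c.li a0, 0 (compressed)
                          assert rv_instruction_length(0x0513) == 4   # low half of addi a0, x0, 0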

                    • gpderetta 5 years ago

                      Sure, but Intel's CISC instructions can do more, so in the end it's a wash.

                      • chroma 5 years ago

                        That's not the case. Only one of Skylake's decoders can translate complex x86 instructions. The other 4 are simple decoders, and can only transform a simple x86 instruction into a single µop. At most, Skylake's decoder can emit 5 µops per cycle.[1]

                        1. https://en.wikichip.org/wiki/intel/microarchitectures/skylak...

                        • jawnv6 5 years ago

                          ... so what? most code's hot and should be issued from the uop cache at 6uop/cl with "80%+ hit rate" from your source

                          you're really not making the case that "decode" is the bottleneck, are you unaware of the mitigations that x86 designs have taken to alleviate that? or are those mitigations your proof that the ISA's deficient

                      • Symmetry 5 years ago

                        That really isn't true in the modern world. x86 has things like load-op and large inline constants but ARM has things like load or store multiple, predication, and more registers. They tend to take about the same number of instructions per executable and about the same number of bytes per instruction.

                        If you're comparing to MIPS then sure, x86 is more efficient. And x86 instructions do more than RISC-V's, but most high-performance RISC-V uses instruction compression and front-end fusion for similar pipeline and cache usage.

                        • gpderetta 5 years ago

                          (Generalized) predication is not a thing in ARM64. Is Apple's CPU 7-wide even in 32-bit mode?

                          It is true, though, as chroma pointed out, that Intel can't decode load-op at full width.

                  • wtallis 5 years ago

                    You can fit more code into the same sized cache, but you also need an extra cache layer for the decoded µops, and a much more complicated fetch/decode/dispatch part of the pipeline. It clearly works, at least for the high per-core power levels that Intel targets, but it's not obvious whether it saves transistors or improves performance compared to having an instruction set that accurately reflects the true execution resources, and just increasing the L1i$ size. Ultimately, only one of the strategies is viable when you're trying to maintain binary compatibility across dozens of microarchitecture generations.

                    • gpderetta 5 years ago

                      The fact is that a post-decode cache is desirable even on a high-performance RISC design, as even there skipping fetch and decode is desirable for both performance and power usage.

                      IBM's POWER9, for example, has a predecode stage before L1.

                      You could say that, in general, RISCs can get away without extra complexity for a longer time while x86 must implement it early (this is also true, for example, for memory speculation due to the more restrictive Intel memory model, or for optimized hardware TLB walkers), but in the end it can be an advantage for x86 (a more mature implementation).

                  • Symmetry 5 years ago

                    In theory, yes. In practice x86-64, while it was the right solution for the market, isn't a very efficient encoding and doesn't fit any more code in cache than pragmatic RISC designs like ARM. It still beats more purist RISC designs like MIPS but not by as much as pure x86 did.

                    It would be easy to design a variable length encoding scheme that was self-synchronizing and played nicely with decoding multiple instructions per clock. But legacy compatibility means that that scheme will not be x86 based.

                    • bogomipz 5 years ago

                      >"It would be easy to design a variable length encoding scheme that was self-synchronizing and played nicely with decoding multiple instructions per clock."

                      How might a self-synchronizing encoding scheme work? How could a decoder be divorced from the clock pulse? I am intrigued by this idea.

                      • Symmetry 5 years ago

                        What I mean is self-synchronizing like UTF-8. For example, the first bit of a byte being 1 if it's the start of an instruction and 0 otherwise. Just enough to know where the instruction starts are without having to decode the instructions up to that point, and so that a jump to an address in the middle of an instruction can raise a fault. Checking the security of x86 executables can be hard sometimes because reading a string of instructions starting from address FOO will give you a stream of innocuous instructions, whereas reading starting at address FOO+1 will give you a different stream of instructions that does something malicious.
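
                        A rough sketch of that hypothetical scheme (a toy encoding, not any real ISA): set the top bit of a byte only on instruction starts, so boundaries can be found without decoding, and a jump into the middle of an instruction can be rejected:

                            def instruction_starts(code: bytes) -> list[int]:
                                """Offsets whose byte has the top bit set, i.e. legal instruction starts."""
                                return [i for i, b in enumerate(code) if b & 0x80]

                            def check_jump_target(code: bytes, target: int) -> None:
                                if not (code[target] & 0x80):
                                    raise ValueError(f"jump into the middle of an instruction at {target}")

                            # Toy stream: 0x8x bytes start an instruction, low bytes are operands.
                            code = bytes([0x81, 0x01, 0x02, 0x82, 0x83, 0x05])
                            print(instruction_starts(code))   # [0, 3, 4]
                            check_jump_target(code, 3)        # fine
                            try:
                                check_jump_target(code, 1)    # mid-instruction target
                            except ValueError as e:
                                print(e)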

                    • jawnv6 5 years ago

                      sure, so what's your 6 byte equivalent ARM for FXSAVE/FXRSTOR?

                  • n-gatedotcom 5 years ago

                    What is an example of a commonly used complex instruction that is "simplified"/DNE in RISC? (in asm, not binary)

                    • gpderetta 5 years ago

                      Load-op instructions do not normally exist on RISC, but are common on CISC.

                  • adrianN 5 years ago

                    You do have to worry about the µ-op cache nowadays.

              • Symmetry 5 years ago

                Those profs were still living in 1990, when the x86 tax was still a real issue. As cores get bigger the extra effort involved in handling the x86 ISA gets proportionally smaller. x86 has accumulated a lot of features over the years, and figuring out how, e.g., call gates interact with mis-speculated branches means an x86 design will take more engineering effort than an equivalent RISC design. But with Intel's huge volumes they can more than afford that extra effort.

                Of course Intel has traditionally always used their volume to be ahead in process technology and at the moment they seem to be slipping behind. So who knows.

                • bogomipz 5 years ago

                  >"As cores get bigger the extra effort involved in handling the x86 ISA gets proportionally smaller."

                  Can you elaborate on what you mean here? Do you mean as the number of cores gets bigger? Surely the size of the cores has been shrinking no?

                  >"Of course Intel has traditionally always used their volume to be ahead in process technology"

                  What's the correlation between larger volumes and quicker advances in process technology? Is it simply more cash to put back into R and D?

                  • Symmetry 5 years ago

                    When RISC was first introduced, its big advantage was that by reducing the number of instructions it could handle, the whole processor could fit onto a single chip, whereas CISC chips took multiple chips. In the modern day it takes a lot more transistors and power to decode 4 x86 instructions in one cycle than 4 RISC instructions, because you know the RISC instructions are going to start on bytes 0, 4, 8, and 12, whereas the x86 instructions could be starting on any bytes in the window. So you have to look at most of the bytes as if they could be an instruction start until later in the cycle, when you figure out whether they were or not. And any given bit in the instruction might be put to more possible uses, increasing the logical depth of the decoder.

                    But that complexity only goes up linearly with pipeline depth, in contrast to structures like the ROB that grow as the square of the depth. So it's not really a big deal. An ARM server is more likely to just slap 6 decoders on the front end because "why not?", whereas x86 processors will tend to limit themselves to 4, but that very rarely makes any sort of difference in normal code. The decode stage is just a small proportion of the overall transistor and power cost of a deeply pipelined out-of-order chip.

                    In, say, dual-issue in-order processors like an A53, the decode tax of x86 is actually an issue, and that's part of why you don't see BIG.little approaches in x86 land and why Atom did so poorly in the phone market.

                    For your second question, yes, spending more money means you can pursue more R&D and tend to bring up new process nodes more quickly. Being ahead means that your competitors can see which approaches worked out and which didn't, and so re-direct their research more profitably, for a rubber-band effect; plus you're all reliant on the same suppliers for input equipment, so a given advantage in expenditure tends to lead to a finite rather than ever-increasing lead.
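
                    A toy model of that decode asymmetry (not real Skylake or ARM decoder behaviour, just the shape of the problem): with a fixed 4-byte ISA the instruction starts in a 16-byte fetch window are known up front, while with a variable-length ISA every byte has to be treated as a possible start until the lengths resolve:

                        def fixed_width_starts(window_len: int = 16, width: int = 4) -> list[int]:
                            # Known immediately, independent of the bytes themselves.
                            return list(range(0, window_len, width))

                        def variable_width_starts(lengths: list[int]) -> list[int]:
                            """lengths[i] = decoded length assuming an instruction starts at byte i.
                            All entries have to be computed speculatively; only afterwards can the
                            real chain of starts be walked."""
                            starts, i = [], 0
                            while i < len(lengths):
                                starts.append(i)
                                i += lengths[i]
                            return starts

                        print(fixed_width_starts())   # [0, 4, 8, 12]
                        # 16 speculative length computations for only a handful of real starts:
                        print(variable_width_starts([1, 3, 2, 1, 4, 2, 3, 1, 2, 6, 1, 5, 2, 3, 1, 1]))
                        # -> [0, 1, 4, 8, 10, 11]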

                    • bogomipz 5 years ago

                      Thanks for the thorough detailed reply, I really appreciate it. I failed to grasp one thing you mentioned which is:

                      >"is actually an issue and that's part of why you don't see BIG.little approaches in x86 land and why atom did so poorly in the phone market."

                      Is BIG an acronym here? I had trouble understanding that sentence. Cheers.

                      • Symmetry 5 years ago

                        I was reproducing an ARM marketing term incorrectly.

                        https://en.wikipedia.org/wiki/ARM_big.LITTLE

                        Basically, the idea is that you have a number of small, low power cores together with a few larger, faster, but less efficient cores. Intel hasn't made anything doing that. Intel also tried to get x86 chips into phones but it didn't work out for them.

                        • bogomipz 5 years ago

                          Thanks, this is actually a good read and clever bit of marketing. Cheers.

            • oblio 5 years ago

              Your parallel is hard to follow for people who don't watch cricket. I have no idea how 100 or 50 "innings" relate to a few "knots". Are they like some sort of weird imperial measures? (furlongs vs fathoms?)

              • jholman 5 years ago

                I suspect that "knots" was supposed to be "noughts", a.k.a zeros. That is, the last few times the 100-point batsman was at bat, he got struck out without scoring any points. Is he washed up?

                I don't think it's a very useful analogy. :)

            • ctack 5 years ago

              Knots as in ducks?

      • bloomer 5 years ago

        It turns out that the effect is typically exactly the opposite. Design and process are already coupled where a given process will have design rules that must be adhered to to achieve a successfully manufacturable design. Intel only has to support their own designs so can have very strict design rules. Fabs like TSMC have to be more lenient in what they allow from their customers so have looser design rules that result in a less optimized process to achieve the same yield.

        • dnautics 5 years ago

          The speculation is exactly that what you describe is indeed a short term gain, but that the pressure of having to accommodate looser design rules nets a stronger process discipline which pays off in the long term as feature size shrinkage gets closer to physical limits.

    • bstx 5 years ago

      ARM architectural licensees develop their own microarchitectures that implement the ARM ISA spec; they do not license any particular microarchitecture from ARM (e.g. Cortex-A? IP cores). That includes Apple, Samsung, Nvidia and others.

      • ethbro 5 years ago

        But ARM actually has relatively few architectural licensees (~10 as of 2015).

        In reality, most of their licenses are processor (core+interfaces) or POP (pre-optimized processor designs).

        https://www.anandtech.com/show/7112/the-arm-diaries-part-1-h...

        • bogomipz 5 years ago

          Could you or someone else elaborate on the different type of licenses and why a company interested in licensing might opt for one over another? I was surprised by the OPs comment that few companies actually license microarchitecture as I thought that's what Apple has been doing with ARM.

          • evancox100 5 years ago

            It is what Apple has been doing with ARM, but as he said there's only about 10 companies doing this, compared to the hundreds (thousands?) who take the core directly from ARM. Even big players like Qualcomm seem to be moving to just requesting tweaks to the Cortex cores.

            It's much, much easier & cheaper to take the premade core rather than developing your own. But your own custom design gives you the ability to differentiate or really target a specific application. See Apple's designs.

            Read the Anandtech article, it goes into more detail on the license types. There's also the newer "Built on Cortex" license: https://www.anandtech.com/show/10366/arm-built-on-cortex-lic...

            • bogomipz 5 years ago

              Your link is exactly what I was looking for. Thanks!

    • bogomipz 5 years ago

      >"They design the node for the process and the way to fabricate it."

      What is a "node" in this context? I'm not familiar enough with fab terminology.

      • Kliment 5 years ago

        A node is a combination of manufacturing capabilities and design components that are manufacturable in that process. They're typically named after an arbitrary dimension of a silicon feature, for example 14nm or 10nm. Your higher level design options are dictated by what you can produce at that "node" (with those masking methods/transistor pitches/sizes/electrical and thermal properties).

        • rbanffy 5 years ago

          Would a pixel be a good analogy? It's the smallest thing you can make on your chip and that defines all the rest of your design.

          • kingosticks 5 years ago

            Only in the same way that even pixels of the same physical size can have other vastly different properties. And that makes ranking them purely on their size totally misguided. So I'm not convinced that really helps.

          • rocqua 5 years ago

            It's closer to the quality of a display than just how small your pixels are. It determines how large your display is before you get too many dead pixels (yield in fabrication). What range of colors your pixels can produce (electrical properties? resistance, leakage, etc.). Whether you can blast all pixels at full brightness, or only a few (thermal properties). And indeed, resolution of the display (size of transistors).

            What is missing from this analogy is the degree of layering / 3d structures that is possible. You might couple that to RGB v RGBY but I'm not really sure.

  • btian 5 years ago

    But TSMC has demonstrated many times that they can make chips at scale.

    I don't see why they would fail this time.

    • abfan1127 5 years ago

      TSMC can run the masks, but if the design is not sound, then it doesn't matter how good the transistors are. Power islands, clock domain crossings, proper DFT, DFM, etc. are all needed to get a good design.

  • whynotminot 5 years ago

    Do you realize how many devices Apple sells a year? I think they've figured out the scale thing ok.

  • product50 5 years ago

    This is what Intel supporters always say, right up until everyone builds those chips and there is no market left for Intel at all. It is just so sad that Intel, which had such a ferocious lead and was on the cutting edge of processor design/manufacturing, is now dying from a thousand cuts.

    Just look at the industry - everyone who is a major player in cloud, AI, or mobile (Apple, Huawei, & Samsung) is now in the chip business themselves. How will Intel grow? And where would this so-called scale advantage come in?

    Wake up and smell the coffee.

    • Brybry 5 years ago

      How is Intel dying? Losing a near monopoly is a far cry from dying.

      And Amazon's Graviton/armv8 chips aren't going to be competitive for many workloads. If you look up benchmarks you'll see they generally aren't competitive in terms of performance[1].

      They'll only be competitive in terms of cost (and, generally, not even performance/cost).

      I'm personally pleased that there is more competition but I find that saying Intel is dying to be silly.

      [1] https://www.phoronix.com/scan.php?page=article&item=ec2-grav...

      • rbanffy 5 years ago

        And it sure doesn't help that Amazon won't be selling desktop PCs or on-prem servers anytime soon.

        • Rafuino 5 years ago

          Well, they did announce Amazon Outpost as well...

  • electrograv 5 years ago

    > It's hard to see Amazon transitioning their AWS machines to Amazon built chips

    As a strategic move, this makes a lot of sense for Amazon. Moreover, Amazon is a company known for excellence in a diverse set of disciplines, and TSMC has an excellent reputation for delivering state-of-the-art CPUs at scale — yet you are here to doubt they can pull it off, despite providing no evidence or rationale for your position?

    The burden of proof is on you to justify your pessimism. If you have evidence for your claim that Amazon + TSMC will have problems scaling, please provide it.

    • jawnv6 5 years ago

      how many amazon customers have they migrated to their existing ARM solutions?

      like that's the bit that's missing, the servers sitting on a rack are meaningless without ARM customers, and amazon chips not existing didn't somehow prevent the demand. they sell arm compute now and it's a paltry fraction of the whole. pretending it's about TSMC scaling is ridiculous.

  • marcosdumay 5 years ago

    > Everyone wants to make their own chips until they have to do so at scale

    Isn't it exactly the other way around?

  • askafriend 5 years ago

    Apple sold 217 MILLION iPhones in just 2017 alone.

    That's a number that doesn't include iPad, Apple Watch, HomePod, or Macs - all of which have custom Apple silicon in them.

    I think you're severely underestimating Apple here.

    • pjmlp 5 years ago

      There are lots of countries around the world where common people hardly ever get to see an Apple device in the wild.

      • acdha 5 years ago

        There are lots of places where you rarely see PCs, too, but that doesn’t mean that Intel and AMD don’t sell a lot of chips. 200M per year is well into the economies of scale range.

      • askafriend 5 years ago

        That has nothing at all to do with the original point, or even my point.

  • JohnJamesRambo 5 years ago

    Delivering almost any package to my house in two days at scale seems a lot harder than making chips at scale and they did that already.

    • evancox100 5 years ago

      Unfortunately I think making cutting edge chips is harder these days. Just going on cost, the most expensive Amazon fulfillment center comes in at $200 million, while the most expensive fab is $14 billion, from Samsung, with word of a $20 billion fab coming from TSMC.

  • bitmapbrother 5 years ago

    >Mike Tyson said: "Everyone has a plan until they get punched in the face."

    Mike Tyson actually said "Everyone has a plan until they get punched in the mouth."

kelp 5 years ago

I'm by no means an expert in this, and maybe it's a bit obvious, but hadn't seen this mentioned yet.

I think as we run out of gains to be had from process size reductions, the next frontier for cloud providers is in custom silicon for specific workloads. First we saw GPUs move to the cloud, then Google announced their TPUs.

Behind the scenes, Amazon's acquisition of Annapurna Labs has been paying off with their Nitro (http://www.brendangregg.com/blog/2017-11-29/aws-ec2-virtuali...) ASIC, nearly eliminating the virtualization tax, and providing a ton of impressive network capabilities and performance gains.

The Graviton, which I believe also came from the Annapurna team, is likely just the start. Though a general purpose CPU, it's getting AWS started on custom silicon. AWS seems all about providing their customers with a million different options to micro-optimize things. I think the next step will be an expanding portfolio of hardware for specific workloads.

The more scale AWS has, the more it makes sense to cater to somewhat niche needs. Their scale will enable customization of hardware to serve many different workloads and is going to be yet another of Amazon's long-term competitive advantages.

I think that will show up in two ways. Hardware narrowly focused on certain workloads, like Google's TPUs that show really high performance, and general purpose CPUs like these Gravitons that are more cost efficient for some workloads.

I see echoes of Apple's acquisition of P.A. Semi that led to the development of the A series CPUs. My iPhone XS beats my MacBook (early 2016) on multi-core Geekbench by 37%. (And on single core, it's only 10% slower than a 2018 MacBook Pro 15.)

If Amazon is able to have similar success in custom silicon, this will be a big deal.

I think early next year we'll test the a1 instances for some of our stateless workloads and see what the price/performance really looks like.

It does make me worry that this sort of thing will cement the dominance of large cloud providers, and we'll be left with only a handful (3?) of real competitors.

  • snaky 5 years ago

    > the next frontier for cloud providers is in custom silicon for specific workloads

    Sure, the cloud is a classical mainframe, and mainframes are famous for using specialized hardware for pretty much everything.

comboy 5 years ago

This can be powerful. They don't have to build a general CPU right away. Start with storage, and by the time you have database boxes on ASICs designed to match your software you're already winning.

I'm surprised there's still not much effort to make FPGAs more affordable and base everything on it. At this scale it seems like it should be a win in the long run over deploying new ASICs every few years.

  • clouddrover 5 years ago

    Twitch (which is owned by Amazon) is rolling out VP9 video to their site for about a 25% bitrate saving over H.264. Since Twitch is almost entirely live video they needed a VP9 encoder fast enough to keep up at the quality they wanted. They found libvpx wasn't fast enough for their live video use case so they're using an FPGA VP9 encoder from NGCodec:

    https://www.youtube.com/watch?v=g4HnM26Fwaw

    https://ngcodec.com/news/2018/11/12/ngcodec-to-deliver-broad...

    In a couple of years Twitch will start deploying AV1 streams. I imagine they'll take a similar approach for that as well.

  • trevyn 5 years ago

    FPGAs are much larger (read: more expensive in volume) and slower than ASICs, so if you have the unit volume and calendar time to do an ASIC and know roughly what it needs to be optimized for, FPGAs really can’t compete. FPGAs are effective for more exotic smaller-scale use cases where unit cost is less of an issue.
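
    The usual back-of-the-envelope for that tradeoff is a crossover-volume calculation; a sketch with entirely made-up numbers (real NRE and unit costs vary enormously):

        def crossover_volume(asic_nre: float, asic_unit: float, fpga_unit: float) -> float:
            """Unit volume above which paying the ASIC's up-front NRE becomes cheaper."""
            if fpga_unit <= asic_unit:
                return float("inf")   # the FPGA never loses on these numbers
            return asic_nre / (fpga_unit - asic_unit)

        # Hypothetical figures: $30M of mask/design NRE, $40/unit ASIC, $400/unit FPGA.
        n = crossover_volume(asic_nre=30e6, asic_unit=40.0, fpga_unit=400.0)
        print(f"ASIC wins past ~{n:,.0f} units")   # ~83,333 units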

  • adamson 5 years ago

    I'm pretty out of the loop here. Are there existing, widely used workloads that are critical for storage for which FPGAs are competitive with CPUs?

    • comboy 5 years ago

      I'm just guessing that if the only thing a given box is doing is handling a very specific internal S3 API, you can probably optimize a few things over a multi-purpose architecture.

      I'm totally green here and I'll be honest - when I want to learn about something, it seems that stating some thesis gets me way more information from HN vs asking a question, especially when it turns out to be wrong ;)

      • jawnv6 5 years ago

        This is a horrendously disrespectful way to learn about a niche area. I'm shocked to see it laid out so plainly like that.

        Your first assertion here is incredibly wrong. ISAs don't split up as cleanly as your fictional version suggests; a NAS box has to support all the same branch, arithmetic, and memory operations as a "multi-purpose" architecture. The only conceivable things you'd bolt on would be things like NEON accelerators for AES, and there are better ways to do that than mucking about with the ISA.

        Do you get folks coming back for a second reply after this charade is made apparent?

        • uncoder0 5 years ago

          Only replying to the claim of disrespect. I think the disclaimer, as included in a post positing a thesis, is not disrespectful at all. The GP clearly laid out that they were not an expert but had an assumption.

          I will agree, though, that without the disclaimer the GP would have been disrespectful.

          • jawnv6 5 years ago

            disclaimer should have been in the first post, not a reply to a reply long after the confident assertions over what a "storage" CPU must do.

        • basilgohar 5 years ago

          And yet, here he/she gets exactly the result they were looking for. It's a well known online trope that you get your question answered faster and more thoroughly by posting a wrong answer first rather than plainly asking. My guess is it triggers something primal in us geeks.

          See: https://xkcd.com/386/

          • jawnv6 5 years ago

            It's still an incredibly disrespectful way to approach a community, and all of these replies ignore the thrust of my question about the expert re-upping after this ruse has been made apparent.

            Comboy spread a lot of disinformation in the first post, like "I'm surprised there's still not much effort to make FPGAs more affordable and base everything on it." before the lie was laid bare. Looking forward to arguing with "FPGA experts" who harken back to that post as their primary source.

  • muricula 5 years ago
    • comboy 5 years ago

      Indeed, with networking it seems to have already happened. If I remember correctly, Google also uses software-defined networking based on their own hardware (not sure if FPGAs are involved).

      • kingosticks 5 years ago

        Google, along with everyone else, still uses ASIC-based networking hardware from the traditional vendors where they need the bandwidth. But full marks to their PR department for the idea they have it all solved themselves.

      • monocasa 5 years ago

        They're definitely using Myricom Lanai chips for part of it at least; Google engineers are the maintainers of the Lanai LLVM backend.

        • daxfohl 5 years ago

          Seems odd that MS and Google are the ones using special hardware, but only reach 24 and 16 Gbps respectively, while AWS hits 100 Gbps networking. Is AWS already using specialized HW as well?

          • jabl 5 years ago

            I'd guess the bespoke hw is not necessarily faster signaling, but rather functionality. E.g. fast multipath routing in a Clos fabric, firewalling, maybe offloading some specific workload (IIRC ms was using fpgas to offload some aspect of search for Bing)

            Going back to signaling, AFAIU the state of the art is 25 Gb/s per lane, 100 Gb networking aggregates 4 of those. 50 Gb/s is still in the labs.

            • kingosticks 5 years ago

              56Gbps serdes lanes are being used in network chips right now.

ChuckMcM 5 years ago

From the article: "Amazon licensed much of the technology from ARM, the company that provides the basic technology for most smartphone chips. It made the chip through TSMC, a Taiwanese company."

Amazon became an ARM architecture licensee and had their variant manufactured for them by TSMC.

I find the characterization "home grown" a stretch here; had they designed their own instruction set, etc., I might agree.

That said, the interesting thing about this article is that given Intel's margins, a company like Amazon feels they can take on the huge cost of integrating a CPU, support chips, and related components to achieve $/CPU-OP margins that are better than letting Intel or AMD eat all of that design and validation expense.

This sort of move by Amazon, and AMD's move to aggressively price the EPYC server chips, really puts a tremendous amount of pressure on Intel.

  • projektfu 5 years ago

    I think it's fair to call it home grown, like in house, but to say it threatens Intel is like saying that Amazon has introduced a server chip that will take over everyone's datacenters from Intel. Seems unlikely. Amazon also has to be careful now not to run afoul of anyone's IP as they can't farm out that responsibility to other providers.

    • Despegar 5 years ago

      Well I'd say it's certainly a big risk to Intel. For one, a lot of Intel customers are going in-house with chip design. Apple on its own might not hurt much, but add a few more like Amazon or Google, and things can really unravel. If you're an integrated chip designer and you lose volume, you're in for a lot of pain.

      The thing that defines Amazon in recent years is their desire to make everything a third party service (AWS, Fulfillment, etc) and they may in fact do that for their own chips. So while Apple may never sell their chips to anyone, Amazon may decide to enter the merchant chip business (if they decide it's not a competitive advantage). Maybe they wouldn't sell it to Microsoft or Google, but certainly other companies that they don't compete with that operate their own servers (Facebook). And then Intel would really be losing volume.

      • stcredzero 5 years ago

        The thing that defines Amazon in recent years is their desire to make everything a third party service (AWS, Fulfillment, etc) and they may in fact do that for their own chips.

        AWS was Amazon monetizing its own infrastructure. Maybe they're thinking of monetizing AWS's infrastructure? Instead of being in the gold rush, sell the pickaxes and backpacks in a general store. Then, when people realize there's a lot of money in those stores, start selling store shelves and offer wholesale logistics.

        • justicezyx 5 years ago

          > AWS was Amazon monetizing its own infrastructure.

          AWS builds infrastructure and monetizes it.

          AWS's hardware usage far exceeds what they need for their other businesses.

        • thinkling 5 years ago

          AWS wasn't Amazon.com's infrastructure. The store didn't run on AWS for a long time. I believe it was more Amazon monetizing spare hardware capacity.

    • hbosch 5 years ago

      > but to say it threatens Intel is like saying that Amazon has introduced a server chip that will take over everyone's datacenters from Intel.

      The implication is that Amazon is everyone's datacenter.

    • zumu 5 years ago

      > it... is like saying that Amazon has introduced a server chip that will take over everyone's datacenters from Intel.

      AWS is what is actively taking over data centers. If it were to start running on Amazon chips, the result may well be the same.

      • projektfu 5 years ago

        I wonder how much of the overall datacenter/server market Amazon and other cloud providers have captured.

  • monocasa 5 years ago

    > I find the characterization "home grown" a stretch here; had they designed their own instruction set, etc., I might agree.

    Would you say the same thing about Apple, who has their own ARM uarch?

    • twtw 5 years ago

      The only really useful distinction that can be made here is between companies that have architecture licenses, and therefore can design their own uarch, and companies with any other kind of arm license.

    • zapita 5 years ago

      Apple is also a TSMC customer I believe.

      • monocasa 5 years ago

        So is AMD (they're shutting down GloFo).

        • ATsch 5 years ago

          I'm not sure how they could shut down GloFo, considering they don't own it.

          Besides, GloFo is still doing decently even if they've dropped 7nm... high-end isn't the whole chip market. And AMD is still using GloFo for non-7nm designs.

          • metildaa 5 years ago

            AMD's strategy of using multiple 7nm Zen 2 dies tied together by a 14nm I/O die (since I/O doesn't scale down quite as well) is a really interesting way to improve yields by using smaller dies (rather than making huge, low-yield chips like the competition) and to reduce mask/production cost. One 7nm Zen 2 mask can be used to produce CPU cores for a multitude of SKUs, optimized for different markets using (cheaply customized) I/O interconnects made on 14nm.

            This allows AMD to keep much less silicon on hand for stocking the myriad of SKUs, as the only bottleneck for ramping up production of a SKU is producing those I/O interconnects on 14nm, which is a well understood process.
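
            The yield intuition behind that strategy can be shown with the usual Poisson defect model; a sketch with assumed defect density and die areas (not AMD's actual numbers):

                import math

                def die_yield(area_mm2: float, defects_per_mm2: float) -> float:
                    """Poisson model: probability that a die of the given area has zero defects."""
                    return math.exp(-defects_per_mm2 * area_mm2)

                D = 0.002           # assumed defects per mm^2 on an immature node
                monolithic = 640.0  # one big 640 mm^2 die (hypothetical)
                chiplet = 80.0      # eight 80 mm^2 chiplets instead (hypothetical)

                print(f"monolithic yield: {die_yield(monolithic, D):.1%}")           # ~27.8%
                print(f"chiplet yield:    {die_yield(chiplet, D):.1%} per chiplet")  # ~85.2%
                # Defective chiplets can be discarded or binned individually instead
                # of scrapping the whole large die.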

            • ATsch 5 years ago

              I think you replied to the wrong comment, but this also allows AMD to manufacture the I/O die with GloFo, which saves cost not only because 14nm capacity is much higher, but also because AMD's agreement with GloFo requires them to pay a fee for every wafer they manufacture with another fab.

        • kankroc 5 years ago

          Source? GloFo stopped pursuing 7nm but that is far from shutting down.

          • monocasa 5 years ago

            Everyone working on R&D either got laid off, or moved to sustaining engineering on existing nodes. AMD is doing 7nm on TSMC. That's about as shut down as foundries get since the capital investment is all up front.

        • llampx 5 years ago

          AMD is a TSMC customer but GlobalFoundries is far from shutting down.

          • monocasa 5 years ago

            AMD isn't shipping any new GloFo processors, and GloFo either laid off or shifted all of their R&D to sustaining.

            That's as close to shutting down as foundries get.

        • cdmckay 5 years ago

          I thought they spun it off?

          • monocasa 5 years ago

            They did, but recently GloFo announced that they're stopping R&D on newer nodes and are switching purely to sustaining engineering.

  • twtw 5 years ago

    > Amazon became an ARM architecture licensee

    Does Amazon really have an ARM architecture license? I thought these chips were using stock ARM cores (licensing cores only, not architecture).

    I asked on HN on the original announcement and it sounded like that was the case: https://news.ycombinator.com/item?id=18553028

    Also, I would argue that a custom ISA matters far less than a custom microarchitecture. After all, Intel is (mostly) using AMD's ISA.

    • evancox100 5 years ago

      Also curious about this. Would be shocked if Amazon went straightaway for an architecture license.

  • jimbokun 5 years ago

    It may not be "home grown", depending on how you want to define that term, but it does point to a vector for Intel's business model to be disrupted.

    Going to ARM for chip IP, tweaking it, and then going to TSMC or some other manufacturing specialist could steadily eat away at Intel's market share and margins. Apple has now gone this route, Amazon is testing the waters, and other tech giants probably aren't far behind.

    • monocasa 5 years ago

      > Going to ARM for chip IP, tweaking it

      Just going to throw out there that Amazon paid more for Annapurna than Apple paid for P.A. Semi. That very well might imply that they have a custom uarch. Homegrown very well might be an apt adjective.

  • dang 5 years ago

    Ok, we've taken out the word "homegrown" above.

  • andyidsinga 5 years ago

    It's conceivable (even likely?) that they've developed some exotic peripherals that go along with the ARM core in order to complete their ASIC... which sort of gets them into the same ballpark as defining their own ISA.

acqq 5 years ago

The article is non-technical. For those looking for the technical details:

https://www.theregister.co.uk/2018/11/27/amazon_aws_graviton...

"Semiconductor industry watcher David Schor shared SciMark and C-Ray benchmarks for the 16-core Graviton. In the SciMark testing, the AWS system-on-chip was twice as fast as a Raspberry Pi 3 Model B+ on Linux 4.14."

http://codepad.org/wZe5SrjI

""It does well on the Phoronix Test Suite," he said. "It does poorly benchmarking our website fully deployed on it: Nginx + PHP + MediaWiki, and everything else involved. This is your 'real world' test. All 16 cores can't match even 5 cores of our Xeon E5-2697 v4.""

"The system-on-chips use a mix of Arm's data-center-friendly Neoverse technology, and Annapurna's in-house designs. The 16 vCPU instances are arranged in four quad-core clusters with 2MB of shared L2 cache per cluster, and 32KB of L1 data cache, and 48KB of L1 instruction cache, per core. One vCPU maps to one physical core."
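
Totalling up those figures, as a quick sanity check (assuming the per-core and per-cluster numbers quoted above):

    clusters, cores_per_cluster = 4, 4
    l2_per_cluster_kb = 2 * 1024
    l1d_per_core_kb, l1i_per_core_kb = 32, 48

    cores = clusters * cores_per_cluster
    total_l2_mb = clusters * l2_per_cluster_kb / 1024
    total_l1_kb = cores * (l1d_per_core_kb + l1i_per_core_kb)

    print(f"{cores} cores, {total_l2_mb:.0f} MB L2 total, {total_l1_kb} KB L1 total")
    # -> 16 cores, 8 MB L2 total, 1280 KB L1 total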

ksec 5 years ago

Designing an ARM chip from ARM's blueprints, with TSMC to manufacture it, is relatively simple and cheap for Amazon. And there is enough market and hype to justify the investment, as they will probably break even within 24 months. It has become obvious that Intel isn't really willing to lower prices in a way that affects margin and sales. So Amazon needs to make a statement to Intel that they have lots of options: EPYC and ARM.

I don't want to hype Zen 2 / EPYC 2, but I do think it will be very competitive. And that is a threat to Intel. And fundamentally, the REAL threat is neither ARM, AMD, nor even RISC-V; it is TSMC.

becauseiam 5 years ago

The article misses that, months before the ARM announcement, AWS announced AMD-based instances in the m5 and r5 classes, which are cheaper than the default Intel offerings. If anything, that is what Intel might be afraid of, because the workloads that can be achieved are comparable.

  • TazeTSchnitzel 5 years ago

    AMD are in a great place. TSMC's 7nm is in great shape (unlike Intel's 10nm), and AMD's multi-chip architecture allows them to get vastly better effective yields for high-core-count “chips”. They will be selling CPUs superior to Intel's and making them at significantly lower cost, while Intel struggles to compete, stuck with monolithic chips on 14nm.

    • stdplaceholder 5 years ago

      I have not met anyone who deployed the new AMD stuff at scale and is happy with the outcome. The new architecture shines on small codes like SPEC and then falls apart in large, branchy, pointer-chasing codes that everyone runs in production. I would not say AMD is “in a great place” with their current product. They are putting slight pressure on Intel on the very low end and filling some very specialized niches but that’s about it.

      • llampx 5 years ago

        Is there no benchmark that measures the EPYC and Xeon chips on "large, branchy, pointer-chasing codes that everyone runs in production"? From everything I've seen, the Zen architecture is a win across the board.

      • TazeTSchnitzel 5 years ago

        That's an interesting perspective, thanks. I'd be curious to see how AMD's Zen 2 will play out next year, though. They should have a big power/performance-to-price advantage over Intel unless they're forced to cut prices.

      • evancox100 5 years ago

        Any public sources? Just curious

        • stdplaceholder 5 years ago

          I don't think anyone has an incentive to release findings because that will just sour their future work with AMD and counteract any pricing concessions they are getting for waving the AMD platform around under Intel's noses. Same for POWER, for what that's worth.

    • blackstrips 5 years ago

      That said, Intel’s 14nm is very mature. The yields are likely very good at this point, given how long they have been using it. Knowing it inside out also likely allows them to squeeze a little more performance out of it.

      A pity about their 10nm. I won’t be surprised if they have given up on it by now and have shifted all resources to 7nm. Hopefully 7nm comes online before 14nm completely gives out.

twoodfin 5 years ago

Pretty funny to see “Dave Patterson” described as “a Google chip specialist”!

  • cbsmith 5 years ago

    I was like, "is that the same Dave Patterson?", and then thinking about Google & typical tech journalism I realized, "yeah, of course it is".

Solar19 5 years ago

Why ARM though? The article touts how this is a homegrown chip, and Amazon obviously has the resources to build a truly homegrown, optimized CPU. Why use ARM instead and import all of its idiosyncrasies?

I guess I could ask the question more broadly. Why does every company that "designs its own chip" use ARM instead of designing its own ISA? How much work does it save? How much optimization does it forsake? I'm reminded of John Regehr's post on discovering the optimal instruction set: https://blog.regehr.org/archives/669

  • pcwalton 5 years ago

    It's not just the ISA that you get when you license from ARM: it's the entire HDL source. If you go even further and get an architecture license, you also get a whole bunch of fundamental technical documents and a test suite. This saves a ton of time.

    There's the issue of software compatibility. Writing a port of Linux—kernel and userland—is a ton of work. And if you have other chips on board, such as a GPU, the only drivers available may be ARM-only binary blobs. There might be an entire ecosystem of ARM software you have to keep supporting (which is one reason why Apple doesn't [yet] design their own ISA).

    RISC-V has a small chance of commoditizing some of this stuff someday, but not yet.

  • pjc50 5 years ago

    > How much work does it save?

    Designing your own incompatible, non-mainstream ISA basically commits you to maintaining your own compiler toolchain(s) and your own ports of Linux distribution(s). That's dozens to hundreds of software engineers, continuously.

    > How much optimization does it forsake?

    Not a great deal, unless your workload is very unusual or you spend all your time running one algorithm.

    It seems that what Amazon are doing is adding extensions or changes to facilitate hypervisors - not changes to the instruction set at all, but changes in the way system calls, hypervisor calls, and interrupts are handled.

  • justincormack 5 years ago

    Because it is a lot of work to build from scratch, and to build the whole software ecosystem. RISC-V is the other option, but it is not yet as mature for server chips, and the software stack is only just starting to ship. ARM servers have only just reached maturity after some years.

  • twtw 5 years ago

    > how much work does it save?

    Really a lot of work. Software compatibility and availability will make or break a processor. Unless you own the entire stack and are willing to deal with the struggles of a custom ISA, you don't want to make a new one - the benefits probably aren't that great.

    • kelp 5 years ago

      This can't be overstated for a cloud provider, especially one like AWS that wants to run everything for everyone. ARM has good OS and compiler support, and if you want people to move some workloads off x86 to another architecture, you're gonna want that migration to be as painless as possible.

  • jabl 5 years ago

    AArch64 (does this chip even bother supporting 32-bit?) is much less idiosyncratic than the older 32-bit ARM. Also, the ARM server platform has UEFI (for all its faults), providing an x86-like plug-and-play experience. And, like others already said, they are getting an entire ecosystem as part of the deal.

    RISC-V might get there one day, but not yet.

j1vms 5 years ago

Well, I guess Intel's thinking "there's always Microsoft (Azure)."

I don't think it's settled which of Azure or AWS captures the most market share in the next decade. AWS has a lot going for it, but MS is coming in hard and fast on the OSS-to-cloud integration front. It probably would have made sense for Amazon to pick up Red Hat before IBM did.

  • stcredzero 5 years ago

    Microsoft can compile everything on ARM if they want to. No reason why Azure would be completely stuck on Intel chips.

    • cbsmith 5 years ago

      They can compile "everything" that Microsoft has written to ARM. There's a small matter of, you know, all the other stuff.

      • MBCook 5 years ago

        Make the servers cheaper and people will use them.

        It already doesn’t matter for a lot of people. Java? PHP? JS? C#? If you’re using an interpreted language then as long as the interpreter is updated you don’t need to care.

        Outside of that, costs rule the day. If the ARM servers are noticeably cheaper then people will be incentivized to make their software run on them.

        Tons of open source software runs on Windows. Why isn’t Windows more common on Azure and AWS? It costs more per hour. So those who could moved.
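
        As a minimal illustration (assuming a stock CPython install on both hosts, nothing AWS-specific), the same script runs unchanged whether the box underneath is x86 or ARM; only the interpreter binary is architecture-specific:

            import platform
            import sys

            # Same source file on an x86 or ARM instance; only the interpreter
            # binary differs, the script itself does not care.
            print("interpreter:", platform.python_implementation(), sys.version.split()[0])
            print("architecture:", platform.machine())  # e.g. 'x86_64' or 'aarch64'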

        • cbsmith 5 years ago

          So, if you're using an interpreted language, you're already indifferent enough to cost that the savings from running on ARM likely won't be enough to be a differentiator.

          > If the arm servers are noticeably cheaper then people will be incentivized to make their software run on it.

          Yeah, I don't think the incentive is as large as you perceive it to be. Few, if any, customers are going to hire back the people they got rid of five years ago to port their old code over to a platform that is 50% cheaper to run, let alone for smaller or nonexistent savings.

          But that's kind of missing the simpler problem: there's a reason beyond form factor that the ARM version of Windows has limited features/offerings...

          > Why isn’t windows more common on Azure and AWS? It costs more per hour.

          Actually, with three year reserved instances, the pricing is largely the same in Azure.

        • stcredzero 5 years ago

          > If you’re using an interpreted language then as long as the interpreter is updated you don’t need to care.

          Plenty of compiled language users wouldn't need to care either.

        • jusssi 5 years ago

          > Tons of open source software runs on Windows. Why isn’t Windows more common on Azure and AWS? It costs more per hour. So those who could moved.

          Until we get something more practical than an RDP mouse-slinger GUI for remote admin, I doubt serious people want to use Windows, even if it cost the same.

          • cbsmith 5 years ago

            We crossed that threshold a very, very long time ago.

          • oblio 5 years ago

            PowerShell Remoting has been a thing since Windows 2008, at least. Almost everything Microsoft has made in the last decade has had PowerShell support.

erikpukinskis 5 years ago

Apologies in advance for the layperson question:

It's my understanding that a lot of CPU gains come from caching. That suggests to me that there is potential performance to be gained by caching across a larger number of machines.

Is that something Amazon could do here? Somehow connect all their machines and cache in a huge space?

Maybe individual physical machines would be more like a front end for a cache space, and when I get an Amazon instance, it's actually a "virtual" CPU that pieces together instructions that are mostly already cached in various places throughout the network?

Is that even theoretically possible, or is it total fantasy?

  • rblatz 5 years ago

    Cache is insanely fast, orders of magnitude faster than RAM, and basically instant compared to going to disk or another machine on the network. I would find it unlikely that they could overcome the added network latency introduced in such a system.

    Edit: check this out for more info https://people.eecs.berkeley.edu/~rcs/research/interactive_l...

    • sharpneli 5 years ago

      And to give the 1 ns L1 access time some physicality: during 1 nanosecond, light in a vacuum travels 30 centimeters, or about 12 inches. Signals in conductors travel slower. It is a ridiculously short amount of time.

      This means that there will absolutely never* be anything farther from the CPU than that distance that can give faster access. More precisely, half of that distance, since the request for which part of the cache to read must first travel to the cache itself.

      * Unless we find out FTL is possible. But it's a rather safe bet to assume no.
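
      To make the arithmetic explicit, here is the same bound as a tiny Python sketch (nothing here beyond the numbers already quoted above):

          # Back-of-the-envelope: how far away could a 1 ns cache possibly be?
          C = 299_792_458        # speed of light in vacuum, m/s
          ACCESS_TIME = 1e-9     # 1 ns L1 access time

          one_way = C * ACCESS_TIME       # distance light covers in 1 ns (~0.30 m)
          round_trip_bound = one_way / 2  # request must reach the cache and come back

          print(f"light travels {one_way * 100:.0f} cm in 1 ns")
          print(f"so a 1 ns cache must sit within ~{round_trip_bound * 100:.0f} cm of the CPU")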

  • Merad 5 years ago

    I'm going by memory here, but IIRC the fastest level of CPU cache (L1) can be accessed in about 2 nanoseconds. Even the slowest level (L3) operates on the order of tens of nanoseconds. RAM is slow by comparison, and network latency is so much slower it's basically glacial.

  • evancox100 5 years ago

    Your idea is impractical from a "caching for performance" perspective, for the reasons others have pointed out.

    However, there is work to enable multi-machine coherent memory; I just saw [1] today. This is kind of similar: you're giving a single, unified view of memory across multiple machines. But you do this to easily share data as part of a new programming paradigm, not to speed up your code.

    https://www.westerndigital.com/company/newsroom/press-releas...

DeathArrow 5 years ago

It's not exactly a threat. Amazon uses the Cortex-A72 in their CPUs and there's no way they can replace most Intel CPUs with that. The performance isn't there.

  • simonh 5 years ago

    The threat is the potential for this to encroach on Intel's territory in the future, not necessarily this specific chip. The article makes this clear by opening with talk of a 'new line of work' and 'going the do-it-yourself route'. This is about the trend, not the moment and I thought they did a good job clearly framing it that way.

  • rbanffy 5 years ago

    It looks like they are after the use cases where an Intel server-grade CPU is overkill. If an A72 solves my problem and is cheaper than the cheapest x86 I can get, A72 it'll be. ARM servers are very useful for low base loads.

    If the demand gets higher, there is no reason not to spin up a couple of x86s to take the load off the ARM boxes.

  • ptman 5 years ago

    There are workloads where I/O and RAM dominate

mmaunder 5 years ago

Amazon sell compute to the world. This is vertical integration and makes sense at a certain scale. That scale has to be massive, but they appear to have reached it. They may have chosen to execute sooner if they also plan to sell the chips to hardware vendors like Dell and the financials check out.

karakanb 5 years ago

Truly lame question: is there any possibility for Amazon or other cloud providers to monitor the executed instructions and their distribution? Could this allow more optimized architectures for specific loads, or would it not bring any actual benefit?
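
To make the question concrete, here's a rough per-host sketch of the kind of measurement I mean, using Linux perf driven from Python (perf availability and the ./workload binary are assumptions on my part; whether a provider could or would aggregate this fleet-wide is exactly what I'm asking):

    import subprocess

    # Count a few hardware events for one workload run; perf stat prints
    # its counter summary to stderr.
    events = "instructions,cycles,branches,branch-misses"
    result = subprocess.run(
        ["perf", "stat", "-e", events, "./workload"],  # './workload' is a placeholder
        capture_output=True,
        text=True,
    )
    print(result.stderr)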

syntaxing 5 years ago

I thought Amazon is just licensing a custom design from ARM and manufacturing at TSMC. It's a step in the right direction, but it'll be a good number of years before Amazon has their own fab making their own chips.

nickik 5 years ago

Too bad they are not doing this with RISC-V. Getting a large customer and high-performance implementations would have been a great boost.

Since this is about vertical integration for them it would make a certain amount of sense.

  • DeathArrow 5 years ago

    Bad for RISC-V, good for Amazon. RISC-V is in a far less usable state than ARM ISA.

    • nickik 5 years ago

      Why would it be bad for RISC-V if a large company invests money and moves workloads to it?

      Of course RISC-V is not as far along as ARM, but if Amazon really had that strategy the ISA would hardly be their primary problem.

novaRom 5 years ago

In the near future we will see more and more chips coming from Asia. Not just final silicon production, test, and packaging, but also complete hardware design. They will produce GPUs, FPGAs, and CPUs of all classes.

Look at how many students in computer design, digital design, and electrical engineering graduate every single year. Multiply that by low costs and high productivity and you will find Silicon Valley facing a very strong competitor.

  • yonkshi 5 years ago

    I think we will also see proportionally more and more chips coming out of US companies.

    When Google started building their own chips (TPUs), they kickstarted a vertical-integration race amongst the tech giants. Apple, Intel, and Nvidia were the old players; Google is dipping their toes in with the TPU; now Amazon, and probably soon Microsoft and FB.

    I think overall we will see more chips from both Asia and the US.

    • evancox100 5 years ago

      Microsoft has been doing custom silicon longer than either Google or Amazon, as part of the Xbox security-related functionality I believe. And then other consumer products like HoloLens, the custom pen ASIC in Surface, etc. Now Azure Sphere (more about giving away silicon IP to others, but they still have their fingers in it).

      Edit: All this in addition to their widespread and well documented usage of FPGAs for Azure network offload and Bing acceleration.

mehdix 5 years ago

Open source has won in software; however, it mostly runs on closed-source, proprietary hardware. Perhaps open hardware is the ultimate answer.

aneutron 5 years ago

They're ignoring the fact that it's not even the same ISA, and that these ISAs don't share the same use cases or maturity. For Amazon to actually produce an x86 chip, the only way to do it is to license from either AMD or Intel, as IIRC they're the only ones to hold the x86 patents.

That, or change the way the majority of the software stack has been written for the past 15 years.

  • gpderetta 5 years ago

    A lot of the software stack was rewritten in the past 10 years to make sure that it would work well on ARM, though. ARM itself contributed to the effort.

tmaly 5 years ago

As much as it is easy to license from ARM and use TSMC as a foundry, it does not make much economic sense.

Look at Google's purchase of Motorola. They ended up selling Motorola to Lenovo.

The hardware industry, and specifically the chip industry, is very specialized. If your core competency is not chip making, it really does not make sense to try to enter the industry.

atonse 5 years ago

I wanted to try to use an AWS ARM server as a bastion host running WireGuard. But a t2.nano was cheaper.

Anyone get it working? (I haven’t tried)

  • broknbottle 5 years ago

    The t2.nano without unlimited burst credits relies on staying under a usage threshold. Why WireGuard over something like sshuttle?

    • kelp 5 years ago

      FYI, t3 instances have an option to either be throttled when you exceed your burst credits or just pay more for burst.

      • cthalupa 5 years ago

        T2 offers this as well.

        • kelp 5 years ago

          Oh you’re right! I’d missed the launch of t2 unlimited.

          Looks like with t3 they’ve swapped the defaults: t3 now defaults to unlimited, while on t2 it’s an option.
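
          For reference, flipping that setting is a single API call; a minimal boto3 sketch, assuming valid AWS credentials and a placeholder instance ID:

              import boto3

              ec2 = boto3.client("ec2")

              # Switch a burstable (t2/t3) instance to unlimited CPU credits;
              # 'standard' switches it back. The instance ID is a placeholder.
              ec2.modify_instance_credit_specification(
                  InstanceCreditSpecifications=[
                      {"InstanceId": "i-0123456789abcdef0", "CpuCredits": "unlimited"}
                  ]
              )

              # Confirm the current setting.
              resp = ec2.describe_instance_credit_specifications(
                  InstanceIds=["i-0123456789abcdef0"]
              )
              print(resp["InstanceCreditSpecifications"])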

fareesh 5 years ago

How does competition work with regard to trade secrets like chip design etc? If I hire the top dogs at a chip maker and they design a similar chip for my company from memory, how is this prevented from happening?

  • wmf 5 years ago

    There are patents and non-competes in some states but basically nothing prevents it. Silicon Valley exists because of the cycle of engineers leaving established companies to start startups (or now, leaving established companies to go to companies that are branching out into every area possible).

    • fareesh 5 years ago

      Would patenting the chip design also expose the very secret that is being guarded? So someone in China could just read the patent and make the same thing, is that right?

      • wmf 5 years ago

        Some companies patent all their ideas, including the ones they don't use. This creates a chaff effect where you can't be sure which ideas you should copy.

        There are certainly trade secrets that aren't patented but the whole point is that they aren't protected as much.

sunstone 5 years ago

And Amazon is not alone. Other potential candidates to add to the list include AMD, Apple, and Qualcomm. Maybe a few others that don't come to mind right away.

StreamBright 5 years ago

Not only Amazon's, but Apple's and Jiāngnán's as well. I think Intel really should crank up innovation if they would like to stay competitive.

cronix 5 years ago

I could almost hear Larry Ellison snickering in the background as I read this. I wonder how much Amazon pays Oracle, anyway? I think in '16 it was around $60M (according to Ellison: https://www.forbes.com/sites/bobevans1/2017/12/12/oracles-la... )

kev009 5 years ago

Yawn. The amount of free publicity ARM has gotten over the past decade for entering the data center is perplexing. The economics of these designs are not good versus the competition, full stop.

There are three chips that work well in the data center: EPYC, Xeon, and POWER. All of these are billion-dollar designs. Nothing about the ARM ecosystem supports designing to these same constraints or spending that amount of money to enter this space seriously.

srinikoganti 5 years ago

This is an oversimplification.

Binary compatibility, of millions of third party libraries out there, is the biggest hurdle.

How many years did it take to switch from Python 2.7 to Python 3 ?

AWS lockin is a bigger threat.

I prefer to keep my binaries cloud-agnostic and x86/x64-compatible rather than writing AWS-only code or even ARM binaries.

By the way, what happened to Amazon's phones?

  • leowoo91 5 years ago

    What worries me more is that we are all stuck on the idea of being locked in to some vendor. Why can't we focus on innovation instead? Take Instagram, for example: they started on AWS, right? Then they moved onto real hardware within a few weeks (if I remember correctly) after the acquisition. I understand the cost of unseen problems is high, but it might be a good idea to remember that a lot also depends on delivery speed.

ejz 5 years ago

Um, and AMD.

stevespang 5 years ago

Why has Intel not started building data centers and offering data processing services like Amazon and Google? They obviously have an advantage because they manufacture the chips...

honestoHeminway 5 years ago

Open source will never be able to continuously supply bleeding-edge hardware and software. It is, however, able to specify data-transfer standards that have to be upheld to prevent lock-in of customers.

It should do that.

glenrivard 5 years ago

Cloud changes the business dynamics of the chip business.

Before, a Dell purchased chips from Intel and sold the server to someone else.

Now the companies buying the chips are the ones running the servers. That changes everything. I would expect more and more of the chips used to come from the big cloud providers.

I would expect Google, once they get Zircon further along, to build their own CPUs optimized for it.

I do hope they use the RISC-V ISA like they have with the PVC.

trumped 5 years ago

An Amazon chip would be the last thing I would want to buy, seeing how awful their tablets are...

imtringued 5 years ago

Why should Intel be scared of AWS selling overpriced ARM servers? Scaleway offers 8-core servers with 8 GB for almost half the monthly price of the cheapest ARM instance (with 1 core) that Amazon offers. Even the 16-core instance doesn't make sense. Packet.net will give you a whole 96-core server for the same price with 4x the memory. Capable ARM hardware has existed for years already and AWS is barely able to catch up.

Yes, I know AWS' target group is small startups and big enterprises who couldn't care less about how much their servers cost them. But at the same time this means switching to a new architecture for meager cost savings isn't attractive to them at all. They are willing to pay a premium if it means that things "just work". As long as this barrier exists, ARM servers have zero chance of threatening Intel.

  • twblalock 5 years ago

    > Yes, I know AWS' target group is small startups and big enterprises who couldn't care less about how much their servers cost them.

    When you choose a cloud provider you aren't just renting servers; you are getting access to an ecosystem of supporting services like load balancing, DNS, monitoring systems, container orchestration, cloud-specific databases and event systems, data centers across the world, 24/7 on-call support, etc.

    Nobody offers as many features as AWS. Not Google, not Azure, and certainly not Scaleway. And when they do offer as many features, I bet they'll charge close to the same price.

    Dismissing the valid reasons people have for choosing AWS in the way you did is completely incorrect.

  • blihp 5 years ago

    Because this is just their first generation of chips. Should Amazon stick with it, they will get better at it and costs will come down while performance goes up. This should scare Intel because they have been banking on the data center for future growth. This looks like pretty much their last stand for a place where they can maintain margins. They lost mobile very early on. More recently it's looking like PCs are at risk thanks to competition from AMD. If they start losing their large volume enterprise customers, they'll need to come up with a plan D.

    • beagle3 5 years ago

      That might be the first generation under the Amazon brand, but my 3-year-old Synology is using an Annapurna CPU; Amazon's chips are built by the Annapurna team (acquired by Amazon).

      • blihp 5 years ago

        There’s a world of difference between designing a chip for a range of customers (i.e. pre-Amazon acquisition) and designing a chip for one customer. So provided Amazon does a reasonably good job of managing the acquisition, this really should be looked at as a first generation (despite however many iterations occurred before the acquisition), since their design constraints (workload/environmental/power/thermal) have likely been altered significantly.

  • cperciva 5 years ago

    > Why should Intel be scared of AWS selling overpriced ARM servers?

    Because if AWS is selling overpriced ARM servers, they're not selling overpriced Intel servers. I think it's safe to say that Intel has big profit margins on customized Xeons.

  • kondro 5 years ago

    It's not the simplistic bare-metal services that are growing in AWS. It's all their serverless products, where the end user couldn't care less what the underlying tech is running on so long as the API/performance is what they expect.

    Amazon runs a very large number of fully managed load balancers, S3/DynamoDB data servers, Lambda clouds, etc. More than 90% of their services completely abstract the user away from the underlying architecture.

    It doesn't matter what AWS charges for these services wholesale (and I believe they're currently providing the same performance in EC2 at about a 40% cost saving over the Intel equivalent); the value is in the cost saving of running their own services on this tech.

  • karmasimida 5 years ago

    Not overpriced if there isn't any other vendor doing the same thing.

  • tptacek 5 years ago

    I'm not clear on why Intel should care that there are cheaper Scaleway cores available; presumably what they care about is whether they're the most cost-effective option at AWS. An at-scale existence proof that routinely running serverside ARM software is cheaper than running it on Intel chips is problematic for them.

  • patrickg_zill 5 years ago

    You're right however...

    "Nobody ever got fired for buying IBM."

    Then

    "Nobody ever got fired for buying Microsoft"

    Now

    "Nobody ever got fired for buying AWS services"

  • 20938ny9 5 years ago

    A-are you sweating?

paulie_a 5 years ago

AMD has been kicking the shit out of Intel for 30 years. No one seemed to notice. Their 386 was faster than Intel's 486. The octa-core that is ten-ish years old is still competitive against non-Xeon chips. At the time it completely destroyed Intel's chips for half the price or less. Intel is doing nothing new and is just coasting. They haven't done anything interesting in 20 years; the P4s and RDRAM were an incredible pile of junk. The Itanic... that says it all.

It's cute that I get downvoted instead of an actual rebuttal. AMD has been doing great work for a far lower price, Intel has been phoning it in for decades, and for a good bit of that time has been making inferior technology.