
Intel’s real money is in high-end CPUs sold to prosperous Cloud operators, not in supplying lower-end chips to cost-cutting laptop makers.

I keep searching for "Graviton" in these thinkpieces. I keep getting "no results found."

Mac ARM laptops mean cloud ARM VMs.

And Amazon's Graviton2 VMs are best in class for price-performance. As Anandtech said:

If you’re an EC2 customer today, and unless you’re tied to x86 for whatever reason, you’d be stupid not to switch over to Graviton2 instances once they become available, as the cost savings will be significant.

https://www.anandtech.com/show/15578/cloud-clash-amazon-grav...



While Graviton is impressive and probably an indication of things to come, you can't outright use "Amazon rents them cheaper to me" as an indication of the price performance of the chips themselves.

Amazon is exactly the kind of company that would take 50% margin on their x86 servers and 0% margin on their Graviton servers in order to engineer a long term shift that's in their favor - the end of x86 server duopoly (or monopoly depending on how the wind is blowing).


I don't feel as sure about this; there's very little evidence that Amazon is putting a "50% margin", or any significant percentage, on x86 servers. Sure, they're more expensive than, just for comparison's sake, Linode or DigitalOcean, but EC2 instances are also roughly the same cost per core and per GB as Azure compute instances, which is a far more accurate comparison.

Many people complain about AWS' networking costs, but I also suspect these are generally at-cost. A typical AWS region has terabits upon terabits of nanosecond-scale latency fiber run between its AZs and out to the wider internet. Networking is a problem that gets exponentially harder at scale; AWS doesn't overcharge for egress, they simply haven't invested in building a solution that sacrifices quality for cost.

Amazon really does not have a history of throwing huge margins on raw compute resources. What Amazon does is build valuable software and services around those raw resources, then put huge margins on those products. EC2 and S3 are likely very close to 0% margin; but DynamoDB, EFS, Lambda, etc. are much higher margin. I've found AWS Transfer for SFTP [1] to be the most egregious and actually exploitative example of this; it effectively puts an SFTP gateway in front of an S3 bucket, and they'll charge you $216/month + egress AND ingress at ~150% of the standard egress rate for that benefit (SFTP layers additional egress charges on top of the standard AWS transfer rates).

[1] https://aws.amazon.com/aws-transfer-family/pricing/


A 32-core EPYC with 256 GB will cost you $176/mo at Hetzner, and $2,009/mo on EC2.

Obviously it's on-demand pricing and the hardware isn't quite the same, with Hetzner using individual servers with 1P chips. Amazon also has 10 Gbps networking.

But still, zero margin? Let's call it a factor of two to bridge the monthly and on-demand gap: does it really cost Amazon five times as much to bring you each core of Zen 2, even with all their scale?

I don't think Amazon overcharges for what they provide, but I bet their gross margins even on the vanilla offerings are pretty good, as are those of Google Cloud and Azure.
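For reference, here's the rough arithmetic behind that "five times", as a quick Go sketch; the prices are just the ones quoted above and the factor-of-two adjustment is the same hand-wavy allowance for monthly vs. on-demand, so treat it as a sanity check rather than a real cost model:

    package main

    import "fmt"

    func main() {
        hetzner := 176.0 // 32-core EPYC, 256 GB, per month (quoted above)
        ec2 := 2009.0    // comparable on-demand EC2 instance, per month (quoted above)

        raw := ec2 / hetzner
        fmt.Printf("raw price ratio: %.1fx\n", raw) // ~11.4x

        // Halve it to bridge the monthly vs. on-demand gap, as suggested above.
        fmt.Printf("adjusted ratio: %.1fx\n", raw/2) // ~5.7x, i.e. roughly "five times"
    }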


Very few of AWS's costs are in the hardware. Nearly all of Hetzner's costs are in the hardware. That's why AWS, and Azure, and GCP are so much more expensive.

Margin is a really weird statistic to calculate in the "cloud". Sure, you could just amortize the cost of the silicon across N months and say "their margin is huge", but realistically AWS has far more complexity: the cost of the datacenters, the cost of being able to spin up one of these 32-core EPYC servers in any one of six availability zones within a region and get zero-cost, terabit-scale networking between them, the cost of each of those availability zones not even being one building but multiple near-located buildings, the cost of your instance storage not even being physically attached to the same hardware as your VM (can you imagine the complexity of this? that they have dedicated EBS machines and dedicated EC2 machines, and yet EBS still exhibits near-SSD-like performance?), the cost of VPC and its tremendous capability to model basically any on-prem private network at "no cost" (but there's always a cost). That's all what you're paying for when you pay for cores. It's the stuff that everyone uses, but it's hard to quantify, which is why it's tempting to just say "jeez, an EPYC chip should be way cheaper than this".

And, again, if all you want is a 32-core EPYC server in your basement, then buy a 32-core EPYC server and put it in your basement. But my suspicion is not that a 32-core EPYC server on AWS makes zero margin; it's that, if the only service AWS ran was EC2, priced how it is today, they'd be making far less profit than when that calculation includes all of their managed services. EC2 is not the critical component of AWS's revenue model.


Margin calculations include all that. And I suspect most of AWS's marginal cost is _still_ hardware.

The marginal cost of VPC is basically 0. Otherwise they couldn't sell tiny ec2 instances. The only cost differences between t3.micro and their giant ec2 instances are (a) hardware and (b) power.


> The marginal cost of VPC is basically 0. Otherwise they couldn't sell tiny ec2 instances.

That's not strictly true. They could recoup costs on the more expensive EC2 instances.

I have no idea what the actual split is, but the existence of cheap instances doesn't mean much when Amazon has shown itself willing to be a loss leader.


So what you're saying is kind of the opposite of a marginal cost.

If they are recouping their costs, it's a capital expense, and works differently than a marginal cost. AWS's networking was extremely expensive to _build_ but it's not marginally more expensive to _operate_ for each new customer. Servers are relatively cheap to purchase, but as you add customers the cost increases with them.

If they're selling cheap instances at a marginal loss, that would be very surprising and go against everything I know about the costs of building out datacenters and networks.


>Many people complain about AWS' networking costs, but I also suspect these are generally at-cost. A typical AWS region has terabits upon terabits of nanosecond-scale latency fiber run between its AZs and out to the wider internet.

I'm skeptical about this claim. Most cloud providers try to justify their exorbitant bandwidth costs by saying it's "premium", but can't provide any objective metrics on why it's better than low-cost providers such as Hetzner/OVH/Scaleway. Moreover, even if it were more "premium", I suspect that most users won't notice the difference between AWS's "premium" bandwidth and a low-cost provider's cheap bandwidth. I think the real goal of AWS's high bandwidth cost is to encourage lock-in. After all, even if Azure is cheaper than AWS by 10%, if it costs many times that for you to migrate everything over to Azure, you'll stick with AWS. Similarly, it encourages companies to go all-in on AWS, because if all of your cloud is in AWS, you don't need to pay bandwidth costs for shuffling data between your servers.


Right, but that's not what I'm saying. Whether or not the added network quality offers a tangible benefit to most customers isn't relevant to how it is priced. You, as the customer, need to make that call.

The reality is, their networks are fundamentally and significantly higher quality, which makes them far more expensive. But, maybe most people don't need higher quality networks, and should not be paying the AWS cost.


>You, as the customer, need to make that call.

But the problem is that you can't. You simply can't use aws/azure/gcp with cheap bandwidth. If you want to use them at all, you have to use their "premium" bandwidth service.


> Amazon really does not have a history of throwing huge margins on raw compute resources

What? My $2,000 Titan V GPU and $10 Raspberry Pi both paid for themselves vs. EC2 inside of a month.

Many of AWS's managed services burn egregious amounts of EC2, either by mandating an excessively large central control instance or by mandating one-instance-per-(small organizational unit). The SFTP example you list is completely typical. I've long assumed AWS had an incentive structure set up to make this happen.

"We're practically selling it at cost, honest!" sounds like sales talk.


Yes. Look at the requirements for the EKS control plane for another example. It has to be HA and able to manage a massive cluster, no matter how many worker boxes you plan to use.*

*Unless things have changed in the last year or so since I looked


It is currently 10 cents an hour flat rate for the control plane. That actually saved us money. Even if you weren't going to run it HA, that is still about the cost of a smallish machine to run a single master. I am not sure who, running K8s in production, would consider that too high. If you are running at a scale where $72 a month is expensive, or don't want to run HA, you might not want to be running managed Kubernetes; I'd just bootstrap a single node myself then.


You said it yourself: production at scale is the only place where the current pricing makes sense. That's fine, but it means I'm not going to be using Amazon k8s for most of my workloads, both k8s and non-k8s.


If you cut out the Kubernetes hype, you could simply use AWS' own container orchestration solution (ECS), whose control plane is free of charge to use.


> Many people complain about AWS' networking costs, but I also suspect these are generally at-cost.

This seems to be demonstrably false, given that Amazon Lightsail exists. Along with some compute and storage resources:

$3.50 gets you 1TB egress ($0.0035/GB)

$5 gets you 2TB egress ($0.0025/GB)

Now, it's certainly possible that Amazon is taking a loss on this product. It's also possible that they have data showing that these types of users don't use more than a few percent of their allocated egress. But I suspect that they are actually more than capable of turning a profit at those rates.


And I mean, if you just compare the price of Amazon's egress to that of a VPS at Hetzner or OVH, to say nothing of the cheaper ones, you can be sure that they are making margins of over 200% on it for EC2. There's a $4 VPS on OVH with unlimited egress at 100 Mbps.

$4!

That's a theoretical maximum of about 1 terabit (roughly 125 GB) of egress every three hours. So for the cost of three hours' worth of that egress at AWS rates, you can buy an entire VPS with a month of egress, and then some. It's insane just how much cheaper it really is.
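To put numbers on it, here's a small Go sketch of the gap; the ~$0.09/GB figure is the commonly cited first-tier EC2 egress rate and is my assumption, not something quoted in this thread:

    package main

    import "fmt"

    func main() {
        const mbps = 100.0 // the OVH VPS's port speed
        const secondsPerMonth = 30 * 24 * 3600.0

        // Mbps -> MB/s -> GB over a month of sustained transfer.
        gbPerMonth := mbps / 8 * secondsPerMonth / 1000
        fmt.Printf("max egress at 100 Mbps: ~%.0f GB/month\n", gbPerMonth) // ~32,400 GB

        const awsEgressPerGB = 0.09 // USD; assumed first-tier EC2 egress price
        fmt.Printf("same volume at AWS rates: ~$%.0f vs a $4 VPS\n",
            gbPerMonth*awsEgressPerGB) // roughly $2,900
    }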


Indeed it is. But rest assured that every provider will shut your server down if it's running at full bandwidth all day.


Sure. But you just need to run your server at full bandwidth for one hour every day to use up 10 times more bandwidth than even Lightsail would give you for the price of the server.

I assure you that you can run these servers for one hour a day and no one will bat an eye. I know people running seedboxes at full speed for 10 hours a day or so without an issue - that's 100 times the bandwidth of even Lightsail for the same price.


No, Hetzner charges you <$2/TB. For that price, they'll be happy to route as much traffic as you want; I've never heard of them complaining in these cases.

They have worse peering than AWS but the difference in cost to them is certainly not 100x or more.


Obviously the comment is in reply to having unlimited traffic for a flat fee and not for a price per TB.


This doesn’t add up. Amazon is profitable mostly because of AWS which in turn is profitable mostly due to EC2 and S3.

Clearly they have margins, and fat ones at that.


Yes, by many estimates some of the highest in the industry

"When factoring in heavy depreciation, AWS has EBITDA margins of around 50%."

https://seekingalpha.com/news/2453156-amazon-plus-15-percent...


Building managed applications is where the money is at for AWS, for sure; ElastiCache is another good example. The beauty is that their managed services are great and worry-free.

Shameless plug - partly because of the high cost of SFTP in AWS, the lack of FTP (understandable), and a bunch of people wanting the same in Azure / GCS, we started https://docevent.io which has proved quite popular.


Long term, Amazon is also the exact kind of company that would start making "modifications" to Graviton requiring you to purchase/use/license a special compiler to run ;)


Can you point to a time Amazon did something like that? Not saying they won't; any company can do it, but it's more likely if they've done it in the past.


Why would they want to shift people to ARM based instances if they weren't more efficient?


Even if it were the case, it is still a saving for the end user, and it is a threat to Intel.


>Intel’s real money is in high-end CPUs sold to prosperous Cloud operators, not in supplying lower-end chips to cost-cutting laptop makers.

And to make another point: Apple isn't a lower-end, cost-cutting laptop maker.

Apple sells ~20M Macs per year. Intel ships roughly ~220M PC CPUs per year (I think recent years saw the number trending back towards 250M). That is close to 10%, not an insignificant number. And Apple only uses expensive Intel CPUs. Lots of surveys show most $1,000+ PCs sold belong to Apple, while most business desktops and laptops use comparatively cheap Intel CPUs. I.e., I would not be surprised if the median price of the Intel CPUs Apple buys is at least 2x the total market median, if not more. In terms of revenue, that is 20% of Intel's consumer segment.

Intel used to charge a premium for being the leading-edge fab: you couldn't get silicon better than Intel's, so you were basically paying those premiums for having the best. Then Intel's fab went from 2 years ahead to 2 years behind (that is a 4-year swing), all while charging the same price. And since Intel wants to keep its margin, it is not much of a surprise that customers (Apple and Amazon) look for alternatives.

Here is another piece on Amazon Graviton 2.

https://threader.app/thread/1274020102573158402

Maybe I should submit it to HN.


Yeah, last I looked, Apple's average Mac sales price was $1,300, while HP/Dell/et al. were under $500.

Apple owns the premium PC market; its Mac division is not only the most profitable PC business in the world, it might be more profitable than all the others combined.

Its share of Intel's most expensive desktop CPUs is much higher than its raw market share.


We recently evaluated using Graviton2 over x86 for our Node.js app on AWS. There was enough friction (some native packages that didn't work out of the box, and some key third-party dependencies missing completely) that it wasn't worth the effort in the end, even considering the savings, as we'd likely keep having these issues pop up and have to debug them remotely on Graviton.

If macOS goes ARM and there's a sizable population of developers testing and fixing these issues constantly, the math changes in favor of Graviton and it would make it a no-brainer to pick the cheaper alternative once everything "just works".


Unfortunately you might have witnessed the Achilles' heel of the ARM ecosystem, where certain software/binaries are not available yet. Most open-source code can be compiled for ARM without much hassle[1][2][3], but some might require explicit changes to port certain x86-specific instructions to ARM (a minimal sketch of what that usually looks like follows the footnotes below).

I've been shifting my work to ARM-based machines for some years now, mainly to reduce power consumption. One of my current projects, a problem validation platform[4] (Go), has been running nicely on an ARM server (Cavium ThunderX SoCs) at Scaleway; but weirdly, Scaleway decided to quit ARM servers[5], citing hardware issues which not many of their ARM users seem to have faced. The only ARM-specific issue I faced with Scaleway was that a reboot required a power-off.

[1]cudf: https://gist.github.com/heavyinfo/da3de9b188d41570f4e988ceb5...

[2]Ray: https://gist.github.com/heavyinfo/aa0bf2feb02aedb3b38eef203b...

[3]Apache Arrow: https://gist.github.com/heavyinfo/04e1326bb9bed9cecb19c2d603...

[4]needgap: https://needgap.com

[5]Scaleway ditched ARM: https://news.ycombinator.com/item?id=22865925
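Here's the minimal Go sketch I mentioned above of what "porting" usually amounts to when a project has x86-specific code: keep a portable implementation and gate the CPU-specific fast path behind build tags. The function and names here are hypothetical, just to illustrate the shape of the change:

    package main

    import (
        "fmt"
        "runtime"
    )

    // sumGeneric is the portable implementation; it compiles and runs on any
    // architecture Go supports (amd64, arm64, ...).
    func sumGeneric(data []byte) uint32 {
        var s uint32
        for _, b := range data {
            s += uint32(b)
        }
        return s
    }

    func main() {
        data := []byte("hello graviton")

        // In a real project, an amd64-only fast path (SSE/AVX assembly or
        // intrinsics-backed code) would live in a file guarded by
        // `//go:build amd64`, with this generic version as the fallback
        // everywhere else; for CPU-specific code, that is usually the
        // whole "port".
        fmt.Printf("arch=%s sum=%d\n", runtime.GOARCH, sumGeneric(data))
    }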


Filing bugs against the broken packages would be a nice thing to do. Easy enough to test on a Raspberry Pi or whatever.

I have a hunch our industry will need to increasingly deal with ARM environments.


>Mac ARM laptops mean cloud ARM VMs.

What is the connection here? ARM servers would be fine in a separate discussion. What do they have to do with Macs? Macs aren't harbingers of anything. They have set literally no trend in the last couple of decades, other than thinness at all costs. If you mean that developers will use Gravitons to develop Mac apps, why/how would that be?


To quote Linus Torvalds:

"Some people think that "the cloud" means that the instruction set doesn't matter. Develop at home, deploy in the cloud.

That's bull*t. If you develop on x86, then you're going to want to deploy on x86, because you'll be able to run what you test "at home" (and by "at home" I don't mean literally in your home, but in your work environment)."

So I would argue there is a strong connection.


> If you develop on x86, then you're going to want to deploy on x86

I can see this making sense to Torvalds, being a low-level guy, but is it true for, say, Java web server code?

Amazon are betting big on Graviton in EC2. It's no longer just used in their 'A1' instance-types, it's also powering their M6g, C6g, and R6g instance-types.

https://aws.amazon.com/about-aws/whats-new/2019/12/announcin...


I agree about Java. I'm using Windows to write Java but deploy on Linux and it works. I used to deploy on Itanium and also on some IBM Power with little-endian and I never had any issues with Java. It's very cross-platform.

Another example is Android app development. Most developers use an x86_64 CPU and run the emulator using an Intel Android image. While I don't have vast experience, I did write a few apps and never had any issue because of an arch mismatch.

High-level languages mostly solved that issue.

Also note that there are some ARM laptops in the wild already. You can use either Windows or Linux. But I don't see every cloud or Android developer hunting for those laptops.


It works until it doesn't. We had issues where classes were loaded in a different order on Linux, causing problems that we could not repro on Windows.


Interesting, but in that case you changed OS rather than changing CPU ISA, so not quite the same thing.


No, it's exactly the same thing. The more variables you change, the harder it will be to debug a problem.


I've deployed C++ code on ARM in production that was developed on X64 without a second thought, though I did of course test it first. If it compiles and passes unit tests, 99.9% of the time it will run without issue.

Going from ARM to X64 is even less likely to have issues as X64 is more permissive about things like unaligned access.

People are making far too big a deal out of porting between the two. Unless the code abuses undefined behaviors in ways that you can get away with on one architecture and not the other, there is usually no issue. Differences in areas like strong/weak memory ordering, etc., are hidden behind APIs like posix mutexes or std::atomic and don't generally have to be worried about.

The only hangup is usually vector intrinsics or ASM code, and that is not found in most software.

For higher level languages like Go or Java, interpreted languages like JavaScript or Python, or more modern languages with fewer undefined edge cases like Rust, there is almost never an issue.

This is just not a big deal unless you're a systems person (like Linus) or developing code that is really really close to the metal.


I've developed for x86 and deployed on x86. Some years later we decided to add ARM support. Fixing the bugs that only appeared on ARM made our x86 software more stable. It turns out some issues that are one-in-a-million on x86 happen often enough on ARM that we could isolate them and then fix them.

Thus I encourage everyone to target more than one platform, as it makes the whole better, even though there are platform-specific issues that won't happen on the other (like the compiler bug we found).


Apparently Apple had macOS working for years on x86 before they switched their computers to Intel CPUs. The justification at the time was exactly this: by running their software on multiple hardware platforms, they found more bugs and wrote better code. And obviously it made the later hardware transition to Intel dramatically easier.

I would be surprised if Apple didn't have internal prototypes of macOS running on their own ARM chips for the last several years. Most of the macOS / iOS code is shared between platforms, so it's already well optimized for ARM.


They've had it deployed worldwide on ARM -- every iPhone and iPad runs it on an ARM chip.


To add to your example, everyone that targets mobile devices with native code tends to follow a similar path.

Usually making the application run in the host OS is much more productive than dealing with the emulator/simulator.


Most of my work outside of the day job is developed on x86 and deployed on ARM.

Unless you're talking about native code (and even then, I've written in the past about ways this can be managed more easily), then no, it really doesn't matter.

If you're developing in NodeJS, Ruby, .NET, Python, Java or virtually any other interpreted or JIT-ed language, you were never building for an architecture, you were building for a runtime, and the architecture is as irrelevant to you as it ever was.


> Python

Well I can't speak to some of the others... but Conda doesn't work at all on ARM today (maybe that will change with the new ARM Macs, though), which is annoying if you want to use it on, say, a Raspberry Pi for hobby projects.

Additionally, many scientific Python packages use either pre-compiled binaries or compile them at install-time, for performance. They're just Python bindings for some C or Fortran code. Depending on what you're doing, that may make it tricky to find a bug that only triggers in production.


Sorry, yes this is an exception.

Also one I've come across myself so I'm a bit disappointed I didn't call this out. So... kudos!


If you're on a low enough level where the instruction set matters (ie. not Java/JavaScript), then the OS is bound to be just as important. Of course you can circumvent this by using a VM, though the same can be said for the instruction set using an emulator.


But that's the other way round. If you have an x86 PC, you can develop x86 cloud software easily. You don't develop cloud software on a Mac anyway (i.e., that's not Apple's focus). You develop Mac software on Macs for other Macs. If you have to develop cloud software, you'll do so on Linux (or WSL or whatever). What is the grand plan here? You'll run an ARM Linux VM on your Mac to develop general cloud software which will be deployed on Graviton?


> If you have to develop cloud software, you'll do so on linux (or wsl or whatever).

I think you are vastly underestimating how many people use Mac (or use Windows without using WSL) to develop for the cloud.


I can say our company standardized on Macs for developers back when Macs were much better relative to other laptops. But now most of the devs are doing it begrudgingly. The BSD userland thing is a constant source of incompatibility, and the package systems are a disaster. The main reason people are not actively asking for alternatives is that most use the Macs as dumb terminals to shell into their Linux dev servers, which takes the pressure off the poor dev environment.

The things the Mac is good at:

1) It powers my 4k monitor very well at 60Hz

2) It switches between open lid and closed lid, and monitor unplugged / plugged in states consistently well.

3) It sleeps/wakes up well.

4) The built in camera and audio work well, which is useful for meetings, especially these days.

None of these things really require either x86 or Arm. So if an x86-based non-Mac laptop appeared that handled points 1-4 and could run Linux closer to our production environment, I'd be all over it.


I think you've hit the nail on the head, but you've also summarised why I think Apple should genuinely be concerned about losing marketshare amongst developers now that WSL2 is seriously picking up traction.

I started using my home Windows machine for development as a result of the lockdown and in all honesty I have fewer issues with it than I did with my work MacBook. Something is seriously wrong here.


I think Apple stopped caring about developer market share a long time ago and instead is focusing on the more lucrative hip and young Average Joe consumer.

Most of the teens to early-20-somethings I know are either buying or hoping to buy the latest Macs, iPads, iPhones and AirPods, while most of the devs I know are on Linux or WSL. But devs are a minority compared to the Average Joes who don't code but are willing to pay for nice hardware and join the ecosystem.


Looking at the architecture slide of Apple's announcement about shifting Macs to ARM, they want people to use them as dev platforms for better iPhone software. Think Siri on-chip, Siri with eyes and short-term context memory.

And as a byproduct perhaps they will work better for hip young consumers too. Or anyone else who is easily distracted by bright colours and simple pictures, which is nearly all of us.


> I think you are vastly underestimating how many people use Mac (or use Windows without using WSL) to develop for the cloud.

The dominance of Macs for software development is a very US-centric thing. In Germany, there is no such Mac dominance in this domain.


To be fair in the UK Macs are absolutely dominant in this field.


Depends very much on what you're doing; certainly not in my area (simulation software) at least, other than for use as dumb terminals.


Yes, in Germany it's mostly Linux and Lenovo / Dell / HP desktops and business-type laptops. Some Macs, too.


I have no idea where in Germany you're based, or what industry you work in, but in the Berlin startup scene, there's absolutely a critical mass of development that has coalesced around macOS. It's a little bit less that way than in the US, but not much.


Berlin is very different from the rest of Germany.


This. According to my experience and validated by Germans and expats alike, Berlin is not Germany :)


In Norway, where I live, Macs are pretty dominant as well. Might be that Germany is the outlier here ;-)


When I go to Ruby conferences, Java conferences, academic conferences, whatever, in Europe, everyone - almost literally everyone - is on a Macintosh, just as in the US.


Most people don’t go to conferences.


Ruby conference goers don't represent all the SW devs of Europe :)


Why do you think not?

And why not Java developers?

They seem pretty polar opposite in terms of culture, but all still turn up using a Macintosh.


Because every conference is its own bubble of enthusiasts and SW engineering is a lot more diverse than Ruby, from C++ kernel devs to Firmware C and ASM devs.

Even the famous FailOverflow said in one of his videos that he only bought a Mac because he saw that everyone at conferences had Macs, so he thought that must mean they're the best machines.

Anecdotally, I've interviewed at over 12 companies in my life and only one of those issued Macs to its employees; the rest were Windows/Linux.


True, but it is full of developers using Windows to deploy on Windows/Linux servers, with Java, .NET, Go, Node, C++ and plenty of other OS-agnostic runtimes.


Given the fact that the US has an overwhelming dominance in software development (including for the cloud) I think that the claim this is only a US phenomenon is somewhat moot. As a simple counter-point, the choice of development workstation in the UK seems to mirror my previous experience in the US (i.e. Macs at 50% or more.)


My experience in Germany and Austria mirrors the GP's experience, with Windows/Linux laptops being the majority and Macs being present in well-funded hip startups.


Same in South Africa (50% Mac, 30% Windows, 20% Ubuntu) and Australia.


> You don't develop cloud software on a mac anyway

I've got anecdata that says different. My backend/cloud team has been pretty evenly split between Mac and Windows (with only one Linux on the desktop user). This is at a Java shop (with legacy Grails codebases to maintain but not do any new development on).


Mac is actually way better for cloud dev than Windows is, since it's all Unix (actual Unix, not just Unix-like). And let's be honest, you'll probably be using docker anyway.


Arguably now, with WSL, Windows is closer to the cloud environment than macOS. It's a true Linux kernel running in WSL 2, no longer a shim over Windows APIs.


Yep. WSL 2 has been great so far. My neovim setup feels almost identical to running Ubuntu natively. I did have some issues with WSL 1, but the latest version is a pleasure to use.


Do you use VimPlug? For me :PlugInstall fails with cannot resolve host github.com


I do use VimPlug. Maybe a firewall issue on your end? I'm using coc.nvim, vim-go, and a number of other plugins that install and update just fine.


That is just utter pain though. I've tried it and I am like NO THANKS! Windows software interoperates too poorly with Unix software due to different file paths (separators, mounting) and different line endings in text files.

With a Mac, all your regular Mac software integrates well with the Unix world. Xcode is not going to screw up my line endings. I don't have to keep track of whether I am checking out a file from a Unix or Windows environment.


Your line-ending issue is very easy to fix in git:

`git config --global core.autocrlf true`

That will configure git to check out files with CRLF endings and change them back to plain LF when you commit files.


Eating data is hardly a fix for anything, even if you do it intentionally.


If the cloud is mostly UNIX-like and not actual UNIX, why would using “real UNIX” be better than using, well, what’s in the cloud?


Agree, although I think this is kind of nitpicking, because "UNIX-like" is pretty much the UNIX we have today on any significant scale.


macOS being certified UNIX makes no sense in this argument. It doesn't help anything, as most servers are running Linux.


I develop on Mac, but not mainly for other Macs (or iOS devices), but instead my code is mostly platform-agnostic. Macs also seem to be quite popular in the web-frontend-dev crowd. The Mac just happens to be (or at least used to be) a hassle-free UNIX-oid with a nice UI. That quality is quickly deteriorating though, so I don't know if my next machine will actually be a Mac.


True, but then the web-frontend dev stuff is several layers away from the ISA, isn't it? As for the Unix-like experience, from reading other people's accounts, it seemed like that was not really Apple's priority. So there are ancient versions of utilities due to GPL aversion and stuff. I suppose Docker, Xcode and things like that make it a bit better, but my general point was that this didn't seem like Apple's main market.


> So there are ancient versions of utilities due to GPL aversion and stuff.

They're not ancient, but are mostly ports of recent FreeBSD (or occasionally some other BSD) utilities. Some of these have a lineage dating back to AT&T/BSD Unix, but are still the (roughly) current versions of those tools found on those platforms, perhaps with some apple-specific tweaks.


It works great though, thanks to Homebrew. I have had very few problems treating my macOS as a Linux machine.


> You don't develop cloud software on a mac anyway

You must be living in a different universe. What do you think the tens of thousands of developers at Google, Facebook, Amazon, etc etc etc are doing on their Macintoshes?


> What do you think the tens of thousands of developers at Google ... are doing on their Macintoshes?

I can only speak of my experience at Google, but the Macs used by engineers here are glorified terminals, since the cloud based software is built using tools running on Google's internal Linux workstations and compute clusters. Downloading code directly to a laptop is a security violation (With an exception for those working on iOS, Mac, or Windows software)

If we need Linux on a laptop, there is either the laptop version of the internal Linux distro or Chromebooks with Crostini.


They individually have a lot of developers, but the long tail is people pushing to AWS/Google Cloud/Azure from boring corporate offices that run a lot of Windows and develop in C#/Java.

edit: https://insights.stackoverflow.com/survey/2020#technology-pr...


>What do you think the tens of thousands of developers at Google, Facebook, Amazon, etc etc etc are doing on their Macintoshes?

SSH to a Linux machine? I get that cloud software is a broad term that includes pretty much everything under the sun. My definition of cloud dev was a little lower level.


This is the same Linus who's recently switched his "at home" environment to AMD...

https://www.theregister.com/2020/05/24/linus_torvalds_adopts...


Which is still x86...?

What point are you trying to make?


So? Linus doesn’t develop for cloud. His argument still stands.


Because when you get buggy behaviour from some library because it was compiled for a different architecture, it's much easier to debug if your local environment is similar to your production one.

Yeah, I'm able to do remote debugging in a remote VM, but the feedback loop is much longer, impacting productivity, morale and time to solve the bug; a lot of externalised costs that all engineers with reasonable experience are aware of. If I can develop my code on the same architecture it'll be deployed on, my mind is much more at peace; when developing on x86_64 to deploy on ARM, I'm never sure whether some weird cross-architecture bug will pop up. No matter how good my CI/CD pipeline is, it won't ever account for real-world usage.


On the other hand, having devs on an alien workstation really puts stress on the application's configurability and adaptability in general.

It's harder in all the ways you describe, but it's much more likely the software will survive migrating to the next Debian/CentOS release unchanged.

It all boils down to the temporal scale of the project.


I'd say that in my 15-year career I've had many more tickets related to bugs that I needed to troubleshoot locally than issues with migrating to a new version/release of a distro. To be honest, it's been 10 years since the last time I had a major issue caused by a distro migration or update.


> "Macs aren't harbingers of anything."

I have to agree. It's not like we're all running Darwin on servers instead of Linux. Surely the kernel is a more fundamental difference than the CPU architecture.


ARM Macs mean more ARM hardware in the hands of developers. That means ARM Docker images that can be run on the hardware at hand, and easier debugging (see https://www.realworldtech.com/forum/?threadid=183440&curpost...).


> They have set literally no trend in the last couple of decades, other than thinness at all costs

Hahaha, then you have not been paying attention. Apple led the trend away from beige boxes. The style of keyboard used. Large trackpads. USB. First to remove the floppy drive. Hardware, software and web design have all been heavily inspired by Apple. Just look at the icons used, first popularized by Apple.

The Ubuntu desktop is strongly inspired by macOS. An operating system with drivers preloaded through the update mechanism was pioneered by Apple. Windows finally seems to be doing this.


Because if you're developing apps to run in the cloud, it's preferable to have the VM running the same architecture that you're developing on.


Maybe what he means is: if Macs are jumping on the trend, man, that must be a well-established trend; they're always last to the party.


Epyc Rome is now available on EC2, and the c5a.16xlarge plan appears to be about the same or slightly cheaper than the Graviton2 plan.

Being cheaper isn't enough here - Graviton needs to be faster, and it needs to do that over generations. It needs to _sustain_ its position to become attractive. Intel can fix price in a heartbeat - they've done that in the past when they were behind. Intel's current fab issues do make this a great time to strike, but what about in 2 years? 3 years? 4? Intel's been behind before, but they don't stay there. Switching to Epyc Rome at the moment is an easy migration - same ISA, same memory model, a vendor that isn't new to the game, etc. But Graviton needs a bigger jump; there's more investment there to port things over to ARM. Whether that investment will pay off over time is a much harder question to answer.


> But Graviton needs a bigger jump, there's more investment there to port things over to ARM.

I agree to some extent, but don't underestimate how much this has changed in the cloud era: it's never been cheaper to run multiple ISAs, and a whole lot of stuff is running in environments where switching is easy (API-managed deployments, microservices, etc.). The toolchains support ARM already thanks to phones/tablets and previous servers, so much code will just run on the JVM, in high-level languages like JavaScript or Python, or in low-level ones like Go and Rust with great cross-compiler support; and hardware acceleration also takes away the need to pour engineer-hours into things like OpenSSL, which might otherwise have blocked tons of applications.

At many businesses that is probably over the threshold where someone can say it's worth switching the easy n% over, since they'll save money now, and if the price/performance crown shifts back, so can their workloads. AWS has apparently already done this for managed services like load balancers, and I'm sure they aren't alone in watching the prices very closely. That's what Intel is most afraid of: the fat margins aren't coming back even if they ship a great next-generation chip.
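As a concrete illustration of the "great cross-compiler support" point: a plain Go service with no architecture-specific code cross-compiles to an arm64 Linux binary from an x86 laptop using nothing but the standard toolchain environment variables. A minimal sketch (the endpoint and port are made up):

    // Build for Graviton from an x86 machine with:
    //   GOOS=linux GOARCH=arm64 go build -o server-arm64 .
    package main

    import (
        "fmt"
        "net/http"
        "runtime"
    )

    func main() {
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            // Report which OS/architecture this binary was built for.
            fmt.Fprintf(w, "hello from %s/%s\n", runtime.GOOS, runtime.GOARCH)
        })
        _ = http.ListenAndServe(":8080", nil) // error ignored in this sketch
    }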


The problem here is that ARM & x86 have very different memory models, not that the ISA itself is a big issue. Re-compiling for ARM is super easy, yes, absolutely. Making sure you don't have any latent thread-safety bugs that happened to be OK on x86 but are now bugs on ARM? That's a lot harder, and it only takes a couple of those to wipe out any potential savings, as they are in the class of bugs that's particularly hard to track down.
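A minimal Go sketch of the kind of latent bug being described, assuming a classic produce-then-publish pattern (variable names are made up; needs Go 1.19+ for atomic.Bool). The racy version is a bug on both architectures, but x86's stronger ordering often lets it "work" anyway, while arm64's weaker model makes it far more likely to bite:

    package main

    import (
        "fmt"
        "sync/atomic"
    )

    var (
        payload int32
        ready   atomic.Bool // the fix: an atomic (or channel/mutex) for the flag
    )

    func producer() {
        atomic.StoreInt32(&payload, 42)
        ready.Store(true) // publishes payload with a proper happens-before edge

        // The racy version would be plain assignments:
        //   payload = 42
        //   readyPlain = true
        // With no synchronization, a reader on arm64 can observe
        // readyPlain == true while still seeing a stale payload.
    }

    func consumer() {
        for !ready.Load() {
            // busy-wait; fine for a tiny example
        }
        fmt.Println("payload:", atomic.LoadInt32(&payload))
    }

    func main() {
        go producer()
        consumer()
    }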


If you do have hidden thread safety bugs you are only one compile away from failure, even on x86.

Some function gets vectorized and a variable shifts in or out of a register? Or one of your libraries gets rebuilt with LTO / LTCG and its functions are inlined.

If your code was wrong, any of that can ruin it and you're left trying to figure it out, and requiring your developers to use exactly one point release of Visual Studio and never, ever upgrade the SDK install.


And precisely for this reason if I were Amazon/AWS I would buy ARM from Softbank right now - especially at a time when the Vision Fund is performing so poorly, and therefore there might be financial distress on the side of Softbank.


I'm not so sure about so many parts of this.

I love working on ARM, but with the licensing model I'm not sure how much of a positive return Amazon would really be able to squeeze from their investment.

It also potentially brings them a truckload of future headaches if any of their cloud competitors raise anti-trust concerns down the road.

Beyond that, I think Apple, ARM and AMD get a lot of credit for their recent designs - a lot of which is due, but quite a bit of which should really go to TSMC.

The TSMC 7nm fabrication node is what's really driven microprocessors forward in the last few years, and hardly anyone outside of our industry has ever heard of them.

I don't know that Amazon couldn't transition to RISC-V if they needed to in a couple of years.


I think it's still a sound investment, not for the ROI, but for a stake in controlling the future. They could mitigate the anti-trust angle a bit by getting Apple, IBM, and maybe Google onboard.

Microsoft controls a lot of lucrative business-centric markets, an advantage that seems to have helped MS Azure surpass AWS in market share. One of Microsoft's weaknesses is its inability to migrate away from x86 in any meaningful way. IBM could use Red Hat to push the corporate server market away from x86 under the guise of lower operating costs, which could deal a tremendous blow to MS, leaving Amazon and Google with an opening to hit the Office and Azure market.


Imagine if Oracle buys it, and it becomes another SUN-type outcome?


>you’d be stupid not to switch over to Graviton2

You overestimate the AWS customer base. Lots of them do silly things that cost them a lot of money.


It's because AWS is designed in such a way that it's very easy to spend a lot, and very difficult to know why.


If you can throw money at the problem and invest engineering resources to do cost optimization later, this is often a valid strategy.

It's often easy to test if scaling the instance size/count resolves a performance issue. If it does, you know you can fix the problem by burning money.

When you have reasonable certainty of the outcome spending money is easier than engineering resources.

And later it's easier for an engineer to justify performance optimizations if the engineer can point to a lower cloud bill.

I'm not saying it's always a well considered balance, just that burning money temporarily is a valid strategy.


>If you can throw money at the problem and invest engineering resources to do cost optimization later, this is often a valid strategy.

If only AWS had thousands of engineers to create a UX that makes cost info all upfront and easy to budget. Clearly it's beyond their capability /s

>I'm not saying it's always a well considered balance, just that burning money temporarily is a valid strategy.

Yes, but I bet a majority of AWS users don't want that to be the default strategy.


> Mac ARM laptops mean cloud ARM VMs.

Why do you think that? Cloud means Linux; Apple does not even have a server OS anymore.

We are seeing some ARM devices entering the server space, but these have been in progress for years and have absolutely nothing to do with Apple's (supposed) CPU switch.


Why do you think developers don't run Linux VMs on macOS laptops?


They absolutely do.

One reason is to get the same environment they get in a cloud deployment scaled down. Another is Apple keeps messing with compatibility, most recent example being the code signing thing slowing down script executions.

Edit: this includes Docker on Mac; it's not native.


When they run Docker that's precisely what they do...


Everyone mentions Intel's high margins on servers and somehow does not consider whether Apple wants those margins too.

> Guys, do you really not understand why x86 took over the server market?

> It wasn’t just all price. It was literally this “develop at home” issue. Thousands of small companies ended up having random small internal workloads where it was easy to just get a random whitebox PC and run some silly small thing on it yourself. Then as the workload expanded, it became a “real server”. And then once that thing expanded, suddenly it made a whole lot of sense to let somebody else manage the hardware and hosting, and the cloud took over.


It was also because Intel was like this giant steamroller you couldn't compete with because it was also selling desktop CPUs.

Sure, they have lower margins on desktop, but these bring sh!t tons of cash; cash you will then use for research and to develop your next desktop and server CPUs. If I believe this page [1], consumer CPUs brought in almost $10 billion in 2019... In comparison, server CPUs generated $7 billion of revenue... And these days, Intel has like >90% of that market.

Other players (Sun, HP, IBM, Digital, ...) were playing in a walled garden and couldn't really compete on any of that (except pricing because their stuff was crazy expensive).

So not only were they sharing the server market with Intel, but Intel was most likely earning more than the sum of it all from its desktop CPUs... More money, more research, more progress shared between the consumer and server CPUs: rinse and repeat, and eventually you will catch up. And they could also sell their server CPUs for a lot less than their "boutique" competitors.

You just can't compete with a player which has almost unlimited funds and executes well...

[1] https://www.cnbc.com/2020/01/23/intel-intc-earnings-q4-2019....


Apple backed away from the server market years ago despite having a quite nice web solution.


Exactly. Yes, some people need memory and multi-core speeds, but for most, it's the cost per "good enough" instance, and that's CapEx and power, both of which could be much lower with ARM.


If you need multi-core speeds, you'd be silly to not go AMD Epyc (~50% cheaper), or ARM.

People paid the IBM tax for decades. Intel is the new IBM. Wherever you look, Intel's lineup is not competitive and it is only surviving due to inertia.


If the operating margins in the article are realistic then there is a lot of room for undercutting Intel if you can convince the customer to take the plunge. That is, if you’re not in an IBM-vs-Amdahl situation where the lower price is not enough.


It isn't quite that simple. Intel has way more engineering resources than AMD, and for some complicated setups like data centers, Intel really does have good arguments that its systems are better tested than the competition's, and Intel does have a better ability to bundle products together than AMD.


Intel is behind in fab technology. I don't think the problem is that Intel's chip designs are what's holding them back. AMD offered better multi core performance and Intel responded with more cores as well. However, I do believe that Intel suffers from an awful corporate environment. There was a story about ISPC [0] with one chapter [1] talking about the culture at the company.

[0] https://pharr.org/matt/blog/2018/04/30/ispc-all.html

[1] https://pharr.org/matt/blog/2018/04/28/ispc-talks-and-depart...


> their systems are better tested than the competition

Yes, Spectre and Meltdown definitely prove that...

I think I heard that argument already, in the 90's. But instead of Intel it was Sun, it was SGI, it was Digital, etc

The truth is that money wins, and while there is inertia, it's more like cartoon inertia, where your guy won't fall off the cliff until he looks down. But then it's too late.


If by tested you mean getting eaten alive by vulnerability reports monthly and further degrading performance, sure. There isn't much rocket science otherwise to "better tested" in a server platform. Either it passes benchmarks of use-case scenarios or it doesn't.


Yes, there is. Server platforms are connected to more storage, higher-bandwidth networking and more exotic socket configurations (more than one CPU). It is one thing to support lots of PCI Express lanes on paper. It is another thing to have a working solution that doesn't suffer degraded performance when all the PCI Express slots are in use at the same time.

None of this is rocket science but it takes money and engineers time to make these things happen. Intel has more of both at the moment.


There are plenty of applications where single-threaded clock speed matters, and Intel still wins by a wide margin there. Cache size is also a factor, and high-end Xeons have more cache than any competing CPU I've seen.


The just announced Intel Xeon Cooper Lake top end processor has about 38.5MB of cache. The AMD Rome top end has 256MB of cache.

https://ark.intel.com/content/www/us/en/ark/products/205684/...

https://www.amd.com/en/products/cpu/amd-epyc-7h12


I'm not sure this is the whole story, Intel has twice the L2 cache as AMD but I'm not sure that's enough to make a huge difference.

Epyc 7H12[1]:

- L1: two 32KiB L1 cache per core

- L2: 512KiB L2 cache per core

- L3: 16MiB L3 cache per core, but shared across all cores.

The L1/L2 cache aren't yet publicly available for any Cooper Lake processors, however the previous Cascade Lake architecture provided:

All Xeon Cascade Lakes[2]:

- L1: two 32 KiB L1 cache per core

- L2: 1 MiB L2 cache per core

- L3: 1.375 MiB L3 cache per core (shared across all cores)

Normally I'd expect the upcoming Cooper Lake to surpass AMD in L1, and lead further in L2 cache. However it looks like they're keeping the 1.375MiB L3 cache per core in Cooper Lake, so maybe L1/L2 are also unchanged.

0: https://www.hardwaretimes.com/cpu-cache-difference-between-l...

1: https://en.wikichip.org/wiki/amd/epyc/7h12

2: https://en.wikichip.org/wiki/intel/xeon_platinum/9282

Edit: Previously I showed EPYC having twice the L1 as Cascade Lake, this was a typo on my part, they're the same L1 per core.


Zen 2 has 4MiB L3 per core, 16 MiB shared in one 4-core CCX.


Thanks, I wrote that while burning the midnight oil and didn't double-check the sanity of those numbers. It's too late to edit mine but I hugely appreciate the clarification.


NP. It's still a huge amount of LLC compared to the status quo. Says something about how expensive it really is to ship all that data between the CCXs/CCDs.


Intel L3 does not equal AMD L3 cache regarding latencies. Depending on the application this can matter a lot. https://pics.computerbase.de/7/9/1/0/2/13-1080.348625475.png


You'd need that latency to be significant enough that AMD's >2x core count doesn't still result in it winning by a landslide anyway, and you need L3 usage low enough that it still fits in Intel's relatively tiny L3 size.

There's been very few cloud benchmarks where 1P Epyc Rome hasn't beaten any offering from Intel, including 2P configurations. The L3 cache latency hasn't been a significant enough difference to make up the raw CPU count difference, and where L3 does matter the massive amount of it in Rome tends to still be more significant.

Which is kinda why Intel is just desperately pointing at a latency measurement slide instead of an application benchmark.


Cache per tier matters a lot, total cache does not tell much. L1 is always per core and small, L2 is larger and slower, L3 is shared across many cores and access is really slow compared to L1 and L2. In the end performance per watt for a specific app is what matters, that is the end goal.


Interesting, that's news to me. Guess Intel just has clock speed then. That's why I still pay a premium to run certain jobs on z1d or c5 instances.

As another commenter pointed out, though, not all caches are equal. Unfortunately, I was not able to easily find access speeds for specific processors, so single-threaded benchmarks are the primary quantitative differentiator.


Given the IPC gains of Zen 2, the single-threaded gap is closing, and even reversed in some workloads.

And I think Xeon L3 cache tops out at about 40MB, whereas Threadripper & Epyc go up to 256MB.


Really? 'Entry-level' EPYCs (7F52) have 256MB of L3 cache for 16 cores.

I don't think there are any Intel CPUs with more than 36MB of L3?



77MB for 56 cores. That's that ploy where they basically glued two sockets together so they could claim a performance per socket advantage even though it draws 400W and that "socket" doesn't even exist (the package has to be soldered to a motherboard).

IIRC the only people who buy those vs. the equivalent dual socket system are people with expensive software which is licensed per socket.


Those applications exist, but not enough to justify Intel’s market cap.


Do you have a source for any of the things you said?


Anecdotal: back when SETI@home was a thing, I was running it on some servers; a 700 MHz Xeon was a lot faster (>50%, IIRC) than a 933 MHz Pentium 3. The Xeon had a much lower frequency and a slower bus (100 vs 133 MHz), but the cache was 4 times larger, and probably the dataset, or most of it, was fitting in cache.


The same happened with the mobile Pentium M with 1 MB (Banias) and 2 MB (Dothan) of cache - you could get the whole lot in cache and it just flew, despite the (relatively) low clock speed. There were people building farms of machines with bare boards on Ikea shelving.


Even worse for Intel, there are lots of important server workloads that aren't CPU intensive, but rely on the CPU coordinating DMA transfers between specialized chips (SSD/HDD controller, network controller, TPU/GPU) and not using much power or CapEx to do so.


> If you need multi-core speeds, you'd be silly to not go AMD Epyc (~50% cheaper), or ARM.

But Amdahl's Law shows us this doesn't make sense for most people.


Graviton is only cheaper because Amazon gouges you slightly less.

Graviton is still almost an order of magnitude slower than a VPS of the same cost, which is around what the hardware and infra costs Amazon.


And since you can already run Windows 10 on ARM (https://docs.microsoft.com/en-us/windows/arm/), it is only a matter of time before we get Windows Server on ARM. And I guess you can already run SQL Server on Linux on ARM, though I am not entirely sure about that.


I would suspect that Apple is likely to enforce exclusive hardware for ARM; just like it does now, for Intel (which is a lot more common than ARM).

The limitation is not technical (as hackintoshes demonstrate); it’s legal.

That said, it would be great to be able to run Mac software (and iOS) on server-based VMs.

I just don’t think it will happen at scale, because lawyers.


Pedantic: actually, cloud ARM VMs and ARM laptops mean eventual Mac ARM laptops. The former two are already widespread, in the Graviton2 you mentioned, the Surface X, and also third-party ones.


There is no mainstream ARM laptop as of this writing.

What are my options if I want an ARM laptop with, say, mobile processor performance close to the i7-8850H found in a 2018 MBP 15, 16GB RAM and a 512GB NVMe SSD, to set up my day-to-day dev environment (Linux + Golang + C++ etc.)?

The Surface X is the only ARM laptop you can easily purchase, but it is Windows-only and the processor is way too slow. There has also been close to zero response from app vendors to bring their apps to the native ARM Windows environment.


How is the X slow? The reviewers have only claimed it was slow when emulating x86, but not in native apps.


Not sure how slow its ARM processor is in actual use, but we know it's far slower than Apple's ARM CPUs.


AFAIK, you can game on the Surface X without much issue. There are numerous YouTube videos of popular games doing 60 FPS on decent settings.

Apple fans just seem to be in denial about being late to the game.

I have to admit, Apple's ARM processors will likely be significantly faster per core. But they are not the driver of the switch to ARM. If anything, Chromebooks were.


Does anyone know if AWS Graviton supports TrustZone?


> Mac ARM laptops mean cloud ARM VMs.

If you develop for Mac, chances are you want your CI to use the same target hardware, which means cloud ARM hardware to run Mac VMs.


Chances indeed, but given how much software is written in cross-platform languages (Java, JS, Ruby, etc.) or how the underlying CPU hardware is abstracted away (most compiled languages), I like to think it doesn't really matter except in edge cases and/or libraries.

Wishful thinking though, probably.


I can abstract away the OS (mostly), but I can’t abstract away the ISA without paying a pretty hefty performance cost.


Except you’re not going to be selling access to a Mac VM running on Graviton anytime soon.


> you’d be stupid

there's still a load of non scale-out services that the world depends upon.


Try Ampere:

https://amperecomputing.com/

Ever since Cavium gave up Arm server products and pivoted to HPC, there hasn't been a real Arm competitor.

Ampere is almost all ex-Intel.




