Intel: Winning and Losing (abortretry.fail)
102 points by rbanffy 1 days ago | 90 comments
gopherloafers 20 hours ago [-]
I started at Intel in 1988 and loved working there up until about 2005. The author of this article did a fantastic job enumerating the launched products, but there were twice as many that were cancelled. It became such a clusterfuck of leaders vying for promotion to bigger projects and taking over flailing ones only to can them after a year. The 80s and 90s were hyper efficient and focused on churning out clear roadmaps. But the fragmentation of the market was something Intel couldn’t handle: its platform didn’t cover all segments, and no matter how hard it tried it couldn’t do everything. I think the market is still reconverging after all the segmentation. The term “ubiquitous computing” was thrown around a lot in 2000, and it finally happened, but it is ARM that won. I think there will be a reconvergence of personal computing platforms and I can’t wait to see who vacuums up all the little guys. But after reading this, damn, I miss launching the 486 and Pentium. Those were some of the best days of my career.
fidotron 1 days ago [-]
The core problem at Intel is they promoted the myth that ISA has no impact on performance to such a degree they started fully believing it while also somehow believing their process advantage was unassailable. By that time they'd accumulated so many worthless departments that turning it around at any time after 2010 was an impossibility.

You could be the greatest business leader in history but you cannot save Intel without making most of the company hate you, so it will not happen. Just look at the blame game being played in these threads, where somehow it's always the fault of individuals newly discovered to be inept, and never the blundering morass of the bureaucratic whole.

phire 17 hours ago [-]
> is they promoted the myth that ISA has no impact on performance

IMO, Intel (and AMD) did prove the impact of a legacy ISA was low enough to not be a competitive disadvantage. Not zero, but close enough for high-performance designs.

In fact, I actually think the need to continue supporting the legacy x86 ISA was a massive advantage to Intel. It forced them to go down the path of massively out-of-order μarches at a point in history where everyone else was reaping massive gains from following the RISC design philosophy.

If abstracting away the legacy ISA was all the massive out-of-order buffers did, then they would be considered nothing more than overhead. But the out-of-order μarch also had a secondary benefit of hiding memory latency, which was starting to become a massive issue at this point in history. The performance gains from hiding memory latency were so much greater than the losses from translating x86 instructions that Intel/AMD x86 cores came to dominate the server, workstation and consumer computing markets in the late 90s and 2000s, killing off almost every competing RISC design (and Intel's own Itanium).
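
A toy sketch of the latency-hiding argument in Python (purely illustrative: the cycle counts and window size are made up, this is not a model of any real core):

    # Toy model: a load misses the cache, and there is a run of independent
    # instructions that don't depend on the load's result.
    MISS_LATENCY = 100       # cycles the miss takes (illustrative)
    INDEPENDENT_WORK = 60    # independent instructions, ~1 cycle each
    OOO_WINDOW = 80          # out-of-order window size (illustrative)

    # In-order core: stalls for the whole miss, then does the other work.
    in_order_cycles = MISS_LATENCY + INDEPENDENT_WORK

    # Out-of-order core: keeps executing independent work (up to its window)
    # while the miss is outstanding, so the miss is largely hidden.
    overlapped = min(INDEPENDENT_WORK, OOO_WINDOW)
    ooo_cycles = max(MISS_LATENCY, overlapped) + (INDEPENDENT_WORK - overlapped)

    print(in_order_cycles, ooo_cycles)   # 160 vs 100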

RISC designs only really held onto the low power markets (PDAs, cellphones), where simplicity and low power consumption still dominated the considerations.

------------------

What Intel might have missed is that x86 didn't hold a monopoly on massively out-of-order μarch. There was no reason you couldn't make a massively out-of-order μarch for a RISC ISA too.

And that's what eventually happened, starting in the mid-2010s. We started seeing ARM μarches (especially from Apple) that looked suspiciously like Intel/AMD's designs, just with much simpler frontends. They could get the best of both worlds, taking advantage of simpler instruction decoding while still getting the advantages of being massively out-of-order.

------------------

You are right about Intel's arrogance, especially assuming they could keep a process lead. But the "x86 tax" really isn't that high. It's worth noting that one of the CPUs they are losing ground to is also x86.

vlovich123 13 hours ago [-]
> IMO, Intel (and AMD) did prove the impact of a legacy ISA was low enough to not be a competitive disadvantage. Not zero, but close enough for high-performance designs.

And Apple proved that it was in fact a significant problem once you factor in performance per watt, which allowed them to completely spank AMD and Intel once those hit a thermal limit. There’s a benefit to being able to decode and dispatch multiple instructions in parallel versus having to emulate that by heuristically guessing at instruction boundaries and backtracking when you make a mistake (among other things).

phire 3 hours ago [-]
> having to emulate that through heuristically guessing at instruction boundaries and backtrack when you make a mistake

Intel/AMD don't use heuristics-based decoding, or backtracking. They can decode 4 instructions in a single cycle. They implement this by starting a pre-decode at every single byte offset (within 16 bytes) and then resolving it to actual instructions at the end of the cycle.

The actual decode is then done the following cycle, but the pre-decoder has already moved up to 4 instructions forward, so the whole pipelined decoder can maintain 4 instructions per cycle on some code.
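
A toy sketch of the idea in Python (purely illustrative; insn_length() stands in for the real length-decode logic, which is far hairier with prefixes, ModRM, SIB and so on):

    MAX_DECODE = 4      # instructions handed onward per cycle

    def predecode(window_bytes, insn_length):
        # window_bytes: the 16-byte fetch window for this cycle.
        # Speculatively compute a length starting at every byte offset;
        # in hardware these all happen in parallel within the cycle.
        lengths = [insn_length(window_bytes, off) for off in range(len(window_bytes))]

        # Resolve the real boundaries by chaining from offset 0.
        starts, off = [], 0
        while off < len(window_bytes) and len(starts) < MAX_DECODE:
            starts.append(off)
            off += lengths[off]
        return starts   # offsets of up to MAX_DECODE real instructions

    # Fake ISA where the first byte of an instruction encodes its length:
    fake_len = lambda buf, off: buf[off]
    window = bytes([2, 9, 3, 9, 9, 1, 4, 9, 9, 9, 9, 2, 9, 1, 1, 1])  # 16 bytes
    print(predecode(window, fake_len))   # [0, 2, 5, 6]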

This pre-decode approach does have limits. Due to propagation delays, 4 instructions over 16 bytes is probably the realistic limit that you can push it (while Apple can easily do 8 instructions over 32 bytes). Intel's Golden Cove did finally push it to 6 instructions over 32 bytes, but I'm not sure that's worth it.

Intel's Skymont shows the way forwards. It only uses 3-wide decoders, but it has three of them running in parallel, leapfrogging over each other. They use the branch predictor to start each decoder at a future instruction boundary (inserting dummy branches to break up large branchless blocks). Skymont can maintain 9 instructions per cycle, which is more than the 8-wide that Apple is currently using. And unlike the previous "parallel pre-decode in a single cycle" approach, this one is scalable. Nothing stops Intel from adding a fourth decoder for 12 instructions per cycle, or a fifth for 15. AMD is showing signs of going down the same path: Zen 5 has two 4-wide decoders, though they can't work on the same thread yet.
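
Conceptually it looks something like this (a rough sketch only; the decoder widths and the "predicted boundary" inputs are illustrative, not Skymont's actual pipeline):

    DECODER_WIDTH = 3
    NUM_DECODERS  = 3

    def clustered_decode(instructions, predicted_starts):
        # predicted_starts: indices where the branch predictor says a decoder
        # may safely begin (branch targets, inserted markers, ...)
        chunks = []
        for d, start in enumerate(predicted_starts[:NUM_DECODERS]):
            end = predicted_starts[d + 1] if d + 1 < len(predicted_starts) else len(instructions)
            # each decoder handles up to DECODER_WIDTH instructions this cycle
            chunks.append(instructions[start:min(end, start + DECODER_WIDTH)])
        # stitch the per-decoder results back into program order
        return [insn for chunk in chunks for insn in chunk]

    insns = ["i%d" % n for n in range(12)]
    print(clustered_decode(insns, [0, 3, 6]))   # 9 instructions this cycle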

adgjlsfhk1 3 hours ago [-]
I don't think Apple's battery life wins are primarily from the ISA. I think it's largely from better targeting and process optimization/ecosystem control. Intel (and somewhat AMD) make most of their money in servers, where what matters is performance/watt in a 100% loaded system. They also are designing for JEDEC RAM and PCIe connectivity (and lots of other industry standards). Most of Apple's efficiency advantage comes at the edges: lowering max clock speed, integrating the RAM to save power, using custom SSDs where the controller is on the CPU, etc.
inkyoto 16 hours ago [-]
> In fact, I actually think the need to continue supporting the legacy x86 ISA was a massive advantage to Intel.

I think this is a myth that Intel (or somebody else) has invented in an attempt to save face. Legacy x86 instructions could have been culled from the silicon and implemented in software as emulation traps – this has been done elsewhere nearly since the first revised CPU designs came out. Since CPUs have been getting faster and faster, and legacy instructions have been used less and less, the emulation overhead would have been negligible to the point that no one would even notice it.
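
The trap-and-emulate idea as a minimal sketch (the opcode and handler are just for illustration, and the emulation is simplified – real AAA also sets CF, for instance):

    # The hardware raises an illegal-instruction fault for a culled legacy
    # opcode; the OS looks up a software handler and emulates it.
    def emulate_aaa(cpu):
        # simplified software version of the old BCD-adjust instruction
        if (cpu["ax"] & 0x0F) > 9 or cpu["aux_carry"]:
            cpu["ax"] = (cpu["ax"] + 0x106) & 0xFFFF
            cpu["aux_carry"] = True
        cpu["ax"] &= 0xFF0F

    LEGACY_HANDLERS = {"aaa": emulate_aaa}

    def illegal_instruction_trap(cpu, opcode):
        handler = LEGACY_HANDLERS.get(opcode)
        if handler is None:
            raise RuntimeError("genuinely unknown opcode: %s" % opcode)
        handler(cpu)   # slow path, but rare enough that nobody notices

    cpu = {"ax": 0x001B, "aux_carry": False}
    illegal_instruction_trap(cpu, "aaa")
    print(hex(cpu["ax"]))   # 0x101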

phire 3 hours ago [-]
We aren't talking about legacy instructions. Those have all been culled and replaced by microcode (in fact, most of them were always microcode from the very first 8086; they just never got non-microcoded versions through the 286, 386, 486, Pentium era).

We are talking about how the whole ISA is legacy. How the basic structure of the encoding is complex and hard to decode. How the newer instructions get longer encodings. Or how things that can be done with a single instruction on some RISC ISAs take 3 or 4 instructions.

xscott 14 hours ago [-]
They tried this with Itanium and got beaten up constantly about how legacy performance was bad. Personally, I agree with you, but the market isn't rational. This paved the way for AMD to eat some of their lunch by making a "compatible" 64 bit ISA. Itanium could've been great for the kinds of workloads I was interested in at the time.
high_na_euv 1 days ago [-]
https://chipsandcheese.com/p/arm-or-x86-isa-doesnt-matter
ksec 22 hours ago [-]
>You could be the greatest business leader in history but you cannot save Intel without making most of the company hate you, so it will not happen.

This is deep. It also highlights why it is easier to hire somebody from outside the company rather than promote from within.

jxjnskkzxxhx 8 hours ago [-]
> they promoted the myth that ISA has no impact on performance

Could you clarify? What is "ISA", and why do you think it would have an impact on performance?

scrubs 17 hours ago [-]
In the 80s through 1995, SPC, TQM, and lean engineering were a thing ... think the Deming Prize or the Malcolm Baldrige National Quality Award. That thinking deals squarely with the morass.

Unfortunately, business management succumbs too easily to short-term profit (because of tunnel vision on shareholder return) and trendiness. There are people in fashion who say of business: geez, you gotta stand for something for more than 10 minutes. Get some class!

inkyoto 16 hours ago [-]
> […] they promoted the myth that ISA has no impact on performance […]

Once instructions get past the instruction decoder, they have not been x86 since Pentium Pro on the server and Pentium II on the desktop. AMD has made great strides in optimising the instruction decoder performance to minimise the translation overhead on most frequently used instructions, and the pathological cases such as 15-byte-long instructions are no longer in active use anyway. There are legacy instructions that are still there, but I don't think they affect performance as they are mere dead silicon that is getting rationalised with X86S, which culls everything non-64-bit.

A more solid argument can be made that x86 is register-starved, with real implications for performance, and that is true, especially for 32 bits. It is true to a certain extent with the 64-bit ISA (32 GPRs is still better than x86-64’s 16 GPRs), but various SIMD extensions have ameliorated the pain substantially. The remaining stuff, such as legacy CISC direct-memory-access instructions… compilers have not been emitting that stuff for over twenty years, and it just takes up space in the dead silicon, lonely, waiting, and yearning for a fateful moment of somebody finally giving it a tickle, which almost never comes, so the legacy instructions just cry and wail in deafening silence.

An ISA was a decisive and critical factor from the performance perspective in the 1970s-80s, and, due to advances in the last few decades, including in-core instruction fusion, register renaming, coupled with enormously large register files, out-of-order, as well as speculative execution, etc., it is no longer clear-cut or a defining feature. We now live in the post-RISC era where old and new approaches have coalesced into hybrid designs.

Personally, I have never been a fan of the x86 ISA, although less from the technical perspective[0] and for a completely different reason – the Wintel duopoly had completely obliterated CPU alternatives, leading the CPU industry to stagnate, which has now changed and has given Intel a headache of epic proportions and haemorrhoids.

[0] The post-AVX2 code modern compilers generate is pretty neat and is reasonably nice to look at and work with. Certainly not before.

AnotherGoodName 1 days ago [-]
I'll offer the viewpoint that the article reads like a listing of spec sheets and process improvements for CPUs of that era and not much else. Not really worth reading, imho.

I'd love some discussion on why Intel left XScale and went to Atom, and I think Itanium is worthy of discussion in this era too. I don't really want a raw listing of [In year X Intel launched Y with SPEC_SHEET_LISTING features].

MangoCoffee 1 days ago [-]
>Itanium

IMO, Intel took us from common, affordable CPUs to high-priced, "Intel-only" CPUs. It was originally designed to use Rambus RAM, and it turned out Intel had a stake in that company. Intel got greedy and tried to force the market to go the way it wanted.

Honestly, AMD saved the x86 market for us common folks. Their approach of extending x86 to 64-bit and adopting DDR RAM allowed for the continuation of affordable, mainstream CPUs. This enabled companies to buy tons of servers for cheap.

Intel’s u-turn on x86-64 shows even they knew they couldn’t win.

AMD has saved Intel’s x86 platform more than once. The market wants a common, gradual upgrade path for the PC platform, not a sudden, expensive, single-vendor ecosystem.

sbierwagen 1 days ago [-]
Itanium didn't support RDRAM until Itanium 2.
ethan_smith 18 hours ago [-]
Intel selling XScale to Marvell in 2006 was their pivotal strategic error - they abandoned a viable ARM-based solution right before the smartphone explosion, betting everything on x86 compatibility. Atom's power inefficiency compared to ARM designs then left them completely unprepared for the mobile revolution, costing them the entire smartphone/tablet market.
rubatuga 17 hours ago [-]
Actually, Atom was very efficient, but they tended to pair it with a horribly inefficient southbridge that would idle at many times the full CPU power draw (2-3W or more).
deaddodo 1 days ago [-]
> I'd love some discussion on why Intel left XScale and went to Atom

I thought it was pretty obvious. They didn't control the ARM ISA, and ARM Ltd's designs had caught up to and surpassed XScale's innovations (superscalar, out-of-order pipelining, MIPS/W, etc.). So instead of innovating further, they decided to launch a competitor on their own ISA.

KerrAvon 1 days ago [-]
Intel at the time was clear about it: they wanted to concentrate fully on x86. They thought they could do everything with x86; hadn’t they already won against their RISC competitors by pushing billions into x86? Why would ARM be any different? Shortsighted, in hindsight, but you can see how they got there.
bsder 24 hours ago [-]
> i think Itanium is worthy of discussion in this era too

Itanium was a massive technical failure but a massiver business success.

Intel spent a gigabuck and drove every single non-x86 competitor out of the server business with the exception of IBM.

icedchai 8 hours ago [-]
I remember the plethora of Unix workstations and servers in the 90's: Sun, HP, SGI, DEC, IBM. I'm skeptical Itanium killed them. It was software: plain old x86 and Linux killed them.
acroyear 1 days ago [-]
Mr. Magoo-ism galore.

Intel constantly tried to bring in visionaries, but failed over and over. With the exception of Jim Keller, Intel was duped into believing in incompetent people. At a critical juncture during the smartphone revolution it was Mike Bell, a full-on Mr. Magoo. He never did anything worth mentioning after his stint with Intel - he was exposed as a pretender. Eric Kim would be another. Murthy Renduchintala is another. It goes on and on.

Also critical was the failure of an in-house exec named Anand Chandrasekher, who completely flubbed the mega-project cooperation between Intel and Nokia to bring about Moblin OS and create a third phone ecosystem in the marketplace. WHY would Anand be put in charge of such an important effort?????? In Intel's defense, this project was submarined by Nokia's Stephen Elop, who usurped their CEO and left Intel standing at the altar. (Elop was a former Microsoft exec, and Microsoft was also working on its foray into smartphones at the time... very suspicious.)

XScale was mishandled; Intel had a working phone with XScale prior to the iPhone being released, but Intel was afraid of fostering a development community outside of x86 (Ballmer once chanted: developer, developer, developer). My guess is that ultimately Intel suffers from the Kodak conundrum, i.e. they have probably rejected true visionaries because their ideas would always threaten the sacred cash cows. They have been afraid to innovate at the expense of profit margins (short-term thinkers).

thijson 9 hours ago [-]
Someone I knew who worked there said that the CPU business was like a giant tree: no other business could grow in its shade. I remember Mike Bell was leading the x86 phone project, and later wearables. An interesting data point is that he ended up at Rivian, but didn't last long there. A lot of the hype around him was that he kept claiming credit for the iPhone. He would threaten to leave Intel, and then Otellini would throw more money at him.
kevvok 18 hours ago [-]
> Murthy Renduchintala

He was a joke at Qualcomm before he went to Intel too. That Intel considered snagging him a coup was a consistent source of amusement.

AtlasBarfed 22 hours ago [-]
What's interesting to me is that Intel was constantly shedding people in 2008 and 2009 despite high revenues, high market share, tech leads, etc.

Smacks of financialization and Wall Street-centric managerial groupthink, rather than keeping the talented engineers to fight the coming mobile wars, which were already very, very apparent (thus the Atom), or even the current failing war in discrete graphics.

Once the MBAs gain control of a dynamic technology company (I saw it at Medtronic personally), the technology and talent soul of the company is on a ticking timer of death. Medtronic turned into an acquire-tech-and-products-via-buyout company rather than building in-house, and Intel was also on a treadmill of acquire-and-destroy (at least from my perspective, Medtronic sometimes acquired companies that became successful product lines, but Intel always seemed clueless in executing its acquisitions).

Looking at all of Intel's 2000s acquisitions: sure, they show Intel was "trying" at mobile, in the "signal Wall Street we are trying by acquiring companies so we keep our executive positions" sense, but they did nothing about actually chasing what mobile needed: low power, high performance.

acroyear 21 hours ago [-]
Shedding people in the USA, yes, while bringing on hordes of cheap engineers from India and Malaysia at the same time. Labor arbitrage was probably MBA-think as well, to your point. (Also, Intel was sued along with other big wheels for collusion, i.e. agreeing not to hire from one another in the US to keep salaries down; they settled that class action suit.) Managed demolition of a once-great company.
brcmthrowaway 1 days ago [-]
Is Raja Koduri another phony?
acroyear 1 days ago [-]
I don't know tbh, heard both good and bad things .. he was brought in after many of the problems had already become serious. He probably had a very difficult charter.
ianand 1 days ago [-]
The site’s domain name is the best use of a .fail tld ever.
jagged-chisel 1 days ago [-]
OT from TFA, so hijacking your thread …

I don’t recall if there was ever a difference between “abort” and “fail.” I could choose to abort the operation, or tell it … to fail? That this is a failure?

¯\_(ツ)_/¯

bombcar 22 hours ago [-]
Take reading a file from disk.

Abort would cancel the entire file read.

Retry would attempt that sector again.

Fail would fail that sector, but the program might decide to keep trying to read the rest of the file.

In practice abort and fail were often the same.

jagged-chisel 7 hours ago [-]
Makes sense. Maybe I ran across a proper use a time or two back then and just don’t remember. But the two being the same was the overwhelming experience.
jxjnskkzxxhx 8 hours ago [-]
Does it feel to anyone else like the article ends abruptly? We're left in 2013 or so. What happened?
ashvardanian 1 days ago [-]
The article mostly focuses on the 2008-2014 era.
BirAdam 22 hours ago [-]
Yes. It is part of a series in which I cover Shockley -> Fairchild -> Intel, up to last month.
igtztorrero 1 days ago [-]
The Atom model was the breaking point for Intel. No one forgives them for wasting their money on Atom-based laptops, which were slower than a tortoise. Never play with the customer's intelligence.
AlotOfReading 1 days ago [-]
I was working as a contractor in this period and remember meeting a thermometer company. They had made the extremely questionable decision to build it with Intel Edison, which used an even lower performance product line called Quark. The Edison chips baffled me. Worse performance than many ARM SoCs at the time, far worse efficiency, and they cost so much. That thermometer had a BOM cost of over $40 and barely enough battery life for its intended purpose.
duskwuff 17 hours ago [-]
Quark/Edison was a truly inexplicable product offering. I have to wonder if Intel believed there was a market opportunity for "we want to build an embedded system, but all our existing code is on x86 and we don't want to port to ARM". (There wasn't - especially not as late as 2014.)
iwontberude 1 days ago [-]
I could tell they were cooked when they bought McAfee.
acroyear 1 days ago [-]
yes, this was a direct consequence of the Craig Barrett mentality. Intel wanted a finger in many pies, since it could not predict what would be the next 'thing'. So they went on multiple acquisition sprees hoping to hit gold on something. I can't think of a single post-2000 acquisition that succeeded.
jbverschoor 1 days ago [-]
They what??
01HNNWZ0MV43FF 22 hours ago [-]
Oh I forgot that one. That's hilarious.

> McAfee Corp. ... Intel Security Group from 2014 to 2017

https://en.wikipedia.org/wiki/McAfee

Demiurge 1 days ago [-]
I've always wondered: how can some smart companies, smart film directors, or smart musicians fail so hard? I understand that, sometimes, it's a matter of someone abusing a project for personal gain. Some CEOs and workers just want to pitch, pocket the money, and move on, but the level of absurdity of some of the decisions made is counterproductive even to the 'get rich quick' scheme. I think there are self-perpetuating, echo-chamber self-delusions. Perhaps this is why an outside perspective can see the painfully obvious. This is probably why having some churn with the outside world, and understanding where the periphery of outside, unbiased opinion lies, is very important.
foobarian 1 days ago [-]
At some point organizations get taken over by the 9-5 crowd who just want to collect a paycheck and live a nice life. This also leads the hard-driving talent to leave for more aggressive organizations, leaving behind a more average team. What leaders remain will come up with not-so-great ideas, and the rank and file will follow along because there won't be a critical mass of passionate thought leaders to find a better way.

I don't mean to look down on this kind of group; I am probably one of them. There is nothing wrong with people enjoying a good work-life balance at a decent-paying job. However, I think there is a reality that if one wants a world-best company creating world-best products, this is simply not good enough. Just like a team of weekend warriors would not be able to win the Super Bowl (or even ever make it anywhere close to an NFL team) - which is perfectly fine! - the same way it's not fair to expect an average organization to perform world-champion feats.

mattkevan 1 days ago [-]
Disagree. 9-5 working is fine, and probably more efficient long term than permanent crunches.

Organisations fail when the ‘business’ people take over. People who let short-term money-thinking make the decisions, instead of good taste, vision or judgement.

Think Intel when they turned down making the iPhone chips because they didn’t think it’d be profitable enough, or Google’s head of advertising (same guy who killed Yahoo search) degrading search results to improve ad revenue.

Apple have been remarkably immune to it post-Jobs, but it’s clear that’s on the way out with the recent revelations about in-app purchases.

georgeburdell 22 hours ago [-]
Nah I’ve been on both sides of the fence. 9-5ers may reliably accomplish tasks through superior discipline, but they don’t do the heroics that really move individual teams forward.
ryandrake 19 hours ago [-]
Relying on "heroics" often indicates a process problem. This thread is kind of giving me a "Grindset / HustlePorn" vibe. With good decision making, focus, and discipline, 9-5 employees absolutely can make great things. And history is littered with the burnt-out husks of "hero" engineers working 120 hour weeks only to have their company fail and get sold for pennies on the dollar.
antihipocrat 21 hours ago [-]
Once the MBAs take over there is less incentive provided to staff to innovate and disrupt internal products and services.

The innovators in the company are likely correlated with doing more than 9-5. These people get frustrated that their ideas no longer get traction and leave the company.

Eventually what's left are the people happy to just deliver what they're told without much extra thought. These people are probably more likely to just clock in the hours. Any remaining innovators now have another reason to become even more frustrated and leave.

mattkevan 17 hours ago [-]
Confirmation bias. We only hear about the heroics that worked. Plenty of heroes end up in unmarked graves. Teams move forward through trust, clear goals and good processes. Individuals may want to be heroic once those elements are in place, but it’s not going to work without them.

Companies die when the sort of managers take over who see their job as to manage, taking pride in not knowing about the product or customers, instead of caring deeply about delivering a good product. The company may continue for years afterwards, but it’s a zombie, decaying from the inside.

keyringlight 1 days ago [-]
I wonder if there will be a similar situation at Nvidia, which apparently faces a challenge with so many of its employees being rich now that the stock has rocketed up in value, which could raise concerns about motivation or whether skilled and knowledgeable employees will leave.
iwontberude 1 days ago [-]
I think many Nvidia employees will stick around because their newfound wealth, plus being at the biggest, most important company in the world, will give them insight into the market they invest in. I make an order of magnitude more day trading than as a software engineer at a Mag7 company, and I stay employed for the access to the way modern businesses think. Companies like mine are an amalgamation of management and engineering from other Silicon Valley companies, so the tribal knowledge gets spread around to my neck of the woods.
Demiurge 24 hours ago [-]
I don’t think that’s common
nikanj 1 days ago [-]
Essentially no organizations actually reward telling your superiors that they're wrong. You pretend to sip the kool-aid and work on your resume. If one or two high-ranking leaders are steering the ship to the rocks, there's basically nothing the rank-and-file can do
igtztorrero 1 days ago [-]
Who doth not answer to the rudder shall answer to the rock.
bsder 24 hours ago [-]
> This is probably why having some churn with the outside world, and also understanding what is the periphery of the outside, unbiased opinion is, is very important.

Maximally efficient is minimally robust.

Squeezing every penny out of something means optimizing perfectly for present conditions--no more, no less. As long as those conditions shift slowly, slight adjustments work.

If those conditions shift suddenly though, death ensues.

scrubs 17 hours ago [-]
Really love this point. Perfect optimization depends on maximal context knowledge, which by construction is only what we know now.
KerrAvon 1 days ago [-]
It doesn’t even have to be that negative. With the best intentions in the world, it’s rare to have a CEO who is fundamentally capable of understanding both the technology and the viable market applications of that technology. Steve Jobs didn’t manage to do it at NeXT.
acroyear 1 days ago [-]
NeXT was a failed rocket launch (analogous to the early rocket failures within SpaceX). A great step forward and a necessary step in the evolution of the PC. I thought NeXT workstations were pretty bad-ass for their time and place. Recall that only 3 years prior to NeXT came computers like the Atari ST... what a vast difference!!
icedchai 8 hours ago [-]
Also remember the original NeXT workstation was incredibly expensive compared to the "consumer" 68K machines like the ST, Amiga, and even Mac. The cube was roughly $6500 at the time (late 80's money, close to $18K today!). The base system had a magneto-optical disk and didn't even include a hard drive.

The NeXT hardware was massively underpowered for the software it ran. Other major workstation vendors like Sun were already moving to their own RISC hardware.

buescher 1 days ago [-]
The original NeXT computer was a gorgeously sexy machine but slow compared to competitive workstations and considered very expensive for what it was at the time. It also didn't have the software ecosystem of a less expensive loaded PC or loaded Mac II. It's easy to look back with hindsight and rose-tinted glasses, squint a little, and see a macOS machine but it wouldn't be that for many years.
zozbot234 1 days ago [-]
I mean, the NeXT, Atari ST and Mac computers around that time were all m68k-based... And the Atari ST was the cheapest by far, since it was competing in the home computer market.
buescher 8 hours ago [-]
The Atari ST and similar machines like the Amiga and compact Macintoshes other than the SE/30 were not its competition, any more than the Sega Genesis was. Its immediate competition included Sun and SGI workstations (as well as other workstations) and the Mac II series - and for specific tasks, loaded 386DX and 486DX PCs. Sun was pivoting at that time to the SPARC platform and SGI to the MIPS platform, both away from Motorola 68K.
icedchai 7 hours ago [-]
There were some high-end Ataris and Amigas (Atari TT030, Amiga 3000, etc.) but they came out a bit later. There was even the A3000UX, which ran a Unix port!

Still, I agree. The 68K workstation was essentially obsolete by the time NeXT shipped. Sun was shipping early SPARC systems around the same time. The writing was on the wall. No wonder they didn't stick with their own hardware for very long.

icedchai 8 hours ago [-]
I had an early Atom "server" board. Boy was that thing slow.
acroyear 1 days ago [-]
Atom was shit. A desperation move. I was so embarrassed to have recommended a Poulsbo laptop to a friend; it was the worst machine I have ever seen.
zozbot234 1 days ago [-]
The early Atoms had pretty good performance per watt compared to Intel's other offerings. The whole 'netbook' and 'nettop' market segment was pretty much enabled by the Atom chips, and similar machines are still around nowadays. The E-cores found in recent Intel generations are also very Atom-like.
acroyear 1 days ago [-]
About a year after netbooks came out, the iPad was in the wild and it destroyed any chance of these ever catching on... sure, they were cheaper, but the user experience on a tablet was just so much better (and tablets got cheaper fast).
ewoodrich 16 hours ago [-]
I basically only see them referenced mockingly these days, but man, I loved the netbook era. A 200 dollar computer dual-booting Ubuntu and Windows XP (just to play Counter-Strike 1.6 and Age of Empires) was a dream come true for high-school me.

I got the original iPad as a graduation present, and as futuristic as it was, it quickly lost its lustre for me thanks to Apple's walled garden.

It took a few more years until I was rocking Debian via Crostini on the first Samsung ARM Chromebook to scratch that low-cost Linux ultraportable itch again (with about triple the battery life and a third the thickness as a bonus).

adgjlsfhk1 21 hours ago [-]
I feel like the 2012 Atoms made some sense. What baffles me is that Atom was complete shit until 2020. Intel sold laptop chips in 2022 that didn't support FMA or AVX2 because they used an Atom-designed E-core that didn't support them.
dash2 1 days ago [-]
These are the years when Intel lost dominance, right? This article doesn't seem to show much insight as to why that happened or what caused the missteps.
BearOso 1 days ago [-]
Intel really lost dominance when 14nm stagnated. This article only goes up to that point.
mrandish 1 days ago [-]
Yep, in 2014 Intel's Haswell architecture was a banger. It was one of those occasional node+design intersections which yields a CPU with an unusually long useful lifespan due to a combination of Haswell being stronger than a typical gen and the many generations that followed being decidedly 'meh'. In fact, I still run a Haswell i5 in a well-optimized, slightly overclocked retro gaming system (with a more modern SSD and GFX card).

About a year ago I looked into what practical benefits I'd gain if I upgraded the CPU and mobo to a more recent (but still used) spec from eBay. Using it mainly for retro game emulation and virtual pinball, I assessed single core performance and no CPU/mobo upgrade looked potentially compelling in real-world performance until at least 2020-ish - which is pretty crazy. Even then, one of the primary benefits would be access to NVME drives. It reminded me how much Intel under-performed and, more broadly, how the end of Moore's Law and Dennard Scaling combined around roughly 2010-ish to end the 30+ year 'Golden Era' of scaling that gave us computers which often roughly doubled performance across a broad range of applications which you could feel in everyday use - AND at >30% lower price - every three years or so.

Nowadays 8% to 15% performance uplift across mainstream applications at the same price is considered good, and people are delighted if the performance is >15% OR if the price for the same performance drops >15%. If a generation delivers both >15% performance AND >15% lower price, it would be stop-the-presses newsworthy. Kind of sad how far our expectations have fallen compared to 1995-2005, when >30% perf at <30% price was considered baseline, >50% at <50% price was good, and ~double perf at around half price was "great deal, time to upgrade again boys!".

ls612 1 days ago [-]
Intel lost dominance in the 2017-2019 era. The rise of Ryzen and Apple finally deciding to switch to Apple Silicon were the two fundamental blows to Intel. They have been able to make a brief comeback in 2021-2022 with Alder Lake but quickly fell behind again and now have staked everything on 18A being competitive with TSMC N2 this year.
aurizon 1 days ago [-]
Intel is a failed monopolist, unlike Apple! So is IBM with MCA, the Micro Channel Architecture.
scrubs 17 hours ago [-]
Well, not quite... Apple/NeXT spent plenty of years not making money... their major markets were students and graphic design, which don't spend huge. The Microsoft world was making piles of $ selling business apps.

Yes, they eventually got it right post-Newton, after spending a lot of time on the outside of the S&P 500.

aurizon 38 minutes ago [-]
Yes, BlackBerry was run by dinosaurs - they had a chance and wasted it. The Apple ecosystem absorbed it all, save for Google/Android, which stayed free.
acroyear 1 days ago [-]
Yes, they tried with the 'Compute Continuum'... but this never panned out. They spent loads of bandwidth and money trying to bring this reality into being, but it failed miserably. They assumed every user would have a smart TV, smartphone, tablet, and desktop... all running their hardware/software. Turns out, no - they won't. They didn't "see" that the phone would dominate the non-business segment as it has.
mrandish 1 days ago [-]
I think a key reason they missed mobile is that it happened during Intel's peak dominance and growth. Mobile meant smaller, less powerful chips at lower prices and lower margins than Intel's flagship CPUs of that era. The founders who built the company were gone, and Intel was a conglomerate run by people hired/promoted for managing existing product/category growth, not for discovering and homesteading new categories. They managed the conglomerate with a portfolio approach of assessing new opportunities on the things Wall Street analysts focus on: margins, total revenue, projected market size and meta-metrics like 'return on capital'.

It's classic Christensen "Innovator's Dilemma" disruption. Market-leading incumbents run by business managers won't assess emerging, unproven new opportunities as being worth serious sustained investment compared to the existing categories they're currently dominating.

aurizon 1 days ago [-]
They wasted the $$ that could have saved Intel on buying shares back into the treasury to appease hedge fund managers and accountants and to increase the share price/yield - a true 'bonfire of the vanities' - not to mention the 'Shitanium': born dead, and all attempts at resuscitation failed. That one also almost killed HP - it limps along, a broken thing.
acroyear 1 days ago [-]
managers, yeh, intel luvs managers ;)
aurizon 1 days ago [-]
Yes, the flowering of Moore's Law - especially with SSD and memory density - is still unfolding, to the point that an iPhone/Android has the power of a high-end workstation from around the year 2000; same with CMOS optical sensor density and patterned lenses.
acroyear 1 days ago [-]
I don't think it's just the performance... it's a form-factor paradigm shift on the consumer end. The younger generations just don't care about screen real estate as much as Gen X and early Millennials did. The devices became (surprisingly) much more addictive than people expected, and consequently the devices went into pockets, into bed with them... etc. Sad, really.
ryao 21 hours ago [-]
I would argue that IBM failed with the BIOS: they assumed nobody could make a compatible machine without their authorization because they controlled the BIOS, and the idea of a clean-room reimplementation never dawned on them. It did dawn on Compaq. MCA came afterward.
musicale 19 hours ago [-]
> they assumed nobody could make a compatible machine without their authorization since they controlled the BIOS and the idea of a clean room reimplementation never dawned on them

So you're saying that they were somehow unaware of how new BIOS implementations were used in CP/M to port it to new systems?

And that they distributed the BIOS source code with every IBM PC... to make it harder for competitors to build compatible machines due to copyright claims?

And that they were somehow unaware DOS had largely reimplemented the CP/M design and API? (Though DOS's FAT filesystem was a successor to FAT8 from Microsoft's Disk BASIC rather than CP/M's filesystem.)

ryao 17 hours ago [-]
Your use of straw man arguments in response to a straightforward historical description is both inappropriate and condescending. You may think IBM’s executives were naïve, but that does not change the actual history of the IBM PC. IBM prioritized time-to-market by using off-the-shelf components and assumed their control of the BIOS would prevent compatible clones. That assumption was proven wrong when Compaq successfully did a clean-room implementation of the PC BIOS, without violating IBM’s IP. This is a well-documented part of computing history.
jbverschoor 1 days ago [-]
Their domain name is probably most of their market cap
BirAdam 22 hours ago [-]
"They" in this case is just me, and I make very little money off of my writing. I write tech history because I want to, and there's little other reason.
jbverschoor 14 hours ago [-]
Oh I meant Intel :)

Sorry for the confusion.