The Most Hated PC CPU

  • #1
Vanadium 50
This was triggered by the thread on the collapse of Adobe's PrintGear.

There have been a lot of CPUs that did not feel much love: the 80286, the AMD Bulldozers, the Celeron D, but the one that got absolutely creamed in public opinion was the IDT WinChip. Which is ironic, as it was a technological success.

In the late 1990s, the most common CPU socket was the so-called Socket 7. Unlike today, these motherboards would accept CPUs from several vendors: Intel, AMD, IBM/Cyrix, and others. You could take a machine with an Intel CPU, pop it out, put in one from AMD, and go on your way. This, of course, put a lot of pressure on CPU makers to build better and better chips.

At the time, Intel was selling their Pentium MMX line in the $400-500 range. The competitors were selling similarly priced chips with similar performance (AMD) or slightly slower chips for a little less money.

IDT came along and asked "is this the optimal thing to do?" So they profiled a lot of desktop applications and discovered:
  • The CPU spent most of its time doing loads and stores (I don't know why this is ever a surprise)
  • Fancy features like out-of-order execution take a lot of silicon, but only help speed a little.
  • Floating-point is rarely used. You want something there, as fixed point emulation was up to 1000x slower, but whether it was 700x faster or 1500x faster made little difference.
This let them use a much smaller piece of silicon, which cut costs, and produce a chip that was a little slower than its competition for a lot less money. It sold for $90.

How did they beef up performance? They used eight times as much cache as the competition.

So, why was it hated?

(1) If you already owned a Socket 7 computer, there was no reason to spend $90 on a less performant CPU. If you didn't, you could save some money, sure, but it's not a factor of 4 or 5; it's more like 30%.

(2) Benchmarks of the day were more CPU-intensive than typical application code, so this chip underperformed.

(3) The idea of a "gaming PC" was just starting to evolve, and gaming workloads differ from the "business workloads" that the chip was optimized for.

The irony is that the idea was a success, even if the product was not. What are today's Intel E-Cores? A simpler CPU connected to a boatload of cache.

It's impossible to tell, but had this come out in 2004 instead, appropriately scaled, it could have been a fierce competitor to the new dual-core Pentiums: a quad core that cost less and used less power. But the market zigged when they thought it would zag.
 
  • Like
Likes davenn
  • #2
This happens in software, too. My first experience was with Lattice C. It worked great and did the job. Microsoft rebranded it and sold it until they developed their in-house C compiler, which shared code with their other language compilers.

Lattice C was effectively dead on PC-DOS and MS-DOS.

Another was VisiCalc, which pioneered the spreadsheet, but Lotus 1-2-3 destroyed it.

https://en.wikipedia.org/wiki/VisiCalc

and the list goes on...
 
  • Like
Likes davenn
  • #3
When I joined a software engineering team at NASA Ames, most system programmers worked on the DEC PDP series, distrusting and maligning the newer VAX CPUs. Management assigned me to develop software and system protocols for VAX-11 VMS 'mainframes', loved by application programmers yet loathed by the PDP diehards.

VMS functioned well with a few tweaks, while VAX internals such as STARnet provided excellent hardware interfaces and near-real-time performance. I worked happily on various VAX platforms under several NASA projects until we replaced them with Sun Microsystems servers running Solaris, a not-bad version of UNIX.
 
  • Like
Likes WWGD and davenn
  • #4
I worked on Honeywell 6000 mainframes at GE but really wanted to work with VAXen after I saw one at a local university. They just looked so modern and so cool.
 
  • Like
Likes Klystron
  • #5
jedishrfu said:
This happens in software, too
Can you clarify what "this" is? There are several possibilities.

jedishrfu said:
Lotus 1-2-3
Lotus 1-2-3 had two huge advantages over its competition, one of which led to its demise.

(1) It would properly determine which cells depend on which other cells and execute them in order (a topological sort of the sheet; see the sketch below). Previous spreadsheets worked "row-wise" or "column-wise".

(2) It had a macro language. This was truly out of control. Macros would literally run for days, on PCs dedicated to run a single spreadsheet. On the plus side, it moved effort from senior accountants to junior bookkeepers, but on the minus, the spreadsheet logic was an unintelligible mess. Auditors refused to sign off on this, management got spooked, and a lot of this turned into procedural code run as scheduled jobs.
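
Point (1) amounts to a topological sort of the cell dependency graph. Here is a minimal sketch of the idea in C (Kahn's algorithm, with a hypothetical four-cell sheet; this is not Lotus's actual code):

#include <stdio.h>

#define NCELLS 4

/* deps[i][j] != 0 means cell i depends on cell j, so j must be
   recalculated first. Hypothetical sheet: C2 = A1 + B1, D2 = 2 * C2. */
enum { A1, B1, C2, D2 };
static const char *name[NCELLS] = { "A1", "B1", "C2", "D2" };
static const int deps[NCELLS][NCELLS] = {
    [C2] = { [A1] = 1, [B1] = 1 },
    [D2] = { [C2] = 1 },
};

int main(void)
{
    int indeg[NCELLS] = { 0 }, order[NCELLS], head = 0, tail = 0;

    /* Count how many cells each cell waits on. */
    for (int i = 0; i < NCELLS; i++)
        for (int j = 0; j < NCELLS; j++)
            indeg[i] += deps[i][j];

    /* Input cells (no dependencies) can be computed immediately. */
    for (int i = 0; i < NCELLS; i++)
        if (indeg[i] == 0)
            order[tail++] = i;

    /* Finishing a cell may unblock the cells that depend on it. */
    while (head < tail) {
        int j = order[head++];
        for (int i = 0; i < NCELLS; i++)
            if (deps[i][j] && --indeg[i] == 0)
                order[tail++] = i;
    }

    if (tail < NCELLS) {
        fprintf(stderr, "circular reference\n");
        return 1;
    }
    for (int i = 0; i < NCELLS; i++)
        printf("recalc %s\n", name[order[i]]);   /* A1 B1 C2 D2 */
    return 0;
}

A "row-wise" or "column-wise" spreadsheet, by contrast, needs multiple recalculation passes (or quietly gives stale answers) whenever a formula refers to a cell that comes later in its scan order.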

The Lotus people were always nice to me. Partly because I found a bug and went to them before going to the press.
 
  • #6
I would not call a VAX a "hated CPU", except by people who were deeply invested in others (370, PDP). People loved it. People were happy to pay the price/performance penalty to have a VAXStation over a unix workstation from Sun or Apollo.
 
  • #7
To add to the VAX story: this was one of those rare cases where a company made a vital and strategic realization about their business... and then forgot it.

When the Alpha/AXP was coming out, the issues of VAX compatibility came up. There were no good options - emulation was too slow, a partial RISC implementation was too slow, and a CIS coprocessor was too slow. The last two were also too expensive.

It was then realized that people don't run VAX. They run VMS. Most code was either supplied by DEC, like VMS, DECWindows, DECMail, LSE (which I still miss), or was user-compiled using in most cases a DEC compiler. So if DEC were to release all its VMS software for AXP, people would gobble it up. And they did.

If DEC had said "we are a software company, not a computer company", they might still be around today. They might have kept their peripherals business, or at least some of it, instead of selling it to keep their computer line afloat.

Had they realized this, they could have changed CPUs (or licensed others to do so) when the AXP started to face competition.
 
Last edited:
  • Like
Likes davenn
  • #8
Thanks guys for the thread

Always enjoyable reading personal insights and experiences of the older tech/software.

It passes the time whilst in my hospital bed

Dave
 
  • Care
  • Like
Likes WWGD, sbrothy, pinball1970 and 1 other person
  • #9
Sorry to hear that you are hospitalized. I tried it a couple times. Didn't much care for it.

There were unquestionably good ideas that came about too early or too late.

I am still grumpy about the hatred shown to the WinChip. I owned a few. It was very good for what it was, but the reviewers at the time had the opinion "If I am not interested, nobody should be interested".

It would have been interesting to see the move to multicore a decade earlier, but at the time Intel was telling us the future was 10 GHz single-core chips. Oops.
 
  • Like
Likes davenn and Klystron
  • #11
Sadly, the people they fooled first and best were themselves. One cannot always predict the future from a line between the past and the present.
 
  • #12
Vanadium 50 said:
I would not call a VAX a "hated CPU", except by people who were deeply invested in others (370, PDP). People loved it. ...
So true. Actually I missed "PC" in the thread title (apologies) but thought about the odd trick Digital Equipment Corporation played on itself by introducing the excellent VAX technology following its successful PDP line of minicomputers. No doubt the 'operating system wars' that dominated discussion during that era influenced professional opinions. I never had a problem with a platform/architecture-specific OS as long as the investment was worth the improvement. Certain keys and bit flags accessible at the system level provided exquisite synchronization among processes and programs.

Ames management had already migrated computer accounts to VAX. My group took that responsibility from the director, but only as an ancillary duty. Operating, configuring internals and I/O, and programming the latest dedicated VAX boxes occupied my real time.
 
  • #13
I remember the times when a ten-year-old computer had a decent scrap content. For me, it all came to an end with the VAX. Gone are the halcyon days of metal recycling.
 
  • #15
Klystron said:
as long as the investment is worth the improvement.
This was the birth of VAX and the death of VAX.

PDP-11 was hitting its architectural limits. Further, it had a bunch of operating systems for sale, which a) fragmented the user base, and b) had some of the user base gain experience with OS migration.

With the VAX, everyone ran VMS. Yes, they could run Ultrix, but that's what it could do, not what it did do. And that's what let them transition to Alpha. The mistake they made was saying "We successfully moved from PDP to VAX. Then we successfully transitioned to AXP. Good thing we're done." And not "Let's get ready for the next one."

x86 has evolved to survive by not being an instruction set. Hasn't been for many years. It's effectively pseudocode. The CPU translates it into its own "real" instruction set, which few humans ever see.
 
  • Like
Likes davenn and Klystron
  • #16
Unless I misremember, the Celeron line of cheaper Intel processors was just Intel processor chips where the on-chip cache system (and maybe some of the cores) failed testing but everything else checked out. This blurb from Lenovo definitely brought out my snark: "Celeron® processors are designed for basic tasks and have lower clock speeds, fewer cores, and smaller cache sizes." That's marketing baloney right there.

The cache subsystem of a computer is an important aspect of its design that is little understood and typically not even described in specifications any longer (whereas in past years, you'd at least see Level 1 (usually on-chip) and Level 2 (often on the motherboard) cache sizes mentioned, though not always).

Now when buying a computer, all the available CPUs are fast enough for me. I focus more on getting 32 GB of RAM (or more) and the fastest, largest solid-state drive, because Windows likes to use a lot of the disk as swap file space and software grows ever bigger.

I tend to avoid the Acer brand (if my budget allows) because I found that the main way they keep costs down is by not using much external cache on their motherboards.
 
  • #17
Vanadium 50 said:
Can you clarify what "this" is? There are several possibilities.


Lotus 1-2-3 had two huge advantages over its competition, one of which led to its demise.

(1) It would properly determine which cells depend on which other cells and execute them in order. Previous spreadsheets worked "row-wise" or "column-wise".

(2) It had a macro language. This was truly out of control. Macros would literally run for days, on PCs dedicated to run a single spreadsheet. On the plus side, it moved effort from senior accountants to junior bookkeepers, but on the minus, the spreadsheet logic was an unintelligible mess. Auditors refused to sign off on this, management got spooked, and a lot of this turned into procedural code run as scheduled jobs.

The Lotus people were always nice to me. Partly because I found a bug and went to them before going to the press.

Spreadsheet macros have always, IMHO, been nightmares. I've been tasked with "debugging" what accountants believed to be "programs". Thinking back, it can still bring tears and involuntary tics to my face.
 
  • #18
@harborsparrow I fail to see the problem. Some Intel customers want high performance, and some want low cost. Intel has a product line for each. If they can make them (and thus sell them) for less by recycling product from other lines, everybody wins.
 
  • Like
Likes harborsparrow
  • #19
@sbrothy I don't think macros are themselves bad. It's the abuse that's bad. Using a macro to do a non-trivial validation of data (you can't deduct more dependents than you are allowed to) is perfectly sensible.

The problem is when these become hundreds or thousands of lines of spaghetti, often with "magic numbers" hard-coded in and not a whit of documentation.

They also spent a lot of time doing unnecessary work. If I need to know the 5 top stores, I don't need to sort a list of thousands.
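
To illustrate that last point with a toy sketch in C (hypothetical sales figures): a running top-5 touches each value once, instead of sorting the whole list.

#include <stdio.h>

#define K 5

int main(void)
{
    /* Hypothetical per-store sales; imagine thousands of entries.
       Assumes non-negative values, since top[] starts at zero. */
    int sales[] = { 42, 7, 99, 15, 63, 8, 77, 23, 91, 5, 60 };
    int n = sizeof sales / sizeof sales[0];
    int top[K] = { 0 };                /* kept sorted, largest first */

    for (int i = 0; i < n; i++) {
        int v = sales[i];
        for (int j = 0; j < K; j++) {
            if (v > top[j]) {          /* insert v at slot j and   */
                int t = top[j];        /* push the displaced value */
                top[j] = v;            /* down to the next slot    */
                v = t;
            }
        }
    }
    for (int j = 0; j < K; j++)
        printf("%d ", top[j]);         /* prints: 99 91 77 63 60 */
    putchar('\n');
    return 0;
}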

This was a good idea gone out of control. Lotus knew this was a bad idea, but they made a lot of money at it. To their credit, they actually warned people against doing this in the manual. Which was predictably ignored.
 
  • Like
Likes sbrothy
  • #20
I'm deciding if I should go on to the silver medalist, the 80286.
 
  • #21
Vanadium 50 said:
@sbrothy I don't think macros are themselves bad. It's the abuse that's bad. Using a macro to do a non-trivial validation of data (you can't deduct more dependents than you are allowed to) is perfectly sensible.

The problem is when these become hundreds or thousands of lines of spaghetti, often with "magic numbers" hard-coded in and not a whit of documentation.

They also spent a lot of time doing unnecessary work. If I need to know the 5 top stores, I don't need to sort a list of thousands.

This was a good idea gone out of control. Lotus knew this was a bad idea, but they made a lot of money at it. To their credit, they actually warned people against doing this in the manual. Which was predictably ignored.
I agree completely. What triggered this response was the wistful thought of having access to any documentation. Any at all, whit or otherwise, I wish! Awful times!

Hah! And expecting users to read the manual?! Now that's science fiction!

o0)

EDIT:

Reminds me of a little silly anecdote. One of my colleagues once went straight "over my head" and directly to my boss (who was sitting in the same room as me, only 4 meters away!) complaining that I hadn't documented my work (a simple C DLL implementing a simple word replace for use in PowerPoint, whose built-in replace was much too slow for the task). They came to my desk, the idiot rat with a smug smile on his face. It turned out that I, as always in a hurry, had put the "documentation" (really? documentation for a DLL containing one simple word replace function?!) in the source file instead of in the header. So, too lazy to look in all the files of a project comprising 5 files?! I never saw that person again. :-p

EDIT: Changed "C++" to "C" as obviously a project containing 1 function wouldn't benefit much from being OO. :smile:

It was mostly just a matter of some counting, realloc'ing and memcpy'ing, etc...
 
Last edited:
  • #23
Vanadium 50 said:
I'm deciding if I should go onto the silver medalist, the 80286.

The 80286 was my first PC system; I purchased all the parts... case, MB, PSU, CPU, etc., and built it up.
I moved up from an ATARI 1040ST (1Meg of RAM)

Had to buy the math co-processor as a separate chip and plug it in.
From memory, only 1MB of ram
40MB HDD ( such a crazy amount of storage hahaha)

I can't remember which version of DOS it was. V5? Back in 1991-'92.
 
  • Like
Likes sbrothy
  • #25
Why should I hate any CPU that is bravely doing the best it can?
 
  • #26
Hornbein said:
Why should I hate any CPU that is bravely doing the best it can?
Indeed. Those little fellas are working so hard! I'd like a graphics card to work with though. I'm sitting here playing with OpenGL and having an actual graphics card would really be cool.
 
  • #27
On to the 80286 - the chip Bill Gates famously called "Brain Dead". First some history.

First came the PC's 8088.
The market said "We need a math coprocessor!" So Intel came up with the 8087. It sold... well, not so well.
Then the market said "We need an I/O coprocessor!" So Intel came up with the 8089. It sold badly.
Then the market said "We need a chip requiring less support circuitry" So Intel came up with the 80186. It sold badly too.
So when the market said "We want a chip that will better support multiple processes", of course Intel did what they said and came up with the 80286. After all... the market said.

What did that mean? Two changes: one was the ability to access 16 MB of memory, and the other was memory protection. That meant Program A could not corrupt the memory of Program B (normally, it couldn't even see it), and you could only execute areas of memory designated as holding code. This was hated.

To remind people, a 1 MB address space is 20 bits. How does the 8088, a 16-bit processor, address 20 bits? Addresses contain two numbers, a segment and an offset, and the physical address is 16 times the segment plus the offset. The segment size - i.e. the maximum offset - was limited to 16 bits, or 64K.
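
In code, the real-mode arithmetic looks like this (a sketch; note how two different segment:offset pairs alias the same physical byte, which matters below):

#include <stdio.h>

/* Real-mode 8086/8088: physical = segment * 16 + offset, a 20-bit value. */
static unsigned long phys(unsigned seg, unsigned off)
{
    return ((unsigned long)seg << 4) + off;
}

int main(void)
{
    printf("%05lX\n", phys(0x1234, 0x0005));   /* prints 12345 */
    printf("%05lX\n", phys(0x1230, 0x0045));   /* also 12345   */
    return 0;
}

Under 286 protection this arithmetic, and the aliasing it allows, goes away, as described next.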

To make the protection work, the 80286 replaced the segment with a "selector". This is just a number: 54 might be code for Program A, and 778 might be data from Program B. Applications did not know where these were in physical memory - only the OS did.

This design had two victims:

One was pointer arithmetic in C. K&R says in black and white not to assume pointers are "really" ints, and indeed not to assume they are anything but pointers: you can count on them to have all the properties pointers are supposed to have...but that's it.

This was widely ignored and code broke like crazy.
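
A sketch of the sort of idiom that broke. The far keyword and the trick itself are specific to 16-bit DOS compilers (Turbo C, MSC), so take this as period flavor rather than portable C:

/* Advance a far pointer by 64K, the "pointers are just ints" way. */
char far *next_chunk(char far *p)
{
    long bits = (long)p;       /* assume a pointer is "really" an int,
                                  exactly what K&R warns against        */
    bits += 0x10000000L;       /* high word += 0x1000 paragraphs = +64K */
    return (char far *)bits;   /* works on the 8088, where the high word
                                  is a paragraph number; under 286
                                  protection it is an opaque selector,
                                  so the fabricated value faults        */
}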

The other was our old friend Lotus 1-2-3. You might have a 400K spreadsheet, which means it needs to be split into something like 7 64K chunks. And crossing chunk boundaries is SLOW. On the 8088 you could create a logical "chunk 2-1/2" and get around that, but the protection of the 286 kept you from doing that.

An alternative would be to give every cell its own selector - but you only have 8192, far fewer than the number of cells.

So there was no natural way to fit a giant 1-2-3 spreadsheet into these 16-bit pieces.

Therefore the torches and pitchforks.
 
  • #28
Vanadium 50 said:
Intel came up with the 80186. It sold badly too.
Funny, but the 186 is actually one of the toughest survivors from that era. Stuck between being an overgrown microcontroller and a CPU for personal computers, it could not really fit in anywhere at the time, but somehow it secured a niche as the first SoC for industrial control. And the resulting SW base just kept the hardware carrying on...

Some of its clones are still in production (but at least they are available as 'new').
Not bad for a 'bad sell' :wink:


As for the most hated, I think the initial Intel P4 deserves an honorable mention too. For all the NetBurst hype it came with, you got low speed on any existing code, and that exclusive Rambus memory for the strange motherboards, at those prices - well, it got some scorn.
And then came the SW developers who had to completely re-learn what optimization is about...

Later on it re-fit into the trend, and the legacy is now decent 'collectors only' stuff, but Willamette got its just fame o0)
 
Last edited:
  • #29
Books could be written on which chips survived and why. A few bucks will buy you an "eZ80" SOC. It will run circles around the original, of course. Since it is 8080 compatible, one could argue that the 8080 has lasted 50 years.

The P4 was a bad idea. I think it's fair to say it did not meet Intel's goals. But hated? Look at contemporary reviews and how positive they were compared to Athlons of the same era - in retrospect a far superior chip. There are reasons for this, but when the P4 was released it was not hated. It just should have been.
 
  • #30
I won't say much about reviews. In most legacy software, the original Willamette P4 1.4GHz and 1.5GHz quite often underperformed the 1GHz PIII.
And you had to buy the whole set for that letdown: new memory, new board, new CPU.
Yep, I've heard a lot of related *********** flying around in those times o0)

Starting with Northwood they somewhat consolidated the price/performance, and above 2 GHz it was finally something.
 
  • #31
The P4 was bad. Intel scraped it off its shoe as soon as it could. But at the time, the public was gushing over it.

I suspect it was due to its "excellent overclocking". People hadn't quite grasped that if the chip is stalled, it doesn't matter how fast you clock it.
 
  • #32
When people say they are great programmers because they know a zillion languages, I launch into my rant on data structures and algorithms. 1-2-3 couldn't adapt to the 286 because of its data structures. Had they done it differently, they could have sold a "1-2-3/286" for even more money.

Understanding data structures is far more important than being able to write "Hello World" in 20 languages.
 
  • #33
If you can program in one language, the rest is pretty much being able to consult a manual (apart from the conceptual differences between OO, procedural, functional, etc.). As you say, knowing math and algorithms will get you much farther. Choosing a specific language amounts to choosing the right tool for the particular task at hand.

EDIT: Except if, as you say, you're on some particular chip and are forced into assembler, or doing some embedded task like coding a barcode scanner where you have no choice; but in that particular case it's almost just a question of coding a UI.
 
  • #34
The 1-2-3 problem was simple. They had a giant 400K-or-so data structure and could only address a few 64K windows of it at a time. Further, the whole thing was written in assembly for speed.

Their solution was an 8088-specific trick: use the fact that every physical address has multiple segment:offset representations (I think 4096) to move these "windows" around as needed. This worked for as long as the hardware allowed the trick.

A better solution would have been to a) transition to C for tasks where it was suitable, e.g. reading and writing files to disk, and b) use a more suitable data structure. This would probably be some kind of tree, with dozens or hundreds of segments, not thousands. Segments would contain dozens or hundreds of largely dependent cells, arranged into branches. This would take some thinking and real work. Just because it is possible doesn't mean it's easy.
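
A sketch of what (b) might look like in C; the names and sizes are hypothetical, not Lotus's actual design:

#define CELLS_PER_NODE 512   /* one node fits comfortably inside 64K */

struct cell {
    double value;
    /* ...formula, format, dependency flags... */
};

/* Each node gets its own segment/selector. Cells that feed each other
   live in the same node, so a recalculation rarely crosses a segment
   boundary, and the sheet needs dozens or hundreds of selectors
   instead of one per cell. */
struct node {
    struct cell cells[CELLS_PER_NODE];
    int used;                /* cells currently occupied */
    struct node *child[8];   /* sub-ranges of the sheet  */
};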

I suspect by that point Lotus was more interested in selling the company than selling spreadsheets (which they did). And fixing a problem that low-level in the design is tough.

I won't claim to have seen every C trick that assumes pointers are ints, but I suspect that a) the speed increases at the application level were marginal and b) the same thing could have been done with unions.
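
For the record, a runnable modern stand-in for the union idea: the pointer/integer reinterpretation is declared in one place instead of being smuggled through casts all over the code.

#include <stdio.h>
#include <stdint.h>

/* One declared spot where a pointer and its raw bits coincide. */
union ptrbits {
    const char *p;
    uintptr_t   u;
};

int main(void)
{
    union ptrbits b;
    b.p = "hello";
    printf("pointer bits: %#lx\n", (unsigned long)b.u);
    return 0;
}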
 
  • #35
Trying to remember the guy who found an especially smart assembler-style use of the C switch, but all I could find was Dijkstra's algorithm.

Ah, digging around a little, I found it. I think this is particularly ingenious: Duff's device.

Obviously this guy almost has his algorithms down to an art form.

Maybe not assembler as such but smart it is.
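
For anyone who hasn't seen it, this is roughly the canonical form from Tom Duff's original post. The to pointer is a memory-mapped output register, which is why it is never incremented; the switch jumps into the middle of the unrolled loop to soak up the count % 8 leftovers:

/* Duff's device: copy count shorts to a memory-mapped register,
   unrolled 8x, leftovers handled by the case labels. Assumes count > 0. */
void send(short *to, short *from, int count)
{
    int n = (count + 7) / 8;
    switch (count % 8) {
    case 0: do { *to = *from++;
    case 7:      *to = *from++;
    case 6:      *to = *from++;
    case 5:      *to = *from++;
    case 4:      *to = *from++;
    case 3:      *to = *from++;
    case 2:      *to = *from++;
    case 1:      *to = *from++;
            } while (--n > 0);
    }
}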

I may have pulled some fast ones during my career, but I'm nowhere near as smart as that guy.

And yeah, unions are probably a lost "art form". I've had coworkers who didn't know what they were.
 
Last edited: