When can I buy a laptop with specs similar to these?

  • Thread starter JoshHolloway
  • Tags
    Laptop
In summary: Do you think Alienware or Dell would even touch something like that? Alienware would be the company to release a laptop with those specs. Dell would release a laptop with similar specs, but it would not be as powerful.
  • #36
rho said:
How do you mean lower quality, do IBM make better designed processors than intel?

The PowerPC architecture has a much more elegant design than i386 -- anyone who's ever done a bit of assembly programming on i386 and PPC will tell you this. IBM also has enough faith in PPC to use it on their low-end pSeries/RS6000 systems, where the POWER just isn't a viable option because of cost. Keep in mind, the POWER and PowerPC have similar features. The fact that the PowerPC is based on an enterprise-level processor, the POWER, should tell you something, unlike the "high-end" x86 processors, like the Opteron or Intel's EM64T parts, which still share too many commonalities with their predecessors -- all the way back to the 8086.
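
To illustrate: compare what a compiler emits for a trivial function on each architecture. This is only a sketch -- exact output depends on the compiler and calling convention -- but it shows the flavor: PPC's three-operand instructions and 32 general-purpose registers against i386's two-operand instructions and 8 registers.

Code:
#include <stdio.h>

/* The function under discussion. */
int add(int a, int b) { return a + b; }

/* Roughly what compilers emit for add():
 *
 * PPC (args arrive in r3/r4, result returns in r3):
 *     add  r3, r3, r4       # one three-operand instruction
 *     blr                   # return
 *
 * i386 cdecl (args on the stack, result in %eax):
 *     movl 4(%esp), %eax    # load first argument from the stack
 *     addl 8(%esp), %eax    # two-operand add overwrites a source
 *     ret
 */

int main(void)
{
    printf("%d\n", add(2, 3));
    return 0;
}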
 
Last edited:
  • #37
For the last six months? I didn't know that the new iMac G5 they released was an Intel -- oh, that's right, it's a PPC.

And Jobs said that those who supported the osx86project and ran OS X on their x86 machines will, and I quote, "burn in hell".
http://osx86project.org/index.php?option=com_content&task=view&id=44&Itemid=2
 
Last edited by a moderator:
  • #38
Livingod said:
For the last six months? I didn't know that the new iMac G5 they released was an Intel -- oh, that's right, it's a PPC.

And Jobs said that those who supported the osx86project and ran OS X on their x86 machines will, and I quote, "burn in hell".
http://osx86project.org/index.php?option=com_content&task=view&id=44&Itemid=2


Your sarcasm is cute.

http://www.google.com/search?q=Appl...ient=firefox-a&rls=org.mozilla:en-US:official

Try using Google more often.
 
Last edited by a moderator:
  • #39
I think it's positive that we are moving towards a common architecture, but I'm not going to comment on which architecture should be the common one.
 
  • #40
-Job- said:
I think it's positive that we are moving towards a common architecture, but I'm not going to comment on which architecture should be the common one.

We shouldn't be moving towards a "common architecture." None of the processors on the market today are perfect for every consumer -- each consumer wants a particular set of features from a processor. Examples:

An x86 processor is perfect for the home user and the low-end enterprise -- they're cheap and they perform, but they fail to scale well and aren't reliable, and thus aren't a viable option for the enterprise.

The PPC targets a similar market as the x86 processor (however, it does scale well), but has failed to "latch on" because of the cost factor.

The more exotic processors, like SPARC and POWER, are not an option for the home user, mostly because a new UltraSPARC IV+ or POWER4 (and 5) will cost thousands or even tens of thousands of dollars for a single processor; however, with the POWER, you get performance and scalability. With the SPARC, you get scalability and reliability.

Point is, this whole convergence to x86 is going to leave a lot out of the picture. Sure, it's cheaper, but you're sacrificing a lot just for that fact. For the home user, the x86 is an excellent processor, but there are people out there pushing x86 into the enterprise (via Linux-powered clusters and other nonsense that doesn't work out) -- a place it doesn't belong.
 
Last edited:
  • #41
graphic7 said:
The PowerPC architecture has a much more elegant design than i386 -- anyone who's ever done a bit of assembly programming on i386 and PPC will tell you this. IBM also has enough faith in PPC to use it on their low-end pSeries/RS6000 systems, where the POWER just isn't a viable option because of cost. Keep in mind, the POWER and PowerPC have similar features. The fact that the PowerPC is based on an enterprise-level processor, the POWER, should tell you something, unlike the "high-end" x86 processors, like the Opteron or Intel's EM64T parts, which still share too many commonalities with their predecessors -- all the way back to the 8086.

Thank you for the info :smile:

I've never had an Intel computer before, only PPC, and I'm going to buy a new PowerBook next year. What do you think of the Intel portable roadmap (multi-core stuff like Merom)? Is multi-core the way to go for laptops?
 
  • #42
graphic7 said:
We shouldn't be moving towards a "common architecture." None of the processors on the market today are perfect for every consumer -- each consumer wants a particular set of features from a processor. Examples:
An x86 processor is perfect for the home user and the low-end enterprise -- they're cheap and they perform, but they fail to scale well and aren't reliable, and thus aren't a viable option for the enterprise.
The PPC targets a similar market as the x86 processor (however, it does scale well), but has failed to "latch on" because of the cost factor.
The more exotic processors, like SPARC and POWER, are not an option for the home user, mostly because a new UltraSPARC IV+ or POWER4 (and 5) will cost thousands or even tens of thousands of dollars for a single processor; however, with the POWER, you get performance and scalability. With the SPARC, you get scalability and reliability.
Point is, this whole convergence to x86 is going to leave a lot out of the picture. Sure, it's cheaper, but you're sacrificing a lot just for that fact. For the home user, the x86 is an excellent processor, but there are people out there pushing x86 into the enterprise (via Linux-powered clusters and other nonsense that doesn't work out) -- a place it doesn't belong.

I think I have to challenge your statement that x86 processors aren't scalable and reliable. The new 64-bit Intel Xeon processors scale quite well from what I've seen. Besides, my statement was from a software perspective: fewer architectures make life easier for software companies and provide consumers (and enterprises) with a wider range of options. In this sense we should be moving towards a common architecture.
 
  • #43
-Job- said:
I think I have to challenge your statement that x86 processors aren't scalable and reliable. The new 64-bit Intel Xeon processors scale quite well from what I've seen. Besides, my statement was from a software perspective: fewer architectures make life easier for software companies and provide consumers (and enterprises) with a wider range of options. In this sense we should be moving towards a common architecture.

With the "new Xeons," can you hot-swap CPUs while the system is under load? Can you reconfigure memory (yes, that means swapping DIMMs) while the system is in use? POWER and SPARC can certainly do this like it's a piece of cake.

Oh, and my definition of how well something scales is whether it can handle >= 64 processors in a system efficiently. Keep in mind, this can be done with POWER and SPARC -- take a look at the Sun Fire 25k (up to 74 UltraSPARC IV+), Fujitsu PrimePower 2500 (up to 128 SPARC64 V), or the IBM pSeries p5 590 (up to 64 POWER5 processors). These also aren't nodes that use interlinks and crossbars, like the Altix does. That said, these large, monolithic systems scale well under all workloads, not just some, like the Altix.
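
For a taste of the software half of hot-swapping: before a CPU can be physically pulled, the OS has to quiesce it and migrate its work. As a loose analogue, Linux exposes a CPU hotplug interface through sysfs -- a minimal sketch, assuming a kernel built with hotplug support and root privileges (the path is the standard sysfs one, but cpu1 is just an example):

Code:
/* Take a CPU core offline via Linux sysfs -- the software step that
 * must precede any physical removal. Run as root on a kernel with
 * CONFIG_HOTPLUG_CPU. */
#include <stdio.h>

int main(void)
{
    const char *path = "/sys/devices/system/cpu/cpu1/online";
    FILE *f = fopen(path, "w");
    if (!f) {
        perror("fopen");   /* no hotplug support, or not root */
        return 1;
    }
    fputc('0', f);         /* '0' = offline, '1' = bring it back */
    fclose(f);
    printf("cpu1 offlined; the scheduler migrates its tasks\n");
    return 0;
}

Whether the hardware then lets you physically pull the part is exactly where POWER and SPARC differ from commodity x86.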
 
Last edited:
  • #44
graphic7 said:
With the "new Xeons," can you hot-swap CPUs while the system is under load? Can you reconfigure memory (yes, that means swapping DIMMs) while the system is in use? POWER and SPARC can certainly do this like it's a piece of cake.


Wow. That's certainly all I can say about that. Though I assume, of course, that one can't do this in a single-processor system.


Also, these systems, like the Sun Fire 25k, are very, very expensive. I forget which model (I think it was the 25k, but I'd have to check), but it was just shy of $1.5 million.
 
  • #45
franznietzsche said:
Wow. That's certainly all I can say about that. Though I assume, of course, that one can't do this in a single-processor system.
Also, these systems, like the Sun Fire 25k, are very, very expensive. I forget which model (I think it was the 25k, but I'd have to check), but it was just shy of $1.5 million.

Yeah, the Sun Fire 15k and 25k run in that range, but if you need high availability this is the route to go. You can literally upgrade a Sun Fire 15k to a 25k -- the equivalent of upgrading your PC's motherboard while the system is still turned on. On the other hand, Sun offers this "dynamic reconfiguration" functionality (the ability to swap processors and memory and keep the system available) on the lower end with the V480 and E2900 -- these run in the $30k-$150k price range; however, IBM reserves dynamic reconfiguration for the high end (but you get LPAR functionality much cheaper than you do with Sun).

Edit: Nope, you can't do dynamic reconfiguration with a single-processor system. In fact, IBM and Sun both require you to buy a system with two or more processors if it has dynamic reconfiguration functionality.
 
Last edited:
  • #46
Wow, hot-swapping processors and DIMMs? What kind of companies are we talking about here? IBM? Because I don't think the typical enterprise, which isn't building massively parallel computers with node cards, will absolutely have to hot-swap processors :smile: -- I would recommend blade servers.
I think the advantages of POWER or SPARC must be visible only in very high-end applications such as research.
 
  • #47
-Job- said:
Wow, hot-swapping processors and DIMMs? What kind of companies are we talking about here? IBM? Because I don't think the typical enterprise, which isn't building massively parallel computers with node cards, will absolutely have to hot-swap processors :smile: -- I would recommend blade servers.
I think the advantages of POWER or SPARC must be visible only in very high-end applications such as research.


POWER is IBM, SPARC is Sun Microsystems.
 
  • #48
I think it's cool that soon I might be able to triple-boot Mac OS, Windows, and Linux. Of course, Apple wouldn't need to abandon the PPC platform, but I can imagine it would be very costly to maintain two OS versions for two entirely different architectures. With a common architecture, consumers will have a wider field of software options, and software makers will easily be able to make their products more compatible.
 
  • #49
-Job- said:
Wow, hot-swapping processors and DIMMs? What kind of companies are we talking about here? IBM? Because I don't think the typical enterprise, which isn't building massively parallel computers with node cards, will absolutely have to hot-swap processors :smile: -- I would recommend blade servers.
I think the advantages of POWER or SPARC must be visible only in very high-end applications such as research.

I work in a hospital environment, and we're a small enterprise, by definition. Needless to say, we have approximately 30 POWER systems that are capable of processor hot-swapping. Why? Some of those systems have to be available 24/7/365 -- that means zero downtime; otherwise, people's lives could be in the balance. For something as simple as a bad processor, the system should handle the failure appropriately and let someone remove the processor and replace it with a new one -- all while keeping the system available.

If you don't see the need for this in your environment, chances are you aren't in an enterprise.

At my work, do we have "massively parallel computers"? No.
At my work, are we doing research? No, yet we still have a need for near fault-tolerant systems, like the ones Sun and IBM provide.
 
Last edited:
  • #50
I still don't see how hot-swapping of processors is an essential feature. Most likely the computers you'll want to have up and running 24/7 will be servers. You can easily have multiple servers sharing the load, and when one of them goes down, the rest can easily fill in for it while you repair it. Especially with blade servers, which are so efficient and so small, you can easily have this. IMO hot-swapping is a neat feature, but not an essential one.
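
To make that concrete, here's a hypothetical sketch of such a failover monitor. The hostnames and port are made up for illustration, and real HA suites do far more (state replication, fencing, and so on), but the shape is this:

Code:
/* Hypothetical heartbeat monitor: poll the active blade and point
 * traffic at the other one when it stops answering. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

/* Return 0 if host answers a TCP connect on port, -1 otherwise. */
static int alive(const char *host, const char *port)
{
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof hints);
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(host, port, &hints, &res) != 0)
        return -1;
    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    int ok = (fd >= 0 && connect(fd, res->ai_addr, res->ai_addrlen) == 0)
                 ? 0 : -1;
    if (fd >= 0)
        close(fd);
    freeaddrinfo(res);
    return ok;
}

int main(void)
{
    const char *blades[] = { "blade1.example.com", "blade2.example.com" };
    int active = 0;
    for (;;) {
        if (alive(blades[active], "80") != 0) {
            active = 1 - active;   /* fail over to the other blade */
            printf("failing over to %s\n", blades[active]);
        }
        sleep(5);                  /* heartbeat interval */
    }
}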
 
  • #51
-Job- said:
I still don't see how hot-swapping of processors is an essential feature. Most likely the computers you'll want to have up and running 24/7 will be servers. You can easily have multiple servers sharing the load, and when one of them goes down, the rest can easily fill in for it while you repair it. Especially with blade servers, which are so efficient and so small, you can easily have this. IMO hot-swapping is a neat feature, but not an essential one.

What if you can't have a disruption in services? If the system goes down, during the failover you will have a disruption of service, and for some environments that's not an option. In fact, we're also using HACMP, IBM's high-availability suite, to fail over to other nodes in case the node in use experiences a failure; however, we still need fault-tolerant features in the nodes -- failover is the last option.

Blade servers also aren't capable of handling the load these POWER systems endure. Most of the POWER servers here have > 8GB of memory, 2-8 processors, and multiple fibre HBAs, so we can have redundant fibre paths in case of a physical path failure. With blade servers (for example, IBM's JS20), you share two fibre HBAs between all the blades in the enclosure. Suppose the physical paths on those two fail -- you're left with a bunch of blades that can't access the SAN.
 
Last edited:
  • #52
franznietzsche said:
POWER is IBM, SPARC is Sun Microsystems.

Technically :rolleyes:, SPARC is an open standard. Sun has their own SPARC processors, as do Ross, Fujitsu, Axel, and a few others.
 
  • #53
-Job- said:
I think it's cool that soon I might be able to triple-boot Mac OS, Windows, and Linux. Of course, Apple wouldn't need to abandon the PPC platform, but I can imagine it would be very costly to maintain two OS versions for two entirely different architectures. With a common architecture, consumers will have a wider field of software options, and software makers will easily be able to make their products more compatible.

Sun manages to do this with Solaris. There are x86 and SPARC versions, and trust me, it's a lot more difficult to maintain an enterprise-grade OS for two architectures than a desktop/low-end server OS like OS X.
 
  • #54
graphic7 said:
Technically :rolleyes:, SPARC is an open standard. Sun has their own SPARC processors, as do Ross, Fujitsu, Axel, and a few others.


This I did not know.
 
  • #55
So, how about we get back to the laptop? Or did someone find a solution to that?
 
  • #56
Yes, let's get back to the laptop. This stuff you guys are talking about is way over my head.
 
  • #57
Sorry, I just wanted to say that some Intel server blades can have up to 4 processors and, I think, 8GB of memory, so server blades can definitely handle big loads, especially if you have a number of them and are distributing traffic efficiently. The HBA deficiency is true, even if 2 is probably sufficient. You may not have disruption of services if a blade goes down. For example, I can see that a session might be lost if a whole blade goes down, but if a processor goes down, the blade will still function, right? And you'll be able to switch the session to another blade. Or you can just replicate the session across blades, in which case you'll have sufficient redundancy that if all the processors in a blade go down, a user session won't be damaged.
 
  • #58
-Job- said:
Sorry, I just wanted to say that some Intel server blades can have up to 4 processors and, I think, 8GB of memory, so server blades can definitely handle big loads, especially if you have a number of them and are distributing traffic efficiently.

Some of our applications are AIX/POWER-specific, so this isn't necessarily an option.

The HBA deficiency is true, even if 2 is probably sufficient. You may not have disruption of services if a blade goes down. For example, I can see that a session might be lost if a whole blade goes down, but if a processor goes down, the blade will still function, right?

Depends on the hardware and the OS. If the OS is properly designed, it'll handle the failure appropriately and stay up. It's still sort of a "maybe it will work, maybe it won't" type of situation.

And you'll be able to switch the session to another blade. Or you can just replicate the session across blades, in which case you'll have sufficient redundancy that if all the processors in a blade go down, a user session won't be damaged.

Not sure what you're getting at here, but ensuring availability requires the use of an HA (high-availability) solution, like HACMP or Sun Cluster Server. Both of these are OS- and hardware-specific and have lengthy failover times. It typically takes HACMP 30 minutes to an hour (depending on the number of processes, amount of memory that's active, etc.) to fail over to another node. Like I said, for some environments this isn't an option.

For most environments, yes, using blade servers would be satisfactory. We're trying to deploy blades here as well -- mainly as a replacement for, and segmentation of, our POWER p650 that serves as our primary and only Tivoli Storage Manager server. We'll be purchasing IBM JS20s (dual 2.2GHz PowerPC processors and 2GB of memory per blade, as well as two fibre HBAs per blade enclosure, running AIX 5.3). A number of our services still need to be available 24/7/365, and blades aren't an option there -- we need a near fault-tolerant solution, plus HA.
 
Last edited:
  • #59
5-10 GHz clock-speed processor (possibly IBM Cell) (possibly 2 processors)
4-10 GB RAM
1-2 terabyte hard drive space
Windows Vista

These are the only things that haven't been developed yet, of the specs you specified.
The laptop will have to have quad processors, 2.5 GHz each. Eh, some time around '06.
The RAM, around '06 as well.
If you want a 1 or 2 TB HD, then it either won't be 1 or 2 TB, or it won't be an HD. If efficient and really small flash drives are developed, then they might be used in laptops for a lot of storage capacity. Might take a bit longer to develop than the processor and RAM -- around '07 or early '08.
Vista is scheduled to come out in '06; it will be delayed to around mid-'07.

After reconsidering the specs, you do have a good chance of having your laptop by '08.

And those guys who have spent pages talking about what the best processor is, how about you all start a new thread?
 
  • #60
Well, it's not like it's an unrelated topic. The 10 GHz processor will have to be really tiny and efficiently designed, or it's not going to happen. In fact, it's probably a better idea to make processors as 3D chips (like a cube or sphere) than 2D chips. There's also the possibility of optical computers entering the show soon, and they can be up to 10 times faster than our current CPUs. This would mean that you could probably raise the clock speed from our ~4 GHz up to 40 GHz (of course, you still have to make the processor nice and small).
 
  • #61
I am a simple man with a simple mind, so consider the performance-laptop question (estimated time of arrival?) simply: performance is directly proportional to power, and with current battery technology limits, that says it will never happen unless the laptop is attached to a golf cart.

Seeing the obvious and comparing the laptop to a "dumb" terminal of the '60s mainframe computers, it is clear the dumber the better: it reduces the size, the owner cost, and the cost if lost to a thief, while increasing battery life. Simpler is the future, as all that is really needed in a laptop is a screen, net card, keyboard, mouse, mike, CD drive, and camera, and those input/output devices ($300 max) connect to a supercomputer via PC Anywhere software or a high-speed cable, with all processing done by the at-home or office supercomputer, like so.

http://members.cox.net/thjackson/StealthSuperComputer.jpg
 
Last edited by a moderator:
  • #62
I agree that it's much easier to have a device that has only an internet connection, which it uses to connect to a server and establish a remote session. The only bottleneck will be the connection speed, but with our current technology it would certainly be acceptable. One idea would be to have superservers available everywhere, with the closest one being the server of choice. The advantages of this are fantastic, in my opinion.
 
  • #63
-Job- said:
I agree that it's much easier to have a device that has only an internet connection, which it uses to connect to a server and establish a remote session. The only bottleneck will be the connection speed, but with our current technology it would certainly be acceptable. One idea would be to have superservers available everywhere, with the closest one being the server of choice. The advantages of this are fantastic, in my opinion.

Hmm, maybe in an enterprise, but for a dispersed network like the internet, I can't see how anyone would want to do that. Plus there is the security aspect: who would control these servers?

I think the bottleneck would be the lag involved, which is going to happen when you geographically separate your terminal from the server that serves your applications. If it's set up badly, you will have a lot of data sent back and forth, and then you would need to up your bandwidth. Adding more bandwidth won't solve any lag problems... typically.
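
To put a number on that lag, here's a minimal sketch that times a TCP connect (roughly one network round trip) to a placeholder host. Every keystroke in a remote session pays at least this cost, and extra bandwidth doesn't shrink it:

Code:
/* Time one TCP connect -- a rough proxy for remote-session lag. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>
#include <sys/time.h>

int main(void)
{
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof hints);
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo("www.example.com", "80", &hints, &res) != 0) {
        fprintf(stderr, "lookup failed\n");
        return 1;
    }
    struct timeval t0, t1;
    gettimeofday(&t0, NULL);       /* start after DNS, before connect */
    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0) {
        perror("connect");
        return 1;
    }
    gettimeofday(&t1, NULL);
    double ms = (t1.tv_sec - t0.tv_sec) * 1000.0 +
                (t1.tv_usec - t0.tv_usec) / 1000.0;
    printf("round trip: %.1f ms\n", ms);
    close(fd);
    freeaddrinfo(res);
    return 0;
}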
 
  • #64
That computer would be neat. I have a feeling that the computer you described will be on the market in Q2 2006. Probably on April 17th at around 3:23 pm Eastern time.
 
Last edited:
  • #65
I'm really counting on microprocessors; they're small and fast, and you could fit a couple of thousand in a laptop. It will most likely be pretty damn fast, but it might take long to develop.
 
  • #66
I think I heard a while back that companies were going in the direction of having multiple CPUs in a single chip. This would make it a lot easier, if they can keep the interface the same, since we wouldn't need a more expensive motherboard that supports multiple processors.
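
From the software side, that's the appeal: the OS just sees more logical CPUs, and ordinary threaded code spreads across the cores with no special motherboard support. A minimal sketch with standard POSIX threads (the two-thread split and the busywork sum are made up for illustration; compile with cc -pthread):

Code:
/* Split a sum across two threads -- one per core on a dual-core chip. */
#include <stdio.h>
#include <pthread.h>

#define N_THREADS 2
#define CHUNK 50000000L

static void *partial_sum(void *arg)
{
    long id = (long)arg, sum = 0;
    for (long i = id * CHUNK; i < (id + 1) * CHUNK; i++)
        sum += i;                  /* busywork the OS can parallelize */
    return (void *)sum;
}

int main(void)
{
    pthread_t tid[N_THREADS];
    long total = 0;
    for (long i = 0; i < N_THREADS; i++)
        pthread_create(&tid[i], NULL, partial_sum, (void *)i);
    for (int i = 0; i < N_THREADS; i++) {
        void *part;
        pthread_join(tid[i], &part);
        total += (long)part;
    }
    printf("sum = %ld\n", total);  /* same answer, roughly half the time */
    return 0;
}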
 
