xisto Community
sitesmakers

Intel 4 And 5 Core Processors: Did You Hear?

Recommended Posts

Quad-core processors are not that new; they've been out since the Core 2 line, which is like 2-3 years now. A 5-core processor is a new one on me, though I think I saw an article or two about an 8-core processor or something like that. Either way, this article has a very interesting title.

:P Don't even bother upgrading to quad core if an 80-core processor is pledged within 5 years. B) I'm slightly confused as to why there's a 5-core processor, though. Hardware counts are usually powers of 2, which would give me the idea that we'd start with 1 core, then 2, then 4, then 8, then 16... Then again, if they can squeeze it in, that's great. I did see an octo-core processor by Intel on NewEgg one day. I think it was selling for $400 or $800... can't remember. It was insanely expensive, though.


I'm stealing this comment because it sounded cool. :P

This is a supercomputer on a chip. This is not SIMD, this is not an SoC or an upgrade to Core Duo x40. This is a change to the design of a single processor to one intended to include multiple cores, where the software is expecting multiple cores.
C|Net has focused on the memory per core as the big issue. They are right to be impressed by it, but they miss the bigger picture of why that memory is there. Each core is expected to perform dozens, hundreds, or thousands of operations on its set of data. Some of those operations may make the data available to instructions being performed by another core. Splitting the data is part of the design change, and software will need to know where it sends data in a multi-core chip, and where its instructions are being run, to optimize data access.

Some people complain that 20GB of RAM might not be enough. They say the system will allow more RAM because that makes sense. No it won't! You will not want to pool all of your system RAM into one place accessed by any of these 80 cores; sharing that RAM is a performance hit for multi-core systems. You see some of this today when one CPU performs an operation on data held in RAM attached to another CPU; in that case only two CPUs are held up instead of all 80. Loading data from outside the CPU for a core to operate on would slow this system down the same way a page-file hit slows a current Windows system. It's the difference between 2 and 80 operations.
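The "each core owns its RAM" point above can be sketched with ordinary processes, which also have private memory. This is a rough analogy, not Intel's design: the worker function, data, and queue below are all illustrative assumptions. A child process mutates its own copy of the data and must pass results back explicitly, just as a core with private RAM would.

```python
# Hedged sketch: multiprocessing gives each worker a *private copy* of its
# data, loosely analogous to per-core RAM. Writes in the worker never touch
# the parent's memory; results travel back over an explicit channel.
from multiprocessing import Process, Queue

def worker(data, out):
    data[0] = 999          # mutates only the worker's private copy
    out.put(sum(data))     # 999 + 2 + 3 = 1004, sent back explicitly

if __name__ == "__main__":
    parent_data = [1, 2, 3]
    q = Queue()
    p = Process(target=worker, args=(parent_data, q))
    p.start()
    p.join()
    print(q.get())         # 1004 (the worker's view)
    print(parent_data)     # [1, 2, 3] (the parent's copy is untouched)
```

The explicit hand-off through the queue is the cost of not sharing memory, which is the same trade-off the comment describes for cores reaching into another core's RAM.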

Software that runs on Tera-Scale processors would know how to split tasks and memory between cores. The operating system will need to understand how to give a set of cores to an application so it has enough RAM and processing power to get its jobs done. You could see 20 cores doing the same operation on sets of data (SIMD), while at the same time 10 cores run a different set of instructions on a shared set of data. Each core can reach the RAM on another core, so those 10 would be able to read, write, and perform operations on that RAM.
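The "20 cores doing the same operation on sets of data" idea can be approximated today with a process pool. This is a minimal sketch, not Tera-Scale code: the worker count, chunking scheme, and the squaring operation are made-up illustrations of data-parallel (SIMD-style) work.

```python
# Hedged sketch of SIMD-style data parallelism: every worker applies the
# same operation to its own private slice of the data, then the slices
# are recombined. Worker count and the square() operation are assumptions.
from multiprocessing import Pool

def square_chunk(chunk):
    """The single operation every worker applies to its slice."""
    return [x * x for x in chunk]

def simd_style_map(data, workers=4):
    # Give each worker a private, interleaved slice of the data.
    chunks = [data[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        results = pool.map(square_chunk, chunks)
    # Re-interleave the per-worker results back into original order.
    out = [0] * len(data)
    for w, res in enumerate(results):
        out[w::workers] = res
    return out

if __name__ == "__main__":
    print(simd_style_map(list(range(8))))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The OS-level scheduling the comment describes (handing a set of cores to an application) is exactly what the pool does in miniature here.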

This chip combines some SIMD concepts with parallel computing concepts and the new multi-core concepts. Intel discovered you really need a lot of RAM per core, so each core can handle a lot of data by itself and the entire RAM needs of the applications can be met within the CPU. 20GB is what they show in a sample using SRAM. They will be switching to DRAM, and with that switch may also increase the RAM per core. How much RAM per core is right will depend on efficiencies in the CPU based on tasks and data. Intel will begin to discover their performance curve with this prototype.

This system is nothing near what you will see on your desk in 5 years. It will be used to pave the way for application development and system design for years to come. It is the most important stepping stone for the future of computing that we've seen in the last several years because it changes how the hardware and software are designed to work together.


Basically, to utilize the power of each core, a computer design would have to include RAM dedicated to each core to minimize bottlenecking. If you had a quad-core processor and one of those cores was utilizing all of the memory banks on the motherboard, the three other cores would have to wait until that one core finished its task. This would explain why there really isn't a 100% increase in performance per core: the processor writes data to banks of RAM, which are essentially slots for data. If one core fills those banks before a second core can utilize them, that second core is essentially useless. This is why we focus on recommending more RAM for slower systems instead of just beefing up the processor.
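The cores-waiting-on-memory-banks scenario above can be sketched with a semaphore standing in for the limited banks. This is an illustrative toy, not a hardware model: the bank count and the "core" threads are assumptions chosen to show the blocking behaviour.

```python
# Hedged sketch: a semaphore plays the role of a limited set of memory
# banks. Each "core" (thread) must acquire a bank before doing its work,
# so with 2 banks and 4 cores, at most 2 cores run at once and the rest
# wait, mirroring the bottleneck described above.
import threading

BANKS = threading.Semaphore(2)   # pretend the board has only 2 memory banks
completed = []
completed_lock = threading.Lock()

def core(core_id):
    with BANKS:                  # a core blocks here until a bank frees up
        with completed_lock:
            completed.append(core_id)

threads = [threading.Thread(target=core, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(completed))         # all 4 cores eventually finish: [0, 1, 2, 3]
```

Every core finishes, but throughput is capped by the bank count, not the core count, which is the argument for dedicating RAM to each core.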


No, the oct-core is a platform (two Yorkfield XEs) with a modified server motherboard and chipset. People who say dual-core is enough are just jealous, because they know that 4 cores are better than two. 5 cores is just random; it doesn't make any sense. It's supposed to go 1, 2, 4, 8, 16, 32, 64, 128, etc.


I also find having 5 cores sort of weird. I'm also not really interested in jumping into this new cores fight, as many applications aren't yet ready for this type of hardware, so it wouldn't really boost the speed that much, though it's really just around the corner. I am doing perfectly fine with a Pentium 4 and 2 gigs of RAM at the moment. I will be saving money to buy an awesome laptop whenever I feel the Pentium 4 start to struggle with future applications. Right now my laptop runs perfectly, though kind of hot.


Well, I haven't even finished paying my old PC's bills yet, so I don't think I'm going to upgrade, not before I finish paying off the old one. I even have 5 systems on that PC and it works great: 2 Linux systems, XP, Vista, and Mac OS X. What more do I need? Nothing, not yet.


Wow, computers are really getting faster and faster, with many cores to be seen. But the computer's main problem is the hard drive, which is the most mechanical part of the machine, and improving hard drive speed is important to have a good computer. Even with 4GB of memory and a many-megahertz processor, if your hard drive is slow it won't reach top speed!


I believe that hard drives are going to be read by lasers in the future... which should alleviate the problem with hard drive access speed being the bottleneck for performance.


Almost a year ago, we spotted an article that discussed upcoming storage technologies. One of those technologies, heat-assisted magnetic recording, was being looked at by Seagate and involved using a laser and different materials to achieve much greater storage densities. InfoWorld now has an interesting update on the technology, which Seagate appears to still be working on.
According to InfoWorld, Dutch researchers have demonstrated an early application of heat-assisted magnetic recording (HAMR, for short) that can achieve write speeds 100 times faster than those of today's hard drives. Seagate believes HAMR could also make it possible to squeeze 40-50TB of data onto a 2.5" hard drive. There are a couple of catches, though. First, read operations in HAMR hard drives would still be done magnetically, so they wouldn't benefit from the same speed gains. HAMR tech is also quite a long way away. One of the Dutch researchers says working drive prototypes should be ready within ten years, and that we shouldn't see commercial HAMR-based drives until 13-15 years from now.


Source

That seems like a long way from actually being implemented in the consumer mainstream... but I'm sure those projections are worst-case scenarios. Then again, Intel could be bluffing about its projected progress to snag the attention of CPU-scrutinizing audiences.


The recent computer advancements have really astounded me too, just like the holographic storage media they'll soon be using in place of DVDs, Blu-rays, etc. Likely they'll use the same type of reading for hard drives, stopping most of the wear and tear from moving parts and the main issue of RPM speed. If it weren't for those, HDDs would last a lot longer and would no longer be limited by the speed and size of the drive, which would also allow for huge amounts of storage. I'm honestly quite sure a terabyte will be the norm in about 5-10 years.

I'm more interested in the exponential future increase in processor cores, and I'm glad we're moving away from pure MHz races, since that was heading nowhere fast. The main problem I see is that hardware is moving so fast that software is only now able to utilize dual cores; quad cores are still not utilized anywhere near the amount they should be. I understand why people think quad cores aren't worth it yet: they're still faster, but not wholly utilized. Still, it's nice that they're the same socket type, so when they are fully utilized and worth the cash spent, you can buy one and plug it in just as easily.

I'd love to see more cores, and less dependency on bigger and better graphics cards, because that in itself is getting incredibly ridiculous. I'm interested in learning more about the Fusion processor and the Intel counterpart of that; I don't really remember the name. If anybody has a link I'd really appreciate it.


I found a link after some digging, plus some preliminary info on Fusion via Wikipedia. Take it with a grain of salt if you wish, but they have a lot of sources up there for their info. Since I suck at forums and knowing exactly how to do things properly, I'll paste an excerpt that basically highlights the core as far as is known. Still nothing yet on the Intel side, though, and I apologize if I don't do this right.

* Fusion is a heterogeneous multi-core microprocessor combining general-purpose processing core(s) and basic graphics core(s) into one processor package, with different clocks for the graphics core and the central processing core
* Four platforms focus on the four different aspects of usage
o General Purpose
o Data Centric
o Graphics Centric
o Media Centric
* The codenamed Bulldozer processor cores, announced at AMD's Technology Analyst Day in July 2007, will be combined with GPU cores to form the first Fusion processors, codenamed the Falcon family, targeting the desktop market with TDPs of 10 to 100 watts
* An unnamed AMD Vice President said Fusion could be implemented in mobile phones, UMPCs, and small multimedia devices; this has been further confirmed with the introduction of the codenamed Bobcat processor core, focused on low-power (1 to 10 watt TDP) computation for handheld devices such as UMPCs
* The Fusion series will use a new modular design methodology named "M-SPACE", so that future multi-core processors can be built from a wider range of combinations with enhanced flexibility, minimizing architectural changes across different combinations of components. Thanks to this initiative, the graphics core can be changed without much re-design of the whole chip
* Fusion products will include at least 16 PCI Express (presumably version 2.0) lanes
* UVD implemented in silicon for full hardware decoding of MPEG-2, VC-1, and H.264 video streams in supported software
* The first Z-RAM design on a 45 nm fabrication process node was completed in 2006, together with the renewal of the Z-RAM license. This coincides with the process node on which Fusion processors are expected to be fabricated in that timeframe, and with AMD's official roadmap for larger L3 caches after 2009; thus it is rumoured that AMD will likely use Z-RAM for larger L3 caches in Fusion products
* A new set of instructions and development libraries for Fusion is being developed, revealed to be a new iteration of SSE named SSE5, announced on August 30, 2007
* According to Dave Orton, Fusion will have 10% more pins than a "normal CPU" but he failed to further elaborate on what is a "normal CPU".
* Expected to come in 2008 to 2009 to replace the AMD Turion 64 X2 mobile processor for laptops.

Speed increase

There is an expected speed increase with Fusion. Because the GPU and CPU will be on the same die, information transfer between the CPU and the GPU/GPU memory will be significantly faster, since the information will no longer need to travel over a bus as it does on current motherboards (allowing the GPU to move far more data than over the PCI Express bus, thanks to the significantly larger bandwidth available).

EDIT: Found some info, scarce but some at least, about the Intel version of Fusion. As far as is known they have not codenamed or admitted anything, though they have strongly hinted at going in that direction. A few links are listed below as sources.

Source 1: http://forums.xisto.com/no_longer_exists/

From the first source it seems pretty clear that Intel is trying to keep up with the AMD/ATI merger, but everybody already has a pretty good idea that this near-secretive acquisition will not be enough to compete directly, and if Intel doesn't do something soon AMD will come back on top with their new architecture design and innovation. It would seem only a merger of NVIDIA and Intel would truly put them both on equal playing ground, and while NVIDIA is a big catch and a very expensive piece, Intel is likely the one company that could pull it off.

Considering that NVIDIA is now competing directly with Intel for chipset graphics and the like, it almost seems as if they are telling Intel they won't sit idly by. Smart move on NVIDIA's part, because hopefully Intel will realize it's best in the long run to take over NVIDIA so the two companies can mutually prosper and benefit from each other, much like AMD and ATI are. This would likely weaken Intel in the short term, much as the merger did to ATI and AMD while it went through and all the shifting around happened, but there's no doubt both companies would prosper more together than apart. With NVIDIA's foray into multi-core GPUs, Intel really could give a boost to their performance and ability, and since GPUs are seeing a greater shift toward general-purpose use, both would prosper and benefit as well.

I also have a link to a neat article about the progression of CPUs and GPUs and how we're currently at the many-core stage for CPUs and the general-purpose stage for GPUs. I thought they were fun reads; technology never ceases to amaze me, something new and different every day, and I absolutely love to read about it. If anybody else has any new info, let me know; I'd be more than glad to hear it.

Source 2: http://forums.xisto.com/no_longer_exists/

Source 3: https://www.beyond3d.com/content/articles/31

Source 4: http://analysto.com/blog/2007/09/24/nvidiaintelgpu-vs-cpu/


