I'd like to know a lot more about what dual-coring really is. At the moment it seems like what it does is allow more than one processor (2 for dual, of course) to operate, and thus gives you more power. I was looking at a certain Power Mac on apple.com, and I see this:
I assume that this means that ultimately, you have 10ghz of power?
I'd also like to know how dual-coring (is "dual-coring" a proper term?) is done. Does my motherboard need to be built specially for it?
Is it possible to have, let's say, two dual-core Intel-compatible processors at 2GHz each in my computer, so that I could run Windows 98 on it (Windows 98 needs a speed under 2.1GHz)?
I'm at another person's house and I need to get going. Hopefully these questions can be answered. I'm sure I'll think of some more later.
Dual-core processors are basically two processors in one package that connects to one socket. They are built in different ways: some are actually two complete processors connected together so that they share the FSB, but most are special cores, designed basically as two separate processors that are more tightly integrated. For example, the Core Duo is basically two completely separate processors that share the same 2MB of cache, and the dual-core processors made by AMD also contain an additional bus so that one core can talk to the other without taking up bandwidth on the FSB.
Another thing is that adding multiple processors does not exactly add up the speeds. That quad G5 could approach the speed of a single-core 10GHz PPC CPU, but it would be slower than that 10GHz CPU in just about every task, by varying amounts, due to limitations in the bus and the extra overhead involved in using multiple cores. Also, when you have multiple cores you need multithreaded applications, or many applications running at once, to take full advantage of the power. For example, a program that just adds one number to another and repeats will do it at the speed of a 2.5GHz CPU; but if that program were opened 4 times, so you have 4 instances running, they would have very little impact on each other, and each would still run at about 2.5GHz speed. On a single-core machine, having it open once would make it run at 10GHz speed, but when you have it open 4 times they all slow down to 2.5GHz speed.
As for compatibility, it all needs to be compatible, and generally this is all built into the spec for the socket; what isn't can usually be added with a BIOS update (but not always). For example, if you got a motherboard that had two sockets, it would only take an Opteron or Xeon (or something like that), either single- or dual-core. You would not find one that takes a P4, because the socket the P4 uses does not work when you put multiple sockets on the board (so they are not made and you can't find one). So generally, if it fits it will be compatible, but that's not always the case.
The operating system also needs to be compatible. I don't know what Windows 98 supports, but there are limits: I know that XP will only support 8 cores. That means the board TYAN makes that takes 8 dual-core Opterons (after an expansion card adds the extra sockets), for a total of 16 cores, will not work with Windows XP; you would have to use another operating system like Linux to use all the power.
You probably also want to look at Wikipedia; I think they have some good stuff on this.
Comments on this post
Thank you very much, this definitely cleared up a lot of questions for me.
Dual Core Info
I am beginning a project and we are using dual core processors. I have a few questions I was hoping someone might be able to help with:
As I understand it the two processors boot up independently and go to their own separate reset vectors to initially start which means each processor will have to have its own kernel to set up its tasking and so forth (right?).
Both processors share the same RAM space (in my case). How do I "share" information between the two cores? If one core has information that the other core needs how does it get it? Do I actually have to send it over, or does that other core automatically have access to it (since they share the same RAM)? If I create an object on one core it can't possibly exist on the other, so I would suspect I'd have to "ship" it to the other core just like I would if I were dealing with another external processor (except it will be faster).
My OS only supports AMP (Asymmetric Multi-Processing), so I cannot use the Symmetric Multi-Processing model. Would that have made information sharing between cores easier?
June 15th, 2006, 03:39 PM
I won't add to edman's answer - it glosses over some details but should be enough for you at this stage. For colabecchio,
While I haven't written any multi-processor (mP) code, you are correct that each has its own reset vector. However, they will share the same kernel, which must be mP-aware; otherwise it will always bind to processor #0, which is defined by the BIOS (hardcoded from the sockets). The processors will share the same process scheduler, memory manager, etc. They simply need to be mP-aware: the scheduler binds each process to one uP, while the memory manager may need to lock memory segments to ensure that there isn't a simultaneous write.
Smaller mP systems usually use a shared-memory architecture (such as NUMA). The benefit is shared data access and no need to transfer data between memory pools, while the drawback is added complexity from locking, marking data dirty (and sending a message to the other uP to dirty its cache), etc. Most mP frameworks should handle most of the sharing aspects, with you needing to fill in some of the gaps when your thread writes to shared memory. If you want to use data in a shared cache, then you'll really need to dig into the details to figure that out. It's very implementation-specific and often abstracted away from the developer.
I believe AMP is a master-slave relationship, so the master directs the slave processors in what to do. An application designed with that in mind could remove much of the memory locking and dirty-memory messaging by only letting one processor access a region at a time (e.g., the others can't even read it). AMP is an easier mP architecture, but it isn't as efficient and is not common anymore. I believe the Cell processor uses that style for its processing units.
Also, note that since multi-core processors are different from multiple packaged processors, there are differences from the classic texts. Programmatically they are largely the same, but some techniques that used to carry heavy costs are now light, since inter-processor communication no longer goes over the bus, etc. There are enough differences that the kernel needs to be aware of whether a uP is logical or physical to best optimize resources/algorithms.