Today’s computer manufacturers can pack a huge number of circuits onto a single chip of silicon. The number of circuits has been doubling roughly every 18 months, and has been doing so since 1965. This is called Moore's Law, after Gordon E. Moore, who described the phenomenon in 1965. There have been 33 doubling periods since then, and the result is that chips are about 8 thousand million times more complex than they were in 1965.
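The growth figure above follows directly from compounding. A quick sketch of the arithmetic in Python, using the 33 doubling periods stated above:

```python
# 33 doublings since 1965, one roughly every 18 months (per Moore's Law
# as stated in the text). Each doubling multiplies complexity by 2.
doublings = 33
growth = 2 ** doublings
print(growth)  # 8589934592 -- about 8 thousand million
```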
Recent efforts to shrink the electrical circuits on chips have run into something called a heat wall. This wall is due to leakage of electrical current between parts of the chip, which generates heat in much the same way a hair dryer does. Shrinking is the main way that computers have become faster: electrical signals can travel no faster than the speed of light, so shorter distances mean faster operation. As a result of the heat wall, computer chips have stopped shrinking and instead have become more complex.
As a result, chips cannot now be shrunk much beyond what was achieved around the year 2000. Instead, chips have stayed the same size and accordingly run at roughly the same speed of 2+ gigahertz, while the number of circuits on the chip continues to increase. The question is, what use is made of the huge number of extra circuits? The answer is to place multiple CPUs on a single chip. In this arrangement each CPU is called a core, and the term CPU is reserved for the chip as a whole. Current products support up to 16 cores on a single CPU chip, and this number is increasing.
A traditional computer is composed of several principal parts: a Central Processing Unit (CPU), main memory, and an I/O subsystem that allows data and programs to be moved into and out of memory. As you can see, current technology delivers far more than a single CPU: a chip may present 1, 2, 4, 12, or 16 cores to the software that runs on it.
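A program can ask how many logical cores the operating system presents. A minimal sketch in Python, using the standard-library `os.cpu_count()` call (the number printed depends on the machine it runs on):

```python
import os

# os.cpu_count() reports how many logical cores the operating
# system presents; it may return None if the count is unknown.
cores = os.cpu_count()
print(f"This machine presents {cores} logical cores")
```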
This abundance of capability is difficult for a traditional program to use, because only one core can be kept busy at a time unless special parallel programming techniques are used. What is done instead is to partition the entire computer into several “logical” computers. This partitioning is done by software called a hypervisor. To the operating system that controls it, such as Linux or Windows, each logical computer appears to be a single machine. The operating system in a logical instance “sees” a complete machine, but that machine is carved out of a larger physical machine.
The hypervisor is responsible for maintaining strict security controls so that no information can leak between the logical computers.
These logical computers have several names. They may be called instances or, alternatively, virtual servers.
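The special programming techniques mentioned earlier can be as simple as a process pool, which spreads independent pieces of work across cores. A minimal sketch in Python using the standard-library `multiprocessing` module (the function and pool size here are illustrative, not from the text):

```python
import multiprocessing

def square(n):
    # CPU-bound work; each call may run on a different core.
    return n * n

if __name__ == "__main__":
    # A pool of worker processes is one way to keep several cores
    # busy at once; pool.map splits the inputs among the workers.
    with multiprocessing.Pool(processes=4) as pool:
        results = pool.map(square, range(8))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Without such a technique, the loop over `range(8)` would run on a single core no matter how many the chip provides.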