A while ago I read a blog post claiming that a Java application ran better when it was restricted to a single CPU on a multicore machine: http://mailinator.blogspot.com/2010/02/how-i-sped-up-my-server-by-factor-of-6.html
What reasons could there be for a Java application running on a multicore machine to run much slower than it would on a single-core machine?
Java will benefit from multiple cores if the OS distributes its threads over the available processors. The JVM itself does not do anything special to get its threads scheduled evenly across multiple cores.
The Solaris JVM interpreter takes full advantage of multiprocessor systems by using the intrinsic Solaris multithread facilities. These allow multiple threads of a single process to be scheduled simultaneously onto multiple CPUs.
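To make the scheduling point concrete, here is a minimal sketch (the class name and loop count are mine) that starts one CPU-bound thread per available core. Nothing in the Java code chooses a core; the JVM creates native threads and the OS scheduler is what spreads them across processors:

    public class SpreadThreads {
        public static void main(String[] args) throws InterruptedException {
            int cores = Runtime.getRuntime().availableProcessors();
            Thread[] workers = new Thread[cores];
            for (int i = 0; i < cores; i++) {
                workers[i] = new Thread(() -> {
                    long sum = 0;
                    for (long j = 0; j < 1_000_000_000L; j++) {
                        sum += j;               // CPU-bound busy work
                    }
                    System.out.println(sum);    // keep the loop from being optimized away
                });
                workers[i].start();             // no core is chosen here; the OS decides
            }
            for (Thread t : workers) {
                t.join();
            }
        }
    }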
A single thread is allowed to lock the same object multiple times. For each object, the JVM maintains a count of the number of times the object has been locked. An unlocked object has a count of zero. When a thread acquires the lock for the first time, the count is incremented to one.
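As a rough illustration of that lock counting (the class name and printed values are mine): java.util.concurrent's ReentrantLock exposes an analogous per-thread hold count through getHoldCount(), whereas with synchronized the same count exists but is kept internally by the JVM:

    import java.util.concurrent.locks.ReentrantLock;

    public class ReentrancyDemo {
        public static void main(String[] args) {
            ReentrantLock lock = new ReentrantLock();
            System.out.println(lock.getHoldCount()); // 0 -- unlocked
            lock.lock();                             // first acquisition
            System.out.println(lock.getHoldCount()); // 1
            lock.lock();                             // same thread locks again
            System.out.println(lock.getHoldCount()); // 2
            lock.unlock();
            lock.unlock();                           // count drops back to 0; lock is released
            System.out.println(lock.getHoldCount()); // 0
        }
    }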
Yes, a single process can run multiple threads on different cores. Caching behaviour is specific to the hardware: many modern Intel processors have three levels of cache, with the last-level cache shared across cores.
If there is significant contention for shared resources among the threads, locking and unlocking objects can require a large number of inter-processor interrupts (IPIs), and the processors may spend more time invalidating their L1 and L2 cache lines and re-fetching data from other CPUs than they spend making actual progress on the problem at hand.
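A hedged sketch of what that contention can look like (class name and iteration counts are mine): every thread below fights for the same monitor and writes to the same cache line, so on a multicore machine much of the time goes into bouncing that line between cores rather than into the increments themselves:

    public class ContendedCounter {
        private long value;

        public synchronized void increment() {
            value++;                             // all threads serialize on this monitor
        }

        public static void main(String[] args) throws InterruptedException {
            ContendedCounter counter = new ContendedCounter();
            int cores = Runtime.getRuntime().availableProcessors();
            Thread[] workers = new Thread[cores];
            for (int i = 0; i < cores; i++) {
                workers[i] = new Thread(() -> {
                    for (int j = 0; j < 10_000_000; j++) {
                        counter.increment();     // lock plus cache-line ping-pong between cores
                    }
                });
                workers[i].start();
            }
            for (Thread t : workers) {
                t.join();
            }
            System.out.println(counter.value);   // safe to read: join() establishes happens-before
        }
    }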
Contention like this becomes a real problem if the application's locking is far too fine-grained. (I once heard it summed up as "there is no point in having more than one lock per CPU cache line", which is definitely true and perhaps still too fine-grained.)
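A sketch of what "too fine-grained" can mean in practice (the field layout and counts are mine, and the cache-line sharing is only likely, not guaranteed): the two counters below are guarded by separate locks, but the fields sit next to each other and will often share a cache line, so the two threads still invalidate each other's caches on every write:

    public class FineGrainedLocks {
        private final Object lockA = new Object();
        private final Object lockB = new Object();
        private long counterA;   // adjacent fields: likely on the same cache line
        private long counterB;

        void bumpA() { synchronized (lockA) { counterA++; } }
        void bumpB() { synchronized (lockB) { counterB++; } }

        public static void main(String[] args) throws InterruptedException {
            FineGrainedLocks f = new FineGrainedLocks();
            Thread a = new Thread(() -> { for (int i = 0; i < 50_000_000; i++) f.bumpA(); });
            Thread b = new Thread(() -> { for (int i = 0; i < 50_000_000; i++) f.bumpB(); });
            a.start();
            b.start();
            a.join();
            b.join();
            // Padding the fields apart, or simply sharing one lock, avoids the false sharing.
            System.out.println(f.counterA + " " + f.counterB);
        }
    }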
Java's "every object is a mutex" could lead to having too many locks in the running system if too many are live and contended.
I have no doubt someone could intentionally write such an application, but it probably isn't very common. Most developers would write their applications to reduce resource contention where they can.
I doubt the "Much" part.
My guess would be that the expense of moving state from one CPU to another is high enough to be noticeable. Generally you want a job to stay on the same CPU so that its data remains in that CPU's caches as much as possible.