Thursday, March 12, 2009

JRE and Shared Memory Overhead

Shared memory access in Java appears to be about 3 times slower than it could be.

In another post, I mentioned that ByteBuffer and its related classes provide a way for Java developers to access shared memory without having to resort to JNI. The problem shows up when you try to perform operations on that segment.

One approach is to get a reference to an array of bytes that represents the data in the segment and then use the regular array syntax to access the data. The ByteBuffer.array() method appears to be the way to do this, but unfortunately it does not work.

Here is an example of what I mean:

// DOES NOT WORK!!
MappedByteBuffer mbb;
// code to initialize mbb omitted
byte[] ba = mbb.array(); // throws UnsupportedOperationException

After looking around a bit, I came to the conclusion that this is what the original developers intended: if you want to mess with the data in the segment, you are supposed to use ByteBuffer.get/put.
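The failure can also be detected up front instead of by catching the exception: ByteBuffer.hasArray() reports whether a buffer has an accessible backing array. A short sketch (class name is my own):

```java
import java.nio.ByteBuffer;

public class ArrayCheck {
    public static void main(String[] args) {
        // Heap buffers are backed by a byte[], so array() is safe.
        ByteBuffer heap = ByteBuffer.allocate(16);
        // Direct buffers (and mapped buffers, which are direct) have no
        // accessible backing array; array() throws UnsupportedOperationException.
        ByteBuffer direct = ByteBuffer.allocateDirect(16);

        System.out.println(heap.hasArray());   // true
        System.out.println(direct.hasArray()); // false
    }
}
```

MappedByteBuffer falls on the direct side, which is why the snippet above throws.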

This would be fine if get/put were about the same cost as using a straight byte array, but they appear to be 2 to 3 times slower. Here is a simple program that highlights the issue I'm running into. The basic difference is that one version uses:

b1 = bb.get(0);
bb.put(0, b2);

And the other uses:

b1 = bb[0];
bb[0] = b2;
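The full program isn't reproduced in the post; a minimal sketch of that kind of benchmark (class name, buffer size, and iteration count are my own, and a direct buffer stands in for the mapped shared-memory segment so the example is self-contained) might look like:

```java
import java.nio.ByteBuffer;

public class BufferBench {
    static final int ITERS = 50_000_000;
    static final int SIZE = 1024; // power of two so we can mask for the index

    public static void main(String[] args) {
        ByteBuffer bb = ByteBuffer.allocateDirect(SIZE);
        byte[] ba = new byte[SIZE];
        int sink = 0; // accumulate reads so the loops aren't optimized away

        long t0 = System.currentTimeMillis();
        for (int i = 0; i < ITERS; i++) {
            int idx = i & (SIZE - 1);
            sink += bb.get(idx);        // absolute get
            bb.put(idx, (byte) i);      // absolute put
        }
        long t1 = System.currentTimeMillis();

        for (int i = 0; i < ITERS; i++) {
            int idx = i & (SIZE - 1);
            sink += ba[idx];            // plain array read
            ba[idx] = (byte) i;         // plain array write
        }
        long t2 = System.currentTimeMillis();

        System.out.println("Using get/put: " + (t1 - t0));
        System.out.println("Using array access: " + (t2 - t1));
        System.out.println("(checksum " + sink + ")");
    }
}
```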

The program performs these operations millions of times and then prints out the time (in milliseconds) they took to run. An example output:

Using get/put: 11578
Using array access: 3234

One thing this shows is that I really need to upgrade my system.

The basic point is that using get/put is a lot slower than using simple arrays. A program that reads and writes a lot of data in shared memory would be a lot faster if it could simply use an array rather than get/put.
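One partial mitigation (my suggestion, not something from the original measurements) is the bulk overloads get(byte[]) and put(byte[]), which move a whole region between the buffer and an ordinary array in a single call, so the per-element method-call cost is paid once per region instead of once per byte. The trade-off is an extra copy in each direction. A sketch:

```java
import java.nio.ByteBuffer;

public class BulkCopy {
    public static void main(String[] args) {
        // Stand-in for the mapped segment; contents start zeroed.
        ByteBuffer bb = ByteBuffer.allocateDirect(1024);
        byte[] scratch = new byte[1024];

        // Copy the whole region into an ordinary array in one call.
        // (Bulk get is relative, so rewind the position first.)
        bb.position(0);
        bb.get(scratch);

        // Work on the data with plain array indexing.
        for (int i = 0; i < scratch.length; i++) {
            scratch[i]++;
        }

        // Write the region back in one call.
        bb.position(0);
        bb.put(scratch);

        System.out.println(bb.get(0)); // 1
    }
}
```

This helps when reads and writes come in runs; it does nothing for scattered single-byte access, which still goes through get/put.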

Is there a way around this?
