
What is the use of ByteBuffer in Java? [closed]

People also ask

What is the use of ByteBuffer in Java?

The ByteBuffer.allocate() method is used to allocate a new byte buffer. The new buffer's position will be zero, its limit will be its capacity, its mark will be undefined, and each of its elements will be initialized to zero. It will have a backing array, and its array offset will be zero.

What does ByteBuffer wrap do?

wrap() wraps a byte array into a buffer. The new buffer will be backed by the given byte array; that is, modifications to the buffer will cause the array to be modified and vice versa. The new buffer's capacity and limit will be array.length.

What is ByteBuffer limit?

The limit() method of java.nio.ByteBuffer is used to set this buffer's limit. If the position is larger than the new limit, it is set to the new limit. If the mark is defined and larger than the new limit, it is discarded.
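
To make the allocate/wrap/limit behavior above concrete, here is a minimal, self-contained sketch (the class name and array contents are arbitrary):

    import java.nio.ByteBuffer;

    public class WrapDemo {
        public static void main(String[] args) {
            byte[] backing = {10, 20, 30, 40, 50};

            // wrap(): the buffer is backed by the array, so writes through
            // the buffer are visible in the array and vice versa
            ByteBuffer buf = ByteBuffer.wrap(backing);
            buf.put(0, (byte) 99);
            System.out.println(backing[0]);      // 99

            // capacity and limit start at array.length, position at zero
            System.out.println(buf.capacity());  // 5
            System.out.println(buf.limit());     // 5
            System.out.println(buf.position());  // 0

            // limit(): shrink the readable/writable window
            buf.limit(3);
            System.out.println(buf.remaining()); // 3
        }
    }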


This is a good description of its uses and shortcomings. You essentially use it whenever you need to do fast, low-level I/O. If you were going to implement a TCP/IP protocol or write a database (DBMS), this class would come in handy.


The ByteBuffer class is important because it forms a basis for the use of channels in Java. The ByteBuffer class defines six categories of operations upon byte buffers, as stated in the Java 7 documentation:

  • Absolute and relative get and put methods that read and write single bytes;

  • Relative bulk get methods that transfer contiguous sequences of bytes from this buffer into an array;

  • Relative bulk put methods that transfer contiguous sequences of bytes from a byte array or some other byte buffer into this buffer;

  • Absolute and relative get and put methods that read and write values of other primitive types, translating them to and from sequences of bytes in a particular byte order;

  • Methods for creating view buffers, which allow a byte buffer to be viewed as a buffer containing values of some other primitive type; and

  • Methods for compacting, duplicating, and slicing a byte buffer.

Example code: putting bytes into a buffer.

    // Create an empty ByteBuffer with a 10 byte capacity
    ByteBuffer bbuf = ByteBuffer.allocate(10);

    // Get the buffer's capacity
    int capacity = bbuf.capacity(); // 10

    // Use the absolute put(int, byte).
    // This method does not affect the position.
    bbuf.put(0, (byte)0xFF); // position=0

    // Set the position
    bbuf.position(5);

    // Use the relative put(byte)
    bbuf.put((byte)0xFF);

    // Get the new position
    int pos = bbuf.position(); // 6

    // Get remaining byte count
    int rem = bbuf.remaining(); // 4

    // Set the limit
    bbuf.limit(7); // remaining=1

    // This convenience method sets the position to 0
    bbuf.rewind(); // remaining=7
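
The six categories above also mention relative bulk gets and view buffers; as a rough continuation in the same snippet style (the values are arbitrary, and java.nio.IntBuffer is assumed to be imported alongside ByteBuffer):

    // Relative puts, then flip(): limit becomes the current position
    // and position goes back to 0, so the buffer is ready for reading
    ByteBuffer out = ByteBuffer.allocate(8);
    out.putInt(42);            // 4-byte int, big-endian by default
    out.putInt(7);
    out.flip();                // position=0, limit=8

    // Relative bulk get into an array
    byte[] dst = new byte[4];
    out.get(dst);              // copies the first 4 bytes, position=4

    // View buffer: interpret the same bytes as ints
    out.rewind();              // position=0 again
    IntBuffer ints = out.asIntBuffer();
    int first = ints.get();    // 42
    int second = ints.get();   // 7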

Java I/O using stream-oriented APIs is performed using a buffer as temporary storage of data within user space. Data read from disk by DMA is first copied to buffers in kernel space, which are then transferred to a buffer in user space, so there is overhead. Avoiding it can yield a considerable gain in performance.

We could skip this temporary buffer in user space if there were a way to access the buffer in kernel space directly. Java NIO provides a way to do so.

ByteBuffer is one of several buffers provided by Java NIO. It's just a container, or holding tank, to read data from or write data to. The above behavior is achieved by allocating a direct buffer using the allocateDirect() API on ByteBuffer.
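
As a rough illustration of that idea, the sketch below allocates a direct buffer and fills it from a FileChannel; the file name "data.bin" and the 4096-byte size are placeholder assumptions, and error handling is omitted:

    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    public class DirectReadSketch {
        public static void main(String[] args) throws Exception {
            // A direct buffer lives outside the Java heap, so the channel can
            // transfer data into its native memory without the extra copy a
            // heap buffer would typically need
            ByteBuffer direct = ByteBuffer.allocateDirect(4096);

            // "data.bin" is a placeholder path
            try (FileChannel ch = FileChannel.open(Path.of("data.bin"),
                                                   StandardOpenOption.READ)) {
                int read = ch.read(direct); // fill the buffer from the file
                direct.flip();              // switch from writing to reading
                int checksum = 0;
                while (direct.hasRemaining()) {
                    checksum += direct.get(); // consume the bytes
                }
                System.out.println(read + " bytes, checksum " + checksum);
            }
        }
    }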

The Java documentation for ByteBuffer has useful information.


Here is a great article explaining the benefits of ByteBuffer. The key points from the article follow:

  • The first advantage of a ByteBuffer, irrespective of whether it is direct or non-direct, is efficient random access of structured binary data (e.g., low-level I/O, as stated in one of the answers). Prior to Java 1.4, one could read such data with a DataInputStream, but without random access; a short sketch of this follows below.
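
For example, a fixed-layout binary record can be read at arbitrary offsets with the absolute getters; the record layout below (an int id, a long timestamp, a short flags field) is invented purely for illustration:

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    public class RandomAccessSketch {
        public static void main(String[] args) {
            // One pretend 14-byte record: int id (offset 0),
            // long timestamp (offset 4), short flags (offset 12)
            ByteBuffer record = ByteBuffer.allocate(14).order(ByteOrder.BIG_ENDIAN);
            record.putInt(0, 1234);
            record.putLong(4, 1_700_000_000_000L);
            record.putShort(12, (short) 3);

            // Absolute gets jump straight to a field without reading what
            // precedes it, which a sequential DataInputStream cannot do
            long timestamp = record.getLong(4);
            short flags = record.getShort(12);
            System.out.println(timestamp + " " + flags);
        }
    }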

The following benefits apply specifically to direct ByteBuffers/MappedByteBuffers. Note that direct buffers are created outside of the heap:

  1. Unaffected by GC cycles: direct buffers won't be moved during garbage-collection cycles because they reside outside of the heap. Terracotta's BigMemory caching technology seems to rely heavily on this advantage; if these buffers lived on the heap, they would add to GC pause times.

  2. Performance boost: in stream I/O, read calls entail system calls, which require a context switch from user to kernel mode and back, which is costly, especially if the file is accessed constantly. With memory mapping, this context switching is reduced because the data is more likely to be found in memory (MappedByteBuffer). If the data is available in memory, it is accessed directly without invoking the OS, i.e., with no context switching.

Note that MappedByteBuffers are especially useful if the files are big and a few groups of blocks are accessed more frequently (a minimal mapping sketch follows after this list).

  3. Page sharing: memory-mapped files can be shared between processes, as they are mapped into each process's virtual memory space.
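
A minimal memory-mapping sketch along those lines; the file name "big-file.dat" is a placeholder, and real code should check the file length (a single map() call covers at most Integer.MAX_VALUE bytes) and handle errors:

    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    public class MappedReadSketch {
        public static void main(String[] args) throws Exception {
            // "big-file.dat" is a placeholder path
            try (FileChannel ch = FileChannel.open(Path.of("big-file.dat"),
                                                   StandardOpenOption.READ)) {
                // Map the file into memory; reads then go through the page cache
                // rather than issuing a read() system call per access
                MappedByteBuffer map = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());

                // Random access anywhere in the mapped region
                byte first = map.get(0);
                byte middle = map.get((int) (ch.size() / 2));
                System.out.println(first + " " + middle);
            }
        }
    }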

In Android, you can create a shared buffer between C++ and Java (with the allocateDirect() method) and manipulate it on both sides.