 

Java: Out of Memory error when my application runs for a long time

I have a Java application that handles very small files (about 1 KB each) but in very large numbers: roughly 20,000 files arrive per minute. The application picks up each file and uploads it to S3.

I run the uploads in 10 parallel threads, and the application has to run continuously.

After the application has been running for a few days, I get an Out of Memory error.

This is the exact error I get:

#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 347376 bytes for Chunk::new
# Possible reasons:
#   The system is out of physical RAM or swap space
#   In 32 bit mode, the process size limit was hit
# Possible solutions:
#   Reduce memory load on the system
#   Increase physical memory or swap space
#   Check if swap backing store is full
#   Use 64 bit Java on a 64 bit OS
#   Decrease Java heap size (-Xmx/-Xms)
#   Decrease number of Java threads
#   Decrease Java thread stack sizes (-Xss)
#   Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
#  Out of Memory Error (allocation.cpp:390), pid=6912, tid=0x000000000003ec8c
#
# JRE version: Java(TM) SE Runtime Environment (8.0_181-b13) (build 1.8.0_181-b13)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.181-b13 mixed mode windows-amd64 compressed oops)
# Core dump written. Default location: d:\S3FileUploaderApp\hs_err_pid6912.mdmp
#

Here are my Java classes. I am copying all of them so that it is easy to investigate.

This is my Java VisualVM report: [image]

My sample output: [image]

The Metaspace usage: [image]

This is my main class:

  public class UploadExecutor {
    private static Logger _logger = Logger.getLogger(UploadExecutor.class);

    public static void main(String[] args) {
        _logger.info("----------STARTING JAVA MAIN METHOD----------------- ");
        /*
         * 3 C:\\Users\\u6034690\\Desktop\\TWOFILE\\xml
         * a205381-tr-fr-production-us-east-1-trf-auditabilty
         */
        final int batchSize = 100;
        while (true) {
            String strNoOfThreads = args[0];
            String strFileLocation = args[1];
            String strBucketName = args[2];
            int iNoOfThreads = Integer.parseInt(strNoOfThreads);
            S3ClientManager s3ClientObj = new S3ClientManager();
            AmazonS3Client s3Client = s3ClientObj.buildS3Client();
            try {
                FileProcessThreads fp = new FileProcessThreads();
                File[] files = fp.getFiles(strFileLocation);
                try {
                    _logger.info("No records found will wait for 10 Seconds");
                    TimeUnit.SECONDS.sleep(10);
                    files = fp.getFiles(strFileLocation);
                    ArrayList<File> batchFiles = new ArrayList<File>(batchSize);
                    if (null != files) {
                        for (File path : files) {
                            String fileType = FilenameUtils.getExtension(path.getName());
                            long fileSize = path.length();
                            if (fileType.equals("gz") && fileSize > 0) {
                                batchFiles.add(path);
                            }
                            if (batchFiles.size() == batchSize) {
                                BuildThread buildThreadObj = new BuildThread();
                                buildThreadObj.buildThreadLogic(iNoOfThreads, s3Client, batchFiles, strFileLocation,
                                        strBucketName);
                                _logger.info("---Batch One got completed---");
                                batchFiles.clear();
                            }
                        }
                    }
                    // handle the remaining files (a final batch smaller than batchSize)
                    if (!batchFiles.isEmpty()) {
                        BuildThread buildThreadObj = new BuildThread();
                        buildThreadObj.buildThreadLogic(iNoOfThreads, s3Client, batchFiles, strFileLocation,
                                strBucketName);
                        batchFiles.clear();
                    }
                } catch (InterruptedException e) {
                    _logger.error("InterruptedException: " + e.toString());
                }
            } catch (Throwable t) {
                _logger.error("Throwable: " + t.toString());
            }
        }
    }
}
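
As an aside, the command-line arguments and the S3 client never change between iterations, so they could be parsed and built once before the while loop. A minimal sketch of that rearrangement (same names as in the class above):

    public static void main(String[] args) {
        final int batchSize = 100;
        // parse once; these values do not change while the program runs
        int iNoOfThreads = Integer.parseInt(args[0]);
        String strFileLocation = args[1];
        String strBucketName = args[2];
        // build one client for the lifetime of the process
        AmazonS3Client s3Client = new S3ClientManager().buildS3Client();
        while (true) {
            // ... batching and upload logic as in the original loop ...
        }
    }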

This is the class where I build the threads and shut down the executor, so for every batch I create a new ExecutorService:

   public class BuildThread {

    private static Logger _logger = Logger.getLogger(BuildThread.class);

    public  void buildThreadLogic(int iNoOfThreads,AmazonS3Client s3Client, List<File> records,String strFileLocation,String strBucketName) {


        _logger.info("Calling buildThreadLogic method of BuildThread class");

        final ExecutorService executor = Executors.newFixedThreadPool(iNoOfThreads);
        int recordsInEachThread = records.size() / iNoOfThreads;
        int threadIncr = 2;
        int recordsInEachThreadStart = 0;
        int recordsInEachThreadEnd = 0;

        for (int i = 0; i < iNoOfThreads; i++) {
            if (i == 0) {
                recordsInEachThreadEnd = recordsInEachThread;
            }
            if (i == iNoOfThreads - 1) {
                recordsInEachThreadEnd = records.size();
            }

            Runnable worker = new UploadObject(records.subList(recordsInEachThreadStart, recordsInEachThreadEnd), s3Client,strFileLocation,strBucketName);
            executor.execute(worker);
            recordsInEachThreadStart = recordsInEachThreadEnd;
            recordsInEachThreadEnd = recordsInEachThread * threadIncr;
            threadIncr++;
        }

        executor.shutdown();
        // busy-wait (spin) until every task submitted to this executor has finished
        while (!executor.isTerminated()) {
        }

        _logger.info("Existing buildThreadLogic method");
    }

}
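
A side note on the wait loop above: it spins in a tight loop until the pool terminates. A variant using awaitTermination blocks instead of spinning; a rough sketch (the one-hour timeout is an arbitrary example):

    executor.shutdown();
    try {
        // blocks until all submitted uploads finish or the timeout elapses
        if (!executor.awaitTermination(1, TimeUnit.HOURS)) {
            _logger.warn("Executor did not terminate within the timeout");
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }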

And this is the class where I upload my files to S3; it has the run method:

   public class UploadObject implements Runnable {

    private static final Logger _logger = Logger.getLogger(UploadObject.class);
    List<File> records;
    AmazonS3Client s3Client;
    String fileLocation;
    String strBucketName;

    UploadObject(List<File> list, AmazonS3Client s3Client, String  fileLocation, String strBucketName) {
        this.records = list;
        this.s3Client = s3Client;
        this.fileLocation = fileLocation;
        this.strBucketName = strBucketName;
    }

    public void run() {
        uploadToS3();
    }

    public void uploadToS3() {
        _logger.info("Number of record to be uploaded  in current thread: : " + records.size());
        TransferManager tm = new TransferManager(s3Client);
        final MultipleFileUpload upload = tm.uploadFileList(strBucketName, "", new File(fileLocation), records);
        try {
            upload.waitForCompletion();
        } catch (AmazonServiceException e1) {
            _logger.error("AmazonServiceException " + e1.getErrorMessage());
            System.exit(1);
        } catch (AmazonClientException e1) {
            _logger.error("AmazonClientException " +  e1.getMessage());
            System.exit(1);
        } catch (InterruptedException e1) {
            _logger.error("InterruptedException " +  e1.getMessage());
            System.exit(1);
        } finally {
            _logger.info("--Calling TransferManager ShutDown--");
            tm.shutdownNow(false);

        }

        CleanUp cleanUpObj = new CleanUp();
        cleanUpObj.deleteUploadedFile(upload, records);
    }
}

This class is used to create the S3 client:

public class S3ClientManager {

    private static Logger _logger = Logger.getLogger(S3ClientManager.class);

    public  AmazonS3Client buildS3Client() {

        _logger.info("Calling buildS3Client method of S3ClientManager class");
        AWSCredentials credential = new ProfileCredentialsProvider("TRFAuditability-Prod-ServiceUser").getCredentials();
        AmazonS3Client s3Client = (AmazonS3Client) AmazonS3ClientBuilder.standard().withRegion("us-east-1")
                .withCredentials(new AWSStaticCredentialsProvider(credential)).withForceGlobalBucketAccessEnabled(true)
                .build();
        s3Client.getClientConfiguration().setMaxConnections(5000);
        s3Client.getClientConfiguration().setConnectionTimeout(6000);
        s3Client.getClientConfiguration().setSocketTimeout(30000);
        _logger.info("Exiting  buildS3Client method of S3ClientManager class");
        return s3Client;
    }
}

This is where I get the files:

public class FileProcessThreads {

    public File[] getFiles(String fileLocation) {

        File dir = new File(fileLocation);
        // listFiles() returns null when the path is not a directory or an I/O error occurs
        File[] directoryListing = dir.listFiles();
        if (directoryListing != null && directoryListing.length > 0) {
            return directoryListing;
        }
        return null;
    }
} 
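
Since the main loop keeps only non-empty .gz files, the same filter could also be applied at listing time. A sketch under that assumption (getGzFiles is a hypothetical name, not part of the original code):

    public File[] getGzFiles(String fileLocation) {
        File dir = new File(fileLocation);
        // accept only non-empty regular files with a .gz extension
        File[] listing = dir.listFiles(f -> f.isFile()
                && f.getName().endsWith(".gz")
                && f.length() > 0);
        return (listing != null && listing.length > 0) ? listing : null;
    }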
asked Apr 24 '19 by SUDARSHAN


1 Answer

What version of Java are you using, and what parameters have you set for the garbage collector? Recently, I ran into an issue with our Java 8 applications running with default settings: over time they would eat up all of the memory the server had available. I fixed this by adding the following parameters to each application:

  • -XX:+UseG1GC - makes the application use the G1 garbage collector.
  • -Xms32M - sets the initial (minimum) heap size to 32 MB.
  • -Xmx512M - sets the maximum heap size to 512 MB.
  • -XX:MinHeapFreeRatio=20 - the minimum percentage of the heap that must be free after a GC; if less is free, the heap is expanded.
  • -XX:MaxHeapFreeRatio=40 - the maximum percentage of the heap allowed to be free after a GC; if more is free, the heap is shrunk.

Note that you should know the memory requirements and behavior of your application before configuring these parameters to avoid major performance issues.
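
For example, a launch line with all five flags might look like this (the jar name and program arguments are placeholders, not taken from the question):

    java -XX:+UseG1GC -Xms32M -Xmx512M -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40 -jar s3-uploader.jar 10 d:\S3FileUploaderApp\files my-bucket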

What happens is that Java keeps allocating more memory from the server until it reaches the maximum heap size; only then does it run garbage collection to try to free space within the memory it already has. This meant we had 16 microservices growing naturally over time without ever garbage collecting, since they never reached the default maximum of 4 GB. Before they could, the server ran out of RAM to give to the applications, and the OutOfMemory errors began. This was especially apparent in our application that reads and parses over 400,000 files per day.
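
You can watch this growth as it happens with the jstat tool that ships with the JDK; for example, sampling heap occupancy every second (the pid here is illustrative):

    jstat -gcutil 6912 1000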

Additionally, because the default garbage collector in Java 8 is the parallel garbage collector, the applications would never give memory back to the server. Changing these settings made our microservices manage their memory more efficiently and made them play nicely on the server by giving back memory they no longer needed.
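
To confirm the collector really is giving memory back, GC logging can be enabled with the standard Java 8 flags and the heap-size figures compared before and after collections (the rest of the command line is elided):

    java -XX:+UseG1GC -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log ...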

This is the article that I found that helped me solve our issue. It describes everything I said above in more detail.

answered Oct 21 '22 by sawprogramming