Existing data for tests is not found (Scala, Specs2)

I am building integration tests that read the data generated by the previous test cases and compare it with the expected result. When I run the tests, the generated data is not visible in the directory for the next test cases, even though it is there. When I re-run the tests, the data is picked up and read from the directory. What could be the reason for this? Could there be a problem with the order in which the tests are executed?

Here is what my tests look like:

class LoaderSpec extends Specification {

  sequential

  "Loader" should {
    "run job from assembled .jar" in {
      val res = "sh ./src/test/runLoader.sh".!
      res must beEqualTo(0)
    }

    "write results to the resources" in {
      val resultsPath = "/results/loader_result"
      resourcesDirectoryIsEmpty(resultsPath) must beFalse
    }

    "have actual result same as expected one" in {
      val expected: Set[String] = readFilesFromDirs("source/loader_source")
      println(expected)

      val result: Set[String] = readFilesFromDirs("/results/loader_result")
      println(result)

      expected must beEqualTo(result)
    }
  }
}

The first test succeeds and the next two fail because the data is not found. When I re-run the same test suite without any changes, all the tests succeed.

The runLoader.sh script:

$SPARK_HOME/bin/spark-submit \
 --class "loader.LoaderMain" \
 \
 --conf "spark.hadoop.fs.gs.impl=com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem" \
 --conf "spark.hadoop.fs.AbstractFileSystem.gs.impl=com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS" \
 --conf "spark.hadoop.fs.gs.project.id=loader-files" \
 --conf "spark.hadoop.fs.gs.implicit.dir.repair.enable=false" \
 \
 --conf "spark.loader.Config.srcPaths=;src/test/resources/source/loader" \
 --conf "spark.loader.Config.dstPath=src/test/resources/results/loader_result" \
 --conf "spark.loader.Config.filesPerPartner=10" \
 \
 --conf "spark.shuffle.memoryFraction=0.4" \
 --conf "spark.task.maxFailures=20" \
 --conf "spark.executor.extraJavaOptions=${EXTRA_JVM_FLAGS}" \
 \
 --master "local[8]" \
 --driver-memory 1500M \
 --driver-java-options "${EXTRA_JVM_FLAGS}" \
 $(find "$(pwd)"/target/scala-2.11 -name 'loader-assembly-*.jar')
Asked Sep 13 '19 by Cassie


1 Answer

I tried changing the way I read the files. It turns out that reading from the resources can produce this error, because the resource contents are resolved before all the tests run. When I read the data directly from the directory instead, the contents are up to date and the error does not occur. This is how I changed the test:

"write results to the resources" in {
  val resultsPath = "./src/dockerise/resource/results/loader_result"
  resourcesDirectoryIsEmpty(resultsPath) must beFalse
}
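For context, here is a minimal sketch of what filesystem-based helpers behaving this way could look like. The names resourcesDirectoryIsEmpty and readFilesFromDirs come from the question; their bodies below are only my assumption of a plain-filesystem implementation, not the original code. A classpath lookup such as getClass.getResource("/results/loader_result") would resolve against the copy under target/scala-2.11/test-classes, which is populated before the tests start, so files written by the first example would only appear on the next run; File paths are resolved at the moment the example executes.

import java.io.File
import scala.io.Source

// Hypothetical helper: checks the directory on the filesystem at run time,
// so files written by an earlier example in the same run are visible.
def resourcesDirectoryIsEmpty(path: String): Boolean = {
  val dir = new File(path)
  !dir.exists() || Option(dir.listFiles()).forall(_.isEmpty)
}

// Hypothetical helper: recursively reads every file under `path`
// and collects all lines into a Set.
def readFilesFromDirs(path: String): Set[String] = {
  def listRecursively(f: File): Seq[File] =
    if (f.isDirectory) Option(f.listFiles()).toSeq.flatten.flatMap(listRecursively)
    else Seq(f)

  listRecursively(new File(path)).flatMap { file =>
    val source = Source.fromFile(file)
    try source.getLines().toList finally source.close()
  }.toSet
}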
Answered Nov 29 '22 by Cassie