I'm running a Spark quick start application:
/* SimpleApp.java */
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.Dataset;

public class SimpleApp {
    public static void main(String[] args) {
        String logFile = "/data/software/spark-2.4.4-bin-without-hadoop/README.md"; // Should be some file on your system
        SparkSession spark = SparkSession.builder().appName("Simple Application").getOrCreate();
        Dataset<String> logData = spark.read().textFile(logFile).cache();

        long numAs = logData.filter(s -> s.contains("a")).count();
        long numBs = logData.filter(s -> s.contains("b")).count();

        System.out.println("Lines with a: " + numAs + ", lines with b: " + numBs);

        spark.stop();
    }
}
As the official documentation says:

# Package a JAR containing your application
$ mvn package

When I ran mvn package, it raised the error below:
[WARNING] File encoding has not been set, using platform encoding UTF-8, i.e. build is platform dependent!
[INFO] Compiling 1 source file to /home/dennis/java/spark_quick_start/target/classes
[INFO] -------------------------------------------------------------
[ERROR] COMPILATION ERROR :
[INFO] -------------------------------------------------------------
[ERROR] /home/dennis/java/spark_quick_start/src/main/java/SimpleApp.java:[11,25] reference to filter is ambiguous
both method filter(scala.Function1<T,java.lang.Object>) in org.apache.spark.sql.Dataset and method filter(org.apache.spark.api.java.function.FilterFunction<T>) in org.apache.spark.sql.Dataset match
[ERROR] /home/dennis/java/spark_quick_start/src/main/java/SimpleApp.java:[12,25] reference to filter is ambiguous
both method filter(scala.Function1<T,java.lang.Object>) in org.apache.spark.sql.Dataset and method filter(org.apache.spark.api.java.function.FilterFunction<T>) in org.apache.spark.sql.Dataset match
[INFO] 2 errors
[INFO] -------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:00 min
[INFO] Finished at: 2020-01-13T15:04:55+08:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.3:compile (default-compile) on project simple-project: Compilation failure: Compilation failure:
[ERROR] /home/dennis/java/spark_quick_start/src/main/java/SimpleApp.java:[11,25] reference to filter is ambiguous
[ERROR] both method filter(scala.Function1<T,java.lang.Object>) in org.apache.spark.sql.Dataset and method filter(org.apache.spark.api.java.function.FilterFunction<T>) in org.apache.spark.sql.Dataset match
[ERROR] /home/dennis/java/spark_quick_start/src/main/java/SimpleApp.java:[12,25] reference to filter is ambiguous
[ERROR] both method filter(scala.Function1<T,java.lang.Object>) in org.apache.spark.sql.Dataset and method filter(org.apache.spark.api.java.function.FilterFunction<T>) in org.apache.spark.sql.Dataset match
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
This is the pom.xml:
<project>
  <groupId>edu.berkeley</groupId>
  <artifactId>simple-project</artifactId>
  <modelVersion>4.0.0</modelVersion>
  <name>Simple Project</name>
  <packaging>jar</packaging>
  <version>1.0</version>

  <dependencies>
    <dependency> <!-- Spark dependency -->
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-sql_2.12</artifactId>
      <version>2.4.4</version>
      <scope>provided</scope>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>3.3</version>
        <configuration>
          <source>1.8</source>
          <target>1.8</target>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>
It means that your lambda expression can be converted to either a scala.Function1<T,java.lang.Object> or an org.apache.spark.api.java.function.FilterFunction<T>, so the compiler cannot decide between the two filter overloads. I don't know if this would be ambiguous in Scala as well, but in Java it is. You need to state the type explicitly in this case:
long numAs = logData.filter((org.apache.spark.api.java.function.FilterFunction<String>)s -> s.contains("a")).count();
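Casting inline works but reads noisily. As an equivalent alternative (my sketch, not part of the original answer), you can bind the lambda to the target functional interface first; the explicit variable type fixes the overload the same way the cast does:

import org.apache.spark.api.java.function.FilterFunction;

// Giving the lambda an explicit FilterFunction<String> type means only the
// filter(FilterFunction<T>) overload of Dataset can match.
FilterFunction<String> containsA = s -> s.contains("a");
FilterFunction<String> containsB = s -> s.contains("b");

long numAs = logData.filter(containsA).count();
long numBs = logData.filter(containsB).count();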
Or write the code in Scala.
It seems to be a compatibility issue: the Spark 2.4.4 binary distribution is built with Scala 2.11, as I saw on the official download page. As far as I can tell, in the Scala 2.12 artifacts scala.Function1 is a functional interface, so a Java lambda matches both filter overloads, while in the 2.11 artifacts it is not, leaving only the FilterFunction overload. After changing the dependency to the 2.11 artifact, all is working fine!
<dependency> <!-- Spark dependency -->
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-sql_2.11</artifactId>
  <version>2.4.4</version>
  <scope>provided</scope>
</dependency>
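With the dependency matching the Scala version of the installed Spark distribution, the remaining quick start steps work unchanged. For completeness, a rebuild-and-run sketch (the jar name follows the artifactId and version in the pom above; local[4] is just an example master):

# Repackage the JAR
$ mvn package

# Submit it with the spark-submit script from your Spark installation
$ spark-submit --class "SimpleApp" --master local[4] target/simple-project-1.0.jar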