I have a question regarding Java 8. Here is my source code:
final Consumer<String> d = e -> System.out.println(e);
final Function<String, String> upper = x -> x.toUpperCase();
final Function<String, String> lower = x -> x.toLowerCase();
new Thread(() -> d.accept(upper.apply("hello 1"))).run();
new Thread(() -> d.accept(lower.apply("hello 2"))).run();
This works quite well and produces the following output:
HELLO 1
hello 2
My question now is whether the syntax above, d.accept(…) and upper.apply(…), is the only possible one, or whether there is a more "Java 8 lambda" style in which the last two lines could be written.
Before saying anything about lambda expressions or functional interfaces, we have to talk about a really problematic mistake: you are calling run() on a thread! If you want to start a new thread, you have to call start() on the Thread instance; if you just want to run the code sequentially, don't create a Thread at all (a plain Runnable is enough).
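For illustration, a minimal sketch of the two lines from the question with that mistake fixed, calling start() so each task actually runs on its own thread:
// start() hands the Runnable to a new thread;
// run() would merely execute it on the current thread.
new Thread(() -> d.accept(upper.apply("hello 1"))).start();
new Thread(() -> d.accept(lower.apply("hello 2"))).start();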
That said, there are some default methods on the functional interfaces of Java 8 for combining functions, e.g. you can chain two Functions via Function.andThen(…), but the available combinations are far from complete.
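As a small sketch (reusing upper, lower and d from the question), Function.andThen chains two functions and Consumer.andThen chains two consumers; note, however, that the JDK offers no built-in way to combine a Function with a Consumer, which is exactly the gap the utility methods below fill:
// chain two Functions: upper-case first, then lower-case the result
final Function<String, String> upperThenLower = upper.andThen(lower);
d.accept(upperThenLower.apply("Hello"));   // prints "hello"
// chain two Consumers: print the value twice
final Consumer<String> printTwice = d.andThen(d);
printTwice.accept("Hello");                // prints "Hello" twice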
If a certain combining task repeats in your application, you may consider creating utility methods:
// binds a fixed value to a Consumer, yielding a Runnable
public static <T> Runnable bind(T value, Consumer<T> c) {
    return () -> c.accept(value);
}
// lets a Consumer pre-process its input with a Function
public static <T,U> Consumer<U> compose(Function<U,T> f, Consumer<? super T> c) {
    return u -> c.accept(f.apply(u));
}
These can then be used like this:
new Thread(bind("Hello 1", compose(upper, d))).start();
new Thread(bind("Hello 2", compose(lower, d))).start();
But this combination of a value, a transformation and a final consumer looks more like a task for the Stream API:
Stream.of("Hello 1").map(upper).forEach(d);
Stream.of("Hello 2").map(lower).forEach(d);
I left out the creation of a new thread here, as it doesn't provide any benefit anyway.
If you really want parallel processing, you can do it on a per-character basis:
"Hello 1".chars().parallel()
.map(Character::toUpperCase).forEachOrdered(c->System.out.print((char)c));
but there still won’t be any benefit given the simplicity of the task and the fixed overhead of the parallel processing.