So, I am trying to redirect the output of an Apache spark-submit command to a text file, but some of the output fails to populate the file. Here is the command I am using:
spark-submit something.py > results.txt
I can see the output in the terminal but I do not see it in the file. What am I forgetting or doing wrong here?
Edit:
If I use
spark-submit something.py | less
I can see all of the output being piped into less.
spark-submit
prints most of its output to STDERR, while > only redirects STDOUT.
To redirect the entire output to one file, you can use:
spark-submit something.py > results.txt 2>&1
Or
spark-submit something.py &> results.txt
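To see why the first form misses most of the output, here is a minimal shell sketch (not Spark-specific) of how `>` treats STDOUT and STDERR differently:

```shell
# A command that writes one line to stdout and one to stderr:
sh -c 'echo out; echo err >&2' > only_stdout.txt   # "err" still appears in the terminal
sh -c 'echo out; echo err >&2' > both.txt 2>&1     # both lines land in the file
```

The `2>&1` must come after the `> file` redirection, because redirections are applied left to right: it says "send file descriptor 2 (stderr) wherever descriptor 1 (stdout) currently points", which at that moment is the file.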
If you are running spark-submit on a cluster, the logs are stored under the application ID. You can retrieve them once the application finishes:
yarn logs --applicationId <your applicationId> > myfile.txt
This should fetch the logs of your job.
The applicationId of your job is printed when you submit the Spark job; you will be able to see it in the console where you submitted, or in the Hadoop UI.
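If you don't want to copy the applicationId from the console by hand, a hedged sketch (assuming the usual YARN naming pattern application_&lt;clusterTimestamp&gt;_&lt;sequence&gt; appears somewhere in spark-submit's output) is to save the submit log and grep the ID out of it:

```shell
# Capture both stdout and stderr of the submit, keep a copy in submit.log,
# then extract the first application_<digits>_<digits> token from it.
spark-submit something.py 2>&1 | tee submit.log
appId=$(grep -oE 'application_[0-9]+_[0-9]+' submit.log | head -n 1)
yarn logs --applicationId "$appId" > myfile.txt
```

The exact wording of the submit log lines can vary between Spark versions, but the application_…_… token itself follows this pattern on YARN.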