I need to execute something like: sed '1d' simple.tsv > noHeader.tsv
which removes the first line from my big flow file (> 1 GB).
The thing is - I need to execute it on my flow file, so it'd be:
sed '1d' myFlowFile > myFlowFile
The question is: how should I configure the ExecuteStreamCommand processor so that it runs the command on my flow file and writes the result back to the flow file? If sed is not the best option, I can consider doing this another way (e.g. tail).
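For reference, a quick shell sketch of the non-NiFi part (file names and sample data are invented): note that redirecting sed's output back onto its own input file does not work, because the shell truncates the file before sed reads it.

```shell
# Sample TSV with a header line (contents invented for illustration)
printf 'id\tname\n1\talice\n2\tbob\n' > simple.tsv

# NOTE: "sed '1d' myFlowFile > myFlowFile" would NOT work -- the shell
# truncates the output file first, leaving an empty file.
# Write to a different file instead:
sed '1d' simple.tsv > noHeader.tsv

# tail is an equivalent alternative: print from line 2 onward
tail -n +2 simple.tsv
```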
Thanks, Michal
Edit 2 (Solution):
Below is the final ExecuteStreamCommand config that does what I need (remove 1st line from the flow file). @Andy - thanks a lot for all the precious hints.
Keep in mind that NiFi performs this action as the user that owns the NiFi Java process. The ExecuteStreamCommand processor passes the flow file's content (if there is any) to stdin.
Assuming the command was "myscript.py", ExecuteStreamCommand would do the same thing as you would if you opened a terminal window on your NiFi node and ran "myscript.py" yourself.
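As a shell sketch of what the processor does (sample data invented): because the content arrives on the command's stdin, sed needs only its '1d' script as an argument and no filename.

```shell
# Rough equivalent of ExecuteStreamCommand with sed as the command and
# "1d" as the argument: the flow file content arrives on stdin
printf 'id\tname\n1\talice\n' | sed '1d'
```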
Executes an external command on the contents of a flow file, and creates a new flow file with the results of the command.
Michal,
I want to make sure I'm understanding your problem correctly, because I think there are better solutions.
Problem:
You have a 1GB TSV loaded into NiFi and you want to remove the first line.
Solution:
If your file was smaller, the best solution would be to use a ReplaceText processor with the following processor properties:

Search Value: ^.*\n
Replacement Value: (empty string)

That would strip the first line out without having to send the 1GB content out of NiFi to the command-line and then re-ingest the results. Unfortunately, to use a regular expression, you need to set a Maximum Buffer Size, which means the entire contents need to be read into heap memory to perform this operation.
With a 1GB file, if you know the exact size of the first line, you should try ModifyBytes, which allows you to trim a byte count from the beginning and/or end of the flowfile contents. You could simply instruct the processor to drop the first n bytes of the content. Because of NiFi's copy-on-write content repository, you will still have ~2GB of data on disk, but the processor works in a streaming manner using an 8192-byte buffer.
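To find the byte count to drop, you can measure the header of a sample of the file at the shell (the file name and contents below are invented for illustration; the resulting number would go into the processor's start-offset property):

```shell
printf 'id\tname\n1\talice\n' > sample.tsv

# Byte length of the header line including its trailing newline --
# this is the number of bytes to drop from the start of the content
head -n 1 sample.tsv | wc -c

# Verify: skipping that many bytes leaves only the data rows
# (tail -c +N starts output at byte N, 1-indexed)
offset=$(head -n 1 sample.tsv | wc -c)
tail -c "+$((offset + 1))" sample.tsv
```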
My best suggestion is to use an ExecuteScript
processor. This processor allows you to write custom code in a variety of languages (Groovy, Python, Ruby, Lua, JS) and have it execute on the flowfile. Using a Groovy script like the one below, you could remove the first line and copy the remainder in a streaming fashion so the heap does not get unnecessarily taxed.
I tested this with 1MB files and it took about 1.06 seconds for each flowfile (MacBook Pro 2015, 16 GB RAM, OS X 10.11.6). On a better machine you'll obviously get better throughput, and you can scale that up to your larger files.
import org.apache.nifi.processor.io.StreamCallback

def flowfile = session.get()
if (!flowfile) return
try {
    // Read from the current flowfile content and write to the new content
    flowfile = session.write(flowfile, { inputStream, outputStream ->
        def bufferedReader = new BufferedReader(new InputStreamReader(inputStream))
        // Read and discard the first (header) line
        def ignoredFirstLine = bufferedReader.readLine()
        def bufferedWriter = new BufferedWriter(new OutputStreamWriter(outputStream))
        def line
        int i = 0
        // Copy each remaining line to the output stream
        while ((line = bufferedReader.readLine()) != null) {
            bufferedWriter.write(line)
            bufferedWriter.newLine()
            i++
        }
        // By default, INFO doesn't show in the logs and WARN will appear in the processor bulletins
        log.warn("Wrote ${i} lines to output")
        bufferedReader.close()
        bufferedWriter.close()
    } as StreamCallback)
    session.transfer(flowfile, REL_SUCCESS)
} catch (Exception e) {
    // ComponentLog expects a message first; pass the exception as the cause
    log.error('Failed to strip the first line', e)
    session.transfer(flowfile, REL_FAILURE)
}
One side note: in general, a good practice for NiFi is to split giant text files into smaller component flowfiles (using something like SplitText) when possible, to get the benefits of parallel processing. If the 1GB input was video, this wouldn't be applicable, but as you mentioned TSV, I think it's likely that splitting the initial flowfile into smaller pieces and operating on them in parallel (or even sending them out to other nodes in the cluster for load balancing) may help your performance here.
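As a rough command-line analogue of that splitting step (sample data invented; the real processor keeps everything inside NiFi):

```shell
printf 'h\n1\n2\n3\n4\n' > big.tsv
# Split into 2-line chunks, similar to SplitText splitting every N lines;
# suffixes aa, ab, ac are how split(1) names the pieces
split -l 2 big.tsv chunk_
head chunk_*
```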
Edit:
I realized I did not answer your original question -- how to get the content of a flowfile into the ExecuteStreamCommand processor command-line invocation. If you wanted to operate on the value of an attribute, you could reference the attribute value with the Expression Language syntax ${attribute_name} in the Arguments field. However, as the content is not referenceable from the EL, and you don't want to destroy the heap by moving the 1GB content into an attribute, the best solution there would be to write the contents out to a file using PutFile, run the sed command against the resulting file and write the output to another file, and then use GetFile to read those contents back into a flowfile in NiFi.
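Sketched at the shell, that round trip would look something like this (paths and sample data are invented):

```shell
# 1. PutFile writes the flow file content out to disk...
printf 'id\tname\n1\talice\n' > /tmp/flowfile_content.tsv
# 2. ...sed strips the header, writing to a NEW file (never redirect
#    back onto the input file, or it gets truncated)...
sed '1d' /tmp/flowfile_content.tsv > /tmp/flowfile_noheader.tsv
# 3. ...and GetFile re-ingests the result as a new flow file.
cat /tmp/flowfile_noheader.tsv
```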
Edit 2:
Here is a template which demonstrates using ExecuteStreamCommand with both rev and sed against flowfile content and putting the output into the content of the new flowfile. You can run the flow and monitor logs/nifi-app.log to see the output, or use the data provenance query to examine the modification that each processor performs.
Since you want to remove the header from your file, I think the StripHeader processor would be a better option.
Ankit
If you need to process the data any further, I would suggest taking a look at the CSVReader for record processing. Very powerful.
[Screenshot: CSVReader properties, including "Treat First Line as Header"]
The property "Treat First Line as Header" in combination with the "Ignore CSV Header Column Names" allows you to deal with the first line.