I am having an issue reading data from Azure Blob Storage via Spark Streaming.

JavaDStream<String> lines = ssc.textFileStream("hdfs://ip:8020/directory");

Code like the above works for HDFS, but it cannot read files from an Azure blob:

https://blobstorage.blob.core.windows.net/containerid/folder1/

The path above is what the Azure portal shows, but it doesn't work. Am I missing something, and how can I access the container?

I know Event Hubs are the ideal choice for streaming data, but my current situation demands using storage rather than queues.
In order to access resources in Azure Blob Storage, you need to add the hadoop-azure.jar and azure-storage.jar files to your spark-submit command when submitting the job. The same applies if you are using Docker or installing the application on a cluster: the jars must be available to every node that runs the job.
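As a sketch, the submit command might look like the following. The class name, jar paths, and version numbers are placeholders; match them to your Hadoop distribution:

```shell
# Hypothetical example: ship the Azure connector jars alongside the job.
# Replace versions and paths with the ones matching your Hadoop version.
spark-submit \
  --class com.example.BlobStreamingJob \
  --master yarn \
  --jars /opt/jars/hadoop-azure-2.7.3.jar,/opt/jars/azure-storage-2.0.0.jar \
  blob-streaming-job.jar
```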
In order to read data from blob storage, two things need to be done. First, you need to tell Spark which native file system implementation to use in the underlying Hadoop configuration. This also means the hadoop-azure JAR must be available on your classpath (note there may be runtime requirements for additional JARs from the Hadoop family):
import org.apache.hadoop.conf.Configuration;
import org.apache.spark.api.java.JavaSparkContext;

JavaSparkContext ct = new JavaSparkContext();
Configuration config = ct.hadoopConfiguration();
// Register the native Azure file system implementation for the wasb:// scheme.
config.set("fs.azure", "org.apache.hadoop.fs.azure.NativeAzureFileSystem");
// Storage account key; replace "youraccount" and "yourkey" with your own values.
config.set("fs.azure.account.key.youraccount.blob.core.windows.net", "yourkey");
Now, open the stream using the wasb:// prefix (the [s] denotes an optional secure connection over HTTPS):
ssc.textFileStream("wasb[s]://<BlobStorageContainerName>@<StorageAccountName>.blob.core.windows.net/<path>");
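To make the URI shape concrete, here is a small self-contained sketch that assembles a wasb URI from the container, account, and path pieces. The container, account, and path names are made up for illustration:

```java
public class WasbUriExample {
    // Builds a wasb:// (or wasbs:// when secure) URI for Azure Blob Storage.
    static String wasbUri(String container, String account, String path, boolean secure) {
        String scheme = secure ? "wasbs" : "wasb";
        return scheme + "://" + container + "@" + account + ".blob.core.windows.net/" + path;
    }

    public static void main(String[] args) {
        // Hypothetical names; substitute your own container, account, and path.
        String uri = wasbUri("mycontainer", "myaccount", "folder1/", true);
        System.out.println(uri);
        // prints: wasbs://mycontainer@myaccount.blob.core.windows.net/folder1/
        // Pass the result straight to ssc.textFileStream(uri).
    }
}
```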
It goes without saying that the location issuing the query needs the proper permissions on the blob storage account.
As a supplement, there is a very helpful tutorial about HDFS-compatible Azure Blob storage with Hadoop; please see https://azure.microsoft.com/en-us/documentation/articles/hdinsight-hadoop-use-blob-storage.
Meanwhile, there is an official sample on GitHub for Spark Streaming on Azure. The sample is written in Scala, but I think it is still helpful for you.