Writing to Google Cloud Storage from PubSub using Cloud Dataflow using DoFn

I am trying to write Google PubSub messages to Google Cloud Storage using Google Cloud Dataflow. I know that TextIO/AvroIO do not support streaming pipelines. However, in a comment on [1] the author says it is possible to write to GCS from a ParDo/DoFn in a streaming pipeline. I constructed a pipeline by following their article as closely as I could.

I was aiming for this behaviour:

  • Messages written out in batches of up to 100 to objects in GCS (one per window pane), under a path that corresponds to the time the message was published: dataflow-requests/[isodate-time]/[paneIndex].

I get different results:

  • There is only a single pane in every hourly window. I therefore only get one file in every hourly 'bucket' (it's really an object path in GCS). Reducing MAX_EVENTS_IN_FILE to 10 made no difference; there is still only one pane/file.
  • There is only a single message in every GCS object that is written out.
  • The pipeline occasionally raises a CRC error when writing to GCS.

How do I fix these problems and get the behaviour I'm expecting?

Sample log output:

21:30:06.977 writing pane 0 to blob dataflow-requests/2016-04-08T20:59:59.999Z/0
21:30:06.977 writing pane 0 to blob dataflow-requests/2016-04-08T20:59:59.999Z/0
21:30:07.773 successfully wrote pane 0 to blob dataflow-requests/2016-04-08T20:59:59.999Z/0
21:30:07.846 successfully wrote pane 0 to blob dataflow-requests/2016-04-08T20:59:59.999Z/0
21:30:07.847 writing pane 0 to blob dataflow-requests/2016-04-08T20:59:59.999Z/0

Here is my code:

package com.example.dataflow;

import com.google.cloud.dataflow.sdk.Pipeline;
import com.google.cloud.dataflow.sdk.io.PubsubIO;
import com.google.cloud.dataflow.sdk.options.DataflowPipelineOptions;
import com.google.cloud.dataflow.sdk.options.PipelineOptions;
import com.google.cloud.dataflow.sdk.options.PipelineOptionsFactory;
import com.google.cloud.dataflow.sdk.transforms.DoFn;
import com.google.cloud.dataflow.sdk.transforms.ParDo;
import com.google.cloud.dataflow.sdk.transforms.windowing.*;
import com.google.cloud.dataflow.sdk.values.PCollection;
import com.google.gcloud.storage.BlobId;
import com.google.gcloud.storage.BlobInfo;
import com.google.gcloud.storage.Storage;
import com.google.gcloud.storage.StorageOptions;
import org.joda.time.Duration;
import org.joda.time.format.ISODateTimeFormat;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.io.IOException;

public class PubSubGcsSSCCEPipepline {

    private static final Logger LOG = LoggerFactory.getLogger(PubSubGcsSSCCEPipepline.class);

    public static final String BUCKET_PATH = "dataflow-requests";

    public static final String BUCKET_NAME = "myBucketName";

    public static final Duration ONE_DAY = Duration.standardDays(1);
    public static final Duration ONE_HOUR = Duration.standardHours(1);
    public static final Duration TEN_SECONDS = Duration.standardSeconds(10);

    public static final int MAX_EVENTS_IN_FILE = 100;

    public static final String PUBSUB_SUBSCRIPTION = "projects/myProjectId/subscriptions/requests-dataflow";

    private static class DoGCSWrite extends DoFn<String, Void>
        implements DoFn.RequiresWindowAccess {
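        // The gcloud-java Storage client is not serializable, so it is kept transient and
        // re-created on the worker after deserialization (see readObject below).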

        public transient Storage storage;

        { init(); }

        public void init() { storage = StorageOptions.defaultInstance().service(); }

        private void readObject(java.io.ObjectInputStream in)
                throws IOException, ClassNotFoundException {
            init();
        }

        @Override
        public void processElement(ProcessContext c) throws Exception {
            String isoDate = ISODateTimeFormat.dateTime().print(c.window().maxTimestamp());
            String blobName = String.format("%s/%s/%s", BUCKET_PATH, isoDate, c.pane().getIndex());

            BlobId blobId = BlobId.of(BUCKET_NAME, blobName);
            LOG.info("writing pane {} to blob {}", c.pane().getIndex(), blobName);
            storage.create(BlobInfo.builder(blobId).contentType("text/plain").build(), c.element().getBytes());
            LOG.info("sucessfully write pane {} to blob {}", c.pane().getIndex(), blobName);
        }
    }

    public static void main(String[] args) {
        PipelineOptions options = PipelineOptionsFactory.fromArgs(args).withValidation().create();
        options.as(DataflowPipelineOptions.class).setStreaming(true);
        Pipeline p = Pipeline.create(options);

        PubsubIO.Read.Bound<String> readFromPubsub = PubsubIO.Read.named("ReadFromPubsub")
                .subscription(PUBSUB_SUBSCRIPTION);

        PCollection<String> streamData = p.apply(readFromPubsub);

        PCollection<String> windows = streamData.apply(Window.<String>into(FixedWindows.of(ONE_HOUR))
                .withAllowedLateness(ONE_DAY)
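                // Fire an early pane once MAX_EVENTS_IN_FILE elements arrive; after the watermark,
                // fire late panes on the same count or 10 seconds after the first late element,
                // whichever comes first.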
                .triggering(AfterWatermark.pastEndOfWindow()
                        .withEarlyFirings(AfterPane.elementCountAtLeast(MAX_EVENTS_IN_FILE))
                        .withLateFirings(AfterFirst.of(AfterPane.elementCountAtLeast(MAX_EVENTS_IN_FILE),
                                AfterProcessingTime.pastFirstElementInPane()
                                        .plusDelayOf(TEN_SECONDS))))
                .discardingFiredPanes());

        windows.apply(ParDo.of(new DoGCSWrite()));

        p.run();
    }


}

[1] https://labs.spotify.com/2016/03/10/spotifys-event-delivery-the-road-to-the-cloud-part-iii/

Thanks to Sam McVeety for the solution. Here is the corrected code for anyone reading:

package com.example.dataflow;

import com.google.cloud.dataflow.sdk.Pipeline;
import com.google.cloud.dataflow.sdk.io.PubsubIO;
import com.google.cloud.dataflow.sdk.options.DataflowPipelineOptions;
import com.google.cloud.dataflow.sdk.options.PipelineOptions;
import com.google.cloud.dataflow.sdk.options.PipelineOptionsFactory;
import com.google.cloud.dataflow.sdk.transforms.*;
import com.google.cloud.dataflow.sdk.transforms.windowing.*;
import com.google.cloud.dataflow.sdk.values.KV;
import com.google.cloud.dataflow.sdk.values.PCollection;
import com.google.gcloud.WriteChannel;
import com.google.gcloud.storage.BlobId;
import com.google.gcloud.storage.BlobInfo;
import com.google.gcloud.storage.Storage;
import com.google.gcloud.storage.StorageOptions;
import org.joda.time.Duration;
import org.joda.time.format.ISODateTimeFormat;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.Iterator;

public class PubSubGcsSSCCEPipepline {

    private static final Logger LOG = LoggerFactory.getLogger(PubSubGcsSSCCEPipepline.class);

    public static final String BUCKET_PATH = "dataflow-requests";

    public static final String BUCKET_NAME = "myBucketName";

    public static final Duration ONE_DAY = Duration.standardDays(1);
    public static final Duration ONE_HOUR = Duration.standardHours(1);
    public static final Duration TEN_SECONDS = Duration.standardSeconds(10);

    public static final int MAX_EVENTS_IN_FILE = 100;

    public static final String PUBSUB_SUBSCRIPTION = "projects/myProjectId/subscriptions/requests-dataflow";

    private static class DoGCSWrite extends DoFn<Iterable<String>, Void>
        implements DoFn.RequiresWindowAccess {

        public transient Storage storage;

        { init(); }

        public void init() { storage = StorageOptions.defaultInstance().service(); }

        private void readObject(java.io.ObjectInputStream in)
                throws IOException, ClassNotFoundException {
            init();
        }

        @Override
        public void processElement(ProcessContext c) throws Exception {
            String isoDate = ISODateTimeFormat.dateTime().print(c.window().maxTimestamp());
            long paneIndex = c.pane().getIndex();
            String blobName = String.format("%s/%s/%s", BUCKET_PATH, isoDate, paneIndex);

            BlobId blobId = BlobId.of(BUCKET_NAME, blobName);

            LOG.info("writing pane {} to blob {}", paneIndex, blobName);
            WriteChannel writer = storage.writer(BlobInfo.builder(blobId).contentType("text/plain").build());
            LOG.info("blob stream opened for pane {} to blob {} ", paneIndex, blobName);
            int i=0;
            for (Iterator<String> it = c.element().iterator(); it.hasNext();) {
                i++;
                writer.write(ByteBuffer.wrap(it.next().getBytes()));
                LOG.info("wrote {} elements to blob {}", i, blobName);
            }
            writer.close();
            LOG.info("sucessfully write pane {} to blob {}", paneIndex, blobName);
        }
    }

    public static void main(String[] args) {
        PipelineOptions options = PipelineOptionsFactory.fromArgs(args).withValidation().create();
        options.as(DataflowPipelineOptions.class).setStreaming(true);
        Pipeline p = Pipeline.create(options);

        PubsubIO.Read.Bound<String> readFromPubsub = PubsubIO.Read.named("ReadFromPubsub")
                .subscription(PUBSUB_SUBSCRIPTION);

        PCollection<String> streamData = p.apply(readFromPubsub);
        PCollection<KV<String, String>> keyedStream =
                streamData.apply(WithKeys.of(new SerializableFunction<String, String>() {
                    public String apply(String s) { return "constant"; } }));

        PCollection<KV<String, Iterable<String>>> keyedWindows = keyedStream
                .apply(Window.<KV<String, String>>into(FixedWindows.of(ONE_HOUR))
                        .withAllowedLateness(ONE_DAY)
                        .triggering(AfterWatermark.pastEndOfWindow()
                                .withEarlyFirings(AfterPane.elementCountAtLeast(MAX_EVENTS_IN_FILE))
                                .withLateFirings(AfterFirst.of(AfterPane.elementCountAtLeast(MAX_EVENTS_IN_FILE),
                                        AfterProcessingTime.pastFirstElementInPane()
                                                .plusDelayOf(TEN_SECONDS))))
                        .discardingFiredPanes())
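                // Grouping on the constant key materializes each fired pane as one Iterable<String>,
                // so DoGCSWrite receives a whole pane per element.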
                .apply(GroupByKey.create());


        PCollection<Iterable<String>> windows = keyedWindows
                .apply(Values.<Iterable<String>>create());


        windows.apply(ParDo.of(new DoGCSWrite()));

        p.run();
    }

}
Asked Apr 08 '16 by Frank Wilson

Can Pubsub write to BigQuery?

The Pub/Sub Subscription to BigQuery template is a streaming pipeline that reads JSON-formatted messages from a Pub/Sub subscription and writes them to a BigQuery table. You can use the template as a quick solution to move Pub/Sub data to BigQuery.

How do I connect my Pub/Sub to BigQuery?

Run the pipeline Run a streaming pipeline using the Google-provided Pub/Sub Topic to BigQuery template. The pipeline gets incoming data from the Pub/Sub topic and outputs the data to your BigQuery dataset. In the Google Cloud console, go to the Dataflow Jobs page. Click Create job from template.


2 Answers

There's a gotcha here, which is that you'll need a GroupByKey in order for the panes to be aggregated appropriately. The Spotify example references this as "Materialization of panes is done in "Aggregate Events" transform which is nothing else than a GroupByKey transform", but it's a subtle point. You'll need to provide a key in order to do this, and in your case, it appears a constant value will work.

  PCollection<String> streamData = p.apply(readFromPubsub);
  PCollection<KV<String, String>> keyedStream =
        streamData.apply(WithKeys.of(new SerializableFunction<String, String>() {
           public String apply(String s) { return "constant"; } }));

At this point, you can apply your windowing function, and then a final GroupByKey to get the desired behavior:

  PCollection<KV<String, Iterable<String>>> keyedWindows = keyedStream.apply(...)
       .apply(GroupByKey.create());
  PCollection<Iterable<String>> windows = keyedWindows
       .apply(Values.<Iterable<String>>create());

Now the elements in processElement will be Iterable<String>, with size 100 or more.

We've filed https://issues.apache.org/jira/browse/BEAM-184 to make this behavior clearer.

Answered by Sam McVeety


As of Beam 2.0, TextIO/AvroIO do support writing unbounded collections - see the documentation; in particular, you have to specify withWindowedWrites().
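For reference, a minimal sketch of that approach against the Beam 2.x SDK (the package names, gs:// path, and single-shard setting below are illustrative assumptions, not taken from the pipeline above):

package com.example.dataflow;

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.windowing.AfterPane;
import org.apache.beam.sdk.transforms.windowing.AfterProcessingTime;
import org.apache.beam.sdk.transforms.windowing.AfterWatermark;
import org.apache.beam.sdk.transforms.windowing.FixedWindows;
import org.apache.beam.sdk.transforms.windowing.Window;
import org.joda.time.Duration;

public class WindowedTextIOSketch {
    public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).withValidation().create());

        p.apply("ReadFromPubsub", PubsubIO.readStrings()
                .fromSubscription("projects/myProjectId/subscriptions/requests-dataflow"))
         // Windowing/triggering in the same spirit as the question, simplified.
         .apply(Window.<String>into(FixedWindows.of(Duration.standardHours(1)))
                .triggering(AfterWatermark.pastEndOfWindow()
                        .withEarlyFirings(AfterPane.elementCountAtLeast(100))
                        .withLateFirings(AfterProcessingTime.pastFirstElementInPane()
                                .plusDelayOf(Duration.standardSeconds(10))))
                .withAllowedLateness(Duration.standardDays(1))
                .discardingFiredPanes())
         // Windowed writes shard each window/pane under the given prefix; an unbounded
         // input requires an explicit shard count.
         .apply("WriteToGcs", TextIO.write()
                .to("gs://myBucketName/dataflow-requests/part")
                .withWindowedWrites()
                .withNumShards(1));

        p.run();
    }
}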

Answered by jkff