
How do I configure a Cloud Data Fusion pipeline to run against an existing Hadoop cluster?

Cloud Data Fusion creates a new Dataproc cluster for every pipeline run. I already have a Dataproc cluster set up that runs 24x7, and I would like to use that cluster to run my pipelines.

asked Dec 31 '22 by Sree


1 Answer

This can be achieved by setting up a new compute profile that uses the Remote Hadoop provisioner, under System Admin -> Configuration -> System Compute Profiles -> Create New Profile. This feature ("Execution environment selection") is available only in the Enterprise edition of Cloud Data Fusion.

Here are the detailed steps.

  1. SSH Setup on Dataproc Cluster

    a. Navigate to the Dataproc console on Google Cloud Platform. Go to "Cluster details" by clicking on your Dataproc cluster name.

    b. Under "VM Instances", click on the "SSH" button to connect to the Dataproc VM.

    c. Follow the steps here to create a new SSH key, format the public key file to enforce an expiration time, and add the newly created SSH public key at project or instance level.

    d. If SSH is set up successfully, you should be able to see the SSH key you just added in the Metadata section of your Compute Engine console, as well as in the authorized_keys file on your Dataproc VM. (A command-line sketch of this key setup is included after these steps.)

  2. Create a customized system compute profile for your Data Fusion instance

    a. Navigate to your Data Fusion instance console by clicking on "View Instance".

    b. Click on "System Admin" in the top right corner.

    c. Under the "Configuration" tab, expand "System Compute Profiles". Click on "Create New Profile", and choose "Remote Hadoop Provisioner" on the next page.

    d. Fill out the general information for the profile.

    e. You can find the SSH host IP on the "VM instance details" page under Compute Engine (or look it up with gcloud; see the sketch after these steps).

    f. Copy the SSH private key created in step 1, and paste it into the "SSH Private Key" field.

    g. Click "Create" to create the profile.

  3. Configure your Data Fusion pipeline to use the customized profile

    a. Open the pipeline you want to run against the remote Hadoop cluster.

    b. Click on Configure -> Compute config and choose the Remote Hadoop provisioner profile you just created. (A sketch of starting the pipeline with this profile through the REST API is also included below.)
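For step 1, here is a minimal Python sketch of the key setup, assuming ssh-keygen and an authenticated gcloud CLI are available locally. The user name, key file, VM name and zone are placeholders, and the expiring-key metadata format should be double-checked against the current Compute Engine documentation:

```python
import json
import subprocess
from datetime import datetime, timedelta, timezone

USERNAME = "cdf-ssh-user"          # placeholder Linux user for the SSH key
KEY_FILE = "cdf-dataproc-key"      # private key path; public key is KEY_FILE + ".pub"
VM_NAME = "my-dataproc-cluster-m"  # placeholder Dataproc master VM name
ZONE = "us-central1-a"             # placeholder zone

# 1. Create the key pair (no passphrase here, for illustration only).
subprocess.run(
    ["ssh-keygen", "-t", "rsa", "-f", KEY_FILE, "-C", USERNAME, "-N", ""],
    check=True,
)

# 2. Format the public key the way Compute Engine metadata expects it,
#    including an expireOn timestamp so the key stops working after 30 days.
with open(KEY_FILE + ".pub") as f:
    key_type, key_value = f.read().split()[:2]
expire = (datetime.now(timezone.utc) + timedelta(days=30)).strftime("%Y-%m-%dT%H:%M:%S+0000")
entry = f"{USERNAME}:{key_type} {key_value} google-ssh " + json.dumps(
    {"userName": USERNAME, "expireOn": expire}
)
with open("formatted_ssh_keys.txt", "w") as f:
    f.write(entry + "\n")

# 3. Add the key at the instance level. Note that this replaces any existing
#    instance-level "ssh-keys" value, so merge with existing keys if you have them.
#    Use `gcloud compute project-info add-metadata` to add it at the project level instead.
subprocess.run(
    ["gcloud", "compute", "instances", "add-metadata", VM_NAME, "--zone", ZONE,
     "--metadata-from-file", "ssh-keys=formatted_ssh_keys.txt"],
    check=True,
)
```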
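For step 2(e), the SSH host IP can also be looked up from the command line rather than the Compute Engine UI. A small sketch, again with placeholder instance and zone names:

```python
import subprocess

VM_NAME = "my-dataproc-cluster-m"  # placeholder Dataproc master VM name
ZONE = "us-central1-a"             # placeholder zone

# Read the first network interface's internal IP from the instance description.
ip = subprocess.run(
    ["gcloud", "compute", "instances", "describe", VM_NAME, "--zone", ZONE,
     "--format=get(networkInterfaces[0].networkIP)"],
    check=True, capture_output=True, text=True,
).stdout.strip()

print("SSH host to use in the compute profile:", ip)
```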
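For step 3, the profile can also be selected per run by passing it as a runtime argument when starting the pipeline through the CDAP REST API, instead of setting it in the Studio UI. The sketch below leans on several assumptions you would need to adapt: the instance API endpoint, the pipeline and profile names, and the system.profile.name runtime argument that CDAP uses for profile selection; it also presumes a deployed batch pipeline, which runs as the DataPipelineWorkflow program:

```python
import subprocess
import requests

# All of these are placeholders; the endpoint pattern is the Data Fusion
# instance API endpoint shown on the instance details page.
CDF_API = "https://<instance>-<project>-dot-<region>.datafusion.googleusercontent.com/api"
PIPELINE = "my_pipeline"                  # name of the deployed pipeline
PROFILE = "SYSTEM:remote-hadoop-profile"  # system compute profile created in step 2

# Reuse the caller's gcloud credentials for the Authorization header.
token = subprocess.run(
    ["gcloud", "auth", "print-access-token"],
    check=True, capture_output=True, text=True,
).stdout.strip()

# The JSON body is passed through to the run as runtime arguments, including
# the profile selection (assumed key: system.profile.name).
resp = requests.post(
    f"{CDF_API}/v3/namespaces/default/apps/{PIPELINE}/workflows/DataPipelineWorkflow/start",
    headers={"Authorization": f"Bearer {token}"},
    json={"system.profile.name": PROFILE},
)
resp.raise_for_status()
print("Pipeline start accepted:", resp.status_code)
```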

answered Jun 22 '23 by Sree