 

How to upload file to AWS S3 using AWS AppSync


I am following this tutorial in the AWS AppSync Docs.

It states:

With AWS AppSync you can model these as GraphQL types. If any of your mutations have a variable with bucket, key, region, mimeType and localUri fields, the SDK will upload the file to Amazon S3 for you.

However, I cannot get my file to upload to my S3 bucket. I understand that the tutorial is missing a lot of details; more specifically, it does not say that NewPostMutation.js needs to be changed.

I changed it in the following way:

import gql from 'graphql-tag';

export default gql`
mutation AddPostMutation($author: String!, $title: String!, $url: String!, $content: String!, $file: S3ObjectInput) {
    addPost(
        author: $author
        title: $title
        url: $url
        content: $content
        file: $file
    ) {
        __typename
        id
        author
        title
        url
        content
        version
    }
}
`

Yet, even after I have implemented these changes, the file did not get uploaded...

asked Jan 29 '18 06:01 by SaidAkh

2 Answers

There are a few moving parts under the hood that you need to have in place before this "just works" (TM). First of all, you need to make sure you have an appropriate input and type for an S3 object defined in your GraphQL schema:

enum Visibility {
    public
    private
}

input S3ObjectInput {
    bucket: String!
    region: String!
    localUri: String
    visibility: Visibility
    key: String
    mimeType: String
}

type S3Object {
    bucket: String!
    region: String!
    key: String!
}

The S3ObjectInput type, of course, is for use when uploading a new file - either by way of creating or updating a model within which said S3 object metadata is embedded. It can be handled in the request resolver of a mutation via the following:

{
    "version": "2017-02-28",
    "operation": "PutItem",
    "key": {
        "id": $util.dynamodb.toDynamoDBJson($ctx.args.input.id)
    },

    ## Convert the input to DynamoDB attribute values, then replace the
    ## file field with a DynamoDB S3 Object built from its metadata.
    #set( $attribs = $util.dynamodb.toMapValues($ctx.args.input) )
    #set( $file = $ctx.args.input.file )
    #set( $attribs.file = $util.dynamodb.toS3Object($file.key, $file.bucket, $file.region, $file.version) )

    "attributeValues": $util.toJson($attribs)
}

This makes the assumption that the S3 file object is a child field of a model attached to a DynamoDB data source. Note that the call to $util.dynamodb.toS3Object() sets up the complex S3 object file, which is a field of the model with a type of S3ObjectInput. Setting up the request resolver in this way handles the upload of a file to S3 (when all the credentials are set up correctly - we'll touch on that in a moment), but it doesn't address how to get the S3Object back. This is where a field-level resolver attached to a local data source becomes necessary. In essence, you need to create a local data source in AppSync and connect it to the model's file field in the schema with the following request and response resolvers:

## Request Resolver ##
{
    "version": "2017-02-28",
    "payload": {}
}

## Response Resolver ##
$util.toJson($util.dynamodb.fromS3ObjectJson($context.source.file))

This resolver simply tells AppSync that we want to take the JSON string that is stored in DynamoDB for the file field of the model and parse it into an S3Object. This way, when you query the model, instead of returning the string stored in the file field, you get an object containing the bucket, region, and key properties that you can use to build a URL to access the S3 object (either directly via S3 or using a CDN - that really depends on your configuration).
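For example, here's a small sketch of turning that S3Object into a URL. The virtual-hosted-style format below is an assumption about your setup - a CDN domain or presigned URLs would look different:

// Sketch: build a URL from the S3Object returned by the file field.
// Assumes direct, virtual-hosted-style access to the bucket; swap in
// your CDN domain or a presigned URL if the bucket isn't public.
const fileUrl = ({ bucket, region, key }) =>
    `https://${bucket}.s3.${region}.amazonaws.com/` +
    key.split('/').map(encodeURIComponent).join('/');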

Do make sure you have credentials set up for complex objects, however (told you I'd get back to this). I'll use a React example to illustrate this - when defining your AppSync parameters (endpoint, auth, etc.), there is an additional property called complexObjectsCredentials that needs to be defined to tell the client what AWS credentials to use to handle S3 uploads, e.g.:

const client = new AWSAppSyncClient({
    url: AppSync.graphqlEndpoint,
    region: AppSync.region,
    auth: {
        type: AUTH_TYPE.AWS_IAM,
        credentials: () => Auth.currentCredentials()
    },
    complexObjectsCredentials: () => Auth.currentCredentials(),
});
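For reference, here's a minimal sketch of invoking the question's AddPostMutation with a complex object through that client - the bucket name, key scheme, and localUri source are assumptions for illustration:

// Sketch: the file variable carries the S3ObjectInput fields; the SDK
// detects it and uploads the file at localUri to bucket/key for you.
client.mutate({
    mutation: AddPostMutation,
    variables: {
        author: 'some-author',
        title: 'My post',
        url: 'https://example.com/my-post',
        content: 'Hello world',
        file: {
            bucket: 'my-uploads-bucket',       // assumption: your bucket
            region: 'us-east-1',               // assumption: your region
            key: `uploads/${Date.now()}.jpg`,  // assumption: key scheme
            mimeType: 'image/jpeg',
            localUri: pickedFile.uri,          // e.g. from an image picker
        },
    },
});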

Assuming all of these things are in place, S3 uploads and downloads via AppSync should work.

answered Sep 21 '22 18:09 by hatboyzero


Just to add to the discussion: for mobile clients, Amplify (or the AWS console, if you generate code from there) will encapsulate mutation arguments into an input object. The clients won't auto-upload if that encapsulation exists, so you can modify the mutation directly in the AWS console so that file: S3ObjectInput is among the calling parameters. This was still happening the last time I tested (Dec 2018) following the docs.

You would change to this calling structure:

type Mutation {
    createRoom(
        id: ID!,
        name: String!,
        file: S3ObjectInput,
        roomTourId: ID
    ): Room
}

Instead of autogenerated calls like:

type Mutation {
    createRoom(input: CreateRoomInput!): Room
}

input CreateRoomInput {
    id: ID
    name: String!
    file: S3ObjectInput
}

Once you make this change, both iOS and Android will happily upload your content, provided you do what @hatboyzero has outlined.
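To illustrate, a client-side mutation document matching the flattened createRoom call might look like this (a sketch - the selected Room fields are assumptions about your schema):

import gql from 'graphql-tag';

// Sketch: file is a top-level argument rather than being wrapped in an
// input object, so the mobile clients can detect the S3ObjectInput and
// auto-upload the referenced file.
export default gql`
mutation CreateRoomMutation($id: ID!, $name: String!, $file: S3ObjectInput, $roomTourId: ID) {
    createRoom(
        id: $id
        name: $name
        file: $file
        roomTourId: $roomTourId
    ) {
        __typename
        id
        name
    }
}
`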

[Edit] I did a bit of research; supposedly this has been fixed in 2.7.3 of the Android SDK: https://github.com/awslabs/aws-mobile-appsync-sdk-android/issues/11. They likely addressed iOS as well, but I didn't check.

answered Sep 19 '22 18:09 by Bruce