How do I upload a CSV file from my local machine to my AWS S3 bucket and read that CSV file?
bucket = aws_connection.get_bucket('mybucket')
# with this I am able to access the bucket and list its folders
folders = bucket.list("", "/")
for folder in folders:
    print folder.name
Now I want to upload a CSV file into my bucket and read that file.
With Spark you can read a CSV file from Amazon S3 into a DataFrame using spark.read.csv("path") or spark.read.format("csv").load("path"); both methods take the file path to read as an argument.
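For the Spark route, here is a minimal PySpark sketch; the bucket name and key are placeholders, and it assumes your cluster already has the S3A connector (hadoop-aws) and AWS credentials configured:

from pyspark.sql import SparkSession

# assumes the S3A connector and AWS credentials are already configured;
# bucket and key names below are placeholders
spark = SparkSession.builder.appName("read-csv-from-s3").getOrCreate()

df = spark.read.csv("s3a://mybucket/myfile.csv", header=True, inferSchema=True)
# equivalent long form:
# df = spark.read.format("csv").option("header", "true").load("s3a://mybucket/myfile.csv")

df.show(5)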
So you're using boto2; I would suggest moving to boto3. Please see some simple examples below:
boto2
upload example
import boto
from boto.s3.key import Key

# connect using credentials from your environment/boto config
aws_connection = boto.connect_s3()
bucket = aws_connection.get_bucket('mybucket')

k = Key(bucket)
k.key = 'myfile'
k.set_contents_from_filename('/tmp/hello.txt')
download example
import boto
from boto.s3.key import Key

# connect using credentials from your environment/boto config
aws_connection = boto.connect_s3()
bucket = aws_connection.get_bucket('mybucket')

k = Key(bucket)
k.key = 'myfile'
k.get_contents_to_filename('/tmp/hello.txt')
boto3
upload example
import boto3

s3 = boto3.resource('s3')
# write the local file's bytes to the key 'hello.txt' in 'mybucket'
s3.Object('mybucket', 'hello.txt').put(Body=open('/tmp/hello.txt', 'rb'))
Or, more simply, using the managed transfer API (which handles multipart uploads for large files automatically):
import boto3
s3 = boto3.resource('s3')
s3.meta.client.upload_file('/tmp/hello.txt', 'mybucket', 'hello.txt')
download example
import boto3

s3 = boto3.resource('s3')
s3.meta.client.download_file('mybucket', 'hello.txt', '/tmp/hello.txt')

# verify the download by printing the file contents
print(open('/tmp/hello.txt').read())
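Since the file in question is a CSV, you can also read it straight from S3 without writing a temporary file. A minimal sketch using boto3 and the stdlib csv module, where 'mybucket' and 'myfile.csv' are placeholders:

import csv
import io

import boto3

s3 = boto3.client('s3')

# fetch the object and parse its body as CSV in memory;
# bucket and key names are placeholders
obj = s3.get_object(Bucket='mybucket', Key='myfile.csv')
lines = io.StringIO(obj['Body'].read().decode('utf-8'))
for row in csv.reader(lines):
    print(row)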