
Differences between Amazon S3 and S3n in Hadoop

People also ask

What is the difference between S3 and S3n?

s3 is a block-based overlay on top of Amazon S3, whereas s3n/s3a are not; they are object-based. Regarding size limits, s3n supports objects up to 5 GB, while s3a supports objects up to 5 TB and offers higher performance.

What is S3n and s3a?

S3a and s3n are object-based overlays on top of Amazon S3, while s3 is a block-based overlay on top of Amazon S3. s3n supports objects up to 5 gigabytes in size; s3a supports objects up to 5 terabytes and is the successor of s3n.

What is S3n AWS?

S3N was a Hadoop filesystem client that could read and write data stored in Amazon S3, using URLs with the scheme s3n://. Hadoop's S3N client for Amazon S3 has been superseded by the S3A connector. Please upgrade to S3A for a supported, higher-performance S3 client.
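
For reference, here is a minimal sketch of what reading through the newer S3A connector looks like from Hadoop's FileSystem API, assuming a Hadoop build with the hadoop-aws module on the classpath; the bucket name, paths, and credential values are placeholders, not anything from the original question.

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class S3AListing {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // S3A credentials; in practice these usually live in core-site.xml
            // or come from the environment / instance profile. Placeholders here.
            conf.set("fs.s3a.access.key", "YOUR_ACCESS_KEY");
            conf.set("fs.s3a.secret.key", "YOUR_SECRET_KEY");

            // s3a:// addresses plain S3 objects, so anything listed here is also
            // visible to other S3 tools (console, aws cli, etc.).
            FileSystem fs = FileSystem.get(URI.create("s3a://my-bucket/"), conf);
            for (FileStatus status : fs.listStatus(new Path("s3a://my-bucket/logs/"))) {
                System.out.println(status.getPath() + "  " + status.getLen() + " bytes");
            }
        }
    }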

Is S3 better than HDFS?

To summarize, S3 and cloud storage provide elasticity, with an order of magnitude better availability and durability and 2X better performance, at 10X lower cost than traditional HDFS data storage clusters. Hadoop and HDFS commoditized big data storage by making it cheap to store and distribute a large amount of data.


The two filesystems for using Amazon S3 are documented in the Hadoop wiki page on Amazon S3:

  • S3 Native FileSystem (URI scheme: s3n)
    A native filesystem for reading and writing regular files on S3. The advantage of this filesystem is that you can access files on S3 that were written with other tools. Conversely, other tools can access files written using Hadoop. The disadvantage is the 5GB limit on file size imposed by S3. For this reason it is not suitable as a replacement for HDFS (which has support for very large files).

  • S3 Block FileSystem (URI scheme: s3)
    A block-based filesystem backed by S3. Files are stored as blocks, just like they are in HDFS. This permits efficient implementation of renames. This filesystem requires you to dedicate a bucket for the filesystem - you should not use an existing bucket containing files, or write other files to the same bucket. The files stored by this filesystem can be larger than 5GB, but they are not interoperable with other S3 tools.

There are two ways that S3 can be used with Hadoop's Map/Reduce, either as a replacement for HDFS using the S3 block filesystem (i.e. using it as a reliable distributed filesystem with support for very large files) or as a convenient repository for data input to and output from MapReduce, using either S3 filesystem. In the second case HDFS is still used for the Map/Reduce phase. [...]

[emphasis mine]

So the difference is mainly in how the 5GB limit is handled (that is the largest object that can be uploaded in a single PUT, even though objects can range in size from 1 byte to 5 terabytes, see How much data can I store?): using the S3 Block FileSystem (URI scheme: s3) works around the 5GB limit and allows files up to 5TB, but it replaces HDFS in turn.
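
To make the distinction concrete, here is a rough sketch using Hadoop's FileSystem API; the two lookups differ only in the URI scheme, but the data behind s3:// (the block filesystem) lives as Hadoop-specific block_* objects in a dedicated bucket, while s3n:// paths map to ordinary, externally readable objects. Bucket names and paths are placeholders, and S3 credentials are assumed to be configured in core-site.xml.

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class SchemeContrast {
        public static void main(String[] args) throws Exception {
            // Credentials (fs.s3n.awsAccessKeyId / fs.s3.awsAccessKeyId and the
            // matching secret keys) are assumed to be set in core-site.xml.
            Configuration conf = new Configuration();

            // Native filesystem: plain objects, interoperable with other S3 tools,
            // but individual files were historically capped at 5 GB.
            FileSystem nativeFs = FileSystem.get(URI.create("s3n://my-bucket/"), conf);
            System.out.println(nativeFs.exists(new Path("s3n://my-bucket/data/input.txt")));

            // Block filesystem: files are split into block_* objects in a bucket
            // dedicated to Hadoop; files larger than 5 GB work, but the layout is
            // not readable by other S3 tools.
            FileSystem blockFs = FileSystem.get(URI.create("s3://my-hadoop-bucket/"), conf);
            System.out.println(blockFs.exists(new Path("s3://my-hadoop-bucket/data/input.txt")));
        }
    }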


I think your main problem was related to having S3 and S3n as two separate connection points for Hadoop. s3n:// means "a regular file, readable from the outside world, at this S3 URL". s3:// refers to an HDFS filesystem mapped into an S3 bucket sitting on AWS's storage cluster. So when you were reading a file from an Amazon storage bucket you had to use S3N, and that's why your problem was resolved. The information added by @Steffen is also great!
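
As an illustration of that usage pattern (job input read straight from a bucket via S3N, output kept on HDFS), here is a driver sketch; the class name, bucket, paths, and credential values are placeholders, and the job falls back to Hadoop's default identity mapper/reducer since no job-specific classes are set.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class S3nInputDriver {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // S3N credentials; normally these live in core-site.xml rather than code.
            conf.set("fs.s3n.awsAccessKeyId", "YOUR_ACCESS_KEY");
            conf.set("fs.s3n.awsSecretAccessKey", "YOUR_SECRET_KEY");

            Job job = Job.getInstance(conf, "read-from-s3n");
            job.setJarByClass(S3nInputDriver.class);
            // Input comes straight from the bucket as ordinary, externally readable objects...
            FileInputFormat.addInputPath(job, new Path("s3n://my-bucket/input/"));
            // ...while the output (and all intermediate data) stays on HDFS.
            FileOutputFormat.setOutputPath(job, new Path("hdfs:///user/hadoop/output"));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }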


Here is an explanation: https://notes.mindprince.in/2014/08/01/difference-between-s3-block-and-s3-native-filesystem-on-hadoop.html

The first S3-backed Hadoop filesystem was introduced in Hadoop 0.10.0 (HADOOP-574). It was called the S3 block filesystem and it was assigned the URI scheme s3://. In this implementation, files are stored as blocks, just like they are in HDFS. The files stored by this filesystem are not interoperable with other S3 tools - what this means is that if you go to the AWS console and try to look for files written by this filesystem, you won't find them - instead you would find files named something like block_-1212312341234512345 etc.

To overcome these limitations, another S3-backed filesystem was introduced in Hadoop 0.18.0 (HADOOP-930). It was called the S3 native filesystem and it was assigned the URI scheme s3n://. This filesystem lets you access files on S3 that were written with other tools... When this filesystem was introduced, S3 had a filesize limit of 5GB and hence this filesystem could only operate with files less than 5GB. In late 2010, Amazon... raised the file size limit from 5GB to 5TB...

Using the S3 block file system is no longer recommended. Various Hadoop-as-a-service providers like Qubole and Amazon EMR go as far as mapping both the s3:// and the s3n:// URIs to the S3 native filesystem to ensure this.

So always use the native file system. The 5 GB limit no longer applies. Sometimes you may have to type s3:// instead of s3n://, but just make sure that any files you create are visible in the bucket explorer in the browser.
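
A quick way to confirm that, sketched below under the same assumptions as before (placeholder bucket and path, credentials in core-site.xml): write a small file through the native filesystem and check that it shows up as a regular object in the bucket.

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class WriteThroughS3n {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();  // credentials assumed in core-site.xml
            FileSystem fs = FileSystem.get(URI.create("s3n://my-bucket/"), conf);

            // Write a small file through the native filesystem...
            Path out = new Path("s3n://my-bucket/reports/summary.txt");
            try (FSDataOutputStream stream = fs.create(out)) {
                stream.writeBytes("hello from hadoop\n");
            }
            // ...it should now appear as a regular object named reports/summary.txt
            // in the S3 console / bucket explorer, unlike the block_* files written
            // by the old s3:// block filesystem.
            System.out.println("exists: " + fs.exists(out));
        }
    }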

Also see http://docs.aws.amazon.com/ElasticMapReduce/latest/ManagementGuide/emr-plan-file-systems.html.

Previously, Amazon EMR used the S3 Native FileSystem with the URI scheme, s3n. While this still works, we recommend that you use the s3 URI scheme for the best performance, security, and reliability.

It also says you can use s3bfs:// to access the old block file system, previously known as s3://.
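
So on EMR the same listing code targets s3:// directly, since EMR maps that scheme to its object-based EMRFS rather than the legacy block filesystem. A short sketch, with a placeholder bucket and credentials assumed to come from the cluster's instance role:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class EmrListing {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();  // on EMR, credentials come from the instance role
            // On EMR, s3:// resolves to EMRFS (object-based), not the legacy block filesystem.
            FileSystem fs = FileSystem.get(URI.create("s3://my-bucket/"), conf);
            for (FileStatus status : fs.listStatus(new Path("s3://my-bucket/data/"))) {
                System.out.println(status.getPath());
            }
            // The legacy block filesystem remains reachable via s3bfs:// if it is ever needed.
        }
    }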