The way I was able to get this to work is as follows:
In the Edit Redirection Rules section of the S3 Console for your domain, add the following rules:
<RoutingRules>
  <RoutingRule>
    <Condition>
      <HttpErrorCodeReturnedEquals>404</HttpErrorCodeReturnedEquals>
    </Condition>
    <Redirect>
      <HostName>yourdomainname.com</HostName>
      <ReplaceKeyPrefixWith>#!/</ReplaceKeyPrefixWith>
    </Redirect>
  </RoutingRule>
</RoutingRules>
This will redirect all paths that result in a 404 Not Found to your root domain with a hash-bang version of the path. So http://yourdomainname.com/posts will redirect to http://yourdomainname.com/#!/posts, provided there is no file at /posts.
To use HTML5 pushState, however, we need to take this request and manually establish the proper history entry based on the hash-bang path. So add this to the top of your index.html file:
<script>
  // Strip the leading "#!" and restore the real path via the History API.
  if (location.hash.indexOf('#!') === 0) {
    history.pushState({}, "entry page", location.hash.substring(2));
  }
</script>
This grabs the hash-bang path and turns it into an HTML5 pushState entry. From this point on you can use pushState to have non-hash-bang paths in your app.
There are a few problems with the S3 redirect-based approach mentioned above. It's possible to solve this without URL hacks, with CloudFront's help. The solution is:
Configure custom error responses for your CloudFront distribution. In the error rules, specify:
HTTP Error Code: 404
Error Caching Minimum TTL: 0 seconds
Response Page Path: /index.html
HTTP Response Code: 200
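In API terms, those error rules correspond roughly to the following CustomErrorResponses structure (a sketch of the shape CloudFront's API and SDKs expect, not a complete distribution config):

const customErrorResponses = {
  Quantity: 1,
  Items: [{
    ErrorCode: 404,                  // the origin error to intercept
    ResponsePagePath: '/index.html', // serve the SPA shell instead
    ResponseCode: '200',             // mask the 404 for the browser
    ErrorCachingMinTTL: 0,           // don't cache the error (TTL 0)
  }],
};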
Next, configure an EC2 instance and set up an nginx server.
I can help with more details on the nginx setup; just leave a note. I learned it the hard way.
Once the CloudFront distribution updates, invalidate your CloudFront cache once to get it into a pristine state. Then hit the URL in the browser and all should be good.
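If you'd rather script that invalidation than click through the console, something like this should work (a sketch using the AWS SDK for JavaScript v3; the distribution ID is a placeholder):

import { CloudFrontClient, CreateInvalidationCommand } from '@aws-sdk/client-cloudfront';

const cloudfront = new CloudFrontClient({ region: 'us-east-1' });

await cloudfront.send(new CreateInvalidationCommand({
  DistributionId: 'E123EXAMPLE', // placeholder for your distribution ID
  InvalidationBatch: {
    CallerReference: Date.now().toString(), // must be unique per request
    Paths: { Quantity: 1, Items: ['/*'] },  // flush everything
  },
}));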
It's tangential, but here's a tip for those using Rackt's React Router library with (HTML5) browser history who want to host on S3.
Suppose a user visits /foo/bear at your S3-hosted static web site. Given David's earlier suggestion, redirect rules will send them to /#/foo/bear. If your application is built using browser history, this won't do much good. However, your application is loaded at this point and it can now manipulate history.
By including the Rackt history package in your project (see also Using Custom Histories from the React Router project), you can add a listener that is aware of hash history paths and replaces the path as appropriate, as illustrated in this example:
import ReactDOM from 'react-dom';
import { Router, useRouterHistory } from 'react-router';
import { createHistory } from 'history';

/* Application-specific route configuration. */
const routes = {};

const history = useRouterHistory(createHistory)();

history.listen(function (location) {
  // If we arrived via the S3 redirect, the real path is in the hash.
  const path = (/#(\/.*)$/.exec(location.hash) || [])[1];
  if (path) history.replace(path);
});

ReactDOM.render(
  <Router history={history} routes={routes}/>,
  document.body.appendChild(document.createElement('div'))
);
To recap: the redirect rule sends /foo/bear to /#/foo/bear, and once the application loads, the listener sees the #/foo/bear history notation and replaces it with /foo/bear. Link tags will work as expected, as will all other browser history functions. The only downside I've noticed is the interstitial redirect that occurs on the initial request.
This was inspired by a solution for AngularJS, and I suspect it could easily be adapted to any application.
I see four solutions to this problem. The first three were already covered in other answers; the last one is my contribution.
1. Set the error document to index.html.
Problem: the response body will be correct, but the status code will be 404, which hurts SEO.
2. Set up redirection rules.
Problem: the URL gets polluted with #! and the page flashes when it loads.
3. Configure CloudFront.
Problem: all pages will return 404 from the origin, so you need to choose whether you won't cache anything (TTL 0, as suggested) or whether you will cache and have issues when updating the site.
4. Prerender all pages.
Problem: additional work to prerender pages, especially when the pages change frequently. For example, a news website.
My suggestion is to use option 4. If you prerender all pages, there will be no 404 errors for expected pages. The page will load fine, and the framework will take control and act as a normal SPA. You can also set the error document to display a generic error.html page and add a redirection rule to redirect 404 errors to a 404.html page (without the hashbang). A minimal prerendering sketch is shown below.
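Here is one way that prerendering step could look in Node.js (a sketch; the route list and the renderPage() function are placeholders you would replace with your framework's server-side rendering call):

import fs from 'fs';
import path from 'path';

const routes = ['/', '/posts', '/about']; // hypothetical route list

// Placeholder renderer; swap in e.g. ReactDOMServer.renderToString for your app.
function renderPage(route) {
  return '<!doctype html><html><body><div id="root"></div>' +
         '<script src="/bundle.js"></script></body></html>';
}

for (const route of routes) {
  const outDir = path.join('build', route.slice(1)); // '/' maps to build/
  fs.mkdirSync(outDir, { recursive: true });
  fs.writeFileSync(path.join(outDir, 'index.html'), renderPage(route));
}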
Regarding 403 Forbidden errors, I don't let them happen at all. In my application, I consider all files within the host bucket to be public, and I set this with the Everyone option and the read permission. If your site has pages that are private, letting the user see the HTML layout should not be an issue. What you need to protect is the data, and that is done in the backend.
Also, if you have private assets, like user photos, you can save them in another bucket, because private assets need the same care as data and can't be compared to the asset files used to host the app.
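One common way to serve such private assets is with presigned URLs; here is a sketch using the AWS SDK for JavaScript v3 (the bucket and key names are placeholders):

import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

const s3 = new S3Client({ region: 'us-east-1' });

// Generate a short-lived URL for a private object; the backend hands
// this to an authorized user instead of making the object public.
const url = await getSignedUrl(
  s3,
  new GetObjectCommand({ Bucket: 'my-private-assets', Key: 'photos/user1.jpg' }),
  { expiresIn: 300 } // seconds
);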
The easiest solution to make an Angular 2+ application served from Amazon S3 work with direct URLs is to specify index.html as both the Index and Error documents in the S3 bucket configuration.
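If you prefer to script that bucket configuration, something like the following should work (a sketch using the AWS SDK for JavaScript v3; the bucket name is a placeholder):

import { S3Client, PutBucketWebsiteCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({ region: 'us-east-1' });

// Serve index.html both as the index document and for errors, so deep
// links into the Angular app fall through to the SPA shell.
await s3.send(new PutBucketWebsiteCommand({
  Bucket: 'my-spa-bucket', // placeholder
  WebsiteConfiguration: {
    IndexDocument: { Suffix: 'index.html' },
    ErrorDocument: { Key: 'index.html' },
  },
}));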
I ran into the same problem today, but the solution from @Mark-Nutter was incomplete for removing the hashbang from my AngularJS application.
In fact, you have to go to Edit Permissions, click Add more permissions, and then grant the List permission on your bucket to Everyone. With this configuration, AWS S3 will be able to return a 404 error, and the redirection rule will then properly catch the case.
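If you'd rather do that from code than from the console, here is a sketch with the AWS SDK for JavaScript v3 (the bucket name is a placeholder; note that this call replaces the bucket's existing ACL, and that bucket-level READ is what enables listing):

import { S3Client, PutBucketAclCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({ region: 'eu-west-1' });

// READ on the bucket for the AllUsers group lets S3 answer missing keys
// with 404 instead of 403, so the redirection rule can fire.
await s3.send(new PutBucketAclCommand({
  Bucket: 'my-spa-bucket', // placeholder
  GrantRead: 'uri="http://acs.amazonaws.com/groups/global/AllUsers"',
}));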
And then you can go to Edit Redirection Rules and add this rule:
<RoutingRules>
  <RoutingRule>
    <Condition>
      <HttpErrorCodeReturnedEquals>404</HttpErrorCodeReturnedEquals>
    </Condition>
    <Redirect>
      <HostName>subdomain.domain.fr</HostName>
      <ReplaceKeyPrefixWith>#!/</ReplaceKeyPrefixWith>
    </Redirect>
  </RoutingRule>
</RoutingRules>
Here you can replace the HostName subdomain.domain.fr with your own domain, and the KeyPrefix #!/ if you don't use the hashbang method for SEO purposes.
Of course, all of this will only work if you have already set up html5Mode in your Angular application:
$locationProvider.html5Mode(true).hashPrefix('!');
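For context, that line lives inside an AngularJS config block along these lines (the module name 'app' is a placeholder):

angular.module('app').config(['$locationProvider', function ($locationProvider) {
  // Enable HTML5 mode and keep '!' as the fallback hash prefix.
  $locationProvider.html5Mode(true).hashPrefix('!');
}]);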