Connectedness & HATEOAS

Tags: rest, hateoas

It is said that in a well-defined RESTful system, clients only need to know the root URI, or a few well-known URIs, and should discover all other links through these initial URIs. I understand the benefits of this approach (decoupled clients), but the downside for me is that the client needs to discover the links each time it tries to access something. For example, given the following hierarchy of resources:

/collection1
  |-sub1
    |-sub1sub1
      |-sub1sub1sub1
        |-sub1sub1sub1sub1
    |-sub1sub2
  |-sub2
    |-sub2sub1
    |-sub2sub2
  |-sub3
    |-sub3sub1
    |-sub3sub2

If we follow the "clients only need to know the root URI" approach, then a client should only be aware of the root URI, i.e. /collection1 above, and the rest of the URIs should be discovered through hypermedia links. I find this cumbersome: each time a client needs to do a GET, say on sub1sub1sub1sub1, should it first do a GET on /collection1, follow the link in the returned representation, and then do several more GETs on sub-resources to reach the desired resource? Or is my understanding of connectedness completely wrong?
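
To make the concern concrete, this is roughly what the traversal I'm describing would look like (a sketch in Python using the requests library; the host, link-relation names, and JSON structure are made up for illustration):

    import requests

    BASE = "https://api.example.com"  # hypothetical host

    def follow(representation, rel):
        """Find the link with the given relation in a representation and GET it."""
        href = next(link["href"] for link in representation["links"] if link["rel"] == rel)
        return requests.get(BASE + href).json()

    # Start at the only URI the client is allowed to know...
    root = requests.get(BASE + "/collection1").json()

    # ...and walk down the hierarchy one GET at a time.
    sub1 = follow(root, "sub1")
    sub1sub1 = follow(sub1, "sub1sub1")
    sub1sub1sub1 = follow(sub1sub1, "sub1sub1sub1")
    sub1sub1sub1sub1 = follow(sub1sub1sub1, "sub1sub1sub1sub1")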

Best regards, Suresh

Suresh Kumar, asked Jul 14 '10


2 Answers

You will run into this mismatch when you try to build a REST API that does not match the flow of the user agent that is consuming it.

Consider that when you run a client application, the user is always presented with some initial screen. If you match the content and options on this screen with the root representation, then the available links and desired transitions will line up nicely. As the user selects options on the screen, you transition to other representations, and the client UI should be updated to reflect the new representation.

If you try to model your REST API as some kind of linked data repository and your client UI as an independent set of transitions, you will find HATEOAS quite painful.
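
As an illustration of the first case, the client can build its initial screen from whatever links the root representation advertises instead of hard-coding a menu. This is only a hypothetical sketch in Python; the field names ("links", "title", "href") are assumptions, not a prescribed format:

    import requests

    # Fetch the root representation at start-up and render its links as menu options.
    root = requests.get("https://api.example.com/collection1").json()

    for link in root.get("links", []):
        print(f"menu option: {link['title']} -> {link['href']}")

    # When the user picks an option, the client simply GETs that href and renders
    # the next representation; no URI construction happens on the client side.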

Darrel Miller, answered Nov 16 '22


Yes, it's right that the client application should traverse the links, but once it has discovered a resource, there is nothing wrong with keeping a reference to that resource and using it for longer than a single request. If your client is able to remember things permanently, it can do so.

Consider how a web browser keeps its bookmarks. You probably have ten or a hundred bookmarks in the browser, and you probably found some of them deep in a hierarchy of pages, but the browser dutifully remembers them without having to remember the path it took to find them.

A richer client application could remember the URI of sub1sub1sub1sub1 and reuse it as long as it still works. It most likely still represents the same thing (it ought to). If it no longer exists or fails for some other client-side reason (4xx), you can retrace your steps from the root to see if you can find a suitable replacement.
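
A sketch of that "bookmark" behaviour (hypothetical Python; the bookmark cache and the rediscover callback are made up for illustration, not part of any standard):

    import requests

    bookmarks = {}  # discovered URIs, kept across requests like browser bookmarks

    def get_resource(name, rediscover):
        """Use a remembered URI if we have one; fall back to re-discovery on 4xx."""
        if name in bookmarks:
            response = requests.get(bookmarks[name])
            if response.status_code < 400:
                return response.json()
            # The bookmark went stale (e.g. 404/410): forget it and retrace our steps.
            del bookmarks[name]
        bookmarks[name] = rediscover()  # walk the links from the root again
        return requests.get(bookmarks[name]).json()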

And of course what Darrel Miller said :-)

mogsie, answered Nov 16 '22