Reload page if 'not available'?

I have a standalone Raspberry Pi that shows a webpage from another server.

It reloads after 30 minutes via JavaScript on the webpage.

In some cases the server isn't reachable for a very short time, Chromium shows the usual "This webpage is not available" message, and the reloading stops (because no JavaScript from the page can trigger a reload anymore).

In this case, how can I still reload the webpage after a few seconds?

My current idea is to fetch the page content via AJAX and only replace the content of the current page if the request succeeds.
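To illustrate that idea, here is a minimal sketch (the URL, the 30-minute interval, and the use of fetch()/DOMParser are just placeholder assumptions): the content is only swapped in when the request succeeds, so a failed request leaves the last good content on screen.

// Minimal sketch of the AJAX idea: only replace the page content when the
// request succeeds, so a short outage never leaves an error page visible.
// PAGE_URL and REFRESH_MS are placeholder assumptions.
const PAGE_URL = window.location.href;
const REFRESH_MS = 30 * 60 * 1000;          // 30 minutes

async function refreshContent() {
  try {
    const response = await fetch(PAGE_URL, { cache: 'no-store' });
    if (!response.ok) return;               // server error: keep the old content
    const html = await response.text();
    const doc = new DOMParser().parseFromString(html, 'text/html');
    document.body.innerHTML = doc.body.innerHTML;  // swap in the fresh content
  } catch (err) {
    // Network error: keep showing the old content and try again next time.
  }
}

setInterval(refreshContent, REFRESH_MS);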

asked Feb 13 '23 by fechnert


2 Answers

Rather than refreshing the webpage every few minutes, what you can do is ping the server using JavaScript (pingjs is a nice library that can do that).

Now, if the ping is successful, reload the page. If it is not successful, wait another 30 seconds and ping again. Doing this continuously basically makes you wait until the server is reachable again (i.e. you can ping it).

I think this is a much simpler method than building your own Java-based browser or writing a browser plugin.


Extra info: You should use an exponential back-off on the retry timeout to avoid unnecessary processing overhead, i.e. the first time you find the ping fails, wait for 30 seconds; the second time wait for 30*(2^1) sec; the 3rd time 30*(2^2); and so on until you reach a maximum value.


Note - this assumes your server is really unreachable ... and not just that the HTML page is unavailable (there's a small but appreciable difference)
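
Putting that together, here is a minimal sketch of the kind of script that could sit in the page (or in a small wrapper page that stays loaded). It uses a plain fetch() HEAD request instead of pingjs, and the URL and timing constants are placeholder assumptions:

// Minimal sketch: ping before reloading, and back off exponentially while
// the server is unreachable. fetch() is used here instead of pingjs;
// TARGET and the delay values are placeholder assumptions.
const TARGET = window.location.href;        // page currently on display
const RELOAD_INTERVAL_MS = 30 * 60 * 1000;  // the normal 30-minute refresh
const RETRY_BASE_MS = 30 * 1000;            // first retry after 30 seconds

async function pingAndReload(retryDelayMs) {
  try {
    // A HEAD request is enough to tell whether the server answers at all.
    await fetch(TARGET, { method: 'HEAD', cache: 'no-store' });
    window.location.reload();               // server answered: safe to reload
  } catch (err) {
    // Unreachable: wait, then try again with exponential back-off,
    // capped at the normal reload interval.
    const nextDelay = Math.min(retryDelayMs * 2, RELOAD_INTERVAL_MS);
    setTimeout(() => pingAndReload(nextDelay), retryDelayMs);
  }
}

// Kick off the normal 30-minute cycle instead of a blind location.reload().
setTimeout(() => pingAndReload(RETRY_BASE_MS), RELOAD_INTERVAL_MS);

Capping the back-off at the normal reload interval keeps the worst case no slower than the existing 30-minute refresh.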

answered Feb 27 '23 by AbdealiJK


My favored approach would be to copy the web page locally with a script every 30 minutes and point Chromium at the local copy.

The advantage is that the script can run every 30 seconds and check whether the last successful page pull happened within the last 30 minutes. If YES, it does nothing. If NO, it keeps attempting to pull the page. In the meantime the browser is set to refresh the page every 5 seconds, but because it is pulling a local page it does little to no work for each refresh. The script can then detect whether what it pulled back contains the required content.

This approach assumes that your goal is to avoid refreshing the remote page every few seconds and thereby reduce the load on the remote server.

Use these options to grab the whole page....

# exit if the last successful page pull is less than 1800 seconds (30 minutes) old
AGE_IN_SECS=$(( $(date +%s) - $(stat -c "%Y" /success/directory/index.html) ))
[[ $AGE_IN_SECS -lt 1800 ]] && exit

# copy the whole page into the temporary directory
# (-nH stops wget creating a www.example.com/ subdirectory, so the file
#  checked by stat above ends up at /success/directory/index.html)
cd /temporary/directory
wget -p -k -nH http://www.example.com/

and then you need to test the page in some way to ensure it contains what you need, for example (using a bash script)....

# keep the new copy only if it contains the expected content
RESULT=$(grep -ci "REQUIRED_PATTERN_MATCH" expected_file_name)
[[ $RESULT -gt 0 ]] && cp -r /temporary/directory/* /success/directory
rm -rf /temporary/directory/*

NOTE:

This is only the bare bones, as I don't know the specifics of your setup. But you should also look at trying to ...

  • ensure you have a timeout on the wget, so that you never end up with multiple wgets running at once
  • create some form of back-off so that you do not hammer the remote server while it is in trouble
  • ideally show some message on the page if it is over 40 minutes old, so that the viewer knows a problem is being experienced (see the sketch after this list)
  • you could use a Chromium refresh plugin to reload the page from the local copy
  • you can use your script to alter the page once you have downloaded it, if you want to add additional or altered formatting (e.g. replace the CSS file?)
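
For the "page is over 40 minutes old" message, one hypothetical option is to have the download script inject a small snippet like the one below into the local copy (e.g. just before </body>). It assumes document.lastModified reflects the local file's modification time, which Chromium normally reports for file:// pages, so treat that as something to verify:

// Hypothetical snippet injected into the local copy by the download script.
// Shows a warning banner when the copy on disk is older than 40 minutes.
// Assumes document.lastModified reflects the local file's mtime (file:// page).
const MAX_AGE_MS = 40 * 60 * 1000;  // 40 minutes, as suggested above
const ageMs = Date.now() - new Date(document.lastModified).getTime();

if (ageMs > MAX_AGE_MS) {
  const banner = document.createElement('div');
  banner.textContent = 'Warning: this page was last updated '
    + Math.round(ageMs / 60000) + ' minutes ago.';
  banner.style.cssText = 'position:fixed;top:0;left:0;right:0;padding:8px;'
    + 'background:#c00;color:#fff;text-align:center;z-index:9999';
  document.body.appendChild(banner);
}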
answered Feb 27 '23 by MoopyGlue