I wish to reverse engineer any web page into a logical representation of the page. For example, if a web page has a menu, I want a logical menu structure, perhaps in XML. If the page has an article, I want an article XML node; if the article has a title, I want a title XML node. Basically, I want the logical form of the web page without any of the user interface.
This logical model could be objects in code or XML; it doesn't matter. The important part is that it identifies what everything on the page means.
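To make this concrete, here is a rough sketch of the kind of transformation I mean. It assumes Python with BeautifulSoup (my own choice, not a requirement) and, crucially, that the page uses semantic HTML tags like nav, article, and h1; the sample markup and the output element names are just illustrative.

```python
# A minimal sketch, assuming the page uses semantic HTML (nav, article, h1).
# Requires: pip install beautifulsoup4
from bs4 import BeautifulSoup
import xml.etree.ElementTree as ET

HTML = """
<html><body>
  <nav><ul><li><a href="/home">Home</a></li><li><a href="/about">About</a></li></ul></nav>
  <article><h1>Example title</h1><p>Body text of the article.</p></article>
</body></html>
"""

def to_logical_xml(html: str) -> ET.Element:
    """Strip the UI and keep only the logical structure: menu items, article, title."""
    soup = BeautifulSoup(html, "html.parser")
    page = ET.Element("page")

    # Menu: every link inside a <nav> becomes an <item> node.
    nav = soup.find("nav")
    if nav is not None:
        menu = ET.SubElement(page, "menu")
        for a in nav.find_all("a"):
            item = ET.SubElement(menu, "item")
            item.text = a.get_text(strip=True)

    # Article: the first <article> element, with its first heading as <title>.
    art = soup.find("article")
    if art is not None:
        article = ET.SubElement(page, "article")
        heading = art.find(["h1", "h2"])
        if heading is not None:
            title = ET.SubElement(article, "title")
            title.text = heading.get_text(strip=True)
        body = ET.SubElement(article, "body")
        body.text = " ".join(p.get_text(strip=True) for p in art.find_all("p"))

    return page

print(ET.tostring(to_logical_xml(HTML), encoding="unicode"))
```

This prints something like `<page><menu><item>Home</item>...</menu><article><title>Example title</title>...</article></page>`, which is the kind of UI-free structure I'm after. The hard part, of course, is pages that don't use semantic markup.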
So is it possible to reverse-engineer a website, and if so, is there any way to make the dissection easier? The answer to the first question is yes.
A good starting point is the browser's developer tools, opened with Ctrl+Shift+I in most browsers (Chrome, Firefox, etc.). The toolbar at the top lists the panels the browser provides; the Elements panel shows the rendered source of the page along with its CSS.
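If you would rather do that inspection from code than by hand, a rough equivalent is to fetch the page and pretty-print the parsed DOM. The sketch below assumes Python with BeautifulSoup, uses a placeholder URL, and will not see anything that JavaScript renders after load.

```python
# A rough code equivalent of the Elements panel: fetch a page and print its
# parsed DOM. The URL is a placeholder, not a real target.
from urllib.request import urlopen
from bs4 import BeautifulSoup  # pip install beautifulsoup4

url = "https://example.com/"
html = urlopen(url).read()
soup = BeautifulSoup(html, "html.parser")

# prettify() indents the tree so the nesting is easy to read,
# much like expanding nodes in the browser's Elements tab.
print(soup.prettify()[:2000])
```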
Sounds like what you want requires a human to categorise a page's contents.
This could be automated, but it would produce false positives and would not work in every case.
For example, what if one page used a ul element for its menu and another one used table cells?
Do you want this for one site in particular, or any site on the Internet?
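For what it's worth, a heuristic along those lines might look like the sketch below. It assumes Python with BeautifulSoup, treats any short cluster of link-only entries as a menu whether it sits in a ul or in table cells, and will happily misclassify other link lists, which is exactly the false-positive problem mentioned above.

```python
# A sketch of a menu-detection heuristic: guess that a block of markup is a
# "menu" if it is a short run of link-only entries, whether it lives in a
# <ul> or in table cells. Expect false positives on other link lists.
from bs4 import BeautifulSoup  # pip install beautifulsoup4

def looks_like_menu(container) -> bool:
    """True if the container holds a handful of short, link-only entries."""
    links = container.find_all("a")
    if not 2 <= len(links) <= 15:
        return False
    # Menu labels are usually short ("Home", "About", ...).
    return all(len(a.get_text(strip=True)) <= 30 for a in links)

def find_menus(html: str):
    soup = BeautifulSoup(html, "html.parser")
    candidates = soup.find_all(["ul", "tr"])  # list-based and table-based layouts
    return [
        [a.get_text(strip=True) for a in c.find_all("a")]
        for c in candidates
        if looks_like_menu(c)
    ]

table_menu = "<table><tr><td><a href='/'>Home</a></td><td><a href='/faq'>FAQ</a></td></tr></table>"
print(find_menus(table_menu))  # [['Home', 'FAQ']]
```

Whether a heuristic like this is good enough depends heavily on whether you are targeting one site you can tune it for or arbitrary pages across the web.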