August 29th, 2001, 11:36 PM
HTML content extractor for XML conversion
My employer (Museum Victoria) is beginning the process of upgrading its tens of thousands of web pages from HTML to XML. The benefits of this are numerous, but I assume everyone here knows them.
The problem we currently face is how to get the data (content) out of the existing pages, leaving us with essentially text content that preserves only the basic formatting (heading levels, font emphasising, etc.). I have found a number of HTML strippers, and they are great at taking out the HTML tags alone, but they don't remove any non-body text (e.g. navigation text), and they don't preserve any of the body-text formatting.
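To illustrate the kind of extraction we're after, here is a minimal sketch (in Python, using only the standard library). The list of tags to keep is illustrative, not exhaustive; a real filter would need tuning per site:

```python
# Strip all HTML tags, but keep basic formatting (headings, emphasis)
# by re-emitting a small whitelist of tags. Everything else becomes
# plain text. The KEEP set below is an illustrative assumption.
from html.parser import HTMLParser

KEEP = {"h1", "h2", "h3", "h4", "em", "i", "strong", "b"}

class ContentExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.out = []

    def handle_starttag(self, tag, attrs):
        # Re-emit only whitelisted formatting tags, without attributes.
        if tag in KEEP:
            self.out.append("<%s>" % tag)

    def handle_endtag(self, tag):
        if tag in KEEP:
            self.out.append("</%s>" % tag)

    def handle_data(self, data):
        # All text content is kept as-is.
        self.out.append(data)

    def text(self):
        return "".join(self.out)

p = ContentExtractor()
p.feed("<html><body><h1>Title</h1><p>Some <b>bold</b> text.</p></body></html>")
print(p.text())  # -> <h1>Title</h1>Some <b>bold</b> text.
```

This handles the tag-stripping half of the problem; deciding which text is navigation rather than body content is the part that still needs per-site rules.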
HTML to XML converters usually just convert the page to XHTML, which is not what we want. Other XML extractors don't work on more than one page, require substantial scripting, won't run on a Windows platform, or I don't know how to automate them to work on our many thousands of pages.
Additionally, because of the many different designs of the pages across the collection of museum sites, a great deal of time would need to be spent creating filters for HTML strippers or XML converters that will work on each site. Ideally, a utility intelligent enough to do most of the work and leave me to do the fine-tuning would be perfect!
After many hours of searching the web, I'm starting to run out of ideas. Can anyone here help me with this challenge? I'm certain this is going to become a very widespread problem in the next year or so as content providers migrate to XML.
Your help is greatly appreciated!
September 12th, 2001, 06:41 AM
My personal favorite - I started using the tool in 1997 - is NoteTab Pro. You can find it at http://www.notetab.com/. It is a very powerful text/HTML editor that includes an easy-to-learn, so-called "Clip Language". You can write macro commands in that language to process large numbers of files (the maximum size of ONE file is 2 gigabytes - enough capacity, I assume).
Do not expect NoteTab Pro to do miracles, though: there are no easy solutions to some of your requirements.
The critical point here is: how do you distinguish navigation elements (mostly strings enclosed within <a href=...> tags) from "normal" links within the body text?
a) Write a macro to strip (search-and-replace) those navigation text elements first.
b) Write a macro to convert the elements you want to preserve into HTML entities.
Then do the following further steps:
c) Strip the remaining HTML from your files.
d) Reconvert the HTML entities back into HTML tags and attributes.
e) Convert the files to XML or XHTML with NoteTab Pro's built-in conversion filter.
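The protect-then-strip trick in steps b)-d) can be sketched outside NoteTab as well; here is a rough Python version. The navigation-comment markers and the list of preserved tags are assumptions for illustration, and note that step d) would also un-escape any literal entities in the source text:

```python
import re

# Tags to survive the stripping pass (illustrative assumption).
KEEP = ("h1", "h2", "h3", "em", "strong", "b", "i")

def extract(html):
    # a) Strip navigation blocks first. The <!-- nav --> markers here are
    #    a hypothetical convention; real filters are site-specific.
    html = re.sub(r"(?is)<!--\s*nav\s*-->.*?<!--\s*/nav\s*-->", "", html)
    # b) Convert the tags we want to preserve into entities, so a generic
    #    tag stripper leaves them alone.
    keep = "|".join(KEEP)
    html = re.sub(r"(?i)<(/?(?:%s))>" % keep, r"&lt;\1&gt;", html)
    # c) Strip all remaining HTML tags.
    html = re.sub(r"(?s)<[^>]+>", "", html)
    # d) Reconvert the protected entities back into tags.
    #    (Caveat: this also converts literal &lt;/&gt; in the body text.)
    html = html.replace("&lt;", "<").replace("&gt;", ">")
    return html

print(extract("<p>Hi <b>there</b></p>"))  # -> Hi <b>there</b>
```

Step e) - wrapping the result in your target XML structure - is then a straightforward templating pass over the cleaned text.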
The Clip Language - in my opinion - meets your requirements.