The Research and Development of the Informatics Web: A Critical Analysis of the WWW

How is this Resolved?

Information Quality

"Quality Web information is correct, accessible, usable, understandable and meaningful." (December, 1994, online)

In this context, correct refers to correctness of fact and correctness of scope. Factual correctness means that the information is truthful, accurate and up-to-date. Correctness of scope means that the information is relevant to the user. For example, a user wishing to find information on motor racing does not need to know the technical details of the internal combustion engine.

Accessible means that the information is available to all potential users. This is especially important when considering multimedia. Information presented as an animated movie or as sound samples will never reach the proportion of users who lack the necessary hardware. Conversely, multimedia offers a richer experience, capable of transferring knowledge in a more intuitive fashion. The balance lies in serving the best interests of the users: minimising multimedia where necessary and including it where appropriate.

Usable means supplying the information in a way that is easily managed by the user. A single huge text file may prove unusable. Using hypertext to segment information into manageable chunks aids usability; making the user sift through large amounts of useless information to find what they are looking for degrades it.

Understandable means arranging the information so that the user can gain the most knowledge from it. In order to make web pages more understandable, the designer may wish to use graphic design principles - cues, composition, well-crafted prose and good design all help the user derive meaning from the information.

Meaningful means assisting the user in analysing and interpreting the information in a larger context. "'Meaning' is not purely a transfer of information content, but emerges as a result of encountering that information. A web should not merely present information, but assist users in analysing and interpreting that information within a larger context." (December, 1994, online)

Maintaining Information Quality

Maintaining information quality is a continuous process of gathering, selecting and presenting online information. Developers, who must make sure their information is up-to-date and accurate, may ask domain experts for criticism. If a web contains links to external webs, developers must also consider the accuracy of those webs. However, linking to external webs tends to improve information quality, as it reduces redundancy - there is no need to repeat information in a web that already exists in another, well-maintained web.

Information quality may also be verified at the implementation level. Automatic tools are available to verify link-freshness - that is, a program checks that a linked web page still exists. However, such a tool has limited use, so developers should also rely on reports and make periodic sweeps of links by hand. Web counters allow developers to see which parts of their web are actually being used: there is little point investing time and energy in pages that will never be read, and this tool also allows developers to concentrate on very popular pages. Other software tools allow management of whole web sites;

"Automated tools and higher-level hypertext languages can provide more abstract levels above HTML, so that larger units of thought and web structure can be articulated." (December, 1994, online)
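A link-freshness sweep of the kind described above can be sketched as follows. This is a minimal illustration rather than a production tool: the sample page and the timeout value are arbitrary assumptions, and a real checker would also handle relative URLs and rate-limiting.

```python
from html.parser import HTMLParser
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

class LinkExtractor(HTMLParser):
    """Collects the href of every anchor tag found in a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html):
    """Return the list of hrefs appearing in an HTML document."""
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links

def link_is_fresh(url, timeout=10):
    """Return True if the linked page still exists (a HEAD request succeeds)."""
    try:
        with urlopen(Request(url, method="HEAD"), timeout=timeout):
            return True
    except (HTTPError, URLError, ValueError):
        return False

# Hypothetical page fragment used purely for illustration.
page = '<p>See <a href="http://example.org/cooking.html">recipes</a>.</p>'
print(extract_links(page))  # ['http://example.org/cooking.html']
```

A periodic sweep would simply run `link_is_fresh` over every extracted link and report the failures for a developer to inspect by hand.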

Such site-management tools may also allow the automatic creation of alternate views of web pages. This would mainly be used to create text-only versions, which many users may find important.
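As a sketch of how a tool might derive a text-only view, the fragment below drops markup and substitutes each image with its alt text. This is a deliberately crude assumption about what "text-only" means - it makes no attempt at layout, so adjacent elements run together - but it shows the principle.

```python
from html.parser import HTMLParser

class TextOnlyView(HTMLParser):
    """Renders a crude text-only alternative of an HTML page:
    markup is dropped and each image is replaced by its alt text."""
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        self.parts.append(data)

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            # Fall back to a placeholder when no alt text was supplied.
            self.parts.append(dict(attrs).get("alt", "[image]"))

    def text(self):
        return "".join(self.parts)

viewer = TextOnlyView()
viewer.feed('<h1>Recipes</h1><img src="pie.gif" alt="A pie"><p>Bake well.</p>')
print(viewer.text())
```

A fuller implementation would insert line breaks at block-level tags, but even this sketch shows why well-chosen alt text matters to users without graphics hardware.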

Another useful technique when designing webs is to give pages sensible URLs that capture, in their naming, an abstract of their content. A URL whose path names its subject makes more sense to the user than an opaque string of characters. Also, hyperlinks should be well annotated, leaving the user in no doubt as to what information they link to.


Webs are only of use if they are available to people, and people nearly always find specific webs by using a search engine. Registering webs with search engines is therefore an integral part of their development. Before a web is published, however, developers should already be aware of automatic Web-spiders and include useful descriptions in the <META> tags.
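By way of illustration, a Web-spider reading such a description might work roughly as follows. This is a sketch under the assumption that the spider simply records the content of any <META> description tag it encounters; the sample page is hypothetical.

```python
from html.parser import HTMLParser

class MetaDescriptionReader(HTMLParser):
    """Records the description a page offers to indexing spiders."""
    def __init__(self):
        super().__init__()
        self.description = None

    def handle_starttag(self, tag, attrs):
        # html.parser lowercases tag and attribute names for us.
        if tag == "meta":
            fields = dict(attrs)
            if fields.get("name", "").lower() == "description":
                self.description = fields.get("content")

# Hypothetical page head, continuing the cooking example.
page = ('<html><head><title>Cooking</title>'
        '<meta name="description" content="A guide to home cooking.">'
        '</head><body>...</body></html>')

reader = MetaDescriptionReader()
reader.feed(page)
print(reader.description)  # A guide to home cooking.
```

A page that omits the description leaves the spider to guess at the page's subject, which is exactly the situation the developer should avoid.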

Other methods of publicising a web site include posting a concise description of the web to an appropriate USENET newsgroup or mailing list. Developers must bear in mind to announce the new web within the appropriate domain; the cooking example, for instance, might be posted to a cooking-related newsgroup or mailing list. Another method is to publicise the web on a "What's New" page, such as the one at Webcrawler.

To notify existing users of new information which has been developed within the same web, a "What's New" page may be created, with a link prominently displayed on the homepage.

Development Methodology

A reliable method of creating a web which conforms to the many details I have mentioned is to use a structured, user-centred development methodology. Developing a web may be considered analogous to developing a piece of software. Few pieces of software are developed by being simply programmed. Rather, a piece of software goes through a systems life cycle, consisting of problem specification, analysis, design, implementation and maintenance. The same is true of webs;

"These information shaping abilities cannot be based on machine intelligence alone. Human wisdom, judgement, and aesthetics must play a part in improving the quality of Web information." (December, 1994, online)

It is this development methodology which I will discuss in the next section.
