Web Accessibility Benchmarking Cluster

WAB Cluster

Sixth Framework Programme

Information Society Technologies Priority

Unified Web Evaluation Methodology (UWEM 0.5)

Contractual Date of Delivery to the EC: 28 February 2005 + 45 days
Actual Date of Delivery to the EC: 12 October 2005
Editors:
  • Eric Velleman (Accessibility Foundation),
  • Carlos A Velasco (Fraunhofer Institute for Applied Information Technology FIT),
  • Mikael Snaprud (Agder University College),
  • Dominique Burger (BrailleNet)
Contributors: See Appendix E for contributors list
Workpackage: WAB1a
Security: Public
Nature: Report
Version: P
Total number of pages:

Keywords:

Web Accessibility, WAB Cluster, World Wide Web Consortium, W3C, Web Accessibility Initiative, WAI, Web Content Accessibility Guidelines, WCAG, Unified Web site Evaluation Methodology, UWEM, Evaluation and Report Language, EARL.

DOCUMENT HISTORY
| Version | Version date | Responsible | Description |
|---------|--------------|-------------|-------------|
| A | 05-05-22 | FIT | Initial version with TOC |
| B | 05-03-06 | FIT | Version containing Section 3, and integrating contributions to Sections 4, 5 and 7, plus Appendices A, C, D and E |
| C | 05-06-05 | Agder University College | Added section 2.6 and 2.8 |
| D | 05-06-05 | DCU | Revised section 4; added section 12.2. |
| E | 10-06-05 | Accessibility Foundation | Results of Conclave work: 6, 7 and 8 June 2005 |
| F | 05-06-20 | City University | Updated version of section 7; appendices for section 7. |
| G | 05-06-21 | Eric Velleman | Updated sections with comments from W3C. |
| H | 05-06-25 | Agder University College | Converted to master document, for easier maintenance, and removed material from the evaluation suite in section 3. |
| I | 05-06-25 | Agder University College | Added latest version of section 4 that Barry had uploaded, and the glossary. |
| J | 05-06-25 | Agder University College | Added information on Xiaoming Zeng's WABScore method, and elaborated on aggregating page fractions, instead of whole pages, for key use scenarios and a connection to sampling/random walk algorithms in section 6. |
| K | 05-08-09 | Agder University College | Section 6 split into core part and complementary section. |
| L | 05-08-20 | Agder University College | Section 6 restructured for better readability, and updated with comments from Andreas Prinz. Minor typos fixed in sections 4 and 8. |
| M | 05-08-22 | Agder University College | Fixed link to references and glossary. |
| N | 05-09-30 | Accessibility Foundation, FIT | Global copy-editing; replacement of missing figures; full modification of Section 5 according to BenToWeb's proposal; missing references and cross-references fixed. |
| O | 05-10-12 | FIT | Incorporation of W3C copyright notice and final remarks from WAI and Cluster projects. From Agder University College: incorporation of review comments from the EIAO partners (05-10-11). |
| P | 05-10-12 | Accessibility Foundation | Change to section 2 following remark from WAI |

Table of Contents


1 Executive summary

This document is the result of a joint effort by 24 European organisations in three European projects combined in a cluster to develop a Unified Web Evaluation Methodology (UWEM). This methodology is based on the W3C Web Content Accessibility Guidelines 1.0 [WCAG10] and will be synchronised with the foreseen migration from WCAG 1.0 to WCAG 2.0 [WCAG20] in the near future. The UWEM offers an interpretation of the guidelines agreed among stakeholders within the aforementioned projects. This document is a draft and shall not be considered a stable version. It will serve as the basis for an evaluation of the draft methodology by all intended users.

The projects involved in making this methodology aim to ensure that evaluation tools and methods developed for global monitoring or for local evaluation are compatible and coherent among themselves and with WAI. The projects will provide feedback and contributions to WAI for future guidelines or versions of guidelines. Their aim is to increase the value of evaluations by basing them on a shared interpretation of WCAG 1.0.

This document presents the 0.5 version of the Unified Web Evaluation Methodology, which is based on the W3C Web Content Accessibility Guidelines 1.0. UWEM 0.5 will be extensively evaluated in the next phase of the Cluster, including user, expert and (semi-)automated testing of the assessment methodology. This phase will also include public comments, evaluation and discussion.

UWEM 0.5 provides an evaluation procedure consisting of a system of principles and practices for manual and automatic evaluation of Web accessibility, for both humans and machine interfaces. The methodology aims to be fully conformant with the WCAG 1.0 guidelines. Currently the UWEM is limited to priority 1 checkpoints, but priority 2 checkpoints will be added in the coming phase of the Cluster.

The methodology covers evaluations of a single Web page, an entire site (irrespective of size) or multiple sites, and includes a method for sampling, clarifications of the checkpoints, user testing protocols, and the information necessary for interpretation and integration/aggregation of results.

The UWEM 0.5 will be used by the projects included in the Cluster for an observatory (EIAO project), tools for benchmarking (BenToWeb project) and a certification scheme (SupportEAM project). More information about the Cluster, the UWEM and the projects involved can be found at: http://www.wabcluster.org/. Refer to Appendix A for the document license.

The document is organised in the following way. Section 2 outlines the requirements for the document and the basic properties of UWEM. Section 3 describes UWEM conformance in relation to the sample to be evaluated, the types of tests to conduct, conformance claims and confidence levels for the test results. Section 4 describes the approaches for scoping and sampling a Web site for evaluation; the sampled resource set should cover critical tasks and secure a representative evaluation result. Section 5 describes how to carry out the tests related to the WCAG 1.0 checkpoints, including clarifications and user testing information. Section 6 describes a model for aggregating test results, from accessibility barriers on a single Web resource up to regional aggregations of Web resources; the model also refers to a possible aggregation over different user groups who may experience different barriers. Section 7 describes user testing protocols according to UWEM, including how to select a set of tasks to accomplish and how to compose a group of testers. Section 8 describes how to present the results of the evaluation, specifically tailored for policy makers using a score card approach. A set of relevant appendices is also included.

2 Introduction

The Unified Web Evaluation Methodology should ensure that evaluation tools and methods developed for large-scale monitoring or for local evaluation are compatible and coherent among themselves and with W3C/WAI. This document is the result of a joint effort of three European projects with 24 organisations collaborating in the WAB Cluster to develop UWEM.

The purpose of the 0.5 version of UWEM is to provide a basis for evaluating the methodology across all the intended types of testing: user, expert and (semi-)automated testing of Web resources. The evaluation of UWEM is also planned to provide feedback and contribute to W3C/WAI for future guidelines or versions of guidelines. W3C/WAI staff have reviewed and provided input into previous drafts of this document in order to minimize potential fragmentation of technical content. This does not imply W3C or WAI endorsement of any part of this draft. W3C/WAI Working Groups have not yet reviewed the draft, and will have the opportunity to do so along with the public.

Some of the materials presented in this document are annotations of W3C documents (those included in section 5). In particular, we are targeting the following two documents: the Web Content Accessibility Guidelines 1.0 [WCAG10] and the Techniques for Web Content Accessibility Guidelines 1.0 [WCAG10-TECHS].

According to the Intellectual Rights FAQ from W3C, section 5 of UWEM falls under an annotation “... that does not require the copying and modification of the document being annotated”. Therefore, all references to guidelines and checkpoints are duly quoted, and the URL to the original document is included. W3C is not responsible for any content not found at the original URL, and our annotations are non-normative.

2.1 Methodology definition

The UWEM is a Web evaluation methodology that provides an evaluation procedure consisting of a system of principles and practices for manual and automatic evaluation of Web accessibility for humans and machine interfaces. Version 0.5 of the methodology is designed to be conformant with WCAG 1.0 priority 1 checkpoints with regard to technical criteria. Future versions of this methodology are intended to be conformant with WCAG 2.0 from the same standpoint.

UWEM 0.5 offers an interpretation of the guidelines to be agreed among European stakeholders. It aims to increase the value of evaluations by basing them on a shared interpretation of WCAG 1.0 and a set of tests that are sufficiently robust to give stakeholders confidence in the results. Web content producers may also wish to evaluate their own content, and UWEM aims to be suitable for these users as well.

The methodology is designed to meet the following requirements:

In the methodology we have included information about:

2.2 Target audience of the document

The target audience for this document includes, for example:

The European Commission, national governments and other organisations who wish to carry out benchmarking projects on Web accessibility will be able to use the UWEM to carry out the evaluations and compare their results in a meaningful way.

UWEM is an evaluation methodology and is not intended to provide information for Web content producers wishing to produce content compliant with WCAG 1.0. This information is provided in the WCAG 1.0 Techniques Documents that are available through the W3C/WAI website [WCAG10-TECHS].

2.3 Target technologies of this document

The 0.5 version of UWEM covers methods to evaluate documents based on the following technologies:

2.4 Acknowledgements

The following organisations worked on this UWEM document:

Accessibility Foundation (The Netherlands, Cluster coordinator); Agder University College (Norway, EIAO coordinator); Fraunhofer Institute for Applied Information Technology FIT (Germany, BenToWeb coordinator); Association Braillenet (France, SupportEAM coordinator), Vista Utredning AS (Norway); Forschungsinstitut Technologie-Behindertenhilfe der Evangelischen Stiftung Volmarstein (Germany); The Manchester Metropolitan University (UK); Nettkroken as (Norway); University of Tromsø (Norway); FBL s.r.l. (Italy); Warsaw University of Technology, Faculty of Production Engineering, Institute of Production Systems Organisation (Poland); Aalborg University (Denmark); Intermedium as (Norway); Fundosa Teleservicios (Spain); Dublin City University (Ireland); Universität Linz, Institut integriert studieren (Austria); Katholieke Universiteit Leuven, Research & Development (Belgium); Accessinmind Limited (UK); Multimedia Campus Kiel (Germany); Department of Product and Systems Design, University of the Aegean (Greece); City University London (UK); ISdAC International Association (Belgium); FernUniversität in Hagen (Germany).

We thank the Web Accessibility Initiative's Team from the World Wide Web Consortium for all the useful criticism of the different versions of this document.

2.5 More information about the WAB Cluster

The projects participating in the WAB cluster are funded by the European Union in the second FP6 IST call (2003) of the eInclusion Strategic Objective. The WAB cluster Web site is available at http://www.wabcluster.org/. More information about the projects can be found on the project websites:

3 Evaluation procedures and conformance

The evaluation procedures of UWEM 0.5 have the following objectives:

3.1 Large scale screening of Web sites

Large-scale screening of Web sites is a UWEM-specific method that may help to quickly identify the scope of problems, or to monitor progress, for a large number of Web sites or for individual Web sites for some disability groups. However, automatic screening will not catch all of the problems on a site and should not be used to determine conformance.

3.2 Types of evaluation

The different types of evaluation methods have a number of strengths and weaknesses, as well as different levels of confidence associated with them. Figure 1 describes the levels of confidence of four different evaluation methods in their ability to benchmark accessibility.

Figure 1: UWEM Confidence Levels and Evaluation Types.

The figure shows, for example, that automatic evaluation (Tool 1 or Tool 2) can only test for conformance to a subset of the checkpoints (such as the set provided in section 5), which means that only a subset of all possible accessibility barriers can be identified reliably by using automatic testing. Confidence in automatic evaluation as an overall indicator of accessibility is therefore low; however, it can identify some barriers reliably. Tool 1 and Tool 2 are here two fully automatic assessment tools that focus on checking slightly different accessibility issues, with some overlap of functionality.

Some tools can also act as decision support systems in a semi-automatic evaluation process, where the system aids the testing process, points out where the testers should focus, possibly gives hints about earlier decisions, and performs some tasks automatically. Expert testing, usually done with a semi-automatic testing/decision support system, will typically be more precise than semi-automatic testing done by a non-expert.

User testing is able to identify some barriers that are not caught by other testing means, and can estimate the accessibility of the tested scenarios with a high level of confidence. However, user testing is also quite specialised, so it is not generally suitable for conformance testing, since it cannot cover all aspects of the tests of section 5. The best approach to ensure both accessibility and UWEM conformance is a combined approach with both expert testing and user testing of the Web site.

The main advantages of automatic testing are:

The highest level of confidence is achieved by a combined approach of user testing and expert testing involving all possible tasks supported by a Web site. This method also provides feedback to the developers for the improvement of the accessibility and usability of the site. However, it is an expensive and time-consuming method, and for the purposes of UWEM 0.5, we propose user testing on a subset of tasks (see sections 4 and 7). See section 8 for reporting of results as scorecards.

3.3 Conformance requirements

UWEM 0.5 conformance requires that all tests for the selected level in section 5 pass on every page of the Web site, according to the scope defined in section 4.

3.4 Conformance claims

Claims of conformance of accessibility according to the UWEM methodology must include the following information (an illustrative sketch follows the list):

  1. The UWEM version and its URI identifier, e.g., http://www.wabcluster.org/refs/UWEM-0.5/

  2. The URI to a document detailing the evaluation set (scope pattern list) to which the claim refers. A resource set conforms at a given confidence level only if all content provided by that resource set so conforms.

  3. The level of confidence being claimed.
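For illustration only, the three items could be recorded as a small structured claim. The field names and the scope pattern list URI below are assumptions made for this sketch; they are not defined by UWEM 0.5.

```python
# Illustrative sketch of a UWEM 0.5 conformance claim reduced to its three
# required items. The field names and the example.org URI are hypothetical.
claim = {
    "uwem_version": "http://www.wabcluster.org/refs/UWEM-0.5/",     # item 1
    "scope_pattern_list": "http://www.example.org/uwem-scope.xml",  # item 2
    "confidence_level": "High Confidence (evaluation by experts)",  # item 3
}
```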

3.5 Levels of confidence

Confidence in the UWEM evaluation results is expressed not only by the choice of a method, but also by providing information about the sampling of resources from the Web site (see section 4) and the set of criteria that have been applied (see section 5). This confidence is at the macro level, and is not the same as the confidence level used for the individual tests of section 5.

Section 5 states the technologies to which UWEM can be applied (the W3C technologies (X)HTML and CSS in different versions). There are therefore sites to which UWEM cannot technically be applied, e.g., a site making exclusive use of Macromedia Flash. For these sites, user testing might be necessary, although it cannot provide a complete view of the accessibility of the site.

The following levels of confidence in the different individual methods for assessing accessibility are proposed (see Figure 1). Note that every level includes the preceding one:

Minimal Confidence (“Automatic evaluation”):
this method implies that all tests that can be automatically checked, as described in section 5, have been used for a set of resources, as defined in section 4. The resources tested must pass all tests. The tool integrates the results as described in section 8. This confidence level indicates that a tool (or a person) can check all automatable tests of section 5.
Medium Confidence (“Semi-Automatic evaluation”):
this method implies a human evaluator takes the results of all tests (see section 5) that can be automatically assessed with different tools, and integrates these results as described in section 8, applied on resources as defined in section 4. This level is a minimal extension of the previous level where two or more tool results are used, and the results are aggregated and compared by a human.
High Confidence (“Evaluation by experts”):
this method implies experts apply all tests (see section 5) on resources as defined in section 4, and integrate the evaluation results as described in section 8. The resources tested must pass all tests.
Highest Confidence for a task (“Evaluation by users”):
this method implies users evaluate the resources as defined in section 4, using the methodology as defined in section 7, and integrate the evaluation results as described in section 8. However, to have the highest confidence in user testing, users would need to undertake all the key tasks on a Web site, which is undoubtedly a very time-consuming procedure. The highest confidence level must include “Evaluation by experts” as well.

In deciding which method or combination of methods to choose for assessing accessibility, a Web site owner should bear in mind the above levels of confidence and also use the individual Web site accessibility scorecard presented in section 8.

4 Scope of a Web site and methods for sampling

4.1 Procedure to express the scope of a Web site: the Scope Pattern List

For the purposes of the UWEM a Web site is defined as an arbitrary collection of hyperlinked Web resources, each identified by one or more URIs [RFC2396] (each URI optionally complemented with a set of additional parameters, as defined by the XML Schema described in Appendix C). The scope of a Web site – the specific set of resources considered as belonging to it — can therefore be identified or expressed by giving a clear, repeatable and unambiguous procedure for deciding whether any arbitrary URI does or does not belong to the site.

According to the needs of different applications of UWEM, scope may be specified by a variety of different participants in the evaluation process – such as a site owner, a site operator, an inspection organisation, etc. This document does not offer advice on deciding upon the appropriate scope for any particular UWEM application: it only explains how such a scope should be unambiguously expressed.

The scope of a Web site should be expressed in the form of an ordered list of resource patterns, termed a Scope Pattern List. Each pattern is designated as either an include pattern or an exclude pattern. Each pattern is expressed using an XML Schema regular expression [XMLSCHEMA2]. The complete Scope Pattern List should be expressed using the XML Schema described in Appendix C. The scope status of any arbitrary URI is then determined by testing it against each pattern in turn until a match is found. If the match is to an include pattern, then the identified resource is inside the scope of the site. If the match is to an exclude pattern, or if no match is found, then the identified resource is outside the scope of the site.
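The decision procedure can be illustrated with a minimal sketch. The sketch below uses Python regular expressions via re.fullmatch as a stand-in for the implicitly anchored XML Schema regular expressions, and plain tuples instead of the XML format of Appendix C; the site URIs are hypothetical.

```python
import re

# An ordered Scope Pattern List: ("include" | "exclude", regular expression).
# Patterns are tested in order; the first match decides. re.fullmatch mimics
# the implicit anchoring of XML Schema regular expressions.
scope_pattern_list = [
    ("exclude", r"http://www\.example\.org/intranet/.*"),  # hypothetical site
    ("include", r"http://www\.example\.org/.*"),
]

def in_scope(uri, patterns=scope_pattern_list):
    """Return True if the URI lies inside the scope of the Web site."""
    for kind, pattern in patterns:
        if re.fullmatch(pattern, uri):
            return kind == "include"
    return False  # no match: the resource is outside the scope

# The intranet URI is excluded even though the broader include pattern would
# also match, because the exclude pattern comes first in the ordered list.
assert in_scope("http://www.example.org/news/index.html")
assert not in_scope("http://www.example.org/intranet/phonebook.html")
```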

4.2 Procedure to catalogue the Complete Resource Set of a Web site

Section 4.1 explains how the scope of a Web site should be identified, i.e., how to express a rule for deciding whether a given resource is inside or outside the scope of a site. However, this does not, in itself, identify any particular resources which indeed lie inside the scope of the Web site. Such an explicit set of particular resources is a pre-requisite for any evaluation of a Web site. The complete or exhaustive set of resources belonging to a site is called the “Complete Resource Set”.

The Complete Resource Set is normally made explicit, or catalogued, by identifying a set of one or more “seed” resources (the “Seed Resource Set”) and recursively “crawling” from there, accepting all linked resources which qualify as in scope per section 4.1. That is, starting with the Seed Resource Set, each resource is retrieved and analysed to identify any links (URIs) to other resources. Each of these URIs is assessed to see whether it is in scope. If so, the corresponding resource is also retrieved and the process is repeated. In principle this procedure should be followed exhaustively until no additional in-scope URIs can be identified.

Automated crawling is, in general, subject to limitations, for example with regard to resources accessed via form submission (typical of sites offering interactive services). This may be addressed, in certain cases, by providing additional parameters to support crawling, such as form input data, HTTP accept headers, etc., as described in Appendix C. However, in all cases, it will be important to ensure that any resources which might otherwise not be located by automated crawling are included as explicit elements of the Seed Resource Set. That is, it is a requirement on the choice of the Seed Resource Set that it be sufficiently extensive that recursive crawling from this set will generate the intended Complete Resource Set.

In general, the outcome of cataloguing the Complete Resource Set will depend not only on the Scope Pattern List and the Seed Resource Set, but also upon the specific, dynamic, state and configuration of the Web site when the crawling is carried out. Accordingly, any catalogue of the Complete Resource Set should be explicitly linked to the relevant Scope Pattern List and Seed Resource Set, and also be timestamped to show when it was created.
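A minimal crawling sketch is shown below. It assumes the in_scope() decision procedure sketched in section 4.1 and uses the lxml library for link extraction; form submission, robots.txt handling, rate limiting and the additional crawling parameters of Appendix C are deliberately omitted.

```python
import datetime
import urllib.request
from urllib.parse import urldefrag
from lxml import html

def catalogue_complete_resource_set(seed_resource_set, in_scope):
    """Recursively crawl from the Seed Resource Set, keeping in-scope URIs only (sketch)."""
    to_visit = list(seed_resource_set)
    catalogued = set()
    while to_visit:
        uri = urldefrag(to_visit.pop())[0]            # drop fragment identifiers
        if uri in catalogued or not in_scope(uri):
            continue
        catalogued.add(uri)
        try:
            response = urllib.request.urlopen(uri, timeout=10)
            content_type = response.headers.get_content_type()
            body = response.read()
        except OSError:
            continue                                   # unreachable resources are skipped
        if content_type not in ("text/html", "application/xhtml+xml"):
            continue                                   # only (X)HTML is parsed for further links
        doc = html.fromstring(body)
        doc.make_links_absolute(uri)
        for _element, _attribute, link, _pos in doc.iterlinks():
            to_visit.append(link)
    # The catalogue is linked to its inputs and timestamped, as required above.
    return {"resources": sorted(catalogued),
            "seed_resource_set": list(seed_resource_set),
            "created": datetime.datetime.utcnow().isoformat() + "Z"}
```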

4.3 Procedure to generate Evaluation Samples

In general it will not be practical to test all site resources (all elements of the Complete Resource Set) against all evaluation criteria. Accordingly, we identify here certain subsets or “samples”, of the Complete Resource Set.

4.3.1 The Core Resource Set

The Core Resource Set is a set of generic resources, or resource types, which are likely to be present in most Web sites, and which are core to the use and accessibility evaluation of a site. The Core Resource Set therefore represents a minimal set of resources which should be included in any accessibility evaluation of the site. The Core Resource Set cannot, in general, be automatically identified, but requires human judgement to select. The Core Resource Set should consist of as many of the following resources as are applicable and which lie within the defined scope of the site, per section 4.1:

Note that, of course, there may be some overlap in the resources identified under the different points above: any given resource should appear only once in the Core Resource Set.

4.3.2 Sampled Resource Set(s)

A Sampled Resource Set is a resource set which is generated in a similar manner to the Complete Resource Set – by automated recursive crawling from a set of “seed” resources (either the standard Seed Resource Set, or some subset of it) – but where the crawling has been subject to certain pre-determined limits or constraints. A Sampled Resource Set would typically be used in the context of evaluations carried out over large numbers of sites (against automatic criteria only), where it is not feasible or necessary to evaluate the Complete Resource Set for each site. The basis for delimiting a Sampled Resource Set will be application specific, and should be explicitly disclosed in any evaluation report, but may, in general, include the following possible mechanisms, individually or in combinations:

4.3.3 Relationship between Resource Sets

The following figure illustrates a typical relationship between each of the Resource Sets discussed above. It shows that a Web site (the Complete Resource Set) is some distinguished subset of the entire Web (all existing Web resources). Sampled Resource Sets are more or less arbitrary subsets of the Complete Resource Set; so they are arbitrary subsets of the resources constituting a site. The Seed Resource Set is then a smaller subset, used to seed the crawling or cataloguing of the Complete Resource Set (and also, at least in part, any Sampled Resource Sets). Finally, the Core Resource Set is the smallest of all, being a subset of the Seed Resource Set, containing just that minimal set of resources which should be included in all evaluations.

Figure 2: Relationship between Resource Sets. See text for details.

5 Evaluation guidelines and checklists

5.1 Introduction

This section covers the UWEM 0.5 testing of Priority 1 checkpoints of WCAG 1.0 [WCAG10]. Further versions of UWEM might cover additional checkpoints. This interpretation refers in particular to (X)HTML and CSS tests that MUST be carried out to be compliant with this version of UWEM.

UWEM does not impose a way to carry out the tests, and some of the following tests could be made with the help of automatic or semi-automatic tools. This section is intended to be replaced in the future by the success criteria of WCAG 2.0 [WCAG20] and the accompanying techniques documents once it becomes a W3C Recommendation.

This section contains quotes of W3C recommendations and notes (in particular, [WCAG10] and [WCAG10-TECHS]). The quoted text is in each case immediately followed by a URI linking to the quoted text in its original context in the W3C/WAI documents. These materials are copyright of W3C and their use is subject to the conditions of the W3C Document License (see Appendix F or http://www.w3.org/Consortium/Legal/2002/copyright-documents-20021231 for further information on the use of these documents).

The structure of the tests is the following:

  1. Guideline 

    Quotation of the corresponding WCAG 1.0 guideline. Pointers to additional clarifications might be added.

    • Checkpoint 

      Quotation of the corresponding WCAG 1.0 checkpoint. Pointers to additional clarifications might be added.

      1. Summary overview 

        Tabular overview of the tests corresponding to the checkpoint.

      2. (X)HTML-specific tests 

        Set of tests to be made for conformance claims for (X)HTML resources. Each test consists of:

        • Title and ID: short descriptive title (informative) and unique identifier (normative).

        • Applicability criteria: elements, attributes and combinations thereof used to determine the applicability of the test. Whenever possible, the criteria will be presented as XPath expressions, otherwise a prose description will be given.

        • Test procedure: description, in a tool-independent manner, of the test procedure. The procedure shall not be written so as to exclude possible machine testing.

        • Confidence level: micro-confidence level for the individual test. Do not confuse with the confidence level of the conformance claim.

        • (Optional) User testing procedures: set of recommendations that can complement the above procedures when performing user testing.

      3. CSS-specific tests 

        Set of tests to be made for conformance claims for CSS resources. Each test consists of:

        • Title and ID: short descriptive title (informative) and unique identifier (normative).

        • Applicability criteria: CSS selectors, properties and combinations thereof used to determine the applicability of the test.

        • Test procedure: description, in a tool-independent manner, of the test procedure. The procedure shall not be written so as to exclude possible machine testing.

        • Confidence level: micro-confidence level for the individual test. Do not confuse with the confidence level of the conformance claim.

        • (Optional) User testing procedures: set of recommendations that can complement the above procedures when performing user testing.

  2. (Optional) Additional clarification issues, such as definition pointers.

This section will not repeat information available in W3C documents. It provides pointers to the relevant places and extends the information only where necessary for the defined tests.

5.2 Guideline 1

“Provide equivalent alternatives to auditory and visual content.”

(See http://www.w3.org/TR/WCAG10/#gl-provide-equivalents)

This guideline provides information on how to supply complementary text alternatives for auditory and visual content.

5.2.1 Checkpoint 1.1

Provide a text equivalent for every non-text element (e.g., via "alt", "longdesc", or in element content). This includes: images, graphical representations of text (including symbols), image map regions, animations (e.g., animated GIFs), applets and programmatic objects, art, frames, scripts, images used as list bullets, spacers, graphical buttons, sounds (played with or without user interaction), stand-alone audio files, audio tracks of video, and video. [Priority 1]

(See http://www.w3.org/TR/WCAG10/#tech-text-equivalent and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-text-equivalent)

5.2.1.1 Summary

Table 1: UWEM 0.5 tests for checkpoint 1.1.

| Format | Test ID | Elements / Attributes / Selectors | Inspection Procedure | Conf. Level |
|--------|---------|-----------------------------------|----------------------|-------------|
| HTML | 1.1_HTML_01 | img/@alt, area/@alt, input[@type='image']/@alt, applet/@alt, object/* | Select non-text-elements without text alternative. | High |
| HTML | 1.1_HTML_02 | img/@alt, area/@alt, input[@type='image']/@alt, applet/@alt, object/* | Select non-text-elements with empty text alternative. Decide whether non-text-element is purely decorative. | High |
| HTML | 1.1_HTML_03 | img/@alt, area/@alt, input[@type='image']/@alt, applet/@alt, object/* | Select non-text-elements with text alternative containing only whitespace. Decide whether non-text-element represents whitespace. | High |
| HTML | 1.1_HTML_04 | img/@alt, area/@alt, input[@type='image']/@alt, applet/@alt, object/* | Select non-text-elements with non-empty, non-whitespace-only text alternative. Decide whether text alternative represents the non-text-element's function within the context. | High |
| HTML | 1.1_HTML_05 | img/@longdesc, object//a/@href | Select long description document referenced by the non-text-element. Decide whether the non-text-element is described by text in the document. | High |
| CSS | N/A | | | |

5.2.1.2 (X)HTML tests

5.2.1.2.1 Test 1.1_HTML_01

This test is targeted to find media elements without a text alternative.
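As an illustration of how such a test might be automated, the sketch below applies the applicability criteria of Table 1 with the lxml library. It is not a normative implementation of test 1.1_HTML_01; in particular, the handling of object elements is simplified to those without any child content.

```python
from lxml import html

# Applicability criteria of test 1.1_HTML_01 expressed as XPath: non-text
# elements carrying no text alternative at all (simplified sketch).
MISSING_ALT = (
    "//img[not(@alt)] | //area[not(@alt)] | "
    "//input[@type='image'][not(@alt)] | //applet[not(@alt)] | "
    "//object[not(node())]"
)

def test_1_1_html_01(page_source):
    """Return the non-text elements without a text alternative (sketch only)."""
    doc = html.fromstring(page_source)
    return [html.tostring(el, encoding="unicode") for el in doc.xpath(MISSING_ALT)]

# Example: the first img fails the test, the second one carries an alt text.
print(test_1_1_html_01('<p><img src="a.png"><img src="b.png" alt="Logo"></p>'))
```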

5.2.1.2.2 Test 1.1_HTML_02

This test is targeted to analyse non-text elements with an empty text alternative.

5.2.1.2.3 Test 1.1_HTML_03

This test is targeted to analyse non-text elements with only white space as a text alternative.

5.2.1.2.4 Test 1.1_HTML_04

This test is targeted to analyse non-text elements with non-whitespace-only text alternative.

5.2.1.2.5 Test 1.1_HTML_05

This test is targeted to analyse long descriptions of media elements.

5.2.1.3 CSS tests

For this checkpoint there are no applicable tests.

5.2.2 Checkpoint 1.2

Provide redundant text links for each active region of a server-side image map. [Priority 1]

(See http://www.w3.org/TR/WCAG10/#tech-redundant-server-links and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-redundant-server-links)

5.2.2.1 Summary

Table 2: UWEM 0.5 tests for checkpoint 1.2.

| Format | Test ID | Elements / Attributes / Selectors | Inspection Procedure | Conf. Level |
|--------|---------|-----------------------------------|----------------------|-------------|
| HTML | 1.2_HTML_01 | img/@ismap, object[@type='image']/@ismap | Select active region without redundant text link. | High |
| CSS | N/A | | | |

5.2.2.2 (X)HTML tests

5.2.2.2.1 Test 1.2_HTML_01

This test is targeted to find active regions of a server-side image map without redundant text link.

5.2.2.3 CSS tests

For this checkpoint there are no applicable tests.

5.2.3 Checkpoint 1.3

Until user agents can automatically read aloud the text equivalent of a visual track, provide an auditory description of the important information of the visual track of a multimedia presentation. [Priority 1]

(See http://www.w3.org/TR/WCAG10/#tech-auditory-descriptions and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-auditory-descriptions)

5.2.3.1 Summary

Table 3: UWEM 0.5 tests for checkpoint 1.3.

| Format | Test ID | Elements / Attributes / Selectors | Inspection Procedure | Conf. Level |
|--------|---------|-----------------------------------|----------------------|-------------|
| HTML | 1.3_HTML_01 | object, applet, a | Select multimedia presentations without auditory description of the important information of the visual track. | High |
| CSS | N/A | | | |

5.2.3.2 (X)HTML tests

5.2.3.2.1 Test 1.3_HTML_01

This test is targeted to find multimedia presentations without an auditory description of the important information of their visual track.

5.2.3.3 CSS tests

For this checkpoint there are no applicable tests.

5.3 Guideline 2

“Don't rely on color alone.”

(See http://www.w3.org/TR/WCAG10/#gl-color)

This guideline provides information on how to use color appropriately.

5.3.1 Checkpoint 2.1

Ensure that all information conveyed with color is also available without color, for example from context or markup. [Priority 1]

(See http://www.w3.org/TR/WCAG10/#tech-color-convey and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-color-convey)

5.3.1.1 Summary

Table 4: UWEM 0.5 tests for checkpoint 2.1.

| Format | Test ID | Elements / Attributes / Selectors | Inspection Procedure | Conf. Level |
|--------|---------|-----------------------------------|----------------------|-------------|
| HTML | 2.1_HTML_01 | body | Decide whether references in text via color are redundant. | High |
| HTML | 2.1_HTML_02 | img, area, input[@type='image'], applet, object | Select non-text-elements. Decide whether color information is redundant. | High |
| HTML | 2.1_HTML_03 | */@color, */@bgcolor, */@link, */@vlink, */@alink, */@text | Select elements with attributes defining colors. Decide whether color information is redundant. | |
| CSS | 2.1_CSS_01 | color, background-color, background, border-color, border, outline-color, outline | Select elements for which colors are defined. Decide whether color information is redundant. | |

5.3.1.2 (X)HTML tests

5.3.1.2.1 Test 2.1_HTML_01

This test is targeted to find phrases in text that refer to parts of a document only by mentioning their color.

5.3.1.2.2 Test 2.1_HTML_02

This test is targeted to find phrases in non-text resources that refer to parts of a document only by mentioning their color.

5.3.1.2.3 Test 2.1_HTML_03

This test is targeted to find colored elements without redundant methods of conveying the information.

5.3.1.3 CSS tests

5.3.1.3.1 Test 2.1_CSS_01

This test is targeted to find colored elements without redundant methods of conveying the information.

5.3.2 Checkpoint 2.2

Ensure that foreground and background color combinations provide sufficient contrast when viewed by someone having color deficits or when viewed on a black and white screen. [Priority 2 for images, Priority 3 for text].

(See http://www.w3.org/TR/WCAG10/#tech-color-contrast and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-color-contrast)

5.3.2.1 Summary

Table 5: UWEM 0.5 tests for checkpoint 2.2.

| Format | Test ID | Elements / Attributes / Selectors | Inspection Procedure | Conf. Level |
|--------|---------|-----------------------------------|----------------------|-------------|
| HTML | 2.2_HTML_01 | img, area, input[@type='image'], applet, object | Select non-text-elements. Decide whether contrast in non-text-element is high enough to convey the information. | Low |
| HTML | 2.2_HTML_02 | */@color, */@bgcolor | Select elements with attributes defining colors. Check contrast. | Low |
| HTML | 2.2_HTML_03 | */@link, */@vlink, */@alink, */@text | Select elements with attributes defining colors. Check contrast. | Low |
| CSS | 2.2_CSS_01 | color, background-color | Select elements for which color OR background-color are defined. Check whether both color and background-color are defined. | High |
| CSS | 2.2_CSS_02 | color, background-color | Select elements for which color and/or background-color are defined. Check color contrast. | Low |
| CSS | 2.2_CSS_03 | :link color, :link background-color, :link background, :visited color, :visited background-color, :visited background, :hover color, :hover background-color, :hover background, :active color, :active background-color, :active background | Select pseudo classes for links where color and/or background color are defined. Check color contrast. | Low |

5.3.2.2 (X)HTML tests

5.3.2.2.1 Test 2.2_HTML_01

This test is targeted to find media elements without enough color contrast.

5.3.2.2.2 Test 2.2_HTML_02

This test is targeted to find text without enough color contrast.

5.3.2.2.3 Test 2.2_HTML_03

This test is targeted to find text without enough color contrast.

5.3.2.3 CSS tests

5.3.2.3.1 Test 2.2_CSS_01

This test is targeted to find text without enough color contrast.

5.3.2.3.2 Test 2.2_CSS_02

This test is targeted to find text without enough color contrast.

5.3.2.3.3 Test 2.2_CSS_03

This test is targeted to find text without enough color contrast.
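The checkpoint itself does not prescribe a contrast algorithm, and most of the tests above carry low confidence. One heuristic that evaluation tools commonly use is the colour-brightness and colour-difference rule of the W3C AERT working draft; the sketch below applies it to a pair of hex colours and is offered as an illustration, not a normative UWEM threshold.

```python
def _rgb(hex_colour):
    """'#rrggbb' -> (r, g, b) integers in the range 0-255."""
    h = hex_colour.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def sufficient_contrast(foreground, background):
    """AERT heuristic: brightness difference > 125 and colour difference > 500."""
    fr, fg, fb = _rgb(foreground)
    br, bg, bb = _rgb(background)
    brightness_f = (fr * 299 + fg * 587 + fb * 114) / 1000
    brightness_b = (br * 299 + bg * 587 + bb * 114) / 1000
    colour_difference = abs(fr - br) + abs(fg - bg) + abs(fb - bb)
    return abs(brightness_f - brightness_b) > 125 and colour_difference > 500

print(sufficient_contrast("#000000", "#ffffff"))  # True: black on white
print(sufficient_contrast("#777777", "#888888"))  # False: grey on grey
```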

5.4 Guideline 4

“Clarify natural language usage.”

(See http://www.w3.org/TR/WCAG10/#gl-abbreviated-and-foreign)

This guideline provides information on how to facilitate pronunciation or interpretation of abbreviated or foreign text.

5.4.1 Checkpoint 4.1

Clearly identify changes in the natural language of a document's text and any  text equivalents (e.g., captions). [Priority 1]

(See http://www.w3.org/TR/WCAG10/#tech-identify-changes and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-identify-changes)

5.4.1.1 Summary

Table 6: UWEM 0.5 tests for checkpoint 4.1.

| Format | Test ID | Elements / Attributes / Selectors | Inspection Procedure | Conf. Level |
|--------|---------|-----------------------------------|----------------------|-------------|
| HTML | 4.1_HTML_01 | text(), img/@alt, applet/@alt, area/@alt, input/@alt, meta/@content, option/@label, optgroup/@label, object/@standby, table/@summary, */@title (all elements except base, basefont, head, html, meta, param, script and title), input[@type='text']/@value, input[@type='submit']/@value, frame/@name, iframe/@name | For each word, attribute value, text node and element, check if the language corresponds with: (a) the specified language of its nearest ancestor; (b) the predominant language if no ancestor specifies language. | High |
| HTML | 4.1_HTML_02 | text(), img/@alt, applet/@alt, area/@alt, input/@alt, meta/@content, option/@label, optgroup/@label, object/@standby, table/@summary, */@title (all elements except base, basefont, head, html, meta, param, script and title), input[@type='text']/@value, input[@type='submit']/@value, frame/@name, iframe/@name | For each word, attribute value, text node and element, check if the text direction corresponds with: (a) the specified text direction of its nearest ancestor; (b) the predominant text direction if no ancestor specifies text direction. | High |
| CSS | 4.1_CSS_01 | *:after {content: “...”;}, *:before {content: “...”;}, * { content: “...”}, * { cue: url(“...”);}, * { cue-before: url(“...”);}, * { cue-after: url(“...”);}, * { list-style-type: ...;}, * { list-style: ...;} | Select any elements which have associated CSS rules that generate content. Check if the generated content is in a different language than the context. | |

5.4.1.2 (X)HTML tests

5.4.1.2.1 Test 4.1_HTML_01

This test is targeted to find changes in natural language that are not marked up. Marking up natural language changes can make a document more accessible to multilingual users.
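A fully automatic check is not possible for this test, but a tool can flag candidate text whose detected language differs from the declared one, for a human evaluator to confirm. The sketch below assumes the third-party langdetect package in addition to lxml; language detection on short strings is unreliable, so the output is a list of hints rather than pass/fail results.

```python
from langdetect import detect  # third-party package, assumed for this sketch
from lxml import html

def language_change_candidates(page_source, min_length=40):
    """Flag text whose detected language differs from the nearest declared lang (sketch)."""
    doc = html.fromstring(page_source)
    candidates = []
    for el in doc.iter():
        if not isinstance(el.tag, str) or el.tag in ("script", "style"):
            continue                       # skip comments, scripts and stylesheets
        text = (el.text or "").strip()
        if len(text) < min_length:
            continue                       # too short for reliable detection
        declared = el.xpath("string(ancestor-or-self::*[@lang][1]/@lang)") or None
        try:
            detected = detect(text)
        except Exception:
            continue                       # detection failed, skip this node
        if declared is None or not declared.lower().startswith(detected):
            candidates.append((el.tag, declared, detected, text[:60]))
    return candidates
```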

5.4.1.2.2 Test 4.1_HTML_02

This test is targeted to find changes in text direction (for natural languages) that are not marked up.

5.4.1.3 CSS tests

5.4.1.3.1 Test 4.1_CSS_01

This test is targeted to find CSS-generated content that is in a different natural language than its context.

5.5 Guideline 5

“Create tables that transform gracefully.”

(See http://www.w3.org/TR/WCAG10/#gl-table-markup)

This guideline provides information on how to create properly marked up tables.

5.5.1 Checkpoint 5.1

For data tables, identify row and column headers. [Priority 1]

(See http://www.w3.org/TR/WCAG10/#tech-table-headers and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-table-headers)

Table 7: UWEM 0.5 tests for checkpoint 5.1.

| Format | Test ID | Elements / Attributes / Selectors | Inspection Procedure | Conf. Level |
|--------|---------|-----------------------------------|----------------------|-------------|
| HTML | 5.1_HTML_01 | th/@scope, th/@id, td/@headers | Verify that the scope attribute for the table heading is set. Test passes if it is set. Otherwise, continue with verifying that the id attribute in the table header exists, that the headers attribute in the table definition exists, and that the table definition parameters match the table header IDs. | High |
| HTML | 5.1_HTML_02 | th/@axis, td/@axis | If the axis attribute is used in tables, check that it is used consistently. | Medium |
| HTML | 5.1_HTML_03 | table/colgroup, table/thead, table/tfoot, table/tbody | Verify that colgroup, if present, comes first, and that tfoot comes before tbody. Test 5.1 fails if colgroup, thead, tfoot and tbody are used inconsistently. | High |
| HTML | 5.1_HTML_04 | pre | Inspect the preformatted text to see if it could have been better presented as a proper table. | Medium |
| CSS | N/A | | | |

5.5.1.1 (X)HTML tests

5.5.1.1.1 Test 5.1_HTML_01

This test is targeted to verify that table cells carry a scope attribute, or id and headers attributes, that can aid assistive technology in presenting the information in the table properly.
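A sketch of how a tool might support this inspection with the lxml library is shown below; the pass/fail decision for complex tables remains a human judgement, so the function reports findings rather than a verdict.

```python
from lxml import html

def check_5_1_html_01(page_source):
    """Report header markup problems in data tables, along the lines of test 5.1_HTML_01 (sketch)."""
    doc = html.fromstring(page_source)
    findings = []
    for table in doc.xpath("//table"):
        header_ids = set(table.xpath(".//th/@id"))
        for th in table.xpath(".//th[not(@scope) and not(@id)]"):
            findings.append(("th without scope or id", th.text_content().strip()))
        for td in table.xpath(".//td[@headers]"):
            unknown = [ref for ref in td.get("headers").split() if ref not in header_ids]
            if unknown:
                findings.append(("headers attribute references unknown ids", unknown))
    return findings
```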

5.5.1.1.2 Test 5.1_HTML_02

This test is targeted to find table elements with an axis attribute, which can aid assistive technology in presenting type-specific information in the table columns properly.

5.5.1.1.3 Test 5.1_HTML_03

This test is targeted to aid assistive technology in identifying information about repeated table headers, footers and table row and column grouping.

5.5.1.1.4 Test 5.1_HTML_04

This test is targeted to identify preformatted text used instead of proper tables.

5.5.2 Checkpoint 5.2

For data tables that have two or more logical levels of row or column headers, use markup to associate data cells and header cells. [Priority 1]

(See http://www.w3.org/TR/WCAG10/#tech-table-structure and the techniques in http://www.w3.org/TR/WCAG10-HTML-TECHS/#identifying-table-rows-columns

| Format | Test ID | Elements / Attributes / Selectors | Inspection Procedure | Conf. Level |
|--------|---------|-----------------------------------|----------------------|-------------|
| HTML | 5.2_HTML_01 | table | If the table contains more than two logical levels of row or column headers, verify that the table marks up the rows and columns properly by using test 5.1. If this test fails, or returns CannotTell/NotApplicable, then test 5.2 fails. | High |
| CSS | N/A | | | |

5.5.2.0.1 Test 5.2_HTML_01

This test is targeted to identify tables with more than two logical levels of rows or columns that are not marked up properly with markup associating data cells and header cells, which can aid assistive technologies in presenting the tables properly.

5.6 Guideline 6

“Ensure that pages featuring new technologies transform gracefully.”

(See http://www.w3.org/TR/WCAG10/#gl-new-technologies)

This guideline provides information on ensuring that pages are accessible even when newer technologies are not supported or are turned off.

5.6.1 Checkpoint 6.1

Organize documents so they may be read without style sheets. For example, when an HTML document is rendered without associated style sheets, it must still be possible to read the document. [Priority 1]

(See http://www.w3.org/TR/WCAG10/#tech-order-style-sheets and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-order-style-sheets)

5.6.1.1 Summary

Table 8: UWEM 0.5 tests for checkpoint 6.1.

| Format | Test ID | Elements / Attributes / Selectors | Inspection Procedure | Conf. Level |
|--------|---------|-----------------------------------|----------------------|-------------|
| HTML | 6.1_HTML_01 | link[@rel='stylesheet'], style, */@style | Deactivate all CSS applied to document. Ensure that: (1) content can be read; (2) content does not become invisible; (3) content is not obscured by other content; (4) the intended reading order is maintained. | High |
| HTML | 6.1_HTML_02 | script, */@onfocus, */@onblur, */@onkeypress, */@onkeydown, */@onkeyup, */@onsubmit, */@onreset, */@onselect, */@onchange (*/@onload, */@onunload, */@onclick, */@ondblclick, */@onmousedown, */@onmouseup, */@onmouseover, */@onmousemove, */@onmouseout) | Deactivate all CSS applied to document by script. Ensure that: (1) content can be read; (2) content does not become invisible; (3) content is not obscured by other content; (4) the intended reading order is maintained. | High |
| CSS | N/A | | | |

5.6.1.2 (X)HTML tests

5.6.1.2.1 Test 6.1_HTML_01

This test analyses the effect of CSS applied in standalone stylesheets, embedded stylesheets and style attributes of document elements on the readability of the document.
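As an aid to this inspection, a tool can produce a style-free copy of the page for the evaluator to read. A minimal sketch with the lxml library is shown below; the readability judgement itself remains manual.

```python
from lxml import html

def strip_css(page_source):
    """Return the page with stylesheet links, style elements and style attributes removed (sketch)."""
    doc = html.fromstring(page_source)
    for el in doc.xpath("//link[@rel='stylesheet'] | //style"):
        el.drop_tree()                    # remove external and embedded stylesheets
    for el in doc.iter():
        if isinstance(el.tag, str):
            el.attrib.pop("style", None)  # remove inline style attributes
    return html.tostring(doc, encoding="unicode")
```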

5.6.1.2.2 Test 6.1_HTML_02

This test analyses the effect of programmatically applied CSS on the readability of the document.

5.6.1.3 CSS tests

For this checkpoint there are no applicable tests.

5.6.2 Checkpoint 6.2

Ensure that equivalents for dynamic content are updated when the dynamic content changes. [Priority 1]

(See http://www.w3.org/TR/WCAG10/#tech-dynamic-source and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-dynamic-source)

5.6.2.1 Summary

Table 9: UWEM 0.5 tests for checkpoint 6.2.

| Format | Test ID | Elements / Attributes / Selectors | Inspection Procedure | Conf. Level |
|--------|---------|-----------------------------------|----------------------|-------------|
| HTML | 6.2_HTML_01 | frame/@src | Check that the content pointed to by the src attribute complies with checkpoint 1.1. | High |
| HTML | 6.2_HTML_02 | frame, a/@href, script, */@onfocus, */@onblur, */@onkeypress, */@onkeydown, */@onkeyup, */@onsubmit, */@onreset, */@onselect, */@onchange (*/@onload, */@onunload, */@onclick, */@ondblclick, */@onmousedown, */@onmouseup, */@onmouseover, */@onmousemove, */@onmouseout) | Search for links, script or other elements in the delivery unit that cause other content to be loaded into the frame. Ensure that content loaded into the frame complies with checkpoint 1.1. | High |
| CSS | N/A | | | |

5.6.2.2 (X)HTML tests

5.6.2.2.1 Test 6.2_HTML_01

This test analyses the text equivalent of any non-text content loaded into the frame by the browser as a result of the value of the src attribute.

5.6.2.2.2 Test 6.2_HTML_02

This test analyses the text equivalent of any non-text content loaded into the frame by the browser as a result of link activation or script execution.

5.6.2.3 CSS tests

For this checkpoint there are no applicable tests.

5.6.3 Checkpoint 6.3

Ensure that pages are usable when scripts, applets, or other programmatic objects are turned off or not supported. If this is not possible, provide equivalent information on an alternative accessible page. [Priority 1]

(See http://www.w3.org/TR/WCAG10/#tech-scripts and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-scripts)

5.6.3.1 Summary

Table 10: UWEM 0.5 tests for checkpoint 6.3.

| Format | Test ID | Elements / Attributes / Selectors | Inspection Procedure | Conf. Level |
|--------|---------|-----------------------------------|----------------------|-------------|
| HTML | 6.3_HTML_01 | object, applet, embed | Ensure that all information and functionality provided by the embedded objects is available when these are not loaded or do not function as intended. | High |
| HTML | 6.3_HTML_02 | script, a[starts-with(@href, 'javascript:')], */@onfocus, */@onblur, */@onkeypress, */@onkeydown, */@onkeyup, */@onsubmit, */@onreset, */@onselect, */@onchange (*/@onload, */@onunload, */@onclick, */@ondblclick, */@onmousedown, */@onmouseup, */@onmouseover, */@onmousemove, */@onmouseout) | Ensure that all information and functionality provided by the script is available when this is not executed. | High |
| CSS | N/A | | | |

5.6.3.2 (X)HTML tests

5.6.3.2.1 Test 6.3_HTML_01

This test determines whether information and functionality provided by embedded content is also available without said content.

5.6.3.2.2 Test 6.3_HTML_02

This test determines whether information and functionality provided by script is also available when script is not executed.

5.6.3.3 CSS tests

For this checkpoint there are no applicable tests.

5.7 Guideline 7

“Ensure user control of time-sensitive content changes.”

(See http://www.w3.org/TR/WAI-WEBCONTENT/#gl-movement)

This guideline provides information on moving, blinking, scrolling, or auto-updating objects or pages, which make it difficult, sometimes even impossible, to read or access content.

5.7.1 Checkpoint 7.1

Until user agents allow users to control flickering, avoid causing the screen to flicker. [Priority 1]

Note. People with photosensitive epilepsy can have seizures triggered by flickering or flashing in the 4 to 59 flashes per second (Hertz) range with a peak sensitivity at 20 flashes per second as well as quick changes from dark to light (like strobe lights).

(See http://www.w3.org/TR/WCAG10/#tech-avoid-flicker and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-avoid-flicker)

5.7.1.1 Summary

Table 11: UWEM 0.5 tests for checkpoint 7.1.

| Format | Test ID | Elements / Attributes / Selectors | Inspection Procedure | Conf. Level |
|--------|---------|-----------------------------------|----------------------|-------------|
| HTML | 7.1_HTML_01 | marquee | Select any marquee elements. Check scroll amount, scroll delay and font size. | High |
| HTML | 7.1_HTML_02 | img, object[@type='image/gif'] | Select any animated gif files. Check playback speed of frames and colour contrast between subsequent frames. | High |
| HTML | 7.1_HTML_03 | script, */@onfocus, */@onblur, */@onkeypress, */@onkeydown, */@onkeyup, */@onsubmit, */@onreset, */@onselect, */@onchange (*/@onload, */@onunload, */@onclick, */@ondblclick, */@onmousedown, */@onmouseup, */@onmouseover, */@onmousemove, */@onmouseout) | Select any client-side scripts. Check if they cause flicker or flashing at a rate between 4 and 59 Hertz. | High |
| HTML | 7.1_HTML_04 | object[@codetype='application/java'], object[@codetype='application/java-archive'], object[starts-with(@codetype, 'application/x-java-applet')], applet; any content sent by HTTP with MIME types 'application/java', 'application/java-archive', 'application/x-java-applet' | Select any Java applets. Check if they cause flicker or flashing at a rate between 4 and 59 Hertz. | High |
| HTML | 7.1_HTML_05 | object[starts-with(@type, 'video/')], embed[starts-with(@type, 'video/')], object//a/@href | Select any video content. Check if it causes flicker or flashing at a rate between 4 and 59 Hertz. | High |
| CSS | 7.1_CSS_01 | *:after {content: url(...);}, *:before {content: url(...);} | Select any CSS-generated images, video and animations. Check if they cause flicker at a rate between 4 and 59 Hertz. | High |

5.7.1.2 (X)HTML tests

5.7.1.2.1 Test 7.1_HTML_01

This test is targeted to find marquee text that causes blinking. Marquee does not normally cause blinking, but certain combinations of scroll amount, scroll delay, font size and colour might cause parts of the screen to blink.

5.7.1.2.2 Test 7.1_HTML_02

This test is targeted to find animated gif files that cause flicker. (Other image file types for inclusion in HTML pages – JPEG and PNG – do not support animation.) 

5.7.1.2.3 Test 7.1_HTML_03

This test is targeted to find scripts that cause flicker or flashing.

5.7.1.2.4 Test 7.1_HTML_04

This test is targeted to find Java applets that cause flicker or flashing.

5.7.1.2.5 Test 7.1_HTML_05

This test is targeted to find any video content that causes flicker or flashing.

5.7.1.3 CSS tests

5.7.1.3.1 Test 7.1_CSS_01

This test is targeted to find CSS-generated content that causes flicker or flashing.

5.7.2 Checkpoint 7.2

Until user agents allow users to control blinking, avoid causing content to blink (i.e., change presentation at a regular rate, such as turning on and off). [Priority 2]

(See http://www.w3.org/TR/WCAG10/#tech-avoid-blinking and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-avoid-blinking)

5.7.2.1 Summary

Table 12: UWEM 0.5 tests for checkpoint 7.2.

| Format | Test ID | Elements / Attributes / Selectors | Inspection Procedure | Conf. Level |
|--------|---------|-----------------------------------|----------------------|-------------|
| HTML | 7.2_HTML_01 | blink | Select any blink elements. | High |
| HTML | 7.2_HTML_02 | img, object[@type='image/gif'] | Select any animated gif files. Check playback speed of frames and colour contrast between subsequent frames. | High |
| HTML | 7.2_HTML_03 | script, */@onfocus, */@onblur, */@onkeypress, */@onkeydown, */@onkeyup, */@onsubmit, */@onreset, */@onselect, */@onchange (*/@onload, */@onunload, */@onclick, */@ondblclick, */@onmousedown, */@onmouseup, */@onmouseover, */@onmousemove, */@onmouseout) | Select any client-side scripts. Check if they cause blinking. | High |
| HTML | 7.2_HTML_04 | object[@codetype='application/java'], object[@codetype='application/java-archive'], object[starts-with(@codetype, 'application/x-java-applet')], applet; any content sent by HTTP with MIME types 'application/java', 'application/java-archive', 'application/x-java-applet' | Select any Java applets. Check if they cause blinking. | High |
| HTML | 7.2_HTML_05 | object[starts-with(@type, 'video/')], embed[starts-with(@type, 'video/')]; any content sent by HTTP with a MIME type that starts with 'video/' | Select any video content. Check if it causes blinking. | High |
| CSS | 7.2_CSS_01 | *:after {content: url(...);}, *:before {content: url(...);} | Select any CSS-generated images, video and animations. Check if they cause blinking. | High |
| CSS | 7.2_CSS_02 | * { text-decoration: blink;} | Select any CSS rules that cause blinking. | High |

5.7.2.2 (X)HTML tests

5.7.2.2.1 Test 7.2_HTML_01

This test is targeted to find any blink elements.

5.7.2.2.2 Test 7.2_HTML_02

This test is targeted to find animated gif files that cause blinking. (Other image file types for inclusion in HTML pages – JPEG and PNG – do not support animation.) 

5.7.2.2.3 Test 7.2_HTML_03

This test is targeted to find scripts that cause blinking.

5.7.2.2.4 Test 7.2_HTML_04

This test is targeted to find Java applets that cause blinking.

5.7.2.2.5 Test 7.2_HTML_05

This test is targeted to find any video content that causes blinking.

5.7.2.3 CSS tests

5.7.2.3.1 Test 7.2_CSS_01

This test is targeted to find CSS-generated content that causes blinking.

5.7.2.3.2 Test 7.2_CSS_02

This test is targeted to find CSS rules that cause content to blink.

5.7.3 Checkpoint 7.3

Until user agents allow users to freeze moving content, avoid movement in pages. [Priority 2]

(See http://www.w3.org/TR/WCAG10/#gl-movement and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-avoid-movement)

5.7.3.1 Summary

Table 13: UWEM 0.5 tests for checkpoint 7.3.

| Format | Test ID | Elements / Attributes / Selectors | Inspection Procedure | Conf. Level |
|--------|---------|-----------------------------------|----------------------|-------------|
| HTML | 7.3_HTML_01 | img, applet, object | Decide whether the non-text elements contain moving content. | High |
| HTML | 7.3_HTML_02 | script | Decide whether the script causes movement. | High |
| HTML | 7.3_HTML_03 | */@onload, */@onunload, */@onclick, */@ondblclick, */@onmousedown, */@onmouseup, */@onmouseover, */@onmousemove, */@onmouseout, */@onfocus, */@onblur, */@onkeypress, */@onkeydown, */@onkeyup, */@onsubmit, */@onreset, */@onselect, */@onchange | Decide whether the event handlers call script functions which might cause movable elements. | High |
| HTML | 7.3_HTML_04 | marquee | Select marquee elements. | High |
| CSS | 7.3_CSS_01 | position, top, bottom, left, right, padding, padding-left, padding-right, padding-top, padding-bottom, margin, margin-left, margin-right, margin-top, margin-bottom, border-width, display, visibility, z-index | Select elements that can be (self)movable. | High |

5.7.3.2 (X)HTML tests

5.7.3.2.1 Test 7.3_HTML_01

This test is targeted to find moving images, applets or other objects that do not provide a mechanism to freeze motion.

5.7.3.2.2 Test 7.3_HTML_02

This test is targeted to find scripts that cause movement without providing a mechanism to freeze motion.

5.7.3.2.3 Test 7.3_HTML_03

This test is targeted to find event handlers that can cause moving content without providing a mechanism to freeze motion.

5.7.3.2.4 Test 7.3_HTML_04

This test is targeted to find marquee elements.

5.7.3.3 CSS tests

5.7.3.3.1 Test 7.3_CSS_01

This test is targeted to find CSS rules that can cause elements to move.

5.7.4 Checkpoint 7.4

Until user agents provide the ability to stop the refresh, do not create periodically auto-refreshing pages. [Priority 2]

(See http://www.w3.org/TR/WCAG10/#gl-movement and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-no-periodic-refresh)

5.7.4.1 Summary

Table 14: UWEM 0.5 tests for checkpoint 7.4.

| Format | Test ID | Elements / Attributes / Selectors | Inspection Procedure | Conf. Level |
|--------|---------|-----------------------------------|----------------------|-------------|
| HTML | 7.4_HTML_01 | meta[@http-equiv='refresh'] | Select elements that cause automatic refresh. | High |
| HTML | 7.4_HTML_02 | script, applet, object | Select programmatic elements that can cause automatic refresh. | High |
| CSS | N/A | | | |

5.7.4.2 (X)HTML tests

5.7.4.2.1 Test 7.4_HTML_01

This test is targeted to find elements that can cause page refreshing.
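A small sketch of how a tool might classify meta refresh elements is shown below: a bare time value indicates periodic refresh (this checkpoint), while a time value followed by a URL indicates an automatic redirect (checkpoint 7.5). It assumes the lxml library and is an illustration only.

```python
from lxml import html

def find_meta_refresh(page_source):
    """Classify meta http-equiv='refresh' elements for checkpoints 7.4 and 7.5 (sketch)."""
    doc = html.fromstring(page_source)
    findings = []
    for meta in doc.xpath("//meta[@http-equiv and @content]"):
        if meta.get("http-equiv").strip().lower() != "refresh":
            continue
        content = meta.get("content")
        if "url=" in content.lower():
            findings.append(("auto-redirect (checkpoint 7.5)", content))
        else:
            findings.append(("periodic refresh (checkpoint 7.4)", content))
    return findings

# Example: the first meta is an auto-redirect, the second a periodic refresh.
print(find_meta_refresh(
    '<head><meta http-equiv="refresh" content="0;url=http://www.example.org/">'
    '<meta http-equiv="refresh" content="300"></head>'))
```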

5.7.4.2.2 Test 7.4_HTML_02

This test is targeted to find programmatic elements that can cause automatic page refreshing.

5.7.4.3 CSS tests

For this checkpoint there are no applicable tests.

5.7.5 Checkpoint 7.5

Until user agents provide the ability to stop auto-redirect, do not use markup to redirect pages automatically. Instead, configure the server to perform redirects. [Priority 2]

(See http://www.w3.org/TR/WCAG10/#gl-movement and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-no-auto-forward)

5.7.5.1 Summary

Table 15: UWEM 0.5 tests for checkpoint 7.5.
Format Test ID Elements/
Attributes/
Selectors
Inspection Procedure Conf. Level
HTML 7.5_HTML_01 meta[@http-equiv='refresh']

Select elements that cause automatic redirecting

High
7.5_HTML_02 script, applet, object

Select programmatic elements that can cause automatic redirecting

High
CSS N/A

5.7.5.2 (X)HTML tests

5.7.5.2.1 Test 7.5_HTML_01

This test is targeted to find elements that can cause page redirecting.

5.7.5.2.2 Test 7.5_HTML_02

This test is targeted to find scripts that cause redirecting without providing a mechanism to stop it.
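
To illustrate how the meta[@http-equiv='refresh'] selector from Tables 14 and 15 could be applied automatically, the following minimal Python sketch (an illustration only, not part of UWEM) uses lxml and assumes that a "url=" part in the content attribute distinguishes a redirect candidate (checkpoint 7.5) from a plain refresh candidate (checkpoint 7.4):

```python
# Illustrative sketch only: the "url=" heuristic is an assumption of this
# sketch, not a UWEM rule.
from lxml import html

def find_meta_refresh_candidates(document_text):
    tree = html.fromstring(document_text)
    refresh, redirect = [], []
    for meta in tree.xpath("//meta[@http-equiv]"):
        if meta.get("http-equiv", "").strip().lower() != "refresh":
            continue
        content = (meta.get("content") or "").lower()
        # A value such as "5;url=http://example.com/" redirects to another page;
        # a bare number such as "60" only refreshes the current page.
        (redirect if "url=" in content else refresh).append(meta)
    return refresh, redirect
```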

5.7.5.3 CSS tests

For this checkpoint there are no applicable tests.

5.8 Guideline 8

“Ensure direct accessibility of embedded user interfaces.”

(See http://www.w3.org/TR/WCAG10/#gl-own-interface)

This guideline provides information on how to create accessible embedded user interfaces.

5.8.1 Checkpoint 8.1

Make programmatic elements such as scripts and applets directly accessible or compatible with assistive technologies [Priority 1 if functionality is important and not presented elsewhere, otherwise Priority 2.]

(See http://www.w3.org/TR/WCAG10/#tech-directly-accessible and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-directly-accessible)

5.8.1.1 Summary

Table 16: UWEM 0.5 tests for checkpoint 8.1.
Format Test ID Elements/
Attributes/
Selectors
Inspection Procedure Conf. Level
HTML 8.1_HTML_01 applet, object
  1. Is it possible to navigate to and from the embedded interface with a keyboard?

  2. Is it possible to use the interface controls with keyboard, mouse, stylus / with screen reader?

Medium
8.1_HTML_02 script

Is it possible to use the scripted interface with keyboard, mouse, stylus / with screen reader?

Medium
CSS N/A

5.8.1.2 (X)HTML tests

5.8.1.2.1 Test 8.1_HTML_01

This test is targeted to find embedded programmatic elements that are not directly accessible or compatible with assistive technologies.

5.8.1.2.2 Test 8.1_HTML_02

This test is targeted to find scripts that are not directly accessible or compatible with assistive technologies.

5.8.1.3 CSS tests

For this checkpoint there are no applicable tests.

5.9 Guideline 9

“Design for device-independence.”

(See http://www.w3.org/TR/WCAG10/#gl-device-independence)

This guideline provides information on how to create web content that does not rely on one specific input or output device.

5.9.1 Checkpoint 9.1

Provide client-side image maps instead of server-side image maps except where the regions cannot be defined with an available geometric shape. [Priority 1]

(See http://www.w3.org/TR/WCAG10/#tech-client-side-maps and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-client-side-maps)

5.9.1.1 Summary

Table 17: UWEM 0.5 tests for checkpoint 9.1.
Format Test ID Elements/
Attributes/
Selectors
Inspection Procedure Conf. Level
HTML 9.1_HTML_01 a//img[@ismap], input[@type='image']

Select all specified elements. There is no need for server-side image maps any more, because of the shape attribute's poly value (HTML 4.0).

High
CSS N/A

5.9.1.2 (X)HTML tests

5.9.1.2.1 Test 9.1_HTML_01

This test is targeted to find server-side image maps.
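
As an illustration of how the 9.1_HTML_01 selectors could be evaluated automatically, the following Python sketch (an assumption for illustration, not the UWEM tool set) applies them with lxml:

```python
# Illustrative sketch: list candidate server-side image maps per test 9.1_HTML_01.
from lxml import html

def find_server_side_image_maps(document_text):
    tree = html.fromstring(document_text)
    # a//img[@ismap]: linked images that send click coordinates to the server;
    # input[@type='image']: graphical submit buttons behaving like server-side maps.
    return tree.xpath("//a//img[@ismap] | //input[@type='image']")
```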

5.9.1.3 CSS tests

For this checkpoint there are no applicable tests.

5.9.2 Checkpoint 9.2

Ensure that any element that has its own interface can be operated in a device-independent manner. [Priority 2]

(See http://www.w3.org/TR/WCAG10/#tech-keyboard-operable and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-keyboard-operable)

5.9.2.1 Summary

Table 18: UWEM 0.5 tests for checkpoint 9.2.
Format Test ID Elements/
Attributes/
Selectors
Inspection Procedure Conf. Level
HTML see 8.1_HTML_01
CSS N/A

5.9.2.2 (X)HTML tests

Test 8.1_HTML_01 also covers this checkpoint.

5.9.2.3 CSS tests

For this checkpoint there are no applicable tests.

5.9.3 Checkpoint 9.3

For scripts, specify logical event handlers rather than device-dependent event handlers. [Priority 2]

(See http://www.w3.org/TR/WCAG10/#tech-device-independent-events and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-device-independent-events)

5.9.3.1 Summary

Table 19: UWEM 0.5 tests for checkpoint 9.3.
Format Test ID Elements/
Attributes/
Selectors
Inspection Procedure Conf. Level
HTML 9.3_HTML_01 @onclick, @ondblclick, @onkeydown, @onkeypress, @onkeyup, @onmousedown, @onmousemove, @onmouseout, @onmouseover, @onmouseup

Select all specified attributes. Is one of @onblur, @onchange, @onfocus, @onload, @onreset, @onselect, @onsubmit, @onunload possible instead?

High
9.3_HTML_02 *[@onclick!=@onkeypress], *[@onkeypress!=@onclick], *[@onmousedown!=@onkeydown], *[@onkeydown!=@onmousedown], *[@onmouseup!=@onkeyup], *[@onkeyup!=@onmouseup]

Select all specified elements. If there is an alternative, does it provide the same functionality?

High
9.3_HTML_03 *[@ondblclick]

Select all specified elements. HTML does not provide a key equivalent.

High
CSS N/A

5.9.3.2 (X)HTML tests

5.9.3.2.1 Test 9.3_HTML_01

This test is targeted to find device-dependent event handlers that can be replaced by logical event handlers.

5.9.3.2.2 Test 9.3_HTML_02

This test is targeted to find mouse event handlers without a keyboard alternative and vice versa.
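
A simplified automatic variant of this check is sketched below in Python with lxml (an illustration, not the UWEM procedure). It only flags mouse handlers with a missing keyboard counterpart; judging whether existing handler pairs provide equivalent functionality still requires inspection.

```python
# Illustrative sketch: flag elements with a mouse handler but no keyboard counterpart.
from lxml import html

HANDLER_PAIRS = [              # pairs assumed for this sketch
    ("onclick", "onkeypress"),
    ("onmousedown", "onkeydown"),
    ("onmouseup", "onkeyup"),
]

def find_unpaired_handlers(document_text):
    tree = html.fromstring(document_text)
    flagged = []
    for mouse_attr, key_attr in HANDLER_PAIRS:
        for element in tree.xpath("//*[@%s and not(@%s)]" % (mouse_attr, key_attr)):
            flagged.append((element.tag, mouse_attr))
    return flagged
```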

5.9.3.2.3 Test 9.3_HTML_03

This test is targeted to find event handlers for double click.

5.9.3.3 CSS tests

For this checkpoint there are no applicable tests.

5.10 Guideline 11

Use W3C technologies and guidelines.

(See http://www.w3.org/TR/WCAG10/#gl-use-w3c)

This guideline recommends using W3C technologies and describes what to do if other technologies are used.

5.10.1 Checkpoint 11.4

If, after best efforts, you cannot create an accessible page, provide a link to an alternative page that uses W3C technologies, is accessible, has equivalent information (or functionality), and is updated as often as the inaccessible (original) page. [Priority 1]

(See http://www.w3.org/TR/WCAG10/#tech-alt-pages and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-alt-pages)

5.10.1.1 Summary

Table 20: UWEM 0.5 tests for checkpoint 11.4.
Format Test ID Elements/
Attributes/
Selectors
Inspection Procedure Conf. Level
HTML 11.4_HTML_01 a/@href

Compare functionality and information content of the two delivery units. Are they equivalent?

High
11.4_HTML_02 a/@href

Identify the alternative content. Check that it complies with all checkpoints in this procedure other than this one, 11.4.

High
11.4_HTML_03 a/@href

Check for presence of page containing alternative  content.

Check whether content is necessary, i.e. whether original content could be made accessible as defined here without unreasonable effort.

High
CSS N/A

5.10.1.2 (X)HTML tests

5.10.1.2.1 Test 11.4_HTML_01

This test looks for the alternative content and checks whether it is equivalent.

5.10.1.2.2 Test 11.4_HTML_02

This test looks for the alternative content and checks whether it is accessible.

5.10.1.2.3 Test 11.4_HTML_03

This test checks for the presence of alternative content and whether it is necessary, i.e., whether the original content could have been made accessible without unreasonable effort.

5.10.1.3 CSS tests

For this checkpoint there are no applicable tests.

5.11 Guideline 12

Provide context and orientation information.

(See http://www.w3.org/TR/WCAG10/#gl-complex-elements)

This guideline provides information on how to provide contextual and orientation information to help users understand complex pages or elements.

5.11.1 Checkpoint 12.1

Title each frame to facilitate frame identification and navigation. [Priority 1]

(See http://www.w3.org/TR/WCAG10/#tech-frame-titles and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-frame-titles)

5.11.1.1 Summary

Table 21: UWEM 0.5 tests for checkpoint 12.1.
Format Test ID Elements/
Attributes/
Selectors
Inspection Procedure Conf. Level
HTML 12.1_HTML_01 frame[not(@title)]

Select elements without description.

High
12.1_HTML_02 frame/@title

Select elements with description. Decide whether the title represents the context of the frame.

High
CSS N/A

5.11.1.2 (X)HTML tests

5.11.1.2.1 Test 12.1_HTML_01

This test is targeted to find frames without description.

5.11.1.2.2 Test 12.1_HTML_02

This test is targeted to check whether the title attribute represents the context of the frame.

5.11.1.3 CSS tests

For this checkpoint there are no applicable tests.

5.11.2 Checkpoint 12.2

Describe the purpose of frames and how frames relate to each other if it is not obvious by frame titles alone. [Priority 2]

(See http://www.w3.org/TR/WCAG10/#tech-frame-longdesc and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-frame-longdesc)

5.11.2.1 Summary

Table 22: UWEM 0.5 tests for checkpoint 12.2.
Format Test ID Elements/
Attributes/
Selectors
Inspection Procedure Conf. Level
HTML 12.2_HTML_01 frame/@longdesc, noframes//a/@href

Select long description document referenced by the element. Decide whether the frame is described by text in the document, if not obvious by frame title alone.

High
CSS N/A

5.11.2.2 (X)HTML tests

5.11.2.2.1 Test 12.2_HTML_01

This test is targeted to check whether the long description represents the context of the frame, if it is not clear by the frame title alone.

5.11.2.3 CSS tests

For this checkpoint there are no applicable tests.

5.11.3 Checkpoint 12.3

Divide large blocks of information into more manageable groups where natural and appropriate. [Priority 2]

(See http://www.w3.org/TR/WCAG10/#tech-group-information and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-group-information)

5.11.3.1 Summary

Table 23: UWEM 0.5 tests for checkpoint 12.3.
Format Test ID Elements/
Attributes/
Selectors
Inspection Procedure Conf. Level
HTML 12.3_HTML_01 fieldset/legend

Select elements without description.

High
12.3_HTML_02 fieldset/legend

Select elements with description. Decide whether the legend represents the context of the fieldset.

High
12.3_HTML_03 fieldset//input, fieldset//select, fieldset//textarea

Select elements. Decide whether the elements are grouped in a practical way.

High
12.3_HTML_04 optgroup/@label

Select elements without description.

High
12.3_HTML_05 optgroup/@label

Select element with description. Decide whether the label represents the context of the optgroup.

High
12.3_HTML_06 optgroup

Select elements. Decide whether the elements are grouped in a practical way.

High
12.3_HTML_07 table[not(caption)]

Select elements without description.

High
12.3_HTML_08 table/caption

Select element with description. Decide whether caption describes the nature of the table.

High
12.3_HTML_09 table/thead, table/tbody, table/tfoot, table/colgroup

Select elements. Decide whether the elements are  grouped in a practical way.

High
12.3_HTML_10 ul/li, ol/li, dl/dt, dl/dd

Select elements. Decide whether the elements are  grouped in a practical way.

High
12.3_HTML_11 h1, h2, h3, h4, h5, h6

Select elements. Decide whether the text is structured in a practical way.

High
12.3_HTML_12 p

Select elements. Decide whether the text is  grouped in a practical way.

High
12.3_HTML_13 form[not(.//fieldset)]

Do the form controls in the form need grouping?

Medium
12.3_HTML_14 select[not(optgroup)]

Do the options need grouping?

Medium
12.3_HTML_15 table[not(thead) or not(tfoot) or not(tbody)]

Do the table rows need grouping?

Medium
12.3_HTML_16 body

Does text need grouping with heading and paragraph elements?

Medium
CSS N/A

5.11.3.2 (X)HTML tests

5.11.3.2.1 Test 12.3_HTML_01

This test is targeted to find fieldsets without legend.

5.11.3.2.2 Test 12.3_HTML_02

This test is targeted to check whether the legend describes the meaning of the fieldset.

5.11.3.2.3 Test 12.3_HTML_03

This test is targeted to check whether the elements are grouped in a practical way.

5.11.3.2.4 Test 12.3_HTML_04

This test is targeted to find optgroup elements without label.

5.11.3.2.5 Test 12.3_HTML_05

This test is targeted to check whether the label describes the meaning of the optgroup.

5.11.3.2.6 Test 12.3_HTML_06

This test is targeted to check whether the option elements are grouped in a practical way.

5.11.3.2.7 Test 12.3_HTML_07

This test is targeted to find tables without caption.

5.11.3.2.8 Test 12.3_HTML_08

This test is targeted to check whether the caption describes the meaning of the table.

5.11.3.2.9 Test 12.3_HTML_09

This test is targeted to check whether the elements are grouped in a practical way.

5.11.3.2.10 Test 12.3_HTML_10

This test is targeted to check whether the elements are grouped in a practical way.

5.11.3.2.11 Test 12.3_HTML_11

This test is targeted to check whether the elements are structured in a practical way.

5.11.3.2.12 Test 12.3_HTML_12

This test is targeted to check whether the elements are structured in a practical way.

5.11.3.2.13 Test 12.3_HTML_13

This test is targeted to check whether the form controls need grouping.

5.11.3.2.14 Test 12.3_HTML_14

This test is targeted to check whether the options need grouping.

5.11.3.2.15 Test 12.3_HTML_15

This test is targeted to check whether the table rows need grouping.

5.11.3.2.16 Test 12.3_HTML_16

This test is targeted to check whether text needs grouping with headings and paragraphs.

5.11.3.3 CSS tests

For this checkpoint there are no applicable tests.

5.11.4 Checkpoint 12.4

Associate labels explicitly with their controls. [Priority 2]

(See http://www.w3.org/TR/WCAG10/#tech-associate-labels and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-associate-labels)

5.11.4.1 Summary

Table 24: UWEM 0.5 tests for checkpoint 12.4.
Format Test ID Elements/
Attributes/
Selectors
Inspection Procedure Conf. Level
HTML 12.4_HTML_01 input[not(@type='hidden')]/@id, select/@id, textarea/@id

Select elements without id.

High
12.4_HTML_02 input[not(@type='hidden')]/@id, select/@id, textarea/@id

Select matching label/@for elements.

High
CSS N/A

5.11.4.2 (X)HTML tests

5.11.4.2.1 Test 12.4_HTML_01

This test is targeted to find form control elements without id.

5.11.4.2.2 Test 12.4_HTML_02

This test is targeted to find form control elements without an associated label element.
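
For illustration, the two checks could be automated roughly as in the following Python/lxml sketch (an assumption, not the UWEM implementation); deciding whether a label text is meaningful still requires human judgement.

```python
# Illustrative sketch for tests 12.4_HTML_01 and 12.4_HTML_02.
from lxml import html

def check_label_association(document_text):
    tree = html.fromstring(document_text)
    labelled_ids = set(tree.xpath("//label/@for"))
    controls = tree.xpath("//input[not(@type='hidden')] | //select | //textarea")
    without_id = [c for c in controls if not c.get("id")]                 # 12.4_HTML_01
    without_label = [c for c in controls
                     if c.get("id") and c.get("id") not in labelled_ids]  # 12.4_HTML_02
    return without_id, without_label
```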

5.11.4.3 CSS tests

For this checkpoint there are no applicable tests.

5.12 Guideline 13

Provide clear navigation mechanisms.

(See http://www.w3.org/TR/WCAG10/#gl-facilitate-navigation)

This guideline provides information on how to provide clear and consistent navigation mechanisms to help users find what they are looking for.

5.12.1 Checkpoint 13.3

Provide information about the general layout of a site (e.g., a site map or table of contents). [Priority 2]

(See http://www.w3.org/TR/WCAG10/#tech-site-description and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-site-description)

5.12.1.1 Summary

Table 25: UWEM 0.5 tests for checkpoint 13.3.
Format Test ID Elements/
Attributes/
Selectors
Inspection Procedure Conf. Level
HTML 13.3_HTML_01

Does the website contain a site map?

High
13.3_HTML_02

Does the website contain a document that explains available accessibility features?

High
CSS N/A

5.12.1.2 (X)HTML tests

5.12.1.2.1 Test 13.3_HTML_01

This test is targeted to find a web site without a site map.

5.12.1.2.2 Test 13.3_HTML_02

This test is targeted to find a web site without a document that explains available accessibility features.

5.12.1.3 CSS tests

For this checkpoint there are no applicable tests.

5.12.2 Checkpoint 13.4

Use navigation mechanisms in a consistent manner. [Priority 2]

(See http://www.w3.org/TR/WCAG10/#tech-clear-nav-mechanism and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-clear-nav-mechanism)

5.12.2.1 Summary

Table 26: UWEM 0.5 tests for checkpoint 13.4.
Format Test ID Elements/
Attributes/
Selectors
Inspection Procedure Conf. Level
HTML 13.4_HTML_01
  1. Search for navigation menus/bars.

  2. Are the navigation facilities similar with respect to location in the source code, presentation (location in rendered document, colours, font) and behaviour?

Medium
CSS N/A

5.12.2.2 (X)HTML tests

5.12.2.2.1 Test 13.4_HTML_01

This test is targeted to check whether navigation mechanisms (e.g., navigation menus or bars) are used in a consistent manner across the site.

5.12.2.3 CSS tests

For this checkpoint there are no applicable tests.

5.13 Guideline 14

Ensure that documents are clear and simple.

(See http://www.w3.org/TR/WCAG10/#gl-facilitate-comprehension)

This guideline provides information on how to create clear and simple documents.

5.13.1 Checkpoint 14.1

Use the clearest and simplest language appropriate for a site's content. [Priority 1]

(See http://www.w3.org/TR/WCAG10/#tech-simple-and-straightforward and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-simple-and-straightforward)

5.13.1.1 Summary

Table 27: UWEM 0.5 tests for checkpoint 14.1.
Format Test ID Elements/
Attributes/
Selectors
Inspection Procedure Conf. Level
HTML 14.1_HTML_01

Read the text. Is the language simple? Yes: OK. No: Does it contain jargon? No: OK. Yes: Is the jargon necessary? Yes: OK. No: Error

Low
CSS N/A

5.13.1.2 (X)HTML tests

5.13.1.2.1 Test 14.1_HTML_01

This test is targeted to check whether the clearest and simplest language appropriate for the site's content is used.

5.13.1.3 CSS tests

For this checkpoint there are no applicable tests.

6 Aggregation of test results and reporting of results

6.1 Introduction

One of the metrics that can be of interest to policy makers is accessibility barriers. The European Commission regulation 808/2004 concerning community statistics on the information society explicitly states that one characteristic to be provided is barriers to the use of ICT, the Internet and other electronic networks, e-commerce and e-business processes. This section presents a model for calculating the accessibility barrier probability Fsu for key use scenarios and different disabled user groups.

6.2 Rationale

The rationale behind choosing an initial model for aggregating test results is that the individual projects within the WAB cluster need a unified model for aggregating data from test results. The projects will eventually need a model for the connection between accessibility assessments and barriers experienced by users that lends itself to aggregation. The projects also need to specify the basic requirements for reporting of results. Choosing reporting methods, graphs and statistics to present to users and policy makers is far easier if the statistical aggregation method is known. The methods need to be known and preferably easy to understand.

6.3 Approach

Aggregation of results can be viewed at two levels. At the lowest level, W3C's Evaluation and Report Language (EARL) is used as a standardised format for collecting and conveying test results from accessibility assessment tools according to any given standard.

At a higher level, the test results from the EARL reports need to be aggregated into data that indicates accessibility barriers and relevance for different user groups, e.g., people who experience barriers (such as people with disabilities) or users of the data for planning and development purposes, such as policy makers and stakeholders.

The higher level aggregation can be viewed as the “business logic” for estimating the accessibility barriers for given user groups. We plan to compare some known models for aggregating accessibility data with the user centric accessibility barrier model that we have developed.

This section outlines a model for calculating accessibility barriers. It is loosely based on section 8.1.8 of the NIST Engineering Statistics Handbook [NIST-SEMATECH].

6.4 The UWEM user centric accessibility barrier model

To cover the aggregation needs in UWEM, we propose the use of the following user centric accessibility barrier model (UCAB).

6.4.1 Limitations

6.4.2 Definitions

Barrier

An accessibility barrier is modelled as a product failure caused by an incompatibility between a disabled user's needs and product functionality. The incompatibility is caused by the Web resource; i.e., it is not the user's fault.

Accessibility

Accessibility is here defined as the absence of accessibility barriers for a given checkpoint, Web resource or key use scenario. A disabled user would be able to perform the required task in a given key use scenario if the Web resource is accessible.

Failure mode

The failure mode can describe in more detail how the checkpoint failed, for those checkpoints that can fail in more than one way. One example of a failure mode used in UWEM is the Test ID. Checkpoint 1.1 can for instance fail in several ways, according to failure mode 1.1_HTML_001, 1.1_HTML_002, etc. However, more advanced failure modes can be envisioned for more advanced accessibility assessment tools.

Key use scenario

A key use scenario is a narrative that describes a meaningful sequence of user tasks.

Disability group

A disability group indicates a specific group of disabled users. The model supports calculating barrier probabilities for different disability groups.

Web resource

A web resource is a network data object or service that can be identified by a URI (optionally complemented with a set of additional parameters, as defined by the XML Schema described in Appendix C). Resources may be available in multiple representations (e.g., multiple languages, data formats, size, resolutions) or vary in other ways.

Accessibility barrier probability Fcui, Fpu, Fsu
  • Fcui is the probability for an accessibility barrier for a WCAG checkpoint c and disabled user group u, given failure mode i.

  • Fpu is the probability for an accessibility barrier for a Web resource p and disabled user group u.

  • Fsu is the probability for an accessibility barrier for a key use scenario s and disabled user group u.

    Fsu, Fpu and Fcui are assumed to be constant with time for accessibility barriers.

Note that it is in general not possible to give a positive statement that a Web site is accessible by only doing automatic assessments, since only a subset of the required tests can be done automatically. It is, however, in some cases possible to give a negative statement; i.e., that the accessibility is poor with high confidence.

6.4.3 The user centric barrier probability model

An accessibility barrier is modelled as a product failure caused by an incompatibility between a disabled user's needs and product functionality. It is assumed that a test performed according to a WCAG checkpoint c that fails will introduce a barrier with some known probability Fc that can be estimated by doing user testing. Different barrier probabilities Fcu will exist for different disability groups u, since different disability groups will experience different barriers. For instance, a blind user will have a problem with missing alternative text for images (test 1.1_html_001), but not with audio descriptions, whereas a deaf user will have a problem with missing text annotations for audio files (test 1.1_html_004), but not with images, so the barrier probabilities for these checkpoints will be different for these disability groups.

It is also assumed that Fcu in most cases can be further sub-divided into several failure modes i, like, e.g., a missing alt attribute or an empty alt attribute for checkpoint 1.1. In this case, it is assumed that the probability for a given failure mode, e.g. the probability that checkpoint 1.1 introduces a barrier for a blind user because of an empty alt text, can also be determined via user testing. This is indicated in the equation below, where c indicates cp1.1, u indicates blind user and i indicates alt=''.

$$F_{cui} = F_{\text{1.1},\,\text{blind},\,\text{alt}=''}$$

The disability group u and failure mode i are given for a test involving a WCAG checkpoint.

It is assumed that each checkpoint gives the same result when the same element is being checked more than once, so that the same failure mode will be triggered every time, and therefore also the same barrier probability Fcui will apply every time the element is being tested with the given checkpoint.

In UWEM 0.5 we use the Series Model to aggregate from WCAG checkpoint to Web resource and then to key use scenario failure rate, which indicates accessibility barrier probability. This model assumes independence and that the first barrier encountered causes the user to fail in performing his task. How the barrier probability Fcui for a checkpoint c, user u and failure mode i can be estimated is discussed in section 6.4.4.

Clearly we are assuming an idealised Web in this model, and some of the assumptions may not be true for a real Web site. The evaluation phase of UWEM will be used to verify if the model is usable, and also to improve the model where necessary. Below are some examples as an introduction to how the model is used.

6.4.4 Estimating the barrier probability Fcui

The accessibility barrier probabilities for a checkpoint c with given disability group u and failure mode i can be estimated via user testing using a representative set of users for each failure mode. They may also be estimated to some extent with expert testing or semi-automatic testing; however, the level of precision in detecting real barriers will be lower than for user testing.

The accessibility barrier probability Fcui for this checkpoint c, disability group u, and failure mode i can then be estimated as the ratio of the number of observed barriers b to the total number of tests N for this failure mode:

$$F_{cui} = \frac{b}{N}$$

6.4.5 Aggregated barrier indicator for a Web resource fragment

The Series Model is used to build up from individual accessibility barrier probabilities Fcui for individual WCAG checkpoints c, disability group u and failure mode i to accessibility barrier indicators for the Web resource fragment p, that is involved in the assessment.

The Web resource fragment p is the part of the Web page that was traversed during the random walk [HENZINGER00], or a similar near-uniform random sampling algorithm, used during the crawling of the Web. The reason for focusing on part of a Web page, rather than the whole page, is that the metric should take the size and complexity of Web pages into account, to allow fair comparison between pages of various sizes; aggregating the barrier probabilities for a whole page may give too large a barrier probability for large or complex pages.

The series model only applies to non-replaceable populations (or first failures of populations of systems).

Assumptions:

  1. Each checkpoint passes or fails independently of every other one, at least until the first failure (i.e., accessibility barrier) occurs.

  2. The Web resource is declared inaccessible when the first accessibility barrier occurs.

  3. Each of the n (possibly different) WCAG checkpoints performed on the system has a known barrier probability, Fcui for the given checkpoint c, disability group u and failure mode i.

  4. All elements are used.

When the Series Model assumptions hold we have:

$$F_{pu} = 1 - \prod_{c=1}^{n} \left(1 - F_{cui}\right)$$

With the subscript p referring to the Web resource fragment involved, the subscript c referring to the c-th checkpoint, u referring to disability group and the subscript i referring to the given failure mode for each checkpoint.

The series model is analogous to all WCAG checks being connected as a series circuit in the Web resource. The system fails, i.e. is declared inaccessible, if one of the WCAG checks fails for one of the elements.
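
As a minimal illustration of the aggregation formula above (and of the estimate Fcui = b/N from section 6.4.4), the following Python sketch could be used; the counts and probabilities are hypothetical, and this is not UWEM tooling:

```python
from math import prod

def estimate_fcui(barriers, total_tests):
    """F_cui = b / N, estimated from user-testing counts (section 6.4.4)."""
    return barriers / total_tests

def series_barrier(probabilities):
    """Series Model: F = 1 - prod over i of (1 - F_i)."""
    return 1 - prod(1 - f for f in probabilities)

# Hypothetical counts: 3 barriers observed in 30 user tests gives F_cui = 0.1.
f1 = estimate_fcui(3, 30)
# Two failed checkpoints with F_cui = 0.1 and 0.3 give F_pu = 0.37.
print(round(series_barrier([f1, 0.3]), 2))  # 0.37
```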

6.4.6 Aggregating barrier indicators for one key use scenario

Introducing key use scenarios is an attempt to model how a user will use the Web to perform a given task, and to estimate the resulting barrier probability.

A key use scenario involves navigating through a sequence of Web resources to successfully perform a given task on the Web; i.e. {r0, r1,...,rs}, where r0 is the first resource, r1 is the resource one level below in the key use scenario, and rs is the last resource involved in the key use scenario. The key use scenario does not necessarily have to start on the home page, due to factors like deep linking from other sites or bookmarks in the user's browser.

The Series Model is used to build up from accessibility barrier indicators for Web resource fragments to an accessibility barrier indicator for a key use scenario. It is assumed that the first barrier for a key use scenario makes the scenario inaccessible for the disabled user (i.e., the user fails in performing the task if one of the involved Web resources contains a barrier for the disability group the user belongs to).

Assumptions:

  1. Each barrier on a Web resource occurs independently of every other one, at least until the first failure (i.e., accessibility barrier) occurs.

  2. The key use scenario is declared inaccessible when the first barrier occurs.

  3. Each of the s Web resource fragments in the key use scenario has a known barrier probability, Fpu composed from all checkpoints involved in each Web resource fragment.

  4. All elements are used.

When the Series Model assumptions hold we have:

$$F_{su} = 1 - \prod_{p=1}^{s} \left(1 - F_{pu}\right)$$

With the subscript s referring to the probability for a barrier for a given key use scenario and the subscript p referring to the p-th Web resource fragment.

The series model is analogous to all Web resources in the key use scenario being connected in a series circuit. The key use scenario fails, i.e., is declared inaccessible, if one of the involved Web resources is inaccessible.

In principle, one can consider accessibility assessments of single Web resources as a special case, where the length of the key use scenario is 1 Web resource, and the page fragment consists of the whole page, thus the model is also to some extent usable for single resource scenarios (single Web pages).

6.4.7 Aggregating data from several disability groups

There are complex interdependencies between the different disabled user groups. For instance, photo epilepsy, colour deficit and low vision are not applicable for a blind user. This means that aggregation of the individual barrier probabilities is not trivial. It would in general not be correct to aggregate barrier probabilities for different disability groups using, e.g., the competing risk model, because some groups are dependent.

The proposed way of aggregating the resulting barrier probabilities over all disabled user groups for each key use scenario is to use the largest barrier probability for the key use scenario among the disabled user groups as the resulting barrier probability for all disabled user groups; i.e.:

$$F_{s} = \max_{u} F_{su}$$

Note that this implies that each disability is considered individually, and therefore that it does not consider combined disabilities like, e.g., deaf-blindness.

Some combined disabilities may be independent enough that they may be combined by using the series model, like, e.g., deaf and blind combined to deaf-blind, i.e.:

$$F_{\text{deafblind}} = 1 - (1 - F_{\text{deaf}})(1 - F_{\text{blind}})$$

Further research may be done in the evaluation phase of UWEM to identify the consequences of combined disabilities, and which approach is the best model for aggregating accessibility barriers.
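
A minimal Python sketch of the two aggregation rules above (the maximum over disability groups and the series combination for sufficiently independent disabilities); the probability values are hypothetical:

```python
def overall_barrier(per_group):
    """F_s = max over disability groups u of F_su."""
    return max(per_group.values())

def combine_independent(f_a, f_b):
    """Series-model combination, e.g. F_deafblind = 1 - (1 - F_deaf)(1 - F_blind)."""
    return 1 - (1 - f_a) * (1 - f_b)

scenario = {"blind": 0.65, "deaf": 0.10}        # hypothetical F_su values
print(overall_barrier(scenario))                 # 0.65
print(combine_independent(0.10, 0.65))           # 0.685
```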

6.4.8 Aggregating accessibility barriers Fsu

Accessibility barrier indicators can be used to indicate the probability for an accessibility barrier for key use scenarios and disability groups. Disability groups to be considered are indicated in section 7.

The accessibility barrier indicator values must be sortable on the criteria indicated in section 8 like country, type of Web site, etc., and it must be possible to break down each indicator to show the individual test results that are behind each indicator.

The barrier probability model only models barriers for key use scenarios. A key use scenario involves traversing one or more Web resources to perform a given task.

The key use scenarios naturally lend themselves to aggregation, so that the average barrier probability and variance for a Web site can be calculated from the sampled key use scenarios for a Web site. Similarly, aggregation can be further performed over several Web sites, regions or countries.

The aggregated value can then be mapped to the scorecards in section 8 by comparing 1 - F to the thresholds in the scorecards, where F is the resulting barrier probability.

6.5 Examples

The examples given below involve blind and deaf users. Please note that many different disabilities are affected by accessibility barriers to Web use and that Web accessibility is a cross-disability issue that is not only a concern to blind and deaf users.

6.5.1 Example 1: two tests

Consider a test T1 for WCAG checkpoint 1.1 (alt attribute) that fails, and assume that, in general, a missing alt attribute causes an accessibility barrier F1 in 10% of the cases for blind people. In addition, a test T2 for WCAG checkpoint 10.1 fails, because pop-up windows are used on the tested page. The probability that this failure causes a barrier F2 for a blind person is 30% (blind persons are assumed to have problems with both of these checkpoints). The probability that these two problems cause an accessibility barrier for the given Web resource is then:

$$F_{pu} = 1 - (1 - 0.1)(1 - 0.3) = 0.37$$

This means that, on average, a bit more than one in three blind users will encounter a barrier on this page.

6.5.2 Example 2: several images

Assume a test scenario where there are 7 images, 4 of them with an empty alt attribute and 3 with the alt attribute specified. We are considering a blind user. Assuming that an empty alt attribute causes an accessibility barrier with average probability F1 = 0.1, and that a specified alt attribute causes an accessibility barrier with probability F2 = 0.05, the probability for a barrier is:

$$F_{pu} = 1 - (1 - F_1)^4 (1 - F_2)^3 = 0.44$$

We see here that when several possible barriers are found, the model quickly aggregates to a high barrier probability, even though the probabilities of failure for the individual tests were quite low. This means that the approach should be relatively robust to minor changes due to interpretations made by the assessor or tool.

6.5.3 Example 3: key use scenario

Assume that example 1 was a test on the home page of http://example.com, and example 2 was a test on the help page http://example.com/help. What is the probability for a barrier in the key use scenario for a blind user u that involves navigating the home page and then the help page?

$$F_{su} = 1 - (1 - F_{p0})(1 - F_{p1}) = 1 - (1 - 0.37)(1 - 0.44) = 0.65$$

where Fp0 is the barrier probability for the home page and Fp1 is the barrier probability for the help page referenced via the home page.
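
For illustration, the three examples can be reproduced with a short Python sketch of the series model (the helper function is an assumption, not UWEM code):

```python
def series(probabilities):
    """F = 1 - prod over i of (1 - F_i)."""
    result = 1.0
    for f in probabilities:
        result *= (1 - f)
    return 1 - result

f_home = series([0.1, 0.3])                  # Example 1: 0.37
f_help = series([0.1] * 4 + [0.05] * 3)      # Example 2: approx. 0.44
f_scenario = series([f_home, f_help])        # Example 3: approx. 0.65
print(round(f_home, 2), round(f_help, 2), round(f_scenario, 2))
```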

6.6 Other considerations for the user centric accessibility barrier probability model (UCAB)

6.6.1 Random sampling for large scale automatic screening of Web sites

Screening a large number of Web sites automatically is used in UWEM to get a picture of problem areas. If a large number of Web sites are to be evaluated, then there is a capacity limit for how much it is viable to sample from each Web site for a given hardware configuration. A sampled resource set, as described in section 4.3.2, is therefore needed.

Furthermore, the UCAB model in UWEM may work better if combined with a random sampling algorithm, like a version of the near-uniform Markovian random walk algorithm [HENZINGER00] based on the PageRank algorithm [BRIN98]. A key use scenario would then not necessarily involve traversing the whole page: the assessment would start at some random point in the page, assess the page up to a link, and then either follow that link with some probability d or start a new key use scenario with probability 1-d. This would mean that the assessment algorithm does not consider complete pages, but page fragments involved in key use scenarios. This does not completely solve the bias problem, since Bharat and Broder state that the PageRank algorithm also has a bias towards large pages; however, the near-uniform sampling algorithm in [HENZINGER00] attempts to address this by sampling inversely to the PageRank.

We plan to investigate strategies like random walk algorithms from a set of seed pages to simulate random key use scenarios. This approach may be based on research done for search engines, see [HENZINGER00], and is planned for UWEM 2.0.
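
The following Python sketch illustrates the idea only; the fetch_links function, the step count and the parameter d are assumptions for illustration, and this is not the planned UWEM 2.0 sampler:

```python
import random

def random_walk_sample(seeds, fetch_links, steps=1000, d=0.85, rng=random):
    """Collect page fragments by following a random link with probability d
    and restarting from a seed page with probability 1 - d."""
    visited = []
    current = rng.choice(seeds)
    for _ in range(steps):
        visited.append(current)
        links = fetch_links(current)       # caller-supplied link extraction
        if links and rng.random() < d:
            current = rng.choice(links)    # continue the simulated key use scenario
        else:
            current = rng.choice(seeds)    # start a new simulated scenario
    return visited
```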

6.6.2 What is the source of an accessibility barrier?

In most cases, Web accessibility barriers are caused by poorly written elements that do not follow the WCAG guidelines, but there are exceptions, especially for Priority 1 checkpoints such as checkpoint 1.4 ("Synchronize equivalent alternatives for multimedia"), which relates to a resource, or 2.1 ("Information available without colour"), which relates to the presentation of the Web page.

This means that several things on a Web page (the rendered version of a Web resource) may cause barriers. The model does not care about the source of a barrier, whether it comes from an element, the presentation or elsewhere in the page. It is assumed that the EARL reports will describe the deviations as test results from WCAG checks, which are the basis for estimating the accessibility barriers.

6.6.3 User Testing for estimating the barrier probability Fsu

User testing can also be used to improve automatic testing, by uncovering real accessibility barriers for given key use scenarios. This means that if a user test shows that there is an accessibility barrier for a given disabled user group, then the given key use scenario will also have a barrier for this disabled user group with probability equal to or close to 1.

This means that the model for estimating barrier probabilities has synergies with user testing and vice versa. The user testing can be used to improve the automated data, and the automated data can help make the user testing more efficient.

6.7 General metrics

Several metrics about accessibility related issues can be useful to aggregate and present. The scorecards in section 8 indicate some of the measures that are of interest when it comes to monitoring key figures concerning accessibility for policy makers in several countries. Other graphs and tables may focus on quantifiable metrics, like the percentage of barriers identified or percentage of sites conforming to WCAG.

6.8 Conformance indicators

Conformance indicators are metrics that define success criteria as the number of sites that pass conformance level A, AA or AAA. Aggregation can be specified as the percentage of sites that have reached a certain level in a region or other area.

6.8.1 Statistics on WCAG checkpoints

The WCAG checkpoint failure rate can be used to indicate problem areas, whether the trend is increasing or decreasing, and possibly the speed of change. Another interesting indicator can be a breakdown of the most frequent checkpoint failures.

Other indicators are:

Each of these indicators could be broken down according to the groups shown in 8.1. Note that these indicators make sense at all levels of granularity of the data, i.e., Web page, Web site, NUTS region, etc. They naturally lend themselves to aggregation.

7 User testing protocols

In this section, we present a face-to-face user testing protocol.

7.1 Participant consent

Obtain permission to audio/video tape the test, and to use the resulting data for the purposes of the Web site evaluation only. The identity of all participants must remain anonymous. A standard consent form is provided for this purpose (see Appendix D), but may be adapted to meet local ethics requirements.

7.2 Participant remuneration

Participants must be given appropriate remuneration for the work they undertake - travel expenses and a suitable hourly rate.

7.3 Number and type of participants in the test

Conduct the testing as a minimum with people with each of the following disabilities:

If the Web site includes a meaningful audio component, include in addition at least three deaf and hard of hearing participants (including one native speaker of Sign Language and one who uses lip reading to understand speech). Please also consider including user testing with people who are deafblind.

If the Web site includes meaningful resources requiring a particular physical capability, at least three people with functional limitations in that capability are to be included. Similarly, if the Web site includes meaningful resources requiring some other particular human capability, at least three people with functional limitations in that capability are to be included.

Overall the sample should be representative of the target audience of the Web site in terms of age, gender and Internet experience. The target audience should be discussed with the Web site owner. For example, if the Web site is primarily aimed at adults, all participants should be over 18 years of age. However, if the Web site is aimed solely at children, the sample should cover the target age range. If different parts of the Web site are aimed at different age groups (e.g., part of the Web site is for school children, part is for their teachers) the evaluation should be conducted with separate scope pattern lists as defined in section 4.

7.4 Browsers to be used

A range of browsers and browser versions is to be included, representing 96% of the currently most commonly used combinations of browsers and versions. If the Web site requires plugins (such as media players), a range of appropriate plugins is to be used.

7.5 Assistive technologies to be used

Include participants using a range of assistive technologies to interact with the Web site. Below are some examples for the minimum user group:

If non-text information (e.g., mathematics, maps, musical scores) is provided, people using appropriate assistive devices are to be included.

7.6 Choice of tasks for the test

Each participant will undertake a series of tasks which the Web site is designed to support. Tasks should encompass the main goals that users would expect to be able to do when visiting the Web site. Tasks can be suggested by the Web site owners, but must be independently validated by the evaluator.

Three methodologies are possible for selection of tasks:

  1. Most frequently undertaken tasks for the site. This might be established by analysis of usage data of the Web site (e.g., server logs) to establish what resources on the Web site are visited most often. This has the limitation that if important resources are actually difficult to find, they might not be most frequently visited.

  2. Business/organization critical tasks, like, e.g., find the privacy statement on an e-banking site. To be established in conjunction with the Web site owners.

  3. Tasks based on the key resource types on the site (e.g., forms, tables, video, search facility; see section 4.3.1).

7.7 Basic design of the test

Participants will be encouraged to “think aloud” while undertaking the tasks. To ensure the participant vocalizes as much as possible, the testing will adopt Krug's approach [KRUG00] to facilitating a usability test. This will include probing by the facilitator during each task in order to find out what the users are doing and why at each stage.

7.8 Test protocol

7.8.1 Opening briefing and questions

This section of the protocol should take approximately 5 minutes. Participants will be assured of confidentiality of the data collected. They will also be assured that it is the Web site being tested, not them.

Ask opening questions (see section 14.2) to establish how often the participant uses the Internet, whether they use it for work or for leisure and, if they use assistive technology, how long they have been using it and whether they received any training in its use. (Some of these questions may be omitted if this data is already available for the participant.)

7.8.2 First reaction to the Web site

This section of the protocol should take approximately 5 minutes. Participants will be asked to spend a few minutes exploring the home page of the Web site (without clicking on any links at this stage). Instructions should be kept simple:

They will then be asked to give their first impressions of the site (see section 14.3 for 7-point Likert scales to capture these impressions).

N.B. Some participants will be more familiar or comfortable with a Likert scale that has the most “positive” end of the scale as the high numeric value (i.e., 7 on the scales used in this protocol), and some participants will be more familiar or comfortable with a Likert scale that has the most “positive” end of the scale as the low numeric value. All Likert scales in the test should be configured to whatever the participant is most familiar and comfortable with. For purposes of analysis, all Likert scales should then be transformed such that the most positive end of the scale is the high numeric value.
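
For illustration, the transformation of ratings collected on a reversed 7-point scale can be written as follows (a minimal sketch; the function name is an assumption):

```python
def normalise_likert(rating, positive_is_high, points=7):
    """Return the rating on a scale where `points` is the most positive answer."""
    return rating if positive_is_high else (points + 1 - rating)

print(normalise_likert(2, positive_is_high=False))  # reversed scale: 2 becomes 6
```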

7.8.3 Tasks

Participants should complete the tasks in a logical order, and probably in an order that goes from the simplest task to the most complex. This allows the participant to build up some confidence in interacting with the Web site.

After completing each task, the participant will be asked to complete a brief assessment of that task (see Section 14.4) and a brief Problem Assessment Form for each problem they encountered during the task (see section 14.5). The evaluator should also complete their part of the problem assessment forms (ideally this should be completed while the participant is undertaking the task).

After the testing session, the evaluator can use the Problem Assessment Forms to tabulate which accessibility criteria (see section 5) caused the problems encountered by the participant.

7.8.4 Post-tasks questions: overall reaction to the Web site

This part of the protocol should take about 5 minutes. Participants will be asked a standard set of questions to capture their overall reaction to the Web site (section 14.6).

7.9 Analysis and interpretation of the findings of the testing

Data from all the participants in the test should be aggregated as follows:

These aggregated results can be compared with minimum thresholds for performance and acceptance to assess appropriate levels of accessibility. Some examples of how the results might be used include:

A number of other dimensions of user reaction to Web sites will be investigated in the evaluation phase of UWEM 0.5: satisfaction, frustration, etc. The success rate allows various stakeholders to compare user tests of different Web sites. In case the success rates are identical, the Web sites are to be compared according to the level of acceptance each user test has achieved.

8 Scoring and reporting results

Each of the four evaluation approaches outlined in section 3 supports the identification of objective measures. For automatic, semi-automatic and expert evaluation, a single value describes the probability of creating a barrier by violating any of the tests outlined in section 5, each within its level of confidence. Aggregation of single values makes it possible to compute this probability for a single Web page. Similarly, but based on user ratings and following the protocols of section 7, the success rate of disabled users is determined during user testing. Scoring is therefore based on this data and allows the results to be compared, as well as the development of a site's accessibility to be monitored. In the following we describe in more detail the characteristics of the data as well as balanced scorecards to compare different Web pages or Web sites.

8.1 Breakdown of characteristics

Analysis of the data makes it possible to identify the following characteristics for the identification of Web sites and to determine the results of the evaluation:

In the following, more details are given on scoring Web sites and single Web pages using the balanced scorecard method. While scorecards address excellence in implementing accessibility as well as the absence of accessibility features, the results of aggregating the accessibility probability according to section 6 make it possible to create ranking lists and therefore to compare Web sites.

8.2 Balanced scorecard method

The balanced scorecard is an approach to strategic management developed by Kaplan and Norton [KAPLAN96]. It addresses some of the weaknesses and vagueness of other management approaches. The balanced scorecard method is used, among others, by the Commission in the Bologna Process to undertake a stocktaking exercise measuring the progress made in three areas of higher education. This is done to visualise improvements in the process.

Scorecards are developed to give a “big picture” of progress on priority action lines. Each scorecard is based on objective criteria and benchmarks either:

The policy scorecard is intended to show collective achievement of the targets set out in the European policy for Web accessibility. The policy scorecards are meant as a progress chart, not as an absolute measurement. They are not designed to make comparisons between regions or countries. They may provide baseline data against which progress can continue to be measured in the future.

The benchmarks are colour coded to determine progress on the priority action lines. It is possible to analyse areas where progress has been especially strong or weak. Explanation of the colour codes in use in the UWEM Scorecard:

Table 28: UWEM Scorecard.
Green Excellent performance
Light Green Very good performance
Yellow Good performance
Orange Some progress has been made
Red Little progress has been made
Black No efforts

Note: Each level of scoring can be applied for individual columns independently, with one exception: the overall level of accessibility of public Web sites also reflects the average values of the other columns (see Table 29).

The scorecard for single Web sites (Table 30) describes the assessment of single Web sites based on UWEM. Each level of scoring can be applied for individual columns independently, with one exception: the overall level of accessibility of a single Web site also reflects the average values of the other columns in Table 30 and is determined on the basis of applying only one or some of the evaluation approaches. Note, however, that for excellent performance (green) multiple evaluation approaches are required.

Table 29: Scorecard for public Web sites (national scorecard, all government websites).
Conformance Level of QA Key elements of assessment methodologies Level of user involvement Level of accessibility
Green Full WCAG AA conformance

A Web accessibility evaluation methodology is in operation at national level, and applies to all public services.

- Fully functional QA agency in place

- Existing agencies have QA as part of responsibility

All elements of the evaluation system are fully implemented:

- User testing

- Expert testing

- Semi-automatic testing

- Automatic testing

All relevant groups of disabled users participate in the evaluation process.

All public sites are accessible to disabled users

Light Green Important public services conform to WCAG AA

QA system is in operation, but is not applied in all areas

- Expert testing

- Semi-automatic testing

- Automatic testing

Some groups of disabled users participate in the evaluation process.

Important public sites are accessible to disabled users

Yellow Important public sites conform to WCAG A

QA system is being defined or fine tuned.

- Semi-automatic testing

- Automatic testing

User involvement is prepared

Important public sites are accessible to most disabled users

Orange Some public Web sites conform to WCAG A

- Legislation or regulations prepared, awaiting implementation

OR

- Existing system is undergoing review/development in accordance with UWEM adoption.

- Automatic testing

Policy makers are discussing involvement of disabled users

Some public websites are accessible to people with disabilities

Red

WCAG level A conformance is planned

Preliminary planning phase or no Web accessibility evaluation method in place yet, but initial debate and consolidation has begun

Preliminary evaluations have been performed

User organisations are developing their own inspection techniques

Accessibility to public Web sites is planned

Black

No information about WCAG level A conformance for public websites

No Web evaluation methodology in place and no plan to initiate

There is no Web evaluation system in place

No involvement from disabled users yet or no clarity about structures and arrangements for participation for disabled users

No information about whether public sites are accessible to disabled users

The specified numbers and percentages are being validated and will be revised based on these results for UWEM 1.0.

Table 30: Scorecard for single Web sites.
Automatic testing Semi-automatic testing Expert testing User testing Level of Accessibility
Green

All requirements for lower scores plus testing of every update of the website. However, currently this score cannot be achieved by automatic testing alone. Either expert testing or user testing, as described in the adjacent cells, is also required.

Currently this score cannot be achieved by semi-automatic testing alone. Automatic testing and either expert testing or user testing, as described in the adjacent cells are required.

All requirements for lower scores plus expert testing of every re-design of the website

All requirements for lower scores plus user testing is conducted for each re-design of the website

Accessibility is fully supported

Light Green

All criteria in Section 5 are met at a probability level above 0.95 as explained in section 6

All criteria in Section 5 are met at a probability level above 0.95 as explained in section 6

All criteria in Section 5 are met at a probability level above 0.95 as explained in section 6 and all non-automatic properties tested with respect to all relevant disabled users

At least a success rate of 75% by all user groups, no accessibility catastrophe, a mean acceptance rating of at least 4

Within levels of confidence no accessibility barriers have been identified

Yellow

All criteria in Section 5 are met at a probability level above 0.85 as explained in section 6

All criteria in Section 5 are met at a probability level above 0.85 as explained in section 6

All criteria in Section 5 are met at a probability level above 0.85 as explained in section 6 and all non-automatic properties tested with respect to all relevant disabled users

A success rate of less than 75% and no accessibility catastrophe has been identified

A few accessibility barriers have been identified

Orange

All criteria in Section 5 are met at a probability level above 0.75 as explained in section 6

All criteria in Section 5 are met at a probability level above 0.75 as explained in section 6

All criteria in Section 5 are met at a probability level above 0.75 as explained in section 6 and all non-automatic properties tested with respect to all relevant disabled users

A success rate of less than 65% and no accessibility catastrophe has been identified

Several accessibility barriers have been identified

Red

All criteria in Section 5 are met at a probability level below 0.75 as explained in section 6

All criteria in Section 5 are met at a probability level below 0.75 as explained in section 6

All criteria in Section 5 are met at a probability level below 0.75 as explained in section 6 and all non-automatic properties tested with respect to all relevant disabled users

A success rate of less than 50% or  an accessibility catastrophe has been identified

Web Site is inaccessible

Black

Some criteria in Section 5 have not been tested

Some criteria in Section 5 have not been tested

Some criteria in Section 5 have not been tested

User testing has not or only partially been conducted

No involvement from disabled users yet or no clarity about structures and arrangements for participation for disabled users

For a score of good (orange) to very good performance (light green), a ranking of several Web sites according to their evaluation results is possible. However, the results are not directly comparable among the four different methods, as the levels of confidence vary.
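
As an illustration of how the aggregated value 1 - F from section 6 might be mapped onto the colour bands of the automatic-testing column in Table 30, consider the following Python sketch. It is a simplification: green additionally requires expert or user testing and is therefore not returned here, and the threshold handling is an assumption of this sketch.

```python
def scorecard_colour(barrier_probability, all_criteria_tested=True):
    value = 1 - barrier_probability        # compare 1 - F to the thresholds
    if not all_criteria_tested:
        return "black"                     # some criteria in section 5 not tested
    if value > 0.95:
        return "light green"
    if value > 0.85:
        return "yellow"
    if value > 0.75:
        return "orange"
    return "red"

print(scorecard_colour(0.03))   # 1 - F = 0.97, mapped to "light green"
```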

The following scorecard describes the assessment of accessibility policy implementation based on UWEM.

Table 31: The accessibility policy scorecard. For each colour level, the requirements under automatic, semi-automatic, expert and user testing are listed, together with the resulting level of accessibility.

Green
  Automatic testing: All requirements for lower scores plus testing of every update of the Web site. However, this score cannot currently be achieved by automatic testing alone; either expert testing or user testing, as described in the adjacent cells, is also required.
  Semi-automatic testing: This score cannot currently be achieved by semi-automatic testing alone; automatic testing and either expert testing or user testing, as described in the adjacent cells, are required.
  Expert testing: All requirements for lower scores plus expert testing of every re-design of the Web site.
  User testing: All requirements for lower scores plus user testing of each re-design of the Web site.
  Level of Accessibility: Accessibility is fully supported.

Light Green
  Automatic testing: Criteria in Section 5 are met at an average probability level above 0.95, as explained in Section 6.
  Semi-automatic testing: All criteria in Section 5 are met at a probability level above 0.95, as explained in Section 6.
  Expert testing: All criteria in Section 5 are met at a probability level above 0.95, as explained in Section 6, and all non-automatic properties are tested with respect to all relevant disabled users.
  User testing: A success rate of at least 75% by all user groups, no accessibility catastrophe, and a mean acceptance rating of at least 4.
  Level of Accessibility: Within the levels of confidence, no accessibility barriers have been identified.

Yellow
  Automatic testing: Criteria in Section 5 are met at an average probability level above 0.85, as explained in Section 6.
  Semi-automatic testing: All criteria in Section 5 are met at a probability level above 0.85, as explained in Section 6.
  Expert testing: All criteria in Section 5 are met at a probability level above 0.85, as explained in Section 6, and all non-automatic properties are tested with respect to all relevant disabled users.
  User testing: A success rate of less than 75%, and no accessibility catastrophe has been identified.
  Level of Accessibility: A few accessibility barriers have been identified.

Orange
  Automatic testing: Criteria in Section 5 are met at an average probability level above 0.75, as explained in Section 6.
  Semi-automatic testing: All criteria in Section 5 are met at a probability level above 0.75, as explained in Section 6.
  Expert testing: All criteria in Section 5 are met at a probability level above 0.75, as explained in Section 6, and all non-automatic properties are tested with respect to all relevant disabled users.
  User testing: A success rate of less than 65%, and no accessibility catastrophe has been identified.
  Level of Accessibility: Several accessibility barriers have been identified.

Red
  Automatic testing: Criteria in Section 5 are met at an average probability level below 0.65, as explained in Section 6.
  Semi-automatic testing: All criteria in Section 5 are met at a probability level below 0.65, as explained in Section 6.
  Expert testing: All criteria in Section 5 are met at a probability level below 0.65, as explained in Section 6, and all non-automatic properties are tested with respect to all relevant disabled users.
  User testing: A success rate of less than 50%, or an accessibility catastrophe has been identified.
  Level of Accessibility: The Web site is inaccessible.

Black
  Automatic testing: Criteria in Section 5 are met at an average probability level below 0.5.
  Semi-automatic testing: Some criteria in Section 5 have not been tested.
  Expert testing: Some criteria in Section 5 have not been tested.
  User testing: User testing has not been conducted, or has only partially been conducted.
  Level of Accessibility: No involvement from disabled users yet, or no clarity about structures and arrangements for the participation of disabled users.

9 Glossary

Term Use in UWEM
Accessibility

Accessibility is here defined as the absence of accessibility barriers for a given checkpoint, Web resource or key use scenario. A disabled user would be able to perform the required task in a given key use scenario if the Web resource is accessible.

Accessibility test

A test referring to one or more Web pages resulting in an EARL report.

Accessibility barrier probability Fi

Fi indicates the probability of an accessibility barrier for a given checkpoint, page or key use scenario i.

Aggregation

Grouping of data to get an overview of the result set.

Authored units

Some set of material created as a single entity by an author. Examples include a collection of markup, a style sheet, and a media resource, such as an image or audio clip.

Balanced Scorecard

Approach to strategic management developed by Robert Kaplan and David Norton. The approach can be used to monitor progress towards a defined set of goals.

Barrier

A barrier on a Web page can be viewed as an incompatibility between the user and the Web page that prevents the user from accomplishing a task in a given key use scenario.

BenToWeb

Benchmarking Tools and Methods for the Web. Project in the WAB cluster: http://bentoweb.org/ 

Complete resource set

All resources in a Web site.

Core resource set

The Core Resource Set is a set of generic resource types which are likely to be present in most Web sites, and which are core to the use and accessibility evaluation of a site. It represents a minimal set of resources which should be included in any accessibility evaluation of the site. It cannot, in general, be automatically identified, but requires human judgement to select.

CP

Checkpoint 

Crawl Web site

Recursively retrieve Web pages, until the whole Web site is downloaded, or until the sampling criteria terminate the crawling process.

EARL

Evaluation and Report Language

EIAO

European Internet Accessibility Observatory. Project in the WAB cluster: http://www.eiao.net/

Sampled resource set

A Sampled Resource Set is a resource set generated by automated recursive crawling from a set of “seed” resources, where the crawling has been subject to certain pre-determined limits or constraints.

Failure mode

A WCAG checkpoint may fail in several ways, and the failure mode indicates how the checkpoint failed. For example, checkpoint 1.1 consists of checking that the alt attribute of each img element has been used to provide a short text alternative for the referenced non-text content. One specific failure mode for checkpoint 1.1 is a missing alt attribute.

Key use scenario

The sequence of events and Web pages that must be traversed to successfully perform a task on a Web page

A, AA, AAA

Levels of conformance with WCAG 1.0 as defined by W3C.

NACE

Acronym from the French 'Nomenclature statistique des Activités économiques dans la Communauté Européenne' – Statistical classification of economic activities in the European Community

NIST

National Institute of Standards and Technology

NUTS

Nomenclature of Territorial Units for Statistics

QA

Quality Assurance

Replicable

If tests are repeated, the same results are expected, within certain limitations.

Sampling

Procedure to identify subsets of the Complete Resource Set (a single complete Web site) which are to be evaluated.

Scoping

Procedure to decide whether any arbitrary URI does or does not belong to a specific set (typically a specific Web site).

S-EAM

Support EAM. Project in the WAB cluster: http://www.support-eam.org/

Seed resource

A starting point for crawling a Web site.

User group

A user group indicates a specific group of disabled users (e.g. blind, deaf, colour vision deficit, epilepsy, physical disability, etc.).

User testing

Evaluation by use of a representative set of users for each failure mode

UWEM

Unified Web Evaluation Methodology

WAB

Web Accessibility Benchmarking Cluster

WCAG

Web Content Accessibility Guidelines

Web resource

A network data object or service that can be identified by a URI. Resources may be available in multiple representations (e.g., multiple languages, data formats, size, resolutions) or vary in other ways.

See also http://www.w3.org/2003/glossary/ for explanation of common Web terms and abbreviations.

10 References

[BRIN98]
Brin S, Page L (1998). The anatomy of a large-scale hypertextual Web search engine. In: Enslow P H, Ellis A (eds), Proceedings of the Seventh international Conference on World Wide Web 7 (Brisbane, Australia), pp. 107—117. Amsterdam: Elsevier Science Publishers B. V. DOI= http://dx.doi.org/10.1016/S0169-7552(98)00110-X
[KAPLAN96]
Kaplan R S, Norton D P (1996). The Balanced Scorecard: Translating Strategy into Action. Boston, MA: Harvard Business School Press.
[HENZINGER00]
Henzinger M, Heydon A, Mitzenmacher M, Najork M (2000). On near-uniform URL sampling. In: Proceedings of the 9th international World Wide Web Conference on Computer Networks: the international Journal of Computer and Telecommunications Networking (Amsterdam, The Netherlands), pp. 295—308. Amsterdam: North-Holland Publishing Co. Available at: http://www9.org/w9cdrom/88/88.html. DOI= http://dx.doi.org/10.1016/S1389-1286(00)00055-4
[KRUG00]
Krug S (2000). Don't Make Me Think: A Common Sense Approach to Web Usability. New Riders Press (2nd edition).
[LEVENE99]
Levene M, Loizou G (1999). Navigation in Hypertext is easy only sometimes. SIAM Journal on Computing, 29, pp. 728—760.
[NIST-SEMATECH]
NIST/SEMATECH e-Handbook of Statistical Methods (2005). NIST. Available at: http://www.itl.nist.gov/div898/handbook/
[RFC2119]
Bradner S (ed) (1997). Key words for use in RFCs to Indicate Requirement Levels. Request for Comments: 2119. IETF. Available at: http://www.ietf.org/rfc/rfc2119.txt
[RFC2396]
Berners-Lee T, Fielding R, Masinter L (eds) (1998). Uniform Resource Identifiers (URI): Generic Syntax. Request for Comments: 2396. IETF. Available at: http://www.ietf.org/rfc/rfc2396.txt
[RFC2616]
Fielding R, Gettys J, Mogul J, Frystyk H, Masinter L, Berners-Lee T (eds) (1999). Hypertext Transfer Protocol – HTTP/1.1. Request for Comments: 2616. IETF. Available at: http://www.ietf.org/rfc/rfc2616.txt
[WCAG10]
Chisholm W, Vanderheiden G, Jacobs I (eds) (1999). Web Content Accessibility Guidelines 1.0, W3C Recommendation 5-May-1999. World Wide Web Consortium. Available at: http://www.w3.org/TR/WCAG10/
[WCAG10-TECHS]
Chisholm W, Vanderheiden G, Jacobs I (eds) (2000). Techniques for Web Content Accessibility Guidelines 1.0, W3C Note 6 November 2000. World Wide Web Consortium. Available at: http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/
[WCAG20]
Caldwell B, Chisholm W, Slatin J, Vanderheiden G, White J (eds) (2005). Web Content Accessibility Guidelines 2.0, W3C Working Draft 30 June 2005. World Wide Web Consortium. Available at: http://www.w3.org/TR/WCAG20/
[XMLSCHEMA2]
Biron P V, Malhotra A (eds) (2004). XML Schema Part 2: Datatypes Second Edition. World Wide Web Consortium (W3C). Available at: http://www.w3.org/TR/xmlschema-2/
[ZENG04]
Zeng X (2004). Evaluation and Enhancement of Web Content Accessibility for Persons with Disabilities. Ph.D. Thesis. University of Pittsburgh. Available at: http://etd.library.pitt.edu/ETD/available/etd-04192004-155229/unrestricted/XiaomingZeng_April2004.pdf

11 Appendix A: Document License

Copyright © 2005 European Commission and WAB Cluster members 

[License based upon the W3C document license. NOTICE: This license applies specifically to UWEM and WAB Cluster materials. None of the documents referenced in this document from the World Wide Web Consortium or its Web Accessibility Initiative are subject to the conditions of this license.]

By using and/or copying this document, or the document from which this statement is linked (Unified Website Evaluation Methodology, UWEM version 0.5), you (the licensee) agree that you have read, understood, and will comply with the following terms and conditions:

Permission to copy, and distribute the contents of this document, or the document from which this statement is linked, in any medium for any purpose and without fee or royalty is hereby granted, provided that you include the following on ALL copies of the document, or portions thereof, that you use:

When space permits, inclusion of the full text of this NOTICE should be provided. We request that authorship attribution be provided in any software, documents, or other items or products that you create pursuant to the implementation of the contents of this document, or any portion thereof.

This license allows the use, modification and extension of this document by any organisation, royalty free, under the conditions expressed above. In case of modifications outside the selected standardisation body or equivalent entity by the copyright holders, neither the term “Unified Web Site Evaluation Methodology” nor the acronym “UWEM” may be used to denominate the resulting work.

THIS DOCUMENT IS PROVIDED "AS IS," AND COPYRIGHT HOLDERS MAKE NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, NON-INFRINGEMENT, OR TITLE; THAT THE CONTENTS OF THE DOCUMENT ARE SUITABLE FOR ANY PURPOSE; NOR THAT THE IMPLEMENTATION OF SUCH CONTENTS WILL NOT INFRINGE ANY THIRD PARTY PATENTS, COPYRIGHTS, TRADEMARKS OR OTHER RIGHTS.

COPYRIGHT HOLDERS WILL NOT BE LIABLE FOR ANY DIRECT, INDIRECT, SPECIAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF ANY USE OF THE DOCUMENT OR THE PERFORMANCE OR IMPLEMENTATION OF THE CONTENTS THEREOF.

The name and trademarks of copyright holders may NOT be used in advertising or publicity pertaining to this document or its contents without specific, written prior permission. Title to copyright in this document will at all times remain with copyright holders.

12 Appendix B: Template to report evaluation results

Not available for this version.

13 Appendix C: XML Schemas for Evaluation Sets

13.1 Resource description mechanism

A Web resource may not be uniquely identified by its URI alone: in particular, for the HTTP(S) protocol, the request sent to the server by the User Agent can contain additional parameters in both the header and the body. The following XML Schema (see Figure 3) presents a list of resources, each described by a URI together with a set of optional parameters that capture HTTP header and body information.

Figure 3: Graphical Representation of the XML Schema to represent resources.

With this approach, a resource identified by its URI alone, a URI requested with additional HTTP headers, and a URI combined with variables submitted in the request body can all be described; an illustrative example is given after the schema.

<?xml version = "1.0" encoding = "UTF-8"?>

<xs:schema xmlns:xs = "http://www.w3.org/2001/XMLSchema"
 xmlns:xlink = "http://www.w3.org/1999/xlink"
 elementFormDefault = "qualified">
        <xs:import namespace = "http://www.w3.org/1999/xlink"
      schemaLocation = "http://www.loc.gov/standards/mets/xlink.xsd"/>
        <xs:import namespace = "http://www.w3.org/XML/1998/namespace"
      schemaLocation = "http://www.w3.org/2001/xml.xsd"/>
        <xs:element name = "resources">
                <xs:complexType>
                        <xs:sequence>
                                <xs:element ref = "resource" maxOccurs = "unbounded"/>
                        </xs:sequence>
                </xs:complexType>
        </xs:element>
        <xs:element name = "resource">
                <xs:complexType>
                        <xs:sequence>
                                <xs:element ref = "httpRequest" minOccurs = "0"/>
                        </xs:sequence>
                        <xs:attribute ref = "xlink:href" use = "required"/>
                </xs:complexType>
        </xs:element>
        <xs:simpleType name = "httpMethod">
                <xs:annotation>
                        <xs:documentation xml:lang = "en">HTTP method (not all allowed methods are relevant to test suites, namely 'trace', 'options', 'delete', 'connect').</xs:documentation>
                </xs:annotation>
                <xs:restriction base = "xs:string">
                        <xs:enumeration value = "get"/>
                        <xs:enumeration value = "post"/>
                        <xs:enumeration value = "put"/>
                        <xs:enumeration value = "head"/>
                </xs:restriction>
        </xs:simpleType>
        <xs:simpleType name = "encoding">
                <xs:annotation>
                        <xs:documentation xml:lang = "en">Encoding types (e.g. for use in forms submission with HTML form [enctype] and with XForms. @TODO complete. [refer to 'media types': RFC 2045: 'Multipurpose Internet Mail Extensions (MIME) Part One: Format of Internet Message Bodies' at http://www.ietf.org/rfc/rfc2045.txt]</xs:documentation>
                </xs:annotation>
                <xs:restriction base = "xs:string">
                        <xs:enumeration value = "application/soap+xml"/>
                        <xs:enumeration value = "application/x-www-form-urlencoded"/>
                </xs:restriction>
        </xs:simpleType>
        <xs:simpleType name = "MIMEType">
                <xs:annotation>
                        <xs:documentation xml:lang = "en">MIME types @TODO complete [see 'encoding' below and list of MIME types at http://www.iana.org/assignments/media-types/].</xs:documentation>
                </xs:annotation>
                <xs:restriction base = "xs:string">
                        <xs:enumeration value = "text/html"/>
                        <xs:enumeration value = "text/xml"/>
                        <xs:enumeration value = "text/css"/>
                        <xs:enumeration value = "text/plain"/>
                        <xs:enumeration value = "application/pdf"/>
                        <xs:enumeration value = "application/postscript"/>
                        <xs:enumeration value = "application/rdf+xml"/>
                        <xs:enumeration value = "application/soap+xml"/>
                        <xs:enumeration value = "application/rtf"/>
                        <xs:enumeration value = "application/sgml"/>
                        <xs:enumeration value = "application/xhtml+xml"/>
                        <xs:enumeration value = "application/xml"/>
                        <xs:enumeration value = "application/xml-dtd"/>
                        <xs:enumeration value = "application/xml-external-parsed-entity"/>
                        <xs:enumeration value = "application/zip"/>
                        <xs:enumeration value = "audio/mpeg"/>
                        <xs:enumeration value = "audio/mpeg4-generic"/>
                        <xs:enumeration value = "image/cgm"/>
                        <xs:enumeration value = "image/jpeg"/>
                        <xs:enumeration value = "image/jp2"/>
                        <xs:enumeration value = "image/png"/>
                        <xs:enumeration value = "image/tiff"/>
                        <xs:enumeration value = "model/vrml"/>
                        <xs:enumeration value = "multipart/encrypted"/>
                        <xs:enumeration value = "multipart/form-data"/>
                        <xs:enumeration value = "multipart/signed"/>
                        <xs:enumeration value = "video/mpeg"/>
                        <xs:enumeration value = "video/mpeg4-generic"/>
                        <xs:enumeration value = "video/quicktime"/>
                        <xs:enumeration value = "video/raw"/>
                </xs:restriction>
        </xs:simpleType>
        <xs:attribute name = "value" type = "xs:string"/>
        <xs:element name = "header">
                <xs:complexType>
                        <xs:attribute name = "name" use = "required" type = "xs:string">
                                <xs:annotation>
                                        <xs:documentation xml:lang = "en">@TODO: (HTTP header name) can the type be refined?</xs:documentation>
                                </xs:annotation>
                        </xs:attribute>
                        <xs:attribute ref = "value"/>
                </xs:complexType>
        </xs:element>
        <xs:element name = "variable">
                <xs:complexType>
                        <xs:attribute name = "name" use = "required" type = "xs:string">
                                <xs:annotation>
                                        <xs:documentation xml:lang = "en">@TODO: (HTTP body variable name) can the type be refined?</xs:documentation>
                                </xs:annotation>
                        </xs:attribute>
                        <xs:attribute ref = "value"/>
                </xs:complexType>
        </xs:element>
        <xs:element name = "httpHeader">
                <xs:complexType>
                        <xs:sequence>
                                <xs:element ref = "header" maxOccurs = "unbounded"/>
                        </xs:sequence>
                </xs:complexType>
        </xs:element>
        <xs:element name = "httpBody">
                <xs:complexType>
                        <xs:sequence>
                                <xs:element ref = "variable" maxOccurs = "unbounded"/>
                        </xs:sequence>
                </xs:complexType>
        </xs:element>
        <xs:element name = "httpRequest">
                <xs:complexType>
                        <xs:sequence>
                                <xs:element ref = "httpHeader" minOccurs = "0"/>
                                <xs:element ref = "httpBody" minOccurs = "0"/>
                        </xs:sequence>
                        <xs:attribute name = "mediaType" type = "MIMEType"/>
                        <xs:attribute name = "encType" type = "encoding"/>
                </xs:complexType>
        </xs:element>
</xs:schema>
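
For illustration only, the following non-normative instance document (with placeholder URIs and values) shows how the schema above can describe the cases just mentioned: a resource identified by its URI alone, the same URI requested with an additional HTTP header, and a resource identified by a URI plus the variables submitted in the request body.

<?xml version = "1.0" encoding = "UTF-8"?>

<resources xmlns:xlink = "http://www.w3.org/1999/xlink">
        <!-- A resource identified by its URI alone (plain GET request) -->
        <resource xlink:href = "http://www.example.org/index.html"/>
        <!-- The same URI requested with an additional Accept-Language header -->
        <resource xlink:href = "http://www.example.org/index.html">
                <httpRequest mediaType = "text/html">
                        <httpHeader>
                                <header name = "Accept-Language" value = "fr"/>
                        </httpHeader>
                </httpRequest>
        </resource>
        <!-- A search result identified by the URI plus the submitted form variable -->
        <resource xlink:href = "http://www.example.org/search">
                <httpRequest encType = "application/x-www-form-urlencoded">
                        <httpBody>
                                <variable name = "query" value = "accessibility"/>
                        </httpBody>
                </httpRequest>
        </resource>
</resources>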

13.2 Scope Pattern List Schemas

The following schema defines an XML representation for a “scope pattern list”. This is a list of rules for determining whether a given URI lies within the scope of some set – typically defining the scope of a complete Web site, or of that portion of a site which is subject to an accessibility evaluation or certification.

<?xml version="1.0" encoding = "US-ASCII"?>

<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
 targetNamespace="http://www.wabcluster.org"
 xmlns="http://www.wabcluster.org"
 elementFormDefault="qualified">
 <!-- targetNamespace and default namespace are just
 placeholders... -->

 <xs:element name="scopePatternList">
   <xs:complexType>
     <xs:sequence>
        <xs:element name="rule" minOccurs="1" maxOccurs="unbounded">
         <xs:complexType>
           <xs:attribute name="type" use="required">
             <xs:simpleType>
                <xs:restriction base="xs:string">
                 <xs:enumeration value="include"/>
                 <xs:enumeration value="exclude"/>
                </xs:restriction>
             </xs:simpleType>
           </xs:attribute>
           <xs:attribute name="pattern" use="required" type="xs:string" />
         </xs:complexType>
        </xs:element>
     </xs:sequence>
   </xs:complexType>
 </xs:element>

</xs:schema>
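
For illustration only, the following non-normative instance of the schema shows a scope pattern list for a hypothetical site, using the placeholder namespace declared in the schema. The pattern syntax (here a simple trailing wildcard) is not defined by the schema and is only an assumed convention for this example; evaluation tools may define their own pattern grammar.

<?xml version="1.0" encoding="UTF-8"?>

<scopePatternList xmlns="http://www.wabcluster.org">
  <!-- Everything below the site root is in scope... -->
  <rule type="include" pattern="http://www.example.org/*"/>
  <!-- ...except the archive area, which is excluded from the evaluation -->
  <rule type="exclude" pattern="http://www.example.org/archive/*"/>
</scopePatternList>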

 

14 Appendix D: Documents related to user testing

14.1 Participant consent form

I acknowledge that I have been asked to participate in an accessibility study on a Web site. I agree to perform a number of tasks on different Web services.

By signing below, I agree and consent to participate in the accessibility study and allow the representative (and observers) to observe, record my comments, actions and observations.

I also give permission for the representative to video record me whilst I carry out the evaluation, and possibly use the video during presentations to other staff members involved in the development of the website. No other use of the video will be made without seeking my express permission.

I agree to keep confidential all details of all the Web services I am asked to evaluate. I agree not to tell anyone any details of the Web services I am asked to evaluate. I will not publicise, in any form, any details of the Web services I am asked to evaluate without the prior written consent of the registered legal owner of that Web service.

I understand that I will be compensated in the amount of ____ regardless of whether I am able to complete the study or not and that I am free to withdraw myself from the study at any time should I feel that to be necessary.

 

Signature 

Name 

14.2 Pre-evaluation questions

[order of questions in this section may be improved] 

  1. What is your profession?

  2. What is your educational background [need response categories appropriate to the country]

  3. What is your native language?

  4. Gender

  5. How old are you?

    Option 1: Less than 21

    Option 2: Between 21 and 30

    Option 3: Between 31 and 40

    Option 4: Between 41 and 50

    Option 5: Between 51 and 60

    Option 6: Over 60

  6. What is the nature of your disability?

    Option 1: Blind.

    Option 2: Partially sighted.

    Option 3: Deaf.

    Option 4: Hard of Hearing.

    Option 5: Dyslexic.

    Option 6: Physical impairment.

    Option 7: Other (please state below).

  7. Do you have any other disabilities?

    Option 1: Yes.

    Option 2: No.

    If you do have other disabilities, please give further details:

  8. What is your level of computer experience?

    Option 1: Not at all experienced.

    Option 2: Not very experienced.

    Option 3: Quite experienced.

    Option 4: Very experienced.

  9. How many hours per week do you browse/use the World Wide Web?

    Option 1: Never use the Web.

    Option 2: Between 1 and 5 hours per week.

    Option 3: Between 6 and 10 hours per week.

    Option 4: Between 11 and 20 hours per week.

    Option 5: More than 20 hours.

  10. What activities do you use the World Wide Web for?

  11. Do you use any assistive technologies? [if they are not clear about this, rephrase as How do you access the computer? Can provide some prompts – do you use a special keyboard, do you adjust the settings in the browser?] 

    Option 1: Yes.

    Option 2: No.

    If yes please specify which assistive technologies:

  12. Did you receive any training in that assistive technology? If so, what kind of training and how much?

14.3 Initial reaction to a Web site

  1. Have you ever visited this site before?

    Yes/No 

    If yes, ask how frequently the participant has visited the site. If they are a frequent visitor, they may not be a suitable evaluator – depending on the target audience for the Web site and the context of use. If this is the case, explain this politely and end the test.

    Rationale: this question serves to ensure that the participant is not familiar with the site, which would contaminate the results of the test.

  2. Without clicking on any links, have a look around the page and tell me what you think about it.

    Free response.

    Rationale: This task serves to orient the participant to the site. [Answers may be useful to site owners/developers in terms of whether the links are clear to potential users of the site] 

  3. What do you think the purpose of the site is? [optional] 

    Free response.

    Rationale: further orientation to the site. Q2 is sufficient on its own; this question only needs to be asked if the site owner is interested in whether the home page adequately communicates its purpose.

  4. On a scale of 1 to 7, where 1 is very poor and 7 is excellent, how easy is it to understand the structure of this page? [when answer is provided, ask the participant to comment on their rating; suitable prompts – “and what led you to give that rating?” “why did you give that rating?”]

    Response: Likert rating

    Also comments:

  5. On a scale of 1 to 7, where 1 is not at all clear and 7 is very clear, how clear is it where you can go from here and what you might find there? [when answer is provided, ask the participant to comment on their rating; suitable prompts – “and what led you to give that rating?” “why did you give that rating?”]

    Response: Likert rating

    Also comments:

  6. Overall, on a scale of 1 to 7, how easy is it to understand the site?

    Response: Likert rating

    Also comments:

14.4 Task assessment form

After completing each task, ask the participant the following questions:

  1. On a scale of 1 to 7, where 1 is not at all easy and 7 is very easy, how easy was it to understand how to complete the task on the site? [when answer is provided, ask the participant to comment on their rating; suitable prompts – “and what led you to give that rating?” “why did you give that rating?”] 

    Response: Likert rating

    Also comments:

  2. On a scale of 1 to 7, where 1 is not at all easy and 7 is very easy, how easy was it to navigate around the site when undertaking the task? [when answer is provided, ask the participant to comment on their rating; suitable prompts – “and what led you to give that rating?” “why did you give that rating?”] 

    Response: Likert rating

    Also comments:

  3. Overall, on a scale of 1 to 7, where 1 is not at all easy and 7 is very easy, how easy was it to complete the task? [when answer is provided, ask the participant to comment on their rating; suitable prompts – “and what led you to give that rating?” “why did you give that rating?”]

    Response: Likert rating

    Also comments:

14.5 Problem assessment form

  1. Location of the problem (URL, approximate position within the resource):

  2. Nature of the problem (provide a detailed free text description):

  3. Comments from the Participant:

  4. Rating of the severity of the problem by the participant:

    4 = Catastrophe – (e.g., completely impedes/disrupts my progress with the task) 

    3 = Major problem

    2 = Minor problem

    1 = Cosmetic problem only 

  5. Rating of the severity of the problem by the evaluator:

    4 = Catastrophe – (e.g., completely impedes/disrupts the participant's progress with the task) 

    3 = Major problem

    2 = Minor problem

    1 = Cosmetic problem only

  6. To be completed by the evaluator after the testing session:

    Criteria from section 5 violated:

14.6 Overall Web site reaction

After undertaking all the tasks, ask the participant the following questions:

  1. Thinking about your whole experience with this site, on a scale of 1 to 7, where 1 is not at all easy and 7 is very easy, how easy was it to understand the site? [when answer is provided, ask the participant to comment on their rating; suitable prompts – “and what led you to give that rating?” “why did you give that rating?”]

    Response: Likert rating

    Also comments:

  2. Thinking about your whole experience with this site, on a scale of 1 to 7, where 1 is not at all easy and 7 is very easy, how easy was it to navigate around the site? [when answer is provided, ask the participant to comment on their rating; suitable prompts – “and what led you to give that rating?” “why did you give that rating?”]

    Response: Likert rating

    Also comments:

  3. Thinking about your whole experience with this site, on a scale of 1 to 7, where 1 is not at all clear and 7 is very clear, how clear was the layout of information presented on the site? [when answer is provided, ask the participant to comment on their rating; suitable prompts – “and what led you to give that rating?” “why did you give that rating?”]

    Response: Likert rating

    Also comments:

  4. Thinking about your whole experience with this site, on a scale of 1 to 7, where 1 is not enough information and 7 is too much information, what is your reaction to the amount of information presented on the site? [when answer is provided, ask the participant to comment on their rating; suitable prompts – “and what led you to give that rating?” “why did you give that rating?”]

    Response: Likert rating

    Also comments:

  5. Thinking about your whole experience with this site, on a scale of 1 to 7, where 1 is not at all easy to use and 7 is very easy to use, how easy to use was the site to you personally? [when answer is provided, ask the participant to comment on their rating; suitable prompts – “and what led you to give that rating?” “why did you give that rating?”] 

    Response: Likert rating

    Also comments:

  6. What were the main things you liked about the site?

    Response: open

  7. What were the main things you disliked about the site?

    Response: open

  8. If you could change one thing about the site, what would it be?

    Response: open

  9. Would you return to this site in the future? Please give some reasons for your response.

    Response: open

  10. Thinking about your whole experience with this site, on a scale of 1 to 7, where 1 is not at all easy to use and 7 is very easy to use, how easy to use was the site to other people with X [the participant’s disability]? [when answer is provided, ask the participant to comment on their rating; suitable prompts – “and what led you to give that rating?” “why did you give that rating?”] 

    Response: Likert rating

    Also comments:

Comments on specific features of the site:

  1. If the site provides a text-only version in addition to a graphical version and the participant evaluated that version,

    (a) You chose to use the text-only version of the site, could you give any reasons for that choice?

    Or

    (b) You chose to use the graphical version of the site, could you give any reasons for that choice?

  2. When using the site, you used the X facility (e.g., site map). On a scale of 1 to 7, where 1 is not at all useful and 7 is very useful, how useful was that facility?

    [when answer is provided, ask the participant to comment on their rating; suitable prompts – “and what led you to give that rating?” “why did you give that rating?”] 

    Response: Likert ratings

    Also comments:

    This question can be repeated for all features of particular accessibility interest on the site.

  3. Finally, are there any other comments you would like to make about the site that we have not already discussed?

Thank the participant for their time.

15 Appendix E: Contributors to this document

16 Appendix F: W3C® DOCUMENT LICENSE

http://www.w3.org/Consortium/Legal/2002/copyright-documents-20021231

Public documents on the W3C site are provided by the copyright holders under the following license. By using and/or copying this document, or the W3C document from which this statement is linked, you (the licensee) agree that you have read, understood, and will comply with the following terms and conditions:

Permission to copy, and distribute the contents of this document, or the W3C document from which this statement is linked, in any medium for any purpose and without fee or royalty is hereby granted, provided that you include the following on ALL copies of the document, or portions thereof, that you use:

  1. A link or URL to the original W3C document.

  2. The pre-existing copyright notice of the original author, or if it doesn't exist, a notice (hypertext is preferred, but a textual representation is permitted) of the form: "Copyright © [$date-of-document] World Wide Web Consortium, (Massachusetts Institute of Technology, European Research Consortium for Informatics and Mathematics, Keio University). All Rights Reserved.
    http://www.w3.org/Consortium/Legal/2002/copyright-documents-20021231"

  3. If it exists, the STATUS of the W3C document.

When space permits, inclusion of the full text of this NOTICE should be provided. We request that authorship attribution be provided in any software, documents, or other items or products that you create pursuant to the implementation of the contents of this document, or any portion thereof.

No right to create modifications or derivatives of W3C documents is granted pursuant to this license. However, if additional requirements (documented in the Copyright FAQ) are satisfied, the right to create modifications or derivatives is sometimes granted by the W3C to individuals complying with those requirements.

THIS DOCUMENT IS PROVIDED "AS IS," AND COPYRIGHT HOLDERS MAKE NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, NON-INFRINGEMENT, OR TITLE; THAT THE CONTENTS OF THE DOCUMENT ARE SUITABLE FOR ANY PURPOSE; NOR THAT THE IMPLEMENTATION OF SUCH CONTENTS WILL NOT INFRINGE ANY THIRD PARTY PATENTS, COPYRIGHTS, TRADEMARKS OR OTHER RIGHTS.

COPYRIGHT HOLDERS WILL NOT BE LIABLE FOR ANY DIRECT, INDIRECT, SPECIAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF ANY USE OF THE DOCUMENT OR THE PERFORMANCE OR IMPLEMENTATION OF THE CONTENTS THEREOF.

The name and trademarks of copyright holders may NOT be used in advertising or publicity pertaining to this document or its contents without specific, written prior permission. Title to copyright in this document will at all times remain with copyright holders.

This formulation of W3C's notice and license became active on December 31 2002. This version removes the copyright ownership notice such that this license can be used with materials other than those owned by the W3C, moves information on style sheets, DTDs, and schemas to the Copyright FAQ, reflects that ERCIM is now a host of the W3C, includes references to this specific dated version of the license, and removes the ambiguous grant of "use". See the older formulation for the policy prior to this date. Please see our Copyright FAQ for common questions about using materials from our site, such as the translating or annotating specifications. Other questions about this notice can be directed to site-policy@w3.org.
