WCAG 2.0

In a recent controversial article on A List Apart, Joe Clark proclaimed To Hell with WCAG 2. On the surface, it seems like Joe made a strong case against the WCAG 2.0 draft in an attempt to rally support for his movement against the WCAG Working Group. Although he did raise many issues, in several cases he failed to explain exactly what the problem is.

I’m fully aware that not everyone who reads Joe’s article will wade through several hundred pages from the three WCAG specifications, and who could blame you? I read them, but the specs are certainly long, tedious and, in many cases, extremely difficult to comprehend.

Exactly what a “page” is, let alone a “site,” will be a matter of dispute.

I somewhat agree with this. In an attempt to be technologically independent, common terms like “page” and “site” have been replaced with much more obscure technical terms like “web unit”, “authored unit” and “authored component”. Some of these terms aren’t even clearly defined or easily distinguishable from each other.

For instance, consider “authored unit” and “web unit”: two terms that have different definitions, but I still can’t quite figure out what exactly distinguishes a “set of material created as a single body by an author” from “a collection of information, consisting of one or more resources, intended to be rendered together”.

A future website that complies with WCAG 2 won’t need valid HTML—at all, ever. (More on that later.) You will, however, have to check the DOM outputs of your site in multiple browsers and prove they’re identical.

There are two issues here: whether validity should be required by WCAG 2.0, and the insanity of that particular technique. I’m sure everyone agrees that comparing the DOMs is an unrealistic expectation, and besides, the technique is non-normative and so can be safely ignored.

In the current draft of the guidelines, success criterion 4.1.1 states:

Web units or authored components can be parsed unambiguously, and the relationships in the resulting data structure are also unambiguous.

How to Meet Success Criterion 4.1.1 clarifies the meaning of that a little, though it still uses rather technical lingo. It basically means that documents can be parsed properly without depending upon error recovery techniques, especially where the results are inconsistent between browsers. Strictly speaking, the wording of that success criterion doesn’t explicitly require documents to be valid; however, one of the techniques described to meet it is in fact Validating Web Units.
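To illustrate the kind of problem that success criterion is getting at, here’s a contrived fragment of my own (not an example from the spec). The improperly nested markup forces a parser to fall back on error recovery, and different browsers may repair it into different DOM trees; the valid version has only one sensible parse.

    <!-- Improperly nested inline elements: the parser has to guess where the
         em element really ends, and different error-recovery rules can
         produce different DOM trees for the same markup. -->
    <p>This is <strong>very <em>important</strong> text</em>.</p>

    <!-- A valid, unambiguous equivalent with only one possible parse. -->
    <p>This is <strong>very <em>important</em></strong><em> text</em>.</p>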

Personally, I’m not convinced validity should be strictly required by WCAG 2.0; I believe it should remain as just a technique. The success criteria should instead outline the purpose of validation. In other words, if you were asked why you validate, what would your answer be, and how does it relate to accessibility?

As it currently stands, the working group seems to recognise that the purpose of validation is to help ensure that documents can be parsed correctly and interoperably between user agents. It’s important to realise that validation is just a technique, not a goal in itself. It is indeed a very good technique and one that can be used to help meet the success criterion; but it is a technique nonetheless.

I’m not saying validation isn’t important. I do believe it is very important and I personally insist upon it for any site I develop, but it needs to be put into the proper context. Although I wouldn’t object to its inclusion in the guidelines, I don’t believe it needs to be enforced as a requirement on its own. If validation is to succeed in web development, it should do so as part of quality assurance and best practice guidelines.

You can still use tables for layout. (And not just a table—tables for layout, plural.)

It is unfortunate that the wording of these guidelines explicitly allows the use of tables for layout and does very little to discourage their use. While I believe it should be recognised that table layouts can be made relatively accessible to assistive technology, their use should still be considered a failure.
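For what it’s worth, here’s a rough sketch of my own (not something taken from the techniques document) of the sort of layout table that can at least be kept relatively harmless for assistive technology: no data-table markup implying relationships, and cells that still read in a sensible order when linearised.

    <table border="0" cellspacing="0" cellpadding="0">
      <tr>
        <!-- Main content first in source order, so it linearises sensibly -->
        <td>
          <h1>Article title</h1>
          <p>The article text goes here.</p>
        </td>
        <!-- No th, caption or summary attribute, which would wrongly suggest
             data relationships between the cells -->
        <td>
          <ul>
            <li><a href="/">Home</a></li>
            <li><a href="/archive">Archive</a></li>
          </ul>
        </td>
      </tr>
    </table>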

Your page, or any part of it, may blink for up to three seconds. Parts of it may not, however, “flash.”

I don’t particularly like blinking or flashing content at all, but I’m not exactly sure of the problem here. It seems to me that the guideline recognises that prolonged blinking/flashing is an accessibility issue and addresses it by limiting the time frame.

I’m no expert on the issue, nor am I aware of the use cases (beyond annoying advertisements) for allowing any blinking at all, but 3 seconds seems like a reasonable compromise to me. Is 3 seconds enough time to trigger an epileptic seizure or any other specific problems? Or is it merely an objection based on a personal dislike of blinking, rather than solid facts?

You’ll be able to define entire technologies as a “baseline,” meaning anyone without that technology has little, if any, recourse to complain that your site is inaccessible to them.

I don’t fully agree with this issue, and while I do see the conceptual problem with the baseline, I don’t believe it is as bad as some people make it out to be.

There is a major difference between the statements “This site is accessible” and “This site is accessible to users whose UAs meet the baseline requirements”. While the latter will theoretically apply to any document conforming to WCAG 2.0, the former is a much broader statement that effectively means “This site is accessible [to everyone]”.

If your baseline is set too high, users will have “recourse to complain that your site is inaccessible to them”. The problem is that there is little guidance on specifying a realistic, accessible baseline, which means the concerns about organisations setting it too high are indeed valid. For example, if your baseline includes JavaScript, anyone who chooses to disable JavaScript will effectively be denied access to the site because their UA doesn’t meet the requirements. This goes against the very principle of Unobtrusive JavaScript, and I can see how this is a serious problem.
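To be clear about what I mean by unobtrusive JavaScript, here’s a minimal sketch of my own (the id and filenames are made up for illustration): the link works on its own, and the script, if it runs at all, merely enhances it.

    <p><a href="search.html" id="search-link">Search this site</a></p>

    <script type="text/javascript">
      // Enhancement only: open the search page in a small window when
      // scripting is available. Without JavaScript, the plain link still
      // works, so nobody is locked out by the baseline.
      var link = document.getElementById("search-link");
      if (link) {
        link.onclick = function () {
          window.open(this.href, "sitesearch", "width=500,height=400");
          return false;
        };
      }
    </script>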

On the other hand, it’s an unrealistic expectation that plain old HTML will be able to meet every possible use case and, as Joe will tell you himself, other technologies can be made just as accessible (e.g. accessible PDFs).

You’ll be able to define entire directories of your site as off-limits to accessibility (including, in WCAG 2’s own example, all your freestanding videos).

I don’t see how this is a serious problem. The ability to scope a conformance statement to specific sections of the site has some very good use cases. Although failing to meet conformance for one section simply because the author doesn’t want to do so isn’t a very good excuse, the ability to scope a conformance claim is better than lying by saying the whole site is accessible.

Not that anybody ever made them accessible, but if you post videos online, you no longer have to provide audio descriptions for the blind at the lowest “conformance” level. And only prerecorded videos require captions at that level.

I’m not sure how either of these is a serious issue. Those with the knowledge and tools available to provide audio descriptions of video can still do so, but requiring audio descriptions at the lowest level is an unreasonable expectation for most authors, who are unlikely to have the technical skills, let alone the tools, to do so.

Such authors can still provide a full text alternative, which is much easier to produce. However, Joe claimed that the full text alternative is a “discredited holdover from WCAG 1”, yet failed to provide or link to any evidence to support that claim. His explanation, that parts of it are not needed by the blind and other parts not by the deaf, doesn’t seem to have a point – it certainly does nothing to discredit the technique.

As for requiring captions for pre-recorded videos only, I don’t understand why this is a problem at all. Would anyone seriously expect the average person with a live web cam to be able to provide captions in real time? That would require serious effort, which the average author couldn’t possibly manage anyway. Of course, it may be realistic for something like a television studio broadcasting its news online to provide captions, just as it does for TV, in which case it can claim conformance to a higher level of accessibility.

Your podcasts may have to be remixed so that dialogue is 20 decibels louder than lengthy background noise.

That is only required if authors wish to claim conformance to level 3, although even then I don’t understand what the problem is here. Is this an issue because the technique presents no accessibility benefits? Is it because it’s an unrealistic expectation for authors to achieve? Or is there some other reason that was never explained?

You can’t use offscreen positioning to add labels (e.g., to forms) that only some people, like users of assistive technology, can perceive. Everybody has to see them.

I interpret the specification differently from Joe with regard to this issue. How to Meet Success Criterion 1.3.1 states:

The intent of this success criterion is to ensure that information and relationships that are implied by visual or auditory formatting are preserved when the presentation format changes. […] The purpose of this success criterion is to ensure that when such relationships are perceivable to one set of users, those relationships can be made to be perceivable to all.

Not everybody has to see the text labels; there is nothing there that says they can’t be hidden off screen. What I believe it is saying is that the same meaning needs to be conveyed to all users, regardless of the presentation. For a visual user, the meaning may be conveyed through the visual layout, colours, icons and so on, while for an aural user the same meaning may be conveyed by speaking the text label.
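For example, this is roughly how I would hide a form label off screen while keeping it available to assistive technology (the class name is my own invention):

    <style type="text/css">
      /* Positioned off screen rather than display: none, which would hide
         the label from screen readers as well as from sighted users. */
      label.offscreen {
        position: absolute;
        left: -9999px;
      }
    </style>

    <form action="/search" method="get">
      <label class="offscreen" for="q">Search terms</label>
      <input type="text" name="q" id="q">
      <input type="submit" value="Search">
    </form>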

CSS layouts, particularly those with absolutely-positioned elements that are removed from the document flow, may simply be prohibited at the highest level. In fact, source order must match presentation order even at the lowest level.

Again, I interpret the spec differently. Nothing in it says the presentation order must match the source order; it simply states that the same meaning must be conveyed to the user regardless of the presentation. It’s fine to use absolute positioning (or any other layout method) to alter the presentational ordering, as long as the meaning of the content is not altered.
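A quick sketch of what I mean (the ids and measurements are arbitrary): the sidebar is displayed on the left visually, but it comes after the content in the source, and nothing about the content’s meaning depends on that visual ordering.

    <style type="text/css">
      /* The sidebar is pulled to the left visually even though it appears
         after the main content in the source order. */
      #content { margin-left: 12em; }
      #sidebar { position: absolute; top: 0; left: 0; width: 10em; }
    </style>

    <div id="content">
      <h1>Main article</h1>
      <p>The article text comes first in the source.</p>
    </div>

    <div id="sidebar">
      <ul>
        <li><a href="/">Home</a></li>
        <li><a href="/about">About</a></li>
      </ul>
    </div>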

Also at the highest level, you have to provide a way to find all of the following:

  1. Definitions of idioms and “jargon”
  2. Expansion of acronyms
  3. Pronunciations of some words

Again, I don’t see how this is a problem at all. It is only required for level 3 conformance, and is there any reason why providing such things would be a bad idea? Providing definitions may be as simple as linking to a glossary or dictionary entry, and expanding acronyms is as simple as using the <abbr> and <acronym> elements. Providing pronunciations of some words, where necessary, is not only useful for disabled users, it’s useful for anyone who reads a difficult word they’ve never heard before and that may not have an obvious pronunciation.
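Something along these lines would do (the glossary URL is made up for the sake of the example):

    <!-- Expanding an acronym and an abbreviation inline -->
    <p>The <acronym title="World Wide Web Consortium">W3C</acronym> publishes the
    <abbr title="Web Content Accessibility Guidelines">WCAG</abbr> documents.</p>

    <!-- Defining jargon by linking to a glossary entry -->
    <p>Each conformance claim covers a
    <a href="glossary.html#web-unit">web unit</a> rather than a whole site.</p>

    <!-- An inline pronunciation for a word whose spelling doesn't give it away -->
    <p>That approach is the epitome (pronounced "ih-PIT-uh-mee") of flexibility.</p>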

WCAG Samurai

Joe also announced the launch of the WCAG Samurai: an effort to publish corrections for and extensions to the existing WCAG 1.0 recommendation. In principle, it’s a very good idea for the community to begin addressing accessibility issues and the serious problems with WCAG 2.0 themselves, but my main concern with the WCAG Samurai is this:

… another thing we’re not going to do is run a totally open process. It’s a viable model for standards development, one I have championed in another context, but in web accessibility it is proven not to work.

WCAG Samurai will toil in obscurity for the foreseeable future. Membership rolls will not be published, and membership is by invitation only.

Working on this behind closed doors seems like a huge mistake to me. In fact, it seems downright hypocritical of Joe to discredit the WCAG 2.0 working group process on the grounds that it, and indeed the results themselves, are inaccessible to a wide audience, only to go ahead and make the WCAG Samurai process inaccessible to, and hidden away from, all but the few invited elite.

I’m not too concerned that participation is strictly limited to the select few, but I think there really needs to be a way for the community to at least watch from the sidelines and see everything that goes on, even if they can’t contribute directly. It has been stated that the WCAG Samurai website will soon have a news feed for updates, so there is a chance that’s exactly what we’ll get.

I will admit, however, that it may be too early to pass judgement on the WCAG Samurai right now, and it’s only fair to give them a chance. After all, the members are unlikely to be weighed down by absurd corporate interests, but rather have the best interests of both web developers and end users in mind. We’ll just have to wait and see.

9 thoughts on “WCAG 2.0”

  1. I have explained at exhaustive length on my site why I do not, in fact, have a “movement against the WCAG Working Group” or anything remotely like it. Feel free to disagree with my assessment of WCAG 2, since we’re both venturing informed opinions, but that accusation is false.

  2. I totally agree with Joe Clark’s comments about captioning.

    Full text alternative doesn’t do me justice. I want to see “both” video and captions at the same time, not just English text where just about half/most Deaf people are too illiterate.

    Believe me, because I am Deaf, and I “live” with it.

    Thank you.

  3. I don’t particularly like blinking or flashing content at all, but I’m not exactly sure of the problem here. It seems to me that the guideline recognises that prolonged blinking/flashing is an accessibility issue and addresses it by limiting the time frame.

    Frequency is a better indicator than duration for whether flash content will trigger seizures. Or other problems, for that matter.

    Even the brief and unintentional(?) flash content on WCAG 2.0 Editor John Slatin’s homepage is enough to give me a photosensitive headache.

    There’s more to photosensitive epilepsy than grand mal seizures. Flash content can cause other problems for the brain: complex partial seizures that look like daydreaming, simple partial seizures (auras) that cause discomfort, even sensory disturbances and headaches that do not resolve to seizure. Though minor in comparison to grand mal seizures, these problems are still significant. They scar the brain and, over time, cause losses in cognitive functioning, for example reducing the brain’s ability to acquire and retain verbal memory. The scarring is permanent and persistent. (Autopsies of epileptics who’d been faithfully taking their meds for years show brain damage as extensive and recent as that done to unmedicated epileptics.)

    The WCAG 2.0 guidelines for flash content fall short of protecting people with photosensitive epilepsy.

    More importantly, WCAG 2.0 does nothing to protect those most vulnerable to photosensitive seizures (though not necessarily to photosensitive epilepsy)—young children. How do the new flash content guidelines protect children too young to read text warnings, too young even to know that s/he is photosensitive? Most of the flash content I encounter on the web—from games to advertising—is aimed at kids.

  4. Both Lachlan and some of the respondents to his comments have valid points. My question is whether any of you provided your comments to the Web Accessibility Initiative? The deadline for comments is 22 June. See [http://www.w3.org/WAI/WCAG20/comments/].
