Why automated accessibility testing isn’t enough
Accessibility testing is roughly 30% automated and 70% manual. A common myth is that once automated tools have run, accessibility testing is complete. Automated testing should still be performed before the manual assessment, however: it identifies many underlying code problems, which helps you eliminate bugs you would otherwise find during manual testing and saves time spent testing and logging issues.
Accessibility compliance is required by law, including Section 508 and the ADA (Americans with Disabilities Act) in the U.S. and EN 301 549 in Europe.
You can build a website that shows zero accessibility errors and warnings yet is still very hard to read, even for people without disabilities. Let's take a look at the CSS Zen Garden website.
When you test the code with free automated tools such as WAVE (wave.webaim.org) or Microsoft's Accessibility Insights for Web, you will notice that they report zero validation errors and zero alerts, a false positive that suggests the site has passed accessibility compliance.
However, if you view the website, you will find it visually hard to read. So don't rely on automated tools alone: always do a manual QA pass, using automated checks as a starting point to increase your testing efficiency.
We need to make websites accessible to screen readers such as NVDA and JAWS, while also making sure they are readable by people who do not use assistive technology, such as people with low vision or those with no vision impairment at all.
Another example is reading level. According to WCAG's Reading Level success criterion (a Level AAA requirement, i.e., good to have), content should be written as clearly and simply as possible. Tools such as the Yoast SEO plugin for WordPress can help you determine a readability score, but manual checks are necessary here as well, because the machine still doesn't really know the author's actual intent.
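To see why a readability score is something a tool can report at all, note that formulas like Flesch Reading Ease are purely mechanical. Here is a minimal Python sketch; the syllable counter is a rough vowel-group heuristic and the sentence splitting is naive, so treat it as an illustration of the idea rather than a faithful reimplementation of any particular plugin:

```python
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count groups of consecutive vowels."""
    count = len(re.findall(r"[aeiouy]+", word.lower()))
    # A trailing silent 'e' usually does not add a syllable.
    if word.lower().endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: higher scores mean easier-to-read text."""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    n = len(words)
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)
```

A short, plain sentence scores high while jargon-heavy prose scores low, but the number says nothing about whether the content actually makes sense, which is exactly why the manual check remains necessary.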
Yet another example is alt (alternative) text. According to WCAG 1.1.1 Non-text Content, an automated tool will tell you that an alt text is present, but it cannot tell you whether the text correctly describes the image, or whether the alt text should be empty, as for a decorative image. Microsoft PowerPoint generates alt text for images using machine learning, but it is not always accurate either. I have also seen developers simply copy the title of the section where the image is used as the image's alt text, which makes assistive technologies read the same text twice and annoys their users. For example, for the accompanying image here, a good alt text could be "Syrian refugee girl playing jump rope in playground," but I would not be surprised to see developers use alt text such as "Syrian Refugee Girl," which is not incorrect but does not give a clear picture of the photo to those using assistive technologies. Our goal is to unify the experience for all users while trying to make it better.
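Detecting a *missing* alt attribute is the part a machine can do. A minimal Python sketch using only the standard library (the filenames are hypothetical) shows the limit clearly: the check can report presence or absence, but it cannot judge whether the text is a good description:

```python
from html.parser import HTMLParser

class AltTextAuditor(HTMLParser):
    """Flags <img> tags that have no alt attribute at all.

    Note the deliberate gap: alt="" passes, because an empty alt is the
    correct markup for a decorative image. Whether the alt text actually
    describes the image still requires human review.
    """
    def __init__(self):
        super().__init__()
        self.missing = []  # src values of images with no alt attribute

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if "alt" not in attrs:
                self.missing.append(attrs.get("src", "<no src>"))

auditor = AltTextAuditor()
auditor.feed('<img src="girl.jpg"><img src="border.png" alt="">')
print(auditor.missing)  # prints ['girl.jpg']
```

An automated pass like this cheaply catches the mechanical failures, leaving the manual reviewer to focus on the judgment call: is "Syrian Refugee Girl" good enough, or should it be the fuller description?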
Automation should be part of the overall accessibility QA process
You can and should use automation tools to expedite the accessibility QA process and find errors such as code-structure issues, missing alt text, insufficient color contrast ratios, and missing ARIA labels, but please do not rely on automation tools alone. Maybe automated QA tools will become the expert in the future, but as of today you have to rely on manual testing to ensure that your website is accessible.
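Color contrast is a good example of a check that automation handles well, because WCAG defines relative luminance and the contrast ratio exactly. A minimal Python sketch of that formula (the hex colors below are illustrative):

```python
def relative_luminance(hex_color: str) -> float:
    """Relative luminance of an sRGB hex color, per the WCAG 2.x definition."""
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4))

    def linearize(c: float) -> float:
        # Undo sRGB gamma encoding before weighting the channels.
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

    r, g, b = linearize(r), linearize(g), linearize(b)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: str, bg: str) -> float:
    """WCAG contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05)."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# WCAG 1.4.3 (AA) requires at least 4.5:1 for normal text.
print(round(contrast_ratio("#000000", "#ffffff"), 2))  # prints 21.0
```

Because the math is fixed by the standard, a tool can flag contrast failures with confidence; it is the content- and context-dependent checks where the human must take over.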
You need to remind the teams involved in producing designs, content, and code about accessibility best practices every now and then, such as through regular training, events, and fun-based emails. I will discuss some ideas in my upcoming post on continuous accessibility education.
How do you perform website accessibility testing for blogs, websites, or software?