According to industry research, the vast majority of IT leaders view test automation as the single most important factor in accelerating software innovation. The same holds for automated accessibility testing – a great first step toward making websites accessible to persons with disabilities. The new breed of automated tools can catch nearly 83% of all accessibility issues (not to be confused with overlays, which claim 100% automatic compliance).
Automation does not, however, replace the need for manual testing, the only reliable way to ensure all users can access the information on your website without difficulty. For complete coverage, a human tester must exercise the site with the various assistive technologies that people with disabilities depend on, across multiple browsers and interfaces. Manual testing, though, comes with several challenges, which is why testers often perceive it as painful or complicated.
Pain Point One – WCAG Has Many Success Criteria
The most widely accepted set of accessibility standards is the Web Content Accessibility Guidelines (WCAG), created by the W3C. WCAG success criteria are organized by version and conformance level. Versions include WCAG 1.0, 2.0, and most recently 2.1, with version 3.0 actively under development. The success criteria in each version are further broken down into conformance levels A, AA, and AAA, with each level adding criteria.
Depending on which WCAG conformance level you target, you can reasonably expect to check somewhere between 25 and 50 success criteria when assessing the accessibility of your site or app. Performing accurate, full-coverage accessibility testing therefore requires considerable familiarity with, and expertise in, the WCAG success criteria.
Fortunately, testing software has matured to the point where tools offer guided, step-by-step instructions for the specific criteria that must be tested manually. One example is keyboard navigation, which persons with motor disabilities often rely on. Here, the tool simulates a user who interacts with a web page solely through the keyboard, then walks the tester step by step through identifying accessibility issues that arise during keyboard navigation. Consider a news site with a carousel featuring five news stories, only one of which is visible at a time. There need to be ‘previous’ and ‘next’ buttons that can be reached and triggered using a keyboard, so that keyboard users can navigate through the stories just as easily as they could with a mouse – as in the sketch below.
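To make this concrete, here is a minimal TypeScript sketch of carousel controls that satisfy the requirement. The element IDs and class name (‘news-carousel’, ‘prev’, ‘next’, ‘slide’) are hypothetical; the key point is that native button elements are focusable and keyboard-operable by default, so Tab reaches them and Enter or Space activates them with no custom key handling.

    // Keyboard-accessible carousel controls (illustrative sketch).
    // All element IDs and classes below are hypothetical.
    const slides = Array.from(
      document.querySelectorAll<HTMLElement>('#news-carousel .slide')
    );
    let current = 0;

    function show(index: number): void {
      current = (index + slides.length) % slides.length; // wrap around
      // Hiding non-current slides also removes them from the
      // accessibility tree, so screen readers only see one story.
      slides.forEach((slide, i) => { slide.hidden = i !== current; });
    }

    // Native <button> elements receive keyboard focus via Tab and are
    // activated by Enter/Space out of the box.
    const prev = document.querySelector<HTMLButtonElement>('#prev')!;
    const next = document.querySelector<HTMLButtonElement>('#next')!;
    prev.setAttribute('aria-label', 'Previous story');
    next.setAttribute('aria-label', 'Next story');
    prev.addEventListener('click', () => show(current - 1));
    next.addEventListener('click', () => show(current + 1));

    show(0); // start on the first story

Had the buttons been implemented as clickable divs instead, they would be invisible to the Tab key, and a keyboard-only user would have no way to reach stories two through five – exactly the kind of issue a guided manual test is designed to surface.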
Pain Point Two – Screen Readers Take Time and Expertise to Learn
In addition to understanding numerous success criteria, accessibility testers also need to know how to use a screen reader, a popular form of assistive technology. There are numerous screen readers available, including JAWS, NVDA, VoiceOver, and TalkBack. Now, imagine having to memorize more than 50 success criteria that need to be tested across several different screen readers in multiple testing environments (desktop, responsive web, native mobile, etc.).
Luckily, customized testing methodologies have been developed around a chosen ruleset and testing environment, which reduces the amount of memorization and prior experience a tester needs to run effective tests. With specific screen reader testing steps outlined for each ruleset and testing environment, accessibility teams can improve their efficiency and create a standardized testing process for the whole team to follow – whether they are screen reader experts or not – resulting in a significant productivity boost. A sketch of how such steps might be encoded follows below.
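As an illustration, a guided-testing tool might encode its methodology as data keyed by ruleset and testing environment, so each tester sees only the instructions relevant to the current run. The schema below is a hypothetical TypeScript sketch, not any particular product’s format:

    // Hypothetical schema for a guided screen reader testing methodology.
    type Ruleset = 'WCAG21-A' | 'WCAG21-AA' | 'WCAG21-AAA';
    type Environment = 'desktop' | 'responsive-web' | 'native-mobile';
    type ScreenReader = 'JAWS' | 'NVDA' | 'VoiceOver' | 'TalkBack';

    interface GuidedStep {
      criterion: string;     // e.g. '2.1.1 Keyboard'
      screenReader: ScreenReader;
      instruction: string;   // what the tester should do
      expected: string;      // what a passing result looks like
    }

    // Steps are looked up by ruleset + environment, so memorizing the
    // full criteria list is no longer a prerequisite for testing.
    const methodology:
      Partial<Record<Ruleset, Partial<Record<Environment, GuidedStep[]>>>> = {
      'WCAG21-AA': {
        desktop: [
          {
            criterion: '2.1.1 Keyboard',
            screenReader: 'NVDA',
            instruction: 'Tab to the carousel controls and activate "next" with Enter.',
            expected: 'Focus is visible and the newly shown story is announced.',
          },
        ],
      },
    };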
Pain Point Three – WCAG Success Criteria Can Be Open to Interpretation
Some WCAG success criteria are highly context-dependent. Criteria that appear simple to test for, like color contrast requirements, can sometimes be tricky to analyze. For example, imagine a black, bold-faced word with dark gray shadowing set against a light gray website background.
In this example, which two colors should be compared to verify whether the minimum color contrast requirements are satisfied? Light gray and black, dark gray and black, or light gray and dark gray?
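Whichever pair is chosen, the comparison itself is mechanical: WCAG 2.x defines contrast as the ratio of the relative luminances of the two colors, and Level AA (success criterion 1.4.3) requires at least 4.5:1 for normal text and 3:1 for large text. Here is a minimal TypeScript sketch of that formula; the hex values standing in for the grays are illustrative assumptions:

    // WCAG 2.x contrast ratio between two sRGB colors given as '#rrggbb'.
    function relativeLuminance(hex: string): number {
      const [r, g, b] = [0, 2, 4].map((i) => {
        const c = parseInt(hex.slice(i + 1, i + 3), 16) / 255;
        // Linearize each sRGB channel per the WCAG definition.
        return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
      });
      return 0.2126 * r + 0.7152 * g + 0.0722 * b;
    }

    function contrastRatio(a: string, b: string): number {
      const [lighter, darker] =
        [relativeLuminance(a), relativeLuminance(b)].sort((x, y) => y - x);
      return (lighter + 0.05) / (darker + 0.05);
    }

    // Illustrative values: black text, dark gray shadow, light gray background.
    console.log(contrastRatio('#000000', '#d3d3d3').toFixed(2)); // black vs. light gray
    console.log(contrastRatio('#555555', '#d3d3d3').toFixed(2)); // dark gray vs. light gray
    console.log(contrastRatio('#000000', '#555555').toFixed(2)); // black vs. dark gray

The formula answers ‘how much contrast?’ but not ‘contrast between which two colors?’ – and that judgment call is exactly where expert opinions diverge.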
In scenarios like this, different accessibility experts can (and will) have different opinions. For a developer, there is nothing more frustrating than receiving different remediation requirements for the same issue. This can create confusion and introduce significant delays in the release of a product.
The solution is clear, detailed documentation of requirements, which reduces the number of issues open to differing interpretation. A properly documented testing process helps ensure that results from an accessibility test team are consistent, and it saves significant time otherwise spent relaying issues back and forth between testers and developers.
Automated testing is great for catching ‘low-hanging fruit’ problems without having to be (or hire) an accessibility expert, but it does not eliminate the need for manual testing. And while certain characteristics of manual testing can make it onerous, guided testing tools, auditors, methodologies, and more are available to take the pressure off accessibility teams. Together, these let development teams strike the right balance of manual and automated accessibility testing and identify and repair the greatest possible number of issues.