DevOps.com

Three Manual Accessibility Testing Pain Points

By: Sachin Gupta on April 23, 2021

According to industry research, the vast majority of IT leaders view test automation as the single most important factor in accelerating software innovation. This also applies to automated accessibility testing, a great first step toward making websites accessible to persons with disabilities. The new breed of automated tools can catch nearly 83% of all accessibility issues (not to be confused with overlays, which claim 100% automatic compliance).


Automation does not, however, replace the need for manual testing, which is the only surefire way to ensure that all users can access the information on your website without difficulty. For complete coverage, you also need to perform manual accessibility testing (which requires a human using the various assistive technologies that people with disabilities depend on) across multiple browsers and interfaces. Manual testing comes with several challenges of its own, which is why testers often perceive it as painful or complicated.


Pain Point One – WCAG Has Many Success Criteria

The most widely accepted set of accessibility standards is the Web Content Accessibility Guidelines (WCAG), created by the W3C. WCAG criteria are organized by version and conformance level. Versions include WCAG 1.0, 2.0 and, most recently, 2.1, with version 3.0 actively under development. The success criteria in each version are further broken down into conformance levels A, AA and AAA, with each level adding criteria on top of the one below it.

Depending on which WCAG conformance level you choose, you can reasonably expect to check somewhere between 25 and 50 success criteria when assessing the accessibility of your site or app. Performing accurate, full-coverage accessibility testing therefore requires significant familiarity and expertise with the WCAG success criteria.
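The spread of criteria counts can be sketched from the published W3C recommendations. A minimal quick-reference in Python (the counts below come from the WCAG 2.0 and 2.1 specs; always defer to the spec itself for the authoritative list):

```python
# Success-criterion counts per WCAG version and conformance level,
# per the published W3C recommendations (2.0: 2008, 2.1: 2018).
WCAG_CRITERIA = {
    "2.0": {"A": 25, "AA": 13, "AAA": 23},
    "2.1": {"A": 30, "AA": 20, "AAA": 28},
}

def criteria_to_test(version: str, level: str) -> int:
    """Cumulative number of success criteria to satisfy a level:
    AA conformance includes all of A, and AAA includes A and AA."""
    levels = ["A", "AA", "AAA"]
    counts = WCAG_CRITERIA[version]
    return sum(counts[lv] for lv in levels[: levels.index(level) + 1])

print(criteria_to_test("2.0", "A"))   # 25 criteria, the low end
print(criteria_to_test("2.1", "AA"))  # 50 criteria, a common target
```

This is where the "25 to 50" range comes from: WCAG 2.0 level A at the low end, WCAG 2.1 level AA at the high end.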

Fortunately, testing tooling has matured to the point where guided instructions exist for the criteria that must be tested manually. One example is keyboard navigation, often relied on by persons with motor disabilities. In this instance, the tester simulates a user who interacts with a web page solely through the keyboard, while the tool provides step-by-step instructions for identifying accessibility issues during keyboard navigation. One such example: a news site with a carousel featuring five news stories, only one of which is visible at a time. There need to be ‘previous’ and ‘next’ buttons that can be reached and triggered using a keyboard, so that keyboard users can navigate through the stories just as easily as they could with a mouse.
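The carousel scenario can be sketched as a simple focus-order check. This is a purely illustrative simulation, not a real testing API: the element names and the idea of a flat Tab order are assumptions standing in for an actual DOM.

```python
# Minimal sketch of a keyboard-navigation check for the carousel example:
# every interactive control must be reachable by pressing Tab.
# The list below is a stand-in for a real page's focus order.
focus_order = ["skip-link", "site-nav", "carousel-prev", "carousel-next", "footer-link"]

def reachable_by_tab(element: str, focus_order: list[str]) -> bool:
    """Walk the Tab order from the top of the page; the element is
    keyboard-reachable only if focus lands on it at some press."""
    return any(focused == element for focused in focus_order)

def check_carousel(focus_order: list[str]) -> list[str]:
    """Return the carousel controls a keyboard-only user cannot reach."""
    required = ["carousel-prev", "carousel-next"]
    return [el for el in required if not reachable_by_tab(el, focus_order)]

print(check_carousel(focus_order))                  # [] -> both buttons reachable
print(check_carousel(["site-nav", "footer-link"]))  # both buttons unreachable
```

A guided testing tool essentially scripts this walk for the human tester: press Tab, confirm focus lands on each control, confirm Enter activates it.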

Pain Point Two – Screen Readers Take Time and Expertise to Learn

In addition to understanding numerous success criteria, accessibility testers also need to know how to use a screen reader, a popular form of assistive technology. There are numerous screen readers available, including JAWS, NVDA, VoiceOver and TalkBack. Now, imagine having to memorize more than 50 success criteria that need to be tested across three different screen readers in multiple testing environments (desktop, responsive web, native mobile, etc.).

Luckily, customized testing methodologies have been developed based on a chosen ruleset and testing environment. This reduces the amount of memorization and prior experience the tester needs to run effective tests. With specific screen reader testing steps outlined for each ruleset and testing environment, accessibility teams can improve their efficiency and create a standardized testing process for the whole testing team to follow—whether they are screen reader experts or not—resulting in a significant productivity boost.
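One way such a methodology can be encoded is as a lookup of guided steps keyed by screen reader and testing environment. This is a hypothetical structure for illustration, not any vendor's actual format, and the steps shown are simplified examples:

```python
# Hypothetical encoding of a guided testing methodology: each
# (screen reader, environment) pair maps to an ordered script of
# manual steps, so every tester follows the same process.
GUIDED_STEPS = {
    ("NVDA", "desktop"): [
        "Start NVDA and load the page",
        "Press H to cycle through headings; confirm the order is logical",
        "Press Tab through all controls; confirm each announces a name and role",
    ],
    ("VoiceOver", "mobile"): [
        "Enable VoiceOver and open the page",
        "Swipe right through elements; confirm reading order matches visual order",
        "Double-tap each control; confirm it activates",
    ],
}

def steps_for(screen_reader: str, environment: str) -> list[str]:
    """Fetch the shared script for a given screen reader and environment."""
    return GUIDED_STEPS.get((screen_reader, environment), [])

for i, step in enumerate(steps_for("NVDA", "desktop"), 1):
    print(f"{i}. {step}")
```

Because the steps live in one place, a tester who has never used NVDA can still run a consistent check, which is the productivity boost described above.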

Pain Point Three – WCAG Success Criteria Can Be Open to Interpretation

Some WCAG success criteria can be very contextual. Criteria that appear simple to test for, like color contrast requirements, can sometimes be tricky to analyze. For example, imagine a black, bold-faced word featuring dark gray shadowing against a light gray website background.

In this example, which two colors should be compared to verify whether the minimum color contrast requirements are satisfied: light gray and black, dark gray and black, or light gray and dark gray?
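The ambiguity can be made concrete with the WCAG contrast formula itself. The relative-luminance and (L1 + 0.05)/(L2 + 0.05) definitions below come from the WCAG spec; the specific gray values are assumptions chosen for illustration:

```python
def _linearize(channel: int) -> float:
    """Convert an sRGB channel (0-255) to a linear value, per WCAG."""
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb: tuple[int, int, int]) -> float:
    """Relative luminance as defined by WCAG."""
    r, g, b = (_linearize(ch) for ch in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(c1: tuple[int, int, int], c2: tuple[int, int, int]) -> float:
    """WCAG contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05)."""
    lighter, darker = sorted((luminance(c1), luminance(c2)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Assumed colors for the shadowed-text example above.
black      = (0, 0, 0)        # the bold text
dark_gray  = (85, 85, 85)     # assumed shadow color
light_gray = (211, 211, 211)  # assumed background color

for name, pair in {
    "black vs light gray": (black, light_gray),
    "dark gray vs light gray": (dark_gray, light_gray),
    "black vs dark gray": (black, dark_gray),
}.items():
    print(f"{name}: {contrast_ratio(*pair):.2f}:1 (AA normal text needs 4.5:1)")
```

With these assumed values, some pairs clear the 4.5:1 AA threshold and some do not, so the choice of which two colors to compare can decide the verdict.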

In scenarios like this, different accessibility experts can (and will) have different opinions. For a developer, there is nothing more frustrating than receiving different remediation requirements for the same issue. This can create confusion and introduce significant delays in the release of a product.

The solution is clear, detailed documentation of requirements, which reduces the number of issues that can be interpreted differently. Properly documented testing processes help ensure that the results from an accessibility test team are consistent, and they save significant time otherwise spent communicating issues back and forth between testers and developers.

Automated testing is great for addressing the ‘low-hanging fruit’ without having to be (or hire) an accessibility expert, but it does not eliminate the need for manual testing. While certain characteristics of manual testing can make it onerous, there are guided testing tools, auditors, methodologies and more available to take the pressure off accessibility teams. These enable development teams to strike the right balance of manual and automated accessibility testing and to identify and repair as many issues as possible.

Filed Under: AI, Application Performance Management/Monitoring, Blogs, Continuous Testing, Leadership Suite Tagged With: accessibility testing, automation, disabilities, manual testing, WCAG
