Accessibility Testing
Learning Objectives
- You understand why accessibility testing requires multiple approaches.
- You can use automated tools like Lighthouse and WAVE to identify accessibility issues.
- You can perform manual keyboard testing to verify keyboard accessibility.
- You can use a screen reader to test basic accessibility and understand how screen reader users experience your application.
Building accessible applications requires ongoing verification throughout development. Even experienced developers make mistakes, and new features can introduce accessibility issues. Furthermore, finding and fixing issues early is much easier and cheaper than retrofitting accessibility later.
Test each new component as you build it, run automated tools regularly, do keyboard testing before considering a feature complete, include accessibility in code reviews, and test with screen readers periodically. This approach — often called “shift left” testing — helps prevent accessibility debt from accumulating and makes fixes simpler because you’re working with fresh code.
Automated Testing Tools
Automated tools provide a quick first pass at identifying accessibility issues. While they cannot catch everything, they efficiently find common problems and provide specific guidance on how to fix them.
Modern browsers include built-in accessibility features in their developer tools. In Chrome DevTools, open the developer tools (F12 or right-click and select “Inspect”), select an element in the Elements panel, and look for the Accessibility pane (you may need to enable it in settings). The Accessibility pane shows the element’s accessible name, its role in the accessibility tree, whether it’s keyboard accessible, and the color contrast ratio with WCAG compliance indicators. To check contrast for any text, inspect the element, click the color square in the Styles panel, and look for the contrast ratio indicators.
Firefox DevTools has similar features through the Accessibility Inspector (in the top tabs). Enable “Check for issues” to highlight elements with problems directly in the page, including contrast issues, keyboard access problems, and text label issues.
Lighthouse Accessibility Audit
Lighthouse is an automated tool built into Chrome DevTools that audits pages for accessibility, performance, and best practices. To run a Lighthouse accessibility audit, open Chrome DevTools (F12), click the “Lighthouse” tab, select “Accessibility,” and click “Analyze page load.” Lighthouse generates a report with an accessibility score (0-100), a list of issues found, links to learn more about each issue, and suggestions for fixes.
Accessibility: 87/100
Issues found:
❌ [button-name] Buttons do not have an accessible name
3 elements - See details
❌ [color-contrast] Background and foreground colors do not have sufficient contrast
2 elements - See details
⚠️ [heading-order] Heading elements are not in a sequentially-descending order
1 element - See details
Click on any issue to see exactly which elements are affected and how to fix them. Run Lighthouse regularly, ideally as part of your continuous integration process, and consider setting a minimum accessibility score threshold (e.g., 90) for your project.
WAVE Extension
The WAVE (Web Accessibility Evaluation Tool) extension provides visual feedback by annotating your page with icons that indicate accessibility issues and features. After installing WAVE, navigate to your page and click the WAVE icon in your browser toolbar. WAVE adds colored icons to your page where red icons indicate errors that must be fixed, yellow icons indicate warnings about potential issues, and green icons indicate accessibility features you’ve implemented correctly. Click any icon to see what the issue is, why it matters, how to fix it, and which WCAG criteria are relevant. WAVE is particularly useful for understanding the spatial relationship of issues on your page.
There’s also an online version of WAVE where you can enter a URL to analyze without installing the extension. The online version may, however, miss issues in dynamic applications that retrieve content via JavaScript.
When automated tools report issues, prioritize fixes based on severity (fix critical and serious issues first), frequency (if the same issue appears many times, fix the pattern), and user impact (issues affecting core functionality matter more than cosmetic problems). Common issues flagged by automated tools include missing alt text on images, form inputs without labels, insufficient color contrast, missing document language, empty links or buttons, duplicate IDs, and missing page titles.
Here are examples of fixing common issues:
<!-- Issue: Missing alt text -->
<img src="logo.png" />
<!-- Fix: Add descriptive alt text -->
<img src="logo.png" alt="Company Name" />
<!-- Issue: Input without label -->
<input type="text" placeholder="Enter name" />
<!-- Fix: Add label -->
<label>
Name
<input type="text" placeholder="Enter name" />
</label>
<!-- Issue: Insufficient contrast (gray on white: about 2.8:1) -->
<p style="color: #999999; background: #ffffff;">Low contrast text</p>
<!-- Fix: Use darker text (about 7:1, well above the 4.5:1 minimum) -->
<p style="color: #595959; background: #ffffff;">Good contrast text</p>
<!-- Issue: Empty button -->
<button><span class="icon-delete"></span></button>
<!-- Fix: Add accessible label -->
<button aria-label="Delete item">
<span class="icon-delete" aria-hidden="true"></span>
</button>
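Two more issues from the list above, a missing document language and a missing page title, are fixed in the document head. A minimal sketch (the title text here is just a placeholder):
<!-- Issue: Missing document language and page title -->
<html>
  <head></head>
</html>
<!-- Fix: Declare the language and give the page a descriptive title -->
<html lang="en">
  <head>
    <title>Order history - Example Store</title>
  </head>
</html>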
Manual Keyboard Testing
Keyboard testing is the most important manual accessibility test. It reveals issues that automated tools miss and gives you direct experience of how keyboard users interact with your application.
To test keyboard accessibility, open your application in a browser and set your mouse aside (or unplug it). Then, press Tab repeatedly to move through all interactive elements, noting the order in which elements receive focus. Press Shift+Tab to move backward through the page. As you navigate, check whether you can clearly see which element has focus — the focus indicator should be visible on all interactive elements and provide sufficient contrast.
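If the focus indicator is hard to see, it can be strengthened in CSS. A minimal sketch, where the selector list and colors are only illustrative:
<style>
  /* High-contrast outline, shown only for keyboard focus (:focus-visible) */
  a:focus-visible,
  button:focus-visible,
  input:focus-visible {
    outline: 3px solid #1a73e8;
    outline-offset: 2px; /* small gap so the outline stays readable */
  }
</style>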
Test interactions by pressing Enter on links and buttons, Space on buttons and checkboxes, typing in form inputs, and using arrow keys in custom components like dropdowns. Check for a skip link, which should be the first focusable element on the page. Verify that you can access all navigation menus and navigate within menus using arrow keys.
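A skip link is usually the very first link in the body and points at the main content region. A minimal sketch (the id, class, and link text are illustrative; the link is often visually hidden until it receives focus):
<body>
  <!-- First focusable element: lets keyboard users bypass the navigation -->
  <a href="#main-content" class="skip-link">Skip to main content</a>
  <nav aria-label="Main">
    <a href="/">Home</a>
  </nav>
  <main id="main-content">
    <h1>Page title</h1>
  </main>
</body>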
For dynamic content, open a modal dialog and try to tab through it. Attempt to tab outside the modal — focus should stay trapped inside. Close the modal with Escape and verify that focus returns to the trigger. Open dropdowns, accordions, and tabs to confirm you can interact with them using the keyboard. Throughout your testing, ensure you can exit every element you enter and that focus is never stuck in a component — these are signs of keyboard traps that must be fixed.
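If a modal fails these checks, one low-effort fix worth knowing about is the native dialog element, which provides this behavior (focus containment, Escape to close, focus restoration) in modern browsers. A minimal sketch with illustrative ids:
<button id="open-settings">Open settings</button>
<dialog id="settings-dialog">
  <h2>Settings</h2>
  <button id="close-settings">Close</button>
</dialog>
<script>
  const dialog = document.getElementById("settings-dialog");
  // showModal() makes the rest of the page inert, so Tab stays inside
  // the dialog; Escape closes it, and focus returns to the opener.
  document.getElementById("open-settings")
    .addEventListener("click", () => dialog.showModal());
  document.getElementById("close-settings")
    .addEventListener("click", () => dialog.close());
</script>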
As you test, ask yourself:
- Can you reach all interactive elements with Tab?
- Can you see where focus is at all times?
- Can you activate all buttons and links?
- Can you close or dismiss everything you open?
- Is the focus order logical?
- Can you always move focus away from the current element, with no keyboard traps?
- Can you complete all tasks without using a mouse?
If you answer “no” to any of these questions, you’ve found an accessibility issue that needs fixing.
Keyboard testing often reveals problems beyond accessibility. Illogical focus order, confusing interactions, and unexpected behavior affect all users, not just those who rely on keyboards. Fixing these issues can improve the overall user experience.
Screen Reader Testing
Screen readers are assistive technologies that convert text and interface elements into synthesized speech or braille. Testing with a screen reader gives you insight into how blind users experience your application.
To get started with screen reader testing, Windows users should install NVDA (NonVisual Desktop Access), which is free and open source. Mac users can use VoiceOver, which is built into macOS — press Cmd+F5 to activate it. Linux users can use Orca, which is included in many distributions.
The details of using each of these differ to some extent, so refer to their documentation for specific commands. The key is to get comfortable with basic navigation commands, like moving by headings, links, and landmarks, as well as reading content.
As you navigate with a screen reader, pay attention to several aspects of the experience. Listen for logical reading order — does content read in a sensible sequence, or does it jump around confusingly? Check that each element announces its role correctly, with links announced as “link,” buttons as “button,” and headings with their level like “heading level 2.” Verify that you can understand the purpose of each element: does a “Submit” button say what it submits, do icon buttons have meaningful labels, and are form inputs clearly identified?
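Whether roles are announced correctly usually comes down to using semantic HTML. Compare these two clickable elements (deleteItem is a hypothetical handler):
<!-- Announced as plain text with no role, and not reachable with Tab -->
<div class="button" onclick="deleteItem()">Delete</div>
<!-- Announced as “Delete, button”, focusable, and activated with Enter or Space -->
<button onclick="deleteItem()">Delete</button>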
Test whether you can navigate by headings to understand the page structure. There should be one h1 that describes the page, headings should follow a logical hierarchy, and you should be able to jump between sections using headings. Check whether you can use landmarks to skip to main content or other sections. When you interact with forms, listen for whether errors are announced, whether success messages are announced, and whether required fields are identified.
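A page that passes these structural checks looks something like the following sketch, with one h1, a logical heading hierarchy, and landmark elements (the content is just an example):
<header>
  <nav aria-label="Main">
    <a href="/orders">Orders</a>
  </nav>
</header>
<main>
  <h1>Order history</h1>
  <section>
    <h2>Recent orders</h2>
    <h3>March</h3>
  </section>
</main>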
You’ll likely discover issues during screen reader testing: an icon button with no label announced as just “button” (missing aria-label), a link with generic text like “click here” that gives no context (use descriptive link text), an unlabeled form input announced as just “edit text” (missing label), and dynamic content that changes without any announcement (missing aria-live).
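The last of these, silent dynamic content, is typically fixed with a live region that screen readers monitor for changes. A minimal sketch (the id and message are illustrative):
<!-- Screen readers announce text inserted into this region -->
<div id="status-message" aria-live="polite"></div>
<script>
  // Announced automatically when the content changes, e.g. after a save
  document.getElementById("status-message").textContent = "Changes saved";
</script>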
Once you’ve gotten the hang of using a screen reader, try closing your eyes while testing with the keyboard only. This simulates how a blind user experiences your application and forces you to rely on non-visual cues like announced roles, labels, and reading order. There’s no need to master a screen reader — even 15 minutes of testing can provide valuable insights.
Summary
In summary:
- Accessibility testing requires multiple approaches. Automated tools catch some issues; manual testing finds the rest.
- Test throughout development, not just at the end. Integrate accessibility testing into your regular workflow to prevent issues from accumulating.
- Automated tools like Lighthouse and WAVE help identify common issues like missing alt text, poor contrast, and unlabeled forms. Use browser DevTools to check contrast and inspect the accessibility tree.
- Keyboard testing is the most important manual test. Check that you can reach and operate every interactive element, that focus is always visible, and that there are no keyboard traps.
- Screen reader testing provides direct insight into how blind users experience your application. Focus on logical reading order, correct element roles, meaningful labels, and navigability.