Web Accessibility: Designing Interfaces Users Expect
Tips on Building Accessibility Into Your Design
Feb 01, 2021

As part of our ongoing series on web accessibility, Software Engineer Andrew Marcus shares some tips on building accessible user interfaces.

The first time any person, with or without a disability, visits your website they have to learn how to use it. Where is the information they are looking for? How do they interact to accomplish their goal? For people who typically use a web browser, keyboard, and mouse, the answers may seem obvious. That’s because websites are generally designed with standard design cues to give users clues about how to navigate and interact with buttons, menus, and forms.

But for people with disabilities who use alternative methods to browse the internet, the answers may be much less obvious. Many web experiences are not designed with them in mind. Design cues that work for users with typical vision may not work for users with limited vision or colorblindness. Or, interactions that are natural and intuitive with a mouse may be awkward or even impossible with a keyboard (think: “drag-and-drop”).

Assistive Technologies

In a previous article on accessibility, we discussed four distinct types of disabilities: Vision Impairments, Motor Impairments, Hearing Impairments and Cognitive Impairments.

Whether permanent or temporary, impairments can affect the way users interact (or fail to interact) with a website. Each of your users will choose a set of technologies that work best for them. But, it’s important to remember that all users can benefit from accessible interfaces, regardless of whether they have a disability themselves.

In addition to web browsers, there are many different types of Assistive Technologies (AT) to assist users with impairments of all kinds. The WCAG guidelines define four principles — the things every web interface needs to enable its users to do:

  • Perceivable: users must be able to perceive the information
  • Operable: users must be able to interact
  • Understandable: users must be able to understand how to use the interface
  • Robust: users must be able to access the content in a variety of ways, even as technology advances

Many impaired users use a normal web browser. Some may zoom way in, others may inject their own custom stylesheet to increase the contrast. Users with vision impairments may prefer to use a screen reader instead, or may opt to use a screen reader and browser together. Users who are both vision and hearing impaired may read your website with a braille device.

Users with no vision likely use a keyboard instead of a mouse, but users with limited vision may prefer a mouse. Even users without any impairments may find a keyboard more efficient for certain tasks, such as filling out forms. People without use of their limbs might use devices powered by mouth, eye movements, or even brain waves.

People with long-term disabilities tend to choose a set of technologies that work best for their situation, becoming adept and efficient at using them. For example, a hearing-impaired person may be accustomed to using devices like hearing aids or amplifiers to help them hear clearly, which makes it easier for them to listen to the audio content on your website. On the other hand, those with temporary disabilities likely don't have experience with technologies other than a web browser. Imagine a user temporarily wearing a cast on their dominant arm. While this user can perceive your website in a browser as they normally would, they may need to learn how to interact using their keyboard rather than their mouse.

Setting Expectations

As you can see, there are so many ways in which individual users may experience your website that it’s unlikely you’ll be able to list them all. While you may not be able to plan for every type of interface, you can and should test your website with a sampling of users who use such technologies.

To make this easier, the W3C ARIA project defines a standard vocabulary that all assistive technologies understand, the same way that web browsers understand HTML. By using HTML5 semantic elements along with ARIA roles, properties, and states, you give AT information about your website, which they can present to their users however they need to.

Visual Vocabulary

And now, a test: This sentence contains a link somewhere. Can you find it?

Users have instinctive expectations about how the internet behaves. For example, if you see underlined text in a different color, you expect that it will take you somewhere if you click on it. Without that visual clue, you would have no idea there's a link there. Some sites leave out the underlines under the pretense of a cleaner look. Such sites rely on color alone to differentiate links from ordinary text, and perhaps add the underline only when a user hovers their mouse over the link. However, this potentially leaves out users who have a hard time distinguishing the colors.

Now, suppose your website has a button which brings up a menu with several options. There are specific visual clues that indicate to sighted users what they should expect when they click the button.

  • It looks like a button rather than a link, text, or heading.
  • It has a text label describing what the menu contains.
  • It has a caret or icon to indicate it contains a popup.
  • When you click on it, the menu pops up and allows you to choose one of the options inside it. Perhaps the caret also changes direction to show you the menu is open.
  • Once you select an option, the menu disappears and its options are no longer available.

There are many ways to code this widget, but here's some markup that would work if you add some JavaScript and CSS to it.
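As a sketch of what such markup might look like (the class names, labels, and handler functions here are made up for illustration):

```html
<!-- A "menu" built entirely from generic divs: no semantics at all -->
<div class="menu-button" onclick="toggleMenu()">
  Careers <span class="caret"></span>
</div>
<div class="menu-popup" style="display: none">
  <div class="menu-item" onclick="go('/jobs')">Open positions</div>
  <div class="menu-item" onclick="go('/about')">About us</div>
</div>
```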


(Disclaimer: This is intentionally a bad example. We do not endorse code that looks like this. Keep reading for something that looks a lot better…)

Semantic Vocabulary

Now imagine your user is blind and this menu is the only way to navigate the content on your website. To a screen reader, it just looks like a bunch of text. And herein lies the problem: visual clues don’t work for users who can’t see them.

Fortunately there are other clues you can build in to your site that will help. With semantic markup, you can describe the meaning, properties and states of elements. These are the non-visual clues that tell AT how to interpret your page.

Most HTML elements already have semantics built in. If you build a link using the typical markup, such as <a href="…">want to work for a great company?</a>, browsers, screen readers, and other AT will automatically display or announce it as a link, and it will automatically be clickable with the mouse and keyboard. You could use a different tag instead, and add role="link" to tell AT to treat it as a link. But then you'd also have to add tabindex="0" to make it selectable, an onclick handler to make it clickable, and an onkeydown handler to let users select it with the Enter key. Why do all that work when the <a> tag gives you all those things for free?
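To make that concrete, here is a sketch of the two approaches side by side (the URL and link text are placeholders):

```html
<!-- Native semantics: focusable, clickable, and announced as a link for free -->
<a href="/careers">Want to work for a great company?</a>

<!-- Rebuilding the same behavior by hand (not recommended) -->
<span role="link" tabindex="0"
      onclick="location.href = '/careers'"
      onkeydown="if (event.key === 'Enter') location.href = '/careers'">
  Want to work for a great company?
</span>
```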

Now let's get back to the fancy popup menu widget we built in the last section. How would a screen reader know what to do? There isn't a native HTML element for fancy popup widgets. That's where ARIA properties and states come in: they let you step outside the built-in semantics to describe more complicated interactions.

To support the needs of all of our users, our widget needs to:

| For visual users | For non-visual users | For keyboard users |
| --- | --- | --- |
| Look like a button, and the mouse cursor should change on hover to indicate it's clickable | Use a <button> element so AT announces it as a button | Ensure the button is focusable with the Tab key |
| Provide an icon to indicate the button has a popup behavior | Use the aria-haspopup property to indicate what will happen when the button is pressed | Ensure the Enter and Space keys toggle the menu between the opened and closed states |
| Display the menu when it is open, hide it when it is closed | Use the aria-expanded state to indicate whether the menu is opened or closed, and the display: none style to hide the menu from screen readers when it is closed | Ensure the menu is the next tab stop when it's open, and is not in the tab sequence when it is closed |
| Indicate each menu item is clickable by using a hover state and changing the mouse cursor | Use a <ul> with <li> items, and/or the role="menu" or role="listbox" role, to indicate the menu items form a single group | Ensure each item in the menu can be focused, either with the Tab or arrow keys |
| If an item in the list is a link, use an <a> tag so the link's URL appears at the bottom of the browser when you hover over it | Identify each menu item as a button or link, using native elements or the appropriate role | Ensure each item in the menu can be selected with the Enter or Space keys |

You may notice that if you use the native <button> and <a> elements, many of these behaviors come for free.
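Putting the table together, a semantic version of the widget might look like this (the labels and URLs are placeholders):

```html
<!-- The button announces itself as a button with a popup -->
<button aria-haspopup="true" aria-expanded="false">
  Careers <span class="caret"></span>
</button>
<!-- The list directly follows the button, grouped as a single menu -->
<ul role="menu">
  <li role="menuitem"><a href="/jobs">Open positions</a></li>
  <li role="menuitem"><a href="/about">About us</a></li>
</ul>
```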

Semantic Styling

Fun fact: You can target CSS selectors based on ARIA roles, properties and states.

What does that mean? Rather than creating a special CSS class to add to your dropdown menu when it is open in order to display it, why not just use the ARIA properties it already has?

button[aria-haspopup="true"] + ul {
  display: none;
}

button[aria-haspopup="true"][aria-expanded="true"] + ul {
  display: block;
}

You can even rotate the caret icon when the button expands using CSS animations attached to the aria states.

button .caret {
  transition: transform 0.2s;
}

button .caret::after {
  content: '>';
}

button[aria-expanded="true"] .caret {
  transform: rotate(90deg);
}

Once you attach these styles to the HTML above, the only JavaScript your menu widget needs is a handler that switches the state of the aria-expanded attribute when the button is clicked. That's it! Everything else is done through CSS based on the state of that attribute.
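That click handler might be sketched like this (the selector assumes a button marked up with aria-haspopup, as above):

```javascript
// Toggle the aria-expanded state; the CSS handles showing/hiding the menu.
function toggleMenu(button) {
  const expanded = button.getAttribute('aria-expanded') === 'true';
  button.setAttribute('aria-expanded', String(!expanded));
}

// Wire it up once the page has loaded:
// document.querySelector('button[aria-haspopup="true"]')
//   .addEventListener('click', (event) => toggleMenu(event.currentTarget));
```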

All of this may sound like a lot of additional work. But if you build all of your markup semantically first, you may actually have to do less work to add styling and behaviors to it. And if you build the styles across your website based on semantic elements, it will help ensure you always use the proper semantics wherever you need them.

Two Dimensions vs. One Dimension

We spent the first part of this blog post discussing a single widget. But a user might only spend a short amount of time on that widget. Let’s zoom out and look at the entire page.

Visual users experience the internet in two dimensions. Elements can be placed to the left or right, above or below other elements. The position of elements in this 2D space has meaning, and provides clues as to how each element is related to others. Related elements might be close together and might share the same iconography or color or background. Distinct groups of elements might use whitespace or borders to separate them, or might use different colors, backgrounds, typography or imagery than nearby groups. Think for example of a navigation menu on a typical website. A visual user can find it immediately because it is at one of the edges of the page and is visually distinct.

By this point, you’ve probably noticed that all of the typical ways to relate groups of elements on a website rely on visual clues that only work if your users can see them. Position itself is a visual construct.

Screen reader users experience the internet in one dimension. The screen reader starts at the top of the page and reads all the HTML tags in order until it gets to the bottom. Two elements that appear close together in two dimensions might be far apart in one dimension. Before the screen reader can even get to the main content of the page it must first read everything which appears above or to the left of it. On most websites this includes things like menus that are the same on every page.

Landmarks Guide the Way

You might be thinking that with a screen reader, it would take a long time for a user to hear an entire webpage. You would be right – if the user just lets it read the whole page. However, people typically want to jump straight to the content they care about. To help out, the ARIA project defines a set of “landmark” HTML tags and roles, intended to denote various sections of the page. In addition to providing a convenient way to group related content, the set of landmarks on a page can be assembled by screen readers into a table of contents, allowing users to jump to various sections they care about.

In writing, headings are commonly used to add structure. The most important landmarks are usually the headings which precede each section. Headings provide a logical, hierarchical structure to content. Word processors can build a table of contents from headings. Screen readers do the same thing.

But, there are many other landmarks too.
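For example, a page skeleton built from landmark elements might look like this (the section names and content are placeholders):

```html
<header>Site title and banner</header>
<nav aria-label="Main">Site-wide navigation links</nav>
<main>
  <h1>Page title</h1>
  <section aria-labelledby="news-heading">
    <h2 id="news-heading">Latest news</h2>
    <p>Main content goes here.</p>
  </section>
</main>
<footer>Copyright and contact information</footer>
```

A screen reader can assemble these regions into a table of contents, so a user can jump straight to the main content without hearing the navigation on every page.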