MDAS-M

The Minimum Digital Accessibility Standards Conformance Testing Methodology

Introduction

The Minimum Digital Accessibility Standards Conformance Testing Methodology, or MDAS-M, is a test process for determining the conformance of digital information and services with The Ohio State University Minimum Digital Accessibility Standards (MDAS). Version 0.9.3 consists of 82 tests.

The MDAS-M is a standardized, open, and streamlined full manual testing methodology that produces reliable, repeatable, accurate, and complete results. It is a living methodology that accessibility professionals across the university are encouraged to contribute to and improve upon.

The MDAS are technical and functional requirements that ensure that digital information and services provided by the university are functionally accessible to persons with disabilities. All digital information and services to be used by faculty, staff, program participants, the general public, or other university constituencies are required to be compliant with the non-discrimination provisions of the Americans with Disabilities Act (ADA), as amended, and Section 504 of the Rehabilitation Act.

The MDAS and MDAS-M are maintained by the ADA Digital Accessibility Center under the direction of the ADA Coordinator’s Office.

Who Is This For?

The MDAS-M was created for use by Ohio State employees trained and certified to be full manual accessibility evaluators by the ADA Digital Accessibility Center. Individuals interested in becoming certified may learn more at the Web Accessibility Certified Tester (W-ACT) Program webpage.

The MDAS-M is not intended for use by approved third-party evaluator vendors. Third-party vendors have been vetted and have their own testing methodologies to determine conformance with the MDAS.

All full manual accessibility evaluations must be performed by approved accessibility evaluators, either certified Ohio State employees or approved third-party vendors. More information about this requirement can be found on the Digital Accessibility Services Full Manual Evaluations webpage.

The MDAS-M was spearheaded by the ADA Digital Accessibility Center in 2023 and developed by a group of certified accessibility evaluators representing many units across the university. Special thanks go to the members of the 2023 MDAS-M project group for their contributions to this effort: Ashley Bricker, Teresa Bruggeman, Arnell Damasco, Dan Keck, Corey Moore, Sarah Moyer, Eric Owens, and Matthew Swift. Additional thanks go to Vanessa Coterel, Richard Hopkins-Lutz, Scott Lissner, the U.S. Department of Homeland Security, the U.S. Social Security Administration, and the members of the Web Accessibility Initiative (WAI) of the World Wide Web Consortium (W3C®).

Portions of this methodology are taken, in part or in whole, from the Trusted Tester 5.1 Test Process developed by the U.S. Interagency Trusted Tester Program. The MDAS-M would not have been possible without that foundation.

The Ohio State University

The Minimum Digital Accessibility Standards Conformance Testing Methodology (MDAS-M) is copyright © 2024 The Ohio State University.

Tests were written and edited by the following Ohio State University employees:

The United States of America Federal Chief Information Officers Council (CIOC) Accessibility Community of Practice (ACOP)

Major portions of this document are from the Trusted Tester 5.1 Conformance Test Process. Copyright © 2018 The Federal Chief Information Officers Council (CIOC) Accessibility Community of Practice (ACOP).

The license for the Trusted Tester 5.1 Conformance Test Process is as follows:

Information Usage

The Federal Chief Information Officers Council Accessibility Community of Practice (ACOP) authors and provides information to facilitate equal access to information and data for persons with disabilities. The information provided by the ACOP is intended for wide distribution and usage both inside and external to the Federal IT community. The ACOP accepts feedback and recommendations about such information from all interested parties.

Permission is hereby granted, free of charge, to any person or organization obtaining a copy of information created or maintained by the ACOP, “the information”, without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the information and to permit persons to whom the information is furnished to do so, subject to the following conditions:

THE INFORMATION IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS or ACOP BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH the information OR THE USE OR OTHER DEALINGS IN THE INFORMATION.

When the information, or sub-parts thereof are included within other information, electronic systems, or commercial or noncommercial products, the ACOP shall be cited as the originating source. Citations shall include the URL of the ACOP distribution of the information, and identification of the ACOP as the original source.

When the information or sub-parts of the information are modified such modifications shall not be attributed to the ACOP as original source.

World Wide Web Consortium (W3C®)

Some wording and examples in this document are from the Web Accessibility Initiative (WAI): ARIA Authoring Practices Guide (APG). Matt King, JaEun Jemma Ku, James Nurthen, Zoë Bijl, Michael Cooper, eds. Copyright © 2023 W3C®. https://www.w3.org/WAI/ARIA/apg/

Some wording and examples in this document are from the Web Accessibility Initiative (WAI): WCAG 2.1 Understanding Docs. Alastair Campbell, Charles Adams, Rachael Bradley Montgomery, co-chairs. Copyright © 2023 W3C®. Updated 20 June 2023. https://www.w3.org/WAI/WCAG21/Understanding/

Some wording and examples in this document are from the World Wide Web Consortium (W3C®): Web Content Accessibility Guidelines (WCAG) 2.1. Andrew Kirkpatrick, Joshue O Connor, Alastair Campbell, Michael Cooper, eds. Copyright © 2023 W3C®. Status: W3C Recommendation Updated 21 September 2023. https://www.w3.org/TR/2023/REC-WCAG21-20230921/

Some wording and examples in this document are from the Web Accessibility Initiative (WAI): Accessible Rich Internet Applications (WAI-ARIA). Joanmarie Diggs, Shane McCarron, Michael Cooper, Richard Schwerdtfeger, James Craig, eds. Copyright © 2013-2017 W3C®. Status: W3C Recommendation Updated 14 December 2017. https://www.w3.org/TR/2017/REC-wai-aria-1.1-20171214/

Test Environment

A full list of recommended tools is available on the ADA Digital Accessibility Center “Accessibility Testing Tools” webpage.

The Colour Contrast Analyser software is used to evaluate color contrast on the web, in applications, and in PDFs.
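
For reference, the measurement that the Colour Contrast Analyser automates is the WCAG 2.1 contrast ratio. The following sketch, written in TypeScript for illustration only (the function names and sample values are not part of any Ohio State tool), shows how that ratio is calculated from two sRGB colors:

  // Linearize one sRGB channel (0-255) per the WCAG relative luminance definition.
  function linearize(channel: number): number {
    const c = channel / 255;
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  }

  // Relative luminance of a color.
  function relativeLuminance([r, g, b]: [number, number, number]): number {
    return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);
  }

  // Contrast ratio = (L1 + 0.05) / (L2 + 0.05), where L1 is the lighter luminance.
  function contrastRatio(a: [number, number, number], b: [number, number, number]): number {
    const l1 = relativeLuminance(a);
    const l2 = relativeLuminance(b);
    return (Math.max(l1, l2) + 0.05) / (Math.min(l1, l2) + 0.05);
  }

  // Black text on a white background yields the maximum ratio of 21:1;
  // WCAG 2.1 SC 1.4.3 requires at least 4.5:1 for normal-size text.
  console.log(contrastRatio([0, 0, 0], [255, 255, 255])); // 21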

Operating System, Browser, and Screen Reader Combinations

The following browser, operating system, and screen reader combinations are used in this testing methodology for websites. These were selected for their popularity among the primary audiences at Ohio State and the general coverage they provide:

  1. Microsoft Windows (10 or 11), Google Chrome, NVDA screen reader
  2. Apple iOS/iPadOS, Apple Safari, VoiceOver screen reader
  3. Android, Google Chrome, TalkBack screen reader

The NVDA screen reader is available in the self-service application in most departments across the university. If it is not available, contact your local IT administrator for installation assistance.

ANDI (Accessible Name and Description Inspector)

ANDI, the Accessible Name and Description Inspector, is a browser bookmarklet developed by the U.S. Social Security Administration. It is used throughout this methodology to identify and inspect elements on webpages.

When opened, ANDI appears at the top of the screen. It contains eight modules: focusable elements, graphics/images, links/buttons, structures, color contrast, and hidden content, plus two modules that are not always visible (tables and iframes).

Training for this tool is covered in the Web Accessibility Certified Tester program cohort. Evaluators can also learn how to use ANDI by reviewing the ANDI Guide.

Google Chrome Accessibility Inspector

The Accessibility Inspector in Google Chrome is used in this testing methodology to inspect the accessible name, role, ARIA attributes, and programmatic relationships of elements.

The steps to open and use the tool are as follows:

  1. In Google Chrome, open the context menu (right click) and select “Inspect.” This will open the DevTools.
  2. In the bottom panel tabs (Styles, Computed, Layout, …), open the “Accessibility” tab.
  3. Multiple accordions are displayed in the panel. The one that will be used for the tests is the “Computed Properties” accordion.
    Screenshot of the accessibility properties pane in Google Chrome
  4. Select the element to evaluate by clicking the “Select Element” icon in the upper left corner of the DevTools or pressing Ctrl + Shift + C, then clicking the element on the page.
    Screenshot of the Google Chrome DevTools with the Select Element icon circled

    1. Keyboard users can navigate to the element in tree view in the top panel.
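
As a supplement to the Computed Properties accordion, explicitly authored role and ARIA attributes can also be read from the DevTools console. The TypeScript-style sketch below is illustrative only: $0 is Chrome's console shortcut for the element currently selected in the Elements panel, and this check does not replace confirming the browser-computed name and role in the Accessibility tab or ANDI.

  // $0 is supplied by the Chrome DevTools console; declared here only so the
  // sketch type-checks on its own.
  declare const $0: Element;

  // Log the author-supplied role and common ARIA attributes of the selected element.
  console.log('tag:', $0.tagName.toLowerCase());
  for (const attr of ['role', 'aria-label', 'aria-labelledby', 'aria-describedby', 'aria-expanded']) {
    console.log(`${attr}:`, $0.getAttribute(attr));
  }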

Browser Bookmarklets

A number of browser bookmarklets are used throughout this methodology to test things such as text spacing, reflow, duplicate IDs, autocomplete attributes, and character key shortcuts. They should be installed prior to beginning the test process. They can be installed from the ADA Digital Accessibility Center “Accessibility Testing Tools” webpage.
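
For context on what such a bookmarklet does, the sketch below (written in TypeScript for illustration; it is not the actual code distributed by the ADA Digital Accessibility Center) shows the kind of check a duplicate-ID bookmarklet performs. The same logic, minified and wrapped in a javascript: URL, is what makes a bookmarklet:

  // Count every id attribute on the page and report any that appear more than once.
  const counts = new Map<string, number>();
  document.querySelectorAll('[id]').forEach((el) => {
    counts.set(el.id, (counts.get(el.id) ?? 0) + 1);
  });

  const duplicates = [...counts].filter(([, n]) => n > 1);
  if (duplicates.length === 0) {
    console.log('No duplicate id attributes found.');
  } else {
    duplicates.forEach(([id, n]) => console.log(`id="${id}" appears ${n} times`));
  }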

PDFs

Steps for evaluating PDFs are not yet well documented in this methodology. Evaluators are encouraged to learn how to evaluate webpages using this methodology first, then apply the knowledge gained when evaluating PDFs using the tools below. PDF evaluation will be expanded upon in future versions of the MDAS-M.

Operating Systems

PDF testing must be performed on Microsoft Windows because the tools are only available on that platform.

PDF Accessibility Checker (PAC) 2024

PAC 2024 is a Windows-only automated and manual PDF accessibility checker that evaluates PDFs for conformance with WCAG 2.1. Full training for this tool is provided in the Web Accessibility Certified Tester program cohort.

Screen Readers

In the limited circumstances where PDFs have complex interactivity, a screen reader may need to be used for evaluation. Any Windows desktop screen reader can be used; however, note that NVDA often crashes when used with Adobe Acrobat. Screen reader testing on PDFs should be a last resort, used only when conformance to a test condition cannot be determined with the tools above.

Native Mobile Applications

Steps for evaluating native mobile applications are not yet well documented in this methodology. Evaluators are encouraged to learn how to evaluate webpages using this methodology first, then apply the knowledge gained when evaluating mobile applications using the tools below. Native mobile application evaluation will be expanded upon in future versions of the MDAS-M.

Some training on these tools is covered in the Web Accessibility Certified Tester program.

Apple iOS, iPadOS, watchOS, and tvOS

  • VoiceOver screen reader
  • Accessibility Inspector (requires macOS and a physical USB connection between the Mac and the iPhone, iPad, Apple Watch, or Apple TV)
    • Many of the concepts regarding semantics, roles, states, and names are very similar to those on the web, and this inspector works much like the Google Chrome Accessibility Inspector.

Google Android

Native Desktop Applications

Steps for evaluating native desktop applications are not yet well documented in this methodology. Evaluators are encouraged to learn how to evaluate webpages using this methodology first, then apply the knowledge gained when evaluating desktop applications using the tools below. Native desktop application evaluation will be expanded upon in future versions of the MDAS-M.

Microsoft Windows

Apple macOS

Test Structure

Each test comprises the following elements:

  1. An ID – A unique identifier for the test
  2. Test Name – A short name describing the outcome of the test or the kind of element being tested
  3. Test Condition – A short description of the conditions that must be met for the content to pass the test
  4. Impacts – Individuals with the listed disabilities will experience barriers if this test condition is not met
  5. Identification – How the evaluator will identify the content that is to be tested
  6. Test steps – Steps for determining whether or not the test conditions are met
  7. Results matrices
    1. “Evaluate and Record Results” – A table that describes what conditions must be met to PASS or FAIL the test or when the content does not apply (DNA).
    2. “Failure Matrix” – The most common failure conditions for the test, along with the parts of the MDAS they fail (i.e., WCAG success criteria, WAI-ARIA requirements, or functional requirements) and a suggested priority based on the scale in the Evaluation Template.
      1. Important note on priority: The priority column will sometimes NOT reflect the actual priority of the issue. Evaluators must use their best judgment when determining priority, based on the conditions described in the Priority Scale in the Evaluation Template.

Testing Approaches

All types of content on the website or application must be tested, including documents and multimedia content. That said, testing should be streamlined so that it is both thorough and efficient with resources.

The two approaches below are approved for determining conformance with MDAS in this testing methodology: use case testing and component testing.

Use case testing is the primary approach for performing evaluations. Use cases are descriptions of actions and tasks that users will perform on the website.

While some versions of use case testing require testing only the elements used to complete the task, the MDAS-M requires that ALL elements on the pages or screens that are part of the task be tested. This is in alignment with WCAG 2.1 conformance requirements 5.2.2 Full pages and 5.2.3 Complete processes.

For example, if the evaluator is testing course enrollment software, and the use case is “User searches for CHEM 101,” the evaluator would test:

  1. The search input
  2. The button that executes the search
  3. The entire page that the search input exists on
  4. The entire search results page.

An alternative way of approaching use case testing is to select pages that contain a representative sample of content on the website. This approach is especially useful for websites that contain primarily static content. In this interpretation, for example, the use cases could be:

  1. The home page
  2. The primary landing pages
  3. Page templates
  4. Contact forms
  5. Page with a video

Component testing is a testing approach that determines the conformance of UI components that are used throughout the website or application. It is optional and must always be combined with use case testing (unless the purpose of the testing is the component itself, e.g., a UI component library).

Component testing can significantly reduce the time needed to complete evaluations when a component is used across the entire website or application. Examples of components include the site header, primary navigation, page navigation, breadcrumbs, and footer.

When evaluating using a component approach, the components can be tested once, and use case testing can be limited to the page content that is NOT one of the tested components.

Evaluators must be cautious and ensure that the implementation of the component is truly consistent across all pages on the website or application. Components that are injected into pages via a CMS or JavaScript framework are typically consistent across pages. However, if the evaluator finds ANY evidence that the component is not consistent across pages, they must evaluate each page in whole.

Recording Test Results

Test results are recorded using two instruments: the “Test Log” and the “Evaluation Template.”

Download MDAS-M v.0.9.3 Test Log (OSU.edu login required)

The MDAS-M Test Log is an Excel spreadsheet that is used to record the results of each test: PASS, FAIL, or DNA (Does Not Apply).

Test Items tab

In the “Test Items” tab, evaluators document each “item” they are testing. Items, in this context, are use cases and components. Each item is given an ID (prefixed with “U” for use cases and “C” for components), and its description, location, dates tested, and pass/fail status are documented. Items are marked as “FAIL” if they fail any of the tests.

Example Test Items Table

ID | Description | Location | Date(s) Tested | P/F
C1 | Site Header (name, navigation, search) | Global, top of each page | May 10-12, 2023 | FAIL
C2 | Page Section Navigation | Left side of static pages | May 12, 2023 | FAIL
C3 | Product Filters | Left side of product catalog/search pages | May 14-16, 2023 | FAIL
C4 | Site Footer | Global, bottom of each page | May 30, 2023 | FAIL
U1 | Using the search input in the site header, search for “Brutus Buckeye Halloween Costume” | https://example.com | June 1, 2023 | FAIL
U2 | In the search results, filter by size child’s medium | https://example.com/?search=brutus_buckeye_haloween_costume | June 2-3, 2023 | FAIL
U3 | View product page and add product to cart | https://example.com/?product=81AUQ81L | June 15, 2023 | FAIL
U4 | Checkout | https://example.com/checkout.php | June 15-18, 2023 | FAIL

Test Log Tab

The “Test Log” tab is used to log the results of each test.

  1. Down the first column is the test ID (e.g., P.1.1).
  2. The second column contains the test condition for easy reference.
  3. Across the top are headers for each test item. For example, if testing for 5 use cases, the test log table would contain headers for U1, U2, U3, U4, and U5.
  4. The test result is recorded in the cell that corresponds to the test ID and test item. For example, if testing use case 3 for P.1.1, the test result (PASS/FAIL/DNA) is recorded in the P.1.1 row and the U3 column.

Example Test Log Tab Table

ID | Test Condition | U1 | U2 | U3
P.1.1 | Meaningful images have equivalent text descriptions | PASS | DNA | FAIL
P.1.2 | Decorative images are hidden from AT | FAIL | PASS | DNA
P.1.3 | No unnecessary images of text | DNA | FAIL | PASS

Evaluation Template

Download the Evaluation Template at the Digital Accessibility Services website

The Evaluation Template is a Word document that is provided as the final evaluation report to the product owner. In the context of this methodology, it is used to report the failures found during testing.

Failures are reported as follows, in a list format:

  • Failure instance description
    • MDAS failure (e.g., WCAG success criteria or functional consideration)
    • Priority (Critical, Serious, Moderate, Minor)
    • Screenshot (optional, but recommended)