Case Study 6.2: University of Nevada, Reno—An Open Textbook Audit
After receiving a Scholarly Communication Research Grant from ACRL, Teresa Schultz and Elena Azadbakht set out to assess the accessibility of a randomly selected set of open textbooks. They described their project in a 2020 Open Education Conference session titled “Open for Who? Assessing the Accessibility of Open Textbooks,” and their project may be summarized as follows:
- Goals: To assess the accessibility of a representative set of open textbooks, randomly selected using the search tool OASIS.
- Project Scope: The audit covered texts in HTML, PDF, Word, or EPUB format. Over a period of five months, the team evaluated the accessibility of the first twenty pages of 355 open textbooks.
- Evaluation Criteria: A rubric was developed based on the WCAG 2.0 guidelines and on guidance from the BCcampus Open Education Accessibility Toolkit. Sixteen criteria addressed alternative text, coded elements, updated HTML coding, visual cues, PDF tagging, tables, lists, heading order, link tagging, reading order, color contrast, images, title elements, descriptive linking, descriptive headings, and language elements. Texts received a simple pass or fail on each criterion (a sketch of a few such checks appears after this list).
- Tools: To accommodate the various material types, the team used multiple tools to assess accessibility: SiteImprove Accessibility Checker (for HTML/web content), Ace by DAISY (for EPUBs), Calibre (for EPUBs), Adobe Acrobat Pro (for PDFs), the Microsoft Word Accessibility Checker (for Word documents), and the Colour Contrast Analyser (the computation behind this last tool is sketched after this list). The team also regularly checked the source code of HTML and EPUB texts to verify automated results, and used the free screen readers NVDA and Apple’s VoiceOver to assess materials that automated checkers handle poorly, such as mathematical and other STEM notation.
- Personnel and Support: Grant funds paid for a student assistant who helped conduct the evaluations.
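Neither the full rubric nor the team’s workflow is reproduced here. As a rough illustration only, the following minimal sketch (in Python, using the BeautifulSoup library; function names, criterion labels, and thresholds are illustrative assumptions, not the team’s actual checks) shows how a few of the pass/fail criteria could be tested automatically against an HTML-format text:

```python
# Minimal sketch of automated pass/fail checks for a few rubric-style criteria.
# Hypothetical helper and criterion names; assumes the text is available as HTML.
# Requires: pip install beautifulsoup4
from bs4 import BeautifulSoup

GENERIC_LINK_TEXT = {"here", "click here", "read more", "link", "more"}

def audit_html(html: str) -> dict[str, bool]:
    soup = BeautifulSoup(html, "html.parser")
    results = {}

    # Alternative text: every <img> must carry an alt attribute
    # (an empty alt is permitted for purely decorative images).
    results["alternative text"] = all(
        img.has_attr("alt") for img in soup.find_all("img")
    )

    # Language element: the <html> tag should declare a lang attribute.
    html_tag = soup.find("html")
    results["language element"] = bool(html_tag and html_tag.get("lang"))

    # Title element: the document should have a non-empty <title>.
    results["title element"] = bool(soup.title and soup.title.get_text(strip=True))

    # Heading order: heading levels must not skip (e.g., h2 followed by h4 fails).
    levels = [int(h.name[1]) for h in soup.find_all(["h1", "h2", "h3", "h4", "h5", "h6"])]
    results["heading order"] = all(b - a <= 1 for a, b in zip(levels, levels[1:]))

    # Descriptive linking: link text should not be a generic phrase.
    results["descriptive linking"] = all(
        a.get_text(strip=True).lower() not in GENERIC_LINK_TEXT
        for a in soup.find_all("a")
    )
    return results

if __name__ == "__main__":
    sample = (
        "<html lang='en'><head><title>Ch. 1</title></head>"
        "<body><h1>Intro</h1><h3>Skipped level</h3>"
        "<img src='fig.png'><a href='/x'>here</a></body></html>"
    )
    for criterion, passed in audit_html(sample).items():
        print(f"{criterion}: {'pass' if passed else 'fail'}")
```

Automated checks like these can only flag mechanical failures (a missing alt attribute, a skipped heading level); as the team’s use of screen readers suggests, human review is still needed to judge whether, say, the alt text that is present actually describes the image.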
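The Colour Contrast Analyser applies the contrast formula defined in WCAG 2.0. As a sketch of the computation such a tool performs (function names here are illustrative; the constants come from the WCAG 2.0 definitions of relative luminance and contrast ratio):

```python
# Sketch of the WCAG 2.0 contrast-ratio computation performed by tools
# such as the Colour Contrast Analyser; function names are illustrative.

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    """Relative luminance per WCAG 2.0, from 8-bit sRGB channel values."""
    def channel(c: int) -> float:
        c = c / 255
        # Linearize the sRGB gamma curve (WCAG 2.0 uses the 0.03928 cutoff).
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    """Contrast ratio (from 1:1 up to 21:1) between two colors."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# WCAG 2.0 Level AA requires at least 4.5:1 for normal-size body text.
ratio = contrast_ratio((102, 102, 102), (255, 255, 255))  # grey text on white
print(f"{ratio:.2f}:1 -> {'pass' if ratio >= 4.5 else 'fail'}")
```

For example, grey (#666666) text on a white background yields a ratio of roughly 5.74:1, which passes the 4.5:1 threshold for normal text.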
Schultz and Azadbakht ultimately reflected on the need to ally open access more closely with accessibility. They also noted the complexity of deciding who could resolve a particular accessibility challenge: the author of an OER or the developers supporting the platform in question.