A survey of more than 300 officials at American colleges shows many are planning for long-term growth in online education, but few are consistently evaluating the quality of their mushrooming course lists.
According to a newly released report on the survey’s findings — by the nonprofit group Quality Matters and Encoura’s Eduventures, a higher-education-market research firm — more than 90 percent of the “chief online officers” surveyed said they expect that the typical traditional-age undergraduate on their campus will be taking courses in some kind of hybrid format by 2025. That’s a stark departure from just three years ago, before the pandemic, when 20 percent of such undergrads took hybrid courses.
The vast majority of college officials in the survey — 96 percent — said they’d adopted “quality assurance” standards to guide this rapid metamorphosis. Such standards advise faculty members on how to make online learning accessible, intuitive, and engaging for students. That might mean setting expectations for offering timely, regular instructor feedback on assignments, clearly aligning activities with a course’s learning objectives, and posting transcripts of all video content.
But ultimately, there’s no universal definition of what “quality” means, though experts note that there’s ample research on what quality teaching and learning looks like. And as the report’s authors acknowledge, both the scope and the teeth of colleges’ quality standards vary considerably.
Only 34 percent of the survey’s respondents, for example, said their standards included analyzing student-learning outcomes, such as postgraduate job placements and salaries.
That’s not necessarily surprising, given that “it’s still a frontier for an institution to even collect” that kind of data, “never mind have standards around it,” said Richard Garrett, chief research officer at Eduventures and the report’s co-director. But colleges, as well as faculty members looking to promote their courses in an increasingly saturated, competitive market, have something to gain from demonstrating results.
“If a school is ultimately saying you should enroll in this program because it leads to an outcome,” he said, “but they don’t really have a handle on that [outcome] … That’s a weakness.”
College officials who responded to the survey were also notably divided on whether online courses must be evaluated. While the vast majority have quality standards, only a minority — 42 percent — reported always using them to evaluate new or heavily revised online courses. Evaluating for quality was overwhelmingly described as a voluntary undertaking left to instructors or departments.
The report flagged that as a concern, noting that, without evaluations, colleges risk muddied academic standards in their online courses and programs, and a missed opportunity to use their finite resources on smart, data-informed remediation and student-support efforts. “Without evaluating whether adopted quality standards are met,” the report states, “there is no true quality-assurance plan in place.”
‘Mandates Are Difficult’
As is often the case in higher ed, though, there’s considerable nuance at play.
Both Garrett and Bethany Simunich, the report’s other co-director, told The Chronicle that developing and enforcing quality standards was a continuing, and often holistic, process. Colleges might not have it fully figured out yet, but they may be on their way.
“Quality is a longtime conversation on college and university campuses,” said Simunich, director of research and innovation at Quality Matters. “And it’s a conversation that has grown immensely during the pandemic.”
College officials The Chronicle spoke with said they also favor working in partnership with faculty members, citing the risks of a rigid, top-down approach, especially in evaluations.
That hesitancy to impose mandates was reflected in other data points across the report. Professional development on the fundamentals of online quality assurance, for example, was optional at nearly half of the surveyed institutions.
“Mandates are difficult. … It’s better to get consensus, it’s better to get buy-in, it’s better for people to want your help, rather than to say they must have your help,” said Valerie Kelly, associate vice president of Kent State Online, part of Kent State University, in Ohio. The number of online courses and programs, including certificates, at Kent State grew 11 percent and 34 percent, respectively, from the 2019-20 to the 2021-22 academic years.
Resources are available for Kent State faculty members who seek them out, Kelly noted. The university puts particular emphasis on strong design standards for online courses; some best practices include having a simple navigation menu, a document outlining all assignment deadlines, and an instructor bio on the course page.
Kelly said she relays examples to faculty members, too, of the good that can come from collaborating with her team. She recalled how a physics-lab professor once sought help understanding why the rate of D’s, F’s, and withdrawals was higher online than in the traditional, in-person lab. After a review, the team found the answer: The students in the virtual lab “had to get their own materials,” creating an impediment for those who lacked the supplies in their dorm rooms and couldn’t afford, or didn’t want, to purchase them, Kelly said.
Resource Constraints
Carroll Community College, in Maryland, also doesn’t require faculty members to run through a checklist before starting a course — or to conduct quality evaluations.
Beyond worries about overextending faculty members, it’s also a capacity issue, said Andrea Gravelle, director of digital learning. About half of the college’s 3,100 students were fully online in 2021-22, compared with about a quarter before the pandemic. The entire department of digital learning and media services numbers just five people.
Still, Gravelle said, the college has clearly defined minimum expectations — a marriage of Quality Matters’ standards and the State University of New York’s Online Course Quality Review Rubric. She wants instructors to ask themselves: Is the posted content good content? (“If I see Wikipedia,” she said, “I’m going to start questioning things.”) Is each piece of content accessible to all students, including those who are blind, colorblind, or hearing-impaired? Is the instructor visible to students by posting announcements regularly and popping in on discussion boards? Do students have opportunities to interact with their classmates, such as in group projects?
In the absence of mandates, Gravelle said, 25 percent of the college’s faculty members have worked with her department to create new courses, while 30 percent have worked with it to review existing courses.
There is an exception: If an instructor wants a Quality Matters certification for a course, that requires a rigorous, external evaluation, Gravelle said. It’s an undertaking she encourages, but the process is slow, with six courses certified so far.
Finding Balance
Colorado’s Fort Lewis College, meanwhile, is striking a balance between establishing quality-assurance guardrails and still allowing faculty members a high degree of autonomy. It saw online-course enrollments more than triple from the fall of 2019 to the fall of 2021.
Before instructors embark on their first online class, they complete a self-paced course — which can take 12 to 25 hours — called “Designing for Impact.” If the class they plan to teach is already online, they then fill out and submit a self-review of the course using the college’s quality-standards rubric. The rubric emphasizes, among other things, offering students multiple ways to demonstrate learning (known as the Universal Design for Learning framework). But if the course is newly online, instructors need to collaborate with the Teaching & Learning Services team.
Fort Lewis also requires a “reassessment” of online courses every three years that includes analyzing data like grades, said Ayla Moore, an instructional designer with Teaching & Learning Services. Instructors perform this assessment themselves.
“The instructors are the experts,” said Moore. “They know the criteria; they know what works.”
Moore added that her team frames the evaluations as critical “reflections” that will ultimately benefit both instructor and students, rather than as reviews that invite punishment or scrutiny.
“It’s not so much a ‘What’s wrong with the course?’ or ‘What did you do wrong?’ It’s ‘What are the students telling you here?’” Moore said. “We’re always teaching in beta.”