It is the invisible infrastructure that colleges and universities rely on to target potential students for recruitment, to develop financial aid offers, and to monitor student behavior. Now, a new report from the Government Accountability Office urges Congress to probe how higher education is using these scores, algorithms and other consumer data products, and to determine who will benefit most from their use – students or institutions?
The GAO also encouraged Congress to consider strengthening disclosure requirements and other consumer protections relevant to these scores.
“Among the issues to consider are the rights of consumers to view and correct data used in the creation of scores and to be informed of the uses and potential effects of scores,” the agency recommended.
Predictive analytics has been heralded as a way to improve many facets of higher education, from boosting retention to distributing institutional aid more fairly, but it is not without its critics. Concerns about student privacy abound. And critics worry that poorly designed or poorly understood models can embed and automate discriminatory behavior in an institution’s operations.
“Colleges were often unaware of the data and methods used to create scores used in marketing, recruiting, and sometimes determining financial aid amounts for students,” the GAO wrote in its report, summarizing an exchange the agency had with an industry expert, and outlining the higher education uses of predictive analytics that were most relevant to the office.
The sheer complexity of some algorithms presented another challenge. After examining a scoring product used to identify and flag students at risk of dropping out or transferring to another college, GAO researchers could not determine which of the variables – “potentially hundreds” – were actually relevant to the underlying model’s risk assessment.
The most worrying for the GAO? The weight some models give to where a student comes from – the neighborhood they live in and the high school they attend.
“While this methodology may be harmless when used for certain purposes, we found examples of its use that could have a negative effect if the scores were incorrect,” the agency wrote. In plainer terms: in a country where race, wealth, and geography are inextricably linked, models and algorithms can rationalize and perpetuate biases against minority and low-income students, even if those products never explicitly factor race into their scores and ratings.
As an example, the GAO refers to an unnamed scoring product used by admissions offices to identify students who “will be attracted to their college and match their school’s enrollment goals” – in essence, a lead generation service. A prospective student’s neighborhood and high school dictate the lists on which their contact information will appear. Each list is in turn assigned its own set of scores – measures of socioeconomic, demographic, and “educationally relevant” characteristics shared by each cohort. Using these scored lists, admissions professionals can deploy recruitment strategies tailored to the enrollment goals of their respective institutions.
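The mechanics the GAO describes can be sketched in a few lines. This is a hypothetical illustration, not the actual product: the list names, score fields, and thresholds below are invented to show how cohort-level filtering works.

```python
from dataclasses import dataclass

@dataclass
class CohortList:
    """A purchased list of prospects sharing a neighborhood and high school."""
    neighborhood: str
    high_school: str
    socioeconomic_score: float  # invented 0-1 scale attached to the whole list
    academic_score: float       # invented 0-1 scale attached to the whole list
    prospects: list

def select_lists(lists, min_academic, min_socioeconomic):
    """Pick which purchased lists a recruiter will contact.

    This is the pitfall the GAO flags: filtering happens at the
    cohort level, not the individual level, so a strong student
    on a low-scoring list is never seen.
    """
    return [
        c for c in lists
        if c.academic_score >= min_academic
        and c.socioeconomic_score >= min_socioeconomic
    ]

lists = [
    CohortList("Northside", "Jefferson HS", 0.8, 0.7, ["Ana", "Ben"]),
    CohortList("Southside", "Roosevelt HS", 0.3, 0.4, ["Cara", "Dev"]),
]

targeted = select_lists(lists, min_academic=0.5, min_socioeconomic=0.5)
contacted = [p for c in targeted for p in c.prospects]
# Cara and Dev are never contacted, even if one of them is a high
# performer, because their entire cohort was filtered out.
```

The individual students' own records never enter the decision; only the scores attached to their neighborhood and high school do, which is exactly the mismatch the GAO warns about.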
But what about high-performing students enrolled in low-performing or underfunded high schools? How can a college recruit these students if where they live and learn keeps them off the lists that drive enrollment efforts? For the Government Accountability Office, this is a recipe for uneven treatment.
“Some students may not match the predominant characteristics of their neighborhood or high school and may miss the recruiting efforts that others receive,” warns the GAO.
To guard against such pitfalls, colleges and universities should consult diversity, equity, and inclusion professionals, Jenay Robert, a researcher at Educause, a nonprofit focused on the intersection of technology and higher education, said in a statement. If analytics staff aren’t working with diversity experts who understand their institution’s specific needs, “big data analytics can do more harm than good,” she said.
Higher education also lacks widely accepted policies on this topic.
In the absence of federal regulation of algorithms, colleges and universities must weigh how their institutional interests align with the interests of individual students – and how well that use serves the broader public good. In theory, there should be no conflict. When an institution uses predictive analytics to direct scholarships to students who might otherwise drop out, for example, the public good is served.
But the reality is often more complicated. Big data products and models give colleges and universities fine-grained analytics capabilities that were previously out of reach for most admissions offices. In testimony to the GAO, an industry expert offered a scenario in which a college might draw conclusions from a prospective student’s repeated visits to its campus or website – conclusions that ultimately lead to a smaller scholarship offer for that student than for similarly situated peers.
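The expert's scenario amounts to conditioning aid on predicted yield. A minimal sketch of that logic, with an invented toy yield model and invented weights (real products would be far more elaborate):

```python
def predicted_yield(campus_visits, site_visits):
    """Toy estimate of enrollment probability from engagement signals.
    The coefficients here are illustrative, not from any real product."""
    score = 0.2 + 0.1 * campus_visits + 0.02 * site_visits
    return min(score, 0.95)

def aid_offer(base_award, campus_visits, site_visits):
    """Offer less aid to students judged likely to enroll anyway --
    the behavior the GAO's expert described."""
    p = predicted_yield(campus_visits, site_visits)
    return round(base_award * (1.0 - 0.5 * p))

# Two otherwise identical applicants, differing only in visible interest:
eager = aid_offer(10_000, campus_visits=3, site_visits=20)
reserved = aid_offer(10_000, campus_visits=0, site_visits=1)
# eager < reserved: the more engaged student receives the smaller offer.
```

The perverse incentive is visible in the last two lines: the model rewards the institution for withholding aid precisely from the students who showed the most interest.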
For a college, the calculus is simple: why offer a large scholarship to a student likely to enroll anyway? For the country, however, a different dilemma emerges: even if more scholarship money is left over for others, is the public good really served when a student is penalized for touring campuses and researching colleges online before making one of the most important investments in American life?