Published February 22, 2022
By Paula Derrow
With predictive algorithms now powering everything from facial recognition technology to who gets a bank loan, an interdisciplinary initiative at Rutgers University–New Brunswick looks to harness the power of artificial intelligence to make it more transparent—and a force for good.
Artificial intelligence (AI) has made aspects of life more convenient and even safer, courtesy of services such as Siri and Alexa—not to mention semi-autonomous vehicles that make it easier to switch lanes on the New Jersey Turnpike. “There are tremendous benefits to AI,” says Fred S. Roberts, Distinguished Professor of Mathematics at the School of Arts and Sciences (SAS) and director of the Command, Control, and Interoperability Center for Advanced Data Analysis (CCICADA). The center, a university consortium with Rutgers as its lead partner, is sponsored by the U.S. Department of Homeland Security. “We can use facial recognition technology to identify missing children, for instance, or diagnose rare diseases. But you have to keep the trade-offs in mind.”
One of those trade-offs is that these powerful predictive algorithms—fueling everything from facial recognition technology to decisions about who gets a bank loan or a traffic ticket—can adversely affect individuals’ privacy, health, well-being, and personal finances, and are deepening inequities in American society. “Every university is deeply worried about AI: We see both its potential and its threats,” says Peter March, executive dean of SAS, where he is a Distinguished Professor of Mathematics.
“With AI, people tend to worry about things like super-intelligent computers turned evil like in The Terminator,” says Lauren M.E. Goodlad, a professor in the Department of English at SAS and chair of Critical AI, a new interdisciplinary Rutgers initiative examining the ethics of artificial intelligence. What is worrisome, she says, “is how this technology can be used in an opaque way to manipulate our behavior, as we’ve seen with Facebook, along with other problems that are making our country more unequal than it has been since the Gilded Age.”
“The dangers,” she says, “come with using massive sets of data on a scale that has never been available, coupled with massive computing power to facilitate data-centric machine learning.”
AI by any other name
AI, a field whose roots date to the 1940s and whose modern engine is machine learning, falls within a technology continuum that emphasizes data-driven decision-making, whether by a credit card company deciding to approve a loan or engineers building an autonomous vehicle. “Depending on the inherent subjectivity and perceptions of the algorithm developer and the context in which it is developed, the algorithm may reflect biases that don’t benefit everyone equally,” says Piyushimita Thakuriah, Distinguished Professor and director of the Rutgers Urban and Civic Informatics Lab.
Rutgers researchers are determined to change the pattern through projects like Minds and Machines—a critical AI initiative at SAS with a new approach to educating future data scientists. “It’s not enough to just produce fast algorithms,” Roberts says. “We need to build in ethical considerations from the start, being aware of the bias that algorithms can create and the resulting damage they can cause.”
Putting a face on AI bias
Consider facial recognition technology, which is widely used in policing. In New Jersey alone, the police capture more than 700,000 videos a year, according to Roberts, analyzing them for positive aims such as finding missing children. Yet modern facial recognition algorithms, which are trained on millions or even billions of photos of faces, are far from perfect. “The problem is,” says March, “we’ve trained the computer, maybe inadvertently, to recognize white or male faces because we’ve fed more of those photos into the database. AI is not as good at recognizing faces that don’t look like that.”
Case in point: a 2019 study from the Georgia Institute of Technology found that autonomous vehicles were better at recognizing—and possibly avoiding a collision with—lighter-skinned pedestrians than those with darker skin. Adds Goodlad: “Even if you think it’s a good idea to have facial recognition systems installed to surveil the population at large—and that’s a question our society hasn’t been given the chance to answer—there’s the added problem of inaccuracy. For instance, do we have enough Black women in our data sets for this technology to work reliably?”
According to the research, apparently not. In another widely cited study, MIT researcher Joy Buolamwini found that facial recognition technology has up to a 34.7 percent error rate for Black women compared with a 0.3 percent rate for white men. “[These technologies] also tend to be sexist, associating scenes with cooking with women and scenes with medicine or sports with men,” says Roberts. “We need to worry about what is causing that.” And then take steps to fix it.
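Disparities of this kind are surfaced by a simple kind of audit: break a system’s error rate down by demographic group and compare the numbers. The sketch below is illustrative only—the group labels and data are invented for the example, not drawn from the studies cited—but it shows the basic arithmetic behind such audits:

```python
def error_rate(y_true, y_pred):
    """Fraction of predictions that are wrong."""
    return sum(t != p for t, p in zip(y_true, y_pred)) / len(y_true)

def error_rate_by_group(y_true, y_pred, groups):
    """Break the overall error rate down by demographic group.

    A large gap between groups is the kind of disparity the
    audits described above are designed to expose.
    """
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = error_rate([y_true[i] for i in idx],
                              [y_pred[i] for i in idx])
    return rates

# Toy data: the classifier is far less accurate on group "B".
labels      = [1, 1, 0, 0, 1, 1, 0, 0]
predictions = [1, 1, 0, 0, 0, 0, 1, 0]
groups      = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(error_rate_by_group(labels, predictions, groups))
# → {'A': 0.0, 'B': 0.75}
```

An overall error rate for this toy data would be 37.5 percent—a single number that hides the fact that every mistake falls on one group, which is exactly why auditors insist on the per-group breakdown.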
Garbage in, garbage out
Errors made by facial recognition technology used by police or immigration services can have tragic consequences, resulting in people being wrongfully arrested. But AI-driven inequities extend into every area of life, from statewide prescription drug databases misidentifying patients as abusers of opioids (and denying them pain medication) to digital redlining (marketing algorithms used by online platforms to exclude certain people from seeing online ads based on factors such as race, gender, or age).
“These things have a real impact on the quality of people’s lives,” says Thakuriah, who co-edited Seeing Cities Through Big Data and points to research on traffic-accident records as an example. “In low-income neighborhoods, the data on accidents was not as complete as it was in high-income neighborhoods. There were just as many accidents in lower-income areas, but people who lived in these areas trusted the police less and were less likely to report them.”
The implications of that kind of error can be dramatic: “This data is used to make billion-dollar investment decisions, such as where to build a Level 1 trauma center,” she says.
That’s one reason experts at Rutgers–New Brunswick are laser-focused on data curation—finding out what data is included in these sets and how it’s organized. “There’s not enough emphasis on what is actually in these data sets, much less a careful documentation of them,” says Goodlad.
Making ethics a requirement of study—and research
There are many ways to prevent abuses of AI and harness the technology for good. Rutgers has introduced undergraduate computer science courses such as Computers and Society at SAS and Ethical Issues in Data Science at Rutgers University–Newark that teach programming within the context of ethics. “The social, legal, and ethical considerations of technology should not be something we consider after the fact,” says March.
Goodlad likens the role of ethics in data science to the necessity of the Hippocratic Oath in medicine. “Most professions have a guiding ethos, but this is a new idea in data science,” she says. “Besides educating everyone to be aware of the technology and what it’s good at and isn’t, we need to teach those in the field what it means to be an ethical scientist.” That’s when the technology can be used in a way that benefits everyone.
Kristin Dana, a professor at the Rutgers School of Engineering, researches the growing role of artificial intelligence in robotics, which she sees as a positive force in society. In 2020, the National Science Foundation gave Dana and an interdisciplinary Rutgers team a $3 million grant for a five-year project titled Socially Cognizant Robotics for a Technology Enhanced Society. It will evaluate robots not only for performance measures such as speed and accuracy but also for how they function in the real world.
“We are at a point where robotics may soon be part of everyday life and work,” says Dana, “but we want robots to be developed in a way that they can adapt to human needs and desires, rather than the other way around.”
Marrying regulation and education
The role of regulation is important, too—a topic that Thakuriah is passionate about. “In this country, we have left the governance of bottom-line outcomes regarding health and the economy to people like Mark Zuckerberg,” she says.
Roberts, who cites regulations in the European Union that require decisions made by algorithms to be explainable, interpretable, and transparent, agrees. “There is room,” he says, “for regulation and societal decision-making with AI.”
Transparency is possible only with the right kind of education. “These algorithms can’t be hidden in a black box; they need to be made available to people who want to see what they are,” he says. “That means training people to make sure they document them in understandable language.”
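The plain-language documentation Roberts describes can be as simple as a structured summary kept alongside a deployed model. The fields below are invented for this sketch (loosely inspired by the “model card” templates some practitioners publish), not a standard schema:

```python
# An illustrative, plain-language documentation record for a deployed
# model. Every field name and value here is hypothetical -- the point
# is the habit of writing the answers down, not this particular format.
model_documentation = {
    "purpose": "Rank loan applications for manual review",
    "training_data": "Applications from 2015-2020; see data sheet",
    "known_limitations": [
        "Under-represents applicants under 25",
        "Not validated outside the region it was trained on",
    ],
    "how_to_contest_a_decision": "Written appeal to the review board",
}

for field, value in model_documentation.items():
    print(f"{field}: {value}")
```

A record like this does not open the algorithm itself, but it gives the people affected by a decision something no black box provides: a statement, in ordinary language, of what the system is for, what it was trained on, and where it is known to fail.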
Ultimately, nobody wants to base their actions on judgments drawn from data they don’t understand, whether a medical diagnosis or taxes owed. “AI can explore permutations to a depth and degree that humans are simply not capable of, allowing us to extend our intuition and see patterns we couldn’t possibly see ourselves,” says March. “But the flip side is that we see judgments made and we have no idea why. And if you see a judgment rendered that you don’t understand, and you can’t replicate it, how good is it really?”