Why A.I. needs the liberal arts

Which U.S. academic institutions have played the biggest role in the development of A.I.? Several immediately spring to mind: MIT, of course, and Stanford. Most of the Ivy League schools. Berkeley, Carnegie Mellon, New York University, the University of Washington, and Georgia Tech would likely make the list too. But David Greene, the president of Colby College, a small liberal arts school in Waterville, Maine, thinks A.I. is too important to be left to the engineers and computer scientists—and yes, increasingly, the MBAs—at America’s top research universities.

“I would say the liberal arts need A.I. but also A.I. needs the liberal arts,” Greene tells me. “We in the liberal arts need to be vibrant participants in shaping A.I. and not just spectators.” Earlier this year, Greene signaled that Colby—which enrolls about 2,000 students annually and has an endowment less than 4% the size of Stanford’s $28.9 billion—intends to play a key part in that process.

Greene persuaded Andrew Davis, a successful investment manager and prominent Colby alum, to give the school a $30 million gift to establish the Davis Institute for Artificial Intelligence, the first of its kind at a liberal arts college. Greene then recruited Amanda Stent, a machine learning expert who led the development of natural-language processing technology at financial news and data provider Bloomberg, to be the institute’s first director. (Full disclosure: I used to work at Bloomberg too.)

The Colby president sees a vital role for liberal arts institutions in examining the societal impact of A.I. He says that until very recently the engineering and computer science departments of major research institutions focused too much on churning out new algorithms and conquering performance benchmarks, and too little on the technology’s real-world impact.

Colby’s new institute will have a small faculty of its own—consisting of about a half-dozen researchers, not all of them from a computer science background (Greene notes that Colby’s anthropology department has already suggested that one of the positions go to someone from that field)—and will likely offer students the chance to major in A.I. But Greene says its primary mission is to seed A.I. techniques and thinking about A.I. throughout the college. “I don’t want to concentrate this in computer science or engineering as you would in most universities, with a walled-off A.I. program,” Greene says.

Greene is convinced that A.I. is going to play such an important role in society that no matter what career path Colby’s students eventually take, they will need at least some basic understanding of the technology. Stent says her goal is to eventually have at least 25% of Colby’s faculty, across 80% of its academic departments, incorporating A.I. into their curricula in some way. A.I. is already reshaping how most academic disciplines conduct research—whether it’s a chemist using machine learning to sift through data or an archaeologist using computer vision to spot signs of ancient dwellings in satellite imagery. And Greene knows these other disciplines have a lot to say about how A.I. may transform economics, politics, and culture.

Stent says she was drawn to her new role at Colby because she sees A.I. as a field in trouble. “There are big cracks in not just the foundation, but the walls, the doors, and the windows of the building,” she says. Stent adds that even the phrase “artificial intelligence” is problematic. “I am a computer scientist; what on Earth do I have to say about what intelligence is?” she says. The field, she argues, would benefit from closer collaboration with cognitive psychologists and biologists. “Why has A.I. been so focused on seeing and hearing and not on other senses, like the amazing sense of smell that dogs have, or the emotional intelligence that humans have?” she says.

Stent thinks Colby’s new institute can help foster the multidisciplinary collaboration she sees as vital if A.I. is to achieve its potential while sidestepping negative ramifications, whether racial bias in algorithmic decisions or the erosion of privacy rights. When A.I. projects fail, she says, it’s very often because those building the software failed to consult the subject matter experts who know the use case best.

Stent also says that Colby and other liberal arts colleges, because of their humanistic approach to education, are perhaps best positioned to transform how A.I. develops. “What we need are humanistic approaches to A.I.,” she says. “Approaches that start with the human needs for autonomy, privacy, and connection, and start there instead of starting with the computer.”

***

Before we get to this week’s news, a reminder that if you want to know more about how A.I. can transform your company and your industry, apply to attend Fortune’s inaugural Brainstorm A.I. conference in Boston, Nov. 8-9. The conference will offer a deep dive into the business applications of A.I. Among those scheduled to speak: Moderna co-founder and chairman Dr. Noubar Afeyan, who will discuss the promise of A.I. in healthcare; Stanley Black & Decker CEO Jim Loree, who will explain how machine learning is turbocharging his business; and Amazon senior vice president and head scientist of Alexa Artificial Intelligence Rohit Prasad, who will explain how A.I. can help companies tailor their services to individual customers. Meanwhile, Levi Strauss senior vice president and chief strategy and artificial intelligence officer Katia Walsh will share her secrets for using machine learning to better engage with customers. I hope to see you there!

***

Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com

A.I. IN THE NEWS

Facebook's A.I. moderation claims. Internal files that whistleblower Frances Haugen leaked to The Wall Street Journal seem to indicate that Facebook's A.I. systems correctly identify and remove just 3% to 5% of hate speech on its platform and about 0.6% of violence and incitement, the newspaper reported. The documents undercut the company's own claims that it is making great strides in using A.I. to police its content. Facebook told the Journal that these numbers are not the best gauge of its progress and that it judges its success by whether its algorithms limit how many people see problem content—the A.I. systems can reduce the visibility of suspect posts even when they can't definitively classify a post as violating Facebook's policies. By this "prevalence" metric, the company said in a blog post responding to the Journal's reporting, hate speech accounts for just 0.05% of the content users see on Facebook, a figure that has dropped almost 50% in the past three quarters.
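To see how those two yardsticks can tell very different stories, here is a toy sketch in Python. Every number below is invented for illustration; these are not Facebook's actual figures.

    # Toy comparison of a removal-rate metric vs. a "prevalence" metric.
    # All numbers are made up for illustration.
    hate_posts = 1_000         # hate-speech posts created
    removed = 40               # posts the A.I. catches and removes (4% removal rate)
    views_per_demoted = 5      # views a demoted-but-not-removed post still gets
    total_views = 2_000_000    # all content views on the platform

    removal_rate = removed / hate_posts
    # Posts the A.I. merely suspects are demoted, so far fewer people see them.
    hate_views = (hate_posts - removed) * views_per_demoted
    prevalence = hate_views / total_views

    print(f"removal rate: {removal_rate:.1%}")  # 4.0%
    print(f"prevalence:   {prevalence:.3%}")    # 0.240%

A platform can thus remove only a small share of bad posts outright and still report a very low prevalence figure, which is exactly the gap at the heart of this dispute.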

Facebook doubles down on A.I. elsewhere. The company unveiled a dataset with 2,200 hours of first-person video and has made it publicly available for researchers to use in training A.I. systems. Facebook says the Ego4D dataset is important because it believes software that can understand first-person viewpoints will be critical to augmented- and virtual-reality applications in "the metaverse," the blending of digital and real-world experience that Facebook thinks is the next big thing in tech. But, as tech publication The Verge highlighted, capturing all that first-person video involved asking college students to wear body cameras nearly 24/7, raising privacy concerns. Facebook says it went to great lengths to address those concerns, hiring a firm to blur or alter the faces of people captured in the videos who had not consented to have their images used, and blurring car license plates.

Using "audio deepfake" tech to steal $35 million. United Arab Emirates authorities have asked the U.S. government for help investigating a bank fraud case in which criminals used an A.I.-generated voice to impersonate one of the bank's directors and convince a bank employee to transfer $35 million to them. According to Forbes, the elaborate con also involved fake emails that lead the bank employee to believe the transaction was legitimate. The UAE has asked the U.S. to get involved because officials there were able to trace more than $400,000 of the stolen money to two U.S. bank accounts.

Instacart buys a shopping A.I. startup for $350 million. The online grocery delivery giant is buying Caper AI, a New York-based company that equips grocery shopping carts and automated checkout tills with computer vision technology and smart scales that can determine what items a customer has placed in the cart or their shopping bag, enabling them to check out without a human cashier ringing them up. Among those currently using Caper's technology are Kroger and Wakefern, as well as Sobeys in Canada and Auchan in France and Spain. You can read more about the news here.

DeepMind acquires robotics simulator MuJoCo. The London-based A.I. company, which is owned by Google-parent Alphabet, said in a blog post that it had acquired MuJoCo, a physics simulator widely used in robotics research, for an undisclosed amount. Short for "Multi-Joint dynamics with Contact," MuJoCo was developed by Emo Todorov, a neuroscientist at the University of Washington, and sold as a commercial product through a startup called Roboti starting in 2015. DeepMind, which has relied on MuJoCo for much of its work that requires simulating real-world physics, including training virtual figures to learn to move around an environment, said it would make MuJoCo freely available to the public as open-source software on the code repository GitHub.
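If you're curious what working with MuJoCo actually looks like, here is a minimal sketch using its Python bindings. The toy scene and parameters are my own illustration, not anything from DeepMind's announcement.

    import mujoco  # Python bindings for the MuJoCo physics engine

    # A toy scene: a sphere dropped from one meter onto a flat plane.
    XML = """
    <mujoco>
      <worldbody>
        <geom type="plane" size="1 1 0.1"/>
        <body pos="0 0 1">
          <freejoint/>
          <geom type="sphere" size="0.1" mass="1"/>
        </body>
      </worldbody>
    </mujoco>
    """

    model = mujoco.MjModel.from_xml_string(XML)
    data = mujoco.MjData(model)

    # Step the physics forward; the sphere falls and settles on the plane.
    for _ in range(500):
        mujoco.mj_step(model, data)

    print(f"time: {data.time:.2f}s, sphere height: {data.qpos[2]:.3f}m")

It is this kind of fast, accurate contact simulation that makes MuJoCo so popular for training simulated robots before transferring what they learn to real hardware.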

EYE ON A.I. TALENT

The U.S. Federal Trade Commission has named Stephanie Nguyen its acting chief technology officer, according to The Information. Nguyen was previously the agency's deputy CTO and before that was a researcher at MIT.

Wysa, a software company with offices in Boston, London, and India, whose mental health app combines A.I. and human mental health experts, has hired Zereana Jess-Huff to be its chief clinical officer, trade publication AiAuthority reports. In addition, the company announced that Chad Cruze is joining as head of sales and that Ross O'Brien has been hired as managing director for the U.K. and Europe. O'Brien was previously the associate director of innovation and technology at the U.K.'s National Health Service (NHS).

EYE ON A.I. RESEARCH

Federated learning may not offer hoped-for data privacy. In recent years, a growing number of researchers and businesses have gotten excited about "federated learning," a technique that lets parties that don't want to share data jointly train an A.I. model on their combined information: each party trains on its own data locally and shares only model updates, never the raw data itself. The method has attracted particular interest in healthcare, where different hospitals or companies may have a legal obligation not to share patient data, but where there is a sense that all patients might benefit from an A.I. system trained on a much larger, more diverse dataset. It has also drawn interest from financial firms that want to help the whole industry improve its fraud detection algorithms, for instance, without sharing customer data.
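In its simplest form, often called federated averaging, a central server repeatedly collects locally trained model weights and averages them. Here is a bare-bones sketch in Python, using a toy linear-regression problem to illustrate the general idea; it is not any production system.

    import numpy as np

    def local_update(weights, X, y, lr=0.1, steps=10):
        # One client's training pass on data that never leaves its premises.
        w = weights.copy()
        for _ in range(steps):
            grad = 2 * X.T @ (X @ w - y) / len(y)  # least-squares gradient
            w -= lr * grad
        return w

    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])

    # Two "hospitals" with private datasets.
    clients = []
    for _ in range(2):
        X = rng.normal(size=(50, 2))
        y = X @ true_w + rng.normal(scale=0.1, size=50)
        clients.append((X, y))

    # Federated averaging: the server sees only weight updates, never raw data.
    global_w = np.zeros(2)
    for _ in range(20):
        updates = [local_update(global_w, X, y) for X, y in clients]
        global_w = np.mean(updates, axis=0)

    print("learned weights:", global_w)  # close to true_w

The privacy argument rests entirely on the fact that only those weight updates ever leave each client, and that assumption is exactly what the new research calls into question.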

Now researchers at the University of Stavanger, in Norway, as well as RWTH Aachen University and the University of Cologne, both in Germany, have shown that it is possible to reverse engineer these federated learning techniques to recover the underlying data, so long as an attacker has access to the updates to the shared algorithm. In a paper published this week on the non-peer-reviewed research repository arXiv.org, the researchers concluded that federated learning alone is not enough to guarantee data privacy. Results like this could dampen enthusiasm for federated learning in business.
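The broad idea behind this family of attacks is to optimize fake inputs until they reproduce the update a client sent. The toy sketch below is in the spirit of earlier "deep leakage from gradients" work, not necessarily the new paper's exact method:

    import torch
    import torch.nn.functional as F

    torch.manual_seed(0)
    model = torch.nn.Linear(4, 2)  # a tiny stand-in for the shared model

    # The victim computes a gradient update on one private example...
    x_real = torch.randn(1, 4)
    y_real = torch.tensor([1])
    loss = F.cross_entropy(model(x_real), y_real)
    true_grads = torch.autograd.grad(loss, model.parameters())

    # ...and the attacker, who sees only that update, tunes dummy data
    # until its gradients match the ones observed on the wire.
    x_fake = torch.randn(1, 4, requires_grad=True)
    opt = torch.optim.LBFGS([x_fake], lr=0.5)

    def closure():
        opt.zero_grad()
        fake_loss = F.cross_entropy(model(x_fake), y_real)  # label assumed known or guessed
        fake_grads = torch.autograd.grad(fake_loss, model.parameters(), create_graph=True)
        diff = sum(((fg - tg) ** 2).sum() for fg, tg in zip(fake_grads, true_grads))
        diff.backward()
        return diff

    for _ in range(50):
        opt.step(closure)

    print("real input:     ", x_real)
    print("recovered input:", x_fake.detach())  # typically very close to x_real

That a few dozen lines can reconstruct a private input from nothing but a shared update is exactly why the authors argue federated learning on its own is not a privacy guarantee.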

FORTUNE ON A.I.

‘Gone too far’: Meet the Dutch chips giant that Silicon Valley loves and Biden fears—by Christiaan Hetzner

Facebook announces European hiring spree as regulatory scrutiny intensifies—by Jeremy Kahn

Tesla’s overseas diehards question their faith as Elon Musk opens self-driving FSD beta—but only in the U.S.—by Christiaan Hetzner

The CIA’s venture capital firm has been busy lately—by Kevin Dugan

BRAIN FOOD

Facial recognition, surveillance cameras, and ethics. There has been a lot of reporting on the racial bias inherent in many facial recognition systems. This problem generally stems from minority groups being underrepresented in the data used to train the software. The problem is especially bad for facial-recognition systems that try to identify people in surveillance camera footage, because the image quality is often poor. So applying facial recognition requires that the images first be enhanced, or upscaled. But the upscaling algorithms themselves often suffer from a lack of diverse faces in their training data. (The best-known example of this is a popular upscaling algorithm called PULSE, which had the unfortunate side effect of morphing low-res images of Black people, including a picture of former President Barack Obama, into high-res images of white people.)

Now a team of researchers from the Australian National University, Tencent, and Imperial College London has sought to solve this problem by building a huge facial recognition dataset that is more representative. Their EDFace-Celeb-1M dataset contains 1.7 million face photos of more than 40,000 different celebrities, 31.1% of whom are white, 19.2% Black, 19.6% Asian, and 18.3% Latino. The dataset is also 64% male and 36% female. You can read the academics' research on this dataset here.

This work is important. But it misses the larger ethical issue with surveillance and facial recognition: that the technology is disproportionately used to police minority groups. That is true in the U.S., where police are more likely to use these systems in places with large numbers of Black and Latino residents, as well as in other countries, such as China, where a vast surveillance system has been deployed against the Uighur ethnic minority. Simply improving the accuracy of facial recognition doesn't address this systemic problem. In fact, it may allow those using facial recognition to claim they've "solved the bias problem" with better training data, thus making it more likely that surveillance will be deployed in a discriminatory manner.
