In an ongoing effort to give AI-focused women academics and others their well-deserved and long-overdue time in the spotlight, TechCrunch has launched a series of interviews highlighting remarkable women who have contributed to the AI revolution. As the AI boom continues, we will publish several pieces throughout the year spotlighting key work that often goes unrecognized. Read more profiles here.
Claire Leibowicz is director of the artificial intelligence and media integrity program at the Partnership for AI (PAI), an industry group supported by Amazon, Meta, Google, Microsoft and other companies committed to the “responsible” deployment of artificial intelligence technology. She also oversees PAI’s Artificial Intelligence and Media Integrity Steering Committee.
In 2021, Leibowicz was a journalism fellow at Tablet Magazine, and in 2022 she was a fellow at the Rockefeller Foundation’s Bellagio Center, focusing on AI governance. Leibowicz, who holds bachelor’s degrees in psychology and computer science from Harvard University and a master’s degree from Oxford University, has advised companies, governments, and nonprofits on AI governance, generative media, and digital information.
Q&A
Briefly, how did you get your start in AI? What drew you to the field?
It may seem counterintuitive, but I came to the field of artificial intelligence out of an interest in human behavior. I grew up in New York, and I’ve always been fascinated by the many ways people interact there and how such a diverse society takes shape. I’m curious about the big questions that affect truth and justice: How do we decide whom to trust? What triggers conflict between groups? Why do people believe some things to be true and not others? I began exploring these questions through cognitive science research in my academic life, and I quickly realized that technology was shaping the answers. I also found it interesting how artificial intelligence could serve as a metaphor for human intelligence.
That led me into the computer science classroom, where the faculty emphasized the value of filling the room with students from outside computer science and engineering so the class could focus on the social impact of technologies, including artificial intelligence. I have to give credit to Professor Barbara Grosz, a pioneer in natural language processing, and Professor Jim Waldo, whose background blends philosophy and computer science. This was before “artificial intelligence ethics” became a distinct and popular field. They made clear that while technical understanding is beneficial, technology affects geopolitics, economics, social engagement, and much more, so people from many disciplinary backgrounds need to weigh in on seemingly technical questions.
Whether you’re an educator thinking about how generative AI tools affect pedagogy, a museum curator trying to predict routes through an exhibition, or a doctor working on new image-detection methods for reading lab reports, AI can affect your field. I was intrigued by that reality: the intellectual diversity inherent in AI work brings with it the opportunity to influence many facets of society.
What work (in artificial intelligence) are you most proud of?
I’m proud of my work in the field of artificial intelligence, which brings diverse perspectives together in a surprising, action-oriented way—one that not only embraces, but encourages disagreement. I joined PAI six years ago as the organization’s second employee and immediately recognized how groundbreaking the organization was in its commitment to diverse perspectives. PAI considers such work an important prerequisite for AI governance that can mitigate harm and lead to practical adoption and impact in the AI field. That’s proven to be true, and I’m excited to help PAI embrace multidisciplinarity and watch the institution grow alongside the field of artificial intelligence.
Our work on synthetic media began six years ago, long before generative AI entered the public consciousness, and it exemplifies the possibilities of multi-stakeholder AI governance. In 2020, we worked with nine different organizations from civil society, industry, and media to create Facebook’s Deepfake Detection Challenge, a machine learning competition to build models that detect AI-generated media. Those outside perspectives helped shape the fairness and purpose of the winning models, showing how human rights experts and journalists can contribute to seemingly technical problems like deepfake detection. Last year, we published a set of prescriptive guidelines on responsible synthetic media, PAI’s Responsible Practices for Synthetic Media, which now has 18 supporters from very different backgrounds, from OpenAI to TikTok to Code for Africa, Bumble, the BBC and WITNESS. Being able to develop actionable guidance grounded in technical and social realities is one thing; actually securing institutional support is another. In this case, the institutions commit to transparent reporting on how they navigate the synthetic media landscape. The AI projects I find most meaningful are those that offer practical guidance and show how to implement that guidance across institutions.
How do you deal with the challenges of the male-dominated tech industry and the male-dominated artificial intelligence industry?
I have had wonderful male and female mentors throughout my career. Finding people who support and challenge me at the same time has been key to any growth I’ve experienced. I’ve found that focusing on shared interests and discussing the questions animating the field of AI can bring people with different backgrounds and perspectives together. Interestingly, more than half of PAI’s team is women, and many of the organizations working on AI and society or responsible AI questions have many women on staff. That often contrasts with the makeup of engineering and AI research teams, and it’s a step in the right direction for representation across the AI ecosystem.
What advice would you give to women seeking to enter the field of artificial intelligence?
As I alluded to in my previous answer, some of the predominantly male-dominated areas of AI I’ve encountered are also the most technical. While we shouldn’t prioritize technical acumen over other forms of literacy in the AI field, I have found that technical training has benefited my confidence and effectiveness. We need equal representation in technical roles while remaining open to the expertise of people from other fields, such as civil rights and politics, where representation is more balanced. At the same time, equipping more women with technical literacy is key to balancing representation in the field of AI.
I also find it very meaningful to connect with women in the AI field who are already balancing family and professional life. Finding role models to talk about the big issues related to career and parenthood, as well as some of the unique challenges women still face at work, makes me feel better equipped to handle those challenges.
What are the most pressing issues facing artificial intelligence in its development?
As AI develops, questions of truth and trust, both online and offline, become increasingly thorny. With content from images to videos to text able to be generated or modified by AI, is seeing still believing? How can we rely on evidence if documents can be easily and realistically doctored? If it is so easy to imitate real people, can we have human-only spaces online? How do we weigh the expressive freedom AI enables against the potential for AI systems to cause harm? More broadly, how do we ensure that the information environment is not shaped only by a handful of companies and the people who work at them, but incorporates the perspectives of stakeholders from around the world, including the public?
Beyond these specific questions, PAI works on other facets of AI and society, including how we consider fairness and bias in an age of algorithmic decision-making, how labor affects and is affected by AI, how to guide the responsible deployment of AI systems, and even how to make AI systems more reflective of a multitude of perspectives. At a structural level, we must consider how AI governance can navigate enormous trade-offs by incorporating many different viewpoints.
What issues should artificial intelligence users pay attention to?
First, AI users should know that if something sounds too good to be true, it probably is.
Of course, the generative AI craze of the past year has reflected tremendous ingenuity and innovation, but it has also resulted in public information surrounding AI that is often exaggerated and inaccurate.
Users of AI should also understand that AI, for the most part, is not revolutionary but rather exacerbates and expands existing problems and opportunities. That doesn’t mean they should take AI less seriously; rather, they should use this knowledge as a useful foundation for navigating an increasingly AI-infused world. For example, if you’re worried that people might mislead others about the context of a video by changing its title ahead of an election, you should also be worried about the speed and scale at which deepfake technology can do the same. If you’re concerned about surveillance in the workplace, you should also consider how AI can make such surveillance easier and more pervasive. Maintaining a healthy skepticism about the novelty of AI problems, while being honest about what is distinct about the current moment, is a helpful frame for users encountering AI.
What is the best way to build artificial intelligence responsibly?
Building AI responsibly requires us to broaden our conception of who plays a role in “building” AI. Of course, tech companies and social media platforms are a key channel for shaping the impact of AI systems, and those institutions are critical to building technology responsibly. At the same time, we must recognize that diverse institutions from civil society, industry, media, academia, and the public must continue to participate if we are to build responsible AI that serves the public interest.
Take responsible synthetic media development and deployment as an example.
While tech companies may weigh their responsibility for how synthetic videos affect users ahead of an election, journalists may worry about impostors creating synthetic videos that claim to come from their trusted news brands. Human rights defenders may focus on how AI-generated media could diminish the value of video as evidence of abuses. And artists may be excited by the opportunity to express themselves through generative media while also worrying that their creations could be used to train AI models that generate new media without their consent. These varied considerations show how vital it is to involve diverse stakeholders in initiatives and efforts to build AI responsibly, and how countless institutions both affect and are affected by the ways AI is integrated into society.
How can investors better promote responsible artificial intelligence?
A few years ago, I heard DJ Patil, the former chief data scientist in the White House, describe a revision of the “move fast and break things” mantra of the early social media era that has stuck with me ever since: he urged the field to “move purposefully and fix things.”
I like this framing because it doesn’t mean stagnating or abandoning innovation; it implies intentionality and the possibility of innovating while taking responsibility. Investors should help foster this mentality, giving their portfolio companies more time and space to adopt responsible AI practices without stifling progress. Often, institutions describe limited time and tight deadlines as the constraint on doing the “right” thing, and investors can be a major catalyst in changing that dynamic.
The more I work in the field of artificial intelligence, the more I find myself grappling with deep humanistic problems. These are questions for all of us to answer.