In an effort to give female academics and others focused on AI well-deserved and long-overdue spotlight time, TechCrunch has launched a series of interviews focusing on the remarkable women contributing to the AI revolution. As the AI craze continues, we will publish multiple articles throughout the year highlighting critical work that is often overlooked. Read more profiles here.
Heidy Khlaaf is the Director of Engineering at cybersecurity company Trail of Bits. She specializes in evaluating software and artificial intelligence implementations in “safety-critical” systems such as nuclear power plants and autonomous vehicles.
Khlaaf received her Ph.D. in computer science from University College London and holds a bachelor's degree in computer science and philosophy from Florida State University. She has led safety and security audits, provided consultations and reviews of assurance cases, and contributed to the development of standards and guidelines for safety- and security-related applications and their development.
Q&A
In short, how did you get started in the field of artificial intelligence? What drew you to this field?
I was drawn to robotics at a young age and started programming at age 15 because I was fascinated by the prospect of using robotics and artificial intelligence (as they are inextricably linked) to automate workloads where they are needed most. As in manufacturing, I saw robotics being used to help the elderly and to automate dangerous manual labor in our society. I did, however, receive my Ph.D. in a different subfield of computer science, because I believe that having a strong theoretical foundation in computer science allows you to make educated, scientific decisions about where AI may or may not be suitable, and where pitfalls may lie.
What work (in artificial intelligence) are you most proud of?
Leveraging my extensive expertise and background in safety engineering and safety-critical systems to provide needed context and critique for the nascent field of AI "safety." Although the field of AI safety has attempted to adopt and cite well-established safety and security techniques, various terms have been misconstrued in their use and meaning. This lack of consistent or intentional definitions compromises the integrity of the safety techniques currently used within the AI community. I'm particularly proud of "Toward Comprehensive Risk Assessments and Assurance of AI-Based Systems" and "A Hazard Analysis Framework for Code Synthesis Large Language Models," in which I deconstruct false narratives about safety and AI evaluations, and provide concrete steps for bridging the safety gaps within AI.
How do you deal with the challenges of the male-dominated tech industry and the male-dominated artificial intelligence industry?
Acknowledging how little the status quo has changed is not something we often discuss, but I believe it is important for myself and other women in tech to understand our place in the industry and to hold a realistic view of the changes that are needed. Retention rates and the proportion of women in leadership positions have remained essentially the same since I joined the field more than a decade ago. As TechCrunch has aptly pointed out, despite women's tremendous breakthroughs and contributions in AI, we remain excluded from the conversations that we ourselves have defined. Recognizing this lack of progress helped me realize that building a strong community of individuals as a source of support is far more valuable than relying on DEI initiatives, which unfortunately have not moved the needle, given that bias and skepticism against women in tech are still quite prevalent.
What advice would you give to women seeking to enter the field of artificial intelligence?
Don't appeal to authority. Find work that you genuinely believe in, even if it contradicts popular narratives. Given the political and economic power that AI labs currently hold, people instinctively accept anything AI "thought leaders" say as fact, yet many AI claims are marketing claims that overstate AI's capabilities to benefit a bottom line. However, I have found considerable hesitancy, especially among young women in the field, to express skepticism about unsubstantiated claims made by male counterparts. Imposter syndrome has a strong hold on women in tech, leading many to doubt their own scientific integrity. But it is more important than ever to challenge exaggerated claims about the capabilities of AI, especially those that are not falsifiable under the scientific method.
What are the most pressing issues facing artificial intelligence in its development?
No matter what advances we see in AI, it will never be the sole solution to our problems, technologically or socially. There is a current trend to shoehorn AI into every possible system, regardless of its effectiveness (or lack thereof) across numerous domains. AI should augment, not replace, human capabilities, yet we are witnessing a complete disregard for AI's pitfalls and failure modes, which are causing real, tangible harm. Just recently, the AI system ShotSpotter led to a police officer firing at a child.
What issues should artificial intelligence users pay attention to?
How unreliable AI really is. AI algorithms are known to be flawed, with high error rates observed in applications that require precision, accuracy, and safety. AI systems are trained in a manner that embeds human biases and discrimination into their outputs, which become "de facto" and automated. This is because AI systems, by their nature, provide results based on statistical and probabilistic inferences and correlations from historical data, rather than any type of reasoning, factual evidence, or "causation."
What is the best way to build artificial intelligence responsibly?
Ensure that AI is developed in a way that protects people's rights and safety by establishing verifiable claims and holding AI developers accountable to them. These claims should also be scoped to a regulatory, safety, ethical, or technical application, and must not be falsifiable. Otherwise, there is a significant lack of scientific integrity in how these systems are evaluated. Independent regulators should also evaluate AI systems against these claims, as is currently required of many products and systems in other industries, such as those evaluated by the FDA. AI systems should not be exempt from the standard auditing processes established to ensure public and consumer protection.
How can investors better promote responsible artificial intelligence?
Investors should partner with and fund organizations seeking to establish and advance AI audit practices. Most of the money is currently being invested in the AI labs themselves, with the belief that their security teams are strong enough to drive AI assessments. However, independent auditors and regulators are key to public trust. Independence enables the public to have confidence in the accuracy and completeness of assessments and the integrity of regulatory outcomes.