
Women in Artificial Intelligence: Sandra Wachter, Professor of Data Ethics, University of Oxford

By techempire | 3 Comments | 8 Mins Read

In an effort to give female academics and others focused on AI well-deserved and long-overdue spotlight time, TechCrunch has launched a series of interviews focusing on the remarkable women contributing to the AI revolution. As the AI craze continues, we will publish multiple articles throughout the year highlighting critical work that is often overlooked. Read more profiles here.

    Sandra Wachter is Professor and Senior Research Fellow in Data Ethics, Artificial Intelligence, Robotics, Algorithms and Regulation at the Oxford Internet Institute. She is also a former fellow at the Alan Turing Institute, the UK’s national institute for data science and artificial intelligence.

While at the Turing Institute, Wachter assessed the ethical and legal aspects of data science, highlighting cases in which opaque algorithms have produced racist and sexist outcomes. She also researches ways to audit artificial intelligence to tackle disinformation and promote fairness.

    Q&A

    In short, how did you get started in the field of artificial intelligence? What drew you to this field?

There has never been a time in my life when I didn’t believe that innovation and technology have incredible potential to make people’s lives better. However, I also know that technology can have devastating consequences for people’s lives. Therefore, I am always driven, not least by my strong sense of justice, to find a middle ground: promoting innovation while protecting human rights.

    I have always believed that the law has a very important role to play. Laws can be a middle ground that both protects people and promotes innovation. Law as a subject came naturally to me. I like a challenge, I like learning how the system works, seeing how I can play with it, finding loopholes and subsequently closing them.

Artificial intelligence is an incredibly transformative force. It is being implemented in finance, employment, criminal justice, immigration, health and the arts. Whether that is a good thing or a bad thing is a matter of design and policy. I was naturally drawn to it because I feel the law can make a meaningful contribution in ensuring innovation reaches as many people as possible.

    What work (in artificial intelligence) are you most proud of?

I think the work I’m most proud of right now is the work I co-authored, as a lawyer, with Brent Mittelstadt (a philosopher) and Chris Russell (a computer scientist).

Our latest paper on bias and fairness, The Unfairness of Fair Machine Learning, reveals the harmful effects of implementing many “group fairness” measures in practice. Specifically, fairness is achieved by “levelling down”, or making everyone worse off, rather than by helping the disadvantaged. This approach is deeply problematic in the context of EU and UK non-discrimination law and is ethically troubling. In a Wired article, we discussed how levelling down can be harmful in practice: in health care, for example, enforcing group fairness can mean missing more cancer cases than strictly necessary, while also reducing the overall accuracy of the system.
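The levelling-down effect described here can be sketched numerically. The following is a minimal, illustrative example only; the group sizes, detection rates, and the "degrade the better group" equalisation strategy are assumptions for demonstration, not figures or methods from the paper:

```python
# Illustrative sketch: equalising true positive rates (TPR) across groups
# by "levelling down" misses more true cases overall.

def missed_cases(n_positives: int, tpr: float) -> int:
    """Number of true cases the model fails to detect."""
    return round(n_positives * (1 - tpr))

# Assume a screening model detects 90% of true cases in group A
# but only 70% in group B, with 1000 true cases in each group.
n_a = n_b = 1000
tpr_a, tpr_b = 0.90, 0.70

before = missed_cases(n_a, tpr_a) + missed_cases(n_b, tpr_b)

# "Levelling down": equal TPRs are achieved by degrading group A
# to group B's rate, rather than improving detection for group B.
tpr_a_levelled = tpr_b
after = missed_cases(n_a, tpr_a_levelled) + missed_cases(n_b, tpr_b)

print(f"missed cases before: {before}, after levelling down: {after}")
# → missed cases before: 400, after levelling down: 600
```

The groups now have equal true positive rates, yet 200 additional cases go undetected: a "fair" outcome in which nobody is better off and one group is strictly worse off.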

    For us, this is scary, but it’s important for technologists, policy people, and everyone to understand. In fact, we have engaged with UK and EU regulators and shared our alarming results with them. It is my deep hope that this will provide policymakers with the necessary leverage to implement new policies that will prevent AI from causing such serious harm.

    How do you address the challenges of the male-dominated tech industry and the male-dominated artificial intelligence industry?

    Interestingly, I never thought of technology as something “for” men. It wasn’t until I started school that society told me technology wasn’t for people like me. I still remember, when I was 10 years old, the curriculum stipulated that girls had to do knitting and sewing, while boys had to build birdhouses. I also wanted to build a birdhouse and asked to be transferred to the boys’ class, but the teacher told me “girls can’t do that.” I even went to the school principal to try to overturn the decision, but unfortunately was unsuccessful at the time.

    It’s very difficult to fight the stereotype that you shouldn’t be part of this community. I wish I could say something like this never happens again, but unfortunately that’s not the case.

However, I have been extremely fortunate to work with allies like Brent Mittelstadt and Chris Russell. I have been blessed with incredible mentors, such as my Ph.D. supervisor, and I have a growing network of like-minded people of all genders who are doing their best to guide the way forward and improve the situation for everyone interested in tech.

    What advice would you give to women seeking to enter the field of artificial intelligence?

The most important thing is to try to find like-minded people and allies. Finding your people and supporting each other is crucial. My most impactful work has always come from talking to open-minded people from other backgrounds and disciplines to solve common problems we face. Received wisdom alone cannot solve new problems, which is why women and other groups who have historically faced barriers to entering artificial intelligence and other technical fields hold the tools to truly innovate and offer something new.

    What are the most pressing issues facing artificial intelligence in its development?

    I think there are a lot of issues that require careful consideration of law and policy. To name a few, artificial intelligence is plagued by biased data, which can lead to discriminatory and unfair outcomes. Artificial intelligence is inherently opaque and incomprehensible, but it is tasked with deciding who gets loans, who gets jobs, who has to go to jail and who can go to college.

Generative AI has its own associated problems: it can produce misinformation, is riddled with hallucinations, infringes on data protection and intellectual property rights, puts people’s jobs at risk, and contributes more to climate change than the aviation industry.

    We have no time to waste; we need to solve these problems yesterday.

    What issues should artificial intelligence users pay attention to?

    I think people tend to believe in a certain narrative of “artificial intelligence is here to stay, join or be left behind.” I think it’s important to consider who is pushing this narrative and who is profiting from it. It’s important to remember where the actual power lies. The power lies not with those who innovate, but with those who purchase and implement artificial intelligence.

    Therefore, consumers and businesses should ask themselves: “Can this technology really help me? In what ways?” Today’s electric toothbrushes have “artificial intelligence” embedded in them. Who is this for? Who needs this? What is being improved here?

    In other words, ask yourself what went wrong, what needs to be fixed, and whether AI can actually fix it.

    This way of thinking will transform market forces, and innovation will hopefully shift toward a focus on community utility rather than just profit.

    What is the best way to build artificial intelligence responsibly?

Create laws requiring responsible artificial intelligence. Here a very unhelpful and untrue narrative tends to dominate: regulation kills innovation. This is not true. Regulation strangles only harmful innovation. Good laws promote and nourish ethical innovation; that’s why we have safe cars, planes, trains and bridges. Society does not suffer if regulation prevents the creation of artificial intelligence that violates human rights.

Traffic and safety regulations for cars were once also said to “stifle innovation” and “limit autonomy.” These laws prohibit driving without a license, keep cars without seat belts and airbags off the market, and penalize people who do not obey the speed limit. Imagine what the auto industry’s safety record would look like if we had no laws regulating vehicles and drivers. Artificial intelligence is currently at a similar turning point, and heavy industry lobbying and political pressure mean it remains unclear which path it will take.

    How can investors better promote responsible artificial intelligence?

    A few years ago, I wrote a paper called “How Artificial Intelligence Can Equitably Make Us Richer.” I firmly believe that artificial intelligence that respects human rights, is fair, explainable, and sustainable is not only the legal, ethical, and morally right thing to do, but it can also be profitable.

    I really hope investors understand that if they drive responsible research and innovation, they will also get better products. Bad data, bad algorithms, and poor design choices lead to worse products. Even though I can’t convince you that you should do the ethical thing because it’s the right thing to do, I hope you’ll see that the ethical thing is also more profitable. Ethics should be viewed as an investment, not an obstacle to be overcome.
