Women in AI: Ewa Luger explores how AI affects culture — and vice versa


To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Ewa Luger is co-director of the Institute of Design Informatics and co-director of the Bridging Responsible AI Divides (BRAID) program, backed by the Arts and Humanities Research Council (AHRC). She works closely with policymakers and industry, and is a member of the U.K. Department for Culture, Media and Sport (DCMS) College of Experts, a group that provides scientific and technical advice to the department.

Luger’s research explores social, ethical and interactional issues in the context of data-driven systems, including AI systems, with a particular interest in design, the distribution of power, spheres of exclusion, and user consent. Previously, she was a fellow at the Alan Turing Institute, served as a researcher at Microsoft, and was a fellow at Corpus Christi College at the University of Cambridge.

Q&A

Briefly, how did you get your start in AI? What attracted you to the field?

After my PhD, I moved to Microsoft Research, where I worked in the user experience and design group in the Cambridge (U.K.) lab. AI was a core focus there, so my work naturally developed more fully into that area and expanded out into issues surrounding human-centered AI (e.g., intelligent voice assistants).

When I moved to the University of Edinburgh, it was due to a desire to explore issues of algorithmic intelligibility, which, back in 2016, was a niche area. I’ve found myself in the field of responsible AI and currently jointly lead a national program on the subject, funded by the AHRC.

What work are you most proud of in the AI field?

My most-cited work is a 2016 paper about the user experience of voice assistants. It was the first study of its kind and is still widely referenced. But the work I’m personally most proud of is ongoing. BRAID is a program I jointly lead, designed in partnership with a philosopher and ethicist. It’s a genuinely multidisciplinary effort intended to support the development of a responsible AI ecosystem in the U.K.

In partnership with the Ada Lovelace Institute and the BBC, it aims to connect arts and humanities knowledge to policy, regulation, industry and the voluntary sector. We often overlook the arts and humanities when it comes to AI, which has always seemed bizarre to me. When COVID-19 hit, the value of the creative industries became profoundly clear; we know that learning from history is critical to avoid making the same mistakes, and philosophy is the root of the ethical frameworks that have kept us safe and informed within medical science for many years. Systems like Midjourney rely on artist and designer content as training data, and yet somehow these disciplines and practitioners have little to no voice in the field. We want to change that.

More practically, I’ve worked with industry partners like Microsoft and the BBC to co-produce responsible AI challenges, and we’ve worked together to find academics who can respond to those challenges. BRAID has funded 27 projects so far, some of which have been individual fellowships, and we have a new call going live soon.

We’re designing a free online course for stakeholders looking to engage with AI, setting up a forum where we hope to engage a cross-section of the population as well as other sectoral stakeholders to support governance of the work — and helping to explode some of the myths and hyperbole that surround AI at the moment.

I know that kind of narrative is what floats the current investment around AI, but it also serves to cultivate fear and confusion among those people who are most likely to suffer downstream harms. BRAID runs until the end of 2028, and in the next phase, we’ll be tackling AI literacy, spaces of resistance, and mechanisms for contestation and recourse. It’s a (relatively) large program at £15.9 million over six years, funded by the AHRC.

How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

That’s an interesting question. I’d start by saying that these issues aren’t solely found in industry, which is often perceived to be the case. The academic environment has very similar challenges with respect to gender equality. I’m currently co-director of an institute — Design Informatics — that brings together the school of design and the school of informatics, and so I’d say there’s a better balance there, both with respect to gender and with respect to the kinds of cultural issues that limit women from reaching their full professional potential in the workplace.

But during my PhD, I was based in a male-dominated lab, as I was, to a lesser extent, when I worked in industry. Setting aside the obvious effects of career breaks and caring responsibilities, my experience has been of two interwoven dynamics. Firstly, there are much higher standards and expectations placed on women — for example, to be amenable, positive, kind, supportive, team players and so on. Secondly, we’re often reticent when it comes to putting ourselves forward for opportunities that less-qualified men would quite aggressively go for. So I’ve had to push myself quite far out of my comfort zone on many occasions.

The other thing I’ve needed to do is set very firm boundaries and learn when to say no. Women are often trained to be (and seen as) people pleasers. We can too easily be seen as the go-to people for the kinds of tasks that would be less attractive to our male colleagues, even to the extent of being assumed to be the tea-maker or note-taker in any meeting, irrespective of professional status. And it’s only really by saying no, and making sure that you’re aware of your value, that you ever end up being seen in a different light. It would be an overgeneralization to say that this is true of all women, but it has certainly been my experience. I should say that I had a female manager while I was in industry, and she was wonderful, so the majority of sexism I’ve experienced has been within academia.

Overall, the issues are structural and cultural, and so navigating them takes effort — firstly in making them visible, and secondly in actively addressing them. There are no simple fixes, and any navigation places yet more emotional labor on women in tech.

What advice would you give to women seeking to enter the AI field?

My advice has always been to go for opportunities that allow you to level up, even if you don’t feel that you’re 100% the right fit. Let them decline rather than foreclosing opportunities yourself. Research shows that men go for roles they think they could do, whereas women tend to go only for roles they feel they can already do competently. Currently, there’s also a trend toward more gender awareness in the hiring process and among funders, although recent examples show how far we have to go.

If you look at the U.K. Research and Innovation AI hubs, a recent high-profile, multi-million-pound investment, all nine of the research hubs announced are led by men. We should really be doing better to ensure gender representation.

What are some of the most pressing issues facing AI as it evolves?

Given my background, it’s perhaps unsurprising that I’d say that the most pressing issues facing AI are those related to the immediate and downstream harms that might occur if we’re not careful in the design, governance and use of AI systems.

The most pressing issue, and one that has been heavily under-researched, is the environmental impact of large-scale models. We might choose at some point to accept those impacts if the benefits of the application outweigh the risks. But right now, we’re seeing widespread use of systems like Midjourney run simply for fun, with users largely, if not completely, unaware of the impact each time they run a query.

Another pressing issue is how we reconcile the speed of AI innovation with the ability of the regulatory environment to keep up. It’s not a new problem, but regulation is the best instrument we have to ensure that AI systems are developed and deployed responsibly.

It’s very easy to assume that what has been called the democratization of AI — by this, I mean systems such as ChatGPT being so readily available to anyone — is a positive development. However, we’re already seeing the effects of generated content on the creative industries and creative practitioners, particularly regarding copyright and attribution. Journalism and news producers are also racing to ensure their content and brands are not affected. This latter point has huge implications for our democratic systems, particularly as we enter key election cycles. The effects could be quite literally world-changing from a geopolitical perspective. It also wouldn’t be a list of issues without at least a nod to bias.

What are some issues AI users should be aware of?

I’m not sure whether this relates to companies using AI or to regular citizens, but I’m assuming the latter. I think the main issue here is trust. I’m thinking, here, of the many students now using large language models to generate academic work. Setting aside the moral issues, the models are still not good enough for that. Citations are often incorrect or out of context, and the nuance of some academic papers is lost.

But this speaks to a wider point: You can’t yet fully trust generated text, and so you should only use those systems when the context or outcome is low risk. The obvious second issue is veracity and authenticity. As models become increasingly sophisticated, it’s going to be ever harder to know for sure whether content is human- or machine-generated. We haven’t yet developed, as a society, the requisite literacies to make reasoned judgments about content in an AI-rich media landscape. The old rules of media literacy apply in the interim: Check the source.

Another issue is that AI is not human intelligence, and so the models aren’t perfect — they can be tricked or corrupted with relative ease if one has a mind to.

What is the best way to responsibly build AI?

The best instruments we have are algorithmic impact assessments and regulatory compliance, but ideally, we’d be looking for processes that actively seek to do good rather than just seeking to minimize risk.

Going back to basics, the obvious first step is to address the composition of designers — ensuring that AI, informatics and computer science as disciplines attract women, people of color and representation from other cultures. It’s obviously not a quick fix, but we’d clearly have addressed the issue of bias earlier if the field were more heterogeneous. That brings me to the issue of the data corpus, and ensuring that it’s fit for purpose and that efforts are made to appropriately de-bias it.

Then comes the need to train systems architects to be aware of moral and socio-technical issues — placing the same weight on these as we do on the primary disciplines. We also need to give systems architects more time and agency to consider and fix any potential issues. Then there’s the matter of governance and co-design, where stakeholders should be involved in the governance and conceptual design of the system. And finally, we need to thoroughly stress-test systems before they get anywhere near human subjects.

Ideally, we should also be ensuring that there are mechanisms in place for opt-out, contestation and recourse — though much of this is covered by emerging regulations. It seems obvious, but I’d also add that you should be prepared to kill a project that’s set to fail on any measure of responsibility. There’s often something of the fallacy of sunk costs at play here, but if a project isn’t developing as you’d hope, then raising your risk tolerance rather than killing it can result in the untimely death of a product.

The European Union’s recently adopted AI Act covers much of this, of course.

How can investors better push for responsible AI?

Taking a step back here, it’s now generally understood and accepted that the whole model that underpins the internet is the monetization of user data. In the same way, much, if not all, of AI innovation is driven by capital gain. AI development in particular is a resource-hungry business, and the drive to be the first to market has often been described as an arms race. So, responsibility as a value is always in competition with those other values.

That’s not to say that companies don’t care, and there has also been much effort made by various AI ethicists to reframe responsibility as a way of actually distinguishing yourself in the field. But this feels like an unlikely scenario unless you’re a government or another public service. It’s clear that being the first to market is always going to be traded off against a full and comprehensive elimination of possible harms.

But coming back to the term responsibility: To my mind, being responsible is the least we can do. When we say to our kids that we’re trusting them to be responsible, what we mean is, don’t do anything illegal, embarrassing or insane. It’s literally the basement when it comes to behaving like a functioning human in the world. Conversely, when applied to companies, it becomes some kind of unreachable standard. You have to ask yourself: How is this even a discussion that we find ourselves having?

Also, the incentives to prioritize responsibility are pretty basic and relate to wanting to be a trusted entity while also not wanting your users to come to newsworthy harm. I say this because plenty of people at the poverty line, or those from marginalized groups, fall below the threshold of interest, as they don’t have the economic or social capital to contest any negative outcomes, or to raise them to public attention.

So, to loop back to the question, it depends on who the investors are. If it’s one of the big seven tech companies, then they’re covered by the above. They have to choose to prioritize different values at all times, and not only when it suits them. For the public or third sector, responsible AI is already aligned to their values, and so what they tend to need is sufficient experience and insight to help make the right and informed choices. Ultimately, to push for responsible AI requires an alignment of values and incentives.



