Women in AI: Kate Devlin of King’s College is researching AI and intimacy

To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Kate Devlin is a lecturer in AI and society at King’s College London and the author of “Turned On: Science, Sex and Robots,” which examines the ethical and social implications of tech and intimacy. Her research investigates how people interact with and react to technologies, both past and future.

Devlin — who in 2016 ran the U.K.’s first sex tech hackathon — directs advocacy and engagement for the Trusted Autonomous Systems Hub, a collaborative platform to support the development of “socially beneficial” robotics and AI systems. She’s also a board member of the Open Rights Group, an organization that works to preserve digital rights and freedoms.

Q&A

Briefly, how did you get your start in AI? What attracted you to the field?

I started off as an archaeologist, eventually moving across disciplines and completing a Ph.D. in computer science in 2004. The idea was to integrate the subjects, but I ended up doing more and more on human-computer interaction, and on how people interact with AI and robots, including the reception that such technologies have.

What work are you most proud of (in the AI field)?

I’m pleased that intimacy and AI is now taken seriously as an academic area of study. There’s some amazing research going on. It used to be viewed as very niche and highly unlikely; now we’re seeing people forming meaningful relationships with chatbots — meaningful in that they really do mean something to those people.

How do you navigate the challenges of the male-dominated tech industry, and, by extension, the male-dominated AI industry?

I don’t. We just persevere. It’s still shockingly sexist. And maybe I don’t want to “lean in”; maybe I want an environment that isn’t defined around macho qualities. I guess it’s a two-pronged thing: we need more women in visible, top positions, and we need to tackle sexism in schools and beyond. And then we need systemic change to stop the “leaky pipeline.” We’re seeing an increase of women in AI and tech due to a rise in home working, as it fits better with childcare, which, let’s face it, still falls to us. Let’s have more flexibility until we don’t have to do the majority of that caring on our own.

What advice would you give to women seeking to enter the AI field?

You have the right to take up as much space as the men.

What are some of the most pressing issues facing AI as it evolves?

Responsibility. Accountability. There’s currently a fever pitch that hinges around technological determinism — as if we’re hurtling toward some dangerous future. We don’t have to be. It’s possible to reject that. It’s fine to prioritize a different path. Very few of the issues we face are new; it’s size and scale that are making this particularly tricky.

What are some issues AI users should be aware of?

Uh… late-stage capitalism.

More usefully: check provenance — where’s the data coming from? How ethical is the provider? Do they have a good track record of social responsibility? Would you let them control your oxygen supply on Mars?

What is the best way to responsibly build AI?

Regulation and conscience.

How can investors better push for responsible AI?

Thinking of this in purely business terms: you’ll have much happier customers if you care about people. We can see through ethics-washing, so really make it matter. Hold companies responsible for considering things like human rights, labor, sustainability and social impact in their AI supply chains.



