Safety by design | TechCrunch

Welcome to the TechCrunch Exchange, a weekly startups-and-markets newsletter. It’s inspired by the daily TechCrunch+ column where it gets its name. Want it in your inbox every Saturday? Sign up here.

Tech’s ability to reinvent the wheel has its downsides: It can mean ignoring blatant truths that others have already learned. But the good news is that new founders are sometimes figuring it out for themselves faster than their predecessors. — Anna

AI, trust and safety

This year is an Olympic year, a leap year . . . and also the election year. But before you accuse me of U.S. defaultism, I’m not only thinking of the Biden vs. Trump sequel: More than 60 countries are holding national elections, not to mention the EU Parliament’s.

Which way each of these votes swings could have an impact on tech companies; different parties tend to have different takes on AI regulation, for instance. But before elections even take place, tech will also have a role to play to guarantee their integrity.

Election integrity likely wasn’t on Mark Zuckerberg’s mind when he created Facebook, and perhaps not even when he bought WhatsApp. But 20 and 10 years later, respectively, trust and safety is now a responsibility that Meta and other tech giants can’t escape, whether they like it or not. This means working toward preventing disinformation, fraud, hate speech, CSAM (child sexual abuse material), self-harm and more.

However, AI will likely make the task more difficult, and not just because of deepfakes or because it empowers larger numbers of bad actors. Says Lotan Levkowitz, a general partner at Grove Ventures:

All these trust and safety platforms have this hash-sharing database, so I can upload there what is a bad thing, share with all my communities, and everybody is going to stop it together; but today, I can train the model to try to avoid it. So even the more classic trust and safety work, because of Gen AI, is getting tougher and tougher because the algorithm can help bypass all these things.
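The hash-sharing databases Levkowitz describes can be illustrated with a minimal sketch. The idea: one platform fingerprints a piece of known-bad content and publishes the fingerprint; every participating platform then blocks any upload with a matching fingerprint without ever exchanging the content itself. The sketch below is a simplification with hypothetical function names; real systems (e.g. industry hash lists) use perceptual hashes that survive resizing and re-encoding, whereas the plain SHA-256 used here matches only byte-identical files.

```python
import hashlib

# Hypothetical stand-in for an industry-wide shared hash database.
# Participants exchange fingerprints of known abusive content,
# never the content itself.
shared_hash_db: set[str] = set()

def fingerprint(content: bytes) -> str:
    """Fingerprint an upload. Plain SHA-256 for simplicity; real
    systems use perceptual hashes robust to small edits."""
    return hashlib.sha256(content).hexdigest()

def report_bad_content(content: bytes) -> None:
    """One platform flags content; all participants can now block it."""
    shared_hash_db.add(fingerprint(content))

def is_known_bad(content: bytes) -> bool:
    """Check an incoming upload against the shared database."""
    return fingerprint(content) in shared_hash_db

report_bad_content(b"known abusive file")
print(is_known_bad(b"known abusive file"))    # True: exact match is blocked
print(is_known_bad(b"slightly edited file"))  # False: an exact hash misses edits
```

The second check failing is the crux of Levkowitz’s point: exact hashes miss even trivial edits, perceptual hashes close some of that gap, and generative models that can be trained to produce variants which evade the fingerprints undermine the approach further.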

From afterthought to the forefront

Although online forums had already learned a thing or two about content moderation, there was no social network playbook for Facebook to follow when it was born, so it is somewhat understandable that it would need a while to rise to the task. But it is disheartening to learn from internal Meta documents that as far back as 2017, there was still internal reluctance to adopt measures that could better protect children.

Zuckerberg was one of the five social media tech CEOs who recently appeared at a U.S. Senate hearing on children’s online safety. Testifying was far from a first for Meta, but Discord’s inclusion is also worth noting: while the company has branched out beyond its gaming roots, it is a reminder that trust and safety threats can occur in many online places. This means that a social gaming app, for instance, could also put its users at risk of phishing or grooming.

Will newer companies own up faster than the FAANGs? That’s not guaranteed: Founders often operate from first principles, which is good and bad; the content moderation learning curve is real. But OpenAI is much younger than Meta, so it is encouraging to hear that it is forming a new team to study child safety — even if it may be a result of the scrutiny it’s subjected to.

Some startups, however, are not waiting for signs of trouble to take action. ActiveFence, a provider of AI-enabled trust and safety solutions and a Grove Ventures portfolio company, is seeing more inbound requests, its CEO, Noam Schwartz, told me.

“I’ve seen a lot of folks reaching out to our team from companies that were just founded or even pre-launched. They’re thinking about the safety of their products during the design phase [and] adopting a concept called safety by design. They are baking in safety measures inside their products, the same way that today you’re thinking about security and privacy when you’re building your features.”

ActiveFence is not the only startup in this space, which Wired described as “trust and safety as a service.” But it is one of the largest, especially since it acquired Spectrum Labs in September, so it’s good to hear that its clients include not only big names afraid of PR crises and political scrutiny, but also smaller teams that are just getting started. Tech, too, has an opportunity to learn from past mistakes.


Disclaimer

We strive to uphold the highest ethical standards in all of our reporting and coverage. We at StartupNews.fyi want to be transparent with our readers about any potential conflicts of interest that may arise in our work. It’s possible that some of the investors we feature may have connections to other businesses, including competitors or companies we write about. However, we want to assure our readers that this will not have any impact on the integrity or impartiality of our reporting. We are committed to delivering accurate, unbiased news and information to our audience, and we will continue to uphold our ethics and principles in all of our work. Thank you for your trust and support.

