Artificial Intelligence Bigwigs Commit to Child Safety

KEY FACT: Major artificial intelligence (AI) companies, including OpenAI, Microsoft, Meta Platforms, and Google, agreed on Tuesday to adopt new safety measures to protect children from exploitation and to close several gaps in their current defenses.


As the adoption of AI continues to broaden, the loud call for responsible AI systems has started yielding positive results from the big movers in the industry. In a recent move, AI industry leaders including OpenAI, Amazon, Anthropic, Civitai, Google, Meta, Metaphysic, Microsoft, Mistral AI, and Stability AI committed to implementing robust child safety measures in the development, deployment, and maintenance of their generative AI technologies.

The commitment is articulated in the Safety by Design principles jointly developed by Thorn, a nonprofit dedicated to defending children from sexual abuse, and All Tech Is Human, an organization dedicated to tackling complex problems at the intersection of technology and society. The Safety by Design principles are designed to mitigate the risks that generative AI poses to children.

By adopting these comprehensive Safety by Design principles, the bigwigs of the AI industry aim to ensure that child safety is prioritized at every stage of AI development.

According to OpenAI, the commitment it has entered into alongside other key movers in the AI sector marks an important step in preventing the misuse of AI technologies to create or spread AI-generated child sexual abuse material (AIG-CSAM) and other forms of sexual harm against children. In its announcement, OpenAI outlined its three-part commitment to the Safety by Design principles as follows:

  • Develop: Develop, build, and train generative AI models that proactively address child safety risks.
  • Deploy: Release and distribute generative AI models after they have been trained and evaluated for child safety, providing protection throughout the process.
  • Maintain: Maintain model and platform safety by continuing to actively understand and respond to child safety risks.

Generative AI has supercharged predators' ability to create sexualized images of children and other exploitative material, and this push by Thorn and All Tech Is Human, supported by OpenAI, Meta, Google, and others, is intended to counter exactly that. The alliance is expected to ensure that training data does not include child sexual abuse material, and that companies maintain safety after their models are released by staying alert to and responding to child safety risks as they emerge.

In a similar stride, Chelsea Carlson, OpenAI's Technical Program Manager (TPM) for Child Safety, declared the company's commitment to working alongside Thorn, All Tech Is Human, and the broader tech community to uphold the Safety by Design principles:

"We care deeply about the safety and responsible use of our tools, which is why we’ve built strong guardrails and safety measures into ChatGPT and DALL-E. We are committed to working alongside Thorn, All Tech is Human and the broader tech community to uphold the Safety by Design principles and continue our work in mitigating potential harms to children." - Chelsea Carlson, Child Safety TPM.

This is a welcome development, and it is high time AI technology started serving humanity responsibly and meaningfully rather than undermining human values.


If you found this article interesting or helpful, please hit the upvote button and share it for visibility so other Hive friends can see it. More importantly, drop a comment below. Thank you!

This post was created via INLEO. What is INLEO?

INLEO's mission is to build a sustainable creator economy centered around digital ownership, tokenization, and communities. It's built on Hive, with linkages to the BSC, ETH, and Polygon blockchains. Its flagship application, Inleo.io, allows users and creators to engage with and share micro and long-form content on the Hive blockchain while earning cryptocurrency rewards.



Let's Connect

Hive: inleo.io/profile/uyobong/blog

Twitter: https://twitter.com/Uyobong3

Discord: uyobong#5966


Posted Using InLeo Alpha


