Meta changes how it labels AI content on its platforms

Introduction

Major social media platforms are working to mark AI content and let users know about it. Meta joined the trend in April 2024, when it began labelling content that involves AI. More recently, however, there have been reasons to modify how such content is labelled. Clarity matters here, because viewers need to be able to distinguish content made entirely with AI from content that was merely enhanced with it. So yesterday, July 1, Meta made a major announcement about a change to how it labels AI-enhanced content.

Change from 'Made with AI' to something better

With the update it rolled out in April, Meta started labelling content that involves AI as "Made with AI". The intention was to make sure viewers of such content knew it had been created or modified with AI tools. But after further consideration, Meta announced yesterday (July 1) that it is replacing the label.

Going forward, Meta will label any content that was altered with AI as “AI info”. When users click that label, they will be able to find out more about how AI was used in that content. Meta also said this is a tentative move while it consults with industry leaders on how best to handle AI content labelling on its platforms in line with accepted standards. Below is part of the statement Meta released on the issue:

We want people to know when they see posts that have been made with AI. Earlier this year, we announced a new approach for labeling AI-generated content. An important part of this approach relies on industry standard indicators that other companies include in content created using their tools, which help us assess whether something is created using AI. source

Why the change is necessary

While it is important to be able to identify AI content on Meta's platforms, there is also a need to clarify to what extent AI is involved in a work. The "Made with AI" label Meta formerly used broadly generalized content without indicating how much AI was actually involved. As a result, even content with very little AI input was categorized as made with AI.

One of the issues that likely caused this is that Meta relies on whatever information is provided by the AI tools used in the creation. The extent to which a tool was used was not previously taken into consideration. That meant some original works with slight AI input were wrongly presented as though they were entirely made with AI.

The new approach should provide better context about how AI was used in a work. When users click the 'AI info' label, they will get more detailed information, likely including the AI tool used and how much of the content it was responsible for. This way, content consumers on Meta's platforms will be better informed about any text, audio or video they are viewing.

The evolution of AI and rating creativity

Generative AI tools continue to evolve, and the dynamics of a rapidly changing environment make it harder to define and rate creativity. It is especially important that users are not misled when it comes to trusting the intellect of a creative person. Put simply, it is difficult to rate creativity and build trust if there is no way to measure how much AI contributed to a creative work.

Industry leaders are working hard to find a common standard for measuring AI input in human creations. The recent change Meta has made is one step towards helping millions of users of its platforms understand the nature of the content they are consuming.

Creators of original work should also not be put under undue pressure because of how their work is seen or rated. If Meta or other technology platforms continue to generalize creative works, original content creators will always face the awkward situation of having to prove that their work is genuine. If not handled properly, a highly rated original work of art, for example, might come under scrutiny simply because of the label a platform placed on it.

Finally

Meta has tried other strategies and once considered removing AI content from its platforms entirely. However, that would be extreme, a form of suppression of free speech. While trying to protect the integrity of content and creators on its platforms, there has to be a balance with allowing unhindered freedom of expression. The new approach is less intrusive: it lets users know when content is not 100% original, but it does not stop creators from using AI to improve their work.

More conversations on this issue will happen not just within Meta, but across the tech sector, especially the AI space. As these discussions continue and more consensus is reached, it will become much easier for content consumers to know what they are seeing, and creators will no longer have to continuously defend their original work.


Note: Thumbnail from pixabay

Posted Using InLeo Alpha



5 comments

This makes sense so that we can at least tell real human content from AI content on Meta.
That’s really nice


Meta Technologies is really doing a great job in the world of tech, especially in enhancing AI...
I see a great future in the AI world.


It's definitely necessary that Meta updates how it labels AI content, so we can differentiate what's made by humans versus AI. This will definitely help ensure transparency and trust in creative works. I heard InLeo would be doing this here; hope it happens.


Yes, almost everything could be seen as AI if the proper descriptions are not there. I believe Meta has done the right thing. We can't wait to see how it will be on InLeo. Hoping it's great.


Absolutely brother, let's hope it's great ❤️❤️💯
