jonah-ramponi 
posted an update Aug 31
From Article 50 of the EU AI Act:

"2. Providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video or text content, shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated."

How might this be put into practice?

I'm interested in how content might be deemed "detectable" as artificially generated. I wonder if this will require an image to remain detectable as AI generated even after it has been copied out of the site/application it was created in?

Some sort of watermark? LSB steganography? I wonder if OpenAI is already sneaking something like this into DALL-E images.
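For anyone unfamiliar with the idea, here's a minimal sketch of LSB (least significant bit) steganography: hiding a marker in the lowest bit of each pixel value, which changes each value by at most 1. This is purely illustrative; a real scheme would operate on actual image files and would need to survive re-encoding and resizing, which naive LSB embedding does not.

```python
def embed(pixels: list[int], marker: bytes) -> list[int]:
    """Hide `marker` in the least significant bit of each pixel value."""
    bits = [(byte >> i) & 1 for byte in marker for i in range(8)]
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out

def extract(pixels: list[int], length: int) -> bytes:
    """Recover `length` hidden bytes from the low bits of the pixels."""
    bits = [p & 1 for p in pixels[: length * 8]]
    return bytes(
        sum(bits[i * 8 + j] << j for j in range(8)) for i in range(length)
    )

pixels = list(range(64))        # stand-in for raw 8-bit pixel values
marked = embed(pixels, b"AI")
recovered = extract(marked, 2)
```

Each embedded bit perturbs a pixel by at most 1, so the mark is visually invisible, but it is also trivially destroyed by any lossy transformation (JPEG compression, screenshots, etc.).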

Some sort of hash that allows content to be looked up and verified as AI generated?
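A toy sketch of what that lookup might look like, assuming the provider keeps a registry of digests for everything it generates (the registry design here is entirely hypothetical):

```python
import hashlib

# Hypothetical provenance registry: the provider records a SHA-256
# digest of each generated output, and anyone can later check whether
# a given piece of content matches a registered entry.
registry: set[str] = set()

def register(content: bytes) -> str:
    """Record a generated output and return its digest."""
    digest = hashlib.sha256(content).hexdigest()
    registry.add(digest)
    return digest

def is_registered(content: bytes) -> bool:
    """Check whether this exact content was registered as AI generated."""
    return hashlib.sha256(content).hexdigest() in registry

digest = register(b"generated image bytes")
```

The obvious weakness: an exact-match hash breaks the moment the content is modified at all (cropped, re-encoded, resized), so in practice this would likely need perceptual hashing rather than a cryptographic digest.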

Would a pop-up saying "this output was generated with AI" suffice? Any ideas? Time is on the system providers' side, at least for now: from what I can see, this doesn't come into effect until August 2026.

src: https://artificialintelligenceact.eu/article/50/

@jonah-ramponi I came across this paper a while back: https://arxiv.org/abs/2301.10226. While it's hard to predict how effective these approaches will be, one interesting angle to consider is the potential for humans to start mimicking AI systems as our interactions with them become more frequent.
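As I understand that paper, the core idea is to pseudo-randomly split the vocabulary into "green" and "red" lists at each step, seeded by the previous token, and bias generation toward green tokens; a detector who knows the seeding scheme can then count green hits without needing the model. A toy sketch (vocabulary size, split ratio, and the hard green-only sampling are my simplifications, not the paper's exact procedure):

```python
import random

VOCAB = 1000  # assumed toy vocabulary size

def green_list(prev_token: int, gamma: float = 0.5) -> set[int]:
    """Pseudo-random green/red split of the vocabulary, seeded by the previous token."""
    rng = random.Random(prev_token)
    return set(rng.sample(range(VOCAB), int(gamma * VOCAB)))

def green_fraction(tokens: list[int]) -> float:
    """Detector: fraction of tokens that fall in their predecessor's green list."""
    hits = sum(
        1 for prev, cur in zip(tokens, tokens[1:])
        if cur in green_list(prev)
    )
    return hits / max(1, len(tokens) - 1)

rng = random.Random(0)

# "Watermarked" text: every token is forced onto its predecessor's green list.
wm = [0]
for _ in range(50):
    wm.append(rng.choice(sorted(green_list(wm[-1]))))

# Unwatermarked text: uniformly random tokens, so roughly half land green.
unmarked = [rng.randrange(VOCAB) for _ in range(51)]
```

The real scheme uses a soft logit bias rather than forcing green tokens, and a z-test on the green fraction, but the detection principle is the same: watermarked text scores far above the ~50% baseline. On your last point, human mimicry of AI phrasing is exactly the kind of thing that could raise the false-positive rate of any statistical detector.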