Microsoft offers a solution to stop the spread of deepfakes on the internet

Microsoft launches a new set of standards for authenticating online content

Microsoft announced the creation of “technical standards” for assessing the authenticity of materials appearing on the web. The goal is to help AI developers and social platforms determine whether images or videos have been altered with digital tools (such as deepfakes) and how reliable their documentation methods are.

How the verification system works
* Example with a Rembrandt painting

- A detailed provenance log is created: storage locations, previous owners.

- The painting is scanned, and a mathematical signature (a "digital fingerprint") is generated from the brushstrokes.

- When displayed in a museum, visitors can open this data to confirm authenticity.
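The steps above can be sketched in a few lines. This is a minimal illustration, not Microsoft's actual system: it assumes the "digital fingerprint" is a cryptographic hash of the scanned bytes (a real system would hash perceptual features such as brushstroke patterns), and the function and record names are hypothetical.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Content "digital fingerprint": a SHA-256 hash of the raw bytes.
    Illustrative stand-in; a production system would fingerprint
    perceptual features, not raw bytes."""
    return hashlib.sha256(data).hexdigest()

def make_provenance_record(data: bytes, history: list[str]) -> dict:
    """Bundle the fingerprint with a provenance log (owners, locations)."""
    return {"fingerprint": fingerprint(data), "history": history}

def verify(data: bytes, record: dict) -> bool:
    """A viewer recomputes the fingerprint and compares it to the record."""
    return fingerprint(data) == record["fingerprint"]

# Example: the scanned painting and its documented history.
scan = b"raw pixel data of the scanned painting"
record = make_provenance_record(scan, ["Rembrandt workshop", "Private owner"])

print(verify(scan, record))              # True: scan matches the record
print(verify(scan + b"tamper", record))  # False: any alteration breaks the match
```

The key property is that verification requires no trust in the viewer: anyone with the record can recompute the fingerprint themselves.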

* Methods already in use

Microsoft examined 60 combinations of existing techniques (metadata removal, minor changes, targeted manipulation). For each combination, behavior was simulated under various scenarios.

Researchers found:

- Reliable combinations can be shown to a wide audience.

- Unreliable combinations may only complicate matters, creating more confusion.

Why it matters
* Legislation requires AI transparency (e.g., the “AI Transparency Act” in California).

* Microsoft has not yet announced whether it will apply these standards across its services and partnerships: Copilot, Azure, OpenAI, and LinkedIn.

The standards do not determine content truthfulness; they only indicate whether material has been manipulated and where it came from. If the industry adopts them, creating deceptive content will become noticeably harder.
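That distinction, proving a record is intact rather than proving its claims are true, can be illustrated with a signed provenance manifest. This is a simplified sketch: real provenance standards such as C2PA use public-key certificates, whereas the HMAC shared-secret scheme and all names below are assumptions chosen to keep the example short.

```python
import hashlib
import hmac
import json

# Hypothetical issuer secret; C2PA-style systems use certificates instead.
SIGNING_KEY = b"issuer-secret"

def sign_manifest(manifest: dict) -> str:
    """Sign a provenance manifest (declared origin plus edit history)."""
    payload = json.dumps(manifest, sort_keys=True).encode()
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

def check_manifest(manifest: dict, signature: str) -> bool:
    """Report whether the manifest is unaltered, not whether its claims are true."""
    return hmac.compare_digest(sign_manifest(manifest), signature)

manifest = {"origin": "camera", "edits": ["crop", "color-correction"]}
sig = sign_manifest(manifest)

print(check_manifest(manifest, sig))   # True: the record is intact
manifest["edits"].append("face-swap")  # an undisclosed later manipulation
print(check_manifest(manifest, sig))   # False: the record no longer matches
```

Note that a manifest claiming `"origin": "camera"` would still verify even if the claim were a lie; the check only detects tampering after signing, which is exactly the limitation the standards acknowledge.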

Industry status
| Company | Action | Status |
| --- | --- | --- |
| Microsoft | C2PA (2021): provenance tracking | Ongoing development |
| Google | Watermarks for AI-generated content (since 2023) | Actively deployed |

But Microsoft’s full toolkit may remain just a “project” if market participants see a threat to their business models.

Effectiveness of existing solutions
* Research showed that only 30% of posts on Instagram, LinkedIn, Pinterest, TikTok, and YouTube are correctly labeled as AI-generated.

* Rapid deployment of verification tools is risky: failures can erode user trust.

Comprehensive verification mechanisms are preferable. For example, a trustworthy image that has been only lightly edited with AI tools might otherwise be misclassified as fully generated; an integrated approach reduces such false positives.

Conclusion
Microsoft offers a structured set of standards for detecting digital manipulation in content. These tools aim to increase transparency and trust in online materials, but their success depends on industry adoption and the reliability of integrated checks.
