Nov 14, 2025

When AI Gets Hacked: How Skyld Is Building Trust Back Into Intelligence

Explore how Marie Paindavoine, Founder and CEO of Skyld and 2024 Cyberwoman of the Year, is building a trusted future for AI models.

From healthcare to defense to industry, AI is scaling faster than security can keep up with - and that’s where the opportunity lies. Our portfolio company Skyld, led by Marie Paindavoine, sits at the intersection of two major forces shaping the next decade: the global rise of AI and the urgent need for trust and protection. Together with her team, she is tackling one of tech’s most overlooked challenges - keeping AI models secure, trusted, and safe for the people who rely on them.

Marie, many people worry about how AI might impact their jobs. You worry about something else - what happens when AI models get stolen, tricked, or turned against their creators. With Skyld, you protect AI models from being copied, manipulated, or misused. How did your journey into AI security begin?

Marie: My background is in cryptography. During my PhD, I worked on homomorphic encryption - performing computations on encrypted data. That work led me to privacy-preserving machine learning, where you can train AI on encrypted data without ever seeing it in the clear.
I realized that while companies rushed to innovate, very few thought about security. Everyone was focused on building fast - not protecting what they built.
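
To make that concrete: below is a minimal, hypothetical sketch of computing on encrypted data, using the open-source `phe` library (Paillier encryption). Paillier only supports addition and scalar multiplication - far simpler than the fully homomorphic schemes Marie researched - but it shows the core idea: a server can aggregate values it can never read.

```python
# Illustrative sketch only - Paillier is additively homomorphic, a much
# simpler setting than the fully homomorphic encryption from Marie's PhD.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# A client encrypts its private values before sending them out.
salaries = [52_000, 61_500, 58_200]
encrypted = [public_key.encrypt(s) for s in salaries]

# A server computes on ciphertexts without ever seeing the plaintexts:
# Paillier allows ciphertext + ciphertext and ciphertext * plaintext.
encrypted_total = sum(encrypted[1:], encrypted[0])
encrypted_mean = encrypted_total * (1 / len(salaries))

# Only the key holder can decrypt the result.
print(private_key.decrypt(encrypted_mean))  # ~57233.33
```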

Was there a moment when you knew this had to be solved?

Marie: It was gradual, but the turning point was discovering how shockingly easy it was to reverse-engineer AI models that cost millions to develop. I thought: why aren’t we protecting these systems?
With support from INRIA, France’s national institute for research in digital science and technology, I had a year to turn the idea into a product. That became Skyld - software that prevents AI-model theft and manipulation.
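
How easy is “shockingly easy”? The sketch below illustrates the generic idea of query-based model extraction - not Skyld’s product or any specific attack. With nothing but black-box query access, an attacker can train a cheap surrogate that closely mimics a model that was expensive to build.

```python
# Generic model-extraction sketch (illustrative, not Skyld's tooling):
# an attacker with only query access trains a look-alike surrogate.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# The "victim": a model its owner spent real resources training.
X, y = make_classification(n_samples=2_000, n_features=10, random_state=0)
victim = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                       random_state=0).fit(X, y)

# The attacker only needs an API: send inputs, record predicted labels.
rng = np.random.default_rng(1)
queries = rng.normal(size=(5_000, 10))
stolen_labels = victim.predict(queries)

# Train a surrogate on the query/response pairs alone.
surrogate = DecisionTreeClassifier(random_state=0).fit(queries, stolen_labels)

# The copy agrees with the victim on most inputs, at near-zero cost.
agreement = (surrogate.predict(X) == victim.predict(X)).mean()
print(f"surrogate/victim agreement: {agreement:.1%}")
```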


“Models costing millions could be copied in minutes.”

You’ve since gained strong recognition - Skyld ranks among Europe’s top security startups, and you were named 2024 Cyberwoman of the Year. What stands out as your biggest learning?

Marie: Awards help visibility, but they’re not what a founder should chase. What matters most is focus. A founder has three main jobs: set the vision, secure funding, and hire the right people. I learned that at Berkeley, and it’s now my foundation.

Have you ever faced setbacks?

Marie: Definitely. There were moments when cash was tight or the direction was unclear. As CEO, you’re the captain of the ship - even if the destination changes, you must point the boat somewhere. When you lose focus, the team feels it immediately. Without a goal, you just spin in circles.

How did you regain clarity?

Marie: With the support of a coach who helped me see the issue and refocus quickly.


“The founder’s three main jobs: vision, funding, and hiring the right people.”

Experts often mention “cyber debt” in AI. What does that mean - and why is it such a big risk?

Marie: Cyber debt happens when companies deploy technology first and only secure it later. And that’s exactly what we’re seeing with AI.
AI adoption is accelerating - models already make sensitive decisions - but most aren’t protected. Once attacks move from research papers to real-world exploitation, the consequences could be massive.


“Fixing security after deployment is always more expensive and more complex. The longer you wait, the bigger the debt - and the bill.”

So models should be secured before deployment. But what about companies that already have models running?

Marie: Security-by-design is ideal - but securing deployed models is still absolutely possible and necessary.


“We work like anti-theft protection for AI - preventing model extraction and unauthorized use of IP.”

What is the biggest risk if companies don’t act?

Marie: If attackers extract your model, they can copy it, resell it, or manipulate its behavior. We demonstrated this with Google’s SafetyCore AI - the explicit-content filter on Android devices. We made safe images look explicit - and explicit ones look safe. It completely broke trust in the system.
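
For readers curious what such a manipulation looks like in code, here is a minimal, generic FGSM-style sketch - illustrative only, not the actual SafetyCore demonstration. A tiny gradient-guided perturbation, small enough to be invisible to a human, can push a classifier toward the attacker’s chosen verdict.

```python
# Targeted FGSM sketch (illustrative; not the SafetyCore attack itself).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in "content filter": a tiny binary classifier (0=safe, 1=explicit).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 2))
model.eval()

image = torch.rand(1, 1, 28, 28)   # a "safe" input image
target = torch.tensor([1])         # attacker's desired label: "explicit"

# Compute the gradient of the target-class loss w.r.t. the input pixels.
image.requires_grad_(True)
loss = nn.functional.cross_entropy(model(image), target)
loss.backward()

# Step against that gradient to push the prediction toward "explicit",
# keeping the per-pixel change small. With this untrained toy model the
# flip is typical but not guaranteed.
epsilon = 0.1
adversarial = (image - epsilon * image.grad.sign()).clamp(0, 1).detach()

print("before:", model(image).argmax(dim=1).item())
print("after: ", model(adversarial).argmax(dim=1).item())
```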


“It shows how fragile trust becomes if AI isn’t secured from the start.”

How does regulation like the EU AI Act impact your work?

Marie: It’s a major step forward. The EU AI Act now requires that high-risk systems be resistant to attacks like model extraction. That gives us a strong tailwind - companies will need solutions like ours to comply. Technical standards are still evolving - and we want to help shape them.


“AI won’t scale safely without trust - the EU AI Act pushes the ecosystem in the right direction.”

What’s next for Skyld?

Marie: We’re focusing on commercialization in Europe and will showcase at CES in Las Vegas in January 2026. We’re also expanding into generative-AI security and adversarial-attack detection - because protecting the future of AI requires protecting its integrity.


“Success means building technology people can truly rely on - and becoming Europe’s leading expert in AI security.”


Thank you for your time, Marie. We’re excited to continue supporting your mission.