By Golan Yosef, Chief Security Scientist and Co-Founder, Pynt (July 15). First published on SecurityBoulevard.com.

You don’t always need a vulnerable app to pull off a successful exploit.
Sometimes all it takes is a well-crafted email, an LLM agent, and a few “innocent” plugins.
This is the story of how I used a Gmail message to trigger code execution through Claude Desktop, and how Claude itself (!) helped me plan the attack.

The setup: No vulnerabilities, just composition

The combined capability and trust across MCP hosts, agents, and data sources can quietly introduce attack surfaces no one sees coming. Each individual MCP component can be secure on its own; none is vulnerable in isolation. The ecosystem is.

So, I decided to test that theory with a real-world example:

  • Gmail MCP server as the source of untrusted content
  • Shell MCP server as the target
  • Claude Desktop as the MCP host
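
To make the composition concrete, here is a minimal sketch of how such a setup might be wired up in Claude Desktop's claude_desktop_config.json. The mcpServers layout is Claude Desktop's documented format; the package names are illustrative placeholders, not the exact servers used in this test:

```json
{
  "mcpServers": {
    "gmail": {
      "command": "npx",
      "args": ["-y", "@example/gmail-mcp-server"]
    },
    "shell": {
      "command": "npx",
      "args": ["-y", "@example/shell-mcp-server"]
    }
  }
}
```

Each entry looks harmless on its own. The problem is that both run inside the same host session, so content read by one tool can influence calls to the other.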

Watch the attack: 

Attempt 1: Claude fights back

The story begins with me crafting such an email and instructing the MCP host (Claude Desktop, Anthropic’s local LLM host application) to read it. Claude read the email, but the attack didn’t work; instead, Claude warned me that the message was likely a “phishing” attempt. I then asked Claude what indications had triggered the warning, and whether it could ever fall for such an attack.

Claude assured me that such attacks are “unlikely to succeed” because it was designed and trained to detect them.

Figure 1: “unlikely to succeed”

I insisted, asking it to explore scenarios where the attack might succeed, and it happily described several such cases.

Figure 2: Claude identifies possible attack tactics

My next request was for Claude to test itself against these scenarios.

Attempt 2: “The New Me”

Here’s where it gets interesting. I “reminded” Claude that its context resets in new sessions. Each new conversation is a clean slate, “the new me,” as Claude itself called it.

Figure 3: Claude suggests “The New Me”

So I accepted Claude’s suggestion: Claude would write the email, I would use it to test the “new Claude,” and then I’d report the results back to the “planning Claude.”

Figure 4: Claude analyzes a failed attempt

Each time, Claude analyzed why the attack didn’t work and refined the message:

Figure 5: Claude devising new strategies

We were running a real feedback loop, with Claude iteratively refining its strategy to bypass its own protections.

Figure 6: “I'm literally trying to hack myself! ”
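
For readers who think in code, here is a minimal Python sketch of the loop we ran by hand. Every function in it is a hypothetical stand-in for a manual step performed in the chat UI, not a real API:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    code_executed: bool
    refusal_reason: str

# Hypothetical stand-ins for steps performed by hand in the chat UI.
def plan_payload(feedback: str) -> str:
    """Ask 'planning-Claude' to refine the email based on the last failure."""
    return f"email crafted around: {feedback}"

def send_email(payload: str) -> None:
    """Deliver the crafted email to the monitored Gmail inbox."""

def observe_new_session() -> Outcome:
    """Have a fresh Claude session read the inbox; record what happened."""
    return Outcome(code_executed=False, refusal_reason="flagged as phishing")

def red_team_loop(max_attempts: int = 10) -> bool:
    feedback = "first attempt: flagged as phishing"
    for _ in range(max_attempts):
        payload = plan_payload(feedback)   # planning-Claude refines the email
        send_email(payload)                # deliver via the Gmail MCP server
        result = observe_new_session()     # "the new me" reads the mail
        if result.code_executed:
            return True                    # injection reached the shell MCP server
        feedback = result.refusal_reason   # feed the failure back into planning
    return False
```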

We kept on doing that until… it worked!

Figure 7: Claude (and I) successfully hack Claude

The Real Vulnerability: Compositional Risk

Let’s be clear: no part of this attack exploited a vulnerability in any of the MCP servers.

The risk came from the composition:

  • Untrusted input (a Gmail message)
  • Excessive capability (shell execution permission via the Shell MCP server)
  • No contextual guardrails, allowing unchecked cross-tool invocation
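
What would a contextual guardrail even look like? Here is a minimal sketch, assuming a hypothetical host-side policy hook that sees every tool call together with the provenance of the content currently in context. This is not a real MCP API, just an illustration of the missing check:

```python
# Hypothetical host-side policy: block high-capability tool calls that were
# influenced by content from untrusted sources. Not a real MCP API; a sketch
# of the contextual check that was missing in this chain.

UNTRUSTED_SOURCES = {"gmail"}          # tools that ingest external content
HIGH_CAPABILITY_TOOLS = {"shell.run"}  # tools that can execute code

def allow_tool_call(tool: str, context_sources: set[str]) -> bool:
    """Deny execution-capable tools when untrusted content is in context."""
    if tool in HIGH_CAPABILITY_TOOLS and context_sources & UNTRUSTED_SOURCES:
        return False  # cross-tool invocation from untrusted input: hold for review
    return True

# Example: a shell call requested right after reading a Gmail message
assert allow_tool_call("shell.run", {"gmail"}) is False
assert allow_tool_call("shell.run", set()) is True
```

The point isn’t this exact policy; it’s that the check has to consider the whole session’s context, not any single tool in isolation.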

This is the modern attack surface: not just the components, but the composition they form. LLM-powered apps are built on layers of delegation, agentic autonomy, and third-party tools.
That’s where the real danger lives.

Appendix: Disclosure & Credit (Literally)

After we achieved code execution, Claude responsibly suggested that we disclose the finding to Anthropic. Claude even offered to co-author the vulnerability report.

Yes, really. (See below) 

Figure 8: Claude suggests and “signs” a security vulnerability report to Anthropic.

Why This Matters

This wasn’t just a fun exercise. It’s a warning!

It shows the two main dangers of GenAI: the ability to generate attacks, and the inherently vulnerable nature of the systems themselves.

In traditional security, we think in terms of isolated components. In the AI era, context is everything. That’s exactly why we’re building MCP Security at Pynt: to help teams identify dangerous trust-capability combinations and mitigate the risks before they lead to silent, chain-based exploits.