Text "prep me for my 2pm" to your phone. Thirty seconds later, you get a structured briefing: who you're meeting, what you last discussed over email, what your team said about them in Slack, and three talking points. That's the promise of an always-on AI assistant—and it's surprisingly achievable once you move past the architectural pitfalls that plague most headless agent setups.
## Why OpenClaw-Style Frameworks Break in Production
The headless setup that OpenClaw popularized works great for weekend demos. But when you try to build something you'd actually trust with your calendar and email, four architectural problems surface:

1. **God-mode credentials.** The agent gets the same permissions as the developer who launched it—every OAuth token, every API key, wide open. A single prompt injection or compromised dependency cascades through everything. CVE-2026-25253 exposed a one-click RCE in OpenClaw due to missing origin validation.
2. **Fragile API wrappers.** The model is forced to guess complex payload parameters, and shadow registries of duplicate, unversioned wrappers become a supply-chain attack vector.
3. **Context bloat.** Raw API responses swell the context window and tank reasoning quality.
4. **No audit trail.** You can't answer "what did the agent do?"—an immediate fail for SOC 2 and ISO 27001.
## The Arcade + Claude Code Solution
This guide builds a WhatsApp AI assistant using four layers: a relay server that handles Meta's webhooks with HMAC signature validation, an MCP server that bridges to Claude Code, Arcade for secure tool access and per-action authorization, and a meeting-prep skill that pulls from Google Calendar, Gmail, and Slack. The key insight is that Arcade sits between the agent and your business tools—it evaluates permissions, mints just-in-time tokens scoped to specific actions, and executes the call. The LLM never sees long-lived credentials. Every tool call generates structured audit logs tied to the specific user and action.
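The relay's signature check is the first line of defense. Meta signs every webhook delivery with an HMAC-SHA256 of the raw request body, sent in the `X-Hub-Signature-256` header as `sha256=<hexdigest>`. A minimal sketch of that validation (the function name is illustrative, not part of any library):

```python
import hashlib
import hmac

def verify_meta_signature(app_secret: str, raw_body: bytes, signature_header: str) -> bool:
    """Validate Meta's X-Hub-Signature-256 header against the raw request body."""
    if not signature_header or not signature_header.startswith("sha256="):
        return False
    expected = hmac.new(app_secret.encode(), raw_body, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking signature bytes via timing.
    return hmac.compare_digest("sha256=" + expected, signature_header)
```

One subtlety worth underlining: the digest must be computed over the raw bytes Meta sent, not over a re-serialized JSON object, since key ordering or whitespace changes would break the signature.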
## What We're Building
The architecture is a straightforward chain: WhatsApp messages flow through the relay server into an MCP server, which feeds them to Claude Code. Claude Code processes messages using skills (markdown files that encode domain expertise), calls business tools through Arcade, and replies back through the same chain. The relay uses a cursor-based message queue so restarting doesn't re-process old messages. Skills tell the agent how to use tools well—which ones to call, in what order, what to look for in the results, how to format output. Without a skill, you have an agent with calendar access but no idea how to prepare a meeting brief.
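The cursor-based queue mentioned above can be sketched in a few lines. This is an illustrative design (the class name and file-based persistence are assumptions; a real relay might keep its cursor in Redis or SQLite): record the timestamp of the newest processed message, persist it, and on restart skip anything at or before that mark.

```python
import json
from pathlib import Path

class CursorQueue:
    """Tracks the newest-processed message timestamp so a restarted
    relay never re-processes old WhatsApp messages."""

    def __init__(self, cursor_file: Path):
        self.cursor_file = cursor_file
        self.cursor = 0  # epoch timestamp of the newest message seen
        if cursor_file.exists():
            self.cursor = json.loads(cursor_file.read_text())["cursor"]

    def take_new(self, messages: list[dict]) -> list[dict]:
        """Return only messages newer than the cursor, then advance and persist it."""
        fresh = [m for m in messages if m["timestamp"] > self.cursor]
        if fresh:
            self.cursor = max(m["timestamp"] for m in fresh)
            self.cursor_file.write_text(json.dumps({"cursor": self.cursor}))
        return fresh
```

The persist-before-reply ordering matters: if the cursor is written only after a reply succeeds, a crash mid-reply means a duplicate message on restart; written before, a crash means a dropped one. Which failure mode you prefer is a product decision.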
## Key Takeaways
- The machine isn't the threat model—credentials are. Buying a dedicated Mac Mini doesn't isolate you from prompt injection or supply-chain attacks.
- Arcade handles auth-managed tool access with per-action authorization, so compromised dependencies can't exfiltrate long-lived tokens.
- MCP tools must be agent-optimized (summarized data, not raw JSON dumps) or context bloat destroys reasoning quality.
- Skills are just markdown files—anyone on the team can write and iterate on workflows without code deployment.
- The relay always returns 200 to Meta, even on bad signatures. Returning 4xx causes retries with the same bad payload.
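That last point—acknowledge everything, process selectively—is worth making concrete. A minimal sketch of the relay's handler logic as a pure function (the name and `process` callback are illustrative; a real relay would wire this into its HTTP framework):

```python
import hashlib
import hmac

def handle_webhook(app_secret: str, raw_body: bytes,
                   signature_header: str, process) -> int:
    """Acknowledge every Meta delivery with 200; only process payloads
    whose HMAC-SHA256 signature checks out."""
    expected = "sha256=" + hmac.new(
        app_secret.encode(), raw_body, hashlib.sha256).hexdigest()
    if hmac.compare_digest(expected, signature_header or ""):
        process(raw_body)  # hand off to the message queue
    # Invalid signature: drop silently. Returning 4xx would only make
    # Meta resend the same bad payload on its retry schedule.
    return 200
```

Note the asymmetry: the HTTP status answers the question "did you receive this?" while the signature check answers "should you act on it?"—conflating the two is what triggers the retry storm.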
## The Bottom Line
This isn't a weekend demo anymore. The combination of Arcade's just-in-time authorization, Claude Code's battle-tested MCP support, and skills that encode your team's workflows gives you a production-ready assistant that actually deserves access to your calendar and email. The audit trail alone makes this approach viable for compliance-heavy environments where OpenClaw-style god-mode access would be a non-starter. If you're building an always-on AI assistant in 2026, this is the architecture that doesn't cut corners on security.