AI is evolving rapidly: from technology that thinks along with you to technology that acts independently. OpenClaw is a clear example of this shift: an AI agent that performs tasks directly on your own computer. This enables far-reaching automation, but it also introduces new risks. Once such an agent follows the wrong instructions, it can carry out actions you would never consciously take yourself, but with your permissions.
Through messaging apps like WhatsApp, OpenClaw can receive instructions and execute them immediately. Instead of just generating responses, the agent opens websites, controls applications, accesses files, and runs commands. To enable this, OpenClaw runs locally and uses the same permissions as the user. The risk is clear: if an attacker gains control of the agent, they inherit the same permissions and capabilities. Recent attacks have exploited seemingly trustworthy mechanisms, such as access tokens, browser extensions, or instructions that appeared harmless.
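To make the risk concrete, here is a deliberately simplified sketch (not OpenClaw's actual code) of what such an agent loop boils down to: an incoming message is handed straight to the shell, and whatever runs there runs under the account that started the agent. The function name `handle_instruction` is hypothetical.

```python
import getpass
import subprocess

def handle_instruction(text: str) -> str:
    """Hypothetical, simplified agent loop: an incoming message is
    executed as a shell command with no vetting at all. The command
    runs with the permissions of the logged-in user."""
    result = subprocess.run(text, shell=True, capture_output=True, text=True)
    return result.stdout

# The agent has no identity of its own: it acts as the user who
# launched it, with that user's files, tokens, and rights.
print(getpass.getuser())
```

The point of the sketch is the asymmetry: to the operating system there is no difference between the user typing a command and the agent relaying one from an attacker.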
For example, users were lured into clicking a link or installing an extension that promised extra functionality. In reality, this gave the attacker access to the locally running agent, who could then disable security measures and execute commands on the victim's system. This makes AI agents fundamentally different from traditional software: the more tasks and permissions you delegate to an agent, the greater the impact if that trust is abused.
For organisations, this introduces a new attack surface. It doesn’t just affect IT or security, but everyone experimenting with AI automation. The question is not whether AI agents are valuable, but how to use them in a controlled and contained way.
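One common containment pattern, sketched below under the assumption that the agent shells out to local tools, is an explicit allowlist: the agent may only invoke executables that have been approved in advance, and everything else is refused. The allowlist contents and the helper name `run_contained` are illustrative, not a prescribed configuration.

```python
import shlex
import subprocess

# Hypothetical containment layer: only pre-approved executables
# may be started on the agent's behalf. Everything else is refused
# before it ever reaches the shell.
ALLOWED = {"ls", "cat", "echo"}

def run_contained(command: str) -> str:
    parts = shlex.split(command)
    if not parts or parts[0] not in ALLOWED:
        raise PermissionError(f"command not allowed: {command!r}")
    # No shell=True: the command cannot smuggle in pipes,
    # redirects, or chained commands.
    result = subprocess.run(parts, capture_output=True, text=True)
    return result.stdout
```

An allowlist does not remove the risk, but it shrinks the blast radius: a hijacked agent can only do what the organisation explicitly decided it should be able to do.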
What to do
Make sure to check off all actions; this will have a positive effect on your Behavioural Risk Score.