Meta Is Recording Employee Keystrokes to Train AI — And That Should Make Everyone Uncomfortable

Meta has installed software on US employees' computers that captures mouse movements, keystrokes, and application clicks, and the data is being used to train AI models. This is one of the more brazen moves in tech's ongoing data-harvesting arms race, and it deserves direct scrutiny.
What Meta Is Actually Doing
The monitoring software logs detailed interaction patterns: not just what employees type, but how they navigate applications, where they click, and how long they spend on tasks. Meta's stated goal is to use this behavioral data as training material for AI models that learn human work patterns.
This is distinct from standard workplace monitoring, which is legally common. Meta is specifically repurposing employee behavioral data as an AI training asset, and, as far as reporting suggests, without prominent disclosure to employees.
This Is Part of a Much Larger Pattern
Tech companies are running out of public internet data to train AI. The logical next step is private behavioral data, and employees are a captive source. Meta's move is the most aggressive version of this trend so far.
My Take
Using your own employees as unpaid AI training data contributors is a governance problem, regardless of whether it is technically legal. Employees have a reasonable expectation that their work behavior is monitored for performance, not harvested for model training. Meta should be required to disclose this practice explicitly, offer an opt-out, and compensate employees whose data materially improves commercial AI products. None of that appears to be happening.
Frequently Asked Questions
What data is Meta collecting from employees?
Mouse movements, keystrokes, and application interaction patterns captured through software installed on US employee computers.
Is this legal?
Workplace monitoring is generally legal in the US with disclosure, but using behavioral data specifically as AI training material may face additional scrutiny.
Are other companies doing this?
Meta is the most prominent case, but behavioral data collection for AI training is a growing practice across the industry.