Data breaches have become a weekly fixture, and this week was no exception.
Instructure, the company behind the Canvas learning management system, confirmed that student names, email addresses, and school IDs were taken in a cyberattack. The breach has been claimed by ShinyHunters, a prolific cybercrime group, which asserts that data from thousands of schools worldwide was compromised.
ADT separately disclosed that customer names, phone numbers, addresses, and in some cases partial Social Security numbers were accessed after an apparent intrusion into its systems. Citizens Bank and Frost Bank are now facing lawsuits tied to a third-party vendor breach.
These incidents did not necessarily involve AI. But they reflect a data ecosystem that AI is making significantly more dangerous.
Researchers have documented sharp increases in AI-powered phishing campaigns and credential attacks over the past few years. The concern is not just that breaches happen. It is what happens after. The data taken in incidents like these gets aggregated, resold, and weaponized. AI tools let bad actors turn that data into personalized scams and fraud attempts at a scale that was not previously possible.
The underlying problem is that companies continue collecting far more personal information than they can reliably protect. Until data minimization becomes a baseline legal expectation rather than a voluntary practice, the pipeline from breach to harm will keep running.