Key Takeaways:
1. A US telecom company is using AI trained on inmates' calls to predict and prevent crimes.
2. The AI model scans inmates' calls, texts, and emails in real time to flag potential criminal activity.
3. Advocates raise concerns about privacy violations and the ethics of using inmates' data for AI training.
US telecom company Securus Technologies is piloting an AI model trained on inmates' calls to monitor communications and predict crimes. The AI scans communications in real time and flags suspicious activity for human review. Privacy advocates criticize the practice as a potential privacy violation and question the ethics of training AI on inmates' data without clear consent. The FCC recently allowed companies like Securus to pass security costs on to inmates, adding to concerns about surveillance and privacy rights.
Insight: Using AI to monitor inmates' communications raises ethical concerns about privacy rights and invasive surveillance, underscoring the need for clear guidelines and oversight of such technology.
This article was curated by memoment.jp from the feed source: MIT Technology Review.
Read the original article here: https://www.technologyreview.com/2025/12/01/1128591/an-ai-model-trained-on-prison-phone-calls-is-now-being-used-to-surveil-inmates/
© All rights belong to the original publisher.