While companies like Microsoft and Nvidia are all-in on the power of next-generation machine learning algorithms, some regulators are dreading what it might mean for our already-stressed communication networks. Among them is the chairwoman of the US Federal Communications Commission, who has just proposed an investigation into what “AI” could mean for even more spam calls and texts. The FCC will vote on whether to adopt the multi-tiered action in November.
Chairwoman Rosenworcel, who has served on the Commission since 2012 and as its head since being confirmed late in 2021, is particularly concerned with how newly empowered AI tools could affect senior citizens. The FCC’s initial press release (PDF link) lists four main goals: determining whether AI technologies fall under the Commission’s jurisdiction via the Telephone Consumer Protection Act of 1991, whether future AI tech might do the same, how AI affects existing regulatory frameworks, and whether the FCC should consider ways to verify the authenticity of AI-generated voice and text from “trusted sources.”
That last bullet point seems to hold the greatest potential for problems. Auto-generated text and natural-sounding voice algorithms are already fairly easy tools to use, albeit not quite fast enough for real-time back-and-forth in a phone call. Combine them with some “big iron” data centers, whether built expressly for mass calls and texts or merely rented from the likes of Amazon and Microsoft, and you have a recipe for disaster.
Replacing human-staffed call centers in scammer hotbeds like India and Cambodia with fully automated AI systems could exponentially increase both the volume and the efficacy of scam calls and texts, which are already sent hundreds of billions of times every year. While filters and blocks exist, billions of dollars are estimated to be lost to scams each year in the US alone, and many of those scams target senior citizens specifically.
The FCC’s brief does mention that AI technology could also be used to fight spam and scams, presumably via some kind of real-time scanning system that alerts users when they’re talking to a computer. But the details of that, along with the evolving threat posed by AI tools, will have to wait for the Commission’s November 15th session.