“Is this real life, or is it Black Mirror?” is a question we’ve been having to ask a lot lately. As deep-learning technology gets better, as security breaches become more common, and as the line between private and public blurs, we have to constantly question the ethics behind new technologies and how freely available they should be. The latest such technology comes via Canadian AI company Lyrebird, named after the bird that imitates chainsaws, video games, and anything else it hears. The company claims its algorithms can reproduce anyone’s voice from just a minute of sample audio.

Compare that to Adobe’s Project VoCo, which promises to edit speech the way Photoshop edits photos but requires 20 minutes of audio, and it’s a bit scary. You no longer need a recording of a political speech to work from; you just have to call someone up and keep them on the line for a couple of minutes. The results are far from perfect, but they’re pretty chilling all the same. The company even says its software can add emotion to the speech it creates. The example below features a few well-known politicians, and while it wouldn’t pass muster in a court, it’s not that far off, either.

This is pretty scary

As if the fake news problems on Facebook weren’t bad enough, algorithms like Lyrebird’s put the ability to make a politician say anything you want well within reach. The implications for audio as evidence are far-reaching, too. If anyone’s voice can be convincingly faked, what good is a voice recording in a legal case? Could a scammer call someone, record their voice, and then use it to con money out of their loved ones? And that’s before we get to making our friends say weird things, or finally making our parents give us the positive feedback we’ve been craving for years.

Lyrebird’s solution to this, according to The Verge, is to make the technology publicly available. They say that will mitigate damage because “everyone will soon be aware that such technology exists.” Which is why faked missile videos, email phishing, and other similar lies-through-technology no longer work, right?

The technology is still in beta, so the problems I’ve listed above remain hypothetical to some degree, but it’s clear they’re not wild dreams. They’re not science fiction fantasies. They’re scientific eventualities.