We often judge historical technological experiments as creepy, but a little scrutiny reveals that plenty of what we do in the present is no less odd.

Times have changed, but some experimentation may always feel as horrifying as the classic Frankenstein story. Future generations may note how bizarre it is to 3D-print human skin, and then we will be the ones seen as monsters.

One area of development that is seeking widespread acceptance, yet often ends up looking just as left-field and chilling as the experiments mentioned above, is Artificial Intelligence (AI).

Lyrebird: so human it’s scary

Virtual assistants such as Alexa and Siri have had so much impact and market acceptance that some experts, inspired by them, are now busy with the next challenge: making those digital voices sound as human-like as possible.


That is the case with a Canadian team of researchers from the University of Montreal, who appear to have developed an algorithm able to mimic voices with remarkable accuracy. But enough words; I shall let you be the judge of Lyrebird!

Check out the clip “Politicians discussing about Lyrebird”. Barack Obama, Donald Trump, and Hillary Clinton are unlikely to have sat at the same table to give statements about how amazing this voice emulator could be! (Yes, Trump sounds a little drunk, but it’s still impressive.)

What kind of witchcraft is this?

The truly groundbreaking part of Lyrebird is how it makes AI feel less artificial and more natural through voice alone, without relying on pre-recorded words and phrases, which is how our current favorite voice assistants work.


Just one minute of recording is enough for the AI-based voice emulator to identify the ‘key’ of a voice; from there it can say whatever you want, even if the new speech is entirely different from the registered sample.

The intonation of each word can also be varied, making it possible to render the same sentence in a variety of ways.
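To make the idea a little more concrete, here is a minimal, purely illustrative sketch of how a per-speaker voice ‘key’ could condition a text-to-speech model. Lyrebird has not published its architecture, so everything below (the SpeakerEncoder and Synthesizer modules, the shapes, the dimensions) is an assumption for illustration only, not their method:

```python
# Conceptual sketch only: the module names, shapes and sizes are hypothetical.
import torch
import torch.nn as nn


class SpeakerEncoder(nn.Module):
    """Maps a reference recording (~1 minute of mel frames) to a fixed-size 'voice key'."""

    def __init__(self, n_mels: int = 80, embed_dim: int = 256):
        super().__init__()
        self.rnn = nn.GRU(n_mels, embed_dim, batch_first=True)

    def forward(self, mel_frames: torch.Tensor) -> torch.Tensor:
        # mel_frames: (batch, time, n_mels) from the reference recording
        _, hidden = self.rnn(mel_frames)
        # Normalise so the embedding acts as a compact voice identity
        return torch.nn.functional.normalize(hidden[-1], dim=-1)


class Synthesizer(nn.Module):
    """Generates mel frames for arbitrary text, conditioned on the voice key."""

    def __init__(self, vocab_size: int = 100, embed_dim: int = 256, n_mels: int = 80):
        super().__init__()
        self.text_embed = nn.Embedding(vocab_size, embed_dim)
        self.decoder = nn.GRU(embed_dim * 2, embed_dim, batch_first=True)
        self.to_mel = nn.Linear(embed_dim, n_mels)

    def forward(self, text_ids: torch.Tensor, voice_key: torch.Tensor) -> torch.Tensor:
        # Broadcast the voice key across every text position, so the same
        # sentence can be rendered in any registered voice.
        tokens = self.text_embed(text_ids)
        key = voice_key.unsqueeze(1).expand(-1, tokens.size(1), -1)
        out, _ = self.decoder(torch.cat([tokens, key], dim=-1))
        # Returns a mel spectrogram; a separate vocoder would turn this into audio.
        return self.to_mel(out)


if __name__ == "__main__":
    encoder, synth = SpeakerEncoder(), Synthesizer()
    reference = torch.randn(1, 6000, 80)   # placeholder for ~1 minute of mel frames
    voice_key = encoder(reference)         # the per-speaker 'key'
    text = torch.randint(0, 100, (1, 40))  # token IDs for a brand-new sentence
    mel = synth(text, voice_key)           # speech in the cloned voice, as mels
    print(mel.shape)                       # torch.Size([1, 40, 80])
```

The point of the sketch is the separation of concerns: once the voice key is extracted from a short sample, any new text can be synthesized in that voice, and conditioning signals like intonation could be varied independently of the words themselves.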

Lyrebird is now seeking investors such as Google for the product, which is still in development. The API was demonstrated publicly to make people aware that voices can be copied easily, and that they should be prepared for potential misuse.

The team also hoped to recruit some curious users for the beta version.

Artificial Intelligence: creating Photoshop for voices

The ability to copy a voice almost perfectly and make it say whatever we want could have serious repercussions. The first that comes to mind is similar to what Photoshop did to photos: making it tough to tell reality from simulation.

Recordings could become useless as evidence in trials, given how easily AI voice imitation can be applied. Lyrebird offers no guarantees that it can solve this hypothetical problem, but it does publish a section on ethics.

In defence of the Canadian developers, their aims for the AI voice tool are noble. They include “giving back the voice to people who lost it to sickness, being able to record yourself at different stages of your life and hearing your voice later on, etc.”, Jose Sotelo, a Lyrebird team member and speech-synthesis expert, explained to Gizmodo.

However, at a time when the controversy over fake news is still burning, everyone wants to conquer the internet kingdom, and it is getting harder to feel safe online, this kind of new technology may well add to the paranoia.

[See more: What IKEA Wants to Know About Artificial Intelligence]
