"If you use an automatic setting where the voice sounds robotic and it sounds like a completely synthesized voice, then when the artificial intelligence produces that synthesized voice, people won't be able to tell the difference." continued Beato. Computers can easily do this. They will improve over time. The average listener doesn't know the difference and frankly doesn't care. And I think they won't care if artificial intelligence replaces musicians and songwriters."
We certainly share Beato's concern about the future evolution of AI-based creative tools. The question of how this technology might one day threaten the livelihoods of musicians, producers and artists of all disciplines is a crucial one. But that train has unfortunately left the station, and it isn't coming back, regardless of whether or not Adam Levine uses Auto-Tune.
If AI technology continues to develop at its current pace, it will no doubt eventually succeed in convincing the public that AI-generated voices are real. That will happen regardless of whether listeners have been desensitized to vocal fakery by Auto-Tune or have grown entirely accustomed to digitally altered music. The important question is not how we detect fake voices when they arrive, but how we ensure that real voices are still valued.
Moreover, the scenario Beato envisages isn't some distant future; it's already within reach. Technology that imitates the human singing voice is already widely available. It isn't yet advanced enough to fool the average listener, but consumer apps like MVoice and the technology behind them will only continue to improve, and no doubt we'll soon be able to create completely believable synthesized vocals in our DAWs, vocals that could deceive even the most attentive ear.
Artist and researcher Holly Herndon has already trained an AI to replicate her singing voice, which she demonstrated in a TED Talk last year. (The AI has since covered Dolly Parton's "Jolene".) We'll let you decide whether you think it's a convincing fake, but to our ears it comes within a synthetic whisker of the real thing. If AI can do this now, it's not hard to imagine where it will be in a decade's time.
The real problems with Beato's argument become apparent when he starts talking about T-Pain, the groundbreaking rapper, songwriter and producer known for his extensive use of Auto-Tune. "There are artists who have built their entire careers on Auto-Tune, like T-Pain. He even has his own Auto-Tune plugin," Beato notes, before playing a clip from T-Pain's Tiny Desk concert that features his unprocessed voice. "The amazing thing about T-Pain is that he can sing really well; you just rarely hear it."
Here, Beato misses the point entirely. T-Pain didn't reach for Auto-Tune because he struggled to hit the right notes. He used it to develop a voice of his own, one that expressed his musical ideas more effectively than any other instrument could. In doing so, he kick-started one of the most influential movements in modern music: a vocal/rap hybrid, crafted largely with Auto-Tune, that has shaped countless hip-hop, pop and R&B artists and inspired everyone from Kanye West to Lil Wayne.
When T-Pain first heard Auto-Tune, on Jennifer Lopez's "If You Had My Love", the plug-in wasn't yet widely used in the production community. The rapper wasn't quite sure what he was hearing, but he knew he had to have it. It took two years of chasing the sound he'd heard on the song before he embraced the technology, T-Pain told Berklee students at a 2020 seminar.
He discovered Auto-Tune at a time when he was still searching for a creative outlet and looking for ways to set himself apart from other rappers. "I was surrounded by all these guys who wanted to be rappers [...] I was like, 'Man, if I'm just another voice in the crowd, I'll never be heard. I have to be somebody special.' No one heard Auto-Tune the way I did," he continued.
By manipulating his voice, T-Pain found his voice. What Beato fails to accept is that the technologies he rails against (Auto-Tune, pitch and time correction, drum programming) may well be shaping a future in which the music he personally likes to hear is less common, but they are also being used by a new generation of musicians, artists and producers to develop and enhance their creativity. Artists are making music that couldn't exist without these tools, and listeners are responding to it.
Beato's assumption that only people who can't sing would want to use Auto-Tune is a failure of imagination. "The Auto-Tune plug-in has become one of my most valuable creative tools," singer and electronic musician Eliza Bagg told us in a recent interview. When she isn't performing as a soloist with the New York Philharmonic, Bagg records experimental music under the name Lisel. Her latest album, Patterns for Auto-Tuned Voice and Delay, consists of vocal improvisations processed through various pieces of software, including Auto-Tune.
An accomplished singer, Lisel uses Auto-Tune not to correct her voice but to extend her chosen instrument. "I created a sound-processing system that allowed me to change the perception of what my instrument was," she says. "Now my instrument is the process that starts inside my body and continues on the computer, and the ideas that come out of that. I feel like I've created a new instrument, and it's my voice running through these different elements, Auto-Tune and everything they do."
Perhaps the most telling example of an artist repurposing Auto-Tune to expand their sound is found in the music of Bon Iver. During the recording of their third album, 22, A Million, frontman Justin Vernon and co-producer Chris Messina developed an audio-processing setup they called "The Messina". Built around Auto-Tune, it allows Vernon to pitch and harmonize his voice to follow melodies played on a keyboard, all in real time. The resulting effect helped define the sound of the entire project.
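For the curious, here is a very loose sketch of the basic idea behind that kind of keyboard-driven retuning: estimate the pitch a singer is holding, then shift the recording onto a note nominally "played" on a keyboard. It is an offline simplification written in Python with librosa, not a recreation of Vernon and Messina's real-time rig; the file name and target MIDI note below are placeholders we've assumed for illustration.

```python
# Rough offline illustration of retuning a vocal to a note chosen on a keyboard.
# Assumptions: a monophonic vocal file exists at VOCAL_FILE, and librosa,
# numpy and soundfile are installed. This is not how The Messina actually works;
# it simply demonstrates the pitch-detect-then-shift principle.
import numpy as np
import librosa
import soundfile as sf

VOCAL_FILE = "vocal_take.wav"   # placeholder path
TARGET_MIDI_NOTE = 64           # E4, standing in for a key held on the keyboard

# Load the vocal recording at its native sample rate
y, sr = librosa.load(VOCAL_FILE, sr=None, mono=True)

# Estimate the fundamental frequency frame by frame with pYIN
f0, voiced_flag, _ = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)

# Treat the median voiced pitch as the note the singer is holding
if not np.any(voiced_flag):
    raise RuntimeError("No voiced frames detected in the recording")
sung_hz = np.nanmedian(f0[voiced_flag])

# How many semitones separate the sung note from the keyboard note?
sung_midi = librosa.hz_to_midi(sung_hz)
n_steps = float(TARGET_MIDI_NOTE - sung_midi)

# Shift the whole clip so it lands on the keyboard note and save the result
retuned = librosa.effects.pitch_shift(y, sr=sr, n_steps=n_steps)
sf.write("vocal_retuned.wav", retuned, sr)
```

A real-time system would do this per frame, tracking the keyboard continuously and layering harmonies, but the sketch captures the core gesture: the keyboard, not the singer's intonation, decides where the note ends up.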
When Frank Ocean used Auto-Tune on "Nikes", the lead single from his era-defining second album Blonde, in 2016, fans heard the pitched-up vocal as a kind of alter ego: his younger self singing alongside his lower, more mature voice. Artists aren't just using Auto-Tune to alter their sound; they're using it to discover new versions of themselves. It's even helping some explore their experiences of gender identity through their creative practice.
A number of trans artists, including Katie Dey, Lyra Pramuk and the late SOPHIE, have used voice manipulation as a means of exploring identity in their music. Many trans people choose voice therapy and/or voice surgery as part of their transition, and vocal processing can serve as a way of exploring, through their work, the complex meanings and associations the voice carries and how these relate to gender identity. By breaking down the male-female binary of the voice, technologies like Auto-Tune allow these artists to make the lived experience behind their music audible.
Although it has since been taken in new creative directions, let's not forget that Auto-Tune was originally created to fix out-of-tune vocals. While Beato believes this may have hastened the death of musicianship, others would say it has democratized music-making, empowering artists who lack natural vocal talent. One such artist is Preston's Rainy Miller, who records saturnine alternative R&B centred on his own vocals.
"I didn't want to rap and I couldn't sing, so I fell into it, " Miller said when asked why he adopted the Loud & Quiet (opens in new tab) technique. “What I'm doing right now seems very important. I'm not a music teacher, I didn't have the money to buy many instruments, and I didn't start a musical family. But to me, that doesn't stop you from doing music if you want to. Auto-Tune not only helped Miller find his voice, it gave him a voice.
It's fair to say that Beato probably didn't have the artists mentioned here in mind when he claimed that Auto-Tune ruined popular music, and a narrow definition of popular music might not stretch to the more experimental styles we've discussed. But the point stands: would modern music really be better off if Daddy Yankee hadn't used Auto-Tune on Despacito? And when those Auto-Tuned songs have sold millions of copies, which they have, who's to say they should have sounded any different?
Auto-Tune may have been invented to fix out-of-tune recordings, but it has taken on a life of its own, changing the sound of modern music for the better. Many artists don't use the plug-in to correct their vocals at all, preferring instead to use Auto-Tune to transform their sound. Like many creative tools, it has drifted away from its original purpose as musicians and producers have dreamed up new ways of using it.
Established paradigms are threatened whenever new technologies emerge, and people are always alarmed by the danger they believe change poses. We heard similar cries when the synthesizer was invented: outraged musicians protested against concerts featuring the instrument, and the Musicians' Union even passed a resolution seeking to ban the synth, fearing that its members would be replaced by the machine.
But if new technologies are given no room to develop, if new instruments aren't explored, experimented with and pushed into unexpected places, how will music evolve? Thankfully, the synthesizer persevered and found its rightful place in our musical world. Auto-Tune, like the synth before it, and the AI-based technologies still to come, will continue to help musicians and producers do what they do best: express new ideas, discover new genres and create new sounds. That's the way it should be.
