Each time you use your voice to dictate a message on a Samsung Galaxy phone or activate a Google Home device, you're using tools Chanwoo Kim helped develop. The former executive vice president of Samsung Research's Global AI Centers focuses on end-to-end speech recognition, end-to-end text-to-speech tools, and language modeling.

“The most rewarding part of my career is helping to develop technologies that my friends and family members use and enjoy,” Kim says.

He recently left Samsung to continue his work in the field at Korea University, in Seoul, leading the university's speech and language processing laboratory. A professor of artificial intelligence, he says he's passionate about educating the next generation of tech leaders.

“I'm excited to have my own lab at the university and to guide students in research,” he says.

Bringing Google Home to market

When Amazon announced in 2014 that it was developing smart speakers with AI assistive technology, a gadget now known as the Echo, Google decided to develop its own version. Kim saw a role for his expertise in the endeavor: he has a Ph.D. in language and information technology from Carnegie Mellon, and he specialized in robust speech recognition. Friends of his who were working on such projects at Google in Mountain View, Calif., encouraged him to apply for a software engineering job there. He left Microsoft in Seattle, where he had worked for three years as a software development engineer and speech scientist.

After joining Google's acoustic modeling team in 2013, he worked to ensure that the company's AI assistive technology, used in Google Home products, could perform in the presence of background noise.

Chanwoo Kim

Employer

Korea University, in Seoul

Title

Director of the speech and language processing lab and professor of artificial intelligence

Member grade

Member

Alma maters

Seoul National University; Carnegie Mellon

He led an effort to improve Google Home's speech-recognition algorithms, including the use of acoustic modeling, which allows a device to interpret the relationship between speech and phonemes (the phonetic units of a language).

“When people used the speech-recognition function on their phones, they were standing only about 1 meter away from the device at most,” he says. “For the speaker, my team and I had to make sure it understood the user when they were talking from farther away.”

Kim proposed using large-scale data augmentation that simulates far-field speech data to enhance the device's speech-recognition capabilities. Data augmentation takes the training data already collected and artificially generates additional training data from it to improve recognition accuracy.
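In rough terms, far-field simulation of this kind convolves near-field speech with a room impulse response (to add reverberation) and mixes in background noise at a chosen signal-to-noise ratio. The sketch below is a minimal illustration of that idea, not Kim's actual pipeline; the function name and the toy signals are invented for the example.

```python
import numpy as np

def simulate_far_field(clean, rir, noise, snr_db):
    """Simulate a far-field utterance from near-field speech.

    clean:  1-D array of near-field speech samples
    rir:    1-D room impulse response (models reverberation)
    noise:  1-D background-noise samples, at least as long as `clean`
    snr_db: target signal-to-noise ratio in decibels
    """
    # Reverberation: convolve the clean speech with the room response.
    reverberant = np.convolve(clean, rir)[: len(clean)]

    # Scale the noise so the mix hits the target SNR, then add it.
    speech_power = np.mean(reverberant ** 2)
    noise_seg = noise[: len(reverberant)]
    noise_power = np.mean(noise_seg ** 2) + 1e-12
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return reverberant + scale * noise_seg

# Toy example: 1 second of synthetic "speech" at 16 kHz with a
# decaying exponential standing in for a measured room response.
rng = np.random.default_rng(0)
clean = rng.standard_normal(16000)
rir = np.exp(-np.arange(800) / 200.0)
noise = rng.standard_normal(16000)
augmented = simulate_far_field(clean, rir, noise, snr_db=10)
```

Running many utterances through many (impulse response, noise, SNR) combinations multiplies the training set without recording anything new, which is what makes the approach attractive at Google's scale.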

His contributions enabled the company to launch its first Google Home product, a smart speaker, in 2016.

“That was a really rewarding experience,” he says.

That same year, Kim moved up to senior software engineer and continued improving the algorithms used by Google Home for large-scale data augmentation. He also further developed technologies to reduce the time and computing power used by the neural network and to improve multi-microphone beamforming for far-field speech recognition.

Kim, who grew up in South Korea, missed his family, and in 2018 he moved back, joining Samsung as vice president of its AI Center in Seoul.

When he joined Samsung, he aimed to develop end-to-end speech-recognition and text-to-speech engines for the company's products, focusing on on-device processing. To help him reach his goals, he founded a speech processing lab and led a team of researchers developing neural networks to replace the conventional speech-recognition systems then used by Samsung's AI devices.

“The most rewarding part of my work is helping to develop technologies that my friends and family members use and enjoy.”

Those systems included an acoustic model, a language model, a pronunciation model, a weighted finite-state transducer, and an inverse text normalizer. The language model captures the relationships between the words being spoken by the user, while the pronunciation model acts as a dictionary. The inverse text normalizer, most often used by speech tools on phones, converts recognized spoken-form words into conventional written expressions.
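To make the last component concrete: inverse text normalization turns spoken-form output such as "twenty three" into the written form "23". Production systems handle dates, currency, addresses, and more, and are often built as weighted finite-state transducers; the tiny rule-based sketch below is illustrative only and is not Samsung's implementation.

```python
# Toy inverse text normalizer for two-digit English numbers.
UNITS = {"zero": 0, "one": 1, "two": 2, "three": 3, "four": 4,
         "five": 5, "six": 6, "seven": 7, "eight": 8, "nine": 9}
TENS = {"twenty": 20, "thirty": 30, "forty": 40, "fifty": 50,
        "sixty": 60, "seventy": 70, "eighty": 80, "ninety": 90}

def inverse_normalize(text):
    """Rewrite spoken-form number words as digits, leaving other words alone."""
    words = text.split()
    out, i = [], 0
    while i < len(words):
        w = words[i]
        if w in TENS:
            # "twenty three" -> "23"; a bare "twenty" -> "20"
            if i + 1 < len(words) and words[i + 1] in UNITS:
                out.append(str(TENS[w] + UNITS[words[i + 1]]))
                i += 2
                continue
            out.append(str(TENS[w]))
        elif w in UNITS:
            out.append(str(UNITS[w]))
        else:
            out.append(w)
        i += 1
    return " ".join(out)

print(inverse_normalize("set an alarm for twenty three minutes"))
# set an alarm for 23 minutes
```

Folding this kind of rule set, along with the other components, into one end-to-end network is exactly the simplification Kim describes next.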

Because those components were cumbersome, it was not possible to develop an accurate, on-device speech-recognition system using conventional technology, Kim says. An end-to-end neural network would perform all of those tasks and “greatly simplify speech-recognition systems,” he says.

Chanwoo Kim [top row, seventh from the right] with some of the members of his speech processing lab at Samsung Research. Chanwoo Kim

He and his team used a streaming attention-based approach to develop their model. An input sequence (the spoken words) is encoded, then decoded into a target sequence with the help of a context vector, a numeric representation of the words generated by a pretrained deep-learning model of the kind used for machine translation.
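In an attention-based encoder-decoder, the context vector at each decoding step is a weighted average of the encoder's hidden states, with weights reflecting how well each input frame matches the current decoder state. Below is a minimal dot-product-attention sketch of that computation; it is a generic illustration under those assumptions, not Samsung's streaming model.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_context(encoder_states, decoder_state):
    """Compute a context vector with dot-product attention.

    encoder_states: (T, d) array, one hidden state per input frame
    decoder_state:  (d,) current decoder hidden state
    Returns a (d,) context vector: the attention-weighted average
    of the encoder states.
    """
    scores = encoder_states @ decoder_state   # (T,) alignment scores
    weights = softmax(scores)                 # attention weights, sum to 1
    return weights @ encoder_states           # weighted sum over frames

# Toy example: 5 encoder frames with 4-dimensional hidden states.
rng = np.random.default_rng(0)
enc = rng.standard_normal((5, 4))
dec = rng.standard_normal(4)
ctx = attention_context(enc, dec)
```

A streaming variant restricts the attention window so the decoder never waits for frames that haven't arrived yet, which is what makes the approach usable for live dictation on a phone.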

The model was commercialized in 2019 and is now part of Samsung's Galaxy phones. That same year, a cloud version of the system was commercialized; it is used by the phones' virtual assistant, Bixby.

Kim's team continued to improve the speech-recognition and text-to-speech systems in other products, and every year they commercialized a new engine.

Those include power-normalized cepstral coefficients, a feature-extraction technique that improves the accuracy of speech recognition in environments with disturbances such as additive noise, changes in the signal, multiple speakers, and reverberation. The technique suppresses the effects of background noise by using statistics to estimate its characteristics. It's now used in a variety of Samsung products including air conditioners, cellphones, and robotic vacuum cleaners.
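One distinctive ingredient of power-normalized cepstral coefficients is replacing the usual logarithmic compression of spectral power with a power-law nonlinearity (exponent 1/15), which is less sensitive to small noise-dominated values. The sketch below keeps only that ingredient; full PNCC also uses a gammatone filterbank and medium-time noise suppression, neither of which is shown, so treat this as a drastically simplified illustration rather than the published algorithm.

```python
import numpy as np

def pncc_like_features(signal, frame_len=400, hop=160, n_ceps=13):
    """Very simplified PNCC-style features from a 1-D audio signal."""
    # Slice the signal into overlapping frames.
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len]
                       for i in range(n_frames)])

    # Short-time power spectrum of each windowed frame.
    power = np.abs(np.fft.rfft(frames * np.hanning(frame_len), axis=1)) ** 2

    # Power-law compression (exponent 1/15) instead of log: unlike
    # log(x), which blows up near zero, x ** (1/15) does not stretch
    # small noise-dominated values, improving robustness to noise.
    compressed = power ** (1.0 / 15.0)

    # DCT-II to decorrelate, keeping the first n_ceps coefficients.
    k = compressed.shape[1]
    n = np.arange(k)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * k))
    return compressed @ dct.T   # shape: (n_frames, n_ceps)

# Toy example: 1 second of synthetic audio at 16 kHz.
rng = np.random.default_rng(0)
feats = pncc_like_features(rng.standard_normal(16000))
```

Because the features are cheap to compute, they suit always-listening appliances like the air conditioners and vacuum cleaners mentioned above.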

Samsung promoted Kim in 2021 to executive vice president of its six Global AI Centers, located in Cambridge, England; Montreal; New York; Seoul; Silicon Valley; and Toronto.

In that role he oversaw research on incorporating artificial intelligence and machine learning into Samsung products. He is the youngest person to have been an executive vice president at the company.

He also led the development of Samsung's generative large language models, which evolved into Samsung Gauss. The suite of generative AI models can generate code, images, and text.

In March he left the company to join Korea University as a professor of artificial intelligence, which is a dream come true, he says.

“When I first started my doctoral work, my dream was to pursue a career in academia,” Kim says. “But after earning my Ph.D., I found myself drawn to the impact my research could have on real products, so I decided to go into industry.”

He says he was excited to join Korea University, as “it has a strong presence in artificial intelligence” and is one of the top universities in the country.

Kim says his research will focus on generative speech models, multimodal processing, and integrating generative speech with language models.

Chasing his dream at Carnegie Mellon

Kim's father was an electrical engineer, and from a young age Kim wanted to follow in his footsteps, he says. He attended a science-focused high school in Seoul to get a head start on learning engineering subjects and programming. He earned his bachelor's and master's degrees in electrical engineering from Seoul National University in 1998 and 2001, respectively.

Kim had long hoped to earn a doctoral degree from a U.S. university because he felt it would give him more opportunities.

And that's exactly what he did. He left for Pittsburgh in 2005 to pursue a Ph.D. in language and information technology at Carnegie Mellon.

“I decided to major in speech recognition because I was interested in raising the standard of quality,” he says. “I also liked that the field is multifaceted, and I could work on hardware or software and easily shift focus from real-time signal processing to image signal processing or another sector of the field.”

Kim did his doctoral work under the guidance of IEEE Life Fellow Richard Stern, who is probably best known for his theoretical work on how the human brain compares the sound arriving at each ear to determine where the sound is coming from.

“At the time, I wanted to improve the accuracy of automatic speech-recognition systems in noisy environments or when there were multiple speakers,” he says. He developed several signal-processing algorithms that used mathematical representations built from knowledge of how humans process auditory information.

Kim earned his Ph.D. in 2010 and joined Microsoft in Seattle as a software development engineer and speech scientist. He worked at Microsoft for three years before joining Google.

Access to trustworthy information

Kim joined IEEE when he was a doctoral student so he could present his research papers at IEEE conferences. In 2016 a paper he wrote with Stern was published in IEEE/ACM Transactions on Audio, Speech, and Language Processing. It won them the 2019 IEEE Signal Processing Society Best Paper Award. Kim felt honored, he says, to receive the “prestigious award.”

Kim maintains his IEEE membership in part because, he says, IEEE is a trustworthy source of information, and through it he can access the latest technical information.

Another benefit of membership is IEEE's global network, Kim says.

“By being a member, I have the opportunity to meet other engineers in my field,” he says.

He is a regular attendee of the annual IEEE International Conference on Acoustics, Speech, and Signal Processing. This year he is the technical program committee's vice chair for the meeting, which is scheduled for next month in Seoul.
