
Computing for the Deaf: The Race to Teach Computers Sign Language | Science and Technology

Using a computer once meant hitting a keyboard. Then it meant tapping a touchscreen. Increasingly, it simply means speaking. More than 100 million devices powered by Alexa, Amazon’s voice assistant, are in use around the world. Siri, Apple’s equivalent, handles 25 billion requests a month. By 2025 the market for such technologies could be worth more than $27 billion.

One group, however, risks being left behind. The World Health Organization counts 430 million people as deaf or hard of hearing. Many of them use sign languages to communicate. If computers cannot understand those languages, their users risk being excluded from the digitization that is taking over daily life.

Many have tried to teach computers to understand sign language. Recent years have brought a string of claimed breakthroughs, from haptic gloves that capture the movements of the wearer’s fingers to software that detects distinctive hand shapes. Many of these efforts are celebrated, yet they seem to be developed at arm’s length from the very people they are meant to serve. “The value to us is basically zero,” says Mark Wheatley, executive director of the European Union of the Deaf (EUD).

The reasons are easy to understand. Gloves are cumbersome, as are similar technical fixes such as wearable cameras; in both cases it is the deaf user who must do the adapting. Hand-shape recognition, though useful, cannot capture the full complexity of sign languages, which also rely on facial expressions and body movements. Yet some of these projects are touted as cheap alternatives to human interpreters in hospitals, police stations, classrooms and other settings where even small errors can be very costly.

But things are improving. Research groups that include growing numbers of Deaf scientists are asking how technology can best serve Deaf people. Linguists who study sign languages are compiling databases, known as corpora, filled with examples of the languages in everyday use. And programmers are trying to turn that work into useful products.

As with spoken languages, there are hundreds of sign languages in the world, each with its own grammar, idioms and dialects. And, as with spoken languages, the tidy rules of grammar books do not fully capture the subtleties of everyday use. A single sign can be shorthand for a complex idea. Like speakers, signers take shortcuts, for instance producing a normally two-handed sign with just one hand. They also set up reference points within the signing space in front of them, which can be essential to meaning. Interpreting all of this correctly is far harder than recognizing spoken syllables or written letters.

Care must also be taken when gathering data. One publicly available corpus of spoken language, published in 2019 by a team at Microsoft, a computing giant, contains around a billion words from as many as 1,000 different speakers. An equivalent sign-language dataset might contain fewer than 100,000 signs from just ten people. Besides sheer size, a good corpus also needs diversity: conversations between native signers of varying backgrounds, dialects and levels of fluency. Representing signers with limited mobility matters too, because deaf people are more likely than hearing people to have additional disabilities.

Thomas Hanke, a researcher at the University of Hamburg, and his colleagues have built a sign-language corpus containing about 560 hours of conversation covering many of the dialects found in Germany. At first Dr. Hanke asked participants to travel to Hamburg. But while in the city, many volunteers began slipping local signs into their conversations, which distorted the data. So he now goes to the participants instead, and has spent the better part of two years criss-crossing the country in a mobile studio.

Collecting the data, however, is the easy part. Computers are slow learners, so the meaning of each example must be spelled out. That means annotating everything: every movement, facial expression and subtle emphasis. The work is painstakingly slow. Eight years in, Dr. Hanke reckons that only about 50 hours of his video have been properly annotated.
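
To see why annotation is so laborious, consider what a single annotated segment has to record. The sketch below is purely illustrative: the field names are assumptions made for the sake of the example, not Dr. Hanke’s actual annotation scheme, but every such field must be filled in by hand, sign by sign.

```python
from dataclasses import dataclass

@dataclass
class SignAnnotation:
    """One hand-annotated segment of a sign-language video (illustrative fields only)."""
    start_ms: int               # segment start, in milliseconds
    end_ms: int                 # segment end, in milliseconds
    gloss: str                  # written label for the sign, e.g. "HOUSE"
    dominant_handshape: str     # shape of the signer's dominant hand
    nondominant_handshape: str  # shape of the other hand, if used
    movement: str               # path and manner of the movement
    facial_expression: str      # eyebrows, gaze, cheeks and so on
    mouthing: str               # silent mouth pattern accompanying the sign
    notes: str = ""             # dialect variants, emphasis, corrections

def annotated_hours(segments: list[SignAnnotation]) -> float:
    """Total annotated footage, in hours."""
    return sum(s.end_ms - s.start_ms for s in segments) / 3_600_000

# A single two-second sign already requires many manually filled fields.
example = SignAnnotation(
    start_ms=12_000, end_ms=14_000, gloss="HOUSE",
    dominant_handshape="flat", nondominant_handshape="flat",
    movement="hands trace a roof, then walls",
    facial_expression="neutral", mouthing="haus",
)
print(annotated_hours([example]))  # a tiny fraction of an hour, for one sign
```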

Researchers at Microsoft hope that crowdsourcing can improve both the quantity and the quality of the available data. Danielle Bragg and her colleagues at the firm’s Massachusetts campus are developing a smartphone version of Battleship, the game in which each player tries to sink an enemy fleet by calling out locations on a grid. In Dr. Bragg’s version, each square of the grid is associated with a particular sign. Players therefore not only generate signing data of their own, but also verify the meaning of the signs made by their opponents.
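
That mutual checking is what makes game-generated data trustworthy. The snippet below is a minimal sketch of the idea, under the assumption that a clip only enters the corpus once enough opposing players have read it the same way; it is not Microsoft’s actual game code.

```python
from collections import Counter

def consensus_label(interpretations: list[str], min_votes: int = 3,
                    min_agreement: float = 0.8) -> str | None:
    """Return the agreed meaning of a signed clip, or None if players disagree.

    `interpretations` holds the grid squares that opposing players believed
    the clip referred to; their agreement doubles as a check on the label.
    """
    if len(interpretations) < min_votes:
        return None  # too few independent readings to trust
    label, count = Counter(interpretations).most_common(1)[0]
    return label if count / len(interpretations) >= min_agreement else None

# Four players out of five read the clip as square "B4", so it can join the
# training corpus with that label; the second clip has too few readings.
print(consensus_label(["B4", "B4", "B4", "C4", "B4"]))  # -> "B4"
print(consensus_label(["B4", "C4"]))                    # -> None
```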

Privacy is a particular concern, because collecting sign-language data means recording participants’ faces as well as their hands. When Dr. Hanke tried recording people’s signing anonymously, their individual signing styles proved distinctive enough to identify them anyway. Dr. Bragg plans to use face filters, or to replace faces with artificially generated ones. That degrades the quality of the data, but she hopes greater quantity will make up for it.
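
One generic way to mask faces in recorded clips is to detect each face and blur it, as sketched below with the OpenCV library. This is only an illustration of the trade-off described above, not the filter Dr. Bragg’s team actually uses; heavy blurring also hides the facial grammar that signing depends on, which is precisely why quality suffers.

```python
import cv2  # OpenCV

# A stock face detector that ships with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def anonymise_frame(frame):
    """Blur every detected face in a single video frame, in place."""
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5):
        face = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(face, (51, 51), 0)
    return frame

# Run over every frame of a recording (the file name is a placeholder).
capture = cv2.VideoCapture("signing_clip.mp4")
while True:
    ok, frame = capture.read()
    if not ok:
        break
    anonymise_frame(frame)
capture.release()
```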

With enough data, researchers who understand both Deaf culture and machine learning can achieve impressive results. SignAll, a Hungarian company whose 25-strong team includes three Deaf employees, claims to be one of the biggest outfits in the field. Its proprietary database contains 300,000 annotated videos of 100 signers using more than 3,000 signs in American Sign Language (ASL), one of the most widely used sign languages. It was collected with the help of Gallaudet University in Washington, DC, the only university designed specifically for deaf and hard-of-hearing students.

SignAll’s software can recognize ASL, though not yet at the speed at which native signers communicate. Its current product, SignAll 1.0, translates signs into written English, allowing hearing interlocutors to reply using speech-to-text software. But it relies on three cameras pointed at a signer who must wear special motion-tracking gloves, a hefty burden.

That may soon change. Zsolt Robotka, SignAll’s boss, says the company wants to offer a glove-free option, and is also finishing a product that works with a single smartphone camera. If the technology could be built into other apps, deaf users could use their phones to search for directions, look up the meaning of unfamiliar signs and more, without falling back on written spoken language.
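
To give a sense of what glove-free, single-camera tracking involves, the sketch below extracts hand landmarks from ordinary video with Google’s MediaPipe library. It illustrates the general approach rather than SignAll’s own method, and turning streams of landmarks, plus face and body cues, into recognized signs remains the hard part.

```python
import cv2
import mediapipe as mp

# Off-the-shelf hand-landmark detector: 21 (x, y, z) points per hand, no gloves needed.
hands = mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=2)

capture = cv2.VideoCapture(0)  # the phone's or laptop's built-in camera
while True:
    ok, frame = capture.read()
    if not ok:
        break
    # MediaPipe expects RGB images; OpenCV delivers BGR.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        for hand in results.multi_hand_landmarks:
            # A sign recognizer would consume sequences of these points over time.
            points = [(lm.x, lm.y, lm.z) for lm in hand.landmark]
            print(f"tracked {len(points)} landmarks on one hand")
capture.release()
```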

Crossing the valley

For now, Dr. Robotka’s focus is on translating sign language into text or speech. Translation in the other direction poses a thornier problem: how to generate a visual representation of sign language. The standard approach has been to use computer-generated avatars, but many of them fall into the “uncanny valley”, the idea from computer graphics that artificial humans which fall just short of verisimilitude come across as creepy and unsettling.

Bridging that valley would open the way to a wide range of two-way communication. Building a smartphone app that recognizes different European sign languages and can switch between them and spoken language is among the goals of two new multinational academic consortiums: the SignON project and the Intelligent Automatic Sign Language Translation project, known as EASIER. Both are advised by the EUD, which represents 31 national associations across the continent.

SignON covers the sign languages of Britain, the Netherlands, Flanders, Ireland and Spain, along with their spoken equivalents (except Flemish). Working with several European universities, it aims to crack three problems. One is improving the machine-learning algorithms that recognize signs and their meanings. Another is working out how best to interpret the distinctive grammar of sign languages. The last is building better avatars. EASIER, in which Dr. Hanke’s team in Hamburg is one of 14 partners, has similar goals: sign-language recognition, robust two-way translation and avatar development.

The money and attention are welcome. But past attempts to automate sign-language translation have too often aimed at making life more convenient for hearing people rather than genuinely helping the deaf. Observers now hope that a more sensitive approach will yield more useful products. “It’s a great opportunity for us,” says Mr. Wheatley of the EUD. “We don’t have time to be cynical.” ■

This article was published in the Science and Technology section of the print edition under the headline “Implicit Understanding”.
