Speech recognition in everyday life

Over the past decade, the world has gone through a radical transformation. Technologies once popularized by science fiction have become a part of our daily lives. One technology that is now so commonplace that most of us don't give it a second thought is speech recognition.

Nowadays, AI-powered voice assistants like Alexa, Siri and Google Assistant are scattered throughout our homes and smartphones, but it wasn't always this way. For a long time, voice control was very hit and miss.

While computers have been able to recognize voices for quite a while, they have been far less successful at understanding them. Interactive Voice Response (IVR) systems around the world simply could not get it right, and callers on customer support lines continued to struggle.

Recently, however, that has started to change. Thanks to advances in Artificial Intelligence (AI) and Natural Language Processing (NLP), speech recognition error rates are dropping. Now close to 95% accurate, the technology is finally starting to pay dividends in usefulness and utility.
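
As a rough illustration of what that accuracy figure means, speech systems are usually scored by word error rate (WER), and word-level accuracy is roughly one minus WER. The following is a minimal Python sketch, not taken from any particular product, that computes WER between a reference transcript and a recognizer's output; the example sentences are invented.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER: word-level edit distance divided by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

reference = "please call the customer support line"
hypothesis = "please call the customer support lane"
wer = word_error_rate(reference, hypothesis)
print(f"WER: {wer:.2%}, word accuracy: {1 - wer:.2%}")  # one wrong word out of six
```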

Speech recognition technology is now being used across businesses. Customer service teams are benefiting greatly, but its use extends to many other areas as well. Companies are rapidly embracing speech as a means of simplifying processes and improving overall efficiency.

Facet-rich audio datasets for training speech recognition technologies can be ordered quickly and affordably through clickworker.

How did we get here – the history of Speech Recognition Technology

It has been a somewhat bumpy road to this point, and we certainly did not go from zero to hero overnight with speech recognition technology. However, the comfort we now have with voice assistants like Alexa, Siri, Cortana and Google Assistant would not be possible without the pioneers that preceded them.

Speech has been a subject of interest since the early days of computing. In the 1950s, Bell Labs worked on a system known as Audrey, which could recognize spoken single-digit numbers. Roughly a decade later, IBM developed its Shoebox system, which could understand 16 words.

Speech recognition continued to be developed in the decades that followed. In the 1970s, for example, the US Department of Defense's research agency DARPA funded work that eventually led to the Harpy system. Harpy, developed at Carnegie Mellon, could understand around 1,000 words, a massive improvement over earlier systems. Despite this progress, its capabilities were still only roughly equivalent to those of a three-year-old child.

The 1980s and 90s saw further gradual improvements, and by the early 2000s speech recognition had reached close to 80% accuracy. At that point, however, progress seemed to stall for a while, until the latter half of the 2000s and the 2010s.

During this period, Google's voice search, Apple's Siri and other similar technologies were developed and deployed. It was this last push that transformed speech capabilities and moved the needle closer to 95% accuracy. Google and others have continuously improved what their systems "hear" by drawing on massive data collection efforts and cloud-based processing.

How does Voice Recognition Work?

Speech recognition might seem simple to a layperson, but in reality it is very complex. Think of speech recognition as something similar to the way a child learns a language: children hear speech around them every day. Whether it comes from siblings, parents or strangers, a child is constantly absorbing different verbal and non-verbal cues. This trains the brain and builds connections between words and their meanings.

While it may seem as if we are hardwired for language, it actually takes time and training. Speech recognition technology works in an essentially similar way. With computers, we are still learning how best to train them, but the process is much the same and involves a great deal of effort and repetition. Perfecting speech recognition may be an almost impossible task given the number of languages, accents and dialects in the world today, but we are continually getting closer.
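
To make the usage side of this concrete, here is a minimal sketch using the open-source Python SpeechRecognition package: a pretrained recognizer is given a recorded audio file and returns its best-guess transcript. The file name meeting_note.wav is a placeholder, and recognize_google sends the audio to Google's free web recognition API.

```python
import speech_recognition as sr  # open-source SpeechRecognition package

recognizer = sr.Recognizer()

# "meeting_note.wav" is a placeholder path to any short WAV recording.
with sr.AudioFile("meeting_note.wav") as source:
    audio = recognizer.record(source)  # read the whole file into an AudioData object

try:
    # Send the audio to Google's free web speech API and print the transcript.
    print("Transcript:", recognizer.recognize_google(audio))
except sr.UnknownValueError:
    print("The recognizer could not understand the audio.")
except sr.RequestError as exc:
    print("The recognition service was unreachable:", exc)
```

All of the training described above has already happened inside the hosted model; the calling code only supplies audio and handles the cases where the system is unsure.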

What are the benefits of speech recognition?

Advances in technology are primarily made to simplify our lives and empower us to do more. Speech recognition is at the cutting edge of these advances and is making a huge impact on many people's day-to-day lives. Some of its major benefits include:

Control of Voice Assistants

Voice assistants like Google Assistant and Alexa are now extremely commonplace on smart speakers in homes around the world. Alongside these speakers, assistants like Cortana on our computers or Siri and Google Assistant on our smartphones have quickly made voice assistants something many of us cannot live without.

Speech recognition in Healthcare

Speech recognition technologies have many different roles to play in the medical field. On the patient side, speech technologies are being used to help people with speech impairments communicate.

Speech technologies are also being used by medical professionals. Doctors use apps that transcribe their notes during a consultation, which lets them stay focused on the patient and also improves overall record keeping.

Speech recognition for the hearing impaired

Voice-to-text and transcription tools can help deaf and hard-of-hearing students learn in ways they were previously denied. In addition, for individuals with visual impairments, dictation tools and text-to-speech screen readers can be crucial.

Speech recognition and customer service

With the greater accuracy now available, customers are better able to use IVR systems and have their calls routed correctly. Speech recognition can also take some of the workload off contact centers by letting chatbots answer common questions that do not require human intervention. For questions that do need a human, identifying information can be collected in advance to speed up response times.
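
As a toy illustration of the routing idea, the sketch below maps a transcribed caller utterance to a support queue using simple keyword matching. Real IVR and chatbot systems use trained intent classifiers; the intents and keywords here are invented for the example.

```python
# Toy intent router: maps a transcribed caller utterance to a queue.
# The intents and keywords below are illustrative, not a production taxonomy.
INTENT_KEYWORDS = {
    "billing": ["invoice", "bill", "charge", "refund"],
    "technical_support": ["error", "broken", "not working", "crash"],
    "account": ["password", "login", "username"],
}

def route_call(transcript: str) -> str:
    text = transcript.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return intent
    return "human_agent"  # fall back to a person when no intent matches

print(route_call("I was charged twice on my last bill"))   # billing
print(route_call("I just want to talk to someone"))        # human_agent
```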

Speech recognition and vehicles

One area where speech recognition is having a significant, even life-saving, impact is in vehicles. Assistant features powered by Android Auto and Apple CarPlay help remove in-car distractions and keep drivers focused on the road.

Using car infotainment systems, drivers can now safely send and receive texts on the road, change radio stations without touching a dial and even navigate to new restaurants and landmarks simply by using their voice.

Speech Recognition in the Workplace

While we would all love to have a personal assistant, sadly only a few very senior executives have access to one. Luckily, digital voice assistants are becoming far more mainstream. These assistants can help find files and even take meeting notes by transcribing conversations on the fly. Security can also be enhanced by using voice recognition in place of swipe cards.

AI and ML in Speech Recognition

The phrase Artificial Intelligence (AI) was first coined in 1956. Since then, AI has come to describe ways of enabling computers to perform tasks and services that previously only humans were capable of. Machine Learning (ML) is a subset of AI and refers to the methods through which digital systems teach themselves from data.

With ML, researchers and scientists work to get computers to find and recognize patterns directly instead of having to program explicit rules. This training requires massive amounts of AI training data and has only become feasible in recent years. When the data is fed into the algorithm, the system looks for patterns based on different criteria. Researchers evaluate the accuracy of the resulting models and gradually fine-tune them over time, helping the systems get smarter.
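
The sketch below illustrates that loop in miniature with scikit-learn: synthetic feature vectors stand in for acoustic features, a model is fitted to recover a hidden pattern, and accuracy is measured on held-out data. It is an illustration of the workflow only, not of any production speech pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for acoustic features (e.g. MFCCs) and labels.
# Real speech systems train on millions of labelled utterances, not 500 rows.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 13))                 # 13 features per (fake) audio frame
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # a simple hidden pattern to recover

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = LogisticRegression()
model.fit(X_train, y_train)  # the "looking for patterns" step

# Researchers evaluate accuracy on held-out data and iterate from here.
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```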

Speech Recognition Challenges

Speech recognition technology still faces many challenges, including how systems handle background noise, the quality of the recording equipment and the range of dialects and accents spoken around the world.
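
Background noise in particular is partly handled at the front end. As a small example, the SpeechRecognition package used earlier exposes an ambient-noise calibration step; the sketch below assumes a working microphone and the PyAudio dependency.

```python
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    # Sample one second of room noise and raise the energy threshold accordingly,
    # so steady background hum is less likely to be treated as speech.
    recognizer.adjust_for_ambient_noise(source, duration=1)
    audio = recognizer.listen(source)  # record until a pause is detected

print(recognizer.recognize_google(audio))
```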

There is still no single, perfect way to teach spoken language to machines. Researchers know that with humans, what a person says is only a portion of what is actually meant. Humans constantly look for variations in pitch and tone, as well as facial expressions and body language, to understand meaning. In addition, different ways of speaking, such as slang, abbreviations and sarcasm, can radically change what is being said.

The future of Voice Recognition

We have made many advances in voice and speech recognition over the past half century, and as we move into the next decade, the technology is only going to become more sophisticated.

While speech recognition is currently used primarily to help perform online searches, this will not remain the case. Companies around the world are constantly innovating and experimenting with the technology to find new ways to apply it. As voice assistants become ever more ubiquitous, the possibilities are endless, and as artificial voices become more natural, they will help liberate people from behind their screens.

 
