Apple Says ‘Hey Siri’ Detection Briefly Becomes Extra Sensitive If Your First Try Doesn’t Work

A new entry in Apple's Machine Learning Journal provides a closer look at how hardware, software, and internet services work together to power the hands-free "Hey Siri" feature on the latest iPhone and iPad Pro models.


Specifically, a very small speech recognizer built into the embedded motion coprocessor runs all the time and listens for "Hey Siri." When just those two words are detected, Siri parses any subsequent speech as a command or query.

The detector uses a Deep Neural Network (DNN) to convert the acoustic pattern of the user's voice at each instant into a probability distribution over speech sounds. It then uses a temporal integration process to compute a confidence score that the phrase uttered was "Hey Siri."
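At a very high level, that two-stage pipeline (frame-level DNN scores, then temporal integration) can be sketched as follows. This is an illustrative toy, not Apple's implementation: the frame representation, the averaging-based integration, and the 50-frame window are all assumptions.

```python
# Illustrative sketch: per-frame scoring followed by temporal integration.
# The DNN, features, and window size are hypothetical stand-ins.

def frame_scores(acoustic_frames, dnn):
    """Run a scoring model on each short audio frame, yielding the
    probability that the frame belongs to the target phrase."""
    return [dnn(frame) for frame in acoustic_frames]

def confidence(scores, window=50):
    """Integrate frame probabilities over a sliding window to produce a
    single confidence that the whole phrase was spoken."""
    best = 0.0
    for i in range(len(scores) - window + 1):
        avg = sum(scores[i:i + window]) / window
        best = max(best, avg)
    return best
```

In practice the integration Apple describes is more sophisticated than a moving average, but the shape is the same: many weak per-frame signals are combined into one phrase-level score that gets compared against a threshold.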

If the score is high enough, Siri wakes up and proceeds to complete the command or answer the query automatically.

If the score exceeds Apple's lower threshold but not the upper threshold, however, the device enters a more sensitive state for a few seconds, so that Siri is much more likely to be invoked if the user simply repeats the phrase, without any extra effort.
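The two-threshold "second chance" logic can be sketched as a small state machine. The threshold values and the length of the sensitive window below are made up; Apple does not publish them.

```python
import time

# Hypothetical thresholds and window; illustrative values only.
UPPER, LOWER = 0.9, 0.6
SENSITIVE_SECONDS = 5.0

class Detector:
    def __init__(self):
        self.sensitive_until = 0.0

    def on_score(self, score, now=None):
        """Decide whether a confidence score should wake Siri.

        While in the temporary sensitive state, the lower threshold
        applies, so a repeated near-miss attempt succeeds."""
        now = time.monotonic() if now is None else now
        threshold = LOWER if now < self.sensitive_until else UPPER
        if score >= threshold:
            self.sensitive_until = 0.0
            return "wake"
        if score >= LOWER:
            # Near miss: lower the bar briefly in case the user repeats.
            self.sensitive_until = now + SENSITIVE_SECONDS
        return "idle"
```

Because the relaxed threshold only applies for a few seconds after a near miss, the false-alarm rate stays low in the common case where no one is addressing the device.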

"This second-chance mechanism improves the usability of the system significantly, without increasing the false alarm rate too much because it is only in this extra-sensitive state for a short time," said Apple.

To reduce false triggers from strangers, Apple invites users to complete a short enrollment session in which they say five phrases that each begin with "Hey Siri." The examples are saved on the device.
Apple explains: "We compare the distances to the reference patterns created during enrollment with another threshold to decide whether the sound that triggered the detector is likely to be 'Hey Siri' spoken by the enrolled user."

This process not only reduces the probability that "Hey Siri" spoken by another person will trigger the iPhone, but also reduces the rate at which other, similar-sounding phrases trigger Siri.
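The enrollment check amounts to a nearest-neighbor comparison against the saved examples. A loose sketch, with an assumed vector representation, Euclidean distance, and an arbitrary threshold:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two fixed-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def likely_enrolled_user(trigger_vec, reference_vecs, threshold=1.0):
    """Accept the trigger only if it lies close enough to at least one
    of the reference patterns saved during enrollment. The vector
    representation, distance metric, and threshold are illustrative."""
    return min(euclidean(trigger_vec, r) for r in reference_vecs) <= threshold
```

A stranger's voice, or a similar-sounding phrase, tends to land far from all five enrolled examples, so both kinds of false trigger are rejected by the same distance test.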
Apple also says it created "Hey Siri" recordings made both close to and far from the device in various environments, such as the kitchen, car, bedroom, and restaurant, from native speakers of many languages around the world.

For many more technical details about how "Hey Siri" works, be sure to read Apple's full article on its Machine Learning Journal.



Apple Updates Machine Learning Journal With Three Articles on Siri Technology

Back in July, Apple introduced the "Apple Machine Learning Journal," a blog detailing Apple's work on machine learning, AI, and other related topics. The blog is written entirely by Apple's engineers, and gives them a way to share their progress and interact with other researchers and engineers.

Apple today published three new articles to the Machine Learning Journal, covering topics that are based on papers Apple will share this week at Interspeech 2017 in Stockholm, Sweden.


The first article may be the most interesting to casual readers, as it explores the deep learning technology behind the Siri voice improvements introduced in iOS 11. The other two articles cover the technology behind the way dates, times, and other numbers are displayed, and the work that goes into introducing Siri in additional languages.

All three articles are available on Apple's Machine Learning Journal.

Apple is notoriously secretive and kept its work under wraps for many years, but over the past few months the company has become more open about sharing some of its machine learning advancements. The blog, along with published research papers, allows Apple engineers to participate in the wider AI community and may help the company retain employees who do not want to keep their progress a secret.



Apple Launches New Blog to Share Details on Machine Learning Research

Apple today debuted a new blog called the "Apple Machine Learning Journal," with a welcome message for readers and an in-depth look at the blog's first topic: "Improving the Realism of Synthetic Images." Apple describes the Machine Learning Journal as a place where users can read posts written by the company's engineers, related to all of the work and progress they've made for technologies in Apple's products.

In the welcome message, Apple encourages those interested in machine learning to contact the company at an email address for its new blog, machine-learning@apple.com.

Welcome to the Apple Machine Learning Journal. Here, you can read posts written by Apple engineers about their work using machine learning technologies to help build innovative products for millions of people around the world. If you’re a machine learning researcher or student, an engineer or developer, we’d love to hear your questions and feedback. Write us at machine-learning@apple.com
In the first post -- described as Vol. 1, Issue 1 -- Apple's engineers delve into using neural nets to refine synthetic images and make them more realistic. Using synthetic images reduces cost, Apple's engineers point out, but such images "may not be realistic enough" and could result in "poor generalization" on real test images. Because of this, Apple set out to find a way to enhance synthetic images using machine learning.
Most successful examples of neural nets today are trained with supervision. However, to achieve high accuracy, the training sets need to be large, diverse, and accurately annotated, which is costly. An alternative to labelling huge amounts of data is to use synthetic images from a simulator. This is cheap as there is no labeling cost, but the synthetic images may not be realistic enough, resulting in poor generalization on real test images. To help close this performance gap, we’ve developed a method for refining synthetic images to make them look more realistic. We show that training models on these refined images leads to significant improvements in accuracy on various machine learning tasks.
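The approach Apple describes trains a refiner network under two pressures: the refined image should look realistic, and it should stay close to the synthetic input so its original annotations remain valid. A loose sketch of that combined objective (the L1 regularizer and the weighting factor are assumptions, not Apple's published formulation):

```python
def self_regularization(refined, synthetic):
    """L1 distance keeping the refined image close to the synthetic
    input, so labels attached to the synthetic image stay valid."""
    return sum(abs(r - s) for r, s in zip(refined, synthetic))

def refiner_objective(realism_loss, refined, synthetic, lam=0.5):
    """Combined objective: an adversarial realism term (how well the
    refined image fools a discriminator) plus a self-regularization
    term. The weighting lam is illustrative."""
    return realism_loss + lam * self_regularization(refined, synthetic)
```

Minimizing the first term alone would let the refiner drift arbitrarily far from the synthetic image; the second term is what preserves the free labels that made synthetic data attractive in the first place.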
In December 2016, Apple's artificial intelligence team released its first research paper, which had the same focus on advanced image recognition as the first volume of the Apple Machine Learning Journal does today.

The new blog represents Apple's latest step in its progress surrounding AI and machine learning. During an AI conference in Barcelona last year, the company's head of machine learning Russ Salakhutdinov provided a peek behind the scenes at some of Apple's initiatives in these fields, including health and vital signs, volumetric detection of LiDAR, prediction with structured outputs, image processing and colorization, intelligent assistants and language modeling, and activity recognition. All of these could be potential subjects for research papers and blog posts in the future.

Check out the full first post in the Apple Machine Learning Journal right here.



Apple Expanding Seattle Hub Working on AI and Machine Learning

Apple will expand its presence in downtown Seattle, where it has a growing team working on artificial intelligence and machine learning technologies, according to GeekWire.

The report claims Apple will expand into additional floors in Two Union Square, and this will allow its Turi team to move into the building and provide space for future employees.
“We’re trying to find the best people who are excited about AI and machine learning — excited about research and thinking long term but also bringing those ideas into products that impact and delight our customers,” said computer scientist Carlos Guestrin, Apple director of machine learning. “The bar is high, but we’re going to be hiring as quickly as we can find people that meet our high bar, which is exciting.”
Apple's director of machine learning Carlos Guestrin, who founded Turi and is a University of Washington professor, said the Seattle team collaborates "extensively" with groups at Apple's headquarters in Cupertino, including working on new AI features for upcoming Apple products and services.

Guestrin said AI, for example, will enable the iPhone to be more understanding and predictive in the future:
“But what’s going to make a major difference in the future, in addition to those things, for me to be emotionally connected to this device, is the intelligence that it has — how much it understands me, how much it can predict what I need and what I want, and how valuable it is at being a companion to me,” he said. “AI is going to be at the core of that, and we’re going to be some of the people who help with that, here in Seattle, but of course there will be tons of groups in Cupertino doing amazing things with that, too.”
Guestrin said Apple is doing long-term research in Seattle, looking ahead "three to 10 years," while also focusing on the near term by developing new features for upcoming Apple products.
"We work on the whole spectrum," he said. "It's not just about doing research, but it's about the technology transfer and how that gets embedded into experiences that customers love."
Today, the University of Washington will reportedly announce a new $1 million endowed professorship in AI and machine learning, which is said to have been made possible by Apple's acquisition of Turi last year. The endowment is named after Guestrin, and it will allow the university to attract more top talent in the field.

Last month, Apple became a member of the Partnership on AI, a non-profit organization established "to study and formulate best practices, to advance the public's understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society".

A recent report, which referenced Turi, said Apple is working on "enhanced" Siri capabilities for next-generation iPhones.



Facebook wants to teach you all about how AI works


Pay attention, everyone: Facebook is trying to teach you something. 

Since you probably let the social network distract you from just about every other educational opportunity that comes your way, you should at least learn a bit now.  

It’s all about artificial intelligence. The already pervasive tech, according to a Facebook blog post, “remains mysterious” for most people even as they use AI systems every day.  

The post was co-authored by Yann LeCun, head of Facebook's AI Research, and Joaquin Quiñonero Candela, who leads its Applied Machine Learning research division.
