Apple Music Voice
Bringing voice intelligence to music

COMPANY: APPLE

CATEGORY: SEARCH | AI

Apple Music Recommendations

Our founder, Harry Simmonds, played a key role in developing the foundation for Apple Music Voice before starting Studio Elros. An on-screen music search has more room for error, since it can display multiple results at once; a voice command returns a single result, and it has to be the right one. Harry's work focused on improving Siri's ability to accurately understand and fulfil music-related requests through voice commands.

What was done

  • Re-design of Siri’s music search architecture

  • Increased accuracy of music search queries

01.

Building an architecture for the future

Searching for content in a library of 100 million songs can be a daunting task, especially when you have to consider the user's favorite music, downloaded library, recent listening patterns, and many other parameters. To accommodate these parameters, we needed a robust architecture that could handle countless permutations in the future.


Our focus was on building a modular system that could understand user behaviour based on semantic input and attach it to a query that reflects the user's preferences. As Siri's semantic understanding improves, we can add more modules to support advanced querying.
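The modular approach described above can be sketched as a pipeline of small, composable query modules, each translating one piece of semantic understanding into a constraint on the final search. The module names, trigger words, and filter keys below are purely illustrative, not Apple's actual components:

```python
from dataclasses import dataclass, field


@dataclass
class MusicQuery:
    """Accumulates constraints as each module interprets the utterance."""
    text: str
    filters: dict = field(default_factory=dict)


class QueryModule:
    """One unit of semantic understanding; new modules can be added over time."""
    def apply(self, query: MusicQuery) -> MusicQuery:
        raise NotImplementedError


class LibraryScopeModule(QueryModule):
    """Scope the search to the user's own library when the utterance says 'my'."""
    def apply(self, query: MusicQuery) -> MusicQuery:
        if "my" in query.text.lower().split():
            query.filters["scope"] = "library"
        return query


class RecencyModule(QueryModule):
    """Boost recently played content for requests like '... again'."""
    def apply(self, query: MusicQuery) -> MusicQuery:
        if "again" in query.text.lower().split():
            query.filters["boost"] = "recently_played"
        return query


def build_query(utterance: str, modules: list[QueryModule]) -> MusicQuery:
    """Run the utterance through every module, collecting their constraints."""
    query = MusicQuery(text=utterance)
    for module in modules:
        query = module.apply(query)
    return query


q = build_query("play my workout playlist again",
                [LibraryScopeModule(), RecencyModule()])
# q.filters == {"scope": "library", "boost": "recently_played"}
```

Because each module is independent, richer semantic understanding can be supported simply by registering another module, without touching the existing ones.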

02.

Device relevant voice domains

Siri's capabilities extend beyond just understanding and playing music. With music, movies, and videos having overlapping content, it can be difficult to discern the user's intent. For example, do they want to watch the movie "Inception" or listen to its soundtrack?


To address this issue, our team aimed to develop a robust understanding of device and user context, enabling us to deliver the appropriate content on the right device at the right time. Through the analysis of usage patterns, we could improve our models to ensure that Siri was attuned to the user's context.
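As a rough illustration of how device context can tip an ambiguous request such as "Inception" toward one domain, here is a toy scorer. The devices, verbs, and weights are invented for this sketch and are not how Siri actually resolves intent:

```python
def resolve_domain(utterance: str, device: str) -> str:
    """Pick a content domain by combining the spoken verb with device context.

    Illustrative heuristic only; the scores and rules are assumptions
    made for this sketch.
    """
    text = utterance.lower()
    scores = {"music": 0, "video": 0}

    # Verb signals from the utterance itself.
    if "watch" in text:
        scores["video"] += 2
    if "listen" in text or "play" in text:
        scores["music"] += 1

    # Device signals: a screenless speaker can only satisfy audio requests,
    # while a TV strongly suggests video content.
    if device == "homepod":
        scores["music"] += 2
    elif device == "apple_tv":
        scores["video"] += 2

    return max(scores, key=scores.get)


resolve_domain("play Inception", "homepod")   # "music" (the soundtrack)
resolve_domain("watch Inception", "apple_tv") # "video" (the movie)
```

In a real system these hand-written weights would be replaced by models trained on the usage patterns mentioned above, but the shape of the decision is the same: utterance signals and device signals combine into a single domain choice.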

03.

Re-focus on core use cases from usage

Determining the "correct" output is challenging when the input is highly variable, and classifying errors is ambiguous when user feedback is vague. Our team developed a way to improve our understanding of user requests by analysing the actions surrounding each request. This let us dive into the data to increase accuracy and focus on the core experiences our users enjoy. Users can always surprise you, and we learned to pivot the experience around device-specific use cases.
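One way to infer a request's outcome from the user's surrounding actions, in the spirit described above, is a simple event classifier. The event names and rules here are hypothetical, chosen only to show how implicit signals can stand in for explicit feedback:

```python
def classify_outcome(events: list[str]) -> str:
    """Infer whether a voice result satisfied the user from what they did next.

    Event names and rules are illustrative assumptions, not actual signals.
    """
    if not events:
        return "unknown"
    first = events[0]
    if first in ("skip", "stop"):
        # Abandoning the result immediately suggests a misfire.
        return "likely_wrong"
    if first == "retry_query":
        # Rephrasing the same request right away is a strong error signal.
        return "likely_wrong"
    if "played_to_end" in events or "added_to_library" in events:
        # Finishing or saving the result suggests it was what they wanted.
        return "likely_right"
    return "ambiguous"
```

Aggregated over many requests, labels like these make it possible to measure accuracy per use case and per device, which is what allows the experience to be re-focused on the cases users actually care about.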

Media Credit: Apple Inc.

Let’s talk

Subscribe to our upcoming newsletter for occasional updates! We only send emails when we have something to say.

© 2023 Studio Elros
