Hey Siri! Let’s add a new feature to you

Divit Karmiani
7 min read · Jun 26, 2021


What do we know about Siri?

Siri is a built-in, voice-controlled personal assistant available for Apple users. The idea is that you talk to her as you would to a friend and she aims to help you get things done, whether that be making a dinner reservation or sending a message.

Siri has access to every other built-in application on your Apple device — Mail, Contacts, Messages, Maps, Safari and so on — and will call upon those apps to present data or search through their databases whenever she needs to. Ultimately, Siri does all the legwork for you.

Recent updates with respect to Siri

Siri was redesigned in iOS 14, and activating Siri either through physical buttons or through voice commands no longer causes the Siri interface to pop up and take over the entire display with the sound wave design.

Instead, when Siri is activated, there’s a small animated Siri logo at the bottom of the iPhone’s display. Many of the answers that Siri provides are also shown in banners at the top of the iPhone, so Siri no longer interrupts other tasks.

Siri can answer a broader range of questions than before, so some complex requests that might have previously directed you to the web now receive direct answers.

Apple increased the number of languages that Siri understands for translation purposes, and you can now ask Siri to translate words, phrases, and sentences in more than 65 language pairs.

SWOT analysis of Siri

User Research

Since I am not an Apple device user, I asked a few people around me who own iOS devices. User research is an important element of product planning and development, so I conducted a short interview with my friends. Here are their responses:

What Apple devices do you own?

User 1: iPhone, iPad, MacBook

User 2: iPhone

Do you use Siri frequently?

User 1: Not really

User 2: 5–6 times a day

What are some of the tasks that you use Siri for?

User 1:

  1. Call somebody
  2. Search something on the phone
  3. Set reminders
  4. Entertainment, mostly to recognize/find a song

User 2:

  1. Set an alarm
  2. Call xyz person
  3. Find facts
  4. Drop a text
  5. Facetime xyz
  6. Translate

Does Siri fulfil your expectations for all the tasks you want completed?

User 1: Yes, though sometimes the accent can be an issue and Siri just googles the request

User 2: Yes, Siri has made my life easy and all my expectations are fulfilled

Where do you think Siri lacks the most?

User 1: Mainly understanding the accent.

User 2:

  1. A little slow to process
  2. Might not have the correct information all the time

Analysis of the User research

The sample size is too small for any rigorous analysis, but the above questions can give the following insights:

  1. The top three tasks for which Siri is used
  2. The major difficulties users face while using Siri
  3. How frequently users invoke Siri

This analysis can then be segmented by user demographics, user behavior, and so on.

This user research can be conducted through interviews, surveys, and contextual inquiries.

Features to be included in Siri’s new project

From the SWOT analysis and user research, we can observe that Siri needs to evolve constantly to handle compound actions and in-context conversations amid ever-growing competition. Users want information on the go, and even slight shortcomings hinder the retention of activated users.

The iOS 14 update brought a big change in Siri’s screen design. When Siri is activated, a small animated Siri logo appears at the bottom of the iPhone’s display, as opposed to the earlier interface that popped up and took over the entire display with the sound wave design.

This change in design should shape the next features to be included, which fall under the umbrella of “understanding the screen better”. It is the natural next step. Since this is a broad term, let’s break it down into the features it comprises (a rough sketch of how these might map to voice commands follows the list).

  1. Send an image on the screen to a contact
    * As a user, I should be able to ask Siri to send an image shown on the screen to a contact via a messaging platform
    * As a user, I should be able to ask Siri to take a screenshot of the phone screen and send it to a contact via a messaging platform
  2. Read an article displayed on the screen
    * As a user, I should be able to ask Siri to read out the content of an article displayed on the screen
  3. Translate the content of any app or screen
    * As a user, I should be able to ask Siri to translate the content displayed on the screen into my desired language
  4. Identify the song to which the lyrics displayed on the screen belong
    * As a user, I should be able to ask Siri to pick up the lyrics displayed on the screen and identify the song they belong to
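
To make these user stories a little more concrete, here is a minimal conceptual sketch in Swift of how example utterances could be mapped to the four features. Every name in it is hypothetical; this is not Apple’s SiriKit API, only an illustration of the kind of command handling the features imply.

```swift
import Foundation

// Hypothetical intents for the "understanding the screen better" features.
// These names are illustrative and do not come from Apple's SiriKit API.
enum ScreenUnderstandingIntent {
    case sendOnScreenImage(toContact: String)
    case readOnScreenArticle
    case translateScreen(toLanguage: String)
    case identifySongFromOnScreenLyrics
}

// Map an example utterance to one of the intents above.
func intent(for utterance: String) -> ScreenUnderstandingIntent? {
    let text = utterance.lowercased()
    if let range = text.range(of: "send this picture to ") {
        return .sendOnScreenImage(toContact: String(text[range.upperBound...]))
    }
    if text.contains("read this article") {
        return .readOnScreenArticle
    }
    if let range = text.range(of: "translate this screen to ") {
        return .translateScreen(toLanguage: String(text[range.upperBound...]))
    }
    if text.contains("which song is this") {
        return .identifySongFromOnScreenLyrics
    }
    return nil
}

// Example: "Send this picture to Mom" maps to .sendOnScreenImage(toContact: "mom")
```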

Target Customer Base

For the above features, we are targeting the mobile market. The following are the characteristics of the target customers:

  1. Millennials who want the device experience to be quick and interactive
  2. Users who want information on the go
  3. Age group 65+, among whom voice search is popular[1].

Features to be included in Minimum Viable Product

While deciding on the MVP, we want to validate an assumption:

Problem-Solution Hypothesis: Users who prefer voice search for retrieving information and other minimal tasks would also prefer to do more complex tasks on the go with the help of a voice assistant.

The first step in building an MVP is to identify the riskiest assumptions:

  1. People are more and more inclined towards doing compound and complex tasks with the help of a voice assistant
  2. People who are driving, or doing other tasks where the phone cannot be used, would like to multitask with the help of a voice assistant

Riskiest assumption

People are more and more inclined towards doing compound and complex tasks with the help of a voice assistant.

MVP features

Since the translation feature is already present in the current version of Siri, we can try to include the two features below in the MVP; the other features can be launched later.

  1. Read an article displayed on the screen
  2. Translate the content of any app or screen

Metrics to be tracked

By introducing these features, we want to increase the number of early adopters. We also aim to retain existing users and provide them with a better experience each time.

We have tried to study the metrics on the basis of various components of product analytics; small code sketches of how these could be instrumented follow the Events and Segmentation lists below.

Data Points:-

  1. Number of requests to send images per day/week
  2. Number of requests to read articles per day/week
  3. Number of requests to translate the content on the screen per day/week
  4. Number of requests to guess the song from part of the lyrics on the screen per day/week

Events:-

  1. Number of images successfully sent by Siri
  2. Number of articles read successfully by Siri
  3. Number of successful translations by Siri
  4. Number of successful guesses of the correct song from the lyrics by Siri
  5. Number of unsuccessful events of sending images by Siri
  6. Number of unsuccessful events of reading articles by Siri
  7. Number of unsuccessful translations by Siri
  8. Number of unsuccessful attempts to guess the correct song by Siri
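
As an illustration of how the events above might be instrumented, here is a small hypothetical sketch; the event names and the logger are made up for this article and are not an existing Apple or analytics API.

```swift
import Foundation

// Hypothetical analytics events for the four new features.
enum SiriFeatureEvent: String {
    case imageSendSucceeded, imageSendFailed
    case articleReadSucceeded, articleReadFailed
    case translationSucceeded, translationFailed
    case songGuessSucceeded, songGuessFailed
}

// Minimal in-memory counter for the events; a real pipeline would
// forward these to an analytics backend instead.
struct EventLogger {
    private(set) var counts: [SiriFeatureEvent: Int] = [:]

    mutating func log(_ event: SiriFeatureEvent) {
        counts[event, default: 0] += 1
    }
}

// Usage: increment counters as Siri completes (or fails) each request.
var logger = EventLogger()
logger.log(.articleReadSucceeded)
logger.log(.translationFailed)
```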

Segmentation:-

  1. Segmentation by demographics
    * Age of the user using the above features
    * Profession of the user using the above features
  2. Segmentation by acquisition
    * First-time users of Siri due to this feature
    * Bounce rate for this particular feature
  3. Segmentation by user behavior (a small computation sketch follows this list)
    * Users retrying the same or similar requests after an unsuccessful first try
    * Users dropping off without retrying after an unsuccessful first try
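
For the behavioral segmentation, a small illustrative computation of the retry versus drop-off split after a failed first attempt, again with made-up types:

```swift
import Foundation

// Per-user outcomes for one feature, in order of attempts.
struct UserAttempts {
    let userID: String
    let outcomes: [Bool]   // true = success, false = failure
}

// Among users whose first attempt failed, split those who retried
// from those who dropped off without another attempt.
func retryAndDropOffRates(for users: [UserAttempts]) -> (retry: Double, dropOff: Double) {
    let failedFirst = users.filter { $0.outcomes.first == false }
    guard !failedFirst.isEmpty else { return (0, 0) }
    let retried = failedFirst.filter { $0.outcomes.count > 1 }.count
    let total = Double(failedFirst.count)
    return (Double(retried) / total, Double(failedFirst.count - retried) / total)
}

// Example: two users failed first; one retried -> 50% retry, 50% drop-off.
let rates = retryAndDropOffRates(for: [
    UserAttempts(userID: "u1", outcomes: [false, true]),
    UserAttempts(userID: "u2", outcomes: [false]),
])
```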

Teams involved in the project and their responsibilities

Engineering team

Responsibilities:-

  1. Using text detection and extraction to understand what text is on the screen
  2. Using a text-to-speech module for Siri to speak the extracted text (a sketch of these first two responsibilities follows this list)
  3. Integrating with messaging platforms to send images and videos to a contact
  4. Updating Siri with the potential commands a user might give to invoke these new features
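
As a rough sketch of how the first two responsibilities could be prototyped, assuming Apple’s Vision framework for text recognition and AVSpeechSynthesizer for text-to-speech; the function name is mine, and error handling, language selection, and the actual Siri hand-off are omitted.

```swift
import UIKit
import Vision
import AVFoundation

// Keep the synthesizer alive so speech is not cut off when the function returns.
let speechSynthesizer = AVSpeechSynthesizer()

// Recognize the text visible in a screenshot and read it aloud.
func readScreenAloud(from screenshot: UIImage) {
    guard let cgImage = screenshot.cgImage else { return }

    // 1. Text detection and extraction from the on-screen image.
    let request = VNRecognizeTextRequest { request, _ in
        let observations = request.results as? [VNRecognizedTextObservation] ?? []
        let extractedText = observations
            .compactMap { $0.topCandidates(1).first?.string }
            .joined(separator: " ")

        // 2. Hand the extracted text to the text-to-speech module.
        speechSynthesizer.speak(AVSpeechUtterance(string: extractedText))
    }
    request.recognitionLevel = .accurate

    try? VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
}
```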

Marketing team

Responsibilities:-

  1. Devising the go-to-market strategy for these new features in the next iOS update

UI/UX designers

Responsibilities:-

  1. Updating all the web platforms with the new features during the launch

Business Analysts/PM

Responsibilities:-

  1. Setting up business objectives for the new features.
  2. Identifying the key metrics required to track the success of these features.
  3. Dividing the metrics according to product life cycle stages, e.g. pre-launch, post-launch, and steady-state metrics.
  4. Creating dashboards for the chosen metrics.

Testing the Quality

  1. There might already be a voice-assisted quality testing automation framework for Siri; new test cases corresponding to the new features should be added to it.
  2. Unit and integration tests must be written for the new features to cover quality end to end (a sample unit test is sketched after this list).
  3. The quality will be determined by the number of test cases passed.
  4. The metrics collected will also tell us about the quality of the project.
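
For illustration, a hypothetical unit test of the article-reading feature might look like the following; the helper function is a stand-in so the sketch stays self-contained, not part of any real Siri codebase.

```swift
import XCTest

// Stand-in for the real extraction pipeline so this sketch is self-contained;
// in the actual project the feature's own extraction code would be exercised.
func extractReadableText(from rawScreenText: String) -> String {
    rawScreenText.trimmingCharacters(in: .whitespacesAndNewlines)
}

// A hypothetical test case for the article-reading feature.
final class ScreenReadingFeatureTests: XCTestCase {

    func testArticleScreenProducesNonEmptyText() {
        let extracted = extractReadableText(from: "  Apple redesigned Siri in iOS 14.  ")
        XCTAssertFalse(extracted.isEmpty, "Siri should find readable text on an article screen")
        XCTAssertEqual(extracted, "Apple redesigned Siri in iOS 14.")
    }
}
```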

Challenges and Risk

  1. It is always challenging to replicate the semantics of human language. Different users may ask for the same task with different commands, and Siri must be intelligent enough to cover all those scenarios. A bad user experience can go a long way in impacting the churn rate, so a good user experience is imperative.
  2. Siri must be interactive with the user. Just performing the task at hand is fine, but engaging the user adds more value to the features.
  3. There is always a risk that a feature does not add value. Therefore, it is important to lay down clear objectives and hypotheses and validate them through well-defined user research using interviews, surveys, and contextual inquiries. Only then should the features be implemented and launched in the market.
