Why voice platforms need to make fundamental changes to enable multi-user skills



If you own a smart speaker and live with someone else, such as a partner, child, or roommate, you’ve probably noticed some oddities in how it works. It can call you by the wrong name, give you the status of someone else’s package, or simply say it doesn’t recognize you.

I noticed these issues when my Spotify recommendations started being skewed by my kids’ use of our Google Home Hub in the kitchen. My “made for you” playlists now feature music that I don’t really like but know my kids do.

These devices have these issues because they are not built with the full household context in mind. No matter who actually uses the device, it treats every interaction as coming from a single person. It is like assuming that only one person ever takes items out of a household’s refrigerator.

It is not the skill developers’ fault. The major platforms, including Amazon and Google, do not provide the right models of identity, privacy, security, experience, and ownership to fit the home (or the office).

What should a skills developer do on these platforms?

Communal computing
Communal connected devices have been around for a very long time. The landline phone in the kitchen was an early example of the impact of installing a network in our homes. Back then, the phone number was associated with a house rather than a person, and anyone in the household was expected to be able to answer it.

However, it wasn’t until computing came into the home that we started to see the problems. The era of mainframes and client/server architectures made it necessary to time-share machines between people, so the user model was invented to track who was using computing resources. When these machines were purchased for home use, they became personal computers, with a single user assumed. The Amazon Echo continued this pattern by linking all usage to the buyer’s Amazon account.

As an industry, we have watched these issues become more and more problematic. I break them down into five key problem areas:

  1. Identity: Do we know everyone who uses the device?
  2. Privacy: Are we exposing (or hiding) the right content for everyone with access?
  3. Security: Are we allowing everyone using the device to do and see what they should, and are we protecting content from people who shouldn’t have it?
  4. Experience: What should be displayed, and what is the right next contextual action?
  5. Ownership: Who owns all the data and services tied to a device that multiple people use?

Without addressing the communal nature of these devices, people will buy and install them but eventually tire of the weirdness, and they will replace them when their expectations in these areas are repeatedly unmet.

Develop skills for communal use
The Amazon Alexa skills catalog is packed with useful capabilities that help people around the home and office. Most people use their Echo to play music or set timers. At first, these capabilities seem free of context. You ask Alexa to play a particular song through Spotify. Then you realize that everyone in the house is doing the same thing, and each person’s requests start to affect everyone else’s experience. Some skills go a step further: shopping lists and delivery notifications, for example, are aimed specifically at the owner of the Amazon account rather than everyone in the house.

The reality is that these devices are deployed in something more like an ecosystem of people and devices. Understanding this context means being able to provide the right services for that environment. I have already written about the do’s and don’ts of communal computing, but what should a skill developer do until the platforms make fundamental changes to their models?

Use voice profiles, if available

You can teach Alexa to recognize your voice.

Amazon began to consider different people using a device through voice profiles. This requires everyone to feel comfortable providing voice samples to Amazon, and there are real concerns about the privacy of people’s voice data, as Dan Miller has previously written.

When available, voice profiles provide a quick way to identify whether the person making a request has a profile. The problem is whether everyone in the house can and will add one. In Amazon’s case, the device owner must sign in to their Alexa app (or hand over their credentials so someone else can sign in) and set up the voice profile.

At the time of this writing, I don’t think we can count on the presence or correct configuration of these voice profiles. There are many issues with using biometric data, but wider adoption could provide developers some relief.
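In practice, a skill should treat a recognized voice profile as optional and always have a fallback. A minimal sketch of that fallback chain, assuming the `context.System.person` / `context.System.user` shape of the Alexa request envelope (verify the exact field names against the current Alexa Skills Kit documentation):

```python
def resolve_speaker(envelope: dict) -> dict:
    """Resolve who is speaking from an Alexa-style request envelope.

    When a voice profile matches, the platform includes a `person`
    object alongside the account-level `user`. If no profile matches,
    fall back to the account owner and treat the speaker as unknown.
    """
    system = envelope.get("context", {}).get("System", {})

    person = system.get("person")
    if person and "personId" in person:
        # A voice profile matched: safe to personalize for this speaker.
        return {"id": person["personId"], "recognized": True}

    # No profile matched: we only know the account, not the speaker,
    # so avoid exposing anything personal.
    user = system.get("user", {})
    return {"id": user.get("userId", "anonymous"), "recognized": False}
```

The key design point is the `recognized` flag: downstream code should branch on it before reading out anything private, rather than assuming the account owner is the one speaking.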

Don’t trust behavioral data from community devices
The most popular skills are extensions of a larger service available on personal computers and phones. The example I like to cite is Spotify. I use it when I go for a walk, in my car, and on the Google Home Hub in my kitchen. These are three different points on a “communal” spectrum.

Behavioral data is the cornerstone of how Spotify and other consumer media services build their recommendations. The data collected is both implicit (songs played) and explicit (adding a song to a playlist or liking it). These signals are collected from my smartphone and my home assistant alike, but they do not come from the same context. Chances are, when I listen to Spotify through my headphones, the signal is purely about me. With a home assistant or a smart speaker, however, there is really no way today to know who else is listening.

When listening behavior on a home assistant differs significantly from that on my personal phone, it is more likely that someone else is listening than that I have suddenly changed my preferences. In these cases, we need to consider whether collecting this data is worth it at all and, if it is, whether it needs to be segmented completely from personal listening data.
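That segmentation can be as simple as tagging every event with the kind of device it came from and routing communal events away from the personal taste model. A hedged sketch, where the event shape and the set of device types are illustrative assumptions rather than any real service’s schema:

```python
from dataclasses import dataclass


@dataclass
class PlayEvent:
    track: str
    device_type: str       # e.g. "headphones", "car", "smart_speaker"
    explicit: bool = False  # liked, or added to a playlist

# Device types we treat as communal; the exact taxonomy would come
# from the platform's device metadata.
COMMUNAL_DEVICE_TYPES = {"smart_speaker", "smart_display", "tv"}


def split_signals(events: list) -> tuple:
    """Segment behavioral signals so that communal listening never
    pollutes the personal taste model."""
    personal, communal = [], []
    for event in events:
        bucket = communal if event.device_type in COMMUNAL_DEVICE_TYPES else personal
        bucket.append(event)
    return personal, communal
```

Only the `personal` bucket would feed individual recommendations; the `communal` bucket can still be useful, but as a profile of the shared space rather than of one listener.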

Build pseudo-identities of the location, rather than a person
While voice platforms rarely support a whole-house profile, Spotify has started to consider couples and families listening together through its Duo and Family Mixes, respectively. Spotify knows there is some overlap in listening habits between people who live together.

I would take it a step further and assume that a communal appliance is usually placed in a part of the house that several people access on a regular basis. My kitchen could host me, my partner, and my kids in any combination at any time. This means we should consider the identity of that location rather than the person who happens to be logged into the device or service.

The behavior captured in the kitchen becomes its own profile. It includes all the different people who regularly listen there, and it should not be mixed with more personal data (the industrial music I listen to alone is probably not suitable for the whole family).
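Concretely, that means keying the profile on the device (as a proxy for the location) rather than on the signed-in account. A minimal sketch, with all names hypothetical:

```python
from collections import defaultdict


class LocationProfile:
    """A pseudo-identity for a place, not a person. Every play on the
    device in that location feeds one shared profile."""

    def __init__(self, location_id: str):
        self.location_id = location_id
        self.play_counts = defaultdict(int)

    def record_play(self, track: str) -> None:
        self.play_counts[track] += 1

    def top_tracks(self, n: int = 3) -> list:
        # Most-played tracks across everyone who listens here.
        return sorted(self.play_counts,
                      key=self.play_counts.__getitem__,
                      reverse=True)[:n]


_profiles: dict = {}


def profile_for(device_id: str) -> LocationProfile:
    # Key on the device identifier, not the logged-in account, so the
    # kitchen's taste stays the kitchen's, whoever is signed in.
    if device_id not in _profiles:
        _profiles[device_id] = LocationProfile(device_id)
    return _profiles[device_id]
```

The point of the design is that a personal account can contribute to many location profiles without any location profile leaking back into the personal one.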

Ask when in doubt
If you are about to do something that might impact someone’s privacy, or take an action that might cause discomfort (or harm), you should ask the person whether they intend it. When someone asks to add an item to a shopping list, it can be as simple as asking, “Do you want to add this to Chris’s shopping list and notify him?”

Or when someone asks for a bank account balance, you should confirm who you are talking to: “This will announce Chris Butler’s bank account balances. Are you sure you want that?”
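In skill code, this amounts to gating sensitive intents behind an explicit confirmation turn. A sketch with hypothetical intent names and handler shape (not any real skill API):

```python
# Intents that touch one person's private data; illustrative names.
SENSITIVE_INTENTS = {"AddToShoppingList", "ReadAccountBalance"}


def handle_intent(intent: str, owner_name: str, confirmed: bool = False) -> dict:
    """Run an intent, pausing sensitive ones behind a confirmation prompt.

    The first call for a sensitive intent returns a question for the
    device to speak; only after the user says yes does the caller
    re-invoke with confirmed=True to execute it.
    """
    if intent in SENSITIVE_INTENTS and not confirmed:
        return {
            "type": "confirm",
            "prompt": (f"This affects {owner_name}'s account. "
                       "Do you want to continue?"),
        }
    return {"type": "execute", "intent": intent}
```

The cost is one extra conversational turn; the benefit is that the device never announces or changes someone’s private data just because an unknown voice asked.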

This won’t stop people who deliberately try to subvert a system, but most people are not like that. We live in our homes according to norms of what we find acceptable. I know my partner’s smartphone passcode, but it would be a huge breach of trust if I ever unlocked the phone without being asked to (and vice versa).

These norms allow humans to assess who is in the room and whether the situation is appropriate before privacy is violated.

Be less sure!
The summary of these recommendations is that when we deploy things in the home, we cannot be sure that only one person is using them. To build better devices, we need to start looking at the context in which devices are deployed, with all the different people involved, rather than assuming a single user.

That’s great advice for platform builders like Amazon and Google, but skill developers are left in a difficult position. To avoid these problems with communal devices:

  1. Use voice profiles, if available
  2. Don’t trust behavioral data from communal devices
  3. Build pseudo-identities of the location, rather than a person
  4. Ask when in doubt

If you follow these steps, you will develop skills that fit the context in which these devices are deployed and build trust with your user base. This is necessary until the platforms build capabilities that assume communal use by default.


Chris Butler is a chaotic good product manager, writer, and speaker. He was a product manager at Microsoft, Facebook Reality Labs, KAYAK, and Waze. Chris currently PMs the PM experience at Cognizant as Assistant Vice President, Global Head of Product Operations. He focuses on the impact of communal computing in the home and the office as our world becomes more and more connected.




