2/ Getting users to realize they *always* have a smart speaker in their pocket, one that can talk to individual apps, has been a huge hurdle for understanding the potential of #mobilevoice. Amazon is taking the smart speaker concept to mobile in a way users understand.
3/ Why would users talk to their apps? Because apps already have their data, preferences, *and* unique domain knowledge that Siri/Google do not have... unless you hand it over to them. For example: does @lowes have the @DEWALTtough drill I need in stock at the closest store?
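To make that concrete, here's a rough sketch of the kind of intent-and-slot result an in-app NLU could return for that question, and how the app could answer it from data only it has. The intent name, slot names, and `answer` function are made-up illustrations, not anything from a specific SDK.

```swift
import Foundation

// Hypothetical shape of what an in-app NLU could pull out of
// "Does Lowe's have the DEWALT drill I need at the closest store?"
// The intent name, slot names, and this function are made up for
// illustration; they are not from any specific SDK.
struct ParsedUtterance {
    let intent: String
    let slots: [String: String]
}

// The app, not a general-purpose assistant, can resolve these slots:
// it already knows the user's preferred store and has live inventory.
func answer(_ parsed: ParsedUtterance) -> String {
    guard parsed.intent == "check_inventory",
          let brand = parsed.slots["brand"],
          let product = parsed.slots["product"] else {
        return "Sorry, I didn't catch that."
    }
    // A real app would query its own inventory service for the
    // user's nearest store; the result is hard-coded in this sketch.
    let inStock = true
    return inStock
        ? "Yes, the \(brand) \(product) is in stock at your closest store."
        : "Not at your closest store, but nearby locations may have it."
}

let parsed = ParsedUtterance(
    intent: "check_inventory",
    slots: ["brand": "DEWALT", "product": "20V drill", "store": "closest"]
)
print(answer(parsed))
```

The point of the sketch: the slot values only mean something to the retailer's own app, which is exactly the domain knowledge a general-purpose assistant never sees.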
4/ Other apps have had wakewords before (including ones we have worked on), but users didn't know - or expect - that they could use a wakeword with an app, because they don't expect apps to talk. The expectation with Alexa is that it should work on a phone the way it does on a smart speaker.
5/ Now it does. For example, saying "Hey Siri, open Foursquare" is a LOT easier than unlocking your phone with your face, scrolling to find the app, tapping it, and waiting for it to open. But what will it say when a user opens @Foursquare with their voice now? NOTHING. It has no voice.
6/ But it can. We empower companies, developers, and designers to build their own voice assistants. They do not need to rely on the platforms to speak for them - nor should they! See last week's newsletter: https://spokestack.substack.com/p/out-to-voicelunch
7/ So, need a wakeword for your app? How about an on-device NLU that is fast and private? What about a branded voice that sounds like your brand instead of Siri, Google, or Alexa? WE GOT YOU! Come to http://spokestack.io and start taking back your #customerconversations
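If you're wondering what "build your own voice assistant" looks like inside an app, here's a rough sketch of the three moving parts wired together: a wakeword detector, an on-device NLU, and a branded TTS voice. The protocols, type names, and stubs below are illustrative placeholders only, not the actual Spokestack API.

```swift
import Foundation

// Illustrative placeholders for the three pieces a branded in-app
// assistant needs: a wakeword detector, an on-device NLU, and a
// custom TTS voice. These protocols and names are hypothetical,
// not the actual Spokestack SDK surface.
protocol WakewordDetector { func start(onWake: @escaping () -> Void) }
protocol OnDeviceNLU { func classify(_ utterance: String) -> (intent: String, confidence: Double) }
protocol BrandedVoice { func speak(_ text: String) }

// The app owns the whole loop: listen for its own wakeword,
// understand the request on-device, and reply in its own voice.
final class AppVoiceAssistant {
    private let wakeword: WakewordDetector
    private let nlu: OnDeviceNLU
    private let voice: BrandedVoice

    init(wakeword: WakewordDetector, nlu: OnDeviceNLU, voice: BrandedVoice) {
        self.wakeword = wakeword
        self.nlu = nlu
        self.voice = voice
    }

    func start() {
        wakeword.start { [weak self] in
            guard let self = self else { return }
            // In a real pipeline an ASR transcript would arrive here;
            // a fixed utterance stands in for it in this sketch.
            let transcript = "is the DEWALT drill in stock near me"
            let result = self.nlu.classify(transcript)
            if result.confidence > 0.7 {
                self.voice.speak("Checking \(result.intent) for you now.")
            } else {
                self.voice.speak("Sorry, could you say that again?")
            }
        }
    }
}

// Minimal stubs so the sketch runs end to end.
struct StubWakeword: WakewordDetector {
    func start(onWake: @escaping () -> Void) { onWake() } // fires immediately
}
struct StubNLU: OnDeviceNLU {
    func classify(_ utterance: String) -> (intent: String, confidence: Double) {
        (intent: "check_inventory", confidence: 0.92)
    }
}
struct StubVoice: BrandedVoice {
    func speak(_ text: String) { print("App voice: \(text)") }
}

let assistant = AppVoiceAssistant(wakeword: StubWakeword(), nlu: StubNLU(), voice: StubVoice())
assistant.start()
```

Because every piece runs in the app itself, the conversation stays on-device and in the brand's own voice instead of being routed through Siri, Google, or Alexa.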