Some ramblings about Conversational UIs, Bots, and ChatBots.
At present there is a lot of attention on Conversational User Interfaces, Bots, and ChatBots – an especially interesting and exciting area for those of us interested in designing and building the ways people interact with computers.
To reduce ambiguity it’s worth distinguishing between ‘Chat’ and ‘Bot’. Here I consider Chat to be the interaction model whose interface is predominantly natural conversation; the medium is the Conversational User Interface (Conversational UI, or CUI for short).
A Bot is an agent (a software application/service) that can carry out a task (semi-)autonomously on behalf of the user. A ChatBot, therefore, is a Bot that interfaces with the user via conversation but achieves its task autonomously.
Recommendation Engines have become the ‘hello world’ of Data Products (or, more generally, the data era). They were popularised by Amazon, which not only found them to be more engaging than curated reviews but also figured out how to make them work at scale and in (near) real-time (using a technique known as item-based collaborative filtering).
Since then, every digital commerce site has leveraged the idea, and every Machine Learning/Data Mining book reviews the concept and its implementation.
At a high, and very simplistic, level, it works by finding the distance between two entities (either people or items, e.g. restaurants, songs, movies) based on a set of features (e.g. cuisine, song/movie genre, song artist, movie director, etc.). Using your history (or a similar person’s) of entities you’ve previously engaged with (bought, visited, etc.), it predicts what other entities you would like. For example, if 90% of your iTunes library is Jazz, then other Jazz songs will carry a higher weight than Rock, and you will be recommended Jazz songs.
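To make the idea concrete, here is a minimal sketch of item-based collaborative filtering: score each unseen item by its similarity (cosine distance over rating vectors) to items the user has already engaged with. The rating data and item names are entirely hypothetical, just to illustrate the mechanics.

```python
from math import sqrt

# Toy user -> item rating history (hypothetical data for illustration).
ratings = {
    "alice": {"jazz_song_1": 5, "jazz_song_2": 4, "rock_song_1": 1},
    "bob":   {"jazz_song_1": 4, "jazz_song_3": 5, "rock_song_1": 2},
    "carol": {"rock_song_1": 5, "rock_song_2": 4, "jazz_song_1": 1},
}

def item_vector(item):
    """Ratings for one item across all users (0 if unrated)."""
    return [ratings[u].get(item, 0) for u in sorted(ratings)]

def cosine(a, b):
    """Cosine similarity between two rating vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def recommend(user, top_n=2):
    """Score each unseen item by its similarity to the items the user
    has already rated, weighted by those ratings."""
    seen = ratings[user]
    all_items = {i for history in ratings.values() for i in history}
    scores = {}
    for candidate in all_items - seen.keys():
        scores[candidate] = sum(
            cosine(item_vector(candidate), item_vector(s)) * seen[s]
            for s in seen
        )
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```

With this data, a Jazz-heavy listener like "alice" is recommended the unseen Jazz track ahead of Rock, matching the iTunes example above. Production systems precompute the item-to-item similarity matrix offline, which is what lets the technique work at Amazon’s scale.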
There are times when recommendations need something more than a history of engagement – something more timely. I’m sure we have all experienced this: it’s your wife’s birthday and you shop on Amazon, only to find your recommendations have been embarrassingly polluted with items you would rather your workmates not see. One suggested improvement for recommendation engines is to use context (where possible). Google does this well (the advantage of having established a strong presence in lifestyle and productivity products): if you’re looking for flights, Google Now will use this derived intent to keep you up-to-date with the latest flight deals.
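One simple way to sketch this kind of contextual filtering is to build the preference profile only from purchases that reflect the user’s own taste, excluding one-off gifts and decaying older signals. The purchase log, the gift flag, and the decay factor are all assumptions for illustration, not a description of how Amazon actually does it.

```python
from datetime import date

# Hypothetical purchase log: (item, category, date, flagged_as_gift).
purchases = [
    ("jazz_album", "music",     date(2024, 1, 10), False),
    ("necklace",   "jewellery", date(2024, 3, 2),  True),   # birthday gift
    ("jazz_vinyl", "music",     date(2024, 2, 5),  False),
]

def profile(history, today):
    """Weight each category by recency, skipping gift purchases so a
    one-off birthday present doesn't pollute the user's profile."""
    weights = {}
    for item, category, when, is_gift in history:
        if is_gift:
            continue  # contextual signal: this wasn't the user's own taste
        age_days = (today - when).days
        # Exponential decay: recent purchases count for more.
        weights[category] = weights.get(category, 0.0) + 0.99 ** age_days
    return weights
```

The hard part in practice is inferring the `is_gift` flag at all – which is exactly where contextual signals (dates, shipping addresses, derived intent) come in.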
But this can be achieved by other means. In the example I have in mind, we can leverage the mobile’s attributes of being connected, aware, and present to determine whether a recommendation is for an individual or a group of friends. Say you’re out with friends, using your phone to look for somewhere to eat – the phone has your contacts, your location, and an awareness of who you’re with (neglecting privacy in this instance) – so instead of using just your recommendations, it should extend the preferences out to those in close proximity. It’s not hard to see how this extends to going to the movies, finding something to do, or choosing music to play.
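Extending preferences to the group can be sketched with a simple aggregation strategy. Below I use “least misery” (the group’s score for an option is the lowest score any member gives it, so nobody ends up somewhere they hate); the people and their cuisine scores are made up for the example, and a real system would derive them from each person’s history.

```python
# Hypothetical per-person cuisine preference scores (0..1), e.g. learned
# from each person's dining history.
preferences = {
    "you":  {"italian": 0.9, "thai": 0.6, "sushi": 0.2},
    "sam":  {"italian": 0.4, "thai": 0.8, "sushi": 0.7},
    "alex": {"italian": 0.5, "thai": 0.9, "sushi": 0.1},
}

def group_recommendation(people):
    """Least-misery aggregation: rank each cuisine by the minimum score
    any group member gives it, then pick the best worst-case option."""
    cuisines = set().union(*(preferences[p] for p in people))
    scores = {
        c: min(preferences[p].get(c, 0.0) for p in people)
        for c in cuisines
    }
    return max(scores, key=scores.get)
```

Averaging instead of taking the minimum is the other obvious choice; least misery tends to work better for dinner-sized groups, where one strong objection sinks the outing.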
A Software Agent is an autonomous software program that acts on behalf of the user with no or minimal interaction. The concept has been around for some time, but it will become ever more important as we move into a truly programmable world and our digital footprint/dependency reaches a tipping point (for some it already has – those checking their phones 155 times per day).
This idea was conceived while contemplating one general trend and one significant challenge in mobile.
The challenge is that, despite the opportunity for innovation provided by the low barriers to building and deploying apps, App stores have become flooded, making discovery and management difficult and time-consuming for the user. Additionally, we see a lot of duplicated functionality across applications (think Retail, Events/Venue Finders, Restaurants, …), i.e. the concept just doesn’t scale well.
The trend is the move towards deeper integration into the platform, taking true advantage of mobile – a better understanding of your current context and interests, and being available when needed. The most obvious examples are the Personal Assistants offered by the three major players, giving you information based on interests and intent, e.g. Google Now provides you with a card of nearby products that you have previously searched for.
I’ve committed myself to writing and talking about the concepts of Micro-Interactions and Anticipatory Computing, using a SmartWatch app as a vehicle to deliver the content so that it has particular relevance. The challenge is actually coming up with a compelling use-case (apart from the obvious). Initially it was a Water Consumption Monitor, then a Pace Setter (i.e. allow the user to set a pace and nudge them when they are slowing down), but I then settled on a way for the user to navigate to a destination with less reliance on their SmartPhone (more on this in a later post).
This idea sparked a thought about how Smart devices (Phone/Watch/Earpiece/…) could help navigate the visually impaired around urban areas, using techniques similar to those used by semi-autonomous cars.
While flicking through the news this morning I came across an article on VentureBeat about online advertising trends for 2014. One of those trends was the implications of the absence of the cookie for targeted advertising (achieved by tracking the user’s past browsing), which also highlights the limitations of this approach for the new digital landscape (i.e. being unable to track between devices). The article wraps up by highlighting how internet titans are jumping in to fill this void (e.g. Clearinghouse from Mozilla, Google AdId, and others) and how this could lead to a central repository (for the user) to control privacy.