… or more specifically, too much choice. It used to be that choice was scarce: news was expensive to produce and distribute, and (largely due to communication costs) so were movies, music, books, and everything else you can imagine.
User-Centered Design tries to optimise the product/service around how users can, want, or need to use the product/service rather than forcing the users to change their behaviour to accommodate the product/service.
We have already seen specialisation in the field that branched the visual designer out into both the visual and the interaction designer.
We are now seeing a need for further specialisation: the data designer. This need is driven by the increase in complexity, connectivity, and ubiquity of computational devices; it is predicted that there will be 20 billion connected devices. The underlying factor across these devices is data – data that influences the behaviour of each device and needs to be communicated between them.
One of the more popular examples of the value of designing with data is Nest, the smart thermostat; Nest increases convenience and offers real value by using data to optimise heating patterns. Without an appreciation and understanding of data this would not have been possible; instead it would have been approached with wireframes and an application rather than being given some degree of autonomy.
At present there is a lot of attention on Conversational User Interfaces, Bots, and ChatBots – especially interesting and exciting for those of us who design and build the ways people interact with computers.
To reduce ambiguity it’s worth distinguishing between ‘Chat’ and ‘Bot’. Here I consider Chat the interaction model where the interface is predominantly natural conversation; the medium is the Conversational User Interface (Conversational UI, or CUI for short).
A Bot is an agent (a software application/service) that can carry out a task (semi-)autonomously on behalf of the user. A ChatBot is therefore a Bot that interfaces with the user via conversation but achieves some task autonomously.
Recommendation engines have become the ‘hello world’ of Data Products (or, more generally, the data era). They were popularised by Amazon, which not only found them to be more engaging than curated reviews but also figured out how to make them work at scale and in (near) real-time, using a technique known as item-based collaborative filtering.
Since then, every digital commerce site has leveraged the idea and every Machine Learning/Data Mining book reviews the idea and its implementation.
At a high, and very simplistic, level, it works by finding the distance between two entities (either people or items, e.g. restaurants or songs) based on a set of features (e.g. cuisine, song/movie genre, song artist/movie director, etc.) and, using your (or a similar person’s) history of entities previously engaged with (bought, visited, etc.), predicting which other entities you would like. For example, if 90% of your iTunes library is Jazz then other Jazz songs will carry a higher weight than Rock, so you will be recommended Jazz songs.
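The mechanics above can be sketched in a few lines. Below is a minimal item-based collaborative filtering example using cosine similarity over play counts; all the names and numbers are made up for illustration, not from any real catalogue.

```python
from math import sqrt

# Hypothetical play counts: user -> {song: count}
ratings = {
    "alice": {"jazz_1": 9, "jazz_2": 8, "rock_1": 1},
    "bob":   {"jazz_1": 7, "jazz_2": 9, "rock_1": 2},
    "carol": {"rock_1": 9, "rock_2": 8, "jazz_1": 1},
}

def item_vector(item):
    # Each item is represented by the vector of ratings users gave it.
    return {u: r[item] for u, r in ratings.items() if item in r}

def cosine(a, b):
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[u] * b[u] for u in shared)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

def similar_items(item):
    # Rank every other item by how similarly users rated it.
    target = item_vector(item)
    others = {i for r in ratings.values() for i in r} - {item}
    scores = {i: cosine(target, item_vector(i)) for i in others}
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(similar_items("jazz_1")[0][0])  # -> jazz_2
```

Because the heavy listeners of jazz_1 also play jazz_2, the two items end up close together, which is the intuition behind recommending more Jazz to a Jazz-heavy library.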
There are times when recommendations need something more than a history of engagement, something more timely. I’m sure we have all experienced this: it’s your wife’s birthday and you shop on Amazon, only to find your recommendations have been embarrassingly polluted with items you would rather your workmates not see. One suggested improvement for recommendation engines is to use context (where possible). Google does this well (an advantage of having established a strong presence in lifestyle and productivity products), e.g. if you’re looking for flights then Google Now will use this derived intent to keep you up-to-date with the latest flight deals.
But this can be achieved by other means. In the example I have in mind, we can leverage the mobile’s attributes of being connected, aware, and present to determine whether the recommendation is for an individual or a group of friends. Say you’re out with friends, using your phone to look for somewhere to eat – the phone has your contacts, your location, and awareness of who you’re with (neglecting privacy in this instance) – so instead of using just your recommendations, it should extend the preferences out to those in close proximity. It’s not hard to see how this extends to going to the movies, finding something to do, or choosing music to play.
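A minimal sketch of extending preferences to a group, using hypothetical cuisine scores and ‘least misery’ aggregation (pick the option whose least-enthusiastic member is happiest) – one of several plausible aggregation strategies, not any particular product’s algorithm:

```python
# Hypothetical per-person cuisine preferences (0-10),
# e.g. derived from each person's dining history.
preferences = {
    "you":  {"italian": 9, "sushi": 4, "bbq": 6},
    "sam":  {"italian": 5, "sushi": 9, "bbq": 3},
    "alex": {"italian": 7, "sushi": 6, "bbq": 2},
}

def group_recommendation(people):
    # Least-misery aggregation: a cuisine is only as good as its
    # least-enthusiastic member, so no one is dragged somewhere they hate.
    cuisines = set.intersection(*(set(preferences[p]) for p in people))
    scores = {c: min(preferences[p][c] for p in people) for c in cuisines}
    return max(scores, key=scores.get)

print(group_recommendation(["you", "sam", "alex"]))  # -> italian
```

In practice the phone would populate `preferences` from the detected nearby friends’ histories rather than a hard-coded dictionary.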
It’s a little ironic that I wrote this post while procrastinating doing the assignment for my data analytics course.
… we’re on the brink of the most exciting era of computing; the convergence of pervasive computing, our increased ‘dependency’, and data (its capture and handling) will diminish the concept of a computer as a box (keyboard, mouse, and monitor) and transform it into more of a living ‘thing’ (or service).
A lot of people and organisations have made their bets on 2015 – predicting, in most instances, the obvious ‘buzz’ words. Here are a few things I’m looking forward to in 2015.
A software agent is an autonomous software program that acts on behalf of the user with no or minimal interaction. The concept has been around for some time but will become ever more important as we move into a truly programmable world and our digital footprint/dependency reaches a tipping point (for those checking their phones 155 times per day, it arguably already has).
Having been playing around with ‘connecting’ things a fair bit these days, I find myself asking ‘what could we do if this thing could talk?’ about a lot of the objects I interact with. Here is one simple example.
Even though our little man turns 3 in November, I still find myself verging on paranoid when it comes to administering medicine – the amounts and the frequency.
So, ‘what could we do if the cap could talk?’ One idea is to broadcast that the cap has been opened, with a receiving app logging the time, so you know exactly when you can next give your little one more meds. Of course you could have the time logged on an LCD display, but by keeping it as simple as broadcasting via GATT (Bluetooth LE) it could hit an acceptable price point and mean you’re not swapping batteries out every other day.
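The receiving app’s side is trivial; here’s a sketch, assuming a 4-hour dose interval (purely for illustration – follow the medicine’s label) and assuming the BLE layer invokes `cap_opened` whenever the cap broadcasts an ‘opened’ event:

```python
from datetime import datetime, timedelta

DOSE_INTERVAL = timedelta(hours=4)  # assumed interval; check the label

# Times the cap broadcast an 'opened' event, as logged by the app.
open_events = []

def cap_opened(now=None):
    # Hypothetical callback wired to the BLE 'cap opened' notification.
    open_events.append(now or datetime.now())

def next_dose_due():
    if not open_events:
        return None  # no doses logged yet
    return open_events[-1] + DOSE_INTERVAL

def can_dose(now=None):
    due = next_dose_due()
    return due is None or (now or datetime.now()) >= due
```

So a dose logged at 08:00 would show ‘not yet’ at 10:00 and ‘OK’ from 12:00 – exactly the reassurance a paranoid parent wants at 3am.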
Around 2008 I built a little app for my wife to help her track how much water she drank; it was mildly successful but probably failed because the effort outweighed the feedback (reward). One plausible solution would be to ‘gamify’ it (essentially adding elements of competitiveness, social commitment, and a stronger variable reward for using the app – think Nike+) to help establish a habit. … but this is 2014 – computers should disappear where possible, freeing the user to concentrate on living rather than staring at a screen, i.e. Context-Aware/Anticipatory Computing. Which brings me to Hydrate, a prototype that automatically tracks the user’s consumption of water and nudges them if they have gone too long without sipping.
This idea was conceived while contemplating one general trend and one significant challenge in mobile.
The challenge is that, despite the opportunity for innovation provided by the few barriers to building and deploying apps, app stores have become flooded, making discovery and management difficult and time-consuming for the user. Additionally, we see a lot of duplicated functionality across applications (think Retail, Events/Venue Finders, Restaurants, …); the concept just doesn’t scale well.
The trend is the move towards deeper integration into the platform, taking true advantage of mobile, such as a better understanding of your current context and interests and being available when needed. The most obvious examples are the Personal Assistants offered by the three major players, giving you information based on interests and intent, e.g. Google Now provides you with a card of nearby products that you have previously searched for.
I’ve committed myself to writing and talking about the concepts of Micro-Interactions and Anticipatory Computing, using a SmartWatch app as a vehicle to deliver the content so that it has particular relevance. The challenge has been coming up with a compelling use-case (apart from the obvious): initially it was a Water Consumption Monitor, then a Pace Setter (i.e. let the user set a pace and nudge them when they slow down), but I finally settled on a way for the user to navigate to a destination with less reliance on their SmartPhone (more on this in a later post).
This idea sparked a thought about how smart devices (Phone/Watch/Earpiece/…) could help navigate the visually impaired around urban areas, using techniques similar to those used by semi-autonomous cars.
Despite the average Smartphone life-cycle getting longer, it’s still approx. 22 months (in line with operator monthly plans), and with over 52% of the world’s cell phones now Smartphones, there are probably a lot of useful devices gathering dust in people’s drawers.
This problem became apparent when trying to design a “Gift Recommendation service”. The general idea is to use a retailer’s API and your contacts’ interests (interest graphs built from Likes, Follows, and Favourites using data available via the user’s social networks – FB, Twitter, LinkedIn) to recommend relevant gifts (based on their interests, the occasion, and the strength of the relationship). This was the easy part – the difficulty came when trying to associate the contacts’ ‘interests’ with ‘gifts’ (products). One way would be to base it on keywords, e.g. filter products using features from their descriptions and the contact’s interests (‘sport’, ‘music’, …) – I imagine this approach would have failed miserably.
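Here’s a sketch of that naive keyword approach, with made-up products and interests. It only works when an interest word literally appears in a product description, which is exactly why I suspect it would fail: a contact interested in ‘jazz’ matches nothing below, even though the turntable is an obvious gift.

```python
# Hypothetical product catalogue (in reality, fetched via a retailer's API).
products = [
    {"name": "Bluetooth turntable", "description": "music lovers vinyl audio"},
    {"name": "Yoga mat", "description": "fitness sport exercise"},
    {"name": "Cookbook", "description": "cooking recipes food"},
]

def recommend_gifts(interests):
    # Score each product by how many interest words appear in its
    # description; keep only products with at least one match.
    wanted = {i.lower() for i in interests}
    scored = []
    for p in products:
        overlap = len(set(p["description"].lower().split()) & wanted)
        if overlap:
            scored.append((overlap, p["name"]))
    return [name for _, name in sorted(scored, reverse=True)]

print(recommend_gifts(["music", "sport"]))   # matches two products
print(recommend_gifts(["jazz"]))             # matches nothing - the flaw
```

The brittleness is structural: keyword overlap has no notion that ‘jazz’ is a kind of ‘music’, which is why some richer association between interests and products is needed.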
One of my first projects out of Uni was building a proximity marketing service using some discounted Motorola phones with JSR-82. Imagine being able to push messages to customers in proximity with vouchers and special offers – the reality was spam: unsolicited messages interrupting the customer, usually inconveniently. A lot of the ‘services’ currently being implemented using iBeacon remind me of those days; the good news is that this time round the user has to ‘opt-in’, i.e. install an app.
Since then my interest has shifted from being intrusive to invisible and you can see this with our current iBeacon prototype at Razorfish UK.
The app ecosystem has exploded over the last few years, creating inefficiencies in app discovery. Great for the platform vendors but not for the developers (or users). There have been numerous attempts to improve this, some of the obvious ones listed below (ignoring marketing tactics such as free-app-a-day):
- Vertical and specialised app stores (Amazon, Samsung, Sony, …)
- Cross promotional networks
- Review sites
- Increased categories
- Improved recommendations
- Third party apps (e.g. AppFlow, Appreciate, …)
This inefficiency is one major reason why Just-in-Time Interactions make sense, especially as the app model extends to other platforms (desktop, TV, SmartWatches, …).
Having been faced with this question (how to market a mobile app) many times, I thought I would have an attempt at making app discovery more relevant.
I came across a competition recently that sounded interesting – it is organised by IC Tomorrow, a UK Government technology board, and sponsored by Google Chrome (details here). The competition is to explore new ways of initiating web apps using new technologies, described below:
“New technology can change the way mobile web games are shared and initiated between players. Typically, mobile web games (i.e. those run in a browser, not native apps) are started by typing in a web address on a mobile device and shared by sending the address to friends. More innovative ways to start games’ sessions between users on the web will increase the potential for games to engage and attract players. This challenge seeks the development of a new service or new interaction that will encourage players to start their games’ sessions with other users with as little effort as possible.”
While flicking through the news this morning I came across an article on VentureBeat about online advertising trends for 2014. One of those trends was the implications of the absence of the cookie for targeted advertising (achieved by tracking the user’s past browsing), which also highlights the limitations of this approach for the new digital landscape (i.e. it’s unable to track between devices). The article wraps up by highlighting how internet titans are jumping in to fill this void (e.g. Clearinghouse from Mozilla, Google AdID, and others) and how this will lead to a central repository for the user to control privacy.
Back in 2011 Scott Jenson, at the time a Creative Director at Frog, wrote an article titled Mobile Apps Must Die. In this post he claims, essentially, that the current model of searching, finding, and obtaining mobile applications is archaic and has been thoughtlessly carried over from the desktop world. The main frustrations are around discoverability, distribution, and fragmentation. Scott proposes a model where applications are made available to the user based on their current context, or rather, just-in-time, and delivered using ubiquitous web technologies.
In this post we examine the techniques used to know you for purposes of improving targeted advertising.
In Pete Mortensen’s post The Future Of Technology Isn’t Mobile, It’s Contextual, he highlights the shift in computing towards a paradigm called Context-Aware Computing and outlines the four ‘graphs’ required before contextual computing will work: Social, Personal, Interest, and Behaviour. In essence, Context-Aware Computing means computers are able to proactively react to external stimuli as opposed to being commanded by the user – these four graphs are considered the necessary information to provide relevant context. The concept lends itself well to the notion of Just-In-Time Interaction/Information, as touched on in Frog Creative Director Scott Jenson’s blog post Mobile Apps Must Die, where interactions and information are not dependent on direct input but rather on your current activity, i.e. your current context.
One of the most immersive applications we have played with is Let’s create! Pottery; it’s a zen-type game that allows its users to create pottery in an incredibly intuitive way.
One feature that contributes to making this experience immersive is its responsive background: as the user pivots the device, the background slightly pans in response, similar to a parallax effect, giving the user a feeling of depth and realism.
In this document we briefly explore the what, why, and how of sentiment analysis – let’s jump straight into it.
So what is Sentiment Analysis?
I would be surprised if you haven’t already come across the term, and possibly you know or use it already. It has become increasingly popular with the advent of big data in the context of social networks, and it is a tool frequently used by brands to monitor and measure ‘chatter’ about themselves and/or their market.
Essentially it offers a way to extract and measure the opinion in this chatter (normally filtered by a specific topic or channel), e.g. if you wanted to know the general opinion of your brand you could analyse all the tweets that mention it, tallying the sentiment of each and reporting the result.
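As a sketch, here’s a crude lexicon-based version of that tallying. Real tools use trained models and much larger lexicons; the word lists and tweets below are purely illustrative.

```python
# Tiny illustrative sentiment lexicons.
POSITIVE = {"love", "great", "awesome", "good"}
NEGATIVE = {"hate", "terrible", "awful", "bad"}

def score(text):
    # Positive words minus negative words found in the text.
    words = set(text.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

def tally(tweets):
    counts = {"positive": 0, "negative": 0, "neutral": 0}
    for t in tweets:
        s = score(t)
        counts["positive" if s > 0 else "negative" if s < 0 else "neutral"] += 1
    return counts

tweets = [
    "I love this brand, great service",
    "terrible support, never again",
    "picked up my order today",
]
print(tally(tweets))  # -> {'positive': 1, 'negative': 1, 'neutral': 1}
```

Even this toy version shows the shape of the output a brand would report on: a distribution of opinion over a filtered stream of mentions.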
For a more detailed explanation, check out Wikipedia.
Not a week goes by without seeing an article about a new SmartWatch or some discussion about the future of wearable computing (rings, clothes, IoT, …). Having always been inspired by gadgets like Dick Tracy’s watch, I have kept a fairly close eye on this space and recently decided to jump in and build something to better learn about the opportunities and constraints of building for a SmartWatch (or wearable tech in general). In this short article I’ll describe my learnings and present my opinion of some practical uses for a SmartWatch.
Whilst working on a suite of digital products for the well known Carte Blanche Group (brands include Tatty Teddy and My Blue Nose Friends) with Masters Of Pie, we were asked how we might maximise the stand they had at this year’s London Toy Fair.
Our challenge was to build something specifically for the fair that would attract bystanders, provide a memorable and enjoyable experience with the brand, as well as introduce one of the characters from the Blue Nose Friends collection (an easy choice – Coco the excitable monkey!).
Our next question was what would attract and engage the bystanders – it needed to be something casual, fun, and non-intrusive in order to draw people in willingly and give busy bystanders something they wanted to play with.
We looked at how Coco might have some fun and how he could interact with visitors, and decided upon a rhythm-based game where the user would take control of Coco using the Kinect to hit his bongos in time with the music.
Update (2013-10-14); this has been implemented and will be available on Google Play by the end of the year. For those interested; the source code is available on GitHub – more information can be found on its site: www.findaplaymate.net.
Up until now, technology has isolated individuals, forcing its users to work with machines rather than people – but thanks to advancements in technology we are starting to see technology become more natural, complementing our human (social and active) traits rather than conflicting with them. There is still a lot of work to be done, especially around how we interact with a smarter environment and smarter devices (ambient intelligence), which will take time, but two components of this will become mainstream within the next few years: Personal Area Networks and a standardised Proximity Service Layer. Personal Area Networks are networks that connect personal smart devices, e.g. your watch connecting to your phone, with Bluetooth 4.0 being the ideal facilitator. The other, the Proximity Service Layer, refers to smart devices offering services to other devices in close proximity (very similar to PANs, and possibly leveraging the same technology). The big drivers for this are Wifi Direct on the hardware side, and AllJoyn (from Qualcomm) and Chord (from Samsung). For this to become a reality an open standard (mimicking the success of the web) needs to be formed.
Bottom line – devices will start chatting to each other like girls at a dinner party.
No doubt you have noticed a common theme this year in how people can interact with technology: the technology has finally come to a stage where our interaction with the digital world can be less artificial. Examples include gesture-controlled interfaces such as the Kinect, LeapMotion, and Intel’s Perceptual Computing kit, as well as touch, eye tracking, and voice. All are encompassed under a category of Human-Computer Interaction (HCI) called Natural User Interfaces (NUI), with the goal of making interfacing with devices, as the name suggests, natural.
In late 2010 we spotted a potential trend for Augmented Reality as a marketing tool – not a new idea, as it had been done before, but not yet a mainstream technique for marketing purposes. I spent a fair amount of time researching the topic, but being a bootstrapped service business it wasn’t long before the project was parked to gather dust. Around mid-2011 it was rediscovered and the initiative relaunched; Augmented Reality solutions existed by then, but given our unique relationship as a production partner for a few agencies we saw value in being able to offer a solution without the additional cost. So as a company we decided I would be given a couple of months dedicated to building a prototype to promote the service to agencies. Thankfully OpenCV made it possible to build a fully functional prototype (with rendering engine) within this time. The result was a lot of long nights and a fairly responsive marker (template) detection engine.
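For the curious, the core of template-based marker detection can be illustrated with brute-force normalised cross-correlation. This is a drastically simplified, pure-Python sketch of the idea behind OpenCV’s `matchTemplate` – the real engine did this far faster and handled rotation and perspective, which this does not.

```python
from math import sqrt

def find_marker(frame, template, threshold=0.8):
    # Slide the template over the frame (both greyscale 2D lists) and
    # score each position with normalised cross-correlation; return the
    # (x, y) of the best match if it clears the threshold.
    fh, fw = len(frame), len(frame[0])
    th, tw = len(template), len(template[0])
    t_mean = sum(map(sum, template)) / (th * tw)
    t = [[v - t_mean for v in row] for row in template]
    t_norm = sqrt(sum(v * v for row in t for v in row))
    best, best_pos = -1.0, None
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            patch = [row[x:x + tw] for row in frame[y:y + th]]
            p_mean = sum(map(sum, patch)) / (th * tw)
            p = [[v - p_mean for v in row] for row in patch]
            p_norm = sqrt(sum(v * v for row in p for v in row))
            denom = p_norm * t_norm
            score = (sum(pv * tv for pr, tr in zip(p, t)
                         for pv, tv in zip(pr, tr)) / denom) if denom else 0.0
            if score > best:
                best, best_pos = score, (x, y)
    return best_pos if best >= threshold else None
```

An exact copy of the template embedded in an otherwise blank frame scores 1.0 at its position; a frame without the marker returns None. Normalising by the patch and template means/norms is what makes the match tolerant to overall brightness changes.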