A Software Agent is an autonomous software program that acts on behalf of the user with minimal or no interaction. The concept has been around for some time but will become ever more important as we move into a truly programmable world and our digital footprint/dependency reaches a tipping point (for some – those checking their phones 155 times per day – it already has).
Around 2008 I built a little app for my wife to help her track how much water she drank; it was mildly successful but probably failed because the effort outweighed the feedback (reward). One plausible solution would be to ‘gamify’ it (essentially adding elements of competitiveness, social commitment, and a stronger variable reward for using the app – think Nike+) to help establish a habit. … but this is 2014 – computers should disappear where possible, freeing the user to concentrate on living rather than staring at a screen, i.e. Context-Aware/Anticipatory Computing. Which brings me to Hydrate, a prototype that automatically tracks a user’s consumption of water and nudges them if they have gone too long without a sip.
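The nudging rule itself is simple; a minimal sketch (the one-hour threshold is an illustrative assumption, not Hydrate’s actual setting) might look like this:

```python
from datetime import datetime, timedelta

def should_nudge(last_sip, now, threshold=timedelta(hours=1)):
    """Return True when the time since the last recorded sip exceeds
    the threshold. One hour is an illustrative choice only."""
    return (now - last_sip) > threshold

# e.g. last sip at noon, checked at 13:30 -> nudge the user
should_nudge(datetime(2014, 1, 1, 12, 0), datetime(2014, 1, 1, 13, 30))
```

The interesting part of the prototype is not this check but feeding it automatically, so the user never has to log anything by hand.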
The app ecosystem has exploded over the last few years, creating inefficiencies in app discovery. This is great for the platform vendors but not for developers (or users). There have been numerous attempts to improve the situation; some of the obvious ones are listed below (ignoring marketing tactics such as free-app-a-day):
- Vertical and specialised app stores (Amazon, Samsung, Sony, …)
- Cross promotional networks
- Review sites
- Increased categories
- Improved recommendations
- Third party apps (e.g. AppFlow, Appreciate, …)
This inefficiency is one major reason why Just-in-Time Interactions make sense, especially as the app model extends to other platforms (desktop, TV, SmartWatches, …).
Having been faced with this question (how to market a mobile app) many times, I thought I would make an attempt at making app discovery more relevant.
I recently came across a competition that sounded interesting – it is organised by IC Tomorrow, a UK Government technology board, and sponsored by Google Chrome (details here). The competition is to explore new ways of initiating web apps using new technologies; the brief is shown below:
“New technology can change the way mobile web games are shared and initiated between players. Typically, mobile web games (i.e. those run in a browser, not native apps) are started by typing in a web address on a mobile device and shared by sending the address to friends. More innovative ways to start games’ sessions between users on the web will increase the potential for games to engage and attract players. This challenge seeks the development of a new service or new interaction that will encourage players to start their games’ sessions with other users with as little effort as possible.”
One of the most immersive applications we have played with is Let’s Create! Pottery; it’s a zen-type game that allows its users to create pottery in an incredibly intuitive way.
One feature that contributes to this immersion is the responsive background: as the user pivots the device, the background pans slightly in response – similar to a parallax effect in that it gives the user a feeling of depth and realism.
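The effect boils down to mapping the device’s tilt to a small, clamped pan offset. A minimal sketch of that mapping (the sensitivity and clamp values are illustrative assumptions, not the game’s actual tuning):

```python
def parallax_offset(tilt, sensitivity=20.0, max_offset=30.0):
    """Map a normalised device tilt (-1.0 .. 1.0) to a horizontal
    background pan offset in pixels. The result is clamped so the
    edge of the oversized background image is never exposed.
    Both constants are illustrative assumptions."""
    return max(-max_offset, min(max_offset, tilt * sensitivity))
```

On a real device the `tilt` input would come from the gyroscope or accelerometer, smoothed (e.g. with a low-pass filter) so the background drifts rather than jitters.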
In this document we briefly explore the what, why, and how of sentiment analysis – let’s jump straight into it.
So what is Sentiment Analysis?
I would be surprised if you haven’t already come across the term, and you may know/use it already. It has become increasingly popular with the advent of big data in the context of social networks, and is a tool frequently used by brands to monitor and measure ‘chatter’ about themselves and/or their market.
Essentially it offers a way to extract and measure the opinion expressed in this chatter (normally filtered by a specific topic or channel). For example, if you wanted to know the general opinion of your brand, you could analyse all the tweets that mention it, tally the sentiment of each, and report the result.
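The tallying step described above can be sketched with a naive word-list approach – the word lists here are tiny illustrative assumptions; a real system would use a full sentiment lexicon or a trained classifier:

```python
# Tiny illustrative word lists -- assumptions for the sketch, not a
# real lexicon.
POSITIVE = {"love", "great", "awesome", "good"}
NEGATIVE = {"hate", "terrible", "awful", "bad"}

def score_tweet(text):
    """Classify one tweet as +1 (positive), -1 (negative) or 0 (neutral)
    by counting matches against the word lists."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return (score > 0) - (score < 0)

def tally(tweets):
    """Tally the overall opinion across a set of tweets."""
    counts = {"positive": 0, "negative": 0, "neutral": 0}
    for tweet in tweets:
        s = score_tweet(tweet)
        key = "positive" if s > 0 else "negative" if s < 0 else "neutral"
        counts[key] += 1
    return counts
```

Run over every tweet that mentions your brand, the resulting counts give exactly the kind of ‘general opinion’ report described above.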
For a more detailed explanation, check out Wikipedia.
Not a week goes by without an article about a new SmartWatch or some discussion about the future of wearable computing (rings, clothes, IoT, …). Having always been inspired by gadgets like Dick Tracy’s watch, I have kept a fairly close eye on this space and recently decided to jump in and build something to better understand the opportunities and constraints of building for a SmartWatch (or wearable tech in general). In this short article I’ll describe what I learned and present my opinion of some practical uses for a SmartWatch.
Update (2013-10-14): this has been implemented and will be available on Google Play by the end of the year. For those interested, the source code is available on GitHub – more information can be found on its site: www.findaplaymate.net.
Up until now, technology has isolated individuals, forcing its users to work with machines rather than people. But thanks to recent advancements we are starting to see a trend where technology becomes more natural, complementing our human (social and active) traits rather than conflicting with them. There is still a lot of work to be done, especially around how we interact with smarter environments/devices (ambient intelligence), and that will take time – but two components of this will become mainstream within the next few years: Personal Area Networks and a standardised Proximity Service Layer. Personal Area Networks are networks that connect personal smart devices, e.g. your watch connecting to your phone, with Bluetooth 4.0 being the ideal facilitator. The other, the Proximity Service Layer, refers to smart devices offering services to other devices in close proximity (very similar to PANs, and possibly leveraging the same technology). The big drivers here are Wifi Direct on the hardware side, and AllJoyn (from Qualcomm) and Chord (from Samsung) on the software side. For this to become a reality, an open standard (mimicking the success of the web) needs to be formed.
Bottom line – devices will start chatting to each other like girls at a dinner party.
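To make the proximity-service idea concrete, here is a toy sketch of the advertise/discover pattern such a layer implies. Everything here – the class, the 1-D distance model, the method names – is an illustrative assumption, not the AllJoyn or Chord API:

```python
class ProximityRegistry:
    """Toy model of a proximity service layer: devices advertise the
    services they offer, and peers discover services within range.
    The 1-D position model and all names are illustrative assumptions."""

    def __init__(self):
        self._adverts = []  # list of (device, service, position) tuples

    def advertise(self, device, service, position):
        """A device announces a service at its current position."""
        self._adverts.append((device, service, position))

    def discover(self, position, service, radius=10.0):
        """Return devices offering `service` within `radius` units of
        the querying device's position."""
        return [d for d, s, p in self._adverts
                if s == service and abs(p - position) <= radius]
```

A real implementation would replace the position check with radio proximity (Bluetooth LE range, Wifi Direct discovery) – which is exactly why an open standard for the advertise/discover handshake matters.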
No doubt you have noticed a common theme this year in how people interact with technology: it has finally reached a stage where our interaction with the digital world can be less artificial. Examples include gesture-controlled interfaces (such as the Kinect, Leap Motion, and Intel’s Perceptual Computing kit), touch, eye tracking, and voice. All are encompassed under a category of Human Computer Interaction (HCI) called Natural User Interfaces (NUI), whose goal is to make interfacing with devices, as the name suggests, natural.