Now dawning: the era of thinking digital signage


Over the years, advertisers have got progressively cleverer at how they appeal to us – and now, says Ian McMurray, leading-edge computer science is reshaping digital signage.

In the UK we are, it seems, collectively privileged to receive close to 4,000 million pieces of junk mail each year. In the US, it’s said to be 90 billion. (But let’s remember – an American billion is a whole lot less than a British billion…) In Australia, it’s probably around the 1,500 million mark. That’s a lot of trees being chopped down for no very good reason.

Why do we call it ‘junk mail’, though? Simply: because invariably, it offers us something we don’t want or need. But here’s the thing. If the Direct Marketing Association is to be believed, much less of it is junk than it used to be. The reason is that the companies that mail us are getting smarter – much smarter. They know much more about each of us than they used to – and that means they’re sending us information about things that they believe could actually be of interest or value to us. That’s very different from the historic ‘one size fits all’ model.

We’ve seen a similar trend in TV advertising. The commercials we see, in many cases, are now precisely targeted at the demographic that is known to watch each individual programme. In the UK, for example, it’s hard to watch TV during the daytime without being bombarded by ads for payday loan companies – whose cynical assumption is that you’re unemployed, and therefore short of cash.


And now we have the online equivalent. Facebook, for example, watches what I search for on the web – and sends me ads that relate to those searches. Which is annoying beyond belief if, as I do, you do a lot of research as part of your job. I don’t want to research an article on the alternative short stay accommodation phenomenon or on smart homes – and be inundated with Facebook ads for AirBnB and Control4. But I digress.

In all cases, of course, we’re talking about market segmentation in advertising. The art is to make those segments smaller and smaller with increasingly precise definitions. The optimum marketing segment is the individual – you. We’re talking about getting personal.

That brings us, in a roundabout way, to digital signage – which is, in many environments, just another facet of a marketing communication mix that may well include direct mail, TV promotion and web advertising. And: it’s following a similar trajectory – moving from broadcast to narrowcast.

In the same way as we got blasé about getting mail or watching ads on TV, we very quickly got blasé about digital signage, once the initial novelty had worn off. Sure, our attention is still drawn to particularly well executed campaigns or compelling images – but the goal of the industry has long been to do what other advertising media have done, which is to make it personal.

That saw the increasing development of interactive digital signage – advertising that attempted to treat us as individuals, and to create one-to-one relationships. Digital signage now routinely attempts to engage us with touch screens, QR codes, Bluetooth, NFC, beacons and so on. The signs want to talk to our mobile phones. (A sidenote: to be mobile phone-less these days is to be completely disenfranchised from the modern world in the same way as not having an Internet-connected PC was 10 years ago.)


Now, however, we have a new technology on the block that is forecast to take the personalisation of digital signage to a whole new level. That technology is artificial intelligence (AI).

AI is, as of course you know, the science of creating machines that can think as we do. It can probably be traced back to Alan Turing – a name revered in computing circles – and his 1950 proposal that a machine could be said to be intelligent if, in conversation with it, you couldn’t tell that you weren’t talking to another human being.

Progress in artificial intelligence was slow in the decades after Turing proposed his test, not least because the computer hardware needed to execute the required software wasn’t even close to powerful enough – but the field saw something of a renaissance in the 1980s with the advent of so-called Lisp machines (Lisp was a favoured AI programming language, and these machines were optimised to execute it) together with expert systems. It remained, however, a largely academic subject.

The ready availability of incredibly powerful hardware has, however, seen AI make a resurgence. And: not mainframe-sized hardware. Today’s graphics processing units (GPUs), originally designed for computer gaming, have a massively parallel architecture that lends itself readily to the execution of AI software. That’s why you’ll find companies like NVIDIA leading the charge in autonomous vehicles/driverless cars: the processing power necessary can easily be fitted behind the dashboard.

Those cars work – to grossly oversimplify – by ‘seeing’ everything that’s around them, being equipped with a set of rules – and taking action by applying those rules to suit the circumstances. That’s the sexy, ‘out there’ example of what it can do. But, increasingly, that ability to ‘see’ and to take the decisions we would take if we could see the same things, is being put to somewhat more mundane purposes.

Like digital signage. In effect, you put a camera into a digital signage screen, and embed AI software that can interpret what the camera sees in terms of the age, gender, actions and emotions of anyone viewing the screen – and change the on-screen content accordingly. The advertising is now reacting to your interaction with/reaction to it.
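The switching logic itself can be surprisingly simple: a rule table that maps the attributes the vision software estimates to a piece of creative. The sketch below illustrates the idea only – the attribute names, thresholds and campaign names are invented for this example, and in a real deployment the `ViewerProfile` would be filled in by a computer-vision model rather than typed by hand.

```python
from dataclasses import dataclass

@dataclass
class ViewerProfile:
    """Attributes a computer-vision model might estimate from the camera feed."""
    age: int       # estimated age in years
    gender: str    # "male" / "female" / "unknown"
    emotion: str   # e.g. "happy", "neutral", "disgust"

def select_content(viewer: ViewerProfile, current: str) -> str:
    """Pick the next creative to show, reacting to the viewer's profile.

    The rules below are placeholders: real campaigns and targeting rules
    would come from the signage network's content management system.
    """
    if viewer.emotion == "disgust":
        return "fallback_brand_spot"      # current creative is misfiring - swap it out
    if viewer.age < 30:
        return "streaming_service_promo"
    if viewer.age >= 60:
        return "cruise_holiday_promo"
    return current                        # no strong signal - keep playing

# Example: a 25-year-old viewer who looks happy
print(select_content(ViewerProfile(age=25, gender="unknown", emotion="happy"),
                     current="default_loop"))
```

Note that the screen reacts not only to who is watching (age) but to how they are reacting (emotion) – exactly the two feedback channels described above.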

A new approach

AdMobilize is, at heart, a computer vision company. It claims that its AI software is more than 90% accurate at detecting gender; can estimate the age of a viewer within ±10 years; and has a better-than-80% success rate at detecting happiness, anger, surprise, fear and disgust. It can identify each individual looking at a sign, and determine how long he/she was engaged with it.

The company isn’t alone, but is representative of a new approach to managing digital signage that can, in theory at least, dynamically adapt what it shows not only according to the age, gender and so on of a viewer – but also their reaction to it. If content creates an undesired reaction – or, perhaps worse, no reaction at all – it can be automatically replaced. ‘Machine learning’ – an offshoot/extension of AI – will allow it to progressively refine what it shows.
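That progressive refinement can be pictured as a simple multi-armed bandit: the sign tracks how each piece of creative performs – using some engagement signal such as how long viewers dwell on it – and gradually favours the winners while still occasionally exploring the rest. Everything in this sketch is a hypothetical stand-in: the creative names, the reward signal and the epsilon-greedy strategy are illustrative, not any vendor’s actual algorithm.

```python
import random

class ContentBandit:
    """Epsilon-greedy selection over a set of creatives.

    reward: any engagement signal, e.g. seconds of viewer dwell time.
    """
    def __init__(self, creatives, epsilon=0.1, seed=None):
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.counts = {c: 0 for c in creatives}    # times each creative was shown
        self.totals = {c: 0.0 for c in creatives}  # cumulative reward per creative

    def choose(self):
        if self.rng.random() < self.epsilon:       # explore occasionally
            return self.rng.choice(list(self.counts))
        # exploit: creative with the best average reward so far;
        # an unplayed creative scores infinity, so it always gets tried first
        return max(self.counts, key=lambda c:
                   self.totals[c] / self.counts[c] if self.counts[c] else float("inf"))

    def record(self, creative, reward):
        self.counts[creative] += 1
        self.totals[creative] += reward

bandit = ContentBandit(["ad_a", "ad_b"], epsilon=0.0, seed=1)
bandit.record("ad_a", 2.0)   # ad_a held viewers for 2 s on average
bandit.record("ad_b", 8.0)   # ad_b held them for 8 s
print(bandit.choose())       # with epsilon=0, picks the better performer: ad_b
```

Content that creates no reaction at all simply accumulates a low average reward and stops being chosen – the automatic replacement the paragraph above describes.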

A neat alternative example is a trial that was run in Tokyo. This used similar image recognition capabilities, not to see faces, but to see cars. Depending on the type of car passing by, content on the digital signage screen would change: advertising golf clubs to BMW drivers, for example, but vegan muesli to Prius drivers…

Of course, the success of AI-driven digital signage will depend on the human assumptions – like that last (tongue in cheek) generalisation – that underpin it. From that point of view, it’s no different from the assumptions made by advertisers about who watches TV in the afternoon, or direct mail companies who assume that, if I’m 60, I’m probably thinking about saving for my funeral. But: it’s early days (just as it is for driverless cars). AI has the potential to revolutionise digital signage in much the same way as it will one day revolutionise how we get from A to B.

The post Now dawning: the era of thinking digital signage appeared first on Connected Home – Trade.
