The consumer's control devices in the home entertainment and automation ecosystem have come a long way. Go back with me in the CE way-way-back machine, or if you will, my own personal TARDIS, and see what you remember and how it was improved and replaced.
Twisting dials was the norm for audio and video content selection not so long ago. Home control was a collection of knobs, switches and buttons. Going back to the 1980s and before, it was not uncommon to see telephone dials or pushbutton keypads used as the system control. Changing channels remotely? American television manufacturer Zenith introduced the wireless remote control in the mid-1950s.
Things have certainly changed since then, as IR, RF and IP have become the norm for communicating remote commands. At the same time, we've seen discrete "up/down" or toggle buttons give way to multifunction, programmable remotes. In many cases, touch screens have supplanted buttons altogether, and even product-specific touch screens have given way to app-based systems that ride atop a smartphone or tablet. Perhaps somewhere in the future we'll have the ultimate in control: direct telepathic systems, where the user simply has to think about something and it will occur.
Yet it is not always a straight line forward. Sometimes you need to reach back into the (not too distant) past to see that something previously discarded or passed over might well be worthy of reconsideration.
You may remember a few years back when there was a great deal of excitement over gesture control, as exemplified by Microsoft's Kinect accessory for Xbox. A good idea, but it didn't catch on with the gaming world, and despite its potential for non-gaming control applications, it never took off. However, in service to this month's theme: "Everything old is new again!"
Google looks to the past
That thought came to mind recently while watching the keynotes from Google's annual I/O conference, the yearly gathering where developers find out what is coming down the path from the search and software giant.
Yes, voice interfaces and voice control are now the hot topic, but in an interesting way, some forms of gesture control, combined with video recognition, may be on the way back. This time, however, it is the sensor that is moved or pointed at an object, not the user gesturing to the sensor.
At the top of the list is Google Lens, due for release sometime this year along with the rest of the items unveiled at I/O. Google Lens combines Google's vast knowledge and databases to merge video with "deep learning." Point it at an object and Lens will tell you what it is. Then, use the voice interface to ask where you can buy it; Lens can point you to the physical or online location of a merchant and, for online transactions, perhaps even complete the sale. Integration with Google Photos will let the user take a picture, open it later, tap on a phone number visible in the image and have the phone call that number.
Or, point your smartphone at a store sign in a language you don't understand. The combination of Google Lens and Google Translate will tell you what the sign says, and Google Assistant can then begin a conversation about what you've found. Our favorite use case: point Google Lens and Google Assistant at a product such as a router. Google will recognize it, know its location and return an image of the label on the bottom of the unit so you can read the MAC address and other similar information.
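Google has not said exactly what Lens will expose to developers, but the same flavor of recognition is available today through Google's Cloud Vision REST API. Here is a minimal sketch of the router use case described above; the API key and image file name are placeholders, and the labels returned will of course vary:

```python
import base64
import requests

API_KEY = "YOUR_API_KEY"          # placeholder -- substitute a real Cloud Vision key
IMAGE_PATH = "router_label.jpg"   # hypothetical photo of the label on a router

# The Vision REST API expects the image as a base64-encoded string.
with open(IMAGE_PATH, "rb") as f:
    content = base64.b64encode(f.read()).decode("utf-8")

# Ask for object labels plus any readable text (e.g., a MAC address).
body = {
    "requests": [{
        "image": {"content": content},
        "features": [
            {"type": "LABEL_DETECTION", "maxResults": 5},
            {"type": "TEXT_DETECTION"},
        ],
    }]
}

resp = requests.post(
    "https://vision.googleapis.com/v1/images:annotate",
    params={"key": API_KEY},
    json=body,
)
resp.raise_for_status()
result = resp.json()["responses"][0]

# What does the service think the object is?
for label in result.get("labelAnnotations", []):
    print(f"{label['description']}: {label['score']:.2f}")

# And what text could it read off the label?
if result.get("textAnnotations"):
    print("Text found:", result["textAnnotations"][0]["description"])
```

This is only a stand-in for whatever Lens ultimately ships with, but it shows how little glue code the "point the camera and ask" workflow really needs.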
Move to gesture control
As mentioned, gesture control came to the broad-based market with Kinect and, to a lesser degree, with the Move "wands" for Sony PlayStation, but don't count it out. VR, and perhaps to an even greater extent AR, will also benefit from the new construct of a device viewing the world and using the same sort of visual motion information as Lens to build a model of the scene, determine what it is, and then use that information for gaming or AR applications.
This part of the Google app ecosystem is called Tango. It is about more than telling the device how to react to what is going on in front of it; the user moves their head, and with it the goggles, glasses or phone, to merge location, space and context. An example here would be to use a sensing device to measure a table or chair, display its dimensions and then "place" it in an AR- or VR-generated room. Or, walk around a space with a Tango-equipped phone and see a "heat map" of the WiFi strength and coverage so that you can optimally place an access point. Again, Motion Tracking, Depth Perception and Area Learning, combined with deep learning, give what we used to call "gesture control" a new life and a place in the tool kit for software and hardware developers. That, in turn, leads to new products and applications for us to put in our quiver of system sensing, control, and activation options.
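Tango's actual APIs are Android-specific, but the furniture-measuring example above reduces to simple geometry once Depth Perception hands you 3D points. A minimal sketch, assuming the two table-corner coordinates (in meters) have already been pulled from a depth camera's point cloud:

```python
import math

# Hypothetical 3D points (x, y, z, in meters) that a depth camera
# might report for the two ends of a table edge.
corner_a = (0.12, -0.34, 1.80)
corner_b = (1.27, -0.31, 1.95)

def distance(p, q):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

print(f"Table edge: {distance(corner_a, corner_b):.2f} m")  # -> Table edge: 1.16 m
```

The WiFi heat map works on the same pattern: pair each position that Motion Tracking reports with a signal-strength reading, then bin the results onto a floor-plan grid.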
Final thoughts
So where does this leave us? The moral for the month could also be stated as "Don't throw the baby out with the bathwater." Before you condemn a technology (though not necessarily the specific products that originally used it) and switch to the "latest and greatest," think about whether that prior technology is something that will come back sooner rather than later in an updated form.
In the current market context, with so much attention on voice control, the best summary we can provide is to look at the broader, long-term, all-encompassing picture whenever something new appears. Yes, sometimes past or current technologies will be rendered totally obsolete. More frequently, the lessons learned in the past will circle back around in an updated form, in combination with the latest and greatest, to deliver better results. Sometimes, "everything old IS new again"!