Andrew Hsu: The Invention of TOUCH & What’s Next

Bill Gates stood on the stage at the (now-defunct) Comdex show in Las Vegas in 2000 with his schoolboy smile touting the new “tablet PC.”  Penned on the tablet in Bill’s handwriting was “Tablet PC is SUPER COOL!”  Behind the stage a backlit sign read “experience the evolution”.

Microsoft’s evolution never became a revolution because the company’s disparate and factional divisions failed to work together to envision and implement a turnkey experience.

The revolution happened in 2007 with the launch of the iPhone.

(As with most industries) evolution is often interrupted by black-swan revolutions. Sound (voice communications), touch (pinch-and-zoom navigation) and sight (the Heads-Up Display [HUD]) all changed the way consumers used the phone, and each has been a gating factor in technology adoption.

Knowing what technology will help us evolve and what technology revolutionizes is more of a human insight than a science. Ergonomics help us rearrange the digital furniture; however, changing the way we connect with this communication device is profoundly human. What is beyond touch? What is the next revolution?

A Short History of Touch

Although Gates told reporters off stage in Las Vegas how excited everyone in Redmond was (developers were checking the tablet out to play with it – “a very good sign,” he said), six months later warehouses were still full of the tablets. Q2 shipments had plummeted 25%, with a meager 100,000 total units sold.  Mike Magee, technology writer for The Inquirer, wrote despondently that “This is another classic case of IT firms thinking they know what technology people will like, and failing to take off the blinkers.”

Touch first appeared back in 1971. Over the following decade it began to take the form of infrared technology (used in products such as the Hewlett-Packard 150), which showed up in various military applications. A matrix of IR beams was used to detect a finger touching the screen.

But IR technology was expensive, and the technology that gained more mainstream adoption was “resistive touch”.

It was a simple concept. Resistive touch screens were built using two layers of conductive material (Indium Tin Oxide). The two layers were separated by a small pocket of air. An action was triggered when a stylus, or other object, pressed the top layer into contact with the bottom layer.
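The position sensing described above can be sketched as a pair of voltage dividers: the controller drives one conductive layer, reads the voltage where the pressed layers meet, then swaps layers to get the other axis. The following is a minimal, hypothetical illustration – the ADC resolution and screen dimensions are illustrative, not taken from any specific touch controller.

```python
# Hypothetical sketch of 4-wire resistive touch sensing.
# Pressing the top layer onto the bottom one forms a voltage
# divider on each axis; the ADC reading is proportional to
# the position of the contact point along that axis.

ADC_MAX = 1023  # illustrative 10-bit ADC full-scale reading


def resistive_position(adc_x, adc_y, width=240, height=320):
    """Map raw per-axis ADC readings to screen coordinates."""
    x = adc_x / ADC_MAX * width
    y = adc_y / ADC_MAX * height
    return x, y
```

Note that a single divider reading per axis yields exactly one (x, y) point, which is why two simultaneous fingers on a resistive panel collapse into one ambiguous reading.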

The limitation was that it behaved like a pin board. You could tell the device where you moved the point of contact, but it did not have the multi-touch functionality essential to pinch-and-zoom navigation.

Mass-market adoption was not an option:

  1. The screen wore out
  2. It required a stylus for accuracy
  3. The air pocket made the screen appear hazy
  4. OEMs had to build a clunky hole in the casing (as the top of the resistive sensor had to be exposed to the user’s input)

This is the technology that Bill Gates was holding up at Comdex in 2000*. The unit’s resistive touch stylus was used to enter text and commands into clunky dialogue boxes. The entire project was “resistive”. The Office team refused to build for the unit, adding to the painful UX.

*[A technogeek aside: Microsoft’s Surface touch solution uses Frustrated Total Internal Reflection (FTIR)]

Meeting Andrew Hsu

In 2013, I ran an event on connected screens in New York. I wanted to tell a story about the importance of the screen in the evolution of mobile phone design and adoption. I invited Professor Donnell Walton from Corning Glass as well as representatives from Microsoft’s Surface team and Google Glass, and was looking for a speaker to explain “touch”.  Maybe I could locate someone from the scuttled Apple Newton team?

I found, much to my surprise (like an anthropologist who finds that we did not evolve directly from monkeys), that the precursor to the 2007 Apple iPhone was a skunk-works project headed up by an engineer called Andrew Hsu.

Andrew developed and patented a capacitive touchscreen suitable for mobile devices way back in 1999. He developed a system that computes the location of a user’s fingers based on how they change the capacitance values of an invisible matrix of electrodes.  The capacitive touchscreen did not suffer from the various user-experience drawbacks of the resistive touchscreen – it does not wear out, it does not cloud the underlying display, and it does not require a big hole to be cut into the device casing.  But most importantly, it enables natural finger input.
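One common way such a system can locate a finger – a hypothetical sketch, not Hsu’s patented method – is to scan the electrode matrix, subtract a no-touch baseline, and take the weighted centroid of the cells whose capacitance change exceeds a threshold. The grid values and threshold below are illustrative only.

```python
# Hypothetical sketch of locating a finger on a capacitive
# electrode matrix. `deltas` is a 2-D grid of baseline-subtracted
# capacitance readings; touched cells show large positive deltas.

def find_touch(deltas, threshold=10):
    """Return the (row, col) centroid of the touched region, or None."""
    total = x_sum = y_sum = 0
    for r, row in enumerate(deltas):
        for c, d in enumerate(row):
            if d > threshold:  # cell is being touched
                total += d
                y_sum += r * d
                x_sum += c * d
    if total == 0:
        return None  # no touch detected
    # Weighted centroid gives sub-cell resolution
    return y_sum / total, x_sum / total
```

Because the whole matrix is scanned, several distinct regions of raised capacitance can be segmented and tracked at once – the property that makes multi-touch gestures such as pinch and zoom possible.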

This capacitive touch is not a mouse click. It is not a data poke with a stylus. Andrew Hsu’s touch allowed us to communicate in a very human way, pointing at and pinching space.

Don Norman is often quoted about touch.

“We’ve lost something really big when we went to the abstraction of a computer with a mouse and a keyboard, it wasn’t real . . . swiping your hand across the page  . . . is more intimate. Think of it not as a swipe, think of it as a caress.”

While mobile success is almost always based on interface and usability, it took seven years for Andrew Hsu to convince the industry to adopt the technology. Revolutions come in simple packages: text messaging, Apple’s mobile application SDK, gesture-based gaming.

We talk about the consumerization of technology; touch was the humanization of technology. In a world where data appeared cerebral and uninviting, we could suddenly interact with data and content as we do with real objects. The digital world became extensible and less scary.

From Click to Pinch & Zoom

In 2006, handset manufacturer LG trialled capacitive touch with its designer Prada phone. The LG phone had all the correct ingredients – a capacitive touchscreen for intuitive finger input, a high-resolution display, and one of the first graphics co-processors in a handset. Prada brought style to the table, and LG brought the insight that touch would ultimately inspire the new mobile consumer.

But we had to wait one more year.

When Jobs returned to Apple he shut down the Newton project.  This legacy 1993 technology had poor handwriting recognition and little traction in the market. But Andrew Hsu’s capacitive touch appealed to Steve Jobs’s UI sensibilities.

As a post-Newtonist, Jobs once said “we are born with five styluses on each hand”.

When he introduced the iPhone, we knew that being able to move large-format data on a small screen with a pinch and zoom changed the way the consumer saw their mobile device.  Where Steve Jobs went further than touch was his insight in designing a full edge-to-edge screen that had the dimensions of a letter-size piece of paper.  The screen called out to be touched, worked on and paged through.

Although touch revolutionized the phone and lines wound around the block for new releases of Apple’s new “human” interface, the consumer was still nose-to-screen, bumping into lamp posts while elegantly navigating data a hundred miles away.

“Bump” (the file-exchange application recently acquired by Google) and other applications, including NFC payments, extended this love of the tactile interface by promoting social touch between phones and public devices such as POS terminals.

Gesture: Moving Beyond the Cool?

While touch is an important sense, sight is essential for navigation. The next revolution is to make data come to life seamlessly in the real world.

When we talk about HUD, we think of the new Google Glass and the opportunity to integrate data into our line of sight – to see, in parallel, the world and the data behind it. Integrated cyborg solutions range from Google Glass to future visions of embedded epidermal circuits (as seen in Total Recall).

Microsoft had the lead in a new HUD interface using gesture.  Xbox Kinect was the one product in which Microsoft was seeing growth in the consumer sector. However, the leviathan was unable to make this a multiscreen strategy fast enough.

Moving gesture elegantly to PCs and Windows Phones never happened. There is a Kinect for Windows, but it lacks the software for controlling the interface.

The Leap Motion controller is a step forward: a small multiscreen sensor box not tied to a console in the den, with the ability to tether like a dongle to a wide variety of screens and deliver better sensitivity than Kinect. It supports multiple commands with finger-level accuracy.

Andrew Hsu still believes that touch is less ambiguous about the consumer’s navigation intent. “How can you disambiguate between ‘accidental’ and intentional gestures?  The beauty of touch interaction is that you basically get user intent for ‘free’ – a user typically only touches the device when he/she wants to interact with it.  The cases of accidental activation are much lower and easier to reject.”

Arguably HUD is a solution looking for a problem. Like the inspired Segway, the inventor’s goal was to develop an urban consumer transport vehicle, but he failed to get significant adoption. The Segway has now found a home with urban tour groups and airport police. Why? It provides an elevated view with minimal multitasking: ideal for tourists and law enforcement.

Andrew agrees: “What these technologies really need to address is what sort of ‘problem’ they are trying to solve.  That is, with capacitive touchscreens, there were certainly a number of value propositions that arguably were superior to the previous (resistive) solution that helped transform/enable touch input.  Natural gestures (HUD) are still looking for a compelling value proposition.”

Google Glass is a platform without a certain home. While “super cool”, it has not inspired the consumer. We have not seen the “a-ha!” that Jobs brought to touch. We know new, more intuitive human interfaces are coming. But we need a Steve Jobs to take the technology and humanize it for intuitive consumption.

Gary Schwartz is the CEO of Impact Mobile. Having been at the frontlines of the mobile industry for over a decade, Gary is the author of two books, “The Impulse Economy: Understanding Mobile Shoppers” and “Fast Shopper. Slow Store: A Guide to Courting and Capturing the Mobile Consumers,” both of which highlight the current state of the mobile commerce space and chronicle the significant impact that mobile is having on consumers, retailers and brands. Gary is also a chair emeritus for the Interactive Advertising Bureau and the Mobile Entertainment Forum NA and global director of the Location Based Marketing Association.

BNN Interview: Challenges with Twitter’s biz model pre-IPO

BNN 5 minutes: The Business News : October 4, 2013 : Challenges Facing Twitter’s Business Model and its Upcoming IPO [10-04-13 12:20 AM]

  • Positive: Twitter is NATIVELY mobile and will not face the same questions that Facebook did at its IPO – i.e. what is your mobile strategy?
  • Positive: Twitter has an owned-content advertising model which is less impression based and more brand ENGAGEMENT. 
  • Challenge: Twitter is a social aggregation hub. We see lots of auto-tweeting from third-party sites without visits to the social platform. (See page 61 in their S-1 filing.)  Referred to as blind tweeters (syndicated from other sites), these are a big slice of their user base.  This is an impression-based advertising challenge.
  • Challenge: Twitter is an advertising company – specifically, a mobile advertising company, with 65% of revenue coming from the small screen. The mobile advertising space is in a bubble. Same old story: big growth in revenues, but no profits. The breaking bubble is evident in Jumptap’s exit to Millennial Media.
  • Challenge: Although the US is Twitter’s home it needs global growth. There will be global pressure from competitors like SINA WEIBO in China and LINE in Japan.

http://watch.bnn.ca/#clip1017318