If there is one downside to adapting to touch interfaces, it is that they lack a tactile response telling us exactly what we are doing. I do not mean the haptic vibration of resistive screens, or the "tap" felt when typing on a virtual keyboard, but feedback tied to all kinds of actions, feedback that can convey, for example, movement.
Attempts have been made to build such feedback into touch panels using various technologies, such as electrostatic fields that add a sense of "roughness" by creating textures on the surface of the screen, or magnetic fluids under the panel that raise on-screen keyboards into physical relief. But these efforts seem to have gone dormant, or at least are not advancing quickly enough.
NEC has chosen a different approach to this challenge: the Japanese company, together with the Tokyo Institute of Technology, is looking for a simpler way to add sensory responses to touch-screen interaction, rather than building a complicated system like those mentioned above.
Their solution is to attach a very thin wire to each corner of the display. What does that accomplish? When pressure is applied to the screen, the device responds by tensioning one or more of the wires depending on the direction of the press, so the panel physically shifts (very slightly) and produces an identifiable response. This can, for example, help us locate an object on screen, or react when something in the image collides with "the edges", with far greater precision than the simple vibration we are used to.
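To make the idea more concrete, here is a minimal Python sketch of how a desired feedback direction might be translated into per-corner wire tensions. It is purely an illustration under assumed names and scales (the `CORNERS` layout, the 0.0-1.0 tension range, and the `apply_wire_tensions` stub are all hypothetical), not NEC's actual control scheme, which has not been published.

```python
import math

# Hypothetical corner layout: unit-ish vectors pointing from the panel's
# centre towards each corner wire's anchor point.
CORNERS = {
    "top_left":     (-1.0,  1.0),
    "top_right":    ( 1.0,  1.0),
    "bottom_left":  (-1.0, -1.0),
    "bottom_right": ( 1.0, -1.0),
}


def wire_tensions(direction_deg, strength=1.0):
    """Map a desired feedback direction (degrees) and strength (0..1)
    to a tension value per corner wire.

    A wire is pulled harder the better its corner aligns with the
    direction the panel should shift towards; wires pointing away
    from that direction stay slack.
    """
    rad = math.radians(direction_deg)
    dx, dy = math.cos(rad), math.sin(rad)

    tensions = {}
    for name, (cx, cy) in CORNERS.items():
        # Normalise the corner vector, project the feedback direction
        # onto it, and clamp negative values to zero (slack wire).
        norm = math.hypot(cx, cy)
        alignment = (dx * cx + dy * cy) / norm
        tensions[name] = max(0.0, alignment) * strength
    return tensions


def apply_wire_tensions(tensions):
    """Stand-in for the hardware interface; a real device would drive
    actuators here instead of printing."""
    for name, value in sorted(tensions.items()):
        print(f"{name:13s} -> tension {value:.2f}")


if __name__ == "__main__":
    # Example: an on-screen object hits the right edge, so the panel
    # nudges towards the right (0 degrees) at 80% strength.
    apply_wire_tensions(wire_tensions(direction_deg=0, strength=0.8))
```

The interesting design point is that the relative tension across the four wires, rather than a single undifferentiated vibration, is what encodes the direction of the feedback.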
DigInfo has posted a video showing the technology in action, in which its creators explain it in fairly simple terms and demonstrate how the panel works with several practical examples. In one, a "ball" sits in the centre of the screen while many objects collide with it, and you can tell the direction each one comes from simply by keeping a finger on the surface of the screen.
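As a small, hypothetical complement to the sketch above, that ball demo can be thought of as deriving the feedback direction from the velocity of whatever strikes the ball; the helper below is invented for illustration and is not taken from the demo itself.

```python
import math

def felt_direction(vx, vy):
    """Return the angle (degrees, 0 = right, 90 = up) from which an
    object arrives at the central ball, i.e. the direction the panel
    would nudge the finger towards."""
    # The impact comes from the opposite of the object's velocity.
    return math.degrees(math.atan2(-vy, -vx)) % 360.0

# Example: an object flying left and slightly downward came from
# roughly the right / upper-right of the ball.
print(felt_direction(vx=-6.0, vy=-2.0))  # ~18.4 degrees
```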
The system, one of the most interesting shown so far, is still at a relatively early stage of development, and the current prototype is extremely large and bulky, far too much for its use in mobile devices such as tablets or smartphones to be considered right now.
It would, however, be a very interesting system for desktop PCs, touch-enabled all-in-ones, or even navigation systems. Another point worth noting is that it seems quite noisy (relatively speaking) and appears to require optical sensors inside the device, so it could not be implemented without bezels around the screen, or with glass covering the panel.