US8614669B2 - Touchless tablet method and system thereof - Google Patents

Touchless tablet method and system thereof

Info

Publication number
US8614669B2
US8614669B2 (Application No. US 11/683,416)
Authority
US
United States
Prior art keywords
finger
touchless
ultrasonic
location
action
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
US11/683,416
Other versions
US20070211031A1 (en)
Inventor
Boillot Andre Marc
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Navisense LLC
Original Assignee
Navisense LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Navisense LLC filed Critical Navisense LLC
Priority to US11/683,416
Assigned to NAVISENSE, LLC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BOILLOT, MARC
Publication of US20070211031A1
Application granted
Publication of US8614669B2

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/0304Detection arrangements using opto-electronic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/042Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
    • G06F3/0425Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means using a single imaging device like a video camera for tracking the absolute position of a single or a plurality of objects with respect to an imaged reference surface, e.g. video camera imaging a display or a projection screen, a table or a wall surface, on which a computer generated image is displayed or projected
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/043Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means using propagating acoustic waves

Definitions

  • the present embodiments of the invention generally relate to the field of computer accessories, more particularly to user interfaces.
  • Tablets are devices which allow a user to visualize electronic text on a display.
  • a user can interact with the tablet by touching the tablet with a stylus.
  • a user can fill out an electronic form by writing on the tablet much like writing on a piece of paper using a pen.
  • the tablet can detect a physical pressing of the stylus on the screen and interpret the information written by the user.
  • Tablets can vary in size but are generally approximately the size of a sheet of paper.
  • a tablet can include a touchscreen which can be physically touched by a user. A user can touch the touchscreen in a manner similar to the way a stylus is used: that is, through a physical interaction.
  • Tablets, touch screens, and touch panels are typically pressure-sensitive (resistive), electrically-sensitive (capacitive), acoustically-sensitive (SAW—surface acoustic wave) or photo-sensitive (infra-red).
  • Such displays can be attached as accessory devices to computers or communication devices. Tablets can be considered an electronic substitute to the pen and paper.
  • Although tablets have proven useful, some consumers are interested in smaller display devices that are more portable. A need therefore exists for a compact device which provides improved portability and use.
  • the touchless tablet can include a sensing unit for identifying a touchless finger action above a form, and a controller that associates the finger action with at least one form component in the form.
  • the touchless tablet can identify a selection of a form component based on a location and action of the finger above the form.
  • the touchless tablet can further include a projecting element for presenting a visual layout of graphical components corresponding to the form components.
  • the projecting element can be a camera to capture a layout of form components on a form sheet.
  • the finger action can include at least one of a touchless depressing action, a touchless release action, a touchless hold action, and a touchless dragging action.
  • the touchless tablet can be communicatively coupled to a display for exposing a graphical application, such that a touchless selection of a form component corresponds to a selection of a graphical component in the graphical application.
  • the selection of a form component can perform an action on the form object that produces a response from a graphical component in the graphical application.
  • FIG. 1 illustrates a first embodiment of a touchless tablet in accordance with the inventive arrangements
  • FIG. 2 illustrates a second embodiment of a touchless tablet in accordance with the inventive arrangements
  • FIG. 3 shows a form for use with the touchless tablet in accordance with the inventive arrangements
  • FIG. 4 illustrates a first embodiment of a touchless screen in accordance with the inventive arrangements
  • FIG. 5 illustrates a second embodiment of a touchless screen in accordance with the inventive arrangements.
  • FIG. 6 illustrates a virtual touchscreen in accordance with the inventive arrangements.
  • a or an, as used herein, are defined as one or more than one.
  • the term plurality, as used herein, is defined as two or more than two.
  • the term another, as used herein, is defined as at least a second or more.
  • the terms including and/or having, as used herein, are defined as comprising (i.e., open language).
  • the term coupled, as used herein, is defined as connected, although not necessarily directly, and not necessarily mechanically.
  • program, software application, and the like as used herein, are defined as a sequence of instructions designed for execution on a computer system.
  • a program, computer program, or software application may include a subroutine, a function, a procedure, an object method, an object implementation, an executable application, an applet, a servlet, a source code, an object code, a shared library/dynamic load library and/or other sequence of instructions designed for execution on a computer system.
  • touchless sensing is defined as sensing movement without physically touching.
  • activating or activation is defined as enabling, disabling, or moderating a control.
  • An activation cue can be a physical motion such as a finger movement, hand gesture, or a spoken utterance.
  • touchless sensory field or sensory space in this disclosure, can be defined as either a sensing field or a viewing field.
  • sensing field can be defined herein as a region that is monitored for changes such as changes in acoustic pressure level, for example, a sound wave.
  • viewing field can be defined herein as a region that is monitored for changes such as changes in intensity, such as light, for example, pixel intensity.
  • the touchless tablet 100 can include a sensing unit 110 , a processor 120 , and a controller 130 .
  • the touchless tablet 100 can connect to a communication device hosting the display 140 through a wired or wireless connection.
  • the display 140 can present graphical components 122 - 124 associated with form components 112 - 114 on a form 111 .
  • the sensing unit 110 can produce a touchless sensory field above the form 111
  • the processor 120 can detect finger movement within the touchless sensory field over the form 111 .
  • the controller 130 can identify finger actions applied to form components in the form 111 and translate the finger actions to components in the graphical application 121 .
  • the sensing unit 110 can include an off-axis sensory element 117 which can further detect touchless movements and which can also capture a picture of the form 111 , or project an image on the form 111 .
  • the touchless tablet 100 can detect finger movement above the form, and identify a location of the finger within the form 111.
  • a user can position a finger over a form component, and the touchless tablet 100 can identify a location of the selected form component corresponding to a detected location of the finger.
  • the user can also perform a touchless action on the form component, such as a touchless push action, a slide action, a drag action, a hold action, and a release action.
  • the user can select form component 112 by positioning the finger above the component 112 and pushing down. This can activate the corresponding graphical component 122 on the display 140 .
  • the user can slide the finger, while held down, to a new location 113, and lift the finger up.
  • the sensing unit 110 can detect the finger push, hold, and release actions and move the graphical components 122 - 124 corresponding to the form components 112 - 114 on the display 140 .
  • the sensing unit 110 can detect constant velocity movement and accelerated movement to identify finger actions.
  • a user can download a form layout to the processor 120 , or computer hosting the graphical user interface 121 .
  • the form layout identifies locations of the form components in the form 111 .
  • the user can position the form 111 within the touchless sensory field of the sensing unit 110.
  • the form 111 can be a sheet of paper containing printed material or graphical components at locations known to the touchless tablet 100 , based on the downloading of the form, or to a computer system communicatively coupled to the touchless tablet.
  • the form 111 can be a sheet of paper, a presentation, or a brochure.
  • the processor 120 or computer, can then associate locations in the touchless sensory field with the locations of form components.
  • the sensing unit 110 can capture a picture of the form 111, and the processor 120 can identify locations of the form components from the picture, if the form is not downloaded.
  • the sensing unit 110 may not have information with regard to the location of the form components in the form 111 . Accordingly, the sensing unit 110 is limited to only identifying an absolute location of the finger within the touchless sensory field, such as an absolute coordinate.
  • the touchless sensory field can be demarcated by bounds along two dimensions, such as an X and Y direction.
  • the sensing unit 110 can estimate a location and relative movement of a finger within the bounds of the touchless sensory field. When the form sheet is correctly positioned within the touchless sensory field, a one to one correspondence of finger location can be determined with the location of the form.
  • the form sheet can be 8×11 inches and positioned such that the top of the form just touches the vertical aspect of the sensing unit 110, and the side of the form just touches the horizontal aspect of the sensing unit 110.
  • a detection of the finger within the touchless sensory field can be correlated to a position on the form 111 when correctly positioned.
  • the alignment provides a one-to-one correspondence of the finger location relative to the position of the form.
  • the off-axis element 117 can project a bounds of the frame to aid the user in identifying the placement of the form.
  • a LED or laser light source can project the outline of a standard size form on a surface, such as a desk.
  • such an arrangement only provides limited information for interfacing with the touchless tablet.
  • the touchless sensing unit can project a virtual layout of virtual components corresponding to form components already printed on the form 111 .
  • the form sheet 111 may be a printed publication with already existing form elements, such as a menu or list.
  • the form elements 112 can be identified by coordinates.
  • a list of form coordinates can be generated specifying the form component and its location on the form sheet. This list can be provided to the sensing unit 110 .
  • the off-axis element 117 can generate a virtual layout of virtual components that correspond in location with the printed form components given the positional information in the list. Understandably, in this arrangement, the sensing unit 110 is given positional information of the printed form components on the form.
  • the sensing unit can then create a virtual layout based on the provided location of the form components.
  • the virtual layout can be projected by the off-axis element 117 onto the form sheet.
  • the off-axis element 117 can be a camera to capture an image of the form 111 .
  • the camera can also capture a location and motion of a finger action.
  • the graphical components on the form sheet can be identified from the image as well as the finger location and movement.
  • the coordinates of the form components from the image can be determined, and a virtual layout of virtual components corresponding to the coordinates of the form components can be created.
  • Hidden attributes can also be embedded within the image for identifying form components to associate with graphical components. That is, the processor 120 can identify hidden attributes for identifying a location and function of a form component. For example, a form manufacturer may place watermarks within the form which can identify the location, function, and method of finger action required to interact with the form component.
  • the touchless sensing unit 110 can project a visual layout of graphical components on the form. Whereas a virtual layout is not visible, a visual layout is visible.
  • the form may be a blank sheet of paper or a flat surface.
  • the off-axis element 117 can project a visual layout of graphical components on a blank form, or a table surface.
  • the off-axis element 117 can be a small light projector that produces a light field of components on the form.
  • the visual display can be a fixed presentation of form elements on the form, or it can be a changing presentation of form elements.
  • the sensing unit 110 which produces the visual layout knows the location of the graphical components within the visual layout when projected on the form. Accordingly, a location of a finger detected within the touchless sensory field can be correlated to a graphical component in the visual layout.
  • the processor 120 can identify a location and action of the finger and generate a coordinate object.
  • the controller 130 can convey the coordinate object to the graphical application 121 which can then control graphical components according to the coordinate object.
  • the graphical application 121 can implement a sensory Applications Programming Interface (API).
  • the sensory API can include functions, methods, and variables which provide communication access to the touchless sensing unit 110 .
  • the touchless tablet 100 can provide a virtual user interface (VUI) to the graphical application 121. This allows a user to control the graphical application 121 using the touchless tablet 100.
  • the touchless sensing unit 110 does not include one of the approximately perpendicular sensing arrays.
  • the sensing unit 110 can detect movement within a touchless sensory field above the form 111.
  • the sensing unit 110 can detect forward and backward movement as well as a downward or upward finger movement.
  • the sensing unit 110 without the off-axis element 117 is capable of identifying two dimensional motion within a plane parallel to the form 111 .
  • the off-axis element 117 allows the sensing unit 110 to determine movement orthogonal to the plane; that is, upward and downward movement.
  • a user can position the sensing unit 110 at the top of a form sheet for generating a virtual sensory field over the form 111 .
  • a planar scanning field can be generated without the off-axis element to identify when a finger has obstructed the plane, for example using laser scanning.
  • a user can position a finger over a form component and perform a finger action to control a graphical component in the display 140 associated with the form component. The user need not touch the form.
  • a microphone can be included within the sensing unit 110 to acknowledge that a finger action has occurred.
  • the sensing unit 110 can also localize a sound of a finger tap.
  • an exemplary form 111 is shown.
  • the form 111 can include form elements 112 and 114 .
  • the form 111 can be presented on the display 140 .
  • the sensing unit 110 can be positioned along the form 111 to allow a user to select items on a menu or a list that will be recognized on the display 140 .
  • the sensing unit 110 can identify items selected by a user.
  • the sensing unit 110 can relay the identified items on the form to a server or a computer capable of processing the information.
  • a virtual screen 400 is shown by positioning the touchless sensing unit 110 on a side.
  • a user can interact with the graphical application 121 on the display 140 through the virtual screen 400 .
  • a form is not needed.
  • a user can move a finger along a horizontal and vertical direction parallel to the virtual sensory field 111 .
  • the user can push the finger forward to activate a virtual component 112, which in turn activates the corresponding graphical component 122 on the display 140.
  • the virtual screen may not be visible. Accordingly, a location of the user's finger within the virtual sensory field can be presented on the display 140 . This provides the user visual feedback for navigating around the application.
  • the relative movement can be detected in the virtual screen for identifying finger movement and location.
  • a backdrop can be placed behind the sensing unit 110 to provide a surface on which the virtual field can be projected.
  • the virtual sensory field can be produced as a hologram.
  • the sensing unit 110 includes two off-axis elements 117 and 118 which are positioned on an opposite side from that shown in FIG. 3 .
  • the touchless interface of FIG. 4 also provides 3D sensing of finger location and action.
  • a virtual touchscreen 600 is shown.
  • the virtual touchscreen 600 can include multiple touchless sensing units 110 arranged around the display 140 for providing 3D user interaction.
  • the touchless sensory field is produced over the display thereby allowing a user to use the display as a touchscreen, though, without requiring the user to touch the display, and providing a third dimension of depth interaction.
  • a method of navigation for a touchless tablet, a virtual screen, and a virtual touchscreen is provided.
  • the method can be practiced when the sensing unit 110 uses acoustic sensing elements for creating a sensing field, or when the sensing unit 110 uses optical sensing elements for creating a viewing field.
  • the method can include creating a three dimensional (3D) sensory space, determining a finger position within the 3D sensory space, determining one of a forward or retracting finger movement associated with the finger position, and controlling a user interface according to the finger movement.
  • the user interface may be a graphical user interface providing visual information or an audible user interface providing audible information.
  • the processor 120 can determine a two dimensional position of a finger within the touchless sensory field.
  • the processor 120 can also determine a first depth of a finger during a forward finger movement within the 3D sensory space. This can be the case when a user has engaged a form component 112 to produce an action on a graphical component 122 in the graphical application 121. The processor 120 can then determine a second depth of the finger during the forward movement within the 3D sensory space. The processor 120 can create a vector from the first and second depth, and predict a destination of the finger on the graphical application from said vector. The processor 120 can also determine a velocity and an acceleration for adjusting a direction of the vector. The processor 120 can also track a movement for identifying repetitive motion or reverse actions, such as moving forward and backward, for detecting a button press. For instance, the destination can be graphical component 122. (A minimal extrapolation sketch of this destination prediction appears after this list.)
  • the processor 120 can adjust a region of focus within the graphical application 121 based on the destination.
  • the region of focus can correspond to a higher resolution of sensing. For example, increasing a region of focus can correspond to increasing the pulsing rate of an emitter element. More pulses per second result in a finer ability to resolve a location or a relative displacement.
  • a region of focus can also increase attributes of a graphical component. For example, as a user pushes increasingly downward, for emulating a push button action, a graphical component can increase in size to provide visual feedback to the user that the graphical component will be activated. The graphical components identified as targets can be visually distorted to let the user see which component is being targeted as the destination.
  • an exaggerated zoom lens can be applied to the graphical component indicating that it is being selected.
  • adjusting a region of focus includes one of accelerating and decelerating a movement towards the destination. This includes zooming in on the region of focus in view of the destination.
  • a cursor object can also be presented on the display that moves in accordance with finger movement within the touchless sensory field.
  • the cursor can change in color, a size, a shape, or behavior to provide visual feedback.
  • the graphical application 121 can present a distorted image of an object on the display that moves in accordance with the finger movement within the 3D sensory space.
  • the graphical application 121 can receive coordinate information concerning movement of the finger within the touchless sensory field to visually identify navigation.
  • a distant depth of the 3D sensory space corresponds to a broad region of focus
  • a close depth of the 3D sensory space corresponds to a narrow region of focus.
  • a region of focus can broaden when the user's finger is farther from a sensing unit producing the 3D sensory space, and the region of focus can narrow when the user's finger is closer to the sensing unit.
  • the graphical application 121 can present the destination and the region of focus on the display to provide visual feedback.
  • a method of control in the virtual screen includes producing a touchless sensory field, and adjusting the strength of the touchless sensory field for adjusting a detection of a finger within the touchless sensory field.
  • the method can be practiced when the sensing unit 110 uses acoustic sensing elements for creating a sensing field, or when the sensing unit 110 uses optical sensing elements for creating a viewing field.
  • Adjusting the field can include increasing or decreasing the amplification of the touchless sensory field.
  • the processor can increase or decrease the amplitude of an emitted pulse for adjusting the field strength.
  • the strength of the field determines the sensitivity and resolution for resolving location and measuring movement within the field.
  • a change in vertical and horizontal position of the finger controls a navigation within the graphical application 121 .
  • a change of forward or retracting movement can activate a graphical component 122 within the graphical application 121.
  • the processor 120 can track finger movement and smooth out perturbations in the track.
  • the processor 120 can determine at least one of an absolute location of the finger, a relative displacement of the finger, a velocity of the finger, a length of time the finger is at a location, and an acceleration of the finger.
  • the processor 120 can also set thresholds for identifying a finger action such as a touchless button press. For example, the required depth of a button press can be changed by adjusting the coordinate bounds for the button, or by changing the field strength.
  • the sensing elements of the sensing unit can comprise a microphone array system, a beam forming array, a three-dimensional imaging system, a camera system, a laser system, or any combination thereof for acquiring finger movement and a finger location for converting finger movement into a coordinate signal for navigating within a display as herein presented.
  • a base section can comprise ultrasonic elements
  • the off-axis element can be a camera element.
  • the sensing unit 110 can contain at least one array of ultrasonic sensors.
  • the sensors can create a three-dimensional sensing field that allows the sensing unit 110 to identify and track a location of an object within the sensing field.
  • the sensing unit 110 can employ principles of pulse-echo detection for locating the position of an object within a sensory field.
  • the intensity and sensitivity area of the sensing field depends on the signal strength of the signals pulsed out from the transmitters, and the ability of the sensing unit 110 to resolve echo signals.
  • a transmitter emits a high frequency and high energy pulse which may be reflected off an object.
  • a reflection signal will be generated from the scattering of the high energy pulse on the object. Reflection signals will be sent back towards the sensing unit 110 that can be captured by receiver elements (transducers). Notably, the signal is generally high frequency as less energy is dissipated during transmission and reception of a high frequency pulse in air. Ultrasonic transducers and special sonic acoustic transducers are capable of operating at high frequencies such as 40 KHz. In a pulse-echo system a pulsed signal is transmitted towards an object and a reflection signal is identified. A time of flight measurement describes the time expiring between when the signal was emitted and when the signal was received.
  • the touchless sensory field is an approximate three dimensional region wherein a finger movement can be identified by the sensing unit 110 .
  • the sensing unit is an ultrasonic sensing unit that emits a high energy pulse
  • the field of view corresponds to that region within which a reflected high energy pulse can be detected.
  • the field of view can be a function of the emitted pulse strength and the range (e.g. distance).
  • a user can move the finger within the field of view and the processor 120 can detect a position of the finger.
  • the sensing unit 110 can also include a timer for determining a length of time a finger is at a position.
  • the processor 120 in conjunction with the timer can determine a finger location and a finger action.
  • the processor 120 can also include a buffer for storing prior coordinate information.
  • a finger push action generally corresponds to a movement from a first position to a second position and then back to the first position. Accordingly, the processor tracks finger movement and compares the movement with prior history of movement for determining the finger action.
  • the sensing unit 110 can include at least one transmitter 102 and at least two receivers for transmitting and receiving ultrasonic signals.
  • the transducers can be omni-directional ultrasonic transducers, and the same transducers can provide dual transmit and receive functions.
  • the sensing unit can employ pulse-echo detection to estimate a range and position of an object within view of the sensing elements.
  • a transmitter in the sensing unit can emit a pulse shaped signal that reflects off an object which is detected by a receiver element in the sensing unit.
  • the receiver element can be coupled with a detector (e.g. processor 120 ) to detect a signal reflected off an object as part of the motion detection logic in the sensing unit.
  • the detector can include additional processing logic such as thresholds, comparators, logic gates, clocks, and the like for detecting an object's motion.
  • the sensing unit 110 calculates a position of the object causing the reflection by solving a set of geometric equations.
  • a single transmit and receive element pair along a same plane in the ultrasonic sensing unit calculates a first range (e.g. distance) of an object in the field of view.
  • a first transmit and receive pair on an x-axis can estimate a longitudinal range of the object (e.g. finger).
  • a second pair, arranged separately from the first pair, can estimate a second range.
  • the second pair estimates a latitudinal range of the object (e.g. finger).
  • the two range measurements establish a position (e.g. location) of the object causing the signal reflection by mathematically combining the geometrically related range measurements (see the coordinate sketch after this list).
  • the first range measurement establishes a x-coordinate and the second range measurement establishes a y-coordinate.
  • the location of the object is then determined to correspond to the point (x,y) in a single plane.
  • the plane will be oriented in the direction of the first and second paired ultrasonic elements.
  • a third pair can produce a range measurement in a third direction thereby establishing a three-dimensional coordinate system (x,y,z) if the first, second, and third range measurement projections are orthogonal to one another.
  • a time of flight associated with a first transmit and receive pair produces a complex surface wherein a location of the object can be anywhere along the complex surface. That is, a single time of flight measurement produces a locus of points in a three dimensional space that describe a possible location of the object producing the time of flight.
  • if a second transmit and receive pair is included, a second complex surface can be generated wherein a location of the object can be anywhere along the second complex surface. Based on the location of the transmitters, the two complex surfaces can produce a parabolic intersection curve wherein the object can be anywhere along the parabolic curve.
  • if a third transmit and receive pair is included in a symmetrical arrangement, a third complex surface can be generated wherein a location of the object can be anywhere along the surface of the third complex surface.
  • a location of the object can be determined by identifying the intersection point between the parabolic surface generated by the intersection of the first and second complex surface, with the third complex surface.
  • the location of the object can be uniquely specified by calculating the intersection of the three complex surfaces. Multiple complex surfaces can be generated for each paired sensor, and a location can be determined by identifying an intersection of the complex surfaces.
  • a relative movement of the object can also be determined by calculating the relative changes in TOFs from each of the transmit-receive pairs. Changes in TOFs can be directly used to look up a corresponding change in relative position. Changes in TOFs can be used to determine relative changes along the principal axes in the three dimensional space. Notably, as more transmit-receive pairs are added, the number of complex surfaces increases thereby providing redundancy as to the location and movement of the object in three dimensional space. The intersection can be calculated by projecting one complex surface onto another complex surface. A gradient descent approach or steepest descent can be used to solve for the local and global minima. Other iterative numerical solutions also exist for calculating the maximum likelihood point of the object.
  • Pulse-echo principles can be applied to all the transducers 102 in the sensing unit 110 .
  • Each of the pair of transducers can be pulsed as emitter elements. That is, the sensing unit 110 can calculate a first set of time of flight (TOF) measurements using a first transducer as the emitter, calculating a second set of TOFs using the second transducer as the emitter, and so on, wherein each transducer is used as an emitter.
  • the TOFs can be averaged over time to smooth out discontinuities in the measurements.
  • a first ultrasonic signal can be emitted from a first transducer from a first direction at a first time.
  • a first and second reflection of the ultrasonic signal off the finger from the first direction can be detected by a plurality of ultrasonic transducers 102 .
  • a location of the finger can be determined from time of flight (TOF) measurements calculated at each transmit-receive pair.
  • the steps of emitting, detecting, and determining a TOF for multiple directions at multiple times for generating a plurality of finger locations can be repeated.
  • a set of possible finger locations can be determined from the plurality of transmit-receive pairs. The set of finger locations can be correlated to determine a finger position having a highest likelihood of producing the reflections from the multiple directions.
  • the processor 120 can measure changes in TOF from each of the sensors 102. When all the TOFs decrease together, the processor 120 determines that the finger has moved closer. When all the TOFs increase together, the processor 120 determines that the finger has moved farther away. Such processing can be used for determining a button press action (see the TOF-change sketch after this list).
  • a first TOF and a first phase differential can be calculated between a signal emitted at a first time and a reflected signal detected at a first sensor.
  • a second TOF and a second phase differential can be calculated between the signal emitted at the first time and a reflected signal detected at a second sensor.
  • a third TOF and a third phase differential can be calculated between the signal emitted at the first time and a reflected signal detected at a third sensor. The calculations can occur in parallel to speed up computation time and the TOF and differentials can be calculated as they are received.
  • a first estimate of a location of the object from the first TOF can be calculated, a second estimate of a location of the object from the second TOF can be calculated, and a third estimate of a location of the object from the third TOF can be calculated.
  • At least three complex surfaces can be created as a function of the first, second, and third estimate of a location. An intersection of the at least three complex surfaces can be determined which can correspond to a coarse location of said object.
  • the first, second, and third phase differential can be applied to the estimated location for updating the coarse location to a fine location of the object.
  • the TOFs can be weighted by a phase differential.
  • a history of the TOFs and the phase differentials can be tracked for predicting an error estimate, wherein the error estimate is used to produce the fine location of the object.
  • the sensing unit 110 can contain multiple sensing elements positioned and arranged in various configurations for receiving range measurements in varying directions for calculating the position of the object causing the reflection using multi-path signal processing techniques.
  • the sensing unit is not limited to only three sensors, which are provided as an example.
  • the paired transmit and receive elements can be on a same principal axis or a different principal axis.
  • the sensing unit can also employ beam forming techniques and pattern recognition techniques for estimating the objects location.
  • the sensing unit 110 additionally produces differential coordinate signals for satisfying the input signal requirements of a Bluetooth or USB connection interface.
  • a computer mouse generally uses a PS/2 or wireless device driver for receiving differential signals for moving a cursor along each principal axis of the computer coordinate system.
  • the sensing unit 110 produces differential signals for each principal axis to comply with the requirements of the PS/2 and USB mouse device driver interfaces.
  • the virtual screen can include a touchless sensing unit for creating a sensory field, a processor communicatively coupled to the touchless sensing unit for determining a finger location and a finger action within the sensory field, and a controller communicatively coupled to the processor and a display for controlling a graphical application according to the finger location and finger action.
  • the finger action can be a finger push action, a finger hold action, a finger release action, a finger slide action, or a combination thereof.
  • the touchless sensing unit generates the sensory field over the graphical application on the display.
  • the touchless sensory field does not overlay the graphical application on the display. In this arrangement, the touchless sensory field does not spatially coincide with the graphical application.
  • the processor can detect finger movements within the touchless sensory field for interacting with the graphical application on the display.
  • Another embodiment is a method of navigating a virtual screen.
  • the method can include creating a three dimensional (3D) sensory space, determining a finger position within the 3D sensory space, determining one of a forward or retracting finger movement associated with the finger position, and controlling a graphical application according to the finger movement.
  • a first and second depth of a finger during the forward or retracting finger movement within the 3D sensory space can be identified, a vector can be created from the first and second depth, a destination of the finger on the graphical application can be predicted from the vector, and a region of focus within the graphical application can be adjusted based on the destination.
  • the destination and the region of focus can be presented on the display to provide visual feedback.
  • a graphical interface component can be identified on the display corresponding to the predicted destination of the finger movement.
  • Another embodiment is a method of control in a virtual screen.
  • the method can include producing a sensory field, and adjusting the strength of the sensory field for adjusting a detectability of a finger within the sensory field, wherein a change in a position of the finger controls a navigation within a graphical application, and a forward and retracting movement of the finger controls an action on a graphical component within the graphical application.
  • the present invention may be realized in hardware, software, or a combination of hardware and software.
  • the present invention may be realized in a centralized fashion in one computer system or in a distributed fashion where different elements are spread across several interconnected computer systems. Any kind of computer system or other apparatus adapted for carrying out the methods described herein is suited.
  • a typical combination of hardware and software may be a general purpose computer system with a computer program that, when being loaded and executed, controls the computer system such that it carries out the methods described herein.
  • the present invention also may be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein, and which when loaded in a computer system is able to carry out these methods.
  • Computer program in the present context means any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: a) conversion to another language, code or notation; b) reproduction in a different material form.
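The following Python sketches illustrate some of the mechanisms described in the list above; they are illustrative readings of the text rather than the patent's actual implementation. The first covers the pulse-echo localization described for the paired ultrasonic elements: each transmit-receive pair yields a time of flight, the time of flight is converted to a range, and two ranges taken along the sensor baseline are combined geometrically into an (x, y) finger coordinate. The speed of sound, the monostatic (co-located transmit/receive) simplification, and the transducer spacing are all assumptions.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C (assumed)

def tof_to_range(tof_s: float) -> float:
    """Pulse-echo: the sound travels out and back, so the one-way range is c*t/2."""
    return SPEED_OF_SOUND * tof_s / 2.0

def locate_2d(tof_a: float, tof_b: float, baseline_m: float):
    """Estimate (x, y) of a reflecting finger from two pulse-echo TOFs.

    Assumes each transmit/receive pair behaves as a co-located (monostatic)
    sensor: pair A at the origin, pair B at (baseline_m, 0) on the x-axis.
    The finger lies on the intersection of the two range circles.
    """
    ra, rb = tof_to_range(tof_a), tof_to_range(tof_b)
    # Standard two-circle intersection along the sensor baseline.
    x = (ra**2 - rb**2 + baseline_m**2) / (2.0 * baseline_m)
    y_sq = ra**2 - x**2
    if y_sq < 0.0:
        return None  # inconsistent ranges (noise); caller may smooth or retry
    # Take the half-plane in front of the sensor bar, where the form sheet lies.
    return x, math.sqrt(y_sq)

# Example: two echoes arriving ~1.2 ms and ~1.4 ms after the pulse,
# with the two pairs 10 cm apart.
print(locate_2d(1.2e-3, 1.4e-3, 0.10))
```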
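The list above also notes that when all TOFs decrease together the finger has moved closer, and when they all increase together it has moved farther away, which can signal a touchless button press. A minimal sketch of that logic follows; the threshold, window length, and the rule that an approach followed by a retract counts as a press are assumptions.

```python
from collections import deque
from typing import List, Optional

class PushDetector:
    """Detect a touchless 'button press' from joint time-of-flight changes.

    When every transmit-receive pair's TOF shrinks together the finger is
    approaching; when every TOF grows together it is retracting. An approach
    followed by a retract is treated as a press. The threshold and window
    size are illustrative assumptions.
    """

    def __init__(self, delta_s: float = 5e-6, window: int = 8):
        self.delta_s = delta_s               # minimum per-channel TOF change (s)
        self.history = deque(maxlen=window)  # recent TOF vectors, one per update
        self.approached = False

    def update(self, tofs: List[float]) -> Optional[str]:
        self.history.append(list(tofs))
        if len(self.history) < 2:
            return None
        prev, curr = self.history[-2], self.history[-1]
        deltas = [c - p for c, p in zip(curr, prev)]
        if all(d < -self.delta_s for d in deltas):   # every TOF shrank: closer
            self.approached = True
            return "approach"
        if all(d > self.delta_s for d in deltas):    # every TOF grew: farther
            was_pressed = self.approached
            self.approached = False
            return "press" if was_pressed else "retract"
        return None

# Usage: feed one TOF per transducer pair on every measurement cycle.
detector = PushDetector()
for tofs in ([1.00e-3, 1.10e-3], [0.95e-3, 1.05e-3], [1.01e-3, 1.12e-3]):
    event = detector.update(tofs)
    if event:
        print(event)
```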
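Finally, the destination prediction described above (two depth samples taken during a forward movement form a vector whose extrapolation gives the likely target, and the region of focus broadens or narrows with distance) can be sketched as plain linear extrapolation. The function names, the plane convention, and the focus-radius constants are assumptions; the patent also mentions using velocity and acceleration to adjust the vector, which this sketch omits.

```python
def predict_destination(p1, p2, plane_z=0.0):
    """Extrapolate a finger trajectory onto the form/display plane at z=plane_z.

    p1 and p2 are successive (x, y, z) finger samples taken during a forward
    movement, with z the height above the plane. Returns the predicted (x, y)
    landing point, or None if the finger is not moving toward the plane.
    """
    dz = p2[2] - p1[2]
    if dz >= 0:
        return None                        # not descending toward the plane
    t = (plane_z - p2[2]) / dz             # remaining steps at the current rate
    return (p2[0] + t * (p2[0] - p1[0]),
            p2[1] + t * (p2[1] - p1[1]))

def focus_radius(depth, far_depth=0.15, near_depth=0.02,
                 broad=0.05, narrow=0.01):
    """Map finger depth (metres) to a region-of-focus radius: broad when the
    finger is far from the sensing unit, narrow when it is close. All of the
    constants are illustrative."""
    depth = max(min(depth, far_depth), near_depth)
    frac = (depth - near_depth) / (far_depth - near_depth)
    return narrow + frac * (broad - narrow)

# Two samples 2 cm apart in depth, drifting slightly in x and y:
print(predict_destination((0.020, 0.030, 0.080), (0.025, 0.032, 0.060)))
# approximately (0.040, 0.038): the predicted landing point on the plane
```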

Abstract

A system (100) and method for a touchless tablet that produces a touchless sensory field over a form (111). The touchless tablet includes a touchless sensing unit (110) for identifying a finger action above the form, and a controller (130) communicatively coupled to the sensing unit for associating the finger action with at least one form component on the form. The touchless tablet identifies a selection of a form component (122) based on a location and action of the finger above the form. A display (140) connected to the touchless tablet can expose a graphical application, wherein a touchless selection of a form component corresponds to a selection of a graphical component on the graphical application.

Description

CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims the priority benefit of U.S. Provisional Patent Application No. 60/781,177 entitled “Method and System for a Touchless Tablet” filed Mar. 13, 2006, the entire contents of which are hereby incorporated by reference.
This application also incorporates by reference the following Applications: Ser. No. 11/683,410 entitled “Method and Device for Three-Dimensional Sensing”, Ser. No. 11/683,412 entitled “Application Programming Interface (API) for Sensory Events”, Ser. No. 11/683,413 entitled “Visual Toolkit for a Virtual User Interface”, Ser. No. 11/683,415 entitled “Virtual User Interface Method and Device Thereof”.
FIELD
The present embodiments of the invention generally relate to the field of computer accessories, more particularly to user interfaces.
BACKGROUND
Tablets are devices which allow a user to visualize electronic text on a display. A user can interact with the tablet by touching the tablet with a stylus. For example, a user can fill out an electronic form by writing on the tablet much like writing on a piece of paper using a pen. The tablet can detect a physical pressing of the stylus on the screen and interpret the information written by the user. Tablets can vary in size but are generally approximately the size of a sheet of paper. Alternatively, a tablet can include a touchscreen which can be physically touched by a user. A user can touch the touchscreen in a manner similar to the way a stylus is used: that is, through a physical interaction.
Tablets, touch screens, and touch panels are typically pressure-sensitive (resistive), electrically-sensitive (capacitive), acoustically-sensitive (SAW—surface acoustic wave) or photo-sensitive (infra-red). Such displays can be attached as accessory devices to computers or communication devices. Tablets can be considered an electronic substitute to the pen and paper. Although tablets have proven useful, some consumers are interested in smaller display devices that are more portable. A need therefore exists for a compact device which provides improved portability and use.
SUMMARY
One embodiment is directed to a touchless tablet. The touchless tablet can include a sensing unit for identifying a touchless finger action above a form, and a controller that associates the finger action with at least one form component in the form. The touchless tablet can identify a selection of a form component based on a location and action of the finger above the form. In one arrangement, the touchless tablet can further include a projecting element for presenting a visual layout of graphical components corresponding to the form components. In another arrangement, the projecting element can be a camera to capture a layout of form components on a form sheet. In one aspect, the finger action can include at least one of a touchless depressing action, a touchless release action, a touchless hold action, and a touchless dragging action. In another arrangement, the touchless tablet can be communicatively coupled to a display for exposing a graphical application, such that a touchless selection of a form component corresponds to a selection of a graphical component in the graphical application. The selection of a form component can perform an action on the form object that produces a response from a graphical component in the graphical application.
BRIEF DESCRIPTION OF THE DRAWINGS
The features of the present invention, which are believed to be novel, are set forth with particularity in the appended claims. The invention, together with further objects and advantages thereof, may best be understood by reference to the following description, taken in conjunction with the accompanying drawings, in the several figures of which like reference numerals identify like elements, and in which:
FIG. 1 illustrates a first embodiment of a touchless tablet in accordance with the inventive arrangements;
FIG. 2 illustrates a second embodiment of a touchless tablet in accordance with the inventive arrangements;
FIG. 3 shows a form for use with the touchless tablet in accordance with the inventive arrangements;
FIG. 4 illustrates a first embodiment of a touchless screen in accordance with the inventive arrangements;
FIG. 5 illustrates a second embodiment of a touchless screen in accordance with the inventive arrangements; and
FIG. 6 illustrates a virtual touchscreen in accordance with the inventive arrangements.
DETAILED DESCRIPTION
While the specification concludes with claims defining the features of the invention that are regarded as novel, it is believed that the invention will be better understood from a consideration of the following description in conjunction with the drawing figures, in which like reference numerals are carried forward.
As required, detailed embodiments of the present invention are disclosed herein; however, it is to be understood that the disclosed embodiments are merely exemplary of the invention, which can be embodied in various forms. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present invention in virtually any appropriately detailed structure. Further, the terms and phrases used herein are not intended to be limiting but rather to provide an understandable description of the invention.
The terms a or an, as used herein, are defined as one or more than one. The term plurality, as used herein, is defined as two or more than two. The term another, as used herein, is defined as at least a second or more. The terms including and/or having, as used herein, are defined as comprising (i.e., open language). The term coupled, as used herein, is defined as connected, although not necessarily directly, and not necessarily mechanically. The terms program, software application, and the like as used herein, are defined as a sequence of instructions designed for execution on a computer system. A program, computer program, or software application may include a subroutine, a function, a procedure, an object method, an object implementation, an executable application, an applet, a servlet, a source code, an object code, a shared library/dynamic load library and/or other sequence of instructions designed for execution on a computer system. The term touchless sensing is defined as sensing movement without physically touching. The term activating or activation is defined as enabling, disabling, or moderating a control. An activation cue can be a physical motion such as a finger movement, hand gesture, or a spoken utterance. The term touchless sensory field or sensory space, in this disclosure, can be defined as either a sensing field or a viewing field. The term sensing field can be defined herein as a region that is monitored for changes such as changes in acoustic pressure level, for example, a sound wave. The term viewing field can be defined herein as a region that is monitored for changes such as changes in intensity, such as light, for example, pixel intensity.
Referring to FIG. 1 a touchless tablet 100 is shown. The touchless tablet 100 can include a sensing unit 110, a processor 120, and a controller 130. The touchless tablet 100 can connect to a communication device hosting the display 140 through a wired or wireless connection. The display 140 can present graphical components 122-124 associated with form components 112-114 on a form 111. Briefly, the sensing unit 110 can produce a touchless sensory field above the form 111, and the processor 120 can detect finger movement within the touchless sensory field over the form 111. The controller 130 can identify finger actions applied to form components in the form 111 and translate the finger actions to components in the graphical application 121. The sensing unit 110 can include an off-axis sensory element 117 which can further detect touchless movements and which can also capture a picture of the form 111, or project an image on the form 111.
Briefly, the touchless tablet 100 can detect finger movement above the form, and identify a location of the finger within the form 111. As an example, a user can position a finger over a form component, and the touchless tablet 100 can identify a location of the selected form component corresponding to a detected location of the finger. The user can also perform a touchless action on the form component, such as a touchless push action, a slide action, a drag action, a hold action, and a release action. For example, the user can select form component 112 by positioning the finger above the component 112 and pushing down. This can activate the corresponding graphical component 122 on the display 140. The user can slide the finger, while held down, to a new location 113, and lift the finger up. The sensing unit 110 can detect the finger push, hold, and release actions and move the graphical components 122-124 corresponding to the form components 112-114 on the display 140. The sensing unit 110 can detect constant velocity movement and accelerated movement to identify finger actions.
In one arrangement, prior to using the touchless tablet, a user can download a form layout to the processor 120, or computer hosting the graphical user interface 121. The form layout identifies locations of the form components in the form 111. Next, the user can position the form 111 within the touchless sensory field of the sensing unit 110. The form 111 can be a sheet of paper containing printed material or graphical components at locations known to the touchless tablet 100, based on the downloading of the form, or to a computer system communicatively coupled to the touchless tablet. The form 111 can be a sheet of paper, a presentation, or a brochure. The processor 120, or computer, can then associate locations in the touchless sensory field with the locations of form components. Alternatively, the sensing unit 110 can capture a picture of the form 111, and the processor 120 can identify locations of the form components from the picture, if the form is not downloaded.
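A downloaded form layout of the kind described in the preceding paragraph could be represented, for example, as a list of named components with bounding boxes. The sketch below is a minimal illustration of that idea; the component names, the units (inches from the form's top-left corner), and the FormComponent and component_at helpers are hypothetical, not part of the patent.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class FormComponent:
    """One entry of a downloaded form layout (coordinates in inches from the
    form's top-left corner; names and units are assumed)."""
    name: str
    x: float
    y: float
    width: float
    height: float

    def contains(self, fx: float, fy: float) -> bool:
        return (self.x <= fx <= self.x + self.width
                and self.y <= fy <= self.y + self.height)

# A hypothetical layout for a simple menu form.
FORM_LAYOUT: List[FormComponent] = [
    FormComponent("submit_button", x=1.0, y=9.5, width=2.0, height=0.5),
    FormComponent("menu_item_1",  x=0.5, y=2.0, width=6.0, height=0.4),
    FormComponent("menu_item_2",  x=0.5, y=2.6, width=6.0, height=0.4),
]

def component_at(fx: float, fy: float) -> Optional[FormComponent]:
    """Return the form component under a form coordinate, if any."""
    for comp in FORM_LAYOUT:
        if comp.contains(fx, fy):
            return comp
    return None

print(component_at(1.5, 9.7))  # -> the hypothetical submit_button entry
```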
In other arrangements, the sensing unit 110 may not have information with regard to the location of the form components in the form 111. Accordingly, the sensing unit 110 is limited to only identifying an absolute location of the finger within the touchless sensory field, such as an absolute coordinate. The touchless sensory field can be demarcated by bounds along two dimensions, such as an X and Y direction. Notably, the sensing unit 110 can estimate a location and relative movement of a finger within the bounds of the touchless sensory field. When the form sheet is correctly positioned within the touchless sensory field, a one to one correspondence of finger location can be determined with the location of the form. For example, the form sheet can be 8×11 inches and positioned such that the top of the form just touches the vertical aspect of the sensing unit 110, and the side of the form just touches the horizontal aspect of the sensing unit 110. Accordingly, a detection of the finger within the touchless sensory field can be correlated to a position on the form 111 when correctly positioned. The alignment provides a one-to-one correspondence of the finger location relative to the position of the form. In addition, the off-axis element 117 can project a bounds of the frame to aid the user in identifying the placement of the form. For example, a LED or laser light source can project the outline of a standard size form on a surface, such as a desk. However, such an arrangement only provides limited information for interfacing with the touchless tablet.
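The one-to-one alignment described above (the form sheet flush against the sensing unit, so a field coordinate maps directly onto the sheet) can be sketched as a unit conversion plus a bounds check. The 8×11 inch dimensions come from the example above; the metric field coordinates and the field_to_form name are assumptions. The resulting form coordinate could then be handed to a layout lookup such as the component_at helper sketched earlier.

```python
from typing import Optional, Tuple

INCH = 0.0254          # metres per inch
FORM_WIDTH_IN = 8.0    # sheet size assumed from the 8x11 example above
FORM_HEIGHT_IN = 11.0

def field_to_form(x_m: float, y_m: float) -> Optional[Tuple[float, float]]:
    """Map a finger position measured in the touchless field (metres from the
    sensing unit's corner) onto form coordinates (inches), assuming the form
    is aligned flush with the unit as described above. Returns None when the
    finger is outside the sheet."""
    fx, fy = x_m / INCH, y_m / INCH
    if 0.0 <= fx <= FORM_WIDTH_IN and 0.0 <= fy <= FORM_HEIGHT_IN:
        return fx, fy
    return None

print(field_to_form(0.05, 0.10))  # roughly (1.97, 3.94) inches on the sheet
```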
In another arrangement, the touchless sensing unit can project a virtual layout of virtual components corresponding to form components already printed on the form 111. For example, the form sheet 111 may be a printed publication with already existing form elements, such as a menu or list. The form elements 112 can be identified by coordinates. For example, a list of form coordinates can be generated specifying the form component and its location on the form sheet. This list can be provided to the sensing unit 110. Accordingly, the off-axis element 117 can generate a virtual layout of virtual components that correspond in location with the printed form components given the positional information in the list. Understandably, in this arrangement, the sensing unit 110 is given positional information of the printed form components on the form. The sensing unit can then create a virtual layout based on the provided location of the form components. The virtual layout can be projected by the off-axis element 117 onto the form sheet. Supplemental to providing positional information to the sensing unit 110, the off-axis element 117 can be a camera to capture an image of the form 111. The camera can also capture a location and motion of a finger action. The graphical components on the form sheet can be identified from the image as well as the finger location and movement. The coordinates of the form components from the image can be determined, and a virtual layout of virtual components corresponding to the coordinates of the form components can be created. Hidden attributes can also be embedded within the image for identifying form components to associate with graphical components. That is, the processor 120 can identify hidden attributes for identifying a location and function of a form component. For example, a form manufacturer may place watermarks within the form which can identify the location, function, and method of finger action required to interact with the form component.
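If the off-axis camera is used to identify printed form components, one plausible (but unspecified by the patent) approach is ordinary image segmentation: threshold the captured image and take the bounding boxes of sufficiently large dark regions as candidate components. The sketch below assumes OpenCV 4.x; the function name and the area threshold are illustrative.

```python
import cv2  # OpenCV 4.x assumed

def find_component_boxes(image_path: str, min_area: int = 500):
    """Locate candidate printed form components in a captured image and
    return their bounding boxes in pixel coordinates. This is only a rough
    stand-in for the image analysis implied above."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    # Dark printed regions become white blobs on a black background.
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours
             if cv2.contourArea(c) >= min_area]
    return boxes  # each box is (x, y, width, height) in pixels
```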
In yet another arrangement, the touchless sensing unit 110 can project a visual layout of graphical components on the form. Whereas a virtual layout is not visible, a visual layout is visible. The form may be a blank sheet of paper or a flat surface. For example, the off-axis element 117 can project a visual layout of graphical components on a blank form, or a table surface. The off-axis element 117 can be a small light projector that produces a light field of components on the form. The visual display can be a fixed presentation of form elements on the form, or it can be a changing presentation of form elements. The sensing unit 110, which produces the visual layout, knows the location of the graphical components within the visual layout when projected on the form. Accordingly, a location of a finger detected within the touchless sensory field can be correlated to a graphical component in the visual layout.
In practice, the processor 120 can identify a location and action of the finger and generate a coordinate object. The controller 130 can convey the coordinate object to the graphical application 121, which can then control graphical components according to the coordinate object. In one arrangement, the graphical application 121 can implement a sensory Application Programming Interface (API). The sensory API can include functions, methods, and variables which provide communication access to the touchless sensing unit 110. By implementing the sensory API, the touchless tablet 100 can provide a virtual user interface (VUI) to the graphical application 121. This allows a user to control the graphical application 121 using the touchless tablet 100.
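As a minimal sketch, and assuming hypothetical names (CoordinateObject, SensoryAPI, register, dispatch) that are not drawn from the specification, a coordinate object and a sensory API of this kind might be organized as follows.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class CoordinateObject:
    x: float
    y: float
    z: float
    action: str      # e.g. "push", "hold", "release", "slide"

class SensoryAPI:
    """Communication access between a graphical application and the touchless sensing unit."""
    def __init__(self):
        self._listeners: List[Callable[[CoordinateObject], None]] = []

    def register(self, listener: Callable[[CoordinateObject], None]) -> None:
        # A graphical application registers to be notified of finger events.
        self._listeners.append(listener)

    def dispatch(self, coord: CoordinateObject) -> None:
        # Called by the controller whenever the processor produces a coordinate object.
        for listener in self._listeners:
            listener(coord)

# Example: the application receives finger location and action as coordinate objects.
api = SensoryAPI()
api.register(lambda c: print("finger at", (c.x, c.y, c.z), "action:", c.action))
api.dispatch(CoordinateObject(1.0, 2.0, 0.5, "push"))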
Referring to FIG. 2, a compact version of the touchless tablet 100 is shown. Notably, the touchless sensing unit 110 does not include one of the approximately perpendicular sensing arrays. The sensing unit 110 can detect movement within a touchless sensory field above the form 111. For example, the sensing unit 110 can detect forward and backward movement as well as downward or upward finger movement. In one aspect, the sensing unit 110 without the off-axis element 117 is capable of identifying two-dimensional motion within a plane parallel to the form 111. The off-axis element 117 allows the sensing unit 110 to determine movement orthogonal to the plane; that is, upward and downward movement. A user can position the sensing unit 110 at the top of a form sheet for generating a virtual sensory field over the form 111. In another arrangement, a planar scanning field can be generated without the off-axis element to identify when a finger has obstructed the plane, for example using laser scanning. A user can position a finger over a form component and perform a finger action to control a graphical component in the display 140 associated with the form component. The user need not touch the form. However, a microphone can be included within the sensing unit 110 to acknowledge that a finger action has occurred. For example, the sensing unit 110 can also localize the sound of a finger tap.
Referring to FIG. 3, an exemplary form 111 is shown. For instance, the form 111 can include form elements 112 and 114. The form 111 can be presented on the display 140. The sensing unit 110 can be positioned along the form 111 to allow a user to select items on a menu or a list that will be recognized on the display 140. For example, within a restaurant environment or a conference environment, the sensing unit 110 can identify items selected by a user. The sensing unit 110 can relay the identified items on the form to a server or a computer capable of processing the information. Those skilled in the art will appreciate that embodiments of the invention can be utilized in various other contexts within the scope of the invention and are not limited to the examples presented.
Referring to FIG. 4, a virtual screen 400 is shown by positioning the touchless sensing unit 110 on a side. A user can interact with the graphical application 121 on the display 140 through the virtual screen 400. In this case, a form is not needed. For example, a user can move a finger along a horizontal and vertical direction parallel to the virtual sensory field. The user can push the finger forward to activate a virtual component 112, which in turn activates the corresponding graphical component 122 on the display 140. In one arrangement, the virtual screen may not be visible. Accordingly, a location of the user's finger within the virtual sensory field can be presented on the display 140. This provides the user visual feedback for navigating around the application. Accordingly, the relative movement can be detected in the virtual screen for identifying finger movement and location. In another arrangement, a backdrop can be placed behind the sensing unit 110 to provide a surface on which the virtual field can be projected. In yet another arrangement, the virtual sensory field can be produced as a hologram.
Referring to FIG. 5, a virtual screen 500 is shown. Notably, the sensing unit 110 includes two off-axis elements 117 and 118 which are positioned on an opposite side from that shown in FIG. 3. The touchless interface of FIG. 4 also provides 3D sensing of finger location and action.
Referring to FIG. 6, a virtual touchscreen 600 is shown. In particular, the virtual touchscreen 600 can include multiple touchless sensing units 110 arranged around the display 140 for providing 3D user interaction. Notably, the touchless sensory field is produced over the display, thereby allowing a user to use the display as a touchscreen without requiring the user to touch the display, while providing a third dimension of depth interaction.
A method of navigation for a touchless tablet, a virtual screen, and a virtual touchscreen is provided. The method can be practiced when the sensing unit 110 uses acoustic sensing elements for creating a sensing field, or when the sensing unit 110 uses optical sensing elements for creating a viewing field. The method can include creating a three-dimensional (3D) sensory space, determining a finger position within the 3D sensory space, determining one of a forward or retracting finger movement associated with the finger position, and controlling a user interface according to the finger movement. The user interface may be a graphical user interface providing visual information or an audible user interface providing audible information. For example, referring back to FIG. 1, the processor 120 can determine a two-dimensional position of a finger within the touchless sensory field. The processor 120 can also determine a first depth of a finger during a forward finger movement within the 3D sensory space. This can be the case when a user has engaged a form component 112 to produce an action on a graphical component 122 in the graphical application 121. The processor 120 can then determine a second depth of the finger during the forward movement within the 3D sensory space. The processor 120 can create a vector from the first and second depth, and predict a destination of the finger on the graphical application from said vector. The processor 120 can also determine a velocity and an acceleration for adjusting a direction of the vector. The processor 120 can also track a movement for identifying repetitive motion or reverse actions, such as moving forward and backward for detecting a button press. For instance, the destination can be graphical component 122.
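A minimal sketch of the vector prediction described above, assuming two sampled finger positions and a hypothetical function name (predict_destination); the plane at z = 0 stands in for the surface of the form or display and is an assumption of this example.

import numpy as np

def predict_destination(p1: np.ndarray, p2: np.ndarray, t1: float, t2: float,
                        plane_z: float = 0.0) -> np.ndarray:
    """p1, p2: finger positions (x, y, z) sampled at times t1 < t2.
    Returns the point where the motion vector crosses the plane z = plane_z."""
    v = (p2 - p1) / (t2 - t1)            # velocity vector built from the two depths
    if abs(v[2]) < 1e-9:                 # no forward component, so no prediction
        return p2
    t_hit = (plane_z - p2[2]) / v[2]     # time until the finger reaches the plane
    return p2 + v * t_hit                # extrapolated destination (negative t_hit means retracting)

# Example: a finger moving forward from depth 5 to depth 4 in 0.1 s is
# predicted to land at (2.0, 1.5, 0.0).
print(predict_destination(np.array([1.0, 1.0, 5.0]), np.array([1.2, 1.1, 4.0]), 0.0, 0.1))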
The processor 120 can adjust a region of focus within the graphical application 121 based on the destination. The region of focus can correspond to a higher resolution of sensing. For example, increasing a region of focus can correspond to increasing the pulsing rate of an emitter element. More pulses per second result in a finer ability to resolve a location or a relative displacement. A region of focus can also increase attributes of a graphical component, such as its size. For example, as a user pushes increasingly downward, emulating a push-button action, a graphical component can increase in size to provide visual feedback to the user that the graphical component will be activated. The graphical components identified as targets can be visually distorted to let the user see which component is being targeted as the destination. For example, an exaggerated zoom lens can be applied to the graphical component indicating that it is being selected. In one aspect, adjusting a region of focus includes one of accelerating and decelerating a movement towards the destination. This includes zooming in on the region of focus in view of the destination.
A cursor object can also be presented on the display that moves in accordance with finger movement within the touchless sensory field. The cursor can change in color, size, shape, or behavior to provide visual feedback. The graphical application 121 can present a distorted image of an object on the display that moves in accordance with the finger movement within the 3D sensory space. For example, the graphical application 121 can receive coordinate information concerning movement of the finger within the touchless sensory field to visually identify navigation. In one aspect, a distant depth of the 3D sensory space corresponds to a broad region of focus, and a close depth of the 3D sensory space corresponds to a narrow region of focus. A region of focus can broaden when the user's finger is farther from the sensing unit producing the 3D sensory space, and the region of focus can narrow when the user's finger is closer to the sensing unit. The graphical application 121 can present the destination and the region of focus on the display to provide visual feedback.
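One way to picture the broad-when-far, narrow-when-close behavior just described is a simple mapping from finger depth to a zoom factor; the depth range and zoom limits below are assumptions chosen only for illustration.

def zoom_for_depth(depth_cm: float, far_cm: float = 15.0, near_cm: float = 2.0,
                   min_zoom: float = 1.0, max_zoom: float = 3.0) -> float:
    """Returns a zoom factor: min_zoom when the finger is at or beyond far_cm,
    rising linearly to max_zoom as the finger closes to near_cm."""
    depth_cm = max(near_cm, min(far_cm, depth_cm))
    t = (far_cm - depth_cm) / (far_cm - near_cm)   # 0 when far, 1 when near
    return min_zoom + t * (max_zoom - min_zoom)

# Example: at 15 cm the view is unzoomed (broad region of focus); at 2 cm it is
# zoomed to 3x (narrow region of focus).
print(zoom_for_depth(15.0), zoom_for_depth(2.0))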
A method of control in the virtual screen includes producing a touchless sensory field, and adjusting the strength of the touchless sensory field for adjusting a detection of a finger within the touchless sensory field. The method can be practiced when the sensing unit 110 uses acoustic sensing elements for creating a sensing field, or when the sensing unit 110 uses optical sensing elements for creating a viewing field. Adjusting the field can include increasing or decreasing the amplification of the touchless sensory field. For example, the processor can increase or decrease the amplitude of an emitted pulse for adjusting the field strength. The strength of the field determines the sensitivity and resolution for resolving location and measuring movement within the field. A change in vertical and horizontal position of the finger controls a navigation within the graphical application 121. A change of forward or retracting movement can activate a graphical component 122 within the graphical application 121. The processor 120 can track finger movement and smooth out perturbations in the track. The processor 120 can determine at least one of an absolute location of the finger, a relative displacement of the finger, a velocity of the finger, a length of time the finger is at a location, and an acceleration of the finger. The processor 120 can also set thresholds for identifying a finger action such as a touchless button press. For example, the required depth of a button press can be changed by adjusting the coordinate bounds for the button, or by changing the field strength.
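A minimal sketch of the adjustable field strength and button-depth threshold described above; the class, its parameters, and the depth units are assumptions of this example rather than values from the specification.

class TouchlessField:
    def __init__(self, pulse_amplitude: float = 1.0, button_depth_cm: float = 2.0):
        self.pulse_amplitude = pulse_amplitude     # strength of the emitted pulse
        self.button_depth_cm = button_depth_cm     # forward travel that counts as a press

    def adjust_strength(self, gain: float) -> None:
        # Larger amplitude yields stronger echoes and a wider, more sensitive field.
        self.pulse_amplitude *= gain

    def is_button_press(self, z_start_cm: float, z_end_cm: float) -> bool:
        # A touchless button press is a forward movement exceeding the configured depth.
        return (z_start_cm - z_end_cm) >= self.button_depth_cm

# Example: weaken the field by 20% and test whether a 2.5 cm forward movement qualifies.
field = TouchlessField()
field.adjust_strength(0.8)
print(field.is_button_press(z_start_cm=10.0, z_end_cm=7.5))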
As previously noted, one exemplary embodiment provides touchless navigation control using a sensing unit comprised of ultrasonic elements. It will be apparent to one of ordinary skill, however, that the sensory aspects of the invention apply equally well across other sensing technologies. For example, the sensing elements of the sensing unit can comprise a microphone array system, a beam forming array, a three-dimensional imaging system, a camera system, a laser system, or any combination thereof for acquiring finger movement and a finger location for converting finger movement into a coordinate signal for navigating within a display as herein presented. For example, a base section can comprise ultrasonic elements, and the off-axis element can be a camera element.
Referring back to FIG. 1, in an exemplary embodiment only, the sensing unit 110 can contain at least one array of ultrasonic sensors. The sensors can create a three-dimensional sensing field that allows the sensing unit 110 to identify and track a location of an object within the sensing field. The sensing unit 110 can employ principles of pulse-echo detection for locating the position of an object within a sensory field. The intensity and sensitivity area of the sensing field depend on the signal strength of the signals pulsed out from the transmitters and the ability of the sensing unit 110 to resolve echo signals. In a pulse-echo system, a transmitter emits a high frequency and high energy pulse which may be reflected off an object. If the object is within close proximity, a reflection signal will be generated from the scattering of the high energy pulse on the object. Reflection signals will be sent back towards the sensing unit 110 and can be captured by receiver elements (transducers). Notably, the signal is generally high frequency because less energy is dissipated during transmission and reception of a high frequency pulse in air. Ultrasonic transducers and special sonic acoustic transducers are capable of operating at high frequencies such as 40 kHz. In a pulse-echo system a pulsed signal is transmitted towards an object and a reflection signal is identified. A time of flight measurement describes the time elapsed between when the signal was emitted and when the signal was received.
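A small sketch of the pulse-echo relationship just described, assuming a collocated transmit/receive pair and a nominal speed of sound in air (a real unit would compensate for temperature); the function name is hypothetical.

SPEED_OF_SOUND_M_S = 343.0   # approximate speed of sound in air at ~20 C

def range_from_tof(tof_seconds: float) -> float:
    """The pulse travels out to the object and the echo travels back, so the
    one-way range is half the round-trip time multiplied by the speed of sound."""
    return 0.5 * tof_seconds * SPEED_OF_SOUND_M_S

# Example: an echo arriving 1.2 ms after a 40 kHz pulse was emitted places the
# finger roughly 0.5 * 0.0012 * 343 ≈ 0.21 m from the transducer.
print(range_from_tof(0.0012))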
The touchless sensory field is an approximate three dimensional region wherein a finger movement can be identified by the sensing unit 110. For example, when the sensing unit is an ultrasonic sensing unit that emits a high energy pulse, the field of view corresponds to that region within which a reflected high energy pulse can be detected. The field of view can be a function of the emitted pulse strength and the range (e.g. distance). A user can move the finger within the field of view and the processor 120 can detect a position of the finger. The sensing unit 110 can also include a timer for determining a length of time a finger is at a position. The processor 120 in conjunction with the timer can determine a finger location and a finger action. The processor 120 can also include a buffer for storing prior coordinate information. Notably, a finger push action generally corresponds to a movement from a first position to a second position and then back to the first position. Accordingly, the processor tracks finger movement and compares the movement with prior history of movement for determining when a push action has been initiated.
In the exemplary embodiment, the sensing unit 110 can include at least one transmitter 102 and at least two receivers for transmitting and receiving ultrasonic signals. The transducers can be omni-directional ultrasonic transducers, and the same transducers can provide dual transmit and receive functions. The sensing unit can employ pulse-echo detection to estimate a range and position of an object within view of the sensing elements. A transmitter in the sensing unit can emit a pulse-shaped signal that reflects off an object, which is detected by a receiver element in the sensing unit. The receiver element can be coupled with a detector (e.g. processor 120) to detect a signal reflected off an object as part of the motion detection logic in the sensing unit. The detector can include additional processing logic such as thresholds, comparators, logic gates, clocks, and the like for detecting an object's motion. The sensing unit 110 calculates a position of the object causing the reflection by solving a set of geometric equations.
As an example, a single transmit and receive element pair along a same plane in the ultrasonic sensing unit calculates a first range (e.g. distance) of an object in the field of view. A first transmit and receive pair on an x-axis can estimate a longitudinal range of the object (e.g. finger). A second pair, arranged separately from the first pair, estimates a second range; the second pair estimates a latitudinal range of the object (e.g. finger). Accordingly, the two range measurements establish a position (e.g. location) of the object causing the signal reflection by mathematically combining the geometrically related range measurements. For example, the first range measurement establishes an x-coordinate and the second range measurement establishes a y-coordinate. The location of the object is then determined to correspond to the point (x,y) in a single plane. For example, the plane will be oriented in the direction of the first and second paired ultrasonic elements. Accordingly, a third pair can produce a range measurement in a third direction, thereby establishing a three-dimensional coordinate system (x,y,z) if the first, second, and third range measurement projections are orthogonal to one another.
A time of flight associated with a first transmit and receive pair produces a complex surface wherein a location of the object can be anywhere along the complex surface. That is, a single time of flight measurement produces a locus of points in a three dimensional space that describe a possible location of the object producing the time of flight. When a second transmit and receive pair is included, a second complex surface can be generated wherein a location of the object can be anywhere along the second complex surface. Based on the location of the transmitters, the two complex surfaces can produce a parabolic intersection curve wherein the object can be anywhere along the parabolic curve. When a third transmit and receive pair is included in a symmetrical arrangement, a third complex surface can be generated wherein a location of the object can be anywhere along the third complex surface. A location of the object can be determined by identifying the intersection point between the parabolic curve generated by the intersection of the first and second complex surfaces and the third complex surface. The location of the object can be uniquely specified by calculating the intersection of the three complex surfaces. Multiple complex surfaces can be generated for each paired sensor, and a location can be determined by identifying an intersection of the complex surfaces.
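The intersection of the complex surfaces can be pictured numerically as a least-squares problem: each time of flight constrains the object to a surface whose points satisfy |p - T| + |p - R_i| = c * TOF_i (the transmitter-to-object-to-receiver path), and the estimate is the point that best satisfies all constraints. The sketch below solves this with plain gradient descent; the function name, step size, and iteration count are assumptions for illustration, not the patented solver.

import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def locate_finger(transmitter, receivers, tofs, guess, steps=2000, lr=0.01):
    """transmitter: (3,) position; receivers: (N,3) positions; tofs: (N,) seconds.
    Minimizes sum_i (|p-T| + |p-R_i| - c*TOF_i)^2 over candidate positions p."""
    p = np.asarray(guess, dtype=float)
    T = np.asarray(transmitter, dtype=float)
    R = np.asarray(receivers, dtype=float)
    d = np.asarray(tofs, dtype=float) * SPEED_OF_SOUND   # path lengths implied by the TOFs
    for _ in range(steps):
        to_T = p - T
        to_R = p - R                                      # (N, 3)
        nT = np.linalg.norm(to_T) + 1e-12
        nR = np.linalg.norm(to_R, axis=1) + 1e-12
        residual = nT + nR - d                            # mismatch against each surface
        # The gradient of each residual with respect to p is the sum of the two unit vectors.
        grad = (residual[:, None] * (to_T / nT + to_R / nR[:, None])).sum(axis=0)
        p -= lr * grad                                    # steepest-descent update
    return p

With three or more well-separated receivers the minimum corresponds to the single intersection point described above; additional transmit-receive pairs simply add redundancy.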
A relative movement of the object can also be determined by calculating the relative changes in time of flight (TOF) from each of the transmit-receive pairs. Changes in TOFs can be directly used to look up a corresponding change in relative position. Changes in TOFs can also be used to determine relative changes along the principal axes in the three dimensional space. Notably, as more transmit-receive pairs are added, the number of complex surfaces increases, thereby providing redundancy as to the location and movement of the object in three dimensional space. The intersection can be calculated by projecting one complex surface onto another complex surface. A gradient descent or steepest descent approach can be used to solve for the local and global minima. Other iterative numerical solutions also exist for calculating the maximum likelihood point of the object. Multiple pulse-echo time of flight measurements can be captured for smoothing out the trajectory of the object in addition to an averaging of the location of the object. Pulse-echo principles can be applied to all the transducers 102 in the sensing unit 110. Each of the transducers can be pulsed as an emitter element. That is, the sensing unit 110 can calculate a first set of TOF measurements using a first transducer as the emitter, calculate a second set of TOFs using a second transducer as the emitter, and so on, wherein each transducer is used as an emitter. Notably, the TOFs can be averaged over time to smooth out discontinuities in the measurements.
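The averaging over time mentioned above can be as simple as an exponential moving average applied to each transmit-receive pair's TOF stream; the class name and smoothing factor below are illustrative assumptions.

class TofSmoother:
    def __init__(self, alpha: float = 0.2):
        self.alpha = alpha        # weight given to the newest measurement
        self._smoothed = None

    def update(self, tofs):
        # Exponential moving average across successive pulse-echo measurements,
        # one value per transmit-receive pair.
        if self._smoothed is None:
            self._smoothed = list(tofs)
        else:
            self._smoothed = [(1 - self.alpha) * s + self.alpha * t
                              for s, t in zip(self._smoothed, tofs)]
        return self._smoothed

# Example: noisy TOF readings per pair are pulled toward a stable track.
smoother = TofSmoother()
print(smoother.update([0.0012, 0.0014, 0.0013]))
print(smoother.update([0.0011, 0.0015, 0.0013]))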
In one arrangement, a first ultrasonic signal can be emitted from a first transducer from a first direction at a first time. A first and second reflection of the ultrasonic signal off the finger from the first direction can be detected by a plurality of ultrasonic transducers 102. A location of the finger can be determined from time of flight (TOF) measurements calculated at each transmit-receive pair. The steps of emitting, detecting, and determining a TOF can be repeated for multiple directions at multiple times for generating a plurality of finger locations. In one aspect, a set of possible finger locations can be determined from the plurality of transmit-receive pairs. The set of finger locations can be correlated to determine a finger position having a highest likelihood of producing the reflections from the multiple directions. Also, the processor 120 can measure changes in TOF from each of the sensors 102. When all the TOFs decrease together, the processor 120 determines that the finger has moved closer. When all the TOFs increase together, the processor 120 determines that the finger has moved farther away. Such processing can be used for determining a button press action.
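The collective-TOF heuristic above reduces to a small comparison over successive measurements; the function name and tolerance are assumptions of this sketch.

from typing import Sequence

def finger_direction(prev_tofs: Sequence[float], curr_tofs: Sequence[float],
                     eps: float = 1e-6) -> str:
    """Compare TOFs from the same sensors at two successive times."""
    deltas = [c - p for p, c in zip(prev_tofs, curr_tofs)]
    if all(d < -eps for d in deltas):
        return "approaching"        # all TOFs shrank: candidate start of a button press
    if all(d > eps for d in deltas):
        return "retracting"         # all TOFs grew: candidate release of a button press
    return "lateral_or_still"       # mixed changes: sideways motion or noise

# Example: every TOF dropped, so the finger is judged to be moving toward the unit.
print(finger_direction([0.0013, 0.0014, 0.0015], [0.0012, 0.0013, 0.0014]))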
In one aspect, a first TOF and a first phase differential can be calculated between a signal emitted at a first time and a reflected signal detected at a first sensor. A second TOF and a second phase differential can be calculated between the signal emitted at the first time and a reflected signal detected at a second sensor. A third TOF and a third phase differential can be calculated between the signal emitted at the first time and a reflected signal detected at a third sensor. The calculations can occur in parallel to speed up computation time, and the TOFs and phase differentials can be calculated as the reflections are received. Accordingly, a first estimate of a location of the object can be calculated from the first TOF, a second estimate of a location of the object can be calculated from the second TOF, and a third estimate of a location of the object can be calculated from the third TOF. At least three complex surfaces can be created as a function of the first, second, and third estimates of a location. An intersection of the at least three complex surfaces can be determined, which can correspond to a coarse location of said object. The first, second, and third phase differentials can be applied to the estimated location for updating the coarse location to a fine location of the object. In one aspect, the TOFs can be weighted by a phase differential. In one arrangement, for refining the trajectory of the object's movement, a history of the TOFs and the phase differentials can be tracked for predicting an error estimate, wherein the error estimate is used to produce the fine location of the object.
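One way to picture the coarse-to-fine combination is a complementary weighting of the absolute TOF fix against a dead-reckoned position built from the relative displacement (derived from differential TOF or phase); the weighting scheme and names below are assumptions of this sketch, not the claimed combinational weighting itself.

import numpy as np

def fuse_location(coarse: np.ndarray, previous_fine: np.ndarray,
                  displacement: np.ndarray, weight: float = 0.3) -> np.ndarray:
    """coarse: absolute (x, y, z) estimate from TOF intersection (noisy but unbiased).
    previous_fine + displacement: position dead-reckoned from relative measurements
    (smooth but prone to drift). weight: trust placed in the absolute fix each update."""
    dead_reckoned = previous_fine + displacement
    return weight * coarse + (1.0 - weight) * dead_reckoned

# Example: the fine track follows the smooth relative motion while being pulled
# back toward the coarse absolute fix to limit drift.
fine = fuse_location(np.array([0.10, 0.20, 0.30]),
                     np.array([0.09, 0.21, 0.31]),
                     np.array([0.005, -0.002, -0.004]))
print(fine)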
The sensing unit 110 can contain multiple sensing elements positioned and arranged in various configurations for receiving range measurements in varying directions for calculating the position of the object causing the reflection using multi-path signal processing techniques. The sensing unit is not limited to only three sensors, which are provided as an example. The paired transmit and receive elements can be on a same principal axis or a different principal axis. The sensing unit can also employ beam forming techniques and pattern recognition techniques for estimating the object's location. The sensing unit 110 additionally produces differential coordinate signals for satisfying the input signal requirements of a Bluetooth or USB connection interface. Notably, a computer mouse generally uses a PS/2 or wireless device driver for receiving differential signals for moving a cursor along each principal axis of the computer coordinate system. The sensing unit 110 produces differential signals for each principal axis to comply with the requirements of the PS/2 and USB mouse device driver interface.
Another embodiment is a virtual screen. The virtual screen can include a touchless sensing unit for creating a sensory field, a processor communicatively coupled to the touchless sensing unit for determining a finger location and a finger action within the sensory field, and a controller communicatively coupled to the processor and a display for controlling a graphical application according to the finger location and finger action. The finger action can be a finger push action, a finger hold action, a finger release action, a finger slide action, or a combination thereof. In one arrangement, the touchless sensing unit generates the sensory field over the graphical application on the display. In another arrangement, the touchless sensory field does not overlay the graphical application on the display. In this arrangement, the touchless sensory field does not spatially coincide with the graphical application. The processor can detect finger movements within the touchless sensory field for interacting with the graphical application on the display.
Another embodiment is a method of navigating a virtual screen. The method can include creating a three dimensional (3D) sensory space, determining a finger position within the 3D sensory space, determining one of a forward or retracting finger movement associated with the finger position, and controlling a graphical application according to the finger movement. In one aspect, a first and second depth of a finger during the forward or retracting finger movement within the 3D sensory space can be identified, a vector can be created from the first and second depth, a destination of the finger on the graphical application can be predicted from the vector, and a region of focus within the graphical application can be adjusted based on the destination. In one aspect, the destination and the region of focus can be presented on the display to provide visual feedback. A graphical interface component can be identified on the display corresponding to the predicted destination of the finger movement.
Another embodiment is a method of control in a virtual screen. The method can include producing a sensory field, and adjusting the strength of the sensory field for adjusting a detectability of a finger within the sensory field, wherein a change in a position of the finger controls a navigation within a graphical application, and a forward and retracting movement of the finger controls an action on a graphical component within the graphical application.
The present invention may be realized in hardware, software, or a combination of hardware and software. The present invention may be realized in a centralized fashion in one computer system or in a distributed fashion where different elements are spread across several interconnected computer systems. Any kind of computer system or other apparatus adapted for carrying out the methods described herein is suited. A typical combination of hardware and software may be a general purpose computer system with a computer program that, when being loaded and executed, controls the computer system such that it carries out the methods described herein.
The present invention also may be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein, and which when loaded in a computer system is able to carry out these methods. Computer program in the present context means any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: a) conversion to another language, code or notation; b) reproduction in a different material form.
This invention may be embodied in other forms without departing from the spirit or essential attributes thereof. Accordingly, reference should be made to the following claims, rather than to the foregoing specification, as indicating the scope of the invention.

Claims (18)

What is claimed is:
1. A touchless tablet comprising:
a camera that takes a picture of a form,
a touchless sensing unit operatively coupled to the camera for identifying a touchless finger action above the form within a three dimensional (3D) ultrasonic sensory space; and
a controller communicatively coupled to said touchless sensing unit, wherein said controller
determines a location of form components in the picture captured by said camera,
creates a virtual layout of virtual form components in a three-dimensional ultrasonic touchless sensing space based on the location of the form components in the picture;
presents on a display a visual layout of visual form components also based on the location of the form components in the picture,
applies a combinational weighting of a Time of Flight (TOF) ultrasonic distance measurement corresponding to a coarse estimated location of the finger and a differential Time of Flight (dTOF) ultrasonic measurement corresponding to a relative displacement as the finger accelerates or decelerates between a far distance and a close distance;
creates a vector from a first location at the far distance to a second location at the close distance for a forward finger movement, or, a vector from the first location at the close distance to the second location at the far distance for a retracting finger movement in the ultrasonic sensory space; and
predicts a destination of the finger on said form from said vector to provide zoom in and zoom out and associates the touchless finger action in the ultrasonic sensory space on at least one virtual form component in the virtual layout of the three-dimensional ultrasonic touchless sensing space with at least one visual form component of the visual layout on said form and presented on the display based on the predicted destination,
wherein said touchless tablet identifies a selection of said virtual form component based on said location and action of said finger in the three-dimensional ultrasonic touchless sensing space.
2. The touchless tablet of claim 1, further comprising an off-axis element placing the camera above the touchless finger actions to capture the picture of said form using optical camera elements, and said controller identifies form components in said picture and creates graphical components on a display corresponding to said form components of said picture.
3. The touchless tablet of claim 1, wherein said sensing unit further comprises:
a detector, wherein said detector identifies a position of said finger in a sensory field above the form by
emitting a plurality of ultrasonic pulses from a first ultrasonic transducer configured to transmit the ultrasonic pulses;
estimating for a plurality of ultrasonic transducers a time of flight between transmitting one of the ultrasonic pulses and receiving a reflected ultrasonic signal corresponding to a reflection off the finger;
calculating for the plurality of ultrasonic transducers a differential receive time between the reflected ultrasonic signal and a previously received reflected ultrasonic signal, and
determining a location and relative displacement of the finger from a combination of said time of flight measurements and said differential receive time measurements for mapping the virtual components of the VUI to the user components of the UI; and
a timer cooperatively connected to said detector, wherein said timer determines a length of time said finger is at a position over a form component.
4. The touchless tablet of claim 3, wherein said plurality of ultrasonic transducers are microphones that alternate between capturing a sound of a finger tap on a surface presenting said form component and reflected ultrasonic signals off the finger, such that said detector affirms said position of said finger when said sound of said finger tap is detected.
5. The touchless tablet of claim 1, wherein said touchless tablet is:
a portion of a frame positioned on an adjacent second side of said form to allow a user to see an unobstructed view of said form,
wherein said frame contains at least one sensory element for detecting a finger movement within said frame.
6. The touchless tablet of claim 1, wherein said finger action includes at least one of a touchless depressing action, a touchless release action, a touchless hold action, and a touchless dragging action.
7. The touchless tablet of claim 1, where the display is communicatively coupled to said touchless tablet for exposing the graphical components, wherein said selection of a form component within a sensory field above the form corresponds to a selection of the graphical component on said display.
8. The touchless tablet of claim 1, wherein said selection performs an action on the form object that produces a response from a graphical component in the graphical application, wherein the graphical component corresponds to the form component.
9. The touchless tablet of claim 2, wherein said touchless sensing unit comprises:
a fixed array of ultrasonic transducers positioned on at least one side of a form for producing a sensing field over said form for determining a first and second coordinate of a finger within said sensing field;
the off-axis element comprises an ultrasonic transducer for providing a third dimension of ultrasonic sensory measurement for determining a third coordinate of said finger within said sensing field; and
a processor for determining a three-dimensional coordinate and action of said finger within said sensing field.
10. A virtual screen comprising:
a sensing unit containing an arrangement of sensing elements for generating an ultrasonic touchless sensory field representing the virtual screen;
a camera that takes a picture of a form, where the form is thereafter not needed and removed,
a processor communicatively coupled to said touchless sensing unit and the camera for
determining a location of form components in the picture captured by said camera,
creating a virtual layout of virtual form components in said ultrasonic touchless sensory field based on the location of the form components in the picture;
presenting on a display a visual layout of visual form components also based on the location of the form components in the picture;
determining a touchless finger location and a touchless finger action within said touchless sensory field; and
associating the touchless finger location and touchless finger action on a virtual form component in the virtual layout with a visual form component in the visual layout, and
controlling a graphical application of the visual form component according to said touchless finger location and said touchless finger action by applying a combinational weighting of a Time of Flight (TOF) ultrasonic distance measurement corresponding to a coarse estimated location of the finger and a differential Time of Flight (dTOF) ultrasonic measurement corresponding to a relative displacement as the finger accelerates or decelerates between a far distance and a close distance to the sensing unit.
11. The virtual screen of claim 10, wherein said sensing unit comprises two or more acoustic transducers to produce the touchless sensory field.
12. The virtual screen of claim 10, wherein said finger action includes at least one of a finger push action, a finger hold action, a finger release action, and a finger slide action.
13. The virtual screen of claim 10, where the touchless sensory field is produced over the display that receives commands for controlling a graphical application on the display and provides a third dimension of depth interaction.
14. The virtual screen of claim 10, further comprising a communication unit for transmitting said finger action and location using at least one among a USB, a BlueTooth, and a ZigBee communication link.
15. The virtual screen of claim 10, further comprising a communication device that receives said finger location and action and adjusts at least one user interface control of the communication device.
16. A method of navigation in a virtual screen comprising:
capturing by way of a camera a picture of a user interface and determining a location of user interface components in the picture;
creating by way of an ultrasonic sensing unit a three dimensional (3D) sensory space with a virtual layout of virtual user interface components corresponding to the location of user interface components in the picture;
presenting on a display a visual layout of visual user interface components also based on the location of user interface components in the picture;
estimating a time of flight (TOF) between when an ultrasonic pulse is transmitted from a first ultrasonic transducer and when a reflection of said ultrasonic pulse off the finger in said 3D sensory space is received from a plurality of ultrasonic transducers;
estimating a differential time of flight (dTOF) between a first reflected ultrasonic signal and a second reflected ultrasonic signal received from the ultrasonic transducer for the plurality of ultrasonic transducers;
estimating a finger position within said 3D sensory space corresponding to one of the virtual user interface components in the virtual layout by applying a combinational weighting of the TOF and the dTOF as the finger accelerates and decelerates between a far distance and a close distance, where the TOF corresponds to an estimated location of the finger and the dTOF corresponds to a relative displacement of the finger;
determining one of a forward or retracting finger movement associated with said finger position; and
controlling the visual user interface component in the visual layout corresponding to the virtual user interface component according to said finger movement by adjusting a graphical portion of said user interface to where the finger is pointed with respect to said forward or retracting finger movement.
17. The method of claim 16, wherein said determining comprises:
identifying a first depth of a finger during said forward or retracting finger movement within said 3D ultrasonic sensory space;
identifying a second depth of said finger during said forward or retracting finger movement within said 3D ultrasonic sensory space;
creating a vector from said first and said second depth based on said TOF and said dTOF ultrasonic measurements;
predicting a destination of said finger on said graphical application from said vector; and
adjusting a region of focus of a zoom-in and zoom-out of the graphical portion of said visual user interface based on said destination with respect to said forward or retracting finger movement.
18. The method of claim 17, wherein a distant depth of said 3D sensory space corresponds to a broad region and a close depth of said 3D sensory space corresponds to a narrow region of focus, such that said zoom-out broadens when the user's finger is farther from a sensing unit producing said 3D sensory space, and said zoom-in narrows when the user's finger is closer to said sensing unit.
US11/683,416 2006-03-13 2007-03-07 Touchless tablet method and system thereof Expired - Fee Related US8614669B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/683,416 US8614669B2 (en) 2006-03-13 2007-03-07 Touchless tablet method and system thereof

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US78117706P 2006-03-13 2006-03-13
US11/683,416 US8614669B2 (en) 2006-03-13 2007-03-07 Touchless tablet method and system thereof

Publications (2)

Publication Number Publication Date
US20070211031A1 US20070211031A1 (en) 2007-09-13
US8614669B2 true US8614669B2 (en) 2013-12-24

Family

ID=38478451

Family Applications (1)

Application Number Title Priority Date Filing Date
US11/683,416 Expired - Fee Related US8614669B2 (en) 2006-03-13 2007-03-07 Touchless tablet method and system thereof

Country Status (1)

Country Link
US (1) US8614669B2 (en)

Families Citing this family (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8411034B2 (en) * 2009-03-12 2013-04-02 Marc Boillot Sterile networked interface for medical systems
US8219936B2 (en) * 2007-08-30 2012-07-10 Lg Electronics Inc. User interface for a mobile device using a user's gesture in the proximity of an electronic device
US8432365B2 (en) * 2007-08-30 2013-04-30 Lg Electronics Inc. Apparatus and method for providing feedback for three-dimensional touchscreen
US20090167719A1 (en) * 2007-11-02 2009-07-02 Woolley Richard D Gesture commands performed in proximity but without making physical contact with a touchpad
EP2212762A4 (en) * 2007-11-19 2011-06-29 Cirque Corp Touchpad combined with a display and having proximity and touch sensing capabilities
US20090213067A1 (en) * 2008-02-21 2009-08-27 International Business Machines Corporation Interacting with a computer via interaction with a projected image
US8743091B2 (en) * 2008-07-31 2014-06-03 Apple Inc. Acoustic multi-touch sensor panel
US8917239B2 (en) * 2012-10-14 2014-12-23 Neonode Inc. Removable protective cover with embedded proximity sensors
DE102009032262A1 (en) * 2009-07-08 2011-01-13 Steinbichler Optotechnik Gmbh Method for determining the 3D coordinates of an object
US20110115892A1 (en) * 2009-11-13 2011-05-19 VisionBrite Technologies, Inc. Real-time embedded visible spectrum light vision-based human finger detection and tracking method
US20110298708A1 (en) * 2010-06-07 2011-12-08 Microsoft Corporation Virtual Touch Interface
US9983679B2 (en) * 2010-08-19 2018-05-29 Elliptic Laboratories As Interaction with portable devices
US8760432B2 (en) * 2010-09-21 2014-06-24 Visteon Global Technologies, Inc. Finger pointing, gesture based human-machine interface for vehicles
US20120095575A1 (en) * 2010-10-14 2012-04-19 Cedes Safety & Automation Ag Time of flight (tof) human machine interface (hmi)
US8730190B2 (en) * 2011-01-13 2014-05-20 Qualcomm Incorporated Detect motion generated from gestures used to execute functionality associated with a computer system
US20140317577A1 (en) * 2011-02-04 2014-10-23 Koninklijke Philips N.V. Gesture controllable system uses proprioception to create absolute frame of reference
JP5766479B2 (en) * 2011-03-25 2015-08-19 京セラ株式会社 Electronic device, control method, and control program
CA2833928C (en) 2011-04-22 2018-01-02 Pepsico, Inc. Beverage dispensing system with social media capabilities
WO2013067020A1 (en) 2011-11-01 2013-05-10 Stephen Lim Dispensing system and user interface
US9400575B1 (en) * 2012-06-20 2016-07-26 Amazon Technologies, Inc. Finger detection for element selection
US9213436B2 (en) 2012-06-20 2015-12-15 Amazon Technologies, Inc. Fingertip location for gesture input
WO2014105183A1 (en) * 2012-12-28 2014-07-03 Intel Corporation Three-dimensional user interface device
DE102013203918A1 (en) * 2013-03-07 2014-09-11 Siemens Aktiengesellschaft A method of operating a device in a sterile environment
WO2014191036A1 (en) 2013-05-29 2014-12-04 Brainlab Ag Gesture feedback for non-sterile medical displays
US10354242B2 (en) * 2014-07-31 2019-07-16 Ncr Corporation Scanner gesture recognition
US10108269B2 (en) * 2015-03-06 2018-10-23 Align Technology, Inc. Intraoral scanner with touch sensitive input
US10671222B2 (en) 2015-09-30 2020-06-02 Apple Inc. Touch sensor pattern for edge input detection
US11036318B2 (en) 2015-09-30 2021-06-15 Apple Inc. Capacitive touch or proximity detection for crown
WO2018023080A2 (en) 2016-07-29 2018-02-01 Apple Inc. Methodology and application of acoustic touch detection
US10606418B2 (en) 2017-03-31 2020-03-31 Apple Inc. Ultrasonic touch detection on stylus
US11157115B2 (en) 2017-03-31 2021-10-26 Apple Inc. Composite cover material for sensitivity improvement of ultrasonic touch screens
US11334196B2 (en) 2017-05-24 2022-05-17 Apple Inc. System and method for acoustic touch and force sensing
US11144158B2 (en) 2017-05-24 2021-10-12 Apple Inc. Differential acoustic touch and force sensing
CN208722170U (en) 2017-05-24 2019-04-09 苹果公司 It touches and power sensitive device, electronic equipment and wearable audio frequency apparatus
US10949030B2 (en) 2017-09-26 2021-03-16 Apple Inc. Shear-poled curved piezoelectric material
US10802651B2 (en) 2018-01-30 2020-10-13 Apple Inc. Ultrasonic touch detection through display
US11366552B2 (en) 2018-02-06 2022-06-21 Apple, Inc. Ultrasonic polarizer
US10725573B2 (en) 2018-08-06 2020-07-28 Apple Inc. Annular piezoelectric structure for ultrasonic touch sensing
CN112764592A (en) * 2021-01-15 2021-05-07 安徽省东超科技有限公司 Touch feedback system, terminal device, touch feedback control method and storage medium
CN116702898B (en) * 2023-08-04 2023-11-03 北京语言大学 Knowledge representation learning-based cultural relics and literary knowledge migration method and system

Patent Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4506354A (en) * 1982-09-30 1985-03-19 Position Orientation Systems, Ltd. Ultrasonic position detecting system
US5274363A (en) 1991-02-01 1993-12-28 Ibm Interactive display system
US5367614A (en) * 1992-04-01 1994-11-22 Grumman Aerospace Corporation Three-dimensional computer image variable perspective display system
US5739814A (en) * 1992-09-28 1998-04-14 Sega Enterprises Information storage system and book device for providing information in response to the user specification
US6137427A (en) 1994-04-05 2000-10-24 Binstead; Ronald Peter Multiple input proximity detector and touchpad system
US5650799A (en) * 1994-04-15 1997-07-22 Canon Kabushiki Kaisha Programmable function keys for a networked imaging computer system
US6130663A (en) 1997-07-31 2000-10-10 Null; Nathan D. Touchless input method and apparatus
US6313825B1 (en) 1998-12-28 2001-11-06 Gateway, Inc. Virtual input device
US6614422B1 (en) * 1999-11-04 2003-09-02 Canesta, Inc. Method and apparatus for entering data using a virtual input device
US20020024506A1 (en) * 1999-11-09 2002-02-28 Flack James F. Motion detection and tracking system to control navigation and display of object viewers
US7130754B2 (en) 2002-03-19 2006-10-31 Canon Kabushiki Kaisha Sensor calibration apparatus, sensor calibration method, program, storage medium, information processing method, and information processing apparatus
US7092109B2 (en) 2003-01-10 2006-08-15 Canon Kabushiki Kaisha Position/orientation measurement method, and position/orientation measurement apparatus
US20060092022A1 (en) 2003-02-06 2006-05-04 Cehelnik Thomas G Method and apparatus for detecting charge and proximity
US7078911B2 (en) 2003-02-06 2006-07-18 Cehelnik Thomas G Patent application for a computer motional command interface
US7081884B2 (en) 2003-04-25 2006-07-25 Microsoft Corporation Computer input device with angular displacement detection capabilities
US6937227B2 (en) 2003-07-14 2005-08-30 Iowa State University Research Foundation, Inc. Hand-held pointing device
US20050273533A1 (en) * 2004-06-07 2005-12-08 Broadcom Corporation Computer system, and device, in particular computer mouse or mobile telephone for use with the computer system
US20060161871A1 (en) 2004-07-30 2006-07-20 Apple Computer, Inc. Proximity detector in handheld device
US20060256090A1 (en) 2005-05-12 2006-11-16 Apple Computer, Inc. Mechanical overlay

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
USRE48054E1 (en) * 2005-01-07 2020-06-16 Chauncy Godwin Virtual interface and control device
US20130321662A1 (en) * 2011-02-08 2013-12-05 Furukawa Electric Co., Ltd. Optical module
US20120314022A1 (en) * 2011-06-13 2012-12-13 Samsung Electronics Co., Ltd. Display apparatus and method for controlling display apparatus and remote controller
US9491520B2 (en) * 2011-06-13 2016-11-08 Samsung Electronics Co., Ltd. Display apparatus and method for controlling display apparatus and remote controller having a plurality of sensor arrays
US9628698B2 (en) * 2012-09-07 2017-04-18 Pixart Imaging Inc. Gesture recognition system and gesture recognition method based on sharpness values
US9501810B2 (en) 2014-09-12 2016-11-22 General Electric Company Creating a virtual environment for touchless interaction
US9730255B1 (en) * 2016-08-30 2017-08-08 Polycom, Inc. Room-specific pairing via a combined ultrasonic beacon/bluetooth approach
US10650621B1 (en) 2016-09-13 2020-05-12 Iocurrents, Inc. Interfacing with a vehicular controller area network
US11232655B2 (en) 2016-09-13 2022-01-25 Iocurrents, Inc. System and method for interfacing with a vehicular controller area network
US10545219B2 (en) * 2016-11-23 2020-01-28 Chirp Microsystems Three dimensional object-localization and tracking using ultrasonic pulses
US20200166603A1 (en) * 2016-11-23 2020-05-28 Chirp Microsystems, Inc. Three Dimensional Object-Localization And Tracking Using Ultrasonic Pulses
US20180143292A1 (en) * 2016-11-23 2018-05-24 Chirp Microsystems Three dimensional object-localization and tracking using ultrasonic pulses
US10816639B2 (en) * 2016-11-23 2020-10-27 Chirp Microsystems, Inc. Three dimensional object-localization and tracking using ultrasonic pulses
US11016167B2 (en) 2016-11-23 2021-05-25 Chirp Microsystems Three dimensional object-localization and tracking using ultrasonic pulses
US11188157B1 (en) 2020-05-20 2021-11-30 Meir SNEH Touchless input device with sensor for measuring linear distance
WO2023196686A3 (en) * 2022-03-28 2023-12-07 Carnegie Mellon University Holographic light curtains

Also Published As

Publication number Publication date
US20070211031A1 (en) 2007-09-13

Similar Documents

Publication Publication Date Title
US8614669B2 (en) Touchless tablet method and system thereof
US8334841B2 (en) Virtual user interface method and system thereof
US8139029B2 (en) Method and device for three-dimensional sensing
US8169404B1 (en) Method and device for planary sensory detection
US20230251720A1 (en) Human Interactions with Mid-Air Haptic Systems
US7834850B2 (en) Method and system for object control
KR101850680B1 (en) Detecting touch input force
US7075524B2 (en) Coordinate input apparatus, control method thereof, and program
US7834847B2 (en) Method and system for activating a touchless control
JP5615270B2 (en) Object positioning
US8743089B2 (en) Information processing apparatus and control method thereof
US20180188894A1 (en) Virtual Touchpads For Wearable And Portable Devices
US9760215B2 (en) Method for detecting a touch-and-hold touch event and corresponding device
JP2011216088A (en) Projection system with touch-sensitive projection image
CN109313502B (en) Tap event location using selection device
US8525780B2 (en) Method and apparatus for inputting three-dimensional location
JP2016071836A (en) Interactive display method, control method, and system for achieving hologram display
KR20160140324A (en) Touch recognition apparatus and control methods thereof
EP3326052A1 (en) Apparatus and method for detecting gestures on a touchpad
JP4502896B2 (en) Projection display
CN103069364B (en) For distinguishing the system and method for input object
De Pra et al. Infrared vs. ultrasonic finger detection on a virtual piano keyboard
KR20150084756A (en) Location tracking systme using sensors equipped in smart phone and so on
WO2018205478A1 (en) Directional equipment positioning device

Legal Events

Date Code Title Description
AS Assignment

Owner name: NAVISENSE, LLC, FLORIDA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BOILLOT, MARC;REEL/FRAME:018977/0676

Effective date: 20070307

REMI Maintenance fee reminder mailed
LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.)

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20171224