US9201520B2 - Motion and context sharing for pen-based computing inputs - Google Patents

Motion and context sharing for pen-based computing inputs

Info

Publication number
US9201520B2
US9201520B2 (application US13/903,944 / US201313903944A)
Authority
US
United States
Prior art keywords
touch
pen
computing device
motion
sensor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US13/903,944
Other versions
US20130257777A1 (en)
Inventor
Hrvoje Benko
Xiang Chen
Kenneth Paul Hinckley
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microsoft Technology Licensing LLC
Original Assignee
Microsoft Technology Licensing LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US13/026,058 (patent US8988398B2)
Application filed by Microsoft Technology Licensing LLC filed Critical Microsoft Technology Licensing LLC
Priority to US13/903,944
Assigned to MICROSOFT CORPORATION. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HINCKLEY, KENNETH PAUL; BENKO, HRVOJE; CHEN, XIANG
Publication of US20130257777A1
Assigned to MICROSOFT TECHNOLOGY LICENSING, LLC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MICROSOFT CORPORATION
Application granted
Publication of US9201520B2
Legal status: Active (expiration date adjusted)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 - Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033 - Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0354 - Pointing devices displaced or positioned by the user, with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
    • G06F3/03545 - Pens or stylus
    • G06F3/03547 - Touch pads, in which fingers can move on a surface
    • G06F3/0346 - Pointing devices displaced or positioned by the user, with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • G06F3/038 - Control and interface arrangements therefor, e.g. drivers or device-embedded control circuitry
    • G06F3/0383 - Signal control means within the pointing device

Definitions

  • Many computing devices (e.g., tablets, phones, etc.) accept input from pen type input devices (e.g., pointer or stylus type input devices) in combination with a digitizer component, typically alongside touch and motion based input techniques.
  • Various conventional input techniques have adapted pen type devices to provide auxiliary input channels including various combinations of tilting, rolling, and pressure sensing.
  • one limitation of many of these techniques is that they rely on sensors coupled to the computing device to sense pen movements or hover conditions, which requires the pen to be in close proximity to the digitizer so that it can be sensed at all.
  • Many such techniques operate in a context where the pen is used to perform various input actions that are then sensed and interpreted by the computing device.
  • one conventional technique considers pen rolling during handwriting and sketching tasks, as well as various intentional pen rolling gestures.
  • these pen rolling techniques operate in close proximity to the computing device based on sensors associated with the computing device.
  • Related techniques that require the pen type input device to maintain contact (or extreme proximity) with the digitizer include various tilt and pressure based pen inputs.
  • Various examples of such techniques consider separate or combined tilt and pressure inputs in various tablet-based settings for interacting with context menus, providing multi-parameter selection, object or menu manipulation, widget control, etc.
  • various conventional techniques use an accelerometer-enhanced pen to sense movements when the pen or stylus is not touching the display. The sensed movements are then provided to the computing device for input purposes such as shaking the stylus to cycle through color palettes, and rolling the stylus to pick colors or scroll web pages.
  • a somewhat related technique provides a pointing device having multiple inertial sensors to enable three-dimensional pointing in a “smart room” environment. This technique enables a user to gesture to objects in the room and speak voice commands.
  • Other techniques use 3D spatial input to employ stylus-like devices in free space, but require absolute tracking technologies that are generally impractical for mobile pen-and-tablet type interactions.
  • various conventional systems combine pen tilt with direct-touch input.
  • One such system uses a stylus that senses which corners, edges, or sides of the stylus come into contact with a tabletop display.
  • This system also combines direct multi-touch input with stylus orientation, allowing users to tap a finger on a control while holding or “tucking” the stylus in the palm.
  • this system requires contact with the display in order to sense tilt or other motions.
  • Related techniques combine both touch and motion for mobile devices by using direct touch to cue the system to recognize shaking and other motions of pen type input devices.
  • a “Motion and Context Sharing Technique,” as described herein, provides a variety of input techniques based on various combinations of pen input, direct-touch input, and motion-sensing inputs.
  • the Motion and Context Sharing Techniques described herein leverage inputs from some or all of the sensors of a “sensor pen” in combination with displays or other surfaces that support both pen or stylus inputs and direct multi-touch input.
  • various embodiments of the Motion and Context Sharing Technique consider various combinations of pen stroke, pressure, motion, and other inputs in the context of touch-sensitive displays, in combination with various hybrid techniques that incorporate simultaneous, concurrent, sequential, and/or interleaved, sensor pen inputs and touch inputs (i.e., finger, palm, hand, etc.) on the display or other touch sensitive surface of the computing device.
  • pressure may refer to various sensor types and configurations.
  • pressure may refer to pen tip pressure exerted on a display.
  • pen tip pressure is typically sensed by some type of pressure transducer inside the pen, but it is also possible to have the pen tip pressure sensing done by the display/digitizer itself in some devices.
  • pressure or pressure sensing or the like may also refer to a separate channel of sensing the grip pressure of the hand (or fingers) contacting an exterior casing or surface of the pen.
  • Various sensing modalities employed by the Motion and Context Sharing Technique may separately or concurrently consider or employ both types of pressure sensing (i.e., pen tip pressure and pen grip pressure) for initiating various motion gestures.
  • various devices used to enable some of the many embodiments of a “Motion and Context Sharing Technique,” as described herein, include pens, pointers, stylus type input devices, etc., that are collectively referred to herein as a “sensor pen” for purposes of discussion.
  • the functionality described herein may be implemented in any desired form factor, e.g., wand, staff, ball racquet, toy sword, etc., for use with various gaming devices, gaming consoles, or other computing devices.
  • the sensor pens described herein are adapted to incorporate various combinations of a power supply and multiple sensors including, but not limited to inertial sensors, accelerometers, pressure sensors, grip sensors, near-field communication sensors, RFID tags and/or sensors, temperature sensors, microphones, magnetometers, capacitive sensors, gyroscopes, etc., in combination with various wireless communications capabilities for interfacing with various computing devices.
  • the sensor pens described herein have been further adapted to incorporate digital memory and/or computing capabilities that allow the sensor pens to act in combination or cooperation with other computing devices, other sensor pens, or even as a standalone computing device.
  • the various embodiments of the Motion and Context Sharing Technique described herein use various wired and/or wireless communication techniques integrated into the sensor pen to enable inputs and gestures that are not restricted to a near-proximity sensing range of the digitizer of the computing device.
  • another advantage of the Motion and Context Sharing Technique described herein is that the use of a wide range of sensors and a communication interface in the sensor pen enables a wide array of sensing dimensions that provide new input scenarios and gestures for computing devices.
  • Example input scenarios include, but are not limited to, using electronically active sensor pens for tablets, electronic whiteboards, or other direct-input devices since the sensor pen itself integrates motion and/or grip-sensing and other sensor-based capabilities that allow richer in-air pen-based gestures at arbitrary distances from the computing device, as well as richer sensing of user context information.
  • the Motion and Context Sharing Technique provides various mechanisms that can be used to optimize the behavior of computing devices and user experience based on concurrently sensing input states of the sensor pen and the computing device. Simple examples of this concept include, but are not limited to, alerting the user to a forgotten sensor pen, for example, as well as sensing whether the user is touching the display with a hand that is also grasping a sensor pen. This can be used, for example, to make fine-grained distinction among touch gestures as well as to support a variety of “palm rejection” techniques for eliminating or avoiding unintentional touch inputs.
  • the Motion and Context Sharing Technique described herein uses a sensor pen to enable a variety of input techniques and gestures based on various combinations of direct-touch inputs and sensor pen inputs that are not restricted to a near-proximity sensing range of the digitizer of a computing device.
  • other advantages of the Motion and Context Sharing Technique will become apparent from the detailed description that follows hereinafter when taken in conjunction with the accompanying drawing figures.
  • FIG. 1 shows a general operational overview of a “Motion and Context Sharing Technique” that illustrates interoperation between a sensor pen and a touch-sensitive computing device for triggering one or more motion gestures or other actions, as described herein.
  • FIG. 2 provides an exemplary architectural flow diagram that illustrates program modules for implementing various embodiments of the Motion and Context Sharing Technique, as described herein.
  • FIG. 3 provides an illustration of using the Motion and Context Sharing Technique to provide a correlated touch and sensor pen input mechanism, as described herein.
  • FIG. 4 provides an illustration of using the Motion and Context Sharing Technique to provide a roll to undo input mechanism, as described herein.
  • FIG. 5 provides an illustration of using the Motion and Context Sharing Technique to provide a finger tap input mechanism, as described herein.
  • FIG. 6 provides an illustration of using the Motion and Context Sharing Technique to provide a touch and spatter input mechanism for painting, drawing, or sketching type applications, as described herein.
  • FIG. 7 provides an illustration of using the Motion and Context Sharing Technique to provide a touch and tilt for layers input mechanism, as described herein.
  • FIG. 8 provides an illustration of using the Motion and Context Sharing Technique to provide a touch and roll to rotate input mechanism, as described herein.
  • FIG. 9 provides an illustration of using the Motion and Context Sharing Technique to provide a vertical menu input mechanism, as described herein.
  • FIG. 10 provides an illustration of using the Motion and Context Sharing Technique to provide a hard tap input mechanism, as described herein.
  • FIG. 11 illustrates a general system flow diagram that illustrates exemplary methods for implementing various embodiments of the Motion and Context Sharing Technique, as described herein.
  • FIG. 12 is a general system diagram depicting a simplified general-purpose computing device having simplified computing and I/O capabilities, in combination with a sensor pen having various sensors, power and communications capabilities, for use in implementing various embodiments of the Motion and Context Sharing Technique, as described herein.
  • a “Motion and Context Sharing Technique,” as described herein, provides various techniques for using a pen or stylus enhanced with a power supply and multiple sensors, i.e., a “sensor pen,” to enable a variety of input techniques and gestures. These techniques consider various combinations of pen stroke, pressure, motion, and other inputs in the context of touch-sensitive displays, in combination with various hybrid techniques that incorporate simultaneous, concurrent, sequential, and/or interleaved, sensor pen inputs and touch inputs (i.e., finger, palm, hand, etc.) on displays or other touch sensitive surfaces. These techniques enable a wide variety of motion-gesture inputs, as well as sensing the context of how a user is holding or using the sensor pen, even when the pen is not in contact, or even within sensing range, of the digitizer of a touch sensitive computing device.
  • the functionality described herein may be implemented in any desired form factor, e.g., wand, staff, ball racquet, toy sword, etc., for use with various gaming devices, gaming consoles, or other computing devices.
  • the sensor pens described herein are adapted to incorporate a power supply and various combinations of sensors including, but not limited to inertial sensors, accelerometers, pressure sensors, grip sensors, near-field communication sensors, RFID tags and/or sensors, temperature sensors, microphones, magnetometers, capacitive sensors, gyroscopes, etc., in combination with various wireless communications capabilities for interfacing with various computing devices.
  • any or all of these sensors may be multi-axis or multi-position sensors (e.g., 3-axis accelerometers, gyroscopes, and magnetometers).
  • the sensor pens described herein have been further adapted to incorporate memory and/or computing capabilities that allow the sensor pens to act in combination or cooperation with touch-sensitive computing devices having one or more touch-sensitive surfaces or regions (e.g., touch screen, touch sensitive bezel or case, sensors for detection of hover-type inputs, optical touch sensors, etc.).
  • touch-sensitive computing devices include both single- and multi-touch devices. Examples of touch-sensitive computing devices include, but are not limited to, touch-sensitive display devices connected to a computing device, touch-sensitive phone devices, touch-sensitive media players, touch-sensitive e-readers, notebooks, netbooks, booklets (dual-screen), tablet type computers, or any other device having one or more touch-sensitive surfaces or input modalities.
  • the touch-sensitive region of such computing devices need not be associated with a display; furthermore, the location or type of contact-sensitive region (e.g., front of a device on the display vs. back of the device without any associated display) may be considered as an input parameter for initiating one or more motion gestures (i.e., user interface actions corresponding to the motion gesture).
  • touch will generally refer to physical user contact (e.g., finger, palm, hand, etc.) on touch sensitive displays or other touch sensitive surfaces of a computing device using capacitive sensors or the like.
  • virtual touch inputs relative to projected displays, electronic whiteboards, or other surfaces or objects are treated by the Motion and Context Sharing Technique in the same manner as actual touch inputs on a touch-sensitive surface.
  • Such virtual touch inputs are detected using conventional techniques such as, for example, using cameras or other imaging technologies to track user finger movement relative to a projected image, relative to text on an electronic whiteboard, relative to physical objects, etc.
  • the Motion and Context Sharing Technique is operable with a wide variety of touch and flex-sensitive materials for determining or sensing touch or pressure.
  • one touch-sensing technology adapted for use by the Motion and Context Sharing Technique determines touch or pressure by evaluating a light source relative to some definite deformation of a touched surface to sense contact.
  • sensor pens may include multiple types of touch and/or pressure sensing substrates.
  • sensor pens may be both touch-sensitive and/or pressure sensitive using any combination of sensors, such as, for example, capacitive sensors, pressure sensors, flex- or deformation-based sensors, etc.
  • the Motion and Context Sharing Technique uses a variety of known techniques for differentiating between valid and invalid touches received by one or more touch-sensitive surfaces of the touch-sensitive computing device.
  • valid touches and contacts include single, simultaneous, concurrent, sequential, and/or interleaved user finger touches (including gesture type touches), pen or stylus touches or inputs, hover-type inputs, or any combination thereof.
  • the Motion and Context Sharing Technique disables or ignores one or more regions or sub-regions of touch-sensitive input surfaces that are expected to receive unintentional contacts, or intentional contacts not intended as inputs, for device or application control purposes. Examples of contacts that may not be intended as inputs include, but are not limited to, a user's palm resting on a touch screen while the user writes on that screen with a stylus or pen, holding the computing device by gripping a touch sensitive bezel, etc.
  • contact or “pen input” as used herein generally refer to interaction involving physical contact (or hover) of the sensor pen with a touch sensitive surface or digitizer component of the computing device.
  • inputs provided by one or more sensors of the sensor pen will generally be referred to herein as a “sensor input,” regardless of whether or not the sensor pen is within a digitizer range or even in contact with a touch sensitive surface or digitizer component of the computing device.
  • any particular motion gestures or inputs described herein are derived from various combinations of simultaneous, concurrent, sequential, and/or interleaved pen inputs, user touches, and sensor inputs. It should be further understood that the current context or state of either or both the sensor pen and computing device is also considered when determining which motion gestures or inputs to activate or initialize.
  • the Motion and Context Sharing Technique described herein provides a number of advantages relating to pen or stylus based user interaction with touch-sensitive computing devices, including, but not limited to:
  • FIG. 1 provides a general operational overview of the Motion and Context Sharing Technique, illustrating interoperation between the sensor pen and the computing device to trigger one or more motion gestures or other actions. More specifically, FIG. 1 shows sensor pen 100 in communication with touch sensitive computing device 105 via communications link 110 . As discussed in further detail herein, the sensor pen 100 includes a variety of sensors. A sensor module 115 in the sensor pen 100 monitors readings of one or more of those sensors, and provides them to a communications module 120 to be sent to the computing device 105 .
  • the touch sensitive computing device 105 includes a sensor input module 125 that receives input from one or more sensors of sensor pen (e.g., inertial, accelerometers, pressure, grip, near-field communication, RFID, temperature, microphones, magnetometers, capacitive sensors, gyroscopes, etc.) and provides that input to a gesture activation module 135 .
  • the gesture activation module 135 also receives input from a touch input module 130 that receives input from user touch of one or more touch sensitive surfaces of the computing device 105 .
  • Given the sensor inputs and the touch inputs, if any, the gesture activation module 135 then evaluates simultaneous, concurrent, sequential, and/or interleaved sensor pen 100 inputs and touch inputs (i.e., finger, palm, hand, etc.) on displays or other touch sensitive surfaces of the computing device 105, relative to the contexts of the sensor pen and computing device, to trigger or activate one or more motion gestures (e.g., motion gestures 140 through 170, discussed in further detail herein).
  • the Motion and Context Sharing Technique senses various properties of the sensor pen relative to various distances between the sensor pen and the computing device (i.e., contact, hover range, and beyond hover range), and whether the motions of the sensor pen are correlated with a concurrent user touch of a display or some other touch-sensitive surface of the computing device or with some motion of the computing device. These sensed properties of the sensor pen are then correlated with various touches or motions of the computing device, and may also be considered in view of the current contexts of either or both the sensor pen and computing device (e.g., whether they are being held, moving, power state, application status, etc.), and used to trigger a variety of “motion gestures” or other actions.
  • the Motion and Context Sharing Technique considers distance of the sensor pen above the digitizer of the computing device. While a variety of ranges can be considered, in various tested embodiments, three range categories were considered, including: physical contact, within hover range of the digitizer, or beyond range of the digitizer.
  • the activation mechanism for any particular motion gestures may consider these different ranges of the sensor pen, in combination with any other correlated inputs, touches, and/or motions of the computing device.
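  • To make the role of these range categories concrete, the following minimal Python sketch (not taken from the patent; the hover limit, gesture names, and function names are hypothetical) classifies a reported pen-to-digitizer distance into the three categories and uses the result, together with a concurrent-touch flag, to gate two of the example gestures discussed below.

```python
from enum import Enum, auto

class PenRange(Enum):
    CONTACT = auto()       # pen tip touching the digitizer
    HOVER = auto()         # within the digitizer's hover-sensing range
    BEYOND_HOVER = auto()  # outside digitizer range; only pen sensors apply

HOVER_LIMIT_MM = 20.0  # hypothetical hover-sensing limit; real digitizers vary

def classify_range(distance_mm: float) -> PenRange:
    """Map an estimated pen-to-digitizer distance onto the three range categories."""
    if distance_mm <= 0.0:
        return PenRange.CONTACT
    if distance_mm <= HOVER_LIMIT_MM:
        return PenRange.HOVER
    return PenRange.BEYOND_HOVER

def gesture_allowed(gesture: str, distance_mm: float, touch_active: bool) -> bool:
    """Example activation rule: some gestures only make sense at certain ranges."""
    rng = classify_range(distance_mm)
    if gesture == "roll_to_undo":
        # activated only when the pen is away from the digitizer
        return rng is PenRange.BEYOND_HOVER
    if gesture == "touch_and_tilt_for_layers":
        # requires a concurrent touch; pen may be at any distance
        return touch_active
    return False

if __name__ == "__main__":
    print(gesture_allowed("roll_to_undo", 150.0, touch_active=False))  # True
    print(gesture_allowed("roll_to_undo", 5.0, touch_active=False))    # False
```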
  • motion gestures can be grouped into various categories. Examples include motions that employ device orientation (of either or both the sensor pen and the computing device), whether absolute or relative, versus other motion types, which can be used to group different styles of motion input including hard contact forces, sensing particular patterns or gestures of movement, as well as techniques that use stability (e.g., the absence of motion or other particular sensor inputs) to trigger actions or specific contexts.
  • a wide variety of user-definable motion gestures and input scenarios are enabled by allowing the user to associate any desired combination of touches, contacts, and/or sensor pen motions with any desired action by the computing device.
  • any particular touch inputs or combinations of touch inputs are correlated with any desired sensor inputs and/or contacts, with those correlated inputs then being used to initiate any desired action by the computing device.
  • raw sensor readings can be reported or transmitted from the sensor pen to the computing device for evaluation and characterization by the computing device.
  • raw sensor data from inertial sensors within the sensor pen can be reported by the sensor pen to the computing device, with the computing device then determining pen orientation as a function of the data from the inertial sensors.
  • the sensor pen uses onboard computational capability to evaluate the input from various sensors.
  • sensor data derived from inertial sensors within the sensor pen can be processed by a computational component of the sensor pen to determine pen orientation, with the orientation of tilt then being reported by the sensor pen to the computing device.
  • any desired combination of reporting of raw sensor data and reporting of processed sensor data to the computing device by the sensor pen can be performed depending upon the computational capabilities of the sensor pen.
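  • As an illustration of the raw-versus-processed reporting trade-off described above, the sketch below shows how either the pen's onboard microcontroller or the computing device might estimate static pen tilt from a raw three-axis accelerometer sample. This is a hedged example only: the axis convention and function name are assumptions, not details from the patent.

```python
import math

def tilt_from_accel(ax: float, ay: float, az: float) -> tuple[float, float]:
    """Estimate pitch and roll (degrees) of the pen barrel from a raw 3-axis
    accelerometer sample, assuming the pen is roughly static so the dominant
    acceleration is gravity. Axis convention (hypothetical): x runs along the
    pen barrel, y and z are perpendicular to it."""
    pitch = math.degrees(math.atan2(ax, math.hypot(ay, az)))  # barrel elevation
    roll = math.degrees(math.atan2(ay, az))                   # rotation about barrel
    return pitch, roll

if __name__ == "__main__":
    # Pen lying flat and level: gravity entirely on the z axis.
    print(tilt_from_accel(0.0, 0.0, 9.81))   # ~ (0.0, 0.0)
    # Pen held vertically (tip down in this convention): gravity along the barrel.
    print(tilt_from_accel(-9.81, 0.0, 0.0))  # ~ (-90.0, 0.0)
```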
  • the following discussion will generally refer to reporting of sensor data to the computing device by the sensor pen for further processing by the computing device to determine various motion gestures or other input scenarios.
  • a few examples of various motion gestures and input scenarios enabled by the Motion and Context Sharing Technique are briefly introduced below.
  • touch and tilt for layers uses a concurrent user touch and sensor pen tilt to activate or interact with different layers displayed on a screen.
  • the touch and tilt for layers gesture is initiated with the sensor pen at any desired distance from the display.
  • Sensor pen tilt is determined by one or more of the pen sensors and reported to the computing device via the communications capabilities of the sensor pen. The touch and tilt for layers gesture is discussed in further detail herein.
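  • One plausible realization of the touch and tilt for layers mapping is sketched below: while a touch is held, the reported tilt angle is discretized into a layer index. The layer count, working angle range, and names are illustrative assumptions rather than details specified by the patent.

```python
def layer_from_tilt(tilt_deg: float, touch_active: bool,
                    num_layers: int = 4,
                    min_tilt: float = 10.0, max_tilt: float = 70.0) -> int | None:
    """While a touch is held, map the pen's tilt angle onto a layer index.
    The layer count and angle range are illustrative assumptions.
    Returns None when no touch is active (the gesture is not engaged)."""
    if not touch_active:
        return None
    # Clamp tilt into the working range, then discretize into num_layers bins.
    t = max(min_tilt, min(max_tilt, abs(tilt_deg)))
    fraction = (t - min_tilt) / (max_tilt - min_tilt)
    return min(num_layers - 1, int(fraction * num_layers))

if __name__ == "__main__":
    for tilt in (5.0, 20.0, 45.0, 69.0):
        print(tilt, "->", layer_from_tilt(tilt, touch_active=True))
```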
  • a related gesture referred to as a “touch and roll to rotate” gesture uses a concurrent user touch on a displayed object (e.g., text, shape, image, etc.) and a user initiated sensor pen rolling motion to rotate the touched object.
  • the touch and roll to rotate gesture is initiated with the sensor pen at any desired distance from the display, with sensor pen tilt being determined by one or more of the pen sensors and reported via the communications capabilities of the sensor pen.
  • the touch and roll to rotate gesture is discussed in further detail herein.
  • Another gesture referred to herein as a “roll to undo” gesture, uses sensors of the pen to detect a user initiated rolling motion of the sensor pen, with the result being to undo previous operations, regardless of whether those actions were user initiated or automatically initiated by the computing device or system.
  • the roll to undo gesture is initiated with the sensor pen at any desired distance from the display, with sensor pen rolling motions being determined by one or more of the pen sensors and reported via the communications capabilities of the sensor pen. The roll to undo gesture is discussed in further detail herein.
  • Another gesture referred to herein as a “vertical menu” gesture, uses sensors of the pen to detect a user initiated motion of the sensor pen that occurs with the pen coming into proximity range (or in contact with the screen) in an orientation approximately perpendicular relative to the computing device. For example, in one embodiment, bringing the pen close to the screen with the pen in this perpendicular pose will initiate opening or expansion of a vertical software menu or the like, while motions moving away from the computing device or display (i.e., motion beyond the proximity sensing range) may initiate closing or contraction of the vertical software menu or the like. Note that these motions may act in concert with a cursor location on or near the menu location at the time that the motion of the sensor pen is detected, or with any other locus of interaction with the computing device.
  • the vertical menu gesture is initiated with the sensor pen at any desired distance from the display, with sensor pen orientation and distance being determined by one or more of the pen sensors and reported via the communications capabilities of the sensor pen. Note that various additional embodiments and considerations with respect to the vertical menu concept are discussed in further detail in Section 2.8.1.
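  • A hypothetical activation test for the vertical menu gesture is sketched below: the menu opens when the pen enters proximity range while oriented within some tolerance of perpendicular to the display, and closes when the pen leaves that range. The tolerance value and callback behavior are assumptions for illustration.

```python
class VerticalMenu:
    """Toggle a vertical menu when the pen approaches the display while held
    roughly perpendicular to it, and close it when the pen moves away.
    The 15-degree tolerance is a hypothetical value."""

    def __init__(self, perpendicular_tolerance_deg: float = 15.0):
        self.tolerance = perpendicular_tolerance_deg
        self.open = False

    def update(self, pen_elevation_deg: float, in_proximity: bool) -> bool:
        # pen_elevation_deg: angle between the pen barrel and the display plane;
        # 90 degrees means the pen points straight at the display.
        perpendicular = abs(pen_elevation_deg - 90.0) <= self.tolerance
        if in_proximity and perpendicular and not self.open:
            self.open = True      # expand the menu at the current interaction locus
        elif not in_proximity and self.open:
            self.open = False     # collapse the menu as the pen moves away
        return self.open

if __name__ == "__main__":
    menu = VerticalMenu()
    print(menu.update(pen_elevation_deg=88.0, in_proximity=True))   # True (opens)
    print(menu.update(pen_elevation_deg=88.0, in_proximity=False))  # False (closes)
```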
  • the touch and spatter gesture uses sensors of the pen to detect a user-initiated rapping motion of the sensor pen while the user is touching the display surface of the computing device.
  • the touch and spatter gesture operates in a drawing or painting type application to initiate an action that mimics the effect of an artist rapping a loaded paint brush on her finger to produce spatters of paint on the paper.
  • the user touches the screen with a finger and then strikes the sensor pen against that finger (or any other finger, object, or surface). Note that, given the limited hover-sensing range of typical tablets, the tablet typically will not know the actual (x, y) location of the pen tip.
  • the touch and spatter gesture initiates an action that produces spatters (in a currently selected pen color) centered on the finger contact point.
  • the touch and spatter gesture is initiated with the sensor pen at any desired distance from the display, with sensor pen rapping motions being determined by one or more of the pen sensors and reported via the communications capabilities of the sensor pen. The touch and spatter gesture is discussed in further detail herein.
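  • Because the tablet may not know the pen tip's (x, y) position during this gesture, one simple interpretation, sketched below, is to center the spatter effect on the finger contact point whenever an accelerometer spike consistent with a rap is reported while a touch is held. The spike threshold and spatter parameters are invented for illustration.

```python
import random

ACCEL_SPIKE_THRESHOLD = 25.0  # m/s^2; hypothetical rap-detection threshold

def maybe_spatter(accel_magnitude: float,
                  touch_point: tuple[float, float] | None,
                  color: str, n_droplets: int = 12, spread: float = 40.0):
    """If a hard rap of the pen is detected while a finger touch is held,
    return droplet positions (display pixels) centered on the touch point."""
    if touch_point is None or accel_magnitude < ACCEL_SPIKE_THRESHOLD:
        return []
    cx, cy = touch_point
    return [(cx + random.gauss(0.0, spread), cy + random.gauss(0.0, spread), color)
            for _ in range(n_droplets)]

if __name__ == "__main__":
    droplets = maybe_spatter(31.5, touch_point=(400.0, 300.0), color="#306090")
    print(len(droplets), "spatter droplets around the finger contact point")
```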
  • a similar gesture referred to herein as a “barrel tap gesture” uses sensors of the pen to detect a user initiated finger tap on the sensor pen while the user is holding that pen.
  • the barrel tap gesture operates at any desired distance from the computing device or digitizer. More specifically, hard-contact finger taps on the barrel of the pen are used as a way to “replace” mechanical button input such as a pen barrel button, mouse button, enter key (or other keyboard press), or other button associated with the locus of interaction between the user and the computing device.
  • the sensor pen tap gestures are generally identified as acceleration spikes from accelerometers in or on the sensor pen, consistent with a finger strike on the pen barrel. The barrel tap gesture is discussed in further detail herein.
  • a “hard stroke gesture” uses sensors of the pen to detect a fast movement (i.e., acceleration beyond a predetermined threshold) of the sensor pen when not contacting the computing device.
  • the hard stroke gesture operates at any desired distance from the computing device or digitizer, but in some embodiments the gesture is accepted at the moment the pen physically strikes the digitizer screen, and may furthermore be registered when the pen strikes the screen within an acceptable range of relative orientation to the tablet display.
  • Such hard contact gestures with the screen can be difficult to sense with traditional pressure sensors due to sampling rate limitations, and because the relative orientation of the pen may not be known on traditional digitizers. More specifically, this particular sensor pen motion can be associated with any desired action of the computing device. In a tested embodiment, it was used to initiate a lasso type operation in a drawing program. The hard stroke gesture is discussed in further detail herein.
  • correlated sensor pen motions relative to the computing device include using pen sensors (e.g., accelerometers, pressure sensors, inertial sensors, grip sensors, etc.) to determine when the sensor pen is picked up or put down by the user, at which point any desired action can be initiated (e.g., exiting sleep mode on the computing device when the pen is picked up, or entering sleep mode if the pen is set down).
  • a similar technique considers motion of the computing device relative to the sensor pen. For example, if sensors in the computing device (e.g., accelerometers or other motion or positional sensors) indicate that the computing device is being held by a user that is walking or moving when sensors in the pen indicate that the sensor pen is stationary, an automated alert (e.g., visible, audible, tactile, etc.) is initiated by the computing device. Similarly, speakers or lights coupled to the sensor pen can provide any combination of visible and audible alerts to alert the user that the computing device is moving while the sensor pen is stationary.
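  • A minimal sketch of how these pick-up, put-down, and relative-motion cues might drive device behavior follows; the motion threshold, state names, and resulting actions are assumptions chosen only to illustrate the pattern.

```python
MOTION_THRESHOLD = 0.5  # hypothetical motion-energy threshold (arbitrary units)

def is_moving(motion_energy: float) -> bool:
    return motion_energy > MOTION_THRESHOLD

def react(pen_motion: float, device_motion: float, device_asleep: bool) -> str:
    """Combine pen and computing-device motion states into a simple action."""
    pen_moving = is_moving(pen_motion)
    device_moving = is_moving(device_motion)
    if pen_moving and device_asleep:
        return "exit sleep mode (pen picked up)"
    if not pen_moving and not device_moving and not device_asleep:
        return "enter sleep mode (pen set down, device idle)"
    if device_moving and not pen_moving:
        return "alert user: computing device is moving but the pen is stationary"
    return "no action"

if __name__ == "__main__":
    print(react(pen_motion=1.2, device_motion=0.1, device_asleep=True))
    print(react(pen_motion=0.0, device_motion=1.4, device_asleep=False))
```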
  • the “Motion and Context Sharing Technique,” provides various techniques for using a “sensor pen” to enable a variety of input techniques based on various combinations of direct-touch inputs and sensor pen inputs that are not restricted to a near-proximity sensing range of the digitizer of a computing device.
  • the processes summarized above are illustrated by the general system diagram of FIG. 2 .
  • the system diagram of FIG. 2 illustrates the interrelationships between program modules for implementing various embodiments of the Motion and Context Sharing Technique, as described herein.
  • While the system diagram of FIG. 2 illustrates a high-level view of various embodiments of the Motion and Context Sharing Technique, FIG. 2 is not intended to provide an exhaustive or complete illustration of every possible embodiment of the Motion and Context Sharing Technique as described throughout this document.
  • any boxes and interconnections between boxes that may be represented by broken or dashed lines in FIG. 2 represent alternate embodiments of the Motion and Context Sharing Technique described herein, and that any or all of these alternate embodiments, as described below, may be used in combination with other alternate embodiments that are described throughout this document.
  • Note also that various elements of FIG. 2 were previously introduced in FIG. 1 , and that any common elements between these two figures share the same element numbers.
  • the processes enabled by the Motion and Context Sharing Technique begin operation by providing sensor inputs from the sensor pen, touch inputs from the computing device, and context information from either or both the sensor pen and computing device to the aforementioned gesture activation module.
  • the gesture activation module evaluates the available inputs and information to trigger one or more motion gestures, along with any corresponding user interface (UI).
  • the sensor input module 125 provides sensor inputs received from one or more sensors coupled to the sensor pen to the gesture activation module 135 .
  • the touch input module 130 provides touch inputs detected by any touch sensitive surface of the computing device to the gesture activation module 135 .
  • virtual “touches” on various projections, surfaces, displays, objects, etc. are also considered by using various well-known techniques to track user touches on any surface, display or object.
  • the Motion and Context Sharing Technique also rejects or ignores unwanted or unintended touches.
  • An optional palm rejection module 205 is used for this purpose.
  • the palm rejection module 205 evaluates any touch to determine whether that touch was intended, and then either accepts that touch as input for further processing by the touch input module 130 , or rejects that touch.
  • the palm rejection module 205 disables or ignores (i.e., “rejects”) user touches on or near particular regions of any touch-sensitive surfaces, depending upon the context of that touch. Note that “rejected” touches may still be handled by the Motion and Context Sharing Technique as an input to know where the palm is planted, but flagged such that unintentional button presses or gestures will not be triggered in the operating system or applications by accident.
  • a context reporting module 210 reports this information to the gesture activation module 135 .
  • the context reporting module 210 determines the current context of the computing device and/or the sensor pen, and reports that context information to the gesture activation module 135 for use in determining motion gestures.
  • Context examples include, but are not limited to, sensor pen and computing device individual or relative motions, whether they are being held, power states, application status, etc.
  • the presence or absence (loss of) a pen signal, as well as the signal strength may be used as an aspect of context as well. For example, in the case of multiple pens, simple triangulation based on signal strengths enables the Motion and Context Sharing Technique to determine approximate relative spatial location and proximity of one or more users.
  • Examples of various motion gestures triggered or activated by the gesture activation module 135 include motion gestures 140 through 165 , and motion gestures 215 through 240 .
  • the aforementioned user defined motion gestures 170 are defined via a user interface 245 that allows the user to define one or more motion gestures using sensor pen inputs relative to any combination of touch and context inputs.
  • Each of the motion gestures 140 through 165 , and motion gestures 215 through 240 illustrated in FIG. 2 are described in further detail throughout Section 2 of this document, with examples of many of these motion gestures being illustrated by FIG. 3 through FIG. 10 .
  • the Motion and Context Sharing Technique provides various techniques for using a “sensor pen” to enable a variety of input techniques based on various combinations of direct-touch inputs and sensor pen inputs that are not restricted to a near-proximity sensing range of the digitizer of a computing device.
  • the following sections provide a detailed discussion of the operation of various embodiments of the Motion and Context Sharing Technique, and of exemplary methods for implementing the program modules described in Section 1 with respect to FIG. 1 and FIG. 2 .
  • the Motion and Context Sharing Technique-based processes described herein provide various techniques for using a “sensor pen” to enable a variety of input techniques based on various combinations of direct-touch inputs and sensor pen inputs that are not restricted to a near-proximity sensing range of the digitizer of a computing device.
  • the Motion and Context Sharing Technique considers various combinations of sensor pen stroke, pressure, motion, and other sensor pen inputs to enable various hybrid input techniques that incorporate simultaneous, concurrent, sequential, and/or interleaved, sensor pen inputs and touch inputs (i.e., finger, palm, hand, etc.) on displays or other touch sensitive surfaces.
  • This enables a variety of motion-gesture inputs relating to the context of how the sensor pen is used or held, even when the pen is not in contact or within sensing range of the computing device digitizer.
  • any particular touch inputs or combinations of touch inputs relative to a computing device are correlated with any desired sensor pen inputs, with those correlated inputs then being used to initiate any desired action by the computing device.
  • the sensor pen includes a variety of sensors, communications capabilities, a power supply, and logic circuitry to enable collection of sensor data and execution of firmware or software instantiated within memory accessible to the logic circuitry.
  • the sensor pen was powered by an internal battery, and used a conventional microcontroller device to collect the sensor data and to run firmware.
  • the sensor pen in this tested embodiment included a micro-electro-mechanical systems (MEMS) type three-axis gyroscope, as well as MEMS type three-axis accelerometer and magnetometer modules.
  • Wireless communications capabilities were provided in the tested embodiment of the sensor pen by an integral 2.4 GHz transceiver operating at 2 Mbps.
  • the sensor pen firmware sampled the sensors at 200 Hz and wirelessly transmitted sensor data to an associated tablet-based computing device.
  • Various components of this exemplary sensor pen are discussed herein in the context of user interaction with either or both the sensor pen and a tablet-type computing device (e.g., whether they are being held, moving, power state, application status, etc.) for initiating various motion gestures and other inputs.
  • this exemplary sensor pen implementation is discussed only for purposes of illustration and explanation, and is not intended to limit the scope or functionality of the sensor pen with respect to such things as the types of sensors associated with the sensor pen, the communications capabilities of the sensor pen, etc. Further, this exemplary sensor pen implementation is not intended to limit the scope or functionality of the Motion and Context Sharing Technique or of any motion gestures or other input techniques discussed herein.
  • the Motion and Context Sharing Technique considers a variety of categories of sensor pen motion and context relative to touch, motion and context of an associated touch sensitive computing device or display device. For purposes of explanation, the following discussion will refer to a sketching or drawing type application in the context of a tablet-type computing device. However, it should be understood that both the sensor pen and the Motion and Context Sharing Technique are fully capable of interaction and interoperation with any desired application type, operating system type, or touch-sensitive computing device.
  • the Motion and Context Sharing Technique also considers the context of various sensor pen gestures relative to the context of the associated computing device, and any contemporaneous user touches of the computing device to enable a variety of concurrent pen-and-touch inputs.
  • any desired user-definable gestures and concurrent pen-and-touch inputs can be configured for any desired action for any desired application, operating system, or computing device.
  • the following sections describe various techniques, including context sensing, pen motion gestures away from the display or touch sensitive surface of the computing device, motion gestures combined with touch input, close-range motion gestures (i.e., within hover range or in contact with the display), and combined sensor pen motions.
  • voice or speech inputs can be combined with any of the various input techniques discussed herein above to enable a wide range of hybrid input techniques.
  • Sensing the motion and resting states of both the stylus and the tablet itself offers a number of opportunities to tailor the user experience to the context of the user's naturally occurring activity.
  • Several examples of such techniques are discussed below. Note that the exemplary techniques discussed in the following paragraphs are provided for purposes of explanation and are not intended to limit the scope of the sensor pen or the Motion and Context Sharing Technique described herein.
  • the Motion and Context Sharing Technique provides a number of techniques for enhancing user experience with respect to such issues.
  • the Motion and Context Sharing Technique considers the context of the sensor pen to automatically fade in a stylus tool palette when the user picks up the sensor pen (as determined via one or more of the aforementioned pen sensors).
  • this tool palette includes tools such as color chips to change the pen color, as well as controls that change the mode of the stylus (e.g., eraser, highlighter, lasso selection, inking, etc.).
  • when the user sets the sensor pen down or the pen otherwise stops moving, the palette slowly fades out to minimize distraction.
  • picking up or lifting the sensor pen is identified (via one or more of the sensors) as a transition from a state where the pen is not moving (or is relatively still) to a state of motion above a fixed threshold.
  • a multi-axis gyroscope sensor in the pen was used to determine sensor pen motion for this purpose due to the sensitivity of gyroscopes to subtle motions.
  • one technique used by the Motion and Context Sharing Technique identifies pen motion (e.g., picking up or lifting) whenever a three-axis sum-of-squares of gyroscope signals exceeds some threshold rotational rate (e.g., 36 deg/s).
  • the Motion and Context Sharing Technique triggers the palette to appear or to fade in over the course of some period of time (e.g., one second). Conversely, when the pen motion falls below the threshold rotational rate, the palette disappears or fades out over some period of time. In a tested embodiment, palette fade out occurred over a longer period of time (e.g., five seconds) than palette fade in, and if sensor pen motion resumed before this fade-out finishes, the palette quickly fades back in to full opacity.
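  • The paragraph above gives concrete parameters (a roughly 36 deg/s rotational-rate threshold, a one-second fade in, and a five-second fade out); a minimal per-sample update implementing that behavior might look like the sketch below. The class structure is my own, and comparing the rate magnitude against the threshold is one reasonable reading of the sum-of-squares test.

```python
import math

RATE_THRESHOLD_DPS = 36.0   # rotational-rate threshold from the example above
FADE_IN_SECONDS = 1.0
FADE_OUT_SECONDS = 5.0

class PaletteFader:
    """Fade a stylus tool palette in while the pen is moving and out when it
    rests, based on gyroscope rotational rate. opacity is in [0, 1]."""

    def __init__(self):
        self.opacity = 0.0

    def update(self, gx: float, gy: float, gz: float, dt: float) -> float:
        # Magnitude of the rotation rate (deg/s) from the three-axis gyroscope;
        # equivalent to thresholding the sum of squares, up to squaring.
        rate = math.sqrt(gx * gx + gy * gy + gz * gz)
        if rate > RATE_THRESHOLD_DPS:
            self.opacity = min(1.0, self.opacity + dt / FADE_IN_SECONDS)
        else:
            self.opacity = max(0.0, self.opacity - dt / FADE_OUT_SECONDS)
        return self.opacity

if __name__ == "__main__":
    fader = PaletteFader()
    for _ in range(100):                          # pen picked up and moved
        fader.update(40.0, 10.0, 5.0, dt=0.005)   # 200 Hz samples
    print(f"opacity after motion: {fader.opacity:.2f}")
    for _ in range(400):                          # pen held still
        fader.update(0.5, 0.2, 0.1, dt=0.005)
    print(f"opacity after rest: {fader.opacity:.2f}")
```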
  • the Motion and Context Sharing Technique provides various motion gestures relative to the sensor pen to allow the palette UI to respond to either pen taps or finger taps, to allow users to interleave these input modalities to select a current pen tool and color.
  • this enables the Motion and Context Sharing Technique to treat pen and touch inputs interchangeably. For this reason, it is also possible to call up the palette using touch alone, by tapping on it with a finger (often useful if the pen is currently not in use, for example).
  • picking up or lifting the pen can be used to trigger any desired menu, user interface component, or action relevant to the particular application being used.
  • One problem with pens and stylus type devices is that the user can lose or forget the pen. For example, users often leave the stylus on their desk, or forget to put it back in the tablet's pen holster after use. Unfortunately, the loss or unavailability of the pen or stylus tends to limit the use of the user's computing device to a touch-only device until such time as another pen or stylus can be obtained.
  • the Motion and Context Sharing Technique addresses this problem by sensing the context of the sensor pen relative to the computing device, and then automatically reminding or alerting the user whenever the Motion and Context Sharing Technique observes the tablet moving away without the sensor pen. Since both the sensor pen and tablet (or other computing device) have motion sensors, and they are in communication with one another, the Motion and Context Sharing Technique evaluates sensor information of the pen and tablet to infer whether or not they are moving together.
  • the Motion and Context Sharing Technique considers correlated gait patterns, for example, to determine if the user is walking with both pen and stylus on his person or not. Further, if sensor analysis indicates that the tablet is in a pack, purse, or not visible to a walking or moving user, an automated alert sent to the user's mobile phone is initiated in various embodiments to alert the user to potential sensor pen loss.
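  • One plausible way to infer whether the pen and tablet are moving together is to correlate short windows of their accelerometer magnitudes, as in the sketch below; the window length, correlation threshold, and function names are hypothetical.

```python
import math

def moving_together(pen_accel: list[float], tablet_accel: list[float],
                    threshold: float = 0.7) -> bool:
    """Correlate equal-length windows of accelerometer magnitude from the pen
    and the tablet; a high normalized correlation suggests they are being
    carried together (e.g., matching gait patterns). The 0.7 threshold is a
    hypothetical value."""
    n = min(len(pen_accel), len(tablet_accel))
    if n == 0:
        return False
    a, b = pen_accel[:n], tablet_accel[:n]
    mean_a, mean_b = sum(a) / n, sum(b) / n
    num = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    den = math.sqrt(sum((x - mean_a) ** 2 for x in a) *
                    sum((y - mean_b) ** 2 for y in b))
    if den == 0.0:
        return False  # at least one of the devices is essentially still
    return (num / den) >= threshold

if __name__ == "__main__":
    walk = [9.8 + math.sin(i / 3.0) for i in range(120)]    # shared gait bounce
    still = [9.8 for _ in range(120)]                       # pen left on the desk
    print(moving_together(walk, [w + 0.05 for w in walk]))  # True: moving together
    print(moving_together(walk, still))                     # False: alert the user
```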
  • the Motion and Context Sharing Technique By evaluating correspondences between touch-screen input (or touch inputs on other surfaces of the computing device) and sensor pen motions, the Motion and Context Sharing Technique infers additional information about how the user is touching the screen or other touch-sensitive surface. By correlating sensor pen motions with touch inputs, the Motion and Context Sharing Technique enables a variety of input scenarios and motion gestures.
  • the Motion and Context Sharing Technique infers via analysis of sensors in either or both the computing device and the sensor pen that the pen is in motion at the same time that the touch is in motion.
  • FIG. 3 provides a simple illustration of this scenario.
  • the user's left hand 300 is holding a sensor pen 310, while the index finger 305 of that hand is in contact 320 with the surface of the display 330 of the computing device 340.
  • Both the index finger 305 and the sensor pen 310 held in the hand are moving in the same direction, as illustrated by the large directional arrows.
  • when the Motion and Context Sharing Technique observes a touch while the sensor pen is relatively stationary, it infers that the touch was produced by the non-preferred hand (i.e., a touch produced by the hand not holding the pen).
  • the Motion and Context Sharing Technique provides an input mechanism where a dragging type touch with the non-preferred hand (i.e., with little or no corresponding pen motion) pans the canvas on the display screen.
  • a rubbing or dragging motion on an ink stroke with the preferred hand (e.g., with the pen tucked between the fingers of that hand, but not in contact with the display or digitizer) digitally smudges the ink. This correlated touch and motion input mimics the action of charcoal artists who repeatedly draw some strokes, and then tuck the charcoal pencil to blend (smudge) the charcoal with a finger of the same hand.
  • the Motion and Context Sharing Technique ignores motions below a dynamically defined threshold by considering factors such as relative motion velocity and direction between the touch and the sensor pen (i.e., directional correlation).
  • the Motion and Context Sharing Technique defers the decision of panning vs. smudging for a relatively short time-window at the onset of a touch gesture while the relative motions are evaluated to make a final determination.
  • the dynamically defined motion threshold increases when the pen is in rapid motion, and exponentially decays otherwise. This allows the Motion and Context Sharing Technique to handle the case where the user has finished smudging, but the pen may not yet have come to a complete stop, and then switches quickly to panning with the non-preferred hand.
  • the dynamic threshold here helps the system to correctly reject the pen motion as a carry-over effect from the recently-completed smudging gesture.
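  • The dynamic threshold described above could be realized roughly as follows: a tracker raises the threshold while the pen moves quickly and lets it decay exponentially toward a base value otherwise, and the pan-versus-smudge decision compares pen motion during the brief deferral window against the threshold captured at touch onset. All constants and names below are illustrative assumptions.

```python
class PenMotionThreshold:
    """Track a dynamic motion threshold that rises while the pen moves fast
    and decays exponentially toward a base value while it is still.
    base, gain, and decay_per_second are hypothetical tuning constants."""

    def __init__(self, base: float = 0.2, gain: float = 0.5,
                 decay_per_second: float = 3.0):
        self.base = base
        self.gain = gain
        self.decay = decay_per_second
        self.value = base

    def update(self, pen_motion: float, dt: float) -> float:
        boosted = self.base + self.gain * pen_motion
        if boosted > self.value:
            self.value = boosted  # rapid pen motion raises the threshold
        else:
            # Decay toward the base value, but never below the level implied
            # by the pen's current motion.
            decayed = self.value + (self.base - self.value) * min(1.0, self.decay * dt)
            self.value = max(boosted, decayed)
        return self.value


def classify_touch(pen_motion_during_touch: list[float],
                   threshold_at_onset: float) -> str:
    """After the short deferral window at touch onset, compare the pen's motion
    during that window against the threshold captured when the touch began."""
    peak = max(pen_motion_during_touch, default=0.0)
    return "smudge" if peak > threshold_at_onset else "pan"


if __name__ == "__main__":
    tracker = PenMotionThreshold()
    # Pen was recently used for smudging and is still decelerating.
    for _ in range(100):
        tracker.update(pen_motion=1.5, dt=0.005)
    onset_threshold = tracker.value
    residual = [0.6, 0.4, 0.2, 0.1, 0.05]                 # carry-over pen motion
    print(classify_touch(residual, onset_threshold))      # pan (carry-over rejected)
    print(classify_touch([1.2, 1.3, 1.1], tracker.base))  # smudge (pen moves with touch)
```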
  • the Motion and Context Sharing Technique enables various scenarios that consider corresponding directions of motion (i.e., directional correlation), thereby enabling very subtle or gentle smudging motions that might otherwise fall below a movement threshold.
  • a common inertial frame enables the Motion and Context Sharing Technique to distinguish which touch point corresponds to the pen motion.
  • the Motion and Context Sharing Technique allows the sensor pen to provide input at any distance from the computing device or digitizer surface.
  • the use of sensors coupled to the sensor pen in combination with the communications capabilities of the sensor pen enable the Motion and Context Sharing Technique to consider sensor pen motions independent from the computing device. This allows the Motion and Context Sharing Technique to implement various explicit sensor gestures that can be active at all times (or at particular times or in particular contexts).
  • a motion gesture enabled by considering pen motions independently of the computing device is the aforementioned “roll to undo” gesture, which uses a rolling motion of the pen (i.e., twisting or rotating the pen around the long axis of barrel). Note that this motion gesture is discussed in further detail below. Further, a number of the motion gestures and techniques discussed herein, including various context sensing techniques and various sensor pen motion gestures combined with direct touch inputs, rely on sensing the motion of the pen while it is away from the display (i.e., outside of contact and hover range of the computing device). Therefore, the ability to sense pen activity at a distance from the display enables many of the sensor pen-motion techniques discussed herein.
  • the Motion and Context Sharing Technique considers sensor pen rolling motions as a distinct gesture for pen input.
  • the aforementioned roll to undo gesture is activated by the user rolling the sensor pen around its long axis, while the sensor pen is beyond hover range of the computing device.
  • rolling of the sensor pen in this manner is detected by sensors such as gyroscopic sensors or accelerometers coupled to the sensor pen.
  • FIG. 4 provides a simple illustration of this scenario.
  • the user's right hand 400 is holding a sensor pen 410 , while the pen is rotated around the long axis of the sensor pen, as illustrated by the large directional arrow.
  • when the Motion and Context Sharing Technique recognizes this rolling gesture, it automatically undoes the last action completed by the computing device.
  • a user interface menu appears on the screen that shows the user they have activated the Undo command. The user can then tap or touch the displayed Undo command one or more times to undo one or more preceding actions.
  • these embodiments have been observed to speed up user interaction with the computing device by presenting the Undo command to the user without requiring the user to navigate the application menu structure to locate the Undo command in a menu, and without the wasted movement of traveling to the edge of the screen to invoke it.
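  • As a hedged illustration of the roll-to-undo detection described above, the sketch below accumulates gyroscope rotation about the pen's long axis while the pen is beyond hover range and fires an undo callback once the accumulated roll crosses a threshold; the threshold values and the callback are assumptions for illustration only.
```python
ROLL_TRIGGER_DEG = 90.0      # cumulative roll needed to fire the gesture (assumed value)
ROLL_RESET_S = 0.5           # reset accumulation if rolling pauses this long

class RollToUndo:
    def __init__(self):
        self.accumulated_deg = 0.0
        self.idle_s = 0.0

    def update(self, dt, gyro_long_axis_dps, beyond_hover, fire_undo):
        """gyro_long_axis_dps: angular rate (deg/s) about the pen barrel's long axis."""
        if not beyond_hover:
            self.accumulated_deg = 0.0         # gesture is only active away from the digitizer
            return
        if abs(gyro_long_axis_dps) < 10.0:     # ignore sensor noise / incidental handling
            self.idle_s += dt
            if self.idle_s > ROLL_RESET_S:
                self.accumulated_deg = 0.0
            return
        self.idle_s = 0.0
        self.accumulated_deg += gyro_long_axis_dps * dt
        if abs(self.accumulated_deg) >= ROLL_TRIGGER_DEG:
            fire_undo()                        # e.g., undo last action and show the Undo UI
            self.accumulated_deg = 0.0
```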
  • the Motion and Context Sharing Technique provides a tap to redo motion gesture that is activated in response to detection of user finger taps following activation of an Undo command.
  • the tap to redo motion gesture can be performed outside the hover range of the computing device.
  • a user interface menu appears on the screen that shows the user they have activated the Redo command. The user can then tap or touch the displayed Redo command one or more times to redo one or more recently undone actions.
  • FIG. 5 provides a simple illustration of this scenario.
  • the user's right hand 500 is holding a sensor pen 510 , while index finger 520 of that hand taps the barrel of the sensor pen, as illustrated by the large directional arrow.
  • a similar input mechanism is used to implement a “barrel tap” input mechanism when the sensor pen is in hover range or contact with the display of the computing device.
  • both the roll to undo and finger tap to redo motion gestures interleave stylus motion and touch input in a hybrid input scenario that takes advantage of the properties of each interaction modality, while allowing the user to keep the sensor pen close to the user's working space and, thus, the locus of attention.
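  • The finger tap gesture above and the barrel tap mechanism discussed later share the same physical signal, a sharp acceleration spike roughly perpendicular to the barrel, and differ only in the pen's sensed context. The sketch below illustrates one plausible routing; the threshold and the state names are assumptions for illustration, not values from this description.
```python
TAP_SPIKE_G = 1.5   # assumed minimum high-pass acceleration magnitude, in g, for a tap

def route_barrel_tap(spike_g, perpendicular_to_long_axis, pen_state, redo_active):
    """pen_state is one of 'contact', 'hover', 'beyond_hover' as sensed by the digitizer."""
    if spike_g < TAP_SPIKE_G or not perpendicular_to_long_axis:
        return None                            # not a deliberate tap on the barrel
    if pen_state == "beyond_hover":
        # Away from the display: taps following an Undo activate the Redo command.
        return "tap_to_redo" if redo_active else "finger_tap"
    # In hover range or touching the display: emulate a mechanical barrel button,
    # e.g., open an object-specific menu centered under the pen tip.
    return "barrel_tap"
```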
  • Combined touch and sensor pen motion gestures provide an additional technique that contrasts with basic sensor pen-motion gestures (e.g., the roll to undo gesture) by adding a concurrent user touch component for activation of various input mechanisms.
  • this allows the same sensor pen motion gesture used without touch to be used to initiate one or more entirely different input mechanisms when that same sensor pen motion is combined with one or more different user touch inputs.
  • the Motion and Context Sharing Technique provides a variety of motion gestures that employ simultaneous, concurrent, sequential, and/or interleaved touch and pen motion inputs.
  • the combination of motion gesture plus touch techniques described below for providing various user input mechanisms illustrates how new sensing modalities can build on the existing skills and habits of users who may be familiar with particular applications or types of content.
  • Examples of these combined motion gestures and touch inputs include, but are not limited to, a "touch and spatter" input mechanism for painting type applications, a "touch and tilt for layers" input mechanism, and a "touch and roll to rotate" input mechanism. Note that the Motion and Context Sharing Technique enables a wide range of combined motion gestures and direct touch inputs for initiating specific input scenarios or commands, and that the examples discussed below are not intended to limit the scope of the Motion and Context Sharing Technique.
  • the Motion and Context Sharing Technique provides a corresponding touch and pen-motion gesture that mimics the physical gesture of striking a loaded paintbrush to spatter paint, within the context of a sketching or painting application.
  • the touch and spatter input mechanism is initiated when the user touches the screen with a finger, and then strikes the pen against that finger (or other surface) to produce spatters as if the sensor pen were a loaded paintbrush.
  • FIG. 6 provides a simple illustration of this scenario.
  • the index finger 600 of the user's left hand 610 is touching display 620 of computing device 630 .
  • a sensor pen 640 held in the user's right hand 650 is struck against the index finger 600 to initiate the touch and spatter input mechanism, with the result being digital paint spatters 660 in a region around the point where the index finger is touching the display 620 .
  • the tablet may not know the actual (x, y) location of the pen tip. Consequently, the Motion and Context Sharing Technique produces spatters (in the currently selected pen color) centered on the finger contact point when sensors in the sensor pen indicate that the user is striking the pen against the finger or other surface while the user is concurrently touching the display.
  • the Motion and Context Sharing Technique detects an acceleration peak (via accelerometers coupled to the sensor pen) corresponding to the sensor pen strike.
  • the Motion and Context Sharing Technique uses the amplitude of the peak to determine any desired combination of the number and transparency level of the spatters, how large the individual spatters are, and how far they scatter from the contact point. Each of these values increases with increasing acceleration peak amplitude (corresponding to harder sensor pen strikes).
  • the semi-transparent spatters allow the colors to mix with one another in a natural-looking manner.
  • the Motion and Context Sharing Technique does not respond to isolated strikes. Instead, in such embodiments, the touch and spatter input mechanism is activated by the user striking the sensor pen against the finger multiple times to begin the spattering effect. This results in a short delay before the paint spatters begin while ensuring that the spatter effect is actually intended by the user.
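  • One plausible, purely illustrative mapping from strike strength to spatter appearance is sketched below; the parameter ranges and the multi-strike requirement are assumed values for illustration, not figures from this description.
```python
import random

MIN_STRIKES = 2   # require repeated strikes so isolated bumps do not spatter paint

def spatter_params(peak_g, strike_count, finger_xy):
    """Return spatter droplets around the finger contact point for one pen strike."""
    if strike_count < MIN_STRIKES:
        return []                              # short delay before spattering begins
    strength = min(peak_g / 4.0, 1.0)          # normalize peak amplitude (assumed 4 g cap)
    count = int(5 + 30 * strength)             # harder strikes -> more droplets...
    radius = 20 + 120 * strength               # ...scattered farther from the contact point
    droplets = []
    for _ in range(count):
        dx = random.uniform(-radius, radius)
        dy = random.uniform(-radius, radius)
        droplets.append({
            "x": finger_xy[0] + dx,
            "y": finger_xy[1] + dy,
            "size": 1 + 6 * strength * random.random(),
            "alpha": 0.2 + 0.5 * strength,     # semi-transparent so colors mix naturally
        })
    return droplets
```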
  • the Motion and Context Sharing Technique provides a gesture of touching an object and then pitching or tilting the pen backward (or forward) relative to the long axis of the sensor pen to reveal a list of the layered objects in z-order, or alternately, to show or cycle through those layers. The user may then tap on the objects in the list or on any of the displayed layers to select, reorder, or otherwise interact with the selected layer.
  • This input mechanism is referred to herein as the aforementioned “Touch and Tilt for Layers” gesture.
  • the user touches or holds an object while tilting the pen to trigger that input mechanism.
  • FIG. 7 provides a simple illustration of this scenario.
  • the index finger 700 of the user's left hand 710 is touching an area of display 720 of computing device 730 having a plurality of layers 740 or layered objects.
  • a sensor pen 750 held in the user's right hand 760 is tilted backward (or forward) relative to the long axis of the sensor pen, as illustrated by the large directional arrow, to reveal and interact with the layers 740 or layered objects corresponding to the point where the index finger 700 is touching the display 720.
  • the Motion and Context Sharing Technique limits activation of the touch and tilt for layers gesture to contexts where the user holds a finger on a stack of objects or other layers, to avoid inadvertent activation of layer cycling or interactions. Therefore, in this example, the touch component of the gesture serves a double purpose. First, touching the screen activates the pen pitching motion for recognition. Second, touching the screen also identifies which objects or layer stacks the touch and tilt for layers gesture applies to.
  • the touch and tilt for layers gesture is activated by a touch concurrent with tilting the pen away from the screen and then back towards the screen (or the opposite motions, i.e., towards and then away), within a limited time-window.
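  • Recognition of this touch-and-tilt gesture amounts to spotting a pitch excursion away from the screen and back (or the reverse) within a short window while a finger holds the layer stack. The following sketch assumes pitch is reported in degrees relative to the display plane; the class and parameter names are illustrative assumptions.
```python
class TouchAndTilt:
    def __init__(self, excursion_deg=20.0, window_s=0.6):
        self.excursion_deg = excursion_deg     # how far the pen must pitch away or toward
        self.window_s = window_s               # time window for the away-and-back motion
        self.samples = []                      # (timestamp, pitch) history

    def update(self, t, pitch_deg, touch_on_layer_stack):
        if not touch_on_layer_stack:
            self.samples.clear()               # the touch both arms the gesture and picks the stack
            return False
        self.samples.append((t, pitch_deg))
        self.samples = [(ts, p) for ts, p in self.samples if t - ts <= self.window_s]
        pitches = [p for _, p in self.samples]
        if not pitches:
            return False
        start = pitches[0]
        # Fire when pitch moved away by at least the excursion and returned near the start.
        moved_away = max(abs(p - start) for p in pitches) >= self.excursion_deg
        returned = abs(pitches[-1] - start) <= self.excursion_deg / 2
        return moved_away and returned
```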
  • a related input mechanism referred to herein as the aforementioned touch and roll to rotate gesture is activated by holding or touching an object on the display surface while concurrently rolling or twisting the sensor pen around the long axis of the sensor pen. Note that this gesture contrasts with the “roll to undo” gesture discussed above in that when the Motion and Context Sharing Technique infers a concurrent touch in combination with the rolling motion, the touch and roll to rotate gesture is activated instead of the roll to undo gesture (which occurs when there is no concurrent touch).
  • this rotation mode is implemented by allowing the user to dial his finger (i.e., move the finger in a curving motion on the display surface) to precisely rotate the object.
  • rotation is controlled by either continuing to rotate the sensor pen, or by contacting the sensor pen on the display surface, and using the sensor pen in a manner similar to the finger dialing motion noted above.
  • FIG. 8 provides a simple illustration of the touch and roll to rotate input mechanism.
  • the index finger 800 of the user's left hand 810 is touching an area of display 820 of computing device 830 having an object 840 .
  • a sensor pen 850 held in the user's right hand 860 is rotated around the long axis of the sensor pen, as illustrated by the large directional arrow to initiate the touch and roll to rotate input mechanism.
  • the selected object 840 is then rotated around its axis of rotation (illustrated by the large directional arrow around object 840 ) using either the user's finger, or additional rotation motions of the sensor pen 850 .
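  • Because the same rolling motion drives two different commands depending on touch context, the dispatch can be expressed compactly. The sketch below assumes a roll gesture has already been recognized, and uses a hypothetical touched_object with a rotation_deg field and a hypothetical undo_stack; none of these names come from this description.
```python
def dispatch_roll_gesture(concurrent_touch, touched_object, roll_delta_deg, undo_stack):
    """Called once a deliberate roll about the pen's long axis has been recognized."""
    if concurrent_touch and touched_object is not None:
        # Touch and roll to rotate: continued rolling (or a finger 'dialing' motion on
        # the display) rotates the selected object about its axis of rotation.
        touched_object.rotation_deg = (touched_object.rotation_deg + roll_delta_deg) % 360
        return "touch_and_roll_to_rotate"
    # With no concurrent touch, the same motion is interpreted as roll to undo.
    undo_stack.undo_last()
    return "roll_to_undo"
```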
  • the Motion and Context Sharing Technique provides additional input mechanisms that combine direct sensor pen contact or hover (i.e., the sensor pen within hover range of the digitizer of the computing device) with various sensor pen motion gestures.
  • input mechanisms implemented by combining sensor pen hover with sensor pen motion gestures include, but are not limited to, a "vertical menu" input mechanism and a "barrel tap" input mechanism, both of which use the sensed (x, y) location of the pen tip (via hover evaluation of the pen by the computing device) to determine where on the display to bring up a menu.
  • Another input mechanism, referred to herein as a "hard stroke" input mechanism, combines direct contact of the sensor pen on the display with various sensor pen motions.
  • the vertical menu input mechanism is initiated by using various sensors of the pen to detect a user initiated motion of the sensor pen that occurs with the pen coming towards or into proximity range (or in contact with the screen) in an orientation approximately perpendicular relative to the display screen of the computing device.
  • holding the sensor pen approximately perpendicular relative to the display (e.g., an approximately vertical pen pose when the computing device is lying flat) triggers a UI window at or near the sensed (x, y) location of the pen tip, thereby enabling efficient interleaving of stroke input with menu invocation.
  • a timing factor is also considered for initiating the vertical menu.
  • the vertical menu is initiated when the sensor pen is held approximately perpendicular relative to the display and approximately stationary for a short time (e.g., a fixed or adjustable time threshold) within the hover range of the display.
  • the UI window of the vertical menu is initiated when the pen is in an approximately perpendicular pose, for a certain amount of time, as it approaches the display, regardless of whether the pen is within proximity or hover range of the display. Consequently, in such embodiments the Motion and Context Sharing Technique indirectly relies on the pen's ability to know its own orientation, even when it is beyond the sensing range of the computing device, to successfully trigger the vertical menu.
  • the vertical menu input mechanism When initiated or activated, the vertical menu input mechanism triggers a UI mode that brings up a localized (relative to the pen tip) UI menu. Consequently, bringing the pen close to the screen with the pen in this relative perpendicular pose will initiate opening or expansion of a vertical software menu or the like. Conversely, motions extending away from the computing device or display (i.e., motions moving away from the screen or beyond the proximity sensing range) may initiate closing or contraction of the vertical software menu or the like.
  • Other mechanisms for closing the UI menu include, but are not limited to, automatically closing the menu after the user picks a command from the UI menu, in response to taps (pen or finger) somewhere else on the screen, detection of some other motion gesture, etc.
  • menus that appear at or near the locus of interaction can save the user from round-trips with the sensor pen (or other pointing device) to tool palettes or other menus at the edge of the screen.
  • Such localized menus are particularly useful for frequent commands, as well as contextual commands such as Copy and Paste that integrate object selection and direct manipulation with commands.
  • the Motion and Context Sharing Technique initiates context sensitive menus, application popups, etc., when activated by the vertical menu input mechanism.
  • the vertical menu input mechanism initiates a marking menu when the pen is held approximately perpendicular relative to the display and approximately stationary for a short time within the hover range of the display.
  • marking menus are also sometimes referred to as “pie menus” or “radial menus.”
  • the marking menu provides a generally circular context menu where selection of a menu item depends on direction.
  • Marking menus may be made of several “pie slices” or radially arranged menu options or commands around a center that may be active or inactive. Each such slice may include any number of menu items (similar to a list or set of menu items in a conventional drop down menu).
  • commands or menu items in the marking menu are user-definable, and of course, vertical menus could be employed with other well-known types of command selection techniques, such as drawing gestures or picking from traditional pull-down menus as well.
  • the “vertical menu” can be conceived as a general purpose mode-switching technique, which enables the pen to input commands, gestures, or perform other tasks secondary to a primary “inking” state.
  • the marking menu is initiated as simply a display of directions (e.g., arrows, lines, icons, etc.), radiating out from a center of the marking menu, in which the sensor pen can be stroked to initiate commands without displaying a visible menu of commands.
  • the user may then bring the stylus or sensor pen into contact with the display and stroke in any of the displayed directions to invoke a command.
  • the Motion and Context Sharing Technique initiates marking menu pop-ups that reveal a mapping of stroke direction to command.
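  • The mapping of stroke direction to command can be sketched as a simple angular lookup; the eight-slice layout and the command list below are purely illustrative assumptions (real marking menus are user-definable, as noted above).
```python
import math

# Hypothetical mapping of radial slices to commands; a real menu would be user-definable.
SLICE_COMMANDS = ["copy", "paste", "cut", "delete",
                  "group", "ungroup", "bring_front", "send_back"]

def command_for_stroke(dx, dy, num_slices=8):
    """Return the marking-menu command for a stroke from the menu center along (dx, dy)."""
    angle = math.degrees(math.atan2(-dy, dx)) % 360      # screen y grows downward
    slice_width = 360.0 / num_slices
    index = int(((angle + slice_width / 2) % 360) // slice_width)
    return SLICE_COMMANDS[index % len(SLICE_COMMANDS)]
```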
  • the vertical menu input mechanism thus combines stylus motion and pen-tip stroke input in the same technique.
  • the vertical menu input mechanism is activated relative to a particular object when the user places the sensor pen over the object in a pose that is approximately perpendicular relative to the display.
  • the marking menu includes object-specific commands (e.g. copy, paste, etc.) that take the current object and screen location as operands.
  • the Motion and Context Sharing Technique uses an approximately perpendicular posture of the sensor pen relative to the display to trigger the vertical menu input mechanism from the hover-state.
  • the approximately perpendicular posture of the sensor pen relative to the display is detected using a combination of the accelerometer and gyroscope sensors coupled to the sensor pen relative to a sensed orientation of the display.
  • the accelerometer is used to estimate the orientation of the pen, but even if the accelerometer indicates that the pen is relatively near-perpendicular, this sensor reading will not trigger the vertical menu input mechanism when the gyroscope indicates that the pen is still moving beyond some threshold amount. In this way, the Motion and Context Sharing Technique avoids false positive activation of the marking menu if the user briefly passes through an approximately perpendicular relative pose while handling the sensor pen.
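  • One way to combine the accelerometer-based pose estimate with the gyroscope stillness check described above is sketched below, assuming the computing device is lying flat (a full implementation would compare against the sensed orientation of the display); the angle, rate, and dwell thresholds are assumptions for illustration.
```python
import math

def is_vertical_menu_pose(accel_xyz, gyro_mag_dps, dwell_s,
                          angle_tol_deg=15.0, still_dps=20.0, dwell_needed_s=0.3):
    """Pen approximately perpendicular to a flat display, and approximately stationary.

    accel_xyz: accelerometer reading with z along the pen's long axis (a pen at rest
    measures gravity only).  gyro_mag_dps: magnitude of angular rate in deg/s.
    """
    ax, ay, az = accel_xyz
    g = math.sqrt(ax * ax + ay * ay + az * az)
    if g < 1e-6:
        return False
    # Angle between the pen's long axis and gravity; ~0 deg means the pen is vertical,
    # i.e., perpendicular to a display lying flat.
    tilt_deg = math.degrees(math.acos(max(-1.0, min(1.0, abs(az) / g))))
    near_perpendicular = tilt_deg <= angle_tol_deg
    # The gyroscope must also confirm the pen is not still moving appreciably, which
    # avoids false triggers while the user merely handles the pen.
    still = gyro_mag_dps <= still_dps
    return near_perpendicular and still and dwell_s >= dwell_needed_s
```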
  • FIG. 9 provides a simple illustration of a vertical menu input scenario.
  • a sensor pen 900 held in the user's right hand 910 is in hover range above the display 920 of computing device 930 .
  • hover is indicated in this figure by the cone 940 shown by broken lines extending from the tip of the sensor pen 900 .
  • the Motion and Context Sharing Technique initiates a marking menu 950 , which in this example includes menu options or command choices A through F. The user selects or initiates any of these commands by either touching one of the displayed commands, sweeping or stroking the sensor pen 900 in the direction of one of the displayed commands, or contacting one of the displayed commands with the sensor pen.
  • One mechanism by which the sensor pen can implement button presses is to simply include one or more buttons or the like on the barrel of the sensor pen, with button presses then being communicated by the pen to the computing device.
  • the Motion and Context Sharing Technique instead uses one or more of the existing sensors to identify user finger taps on the barrel of the sensor pen to initiate a barrel tap input mechanism.
  • This barrel tap input mechanism can be used for any of a variety of purposes, including, but not limited to, button press emulations.
  • the barrel tap input mechanism is activated by sensing relatively hard-contact finger taps on the barrel of the pen as a way to “replace” mechanical button input.
  • the Motion and Context Sharing Technique evaluates accelerometer data to identify an acceleration spike approximately perpendicular to the long axis of the sensor pen while the pen is approximately stationary in the hover state or in direct contact with the display. Activation of the barrel tap input mechanism brings up a menu with commands specific to the object that the user hovers over or contacts with the sensor pen, with that menu being approximately centered under the pen tip.
  • the barrel tap input mechanism differs from the aforementioned "finger tap" input mechanism in that the finger tap mechanism is performed without a concurrent touch or hover relative to the computing device, while the barrel tap input mechanism is performed while the pen is within hover range of, or in contact with, a displayed object.
  • the Motion and Context Sharing Technique provides additional input mechanisms initiated by the sensor pen that are roughly analogous to "hard tap" and "hard drag" techniques that have been implemented using existing finger touch inputs.
  • the Motion and Context Sharing Technique instead evaluates accelerometer thresholds to determine whether the user intends a hard tap or a hard drag input using the sensor pen.
  • accelerometer thresholds for distinguishing the "hard" contact from softer strokes are used by the Motion and Context Sharing Technique.
  • the angle at which a user is holding the pen when it comes into contact with the display is considered by the Motion and Context Sharing Technique for differentiating between the two input mechanisms.
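  • A minimal sketch of this hard tap versus hard drag classification follows; the acceleration, angle, and travel thresholds are invented for illustration and would in practice be tuned per device.
```python
HARD_CONTACT_G = 2.0      # assumed accelerometer peak (in g) separating hard from soft contact
STEEP_ANGLE_DEG = 60.0    # steeper pen-down angles favor a tap; shallower ones favor a drag
DRAG_DISTANCE_PX = 12.0   # movement after contact that turns a hard tap into a hard drag

def classify_pen_down(contact_peak_g, pen_angle_deg, travel_px):
    """Classify a pen-down event as an ordinary stroke, a hard tap, or a hard drag."""
    if contact_peak_g < HARD_CONTACT_G:
        return "stroke"                        # ordinary inking contact
    if travel_px >= DRAG_DISTANCE_PX:
        return "hard_drag"
    return "hard_tap" if pen_angle_deg >= STEEP_ANGLE_DEG else "hard_drag"
```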
  • FIG. 10 provides a simple illustration of the hard tap input mechanism.
  • a sensor pen 1000 held in the user's right hand 1010 is brought down relatively hard onto a point 1020 of the display 1030 of computing device 1040 .
  • the resulting contact of the sensor pen 1000 onto the surface of display 1030 (identified using accelerometers of the sensor pen) initiates the hard tap input mechanism that triggers the aforementioned lasso mode.
  • the user then uses either a finger touch or a drag of the sensor pen 1000 across the surface of the display 1030 to draw lasso outlines for use in selecting objects, regions, etc., in the drawing type application.
  • FIG. 11 provides an exemplary operational flow diagram that summarizes the operation of some of the various embodiments of the Motion and Context Sharing Technique. Note that FIG. 11 is not intended to be an exhaustive representation of all of the various embodiments of the Motion and Context Sharing Technique described herein, and that the embodiments represented in FIG. 11 are provided only for purposes of explanation.
  • any boxes and interconnections between boxes that are represented by broken or dashed lines in FIG. 11 represent optional or alternate embodiments of the Motion and Context Sharing Technique described herein, and that any or all of these optional or alternate embodiments, as described below, may be used in combination with other alternate embodiments that are described throughout this document.
  • the Motion and Context Sharing Technique begins operation by receiving 1100 sensor input from one or more sensors ( 1105 through 1150 ) of the sensor pen. These inputs are then transmitted 1155 from the sensor pen to the touch-sensitive computing device.
  • the Motion and Context Sharing Technique also receives 1160 one or more touch inputs from the touch-sensitive computing device. Further, the Motion and Context Sharing Technique also optionally receives 1165 context information from either or both the sensor pen and the context sensitive computing device.
  • the Motion and Context Sharing Technique evaluates 1170 simultaneous, concurrent, sequential, and/or interleaved sensor pen inputs and touch inputs relative to contexts of sensor pen and computing device. This evaluation serves to identify one or more motion gestures 1180 corresponding to the various sensor, touch and context inputs. The Motion and Context Sharing Technique then automatically initiates 1175 one or more motion gestures 1180 based on the evaluation. Finally, as noted above, in various embodiments, a UI or the like is provided 1185 for use in defining and/or customizing one or more of the motion gestures 1180 .
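  • The flow of FIG. 11 can be condensed into a small event loop, sketched below for illustration only; the object and method names mirror the figure's boxes rather than any shipped API, and the numeric comments refer to the reference numerals in FIG. 11.
```python
def motion_and_context_loop(pen, device, gesture_definitions):
    """Illustrative pipeline: pen sensors -> transmit -> combine with touch/context -> gesture."""
    while True:
        pen_samples = pen.read_sensors()          # 1100: accel, gyro, grip, pressure, ...
        device.receive_pen_data(pen_samples)      # 1155: wireless transmission to the device
        touches = device.read_touch_inputs()      # 1160: finger/palm contacts on the display
        context = device.gather_context(pen)      # 1165: held/moving, power state, app status

        # 1170: evaluate simultaneous, concurrent, sequential, and/or interleaved inputs
        for gesture in gesture_definitions:       # 1180: including user-defined gestures (1185)
            if gesture.matches(pen_samples, touches, context):
                gesture.initiate(device)          # 1175: trigger the corresponding action
```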
  • FIG. 12 illustrates a simplified example of a general-purpose computer system in combination with a stylus or pen enhanced with various sensors with which various embodiments and elements of the Motion and Context Sharing Technique, as described herein, may be implemented. It should be noted that any boxes that are represented by broken or dashed lines in FIG. 12 represent alternate embodiments of the simplified computing device and sensor pen, and that any or all of these alternate embodiments, as described below, may be used in combination with other alternate embodiments that are described throughout this document.
  • FIG. 12 shows a general system diagram showing a simplified touch-sensitive computing device 1200 .
  • touch-sensitive computing devices 1200 have one or more touch-sensitive surfaces 1205 or regions (e.g., touch screen, touch sensitive bezel or case, sensors for detection of hover-type inputs, optical touch sensors, etc.).
  • touch-sensitive computing devices 1200 include, but are not limited to, touch-sensitive display devices connected to a computing device, touch-sensitive phone devices, touch-sensitive media players, touch-sensitive e-readers, notebooks, netbooks, booklets (dual-screen), tablet type computers, or any other device having one or more touch-sensitive surfaces or input modalities.
  • the computing device 1200 should have a sufficient computational capability and system memory to enable basic computational operations.
  • the computing device 1200 may include one or more sensors 1210 , including, but not limited to, accelerometers, cameras, capacitive sensors, proximity sensors, microphones, multi-spectral sensors, pen or stylus digitizer, etc.
  • the computational capability is generally illustrated by one or more processing unit(s) 1225 , and may also include one or more GPUs 1215 , either or both in communication with system memory 1220 .
  • processing unit(s) 1225 of the computing device 1200 may be specialized microprocessors, such as a DSP, a VLIW, or other micro-controller, or can be conventional CPUs having one or more processing cores, including specialized GPU-based cores in a multi-core CPU.
  • the computing device 1200 may also include other components, such as, for example, a communications interface 1230 for receiving communications from sensor pen device 1235 .
  • the computing device 1200 may also include one or more conventional computer input devices 1240 or combinations of such devices (e.g., pointing devices, keyboards, audio input devices, voice or speech-based input and control devices, video input devices, haptic input devices, touch input devices, devices for receiving wired or wireless data transmissions, etc.).
  • the computing device 1200 may also include other optional components, such as, for example, one or more conventional computer output devices 1250 (e.g., display device(s) 1255 , audio output devices, video output devices, devices for transmitting wired or wireless data transmissions, etc.).
  • typical communications interfaces 1230 , input devices 1240 , output devices 1250 , and storage devices 1260 for general-purpose computers are well known to those skilled in the art, and will not be described in detail herein.
  • the computing device 1200 may also include a variety of computer readable media.
  • Computer readable media can be any available media that can be accessed by computer device 1200 via storage devices 1260 and includes both volatile and nonvolatile media that is either removable 1270 and/or non-removable 1280 , for storage of information such as computer-readable or computer-executable instructions, data structures, program modules, or other data.
  • Computer readable media may comprise computer storage media and communication media.
  • Computer storage media refers to tangible computer or machine readable media or storage devices such as DVD's, CD's, floppy disks, tape drives, hard drives, optical drives, solid state memory devices, RAM, ROM, EEPROM, flash memory or other memory technology, magnetic cassettes, magnetic tapes, magnetic disk storage, or other magnetic storage devices, or any other device which can be used to store the desired information and which can be accessed by one or more computing devices.
  • Retention of information such as computer-readable or computer-executable instructions, data structures, program modules, etc. can also be accomplished by using any of a variety of the aforementioned communication media to encode one or more modulated data signals or carrier waves, or other transport mechanisms or communications protocols, and includes any wired or wireless information delivery mechanism.
  • modulated data signal or “carrier wave” generally refer to a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
  • communication media includes wired media such as a wired network or direct-wired connection carrying one or more modulated data signals, and wireless media such as acoustic, RF, infrared, laser, and other wireless media for transmitting and/or receiving one or more modulated data signals or carrier waves. Combinations of any of the above should also be included within the scope of communication media.
  • Motion and Context Sharing Technique described herein may be further described in the general context of computer-executable instructions, such as program modules, being executed by a computing device.
  • program modules include routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types.
  • the embodiments described herein may also be practiced in distributed computing environments where tasks are performed by one or more remote processing devices, or within a cloud of one or more devices, that are linked through one or more communications networks.
  • program modules may be located in both local and remote computer storage media including media storage devices.
  • the aforementioned instructions may be implemented, in part or in whole, as hardware logic circuits, which may or may not include a processor.
  • the sensor pen device 1235 illustrated by FIG. 12 shows a simplified version of a pen or stylus augmented with pen sensors 1245 , logic 1265 , a power source 1275 , and basic I/O capabilities 1285 .
  • pen sensors 1245 for use with the sensor pen device 1235 include, but are not limited to, inertial sensors, accelerometers, pressure sensors, grip sensors, near-field communication sensors, RFID tags and/or sensors, temperature sensors, microphones, magnetometers, capacitive sensors, gyroscopes, etc.
  • the logic 1265 of the sensor pen device 1235 is similar to the computational capabilities of computing device 1200 , but is generally less powerful in terms of computational speed, memory, etc.
  • the sensor pen device 1235 can be constructed with sufficient logic 1265 such that it can be considered a standalone capable computational device.
  • the power source 1275 of the sensor pen device 1235 is implemented in various form factors, including, but not limited to, replaceable batteries, rechargeable batteries, capacitive energy storage devices, fuel cells, etc.
  • the I/O 1285 of the sensor pen device 1235 provides conventional wired or wireless communications capabilities that allow the sensor pen device to communicate sensor data and other information to the computing device 1200 .
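  • As a hedged illustration of the pen-to-device reporting path just described, the sketch below packs one sensor sample into a compact record for wireless transfer and unpacks it on the computing device; the field layout and packet format are assumptions for illustration, not a specification of the sensor pen's actual protocol.
```python
import struct
import time

# Hypothetical packet: timestamp + 3-axis accel + 3-axis gyro + grip pressure + tip pressure.
PACKET_FORMAT = "<d3f3fff"

def pack_sample(accel, gyro, grip, tip):
    """On the sensor pen: serialize one sample for transmission over the wireless link."""
    return struct.pack(PACKET_FORMAT, time.time(), *accel, *gyro, grip, tip)

def unpack_sample(payload):
    """On the computing device: recover the sample fields from a received packet."""
    t, ax, ay, az, gx, gy, gz, grip, tip = struct.unpack(PACKET_FORMAT, payload)
    return {"t": t, "accel": (ax, ay, az), "gyro": (gx, gy, gz), "grip": grip, "tip": tip}
```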

Abstract

A “Motion and Context Sharing Technique” uses a pen or stylus enhanced to incorporate multiple sensors, i.e., a “sensor pen,” and a power supply to enable various input techniques and gestures. Various combinations of pen stroke, pressure, motion, and other sensor pen inputs are used to enable various hybrid input techniques that incorporate simultaneous, concurrent, sequential, and/or interleaved, sensor pen inputs and touch inputs (i.e., finger, palm, hand, etc.) on displays or other touch sensitive surfaces. This enables a variety of motion-gesture inputs relating to the context of how the sensor pen is used or held, even when the pen is not in contact or within sensing range of the computing device digitizer. In other words, any particular touch inputs or combinations of touch inputs are correlated with any desired sensor pen inputs, with those correlated inputs then being used to initiate any desired action by the computing device.

Description

CROSS-REFERENCE TO RELATED APPLICATIONS
This application is a Continuation-in-Part of a prior application entitled “MULTI-TOUCH INPUT DEVICE WITH ORIENTATION” by Xiang Cao, et al., which was filed with the USPTO on Feb. 11, 2011 and assigned Ser. No. 13/026,058, the subject matter of which is incorporated herein by this reference.
BACKGROUND
Many mobile computing devices (e.g., tablets, phones, etc.) use a pen, pointer, or stylus type input device (collectively referred to herein as a “pen type input device” or “pen”) in combination with a digitizer component of the computing device for input purposes. Typically, pen type input devices enable a variety of multi-modal pen, touch, and motion based input techniques.
Various conventional input techniques have adapted pen type devices to provide auxiliary input channels including various combinations of tilting, rolling, and pressure sensing. However, one of the limitations of many of these techniques is that they operate using sensors coupled to the computing device to sense and consider pen movements or hover conditions that are required to be in close proximity to the digitizer so that the pen can be sensed by the digitizer. Many such techniques operate in a context where the pen is used to perform various input actions that are then sensed and interpreted by the computing device.
For example, one conventional technique considers pen rolling during handwriting and sketching tasks, as well as various intentional pen rolling gestures. However, these pen rolling techniques operate in close proximity to the computing device based on sensors associated with the computing device. Related techniques that require the pen type input device to maintain contact (or extreme proximity) with the digitizer include various tilt and pressure based pen inputs. Various examples of such techniques consider separate or combined tilt and pressure inputs in various tablet-based settings for interacting with context menus, providing multi-parameter selection, object or menu manipulation, widget control, etc.
In contrast, various conventional techniques use an accelerometer-enhanced pen to sense movements when the pen or stylus is not touching the display. The sensed movements are then provided to the computing device for input purposes such as shaking the stylus to cycle through color palettes, and rolling the stylus to pick colors or scroll web pages. A somewhat related technique provides a pointing device having multiple inertial sensors to enable three-dimensional pointing in a “smart room” environment. This technique enables a user to gesture to objects in the room and speak voice commands. Other techniques use 3D spatial input to employ stylus-like devices in free space, but require absolute tracking technologies that are generally impractical for mobile pen-and-tablet type interactions.
Recently various techniques involving the use of contact (touch) sensors or multi-contact pressure (non-zero force) sensors on a pen surface have been used to enable various grip-sensing input scenarios. For example, stylus barrels have been developed to provide multi-touch capabilities for sensing finger gestures. Specific grips can also be associated with particular pens or brushes. Several conventional systems employ inertial sensors in tandem with grip sensing to boost grip pattern recognition.
Further, various conventional systems combine pen tilt with direct-touch input. One such system uses a stylus that senses which corners, edges, or sides of the stylus come into contact with a tabletop display. Thus, by tilting or rolling the stylus while it remains in contact with the display, the user can fluidly switch between a number of tools, modes, and other input controls. This system also combines direct multi-touch input with stylus orientation, allowing users to tap a finger on a control while holding or “tucking” the stylus in the palm. However, this system requires contact with the display in order to sense tilt or other motions. Related techniques combine both touch and motion for mobile devices by using direct touch to cue the system to recognize shaking and other motions of pen type input devices.
SUMMARY
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter. Further, while certain disadvantages of prior technologies may be noted or discussed herein, the claimed subject matter is not intended to be limited to implementations that may solve or address any or all of the disadvantages of those prior technologies.
In general, a “Motion and Context Sharing Technique,” as described herein, provides a variety of input techniques based on various combinations of pen input, direct-touch input, and motion-sensing inputs. In contrast to existing pen based input techniques, the Motion and Context Sharing Techniques described herein leverage inputs from some or all of the sensors of a “sensor pen” in combination with displays or other surfaces that support both pen or stylus inputs and direct multi-touch input. In other words, various embodiments of the Motion and Context Sharing Technique consider various combinations of pen stroke, pressure, motion, and other inputs in the context of touch-sensitive displays, in combination with various hybrid techniques that incorporate simultaneous, concurrent, sequential, and/or interleaved, sensor pen inputs and touch inputs (i.e., finger, palm, hand, etc.) on the display or other touch sensitive surface of the computing device.
Note that the term pressure, as relating to pressure sensors and the like may refer to various sensor types and configurations. For example, in various cases and embodiments, pressure may refer to pen tip pressure exerted on a display. In general, pen tip pressure is typically sensed by some type of pressure transducer inside the pen, but it is also possible to have the pen tip pressure sensing done by the display/digitizer itself in some devices. In addition, the term pressure or pressure sensing or the like may also refer to a separate channel of sensing the grip pressure of the hand (or fingers) contacting an exterior casing or surface of the pen. Various sensing modalities employed by the Motion and Context Sharing Technique may separately or concurrently consider or employ both types of pressure sensing (i.e., pen tip pressure and pen grip pressure) for initiating various motion gestures.
Note that various devices used to enable some of the many embodiments of a “Motion and Context Sharing Technique,” as described herein, include pens, pointers, stylus type input devices, etc., that are collectively referred to herein as a “sensor pen” for purposes of discussion. Note also that the functionality described herein may be implemented in any desired form factor, e.g., wand, staff, ball racquet, toy sword, etc., for use with various gaming devices, gaming consoles, or other computing devices. Further, the sensor pens described herein are adapted to incorporate various combinations of a power supply and multiple sensors including, but not limited to inertial sensors, accelerometers, pressure sensors, grip sensors, near-field communication sensors, RFID tags and/or sensors, temperature sensors, microphones, magnetometers, capacitive sensors, gyroscopes, etc., in combination with various wireless communications capabilities for interfacing with various computing devices. In addition, in various embodiments, the sensor pens described herein have been further adapted to incorporate digital memory and/or computing capabilities that allow the sensor pens to act in combination or cooperation with other computing devices, other sensor pens, or even as a standalone computing device.
Advantageously, the various embodiments of the Motion and Context Sharing Technique described herein use various wired and/or wireless communication techniques integrated into the sensor pen to enable inputs and gestures that are not restricted to a near-proximity sensing range of the digitizer of the computing device. In addition, another advantage of the Motion and Context Sharing Technique described herein is that the use of a wide range of sensors and a communication interface in the sensor pen enables a wide array of sensing dimensions that provide new input scenarios and gestures for computing devices. Example input scenarios include, but are not limited to, using electronically active sensor pens for tablets, electronic whiteboards, or other direct-input devices, since the sensor pen itself integrates motion and/or grip-sensing and other sensor-based capabilities that allow richer in-air pen-based gestures at arbitrary distances from the computing device, as well as richer sensing of user context information.
Further, these capabilities enable realization of a wide range of new pen-based gestures and input scenarios, many of which are simply not supported by conventional tablet-digitizer technologies due to the necessity of conventional pens to be extremely close to the display for the digitizer to sense the pen's presence. In addition, given the sensors and communications capabilities of the sensor pen, the Motion and Context Sharing Technique provides various mechanisms that can be used to optimize the behavior of computing devices and user experience based on concurrently sensing input states of the sensor pen and the computing device. Simple examples of this concept include, but are not limited to, alerting the user to a forgotten sensor pen, as well as sensing whether the user is touching the display with a hand that is also grasping a sensor pen. This can be used, for example, to make fine-grained distinctions among touch gestures as well as to support a variety of "palm rejection" techniques for eliminating or avoiding unintentional touch inputs.
In view of the above summary, it is clear that the Motion and Context Sharing Technique described herein uses a sensor pen to enable a variety of input techniques and gestures based on various combinations of direct-touch inputs and sensor pen inputs that are not restricted to a near-proximity sensing range of the digitizer of a computing device. In addition to the just described benefits, other advantages of the Motion and Context Sharing Technique will become apparent from the detailed description that follows hereinafter when taken in conjunction with the accompanying drawing figures.
BRIEF DESCRIPTION OF THE DRAWINGS
The specific features, aspects, and advantages of the claimed subject matter will become better understood with regard to the following description, appended claims, and accompanying drawings where:
FIG. 1 shows a general operational overview of a "Motion and Context Sharing Technique" that illustrates interoperation between a sensor pen and a touch-sensitive computing device for triggering one or more motion gestures or other actions, as described herein.
FIG. 2 provides an exemplary architectural flow diagram that illustrates program modules for implementing various embodiments of the Motion and Context Sharing Technique, as described herein.
FIG. 3 provides an illustration of using the Motion and Context Sharing Technique to provide a correlated touch and sensor pen input mechanism, as described herein.
FIG. 4 provides an illustration of using the Motion and Context Sharing Technique to provide a roll to undo input mechanism, as described herein.
FIG. 5 provides an illustration of using the Motion and Context Sharing Technique to provide a finger tap input mechanism, as described herein.
FIG. 6 provides an illustration of using the Motion and Context Sharing Technique to provide a touch and spatter input mechanism for painting, drawing, or sketching type applications, as described herein.
FIG. 7 provides an illustration of using the Motion and Context Sharing Technique to provide a touch and tilt for layers input mechanism, as described herein.
FIG. 8 provides an illustration of using the Motion and Context Sharing Technique to provide a touch and roll to rotate input mechanism, as described herein.
FIG. 9 provides an illustration of using the Motion and Context Sharing Technique to provide a vertical menu input mechanism, as described herein.
FIG. 10 provides an illustration of using the Motion and Context Sharing Technique to provide a hard tap input mechanism, as described herein.
FIG. 11 illustrates a general system flow diagram that illustrates exemplary methods for implementing various embodiments of the Motion and Context Sharing Technique, as described herein.
FIG. 12 is a general system diagram depicting a simplified general-purpose computing device having simplified computing and I/O capabilities, in combination with a sensor pen having various sensors, power and communications capabilities, for use in implementing various embodiments of the Motion and Context Sharing Technique, as described herein.
DETAILED DESCRIPTION OF THE EMBODIMENTS
In the following description of the embodiments of the claimed subject matter, reference is made to the accompanying drawings, which form a part hereof, and in which is shown by way of illustration specific embodiments in which the claimed subject matter may be practiced. It should be understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the presently claimed subject matter.
1.0 Introduction:
In general, a "Motion and Context Sharing Technique," as described herein, provides various techniques for using a pen or stylus enhanced with a power supply and multiple sensors, i.e., a "sensor pen," to enable a variety of input techniques and gestures. These techniques consider various combinations of pen stroke, pressure, motion, and other inputs in the context of touch-sensitive displays, in combination with various hybrid techniques that incorporate simultaneous, concurrent, sequential, and/or interleaved sensor pen inputs and touch inputs (i.e., finger, palm, hand, etc.) on displays or other touch sensitive surfaces. These techniques enable a wide variety of motion-gesture inputs, as well as sensing the context of how a user is holding or using the sensor pen, even when the pen is not in contact, or even within sensing range, of the digitizer of a touch sensitive computing device.
Note that the term pressure, as relating to pressure sensors and the like may refer to various sensor types and configurations. For example, in various cases and embodiments, pressure may refer to pen tip pressure exerted on a display. In general, pen tip pressure is typically sensed by some type of pressure transducer inside the pen, but it is also possible to have the pen tip pressure sensing done by the display/digitizer itself in some devices. In addition, the term pressure or pressure sensing or the like may also refer to a separate channel of sensing the grip pressure of the hand (or fingers) contacting an exterior casing or surface of the pen. Various sensing modalities employed by the Motion and Context Sharing Technique may separately or concurrently consider or employ both types of pressure sensing (i.e., pen tip pressure and pen grip pressure) for initiating various motion gestures.
Note that various devices used to enable some of the many embodiments of the “Motion and Context Sharing Technique” include pens, pointers, stylus type input devices, etc., that are collectively referred to herein as a “sensor pen” for purposes of discussion. Note also that the functionality described herein may be implemented in any desired form factor, e.g., wand, staff, ball racquet, toy sword, etc., for use with various gaming devices, gaming consoles, or other computing devices. Further, the sensor pens described herein are adapted to incorporate a power supply and various combinations of sensors including, but not limited to inertial sensors, accelerometers, pressure sensors, grip sensors, near-field communication sensors, RFID tags and/or sensors, temperature sensors, microphones, magnetometers, capacitive sensors, gyroscopes, etc., in combination with various wireless communications capabilities for interfacing with various computing devices. Note that any or all of these sensors may be multi-axis or multi-position sensors (e.g., 3-axis accelerometers, gyroscopes, and magnetometers). In addition, in various embodiments, the sensor pens described herein have been further adapted to incorporate memory and/or computing capabilities that allow the sensor pens to act in combination or cooperation with other computing devices, other sensor pens, or even as a standalone computing device.
The Motion and Context Sharing Technique is adaptable for use with any touch-sensitive computing device having one or more touch-sensitive surfaces or regions (e.g., touch screen, touch sensitive bezel or case, sensors for detection of hover-type inputs, optical touch sensors, etc.). Note that touch-sensitive computing devices include both single- and multi-touch devices. Examples of touch-sensitive computing devices include, but are not limited to, touch-sensitive display devices connected to a computing device, touch-sensitive phone devices, touch-sensitive media players, touch-sensitive e-readers, notebooks, netbooks, booklets (dual-screen), tablet type computers, or any other device having one or more touch-sensitive surfaces or input modalities. Note also that the touch-sensitive region of such computing devices need not be associated with a display, and furthermore that the location or type of contact-sensitive region (e.g. front of a device on the display, vs. back of device without any associated display) may be considered as an input parameter for initiating one or more motion gestures (i.e., user interface actions corresponding to the motion gesture).
Note also that the term “touch,” as used throughout this document will generally refer to physical user contact (e.g., finger, palm, hand, etc.) on touch sensitive displays or other touch sensitive surfaces of a computing device using capacitive sensors or the like. Note also that virtual touch inputs relative to projected displays, electronic whiteboards, or other surfaces or objects are treated by the Motion and Context Sharing Technique in the same manner as actual touch inputs on a touch-sensitive surface. Such virtual touch inputs are detected using conventional techniques such as, for example, using cameras or other imaging technologies to track user finger movement relative to a projected image, relative to text on an electronic whiteboard, relative to physical objects, etc.
In addition, it should be understood that the Motion and Context Sharing Technique is operable with a wide variety of touch and flex-sensitive materials for determining or sensing touch or pressure. For example, one touch-sensing technology adapted for use by the Motion and Context Sharing Technique determines touch or pressure by evaluating a light source relative to some definite deformation of a touched surface to sense contact. Also, note that sensor pens, as discussed herein may include multiple types of touch and/or pressure sensing substrates. For example, sensor pens may be both touch-sensitive and/or pressure sensitive using any combination of sensors, such as, for example, capacitive sensors, pressure sensors, flex- or deformation-based sensors, etc.
In addition, the Motion and Context Sharing Technique uses a variety of known techniques for differentiating between valid and invalid touches received by one or more touch-sensitive surfaces of the touch-sensitive computing device. Examples of valid touches and contacts include single, simultaneous, concurrent, sequential, and/or interleaved user finger touches (including gesture type touches), pen or stylus touches or inputs, hover-type inputs, or any combination thereof. With respect to invalid or unintended touches, the Motion and Context Sharing Technique disables or ignores one or more regions or sub-regions of touch-sensitive input surfaces that are expected to receive unintentional contacts, or intentional contacts not intended as inputs, for device or application control purposes. Examples of contacts that may not be intended as inputs include, but are not limited to, a user's palm resting on a touch screen while the user writes on that screen with a stylus or pen, holding the computing device by gripping a touch sensitive bezel, etc.
Further, the terms “contact” or “pen input” as used herein generally refer to interaction involving physical contact (or hover) of the sensor pen with a touch sensitive surface or digitizer component of the computing device. Note also that as discussed herein, inputs provided by one or more sensors of the sensor pen will generally be referred to herein as a “sensor input,” regardless of whether or not the sensor pen is within a digitizer range or even in contact with a touch sensitive surface or digitizer component of the computing device.
Consequently, it should be understood that any particular motion gestures or inputs described herein are derived from various combinations of simultaneous, concurrent, sequential, and/or interleaved pen inputs, user touches, and sensor inputs. It should be further understood that the current context or state of either or both the sensor pen and computing device is also considered when determining which motion gestures or inputs to activate or initialize.
The Motion and Context Sharing Technique described herein provides a number of advantages relating to pen or stylus based user interaction with touch-sensitive computing devices, including, but not limited to:
    • Adopting the perspective of context-sensing from mobile computing and applying it to motions of the stylus or sensor pen itself;
    • Leveraging the ability to sense tilting or other explicit motion gestures that occur close to the computing device as well as beyond the hover-sensing range of the digitizer;
    • Integrating both pen motions and computing device motions for initiating various actions relative to the context between the sensor pen and computing device. Simple examples of the use of such contextual information include the pen loss prevention techniques described herein;
    • Providing expressive application-specific gestures such as a “touch and spatter” technique for interacting with painting-type applications, as well as more generic interactions such as having the computing device respond to picking up and putting down the pen (e.g., exit or enter sleep mode, open context sensitive menu for a currently active object, etc.), or cross-application gestures such as a “vertical menu” input mechanism and support for application undo and redo sequences; and
    • Combining pen stroke, finger touch, and motion-sensing together into hybrid, multi-modal techniques that enable new types of gestures for pen-operated tablet computers and other touch-sensitive computing devices.
1.1 System Overview:
The Motion and Context Sharing Technique operates, in part, by considering motion-based inputs from the sensor pen to trigger various actions with respect to the computing device. FIG. 1, provides a general operational overview of the Motion and Context Sharing Technique, illustrating interoperation between the sensor pen and the computing device to trigger one or more motion gestures or other actions. More specifically, FIG. 1 shows sensor pen 100 in communication with touch sensitive computing device 105 via communications link 110. As discussed in further detail herein, the sensor pen 100 includes a variety of sensors. A sensor module 115 in the sensor pen 100 monitors readings of one or more of those sensors, and provides them to a communications module 120 to be sent to the computing device 105.
The touch sensitive computing device 105 includes a sensor input module 125 that receives input from one or more sensors of the sensor pen 100 (e.g., inertial, accelerometers, pressure, grip, near-field communication, RFID, temperature, microphones, magnetometers, capacitive sensors, gyroscopes, etc.) and provides that input to a gesture activation module 135. In addition, the gesture activation module 135 also receives input from a touch input module 130 that receives input from user touch of one or more touch sensitive surfaces of the computing device 105. Given the sensor inputs and the touch inputs, if any, the gesture activation module 135 then evaluates simultaneous, concurrent, sequential, and/or interleaved sensor pen 100 inputs and touch inputs (i.e., finger, palm, hand, etc.) on displays or other touch sensitive surfaces of the computing device 105 relative to contexts of sensor pen and computing device to trigger or activate one or more motion gestures (e.g., motion gestures 140 through 170, discussed in further detail herein).
More specifically, the Motion and Context Sharing Technique senses various properties of the sensor pen relative to various distances between the sensor pen and the computing device (i.e., contact, hover range, and beyond hover range), and whether the motions of the sensor pen are correlated with a concurrent user touch of a display or some other touch-sensitive surface of the computing device or with some motion of the computing device. These sensed properties of the sensor pen are then correlated with various touches or motions of the computing device, and may also be considered in view of the current contexts of either or both the sensor pen and computing device (e.g., whether they are being held, moving, power state, application status, etc.), and used to trigger a variety of “motion gestures” or other actions.
With respect to hover range, in various embodiments, the Motion and Context Sharing Technique considers the distance of the sensor pen above the digitizer of the computing device. While a variety of ranges can be considered, in various tested embodiments, three range categories were considered, including: physical contact, within hover range of the digitizer, or beyond range of the digitizer. The activation mechanism for any particular motion gesture may consider these different ranges of the sensor pen, in combination with any other correlated inputs, touches, and/or motions of the computing device.
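For purposes of illustration only, the range gating and input correlation described above can be sketched as a small dispatch routine. The following is a minimal sketch in Python; the class and function names, the hover limit, and the example gesture mappings are hypothetical assumptions rather than part of the technique itself.

    from enum import Enum, auto

    HOVER_LIMIT_CM = 1.0  # hypothetical hover-sensing range of the digitizer

    class RangeCategory(Enum):
        CONTACT = auto()
        HOVER = auto()
        BEYOND_HOVER = auto()

    def classify_range(pen_in_contact, hover_distance_cm):
        """Map digitizer readings onto the three range categories discussed above."""
        if pen_in_contact:
            return RangeCategory.CONTACT
        if hover_distance_cm is not None and hover_distance_cm <= HOVER_LIMIT_CM:
            return RangeCategory.HOVER
        return RangeCategory.BEYOND_HOVER

    def activate_gesture(range_category, pen_motion, touch_points):
        """Greatly simplified activation table correlating pen range and motion with touch."""
        if pen_motion == "roll" and not touch_points:
            return "roll_to_undo"
        if pen_motion == "roll" and touch_points:
            return "touch_and_roll_to_rotate"
        if pen_motion == "strike" and touch_points:
            return "touch_and_spatter"
        if pen_motion == "hard_contact" and range_category is RangeCategory.CONTACT:
            return "hard_stroke"
        return None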
In general, motion gestures can be grouped into various categories. One category includes motions that employ device orientation (of either or both the sensor pen and the computing device), whether absolute or relative. Other categories cover different styles of motion input, including hard contact forces, particular patterns or gestures of movement, and techniques that use stability (e.g., the absence of motion or other particular sensor inputs) to trigger actions or specific contexts.
Many different motion gestures and input scenarios are enabled by the Motion and Context Sharing Technique. Further, a wide variety of user-definable motion gestures and input scenarios are enabled by allowing the user to associate any desired combination of touches, contacts, and/or sensor pen motions with any desired action by the computing device. In other words, any particular touch inputs or combinations of touch inputs are correlated with any desired sensor inputs and/or contacts, with those correlated inputs then being used to initiate any desired action by the computing device.
Note that raw sensor readings can be reported or transmitted from the sensor pen to the computing device for evaluation and characterization by the computing device. For example, raw sensor data from inertial sensors within the sensor pen can be reported by the sensor pen to the computing device, with the computing device then determining pen orientation as a function of the data from the inertial sensors. Alternately, in various embodiments, the sensor pen uses onboard computational capability to evaluate the input from various sensors. For example, sensor data derived from inertial sensors within the sensor pen can be processed by a computational component of the sensor pen to determine pen orientation, with the orientation or tilt then being reported by the sensor pen to the computing device.
Clearly, any desired combination of reporting of raw sensor data and reporting of processed sensor data to the computing device by the sensor pen can be performed depending upon the computational capabilities of the sensor pen. However, for purposes of explanation, the following discussion will generally refer to reporting of sensor data to the computing device by the sensor pen for further processing by the computing device to determine various motion gestures or other input scenarios. A few examples of various motion gestures and input scenarios enabled by the Motion and Context Sharing Technique are briefly introduced below.
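As one illustration of the raw-versus-processed distinction above, the following minimal Python sketch estimates pen tilt from a raw three-axis accelerometer sample; the same computation could run either on the pen or on the computing device. The function name and axis conventions are assumptions, since they depend on how the sensors are mounted in the pen barrel.

    import math

    def tilt_from_accelerometer(ax, ay, az):
        """Estimate pen pitch and roll (in degrees) relative to gravity from one
        raw accelerometer sample, assuming the pen is roughly at rest."""
        pitch = math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))
        roll = math.degrees(math.atan2(ay, az))
        return pitch, roll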
For example, one such input technique, referred to as a “touch and tilt for layers” gesture, uses a concurrent user touch and sensor pen tilt to activate or interact with different layers displayed on a screen. Note that the touch and tilt for layers gesture is initiated with the sensor pen at any desired distance from the display. Sensor pen tilt is determined by one or more of the pen sensors and reported to the computing device via the communications capabilities of the sensor pen. The touch and tilt for layers gesture is discussed in further detail herein.
A related gesture referred to as a “touch and roll to rotate” gesture, uses a concurrent user touch on a displayed object (e.g., text, shape, image, etc.) and a user initiated sensor pen rolling motion to rotate the touched object. Note that the touch and roll to rotate gesture is initiated with the sensor pen at any desired distance from the display, with sensor pen tilt being determined by one or more of the pen sensors and reported via the communications capabilities of the sensor pen. The touch and roll to rotate gesture is discussed in further detail herein.
Another gesture, referred to herein as a “roll to undo” gesture, uses sensors of the pen to detect a user initiated rolling motion of the sensor pen, with the result being to undo previous operations, regardless of whether those actions were user initiated or automatically initiated by the computing device or system. As with the touch and tilt for layers gesture, the roll to undo gesture is initiated with the sensor pen at any desired distance from the display, with sensor pen rolling motions being determined by one or more of the pen sensors and reported via the communications capabilities of the sensor pen. The roll to undo gesture is discussed in further detail herein.
Another gesture, referred to herein as a “vertical menu” gesture, uses sensors of the pen to detect a user initiated motion of the sensor pen that occurs with the pen coming into proximity range (or in contact with the screen) in an orientation approximately perpendicular relative to the computing device. For example, in one embodiment, bringing the pen close to the screen with the pen in this perpendicular pose will initiate opening or expansion of a vertical software menu or the like, while motions moving away from the computing device or display (i.e., motion beyond the proximity sensing range) may initiate closing or contraction of the vertical software menu or the like. Note that these motions may act in concert with a cursor location on or near the menu location at the time that the motion of the sensor pen is detected, or with any other locus of interaction with the computing device. As with the previously noted gestures, the vertical menu gesture is initiated with the sensor pen at any desired distance from the display, with sensor pen orientation and distance being determined by one or more of the pen sensors and reported via the communications capabilities of the sensor pen. Note that various additional embodiments and considerations with respect to the vertical menu concept are discussed in further detail in Section 2.8.1.
Another gesture, referred to herein as a “touch and spatter” gesture, uses sensors of the pen to detect a user initiated rapping motion of the sensor pen while the user is touching the display surface of the computing device. In general, the touch and spatter gesture operates in a drawing or painting type application to initiate an action that mimics the effect of an artist rapping a loaded paint brush on her finger to produce spatters of paint on the paper. In this case, the user touches the screen with a finger and then strikes the sensor pen against that finger (or any other finger, object, or surface). Note that, given the limited hover-sensing range of typical tablets, the tablet typically will not know the actual (x, y) location of the pen tip. Consequently, the touch and spatter gesture initiates an action that produces spatters (in a currently selected pen color) centered on the finger contact point. As with the previously noted gestures, the touch and spatter gesture is initiated with the sensor pen at any desired distance from the display, with sensor pen rapping motions being determined by one or more of the pen sensors and reported via the communications capabilities of the sensor pen. The touch and spatter gesture is discussed in further detail herein.
A similar gesture, referred to herein as a “barrel tap” gesture, uses sensors of the pen to detect a user initiated finger tap on the sensor pen while the user is holding that pen. In general, the barrel tap gesture operates at any desired distance from the computing device or digitizer. More specifically, hard-contact finger taps on the barrel of the pen are used as a way to “replace” mechanical button input such as a pen barrel button, mouse button, enter key (or other keyboard press), or other button associated with the locus of interaction between the user and the computing device. The sensor pen tap gestures are generally identified as acceleration spikes from accelerometers in or on the sensor pen, consistent with a finger strike on the pen barrel. The barrel tap gesture is discussed in further detail herein.
Another similar gesture, referred to herein as a “hard stroke” gesture, uses sensors of the pen to detect a fast movement (i.e., acceleration beyond a predetermined threshold) of the sensor pen when not contacting the computing device. In general, the hard stroke gesture operates at any desired distance from the computing device or digitizer, but in some embodiments the gesture is accepted at the moment the pen physically strikes the digitizer screen, and may furthermore be registered when the pen strikes the screen within an acceptable range of relative orientation to the tablet display. Such hard contact gestures with the screen can be difficult to sense with traditional pressure sensors due to sampling rate limitations, and because the relative orientation of the pen may not be known on traditional digitizers. More specifically, this particular sensor pen motion can be associated with any desired action of the computing device. In a tested embodiment, it was used to initiate a lasso type operation in a drawing program. The hard stroke gesture is discussed in further detail herein.
Other examples of correlated sensor pen motions relative to the computing device include using pen sensors (e.g., accelerometers, pressure sensors, inertial sensors, grip sensors, etc.) to determine when the sensor pen is picked up or put down by the user. By considering the current sensor pen context or state (i.e., picked up or put down) relative to a current context or state of the computing device (e.g., held by the user, power off, etc.), any desired action can be initiated (e.g., exit sleep mode in computing device when pen picked up, or enter sleep mode if pen set down).
Conversely, a similar technique considers motion of the computing device relative to the sensor pen. For example, if sensors in the computing device (e.g., accelerometers or other motion or positional sensors) indicate that the computing device is being held by a user that is walking or moving when sensors in the pen indicate that the sensor pen is stationary, an automated alert (e.g., visible, audible, tactile, etc.) is initiated by the computing device. Similarly, speakers or lights coupled to the sensor pen can provide any combination of visible and audible alerts to alert the user that the computing device is moving while the sensor pen is stationary. These types of “pen loss prevention” techniques provide additional examples of using correlated motions (or other inputs) of the sensor pen relative to the computing device to initiate various actions.
1.2 Configuration Overview:
As noted above, the “Motion and Context Sharing Technique” provides various techniques for using a “sensor pen” to enable a variety of input techniques based on various combinations of direct-touch inputs and sensor pen inputs that are not restricted to a near-proximity sensing range of the digitizer of a computing device. The processes summarized above are illustrated by the general system diagram of FIG. 2. In particular, the system diagram of FIG. 2 illustrates the interrelationships between program modules for implementing various embodiments of the Motion and Context Sharing Technique, as described herein. Furthermore, while the system diagram of FIG. 2 illustrates a high-level view of various embodiments of the Motion and Context Sharing Technique, FIG. 2 is not intended to provide an exhaustive or complete illustration of every possible embodiment of the Motion and Context Sharing Technique as described throughout this document.
In addition, it should be noted that any boxes and interconnections between boxes that may be represented by broken or dashed lines in FIG. 2 represent alternate embodiments of the Motion and Context Sharing Technique described herein, and that any or all of these alternate embodiments, as described below, may be used in combination with other alternate embodiments that are described throughout this document. Note also that various elements of FIG. 2 were previously introduced in FIG. 1, and that any common elements between these two figures share the same element numbers.
In general, as illustrated by FIG. 2, the processes enabled by the Motion and Context Sharing Technique begin operation by providing sensor inputs from the sensor pen, touch inputs from the computing device, and context information from either or both the sensor pen and computing device to the aforementioned gesture activation module. The gesture activation module then evaluates the available inputs and information to trigger one or more motion gestures, along with any corresponding user interface (UI).
More specifically, the sensor input module 125 provides sensor inputs received from one or more sensors coupled to the sensor pen to the gesture activation module 135. In addition, the touch input module 130 provides touch inputs detected by any touch sensitive surface of the computing device to the gesture activation module 135. As noted above, virtual “touches” on various projections, surfaces, displays, objects, etc., are also considered by using various well-known techniques to track user touches on any surface, display or object.
Further, as noted above, in various embodiments, the Motion and Context Sharing Technique also rejects or ignores unwanted or unintended touches. An optional palm rejection module 205 is used for this purpose. In particular, the palm rejection module 205 evaluates any touch to determine whether that touch was intended, and then either accepts that touch as input for further processing by the touch input module 130, or rejects that touch. In addition, in various embodiments, the palm rejection module 205 disables or ignores (i.e., “rejects”) user touches on or near particular regions of any touch-sensitive surfaces, depending upon the context of that touch. Note that “rejected” touches may still be handled by the Motion and Context Sharing Technique as an input to know where the palm is planted, but flagged such that unintentional button presses or gestures will not be triggered in the operating system or applications by accident.
With respect to context of the computing device and sensor pen, a context reporting module 210 reports this information to the gesture activation module 135. Specifically, the context reporting module 210 determines the current context of the computing device and/or the sensor pen, and reports that context information to the gesture activation module 135 for use in determining motion gestures. Context examples include, but are not limited to, sensor pen and computing device individual or relative motions, whether they are being held, power states, application status, etc. Furthermore, the presence or absence (i.e., loss) of a pen signal, as well as the signal strength, may be used as an aspect of context as well. For example, in the case of multiple pens, simple triangulation based on signal strengths enables the Motion and Context Sharing Technique to determine approximate relative spatial location and proximity of one or more users.
Examples of various motion gestures triggered or activated by the gesture activation module 135 include motion gestures 140 through 165, and motion gestures 215 through 240. Note that the aforementioned user defined motion gestures 170 are defined via a user interface 245 that allows the user to define one or more motion gestures using sensor pen inputs relative to any combination of touch and context inputs. Each of the motion gestures 140 through 165 and 215 through 240 illustrated in FIG. 2 is described in further detail throughout Section 2 of this document, with examples of many of these motion gestures being illustrated by FIG. 3 through FIG. 10.
2.0 Operational Details of Motion and Context Sharing Technique:
The above-described program modules are employed for implementing various embodiments of the Motion and Context Sharing Technique. As summarized above, the Motion and Context Sharing Technique provides various techniques for using a “sensor pen” to enable a variety of input techniques based on various combinations of direct-touch inputs and sensor pen inputs that are not restricted to a near-proximity sensing range of the digitizer of a computing device. The following sections provide a detailed discussion of the operation of various embodiments of the Motion and Context Sharing Technique, and of exemplary methods for implementing the program modules described in Section 1 with respect to FIG. 1 and FIG. 2.
In particular, the following sections provide examples and operational details of various embodiments of the Motion and Context Sharing Technique, including:
    • An operational overview of the Motion and Context Sharing Technique;
    • Exemplary system hardware for implementing the sensor pen;
    • Exemplary interaction techniques using the sensor pen that are enabled by the Motion and Context Sharing Technique;
    • Exemplary context sensing techniques that consider current sensor pen context;
    • Correlated finger touch and sensor pen motions for initiating actions or commands;
    • Active sensor pen motions outside of hover range of the display for initiating actions or commands;
    • Sensor pen motions combined with direct touch inputs for initiating actions or commands; and
    • Close range motion gestures of the sensor pen within hover range or contact with the display.
2.1 Operational Overview:
As noted above, the Motion and Context Sharing Technique-based processes described herein provide various techniques for using a “sensor pen” to enable a variety of input techniques based on various combinations of direct-touch inputs and sensor pen inputs that are not restricted to a near-proximity sensing range of the digitizer of a computing device.
More specifically, the Motion and Context Sharing Technique considers various combinations of sensor pen stroke, pressure, motion, and other sensor pen inputs to enable various hybrid input techniques that incorporate simultaneous, concurrent, sequential, and/or interleaved, sensor pen inputs and touch inputs (i.e., finger, palm, hand, etc.) on displays or other touch sensitive surfaces. This enables a variety of motion-gesture inputs relating to the context of how the sensor pen is used or held, even when the pen is not in contact or within sensing range of the computing device digitizer. In other words, any particular touch inputs or combinations of touch inputs relative to a computing device are correlated with any desired sensor pen inputs, with those correlated inputs then being used to initiate any desired action by the computing device.
2.2 Exemplary System Hardware:
In general, the sensor pen includes a variety of sensors, communications capabilities, a power supply, and logic circuitry to enable collection of sensor data and execution of firmware or software instantiated within memory accessible to the logic circuitry. For example, in a tested embodiment, the sensor pen was powered by an internal battery, and used a conventional microcontroller device to collect the sensor data and to run firmware. The sensor pen in this tested embodiment included a micro-electro-mechanical systems (MEMS) type three-axis gyroscope, as well as MEMS type three-axis accelerometer and magnetometer modules. Wireless communications capabilities were provided in the tested embodiment of the sensor pen by an integral 2.4 GHz transceiver operating at 2 Mbps. Further, in this tested embodiment, the sensor pen firmware sampled the sensors at 200 Hz and wirelessly transmitted sensor data to an associated tablet-based computing device.
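The sample-and-transmit behavior of the tested pen firmware might be sketched as follows. This is illustrative Python rather than actual firmware, and the packet layout, sensor-read callbacks, and radio interface are hypothetical assumptions, not details of the tested embodiment.

    import struct
    import time

    SAMPLE_HZ = 200
    PERIOD_S = 1.0 / SAMPLE_HZ

    def pack_sample(timestamp_ms, accel, gyro, mag):
        """Pack one sample (3-axis accelerometer, gyroscope, magnetometer) into a compact record."""
        return struct.pack("<I9f", timestamp_ms & 0xFFFFFFFF, *accel, *gyro, *mag)

    def sample_loop(read_accel, read_gyro, read_mag, radio_send):
        """Poll the pen sensors at roughly 200 Hz and stream samples to the tablet."""
        next_t = time.monotonic()
        while True:
            ts = int(time.monotonic() * 1000)
            radio_send(pack_sample(ts, read_accel(), read_gyro(), read_mag()))
            next_t += PERIOD_S
            time.sleep(max(0.0, next_t - time.monotonic()))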
Various components of this exemplary sensor pen are discussed herein in the context of user interaction with either or both the sensor pen and a tablet-type computing device (e.g., whether they are being held, moving, power state, application status, etc.) for initiating various motion gestures and other inputs. However, it must be understood that this exemplary sensor pen implementation is discussed only for purposes of illustration and explanation, and is not intended to limit the scope or functionality of the sensor pen, such as the types of sensors associated with the sensor pen, the communications capabilities of the sensor pen, etc. Further, this exemplary sensor pen implementation is not intended to limit the scope of functionality of the Motion and Context Sharing Technique or of any motion gestures or other input techniques discussed herein.
2.3 Interaction Techniques:
In general, the Motion and Context Sharing Technique considers a variety of categories of sensor pen motion and context relative to touch, motion and context of an associated touch sensitive computing device or display device. For purposes of explanation, the following discussion will refer to a sketching or drawing type application in the context of a tablet-type computing device. However, it should be understood that both the sensor pen and the Motion and Context Sharing Technique are fully capable of interaction and interoperation with any desired application type, operating system type, or touch-sensitive computing device.
In the context of the sketching application, a number of semantically appropriate mappings for various sensor pen gestures were defined within the context of inking and sketching tasks. Further, as noted above, the Motion and Context Sharing Technique also considers the context of various sensor pen gestures relative to the context of the associated computing device, and any contemporaneous user touches of the computing device to enable a variety of concurrent pen-and-touch inputs.
Note that in other application contexts, such as, for example, active reading or mathematical sketching, different gestures or mappings can be defined. In fact, as noted above, any desired user-definable gestures and concurrent pen-and-touch inputs can be configured for any desired action for any desired application, operating system, or computing device. The following sections describe various techniques, including context sensing, pen motion gestures away from the display or touch sensitive surface of the computing device, motion gestures combined with touch input, close-range motion gestures (i.e., within hover range or in contact with the display), and combined sensor pen motions. Further, it should also be understood that voice or speech inputs can be combined with any of the various input techniques discussed herein above to enable a wide range of hybrid input techniques.
2.4 Context Sensing Techniques:
Sensing the motion and resting states of both the stylus and the tablet itself offers a number of opportunities to tailor the user experience to the context of the user's naturally occurring activity. Several examples of such techniques are discussed below. Note that the exemplary techniques discussed in the following paragraphs are provided for purposes of explanation and are not intended to limit the scope of the sensor pen or the Motion and Context Sharing Technique described herein.
2.4.1 Tool Palette Appears & Disappears with Pen:
On tablet computers, there is often a tension between having tool palettes and other UI controls on the screen at all times, versus employing the entire screen for showing the user's content. Advantageously, by considering the context of the user interaction with the sensor pen and the computing device, the Motion and Context Sharing Technique provides a number of techniques for enhancing user experience with respect to such issues.
In the case of a drawing or sketching application, the user interface (UI) tools of interest to the user are often different at different times during any particular session. For example, the UI tools of interest while handwriting or sketching may be different from those used to browse content or review work-in-progress. Consequently, in various embodiments, the Motion and Context Sharing Technique considers the context of the sensor pen to automatically fade in a stylus tool palette when the user picks up the sensor pen (as determined via one or more of the aforementioned pen sensors). In the case of a drawing application or the like, this tool palette includes tools such as color chips to change the pen color, as well as controls that change the mode of the stylus (e.g., eraser, highlighter, lasso selection, inking, etc.). Conversely, when the user puts down the pen, or holds the pen relatively still for some period of time, the palette slowly fades out to minimize distraction.
In terms of sensor pen context, picking up or lifting the sensor pen is identified (via one or more of the sensors) as a transition from a state where the pen is not moving (or is relatively still) to a state of motion above a fixed threshold. In a tested embodiment of the sensor pen, a multi-axis gyroscope sensor in the pen was used to determine sensor pen motion for this purpose due to the sensitivity of gyroscopes to subtle motions. For example, one technique used by the Motion and Context Sharing Technique identifies pen motion (e.g., picking up or lifting) whenever a three-axis sum-of-squares of gyroscope signals exceeds some threshold rotational rate (e.g., 36 deg/s). When such motion is identified, the Motion and Context Sharing Technique then triggers the palette to appear or to fade in over the course of some period of time (e.g., one second). Conversely, when the pen motion falls below the threshold rotational rate, the palette disappears or fades out over some period of time. In a tested embodiment, palette fade out occurred over a longer period of time (e.g., five seconds) than palette fade in, and if sensor pen motion resumed before this fade-out finishes, the palette quickly fades back in to full opacity.
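A minimal sketch of the motion test and fade logic just described, assuming gyroscope rates in degrees per second. The helper names are hypothetical, and whether the 36 deg/s threshold is applied to the squared sum or to its square root is an implementation detail not specified above.

    MOTION_THRESHOLD_DPS = 36.0        # example threshold rotational rate
    FADE_IN_S, FADE_OUT_S = 1.0, 5.0   # example fade-in and fade-out durations

    def pen_is_moving(gx, gy, gz):
        """Three-axis sum-of-squares test of the gyroscope signals."""
        return (gx * gx + gy * gy + gz * gz) > MOTION_THRESHOLD_DPS ** 2

    def update_palette_opacity(opacity, moving, dt):
        """Fade the tool palette in while the pen moves, and out (more slowly) otherwise."""
        if moving:
            return min(1.0, opacity + dt / FADE_IN_S)
        return max(0.0, opacity - dt / FADE_OUT_S)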
For interaction with the palette itself, the Motion and Context Sharing Technique provides various motion gestures relative to the sensor pen that allow the palette UI to respond to either pen taps or finger taps, allowing users to interleave these input modalities to select a current pen tool and color. In this context, this enables the Motion and Context Sharing Technique to treat pen and touch inputs interchangeably. For this reason, it is also possible to call up the palette using touch alone, by tapping on it with a finger (often useful if the pen is currently not in use, for example).
In the context of other application types, picking up or lifting the pen can be used to trigger any desired menu, user interface component, or action relevant to the particular application being used.
2.4.2 Pen Loss Prevention:
One problem with pens and stylus type devices is that the user can lose or forget the pen. For example, users often leave the stylus on their desk, or forget to put it back in the tablet's pen holster after use. Unfortunately, the loss or unavailability of the pen or stylus tends to limit the use of the user's computing device to a touch-only device until such time as another pen or stylus can be obtained.
Advantageously, the Motion and Context Sharing Technique addresses this problem by sensing the context of the sensor pen relative to the computing device, and then automatically reminding or alerting the user whenever the Motion and Context Sharing Technique observes the tablet moving away without the sensor pen. Since both the sensor pen and tablet (or other computing device) have motion sensors, and they are in communication with one another, the Motion and Context Sharing Technique evaluates sensor information of the pen and tablet to infer whether or not they are moving together.
One simple example of this context is that if the tablet starts moving, and continues moving while the pen remains stationary, then a large message (e.g., “Forgot the pen?”) appears on the tablet's display, either immediately, or after any desired preset delay. The message may then optionally fade away or be dismissed by the user. This serves to remind the user to retrieve the sensor pen. Note that other alert techniques (e.g., audible, visual, tactile, etc.) on either or both the sensor pen and tablet are used in various embodiments of the Motion and Context Sharing Technique to alert the user to potential sensor pen loss. These techniques illustrate how sensing the motion states of both the tablet and sensor pen can help provide a more complete picture of the system's state.
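A minimal sketch of this check, assuming each device reports a boolean motion state derived from its own sensors; the delay value and function name are hypothetical.

    ALERT_DELAY_S = 3.0   # hypothetical preset delay before showing the reminder

    def update_pen_loss_alert(tablet_moving, pen_moving, moving_without_pen_s, dt):
        """Track how long the tablet has been moving while the pen stays still, and
        return (new_elapsed_time, show_alert)."""
        if tablet_moving and not pen_moving:
            moving_without_pen_s += dt
        else:
            moving_without_pen_s = 0.0
        return moving_without_pen_s, moving_without_pen_s >= ALERT_DELAY_S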
In a related embodiment, the Motion and Context Sharing Technique considers correlated gait patterns, for example, to determine whether or not the user is walking with both the pen and the tablet on his person. Further, if sensor analysis indicates that the tablet is in a pack, purse, or not visible to a walking or moving user, an automated alert sent to the user's mobile phone is initiated in various embodiments to alert the user to potential sensor pen loss.
2.5 Correlated Finger Touch and Pen Motions:
By evaluating correspondences between touch-screen input (or touch inputs on other surfaces of the computing device) and sensor pen motions, the Motion and Context Sharing Technique infers additional information about how the user is touching the screen or other touch-sensitive surface. By correlating sensor pen motions with touch inputs, the Motion and Context Sharing Technique enables a variety of input scenarios and motion gestures.
For example, if the user touches the screen with the preferred hand (i.e., the same hand that is holding the sensor pen), the Motion and Context Sharing Technique infers via analysis of sensors in either or both the computing device and the sensor pen that the pen is in motion at the same time that the touch is in motion. FIG. 3 provides a simple illustration of this scenario. In particular, in this example, the user's left hand 300 is holding a sensor pen 310, while the index finger 305 of that hand is in contact 320 with the surface of the display 330 of the computing device 340. Both the index finger 305 and the sensor pen 310 held in the hand are moving in the same direction, as illustrated by the large directional arrows.
In contrast, if the Motion and Context Sharing Technique observes a touch while the sensor pen is relatively stationary, the Motion and Context Sharing Technique infers that the touch was produced by the non-preferred hand (i.e., touch produced by the hand not holding the pen).
For example, returning to the example of a drawing or sketch type application, in various embodiments, the Motion and Context Sharing Technique provides an input mechanism where a dragging type touch with the non-preferred hand (i.e., with little or no corresponding pen motion) pans the canvas on the display screen. Conversely, a rubbing or dragging motion on an ink stroke with the preferred hand (e.g., pen is tucked between the fingers of that hand, but not in contact with the display or digitizer), digitally smudges the ink. This correlated touch and motion input mimics the action of charcoal artists who repeatedly draw some strokes, and then tuck the charcoal pencil to blend (smudge) the charcoal with a finger of the same hand.
Clearly, the user may move both hands concurrently. Consequently, even if the user is holding the pen relatively still, a small amount of motion may be sensed by one or more sensors of the sensor pen. Therefore, in various embodiments, the Motion and Context Sharing Technique ignores motions below a dynamically defined threshold by considering factors such as relative motion velocity and direction between the touch and the sensor pen (i.e., directional correlation). In addition, since an initial determination of whether a display pan or an ink smudge is intended may be ambiguous, in various embodiments the Motion and Context Sharing Technique defers the decision of panning vs. smudging for a relatively short time-window at the onset of a touch gesture while the relative motions are evaluated to make a final determination.
In various embodiments, the dynamically defined motion threshold increases when the pen is in rapid motion, and exponentially decays otherwise. This allows the Motion and Context Sharing Technique to handle the case where the user has finished smudging, but the pen may not yet have come to a complete stop, and then switches quickly to panning with the non-preferred hand. The dynamic threshold here helps the system to correctly reject the pen motion as a carry-over effect from the recently-completed smudging gesture.
In addition, by considering the sensors of both the computing device and sensor pen in a common inertial frame, the Motion and Context Sharing Technique enables various scenarios that consider corresponding directions of motion (i.e., directional correlation), thereby enabling very subtle or gentle smudging motions that might otherwise fall below a movement threshold. Advantageously, in the case that the user touches and starts moving with both hands at approximately the same time, the use of a common inertial frame enables the Motion and Context Sharing Technique to distinguish which touch point corresponds to the pen motion.
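A minimal sketch of this pan-versus-smudge decision, combining the dynamically adjusted threshold and the directional correlation described above. All constants, names, and the specific rise/decay behavior are hypothetical assumptions rather than values from the tested embodiment.

    import math

    BASE_THRESHOLD = 0.05    # hypothetical baseline pen-speed threshold (arbitrary units)
    RAPID_MOTION = 0.5       # speed above which the threshold is raised
    RISE_PER_S = 1.0         # how quickly the threshold rises during rapid motion
    DECAY_PER_S = 2.0        # exponential decay rate back toward the baseline
    CORRELATION_MIN = 0.7    # minimum cosine similarity for "moving together"

    class SmudgeVsPanClassifier:
        """Attribute a touch drag to the preferred hand (smudge) or the non-preferred
        hand (pan) from correlated pen and touch velocities in a common inertial frame."""

        def __init__(self):
            self.threshold = BASE_THRESHOLD

        def classify(self, pen_velocity, touch_velocity, dt):
            pen_speed = math.hypot(*pen_velocity)
            touch_speed = math.hypot(*touch_velocity)

            # The threshold rises while the pen moves rapidly and decays exponentially
            # otherwise, so carry-over motion just after a smudge is correctly rejected.
            if pen_speed > RAPID_MOTION:
                # Rise toward (but stay below) the current speed so an ongoing smudge
                # is still recognized, while slower residual motion afterwards is not.
                self.threshold = min(self.threshold + RISE_PER_S * dt, 0.8 * pen_speed)
            else:
                self.threshold = (BASE_THRESHOLD +
                                  (self.threshold - BASE_THRESHOLD) * math.exp(-DECAY_PER_S * dt))

            if pen_speed < self.threshold or touch_speed == 0.0:
                return "pan"   # pen essentially still: touch attributed to the non-preferred hand

            # Directional correlation between pen and touch motion.
            cos_sim = ((pen_velocity[0] * touch_velocity[0] +
                        pen_velocity[1] * touch_velocity[1]) / (pen_speed * touch_speed))
            return "smudge" if cos_sim > CORRELATION_MIN else "pan"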
2.6 Active Pen Motion Away from the Display:
In contrast to conventional pen or stylus usage with computing devices, the Motion and Context Sharing Technique allows the sensor pen to provide input at any distance from the computing device or digitizer surface. Advantageously, the use of sensors coupled to the sensor pen in combination with the communications capabilities of the sensor pen enable the Motion and Context Sharing Technique to consider sensor pen motions independent from the computing device. This allows the Motion and Context Sharing Technique to implement various explicit sensor gestures that can be active at all times (or at particular times or in particular contexts).
One example of a motion gesture enabled by considering pen motions independently of the computing device is the aforementioned “roll to undo” gesture, which uses a rolling motion of the pen (i.e., twisting or rotating the pen around the long axis of the barrel). Note that this motion gesture is discussed in further detail below. Further, a number of the motion gestures and techniques discussed herein, including various context sensing techniques and various sensor pen motion gestures combined with direct touch inputs, rely on sensing the motion of the pen while it is away from the display (i.e., outside of contact and hover range of the computing device). Therefore, the ability to sense pen activity at a distance from the display enables many of the sensor pen-motion techniques discussed herein.
2.6.1 Roll to Undo:
In various embodiments, the Motion and Context Sharing Technique considers sensor pen rolling motions as a distinct gesture for pen input. For example, the aforementioned roll to undo gesture is activated by user rolling of the sensor pen around its long axis while the sensor pen is beyond hover range of the computing device. In general, rolling of the sensor pen in this manner is detected by sensors such as gyroscopic sensors or accelerometers coupled to the sensor pen. FIG. 4 provides a simple illustration of this scenario. In particular, in this example, the user's right hand 400 is holding a sensor pen 410, while the pen is rotated around the long axis of the sensor pen, as illustrated by the large directional arrow.
In one embodiment, when the Motion and Context Sharing Technique recognizes this rolling gesture, the Motion and Context Sharing Technique automatically undoes the last action completed by the computing device. Alternately, in a related embodiment, when the Motion and Context Sharing Technique recognizes this rolling gesture, a user interface menu appears on the screen that shows the user they have activated the Undo command. The user can then tap or touch the displayed Undo command one or more times to undo one or more preceding actions. Advantageously, these embodiments have been observed to speed up user interaction with the computing device by presenting the Undo command to the user without requiring the user to navigate the application menu structure to locate the Undo command in a menu, and without the wasted movement of going to the edge of the screen to invoke it.
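A minimal sketch of one way to recognize the roll to undo gesture, assuming the gyroscope reports a rotation rate about the pen's long (barrel) axis. The accumulation scheme, thresholds, and names are hypothetical assumptions, not details of the tested embodiment.

    ROLL_TRIGGER_DEG = 90.0   # hypothetical accumulated roll needed to trigger Undo
    IDLE_RATE_DPS = 10.0      # below this rate the accumulator leaks back toward zero

    class RollToUndoDetector:
        """Accumulate rotation about the barrel axis and trigger Undo once enough
        roll is seen while the pen is beyond hover range and no touch is active."""

        def __init__(self):
            self.accumulated_deg = 0.0

        def update(self, roll_rate_dps, dt, beyond_hover, touching):
            if not beyond_hover or touching:
                # A concurrent touch maps to "touch and roll to rotate" instead.
                self.accumulated_deg = 0.0
                return False
            if abs(roll_rate_dps) > IDLE_RATE_DPS:
                self.accumulated_deg += abs(roll_rate_dps) * dt
            else:
                self.accumulated_deg = max(0.0, self.accumulated_deg - IDLE_RATE_DPS * dt)
            if self.accumulated_deg >= ROLL_TRIGGER_DEG:
                self.accumulated_deg = 0.0
                return True   # trigger Undo (or show the Undo menu)
            return False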
2.6.2 Finger Tap to Redo:
To complement the aforementioned roll to undo motion gesture, the Motion and Context Sharing Technique provides a tap to redo motion gesture that is activated in response to detection of user finger taps following activation of an Undo command. As with the roll to undo gesture, the tap to redo motion gesture can be performed outside the hover range of the computing device. Similar to the Undo command, in various embodiments, a user interface menu appears on the screen that shows the user they have activated the Redo command. The user can then tap or touch the displayed Redo command one or more times to redo one or more recently undone actions.
FIG. 5 provides a simple illustration of this scenario. In particular, in this example, the user's right hand 500 is holding a sensor pen 510, while index finger 520 of that hand taps the barrel of the sensor pen, as illustrated by the large directional arrow. Note that as discussed below, a similar input mechanism is used to implement a “barrel tap” input mechanism when the sensor pen is in hover range or contact with the display of the computing device.
Advantageously, both the roll to undo and finger tap to redo motion gestures interleave stylus motion and touch input in a hybrid input scenario that takes advantage of the properties of each interaction modality, while allowing the user to keep the sensor pen close to the user's working space and, thus, the locus of attention.
2.7 Pen Motion Gestures Combined with Direct Touch:
Combined touch and sensor pen motion gestures provide an additional technique that contrasts with basic sensor pen-motion gestures (e.g., the roll to undo gesture) by adding a concurrent user touch component for activation of various input mechanisms. Advantageously, this allows the same sensor pen motion gesture used without touch to be used to initiate one or more entirely different input mechanisms when that same sensor pen motion is combined with one or more different user touch inputs. However, it should also be understood that there is no requirement for the same sensor pen motion gestures to be used when combined with various touch inputs.
For example, in addition to the roll to undo technique that interleaves pen motion and touch inputs (i.e., pen roll followed by touching the displayed undo menu), the Motion and Context Sharing Technique also provides a variety of motion gestures that employ simultaneous, concurrent, sequential, and/or interleaved touch and pen motion inputs. The combination of motion gesture plus touch techniques described below for providing various user input mechanisms illustrates how new sensing modalities can build on the existing skills and habits of users who may be familiar with particular applications or types of content.
Examples of these combined motion gestures and touch input include, but are not limited to a “touch and spatter” input mechanism with respect to painting type applications, a “touch and tilt for layers” input mechanism, and a “touch and roll to rotate” input mechanism. Note that a wide range of combined motion gestures and direct touch for initiating specific input scenarios or commands is enabled by the Motion and Context Sharing Technique, and that the examples discussed below are not intended to limit the scope of the Motion and Context Sharing Technique.
2.7.1 Touch and Spatter:
Artists working in water media often employ a technique of rapping a loaded brush on the finger to produce spatters of paint on the paper. Such effects can produce natural-looking textures for foliage and landscapes. In various embodiments, the Motion and Context Sharing Technique provides a corresponding touch and pen-motion gesture that mimics this physical gesture within the context of a sketching or painting application.
For example, the touch and spatter input mechanism is initiated when the user touches the screen with a finger, and then strikes the pen against that finger (or other surface) to produce spatters as if the sensor pen were a loaded paintbrush. FIG. 6 provides a simple illustration of this scenario. In particular, in this example, the index finger 600 of the user's left hand 610 is touching display 620 of computing device 630. A sensor pen 640 held in the user's right hand 650 is struck against the index finger 600 to initiate the touch and spatter input mechanism, with the result being digital paint spatters 660 in a region around the point where the index finger is touching the display 620.
Note that, given the limited hover-sensing range of typical tablets, it is likely that the pen remains out-of-range (e.g., more than ~1 cm away from the display surface) with respect to hover detection when the user performs this gesture. Therefore, the tablet may not know the actual (x, y) location of the pen tip. Consequently, the Motion and Context Sharing Technique produces spatters (in the currently selected pen color) centered on the finger contact point when sensors in the sensor pen indicate that the user is striking the pen against the finger or other surface while the user is concurrently touching the display.
More specifically, in a tested embodiment, the Motion and Context Sharing Technique detects an acceleration peak (via accelerometers coupled to the sensor pen) corresponding to the sensor pen strike. The Motion and Context Sharing Technique then uses the amplitude of the peak to determine any desired combination of the number and transparency level of the spatters, how large the individual spatters are, and how far they scatter from the contact point. Each of these values increases with increasing acceleration peak amplitudes (corresponding to harder sensor pen strikes). The semi-transparent spatters allow the colors to mix with one another in a natural-looking manner.
Furthermore, to prevent possible unintended spatter activation, in various embodiments, the Motion and Context Sharing Technique does not respond to isolated strikes. Instead, in such embodiments, the touch and spatter input mechanism is activated by the user striking the sensor pen against the finger multiple times to begin the spattering effect. This results in a short delay before the paint spatters begin while ensuring that the spatter effect is actually intended by the user.
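A minimal sketch of how spatters might be generated from an acceleration peak, consistent with the behavior described above; the scale factors, thresholds, and names are illustrative assumptions and are not taken from the tested embodiment. A caller would typically require MIN_STRIKES qualifying peaks before invoking the generator, per the multiple-strike requirement noted above.

    import math
    import random

    MIN_STRIKES = 2      # ignore isolated strikes, as noted above
    STRIKE_PEAK_G = 1.5  # hypothetical acceleration-peak threshold for one strike

    def spatters_for_strike(peak_g, finger_xy, color):
        """Generate paint spatters around the finger contact point; the number, size,
        spread, and transparency all grow with the acceleration-peak amplitude."""
        count = int(5 + 10 * peak_g)
        spread_px = 20 + 40 * peak_g
        max_radius_px = 2 + 3 * peak_g
        alpha = min(1.0, 0.3 + 0.15 * peak_g)   # semi-transparent so colors mix
        blobs = []
        for _ in range(count):
            angle = random.uniform(0.0, 2.0 * math.pi)
            dist = random.uniform(0.0, spread_px)
            x = finger_xy[0] + dist * math.cos(angle)
            y = finger_xy[1] + dist * math.sin(angle)
            blobs.append((x, y, random.uniform(1.0, max_radius_px), color, alpha))
        return blobs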
2.7.2 Touch and Tilt for Layers:
A common problem in graphical layout applications (e.g. PowerPoint®, Photoshop®, etc.) is working with multiple layered objects that occlude one another. To address this problem, the Motion and Context Sharing Technique provides a gesture of touching an object and then pitching or tilting the pen backward (or forward) relative to the long axis of the sensor pen to reveal a list of the layered objects in z-order, or alternately, to show or cycle through those layers. The user may then tap on the objects in the list or on any of the displayed layers to select, reorder, or otherwise interact with the selected layer. This input mechanism is referred to herein as the aforementioned “Touch and Tilt for Layers” gesture. In other words, to activate the Touch and Tilt for Layers gesture, the user touches or holds an object while tilting the pen to trigger that input mechanism.
FIG. 7 provides a simple illustration of this scenario. In particular, in this example, the index finger 700 of the user's left hand 710 is touching an area of display 720 of computing device 730 having a plurality of layers 740 or layered objects. A sensor pen 750 held in the user's right hand 760 is tilted backward (or forward) relative to the long axis of the sensor pen, as illustrated by the large directional arrow, to reveal and interact with the layers 740 or layered objects corresponding to the point where the index finger 700 is touching the display 720.
Note that while the pen pitching or tilting motion can be used to activate the touch and tilt for layers gesture without a concurrent touch, it was observed that arbitrary user tilting motions of the sensor pen sometimes inadvertently activated the touch and tilt for layers gesture. Consequently, in various embodiments, the Motion and Context Sharing Technique limits activation of the touch and tilt for layers gesture to contexts where the user holds a finger on a stack of objects or other layers, to avoid inadvertent activation of layer cycling or interactions. Therefore, in this example, the touch component of the gesture serves a double purpose. First, touching the screen activates the pen pitching motion for recognition. Second, touching the screen also identifies which objects or layer stacks the touch and tilt for layers gesture applies to.
To further limit potential inadvertent activations, in closely related embodiments, the touch and tilt for layers gesture was activated by a touch concurrent with tilting the pen away from the screen and then back towards the screen (or the opposite motions, i.e., towards and then away), within a limited time-window.
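A minimal sketch of the away-and-back tilt detection within a limited time-window, gated on a concurrent touch over a stack of objects; the tilt angles, window length, and class name are hypothetical assumptions.

    TILT_AWAY_DEG = 20.0   # hypothetical pitch change needed before returning
    RETURN_DEG = 10.0      # how close to the starting pitch counts as "back"
    WINDOW_S = 1.0         # limited time-window for the away-and-back motion

    class TouchAndTiltDetector:
        def __init__(self):
            self.reset()

        def reset(self):
            self.start_pitch = None
            self.peak_delta = 0.0
            self.elapsed = 0.0

        def update(self, touching_stack, pitch_deg, dt):
            """Fire only while a finger holds a stack of objects and the pen is tilted
            away from the screen and then back within the time window."""
            if not touching_stack:
                self.reset()
                return False
            if self.start_pitch is None:
                self.start_pitch = pitch_deg
            self.elapsed += dt
            if self.elapsed > WINDOW_S:
                self.reset()
                return False
            delta = abs(pitch_deg - self.start_pitch)
            self.peak_delta = max(self.peak_delta, delta)
            if self.peak_delta >= TILT_AWAY_DEG and delta <= RETURN_DEG:
                self.reset()
                return True
            return False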
2.7.3 Touch and Roll to Rotate:
A related input mechanism, referred to herein as the aforementioned touch and roll to rotate gesture is activated by holding or touching an object on the display surface while concurrently rolling or twisting the sensor pen around the long axis of the sensor pen. Note that this gesture contrasts with the “roll to undo” gesture discussed above in that when the Motion and Context Sharing Technique infers a concurrent touch in combination with the rolling motion, the touch and roll to rotate gesture is activated instead of the roll to undo gesture (which occurs when there is no concurrent touch).
In a tested embodiment of the Motion and Context Sharing Technique, when the user touches an object and rolls the pen, this enables a rotation mode. In one embodiment, this rotation mode is implemented by allowing the user to dial his finger (i.e., move the finger in a curving motion on the display surface) to precisely rotate the object. In a related embodiment, rotation is controlled by either continuing to rotate the sensor pen, or by contacting the sensor pen on the display surface, and using the sensor pen in a manner similar to the finger dialing motion noted above.
FIG. 8 provides a simple illustration of the touch and roll to rotate input mechanism. In particular, in this example, the index finger 800 of the user's left hand 810 is touching an area of display 820 of computing device 830 having an object 840. A sensor pen 850 held in the user's right hand 860 is rotated around the long axis of the sensor pen, as illustrated by the large directional arrow to initiate the touch and roll to rotate input mechanism. The selected object 840 is then rotated around its axis of rotation (illustrated by the large directional arrow around object 840) using either the user's finger, or additional rotation motions of the sensor pen 850.
2.8 Close-Range Motion Gestures Combined with Hover or Contact:
Similar to touch combined with sensor pen motions, the Motion and Context Sharing Technique provides additional input mechanisms that combine sensor pen motions with direct sensor pen contact or hover (i.e., the sensor pen within hover range of the digitizer of the computing device). For example, input mechanisms implemented by combining sensor pen hover with sensor pen motion gestures (determined via one or more of the sensors coupled to the sensor pen) include, but are not limited to, a “vertical menu” input mechanism and a “barrel tap” input mechanism, both of which use the sensed (x, y) location of the pen tip (via hover evaluation of the pen by the computing device) to determine where on the display to bring up a menu. Another input mechanism, referred to herein as a “hard stroke” input mechanism, combines direct contact of the sensor pen with the display in combination with sensor pen motions. Each of these exemplary input mechanisms is discussed in further detail below.
2.8.1 Vertical Menu:
In various embodiments, the vertical menu input mechanism is initiated by using various sensors of the pen to detect a user initiated motion of the sensor pen that occurs with the pen coming towards or into proximity range (or in contact with the screen) in an orientation approximately perpendicular relative to the display screen of the computing device. In other words, holding the sensor pen approximately perpendicular relative to the display (e.g., approximately vertical pen pose when the computing device is lying flat), and either approaching or within hover range relative to the display screen of the computing device, activates a UI window at or near the sensed (x, y) location of the pen tip, thereby enabling efficient interleaving of stroke input with menu invocation.
Note that in various embodiments, a timing factor is also considered for initiating the vertical menu. For example, in such embodiments, the vertical menu is initiated when the sensor pen is held approximately perpendicular relative to the display and approximately stationary for a short time (e.g., a fixed or adjustable time threshold) within the hover range of the display. Note that in various embodiments, the UI window of the vertical menu is initiated when the pen is in an approximately perpendicular pose, for a certain amount of time, as it approaches the display, regardless of whether the pen is in a proximity or hover range of the display. Consequently, it should be clear that in such embodiments, the Motion and Context Sharing Technique indirectly uses the pen's ability to know its orientation even when it is beyond the sensing range of the computing device to successfully trigger the vertical menu.
When initiated or activated, the vertical menu input mechanism triggers a UI mode that brings up a localized (relative to the pen tip) UI menu. Consequently, bringing the pen close to the screen with the pen in this relative perpendicular pose will initiate opening or expansion of a vertical software menu or the like. Conversely, motions extending away from the computing device or display (i.e., motions moving away from the screen or beyond the proximity sensing range) may initiate closing or contraction of the vertical software menu or the like. Other mechanisms for closing the UI menu include, but are not limited to, automatically closing the menu after the user picks a command from the UI menu, in response to taps (pen or finger) somewhere else on the screen, detection of some other motion gesture, etc.
Advantageously, menus that appear at or near the locus of interaction (i.e., near the pen tip) can save the user from round-trips with the sensor pen (or other pointing device) to tool palettes or other menus at the edge of the screen. Such localized menus are particularly useful for frequent commands, as well as contextual commands such as Copy and Paste that integrate object selection and direct manipulation with commands. Similarly, in various embodiments, the Motion and Context Sharing Technique initiates context sensitive menus, application popups, etc., when activated by the vertical menu input mechanism.
In various embodiments, the vertical menu input mechanism initiates a marking menu when the pen is held approximately perpendicular relative to the display and approximately stationary for a short time within the hover range of the display. Note that marking menus are also sometimes referred to as “pie menus” or “radial menus.” In general, the marking menu provides a generally circular context menu where selection of a menu item depends on direction. Marking menus may be made of several “pie slices” or radially arranged menu options or commands around a center that may be active or inactive. Each such slice may include any number of menu items (similar to a list or set of menu items in a conventional drop down menu). Further, in various embodiments, commands or menu items in the marking menu are user-definable, and of course, vertical menus could be employed with other well-known types of command selection techniques, such as drawing gestures or picking from traditional pull-down menus as well. In this sense, the “vertical menu” can be conceived as a general purpose mode-switching technique, which enables the pen to input commands, gestures, or perform other tasks secondary to a primary “inking” state.
In one embodiment, the marking menu is initiated as simply a display of directions (e.g., arrows, lines, icons, etc.) radiating out from the center of the marking menu, in which the sensor pen can be stroked to initiate commands without displaying a visible menu of commands. Once this marking menu is displayed, the user may then bring the stylus or sensor pen into contact with the display and stroke in any of the displayed directions to invoke a command. In various embodiments, if the user continues to hold the pen relatively stationary for a short period of time, the Motion and Context Sharing Technique initiates a marking menu popup that reveals a mapping of stroke direction to command. The vertical menu input mechanism thus combines stylus motion and pen-tip stroke input in the same technique.
In various embodiments, the vertical menu input mechanism is activated relative to a particular object when the user places the sensor pen over the object in a pose that is approximately perpendicular relative to the display. In this case, the marking menu includes object-specific commands (e.g. copy, paste, etc.) that take the current object and screen location as operands. Advantageously, this integrates object selection and command invocation into a single fluid pen gesture rather than requiring the user to perform a sequence of actions to achieve the same result.
In operation, the Motion and Context Sharing Technique uses an approximately perpendicular posture of the sensor pen relative to the display to trigger the vertical menu input mechanism from the hover-state. The approximately perpendicular posture of the sensor pen relative to the display is detected using a combination of the accelerometer and gyroscope sensors coupled to the sensor pen relative to a sensed orientation of the display. The accelerometer is used to estimate the orientation of the pen, but even if the accelerometer indicates that the pen is relatively near-perpendicular, this sensor reading will not trigger the vertical menu input mechanism when the gyroscope indicates that the pen is still moving beyond some threshold amount. In this way, the Motion and Context Sharing Technique avoids false positive activation of the marking menu if the user briefly passes through an approximately perpendicular relative pose while handling the sensor pen.
FIG. 9 provides a simple illustration of a vertical menu input scenario. In particular, in this example, a sensor pen 900 held in the user's right hand 910 is in hover range above the display 920 of computing device 930. Note that hover is indicated in this figure by the cone 940 shown by broken lines extending from the tip of the sensor pen 900. By holding the sensor pen 900 approximately vertical within hover range of the display 920, the Motion and Context Sharing Technique initiates a marking menu 950, which in this example includes menu options or command choices A through F. The user selects or initiates any of these commands by either touching one of the displayed commands, sweeping or stroking the sensor pen 900 in the direction of one of the displayed commands, or contacting one of the displayed commands with the sensor pen.
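A minimal sketch of the trigger logic described above, combining an accelerometer-based orientation estimate with a gyroscope stillness test and a short dwell; the tolerance, stillness, and dwell values, and the class name, are hypothetical assumptions.

    PERPENDICULAR_TOL_DEG = 15.0   # hypothetical tolerance around the display normal
    STILLNESS_DPS = 20.0           # pen must be rotating slower than this
    DWELL_S = 0.4                  # hypothetical dwell time before the menu opens

    class VerticalMenuTrigger:
        def __init__(self):
            self.held_s = 0.0

        def update(self, axis_vs_display_normal_deg, gyro_magnitude_dps, dt):
            """axis_vs_display_normal_deg: angle between the pen's long axis and the
            display normal (estimated from the accelerometer and the tablet's own
            orientation). Returns True when the marking menu should open at the
            sensed pen-tip (x, y) location."""
            perpendicular = axis_vs_display_normal_deg <= PERPENDICULAR_TOL_DEG
            still = gyro_magnitude_dps <= STILLNESS_DPS
            self.held_s = self.held_s + dt if (perpendicular and still) else 0.0
            return self.held_s >= DWELL_S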
2.8.2 Barrel Tap:
One mechanism by which the sensor pen can implement button presses is to simply include one or more buttons or the like on the barrel of the sensor pen, with button presses then being communicated by the pen to the computing device. However, since the sensor pen includes a variety of sensors, the Motion and Context Sharing Technique instead uses one or more of the existing sensors to identify user finger taps on the barrel of the sensor pen to initiate a barrel tap input mechanism. This barrel tap input mechanism can be used for any of a variety of purposes, including, but not limited to, button press emulations.
In general, the barrel tap input mechanism is activated by sensing relatively hard-contact finger taps on the barrel of the pen as a way to “replace” mechanical button input. In a tested embodiment, the Motion and Context Sharing Technique evaluates accelerometer data to identify an acceleration spike approximately perpendicular to the long axis of the sensor pen while the pen is approximately stationary in the hover state or in direct contact with the display. Activation of the barrel tap input mechanism brings up a menu with commands specific to the object that the user hovers over or contacts with the sensor pen, with that menu being approximately centered under the pen tip.
Note that the barrel tap input mechanism differs from the aforementioned “finger tap” input mechanism in that the finger tap mechanism is performed without a concurrent touch or hover relative to the computing device, while the barrel tap input mechanism is performed while the pen is within hover range of, or in contact with, a displayed object.
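By way of example only, and not limitation, the following simplified Python sketch shows one possible realization of the barrel tap test described above, detecting a sharp acceleration spike across the pen barrel while the pen is otherwise still and within hover range of, or in contact with, the display. The threshold values and argument names are illustrative assumptions only.

    import math

    TAP_SPIKE_G = 1.5             # assumed minimum lateral acceleration spike, in g
    STATIONARY_GYRO_DPS = 15.0    # assumed maximum angular rate while "stationary"

    def is_barrel_tap(accel_xyz, gyro_rate_dps, hovering_or_touching):
        # A barrel tap is only recognized while the pen is hovering over or
        # touching the display and is not otherwise being moved or rotated.
        if not hovering_or_touching or gyro_rate_dps > STATIONARY_GYRO_DPS:
            return False
        ax, ay, _ = accel_xyz            # z is taken as the pen's long axis
        lateral = math.hypot(ax, ay)     # acceleration component across the barrel
        return lateral > TAP_SPIKE_G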
2.8.3 Hard Tap and Hard Drag Input Mechanisms:
In various embodiments, the Motion and Context Sharing Technique provides additional input mechanisms, initiated by the sensor pen, that are roughly analogous to “hard tap” and “hard drag” techniques that have been implemented using existing finger touch inputs.
However, rather than using the finger to provide touch inputs, the Motion and Context Sharing Technique instead evaluates accelerometer thresholds to determine whether the user intends a hard tap or a hard drag input using the sensor pen. In a tested embodiment in a drawing type application, bringing the stylus or sensor pen down relatively hard on the display surface results in a hard tap input mechanism that triggers a lasso mode, whereas relatively softer sensor pen strokes on the display surface result in a hard drag input mechanism that produces digital ink. The Motion and Context Sharing Technique uses adjustable or customizable accelerometer thresholds to distinguish the “hard” contact from softer strokes. Further, in various embodiments, the angle at which the user is holding the pen when it comes into contact with the display is also considered by the Motion and Context Sharing Technique for differentiating between the two input mechanisms.
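By way of example only, and not limitation, the following simplified Python sketch shows one possible way to classify a pen-down event as a hard tap (lasso mode) or a softer stroke (inking) from the impact measured at contact, with an optional adjustment for the angle at which the pen is held. The threshold value and the form of the angle adjustment are illustrative assumptions only.

    HARD_CONTACT_G = 2.0    # assumed adjustable/customizable threshold, in g

    def classify_pen_down(impact_g, pen_tilt_deg, hard_threshold_g=HARD_CONTACT_G):
        # pen_tilt_deg: angle between the pen's long axis and the display normal
        # (0 = perpendicular). The adjustment below is an assumed heuristic that
        # slightly raises the threshold for more upright pen poses, which tend to
        # produce sharper impact readings.
        adjusted = hard_threshold_g * (1.0 + 0.25 * (1.0 - pen_tilt_deg / 90.0))
        return "hard_tap_lasso" if impact_g >= adjusted else "soft_stroke_ink"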
FIG. 10 provides a simple illustration of the hard tap input mechanism. In particular, in this example, a sensor pen 1000 held in the user's right hand 1010 is brought down relatively hard onto a point 1020 of the display 1030 of computing device 1040. The resulting contact of the sensor pen 1000 with the surface of display 1030 (identified using accelerometers of the sensor pen) initiates the hard tap input mechanism that triggers the aforementioned lasso mode. The user then either uses a finger touch or drags the sensor pen 1000 across the surface of the display 1030 to draw lasso outlines for use in selecting objects, regions, etc., in the drawing type application.
3.0 Operational Summary:
The processes described above with respect to FIG. 1 through FIG. 10, and in further view of the detailed description provided above in Sections 1 and 2, are illustrated by the general operational flow diagram of FIG. 11. In particular, FIG. 11 provides an exemplary operational flow diagram that summarizes the operation of some of the various embodiments of the Motion and Context Sharing Technique. Note that FIG. 11 is not intended to be an exhaustive representation of all of the various embodiments of the Motion and Context Sharing Technique described herein, and that the embodiments represented in FIG. 11 are provided only for purposes of explanation.
Further, it should be noted that any boxes and interconnections between boxes that are represented by broken or dashed lines in FIG. 11 represent optional or alternate embodiments of the Motion and Context Sharing Technique described herein, and that any or all of these optional or alternate embodiments, as described below, may be used in combination with other alternate embodiments that are described throughout this document.
In general, as illustrated by FIG. 11, the Motion and Context Sharing Technique begins operation by receiving 1100 sensor input from one or more sensors (1105 through 1150) of the sensor pen. These inputs are then transmitted 1155 from the sensor pen to the touch-sensitive computing device. The Motion and Context Sharing Technique also receives 1160 one or more touch inputs from the touch-sensitive computing device. Further, the Motion and Context Sharing Technique also optionally receives 1165 context information from either or both of the sensor pen and the touch-sensitive computing device.
The Motion and Context Sharing Technique then evaluates 1170 simultaneous, concurrent, sequential, and/or interleaved sensor pen inputs and touch inputs relative to the contexts of the sensor pen and the computing device. This evaluation serves to identify one or more motion gestures 1180 corresponding to the various sensor, touch, and context inputs. The Motion and Context Sharing Technique then automatically initiates 1175 one or more of those motion gestures 1180 based on the evaluation. Finally, as noted above, in various embodiments, a UI or the like is provided 1185 for use in defining and/or customizing one or more of the motion gestures 1180.
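By way of example only, and not limitation, the overall flow summarized above can be sketched in simplified Python form as follows. The object and method names (e.g., read_sensors, read_touches, initiate_ui_action) are illustrative assumptions and do not correspond to any particular implementation of the Motion and Context Sharing Technique.

    def motion_and_context_sharing_loop(pen, device, gesture_definitions):
        # Simplified sketch of the flow of FIG. 11: gather pen sensor data,
        # device touch inputs, and optional context, then match and fire a gesture.
        while True:
            pen_inputs = pen.read_sensors()                  # transmitted from pen to device
            touch_inputs = device.read_touches()
            context = {**pen.read_context(), **device.read_context()}   # optional context

            # Evaluate concurrent, sequential, and/or interleaved inputs against
            # the set of available motion gestures.
            for gesture in gesture_definitions:
                if gesture.matches(pen_inputs, touch_inputs, context):
                    device.initiate_ui_action(gesture)       # e.g., open a marking menu
                    break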
4.0 Exemplary Operating Environments:
The Motion and Context Sharing Technique described herein is operational within numerous types of general purpose or special purpose computing system environments or configurations. FIG. 12 illustrates a simplified example of a general-purpose computer system in combination with a stylus or pen enhanced with various sensors with which various embodiments and elements of the Motion and Context Sharing Technique, as described herein, may be implemented. It should be noted that any boxes that are represented by broken or dashed lines in FIG. 12 represent alternate embodiments of the simplified computing device and sensor pen, and that any or all of these alternate embodiments, as described below, may be used in combination with other alternate embodiments that are described throughout this document.
For example, FIG. 12 shows a general system diagram showing a simplified touch-sensitive computing device 1200. In general, such touch-sensitive computing devices 1200 have one or more touch-sensitive surfaces 1205 or regions (e.g., touch screen, touch sensitive bezel or case, sensors for detection of hover-type inputs, optical touch sensors, etc.). Examples of touch-sensitive computing devices 1200 include, but are not limited to, touch-sensitive display devices connected to a computing device, touch-sensitive phone devices, touch-sensitive media players, touch-sensitive e-readers, notebooks, netbooks, booklets (dual-screen), tablet type computers, or any other device having one or more touch-sensitive surfaces or input modalities.
To allow a device to implement the Motion and Context Sharing Technique, the computing device 1200 should have sufficient computational capability and system memory to enable basic computational operations. In addition, the computing device 1200 may include one or more sensors 1210, including, but not limited to, accelerometers, cameras, capacitive sensors, proximity sensors, microphones, multi-spectral sensors, pen or stylus digitizers, etc. As illustrated by FIG. 12, the computational capability is generally illustrated by one or more processing unit(s) 1225, and may also include one or more GPUs 1215, either or both in communication with system memory 1220. Note that the processing unit(s) 1225 of the computing device 1200 may be specialized microprocessors, such as a DSP, a VLIW, or other micro-controller, or can be conventional CPUs having one or more processing cores, including specialized GPU-based cores in a multi-core CPU.
In addition, the computing device 1200 may also include other components, such as, for example, a communications interface 1230 for receiving communications from sensor pen device 1235. The computing device 1200 may also include one or more conventional computer input devices 1240 or combinations of such devices (e.g., pointing devices, keyboards, audio input devices, voice or speech-based input and control devices, video input devices, haptic input devices, touch input devices, devices for receiving wired or wireless data transmissions, etc.). The computing device 1200 may also include other optional components, such as, for example, one or more conventional computer output devices 1250 (e.g., display device(s) 1255, audio output devices, video output devices, devices for transmitting wired or wireless data transmissions, etc.). Note that typical communications interfaces 1230, input devices 1240, output devices 1250, and storage devices 1260 for general-purpose computers are well known to those skilled in the art, and will not be described in detail herein.
The computing device 1200 may also include a variety of computer readable media. Computer readable media can be any available media that can be accessed by computing device 1200 via storage devices 1260, and includes both volatile and nonvolatile media that is removable 1270 and/or non-removable 1280, for storage of information such as computer-readable or computer-executable instructions, data structures, program modules, or other data. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media refers to tangible computer or machine readable media or storage devices such as DVDs, CDs, floppy disks, tape drives, hard drives, optical drives, solid state memory devices, RAM, ROM, EEPROM, flash memory or other memory technology, magnetic cassettes, magnetic tapes, magnetic disk storage, or other magnetic storage devices, or any other device which can be used to store the desired information and which can be accessed by one or more computing devices.
Storage of information such as computer-readable or computer-executable instructions, data structures, program modules, etc., can also be accomplished by using any of a variety of the aforementioned communication media to encode one or more modulated data signals or carrier waves, or other transport mechanisms or communications protocols, and includes any wired or wireless information delivery mechanism. Note that the terms “modulated data signal” or “carrier wave” generally refer to a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. For example, communication media includes wired media such as a wired network or direct-wired connection carrying one or more modulated data signals, and wireless media such as acoustic, RF, infrared, laser, and other wireless media for transmitting and/or receiving one or more modulated data signals or carrier waves. Combinations of any of the above should also be included within the scope of communication media.
Further, software, programs, and/or computer program products embodying some or all of the various embodiments of the Motion and Context Sharing Technique described herein, or portions thereof, may be stored, received, transmitted, or read from any desired combination of computer or machine readable media or storage devices and communication media in the form of computer executable instructions or other data structures.
Finally, the Motion and Context Sharing Technique described herein may be further described in the general context of computer-executable instructions, such as program modules, being executed by a computing device. Generally, program modules include routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types. The embodiments described herein may also be practiced in distributed computing environments where tasks are performed by one or more remote processing devices, or within a cloud of one or more devices, that are linked through one or more communications networks. In a distributed computing environment, program modules may be located in both local and remote computer storage media including media storage devices. Still further, the aforementioned instructions may be implemented, in part or in whole, as hardware logic circuits, which may or may not include a processor.
The sensor pen device 1235 illustrated by FIG. 12 shows a simplified version of a pen or stylus augmented with pen sensors 1245, logic 1265, a power source 1275, and basic I/O capabilities 1285. As discussed above, examples of pen sensors 1245 for use with the sensor pen device 1235 include, but are not limited to, inertial sensors, accelerometers, pressure sensors, grip sensors, near-field communication sensors, RFID tags and/or sensors, temperature sensors, microphones, magnetometers, capacitive sensors, gyroscopes, etc.
In general, the logic 1265 of the sensor pen device 1235 is similar to the computational capabilities of computing device 1200, but is typically less powerful in terms of computational speed, memory, etc. However, the sensor pen device 1235 can be constructed with sufficient logic 1265 such that it may be considered a capable standalone computational device.
The power source 1275 of the sensor pen device 1235 is implemented in various form factors, including, but not limited to, replaceable batteries, rechargeable batteries, capacitive energy storage devices, fuel cells, etc. Finally, the I/O 1285 of the sensor pen device 1235 provides conventional wired or wireless communications capabilities that allow the sensor pen device to communicate sensor data and other information to the computing device 1200.
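By way of example only, and not limitation, the following simplified Python sketch illustrates one possible report format that such a sensor pen device might assemble from its sensors and transmit to the computing device over its wired or wireless I/O link. The field names and reader methods are illustrative assumptions and are not part of the disclosure.

    import time
    from dataclasses import dataclass, field

    @dataclass
    class PenSensorReport:
        timestamp: float
        accel_xyz: tuple                 # accelerometer reading, in g
        gyro_xyz: tuple                  # angular rate, in deg/s
        grip: tuple = ()                 # optional grip/pressure sensor readings
        extras: dict = field(default_factory=dict)   # e.g., temperature, RFID tag id

    def build_report(sensors):
        # Assemble a single report from the pen's onboard sensors (assumed API).
        return PenSensorReport(
            timestamp=time.time(),
            accel_xyz=sensors.read_accelerometer(),
            gyro_xyz=sensors.read_gyroscope(),
            grip=sensors.read_grip(),
        )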
The foregoing description of the Motion and Context Sharing Technique has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the claimed subject matter to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. Further, it should be noted that any or all of the aforementioned alternate embodiments may be used in any combination desired to form additional hybrid embodiments of the Motion and Context Sharing Technique. It is intended that the scope of the invention be limited not by this detailed description, but rather by the claims appended hereto. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (20)

What is claimed is:
1. A computer-implemented process, comprising using a computer to perform process actions for:
receiving sensor inputs from a plurality of sensors coupled to a stylus device while the stylus device is outside a hover range of a touch-sensitive computing device, said sensor inputs representing one or more intentional user interactions with the stylus device;
detecting one or more finger touches on any touch sensitive surface of the touch-sensitive computing device, said finger touches occurring while the intentional user interactions with the stylus device are occurring;
evaluating the sensor inputs in combination with the finger touches to identify an intended motion gesture from a plurality of available motion gestures; and
initiating a user interface action corresponding to the intended motion gesture in an application executing on the touch-sensitive computing device.
2. The computer-implemented process of claim 1 wherein one or more of the sensors are used to determine intentional motion of the stylus device relative to concurrent intentional motion of the touch-sensitive computing device, and wherein those intentional motions are correlated for use in identifying the intended motion gesture.
3. The computer-implemented process of claim 1 further comprising initiating an alert on the touch-sensitive computing device when one or more sensors coupled to the touch-sensitive computing device indicate that the touch-sensitive computing device is moving away from the stylus device.
4. The computer-implemented process of claim 1 wherein one or more of the sensors are used to determine orientation of the stylus device relative to the touch-sensitive computing device, and wherein that relative orientation is used to identify the intended motion gesture.
5. The computer-implemented process of claim 1 wherein the finger touches include movement of one or more fingers across any touch sensitive surface of the touch-sensitive computing device.
6. The computer-implemented process of claim 1 further comprising a user interface for specifying sensor inputs and touches corresponding to one or more of the plurality of available motion gestures.
7. The computer-implemented process of claim 1 wherein one or more of the sensors indicate a rotation of the stylus device around a long axis of the stylus device concurrently with a touch of an object displayed on a touch screen of the touch-sensitive computing device, and wherein the intended motion gesture corresponds to a user interface action for rotating the touched object.
8. The computer-implemented process of claim 1 wherein one or more of the sensors indicate that the stylus device is in an approximately perpendicular orientation relative to a touch screen of the touch-sensitive computing device, and wherein the intended motion gesture corresponds to a user interface action for activating a marking menu on the display device.
9. The computer-implemented process of claim 1 wherein one or more of the sensors indicate a rotation of the stylus device around a long axis of the stylus device, and wherein the intended motion gesture corresponds to a user interface action for undoing a prior action of the application executing on the touch-sensitive computing device.
10. The computer-implemented process of claim 1 wherein one or more of the sensors are used to determine when the stylus device is picked up from a surface, and wherein the intended motion gesture corresponds to a user interface action for presenting a context sensitive menu relating to a currently active object in the application executing on the touch-sensitive computing device.
11. The computer-implemented process of claim 1 wherein one or more of the sensors are used to determine motion of the stylus device and motion of a touch on the touch-sensitive computing device, and when those motions are in generally the same direction, using the directional correlation of those motions to identify the intended motion gesture.
12. A method for initiating actions in an application executing on a touch-sensitive computing device, comprising:
receiving sensor inputs from a plurality of sensors coupled to a stylus device while the stylus device is outside a hover range of a touch-sensitive computing device, said sensor inputs representing one or more intentional user interactions with the stylus device;
detecting one or more finger touches on any touch sensitive surface of the touch-sensitive computing device, said finger touches occurring while the intentional user interactions with the stylus device are occurring;
evaluating the sensor inputs in combination with any finger touches to identify an intended motion gesture from a plurality of available motion gestures; and
initiating a user interface action corresponding to the intended motion gesture in an application executing on the touch-sensitive computing device.
13. The method of claim 12 wherein one or more of the sensors indicate tapping on a barrel of the stylus device, and wherein the intended motion gesture corresponds to a user interface action for emulating a button press of an input device.
14. The method of claim 12 wherein one or more of the sensors indicate tapping on a barrel of the stylus device, and wherein the intended motion gesture corresponds to a user interface action for redoing a prior action of the application executing on the touch-sensitive computing device.
15. The method of claim 12 wherein one or more of the sensors indicate a tilting of the stylus device relative to a display of the touch-sensitive computing device, and wherein a finger touch on the touch-sensitive computing device corresponds to two or more layers of objects, and wherein the intended motion gesture corresponds to a user interface action for cycling between the layers.
16. The method of claim 12 wherein the application executing on the touch-sensitive computing device is a painting type application and wherein one or more of the sensors indicate rapping of the stylus device against a surface while there is a touch on a display of the touch-sensitive computing device, and wherein the intended motion gesture corresponds to a user interface action for digitally spattering a plurality of digital paint drops around the touch on the display.
17. A computer storage media having computer executable instructions stored therein, said instructions causing a computing device to execute a method comprising:
receiving sensor inputs from a plurality of sensors coupled to a stylus device while the stylus device is outside a hover range of a touch-sensitive computing device, said sensor inputs representing one or more intentional user interactions with the stylus device;
detecting one or more finger touches on any touch sensitive surface of the touch-sensitive computing device, said finger touches occurring at the same time as the sensor inputs received from the stylus device;
evaluating the sensor inputs in combination with any finger touches to identify an intended motion gesture from a plurality of available motion gestures; and
initiating a user interface action corresponding to the intended motion gesture in an application executing on the touch-sensitive computing device.
18. The computer storage media of claim 17 wherein one or more of the sensors are used to determine motion of the stylus device relative to motion of the touch-sensitive computing device, and wherein those motions are correlated for use in identifying the intended motion gesture.
19. The computer storage media of claim 17 further comprising initiating an alert on the touch-sensitive computing device when one or more sensors coupled to the touch-sensitive computing device indicate that the touch-sensitive computing device is moving away from the stylus device.
20. The computer storage media of claim 17 wherein one or more of the sensors are used to determine orientation of the stylus device relative to the touch-sensitive computing device, and wherein that relative orientation is used to identify the intended motion gesture.
US13/903,944 2011-02-11 2013-05-28 Motion and context sharing for pen-based computing inputs Active 2031-10-02 US9201520B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/903,944 US9201520B2 (en) 2011-02-11 2013-05-28 Motion and context sharing for pen-based computing inputs

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US13/026,058 US8988398B2 (en) 2011-02-11 2011-02-11 Multi-touch input device with orientation sensing
US13/903,944 US9201520B2 (en) 2011-02-11 2013-05-28 Motion and context sharing for pen-based computing inputs

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US13/026,058 Continuation-In-Part US8988398B2 (en) 2010-12-17 2011-02-11 Multi-touch input device with orientation sensing

Publications (2)

Publication Number Publication Date
US20130257777A1 US20130257777A1 (en) 2013-10-03
US9201520B2 true US9201520B2 (en) 2015-12-01

Family

ID=49234253

Family Applications (1)

Application Number Title Priority Date Filing Date
US13/903,944 Active 2031-10-02 US9201520B2 (en) 2011-02-11 2013-05-28 Motion and context sharing for pen-based computing inputs

Country Status (1)

Country Link
US (1) US9201520B2 (en)

Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160170585A1 (en) * 2010-12-27 2016-06-16 Sony Corporation Display control device, method and computer program product
WO2018106172A1 (en) * 2016-12-07 2018-06-14 Flatfrog Laboratories Ab Active pen true id
US10134158B2 (en) 2017-02-23 2018-11-20 Microsoft Technology Licensing, Llc Directional stamping
US10296089B2 (en) 2016-08-10 2019-05-21 Microsoft Technology Licensing, Llc Haptic stylus
US20200064985A1 (en) * 2018-08-24 2020-02-27 Microsoft Technology Licensing, Llc System and method for enhanced touch selection of content
US10606414B2 (en) 2017-03-22 2020-03-31 Flatfrog Laboratories Ab Eraser for touch displays
US10620725B2 (en) * 2017-02-17 2020-04-14 Dell Products L.P. System and method for dynamic mode switching in an active stylus
US10628505B2 (en) 2016-03-30 2020-04-21 Microsoft Technology Licensing, Llc Using gesture selection to obtain contextually relevant information
US10635195B2 (en) * 2017-02-28 2020-04-28 International Business Machines Corporation Controlling displayed content using stylus rotation
US10732759B2 (en) 2016-06-30 2020-08-04 Microsoft Technology Licensing, Llc Pre-touch sensing for mobile interaction
US10739916B2 (en) 2017-03-28 2020-08-11 Flatfrog Laboratories Ab Touch sensing apparatus and method for assembly
US10761657B2 (en) 2016-11-24 2020-09-01 Flatfrog Laboratories Ab Automatic optimisation of touch signal
US10775937B2 (en) 2015-12-09 2020-09-15 Flatfrog Laboratories Ab Stylus identification
US10775935B2 (en) 2016-12-07 2020-09-15 Flatfrog Laboratories Ab Touch device
US10872199B2 (en) 2018-05-26 2020-12-22 Microsoft Technology Licensing, Llc Mapping a gesture and/or electronic pen attribute(s) to an advanced productivity action
US10877575B2 (en) * 2017-03-06 2020-12-29 Microsoft Technology Licensing, Llc Change of active user of a stylus pen with a multi user-interactive display
US10877642B2 (en) * 2012-08-30 2020-12-29 Samsung Electronics Co., Ltd. User interface apparatus in a user terminal and method for supporting a memo function
US11029783B2 (en) 2015-02-09 2021-06-08 Flatfrog Laboratories Ab Optical touch system comprising means for projecting and detecting light beams above and inside a transmissive panel
US11182023B2 (en) 2015-01-28 2021-11-23 Flatfrog Laboratories Ab Dynamic touch quarantine frames
US11256371B2 (en) 2017-09-01 2022-02-22 Flatfrog Laboratories Ab Optical component
US11361153B1 (en) 2021-03-16 2022-06-14 Microsoft Technology Licensing, Llc Linking digital ink instances using connecting lines
US11372486B1 (en) * 2021-03-16 2022-06-28 Microsoft Technology Licensing, Llc Setting digital pen input mode using tilt angle
US11435893B1 (en) 2021-03-16 2022-09-06 Microsoft Technology Licensing, Llc Submitting questions using digital ink
US11474644B2 (en) 2017-02-06 2022-10-18 Flatfrog Laboratories Ab Optical coupling in touch-sensing systems
US11526659B2 (en) 2021-03-16 2022-12-13 Microsoft Technology Licensing, Llc Converting text to digital ink
US11567610B2 (en) 2018-03-05 2023-01-31 Flatfrog Laboratories Ab Detection line broadening
US11662838B1 (en) 2022-04-19 2023-05-30 Dell Products L.P. Information handling system stylus with power management through acceleration and sound context
US11662839B1 (en) 2022-04-19 2023-05-30 Dell Products L.P. Information handling system stylus with power management through acceleration and sound context
US11733788B1 (en) 2022-04-19 2023-08-22 Dell Products L.P. Information handling system stylus with single piece molded body
US11797173B2 (en) 2020-12-28 2023-10-24 Microsoft Technology Licensing, Llc System and method of providing digital ink optimized user interface elements
US11875543B2 (en) 2021-03-16 2024-01-16 Microsoft Technology Licensing, Llc Duplicating and aggregating digital ink instances
US11893189B2 (en) 2020-02-10 2024-02-06 Flatfrog Laboratories Ab Touch-sensing apparatus
US11943563B2 (en) 2019-01-25 2024-03-26 FlatFrog Laboratories, AB Videoconferencing terminal and method of operating the same

Families Citing this family (79)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011121375A1 (en) * 2010-03-31 2011-10-06 Nokia Corporation Apparatuses, methods and computer programs for a virtual stylus
JP5539269B2 (en) 2011-06-27 2014-07-02 シャープ株式会社 Capacitance value distribution detection method, capacitance value distribution detection circuit, touch sensor system, and information input / output device
US9857889B2 (en) * 2012-06-29 2018-01-02 Samsung Electronic Co., Ltd Method and device for handling event invocation using a stylus pen
US9921626B2 (en) * 2012-09-28 2018-03-20 Atmel Corporation Stylus communication with near-field coupling
KR20140064089A (en) * 2012-11-19 2014-05-28 삼성전자주식회사 Method and apparatus for providing user interface through proximity touch input
US20140168176A1 (en) * 2012-12-17 2014-06-19 Microsoft Corporation Multi-purpose stylus for a computing device
JP2014174801A (en) * 2013-03-11 2014-09-22 Sony Corp Information processing apparatus, information processing method and program
US9304609B2 (en) * 2013-03-12 2016-04-05 Lenovo (Singapore) Pte. Ltd. Suspending tablet computer by stylus detection
KR20140140407A (en) * 2013-05-29 2014-12-09 한국전자통신연구원 Terminal and method for controlling multi-touch operation in the same
US9280214B2 (en) * 2013-07-02 2016-03-08 Blackberry Limited Method and apparatus for motion sensing of a handheld device relative to a stylus
US20150029162A1 (en) * 2013-07-24 2015-01-29 FiftyThree, Inc Methods and apparatus for providing universal stylus device with functionalities
US9665206B1 (en) * 2013-09-18 2017-05-30 Apple Inc. Dynamic user interface adaptable to multiple input tools
US10114486B2 (en) * 2013-09-19 2018-10-30 Change Healthcare Holdings, Llc Method and apparatus for providing touch input via a touch sensitive surface utilizing a support object
KR102138034B1 (en) * 2013-10-08 2020-07-27 삼성전자 주식회사 Mobile terminal and auxiliary device for mobile terminal and method for controlling the same
US9606664B2 (en) * 2013-11-13 2017-03-28 Dell Products, Lp Dynamic hover sensitivity and gesture adaptation in a dual display system
GB2520069A (en) * 2013-11-08 2015-05-13 Univ Newcastle Identifying a user applying a touch or proximity input
JP6024725B2 (en) * 2014-01-17 2016-11-16 カシオ計算機株式会社 system
KR20150086976A (en) * 2014-01-21 2015-07-29 삼성전자주식회사 Method for controlling a displaying an object and an electronic device
US9817489B2 (en) 2014-01-27 2017-11-14 Apple Inc. Texture capture stylus and method
US10423245B2 (en) * 2014-01-31 2019-09-24 Qualcomm Incorporated Techniques for providing user input to a device
US9720521B2 (en) * 2014-02-21 2017-08-01 Qualcomm Incorporated In-air ultrasound pen gestures
KR102118482B1 (en) * 2014-04-25 2020-06-03 삼성전자주식회사 Method and apparatus for controlling device in a home network system
US20150338939A1 (en) * 2014-05-23 2015-11-26 Microsoft Technology Licensing, Llc Ink Modes
US20150346998A1 (en) * 2014-05-30 2015-12-03 Qualcomm Incorporated Rapid text cursor placement using finger orientation
US9727161B2 (en) 2014-06-12 2017-08-08 Microsoft Technology Licensing, Llc Sensor correlation for pen and touch-sensitive computing device interaction
US9870083B2 (en) * 2014-06-12 2018-01-16 Microsoft Technology Licensing, Llc Multi-device multi-user sensor correlation for pen and computing device interaction
US20150370472A1 (en) * 2014-06-19 2015-12-24 Xerox Corporation 3-d motion control for document discovery and retrieval
JP6658518B2 (en) * 2014-06-26 2020-03-04 ソニー株式会社 Information processing apparatus, information processing method and program
KR20160001151A (en) * 2014-06-26 2016-01-06 삼성전자주식회사 Method and device for assisting use of a card
KR102332468B1 (en) * 2014-07-24 2021-11-30 삼성전자주식회사 Method for controlling function and electronic device thereof
EP3177983B1 (en) * 2014-08-05 2020-09-30 Hewlett-Packard Development Company, L.P. Determining a position of an input object
US9436296B2 (en) 2014-08-12 2016-09-06 Microsoft Technology Licensing, Llc Color control
KR20160023298A (en) * 2014-08-22 2016-03-03 삼성전자주식회사 Electronic device and method for providing input interface thereof
KR101648446B1 (en) * 2014-10-07 2016-09-01 삼성전자주식회사 Electronic conference system, method for controlling the electronic conference system, and digital pen
US9400570B2 (en) * 2014-11-14 2016-07-26 Apple Inc. Stylus with inertial sensor
US9575573B2 (en) 2014-12-18 2017-02-21 Apple Inc. Stylus with touch sensor
CN104573459B (en) 2015-01-12 2018-02-02 北京智谷睿拓技术服务有限公司 Exchange method, interactive device and user equipment
CN111399668A (en) * 2015-01-20 2020-07-10 Otm技术有限公司 Apparatus and method for generating input
US9746930B2 (en) * 2015-03-26 2017-08-29 General Electric Company Detection and usability of personal electronic devices for field engineers
WO2016168738A1 (en) * 2015-04-17 2016-10-20 Declara, Inc. System and methods for haptic learning platform
US9658704B2 (en) * 2015-06-10 2017-05-23 Apple Inc. Devices and methods for manipulating user interfaces with a stylus
US9633495B2 (en) * 2015-08-03 2017-04-25 Caterpillar Inc. System and method for wirelessly authenticating a device having a sensor
KR102443069B1 (en) 2015-08-12 2022-09-14 삼성전자주식회사 Deveice and method for executing application
JP2018533146A (en) * 2015-09-07 2018-11-08 チーウチャーンピアット、ソンブーンCHIEWCHARNPIPAT, Somboon Digital writing instruments
JP2017062662A (en) * 2015-09-25 2017-03-30 ソニー株式会社 Information processor, information processing method and program
US10216405B2 (en) * 2015-10-24 2019-02-26 Microsoft Technology Licensing, Llc Presenting control interface based on multi-input command
CN107066119B (en) * 2015-11-05 2020-07-07 禾瑞亚科技股份有限公司 Touch system, touch pen and method for issuing command by using motion
US9965056B2 (en) 2016-03-02 2018-05-08 FiftyThree, Inc. Active stylus and control circuit thereof
US10838502B2 (en) * 2016-03-29 2020-11-17 Microsoft Technology Licensing, Llc Sharing across environments
KR20170138279A (en) * 2016-06-07 2017-12-15 엘지전자 주식회사 Mobile terminal and method for controlling the same
US10203781B2 (en) * 2016-06-24 2019-02-12 Microsoft Technology Licensing, Llc Integrated free space and surface input device
US9899038B2 (en) 2016-06-30 2018-02-20 Karen Elaine Khaleghi Electronic notebook system
US10534449B2 (en) 2016-08-19 2020-01-14 Microsoft Technology Licensing, Llc Adjustable digital eraser
US10146337B2 (en) * 2016-09-15 2018-12-04 Samsung Electronics Co., Ltd. Digital handwriting device and method of using the same
JP6087468B1 (en) * 2016-09-21 2017-03-01 京セラ株式会社 Electronics
CN206270897U (en) * 2016-12-09 2017-06-20 广州视源电子科技股份有限公司 Interactive device writing pencil
WO2018136057A1 (en) * 2017-01-19 2018-07-26 Hewlett-Packard Development Company, L.P. Input pen gesture-based display control
US10678422B2 (en) * 2017-03-13 2020-06-09 International Business Machines Corporation Automatic generation of a client pressure profile for a touch screen device
US11295121B2 (en) 2017-04-11 2022-04-05 Microsoft Technology Licensing, Llc Context-based shape extraction and interpretation from hand-drawn ink input
CN107086027A (en) * 2017-06-23 2017-08-22 青岛海信移动通信技术股份有限公司 Character displaying method and device, mobile terminal and storage medium
US11301124B2 (en) 2017-08-18 2022-04-12 Microsoft Technology Licensing, Llc User interface modification using preview panel
US11237699B2 (en) * 2017-08-18 2022-02-01 Microsoft Technology Licensing, Llc Proximal menu generation
EP3502858B1 (en) * 2017-12-22 2023-08-16 Dassault Systèmes Gesture-based manipulator for rotation
US10235998B1 (en) 2018-02-28 2019-03-19 Karen Elaine Khaleghi Health monitoring system and appliance
EP3567597A1 (en) 2018-05-11 2019-11-13 Anoto Korea Corp. Method and apparatus of diagnostic test
KR102240007B1 (en) * 2018-05-24 2021-05-03 주식회사 닷 Information output apparatus
US10732695B2 (en) * 2018-09-09 2020-08-04 Microsoft Technology Licensing, Llc Transitioning a computing device from a low power state based on sensor input of a pen device
US11269428B2 (en) 2018-09-09 2022-03-08 Microsoft Technology Licensing, Llc Changing a mode of operation of a computing device by a pen device
US10559307B1 (en) 2019-02-13 2020-02-11 Karen Elaine Khaleghi Impaired operator detection and interlock apparatus
US20210011601A1 (en) * 2019-07-12 2021-01-14 Novatek Microelectronics Corp. Method, apparatus, and computer system of using an active pen to wake a computer device from a power-saving mode
US10735191B1 (en) 2019-07-25 2020-08-04 The Notebook, Llc Apparatus and methods for secure distributed communications and data access
US11042230B2 (en) 2019-11-06 2021-06-22 International Business Machines Corporation Cognitive stylus with sensors
US11340695B2 (en) * 2020-01-24 2022-05-24 Magic Leap, Inc. Converting a 2D positional input into a 3D point in space
IT202000001603A1 (en) * 2020-01-28 2021-07-28 St Microelectronics Srl SYSTEM AND METHOD OF RECOGNIZING A TOUCH GESTURE
US11385741B2 (en) * 2020-08-31 2022-07-12 Microsoft Technology Licensing, Llc Method to reduce blanking area for palm rejection in low cost in-cell displays
US11775084B2 (en) * 2021-04-20 2023-10-03 Microsoft Technology Licensing, Llc Stylus haptic component arming and power consumption
US11545047B1 (en) * 2021-06-24 2023-01-03 Knowledge Ai Inc. Using biometric data intelligence for education management
CN116107469A (en) * 2021-11-11 2023-05-12 荣耀终端有限公司 Function mode switching method, electronic equipment and system
CN114077321A (en) * 2021-11-23 2022-02-22 赵勇 Digital pen for computer stroke input

Citations (131)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5149919A (en) 1990-10-31 1992-09-22 International Business Machines Corporation Stylus sensing system
US5198623A (en) 1991-11-27 1993-03-30 Calcomp, Inc. Method for use in a digitizer for determining pen tilt
US5414227A (en) 1993-04-29 1995-05-09 International Business Machines Corporation Stylus tilt detection apparatus for communication with a remote digitizing display
US5463725A (en) 1992-12-31 1995-10-31 International Business Machines Corp. Data processing system graphical user interface which emulates printed material
US5625833A (en) 1988-05-27 1997-04-29 Wang Laboratories, Inc. Document annotation & manipulation in a data processing system
US5778404A (en) * 1995-08-07 1998-07-07 Apple Computer, Inc. String inserter for pen-based computer systems and method for providing same
US5867163A (en) 1995-12-01 1999-02-02 Silicon Graphics, Inc. Graphical user interface for defining and invoking user-customized tool shelf execution sequence
US5914701A (en) 1995-05-08 1999-06-22 Massachusetts Institute Of Technology Non-contact system for sensing and signalling by externally induced intra-body currents
US5956020A (en) 1995-07-27 1999-09-21 Microtouch Systems, Inc. Touchscreen controller with pen and/or finger inputs
US6307548B1 (en) 1997-09-25 2001-10-23 Tegic Communications, Inc. Reduced keyboard disambiguating system
US20040047505A1 (en) 2001-12-26 2004-03-11 Firooz Ghassabian Stylus computer
US20040073432A1 (en) 2002-10-15 2004-04-15 Stone Christopher J. Webpad for the disabled
US6788292B1 (en) 1998-02-25 2004-09-07 Sharp Kabushiki Kaisha Display device
US20040189594A1 (en) 1998-04-06 2004-09-30 Sterling Hans Rudolf Positioning a cursor on the display screen of a computer
US20040203520A1 (en) 2002-12-20 2004-10-14 Tom Schirtzinger Apparatus and method for application control in an electronic device
US20050024346A1 (en) 2003-07-30 2005-02-03 Jean-Luc Dupraz Digital pen function control
US20050052427A1 (en) * 2003-09-10 2005-03-10 Wu Michael Chi Hung Hand gesture interaction with touch surface
US20050079896A1 (en) 2003-10-14 2005-04-14 Nokia Corporation Method and apparatus for locking a mobile telephone touch screen
US6906703B2 (en) 2001-03-28 2005-06-14 Microsoft Corporation Electronic module for sensing pen motion
US20050165839A1 (en) 2004-01-26 2005-07-28 Vikram Madan Context harvesting from selected content
US20050179648A1 (en) 2004-02-18 2005-08-18 Microsoft Corporation Tapping to create writing
US20050216867A1 (en) 2004-03-23 2005-09-29 Marvit David L Selective engagement of motion detection
US20050253817A1 (en) 2002-06-19 2005-11-17 Markku Rytivaara Method of deactivating lock and portable electronic device
US20060012580A1 (en) 2004-07-15 2006-01-19 N-Trig Ltd. Automatic switching for a dual mode digitizer
US20060026535A1 (en) 2004-07-30 2006-02-02 Apple Computer Inc. Mode-based graphical user interfaces for touch sensitive input devices
US20060109252A1 (en) 2004-11-23 2006-05-25 Microsoft Corporation Reducing accidental touch-sensitive device activation
US20060136840A1 (en) 1998-11-20 2006-06-22 Microsoft Corporation Pen-based interface for a notepad computer
US20060146038A1 (en) 2004-12-31 2006-07-06 Jong-Woung Park Touch position detecting device, method of detecting touch position and touch screen display device having the same
US20060197753A1 (en) 2005-03-04 2006-09-07 Hotelling Steven P Multi-functional hand-held device
US20060256008A1 (en) 2005-05-13 2006-11-16 Outland Research, Llc Pointing interface for person-to-person information exchange
US20060267957A1 (en) 2005-04-22 2006-11-30 Microsoft Corporation Touch Input Data Handling
US20060267958A1 (en) 2005-04-22 2006-11-30 Microsoft Corporation Touch Input Programmatical Interfaces
US20070070051A1 (en) 1998-01-26 2007-03-29 Fingerworks, Inc. Multi-touch contact motion extraction
US20070075965A1 (en) 2005-09-30 2007-04-05 Brian Huppi Automated response to and sensing of user activity in portable devices
US20070113198A1 (en) 2005-11-16 2007-05-17 Microsoft Corporation Displaying 2D graphic content using depth wells
US20070126732A1 (en) 2005-12-05 2007-06-07 Microsoft Corporation Accessing 2D graphic content using axonometric layer views
US7231609B2 (en) 2003-02-03 2007-06-12 Microsoft Corporation System and method for accessing remote screen content
US20070152976A1 (en) 2005-12-30 2007-07-05 Microsoft Corporation Unintentional touch rejection
US20070182663A1 (en) 2004-06-01 2007-08-09 Biech Grant S Portable, folding and separable multi-display computing system
US20070188477A1 (en) * 2006-02-13 2007-08-16 Rehm Peter H Sketch pad and optical stylus for a personal computer
US20070198950A1 (en) * 2006-02-17 2007-08-23 Microsoft Corporation Method and system for improving interaction with a user interface
US20070247441A1 (en) 2006-04-25 2007-10-25 Lg Electronics Inc. Terminal and method for entering command in the terminal
US7289102B2 (en) 2000-07-17 2007-10-30 Microsoft Corporation Method and apparatus using multiple sensors in a device with a display
US20080002888A1 (en) 2006-06-29 2008-01-03 Nokia Corporation Apparatus, method, device and computer program product providing enhanced text copy capability with touch input display
US20080012835A1 (en) 2006-07-12 2008-01-17 N-Trig Ltd. Hover and touch detection for digitizer
US20080040692A1 (en) 2006-06-29 2008-02-14 Microsoft Corporation Gesture input
US20080046425A1 (en) * 2006-08-15 2008-02-21 N-Trig Ltd. Gesture detection for a digitizer
US20080055278A1 (en) * 2006-09-01 2008-03-06 Howard Jeffrey Locker System and method for alarming for misplaced computer tablet pen
US7362221B2 (en) 2005-11-09 2008-04-22 Honeywell International Inc. Touchscreen device for controlling a security system
US20080106520A1 (en) 2006-11-08 2008-05-08 3M Innovative Properties Company Touch location sensing system and method employing sensor data fitting to a predefined curve
US20080158168A1 (en) 2007-01-03 2008-07-03 Apple Computer, Inc. Far-field input identification
US20080163130A1 (en) 2007-01-03 2008-07-03 Apple Inc Gesture learning
US20080158145A1 (en) 2007-01-03 2008-07-03 Apple Computer, Inc. Multi-touch input discrimination
US7400316B2 (en) 2004-05-28 2008-07-15 International Business Machines Corporation Method and apparatus for dynamically modifying web page display for mobile devices
US20080191898A1 (en) 2004-12-09 2008-08-14 Universal Electronics Inc. Controlling device with dual-mode, touch-sensitive display
US20080259043A1 (en) 2005-02-17 2008-10-23 Koninklijke Philips Electronics, N.V. Device Capable of Being Operated Within a Network, Network System, Method of Operating a Device Within a Network, Program Element, and Computer-Readable Medium
US20080292195A1 (en) 2007-05-22 2008-11-27 Vijayasenan Deepu Data Processing System And Method
US7499024B2 (en) 1992-12-21 2009-03-03 Apple Inc. Method and apparatus for providing visual feedback during manipulation of text on a computer screen
US20090100384A1 (en) 2007-10-10 2009-04-16 Apple Inc. Variable device graphical user interface
US20090109182A1 (en) 2007-10-26 2009-04-30 Steven Fyke Text selection using a touch sensitive screen of a handheld mobile communication device
US7532196B2 (en) 2003-10-30 2009-05-12 Microsoft Corporation Distributed sensing techniques for mobile devices
US20090167702A1 (en) * 2008-01-02 2009-07-02 Nokia Corporation Pointing device detection
US20090178007A1 (en) 2008-01-06 2009-07-09 Michael Matas Touch Screen Device, Method, and Graphical User Interface for Displaying and Selecting Application Options
WO2009084809A1 (en) 2007-12-27 2009-07-09 Nhn Corporation Apparatus and method for controlling screen by using touch screen
US7567242B2 (en) 2003-06-09 2009-07-28 Leapfrog Enterprises, Inc. Writing stylus
US20090209285A1 (en) 2008-02-15 2009-08-20 Sony Ericsson Mobile Communications Ab Portable communication device having touch-sensitive input device ...
US20090228842A1 (en) 2008-03-04 2009-09-10 Apple Inc. Selecting of text using gestures
US20090259969A1 (en) 2003-07-14 2009-10-15 Matt Pallakoff Multimedia client interface devices and methods
US20090262074A1 (en) 2007-01-05 2009-10-22 Invensense Inc. Controlling and accessing content using motion processing on mobile devices
US20090265671A1 (en) 2008-04-21 2009-10-22 Invensense Mobile devices with motion gesture recognition
US20100020025A1 (en) 2008-07-25 2010-01-28 Intuilab Continuous recognition of multi-touch gestures
US20100045705A1 (en) 2006-03-30 2010-02-25 Roel Vertegaal Interaction techniques for flexible displays
US20100053120A1 (en) 2008-09-03 2010-03-04 Chang An-Yu Touchscreen stylus
US20100053095A1 (en) 2008-09-01 2010-03-04 Ming-Tsung Wu Method Capable of Preventing Mistakenly Triggering a Touch panel
US20100079493A1 (en) 2008-09-29 2010-04-01 Smart Technologies Ulc Method for selecting and manipulating a graphical object in an interactive input system, and interactive input system executing the method
US20100083191A1 (en) 2008-09-30 2010-04-01 Richard Marshall Method and apparatus for displaying content at a mobile device
US20100095234A1 (en) 2008-10-07 2010-04-15 Research In Motion Limited Multi-touch motion simulation using a non-touch screen computer input device
US20100103117A1 (en) * 2008-10-26 2010-04-29 Microsoft Corporation Multi-touch manipulation of application objects
US20100103118A1 (en) 2008-10-26 2010-04-29 Microsoft Corporation Multi-touch object inertia simulation
US20100123737A1 (en) 2008-11-19 2010-05-20 Apple Inc. Techniques for manipulating panoramas
US20100127979A1 (en) 2008-11-21 2010-05-27 Samsung Electronics Co., Ltd. Input device
US20100139990A1 (en) 2008-12-08 2010-06-10 Wayne Carl Westerman Selective Input Signal Rejection and Modification
US20100156941A1 (en) 2008-12-19 2010-06-24 Samsung Electronics Co., Ltd Photographing method using multi-input scheme through touch and key manipulation and photographing apparatus using the same
US20100175018A1 (en) 2009-01-07 2010-07-08 Microsoft Corporation Virtual page turn
US20100177121A1 (en) 2008-12-12 2010-07-15 Fuminori Homma Information processing apparatus, information processing method, and program
US20100188328A1 (en) 2009-01-29 2010-07-29 Microsoft Corporation Environmental gesture recognition
US20100194547A1 (en) 2009-01-30 2010-08-05 Scott Michael Terrell Tactile feedback apparatus and method
US20100214216A1 (en) 2007-01-05 2010-08-26 Invensense, Inc. Motion sensing and processing on mobile devices
US20100235729A1 (en) 2009-03-16 2010-09-16 Kocienda Kenneth L Methods and Graphical User Interfaces for Editing on a Multifunction Device with a Touch Screen Display
US7812826B2 (en) 2005-12-30 2010-10-12 Apple Inc. Portable electronic device with multi-touch input
US20100281435A1 (en) 2009-04-30 2010-11-04 At&T Intellectual Property I, L.P. System and method for multimodal interaction using robust gesture processing
US20100298033A1 (en) 2009-05-22 2010-11-25 Kwanhee Lee Mobile terminal and method of executing call function using the same
US20100295781A1 (en) 2009-05-22 2010-11-25 Rachid Alameh Electronic Device with Sensing Assembly and Method for Interpreting Consecutive Gestures
US20100295799A1 (en) 2009-05-21 2010-11-25 Sony Computer Entertainment America Inc. Touch screen disambiguation based on prior ancillary touch input
US20100306670A1 (en) 2009-05-29 2010-12-02 Microsoft Corporation Gesture-based document sharing manipulation
US20100328227A1 (en) 2009-06-29 2010-12-30 Justin Frank Matejka Multi-finger mouse emulation
US20110115741A1 (en) 2009-11-16 2011-05-19 Broadcom Corporation Touch sensitive panel supporting stylus input
US7956847B2 (en) 2007-01-05 2011-06-07 Apple Inc. Gestures for controlling, manipulating, and editing of media files using touch sensitive devices
US20110134026A1 (en) 2009-12-04 2011-06-09 Lg Electronics Inc. Image display apparatus and method for operating the same
US20110167357A1 (en) 2010-01-05 2011-07-07 Todd Benjamin Scenario-Based Content Organization and Retrieval
US7982739B2 (en) 2005-08-18 2011-07-19 Realnetworks, Inc. System and/or method for adjusting for input latency in a handheld device
US20110187651A1 (en) 2010-02-03 2011-08-04 Honeywell International Inc. Touch screen having adaptive input parameter
US20110197153A1 (en) 2010-02-11 2011-08-11 Apple Inc. Touch Inputs Interacting With User Interface Items
US20110193788A1 (en) 2010-02-10 2011-08-11 Apple Inc. Graphical objects that respond to touch or motion input
US20110221777A1 (en) 2010-03-10 2011-09-15 Hon Hai Precision Industry Co., Ltd. Electronic device with motion sensing function and method for executing functions based on movement of electronic device
US20110231796A1 (en) 2010-02-16 2011-09-22 Jose Manuel Vigil Methods for navigating a touch screen device in conjunction with gestures
US20110239110A1 (en) 2010-03-25 2011-09-29 Google Inc. Method and System for Selecting Content Using A Touchscreen
KR20120005417A (en) 2011-10-28 2012-01-16 한국과학기술원 Method and device for controlling touch-screen, and recording medium for the same, and user terminal comprising the same
US20120092269A1 (en) 2010-10-15 2012-04-19 Hon Hai Precision Industry Co., Ltd. Computer-implemented method for manipulating onscreen data
US20120092268A1 (en) 2010-10-15 2012-04-19 Hon Hai Precision Industry Co., Ltd. Computer-implemented method for manipulating onscreen data
US20120158629A1 (en) 2010-12-17 2012-06-21 Microsoft Corporation Detecting and responding to unintentional contact with a computing device
US20120154294A1 (en) 2010-12-17 2012-06-21 Microsoft Corporation Using movement of a computing device to enhance interpretation of input events produced when interacting with the computing device
US8228292B1 (en) 2010-04-02 2012-07-24 Google Inc. Flipping for motion-based input
US20120206330A1 (en) 2011-02-11 2012-08-16 Microsoft Corporation Multi-touch input device with orientation sensing
US20120242598A1 (en) * 2011-03-25 2012-09-27 Samsung Electronics Co., Ltd. System and method for crossing navigation for use in an electronic terminal
US20120262407A1 (en) 2010-12-17 2012-10-18 Microsoft Corporation Touch and stylus discrimination and rejection for contact sensitive computing devices
US20120306927A1 (en) 2011-05-30 2012-12-06 Lg Electronics Inc. Mobile terminal and display controlling method thereof
US20120313865A1 (en) * 2009-08-25 2012-12-13 Promethean Ltd Interactive surface with a plurality of input detection technologies
US20120327042A1 (en) * 2011-06-22 2012-12-27 Harley Jonah A Stylus orientation detection
US20120327040A1 (en) * 2011-06-22 2012-12-27 Simon David I Identifiable stylus
US20130016055A1 (en) * 2011-07-13 2013-01-17 Chih-Hung Chuang Wireless transmitting stylus and touch display system
US8413077B2 (en) 2008-12-25 2013-04-02 Sony Corporation Input apparatus, handheld apparatus, and control method
US20130106725A1 (en) * 2011-10-28 2013-05-02 Atmel Corporation Data Transfer from Active Stylus
US20130106740A1 (en) * 2011-10-28 2013-05-02 Atmel Corporation Touch-Sensitive System with Motion Filtering
US20130120281A1 (en) * 2009-07-10 2013-05-16 Jerry G. Harris Methods and Apparatus for Natural Media Painting Using Touch-and-Stylus Combination Gestures
US20130181948A1 (en) 2012-01-13 2013-07-18 Sony Corporation Information processing apparatus and information processing method and computer program
US20130335333A1 (en) 2010-03-05 2013-12-19 Adobe Systems Incorporated Editing content using multiple touch inputs
US20140073432A1 (en) 2012-09-07 2014-03-13 Dexin Corporation Gaming system with performance tuning and optimized data sharing functions
US20140108979A1 (en) * 2012-10-17 2014-04-17 Perceptive Pixel, Inc. Controlling Virtual Objects
US20140210797A1 (en) * 2013-01-31 2014-07-31 Research In Motion Limited Dynamic stylus palette
US20140253522A1 (en) * 2013-03-11 2014-09-11 Barnesandnoble.Com Llc Stylus-based pressure-sensitive area for ui control of computing device

Patent Citations (137)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5625833A (en) 1988-05-27 1997-04-29 Wang Laboratories, Inc. Document annotation & manipulation in a data processing system
US5149919A (en) 1990-10-31 1992-09-22 International Business Machines Corporation Stylus sensing system
US5198623A (en) 1991-11-27 1993-03-30 Calcomp, Inc. Method for use in a digitizer for determining pen tilt
US7499024B2 (en) 1992-12-21 2009-03-03 Apple Inc. Method and apparatus for providing visual feedback during manipulation of text on a computer screen
US5463725A (en) 1992-12-31 1995-10-31 International Business Machines Corp. Data processing system graphical user interface which emulates printed material
US5414227A (en) 1993-04-29 1995-05-09 International Business Machines Corporation Stylus tilt detection apparatus for communication with a remote digitizing display
US5914701A (en) 1995-05-08 1999-06-22 Massachusetts Institute Of Technology Non-contact system for sensing and signalling by externally induced intra-body currents
US5956020A (en) 1995-07-27 1999-09-21 Microtouch Systems, Inc. Touchscreen controller with pen and/or finger inputs
US5778404A (en) * 1995-08-07 1998-07-07 Apple Computer, Inc. String inserter for pen-based computer systems and method for providing same
US5867163A (en) 1995-12-01 1999-02-02 Silicon Graphics, Inc. Graphical user interface for defining and invoking user-customized tool shelf execution sequence
US6307548B1 (en) 1997-09-25 2001-10-23 Tegic Communications, Inc. Reduced keyboard disambiguating system
US20070268274A1 (en) 1998-01-26 2007-11-22 Apple Inc. Touch sensing with mobile sensors
US7812828B2 (en) 1998-01-26 2010-10-12 Apple Inc. Ellipse fitting for multi-touch surfaces
US20070070051A1 (en) 1998-01-26 2007-03-29 Fingerworks, Inc. Multi-touch contact motion extraction
US20090160816A1 (en) 1998-01-26 2009-06-25 Wayne Westerman Multi-touch contact motion extraction
US6788292B1 (en) 1998-02-25 2004-09-07 Sharp Kabushiki Kaisha Display device
US20040189594A1 (en) 1998-04-06 2004-09-30 Sterling Hans Rudolf Positioning a cursor on the display screen of a computer
US7703047B2 (en) 1998-11-20 2010-04-20 Microsoft Corporation Pen-based interface for a notepad computer
US20060136840A1 (en) 1998-11-20 2006-06-22 Microsoft Corporation Pen-based interface for a notepad computer
US7289102B2 (en) 2000-07-17 2007-10-30 Microsoft Corporation Method and apparatus using multiple sensors in a device with a display
US6906703B2 (en) 2001-03-28 2005-06-14 Microsoft Corporation Electronic module for sensing pen motion
US20040047505A1 (en) 2001-12-26 2004-03-11 Firooz Ghassabian Stylus computer
US20050253817A1 (en) 2002-06-19 2005-11-17 Markku Rytivaara Method of deactivating lock and portable electronic device
US20040073432A1 (en) 2002-10-15 2004-04-15 Stone Christopher J. Webpad for the disabled
US20040203520A1 (en) 2002-12-20 2004-10-14 Tom Schirtzinger Apparatus and method for application control in an electronic device
US7231609B2 (en) 2003-02-03 2007-06-12 Microsoft Corporation System and method for accessing remote screen content
US7567242B2 (en) 2003-06-09 2009-07-28 Leapfrog Enterprises, Inc. Writing stylus
US20090259969A1 (en) 2003-07-14 2009-10-15 Matt Pallakoff Multimedia client interface devices and methods
US20050024346A1 (en) 2003-07-30 2005-02-03 Jean-Luc Dupraz Digital pen function control
US20050052427A1 (en) * 2003-09-10 2005-03-10 Wu Michael Chi Hung Hand gesture interaction with touch surface
US20050079896A1 (en) 2003-10-14 2005-04-14 Nokia Corporation Method and apparatus for locking a mobile telephone touch screen
US7532196B2 (en) 2003-10-30 2009-05-12 Microsoft Corporation Distributed sensing techniques for mobile devices
US20050165839A1 (en) 2004-01-26 2005-07-28 Vikram Madan Context harvesting from selected content
US20050179648A1 (en) 2004-02-18 2005-08-18 Microsoft Corporation Tapping to create writing
US20050216867A1 (en) 2004-03-23 2005-09-29 Marvit David L Selective engagement of motion detection
US7400316B2 (en) 2004-05-28 2008-07-15 International Business Machines Corporation Method and apparatus for dynamically modifying web page display for mobile devices
US20070182663A1 (en) 2004-06-01 2007-08-09 Biech Grant S Portable, folding and separable multi-display computing system
US20060012580A1 (en) 2004-07-15 2006-01-19 N-Trig Ltd. Automatic switching for a dual mode digitizer
US20060026535A1 (en) 2004-07-30 2006-02-02 Apple Computer Inc. Mode-based graphical user interfaces for touch sensitive input devices
US20060109252A1 (en) 2004-11-23 2006-05-25 Microsoft Corporation Reducing accidental touch-sensitive device activation
US7847789B2 (en) 2004-11-23 2010-12-07 Microsoft Corporation Reducing accidental touch-sensitive device activation
US20080191898A1 (en) 2004-12-09 2008-08-14 Universal Electronics Inc. Controlling device with dual-mode, touch-sensitive display
US20060146038A1 (en) 2004-12-31 2006-07-06 Jong-Woung Park Touch position detecting device, method of detecting touch position and touch screen display device having the same
US20080259043A1 (en) 2005-02-17 2008-10-23 Koninklijke Philips Electronics, N.V. Device Capable of Being Operated Within a Network, Network System, Method of Operating a Device Within a Network, Program Element, and Computer-Readable Medium
US20060197753A1 (en) 2005-03-04 2006-09-07 Hotelling Steven P Multi-functional hand-held device
US20060267958A1 (en) 2005-04-22 2006-11-30 Microsoft Corporation Touch Input Programmatical Interfaces
US20060267957A1 (en) 2005-04-22 2006-11-30 Microsoft Corporation Touch Input Data Handling
US20060256008A1 (en) 2005-05-13 2006-11-16 Outland Research, Llc Pointing interface for person-to-person information exchange
US7982739B2 (en) 2005-08-18 2011-07-19 Realnetworks, Inc. System and/or method for adjusting for input latency in a handheld device
US20070075965A1 (en) 2005-09-30 2007-04-05 Brian Huppi Automated response to and sensing of user activity in portable devices
US7362221B2 (en) 2005-11-09 2008-04-22 Honeywell International Inc. Touchscreen device for controlling a security system
US20070113198A1 (en) 2005-11-16 2007-05-17 Microsoft Corporation Displaying 2D graphic content using depth wells
US20070126732A1 (en) 2005-12-05 2007-06-07 Microsoft Corporation Accessing 2D graphic content using axonometric layer views
US20070152976A1 (en) 2005-12-30 2007-07-05 Microsoft Corporation Unintentional touch rejection
US7812826B2 (en) 2005-12-30 2010-10-12 Apple Inc. Portable electronic device with multi-touch input
US20070188477A1 (en) * 2006-02-13 2007-08-16 Rehm Peter H Sketch pad and optical stylus for a personal computer
US20070198950A1 (en) * 2006-02-17 2007-08-23 Microsoft Corporation Method and system for improving interaction with a user interface
US20100045705A1 (en) 2006-03-30 2010-02-25 Roel Vertegaal Interaction techniques for flexible displays
US20070247441A1 (en) 2006-04-25 2007-10-25 Lg Electronics Inc. Terminal and method for entering command in the terminal
US20080002888A1 (en) 2006-06-29 2008-01-03 Nokia Corporation Apparatus, method, device and computer program product providing enhanced text copy capability with touch input display
US20080040692A1 (en) 2006-06-29 2008-02-14 Microsoft Corporation Gesture input
US20080012835A1 (en) 2006-07-12 2008-01-17 N-Trig Ltd. Hover and touch detection for digitizer
US20080046425A1 (en) * 2006-08-15 2008-02-21 N-Trig Ltd. Gesture detection for a digitizer
US20080055278A1 (en) * 2006-09-01 2008-03-06 Howard Jeffrey Locker System and method for alarming for misplaced computer tablet pen
US20080106520A1 (en) 2006-11-08 2008-05-08 3M Innovative Properties Company Touch location sensing system and method employing sensor data fitting to a predefined curve
US20080163130A1 (en) 2007-01-03 2008-07-03 Apple Inc Gesture learning
US20080158145A1 (en) 2007-01-03 2008-07-03 Apple Computer, Inc. Multi-touch input discrimination
US20080158168A1 (en) 2007-01-03 2008-07-03 Apple Computer, Inc. Far-field input identification
US20090262074A1 (en) 2007-01-05 2009-10-22 Invensense Inc. Controlling and accessing content using motion processing on mobile devices
US7956847B2 (en) 2007-01-05 2011-06-07 Apple Inc. Gestures for controlling, manipulating, and editing of media files using touch sensitive devices
US20100214216A1 (en) 2007-01-05 2010-08-26 Invensense, Inc. Motion sensing and processing on mobile devices
US20080292195A1 (en) 2007-05-22 2008-11-27 Vijayasenan Deepu Data Processing System And Method
US20090100384A1 (en) 2007-10-10 2009-04-16 Apple Inc. Variable device graphical user interface
US20090109182A1 (en) 2007-10-26 2009-04-30 Steven Fyke Text selection using a touch sensitive screen of a handheld mobile communication device
WO2009084809A1 (en) 2007-12-27 2009-07-09 Nhn Corporation Apparatus and method for controlling screen by using touch screen
US20090167702A1 (en) * 2008-01-02 2009-07-02 Nokia Corporation Pointing device detection
US20090178007A1 (en) 2008-01-06 2009-07-09 Michael Matas Touch Screen Device, Method, and Graphical User Interface for Displaying and Selecting Application Options
US20090209285A1 (en) 2008-02-15 2009-08-20 Sony Ericsson Mobile Communications Ab Portable communication device having touch-sensitive input device ...
US20090228842A1 (en) 2008-03-04 2009-09-10 Apple Inc. Selecting of text using gestures
US20090265671A1 (en) 2008-04-21 2009-10-22 Invensense Mobile devices with motion gesture recognition
US20100020025A1 (en) 2008-07-25 2010-01-28 Intuilab Continuous recognition of multi-touch gestures
US20100053095A1 (en) 2008-09-01 2010-03-04 Ming-Tsung Wu Method Capable of Preventing Mistakenly Triggering a Touch panel
US20100053120A1 (en) 2008-09-03 2010-03-04 Chang An-Yu Touchscreen stylus
US20100079493A1 (en) 2008-09-29 2010-04-01 Smart Technologies Ulc Method for selecting and manipulating a graphical object in an interactive input system, and interactive input system executing the method
US20100083191A1 (en) 2008-09-30 2010-04-01 Richard Marshall Method and apparatus for displaying content at a mobile device
US20100095234A1 (en) 2008-10-07 2010-04-15 Research In Motion Limited Multi-touch motion simulation using a non-touch screen computer input device
US20100103117A1 (en) * 2008-10-26 2010-04-29 Microsoft Corporation Multi-touch manipulation of application objects
US20100103118A1 (en) 2008-10-26 2010-04-29 Microsoft Corporation Multi-touch object inertia simulation
US20100123737A1 (en) 2008-11-19 2010-05-20 Apple Inc. Techniques for manipulating panoramas
US20100127979A1 (en) 2008-11-21 2010-05-27 Samsung Electronics Co., Ltd. Input device
US20100139990A1 (en) 2008-12-08 2010-06-10 Wayne Carl Westerman Selective Input Signal Rejection and Modification
US20100177121A1 (en) 2008-12-12 2010-07-15 Fuminori Homma Information processing apparatus, information processing method, and program
US20100156941A1 (en) 2008-12-19 2010-06-24 Samsung Electronics Co., Ltd Photographing method using multi-input scheme through touch and key manipulation and photographing apparatus using the same
US8413077B2 (en) 2008-12-25 2013-04-02 Sony Corporation Input apparatus, handheld apparatus, and control method
US20100175018A1 (en) 2009-01-07 2010-07-08 Microsoft Corporation Virtual page turn
US20100188328A1 (en) 2009-01-29 2010-07-29 Microsoft Corporation Environmental gesture recognition
US20100194547A1 (en) 2009-01-30 2010-08-05 Scott Michael Terrell Tactile feedback apparatus and method
US20100235729A1 (en) 2009-03-16 2010-09-16 Kocienda Kenneth L Methods and Graphical User Interfaces for Editing on a Multifunction Device with a Touch Screen Display
US20100281435A1 (en) 2009-04-30 2010-11-04 At&T Intellectual Property I, L.P. System and method for multimodal interaction using robust gesture processing
US20100295799A1 (en) 2009-05-21 2010-11-25 Sony Computer Entertainment America Inc. Touch screen disambiguation based on prior ancillary touch input
US20100295781A1 (en) 2009-05-22 2010-11-25 Rachid Alameh Electronic Device with Sensing Assembly and Method for Interpreting Consecutive Gestures
US20100298033A1 (en) 2009-05-22 2010-11-25 Kwanhee Lee Mobile terminal and method of executing call function using the same
US8265705B2 (en) 2009-05-22 2012-09-11 Lg Electronics Inc. Mobile terminal and method of executing call function using the same
US20100306670A1 (en) 2009-05-29 2010-12-02 Microsoft Corporation Gesture-based document sharing manipulation
US20100328227A1 (en) 2009-06-29 2010-12-30 Justin Frank Matejka Multi-finger mouse emulation
US20130120281A1 (en) * 2009-07-10 2013-05-16 Jerry G. Harris Methods and Apparatus for Natural Media Painting Using Touch-and-Stylus Combination Gestures
US20120313865A1 (en) * 2009-08-25 2012-12-13 Promethean Ltd Interactive surface with a plurality of input detection technologies
US20110115741A1 (en) 2009-11-16 2011-05-19 Broadcom Corporation Touch sensitive panel supporting stylus input
US20110134026A1 (en) 2009-12-04 2011-06-09 Lg Electronics Inc. Image display apparatus and method for operating the same
US20110167357A1 (en) 2010-01-05 2011-07-07 Todd Benjamin Scenario-Based Content Organization and Retrieval
US20110187651A1 (en) 2010-02-03 2011-08-04 Honeywell International Inc. Touch screen having adaptive input parameter
US20110193788A1 (en) 2010-02-10 2011-08-11 Apple Inc. Graphical objects that respond to touch or motion input
US20110197153A1 (en) 2010-02-11 2011-08-11 Apple Inc. Touch Inputs Interacting With User Interface Items
US20110231796A1 (en) 2010-02-16 2011-09-22 Jose Manuel Vigil Methods for navigating a touch screen device in conjunction with gestures
US20130335333A1 (en) 2010-03-05 2013-12-19 Adobe Systems Incorporated Editing content using multiple touch inputs
US20110221777A1 (en) 2010-03-10 2011-09-15 Hon Hai Precision Industry Co., Ltd. Electronic device with motion sensing function and method for executing functions based on movement of electronic device
US20110239110A1 (en) 2010-03-25 2011-09-29 Google Inc. Method and System for Selecting Content Using A Touchscreen
US8228292B1 (en) 2010-04-02 2012-07-24 Google Inc. Flipping for motion-based input
US20120092268A1 (en) 2010-10-15 2012-04-19 Hon Hai Precision Industry Co., Ltd. Computer-implemented method for manipulating onscreen data
US20120092269A1 (en) 2010-10-15 2012-04-19 Hon Hai Precision Industry Co., Ltd. Computer-implemented method for manipulating onscreen data
US20120154294A1 (en) 2010-12-17 2012-06-21 Microsoft Corporation Using movement of a computing device to enhance interpretation of input events produced when interacting with the computing device
US20120262407A1 (en) 2010-12-17 2012-10-18 Microsoft Corporation Touch and stylus discrimination and rejection for contact sensitive computing devices
US20120158629A1 (en) 2010-12-17 2012-06-21 Microsoft Corporation Detecting and responding to unintentional contact with a computing device
US20120206330A1 (en) 2011-02-11 2012-08-16 Microsoft Corporation Multi-touch input device with orientation sensing
US20120242598A1 (en) * 2011-03-25 2012-09-27 Samsung Electronics Co., Ltd. System and method for crossing navigation for use in an electronic terminal
US20120306927A1 (en) 2011-05-30 2012-12-06 Lg Electronics Inc. Mobile terminal and display controlling method thereof
US20120327040A1 (en) * 2011-06-22 2012-12-27 Simon David I Identifiable stylus
US20120327042A1 (en) * 2011-06-22 2012-12-27 Harley Jonah A Stylus orientation detection
US20130016055A1 (en) * 2011-07-13 2013-01-17 Chih-Hung Chuang Wireless transmitting stylus and touch display system
US20130106725A1 (en) * 2011-10-28 2013-05-02 Atmel Corporation Data Transfer from Active Stylus
US20130106740A1 (en) * 2011-10-28 2013-05-02 Atmel Corporation Touch-Sensitive System with Motion Filtering
KR20120005417A (en) 2011-10-28 2012-01-16 한국과학기술원 Method and device for controlling touch-screen, and recording medium for the same, and user terminal comprising the same
US20130181948A1 (en) 2012-01-13 2013-07-18 Sony Corporation Information processing apparatus and information processing method and computer program
US20140073432A1 (en) 2012-09-07 2014-03-13 Dexin Corporation Gaming system with performance tuning and optimized data sharing functions
US20140108979A1 (en) * 2012-10-17 2014-04-17 Perceptive Pixel, Inc. Controlling Virtual Objects
US20140210797A1 (en) * 2013-01-31 2014-07-31 Research In Motion Limited Dynamic stylus palette
US20140253522A1 (en) * 2013-03-11 2014-09-11 Barnesandnoble.Com Llc Stylus-based pressure-sensitive area for ui control of computing device

Non-Patent Citations (182)

* Cited by examiner, † Cited by third party
Title
"DuoSense Pen, Touch & Multi-Touch Digitizer," retrieved at http://www.n-trig.com/Data/Uploads/Misc/DuoSense-Brochure-FINAL.pdf, May 2008, N-trig Ltd., Kfar Saba, Israel, 4 pages.
"PenLab: Itronix GoBook Duo-Touch," retrieved at <<http://pencomputing.com/frames/itronix-duotouch.html>>, retrieved on Jan. 31, 2012, Pen Computing magazine, 3 pages.
"PenLab: Itronix GoBook Duo-Touch," retrieved at >, retrieved on Jan. 31, 2012, Pen Computing magazine, 3 pages.
"Samsung Exhibit II 4G review: Second time around," retrieved at <<http://www.gsmarena.com/samsung-exhibit-2-4g-review-685p5.php>>, GSNArena.com, Dec. 1, 2011, p. 5 of online article, 3 pages.
"Samsung Exhibit II 4G review: Second time around," retrieved at >, GSNArena.com, Dec. 1, 2011, p. 5 of online article, 3 pages.
"TouchPaint.java", The Android Open Source Project, 2007.
"Using Windows Flip 3D", retrieved at <<http://windows.microsoft.com/en-US/windows-vista/Using-Windows-Flip-3D>>, retrieved on Feb. 9, 2012, Microsoft Corporation, Redmond, WA, 1 page.
"Using Windows Flip 3D", retrieved at >, retrieved on Feb. 9, 2012, Microsoft Corporation, Redmond, WA, 1 page.
Aliakseyeu, D., A. Lucero, S. Subramanian, Interacting with piles of artifacts on digital tables, Digital Creativity, Jul. 2007, pp. 161-174, vol. 18, No. 3.
Ashbrook, et al., "Magic: A Motion Gesture Design Tool," retrieved at <<http://research.nokia.com/files/2010-Ashbrook-CHI10-MAGIC.pdf>>, Proccedings of the 28th International Conference on Human Factors in Computing Systems, Apr. 2010, 10 pages.
Ashbrook, et al., "Magic: A Motion Gesture Design Tool," retrieved at >, Proccedings of the 28th International Conference on Human Factors in Computing Systems, Apr. 2010, 10 pages.
Babyak, Richard, "Controls & Sensors: Touch Tones", retrieved at <<http://www.appliancedesign.com/Articles/Controls-and-Displays/BNP-GUID-9-5-2006-A-10000000000000129366>>, Appliance Design, Jun. 30, 2007, 5 pages.
Balakrishnan, et al., "The Rockin'Mouse: Integral 3D Manipulation on a Plane", In Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, Mar. 22, 1997, 8 pages.
Balakrishnan, et al., "Digital Tape Drawing", Proceedings of the 12th Annual ACM Symposium on User Interface Software and Technology, UIST '99, Nov. 7-10, 1999, pp. 161-169, Asheville, USA.
Bao, et al., "Effect of Tilt Angle of Tablet on Pen-based Input Operation Based on Fitts' Law", Proceedings of the 2010 IEEE International Conference on Information and Automation, Jun. 2010, pp. 99-104.
Bartlett, Joel F., "Rock 'n' Scroll Is Here to Stay," accessed at <<http://www.hpl.hp.com/techreports/Compaq-DEC/WRL-2000-3.pdf>>, Western Research Laboratory, Palo Alto, California, May 2000, 9 pages.
Bartlett, Joel F., "Rock 'n' Scroll Is Here to Stay," accessed at >, Western Research Laboratory, Palo Alto, California, May 2000, 9 pages.
Bi, et al., "An Exploration of Pen Rolling for Pen-Based Interaction", In Proceedings of the 21st Annual ACM Symposium on User Interface Software and Technology, Oct. 19, 2008, 10 pages.
Bjørneseth, et al., "Dynamic Positioning Systems-Usability and Interaction Styles," retrieved at http://www.ceng.metu.edu.tr/~tcan/se705-s0809/Schedule/assignment3.pdf>>, Proceedings of the 5th Nordic Conference on Human-Computer Interaction: Building Bridges, Oct. 2008, 10 pages.
Bjørneseth, et al., "Dynamic Positioning Systems-Usability and Interaction Styles," retrieved at http://www.ceng.metu.edu.tr/˜tcan/se705-s0809/Schedule/assignment3.pdf>>, Proceedings of the 5th Nordic Conference on Human-Computer Interaction: Building Bridges, Oct. 2008, 10 pages.
Brandl, et al., "Combining and Measuring the Benefits of Bimanual Pen and Direct-Touch Interaction on Horizontal Interfaces", In Proceedings of the Working Conference on Advanced Visual Interfaces, May 28, 2008, 10 pages.
Brandl, et al., "Occlusion-aware menu design for digital tabletops", Proc. of the 27th Int'l Conf. on Human Factors in Computing Systems, CHI 2009, Extended Abstracts, Apr. 4-9, 2009, pp. 3223-3228, Boston, MA, USA.
Buxton, W., "Integrating the Periphery and Context: A New Model of Telematics Proceedings of Graphics Interface", 1995, pp. 239-246.
Buxton, William A.S., "A Three-State Model of Graphical Input", In Proceedings of the IFIP TC13 Third Interational Conference on Human-Computer Interaction, Aug. 27, 1990, 11 pages.
Buxton, William, "Chunking and Phrasing the Design of Human-Computer Dialogues," retrieved at <<http://www.billbuxton.com/chunking.pdf>>, Proceedings of the IFIP World Computer Congress, Sep. 1986, 9 pages.
Buxton, William, "Chunking and Phrasing the Design of Human-Computer Dialogues," retrieved at >, Proceedings of the IFIP World Computer Congress, Sep. 1986, 9 pages.
Buxton, William, "Lexical and Pragmatic Considerations of Input Structure," retreived at <<http://acm.org>>, ACM SIGGRAPH Computer Graphics, vol. 17, Issue 1, Jan. 1983, pp. 31-37.
Buxton, William, "Lexical and Pragmatic Considerations of Input Structure," retreived at >, ACM SIGGRAPH Computer Graphics, vol. 17, Issue 1, Jan. 1983, pp. 31-37.
Card, S. K., J. D. MacKinlay, G. G. Robertson, The design space of input devices, CHI 1990, Apr. 1990, pp. 117-124, Seattle, WA, USA.
Chen, et al., "Navigation Techniques for Dual-Display E-Book Readers," retrieved at <<http://acm.org>>, CHI '08 Proceeding of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems, Apr. 2008, pp. 1779-1788.
Chen, et al., "Navigation Techniques for Dual-Display E-Book Readers," retrieved at >, CHI '08 Proceeding of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems, Apr. 2008, pp. 1779-1788.
Cho, et al., "Multi-Context Photo Browsing on Mobile Devices Based on Tilt Dynamics," retrieved at <<http://acm.org>>, MobileHCI '07 Proceedings of the 9th International Conference on Human Computer Interaction with Mibile Devices and Services, Sep. 2007, pp. 190-197.
Cho, et al., "Multi-Context Photo Browsing on Mobile Devices Based on Tilt Dynamics," retrieved at >, MobileHCI '07 Proceedings of the 9th International Conference on Human Computer Interaction with Mibile Devices and Services, Sep. 2007, pp. 190-197.
Chu, et al., "Detail-preserving paint modeling for 3D brushes", Proc. of the 8th Int'l Symposium on Non-Photorealistic Animation and Rendering 2010, NPAR 2010, Jun. 7-10, 2010, pp. 27-34, Annecy, France.
Chun, et al., "Virtual Shelves: Interactions with Orientation-Aware Devices," retrieved at <<http://acm.org>>, UIST'09, Oct. 2009, pp. 125-128.
Chun, et al., "Virtual Shelves: Interactions with Orientation-Aware Devices," retrieved at >, UIST'09, Oct. 2009, pp. 125-128.
Cohen, et al., "Synergistic Use of Direct Manipulation and Natural Language," retrieved at <<http://acm.org, CHI '89 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Apr. 1989, pp. 227-233.
Dachselt, et al., "Throw and Tilt-Seamless Interaction Across Devices Using Mobile Phone Gestures", Proceedings of the 34th Graphics Interface Conference, May 2008, 7 pages.
Döring, et al., "Exploring Gesture-Based Interaction Techniques in Multi-Display Environments with Mobile Phones and a Multi-Touch Table", Proceedings of the Workshop on Coupled Display Visual Interfaces, May 25, 2010, pp. 47-54.
Edge, et al., "Bimanual Tangible Interaction with Mobile Phones," retrieved at <<http://research.microsoft.com/en-us/people/daedge/edgeteibimanual2009.pdf>>, Proceedings of the 3rd International Conference on Tangible and Embedded Interaction, Feb. 2009, pp. 131-136.
Edge, et al., "Bimanual Tangible Interaction with Mobile Phones," retrieved at >, Proceedings of the 3rd International Conference on Tangible and Embedded Interaction, Feb. 2009, pp. 131-136.
Eslambolchilar, et al., "Tilt-Based Automatic Zooming and Scaling in Mobile Devices-a state-space implementation." retrieved at <<http://www.dcs.gla.ac.uk/˜rod/publications/EsIMur04-SDAZ.pdf>>, Proceedings of Mobile HCI2004: 6th International Conference on Human Computer Interaction with Mobile Devices, Springer, Sep. 2004, 12 pages.
Eslambolchilar, et al., "Tilt-Based Automatic Zooming and Scaling in Mobile Devices-a state-space implementation." retrieved at >, Proceedings of Mobile HCI2004: 6th International Conference on Human Computer Interaction with Mobile Devices, Springer, Sep. 2004, 12 pages.
Essl, et al., "Use the Force (or something)-Pressure and Pressure-Like Input for Mobile Music Performance," retrieved at <<http://www.deutsche-telekom-laboratories.de/˜rohs/papers/Essl-ForceMusic.pdf>>, NIME 2010 Conference on New Interfaces for Musical Expression, Jun. 2010, 4 pages.
Essl, et al., "Use the Force (or something)-Pressure and Pressure-Like Input for Mobile Music Performance," retrieved at >, NIME 2010 Conference on New Interfaces for Musical Expression, Jun. 2010, 4 pages.
Figueroa-Gibson, Gloryvid, U.S. Final Office Action, U.S. Appl. No. 12/970,949, filed Jun. 10, 2015, pp. 1-25.
Figueroa-Gibson, G., U.S. Office Action, U.S. Appl. No. 12/970,939, filed Aug. 22, 2013.
Figueroa-Gibson, G., U.S. Office Action, U.S. Appl. No. 12/970,939, filed Jun. 5, 2013.
Figueroa-Gibson, G., U.S. Office Action, U.S. Appl. No. 12/970,943, filed Jun. 10, 2013.
Figueroa-Gibson, G., U.S. Office Action, U.S. Appl. No. 12/970,943, filed Nov. 6, 2013.
Figueroa-Gibson, G., U.S. Office Action, U.S. Appl. No. 12/970,949, filed Jun. 21, 2013.
Figueroa-Gibson, Gloryvid, U.S. Final Office Action, U.S. Appl. No. 12/970,939, filed May 30, 2014, pp. 1-32.
Figueroa-Gibson, Gloryvid, U.S. Final Office Action, U.S. Appl. No. 12/970,949, filed Aug. 15, 2014, pp. 1-21.
Figueroa-Gibson, Gloryvid, U.S. Final Office Action, U.S. Appl. No. 12/970,949, filed Nov. 29, 2013, pp. 1-29.
Figueroa-Gibson, Gloryvid, U.S. Notice of Allowance, U.S. Appl. No. 12/970,939, filed Dec. 19, 2014, pp. 1-10.
Figueroa-Gibson, Gloryvid, U.S. Notice of Allowance, U.S. Appl. No. 12/970,943, filed Dec. 19, 2014, pp. 1-10.
Figueroa-Gibson, Gloryvid, U.S. Office Action, U.S. Appl. No. 12/970,939, filed Dec. 19, 2013, pp. 1-28.
Figueroa-Gibson, Gloryvid, U.S. Office Action, U.S. Appl. No. 12/970,939, filed Oct. 2, 2014, pp. 1-40.
Figueroa-Gibson, Gloryvid, U.S. Office Action, U.S. Appl. No. 12/970,943, filed Mar. 13, 2014, pp. 1-25.
Figueroa-Gibson, Gloryvid, U.S. Office Action, U.S. Appl. No. 12/970,943, filed Sep. 17, 2014, pp. 1-20.
Figueroa-Gibson, Gloryvid, U.S. Office Action, U.S. Appl. No. 12/970,949, filed Jan. 2, 2015, pp. 1-24.
Figueroa-Gibson, Gloryvid, U.S. Office Action, U.S. Appl. No. 12/970,949, filed Mar. 13, 2014, pp. 1-29.
Fitzmaurice, et al., "An Exploration into Supporting Artwork Orientation in the User Interface", Proc. of the CHI '99 Conf. on Human Factors in Computing Sys's: The CHI is the Limit, Pittsburgh, CHI 1999, May 15-20, 1999, pp. 167-174.
Fitzmaurice, et al., "Tracking Menus", In Proceedings of the 16th Annual ACM Symposium on User Interface Software and Technology, Nov. 2, 2003, 10 pages.
Frisch, et al., "Investigating Multi-Touch and Pen Gestures for Diagram Editing on Interactive Surfaces", In ACM International Conference on Interactive Tabletops and Surfaces, Nov. 23, 2009, 8 pages.
Geisy, Adam, Notice of Allowance, U.S. Appl. No. 13/367,377, filed Oct. 27, 2014, pp. 1-10.
Geisy, Adam, U.S. Final Office Action, U.S. Appl. No. 13/367,377, filed Jul. 1, 2014, pp. 1-12.
Geisy, Adam, U.S. Office Action, U.S. Appl. No. 13/367,377, filed Feb. 13, 2014, pp. 1-11.
Goel, et al., "GripSense: Using Built-In Sensors to Detect Hand Posture and Pressure on Commodity Mobile Phones", In Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology, Oct. 7, 2012, 10 pages.
Goel, et al., "WalkType: Using Accelerometer Data to Accomodate Situational Impairments in Mobile Touch Screen Text Entry", In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, May 5, 2012, 10 pages.
Grossman, et al., "Hover Widgets: Using the Tracking State to Extend the Capabilities of Pen-Operated Devices", In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Apr. 22, 2006, 10 pages.
Guiard, et al., "Writing Postures in Left-Handers: Inverters are Hand-Crossers", Neuropsychologia, Mar. 1984, pp. 535-538, vol. 22, No. 4.
Harrison, et al., "Scratch Input: Creating Large, Inexpensive, Unpowered and Mobile Finger Input Surfaces," retrieved at <<http://acm.org>>, UIST '08 Proceedings of the 21st Annual ACM Symposium on User interface Software and Technology, Oct. 2008, pp. 205-208.
Harrison, et al., "Scratch Input: Creating Large, Inexpensive, Unpowered and Mobile Finger Input Surfaces," retrieved at >, UIST '08 Proceedings of the 21st Annual ACM Symposium on User interface Software and Technology, Oct. 2008, pp. 205-208.
Harrison, et al., "Skinput: Appropriating the Body as an Input Surface," retrieved at <<http://acm.org>>, CHI '10 Proceedings of the 28th International Conference on Human Factors in Computing Systems, Apr. 2010, pp. 453-462.
Harrison, et al., "Skinput: Appropriating the Body as an Input Surface," retrieved at >, CHI '10 Proceedings of the 28th International Conference on Human Factors in Computing Systems, Apr. 2010, pp. 453-462.
Harrison, et al., "Squeeze Me, Hold Me, Tilt Me! An Exploration of Manipulative User Interfaces", In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Apr. 18, 1998, 8 pages.
Hasan, et al., "A-Coord Input: Coordinating Auxiliary Input Streams for Augmenting Contextual Pen-Based Interactions", In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, May 5, 2012, 10 pages.
Hassan, et al., "Chucking: A One-Handed Document Sharing Technique," T. Gross et al. (Eds.): INTERACT 2009, Part II, LNCS 5727, Aug. 2009, pp. 264-278.
Herot, et al., "One-Point Touch Input of Vector Information from Computer Displays," retrieved at <<http://acm.org>>, SIGGRAPH '78 Proceedings of the 5th Annual Conference on Computer Graphics and Interactive Techniques, 12(3), Aug. 1978, pp. 210-216.
Herot, et al., "One-Point Touch Input of Vector Information from Computer Displays," retrieved at >, SIGGRAPH '78 Proceedings of the 5th Annual Conference on Computer Graphics and Interactive Techniques, 12(3), Aug. 1978, pp. 210-216.
Hinckley, et al., "Codex: A dual screen tablet computer", Proc. of the 27th Int'l Conf. on Human Factors in Computing Sys's, CHI 2009, Apr. 4-9, 2009, pp. 1933-1942, Boston, MA, USA.
Hinckley, et al., "Design and Analysis of Delimiters for Selection-Action Pen Gesture Phrases in Scriboli," retrieved at <<http://acm.org>>, CHI '05 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Apr. 2005, pp. 451-460.
Hinckley, et al., "Design and Analysis of Delimiters for Selection-Action Pen Gesture Phrases in Scriboli," retrieved at >, CHI '05 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Apr. 2005, pp. 451-460.
Hinckley, et al., "Direct Display Interaction via Simultaneous Pen + Multi-touch Input", In Society for Information Display (SID) Symposium Digest of Technical Papers, May 2010, 4 pages.
Hinckley, et al., "Foreground and Background Interaction with Sensor-Enhanced Mobile Devices," retrieved at <<http://research.microsoft.com/en-us/um/people/kenh/papers/tochisensing.pdf>>, ACM Transactions on Computer-Human Interaction, vol. 12, No. 1, Mar. 2005, 22 pages.
Hinckley, et al., "Foreground and Background Interaction with Sensor-Enhanced Mobile Devices," retrieved at >, ACM Transactions on Computer-Human Interaction, vol. 12, No. 1, Mar. 2005, 22 pages.
Hinckley, et al., "Manual Deskterity: An Exploration of Simultaneous Pen + Touch Direct Input," retrieved at <<http://acm.org>>, CHI EA '10 Proceedings of the 28th of the International Conference, Extended Abstracts on Human Factors in Computing Systems, Apr. 2010, pp. 2793-2802.
Hinckley, et al., "Manual Deskterity: An Exploration of Simultaneous Pen + Touch Direct Input," retrieved at >, CHI EA '10 Proceedings of the 28th of the International Conference, Extended Abstracts on Human Factors in Computing Systems, Apr. 2010, pp. 2793-2802.
Hinckley, et al., "Pen + Touch = New Tools", In Proceedings of the 23nd Annual ACM Symposium on User Interface Software and Technology, Oct. 3, 2010, 10 pages.
Hinckley, et al., "Sensing Techniques for Mobile Interaction", In Proceedings of the 13th Annual ACM Symposium on User Interface Software and Technology, Nov. 5, 2000, 10 pages.
Hinckley, et al., "Sensor Synaesthesia: Touch in Motion, and Motion in Touch", In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, May 7, 2011, 10 pages.
Hinckley, et al., "Touch-Sensing Input Devices", In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, May 15, 1999, 8 pages.
Hinckley, Ken, "Synchronous Gestures for Multiple Persons and Computers", In Proceedings of the 16th Annual ACM Symposium on User Interface Software and Technology, Nov. 2, 2003, 10 pages.
Holmquist, et al., "Smart-Its Friends: A Technique for Users to Easily Establish Connections between Smart Artefacts", In Proceedings of the 3rd International Conference on Ubiquitous Computing, Sep. 30, 2001, 6 pages.
Hudson, et al., "Whack Gestures: Inexact and Inattentive Interaction with Mobile Devices", In Proceedings of the Fourth International Conference on Tangible, Embedded, and Embodied Interaction, Jan. 25, 2010, 4 pages.
Traktovenko, Ilya, U.S. Office Action, U.S. Appl. No. 12/970,945, filed Apr. 22, 2013.
Iwasaki, et al., "Expressive Typing: A New Way to Sense Typing Pressure and Its Applications," retrieved at <<http://acm.org>>, CHI '09 Proceedings of the 27th International Conference Extended Abstracts on Human Factors in Computing Systems, Apr. 2009, pp. 4369-4374.
Iwasaki, et al., "Expressive Typing: A New Way to Sense Typing Pressure and Its Applications," retrieved at >, CHI '09 Proceedings of the 27th International Conference Extended Abstracts on Human Factors in Computing Systems, Apr. 2009, pp. 4369-4374.
Izadi, et al., "C-Slate: A Multi-Touch and Object Recognition System for Remote Collaboration using Horizontal Surfaces", Second Annual IEEE International Workshop on Horizontal Interactive Human-Computer System, Oct. 2007, pp. 3-10.
Joselli et al., "GRMOBILE-A Framework for touch and accelerometer gesture recognition for mobile", Proceedings of the 2009 VIII Brazilian Symposium on Games and Digital Entertainment, Oct. 2009, pp. 141-150.
Joshi, et al., "Image Deblurring Using Inertial Measurement Sensors," retrieved at <<http://acm.org>>, ACM Transactions on Graphics, vol. 29, No. 4, Article 30, Jul. 2010, 9 pages.
Joshi, et al., "Image Deblurring Using Inertial Measurement Sensors," retrieved at >, ACM Transactions on Graphics, vol. 29, No. 4, Article 30, Jul. 2010, 9 pages.
Kendrick, "ChromeTouch: Free Extension for Touch Tables", GigaOM, May 6, 2010, 9 pages.
Kim, et al., "Hand Grip Pattern Recognition for Mobile User Interfaces", In Proceedings of the 18th Conference on Innovative Applications of Artificial Intelligence, vol. 2, Jul. 16, 2006, 6 pages.
Kratz, et al., "Unravelling Seams: Improving Mobile Gesture Recognition with Visual Feedback Techniques," retrieved at <<http://acm.org>>, CHI '09 Proceedings of the 27th International Conference on Human Factors in Computing Systems, Apr. 2009, pp. 937-940.
Kratz, et al., "Unravelling Seams: Improving Mobile Gesture Recognition with Visual Feedback Techniques," retrieved at >, CHI '09 Proceedings of the 27th International Conference on Human Factors in Computing Systems, Apr. 2009, pp. 937-940.
Kurtenbach, et al., "Issues in Combining Marking and Direct Manipulation Techniques", In Proceedings of the 4th Annual ACM Symposium on User Interface Software and Technology, Nov. 11, 1991, 8 pages.
Kurtenbach, et al., "The design of a GUI paradigm based on tablets, two-hands, and transparency", Proceedings of the ACM SIGCHI Conference on Human factors in computing systems, CHI 1997, Mar. 1997, pp. 35-42.
Lee, et al., "HandSCAPE: A vectorizing tape measure for on-site measuring applications", Proceedings of the CHI 2000 Conference on Human factors in computing systems, CHI 2000, Apr. 1-6, 2000, pp. 137-144, The Hague, The Netherlands.
Lester, et al., ""Are You With Me?"-Using Accelerometers to Determine if Two Devices are Carried by the Same Person", In Proceedings of Second International Conference on Pervasive Computing, Apr. 21, 2004, 18 pages.
Li, et al., "Experimental Analysis of Mode Switching Techniques in Pen-Based User Interfaces", In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Apr. 2, 2005, 10 pages.
Li, et al., "The 1Line Keyboard: A QWERTY Layout in a Single Line", In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, Oct. 16, 2011, 10 pages.
Liao, et al., "PACER: Fine-grained Interactive Paper via Camera-touch Hybrid Gestures on a Cell Phone," retrieved at <<http://acm.org>>, CHI '10 Proceedings of the 28th International Conference on human Factors in Computing Systems, Apr. 2010, pp. 2441-2450.
Liao, et al., "PACER: Fine-grained Interactive Paper via Camera-touch Hybrid Gestures on a Cell Phone," retrieved at >, CHI '10 Proceedings of the 28th International Conference on human Factors in Computing Systems, Apr. 2010, pp. 2441-2450.
Luff, et al., Mobility in Collaboration, Proceedings of the ACM 1998 Conference on Computer Supported Cooperative Work, CSCW 1998, Nov. 14-18, 1998, pp. 305-314, Seattle, WA, USA.
Mahony, et al., Nonlinear Complementary Filters on the Special Orthogonal Group, IEEE Trans. Automat. Contr., 2008, pp. 1203-1218, vol. 53, No. 5.
Malacria, et al., "Clutch-Free Panning and Integrated Pan-Zoom Control on Touch-Sensitive Surfaces: The CycloStar Approach," retrieved at <<http://www.malacria.fr/data/doc/pdf/cyclostar.pdf>>, Proceedings of the 28th International Conference on Human Factors in Computing Systems, Apr. 2010, 10 pages.
Malacria, et al., "Clutch-Free Panning and Integrated Pan-Zoom Control on Touch-Sensitive Surfaces: The CycloStar Approach," retrieved at >, Proceedings of the 28th International Conference on Human Factors in Computing Systems, Apr. 2010, 10 pages.
Mason, et al., "Grip Forces When Passing an Object to a Partner", Exp. Brain Res., May 2005, vol. 163, No. 2, pp. 173-187.
McLoone, Peter D., U.S. Office Action, U.S. Appl. No. 14/303,234, Oct. 15, 2015, pp. 1-16.
Mohamed, et al., "Disoriented Pen-Gestures for Identifying Users Around the Tabletop Without Cameras and Motion Sensors", Proceedings of the First IEEE International Workshop on Horizontal Interactive Human-Computer Systems (TABLETOP '06), Jan. 2006, 8 pages.
Mulroy, "N-Trig Pushes Pen and Multitouch Input", PC world, Retrieved on Jan. 27, 2011 at <<http://www.pcworld.com/artice/196723/ntrig-pushes-pen-and-multitouch-input.html>>, May 19, 2010, 3 pages.
N-act Multi-Touch Gesture Vocabulary Set, retrieved Oct. 12, 2011, 1 page.
Oviatt, et al., "Toward a Theory of Organized Multimodal Integration Patterns during Human-Computer Interaction," retrieved at <<http://acm.org>>, ICMI '03 Proceedings of the 5th International Conference on Multimodal Interfaces, Nov. 2003, pp. 44-51.
Oviatt, et al., "Toward a Theory of Organized Multimodal Integration Patterns during Human-Computer Interaction," retrieved at >, ICMI '03 Proceedings of the 5th International Conference on Multimodal Interfaces, Nov. 2003, pp. 44-51.
Partridge, et al., "TiltType: Accelerometer-Supported Text Entry for Very Small Devices," retrieved at http://acm.org>>, UIST '02 Proceedings of the 15th Annual ACM Symposium on User Interface Software and Technology, Oct. 2002, pp. 201-204.
Premerlani, et al., Direction Cosine Matrix IMU: Theory, retrieved from gentlenav.googlecode.com/files/DCMDraft2.pdf, May 2009, pp. 1-30.
Rahman, et al., "Tilt Techniques: Investigating the Dexterity of Wrist-based Input," retrieved at <<http://acm.org>>, CHI '09 Proceedings of the 27th international Conference on Human Factors in Computing Systems, Apr. 2009, pp. 1943-1952.
Rahman, et al., "Tilt Techniques: Investigating the Dexterity of Wrist-based Input," retrieved at >, CHI '09 Proceedings of the 27th international Conference on Human Factors in Computing Systems, Apr. 2009, pp. 1943-1952.
Ramos, et al., "Pressure Widgets", In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, vol. 6, Issue 1, Apr. 24, 2004, 8 pages.
Ramos, et al., "Tumble! Splat! Helping Users Access and Manipulate Occluded Content in 2D Drawings", In Proceedings of the Working Conference on Advanced Visual Interfaces, May 23, 2006, 8 pages.
Rekimoto, Jun, "Tilting Operations for Small Screen Interfaces", In Proceedings of the 9th Annual ACM Symposium on User Interface Software and Technology, Nov. 6, 1996, 2 pages.
Rofouei, et al., "Your Phone or Mine? Fusing Body, Touch and Device Sensing for Multi-User Device-Display Interaction", In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, May 5, 2012, 4 pages.
Roudaut, et al., "TimeTilt: Using Sensor-Based Gestures to Travel through Multiple Applications on a Mobile Device", In Proceedings of the 12th IFIP TC 13 International Conference on Human-Computer Interaction: Part I, Aug. 24, 2009, 5 pages.
Ruiz, et al., "DoubleFlip: A Motion Gesture Delimiter for Mobile Interaction", In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, May 7, 2011, 4 pages.
Ruiz, et al., "User-Defined Motion Gestures for Mobile Interaction", In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, May 7, 2011, 10 pages.
Sachs, et al., "3-Draw: A Tool for Designing 3D Shapes", In Journal of IEEE Computer Graphics and Applications, vol. 11, Issue 6, Nov. 1991, 9 pages.
Schmidt, et al., "Advanced Interaction in Context", In Proceedings of the 1st International Symposium on Handheld and Ubiquitous Computing, Sep. 27, 1999, 13 pages.
Schmidt, et al., "PhoneTouch: A Technique for Direct Phone Interaction on Surfaces", In Proceedings of the 23nd Annual ACM Symposium on User Interface Software and Technology, Oct. 3, 2010, 4 pages.
Schwarz, et al., "A Framework for Robust and Flexible Handling of Inputs with Uncertainty," retrieved at <<http://acm.org>>, UIST '10, Proceedings of the 23nd Annual ACM Symposium on User Interface Software and Technology, Oct. 2010, pp. 47-56.
Schwarz, et al., "A Framework for Robust and Flexible Handling of Inputs with Uncertainty," retrieved at >, UIST '10, Proceedings of the 23nd Annual ACM Symposium on User Interface Software and Technology, Oct. 2010, pp. 47-56.
Schwesig, et al., "Gummi: A Bendable Computer," retrieved at <<http://acm.org>>, CHI '04, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Apr. 2004, pp. 263-270.
Schwesig, et al., "Gummi: A Bendable Computer," retrieved at >, CHI '04, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Apr. 2004, pp. 263-270.
Sellen, et al., "The Prevention of Mode Errors through Sensory Feedback," retrieved at <<http://acm.org>>, Journal of Human-Computer Interaction, vol. 7, Issue 2, Jun. 1992, pp. 141-164.
Sellen, et al., "The Prevention of Mode Errors through Sensory Feedback," retrieved at >, Journal of Human-Computer Interaction, vol. 7, Issue 2, Jun. 1992, pp. 141-164.
Silo, et al., "Mobile Interaction Using Paperweight Metaphor", Proc. of the 19th Annual ACM Symposium on User Interface Software and Technology, UIST '06, Oct. 2006, pp. 111-114, Montreux, Switzerland.
Subramanian, et al., "Multi-layer interaction for digital tables," In Proc. of the 19th Annual ACM Symposium on User Interface Software and Technology, Oct. 15, 2006, pp. 269-272.
Sun, et al., "Enhancing Naturalness of Pen-and-Tablet Drawing through Context Sensing", In Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces, Nov. 13, 2011, 4 pages.
Suzuki, et al., "Stylus Enhancement to Enrich Interaction with Computers", In Proceedings of the 12th International Conference on Human-Computer Interaction: Interaction Platforms and Techniques, Jul. 22, 2007, 10 pages.
Tashman, et al., "LiquidText: A Flexible, Multitouch Environment to Support Active Reading", In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, May 7, 2011, 10 pages.
Taylor, et al., "Graspables: Grasp-Recognition as a User Interface", In Proceedings of the 27th International Conference on Human Factors in Computing Systems, Apr. 4, 2008, 9 pages.
Thurrott, Paul, "Windows XP Tablet PC Edition reviewed", Paul Thurrott's Supersite for Windows, Jun. 25, 2002, 7 pages.
Tian, et al., "The Tilt Cursor: Enhancing Stimulus-Response Compatibility by Providing 3D Orientation Cue of Pen", In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Apr. 28, 2007, 4 pages.
Tian, et al., "Tilt Menu: Using the 3D Orientation Information of Pen Devices to Extend the Selection Capability of Pen-based User Interfaces", In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Apr. 5, 2008, 10 pages.
Traktovenko, Ilya, U.S. Final Office Action, U.S. Appl. No. 13/530,015, filed Nov. 19, 2014, pp. 1-48.
Traktovenko, Ilya, U.S. Notice of Allowance, U.S. Appl. No. 12/970,945, filed Jul. 10, 2013.
Traktovenko, Ilya, U.S. Notice of Allowance, U.S. Appl. No. 12/970,945, filed Oct. 16, 2013.
Traktovenko, Ilya, U.S. Office Action, U.S. Appl. No. 13/530,015, filed Apr. 28, 2015, pp. 1-32.
Traktovenko, Ilya, U.S. Office Action, U.S. Appl. No. 13/530,015, filed Jul. 18, 2014, pp. 1-26.
Treitler, D., U.S. Office Action, U.S. Appl. No. 13/327,794, filed Aug. 16, 2013.
Treitler, Damon , U.S. Final Office Action, U.S. Appl. No. 13/327,794, filed Dec. 19, 2013, pp. 1-16.
Treitler, Damon, U.S. Final Office Action, U.S. Appl. No. 13/327,794, filed Nov. 20, 2014, pp. 1-13.
Treitler, Damon, U.S. Office Action, U.S. Appl. No. 13/327,794, filed Jul. 17, 2014, pp. 1-13.
Verplaetse, C., "Inertial Proprioceptive Devices: Self-Motion-Sensing Toys and Tools", In IBM Systems Journal, vol. 35, Issue 3-4, Apr. 23, 2013, 12 pages.
Vogel, et al., "Conte: Multimodal Input Inspired by an Artist's Crayon", In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, Oct. 16, 2011, 10 pages.
Walker, Geoff, "Palm rejection on resistive touchscreens", Veritas et Visus, Nov. 2005, pp. 31-33.
Wigdor, et al., "Lucid-Touch: A See-through Mobile Device," Proceedings of the 20th Annual ACM Symposium on User Interface Software and Technology, Oct. 2007, pp. 269-278.
Wigdor, et al., "TiltText:Using Tilt for Text Input to Mobile Phones," retrieved at <<http://acm.org>>, UIST '03, Proceedings of the 16th Annual ACM Symposium on User Interface Software and Technology, Nov. 2003, pp. 81-90.
Wigdor, et al., "TiltText:Using Tilt for Text Input to Mobile Phones," retrieved at >, UIST '03, Proceedings of the 16th Annual ACM Symposium on User Interface Software and Technology, Nov. 2003, pp. 81-90.
Williamson, et al., "Shoogle: Excitatory Multimodal Interaction on Mobile Devices," retrieved at <<http://acm.org>>, CHI '07, Proceedings of the SIGCHI Conference on Human factors in Computing Systems, Apr. 2007, pp. 121-124.
Williamson, et al., "Shoogle: Excitatory Multimodal Interaction on Mobile Devices," retrieved at >, CHI '07, Proceedings of the SIGCHI Conference on Human factors in Computing Systems, Apr. 2007, pp. 121-124.
Wilson, et al., "XWand: UI for Intelligent Spaces", In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Apr. 5, 2003, 8 pages.
Wimmer, et al., HandSense: Discriminating Different Ways of Grasping and Holding a Tangible User Interface, Proceedings of the 3rd International Conference on Tangible and Embedded Interaction, TEI '09, Feb. 2009, pp. 359-362, Cambridge, UK.
Wu, et al., "Gesture Registration, Relaxation, and Reuse for Multi-Point Direct-Touch Surfaces", In Proceedings of the First IEEE International Workshop on Horizontal Interactive Human-Computer Systems, Jan. 5, 2006, 8 pages.
Xin, et al., "Acquiring and Pointing: An Empirical Study of Pen-Tilt-Based Interaction", In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, May 7, 2011, 10 pages.
Xin, et al., "Natural Use Profiles for the Pen: An Empirical Exploration of Pressure, Tilt, and Azimut", In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, May 5, 2012, 4 pages.
Yee, Ka-Ping, "Two-Handed Interaction on a Tablet Display", In Proceedings of Extended Abstracts on Human Factors in Computing Systems, Apr. 24, 2004, 4 pages.
Zeleznik, et al., "Hands-On Math: A Page-Based Multi-Touch and Pen Desktop for Technical Work and Problem Solving", In Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology, Oct. 3, 2010, 10 pages.
Zhou, Hong, Notice of Allowance, U.S. Appl. No. 13/026,058, filed Jul. 17, 2014, pp. 1-5.
Zhou, Hong, Notice of Allowance, U.S. Appl. No. 13/026,058, filed Nov. 7, 2014, pp. 1-5.
Zhou, Hong, U.S. Final Office Action, U.S. Appl. No. 13/026,058, filed Feb. 26, 2014, pp. 1-14.

Cited By (44)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160170585A1 (en) * 2010-12-27 2016-06-16 Sony Corporation Display control device, method and computer program product
US10877642B2 (en) * 2012-08-30 2020-12-29 Samsung Electronics Co., Ltd. User interface apparatus in a user terminal and method for supporting a memo function
US11182023B2 (en) 2015-01-28 2021-11-23 Flatfrog Laboratories Ab Dynamic touch quarantine frames
US11029783B2 (en) 2015-02-09 2021-06-08 Flatfrog Laboratories Ab Optical touch system comprising means for projecting and detecting light beams above and inside a transmissive panel
US11301089B2 (en) 2015-12-09 2022-04-12 Flatfrog Laboratories Ab Stylus identification
US10775937B2 (en) 2015-12-09 2020-09-15 Flatfrog Laboratories Ab Stylus identification
US10628505B2 (en) 2016-03-30 2020-04-21 Microsoft Technology Licensing, Llc Using gesture selection to obtain contextually relevant information
US10732759B2 (en) 2016-06-30 2020-08-04 Microsoft Technology Licensing, Llc Pre-touch sensing for mobile interaction
US10296089B2 (en) 2016-08-10 2019-05-21 Microsoft Technology Licensing, Llc Haptic stylus
US10761657B2 (en) 2016-11-24 2020-09-01 Flatfrog Laboratories Ab Automatic optimisation of touch signal
US11579731B2 (en) 2016-12-07 2023-02-14 Flatfrog Laboratories Ab Touch device
US10775935B2 (en) 2016-12-07 2020-09-15 Flatfrog Laboratories Ab Touch device
US11281335B2 (en) 2016-12-07 2022-03-22 Flatfrog Laboratories Ab Touch device
WO2018106172A1 (en) * 2016-12-07 2018-06-14 Flatfrog Laboratories Ab Active pen true id
US11474644B2 (en) 2017-02-06 2022-10-18 Flatfrog Laboratories Ab Optical coupling in touch-sensing systems
US11740741B2 (en) 2017-02-06 2023-08-29 Flatfrog Laboratories Ab Optical coupling in touch-sensing systems
US10620725B2 (en) * 2017-02-17 2020-04-14 Dell Products L.P. System and method for dynamic mode switching in an active stylus
US10134158B2 (en) 2017-02-23 2018-11-20 Microsoft Technology Licensing, Llc Directional stamping
US10635195B2 (en) * 2017-02-28 2020-04-28 International Business Machines Corporation Controlling displayed content using stylus rotation
US10877575B2 (en) * 2017-03-06 2020-12-29 Microsoft Technology Licensing, Llc Change of active user of a stylus pen with a multi user-interactive display
US11016605B2 (en) 2017-03-22 2021-05-25 Flatfrog Laboratories Ab Pen differentiation for touch displays
US11099688B2 (en) 2017-03-22 2021-08-24 Flatfrog Laboratories Ab Eraser for touch displays
US10606414B2 (en) 2017-03-22 2020-03-31 Flatfrog Laboratories Ab Eraser for touch displays
US10739916B2 (en) 2017-03-28 2020-08-11 Flatfrog Laboratories Ab Touch sensing apparatus and method for assembly
US11269460B2 (en) 2017-03-28 2022-03-08 Flatfrog Laboratories Ab Touch sensing apparatus and method for assembly
US11281338B2 (en) 2017-03-28 2022-03-22 Flatfrog Laboratories Ab Touch sensing apparatus and method for assembly
US10845923B2 (en) 2017-03-28 2020-11-24 Flatfrog Laboratories Ab Touch sensing apparatus and method for assembly
US11650699B2 (en) 2017-09-01 2023-05-16 Flatfrog Laboratories Ab Optical component
US11256371B2 (en) 2017-09-01 2022-02-22 Flatfrog Laboratories Ab Optical component
US11567610B2 (en) 2018-03-05 2023-01-31 Flatfrog Laboratories Ab Detection line broadening
US10872199B2 (en) 2018-05-26 2020-12-22 Microsoft Technology Licensing, Llc Mapping a gesture and/or electronic pen attribute(s) to an advanced productivity action
US20200064985A1 (en) * 2018-08-24 2020-02-27 Microsoft Technology Licensing, Llc System and method for enhanced touch selection of content
US10891033B2 (en) * 2018-08-24 2021-01-12 Microsoft Technology Licensing, Llc System and method for enhanced touch selection of content
US11943563B2 (en) 2019-01-25 2024-03-26 FlatFrog Laboratories, AB Videoconferencing terminal and method of operating the same
US11893189B2 (en) 2020-02-10 2024-02-06 Flatfrog Laboratories Ab Touch-sensing apparatus
US11797173B2 (en) 2020-12-28 2023-10-24 Microsoft Technology Licensing, Llc System and method of providing digital ink optimized user interface elements
US11526659B2 (en) 2021-03-16 2022-12-13 Microsoft Technology Licensing, Llc Converting text to digital ink
US11361153B1 (en) 2021-03-16 2022-06-14 Microsoft Technology Licensing, Llc Linking digital ink instances using connecting lines
US11875543B2 (en) 2021-03-16 2024-01-16 Microsoft Technology Licensing, Llc Duplicating and aggregating digital ink instances
US11435893B1 (en) 2021-03-16 2022-09-06 Microsoft Technology Licensing, Llc Submitting questions using digital ink
US11372486B1 (en) * 2021-03-16 2022-06-28 Microsoft Technology Licensing, Llc Setting digital pen input mode using tilt angle
US11662839B1 (en) 2022-04-19 2023-05-30 Dell Products L.P. Information handling system stylus with power management through acceleration and sound context
US11733788B1 (en) 2022-04-19 2023-08-22 Dell Products L.P. Information handling system stylus with single piece molded body
US11662838B1 (en) 2022-04-19 2023-05-30 Dell Products L.P. Information handling system stylus with power management through acceleration and sound context

Also Published As

Publication number Publication date
US20130257777A1 (en) 2013-10-03

Similar Documents

Publication Publication Date Title
US9201520B2 (en) Motion and context sharing for pen-based computing inputs
US10168827B2 (en) Sensor correlation for pen and touch-sensitive computing device interaction
EP3155502B1 (en) Multi-user sensor correlation for computing device interaction
CN109074217B (en) Application for multi-touch input detection
US9244545B2 (en) Touch and stylus discrimination and rejection for contact sensitive computing devices
US10942642B2 (en) Systems and methods for performing erasures within a graphical user interface
US7489306B2 (en) Touch screen accuracy
KR101492678B1 (en) Multi-mode touchscreen user interface for a multi-state touchscreen device
KR102420448B1 (en) Touch-based input for stylus
US9448714B2 (en) Touch and non touch based interaction of a user with a device
Hinckley et al. Motion and context sensing techniques for pen computing
US10082888B2 (en) Stylus modes
JP5374564B2 (en) Drawing apparatus, drawing control method, and drawing control program
US20120200603A1 (en) Pointer Tool for Touch Screens
CN105278734B (en) The control method and control device of touch sensor panel
WO2013054155A1 (en) Multi-touch human interface system and device for graphical input, and method for processing image in such a system.
CN106468963B (en) Touch device and touch-control control method
TW202009687A (en) Method for unlocking a display of a multi-display device
Jang et al. CornerPen: smart phone is the pen
Zhang Various Techniques of Mode Changing with Stylus/Touch Interface

Legal Events

Date Code Title Description
AS Assignment

Owner name: MICROSOFT CORPORATION, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BENKO, HRVOJE;CHEN, XIANG;HINCKLEY, KENNETH PAUL;SIGNING DATES FROM 20130523 TO 20130528;REEL/FRAME:030576/0746

AS Assignment

Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034544/0541

Effective date: 20141014

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8