WO2008085736A1 - Method and system for providing word recommendations for text input - Google Patents

Method and system for providing word recommendations for text input

Info

Publication number
WO2008085736A1
Authority
WO
WIPO (PCT)
Prior art keywords
input
characters
user
instructions
candidate
Application number
PCT/US2007/088872
Other languages
French (fr)
Inventor
Greg Christie
Bas Ording
Scott Forstall
Kenneth Kocienda
Richard Williamson
Jerome Rene Bellegarda
Original Assignee
Apple Inc.
Application filed by Apple Inc.
Priority to EP07869922A (published as EP2100210A1)
Priority to AU2007342164A (published as AU2007342164A1)
Publication of WO2008085736A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02 - Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023 - Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233 - Character input methods
    • G06F3/0237 - Character input methods using prediction or retrieval techniques

Definitions

  • the disclosed embodiments relate generally to text input on portable communication devices, and more particularly, to methods and systems for providing word recommendations in response to text input.
  • a computer-implemented method involves receiving a sequence of input characters from a keyboard, wherein the keyboard has a predefined layout of characters with each character in the layout having one or more neighbor characters.
  • the method also involves generating a set of strings from at least a subset of the sequence of input characters, where the set of strings comprises permutations of respective input characters in the subset of the sequence and neighbor characters of the respective input characters on the layout of the keyboard; identifying in a dictionary one or more candidate words that have a string in the set of strings as a prefix; scoring the candidate words; selecting a subset of the candidate words based on predefined criteria; and presenting the subset of the candidate words.
  • the computer program product comprises a computer readable storage medium and a computer program mechanism embedded therein.
  • the computer program mechanism includes instructions for receiving a sequence of input characters from a keyboard, wherein the keyboard has a predefined layout of characters with each character in the layout having one or more neighbor characters; instructions for generating a set of strings from at least a subset of the sequence of input characters, the set of strings comprising permutations of respective input characters in the subset of the sequence and neighbor characters of the respective input characters on the layout of the keyboard; instructions for identifying in a dictionary one or more candidate words, each candidate word having a string in the set of strings as a prefix; instructions for scoring the candidate words; instructions for selecting a subset of the candidate words based on predefined criteria; and instructions for presenting the subset of the candidate words.
  • a portable communications device includes a display; a keyboard, the keyboard having a predefined layout of characters with each character in the layout having one or more neighbor characters; one or more processors; memory; and a program stored in the memory and configured to be executed by the one or more processors.
  • the program includes instructions for receiving a sequence of input characters from the keyboard; instructions for generating a set of strings from at least a subset of the sequence of input characters, the set of strings comprising permutations of respective input characters in the subset of the sequence and neighbor characters of the respective input characters on the layout of the keyboard; instructions for identifying in a dictionary one or more candidate words, each candidate word having a string in the set of strings as a prefix; instructions for scoring the candidate words; instructions for selecting a subset of the candidate words based on predefined criteria; and instructions for presenting the subset of the candidate words.
  • a portable communications device includes display means; input means having a predefined layout of characters, each character in the layout having one or more neighbor characters; one or more processor means; memory means; and a program mechanism stored in the memory means and configured to be executed by the one or more processors means.
  • the program mechanism includes instructions for receiving a sequence of input characters from the input means; instructions for generating a set of strings from at least a subset of the sequence of input characters, the set of strings comprising permutations of respective input characters in the subset of the sequence and neighbor characters of the respective input characters on the layout of the input means; instructions for identifying in a dictionary one or more candidate words, each candidate word having a string in the set of strings as a prefix; instructions for scoring the candidate words; instructions for selecting a subset of the candidate words based on predefined criteria; and instructions for presenting the subset of the candidate words.
  • a computer-implemented method involves receiving a sequence of individual touch points input by a user that form a user-input directed graph; comparing the user-input directed graph to respective directed graphs for words in a dictionary; generating a list of candidate words based at least in part on the comparing step; and presenting at least some of the candidate words to the user.
  • a computer program product for use in conjunction with a portable communications device.
  • the computer program product comprises a computer readable storage medium and a computer program mechanism embedded therein.
  • the computer program mechanism includes instructions for receiving a sequence of individual touch points input by a user that form a user-input directed graph; instructions for comparing the user-input directed graph to respective directed graphs for words in a dictionary; instructions for generating a list of candidate words based at least in part on the comparing step; and instructions for presenting at least some of the candidate words to the user.
  • a portable communications device includes a display; a keyboard; one or more processors; memory; and a program stored in the memory and configured to be executed by the one or more processors.
  • the program includes instructions for receiving a sequence of individual touch points input by a user that form a user-input directed graph; instructions for comparing the user-input directed graph to respective directed graphs for words in a dictionary; instructions for generating a list of candidate words based at least in part on the comparing step; and instructions for presenting at least some of the candidate words to the user.
  • a portable communications device includes means for receiving a sequence of individual touch points input by a user that form a user-input directed graph; means for comparing the user-input directed graph to respective directed graphs for words in a dictionary; means for generating a list of candidate words based at least in part on the comparing step; and means for presenting at least some of the candidate words to the user.
  • the embodiments provide more efficient ways to enter text in a portable device.
  • FIG. 1 is a block diagram illustrating a portable communications device in accordance with some embodiments.
  • FIG. 2 is a flow diagram illustrating a process of providing word recommendations in accordance with some embodiments.
  • FIG. 3 is a flow diagram illustrating a process of scoring candidate words in accordance with some embodiments.
  • FIG. 4 is a flow diagram illustrating a process of selecting and presenting candidate words in accordance with some embodiments.
  • FIGs. 5A and 5B illustrate exemplary layouts of letter keys on a keyboard in accordance with some embodiments.
  • FIG. 6 illustrates an exemplary derivation of candidate words based on a text input in accordance with some embodiments.
  • FIGs. 7A-7C illustrate examples of scoring of candidate words in accordance with some embodiments.
  • a portable communication device includes a user interface and a text input device. Via the interface and the text input device, a user may enter text into the device.
  • the text includes words, which are sequences of characters separated by whitespaces or particular punctuation. For a word as it is being entered or an entered word, the device identifies and offers word recommendations that may be selected by the user to replace the word as inputted by the user.
  • Figure 1 is a block diagram illustrating an embodiment of a device 100, such as a portable electronic device having a touch-sensitive display 112.
  • the device 100 may include a memory controller 120, one or more data processors, image processors and/or central processing units 118 and a peripherals interface 116.
  • the memory controller 120, the one or more processors 118 and/or the peripherals interface 116 may be separate components or may be integrated, such as in one or more integrated circuits 104.
  • the various components in the device 100 may be coupled by one or more communication buses or signal lines 103.
  • the peripherals interface 116 may be coupled to an optical sensor (not shown), such as a CMOS or CCD image sensor; RF circuitry 108; audio circuitry 110; and/or an input/output (I/O) subsystem 106.
  • the audio circuitry 110 may be coupled to a speaker 142 and a microphone 144.
  • the device 100 may support voice recognition and/or voice replication.
  • the RF circuitry 108 may be coupled to one or more antennas 146 and may allow communication with one or more additional devices, computers and/or servers using a wireless network.
  • the device 100 may support a variety of communications protocols, including code division multiple access (CDMA), Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), Wi-Fi (such as IEEE 802.11a, IEEE 802.11b, IEEE 802.11g and/or IEEE 802.11n), Bluetooth, Wi-MAX, a protocol for email, instant messaging, and/or a short message service (SMS), or any other suitable communication protocol, including communication protocols not yet developed as of the filing date of this document.
  • the device 100 may be, at least in part, a cellular telephone.
  • the I/O subsystem 106 may include a touch screen controller 152 and/or other input controller(s) 154.
  • the touch-screen controller 152 may be coupled to a touch-sensitive screen or touch sensitive display system 112.
  • the touch-sensitive display system 112 provides an input interface and an output interface between the device and a user.
  • the display controller 152 receives and/or sends electrical signals from/to the display system 112.
  • the display system 112 displays visual output to the user.
  • the visual output may include graphics, text, icons, video, and any combination thereof (collectively termed "graphics"). In some embodiments, some or all of the visual output may correspond to user-interface objects, further details of which are described below.
  • a touch screen in display system 112 is a touch-sensitive surface that accepts input from the user based on haptic and/or tactile contact.
  • the display system 112 and the display controller 152 (along with any associated modules and/or sets of instructions in memory 102) detect contact (and any movement or breaking of the contact) on the display system 112 and convert the detected contact into interaction with user-interface objects (e.g., one or more soft keys, icons, web pages or images) that are displayed on the touch screen.
  • the touch screen 112 may be used to implement virtual or soft buttons and/or a keyboard.
  • a point of contact between a touch screen in the display system 112 and the user corresponds to a finger of the user.
  • the touch screen in the display system 112 may use LCD (liquid crystal display) technology, or LPD (light emitting polymer display) technology, although other display technologies may be used in other embodiments.
  • the touch screen in the display system 112 and the display controller 152 may detect contact and any movement or breaking thereof using any of a plurality of touch sensing technologies now known or later developed, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with a touch screen in the display system 112.
  • a touch-sensitive display in some embodiments of the display system 112 may be analogous to the multi-touch sensitive tablets described in the following U.S. Patents: 6,323,846 (Westerman et al.), 6,570,557 (Westerman et al.), and/or 6,677,932 (Westerman), and/or U.S. Patent Publication 2002/0015024A1, each of which is hereby incorporated by reference.
  • a touch screen in the display system 112 displays visual output from the portable device 100, whereas touch sensitive tablets do not provide visual output.
  • the touch screen in the display system 112 may have a resolution in excess of 100 dpi. In an exemplary embodiment, the touch screen in the display system has a resolution of approximately 168 dpi.
  • the user may make contact with the touch screen in the display system 112 using any suitable object or appendage, such as a stylus, a finger, and so forth.
  • the user interface is designed to work primarily with finger-based contacts and gestures, which are much less precise than stylus-based input due to the larger area of contact of a finger on the touch screen.
  • the device translates the rough finger-based input into a precise pointer/cursor position or command for performing the actions desired by the user.
  • a touch-sensitive display in some embodiments of the display system 112 may be as described in the following applications: (1) U.S. Patent Application No. 11/381,313, “Multipoint Touch Surface Controller,” filed on May 2, 2006; (2) U.S. Patent Application No. 10/840,862, “Multipoint Touchscreen,” filed on May 6, 2004; (3) U.S. Patent Application No. 10/903,964, “Gestures For Touch Sensitive Input Devices,” filed on July 30, 2004; (4) U.S. Patent Application No. 11/048,264, “Gestures For Touch Sensitive Input Devices,” filed on January 31, 2005; (5) U.S. Patent Application No.
  • the other input controller(s) 154 may be coupled to other input/control devices 114, such as one or more buttons, a keyboard, infrared port, USB port, and/or a pointer device such as a mouse.
  • the one or more buttons may include an up/down button for volume control of the speaker 142 and/or the microphone 144.
  • the one or more buttons (not shown) may include a push button. A quick press of the push button (not shown) may engage or disengage a lock of the touch screen 112. A longer press of the push button (not shown) may turn power to the device 100 on or off.
  • the user may be able to customize a functionality of one or more of the buttons.
  • the device 100 may include circuitry for supporting a location determining capability, such as that provided by the global positioning system (GPS).
  • the device 100 may be used to play back recorded music stored in one or more files, such as MP3 files or AAC files.
  • the device 100 may include the functionality of an MP3 player, such as an iPod (trademark of Apple Computer, Inc.).
  • the external port is a multi-pin (e.g., 30-pin) connector that is the same as, or similar to and/or compatible with the 30-pin connector used on iPod devices.
  • the device 100 also includes a power system 137 for powering the various components.
  • the power system 137 may include a power management system, one or more power sources (e.g., battery, alternating current (AC)), a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., a light-emitting diode (LED)) and any other components associated with the generation, management and distribution of power in portable devices.
  • the device 100 may also include one or more external ports 135 for connecting the device 100 to other devices.
  • the memory controller 120 may be coupled to memory 102 with one or more types of computer readable media.
  • Memory 102 may include high-speed random access memory and/or non-volatile memory, such as one or more magnetic disk storage devices, one or more optical storage devices, and/or flash memory.
  • Memory 102 may store an operating system 122, such as Darwin, RTXC, LINUX, UNIX, OS X, WINDOWS, or an embedded operating system such as VxWorks.
  • the operating system 122 may include procedures (or sets of instructions) for handling basic system services and for performing hardware dependent tasks.
  • Memory 102 may also store communication procedures (or sets of instructions) in a communication module 124.
  • Memory 102 may include a display module (or a set of instructions) 125, a contact/motion module (or a set of instructions) 126 to determine one or more points of contact and/or their movement, and a graphics module (or a set of instructions) 128.
  • the graphics module 128 may support widgets, that is, modules or applications with embedded graphics. The widgets may be implemented using JavaScript, HTML, or other suitable languages.
  • Memory 102 may also include one or more applications 130.
  • applications 130 include email applications, text messaging or instant messaging applications, web browsers, memo pad applications, address books or contact lists, and calendars.
  • a dictionary contains a list of words and corresponding usage frequency rankings.
  • the usage frequency ranking of a word is the statistical usage frequency for that word in a language, or by a predefined group of people, or by the user of the device 100, or a combination thereof.
  • a dictionary may include multiple usage frequency rankings for regional variations of the same language and/or be tailored to a user's own usage frequency, e.g., derived from the user's prior emails, text messages, and other previous input from the user.
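  • purely as an illustration (the entries, names, and the additive combination below are invented for this sketch, not taken from the patent), such a dictionary might be represented in Python as a mapping from words to per-variant usage frequency rankings, with user-derived counts layered on top:

        # Invented example entries; rankings are kept per language variant.
        DICTIONARY = {
            "theater": {"en-US": 9500, "en-GB": 7200},
            "theatre": {"en-US": 6100, "en-GB": 9600},
        }

        def usage_frequency_ranking(word, variant="en-US", user_counts=None):
            """Base ranking for the active language variant, optionally adjusted by
            counts mined from the user's own prior input (emails, messages, etc.).
            The additive combination is an assumption made for illustration."""
            base = DICTIONARY.get(word, {}).get(variant, 0)
            return base + (user_counts or {}).get(word, 0)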
  • the word recommendation module identifies word recommendations for presentation to the user in response to text input by the user.
  • Each of the above identified modules and applications corresponds to a set of instructions for performing one or more functions described above. These modules (i.e., sets of instructions) need not be implemented as separate software programs, procedures or modules. The various modules and sub-modules may be rearranged and/or combined. Memory 102 may include additional modules and/or sub-modules, or fewer modules and/or sub-modules. Memory 102, therefore, may include a subset or a superset of the above identified modules and/or sub-modules.
  • Various functions of the device 100 may be implemented in hardware and/or in software, including in one or more signal processing and/or application specific integrated circuits.
  • Process flow 200 describes a process of providing word recommendations in response to input of a character string by a user.
  • a sequence of input characters is received from an input device (202).
  • a user inputs a sequence of characters into the portable communications device via an input device, such as a keyboard, and the device receives the input.
  • the input character sequence is a sequence of non-whitespace characters, delimited by whitespaces or punctuation, input by the user via the input device.
  • the sequence of characters may constitute a word.
  • the input device is a virtual keyboard (also called a soft keyboard) displayed on a touch-sensitive display of the portable device, where the user hits the keys of the keyboard ("types on the keyboard") by touching the touch-sensitive display on locations corresponding to keys of the virtual keyboard.
  • the input device is a physical keyboard on the device (also called a hard keyboard).
  • the keyboard, whether virtual or physical, has a plurality of keys, each key corresponding to one or more characters, such as letters, numbers, punctuation, or symbols.
  • the keys are arranged in accordance with a predefined layout that defines the positions of the keys on the keyboard. On the layout, each key has at least one neighbor key.
  • the keyboard layout follows the well-known QWERTY layout or a variant thereof. In some other embodiments, the keyboard layout may follow other layouts. Furthermore, in some embodiments, the layout may change depending on the language used on the device. For example, if English is selected as the user interface language, then the active keyboard layout may be the QWERTY layout, and other layouts may be active when another language, such as Swedish or French, is selected as the user interface language. Further details regarding keyboard layouts are described below in relation to FIG. 5.
  • Permutations of input characters and neighbor characters are determined and a set of strings are generated from the permutations (204).
  • a "permutation" is a sequence of characters, wherein each character in the sequence is either the input character in the corresponding position in the input character sequence or a neighbor character of that input character on the keyboard layout.
  • the first character in the permutation is the first character of the input character sequence or a neighbor of that first character on the keyboard layout
  • the second character in the permutation is the second character of the input character sequence or a neighbor of that second character on the keyboard layout, and so forth, up to and perhaps including the last character in the input character sequence.
  • the length of a permutation and of a generated string is at most the length of the input character sequence.
  • for example, for the input sequence “rheatre” (described further in relation to FIG. 6), the first character in any of the permutations generated for this input sequence is “r” (the first character in the input sequence) or any character that is a neighbor of “r” on the keyboard layout.
  • the second character in a permutation is “h” or any neighbor thereof.
  • the third character in a permutation is “e” (the third character in the input sequence) or neighbors thereof, and so forth.
  • permutations may be determined for a predefined-length subset of the input sequence and strings of the same predefined length may be generated from the permutations.
  • the predefined length is 3 characters. That is, the permutations are determined and prefix strings are generated from the first three characters in the input sequence and neighbors thereof. If the length of the input sequence is less than the predefined length, a process other than process flow 200 may be used to provide word recommendations. For example, if the input sequence is one or two characters long, the input sequence in its entirety may be compared against words in a dictionary and best matches are identified.
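  • as a rough illustration (not taken from the patent text), the prefix-string generation described above might be sketched in Python as follows; the neighbor map is only an assumed fragment, and the function name and default prefix length of three simply follow the example in this section:

        from itertools import product

        # Assumed, partial neighbor map; a real map would cover the whole layout
        # (see the layouts discussed in relation to FIGs. 5A and 5B).
        NEIGHBORS = {
            "r": {"e", "d", "f", "t"},            # per the example given for "r"
            "h": {"y", "u", "g", "j", "b", "n"},  # assumed; the text lists "y" and "u"
            "e": {"w", "s", "d", "r"},            # assumed
        }

        def generate_prefix_strings(input_seq, neighbors, prefix_len=3):
            """Generate every permutation of the first prefix_len input characters,
            where each position may be the typed character or one of its keyboard
            neighbors, and return the resulting prefix strings."""
            prefix = input_seq[:prefix_len]
            choices = [{ch} | neighbors.get(ch, set()) for ch in prefix]
            return {"".join(combo) for combo in product(*choices)}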
  • the set of strings are compared against a dictionary. Words in the dictionary that have any of the set of strings as a prefix are identified (206).
  • here, “prefix” means that the string is a prefix of a word in the dictionary or is itself a word in the dictionary.
  • a dictionary refers to a list of words. The dictionary may be pre-made and stored in the memory. The dictionary may also include usage frequency rankings for each word in the dictionary. A usage frequency ranking for a word indicates (or more generally, corresponds to) the statistical usage frequency for that word in a language. In some embodiments, the dictionary may include different usage frequency rankings for different variants of a language.
  • a dictionary of words in the English language may have different usage frequency rankings with respect to American English and British English.
  • the dictionary may be customizable. That is, additional words may be added to the dictionary by the user.
  • different applications may have different dictionaries with different words and usage frequency rankings. For example, an email application and an SMS application may have different dictionaries, with different words and perhaps different usage frequency rankings within the same language.
  • the identified words are the candidate words that may be presented to the user as recommended replacements for the input sequence.
  • the candidate words are scored (208). Each candidate word is scored based on a character-to-character comparison with the input sequence and optionally other factors. Further details regarding the scoring of candidate words are described below, in relation to FIGs. 3 and 7A - 7C.
  • a subset of the candidate words are selected based on predefined criteria (210) and the selected subset is presented to the user (212). In some embodiments, the selected candidate words are presented to the user as a horizontal listing of words.
  • Process flow 300 describes a process of scoring a candidate word. The scoring helps determine which word(s) in the dictionary is/are the best potential replacement(s) for the input sequence of characters.
  • Each character in a candidate word is compared to the character in the corresponding position in the input sequence (302).
  • the first character in the candidate word is compared to the first character in the input sequence
  • the second character in the candidate word is compared to the second character in the input sequence, and so forth. If either the candidate word or the input sequence is longer than the other, then the additional characters beyond the shorter length of the two are ignored in the comparison.
  • further comparison of the candidate word with the input sequence may be made.
  • the further comparison may include determining the number of character differences between the candidate word and the input sequence, and determining if any character differences are a result of transposed characters.
  • a score is calculated for the candidate word based on the comparison described above (304).
  • each character comparison yields a value, and the values are added to yield the score for the candidate word.
  • the score value given for a character comparison is based on the actual characters as opposed to merely whether the characters match. More particularly, the value may be based on whether the character in the candidate word matches the corresponding character in the input sequence exactly and/or whether the character in the candidate word is a keyboard layout neighbor of the corresponding character in the input sequence.
  • a first "bonus” may be added to the score of the candidate word if the candidate word and the input sequence are different in only one character (306).
  • an optional second "bonus" may be added to the score of the candidate word if the candidate word and the input sequence are different in only a pair of transposed adjacent characters (308). Further details regarding candidate word scoring are described below, in relation to FIGs. 7A - 7C.
  • FIG. 4 is a flow diagram illustrating a process of selecting and presenting candidate words in accordance with some embodiments.
  • Process flow 400 describes in further detail blocks 210 and 212 (FIG. 2), which involve the selection and presentation of candidate words.
  • the candidate words are split into two groups based on their usage frequency rankings within the dictionary (402).
  • a first group includes the candidate words whose usage frequency rankings exceed a predefined threshold.
  • the second group includes the candidate words whose usage frequency rankings do not exceed the threshold. Within each of the two groups, the candidate words are sorted by their candidate word scores.
  • high-scoring candidate words in the second group may be removed from the second group and added to the first group if their scores exceed the score of the highest scoring candidate word in the first group by a predefined margin (404).
  • in some embodiments, the predefined margin requires that the score of the candidate word in the second group be at least two times the highest candidate word score in the first group.
  • One or more of the highest scoring candidate words in the first group are presented to the user (406). It should be appreciated that if candidate words from the second group were moved to the first group as described above, then the candidate words that are presented will include at least one candidate word that was originally in the second group since that candidate word has a higher score than any of the original candidate words in the first group.
  • the highest scoring candidate word in the second group may nevertheless be presented along with the candidate words from the first group (408).
  • the input sequence as entered by the user may be presented as a matter of course (410). The user may choose any one of the presented candidate words to replace the input sequence, including choosing the input sequence as entered if the user is satisfied with it.
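  • one way to read process flow 400 as code is sketched below; this is a hedged illustration, since the function signature, the margin value of two, and the number of words shown are only examples, and the text states just the grouping, promotion, and presentation steps:

        def select_words_to_present(candidates, scores, freq_rankings,
                                    freq_threshold, margin=2.0, max_shown=3):
            """Split candidates into two groups by usage frequency ranking, sort each
            group by candidate word score, promote exceptional second-group words,
            and return the top words; the input sequence itself is always presented
            alongside these as a matter of course."""
            group1 = sorted((w for w in candidates if freq_rankings[w] > freq_threshold),
                            key=lambda w: scores[w], reverse=True)
            group2 = sorted((w for w in candidates if freq_rankings[w] <= freq_threshold),
                            key=lambda w: scores[w], reverse=True)

            best_group1_score = scores[group1[0]] if group1 else 0.0
            # Promote second-group words whose score exceeds the best first-group
            # score by the predefined margin (here: at least two times that score).
            promoted = [w for w in group2 if scores[w] >= margin * best_group1_score]
            group1 = sorted(set(group1) | set(promoted),
                            key=lambda w: scores[w], reverse=True)
            # In some embodiments the highest scoring remaining second-group word may
            # also be shown; that variation is omitted from this sketch.
            return group1[:max_shown]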
  • FIGs. 5A and 5B are exemplary layouts of letter keys on a keyboard in accordance with some embodiments.
  • the prefix strings, based on which candidate words are identified, are generated from characters in the input sequence and their corresponding neighbor characters on a keyboard layout.
  • Keyboard layouts 502 and 504 are exemplary keyboard layouts.
  • a keyboard layout defines the positions of each key on the keyboard and the alignment of the keys relative to each other. For ease of description, only the letter keys of the layouts 502 and 504 are shown. It should be appreciated, however, that a keyboard layout may also include keys for numbers, punctuation, symbols, and functional keys. In some embodiments, some keys may be overloaded, that is, a key may correspond to multiple characters and/or functions.
  • Layouts 502 and 504 are layouts that follow the well-known QWERTY layout. However, the key alignment in layout 502 is different from the key alignment in layout 504. In layout 502, the keys are aligned in rows but not in columns; a key in one row may straddle two keys in an adjacent row. For example, key “T” straddles keys “F” and "G” in layout 502. In layout 504, the keys are aligned in columns as well as in rows. The definition of which keys are the neighbors of a key may be different depending on how the keys are aligned. In layout 502, the neighbors of a particular key may be defined as the keys that are directly adjacent to the particular key or whose peripheries "touch" a periphery of the particular key.
  • the neighbors of key “G” in layout 502 are keys “T,” “Y,” “F,” “H,” “V,” and “B;” and the neighbors of key “W” are keys “Q,” “E,” “A,” and “S.”
  • the neighbors of a particular key may be defined as the keys that are immediately above, below, to the side of, and diagonal of the particular key.
  • the neighbors of key “G” in layout 504 are keys “R,” “T,” “Y,” “F,” “H,” “C,” “V,” and “B;” and the neighbors of key “W” are keys “Q,” “E,” “A,” “S,” and “D.”
  • layouts 502 and 504 are merely exemplary; other layouts and key alignments are possible, and the same key may have different neighbors in different layouts.
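  • a hedged sketch of how a neighbor map for the column-aligned layout 504 might be built follows; the row offsets below are inferred from the neighbor examples given for "g" and "w" and are an assumption rather than something the text spells out:

        # Letter rows of a column-aligned layout; the bottom-row offset of 1 makes
        # the computed neighbors of "g" come out as r, t, y, f, h, c, v, b and the
        # neighbors of "w" as q, e, a, s, d, matching the examples above.
        ROWS = [("qwertyuiop", 0), ("asdfghjkl", 0), ("zxcvbnm", 1)]

        def build_neighbor_map(rows):
            """A key's neighbors are the keys immediately above, below, to the side
            of, and diagonal to it on the grid."""
            pos = {ch: (r, c + offset)
                   for r, (row, offset) in enumerate(rows)
                   for c, ch in enumerate(row)}
            return {ch: {other for other, (r2, c2) in pos.items()
                         if other != ch and abs(r - r2) <= 1 and abs(c - c2) <= 1}
                    for ch, (r, c) in pos.items()}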
  • FIG. 6 illustrates an exemplary derivation of candidate words based on a text input in accordance with some embodiments; in particular, it shows an example of the identification of candidate words from an input sequence.
  • the input sequence 602 is "rheatre.”
  • the first three characters and their corresponding neighbors 604 are identified.
  • the first character is “r” and its neighbors, in accordance with the layout 502, are “e,” “d,” “f,” and “t.”
  • the second character is “h,” and its neighbors include “y” and “u.”
  • the character permutations 606 are determined. Each permutation is a character combination where the first character is the first input character or a neighbor thereof, the second character is the second input character or a neighbor thereof, and the third character is the third input character or a neighbor thereof. From these permutations, prefix strings are generated and compared to words in the dictionary. Examples of three-character permutations based on the input sequence 602 include “the,” “rus,” “rye,” and “due.” Words in the dictionary that have one of these strings as a prefix are identified as candidate words 608. Examples of candidate words include “theater,” “rye,” “rusty,” “due,” “the,” and “there.” In other embodiments, the character permutations may include four, five, or more characters, rather than three characters.
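  • continuing the hedged sketches above, the FIG. 6 example could be exercised as follows (the miniature word list is invented for illustration):

        # Invented miniature dictionary for the example input "rheatre".
        WORDS = ["theater", "the", "there", "rye", "rusty", "due", "zebra"]

        def find_candidate_words(input_seq, neighbors, words, prefix_len=3):
            prefixes = generate_prefix_strings(input_seq, neighbors, prefix_len)
            return [w for w in words if any(w.startswith(p) for p in prefixes)]

        # With a neighbor map for the layout in use, find_candidate_words("rheatre", ...)
        # would return words such as "theater", "the", "there", "rye", "rusty", and
        # "due", mirroring the candidate words 608 in FIG. 6, while "zebra" would not
        # match any generated prefix.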
  • FIGs. 7A-7C are examples of scoring of candidate words in accordance with some embodiments.
  • FIG. 7A shows an input sequence and three possible candidate words that may be identified from permutations of the first three characters of the input sequence. The candidate words are compared to the input sequence character-by-character and scores for the candidate words are tallied.
  • a score tally of a candidate word involves assigning a value for each character comparison and adding the values together.
  • the value that is assigned for a character comparison is based on the result of the comparison. Particularly, the value is based on whether the character in the candidate word, compared to the character in the corresponding position in the input sequence, is an exact match, a neighbor on the keyboard layout, or neither.
  • the value assigned for an exact match is a predefined value N. If the characters are not an exact match but are neighbors, then the value assigned is αN, where α is a constant and α < 1. In some embodiments, α is 0.5. In other words, the value assigned for a neighbor match is a reduction of the value for an exact match.
  • if the characters are neither an exact match nor neighbors, the assigned value is βN, where β is a constant and β < α < 1.
  • β may be 0.25.
  • β may be a function of the "distance" between the characters on the keyboard layout. That is, β may be a smaller number if the candidate word character is farther away on the keyboard layout from the input sequence character than if the candidate word character is closer to the input sequence character without being a neighbor.
  • for example, the value may be 1 for an exact match, 0.5 for a neighbor, and 0 otherwise.
  • as another example, the value may be 0.5 for a neighbor (a 1-key radius), 0.25 for keys that are two keys away (a 2-key radius), and 0 for keys that are three or more keys away.
  • in some embodiments, N is equal to 1.
  • the first candidate word shown in FIG. 7A is "theater." Compared to the input sequence of "rheatre," there are exact matches in the second through fifth positions.
  • the characters in the first, sixth, and seventh positions of the candidate word are keyboard layout neighbors of the input sequence characters in the corresponding positions.
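  • with the example values above (N = 1, so an exact match contributes N and a neighbor contributes 0.5N), that tally works out to 4N + 3(0.5N) = 5.5N for "theater" against "rheatre"; the remaining candidates are tallied in the same way.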
  • the second candidate word is "threats." It is likewise compared character-by-character to the input sequence "rheatre" and its score is tallied.
  • the third candidate word is "there." It too is compared character-by-character to the input sequence "rheatre" and scored.
  • Some candidate words, when compared to the input sequence, may merit a score bonus, examples of which are shown in FIGs. 7B and 7C.
  • in FIG. 7B, the input sequence is "thaeter" and the candidate word is "theater."
  • the score based on the character comparisons alone is 5.5N.
  • the only difference between "thaeter” and “theater” is a pair of transposed or swapped characters, namely "ae” in “thaeter” vs. "ea” in “theater.”
  • a first bonus P is added to the score for this fact.
  • in FIG. 7C, the input sequence is "thester" and the candidate word is "theater."
  • the score based on the character comparisons alone is 6.5N.
  • the only difference between “thester” and “theater” is a single character, namely “s” in “thester” vs. "a” in “theater.”
  • a second bonus Q is added to the score for this fact.
  • both P and Q are equal to 0.75.
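  • pulling the pieces above together, a hedged sketch of the scoring follows; it uses the example constants (N = 1, 0.5 for a neighbor, 0.25 otherwise, bonuses of 0.75) and labels the bonuses P and Q as in FIGs. 7B and 7C:

        def score_candidate(candidate, input_seq, neighbors,
                            n=1.0, alpha=0.5, beta=0.25, p=0.75, q=0.75):
            """Character-by-character score plus the two bonuses. Extra characters
            beyond the shorter of the two strings are ignored, as described above."""
            score = 0.0
            for cand_ch, in_ch in zip(candidate, input_seq):
                if cand_ch == in_ch:
                    score += n                                   # exact match
                elif cand_ch in neighbors.get(in_ch, set()):
                    score += alpha * n                           # keyboard neighbor
                else:
                    score += beta * n                            # neither
            if len(candidate) == len(input_seq):
                diffs = [i for i, (a, b) in enumerate(zip(candidate, input_seq)) if a != b]
                if (len(diffs) == 2 and diffs[1] == diffs[0] + 1
                        and candidate[diffs[0]] == input_seq[diffs[1]]
                        and candidate[diffs[1]] == input_seq[diffs[0]]):
                    score += p    # bonus P: only a transposed adjacent pair differs
                elif len(diffs) == 1:
                    score += q    # bonus Q: only a single character differs
            return score

        # Given a QWERTY neighbor map such as the one sketched earlier,
        # score_candidate("theater", "thaeter", ...) yields 5.5 + 0.75 and
        # score_candidate("theater", "thester", ...) yields 6.5 + 0.75,
        # matching FIGs. 7B and 7C.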
  • in one alternative scheme, instead of dividing the candidate words into the first and second groups based on usage frequency rankings, the usage frequency rankings may be used as weightings applied to the candidate word scores. That is, the score of a candidate word is multiplied by the usage frequency ranking of the candidate word, and candidate words for presentation are selected based on their weighted scores.
  • Another scheme replaces candidate word scoring based on character-by-character comparisons, as described above, with scoring based on the edit distance (also known as the Levenshtein distance) between the input sequence and the candidate word. That is, the score of a candidate word is the edit distance between the candidate word and the input sequence, or a function thereof, and candidate words are selected for presentation based on the edit distance scores. Alternately, the score for each candidate is based on the edit distance multiplied by (or otherwise combined with) the usage frequency ranking of the candidate, and candidate words are selected for presentation based on these scores.
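  • a hedged sketch of this alternative follows; the Levenshtein computation is the standard dynamic-programming formulation, and the way the distance is turned into a score (and optionally combined with the usage frequency ranking) is an assumption, since the text only says the score is the edit distance "or a function thereof":

        def edit_distance(a, b):
            """Levenshtein distance between strings a and b."""
            previous = list(range(len(b) + 1))
            for i, ca in enumerate(a, start=1):
                current = [i]
                for j, cb in enumerate(b, start=1):
                    current.append(min(previous[j] + 1,                 # deletion
                                       current[j - 1] + 1,              # insertion
                                       previous[j - 1] + (ca != cb)))   # substitution
                previous = current
            return previous[-1]

        def edit_distance_score(candidate, input_seq, freq_ranking=None):
            """Smaller distance means a better match; optionally weight by the
            usage frequency ranking, as the text describes."""
            base = 1.0 / (1.0 + edit_distance(candidate, input_seq))
            return base * freq_ranking if freq_ranking is not None else base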
  • another scheme uses a graph-matching technique.
  • the sequence of individual touch points that a user inputs into the device for a word form a directed graph.
  • This user-input directed graph is compared against a collection of directed graphs for respective words in a dictionary to generate a list of dictionary words that most closely match the user typing.
  • the probability that a user-input directed graph matches the directed graph for a dictionary word is calculated as follows:
  • let P_1 ... P_n be, for each point in the user-input directed graph, the probability that the letter corresponding to U_x equals the letter corresponding to D_x.
  • a respective P_x is computed by calculating the Euclidean distance between the points U_x and D_x, and by applying a factor based on the size of the user interface elements that indicate the keys on the keyboard.
  • the factor (based on the size of the user interface elements that indicate the keys on the keyboard) is a divisor that is equal to, or proportional to, the distance between center points of two horizontally adjacent keys on the keyboard.
  • G, the probability that a graph for a dictionary word matches the user-input graph, is derived from the per-point probabilities P_1 ... P_n.
  • G is multiplied by F, the frequency that the word occurs in the source language/domain.
  • G is also multiplied by N, a factor calculated by considering one or more words previously typed by the user. For example, in a sentence/passage being typed by a user, "to" is more likely to follow "going," but "ti" is more likely to follow "do re mi fa so la."
  • G is multiplied by both F and N to yield the overall probability that the user-input directed graph matches the dictionary word.
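  • a hedged sketch of this graph-matching calculation is shown below; the per-point mapping from distance to probability (an exponential decay) and the use of a simple product to combine the per-point probabilities are assumptions, since the text specifies only the Euclidean distance, the key-pitch divisor, and the F and N weightings:

        import math

        def graph_match_probability(touch_points, word_key_points, key_pitch,
                                    word_frequency, context_factor):
            """touch_points: (x, y) points of the user-input directed graph.
            word_key_points: key-center points for the dictionary word's graph.
            key_pitch: distance between centers of two horizontally adjacent keys.
            word_frequency: F, the word's frequency in the source language/domain.
            context_factor: N, based on the previously typed words."""
            g = 1.0
            for (ux, uy), (dx, dy) in zip(touch_points, word_key_points):
                scaled_distance = math.hypot(ux - dx, uy - dy) / key_pitch
                g *= math.exp(-scaled_distance)   # assumed per-point probability P_x
            return g * word_frequency * context_factor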
  • the collection of dictionary words with the highest probabilities may be presented in a display for user consideration, for example as described in "Method, System, and Graphical User Interface for Providing Word Recommendations" (U.S. Patent Application number to be determined, filed January 5, 2007, attorney docket number 063266-5041), the content of which is hereby incorporated by reference in its entirety.
  • the top-ranked word is selected for the user by the device without user intervention.
  • the portable device may automatically adjust or recalibrate the contact regions of the keys of the virtual keyboard to compensate for the user pattern of typing errors.
  • the word selected by the user may be recommended first or given a higher score when the same input sequence is subsequently entered by the user.
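  • as a final hedged illustration of the adaptive behavior described in the last point, the device might record the user's selection for a given input sequence and favor that word on subsequent entries; everything below, including the size of the boost, is invented for illustration:

        SELECTED_REPLACEMENTS = {}

        def record_selection(input_sequence, chosen_word):
            """Remember which candidate the user picked for this exact input sequence."""
            SELECTED_REPLACEMENTS[input_sequence] = chosen_word

        def adjust_candidate_scores(input_sequence, scores, boost=1.0):
            """Score the previously chosen word higher (so it can be recommended first)
            when the same input sequence is entered again."""
            chosen = SELECTED_REPLACEMENTS.get(input_sequence)
            if chosen in scores:
                scores[chosen] += boost
            return scores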

Abstract

Word recommendations are provided in response to text input. For a particular text input, possible word recommendations are identified based on the characters of the input and corresponding neighbor characters on a keyboard layout. The possible word recommendations are scored based on how closely they match the input word on a character-by-character basis, and a subset of the possible word recommendations are selected for presentation to the user.

Description

Method and System for Providing Word Recommendations for Text Input
TECHNICAL FIELD
[0001] The disclosed embodiments relate generally to text input on portable communication devices, and more particularly, to methods and systems for providing word recommendations in response to text input.
BACKGROUND
[0002] In recent years, the functional capabilities of portable communications devices have increased dramatically. Current devices enable communication by voice, text, and still or moving images. Communication by text, such as by email, instant message (IM) or short messaging service (SMS), has proven to be quite popular.
[0003] However, the size of these portable communication devices also restricts the size of the text input device, such as a physical or virtual keyboard, in the portable device. With a size-restricted keyboard, designers are often forced to make the keys smaller or overload the keys. Both may lead to typing mistakes and thus more backtracking to correct the mistakes. This makes the process of communication by text on the devices inefficient and reduces user satisfaction with such portable communication devices.
[0004] Accordingly, there is a need for more efficient ways of entering text into portable devices.
SUMMARY
[0005] In accordance with some embodiments, a computer-implemented method involves receiving a sequence of input characters from a keyboard, wherein the keyboard has a predefined layout of characters with each character in the layout having one or more neighbor characters. The method also involves generating a set of strings from at least a subset of the sequence of input characters, where the set of strings comprises permutations of respective input characters in the subset of the sequence and neighbor characters of the respective input characters on the layout of the keyboard; identifying in a dictionary one or more candidate words that have a string in the set of strings as a prefix; scoring the candidate words; selecting a subset of the candidate words based on predefined criteria; and presenting the subset of the candidate words.
[0006] In accordance with some embodiments, there is a computer program product for use in conjunction with a portable communications device. The computer program product comprises a computer readable storage medium and a computer program mechanism embedded therein. The computer program mechanism includes instructions for receiving a sequence of input characters from a keyboard, wherein the keyboard has a predefined layout of characters with each character in the layout having one or more neighbor characters; instructions for generating a set of strings from at least a subset of the sequence of input characters, the set of strings comprising permutations of respective input characters in the subset of the sequence and neighbor characters of the respective input characters on the layout of the keyboard; instructions for identifying in a dictionary one or more candidate words, each candidate word having a string in the set of strings as a prefix; instructions for scoring the candidate words; instructions for selecting a subset of the candidate words based on predefined criteria; and instructions for presenting the subset of the candidate words.
[0007] In accordance with some embodiments, a portable communications device includes a display; a keyboard, the keyboard having a predefined layout of characters with each character in the layout having one or more neighbor characters; one or more processors; memory; and a program stored in the memory and configured to be executed by the one or more processors. The program includes instructions for receiving a sequence of input characters from the keyboard; instructions for generating a set of strings from at least a subset of the sequence of input characters, the set of strings comprising permutations of respective input characters in the subset of the sequence and neighbor characters of the respective input characters on the layout of the keyboard; instructions for identifying in a dictionary one or more candidate words, each candidate word having a string in the set of strings as a prefix; instructions for scoring the candidate words; instructions for selecting a subset of the candidate words based on predefined criteria; and instructions for presenting the subset of the candidate words. [0008] In accordance with some embodiments, a portable communications device includes display means; input means having a predefined layout of characters, each character in the layout having one or more neighbor characters; one or more processor means; memory means; and a program mechanism stored in the memory means and configured to be executed by the one or more processors means. The program mechanism includes instructions for receiving a sequence of input characters from the input means; instructions for generating a set of strings from at least a subset of the sequence of input characters, the set of strings comprising permutations of respective input characters in the subset of the sequence and neighbor characters of the respective input characters on the layout of the input means; instructions for identifying in a dictionary one or more candidate words, each candidate word having a string in the set of strings as a prefix; instructions for scoring the candidate words; instructions for selecting a subset of the candidate words based on predefined criteria; and instructions for presenting the subset of the candidate words.
[0009] In accordance with some embodiments, a computer-implemented method involves receiving a sequence of individual touch points input by a user that form a user-input directed graph; comparing the user-input directed graph to respective directed graphs for words in a dictionary; generating a list of candidate words based at least in part on the comparing step; and presenting at least some of the candidate words to the user.
[0010] In accordance with some embodiments, there is a computer program product for use in conjunction with a portable communications device. The computer program product comprises a computer readable storage medium and a computer program mechanism embedded therein. The computer program mechanism includes instructions for receiving a sequence of individual touch points input by a user that form a user-input directed graph; instructions for comparing the user-input directed graph to respective directed graphs for words in a dictionary; instructions for generating a list of candidate words based at least in part on the comparing step; and instructions for presenting at least some of the candidate words to the user. [0011] In accordance with some embodiments, a portable communications device includes a display; a keyboard; one or more processors; memory; and a program stored in the memory and configured to be executed by the one or more processors. The program includes instructions for receiving a sequence of individual touch points input by a user that form a user-input directed graph; instructions for comparing the user-input directed graph to respective directed graphs for words in a dictionary; instructions for generating a list of candidate words based at least in part on the comparing step; and instructions for presenting at least some of the candidate words to the user. [0012] In accordance with some embodiments, a portable communications device includes means for receiving a sequence of individual touch points input by a user that form a user-input directed graph; means for comparing the user-input directed graph to respective directed graphs for words in a dictionary; means for generating a list of candidate words based at least in part on the comparing step; and means for presenting at least some of the candidate words to the user.
[0013] Thus, the embodiments provide more efficient ways to enter text in a portable device.
BRIEF DESCRIPTION OF THE DRAWINGS
[0014] For a better understanding of the aforementioned embodiments of the invention as well as additional embodiments thereof, reference should be made to the Description of Embodiments below, in conjunction with the following drawings in which like reference numerals refer to corresponding parts throughout the figures.
[0015] FIG. 1 is a block diagram illustrating a portable communications device in accordance with some embodiments.
[0016] FIG. 2 is a flow diagram illustrating a process of providing word recommendations in accordance with some embodiments.
[0017] FIG. 3 is a flow diagram illustrating a process of scoring candidate words in accordance with some embodiments.
[0018] FIG. 4 is a flow diagram illustrating a process of selecting and presenting candidate words in accordance with some embodiments.
[0019] FIGs. 5A and 5B illustrate exemplary layouts of letter keys on a keyboard in accordance with some embodiments.
[0020] FIG. 6 illustrates an exemplary derivation of candidate words based on a text input in accordance with some embodiments.
[0021] FIGs. 7A-7C illustrate examples of scoring of candidate words in accordance with some embodiments.
DESCRIPTION OF EMBODIMENTS
[0022] Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be apparent to one of ordinary skill in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail so as not to unnecessarily obscure aspects of the embodiments.
[0023] A portable communication device includes a user interface and a text input device. Via the interface and the text input device, a user may enter text into the device. The text includes words, which are sequences of characters separated by whitespaces or particular punctuation. For a word as it is being entered or an entered word, the device identifies and offers word recommendations that may be selected by the user to replace the word as inputted by the user. [0024] Attention is now directed to an embodiment of a portable communications device. Figure 1 is a block diagram illustrating an embodiment of a device 100, such as a portable electronic device having a touch-sensitive display 112. The device 100 may include a memory controller 120, one or more data processors, image processors and/or central processing units 118 and a peripherals interface 116. The memory controller 120, the one or more processors 118 and/or the peripherals interface 116 may be separate components or may be integrated, such as in one or more integrated circuits 104. The various components in the device 100 may be coupled by one or more communication buses or signal lines 103.
[0025] The peripherals interface 116 may be coupled to an optical sensor (not shown), such as a CMOS or CCD image sensor; RF circuitry 108; audio circuitry 110; and/or an input/output (I/O) subsystem 106. The audio circuitry 110 may be coupled to a speaker
142 and a microphone 144. The device 100 may support voice recognition and/or voice replication. The RF circuitry 108 may be coupled to one or more antennas 146 and may allow communication with one or more additional devices, computers and/or servers using a wireless network. The device 100 may support a variety of communications protocols, including code division multiple access (CDMA), Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), Wi-Fi (such as IEEE 802.11a, IEEE 802.11b, IEEE 802.11g and/or IEEE 802.11n), Bluetooth, Wi-MAX, a protocol for email, instant messaging, and/or a short message service (SMS), or any other suitable communication protocol, including communication protocols not yet developed as of the filing date of this document. In an exemplary embodiment, the device 100 may be, at least in part, a cellular telephone.
[0026] The I/O subsystem 106 may include a touch screen controller 152 and/or other input controller(s) 154. The touch-screen controller 152 may be coupled to a touch-sensitive screen or touch sensitive display system 112.
[0027] The touch-sensitive display system 112 provides an input interface and an output interface between the device and a user. The display controller 152 receives and/or sends electrical signals from/to the display system 112. The display system 112 displays visual output to the user. The visual output may include graphics, text, icons, video, and any combination thereof (collectively termed "graphics"). In some embodiments, some or all of the visual output may correspond to user-interface objects, further details of which are described below.
[0028] A touch screen in display system 112 is a touch-sensitive surface that accepts input from the user based on haptic and/or tactile contact. The display system 112 and the display controller 152 (along with any associated modules and/or sets of instructions in memory 102) detect contact (and any movement or breaking of the contact) on the display system 112 and converts the detected contact into interaction with user-interface objects (e.g., one or more soft keys, icons, web pages or images) that are displayed on the touch screen. The touch screen 112 may be used to implement virtual or soft buttons and/or a keyboard. In an exemplary embodiment, a point of contact between a touch screen in the display system 112 and the user corresponds to a finger of the user. [0029] The touch screen in the display system 112 may use LCD (liquid crystal display) technology, or LPD (light emitting polymer display) technology, although other display technologies may be used in other embodiments. The touch screen in the display system 112 and the display controller 152 may detect contact and any movement or breaking thereof using any of a plurality of touch sensing technologies now known or later developed, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with a touch screen in the display system 112. A touch-sensitive display in some embodiments of the display system 112 may be analogous to the multi-touch sensitive tablets described in the following U.S. Patents: 6,323,846 (Westerman et al.), 6,570,557 (Westerman et al.), and/or 6,677,932 (Westerman), and/or U.S. Patent Publication 2002/0015024A1, each of which is hereby incorporated by reference. However, a touch screen in the display system 112 displays visual output from the portable device 100, whereas touch sensitive tablets do not provide visual output. The touch screen in the display system 112 may have a resolution in excess of 100 dpi. In an exemplary embodiment, the touch screen in the display system has a resolution of approximately 168 dpi. The user may make contact with the touch screen in the display system 112 using any suitable object or appendage, such as a stylus, a finger, and so forth. In some embodiments, the user interface is designed to work primarily with finger-based contacts and gestures, which are much less precise than stylus-based input due to the larger area of contact of a finger on the touch screen. In some embodiments, the device translates the rough finger-based input into a precise pointer/cursor position or command for performing the actions desired by the user.
[0030] A touch-sensitive display in some embodiments of the display system 112 may be as described in the following applications: (1) U.S. Patent Application No. 11/381,313, "Multipoint Touch Surface Controller," filed on May 2, 2006; (2) U.S. Patent Application No. 10/840,862, "Multipoint Touchscreen," filed on May 6, 2004; (3) U.S. Patent Application No. 10/903,964, "Gestures For Touch Sensitive Input Devices," filed on July 30, 2004; (4) U.S. Patent Application No. 11/048,264, "Gestures For Touch Sensitive Input Devices," filed on January 31, 2005; (5) U.S. Patent Application No. 11/038,590, "Mode-Based Graphical User Interfaces For Touch Sensitive Input Devices," filed on January 18, 2005; (6) U.S. Patent Application No. 11/228,758, "Virtual Input Device Placement On A Touch Screen User Interface," filed on September 16, 2005; (7) U.S. Patent Application No. 11/228,700, "Operation Of A Computer With A Touch Screen Interface," filed on September 16, 2005; (8) U.S. Patent Application No. 11/228,737, "Activating Virtual Keys Of A Touch-Screen Virtual Keyboard," filed on September 16, 2005; and (9) U.S. Patent Application No. 11/367,749, "Multi-Functional Hand-Held Device," filed on March 3, 2006. All of these applications are incorporated by reference herein.
[0031] The other input controller(s) 154 may be coupled to other input/control devices 114, such as one or more buttons, a keyboard, infrared port, USB port, and/or a pointer device such as a mouse. The one or more buttons (not shown) may include an up/down button for volume control of the speaker 142 and/or the microphone 144. The one or more buttons (not shown) may include a push button. A quick press of the push button (not shown) may engage or disengage a lock of the touch screen 112. A longer press of the push button (not shown) may turn power to the device 100 on or off. The user may be able to customize a functionality of one or more of the buttons.
[0032] In some embodiments, the device 100 may include circuitry for supporting a location determining capability, such as that provided by the global positioning system (GPS). In some embodiments, the device 100 may be used to play back recorded music stored in one or more files, such as MP3 files or AAC files. In some embodiments, the device 100 may include the functionality of an MP3 player, such as an iPod (trademark of Apple Computer, Inc.). In some embodiments, the external port is a multi-pin (e.g., 30-pin) connector that is the same as, or similar to and/or compatible with the 30-pin connector used on iPod devices.
[0033] The device 100 also includes a power system 137 for powering the various components. The power system 137 may include a power management system, one or more power sources (e.g., battery, alternating current (AC)), a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., a light-emitting diode (LED)) and any other components associated with the generation, management and distribution of power in portable devices. The device 100 may also include one or more external ports 135 for connecting the device 100 to other devices.
[0034] The memory controller 120 may be coupled to memory 102 with one or more types of computer readable media. Memory 102 may include high-speed random access memory and/or non-volatile memory, such as one or more magnetic disk storage devices, one or more optical storage devices, and/or flash memory. Memory 102 may store an operating system 122, such as Darwin, RTXC, LINUX, UNIX, OS X, WINDOWS, or an embedded operating system such as VxWorks. The operating system 122 may include procedures (or sets of instructions) for handling basic system services and for performing hardware dependent tasks. Memory 102 may also store communication procedures (or sets of instructions) in a communication module 124. The communication procedures may be used for communicating with one or more additional devices, one or more computers and/or one or more servers. Memory 102 may include a display module (or a set of instructions) 125, a contact/motion module (or a set of instructions) 126 to determine one or more points of contact and/or their movement, and a graphics module (or a set of instructions) 128. The graphics module 128 may support widgets, that is, modules or applications with embedded graphics. The widgets may be implemented using JavaScript, HTML, or other suitable languages.
[0035] Memory 102 may also include one or more applications 130. Examples of applications include email applications, text messaging or instant messaging applications, web browsers, memo pad applications, address books or contact lists, and calendars.
[0036] Also in memory 102 are one or more dictionaries 132 and a word recommendation module (or set of instructions) 134. In some embodiments, a dictionary contains a list of words and corresponding usage frequency rankings. The usage frequency ranking of a word is the statistical usage frequency for that word in a language, or by a predefined group of people, or by the user of the device 100, or a combination thereof. As described below, a dictionary may include multiple usage frequency rankings for regional variations of the same language and/or be tailored to a user's own usage frequency, e.g., derived from the user's prior emails, text messages, and other previous input from the user. The word recommendation module identifies word recommendations for presentation to the user in response to text input by the user.
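To make the data structure concrete, the sketch below shows one minimal way such a dictionary could be represented in code; the words, the ranking values, and the Python representation are illustrative assumptions, not the device's actual storage format.

```python
# Illustrative only: a dictionary mapping each word to a usage frequency ranking
# (higher values stand in for more frequently used words). Real rankings would come
# from language statistics and/or the user's own prior input.
DICTIONARY = {
    "the": 0.98,
    "there": 0.91,
    "theater": 0.62,
    "due": 0.57,
    "threats": 0.40,
    "rye": 0.35,
    "rusty": 0.22,
}
```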
[0037] Each of the above identified modules and applications corresponds to a set of instructions for performing one or more functions described above. These modules (i.e., sets of instructions) need not be implemented as separate software programs, procedures or modules. The various modules and sub-modules may be rearranged and/or combined. Memory 102 may include additional modules and/or sub-modules, or fewer modules and/or sub-modules. Memory 102, therefore, may include a subset or a superset of the above identified modules and/or sub-modules. Various functions of the device 100 may be implemented in hardware and/or in software, including in one or more signal processing and/or application specific integrated circuits.
[0038] Attention is now directed to FIG. 2, a flow diagram illustrating a process of providing word recommendations in accordance with some embodiments. Process flow 200 describes a process of providing word recommendations in response to input of a character string by a user.
[0039] A sequence of input characters is received from an input device (202). A user inputs a sequence of characters into the portable communications device via an input device, such as a keyboard, and the device receives the input. As used herein, the input character sequence is a sequence of non-whitespace characters, delimited by whitespaces or punctuation, input by the user via the input device. The sequence of characters may constitute a word.
[0040] In some embodiments, the input device is a virtual keyboard (also called a soft keyboard) displayed on a touch-sensitive display of the portable device, where the user hits the keys of the keyboard ("types on the keyboard") by touching the touch-sensitive display on locations corresponding to keys of the virtual keyboard. In some other embodiments, the input device is a physical keyboard on the device (also called a hard keyboard).
[0041] The keyboard, whether virtual or physical, has a plurality of keys, each key corresponding to one or more characters, such as letters, numbers, punctuation, or symbols.
The keys are arranged in accordance with a predefined layout that defines the positions of the keys on the keyboard. On the layout, each key has at least one neighbor key. In some embodiments, the keyboard layout follows the well-known QWERTY layout or a variant thereof. In some other embodiments, the keyboard layout may follow other layouts. Furthermore, in some embodiments, the layout may change depending on the language used on the device. For example, if English is selected as the user interface language, then the active keyboard layout may be the QWERTY layout, and other layouts may be active when another language, such as Swedish or French, is selected as the user interface language. Further details regarding keyboard layouts are described below in relation to FIG. 5.
[0042] Permutations of input characters and neighbor characters are determined and a set of strings are generated from the permutations (204). As used herein, a "permutation" is a sequence of characters, wherein each character in the sequence is either the input character in the corresponding position in the input character sequence or a neighbor character of that input character on the keyboard layout. The first character in the permutation is the first character of the input character sequence or a neighbor of that first character on the keyboard layout, the second character in the permutation is the second character of the input character sequence or a neighbor of that second character on the keyboard layout, and so forth, up to and perhaps including the last character in the input character sequence. Thus, the length of a permutation and of a generated string is at most the length of the input character sequence.
[0043] For example, if the input sequence is "rheater," then the first character in any of the permutations generated for this input sequence is "r" (the first character in the input sequence) or any characters that are neighbors to "r" on the keyboard layout. The second character in a permutation is "h" or any neighbor thereof. The third character in a permutation is "e" (the third character in the input sequence) or neighbors thereof, and so forth.
[0044] In some embodiments, permutations may be determined for a predefined-length subset of the input sequence and strings of the same predefined length may be generated from the permutations. In some embodiments, the predefined length is 3 characters. That is, the permutations are determined and prefix strings are generated from the first three characters in the input sequence and neighbors thereof. If the length of the input sequence is less than the predefined length, a process other than process flow 200 may be used to provide word recommendations. For example, if the input sequence is one or two characters long, the input sequence in its entirety may be compared against words in a dictionary and best matches are identified.
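As a rough illustration of blocks 202-204, the sketch below generates the prefix strings for the first three input characters. The NEIGHBORS map is a hypothetical fragment of a QWERTY neighbor table (it matches the neighbors listed for the example of FIG. 6), and the function name is an assumption.

```python
from itertools import product

# Hypothetical fragment of a QWERTY neighbor table (only the keys needed for the
# "rheatre" example discussed later).
NEIGHBORS = {
    "r": {"e", "d", "f", "t"},
    "h": {"y", "u", "g", "j", "b", "n"},
    "e": {"w", "s", "d", "r"},
}

def prefix_strings(input_sequence, prefix_length=3):
    """Generate all permutations of the first `prefix_length` input characters,
    where each position may be the typed character or one of its layout neighbors."""
    prefix = input_sequence[:prefix_length]
    choices = [{c} | NEIGHBORS.get(c, set()) for c in prefix]
    return {"".join(combo) for combo in product(*choices)}

# Example: prefix_strings("rheatre") includes "the", "rye", "rus", and "due".
```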
[0045] The set of strings is compared against a dictionary. Words in the dictionary that have any of the set of strings as a prefix are identified (206). As used herein, "prefix" means that the string is a prefix of a word in the dictionary or is itself a word in the dictionary. A dictionary, as used herein, refers to a list of words. The dictionary may be pre-made and stored in the memory. The dictionary may also include usage frequency rankings for each word in the dictionary. A usage frequency ranking for a word indicates (or more generally, corresponds to) the statistical usage frequency for that word in a language. In some embodiments, the dictionary may include different usage frequency rankings for different variants of a language. For example, a dictionary of words in the English language may have different usage frequency rankings with respect to American English and British English.

[0046] In some embodiments, the dictionary may be customizable. That is, additional words may be added to the dictionary by the user. Furthermore, in some embodiments, different applications may have different dictionaries with different words and usage frequency rankings. For example, an email application and an SMS application may have different dictionaries, with different words and perhaps different usage frequency rankings within the same language.
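A minimal sketch of block 206 follows, assuming the dictionary is a word-to-ranking mapping as in the earlier sketch; a production implementation would more likely use a trie or a sorted index rather than a linear scan, and the function name is an assumption.

```python
def find_candidates(prefixes, dictionary):
    """Return dictionary words that have any of the generated strings as a prefix.
    A word that exactly equals a prefix string also qualifies."""
    return [word for word in dictionary
            if any(word.startswith(p) for p in prefixes)]
```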
[0047] The identified words are the candidate words that may be presented to the user as recommended replacements for the input sequence. The candidate words are scored (208). Each candidate word is scored based on a character-to-character comparison with the input sequence and optionally other factors. Further details regarding the scoring of candidate words are described below, in relation to FIGs. 3 and 7A - 7C. A subset of the candidate words are selected based on predefined criteria (210) and the selected subset is presented to the user (212). In some embodiments, the selected candidate words are presented to the user as a horizontal listing of words.
[0048] Attention is now directed to FIG. 3, a flow diagram illustrating a process of scoring candidate words in accordance with some embodiments. Process flow 300 describes a process of scoring a candidate word. The scoring helps determine which word(s) in the dictionary is/are the best potential replacement(s) for the input sequence of characters.
[0049] Each character in a candidate word is compared to the character in the corresponding position in the input sequence (302). Thus, the first character in the candidate word is compared to the first character in the input sequence, the second character in the candidate word is compared to the second character in the input sequence, and so forth. If either the candidate word or the input sequence is longer than the other, then the additional characters beyond the shorter length of the two are ignored in the comparison. In some embodiments, further comparison of the candidate word with the input sequence may be made. For example, the further comparison may include determining the number of character differences between the candidate word and the input sequence, and determining if any character differences are a result of transposed characters. A score is calculated for the candidate word based on the comparison described above (304). Each character comparison yields a value, and the values are added to yield the score for the candidate word.

[0050] In some embodiments, the score value given for a character comparison is based on the actual characters as opposed to merely whether the characters match. More particularly, the value may be based on whether the character in the candidate word matches the corresponding character in the input sequence exactly and/or whether the character in the candidate word is a keyboard layout neighbor of the corresponding character in the input sequence.
[0051] Optionally, a first "bonus" may be added to the score of the candidate word if the candidate word and the input sequence are different in only one character (306). Similarly, an optional second "bonus" may be added to the score of the candidate word if the candidate word and the input sequence are different in only a pair of transposed adjacent characters (308). Further details regarding candidate word scoring are described below, in relation to FIGs. 7A - 7C.
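Putting blocks 302-308 together, a simplified scoring routine might look like the sketch below. The constants (the match value N, the neighbor and non-neighbor multipliers, and the bonus values) follow the examples discussed later but are otherwise assumptions, and the same-length guard on the bonuses is a simplification.

```python
def score_candidate(candidate, input_seq, neighbors,
                    n=1.0, alpha=0.5, beta=0.25, bonus=0.75):
    """Character-by-character score of a candidate word against the input sequence,
    plus optional bonuses for a single differing character or one transposed pair."""
    score = 0.0
    # Characters beyond the shorter of the two lengths are ignored (value 0).
    for c, i in zip(candidate, input_seq):
        if c == i:
            score += n                        # exact match
        elif c in neighbors.get(i, set()):
            score += alpha * n                # keyboard-layout neighbor
        else:
            score += beta * n                 # neither a match nor a neighbor
    if len(candidate) == len(input_seq):
        diffs = [k for k, (c, i) in enumerate(zip(candidate, input_seq)) if c != i]
        if len(diffs) == 1:
            score += bonus                    # only one character differs
        elif (len(diffs) == 2 and diffs[1] == diffs[0] + 1
              and candidate[diffs[0]] == input_seq[diffs[1]]
              and candidate[diffs[1]] == input_seq[diffs[0]]):
            score += bonus                    # only a transposed adjacent pair differs
    return score
```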
[0052] Attention is now directed to FIG. 4, a flow diagram illustrating a process of selecting and presenting candidate words in accordance with some embodiments. Process flow 400 describes in further detail blocks 210 and 212 (FIG. 2), which involve the selection and presentation of candidate words.
[0053] The candidate words are split into two groups based on their usage frequency rankings within the dictionary (402). A first group includes the candidate words whose usage frequency rankings exceed a predefined threshold. The second group includes the candidate words whose usage frequency rankings do not exceed the threshold. Within each of the two groups, the candidate words are sorted by their candidate word scores.
[0054] There may be candidate words in the second group whose scores are very high because, for example, they match the input sequence exactly or almost exactly. In some embodiments, these high-scoring words may be removed from the second group and added to the first group if their scores exceed the score of the highest scoring candidate word in the first group by a predefined margin (404). In some embodiments, the predefined margin is that the score of the candidate word in the second group must be at least two times the highest candidate word score in the first group.
[0055] One or more of the highest scoring candidate words in the first group are presented to the user (406). It should be appreciated that if candidate words from the second group were moved to the first group as described above, then the candidate words that are presented will include at least one candidate word that was originally in the second group since that candidate word has a higher score than any of the original candidate words in the first group.
[0056] In some embodiments, if block 404 is not performed, either because no candidate word in the second group satisfies the score margin threshold or because the moving of candidate words is not performed at all, the highest scoring candidate word in the second group may nevertheless be presented along with the candidate words from the first group (408). Furthermore, in some embodiments, the input sequence as entered by the user may be presented as a matter of course (410). The user may choose any one of the presented candidate words to replace the input sequence, including choosing the input sequence as entered if the user is satisfied with it.
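The grouping and promotion logic of blocks 402-406 could be sketched as follows. The threshold, the margin, and the handling of an empty first group are assumptions, and the fallbacks of blocks 408-410 (presenting the top second-group word and the raw input sequence) are omitted for brevity.

```python
def select_for_presentation(scores, rankings, freq_threshold, margin=2.0, max_shown=3):
    """Split candidates by usage frequency ranking, promote unusually high-scoring
    low-frequency words, and return the highest-scoring words to present."""
    first = {w: s for w, s in scores.items() if rankings.get(w, 0.0) > freq_threshold}
    second = {w: s for w, s in scores.items() if w not in first}
    if not first:
        return sorted(second, key=second.get, reverse=True)[:max_shown]
    top_first = max(first.values())
    for word in list(second):
        if second[word] >= margin * top_first:   # e.g. at least twice the best first-group score
            first[word] = second.pop(word)
    return sorted(first, key=first.get, reverse=True)[:max_shown]
```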
[0057] Attention is now directed to FIGs. 5A and 5B, which are exemplary layouts of letter keys on a keyboard in accordance with some embodiments. As described above, the prefix strings, based on which candidate words are identified, are generated based on characters in the input sequence and their corresponding neighbor characters on a keyboard layout. Keyboard layouts 502 and 504 are exemplary keyboard layouts. A keyboard layout defines the positions of each key on the keyboard and the alignment of the keys relative to each other. For ease of description, only the letter keys of the layouts 502 and 504 are shown. It should be appreciated, however, that a keyboard layout may also include keys for numbers, punctuation, symbols, and functional keys. In some embodiments, some keys may be overloaded, that is, a key may correspond to multiple characters and/or functions.
[0058] Layouts 502 and 504 are layouts that follow the well-known QWERTY layout. However, the key alignment in layout 502 is different from the key alignment in layout 504. In layout 502, the keys are aligned in rows but not in columns; a key in one row may straddle two keys in an adjacent row. For example, key "T" straddles keys "F" and "G" in layout 502. In layout 504, the keys are aligned in columns as well as in rows. The definition of which keys are the neighbors of a key may be different depending on how the keys are aligned. In layout 502, the neighbors of a particular key may be defined as the keys that are directly adjacent to the particular key or whose peripheries "touch" a periphery of the particular key. For example, the neighbors of key "G" in layout 502 are keys "T," "Y," "F," "H," "V," and "B;" and the neighbors of key "W" are keys "Q," "E," "A," and "S." In layout 504, the neighbors of a particular key may be defined as the keys that are immediately above, below, to the side of, and diagonal of the particular key. For example, the neighbors of key "G" in layout 504 are keys "R," "T," "Y," "F," "H," "C," "V," and "B;" and the neighbors of key "W" are keys "Q," "E," "A," "S," and "D."
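For illustration, the neighbor sets just described could be encoded directly as lookup tables; only the two example keys are shown, and the table form itself is an assumption.

```python
# Neighbor sets for the two example keys under each alignment discussed above.
NEIGHBORS_LAYOUT_502 = {
    "G": {"T", "Y", "F", "H", "V", "B"},
    "W": {"Q", "E", "A", "S"},
}
NEIGHBORS_LAYOUT_504 = {
    "G": {"R", "T", "Y", "F", "H", "C", "V", "B"},
    "W": {"Q", "E", "A", "S", "D"},
}
```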
[0059] It should be appreciated, however, that layouts 502 and 504 are merely exemplary, and that other layouts and key alignments are possible and the same key may have different neighbors in different layouts.
[0060] Attention is now directed to FIG. 6, an exemplary derivation of candidate words based on a text input in accordance with some embodiments. FIG. 6 illustrates an example of the identification of candidate words from an input sequence.
[0061] In FIG. 6, the input sequence 602 is "rheatre." For prefix strings of three characters in length, the first three characters and their corresponding neighbors 604 are identified. Here, the first character is "r" and its neighbors, in accordance with the layout 502, are "e," "d," "f," and "t." The second character is "h," and its neighbors are "y," "u," "g," "j," "b," and "n." The third character is "e," and its neighbors are "w," "s," "d," and "r."
[0062] From the input characters and corresponding neighbors, the character permutations 606 are determined. Each permutation is a character combination where the first character is the first input character or a neighbor thereof, the second character is the second input character or a neighbor thereof, and the third character is the third input character or a neighbor thereof. From these permutations, prefix strings are generated and compared to words in the dictionary. Examples of three-character permutations based on the input sequence 602 include "the," "rus," "rye," and "due." Words in the dictionary that have one of these strings as a prefix are identified as candidate words 608. Examples of candidate words include "theater," "rye," "rusty," "due," "the," and "there." In other embodiments, the character permutations may include four, five, or more characters, rather than three characters.
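Tying the two earlier sketches together, the example of FIG. 6 could be reproduced roughly as follows; the dictionary contents and the exact output are, again, illustrative assumptions.

```python
# Assuming the prefix_strings and find_candidates sketches shown earlier:
words = {"theater": 0.62, "rye": 0.35, "rusty": 0.22, "due": 0.57, "the": 0.98, "there": 0.91}
prefixes = prefix_strings("rheatre")        # includes "the", "rus", "rye", and "due"
print(find_candidates(prefixes, words))     # e.g. ['theater', 'rye', 'rusty', 'due', 'the', 'there']
```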
[0063] Attention is now directed to FIGs. 7A-7C, which are examples of scoring of candidate words in accordance with some embodiments. FIG. 7A shows an input sequence and three possible candidate words that may be identified from permutations of the first three characters of the input sequence. The candidate words are compared to the input sequence character-by-character and scores for the candidate words are tallied.
[0064] In some embodiments, a score tally of a candidate word involves assigning a value for each character comparison and adding the values together. The value that is assigned for a character comparison is based on the result of the comparison. Particularly, the value is based on whether the character in the candidate word, compared to the character in the corresponding position in the input sequence, is an exact match, a neighbor on the keyboard layout, or neither. In some embodiments, the value assigned for an exact match is a predefined value N. If the characters are not an exact match but are neighbors, then the value assigned is a value αN, where α is a constant and α < 1. In some embodiments, α is 0.5. In other words, the value assigned for a neighbor match is a reduction of the value for an exact match.
[0065] In some embodiments, if the character in the candidate word is neither an exact match nor a neighbor of the corresponding character in the input sequence, then the assigned value is βN, where β is a constant and β < α < 1. For example, β may be 0.25. In some other embodiments, β may be a function of the "distance" between the characters on the keyboard layout. That is, β may be a smaller number if the candidate word character is farther away on the keyboard layout from the input sequence character than if the candidate word character is closer on the keyboard layout to the input sequence character without being a neighbor.
[0066] More generally, the value assigned for a character comparison is γN, where N is a predefined value, γ = 1 for an exact match, and γ may vary based on some function of the "distance" on the layout between the character in the candidate word and the corresponding character in the input sequence. For example, γ may be 1 for an exact match, 0.5 for a neighbor, and 0 otherwise. As another example, γ may be 0.5 for a neighbor (a 1-key radius), 0.25 for keys that are two keys away (a 2-key radius), and 0 for keys that are three or more keys away. In some embodiments, N is equal to 1.
[0067] If the candidate word has a length that is longer than the input sequence, or vice versa, then the character positions that are beyond the lesser of the two lengths are ignored or assigned a value of 0.

[0068] The first candidate word shown in FIG. 7A is "theater." Compared to the input sequence of "rheatre," there are exact matches in the second through fifth positions. The characters in the first, sixth, and seventh positions of the candidate word are keyboard layout neighbors of input sequence characters in the corresponding positions. Thus, the score for "theater" in this case is 0.5N + N + N + N + N + 0.5N + 0.5N = 5.5N.
[0069] The second candidate word is "threats." Compared to the input sequence of "rheatre," there is an exact match in the second position. The characters in the first, third, sixth, and seventh positions of the candidate word are keyboard layout neighbors of the input sequence characters in the corresponding positions, and the characters in the fourth and fifth positions of the candidate word are neither exact matches nor neighbors of the input sequence characters in the corresponding positions. Thus, the score for "threats" in this case is 0.5N + N + 0.5N + 0.25N + 0.25N + 0.5N + 0.5N = 3.5N.
[0070] The third candidate word is "there." Compared to the input sequence of "rheatre," there is an exact match in the second and third positions. The character in the first position of the candidate word is a keyboard layout neighbor of the input sequence character in the corresponding position, and the characters in the fourth and fifth positions of the candidate word are neither exact matches nor neighbors of the input sequence characters in the corresponding positions. Furthermore, because the input sequence is two characters longer than the candidate word, the last two characters in the input sequence are ignored in the comparison and are assigned score values of 0. Thus, the score for "there" in this case is 0.5N + N + N + 0.25N + 0.25N = 3N.
[0071] Some candidate words, when compared to the input sequence, may merit a score bonus, examples of which are shown in FIGs. 7B and 7C. In FIG. 7B, the input sequence is "thaeter" and the candidate word is "theater." The score based on the character comparisons alone is 5.5N. However, the only difference between "thaeter" and "theater" is a pair of transposed or swapped characters, namely "ae" in "thaeter" vs. "ea" in "theater." In some embodiments, a first bonus P is added to the score for this fact. In FIG. 7C, the input sequence is "thester" and the candidate word is "theater." The score based on the character comparisons alone is 6.5N. However, the only difference between "thester" and "theater" is a single character, namely "s" in "thester" vs. "a" in "theater." In some embodiments, a second bonus Q is added to the score for this fact. In some embodiments, both P and Q are equal to 0.75.
[0072] It should be appreciated that, in some other embodiments, alternative candidate word scoring and selection schemes other than the ones described above may be used.
[0073] For example, in one alternative scheme, instead of dividing the candidate words into the first and second groups based on usage frequency rankings, the usage frequency rankings may be used as weightings applied to the candidate word scores. That is, the score of a candidate word is multiplied by the usage frequency ranking of the candidate word, and candidate words for presentation are selected based on their weighted scores.
[0074] As another example, another scheme replaces candidate word scoring based on character-by-character comparisons, as described above, with scoring based on the edit distance (also known as the Levenshtein distance) between the input sequence and the candidate word. That is, the score of a candidate word is the edit distance between the candidate word and the input sequence, or a function thereof, and candidate words are selected for presentation based on the edit distance scores. Alternately, the score for each candidate is based on the edit distance multiplied by (or otherwise combined with) the usage frequency ranking of the candidate, and candidate words are selected for presentation based on these scores.
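A standard Levenshtein distance computation, which such an alternative scheme could use, is sketched below; the way the distance is combined with the usage frequency ranking is only one of the possibilities mentioned above, and the function names are assumptions.

```python
def levenshtein(a, b):
    """Edit distance between strings a and b (insertions, deletions, substitutions)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution (0 if equal)
        prev = curr
    return prev[-1]

def edit_distance_score(candidate, input_seq, usage_ranking=1.0):
    """One possible combined score: a smaller edit distance and a higher usage
    ranking yield a larger score. The exact combination is an assumption."""
    return usage_ranking / (1.0 + levenshtein(candidate, input_seq))
```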
[0075] As another example, another scheme uses a graph-matching technique. In this technique, the sequence of individual touch points that a user inputs into the device for a word (e.g., by contacts with a virtual keyboard on the touch screen) form a directed graph. This user-input directed graph is compared against a collection of directed graphs for respective words in a dictionary to generate a list of dictionary words that most closely match the user typing. In some embodiments, the probability that a user-input directed graph matches the directed graph for a dictionary word is calculated as follows:
[0076] Let U1...Un be each point in the user-input directed graph.

[0077] Let D1...Dn be each point in the directed graph of a dictionary word. Points in this directed graph are assigned based on the centroid of the key that inputs the corresponding letter, as represented in the keyboard user interface.
[0078] Let P1...Pn be, for each point in the user-input directed graph, the probability that the letter corresponding to Ux equals the letter corresponding to Dx. In some embodiments, a respective Px is computed by calculating the Euclidean distance between the points Ux and Dx, and applying a factor based on the size of the user interface elements that indicate the keys on the keyboard. A minimum probability may be entered for Px if the graphs for the user word and the dictionary word are different lengths. In one embodiment, the factor (based on the size of the user interface elements that indicate the keys on the keyboard) is a divisor that is equal to, or proportional to, the distance between center points of two horizontally adjacent keys on the keyboard.
[0079] Multiplying the probabilities in P1...Pn together yields G, the probability that a graph for a dictionary word matches the user-input graph. In some embodiments, G is multiplied by F, the frequency that the word occurs in the source language/domain. Furthermore, in some embodiments G is also multiplied by N, a factor calculated by considering one or more words previously typed by the user. For example, in a sentence/passage being typed by a user, "to" is more likely to follow "going," but "ti" is more likely to follow "do re mi fa so la." In some embodiments, G is multiplied by both F and N to yield Ω, the probability that a user-input directed graph matches a dictionary word.
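A rough sketch of this probability calculation appears below. The mapping from Euclidean distance to a per-point probability, the handling of graphs of unequal length, and the final combination with the frequency factor F and the context factor N are all simplifying assumptions; the description above leaves those details open.

```python
import math

def graph_match_probability(touch_points, key_centroids, key_pitch, min_prob=1e-3):
    """Estimate G, the probability that a user-input directed graph matches the
    directed graph of a dictionary word. `key_pitch` stands in for the distance
    between the centers of two horizontally adjacent keys."""
    if len(touch_points) != len(key_centroids):
        return min_prob                               # simplified handling of unequal lengths
    g = 1.0
    for (ux, uy), (dx, dy) in zip(touch_points, key_centroids):
        distance = math.hypot(ux - dx, uy - dy)
        p = max(min_prob, 1.0 - distance / key_pitch)  # assumed distance-to-probability mapping
        g *= p
    return g

def overall_probability(g, word_frequency, context_factor):
    """Omega = G * F * N: the graph match combined with word frequency and context."""
    return g * word_frequency * context_factor
```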
[0080] The collection of dictionary words with the highest probabilities may be presented in a display for user consideration, for example as described in "Method, System, and Graphical User Interface for Providing Word Recommendations" (U.S. Patent Application number to be determined, filed January 5, 2007, attorney docket number 063266-5041), the content of which is hereby incorporated by reference in its entirety. In other cases, the top-ranked word is selected for the user by the device without user intervention.
[0081] In some embodiments, as word recommendations are offered by the portable device and selected by the user, statistics regarding the corrections made are collected. For example, the characters in an input sequence that was replaced by a user-selected candidate word, together with the corresponding characters of that candidate word, may be logged. Over time, the corrections log may be analyzed for patterns that may indicate a pattern of repeated typing errors by the user. If the keyboard is a virtual keyboard on a touch screen of the portable device, the portable device may automatically adjust or recalibrate the contact regions of the keys of the virtual keyboard to compensate for the user's pattern of typing errors. As another example, for a given input sequence, the word selected by the user may be recommended first or given a higher score when the same input sequence is subsequently entered by the user.
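One simple way to collect such correction statistics is sketched below; the data structure and the per-character logging granularity are assumptions.

```python
from collections import Counter, defaultdict

# typed character -> counts of the characters the user actually accepted in its place
correction_log = defaultdict(Counter)

def log_correction(typed_word, accepted_word):
    """Record character substitutions whenever the user replaces the typed sequence
    with a recommended word of the same length."""
    if len(typed_word) != len(accepted_word):
        return
    for typed_char, accepted_char in zip(typed_word, accepted_word):
        if typed_char != accepted_char:
            correction_log[typed_char][accepted_char] += 1

# Over time, correction_log can reveal repeated error patterns (e.g. "s" often
# corrected to "a"), which could inform recalibration of virtual key contact regions.
```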
[0082] The foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated.

Claims

What is claimed is:
1. A computer-implemented method, comprising: receiving a sequence of input characters from a keyboard, wherein the keyboard has a predefined layout of characters with each character in the layout having one or more neighbor characters; generating a set of strings from at least a subset of the sequence of input characters, the set of strings comprising permutations of respective input characters in the subset of the sequence and neighbor characters of the respective input characters on the layout of the keyboard; identifying in a dictionary one or more candidate words, each candidate word having a string in the set of strings as a prefix; scoring the candidate words; selecting a subset of the candidate words based on predefined criteria; and presenting the subset of the candidate words.
2. The method of claim 1, wherein scoring a respective candidate word comprises: comparing a respective character in each character position of the candidate word with a respective character in a corresponding position in the sequence of input characters; and determining a score for the respective candidate word based on the comparing.
3. The method of claim 2, wherein scoring the respective candidate word further comprises increasing the score of the respective candidate word if the respective candidate word, compared to the sequence of input characters, has only one character that is different.
4. The method of claim 2, wherein scoring the respective candidate word further comprises increasing the score of the respective candidate word if the respective candidate word, compared to the sequence of input characters, has only a set of transposed characters that are different.
5. The method of claim 1, wherein the keyboard comprises a virtual keyboard.
6. The method of claim 1, wherein the dictionary comprises a list of words and associated usage frequency rankings.
7. The method of claim 6, wherein the associated usage frequency rankings are tailored to the user based on previous input from the user.
8. The method of claim 6, wherein selecting a subset of the candidate words comprises organizing the candidate words into a first group and a second group, the first group comprising the candidate words having respective usage frequency rankings that exceed a threshold, the second group comprising the candidate words having respective usage frequency rankings that do not exceed the threshold; and wherein presenting the subset of the candidate words comprises presenting one or more of the candidate words of the first group in an order based on their scores.
9. The method of claim 8, wherein selecting a subset of the candidate words further comprises adding a candidate word of the second group into the first group if the candidate word of the second group has a score that exceeds a score of the highest scoring candidate word of the first group by a predefined margin.
10. The method of claim 8, wherein presenting the subset of the candidate words further comprises presenting a highest scoring candidate word of the second group.
11. The method of claim 1, further comprising presenting the sequence of input characters as a candidate word.
12. The method of claim 1, wherein the keyboard comprises a physical keyboard.
13. A computer program product for use in conjunction with a portable communications device, the computer program product comprising a computer readable storage medium and a computer program mechanism embedded therein, the computer program mechanism comprising: instructions for receiving a sequence of input characters from a keyboard, wherein the keyboard has a predefined layout of characters with each character in the layout having one or more neighbor characters; instructions for generating a set of strings from at least a subset of the sequence of input characters, the set of strings comprising permutations of respective input characters in the subset of the sequence and neighbor characters of the respective input characters on the layout of the keyboard; instructions for identifying in a dictionary one or more candidate words, each candidate word having a string in the set of strings as a prefix; instructions for scoring the candidate words; instructions for selecting a subset of the candidate words based on predefined criteria; and instructions for presenting the subset of the candidate words.
14. A portable communications device, comprising: a display; a keyboard, the keyboard having a predefined layout of characters with each character in the layout having one or more neighbor characters; one or more processors; memory; and a program, wherein the program is stored in the memory and configured to be executed by the one or more processors, the program including: instructions for receiving a sequence of input characters from the keyboard; instructions for generating a set of strings from at least a subset of the sequence of input characters, the set of strings comprising permutations of respective input characters in the subset of the sequence and neighbor characters of the respective input characters on the layout of the keyboard; instructions for identifying in a dictionary one or more candidate words, each candidate word having a string in the set of strings as a prefix; instructions for scoring the candidate words; instructions for selecting a subset of the candidate words based on predefined criteria; and instructions for presenting the subset of the candidate words.
15. A portable communications device, comprising: display means; input means, the input means having a predefined layout of characters, each character in the layout having one or more neighbor characters; one or more processor means; memory means; and a program mechanism, wherein the program mechanism is stored in the memory means and configured to be executed by the one or more processor means, the program mechanism including: instructions for receiving a sequence of input characters from the input means; instructions for generating a set of strings from at least a subset of the sequence of input characters, the set of strings comprising permutations of respective input characters in the subset of the sequence and neighbor characters of the respective input characters on the layout of the input means; instructions for identifying in a dictionary one or more candidate words, each candidate word having a string in the set of strings as a prefix; instructions for scoring the candidate words; instructions for selecting a subset of the candidate words based on predefined criteria; and instructions for presenting the subset of the candidate words.
16. A computer-implemented method, comprising: receiving a sequence of individual touch points input by a user that form a user-input directed graph; comparing the user-input directed graph to respective directed graphs for words in a dictionary; generating a list of candidate words based at least in part on the comparing step; and presenting at least some of the candidate words to the user.
17. The method of claim 16, wherein the sequence of individual touch points is input by the user on a touch screen of a portable electronic device.
18. The method of claim 16, wherein generating a list of candidate words is based at least in part on the usage frequency of the candidate words.
19. The method of claim 16, wherein generating a list of candidate words is based at least in part on one or more words previously typed by the user.
20. The method of claim 16, wherein the dictionary comprises a list of words and associated usage frequency rankings.
21. The method of claim 20, wherein the associated usage frequency rankings are tailored to the user based on previous input from the user.
22. A computer program product for use in conjunction with a portable communications device, the computer program product comprising a computer readable storage medium and a computer program mechanism embedded therein, the computer program mechanism comprising: instructions for receiving a sequence of individual touch points input by a user that form a user-input directed graph; instructions for comparing the user-input directed graph to respective directed graphs for words in a dictionary; instructions for generating a list of candidate words based at least in part on the comparing step; and instructions for presenting at least some of the candidate words to the user.
23. A portable communications device, comprising: a display; a keyboard; one or more processors; memory; and a program, wherein the program is stored in the memory and configured to be executed by the one or more processors, the program including: instructions for receiving a sequence of individual touch points input by a user that form a user-input directed graph; instructions for comparing the user-input directed graph to respective directed graphs for words in a dictionary; instructions for generating a list of candidate words based at least in part on the comparing step; and instructions for presenting at least some of the candidate words to the user.
24. A portable communications device, comprising: means for receiving a sequence of individual touch points input by a user that form a user-input directed graph; means for comparing the user-input directed graph to respective directed graphs for words in a dictionary; means for generating a list of candidate words based at least in part on the comparing step; and means for presenting at least some of the candidate words to the user.
PCT/US2007/088872 2007-01-05 2007-12-27 Method and system for providing word recommendations for text input WO2008085736A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP07869922A EP2100210A1 (en) 2007-01-05 2007-12-27 Method and system for providing word recommendations for text input
AU2007342164A AU2007342164A1 (en) 2007-01-05 2007-12-27 Method and system for providing word recommendations for text input

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US11/620,641 2007-01-05
US11/620,641 US7957955B2 (en) 2007-01-05 2007-01-05 Method and system for providing word recommendations for text input

Publications (1)

Publication Number Publication Date
WO2008085736A1 true WO2008085736A1 (en) 2008-07-17

Family

ID=39052589

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2007/088872 WO2008085736A1 (en) 2007-01-05 2007-12-27 Method and system for providing word recommendations for text input

Country Status (8)

Country Link
US (1) US7957955B2 (en)
EP (1) EP2100210A1 (en)
CN (1) CN101641661A (en)
AU (2) AU2007342164A1 (en)
DE (1) DE202008000265U1 (en)
HK (1) HK1109015A2 (en)
TW (1) TW200842660A (en)
WO (1) WO2008085736A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102884518A (en) * 2010-02-01 2013-01-16 金格软件有限公司 Automatic context sensitive language correction using an internet corpus particularly for small keyboard devices
WO2014102041A3 (en) * 2012-12-28 2014-09-12 Volkswagen Aktiengesellschaft Method for inputting and identifying a character string
US9996524B1 (en) 2017-01-30 2018-06-12 International Business Machines Corporation Text prediction using multiple devices
US10558749B2 (en) 2017-01-30 2020-02-11 International Business Machines Corporation Text prediction using captured image from an image capture device

Families Citing this family (262)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7030863B2 (en) 2000-05-26 2006-04-18 America Online, Incorporated Virtual keyboard system with automatic correction
US7286115B2 (en) 2000-05-26 2007-10-23 Tegic Communications, Inc. Directional input system with automatic correction
US8645137B2 (en) 2000-03-16 2014-02-04 Apple Inc. Fast, language-independent method for user authentication by voice
US8677377B2 (en) 2005-09-08 2014-03-18 Apple Inc. Method and apparatus for building an intelligent automated assistant
US7694231B2 (en) * 2006-01-05 2010-04-06 Apple Inc. Keyboards for portable electronic devices
EP2024863B1 (en) 2006-05-07 2018-01-10 Varcode Ltd. A system and method for improved quality management in a product logistic chain
US7562811B2 (en) 2007-01-18 2009-07-21 Varcode Ltd. System and method for improved quality management in a product logistic chain
US8564544B2 (en) 2006-09-06 2013-10-22 Apple Inc. Touch screen device, method, and graphical user interface for customizing display of content category icons
US9318108B2 (en) 2010-01-18 2016-04-19 Apple Inc. Intelligent automated assistant
US7856605B2 (en) 2006-10-26 2010-12-21 Apple Inc. Method, system, and graphical user interface for positioning an insertion marker in a touch screen display
US8570278B2 (en) 2006-10-26 2013-10-29 Apple Inc. Portable multifunction device, method, and graphical user interface for adjusting an insertion point marker
US20080126075A1 (en) * 2006-11-27 2008-05-29 Sony Ericsson Mobile Communications Ab Input prediction
US8074172B2 (en) 2007-01-05 2011-12-06 Apple Inc. Method, system, and graphical user interface for providing word recommendations
US8201087B2 (en) * 2007-02-01 2012-06-12 Tegic Communications, Inc. Spell-check for a keyboard system with automatic correction
US8977255B2 (en) 2007-04-03 2015-03-10 Apple Inc. Method and system for operating a multi-function portable electronic device using voice-activation
US8528808B2 (en) 2007-05-06 2013-09-10 Varcode Ltd. System and method for quality management utilizing barcode indicators
US8065624B2 (en) * 2007-06-28 2011-11-22 Panasonic Corporation Virtual keypad systems and methods
US8635251B1 (en) * 2007-06-29 2014-01-21 Paul Sui-Yuen Chan Search and computing engine
CA2694327A1 (en) 2007-08-01 2009-02-05 Ginger Software, Inc. Automatic context sensitive language correction and enhancement using an internet corpus
JP4787803B2 (en) * 2007-08-31 2011-10-05 株式会社リコー Information processing apparatus, information processing method, and program
US8667412B2 (en) * 2007-09-06 2014-03-04 Google Inc. Dynamic virtual input device configuration
CN100592249C (en) * 2007-09-21 2010-02-24 上海汉翔信息技术有限公司 Method for quickly inputting related term
US8010895B2 (en) * 2007-10-24 2011-08-30 E-Lead Electronic Co., Ltd. Method for correcting typing errors according to character layout positions on a keyboard
EP2218055B1 (en) 2007-11-14 2014-07-16 Varcode Ltd. A system and method for quality management utilizing barcode indicators
JP2009146065A (en) * 2007-12-12 2009-07-02 Toshiba Corp Keyboard, input method, and information processor
US10002189B2 (en) 2007-12-20 2018-06-19 Apple Inc. Method and apparatus for searching using an active ontology
US9330720B2 (en) 2008-01-03 2016-05-03 Apple Inc. Methods and apparatus for altering audio output signals
US8232973B2 (en) 2008-01-09 2012-07-31 Apple Inc. Method, device, and graphical user interface providing word recommendations for text input
US9092134B2 (en) * 2008-02-04 2015-07-28 Nokia Technologies Oy User touch display interface providing an expanded selection area for a user selectable object
US8996376B2 (en) 2008-04-05 2015-03-31 Apple Inc. Intelligent text-to-speech conversion
US10496753B2 (en) 2010-01-18 2019-12-03 Apple Inc. Automatically adapting user interfaces for hands-free interaction
US11704526B2 (en) 2008-06-10 2023-07-18 Varcode Ltd. Barcoded indicators for quality management
ATE501478T1 (en) * 2008-06-11 2011-03-15 Exb Asset Man Gmbh APPARATUS AND METHOD WITH IMPROVED TEXT ENTRY MECHANISM
KR101556522B1 (en) * 2008-06-27 2015-10-01 엘지전자 주식회사 Mobile terminal for providing haptic effect and control method thereof
US20100030549A1 (en) 2008-07-31 2010-02-04 Lee Michael M Mobile device having human language translation capability with positional feedback
US8589149B2 (en) 2008-08-05 2013-11-19 Nuance Communications, Inc. Probability-based approach to recognition of user-entered data
KR101469619B1 (en) * 2008-08-14 2014-12-08 삼성전자주식회사 Movement Control System For Display Unit And Movement Control Method using the same
US9317200B2 (en) * 2008-08-28 2016-04-19 Kyocera Corporation Display apparatus and display method thereof
US8676904B2 (en) 2008-10-02 2014-03-18 Apple Inc. Electronic devices with voice command and contextual data processing capabilities
US8898584B2 (en) * 2008-10-07 2014-11-25 Blackberry Limited Dynamic alteration of input mode on touch screen device
CN101739167A (en) * 2008-11-13 2010-06-16 索尼爱立信移动通讯有限公司 System and method for inputting symbols in touch input device
EP2350779A4 (en) * 2008-11-25 2018-01-10 Jeffrey R. Spetalnick Methods and systems for improved data input, compression, recognition, correction, and translation through frequency-based language analysis
US9959870B2 (en) 2008-12-11 2018-05-01 Apple Inc. Speech recognition involving a mobile device
FR2940693B1 (en) * 2008-12-30 2016-12-02 Thales Sa OPTIMIZED METHOD AND SYSTEM FOR MANAGING CLEAN NAMES FOR OPTIMIZING DATABASE MANAGEMENT AND INTERROGATION
US8669941B2 (en) * 2009-01-05 2014-03-11 Nuance Communications, Inc. Method and apparatus for text entry
US8370736B2 (en) 2009-03-16 2013-02-05 Apple Inc. Methods and graphical user interfaces for editing on a multifunction device with a touch screen display
KR20120016060A (en) * 2009-03-20 2012-02-22 구글 인코포레이티드 Interaction with ime computing device
US20100251105A1 (en) * 2009-03-31 2010-09-30 Lenovo (Singapore) Pte, Ltd. Method, apparatus, and system for modifying substitution costs
US10241644B2 (en) 2011-06-03 2019-03-26 Apple Inc. Actionable reminder entries
US9858925B2 (en) 2009-06-05 2018-01-02 Apple Inc. Using context information to facilitate processing of commands in a virtual assistant
US20120311585A1 (en) 2011-06-03 2012-12-06 Apple Inc. Organizing task items that represent tasks to perform
US10241752B2 (en) 2011-09-30 2019-03-26 Apple Inc. Interface for a virtual digital assistant
US20100325136A1 (en) * 2009-06-23 2010-12-23 Microsoft Corporation Error tolerant autocompletion
US9431006B2 (en) 2009-07-02 2016-08-30 Apple Inc. Methods and apparatuses for automatic speech recognition
US8516367B2 (en) * 2009-09-29 2013-08-20 Verizon Patent And Licensing Inc. Proximity weighted predictive key entry
US10705794B2 (en) 2010-01-18 2020-07-07 Apple Inc. Automatically adapting user interfaces for hands-free interaction
US10553209B2 (en) 2010-01-18 2020-02-04 Apple Inc. Systems and methods for hands-free notification summaries
US10679605B2 (en) 2010-01-18 2020-06-09 Apple Inc. Hands-free list-reading by intelligent automated assistant
US10276170B2 (en) 2010-01-18 2019-04-30 Apple Inc. Intelligent automated assistant
US20110179353A1 (en) * 2010-01-19 2011-07-21 Research In Motion Limited Mobile Electronic Device and Associated Method Providing Proposed Spelling Corrections Based Upon a Location of Cursor At or Adjacent a Character of a Text Entry
US20110184723A1 (en) * 2010-01-25 2011-07-28 Microsoft Corporation Phonetic suggestion engine
US8682667B2 (en) 2010-02-25 2014-03-25 Apple Inc. User profiling for selecting user specific voice input processing information
US20110219299A1 (en) * 2010-03-07 2011-09-08 DSNR Labs Ltd. Method and system of providing completion suggestion to a partial linguistic element
CN101788855B (en) * 2010-03-09 2013-04-17 华为终端有限公司 Method, device and communication terminal for obtaining user input information
US10013077B2 (en) * 2010-07-19 2018-07-03 DISH Technologies L.L.C. System and method for data item filtering based on character sequence entry
KR20120009200A (en) * 2010-07-23 2012-02-01 삼성전자주식회사 Method and apparatus for inputting character in a portable terminal
US10664454B2 (en) * 2010-07-30 2020-05-26 Wai-Lin Maw Fill in the blanks word completion system
US9122318B2 (en) 2010-09-15 2015-09-01 Jeffrey R. Spetalnick Methods of and systems for reducing keyboard data entry errors
CN102455786B (en) * 2010-10-25 2014-09-03 三星电子(中国)研发中心 System and method for optimizing Chinese sentence input method
JP5748118B2 (en) * 2010-12-01 2015-07-15 インターナショナル・ビジネス・マシーンズ・コーポレーションInternational Business Machines Corporation Document creation support method, document creation support device, and document creation support program
US20120146955A1 (en) * 2010-12-10 2012-06-14 Research In Motion Limited Systems and methods for input into a portable electronic device
US10762293B2 (en) 2010-12-22 2020-09-01 Apple Inc. Using parts-of-speech tagging and named entity recognition for spelling correction
EP2671136A4 (en) * 2011-02-04 2017-12-13 Nuance Communications, Inc. Correcting typing mistake based on probabilities of intended contact for non-contacted keys
KR101753625B1 (en) * 2011-03-08 2017-07-20 삼성전자주식회사 The method for preventing incorrect input in potable terminal and device thereof
US20120239381A1 (en) 2011-03-17 2012-09-20 Sap Ag Semantic phrase suggestion engine
US9262612B2 (en) 2011-03-21 2016-02-16 Apple Inc. Device access using voice authentication
JP2013127770A (en) * 2011-05-03 2013-06-27 Kotatsu Kokusai Denshi Kofun Yugenkoshi Handheld electronic equipment and method for performing access to bookmark
US8719695B2 (en) 2011-05-31 2014-05-06 Apple Inc. Devices, methods, and graphical user interfaces for document manipulation
US9471560B2 (en) * 2011-06-03 2016-10-18 Apple Inc. Autocorrecting language input for virtual keyboards
US10057736B2 (en) 2011-06-03 2018-08-21 Apple Inc. Active transport based notifications
US20120324391A1 (en) * 2011-06-16 2012-12-20 Microsoft Corporation Predictive word completion
US8935230B2 (en) 2011-08-25 2015-01-13 Sap Se Self-learning semantic search engine
US8994660B2 (en) 2011-08-29 2015-03-31 Apple Inc. Text correction processing
US9348479B2 (en) 2011-12-08 2016-05-24 Microsoft Technology Licensing, Llc Sentiment aware user interface customization
US9378290B2 (en) 2011-12-20 2016-06-28 Microsoft Technology Licensing, Llc Scenario-adaptive input method editor
US9557781B2 (en) 2012-01-05 2017-01-31 Sony Corporation Adjusting coordinates of touch input
WO2013103344A1 (en) * 2012-01-05 2013-07-11 Sony Ericsson Mobile Communications Ab Adjusting coordinates of touch input
US9330083B2 (en) * 2012-02-14 2016-05-03 Facebook, Inc. Creating customized user dictionary
US9330082B2 (en) * 2012-02-14 2016-05-03 Facebook, Inc. User experience with customized user dictionary
US10134385B2 (en) 2012-03-02 2018-11-20 Apple Inc. Systems and methods for name pronunciation
US9483461B2 (en) 2012-03-06 2016-11-01 Apple Inc. Handling speech synthesis of content for multiple languages
EP2662752B1 (en) * 2012-05-11 2017-09-13 BlackBerry Limited Apparatus and method for character entry in a portable electronic device
GB2507014B (en) * 2012-05-11 2020-08-05 Blackberry Ltd Detection of spacebar adjacent character entry
US8884881B2 (en) * 2012-05-11 2014-11-11 Blackberry Limited Portable electronic device and method of controlling same
US9280610B2 (en) 2012-05-14 2016-03-08 Apple Inc. Crowd sourcing information to fulfill user requests
US10296581B2 (en) 2012-06-06 2019-05-21 Apple Inc. Multi-word autocorrection
US9721563B2 (en) 2012-06-08 2017-08-01 Apple Inc. Name recognition system
CN110488991A (en) 2012-06-25 2019-11-22 微软技术许可有限责任公司 Input Method Editor application platform
US20130346904A1 (en) * 2012-06-26 2013-12-26 International Business Machines Corporation Targeted key press zones on an interactive display
WO2014000267A1 (en) * 2012-06-29 2014-01-03 Microsoft Corporation Cross-lingual input method editor
US9495129B2 (en) 2012-06-29 2016-11-15 Apple Inc. Device, method, and user interface for voice-activated navigation and browsing of a document
US9298274B2 (en) * 2012-07-20 2016-03-29 Microsoft Technology Licensing, Llc String predictions from buffer
US8959109B2 (en) 2012-08-06 2015-02-17 Microsoft Corporation Business intelligent in-document suggestions
EP2891078A4 (en) 2012-08-30 2016-03-23 Microsoft Technology Licensing Llc Feature-based candidate selection
US9576574B2 (en) 2012-09-10 2017-02-21 Apple Inc. Context-sensitive handling of interruptions by intelligent digital assistant
US9547647B2 (en) 2012-09-19 2017-01-17 Apple Inc. Voice-based media searching
US8782549B2 (en) 2012-10-05 2014-07-15 Google Inc. Incremental feature-based gesture-keyboard decoding
US9021380B2 (en) 2012-10-05 2015-04-28 Google Inc. Incremental multi-touch gesture recognition
CN107479725B (en) * 2012-10-15 2021-07-16 联想(北京)有限公司 Character input method and device, virtual keyboard, electronic equipment and storage medium
US8850350B2 (en) 2012-10-16 2014-09-30 Google Inc. Partial gesture text entry
US8701032B1 (en) 2012-10-16 2014-04-15 Google Inc. Incremental multi-word recognition
US8843845B2 (en) 2012-10-16 2014-09-23 Google Inc. Multi-gesture text input prediction
KR101370834B1 (en) * 2012-10-18 2014-03-07 삼성전자주식회사 Display apparatus and method for inputting characters thereof
US8819574B2 (en) 2012-10-22 2014-08-26 Google Inc. Space prediction for text input
US8807422B2 (en) 2012-10-22 2014-08-19 Varcode Ltd. Tamper-proof quality management barcode indicators
KR102105101B1 (en) 2012-11-07 2020-04-27 삼성전자주식회사 Display apparatus and Method for correcting character thereof
US20140198047A1 (en) * 2013-01-14 2014-07-17 Nuance Communications, Inc. Reducing error rates for touch based keyboards
US8832589B2 (en) 2013-01-15 2014-09-09 Google Inc. Touch keyboard using language and spatial models
KR20230137475A (en) 2013-02-07 2023-10-04 애플 인크. Voice trigger for a digital assistant
US9368114B2 (en) 2013-03-14 2016-06-14 Apple Inc. Context-sensitive handling of interruptions
KR102078785B1 (en) 2013-03-15 2020-02-19 구글 엘엘씨 Virtual keyboard input for international languages
US9922642B2 (en) 2013-03-15 2018-03-20 Apple Inc. Training an at least partial voice command system
WO2014144579A1 (en) 2013-03-15 2014-09-18 Apple Inc. System and method for updating an adaptive speech recognition model
CN104102625B (en) * 2013-04-15 2017-07-04 佳能株式会社 Method and apparatus for improving spell checking by applying keyboard layout information
US9672818B2 (en) 2013-04-18 2017-06-06 Nuance Communications, Inc. Updating population language models based on changes made by user clusters
US9081500B2 (en) 2013-05-03 2015-07-14 Google Inc. Alternative hypothesis error correction for gesture typing
US20140351760A1 (en) * 2013-05-24 2014-11-27 Google Inc. Order-independent text input
WO2014197334A2 (en) 2013-06-07 2014-12-11 Apple Inc. System and method for user-specified pronunciation of words for speech synthesis and recognition
WO2014197336A1 (en) 2013-06-07 2014-12-11 Apple Inc. System and method for detecting errors in interactions with a voice-based digital assistant
US9582608B2 (en) 2013-06-07 2017-02-28 Apple Inc. Unified ranking with entropy-weighted information for phrase-based semantic auto-completion
WO2014197335A1 (en) 2013-06-08 2014-12-11 Apple Inc. Interpreting and acting upon commands that involve sharing information with remote devices
US10176167B2 (en) 2013-06-09 2019-01-08 Apple Inc. System and method for inferring user intent from speech inputs
EP3937002A1 (en) 2013-06-09 2022-01-12 Apple Inc. Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant
KR101809808B1 (en) 2013-06-13 2017-12-15 애플 인크. System and method for emergency calls initiated by voice command
JP6163266B2 (en) 2013-08-06 2017-07-12 アップル インコーポレイテッド Automatic activation of smart responses based on activation from remote devices
WO2015018055A1 (en) 2013-08-09 2015-02-12 Microsoft Corporation Input method editor providing language assistance
KR102157264B1 (en) 2013-10-30 2020-09-17 삼성전자주식회사 Display apparatus and UI providing method thereof
US10296160B2 (en) 2013-12-06 2019-05-21 Apple Inc. Method for extracting salient dialog usage from live data
US20150169537A1 (en) * 2013-12-13 2015-06-18 Nuance Communications, Inc. Using statistical language models to improve text input
WO2015093651A1 (en) * 2013-12-19 2015-06-25 Twinword Inc. Method and system for managing a wordgraph
KR20150081181A (en) * 2014-01-03 2015-07-13 삼성전자주식회사 Display apparatus and Method for providing recommendation characters thereof
US9620105B2 (en) 2014-05-15 2017-04-11 Apple Inc. Analyzing audio input for efficient speech and music recognition
US10592095B2 (en) 2014-05-23 2020-03-17 Apple Inc. Instantaneous speaking of content on touch devices
US9659109B2 (en) 2014-05-27 2017-05-23 Wal-Mart Stores, Inc. System and method for query auto-completion using a data structure with trie and ternary query nodes
US9502031B2 (en) 2014-05-27 2016-11-22 Apple Inc. Method for supporting dynamic grammars in WFST-based ASR
EP3480811A1 (en) 2014-05-30 2019-05-08 Apple Inc. Multi-command single utterance input method
US9715875B2 (en) 2014-05-30 2017-07-25 Apple Inc. Reducing the need for manual start/end-pointing and trigger phrases
US9842101B2 (en) 2014-05-30 2017-12-12 Apple Inc. Predictive conversion of language input
US9633004B2 (en) 2014-05-30 2017-04-25 Apple Inc. Better resolution when referencing to concepts
US9760559B2 (en) 2014-05-30 2017-09-12 Apple Inc. Predictive text input
US10289433B2 (en) 2014-05-30 2019-05-14 Apple Inc. Domain specific language for encoding assistant dialog
US10078631B2 (en) 2014-05-30 2018-09-18 Apple Inc. Entropy-guided text prediction using combined word and character n-gram language models
US9734193B2 (en) 2014-05-30 2017-08-15 Apple Inc. Determining domain salience ranking from ambiguous words in natural speech
US9430463B2 (en) 2014-05-30 2016-08-30 Apple Inc. Exemplar-based natural language processing
US9785630B2 (en) 2014-05-30 2017-10-10 Apple Inc. Text prediction using combined word N-gram and unigram language models
US10204096B2 (en) 2014-05-30 2019-02-12 Apple Inc. Device, method, and graphical user interface for a predictive keyboard
US10170123B2 (en) 2014-05-30 2019-01-01 Apple Inc. Intelligent assistant for home automation
US9338493B2 (en) 2014-06-30 2016-05-10 Apple Inc. Intelligent automated assistant for TV user interactions
US10659851B2 (en) 2014-06-30 2020-05-19 Apple Inc. Real-time digital assistant knowledge updates
US9377871B2 (en) 2014-08-01 2016-06-28 Nuance Communications, Inc. System and methods for determining keyboard input in the presence of multiple contact points
US10446141B2 (en) 2014-08-28 2019-10-15 Apple Inc. Automatic speech recognition based on user feedback
US9818400B2 (en) 2014-09-11 2017-11-14 Apple Inc. Method and apparatus for discovering trending terms in speech requests
US10789041B2 (en) 2014-09-12 2020-09-29 Apple Inc. Dynamic thresholds for always listening speech trigger
US9606986B2 (en) 2014-09-29 2017-03-28 Apple Inc. Integrated word N-gram and class M-gram language models
US9646609B2 (en) 2014-09-30 2017-05-09 Apple Inc. Caching apparatus for serving phonetic pronunciations
US10074360B2 (en) 2014-09-30 2018-09-11 Apple Inc. Providing an indication of the suitability of speech recognition
US10127911B2 (en) 2014-09-30 2018-11-13 Apple Inc. Speaker identification and unsupervised speaker adaptation techniques
US9886432B2 (en) 2014-09-30 2018-02-06 Apple Inc. Parsimonious handling of word inflection via categorical stem + suffix N-gram language models
US9668121B2 (en) 2014-09-30 2017-05-30 Apple Inc. Social reminders
US10552013B2 (en) 2014-12-02 2020-02-04 Apple Inc. Data detection
US9711141B2 (en) 2014-12-09 2017-07-18 Apple Inc. Disambiguating heteronyms in speech synthesis
CN104571587B (en) * 2014-12-30 2018-06-26 北京奇虎科技有限公司 Method and apparatus for screening input-method candidates before they are committed to the screen
GB2535439A (en) * 2015-01-06 2016-08-24 What3Words Ltd A method for suggesting candidate words as replacements for an input string received at an electronic device
GB2549240A (en) * 2015-01-06 2017-10-18 What3Words Ltd A method for suggesting one or more multi-word candidates based on an input string received at an electronic device
US9865280B2 (en) 2015-03-06 2018-01-09 Apple Inc. Structured dictation using intelligent automated assistants
US10152299B2 (en) 2015-03-06 2018-12-11 Apple Inc. Reducing response latency of intelligent automated assistants
US10567477B2 (en) 2015-03-08 2020-02-18 Apple Inc. Virtual assistant continuity
US9721566B2 (en) 2015-03-08 2017-08-01 Apple Inc. Competing devices responding to voice triggers
US9886953B2 (en) 2015-03-08 2018-02-06 Apple Inc. Virtual assistant activation
US9899019B2 (en) 2015-03-18 2018-02-20 Apple Inc. Systems and methods for structured stem and suffix language models
US9842105B2 (en) 2015-04-16 2017-12-12 Apple Inc. Parsimonious continuous-space phrase representations for natural language processing
EP3298367B1 (en) 2015-05-18 2020-04-29 Varcode Ltd. Thermochromic ink indicia for activatable quality labels
US10083688B2 (en) 2015-05-27 2018-09-25 Apple Inc. Device voice control for selecting a displayed affordance
US10127220B2 (en) 2015-06-04 2018-11-13 Apple Inc. Language identification from short strings
US9578173B2 (en) 2015-06-05 2017-02-21 Apple Inc. Virtual assistant aided communication with 3rd party service in a communication session
US10101822B2 (en) 2015-06-05 2018-10-16 Apple Inc. Language input correction
US10255907B2 (en) 2015-06-07 2019-04-09 Apple Inc. Automatic accent detection using acoustic models
US10186254B2 (en) 2015-06-07 2019-01-22 Apple Inc. Context-based endpoint detection
US11025565B2 (en) 2015-06-07 2021-06-01 Apple Inc. Personalized prediction of responses for instant messaging
CN105100853B (en) * 2015-06-30 2019-02-22 北京奇艺世纪科技有限公司 Method and device for arranging characters on a virtual keyboard
JP6898298B2 (en) 2015-07-07 2021-07-07 バーコード リミティド Electronic quality display index
US10747498B2 (en) 2015-09-08 2020-08-18 Apple Inc. Zero latency digital assistant
US10671428B2 (en) 2015-09-08 2020-06-02 Apple Inc. Distributed personal assistant
US9697820B2 (en) 2015-09-24 2017-07-04 Apple Inc. Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks
US10366158B2 (en) 2015-09-29 2019-07-30 Apple Inc. Efficient word encoding for recurrent neural network language models
US11010550B2 (en) 2015-09-29 2021-05-18 Apple Inc. Unified language modeling framework for word prediction, auto-completion and auto-correction
US11587559B2 (en) 2015-09-30 2023-02-21 Apple Inc. Intelligent device identification
US10691473B2 (en) 2015-11-06 2020-06-23 Apple Inc. Intelligent automated assistant in a messaging environment
US10049668B2 (en) 2015-12-02 2018-08-14 Apple Inc. Applying neural network language models to weighted finite state transducers for automatic speech recognition
US10223066B2 (en) 2015-12-23 2019-03-05 Apple Inc. Proactive assistance based on dialog communication between devices
US10446143B2 (en) 2016-03-14 2019-10-15 Apple Inc. Identification of voice inputs providing credentials
US9934775B2 (en) 2016-05-26 2018-04-03 Apple Inc. Unit-selection text-to-speech synthesis based on predicted concatenation parameters
US9972304B2 (en) 2016-06-03 2018-05-15 Apple Inc. Privacy preserving distributed evaluation framework for embedded personalized systems
US10249300B2 (en) 2016-06-06 2019-04-02 Apple Inc. Intelligent list reading
US10049663B2 (en) 2016-06-08 2018-08-14 Apple, Inc. Intelligent automated assistant for media exploration
DK179309B1 (en) 2016-06-09 2018-04-23 Apple Inc Intelligent automated assistant in a home environment
US10067938B2 (en) 2016-06-10 2018-09-04 Apple Inc. Multilingual word prediction
US10490187B2 (en) 2016-06-10 2019-11-26 Apple Inc. Digital assistant providing automated status report
US10586535B2 (en) 2016-06-10 2020-03-10 Apple Inc. Intelligent digital assistant in a multi-tasking environment
US10509862B2 (en) 2016-06-10 2019-12-17 Apple Inc. Dynamic phrase expansion of language input
US10192552B2 (en) 2016-06-10 2019-01-29 Apple Inc. Digital assistant providing whispered speech
DK179415B1 (en) 2016-06-11 2018-06-14 Apple Inc Intelligent device arbitration and control
DK201670540A1 (en) 2016-06-11 2018-01-08 Apple Inc Application integration with a digital assistant
DK179343B1 (en) 2016-06-11 2018-05-14 Apple Inc Intelligent task discovery
DK179049B1 (en) 2016-06-11 2017-09-18 Apple Inc Data driven natural language event detection and classification
US10474753B2 (en) 2016-09-07 2019-11-12 Apple Inc. Language identification using recurrent neural networks
CN106468960A (en) * 2016-09-07 2017-03-01 北京新美互通科技有限公司 Method and system for ranking input-method candidates
US10043516B2 (en) 2016-09-23 2018-08-07 Apple Inc. Intelligent automated assistant
US10372814B2 (en) 2016-10-18 2019-08-06 International Business Machines Corporation Methods and system for fast, adaptive correction of misspells
US10579729B2 (en) 2016-10-18 2020-03-03 International Business Machines Corporation Methods and system for fast, adaptive correction of misspells
US11281993B2 (en) 2016-12-05 2022-03-22 Apple Inc. Model and ensemble compression for metric learning
US10593346B2 (en) 2016-12-22 2020-03-17 Apple Inc. Rank-reduced token representation for automatic speech recognition
US11204787B2 (en) 2017-01-09 2021-12-21 Apple Inc. Application integration with a digital assistant
US10417266B2 (en) 2017-05-09 2019-09-17 Apple Inc. Context-aware ranking of intelligent response suggestions
DK201770383A1 (en) 2017-05-09 2018-12-14 Apple Inc. User interface for correcting recognition errors
US10395654B2 (en) 2017-05-11 2019-08-27 Apple Inc. Text normalization based on a data-driven learning network
US10726832B2 (en) 2017-05-11 2020-07-28 Apple Inc. Maintaining privacy of personal information
DK201770439A1 (en) 2017-05-11 2018-12-13 Apple Inc. Offline personal assistant
DK179745B1 (en) 2017-05-12 2019-05-01 Apple Inc. SYNCHRONIZATION AND TASK DELEGATION OF A DIGITAL ASSISTANT
US11301477B2 (en) 2017-05-12 2022-04-12 Apple Inc. Feedback analysis of a digital assistant
DK179496B1 (en) 2017-05-12 2019-01-15 Apple Inc. USER-SPECIFIC Acoustic Models
DK201770428A1 (en) 2017-05-12 2019-02-18 Apple Inc. Low-latency intelligent automated assistant
DK201770431A1 (en) 2017-05-15 2018-12-20 Apple Inc. Optimizing dialogue policy decisions for digital assistants using implicit feedback
DK201770432A1 (en) 2017-05-15 2018-12-21 Apple Inc. Hierarchical belief states for digital assistants
US20180336275A1 (en) 2017-05-16 2018-11-22 Apple Inc. Intelligent automated assistant for media exploration
US10403278B2 (en) 2017-05-16 2019-09-03 Apple Inc. Methods and systems for phonetic matching in digital assistant services
US10311144B2 (en) 2017-05-16 2019-06-04 Apple Inc. Emoji word sense disambiguation
DK179560B1 (en) 2017-05-16 2019-02-18 Apple Inc. Far-field extension for digital assistant services
US10657328B2 (en) 2017-06-02 2020-05-19 Apple Inc. Multi-task recurrent neural network architecture for efficient morphology handling in neural language modeling
US10445429B2 (en) 2017-09-21 2019-10-15 Apple Inc. Natural language understanding using vocabularies with compressed serialized tries
US10755051B2 (en) 2017-09-29 2020-08-25 Apple Inc. Rule-based natural language processing
US10636424B2 (en) 2017-11-30 2020-04-28 Apple Inc. Multi-turn canned dialog
US10733982B2 (en) 2018-01-08 2020-08-04 Apple Inc. Multi-directional dialog
US10733375B2 (en) 2018-01-31 2020-08-04 Apple Inc. Knowledge-based framework for improving natural language understanding
CN108459733A (en) * 2018-02-06 2018-08-28 广州阿里巴巴文学信息技术有限公司 Auxiliary input method, device, computing device and storage medium
US10789959B2 (en) 2018-03-02 2020-09-29 Apple Inc. Training speaker recognition models for digital assistants
US10592604B2 (en) 2018-03-12 2020-03-17 Apple Inc. Inverse text normalization for automatic speech recognition
US10818288B2 (en) 2018-03-26 2020-10-27 Apple Inc. Natural assistant interaction
US10909331B2 (en) 2018-03-30 2021-02-02 Apple Inc. Implicit identification of translation payload with neural machine translation
US10928918B2 (en) 2018-05-07 2021-02-23 Apple Inc. Raise to speak
US11145294B2 (en) 2018-05-07 2021-10-12 Apple Inc. Intelligent automated assistant for delivering content from user experiences
US10984780B2 (en) 2018-05-21 2021-04-20 Apple Inc. Global semantic word embeddings using bi-directional recurrent neural networks
US10892996B2 (en) 2018-06-01 2021-01-12 Apple Inc. Variable latency device coordination
DK180639B1 (en) 2018-06-01 2021-11-04 Apple Inc Dismissal of an attention-aware virtual assistant
DK179822B1 (en) 2018-06-01 2019-07-12 Apple Inc. Voice interaction at a primary device to access call functionality of a companion device
US11386266B2 (en) 2018-06-01 2022-07-12 Apple Inc. Text correction
DK201870355A1 (en) 2018-06-01 2019-12-16 Apple Inc. Virtual assistant operation in multi-device environments
US11076039B2 (en) 2018-06-03 2021-07-27 Apple Inc. Accelerated task performance
CN109164921B (en) * 2018-07-09 2023-04-07 北京左医科技有限公司 Control method and device for dynamically displaying input suggestions in a chat box
US11194467B2 (en) 2019-06-01 2021-12-07 Apple Inc. Keyboard management user interfaces
DE102021121116B4 (en) 2021-08-13 2023-05-11 Brainbox Gmbh METHOD AND DEVICE FOR ENTERING A CHARACTER STRING
US20230214579A1 (en) * 2021-12-31 2023-07-06 Microsoft Technology Licensing, Llc Intelligent character correction and search in documents

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040183833A1 (en) * 2003-03-19 2004-09-23 Chua Yong Tong Keyboard error reduction method and apparatus
US20050190970A1 (en) * 2004-02-27 2005-09-01 Research In Motion Limited Text input system for a mobile electronic device and methods thereof
US20060274051A1 (en) 2003-12-22 2006-12-07 Tegic Communications, Inc. Virtual Keyboard Systems with Automatic Correction

Family Cites Families (58)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5305205A (en) 1990-10-23 1994-04-19 Weber Maria L Computer-assisted transcription apparatus
US5565888A (en) 1995-02-17 1996-10-15 International Business Machines Corporation Method and apparatus for improving visibility and selectability of icons
US5748512A (en) 1995-02-28 1998-05-05 Microsoft Corporation Adjusting keyboard
US5818437A (en) * 1995-07-26 1998-10-06 Tegic Communications, Inc. Reduced keyboard disambiguating computer
KR100260760B1 (en) 1996-07-31 2000-07-01 모리 하루오 Information display system with touch panel
US5818451A (en) 1996-08-12 1998-10-06 International Business Machines Corporation Computer programmed soft keyboard system, method and apparatus having user input displacement
US5953541A (en) 1997-01-24 1999-09-14 Tegic Communications, Inc. Disambiguating system for disambiguating ambiguous input sequences by displaying objects associated with the generated input sequences in the order of decreasing frequency of use
US6073036A (en) 1997-04-28 2000-06-06 Nokia Mobile Phones Limited Mobile station with touch input having automatic symbol magnification function
US6803905B1 (en) 1997-05-30 2004-10-12 International Business Machines Corporation Touch sensitive apparatus and method for improved visual feedback
CN100334530C (en) * 1997-09-25 2007-08-29 蒂吉通信系统公司 Reduced keyboard disambiguating systems
US5896321A (en) 1997-11-14 1999-04-20 Microsoft Corporation Text completion system for a miniature computer
JP2938420B2 (en) 1998-01-30 1999-08-23 インターナショナル・ビジネス・マシーンズ・コーポレイション Function selection method and apparatus, storage medium storing control program for selecting functions, object operation method and apparatus, storage medium storing control program for operating objects, storage medium storing composite icon
US6169538B1 (en) 1998-08-13 2001-01-02 Motorola, Inc. Method and apparatus for implementing a graphical user interface keyboard and a text buffer on electronic devices
US6271835B1 (en) 1998-09-03 2001-08-07 Nortel Networks Limited Touch-screen input device
US7679534B2 (en) * 1998-12-04 2010-03-16 Tegic Communications, Inc. Contextual prediction of user words and user actions
US7712053B2 (en) * 1998-12-04 2010-05-04 Tegic Communications, Inc. Explicit character filtering of ambiguous text entry
GB9827930D0 (en) 1998-12-19 1999-02-10 Symbian Ltd Keyboard system for a computing device with correction of key based input errors
US6259436B1 (en) 1998-12-22 2001-07-10 Ericsson Inc. Apparatus and method for determining selection of touchable items on a computer touchscreen by an imprecise touch
GB2388938B (en) * 1999-02-22 2004-03-17 Nokia Corp A communication terminal having a predictive editor application
US7286115B2 (en) 2000-05-26 2007-10-23 Tegic Communications, Inc. Directional input system with automatic correction
US7434177B1 (en) 1999-12-20 2008-10-07 Apple Inc. User interface for providing consolidation and access
US6597345B2 (en) 2000-03-03 2003-07-22 Jetway Technologies Ltd. Multifunctional keypad on touch screen
US6714221B1 (en) 2000-08-03 2004-03-30 Apple Computer, Inc. Depicting and setting scroll amount
JP4197220B2 (en) 2000-08-17 2008-12-17 アルパイン株式会社 Operating device
AU2002226886A1 (en) 2000-11-09 2002-05-21 Change Tools, Inc. A user definable interface system, method and computer program product
ATE425680T1 (en) * 2001-03-30 2009-04-15 Sf Investments Inc PROTECTIVE CLOTHING
JP3597141B2 (en) 2001-04-03 2004-12-02 泰鈞 温 Information input device and method, mobile phone and character input method of mobile phone
TW504916B (en) 2001-04-24 2002-10-01 Inventec Appliances Corp Method capable of generating different input values by pressing a single key from multiple directions
US20050024341A1 (en) 2001-05-16 2005-02-03 Synaptics, Inc. Touch screen with user interface enhancement
US7730401B2 (en) 2001-05-16 2010-06-01 Synaptics Incorporated Touch screen with user interface enhancement
EP1457864A1 (en) 2001-09-21 2004-09-15 International Business Machines Corporation Input apparatus, computer apparatus, method for identifying input object, method for identifying input object in keyboard, and computer program
US7113172B2 (en) 2001-11-09 2006-09-26 Lifescan, Inc. Alphanumeric keypad and display system and method
US20030197736A1 (en) 2002-01-16 2003-10-23 Murphy Michael W. User interface for character entry using a minimum number of selection keys
US20030149978A1 (en) 2002-02-07 2003-08-07 Bruce Plotnick System and method for using a personal digital assistant as an electronic program guide
US7038659B2 (en) 2002-04-06 2006-05-02 Janusz Wiktor Rajkowski Symbol encoding apparatus and method
US20030193481A1 (en) 2002-04-12 2003-10-16 Alexander Sokolsky Touch-sensitive input overlay for graphical user interface
US6927763B2 (en) 2002-12-30 2005-08-09 Motorola, Inc. Method and system for providing a disambiguated keypad
US7194699B2 (en) 2003-01-14 2007-03-20 Microsoft Corporation Animating images to reflect user selection
US7382358B2 (en) 2003-01-16 2008-06-03 Forword Input, Inc. System and method for continuous stroke word-based text input
US20040160419A1 (en) 2003-02-11 2004-08-19 Terradigital Systems Llc. Method for entering alphanumeric characters into a graphical user interface
US7103852B2 (en) 2003-03-10 2006-09-05 International Business Machines Corporation Dynamic resizing of clickable areas of touch screen applications
US7729542B2 (en) 2003-04-04 2010-06-01 Carnegie Mellon University Using edges and corners for character input
US7057607B2 (en) 2003-06-30 2006-06-06 Motorola, Inc. Application-independent text entry for touch-sensitive display
EP2254026A1 (en) * 2004-02-27 2010-11-24 Research In Motion Limited Text input system for a mobile electronic device and methods thereof
US7571111B2 (en) 2004-03-29 2009-08-04 United Parcel Service Of America, Inc. Computer system for monitoring actual performance to standards in real time
US7508324B2 (en) * 2004-08-06 2009-03-24 Daniel Suraqui Finger activated reduced keyboard and a method for performing text input
US8552984B2 (en) 2005-01-13 2013-10-08 602531 British Columbia Ltd. Method, system, apparatus and computer-readable media for directing input associated with keyboard-type device
US7788248B2 (en) 2005-03-08 2010-08-31 Apple Inc. Immediate search feedback
US20060246955A1 (en) 2005-05-02 2006-11-02 Mikko Nirhamo Mobile communication device and method therefor
US7886233B2 (en) 2005-05-23 2011-02-08 Nokia Corporation Electronic text input involving word completion functionality for predicting word candidates for partial word inputs
US7737999B2 (en) 2005-08-26 2010-06-15 Veveo, Inc. User interface for visual cooperation between text input and display device
US7443316B2 (en) 2005-09-01 2008-10-28 Motorola, Inc. Entering a character into an electronic device
US7873356B2 (en) 2005-09-16 2011-01-18 Microsoft Corporation Search interface for mobile devices
US7694231B2 (en) 2006-01-05 2010-04-06 Apple Inc. Keyboards for portable electronic devices
US7644054B2 (en) * 2005-11-23 2010-01-05 Veveo, Inc. System and method for finding desired results by incremental search using an ambiguous keypad with the input containing orthographic and typographic errors
DE102006037156A1 (en) 2006-03-22 2007-09-27 Volkswagen Ag Interactive operating device and method for operating the interactive operating device
US9552349B2 (en) * 2006-08-31 2017-01-24 International Business Machines Corporation Methods and apparatus for performing spelling corrections using one or more variant hash tables
US7683886B2 (en) * 2006-09-05 2010-03-23 Research In Motion Limited Disambiguated text message review function

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040183833A1 (en) * 2003-03-19 2004-09-23 Chua Yong Tong Keyboard error reduction method and apparatus
US20060274051A1 (en) 2003-12-22 2006-12-07 Tegic Communications, Inc. Virtual Keyboard Systems with Automatic Correction
US20050190970A1 (en) * 2004-02-27 2005-09-01 Research In Motion Limited Text input system for a mobile electronic device and methods thereof

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP2100210A1

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102884518A (en) * 2010-02-01 2013-01-16 金格软件有限公司 Automatic context sensitive language correction using an internet corpus particularly for small keyboard devices
WO2014102041A3 (en) * 2012-12-28 2014-09-12 Volkswagen Aktiengesellschaft Method for inputting and identifying a character string
US9703393B2 (en) 2012-12-28 2017-07-11 Volkswagen Ag Method for inputting and identifying a character string
US9996524B1 (en) 2017-01-30 2018-06-12 International Business Machines Corporation Text prediction using multiple devices
US10223352B2 (en) 2017-01-30 2019-03-05 International Business Machines Corporation Text prediction using multiple devices
US10223351B2 (en) 2017-01-30 2019-03-05 International Business Machines Corporation Text prediction using multiple devices
US10255268B2 (en) 2017-01-30 2019-04-09 International Business Machines Corporation Text prediction using multiple devices
US10558749B2 (en) 2017-01-30 2020-02-11 International Business Machines Corporation Text prediction using captured image from an image capture device

Also Published As

Publication number Publication date
HK1109015A2 (en) 2008-05-23
DE202008000265U1 (en) 2008-05-21
CN101641661A (en) 2010-02-03
AU2007342164A1 (en) 2008-07-17
AU2008100005A4 (en) 2008-02-07
AU2008100005B4 (en) 2008-11-06
US20080167858A1 (en) 2008-07-10
US7957955B2 (en) 2011-06-07
EP2100210A1 (en) 2009-09-16
TW200842660A (en) 2008-11-01

Similar Documents

Publication Publication Date Title
US11474695B2 (en) Method, device, and graphical user interface providing word recommendations for text input
US7957955B2 (en) Method and system for providing word recommendations for text input
US11416141B2 (en) Method, system, and graphical user interface for providing word recommendations
US10430054B2 (en) Resizing selection zones on a touch sensitive display responsive to likelihood of selection
US9081482B1 (en) Text input suggestion ranking
US20110074685A1 (en) Virtual Predictive Keypad
US20070152980A1 (en) Touch Screen Keyboards for Portable Electronic Devices
WO2007082139A2 (en) Keyboards for portable electronic devices
CN101021763A (en) 2007-08-22 Fast input method using a soft keyboard layout on a touch screen
KR20160009054A (en) Multiple graphical keyboards for continuous gesture input
KR20140121806A (en) 2014-10-16 Method for inputting characters using software Korean keypad
CN105807939B (en) Electronic equipment and method for improving keyboard input speed
US20130091455A1 (en) Electronic device having touchscreen and character input method therefor
KR20160069292A (en) 2016-06-16 Method and apparatus for sliding-type letter input using patterns
TW201237680A (en) Inputting Chinese characters in Pinyin mode

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase
    Ref document number: 200780052020.1
    Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 07869922
    Country of ref document: EP
    Kind code of ref document: A1

WWE Wipo information: entry into national phase
    Ref document number: 2007342164
    Country of ref document: AU

NENP Non-entry into the national phase
    Ref country code: DE

WWE Wipo information: entry into national phase
    Ref document number: 2007869922
    Country of ref document: EP

ENP Entry into the national phase
    Ref document number: 2007342164
    Country of ref document: AU
    Date of ref document: 20071227
    Kind code of ref document: A