WO2014121370A1 - Methods and systems for predicting actions on virtual keyboard - Google Patents

Methods and systems for predicting actions on virtual keyboard Download PDF

Info

Publication number
WO2014121370A1
WO2014121370A1 PCT/CA2013/050099 CA2013050099W WO2014121370A1 WO 2014121370 A1 WO2014121370 A1 WO 2014121370A1 CA 2013050099 W CA2013050099 W CA 2013050099W WO 2014121370 A1 WO2014121370 A1 WO 2014121370A1
Authority
WO
WIPO (PCT)
Prior art keywords
characters
touch
displayed
block
electronic device
Prior art date
Application number
PCT/CA2013/050099
Other languages
French (fr)
Inventor
Jerome Pasquero
Donald Somerset Mcculloch Mckenzie
Tiphanie Lau
Original Assignee
Research In Motion Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Research In Motion Limited filed Critical Research In Motion Limited
Priority to PCT/CA2013/050099 priority Critical patent/WO2014121370A1/en
Publication of WO2014121370A1 publication Critical patent/WO2014121370A1/en

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233Character input methods
    • G06F3/0236Character input methods using selection techniques to select from displayed items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233Character input methods
    • G06F3/0237Character input methods using prediction or retrieval techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482Interaction with lists of selectable items, e.g. menus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04886Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M2250/00Details of telephonic subscriber devices
    • H04M2250/22Details of telephonic subscriber devices including a touch pad, a touch sensor or a touch detector
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M2250/00Details of telephonic subscriber devices
    • H04M2250/70Details of telephonic subscriber devices methods for entering alphabetical characters, e.g. multi-tap or dictionary disambiguation

Definitions

  • Example embodiments relate to processing prediction candidates in connection with virtual keyboards and touch-sensitive displays.
  • touch-sensitive displays allowing users to enter characters into applications, such as a word processors, Short Messaging Service applications, Multimedia Messaging Service applications, email applications, etc. Entering a set of characters on a touch-sensitive display can be a challenging task because of the limited amount of real estate on a small touch-sensitive display.
  • An example of a typical text entry touch-sensitive display may include two separate areas. The first area is used to display an interactive virtual keyboard having a set of selectable keys, and the second area is used to display selected input associated with the keys.
  • Predictive text input resources aid in the input of text to an electronic device. These resources include predicting a word or the name of a contact a user is entering after the user inputs one or more characters of the word or name. The resource provides the user with one or more suggestions for one or more additional characters to complete the word or name being entered. Similarly, some word prediction resources offer users full word or name predictions based on the context of other words or names previously input by the user alone or with the assistance of a word prediction resource.
  • Fig. 1 is a simplified block diagram illustrating an example mobile communication device in accordance with an example embodiment.
  • Fig. 2 is a flowchart illustrating an example method for predicting and displaying selected sets of characters.
  • FIG. 3 is a flowchart illustrating an example method for displaying pictures and communication service icons, corresponding to the selected sets of characters.
  • Fig. 4 is a flowchart illustrating an example method for displaying hyperlinks, corresponding to the selected sets of characters.
  • FIG. 5 shows an example front view of a touch-sensitive display in accordance with an example embodiment.
  • FIG. 6 shows an example front view of a touch-sensitive display in accordance with an example embodiment.
  • Fig. 7 is a flowchart illustrating an example method for opening/activating/launching a communication service application.
  • Fig. 8 is a flowchart illustrating an example method for opening/activating/launching a hyperlink.
  • FIG. 9 shows an example front view of a touch-sensitive display in accordance with an example embodiment.
  • Fig. 10 shows an example front view of a touch-sensitive display in accordance with an example embodiment.
  • FIG. 11 shows an example front view of a touch-sensitive display in accordance with an example embodiment.
  • Fig. 12 shows an example front view of a touch-sensitive display in accordance with an example embodiment.
  • Fig. 13 shows an example front view of a touch-sensitive display in accordance with an example embodiment.
  • Fig. 14 shows an example front view of a touch-sensitive display in accordance with an example embodiment.
  • Fig. 15 shows an example front view of a touch-sensitive display in accordance with an example embodiment.
  • Fig. 16 shows an example front view of a touch-sensitive display in accordance with an example embodiment.
  • FIG. 17 shows an example front view of a touch-sensitive display in accordance with an example embodiment.
  • Fig. 18 shows an example front view of a touch-sensitive display in accordance with an example embodiment.
  • Touch-sensitive displays that allow a user to enter characters into an application might be divided into a plurality of areas.
  • the touch- sensitive display might be divided into two areas.
  • the first area may be used for displaying an interactive virtual keyboard having a set of selectable keys
  • the second area may be used for displaying and selecting selected input associated with the keys. Dividing the touch-sensitive display in this way forces the user to continuously toggle their attention between the virtual keyboard and the input reflected by the selected keys in the second portion, or to consider and select candidate character(s) of a predictive text process displayed in the second portion, and thereafter, look back at the virtual keyboard to continue typing.
  • Constant shifting of the user's eyes and fingers between the two areas can place undue stress on the user's eyes and fingers, be distracting, require more time to enter text, and otherwise be inefficient.
  • the time wasted by a user, switching their attention between the input area and the virtual keyboard, leads to an increased number of processing cycles resulting in display power being wasted as the processor sits idle.
  • a method for an electronic device consisting of a touch-sensitive display and processor includes displaying in a touch-sensitive display a plurality of keys. The method also includes detecting a selection of at least one of the keys. In addition, the method includes generating at least one set of characters corresponding to the selected at least one key. The method also includes displaying at least one icon associated with a generated character set within the touch-sensitive display.
  • an electronic device in another disclosed example embodiment, includes a touch-sensitive display, at least one processor, in communication with the touch-sensitive display, and configured to, display in the touch-sensitive display a plurality of keys, detect a selection of at least one of the keys, generate at least one set of characters corresponding to the selected at least one key, display at least one icon associated with a generated character set within the touch-sensitive display.
  • an electronic device in another disclosed example embodiment, includes a touch-sensitive display, at least one processor, in communication with the touch-sensitive display, at least one memory, instructions stored on the at least one memory, which, when executed by the processor, cause the electronic device to perform the steps of, displaying in the touch- sensitive display a plurality of keys, detecting a selection of at least one of the keys, generating at least one set of characters corresponding to the selected at least one key, displaying at least one icon associated with a generated character set within the touch- sensitive display.
  • a processor can be one or more than one processor.
  • Example embodiments described herein may receive character inputs, generate sets of predicted characters based on the received character inputs, generate a set of pictures based on the received character inputs, generate a set of applications based on the received character inputs, generate a set of hyperlinks corresponding to the set of generated set of applications, generate a set of application icons, generate a set of communication service icons for each picture generated, display the sets of predicted characters in a second viewing pane, display the set of communication service icons near the predicted characters, display the set of hyperlinks at or near keys that correspond to the sets of predicted characters, display the set of application icons in a second viewing pane, display the set of pictures at or near keys that correspond to characters in the sets of predicted characters, display the set of communication service icons next to each picture located at or near the keys corresponding to the characters in the sets of predicted characters.
  • Example embodiments described herein further enable a user to select a communication service application, corresponding to a communication service icon, associated with the predicted
  • Communication device 100 is a two-way communication device having data and voice communication capabilities, and the capability to communicate with other computer systems, for example, via the Internet.
  • communication device 100 can be a handheld device, a multiple-mode communication device configured for both data and voice communication, a smartphone, a mobile telephone, a netbook, a gaming console, a tablet, or a PDA (personal digital assistant) enabled for wireless communication.
  • PDA personal digital assistant
  • Communication device 100 includes a case (not shown) housing the components of communication device 100.
  • device 100 has a rectangular shape with two planar sides, although other configurations may be adopted.
  • the internal components of communication device 100 can, for example, be constructed on a printed circuit board (PCB).
  • PCB printed circuit board
  • Communication device 100 includes a controller including at least one processor 102 (such as a microprocessor), which controls the operation of communication device 100.
  • processor 102 can be a single microprocessor, multiple microprocessors, field programmable gate arrays (FPGAs), digital signal processors (DSPs) capable of executing particular sets of instructions, or any circuit capable of electrically coupling the device subsystems.
  • Processor 102 interacts with device subsystems such as a communication system 104 for exchanging radio frequency signals with a wireless network (for example WAN 144 and/or PLMN 146) to perform communication functions.
  • a wireless network for example WAN 144 and/or PLMN 1466
  • Processor 102 also interacts with additional device subsystems including a display 106 such as a liquid crystal display (LCD) screen or any other appropriate display, input devices 108 such as a keyboard and control buttons, persistent memory 110, random access memory (RAM) 112, read only memory (ROM) 114, auxiliary input/output (I/O) subsystems 116, data port 118 such as a conventional serial data port or a Universal Serial Bus (USB) data port, speaker 120, microphone 122, short-range wireless communications subsystem 124 (which can employ any appropriate wireless (for example, RF), optical, or other short range communications technology), and other device subsystems generally designated as 126.
  • a display 106 such as a liquid crystal display (LCD) screen or any other appropriate display
  • input devices 108 such as a keyboard and control buttons
  • persistent memory 110 random access memory (RAM) 112, read only memory (ROM) 114
  • ROM read only memory
  • I/O subsystems 116 auxiliary input/output subsystems
  • the display 106 On one planar side of device 100 is the display 106 that can be realized as a touch-sensitive display in some example embodiments.
  • the touch-sensitive display can be constructed using a touch-sensitive input surface coupled to an electronic controller and which overlays the visible element of display 106.
  • the touch-sensitive overlay and the electronic controller provide a touch-sensitive input device and processor 102 interacts with the touch-sensitive overlay via the electronic controller.
  • the touch-sensitive overlay may extend beyond a display area of display 106 to the edges of the side of communication device 100.
  • Communication system 104 includes one or more communication systems for communicating with wireless WAN 144 and wireless access points within the wireless network.
  • the particular design of communication system 104 depends on the wireless network in which communication device 100 is intended to operate.
  • Communication device 100 can send and receive communication signals over the wireless network after the required network registration or activation procedures have been completed.
  • Processor 102 operates under stored program control and executes software modules 128 stored in a tangible non-transitory computer-readable storage medium such as persistent memory 110, which can be a flexible disk, a hard disk, a CD-ROM (compact disk-read only memory), a MO (magneto -optical) disk, a DVD- ROM (digital versatile disk-read only memory), a DVD RAM (digital versatile disk- random access memory), or a semiconductor memory.
  • Software modules 128 can also be stored in a computer-readable storage medium such as ROM 114, or any appropriate persistent memory technology, including EEPROM, EAROM, and FLASH. These computer-readable storage mediums store computer-readable instructions for execution by processor 102 to perform a variety of functions on communication device 100.
  • Software modules 128 can include operating system software 130, used to control operation of communication device 100. Additionally, software modules 128 can include software applications 132 for providing additional functionality to communication device 100. Software applications 132 can further include a range of applications, including, for example, e-mail messaging application, address book, spell check application, text prediction application, notepad application, Internet browser application, voice communication (i.e., telephony) application, mapping application, or a media player application, or any combination thereof. Each of software applications 132 can include layout information defining the placement of particular fields and graphic elements (for example, text fields, input fields, icons, etc.) in the user interface (i.e., display 106) according to the application. [0039] In some example embodiments, persistent memory 110 stores data
  • the linguistic data is used, for example, by one or more of the applications, such as the spell check application and the text prediction application.
  • the linguistic data may include default linguistic data, such as words and/or groups of words, with a corresponding number for each word or group of words indicating the number of words in the word or group of words.
  • the linguistic data may also include custom linguistic data, such as words or groups of words previously entered by the user.
  • the linguistic data may also include data acquired from a device other than communication device 100.
  • data 134 also includes service data including information required by communication device 100 to establish and maintain communication with the wireless network (for example WAN 144 and/or PLMN 146).
  • the data 134 also includes database linkage data including information required by communication device 100 to link linguistic data to pictures stored in persistent memory 110 or a second external electronic device that communicates with communication device 100 via the Internet.
  • auxiliary input/output (I/O) subsystems 116 include an external communication link or interface, for example, an Ethernet connection.
  • device 100 includes one or more sensors such as an accelero meter, GPS, temperature sensor, and pressure sensor.
  • auxiliary I/O subsystems 116 can further include one or more input devices, including a pointing or navigational tool such as an optical trackpad, clickable trackball, scroll wheel or thumbwheel, or one or more output devices, including a mechanical transducer such as a vibrator for providing vibratory notifications in response to various events on communication device 100 (for example, receipt of an electronic message or incoming phone call), or for other purposes such as haptic feedback (touch feedback).
  • a pointing or navigational tool such as an optical trackpad, clickable trackball, scroll wheel or thumbwheel
  • output devices including a mechanical transducer such as a vibrator for providing vibratory notifications in response to various events on communication device 100 (for example, receipt of an electronic message or incoming phone call), or for other purposes such as haptic feedback (touch feedback
  • communication device 100 also includes one or more removable memory modules 136 (typically including FLASH memory) and a memory module interface 138.
  • removable memory module 136 is to store information used to identify or authenticate a subscriber or the subscriber's account to a wireless network (for example WAN 144 or PLMN 146).
  • a wireless network for example WAN 144 or PLMN 146.
  • SIM Subscriber Identity Module
  • Memory module 136 is inserted in or coupled to memory module interface 138 of communication device 100 in order to operate in conjunction with the wireless network.
  • Communication device 100 also includes a battery 140 which furnishes energy for operating communication device 100.
  • Battery 140 can be coupled to the electrical circuitry of communication device 100 through a battery interface 142, which can manage such functions as charging battery 140 from an external power source (not shown) and the distribution of energy to various loads within or coupled to communication device 100.
  • Short-range wireless communications subsystem 124 is an additional optional component that provides for communication between communication device 100 and different systems or devices, which need not necessarily be similar devices.
  • short-range wireless communications subsystem 124 can include an infrared device and associated circuits and components, or a wireless bus protocol compliant communication device such as a BLUETOOTH® communication module to provide for communication with similarly-enabled systems and devices.
  • a predetermined set of applications that control basic device operations, including data and possibly voice communication applications can be installed on communication device 100 during or after manufacture. Additional applications or upgrades to operating system software 130 or software applications 132 can also be loaded onto communication device 100 through the wireless network (for example WAN 144 and/or PLMN 146), auxiliary I/O subsystem 116, data port 118, short-range wireless communication subsystem 124, or other suitable subsystems 126.
  • the downloaded programs or code modules can be permanently installed, for example, written into the persistent memory 110, or written into and executed from RAM 112 for execution by processor 102 at runtime.
  • Communication device 100 provides, for example, three modes of communication: a data communication mode, a voice communication mode, and a video communication mode.
  • a received data signal such as a text message, an e-mail message, Web page download, or an image file are processed by communication system 104 and input to processor 102 for further processing.
  • a downloaded Web page can be further processed by a browser application, or an e-mail message can be processed by an e-mail message messaging application and output to display 106.
  • a user of communication device 100 can also compose data items, such as e-mail messages, for example, using the input devices, such as auxiliary I/O subsystem 116, in conjunction with display 106.
  • communication device 100 provides telephony functions and operates as a typical cellular phone.
  • communication device 100 provides video telephony functions and operates as a video teleconference terminal.
  • communication device 100 utilizes one or more cameras (not shown) to capture video of video teleconference.
  • Method 200 begins by receiving an input of one or more characters (block 210).
  • processor 102 of device 100 may receive characters from a virtual keyboard displayed on a touch-sensitive display.
  • the virtual keyboard has one or more rows of selectable keys, where each key is assigned a location in a row, each key is assigned a character, and each character is displayed on the assigned key.
  • the virtual keyboard is configured in accordance with the QWERTY keyboard layout. Key selection on the virtual keyboard causes character input.
  • the inputted characters may be displayed in an input field that displays characters input using the virtual keyboard.
  • a character can be any alphanumeric character set in device 100, including, for example, a letter, a number, a symbol, a space, or a punctuation mark.
  • the processor 102 generates one or more sets of characters, including for example, words, acronyms, names, abbreviations, or any combination thereof based on the input received at block 210.
  • the sets of characters may include, for example, sets of characters that are stored in a contact list in a memory of the electronic device, characters stored in another contact list stored in a memory of a second external electronic device that communicates with the electronic device via the Internet, sets of characters that were previously inputted by a user, sets of characters associated with an application stored in a memory of the electronic device or a secondary external electronic device that communicates with the electronic device via the Internet, sets of characters associated with a Uniform Resource Locator, sets of characters based on a hierarchy or tree structure, or any other sets of characters that are selected by a processor based on a predefined arrangement. For example, using a text prediction application the processor may access data 134, including locally or remotely stored contact list data, to identify the one or more sets of characters at block 220.
  • the sets of characters generated at block 220 can begin with the same character received as input at block 210. For example, if the character "G” has been received as input using a virtual keyboard, the character will be received by the processor as the character input. In such example embodiments, the sets of characters generated at block 220 would all begin with “G”, such as "Greg Hislop” "Gretchen Colfax", “Gunther Cole” or "Greg Stark.” Similarly, if the user has input the characters "Gr”, the sets of characters generated at block 220 would all begin with "Gr”, such as "Greg Hislop", “Gretchen Colfax", or "Greg Stark".
  • the sets of characters generated at block 220 may include the characters received as input at block 210 in any position. For example, if the received input is an "s”, and the user has already input the characters "Gr”, and the processor has generated the sets of characters “Gretchen Colfax", “Greg Hislop”, and “Greg Stark”, the processor may prune the generated sets of characters to "Greg Stark” when the character "s" is received at block 210.
  • the processor 102 uses contextual data for generating sets of characters.
  • Contextual data considers the context of characters in the input field.
  • Contextual data may include information about, for example, a set of characters previously inputted by a user, grammatical attributes of the characters inputted in the input field (e.g., whether a noun or a verb is needed as the next set of characters in a sentence), or any combination thereof.
  • the processor may use the contextual data to determine that a noun— instead of a verb— will most likely be the next set of characters after "the.” Likewise, if the set of characters "The Crab” was inputted, based on the context, the processor may determine the subsequent set of characters is likely "Shack.” Using the contextual data, the processor may also determine whether an inputted character was incorrect. For example, the processor may determine that an inputted character was supposed to be a "w” instead of an "a”, given the proximity of these characters on a QWERTY virtual keyboard.
  • the sets of characters generated at block 220 may include characters assigned to the same key on a virtual keyboard, as the characters input at block 210. For example, if a given key on a virtual keyboard can be used to input either "a” or "b", and the received input at block 210 is an "a", the processor may generate "Adonis Jones” or "Bacchus Smith” as sets of characters. It does not matter if upper or lower case is input. The same output, as just described, would result.
  • the characters input at block 210 are representative of a prefix and the sets of characters generated at block 220 are representative of a completion text portion.
  • the input characters i.e., the prefix
  • the sets of characters i.e., the completion text portions
  • the completion text portions are found by searching the contact list data source for the prefix combined with possible completion text portions.
  • the processor 102 ranks the generated sets of characters.
  • the ranking may reflect, for example, the likelihood that a candidate set of characters is intended by a user or the likelihood that a candidate set of characters will be chosen by a user compared to another candidate set of characters.
  • the ranking may be based on contextual data.
  • contextual data may include information about which programs or applications are currently running or being used on the electronic device. For example, if the device is running an email application, then sets of characters associated with that device's email application, such as sets of characters from the device's contact list, can be used to determine the ranking. N-grams, such as unigrams, bigrams, and trigrams, may be also used in the ranking of the sets of characters.
  • the geolocation of the electronic device may also be used in the ranking process.
  • the electronic device recognizes that it is located at a user's office, then sets of characters generally associated with work may be ranked higher in the list. If, on the other hand, the device determines that the device is not located at the user's office, then sets of characters generally not associated with the user's office may be ranked higher in the list.
  • the processor 102 selects which of the sets of characters to display based on the ranking. For example, in some example embodiments, higher ranked sets of characters are more likely to be selected than lower ranked sets of characters. In some example embodiments, the processor 102 limits the displayed sets of characters to the top few or chooses among the higher ranked sets of characters. In addition, in some example embodiments, as described in further detail below, subsequent candidate input characters of the sets of characters affects how processor 102 selects which of the sets of characters to display, in accordance with the method 200. The processor 102 can also display pictures associated with the sets of characters selected for display. Yet in other example embodiments, the processor 102 might display hyperlinks associated with the sets of characters selected for display.
  • the processor 102 displays one or more of the selected sets of characters in a second viewing pane corresponding to a subsequent candidate input character, predicted as the next character that a user might input. For example, if a user inputs "Gr”, the sets of characters “Greg Stark”, “Gretchen Colfax”, “Greg Hislop” are displayed in a second viewing pane.
  • a subsequent candidate input character is any alphanumeric character, such as, for example, a letter, number, symbol, or punctuation mark.
  • Processor 102 causes display of the one or more selected set of characters in a manner that will attract the user's attention.
  • the appearance of a displayed set of characters is enhanced or changed in a way that makes the set more readily visible to a user.
  • a set of characters may be displayed with backlighting, highlighting, underlining, bolding, italicizing, using combinations thereof, or in any other way for making the displayed set of characters more visible.
  • Processor 102 might also apply backlighting, highlighting, some combination thereof, or any other effect that makes the displayed set of communication service icons more visible.
  • at block 250, processor 102 enables a set of communication service icons, displayed in a second viewing pane and associated with the displayed sets of characters, to be chosen to activate a communication service application.
  • Method 300 begins at block 310 by receiving an input of one or more characters. (Block 310 is the same as block 210 discussed above with respect to Fig. 2.)
  • processor 102 generates one or more sets of characters, including for example, words, acronyms, names, abbreviations, or any combination thereof based on the input received at block 310.
  • Block 312 is the same as block 220 discussed above with respect to Fig. 2.
  • the processor 102 ranks the generated sets of characters.
  • Block 314 is the same as block 230 discussed above with respect to Fig. 2.
  • at block 316, the processor 102 selects generated sets of characters for display based on the ranking. (Block 316 is the same as block 240 discussed above with respect to Fig. 2.)
  • processor 102 generates a set of pictures, each of which corresponds to the characters selected at block 316.
  • the set of pictures may include, for example, pictures stored in a contact list in a memory of the electronic device, pictures stored in another contact list stored in a memory of a second external electronic device that communicates with the electronic device via the Internet, pictures previously selected by the user, or any other set of pictures that are selected by a processor based on a predefined arrangement.
  • a picture can be an image of a person, an object, a place, a symbol, or a character, and might be presented as a normal picture or as some modification of the picture (e.g., a silhouette).
  • a contact prediction application may be used by the processor to access data 134, including the stored contact list data, to identify the set of pictures at block 320.
  • each picture in the set of pictures corresponds to a set of characters selected at block 316.
  • for example, if the character "G" has been input, a set of contact pictures corresponding to contacts in the contact list whose names begin with the character "G" will be generated at block 320.
  • each picture in the set of pictures generated at block 320 would correspond to a unique set of characters selected at block 316. Therefore the processor will generate a unique picture for the sets of characters, "Greg Hislop”, “Gretchen Colfax”, “Gunther Cole", and "Greg Stark”.
  • the sets of characters generated at block 312 may include the characters received as input at block 310 in any position. For example, if the received input is an "s", and the user has already input the characters "Gr”, and the processor has generated the sets of characters “Gretchen Colfax", “Greg Hislop”, and “Greg Stark”, the processor may prune the generated sets of characters to "Greg Stark” when the character "s" is received at block 310.
  • a set of pictures is generated at block 320, and the set is pruned with each subsequent character that is inputted at block 310. Each picture in the set of pictures generated at block 320 corresponds to a set of characters selected at block 316.
  • the sets of characters generated at block 312 may include characters assigned to the same key on a virtual keyboard as the characters input at block 310. For example, if a given key on a virtual keyboard can be used to input either "a" or "b", and the received input at block 310 is an "a", the processor may generate "Adonis Jones" or "Bacchus Smith" as sets of characters. In some example embodiments, a picture corresponding to the sets of characters "Adonis Jones" and "Bacchus Smith" is generated at block 320. It does not matter whether an upper- or lower-case character is input; the same output, as just described, would result.
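The pruning behaviour described in the bullets above can be sketched roughly as follows. The function name and the specific rule (a surviving candidate must have a later word starting with the input character) are illustrative readings of the "Gr" plus "s" example, not the patent's exact algorithm.

```python
def prune_candidates(candidates, new_char, key_chars=None):
    """Prune candidate sets when a further character arrives at block 310.

    A candidate such as "Greg Stark" survives the input "s" after "Gr"
    because a later word (the surname) starts with "s". Matching is
    case-insensitive, and key_chars can list every character assigned
    to one virtual key (e.g. ("a", "b") for a shared a/b key).
    """
    chars = {c.lower() for c in (key_chars or [new_char])}
    return [cand for cand in candidates
            if any(word.lower().startswith(tuple(chars))
                   for word in cand.split()[1:])]

names = ["Gretchen Colfax", "Greg Hislop", "Greg Stark"]
print(prune_candidates(names, "s"))  # → ['Greg Stark']
```

Because matching is done on lower-cased text and on every character sharing the pressed key, the same output results whether the user types "s" or "S", or presses a key that carries several characters.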
  • the processor 102 will display one or more of the generated set of pictures at a location within or substantially within the keyboard corresponding to a subsequent candidate input character, predicted as the next character that a user might input. For example, if a user inputs "Gr”, the picture associated with a generated set of characters "Greg Stark” is displayed on the key for the letter "s”— a subsequent candidate input character for that set of characters.
  • the processor 102 limits the displayed sets of characters to the top few or chooses among the higher ranked sets of characters. For example, if two sets of characters are both ranked high, and the pictures associated with these sets of characters would otherwise be displayed at the same key at block 330, the electronic device could be configured to display only the picture associated with the highest ranked set of characters. In other example embodiments, both pictures are displayed at or around the same key, or one picture is displayed at one key while the second picture is displayed at another key. In some example embodiments, the processor takes into account the display size to limit the number of pictures displayed that are associated with the selected sets of characters.
  • the ranking is used to choose between two or more pictures, associated with the sets of generated characters, that when displayed on adjacent keys, would overlap with each other (e.g., because of their respective sizes).
  • the electronic device is configured to display the picture associated with the higher ranked set of characters on the keyboard. For example, if the picture associated with the set of characters "Greg Hislop” is ranked first in a list generated at block 320 after the letter "G" is inputted, "Greg Hislop” could be displayed at the "R” key. When displayed on a virtual keyboard, however, the picture might occupy some space on the "E” key, potentially blocking a picture that would be displayed on or around that key.
  • the pictures associated with the selected sets of characters are displayed at or near keys on the virtual keyboard associated with the subsequent candidate input characters.
  • whether a picture is displayed at or near a key depends, for example, on the size of a candidate picture or the number and size of nearby candidate pictures.
  • gaps are provided between rows of keys, wherein a given candidate picture may be displayed in a gap near a key corresponding to a subsequent candidate input character.
  • the selected set of pictures might be displayed substantially within the display area. That is, some pixels of a displayed picture may be displayed within the virtual keyboard, while other pixels of the displayed picture may be displayed in another region, such as, for example, a second viewing pane.
  • a key corresponding to a subsequent candidate input character of a set of characters may have a candidate picture displayed within the virtual keyboard on or near the key, and similarly the candidate picture may also be displayed in a second viewing pane simultaneously.
  • Processor 102 causes display of the one or more selected set of pictures in a manner that will attract the user's attention.
  • the appearance of a displayed picture is enhanced or changed in a way that makes the picture more readily visible to a user.
  • a picture may be displayed with backlighting, highlighting, using combinations thereof, or in any other way for making the displayed picture more visible.
  • the picture may also be magnified.
  • the processor 102 then displays one or more communication service icons at block 340 at or near the picture displayed at block 330.
  • the picture associated with another generated set of characters "Greg Hislop” is displayed on the key for the letter "h”— a subsequent candidate input character of the generated set of characters “Greg Hislop” if "Gr" has already been entered.
  • a subsequent candidate input in this example embodiment would be the first letter of the last name.
  • the processor 102 displays one or more communication service icons at block 340 at or near the picture displayed at block 330.
  • Processor 102 might also apply backlighting, highlighting, some combination thereof, or any other effect that makes the displayed set of communication service icons more visible.
  • processor 102 enables a set of communication service icons displayed at or near the set of pictures, at block 340, to be chosen to activate a communication service application within the virtual keyboard (block 350).
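The picture-placement logic of blocks 320 to 330, including the collision rule that keeps only the higher-ranked picture when two candidates map to the same key, can be sketched as below. The surname-initial rule, the rank dictionary, and all names are assumptions drawn from the "Greg Stark" on the "s" key example, not code from the patent.

```python
def place_pictures(candidates, rank):
    """Place each candidate's picture on the key of its subsequent
    candidate input character, taken here as the first letter of the
    last name (per the "Greg Stark" on the "s" key example). When two
    candidates map to the same key, only the higher-ranked picture is
    kept, one of the collision policies the text describes.
    """
    placements = {}
    for name in candidates:
        key = name.split()[-1][0].lower()  # first letter of the surname
        best = placements.get(key)
        if best is None or rank[name] > rank[best]:
            placements[key] = name
    return placements

rank = {"Greg Stark": 3, "Greg Hislop": 2, "Gretchen Colfax": 1}
print(place_pictures(list(rank), rank))
# → {'s': 'Greg Stark', 'h': 'Greg Hislop', 'c': 'Gretchen Colfax'}
```

The alternative policies mentioned above (displaying both pictures around one key, or shifting one picture to a nearby gap between key rows) would replace the `if best is None or ...` branch with layout logic instead of a drop.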
  • Method 400 begins at block 410 by receiving an input of one or more characters.
  • Block 410 is the same as block 210 discussed above with respect to Fig. 2.
  • processor 102 generates one or more sets of characters, including for example, words, acronyms, names, abbreviations, or any combination thereof based on the input received at block 410.
  • Block 412 is the same as block 220 discussed above with respect to Fig. 2.
  • the processor 102 ranks the generated sets of characters.
  • Block 414 is the same as block 230 discussed above with respect to Fig. 2.
  • the processor 102 selects generated sets of characters for display based on the ranking.
  • Block 416 is the same as block 240 discussed above with respect to Fig. 2.
  • the processor 102 generates a set of hyperlinks, each of which corresponds to the characters selected at block 416.
  • the set of hyperlinks may include, for example, hyperlinks to an application stored in a memory of the electronic device, hyperlinks to an application stored in a memory of a secondary external electronic device that communicates with the electronic device via the Internet, hyperlinks previously selected by the user, hyperlinks associated with a Uniform Resource Locator, or any other set of hyperlinks that are selected by the processor based on a predefined arrangement.
  • each hyperlink in the set of hyperlinks corresponds to a set of characters selected at block 416.
  • for example, if the character "G" has been input, the characters will be received by the processor as the character input, and a set of hyperlinks corresponding to Uniform Resource Locators that begin with the character "G" will be generated at block 420.
  • each hyperlink in the set of hyperlinks generated at block 420 would correspond to a unique set of characters selected at block 416. Therefore the processor will generate a unique hyperlink for the Uniform Resource Locators, "www.globeandmail.com” and "www.gmail.com”.
  • the hyperlinks associated with the set of characters selected at block 416 correspond to applications stored on the electronic device or an external electronic device that communicates with the electronic device via the Internet. For example, if the character "F" has been selected at block 416, a set of hyperlinks corresponding to the applications that begin with the character "F" will be generated at block 420. In such example embodiments, each hyperlink in the set of hyperlinks generated at block 420 would correspond to a unique set of characters selected at block 416. Therefore the processor will generate a unique hyperlink for the applications "Facebook", "File Manager", and "Flickr".
  • the processor 102 might display one or more of the generated set of hyperlinks at a location within or substantially within the keyboard corresponding to a subsequent candidate input character, predicted as the next character that a user might input. For example, if a user inputs "G", the hyperlink associated with a generated set of characters "www.globeandmail.com" is displayed on the key for the letter "l"— a subsequent candidate input character for that set of characters. In this case the hyperlink is associated with the Uniform Resource Locator "www.globeandmail.com". A hyperlink associated with a generated set of characters "www.gmail.com" might be displayed on the key for the letter "m"— a subsequent candidate input character for that set of characters (block 430). In this case the hyperlink is associated with the Uniform Resource Locator "www.gmail.com".
  • the processor 102 limits the sets of characters to the top few or chooses among the higher ranked sets of characters. For example, if two sets of characters are both ranked high, and the display of the hyperlinks associated with these sets of characters at block 430 would otherwise be displayed at the same key, the electronic device could be configured to display only the hyperlinks associated with the highest ranked set of characters. In other example embodiments, both hyperlinks are displayed at or around the same key, or one hyperlink is displayed at one key while the second hyperlink is displayed at another key. In some example embodiments, the processor might take into account the display size to limit the number of hyperlinks displayed associated with the selected sets of characters.
  • the ranking is used to choose between two or more hyperlinks, associated with the sets of generated characters, that when displayed on adjacent keys, would overlap with each other (e.g., because of their respective sizes).
  • the electronic device is configured to display the hyperlink associated with the higher ranked set of characters on the keyboard. For example, if the hyperlink associated with the set of characters "www.google.com” is ranked first in a list generated at block 420 after the letter "G" is inputted, "www.google.com” could be displayed at the "O" key. When displayed on a virtual keyboard, however, the hyperlink might occupy some space on the "I” key, potentially blocking a hyperlink that would be displayed on or around that key.
  • if the hyperlink associated with the set of characters "www.google.com" and the hyperlink associated with the set of characters "www.gimmecoffee.com" are equally ranked, the hyperlink associated with the set of characters "www.google.com" could be displayed at the "O" key just above or below the hyperlink associated with the set of characters "www.gimmecoffee.com" displayed at the "I" key.
  • the processor 102 might display one or more of the generated set of hyperlinks at a location within or substantially within the keyboard corresponding to a subsequent candidate input character, predicted as the next character that a user might input (block 430). For example, if a user inputs "F", the hyperlink associated with a generated set of characters "Facebook” is displayed on the key for the letter "a”— a subsequent candidate input character for that set of characters. In this case the hyperlink is associated with the Facebook application. A hyperlink associated with a generated set of characters "Flickr” might be displayed on the key for the letter "L”— a subsequent candidate input character for that set of characters. In this case the hyperlink is associated with the Flickr application.
  • the hyperlinks associated with the selected sets of characters are displayed at or near keys on the virtual keyboard associated with the subsequent candidate input characters.
  • whether a hyperlink is displayed at or near a key depends, for example, on the size of a candidate hyperlink or the number and size of nearby candidate hyperlinks.
  • gaps are provided between rows of keys, wherein a given candidate hyperlink may be displayed in a gap near a key corresponding to a subsequent candidate input character.
  • the selected set of hyperlinks might be displayed substantially within the display area. That is, characters of a displayed hyperlink may be displayed within the virtual keyboard, and characters of the displayed hyperlink may be displayed in another region, such as, for example, a second viewing pane. For example, a key corresponding to a subsequent candidate input character of a set of characters may have a candidate hyperlink displayed within the virtual keyboard on or near the key, and similarly the candidate hyperlink may also be displayed in a second viewing pane simultaneously.
  • Processor 102 causes display of the selected set of hyperlinks in a manner that will attract the user's attention.
  • the appearance of a displayed set of hyperlinks is enhanced or changed in a way that makes the set more readily visible to a user.
  • a set of hyperlinks may be displayed with backlighting, highlighting, underlining, bolding, italicizing, using combinations thereof, or in any other way for making the displayed set of hyperlinks more visible.
  • processor 102 enables a set of hyperlinks displayed at or near the corresponding keys, to be selected for activation within the virtual keyboard at block 440.
  • a hyperlink that is located at or near a key can be chosen to activate an application or launch a webpage, using a Uniform Resource Locator, if a predetermined gesture is detected. For example, a swipe-up gesture may cause an application on the phone to be activated, or the web browser application to open a webpage associated with the generated set of characters.
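The block 440 activation step described above, where a predetermined gesture on a displayed hyperlink either launches an application or opens a web page, can be sketched as follows. The "swipe_up" label, the URL heuristic, and the returned action tuples are illustrative assumptions, not details taken from the patent.

```python
def activate_on_gesture(gesture, target):
    """Dispatch a completed gesture on a displayed hyperlink (block 440).

    A swipe-up either opens a web page for a Uniform Resource Locator
    or activates an application stored on the device. Any other
    gesture leaves the hyperlink untouched.
    """
    if gesture != "swipe_up":
        return None                        # not a recognized activation gesture
    if target.startswith(("www.", "http")):
        return ("open_url", target)        # web browser opens the page
    return ("launch_app", target)          # application on the device is activated

print(activate_on_gesture("swipe_up", "www.globeandmail.com"))
# → ('open_url', 'www.globeandmail.com')
print(activate_on_gesture("swipe_up", "Facebook"))
# → ('launch_app', 'Facebook')
```

In a real system the hyperlink would carry an explicit target type rather than being sniffed from its text, but the dispatch structure is the same.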
  • Fig. 5 shows an example front view of a touch-sensitive display that has gone through the process of predicting and displaying sets of characters, and the corresponding pictures associated with those characters.
  • an electronic device 500 which may be, for example, device 100, has received a character input and displays "Gr" in the first viewing pane 520, with "R” being the most recently selected key.
  • a plurality of sets of characters is displayed in a second viewing pane 510.
  • the sets of characters "Greg Hislop” and "Greg Stark” 530 are displayed in the second viewing pane 510 in lexicographical order from the topmost portion of the second viewing pane 510 to the bottommost portion of the second viewing pane 510.
  • a phone communication service icon is also displayed horizontally in-line with both sets of characters in the second viewing pane 510.
  • Pictures corresponding to "Greg Stark” and “Greg Hislop" are displayed near the "S" and "H” keys respectively.
  • Fig. 6 shows an example front view of a touch-sensitive display that has gone through the process of predicting and displaying application icons, corresponding to the predicted set of applications associated with the input characters.
  • an electronic device 600 which may be, for example, device 100, has received a character input and displays "F" in the first viewing pane 610, with "A" being the most recently selected key.
  • a set of application icons is displayed in a second viewing pane 620.
  • the applications "Facebook", "File Manager", and "Flickr" 630 are displayed in the second viewing pane 620.
  • Hyperlinks corresponding to "Facebook", "File Manager", and "Flickr" are displayed near the "A", "I", and "L" keys respectively.
  • FIG. 7 depicts a flow diagram of an example method 700 for activating a communication service application corresponding to a displayed communication service icon.
  • Method 700 begins at block 710, with processor 102 detecting the start of a gesture input.
  • the start of a gesture input may be detected, for example, when an object, such as a finger, touches a touch-sensitive display.
  • the processor determines that a gesture is being input.
  • the processor determines that the gesture for a communication service icon is being input.
  • the processor determines that the gesture for the communication service icon is complete. This determination may be made, for example, when the object making the select gesture has moved more than a predetermined number of pixels on the touch-sensitive display or the object has been removed from the touch-sensitive display.
  • Method 800 begins at block 810, with processor 102 detecting the start of a gesture input.
  • the start of a gesture input may be detected, for example, when an object, such as a finger, touches a touch-sensitive display.
  • a gesture input may initially be ambiguous. For example, when an object initially touches a touch-sensitive display, a plurality of possible gestures may potentially be entered, such as a single key selection represented by, for example, a tap; a selection of a displayed picture represented by, for example, a swipe up; and a deselection of a previously selected picture represented by, for example, a swipe downward.
  • different movements may also be mapped to the example gestures or other gestures.
  • the processor determines that a gesture is being input.
  • the processor determines that the gesture for a hyperlink is complete. This determination may be made, for example, when the object making the gesture has moved more than a predetermined number of pixels on the touch-sensitive display.
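The disambiguation of an initially ambiguous touch, resolved once the object moves more than a predetermined number of pixels or is removed from the display, can be sketched as below. The 20-pixel threshold, the gesture labels, and the coordinate convention are assumptions for illustration, not values from the patent.

```python
THRESHOLD_PX = 20  # hypothetical minimum travel for a swipe

def classify_gesture(start, end, lifted):
    """Resolve an initially ambiguous touch (blocks 810-830).

    The touch stays ambiguous until the object either moves more than
    a predetermined number of pixels or is removed from the display.
    Coordinates are (x, y) with y increasing downward, as on screens,
    so a decreasing y means the object moved up.
    """
    dy = end[1] - start[1]
    if abs(dy) <= THRESHOLD_PX:
        # Small movement: a tap (single key selection) once lifted.
        return "tap" if lifted else "ambiguous"
    return "swipe_down" if dy > 0 else "swipe_up"

print(classify_gesture((100, 300), (100, 240), lifted=False))  # → swipe_up
```

Mapping the resolved labels to actions (tap selects a key, swipe up selects a displayed picture, swipe down deselects a previous one) then follows the table given earlier in the text.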
  • Fig. 9 shows an example front view of a touch-sensitive display receiving a gesture input 900.
  • an object touches, and holds, the character "S" shown on the touch-sensitive display, indicating the start of a gesture (illustrating an example of block 710 of Fig. 7 and block 810 of Fig. 8).
  • Fig. 10 shows an example front view of a touch-sensitive display receiving a gesture input 1000.
  • an object touches, and holds, the "A" character shown on the touch-sensitive display, indicating the start of a gesture (illustrating an example of block 710 of Fig. 7 and block 810 of Fig. 8).
  • Fig. 11 shows an example front view of a touch-sensitive display receiving a gesture input 1100.
  • an object touches, and holds, the "L" character shown on the touch-sensitive display, indicating the start of a gesture (illustrating an example of block 710 of Fig. 7 and block 810 of Fig. 8).
  • Fig. 12 shows an example front view of a touch-sensitive display receiving a gesture input 900.
  • the object in Fig. 9 has moved from the character "S" upward, without being removed from the touch-sensitive display, and now covers the picture.
  • the processor may begin to display a set of communication service icons near the picture that is displayed on the key displaying the character "S" (illustrating an example of block 340 of Fig. 3).
  • a phone, instant messaging, and/or email communication service icon may be displayed in the set of communication service icons.
  • the processor then enables the displayed set of communication service icons to be activated (illustrating an example of block 350 of Fig. 3).
  • Fig. 13 shows an example front view of a touch-sensitive display receiving a gesture input 900.
  • an object has moved in the up direction, toward the phone communication service icon, from the picture displayed on the key displaying the "S" character.
  • the example system has determined that a gesture, for one of the communication service icons, in this case the phone communication service icon, is being input (illustrating an example of block 730 of Fig. 7).
  • the user might elect to swipe to the left or right to select one of the other communication service icons.
  • Fig. 14 shows an example front view of a touch-sensitive display receiving a gesture input 1000.
  • an object has moved in the up direction, toward the hyperlink, from the "A" character.
  • the example system has determined that a gesture, for the "Facebook" application, is being input (illustrating an example of block 820 of Fig. 8).
  • the user might elect to swipe to the left or right to select one of the other displayed hyperlinks.
  • Fig. 15 shows an example front view of a touch-sensitive display receiving a gesture input 1100.
  • an object has moved in the up direction, toward the hyperlink, from the "L" character.
  • the example system has determined that a gesture, for the Uniform Resource Locator "www.globeandmail.com", is being input (illustrating an example of block 820 of Fig. 8).
  • the user might elect to swipe to the left or right to select one of the other displayed hyperlinks.
  • processor 102 opens/launches/activates the phone communication service application (illustrating an example of block 350 of Fig. 3).
  • processor 102 will open the webpage associated with the Uniform Resource Locator "www.globeandmail.com" (illustrating an example of block 440 of Fig. 4).
  • processor 102 will activate the "Facebook” application (illustrating an example of block 440 of Fig. 4).

Abstract

In one example embodiment, there is disclosed an input method for an electronic device having a touch-sensitive display and a processor. The method includes displaying in the touch-sensitive display a plurality of keys; detecting a selection of one or more of the keys; generating at least one set of characters corresponding to the selected at least one key; and displaying at least one icon associated with a generated character set within the touch-sensitive display. In another example embodiment, an electronic device comprising a processor, a memory, and instructions stored in the memory and capable of performing the method is disclosed.

Description

METHODS AND SYSTEMS FOR PREDICTING ACTIONS ON
VIRTUAL KEYBOARD
FIELD
[0001] Example embodiments relate to processing prediction candidates in connection with virtual keyboards and touch-sensitive displays.
BACKGROUND
[0002] There has been a growing trend for electronic devices, such as computers, netbooks, cellular phones, smart phones, personal digital assistants, and tablets, to include touch-sensitive displays allowing users to enter characters into applications, such as word processors, Short Messaging Service applications, Multimedia Messaging Service applications, and email applications. Entering a set of characters on a touch-sensitive display can be a challenging task because of the limited amount of real estate on a small touch-sensitive display. An example of a typical text entry touch-sensitive display may include two separate areas. The first area is used to display an interactive virtual keyboard having a set of selectable keys, and the second area is used to display selected input associated with the keys.
[0003] Predictive text input resources aid in the input of text to an electronic device. These resources include predicting a word or the name of a contact a user is entering after the user inputs one or more characters of the word or name. The resource provides the user with one or more suggestions for one or more additional characters to complete the word or name being entered. Similarly, some word prediction resources offer users full word or name predictions based on the context of other words or names previously input by the user alone or with the assistance of a word prediction resource.
BRIEF DESCRIPTION OF THE DRAWINGS
[0004] Reference will now be made to the accompanying drawings showing example embodiments of this disclosure. In the drawings:
[0005] Fig. 1 is a simplified block diagram illustrating an example mobile communication device in accordance with an example embodiment.
[0006] Fig. 2 is a flowchart illustrating an example method for predicting and displaying selected sets of characters.
[0007] Fig. 3 is a flowchart illustrating an example method for displaying pictures and communication service icons, corresponding to the selected sets of characters.
[0008] Fig. 4 is a flowchart illustrating an example method for displaying hyperlinks, corresponding to the selected sets of characters.
[0009] Fig. 5 shows an example front view of a touch-sensitive display in accordance with an example embodiment.
[0010] Fig. 6 shows an example front view of a touch-sensitive display in accordance with an example embodiment.
[0011] Fig. 7 is a flowchart illustrating an example method for opening/activating/launching a communication service application.
[0012] Fig. 8 is a flowchart illustrating an example method for opening/activating/launching a hyperlink.
[0013] Fig. 9 shows an example front view of a touch-sensitive display in accordance with an example embodiment.
[0014] Fig. 10 shows an example front view of a touch-sensitive display in accordance with an example embodiment.
[0015] Fig. 11 shows an example front view of a touch-sensitive display in accordance with an example embodiment.
[0016] Fig. 12 shows an example front view of a touch-sensitive display in accordance with an example embodiment.
[0017] Fig. 13 shows an example front view of a touch-sensitive display in accordance with an example embodiment.
[0018] Fig. 14 shows an example front view of a touch-sensitive display in accordance with an example embodiment.
[0019] Fig. 15 shows an example front view of a touch-sensitive display in accordance with an example embodiment.
[0020] Fig. 16 shows an example front view of a touch-sensitive display in accordance with an example embodiment.
[0021] Fig. 17 shows an example front view of a touch-sensitive display in accordance with an example embodiment.
[0022] Fig. 18 shows an example front view of a touch-sensitive display in accordance with an example embodiment.
DETAILED DESCRIPTION
[0023] Touch-sensitive displays that allow a user to enter characters into an application might be divided into a plurality of areas. For example, the touch-sensitive display might be divided into two areas. The first area may be used for displaying an interactive virtual keyboard having a set of selectable keys, and the second area may be used for displaying and selecting input associated with the keys. Dividing the touch-sensitive display in this way forces the user to continuously toggle their attention between the virtual keyboard and the input reflected in the second area, or to consider and select candidate characters of a predictive text process displayed in the second area and thereafter look back at the virtual keyboard to continue typing. Constant shifting of the user's eyes and fingers between the two areas can place undue stress on the user's eyes and fingers, be distracting, require more time to enter text, and otherwise be inefficient. The time a user wastes switching attention between the input area and the virtual keyboard leads to an increased number of processing cycles, with display power being wasted as the processor sits idle.
[0024] In one disclosed example embodiment, a method for an electronic device having a touch-sensitive display and a processor is disclosed. The method includes displaying in a touch-sensitive display a plurality of keys. The method also includes detecting a selection of at least one of the keys. In addition, the method includes generating at least one set of characters corresponding to the selected at least one key. The method also includes displaying at least one icon associated with a generated character set within the touch-sensitive display.
[0025] In another disclosed example embodiment, an electronic device is provided. The electronic device includes a touch-sensitive display and at least one processor in communication with the touch-sensitive display and configured to: display in the touch-sensitive display a plurality of keys; detect a selection of at least one of the keys; generate at least one set of characters corresponding to the selected at least one key; and display at least one icon associated with a generated character set within the touch-sensitive display.
[0026] In another disclosed example embodiment, an electronic device is provided. The electronic device includes a touch-sensitive display, at least one processor in communication with the touch-sensitive display, and at least one memory storing instructions which, when executed by the processor, cause the electronic device to perform the steps of: displaying a plurality of keys in the touch-sensitive display; detecting a selection of at least one of the keys; generating at least one set of characters corresponding to the selected at least one key; and displaying, within the touch-sensitive display, at least one icon associated with a generated set of characters.
[0027] The following detailed description refers to the accompanying drawings. Wherever possible, the same reference numbers are used in the drawings and the following description to refer to the same or similar parts. While several example embodiments are described herein, modifications, adaptations, and other implementations are possible. For example, substitutions, additions, or modifications can be made to the components illustrated in the drawings, and the example methods described herein can be modified by removing, substituting, reordering, or adding blocks to the disclosed methods. Accordingly, the foregoing general description and the following detailed description are example and explanatory only and are not limiting. Instead, the proper scope is defined by the appended claims.
[0028] In addition, numerous specific details are set forth in order to provide a thorough understanding of the example embodiments described herein. However, it will be understood by those of ordinary skill in the art that the example embodiments described herein can be practiced without these specific details. Furthermore, well-known methods, procedures, and components have not been described in detail so as not to obscure the example embodiments described herein.
[0029] The indefinite articles (a or an) and the definite article (the), when used in the specification and claims, are meant to include one or more than one of the objects, activities, or steps that they might qualify, unless otherwise expressly indicated to the contrary. For example, "a" processor can be one or more than one processor.
[0030] Various example embodiments for selecting on-keyboard prediction candidates are disclosed. Example embodiments described herein may receive character inputs; generate sets of predicted characters based on the received character inputs; generate a set of pictures based on the received character inputs; generate a set of applications based on the received character inputs; generate a set of hyperlinks corresponding to the generated set of applications; generate a set of application icons; generate a set of communication service icons for each picture generated; display the sets of predicted characters in a second viewing pane; display the set of communication service icons near the predicted characters; display the set of hyperlinks at or near keys that correspond to the sets of predicted characters; display the set of application icons in a second viewing pane; display the set of pictures at or near keys that correspond to characters in the sets of predicted characters; and display the set of communication service icons next to each picture located at or near the keys corresponding to the characters in the sets of predicted characters. Example embodiments described herein further enable a user to select a communication service application, corresponding to a communication service icon, associated with the predicted contact pictures displayed at or near a key.
[0031] Reference is now made to Fig. 1, which illustrates in detail example communication device 100 in which example embodiments can be applied. Communication device 100 is a two-way communication device having data and voice communication capabilities, and the capability to communicate with other computer systems, for example, via the Internet. Depending on the functionality provided by communication device 100, in various example embodiments communication device 100 can be a handheld device, a multiple-mode communication device configured for both data and voice communication, a smartphone, a mobile telephone, a netbook, a gaming console, a tablet, or a PDA (personal digital assistant) enabled for wireless communication.
[0032] Communication device 100 includes a case (not shown) housing the components of communication device 100. In some example embodiments device 100 has a rectangular shape with two planar sides, although other configurations may be adopted. The internal components of communication device 100 can, for example, be constructed on a printed circuit board (PCB). The description of communication device 100 herein mentions a number of specific components and subsystems. Although these components and subsystems can be realized as discrete elements, the functions of the components and subsystems can also be realized by integrating, combining, or packaging one or more elements in any suitable fashion.
[0033] Communication device 100 includes a controller including at least one processor 102 (such as a microprocessor), which controls the operation of communication device 100. Processor 102 can be a single microprocessor, multiple microprocessors, field programmable gate arrays (FPGAs), digital signal processors (DSPs) capable of executing particular sets of instructions, or any circuit capable of electrically coupling the device subsystems. Processor 102 interacts with device subsystems such as a communication system 104 for exchanging radio frequency signals with a wireless network (for example WAN 144 and/or PLMN 146) to perform communication functions.
[0034] Processor 102 also interacts with additional device subsystems including a display 106 such as a liquid crystal display (LCD) screen or any other appropriate display, input devices 108 such as a keyboard and control buttons, persistent memory 110, random access memory (RAM) 112, read only memory (ROM) 114, auxiliary input/output (I/O) subsystems 116, data port 118 such as a conventional serial data port or a Universal Serial Bus (USB) data port, speaker 120, microphone 122, short-range wireless communications subsystem 124 (which can employ any appropriate wireless (for example, RF), optical, or other short range communications technology), and other device subsystems generally designated as 126. Some of the subsystems shown in Fig. 1 perform communication-related functions, whereas other subsystems can provide "resident" or on-device functions.
[0035] On one planar side of device 100 is the display 106 that can be realized as a touch-sensitive display in some example embodiments. The touch-sensitive display can be constructed using a touch-sensitive input surface coupled to an electronic controller and which overlays the visible element of display 106. The touch-sensitive overlay and the electronic controller provide a touch-sensitive input device and processor 102 interacts with the touch-sensitive overlay via the electronic controller. In some example embodiments the touch-sensitive overlay may extend beyond a display area of display 106 to the edges of the side of communication device 100.
[0036] Communication system 104 includes one or more communication systems for communicating with wireless WAN 144 and wireless access points within the wireless network. The particular design of communication system 104 depends on the wireless network in which communication device 100 is intended to operate. Communication device 100 can send and receive communication signals over the wireless network after the required network registration or activation procedures have been completed.
[0037] Processor 102 operates under stored program control and executes software modules 128 stored in a tangible non-transitory computer-readable storage medium such as persistent memory 110, which can be a flexible disk, a hard disk, a CD-ROM (compact disk-read only memory), a MO (magneto-optical) disk, a DVD-ROM (digital versatile disk-read only memory), a DVD RAM (digital versatile disk-random access memory), or a semiconductor memory. Software modules 128 can also be stored in a computer-readable storage medium such as ROM 114, or any appropriate persistent memory technology, including EEPROM, EAROM, and FLASH. These computer-readable storage mediums store computer-readable instructions for execution by processor 102 to perform a variety of functions on communication device 100.
[0038] Software modules 128 can include operating system software 130, used to control operation of communication device 100. Additionally, software modules 128 can include software applications 132 for providing additional functionality to communication device 100. Software applications 132 can further include a range of applications, including, for example, an e-mail messaging application, an address book, a spell check application, a text prediction application, a notepad application, an Internet browser application, a voice communication (i.e., telephony) application, a mapping application, or a media player application, or any combination thereof. Each of software applications 132 can include layout information defining the placement of particular fields and graphic elements (for example, text fields, input fields, icons, etc.) in the user interface (i.e., display 106) according to the application.
[0039] In some example embodiments, persistent memory 110 stores data 134, including linguistic data stored in a database structure. The linguistic data is used, for example, by one or more of the applications, such as the spell check application and the text prediction application. The linguistic data may include default linguistic data, such as words and/or groups of words, with a corresponding number for each word or group of words indicating the number of words in the word or group of words. The linguistic data may also include custom linguistic data, such as words or groups of words previously entered by the user. The linguistic data may also include data acquired from a device other than communication device 100. In certain example embodiments, data 134 also includes service data including information required by communication device 100 to establish and maintain communication with the wireless network (for example WAN 144 and/or PLMN 146). The data 134 also includes database linkage data including information required by communication device 100 to link linguistic data to pictures stored in persistent memory 110 or in a second external electronic device that communicates with communication device 100 via the Internet.
[0040] In some example embodiments, auxiliary input/output (I/O) subsystems 116 include an external communication link or interface, for example, an Ethernet connection. In some example embodiments, device 100 includes one or more sensors such as an accelerometer, GPS, temperature sensor, and pressure sensor. In some example embodiments, auxiliary I/O subsystems 116 can further include one or more input devices, including a pointing or navigational tool such as an optical trackpad, clickable trackball, scroll wheel or thumbwheel, or one or more output devices, including a mechanical transducer such as a vibrator for providing vibratory notifications in response to various events on communication device 100 (for example, receipt of an electronic message or incoming phone call), or for other purposes such as haptic feedback (touch feedback).
[0041] In some example embodiments, communication device 100 also includes one or more removable memory modules 136 (typically including FLASH memory) and a memory module interface 138. Among possible functions of removable memory module 136 is to store information used to identify or authenticate a subscriber or the subscriber's account to a wireless network (for example WAN 144 or PLMN 146). For example, in conjunction with certain types of wireless networks, such as GSM and successor networks, removable memory module 136 is referred to as a Subscriber Identity Module (SIM). Memory module 136 is inserted in or coupled to memory module interface 138 of communication device 100 in order to operate in conjunction with the wireless network.
[0042] Communication device 100 also includes a battery 140 which furnishes energy for operating communication device 100. Battery 140 can be coupled to the electrical circuitry of communication device 100 through a battery interface 142, which can manage such functions as charging battery 140 from an external power source (not shown) and the distribution of energy to various loads within or coupled to communication device 100. Short-range wireless communications subsystem 124 is an additional optional component that provides for communication between communication device 100 and different systems or devices, which need not necessarily be similar devices. For example, short-range wireless communications subsystem 124 can include an infrared device and associated circuits and components, or a wireless bus protocol compliant communication device such as a BLUETOOTH® communication module to provide for communication with similarly-enabled systems and devices.
[0043] A predetermined set of applications that control basic device operations, including data and possibly voice communication applications can be installed on communication device 100 during or after manufacture. Additional applications or upgrades to operating system software 130 or software applications 132 can also be loaded onto communication device 100 through the wireless network (for example WAN 144 and/or PLMN 146), auxiliary I/O subsystem 116, data port 118, short-range wireless communication subsystem 124, or other suitable subsystems 126. The downloaded programs or code modules can be permanently installed, for example, written into the persistent memory 110, or written into and executed from RAM 112 for execution by processor 102 at runtime.
[0044] Communication device 100 provides, for example, three modes of communication: a data communication mode, a voice communication mode, and a video communication mode. In the data communication mode, a received data signal such as a text message, an e-mail message, a Web page download, or an image file is processed by communication system 104 and input to processor 102 for further processing. For example, a downloaded Web page can be further processed by a browser application, or an e-mail message can be processed by an e-mail messaging application and output to display 106. A user of communication device 100 can also compose data items, such as e-mail messages, for example, using the input devices, such as auxiliary I/O subsystem 116, in conjunction with display 106. These composed items can be transmitted through communication system 104 over the wireless network (for example WAN 144 or PLMN 146). In the voice communication mode, communication device 100 provides telephony functions and operates as a typical cellular phone. In the video communication mode, communication device 100 provides video telephony functions and operates as a video teleconference terminal, utilizing one or more cameras (not shown) to capture video of the video teleconference.
[0045] Reference is now made to Fig. 2, which depicts a flow diagram of an example method 200 for predicting and displaying sets of characters. Method 200 begins by receiving an input of one or more characters (block 210). For example, processor 102 of device 100 may receive characters from a virtual keyboard displayed on a touch-sensitive display. The virtual keyboard has one or more rows of selectable keys, where each key is assigned a location in a row, each key is assigned a character, and each character is displayed on the assigned key. For example, the virtual keyboard is configured in accordance with the QWERTY keyboard layout. Key selection on the virtual keyboard causes character input. The inputted characters may be displayed in an input field that displays characters input using the virtual keyboard. As used herein, a character can be any alphanumeric character set in device 100, including, for example, a letter, a number, a symbol, a space, or a punctuation mark. According to method 200, at block 220, the processor 102 generates one or more sets of characters, including for example, words, acronyms, names, abbreviations, or any combination thereof based on the input received at block 210. 
The sets of characters may include, for example: sets of characters stored in a contact list in a memory of the electronic device; sets of characters stored in another contact list in a memory of a second external electronic device that communicates with the electronic device via the Internet; sets of characters previously inputted by a user; sets of characters associated with an application stored in a memory of the electronic device or of a secondary external electronic device that communicates with the electronic device via the Internet; sets of characters associated with a Uniform Resource Locator; sets of characters based on a hierarchy or tree structure; or any other sets of characters selected by a processor based on a predefined arrangement. For example, using a text prediction application, the processor may access data 134, including locally or remotely stored contact list data, to identify the one or more sets of characters at block 220.
[0046] In some example embodiments, the sets of characters generated at block 220 can begin with the same character received as input at block 210. For example, if the character "G" has been received as input using a virtual keyboard, the character will be received by the processor as the character input. In such example embodiments, the sets of characters generated at block 220 would all begin with "G", such as "Greg Hislop", "Gretchen Colfax", "Gunther Cole", or "Greg Stark." Similarly, if the user has input the characters "Gr", the sets of characters generated at block 220 would all begin with "Gr", such as "Greg Hislop", "Gretchen Colfax", or "Greg Stark".
[0047] In some example embodiments, the sets of characters generated at block 220 may include the characters received as input at block 210 in any position. For example, if the received input is an "s", and the user has already input the characters "Gr", and the processor has generated the sets of characters "Gretchen Colfax", "Greg Hislop", and "Greg Stark", the processor may prune the generated sets of characters to "Greg Stark" when the character "s" is received at block 210.
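For illustration only, the candidate generation and pruning described in paragraphs [0046] and [0047] can be sketched in Python. The contact names are taken from the examples above; the function name and the exact matching rules are assumptions made for the sketch, not the claimed implementation:

```python
def generate_candidates(contacts, typed):
    """Return contacts matching the typed characters (blocks 210-220).

    A contact matches if its name starts with the typed prefix, or if
    the trailing typed characters begin a later word in the name
    (e.g. typed "Grs" matches "Greg Stark" via "Gr" + "S...").
    """
    typed = typed.lower()
    matches = []
    for name in contacts:
        words = name.lower().split()
        # Direct prefix match on the full name.
        if name.lower().startswith(typed):
            matches.append(name)
            continue
        # Otherwise, split the typed characters into a prefix of the
        # first word plus a prefix of a later word ("any position").
        for i in range(1, len(typed)):
            head, tail = typed[:i], typed[i:]
            if words[0].startswith(head) and any(
                w.startswith(tail) for w in words[1:]
            ):
                matches.append(name)
                break
    return matches

contacts = ["Greg Hislop", "Gretchen Colfax", "Gunther Cole", "Greg Stark"]
print(generate_candidates(contacts, "Gr"))   # three "Gr..." contacts
print(generate_candidates(contacts, "Grs"))  # pruned to ["Greg Stark"]
```

Under this rule, inputting "s" after "Gr" prunes the candidate list to "Greg Stark", mirroring the example in paragraph [0047].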
In some example embodiments, the processor 102 uses contextual data for generating sets of characters. Contextual data considers the context of characters in the input field. Contextual data may include information about, for example, a set of characters previously inputted by a user, grammatical attributes of the characters inputted in the input field (e.g., whether a noun or a verb is needed as the next set of characters in a sentence), or any combination thereof. For example, if the set of characters "the" has already been inputted into the input field, the processor may use the contextual data to determine that a noun— instead of a verb— will most likely be the next set of characters after "the." Likewise, if the set of characters "The Crab" was inputted, based on the context, the processor may determine the subsequent set of characters is likely "Shack." Using the contextual data, the processor may also determine whether an inputted character was incorrect. For example, the processor may determine that an inputted character was supposed to be a "w" instead of an "a", given the proximity of these characters on a QWERTY virtual keyboard.
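The use of contextual data to predict the next set of characters, as described above, can be sketched as a simple table lookup. The table entries below are invented for illustration (standing in for stored linguistic data), as is the function name:

```python
# A toy context table standing in for stored linguistic data; the
# entries are invented for illustration only.
BIGRAMS = {
    "the": ["crab", "house", "car"],  # nouns likely to follow "the"
    "the crab": ["shack"],
    "i": ["am", "will", "think"],
}

def next_word_candidates(input_field):
    """Suggest likely next sets of characters from the preceding context."""
    words = input_field.lower().split()
    # Prefer the longest matching context (e.g. "the crab" over "crab").
    for n in (2, 1):
        key = " ".join(words[-n:])
        if key in BIGRAMS:
            return BIGRAMS[key]
    return []

print(next_word_candidates("The Crab"))  # ["shack"]
```

This mirrors the "The Crab" → "Shack" example: the processor consults context (here, the last one or two inputted words) rather than only the characters of the current word.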
[0048] In addition, in some example embodiments, the sets of characters generated at block 220 may include characters assigned to the same key on a virtual keyboard as the characters input at block 210. For example, if a given key on a virtual keyboard can be used to input either "a" or "b", and the received input at block 210 is an "a", the processor may generate "Adonis Jones" or "Bacchus Smith" as sets of characters. The same output results whether the character is input in upper or lower case.
[0049] In addition, in some example embodiments, the characters input at block 210 are representative of a prefix and the sets of characters generated at block 220 are representative of a completion text portion. For example, if the input characters (i.e., the prefix) are "Adam", the sets of characters (i.e., the completion text portions) can be "Johnson", "Williams", "Brown", corresponding in the contact list to objects "Adam Johnson", "Adam Williams", and "Adam Brown". In some example embodiments, the completion text portions are found by searching the contact list data source for the prefix combined with possible completion text portions.
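The prefix/completion search of paragraph [0049] can be sketched as follows; the function name and return convention (completion portions only, without the prefix) are assumptions for illustration:

```python
def completions(contacts, prefix):
    """Search the contact list for names beginning with the prefix and
    return only the completion text portions (block 220)."""
    prefix = prefix.strip()
    out = []
    for name in contacts:
        # Match "<prefix> <completion>" entries, case-insensitively.
        if name.lower().startswith(prefix.lower() + " "):
            out.append(name[len(prefix) + 1:])
    return out

contacts = ["Adam Johnson", "Adam Williams", "Adam Brown", "Greg Stark"]
print(completions(contacts, "Adam"))  # ["Johnson", "Williams", "Brown"]
```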
[0050] At block 230, the processor 102 ranks the generated sets of characters.
The ranking may reflect, for example, the likelihood that a candidate set of characters is intended by a user or the likelihood that a candidate set of characters will be chosen by a user compared to another candidate set of characters. In some example embodiments, the ranking may be based on contextual data. For example, in some example embodiments, contextual data may include information about which programs or applications are currently running or being used on the electronic device. For example, if the device is running an email application, then sets of characters associated with that device's email application, such as sets of characters from the device's contact list, can be used to determine the ranking. N-grams, such as unigrams, bigrams, and trigrams, may be also used in the ranking of the sets of characters. The geolocation of the electronic device may also be used in the ranking process. If, for example, the electronic device recognizes that it is located at a user's office, then sets of characters generally associated with work may be ranked higher in the list. If, on the other hand, the device determines that the device is not located at the user's office, then sets of characters generally not associated with the user's office may be ranked higher in the list.
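The ranking of block 230 can be sketched as a scoring function. The weights, context fields, and function name below are invented for illustration; an actual implementation could equally use n-gram statistics as noted above:

```python
def rank_candidates(candidates, context):
    """Order candidate sets of characters (block 230).

    Scores combine a base usage frequency with boosts for contacts tied
    to the currently running application and to the device's location.
    The weights here are arbitrary illustration values.
    """
    def score(c):
        s = context["frequency"].get(c, 0)
        if c in context.get("app_contacts", ()):       # e.g. email contacts
            s += 10
        if c in context.get("location_contacts", ()):  # e.g. work contacts
            s += 5
        return s

    return sorted(candidates, key=score, reverse=True)

context = {
    "frequency": {"Greg Stark": 3, "Greg Hislop": 7, "Gretchen Colfax": 1},
    "app_contacts": {"Greg Stark"},        # currently running email app
    "location_contacts": {"Greg Stark"},   # device located at the office
}
ranked = rank_candidates(["Greg Hislop", "Gretchen Colfax", "Greg Stark"], context)
print(ranked)  # ["Greg Stark", "Greg Hislop", "Gretchen Colfax"]
```

Here "Greg Stark" outranks the more frequently used "Greg Hislop" because both contextual boosts apply, illustrating how contextual data can override raw frequency.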
[0051] At block 240, the processor 102 selects which of the sets of characters to display based on the ranking. For example, in some example embodiments, higher ranked sets of characters are more likely to be selected than lower ranked sets of characters. In some example embodiments, the processor 102 limits the displayed sets of characters to the top few or chooses among the higher ranked sets of characters. In addition, in some example embodiments, as described in further detail below, subsequent candidate input characters of the sets of characters affect how processor 102 selects which of the sets of characters to display, in accordance with the method 200. The processor 102 can also display pictures associated with the sets of characters selected for display. In yet other example embodiments, the processor 102 might display hyperlinks associated with the sets of characters selected for display.
[0052] At block 250, in some example embodiments, the processor 102 displays one or more of the selected sets of characters in a second viewing pane corresponding to a subsequent candidate input character, predicted as the next character that a user might input. For example, if a user inputs "Gr", the sets of characters "Greg Stark", "Gretchen Colfax", "Greg Hislop" are displayed in a second viewing pane. In some example embodiments, a subsequent candidate input character is any alphanumeric character, such as, for example, a letter, number, symbol, or punctuation mark.
[0053] Processor 102, in some example embodiments, causes display of the one or more selected sets of characters in a manner that will attract the user's attention. In some example embodiments, the appearance of a displayed set of characters is enhanced or changed in a way that makes the set more readily visible to a user. For example, a set of characters may be displayed with backlighting, highlighting, underlining, bolding, italicizing, combinations thereof, or in any other way for making the displayed set of characters more visible.
[0054] Processor 102 might also apply backlighting, highlighting, some combination thereof, or any other technique for making the displayed set of communication service icons more visible. [0055] According to method 200, at block 260, processor 102 enables a set of communication service icons in a second viewing pane, associated with the sets of characters displayed at block 250, to be chosen to activate a communication service application.
[0056] Method 300 (illustrated in Fig. 3) begins at block 310 by receiving an input of one or more characters. (Block 310 is the same as block 210 discussed above with respect to Fig. 2.)
[0057] At block 312, processor 102 generates one or more sets of characters, including for example, words, acronyms, names, abbreviations, or any combination thereof based on the input received at block 310. (Block 312 is the same as block 220 discussed above with respect to Fig. 2.)
[0058] At block 314, the processor 102 ranks the generated sets of characters.
(Block 314 is the same as block 230 discussed above with respect to Fig. 2.)
[0059] At block 316, the processor 102 selects generated sets of characters for display based on the ranking. (Block 316 is the same as block 240 discussed above with respect to Fig. 2.)
[0060] At block 320, processor 102 generates a set of pictures, each of which corresponds to the characters selected at block 316. The set of pictures may include, for example, pictures stored in a contact list in a memory of the electronic device, pictures stored in another contact list stored in a memory of a second external electronic device that communicates with the electronic device via the Internet, pictures previously selected by the user, or any other set of pictures that are selected by a processor based on a predefined arrangement. A picture can be an image of a person, an object, a place, a symbol, or a character, and might be presented as a normal picture or as some modification of the picture (e.g., a silhouette). As an example, a contact prediction application may be used by the processor to access data 134, including the stored contact list data, to identify the set of pictures at block 320.
[0061] At block 320, each picture in the set of pictures corresponds to a set of characters selected at block 316. For example, if the character "G" has been selected, a set of contact pictures corresponding to contacts in the contact list that begin with the character "G" will be generated at block 320. In such example embodiments, each picture in the set of pictures generated at block 320 would correspond to a unique set of characters selected at block 316. Therefore, the processor will generate a unique picture for each of the sets of characters "Greg Hislop", "Gretchen Colfax", "Gunther Cole", and "Greg Stark".
[0062] In some example embodiments, the sets of characters generated at block 312 may include the characters received as input at block 310 in any position. For example, if the received input is an "s", and the user has already input the characters "Gr", and the processor has generated the sets of characters "Gretchen Colfax", "Greg Hislop", and "Greg Stark", the processor may prune the generated sets of characters to "Greg Stark" when the character "s" is received at block 310. In some example embodiments, a set of pictures is generated at block 320, and the set is pruned with each subsequent character that is inputted at block 310. Each picture in the set of pictures generated at block 320 corresponds to a set of characters selected at block 316.
[0063] In some example embodiments, the sets of characters generated at block 312 may include characters assigned to the same key on a virtual keyboard as the characters input at block 310. For example, if a given key on a virtual keyboard can be used to input either "a" or "b", and the received input at block 310 is an "a", the processor may generate "Adonis Jones" or "Bacchus Smith" as sets of characters. In some example embodiments, a picture corresponding to the sets of characters "Adonis Jones" and "Bacchus Smith" is generated at block 320. The same output results whether the character is input in upper or lower case.
[0064] At block 330, in some example embodiments, the processor 102 will display one or more of the generated set of pictures at a location within or substantially within the keyboard corresponding to a subsequent candidate input character, predicted as the next character that a user might input. For example, if a user inputs "Gr", the picture associated with a generated set of characters "Greg Stark" is displayed on the key for the letter "s"— a subsequent candidate input character for that set of characters.
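One plausible rule for choosing the key on which a candidate's picture is displayed, following the "s" and "h" examples in this description (the first letter of the last name once the first name is partially typed), can be sketched as follows. The function name and edge-case handling are assumptions for illustration:

```python
def subsequent_key(candidate, typed):
    """Return the key at which to display the candidate's picture
    (block 330): a subsequent candidate input character, here taken as
    the first letter of the next word of the candidate name."""
    cand = candidate.lower()
    typed = typed.lower()
    if not cand.startswith(typed):
        return None
    remainder = cand[len(typed):]
    # Skip the rest of the current word to the next word's initial,
    # mirroring the "first letter of the last name" example.
    if " " in remainder:
        return remainder.split(" ", 1)[1][0]
    return remainder[0] if remainder else None

print(subsequent_key("Greg Stark", "Gr"))   # "s"
print(subsequent_key("Greg Hislop", "Gr"))  # "h"
```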
[0065] When identifying the sets of characters for display at block 316, in some example embodiments the processor 102 limits the displayed sets of characters to the top few or chooses among the higher ranked sets of characters. For example, if two sets of characters are both ranked high, and the display of the pictures associated with these sets of characters at block 330 would otherwise be displayed at the same key, the electronic device could be configured to display only the picture associated with the highest ranked set of characters. In other example embodiments, both pictures are displayed at or around the same key, or one picture is displayed at one key while the second picture is displayed at another key. In some example embodiments, the processor takes into account the display size to limit the number of pictures displayed associated with the selected sets of characters.
[0066] In some example embodiments, the ranking is used to choose between two or more pictures, associated with the sets of generated characters, that when displayed on adjacent keys would overlap with each other (e.g., because of their respective sizes). In such example embodiments, the electronic device is configured to display the picture associated with the higher ranked set of characters on the keyboard. For example, if the picture associated with the set of characters "Greg Hislop" is ranked first in a list generated at block 320 after the letter "G" is inputted, "Greg Hislop" could be displayed at the "R" key. When displayed on a virtual keyboard, however, the picture might occupy some space on the "E" key, potentially blocking a picture that would be displayed on or around that key. Thus, at block 320, it is determined that the picture associated with "Greg Hislop" would be displayed fully, and no other pictures would be placed at the "E" key ahead of the first ranked picture associated with "Greg Hislop." An alternative to displaying only the top ranked picture associated with the set of characters would be to use smaller pictures that can be magnified when selected by a gesture, effectively permitting a greater number of pictures, associated with the sets of characters, to be displayed within or mostly within the boundaries of a single key simultaneously with other sets of characters on adjacent keys of a virtual keyboard.
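The overlap-resolution behaviour of paragraph [0066], where a higher ranked picture blocks lower ranked pictures on the same or adjacent keys, can be sketched as a greedy placement over the ranked list. The function name, adjacency table, and data shapes are assumptions for illustration:

```python
def place_pictures(ranked_candidates, key_for, adjacent):
    """Place candidate pictures on keys, dropping any picture whose key
    is the same as, or adjacent to, an already placed higher ranked
    picture (block 330)."""
    placed = {}  # key -> candidate
    for cand in ranked_candidates:  # highest ranked first
        key = key_for[cand]
        blocked = key in placed or any(
            k in placed for k in adjacent.get(key, ())
        )
        if not blocked:
            placed[key] = cand
    return placed

key_for = {"Greg Hislop": "r", "Gretchen Colfax": "e"}
adjacent = {"r": ("e", "t"), "e": ("w", "r")}
# "Greg Hislop" is ranked first, so its picture occupies the "R" key
# and spills onto "E"; the picture for "Gretchen Colfax" is not placed.
print(place_pictures(["Greg Hislop", "Gretchen Colfax"], key_for, adjacent))
```

The alternative noted above, using smaller magnifiable pictures, would relax the adjacency check so that neighbouring keys could each hold a picture.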
[0067] In some example embodiments, the pictures associated with the selected sets of characters are displayed at or near keys on the virtual keyboard associated with the subsequent candidate input characters. In some example embodiments, whether a picture is displayed at or near a key depends, for example, on the size of a candidate picture or the number and size of nearby candidate pictures. In some example embodiments, gaps are provided between rows of keys, wherein a given candidate picture may be displayed in a gap near a key corresponding to a subsequent candidate input character. In some example embodiments, the selected set of pictures might be displayed substantially within the display area. That is, some pixels of a displayed picture may be displayed within the virtual keyboard, while other pixels of the same picture may be displayed in another region, such as, for example, a second viewing pane. For example, a key corresponding to a subsequent candidate input character of a set of characters may have a candidate picture displayed within the virtual keyboard on or near the key, and the candidate picture may also be displayed in a second viewing pane simultaneously.
[0068] Processor 102, in some example embodiments, causes display of the one or more selected sets of pictures in a manner that will attract the user's attention. In some example embodiments, the appearance of a displayed picture is enhanced or changed in a way that makes it more readily visible to a user. For example, a picture may be displayed with backlighting, highlighting, combinations thereof, or in any other way for making the displayed picture more visible. The picture may also be magnified.
[0069] Similarly, the picture associated with another generated set of characters "Greg Hislop" is displayed on the key for the letter "h"— a subsequent candidate input character of the generated set of characters "Greg Hislop" if "Gr" has already been entered. In other words, a subsequent candidate input in this example embodiment would be the first letter of the last name. The processor 102 then displays one or more communication service icons at block 340 at or near the picture displayed at block 330.
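One way to read the examples above is that a prediction's picture is displayed at the key of its first character that distinguishes it from the other matching predictions (the last-name initial "h" once "Gr" has been typed, since both predictions share "Greg "). A rough sketch of that interpretation (an assumption; the text does not fix a single placement rule):

```python
def candidate_keys(typed, candidates):
    # Map each matching prediction to the key where its picture would be
    # displayed: the first character beyond the longest prefix that all
    # matching predictions share (i.e., the first distinguishing character).
    matches = [c for c in candidates if c.lower().startswith(typed.lower())]
    lcp = len(typed)
    while len(matches) > 1 and all(
        len(m) > lcp and m[lcp].lower() == matches[0][lcp].lower()
        for m in matches
    ):
        lcp += 1
    return {m: m[lcp].upper() for m in matches if len(m) > lcp}

# After "Gr", "Greg Hislop" is shown at the "H" key and "Greg Stark" at the
# "S" key; after "G" alone, a lone prediction sits at its next character, "R".
print(candidate_keys("Gr", ["Greg Hislop", "Greg Stark"]))
print(candidate_keys("G", ["Greg Hislop"]))
```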
[0070] Processor 102 might also display the communication service icons with backlighting, highlighting, some combination thereof, or in any other way that makes the displayed set of communication service icons more visible.
[0071] According to method 300, processor 102 enables a set of communication service icons displayed at or near the set of pictures, at block 340, to be chosen to activate a communication service application within the virtual keyboard (block 350).
[0072] Method 400 (illustrated in Fig. 4) begins at block 410 by receiving an input of one or more characters. (Block 410 is the same as block 210 discussed above with respect to Fig. 2.)

[0073] At block 412, processor 102 generates one or more sets of characters, including, for example, words, acronyms, names, abbreviations, or any combination thereof, based on the input received at block 410. (Block 412 is the same as block 220 discussed above with respect to Fig. 2.)
[0074] At block 414, the processor 102 ranks the generated sets of characters.
(Block 414 is the same as block 230 discussed above with respect to Fig. 2.)
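The ranking at block 414 could be driven by any usage signal; a minimal sketch, assuming a hypothetical per-entry use count (the text does not specify the ranking criterion):

```python
def rank_candidates(candidates, use_counts):
    # Order the generated sets of characters by descending use count,
    # breaking ties alphabetically. `use_counts` is a hypothetical usage
    # history; any other ranking signal could be substituted.
    return sorted(candidates, key=lambda c: (-use_counts.get(c, 0), c))

print(rank_candidates(["Greg Stark", "Greg Hislop"],
                      {"Greg Hislop": 5, "Greg Stark": 2}))
```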
[0075] At block 416, the processor 102 selects which of the generated sets of characters to display, based on the ranking at block 414.
(Block 416 is the same as block 240 discussed above with respect to Fig. 2.)
[0076] At block 420, the processor 102 generates a set of hyperlinks, each of which corresponds to the characters selected at block 416. The set of hyperlinks may include, for example, hyperlinks to an application stored in a memory of the electronic device, hyperlinks to an application stored in a memory of a secondary external electronic device that communicates with the electronic device via the Internet, hyperlinks previously selected by the user, hyperlinks associated with a Uniform Resource Locator, or any other set of hyperlinks that are selected by the processor based on a predefined arrangement.
[0077] At block 420, each hyperlink in the set of hyperlinks corresponds to a set of characters selected at block 416. For example, if the character "G" has been received as input using a virtual keyboard at block 410, the characters will be received by the processor as the character input, and a set of hyperlinks corresponding to Uniform Resource Locators that begin with the character "G" will be generated at block 420. In such embodiments, each hyperlink in the set of hyperlinks generated at block 420 would correspond to a unique set of characters selected at block 416. Therefore the processor will generate a unique hyperlink for each of the Uniform Resource Locators "www.globeandmail.com" and "www.gmail.com". In some other example embodiments, the hyperlinks associated with the sets of characters selected at block 416 correspond to applications stored on the electronic device or an external electronic device that communicates with the electronic device via the Internet. For example, if the character "F" has been selected at block 416, a set of hyperlinks corresponding to the applications that begin with the character "F" will be generated at block 420. In such example embodiments, each hyperlink in the set of hyperlinks generated at block 420 would correspond to a unique set of characters selected at block 416. Therefore the processor will generate a unique hyperlink for each of the applications "Facebook", "File Manager", and "Flickr".
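The matching at block 420 might look like the following sketch, which assumes (as the examples suggest) that a leading "www." is ignored when comparing a Uniform Resource Locator against the typed characters:

```python
def generate_hyperlinks(typed, urls, app_names):
    # Collect hyperlinks whose target begins with the typed characters.
    # Stripping a leading "www." before matching is an assumption drawn
    # from the "G" -> "www.globeandmail.com" example; the text does not
    # specify how URLs are normalized.
    def stem(s):
        return s[4:] if s.lower().startswith("www.") else s
    t = typed.lower()
    hits = [u for u in urls if stem(u).lower().startswith(t)]
    hits += [a for a in app_names if a.lower().startswith(t)]
    return hits

print(generate_hyperlinks("g", ["www.globeandmail.com", "www.gmail.com",
                                "www.cbc.ca"], []))
print(generate_hyperlinks("f", [], ["Facebook", "File Manager",
                                    "Flickr", "Maps"]))
```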
[0078] At block 430, in other example embodiments, the processor 102 might display one or more of the generated set of hyperlinks at a location within or substantially within the keyboard corresponding to a subsequent candidate input character, predicted as the next character that a user might input. For example, if a user inputs "G", the hyperlink associated with a generated set of characters "www.globeandmail.com" is displayed on the key for the letter "l"— a subsequent candidate input character for that set of characters. In this case the hyperlink is associated with the Uniform Resource Locator "www.globeandmail.com". A hyperlink associated with a generated set of characters "www.gmail.com" might be displayed on the key for the letter "m"— a subsequent candidate input character for that set of characters (block 430). In this case the hyperlink is associated with the Uniform Resource Locator "www.gmail.com".
[0079] When identifying the sets of characters at block 416, in some example embodiments the processor 102 limits the sets of characters to the top few or chooses among the higher ranked sets of characters. For example, if two sets of characters are both ranked high, and the display of the hyperlinks associated with these sets of characters at block 430 would otherwise be displayed at the same key, the electronic device could be configured to display only the hyperlinks associated with the highest ranked set of characters. In other example embodiments, both hyperlinks are displayed at or around the same key, or one hyperlink is displayed at one key while the second hyperlink is displayed at another key. In some example embodiments, the processor might take into account the display size to limit the number of hyperlinks displayed associated with the selected sets of characters.
[0080] In some example embodiments, the ranking is used to choose between two or more hyperlinks, associated with the sets of generated characters, that when displayed on adjacent keys, would overlap with each other (e.g., because of their respective sizes). In such example embodiments, the electronic device is configured to display the hyperlink associated with the higher ranked set of characters on the keyboard. For example, if the hyperlink associated with the set of characters "www.google.com" is ranked first in a list generated at block 420 after the letter "G" is inputted, "www.google.com" could be displayed at the "O" key. When displayed on a virtual keyboard, however, the hyperlink might occupy some space on the "I" key, potentially blocking a hyperlink that would be displayed on or around that key. Thus, at block 420, it is determined that the hyperlink associated with "www.google.com" would be displayed fully, and no other hyperlinks would be placed at the "I" key ahead of the first ranked hyperlink associated with "www.google.com." An alternative to displaying only the top ranked hyperlink associated with the set of characters would be to stagger the placement of the hyperlinks along the length of the keys. For example, if the hyperlink associated with the set of characters "www.google.com" and the hyperlink associated with the set of characters "www.gimmecoffee.com" are equally ranked, the hyperlink associated with the set of characters "www.google.com" could be displayed at the "O" key just above or below the hyperlink associated with the set of characters "www.gimmecoffee.com" displayed at the "I" key, effectively permitting a greater number of hyperlinks, associated with the sets of characters, to be displayed within or mostly within the boundaries of a single key simultaneously with other hyperlinks on adjacent keys of a virtual keyboard.
At block 430, in yet another example embodiment, the processor 102 might display one or more of the generated set of hyperlinks at a location within or substantially within the keyboard corresponding to a subsequent candidate input character, predicted as the next character that a user might input. For example, if a user inputs "F", the hyperlink associated with a generated set of characters "Facebook" is displayed on the key for the letter "a"— a subsequent candidate input character for that set of characters. In this case the hyperlink is associated with the Facebook application. A hyperlink associated with a generated set of characters "Flickr" might be displayed on the key for the letter "l"— a subsequent candidate input character for that set of characters. In this case the hyperlink is associated with the Flickr application.
[0081] In some example embodiments, the hyperlinks associated with the selected sets of characters are displayed at or near keys on the virtual keyboard associated with the subsequent candidate input characters. In some example embodiments, whether a hyperlink is displayed at or near a key depends, for example, on the size of a candidate hyperlink or the number and size of nearby candidate hyperlinks. In some example embodiments, gaps are provided between rows of keys, wherein a given candidate hyperlink may be displayed in a gap near a key corresponding to a subsequent candidate hyperlink.
[0082] In some example embodiments, the selected set of hyperlinks might be displayed substantially within the display area. That is, some characters of a displayed hyperlink may be displayed within the virtual keyboard, while other characters of the displayed hyperlink may be displayed in another region, such as, for example, a second viewing pane. For example, a key corresponding to a subsequent candidate input character of a set of characters may have a candidate hyperlink displayed within the virtual keyboard on or near the key, and similarly the candidate hyperlink may also be displayed in a second viewing pane simultaneously.
[0083] Processor 102, in some example embodiments, causes display of the selected set of hyperlinks in a manner that will attract the user's attention. In some example embodiments, the appearance of a displayed set of hyperlinks is enhanced or changed in a way that makes the set more readily visible to a user. For example, a set of hyperlinks may be displayed with backlighting, highlighting, underlining, bolding, italicizing, using combinations thereof, or in any other way for making the displayed set of hyperlinks more visible.
[0084] According to method 400, at block 430, processor 102 enables a set of hyperlinks displayed at or near the corresponding keys to be selected for activation within the virtual keyboard at block 440.
[0085] In some example embodiments, a hyperlink that is located at or near a key, can be chosen to activate an application or launch a webpage, using a Uniform Resource Locator, if a predetermined gesture is detected. For example, a swipe-up gesture may cause an application on the phone to be activated, or the web browser application to open a webpage associated with the generated set of characters.
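The activation described above could dispatch on the hyperlink's target roughly as follows (a sketch; `launch_app` and `open_url` stand in for whatever platform services the device provides, and the "www."/"://" heuristic for spotting a Uniform Resource Locator is an assumption):

```python
def on_gesture_complete(gesture, target, launch_app, open_url):
    # A completed swipe-up on a displayed hyperlink either opens a webpage
    # (for a Uniform Resource Locator) or activates an application; any
    # other gesture leaves the hyperlink untouched.
    if gesture != "swipe_up":
        return None
    if target.startswith("www.") or "://" in target:
        return open_url(target)
    return launch_app(target)

# A swipe-up on "www.globeandmail.com" opens the browser; on "Facebook"
# it activates the application; a plain tap does neither.
print(on_gesture_complete("swipe_up", "www.globeandmail.com",
                          lambda a: ("app", a), lambda u: ("web", u)))
```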
[0086] Fig. 5 shows an example front view of a touch-sensitive display that has gone through the process of predicting and displaying sets of characters, and the corresponding pictures associated with those characters. As depicted in Fig. 5, an electronic device 500, which may be, for example, device 100, has received a character input and displays "Gr" in the first viewing pane 520, with "R" being the most recently selected key. As depicted in Fig. 5, a plurality of sets of characters is displayed in a second viewing pane 510. For example, the sets of characters "Greg Hislop" and "Greg Stark" 530 are displayed in the second viewing pane 510 in lexicographical order from the topmost portion of the second viewing pane 510 to the bottommost portion of the second viewing pane 510. A phone communication service icon is also displayed horizontally in-line with both sets of characters in the second viewing pane 510. Pictures corresponding to "Greg Stark" and "Greg Hislop" are displayed near the "S" and "H" keys respectively.
[0087] Fig. 6 shows an example front view of a touch-sensitive display that has gone through the process of predicting and displaying application icons, corresponding to the predicted set of applications associated with the input characters. As depicted in Fig. 6, an electronic device 600, which may be, for example, device 100, has received a character input and displays "F" in the first viewing pane 610, with "A" being the most recently selected key. As depicted in Fig. 6, a set of application icons is displayed in a second viewing pane 620. For example, the applications "Facebook", "File Manager", and "Flickr" 630 are displayed in the second viewing pane 620. Hyperlinks corresponding to "Facebook", "File Manager", and "Flickr" are displayed near the "A", "I", and "L" keys respectively.
[0088] Reference is now made to Fig. 7, which depicts a flow diagram of an example method 700 for activating a communication service application corresponding to a displayed communication service icon. Method 700 begins at block 710, with processor 102 detecting the start of a gesture input. The start of a gesture input may be detected, for example, when an object, such as a finger, touches a touch-sensitive display. At block 720, the processor determines that a gesture is being input. At block 730, the processor determines that the gesture for a communication service icon is being input.
[0089] At block 740, the processor determines that the gesture for the communication service icon is complete. This determination may be made, for example, when the object making the select gesture has moved more than a predetermined number of pixels on the touch-sensitive display or the object has been removed from the touch-sensitive display.
[0090] Reference is now made to Fig. 8, which depicts a flow diagram of an example method 800 for activating a hyperlink corresponding to a Uniform Resource Locator or an application. Method 800 begins at block 810, with processor 102 detecting the start of a gesture input. The start of a gesture input may be detected, for example, when an object, such as a finger, touches a touch-sensitive display.
[0091] A gesture input may initially be ambiguous. For example, when an object initially touches a touch-sensitive display, a plurality of possible gestures may potentially be entered, such as a single key selection represented by, for example, a tap; a selection of a displayed picture represented by, for example, a swipe up; and a deselection of a previously selected picture represented by, for example, a swipe downward. However, different movements may also be mapped to the example gestures or other gestures.
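The ambiguity resolution sketched above — tap versus swipe up versus swipe down — can be expressed as a small classifier over the touch path (the pixel threshold and the gesture mappings are illustrative; as the text notes, different movements may be mapped to these gestures):

```python
def classify_gesture(path, threshold_px=30):
    # path: sequence of (x, y) touch points from touch-down to lift-off,
    # in screen coordinates where y grows downward. A short movement is a
    # tap (single key selection); an upward movement past the threshold
    # selects the displayed picture/hyperlink; a downward one deselects.
    (x0, y0), (x1, y1) = path[0], path[-1]
    dx, dy = x1 - x0, y1 - y0
    if abs(dx) < threshold_px and abs(dy) < threshold_px:
        return "tap"
    if dy <= -threshold_px:
        return "select"      # swipe up onto the picture/hyperlink
    if dy >= threshold_px:
        return "deselect"    # swipe down off a previous selection
    return "ambiguous"       # e.g., a mostly horizontal movement

print(classify_gesture([(100, 400), (100, 300)]))  # swipe up
```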
[0092] At block 820, the processor determines that a gesture is being input. At block 830, the processor determines that the gesture for a hyperlink is being input. At block 840, the processor determines that the gesture for the hyperlink is complete. This determination may be made, for example, when the object making the gesture has moved more than a predetermined number of pixels on the touch-sensitive display or the object has been removed from the touch-sensitive display.
[0093] Fig. 9 shows an example front view of a touch-sensitive display receiving a gesture input 900. As depicted in Fig. 9, an object touches, and holds, the character "S" shown on the touch-sensitive display, indicating the start of a gesture (illustrating an example of block 710 of Fig. 7 and block 810 of Fig. 8).
[0094] Fig. 10 shows an example front view of a touch-sensitive display receiving a gesture input 1000. As depicted in Fig. 10, an object touches, and holds, the "A" character shown on the touch-sensitive display, indicating the start of a gesture (illustrating an example of block 710 of Fig. 7 and block 810 of Fig. 8).
[0095] Fig. 11 shows an example front view of a touch-sensitive display receiving a gesture input 1100. As depicted in Fig. 11, an object touches, and holds, the "L" character shown on the touch-sensitive display, indicating the start of a gesture (illustrating an example of block 710 of Fig. 7 and block 810 of Fig. 8).
[0096] Fig. 12 shows an example front view of a touch-sensitive display receiving a gesture input 900. As depicted in Fig. 12, the object in Fig. 9 has moved from the character "S" upward, without being removed from the touch-sensitive display, and now covers the picture. After determining that a gesture 900 is being input (illustrating an example of block 720 of Fig. 7 and block 820 of Fig. 8), the processor may begin to display a set of communication service icons near the picture that is displayed on the key displaying the character "S" (illustrating an example of block 340 of Fig. 3). For example, in some example embodiments a phone, instant messaging, and/or email communication service icon may be displayed in the set of communication service icons. The processor then enables the displayed set of communication service icons to be activated (illustrating an example of block 350 of Fig. 3).
[0097] Fig. 13 shows an example front view of a touch-sensitive display receiving a gesture input 900. As depicted in Fig. 13, an object has moved in the up direction, toward the phone communication service icon, from the picture displayed on the key displaying the "S" character. Because of the detected upward gesture, the example system has determined that a gesture, for one of the communication service icons, in this case the phone communication service icon, is being input (illustrating an example of block 730 of Fig. 7). In alternative example embodiments the user might elect to swipe to the left or right to select one of the other communication service icons.
[0098] Fig. 14 shows an example front view of a touch-sensitive display receiving a gesture input 900. As depicted in Fig. 14, an object has moved in the up direction, toward the hyperlink, from the "A" character. Because of the detected upward gesture, the example system has determined that a gesture, for the "Facebook" application, is being input (illustrating an example of block 820 of Fig. 8). In alternative example embodiments the user might elect to swipe to the left or right to select one of the other hyperlinks.
[0099] Fig. 15 shows an example front view of a touch-sensitive display receiving a gesture input 900. As depicted in Fig. 15, an object has moved in the up direction, toward the hyperlink, from the "L" character. Because of the detected upward gesture, the example system has determined that a gesture, for the Uniform Resource Locator "www.globeandmail.com", is being input (illustrating an example of block 820 of Fig. 8). In alternative example embodiments the user might elect to swipe to the left or right to select one of the other hyperlinks.
[00100] As seen in Fig. 16, after determining the communication service application selection gesture is complete, processor 102 opens/launches/activates the phone communication service application (illustrating an example of block 350 of Fig. 3).
[00101] As seen in Fig. 17, after determining that the hyperlink selection gesture is complete, processor 102 will open the webpage associated with the Uniform Resource Locator "www.globeandmail.com" (illustrating an example of block 440 of Fig. 4).
[00102] As seen in Fig. 18, after determining that the hyperlink selection gesture is complete, processor 102 will activate the "Facebook" application (illustrating an example of block 440 of Fig. 4).

Claims

1. A method for an electronic device having a touch-sensitive display and a processor, comprising:
displaying in the touch-sensitive display a plurality of keys;
detecting a selection of at least one of the keys;
generating at least one set of characters corresponding to the selected at least one key; and
displaying at least one icon associated with a generated character set within the touch-sensitive display.
2. The method of claim 1, wherein the icon is displayed at or near a key corresponding to a subsequent candidate input character.
3. The method of claim 2, wherein a gesture is detected at or near the icon.
4. The method of claim 3, wherein the gesture is detected for a predetermined amount of time.
5. The method of claim 3, wherein the icon is a hyperlink.
6. The method of claim 3, wherein the icon is a picture.
7. The method of claim 6, further comprising:
displaying a set of icons at or near each picture.
8. The method of claim 7, wherein the displayed set of icons are communication service icons.
9. The method of claim 8, wherein a gesture is detected associated with one of the communication service icons.
10. The method of claim 9, further comprising:
activating a displayed communication service icon after the gesture has been detected.
11. The method of claim 5, further comprising:
activating an application associated with the hyperlink after the gesture has been detected.
12. The method of claim 5, further comprising:
accessing a webpage associated with the hyperlink after the gesture has been detected.
13. The method of claim 1, further comprising:
displaying the picture also within a second viewing pane.
14. An electronic device comprising:
a touch-sensitive display;
at least one processor, in communication with the touch-sensitive display, and configured to:
display in the touch-sensitive display a plurality of keys;
detect a selection of at least one of the keys;
generate at least one set of characters
corresponding to the selected at least one key; and
display at least one icon associated with a generated character set within the touch-sensitive display.
15. The electronic device of claim 14, wherein the icon is displayed at or near a key corresponding to a subsequent candidate input character.
16. The electronic device of claim 15, wherein a gesture is detected at or near the icon.
17. The electronic device of claim 16, wherein the gesture is detected for a predetermined amount of time.
18. The electronic device of claim 16, wherein the icon is a hyperlink.
19. The electronic device of claim 16, wherein the icon is a picture.
20. The electronic device of claim 19, wherein a set of icons is displayed at or near each picture.
21. The electronic device of claim 20, wherein the displayed set of icons are communication service icons.
22. The electronic device of claim 21, wherein a gesture is detected associated with one of the communication service icons.
23. The electronic device of claim 22, wherein a displayed communication service icon is activated after the gesture has been detected.
24. The electronic device of claim 18, wherein an application associated with the hyperlink is activated after the gesture has been detected.
25. The electronic device of claim 18, wherein a webpage associated with the hyperlink is accessed after the gesture has been detected.
26. The electronic device of claim 14, wherein the processor is further configured to display the picture also within a second viewing pane.
27. An electronic device comprising:
a touch-sensitive display;
at least one memory;
at least one processor, in communication with the touch-sensitive display;
instructions stored on the at least one memory, which, when executed by the processor, cause the electronic device to perform the steps of:
displaying in the touch-sensitive display a plurality of keys;
detecting a selection of at least one of the keys;
generating at least one set of characters corresponding to the selected at least one key; and
displaying at least one icon associated with a generated character set within the touch-sensitive display.
PCT/CA2013/050099 2013-02-07 2013-02-07 Methods and systems for predicting actions on virtual keyboard WO2014121370A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CA2013/050099 WO2014121370A1 (en) 2013-02-07 2013-02-07 Methods and systems for predicting actions on virtual keyboard

Publications (1)

Publication Number Publication Date
WO2014121370A1 true WO2014121370A1 (en) 2014-08-14

Family

ID=51299110

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CA2013/050099 WO2014121370A1 (en) 2013-02-07 2013-02-07 Methods and systems for predicting actions on virtual keyboard

Country Status (1)

Country Link
WO (1) WO2014121370A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090174667A1 (en) * 2008-01-09 2009-07-09 Kenneth Kocienda Method, Device, and Graphical User Interface Providing Word Recommendations for Text Input
US20100026650A1 (en) * 2008-07-29 2010-02-04 Samsung Electronics Co., Ltd. Method and system for emphasizing objects
US20110179355A1 (en) * 2010-01-15 2011-07-21 Sony Ericsson Mobile Communications Ab Virtual information input arrangement
US20120023433A1 (en) * 2010-07-23 2012-01-26 Samsung Electronics Co., Ltd. Method and apparatus for inputting character in a portable terminal
US20130021259A1 (en) * 2010-03-29 2013-01-24 Kyocera Corporation Information processing device and character input method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13874389

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13874389

Country of ref document: EP

Kind code of ref document: A1