US20140320413A1 - Multimodal text input by a keyboard/camera text input module replacing a conventional keyboard text input module on a mobile device - Google Patents

Multimodal text input by a keyboard/camera text input module replacing a conventional keyboard text input module on a mobile device

Info

Publication number
US20140320413A1
US20140320413A1 (U.S. patent application Ser. No. 14/328,309)
Authority
US
United States
Prior art keywords
text
keyboard
input
camera
mobile device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/328,309
Inventor
Cüneyt Göktekin
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to US14/328,309 priority Critical patent/US20140320413A1/en
Assigned to Göktekin, Cüneyt reassignment Göktekin, Cüneyt DECLARATION OF OWNERSHIP (SEE DOCUMENT FOR DETAILS) Assignors: Göktekin, Cüneyt
Publication of US20140320413A1 publication Critical patent/US20140320413A1/en
Assigned to NUANCE COMMUNICATIONS, INC. reassignment NUANCE COMMUNICATIONS, INC. NOTICE Assignors: NUANCE COMMUNICATIONS, INC.
Priority to US15/636,189 priority patent/US10078376B2/en
Abandoned legal-status Critical Current

Classifications

    • G06F 1/1686 Constructional details or arrangements related to integrated I/O peripherals, the I/O peripheral being an integrated camera
    • G06F 3/005 Input arrangements through a video camera
    • G06F 3/013 Eye tracking input arrangements
    • G06F 3/02 Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F 3/0227 Cooperation and interconnection of the input arrangement with other functional units of a computer
    • G06F 3/0237 Character input methods using prediction or retrieval techniques
    • G06F 3/04883 Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, for inputting data by handwriting, e.g. gesture or text
    • G06F 3/04886 Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
    • G06F 40/274 Converting codes to words; Guess-ahead of partial word inputs
    • G06F 40/58 Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
    • G06F 2203/0381 Multimodal input, i.e. interface arrangements enabling the user to issue commands by simultaneous use of input devices of different nature, e.g. voice plus gesture on digitizer
    • G06V 20/63 Scene text, e.g. street names
    • G06V 30/10 Character recognition
    • G06V 30/224 Character recognition characterised by the type of writing of printed characters having additional code marks or containing code marks
    • G09G 5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators

Definitions

  • A preferred embodiment of the present invention also foresees sending the captured image, or preferably only the part of the image displayed in the second field 5, as image data to a remote server, where the image data are OCR-processed and the recognized text and preferably the suggestion candidates are generated and returned to the mobile device 1.
  • This kind of remote computing is advantageous because it keeps the required calculation power and memory on the mobile device 1 small, especially as regards the OCR database and support for different languages. The development effort is also reduced drastically with regard to implementing image processing and OCR on different mobile device 1 types and operating systems.
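  • As an illustration of this remote OCR variant, the following Java sketch uploads the cropped image bytes to a hypothetical recognition endpoint over plain HTTP and reads back the recognized text; the endpoint URL, the JPEG payload and the plain-text response format are assumptions, not part of the disclosure.

```java
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

/** Minimal sketch: send the cropped image region to a remote OCR service. */
public final class RemoteOcrClient {

    private final URL endpoint;

    public RemoteOcrClient(String endpointUrl) throws Exception {
        this.endpoint = new URL(endpointUrl); // hypothetical OCR server
    }

    /** Uploads JPEG bytes of the second-field crop and returns the recognized text. */
    public String recognize(byte[] jpegBytes) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) endpoint.openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "image/jpeg");
        conn.setFixedLengthStreamingMode(jpegBytes.length);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(jpegBytes); // only the small displayed part is transferred over the air
        }
        try (InputStream in = conn.getInputStream();
             ByteArrayOutputStream buf = new ByteArrayOutputStream()) {
            byte[] chunk = new byte[4096];
            int n;
            while ((n = in.read(chunk)) != -1) {
                buf.write(chunk, 0, n);
            }
            // Assumed response format: the recognized text as plain UTF-8.
            return new String(buf.toByteArray(), StandardCharsets.UTF_8);
        } finally {
            conn.disconnect();
        }
    }
}
```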
  • FIG. 2 b shows another preferred embodiment of an implementation of the present invention, wherein the recognized text is displayed as the overlay over the text within the captured image in the second field 5 .
  • Either the third field 6 with the recognized text is overlaid over the second field 5 with the captured image, or the recognized characters are erased within the displayed part of the captured image and only the recognized text is overlaid in a similar size and width.
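  • Such an in-place overlay could be realized roughly as follows, assuming an Android-style graphics API and per-word bounding boxes delivered by the OCR step: each box is blanked and the recognized word is repainted at approximately the original size.

```java
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.Rect;
import java.util.List;

/** Sketch: erase the captured glyphs and overlay the recognized text in place. */
public final class RecognizedTextOverlay {

    /** Minimal value type for one OCR result; assumed to come from the recognition step. */
    public static final class RecognizedWord {
        public final String text;
        public final Rect box;
        public RecognizedWord(String text, Rect box) { this.text = text; this.box = box; }
    }

    /** Draws each recognized word over its bounding box in the mutable preview bitmap. */
    public static void draw(Bitmap frame, List<RecognizedWord> words) {
        Canvas canvas = new Canvas(frame);
        Paint eraser = new Paint();
        eraser.setColor(Color.WHITE);
        eraser.setStyle(Paint.Style.FILL);

        Paint textPaint = new Paint(Paint.ANTI_ALIAS_FLAG);
        textPaint.setColor(Color.BLACK);

        for (RecognizedWord w : words) {
            // Erase the original written characters inside the box.
            canvas.drawRect(w.box, eraser);
            // Redraw the recognized word at a similar size and position.
            textPaint.setTextSize(w.box.height());
            canvas.drawText(w.text, w.box.left, w.box.bottom, textPaint);
        }
    }
}
```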
  • FIG. 3 a shows another preferred embodiment of an implementation of the present invention, wherein the mobile device 1 is in the camera mode.
  • The GUI of the camera mode occupies a certain display area of the touch screen and none of the keys of the A-Z-keyboard is visible.
  • The usage within the camera mode is as explained above, wherein a switch over to the keyboard mode is preferably detected by a preferred gesture 9.
  • The switch over to the keyboard mode is also imaginable by any other defined gesture 9, by a keypress on a hidden key or on the captured image in the second field 5, or the like.
  • In this way the full size of the text input GUI, being the certain display area, can be used for the camera input mode. If the switch over to the keyboard mode is detected, preferably the view of FIG. 3 b appears on the mobile device 1.
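  • A gesture-driven mode switch of this kind could be wired up roughly as in the Java sketch below, assuming an Android-style GestureDetector; the fling distance threshold and the toggleMode() callback are illustrative assumptions.

```java
import android.content.Context;
import android.view.GestureDetector;
import android.view.MotionEvent;
import android.view.View;

/** Sketch: toggle between keyboard mode and camera mode on a horizontal fling gesture. */
public final class ModeSwitchGestureHandler {

    /** Callback into the text input module (assumed to exist). */
    public interface ModeToggler {
        void toggleMode(); // keyboard mode <-> camera mode
    }

    private static final float MIN_FLING_DISTANCE_PX = 120f; // illustrative threshold

    public static void attach(Context context, View inputArea, final ModeToggler toggler) {
        final GestureDetector detector = new GestureDetector(context,
                new GestureDetector.SimpleOnGestureListener() {
                    @Override
                    public boolean onFling(MotionEvent down, MotionEvent up,
                                           float velocityX, float velocityY) {
                        float dx = up.getX() - down.getX();
                        // A long, mostly horizontal swipe crossing the field switches the mode.
                        if (Math.abs(dx) > MIN_FLING_DISTANCE_PX
                                && Math.abs(dx) > Math.abs(up.getY() - down.getY())) {
                            toggler.toggleMode();
                            return true;
                        }
                        return false;
                    }
                });
        inputArea.setOnTouchListener(new View.OnTouchListener() {
            @Override
            public boolean onTouch(View v, MotionEvent event) {
                return detector.onTouchEvent(event);
            }
        });
    }
}
```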
  • FIG. 3 b shows the same preferred embodiment of the implementation of the present invention as in FIG. 3 a .
  • The text input module is in the keyboard mode, wherein the A-Z-keyboard is displayed in the first field 4.
  • The camera key 8 is displayed as well, for switching back to the camera mode as shown in FIG. 3 a.
  • The switch back from the keyboard mode to the camera mode is also possible by a preferred gesture 9, similar to the one described for the camera mode with regard to FIG. 3 a.
  • FIG. 4 shows another preferred embodiment of the implementation of the present invention, wherein the original keyboard module is kept on the mobile device 1 and wherein the multimodal keyboard input module is reduced to the camera text input module complementary to the original keyboard module.
  • The preferred camera text input module is adapted to enhance the original keyboard module on the mobile device 1 by the features of the camera mode.
  • When the original keyboard module is activated, the camera text input module activates the camera and preferably opens the second field 5 and displays the part of the captured image with the written text.
  • The keyboard text input module is independent from the camera text input module, but the camera text input module depends on the state of the keyboard text input module, which it checks continuously.
  • The recognized text is displayed in the third field 6 below the second field 5. It is also imaginable to overlay the recognized text over the captured written text, such that the third field 6 is either overlaid over the second field 5, or such that the letters of the recognized text are overlaid over the captured written text within the part of the captured and displayed image.
  • FIG. 4 also shows hidden touch keys 10 within the second field 5 and the third field 6.
  • Such preferred hidden touch keys can also be seen as a reduced keyboard, wherein the hidden touch keys are indicated to the user by the frame or frame sides of the second field 5 and/or the third field 6, for instance.
  • A keypress on one of the hidden touch keys 10 within the preferred embodiment of FIG. 4 would select the recognized text, for instance.
  • The implementation of FIG. 4 can also be used on another mobile device 1 with a conventional mechanical keyboard, wherein the second field 5 and/or the third field 6 are displayed, and wherein the selection of a part or the whole of the recognized character text or of a suggestion candidate could be performed via the touch screen, by voice, by a rarely used key, by a certain pattern from an acceleration sensor, by an optical signal, or by another control signal.
  • With a conventional mechanical keyboard and a display 2 that is not a touch screen, providing an additional key on the mechanical keyboard for the selection of the recognized text is rather difficult.
  • In that case the selection could be controlled by another key event, such as one generated by a voice command, by detecting that the recognized text stays the same over a repeated sequence of captures for longer than a time limit, by an acceleration sensor pattern, or the like.
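  • As one example of such an alternative control signal, the Java sketch below registers an Android-style accelerometer listener and treats a short, strong shake as the selection event; the magnitude threshold and the onSelect() hook are assumptions for illustration only.

```java
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

/** Sketch: interpret a distinct acceleration pattern (a short shake) as a "keypress". */
public final class ShakeSelectListener implements SensorEventListener {

    /** Callback into the text input module (assumed). */
    public interface SelectionHandler {
        void onSelect();
    }

    private static final float SHAKE_THRESHOLD = 18f; // m/s^2, illustrative
    private static final long MIN_INTERVAL_MS = 800;  // debounce between shakes

    private final SelectionHandler handler;
    private long lastShakeMs;

    public ShakeSelectListener(SelectionHandler handler) {
        this.handler = handler;
    }

    public void register(SensorManager sensorManager) {
        Sensor accel = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        sensorManager.registerListener(this, accel, SensorManager.SENSOR_DELAY_UI);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        float x = event.values[0], y = event.values[1], z = event.values[2];
        double magnitude = Math.sqrt(x * x + y * y + z * z);
        long now = System.currentTimeMillis();
        if (magnitude > SHAKE_THRESHOLD && now - lastShakeMs > MIN_INTERVAL_MS) {
            lastShakeMs = now;
            handler.onSelect(); // treat the shake like a single keypress selecting the text
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
        // not needed for this sketch
    }
}
```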

Abstract

Methods and modules for multimodal text input in a mobile device are provided. Text may be input via a keyboard or, in a camera mode, by holding the camera over written text. An image of the written text is captured, the text is recognized, and it is output to an application by: activating a keyboard mode; providing an A-Z-keyboard in a first input field; activating the camera mode; capturing the text image and displaying the captured image in a second field of a device display; converting the captured image to character text by OCR and displaying the recognized character text on the display; and outputting a selected character as the input text to the application upon a character selection, or outputting a selected part of the recognized character text as the input text to the application upon a selection of that part via a single keypress, control command, or gesture.

Description

    RELATED APPLICATIONS
  • This application claims priority under 35 U.S.C. §119 or 365 to European Application No. 12158195.3, filed Mar. 6, 2012, the entire teachings of which are incorporated herein by reference.
  • BACKGROUND
  • The present invention relates to a method and a module for multimodal text input on a mobile device either via a keyboard or in a camera based mode by holding the camera of the mobile device on a written text, such that an image is captured of the written text and the written text is recognized, wherein an input text or the recognized text, respectively, is output as the input text to an application receiving the input text.
  • Mobile devices such as mobile phones or smart phones with an integrated camera module and a display such as a touch screen display show good market penetration and are used in daily life far beyond simple phone calls. The mobile device also serves as a pocket book, a memo book, a calendar with planner and an address book, and is used for receiving and writing SMS and emails, and so on.
  • A standard mobile device already offers an integrated camera module with a resolution of over 5 megapixels, sometimes even with optical zoom, and a powerful microprocessor delivering over 300 MIPS (million instructions per second). However, text input on the mobile device for standard software applications often seems cumbersome on a small keyboard or on a touch screen keyboard.
  • EP 2333695A1 discloses a camera based method for detecting the alignment of the mobile device to written text on a sheet of paper or on a display by analyzing the captured image of the written text. Immediate optical and acoustical feedback helps the user to align the mobile device faster and more accurately with the written text, resulting in faster and better optical character recognition (OCR) for the text input into the mobile device.
  • EP 10161624 discloses another camera based method for text input and keyword detection, wherein the written text is captured in an image, converted via OCR and analyzed to find the most probable keyword therein. Such an input of written text into the mobile device facilitates text input for text translation applications and for internet searches about the written text in a book, for instance.
  • EP 08 169 713 discloses a method and a portable device, preferably a mobile communication device, for providing camera based services including the internet, capturing an image of a text page and processing the image data such that text is recognized via OCR software for further usage within an application.
  • While said methods support text input into some special applications without the need to input the text tediously character by character via the keyboard, the application needs to be adapted to a program module implementing one of the disclosed methods; thus the use of said methods is limited to a rather small field of applications.
  • Current translator devices, such as for instance the “Sprachcomputer Franklin” from Pons, “Dialogue” or “Professional Translator XT” from Hexaglot, or “Pacifica” from Lingo Corporation, all use keyboard input for the words or sentences to be translated. However, inputting a word or a sentence in another language on the keyboard is often difficult or even nearly impossible if unknown characters such as Chinese or Greek characters have to be input.
  • An interesting approach to multimodal text input is disclosed in US 20110202836A1, wherein text can be input into an application of the mobile device via a keyboard or via speech, the speech being recognized and converted to the input text. However, a foreign word may sometimes be difficult to spell correctly, and speech recognition also has its limitations. Thus, a captured image of written text followed by OCR can be advantageous in many cases.
  • SUMMARY
  • Altogether, for the input of text into the mobile device the keyboard will probably always remain the first option; however, additional multimodal input possibilities are advantageous in many cases and could be incorporated as far as possible. Thus, multimodal text input via the keyboard and via the camera with OCR, implemented in a single text input program module, would be greatly desirable. It would allow the same keyboard input functionality as a standard keyboard input module, plus the additional camera based text input possibility. It would also be desirable if the multimodal text input program module could replace the standard keyboard module on standard mobile devices, such as mobile phones, smart phones or the like, under the respective operating systems.
  • The objective of the invention is to overcome the shortcomings explained above and to provide a method for a multimodal text input in a mobile device via a keyboard or in a camera mode by holding the camera of the mobile device on a written text, wherein the written text is recognized and selectable, such that a respective key character or a part of the recognized text is selectable by a single keypress or command for an immediate output to an application requesting the text input.
  • Another objective of the invention is a program module executing said method to either replace or complement an original standard keyboard text input module, such that a multimodal text input via the conventional keyboard and via the camera mode is made possible.
  • The above objectives as well as further objectives which will also become apparent from the following description are achieved by a method and a module for a multimodal text input in a mobile device either by a keyboard or in a camera mode by recognizing text within a captured image of a written text according to the features mentioned in the independent claims 1, 2 and 12, respectively. Additional features and characteristics of the invention are mentioned in the dependent claims.
  • Said method allows the multimodal text input in the mobile device via a conventional keyboard or a touch screen keyboard as well as in a camera mode, wherein the camera of the mobile device simply has to be held over a written text, such that an image of the written text is taken and the written text is recognized and output to an application as if it had been input via the keyboard. Such an input possibility is especially advantageous for text input in a foreign language, for instance in a translation application for traveling, where the input has to be made in Chinese, Greek or Latin characters.
  • Advantageously the graphical user interface (GUI) changes between the keyboard and the camera text input mode, such that the relevant keys and/or fields are always displayed: either the keyboard, or the written text within a part of the captured image, or selectable recognized words. Touch sensitive keys or fields within a first field for the keyboard, a second field for the part of the captured image, or a third field for recognized words are adapted to be ergonomic. Gestures can also be read as control commands, further increasing the effective virtual communication field between the text input interface and the user.
  • In another advantageous embodiment the multimodal text input module always has a graphical user interface (GUI) of the same size on the display of the mobile device as an original standard keyboard module, which makes an exchange of the modules easier. Also, the interface to the application that normally activates the standard keyboard module is kept the same, such that any application that previously communicated with the standard keyboard interface is now capable of communicating with the multimodal text input module as well. Advantageously the multimodal method, or the module implementing it, can thus replace the original standard keyboard module while retaining the same functionality plus the additional camera mode text input.
  • In another embodiment the already existing original keyboard module is kept and used on the mobile device as it is, but an additional multimodal text input module steadily running in the background checks whether the original keyboard module is activated by an application. In case the original keyboard module is activated by the application, the multimodal text input module activates the camera mode and opens an additional second and/or third field on the display, showing the written text captured by the camera in the second field and the recognized text converted by OCR in the third field. Optionally the converted text is displayed as overlaid text over the original written text in the second field. This preferred method allows multimodal text input on mobile devices or operating systems where the keyboard module cannot be replaced.
  • Other embodiments perform the optical character recognition on a remote server: the part of the captured image containing the written text, which is rather small and can be transferred over the air, is transferred to the server, and the recognized text is sent back, received, and displayed for a possible selection on the mobile device.
  • Further advantageous aspects of the invention are set out in the following detailed description.
  • One solution of a preferred embodiment according to the present invention is disclosed in the following drawings and in the detailed description, but it shall not limit the invention.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The foregoing will be apparent from the following more particular description of example embodiments of the invention, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating embodiments of the present invention.
  • FIG. 1 a is a drawing of a mobile device with a display showing a text in an application window, an A-Z-keyboard with a camera mode key in a first field, a captured written text in the second field and a recognized text in a third field.
  • FIG. 1 b is a drawing of the mobile device with the display showing the text in the application window and the A-Z-keyboard with the camera mode key in the first field.
  • FIG. 2 a is a drawing of the mobile device with the display showing the text in the application window, a reduced keyboard with the camera mode key in the first field, the captured written text in the second field and the recognized text and suggestion candidates in three third fields.
  • FIG. 2 b is a drawing of the mobile device with the display showing the text in the application window, the A-Z-keyboard with the camera mode key in the first field, and the captured written text in the second field.
  • FIG. 3 a is a drawing of the mobile device with the display showing the text in the application window, the reduced keyboard, the captured written text in the second field, the recognized text and suggestion candidates in three third fields, wherein the first field is overlapping the second and the third fields wherein a gesture crossing the first field is input.
  • FIG. 3 b is a drawing of the mobile device with the display showing the text in the application window and the A-Z-keyboard, wherein another gesture is input crossing the first field.
  • FIG. 4 is a drawing of the mobile device with the display showing the text in the application window, an original A-Z-keyboard in the first field of an original keyboard application and the captured written text in the second field and the recognized text in the third field of a camera input module.
  • DETAILED DESCRIPTION
  • A description of example embodiments of the invention follows.
  • FIG. 1 a shows a preferred embodiment of the present invention, being a multimodal text input method and module on a mobile device 1. The multimodal text input method and module allow a user to input text into an application on the mobile device 1 either via a keyboard or via a camera integrated in the mobile device, wherein the user holds the mobile device with the camera over a written text, the mobile device captures an image of the written text and converts it via optical character recognition (OCR) into character text, such that parts of the recognized text can be selected as input text for the application. In other words, the user may input the text conventionally via the keyboard in what is called a keyboard mode, or he may input the text via the camera in what is called a camera mode if a written text is available in front of him.
  • Preferably the multimodal text input method is implemented in a multimodal text input program module, further simply called “module”, with the same interface to an application as an original keyboard module and under the respective operating system of the mobile device 1. If so, said module can replace the original keyboard module on the mobile device 1, advantageously offering all applications the multimodal text input features described above. Thus the user can choose whether he wants to input the text conventionally via the keyboard or in the camera mode by selecting the recognized text or a part thereof.
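  • For illustration only, the following Java sketch shows how such a drop-in module could look on a platform whose keyboard modules are replaceable input-method services; Android's InputMethodService is used here purely as an assumed example, since the disclosure does not name a specific operating system, and the construction of the actual multimodal view (first, second and third fields) is left abstract.

```java
import android.inputmethodservice.InputMethodService;
import android.view.View;
import android.view.inputmethod.InputConnection;

/**
 * Sketch: a multimodal text input module standing in for the standard keyboard module.
 * Applications keep using the normal input-method interface; text may come either from
 * key taps (keyboard mode) or from a selected OCR result (camera mode).
 */
public abstract class MultimodalInputMethodService extends InputMethodService {

    /** Builds the GUI with keyboard field, camera preview field and recognized-text field. */
    protected abstract View createMultimodalInputView();

    @Override
    public View onCreateInputView() {
        return createMultimodalInputView();
    }

    /** Called by the view when a key of the A-Z-keyboard is pressed (keyboard mode). */
    public void onKeyCharacter(char c) {
        commitToApplication(String.valueOf(c));
    }

    /** Called by the view when the user selects recognized text (camera mode). */
    public void onRecognizedTextSelected(String recognizedText) {
        commitToApplication(recognizedText);
    }

    /** Both modes end in the same call, so the receiving application cannot tell them apart. */
    private void commitToApplication(String text) {
        InputConnection connection = getCurrentInputConnection();
        if (connection != null) {
            connection.commitText(text, 1);
        }
    }
}
```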
  • An alternative solution, instead of replacing the original keyboard module by the complete integral multimodal text input program module, is the provision of a separate camera text input module, wherein the camera text input module acts as a complement or supplement to the original keyboard module and is always active in the background to detect whether the original keyboard module is activated by the application. In case the keyboard module is detected as activated, the camera text input module activates the camera mode and preferably displays an additional field on the display 2 showing the captured written text and preferably the recognized text, which can be selected by a keypress, a gesture 9, a voice command or the like. Thus the text can be input either via the keyboard or via the camera text input module in the camera mode, and the input text is sent to the application. In case the original keyboard module is closed, whether by the application, a key, a timer or the like, the keyboard module is no longer detected by the camera text input module, whereupon the camera text input module deactivates the camera mode and removes any displayed second field 5 or third field 6 from the display 2.
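  • This complementary variant can be pictured as a small background watcher that mirrors the state of the original keyboard module, as in the Java sketch below; how the keyboard's visibility is actually probed is platform specific, so the KeyboardProbe and CameraTextInput interfaces are hypothetical hooks rather than a real API.

```java
import java.util.Timer;
import java.util.TimerTask;

/** Sketch: background watcher that enables camera text input while the keyboard is shown. */
public final class CameraTextInputWatcher {

    /** Hypothetical probe telling whether the original keyboard module is currently active. */
    public interface KeyboardProbe {
        boolean isOriginalKeyboardActive();
    }

    /** Hypothetical hooks into the camera text input module. */
    public interface CameraTextInput {
        void activateCameraMode();   // open second/third field, start capturing
        void deactivateCameraMode(); // remove second/third field, stop capturing
    }

    private final Timer timer = new Timer("keyboard-watcher", true);
    private boolean cameraModeActive;

    public void start(final KeyboardProbe probe, final CameraTextInput cameraInput,
                      long pollIntervalMs) {
        timer.scheduleAtFixedRate(new TimerTask() {
            @Override
            public void run() {
                boolean keyboardShown = probe.isOriginalKeyboardActive();
                if (keyboardShown && !cameraModeActive) {
                    cameraModeActive = true;
                    cameraInput.activateCameraMode();
                } else if (!keyboardShown && cameraModeActive) {
                    cameraModeActive = false;
                    cameraInput.deactivateCameraMode();
                }
            }
        }, 0, pollIntervalMs);
    }

    public void stop() {
        timer.cancel();
    }
}
```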
  • For clarity, the “application” stands for instance for a translation application, a search application such as Google search or the like searching for a keyword on the internet, a phone application requesting a phone number or a name, or any other application requesting text input.
  • The “text” stands preferably for any character text, a single character or multiple characters, a string of connected characters such as a word, a number or the like.
  • The “character” stands preferably for any character and for any number.
  • The “written text” stands preferably for any written or displayed text on a sheet of paper, in a book or also on a screen or the like which can be captured by the camera.
  • The “keyboard text input” is preferably understood as any text input on the keyboard.
  • The “keyboard” stands preferably for a touch screen keyboard, a mechanical conventional keyboard, a holographic keyboard or the like, wherein characters can be input manually or wherein a typing is registered.
  • The “A-Z-keyboard” stands preferably for a keyboard comprising a full set of keys from A to Z or from 0 to 9 and at least one control key such as ENTER, for instance.
  • The “original keyboard module” stands preferably for a standard keyboard module or sub-program, being already available for the respective mobile device 1 with its respective operating system and respective applications, but a new proprietary or separate keyboard module for a certain mobile device 1 is also conceivable.
  • A “Control key” stands preferably for a respective key on the keyboard executing a certain function or for a hidden touch key behind one of the displayed fields as the first 4, the second 5 and/or the third field 6, for instance.
  • A “keypress” or a respective “keypress event” stands preferably for a manual pressing of a certain key or hidden key of the keyboard or of the touch screen, for a certain gesture 9 on the touch screen or in front of a camera, and also for a swype gesture 9. A “single keypress” can also be a double click or double press or double touch on the same key or place. A certain signal strength or pattern from an acceleration sensor is preferably also usable as a keypress.
  • The “mobile device 1” stands preferably for any mobile-phone, smart-phone or the like.
  • The “display 2” stands preferably for any conventional non-touch-screen display as well as for any touch screen with touch key functionality.
  • FIG. 1 a shows a preferred embodiment of an implementation of the multimodal text input method according to the present invention. FIG. 1 a shows the mobile device 1 with the display 2 being the touch screen. On the touch screen an application window 3 with a graphical user interface (GUI) of the application is displayed, the GUI comprising a text input field 3 b and preferably a text output field below. Since the text input field 3 b has been activated by the application or by the user, the application requests the text input and starts the text input module, here the multimodal text input module. The multimodal text input module starts by displaying the conventional keyboard, the A-Z-keyboard, in the first field 4. Preferably the first field 4 for the A-Z-keyboard has the same size as the original keyboard module which has been replaced on the mobile device 1 by the multimodal text input module. Thus the user may input text as usual via the displayed touch screen keyboard, or he may press a camera mode key 8 to switch over to the camera mode.
  • FIG. 1 b shows the same preferred embodiment of the implementation of the multimodal text input method as shown in FIG. 1 a, but after a keypress on the camera mode key 8. In the camera mode the first field 4 for the keyboard is preferably smaller than its size in the keyboard mode of FIG. 1 a, such that the difference in size is used for displaying the second field 5, wherein a part of the captured written text is displayed, and the third field 6, wherein the recognized text is displayed. In this preferred embodiment the total size of the displayed fields is kept the same in the keyboard mode and in the camera mode, such that the area of the touch screen used is the same as that of the original keyboard module; the original keyboard module can therefore be replaced by the multimodal text input module without overlapping the application more than originally intended with the original keyboard module.
In the example shown in FIG. 1b, the user holds the camera of the mobile device 1 over the written text, which is “This text is on a sheet”, wherein the mobile device 1 preferably captures images continuously and displays a part of the respective image in the second field 5. In parallel with said capturing, the respective image is analyzed by optical character recognition (OCR), which generates the recognized text and preferably displays it in the third field 6. If the recognized and displayed text is correct and desired, the user selects the recognized text as input text, whereupon the selected text is output to the application, preferably in the same way as the text would have been input via the keyboard.
A selection of the recognized text in the third field can preferably be made by a second keypress on the camera key, by a keypress on the third field, by a keypress on the RETURN key, or the like. Other preferred possibilities for the selection of the recognized text are, for instance, via voice or via a timer, such that the selection is executed if the camera is held over the same written text for longer than a stored time limit. Other kinds of selection are not excluded.
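The timer-based selection mentioned above can be sketched as follows; this is a minimal illustration only, and the class name, the dwell_limit_s parameter and the use of time.monotonic() are assumptions rather than part of the disclosure.

```python
import time

class DwellSelector:
    """Auto-selects the recognized text once the same text has been
    recognized continuously for longer than a stored time limit."""

    def __init__(self, dwell_limit_s: float = 1.5):
        self.dwell_limit_s = dwell_limit_s  # assumed "stored time limit"
        self._last_text = None
        self._since = None

    def update(self, recognized_text: str, now: float | None = None):
        """Call once per captured frame; returns the text to select, or None."""
        now = time.monotonic() if now is None else now
        if recognized_text != self._last_text:
            self._last_text = recognized_text
            self._since = now
            return None
        if recognized_text and now - self._since >= self.dwell_limit_s:
            self._since = now  # avoid re-selecting the same text every frame
            return recognized_text
        return None

if __name__ == "__main__":
    sel = DwellSelector(dwell_limit_s=1.0)
    print(sel.update("This text is on a sheet", now=0.0))  # None (first sight)
    print(sel.update("This text is on a sheet", now=0.5))  # None (too early)
    print(sel.update("This text is on a sheet", now=1.2))  # selected
```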
Another preferred embodiment of the camera mode includes word recognition within the recognized text, such that words are identified and such that, upon a selection, only the word in focus, i.e. in the middle of the third field or behind cross hairs, respectively, is selected and output. Another preferred selection can be made by touching or pressing the word on the touch screen.
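A possible, purely illustrative way to pick the word in focus is to test the recognized word boxes against the cross-hair position; the Word structure and the bounding-box coordinates below are assumed to come from the OCR step and are not prescribed by the disclosure.

```python
from dataclasses import dataclass

@dataclass
class Word:
    text: str
    # Bounding box of the word in display coordinates (assumed to be
    # provided by the OCR step); x0/y0 top-left, x1/y1 bottom-right.
    x0: float
    y0: float
    x1: float
    y1: float

    def center(self) -> tuple[float, float]:
        return ((self.x0 + self.x1) / 2, (self.y0 + self.y1) / 2)

def word_at_focus(words: list[Word], focus: tuple[float, float]) -> Word | None:
    """Return the recognized word whose box contains the focus point
    (e.g. the cross hairs), falling back to the nearest word center."""
    fx, fy = focus
    for w in words:
        if w.x0 <= fx <= w.x1 and w.y0 <= fy <= w.y1:
            return w
    if not words:
        return None
    return min(words, key=lambda w: (w.center()[0] - fx) ** 2 + (w.center()[1] - fy) ** 2)

if __name__ == "__main__":
    words = [Word("This", 0, 0, 40, 20), Word("text", 50, 0, 90, 20)]
    print(word_at_focus(words, focus=(60, 10)).text)  # "text"
```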
It is also imaginable that the recognized text is displayed as an overlay over the captured text in the second field 5. In this case the captured text in the second field 5 preferably gets erased, such that the recognized text can be overlaid on the displayed image, preferably in a similar size and width as the original text. This would reduce the required space, such that the third field 6 is not needed anymore and the first field 4 or the second field 5 can take over that space.
Another preferred method of selecting one of the words of the recognized text displayed adjacent to the displayed keyboard is via a keypress on the key in closest proximity to the desired word as input text.
Another preferred method of selecting one of the words of the recognized text is to display next to each word a respective identifier, such as a certain number, for instance, and to select the word by a keypress on that respective number on the keyboard.
Preferably, by pressing a KEYBOARD key 7 the mode is switched back from the camera mode to the keyboard mode, wherein the keyboard as shown in FIG. 1a is bigger and easier to use than in the camera mode shown in FIG. 1b. It should be mentioned that in the camera mode according to the preferred embodiment shown in FIG. 1b the text input can also be undertaken via the shown keyboard; the only disadvantage is that it is smaller. Switching between the keyboard mode and the camera mode can preferably be done in several different ways: by a keypress on the camera mode key 8 or the keyboard key 7, by a keypress on only the camera mode key 8 toggling the mode, by a gesture 9 as shown in FIG. 3a and FIG. 3b, and the like.
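The mode switching itself can be thought of as a small state machine; the event names below are assumptions, and only the behaviour (camera key, keyboard key and toggle gesture moving between the two modes) reflects the description.

```python
from enum import Enum, auto

class Mode(Enum):
    KEYBOARD = auto()
    CAMERA = auto()

class Event(Enum):
    CAMERA_KEY = auto()      # keypress on the camera mode key 8
    KEYBOARD_KEY = auto()    # keypress on the KEYBOARD key 7
    SWITCH_GESTURE = auto()  # gesture 9 toggling the mode

class TextInputModule:
    """Tracks whether the multimodal text input module is in the
    keyboard mode or in the camera mode."""

    def __init__(self):
        self.mode = Mode.KEYBOARD  # the keyboard mode is activated first

    def handle(self, event: Event) -> Mode:
        if event is Event.CAMERA_KEY:
            self.mode = Mode.CAMERA
        elif event is Event.KEYBOARD_KEY:
            self.mode = Mode.KEYBOARD
        elif event is Event.SWITCH_GESTURE:
            # a single gesture toggles between the two modes
            self.mode = Mode.CAMERA if self.mode is Mode.KEYBOARD else Mode.KEYBOARD
        return self.mode

if __name__ == "__main__":
    module = TextInputModule()
    print(module.handle(Event.CAMERA_KEY))      # Mode.CAMERA
    print(module.handle(Event.SWITCH_GESTURE))  # Mode.KEYBOARD
```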
FIG. 2a shows another preferred embodiment of an implementation of the present invention. On the display 2, which is the touch screen, three third fields 6 are displayed, wherein preferably the first one displays the recognized text and the second and the third one beneath display suggestion candidates. The one or more suggestion candidates are preferably generated by an algorithm in connection with a database, wherein, according to the recognized character text, which might not be recognized correctly or might be spelled wrong, best-fitting words with a high probability of being correct are generated as the suggestion candidates. The database is preferably a dictionary or a lookup table. Thus the user can select whatever fits best as the input text. It is also imaginable that the one or more closest-fitting words are generated as suggestion candidates only for a recognized word in the recognized text which has not been found in the database. In the camera mode the keyboard in the first field 4 is preferably a reduced keyboard, wherein preferably only some necessary control keys or the bottom line of the A-Z-keyboard are displayed in order to provide more space for the camera mode input GUI, being the second field 5 and one or more third fields 6, preferably in an area and of the size of the GUI of the original keyboard module.
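The disclosure does not fix a particular candidate algorithm, so the following sketch merely assumes a Levenshtein-distance ranking against a small sample dictionary; the dictionary content and the max_candidates parameter are illustrative.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def suggestion_candidates(recognized: str, dictionary: list[str],
                          max_candidates: int = 2) -> list[str]:
    """Return the best-fitting dictionary words for a recognized word.
    If the word is found in the dictionary, no candidates are needed."""
    word = recognized.lower()
    if word in dictionary:
        return []
    ranked = sorted(dictionary, key=lambda w: levenshtein(word, w))
    return ranked[:max_candidates]

if __name__ == "__main__":
    dictionary = ["this", "text", "is", "on", "a", "sheet", "shelf", "street"]
    # OCR may mis-read "sheet" as "sheat"; the candidates shown in the
    # additional third fields could then be, e.g.:
    print(suggestion_candidates("sheat", dictionary))
```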
A preferred embodiment of the present invention also foresees sending the captured image, or preferably the part of the displayed image in the second field 5, as image data to a remote server, where the image data are OCR-processed and the recognized text and preferably the suggestion candidates are generated and returned to the mobile device 1. This preferred kind of remote computing is advantageous as it keeps the required calculation power and memory on the mobile device 1 small, especially as regards the database for the OCR and as regards different languages. Also, the development effort is reduced drastically with regard to an implementation of the image processing and the OCR on different mobile device types and operating systems.
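A client for such a remote OCR service might look as follows; the server URL, the JSON request and response fields and the timeout are entirely hypothetical, since the disclosure only states that image data are sent to a remote server which returns the recognized text and, preferably, suggestion candidates.

```python
import base64
import json
import urllib.request

# Hypothetical endpoint; the disclosure does not specify any protocol or format.
OCR_SERVER_URL = "https://example.com/ocr"

def remote_ocr(image_bytes: bytes, language: str = "en") -> dict:
    """Send the captured image region to a remote OCR server and return
    the recognized text together with suggestion candidates."""
    payload = json.dumps({
        "image": base64.b64encode(image_bytes).decode("ascii"),
        "language": language,
    }).encode("utf-8")
    request = urllib.request.Request(
        OCR_SERVER_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        # Assumed response shape: {"text": "...", "candidates": [...]}
        return json.loads(response.read().decode("utf-8"))

# Usage (assuming the image region of the second field is available as bytes):
# result = remote_ocr(cropped_image_bytes)
# recognized_text, candidates = result["text"], result["candidates"]
```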
FIG. 2b shows another preferred embodiment of an implementation of the present invention, wherein the recognized text is displayed as the overlay over the text within the captured image in the second field 5. Preferably, either the third field 6 with the recognized text is overlaid over the second field 5 with the captured image, or in the displayed part of the captured image the recognized characters are erased and only the recognized text, in a similar size and width, is overlaid.
FIG. 3a shows another preferred embodiment of an implementation of the present invention, wherein the mobile device 1 is in the camera mode. In this preferred embodiment the GUI of the camera mode occupies a certain display area of the touch screen and none of the keys of the A-Z-keyboard is visible. The usage within the camera mode is as explained above, wherein a switch-over or change to the keyboard mode is preferably detected by a preferred gesture 9. The switch-over to the keyboard mode is also imaginable by any other defined gesture 9, by a keypress on a hidden key, by a keypress on the captured image in the second field 5, or the like. In this way the full size of the text input GUI, which is the certain display area, can be used for the camera input mode. If the switch-over to the keyboard mode is detected, preferably the screen of FIG. 3b appears on the mobile device 1.
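A gesture 9 for switching modes could, for example, be detected from the touch trace as a horizontal swipe; the thresholds and the touch-point format below are illustrative assumptions.

```python
def is_mode_switch_gesture(touch_points: list[tuple[float, float]],
                           min_distance: float = 200.0,
                           max_drift: float = 80.0) -> bool:
    """Small heuristic for a horizontal swipe across the camera GUI:
    the touch trace must travel far enough horizontally while staying
    roughly on one line. Threshold values are arbitrary."""
    if len(touch_points) < 2:
        return False
    (x0, y0), (x1, y1) = touch_points[0], touch_points[-1]
    return abs(x1 - x0) >= min_distance and abs(y1 - y0) <= max_drift

if __name__ == "__main__":
    trace = [(20.0, 300.0), (120.0, 305.0), (260.0, 310.0)]
    print(is_mode_switch_gesture(trace))  # True -> switch to the keyboard mode
```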
FIG. 3b shows the same preferred embodiment of the implementation of the present invention as in FIG. 3a. Now the text input module is in the keyboard mode, wherein the A-Z-keyboard is displayed in the first field 4. Preferably the camera key 8 is displayed as well, for switching back to the camera mode as shown in FIG. 3a. The switch back from the keyboard mode to the camera mode is also possible by a preferred gesture 9, similar to the one described for the camera mode with regard to FIG. 3a.
FIG. 4 shows another preferred embodiment of the implementation of the present invention, wherein the original keyboard module is kept on the mobile device 1 and wherein the multimodal keyboard input module is reduced to the camera text input module, complementary to the original keyboard module. The preferred camera text input module is preferably adapted to enhance the original keyboard module on the mobile device 1 by the features of the camera mode. When the application requires the text input, it starts the original keyboard module, which is detected by the camera text input module, the latter being always active in the background for this detection. Upon the detection of the activated original keyboard module, the camera text input module activates the camera and preferably opens the second field 5 and displays the part of the captured image with the written text.
Preferably the keyboard text input module is independent of the camera text input module, but the camera text input module is dependent on the state of the keyboard text input module, which is checked continuously by the camera text input module.
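This continuous state check can be sketched as a background polling loop; is_keyboard_module_active() stands in for whatever platform-specific detection is actually used, and the polling interval is an arbitrary illustrative value.

```python
import time

def is_keyboard_module_active() -> bool:
    """Placeholder for an OS- or platform-specific check of whether the
    original keyboard module is currently activated/displayed."""
    raise NotImplementedError

class CameraTextInputModule:
    """Continuously checks the state of the keyboard text input module
    and mirrors it with the camera mode (FIG. 4 embodiment)."""

    def __init__(self):
        self.camera_mode_active = False

    def activate_camera_mode(self):
        self.camera_mode_active = True   # open the second field 5, start the camera

    def deactivate_camera_mode(self):
        self.camera_mode_active = False  # close the second field 5, stop the camera

    def run(self, poll_interval_s: float = 0.2):
        while True:
            keyboard_active = is_keyboard_module_active()
            if keyboard_active and not self.camera_mode_active:
                self.activate_camera_mode()
            elif not keyboard_active and self.camera_mode_active:
                self.deactivate_camera_mode()
            time.sleep(poll_interval_s)
```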
In the preferred embodiment shown, the recognized text is displayed in the third field 6 below the second field 5. It is also imaginable to overlay the recognized text over the captured written text, such that the third field 6 is either overlaid over the second field 5, or such that the letters of the recognized text are overlaid over the captured written text within the part of the captured and displayed image.
FIG. 4 also shows hidden touch keys 10 within the second field 5 and the third field 6. Such preferred hidden touch keys can also be seen as a reduced keyboard, wherein the hidden touch keys are indicated to the user by the frame or frame sides of the second field 5 and/or the third field 6, for instance, or the like. A keypress on one of the hidden touch keys 10 within the preferred embodiment of FIG. 4 would, for instance, select the recognized text.
Preferably, the embodiment of FIG. 4 of the implementation of the present invention can also be used for another mobile device 1 with a conventional mechanical keyboard, wherein the second field 5 and/or the third field 6 are displayed and wherein the selection of a part or the whole of the recognized character text or of the suggestion candidate could be performed by the touch screen, by voice, by a rarely used key, by a certain pattern of an acceleration sensor, by an optical signal, or by another control signal. In the case of the conventional mechanical keyboard, and if the display 2 is not a touch screen, adding a dedicated key on the mechanical keyboard for the selection of a recognized text is rather difficult. Thus the selection could be controlled by another key event, such as, for instance, one generated by a voice command, by detection that in a repetitive capture sequence the recognized text stays the same for longer than a time limit, by an acceleration sensor pattern, or the like.
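The repetitive camera-mode flow of this kind (compare also claims 23 and 24 below) can be sketched as follows; the callback name and class structure are illustrative assumptions, and only the rule that newly recognized text differing from the previously output text is selected automatically comes from the description.

```python
class ContinuousCaptureOutput:
    """Sketch of the repetitive camera-mode flow: every newly recognized
    text that differs from what was already output is selected and
    passed on to the application automatically."""

    def __init__(self, output_to_application):
        self.output_to_application = output_to_application  # assumed callback
        self._already_output = None

    def on_frame_recognized(self, recognized_text: str):
        """Call for each captured and OCR-processed image."""
        if recognized_text and recognized_text != self._already_output:
            # The recognized text is new with respect to the previously
            # output text, so a selection control command is generated.
            self._already_output = recognized_text
            self.output_to_application(recognized_text)

if __name__ == "__main__":
    flow = ContinuousCaptureOutput(output_to_application=print)
    flow.on_frame_recognized("This text is on a sheet")  # output once
    flow.on_frame_recognized("This text is on a sheet")  # unchanged -> ignored
    flow.on_frame_recognized("Second line of text")      # output
```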
Where technical features mentioned in any claim are followed by reference signs, those reference signs have been included for the sole purpose of increasing intelligibility of the claims and accordingly, such reference signs do not have any limiting effect on the scope of each element identified by way of example by such reference signs.
REFERENCE NUMERALS
  • 1 mobile device
  • 2 display
  • 3 application window
  • 3 b text input field
  • 4 first field
  • 5 second field
  • 6 third field
  • 7 keyboard key
  • 8 camera mode key
  • 9 gesture
  • 10 hidden touch key
The teachings of all patents, published applications and references cited herein are incorporated by reference in their entirety.

While this invention has been particularly shown and described with reference to example embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the invention encompassed by the appended claims.

Claims (21)

1-15. (canceled)
16. A method for multimodal text input in a mobile device via a keyboard or in a camera mode by holding the camera of the mobile device over a written text, such that an image is taken of the written text and the written text is recognized, wherein the input text is output to an application requesting the input text, the method comprising the following steps:
a) activating a keyboard mode;
b) providing an A-Z-keyboard in a first field for text input;
c) activating the camera mode;
d) capturing the image of the written text and displaying the captured image with the written text in a second field of a display of the mobile device;
e) converting the captured image to character text by optical character recognition (OCR) and displaying the recognized character text on the display;
f) outputting a selected character as the input text to the application upon a selection of the character on the A-Z-keyboard, or
outputting a selected part of the recognized character text as the input text to the application upon a selection of the part of the recognized character text;
wherein the respective selection takes place by a single keypress or control command, or by a single gesture.
17. The method according to claim 16, wherein the A-Z-keyboard is provided as a touch screen keyboard in a first field of the display being a touch screen display.
18. The method according to claim 16, wherein the recognized character text is displayed on the display either in a separate third field or as an overlay over the text within the captured image displayed in the second field.
19. The method according to claim 16, wherein in step e), according to the recognized character text, one or more suggestion candidates are determined by an algorithm in connection with a database and displayed in one or more third fields (6) or as another overlay within the second field, wherein one or more of the candidates are selectable by a keypress event.
20. The method according to claim 16, wherein in step f) a keypress is for instance a mechanical keypress, a touch keypress or a swype gesture on a touch screen, on either a visible key, a hidden key, within one of the fields of the display, on the part of the recognized text or on another text on the display, for a certain selection.
21. The method according to claim 16, wherein a respective size of the first field, the second field and the third field, if available, is adapted such that a surrounding frame always occupies a same frame or field size on the display, as an original standard keyboard module on the mobile device.
22. The method according to claim 16, wherein in the activated camera mode the second and the one or more third fields, if available, are displayed adjacent to the keyboard.
23. The method according to claim 16, wherein the steps d)-f) are executed repetitively, wherein the respective latest recognized text in the part of the respective latest captured image is analyzed for new text with regard to a character text previously recognized and output to the application, whereupon the control command is generated for the selection of the new text as the part of the recognized character text, until a certain keypress is detected for ending the repetitive selection and outputting of the respective new text to the application.
24. The method according to claim 16, wherein in step f) the control command for the selection of the part of the recognized character text is generated automatically via a detection algorithm, wherein the detection algorithm recognizes whether the part of the recognized character text in a previous and in the current captured image is the same.
25. The method according to claim 16, wherein:
the keyboard mode is executed by a keyboard text input module; and
the camera mode is executed by a separate camera text input module;
wherein the execution of the keyboard text input module is independent from the execution of the camera text input module, but wherein the execution of the camera text input module is dependent on the execution of the keyboard text input module;
wherein the camera text input module is always active in the background to detect whether the keyboard text input module is activated; and
wherein if the keyboard text input module is detected to be activated or displayed, respectively, the camera mode is activated and at least the second field (5) and the recognized text are visible on the display (2) and selectable; and
else if the keyboard text input module is not detected anymore to be activated, the camera mode is also deactivated again.
26. A method for multimodal text input in a mobile device via a keyboard or in a camera mode by holding the camera of the mobile device over a written text, such that an image is taken of the written text and the written text is recognized, wherein the input text is output to an application requesting the input text, the method comprising:
activating a keyboard mode;
in the activated keyboard mode: providing an A-Z-keyboard in a first field for text input and a control key for a selection of the camera mode;
upon a selection of the camera mode: deactivating the keyboard mode and activating the camera mode and providing a control key for a selection of the keyboard mode;
in the camera mode: capturing the image of the written text and displaying the captured image with the written text in a second field of a display of the mobile device;
in the camera mode: converting the captured image to character text by optical character recognition (OCR) and displaying the recognized character text on the display;
outputting a selected character as the input text to the application upon a selection of the character on the A-Z-keyboard, or outputting a selected part of the recognized character text as the input text to the application upon a selection of the part of the recognized character text;
upon the selection of the keyboard mode: deactivating the camera mode and activating the keyboard mode again;
wherein all of the respective selections take place by a single keypress or control command, or by a single gesture.
27. A mobile device arranged to facilitate multimodal text input, the mobile device comprising:
a display configured to display text and image content;
a keyboard configured to receive text;
a camera having a camera mode; and
a processor in communication with the display, the keyboard, and the camera, the processor being configured to receive text input via the keyboard or the camera mode in response to detecting the camera of the mobile device being held over a written text, such that an image is taken of the written text and the written text is recognized, where the input text is output to an executing application requesting the input text;
the processor configured to communicate with the camera to capture the image of the written text;
the display configured to display the captured image with the written text in a second field of the display;
the processor configured to:
convert the captured image to character text by optical character recognition (OCR) and cause the recognized character text to be displayed on the display; and
output a selected character as the input text to the application upon a selection of the character on the keyboard, or output a selected part of the recognized character text as the input text to the application upon a selection of the part of the recognized character text, wherein the respective selection takes place by a single keypress or control command, or by a single gesture.
28. The mobile device as in claim 27, wherein the processor is configured to generate the keyboard on the display using a touch screen interface, the keyboard being compatible with one or more standard applications, as for instance a phone application or an internet search application, running on the mobile device and requiring the text input, and/or wherein the processor is adapted to display the first field, the second field and the third field, if available, occupying together always a same size on the display.
29. A mobile device according to claim 27, wherein the keyboard and the camera mode are both active at the same time.
30. A mobile device according to claim 29, wherein the processor is arranged to configure the keyboard as an A-Z touchscreen interface such that the A-Z touchscreen interface and the camera are active at the same time.
31. A mobile device according to claim 29, wherein the processor is arranged to configure the keyboard as a touch screen interface;
the processor being further configured to display the captured image in the second field and the recognized text, and to enable, by a single keypress or control command, or by a single gesture, both:
the selection of the part of the character text; and
the selection of the respective character key; and
the immediate outputting of the input text to the application.
32. A mobile device according to claim 31, wherein the single keypress or control command, or the single gesture, includes:
the selection of the part of the character text;
the selection of the respective character key; and
the immediate outputting of the input text to the processor.
33. A mobile device according to claim 27, wherein the keyboard comprises two sub-modules,
I) a first standard keyboard sub-module being invoked by the processor in response to a request for the text input, the first standard keyboard sub-module being dependent on a second sub-module; and
II) the second sub-module being a separate camera sub-module.
34. A mobile device according to claim 33, wherein the second sub-module is adapted to be always active in the background, detecting whether the first standard keyboard sub-module is activated or displayed, respectively.
35. A mobile device according to claim 33, wherein the processor is configured to respond to the first standard keyboard sub-module being activated or displayed by activating the camera mode, such that the second field is displayed adjacent to the keyboard; or
else to close or shut down, respectively, the camera mode in case the camera mode is still active, such that the camera mode is always and only activated and displayed as long as the first standard keyboard sub-module is activated by the application.
US14/328,309 2012-03-06 2014-07-10 Multimodal text input by a keyboard/camera text input module replacing a conventional keyboard text input module on a mobile device Abandoned US20140320413A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US14/328,309 US20140320413A1 (en) 2012-03-06 2014-07-10 Multimodal text input by a keyboard/camera text input module replacing a conventional keyboard text input module on a mobile device
US15/636,189 US10078376B2 (en) 2012-03-06 2017-06-28 Multimodel text input by a keyboard/camera text input module replacing a conventional keyboard text input module on a mobile device

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
EP12158195.3A EP2637128B1 (en) 2012-03-06 2012-03-06 Multimodal text input by a keyboard/camera text input module replacing a conventional keyboard text input module on a mobile device
EP12158195.3 2012-03-06
US13/786,321 US9811171B2 (en) 2012-03-06 2013-03-05 Multimodal text input by a keyboard/camera text input module replacing a conventional keyboard text input module on a mobile device
US14/328,309 US20140320413A1 (en) 2012-03-06 2014-07-10 Multimodal text input by a keyboard/camera text input module replacing a conventional keyboard text input module on a mobile device

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US13/786,321 Continuation US9811171B2 (en) 2012-03-06 2013-03-05 Multimodal text input by a keyboard/camera text input module replacing a conventional keyboard text input module on a mobile device

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/636,189 Continuation US10078376B2 (en) 2012-03-06 2017-06-28 Multimodel text input by a keyboard/camera text input module replacing a conventional keyboard text input module on a mobile device

Publications (1)

Publication Number Publication Date
US20140320413A1 true US20140320413A1 (en) 2014-10-30

Family

ID=45811342

Family Applications (3)

Application Number Title Priority Date Filing Date
US13/786,321 Expired - Fee Related US9811171B2 (en) 2012-03-06 2013-03-05 Multimodal text input by a keyboard/camera text input module replacing a conventional keyboard text input module on a mobile device
US14/328,309 Abandoned US20140320413A1 (en) 2012-03-06 2014-07-10 Multimodal text input by a keyboard/camera text input module replacing a conventional keyboard text input module on a mobile device
US15/636,189 Active US10078376B2 (en) 2012-03-06 2017-06-28 Multimodel text input by a keyboard/camera text input module replacing a conventional keyboard text input module on a mobile device

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US13/786,321 Expired - Fee Related US9811171B2 (en) 2012-03-06 2013-03-05 Multimodal text input by a keyboard/camera text input module replacing a conventional keyboard text input module on a mobile device

Family Applications After (1)

Application Number Title Priority Date Filing Date
US15/636,189 Active US10078376B2 (en) 2012-03-06 2017-06-28 Multimodel text input by a keyboard/camera text input module replacing a conventional keyboard text input module on a mobile device

Country Status (2)

Country Link
US (3) US9811171B2 (en)
EP (1) EP2637128B1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130234945A1 (en) * 2012-03-06 2013-09-12 Nuance Communications, Inc. Multimodal Text Input by a Keyboard/Camera Text Input Module Replacing a Conventional Keyboard Text Input Module on a Mobile Device
US20140317550A1 (en) * 2013-04-22 2014-10-23 Konica Minolta, Inc. Information processing apparatus accepting inputs through input screen
US20150049023A1 (en) * 2013-08-16 2015-02-19 Omnivision Technologies, Inc. Keyboard Camera Device
US9589198B2 (en) 2010-04-30 2017-03-07 Nuance Communications, Inc. Camera based method for text input and keyword detection
US9826108B2 (en) 2015-08-10 2017-11-21 Red Hat, Inc. Mobile device camera display projection

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9652147B2 (en) * 2008-04-15 2017-05-16 HTC Corporation Method and apparatus for shifting software input panel and recording medium thereof
US9275257B2 (en) * 2012-10-16 2016-03-01 Truedata Systems, Inc. Secure communication architecture
JP6317772B2 (en) 2013-03-15 2018-04-25 トランスレート アブロード,インコーポレイテッド System and method for real-time display of foreign language character sets and their translations on resource-constrained mobile devices
US8965129B2 (en) 2013-03-15 2015-02-24 Translate Abroad, Inc. Systems and methods for determining and displaying multi-line foreign language translations in real time on mobile devices
US9179061B1 (en) 2013-12-11 2015-11-03 A9.Com, Inc. Assisted text input for computing devices
US10845982B2 (en) * 2014-04-28 2020-11-24 Facebook, Inc. Providing intelligent transcriptions of sound messages in a messaging application
CN104461304A (en) * 2014-12-31 2015-03-25 小米科技有限责任公司 Application control method and device
USD749115S1 (en) 2015-02-20 2016-02-09 Translate Abroad, Inc. Mobile device with graphical user interface
US10963651B2 (en) 2015-06-05 2021-03-30 International Business Machines Corporation Reformatting of context sensitive data
KR20170022490A (en) * 2015-08-20 2017-03-02 엘지전자 주식회사 Mobile terminal and method for controlling the same
US10489768B2 (en) * 2015-12-30 2019-11-26 Visa International Service Association Keyboard application with third party engagement selectable items
WO2019045441A1 (en) * 2017-08-29 2019-03-07 Samsung Electronics Co., Ltd. Method for providing cognitive semiotics based multimodal predictions and electronic device thereof
DE102017008750A1 (en) 2017-09-19 2019-03-21 Dräger Safety AG & Co. KGaA Devices, substance measuring device, method and computer program for unambiguously assigning a measurement of a substance measuring device to user-specific data of a subject of the subtance measuring device
KR102567003B1 (en) * 2018-05-08 2023-08-16 삼성전자주식회사 Electronic device and operating method for the same
US11169668B2 (en) * 2018-05-16 2021-11-09 Google Llc Selecting an input mode for a virtual assistant
CN109032380B (en) * 2018-08-01 2021-04-23 维沃移动通信有限公司 Character input method and terminal
US11144715B2 (en) * 2018-11-29 2021-10-12 ProntoForms Inc. Efficient data entry system for electronic forms
US11615789B2 (en) * 2019-09-19 2023-03-28 Honeywell International Inc. Systems and methods to verify values input via optical character recognition and speech recognition
US11128636B1 (en) 2020-05-13 2021-09-21 Science House LLC Systems, methods, and apparatus for enhanced headsets
CN113220919B (en) * 2021-05-17 2022-04-22 河海大学 Dam defect image text cross-modal retrieval method and model


Family Cites Families (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2077274C (en) 1991-11-19 1997-07-15 M. Margaret Withgott Method and apparatus for summarizing a document without document image decoding
US5384863A (en) 1991-11-19 1995-01-24 Xerox Corporation Methods and apparatus for automatic modification of semantically significant portions of a document without document image decoding
US5387863A (en) 1992-04-14 1995-02-07 Hughes Aircraft Company Synthetic aperture array dipole moment detector and localizer
DE69430967T2 (en) 1993-04-30 2002-11-07 Xerox Corp Interactive copying system
JP3220885B2 (en) 1993-06-18 2001-10-22 株式会社日立製作所 Keyword assignment system
US5649222A (en) 1995-05-08 1997-07-15 Microsoft Corporation Method for background spell checking a word processing document
US7030863B2 (en) 2000-05-26 2006-04-18 America Online, Incorporated Virtual keyboard system with automatic correction
US20040189804A1 (en) 2000-02-16 2004-09-30 Borden George R. Method of selecting targets and generating feedback in object tracking systems
AU2001293001A1 (en) 2000-09-22 2002-04-02 Sri International Method and apparatus for portably recognizing text in an image sequence of scene imagery
US7069506B2 (en) 2001-08-08 2006-06-27 Xerox Corporation Methods and systems for generating enhanced thumbnails
US7221796B2 (en) * 2002-03-08 2007-05-22 Nec Corporation Character input device, character input method and character input program
US8873890B2 (en) 2004-04-02 2014-10-28 K-Nfb Reading Technology, Inc. Image resizing for optical character recognition in portable reading machine
WO2005101193A2 (en) 2004-04-06 2005-10-27 King Martin T Scanning apparatus and related techniques
US7558595B2 (en) 2004-06-25 2009-07-07 Sony Ericsson Mobile Communications Ab Mobile terminals, methods, and program products that generate communication information based on characters recognized in image data
US7450960B2 (en) 2004-10-07 2008-11-11 Chen Alexander C System, method and mobile unit to sense objects or text and retrieve related information
US7447362B2 (en) 2004-11-08 2008-11-04 Dspv, Ltd. System and method of enabling a cellular/wireless device with imaging capabilities to decode printed alphanumeric characters
US7330608B2 (en) 2004-12-22 2008-02-12 Ricoh Co., Ltd. Semantic document smartnails
JP2006303651A (en) 2005-04-15 2006-11-02 Nokia Corp Electronic device
US7539343B2 (en) 2005-08-24 2009-05-26 Hewlett-Packard Development Company, L.P. Classifying regions defined within a digital image
US20070106468A1 (en) 2005-11-07 2007-05-10 France Telecom Product, service and activity based interactive trip mapping system, method, and computer program product
EP1979858A1 (en) 2006-01-17 2008-10-15 Motto S.A. Mobile unit with camera and optical character recognition, optionally for conversion of imaged text into comprehensible speech
US8098934B2 (en) 2006-06-29 2012-01-17 Google Inc. Using extracted image text
US7787693B2 (en) 2006-11-20 2010-08-31 Microsoft Corporation Text detection on mobile communications devices
US20080207254A1 (en) * 2007-02-27 2008-08-28 Pierce Paul M Multimodal Adaptive User Interface for a Portable Electronic Device
US8204896B2 (en) 2008-01-08 2012-06-19 Kabushiki Kaisha Toshiba Image processing apparatus and image processing method
US8606795B2 (en) 2008-07-01 2013-12-10 Xerox Corporation Frequency based keyword extraction method and system using a statistical measure
US8086275B2 (en) * 2008-10-23 2011-12-27 Microsoft Corporation Alternative inputs of a mobile communications device
US20100123676A1 (en) * 2008-11-17 2010-05-20 Kevin Scott Kirkup Dual input keypad for a portable electronic device
CN101882005B (en) * 2008-11-21 2013-10-16 杭州惠道科技有限公司 Keyboard for small-size electronic equipment
EP2189926B1 (en) 2008-11-21 2012-09-19 beyo GmbH Method for providing camera-based services using a portable communication device of a user and portable communication device of a user
US8331677B2 (en) 2009-01-08 2012-12-11 Microsoft Corporation Combined image and text document
US8208737B1 (en) 2009-04-17 2012-06-26 Google Inc. Methods and systems for identifying captions in media material
US20120131520A1 (en) 2009-05-14 2012-05-24 Tang ding-yuan Gesture-based Text Identification and Selection in Images
US20100293460A1 (en) 2009-05-14 2010-11-18 Budelli Joe G Text selection method and system based on gestures
US8374646B2 (en) 2009-10-05 2013-02-12 Sony Corporation Mobile device visual input system and methods
US8805079B2 (en) 2009-12-02 2014-08-12 Google Inc. Identifying matching canonical documents in response to a visual query and in accordance with geographic information
EP2333695B1 (en) 2009-12-10 2017-08-02 beyo GmbH Method for optimized camera position finding for systems with optical character recognition
US8577146B2 (en) 2010-04-09 2013-11-05 Sony Corporation Methods and devices that use an image-captured pointer for selecting a portion of a captured image
EP2410465A1 (en) 2010-07-21 2012-01-25 beyo GmbH Camera based method for mobile communication devices for text detection, recognition and further processing
WO2013063787A1 (en) * 2011-11-03 2013-05-10 Intel Corporation System and method for input sharing between multiple devices
US8713433B1 (en) 2012-10-16 2014-04-29 Google Inc. Feature-based autocorrection

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050052558A1 (en) * 2003-09-09 2005-03-10 Hitachi, Ltd. Information processing apparatus, information processing method and software product
US20050205671A1 (en) * 2004-02-13 2005-09-22 Tito Gelsomini Cellular phone with scanning capability
US20100128994A1 (en) * 2008-11-24 2010-05-27 Jan Scott Zwolinski Personal dictionary and translator device
US20100131900A1 (en) * 2008-11-25 2010-05-27 Spetalnick Jeffrey R Methods and Systems for Improved Data Input, Compression, Recognition, Correction, and Translation through Frequency-Based Language Analysis
US20100289757A1 (en) * 2009-05-14 2010-11-18 Budelli Joey G Scanner with gesture-based text selection capability
US20110202836A1 (en) * 2010-02-12 2011-08-18 Microsoft Corporation Typing assistance for editing
US20110267490A1 (en) * 2010-04-30 2011-11-03 Beyo Gmbh Camera based method for text input and keyword detection
US20120296646A1 (en) * 2011-05-17 2012-11-22 Microsoft Corporation Multi-mode text input
US20130113943A1 (en) * 2011-08-05 2013-05-09 Research In Motion Limited System and Method for Searching for Text and Displaying Found Text in Augmented Reality
US20130234949A1 (en) * 2012-03-06 2013-09-12 Todd E. Chornenky On-Screen Diagonal Keyboard
US20130234945A1 (en) * 2012-03-06 2013-09-12 Nuance Communications, Inc. Multimodal Text Input by a Keyboard/Camera Text Input Module Replacing a Conventional Keyboard Text Input Module on a Mobile Device

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9589198B2 (en) 2010-04-30 2017-03-07 Nuance Communications, Inc. Camera based method for text input and keyword detection
US20130234945A1 (en) * 2012-03-06 2013-09-12 Nuance Communications, Inc. Multimodal Text Input by a Keyboard/Camera Text Input Module Replacing a Conventional Keyboard Text Input Module on a Mobile Device
US9811171B2 (en) * 2012-03-06 2017-11-07 Nuance Communications, Inc. Multimodal text input by a keyboard/camera text input module replacing a conventional keyboard text input module on a mobile device
US10078376B2 (en) 2012-03-06 2018-09-18 Cüneyt Göktekin Multimodel text input by a keyboard/camera text input module replacing a conventional keyboard text input module on a mobile device
US20140317550A1 (en) * 2013-04-22 2014-10-23 Konica Minolta, Inc. Information processing apparatus accepting inputs through input screen
US10394445B2 (en) * 2013-04-22 2019-08-27 Konica Minolta, Inc. Text field input selection based on selecting a key on a graphically displayed keyboard
US20150049023A1 (en) * 2013-08-16 2015-02-19 Omnivision Technologies, Inc. Keyboard Camera Device
US9826108B2 (en) 2015-08-10 2017-11-21 Red Hat, Inc. Mobile device camera display projection

Also Published As

Publication number Publication date
EP2637128B1 (en) 2018-01-17
US20130234945A1 (en) 2013-09-12
EP2637128A1 (en) 2013-09-11
US20170300128A1 (en) 2017-10-19
US9811171B2 (en) 2017-11-07
US10078376B2 (en) 2018-09-18

Similar Documents

Publication Publication Date Title
US10078376B2 (en) Multimodel text input by a keyboard/camera text input module replacing a conventional keyboard text input module on a mobile device
US9519641B2 (en) Photography recognition translation
US9087046B2 (en) Swiping action for displaying a translation of a textual image
KR102147935B1 (en) Method for processing data and an electronic device thereof
USRE46139E1 (en) Language input interface on a device
US9507519B2 (en) Methods and apparatus for dynamically adapting a virtual keyboard
US8626236B2 (en) System and method for displaying text in augmented reality
US9104306B2 (en) Translation of directional input to gesture
EP2704061A2 (en) Apparatus and method for recognizing a character in terminal equipment
US20150161246A1 (en) Letter inputting method, system and device
US9251428B2 (en) Entering information through an OCR-enabled viewfinder
US8775969B2 (en) Contact searching method and apparatus, and applied mobile terminal
US9335965B2 (en) System and method for excerpt creation by designating a text segment using speech
JP2009205579A (en) Speech translation device and program
CN108829686B (en) Translation information display method, device, equipment and storage medium
US20090225034A1 (en) Japanese-Language Virtual Keyboard
JP2014049140A (en) Method and apparatus for providing intelligent service using input characters in user device
KR101626109B1 (en) apparatus for translation and method thereof
CA2754488A1 (en) System and method for displaying text in augmented reality
KR101600085B1 (en) Mobile terminal and recognition method of image information
CN105320651A (en) Human-machine interactive translation method and device
US20120278751A1 (en) Input method and input module thereof
JP2009271800A (en) Character display
JP2019053461A (en) Image processing apparatus, program and image data
CN115878829A (en) Input method, input device and input device

Legal Events

Date Code Title Description
AS Assignment

Owner name: GOEKTEKIN, CUENEYT, GERMANY

Free format text: DECLARATION OF OWNERSHIP (SEE DOCUMENT FOR DETAILS;ASSIGNOR:GOEKTEKIN, CUENEYT;REEL/FRAME:033318/0921

Effective date: 20120402

Owner name: GOEKTEKIN, CUENEYT, GERMANY

Free format text: DECLARATION OF OWNERSHIP;ASSIGNOR:GOEKTEKIN, CUENEYT;REEL/FRAME:033318/0932

Effective date: 20140228

Owner name: GOEKTEKIN, CUENEYT, GERMANY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:GOEKTEKIN, CUENEYT;REEL/FRAME:033318/0932

Effective date: 20140228

Owner name: GOEKTEKIN, CUENEYT, GERMANY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:GOEKTEKIN, CUENEYT;REEL/FRAME:033318/0921

Effective date: 20120402

AS Assignment

Owner name: NUANCE COMMUNICATIONS, INC., MASSACHUSETTS

Free format text: NOTICE;ASSIGNOR:NUANCE COMMUNICATIONS, INC.;REEL/FRAME:034638/0866

Effective date: 20141215

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION